A new holographic shader developed by members of the SNR Labs group on VRChat completely blew me away in how it was used in a couple of performances at Raindance Immersive. It's like a new form of volumetric capture that also has lighting effects, and it can be used to translate 2D images into 3D or to capture low-res 3D objects as a sort of hybrid of point cloud, particle effect, and 3D object. It's called Apple Global Illumination after its primary author Apple_Blossom, who designed it with A://DDOS along with other collaborating artists including VJ Silent, VJ Namoron, and immersive dancer SoftlySteph. SoftlySteph's Frictions of a Modulated Soul took home the award for best dance performance at Raindance Immersive (and will be featured more in depth in the next episode), and Night Under Lights: The Seasons at Moon Pool was one of the more awe-inspiring uses of this shader and a personal favorite of mine from this year's festival. See the VRChat replay of the Moon Pool here; I'll be covering this two episodes from now.
There’s a particle screen variation of the shader that is able to translate 2D VJ screens into more volumetric experiences based upon the luminance value of each pixel, which can be seen in this video below:
Then there's a version that can encode 3D objects into a video that is streamed into VRChat and then decoded into a dynamic hybrid between a particle effect, point cloud representation, and 3D model. Each of the particles emits light that is reflected off the surrounding environment and avatars. Here's a clip of SoftlySteph's performance as captured by Madame Kana:
Their SNR Labs Test Facility was also selected to be a part of the Venice Immersive Worlds gallery that opens next week. Here’s a video overview:
I had a chance to sit down with Apple_Blossom and A://DDOS to learn more about how this breakthrough shader was developed, as well as to dive a bit into the weeds of the mechanics of how it works. Apple Global Illumination feels like a real revelation and breakthrough in how it's opening up new avenues for VJs and dancers to explore new forms of creative expression, with holographic and volumetric effects paired with DJ sets across different clubbing venues in VRChat.
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Rough Transcript
[00:00:05.458] Kent Bye: The Voices of VR podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR podcast. It's a podcast that looks at the future of spatial computing. You can support the podcast at patreon.com slash voicesofvr. So one of the big trends within VRChat is the whole clubbing scene and the different array of venues with really vast architectures that are tuning into different subgenres of music, of DJs, but there's also a whole aspect of the VJs, or the video jockeys, who are showing either 2D representations that they're able to do on their computer or more and more moving into either 3D spatial shaders that they're creating or, in some cases, starting to use completely new systems that allow this kind of hybrid approach where they're able to use their existing 2D workflow and then translate that into these volumetric experiences. And that's a little bit about what we're going to be covering today with a new shader system that's called Apple Global Illumination. So Apple Blossom and A://DDOS are part of this SNR Labs and have these weekly meetups with Modular Mondays where people are doing performances with physical synthesizers. Then they started to experiment with having more and more volumetric expressions for how VJs could come in and start to translate what they're doing already in programs like Notch and TouchDesigner and input a 2D signal and have that translated into a 3D object. And then from there, they added lighting effects. And then there's a whole other system that is able to take 3D objects within TouchDesigner and then do this trickery where they take these six cameras, they put it into a video, they stream the video into the VRChat world, and that video is decoded back into a 3D object that is this volumetric point cloud animation. That was the basis of a piece called Frictions of a Modulated Soul. And the other screen-based approach was the basis of an experience called Night Under the Lights, The Seasons at Moonpool. And both of these experiences were completely and utterly mind-blowing, not only because the scale of these particle effects and animations was unlike anything else that I've seen in VR, especially doing these things in real time, but also there's lighting effects on all this stuff too. So it's able to impact the world around you and just really reminds me of the light and space movement from the 1960s and this whole new iteration of what's happening in these dance club scenes and how they're translating all these things into new forms of expression with volumetric holograms and translating these 2D abstractions into these more volumetric experiences in a way that is just completely transfixing and awe-inspiring. Oh, and before we dive in, it is worth noting that there is a replayable version of Night Under Lights at the Moon Pool that I'll put a link to in the description. There's also a video on YouTube if you prefer to just see what we're talking about. I think it'd be really helpful just to get a little bit of a sense of what we're dealing with here. And if you're going to be at Venice, there's also going to be an SNR Labs Test Facility, which is the place where some of this hologram technology was originally being developed, and that's going to be showing there at Venice. Not sure if that's going to be available afterwards with some replayable versions as well, but... Keep an eye on all that.
And there's going to be a replayable version at some point for the frictions of a modulated soul by softly stuff, but that's not quite finished yet and hasn't been released. So keep an eye out for that because you'll be able to eventually see that as well. So we're coming all that and more on today's episode of the voices of your podcast. So this interview with Apple_Blossom and A://DDOS happened on Wednesday, July 3rd, 2024. So with that, let's go ahead and dive right in.
[00:03:36.050] Apple_Blossom: My name's Apple Blossom, aka Erin, and I'm one third of SNR Labs, a very experimental space where we work on weird niche projects with lots of coding involved.
[00:03:51.785] A://DDOS: And I am Adidas, or A://DDOS, or however you want to pronounce it. I don't correct anybody. It's all fine. And I'm kind of the lead for SNR Labs. And then we also work with another friend of ours, Namoron, who does a lot of visuals with us.
[00:04:07.072] Kent Bye: Great. Maybe you can each give a bit more context as to your background and your journey into VR.
[00:04:12.927] Apple_Blossom: Sure. I joined VRChat like a lot of other people during COVID. Going from lots of friends and lots of social interaction to not a lot was a big shift, and it was nice to be able to find a place where I could actually talk to people still, and it felt like face-to-face conversations. That kind of led me down the VR rabbit hole, and then we ended up here.
[00:04:38.516] A://DDOS: I started doing VR back in the early days of Oculus when they were kind of touring with a prototype. I tried a prototype development kit at PAX East back in, I don't even know, 2015 maybe or 2014, really early on, and just kind of was absorbed with it. And then as soon as the DK2s came out, I picked up one, you know, tried out whatever there was to try out at the time, which was pretty limited. And then bought the CV1 when it first came out. And then I had fun with that for a bit. And then at the time I was in the Navy, the US Navy, and I was deployed a lot. And so eventually I came back and kind of forgot about it after not touching VR for a really long time. And then COVID came around and just like Apple said, I picked up the headset again to start socializing in the midst of COVID and kind of somehow found the iceberg that is the creative scene in VRChat where there's live music events and stuff like that.
[00:05:42.694] Kent Bye: Nice. Well, the catalyst for this conversation was a performance, actually two performances, that happened at Raindance. One was SoftlySteph's Frictions of a Modulated Soul, which I happened to be on the jury for, for Best Dance. And it completely blew me away in terms of just a really powerful, emotional dance performance that was really using a lot of the spatialized technologies that you've helped to develop to have additional spatial elements. It's almost like a new volumetric capture technique, but using the constraints of Unity and VRChat to find new ways of expression, both for creative expression for dance, but also for VJs with this more screen grid effect at the Moon Pool location, where you're able to do these kinds of spatialized shader effects. And so maybe if we go back to the very beginning, as you start to enter into VRChat, obviously you're seeing what's happening with these different forms of creative expression. And it seems like each of you are adept at being able to navigate how to do the different types of programming and bend these programs to your will to have these new forms of creative expression. But what was it that you saw in VRChat that took you from, hey, this is a cool place to be able to talk to people to, hey, there's some really cool things that I can do in this world to actually modulate people's experiences?
[00:07:03.800] Apple_Blossom: Unity Engine. I've been, well, for me, I've been developing or doing some form of coding relative to games for about 15 years now. And probably a solid 10 years of that has been on and off on Unity. So when I got into this game and I found out that they did use the Unity engine and that making avatars and worlds let you access different parts of the Unity engine to various extents, it's gotten a lot more open compared to where it was two years ago. But when I found that out, I was like, oh, all of my knowledge is going to transfer over. And then I started working on avatars and I realized, no, it wasn't, because a lot of the stuff is proprietary. But UdonSharp and the shader code are the same as regular Unity for the most part. So it's a pretty smooth transition to get into developing for VRChat. So combine that with the existing player base and net code that I don't have to write, thank God. That's how we ended up here. And then shout out to Modular Monday for the real intro and not being nervous.
[00:08:15.922] A://DDOS: I think my background and getting into the development side of this stuff, I'm not a programmer. I'm a systems engineer by trade. That's like my day job. And so I know about coding. I know limitations with coding. And I know how to communicate how to write code. But I am not a programmer. I couldn't write in any languages to save my life. But I got into the music scene. That's what got me into kind of the creative side of VRChat. And for me, it was the first, I think it was probably the first Shelter event I ever went to back when Shelter was first kind of kicking off a few years ago. And after that, I was hooked. I mean, it was every weekend, all night, I'm traveling between these different venues in VRChat and meeting people and experiencing this new thing that I never thought could be a thing, which is like this weird... artistic club life where people are doing some really creative things, not only with music, but also with visuals. And over time, you just kind of meet other people that are those creative people that are doing those things. And you get to talking about how they do it, and then they just kind of show you how they do it. And that's where that seed started for me. I have a friend named Shaggy who I think wanted Apple to start doing visuals. And I had met Shaggy through some live hardware stuff that we had found each other on Twitter doing. And we didn't really have a scene inside of VRChat. And one day, there's this group, a couple of people that are starting to do live hardware music stuff. So we reached out to them. And eventually, it came to be this group called Modular Monday. And they meet every Monday and do live hardware music. But also, we started experimenting with live visuals using TouchDesigner. So I started learning TouchDesigner to complement the hardware music stuff that I was creating with this group of people. And through that, Apple was VJing as well. I mean, we kind of learned TouchDesigner at the same time. And so she got back into programming and she can talk more about that, but she got into like shader programming through that avenue with TouchDesigner.
[00:10:27.875] Apple_Blossom: Shaggy yelling at me to learn VJing, and I was like, sure, whatever. Grab TouchDesigner, see that I can code shaders in there, and decide it's finally time to learn how these magic GPU terms work.
[00:10:39.684] Kent Bye: Is that a node-based system that you're doing the shader programming, or is it actually just writing the pure code? You could do both.
[00:10:47.239] Apple_Blossom: In TouchDesigner, it's node-based, but there is an option to have a node take in GLSL code straight, and then you can do a lot of fun stuff with that. And then in Unity, until the most recent version change, well, sorry, two version changes ago, when we switched over to 2022 in VRChat, that's when they added Shader Graph, which is a node-based system that lets you write shaders just by connecting the nodes. And then it actually cross-compiles back into, I think, HLSL. And then you can actually copy-paste the code from there. You can go into the nodes, yank the code off of that. But I handwrite all the shaders that I use in VRChat because it's either really being familiar with writing code line by line or masochism. It's hard to tell. But it's what I enjoy and it's how I enjoy doing it.
[00:11:39.079] Kent Bye: Okay, well, in talking to Silent, it sounds like that there was actually like a particle shader that was on Booth that was part of the inspiration for some of the systems that you started to build, which would essentially take like a video input and then start to offset some of those pixel values to create like a 3D grid. But maybe you could take me back to what was some of the original inspiration to start to develop this kind of hologram system, but also this particle screen system that you've developed.
[00:12:07.762] Apple_Blossom: Yeah, so the particle screen came first, and that was when, A://DDOS, you want to tell how that came to be?
[00:12:15.004] A://DDOS: Yeah. Yeah, so like I said, I was jumping around between events for a long time, and there was kind of a test event that another one of our friends, 10.exe, was at, and she was hosting on these... particle screens that basically have these points of particles that push away from a surface. And they just look really cool because they push away based off of the luminosity of the pixel. So imagine sending like a video stream across Twitch. And that's kind of how we ingest the music and the video inside of the VRChat scene. And the shader would look at the pixels and figure out the luminosity value for those and then push the pixels away from the screen based off of that luminosity value. And you get a really cool 3D effect with the screen. And after this event, and Namoron was actually there VJing, and I was talking to Namoron about a way to maybe harness this and to do something creative. And that kind of led us down this path of buying it off Booth and checking it out. And again, I'm not a programmer, so I can't look into the shader code and really tell what's going on. But I realized quickly that it was a little limited, and you could only use the mesh that came with this shader. And you could not apply it to any mesh to have any weird UV projection that you might want. So let's say like a curved pillar or a sphere, you have to only use the screens that it came with. And so Apple's kind of my magic eight ball. And I ask her all these hypothetical programmer questions. And I went to her and said, hey, wouldn't it be cool, or is it possible, if we could make something like this that we can apply to any mesh?
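To make that mechanic a bit more concrete, here is a minimal conceptual sketch in Python/NumPy rather than the actual Booth shader or SNR Labs code; the Rec. 709 luminance weights and the `max_offset` parameter are illustrative assumptions. One particle per pixel of the incoming frame, pushed off the screen plane by that pixel's luminance:

```python
import numpy as np

def displace_particles(frame_rgb, max_offset=1.0):
    """Push one particle per pixel away from a flat screen,
    proportional to that pixel's luminance (Rec. 709 weights assumed).
    frame_rgb: (H, W, 3) array of floats in [0, 1]."""
    h, w, _ = frame_rgb.shape
    # Per-pixel luminance drives how far the particle leaves the plane.
    luma = frame_rgb @ np.array([0.2126, 0.7152, 0.0722])
    # Particle grid laid out on the screen plane (x, y), pushed along z.
    ys, xs = np.mgrid[0:h, 0:w]
    positions = np.stack([xs / w, ys / h, luma * max_offset], axis=-1)
    colors = frame_rgb
    return positions, colors

# Example: a random frame as a stand-in for one frame of the video stream.
frame = np.random.rand(360, 640, 3)
pos, col = displace_particles(frame, max_offset=0.5)
print(pos.shape)  # (360, 640, 3): one displaced particle per pixel
```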
[00:14:02.766] Apple_Blossom: And at this point, I had just started learning how to do shaders in TouchDesigner, not even Unity. I had done a lot of C# before. I had not touched shaders yet. So this was like, you're coming to someone who is just brand new at this thing, and we're jumping into the deep end.
[00:14:23.311] A://DDOS: Right, yeah. And I know she's been working on the shader stuff for TouchDesigner and some of the visuals she's been making. So, you know, I asked her the question, is this possible? And she said, yeah, probably. And I went to bed that night and then I woke up with my inbox full of example videos of it working. She had written some code to do her own particle screen.
[00:14:46.936] Apple_Blossom: The other one was written in a very specific way where it wouldn't work for any of the stuff we wanted, so I had to write the entire thing from scratch. I went out, found old GitHub repos that were about point cloud rendering and particle rendering techniques, and I found Unity's old GPU particle rendering code. I amalgamated a bunch of those together to make the first and worst version of the shaders that would eventually become the Moon Pool and the hologram. And so once I had that prototype working, I started messing around with all different shapes. I would just like make whatever shape, whatever mesh, and then slap it in. I had spheres, cubes, cylinder shapes. I had an AK-47 in the test world for a while, just as a really complex test model. And of course, a teapot. Got to have the Utah teapot. But then A://DDOS notices one of the cubes that I had set up.
[00:15:42.342] A://DDOS: Yeah, I had actually been experimenting with some 3D visuals in TouchDesigner at the time. And I'd started thinking of, well, is there a way that we could use these particles to kind of change or adjust values on the video stream to send some type of data to be interpreted on the VRChat side that would be like depth data, or a way to basically render 3D particle objects, almost like a point cloud, over a video stream? And I was able to generate the particle screens around a cube, but I couldn't actually accurately place the particles in a 3D position. We could flip the UVs on the mesh, push the particles inwards, but you'd have planes colliding and things like that that didn't really make sense. So I knew what I wanted to do. I knew what my goal was at this point, which was to create some type of way of sending basically point cloud data over a stream. But I couldn't quite figure out how to do it yet. I had to sit on it for a couple months. But I had a eureka moment one day at work where I wrote it on a napkin and I brought it back to Apple. And again, you know, she's my... mystic eight ball and I shake her and she gives me an answer. And she didn't know what I was asking at first. And I said, look, this is what I want you to do. And it was basically we're converting color data from RGB to HSV, which are just different ways of calculating color space. And instead of sending a saturation value, we'll send a depth value. So what we're doing is we're encoding the pixel data with the depth value for each pixel. And then on the shader side, we're just interpreting that depth value. And we place this on a cube that pushes these particles inwards. And those particles will be positioned at the point where they intersect with the camera setup we have in TouchDesigner. So the first test build was only in TouchDesigner. We had this camera setup where there's six cameras looking at a center point, collecting this depth data and the color information, and... Terribly inefficient, just like... Yeah, it wasn't very optimized, but it was just for testing. And we UV mapped a cube to reflect these camera positions. So just six cameras facing a center point. And then we had a cube in the VRChat world that would get that texture data, and her shader would interpret that depth data and then generate the hologram visuals.
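As a rough sketch of the encoding idea described here, and not the exact channel packing SNR Labs settled on: each of the six camera views writes its pixels as HSV with the saturation slot carrying depth instead, and the decoder in the world reads that slot back out. In Python, with the standard-library `colorsys` standing in for the shader math, and the full-saturation assumption on decode being purely illustrative:

```python
import colorsys

def encode_pixel(rgb, depth):
    """Pack a depth value in [0, 1] into the saturation slot of an
    HSV-converted pixel, as described in the interview. The exact
    packing SNR Labs uses may differ; this shows the general idea."""
    h, _, v = colorsys.rgb_to_hsv(*rgb)
    # Saturation is sacrificed to carry depth instead.
    return colorsys.hsv_to_rgb(h, depth, v)

def decode_pixel(encoded_rgb):
    """Recover (approximate color, depth) on the receiving side."""
    h, s, v = colorsys.rgb_to_hsv(*encoded_rgb)
    depth = s                                    # the smuggled depth value
    approx_rgb = colorsys.hsv_to_rgb(h, 1.0, v)  # assume full saturation (illustrative)
    return approx_rgb, depth

encoded = encode_pixel((0.9, 0.2, 0.4), depth=0.35)
color, depth = decode_pixel(encoded)
print(round(depth, 2))  # 0.35 (before video compression adds its own noise)
```

In the real system this would run per pixel across the six camera views tiled into one streamed frame, with the decoded depth pushing each cube-face particle inward toward the capture point.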
[00:18:12.253] Kent Bye: So just as a metaphor to help explain in my mind what's happening here, what you're doing is that you're creating a video that has six views of a shape. And then so you can imagine a cube with six sides and that you're a sculptor and that you're basically using one of the channels of the hue to be able to send over information for how much you want to chip down and create a 3D shape. So you're kind of like... a metaphorical virtual sculptor taking a cube and then kind of shaping it into different objects. So you get the 3D space, whereas the screen, you're able to create like a grid effect that gives some space, but it doesn't give a full 3D object. But with the six sides of the cubes, as far as I understand, is that you're able to then have these six different views that are rendered out into the video and then decoded into a 3D object.
[00:19:03.666] Apple_Blossom: Yep, that's pretty much it.
[00:19:05.187] A://DDOS: That's a good explanation.
[00:19:06.638] Apple_Blossom: The sculpting analogy is really good, actually. I like that a lot. You can actually see the limitations of just having the six sides and not a proper volumetric texture, where during the Steph performances, if you look at the faces when they're all facing inward, there's only so many particles. And if you have two cubes aligned on an axis, the inner face of them will disappear. Well, during the Cage of Expectations, when she has the larger versions of herself dancing around the cage, her face disappears when she looks inward, just because that's being taken up by the hair of one of the mirror copies.
[00:19:43.508] Kent Bye: Right, because you can't have multiple layers. There's only one layer, is what you're saying. It can be occluded by other things, is what you're saying.
[00:19:52.105] Apple_Blossom: Exactly. They'll get occluded by other things being rendered in the particles. There's like an amount of complexity that we can have in any given shape, and we have to kind of design around it, but it's not an awful limitation because there's a lot of particles. And we also found that doing dither effects on the transparency at different levels for different views would make it so it would have seemingly multiple layers, but it's actually just every other pixel is a different part of that image.
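A tiny sketch of the "every other pixel is a different part of that image" idea, under the assumption that it amounts to checkerboard-interleaving two views into one buffer; the actual transparency dither in the shader may work differently:

```python
import numpy as np

def interleave_views(view_a, view_b):
    """Checkerboard two equally sized views into one buffer so that
    alternating pixels belong to alternating 'layers' (a rough stand-in
    for the transparency-dither trick described above)."""
    assert view_a.shape == view_b.shape
    h, w = view_a.shape[:2]
    mask = (np.indices((h, w)).sum(axis=0) % 2).astype(bool)  # checkerboard mask
    out = view_a.copy()
    out[mask] = view_b[mask]
    return out

a = np.zeros((4, 4, 3))   # 'layer' A: all black
b = np.ones((4, 4, 3))    # 'layer' B: all white
print(interleave_views(a, b)[..., 0])  # alternating 0s and 1s
```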
[00:20:21.550] Kent Bye: And so is the video resolution like 1280 by 720? And maybe you could say what the video resolution is overall, and how many pixels do you have for each face of the cube to be able to render out these 3D objects?
[00:20:35.868] Apple_Blossom: Yeah, so each face is 360 by 360, and that's laid out on a grid that's, yeah, 1280 by 720. And the reason we chose that is because of when we built the version for TouchDesigner that would actually do the capture. I had improved on the six-camera system, and now we have one that just does one. But the output of that, if you're on the free trial version, the maximum is 1280 by 720. And I don't want to make people pay for some expensive software just to mess around in our system. And neither does A://DDOS. And so we decided that we want it to be where everyone can actually use the software.
[00:21:15.722] A://DDOS: Yeah, that's a big thing with what we work on is we're trying to make whatever we do pretty accessible to anybody that's interested in it.
[00:21:23.328] Kent Bye: Okay, so default to 720p with each cube size 360 by 360. If you were to pay, could you theoretically get like 1080p or higher resolution?
[00:21:35.008] Apple_Blossom: Yeah, but there's only so much bandwidth you can expect the average user to have. So 720p is actually a really good compromise value for getting accurate pixel data coming through, because the way video compression works is it like groups up chunks of the screen as like single colors, and it looks at previous frames and sees what changed, so you can move blocks of pixels around instead of individual pixel values. Makes it so instead of having like gigabytes per second, it's just megabytes per second. But you lose a lot of accuracy that way. And because we're encoding the depth and the color the way we are, we need it to be very, very, very accurate, or as close as possible. So having a lower resolution at a higher bit rate will decrease the artifacting that you'll see on the edges of every object in the system that just comes from video compression. We could do 1440p, I think, was the max that someone had going. Yeah, we experimented with that. No one would be able to render it properly.
[00:22:38.876] Kent Bye: So is it at 30 frames per second, the video coming in?
[00:22:42.300] Apple_Blossom: Yes, same reason.
[00:22:44.181] Kent Bye: Okay. Okay. So yeah, I remember asking you at the Q&A, like, okay, why not just do JSON data? It seems like that would be faster, more efficient to just send over the raw data. But I guess there's two issues is that one, you can't really take in raw JSON data into VRChat. And two, with the video compression, it sounds like you actually have a little bit more efficiency when it comes to compressing the data and sending more data over through this video format rather than sending over like raw data.
[00:23:12.741] Apple_Blossom: Have you ever taken a picture or had a video file that was raw, uncompressed, just pure value data for those video inputs? If you take a picture and you have just every single pixel as the raw values, those are huge. For one hour, you could be in the dozens of gigabytes of data. And it would be a lot to send over. So that kind of cuts out us trying to bake in JSON data as a big file, because that would be the size of the world times five, unless it was static images, which I did agree with you on the day that that would have been a good idea. Or if we had positional data that we were interpolating between, that'd be a good use of JSON. And I actually did use JSON in a different project, because VRChat can use it. But the problem is, you can poll for a web request every X number of seconds. And I think it's 15, but don't quote me on that. Way slower than the video goes. So sending it over video stream has always been the ideal just because of the volume of information we can send over. H.264 and other video compression algorithms are magic, I swear. I don't understand them. They're magic and they get so much data in such a small space. It's wonderful.
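Some rough back-of-the-envelope arithmetic on why a compressed video stream wins over raw pixel data at the resolution and frame rate mentioned above; the assumed stream bitrate is my own illustrative number, not a measurement from SNR Labs:

```python
# Compare raw pixel data against a typical compressed stream
# at 1280x720 @ 30 fps (the settings discussed above).
width, height, fps = 1280, 720, 30
bytes_per_pixel = 3                      # 8-bit RGB, no alpha

raw_bytes_per_sec = width * height * bytes_per_pixel * fps
raw_gb_per_hour = raw_bytes_per_sec * 3600 / 1e9

typical_stream_mbps = 6                  # assumed Twitch-like 720p30 bitrate
compressed_gb_per_hour = typical_stream_mbps / 8 * 3600 / 1e3

print(f"raw:        ~{raw_bytes_per_sec / 1e6:.0f} MB/s, ~{raw_gb_per_hour:.0f} GB/hour")
print(f"compressed: ~{typical_stream_mbps / 8:.2f} MB/s, ~{compressed_gb_per_hour:.1f} GB/hour")
```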
[00:24:36.445] Kent Bye: Yeah, I think that's part of the innovation of using video as like an encoding format for this kind of spatial information is I think potentially opening up not only a lot of new creative exploration for VRChat, but potentially beyond, because like all these things we're mentioning, this could be just a level of efficiency that is proven out to be super useful for other contexts, because video compression and distribution is such a ubiquitous thing. You know, as I was talking to Silent and Nam and Ladybug, they were saying that this whole VR CDN system is allowing DJs to send over the audio to VJs and then the VJs to send that back with the synced audio to do this kind of real-time audio reactive effect that can have these really impressive effects and experiences that then are translated from the video into the 3D spatial effects. So for me, when I saw these two pieces of Frictions of a Modulated Soul, as well as The Night Under Lights, The Seasons at the Moon Pool, both of these are innovative in a way that starts to have the type of experiences that I think go beyond what I've seen before, just because of all the different limitations of having that much data present that much real-time spatial information. So that's just a comment that I have. I don't know if you've seen any other equivalent thing or where you see this type of like really large scale amounts of data that are being translated into a spatial context. But as far as I can tell, this is kind of a new phase or new epoch of what's possible with this type of volumetric capture technique.
[00:26:09.648] A://DDOS: Yeah, I haven't seen a lot of stuff like this. I mean, there's DMX lighting that's all over video stream, but that doesn't really give as much information as we're sending, which is positional data. But yeah, no, I haven't seen anything like this. And this is something that we've been working on together for over a year now on both of these setups. And at least, go ahead.
[00:26:34.456] Apple_Blossom: Closest would probably be Shader Motion, where they sent the bone position data for the puppet through the video stream. That's really cool and impressive.
[00:26:42.842] A://DDOS: Oh, yes. Shader Motion is another good one to look up if you haven't seen that yet. But basically, they record bone positional data, and they can restream it over a video stream, and they encode a part of the video stream with that data. So these techniques do exist on a much smaller scale, though. I think they're more tailored for other purposes and controlling stuff in the world to kind of synchronize with live video feed. But the actual visual component of it, I think, is something that we've probably pioneered, I hope, maybe.
[00:27:12.131] Apple_Blossom: If we weren't the first ones to make it, we were likely the first ones to make it performant, because there is a lot of reading the shader compilation output, seeing what's going on in the internals, and trying to adjust things so that they run as fast as possible so people can actually render out this massive number of particles.
[00:27:35.251] A://DDOS: Oh, we haven't even accounted for lighting, because she also did all the lighting system too.
[00:27:38.972] Apple_Blossom: We didn't even touch that part. We'll get to it. But for context, this is 360 by 360 per side, six sides. That's 777,000 particles. And we had two of those running for Steph's performance.
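The arithmetic behind that particle count, using the six 360-by-360 faces per streamed frame described earlier:

```python
face = 360 * 360              # particles per cube face
per_hologram = 6 * face       # 777,600 -- the "777,000 particles" quoted above
both_systems = 2 * per_hologram
print(per_hologram, both_systems)   # 777600 1555200
```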
[00:27:54.896] Kent Bye: So two separate videos were having information that was put out. Is that what you're saying?
[00:27:59.549] Apple_Blossom: One of them was from the video stream. And the other, if you remember me talking about our TouchDesigner setup where we have the one camera that captures all six sides. Well, I just copy pasted that code over, converted it from GLSL to HLSL, and then slapped that in the world. And so there is a camera and a visible box that was constantly capturing Steph's avatar. And we had to do some funky stuff to make sure it would only capture Steph. I made a little masking texture in one of the Poiyomi global texture slots, and then it would check against those values for every single vertex that would try to render, which is really performant in the end, because it skipped the whole fragment step, which is nice. But anyway, that capture setup was copied into Unity. So your game would have to capture all six sides of Steph, then convert that over, do the mirroring steps on it, then render actual Steph and everyone else in the world along with the rest of the world, then capture the lighting, then render the lighting.
[00:29:04.052] Kent Bye: So my understanding was that there was some of it that was real-time and some of it was pre-recorded. So is that what you did? You recorded some of these things and then were playing them back? Yeah.
[00:29:14.079] A://DDOS: Yeah, so we had worked on the prerecorded portion, which is most of the, not, well, everything that's not a Steph visual or that is feeding off of Steph's avatar from the in-world camera was prerecorded. And that, I mean, we had a time crunch on a lot of this stuff. So I think we didn't get the world till four or five days before the first performance at Raindance. But we had test roles, or Nam and I were working on making these pre-recorded visuals for Steph. And we had a storyboard and stuff like that we were working on. So it's not like we couldn't do anything. But the actual visuals, I mean, we were adjusting those till the day of, day before maybe, the first Raindance performance. And then... The world and the scripts. Right, yeah, we were crunching. Yeah, so there's the video, pre-recorded video, and we would just send recorded video clips from the outputs from TouchDesigner straight to Steph. She made this compilation of all the visuals that we had timed out, and she would stream that live from her PC when she was performing. So she had full control over when that started. And then in-world, you couldn't see it, but I was in a rock above the stage. And I had these invisible tablets that had like a cue card for when I have to do certain things. And then a bunch of controls for changing scenes and changing values for the in-world capture stuff. And so with those together, we were able to kind of script everything, the story, and Steph would perform within the stage that was automated. She already knew where it was going to be, but there was the live element of how... I mean, she had me in her ear the whole time so we could communicate. We were in a Discord call during the whole performance, so she would tell me if she wanted something specific done to maybe the color or the trails, because, you know, there's like a trail effect that Apple had made for the in-world capture where there's trails of particles that kind of fade away behind stuff. Or, you know, whatever the controls were, that was all live. But there were predetermined points where there's values, you know, we had determined that needed to be changed at those specific timings of the video. So that's all stuff that we did in-world. Yeah, I think that pretty much covers most of the performance.
[00:31:27.930] Kent Bye: Okay. And just to give a bit more context for the listeners, since we're jumping around between the creation of this and what this creation that you've made is enabling within the context of VRChat, which is SoftlySteph's award-winning Frictions of a Modulated Soul, which was a dance performance that was exploring aspects of identity and feeling trapped and finding liberation through different expressions of identity. It's a really powerful piece, a really beautiful dance that had Steph's dance, and then there was mirroring stuff that was happening, and then she's reacting to the recorded portions. And so it had, like you said, 1.4 million particles that were floating around, and it was just an incredible performance that was not only innovating on the technology but using it to have this type of self-expression in a way that I just thought was incredibly powerful. Well, thank you. And so, okay, so that's Frictions of a Modulated Soul. So I want to go back, A://DDOS, to the point where you're writing this system on a napkin. You're trying to design the architecture of a system like this. Did you have in your mind that it would lead to the type of performances that SoftlySteph did with dancing, or was there a whole other separate thing, which was like the VJs coming in and having new ways of creating these spatialized real-time experiences? But what was the catalyst that made you want to make something like this?
[00:32:53.875] A://DDOS: That's a good question. Honestly, just the creative side of it. I think you could probably talk to any VJ. I mean, you've talked to three of them already that are kind of in our scene, but the ability to create something that has never existed before, that couldn't exist in reality, just not something that technologically we can do, is an enticing concept. It's something that I think is worth pursuing. And the hypotheticals of, wouldn't it be cool if we could make these 3D particle objects? Definitely. You don't know where it's going, but you know you want to do it. And that's kind of where we started. Sometimes we just do stuff for the sake of doing stuff and the sake of it just being artistic. And Apple and I both, we're not the best or most creative artists, but we'd love to try to work out problems to kind of enable these stories that could not be told any other way or have not been told a specific way before. And that's kind of the whole push for this type of system is the ability for anybody to come in and kind of storytell in a way that might be new or unique. And I know when Raindance first approached us, it wasn't for the hologram stuff. It was for the Night Under Lights stuff. They approached Arby about that project. And so we were excited to do Raindance for that. But Steph had approached us separately and wanted to work with us on the hologram stuff because she was one of the first people that we had talked to and shown what we were working on back in December of last year, I think. We had our first prototype working. Her and Silent and Nam were the first three people that got to really see what we were doing. And Apple and I knew as soon as we got the first hologram working, it was like, all right, how do we integrate Steph into this? How do we get her dance performances? Because she's been dancing in VRChat for a couple of years now. And her dances are always amazing and really expressive. And I've always wanted to work with her. And this was a good opportunity. So when she approached us, it was, I think for Apple and I both, it was just, of course, like we were already planning on it. We already have, like, you know, design concepts for how the system could work and, you know, in-world capture and stuff like that. And so we had already had plans to work with her before she ever approached us. And so I think working together with her was going to happen eventually anyways, but Raindance was just kind of an excuse to get together to work on this stuff. Yeah.
[00:35:28.001] Apple_Blossom: I definitely wouldn't grant myself the artist side. Like, I'm not an artist, I'm a programmer, but what I do is I make canvases. I'll make a big flashy canvas and then it'll be blank and empty. And then the VJs will come over with their big bucket of paint and they will fill that canvas. And together we make the art.
[00:35:52.666] Kent Bye: Hmm. Yeah, well, I know as I was going through the SNR's Discord, I had come across a series of different versions and iterations that you had done for the system. And I know that you said you had to write it and rewrite it seven or eight times just to get it to the point where it is. And so I'm wondering if you can maybe give an abbreviated version of how, as you were writing this and rewriting it, you had to really figure out new ways to be performant. And then you throw in lighting, which is a whole other complication that's notoriously difficult performance-wise. So yeah, maybe you could give me a little bit of a recap for how this system came about through all of its many permutations and iterations.
[00:36:38.294] Apple_Blossom: Yeah, so I'm a big reinvent-the-wheel kind of person in a really bad way, which is why things take months, but they turn out real good. So first we had the particle system and we just had the screen and it was empty. And after we had made it, we saw a world that had a very, very good implementation of reflection probes that made it almost look like it was lit. And I was like, okay, reflection probe lighting, is this viable? So I spend a few days implementing this thing where it would have a custom reflection probe that would only capture the particles and it would like reproject reflections onto all of the avatars. It was terrible, the worst. And just like with the hologram and with the particles, the lighting solution that I found is the simple dumb answer, but with the performance cranked. So... What's the simple dumb way of calculating lighting for a million particles? Well, you just cast rays through it. Obviously, just like real light shoots little rays, we'll just shoot rays off the particles. Well, the reason that's dumb is the number, of course. Unity can handle like probably 30 real-time lights because they use forward rendering. I can get into that if you'd like, but it's not fun. So I had to make my own lighting system. It wouldn't work with Area Lit or with LTCGI because both of those depend on really small numbers of meshes so they can do linear transformation... Wait, sorry, I can't remember what the acronym stands for, but... complicated math on every single part of the mesh. And I needed something that would work in big parallel. So I wrote the system where, just like we capture all six sides with the camera, we capture a bunch of slices of a voxel version of the world. And at first it was just 16 slices. So we had this big grid texture and each slice of it would be a different like Z position in the world. And the camera would just capture all the slices, place all of the particles where they should be, blur it so that the light diffuses, and then we cast the rays through that texture, which is a lot simpler because each position is the pixel and you don't have to actually do ray intersect calculations on every object. You just linearly step to the pixel, sample the pixel, step to the next one, sample it. Real simple in concept, real difficult in execution just to make it actually function.
[00:39:13.939] Kent Bye: Just a quick clarifying question. You said like a voxelized version of the world, because, you know, you're in Unity, you have these 3D objects with meshes and physically based rendering, and my understanding is that voxel engines are completely separate. So is this all happening within the shader, that you're doing this kind of voxelized approach? Or are you having like a virtualized voxel approximation of the world? Or maybe you can just elaborate, what do you mean, and how does the voxel interface with what you're doing?
[00:39:38.539] Apple_Blossom: Yeah, so in Unity, you had mentioned it being kind of a limitation. It's actually a big boon because they handle a lot of the more complex shader stuff in the back end, like having to actually call renderers and interfacing directly with the GPU. So what's nice about Unity is they have a bunch of cool shader stuff hidden in the inner workings where there's no documentation and you need to just go right into it. But there's these things called custom render textures. I love them. They're the best thing in the world. What they are is a texture on your GPU, which is just a big list of pixel values, basically, sitting in VRAM. Now these textures, the custom render textures, you write a shader for them and you insert a material, and then every frame it recalculates all those pixels in arbitrary VRAM space where it doesn't actually affect the final image of the world. It just edits a texture. Now, you can put these textures into shaders and render them out. So you could have arbitrary calculations happening in the custom render textures, which is what we're doing with our lighting. Arbitrary, weird stuff happening in a big 2D, then 3D grid. And then eventually, at the very end of that big chain, it gets put into every single material that used that lighting system, and it casts the rays off that texture. So it's a texture version of the world. We don't have it actually voxelizing the meshes in the world yet, but that is something that's going to be done before we release the system. At least that's what I'm hoping. It's complex, but I do have code that does it.
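Here is a toy sketch of the "step through the texture and sample it" lighting idea, in Python rather than the actual custom-render-texture shaders; the 3D grid stands in for the atlas of blurred Z-slices, and the names, step count, and step size are all illustrative assumptions:

```python
import numpy as np

def march_light(grid, origin, direction, steps=32, step_size=0.5):
    """Accumulate light by linearly stepping through a 3D grid of
    emitted-light values (a stand-in for the blurred slice atlas).
    grid: (Z, Y, X) array of light intensity; origin/direction in grid units."""
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    pos = np.asarray(origin, dtype=float)
    total = 0.0
    for _ in range(steps):
        z, y, x = np.round(pos).astype(int)
        if 0 <= z < grid.shape[0] and 0 <= y < grid.shape[1] and 0 <= x < grid.shape[2]:
            total += grid[z, y, x]      # sample the light volume at this cell
        pos += direction * step_size    # linear step, no per-object intersection tests
    return total / steps

# Toy example: a 16-slice grid with one bright particle cluster in the middle.
grid = np.zeros((16, 64, 64))
grid[8, 32, 32] = 10.0
print(march_light(grid, origin=(0, 32, 32), direction=(1, 0, 0)))
```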
[00:41:19.130] Kent Bye: Okay, so the venue where I saw the Night Under the Lights, The Seasons at the Moon Pool, well, I guess I'm shifting over into the screen-based approach. At the Moon Pool, you have what looks like this pool, like this circular indent that looks like a swimming pool, that you're able to have at the base. The lighting in that was really just moving, profoundly moving. It reminded me a lot of the light and space movement and the piece of Aku that I saw at Superblue Miami by James Turrell. And a lot of the light and space movement was really focusing in on these really subtle ambient shifts in color. You know, this was like a real-time audio reactive experience that was at a much higher pace and velocity than what I've seen from any of the previous light and space works. But now with these virtual spaces, you're able to do this kind of more dynamic experience, and the lighting effects were so profound because I haven't really seen a lot of experimentation with doing these types of abstract particle effects with a lot of the lighting effects that are also attached to it to have that much dynamic motion to the lighting. So it felt like it was something I'd never seen before, just because it's terribly difficult to do. And it also felt like the experience of it was unlike anything that I've seen before. It was just really awe-inspiring and beautiful. So I don't know if you designed the Moon Pool to specifically amplify some of those real-time lighting effects, and if it only appears in the pool or if it's also reacting to the avatars and other dynamic aspects of whatever's in the scene.
[00:42:50.167] Apple_Blossom: Thank you so much. The Moon Pool was something that Arby had designed. So the first version of these particles we had showcased with the first version of the lighting system, which was terrible, we showcased at Modular Monday. And our friend the Arbiter, Arby, who's the owner of Night Under Lights, had contacted us and said that they would love to design something around the shader and the lighting system. And so Arby had sent me a sketch that was just like a circle, that was the pool, and then there's like seating around it. And I was like, okay, I can slap something together in Blender and see if this looks nice. And then as I was working more on the lighting system, I realized the pool is like the perfect test bed. You have a lot of different shapes happening. You have a lot of different surface area that you can reflect off of. And because we're doing ray marching, technically it's cone marching, a weird technical thing to do to cast cones without actually having to do a full cone. But with this map that Arby had designed, oh, it was beautiful. You could see the reflections shooting off of things. And in a ray marcher, it's real reflections. It's like real time. We don't need a reflection probe. There is one in there. That's just for the avatars. It gets all of it. It's beautiful and I love it. So I started looking through materials that we could add that would enhance those reflections. And so in a way, the lighting system was designed around the Moon Pool and the Moon Pool was designed around the lighting system. They both matured together over the three iterations of the Moon Pool's actual modeling. The first version was just a concrete pit. It was terrible. There's a reason we redesigned it twice. But the newest version I'm very proud of. It makes me very happy.
[00:44:38.820] A://DDOS: I will say too, really your lighting system, we have to look at the background and everything with your lighting system to see how it plays, because it really does affect not only the visuals, but the lighting and the feel of those visuals. Like if you're in a darker room and you have like a darker background, that lighting system is going to kind of stick out more. It's not going to blend as much with the background, right? That light's being absorbed by whatever materials behind it. So experimenting with... the moon pool, and also looking at the SNR Labs world and the Steph performance, they all kind of have a different feel to them because of what's behind the particles when you're looking at them.
[00:45:19.454] Apple_Blossom: Yeah, for sure. So the texture that's used for the floor of the pool, the texture that I grabbed for it off of open game assets, I grabbed that, desaturated it, threw it in, wasn't enough. It was still too much and it was too much contrast. It was ruining the visual. So I did that again. And then Arby told me it's not enough and we have to do it again. And so it's basically just like white and gray. And that lets the light absorb really well.
[00:45:46.089] Kent Bye: And I didn't know if like, sometimes when I see color effects, it's like your whole color context, it's so relational so that if you change one aspect of the color, you perceive other colors to be different, even though that actual color may have not changed. So I couldn't actually tell if the light was also reflecting off of the avatars as well. And so is light reflecting off the avatars or is it just like contextual relational thing? That's more of a perceptual illusion that I saw that it was changing the avatar colors.
[00:46:14.475] Apple_Blossom: It's a projector that affects the avatars, so it's similar to how Area Lit works, where it has to unfortunately re-render the entire avatar, well, at least the vertex step, and reload all of the mesh data back into the GPU, which is a bit of a performance hit, but it's not terrible. But what it lets me do is do that ray marching off of the avatars themselves. And on top of one ray, we shoot some diffuse cone rays in all different directions. And that gives us a lot more soft, like diffuse light. So if you look at some of the recordings, you'll see it's not like it's a reflection of the world. It's more like there's light in the air and it's softly touching the skin. And that's why the avatars, they just look beautiful. I spent so much time tuning it and now I'm so proud. So I'm just going to brag. They look nice. I love them. But yeah, lighting affects the avatars, but not vice versa. So they're not going to have any shadows or anything. That would be way too expensive, but they do receive the lighting from the system.
[00:47:23.519] Kent Bye: Nice. Yeah. Now, I was rewatching some of the video and just noticing those subtle lighting effects. And yeah, it just really reminded me of the light and space movement. And I did a whole deep dive digging into a lot of the videos of that and passed them along. I don't know if you had any thoughts or reflections on how this could start a whole new chapter of the light and space movement.
[00:47:43.712] Apple_Blossom: I did read through them and I did look at a lot of the pictures, and I saw some of the videos I need to finish, but beautiful and very evocative of different feelings. There's this thing called liminal spaces that became fairly popular a little bit ago, if you saw like the Backrooms was kind of a thing. The original version of that was just like endless, semi-familiar hallways. And they freaked me out in a way that I didn't understand for a while. And I still don't. And in the same vein that the ocean also freaks me out, it's grand, it's vast, and it's just there. There's nothing to really focus on besides the feeling of being surrounded by that thing. And yeah, it's very interesting in a way that makes me want to make some more. Maybe I'll do the art side. I'm thinking about trying the art because I've been doing the canvases and I've been making the stuff for the artists. Now I kind of want my turn on the art part of it. So with some of the imagery that I was seeing from the movement, yeah, definitely something that will be inspiring stuff I work on in the future.
[00:48:54.448] Kent Bye: Yeah, that was a big takeaway that I had from watching the performance at the Moon Pool, because it was happening on two screens, both from below and above, where you have the pool, but you also have it above. So my understanding was that it was a 480p resolution for each of those, and that you had 600,000 plus pixels that were being animated, but that the lighting, the color, was being dictated by the height. So the higher it was, it would be white, and the lower it was, it would be basically dark. And so how high it was would determine the color, and the height is determined by the luminance of the pixels that you're sending over. And so I feel like there's a way that you can start to play with color palettes and schemes. I don't know how to describe it other than basically floating lights that are changing color depending on how high they were. But it gave this deep emotional feeling to it, and I feel like there's a lot of space there to really explore how to just focus on the color aspect of it, rather than the spatial aspect of what shapes are there, and just play with color in that space, because it was just really striking.
[00:50:03.874] Apple_Blossom: Yeah, so for the, if any day ones are listening, for Old Moon Pool, you'll remember that the pool water level and the sky were the same image. And that was, I think, 720p, just because that's my default for everything. It looks nice. But actually, I think it just takes in whatever the video player is using. Now, we had the new version going for Raindance and Silent had started working on stuff, but I was in the world, I think it was like two or three weeks before, and Silent just comes up to me and is like, hey, you should make it so we could split these. I was like, sure, we'll try it out. So I wrote this really janky code that just moves where the scale and offset values are on the texture input. Literally just set those in a UdonSharp script real quick to make it so it would split the screen in half, to where the top, like the sky screen, was sampling the right half of the screen, and then the floor screen was sampling the left, and those could both be flipped individually as well, so another thing Silent had requested. But I didn't realize how night and day it would be to have the option to switch those, because Silent just immediately starts making stuff out of my wildest dreams. Now, the screens, they take in the color and each of the particles is mapped to different parts of the TV input, like the stream's input. And it just renders the color out from each pixel on that stream input. And it calculates the luminance and then offsets the vertex by that luminance value. So Silent sees that and is like, well, if I just make this little gradient shape, then I can make like 3D shapes happen. And we're like, you're so far ahead of us right now. You've taken the old version of this that eventually became the hologram and you're forcing it to be the hologram. Silent is wild. But yeah, I want to give as much control to the VJs as possible. So it does take in everything from their input directly and tries to just modify it. We've tried more detailed footage, like real life footage. It looks a little cursed. You'll never notice how bright in RGB value your sclera are until they're shooting out 10 feet in front of the rest of your face. That's what the recordings look like. So what we ended up having to do was a lot of like larger vector graphics or lots of gradients and lots of abstract shapes, and trying to create feelings off of those led us into a lot more abstract spaces in ways that are just beautiful and I love them. And yeah, I was very happy with the artists that we got to work with like that. Sorry, I'm just so happy. My favorites, absolute favorites in the whole scene. And we got to work with all of them.
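A small sketch of the split-screen setup and the luminance-to-height mapping described here, in Python/NumPy rather than the UdonSharp script and shader; the left/right split, the Rec. 709 weights, and the `max_height` parameter are illustrative assumptions:

```python
import numpy as np

def split_and_displace(frame, max_height=1.0):
    """Mimic the split-screen idea: the floor screen samples the left
    half of the VJ's stream, the sky screen samples the right half,
    and each pixel's luminance becomes a vertical particle offset.
    frame: (H, W, 3) floats in [0, 1]."""
    h, w, _ = frame.shape
    floor_half = frame[:, : w // 2]       # left half -> floor screen
    sky_half = frame[:, w // 2 :]         # right half -> sky screen

    def heights(img):
        luma = img @ np.array([0.2126, 0.7152, 0.0722])
        return luma * max_height          # brighter pixel -> higher particle

    return heights(floor_half), heights(sky_half)

frame = np.random.rand(720, 1280, 3)      # stand-in for one frame of the VJ stream
floor_h, sky_h = split_and_displace(frame, max_height=2.0)
print(floor_h.shape, sky_h.shape)         # (720, 640) (720, 640)
```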
[00:52:57.316] A://DDOS: I will say, you commented on the light and space movement. And I would say that this is kind of a similar experience. There's definitely other visualists out there that have done visuals on this that weren't featured at Night Under Lights because we've been doing these null events since probably, what, February, January? And so we've had a few VJs come through and have their own take on what visuals might look on this. There was an event we did, well, not we, but one of the performers from Night Under Lights did, Turquoise, this Sunday on the old version of the map. And he put together, was it like a three hour ambient audio visual experience? And I mean, it was, I think we were all blown away. Like, you know, outside of Rain Dance, there's still stuff going on in these spaces. There's still a lot of creativity going on and a lot of experimentation with this type of system that, you know, is out of our hands. They're whitelisted. They can come in and mess around and do whatever they want. And so we're definitely not the only experience that is out there with these tools. It's just, you know, we're the people that made them.
[00:54:04.977] Kent Bye: Yeah, it seems like the collaboration with the artists was a pretty key factor, where, like you said, you're creating these canvases and the artists see what's even possible. Then you're working closely with them to have this real iterative process, to listen to what they need and to create these custom systems that can amplify their creative visions, with SoftlySteph with her dance performance, Frictions of a Modulated Soul, and then working with these different VJs, with Silent, Arby, as well as Nam, with having the Night Under Lights: The Seasons at the Moon Pool. And so take me back to some of those first conversations with Silent, because Silent said that she came in and had seen this shader from Booth that was allowing you to do this screen projector of taking a 2D flat plane and having like spatial objects. And she was like, hey, maybe we could do 3D objects. But then, you know, eventually, you know, you'd already basically started to think about it and make it. But also at the same time, there's a thread of that point that led to a further iteration and development of what became the system that was shown at the Moon Pool at Raindance. And so take me back to that time when you were ready to start showing it to the artists and then what happened from there to further refine the systems.
[00:55:16.617] A://DDOS: Well, I think we showed the first version of the particle screen like the second week it was made. And that was just so that Apple could test the lighting system and performance, because we had no idea at scale what the system would do to somebody's computer. So for a good four or five months, that's really all we did. We ran, I want to say four, maybe three Modular Monday events using different iterations of our lighting system as she was developing them, just to test what a full lobby would look like, you know, which is 80 people. And at this point I knew of Silent, but I had never spoken to Silent. I think I caught her at one of the events and I hadn't spoken to her, but I'm kind of a wallflower at a lot of these events, where I just kind of sit back and observe and see what people are saying and take notes. I remember her saying, one day I'm going to do visuals on the screen. I think we had a pop-up or some type of event later on in the year. That's the first time we met her. She came out and she did visuals on one of the screens. I don't remember if it was for Modular Monday or another event we did. That's when we first met and first started talking. Then in December or January, we got a working prototype for the hologram. And I remember, at this point we had talked a little bit about visuals and she had done visuals with us on the screens like once or twice. And we were in VRChat our first day testing visuals on the hologram, and she had requested to join off of us because she wanted to talk to us about this crazy idea she had about making a 3D hologram. So she joins off of us and she's like, guys, I've got an idea. I have an idea of making 3D visuals. How would you approach this? And then she's telling us how she wants to make something that does 3D visuals. And right behind her, there's a working hologram that she hadn't turned around to see yet. And she turns around and then sees the working hologram. Yeah.
[00:57:17.751] Apple_Blossom: It was so funny. She's one of like three people who invented our approach after we had done it because it's that like beautiful simplicity that just takes a lot of time and patience to optimize. But the concept is there and the concept, a lot of people will figure out how to do it broad strokes, but not like the little nitty gritty.
[00:57:41.238] Kent Bye: And part of the genius of this approach of using the video as a way of encoding the information is that you can always have a recording of that video and play it back and have a full playback of this whole spatial experience, with not only the audio that's playing, but also these spatial visuals, which gives you a new way of doing this type of performance capture. So it sounds like you're still in the process of getting both the SoftlySteph performance as well as the Moon Pool performance of The Seasons into that form. But maybe just elaborate a little bit about what it takes to have this system that can do a live performance, but then also have the capacity to capture the video and then have it play back for other people to experience in the future.
[00:58:22.825] Apple_Blossom: So doing it for Moonpool is super simple. All we have to do is ask that the VJs record what they're going to output to the stream. So in OBS, you say stream and record instead of just stream. That's simple and very approachable. Like Moonpool, the public version that's on my profile has four videos that will autoplay, which are four different performances that we had at various stages of the Moonpool development. But the problem with the one that we did for Raindance, and especially for the Steph one, is the in-world controls and the avatar. So in Moonpool, the new version that I made for Raindance, it has that invisible control tablet as well, like the one with Frictions of a Modulated Soul. But in the Moonpool one, it affects how the particles are rendered. So if you remember when Arby was up, Arby was manually controlling that tablet while also VJing. And there's different parts of Arby's set where all of the particles will start extending way further than you think they're supposed to. Like this part in the center of the winter with the snowflake, where Arby just has it gigantic, to the point where the top and the bottom are almost touching, and all around the edges you'll see the particles coming down and hitting the couches and lighting them up as well. So getting those controls synced up with the timeline of the video is going to be a challenge, and it's one that I'm still staring at. It's right in front of me. I just need all the rest of the information from them. And the way harder one is gonna be Steph's avatar. We live record the avatar while she's dancing, because capturing her avatar alongside the big avatar looks real nice when they're dancing together. If you had like a 10-second delay, or if she misses a single movement or a fine detail, it's going to look off, so we're capturing it live. Problem is, she can't be there every time we want to replay it, of course, so we have to find a way to record the animations and basically mocap a copy of her avatar into the world, also synced up with the video, and get that control tablet that A://DDOS was controlling the entire time, which has 20 preset scene buttons and a bunch of sliders that they were modifying. And also the avatar parameters for Steph's avatar, with the Heath slider that she was manually adjusting while dancing. Lots of moving parts in that one. And to get it to autoplay means I have to have all of those synced up to a gigantic 40-minute animation. That's going to be a challenge, but it's one that the programmer brain churns on in the background. It's going to figure something out. We'll get it done. It's very important that the shows are accessible and that people can see them, because I don't want it to be just the first 60 people that were able to sign up to both. That being said, I want everyone to be able to see it if they want to.
[01:01:20.208] Kent Bye: Okay, so it sounds like there's a little bit of extra complication where it's not just the video, there's also the controls that are happening. So there's a number of different real-time actions, whether from someone who's VJing or from A://DDOS, who was there doing a lot of that control tablet work, so there's the timing of when those things were pushed, as well as real-time avatar information that's coming in. Do you have a sense of what the timeline format would be? I know there's MIDI, which has timing information. There's OSC, which has been used to do different avatar controls. Or there's SRT files for captions, or WebVTT for additional metadata for subtitles. This is something I've looked into in terms of, like, how do you sync up timed information with stuff that's unfolding? But have you looked at those formats? How do you sync those together with all that information?
[01:02:09.150] Apple_Blossom: I'm sad to say you're overthinking it. Unity has a built-in animation system that will allow you to control an entire timeline per frame. It has access to more controls than UdonSharp does because of the number of things you can call. You can have events in there that call UdonSharp scripts at specific times, so you don't have to constantly check the timer to see if you're synced up and if you need to do things. And on top of all of that, it also has interpolation built in, with curves, and it all runs on the actual Unity engine instead of having to go through UdonSharp, because UdonSharp is like an extra layer of performance issues. So while it would be possible, and not super difficult, to implement something that would just cross-check the time code, what we're gonna end up doing is: when the player starts, if you hit the button that says autoplay the Steph video, then it's just going to keep track of the time on the video player and make sure the big animation that's playing across the entire world, the one that's moving everything and setting all the values, stays synced up relative to the player, and everything else will be handled on Unity's side. I'm very selective about when I actually reinvent the wheel, but it would be very interesting to see. MIDI control was something I wanted for those tablets. It just wasn't something that any of the performers actually wanted to use.
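For readers who want to picture that sync loop, here's a minimal sketch of the idea in plain Unity C#, under a few assumptions: the world animation is one long Animator state of known length, and the time source is a video player (Unity's built-in VideoPlayer stands in here for the VRChat video player; the state name, clip length, and drift threshold are all made up for illustration, not taken from the actual world).

```csharp
using UnityEngine;
using UnityEngine.Video;

// Minimal sketch of "let Unity's animation system drive everything and just pin its
// time to the video player". In VRChat this would live in an UdonSharpBehaviour.
public class ReplayTimelineSync : MonoBehaviour
{
    [SerializeField] private VideoPlayer videoPlayer;       // stand-in for the world's video player
    [SerializeField] private Animator worldAnimator;        // plays the giant ~40 minute animation
    [SerializeField] private string stateName = "ReplayTimeline"; // assumed state name
    [SerializeField] private float timelineLength = 2400f;  // seconds; must match the clip length
    [SerializeField] private float maxDriftSeconds = 0.25f; // only resync when noticeably off

    private void Update()
    {
        if (!videoPlayer.isPlaying) return;

        float videoTime = (float)videoPlayer.time;
        AnimatorStateInfo state = worldAnimator.GetCurrentAnimatorStateInfo(0);
        float animTime = state.normalizedTime * timelineLength;

        // Let the animation run on its own; only snap it back when it drifts from the video.
        if (Mathf.Abs(animTime - videoTime) > maxDriftSeconds)
        {
            worldAnimator.Play(stateName, 0, videoTime / timelineLength);
        }
    }
}
```

In this framing, Animation Events placed on that long clip could call scripts at specific moments, as described above, so the per-frame work stays limited to the drift check.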
[01:03:30.051] A://DDOS: Hey, we were pretty time crunched too. We would have had to set up both our MIDI controllers and get that all configured, a whole other layer, when we were crunching for time. So.
[01:03:39.322] Kent Bye: Yeah, and it also eliminates another potential point of failure by not taking the MIDI input. If you have it all in the virtual world, then you can have a lot more confidence in the button presses, for simplicity's sake. And part of my context for looking at that is more in WebXR rather than in Unity, so I don't have the Unity animation system to sync some of this stuff together. But some of the stuff that I'm doing with, like, the podcast and having time code information, you know, how do you take the transcripts that I have and match it up? So I've been kind of digging into the different formats to do that. Yeah.
[01:04:10.100] A://DDOS: We have some other experiments we've been working on too. I don't know if Apple's ready to reveal any of that, but there are ways to encode the video stream too with timing data. That's something that's really common with some of the larger events in VRChat. Even just having part of your stream, like a corner of it, when it hits a certain color band, that starts a timer in world or something. There's a bunch of different ways that are really simple. You could probably time sync stuff too.
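As a rough illustration of that corner-of-the-stream idea, here's one hypothetical way it could be done in plain Unity C#: read back a single corner pixel of the stream's RenderTexture each frame and start a timer when it enters an agreed-upon color band. This is not how any particular VRChat event does it; the readback approach, field names, and tolerance are all assumptions, and a production world would likely do the comparison on the GPU rather than pay for a CPU readback every frame.

```csharp
using UnityEngine;

// Hypothetical sketch of a "color band in a corner of the stream" trigger.
// All names are illustrative; a real world would likely sample on the GPU instead.
public class CornerBandTrigger : MonoBehaviour
{
    [SerializeField] private RenderTexture streamTexture;      // the VJ stream's render target
    [SerializeField] private Color triggerColor = Color.green; // agreed-upon band color
    [SerializeField] private float tolerance = 0.1f;

    private Texture2D readback;
    private float timerStart = -1f;

    private void Start()
    {
        readback = new Texture2D(1, 1, TextureFormat.RGBA32, false);
    }

    private void Update()
    {
        // Copy one corner pixel of the stream into a 1x1 CPU-side texture.
        RenderTexture previous = RenderTexture.active;
        RenderTexture.active = streamTexture;
        readback.ReadPixels(new Rect(0, 0, 1, 1), 0, 0);
        readback.Apply();
        RenderTexture.active = previous;

        Color corner = readback.GetPixel(0, 0);
        bool inBand = Mathf.Abs(corner.r - triggerColor.r) < tolerance
                   && Mathf.Abs(corner.g - triggerColor.g) < tolerance
                   && Mathf.Abs(corner.b - triggerColor.b) < tolerance;

        // When the band first appears, start the in-world timer.
        if (inBand && timerStart < 0f)
        {
            timerStart = Time.time;
            Debug.Log($"Color band detected, timer started at {timerStart:F2}s");
        }
    }
}
```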
[01:04:37.381] Apple_Blossom: But I've never seen an API for something that plays videos that didn't let you directly access the time value for where that video is in the recording. So especially if we're doing the prerecorded version and we just have a YouTube video that it's loading, I know exactly what part of the YouTube video we're on at any given time just by calling, like, oh, currentTV.currentTime, something simple like that. If I was going to have to reinvent Unity's animation system from scratch, I probably would actually just use JSON, where each entry has start time and stop time variables inside of it. And we do use JSON in other projects. I think it was revealed that I'm working on the C1024 project as part of the Unity prefab, where I'm working on the Unity side. And for that, we are actually loading JSON data off of one of the pieces of that project. So it's definitely possible. It's just that I don't want to if I don't have to. You can get real arbitrary, to be fair. One of my current things is I downloaded star data from NASA and threw that into the particle shader, where I had to grab 2.5 million data points and try and convert them into a format where it wouldn't destroy the computer to render them, which was fun. I ended up encoding them all into a mesh and then just rendering the particles where the mesh positions were. Keep it simple, make it efficient, and then it'll just go.
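To show what baking a big point set into a mesh can look like in practice, here's a short, hypothetical sketch in plain Unity C#. It only illustrates the general technique of storing positions as mesh vertices so a particle or point shader can read them; the chunking, attribute packing, and shader side of the actual star-data experiment aren't described in the interview, so everything beyond "positions go into a mesh" is an assumption.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Hypothetical sketch: bake a large list of points into a mesh so a particle/point
// shader can read positions straight from the vertex buffer. Names are illustrative.
public static class PointCloudMeshBaker
{
    public static Mesh Bake(Vector3[] points)
    {
        var mesh = new Mesh
        {
            // Unity's default 16-bit index buffer caps a mesh at 65,535 vertices,
            // so a multi-million-point set needs 32-bit indices (or several meshes).
            indexFormat = IndexFormat.UInt32
        };
        mesh.SetVertices(points);

        // Render as raw points: one index per vertex, no triangles needed.
        int[] indices = new int[points.Length];
        for (int i = 0; i < indices.Length; i++)
        {
            indices[i] = i;
        }
        mesh.SetIndices(indices, MeshTopology.Points, 0);

        // Oversized bounds so the renderer isn't frustum-culled while the shader
        // moves points around.
        mesh.bounds = new Bounds(Vector3.zero, Vector3.one * 10000f);
        return mesh;
    }
}
```

With a points-topology mesh like this, a MeshRenderer plus a point or particle shader can draw one element per vertex with no per-frame CPU work.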
[01:06:09.847] Kent Bye: Yeah, it's like a stress test. Yeah. And the SRT format and WebVTT essentially give you start and end times, and with WebVTT you can actually encode JSON objects at those time codes. It's an open standard, but I don't know how widely applied it is. So it was getting at the root of what I needed for my thing. But yeah, I guess as we start to wrap up, I'd love to hear what each of you think the ultimate potential of virtual reality might be and what it might be able to enable.
[01:06:38.314] Apple_Blossom: Until we can fit in the wires. I can't really live there, but I can sure do my best to be absorbed into them as much as I can. VRChat is such a gigantic and wonderful community in some parts, if you skip publics and actually go find people to spend time with. It feels like socializing. Yeah, you can live in there. In a way, it kind of scares me sometimes, because I can't fit into the wires. Sometimes I want to.
[01:07:08.985] A://DDOS: I think, I don't know, I look at VRChat specifically and this kind of scene as an iteration of something that's already existed before. You know, think of like old web development days when the average person could get access to tools to make like their own website or learning how to host websites or, you know, even something as simple as like a MySpace. It's going to show my age, like the, you know, MySpace profiles or whatever. You know, you give people that... Yeah, GeoCities. You give the average person access to tools to create things. And I think you're going to get way better stuff than we've made. It's just going to take time. And I think it's really cool to see something like VRChat enable that creative side and creative environment for people to explore things that have never been explored before. So I don't know where it's going. I don't even really want to predict where it's going. But I hope it keeps going. And I hope that a space like this exists in a virtual medium forever. That'd be really cool.
[01:08:06.980] Kent Bye: Oh, go ahead.
[01:08:08.802] Apple_Blossom: Oh, I was just going to add on. A lot of what VRChat has been up to is stuff that Roblox had done during its development, and a lot of that jumped from Roblox to VRChat as Roblox became very, very, very corporatized. But there are certain steps in Roblox's development that VRChat hasn't done, like they haven't IPO'd yet, I'm pretty sure, if memory serves. And not having shareholders to be beholden to has definitely helped them stay on a really weird course instead of having to shift toward whatever would make the most money at any given time. So it's definitely a very fun, creative space. And I do think it'll keep following a lot of the trends that Roblox followed, if anyone was in those development communities back in 2012. It's a very interesting space.
[01:08:59.053] Kent Bye: Yeah, I had a whole discussion with Table, Unix, Mustabi, and QDOT back in episode 1394, right after the VRChat layoffs. So I do think there's actually still a lot of pending things that VRChat has to figure out in terms of how they're monetizing. One of the things that QDOT had mentioned in that conversation is that there is a lot of centralized control of assets within a place like Second Life, where they could really monetize a lot of the look and feel and all the assets, but the asset genie is kind of out of the bottle, is what he said, with VRChat, where there's not really a clear way of monetizing some of the different look and feel. And I think that has actually catalyzed a whole lot of innovation on so many fronts of avatar representation and environments, and it's in some ways the magical key aspect of what makes VRChat the way that it is. But I don't think that there's necessarily a clear path towards monetizing, just because of the creator economy and how that fits into even the things that you all are making. I think a key part of how they're going to actually make it a sustainable business is to have a thriving economy where not only can VRChat get their bills paid, but also independent creators can have a way to turn this into more than just a hobby and more into a career. So I don't know if you have any thoughts or reflections on that, just because I do think it's an open question that as they go down this path more towards something like Roblox, they kind of have to evolve the way that they bring in their revenues so that it makes sense as a business and they can sustain themselves.
[01:10:33.058] Apple_Blossom: From day one until two years ago, Roblox had locked down the actual avatar customization. You can make whatever you want out of the world, assuming you can make it out of little Lego bricks, but you could not make your own hat. And so hats were like one of the most expensive items in Roblox for a very long time. I think they still are. But where Roblox really made their money was user-made experiences and having different transactions in there. Like, it started with VIP t-shirts, where you'd check if a user account owns a t-shirt, and then if they had that t-shirt, you'd let them get access to special things in those worlds. Now, VRChat has slowly started moving towards that, and it's something that made Roblox a lot of money. But the problem with VRChat is, before they actually implement this, and it's still not there in the same way, you know who's making a lot of the money off the VRChat user base? It's Gumroad, it's Booth, it's Patreon. They're all taking a cut from VRChat's community that VRChat could be taking a cut from instead and actually funding the platform that all of these are based on. So I think it's interesting, but I think it'd be a big step in the positive direction if some of that monetization wasn't forced into VRChat, but was optionally included in VRChat. Because, yeah, that genie's out of the bottle on stopping people from putting hats on themselves without paying, but I'm glad it's not that, because I hated that. And I'm really hoping it's very focused on the creators in the space, because there's a lot of wonderful creators in VRChat, a lot of people much smarter than me, much more talented than me, who are making beautiful things, and they deserve to get paid. A lot of people don't get paid in VRChat for work that they're doing for these groups, but they all deserve it. The money deserves to flow. If people are trying to make it their job, it should be a possibility. That's kind of where I'm at with it. Roblox has Robux, and keeping it in their own currency makes a lot of things simpler on the legal side, on top of making it so there's a very standardized form of payment. You don't have to do a bunch of local currency conversions. If you actually had something similar where you could just pay it right to games and the games could see what you've purchased. Those t-shirts eventually turned into this thing called game passes. Same exact thing, except that you didn't have to wear a t-shirt while you did it. If VRChat had something similar where you could really scan through and see what the player has purchased from the groups or from the world, and have those all be interconnected so that you could pay, like, oh, $5 and you get to go in the VIP booth of the dance area. If that was in VRChat instead of on Patreon for all these dance clubs, the money would be flowing.
[01:13:24.025] A://DDOS: I'm of a similar mindset, but I also know from my experience, at least with the more creative side of the scene, that there's a lot of opposition, honestly, from the player base to monetization. Not necessarily the creators. I think the creators and people who are making a lot of the content agree that there should be some type of compensation for a lot of the work they do. But a lot of the people that are just kind of attendees, you know, they don't really want to pay for a lot of these events. So a big thing I hear from them is, you know, I'm not going to an event if there's any tip-me jar or anything like that. You're just begging for money. I've heard a lot of comments like that, which is surprising. But I think you're right. I think it does have to happen eventually. And I mean, I know you would love to do nothing but create all day inside of VRChat if you had the money to do it. But I think it's not just a shift for VRChat as a company; as a community and a mindset, it's something that needs to shift over time as well.
[01:14:22.271] Apple_Blossom: Yeah. That's why I said that they deserve to get paid. 'Cause they do deserve to get paid, even if they don't think they do.
[01:14:30.802] Kent Bye: Yeah, it's certainly an open question. To go from having no economy to a big thriving economy, I think they've been really hesitant to fully roll it out because they don't want to get it wrong. But also, yeah, it's just one of those things where they're trying to do it in a way where it doesn't just completely get rejected by the culture, but also makes sense for each of the worlds that are being created. And there's a lot of moving parts that have to come together to get to the point where they're in a better place, where they're not having to lay off their core staff, and also to allow the platform to continue to exist in the way that it is right now. So anyway, lots of open questions there as we continue to go forward and see how it all plays out. But I guess I'd give you one last opportunity if there's anything else that's left unsaid that you'd like to say to the broader immersive community.
[01:15:19.210] Apple_Blossom: I just want to say you're all putting a lot of work in, and it's wonderful and appreciated, and it deserves to be compensated if that's something that's happening. There is money flowing. There is an economy in VRChat. It's all going to avatar and world creators. They get paid a lot of money, and basically no one else does. There's a lot of... I won't get too into it, but you deserve to get paid for labor that you're doing. And if you're not enjoying it, it's work. Work deserves to be paid.
[01:15:52.318] A://DDOS: And I guess the only thing I really have to say is, with the SNR Labs project, we're really looking to meet other people in the VRChat scene, or maybe even people from outside the scene, that want to do something different in the virtual space or might have any interesting ideas. We're pretty open to talking to people about hypotheticals and new projects. I mean, I think we already had like one or two people come up to us after Raindance ended wanting to collaborate. And that's kind of what we want to do from here on out: work with other people that might have some cool creative ideas that we can be a part of. Feel free to reach out to us on Twitter, Discord, or wherever you see us in the metaverse.
[01:16:32.535] Apple_Blossom: I'll talk anyone's ear off about all my shaders for hours. You won't be able to stop me. DMs are open. Come say hi.
[01:16:40.822] Kent Bye: Nice. And just a quick follow-on: has there been any specialized audio experimentation that you've seen within the VRChat club scene?
[01:16:48.189] A://DDOS: Yes. Yes. There's, I forgot the name of it, ArcLabs. Shelter uses it. There's a couple other places that use it. There is real experimentation going on. I know, did you go to Candy Trip?
[01:17:02.162] Apple_Blossom: Yeah, I was going to say Candy Trip.
[01:17:03.943] A://DDOS: So they kind of have a similar system where they're experimenting with spatial audio and reverb and stuff like that. I mean, there's definitely experiences out there already, but I think that's still a field that is treading new ground, I guess.
[01:17:16.860] Kent Bye: Okay. Yeah, it was hard for me to discern the exact spatialization of what Candy Trip was doing, because it just sounded like a stereo mix to me as I was moving around the space. But I think the system that you've created could be very well suited to being paired with a spatialized audio experience, to help visualize that sonification in a spatial context. I think there's a lot of space there, and I'm really excited to see where that goes in the future.
[01:17:43.741] Apple_Blossom: I think I see what you mean. So, Candy Trip had just, like, a better dynamic range and, like, surround sound kind of thing going, but it wasn't like it was in different spots in the world. Yeah. Yeah, no, I don't think there's any, like, I can't think of any events that do it, but there are a lot of worlds that use it, especially like, there's a lot of fun, weird stuff happening in like horror maps and other experimental maps. They'll use them a lot. But for music itself, having it all in one ear kind of sucks, to be honest. I think that's why it's not super popular in clubs.
[01:18:13.524] Kent Bye: Yeah, well, anyway, I'll pass along some videos and stuff that have ideas for what could be possible with mixing the spatialization with the visualization of the particle system that you've created. But I just wanted to thank you again, both A://DDOS and Apple, for coming on the podcast to help share a little bit more about your creative journey. I think the tools that you've been able to create are creating an entirely new spatial canvas for VJs. And actually, one of the things that Silent and Nam had said is that you're gonna have to come up with a new name beyond VJ, because it's not just video jockey. It's now like pixelmancer or holographic engineer or something.
[01:18:51.899] A://DDOS: I've just been saying visualist. Or visual engineer.
[01:18:57.177] Kent Bye: Immersive jockey, whatever the new acronym is to describe what's possible. I think the tools that you're creating are creating a new genre of this type of spatialized, immersive experience creation that's dynamic, that is kind of the next phase of going from a 2D VJ into a more 3D spatialized experience. And yeah, just all the lighting effects and the experience that I had in both SoftlySteph's Frictions of a Modulated Soul, as well as the Moon Pool experience of Night Under Lights: The Seasons. Both of them were just really awe-inspiring to see what kind of new possibilities are going to be opened up, and I expect to see lots of people be inspired and further push forward what's even possible within the medium. So thanks very much for creating this system and for coming on the podcast to help break it all down.
[01:19:45.353] Apple_Blossom: Thank you so much for your time and thank you so much for the conference. It means a lot.
[01:19:50.475] A://DDOS: Yeah, thank you for having us.
[01:19:52.436] Kent Bye: So that was Apple_Blossom and A://DDOS. They are part of SNR Labs, which is doing all this type of experimentation and innovation when it comes to what's called the Apple Global Illumination system. It's this hologram shader system that has a particle screen, which is more of a 2D version, and then the hologram version, where VJs are able to use their existing 2D workflow and then translate that into these volumetric experiences. And they were also a part of two really amazing and mind-blowing experiences that personally were some of my favorites that I saw this year at Raindance Immersive. Frictions of a Modulated Soul by SoftlySteph ended up winning the best dance experience at Raindance Immersive 2024. And then Night Under Lights: The Seasons at the Moon Pool was personally my favorite music experience this year, and something that was completely and utterly mind-blowing. I highly recommend going into VRChat and playing through the replayable version. I'll put a link in the show notes so you can go check it out. And there's also a YouTube video, but honestly, it doesn't really translate how majestic it is to actually be in the space and see the lighting effects. So I have a number of different takeaways about this interview. First of all, I want to come back to the light and space movement, because there are just really amazing technical innovations here to figure out how to render that many points, but also have light emitting from them, and then have that impact the environment, but also impact the avatars. It's just a really incredible experience of these subtle modulations of light that feels to me like the next iteration of what I've seen from the light and space movement. I saw a piece by James Turrell at Superblue Miami that was a much more subtle use of slight gradients of color, where you're in this big giant space that's just turning into these different colors, and you walk out of it and your perception of the world around you is completely shifted, because your eyes have been saturated with these colors. So the types of experiences with the light and space movement were typically pretty subtle and more ambient in that way, and some of these are much more in the clubbing aesthetic and much faster paced. But still, nonetheless, the experience of watching some of these lighting effects is just utterly transfixing and magical. These new innovations are not only radical in terms of doing this volumetric representation, using this primary innovation, in the case of the hologram version, of transcoding everything into video. So there are six different views that are able to encode 3D objects, and those are then decoded back into 3D objects that kind of look like this somewhat noisy, Depthkit-like point cloud aesthetic, but also with particle effects, and also with more clearly defined 3D objects in there. So it's kind of a hybrid between each of those, and so it's kind of a new aesthetic. And then on top of that, there's a bunch of lighting effects on it all, which are impacting both the environment as well as the avatars. With the different environments, they have the test facility, which is more of a cube, more concrete, dark, grayish, with a little bit of a glimmer that is reflecting off the lights.
But there are essentially two views: one where you're up top and you're looking through what's a little bit more like a frame. For me, that view looks a little bit more like you're watching something on a screen. But once you dive into the bottom of the pit, then it has much more of a volumetric effect, because you're immersed there with these undulating 3D objects. And so that's the piece that's going to be featured at Venice Immersive as part of the Worlds gallery. And then there's the Moon Pool, which is a completely different venue that is much more like a pool that you're standing around, and you have this kind of indent, and within the indent, where there would normally be water, are these particle effects that are going up and down. That's the more screen-based version, where the effect is both on the pool and also mirrored up above. And so there are two different views that are able to play off of each other in a certain way, but there's a little bit less control over the colors that are coming through, because it's the luminance values of the original video that are dictating the height. And so there's kind of a translation that folks have to go through to take the videos that they would normally do within the context of a VJ set, put them into the Moon Pool, and have this more volumetric rendering of them. And Apple said that it's a little bit less like you take a video; it's moving into more abstractions of shapes, less of a 3D object and more of a flowing, relational particle effect floating around, with all these really amazing lighting effects on both the world around them as well as the avatars. I think this is part of a broader trend of what I'm seeing with these different types of volumetric systems in these VJ club scenes. There's the Moon Pool, which has the more 2D particle-screen effect. SoftlySteph has more of the holographic representation, and in her case, she actually had two streams: one was pre-recorded and one was being live-streamed, translating her avatar. So she's kind of mirroring her body in a way that can be represented visually, sometimes doubled or copied and rendered out multiple times. And so, basically, what would normally be 360 pixels square, six times over, so over 700,000 pixels, and then times two streams, 1.4 million of these pixels. So there's the level of optimization that they had to do to figure out how to stream all this data in, and the brilliant insight that you could use video compression to do that, because a compressed video stream can carry way more data than you could practically stream raw. So they're able to do this at 30 frames per second in 720p, but with the smaller tile resolution of 360 by 360, which frankly is enough to do a pixel-art aesthetic that is combined with both the particle-effect aesthetic, mixed with a little bit of the Depthkit point cloud aesthetic that has a little bit of noise in it, with more of the 3D objects that you can see as a trace in there as well. So each of those are creating new forms of volumetric capture and display that I think are really quite exciting, because it's something that the VJs are going to continue to iterate on in different contexts and on different platforms.
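To give a flavor of what decoding positions out of a tiled video frame can look like, here's a loose, hypothetical sketch in plain Unity C#. This is not the SNR Labs shader (which runs on the GPU and whose exact encoding isn't public); the tile layout, the use of the red channel as a depth value, and the capture-volume size are all invented to illustrate the general idea of several packed camera views being re-projected back into 3D.

```csharp
using UnityEngine;

// Loose, hypothetical illustration of "pack several camera views into one video frame,
// then re-project each pixel back into 3D". Everything here is an assumption, not the
// actual SNR Labs encoding; in practice this work would happen in a vertex shader that
// samples the video texture directly, one particle per pixel.
public static class TiledFrameDecoder
{
    private const int TileSize = 360;      // six 360x360 tiles packed into one video frame
    private const float VolumeSize = 10f;  // world-space size of the capture volume (assumed)

    // Decode one tile of a CPU-readable copy of the frame into world-space points.
    // 'face' selects which axis the tile's camera looked along (0..5).
    public static Vector3[] DecodeTile(Texture2D frame, int face, int tileX, int tileY)
    {
        var points = new Vector3[TileSize * TileSize];
        for (int y = 0; y < TileSize; y++)
        {
            for (int x = 0; x < TileSize; x++)
            {
                Color c = frame.GetPixel(tileX * TileSize + x, tileY * TileSize + y);

                // Illustrative convention: red carries normalized depth along the
                // tile's camera axis; the pixel position carries the other two axes.
                float u = (x / (float)TileSize - 0.5f) * VolumeSize;
                float v = (y / (float)TileSize - 0.5f) * VolumeSize;
                float d = (c.r - 0.5f) * VolumeSize;

                points[y * TileSize + x] = face switch
                {
                    0 => new Vector3(d, u, v),   // +X view
                    1 => new Vector3(-d, u, v),  // -X view
                    2 => new Vector3(u, d, v),   // +Y view
                    3 => new Vector3(u, -d, v),  // -Y view
                    4 => new Vector3(u, v, d),   // +Z view
                    _ => new Vector3(u, v, -d),  // -Z view
                };
            }
        }
        return points;
    }
}
```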
Apple had to write her own lighting system, and there are a couple of other lighting systems that were mentioned: the AreaLit shader by Lox, as well as LTCGI by Pi, which specifically uses the linearly transformed cosines algorithm. Each of these systems had different limitations that didn't quite work at the scale of having 1.4 million particle pixels emitting light and having that impact the environment and the avatars around them. It's really quite amazing to see, because you really haven't seen a lot of this at this scale before, and it's really quite awe-inspiring and transfixing. And some of the other things that I've seen recently: I had a chance to go to the Concrete space called Pale Sands, which is also being featured at Venice Immersive this year. It's more of a really vast expanse of sand dunes, and the sand dunes are mapped out with a mesh, so they're able to project and reflect light off of what is, again, kind of like a 2D screen that VJs are able to project onto, set at an angle. And then there's what they call the creature, which has more of the 3D spatial shaders that are able to come in there as well. Essentially, it's like overlooking a mass of sand dunes and seeing what looks like projection-mapped reflections coming from a floating hologram on a 2D plane. And it was just utterly transfixing, because there are different ways that they're able to have these holographic, projection-map-like effects that would be physically impossible to do in reality, especially at that scale. So that was Concrete at Pale Sands. There's also Sanctum, which is being featured at Venice Immersive, that I had a chance to get a tour of from Silent, where she was showing what's more of a stereoscopic effect inside this reactor cylinder, with some lighting effects happening with it, and again, other lighting effects in the world around them, playing with other lighting systems and having kind of a more billboarded stereoscopic 3D effect, but still nonetheless a more spatialized experience of what VJs can start to do. So yeah, overall I'm seeing these trends of moving more and more into these kinds of spatial shader experiments, with different and much more complex lighting systems, and just really pushing the edge of the visual forms of these experiences that people are having in the clubbing scene. So I'll be continuing this whole series with, next, an interview about Frictions of a Modulated Soul with SoftlySteph to really unpack that dance performance, which was just really quite moving and beautiful. And then we'll be ending with a reflection on the evolution of the VJ scene over the last number of years, led by VJ Silent, as well as VJ Namoron and DJ Ladybug, who all participated in Night Under Lights: The Seasons at Moonpool. And so we'll be talking a little bit more about how this all got developed. So I guess one other final takeaway is just that I'm seeing a lot of these really interesting collaborations happening with people from around the world who have the desire to push the edge of creative expression. And so Apple_Blossom and A://DDOS are creating these blank spatial canvases for other VJs and artists to come in and start to paint on.
And so they're really enabling this whole new form of expression for artists, where you can start to create this kind of spatial art in a way that uses existing workflows with VJ programs like Notch and TouchDesigner, but also more bespoke approaches, like taking dancers such as SoftlySteph, capturing her dances, and translating them into a whole spatial expression. And so, yeah, just the way that this technology was developed in very tight collaboration and iteration with these artists, around what they wanted to do with it. And as they're building the technology, they're looking at both the environments where it's going to be displayed, like the Moon Pool with Arby, which was part of Night Under Lights, but also working with SoftlySteph and Silent and a number of other VJs they were collaborating with, to push the edge of what's even possible with these new technologies. So yeah, I'm really excited to see where this goes, because it's a really exciting trend. It's one of the things that I personally find most exciting: you go to the club scenes and, yes, you can have this kind of replication of what it's like to go to a club and mimic the lights, and have a sense of realism and environmental presence and full-body tracking and dancing. But the thing that you can't do in physical reality is the kind of stuff you get with these 3D spatial holographic visualizations and volumetric capture that can blow your mind with what's even possible with the technology. And other venues, like Concrete at Pale Sands, are basically impossible to get into, because there's such high demand from people trying to see these different types of performances that are really quite awe-inspiring to witness. So yeah, it's a really exciting time, because there are a lot of innovations in new platforms and technologies like this that are enabling all sorts of different creative exploration. And you can even go into Night Under Lights, hook up a live stream, and start to experiment with it yourself. So go check out the replayable version of Night Under Lights. It's called NUL Presents The Season's Replay at Moonpool by Apple Blossom. So that's all I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoyed the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.