At Snap's developer conference, Lens Fest, I did an interview with the first-place team in the Snap Spectacles Lensathon, named Decisionator, including Candice Branchereau, Marcin Polakowski, Volodymyr Kurbatov, and Inna Horobchuk. I also summarize the other Spectacles Lensathon projects after serving as a preliminary judge for the competition. See more context in the rough transcript below.
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality
Rough Transcript
[00:00:05.458] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. It's a podcast that looks at the structures and forms of immersive storytelling and the future of spatial computing. You can support the podcast at patreon.com slash voicesofvr. So continuing my coverage of Snap's developer conference, Lens Fest. Today's interview is with the first-place prize winner of the Lensathon, which was a 25-hour hackathon that happened a couple of days ahead of Lens Fest. This team is called Decisionator. It was an augmented reality application where you would just point at two different objects, which would trigger object detection of those objects. Then those objects would be sent up to an AI, and you're basically asking the AI to make a decision between these two objects. The user interface felt like the most AR-native type of application: you point at things, and it reacts and gives you more information. It was also kind of tongue-in-cheek, like, hey, we're moving into a world where AI is going to be making all these decisions algorithmically, so why not just explicitly ask it to make these decisions? So there's a bit of social commentary. It's an experience where you're asking AI to make decisions for you, essentially. But the way it was done was really seamless, and I think it points to a future where you have more and more of these artificial intelligence integrations for augmented reality, with AR being this kind of front-end interface to the AI that's happening on the back end.

I was flown down by Snap to cover both the Lensathon as well as Lens Fest, and I did want to at least be a judge so I could see all the different projects. So I'll have a few comments on some of the other projects that were developed this year using the new features of the Snap Cloud. The Snap Cloud includes the Supabase integration, which is like a Postgres database backend that is able to dynamically create data buckets on the fly, so if you wanted an application with data persistence, you'd be able to store information and refer to it later. Having some sort of live features was another option, so there were multiplayer types of experiences being featured. And the last thing was these edge functions to call out to AI services and get more information and metadata. So in this case, they're sending out an image, getting information about these different objects, and then having the AI make a decision, but also storing that information and building up a repository of contextual information, so that it's learning a little bit about your preferences over time. So there are ways that it's starting to customize this Decisionator process.

Just a quick comment on that: right now, I don't know if we're in the middle of an AI bubble, and whether or not it's going to be economically feasible to base projects like this on these different AI services. It depends on who's going to be paying the bill, because right now Snap doesn't have their own large language model that they're building off of. They're mostly calling out to other third-party APIs, which allows them the latitude to give developers the choice of what they want to integrate.
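To make the edge-function flow described above concrete, here's a minimal sketch of what a Decisionator-style Supabase Edge Function could look like, written in TypeScript for Deno. This is illustrative only: the function shape, table name, and LLM endpoint are my assumptions, not the team's actual code. Note that each call to the LLM endpoint below is exactly the kind of metered, third-party request someone ultimately has to pay for.

```typescript
// Hypothetical Supabase Edge Function: receive two detected object labels,
// ask a third-party LLM to pick one, and log the choice to Postgres so
// future decisions can build on the wearer's history.
import { createClient } from "npm:@supabase/supabase-js@2";

Deno.serve(async (req: Request) => {
  const { userId, optionA, optionB, context } = await req.json();

  // Call a generic third-party chat-completions API (placeholder URL and
  // model; Snap routes out to external providers rather than its own LLM).
  const llmRes = await fetch("https://api.example-llm.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${Deno.env.get("LLM_API_KEY")}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "example-model",
      messages: [
        { role: "system", content: "Pick exactly one option and give a one-sentence reason." },
        { role: "user", content: `Context: ${context}. Choose between "${optionA}" and "${optionB}".` },
      ],
    }),
  });
  const decision = (await llmRes.json()).choices[0].message.content;

  // Persist the decision; the stored history is what lets later prompts
  // reflect the wearer's past preferences.
  const supabase = createClient(
    Deno.env.get("SUPABASE_URL")!,
    Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!,
  );
  await supabase.from("choices").insert({
    user_id: userId,
    option_a: optionA,
    option_b: optionB,
    decision,
  });

  return new Response(JSON.stringify({ decision }), {
    headers: { "Content-Type": "application/json" },
  });
});
```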
But if you're going to be creating these different types of AI applications, then at the end of the day, either Snap's going to be paying that bill or the developer's going to be paying that bill. They did announce the Commerce Kit, which would enable some sort of subscription service for some of these different applications, and would let you do in-app purchases, essentially. So the Commerce Kit and the business model of all this stuff hasn't really been fully fleshed out or laid out. But as they get closer to the consumer launch next year, I'm sure we'll get a little bit more information about what different types of business models developers can start to think about. I had other conversations, including with Joe Darko, where we started to talk about this, but at this point they've only announced the Commerce Kit, that there's location-based entertainment as one revenue stream, and that Snap themselves are going to be working with different brand partners and bringing different developers in to start to develop launch applications for different brands.

So I just want to mention some of the other applications that were developed over this 25-hour period. Honestly, it wasn't a lot of time, and I wasn't expecting much, but I was impressed with how sophisticated some of these applications were: able to send up images, do AI processing on them, and get different information back. The Decisionator was the winner, and of all the different applications, this is the one that felt like the most AR type of interaction, where this is kind of like the interface that you want: just point at two things, and you have this kind of gesture-based triggering. So the Decisionator was at the top. CartDB, covered in a previous conversation, was a shopping application where you could look at a barcode, it would scan it, and then you're able to get additional metadata pulled in. Just the idea that you could start to use these machine-readable codes and pull in additional context or metadata, like whether some of these foods fit within your diet, or how healthy a food is. So it's that kind of idea where you scan something and get more metadata about that object. The third-place prize went to Fireside Tales, which was also featured a couple of episodes ago. You're in a social AR application with a campfire in the middle, and people take turns sharing a story or prompt that gets sent to a generative AI. An image of that gets sent back, and then you go around the circle and have people either build upon that story or tell their own. At the end, you get a recounting of all the different images that were created as part of this collaborative storytelling experience. Each time you said something, it was saved in a database, but it also triggered a call-and-response type of thing, where you say something and the other AR devices within that social AR experience are also able to see the same image that was generated (there's a sketch of this pattern after the rundown below). Spidgets was an application built around the idea of spatial widgets. There was a weather application, a meditation application, and also a kind of memory game. This one was mostly storing the weather information and where you're at in the database, but again, the idea was that you would have these spatial widgets.
Rootly was about pointing at different objects within the context of a room and adding metadata around them: putting down location markers and being able to tag things. So you're essentially able to add spatial tags, save different routes, and potentially have these spatial annotations around the world. Total Recall was a facial recognition application that would let you look at somebody's face and see whether it was already saved in a database of faces you had saved before, or you could add someone's face and their name. So it was basically doing facial recognition with metadata that was manually added, not necessarily leveraging any existing facial recognition databases or anything like that. Boosters Jetpack Tours was a demo of guided tours. It was basically an AI assistant, very similar to what Niantic Spatial was doing with their Project Jade. In this case, you were looking at photos of monuments on a laptop, because they had imported a number of major monuments into a database and added a bunch of factoids, having AI generate those, so there's an AI assistant that's able to reference these different monuments and share different information around them. It was a very well-designed character and interaction, where you're looking at different laptop screens and getting a little bit more information about these different locations from this virtual being. Super AR Board was aspiring to take snapshots of whiteboards and then eventually use artificial intelligence, like optical character recognition, to detect the handwriting and translate those whiteboard notes into text notes. They didn't get to the point of actually doing the optical character recognition, so it was essentially just able to store a series of images of a whiteboard session. Then there were a number of different multiplayer experiences. There was Party in a Box, which was this idea that you'd be at a concert with these AR glasses, and you'd be able to step into a circle with other people and have a shared augmented reality experience. They also did live lyric captioning, so you could see a song that was playing and then see the lyrics of what was being sung. And then the final one was called Act One, which was a two-person experience where a generative AI prompt would set up a different scene. It was being built for actors to play out these different improv scenes, but as you're playing them out, it would also pull in different objects, so you would have objects to play with as you were doing the improv scene.

So those are the ten different applications that were built, and then I had a chance to talk to the top three. I wish I had more time to talk to more of the teams and go into all the different things, but it was a very quick day, without a lot of time after the whole development process. So I ended up doing most of my interviews on the last day there during Lens Fest, and managed to talk to the top three teams. So in this conversation, I had a chance to talk to the top winning team about some of their design process in creating the Decisionator. That's what we're covering on today's episode of the Voices of VR podcast.
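Here's the sketch of that Fireside Tales call-and-response pattern mentioned above. Supabase's realtime channels are one natural way to broadcast a freshly generated image to every headset in a session; the channel, event, table, and helper names below are hypothetical, not the team's actual code.

```typescript
// Minimal sketch of a shared-story session using supabase-js realtime
// broadcast: one person's prompt produces an image, and every device
// subscribed to the channel renders it.
import { createClient } from "@supabase/supabase-js";

const SUPABASE_URL = "https://<project>.supabase.co"; // placeholder
const SUPABASE_ANON_KEY = "<anon-key>"; // placeholder

const supabase = createClient(SUPABASE_URL, SUPABASE_ANON_KEY);
const campfire = supabase.channel("campfire-session-42");

// Hypothetical render helper standing in for the lens's image display.
function showImageInLens(imageUrl: string) {
  console.log("render in lens:", imageUrl);
}

// Every participant listens for newly generated story images.
campfire
  .on("broadcast", { event: "story-image" }, ({ payload }) => {
    showImageInLens(payload.imageUrl);
  })
  .subscribe();

// The current storyteller saves the turn (for the end-of-session recap)
// and publishes the image URL returned by the generative-AI call.
async function shareTurn(prompt: string, imageUrl: string) {
  await supabase.from("story_turns").insert({ prompt, image_url: imageUrl });
  await campfire.send({
    type: "broadcast",
    event: "story-image",
    payload: { imageUrl },
  });
}
```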
So this interview with Candice, Marcin, Volodymyr, and Inna happened on Thursday, October 16th, 2025 at the Snap developer conference at Snap headquarters in Santa Monica, California. So with that, let's go ahead and dive right in.
[00:09:11.181] Candice Branchereau: So I'm Candice, and I work with the Flat Pixel team. It's been like three years now. I work on lenses and also kind of video games on the side. And that's pretty much it, I don't know.
[00:09:27.312] Marcin Polakowski: So I'm Marcin. I'm a creative director at Flat Pixel, so I work with Candice. And we've been making XR and AR stuff since 2016, basically.
[00:09:39.465] Volodymyr Kurbatov: Hello, I am Volodymyr, and I just like to build good stuff, whatever it is.
[00:09:43.990] Inna Horobchuk: Hi, my name is Inna. I'm a creative technologist, and I come from an architecture background.
[00:09:52.038] Kent Bye: Great. Maybe you could each give a bit more context as to your background and your journey into working with XR.
[00:09:57.671] Candice Branchereau: So at first I needed to choose an internship, and I was interested in XR and AR, and I found Flat Pixel through previous students. So I asked them if they wanted me, and they accepted. So then I was there, I continued with another internship the next year, and then I was with the team.
[00:10:22.742] Marcin Polakowski: So yeah, I come from more of a video game background. I was making a lot of mobile games and stuff like that before. And then we started Flat Pixel in 2016 with that in mind, actually. Then we slowly drifted towards AR because it was exciting; there were things happening there, and the mobile games market was changing, becoming more of a slot-machine-based type of thing, so the actual game design around it wasn't that interesting. There was just a lot more to do in AR that hadn't been done before. It was a sort of new world to explore, with everything still to define. So yeah, it was just more exciting, like unexplored territory.
[00:10:59.611] Volodymyr Kurbatov: I have an architecture degree, and then I somehow gradually shifted to UI/UX and product design, or web design, as it was called back in the day. But then when the first Oculus Rift came out, I saw that this is it, this is so cool. It's like having these interactions in space; I wanted to build something for it, so I started just playing, and somehow I'm still playing with it every day. I don't know how lucky I got. I'm just lucky to try all the hardware, all the interactions. It feels more like play, not even like work.
[00:11:34.554] Inna Horobchuk: As I said, I started as an architect; we both started as architects. Then I shifted to interior design, and then more to 3D modeling. The idea was to combine interior design with VR, and when AR appeared, then with AR. Then AR took a bigger part than interior design, and I shifted totally to augmented reality, because it's actually more fun for me. You get feedback faster, it's more inspiring, and you can create stuff that didn't exist before. I mean, interior design looks boring to me now, but I'm still interested in finding some combination of AR and design. But AR has taken the biggest part of my creative life now.
[00:12:27.830] Kent Bye: Awesome. Well, congratulations on winning the top prize of the Lensathon, this 25-hour hackathon where you were able to build an application, given basically a prompt to try to use the new features of Supabase. I'm curious to hear a bit about the deliberation process of trying to figure out what you wanted to do, because you created the Decisionator, where you're essentially pointing at two different things and asking the AI to make a decision. So I'm just curious to hear how you came to this idea of the Decisionator.
[00:12:59.609] Volodymyr Kurbatov: It was just easy.
[00:13:03.793] Candice Branchereau: Yeah, we were thinking about how people are using AI now, and we tried to make it more simple and accessible with AR technologies, so we tried to merge them. We were also thinking about mental health and mindfulness. We're surrounded by a lot of stuff now, and we have to make these micro-decisions that drain us, so by the end of the day we have no willpower left to choose the important stuff. So we discovered the theme of decision fatigue, and we tried to make the AI help us with the minor choices, the ones that aren't that important. But the AI can rely on data that we provide, so it's more like, I don't know, an assistant, a background friend that just guides you. Yeah, that's the official reason.
[00:13:59.355] Marcin Polakowski: That's the one for the judges. Yeah, that's the one for the judges. But I think originally how we landed on the idea was more of a social commentary, kind of like a Black Mirror episode, like I mentioned up on stage. The idea was that eventually we're just going to delegate everything to AI. And what's our role? What's the role going to be left for the human being behind it? So never make a decision in your life again; everything will be done with AI. That was kind of the idea, and we all liked it, and then we had to kind of dress it up to make it, yeah. But I think it was always a little bit tongue in cheek. I think people sensed that, and that's what kept it fun, actually. And that's why I think that if the thing has a future, we should keep it fun. Because if we do get really technical about how it creates a profile and all this stuff, that could actually become a little scary and a little weird.
[00:14:49.671] Candice Branchereau: I mean, it's quite edgy, but we have to think about this right now. So maybe we need to show it in a funny way, but make people understand that it's edgy and that we're already moving in this direction. So we're just drawing attention to it. We're actually living this life now, so it's not such a futuristic thing. It's actually a thing that works, and it's actually making decisions for us right now. So it's just maybe pointing society's attention to what is going on. Why not us? Yeah. What's that?
[00:15:17.790] Volodymyr Kurbatov: So someone had to build this? Why not us?
[00:15:21.633] Kent Bye: So maybe you could elaborate a little bit, because the prompt was to use Supabase. Maybe you could just talk a bit about how Supabase is being used in this application, and how it's being wired through.
[00:15:33.740] Candice Branchereau: So the idea is to make the AI create its own prompts, with the data of the user and contextual things, like the weather or the location. And then we store prompts in the database so that later we don't have to really process the thing again. There's also the fact that we use the previous choices of the user, so that we get some context about what they often ask about, like food. Maybe they're at a desk, so we get the location of the player. So we have this profile that is growing with the database.
[00:16:13.715] Kent Bye: OK. So it sounds like you're able to have some contextual history that is stored for each person. Because there were four different pairs of glasses on your team, and each of the glasses may have had a different history of choices.
[00:16:25.279] Marcin Polakowski: Yeah, yeah. And they didn't make the same choices, depending on who was using the glasses. So that was pretty interesting. Just out of playing with it maybe 12 times, making 12 choices, it already knows me better. If I put on his glasses, it would be completely different.
[00:16:42.084] Volodymyr Kurbatov: And that's why Supabase was useful in this case. Practically, you just feed some data into the AI, and typically you would see only the replies of the AI, but from Supabase we can actually go and see what the AI has prepared for itself, and all of the nuances of what it assumed about you based on the previous data, your personal data, your context. You can read all of this pre-prompting. It's like a huge text that the AI created for itself, and you can just go there and maybe tweak it, or at least see it. Hopefully Snap will make it possible to save it privately, because right now the table is open. So of course you're not able to release this lens publicly yet.
[00:17:22.688] Candice Branchereau: It's a concept. It's just a working concept. And it's cool because you can also adjust it a little bit. A user may have their own preferences: maybe the person wants to eat healthier, so the AI would choose healthier food for them. Or any other decision, like a darker t-shirt or so on, because they have a personal style which is not reflected in the information in their Snapchat account. And it once more highlights that smart glasses are a more personal item; it's not something everyone uses. That's why in Spectacles you have this distance which is adjusted for your eyes, so it's like a personal accessory. You're not sharing it, yeah. It's like your own thing.
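For readers curious what that growing-profile flow might look like in code, here is a rough TypeScript sketch using supabase-js: pull the wearer's recent choices out of Postgres, fold them into the pre-prompt the AI writes for itself, and store that prompt back in a table where it can be inspected or tweaked. The table and column names are hypothetical, not the team's actual schema.

```typescript
// Sketch of assembling a personalized pre-prompt from stored choice history.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient("https://<project>.supabase.co", "<anon-key>");

async function buildProfilePrompt(userId: string, weather: string, location: string) {
  // Recent decisions for this wearer; even a dozen noticeably personalize results.
  const { data: history } = await supabase
    .from("choices")
    .select("option_a, option_b, decision")
    .eq("user_id", userId)
    .order("created_at", { ascending: false })
    .limit(12);

  const preamble =
    `Context: weather=${weather}, location=${location}. Past decisions: ` +
    (history ?? [])
      .map((h) => `picked "${h.decision}" between "${h.option_a}" and "${h.option_b}"`)
      .join("; ");

  // Persist the pre-prompt; as noted above, the table is currently open,
  // so anyone could read back what the AI assumed about the wearer.
  await supabase.from("prompts").insert({ user_id: userId, preamble });
  return preamble;
}
```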
[00:18:12.022] Kent Bye: It's like a private interface for people to have conversations with AI wherever they are. What I thought was really effective was that it felt very much like an AR app where you could just point at things, and it used computer vision to detect what you were pointing at. The hand gestures and the pointing seemed like a nice trigger to actually activate the two different objects being compared to each other. So of all the different experiences in the hackathon, it felt like the most AR-friendly user interface, where you would point and then see some sort of feedback that it was actually working. Because there were a number of apps where you would try to do something, but you weren't sure if it was working. But just having the meme where there's a woman thinking about math with all the math equations coming up was at least great feedback, like, oh, this is actually working. And then you get to see the choice, but then you also get to see the explanation, which I think is another part that was interesting. Yeah.
[00:19:10.913] Candice Branchereau: When we presented it, maybe the sound wasn't that loud, because we didn't want to traumatize everyone's ears; it was quite noisy already. It has feedback when you're pointing; it has some magic sound. While it's thinking, it has different sounds, not just the image. But it also gives you an explanation, because text in AR is not really a usable thing. So, of course, you have text-to-audio that reads it for you. And we tried to keep the explanation as short as possible, but still understandable and fun.
[00:19:40.065] Volodymyr Kurbatov: It's very insightful, by the way. Every time, there was such unexpected reasoning. Like, you have two equal paper cups of coffee, and you ask which is better. I mean, you know that they're just the same, but the AI would reason that it would choose the one on the left, because it's slightly closer. Like, one inch closer. For me, it's not a biggie, but between almost equal items, it is a big deal.
[00:20:02.022] Marcin Polakowski: A preference. Yeah, that's what's really fun, that type of thing. Trying to trick it. Not trick it, but really just getting a set of fun reasons, like, okay, a Post-it note or a pen, which one would you choose? And for me, at least, it chose the Post-it note, because that's a more rare item. It's slightly more rare, and you can do more with it. So grab the Post-it note while you can; a pen you can find anywhere. So anyway, it's just kind of funny.
[00:20:26.963] Kent Bye: So I think it also lent itself to being the kind of experience where people may want to record a decision and then share it out, as in, hey, look at what AI is telling me I should do. I'm sure there are lots of small decisions where you could actually use it practically, but it felt like more of a spectacle to see, oh, here's a crazy decision that the AI made, and then sharing that. It feels like something that people could
[00:20:53.064] Marcin Polakowski: have other commentary around. So it does feel like this kind of larger social commentary piece. It's actually funny: you can even point at two people, and it finds a preference between two people. So that's kind of the creepy part. For example, we pointed at two different people, and it took the person on the right, because, well, they had slightly better posture. So that was really funny. So yeah, I think that type of thing is a little creepy, but it's something that you could maybe take a snapshot of, like, oh, look at that. Anyway.
[00:21:23.093] Candice Branchereau: If you're choosing between things, there's always some right answer, some yes or no. So there always exists some better option for you, and it's good to know that. And yeah, you actually can share it sometimes, like, look, it recommended this kind of food for me because it thinks it's healthier for me.
[00:21:38.707] Volodymyr Kurbatov: Or more fitting for you, because it knows all your background. It's not just a random choice. It's not just salad or pizza; it's what kind of pizza you prefer, what kind of pizza you had the AI choose for you over the last months, which you preferred more, and so on, and what time of day it is. For example, maybe in the evening it would prefer pizza for you, because you have more pizzas in the evening, and maybe coffee in the morning. Again, we feed in all of the contextual data as well.
[00:22:04.764] Candice Branchereau: The personalization means the AI will also make the decision based on, for example, you having an allergy to tomatoes. It would say, you know, it's better to choose the salad, because you have the allergy and there are no tomatoes in it. It can also analyze and find that reasoning for you. So this is also the coolest thing about the personalization of the lens.
[00:22:21.940] Kent Bye: Yeah, I feel like there are a lot of decisions being made for us by these algorithms, and I feel like this is a type of experience that just makes that a little bit more clear.
[00:22:29.682] Marcin Polakowski: Yeah, yeah, yeah. Well, that was the idea, just to make that super obvious. Remove the responsibility.
[00:22:35.524] Candice Branchereau: Yeah. Take off the responsibility. For your life, yeah. For your choice, for your life.
[00:22:42.014] Kent Bye: Yeah, and I'm curious to hear, as you were working on this project, what kinds of things you were noticing in terms of feedback or insights from the process of working with the Specs.
[00:22:53.012] Volodymyr Kurbatov: Compared with the previous time: last year, we got the Spectacles practically the same day we started the hackathon. It was hectic; it was difficult to build something, everything was new for us, and everything worked not so well. But this time, I think hardware-wise and software-wise it was quite good. You got documentation, even for Supabase. I was worried, because I had never tried to use it, how would I figure it out? But then Alessio shared the video tutorial, and in 20 minutes, thank you Alessio, I knew what it was about. And then mentors shared the documentation, so I could just copy-paste some stuff, and it just worked.
[00:23:27.526] Candice Branchereau: We focused more on the idea, because the technology wasn't new for us, so it was easier. You don't need to think of an idea and then try to find a technical approach to the Spectacles; you just need a solution for your idea. So maybe it was a little bit smoother, easier; we trusted the process, and it just happened.
[00:23:49.831] Kent Bye: And it sounds like a number of you have had previous experience with Spectacles. I'm just curious what kind of other apps or experiences you've been able to create for the platform.
[00:23:57.461] Candice Branchereau: So it was in collaboration with Snap's AR Studio in Paris. We made some prior projects, and there's also a project that we will launch soon. It's called Task Force, and it's about asking AI to tell us what tasks we can do in our space. It's anchored in space, so we can come back a day later and see, all right, I forgot I need to make my bed.
[00:24:26.347] Marcin Polakowski: It's basically a home management app that gamifies the tasks in your house, making them a kind of family experience, a kind of family goal to get them all done, but also kind of competitive. So when you complete a task, you punch out of the task and your points rank up, and then you have a kind of competition. So it creates some conflict in the family, but it also keeps track of who's doing what and who's kind of slacking off. So that was the idea. For the last one, I helped a bit to build Star Map; maybe you can share more.
[00:24:53.843] Candice Branchereau: For Spectacles, yeah. Now it's on the new Spectacles. We built a star map. We actually rebuilt it from the older Spectacles; we thought that we could just rebuild it, but eventually we built it from scratch. We made it smarter and filled it with more information. There are around a thousand objects you can see in the sky by pointing. We're using a lot of technology, of course: the hand tracking, the interaction with the objects. We also put some tutorials in there, and we also made a spectator view. So you can learn something like in an observatory or in a book, but we make it alive: it relies on your location and puts the stars in the same position as where you are. But you can also put the spectator mode on and go outside at sunset, for example, when you can see the first stars, and the Spectacles will connect the real stars with lines and show you what the constellation is. Nothing more. I mean, you don't need to set up anything. It automatically uses your location, puts the whole celestial sphere in the right position, and it just merges with the actual stars in context. So that's the magic we did recently. You just put on the glasses, you look around, and you see information about the stars.
[00:26:11.686] Volodymyr Kurbatov: That's our most complex lens so far. It has a lot of stuff; I think it's already at the limit of the Spectacles. And it was released, I think, yesterday or the day before yesterday on Spectacles. So we would love to hear feedback from people, because we are so happy that it's finally released.
[00:26:27.103] Kent Bye: Nice. Awesome. I'll have to check it out. I think when I tried to launch it, it wasn't launching. I don't know if it requires internet.
[00:26:32.829] Candice Branchereau: No. Oh, OK. I think it requires location. Sometimes it may need you to be a little bit out in the open or something like this, because it requires location. But if it doesn't work because you're in an elevator, you can manually set up the compass. When it's doing everything automatically, though, it should be working. I've been testing it lately during the sunset, and it's magical, because you can see all the stars, and the sun is also aligned with the actual stars, and it gives this effect. I mean, we have a very simple sphere for the sun, but when the actual sun is in the same position, it gives you all this magic of combining science with the real natural beauty of the sunset, right out of the gate.
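For context on that "celestial sphere in the right position" step: the standard way a star map places a star is to convert its catalog coordinates (right ascension and declination), plus the wearer's location and the current time, into altitude and azimuth. Here is a rough TypeScript sketch of that conversion using the usual sidereal-time formulas; it is illustrative only, not Flat Pixel's actual code.

```typescript
// Convert a star's equatorial coordinates to horizontal (alt/az) ones
// for a given observer location and time.
const DEG = Math.PI / 180;

function altAz(raDeg: number, decDeg: number, latDeg: number, lonDeg: number, date: Date) {
  // Days since the J2000.0 epoch (JD 2451545.0).
  const d = date.getTime() / 86400000 + 2440587.5 - 2451545.0;
  // Greenwich mean sidereal time, then local sidereal time (east longitude positive).
  const gmst = (280.46061837 + 360.98564736629 * d) % 360;
  const lst = ((gmst + lonDeg) % 360 + 360) % 360;
  // Hour angle of the star.
  const H = (((lst - raDeg) % 360 + 360) % 360) * DEG;

  const lat = latDeg * DEG;
  const dec = decDeg * DEG;
  const sinAlt = Math.sin(dec) * Math.sin(lat) + Math.cos(dec) * Math.cos(lat) * Math.cos(H);
  const alt = Math.asin(sinAlt);
  // Azimuth measured from north, turning eastward; the clamp guards rounding.
  const cosAz = (Math.sin(dec) - sinAlt * Math.sin(lat)) / (Math.cos(alt) * Math.cos(lat));
  let az = Math.acos(Math.min(1, Math.max(-1, cosAz)));
  if (Math.sin(H) > 0) az = 2 * Math.PI - az;
  return { altitudeDeg: alt / DEG, azimuthDeg: az / DEG };
}

// Example: Vega (RA 279.23 deg, Dec +38.78 deg) as seen from Santa Monica now.
console.log(altAz(279.23, 38.78, 34.02, -118.49, new Date()));
```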
[00:27:14.795] Kent Bye: Awesome. And finally, what do you each think is the ultimate potential of augmented reality with AI, with these head-mounted displays and AR glasses? What might that potential be, where do you think it might go in the future, and what might it enable?
[00:27:30.712] Marcin Polakowski: Well, basically, I don't think it could go anywhere other than AR glasses in the future. I don't think there's any other option. We're going to get rid of every screen in your life, and you're just going to have one pair of glasses. So you're not going to have a TV anymore; you're not going to have any of this. It's going to reduce the amount of hardware even more. We've been reducing the hardware in our offices every year, and this is going to be just one thing where we do everything. So yeah, for that, it's great. And, I don't know, it's something that just enhances your world. What's great about it is that it's not like VR or anything like that; it's really your world, and it helps you reintegrate into the world, maybe a little bit. Like you have those old photos of people reading their newspapers in the metro or something, and then you have the equivalent of everybody staring at their phone. And then you'll have the AR glasses, and they'll be looking up. Finally, they'll be looking up and maybe looking around. Also, maybe just doing crazy stuff, and everybody's going to be pinching in the air. Yeah, pinching in the air, pinching somebody's nose or something. But yeah, maybe it finally gets them to look up a little bit and just see how fascinating the world actually is. So, yeah.
[00:28:41.445] Candice Branchereau: Be aware of your surroundings. I absolutely agree. That's the idea: to take people's eyes off the phones and get them to look around, look at the stars, look at the people around them. So the information we have in AR that's reflected in the glasses shouldn't be overstuffed with advertising and so on. We're trying to cut that; we're trying to make it seamlessly integrated into our lives. So it just helps us, like some superpower. It helps us to know more, to understand people who speak a language you do not know, or to understand the signs on the street that we don't know when we're traveling. Like, we're going to Japan, and it will help us to understand and to integrate. So it's just like a superpower for me. And you can see better; maybe in the future we can zoom in with the Spectacles. We were thinking of trying that too for the Lensathon. So yeah, AI with AR, like someone said on the stage, it's like a love story. It's something where they support each other. They were created for each other.
[00:29:47.130] Volodymyr Kurbatov: We just talked about the fact, it's so difficult for me to pronounce it every time, that AR is the UI for AI. We just talked about it, and it makes so much sense.
[00:29:57.389] Candice Branchereau: All the inputs create the information about the context. So yeah, they're just made for each other.
[00:30:07.932] Kent Bye: Any thoughts about where the future might be heading?
[00:30:11.173] Candice Branchereau: I have no idea. I don't know what to answer.
[00:30:16.395] Kent Bye: Anything that you want to do in the context of XR, AR, VR, any experiences you want to have?
[00:30:22.245] Candice Branchereau: I think I'm in love with the idea of bringing back old stuff from museums, like old statues and so on. And I think we can make a lot of great stuff with museums.
[00:30:36.324] Kent Bye: Awesome. Any final thoughts you'd like to share with the broader XR community?
[00:30:39.497] Candice Branchereau: Thank you for inviting us to the podcast. Thank you, we really enjoyed it. I mean, we had to take vacation, but it's definitely worth it. We connected with people, we reconnected with our friends, and that's awesome. The community, the Snapchat community, is just awesome; that's what keeps us coming here. It takes like a 12-hour flight for all of us, but it's definitely worth it. So we're super happy. Thank you for having us.
[00:31:08.123] Marcin Polakowski: Yeah, I mean, my final thought is just: thank you to the judges, because they chose our game and had the same sense of humor. And I mean, yeah, we honestly didn't even think that we would ever get onto the finalists list. So to be number one is pretty crazy. So.
[00:31:29.406] Kent Bye: Awesome. Well, the judges did point and make a decision. Like, should we have used the app to decide who's going to win? No. And they'd have to use our app and point it at us. Yeah, that's actually what happened. No, that's not what happened. Anyway, thanks again for joining me here on the podcast to help break down a little bit of the design process of the Decisionator and some of the larger social commentary that it's making. In terms of the experience, it felt very well suited to an AR app to be able to just point at things and get contextual information about what's happening. It just felt like an intuitive interface that I haven't seen too much of, and it felt like a good integration of all the different technologies. So yeah, congratulations again on the win, and thanks again for joining me here on the podcast to help break it all down.
[00:32:08.860] Marcin Polakowski: Thank you. Thank you. Thank you. Thank you. Thank you.
[00:32:12.008] Kent Bye: Thanks again for listening to this episode of the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.

