#1675: 2nd Place Spectacles Lensathon Team: CartDB Barcode-Scanning Nutrition App

At Snap's Lensfest developer conference, I did an interview with the 2nd place team in the Snap Spectacles Lensathon, CartDB, including Guillaume Dagens, Nigel Hartman, and Uttam Grandhi (the fourth team member, Nicholas Ross, had some prior commitments). See more context in the rough transcript below.

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.458] Kent Bye: The Voices of VR podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR podcast. It's a podcast that looks at the structures and forms of immersive storytelling and the future of spatial computing. You can support the podcast at patreon.com slash voicesofvr. So continuing my coverage of Snap Lens Fest 2025, today's episode is with the second place prize winner of the Lensathon, the Spectacles hackathon that happened for 25 hours in the couple of days leading up to the Lens Fest. So this experience was called CartDB. And what the experience was, you could look at a barcode, it would scan it, and then it would pull up nutritional information, other metadata, and kind of like more of a shopping app experience. They had a number of different products during the demo, which were curated down, and they had specifically updated their own information on a small subsection of products that were available. This is not something where you could just take any product, because they would have to get licenses for the databases. But the whole idea is using barcodes that are on these objects, and how can you start to pull in metadata from these objects. And Snap did bring me down to cover both the developer conference, the Lens Fest, all the different announcements, as well as to participate as a judge in the Lensathon. So after all the 10 different teams had a chance to develop all their products, I was a part of the team that went through and saw all the different demos and was involved in a little bit of the process. I wasn't involved in the final judging, though. That was a separate team. But I wanted to see all 10 of the different demos. And in the next episode, #1676, I'll have a little bit more comments on some of the other types of applications that were developed. Just for context, the prompt that the developers were given was that they wanted the developers to develop a Spectacles lens using one or more of the new capabilities that had just been announced with Snap Cloud, like the Supabase integration. And so being able to do different database integrations, also real-time multiplayer types of experiences, and then also the use of some types of edge functions. And so in this case, the team ended up offloading their computer vision processing into a Docker container, basically having this wired-up function to send that information out into the cloud and then come back with the metadata that was stored in their database. So we're covering all that and more on today's episode of the Voices of VR podcast. So this interview with Guillaume, Nigel, and Uttam happened on Thursday, October 16th, 2025 at the Snap Developer Conference in Santa Monica, California. So with that, let's go ahead and dive right in.

[00:02:39.848] Guillaume Dagens: I'm Guillaume Dagens. I come from France and I create psychedelic lenses for spectacles.

[00:02:46.849] Nigel Hartman: I'm Nigel from Germany and I'm also creating lenses for the spectacles and always try to push the limits of what's possible on platforms like that.

[00:02:57.582] Uttam Grandhi: Hi, my name is Uttam. I'm from Delaware. I build mixed reality applications for medical education.

[00:03:05.672] Kent Bye: Right. And you had one more person from your team. Maybe you can also give a little bit of an introduction for who is not here.

[00:03:11.417] Nigel Hartman: Yes. So the fourth member is Nicholas Ross from LA here. And he is also building crazy stuff. And he was like the heart of our team, I would say. It was directly clicking, and we all knew that we were working great together. Yeah.

[00:03:31.775] Kent Bye: Nice. So we're kind of following on from the hackathon. And yeah, I always like to get a little bit more context for each of the different folks I'm talking with. And so I'm just curious if you could each give a bit more context as to your background and your journey into working in XR.

[00:03:45.543] Guillaume Dagens: Yeah, so I come from a 3D artist background. I was creating 3D models and selling them online. But it was not the end of the world for me. I was looking for something more. And I found that in the Snap ecosystem.

[00:04:00.390] Nigel Hartman: Yeah, for me, I'm a software engineer. I'm coming fully from the tech side. I was working in telecommunications before and also in biology-related areas like bioinformatics, and from there I was going into AI. Yeah, I just kind of fell into XR because there's an opportunity to combine all the different experiences I already had and combine it in a nice way.

[00:04:28.263] Uttam Grandhi: My background is a little bit all over the place. I went to school to study mechanical engineering and physics. And then I studied character animation at Vancouver Film School. And then I did this wonderful program called ITP, which is at NYU's Tisch School of the Arts. And that really opened up my world. And I started doing things in the interactive space, mixing digital and physical. You know, during my time at ITP, I was experimenting with some stuff. I ventured into XR when I created this application called PlayGummy, which is an augmented reality origami application. So that is sort of like, you know, a culmination of all my diverse backgrounds, my interest in origami, my interest in mathematics and geometry, and then mixed reality. And that's how it all started, like, you know, originated, my journey into XR, and that was like my first project. And then during the pandemic, you know, I did this, you know, very impactful work for the College of Dentistry at NYU, where I created this local anesthesia simulation module for the oral and maxillofacial surgery department. And that was a big hit. It was used by like 1,200 students. It was made part of the curriculum. There are more than a dozen universities in the US that are currently licensing that product.

[00:05:34.810] Kent Bye: Nice. And so I'd love to hear a bit more context for how each of you started to encounter the Snap Spectacles, and then if any of the other previous projects that you've worked on. And yeah, just to hear a little bit more around that journey and some of the different ways that you've started to already experiment with the Snap Spectacles.

[00:05:52.136] Guillaume Dagens: So for me, I was in the hospital because I had been hospitalized after a motorcycle accident, and I found the Spectacles website, the old Spectacles website, the previous generation. And I was immediately conquered. I wanted to have them. So I contacted them and started to build some lenses for them to recognize me and maybe send them to me, but they did not, even though I sent them a lot of emails along the year. But when they announced the new Spectacles on the subscription last year, they sent me an invite, and so I subscribed, which was kind of a leap of faith for me, but clearly it paid off. And right now I'm very happy to be here and to continue building the future of AR.

[00:06:39.872] Kent Bye: And you have a really amazing shader program. I don't know if you want to say anything around that in terms of experimenting. What's it like to be able to scan a room with the Snap Spectacles and start to put a number of different psychedelic trippy shaders on them?

[00:06:50.836] Guillaume Dagens: Yeah, so it's based on the world mesh, which is my favorite component of Lens Studio. The first thing I wanted to do when I got the Spectacles was to see if it's possible to 3D scan a mesh of the environment. And when I saw that it was the case, I started fiddling around with the pre-made shaders from the library. And once I began to understand the node-based shader system, I began to code some custom shaders with Claude and do some setups to project them on the world mesh. And yeah, it was very, very satisfying to be able to make this. And it's kind of a new thing. It's what makes me think that Spectacles is much more than just feature sets. It's clearly a platform where you can build anything and where everything becomes possible, even the craziest dreams of an application or delirium. Because this is kind of a delirium. It's kind of a trip. It's the first ever digital trip application created. And I want to push this concept even further. So stay tuned. And maybe in the future, it will become something bigger.

[00:08:02.629] Nigel Hartman: For me, the Spectacles are like the device I was always dreaming about, the device I always wanted to have. And for how I got there, or how I heard about it, I probably need to go back a bit. So as a kid, I always wanted to have devices like that, but they were not available, and I was dreaming about them. And my first contact with XR was, I guess, 2016 or '17 with the Oculus Quest, which I got back then. But as a developer, I didn't have all the opportunities I wanted to have there, because it was quite a new device and it didn't support XR, augmented reality. And that's why, after some time, it quickly went to the edge of the room, and I didn't use it for some years. And as soon as the Meta Quest 3 came out, I started to use it, and I really loved it. But for me, I really want to have some real-world programs, apps. I want to really change something and not just build games. And yeah, I went to the AWE in Vienna last year, and that's where I first heard about the Spectacles, and I directly had like a hundred project ideas in my head, and I directly registered for it and got accepted for the program. And since then, I just try to push the limits every day with the technology.

[00:09:32.074] Uttam Grandhi: My first experience with Spectacles actually started with the 2024 version. So, I mean, I saw some, you know, wonderful videos and experiments of the 2021 version, and I always, you know, jumped at the idea of having that device, but it was a very closed beta, and I was not part of the network then. But luckily, in 2022, I got an opportunity to participate in a Lens Fest. I mean, I was not part of the Lensathon, but I was so inspired by the community and the talent around me, I actually built a lens during, like, you know, my time on that day. Like, for a couple of hours, I built a quick lens. And I shared that with, you know, some of the Snap employees, and they really liked it. And they actually, you know, made me part of this network. And, you know, I got my hands on the current version of Spectacles last year at the Snap Partner Summit. And I just fell in love with the form factor and, you know, what you could do with a device like Spectacles. And I actually won a Spectacles hackathon in the city earlier this year in New York, where we got the first prize. We actually created a multiplayer experience where you can turn any cylindrical pillar-like object into, like, a gamified reverse Tetris, where blocks will be falling from the top, and then you, along with your four other friends, can just go around and, you know, try to kind of touch the blocks of one color. So each will be assigned a unique color, and you just try to, you know, touch them with your hands. It's a very fun sort of, like, you know, Twister slash, you know, like a brick-game Tetris, you know, game. And yeah, people really loved it. And I tried to, you know, keep building fun lenses. And this time around, I wanted to build, like, a game. But because of the prompt, you know, we ended up building something much more utilitarian, useful, but also we added our, you know, spin on it. We made it really fun. I mean, I can talk more about, you know, what we built when we get to that question. But yeah, that's how it went. Yeah.

[00:11:22.469] Kent Bye: Yeah, so the prompt was to look at some of the new capabilities of Supabase, like basically having a database backend with Postgres, but also some of the edge functions and other features that were being newly launched as a part of Snap Cloud. And so with that constraint, it did sort of lean a certain direction, to sort of do an experience that uses a database. It kind of leads to more utilities rather than game-based projects. Although you could do a game, but I think with the quick 25 hours there wasn't really much time to really, you know, fully flesh out something too complicated. But it was still really quite impressive to see what people managed to pull off within that time. So I'm very curious to hear where the kind of brainstorming process started and where you kind of ended up on CartDB, where you're essentially using the Snap Spectacles to scan a barcode and get lots of metadata around these different products from more of a shopping-based context. So where did the idea begin? And just talk a bit about the kind of brainstorming process.

[00:12:19.031] Guillaume Dagens: So at first we all had very different ideas and very different concepts that we wanted to pursue, and Nick Ross took the lead and made us write everything on the board, and we exchanged ideas. And I think it's Nigel that came up with the idea of scanning barcodes. Afterwards, I said that we should do a grocery list, because my mom, my mother, had said to me that I should make a grocery list app so she can do the groceries and stuff. But yeah, all the database and stuff, it's more them who have handled it. Me, I have worked more on the whole UI and UX parts of the application. But I think we can say a big thanks to Nick Ross, who helped us to kind of put all our differences aside and merge in the middle into a concept that satisfied all of us.

[00:13:19.179] Nigel Hartman: He was a really great project manager, always had the big picture of everything, and was always good at orchestrating all the to-dos. It was really great. It was basically about using this database, and we wanted to focus on that part. Where do you have more data than in the real world, in the supermarkets, or maybe even in other places, at warehouses or something like that? There's so much data available, which we can now, with the lens we created, just scan and understand and utilize in some way. I see there's really a big potential for that, not just in the supermarket, but like I said, also really in warehouses, being able to scan all these types of codes, yeah.

[00:14:11.417] Kent Bye: So just a quick follow-on, because you came up with the idea of the barcode scanning. So where did the idea come from that this would be a possibility? Were you aware that the data sets were available and would be easy to access? And just give a bit more context for how you came to that idea.

[00:14:27.768] Nigel Hartman: Yes, so basically barcodes or QR codes or any type of these codes are not meant for us humans, right? It's for machines to read and use this data for something. But because now we are wearing this machine in the form of glasses in front of our head, we are able to use these codes, we can read these codes, we can interact with these codes. So it's a really great way to interact with these codes, being able to use this data. And what we were doing was, there's a big provider available who is able to find these codes in images. It's called Scandit. And they don't directly provide an open API or server which we can use for that. They provide some tools for directly pushing it to devices like Android or something like that, but that was not possible on the Spectacles, for example. So what we did was, we took the Linux program they provide, and we wrote our own server in C to call this program, exposing it through an API that we provided in a Docker container. And it was quite good in performance. The only problem we had right now was the internet connection, which was adding some additional response time on top of it. But yeah, it was cool.
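
To make the container side of the pipeline a bit more concrete, here is a minimal sketch of the kind of service Nigel is describing: an HTTP endpoint that receives an image and shells out to a barcode-decoding tool. The team's actual server was written in C around Scandit's Linux tooling; this TypeScript (Node) version, the `/scan` route, and the `./decode-barcode` binary name are all illustrative placeholders, not their code.

```typescript
// Hypothetical sketch of the container-side scanning service.
// The real binary, flags, and output format would come from the scanning SDK being wrapped.
import { createServer } from "node:http";
import { execFile } from "node:child_process";
import { writeFile, unlink } from "node:fs/promises";
import { randomUUID } from "node:crypto";

const server = createServer((req, res) => {
  if (req.method !== "POST" || req.url !== "/scan") {
    res.writeHead(404).end();
    return;
  }
  // Collect the raw JPEG bytes posted by the lens (via the edge-function proxy).
  const chunks: Buffer[] = [];
  req.on("data", (chunk) => chunks.push(chunk));
  req.on("end", async () => {
    const tmpPath = `/tmp/${randomUUID()}.jpg`;
    await writeFile(tmpPath, Buffer.concat(chunks));
    // Shell out to the decoder; "./decode-barcode" is a stand-in for the real tool.
    execFile("./decode-barcode", [tmpPath], async (err, stdout) => {
      await unlink(tmpPath);
      const barcode = err ? null : stdout.trim() || null;
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ barcode }));
    });
  });
});

server.listen(8080);
```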

[00:15:52.333] Kent Bye: So you have a Docker container that has Linux running a C program that you had to run. Then on the glasses, you're just holding something up, and you're looking and you're seeing a barcode, and then is it just automatically sending up those images? Let's just walk through the pipeline for where the image is going from the camera, and then how does it get to this kind of web of all these different services and then come back with the data?

[00:16:17.611] Nigel Hartman: Yeah, so basically, first you choose your preferences, like your region or something like that, so that the app can guide you also in this type of decision-making.

[00:16:29.721] Kent Bye: Different dietary preferences that you have.

[00:16:31.702] Nigel Hartman: Yeah, exactly. And then it's just like, every time a new image is available, it's sending that to the server and getting the information back about whether it found a barcode on this image or not. And it's always after we got the response from the server, that's the point where we are able to send a new picture to the server. And then we were getting information about whether the barcode got scanned correctly. And if there was a barcode, we are searching for that barcode in our database of products we provided using Supabase. And yeah, we show that product to the user, show information about prices, descriptions, but also information about how healthy this product is, because we were thinking about how you can maybe tell more about it. For example, in Europe, there's like a score, the Nutri-Score, from A to E, for whether a product is healthy or not. And we were thinking that's a great way to also visualize that directly to the user, to give not just information about the price of what they're buying right now, but also about the background.
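
To make the client-side loop concrete, here is a device-agnostic sketch of the protocol Nigel describes: only one frame is in flight at a time, and a successful decode triggers a product lookup. The `captureFrameJpeg()` and `showProduct()` helpers, the proxy URL, and the `products` table shape are assumptions for illustration, not Spectacles or CartDB APIs; the Supabase REST call also assumes your row-level security and keys are set up for anonymous reads.

```typescript
// Sketch of the scan loop: capture -> send frame -> wait for decode -> look up product.
type Product = {
  barcode: string;
  name: string;
  price: number;
  nutriScore: "A" | "B" | "C" | "D" | "E";
  description: string;
};

declare function captureFrameJpeg(): Promise<ArrayBuffer>; // hypothetical camera helper
declare function showProduct(p: Product): void;            // hypothetical UI helper

const SCAN_PROXY_URL = "https://<project>.supabase.co/functions/v1/scan-proxy"; // assumed route
const SUPABASE_REST = "https://<project>.supabase.co/rest/v1";                   // PostgREST API

async function lookupProduct(barcode: string): Promise<Product | null> {
  // Assumes a "products" table keyed by barcode, readable with the anon key.
  const res = await fetch(`${SUPABASE_REST}/products?barcode=eq.${barcode}&select=*`, {
    headers: { apikey: "<anon-key>" },
  });
  const rows = (await res.json()) as Product[];
  return rows[0] ?? null;
}

async function scanLoop(): Promise<void> {
  while (true) {
    const frame = await captureFrameJpeg();
    // Only send the next frame after the server has answered for the previous one.
    const res = await fetch(SCAN_PROXY_URL, { method: "POST", body: frame });
    const { barcode } = (await res.json()) as { barcode: string | null };
    if (barcode) {
      const product = await lookupProduct(barcode);
      if (product) showProduct(product);
    }
  }
}
```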

[00:17:40.215] Kent Bye: And just one quick other follow-on. In terms of the Linux server, was that a part of Supabase as well? Or is that something else, another service that you had to use?

[00:17:50.481] Nigel Hartman: It's an additional service. So we had some trouble, because this toolkit was also provided to us for the first time, so we were not directly able to use the Internet module to call the server directly from Spectacles. So what we were doing was, we were using edge functions in Supabase, and we were using these edge functions basically as a proxy to call our Linux container, our Docker container in the cloud.
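
For reference, a Supabase edge function acting as this kind of proxy could look roughly like the sketch below. It is written against the Deno runtime that Supabase edge functions use; the `SCANNER_URL` environment variable and the simple pass-through behavior are assumptions for illustration, not the team's actual function.

```typescript
// Minimal edge-function proxy: forward the posted image to the scanning container
// and relay its JSON response back to the lens.
Deno.serve(async (req: Request): Promise<Response> => {
  if (req.method !== "POST") {
    return new Response("Method not allowed", { status: 405 });
  }
  const scannerUrl = Deno.env.get("SCANNER_URL"); // e.g. the container's /scan endpoint (assumed)
  if (!scannerUrl) {
    return new Response(JSON.stringify({ error: "scanner not configured" }), { status: 500 });
  }
  const upstream = await fetch(scannerUrl, {
    method: "POST",
    headers: { "Content-Type": "application/octet-stream" },
    body: await req.arrayBuffer(),
  });
  return new Response(await upstream.text(), {
    status: upstream.status,
    headers: { "Content-Type": "application/json" },
  });
});
```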

[00:18:17.799] Kent Bye: Okay. Okay. Yeah. So that helps to just get a bit of sense of the magic behind the scenes of everything. So thanks for that.

[00:18:24.963] Uttam Grandhi: Cool. So I would like to add a little bit more on the kind of information that we are showing when you scan the barcode. What we wanted to do is, like, we really wanted to kind of, like, you know, add more critical information that is not currently available in other food scanning apps that are out there. And when we were thinking more about it, you know, we thought it would be nice to show the food pairings, right? Like, what foods go well with this so that, you know, it's beneficial to you in terms of, like, the nutrient absorption. And also, you know, we provided something like meal time, which is, like, when is the best time to eat this product. So these things, coupled with the Nutri-Score and, you know, having the calories and other information, we thought it would make for a more informed choice for the user. And, you know, making things, like, much more sort of, like, you know, bland without any sort of, like, you know, spirit, we thought that might be boring, and, you know, it'll just be like any other food scanning app. So when we came up with the name, you know, CartDB, the way I said it, it sounded like Cardi B, right? So immediately I started to think about how we can present this in a way that, you know, it's fun and in a lighter tone. You know, it's not just, you know, educational, but it's also, you know, playfully pulling your leg, like, oh, hey, too many calories, or it's like 80% air and 20% regret, or something like that. So we used ChatGPT to come up with a bunch of lines which are, you know, fun. And then we used some kind of AI tool to kind of make them sound like the rapper Cardi B. But again, like, you know, it's something close. It's not an exact replica of her voice. I mean, I'm being very cautious not to, you know, get into any trouble. But yeah, it was just sort of a fun exploration for us to kind of, like, you know, present important information, but in a fun and playful way. Actually, this kind of approach dates back to one of the projects that I did during my time at ITP, where I actually used Valentine's cards to educate people about science. So I made cards on mathematics. I made sort of like a fractal pop-up book where, you know, it'll be like, oh, this is like my heart, and, you know, my love for you is like a fractal. It keeps on going forever. So it's a little bit cheesy, but I chose Valentine's messaging to communicate science. So this kind of, like, you know, tone or thing of, like, you know, communicating critical information in a much more fun and playful way has been in my, you know, playbook for quite some time. Yeah. Yeah.

[00:20:48.413] Kent Bye: Yeah, one of the user interface experiences of this app was that I found myself staring at a barcode. And usually when you scan barcodes, there's like a beep that says that it's like actually been scanned. But there wasn't any sort of beep or indication that it had successfully captured it. And so I found myself kind of like staring at it and waiting until there was some sort of like feedback of whether it was going to work or not. Sometimes it worked and sometimes it didn't. Right. Also, I'm... This is very good feedback.

[00:21:14.367] Guillaume Dagens: Thank you. Yeah, we will make sure that it makes a beep. And I have something to tell. Did you know that the nutritional score or the NutriScore is a French invention? So I brought with me this idea of the French NutriScore and I hope it's going to resonate and that you Americans are going to be able to tell in one click if a product is good for you or not. Yeah.

[00:21:38.963] Kent Bye: And this is something that is invented in France and is pretty widely distributed in France and in Europe and beyond?

[00:21:43.786] Guillaume Dagens: Yeah, I think so. I think it's also distributed in Europe now, yeah. But it comes from France, yeah. It's very useful when you're in the supermarket. And at a glance, you can tell if a product is good for you based on A, B, C, D, E, and the colors. And we could even imagine that we could do a Nutri-Score Z for the very unhealthy products, like the cigarettes and stuff. Yeah.

[00:22:09.784] Kent Bye: Not that it's always going to dissuade people, but at least the information. But at least it's funny. Yeah. Nice. And so as I was scanning it, I found that sometimes it would work and sometimes it wouldn't. And as I was talking to people afterwards, they were like, oh, well, I knew that the camera was on this side. And if I hold it up to a way that actually isn't intuitive to me, where you would actually have to hold it off to the side and optimize to the camera on the left, where that was probably part of the reason why I wasn't getting

[00:22:37.799] Guillaume Dagens: clear interactions but some of the people who work at snap like oh well i knew that the camera was here on my left side or right side and they would hold it up to that and i was like okay well that's not obvious for me as a new user coming in that i would know that this is one of the things i hope the most for the next spectacles is that they do like the iphone pro you know a bunch of cameras with different focal lengths so you could even zoom with your eyes and look for things that are very far away But yeah, the cameras currently they are made for the tracking system so basically you have to be very close to have the details and the focus onto a barcode but I'm pretty sure that in the next-gen Spectacles we would have a way better barcode scanning mechanics.

[00:23:25.509] Nigel Hartman: I think there's also a lot of potential to improve the scanning by tweaking some of the parameters in the container which is running right now. Yeah, it was just about time, I guess. But we were happy that it was working exactly on time. We just had 24 hours, two hours of sleep, maybe three. And so it was really only working maybe one hour before we needed to finish it. And we were quickly driving to the supermarket, trying to get one video in. So yeah, but definitely, I think in addition to the camera, it's definitely possible to change some of the parameters in the scanning itself.

[00:24:09.963] Kent Bye: Now, when I tried the demo, you had a number of different products that were there on the counter. Most of the ones that were there were, you know, in the system and working. Is this a type of system where you were pulling in databases such that you could literally scan any barcode that was out there? Or I guess I'm wondering if you were creating like a small set of just 10 entries in your database, or if this was something where you could literally pick up any product and scan it and have it work because it's pulling in all these other databases.

[00:24:39.804] Nigel Hartman: So actually we were building our own set of barcodes connected to information about them, because we wanted to have our own database. We can control the data, and we know what we want to show. But sure, it would be great to maybe integrate different APIs for that in the future, to be able to support all the different types of products which are out there directly. On the other side, if you, for example, had a grocery store which tries to engage users to buy things there, then you would have your own database of products anyway. So I guess it would even be realistic to have it like that right now.

[00:25:21.420] Guillaume Dagens: Yeah, at first I was thinking we could integrate the Open Food Facts database, which is a very, very huge database which could cover basically anything in any country. So yeah, I think that would be the perfect kind of database.
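
For a sense of what that integration could look like, here is a small hedged sketch of querying the public Open Food Facts API for a scanned barcode. The endpoint and field names follow the published API, but they should be verified against the current Open Food Facts documentation before building on them; nothing here is from the CartDB codebase.

```typescript
// Look up a scanned barcode in Open Food Facts and pull out a couple of useful fields.
type OffProduct = {
  product_name?: string;
  nutriscore_grade?: string; // "a" through "e" where available
};

async function fetchOpenFoodFacts(barcode: string): Promise<OffProduct | null> {
  const res = await fetch(`https://world.openfoodfacts.org/api/v2/product/${barcode}.json`);
  if (!res.ok) return null;
  const data = await res.json();
  // status === 1 means the product was found in the database.
  return data.status === 1 ? (data.product as OffProduct) : null;
}

// Usage: const info = await fetchOpenFoodFacts(scannedBarcode);
```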

[00:25:39.656] Kent Bye: Yeah, it seemed like this hackathon was encouraging the use of what I would normally associate with web applications, or things that are kind of using these services in a way that you could have on a mobile app or a web app, but this is all on the Spectacles. And so I'm just curious to hear any reflections on the affordances of having augmented reality, and these computers that we're wearing on our faces having quick access to these different types of applications that would normally require either a phone or a web application to gather this information. So having more contextual information so that you could perhaps quickly get this information in context and make a decision in the moment, based upon something that uses the affordances of computer vision and augmented reality. It feels like this is a type of use case that is very unique to AR, just because you already have the camera on your face. You could technically already use your phone, but that's a few extra additional steps, and if people are already wearing this, you could have a little bit more of a streamlined experience. It feels like it's kind of lowering the gap to having this contextual computing capability. So I'm just curious to hear some of your reflections on the implications of this type of trajectory.

[00:26:52.734] Guillaume Dagens: Well, that's what I told the guys. When we weren't sure if we would win a prize, I was like, guys, we can be confident because we have a killer app. It's the kind of application that makes you like a cyborg, you know? You're in the supermarket, you scan the barcodes with your glasses and you see the information directly into your retina, so directly into the brain. So yeah, I think this is kind of an avant-goût, a pre-taste of the future of humanity, the future of the augmented human.

[00:27:28.052] Nigel Hartman: Yeah, that's a good point, actually. I couldn't have said it better. Like, we are just able to do things right now, just like the glasses could be part of us. So I can't add anything. It's a great answer, right? Yeah.

[00:27:45.842] Uttam Grandhi: Yeah, I think speaking about the affordances of, you know, augmented reality in the format of, you know, Spectacles, I think there is a lot of scope to explore the depth aspect of our surroundings. And I think that is like a key differentiator between a technology like Spectacles and a standard phone or a web app, where, you know, information can only be presented in, like, you know, 2D. Even though, you know, you can have, like, you know, camera and, you know, depth kinds of experiences, they're not, like, true depth, right? It's just, like, 3D information rendered on a 2D screen. But here, you know, you could use the spatial environment around you to kind of, like, you know, render more contextual information. And, you know, even with the product itself, like, you know, most products come in a box, and it's very easy to kind of, like, you know, render multiple pieces of information on, like, you know, different sides of the box, and even maybe get, like, a sneak preview of what's inside the box using a technology like augmented reality, without even opening it, and you can actually see, you know, what is the actual size of the product. Like, I know in Japan there is a rule that, you know, you can't have an image of a product at a different size than what is actually in the box. So whatever image they printed is exactly the same size as the product. But in the U.S. there's no such rule, right? Like, they could just simply have, like, a giant piece of granola, and they can simply add, like, a fine print saying, enlarged to show texture. Come on, right? But with augmented reality you can peek through the box and then see what's inside, right? Mm-hmm.

[00:29:07.185] Kent Bye: Nice. And so where are you going to take this application next? Are you going to continue to develop it? Or do you feel like now that you've won the second place prize here at the Lensathon that you're going to take the prize money and kind of disperse and go about your merry ways? Or I'm just curious if you're going to continue to build this out.

[00:29:21.528] Guillaume Dagens: Actually, I love these guys. I'm so grateful for destiny, for life, for chance to have met them. And I think this is a dream team. I couldn't have found better friends, partners. And if these guys want to keep going on with this relationship, I'd be glad to continue building with them, to continue to refine this application, the user interface, the user experience. And maybe even build new projects together and work together and build stuff. Yeah, build the future.

[00:29:54.463] Nigel Hartman: Yeah, that's a great point. I mean, we just spent maybe two days together, and I would really call these guys here my friends already. It was really great to meet them. So I would definitely like to work together more. Whether it will be CartDB or maybe another product in the future, that doesn't matter. It's just, sometimes you feel like these are the right people to work with, and so you go this way.

[00:30:23.322] Guillaume Dagens: Some kind of alchemy.

[00:30:24.844] Nigel Hartman: Yeah, exactly.

[00:30:26.965] Guillaume Dagens: Also, Nicolas, who is not right here now, but here. Nicolas is a legend. Big respect to Nicolas. We do not forget. Nicolas, we think about you. Don't worry.

[00:30:39.013] Nigel Hartman: You are here with us. The only problem which would happen with this product is that we used the Scandit library, like we said, and it's not a public library, so we need to have a license for that. And the other additional challenge would be that this type of thing you probably want to do directly on device. And on Spectacles right now, that could be a challenge, not just because it was 24 hours now, but also because we don't get the really deep, low-level access to the hardware. We just basically proxy our expectations through TypeScript to the hardware. And for this type of thing, like really scanning these codes efficiently, we would need to have access to the hardware. Or this type of thing maybe needs to be provided by Snap itself. But probably they will start to support maybe QR codes first, because that's the most interesting for most of the use cases. Barcodes are always like a different type of thing. Yeah.

[00:31:46.594] Guillaume Dagens: That's true. The closed SDK can be a pain. For example, I created some custom locations, where you go with the Spectacles and scan a place, and I wanted to export the custom location mesh to Blender to block out a cleaner version of the room. But it wasn't possible, and there's a lot of limitations actually with the closed SDK. And I really hope in the future they open it up, and if they don't open it to the public, at least maybe they can open it to some devs like us. Trust us.

[00:32:21.710] Kent Bye: I'm sure that there's always going to be a thing around privacy and what are the privacy implications of taking scans and making them available for developers. And I think if you're making site-specific things, that's something that you're in complete control of. But yeah, I'm not speaking on behalf of Snap. But I know that that's part of the consideration, looking at their overall values of privacy and what may or may not be revealed to people. As we have these technologies that are scanning all these things, then what happens to that data is something that is of concern, to know that there's integrity around that. So that'd be the counter-argument for some of that.

[00:32:56.149] Guillaume Dagens: I get it, I get it, but I don't think like that. I think that all these scans should be decentralized and open to the public and that eventually we should have a full 3D scanned map of the entire Earth in a kind of decentralized network. I'm not a very technical kind of guy when we talk about database, privacy and all this stuff, but I'm a tech lover, so yeah, that's why I wanted to... have access to all the data and be able to do more stuff. Yeah.

[00:33:28.524] Kent Bye: Nice. And I don't know if you had any other comments in terms of any aspirations you have in the future.

[00:33:32.706] Uttam Grandhi: Sure, totally. I mean, I really love this team. Everyone has such a strong, you know, like, capability and skill in whatever they do. So definitely, if the team is interested in, you know, taking this to the next step, I already have, you know, some ideas on the applications of this technology. I know, like, we presented this as, like, you know, more for, like, you know, your grocery cart or for health-related purposes. But I think there's other opportunities, like what Nigel mentioned earlier about warehousing. And also, I mean, I just thought about it, that we could actually revolutionize the way books are consumed in this new century with the advent of mixed reality technologies. Like, you could actually use a barcode from, say, a library book and actually show summaries, or, sort of, like, you know, books written or, you know, films that were made that were inspired by this book. So there's actually a lot of, you know, opportunity for us to, you know, take this technology to the next level.

[00:34:26.792] Kent Bye: Great. And finally, what do you each think is the ultimate potential for XR, all these head-worn augmented reality headsets and AI thrown into the mix as well? And what all those things put together, what the ultimate potential of that might be and what it might be able to enable?

[00:34:44.797] Guillaume Dagens: As I said, I was talking earlier about the augmented human, about being a cyborg and stuff. Just imagine if you could see in thermal vision or see in night vision. Touch the Wi-Fi, see the Wi-Fi, see the radio waves all around you. It's just giving you basically a sixth sense, a seventh sense. That's what is exciting about it, and what I'm looking forward to seeing is what augmented humans will do in such a society, because what we consider today as entertainment, work, everything is probably going to be shifted in a way we can't foresee. And this is exciting, and I'm curious about the future of augmented reality.

[00:35:34.062] Uttam Grandhi: I have something to add. That's a great point Guillaume has mentioned, because recently I heard a really old episode of Radiolab where they're talking about color and how, you know, certain organisms have these, you know, additional cones and, you know, rods in their eyes that let them see, like, different parts of the electromagnetic spectrum that are not even visible to us. Like, for example, the mantis shrimp, right? Mantis shrimp, I think, can actually see a lot of other frequencies that we can't see, and we don't even know what it's seeing. You know what I mean? And, you know, even for that episode, they did something really cool where, whenever they were talking about the colors, they tried to represent that in music, right? So what would, say, a dog see? Because, you know, it has, like, you know, one less cone, so it can only see, like, blue and green and no red. So they tried to represent a dog's vision in music. So that was a really interesting way to actually represent things that, you know, we as humans cannot perceive, right? So bringing them into, like, you know, our realm and our senses and the capabilities of our senses. And I think, you know, tools like, you know, augmented reality have the potential to do that. Like, things that are not visible, or things that you can't hear, or, you know, other senses that, you know, we're not even capable of. Like, you know, imagine a Neuralink interfaced with, say, some kind of, like, crazy vision glasses, and then you're able to suddenly, sort of, like, you know, see things that are not even possible. Yeah.

[00:37:03.041] Nigel Hartman: Yeah, I would just connect to what the others already said, because I think that's a really great view to have on this. Because data is usually something really abstract, which is maybe sitting on some servers, which we can just experience maybe through a display, maybe being able to talk to it, maybe by smartphone nowadays. The real potential is that we can really start feeling data soon. Maybe it will be through the display first, but there'll be so many more sensors. And what's the opposite of a sensor? Like, what is giving feedback to my body? Yeah, like something which is really giving us the opportunity to feel data around us. Yeah, that's a really great way to view that. I like that. I also work really closely together with a company, it's XR Bootcamp, and right now they changed their branding to Sense AI. And I think that's a really good term to call it. What is the future of XR and AI? It's sensing things. It's changing how we experience our environment, being able to have all these new senses.

[00:38:25.416] Kent Bye: Nice. And is there anything else that's left unsaid? Any final thoughts you have to share with the rest of the XR community?

[00:38:31.761] Guillaume Dagens: I'd say if you lack ideas and you have the white page problem, just play some video games about futuristic tech like Deus Ex, Cyberpunk, Metal Gear Solid, and you will find an idea. Nice.

[00:38:49.358] Nigel Hartman: I would say, considering the last days and also the experiences I collected over the last days and over this last year, it would be: try to get into the Snap ecosystem, because I never felt so appreciated as a developer as I did here at Snap. It's really amazing how they give us all these opportunities. And sure, sometimes they push new features out too fast and maybe it's breaking some stuff, but at least they are really open about it, and they share everything with the developers, and they also listen to our feedback. So yeah, that's a great thing.

[00:39:29.773] Uttam Grandhi: Speaking about, you know, senses and perception, I have more of a sort of, like, you know, philosophical, you know, outlook on that question. And I feel that, like, just as we are literally, you know, inventing uses for technology like augmented reality, I feel like friendships are actually something that will augment us with, you know, different perspectives of, you know, different people, different backgrounds. And I think Snap has really nailed that part along with the tech, where they are actually so open and, you know, considerate and respectful. And they're not judgmental. And they're, like, so welcoming, regardless of which race you're from, which background, which education. They don't care at all. And that is something that makes such a strong impact on me. And I try to take that message into whatever I do.

[00:40:18.143] Kent Bye: Awesome. Well, thanks so much for joining me here on the podcast to help break down a little bit around your journey of the Lensathon this year and the second place prize for CartDB. And yeah, I just feel like, you know, it's an impressive integration of a technology stack that allows you to kind of imagine a future where you can seamlessly pick up a product and look at a barcode and have lots of different metadata pop up in the context of a screen. It feels like a prototype of a type of experience that we can expect as we lower the barrier between this contextually aware computing and the different types of information for us to make decisions as to whether or not to get different products. And, you know, with the top project being Decisionator, where the AI just makes the decision for you, you're adding the layer in the metadata so that you can leave those decisions up to the consumer. So just different approaches for different ways that the AI technologies are starting to integrate into these different types of applications. So yeah, I really appreciated hearing a little bit more around the process and design of CartDB. And thanks again for joining me here on the podcast to help break it all down.

[00:41:17.416] Guillaume Dagens: Yeah, thank you for having me. Thank you for having us. Thank you very much for having us here.

[00:41:24.098] Uttam Grandhi: Thank you very much, Kent. And I hope you had a great time at LensFest and you got to kind of speak with other developers. And I'm looking forward to hearing their episodes as well. Thank you.

[00:41:35.936] Kent Bye: Thanks again for listening to this episode of the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon donations from people like yourself in order to continue to bring this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.
