#1100: Looking Glass Factory’s Holographic Displays feel like Magical Portals: Demystifying their Lightfield Display Technology

Looking Glass Factory has been producing holographic displays for a number of years, first with their Kickstarted dev kit and then with their $400 Looking Glass Portrait, and at the Tribeca Film Festival on June 9th they publicly premiered their 65-inch Holographic Display with a piece called Zanzibar: Trouble in Paradise. I took the opportunity to visit Looking Glass Factory's headquarters in the Greenpoint neighborhood of Brooklyn during my trip to New York City in order to see their interactive and CGI demos on the new 65-inch Holographic Display, and it was like looking into a magical, spatialized portal into another realm.

There are not a lot of other companies producing multi-perspective holographic displays, so I sat down with Looking Glass Factory CEO and co-founder Shawn Frayne to have him explain the fundamentals of their lightfield display technology. Their magical portal effect is produced by custom-designed optical overlays that are placed on top of high-performance, high-color-depth, 8K-resolution LCD or OLED panels.

Their highest-resolution 65-inch and 32-inch displays both take 8K input and are able to produce 100 different “views” by spatially directing nearly 100 million subpixel points of light. To break that down, 8K resolution is 7,680 horizontal by 4,320 vertical pixels for a total of 33,177,600 pixel groups. There's a red, green, and blue subpixel in each pixel group, which yields 99,532,800 total subpixels. Looking Glass Factory's optical overlays control where each of these ~100 million points of light is directed as they split that resolution up into roughly 100 different images of roughly 768 x 432 each. That seems like an incredibly tiny resolution, but these 100 horizontal-parallax views are laid on top of each other into a light field radiated from the screen, which has the effect of producing an incredibly high-resolution image with lots of depth, where Frayne says experts feel like it should not be possible from a screen of that resolution. It certainly feels like a mind-bending magic trick that they've pulled off when you look at their displays.
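As a quick sanity check on that arithmetic, here's a back-of-the-envelope calculation in Python. Note that 7,680/768 = 4,320/432 = 10, so the 100 views can be thought of as a 10 x 10 tiling of the 8K frame; that tiling is my own inference from the numbers, not a confirmed layout.

```python
# Back-of-the-envelope math for the 8K displays, using the figures above.
H_RES, V_RES = 7_680, 4_320                # 8K panel
pixels = H_RES * V_RES                     # 33,177,600 pixel groups
subpixels = pixels * 3                     # R, G, B -> 99,532,800 points of light

VIEWS = 100
view_w, view_h = 768, 432                  # per-view resolution cited above
assert view_w * view_h * VIEWS == pixels   # the 100 views exactly tile the panel

# 7680/768 = 4320/432 = 10, so 100 views is equivalent to a 10 x 10 grid
# of 768x432 images packed into one 8K frame (an inference, not confirmed).
print(f"{pixels:,} pixels, {subpixels:,} subpixels, {VIEWS} views of {view_w}x{view_h}")
```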

Frayne provides a lot more context in our conversation, but this is also a technology that you really have to see for yourself to fully appreciate. They are currently taking reservations to schedule a demo of the 65-inch Holographic Display, which I highly recommend doing. The interactive demos, with hand tracking and using a 6-DoF Quest 2 controller to shine a spotlight into a video game scene, were also really quite compelling.

Frayne also recounts his own history of becoming obsessed with holograms as a kid after seeing the Jaws 19 hologram in Back to the Future 2. His parents then bought him the 1982 book “Holography Handbook: Making Holograms the Easy Way,” and he became a legit holographer making holograms from the interference patterns of coherent light. After the successful Oculus Kickstarter in August 2012, there was a lot of buzz around 3D and VR content, which led Frayne to start Looking Glass Factory and launch their own series of four successful Kickstarter projects between 2014 and 2021 that bootstrapped the company and cultivated a community of enthusiasts for holographic content:

  • Looking Glass: Hologram 2.0 ran from May 29, 2014 to July 6, 2014 with a $25,000 goal, and successfully ended with 406 backers who pledged $36,477.
  • L3D Cube: The 3D LED Cube from the Future ran from November 24, 2014 to January 5, 2015 with a $38,000 goal, and successfully ended with 689 backers who pledged $252,678.
  • The Looking Glass: A Holographic Display was the dev kit and Unity Plug-in that ran from July 24, 2018 to August 23, 2018 with a $50,000 goal, and successfully ended with 1,301 backers who pledged $844,621.
  • Looking Glass Portrait ran from December 2, 2020 to January 14, 2021 with a $50,000 goal, and successfully ended with 8,051 backers who pledged $2,511,785.

They’ve been able to build a strong community, and then leverage each of these Kickstarter projects by making progress towards a larger vision of making holograms real and producing a really quite compelling lightfield display. It’s been such a successful evolution that the Intercept reported that the Looking Glass Factory received a “$2.54 million of technology development funding from In-Q-Tel, the venture capital arm of the CIA, from April 2020 to March 2021 and a $50,000 Small Business Innovation Research award from the U.S. Air Force in November 2021 to “revolutionize 3D/virtual reality visualization.””

Frayne said that Looking Glass Factory is producing general-purpose spatial computing display technologies, and that there are plenty of compelling use cases for their Holographic Displays across many different contexts. After seeing the display technology myself, I can see how there would be many useful decision-making applications for evaluating and analyzing spatial data within a group context. Most holographic displays are either tracked or untracked for a single user, but Looking Glass Factory's displays are unique in that they produce over 100 different views simultaneously at 60 frames per second that can be viewed by groups of people.

They’re not only developing these cutting edge display technologies in different sizes, but also creating an ecosystem for embedding holograms and displaying lightfield display content that’s responsive to the whatever depth the display tech can output. Their HologramsOnTheInternet.com is in closed beta at the moment, but from the demos that I saw is a promising way of embedding spatial content onto web pages, that can then be displayed within WebXR views or via an external Looking Glass Factory Portrait display that’s connected to a computer.

It’s hard to fully describe the experience of seeing spatial content on one of Looking Glass Factory’s Holographic Displays as it shows a range from 45 to 100 different views. You really do have to see it to believe it and to fully grok it, and we’re at the beginning of having new ways of displaying this spatial content. A anecdote Frayne came back to again and again is having us imagine that someone has developed a way to capture color on film, but that the only display technology that we have is black and white. Just the same, the world is infused with spatialized 3D content, but we’ve been living in a world of 2D displays. Part of the lesson from the Looking Glass Factory is that the paradigm shift from 2D to 3D goes beyond just head-mounted VR and AR devices, and into new forms of holographic displays and lightfield capture technologies that do a better job of capturing and displaying volumetric lightfields that more closely mimic the physics of reality. These new types of holographic displays for volumetric lightfield content provide additional outlets for a responsive immersive design within the XR ecosystem, which will surely find a number of use cases where having a group-viewable, magical portal into another realm provides some business utility.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to The Voices of VR Podcast. So I recently had a chance to attend the Tribeca Film Festival, where I ended up doing 13 different interviews with not only some of the projects and creators there, but also two technology companies that are based out of Brooklyn, where I started and ended my coverage by going to these XR technology company headquarters. The first one was with Looking Glass Factory, and the last one is with OpenBCI, so I'll be getting an update on Project Galea. Looking Glass is a startup that has been working in the XR field for a number of years now. They've run a number of different Kickstarter projects where they've done these different types of holographic displays. The one that they came out with in 2018, the holographic display called the Looking Glass, was a dev kit, and then eventually they came out with the Looking Glass Portrait, which was Kickstarted from December 2020 into January 2021. It's some really, really compelling technology when you look at it. It looks quite magical, because you're able to see this display, and as you move your head to the left and right, it's giving you different perspectives or views, anywhere from 45 up to 100 different views with their higher-resolution displays. And it looks like spatial content that you're looking at. So it's like a little magical portal that gives you access to this 3D spatial content. So I had a chance to check out the very latest cutting-edge technology of the 65-inch holographic display that was premiering with a piece at Tribeca called Zanzibar: Trouble in Paradise. You could see the spatialized content there, but I took a trip to the offices of Looking Glass Factory, and some of the CGI content looked even better than I could possibly describe. You really have to see it to believe it and really understand what's going on here. There aren't very many holographic display companies out there that are trying to essentially create a light field display that is directing photons at the subpixel level. They're able to take 8K resolution, which ends up being around 33.18 million pixel groups, and within each of those pixel groups are three subpixels. And so you're talking around a hundred million points of light that are able to be directed. And from that, they're able to have 100 different views, and so they're slicing it up into a 768 by 432 image, but because they're overlaid on top of each other like 100 times, it ends up looking like a very high-resolution depiction of these objects. It is just absolutely amazing and magical, and you really have to see it to believe it. In fact, you can try to reserve an opportunity to go and check out the display if you want to check it out for yourself. It feels like you're interacting with a portal into another world. It's really quite surreal. So that's what we're covering on today's episode of The Voices of VR Podcast. So this interview with Shawn happened on Friday, June 10th, 2022. So with that, let's go ahead and dive right in.

[00:03:02.278] Shawn Frayne: Hey, I'm Shawn Frayne. I'm one of the co-founders and CEO of a company called Looking Glass Factory here in Brooklyn, New York. And we are a hologram company. So we make holographic software to turn the universe of 3D content into holographic media. And we also make a physical display called the Looking Glass, which is the world's first holographic display that folks can buy either for their desk or for installations in businesses, marketing activations, et cetera.

[00:03:29.434] Kent Bye: Maybe you could give me a bit more context as to your background and your journey into XR.

[00:03:33.998] Shawn Frayne: Sure. Let's see. I got hooked by all this stuff when I was seven or eight. For me, it was Back to the Future 2, Marty McFly getting gobbled up by the holographic shark in Jaws 19. And to me, that was a vision of the future, the future at that time being the year 2015. And, you know, then I would just blab so much to my parents. I was, you know, just a kid at the time. I would ask, how can you get the hologram? How can we make holograms real? Where are they? Can we go to a museum to check them out? So every time we'd go on family trips, we would try to find the holographic shark in the real world, and it didn't exist. So then, I don't know, when I was 12 or 13, my parents got me a book. Christmas Day, I saw under the tree a book called the Holography Handbook. It's here somewhere. Oh, here it is. I'm holding it now. So the Holography Handbook is a guide on how you can make classic interference pattern holograms. So get a laser from Edmund Optics or from eBay, get a few optics here and there, and then capture a laser photograph, essentially. And so I spent about six months going through this book, and my dad and I built a sand table with half a ton of sand to do vibration isolation. And then, yeah, when I was about 14 or so, I had what I think is probably the only hologram studio in Tampa, Florida, at the foot of my bed, and I was making little holograms of little statues, little things I would take apart from different blenders and different machines. But it wasn't what I had wanted in that movie that I had seen some years before, in Back to the Future 2, of the dynamic, computer-controlled, full-color holographic future. Then I went searching for that at MIT. I studied physics, took this famous holography class at the time by Steve Benton, and was sure someone would have had the secret holographic display there behind some locked door, but no one had it. So I sort of set aside that dream for about 10 years. But then fast forward, and now I'm going back seven or eight years from today. So back in 2013, 2014, there was a lot of stuff happening in 3D printer land. There was a lot of stuff happening with VR. Like, I think the DK1 had just been launched on Kickstarter, Oculus's DK1. I was like, oh, people are starting to get into 3D again. Because when I made the little holograms in Tampa, Florida, in my little studio, nobody else really cared. I literally had to hold it above my head. This is going to sound super nerdy. I held my first hologram of a little pewter Mickey Mouse over my head and ran around the cul-de-sac where we were living by myself at one or two in the morning to celebrate, because no one was sharing ideas about the shift from 2D to 3D. So when folks started to share STL files in 3D printer land and people were talking about VR, I was like, oh, wow, maybe people are ready for the hologram now. And so then a few of us started a company back in 2014 to pursue the dream of holographic media and a holographic display that could show that without having to gear up.

[00:06:48.413] Kent Bye: And at what point did you decide to then take whatever you were able to make and prototype into a Kickstarter? Had you already built something? And maybe you could give the story, because that's how I first heard about the Looking Glass, through the Kickstarter, and being able to actually make it and promote it and ship a product. After the Oculus, I think there was a long time where there were a lot of products that didn't actually ship, so it was kind of a gamble as to whether or not anything would actually finish, but you were able to take that path to produce a product. So maybe you could give a bit more context as to what led up to that moment and what that time frame was.

[00:07:19.263] Shawn Frayne: Yeah, sure. So we've actually done four major Kickstarters in our company, and I've kind of grown up with the community of 3D creators. And that's what shaped the technology and shaped our company up until now. So the first thing we launched on Kickstarter, back in, I think it was 2014, was called a Looking Glass Print. And it's basically a bunch of slides that have UV-cured ink on them to make a volumetric image. And you can show stuff in a volumetric print that you can't in a 3D print. And this was our vision of, like, one freeze frame of what a perfect holographic or volumetric display could be, because at the time, in 2014, we, and nobody else in the world, knew how to accomplish an actual dynamic, computer-controlled, full-color holographic display.

[00:08:06.686] Kent Bye: And just for people that are listening, when I watched Severance, there was a bit of, like, a holographic glass thing that was there. But this looks like a 3D model, but it's in a glass block, so that you're able to do different things that you wouldn't be able to do in just, like, a normal 3D print.

[00:08:21.451] Shawn Frayne: Yeah, exactly. And if folks are really curious, you could Google “Looking Glass print,” and that Kickstarter would show up. So, you know, we sold, I don't know, maybe 1,000 of those. That gave us enough money to then get to the next stage. So we said, okay, we have pretty high-resolution static 3D imagery that can show things that a 3D print alone can't. Now we need to make something that's dynamic and computer-controlled, where you could share just a program with someone else that has the same kit that you have. So we launched our second Kickstarter, which was called the L3D Cube. So this was an 8x8x8 3D LED matrix. But it was unique from what had come before because it was a standardized kit that you could assemble in 30 minutes. And you could share some processing code with someone across the world, and they would have a super low resolution. It's only like 512 points of light. But you could have a really low-resolution 3D thing that someone had sent you as a file. And that was cool. And so we sold about a million dollars worth of those and had good margins on them. Everything was laser cut and so on. And so that gave us enough money for the third step, which was launching the first Looking Glass dev kit. And that was maybe back in, I don't even remember now, maybe 2018, something like that. And so the Looking Glass dev kit combined the resolution of the Looking Glass prints with the dynamic computer-controlled quality of the L3D Cube. And people loved it. And so we did a big Kickstarter for us at the time. We made a Unity plug-in. That was the first time that we had actually made something that tied into the bigger universe of 3D and wasn't some custom thing that we had made off on our own. And we learned a lot from that. Those kits sold for about $600, the dev kits. So then we did our most recent step in terms of the individual 3D creator community for a physical display, which was the Looking Glass Portrait. And that's probably the one that you saw, Kent. And that was in 2020. That's a lot of folks' first personal holographic display. It has onboard storage, so you can store holographic content on the system. It sells right now for $400. It's higher resolution than the early systems and so on. And that broadened out the aperture even more. And it outsold in its first 30 days the DK1 Kickstarter launch, which was the high watermark for me for building a new community of folks who were in 3D. So, you know, now fast forward a couple years beyond that, and we just announced the Looking Glass 65-inch, which is this massive display that runs the same content that the Looking Glass Portrait does, but it's just huge. And we'd been hoping to do that for a long time. That's obviously not a Kickstarter-type product, but it's based on everything that we and the community have built up over time. And we're also releasing a lot of interesting new software tools for folks, like ways to share holographic content not only on the Looking Glass but on other devices as well.

[00:11:26.425] Kent Bye: Yeah, I want to maybe first clear up the definition of holograms, because I know that there have been companies like Microsoft that will maybe use the word hologram, but that's maybe as a metaphor for people to understand this shift from 2D to 3D, and then it may actually just be a 3D model that's being rendered in some fashion, but... it's still, like, in a 2D plane and not fully volumetric in a way. And so maybe you could describe how you make sense of how the mainstream understands holograms as a metaphor, versus how some companies may be saying the word hologram but not actually have a 3D spatial hologram, versus what you think you're doing here at Looking Glass that is actually trying to preserve some of that spatial volumetric display of content.

[00:12:06.952] Shawn Frayne: Yeah, sure. Technically our displays are light field displays. And in this debate, as one of the few people who actually is a traditional holographer as well, I've seen the term change over the last 20 years dramatically. So 30 years ago, when my parents got me that book, the Holography Handbook, hologram meant an interference pattern of coherent light that you could trap on a high-resolution holographic photographic slide, basically. And that was cool. But everyone who gets into the field of holography gets into it for what we saw in Back to the Future 2, what we see in Iron Man and Star Wars. And honestly, I think most of us don't care about how we get there. So if you're a researcher that goes deep into different human-computer interfaces, maybe the word hologram to you is what it meant 30 years ago, the interference pattern of coherent light. But language is something that changes over time. And so everybody now, not only Looking Glass, but Microsoft, Meta, you name it, is using the word hologram as the new media, the thing that comes after flat media, that you can transport across different devices. And that's how we use the term as well. And instead of calling the Looking Glass a light field display, which is more technically what it is if you are a researcher or a scientist, we call it a holographic display, because it's the best way, in our opinion, to view holographic media content. This has happened many times before in other fields. 3D printing used to be a very specific additive technology, until the word changed and encompassed the whole field. And so the same thing has happened with holograms and holographic content. And there's probably only a hundred people in the world who really get fired up about this debate. Five years ago it was maybe a thousand.

[00:14:07.312] Kent Bye: Yeah, well, I guess I run into enough circles where people still discuss it. So that's helpful to know, because I feel like, from what I can tell at least, most of the stuff that I see right now in spatial computing land is 3D objects that have a 3D pipeline from the game industry, rendered through Unity or Unreal Engine. And then you output that into a virtual reality headset or augmented reality or some way that is essentially mediated through a stereoscopic 2D display, which allows you, through a perceptual illusion, as your brain sees those two images, to put together that spatial content. But light field displays are a whole other thing, where you could potentially take a 3D object from Unity and render it out as a light field. So maybe you could talk about what's happening on the back end in order for you to produce an actual light field from what may be coming from a traditional 3D pipeline, or how you start to kind of mathematically understand what a light field even is?

[00:15:04.479] Shawn Frayne: Sure. I mean, a light field, which again, most folks would call a hologram, most likely, but in different circles, a light field is the most perfect representation of reality, because a light field preserves all of the directionality of light, which is what makes the real world real. So things on a flat display, like the display on your phone or your laptop, they don't feel real, because the pixels on that screen are shining with only two properties, of intensity and color. But if you can add a third property of directionality to them, then suddenly you recreate reality. And so that's what light field content does, and that's what we call holographic content, and that's what light field displays, or what most folks would call holographic displays, do as well. They create a multitude of perspectives of any scene, and this can be done in a variety of ways. This can be chopped into actual different views of a scene, or it can be on a per-subpixel basis or on a per-ray-of-light basis that then ends up approximating to a large number of perspectives. And that is kind of the end destination. That's the holy grail of the human-computer interface, both from a content generation standpoint, for perfect content that is indistinguishable from reality, and the displays that show them. So our pipeline is designed to take different types of 3D content, synthesize a piece of holographic media, or what some folks might call light field media, and then display that in our displays and other people's displays as well. Folks can check out more about this if they want on hologramsontheinternet.com, where we're doing a sign-up for folks to get early access to some of this software. And, you know, there's, well, I can go off on this. There's two pillars that are emerging in terms of 3D content. One is, I guess you could say it's, like, model- or vert-based. So volumetric video, for instance, or a 3D model. And there's limitations to how real something can feel in that pillar. But what's nice is it can go across different platforms really easily. You can plop a model into Unity, Unreal, share it in different places, and that's cool as an asset. But it's always limited in terms of its fidelity. And then the second pillar is this light field or holographic pillar, where you are using imagery-based data, almost the equivalent of an advanced form of a single render, but for multiple perspectives. And that is what lets you achieve the fidelity that we all crave in this step from 2D to 3D.
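To make the “third property” concrete, here is a minimal toy representation of the difference between a flat pixel and a light field sample. This is purely illustrative (my own structure, not Looking Glass's data format): a flat display stores one color per screen position, while a horizontal-parallax light field stores many samples per position, one per emission direction.

```python
from dataclasses import dataclass

@dataclass
class Pixel:
    """A flat-display pixel: intensity and color only,
    radiated the same way toward every viewing angle."""
    r: float
    g: float
    b: float

@dataclass
class LightFieldSample:
    """A light field sample adds directionality: the same point on
    the screen can send a different color toward each viewing angle."""
    r: float
    g: float
    b: float
    x: float      # position on the display surface
    y: float
    theta: float  # horizontal emission angle (the axis a horizontal-parallax display varies)
```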

[00:17:53.453] Kent Bye: Yeah, I actually have an unpublished interview with Lytro talking about some of their different light field captures, which is essentially putting up an array of different cameras from multiple different perspectives. And so you're able to reconstruct a light field by taking a moment in time from multiple different perspectives, and then from those multiple perspectives, do a lot of processing and mathematics to be able to generate a light field. What I've noticed in the XR industry, tracking it for the past eight years of the Voices of VR podcast, is that going from the DK1 to DK2 to CV1 with VR, with the new distribution platform, it was able to catalyze a lot of innovation in terms of what type of content was able to be produced, and most of the people from the games industry are using these existing pipelines. But it seems like with this holographic display, there's a new outlet for these light fields, whereas something like Lytro went under, or maybe it was acquired. I didn't see as much about light field displays at, like, SIGGRAPH for a number of years after that. The economics seemed to be maybe early, without a form factor that was able to actually show off some of those light fields, because a lot of the ways that they had to do that was eventually kind of rendering it back out into a VR experience that had more of a stereoscopic display, rather than a true light field display like you have. So I'd love to hear some of your reflections on how you see some of the parallels for how VR was able to catalyze a new community, and how, with the Looking Glass and these light field displays or holographic displays, it's kind of opening up new opportunities for creators.

[00:19:25.895] Shawn Frayne: Yeah, absolutely. I mean, I think there has to be a way to consume a new form of media. If you can't consume it, then, I mean, there's no point to generating it. So starting the flywheel with a holographic display that people could buy, develop content for, take content they'd already made in certain platforms, import it to the Looking Glass with our Unity plugin, or Unreal plugin, or Blender plugin, or you name it, that's key to starting the flywheel for a new type of media like holographic media. The next step, which is what we're taking now, and what some folks in the VR and AR landscape have recognized, but in a different way, I think, is that that media has to then be shared across any device. Not only the best way to view it, but on any device, mobile phone, VR headset, AR headset, you name it. And so that's what our Holograms on the Internet initiative is doing. And I guess to comment on what you were bringing up about Lytro, yeah, I think they were a little bit too early, probably. And the trade-offs were too great in terms of the resolution trade-off that they had. And the general consumer awareness of what the benefit was wasn't there. So, you know, it was almost as if they were in a universe where they created a color camera, but nobody had anything to display a color movie on. And so that's changed now. Now there are ways to display light field content or holographic content on multiple devices, including ours, and, you know, I think it's off to the races now.

[00:21:04.048] Kent Bye: Help me understand what's actually happening in this display, because what I can imagine is maybe happening is that there are multiple panes of glass with maybe waveguides or something that's using the light fields. How do you describe, technically, what's happening in order to produce the effect that I'm seeing, where I'm seeing something that's spatialized, but as I move my head, I see the multiple perspectives? And so what is it that I'm seeing that is happening with the Looking Glass Portrait?

[00:21:28.426] Shawn Frayne: Sure. So the input to the displays is something that contains a multitude of perspectives, between 45 and 100 different images or different views of a three-dimensional scene. Sometimes those are created on the fly, sometimes they're pre-rendered. But basically, that's transmitted in an encoded form over an HDMI or DisplayPort cable from a computer to the display. On the Looking Glass Portrait, it has a built-in computer, but on other displays, you'll have an external computer, PC or Mac, and then it drives that video signal with the encoded multi-view light field content to the display. And that's encoded in such a way that's specific to each display. So for each display, we have a calibration procedure that assigns, essentially, in our biggest displays, 100 million points of light to a specific location in space that each can shine into. And that's matched with how the video signal that's being sent to the display is scrambled up. And then we have a couple of optical overlays that we design that are part of that assignment of where the subpixels end up shining into space. Ultimately, what that ends up resulting in is, for instance, in the new Looking Glass 65-inch, you generate what approximates down to a hundred different perspectives that are shining out into the world in a 53-degree view cone. So as you look around the scene, your eyes are getting hit with, it can be five or six or seven different perspectives at a given time, depending on where you're standing around the display, and you see 3D content. There's no gaps or breaks. It's smooth within that 53-degree view cone. And it feels real, because we are generating in our software a crazy number of perspectives efficiently, and then we're displaying those crazy number of perspectives simultaneously every 60th of a second. So in terms of data, you can think of it as, like, a movie is a number of frames, just a bunch of images, that then make this new medium. A hologram displayed through our displays, or the media itself, is a movie squared, you know, one time over. Because not only are you generating all of these perspectives, but then you're displaying them every 60th of a second to make something that can move and that you can interact with. And so the lineage from photograph to film and now to holographic display and holographic content will seem, a hundred years from now, to be the most obvious step in the world.
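To make the “encoded form” a bit more concrete, below is a minimal sketch of packing a set of rendered views into one video frame as a tiled grid, which is the general shape of what gets sent over HDMI or DisplayPort. (Looking Glass refers to this kind of packed multi-view frame as a “quilt”; the tiling order and dimensions here are illustrative assumptions, not their actual encoding, which as Frayne notes is scrambled per-display against a calibration.)

```python
import numpy as np

def pack_views(views: list[np.ndarray], cols: int, rows: int) -> np.ndarray:
    """Tile cols * rows equally sized views into one frame.
    views[0] is taken to be the leftmost perspective, filled
    left-to-right, bottom-to-top (an assumed convention for this sketch)."""
    vh, vw, ch = views[0].shape
    frame = np.zeros((rows * vh, cols * vw, ch), dtype=views[0].dtype)
    for i, view in enumerate(views):
        col, row = i % cols, i // cols
        y0 = (rows - 1 - row) * vh  # bottom row of tiles first
        frame[y0:y0 + vh, col * vw:(col + 1) * vw] = view
    return frame

# e.g. 100 views of 768x432 in a 10x10 grid -> one 7680x4320 (8K) frame
views = [np.zeros((432, 768, 3), dtype=np.uint8) for _ in range(100)]
quilt = pack_views(views, cols=10, rows=10)
assert quilt.shape == (4320, 7680, 3)
```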

[00:24:10.205] Kent Bye: Well, there's a lot of metaphors for going from 2D to 3D, and let me try to explain how I'm starting to make sense of it through those metaphoric bridges. Because you have, like, 1080p as an example, 1920 by 1080, which is the number of pixels in the width and height of a video, and so you have a way of talking about the resolution of a display screen. In VR, you can talk about the resolution of a stereoscopic display, which allows you to say what the resolution is per eye. But from what I understand, at least, with a holographic display, you have not only the resolution, metaphorically, of that image, but also that 53-degree cone of view, and also what you say are subpixels. So, like, at a point in space, at 3D coordinates away from the screen, there may be a photon or a ray of light that's being refreshed 60 frames per second. So for all of those potential points of light, you're basically shooting them out, and you don't know whether or not people are going to be seeing them, but you're able to create these group experiences by allowing people to kind of experience those photons or rays of light that are shooting out of the holographic display.

[00:25:18.480] Shawn Frayne: Yeah, that's right. So a holographic display is just as wasteful as the real world. You know, not everyone sees everything in a room all at once, but it means it's zero friction to have that experience. So I'm looking at you now, Kent, but I'm not looking at the shelf behind me, but it's still there. And that means that Tommy across the office can see it. He doesn't have to have my permission. He doesn't have to put anything on to see it. And so that's how our media and how our displays work. We blast out these hundred different perspectives. And if there's only one person around the display, maybe they're only seeing a few of those at a time, but then when their friend walks in, or when 50 people walk in behind them, or a hundred for the Looking Glass 65-inch, they're all getting different perspectives simultaneously, and the display didn't have to do anything. There's no tracking. It just always works, in the same way that our perception of something in reality works. And so that's a deliberate trade-off that we made. It made the systems in the early years inefficient, because you're not only driving one view or two perspectives, like in the case of a VR headset, to a single person. You are wasting a lot of information, but for the benefit of having a zero-friction, non-headset group experience. And over time, over the last few years, we've been able to make our software stack more and more and more efficient. So now there are no trade-offs. But at first, that was something that was highly debated, not only in our community, but more broadly. Like, why would you do this? You're wasting so much information. Computers are only so powerful, and so on. We're like, well, computers are getting more powerful, especially with all the stuff going into GPU efficiency and whatnot in the gaming world. So we think we'll be cool. And that's what happened over the last few years.

[00:27:05.165] Kent Bye: Yeah, and it also has the side effect of having multiple people be able to see a holographic display, rather than just tracking one person and essentially kind of recreating a stereographic effect using a 3D TV that's tracking you, in some ways, metaphorically. So you say there's, like, a hundred different views, and let me just make sure I understand that properly, because I talked about 1080p, which is essentially a 2D slice of an image. Is the hundred views kind of taking a 2D slice of an image but at different angles, to kind of get the spatialization of it? Maybe you could describe that, and maybe compare the different views that you have in each of the displays. And also, you had mentioned the subpixels, and whether you track that as a metric, in terms of another way of thinking about the volumetric resolution of a holographic display. And yeah, if you can kind of talk about the different evolution of your displays, and what each of the specs are, and how you start to describe them.

[00:27:57.315] Shawn Frayne: Sure, we have four physical displays out there in the market now, so I'll take the question in reverse. We have four physical displays out there, starting with the $400 Looking Glass Portrait, and then we have a 16-inch, a 32-inch, and a 65-inch system, the 65-inch one we just announced, and we're showing it a few places around the world now. And each of those has higher and higher and higher resolution. You can get more and more people around the system just because of the sheer size of the unit. And all of them work, though, in fundamentally the same way, which is that you drive in holographic content, which our software generates from anything in 3D land, whether it's something someone made in Unity or Unreal or Blender, or even a portrait mode photo that someone takes, which actually contains depth. Portrait mode photos blur out the background to simulate a DSLR look, but to do that, the camera is capturing depth information. And we can use that depth information to synthesize the multitude of perspectives necessary for a holographic display or holographic media. So anyway, there's these hundreds of trillions of pieces of 3D content that can be holographic content. And our displays then display out, or essentially resurrect, all of the different perspectives that that content had originally, by basically taking these custom-designed optical overlays that we've made, with the calibration procedure and the indexing and the software, to generate all of these views and display them out into space simultaneously. And the way that works is we actually have very high-resolution, high-performance, high-color-depth LCD or OLED panels that we're then chopping up optically into a multitude of different views, but on the subpixel level. And that means that we are taking a source resolution, like an 8K source resolution in the case of the 65-inch Looking Glass, and dividing that into 100 different views. And some folks will take out their pen and a cocktail napkin and start to do some calculations and be like, well, wait, then the end resulting resolution is no good. But what is missing in that calculation is that your eyes are getting exposed to a number of different views simultaneously, wherever you're standing around the display. And that ends up synthesizing to a very high-quality experience. I mean, you saw the display, the 65-inch, in the back, Kent, so you can comment on what you think about it. But everyone, including the top display experts in the world outside of our company, is shocked that this system actually works, because we're taking normal LCD and OLED panels with our own custom optics overlay and our software, and creating something that shouldn't be possible from an end-resulting-resolution perspective. And it has to do with a lot of advances that we've made in terms of how those perspectives are blurred together, how you can use each individual subpixel to get to a very high level of end-resulting quality. And so I hope folks will get to see a system in person, because there aren't really any standards around this field yet. So it's really hard to say, what's the resolution of a holographic display? I can tell you what goes in, and your eyes can tell you what comes out.
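As a toy model of what “chopping up optically on the subpixel level” can look like: in a generic slanted-lenticular design, each subpixel's view assignment is a function of its horizontal position, its row, and its color channel. The sketch below is a hypothetical illustration with made-up parameters, not Looking Glass's actual calibration, which Frayne notes is measured per unit.

```python
def subpixel_view(x: int, y: int, channel: int,
                  pitch: float = 100.0, tilt: float = -0.1,
                  center: float = 0.0, num_views: int = 100,
                  panel_width: int = 7680) -> int:
    """Assign one subpixel to one of num_views views.

    Toy slanted-lenticular model: the R, G, B subpixels sit a third
    of a pixel apart horizontally, and the lens array is slanted, so
    the assignment also drifts with the row y. All parameters are
    illustrative, not measured from real hardware.
    """
    # fractional horizontal position of this subpixel, including RGB offset
    sx = (x + channel / 3.0) / panel_width
    # phase under the slanted lens array, wrapped to one lens period
    phase = (sx + y * tilt / panel_width) * pitch - center
    phase %= 1.0
    return int(phase * num_views)
```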

[00:31:31.003] Kent Bye: Yeah, I see it as a fusion of your previous displays with the existing technology, and it does create this unique spatial experience that is, I'm sure, difficult to capture with any cameras. You kind of have to see it to understand what's happening. And so, to go back to the question about the views, what is a view when you say there's 100 views, and how do you produce that view?

[00:31:52.014] Shawn Frayne: Oh, sure. So your phone or your laptop that you're looking at right now has a single view. So it's producing a single image, and it doesn't change as you move around your laptop or phone. A VR headset, or the most common VR headsets at least, presents two perspectives at a time. And those two perspectives, for one person's two eyes, change as you look around something. But at any given moment in time, your eyes are only getting two perspectives. So a holographic display like the Looking Glass presents out up to 100 different perspectives at a time. So you can imagine this as being like 50 VR headsets that are splayed out, all displaying a different perspective at a time, and your face is sort of going across them. Except, of course, in this case, it's just light that's producing all of those perspectives. But in terms of the compute challenges that we had to solve, it's similar to having, like, 50 VR headsets kind of splayed out, but for the benefit of then getting a large group of people that don't have to gear up to see genuinely three-dimensional, super-stereoscopic content. And so a lot has had to go into the quality and efficiency of our software stack to make that possible. I don't know, does that explain the views?
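On the content-generation side, producing those perspectives amounts to rendering the same scene from a row of camera positions slid along a horizontal baseline. A minimal sketch of such a rig follows; real multi-view rendering (including the Looking Glass plugins) uses off-axis projection matrices and other details that are omitted here, and the parameter names are my own.

```python
def view_camera_positions(num_views: int = 100,
                          baseline: float = 1.0,
                          focal_distance: float = 5.0) -> list[tuple[float, float, float]]:
    """Camera origins for a horizontal-parallax multi-view rig.

    All cameras sit on a horizontal line of width `baseline`, aimed
    (or frustum-sheared) toward a focal plane `focal_distance` away,
    so the rendered views differ only in horizontal perspective,
    matching the horizontal-only parallax described above.
    """
    positions = []
    for i in range(num_views):
        t = i / (num_views - 1) - 0.5   # -0.5 .. +0.5 across the baseline
        positions.append((t * baseline, 0.0, -focal_distance))
    return positions

# e.g. 100 camera origins, leftmost view first
rig = view_camera_positions(num_views=100)
```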

[00:33:09.438] Kent Bye: Yeah, I think so. And I guess another follow-up is, as I'm moving, I guess I'm kind of turning my head left to right, versus, like, looking up and down, when you talk about views. Because you talk about splaying out 50 different VR headsets to get a hundred views, but, I mean, you have the yaw and the pitch, and just different ways of kind of understanding if it's only on one axis or if it's in multiple axes.

[00:33:33.512] Shawn Frayne: Yeah, so it's a horizontal parallax system. So you're getting different 3D information in the horizontal axis. And this is actually very forgiving in terms of if you're turning your head, like, 30 degrees, 40 degrees, whatever. So in any normal viewing condition, you're getting different three-dimensional content wherever you're at around our displays, but you're not getting different information in the vertical direction. You still see content as you go up and down around the display, up to nearly a 180-degree view cone vertically, but it's not different 3D information. The reason that works so well is because our eyes are oriented along a horizontal axis. And because we are shooting out so many views, even if you're tilting your head back and forth, you're not going to see any change in the experience, because your eye will just receive a slightly different view at that moment in time. If you tilted your head 90 degrees, then you would see the image flatten, because then you're getting no different information at that one slice in space. But that's not how people walk around.
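The head-tilt behavior follows from simple geometry: a horizontal-parallax display can only deliver stereo disparity across the horizontal separation between your two eyes, and that separation shrinks as your head tilts. A quick illustrative calculation, assuming a typical 63 mm interpupillary distance (my assumption, not a figure from the interview):

```python
import math

def effective_baseline(ipd_mm: float = 63.0, head_tilt_deg: float = 0.0) -> float:
    """Horizontal separation between the eyes after tilting the head.
    At 0 degrees you get the full stereo baseline; at 90 degrees the
    eyes are stacked vertically, so a horizontal-parallax display
    delivers zero disparity and the image flattens, as described above."""
    return ipd_mm * math.cos(math.radians(head_tilt_deg))

for tilt in (0, 30, 60, 90):
    print(f"{tilt:2d} deg tilt -> {effective_baseline(head_tilt_deg=tilt):.1f} mm horizontal baseline")
```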

[00:34:44.700] Kent Bye: Okay, so that gives me some metaphors to understand what's happening. Now, if we go back to the different displays, how do you describe, from your perspective, the specifications of each of those displays, with all these new metrics of these light field displays or holographic displays?

[00:35:00.712] Shawn Frayne: Sure. There's no other company in the world that's doing a group-viewable holographic display commercially that I'm aware of. So there are a number of companies that are doing single-viewer systems, where you're tracked and you're getting two views, or in some cases a few more than that, at any given time, but for a single viewer. And so those can be light field displays too, but for a single viewer. And then there's untracked systems for a single viewer as well. So those are kind of the three classes of holographic or light field display: single-viewer untracked, single-viewer tracked, and multi-viewer untracked. Yeah, and so folks can look up, if they're curious, the field of emergent contenders in the land of holographic displays. And what you'll see is a lot of folks who are doing pretty interesting single-viewer tracked systems, but there are no, to our knowledge, multi-view systems for multiple viewers. So Looking Glass is really, in that respect, in a class of its own right now. That won't be the case forever, probably. And we want more and more ways for folks to experience three-dimensional content, but it has to be at a certain level of quality. The reason that we have pushed so hard is to make sure the mistakes that were made 20 years ago, in that attempt to jump from 2D to 3D, are not made again. And quality and the comfort of the experience is a part of that. One of the really nice things about a multi-viewer system, besides the obvious fact that you can have more than one person around it, like the Looking Glass, is that you get no eye fatigue at all, no matter how long you look at the display. That's because your eyes are getting the same information they get in the real world.

[00:36:43.703] Kent Bye: So what are the different specs for each of those displays, as you go through the evolution of your different displays? Just to help me kind of understand the scope of your own evolution of creating this unique class of holographic displays, as you go through each of them, the resolution and the number of views and the number of subpixels, if you have those numbers kind of roughly in your head.

[00:37:05.516] Shawn Frayne: Yeah, so the number of views is a definition that other groups are starting to use as well, after we started to use it a few years ago. So our top-of-the-line system, the 65-inch Looking Glass, can generate up to 100 different views of a 3D scene at a time. And it takes an 8K input signal. The 32-inch Looking Glass can technically do up to 100 views, but at less depth. So the 65-inch gets significantly more depth than the 32-inch. The 32-inch is still a super sick system. It also has an 8K input. So you can think of the 32-inch as kind of like the smaller version of the 65-inch. And then the 16-inch Looking Glass takes a 4K signal, and it can do between 45 and 100 views. As you start to go to a greater number of views in that system, you don't see as many benefits as you do with the larger systems. And then the smallest, the Looking Glass Portrait, can take an input signal of 45 to 100 views. And when I say input signal, our software handles all of the details of that. But you end up getting good results with only 45 views on the Looking Glass Portrait. And it has about a 2.5K input signal. So 2.5K, 4K, 8K, and 8K as you go on up from the Looking Glass Portrait to the 16-inch, to the 32-inch, to the 65-inch.
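Pulling those spoken figures together in one place (these are figures as stated in this answer, not official spec-sheet values):

```python
# The Looking Glass lineup as Frayne describes it above.
LINEUP = {
    "Looking Glass Portrait": {"input_signal": "2.5K", "views": "45 gives good results, up to 100"},
    "16-inch":                {"input_signal": "4K",   "views": "45 to 100"},
    "32-inch":                {"input_signal": "8K",   "views": "up to 100, less depth than 65-inch"},
    "65-inch":                {"input_signal": "8K",   "views": "up to 100, the most depth"},
}
```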

[00:38:34.393] Kent Bye: And then you said that there are differences in depth between the 32-inch and the 65-inch. Maybe you can elaborate on that a little bit.

[00:38:40.598] Shawn Frayne: Yeah. And this is kind of equivalent to the different spheres of clarity and whatnot that you would have in a VR headset and what have you. So all of this is a little bit subjective right now, as the field is still developing. But the Looking Glass 65-inch gets up to four times greater depth than the other systems, because of how the optics are designed, and because we have crammed in so many different views, up to 100, into that system, you can push significantly more depth before you start to get blurring and whatnot in the resulting content. And as the systems scale down in size, to 32-inch and then to 16-inch and then to Portrait, you get different amounts of depth. Roughly, in the systems, it ends up being equivalent to a 4 by 3 by 2 aspect ratio. So in the Looking Glass Portrait, four is the height, three is the width, and two is the depth, in terms of, like, a kind of volumetric aspect ratio. And that's similar in all of our systems, in terms of aspect ratio, but the absolute depth you get on the bigger systems is naturally kind of bigger.

[00:39:50.851] Kent Bye: Okay, yeah, that helps, just to clarify, just to put some words on how to understand what I'm seeing. And as people see it, they'll have their own experience, but then they'll be able to kind of listen to the language that's emerging. I think that's part of the sensemaking process: you have this immersive experience, and then you have the raw sensory experience, and then you try to put the language on top of it. So I wanted to ask about one of the different initiatives you have now with Holograms on the Internet, which you just announced, and which is essentially creating a number of different views, and it seems to be, like, an embeddable object that you can start to put within the context of a website. So maybe you could just describe what that is, because you just showed me a demo where you're able to pull something up on a website and then put it onto a holographic display, like a Portrait or the other displays that you're working on here at Looking Glass. But to be able to take content from the internet, have people, through a 2D plane, scroll their finger across it to be able to get a spatial sense of it, kind of like an embeddable 3D object from Sketchfab, which is what I understand to be the closest experience to it, but then also take that object and show it with a light field display to get a much higher resolution than you would be able to see on just a 2D laptop or a phone.

[00:41:01.473] Shawn Frayne: Yeah, so folks can check this out if they go to hologramsontheinternet.com, which is just a little explanatory landing page for this initiative. And it basically lets folks take 3D content from different sources. So that could be a model, but it could also be a portrait mode photo, which has this 3D information hiding behind it. Or it could be something that someone generated with the Looking Glass plugins for Blender, or Unity, or Unreal, this holographic content. Then they can post it onto the site and get a link to share that hologram with anyone, or an embed code where they can put it into their own website, onto other folks' websites, et cetera. Then, no matter what device you have, when you go to that link, or when you go to that website with the embedded hologram, you'll see it there, whether you go to that website on your phone, or on your laptop, or with a Looking Glass connected, or inside a VR headset. You'll see that hologram at the best level of 3D that the device can support. And the thing that's really exciting about this is that there is a huge world, hundreds of trillions of pieces of media, that are made in 3D but currently viewed in 2D. Portrait mode photos are one of many, many examples. And we can get a higher fidelity, more realistic experience if we could just view that 3D content actually in 3D. And this lets us do that now, not only for models. There's a million times more things out there that are made in 3D but only viewed in 2D, beyond the world of models. And a lot of this is real-world content. A lot of this is the advanced stuff folks are doing with Nerfies, but a lot of it is sort of the known-quantity stuff. Like, there are a huge number of NFTs that are made in Cinema 4D. Like Beeple, the artist, makes a new piece every day, and of his 5,000-plus pieces, a bunch of them were made in Cinema 4D. All of them have only ever been sold or viewed in 2D. But it's actually natively 3D content. It's like that analogy I was talking about earlier. It's as if we lived in a world where every video ever shot was shot in color, but people were only viewing it in black and white. But now we can actually view it in color. And so we're running a pilot where folks can get access to the tool and start to play around with it. And there's already a handful of folks in the community who have been great in shaping what even the pilot looks like through early access. So we're hoping that that lets folks get holographic content much more widely distributed, because now it can work not only in a Looking Glass, but on any device.

[00:43:39.338] Kent Bye: Yeah, metaphorically it kind of reminds me in some ways of, like, an animated GIF, because you're taking a video file and then showing images that are going over time, but it's looping in a way that's kind of like translating a video but using it as an image file. And so this seems to be, in some ways, taking 3D objects and converting them into light fields, but then maybe displaying them into these views. Are you actually displaying and delivering the light field information, which is then rendered out into these different WebGL projects that are then creating a physically based rendered object? Or what is it that you're actually doing technically? How are you storing this information, and what's happening on the back end for what you're actually creating with this Holograms on the Internet?

[00:44:17.131] Shawn Frayne: Yeah, I mean, without going into the super nitty gritty, the way that the information is displayed is a little bit different for the different devices that are used to consume it. And that's invisible to the user. The experience is just, if you view this content in a different device that has more 3D capability than another device, then it will be displayed at a higher 3D fidelity. And so in the background, we're using a bunch of existing web standards and tools. WebXR is one of them, but there's a number of others that we leverage to be able to accomplish that. We, of course, store the source that somebody uploads, but then we chop that up in a variety of different ways to accomplish that display on multiple devices in 3D.

[00:45:02.255] Kent Bye: I had a chance to see the demo of the 65-inch holographic display. What I was really struck by is that I saw Zanzibar, the 360 video that was being shot, which was almost, like, upscaled in a way that was kind of like Depthkit taking an SLR camera but then adding depth information and then projecting it out. But when you were showing me the 3D renders from the Unity content, I felt like I had a lot more resolution, or it was much more clear as to what the quality of the display was, versus the content that was being shown at the Tribeca Film Festival. So I'm glad I got to see those extra demos, but also to see the Ultraleap input, which is like the Leap Motion hand tracking, to be able to actually engage and interact with the content, or even ways of using a 6DoF controller to interact with that content, in a way that was really impressive to see, how you can have, like, a portal into a 3D space but have a group experience. And so I'd love to hear anything that you can share, because there was a recent announcement that you've also been working within the context of the intelligence community, which is interested in this. And I know from data visualization, talking to different experts, there's geospatial visualization and there's abstract data visualization, which are two different types of visualization. So I'd love to hear: what was it about what you've been able to produce that has garnered the interest of the broader intelligence community?

[00:46:19.752] Shawn Frayne: Yeah, I mean, we make a general-purpose holographic display and general-purpose holographic software that lets folks create and share holograms. And so a lot of folks want these systems because there's a lot of 3D content now, and people need ways to view that and share it and really get the benefit of that 3D content. And so that includes all different types of 3D content, whether it's gaming content, or a marketing activation, holographic advertising, a new experience that you want to convey to somebody, a memory that you might've shot on your phone, or the Zanzibar experience, where they shot with phones in Zanzibar and captured depth video, and then we were able to display that in a Looking Glass 65-inch at Tribeca, as you were mentioning. In terms of the direct question about geospatial information, we don't make purpose-built applications. We make general-purpose plugins and general-purpose software and general-purpose hardware. And that is enough for the different folks to use in their different workflows. So if someone is doing a survey of New York City, for instance, there's 3D information about New York City. And you can make better decisions faster about the elevation or the grade of a particular mountain, or what have you, like how folks are moving around, like for traffic patterns and things like that, if you can see that content in 3D sometimes. And so, visualizing that holographically and viewing it in a holographic display in a group context, different groups find that advantageous.

[00:48:05.348] Kent Bye: One of the demos you showed me was Snapchat filters, where you were able to take information from Snapchat and then be able to display it within a Looking Glass Portrait. And so you'd mentioned that there's actually depth information that you're able to potentially extract from some of these augmented reality facial filters, or other ways that people are taking selfies or other images while using a facial filter, that you're able to extract out somehow. And then sometimes you can do an up-conversion, where you have just a 2D image and then use AI to maybe extrapolate that information, or sometimes there's actual depth information that you're capturing from some of these apps that you're able to display. So maybe you could just explain what's happening there, because for me, it feels like a whole other new realm of being able to take stuff that has been shared on these 2D platforms, but to share it in maybe more spatial contexts.

[00:48:51.572] Shawn Frayne: Yeah, I mean, there are 250,000 lens creators for Snapchat. They're 3D creators. They're designing stuff in 3D, so there's 3D information there. And that's another example where there's all of this amazing 3D content that is only viewed in 2D. And that's like viewing a color film in black and white. So we can convert some of that content into a holographic form that can be displayed in our systems, and through holograms on the internet, on other folks' systems as well. And it feels more real. I have this snap of my daughter, Jane, on the subway, where her face is turned into this 3D cartoon face. And I love seeing the hologram of her in my Looking Glass Portrait, because it feels like she's there with me a little bit more than on my phone.
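On the "up-conversion" idea mentioned above, where a plain 2D photo gets its depth inferred by AI, here is a rough sketch of that general technique using the open-source MiDaS monocular depth model. To be clear, this is just an illustration of 2D-to-depth estimation, not Looking Glass's actual pipeline, and the file names are placeholders:

```python
# Rough sketch of AI depth "up-conversion" from a single 2D photo, using
# the open-source MiDaS monocular depth model via torch.hub. Illustrative
# only; this is not Looking Glass's actual conversion pipeline.
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

# Load the photo and convert OpenCV's BGR ordering to RGB
img = cv2.cvtColor(cv2.imread("selfie.jpg"), cv2.COLOR_BGR2RGB)

with torch.no_grad():
    prediction = midas(transform(img))
    # Resize the low-resolution depth prediction back to the photo's size
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze()

# Normalize to 0-255 and save a grayscale depth map alongside the photo
depth = (depth - depth.min()) / (depth.max() - depth.min())
cv2.imwrite("selfie_depth.png", (depth.numpy() * 255).astype("uint8"))
```

A photo plus a depth map like this is essentially the same RGB-D recipe that portrait mode and depth video work from, which is why that kind of 2D content can be re-projected into something more spatial.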

[00:49:44.448] Kent Bye: Great. And finally, what do you think the ultimate potential of spatial computing and these holographic displays might be and what it might be able to enable?

[00:49:54.712] Shawn Frayne: I mean, I think the holy grail of the human-computer interface is when you can make the jump into something that's as real as reality and do it in a way that is completely comfortable to use. Something that is magical but indistinguishable from reality is what we've been chasing: for me, since I was seven or eight, and for a bunch of us on the team, in different careers before Looking Glass, for similar amounts of time. So the ultimate goal is to complete the transition from 2D flatland media and display to 3D immersive media and display. And what we're doing, I think, is an important part of that, but so is what folks are doing in VR headset land, in AR headset land, in mobile AR land. So all of us together are going to help get from a flat universe to one in which we're working and experiencing and sharing things in 3D.

[00:51:00.492] Kent Bye: Is there anything else that's left unsaid that you'd like to say to the broader immersive community?

[00:51:04.780] Shawn Frayne: No, I know it's the hardest thing to describe a holographic display over voice only. It's also hard on video, on 2D video. So I hope folks will get a chance to check this out for themselves. What we have been really fortunate with is that folks trusted us in the early days, and I think we delivered on that trust by actually delivering what we said we were doing. So I want folks to be able to see everything I'm talking about and judge for themselves. Folks can sign up for viewings of the newer systems, they can sign up for holograms on the internet at hologramsontheinternet.com, or check out the broader stuff at lookingglassfactory.com. And I hope to meet a bunch of y'all in person soon.

[00:51:47.239] Kent Bye: Awesome. Well, thank you so much for joining me on the podcast today. I think this really is a technology you have to see to really understand. And I know that for a number of the people I follow on Twitter, it was almost like getting Christmas during the pandemic when they got their holographic displays and got to play around with them a little bit. If you're a 3D creator, or if you're just trying to look at the new frontiers of where this is going, then just like the Oculus DK1 seeded the new industry of virtual reality that then expands into augmented reality, I think this is another platform that's going to be a big part of enabling all sorts of new opportunities: more and more spatial information being displayed that goes beyond just the stereoscopic 3D stuff we have, into more spatial content that also has a social context and a group context, which I think is quite interesting. And content you can also interact with, which is amazing. It's like a portal into another realm. So thank you so much for all that you're doing, for sitting down with me to help explain what you're working on, and for helping to make the holograms real. So thank you.

[00:52:53.295] Shawn Frayne: Great. Thanks so much.

[00:52:54.954] Kent Bye: So that was Shawn Frayne. He's one of the co-founders and the CEO of Looking Glass Factory.

So I have a number of different takeaways about this interview. First of all, I was just really blown away by actually seeing some of the latest holographic displays. It is quite magical in the sense that when you're looking at it, you don't quite understand how to parse it, because you've really never seen anything quite like it. It just feels like you're looking into a portal into another world, and it's really high resolution in a way that looks super clear.

As he describes the technique, you're essentially using an 8K input, and from that 8K input you have 7,680 by 4,320 pixels, for a total of 33,177,600 pixel groups. Within that you have 99,532,800 total subpixels, and for each of those subpixels they have a custom-designed optical overlay that dictates where that photon is directed. So they're able to essentially control this 3D spatial view with those 100 million points of light from that 8K display, and you're able to get these 100 different views.

It's a horizontal parallax system. So imagine taking a 3D object and taking 100 different pictures of it from different angles, then compositing all of those on top of each other, so you're able to see behind different aspects of the object. When you move your head around, it makes it feel like a spatial object.

He was really keen to differentiate it from other holographic displays like Leia. He was saying that a lot of the existing ones are either single-user, or you have to have some sort of tracking to understand where your head is in order to render what's being shown on the screen. With their system, all of those 100 million points of light are being shot out, so you can have a group experience: multiple people from multiple perspectives seeing it and getting a sense of that spatial context.

They have their four different displays. The 65-inch has a hundred different views at 8K input, and he was saying that there's something like four times more depth before you get more of the blurring; roughly, the aspect ratio is around four height by three width by two depth, and with the lower resolutions they were getting a little bit less depth. These are all new things in terms of having language and numbers to quantify and describe them, so I don't have a good intuitive sense of the depth. I'd have to see the same object side by side to be able to tell when the depth starts to change; I was just seeing a number of different demos and wasn't able to refine my own perception enough to differentiate some of these different things he's talking about.

But that 8K resolution essentially produces 100 million points of light, so the 32-inch, also 8K, presumably also has 100 million points of light. Then when you go down to the 16-inch, you have a 4K input, which is 3,840 by 2,160; that's around 8.29 million pixel groups, which ends up being around 24.88 million points of light for the 16-inch. I'm assuming that's a similar translation, just from 4K rather than 8K, with 8K being the 100 million points of light.
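As a quick sanity check on that arithmetic, here's a minimal sketch in Python. The resolutions and approximate view counts are the figures from this conversation, not official specs:

```python
# Sanity check on the points-of-light arithmetic discussed above.
# Resolutions and approximate view counts are the figures quoted in
# this episode; this is illustrative, not an official spec sheet.

def display_math(width, height, views):
    pixel_groups = width * height     # RGB pixel groups on the panel
    subpixels = pixel_groups * 3      # one R, G, and B subpixel per group
    per_view = pixel_groups // views  # pixels left over for each parallax view
    return pixel_groups, subpixels, per_view

for name, (w, h, v) in {
    "65-/32-inch (8K, ~100 views)": (7680, 4320, 100),
    "16-inch (4K, ~45 views)": (3840, 2160, 45),
}.items():
    groups, subs, per_view = display_math(w, h, v)
    print(f"{name}: {groups:,} pixel groups, {subs:,} subpixels, "
          f"~{per_view:,} pixels per view")
```

Dividing the 8K panel's 33 million pixel groups across 100 views is exactly where the roughly 768 x 432 per-view figure comes from.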
And then the $400 Looking Glass Portrait has a 2.5K input signal, or 2,048 by 1,536; that's around 3.145 million pixel groups, or around 9.437 million points of light. There are around 45 views with the Looking Glass Portrait and the 16-inch, and the 32-inch and the 65-inch can go up to 100 views or so. Those are just some of the specs for understanding it. Like I said, you really have to see it to understand what the technology is doing.

But you can understand why In-Q-Tel, which is the venture capital firm for the CIA, invested $2.54 million through a technology development fund, along with another $500,000 Small Business Innovation Research award from the U.S. Air Force, as reported by The Intercept back on May 22, 2022. We talked about that a little bit in terms of them creating a general-purpose computer technology that has lots of different use cases. I can totally see the utility of a holographic display like this to show spatial content and to make group decisions around it. The group aspect of this experience is probably what differentiates it from what you already have with VR headsets, along with not having any eye strain, for whatever decision-making processes people are working on. There's probably a wide range of different uses for this type of holographic display technology.

As for "holographic" as a term, we kind of talked about how technically it's more of a light field display, but metaphorically people understand a little bit better what it means to have holographic content, meaning this move from 2D to 3D. I don't know how the larger community is going to adopt these different terms. I know there are some diehards who are very attached to holograms meaning only the interference patterns of coherent light, as Shawn was describing in this conversation. But it's more or less a light field display, and with light field displays you have the directionality of where the light is showing up.

With that, they have a variety of different technologies, from their display technologies to holograms on the internet, with their own ways of rendering out into that light field display. It was really cool to see the demo, because Shawn was just scrolling down a website, and as he came across one of these embeddable objects, it would show up in the Looking Glass display. So it's kind of like a holographic display off to the side of your computer: as you're scrolling through normal 2D content, you come across these embeddable holograms on the internet, and from there you can dive in and interact with them in different ways. It was really cool to see that integration. It felt like a Sketchfab-style object you're able to embed into a website, but also like an animated GIF: it's a new file format using their multiple views from different perspectives, so that as you move from left to right you see very high-resolution, photorealistic depictions on essentially these light field displays.
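Looking Glass publicly documents a tiled "quilt" image as one common container for that kind of multi-view content. As a rough illustration of the idea, here's a minimal sketch that tiles a set of per-view renders into one grid with Pillow; the grid size, per-view resolution, file naming, and tile ordering are all assumptions for the sketch, not their official spec:

```python
# Minimal sketch of tiling N horizontal-parallax renders into a single
# "quilt"-style grid image. Grid size, per-view resolution, file naming,
# and tile ordering here are illustrative assumptions, not an official spec.
from PIL import Image

COLS, ROWS = 9, 5            # 45 views, e.g. a Portrait-class display
VIEW_W, VIEW_H = 420, 560    # per-view resolution (assumed)

quilt = Image.new("RGB", (COLS * VIEW_W, ROWS * VIEW_H))

for i in range(COLS * ROWS):
    # view_00.png is assumed to be the leftmost camera angle,
    # counting up toward the rightmost
    view = Image.open(f"view_{i:02d}.png").resize((VIEW_W, VIEW_H))
    col, row = i % COLS, i // COLS
    quilt.paste(view, (col * VIEW_W, row * VIEW_H))

quilt.save("quilt_9x5.png")
```

The payoff of a format like this is that one ordinary image file can carry all of the parallax views, which is what makes embedding holograms in a normal webpage practical in the first place.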
The other thing that I think was really striking as I was doing the demos is that there's so much content. This is the thing he came back to again and again: imagine if someone created a color camera, but the only way we could show content was on a black-and-white display. That's essentially what we have, metaphorically, with all this 3D content but only these 2D displays. All this stuff like portrait mode on the phone is capturing depth information. You have these Snapchat filters that are capturing different aspects of a depth field. So all this media is already using the spatial aspects of our medium, but we're only seeing 2D representations of it on our phones and on our computers. The Looking Glass technology just provides another outlet to display and show some of those different captures. It's the equivalent of swiping through the images on your phone, but stored within the context of these Looking Glass Portraits, scrolling through these different volumetric depictions. Really quite compelling. It's these types of display outputs that I think are going to be driving new types of applications and ways for people to not only produce the content but also to consume it. So I'm very excited to see where all this goes.

This is also a company that has quite a history with Kickstarter campaigns, starting with Looking Glass: Hologram 2.0 on May 29th of 2014, where they set a $25,000 goal and got 406 backers and around $36,000. They shipped that product and then turned right around later that year, on November 24th, 2014, with L3D Cube, their follow-on to those Looking Glass prints, which had a $38,000 goal and 689 backers pledging $252,000 to bring that project to life. Then The Looking Glass, the holographic display for 3D creators with the dev kit and the Unity plug-in, launched on July 24th, 2018, and a month later they had blown through their $50,000 goal with 1,301 backers pledging $844,000. And then with the Looking Glass Portrait on December 2nd, 2020, they had a $50,000 goal, and 8,051 backers pledged $2.5 million to bring that project to life. That one was shipping during the pandemic in 2021, and I saw a lot of people getting it and getting really excited about having a holographic display.

So Looking Glass Factory is a really interesting company, and I highly recommend actually seeing the technology. It's hard to describe what it is until you actually see it. It is quite magical, and I'm really curious to see where this continues to go, because I think they've got some really compelling technology. Like I said, if you do want to see their 65-inch, you can reach out to them and apply for a demo and check it out. It's pretty mind-blowing what they're able to achieve.

So, anyway, that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. If you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and I do rely upon donations from people like yourself in order to continue to bring you this coverage. So you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.
