Visbit is a B2B, 360-video streaming distribution company based out of Silicon Valley, and co-founder and CEO CY Zhou is originally from China. I caught up with CY at CES 2017 to talk about the streaming service and application SDK that Visbit provides to 360 filmmakers who want to distribute immersive video content on demand.
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
This is a listener-supported podcast through the Voices of VR Patreon.
Support Voices of VR
- Subscribe on iTunes
- Donate to the Voices of VR Podcast Patreon
Music: Fatality & Summer Trip
Rough Transcript
[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye and welcome to the Voices of VR podcast. So I was just in Qingdao, China and had a chance to run into CY Zhou. He's one of the founders of Visbit Streaming. And I've run into CY at a number of different conferences over the last couple of years. First saw him back in 2016 at TechCrunch Disrupt. And then ran into him again at CES 2017, actually did this interview with him, and then afterwards kept running into him at either VRLA or SVVR. They're doing this streaming service for 360 videos, so if you want to do something outside of YouTube and be able to create your own app, and have your own 360 videos, whether it's something where you're doing narrative content or training content, then you can use their streaming service and their SDK to be able to both host and serve up video content dynamically. And so that's what we're covering on today's episode of the Voices of VR podcast. So this interview with CY happened on Thursday, January 5th, 2017 at the Consumer Electronics Show in Las Vegas, Nevada. So with that, let's go ahead and dive right in.
[00:01:22.323] CY Zhou: Sure, thank you, Kent. I'm CY, I'm CEO and co-founder of Visbit. Visbit is a Silicon Valley-based technology company. We focus on virtual reality. Especially, we try to solve a problem we think is really important in virtual reality, which is that virtual reality videos, or what people call 360-degree videos, are usually really big. They're 10 times or even 20 times bigger than regular videos. So as a result, most people either have to download and then watch later, or they have to stream at a low quality and watch online. So we came up with a solution we call View Optimized Streaming, which can improve the streaming efficiency by up to five times. We do that by only streaming where you look, but when you turn your head to another direction, we can give you new data so fast you cannot notice it. So we call it View Optimized Streaming, and we have a lot of patents to cover it. We have a product built on top of the technology, and we make it a cloud service so that all content publishers can use our service to deliver their content at the highest quality to their consumers. We can stream 4K to 8K resolution VR videos over regular Wi-Fi and LTE.
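To make the "only stream where you look" idea concrete, here is a minimal sketch of viewport-adaptive tile selection: split an equirectangular 360 frame into tiles and request high quality only for tiles inside the current field of view. The tile grid, field of view, and margin are illustrative assumptions, not Visbit's actual, patented implementation.

```python
import math

# A minimal sketch of view-optimized (viewport-adaptive) streaming:
# request high quality only for tiles near the current view direction.

TILE_COLS, TILE_ROWS = 8, 4          # hypothetical tiling of the sphere
FOV_H, FOV_V = 100.0, 100.0          # assumed headset field of view, degrees

def tile_center(col, row):
    """Yaw/pitch of a tile's center, in degrees."""
    yaw = (col + 0.5) / TILE_COLS * 360.0 - 180.0
    pitch = 90.0 - (row + 0.5) / TILE_ROWS * 180.0
    return yaw, pitch

def angular_diff(a, b):
    """Smallest absolute difference between two angles, in degrees."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

def tiles_to_fetch(head_yaw, head_pitch, margin=20.0):
    """Return (col, row, quality) for every tile: high quality in view."""
    plan = []
    for row in range(TILE_ROWS):
        for col in range(TILE_COLS):
            yaw, pitch = tile_center(col, row)
            in_view = (angular_diff(yaw, head_yaw) <= FOV_H / 2 + margin and
                       abs(pitch - head_pitch) <= FOV_V / 2 + margin)
            plan.append((col, row, "high" if in_view else "low"))
    return plan

if __name__ == "__main__":
    plan = tiles_to_fetch(head_yaw=30.0, head_pitch=0.0)
    high = [t for t in plan if t[2] == "high"]
    print(f"{len(high)} of {len(plan)} tiles requested at high quality")
```

In a real player, the "low" tiles would come from an always-available low-resolution layer, so the scene never goes black when the head turns faster than the network can respond.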
[00:02:37.055] Kent Bye: So yeah, I had a chance to try out a demo at TechCrunch Disrupt and was impressed, though I could see a little bit of the artifacting effects. So you're actually doing different resolutions that you're overlaying, and whenever you're looking at it, it'll update with the higher resolution. So it can kind of do a dynamic update based upon where you're looking. So maybe you could talk a bit about what you're actually doing to be able to accomplish that.
[00:03:03.564] CY Zhou: Yeah, that's actually, as people say, only streaming where you look. People use a name for it, foveated streaming. So for foveated streaming, the most difficult thing, as Kent just mentioned, is that when you turn your head, you need some trick to get the content as soon as possible. That involves a lot of work, because for the latency from low resolution to high resolution, you have to get data from the internet, then you have to decode the data, do the computation, then render it to the display so people can see it. So there's a bunch of things that add up to the latency. So you have to design a system so that you can squeeze all the latency together into a really, really short latency. And one piece of good news is that we can do motion prediction. The human head moves around, but compared to machine speed, you move really, really, really slowly. So before you get to a new direction, we already know where you will be looking in maybe another one or two hundred milliseconds. So we can preload the frame before you look at it. That helps us reduce latency a lot. Also, there are a lot of tricks in computational photography and computer graphics which can help us do the rendering really fast. You know, on a mobile device, you are really limited by the computation power. To do things fast, you need a really powerful CPU and GPU. With a limited CPU and GPU, you have to optimize the algorithm to do things fast. So, in summary, there are a bunch of things that come together to reduce the latency to a level people cannot notice.
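As a rough illustration of the motion-prediction step CY describes, here is a sketch that extrapolates recent head-yaw samples a short interval into the future so a player could prefetch tiles for the predicted direction. The constant-velocity model, the 150 ms horizon, and the sample window are assumptions for illustration, not Visbit's actual predictor.

```python
from collections import deque

# Sketch: predict where the head will be pointing ~150 ms from now by
# extrapolating recent yaw samples at constant angular velocity.

class HeadYawPredictor:
    def __init__(self, horizon_ms=150.0, window=5):
        self.horizon_ms = horizon_ms
        self.samples = deque(maxlen=window)   # (timestamp_ms, yaw_deg)

    def add_sample(self, t_ms, yaw_deg):
        self.samples.append((t_ms, yaw_deg))

    def predict(self):
        """Extrapolate yaw at (now + horizon), assuming constant velocity."""
        if len(self.samples) < 2:
            return self.samples[-1][1] if self.samples else 0.0
        (t0, y0), (t1, y1) = self.samples[0], self.samples[-1]
        if t1 == t0:
            return y1
        # unwrap across the +/-180 degree seam before computing velocity
        dy = (y1 - y0 + 180.0) % 360.0 - 180.0
        velocity = dy / (t1 - t0)             # degrees per millisecond
        predicted = y1 + velocity * self.horizon_ms
        return (predicted + 180.0) % 360.0 - 180.0

# Usage: feed sensor samples, then prefetch for the predicted direction.
pred = HeadYawPredictor()
for t, yaw in [(0, 0.0), (20, 2.0), (40, 4.0), (60, 6.0)]:
    pred.add_sample(t, yaw)
print(f"predicted yaw ~150 ms out: {pred.predict():.1f} degrees")
```

A real predictor would likely also track pitch and read the headset's IMU directly, but the idea is the same: request data for where the head will be, not where it is.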
[00:04:37.604] Kent Bye: And so I know that there are compression codecs. MPEG-4 is one that actually gets a very small file size, but the trade-off is that it takes a lot of time to both encode and decode. So did you come up with your own codec to be able to do this, then?
[00:04:53.672] CY Zhou: We thought about it and finally decided not to do our own codec. The main reason is that we try to make it compatible with all devices. If we want to make it run on all smartphones, you cannot do your own codec, because that needs some hardware support, or you have to run it on the CPU, which is not doable on mobile phones. So then we had to choose the traditional codecs and work on top of them to make it work. And luckily we made that work, and now we can run on every phone.
[00:05:25.960] Kent Bye: So as a consumer of 360 video, I think the user experience of that has been pretty poor for the first couple years of virtual reality, just because you go and you want to watch a video, but then you have to download it and you wait, and it could take anywhere from 5, 10, 15, up to 20 minutes, depending on how fast your Wi-Fi is. And only then can you actually watch the content. But what you're essentially suggesting is that with the Visbit technology, you're going to essentially push play, and it's going to dynamically stream and track where you're looking. And then using your technology being delivered from the cloud, people will be able to instantaneously watch 360 video.
[00:06:01.014] CY Zhou: Yeah, exactly. And also, as Kent just mentioned, there are still some problems we have to solve to make the technology really, really easy to use and play smoothly. And there are some artifacts. Sometimes if you turn your head really fast, you can still see them. We are solving that along the way. Every week we have a new build, and that's how the technology becomes more exciting. Every time we're getting better and better. Hopefully in 2017 all people can use our technology and see really good content without downloading it.
[00:06:31.960] Kent Bye: Can you talk about some of your customers in terms of who's actually using and deploying this technology?
[00:06:36.864] CY Zhou: Sure. We're now at the stage of closed beta. There are more than 60 companies that applied to use our technology. We picked five of them, and we work closely with them in our closed beta. And very soon, maybe in a few months, we'll have an open beta so that all people can use it. Can you name some of those companies? We are still at a very closed beta stage. There are a few. They do education. They do healthcare. Maybe we can tell that publicly next time, when we have the open beta.
[00:07:09.036] Kent Bye: OK. So for you, what's next for Visbit?
[00:07:13.135] CY Zhou: First thing, we need to grow the team a little bit. As you can tell, there are a lot of technology problems we have to solve here to make this technology really work well, especially for all devices at even lower internet bandwidth. So we're looking for really good talent who are good at streaming, and also at OpenGL, to join our team. And we're trying to improve our technology so that we can stay in front of other possible competitors with a big gap. That's our hope in 2017. And in 2017, I hope there are some big players we can work together with and really help a lot of users in the VR community.
[00:07:56.668] Kent Bye: So can you talk about the business model in terms of right now, 360 video, I think it's challenging because VR is so new. We're still waiting for a high enough adoption to be able to have 360 video creators charge people for the content that they're watching. So we're kind of in this area where everything's free. But yet, if you're a startup that's expecting to charge people who are giving their content away for free, then I'm just trying to figure out how the revenue is going to come in. Are you going to license this technology to these companies? Or are you planning on eventually getting bought out by a big streaming company like YouTube that is going to integrate your technology into their system? Or what's kind of your strategy in terms of business model in the whole 360 video ecosystem?
[00:08:37.938] CY Zhou: This is a great question. So think about VR videos: it's a long pipeline. People do content creation, people make a platform so that consumers can watch. We are just one small piece in the whole pipeline, and we help people solve the problem of delivering content. So it's more like we do B2B business; we try to help other businesses reach more users. To give one number here: without our technology, only 20% of the U.S. population can watch 4K videos. With our technology, we can improve that number to 80% of the U.S. population. And we make it a B2B business so that all content publishers or content creators can use our service to deliver the highest quality to their consumers. For that, we make end-to-end streaming. We have a web portal, so if you have content, you can upload it to our web portal. And on the other end, in your app, you integrate our SDK. It's pretty simple. And you can stream all the data from the cloud, obviously. After people watch it, you can see all the BI information. You can tell where people look in every video you have, and you can get a lot of information from our platform. So content publishers don't have to do all this work. Visbit handles all the technical problems for them so that they can focus on content creation, which is really difficult already. So they can focus on content creation, and they can focus on how to acquire more users and understand user needs, and it's kind of a loop. They acquire users and create content, and we help them deliver it. So I think that's how we can use our technology to help the community.
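As a back-of-the-envelope illustration of why streaming only the viewed region helps, here is a small calculation with assumed numbers; the bitrates, viewport fraction, and fallback cost are illustrative assumptions, not Visbit's own figures.

```python
# Illustrative arithmetic (assumed numbers) for the "up to five times"
# efficiency claim: stream the viewport at full quality and the rest of
# the sphere at a much lower quality.

FULL_SPHERE_MBPS = 50.0       # assumed bitrate for the whole high-res sphere
VIEWPORT_FRACTION = 0.15      # ~100x100 degree viewport, roughly 15% of sphere
LOW_RES_FACTOR = 0.05         # assumed cost of the low-quality fallback layer

viewport_mbps = FULL_SPHERE_MBPS * VIEWPORT_FRACTION    # high-res region
fallback_mbps = FULL_SPHERE_MBPS * LOW_RES_FACTOR       # rest of sphere, low res
total_mbps = viewport_mbps + fallback_mbps

print(f"full sphere:     {FULL_SPHERE_MBPS:.1f} Mbps")
print(f"view-optimized:  {total_mbps:.1f} Mbps")
print(f"approx. savings: {FULL_SPHERE_MBPS / total_mbps:.1f}x")
```

With assumptions like these, the viewer needs roughly a fifth of the bandwidth of a full-sphere stream, which is the kind of reduction that moves high-resolution 360 video into the range of ordinary Wi-Fi and LTE connections.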
[00:10:21.493] Kent Bye: Yeah, because I know a lot of the 360 video production companies, the ones that are bigger, like Within, they have their own app that you can go in and watch their whole content. So I'd imagine if people are trying to build up their own brand and business, the trade-off for using something like YouTube would be that your content's out there for free. But if you're trying to have a company and a service, it sounds like you have a number of different educational applications, so if people want to have different training videos that are 360 videos, then they could somehow have that money exchanged through buying and getting access to that application, which would then generate enough revenue to actually pay for the streaming that you're doing. So that B2B model seems like a good approach for these use cases. And I'm just curious if you could tell me a little bit more about the education, or what kind of things people are doing with that kind of more closed content.
[00:11:12.808] CY Zhou: Exactly, I think that's really the idea. We try to help the content creators and publishers keep their own brand, and we are just the people behind them. The partners we have cover healthcare and also education, for example, which is the question you asked. For example, the United Nations is trying to help some women find jobs, and they made a VR app to help them do job interview training. For those unemployed women, it's hard to pay a lot of money to attend training courses, but they can watch VR videos to see, oh, if I was in an interview, how should I perform? And that would be very helpful for these women. I think that's how VR can be useful for helping society. Also, for healthcare, sometimes we can have remote clinics. And also I see in some medical schools, for the young students, the first time they enter the operating room, it's a big surprise. So VR can help here. Before they enter the operating room, they can watch a VR video and get themselves used to the environment. That can help the training a lot, and they will do much better.
[00:12:27.528] Kent Bye: Great. And finally, what do you see as kind of the ultimate potential of virtual reality and what it might be able to enable?
[00:12:35.809] CY Zhou: The potential of virtual reality is a big question. To me, I think virtual reality is a new human-computer interface. If you think about it that way, I think the question becomes pretty simple. Think about going from DOS to Windows, from keyboard to mouse, and from monitor to touchscreen. I think VR is just the next step. And this is a big step, so it will take a long time, and people should be more patient. I think in 2017 a lot of exciting things will happen, especially as some of the technology becomes more mature. If you look at CES today, you can see a lot of the cameras, which were not available last year. If you had come here last year, in 2016, you would have seen some VR cameras that were really early-stage prototypes, and today you can see products you can buy. With that, it really makes the VR video pipeline work. Otherwise, without a camera, you cannot do all the things I have just described for education and healthcare. So now you can buy a camera, you can use Visbit streaming to stream it, and you can easily make an app because we make the SDK for you. So I think 2017 will be a big improvement for the experience. And looking forward, I think VR video is just the first step. It's the low-hanging fruit compared to more interactive VR games. And I think in 2018 and 2019 you'll see a lot of change, because it's a new human-computer interface.
[00:14:05.299] Kent Bye: Oh, wow. So you said that Visbit can actually do a little bit of a white-label application, so that people who want to create their own application could design the look and feel, but have all the technical back end that delivers the videos just using your application. Is that right?
[00:14:22.137] CY Zhou: Yeah, so our SDK, or software library, solves most of the video streaming and playback problems for VR, which are pretty hard, even for experienced technical engineers. Along with that library, we also provide sample apps. So for an engineer from another domain, you can look at a sample app, try to do the same thing, and use our library. Then you can easily stream and play back. Awesome. Well, thank you so much. Thank you, Kent.
[00:14:51.524] Kent Bye: Thank you. So that was CY Zhou. He's one of the founders of Visbit streaming, and yeah, I had a chance to talk to him at CES and I've kept running into him over the last couple of years, and they keep improving the Visbit streaming and coming in at higher and higher resolutions, up to 8K. And yeah, that's just a technology that I think has found a niche, enough to be able to allow people to have a business model, you know, if you have training content. They've now released the open beta back in April of 2017, and they've announced more of who some of the users of Visbit are, including Vive, AMD, Dell Technologies, Airpano, Canda, Z Cam, Primacy, CloudWave, and 3D Live AXO. So it seems like they've been able to find a niche as kind of a middleware to be able to serve up any type of 360 video content that goes above and beyond what something like YouTube might be able to do. You know, if you want to get something out there and available freely for everybody, then I think that either the Facebook streaming or YouTube sharing is great. But if you actually want to create a business around it, then I think that's where it becomes a little bit more tricky to be able to, you know, have an app that you're going to be able to sell and then be able to control the access to some of the videos that you may not want to just have generally available freely for everybody. So yeah, it seems like their technology seems to be working really well in terms of having this dynamic streaming and being able to actually watch a video dynamically. And I think this is something that across the board is getting better and better across all the different streaming services for 360 video, whether it's Visbit streaming or YouTube streaming or Facebook 360 videos. You know, in the early days of virtual reality, watching 360 videos, you really kind of had to download the video; otherwise, it was really kind of a degraded experience. And so they're up to 8K streaming now that they're able to produce for Visbit. And yeah, it seems like that's a solution, and I keep seeing Visbit streaming around these different types of conferences and events. And so I was just in Qingdao, China for the Sandbox Immersive Festival and ran into CY there again, and I was kind of reminded of this interview that we did back at CES, and yeah, just wanted to get it out. And yeah, they're a company that is actually based out of Silicon Valley, but the founders are from China and have connections in China, and they're also working within the content ecosystem of China as well. So that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon your donations in order to continue to bring you this coverage. So you can donate today at patreon.com slash Voices of VR. Thanks for listening.