#401: Automatic Sound Generation through Physics Simulations & Machine Learning

Dr. Ming Lin has been working on real-time physics simulations since before physics engines were cool. Ming is now actively researching how to simulate audio in real time. Rather than recording or generating sounds that are then played back within a virtual environment, Dr. Lin and her students are pioneering methods for “coupling sound synthesis with sound propagation to automatically generate realistic aural content for virtual environments.” She’s also using machine learning techniques to extract the material properties of an environment from a sound recording.

I had a chance to catch up with Ming after she presented some of her SynCoPation techniques as part of her keynote at the IEEE VR academic conference in March. You can read more about her virtual sound simulation work in the “Interactive Sound Rendering” section of her extensive body of VR-related research.

LISTEN TO THE VOICES OF VR PODCAST

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. My name is Kent Bye, and welcome to The Voices of VR Podcast. On today's episode, I have Dr. Ming Lin of the University of North Carolina, Chapel Hill. Dr. Lin has been working in VR for a number of years, going back to working on physics engines before physics engines were pervasive across all the different video game engines. So I'm going to be talking today to Ming about all the work that she's doing in audio and the future of audio. Just like we do real-time simulations of physics within game engines, we're going to be moving into a time where a lot of the audio is going to be simulated in real time and generated from the material properties within these simulations. So I'm going to be talking to Ming about some of her initiatives on that front and how she's planning on using machine learning and deep learning technologies to extract material properties of objects based upon real-life recordings. And so that's what we'll be covering on today's episode of the Voices of VR podcast. But first, a quick word from our sponsor. Today's episode is brought to you by The Virtual Reality Company. VRC is creating a lot of premier storytelling experiences and exploring this cross-section between art, story, and interactivity. They were responsible for creating the Martian VR experience, which was really the hottest ticket at Sundance and a really smart balance between narrative and interactivity. So if you'd like to watch a premier VR experience, then check out thevrcompany.com. So this interview with Ming happened at the IEEE VR academic conference that was happening March 19th to 23rd in Greenville, South Carolina. So with that, let's go ahead and dive right in.

[00:01:51.132] Ming Lin: My name is Ming Lin. I am a faculty member at UNC Chapel Hill. I have worked in many different areas in computer graphics and virtual reality and robotics. In particular, I'm interested in physics-based simulation, modeling, and interaction. And so that fits in very nicely with this multimodal physical interaction within virtual reality. So I focus mostly on all different forms of physically-based simulation, with an emphasis on the real-time aspect, interactive simulations. In my earlier days, I worked on collision detection. Collision detection is the process where you have to determine when two simulated objects come into contact, and when they come into contact, where did they come into contact? When did they come into contact? And then you use the contact information to actually simulate the collision response. So it was really the essential part of any kind of physics engine. In fact, some of our earlier work has been used in so many physics engines out there, as well as in commercial products from CAD/CAM companies, because it is, like some people call it, an enabling technology, because you can use it in so many different applications. You can plug it into a physics engine, you can plug it into a robotics application, you can plug it into a virtual environment where it would enable you to interact with any object in the scene. So that's sort of my background. I've been teaching and doing research at UNC Chapel Hill for many years, and over the years I have been looking at all these different applications in VR that would require any kind of simulation and modeling. I like to think of VR, virtual environments, as an experimental platform. If you have a hypothesis or you have some design or you have some idea, you want to try and test it out, VR will offer you that platform to experiment with, right, in the virtual world. And since VR or any kind of virtual environment is either a replica of the real world or an imagined equivalent of the real world, you want to try to emulate as many phenomena as possible so that you can test out some of these hypotheses or experiments or designs that you hope to carry out. So you really need to be able to capture the interactions and you need to be able to model the behaviors of objects, and that has always been my focus. And I also focus very much on the human sensory side. In my earlier work, because I do a lot of simulation, haptics was a natural area that I, you know, dabbled in and have worked on for several years. I have a couple of books in that area as well. So haptics is really related to touch-enabled interfaces. Most people think of data gloves, but there is actually computational haptics that enables you to interact with virtual objects in the virtual world through Phantom-like devices. Many of the haptic devices are essentially robotic devices used in reverse. So you can use the device to pick up an object, to move an object around, and when you feel the object, you're essentially receiving simulated forces pushing back. So that creates the sensation of virtual surfaces. So I worked in haptics for many years also, and it's really an interesting area. Just think of the possibility that you can actually feel something that's virtual. It's an extremely powerful way of interacting with any kind of virtual environment. And I think a lot of people, you know, when they think about the virtual world, they still want to have the ability to have this kind of tactile interaction.
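To make the force-feedback idea concrete, below is a minimal sketch of the classic penalty-based approach to haptic rendering: each update, the position of the device's probe point is tested against a virtual object, and any penetration is converted into a restoring force for the device to push back with. This is a generic textbook formulation rather than the specific method from Dr. Lin's haptics work, and the sphere shape, stiffness value, and function name are illustrative placeholders.

```python
# Penalty-based haptic rendering sketch: convert penetration of the device's
# probe point into a virtual sphere into a spring-like restoring force.
# The shape, stiffness, and names are illustrative, not from a specific system.
import numpy as np

def haptic_force(probe_pos, sphere_center, sphere_radius, stiffness=800.0):
    """Force (in newtons) to command on the haptic device for one update."""
    offset = probe_pos - sphere_center
    dist = np.linalg.norm(offset)
    penetration = sphere_radius - dist
    if dist == 0.0 or penetration <= 0.0:
        return np.zeros(3)                   # no contact (or degenerate case)
    normal = offset / dist                   # outward surface normal at contact
    return stiffness * penetration * normal  # Hooke's-law style penalty force

# Probe pushed 2 mm into a 5 cm virtual sphere -> about 1.6 N pushing back out.
print(haptic_force(np.array([0.0, 0.0, 0.048]), np.zeros(3), 0.05))
```

In practice a loop like this has to run at roughly 1 kHz, an order of magnitude faster than the visual frame rate, which is part of what makes haptics expensive and hard.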
And it's a hard problem. It's not solved. I have not been working in the area using the traditional haptic devices like the Phantom for a few years, partly because it feels a little bit limiting, and generally speaking it's also very, very expensive to maintain a lab that requires haptic devices, and, you know, any time a device breaks down it could easily cost a thousand bucks just to fix it. With that price you can get two Oculus Rifts. So I've been sort of moving toward these kinds of multi-touch tabletop devices, like even just your tablets, where you can actually interact just with your finger, with your hands, and it's easy to come by and it's widely available. So that's the kind of touch-enabled interface that I have focused on most recently. But even more recently, I actually have been working on audio rendering, or sound rendering. And part of the reason is that I think audio is critical, sound is critical, because just imagine what we do on a daily basis, right? We see things, but we also hear things. And without sound, you can barely even watch a movie. So I've been working on that for many years, along with my students and my collaborators. And we have focused predominantly on two aspects, just as I mentioned yesterday. One aspect is how sound gets propagated through space, and that matters because it gives you a sense of what the space is like, how big it is. But it also tells you what's coming behind you or to the left, to the right of you. It gives you a sense of directionality. And I think those are really, really critical. And we use our ears all the time. And most of the time, it's just underappreciated. I was talking to someone yesterday, and he was telling me about how people with visual impairments can even play tennis, just simply based on their ears, except their rules for playing tennis are different. You have the tennis ball bouncing on the floor. People with a visual impairment can't allow the tennis ball to bounce twice. And before they serve, they will call out that they are serving. And so it's just the idea that you can actually play tennis without using your eyes at all, simply based on your ears, that is incredibly powerful. That kind of tells you how important our hearing is. And they can't see, yet they can still play sports, purely based on their ears. So I think that's just really, really, you know, it kind of tells me whatever we're working on, it's critical. And that's another area that we're trying to work on: whether we can put together all these technologies that we have and actually do something meaningful for people with visual impairments. And we have been looking at this, because other than visual and haptics and audio, we also do crowd simulation, we also do traffic simulation. And this will be one project that kind of brings everything together. So that's something that we are working on right now. But we also work on generating sound automatically from physics-based simulation. Because there are just so many people who are using game engines nowadays for everything that you can imagine, including virtual environments, simulating interaction with the virtual world. And if we are already simulating the physics, why not just take it a step further so that you can generate sound automatically, directly from the physics simulation itself. And that's what we have been working on: to take advantage of the physics simulation which we are already doing,
to enable interactions or plausible physical behaviors for all the objects in the virtual environment, so that the sound can be automatically generated directly from that kind of physics interaction, instead of trying to do a recording and then trying to fake it. I don't know if you heard my talk yesterday, right? So I showed some clips of how people fake sound, and those fake sounds are generated not even according to what actually happened, but by striking some object that is a close enough approximation to the sound that you might want to hear. And if we are faking it by striking objects that are not even, you know, the same as what you might have seen visually in the movie, why not just let the physics engine do the job for you? And that's sort of our principle: if you already have any kind of physics simulation, just take it a step further to generate a sound that would be natural, that would not be a recording, and that would not be fake. It would actually be generated automatically based on the principles of physics. And that is another area that we have been looking at, and it's a harder problem than most people think, because there are just so many different types of interaction. We have only scratched the surface of this particular area. I think there's just so much more to be done. You know, it's really, really exciting, at least for somebody like me, who has been working on physics-based simulation for years, to see that you can actually take the physics a step further to now generate all types of sound effects. One of the areas that we have also been thinking about was generating explosion sounds. And I have looked at that a little bit with some of my students, and it's not easy. Actually, I would say it's very hard. That's why we haven't done it yet. But we have been looking at it, because an explosion is kind of like a special effect that you see in many, many movies. But yet, you know, at this moment, we don't have the technology to simulate explosion sound automatically yet. We do have some simulation techniques to generate explosions. But the sound generation process has still eluded us. So that's, for example, one of the things that we've been thinking about. And, you know, in the last day or so I've also been talking to folks about applying some of these technologies to help determine, like, speaker placement. Forget about just the house. Somebody told me, have you thought about cars? I said, oh yeah, and the sound effect within a car is actually probably easier to simulate, but we haven't done that yet. So there are many, many different applications. I'm very excited about the potential. We've been thinking about, well, can we apply this to a VR environment, to give you a better sense of directionality, to give you a better sense of where you are, to give you more of a feeling of immersion. But in addition to that, we have also been thinking about how we can use this kind of technology for designing a space, an acoustic space. But there's so much more; you know, we're just sort of starting to think about what we can do. And one of the areas I talked about was, if we have all these different pieces of technology, the other thing that we have worked on was using just one recording to automatically figure out the material properties of an object.
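Impact sounds driven directly by a physics engine are commonly produced with modal synthesis: when the engine reports a collision, the impact excites a small bank of damped sinusoids (the object's vibration modes), and the choice of material shows up in the modal frequencies, decay rates, and gains. The sketch below shows only the general idea; the mode values are invented placeholders, not parameters from her group's simulations.

```python
# Minimal modal-synthesis sketch: an impact excites damped sinusoids whose
# frequencies, dampings, and gains stand in for the material. Values are
# made-up placeholders for illustration only.
import numpy as np

SR = 48000  # audio sample rate in Hz

def impact_sound(modes, impact_strength=1.0, duration=1.0):
    """modes: list of (frequency_hz, damping_per_second, gain) tuples."""
    t = np.arange(int(SR * duration)) / SR
    out = np.zeros_like(t)
    for freq, damping, gain in modes:
        out += impact_strength * gain * np.exp(-damping * t) * np.sin(2 * np.pi * freq * t)
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out

# A vaguely "metallic" bar: sparse, slowly decaying modes.
metal_like = [(440.0, 3.0, 1.0), (1210.0, 4.0, 0.5), (2380.0, 6.0, 0.25)]
# A vaguely "wooden" bar: the same pitches, but heavily damped.
wood_like = [(440.0, 60.0, 1.0), (1210.0, 90.0, 0.5), (2380.0, 140.0, 0.25)]

clip = impact_sound(metal_like)  # one second of audio, ready to write to a WAV
```

Those per-mode frequencies, dampings, and gains are exactly the kind of material parameters that, as she describes next, can be estimated from a real-world recording instead of being tuned by hand.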
And by the same principle, what we would like to do is, if we now have the ability to capture this room visually, so you have the data from the room, you can build an environment for this room. And if we could also use the visual information to reconstruct the environment, what we would love to do is to combine that with the sound propagation technology we have to automatically figure out what the acoustic properties of the materials in the room are. And that's going to be a pretty hard problem, because you're going to have many, many different materials, and it's going to be a large combination of different materials that could give you the kind of sound effect that you hear. But that would be another possible direction that we have been thinking about. And if we are able to achieve that, it would have tremendous applications. Like I mentioned yesterday, if you can see your speaker in the other room, and you can figure out what acoustic effect is being introduced by that room, you can then take whatever speaker's voice has been transmitted through the network to the other end, take that voice, take that recording, or take the audio input, and you can do a deconvolution to remove the environmental effect, and then bring that speaker into the room where you are and add the environmental effect of whatever room you are in to the speaker's voice. Then you can actually literally feel like this person that you're talking to is sitting next to you. And so that was sort of the idea I was talking about here: how do you bring somebody to be next to you? Right. So I think it's a direction that we have been thinking about, and it would have the effect of actually helping people who do teleconferencing all the time. You know, I mean, just think how many phone calls we have made, how many teleconferences I have done last week. And I was just thinking, I can use that to help improve our teleconferencing experience. That would be great, because that would help so many people, other than just myself, to have a better teleconferencing experience, or teleimmersion. Some of my colleagues, like Henry Fuchs, have been working on teleimmersion for many years. That's where I see a lot of capturing of the remote environment has been done, and I think they have made tremendous progress. We could easily combine the advances that have been made in visual reconstruction with audio reconstruction. So that's the kind of thing that we have been thinking about for a future direction: what can we do to improve? So, you know, there's no question visual comes first. We are very visually dominant beings, and, you know, vision has always been kind of the dominant sense, but audio is so critical, and I think the compute power is here now. I think the reason it has been neglected is because we did not have all the computation. We were struggling just to take advantage of whatever compute power we had to solve the visual problem: how do we get more realistic rendering? But the technology for realistic rendering in real time, it's here now. It's here, it's available today. But the audio is just far, far, far away, you know, from everything that I know of. And I think the visual information that we have can in fact help us also solve some of the problems that we have with audio, simply because we can infer more information from the visual information that we receive.
For example, if I'm looking at this environment, I would know it has carpet, it has walls, and there are posters and poster stands. And I can kind of guess what kind of material property that would be. And that would help me initialize my guesses in terms of trying to find out what exactly the parameters are. So that's the basic idea.
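The teleconferencing idea she describes reduces, in the simplest reading, to two convolution operations: if the remote room's acoustics can be modeled as an impulse response, and the local room's impulse response is known, the received voice can be roughly "dried out" with a regularized deconvolution and then re-convolved with the local response. The sketch below assumes both impulse responses are already available, which is precisely the estimation problem she calls hard; the function name and regularization constant are illustrative.

```python
# Room-swap sketch: remove an (assumed known) remote room impulse response
# from a received signal and apply the local room's response instead.
# Uses regularized frequency-domain deconvolution; not a blind-estimation method.
import numpy as np

def swap_room(received, remote_ir, local_ir, eps=1e-3):
    n = len(received) + len(remote_ir) + len(local_ir)
    n = int(2 ** np.ceil(np.log2(n)))         # FFT size with zero-padding headroom
    Y = np.fft.rfft(received, n)
    H_remote = np.fft.rfft(remote_ir, n)
    H_local = np.fft.rfft(local_ir, n)
    # Divide out the remote room (regularized to avoid blow-ups near spectral
    # nulls), then convolve in the local room by multiplying its spectrum.
    dry = Y * np.conj(H_remote) / (np.abs(H_remote) ** 2 + eps)
    return np.fft.irfft(dry * H_local, n)[: len(received) + len(local_ir) - 1]
```

With measured impulse responses this is standard DSP; the open research question she points to is estimating the remote room's response, and the materials behind it, from the audio and visual capture alone.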

[00:16:05.260] Kent Bye: Yeah, so what that makes me think of is that a real-time physics engine is running at about 90 frames per second, and that's kind of the visual input that we need to be able to create this sense of visual continuity. But yet audio is at a rate of like 48,000 hertz. It's a lot higher. I know there was a presentation last year by Dr. Yon Visell from Drexel talking about simulating breaking apart these wood fibers and trying to simulate with a real-time physics engine what that would sound like. To me, it doesn't sound as good as if you were to actually go do some field recordings of actual sticks breaking. How do you get over that fidelity issue? Because at this point, it doesn't seem like the real-time rendering of sounds is good enough to be able to generate it.

[00:16:53.513] Ming Lin: And I think part of that is, for example, related to the characterization of the material, as I already mentioned. I think I showed it: we had a xylophone, and we didn't have the correct parameters. It sounds like a xylophone, but it's not quite there yet. But the second one that I demonstrated, a xylophone on a multi-touch tabletop, that one sounds a lot better. And the reason is that the material property was automatically calculated from a single recording. So that's one of the reasons I say that's really, really critical: being able to automatically get that material property directly from a real-world recording. And in a way, what we were working on is how you take the real-world example and transfer it into the virtual environment. What we did is, in a way, a learning-based approach. It's to learn from reality and bring that into the virtual world. And as I already mentioned, we have just simply scratched the surface. We have only done some of it. It shows the promise, but there's so much more that needs to be done, and we are not there yet. So if we really want a truly, you know, believable and truly realistic virtual world, one that's going to sound as realistic as what we would find in a real environment, then we need to have the ability to transport that kind of information into a simulation. And that paper that you were thinking about, I don't believe they have incorporated any of these techniques, right? So one thing that definitely needs to be done: any kind of audio that we hear really, truly depends on having the correct simulation parameters. And a lot of people don't even know how to generate sound for all these different phenomena. What we have focused on primarily has been impact sounds due to object interactions, and also things like liquid sounds. So, very common phenomena. And wood breaking, that's a little bit more niche, right? And stepping on leaves, that's even more so. And for every single interaction, you have all these material properties that you need to worry about. So if you don't have the right parameters, you're not going to generate a sound that's going to sound as realistic as possible. So I think there's a tremendous amount of promise, but the problem is there are just not enough people working in that area, and there are just not enough resources trying to crack this difficult problem. Because, you know, my joke is that audio rendering is in its infancy. So if you look at it that way, what we have is a tremendous amount of promise right here, but there's just simply not enough effort. And you don't look at a baby and say the baby can barely even crawl, forget about one day the baby is going to be the next Einstein. You just simply don't give up on a baby because the baby can barely crawl today, right? So by the same analogy, I think the audio technology has promise, but the reality is there has not been enough investment in that area for us to create this auditorily rich experience. And I think if we do want to create a truly immersive environment, we need to get there. This is something that we have to work on. And visual, you know, visual has also come a long way. If you look at computer graphics in the 70s, I mean, think about that. Have you ever seen a picture that was generated in the 70s? That is sort of where we are right now for audio, right?
And so if you look at computer graphics in the 70s, look at HMDs in the 70s, and look at what we have today, I would say that we haven't made that progress or that transition for audio yet. And for that reason, it's also a really exciting area for graduate students, because on a problem that has been worked on heavily for so many decades, it's going to be so much harder to make any new progress, versus a problem that has barely been touched. So it's about being kind of opportunistic: trying to find an area which really needs some help and where not enough work has gone in.
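A drastically simplified version of the single-recording estimation she mentioned (the multi-touch tabletop xylophone) can be sketched as peak-picking on the spectrum of a recorded impact to get candidate mode frequencies, plus a straight-line fit to the log of the amplitude envelope to get a decay rate. Her group's published method fits a much richer material model with per-mode damping; this sketch only conveys the flavor, and the frame size and threshold are arbitrary choices.

```python
# Toy estimation of modal parameters from one recorded impact:
# strongest spectral peaks -> candidate mode frequencies,
# slope of the log amplitude envelope -> a single overall decay rate.
import numpy as np

def estimate_modes(clip, sr=48000, num_modes=3, frame=512):
    spectrum = np.abs(np.fft.rfft(clip))
    freqs = np.fft.rfftfreq(len(clip), 1.0 / sr)
    peak_bins = np.argsort(spectrum[1:])[-num_modes:] + 1   # skip the DC bin
    mode_freqs = sorted(freqs[peak_bins])

    n_frames = len(clip) // frame
    env = np.array([np.max(np.abs(clip[i * frame:(i + 1) * frame]))
                    for i in range(n_frames)])
    keep = env > 1e-6                                        # ignore silent frames
    times = (np.arange(n_frames) * frame / sr)[keep]
    slope, _ = np.polyfit(times, np.log(env[keep]), 1)
    return mode_freqs, -slope   # frequencies in Hz, decay rate in 1/s

# The outputs could be dropped straight into a modal-synthesis bank like the
# one sketched earlier to resynthesize an approximation of the recorded object.
```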

[00:20:40.607] Kent Bye: Now, I know that engines like Unreal and Unity have their own real-time physics engines, and in some ways I imagine that they're able to calculate these very complicated interactions in real time. Do you foresee a time in the future when we'll have these real-time audio engines that are able to take the interactions that are happening within the environment and then generate realistic-sounding audio from that?

[00:21:03.071] Ming Lin: I think so. I think so. I mean, at a minimum, you have better recordings. But also, I think you're going to get better propagation effects. I think what will happen first is that people are going to have more realistic recordings. And then the next thing is people are going to have more realistic room effects being added, because you can add a more realistic effect, and you can simulate this more realistic effect based on the existing recording. And the next thing is you are definitely going to have more sound that's going to be generated by the physics engine. And then people will start to think about how we can make that more realistic: how can we bring real-world materials into the virtual environment? So I think it's kind of a step-by-step process. We are going to get there, but it will take time, simply because there are just not enough resources and not enough people working on the problem. I mean, if you think about a physics engine, I remember in the 80s, there weren't that many people working on physics engines. It was just a small handful of people. And just look at the number of people who don't need to work on a physics engine themselves; they're just using a physics engine. I mean, just this transition over the last two decades, at least for me, it's tremendous. You know, in the late 80s, early 90s, nobody was simulating; I mean, very, very few people were simulating physics, right? It was just this very, very small number of people who even focused on this as part of their research. But now it's like every game engine has some sort of physics engine embedded in there. I would never have thought, you know, that we would have this many resources available. So, I mean, I think progress can be made. And I'm hopeful. I'm hopeful. I think that one day we are going to have some sort of audio, you know, engine. I mean, let me just be clear. There have also been a lot of audio libraries, but they're doing mostly DSP, digital signal processing, right? So there are actually a lot of DSP libraries out there that provide a lot of different kinds of audio processing support. So they are like MIDI, you know, taking a recording: how do you generate, how do you mix the song. So there has been a lot of work in that area. But they are very, very application specific. It's not yet generalized to generating all the types of sound that we might need for a virtual world.

[00:23:17.800] Kent Bye: And so as you're coming up with these models, do you foresee a time of being able to take, like, an ambisonic recording of a sound field and then use machine learning techniques to kind of refine the actual models?

[00:23:30.727] Ming Lin: Yeah, that's something that we're working on. Yeah, we are. We have already been thinking about this for a while. And we are working on that, and we are trying different kinds of learning techniques so that we can learn more from a single audio recording and then try to extrapolate and figure out a more generalized model that can be applied to essentially any kind of virtual object. So we are already working on that. You know, I think that particular direction is not any surprise, because all the recent deep learning techniques that people have been really, really excited about actually started from voice recognition, speech recognition, and speech processing, and got really, really good results, and the techniques have been adopted by the vision community, so they have been using them for recognition of images. So I would imagine it's only natural that it's going to come back full circle, and we're going to do audio recognition and audio identification and extraction. So yeah, I think we are going to get there. My guess is it's going to happen in the next five years. If we are good, maybe we'll make some progress within the next year.

[00:24:41.615] Kent Bye: And one of the things that I've observed within the virtual reality community and games in particular is that anytime you have like real-time physics interaction, it's like really super compelling. Why do you think it's so compelling to see real-time physics within VR?

[00:24:55.908] Ming Lin: Well, as I already mentioned, mostly you want to create a virtual environment not just to look at, but to interact with and to do something with. And so any time you want any kind of interaction, you have to be able to simulate the physical interaction between objects. And that's just simply based on the laws of physics, right? Like, if you have a ball bouncing around, you want it to bounce off the wall, you want the objects not to interpenetrate, and you want them to interact with each other in a way that's going to obey the laws of physics. So not having that kind of behavior automatically simulated is going to break your illusion immediately. And so I think that the ability to actually simulate an interaction becomes critical. And when we see it, we recognize how essential it is and how it helps the virtual environment. It makes the virtual environment more realistic. It enables you to do the things that you want to do with the virtual environment. My joke is you want to be able to touch and feel something, and then you want to be able to manipulate the object. In the good old days in the 90s, you could walk through the wall like you're a ghostly figure, and that is totally unrealistic. You probably would say it's uncompelling; no one can walk through a wall, and any object you wanted to pick up just kind of fell through your hands and your fingers, because it's kind of like ghostly images, right? And I think that's why the physics engine contributed and added so much to the virtual environment, because it certainly made objects become real, become solid, because it's enforcing the laws of physics, right? You cannot just push through an object like it doesn't exist. So I think that's the biggest reason why physics engines are so popular.

[00:26:40.500] Kent Bye: And finally, what do you see as kind of the ultimate potential of virtual reality and what it might be able to enable?

[00:26:46.582] Ming Lin: Yeah, too many things, just about everything. You know, I just had this discussion. I am surprised that, you know, more government agencies aren't putting investment into virtual environments. Because I like to think virtual reality is like a platform. It allows you to conduct your experiments, all kinds of experiments, but it also allows you to replicate your reality. And it's at a much cheaper cost once you have it, right? Because you can design things in the virtual world. You can prototype structures, very complex structures, and interact with them in the virtual world. You can figure out what the problems are in the virtual environment. You can train somebody to learn how to assemble and disassemble a structure, to maintain very complex machinery, all in the virtual environment. You can train someone to operate on a human, on a virtual human. So it's good for medical simulation, it's good for medical training, it's good for training first responders, it's good for training the police, it's even good for training a group of soldiers going to a foreign country, and it's good for training people with fears and phobias. It's good for designing anything that you can imagine. Build your dream house, build your concert hall, build your classrooms, build your churches. I mean, just imagine designing your cathedral. Just a host of things. Or just for an average person to be somewhere they cannot be, like someone who cannot travel due to physical limitations; they can be there through a virtual environment. And when I say travel in space and time, I literally mean that. You can go back in time and see something that you are not able to see today. So you can reconstruct a historical artifact. You can reconstruct a period of time in which you could not possibly be. And that ability to travel in time and in space is extremely powerful, even if it does nothing else. But it has tremendous potential for scientific exploration, for space exploration, for all kinds of things. So I think our imagination is the limit of what VR can be. And so I like to think that VR is so much more than everything that we have seen so far. Awesome.

[00:29:04.674] Kent Bye: Well, thank you so much.

[00:29:05.595] Ming Lin: All right. Thanks again for the time.

[00:29:07.737] Kent Bye: So that was Dr. Ming Lin. She's a professor of computer science at the University of North Carolina, Chapel Hill. So I have a number of different takeaways from this interview. First of all, I really haven't been able to stop thinking about the future of audio and simulation in this way. I think that at this point it's pretty limited in terms of what you're able to do in terms of capturing a field recording, putting it into a virtualized environment, and then trying to add a lot of different reflections to recreate the feeling of the space and of what things actually sound like. I think in hindsight, if we look back at where audio is right now, it's going to just kind of feel like these pixelated Atari games, because it's just so low fidelity in terms of the spatialized experience that we get from audio. And there are a number of different things that we talked about earlier this week on the Voices of VR podcast, talking about the OSSIC headphones, which I think are going to be important, as well as, you know, the more proprietary solutions for audio object-based formats to recreate this spatialization through the approach of creating virtualized rooms. But I'm really excited about where things are going to end up in the future with all this innovation that is left to be done in terms of actually simulating what things sound like through the material properties and hearing that in real time. And so last year at IEEE VR 2015, I did this interview with Yon Visell, who was doing all these physics simulations and specifically focusing in on haptics, but there was this crossover into doing these audio simulations of figuring out the material properties of things, and there just happened to be some overlap there between the haptics and simulating the sound. And some of the audio of what things sound like when they're being simulated by a computer, it still very much sounds like it's computer generated. It doesn't sound like a real recording at all. And I think that in VR we're getting a lot closer in terms of the visual fidelity of actually recreating something in VR and making it kind of give you this sense that it can trick your mind a little bit. At least with the physics engines, you know, we have this ability to simulate physics, and I think these physics interactions and all the things on the back end that are happening there, it's pretty remarkable in terms of what they've been able to do. Essentially, they're doing that at like 90 frames a second by taking a different approach to how they're solving the math around things. It's a little bit more of an approximation rather than doing something that's really super precise. And so I'm really excited to see where this future of audio simulation within VR is headed. And it's still very early, and I think it's going to be a long while before we get to the point of being able to generate sounds and not be able to tell the difference as to whether or not it was generated by a computer. But I think with the advent of machine learning and deep learning, a lot of these techniques for being able to extract these material properties of the world, I think it's super fascinating. I mean, it's something that we as humans wouldn't really be able to do.
And the fact that artificial intelligence can start to give us some of those specific numbers, I'm really excited to see this future of blending these AI and machine learning techniques, applying them to audio within VR, and moving towards this vision of actually doing real-time simulation of the audio. I think it's just going to be a richer experience. And like Pete Moss said in the last interview that I did in episode 400, audio really sells the space. And I think that's absolutely true. It's something that's subtle and kind of the last thing that really gets paid attention to, and I think that for the experiences that actually pay attention to it first, it makes a huge difference. If you haven't seen the experience of 6x9 on the Gear VR, you should absolutely go check it out, and check out the interview that I did with Fran Panetta back in episode 287. I met her at Sundance, and she was doing a lot of audio design work, and she comes from an audio production background. That's a great example of somebody who is thinking about audio first, and you can see how many layers of presence you feel in that solitary confinement experience of 6x9. I think if you're interested in audio and the impact that audio can make, that's definitely an experience that you should check out. So Ming was giving a keynote to the IEEE VR community about some of her latest research, and I'm just excited to have her on the show and to share that with the broader VR community, because Ming is kind of in the minority, even within the VR community, in thinking about doing real-time simulations of audio, and so I just want to help spread the word that this is something that's on the horizon in the future and something that I'm looking forward to hearing more about. That was terrible. I'm sorry. Anyway, I just wanted to thank you for listening to the Voices of VR podcast. If you enjoy the podcast, then spread the word. Tell your friends directly, or if you want to indirectly tell the world, then please do go to iTunes and leave a review and share some thoughts, and just help bring more attention to what's happening here on the podcast. And if you'd like to help out financially, then please do consider going to patreon.com slash Voices of VR.
