#815: Neuroscience & VR: Music, Body Sway, & Synchrony at the Institute for Music and the Mind

Laurel Trainor is the Director of the McMaster Institute for Music and the Mind, which houses the LIVELab, a research concert hall for 100 people that allows her to run many studies on the relationship between musical performance and how it is received by an audience. She has been using a number of immersive technologies, including motion tracking, to study how body sway serves as a form of bi-directional, non-verbal communication between musicians. She has also been able to study synchrony, the impact of movement in the audience, and how audience members communicate with each other. The LIVELab also supports sophisticated spatialized audio and can recreate the sound of live performances, which allows her to research the role of live embodiment in listening to music.

I had a chance to catch up with Trainor at the Canadian Institute for Advanced Research Future of Neuroscience & VR Workshop in New York City. We talked about the role of body sway and non-verbal communication in playing music, the importance of synchrony in group dynamics, and how deficits in perceiving time and rhythm could be a factor in a number of major developmental disorders, including autism spectrum disorder, attention deficit disorder, dyslexia, and developmental coordination disorder.

LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST

This is a listener-supported podcast through the Voices of VR Patreon.

Music: Fatality

Rough Transcript

[00:00:05.452] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR Podcast. So continuing on in our series of looking at the future of neuroscience in VR, today's episode I feature Laurel Trainor. She's a professor at McMaster University. She's focusing on audition and music and how people interact with each other. But she's also the director of the McMaster Institute for Music and the Mind. So she's got this whole concert hall that's able to do all sorts of research in terms of watching the musicians and their body sway and how their body sway impacts each other, the audience and how they move, how they're impacting each other, and all sorts of different aspects of synchrony and group dynamics and body sway. Lots of really fascinating stuff, looking at the intersection between music, our body movement, and the way that we synchronize with each other. So that's what we're covering on today's episode of the Voices of VR podcast. So this interview with Laurel happened on Thursday, May 23rd, 2019 at the Canadian Institute for Advanced Research Future of Neuroscience and VR Workshop that happened in New York City, New York. So with that, let's go ahead and dive right in.

[00:01:18.158] Laurel Trainor: My name is Laurel Trainor and I'm a professor at McMaster University. I'm very interested in audition and music and what that means for kids and adults and interactions between people. I also direct the McMaster Institute for Music and the Mind. And this is a very unique venue. It's a fully functioning concert hall that holds over 100 people. But in it, we have all kinds of equipment for monitoring how musicians interact with each other, how audiences interact with each other, how the two, audiences and musicians, interact, and exploring ways that audiences can become more immersed in a performance and more in control of what goes on in a performance. So we are basic scientists, but we also are very interested in health applications as well as just general applications for people to use in their everyday lives.

[00:02:22.692] Kent Bye: And we're here at the Canadian Institute for Advanced Research; they have the Future of Neuroscience and VR workshop. And so you were talking a bit about using motion tracking to look at body sway. And maybe you could talk a bit about body motion, body sway, and head movement, and how you're able to extract meaning out of how people move their bodies and how they interact and communicate with each other.

[00:02:45.233] Laurel Trainor: Well, many non-verbal cues, and body movement is a big one, are really important for how we interact with each other. So we've been using music as a model for understanding some of these non-verbal mechanisms. So, for example, we have put motion capture markers on musicians in a group, say a string quartet, and looked at basically their body sway. And these are movements that the musicians don't need to make in order to play their instruments, but they all make them. Sometimes they're not even aware that they're doing it. It's a little bit like hand gesturing when we talk. And in fact, if you prevent people from moving their hands when they talk, it actually makes it more difficult for them to speak. And this is truly an example of what we call embodied cognition. So the way that we perceive the world, the way that we think, all uses the motor system, motor planning and premotor areas of the brain in order to accomplish these things. So when you're just perceiving or thinking, the motor system is involved. So we thought that maybe this body sway that musicians are doing is actually reflective of the thought processes that are going on as they're planning their motor movements and their musical conception of how they're going to phrase the next bit of music coming up. So we took the body sway of the four different musicians, and rather than looking at synchrony, which is also important, we were more interested in this case in looking at communication. So we wanted to know whether the way one musician just moved would actually influence how another musician was going to move next. So we're looking at prediction. Can we predict from how one musician is moving how another one is going to move? And we can, of course, look at that bidirectional relationship between all the pairs of musicians. And so, in fact, when we do this, we can figure out who is leading and who is following to a greater or lesser extent. It turns out that if we assign different people to be the leader, they actually influence others more than people who are assigned to be followers. To some extent, the music itself dictates who's the leader. So if someone's playing the melody part, they tend to be more influential than someone playing an accompaniment part. Interestingly, if the musicians can't even see each other, we can still predict, from how one person moves, how another one's going to move next. So vision is involved, but it's not the whole story. We think that this communication that we're measuring between the musicians is actually reflecting their thought processes. And it's coming out not only in how they're moving, but it's also reflected in the sounds that they're making. So in fact, the sound that one musician is producing is giving clues to another musician about what they intend to do next. And that other musician is then influenced by that and may modify how they're going to play the next bit of music. So there's this very complicated dynamic of how the musicians are all influencing, listening to each other, planning their own next moves, and so on. And if you think about it, if they weren't doing that, it would be pretty much impossible to play together. Because music speeds up, it slows down. We phrase things, increase or decrease dynamics. We do all kinds of subtle things. And if you're not together with your fellow musicians, it's just going to sound terrible.
So this communication is, I think, vital to playing music, but we can use this as a model for looking at nonverbal interactions in many other situations. So, for example, we've done it now with speed dating, and it turns out that the way the two people move in interaction, whether they are influencing how each other move, is predictive of whether they're going to want to see each other again, whether they match at the end of a speed dating encounter. We'd like to also, in the future, apply it to, say, nonverbal populations. Can we get more insight about communication with someone with dementia, with, say, nonverbal children with autism? So I think it's a powerful new way to look at human interaction. Now, in the LIVELab that we have at McMaster, where we're doing a lot of these studies, we can also expand it out to looking at the audience. So for example, we have motion captured the whole audience and looked at how they react to what the musicians are doing on stage, but also how they interact with each other. So using motion capture and also using EEG measures, so looking at the brain responses of all the different audience members, we can show how one audience member actually is connecting with another audience member. And we are finding out interesting things, like when people attend a live concert versus just seeing a video of that live concert. When it's a live concert, they're more synchronized with their fellow audience members. So their brains are doing the same thing at the same time. So there's a social communication. When we ask them to make ratings, they also rate that they feel more connected to each other. So I think there's all sorts of really interesting things in why people want to go to concerts instead of just listening at home, why we like to experience things with other people. And a lot of it, I think, has to do with our motor system and the way that we move and these non-verbal cues that we give each other that actually connect us socially with other people and give us a sense of belonging and a sense that we're experiencing things in common with other people.
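
As a rough sketch of the kind of directional prediction Trainor describes (asking whether one musician's sway helps predict another's beyond that musician's own history), here is a minimal Granger-style analysis in Python. The data, variable names, and lag settings are illustrative assumptions, not the lab's actual pipeline.

```python
# Granger-style "who influences whom" sketch on two body-sway time series.
# sway signals are assumed to be 1-D arrays (e.g. anterior-posterior sway).
import numpy as np

def lagged_matrix(x, lags):
    """Stack columns x[t-1], ..., x[t-lags] for t = lags .. len(x)-1."""
    return np.column_stack([x[lags - k:len(x) - k] for k in range(1, lags + 1)])

def granger_influence(source, target, lags=10):
    """Return log(var_restricted / var_full); > 0 means `source` helps predict `target`."""
    y = target[lags:]
    X_self = lagged_matrix(target, lags)                       # target's own past
    X_full = np.hstack([X_self, lagged_matrix(source, lags)])  # plus source's past

    def residual_var(X):
        X1 = np.column_stack([np.ones(len(X)), X])             # add intercept
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        return np.var(y - X1 @ beta)

    return np.log(residual_var(X_self) / residual_var(X_full))

# Synthetic example: musician B loosely follows musician A with a short delay,
# so the A -> B influence should come out larger than B -> A.
rng = np.random.default_rng(0)
a = np.cumsum(rng.standard_normal(2000))       # random-walk-like sway for A
b = np.roll(a, 5) + rng.standard_normal(2000)  # B follows A by a few samples
print("A -> B influence:", granger_influence(a, b))
print("B -> A influence:", granger_influence(b, a))
```

Comparing the two directional scores for every pair of players is what lets an analysis like this label someone as leading or following to a greater or lesser extent.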

[00:08:38.556] Kent Bye: Yeah, because you were talking about synchrony and the importance of people being synchronized with each other. And so maybe you could give a bit more context as to the research in the neuroscience community around synchrony and what do we know about synchrony and what its influence is on these group dynamics.

[00:08:56.110] Laurel Trainor: Synchrony is very interesting. Music, of course, is a great way to get people to move in synchrony because the beat in music is regular and predictable. You know when the next beat is going to happen, so you can easily plan the motor movements to move at the right time. When you're listening to music with other people, it's very easy to synchronize your movements with other people as a result. So a number of studies in adults now have shown that when people move in synchrony, afterwards, they like each other more. They say that they would trust each other more. And in fact, if you give them a cooperative game to play, they will actually cooperate more. So people may not even be aware that they were moving in synchrony with somebody, but after such an experience, their feelings and their empathy and their relationship with that person changes dramatically. In my lab, we've explored this with infants as young as 14 months now. So at that age, their motor system is not very mature, so they can't usually precisely coordinate their movements to be on beat with music. But they're often carried and walked and rocked to music. So they have experiences of moving with music. So we bounced them to music. And at the same time, they faced an experimenter who, in some cases, bounced with them to the music in synchrony. And in other cases, the experimenter bounced at a different tempo. So they were bouncing out of sync. And then immediately afterwards, we gave them opportunities to help that experimenter. So the experimenter would do things like be hanging clothes on a little clothesline. The experimenter would accidentally drop one of the clothespins and make a gesture to show that she couldn't reach it. And we would give the baby 30 seconds to see if they would help her by picking up the clothespin and handing it back. And what we found over a series of studies is that infants are about twice as likely to help her after three minutes of experiencing synchronous bouncing as opposed to asynchronous bouncing. So in my mind, that's really powerful. The helping behavior is targeted at the person they bounced in sync with, not at other people that they didn't bounce in sync with. However, if an infant sees that somebody they bounced in sync with is friends with another person, they will transfer their helping to that person. So we think it's one of the cues that babies are using to navigate their social world, to figure out, how do people interact? Who should I trust? Who should I run away from? Who should I be friends with? And is that person likely to be helpful to me and empathetic to my needs?

[00:11:43.992] Kent Bye: Yeah, and I'm wondering if you could comment broadly on the use of immersive technologies, whether it's motion tracking or EEG on top of virtual and augmented reality at some point. How do you see these immersive technologies influencing the future of neuroscience?

[00:12:05.474] Laurel Trainor: Yeah, that's a huge question. There are, I mean, obviously so many applications of virtual reality and immersive technologies in neuroscience and in health. In the LIVELab, one of the things that we have is an incredible sound system. So it has 28 microphones and 76 loudspeakers. And so we can recreate virtually any sound environment that we want in there. Naturally, the room is extremely dead, very sound isolated, less than 10 decibels of background noise. But we can change the background noise. We can add whatever reverberation characteristics we want. We can put virtual sounds into the space in three dimensions. So we can have sounds going around the space and so on. And we're exploring in this environment the effects that that has on the audience experience of music but also other kinds of performances. So the feeling of the audience being immersed in it rather than somehow the sounds being created somewhere else and they're a passive observer. We're also giving audience members opportunities to actually participate in the music. So through an iPhone app or our tablets, they can make responses in real time that affect, say, the acoustics or affect things about how the musicians are performing. This ranges from simple things, like one case where people could vote for who in the jazz ensemble they wanted to play the next solo, or how they wanted the piece to end (they had a few choices), to having people with motion capture on their bodies in the audience and then using that motion capture to create sound. So depending on how the person moves, that actually adds to the soundscape of a musical performance going on. Or we've had people who can tap on their phones or move their phones to create sounds. So then the audience actually becomes part of the performance. So we're very interested in this in a sort of creative way: how can people who may not be great musicians, or may not have put in the many hours of practice to be an expert on a musical instrument, experience music with other people in a creative and fulfilling way? So that's sort of a case of using these kinds of technologies for well-being, for people's fulfillment. But there are also many, many applications. So for example, in patients who are non-verbal, say someone with dementia, are there ways that you can get them to experience things that bring back memories? We know that music is powerful in this domain: if you play music that someone experienced as a teenager or in their early 20s, that often will bring back memories and put them into a more communicative space than they were in before. So for the use of VR in cases like this, I think there are enormous possibilities. Working with children with autism to, you know, maybe put them in a world where they can learn to interact socially in a non-threatening way. We're working with children with developmental coordination disorder. So these are children who are, you know, the clumsy children. In some cases they even have trouble walking, and catching a ball is difficult. In VR, we could give them training, again, in a fun environment. Right now, when they go in for motor training, it's not always that fun, and compliance is not always high. If you're the last one to be chosen on a sports team, you don't really want to go in and practice catching a ball. It's not that fun.
But if we can use VR to create fun environments for them to learn motor skills, that could be a huge step forward for that population. So really, the possibilities are really enormous.
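
To make the "add whatever reverberation characteristics we want" idea concrete, here is a minimal convolution-reverb sketch: imposing a measured room impulse response on an acoustically dry recording. The file names are placeholders, and the LIVELab's actual 28-microphone, 76-loudspeaker rendering system is far more sophisticated than this single-channel example.

```python
# Minimal convolution reverb: impose a room's measured impulse response
# on a dry recording. File names below are illustrative placeholders.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate_dry, dry = wavfile.read("dry_performance.wav")      # dry (nearly anechoic) recording
rate_ir, ir = wavfile.read("hall_impulse_response.wav")  # measured room impulse response
assert rate_dry == rate_ir, "resample so both files share one sample rate"

dry = dry.astype(np.float64)
ir = ir.astype(np.float64)
if dry.ndim > 1:            # mix to mono for simplicity
    dry = dry.mean(axis=1)
if ir.ndim > 1:
    ir = ir.mean(axis=1)

wet = fftconvolve(dry, ir)  # every dry sample excites the room response
wet *= np.max(np.abs(dry)) / np.max(np.abs(wet))  # normalize to avoid clipping

wavfile.write("wet_performance.wav", rate_dry, wet.astype(np.float32))
```

Swapping in a different impulse response (a cathedral, a small club, the hall with different absorption settings) is what changes the apparent acoustics without re-recording the performance.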

[00:16:06.854] Kent Bye: And for you, what are some of the biggest open questions that are driving your work forward?

[00:16:12.710] Laurel Trainor: Well, one of the things I'm working on right now has to do with timing and rhythm. So I've come to think of the brain as actually being an organ that's continually predicting the future, and then comparing what actually happens with its prediction. And in cases where the prediction is incorrect, it creates an error signal. And that's a large way that the brain learns, because when you make an error in prediction, then how you're thinking about the world, your model of the world, is not completely correct. So you can make a better model so that you won't make that mistake again. So if prediction is really important, and I think it is, then rhythms actually, by default, become very important because rhythms are predictable. If you have a beat, like just a simple beat, and every successive pair of notes is separated by the same time interval, then you can predict with high precision when the next beat is going to happen. And so we do this in music all the time. That's the only way that we could actually play music or synchronize our body movements to music. But it also happens in other domains, and speech is another big one. So speech is not as regular as music, but syllables are somewhat regular in how they occur over time, and they're quite predictable in that sense as well. And what we're finding is that basically all of the developmental disorders, the major developmental disorders, which would include autism, attention deficits, dyslexia, developmental coordination disorder, all of these disorders are associated with timing problems. Deficits in perceiving time and perceiving rhythm. So I'm actually very excited about this idea that these disorders, which we've tried for decades to sort of separate out and figure out what's different about each disorder and so on, well, they have high comorbidity between them. So if you have one of these disorders, there's a good chance that you might have one or more of the others. And so I'm interested in looking at what is the underlying commonality between all of these disorders, and deficits in time processing look like a really good candidate. And so if that holds, it'll first of all tell us a lot about the origins of these disorders. With EEG, we can measure in infants how well the brain is processing time information. So very early in development, we can get a reading of whether an infant is at risk for having some kind of time or rhythm processing deficits, which would allow us to intervene very early and maybe, you know, be able to ameliorate some of the effects of many of these developmental disorders. So that's one of the things that I'm finding very exciting at the moment.
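
As a toy illustration of the predict-compare-update framing Trainor describes (not a model from her lab), the sketch below predicts each upcoming beat from the average of the last few inter-onset intervals and records the prediction error when the beat actually arrives.

```python
# Toy predictive-timing loop: predict the next beat from recent inter-onset
# intervals, then measure the error when the onset actually occurs.
# Purely illustrative of the "predict, compare, update" idea.
import numpy as np

rng = np.random.default_rng(1)
ioi = 0.5                                          # nominal beat period in seconds (120 BPM)
onsets = np.cumsum(ioi + rng.normal(0, 0.01, 32))  # slightly jittered beat times

window = 4                                         # how many recent intervals to average
errors = []
for i in range(window, len(onsets) - 1):
    recent_iois = np.diff(onsets[i - window:i + 1])
    predicted_next = onsets[i] + recent_iois.mean()  # prediction of the next beat time
    errors.append(onsets[i + 1] - predicted_next)    # signed prediction error

print("mean |error| (ms):", 1000 * np.mean(np.abs(errors)))
```

With a regular beat the errors stay tiny, which is the sense in which rhythm makes the world easy to predict; larger or systematic errors are the signal a learner would use to update its model.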

[00:19:10.890] Kent Bye: Well, the thing that comes to mind is the difference between Western and non-Western music, where there's like 4/4 time, it's very monochronic, a single consistent beat, whereas in polychronic cultures there may be more irregular time signatures. I'm not sure if you've looked at that at all in terms of differences of culture and the different rhythms that they use.

[00:19:29.172] Laurel Trainor: Yeah, very interesting point. Yeah, different cultures do use very different rhythms. My colleague, Erin Hannon, has done a lot of work on this, actually. And it turns out, it's a very interesting story, it turns out that early in development, infants can actually process simple and complex rhythms. So they can process rhythms that are in 7, in 4, in 8. But if they're exposed to Western music, which tends to have very simple rhythms, then they lose the ability to process those complex rhythms. And this happens already by their first birthday. So it happens very young; just as they get attuned to the speech sounds of their language-to-be, they're getting enculturated to the musical structures in their musical environment. Yeah. So if we want children to be open to hearing and understanding music from other cultures, we actually should be looking in that first year of life, the first year after birth, because that's where a lot of this, we call it sensory narrowing, is taking place.

[00:20:34.202] Kent Bye: Great. And it looks like we're about to gather up again. But just quickly, I'm just curious what you think the ultimate potential of virtual reality is and what it might be able to enable.

[00:20:44.434] Laurel Trainor: Well, I think it can probably open up new ways of thinking, new worlds. It's possible it would be on the order of when we developed the ability to print text. Think of the worlds that were opened up by people being able to read books and to pass knowledge on in those ways, and the way it made us think at sort of a metacognitive level about who we are as people, you know, not just understanding or processing information but thinking about ourselves in relation to that information. I think VR actually has the potential to make us think again about who we are as humans, and what it means to be a person, and what consciousness is, and what metaconsciousness is, and so on.

[00:21:32.746] Kent Bye: Awesome. Great. Well, thank you so much.

[00:21:34.427] Laurel Trainor: Thank you very much.

[00:21:35.797] Kent Bye: So that was Laurel Trainor. She's a professor at McMaster University focusing on audition and music. And she's also the director of the McMaster Institute for Music and the Mind, where she has a concert hall that can hold up to 100 people. She's looking at how audiences and musicians interact and how audiences can be more immersed and participatory within the process of listening to music. So I have a number of different takeaways from this interview. First of all, the thing that really sticks with me is the research that Laurel was describing with these 14-month-old infants, where an experimenter was either bouncing up and down in synchrony or out of synchrony with the infant, and then they looked to see if the baby was willing to help the person that they were bouncing with. It turns out that infants are about twice as likely to help the person they bounced in sync with as the person they bounced out of sync with. So synchrony seems to be a huge part of this, both in the neuroscience and just in the way that we go to these concerts and start to move with each other. She's also looking at musicians and body sway, doing motion capture and tracking how they're moving, and starting to see these bi-directional relationships for how people are moving and how they're impacting other people. You can start to see who's the leader (if someone's playing the melody, they typically tend to be the leader), and notice how there's this whole subtle nonverbal communication happening among the musicians. And that nonverbal communication can also be happening with the audience as well, how they're interacting and moving both with the musicians and with each other, and how people within the audience can start to impact and connect with each other. So apparently this McMaster Institute for Music and the Mind has this concert hall where they're able to have really great, amazing ambisonic audio. She said they're able to pretty much recreate what it sounds like for a fly to be flying around, so you can imagine creating these 3D sounds, and they can control the background noise as well. They're also able to record what it sounds like when people are playing live and then play that exact same recording back in the same environment, so that it functionally sounds exactly the same, but with the performers shown on a 2D screen. That lets them look at the difference between listening to music while seeing the musicians fully embodied and playing live versus seeing some sort of 2D abstraction, and how people respond as they modulate all these different aspects and variables. So there's lots of really interesting stuff that she's doing there with the McMaster Institute for Music and the Mind, using all sorts of immersive technologies, body tracking, ambisonic audio, positional audio, all sorts of stuff like that. So it sounds like, you know, there's a big bright future for neuroscience and VR, with lots of different applications: medical applications, different ways of working with people with dementia to bring back memories, and working with children with autism and developmental coordination issues.
It sounded like the potentially big insight and breakthrough is to see whether all of these different developmental disorders, whether it's autism, attention deficit, dyslexia, or developmental coordination disorder, share some sort of deficit in perceiving time and rhythm. There's a high comorbidity between all those different developmental disorders, and deficits in time processing seem to be a good candidate for the commonality between them. So if that turns out to be the case, what does it mean to be able to detect that early? Can you start to do interventions that help cultivate those aspects of the brain? Is it plastic enough that you can start to change it? I don't know, I think that's speculative at this point, but it sounds like there could be a commonality there of deficits in timing and rhythm, and VR could potentially help address that in the future. Just looking at the principles of neuroplasticity, it seems like if that's a deficit that's leading to these other things, then are there ways to address it and help to cultivate these different types of skills? And, you know, she said that there is this kind of window, a one-year sensory window, for children as they're listening to different rhythms. If they only hear Western music with very regular time signatures, and they're not exposed to more irregular time signatures as they're growing up, then they may have more difficulty understanding and processing them later. And that to me is very interesting and fascinating: there's the more monochronic, linear, very standard 4/4 music where you hear one rhythm, but some of these alternative, more complicated traditions have many different layers of rhythm on top of each other, different levels and different cycles. So what would it mean to start to cultivate that as a skill in this window when you're very young, not only in these infants, but in the entire culture? So it sounds like there are a lot of possibilities for where this could all go. She's also very interested in the nature of consciousness, and she compared these immersive technologies to what happened with the printing press and books and how that changed our ability to capture knowledge and to reflect upon ourselves. I've been mentioning this a lot, equating these computing technologies to the printing press of this era. And what would it mean to start to have deeper insights about the nature of our own consciousness, to be more aware of our consciousness and of different levels of metaconsciousness? Yeah, I think this is very exciting, and it's still at the very beginning of all of this. So I'm just excited to see not only where this dialogue and dialectic between VR and neuroscience goes, but also how it gets down to the metal of how we understand ourselves and relate to each other, and maybe opens up new possibilities for us to appreciate more aspects of music and its relationship to movement, synchrony, and all these other aspects of group dynamics. So that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoyed the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon.
This is a listener-supported podcast, and so I do rely upon your donations in order to continue to bring you this coverage. Just $5 a month is a great amount to give. If you want to help support this podcast and help keep it free for not only yourself, but the entire community, then please do become a member and donate today at patreon.com slash voices of VR. Thanks for listening.
