Muse is an immersive meditation device that provides real-time feedback on your mental activity, heart rate, breathing, and body movements. InteraXon has optimized the device for ease of use, helping to make the invisible, intangible process of meditation more transparent and tangible.
I talked with InteraXon co-founder Ariel Garten about her journey in creating the Muse headband, the changing perspectives on meditation over the past 17 years, what types of things the Muse headband can detect, how researchers have been using Muse as a research tool, the ethical issues around biometric data, how VR experiences have been using Muse as a form of real-time biofeedback, and how they're going to release a special version of Muse later in 2019 that will work more directly with VR HMDs.
LISTEN TO THIS EPISODE OF THE VOICES OF VR PODCAST
This is a listener-supported podcast through the Voices of VR Patreon.
[00:00:05.412] Kent Bye: The Voices of VR Podcast. Hello, my name is Kent Bye, and welcome to the Voices of VR podcast. So brain control interfaces: within the next five or ten years, they're going to get to the point of being able to actually read your thoughts and your brainwaves, which on the one hand is going to be amazing for computer interfaces if they can actually understand what we're thinking and what we want. Lowering the barrier between our intentions and how we're able to imagine or communicate them with our minds may unlock all sorts of new potential, certainly for people who have accessibility issues or are paralyzed. Moving beyond the mouse and keyboard to direct access to what's happening in your mind, so you can interface with these computing technologies, is a vast field of huge potential, and also huge danger in terms of what happens to that data, who owns that data, and what type of information might be gleaned from it. So Ariel Garten is one of the founders of Muse, a brain-sensing headband that helps you meditate. And she traces her 17-year journey, starting with Steve Mann, one of the founders of wearable computing and augmented reality, an amazing researcher who's done a lot of great work over the last several decades. Ariel studied with Steve Mann, and she got together with a number of other students and formed the company that distributes the Muse headband. She was at the Awaken Future Summit talking a bit about ways to use technology to help amplify your own meditative practice. And she traces the whole journey and the cultural shifts she's seen, as well as what you can do with the Muse headband in terms of having access to these four different channels of your brain, and all the different things you can do with that in terms of biofeedback and different applications. 
So that's what we're covering on today's episode of the Voices of VR podcast. This interview with Ariel happened on Saturday, May 18th, 2019, at the Awaken Future Summit in San Francisco, California. So with that, let's go ahead and dive right in.
[00:02:09.401] Ariel Garten: Hey there, my name is Ariel Garten. I'm one of the founders of Muse, and we create a brain-sensing headband that helps you meditate.
[00:02:16.967] Kent Bye: Great, so maybe you could talk a bit about your background and journey into what you're doing now.
[00:02:21.720] Ariel Garten: Sure, so my background comes from the world of art, design, neuroscience, and psychotherapy. I started working in the lab of Dr. Steve Mann in the early 2000s. He's one of the credited inventors of the wearable computer, and he had an early brain-computer interface system. We were using it to create concerts where you could make music with your mind. So this was the beginning of the intersection of art and neuroscience for me. And I started collaborating with Chris Aimone, one of Steve's master's students and an incredibly brilliant engineer who really got both the art and the science side. And we sat back and we said, okay, well, what can we do with this technology? Where is it going to go? So Chris, myself, and then our third co-founder, Trevor, started to imagine what you could do with this tech and where it would really let you go in this world. And we said, okay, how do we show people the power of their brain to move physical objects? So we created a levitating chair. As you would relax, the EEG system would detect the change in your brain activity and actually allow the chair to rise to the ceiling. And then we said, what's the biggest thing we can let people do with their minds? And we had a project at the Vancouver 2010 Winter Olympics that let people in Vancouver control the lights on the CN Tower, the Canadian Parliament Buildings, and Niagara Falls, and do it with their brains from across the country. So we literally let people control the lighting on massive objects 3,000 kilometers away by shifting their own brain state.
[00:03:52.533] Kent Bye: In all the different brain control interface wearable devices that I've seen, there's a trade-off between the fidelity of the input signal and the comfort and ease of use: how easily you can put it on while still getting a good enough signal-to-noise ratio to actually do something with the detected brainwaves. For me, it feels like Muse is on the side of really prioritizing ease of use, being able to quickly put it on. But in some ways, you may be losing the higher fidelity you'd get from something with 32 sensors and gel in your hair, which will get a much better signal but isn't very easy to use. So it feels like this is more of a consumer device, bridging that gap and starting to make accessible what your brainwaves can detect and do. So maybe you could talk a bit about that trade-off, where you feel like you've landed with Muse, and what types of things you're able to detect with the Muse headband.
[00:04:44.909] Ariel Garten: So it absolutely is a trade-off. Actually, when we started doing these experiences, we were using a single EEG sensor, so we were far more constrained. The question was, what can you see with just one channel? And when we went to actually build the Muse, we tried to build as many channels as we could into a form factor that was going to be very simple, very clean, very easy to use, with dry sensors. At the time, we really looked at the smallest number of sensors that still allows you to do something interesting. And so with the four-channel EEG system, two channels frontal, two channels temporal, you can look at changes in state. You can look at levels of focus, levels of relaxation. You can't do the fancy motor control stuff; that requires C3 and C4, the areas on the top of your head where your sensorimotor cortex is. Those are typically what you would need for imagined movement left or imagined movement right. What we do have is the ability to look at data from hundreds and hundreds of people at the same time, or hundreds of sessions from a single individual. From that vast amount of data, we're actually able to pull out new insights that you haven't been able to see before, relative to what you might see on a 32-channel system on a person just once.
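The state measures Ariel mentions, like levels of focus or relaxation, are conventionally estimated in EEG work from band power (alpha, roughly 8-12 Hz, is a common relaxation proxy; beta, roughly 13-30 Hz, is associated with active focus). Here's a minimal, illustrative sketch in plain Python of how band power can be computed from one channel of samples; this is not Muse's actual signal pipeline, and the `band_power` function and thresholds are hypothetical:

```python
import math

def band_power(samples, fs, f_lo, f_hi):
    """Power in the [f_lo, f_hi) Hz band for one EEG channel,
    via a naive DFT. Real pipelines use windowed FFTs
    (e.g. Welch's method) for speed and spectral-leakage control."""
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq < f_hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n)
                     for i, s in enumerate(samples))
            im = sum(-s * math.sin(2 * math.pi * k * i / n)
                     for i, s in enumerate(samples))
            power += (re * re + im * im) / (n * n)
    return power

# One second of a pure 10 Hz "alpha" oscillation sampled at 256 Hz.
fs = 256
samples = [math.sin(2 * math.pi * 10 * i / fs) for i in range(fs)]

alpha = band_power(samples, fs, 8, 12)   # captures nearly all the power
beta = band_power(samples, fs, 13, 30)   # essentially empty for this signal
relative_alpha = alpha / (alpha + beta + 1e-12)
```

On a real headband you would compute this per channel over a sliding window of samples streamed via Bluetooth; a rise in relative alpha is a common, rough proxy for relaxation.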
[00:05:59.415] Kent Bye: Well, I've had a chance to go to the Experiential Technology Conference twice. It used to be called the Neurogaming Conference, and this past year they didn't have it. In the early years, it felt like there was a desire to have brain control interfaces or biofeedback that could do real-time interactions, making choices and taking actions. But some of the feedback I was getting is that it may not be the best use case to instantaneously make a choice and interact with a highly dynamic video game. In the other years of the conference, there seemed to be a pivot toward medical applications, or toward looking at brainwave states over bigger swaths of time to see larger shifts. So it seemed to be less about exerting your agency in an instant and more about detecting subtle shifts, maybe taking a rolling average over time. I'm just curious to hear if you've found that as well in these different applications: whether it tends to be better to take an average over time and get some insight, rather than trying to pick apart specific moments or instances of trying to evoke very specific brain states.
[00:07:07.808] Ariel Garten: So long ago we abandoned trying to use the technology to make a decision in the moment, or turn on your lights, or control a cursor on your screen. That was the original aspiration, over a decade ago now. And we very quickly came to realize that that is not the best use of the technology, and probably won't be for quite some time to come. What we did recognize as an incredibly powerful use was the ability to actually see inside your own mind and then gain insight and clarity into what's happening in your brain. So whereas it wasn't great at controlling a cursor, it was amazing at teaching you what you needed to do to control that cursor. When we were trying to, you know, push around a light or a bar with your brain, we recognized that what we were teaching ourselves to do was to focus and to relax; we were actually training ourselves to shift around mental states effectively, and we could give ourselves insight into what was happening in our minds and really track those states. And so from there, our own meditation application was born. We recognized that this had extremely valuable application to processes where you are meant to look into your own mind and figure out what's going on, but it's very difficult. In the act of meditation, you focus your attention on your breath, your mind wanders, you watch your mind, you notice it wandering, and you return it. We recognized that if we could make that process more tangible and visible for people, we would have done something really meaningful. So now that's the main use case for the Muse: a meditation training tool that makes the invisible process of meditating visible, tangible, and actionable, with real data in real time and data after the fact. And, you know, there are many more applications to come down the road, but this is the sweet spot for where the technology currently is. 
So your brain is different every day, different moment to moment, different second to second. If you're giving somebody real-time feedback instantaneously on what's happening in their own brain, it's going to be very disjointed, because it really is jumping around. So when you give somebody feedback, you want to be smoothing it over a particular window. That window can be quite small; it can be half a second. But if you are giving feedback based on the trajectory of where the brain is going, rather than where it actually is moment by moment, you're going to do a much better job of teaching somebody where their brain is moving, in a way that's coherent and understandable rather than chaotic moment-by-moment noise. So you actually have to make assumptions about where the brain is at this moment, where it was over a larger window, and what that means about where it's going, in order to give somebody effective neurofeedback.
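The smoothing Ariel describes, feedback that follows the trajectory of a brain state rather than its noisy moment-to-moment value, can be sketched with a simple exponential moving average. This is an illustrative toy, not Muse's actual algorithm; the `smooth_feedback` name, the 0-to-1 "calm" score, and the smoothing factor are all assumptions:

```python
def smooth_feedback(raw_scores, smoothing_factor=0.2):
    """Exponentially smooth a stream of per-window 'calm' scores
    (0 = distracted, 1 = calm) so the feedback tracks where the
    brain state is heading instead of jumping with every sample."""
    estimate = raw_scores[0]
    smoothed = []
    for score in raw_scores:
        # Blend the new sample into the running estimate.
        estimate = smoothing_factor * score + (1 - smoothing_factor) * estimate
        smoothed.append(estimate)
    return smoothed

# A noisy drift from distracted (near 0) toward calm (near 1).
raw = [0.1, 0.9, 0.0, 0.3, 0.8, 0.2, 0.9, 0.7, 1.0, 0.8]
out = smooth_feedback(raw)
```

The smoothed sequence still rises as the user settles into calm, but its sample-to-sample jumps are much smaller than the raw signal's, which is exactly the "coherent and understandable rather than chaotic" quality described above, at the cost of the small lag mentioned later in the interview.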
[00:09:37.872] Kent Bye: Yeah, one of the frustrations that I've had, whether it's with HeartMath or Muse or any of these experiences where you're tracking your brainwaves, is matching my own internal phenomenology of what I'm feeling with what the sensor is detecting. Sometimes it almost feels like there's no trace of my own agency, or of the positive reinforcement that I'm doing it right. And I don't know if that's a universal problem of measuring brainwaves and trying to match them with your phenomenal experience. But what have you found in terms of trying to close that gap between your own phenomenological experience of what's happening versus what is actually radiating from your mind?
[00:10:17.739] Ariel Garten: So a lot of people do say that the feedback is quite accurate. You know, they'll use Muse and they'll be like, wow, yes, that really matches when I was focused and when I was not. Some people do have that phenomenological gap that you just described. And part of the reason for it is that there is a small delay. There's a delay that we've had to smooth over so that we can identify the trajectory of where you're going and where you've been, and then say, oh, okay, you were wandering half a second ago. So if in your mind you're like, I'm not wandering right now, but the Muse is picking it up, well, actually, we're giving you feedback that has a little bit of a lag, just because that's what you have to do when you have so many things in the loop giving somebody feedback in a system like this.
[00:11:00.402] Kent Bye: I'm wondering if you could give me a brief history of when you founded Muse, when it formed, and then when you had the initial iterations of the hardware, and if you've had additional iterations after that. Kind of a bit of a timeline, a history of Muse as a company.
[00:11:14.862] Ariel Garten: Sure. So the early experiments that we were doing in Steve's lab were in 2002, 2003. And from there we recognized that there was really something here. We were creating concerts where we could make music with your mind, and 48 people at a time would be controlling their EEG and hearing the sounds that emerged. It was just like, wow, the world needs to know about this. And so then for the next several years, Chris, Trevor, and I, and James Fung, another one of Steve's early master's students, would periodically come together and explore this. By 2007, we sat down and said, all right, this is real. We need to make a company here. There's something here. And so we'd get together in Trevor's basement regularly and come up with business plans and proposals for a business that didn't exist and a technology that was still so nascent. By 2009, Chris, Trevor, and I had incorporated, and we had the basis of a company. Around 2009 is when we built the levitating chair and started to create some of the more meaningful, more publicly available explorations of the technology. And then 2010 was the Olympics project. That was the first time that 17,000 people got to control big stuff with their brains from across the country. At that point, we still didn't really know what our use case was going to be. We knew we had this cool tech. We knew we could build a thought-controlled beer tap, and you could focus on your toast, and it would start toasting, and then you could clench your jaw, and it would pop. And somehow that was amazing and satisfying, but ultimately super unsatisfying. And so, you know, 2010, 2011 was us grappling with this, saying, it's so amazing, but what do we do? What is the killer app? And we started to recognize that this internal process, bringing it to light and training your own mind, was going to be the killer app. 
And around that time, we started to build a cognitive trainer for your brain. But meditation still was not really widely known. So we'd go into VC pitches, or we'd try it with people, and our slides had brains with big biceps coming out of them, pitching a trainer for your brain. And they'd be like, is this like meditation? We'd say, do you meditate? And they'd be like, yes, I do. Then we'd be like, yes, this is like meditation. And they'd be like, okay, great, I want to be part of it. It was only several years later, when meditation was on the cover of Time magazine, that we could really go out and say, yes, this is a meditation tool. And then it started to ramp up. We did our first Indiegogo campaign in 2012, at which point we knew we were building a meditation tool, but we still weren't quite sure what it was going to be. It was much more like, we have this open platform that's amazing, everybody can build what they want on top of it, and we'll all figure out what the future will hold with this tech. By 2014, we came out with the first Muse with the consumer meditation app, and we launched in Best Buy in Canada, on Amazon, and online. In 2018, we translated into multiple languages: French, German, and Spanish, with Italian and Japanese coming next year. Last year, we also merged with a meditation company, a meditation studio app, and so we now have a library of content that's come into Muse that allows us to build in voice guidance alongside the neurofeedback. And then last year we also came out with Muse 2. So whereas Muse 1 gave you real-time feedback on the brain, Muse 2 also gives you real-time feedback on the heart, the breath, and the body.
[00:14:37.876] Kent Bye: Interesting. And so it seems like this is a Bluetooth device, so I guess it's an open platform for other people: if they wanted to create an application, they could.
[00:14:47.140] Ariel Garten: So we used to have an SDK that was available on our website, and lots of people were building on top of it. It ultimately became really difficult to support and manage. Now, if you're building an application, send us an email and we'll put you through the process of giving you access to the developer kit if it makes sense for your application. And then we also have a big research community that uses Muse. There are literally thousands of neuroscientists who use Muse in their research to do both in-lab and in-situ neuroscience research. So the tools are also available to the research community.
[00:15:17.518] Kent Bye: What's some of the most compelling research results that you've had, thanks in part to having access to the Muse?
[00:15:24.048] Ariel Garten: Oh, there's lots and lots of fun stuff. So MIT demonstrated that you could look at pain in EEG using Muse. IBM, very early on, was the first to do a study looking at the brain that could determine whether you were watching a kitten video or educational content. With the Muse meditation app, the Mayo Clinic has been running a study using Muse with breast cancer patients awaiting surgery, ideally to demonstrate a decrease in stress during the cancer care process, and potentially an improvement in outcomes. The Catholic University of Milan did a study using Muse, again as a meditation tool, comparing people meditating with Muse versus without Muse. What they saw were improvements along the measures you'd imagine meditation would improve: an increase in calm, improvements in cognition. But in those meditating with Muse versus without, they also saw actual persistent changes in brain activity suggesting persistent calm and focus throughout the day. So that was quite cool. And in another study, from Baycrest Hospital, one of our earlier ones, they demonstrated an increase in somatic awareness: people reporting less headache, less pain, less nausea, a better relationship with their body. A decrease in stress, or rather an increase in calm. Stress is a medical term, so we can't talk about Muse improving your stress; this is not an FDA device. And they also saw an improvement in cognition as demonstrated by the Stroop task: improved reaction times on Stroop tasks. So your ability to make difficult decisions with stressful, conflicting information seems to improve.
[00:16:55.471] Kent Bye: Yeah, and I've seen a number of different VR developers that have started to integrate the Muse into different applications, whether it's Helium with Sarah and Jeff, or Cubicle Ninjas, who were integrating it with their meditation app. But I also found that in some ways you have two different devices that weren't designed for each other, and they conflict in different ways: the combination can be difficult to get on, and there may be instances where the BCI sensors are actually occluded or covered up by the head straps of the VR headset. So I always figured that the real dream scenario would be a direct integration with the VR headset rather than a separate device. But it does seem like it is possible to have integrations with VR. I'm just curious to hear, from your perspective, the user experience challenges there, but also the potential benefits and where you see that playing out in the future.
[00:17:45.815] Ariel Garten: So VR is a space that we're really interested in and have a lot of fun with. We've done close collaborations with Android Jones and Microdose, making really beautiful VR experiences that shift with your brain activity. We know that people use Muse, and actually have for years, with things like an Oculus, and we were kind of surprised that you can actually just wear both of them at once and it seems to just work. To further enable VR experiences with Muse, we actually have a device that we'll be bringing out towards the end of the year: a new form factor that we designed specifically for use with a VR rig.
[00:18:21.354] Kent Bye: Oh, that's great to hear. So is it going to be an add-on to an HMD so that it fits on nicely, or is it a device in its own right?
[00:18:29.397] Ariel Garten: So it'll be a Muse headband with a slightly different form factor that fits comfortably with a headset.
[00:18:34.758] Kent Bye: Okay, great. Well, that's great to hear. What are you most excited about in terms of what might be made possible with that?
[00:18:40.480] Ariel Garten: So many things. I mean, you know, Android Jones' experiences, for example... I reference those because we've been inside the design process, whereas some of the others have been designed independently and I've just seen them from the outside. But the Helium experience is also amazing. There are so many that have been extraordinary. I'm most excited about the ability to have the VR experience feel more intimately connected with the self. The idea of VR, in so many ways, is to create a separate reality that you are engaged in and engrossed in. And the more of yourself you can put into that environment, the more of yourself can shift and change, and the more compelling the experience becomes. That's fascinating both from an entertainment perspective and, of course, from a personal transformation perspective. So, you know, when we can access states that help us shift and change, with visuals that enable and encourage that shift and change, then we can get to personal transformation more deeply and more successfully, and that becomes super exciting.
[00:19:42.449] Kent Bye: Well, one of the concerns that I have in the broader virtual and augmented reality industry is the capturing of biometric data, correlating it to what we're seeing, and the data mining that's happening, from Google and Facebook in particular, with their business models of surveillance capitalism: really trying to harvest and mine as much information as they can about us. To me, it feels like with biometric data, whether it's our heart rate variability, our heartbeats, our emotional states, or even our brainwaves, there's a certain ephemerality: we don't really always know the full context. So trying to capture it and lock that moment in time, to gather that data, how sound a process is that in the first place? But even if it is sound, there's the further issue of the intimacy of that biometric data potentially becoming almost a Rosetta Stone to our psyche. You know, very intimate information about what we're looking at, what we're paying attention to, and how we're reacting, which could be correlated to what we're actually experiencing within an experience. So I have a lot of deep concerns around this biometric data and its future. And I'm just curious, you know, Muse is your own separate entity with these headbands, but how are you thinking about biometric data and the potential risks that are involved there?
[00:21:00.654] Ariel Garten: So I am so 100% with you. When I started this business, the thing that would keep me up at night was worrying about how data would be used. Everybody would be like, oh, it's fine, there's really not much data in there anyway. Which is true: there's not much meaningful that you can pull out. It's not really your psyche. We can't know your PIN. Nonetheless, I don't care. It's still deeply important to me that this data is never used in the wrong way. And so I actually started the Center for Responsible Brainwave Technologies, CERB. The goal of CERB was to create a set of standards for the entire industry that says you can never use brainwave data or neurotech data to reduce somebody's agency, to reduce transparency, to sell them shit they don't need. Part of the reason that we took down our SDK is so that not everybody gains access to it; you have to go through us. And if you're a marketing company, my answer is: please go somewhere else. What I really want to say is fuck off, and please don't do what you're doing, because it's really unpleasant to humanity. But I'm much more polite than that. Maybe this is why I no longer personally answer these emails, because it's hard for me to be polite to a marketer. I fundamentally believe that our brains and our tendencies should not be used to sell us crap that we don't need more effectively. And I stand by that so strongly. So, you know, this data should never be mined for those reasons. This data should never be sold for those reasons. And it's incredibly important that everybody who ever builds on top of the platform signs our ironclad agreements saying that you will never do that. And we have every right to shut everything down if you do.
[00:22:31.122] Kent Bye: Well, I'm obviously totally on board with all of that. The challenge that I find is interfacing with these big, huge companies like Google and Facebook and trying, in some ways, to get them to make a similarly strong commitment, as you just did to me right now, that they're not going to do that. In the absence of that, everything I see suggests that they're on a roadmap and trajectory to do just that: to use all this biometric data to detect these deeper patterns of our behavior. John Burkhart, a behavioral neuroscientist, told me that the line and threshold between predicting behavior and controlling behavior starts to get erased once you get deep down into it. If you can predict what's going to happen, you can also start to control what's happening. And this was before Cambridge Analytica, when he told me this. So before it became known that there's a potential for creating psychographic profiles that could be used for information warfare, it was more of a philosophical, speculative potential. But now that it's an actual reality, I don't hear a similar amount of caution from these big major companies. In the absence of them taking a strong philosophical stand against recording and capturing this biometric information, I have to assume that they not only want to do it, but that they're actively architecting these systems to be able to do it. So what are the deeper ethical arguments for why they shouldn't do that?
[00:23:58.454] Ariel Garten: I think you just made the deeper ethical argument as to why they shouldn't do that. And I think a better question is: how do we ensure that large companies are sufficiently lobbied, that there's enough consumer awareness, that people are enthusiastically on board with ensuring that this doesn't happen? You know, we have some amazing movements coming out of the Valley, like Tristan Harris's push against the attention economy, and you've actually seen that make change. Apple has made some pretty dramatic changes to the way they do things, how they monitor and share your screen time back to you, et cetera, based on the advocacy and evangelism that Tristan has done in this space. So we are, in some ways, seeing big players wanting to play. On the other hand, I don't know that I'd ever want Google to see my brain data. Please stand back.
[00:24:52.011] Kent Bye: Maybe you could unpack a little bit what type of information they would have access to if they did have access to your brainwaves.
[00:24:58.795] Ariel Garten: So the funny thing is, not much. There's this whole question of: is EEG identifiable data? And to date, the answer is no. If you see somebody's EEG, it's not like a fingerprint. You can't go back and say, oh, this EEG came from that person. There's nothing in there so unique that it could be a signature of you in particular. And it depends on how many sensors you have: with the four channels, you can't read somebody's PIN. You can't know what's going on in their mind. You can't know their thoughts. You're really just seeing this broad spectrum: is somebody focused? Are they relaxed? How much gamma activity do they have? You know, that's not particularly useful. The one signal that comes out as potentially meaningful is the P300. That's the reaction you're having to a stimulus in your environment, and it's been used as a lie detector test in some studies. So if you show somebody pictures of five houses, and they're the criminal and they did a crime in house B, you will potentially see the P300 fire for house B and not the other houses. So that's about the one context where it's been demonstrated to be somewhat meaningful. But even that is not airtight in any way. So what can you get from brain data at this point? Not necessarily that much.
[00:26:09.440] Kent Bye: Well, when I was talking to Conor Russomanno from OpenBCI in 2016, he told me then that it was possible to do digital fingerprinting of brainwave data, and that he was against recording brainwave data because of that. His point was that, you know, in some ways it still may be somewhat speculative, but in the future it could be possible that all biometric data has a unique signature that would only come from us, and that maybe it will only be unlocked with machine learning being able to filter out or find what those signals are. That, to me, is the risk. Right now, it's what they would call de-identified: it's not connected to your identity. But it could potentially become personally identifiable if they're storehousing and saving it for years and years. It would only take capturing you within an environment and getting access to a little bit of your biometric data, and then that could potentially serve as a cryptographic key to unlock all this other information that may be out there. So, I don't know if it's proven that it's never going to be possible, but for me, I think it's a matter of time before it does become possible.
[00:27:17.735] Ariel Garten: I agree. And Conor's a good friend, and we jam on these things all the time. He joined the Center for Responsible Brainwave Technologies early on, when we had our first symposiums, before 2016; yeah, we started around 2014. And so these are questions that we bounce back and forth all the time. So in an incredibly large pool of data, is it identifiable? It still hasn't been proven that you can actually do an EEG signature. Is that a reason to be lax about it? No. You know, I'm with you. In the future, what can we read? We don't know. Probably still not a lot. But that's, again, not a great reason to give Google access to your brain.
[00:27:53.929] Kent Bye: I think the strategy of both Google and Facebook, or any of these companies that have a business model of surveillance capitalism, is that they're going to want to storehouse and save as much data as they can for many years. And I don't necessarily want a decade's worth of my biometric data to become accessible, especially if there's a turn towards a totalitarian government that then wants to go back and look at these emotional profiles and start to create social scores based upon our behaviors and whatnot. So for me, having biometric data stored is a risk, and it feels like if you're going to do processing on it, then try to do real-time processing. Or if you need to have a rolling window, then have a small cache that's then sort of dumped. But don't be storing this stuff, because there's a bit of wanting to store it in order to do training of AI, I would imagine, in the future. So I have a pretty strong intuition around just not recording it or saving it at all.
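The "rolling window with a small cache that's dumped" idea can be sketched as a fixed-size buffer that derives an aggregate feature in real time while old raw samples fall away without ever being persisted. This is a hypothetical illustration of the data-minimization pattern described here, not any vendor's actual implementation; the class and method names are invented for the example.

```python
from collections import deque

class RollingBiometricWindow:
    """Keep only the last `window` samples of a biometric stream: compute a
    feature in real time, then let raw data fall out of the cache instead of
    being stored anywhere durable."""

    def __init__(self, window=256):
        # deque with maxlen drops the oldest sample automatically on append
        self.buf = deque(maxlen=window)

    def push(self, sample):
        """Ingest one sample and return the current aggregate feature.
        Only this derived value would ever leave the device; raw samples
        are never written out."""
        self.buf.append(sample)
        return sum(self.buf) / len(self.buf)

    def raw_retained(self):
        """How many raw samples exist right now -- bounded by the window."""
        return len(self.buf)
```

The design choice is that retention is bounded by construction: no matter how long the stream runs, at most `window` raw samples exist at any moment, so there is nothing to storehouse.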
[00:28:50.316] Ariel Garten: So I come from the tradition of Dr. Steve Mann. He's one of the first guys in the AR space, and he has this concept of sousveillance. He too was very frustrated by surveillance, and our conversations start back in the '90s, when surveillance meant something different than it does today. And so, you know, this notion of sousveillance: how can we control our own conversations? How can we shine the light on the people who are eavesdropping on us? How can we be, you know, the owners of our own data? How can we be empowered by what we learn about ourselves? And then use that to essentially culture jam, to essentially jam the information that other people can glean from up above us. So I'm very much with you in this conversation. I really think that the monetization of our own information is something that we should have ownership of, if we have an interest in doing it at all, and we should always have the right to, you know, rescind, to own, to eject from any process that owns any part of ourselves.
[00:29:56.307] Kent Bye: Great. So for Muse, what are some of the either biggest open questions you're trying to answer or open problems you're trying to solve?
[00:30:05.160] Ariel Garten: So the problem that we're really trying to solve now is how do you give people insight about themselves in ways that allow them to transform effectively. And so, you know, the pathway that we've gone down is meditation. We started with helping people understand what goes on in the brain during meditation. We're now helping people understand the relationship between their heart, their breath, and their body while they meditate. We're bringing in all of the other meditation traditions around guidance and teaching and trying to bring teaching into the app so that you can get lessons as you learn to shift your own biology and psychology and brain activity. And so we're trying to figure out how we create these like, you know, these small mobile experiences that allow you to access the wisdom of a teacher in a way that allows you to shift and change and transform. So that's on the Muse meditation side. And then on the Muse EEG side, it's really how do we enable researchers to unlock new insights about the brain, about brain health, about disease, about the ability to diagnose disease that we may not otherwise see, and how do we allow them to do that in ways that are scalable, affordable, inexpensive, and accessible.
[00:31:18.878] Kent Bye: We're here at this Consciousness Hacking Awakened Futures Summit, and I'm just curious to hear some thoughts from you about this movement of consciousness hacking, and whether you see this as a new thing, or as a larger kind of awareness of many different trajectories of people getting interested in these different meditation technologies. And yeah, it's changed a lot since when you first started back in 2002, and, you know, 17 years later, there's been a lot of cultural shifts and awareness around meditation. So I'm just curious to hear your thoughts on this whole movement of consciousness hacking.
[00:31:51.281] Ariel Garten: So first of all, I can't believe it's been 17 years. Hearing you say that, I was like, wow. On the other hand, 17 years is a drop in the bucket compared to 5,000 years of wisdom. You know, consciousness hacking is something that people have been trying to do for a very, very, very long time. We have ancient scripts and ancient languages describing people's very precise technologies that they applied at the time, you know, these being mental technologies or repetitive activities, all sorts of drug usage, all sorts of ways that people have tried to shift and hack their consciousness. So I think what we're seeing now, and I'm glad we're seeing it now, is a reassessment of all of the wisdom that was there, as much as we can hold in our hands at once, and the application of the new tools and technologies that we have to say, how can we hasten these processes? How can we unlock something that somebody else has seen? How can we understand what was really going on, so that we know if it was real or not? How can we more widely disseminate these technologies, you know, technologies being ancient wisdom? We can define a technology in many, many ways. So I'm very pro consciousness hacking, and I think everybody takes responsibility for what they do when they hack their own consciousness. I'm not pro disseminating consciousness hacking en masse until it's proven that the methodology is really safe, effective, and meaningful for people. But I'm absolutely on board with the experimentation.
[00:33:16.367] Kent Bye: Yeah, it's funny to think about the influx of entrepreneurship into these more contemplative spiritual traditions, and the tension between how things organically grow and change and evolve with no influx of capital into the system, versus once you introduce all these economic factors. I don't know, it feels like there's this interesting shift happening right now where all these different realms that have been traditionally more sacred spiritual communities now have the tech of Silicon Valley coming in and starting to create these different technologies. I mean, you're at Muse creating a platform to be able to facilitate a lot of these, but there's also just a lot of other interest and money that's coming into this, so I guess there's a lot of ethical things to be aware of. I don't know how you think about that, or how to navigate it in a way that isn't growing too fast, or appropriative, or, you know, like you said, trying to push it out too quickly and go to mass scale before it's ready. So yeah, I'm curious what moral dilemmas you find in that.
[00:34:17.194] Ariel Garten: So it's interesting. I mean, we saw VCs backing, like, transcranial direct current stimulation. It's not necessarily proven out in the research literature that that's really a great thing. Luckily, a lot of these companies have died. The market was also shy. We know that meditation is good for you. We know that ways to get you to meditate and make the practice work for you are really helpful. Like, when we started creating Muse, we knew even if the technology didn't work, at least getting people to meditate would be a good thing and couldn't do harm, and that the way that we were applying the technology would get people to meditate and couldn't do harm. So we felt pretty good about that, and then did a whole bunch of studies with a bunch of researchers that demonstrated, this is cool. When you see people putting money behind it, it's a hard thing, because on one hand, for something to spread and scale, you need money behind it. In order to, you know, manufacture devices, in order to deploy them, in order to get these technologies widely distributed, it takes capital. And if we look at, you know, how do we get even simple things like meditation technologies to people in the third world, or people in North America who cannot afford or access them, it takes the kind of economies of scale that bring the costs way, way down to allow it to be implemented. And there's all sorts of ways that we're using technology in this loop. It could even just be using an app to train more meditation teachers, using an app to deploy meditation guidance at as low a cost as possible. These things are low-tech, but they still count as tech and consciousness-hacking solutions. And even that takes money to build the awareness, to deploy, to have the customer support, to make it work on every platform and update with every new iOS update, et cetera. So, you know, it takes money, but you have to make sure it's the right money.
And we're lucky enough in this consciousness hacking space to have some investors who are really ethically aligned. Folks like Tim Chang, folks like Charlie Hartwell and the Bridge Builders Collaborative. These are investors whose mission it is to spread consciousness technologies and mind training technologies while not traipsing across any, you know, unpleasant ethical divides of just doing it for the money. So it really becomes a question of who is funding it and what their goals of funding it are.
[00:36:32.510] Kent Bye: That's good to hear. And finally, what do you think the ultimate potential of immersive technologies is, and what they might be able to enable?
[00:36:43.914] Ariel Garten: I have no idea. I don't know that anyone in this room knows. If we zap your brain, will you be enlightened? Who knows? Is the enlightenment that you achieve as effective as having gone through the 50 years of meditation it would otherwise have taken? Probably not. Who knows? Will it enable you to find another part of yourself? Who knows? Will it put you in the right epigenetic condition to release a whole other set of factors inside your body that will allow you to have new skills and abilities? Who knows? I don't think I or anyone else is qualified to answer that question.
[00:37:25.470] Kent Bye: Those are some good questions, though. Is there anything else that's left unsaid that you'd like to say to the immersive community?
[00:37:32.675] Ariel Garten: Just that I believe very deeply in the potential of humans to become better in so many ways. And I know we so desperately want to. And I see what's going on in North America and so much of the world today, where we are put into positions of fear and scarcity thinking, and that causes us to act unfairly and unpleasantly towards one another, to not create the world that we want to see. And through simple technologies like meditation, and I'm not talking fancy tech here, I'm talking closing your eyes, focusing on your breath, and quieting your amygdala, we are able to teach ourselves the skills required to live more peacefully, effectively, and productively in our lives. And so I just wish more and more people employed these technologies and methodologies of mental and emotional self-regulation, so that we can spend more time building a world in which we can all flourish.
[00:38:34.045] Kent Bye: Beautiful. Awesome. Well, thank you so much for joining me today. So thank you.
[00:38:38.029] Ariel Garten: My sincere pleasure. Thank you. This is fun.
[00:38:41.697] Kent Bye: So that was Ariel Garten. She's one of the co-founders of Muse, which is a brain-sensing headband that helps you meditate. So I have a number of different takeaways from this interview. First of all, it's fascinating to hear this evolution from when she started this in 2002 and went through these many different iterations, seeing that it was actually really compelling to take this technology from Steve Mann's lab, with at first just a single channel of EEG, to see what you could do with it, and what is the minimum number of channels that you need to do something that's really super interesting. They ended up with these four different channels. And then eventually last year, in 2018, they released the Muse 2 headband, which also starts to look at the heart rate and the breath and the body alongside your brainwaves, fusing all those things together. And sometime later in 2019, it sounds like they're going to be coming out with a Muse headband that fits specifically with a virtual reality headset. And so there's been a number of different VR experiences using Muse, whether it's from Cubicle Ninjas with their meditation applications, or from Healium with Sarah Hill. Now, because it's doing a rolling average, it's not getting super specific information as to what's happening at any given moment, so it's hard to use these BCIs to exert your agency to do things out in the world. But what they found is that it does a great job of making visible what is otherwise an invisible process happening inside of your mind. With the four channels, they don't have the 32-channel resolution that you may have with the higher-end BCIs and EEGs, which take a lot of time to put conductive gel on the sensors to get a good connection to your head. The trade-off is that they get many different iterations.
So you're able to track these specific points of information over long periods of time and do that type of training in that way. What they're essentially doing is making this invisible process of meditation more visible, tangible, and actionable, with real-time data as well as data after the fact, so you can look at all the various patterns that you have. They focused on meditation as that way to peer into the mind to see what's happening, to see more of the trajectory of these different states of brain activity, and to identify all the different factors that would help you know whether or not you're on the right trajectory toward achieving these different meditative states. Back when they started, meditation was not in the mainstream. It was more of an esoteric New Age topic that hadn't necessarily been legitimized and validated by science, and so in that way it was kind of discarded as a spiritual venture, not something within the context of medical applications. But there's been a huge shift happening over the last couple of decades when it comes to the general acceptance of meditation and these different technologies. It went from these kind of hushed whispers, saying, hey, do you meditate? Yeah, I do meditate. Is this like meditation? Yeah, it is. You know, going from there to now all of a sudden being a premier speaker at a conference like the Awakened Futures Summit, at a point where meditation has been pretty mainstreamed within a lot of the technology culture. And, you know, a lot of people that are coming from this psychedelic underground are having these various psychedelic experiences, and part of the trajectory that usually happens with all these folks is that they start to pick up these other contemplative meditative practices to integrate into their lives.
So it's kind of like this interesting fusion of these different worlds, and it's just interesting to hear from Ariel about the different evolutions and changes that have happened from her perspective of being within this field, and all the different things that they can look at in terms of being able to tell whether you're looking at a kitten or having an educational experience, or whether you have some increase in somatic awareness or increase in calm. It was interesting that she said she can't technically say that it reduces stress, because stress is an FDA-regulated term. It's a medical term, and so they can't say that it reduces stress without having it approved by the FDA; they have to sort of flip it around and say it increases calm. So, you know, overall, there's so many different applications and uses for meditation. And when I asked Ariel whether there are any ethical or moral dilemmas in terms of the commodification of meditation, she's like, well, not really, because we've seen that meditation is something that is good for people, it's something that is safe, and it's just something that people should be doing. It's hard to do, it's hard to get any good feedback, and so they're just creating the tools to give a little bit more transparency into the process and to help people do it more often. And if there's more people meditating, then overall, that's just a really good thing. And of course, a big, huge thing Ariel was pretty adamant about was to not be recording a lot of this biometric data. At this point, it hasn't been verified that EEG carries any specific fingerprint or biometric marker connecting it to your identity in specific ways. When I did an interview with Conor Russomanno, he had a different opinion, saying that it was already possible to pull different levels of biometric fingerprints from it.
It could be that Ariel was talking about just the four channels of the Muse headband, which is a pretty low resolution, able to track whether or not you're paying attention to something, along with this whole P300 signal, which is essentially a reaction to recognized stimuli, so that if someone shows you specific information it can kind of be used as a low-level polygraph test. But polygraph tests in general are already controversial, and so this level of polygraph-test analysis is not reliable either. At the same time, she's taken a pretty strong stance of not capturing and storehousing this data. And in fact, they used to have a pretty open SDK. Part of the reason why they put it behind having to reach out and connect to them is because they wanted to cut down on a lot of the neuromarketing techniques: applications that gather information about people in order to reduce their agency, or to sell and market stuff to people that they don't really actually need. And so around all of this neuroethics, she helped create this whole Center for Responsible Brainwave Technologies, which is setting up a lot of these different standards for what you should and should not be doing with neurotech and brainwave data. So, you know, she's pretty adamant that she doesn't want Google or Facebook to have access to her brainwave data, and she's taking that stance even though we don't know yet whether there's going to be anything particularly dangerous about the EEG alone. I actually think that if you do these different types of sensor fusion with other biometric markers, we'll be surprised over the next five to ten years at how much information you can tell about somebody when you aggregate all this biometric data,
especially when it's correlated to specific experiences, where you could start to see what people were actually looking at and start to deduce some of the stimulus-and-response reactions people have, where you start to re-architect and map out the brain and detect these different emotional hot spots. You know, once you create a map like that, then you could start to create these pathways and roadways to persuade people. But that level of prediction and persuasion very easily slips into control if it gets into the wrong hands. So the final point that I'll make here is about the consciousness hacking movement. Even though she's been involved in this space for 17 years, she says that this has been around for something like 5,000 years, with all these different technologies and techniques that people have created to modulate and hack their consciousness. So we've been doing this consciousness hacking for a very, very long time; this is not something new. But there's all these various ethical issues, in terms of things like transcranial stimulation. Can you zap your brain into enlightenment? Or if you achieve some sort of samadhi state or state of enlightenment that's aided by a temporary state from these substances, then is it as effective, and does it last as long, as something earned by doing these different practices for 40 or 50 years? Are you going to be able to find new parts of yourself? Or are you going to be able to cultivate the right epigenetic conditions to unlock these latent human potentials? So it sounds like Ariel at this point is really focused on using the Muse headband to assist within these meditative and contemplative practices, and that it's all about mental and emotional self-regulation, helping people move closer and closer to a place of much more flourishing within our lives.
So, that's all that I have for today, and I just wanted to thank you for listening to the Voices of VR podcast. And if you enjoy the podcast, then please do spread the word, tell your friends, and consider becoming a member of the Patreon. This is a listener-supported podcast, and so I do rely upon your donations in order to continue to bring you this coverage. So, you can become a member and donate today at patreon.com slash voicesofvr. Thanks for listening.