#444: Developing a New Eye Interaction Model with Eyefluence

I had a chance to do a demo of Eyefluence, which has created a new model for eye interactions within virtual and augmented reality apps. Rather than using the usual eye interaction paradigms of dwell-based focus or explicitly winking to select, Eyefluence has developed a more comfortable way to trigger discrete actions through a selection system driven by natural eye movements. At times it felt magical, as though the technology was almost reading my mind, while at other times it was clear that this is still an early iteration of an emerging visual language that is still being developed and defined.

I had a chance to talk with Jim Marggraff, the CEO and founder of Eyefluence, at TechCrunch Disrupt last week, where we discussed the strengths and weaknesses of their eye interaction model, as well as some of the applications that were prototyped within the demo.


Eyefluence’s overarching principle is to let the eyes do what the eyes are going to do, and Jim claims that extended use of their system doesn’t result in any measurable eye fatigue. While Jim concedes that most future VR and AR interactions will be a multimodal combination of using our hands, head, eyes, and voice, Eyefluence wants to push the limits of what’s possible by using the eyes alone.

After seeing their demo, I became convinced that there is a place for eye interactions within VR and AR 3D user interfaces. While the eyes can accomplish some amazing things on their own, I don't think that most people are going to want to use only their eyes. In some applications, like mobile VR or augmented reality apps, I could see how Eyefluence's eye interaction would work well as the primary or sole interaction mechanism. But it's much more likely that eye interactions will be used to supplement and accelerate selection tasks alongside physical buttons on motion controllers and voice input in immersive applications.

Here’s an abbreviated version of the demo that I saw with Jim presenting at Augmented World Expo 2016:
https://www.youtube.com/watch?v=TYcrQswVcnA

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip



Rough Transcript

[00:00:05.412] Kent Bye: The Voices of VR Podcast. My name is Kent Bye, and welcome to the Voices of VR podcast. So on yesterday's episode, I had Doug Bowman talking about 3D user interfaces, which is all the different ways of interacting with computers that have evolved beyond just the mouse and keyboard in these new immersive environments like VR and AR. And so on today's episode, I have the CEO of Eyefluence, which is creating a new interaction model that primarily uses just your eyes. We'll be exploring Eyefluence and their new eye interaction model on today's episode of the Voices of VR podcast. But first, a quick word from our sponsor. This is a paid sponsored ad by the Intel Core i7 processor. VR really forced me to buy my first high-end gaming PC. And so Intel asked me to come talk about my process. So my philosophy was to get the absolute best parts on everything, because I really don't want to have to worry about replacing components once the second-gen headsets come out and the VR min specs inevitably go up at some point. So I did rigorous research online, looked at all the benchmarks and online reviews. And what I found was that the best CPU was the Intel Core i7 processor. But don't take my word for it. Go do your own research. And I think what you'll find is that the i7 really is the best option that's out there. So this interview with Jim happened at TechCrunch Disrupt in San Francisco from September 2nd to 5th. And so with that, let's go ahead and dive right in.

[00:01:39.410] Jim Marggraff: My name is Jim Marggraff. I'm CEO and founder of Eyefluence. And we're doing technology we call eye interaction technology that transforms intent into action through your eyes.

[00:01:49.896] Kent Bye: Great. So maybe you could tell me a bit about how did this project come about?

[00:01:53.225] Jim Marggraff: It was a great, great story because back in 2012, I met a neurologist in Reno, Nevada, who for the last 15 years had been working on this technology development. He had met a person with quadriplegia who also couldn't speak. He was locked in. And Dr. Torch developed first a blink detection mechanism so the man could communicate by blinking with Morse code. And over the next 15 years, he went through seven generations of wearable eye-tracking technology funded by the government, worked on developing a pair of glasses that were put on Stephen Hawking and surgeons at the Mayo Clinic, and worked with the Army and the Navy on scuba divers and helicopter pilots. And when I found him in 2012, I had started a number of companies before this, creating technology for kids to learn to read with the LeapPad and doing a smart pen at Livescribe and founding that. So I looked at this and said, I think there's a huge opportunity to really advance eye interaction and develop a new model for HCI, wearables, and AR and VR. And so I set about doing that, started a company, founded Eyefluence, got some funding, and went off to figure out how we're now going to solve this problem of eye interaction to do something beyond dwell and wink to interact in a meaningful way with your eyes.

[00:03:00.715] Kent Bye: Yeah, so how do you translate intent of the eye then?

[00:03:04.018] Jim Marggraff: So what we did, we looked at essentially the eye-brain connection. We looked at the biomechanics of the eye, the movements of the eye. We studied how light flows through the eye to your retina, to your fovea, and the timing of that to the occipital region in your brain. We looked then at how you respond to different stimuli in different parts of your field of view, and said there's got to be a way to give you some form of control other than dwelling or winking. So ultimately we created a language that looks for what we call purposeful and non-purposeful eye signals. That means we'll offer affordances at times that you'll see explicitly, and you'll react: your eyes will move, sometimes consciously and other times without you even realizing it. And we also look for other signals in your eyes. So that allows you to transform, again, intent into action.
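To make the contrast with dwell- or wink-based selection a bit more concrete, here is a minimal, purely illustrative sketch of one way a "purposeful" eye signal could be treated as a selection: the gaze lands on an item and then makes a deliberate saccade onto an activation affordance, with no dwell timer involved. This is not Eyefluence's actual algorithm; every name, threshold, and data structure here is a hypothetical assumption.

```python
# Illustrative sketch only: NOT Eyefluence's algorithm. All names, thresholds,
# and data structures here are hypothetical, just to contrast signal-based
# selection with dwell-based selection.
from dataclasses import dataclass
import math


@dataclass
class GazeSample:
    x: float  # normalized screen coordinates, 0..1
    y: float
    t: float  # timestamp in seconds


@dataclass
class Target:
    x: float
    y: float
    radius: float


def hits(sample: GazeSample, target: Target) -> bool:
    """True if the gaze sample falls within the target's circular bounds."""
    return math.hypot(sample.x - target.x, sample.y - target.y) <= target.radius


def signal_select(gaze_stream, item: Target, affordance: Target,
                  max_latency: float = 0.6) -> bool:
    """Fire a selection when the gaze lands on `item` and then makes a
    deliberate saccade onto its activation affordance within `max_latency`
    seconds. There is no dwell timer, and glances that never reach the
    affordance do nothing."""
    looked_at_item_at = None
    for sample in gaze_stream:
        if hits(sample, item):
            looked_at_item_at = sample.t
        elif hits(sample, affordance):
            if looked_at_item_at is not None and sample.t - looked_at_item_at <= max_latency:
                return True  # purposeful item -> affordance signal
            looked_at_item_at = None  # landing on the affordance alone is not enough
    return False
```

The point of the sketch is the design property Jim describes: nothing fires unless the eyes deliberately travel from the item to the affordance, so ordinary looking around never triggers an action.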

[00:03:52.778] Kent Bye: Yeah, as I was going through the demo, there weren't a lot of reticles on the screen. And so there were certain times where it just felt like it was kind of reading my mind, which was kind of a surreal feeling, especially when I was rotating a globe and you were telling me to look at different countries. And then I would just look towards where the country was, and then it would automatically rotate the object. And so can you talk a bit about that decision to not include a reticle and kind of give this magical feeling?

[00:04:18.605] Jim Marggraff: Absolutely. That was an early, definitive choice, very conscious, because we have this language, we have what we call the 12 Eyefluence laws of eye interaction, and above them sits one guiding principle, and that is to let your eyes do what your eyes do. If we give you a reticle, what happens is it's like chasing your tail. You see it, you look, and it begins moving around, and as it moves, you look at it all the time, and it's very disconcerting. So we don't have reticles that you look at. The one instance might be if we are allowing somebody else to watch where you're looking, and I showed you the demo where someone might be wearing an AR camera and looking at a scene, maybe a security officer, and another person, an analyst, might be wearing a VR headset, and they're looking through the eyes of the security agent. They're seeing what that person is looking at and what they're seeing explicitly. In that case, you would see that individual's reticle. You'd see where they're looking. Now, the image that the analyst wearing the VR headset has would also provide them in that case a reticle for feedback, so they could see what they're looking at and then consequently share that information with the person wearing the AR glasses out in the field. So there's an instance. But other than that, what we've done is design this so you're not aware of thinking about what it is you want to do until you do it, and then it just happens.

[00:05:32.644] Kent Bye: Yeah, so it seems like for augmented reality applications in particular, this has a strong use case in terms of being able to see somebody and look them in the face, but still be able to navigate technology without having to explicitly say anything or click anything. It's very subtle. It's hardly even perceptible. So maybe you can talk about that, in terms of whether that was part of the design, to be able to create a way that you could maintain physical presence with another person, but yet still interact with technology.

[00:06:00.592] Jim Marggraff: Yes, yes. If you think about, so let's go AR, so I'm now wearing a pair of glasses ultimately. First of all, we have to respect sensitivities around interruption in social discourse. So once we've done that, and we might occasionally move to a courtesy mode where it doesn't interrupt you, but assume that we both acknowledge we're in a meeting and several people are in a meeting with AR glasses and we want to share some information. We're sitting around a table and I look over at you and I say, you know, you gave me an idea. And with my eyes, I pull up some information, which of course presents itself immediately on your screen, and we're now both looking at that, hovering in space in front of us. So I pull up an object or some information, and we're sharing it. And so there's a perfect instance of speed of access and sharing, and then I start to talk about it, and I might be manipulating that object, the information, visually with my eyes, and we might have a group of us doing that simultaneously. So now we have a form of collaboration with eye interaction, again, which is natural, because I'm just looking, but we're seeing what each other wants to communicate about our intent to modify or interact with that information.

[00:06:56.077] Kent Bye: Have you looked at specifically eye fatigue? I know that when I'm looking and doing some of these interactions, I'm doing more movements with my eyes than I normally would do if I had a button, for example. A lot of times I'd be able to look at something, select it, but being able to click at something with my button would reduce the number of looks around a screen for a given interaction. So I'm just curious if you've looked at fatigue, if this is a non-fatiguing type of interface for people doing it for extended periods of time, anywhere from like six to eight hours a day.

[00:07:23.559] Jim Marggraff: Absolutely. Let's go back to the very basic principle: let your eyes do what your eyes do. So it turns out that the saccades, or eye motions from point to point, depend upon what you're doing. If you're reading, they slow down, it's fewer per minute, but it might be up to 20 per minute. And so it may feel as though you're looking more, but you'd be astounded at the saccadic activity that's going on with your eyes. And we've looked at that, and essentially it remains about the same, because typically what is frustrating, what will fatigue your eyes very badly, is staring and gazing and dwelling at something and being forced to do so. That will fatigue your eyes quickly. But aside from that, put yourself in different places in the world and think about areas where your eyes might be moving a lot. You're watching a movie. You're out watching sports. Your eyes are moving. You're not aware of the saccades. Saccades don't fatigue your eyes. What fatigues them is forcing your eyes to do something unnatural. So that's what we find. We spend hours and hours, and it's just because you're looking around the way you normally look around. For instance, if you wanted to draw my attention to a person at the table, you might look from me to Mary sitting over here, and you'd say, hey Jim, have you met Mary? At which point, my eyes will move to Mary. Your eyes will also move to Mary. That's a natural signal as well. So basically, as long as we keep the signals natural, then you won't fatigue.

[00:08:36.106] Kent Bye: Yeah, at this TechCrunch Disrupt, there's been a little bit of discussion about multimodal interfaces. I know there's been a lot of talk, with the future of immersive technologies, about having conversational interfaces and being able to speak naturally. It's kind of how we interact with other humans a lot, and so being able to just talk to our technology like we do to humans. But yet there are other use cases and contexts in which it's not going to be appropriate to talk, or it may actually be faster to do some things with our eyes, faster to do some things by talking to a computer, and faster to do some things with a combination of looking and pushing buttons. And so when you're looking at Eyefluence and figuring out how it fits into the ecosystem, it seems like Eyefluence is introducing a new technology that opens up a lot of new capabilities with the eyes. I don't expect that you'd be able to do everything with the eyes.

[00:09:23.746] Jim Marggraff: Absolutely. We're looking at multimodality and embracing it fully. One demo I showed you was where I was, again, texting and I decided to send a message. Speaking is much faster than typing. We speak 120 to 150 words a minute. Right now, typing, generally using an anachronistic QWERTY keyboard to look at letters, is way too slow. So there are other ways, however, to communicate with your eyes that will evolve, and that will be very exciting. That said, we embrace hands, head, eyes, voice. And the question is, which one do you use when? What we focused on intensely is finding out how far we can advance your interaction with your eyes to essentially let you think and look and act as fast as you can. And that speed is a key thing. I'll mention one point. If you remember the environment where you had a large number of screens all around you, and if you think about what happens in short-term memory, basically intelligence often is viewed as eventually making connections, which come from your ability to synthesize information from short-term to working memory. If you can process more information, hold more information in your working memory, and process that to come up with conclusions or ideas, you can be smarter. Well, what if we give you an environment where you have access to a vast amount of information as fast as you can look, so I don't have to take the extra time to look and then move my hand, because that extra 200 milliseconds is wasted time, time when you have to maintain information in your working memory. But instead, if we make that very short, so that as fast as you can move your eyes around, you can dive down into a piece of information, navigate, scroll, clip it, possibly with your eyes, to a clipboard, move back to another piece of information, clip that, roll back, look at the things you've looked at, all in literally seconds. Much, much faster than you could do with any other modes of interaction. Now we've given you the ability to possibly hold more information in working memory and process that. And I could give other examples of this, but I'm excited because I think there's something we could do in the area of problem-solving and ultimately expanding intelligence that could result from this new mode of interaction, which could never have been done before.

[00:11:27.643] Kent Bye: Yeah, talking to different researchers about the concept of embodied cognition, a big part of that concept is that we don't just think in our brains, we think in our bodies, and not just our bodies, but our environment as well. So our context actually helps us think. And so when I was in that room with all of the different kinds of menus and floating squares, there was a part of me that, as you were talking, wanted to customize my own environment to be able to help externalize my thinking, and maybe come up with hierarchical lists or ways of kind of structuring that information. And so have you found a way to actually drag and drop objects within your system here?

[00:12:02.648] Jim Marggraff: Yeah, absolutely. Again, it comes back to eye signals, and you look at the basic mechanics of the eye. We do saccades, pursuits, but there's a whole range of nuances of what occurs as those actions and signals are being developed or being offered that offer lots of potential. And again, in that environment, to be able to drag, drop, clip, move things, if it made you feel like you're looking at something and dragging it with your eyes, that would be a disaster. It would be like looking at the reticle and trying to pull it along. Rather, you think about what you want to do, we give you the mechanics, and it just happens. And that's really cool.

[00:12:34.582] Kent Bye: So, just talking to different researchers, like Hao Li, he's been doing a lot of stuff with Oculus Research, looking at different social applications and being able to get more of our emotions and our facial gestures within VR, but also eye tracking within VR. With the second generation of headsets, I'd be really surprised if they don't include eye tracking. For you, is this a technology that you're wanting to license into a lot of the major HMDs, or what's your path forward in terms of having this type of interaction within the next generation of VR headsets?

[00:13:10.258] Jim Marggraff: Yeah, our business is focused on developing what we've done with the core eye tracking, which is very low power, robust, low MIPS, and small, so it can be deployed in AR or VR headsets, and we're platform agnostic. What we have then is this stack of software on top of that, the eye interaction model, that lets people do things they didn't know they could do. And as you said before, it feels like it was reading your mind. We hear that frequently. So our idea is to then deploy that. Ultimately, we'd like to see this universally. And our goal is to, and we're working with partners, and we'd like to see this deployed on as many headsets as possible and create the new, what we call the language of looking for stories, or the interaction model, which is the next form of human-computer interaction.

[00:13:49.140] Kent Bye: What are some of the biggest open problems that you're trying to solve right now?

[00:13:52.944] Jim Marggraff: We're looking at some of the problems that have been addressed in other ways, such as teleportation, because right now there's a variety of mechanisms and mechanics, you know, using pointers, and we've developed some really interesting ways to use your eyes, because we're aware of things that happen when your eyes move. You may not know it, but when your eyes move from point to point, during that period in which your eyes are moving, which is called a saccade, something happens. It's called saccadic suppression, which means that during that instant of tens of milliseconds, you're technically blind. You don't see anything. What that means is there are things that can happen during that time in a system that can take advantage of that, which could change a scene, because when you then emerge from that blindness, there's the opportunity to present information, which could do things like reduce sim sickness, for instance. So there are really interesting things we can do to take advantage of, again, what your brain is doing that you're not aware of when your eyes are behaving in certain ways.
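As a rough illustration of the saccadic-suppression idea described above, a system can watch for the high angular velocity that marks a saccade and apply a pending scene change (a small reorientation, a teleport step, swapping in new content) during that brief window of reduced visual sensitivity. This is only a sketch under stated assumptions, not Eyefluence's implementation; the velocity threshold and class names are hypothetical.

```python
# Illustrative sketch only, not Eyefluence's implementation. The velocity
# threshold and class names are hypothetical.
import math

SACCADE_VELOCITY_DEG_PER_S = 180.0  # tunable detection threshold


class SuppressionWindowUpdater:
    """Applies a queued scene change while the eye is mid-saccade, when
    saccadic suppression makes the change far less noticeable."""

    def __init__(self):
        self._prev_dir = None  # previous gaze direction (unit 3-vector)
        self._prev_t = None
        self.pending_update = None  # callable queued by the application

    def on_gaze_sample(self, gaze_dir, t):
        """gaze_dir: unit 3-vector of gaze direction; t: time in seconds."""
        if self._prev_dir is not None and self.pending_update is not None:
            dt = max(t - self._prev_t, 1e-6)
            dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(gaze_dir, self._prev_dir))))
            angular_velocity = math.degrees(math.acos(dot)) / dt
            if angular_velocity > SACCADE_VELOCITY_DEG_PER_S:
                # The eye is mid-saccade: apply the otherwise noticeable change now.
                self.pending_update()
                self.pending_update = None
        self._prev_dir, self._prev_t = gaze_dir, t
```

In a real engine this would be fed the eye tracker's gaze samples every frame, and whatever system wants to make an otherwise noticeable change would simply queue it as the pending update and wait for the next saccade.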

[00:14:46.316] Kent Bye: Yeah, I think one of the big challenges for using the software for the first time was getting the appropriate feedback to know when I'm actually doing it with intention and doing it correctly. For somebody who hasn't had their eyes tracked before, and with the model you have where, instead of some sort of positive reinforcement like a blink or a flash or something, the icons kind of disappear, there were some moments where I felt like I was able to just do the task you were saying, and then other times where it was like, imagine just kind of clicking a button a hundred times while you're kind of moving around, and just kind of erraticness. And so do you think that this is something where people are going to have to have a learning curve of learning how to actually interact with their eyes? Or do you think there are also other kinds of user interface things here? You're trying to innovate in some ways, but it's going against some of the user interaction paradigms that I've seen used within the web, for example.

[00:15:39.769] Jim Marggraff: Yeah. If you start with the model of using your eyes as a mouse, and this is what people have done. So what we find is that individuals that have spent time using other eye-tracking systems, and there's a small number, have learned to think about using their eye like a mouse, expecting a reticle, for instance, or expecting some feedback, and waiting and saying, OK, I'm going to blink now. When they land here, at first, the comment you made, it feels like, again, it's responding to you, reading your mind. At first, they're saying, wait a second, where is that feedback that I'm accustomed to from other eye-tracking systems? People that have never used this before put this on, and the learning curve is, well, it was fast for you. You had less than a two-minute tutorial, and you were using it. But we don't hear that comment from them at all. Basically, they just say, this is amazing. I can just do things. It's working for me.

[00:16:23.263] Kent Bye: Yeah, and I think there are some things where I could see how you'd want to optimize for speed on some use cases, like scrolling. But for other use cases, where you may want to read the entire thing, it was sort of like I wanted universal controls to be able to stop the interaction. I think there's still some development that you're doing, trying to iron a lot of this stuff out. But I think that, for me, what I thought was interesting is that there are new problems that I've never experienced before, where I wanted to be able to read a number of things, but every time I looked at it, it was sort of scrolling. And as somebody who gets some motion sickness, I think there is a little bit of danger there of having a lot of movement and scrolling around too much, which could, for some people, make them feel a little bit uncomfortable, almost giving them the feeling that they're on a moving train, for example. As you're trying to move forward and find this combination, it seems like you're trying to come up with specific use cases and solve specific problems. Like, I'm trying to scroll quickly, but another use case may be to get an overall view of everything without having your eyes do any interaction. So it seems like it's a language that's still in its very early phases of development.

[00:17:34.100] Jim Marggraff: Yeah, I think it is, absolutely. And in the example of navigating, in the instance where you'd like to get an overview of something, it starts out with intent. And so it depends on whether your intent is to, say, read something, because when you started reading, it was responding to you. I was watching. If you start thinking, gee, what else might I want to do? And we have instances where we create environments where, for instance, when you're scrolling, panning, zooming, looking around, you have complete control. And in an instance where you're looking and navigating through, for instance, a carousel, and you'd like to get an overview of that, we absolutely can allow you to get the overview of that. In the demo we put together, it was intended to show speed of access. And so the affordance for offering that freeze wasn't in that demo. But it's part of our language.
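One hypothetical way to picture the scrolling trade-off discussed in the last two exchanges: instead of scrolling whenever the reader's gaze moves, a gaze-driven reader could scroll only while the gaze lingers in a zone near the bottom of the text, and slowly enough that the eyes can follow the moving line with smooth pursuit rather than a forced saccade. This is an illustrative sketch, not how the Eyefluence demo behaves; all names and constants are made up.

```python
# Illustrative sketch only; names and constants are hypothetical, and this is
# not how the Eyefluence demo behaves.
SCROLL_ZONE_START = 0.8       # bottom 20% of the viewport (normalized y)
LINGER_SECONDS = 0.4          # how long gaze must stay in the zone before scrolling
SCROLL_SPEED_PX_PER_S = 60.0  # slow enough for the eyes to follow with smooth pursuit


class GazeScroller:
    """Scrolls only while the reader's gaze lingers near the bottom of the
    text, so normal reading never moves the page out from under the eyes."""

    def __init__(self):
        self._entered_zone_at = None

    def update(self, gaze_y_norm: float, t: float, dt: float) -> float:
        """Return how many pixels to scroll this frame.
        gaze_y_norm: vertical gaze position, 0 = top of viewport, 1 = bottom.
        t: current time in seconds; dt: frame time in seconds."""
        if gaze_y_norm >= SCROLL_ZONE_START:
            if self._entered_zone_at is None:
                self._entered_zone_at = t
            if t - self._entered_zone_at >= LINGER_SECONDS:
                return SCROLL_SPEED_PX_PER_S * dt
        else:
            self._entered_zone_at = None  # reading normally: leave the text still
        return 0.0
```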

[00:18:20.480] Kent Bye: So you're going to be going to the Future of Storytelling to talk about Eyefluence, and I'm just curious about how you see this type of eye-tracking technology kind of fitting into the future of narrative.

[00:18:29.450] Jim Marggraff: Absolutely. Very exciting because clearly we've seen demos that people put together regarding what happens when a character looks back at you, and we know that can be done somewhat with head tracking in a VR environment. We also know the sense of presence that occurs there and it's chilling and it's very compelling. Let's go back to the idea of what we know about the aspects of the way your eyes and brain behave when you're interacting with these signals and what happens when we start to then consider how we engage you and amplify the intensity of the feelings that you have when you're in an environment as a participant in an interactive participatory narrative, what I call IPNs. So you're in an IPN, and now the characters are aware, and let's make the characters another level, make them salient, and we have now characters that are aware of you and begin, because working with a company called Rival Theory, begin to develop an understanding of who you are, what your experience has been with them, and bring that forward. And now the interaction you have with them is developed and offered on the fly, spontaneously, based upon that agent that's aware of what you have done and how you have looked at them. And when that happens, the presence that occurs is striking. It's like nothing that anyone has seen before. So we're pretty excited about that.

[00:19:47.512] Kent Bye: Great. And finally, what do you see as the ultimate potential of virtual reality and what it might be able to enable?

[00:19:55.458] Jim Marggraff: Our vision for this and what I'm most excited about at iFluence is to expand human potential and empathy. And as I spoke a moment ago about what this can do for individuals in terms of intelligence and problem solving, I think that we can, with VR and AR, we can lift ourselves to a new plane of both understanding, of thinking, of communicating. And as we particularly look at the linkage between intelligent agents as AI moves forward, and now we consider what it means to enhance ourselves, I think there will be breakthroughs in the way we as a species think and communicate. And that's exciting.

[00:20:34.184] Kent Bye: Awesome. Anything else left unsaid that you'd like to say?

[00:20:37.725] Jim Marggraff: I think your shows are awesome. Your questions are great. Thanks.

[00:20:41.446] Kent Bye: Awesome. Thank you so much.

[00:20:42.686] Jim Marggraff: OK. Thank you. Thanks. Thanks again.

[00:20:45.115] Kent Bye: So that was Jim Marggraff. He's the CEO and founder of Eyefluence, which is working on this eye-tracking technology as well as a new eye interaction model. So I have a number of different takeaways from this interview. First of all, there were some moments of the demo that I was doing with Jim that felt really, truly magical, where it felt like I was trying to look and see and do something, and I was just looking at it and it was happening, and it almost felt like it was reading my mind. There were other times where it felt a little clunky, where it wasn't actually doing what I intended, and it had some errors. And so in talking to Doug Bowman and looking at human-computer interactions and how they actually measure it, you look at a couple of different factors. One is the speed with which you're able to do something, as well as how many errors you have while you're doing it. And so with eye interactions, I think you can actually do some tasks faster than moving your hand around or your thumb, because your eyes just kind of instantaneously move there, and you're able to do these different actions. I think the big thing is trying to create an overall piece of software that is able to minimize the number of errors, because I was having some errors, intending to do things that weren't happening, and that could be an issue with some of the software implementation or some of the actual user interfaces. But in the actual implementation of their user interface, there are some things that I thought were backwards from the paradigm that I was expecting, and some of that is intentional and some of that is confusing. So just for an example, when you're on a web page and you click on a button, you usually see that button either indent, or maybe there'll be a glow that happens around it, or there's some sort of change that blinks and gives you some sort of positive affirmation that the task that you were trying to do was successfully accomplished. Well, in the primary mode of interaction with using your eyes, because Jim is explicitly trying to avoid using your eyes as a cursor, instead of giving you that positive reinforcement, he's essentially making something disappear when it's working correctly. And there were some moments when I thought I was clicking something and it didn't actually click. A good example is that sometimes if you're on a website filling out a form and you hit the submit button and nothing happens, you think to yourself, well, I don't want to submit twice because I don't want to have my credit card charged twice. And then if nothing does happen, then you think, oh, well, maybe I actually didn't hit the button. You hit it again and it works. Well, it's kind of that type of feeling, where it doesn't give you any specific feedback right away. As a user, you're kind of expecting some sort of indication of success. And sometimes in the user interface that they have right now, there is not any indication of success other than it happening successfully. But when it doesn't happen, it doesn't happen, and you don't know what you did wrong exactly. The point being is that in thinking about these new user interfaces with the eyes, it's a balance of giving that type of positive feedback versus doing something that's not invasive. Because I've seen other demos, like with the FOVE, where it had a reticle where you're basically looking and shooting, and having a reticle move around where you're looking actually is pretty disruptive.
So I think, getting back to Eyefluence and just talking about this approach of eye interaction, what they were trying to show me in some of these demos is the level and extent to which you could interact with technology by just using your eyes. And I think it is fairly impressive what you are able to do. However, I don't think that anybody will actually want to do complete eye interaction unless they're forced to do so because they don't have the ability to have any sort of other input control. So, for example, if you're in VR and you have a button available, then looking at something and clicking could actually just be faster and easier to do for some tasks. So I think that there are going to be other multi-modal combinations, you know, like speaking and using your hands. And text input, for example, just using your eyes, I think is going to be a pretty horrible experience, and so you basically want to be able to just speak into the technology. And so that's something that Jim was also saying, that he's really pushing for these multi-modal interfaces to be able to use all of them. But I can imagine that Eyefluence as a technology in general is likely going to take off like crazy in the augmented reality market, just because there are going to be a lot more situations and contexts in which you're either going to have your hands occupied with other things or you won't be able to have a button or any other kind of input control. So I could see that the biggest applications for this would probably be in augmented reality first, and then maybe mobile VR headsets, and then likely the higher-end headsets. I say that just because the higher-end headsets already have a lot of other input controls that are available. If people are going to be having an Oculus or a Vive, they're going to have either the Touch controllers or, with the Vive, their Lighthouse-tracked controllers. And so there are just a lot more options available for those technologies. But for mobile, if the power is able to get low enough, if they go with a hardware solution, then that type of eye interaction could just make a lot of things a lot easier. I think it's also important to note that this does feel like a new language that's emerging. And as a person who was using this for the first time, in addition to the technology still being really early, I think there are still a lot of things to be worked out. There's another thing that I experienced when there was a demo of reading. It felt okay, like I was able to read it, but you can imagine that if you're reading something and it just sort of automatically scrolls when you weren't necessarily expecting it, then your eyes just kind of have to jump and reconnect to what you're reading. When I'm reading on a website, I have the ability to scroll with my mouse, and so when I'm doing that I'm able to do what's called a smooth pursuit, tracking the text and looking up as it's moving. But with the sole eye interaction, it basically has to happen automatically, so instead of a smooth pursuit eye movement I have to do another saccade, which feels more disruptive when I'm reading. It felt like I would rather have a scroll wheel on my thumb to scroll down rather than rely upon that mechanism of automatic scrolling. So while it might be okay for reading small amounts of text, I certainly wouldn't want to read anything of significance, like a long article or a long email, and certainly not a book.
But in certain contexts, having that sort of automatic scrolling may be the thing that works really well. So I think we'll see what's going to happen with the future of Eyefluence. They seem to be doing both the core technology and the software. I think technologically there are likely going to be other competitors out there that are on par or even further ahead in terms of what they're already producing, putting out there, and installing into other headsets. You know, SMI already has some eye-tracking kits available that you can put into a DK2 as well as into an HTC Vive. But I think the big differentiator for Eyefluence is definitely their eye interaction model and their software, and that seems to be something that I haven't seen anywhere else. And so as far as their software goes, whether or not it'll gain traction into some of these bigger headsets, I think it's going to come down to licensing issues, whether or not it makes sense financially, and whether or not they're able to develop the interactions well enough to be rock solid, to just kind of be dropped into some of these different technologies. So that's all that I have for today. I just wanted to thank you for listening to the Voices of VR podcast. And if you'd like to support the podcast, then spread the word, tell your friends, and become a donor at patreon.com slash Voices of VR.
