OpenBCI is an open-source brain-computer interface that gathers EEG data. It was designed for makers and DIY neuroengineers, and has the potential to democratize neuroscience with a $100 price point. At the moment, neither OpenBCI nor any other commercial off-the-shelf neural device is compatible with any of the virtual reality HMDs, though companies like MindMaze are building VR headsets that are fully integrated with neural inputs. I had the chance to talk with OpenBCI founder Conor Russomanno about the future of VR and neural input on the day before the Experiential Technology and Neurogaming Expo, also known as XTech. Once neural inputs are integrated into VR headsets, VR experiences will be able to detect and react to whatever catches your attention, your level of alertness, and your degree of cognitive load and frustration, as well as differentiate between different emotional states.

LISTEN TO THE VOICES OF VR PODCAST

“Neurogaming” is undergoing a bit of a rebranding effort towards “Experiential Technology” to take some of the emphasis off of real-time interaction with brain waves. Right now the latency of EEG data is too high, and the data is not consistent enough to be reliable. One indication of this was that all of the experiential technology applications that I saw at XTech that integrated neural inputs were either medical or educational applications.

Conor says that there are electromyography (EMG) signals that are more reliable and consistent, including micro expressions of the face, jaw grits, tongue movements, and eye clenches. He expects developers to start to use some of these cues to drive drones or to build medical applications for quadriplegics or people who have limited mobility from ALS.
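
These EMG cues are easier to work with than EEG because a muscle contraction produces a large, obvious burst in the signal. As a rough illustrative sketch (not OpenBCI’s actual pipeline; the sampling rate, window, and threshold below are invented values), a jaw clench can be turned into a binary input by rectifying the signal, smoothing it into an envelope, and thresholding:

```python
import numpy as np

def emg_to_binary(signal, fs=250, window_s=0.2, threshold=0.3):
    """Turn a raw EMG trace into an on/off control signal:
    remove the DC offset, rectify, smooth into an envelope, then threshold."""
    rectified = np.abs(signal - signal.mean())
    win = max(1, int(fs * window_s))
    envelope = np.convolve(rectified, np.ones(win) / win, mode="same")
    return envelope > threshold

# Simulated trace: quiet baseline, a burst of muscle activity, then quiet again
rng = np.random.default_rng(0)
trace = np.concatenate([
    rng.normal(0, 0.05, 500),   # resting muscle
    rng.normal(0, 1.0, 250),    # jaw clench
    rng.normal(0, 0.05, 500),   # back to rest
])
control = emg_to_binary(trace)  # True only during the clench
```

A real system would also band-pass filter the signal and calibrate the threshold per user, but even this crude envelope detector is enough to drive a binary input like “clench to click.”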

There are a lot of privacy implications once you start to gather this EEG data, and Conor is particularly sensitive to this. He says that recent research indicates that EEG signals are unique to each person and represent a digital signature that could trace anonymously submitted data back to you. He says that companies of the future will need to adopt strict privacy policies and not use this data to exploit their users.

At the same time, there were a number of software-as-a-service companies at XTech who were taking EEG data and applying their own algorithms to extrapolate emotions and other higher-level insights. A lot of these algorithms use AI techniques like machine learning to capture baseline signals of someone’s unique fingerprint and then train the AI to make sense of the data. AI that interprets and extrapolates meaning out of a vast sea of data from dozens of biometric sensors is going to be a big part of the business models of Experiential Technology.

Once this biometric data starts to become available to VR developers, we’ll be able to go into a VR experience, see visualizations of what contextual inputs were affecting our brain activity, and start to make decisions to optimize our lifestyle.

I could also imagine some pretty amazing social applications of these neural inputs. Imagine being able to see a visualization of someone’s physical state as you interact with them. This could have huge implications within the medical context, where mental health counselors could get additional insight from the physiological context correlated with the content of a counseling session. Or I could see experiments in social interactions with people who trusted each other enough to be that intimate with their innermost unconscious reactions. And I could also see how immersive theater actors could have very intimate interactions, or how entertainers could read the mood of the crowd as they’re giving a performance.

Finally, there are a lot of deep and important questions about protecting users from losing control of how their data is used and how it’s kept private, since it may prove impossible to completely anonymize it. VR enthusiasts will have to wait on better hardware integrations, but the sky is the limit for what’s possible once all of these inputs are integrated and made available to VR developers.

Here’s a partial transcript of what specifically Russomanno said about the limits of using EEG for real-time interaction:

Russomanno: I think it’s really important to be practical and realistic about the data that you can get from a low-cost, dry, portable EEG headset. A lot of people are very excited about brain-controlled robots and mind-controlled drones. In many cases, it’s just not a practical use of the technology. I’m not saying that it’s not cool, but it’s important to understand that this technology is very valuable for the future of humanity, but we need to distinguish between the things that are practical and the things that are just blowing smoke and getting people excited about the products.

With EEG, there’s tons of valuable data that is your brain over time in the context of your environment, not looking at EEG or brain-computer interfaces for real-time interaction, but rather looking at this data and contextualizing it with other biometric information like eye-tracking, heart rate, heart rate variability, respiration, and then integrating that with the way that we interact with technology, where you’re clicking on a screen, what you’re looking at, what application you’re using.

All of this combined creates a really rich data set of your brain and what you’re interacting with. I think that’s where EEG and BCI is really going to go, at least for non-invasive BCI.

That said, when it comes to muscle data and micro expressions of the face and jaw grits and eye clenches, I think this is where systems like OpenBCI are actually going to be very practical for helping people who need new interactive systems, people with ALS, quadriplegics.

It doesn’t make sense to jump past all of this muscle data directly to brain data when we have this rich data set that’s really easy to control for real-time interaction. I recently have been really preaching like BCI is great, it’s super exciting, but let’s use it for the right things. For the other things, let’s use these data sets that exist already like EMG data.

Voices of VR: What are some of the right things to use BCI data then?

Russomanno: As I was alluding to, I think looking at attention, looking at what your brain is interested in as you’re doing different things. Right now, there are a lot of medical applications like neurofeedback training for ADHD, depression, and anxiety, and then also new types of interactivity, such as someone who’s locked in could practically use a few binary inputs from a BCI controller. In many ways, I like to think that the neuro revolution goes way beyond BCI. EMG, muscle control, and all of these other data sets should be included in this revolution as well, because we’re not even coming close to making full use of these technologies currently.

Voices of VR: So what can you extrapolate from EEG Data in terms of emotional intent or activities in different parts of the brain? What can you tell from the EEG data?

Russomanno: I think the jury is still out on this one in terms of how far we can go with non-invasive EEG, but right now we can detect attention and alertness, and whether something catches your attention. If you’re in a mind-wandering state and you’re searching for the next thing to be interested in, and something catches your eye, there’s an event-related potential that’s associated with that. That’s really interesting data: presenting a user or a player with little flags or trigger moments and seeing what stimuli are actually eliciting interesting responses. As for emotional states, we’re getting to the point now where we can distinguish between different emotional states, specifically anxiety, fear, happiness; some very general brain states. That’s kind of where we’re at right now, but I think that we’re going to learn a lot more in the next few years.
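
The event-related potentials Russomanno mentions are usually too small to see in a single trial; the standard technique is to average many short EEG epochs time-locked to the stimulus, so that random background activity cancels out while the stimulus-locked response remains. Here’s a minimal sketch on simulated data (all of the numbers, including the P300-like bump, are invented for illustration):

```python
import numpy as np

def average_erp(eeg, events, fs=250, pre_s=0.1, post_s=0.4):
    """Average EEG epochs time-locked to stimulus events.
    Background noise averages out; the event-related potential remains."""
    pre, post = int(pre_s * fs), int(post_s * fs)
    epochs = np.array([eeg[e - pre:e + post] for e in events])
    epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)  # baseline-correct
    return epochs.mean(axis=0)

fs = 250
rng = np.random.default_rng(1)
eeg = rng.normal(0, 5.0, fs * 60)                  # one minute of noisy "EEG"
events = np.arange(fs, fs * 58, int(fs * 1.4))     # stimulus onset samples
t = np.arange(int(0.5 * fs)) / fs
bump = 5.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))  # P300-like response
for e in events:
    eeg[e:e + len(bump)] += bump                   # add the response to each trial
erp = average_erp(eeg, events, fs)                 # a peak emerges near +300 ms
```

On real hardware you would band-pass filter first and reject artifact-laden trials, but the core idea of averaging stimulus-locked epochs is what makes these tiny responses detectable.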

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

AltSpaceVR is the first cross-platform social VR application that supports the Gear VR, Oculus Rift, and HTC Vive. Now that all of these VR headsets have had their consumer launches, AltSpaceVR has become one of the top places in VR to have social interactions.

I had a chance to catch up with Bruce Wooden (aka “Cymatic Bruce”) at SVVR 2016 to talk about what types of events they’ve been holding, what types of emergent social behaviors he’s seeing, and how they’ve been teaching VR etiquette within their welcome spaces for new users. We also talk about some of the social dynamics that occur when they mix together the full range of interaction fidelity between hand-tracked Vive controllers, Leap Motion hands on the Rift, and just the head rotation of the Gear VR. There is a bit of a power dynamic that emerges where people with tracked hands can be more expressive and command more attention and power in a conversation, and so we talk a bit about the implications of that, including whether or not it makes Gear VR users feel more like consumers than creatives, producers, or equal participants.

AltSpaceVR has focused on building out different games and things to do in VR that allow users to participate in an activity while socializing with other people, and they are also starting to hold bigger events. One of the more successful AltSpace events brought comedians from the JASH festival to do improv comedy inside virtual reality, featuring Josh Brekhus, Beth Lepley, Danny Cohen, and Ali Ghandour:

Comedy in VR tends to work particularly well because you get a lot more social presence when you hear other people laugh, and so they’re building on that success to hold what could end up being the largest VR event to this point: Reggie Watts live in AltSpaceVR on May 26, 2016 at 8pm PST.

Bruce said that one of his personal highlights was meeting Reggie when he did a comedy and music set in VR, and I had a chance to interview Reggie about his VR film at Sundance, where I discovered that he’s a huge VR geek.

The release of consumer VR has made social VR spaces like AltSpaceVR a lot less lonely, and they are likely to host some of the bigger VR events given their cross-platform capabilities. It will be interesting to see if catering to Gear VR users as their main target demographic will limit features that might be Vive- or Rift-specific. Right now, being inclusive of all of the VR headsets is proving to foster a diversity of participants who go beyond the VR enthusiast community and into mainstream Gear VR users, who may have received a free headset with their S7 pre-order.

At Oculus Connect 2, SVVR’s Karl Krantz told me that he accidentally spent 12 straight hours in VR and thought that only 3 hours had passed. I then started hearing a lot more time dilation stories from Owlchemy Labs’ Alex Schwartz and Devin Reimer as well as Fantastic Contraption’s Sarah Northway. These time underestimation anecdotes were both fascinating and really scary, and some of the likely causes were achieving a flow state, having a deep sense of presence, and the simple fact that ‘time flies when you’re having fun.’

The University of Hamburg’s Dr. Gerd Bruder has done some research into the issue of time perception in virtual environments, and he’s discovered some fascinating results that he presented at the IEEE VR conference in March. He found that there are environmental cues, such as the movement of the sun, that can change subjects’ perception of time when they’re artificially manipulated in VR. These environmental time estimation cues are called “zeitgebers,” and they are one of the many factors that impact our time perception. Other correlating factors include cognitive load and the level of flow that a VR experience generates.

I had a chance to catch up with Gerd at IEEE VR, where he shared some of his other insights and results into why we’re vastly underestimating the time that we’re spending in VR.

Here’s the paper by Christian Schatzschneider, Gerd Bruder, & Frank Steinicke titled “Who turned the clock? Effects of Manipulated Zeitgebers, Cognitive Load and Immersion on Time Estimation.” This research starts to explain some of the factors that directly impact time dilation, but there is still a lot that we don’t know about it yet. What this research shows is that there are a lot of zeitgeber cues that can be manipulated and studied within VR environments.

https://twitter.com/BenKuchera/status/720265921294499841

For more information on this topic, be sure to check out ResearchVR’s podcast on “Time Perception and Dilation in VR.”

Brandon Jones has been one of the lead developers on the WebVR API over the past couple of years as part of his 20% project at Google. He announced this week that he’s now going to be working on WebVR full-time, which is a great indicator that Google is putting more resources into supporting VR on the open web. I had a chance to catch up with Brandon at GDC to talk about all of the web technologies enabling web browsers to drive room-scale Vive experiences and WebGL exports from Unity & Unreal Engine. Some of the highlights include the new WebVR 1.0 draft spec, the Gamepad API, WebGL 2, and WebAssembly.

I expect that there will be more announcements about what Google is doing in VR next week at Google I/O. Google is definitely investing in the future of VR and the open web with Brandon working on this full-time, as well as with their recent hiring of Josh Carpenter to the WebVR team.

Walter Greenleaf has been researching medical applications of virtual reality since 1984, and he believes that healthcare is going to be transformed by consumer VR & AR technologies. Walter says that VR fits into a number of different healthcare trends, including the digitization of tools, moving from subjective assessments to objective measurements, moving towards patient-centered medical care, and moving away from a fee-for-service to a results-driven business model. These all point towards the desire to collect more and more objective measurements, and VR technology has the capability to capture and present a lot of this data in entirely new ways.

I had a chance to catch up with Walter at the Silicon Valley Virtual Reality conference, where he gave me an overview of the medical applications of VR and why he believes that VR is going to transform healthcare. There are many different industry verticals within healthcare, and he believes that we are just at the very beginning of seeing how consumer VR could help improve many different dimensions of our health and potentially even help save lives.

Here are the slides from one of Walter’s recent presentations on “How VR & AR Technologies will Transform Healthcare.”

Linden Lab’s Second Life has been one of the largest and most successful virtual world ecosystems over its nearly 13 years of existence. But Linden Lab recognized that the infrastructure and foundations of Second Life were not going to be able to drive the level of low-latency performance that virtual reality requires, and so they announced in June 2014 that they would be building a new project, codenamed Project Sansar, that would be optimized for VR.

I had a chance to sit down with Linden Lab CEO Ebbe Altberg this year at SVVR to talk about their design goals and plans for Project Sansar. We also talked about a lot of deeper issues about the future of the Metaverse ranging from the tradeoffs of walled garden silos vs the open web, control vs. freedom, identity vs. anonymity, and moderating for the group experience vs. justice and reconciliation beyond a one-strike ban.

This interview with Ebbe also inspired a lot of deep thoughts about how the overall political, economic, and legal context is setting the tone and boundaries for the future of VR. The interview made quite an impression on me, and I appreciate Ebbe’s candor and honesty in discussing and exploring some of the larger issues of the closed vs. open web and the future of privacy and data tracking as we move beyond the “Information Age” and into the “Experiential Age.”

Linden Lab has had a lot of experience running a successful virtual world with Second Life, and they are well-positioned to make an early move at creating a user-generated virtual world at scale. A press release from last August mentions that pilot users of Project Sansar will have to “create 3D content using Autodesk’s Maya® software,” and so it will be interesting to see what types of world-building tools will be available within the experience for their non-expert 3D modeling users. At SVVR, Linden Lab announced that they’re taking creator preview applications from people interested in creating experiences on their platform starting later this summer.

This interview inspired a lot of deep reflection, and I noticed that there is a qualitative difference between being able to track data from your behaviors in a web browser and being able to track biometric data from a VR experience. I think this reflects a wider discussion in the tech community that VR and AR may be catalyzing a larger shift from the “Information Age” into this new “Experiential Age.”

MOVING FROM THE INFORMATION AGE TO THE EXPERIENTIAL AGE
On Monday, Mike Wadhera wrote an article on TechCrunch titled “The Information Age is over; Welcome to the Experience Age,” where he argues that there’s a fundamental shift in “the changing context of our online interactions, shaped by our connected devices” that has users posting and consuming less personal information and moving towards having more “experiences” online.

Wadhera argues that Facebook and Twitter are Information Age natives where users aggregate data to reflect their identity. He says, “Accumulation manifests in a digital profile where my identity is the sum of all the information I’ve saved — text, photos, videos, web pages.” With original Facebook status update sharing on the decline, this could be an early indication that the tide is shifting away from experiences that value data and information and towards ones that emphasize visceral emotions and deeper meaning.

The Experiential Age is more about having an authentic experience of being yourself rather than collecting abstract representations of identity through the posting of information. Wadhera argues that Snapchat is a native to this new Experiential Age, and that their ephemeral, self-destructing messages “force us to break the accumulation habit we brought over from desktop computing.”

Wadhera identifies mobile technologies as one of the key drivers of this shift, but I would also argue that the rise of virtual and augmented reality has the potential to move the center of gravity of our attention from information on screen-based media to experience within immersive media.

The Virt’s Phil Johnston argues a similar point in his post from 2014, where he says that virtual reality represents the dawn of the Experiential Age. VR allows for the direct transmission of experiences that goes beyond the data transmission that happens when experience is abstracted onto a 2D plane.

This is a similar conclusion to the one I came to in my summary of 400 Voices of VR interviews, a talk that I gave at SVVR. I titled my graphic “The Human Experience of Virtual Reality” because it was the underlying human experience that I found made the most sense of the virtual reality landscape. The “human experience” landscape of VR is less about market verticals and more about how VR has the capacity to reflect the full complexity and nuance of the human experience.

[Graphic: The Human Experience of Virtual Reality]

I would argue that the more of these twelve domains of human experience a VR experience can include, the more popular it will be, since it will reflect more of the fullness of our actual human experience. Both Second Life and Project Sansar aim to give expression to all twelve of these domains within the context of their virtual worlds, and this is often overlooked or not fully appreciated by the new consumer VR community.

This was a point that was brought home to me in my 2014 interview with Ebbe as well as with Second Life documentarian Bernard Drax. Linden Lab has an incredible amount of experience in fostering and cultivating each of these domains of human experience, and so I would expect that if Project Sansar enables user-friendly world-building capabilities, then they’ll have the potential to be one of the first virtual worlds to capture the full range of expression across all of the different dimensions of the human experience within VR.

INFORMATION AGE BUSINESS MODELS DRIVEN BY SURVEILLANCE
One of the primary business models of the Information Age has been that information is freely available and supported by ads. There’s an explicit agreement that authenticated users are volunteering to be tracked and surveilled by companies in exchange for all of the free content and social connections that those companies enable.

This was a point made by Ethan Zuckerman on a Reply All podcast, where he argued that the JavaScript pop-up ads that he invented in 1994 may have helped sustain an ad-based revenue model on the Internet that could have had the unintended side effect of “ushering in a world in which the American public has grown too comfortable with the idea of being under surveillance.”

Zuckerman feels guilty that he may have “helped create a world today in which Edward Snowden can come forward with his revelations about government spying, and most of us will just shrug, because we’re so used to being generally surveilled by the websites we visit.”

We often don’t hesitate to consent to the Terms of Service agreements of Information Age websites that dictate how our data is collected and used in exchange for the attention of our social network and the platform tools to share photos, status updates, or videos. We have a lot of agency over what information we share and don’t share, and so this is a value exchange where we’re willing to trust these companies in exchange for the real value they’re providing.

WHY EXPERIENTIAL AGE SURVEILLANCE IS DIFFERENT
While there’s a level of consent for the data that we explicitly share on websites within the context of the Information Age, the Experiential Age is going to involve tracking behavioral and biometric data that is far more unconscious and yet still revealing. Virtual reality has the capability to gather an enormous amount of biometric data, ranging from our heart rate, our emotional states, and identifiable body language cues extrapolated from head and hand tracking, to eventually our eye-tracked “attention” for what we’re looking at and impressed by.

While we have had little hesitation about sharing abstracted information with companies, perhaps we will be more cautious about what types of unconscious medical data from our bodies we’re willing to share. That means that Facebook, Google, or Linden Lab could start to amass vast repositories of personal biometric data that could become a target for governments or hackers.

US companies can receive a National Security Letter from the government requesting data, which they’re prevented from talking about under a gag order. There are government transparency reports available from Google and Facebook that offer assurances that they’re not required to hand over certain private data, but the Electronic Frontier Foundation has found thousands of pages of documents from a related lawsuit showing repeated revelations of government abuses of power.

The Terms of Service from Oculus even remind us “[No] data transmission or storage can be guaranteed to be 100% secure. As a result, while we strive to protect the information we maintain, we cannot guarantee or warrant the security of any information you disclose or transmit to our Services and cannot be responsible for the theft, destruction, or inadvertent disclosure of information.”

While we like to think that all of our personal data will be completely safe in the hands of these companies, the truth of the matter is that there are hackers and abusive governments that make it impossible for companies to be able to guarantee 100% security.

THE EVOLVING BUSINESS MODELS OF THE EXPERIENTIAL AGE
Will the Experiential Age catalyze a change in what types of Terms of Service we’re willing to accept? And will this lead to new viable business models that don’t rely upon surveillance? Here are a number of big open questions as to what the emerging business model of this Experiential Age will be:

  • Will VR users still be willing to share personal data in exchange for free content?
  • How much of this gathered data will VR users be willing to share?
  • What types of benefits of interactivity or more targeted content would this data enable?
  • What insights and judgements could AI-trained, deep learning networks be able to assert about us after studying months of our biometric data gathered from VR experiences?

Overall, I think that the underlying business models of the Experiential Age may be evolving towards a pay-per-event type of model. So rather than receiving all of our immersive content for free in exchange for seemingly innocuous data collection, perhaps we’ll move towards a culture that is willing to pay for experiences up front without having to submit to additional surveillance.

We’re already moving towards an app-based ecosystem in VR where there is a pay-upfront mentality that more closely mirrors what we have seen in the gaming market, but it’s still an open question whether we’ll be willing to pay for every immersive experience after living through this Information Age ethic that “information should be free.”

We are still willing to pay for live sporting, music, and cultural events, and so perhaps The Experiential Age will introduce new viable business models for holding virtual events.

MICROPAYMENTS AND DISTRIBUTED TRUST WITH THE BLOCKCHAIN REVOLUTION
One key technology that may provide a viable solution for micropayments and the anonymous exchange of payments is the “blockchain,” the underlying trust mechanism in cryptocurrencies like Bitcoin. Don Tapscott & Alex Tapscott just released a book on May 10th, The Blockchain Revolution, which talks about some of the implications of the blockchain: “Keeping the user’s information anonymous, the blockchain validates and keeps a permanent public record of all transactions. That means that your personal information is private and secure, while all activity is transparent and incorruptible–reconciled by mass collaboration and stored in code on a digital ledger.”
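
The tamper-evidence the Tapscotts describe comes from a simple structural property: each block commits to the hash of the previous block, so altering any historical record invalidates everything that follows. A toy sketch of that idea (a real blockchain adds distributed consensus, proof-of-work, and digital signatures on top of this):

```python
import hashlib
import json

def block_hash(data, prev_hash):
    """Hash a block's payload together with the previous block's hash."""
    record = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(record.encode()).hexdigest()

def make_block(data, prev_hash):
    return {"data": data, "prev": prev_hash, "hash": block_hash(data, prev_hash)}

def verify(chain):
    """Recompute every hash and check each block points at its predecessor."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block["data"], block["prev"]):
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

# Build a tiny ledger of transactions, starting from a genesis block
chain = [make_block("genesis", "0" * 64)]
for payload in ["alice pays bob 5", "bob pays carol 2"]:
    chain.append(make_block(payload, chain[-1]["hash"]))
```

Changing `chain[1]["data"]` after the fact makes `verify` fail, because the stored hash no longer matches the recomputed one; that is the “incorruptible ledger” property in miniature.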

I believe that there are a lot of decentralization implications of the blockchain that could impact our political, economic, and legal systems, and that the blockchain is a technology that has the potential to change the larger context towards enabling the full potential of The Experience Age.

Will companies and the government still try to track and surveil us? Of course. Just because our attention is moving towards the Experience Age doesn’t mean that the Information Age is over. We are still a long way away from completely transcending the limitations of our current obsession with Big Data and the Information Age business models based upon the pervasive surveillance of our digital lives. We’re still just at the very beginning of this transition, but the overall political and economic context may shift more towards privacy and liberty given the decentralization of power that the blockchain enables. Perhaps after that point, privacy will have become an absolute requirement for any viable implementation of the Metaverse.

THE METAVERSE IN TRANSITION FROM INFORMATION AGE TO EXPERIENTIAL AGE
There are a number of different companies and technologies that would love to be the foundation and primary enablers of the Metaverse, and this battle is unfolding while we are still shifting from the Information Age to the Experiential Age. There are two major approaches to building the foundations of the Metaverse, and it comes down to a walled garden versus an open web approach. There are walled garden, hosted virtual world solutions like Linden Lab, AltSpace & Facebook, and then there are more open-source or self-hosted solutions such as High Fidelity, VR Chat, JanusVR, self-hosted Unity builds, or WebVR.

Each of these approaches has different tradeoffs between control and freedom, identity and anonymity, and whether or not sales or property taxes will be collected within these virtual worlds. It’s also an open question with the walled garden solutions whether or not you’ll be able to export and reclaim ownership over the content that you build within these worlds. The walled garden platforms will no doubt have some of the most user-friendly content creation tools and communities forming around them, but there may be some free speech and behavioral restrictions with these tools and networks. It’s also most likely that the walled garden approaches will have the strongest networks of people and the most vibrant social interaction.

The Information Age has also had a very punitive mindset when it comes to policing trolling behavior, which could get your IP banned for life. This may have been tolerable for authenticated websites within the Information Age, but getting banned from a virtual world could have implications that are much more serious and long-term. If the Metaverse becomes the primary source of income or social interaction for some people, then banning them could have a much bigger impact on their lives. The ability to restrict access is one of the potential risks of consolidating power in a private company with no official appeals process.

There will need to be tools to deal with legitimate trolling behavior, but what types of recourse or due process will be available for those who have been unfairly and permanently banned from a virtual world that may be as enriching as the real world? Could there be new truth and reconciliation mechanisms in the Experiential Age, where a more restorative justice system evolves that balances accountability with the chance to change and grow?

The walled garden versus open web debate has played out on the World Wide Web since the early days of AOL and CompuServe, when the balance of power was concentrated within a handful of walled garden sites. Then the ugly HTML pages became more interconnected, and it was this linking between documents that ultimately provided more value according to Metcalfe’s Law. This was a victory for the decentralized open web, but now there seems to be a reconsolidation of power into a small handful of social media, technology, and entertainment websites. Will VR experiences and the evolution of the interconnected Metaverse follow a similar trajectory of closed, open, and then closed again?

CULTIVATING TRUST AND PRIVACY IN THE EXPERIENTIAL AGE
After talking with Ebbe, I realized that a lot of these privacy issues may go beyond what Linden Lab is reasonably able to design for at this point. There is not a lot of direct evidence for how big a concern these evolving privacy issues are going to be within the context of this new Experiential Age. And there’s not a market demand that can be articulated down to a specific feature request, and so it boils down to whether or not the consumer can trust a company like Linden Lab or Facebook with their data.

Ebbe said that there are some websites that he would not trust with his data, but that he does happen to trust Facebook with the limited amount of engagement he has with the site. But a lot of other people are not so trusting, and they were very vocal with their skepticism when Facebook bought Oculus. Ultimately, in the short term, there’s no doubt that the Facebook acquisition legitimized VR in a powerful way, and therefore helped VR on its path towards going mainstream. Yet the long-term privacy implications for VR are still very much open for debate.

The Information Age has cultivated a culture where, in order to use a website, we have to sign a Terms of Service agreement consenting to have all of our actions and behaviors tracked on the site while we’re an authenticated user. Most people barely read the terms of service before checking the box, because there have been few real consequences so far. But these same seemingly innocuous Terms of Service may have a much broader impact within the Experiential Age.

UploadVR’s Will Mason wrote an article about some of the potential privacy concerns with Facebook and the Oculus Rift’s ‘always on’ process that’s detailed within their Terms of Service. This article got a lot of social media buzz, and it even caught the attention of Senator Al Franken who sent a letter to Oculus asking six specific questions about privacy. The Oculus Terms of Service has a clause that says that “We use the information we collect to send you promotional messages and content and otherwise market to you on and off our Services. We also use this information to measure how users respond to our marketing efforts.”

Oculus responded by saying that they’re not using that data for anything yet, and that they’re not currently sharing any information with Facebook, but they may do so in the future. Given what the Terms of Service allows, there’s absolutely nothing stopping them from using the data they’ve collected and sharing it with Facebook at any moment.

Facebook has traditionally taken a slow and steady approach to eroding default privacy controls over many years. Matt McKeon made a visualization of Facebook’s default privacy settings (shown in blue in the graph below) from 2005 to 2010, and the pattern clearly moves towards making more and more information public by default.

In 2010, the Electronic Frontier Foundation traced the evolution of Facebook’s privacy policies since 2005 to see a very clear story evolve. The EFF concluded the following:

Facebook originally earned its core base of users by offering them simple and powerful controls over their personal information. As Facebook grew larger and became more important, it could have chosen to maintain or improve those controls. Instead, it’s slowly but surely helped itself — and its advertising and business partners — to more and more of its users’ information, while limiting the users’ options to control their own information.

So while Facebook may be taking a conservative approach to what data they are collecting and sharing with the Oculus Rift, the clear trajectory is that privacy will continue to erode in order to benefit their advertising and business partners. There is a certain level of autonomy and independence that Oculus is emphasizing to give the impression that they’re still operating independently from Facebook, but this will not last forever. Given Facebook’s history, it’s almost inevitable that their Information Age business model will continue to push towards gathering and analyzing as much data as possible for the sake of selling more ads.

CONCLUSION

There are a lot of open questions as to whether the future of the Metaverse will be dominated by a handful of walled-garden sites or a larger set of openly connected virtual worlds.

Here are a number of open questions that can only be answered over time by the virtual reality community, and eventually by everyone as we transition into the Experiential Age.

  • Will any of these companies building the Metaverse be willing to take a strong stand on privacy?
  • Do VR consumers even care? Or will it even matter?
  • As users of VR, will we be willing to support a culture of micropayments or pay-per-event experiences?
  • Or do we want an ad-supported immersive future where we’re willing to share whatever data can be gathered about us and used for better targeted advertising?
  • Will the winning business models of the future be based upon an Information Age paradigm or some sort of emerging Experiential Age paradigm?
  • Will it be a matter of the best technology winning? Or will market demand for strong values around privacy be a differentiating factor?

These are all big open questions, and you can bet that some users will be keeping a close eye on these Terms of Service as we continue to move into the Experiential Age. Ebbe was right that many of these questions go beyond what individual businesses can reasonably address, especially given the lack of consumer demand for specific features.

But if we really are in the midst of moving from the Information Age to the Experiential Age, then perhaps we’ll start to see a larger shift in the political, economic, and legal context. This could enable us to fully live up to the ultimate potential of virtual reality, one that accurately reflects the full complexity and beauty of the human experience. Ultimately, any tracking and data collection would be primarily focused on enriching our experiences within VR rather than enriching a small handful of companies at the cost of our privacy and freedom.

LISTEN TO THE VOICES OF VR PODCAST

Subscribe to the Voices of VR Podcast on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

Fred Brooks was in the process of looking for his next research agenda when he heard Ivan Sutherland give a speech at a computer conference in 1965 that laid out his vision for the future of virtual reality. Sutherland said that you shouldn’t think of a computer screen as a way to display information, but rather as a window into a virtual world that could eventually look real, sound real, move real, interact real, and feel real.

Brooks won the Turing Award in 1999 for his pioneering work in computer architecture, operating systems, and software engineering during his time at IBM. He also wrote the famous book The Mythical Man-Month, based upon some of those experiences. In 1964, Brooks left IBM to start the computer science department at the University of North Carolina at Chapel Hill.

Sutherland’s vision was articulated fully in his 1965 “The Ultimate Display” paper that predicts the trajectory of technology towards virtual reality, head-mounted displays, eye tracking, haptics, speech recognition, and even the holodeck.

I had a chance to catch up with Brooks at the IEEE VR conference in South Carolina in March where he shared some of the highlights from the very beginning of VR, and then on through the big milestones that he’s seen develop over the last fifty years.

LISTEN TO THE VOICES OF VR PODCAST

Fred Brooks recalled seeing Sutherland speak at the Fall Joint Computer Conference in 1965, but I could not find any mention of Sutherland’s speech in the published conference proceedings listed online. I did find that Sutherland’s “The Ultimate Display” paper was published in 1965 in the Proceedings of the IFIP Congress, pp. 506-508.

Brooks had a paper titled “The Future of Computer Architecture” that was also published in that same Proceedings of the IFIP Congress in 1965, so it’s possible that Sutherland’s speech happened at the International Federation for Information Processing conference rather than the Fall Joint Computer Conference. Whenever this speech happened, it may well have been the first time that Sutherland shared his thoughts about what would become virtual reality with a public gathering of his academic peers.

By 1968, Sutherland had built a prototype of the Sword of Damocles, and published a paper titled “A Head-Mounted, Three-Dimensional Display” within the AFIPS Proceedings of the Fall Joint Computer Conference, Part I, pp. 757-764. Last year at IEEE VR, I interviewed Henry Fuchs who was one of Sutherland’s students and fellow VR professor at UNC Chapel Hill.

The virtual reality research that has happened at the University of North Carolina at Chapel Hill laid the foundations for the current VR revolution, and Brooks wants today’s VR enthusiasts to remember that Sutherland laid it all out from the very beginning in that 1965 speech, with his vision of making computer graphics that look real, sound real, move real, interact real, and feel real.


Otherworld Interactive recently announced that their horror experience Sisters just surpassed one million downloads for Google Cardboard. To celebrate, they released a compilation of reaction videos of people screaming and slamming VR headsets into the ground.

While experiencing horror in VR by yourself may be utterly terrifying, experiencing it while other people are watching you can be a lot of fun. Your terror and screaming is immediately followed by a lot of uncontrollable laughter, which can be really contagious. Overall, it’s such a peak emotional experience that people want to capture it and share it online. These videos of visceral reactions are in part what has fueled Sisters to become one of the first VR applications to reach one million downloads.

I had a chance to catch up with Otherworld Interactive co-founder Robyn Tong Gray to talk about why people like to be scared, using indirect control to trigger action in VR narratives, cultivating emotional presence, and creating an ambiance and soundscape to amplify your VR story.

LISTEN TO THE VOICES OF VR PODCAST

Here’s the reaction video compilation celebrating a million downloads of Sisters

Here’s Robyn’s talk from the Unity VR/AR Vision Summit titled “Emotional Presence in Virtual Reality: The Making of Café Âme and Sisters”


On April 26, 1986, there was a catastrophic nuclear accident at the Chernobyl Nuclear Power Plant in Ukraine, rated at level 7, the highest severity. Thirty years after the disaster, there are still ongoing construction projects to confine the disaster within the Chernobyl Exclusion Zone. A number of Ukrainian filmmakers are in the process of making an interactive documentary titled Chornobyl360 about these ongoing remediation efforts as well as the impact that the accident has had on the surrounding area. I had a chance to catch up with producer Kirill Pokutnyy at SVVR to talk about the interactive documentary innovations, the long-term impact of nuclear energy disasters, and creating an empathy piece for the earth.

LISTEN TO THE VOICES OF VR PODCAST

There’s an ongoing Kickstarter fundraising campaign for Chornobyl360 that is 22% funded with 24 days remaining.


I didn’t expect that a VR rock-climbing game that uses neck thrusts as one of its primary locomotion mechanics would be as immersive or fun as it was, but I was really impressed with the level of immersion and presence that I felt while playing Crytek’s The Climb, which was recently released for the Oculus Rift.

There was something really satisfying about reaching the vista and enjoying the amazing view after climbing up the metaphoric and virtual mountainside. A lot of the game mechanics of a first-person rock climbing game translate really well to the strengths of VR as a medium, providing an overall compelling and exhilarating experience. I had a chance to catch up with Crytek executive producer Elijah Freeman at the Oculus Game Days event at GDC to learn more about how they cultivated a sense of presence, evolved the gameplay, and got feedback from actual rock climbers who were part of the production of the game.

LISTEN TO THE VOICES OF VR PODCAST

Here’s a trailer from Oculus to promote The Climb & the Rift:

And here’s the launch trailer for The Climb
