At GDC this year, SensoMotoric Instruments (SMI) showed a couple of new eye tracking demos at Valve’s booth. They added eye tracking to avatars in the social VR experiences of Pluto VR and Rec Room, which provided an amazing boost to the social presence within these experiences.


There are so many subtle body language cues that are communicated non-verbally through someone else’s eye contact, gaze position, or even blinking. Since it’s difficult to see your own eye movements due to saccadic masking, it’s best to experience eye tracking in a social VR context. Without a recording of your eyes in social VR, you have to rely upon looking at a virtual mirror as you look to the extremes of your periphery, observing your vestibulo–ocular reflex as your eyes lock gaze while you turn your head, or winking at yourself.

I had a chance to catch up with SMI’s head of OEM business, Christian Villwock, at GDC to talk about the social presence multiplier of eye tracking, the anatomy of the eye, and some of the 2x performance boosts they’re seeing with foveated rendering on NVIDIA GPUs.
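To get a feel for why eye-tracked foveated rendering can roughly double performance, here’s a back-of-the-envelope sketch. The region sizes and reduced shading rates below are made-up illustrative values, not SMI’s or NVIDIA’s actual parameters; the point is just that shading most of the screen at a fraction of full rate cuts the total pixel-shading work by more than half.

```python
# Illustrative estimate of the shading savings from foveated rendering.
# The eccentricity split and per-region shading rates are hypothetical
# example numbers, not figures from SMI or NVIDIA.

def shading_cost(regions):
    """Total relative shading work: sum of (screen-area fraction) * (shading rate)."""
    return sum(area * rate for area, rate in regions)

# Conventional rendering shades every pixel at full rate.
full = shading_cost([(1.0, 1.00)])

# Hypothetical foveated split: full rate only where the eye is looking.
foveated = shading_cost([
    (0.10, 1.00),  # fovea: 10% of the screen, full shading rate
    (0.25, 0.50),  # mid-periphery: 25% of the screen, half rate
    (0.65, 0.25),  # far periphery: 65% of the screen, quarter rate
])

speedup = full / foveated
print(f"approximate shading speedup: {speedup:.2f}x")
```

With these assumed numbers the shading workload drops to under 40% of the original, i.e. a better-than-2x speedup, which is in the same ballpark as the gains discussed in the interview.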

LISTEN TO THE VOICES OF VR PODCAST

It’s likely that the next generation of VR headsets will have integrated eye tracking, and it’s the goal of both SMI and Tobii to be the primary providers, but neither Tobii nor SMI are commenting on any specific licensing agreements that they may have come to with any of the major VR HMD manufacturers. I will say that SMI had some of the more robust social VR eye tracking demos at GDC, but Tobii had more nuanced user interaction examples and more involvement with the OpenXR standardization process in collaboration with the other major VR hardware vendors. You can read more about their integration with Valve’s OpenVR SDK in SMI’s GDC press release.

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip



Using language to translate an experience into words is one of the highest levels of abstraction that you can have. The power of visual metaphor in poetry can reach deeper levels of emotion, and virtual reality is able to remove nearly all levels of abstraction by tricking your senses into having a direct sensory experience within your body. Indie VR artist Isaac “Cabbibo” Cohen has started to create a sort of “experiential poem” in virtual reality, exploring how to invoke complicated emotions that transcend words.

I had a chance to catch up with Cabbibo at GDC to talk about his process of using VR for emotional exploration. He was previewing a couple of new experiences at the Valve booth including a picture-book VR narrative called Delilia’s Gift, and a social VR environment called Ring Grub Island that was designed for mutual exploration and embodied vulnerability.

LISTEN TO THE VOICES OF VR PODCAST

Cabbibo has released four brief experiences and games on Steam so far, including Blarp, L U N E, Warka Flarka Flim Flam, and My Lil’ Donut, which explore new types of embodied gameplay in VR that beg us to use our bodies in new ways. The imaginary-fortress-building experience in L U N E catalyzed a deep emotional reaction from many users, like this one from Hyperion: “Half way through this I crouched to the floor and burst out in tears.”

Cabbibo told me last year that his favorite experience to date has been Irrational Exuberance, and there haven’t been a lot of other experiences that have inspired him to use his body to explore a space and contemplate the meaning of existence in quite the same way.

Now he’s on his own journey to create more of these experiential poem VR experiences that try to capture the essence of an emotion. After starting therapy last year, he’s been finding VR to be a robust expressive medium for exploring and playing with his own emotional states, one that is more interesting than his early experiments in embodied gameplay. He’s beginning to explore what it means to explore vulnerability within embodied and social contexts, and in the end he wants to use VR to help people realize how being alive is such a miracle. Cabbibo is doing some of the most groundbreaking work in discovering the unique affordances of VR as an artistic medium, but more importantly he’s using VR as a mirror to learn more about what it means to be human.

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip



http://voicesofvr.com/511-neurospeculative-afrofeminism-building-the-future-you-want-to-live-into/

Hyphen Labs is an immersive design collective made up of women of color, and they had a sci-fi VR experience at Sundance this year called Neurospeculative Afrofeminism. Their VR experience features black women as some of the pioneers of brain optimization, and you get to experience a futuristic neurocosmetology lab where you can receive transcranial stimulation. As you receive this neuroplasticity treatment, you’re transported into a magical world of speculative products that place women of color at the center of the design narrative.

I had a chance to catch up with Carmen Aguilar y Wedge, Ashley Baccus-Clark, Ece Tankal, and Nitzan Bartov at Sundance where we talked about the process of writing a love letter to black women and creating an experience that helps them live into a future that they want to help create. Wedge cites the quote “You can’t be what you can’t see,” which provided inspiration for them to create an experience where they could create virtual characters within a context of technological innovation in order to directly stimulate neural pathways and re-wire their own brains using the principles of synaptic plasticity.

Last week BigscreenVR announced that they raised $3 million for their “social utility” VR application. BigscreenVR gives you access to your computer screen in VR, which is a deceptively simple idea but one that is unlocking new ways of working on your computer and enabling collaborative social environments that range from virtual 2D video game LAN parties to productive work meetings.

I had a chance to catch up with founder Darshan Shankar at Oculus Connect 3 last October to talk about his founding story, and how he’s designed BigscreenVR with privacy in mind through encrypted peer-to-peer networking technology that he developed. It’s a formula that seems to be working since he reports that “power users spend 20–30 hours each week in Bigscreen, making it one of the most widely used ‘killer apps’ in the industry.” Those are astounding numbers for any social VR application, and the key to Bigscreen’s success is that they’ve been providing a more immersive and social experience of 2D content ranging from games to movies.

LISTEN TO THE VOICES OF VR PODCAST

The latest release of Bigscreen VR enables you to have up to three monitors in VR, which can provide a much better experience of working on your computer than in real life. You can stream Netflix or YouTube on a giant movie screen while playing a video game, designing an electrical circuit, browsing Reddit, or creating a 3D model in Maya. You can basically do anything that you can do on your computer screen. The limited resolution for comfortably reading text is the biggest constraint, but there are plenty of other tasks that people have found are more enjoyable in VR than in real life. It’s not just the immersive nature, improved focus, and unlocking of the spatial thinking potential of your brain; you can also do it with friends.

Adding a social dimension to computing in a private way is one of the keys to Bigscreen’s success. You can use Bigscreen entirely by yourself. You can create a private room using peer-to-peer technology such that what you’re actually doing in Bigscreen isn’t even being passed through any of Bigscreen’s servers. And if you want to have a public cafe experience and connect with hardcore VR enthusiasts from around the world, then create a public room and see who comes through. A wide range of people are looking to do everything from connecting socially and casually to recreating the increased focus that can come from working in public spaces away from the private context of your home.

Taking all of that into account, and based upon my own direct experiences of using Bigscreen over the last couple of weeks, I can say that Bigscreen VR is definitely the leading contender to become one of the first killer applications of VR. It’s a social utility that connects you to friends, family, romantic and business partners, as well as complete strangers who spend a considerable amount of time living in the early days of the metaverse.

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip



Valve’s Joe Ludwig talks about the latest updates on the Khronos Group’s VR standardization process that is now being called “OpenXR.” Ludwig says that OpenXR is still primarily creating an open and royalty-free standard for virtual reality, but that they wanted to plan for the future and eventually accommodate augmented reality as well. In my Voices of VR interview with Ludwig, he talks about the OpenXR standardization process from Valve’s perspective and how they want to see VR become as open of a platform as the PC.

LISTEN TO THE VOICES OF VR PODCAST

The OpenXR working group has just completed its exploratory process and there are still numerous open debates, and the Khronos Group is making this announcement of a name and logo at GDC in order to encourage more VR headset and peripheral companies to get involved in the standardization process. Ludwig can’t speak on behalf of any OpenXR decisions yet, but he was able to provide more insight into Valve’s motivations in the process, which are to develop a standard that will provide what they see as a minimal baseline for a quality VR experience as well as to make VR an open platform. OpenXR will also span the full spectrum from 3DoF mobile to 6DoF room-scale, and so there are many active discussions within the working group about what will be included in the 1.0 specification.

VR is a new computing platform, and the OpenXR standard aims to help keep both VR and AR as open platforms. The Khronos Group’s OpenXR initiative aims to lower the barriers to innovation for virtual reality so that eventually a VR peripheral company just has to write a single driver to work with all of the various VR headsets. But in order to know what APIs should be available for developers, this standardization process requires participation from as many VR companies as possible. Part of the announcement at GDC is to say that the working group has finished their preliminary exploration, and that they’re ready for more companies to get involved.

In my previous interview with Khronos Group President Neil Trevett, he said that this standardization process typically takes about 18 months or so. Given that it was first announced in December 2016, then I’d expect that we might be seeing a 1.0 specification for OpenXR sometime in the first half of 2018. It also depends upon how motivated all of the participants are, and there seems to be a critical mass of major players in the industry to help make this happen and so it could happen sooner.

As to whether OpenXR will mean that any VR headset will work with any VR software, that’s one of the theoretical technical goals, but there are many constraints to making it happen. Ludwig said that while this could technically be made possible with OpenXR, there will still be a layer of business decisions around platform exclusives. When talking to Nate Mitchell of Oculus, I heard that even if Oculus implements OpenXR, they still want to make sure that it would be a quality experience. Ludwig said that there will be other constraints around having the proper input controls, button configurations, and set of minimal hardware available for some experiences to work properly. It’s also still too early to know what the final OpenXR spec will look like, so companies can’t yet make any specific commitments about cross-compatibility. I’ll have more details on Oculus’ perspective on OpenXR early next week with a Voices of VR interview with Nate Mitchell.

Overall, I think that OpenXR is probably one of the most significant collaborations across the entire VR industry. The Khronos Group says that the OpenXR “cross-platform VR standard eliminates industry fragmentation by enabling applications to be written once to run on any VR system, and to access VR devices integrated into those VR systems to be used by applications.” If VR and AR are to become the next computing platform, then OpenXR is a key initiative to help make that happen.

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip

Danfung Dennis has an ambitious vision for the potential of virtual reality, and it’s one of the most radical ones that I’ve come across. He believes that VR can be used as a tool to cultivate compassion through having an embodied experience of witnessing suffering within VR. He says that the process of witnessing suffering can be used as a type of advanced Buddhist mind training to focus your attention, contemplate your visceral reactions, and grow compassion through taking action. These brief VR experiences have the potential to impact the day-to-day consumer decisions that people make, which, taken collectively, could radically change the world.

LISTEN TO THE VOICES OF VR PODCAST

I know that this is possible because I had one of the most powerful reactions I’ve ever had from watching Condition One’s Fierce Compassion / Operation Aspen VR experience. This live-action, cinéma vérité VR experience shows animal rights activists breaking into a factory farm to perform an open rescue and document the horrendous living conditions of chickens in cages. It’s a guided tour of the many untreated health ailments and barbaric conditions that are common in these types of industrial-scale factory farms. Having a direct embodied experience and bearing witness to this suffering had such a powerful impact on me that I vowed to never purchase anything other than cage-free eggs.

Condition One has also been producing guided meditations that are designed to be watched after experiencing some of their other animal rights experiences. Factory Farm is the most graphic and intense experience I’ve ever had in VR in that it shows the slaughter of two pigs as they go through a factory farm in Mexico. After witnessing this horrific scene in VR, I can see why Paul McCartney once said, “If slaughterhouses had glass walls, everyone would be vegetarian.”

Condition One has also been tackling larger issues like global warming in VR. They produced the Melting Ice companion VR piece to An Inconvenient Sequel, the follow-up film to Al Gore’s An Inconvenient Truth. The An Inconvenient Sequel film lays out all of the latest science as told through the personal narrative of Al Gore, while the VR experience doesn’t attempt to delve into the science in that much depth. Dennis pulled back a lot of the narrative and story elements and just focused on trying to create an embodied experience of transporting you to locations of melting ice, with large chunks falling off the sides of cliffs, the cracking sounds of steady dripping, and entire rivers of glacial meltwater cutting through sheets of ice.

One of the challenges with complex topics like global warming is that it’s very difficult to provide a singular embodied experience in VR that tells the entire story of its systemic causes. Standing on ice that’s disappearing at an accelerated pace due to global warming is as good an experience as any, but it’s still difficult to tell that entire story within the confines of VR. So rather than convey the science of it all, Dennis decided to take a more contemplative and Zen approach of creating a sparse experience with limited narration in order to cultivate a direct experience with the sounds and visuals of a rapidly changing part of the planet.

Dennis believes that VR has the potential to be a tool that can inspire humans to cultivate compassion by taking actions that relieve suffering. He’s interested in creating VR experiences that allow us to witness the suffering in the world, and that ultimately help us to expand our sphere of compassion beyond just our immediate friends, family, and pets to eventually include all sentient beings and the planet earth. These embodied virtual reality experiences stick with us in a deeper way, becoming part of our memories as we decide whether to continue participating in a system of violence or to choose more sustainable and ethical options that cultivate compassion and take into consideration the impact on the next seven generations.


Support Voices of VR

Music: Fatality & Summer Trip

Tilt Brush launched on the Oculus Rift today, and I had a chance to catch up with Tilt Brush product manager Elisabeth Morant about the launch. We have a broad discussion about adapting Tilt Brush for the Touch controllers, the Tilt Brush Artist in Residence Program, the Tilt Brush Unity Toolkit, and some of the features potentially coming in the future, including a layering system and more non-intuitive and unexpected features similar to audio reactive brushes. I also asked about privacy in VR, but Google has yet to disclose any information about what they may or may not be capturing.

LISTEN TO THE VOICES OF VR PODCAST

Some of the most newsworthy parts of my interview with Morant concerned things that weren’t talked about. When asked to comment about this being the first VR collaboration between Facebook & Google, Morant said that Google is “really looking to push virtual reality as a platform.” There’s been a tense history between Google and Facebook, and releasing Tilt Brush via Oculus Home is the first collaboration in the VR space that we’ve seen from the two tech giants.

This also means that it’s the first Google app that’s being released within the context of Oculus’ Privacy policy, which states that physical movements can be recorded and tied back to your Facebook profile. Facebook will be able to capture and store physical movements of users who are using Google’s application, and then this data could be connected to a unified Facebook super profile that pulls in data from third parties. Up to this point Google hasn’t made any VR-specific updates to their Privacy policy that explicitly accounts for what may or may not be recorded in VR and then connected back to your Google profile.

I asked Morant about this overlapping privacy policy dynamic between Google and Facebook during my interview, and Google’s PR liaison said that we could follow up after the interview for more information. I did follow up after the interview, and Google is indeed looking at the possibility of updating their privacy policy by saying “it is something that we are looking at, but nothing to share at this time.”

But Google again dodged answering what they may or may not already be recording in VR. I asked a follow-up question about what data they’re capturing after my previous interview about Google Earth VR, but I received a generic boilerplate answer. When I asked again, they basically sent back the same non-answer.

Non-answers are hard to write about and cover, and so they usually serve the purpose of keeping a topic out of the conversation. But it also reinforces the impression that privacy in VR is the big elephant in the room that no one wants to really talk about. So I maintained the integrity of my original questions within the context of the podcast interview, and I’ve also included the full context of my follow-up exchange with Google PR below.

I just had a follow-up question about privacy with some reference material. I’d love to get some more specific answers from a privacy expert on your side, and swap that more detailed information to put at the end within my wrap-up. If there’s someone there who I could speak to directly, then that would be preferable. A written response also works, but not quite as well within the podcast medium because I end up having to speak words on your behalf.

At this moment, Google’s Privacy policy does not have any language that is specific for any virtual reality technologies, and there are no controls for VR data that might be recorded listed within the “My Account” Privacy dashboard.

My question: Is any physical movement data of either the head or hands from in any VR experiences being recorded and saved by Google?

Oculus’ Privacy Policy states that “Information about your physical movements and dimensions when you use a virtual reality headset” are being captured and stored as part of the “Information Automatically Collected About You When You Use Our Services.”

In my previous interview about Google Earth VR, I followed up with some questions about privacy and you sent back a prepared statement that I included within both my written and spoken write-up. Here’s that passage:

Google Earth VR is a free application for the Vive on Steam VR, and so I had a couple of follow up questions for Google after my interview. I asked them: “What kind of data can and cannot be collected given Google’s standard Privacy Policy within a VR experience?” and “Are there long-term plans to evolve Google’s Privacy Policy given how VR represents the ability to passively capture more and more intimate biometric data & behavioral data?”

Here is Google’s response:

“Our users trust us with their information and we outline how it may be used across Google — to personalize experiences, to improve products, and more — in our Privacy policy. Users can control the information they share with Google in ‘My Account’.”

Google’s previous response didn’t actually really directly answer my question. Google’s Privacy policy does not have any language that is specific for any virtual reality technologies, and there are no controls for VR data that might be recorded listed within the My Account Privacy dashboard.

  • Does this mean that no virtual reality specific data is being recorded or captured from Google?
  • Or if there is data being collected from VR, will we see an update to Google’s Privacy Policy that discloses what is being recorded?

For more context, here’s an interview and essay that I did with a privacy expert since the last time I spoke with Google.

Thanks for being willing to take a look at this, and I look forward to getting some more specific answers than Elisabeth was able to provide.

Here’s the response that I got from Google:

We don’t have a privacy expert available for you to speak to for the podcast. In regards to your question about an updated privacy policy – it is something that we are looking at, but nothing to share at this time. As soon as we have any updates, we’ll let you know. The statement we provided before still applies:

“Our users trust us with their information and we outline how it may be used across Google — to personalize experiences, to improve products, and more — in our Privacy policy. Users can control the information they share with Google in ‘My Account’.”

Google is looking to potentially update their privacy policy with more information about what is or isn’t recorded, but up to this point they haven’t disclosed any information about what they’re capturing. There have been no updates to the Privacy policy to account for any new VR technologies, and there’s no VR data being tied back to the ‘My Account’ tab on your Google account.

If there are any VR data that would show up on the ‘My Account’ tab, then that would imply that Google has been able to correlate VR-captured data back to your personal identifiable Google account. But there are no controls for VR data on ‘My Account,’ and so if data is being captured, then there’s no way for a user to control or look at what’s been captured.

I’ve asked Google twice now what data they’re recording, and both times they’ve avoided giving a direct answer. Privacy in VR is a hard topic to cover, especially when the major players don’t really want to talk about it. I wrote extensively in this article about the privacy implications of VR and how VR has the potential to become one of the most powerful surveillance technologies or the last bastion of privacy, depending on the types of user demands placed upon the systems that are built. Sarah Downey argues against companies capturing too much data and storing it forever, and so it’s important for companies to have transparency about what they’re doing.

Google appears to be failing on the privacy transparency front by avoiding answering simple questions. What data are you recording in VR? Is it being tied back to personally identifiable information? And if so, then when can we see updates to the privacy policy to reflect that?


Support Voices of VR

Music: Fatality & Summer Trip

Alvin Wang Graylin is the China President of Vive at HTC, and I had a chance to talk with him at CES this year about what’s happening in China. He provided me with a lot of cultural context, which includes support from the highest levels of the Chinese government to invest in companies working on emerging technologies like virtual reality and artificial intelligence. There was a flood of Chinese companies at CES showing VR headsets, peripherals, and 360 cameras. On average, the VR hardware from China tends to be nowhere near the quality of the major VR players of the HTC Vive, Oculus Rift, Sony PSVR, or Samsung GearVR, but there were some standout Chinese companies who are leading innovation in specific areas. For example, some highlights from CES include TPCast’s wireless VR, Noitom’s hand-tracked gloves, and Insta360 with some of the cheapest 360 cameras with the best specs available right now.

After CES, I was convinced that if you want to understand what’s going to be happening in the overall VR ecosystem, then it’s worth looking at what’s happening in China. The VR market in China is growing, and there is a lot more optimism for technological adoption and enthusiasm for VR arcade experiences. Education in China is also very important under the one-child/two-child policy, and Graylin says that if VR can be proven to have a significant educational impact, then the government will act to get VR headsets in every classroom. Once VR is in the classrooms, it’ll help convince more parents to buy one for the home if they believe it’ll help their children’s education.

LISTEN TO THE VOICES OF VR PODCAST

In an extensive round-up of VR growth in China from Yoni Dayan, he mentions a moonshot project called Donghu VR Town, a proposed “city built in the south of the country, designed with virtual reality intertwined in every aspects from services, healthcare, education, to entertainment.” Here’s an untranslated promotional video that shows off what a VR-utopian city might look like:

It’s debatable as to whether Donghu VR Town would be a successful experiment if built, but it reflects a desire to innovate. Graylin said China doesn’t want to just be the manufacturing arm of the world, but that it wants to become a leader in virtual reality as well as in artificial intelligence, as can be seen in this Atlantic article detailing how Chinese universities and companies are starting to surpass American ones in researching and implementing AI.

China is a complicated topic and ecosystem, but after having a direct experience of the TPCast wireless VR, the Noitom VR gloves, and the great-looking, high-res stereoscopy from an Insta360 camera at CES, I think that it’s time to really look to China as a leader in innovation. If China really does go all-in on VR and AI and continues to invest large sums of money, then that type of institutional support is going to leapfrog China into being one of the leading innovators in the world. I already started to see this at CES this year and at the International Joint Conference on Artificial Intelligence, where there was a very healthy representation from China. The things to watch over the next couple of years are any big educational infrastructure investments by the Chinese government as well as the evolving digital out-of-home entertainment hardware ecosystem.


Support Voices of VR

Music: Fatality & Summer Trip

At Unity’s Unite keynote in November, Otoy’s Jules Urbach announced that their Octane Renderer was going to be built into Unity to bake light field scenes. But this is also setting up the potential for real-time ray tracing of light fields using application-specific integrated circuits from PowerVR, which Urbach says could render up to 6 billion rays per second at 120W. Combining this PowerVR ASIC with foveated rendering and Otoy’s Octane renderer built into Unity provides a technological roadmap for producing a photorealistic quality that will be the equivalent of beaming the Matrix into your eyes.
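To put that 6-billion-rays-per-second figure into perspective, here’s a rough back-of-the-envelope calculation. The display resolution and refresh rate below are assumed first-generation HTC Vive numbers, and the resulting per-pixel ray budget is purely illustrative; it just shows why pairing such an ASIC with foveated rendering, which concentrates rays where the eye is looking, is part of the roadmap.

```python
# Back-of-the-envelope check on a 6 billion rays/second ray tracing budget.
# Resolution and frame rate are assumed first-gen Vive values; none of
# these numbers come from Otoy or PowerVR directly.

rays_per_second = 6e9
frame_rate = 90                  # Hz, typical for desktop VR headsets
pixels_per_frame = 2160 * 1200   # combined resolution across both eyes

rays_per_frame = rays_per_second / frame_rate
rays_per_pixel = rays_per_frame / pixels_per_frame

print(f"rays per frame: {rays_per_frame:.3g}")
print(f"rays per pixel per frame: {rays_per_pixel:.1f}")
```

Under these assumptions the budget works out to a couple dozen rays per pixel per frame, which is enough for real-time ray tracing but still well short of offline path-tracing quality, hence the appeal of spending those rays unevenly via foveation.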

I had a chance to catch up with Jules at CES, where we talked about the Unity integration, the open standards work Otoy is doing, overcoming the uncanny valley, the future of the decentralized metaverse, and some of the deeper philosophical thoughts about the Metaverse that are the driving motivation behind Otoy’s work in creating a virtual reality visual fidelity that is indistinguishable from reality.

LISTEN TO THE VOICES OF VR PODCAST

Here’s Otoy’s Unity Integration Announcement

Here’s the opening title sequence from Westworld that uses Otoy’s Octane Renderer:

Within premiered their first real-time rendered, interactive experience at Sundance New Frontier this year with Life of Us, which is the story of life on the planet as told through embodying a series of characters who are evolving into humans. The experience is somewhere between a film and a game, but it’s more like a theme park ride. There’s an on-rails narrative story being told, but there are also opportunities to throw objects, swim or fly around, control a fire-breathing dragon, and interact with another person who has joined you in the experience. You learn about which new character you’re embodying by watching the other person embody that creature with you, and the modulation of your voice also changes with each new character, deepening your sense of embodiment and presence.

I had a chance to catch up with Within CTO and co-founder Aaron Koblin at Sundance to talk about their design process, overcoming the uncanny valley of voice modulation delays, how the environment is a primary feature of VR experiences, and how his background in large-scale museum installations inspires his work in virtual reality.

Koblin also talks quite a bit about finding that balance between the storytelling of a film and interaction of a game, and how Life of Us is their first serious investigation into that hybrid form that VR provides. He compares this type of VR storytelling to the experience of going to a baseball game with a friend in that this type of sports experience is amplified by the shared stories that are told by your friends. This is similar to collaborative storytelling of group explorations of VRChat, but with an environment that is a lot more opinionated in how it tells a story.

LISTEN TO THE VOICES OF VR PODCAST

Life of Us is a compelling way to connect and get to know someone. The structure of the story is open enough to allow each individual to explore and express themselves, but it also gives a more satisfying narrative arc than a completely open world that can have a fractured story. Life of Us has a deeper message about our relationship to each other and the environment that it’s asking us to contemplate. Overall, Koblin says that our relationships with each other essentially amount to the sum total of our shared experiences, and so Within sees an opportunity to create the types of social & narrative-driven, embodied stories that we can go through to connect and express our humanity to each other.

Here’s a trailer for Life of Us.

The Life of Us experience should be released sometime in 2017, and you can find more information on Within’s website (which links to all of their platform-specific apps) or their newly launched WebVR portal at VR.With.in.

Subscribe on iTunes

Donate to the Voices of VR Podcast Patreon

Music: Fatality & Summer Trip