Otoy is a rendering company that is pushing the limits of digital light fields and physically-based rendering. Now that Otoy’s Octane Renderer has shipped in Unity, they’re pivoting from licensing their rendering engine to selling cloud computing resources for rendering light fields and physically-correct photon paths. Otoy has also completed an ICO for their Render Token (RNDR), and will continue to build out a centralized cloud-computing infrastructure to bootstrap a more robust distributed rendering ecosystem driven by an Ethereum-based ERC20 cryptocurrency market.
I caught up with CEO and co-founder Jules Urbach at the beginning of SIGGRAPH 2017, where we talked about relighting light fields, 8D lightfield & reflectance fields, modeling physics interactions in lightfields, optimizing volumetric lightfield capture systems, converting 360 video into volumetric videos for Facebook, and their movement into creating distributed render farms.
LISTEN TO THE VOICES OF VR PODCAST
In my previous conversations with Urbach, he shared his dreams of rendering the metaverse and beaming the matrix into your eyes. We complete this conversation by diving down the rabbit hole into some of the deeper philosophical motivations that are really driving and inspiring Urbach’s work.
This time Urbach shares his visions of VR’s potential to provide us with experiences that are decoupled from the normal expected levels of entropy and energy transfer for an equivalent meaningful experience. What’s below Planck’s constant? It’s a philosophical question, but Urbach suspects that there are insights from information theory since Planck’s photons and Shannon’s bits have a common root in thermodynamics. He wonders whether the Halting problem suggests that a simulated universe is not computable, as well as whether Gödel’s Incompleteness Theorems suggest that we’ll never be able to create a complete model of the Universe. Either way, Urbach is deeply committed to trying to create the technological infrastructure to be able to render the metaverse, and to continue to probe for insights into the nature of consciousness and the nature of reality.
Here’s the launch video for the Octane Renderer in Unity
Over the past couple of years, Danny Bittman has logged over 1800 hours in programs like Tiltbrush & Google Blocks on his way to becoming a full-time VR artist. Bittman started studying film, but realized that he had over 100 years of cinematic history to catch up on. He decided to pivot to virtual reality since the biggest thing holding him back from becoming a professional VR artist was putting in the hours to learn the tools, rapidly iterate on projects, and explore the potential of spatial storytelling. He has found a niche in creating vast landscapes in Tiltbrush that take dozens of hours to create. Over the past couple of years, Bittman has collaborated with Marvel on a Doctor Strange project, helped beta test Google Blocks, created art for TheWave performances, performed at a VMWare conference, and was the lead artist on Billy Corgan’s Aeronaut music video in collaboration with Viacom NEXT.
I had a chance to talk with Bittman at Oculus Connect 4 about his journey in becoming a full-time VR artist, how he uses VR to express and reflect on his emotional states, how he’s using VR to document his dreams, and how his experiences of the Sandy Hook Elementary School shooting in his hometown of Newtown, CT have affected what he creates and wants to experience in VR.
Here’s the Aeronaut music video in collaboration with Billy Corgan
Soundboxing is a VR rhythm game that has found a community of people who use it for exercising in VR, with some people reporting that they’ve lost up to 50 pounds from playing it. Soundboxing is similar to Audioshield in that you punch orbs set to the rhythm of songs streamed from YouTube, but rather than using an algorithmic approach, Soundboxing allows users to record their own runs, which means that all of the content is user generated. Users record and edit their own runs by playing a song and punching an invisible wall, and the scoring system encourages streaks, which helps to cultivate and track flow states.
I’ve really enjoyed playing Soundboxing, and it’s an engaging game that has a lot of options to allow you to follow creators, curate playlists, and customize your gaming experience. For example, you can record a run with your dominant hand, and then flip the recording so that you can train yourself to become more ambidextrous. The official Soundboxing website also has user profiles, with an impressive set of archive and search integrations compared to other VR game websites.
Soundboxing was created by solo indie developer Eric Florenzano, who was working on a VR browser for Reddit when he discovered how compelling it was to record your embodiment while trying to figure out what comments look like in VR.
The reach that the #PolyAPI enables for #VR artists marks the opening of a new chapter for immersive tools, but how can we credit the artists? Moving forward, I'd like to see more focus on profiles with ways to identify, subscribe, and pay artists.@googlevr 1/3 https://t.co/viFy0umt2U
Florenzano has thought a lot about the larger economic systems and architecture within the VR software ecosystem, and he shares a lot of ideas for how to cultivate and design systems that would allow people to make a living within VR. Florenzano’s Soundboxing relies upon user-generated content, and he was surprised to find that about 50% of his users were creating and sharing content. He sees Soundboxing as a 3D immersive web browser, and takes a lot of inspiration from how the Brave browser is attempting to create blockchain-based digital advertising with their Basic Attention Token.
This is a chat that I had with Florenzano back in May talking about how people are using Soundboxing for exercise and fitness, as well as a lot of deep thoughts about the future of virtual economies & paying artists, and how he’s thinking about what immersive fitness experiences will look like across different immersive platforms and mediums.
After talking to a lot of independent VR storytellers, Kaleidoscope VR’s René Pinnell identified that funding was one of the biggest blockers for continued experimentation. Cinematic VR pieces do not have many established distribution channels yet, and so a lot of the funding has come from larger HMD manufacturers like Oculus and select brands like Intel.
After traveling around to 30 cities around the world with Kaleidoscope VR’s festival, Pinnell decided to try to hold the First Look VR market in September in order to match the most promising independent VR creators with funders, producers, and distributors. I caught up with Pinnell at the end of the inaugural First Look market to talk about the funding landscape for independent creators, his journey from festival producer to executive producer to market organizer, and what he sees are the keys to innovation for immersive storytelling.
Linda Jacobson got into VR when she helped organize the 1990 CyberArts International gathering of artists and technologists who were using virtual reality technologies. She edited a compilation of CyberArts essays from that first gathering, and she also documented the Garage Virtual Reality DIY VR maker movement of the early 90s. In 1995, she became a VR evangelist at Silicon Graphics, where she helped sell VR into enterprise applications including engineering, architecture, construction, medicine, military training, automotive, aerospace, heavy equipment manufacturers, and oil and gas companies. The enterprise companies and applications of VR during this time period were pretty secretive and proprietary, but Jacobson was on the front lines traveling around the world seeing a huge range of different virtual worlds and use cases for VR.
Jacobson has continued to work in VR since the 90s, ranging from entertainment to medicine to AEC, and has a lot of insights about the evolution of VR in the enterprise space. I had a chance to talk with her at the Virtual Reality Strategy Conference in October about her last 20+ years in enterprise VR, her mentor Morton Heilig and his Sensorama, VR as a counter-cultural approach to computing, the CyberArts gathering of artists, DIY Garage Virtual Reality, and the major figures and companies who bootstrapped the commercial VR industry.
Jaron Lanier is a pioneer of the first commercially-available virtual reality systems with his VPL Research Inc startup that was founded in 1984. He has written a memoir called Dawn of the New Everything about his life leading up to and during his transition from a country hippie hacker to an over-stressed Silicon Valley CEO. He was inspired by painters like Hieronymus Bosch, musical instruments like the Theremin, mathematics, electronics, and Ivan Sutherland’s pioneering work with the first virtual world head mounted displays. He also wanted to transcend his social insecurities and anxieties to connect with others by creating shared social VR spaces. VPL Research pioneered the commercial “Eyephone” virtual reality head mounted display with tracking, haptic gloves, motion capture suits, 3D audio, and a cutting-edge virtual programming language. He was inspired by jazz to create technology that could enable mutual improvisation of communication and expression in what would feel like a shared dream in a waking state.
I had a chance to catch up with Lanier in Seattle, Washington on his book tour for Dawn of the New Everything, where we talked about highlights of his journey into VR, musical instruments as haptic devices, the tongue as an input device, and the body’s ability to embody a variety of different animal avatars. He also shared some of his thoughts on why he thinks artificial intelligence is a fake construct as long as the focus is on AI as a super intelligent parasitic entity rather than merely a tool for humans. He also shared some cautionary reflections on the dangers of the advertising-driven business models of Facebook, Google, and Twitter that are creating “massive behavior modification empires.” Lanier is a super humble guy, and his memoir is an interesting mix of impressionistic memories and reflections mixed in with technical deep dives and fifty-two definitions of virtual reality that explore the range of applications, metaphors, and unique affordances of this new medium. The Dawn of the New Everything is a fascinating story that captures a key turning point in the history of VR, and is packed with some deep insights and visions for what’s possible from someone who is still madly in love with and inspired by this new medium.
Every culture around the world has its own beliefs around what happens to us after we die, and virtual reality may be a great medium to explore all of these different rituals and mythologies. The Tibetan Book of the Dead contains descriptions of the various bardo states that the Tibetans believe our consciousness experiences after we die. NYU instructor John Benton created a Bardo Thodol VR prototype in collaboration with his Tibetan Buddhism teacher as well as with two students from Columbia University’s Spirituality Mind Body Institute, Devorah Medwin & Lia Walton.
Benton’s prototype experience premiered at The Art of Dying VR art show that happened in San Francisco in October 2016. I had a chance to catch up with him to talk about training for the afterlife in VR, visualizing Buddhist metaphysics, Eastern philosophical perspectives on the malleability of reality, and how VR can be used as a gym to train your awareness to be more potent, available, and present to our everyday reality.
In 1978, a number of film scholars gathered at a conference in Brighton to re-evaluate the early days of film in terms of a developing new medium on its own terms rather than through the lens of a mature narrative and storytelling communications medium. These early experimental days of film were referred to as the “cinema of attractions” by scholars like Tom Gunning, André Gaudreault, Charles Musser, and Richard Abel because these early film experiments were focused more on showing and exhibiting something while breaking the fourth wall to make a direct connection to the audience.
Rebecca Rouse is an assistant professor of communication & media at Rensselaer Polytechnic Institute, and she was inspired to take insights from the cinema of attractions scholarship and apply it to virtual and augmented reality in a more generalized framework she calls “media of attraction.” She identifies four characteristics of an emerging medium in that they’re unassimilated, interdisciplinary, seamed, and participatory. I caught up with Rouse at the IEEE VR conference to unpack her insights about what VR can learn from the early days of film, the evolution of other immersive communication mediums before VR, and whether or not VR is really all that different from other mediums from a historical and media theory perspective.
Stanford’s Virtual Human Interaction Lab (VHIL) premiered an experience called Becoming Homeless: A Human Experience at the Tribeca Film Festival in April. It was an experiment researching whether it’s possible to cultivate empathy through embodying a character who has lost their job, has to sell possessions to make rent, gets evicted, and starts living in their car. It’s designed to break down our stereotypes for how we imagine that people become homeless, and potentially overcome the fundamental attribution error, which disproportionately blames people for their situation rather than acknowledging the deeper context of external factors. VHIL is hoping that they can reduce the cognitive load that’s required to imagine what someone’s experience might be like by providing an embodied and immersive experience in VR of walking in the shoes of another person and enabling the process of perspective-taking.
I had a chance to catch up with the writer & director team of Becoming Homeless, Elise Ogle, who is a project manager at VHIL, and Tobin Asher, who is the lab manager at VHIL. We talk about why the unmediated experience of presence in VR helps it outperform other forms of media, how they’re using VR to research empathy, their collaborations with empathy expert Jamil Zaki in exploring the thresholds to empathize, and the other social science work that they’re doing with the founder of VHIL lab Jeremy Bailenson.
Cognitive scientist and phenomenologist Lynda Joy Gerry got into virtual reality after seeing the body swap experiments by Machine to Be Another as well as Shaun Gallagher’s A Neurophenomenology of Awe and Wonder research. Gerry’s master’s thesis was on empathy in VR, where she did a survey of the established theories on empathy from cognitive science, social science, phenomenology, and virtual reality. I had a chance to talk with Gerry at the IEEE VR academic conference in March, where she was presenting her research on empathy in VR and advocating for a more holistic framework for empathy research that bridges the objective theories of empathy from cognitive science with the more subjective, intersubjective, and experiential perspectives from the philosophical branch of phenomenology.