New Muscle behind VR Looks at Ways To Enable Market-Moving Experiences

Xavier Hansen, program manager, Verizon Labs’ Envrnmnt

Sense of Opportunity Grows Notwithstanding Primitive State of Development

By Fred Dawson

July 27, 2017 – The prospects for virtual reality as a vehicle for mainstream entertainment have spawned new, more platform-agnostic approaches to content development among major industry players even as the most avid VR proponents acknowledge the technology has a long, long way to go.

Among the more significant activities lending credibility to the notion that VR could become a disruptive force in everyday culture are incubation initiatives mounted by Verizon and Technicolor. Employing large teams of professionals from various disciplines in sizeable facilities, these endeavors are opening a new chapter in VR development by creating frameworks for new VR architectures, developing new approaches to production and post-production, and exploring the use of AI machine learning, eye tracking and other advances that can add verisimilitude to interactive VR experiences.

These projects are at the cutting edge of an expanding array of activities across the emerging VR ecosystem. Most are focused on use of the technology at its current stage of development, but in the aggregate they are creating a pan-industry environment for shared learning that is sure to expedite pursuit of more commercially viable approaches to engaging mass audiences.

Among the oldest VR skunkworks are the studios dedicated to fostering content for specific VR HMDs (head-mounted displays), such as Facebook’s Oculus Studios and Samsung’s Milk VR. Joining these, with a commitment to the Daydream platform Google debuted in 2016, is the company’s new VR Creator Lab housed in YouTube Spaces facilities in Los Angeles.

Also of recent vintage is a growing cluster of independent boutiques like Moth + Flame VR, The Virtual Reality Company, N-iX VR, Koncept VR, Start VR and yode that have emerged in LA and New York to help producers create content for VR games, storytelling and advertising using whatever platforms they choose. Productions out of some of these studios have begun making waves at traditional venues like Cannes, Sundance and the Tribeca Film Festival, but, as compelling as they might be, they are largely short-form variations on linear narratives offering viewers a 360-degree view of what’s going on.

On a broader level, there’s also growing industry-wide cooperation on efforts to foster more coherent development in the VR space, most notably through the VR Industry Forum, a new organization that includes leading vendors like Ericsson, ARRIS, Intel, Qualcomm, Technicolor, Dolby, Harmonic, Irdeto and many more, along with CableLabs, MovieLabs and two Tier 1 service providers, Sky and Verizon. Following its public introduction at CES in January, VRIF has been making steady progress toward issuing initial guidelines aimed at overcoming the fragmentation caused by proprietary closed systems.

“Proprietary device implementations are scaring everybody,” says David Price, vice president of strategic business development for TV and media at Ericsson. “We want VR to work across any platform. We want to be able to point to standards that will allow development of compelling content.”

As in any startup phase involving development of a new consumer technology, the shakeout in the battle for supremacy among leading developers will likely lead to some agreement on standards that are in everybody’s interest. Indeed, preceding the VRIF announcement there was already movement in this direction with last year’s launch of the Global Virtual Reality Association (GVRA), a group that includes leading VR device suppliers like Acer Starbreeze, Google Daydream View and Cardboard, HTC Vive, Facebook Oculus and Samsung Gear.

But while standardization is a long-term goal everyone can aspire to, the most important goal is to discover a path to commercial viability. As VRIF put it in its founding statement, “[W]e come together to exchange ideas and to seek a greater understanding of a very complex creation, delivery and consumption model.”

Whether the search will pan out remains to be seen. But excitement about the possibilities has drawn enough participation to merit confidence that answers won’t be long in coming.

One seedbed for such explorations is Verizon Labs’ Envrnmnt, a big open space at facilities in Warren, NJ that program manager Xavier Hansen describes as a “dynamic, highly lateral innovation environment.” There, teams led by senior technologists and staffed by talented young enthusiasts are developing production templates that can be used to create ever more compelling user experiences as the technology matures.

“We’re taking a bold, broad architectural approach to the entire AR/VR space,” Hansen says, articulating a linkage between augmented and virtual reality development that is gaining traction everywhere. “We’ve launched units devoted to R&D, productizing and content production that are pushing the envelope not just in entertainment but other fields as well.”

Envrnmnt is working with colleagues from Verizon and the recently acquired AOL as well as with outside entertainment brands on specific content development projects. But, as the name suggests, outputting specific content is not the primary purpose of the operation. “Our main focus is on building core development technology,” Hansen says.

“We’re building the engines that enable people to build apps, and then we use that core to create apps as milestones for our clients,” he explains. “But we want ultimately to provide a platform that lets people build apps themselves without necessarily requiring a team of expert developers.”

Demonstrating one of the lab’s new frameworks, Hansen introduces a virtual sports bar where people’s avatars can meet and talk, watch live sports events streaming on a virtual screen or shoot hoops. The basketball shooting experience relies on tracking the user’s hand motion as a virtual ball is tossed at the hoop. The current iteration of the framework uses CGI (computer-generated imagery) rather than video capture, making it easier to build an application that can be transmitted to users.

The framework is meant to support a more immersive experience of moving through three-dimensional space than current VR productions commonly offer. It will enable developers to create such spaces using whatever visual content they bring to the table without building everything from scratch, Hansen explains. Elements will include the avatar constructs, a voice communications format, support for placement of live video screens, means of utilizing motion tracking for specific applications like shooting baskets, and other mechanisms that might go into creating a VR social networking space.
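To give a concrete sense of the kind of building block such a framework might expose, here is a minimal sketch that turns a few tracked hand-position samples into a thrown-ball trajectory and checks whether the toss reaches a hoop. It is purely illustrative: the HandSample type, the function names and the numbers are assumptions made for this article, not anything published by Envrnmnt.

```python
# Hypothetical sketch: turning tracked hand samples into a thrown-ball
# trajectory. Names and values are placeholders, not Envrnmnt's API.
from dataclasses import dataclass
from typing import List, Tuple

GRAVITY = -9.81  # m/s^2, applied to the vertical (y) axis

@dataclass
class HandSample:
    t: float   # time in seconds
    x: float   # tracked hand position, meters
    y: float
    z: float

def release_velocity(samples: List[HandSample]) -> Tuple[float, float, float]:
    """Estimate the ball's velocity at release from the last two hand samples."""
    a, b = samples[-2], samples[-1]
    dt = b.t - a.t
    return ((b.x - a.x) / dt, (b.y - a.y) / dt, (b.z - a.z) / dt)

def simulate_shot(origin, velocity, hoop, radius=0.23, dt=0.01, max_t=3.0):
    """Step a simple ballistic trajectory and report whether it passes the hoop."""
    x, y, z = origin
    vx, vy, vz = velocity
    t = 0.0
    while t < max_t and y > 0.0:
        x, y, z = x + vx * dt, y + vy * dt, z + vz * dt
        vy += GRAVITY * dt
        # Count a basket when the descending ball passes near the hoop's center.
        if vy < 0 and abs(x - hoop[0]) < radius and abs(y - hoop[1]) < 0.05 and abs(z - hoop[2]) < radius:
            return True
        t += dt
    return False

# Example: two samples 50 ms apart produce a forward, upward toss.
samples = [HandSample(0.00, 0.0, 1.6, 0.0), HandSample(0.05, 0.1, 1.8, 0.15)]
v = release_velocity(samples)
print("release velocity (m/s):", v)
print("basket made:", simulate_shot((0.1, 1.8, 0.15), v, hoop=(2.0, 3.05, 4.0)))
```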

The Envrnmnt team is working on techniques that will make it possible to stream richer video-recorded environments for this level of immersive interaction. “We’re reducing the weight of transmission with our algorithms,” Hansen says.

The experience Envrnmnt is looking for can’t be attained by transmitting just the content that falls into the field of view as the user shifts focus, he notes. “We’re streaming the full 360-degree video,” he says. “The problem with the FOV approach is you can’t move in space. It’s okay if you’re stationary and just looking around, but it’s not practical for creating a fully immersive experience. The Holy Grail is about fixing your ability to move in a 360-degree space. We’re on that quest.”
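To see why full-sphere streaming is so much heavier than viewport-only delivery, a back-of-envelope comparison of pixel counts at the same angular resolution is instructive. The resolution and field-of-view figures below are illustrative assumptions, not Verizon’s numbers.

```python
# Back-of-envelope comparison (illustrative assumptions only): pixels in a full
# 360-degree equirectangular frame versus a viewport-only crop at the same
# angular resolution.
def pixels(h_deg, v_deg, px_per_degree):
    return (h_deg * px_per_degree) * (v_deg * px_per_degree)

PX_PER_DEG = 20                                   # assumed angular resolution
full_sphere = pixels(360, 180, PX_PER_DEG)        # whole equirectangular frame
viewport    = pixels(90, 90, PX_PER_DEG)          # a typical headset field of view

print(f"full-sphere frame : {full_sphere / 1e6:.1f} Mpx")
print(f"viewport-only crop: {viewport / 1e6:.1f} Mpx")
print(f"ratio             : {full_sphere / viewport:.0f}x more pixels to stream")
```

At these assumed figures, the full equirectangular frame carries roughly eight times the pixels of a single viewport, which is the kind of transmission weight Hansen says his team’s algorithms are working to reduce.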

Another part of that quest is to use hologram technology in conjunction with VR so that a music performance or other activity involving real people can be brought into the three-dimensional VR space. “You could create a club atmosphere where people can pull in the hologram of a performer like Katy Perry and watch her as they walk around the space, just as you would in a real club,” Hansen says.

Eventually the full expression of the VR potential will appear in the storytelling space, he adds. “The space race is on to create a full-scale movie where people can teleport place to place and exist in different worlds,” he says, noting it’s not a matter of if, just when. “Intel is telling us when we have the software, they have the chips.”

Another place where efforts to reach these levels of performance are underway is the new Technicolor Experience Center (TEC) in Culver City, CA, where a team of over 300 researchers is working on scriptwriting and production concepts for VR movie making. Or maybe a better way of putting it is movie-game making, insofar as the line between movies and games often disappears when it comes to immersive storytelling in VR, especially in the early going, where action movies are a relatively easy fit for adding immersive interactions.

But there’s a distinction between games and movies in terms of emotional engagement that’s all-important. As Kevin Cornish, founder of VR content creation studio Moth + Flame, notes in a recent interview with SMPTE newsletter editor Michael Goldman, “We are talking about experiences that are emotion-based. And stirring emotion is, of course, the feature that is the most intentionally cinematic aspect of VR.”

Placing the user in the middle of the action to the point that a good story can exert its emotion-generating force is not just a major challenge for scriptwriters. From a technical standpoint, it requires supporting immersive interactive experiences in which the movie’s characters react to the user realistically.

The key to giving characters the ability to interact with real people lies with adaptation of the machine-learning capabilities of AI to VR, which is one area of focus at TEC, according to Nick Mitchell, TEC’s vice president of immersive technology. Speaking with Goldman, he notes there’s a lot to work with given the progress in machine-human interaction capabilities derived from AI by the likes of Google, Facebook, Amazon, IBM and others.

TEC researchers are tasked with making characters’ appearance and physical responses to user interaction authentic down to the finest details. “We have people studying behavioral patterns to try and get things right, like getting micro expressions correct in facial movement,” Mitchell says.
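To illustrate the general idea in the simplest possible terms, the sketch below shows a common pattern in real-time character animation: per-frame blendshape weights are eased toward a target expression so the change reads as a subtle micro expression rather than an abrupt switch. The expression names, weights and easing rate are assumptions for illustration, and none of this describes TEC’s actual pipeline.

```python
# Generic illustration (not TEC's pipeline): easing blendshape weights toward a
# target expression frame by frame so the facial change stays subtle.
EXPRESSIONS = {
    "neutral":  {"brow_raise": 0.0, "smile": 0.0, "eye_widen": 0.0},
    "surprise": {"brow_raise": 0.8, "smile": 0.1, "eye_widen": 0.7},
}

def ease_toward(current, target, rate=0.15):
    """Move each blendshape weight a fraction of the way toward its target."""
    return {k: current[k] + rate * (target[k] - current[k]) for k in current}

weights = dict(EXPRESSIONS["neutral"])
for frame in range(10):  # roughly a third of a second at 30 fps
    weights = ease_toward(weights, EXPRESSIONS["surprise"])

print({k: round(v, 2) for k, v in weights.items()})
```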

TEC is also putting a lot of effort into development of production and post-production processes for VR, where it has a significant leg up, given its long history in cinematic production. Discussing some of these efforts in a recent blog, Marcie Jastrow, head of TEC, says, “I think the biggest challenge is going to be learning who the new storytellers will be. What is often written out for creative is not something that can translate into immersive experiences.”

But there are major technical issues that need to be addressed as well. “Those include motion sickness, embodiment issues, or feeling that you are not completely immersed because you don’t feel your body in there,” Jastrow says.

And there’s a need to invent production processes that work at the highest levels of quality, leaving whatever corner cutting might be needed for various modes of distribution to points beyond the initial mastering. “If your best viewing environment is a movie screen or a theatrical release, then that is your master from which everything else flows,” she says. “Now we have to create the same standards in the immersive experience world to drive mass distribution.”

Clearly VR is on a path to becoming something much bigger than people have seen so far. The work at Envrnmnt and TEC is in the early stages, but these are the types of applications that have the power to generate the culture-rattling disruptions many people expect from VR.