Content Ecosystem Archive

Live VR Olympics Coverage Portends New Era of Panoramic Sports Viewing

David Aufhauser, managing director, Intel Sports

NBC Leverages Intel Platform as Barriers to Bandwidth-Efficient Streaming Fall

By Fred Dawson

January 25, 2018 – Virtual reality is about to move farther than ever into mainstream entertainment with immersive coverage from the Winter Olympics kicking off what promises to be an escalating pace of wide-field sports broadcasts in 2018 and beyond.

NBC Sports and Intel are collaborating on plans to deliver 15 or more events live and another 15 or so in on-demand mode from the games in PyeongChang, South Korea, with VR coverage that allows viewers to look at anything that’s happening within a 180-degree field of vision at multiple viewing positions by simply turning their heads. “This is nothing short of reinventing the way fans engage with content,” says David Aufhauser, managing director at Intel Sports.

Previously reported technical advances and trials over the past year have set the stage for introduction of TV-caliber network VR services, including episodic as well as sports programming. But it’s sports broadcasting where the path to creating experiences with mass market appeal seems most direct.

Much of what goes into such efforts will leverage the consensus on approaches to producing and delivering content that has been achieved under the auspices of the Virtual Reality Industry Forum (VRIF). Although the forum’s first set of specifications, issued at CES in early January, targets on-demand use cases, specs for live use cases are under preparation for release by year’s end.

Achieving multi-platform interoperability is crucial to meeting the mass audience requirements of a broadcaster like NBC. Intel, a charter member of VRIF, has built a comprehensive suite of volumetric production capabilities that will support maximum audience reach in a wide range of VR scenarios, Aufhauser says.

“One of the things we focused on in developing our True View technology was to make sure it works across multiple devices and platforms,” he notes. This means not only delivering stereoscopic 3D viewing to people using different types of VR head-mounted devices (HMDs) but also enabling panoramic 2D viewing on handheld devices without HMDs.

Two major hurdles have been cleared to make such a service possible. Wide-field cameras capable of capturing 180 or even 360 degrees of the visual space have eliminated the jarring discontinuities caused by stitching much narrower swaths of the space together. And VR experiences can be supported with new levels of bandwidth efficiency through a process known as “tiling,” whereby the transmitted field of vision, known in VR parlance as the “viewport” and encompassing what a viewer sees at any instant, is filled in with degrees of resolution mapped to how the eye registers the visual field in real life.
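
To make the tiling idea concrete, here is a minimal sketch, assuming an illustrative tile grid and quality tiers rather than Intel’s or NBC’s actual scheme, of how a player might request high resolution only for the tiles near the current viewport:

```python
# Minimal sketch of viewport-dependent tiling (illustrative only; not the
# actual Intel/NBC pipeline). The 360-by-180-degree frame is split into
# tiles, and each tile is assigned a quality tier based on its angular
# distance from the center of the viewer's current viewport.

import math

TILE_COLS, TILE_ROWS = 12, 6   # assumed grid: 30 x 30 degrees per tile

def tile_center(col, row):
    """Yaw/pitch of a tile's center in degrees (yaw 0-360, pitch -90..90)."""
    yaw = (col + 0.5) * 360 / TILE_COLS
    pitch = (row + 0.5) * 180 / TILE_ROWS - 90
    return yaw, pitch

def angular_distance(yaw1, pitch1, yaw2, pitch2):
    """Approximate angular separation between two view directions, in degrees."""
    dyaw = min(abs(yaw1 - yaw2), 360 - abs(yaw1 - yaw2))
    return math.hypot(dyaw, pitch1 - pitch2)

def select_tile_qualities(view_yaw, view_pitch):
    """Map every tile to a quality tier: high near the gaze, low in the periphery."""
    plan = {}
    for col in range(TILE_COLS):
        for row in range(TILE_ROWS):
            dist = angular_distance(view_yaw, view_pitch, *tile_center(col, row))
            if dist < 45:        # roughly inside the viewport
                plan[(col, row)] = "high"
            elif dist < 90:      # just outside; likely target of the next head turn
                plan[(col, row)] = "medium"
            else:                # far periphery or behind the viewer
                plan[(col, row)] = "low"
    return plan

if __name__ == "__main__":
    plan = select_tile_qualities(180, 0)   # viewer looking straight ahead
    print(sum(1 for q in plan.values() if q == "high"), "tiles requested at high quality")
```

In practice the tile layout, zone angles and quality ladder would be tuned per service; the point is simply that most of the sphere can be fetched at reduced resolution at any given moment.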

Still to be resolved are the resolution limitations of HMD displays. While, as previously reported, new high-end HMDs support resolution in the 2400-x-1200-pixel range, the high resolution consumers are accustomed to from 4K or even HD TV panels is still a couple of years away, especially when it comes to mid- and low-priced HMDs.

Nonetheless, if what NBC Sports has cooked up performs as billed, it’s hard to imagine VR sports programming won’t soon be in high demand. Users who download the recently released Intel-based NBC Sports VR app will be able to watch marquee events every day in 3D stereoscopic VR mode using any of several HMD platforms, including Google Daydream View, Samsung GearVR and Windows Mixed Reality, which runs on a variety of OEM HMDs designed for the Microsoft platform. The content will also be made available for non-HMD 2D viewing on iOS and Android devices with navigation executed by screen swiping across the field of vision.

Events selected for VR coverage, such as alpine skiing, snowboarding, ice hockey and speed and figure skating, will be captured by camera pods in three to six locations per event. Viewers switching from one location to the next as an event unfolds will be able to look in all directions over the 180-degree field to see what’s going on.

Other features include: post-event highlights delivered for VR viewing as well as on-demand availability of the full VR-covered events; text providing names of athletes; real-time stats and leaderboards accessible in VR viewing mode; and audio integration that enables an immersive sound experience at each vantage point. The coverage will also include a director’s cut enabling a lean-back immersive experience with picture-in-picture support that facilitates toggling between self-selected views and the director’s cut.

All of this requires the ability to process an immense amount of data in real time, going well beyond what’s needed with traditional 2D live capture and streaming. This starts with mapping all the data generated by the “voxels,” the cubed volumetric pixels representing height, width and depth, into a volumetric rendition of the playing field and everything happening moment to moment. In addition, productions have to process all the metadata tied to each event and participant wherever a user chooses to look. And there’s a lot to handle when it comes to maintaining quality assurance, including ensuring smooth operations with the constant stream of data flowing back and forth between user and the CDN to enable instantaneous view shifts with each turn of the head.
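
A back-of-envelope sketch gives a feel for the scale involved; the grid dimensions, occupancy and per-voxel byte counts below are illustrative assumptions, not Intel’s figures:

```python
# Back-of-envelope estimate of raw volumetric data volume (illustrative
# assumptions only). Each voxel carries position plus color; the playing
# area is discretized into a coarse grid and only a fraction of cells are
# occupied in any given frame.

GRID = (1000, 600, 100)      # assumed voxel grid over the playing area (x, y, z)
OCCUPANCY = 0.02             # assumed fraction of cells occupied per frame
BYTES_PER_VOXEL = 16         # assumed: packed position, color and confidence
FPS = 30                     # assumed capture rate

def raw_rate_gbps(grid=GRID, occupancy=OCCUPANCY,
                  bytes_per_voxel=BYTES_PER_VOXEL, fps=FPS):
    cells = grid[0] * grid[1] * grid[2]
    voxels_per_frame = cells * occupancy
    bytes_per_second = voxels_per_frame * bytes_per_voxel * fps
    return bytes_per_second * 8 / 1e9    # gigabits per second

if __name__ == "__main__":
    print(f"~{raw_rate_gbps():.1f} Gbit/s of raw voxel data under these assumptions")
```

Even at this coarse granularity the raw stream runs to several gigabits per second before event metadata and quality-assurance telemetry are layered on top, which is why real-time volumetric production is such a heavy lift.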

The mapping of the voxels, of course, is intrinsic to the True View production platform, but other aspects require support from third parties. “We’re pioneering new ways to interact and how the content is managed,” Aufhauser says. “How we manage all this data and operate the programs across multiple rights holders and multiple apps is a major challenge.”

Citing the support Intel is getting from online publisher Ooyala’s Flex media logistics platform, he adds, “Flex has been the right solution for that. We’re working with them on a day-to-day basis.”

“No matter how much metadata piles up, the end points have to be selective in finding and applying relevant information with each session in that location,” notes Glen Sakata, senior account executive at Ooyala. “You have to have a very dynamic way of handling this.”

The quality control aspects are especially daunting, he adds. “When things go wrong you have to know what to do. A lot of things can happen, whether it’s on the backbones or in the various cloud services – AWS, Alibaba, Azure. You can’t wait for people to push a button.”

Notwithstanding the challenges, Intel is throwing a lot of effort into VR. Last year the company began working with the National Football League and Major League Baseball to engage various teams in use of the technology to deliver game highlights. For example, Intel worked with 11 NFL teams using 30 to 50 5K JAI cameras around stadium perimeters to capture the entire field of action, enabling users to zoom in on whatever they wanted to watch during the replay.

At CES Intel announced the opening of a Los Angeles-based studio dedicated to VR and augmented reality (AR) productions. “With Intel Studios we’re going into the fully immersive world of video production,” Aufhauser says.

The facility features what is billed as the world’s largest volumetric production stage, a dome-shaped structure measuring 10,000 square feet at the base where a bevy of VR cameras feed captured data over fiber cables to Intel-powered servers capable of processing up to six terabytes per minute. Intel expects movie studios, including the first announced partner, Paramount, as well as broadcasters, ad agencies and other commercial content producers to make use of the facility for live as well as episodic productions.
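
For scale, converting that quoted ingest rate into sustained throughput (assuming decimal terabytes) works out to roughly 800 gigabits per second across the facility’s fiber plant:

```python
# Convert the stage's quoted ingest rate of 6 TB per minute into sustained
# throughput, assuming decimal terabytes (10^12 bytes).
tb_per_minute = 6
bytes_per_second = tb_per_minute * 1e12 / 60
gigabits_per_second = bytes_per_second * 8 / 1e9
print(f"{gigabits_per_second:.0f} Gbit/s sustained")   # -> 800 Gbit/s
```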

Another signal as to what’s in store came during Intel CEO Brian Krzanich’s keynote speech at CES, where he was joined by former Dallas Cowboys quarterback Tony Romo in a demonstration of a True View “be-the-player” app. Without indicating when the capability will be commercialized, they showed how viewers can watch plays in a football game unfold from the moving vantage of any player.

Aufhauser also points to social media applications as a major area of opportunity for the Intel technology. Indeed, according to Alexis Macktin, an analyst at VR researcher Greenlight Insights, social interaction in VR mode is a major area of interest. Among consumers who have a strong interest in VR, “67 percent are interested in being able to interact socially,” Macktin says. “When we asked active users who are interested in using VR features every day to name what the daily use cases would be, social features were mentioned more than any others.”

Contrary to fears that VR threatens to isolate people from one another, Macktin says the social experience has become an important component of people’s engagement with VR in public locations, such as theme parks, IMAX centers, kiosks and, especially in the APAC region, Internet cafes. “Location-based VR is bringing people together in the VR experience,” she says, noting that Greenlight is projecting this industry segment will be generating $8 billion in annual global revenues by 2022.

Macktin also notes the role industrial use of VR is playing in popularizing the technology. “We expect that sector to grow a lot this year,” she says. Overall, Greenlight believes VR in all its permutations will be a $175-billion contributor to the global economy by 2022.

At this early stage it’s the 360-degree immersive viewing experience with entertainment content that is galvanizing the most consumer attention. “360-degree video is the gateway for consumers and for brands as well,” Macktin says. In its 2017 U.S. consumer adoption surveys, the firm found that 44 percent had seen 360-degree video.

No wonder, then, that the VRIF has focused its first set of specifications, released at CES, on defining an interoperable environment for producing and distributing immersive 360-degree content. In so doing, the group has directed a lot of attention to steps toward enabling a practical delivery mode for such content that avoids the vertigo-inducing practices that plagued early iterations of VR.

“We don’t want to upset the consumer, make anyone ill in any way,” says Sky broadcast chief engineer Chris Johns, a VRIF vice president who serves as co-chair of the forum’s Production Task Force. “We want an entertaining, interactive experience.”

The speed at which technological advances have brought VR to the point of practical operations in the streaming services arena has taken many people by surprise. NAGRA, for example, engaged with the VRIF early on but pulled back in light of the uncertainties surrounding the technology.

“VR is an area where we’ve taken more of a follow position, mindful of the 3-D risk from a few years ago,” says Simone Trudelle, senior product marketing director at NAGRA. “But at this stage it’s clear VR has gone through all the right gates and will become an industry of its own that’s tied to traditional content distribution. The VR Forum is really drilling down on the details and optimizing VR for mass distribution. We’re currently in the process of rejoining VRIF.”

Confirmation that the Olympic VRcast could be the start of a major trend can be found in network service providers’ preparations for utilizing managed broadband networks to provide the bandwidth and super low-latency bi-directional communications essential to delivering a compelling volumetric experience. “This technology represents an extremely promising and powerful opportunity, and it is imperative that we work together to create a powerful experience for users out of the gate,” says Christian Egeler, director of XR (Extended Reality) product development at Verizon, which is a member of the VRIF.

Cisco Systems has a bird’s eye view of these efforts, notes Sean Welch, vice president and general manager for cable access solutions in Cisco’s Service Provider Business group. “We’re engaged with four cable MSOs that are looking at providing 360-degree services,” Welch says.

New Muscle behind VR Looks at Ways To Enable Market-Moving Experiences

Xavier Hansen, program manager, Verizon Labs’ Envrnmnt

Sense of Opportunity Grows Notwithstanding Primitive State of Development

By Fred Dawson

July 27, 2017 – The prospects for virtual reality as a vehicle for mainstream entertainment have spawned new, more platform-agnostic approaches to content development among major industry players even as the most avid VR proponents acknowledge the technology has a long, long way to go.

Among the more significant activities lending credibility to the notion that VR could become a disruptive force in everyday culture are incubation initiatives mounted by Verizon and Technicolor. Employing large teams of professionals from various disciplines in sizeable facilities, these endeavors are opening a new chapter in VR development by creating frameworks for new VR architectures, developing new approaches to production and post-production and exploring use of AI machine learning, eye tracking and other advances that can add verisimilitude to interactive VR experiences.

These projects are at the cutting edge of an expanding array of activities across the emerging VR ecosystem. Most are focused on use of the technology at its current stage of development, but in the aggregate they are creating a pan-industry environment for shared learning that is sure to expedite pursuit of more commercially viable approaches to engaging mass audiences.

Among the oldest VR skunkworks are the studios dedicated to fostering content for specific VR HMDs (head-mounted devices) like Facebook’s Oculus Studios and Samsung’s Milk VR. Joining these with a commitment to the Daydream platform Google debuted in 2016 is the company’s new VR Creator Lab housed in YouTube Spaces facilities in Los Angeles.

Also of recent vintage is a growing cluster of independent boutiques like Moth + Flame VR, The Virtual Reality Company, N-iX VR, Koncept VR, Start VR and yode that have emerged in LA and New York to help producers create content for VR games, storytelling and advertising using whatever platforms they choose. Productions out of some of these studios have begun making waves at traditional venues like Cannes, Sundance and the Tribeca Film Festival, but, as compelling as they might be, they are largely short-form variations on linear narratives offering viewers a 360-degree view of what’s going on.

On a broader level, there’s also growing industry-wide cooperation on efforts to foster more coherent development in the VR space, most notably through the VR Industry Forum, a new organization that includes leading vendors like Ericsson, ARRIS, Intel, Qualcomm, Technicolor, Dolby, Harmonic, Irdeto and many more, along with CableLabs, MovieLabs and two Tier 1 service providers, Sky and Verizon. Following its public introduction at CES in January, VRIF has been making steady progress toward issuing initial guidelines aimed at overcoming the fragmentation caused by proprietary closed systems.

“Proprietary device implementations are scaring everybody,” says David Price, vice president of strategic business development for TV and media at Ericsson. “We want VR to work across any platform. We want to be able to point to standards that will allow development of compelling content.”

As in any startup phase involving development of new consumer technology, the shakeout in the battle for supremacy among leading developers will likely lead to some agreement on standards that are in everybody’s interest. Indeed, preceding the VRIF announcement there was already movement in this direction with last year’s launch of the Global Virtual Reality Association (GVRA), a group that includes leading VR device suppliers like Acer Starbreeze, Google Daydream View and Cardboard, HTC Vive, Facebook Oculus and Samsung Gear.

But while standardization is a long-term goal everyone can aspire to, the most important goal is to discover a path to commercial viability. As VRIF put it in its founding statement, “[W]e come together to exchange ideas and to seek a greater understanding of a very complex creation, delivery and consumption model.”

Whether the search will pan out remains to be seen. But excitement about the possibilities has drawn enough participation to merit confidence that answers won’t be long in coming.

One seedbed for such explorations is Verizon Labs’ Envrnmnt, a big open space at facilities in Warren, NJ that program manager Xavier Hansen describes as a “dynamic, highly lateral innovation environment.” There teams led by senior technologists and staffed by talented young enthusiasts are developing templates for production which can be used to create ever more compelling user experiences as the technology matures.

“We’re taking a bold, broad architectural approach to the entire AR/VR space,” Hansen says, articulating a linkage between augmented and virtual reality development that is gaining traction everywhere. “We’ve launched units devoted to R&D, productizing and content production that are pushing the envelope not just in entertainment but other fields as well.”

Envrnmnt is working with colleagues from Verizon and recently acquired AOL as well as with outside entertainment brands on specific content development projects. But, as the name suggests, outputting specific content is not the primary purpose of the operation. “Our main focus is on building core development technology,” Hansen says.

“We’re building the engines that enable people to build apps, and then we use that core to create apps as milestones for our clients,” he explains. “But we want ultimately to provide a platform that lets people build apps themselves without necessarily requiring a team of expert developers.”

Demonstrating one of the lab’s new frameworks, Hansen introduces a virtual sports bar where people’s avatars can meet and talk, watch live sports events streaming on a virtual screen or shoot hoops. The basketball shooting experience relies on tracking of hand motions as the user tosses a virtual ball at the hoop. The current iteration of the framework uses CGI (computer-generated imagery) rather than video capture, making it easier to build an application that can be transmitted to users.

The framework is meant to support a more immersive experience of moving through three-dimensional space than commonly occurs with current VR productions. It will enable developers to create such spaces using whatever visual content they bring to the table without building everything from scratch, Hansen explains. Elements will include the avatar constructs, voice communications format, support for placement of live video screens, means of utilizing motion tracking for specific applications like shooting baskets and other mechanisms that might go into creating a VR social networking space.

The Envrnmnt team is working on techniques that will make it possible to stream richer video-recorded environments for this level of immersive interaction. “We’re reducing the weight of transmission with our algorithms,” Hansen says.

The experience Envrnmnt is looking for can’t be attained by transmitting just the content that falls into the field of view as the user shifts focus, he notes. “We’re streaming the full 360-degree video,” he says. “The problem with the FOV approach is you can’t move in space. It’s okay if you’re stationary and just looking around, but it’s not practical for creating a fully immersive experience. The Holy Grail is about fixing your ability to move in a 360-degree space. We’re on that quest.”

Another part of that quest is to use hologram technology in conjunction with VR so that a music performance or other activity involving real people can be brought into the three-dimensional VR space. “You could create a club atmosphere where people can pull in the hologram of a performer like Katy Perry and watch her as they walk around the space, just as you would in a real club,” Hansen says.

Eventually the full expression of the VR potential will appear in the storytelling space, he adds. “The space race is on to create a full-scale movie where people can teleport place to place and exist in different worlds,” he says, noting it’s not a matter of if, just when. “Intel is telling us when we have the software, they have the chips.”

Another place where efforts to reach these levels of performance are underway is the new Technicolor Experience Center (TEC) in Culver City, CA, where a team of over 300 researchers is working on scriptwriting and production concepts for VR movie making. Or maybe a better way of putting it is movie-game making, insofar as when it comes to immersive storytelling in VR the line between movies and games often disappears, especially in the early going, where action movies are a relatively easy fit for adding immersive interactions.

But there’s a distinction between games and movies in terms of emotional engagement that’s all important. As Kevin Cornish, founder of VR content creation studio Moth + Flame notes in a recent interview with SMPTE newsletter editor Michael Goldman, “We are talking about experiences that are emotion-based. And stirring emotion is, of course, the feature that is the most intentionally cinematic aspect of VR.”

Placing the user in the middle of the action to the point that the emotion-generating force of a good story becomes possible is not just a major challenge for scriptwriters. From a technical standpoint it requires support for immersive interactive experiences in which the reaction of characters in the movie to the user is realistic.

The key to giving characters the ability to interact with real people lies with adaptation of the machine-learning capabilities of AI to VR, which is one area of focus at TEC, according to Nick Mitchell, TEC’s vice president of immersive technology. Speaking with Goldman, he notes there’s a lot to work with given the progress in machine-human interaction capabilities derived from AI by the likes of Google, Facebook, Amazon, IBM and others.

TEC researchers are tasked with enabling authenticity of the most detailed aspects of characters’ appearance and physical responses to the interaction down to the finest details. “We have people studying behavioral patterns to try and get things right, like getting micro expressions correct in facial movement,” Mitchell says.

TEC is also putting a lot of effort into development of production and post-production processes for VR, where it has a significant leg up, given its long history in cinematic production. Discussing some of these efforts in a recent blog, Marcie Jastrow, head of TEC, says, “I think the biggest challenge is going to be learning who the new storytellers will be. What is often written out for creative is not something that can translate into immersive experiences.”

But there are major technical issues that need to be addressed as well. “Those include motion sickness, embodiment issues, or feeling that you are not completely immersed because you don’t feel your body in there,” Jastrow says.

And there’s a need to invent production processes that work at the highest levels of quality, leaving whatever corner cutting might be needed for various modes of distribution to points beyond the initial mastering. “If your best viewing environment is a movie screen or a theatrical release, then that is your master from which everything else flows,” she says. “Now we have to create the same standards in the immersive experience world to drive mass distribution.”

Clearly VR is on a path to becoming something much bigger than people have seen so far. The work at Envrnmnt and TEC is in the early stages, but these are the types of applications that have the power to generate the culture-rattling disruptions many people expect from VR.

Major Tech Advances in VR Add Credibility to Market Projections

Henrik Johansson, VP, products & marketing, Tobii

Growing Public Exposure Readies Market for the Emergence of a Much Improved Experience

By Fred Dawson

May 23, 2017 – Anyone wondering whether to invest in virtual reality content or services might gain some clarity from recent technology advancements that suggest expectations for mass market adoption could be vindicated in fairly short order.

Innovations on display at this year’s CES extravaganza in Las Vegas make clear the bulky headsets, nausea-inducing effects, untenable bitrates and other issues dogging VR could disappear sooner than many observers anticipate. Meanwhile, ever more avenues for exposing consumers to VR through online, retail and amusement center outlets are delivering VR experiences that could accelerate development of a mass market once the VR experience becomes more user-friendly.

The early impact of that exposure can be seen in a comparison of results from surveys of U.S. consumers conducted by IBB Consulting last April and again at the outset of this year. In the earlier survey the researchers found 12 percent of over 8,000 respondents expressed an interest in VR; the new survey, with 3,199 consumers responding, found the interest level has risen to 16 percent, with close to 5 percent of respondents reporting they now own VR equipment.

Awareness-Accelerating Initiatives

While hardly a sign of a serious groundswell, the figures show the advertising blitzes mounted by VR vendors over the past year, combined with increasing first-hand exposure to the technology, have had an impact. The exposure level is sure to increase this year, thanks in part to a bigger commitment to immersive VR displays by retailers like Best Buy and Walmart.

Best Buy, which last year featured VR demos in only 48 stores, is now supporting displays with demos in about 500 stores and has VR products available in the “vast majority” of its stores, judging from plans first announced last year. In announcing the plans during a second quarter earnings call in August, Best Buy CEO Hubert Joly made clear the store’s growing commitment is a bet on the long term, notwithstanding tepid market response in 2016.

“We believe virtual reality has the potential to contribute to our growth in the future,” Joly said. “But I am not expecting a material financial impact this year, given the timing of launches, inventory availability, and the fact that we are early in the cycle.”

This year VR gear owners will find they have access to a far greater volume of content with much better support for discovering the content than they’ve had before. A key facilitator in this regard is Littlstar, an online operation partially funded by a $33-million investment from Disney that provides a free app enabling consumers to view hundreds of VR titles in over 20 genre categories from Discovery, ABC, Showtime, Disney, National Geographic, PBS, Red Bull, Virgin and many other sources on all the major platforms, including Google Daydream and Cardboard, Samsung Gear VR, Oculus Rift, HTC Vive and Sony PlayStation VR.

In a recent interview with tech podcaster Neil Hughes, Littlstar founder and CEO Tony Mugavero said the outlet will be introducing much more content this year, especially in conjunction with new partnerships tied to content initiatives by Google Daydream, Sony PSVR and game-developer Marvel. In the case of Daydream, the $79 VR HMD (head-mounted display) Google introduced last year as a step up to immersive experience from the low-cost Cardboard device, Littlstar will be displaying a rapidly expanding array of Daydream content options resulting from Google’s partnerships with Hulu, HBO, MLB, NBA, NFL and various game makers and its significant investments in original productions.

“There’s a massive opportunity for us there,” Mugavero told Hughes. “Google is making huge investments in AR (augmented reality) and VR.” As what some have called the “YouTube” of VR, Littlstar anticipates its success at evangelizing the market with free VR apps will pay off through opportunities to begin selling advertising and charge for premium content as the market base expands.

At year’s end the company took a big step in that direction with launch of Littlstar Japan, an initiative aligned with Sony Music Entertainment as its anchor partner and other partners in the offing. Mugavero said the company will be able to tailor its VR navigation strategy to tastes in the hot Asian market, where unexpected demand for the Sony PSVR after it rolled out in October led to long lines and a rushed effort by Sony to raise its manufacturing output.

VR is also starting to get significant exposure through installations at amusement parks and other locations, including New York City’s Times Square where The New York Times reported startup Void’s VR installation in the Madame Tussauds wax museum has drawn over 43,000 customers paying $20 each for a multi-room high-action Ghostbusters game experience since it launched in July. As The Times noted, Void is just one of several outfits pursuing business models built on paid admission to VR experiences in public places.

One of these, not mentioned by The Times, is three-year-old French startup Scale-1 Portal. The company is finding success with a low-cost variation on the concept, which, rather than offering a fully immersive VR experience, offers arcade vendors a $20,000 package consisting of projection equipment, action-tracking sensors, glasses and content that delivers an interactive 3-D experience with wall-projected exercise courses, games and other diversions. Visitors to the Scale-1 booth at CES witnessed staff leaping, weaving and running in place in front of an exercise routine dubbed “Future Runner,” which, when viewed without the glasses, looked like a 3-D movie in the raw.

The company, which is also engaged in custom design of products for business applications, was offering four content titles with the platform as of early January with the intention of producing one title per quarter in the future. “We add games to our installed units through the Internet, so it’s a simple process to keep sites updated with new content,” said Scale-1 president Emmanuel Icart. “We have units currently running in France as well as in Florida and Canada with more on the way.”

The New VR Experience

Expectations that all the activities driving increased public exposure to VR content will pay off for entities making big investments in equipment and content are looking ever more credible amid a wave of technology innovations that promise to significantly improve the VR experience compared to what’s delivered through today’s HMDs. That’s saying a lot, given the size and weight of HMDs, concern over nausea and eye strain, constraints imposed by cables and other issues.

One development that’s sure to impact the market is the emergence of head gear with a form factor closer to sunglasses than to conventional HMDs while supporting performance matching the likes of the Oculus Rift and HTC Vive. The groundbreaking VR eyewear, on display at CES and slated for commercial release later this year, is the work of Chinese startup Dlodlo (pronounced “Dodo”) Technologies, which was handing out glasses for sampling by visitors to its booth.

Amazingly, the experience lived up to the claims made by Dlodlo CEO Li Gang when the firm’s V1 glasses were announced last summer. “Dlodlo V1 is a breakthrough in Dlodlo’s VR equipment development, and is a landmark step in introducing VR equipment to the public,” Li said. “It’s smaller, smarter, better looking and offers a better experience than other VR products in the market. We’ve made progress in every part from optics and electronics to material and structure.”

With anticipated pricing in the $500-$600 range, the V1 has evolved quickly since its August debut in a short-lived fund-raising effort anchored by Kickstarter. The V1 carbon-fiber glasses, 16 mm thick and weighing 88 grams, operate at a 90 Hz refresh rate with 3D displays fed through a “micro HDMI” input at 2400 x 1200 resolution, which translates to over 800 ppi (pixels per inch), nearly double the density levels of leading competitors’ HMDs.
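
As a sanity check on the quoted density, working backward from the resolution and the ppi figure implies a fairly small active display area; the single-panel assumption below is made purely for illustration and is not Dlodlo’s published spec:

```python
# Back out the display size implied by 2400 x 1200 pixels at roughly 800 ppi,
# assuming a single panel spanning the full resolution (an illustrative
# assumption; the actual optical layout may differ).
import math

width_px, height_px, ppi = 2400, 1200, 800
diagonal_px = math.hypot(width_px, height_px)
diagonal_in = diagonal_px / ppi
print(f"diagonal = {diagonal_in:.2f} in "
      f"({width_px / ppi:.1f} x {height_px / ppi:.1f} in active area)")
# -> diagonal = 3.35 in (3.0 x 1.5 in active area)
```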

The glasses can be connected to computers and game consoles or to the Dlodlo D1 controller, a pricey optional pocket-size high-density computing device that frees users to download and interact with VR content from Dlodlo’s growing store of games and other entertainment anywhere they find a Wi-Fi connection. In addition, Dlodlo has negotiated a tie-in with the high-end SteamVR controller platform, which entered the market last year as the supplier of motion-sensing technology and VR content for HTC’s Vive.

SteamVR, too, is a harbinger of improvements in VR experience that are destined to reach a larger audience over time. The platform utilizes saturation laser-beam coverage of a physical space to capture full-body motion signals at twitch-action gaming speeds anywhere the user moves in that space. The company has also partnered with LG in that vendor’s forthcoming second attempt at supplying the HMD market, and it’s now possible to run Rift HMDs on SteamVR. V1 users connected to SteamVR can participate in high-action games with full-body motion that matches the experience they can get with the Vive, according to some recent reviews.

Dlodlo faces a steep climb to market success, given the size of its competitors and their technological clout, especially with pricing that’s likely to be at the high end of the market. But, win or lose, the company has opened the window on a new VR HMD form factor that could well be duplicated by larger players before long.

Dlodlo has also found ways to address some of the visual issues that have continued to plague suppliers, notwithstanding recent increases in refresh rates and pixel densities. For example, the much higher pixel density reached by V1 mitigates the “screen-door” effect viewers experience when the close proximity of the displays to their eyes exposes black lines defining the contours of individual pixels.

Dlodlo has addressed another VR issue, dizziness, with a proprietary predictive algorithm that minimizes delays and distortions in the rendering process. And the company has shown it’s possible to support adjustments in display settings to help improve the experience for people with eyesight impairments.

The fact that such advances are destined to gain wider currency elsewhere is underscored by recent reports from academic researchers. For example, researchers at the University of Central Florida’s College of Optics and Photonics report they have overcome longstanding barriers to use of blue-phase LCD technology in a prototype application that raises pixel density to 1500 ppi. And researchers at Stanford University’s Computational Imaging Lab say they are developing HMDs that can adjust renderings for aging-related and other types of eyesight deficiencies.

Another area of advancement with major implications for VR involves the use of eye-tracking technology. The precision and speed of this technology was apparent in a demonstration at CES conducted by Henrik Johansson, vice president of products and marketing at Stockholm-based Tobii, a 16-year-old innovator in eye-tracking technology.

In the demo, a viewer whose eye movements were tracked by infrared sensors at the base of a PC display saw small graphic images arrayed in rows across the screen light up instantaneously as his gaze shifted from one to the next without any head movement. Johansson said Tobii’s technology also supports what is known as foveated rendering by utilizing the firm’s EyeChip microprocessor-powered IS4 eye-tracking platform to dynamically display content in gradations of resolution that realistically replicate what the eyes absorb across the field of vision in everyday experience.

Tobii’s technology, which previously gained traction in video gaming, marketing research and other fields, has now entered the VR space in conjunction with the recent introduction of the StarVR HMD, the product of a joint venture between PC maker Acer and VR content supplier Starbreeze. So far, the HMD is being used exclusively by IMAX in newly launched VR arcade centers in New York, London and Shanghai.

As explained by Johansson, the VR application, along with tracking the viewer’s gaze, divides the StarVR’s 205-degree field of vision into three zones of resolution with decreasing detail from the core circle of focus to the outer circle of perception. By eliminating unnecessary information within the field of vision as the viewer’s focus shifts in real time, “we reduce the overall bitrate by anywhere from 40 to 70 percent, depending on the degrees of resolution chosen for a particular application,” Johansson said.
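
The arithmetic behind savings of that magnitude is straightforward; the zone sizes and density factors below are illustrative assumptions rather than Tobii’s actual parameters, and the sketch assumes bitrate scales roughly with rendered pixel count:

```python
# Rough model of foveated-rendering savings: three concentric zones, each
# covering a fraction of the field of vision and rendered at a fraction of
# full pixel density. The numbers are illustrative, not Tobii's.

ZONES = [
    # (share of field of vision, pixel density relative to full resolution)
    (0.20, 1.00),   # foveal zone: full detail
    (0.40, 0.50),   # mid zone: half density
    (0.40, 0.25),   # periphery: quarter density
]

def bitrate_saving(zones=ZONES):
    rendered = sum(area * density for area, density in zones)
    return 1.0 - rendered   # fraction of pixels (and roughly of bitrate) saved

if __name__ == "__main__":
    print(f"~{bitrate_saving() * 100:.0f}% reduction under these assumptions")
```

With these particular numbers the reduction comes out around 50 percent, within the 40 to 70 percent range Johansson cites.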

Along with reducing the load on GPU processing power and cutting the transmission bandwidth required for streaming live sports and other real-time VR content, the technology introduces a new level of personalized experience to VR applications, Johansson noted. For example, it’s now possible to support more natural personal interactions based on eye contact between users’ avatars in the virtual world or to peg sounds and the appearance of characters in a gaming sequence to the user’s gaze without reliance on head movements.

Acer and Starbreeze have made known their intentions to make the StarVR HMD available to the general public later this year. “While we have not yet started shipping larger volumes of StarVR, the current interest from multiple markets and from prominent brands and business sets us up well for the mass production phase beginning later in 2017,” Starbreeze said in a quarterly financial statement.

With the eye-tracking capabilities and support for 5K (5120 x 1440) resolution, which raises pixel density to over 1,200 ppi, the HMD could significantly impact consumer expectations for VR. Starbreeze, which did not set a consumer price for StarVR, said it may further refine the HMD, which in its IMAX implementation is as bulky as or bulkier than the Rift and Vive. “Durability, field of view, hygiene, resolution, refresh rates and weight are all key aspects that we improve constantly and according to plan,” the statement said.

In a recent court appearance defending Facebook-owned Oculus in a copyright infringement case, Facebook CEO Mark Zuckerberg offered the prevailing view on the development trend line for VR. Noting his company will be investing some $3 billion to improve the VR experience in the years ahead, Zuckerberg said, “It’s going to take five or ten years of development before we get where we all want to go.”

Judging by the latest advances in VR technology, that timeline could turn out to be much shorter.

Sky Embraces New CDN Strategy Based on Nokia’s Velocix Platform

Roland Mestric, director of marketing, video business unit, Nokia

Move Harbingers Things to Come in Direct-to-Consumer Premium Video Market

By Fred Dawson

In a sign of things to come, Sky’s UK operation has opted to implement CDN technology from Nokia in its own datacenter facilities at various locations across the country rather than continuing with use of public CDN services for delivering on-demand video to subscribers.

As previously reported, several suppliers of CDN technology, including Cisco Systems, Ericsson, Imagine Communications and others as well as Nokia, have launched initiatives aimed at providing content owners and distributors who don’t own local broadband networks the means to ensure TV-caliber delivery of streamed video content with functionalities suited to new monetization and personalization strategies. Such initiatives, positioned as alternatives to reliance on traditional turnkey CDN services, aim to provide distributors the kinds of benefits Sky’s UK CTO Mohamed Hammady says his company has achieved with use of Nokia’s Velocix CDN technology.

“Using Nokia’s Velocix CDN, we have greater traffic visibility in the network, allowing us to regain control of managing the network capacity,” Hammady says. “Deploying the solution deep in the network also ensures we can manage delivery more effectively to improve performance and the customer experience.”

Sky’s move, just announced after nearly a year of experience with the Velocix deployment, marks a first for Nokia, which built its position as a leading supplier of CDN technology through sales to network owners. “The Sky project represents a milestone for Nokia,” says Paul Larbey, head of Nokia’s IP Video business. “It shows that our Velocix CDN – which delivers high-quality programming – aligns with the needs of broadcasters and content providers in addition to those of IPTV and cable operators.”

Indeed, along with selling CDN platforms directly to broadcasters, Nokia and other suppliers are also working with network owners to tout use of their technologies in the creation of CDN infrastructure in edge facilities that could support a wholesale business model targeting broadcasters. Here the idea is that CDNs positioned in headends, central offices or hubs much closer to end users would offer more robust delivery along with advanced features specifically targeted to the direct-to-consumer market.

In Sky’s case, closer proximity to subscribers has been achieved by using datacenter facilities it controls in multiple locations across the UK. As a result, the company is able to take a hierarchical approach to maximizing performance, leveraging a centralized datacenter as well as dispersed regional datacenters, notes Roland Mestric, marketing director of the video business unit at Nokia.

“Distribution of traffic is architected based on the popularity of content,” Mestric says. “Centrally, for longer tail content there’s a need for more storage capacity but lower volumes of throughput capacity, while, at the edges, storage capacity is lower but throughput is higher.”
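
In pseudocode terms, that hierarchy amounts to a popularity-based routing rule: hot titles are pinned to the high-throughput regional edge caches while long-tail titles fall back to the larger central store. The sketch below is a generic illustration with hypothetical hostnames and thresholds, not the actual Velocix request router:

```python
# Generic illustration of popularity-based CDN tiering (not the Velocix
# request router): hot titles are served from regional edge caches with
# high throughput, long-tail titles from the central library with larger
# storage.

POPULARITY_THRESHOLD = 1000   # assumed: requests per day that qualify as "hot"

EDGE_CACHES = {"london": "edge-lon.example.net",
               "leeds": "edge-lds.example.net"}    # hypothetical hostnames
CENTRAL_ORIGIN = "central.example.net"             # hypothetical hostname

def route_request(title_id, region, daily_requests):
    """Pick the cache tier that should serve this request."""
    if daily_requests.get(title_id, 0) >= POPULARITY_THRESHOLD and region in EDGE_CACHES:
        return EDGE_CACHES[region]      # popular content: serve from the edge
    return CENTRAL_ORIGIN               # long tail: serve from central storage

if __name__ == "__main__":
    stats = {"hit-show-ep1": 48000, "archive-doc-1987": 12}
    print(route_request("hit-show-ep1", "london", stats))      # edge cache
    print(route_request("archive-doc-1987", "london", stats))  # central origin
```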

In the first phase of its use of the Velocix platform Sky has focused strictly on meeting mounting demand for on-demand content delivered through its Sky On Demand service, which had grown to where the existing operations model was straining delivery resources and costs across the company’s entertainment and communications service networks. But Sky’s choice of the Nokia platform also took into account potential future needs, including support for live broadcast content, which is now in preparation for the second phase of the engagement in conjunction with delivering Sky Go, the OTT multiscreen component of the satellite pay TV service.

Support for time shifting provided by Velocix is also part of the phase two discussion, Mestric says. “We’re looking at both catch-up (short-term availability of replay of live content) and restart,” he notes.

Another consideration vital to Sky’s choice of CDN technologies was the ability to seamlessly transfer CDN operations from the public services to the new locations. “The introduction of Velocix CDN to support growth of our video on demand services was seamless,” Hammady confirms.

“Our platform includes what we call Velocix Proxy language,” Mestric explains. “This made it possible to make sure our CDN provides exactly the same capabilities with the Sky set-top boxes that they had before. Call flows were easily customized without requiring product development,” he adds.

Another priority for Sky was gaining greater visibility into traffic demands and flows, giving it more predictability of performance and network usage than it had before. The CDN caches are able to sense what’s going on with each device and to report useful metrics back to the operations center to ensure quality of experience is sustained under changing conditions.

Other attributes that factored into Sky’s choice have to do with support for personalization and dynamic advertising tied to the manifest manipulation capabilities that the Velocix platform can execute at the network edge. Specific plans remain to be spelled out, but it would be no surprise if Sky takes advantage of dynamic advertising capabilities enabled in the IP domain by the platform, given that, as previously reported, the company intends to expand its long-standing addressable advertising capabilities to its multiscreen feeds.

With Velocix Sky can modify how a request for content from subscribers is treated depending on a user’s location or device, Mestric says. “Customization on a per-subscriber basis is a potential future application that was quite important to them,” he adds.
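
Manifest manipulation of this sort can be pictured as the edge rewriting the list of renditions offered on each request. The sketch below trims an HLS-style master playlist by device class; it is a generic illustration with assumed bitrate ceilings, not the Velocix API:

```python
# Generic sketch of per-request manifest manipulation at the CDN edge (not
# the Velocix implementation): the master playlist is trimmed so that the
# renditions offered match the requesting device's capabilities.

RENDITIONS = [
    # (bitrate in kbit/s, resolution)
    (1200, "640x360"),
    (3500, "1280x720"),
    (8000, "1920x1080"),
    (16000, "3840x2160"),
]

DEVICE_CAPS = {              # assumed per-device bitrate ceilings, kbit/s
    "phone": 3500,
    "tablet": 8000,
    "set_top": 16000,
}

def build_manifest(device_class):
    """Return only the renditions this device class should be offered."""
    ceiling = DEVICE_CAPS.get(device_class, 3500)
    lines = ["#EXTM3U"]
    for bitrate, resolution in RENDITIONS:
        if bitrate <= ceiling:
            lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bitrate * 1000},RESOLUTION={resolution}")
            lines.append(f"{resolution}/{bitrate}k/index.m3u8")
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_manifest("phone"))   # offers only the 1200k and 3500k renditions
```

The same hook can apply regional restrictions or swap in ad-marked segments, which is what makes edge manifest control relevant to the addressable advertising plans mentioned above.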

Beyond the current phase 2 implementation, there’s a possibility Sky could expand use of the Velocix platform to other markets. “Hopefully, in phase 3 we’ll consider extension to Italy and Germany,” Mestric says.

Nokia has met another key requirement in the current market for software-based CDN solutions, which is to ensure the Velocix system can run on whatever recent-vintage commodity hardware might be in a customer’s data centers as long as certain performance requirements are met. “We’ve moved from requiring specialized hardware to commodity platforms,” Mestric reports. “Sky is using HP servers, and we’re running on other datacenter hardware with other customers.”

Nokia is optimistic other satellite and terrestrial broadcasters will soon join Sky in use of the CDN platform. “Sky is just the beginning of this new market opportunity for us,” Mestric says, noting Nokia is in discussions with many other players in the DTC market. “We believe content owners will look at building their own CDNs for all the reasons Sky is doing. We see this as a key trend in the near future.”

NAGRA Pursues Unique Paths To Fostering UHD TV Services

Christopher Schouten, senior director of product marketing, NAGRA

Smart TV and Anti-Piracy Service Initiatives Mark Departure from Proprietary Traditions

By Fred Dawson

January 23, 2017 – NAGRA is taking groundbreaking approaches to overcoming security-related barriers to the licensing of UHD services with the goal of changing market dynamics in two key areas: one related to enabling more effective use of forensic watermarking in the battle against piracy and the other aimed at making sure the new content is readily available to buyers of smart TVs.

While the two initiatives are operating on separate tracks, they have in common a shift toward licensing of capabilities that can be decoupled from use of NAGRA’s proprietary watermarking and conditional access products. Elements of the smart TV initiative, known as “TVkey,” were publicized at IBC and CES, but aspects pertaining to applications in North America have not been publicized. The anti-piracy initiative, which will extend the vendor’s services to a broader global market, has yet to be announced.

A Smart TV Platform for UHD Content

The NAGRA TVkey dongle

The TVkey platform, developed in cooperation with Samsung Electronics, has already been adopted by Samsung to enable secure delivery of pay TV providers’ services directly to its high-end smart TV models. Now the joint initiative is moving forward with creation of a licensing entity that will enable access to TVkey technology by third-party suppliers of chipsets, TVs, dongles and conditional access systems.

The first announced licensee, pending finalization of the new licensing body, is MStar Semiconductor, which plans to implement the platform in its EMC SoCs for 4K UHD HDR sets. This will ensure availability of the solution on smart TVs offered by a broad range of OEMs, says JongHee Han, executive vice president of visual display business at Samsung Electronics.

“TVkey technology will ultimately help provide a faster route to market of 4K services for pay-TV operators and the 4K value chain as a whole,” Han says. “By opening access to the technology, we are committed to establishing TVkey as the de facto standard for access to premium pay services directly on TV sets.”

The TVkey framework is based on a NAGRA-designed root of trust embedded in TV chips that communicates securely with a TVkey dongle containing operator-controlled CAS and DRM that plugs into TV sets’ USB ports. As described by Christopher Schouten, senior director of product marketing at NAGRA, this creates a secure media path for strict enforcement of high-value content usage rules in accord with the Enhanced Content Protection recommendations of MovieLabs, the technology consortium formed by major Hollywood studios. The platform meets other ECP requirements as well by supporting hardware-based watermarking and operator-controlled device service revocation.

“TVkey is now enabled on all the latest Samsung series 6000 or higher models,” Schouten says. “Through the licensing authority with Samsung we will enable any CE or CA provider to license this on a not-for-profit basis.”

One unnamed satellite pay TV provider has reached agreement with Samsung to be featured as a subscription option for buyers of the OEM’s TVkey-compatible sets who are in reach of that provider’s signals, Schouten notes. He says expectations are high that many more distributors will be signing up with Samsung this year and with many other CE firms in the future. “By the 2018 or 2019 model year we expect to see much wider distribution of TVkey-compatible TV sets,” he says.

NAGRA and Samsung cite multiple advantages for the TVkey approach as the means of making pay TV services and especially UHD services available for viewing on smart TVs without the use of set-top boxes, starting with the low-cost USB form factor. “It’s a smart card on a stick,” Schouten says. This contrasts with the more costly PCMCIA (Personal Computer Memory Card International Association) form factor used with the DVB CI+ (Common Interface Plus) model employed in Europe. Of course, there are many regions of the world where the CI+ option is not available, which is one reason Samsung’s Han views TVkey as a potential de facto global standard for the smart TV market.

Moreover, in places where CI+ is available users must acquire a CI+ card supporting the conditional access technology specific to any given service provider to gain access to that provider’s service. In contrast, the TVkey approach is designed to provide users of the dongle protected access to any pay TV service that has contracted to be featured with an OEM’s implementation of TVkey.

“TVkey gets virtualized for each operator so that it works with whichever service the user selects from the options displayed on screen,” Schouten says. “Through a simple sign-up process via TV app, web portal or call center consumers can sign up for any featured pay-TV service package.”

This opens a wide range of business models that can be beneficial to both CE manufacturers and service providers, he adds. By bundling TVkey dongles with their sets, manufacturers can enhance the appeal of their products by giving consumers multiple service options right out of the box.

Service providers can use the TVkey-equipped TVs to avoid the costs of providing and installing set-tops with the HEVC decoding capabilities that are essential to delivering UHD. And they can leverage the platform to enable free trials or other special offers such as one-day passes or skinny bundles for people who might not be inclined to subscribe to the provider’s full service.

Beyond the basic hardware advantages, TVkey will free service providers from reliance on the CE manufacturer’s user interface by providing a customizable template based on NAGRA’s Gravity Edge, a UI platform closely mirroring NAGRA’s OpenTV 5 that has already been ported to Amazon Fire and Roku. The UI will be introduced as a second phase in the unfolding TVkey strategy, Schouten says. And, he adds, because the platform includes support for DRM as well as CAS whether from NAGRA or other suppliers, operators will be able to include access to OTT options like Netflix as part of the branded experience.

Making TVkey Viable in North America

All these benefits apply in most of the world where smart TVs are equipped with tuners supporting access to cable, satellite, IPTV and over-the-air services. But it’s a different story in North America where tuning for anything other than ATSC broadcast services remains under control of the set-top box.

NAGRA, however, sees a way around this issue in conjunction with efforts to persuade CE manufacturers to bring multi-tuner capabilities into play with TVkey-capable UHD sets. “There’s going to have to be pull from U.S. operators who tell the CE people they want those tuners included in their TV sets,” Schouten says. Samsung, with its commitment to TVkey, is already offering such capabilities with its newer models in the North American market.

Another part of the strategy involves gaining support from the major suppliers of headend gear and set-tops, again, with a push from operators. “Because we’re making this an open CA, we will license it to anyone who wants to use it,” Schouten says. “We see operators telling their traditional suppliers, ‘We’re taking a new direction. If you want to participate and continue to have a piece of the business, we need your cooperation.’ ”

DBS operators might be especially ripe for the TVkey option, he adds. “Dish and DirecTV are offering skinny bundles over IP,” he notes. “By using TVkey they could lower costs and marry the satellite with the broadband operations.”

It remains to be seen whether a USB-based approach to enabling pay TV security on smart TVs will gain traction in North America, but the winds of change are clearly blowing in that direction elsewhere. The DVB standards organization is already well on its way to moving the CI+ platform onto USB with a preliminary blueprint introduced in July.

On a parallel track, three years ago a spate of vendor initiatives emerged with the aim of supporting virtualization of the set-top via cloud-based software connectivity to HDMI sticks. While many of these efforts fizzled with pushback from operators who felt the solutions lacked the robustness of traditional set-tops, HDMI dongle-based solutions have made significant inroads not only in the OTT domain but with traditional service providers as set-top replacements for second and third TVs in homes where traditional set-tops serve the primary TV.

The emergence of a solution that offers next-generation security on a non-proprietary basis for delivering UHD services to smart TVs could be a game changer. “We see TVkey as an important element of our future chipset strategy in terms of content security,” says Wayne Tsai, marketing director at MStar. “Our adoption of the TVkey technology ensures robust content protection for 4K Ultra HD content, and ultimately benefits everyone from us to the end-consumer.”

A Global Anti-Piracy Initiative

A police raid in Paraguay was part of enforcement action in a case against retail stores Casa Litani and Nadia Center filed by NAGRA and Discovery Communications in June 2015.

Meanwhile, as another component to making UHD content available, there’s a need for better collaboration on making forensic watermarking an effective tool against piracy, as mandated by the MovieLabs ECP specifications. NAGRA, by expanding its anti-piracy capabilities beyond customers who use its watermarking platform, believes it is well positioned to help in this arena as well.

The company, which last year acquired watermarking technology supplier NexGuard Labs, has been a long-time provider of anti-piracy services to network operators and broadcasters who use its content protection products, allowing them to benefit from a global tracking operation that works with law enforcement and various organizations to identify and take down distributors of purloined content and illicit viewing devices. Now, Schouten says, the company is beefing up its anti-piracy operations with plans to offer such services more widely.

“We’re investing a lot more, including a whole new team in Spain, to increase automation in monitoring and tracking as well as to do follow-up with people,” he explains. “We’re expanding our service on a global basis in both the traditional pay TV and the OTT spaces with the intention of offering it to the entire market whether or not entities are using our conditional access and watermarking products. It’s a true alliance-based anti-piracy model.”

This is a model NAGRA has been pursuing in Latin America for some time through its affiliation with Alianza contra Piratería de Televisión Paga, a group encompassing most of the region’s major players in pay TV that was formed in 2013 to collaborate in the fight against Free-to-Air (FTA) piracy, which uses illicit satellite receivers to decrypt signals. The alliance, an outgrowth of an anti-piracy collaboration between DirecTV and NAGRA that began in 2010, has led to a string of successful enforcement operations resulting in arrests, shutdowns of illegal retail operations and seizures of FTA devices and pirate headends supporting hundreds of thousands of illicit subscriptions in Brazil, Argentina, Colombia, Venezuela and other countries.

“Some members of Alianza are using our security solutions, others not,” Schouten says, adding that the expanded tracking and response capabilities the company is building will underpin this more expansive approach to anti-piracy operations.

If successful, the new NAGRA strategy will break with the current modus operandi where forensics and enforcement activities are largely a function of services provided by vendors to just those customers who use their watermarking and content protection technologies. An ecosystem-wide, pan-regional approach to beating piracy is widely acknowledged as the key to making it possible for content distributors to live up to the enforcement mandate embodied in the ECP specifications.

Along with generating support for enforcement NAGRA is focusing on compiling data essential to demonstrating the impact piracy is having on service providers. “One of the challenges to building the anti-piracy effort is many companies lack knowledge of the impact piracy has on their bottom lines,” Schouten says. “They need this information to drive investments in these measures.”

It’s also important to engage OTT providers in these activities, he adds. “We’ve always provided service of this nature in broadcast and pay TV, but now increasingly we’re working in the OTT space,” he says. Already, he notes, more than half of NexGuard’s customer base consists of content providers delivering high-end video direct to consumers.


Results from VR’s ‘Breakout Year’

Tim Bajarin, president, Creative Strategies

Small Global Market, Lack of Content Augur Long Haul Ahead


By Fred Dawson

January 5, 2017 – As another CES gets underway with the virtual reality hype machine running full tilt, it’s clear that network operators are justified in their cautious approach to embracing VR as an emerging service opportunity.

But it’s equally clear service providers are well advised to stay tuned with enough resources devoted to understanding what it would take to get involved if they want to avoid being blindsided when the time comes to get serious about delivering VR services. Certainly content producers aren’t taking VR lightly, notes Tim Bajarin, president of Creative Strategies, a consulting firm that advises content creators on the use of new technologies.

“All the companies we talk to see VR as an opportunity,” Bajarin says. “Hollywood directors are getting into this. Steven Spielberg has invested in the Virtual Reality Company.”

But, he quickly adds, “They’re all waiting for a product that will sell in the millions. Goggles that people will accept on a mass-market basis are probably years away.”

Indeed, notwithstanding heavy advertising and other marketing efforts by some of the major players, the fact is 2016 didn’t measure up as the predicted breakthrough year for consumer acceptance. And despite CES demos teeing up VR applications in everything from mainstream games and entertainment to porn and industrial use, content development remains measured: slower-than-expected uptake by VR gear buyers is discouraging the pace of content rollout that would drive more gear sales, raising the likelihood of an extended run for the usual chicken-and-egg conundrum.

That doesn’t mean, as some have suggested, that VR will go the way of 3-D TV. As the enthusiastic purveyors of reports predicting the emergence of a multi-billion-dollar VR market note, VR has transformative potential in how content is produced and consumed as well as how business-related tasks are executed that 3-D never had. Moreover, even as VR cuts users off from interaction with people in their immediate vicinity, there is a social component to the technology that may well be the primary reason Facebook spent $2 billion on its purchase of VR equipment maker Oculus.

At an Oculus developers’ conference in October Facebook CEO Mark Zuckerberg wowed attendees with a demonstration of what a Facebook VR experience will look like. Wearing an Oculus Rift headset he met with co-workers, played cards and engaged in other activities in virtual space to demonstrate, as he put it, that VR is “the perfect platform to put people first.”

Not that Facebook doesn’t see the content potential as well. Zuckerberg said his company has already invested $250 million with the content development community to drive VR content development and expects to funnel another $250 million into the effort, with an additional $10 million devoted to a new fund for educational applications.

A year ago, Goldman Sachs predicted VR and augmented reality, which brings digital imagery into a viewer’s field of vision, together could generate anywhere from $80 billion to $182 billion in hardware and content revenue by 2025. In August IDC issued an even more aggressive prediction, suggesting global VR and AR revenues would reach $162 billion in 2020.

But by year’s end it was clear 2016 VR HMD (head-mounted display) sales were not as robust as expected, which was inadvertently underscored by Best Buy in a pre-CES email promotion touting all the great new things to be featured at CES that are now available on Best Buy’s shelves. 4K/UHD TVs, new smartphones and much else were mentioned in the promotion; VR wasn’t.

There’s a lot more convincing to be done before a significant portion of the public is ready to buy in. In December market research firm SuperData revised downward an already pessimistic prediction for 2016 HMD sales it had issued in October, suggesting Sony’s PlayStation VR, launched in October, had gotten off to a much slower start than expected.

Sony was on track to sell about 745,000 units rather than the previously predicted 2.6 million, SuperData said. Its predictions for the other major players, HTC’s Vive, the Oculus Rift and Samsung’s Gear VR, were unchanged at 450,000, 355,000 and 2.3 million, respectively. The heavily advertised Gear VR, offering a less immersive experience tied to use of Samsung smartphones lodged in the HMD to generate images, costs much less than the others at a listed price of $100.

Expectations that the Sony PlayStation VR, priced at $400 for use with PlayStation 4 consoles, would do better than the Oculus and HTC displays were based in part on total cost of ownership: the base of over 40 million PlayStation 4 owners needed to spend just $400 to obtain a VR experience. With the purchase price of the PlayStation 4 starting at $300, people who don’t own the console could get the Sony VR experience for $700, with the added benefit of access to all the non-VR uses of the console, whereas buyers of the Rift at about $600 and the Vive at $800 also needed a high-performance PC, which adds another $1,000 or more to the cost.

In addition to costs, a dearth of content was widely seen as a drag on sales, which, as previously reported, was the mantra at the beginning of 2016. In October The Seattle Times reported the content void remained a point of major concern among developers and investors gathered for the Immersive Technology Summit in Bellevue. The paper summed up the mood with a quote from Chris Donahue, a senior director at chipmaker AMD, who said, “It’s all about the content. I don’t have an answer to what the ‘killer app’ is going to be.” It wasn’t just a matter of too little content; the quality of experience associated with much of what reached the market, as the paper noted, often had the feel of “technology demonstrations or physics experiments.”

For example, live sports events offered in VR for viewing on Samsung’s Gear VR, such as games at the NCAA’s March Madness tournament and events at the Summer Olympics, were less notable for immersive experience than they were for the poor viewing quality, which fell far short of the viewing experience on TV sets. And, generally speaking, video game reviewers found VR gaming experiences available on various platforms to be hampered by less-than-realistic limitations imposed on player actions.

But an impressive and growing range of activity focused on development of content beyond gaming as 2016 progressed suggests things could improve in the year ahead. For example, NextVR, producer of the aforementioned VR sports events and others such as the U.S. Open tennis and golf tournaments for Samsung’s Gear, has expanded its reach to include the new Google Daydream View, a recently introduced $79 HMD for use with smartphones, and promises to add what could be better viewing options in conjunction with the higher-end HMDs in the months ahead. NextVR has also begun producing one NBA game per week for VR viewing at no extra cost for holders of the NBA League Pass.

Motion picture studios have been getting their feet wet with VR clips promoting movies such as The Martian, The Blair Witch Project, Assassin’s Creed, episodes of Star Wars and many others. Doug Liman, director of The Bourne Identity, has created a VR episodic series titled Invisible, and Disney released a short VR video that puts people on the back of a dragon appearing in the remake of Pete’s Dragon.

There are also a number of startups devoted to VR content production that have been drawing significant investment funds, such as Within, Lucid, Immersv, VirtualSky and Vertebrae. The question is whether any of this will be enough to draw a significantly larger market of users.

Verizon thinks so. The company has built an end-to-end AR/VR platform utilizing Amazon Web Services’ EC2 GPU and CloudFront resources to support content creation, hosting and delivery. As described in an online presentation by Verizon executives Christian Egeler, director of AR/VR product development, and cloud architect Vinay Polavarapu, the initial iteration of the platform scales to support 100,000 simultaneous HD video streams to customers worldwide.

Elements include a VR content authoring service for ingest; a media library service for content consumption; an ads service for ad integration and monetization; a real-time stitching and encoding service for live and on-demand ultra HD VR content from a variety of cameras and rigs; adaptive streaming capability to reach any device using HLS and MPEG-DASH; and a mobile VR rendering platform for iOS and Android.
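Verizon has not published implementation details beyond the presentation, but the adaptive-streaming element is conceptually straightforward: each VR feed is exposed as a ladder of bitrate and resolution renditions that HLS and MPEG-DASH clients can switch among. The sketch below generates a hypothetical HLS master playlist for such a ladder; the rendition names, bitrates and codec strings are illustrative assumptions, not details of Verizon’s service.

```python
# Generic illustration of the adaptive-streaming piece: an HLS master playlist
# pointing clients at several bitrate/resolution renditions of the same VR feed.
from typing import List, Tuple

def hls_master_playlist(renditions: List[Tuple[int, str, str]]) -> str:
    """renditions: (bandwidth_bps, resolution 'WxH', variant playlist URI)."""
    lines = ["#EXTM3U", "#EXT-X-VERSION:4"]
    for bandwidth, resolution, uri in renditions:
        lines.append(
            f'#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},RESOLUTION={resolution},CODECS="avc1.640028,mp4a.40.2"'
        )
        lines.append(uri)
    return "\n".join(lines) + "\n"

# Example ladder for a panoramic feed; the rungs are made-up numbers.
print(hls_master_playlist([
    (20_000_000, "3840x2160", "vr_2160p.m3u8"),
    (8_000_000,  "1920x1080", "vr_1080p.m3u8"),
    (3_000_000,  "1280x720",  "vr_720p.m3u8"),
]))
```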

Another entity getting into the cloud-based VR distribution business is NeuLion, which has built an international business providing OTT video management, distribution and monetization support for content owners, including major sports entities such as the NFL, NBA, Univision Deportes and Euroleague Basketball. In a partnership with Nokia that utilizes that firm’s OZO VR camera and player SDK, NeuLion is offering content owners an end-to-end integrated platform enabling a single stitched live video feed from OZO cameras to NeuLion encoders for packaging and distribution to consumers worldwide.

The question for network service providers is whether any of these developments merits serious investment in a VR service business. “VR is on the radar of almost every mobile, cable and media client we work with, and the most frequent question we get is whether this makes sense for their business right now,” says Jefferson Wang, senior partner at Interactive Broadband Consulting Group (IBB).

In a recent survey of over 1,000 U.S. consumers who say they are interested in VR, IBB found that just 31 percent had actually tried the technology. For network operators that have a retail store presence, there’s an opportunity to provide that experience in demos that can lead to subscriptions to a VR service, Wang notes. “Initially, IBB predicts that the VR market winners will be companies that can break down the barriers to entry with an end-to-end play,” he says.

But any such initiatives will have to wait until there’s enough content to fill a VR service pipeline. That still looks to be well off in the future.


AVC at HEVC Compression Rates Scrambles Next-Gen Codec Picture

Keith Lissak, senior director, product marketing, Harmonic

Harmonic Introduces Technique that Works with Existing Client Base

By Fred Dawson

October 10, 2016 – Harmonic appears likely to shake up industry-wide efforts to save bandwidth with new encoding methods offering HEVC-level bitrate reduction on AVC encoders without requiring any change in device codecs.

“This is something we’ve been working on for some time,” says Keith Lissak, senior director of product marketing at Harmonic. “We’ve gotten AVC (Advanced Video Coding) up to HEVC (High Efficiency Video Coding) levels. Our solution works on all existing AVC-enabled devices, including devices that use HEVC codecs.”

The company’s EyeQ software system, slated for commercial release before year’s end, enters the market amid much uncertainty among mobile, pay TV and OTT content distributors over how to accommodate the rising tide of IP video transmissions as 4K UHD, HDR and other next-gen formats come into play. While HEVC has long been positioned by the ISO and ITU as the successor to the Moving Picture Experts Group’s AVC, the search for lower-cost and more easily implemented solutions has spawned a flurry of initiatives from proprietary codec suppliers such as V-Nova and RealNetworks and from promoters of royalty-free solutions such as Google and the Alliance for Open Media (AOM).

V-Nova, for example, continues to make strides, building on previously reported successes in several market segments with recently announced wins in mobile, OTT and 4K contribution. But, as confirmed in recent testing by independent research firm informitv, V-Nova’s Perseus codec, in the hybrid version designed to work with existing MPEG codecs, achieves just a 33 percent bitrate reduction on AVC-delivered 1080p HD at comparable quality levels.

Google’s royalty-free VP9, used with YouTube and in many other parties’ OTT operations, last year achieved parity with HEVC, as confirmed by several testing organizations. Perhaps more significantly, the company has moved what had been its VP10 successor initiative into the AOM technology pool.

AOM’s first codec, AV1, slated for release in March, is targeted for Internet Engineering Task Force (IETF) standardization as a royalty-free platform precisely tuned to the requirements of streaming live as well as on-demand HD and 4K UHD content over the Internet with a 50 percent efficiency improvement over HEVC and much lower use of CPU power in the encoding process. VP9 uses almost as much processing power as HEVC, which, with the exception of improvements engineered by encoding companies like Elemental, consumes ten times as much processing as is required by AVC.

While long-term prospects look good for AV1 as a potential force in IP video, the opportunity to exploit the vast embedded base of AVC codecs to achieve HEVC-level performance in the near term promises to expedite efforts to raise the quality of user experience in the congested OTT video space. According to Cisco’s latest VNI Global IP Traffic Forecast, video now accounts for over 60 percent of global Internet traffic and about 60 percent of mobile data traffic.

Viewing of TV shows, movies and other long-form video is now a big part of the video flow, which makes viewers less tolerant of sub-par performance. “Viewers now expect a first-screen quality of experience on every device, with increased video resolution and no buffering, despite network conditions,” notes Bart Spriester, senior vice president for video products at Harmonic.

Harmonic’s EyeQ is designed as an enhancement to the PURE Compression Engine used with Electra X encoders in the cloud-based suite of VOS video processing solutions the company developed to give distributors an alternative to purpose-built hardware solutions. According to Harmonic, EyeQ will allow these encoders to deliver live as well as on-demand video at a 50 percent reduction in bandwidth, in complete conformance with AVC specifications and without resorting to HEVC.

Lissak says the Q4 implementation of EyeQ will run on the Electra X2, Harmonic’s first software-based encoder designed to achieve performance levels on Intel processors comparable to the capabilities of ASICs used with its E8000 and E9000 hardware platforms. Harmonic’s software-based Electra X3, optimized for delivering broadcast-ready content at 4K resolutions and 60 frames per second, currently employs HEVC Main 10 profiles.

How the company intends to utilize EyeQ to enable 4K UHD over AVC remains to be seen, but Lissak makes clear the new technology represents “an opportunity to accelerate the rollout of UHD.” X3 implementations are slated to appear in mid-2017, he says.

More immediately, Harmonic’s emphasis is on the benefits to be realized with current video streams. Along with cutting bitrates by up to 50 percent, an especially important benefit to bandwidth-squeezed mobile operators, EyeQ directly improves the bottom line for video content and service providers through reduced storage costs at the core and network edges and by enabling a more consistent viewing experience with enhanced video quality and less buffering, Spriester says.

“By lowering CDN and storage costs by half, EyeQ has the potential to help deliver significant CapEx and OpEx savings, and increased profitability, for operators,” he says. “And when viewers spend more time in front of the screen, there’s more opportunity for content monetization.”

EyeQ is not a revamp of AVC encoding. Asked whether Harmonic is simply using extensions available in the AVC profiles, Lissak replies, “This isn’t some patch. If it were, there would be a lot more people doing it. What we’re doing represents a whole new way to analyze and optimize compression performance in real time.

“The optimization is happening based on the human visual system,” he adds. “If the human eye can’t spot the detail it gets cut out in real time.”

In other words, EyeQ executes “quality awareness” analysis of encoded frames using what Harmonic calls “in-loop artificial intelligence” to determine which bits are needed to hit quality targets based on what matters to the human visual system. These adjustments are then communicated to the encoding system, which reprocesses the frames accordingly. All of this is done without adding latency to the encoding process, Harmonic says.
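Harmonic has not disclosed EyeQ’s internals, but the general shape of content-aware, quality-target rate control can be sketched: encode each segment, score it with a perceptual metric, and keep lowering the bitrate only as long as the score stays above the quality target. The toy example below stands in a simple quantizer for the encoder and PSNR for the perceptual model; both are placeholders for illustration, not anything Harmonic has described.

```python
# Conceptual sketch of quality-target rate control: spend only as many bits as
# the viewer can perceive for each segment instead of holding a fixed bitrate.
# The "encoder" is a toy quantizer and PSNR stands in for a perceptual model.
import numpy as np

def toy_encode(frame: np.ndarray, step: int) -> np.ndarray:
    """Stand-in for an encoder: coarser quantization ~ fewer bits spent."""
    return (frame // step) * step + step // 2

def psnr(ref: np.ndarray, enc: np.ndarray) -> float:
    mse = np.mean((ref.astype(float) - enc.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def cheapest_acceptable_step(frame: np.ndarray, target_db: float = 42.0) -> int:
    """Walk toward coarser quantization while quality stays at or above target."""
    best = 1
    for step in (2, 4, 8, 16, 32, 64):
        if psnr(frame, toy_encode(frame, step)) < target_db:
            break                      # the eye (proxy) would notice; stop here
        best = step
    return best

frame = np.random.randint(0, 256, size=(720, 1280), dtype=np.uint8)
print("chosen quantization step:", cheapest_acceptable_step(frame))
```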

EyeQ relies on variable bitrate (VBR) rather than constant bitrate (CBR) encoding. It is totally different from capped VBR processes, which rely on pseudo-linear scaling of picture- and scene-level quantization to cut bitrates.


Market Focus on HDR Intensifies

Michael Wise, CTO, Universal Studios

Resolving Production Workflow Issues Is Now a Key Goal

By Fred Dawson

October 3, 2016 – The pace continues to quicken in the long march toward full realization of the enhanced quality potential of new video display technologies, especially in attempts to capitalize on stunning High Dynamic Range enhancements without the encumbrances imposed by the bandwidth-hogging 4K component of UHD.

Industry acceptance of the primacy of HDR is reflected in the recently adopted ITU HDR–TV Recommendation BT.2100, which, among other provisions, extends the luminance range and Wide Color Gamut (WCG) for 4K resolution displays embodied in the BT.2020 standard to content formatted for both HDTV 1080p and 8K displays.

There are still many issues to be worked out with HDR, which technically refers just to luminance range but in general parlance these days also includes WCG. But with growing consensus on HDR as the surest path to wowing the huge base of viewers who own 4K displays, content producers now have more reason than ever to weave HDR into the production process.

“Content shot and mastered with HDR, in my mind, looks better than any 3D content I’ve seen,” said Michael Wise, CTO at Universal Studios, who spoke at a symposium sponsored by the Society of Motion Picture and Television Engineers (SMPTE) in June. “It’s like looking out a window.”

HDR is now a key consideration in Universal’s and other studios’ production processes, although more needs to be done to reduce the labor involved and to improve creative use of the enhancements. “It’s about educating cinematographers, colorists and filmmakers about the art of the possible,” Wise said. “Honestly, there are some titles that don’t look as good as they could, but we do have really good ones like [20th Century Fox’s] Revenant.”

If striking the right balance between too little and too much enhancement without distorting creative intent is as much art as science, the science at least has reached a point where HDR can be used to consistently good effect across a wide range of displays in people’s homes. What matters most is that content be produced in multiple versions optimized to different HDR formats so that any display designed to work with one of those formats, and even some that aren’t, can render frames in accord with the intended variations in colors and brightness.

The ITU’s BT.2100 helps distributors make efficient use of these different format versions by navigating a key area of technical complexity that has to do with the so-called “gamma curves” that determine the range of brightness values executed by different types of TV displays. The prevailing gamma curve used with the new generation of HDR sets is SMPTE’s ST 2084 high dynamic range electro-optical transfer function, commonly known as the Perceptual Quantizer (PQ).

ST 2084 defines a standardized approach to breaking with the 100-nit luminance limit of the traditional gamma function on SDR (Standard Dynamic Range) displays. But there’s also growing support for Hybrid Log-Gamma (HLG), developed by the BBC and NHK and standardized as ARIB STD-B67 by the Association of Radio Industries and Businesses. HLG offers a degree of compatibility with legacy displays by more closely matching the traditional gamma curve while exploiting whatever headroom those displays have beyond the 100-nit limit.
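Both transfer functions are fully specified in the standards, so the contrast between them is easy to see in a few lines of code. The sketch below implements the ST 2084 (PQ) EOTF and the ARIB STD-B67 (HLG) OETF with the constants published in BT.2100; it is a reference illustration, not a production color pipeline.

```python
# The two HDR "gamma curves" BT.2100 accommodates: the ST 2084 (PQ) EOTF maps a
# normalized signal to absolute light up to 10,000 nits, while the HLG OETF's
# lower half tracks a conventional gamma so legacy displays degrade gracefully.
import math

def pq_eotf(signal: float) -> float:
    """ST 2084 EOTF: normalized signal in [0,1] -> displayed luminance in nits."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    e = signal ** (1 / m2)
    return 10000.0 * (max(e - c1, 0.0) / (c2 - c3 * e)) ** (1 / m1)

def hlg_oetf(scene_linear: float) -> float:
    """ARIB STD-B67 OETF: normalized scene light in [0,1] -> signal in [0,1]."""
    a = 0.17883277
    b = 1 - 4 * a
    c = 0.5 - a * math.log(4 * a)
    if scene_linear <= 1 / 12:
        return math.sqrt(3 * scene_linear)
    return a * math.log(12 * scene_linear - b) + c

# A PQ code value of about 0.51 lands near the 100-nit peak of an SDR display;
# full-scale PQ reaches 10,000 nits, and the HLG curve hits 0.5 at its knee.
print(round(pq_eotf(0.508), 1), "nits at PQ signal 0.508")
print(round(pq_eotf(1.0)), "nits at full scale")
print(round(hlg_oetf(1 / 12), 3), "HLG signal at the knee")
```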

BT.2100 embraces a newly developed simple conversion process to enable use of either HDR gamma function for rendering content depending on the type of display in use. But the onus remains on producers to devise workflows that can deliver HDR versions suited to different display environments.

Simply scanning a finished film and enhancing it to HDR is not an economically viable solution, Wise noted. “Going forward our studio workflow will incorporate digital migrations that derive versions of HDR for different display environments,” he said, suggesting these would include the two leading TV formats, Dolby Vision and HDR10, as well as versions suited to tablets and other small-screen displays.
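At its core, deriving a version for a lower-peak display means remapping the master’s luminance range into the target’s. The sketch below uses a generic extended-Reinhard curve with an assumed 1,000-nit master and 100-nit target purely for illustration; actual studio workflows apply their own, often shot-by-shot, mappings.

```python
# Generic illustration of deriving a lower-peak version from an HDR master:
# remap luminance with an extended Reinhard curve so the master's peak lands
# exactly on the target display's peak. Example numbers only.
def tone_map_nits(luminance: float, master_peak: float = 1000.0, target_peak: float = 100.0) -> float:
    """Map a pixel luminance in nits from an HDR grade to a lower-peak display."""
    x = luminance / target_peak            # express input relative to target peak
    white = master_peak / target_peak      # scale so master peak maps to 1.0
    mapped = x * (1 + x / white ** 2) / (1 + x)
    return min(mapped, 1.0) * target_peak

for nits in (1, 10, 100, 500, 1000):
    print(f"{nits:>5} nits in the master -> {tone_map_nits(nits):6.1f} nits on a 100-nit display")
```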

Right now this is a laborious process. Better cooperation among producers on formulating common parameters and procedures for mapping content to the various HDR formats is essential to normalizing and streamlining how things are done, from camera operations through all stages of production and distribution.

“We have to get together on this,” said Mark Lemmons, another SMPTE speaker, who at the time served as CTO at Deluxe Entertainment Services Group, a job he left in July. “It’s something that we have to do in partnership with others across the industry.”

But, as Ron Sanders, president of Warner Home Entertainment, noted at the symposium, it’s far easier said than done. Notwithstanding monthly meetings of studios, CE manufacturers, OTT companies and others under the auspices of the Digital Entertainment Group to work through technical issues, the consensus-building process “is like herding cats,” Sanders said.

HDR processes had yet to be incorporated into Warner’s workflow, he said, noting how hard it was to accomplish such formatting under tightening deadlines. “Windows are shrinking,” he commented, which leaves little time for remastering during the home entertainment post-production process.

“We have to get directors and producers to understand HDR,” Sanders said. “Once HDR is in the production process and the tools are more efficiently priced, [HDR-formatted] content will flow.”

The first Blu-ray players supporting the Blu-ray Disc Association’s UHD standard, which establishes HDR10 as the baseline requirement with Dolby Vision as an option, entered the market this year with under 50 titles ready for viewing in the new format. “We expect to have over 100 by Christmas,” Sanders said, speaking of titles from all sources. “You’ll see a huge ramp-up next year.”

Adding to the building HDR momentum is the emergence of content produced in HDR for OTT distribution. As previously reported, Netflix started down this road last year with a couple of series, has continued expanding the lineup this year and is now reported to have about 100 hours of content available in the format. In June Amazon launched its first HDR-formatted series along with support for a handful of HDR-formatted movies, with a promise to hit 150 hours of HDR content by year’s end.

Efforts to extend HDR benefits to owners of flat screens that aren’t equipped to support HDR per se but whose luminance and color ranges exceed SDR parameters play well with the expectations of consumers who purchased 4K UHD sets, especially as that base of users persuades distributors there are monetization opportunities tied to delivering content in 4K resolution. According to Futuresource Consulting, worldwide shipments of 4K UHD sets reached 32 million units last year, representing a 160 percent increase over 2014 and 14 percent of all sets sold. Futuresource expects 4K UHD shipments will account for 52 percent of the market by 2020.

A recent global survey conducted for Irdeto by SNL Kagan found that 64 percent of service providers and 73 percent of content producers among the nearly 500 respondents believe consumers will be willing to pay 10 to 30 percent more on their subscriptions for access to 4K UHD content. Ninety-six percent of all respondents believe 4K UHD TV services will be widely adopted by 2020.
