Content Ecosystem Archive


New Initiatives Poised to Accelerate Pay TV Business Upheaval in 2016

Lowell McAdam, chairman & CEO, Verizon

More Broadcasters Embrace Direct-to-Consumer Models as Mobile 5G Threatens Major Disruption

By Fred Dawson

January 18, 2016 – The mounting pressures on stakeholders in the pay TV business have sparked an outpouring of new strategies over the past month or so, ensuring that the year ahead will be unlike anything yet seen in efforts to keep consumers engaged.

On the content side, the list of broadcast and cable networks already or soon to be on the direct-to-consumer (DTC) bandwagon grew, some with public declarations, others with still-to-be publicized plans, signaling that while they remain uncertain what business models to pursue, the time has come to resolve those uncertainties through aggressive experimentation. MVPDs, meanwhile, were accelerating various efforts to adjust to the new realities amid growing conviction on all sides that skinny bundles accessible over broadband wireless connections offer the best alternative to trying to win subscribers with traditional pay TV bundles.

As the trend lines in consumer behavior – most prominently the emergence of mobile as a major conduit for video consumption – push everyone beyond their legacy comfort zones, it remains to be seen whether the broadcaster/MVPD (multichannel video programming distributor) consensus that has been the guiding light of the pay TV business since the birth of cable will hold. The outlook is not so much for a destructive everybody-for-themselves dive into the unknown but, rather, for a long search to find a balance that maximizes results for everyone.

Bundle Headwinds

It won’t be easy, especially if things move too fast for consensus around new business models to take hold. MVPDs have had mixed results in efforts to hold the line against cord cutting as the population of cord-nevers grows, greatly dimming the prospects for the traditional bundle as the rallying point for business dealings. Meanwhile, mounting pressure from investors on media companies has increased the chances for a mad scramble that is more go-it-alone than collegial.

How bad could it get? In December, PricewaterhouseCoopers captured headlines with results of a survey of 1,200 U.S. consumers that found the percentage of respondents who said they could see themselves subscribing to MVPD services in the year ahead had dropped from 91 percent in 2014 to 79 percent in 2015. In other words, based on these findings, roughly a fifth of respondents don’t expect to be MVPD subscribers in 2016.

In another recent report, Pew Research found that almost a fifth of adults between the ages of 18 and 29 are cord-cutting former subscribers and another 16 percent are cord nevers. Seventy-one percent of all non-subscribing adults cite expense as at least partially to blame for their resistance to the bundle, while nearly two thirds say they can access the content they want using over-the-air antennas, the Internet or a combination of both. Among Americans of all ages, Pew found that 78 percent are MVPD subscribers, 15 percent are cord cutters and 9 percent are cord nevers.

Nielsen, too, has delivered some sobering stats. American viewers are watching less live TV than ever even as total TV viewing time holds steady, with the drop-off now attributable to the Internet rather than, as in times past, the DVR.

Internet SVOD penetration as of Q3 stood at 46 percent compared to 40 percent a year earlier, while DVR penetration remained flat at 49 percent, Nielsen said. Live viewing was down 2 percent from the previous year and 7.5 percent from two years earlier. Nielsen also noted the number of broadcast-only households in the quarter rose to 12.8 million from 12.2 million in 2014 while the number of broadband-only households rose to 3.6 million from 2.8 million.

Mobile Pay TV

One of the more dramatic ramifications of these trends is the two leading mobile carriers’ new commitment to shifting their pay TV ambitions away from fixed-line networks and toward mobile in anticipation of the bandwidth surge promised by 5G. In an appearance at Business Insider’s Ignition 2015 conference on December 8 in New York, Verizon CEO Lowell McAdam made clear he expects 5G to be the conduit for driving what he described as a “mobile-first” video strategy in the years ahead. “Fiber’s going to be absolutely crucial as you go forward,” he said. “But our strongest asset is the mobile network, and we cover every square inch of the U.S.”

In a keynote interview with Business Insider CEO and editor-in-chief Henry Blodget, which was transcribed by the publication, McAdam said 5G, representing a 200-fold increase over the current 5 Mbps average LTE throughput, would be running at Verizon’s Basking Ridge, N.J., campus starting this month. Pilots are slated for San Francisco, New York and Boston followed by commercial rollouts starting later in the year and extending nationwide through 2017.
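Those two figures line up with the 1 Gbps number McAdam cites later in the piece; the quick back-of-the-envelope check below is illustrative arithmetic only, not from the interview.

```python
# Back-of-the-envelope check of the quoted figures (illustrative only).
lte_avg_mbps = 5          # McAdam's stated average LTE throughput
improvement_factor = 200  # his stated 5G multiplier

implied_5g_mbps = lte_avg_mbps * improvement_factor
print(f"Implied 5G throughput: {implied_5g_mbps} Mbps (~{implied_5g_mbps / 1000:.0f} Gbps)")
# -> 1000 Mbps, i.e. the ~1 Gbps figure McAdam references later.
```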

Noting that video now comprises 70 percent of Verizon’s mobile traffic with year-to-year growth currently tracking 75 percent, McAdam said 5G is “use-case defined” for video with extremely low latency and faster response times than LTE. “5G will really change the game and, I think, will be another spike of growth in the wireless industry,” he said.

But the forthcoming strategy won’t be about moving the 300-channel TV bundle to mobile, he added, citing the carrier’s new social network-optimized Go90 video service in conjunction with its acquisition of AOL as the start of what will be a series of new product rollouts in 2016. Millennials don’t want the bundle, he said, noting that when he asked the 300 people Verizon has in southern California working on media strategies how many subscribed to the bundle, only ten raised their hands.

“[S]kinnying down the bundle helps the profitability; driving more broadband helps the profitability, and, frankly, I think a big breakthrough is going to be when we start doing 5G, because that allows you to cover many more homes without having to actually go into the home to provide the services,” McAdam said. While acknowledging that licensing issues impede Verizon’s ability to offer high-profile content on the Go90 service, he voiced confidence that “the 300-channel bundle is going to continue to break down” over the next three to five years.

Already, McAdam noted, Verizon’s Custom TV “roll-your-own” FiOS bundle option has had a big impact on its business. “It became 40 percent of our volume the minute we launched it,” he said. Discounting current data showing most mobile video consumption occurs in the home, he suggested that when the mobile throughput gets to 1 Gbps on 5G, “I believe mobile-first video will take off inside and outside the home.”

Notwithstanding its acquisition of DirecTV, AT&T appears to be on the same page with Verizon when it comes to leveraging mobile for compelling video services. In fact, said CEO Randall Stephenson, speaking the same day as McAdam in New York, in this case at the UBS Global Media and Communications Conference, licensing rights held by DirecTV give the carrier a leg up in mobile video. DirecTV’s “stacking” rights allow AT&T to offer current TV shows as well as past season episodes, Stephenson said.

“Putting together a bundle of DirecTV content they can acquire over a mobile device or a single screen in the home – that is something we are very interested in,” Stephenson said, as quoted in various press reports. “You should assume we’re doing something.”

He said AT&T would be announcing details of its new mobile video service in January, which, at press time, had yet to occur. “We will get to put some details on this, announce it and launch it in January,” he said. “The idea that you put that portfolio of mobile stacked content together with a really robust wireless asset – it will turn some heads.”

Indeed, developments on the mobile front have reignited cable operators’ interest in arranging MVNO (mobile virtual network operator) deals, such as Comcast has indicated it might pursue under its agreement with Verizon stemming from the carrier’s acquisition of MSOs’ cellular spectrum in 2011. Stephenson was dismissive of the strategy, saying, “It’s going to be tough to come into this space without owner’s economics.”

The DTC Juggernaut

Be that as it may, the emergence of a national footprint for a mobile-based millennial-friendly pay TV service is just one of the forces coming into play to threaten the traditional bundle strategy. No less disruptive are media companies’ fast-evolving DTC strategies, where the possibilities are boundless as super-speed access over fixed and mobile networks opens a path for expansive use of assets to create new programming models.

Some broadcasters have become quite outspoken in their enthusiasm for breaking out of the TV channelization straitjacket. For example, NBCUniversal digital chief Evan Shapiro recently told the New York Post’s entertainment site Decider that Seeso, his company’s new $3.99 comedy-themed streaming service, is just the starting point for a DTC strategy that could produce nine other niche options in the year ahead.

“We feel like we’re providing the curated, niche experience that cable provided in the 1980s with the huge advantage of having watched the last 30 years take place,” Shapiro said. “We can go right to original programming knowing that it’s the ultimate destination for any successful new channel. We knew going into it that the market demands original, exclusive programming, so we are getting ahead of that.”

The NBC strategy perfectly exemplifies what’s in store as big content owners overcome the channel limitations of traditional pay TV networks to put the full power of archived, newly created and third-party web-originated asset resources to use in developing programming for niche audiences. Targeting an audience of what Shapiro referred to as “comedy nerds – millennials, Gen Xers and boomers,” Seeso combines 40 years of Saturday Night Live, remastered classics like Monty Python and Kids in the Hall, video of web comedy sketches and much else to deliver a package viewers can’t get from any other source.

“We did 11,000 interviews of people who watch video online,” Shapiro said. “We have nine different platforms in development right now. Based on the research, we’re making decisions about what to do next.”

Similarly, Time Warner’s Turner Broadcasting with last year’s acquisition of iStreamPlanet has positioned itself to leverage programming assets of its multiple brands to the fullest extent possible. Turner president David Levy, in an interview with online financial pub TheStreet at the Paley Center for Media in New York two months ago, made clear his company’s intentions to put its DTC technology to use as video consumption on mobile becomes ever more dominant.

“We have to have consumer-facing products,” Levy said. “We’re not in the television business. We’re in the content creation distribution data business.” Where mobile is concerned, he added, “We’ve got to figure out the monetization strategy…, but I think that comes later. For now we have to continue to push the products out.”

The list goes on. Other strategies bubbling under the radar include Disney’s move to DTC, now underway with its DisneyLife subscription service in Europe; a new BBC service for the U.S. leveraging assets not seen on its BBC America network venture with AMC; an expansion of an initial DTC subscription play by Viacom with its $5.99 kids’ service Noggin; and much else.

Presently, most broadcaster DTC strategies are either subscription based, avoiding a requirement for MVPD subscriber authentication, or offered free but only to authenticated subscribers. In the latter category is Discovery Communications’ just-launched Discovery GO, a bundled offering of live and on-demand content from its brands, including Discovery Channel, TLC, Animal Planet, Investigation Discovery, Science Channel, Velocity, Destination America, American Heroes Channel and Discovery Life. The app allows viewers with iOS, Android and, soon, other types of devices to browse each of the featured networks to find programs organized into 14 genres and personally curated playlists.

Many other broadcasters who have decided against the subscription DTC model, at least for the time being, are moving to aggregated groupings along these lines. But whether the authenticated pay TV subscriber requirements will hold with these types of offerings remains to be seen. One of the capabilities intrinsic to these efforts is dynamic interstitial ad placements on live as well as on-demand content, marking a testing of waters that could lead to new approaches down the road.

Advertising is also playing a role in the monetization efforts of entities pursuing the DTC subscription model. For example, CBS All Access, the $5.99 subscription service in operation since late 2014, also relies on advertising for revenues. In an appearance at the aforementioned Business Insider Ignition conference on December 8, CBS chief Leslie Moonves said each subscriber to All Access is worth about $4 in additional revenue from advertising.

Moonves also indicated the network is considering an ad-free version of the service. “We’re thinking about it,” Moonves said, according to Variety. “There could very well be a $9.99 product out there.”

And CBS isn’t just experimenting with a subscription DTC service. The network is also offering CBSN, a live 24/7 news feed at no cost with no subscriber authentication required; CBS Live Stream, a free on-demand service requiring registration but not subscriber authentication; CBSSports.com Stream, a free sports news service also requiring registration but no subscriber authentication; and live sports feeds through a mobile sports app at no charge.

In these early days of experimentation it’s too soon to know how everything will settle out on the DTC frontier. But, already, it’s clear the ability to serve audience interests with an expanded array of content categories in the multiscreen environment is an opportunity that will play a growing role in the relationship between license holders and MVPDs.


Disney Offers Explanations For DTC & Other Initiatives

Robert Iger, chairman & CEO, The Walt Disney Co.

Licensing Issues May Delay but Won’t Stop Replication in U.S. of Strategy Already Underway in Europe

January 15, 2016 – With wind in its sails driven by solid financial results, The Walt Disney Co. once again is taking the lead in exploring new opportunities to build audiences via new technologies, this time through development of direct-to-consumer bundles and affiliations with third-party “skinny” bundlers. Speaking with Wall St. analysts during the company’s fiscal fourth-quarter earnings call in early November, chairman and CEO Robert Iger offered his most expansive explanation yet as to the thinking that is propelling the company in these new directions.

As previously reported, Disney has been consolidating post-production workflows into a highly automated platform for managing and distributing its ABC television assets as part of the framework that’s essential to managing assets for DTC and other services. Now those services are coming into view with launch of the UK DTC DisneyLife service and the licensing of content to Sony’s Vue.

In the following excerpts from the November 5 teleconference, Iger and Disney COO Thomas Staggs answer analysts’ questions about how they intend to balance preservation of the lucrative pay TV distribution model with the need to reach younger audiences through other means. They address concerns that affiliations with over-the-top SVOD providers might conflict with their own DTC aspirations. And they suggest what Disney would like to see from MVPDs to help sustain consumer interest in the bundle and, with it, Disney’s long-term commitment to that distribution model.

Robert Iger, chairman & CEO, The Walt Disney Co. – One of our greatest strengths is our willingness to embrace change and adapt to new trends and technology to create extraordinary experiences that are relevant to consumers. That’s especially true in the media space. From our perspective, there are three key elements that are essential for success in media today.

First, you have to have a quality product, preferably high-quality branded product. Next, you need to create a fantastic user experience with incredible interface navigation. You have to make the service easy-to-use and the content easy-to-find. And the third essential element is mobility. Consumers now dictate where they want to access media and it’s essential for legacy distributors to crack the mobile code. The demand for great content is stronger than ever, but consumers are demanding a better user experience and they’re migrating to platforms and services that deliver it.

Because of our great brands and franchises, we’re uniquely positioned to use new platforms to reach more people and to do so in more compelling ways. And we intend to use these platforms to augment distribution and connect with consumers more directly.

And DisneyLife, our new direct-to-consumer service in the UK, is a perfect example. Launching later this month, DisneyLife will give users unprecedented access to the vast universe of Disney storytelling including hundreds of movies and thousands of TV episodes as well as music, books and more. It delivers a great experience with an incredible degree of personalization including the ability to watch and read in several different languages. The Disney branded content is fantastic, the user-friendly interface is very interactive and intuitive and it’s designed to be mobile with apps for iOS and Android. So, we’re three for three.

We’re very proud of this product, it definitely speaks to where we’re going as a company, and we see opportunities to grow the concept across other markets and perhaps other brands in the future. We’re proud of our results this year and believe our continued strong performance validates our long-term strategy to drive growth and shareholder value.

Alexia Quadrani, managing director, J.P. Morgan Securities, LLC – Disney recently made several new announcements on over-the-top with DisneyLife in the UK and I think today with Sony Vue. Do you view these as potentially additive financially to Disney, or more filling a potential hole for subscriber losses over time?  

Iger – We are looking at a number of new opportunities to distribute our content. I guess there are different ways you could look at it. We believe that we’re seizing these opportunities to augment what is still a primary form of us delivering content to consumers on the TV side, and that’s the multichannel package through MVPDs.

DisneyLife is a direct-to-consumer proposition that we’re launching in the UK in a few weeks. We’ve already commented about it publicly. But to reiterate, it’s essentially an app experience, which we like because the app experience tends to be far more rich and textured and a better user interface, which is really important.

It has hundreds of movies, thousands of TV shows, songs, books and games, and we like the concept from a consumer proposition perspective. We believe there will be other opportunities outside of the UK, and we believe there will be other opportunities like that for some of our other brands, but we’re not rushing into that right now.

I also think what you’re seeing in the multichannel environment is you’re probably seeing some attrition, which is always the case, from current large-bundle subscribers. Most of that is due to economic factors. At the same time, we think it’s possible that young people are not signing up as quickly as they once did. And we think that points to a pretty interesting dynamic, and that relates to cost and user experience. And we think that the primary ingredients for success in media today [are], one, you’ve got to have great product – we certainly have that; the user interface has to be great, and that means easy-to-find, easy-to-use product. And we challenge all the legacy distributors to deliver that, because we think that could be one of the factors in terms of why young people aren’t signing up.

And the third thing is mobility. People want to watch these channels and these programs on mobile devices. And, frankly, the experience, when you try to do that as a multichannel subscriber, is not as easy or as good as it needs to be. So they’re being challenged by these new entrants, and we’re seizing the opportunity to basically distribute our content with these new entrants, because we think they deliver better user experiences. And the price-to-value relationship, particularly for some of the smaller bundles that are out there, also tends to be attractive to younger people.

So we’re going to continue to look for opportunities. We like the trend. We think that the more the merrier in terms of new platforms. And it’s clear also, and Sony Vue is a great example of that, that these platforms cannot launch successfully without the array of channels that we provide. And they came to us to negotiate a deal, because it was clear the product that they had launched was not penetrating the marketplace as much as I think they expected, and they needed ESPN, Disney and ABC.

Jessica Reif Cohen, first VP & managing director, Merrill Lynch – [O]n Hulu, you mentioned in the press release and actually in the call that there were losses in fiscal 2015 and further investment in fiscal 2016. I was wondering if you could, one, quantify it and, two, talk about how Hulu strategically can benefit Disney over coming years.

Thomas Staggs, COO, The Walt Disney Co. – So, Jessica, let me talk about Hulu. I’ll leave it to them to speak of the specifics of their investment, but it’s true that Hulu has and is going to continue to step up their investment in both acquiring and producing original programming and programming from others, and that will continue to increase their losses in the near term. We believe it’s going to create value over time, and we think there’s value in them strengthening their offering. And furthermore, the market is big enough for them and others to thrive. So we feel good about where they’re going strategically.

Anthony DiClemente, managing director, Nomura Securities International, Inc. – [I]n light of the conversation that’s topical about retaining SVOD rights, ABC sold “How to Get Away with Murder” to Netflix recently. And I’m just hoping you can walk us through your latest thinking around the SVOD marketplace.

Iger – Regarding the decision we made on “How to Get Away with Murder” and the relationship with Netflix, we’ve had a good relationship with Netflix. They’ve been extremely aggressive buyers of our content. A movie deal kicks in starting in 2016, and they’ve bought both original programming, the Marvel program is a good example, and a lot of off-network programming from us. And those decisions were all made to monetize our content at the highest levels.

In fact, as I’ve said, we’ve never seen greater demand for our content than we’re seeing today. I think it’s really important for us – I can’t speak for the whole industry – to maintain flexibility, because it’s a dynamic marketplace; it continues to change. And that essentially means that when we made these decisions to sell these shows to Netflix, those decisions made the most sense for us in terms of the economics.

Longer term it’s possible that we’ll make different decisions based on other factors, and it is possible that off-network programs will end up either being bundled with our multichannel services or as part of apps that the company brings out to sell directly to customers as is the case with DisneyLife. Most important thing for us in this is flexibility, and that’s what we have maintained and what we’ll continue to maintain.

Doug Mitchelson, equity analyst, media sector, UBS Securities, LLC – Bob, I was hoping you could give further details of the strategic rationale for DisneyLife, because, as you said, it speaks to where the company is going. I know you talked about it, but why the UK rather than the U.S., and how did you arrive at that price point? How much overlap, if any, is there with the content that you are distributing on your networks through the pay-TV platform in the UK?

Iger – We’re launching in the UK for a variety of reasons. One, it’s a strong Disney market – huge affinity for the Disney brand – and so we wanted to test this in the market where there was already strong brand affinity.

Second, we looked carefully at what was available to us to put on this platform, because we obviously have to factor in – speaking of flexibility, by the way – deals that already exist that encumber us from getting access to the product to sell on our own. And we had an opportunity to bring this to market in a manner that worked in relation to the product available and the deal that we have mostly with Sky in the UK.

And we’re doing this, because when you look at technology today – and I know that Apple referred to this when it announced its Apple TV product – the app experience is – I’ll call it 3D experience versus the 2D experience that linear television offers, because you can go to an app and you can choose from a menu of multiple choices in terms of media and you can essentially use it in ways that we think are far more compelling and typically a better user experience.

And I mentioned that earlier. I can’t emphasize that enough. I think today’s consumer when they’re faced with a user experience that is sub-par, where they just can’t find anything, they can’t navigate things or they find them and they just don’t work well, you don’t keep a consumer.

We’re also very interested in taking product directly to consumers as a company. If you look at the profile of this company, outside of the Disney stores and our theme park business, the customers that buy Disney typically buy through third parties. There’s nothing wrong with that, by the way. We do great business with movie theater chains and big-box retailers and multichannel distributors. But, given the way the world is and what technology makes available, and given the passion that our customers have for our brands – Marvel, Disney, Pixar, ESPN, Star Wars – we have an opportunity to reach the consumer directly in ways that our competitors can’t come close to doing.

So it is a competitive advantage and it is an opportunity that technology is providing us. And it’s something ultimately that you’ll see more of, both from a consumer proposition perspective and from a company perspective.

The pricing, we debated a lot about. We’re going to market with a price that we think reflects the value of the product that we’re offering and the experience that we’re offering. I can’t tell you that we’re absolutely certain it’s the right price. It’s the price that we decided to take it out to the market with. And what we do know is we’ve created an elegant product. It’s a really good user experience. We love the look, the feel and the navigation, and we’ll see. It also has the ability to provide the content in multi-languages, for instance. So you can imagine the opportunities in terms of other markets both in Europe and the rest of the world. And it is also technology we can use for those other brands.

David Bank, managing director, RBC Capital Markets LLC – It seems that, Bob, in particular, you have always stood out as kind of a thought and action leader with respect to these new distribution technologies. You just kind of termed it flexibility, I think: the first live streaming cable app, the first streaming broadcast network app, the first iTunes episodic content deal, probably the first to sign a light domestic streaming cable product. Now the first to do a direct-to-consumer branded subscription product in the UK. So, clearly, you’re comfortable with this.

I guess the question is, does thought leadership and flexibility, though, mean at some point you kind of have the ability to move in the opposite direction? You are making more money, and you’re not breaking up the bundle. Do you think we are at the point where the sale of SVOD content, given the way it fragments the audience, could ultimately be damaging the ecosystem such that you might use flexibility to pull content back from that kind of platform?

Iger – Well, I think you were complimenting me, and I appreciate that. I’ll comment on that and I’ll comment on the crux of this matter or the question. I’m struck with something that Steve Jobs once said when he was asked about technology and how he developed it. And he said that he starts with the consumer, and he works backward to technology. And we’re actually doing that, too. We’re thinking about the consumer and the consumer today is a different consumer than before. They don’t just want to sit in the living room on a couch and watch our product on a fixed screen on the wall with a remote control in their hand. They want to do it in many more ways, and they have the authority, thanks to technology, to make those decisions.

So, we’re starting with what we believe the consumer wants. Every one of those examples that you used, which sounds like a strategic initiative for the company, is actually a strategic initiative for the consumer. It’s really that simple. This is a company that has been consumer-facing from the start. And I think, actually, when you look at what we do at our parks it’s a great example of that. All of these things that we’re doing, we’re trying to take a very, very expansive look at the consumer today and where that consumer is going, particularly the younger consumer.

So, second, as it relates to, well, maybe in a strange way, flexibility, all of these things are designed to obviously create product that we think ultimately benefits the shareholders of The Walt Disney Company, but also to learn from. And if we take product out that may work at a given time but long term have a negative impact on businesses or business models that can continue to create more value for us, then we’ll retract and we’ll cut back.

We’ve got a lot of good things going for us. And, in reality, the multichannel business model, while facing more competition than ever before, is still a huge driver of value for this company. What we ultimately would want to do would be to do whatever we can working with the distributors to make that product more compelling, more consumer-facing than it ever has been. That’s really the biggest goal.

As it relates to SVOD and those things, I think we could get to a point where off-network programs, both in-season and prior-season, are bundled with the channels that the programs were originally on, and that may be a feature that’s offered to multi-channel subscribers that’s designed as a means to perpetuate that business model or make that business model more consumer-friendly. So I’m not going to suggest that we’re set in our ways or stuck in the business model of the future to the point where it damages the former business model in ways that either aren’t necessary or that are premature.

Barton Crockett, SVP & senior research analyst, FBR Capital Markets & Co. – What I’d like to follow up on is the outlook for the declines in pay-TV subscriptions. So, on the last call, you acknowledged that there is a modicum of decline, whether it is one percent or two percent. Something like that seems to be the industry numbers. But it would seem to me that that decline is unlikely to go on in perpetuity, that there is a sticking point. There is a group of people who won’t, are not likely to cancel pay-TV, because, among other things, they like sports, and they are avid sports fans. And you can’t get that very easily outside of pay-TV, and other reasons, maybe economic reasons, are driving the cancellations. Have you guys done research that would suggest to you where these declines in subs might end? You know, if we’re down one percent for a couple of years, does it stop then? What are your learnings telling you about where the sticking point is on declines?

Staggs – We’re not going to make any predictions about what’s going to exactly happen with the subs. At the end of the day, we talked about the subs in the terms that we’ve had, and generally what we’re seeing is consistent with what we talked about last quarter.

Having said that, as Bob indicated earlier, this market is going to continue to evolve, and one of the things that we’re going to see is that there’s room for optimizing the bundle of programming that people receive. There are opportunities to increase the price-value relationship as well as the user experience of those, and as we indicated earlier, the program availability will likely be augmented over time. Bob talked about the possibility of in-season stacking being offered through the programming services that we offer.

So, I think you’re going to see the bundle continue to evolve, and I think you’re going to see the industry, programmers and distributors alike, respond to the consumer empowerment with increasingly strong product.

At the end of the day, we feel really good about the positioning of our major branded services – ESPN, Disney, ABC – to play a vital role in that optimization of the bundle. So, I think that this is going to play out over a long period of time. And for a very, very long period of time you’re going to see that bundle be the primary means by which people get their programming.

 


Quality Assurance Looms as Big Factor in OTT Biz Case

Kurt Michel, senior director, marketing, IneoQuest Technologies

IneoQuest Prepares to Introduce Solutions that Could Lead to a QA Currency


By Fred Dawson

December 2, 2015 – It looks like broadcasters and programming bundlers seeking to generate new revenues from online streaming will soon be able to attain the essential level of quality assurance that has long been the exclusive purview of network owners.

That’s a bold statement considering the hurdles that must be cleared to achieve end-to-end visibility into all the points where something can go wrong as the ABR (adaptive bitrate) “chunks” comprising each content session make their way over divergent paths to the end user. But IneoQuest Technologies, a leading supplier of service assurance solutions in the MVPD sector, says it can be done.

In the months ahead, IneoQuest intends to roll out a portfolio of products designed to help OTT providers minimize performance glitches that are endemic to reliance on online business models, says Kurt Michel, senior director of marketing at IneoQuest. “There’s a lot of attention being paid to developing the advertising currency for monetizing OTT, but there’s not a lot of attention being paid to how you achieve the quality assurance that was a given with broadcast TV,” Michel says. “If you want to build a real business on unmanaged infrastructure, that’s essential.”

IneoQuest is not alone in tackling the OTT quality assurance problem. For example, as previously reported, Tektronix has addressed the new quality control (QC) requirements faced by suppliers of file-based on-demand content to the proliferating ecosystem of OTT outlets. The Tektronix post-production solution set includes a highly automated QC platform along with a multi-protocol playback tool enabling highly granular monitoring of files’ conformance to OTT distributors’ ABR and other specifications.

And other entities, such as CDN operator Akamai, where Michel was employed before moving to IneoQuest, and CDN technology supplier Edgeware, are providing insight into what’s happening at edge points where they have a presence. But, as Michel notes, there needs to be a solution that looks at and analyzes problems as they occur on a more ubiquitous basis, with the analytics clout to immediately pinpoint issues by triangulating data gleaned from origins, edges and end devices everywhere.

To get there IneoQuest is applying the expertise it developed for enabling telcos to achieve TV service-caliber quality assurance with packet-based IPTV technology. There the challenge was to aggregate data from the managed network and IPTV set-tops using advanced analytics tools to provide an end-to-end view of performance accessible to all stakeholders, including engineers, field technicians, CSRs and operations managers.

“Now, in the OTT space, we’re looking at different places in the network, as we did in the managed network, measuring at the headend or points of content origin and storage, the quality going in and coming out of the CDNs and measuring with software integrated into the app at the client device,” Michel says. “We can measure all those and correlate them in real time.”

But it’s hard to do in the unmanaged network environment. “In the packet network there are lots of different packets sent out over separate paths and recompiled on a packet-by-packet basis to create a continuous stream of video on the end user’s device,” he notes. “So trying to figure out what’s wrong is a complicated process.”

Collecting all the data needed to perform the analytics that can locate where a problem is occurring and how it is impacting the final end user experience is a multifaceted process. The solution entails pulling data from a combination of appliance-based devices, virtualized cloud monitoring points and quality monitoring-enabled media players, Michel says.
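A minimal sketch of what that correlation step might look like follows; the data model, metric names and fault logic are illustrative assumptions, not IneoQuest's implementation. The idea is simply that per-segment samples from the origin, the CDN edge and the client are lined up so the first point in the chain showing trouble identifies where a fault was introduced.

```python
"""Illustrative only: correlate per-segment quality samples from three
vantage points (origin, CDN edge, client) to guess where a problem starts.
Metric names, thresholds and data shapes are hypothetical."""
from dataclasses import dataclass

@dataclass
class SegmentSample:
    segment_id: str
    vantage: str        # "origin", "edge" or "client"
    bitrate_kbps: int   # bitrate of the delivered ABR rendition
    error: bool         # checksum/decode/HTTP error observed at this point

def localize_fault(samples: list[SegmentSample]) -> dict[str, str]:
    """Return, per segment, the first vantage point at which an error appears."""
    order = ["origin", "edge", "client"]
    by_segment: dict[str, dict[str, SegmentSample]] = {}
    for s in samples:
        by_segment.setdefault(s.segment_id, {})[s.vantage] = s
    verdicts = {}
    for seg, points in by_segment.items():
        fault = "none"
        for vantage in order:
            sample = points.get(vantage)
            if sample and sample.error:
                fault = vantage      # earliest point in the chain showing trouble
                break
        verdicts[seg] = fault
    return verdicts

# Example: the segment is clean at the origin but broken at the edge and client,
# suggesting a CDN-side issue rather than an encoding or last-mile problem.
samples = [
    SegmentSample("seg-42", "origin", 3000, False),
    SegmentSample("seg-42", "edge",   3000, True),
    SegmentSample("seg-42", "client",  800, True),
]
print(localize_fault(samples))   # {'seg-42': 'edge'}
```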

Virtualization is crucial. “We’re doing it through NFV (network function virtualization) – virtual probes and virtual monitoring and correlation end to end,” Michel says. “We’re taking all the things managed network providers have always done with physical appliances and turning them into virtualized packages that can be deployed on the public cloud.”

To drive data to those virtual modules IneoQuest envisions that once content distributors have engaged with the IneoQuest cloud platform they will have the leverage to get their CDN affiliates and other suppliers to cooperate. “They will be able to say, ‘I want you to apply an API that allows me to access real quality metrics so I can see in real time what the content is doing,’” he says.

Another approach will entail use of passive and active monitoring. By passive, Michel means directing test streams to a single location in order to sample performance. Active monitoring entails use of robotic playback players in strategic locations that can monitor performance on a live stream.

However it’s done, the OTT distributor will be able to discern how much buffering is happening on a given stream, how bandwidth availability at different points is affecting the level of throughput chosen by the ABR process, whether ads are rendering where they’re supposed to in the program and all the other parameters that go into determining whether performance meets viewers’ and advertisers’ expectations. Using client software to monitor viewer behavior such as tuning out or channel switching will allow operations managers to correlate what’s happening upstream with viewer behavior to determine whether poor performance is producing negative audience results.

“We can do that and give power to the content owners,” Michel says. “Then I think we can get focused on improving quality.”
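As a rough illustration of the client-side piece of that picture (the field names and thresholds below are invented, not a published IneoQuest schema), a player-integrated monitor might report per-session metrics like these, letting an operations team check whether tune-outs coincide with poor delivery:

```python
"""Hypothetical client-side QoE beacon and a naive abandonment check.
Field names and thresholds are illustrative, not any vendor's schema."""
from dataclasses import dataclass

@dataclass
class QoEBeacon:
    session_id: str
    buffering_seconds: float     # total stall time so far
    selected_bitrate_kbps: int   # rendition currently chosen by the ABR logic
    ads_rendered: int            # ad slots that actually played
    ads_expected: int            # ad slots scheduled for this point in the stream
    abandoned: bool              # viewer tuned out or switched away

def poor_quality(b: QoEBeacon) -> bool:
    """Crude per-session quality test: heavy buffering or missing ad renders."""
    return b.buffering_seconds > 10 or b.ads_rendered < b.ads_expected

def quality_linked_abandonment(beacons: list[QoEBeacon]) -> float:
    """Fraction of abandoned sessions that also showed poor delivery --
    a first hint that tune-outs are driven upstream rather than by content."""
    abandoned = [b for b in beacons if b.abandoned]
    if not abandoned:
        return 0.0
    return sum(poor_quality(b) for b in abandoned) / len(abandoned)

beacons = [
    QoEBeacon("a", 14.0, 800, 1, 2, True),    # stalled, lost an ad, viewer left
    QoEBeacon("b", 0.5, 4500, 2, 2, False),   # healthy session
    QoEBeacon("c", 2.0, 3000, 2, 2, True),    # left, but delivery looked fine
]
print(f"{quality_linked_abandonment(beacons):.0%} of abandoned sessions had delivery problems")
```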

This, of course, raises the question of what can be done by the content owner to drive proactive action at points the distributor doesn’t control. Here the issue goes beyond what any one provider can do with individual suppliers to the larger question of ecosystem-wide participation in establishing benchmarks for performance and means of adjusting to meet those benchmarks.

“There needs to be third-party quality validation based on an established currency just as we have with Nielsen validation of ad viewing,” Michel says. “Quality is another currency that the OTT business has to be built on.”

As IneoQuest brings its solutions to the OTT market, the players will have what they need to begin establishing that currency. Such a currency would assign value to viewer behavior as measured against the benchmarks and then use that measure to categorize the quality and, ultimately, the value of the content stream.
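One simplistic way to picture such a currency is sketched below; the weights, benchmark values and grade bands are invented purely for illustration. It folds measured delivery quality and audience response into a single comparable per-stream score:

```python
# Sketch of a stream-level "quality currency" score. The benchmark values,
# weights and grade bands below are invented for illustration only.
def quality_score(rebuffer_ratio: float, avg_bitrate_kbps: float,
                  completion_rate: float,
                  target_bitrate_kbps: float = 4000.0) -> float:
    """Blend delivery and audience metrics into a 0-100 score."""
    delivery = max(0.0, 1.0 - rebuffer_ratio * 10)           # penalize stalls heavily
    fidelity = min(1.0, avg_bitrate_kbps / target_bitrate_kbps)
    audience = completion_rate                                # share of viewers finishing
    return round(100 * (0.4 * delivery + 0.3 * fidelity + 0.3 * audience), 1)

def grade(score: float) -> str:
    return "premium" if score >= 85 else "standard" if score >= 60 else "substandard"

s = quality_score(rebuffer_ratio=0.02, avg_bitrate_kbps=3200, completion_rate=0.9)
print(s, grade(s))   # 83.0 standard
```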

There’s a long way to go before such standards come into view. But opening the door to getting there is a major step forward.


Disney/ABC Illuminates Vision For Transition to IP Operations

Mike Strein, director, television & workflow strategy, ABC TV

Executives Discuss How Virtual Master Control Facilitates New Business Strategies

Disney/ABC Television Group made headlines at the NAB Show in April with its decision to convert its entire post-production operation to IP-based “virtual master control” using technology supplied by Imagine Communications. The project has become an industry bellwether as to the feasibility of moving to software-defined virtualization.

By all accounts, the so-called “Columbus Project” requires a lot of work and tight cooperation between the vendor and its client to accomplish such a massive transition. The commitment reflects how essential the project is to Disney/ABC’s ability to deliver on new business opportunities, including using direct-to-consumer OTT distribution to better engage consumers with its brands and make better use of its assets to develop niche “channels.”

At IBC in September key participants in the project publicly shared their views of what the transition to all-IP, IT-centric operations is all about. In the transcribed discussion that follows, Mike Strein, director of television and workflow strategy for ABC TV, Stephen Smith, CTO of cloud strategy at Imagine, and Tim Mendoza, vice president of product development for the playout family of Imagine products, explain the scope of the project and its ramifications for the broadcaster’s operations and business development.

Tim Mendoza – The future of virtualized master control is a tremendous topic. We started talking about it two years ago, and now we’re well on our way down the path of virtualizing our solutions, starting with VersioCloud. Mike, as one of our anchor clients at ABC in New York, what is virtualized master control?

Mike Strein – Master control is different at a TV network than it is at a station. At a station it’s a 24-hour operation continually running lists and inserting commercials and that sort of thing. At a TV network like us where we deliver to stations, it’s a time zone management sort of thing. We’ll do it by day part. We’ll run automation lists, which are all Imagine, formerly Harris, products, and insert the commercials on a time zone and day part basis.

Mendoza – How many cities is ABC in?

Strein – We have over 220 affiliate stations. We have a number of time zone feeds that we’re continually updating, fixing things as news gets updated and what not.

Mendoza – Project Columbus is what we’re working on. Your current infrastructure is traditional.

Strein – Columbus is the redesign of our TV network – a complete rebuild, a lot of it using Imagine’s virtualized processing.

Mendoza – I would say your master control is one of the hardest case studies of any master control on the planet. It includes every type of workflow.

Strein – Needlessly complex.

Mendoza – Steve, you’ve been here since day one of our virtualized strategy. What do you think virtualized master control holds for us?

Steve Smith – It’s really the aggregation of all the parts and pieces in a transmission chain. We’ve had that notion of integrated channel playout for quite some time now. But [virtualized master control] not only encompasses the graphics, branding, playback and automation aspects of the chain; there’s a lot more that happens upstream and downstream of playout: audio shuffling, live caption insertion, loudness normalization, [ATSC] encoding, decoding, Dolby E management. All the discrete parts of your workflow today that are solved traditionally with modular gear need to get virtualized and aggregated into this platform as well.

So when we’re talking about virtualized master control, we’re really talking about that complete origination chain being able to interact with inbound live feeds, interact with file-based content, insert commercials, do the live captioning as those feeds come through. So everything you use to work within a very dynamic and live or linear environment, all done in software.
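A hedged sketch of what “all done in software” can mean in practice appears below; the stage names and data model are illustrative, not the Versio/VersioCloud product interfaces. The origination chain becomes an ordered list of software functions rather than a rack of discrete devices, so adding or dropping a stage is a configuration change rather than a recabling job.

```python
# Illustrative only: a channel origination chain expressed as data plus
# interchangeable software stages. Stage names and structure are hypothetical.
from typing import Callable

Frame = dict  # stand-in for a video/audio/metadata unit flowing through the chain

def playout(frame: Frame) -> Frame:        frame["played"] = True; return frame
def branding(frame: Frame) -> Frame:       frame["bug"] = "network logo"; return frame
def captions(frame: Frame) -> Frame:       frame["cc"] = "live caption text"; return frame
def audio_shuffle(frame: Frame) -> Frame:  frame["audio"] = "stereo+SAP"; return frame
def encode(frame: Frame) -> Frame:         frame["encoded"] = "H.264 TS"; return frame

CHANNEL_CHAIN: list[Callable[[Frame], Frame]] = [
    playout, branding, captions, audio_shuffle, encode,
]

def originate(frame: Frame, chain: list[Callable[[Frame], Frame]]) -> Frame:
    """Run one unit of content through every stage of the software chain."""
    for stage in chain:
        frame = stage(frame)
    return frame

print(originate({"asset": "promo-001"}, CHANNEL_CHAIN))
```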

Strein – And I might want to say we’re really tackling the non-live component first. Live is really the most complex part of that. So the automated playout of content, ingest of commercials and replay of that sort of content is much simpler to do in a virtual environment where you may have that virtual environment not co-located with your broadcast facility.

Mendoza – You brought up a really good point. It doesn’t have to be co-located with the broadcast facility. In the current, traditional days you have miles and miles of coax cable in your NOC. Now you can be hundreds of miles away. In fact, ABC is running 650 miles away.

Strein – We have two facilities. One is 700 miles away. The other is a backup facility in Las Vegas, and that’s 2,000 miles away. It’s kind of scary when you think about it, but it’s why you have multiple instances.

Mendoza – Do you own the facilities or is it an outsourced operation?

Strein – We own the one in North Carolina, which is 700 miles away. The Las Vegas site is a leased facility. That is owned by The Walt Disney Co., not ABC.

Smith – The relationship there is almost like you are contracting with a third party for cloud.

Strein – Yes.

Mendoza – Steve, do you see a lot of clients actually contracting with third parties, or are they doing it themselves in-house?

Smith – I think it’s a progression where, to begin with, to get your hands dirty with the technology and get a sense of what it means to move into a virtualized IP environment, customers kind of want to experiment with it internally just to understand what it is. They look for low-hanging fruit, things that are low risk but have value within the organization to try and cut their teeth on.

A simple one is DR (disaster recovery). You hope to never actually have to use it. So to start playing with the technology in that space is sort of an easy stepping-off point. But really the value in trying to move to software-defined workflow and cloud enablement is to get out of the physical plant that you own and to get into an infrastructure that is actually elastic, that you can scale up and scale down and take advantage of the resources that have been deployed not just for your broadcast day in, day out but for other business systems as well.

Strein – I think one of the key aspects to virtualizing is it can be co-located, but it doesn’t have to be. If you want to move it out, if you feel brave enough to do that, you can do that. But you can also build entirely within your own plant.

Mendoza – Currently you have eight feeds. Does virtualized master control give you the ability to increase those feeds pretty much immediately and take them down pretty much immediately?

Strein – Potentially, yes. You have the ability to spin up more instances. Our satellite distribution has not yet moved out of house, so if it goes to an off-site facility, it has to come back in to distribute it again. But that doesn’t mean that doesn’t happen sometime in the future as well.

Mendoza – So master control is not only with regard to playout but it also includes a whole slew of other workflows in the chain. One of them is graphics. Can you explain how Disney/ABC is working on the Brandnet project and what exactly that is?

Strein – Brandnet is our affiliate-based graphic-insertion model. We’ve had a couple versions of this. Brandnet 1 we put in place in 2005. Sort of ahead of the game a little bit. At that time it was a Harris IconStation platform, which sits at stations. We file distribute the content to all those boxes, and we in-band trigger the localized content, whether it’s time and temperature, news crawls, any sort of localized content for the stations. The Brandnet 2.0 is a replacement product for that based on the same Versio product we’re putting in Columbus and other areas within the company.

Smith – The Brandnet platform lets you keep that local feel in region while being able to control it from a central location.

Strein – Absolutely.

Mendoza – Why don’t you go through how other graphics can be [part of] a total easy workflow graphics solution [that] flows into a virtualized environment.

Strein – We’re going to be doing almost all our graphics in a nearly live environment. Graphic devices have traditionally been very expensive flashy devices that would sit downstream of playout. The limitations to what you could do were based on the amount of processing power available in that device.

If you look at how the data is actually injected into those graphic templates, most of the time it’s already available in the schedule and you’re pulling information from an RSS feed. But it’s not really that timely; it’s not frame accurate – not as if it’s being generated at the point where a frame is originated. So you can actually create these graphics in time or just in time, rendering them a few seconds ahead of time.

And if we take a backend approach we can actually farm the graphics out and use an After Effects render farm to render much more complex graphics with data injected very close to air. So you’re removing the constraint of how much processing power you actually need to do this, because it doesn’t have to be done in real time. I can do it slower than real time. I can do it faster than real time. And then key that element in downstream.
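A minimal sketch of that just-in-time approach follows; the feed, template name and five-second lead time are assumptions, not ABC's actual workflow. Because the data feeding a template is known from the schedule or a news feed, each graphic can be rendered on commodity compute shortly before its air time and keyed in downstream.

```python
"""Illustrative just-in-time graphics rendering loop. Template names, the
data feed and the 5-second lead time are assumptions."""
import time
from datetime import datetime, timedelta

def fetch_feed_data() -> dict:
    # Stand-in for pulling time/temperature or headline data from an RSS feed.
    return {"headline": "Local news crawl text", "temp_f": 41}

def render_template(template: str, data: dict) -> str:
    # Stand-in for a render-farm job; returns an identifier for the finished graphic.
    return f"rendered:{template}:{data['headline'][:20]}"

def schedule_graphics(air_times: list[datetime], lead_seconds: int = 5) -> list[str]:
    """Render each scheduled graphic shortly before air rather than in real time."""
    rendered = []
    for air_at in sorted(air_times):
        wait = (air_at - timedelta(seconds=lead_seconds) - datetime.now()).total_seconds()
        if wait > 0:
            time.sleep(wait)                      # idle until just before air
        rendered.append(render_template("lower_third", fetch_feed_data()))
    return rendered

# Example: two graphics due a few seconds from now.
now = datetime.now()
print(schedule_graphics([now + timedelta(seconds=6), now + timedelta(seconds=8)]))
```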

Strein – And as that processing power [requirement] increases, you don’t necessarily have to push the content to the boxes at the stations. You can just have pointers to the content, which may sit in the cloud or a hosted environment. It gives you a lot more power. The capability of updating things like election results and whatnot is sort of what we’re looking at for this next generation.

Smith – Agility is the word.

Mendoza – Also, it goes beyond graphics and playout. It goes upstream a little bit for schedules as well. In a virtualized environment you can have your virtualized scheduling solution, and the actual playlist of the schedule is the exact same playlist as is being played out. So there’s no more XEP translation between the two playlists, or going back one generation it would be a flat file between the scheduling system and the playlist.

We’re talking real-time API calls between the playout playlist and the scheduling playlist. And the schedule is completely virtualized in the cloud or it can be a private data center. It can be accessed in Android, tablets, Macs, whatever it is. With that and the playout in the virtualized environment you have a complete solution that can be either virtualized in [a Microsoft] Azure, [an HP] Helion or Cisco cloud, or you can have it on bare metal and hosted in your environment.
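To make that difference concrete, here is a hypothetical example (the endpoint path and payload are invented, not Imagine's API) of a late schedule change pushed straight into the playout playlist over a REST call rather than exported as a translated flat file:

```python
"""Hypothetical example of a scheduling change propagated to playout via a
REST call instead of a flat-file export. The URL and payload are invented."""
import json
from urllib import request

def push_playlist_change(base_url: str, channel: str, event: dict) -> int:
    """POST a single schedule event straight to the playout playlist API."""
    req = request.Request(
        url=f"{base_url}/channels/{channel}/playlist/events",
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:           # same playlist, no translation step
        return resp.status

# Example (would need a real endpoint to run against):
late_sale = {"event_id": "spot-9912", "start": "2016-01-18T19:59:30Z",
             "duration_s": 30, "asset": "commercial-ACME-030"}
# push_playlist_change("https://playout.example.internal", "ABC-East", late_sale)
```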

And then there’s the output with regard to this. Now it’s much more than just [a single] stream output. We’re dealing every day with OTT providers and other types of distribution means not only in the U.S. for ABC but also globally. So how does this environment help us push that out into an OTT type of world?

Smith – The first point is that as we change the wire away from a piece of coax into a piece of Ethernet we can consolidate the business functions that run on that device. We’re shrinking down the footprint of the infrastructure. And that’s where some of the cost savings start coming in.

Once we get rid of that piece of coax, we’re no longer constrained to the aspect ratio of the frame size or the frame rate that SDI imposes on us. So from a single platform I can deliver a transport stream that can go to traditional free-to-air. I can target a cable plant. I can go to an earth station for satellite uplink. I can create adaptive bitrate streams for an OTT service. It’s all the same software platform, the same product, just configured to deliver different streaming outputs.
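A small illustrative configuration table (the profile names and parameters are assumptions, not a product spec) makes the same point: one channel definition, several delivery targets, selected by configuration rather than by separate hardware chains.

```python
# Illustrative output-profile table: one channel, several delivery targets.
OUTPUT_PROFILES = {
    "broadcast_ts": {"container": "MPEG-TS", "codec": "H.264", "bitrate_kbps": 19000},
    "cable_feed":   {"container": "MPEG-TS", "codec": "H.264", "bitrate_kbps": 12000},
    "ott_abr": {
        "container": "fMP4/HLS",
        "ladder_kbps": [800, 1800, 3500, 6000],   # adaptive bitrate renditions
    },
}

def outputs_for(channel: str, targets: list[str]) -> dict:
    """Return the encoder configuration for each requested delivery target."""
    return {t: {"channel": channel, **OUTPUT_PROFILES[t]} for t in targets}

print(outputs_for("ABC-East", ["broadcast_ts", "ott_abr"]))
```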

The general message here is it’s not just about changing the wire. The step into the IP domain is the first step that has to be taken before you can get to software-defined workflow and cloud enablement. So if you make that transition into the IP domain, you need to stay focused on why you’re actually making it – to become more agile and be able to adapt to changes in business needs. I don’t think anybody thinks their business is going to be the same for the next ten years as it was for the last ten years. Our industry is changing, and your infrastructure needs to be able to adapt to those changes as they come.

Being in this software-defined world we can repurpose resources. You can sacrifice your transcode capability to launch new linear services; you can reduce linear service capability to do transcode or any other business function that may be in there. But it’s really around getting to the collective pool of resources that can be applied to many different business functions, instead of having this one business function tied to a single appliance.

Strein – Once your content is there you can repurpose it for whatever you want to do with it.

Mendoza – Everybody has a traditional type of plant. Is this something where you have to go head first into the pool, or can you phase it in over a timeframe?

Strein – I think it’s an overbuild, and you have to phase it in. At some point you throw a switch. For us it will be a challenge when to do that. And, as I said earlier, the live component is the difficult part. But not all of our day part is live. So we’ll focus on the things that are achievable at first and slowly bolt the live part in.

Smith – Another easy stepping off point to get into the IP and software-defined world is as infrastructure ages and needs to be replaced, you replace it with something that not only has traditional capabilities but actually will move forward into the IP and software domain as well. So you see that case of picking some point in the operation where you’re going to start making the transition to IP, and then it will just bubble out from there.

Trying to boil the ocean and replace everything at once is going to be cost prohibitive, and starting at either end is going to introduce additional costs as well, because you have to keep on-boarding and off-boarding between the IP world and the SDI world. So you just pick a point and let it grow from there.

Mendoza – IP standards are very important. Help me understand: some vendors out on the floor are proprietary, while we’ve decided to go with pure, standards-based IP solutions. What’s the difference?

Smith – You will find organizations that have decided that video on IP should still be treated as signal switching – that we’re still treating the IP plant as if it were a traditional heavy-iron baseband router and it’s just the wire that’s changing.

If you take that approach, you can’t ever actually get outside of your own datacenter. You can’t use public Internet or anybody else’s infrastructure to transition between sites. As soon as you take that step into a proprietary network infrastructure, you are tied into that one space.

The approach we’ve taken is to partner with Cisco, partner with Juniper, partner with companies that are in the IP space where it is their core business to manufacture this IP technology and then layer software across the top of that. That lets us transition to any one of these manufacturers’ products, and therefore you’re not tied to any specific domain. If you’re a Cisco house, great; if you’re an HP house, great. Stay with whatever it is you have internally. The key point is, do everything in software and leave the hardware layer out.

Mendoza – From my standpoint it would be ludicrous to go to my CEO and ask for millions of dollars for new hardware platforms and compete against the hundreds of billions of dollars that are being spent on R&D by the likes of HP, IBM, Cisco, Juniper Networks, Brocade. We have to be very locked up with the IP IT industry. And from a Disney standpoint I’d think that would be a major driver as well.

Strein – Certainly. We want to leverage the enormous growth in that market. The densities you can achieve with these types of products are amazing.

Smith – I picked the top six companies that play in the IP space, added up how much they spend in R&D every single year. That amount of money is bigger than the total amount of money spent in the broadcast industry every year on technology. Their R&D budgets are bigger than our spend as a global market.

Mendoza – Microsoft spends $4 billion a year just on the Azure infrastructure. With regard to workflow and change management, this is a big deal, not only from an IT standpoint but also from an operator standpoint. How is Disney/ABC trying to change and grow the professionals within your ranks to come to grips with what’s coming?

Strein – There’s a lot of training going on, certainly. The manual processes, manually translating a traffic log into an automation list, a lot of that process should hopefully go away.

Work doesn’t go away. It becomes a different environment. People are constantly tweaking and changing things. Our traffic and sales people will sell commercials up ‘til just before the show airs. So there’s constant updating. Hopefully it becomes simpler and we give our clients more flexibility. That’s the ultimate goal. Make a simpler process and make it easier to change.

Mendoza – What about broadcast engineers turning more into an IT type of world?

Strein – There’s no question. A number of years ago you looked for broadcast engineers with IT strengths. Now it’s almost the opposite. You look for IT engineers with broadcast strengths. I don’t know if it’s at that tipping point yet, but it’s getting there.

Mendoza – I think it is at the tipping point. We see around the world that customers who are holding onto the traditional world are just holding until retirement.

Strein – You don’t need to know everything. That’s what I tell people. You don’t need to know how to configure a switch. You don’t need to know how to configure a memory array. But you need to be able to communicate intelligently to the people who are doing it.

Smith – What about on the operational side, the people who monitor the channels and interact with them on a daily basis? How does this change for them?

Strein – I think what they have to do stays the same. What we’re doing is still the same business. They’re just going about it a different way. So it’s different tools, adapting to those different tools, reacting with the capabilities that are enabled with them.

Use of Watermarking against Piracy Kicks into Higher Gear Worldwide

Alex Terpstra, chairman, Civolution

SoC-Level Support, Cloud-Based Tracking Services Create Foundation for Effective Use of Vendor Solutions in OTT Environment

By Fred Dawson

October 19, 2015 – A more robust global response to online video piracy is finally coming into view, built on forensic watermarking technology, cloud-based support for rapid identification of thieves and growing cooperation among regional entities, content owners and service providers.

While implementation of watermarking has been widely associated with licensing movies for 4K Ultra HD distribution, broadcasters’ increasing reliance on the Internet to deliver on-demand and live programming has added another dimension to demand for the technology. “It’s about securing premium high-value content, and there’s plenty of premium high-value content available today that isn’t necessarily UHD,” says Steve Oetegenn, president of content security provider Verimatrix.

When it comes to thwarting the direct-from-screen recording of content for illicit distribution, the fastest growing type of piracy, watermarking is the only recourse, notes Rory O’Connor, vice president of managed services at Irdeto. “The only way to counter this kind of piracy is through a robust scheme that uses watermarks to identify illegal sources and follows up with enforcement action,” O’Connor says.

While there’s been some drop-off in the number of sites supporting downloads of illicit content, typically via BitTorrent, sites devoted to streaming purloined content and collecting ad revenues from automated online ad networks are on the rise. A recent study conducted by Digital Citizens Alliance and consulting firm MediaLink LLC reported the number of such sites worldwide had jumped by 40 percent between 2013 and 2014.

Live sports streaming has become an especially urgent area of focus for the use of watermarking. O’Connor points to the decision of the Barclays Premier League, England’s top soccer league, to engage Irdeto’s Piracy Control service as a bellwether development in this arena. The league is tapping Irdeto’s comprehensive detection, enforcement, watermarking and investigation capabilities to close down pirate sites and to inhibit illegal distribution of set-tops capable of receiving stolen content.

Verimatrix, which offers a comprehensive spectrum of forensic anti-piracy capabilities within its Video Content Authority System (VCAS) Ultra architecture, last month announced it was enhancing that support with a live profile for its VideoMark watermarking solution that is specifically designed to protect linear content against real-time re-broadcasting threats. “It is well known that re-broadcasting piracy is a growing threat to operator revenues, particularly with sports,” says Verimatrix CTO Petr Peterka. “The VideoMark live profile was developed as a better alternative to effectively identify marks and block leaks of live content in real time.”

The VideoMark live profile enables a flashing mark that is unobtrusive, yet quick to read and analyze, enabling direct interpretation from a mobile device and rapid implementation of steps to block illegal streams as they appear, Peterka notes. A distinguishing characteristic of the VideoMark live profile, he adds, is that it is embedded at the pixel level and can be secured using the Verimatrix proprietary algorithm operating within SoCs (systems-on-chips).

Indeed, the stepped-up efforts to combat use of screen-captured content to feed illicit sites dovetail with preparations to shore up SoC-level protection for 4K UHD content delivered to set-tops and directly to IP-connected 4K TV displays. These preparations include establishing chip-level support for watermarking in SoCs now being produced for next-generation set-tops and 4K TV displays.

“We’re available on all major chipset vendors natively with our VideoMark watermarking technology,” Oetegenn says. This support is part of new security mechanisms performed by walled-off processes within SoCs that include instantiations of hardware roots of trust as required by the Enhanced Content Protection specifications issued by MovieLabs, the tech organization founded by the six leading Hollywood studios.

“The operator has the peace of mind that if they buy a mainstream set-top box using a mainstream system-on-a-chip, that system will not only be pre-enabled for Verimatrix encryption and subscriber management, but also pre-enabled for forensic watermarking,” Oetegenn says. “So there’s no reason not to use [watermarking] anymore. It’s available today.”

“There’s definitely progress being made,” he adds. “We’re working with quite a few operators that are actually using the VideoMark product.”

Civolution, too, points to widespread SoC support for its NexGuard watermarking technology as a sign that watermarking is moving into the mainstream. SoC vendors that have publicly announced support for NexGuard include STMicroelectronics, Sigma Designs and Hisilicon Technologies.

The integration with Hisilicon, the latest chip manufacturer to publicly announce inclusion of support for NexGuard in its SoCs, ensures the watermark is present in video viewed on Hisilicon-supplied set-tops whether the content is captured from the screen or through a set-top video output, notes Jean-Philippe Plantevin, senior vice president of products and solutions at NexGuard.

“This will help operators in their content acquisition discussions with Hollywood studios and sports content owners and rights holders,” Plantevin says. “It also gives operators the flexibility to easily and securely activate forensic watermarking at the STB manufacturing facility or through secure over-the-air downloads in coordination with any conditional access or DRM technology.”

“Our business is accelerating with growing market traction in several important areas,” says Alex Terpstra, chairman of Civolution, which, after selling other lines of business, is focused on watermarking-related products and services provided through NexGuard. Network service providers as well as broadcasters are now taking action to deal with the threat posed by pirates’ ability to deliver high-quality video captured from big TV displays.

“The streaming and pay TV ecosystems have different incentives, but it comes down to similar needs for watermarking to address this kind of piracy,” Terpstra says. “Pay TV operators paying large sums of money for licensing rights lose revenues when the content they deliver is stolen this way.”

“We’re responding to a lot of RFPs from operators calling for watermarking support,” he adds. “We’ve signed a number of pay TV operators this year, including one of the largest in Europe and another large operator in the U.S., which we can’t name.”

A major factor in making use of watermarking more effective in tracking pirates is the emergence of cloud-based support for identifying stolen content and reading the embedded watermarks, which can be done in real time to enable action against live streams while events are in progress. The availability of such services from major content security firms helps to overcome one of the drawbacks of watermarking, which is the fact that every provider uses a proprietary forensic marking scheme which only they or an entity licensed by them can detect and read. Since there’s no inclination among stakeholders to create a single global reading and tracking system, the best alternative is to make it possible to quickly detect what’s happening within any vendor’s watermarking ecosystem through shared access to that ecosystem’s watermarked content and other data.
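
To make that workflow concrete, here is a deliberately generic Python sketch of the detection loop just described. The endpoint URL, response fields and helper functions are hypothetical placeholders, not the actual API of Verimatrix, Irdeto, Civolution or any other vendor.

    # Hypothetical sketch of the cloud-assisted watermark detection loop described above.
    # The endpoint, payload fields and helpers are illustrative placeholders only.
    import requests

    DETECTION_API = "https://watermark-vendor.example/detect"  # placeholder URL

    def lookup_session(payload: str) -> str:
        """Operator-side lookup mapping a recovered mark to a session or device (stub)."""
        return f"subscriber-for-{payload}"

    def issue_takedown(source_url: str) -> None:
        """Stand-in for a real enforcement action (host notice, ad-network alert, etc.)."""
        print(f"takedown notice sent for {source_url}")

    def handle_suspect_stream(sample: bytes, source_url: str) -> None:
        """Send a captured sample to the vendor's detector; if a mark is recovered,
        identify the leaking session and act while the stream is still live."""
        resp = requests.post(DETECTION_API, files={"sample": sample}, timeout=60)
        resp.raise_for_status()
        result = resp.json()
        if result.get("mark_found"):
            leaker = lookup_session(result["payload"])
            issue_takedown(source_url)
            print(f"leak traced to {leaker}")

The essential point is the round trip: only the vendor’s own service can recover its proprietary payload, but once it does, the operator can tie that payload back to a session or device and act while the illegal stream is still in progress.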

“If you can’t detect the watermark online and detect where it’s coming from, you can’t act fast enough to prevent some of the most damaging activities when it comes to limiting your ability to monetize your assets,” says Peter Oggel, vice president of products at Irdeto. “We’ve made watermarking service an integral part of our security life-cycle service.”

The service includes a menu of options customers can choose from to take action against pirates and users, such as warnings that content is being viewed illegally or take-down notices against repeat offenders, Oggel adds. The strength of Irdeto’s service rests in part on its global piracy monitoring operation, which is used by enforcement authorities, regional organizations and content distributors worldwide to gauge the scale of piracy and to identify illegal sites.

At Verimatrix, cloud-based support for tracking and reacting to pirated content that has been watermarked with VideoMark is one of the services offered through the company’s new Verspective Intelligence Center (VIC), a globally interconnected revenue security support engine designed to proactively address revenue threats utilizing information gathered from across the video distribution ecosystem. “Verspective will allow operators globally to opt in, connect into the VIC, and with that we’ll be able to provide a whole multitude of value-added services to our operators,” Oetegenn says.

To combat piracy “we enable real-time watermark detection with the monitoring of piracy and breach attempts,” he explains. “If we’re seeing footprints and patterns within our operator community that are telling us the bad guys are trying to break in, those activities can be very quickly combatted and eliminated before they spread.”

Such capabilities extend to thwarting the types of Periscope attacks that resulted in screen capture and illicit distribution of the Mayweather-Pacquiao fight on May 2 and other programming since Twitter bought the app company earlier this year. In such instances, stopping the streams could have a lasting impact on people’s willingness to rely on illegal sources for the content, Oetegenn notes.

In the case of the Mayweather-Pacquiao fight “it would have been possible to monitor it in real time as that content was being put on the Internet and define exactly which terminal device was being used to play out that content and shut it down,” he says. “Let’s imagine they shut an operation down just before the KO or just before the home run in the baseball game or just before the deciding goal in the soccer match. Ultimately people are going to start wandering away from this type of service and understanding it’s not legal in the first place. It’s not reliable enough to be viewed as true high-value premium content out on the Internet.”

Oetegenn notes his company’s recent acquisition of the video analytics business from Concurrent Computing will greatly enhance its ability to gather and analyze system performance and potential threats. “That analytics platform will be fully integrated with the VIC to the extent that now we’ll be able to have data from every single end device,” he says.

NexGuard has also been stressing the importance of utilizing its cloud service to expedite effective use of watermarking. By teaming with NexGuard, companies specializing in monitoring for piracy can “read our watermarks,” Terpstra says. “If one of our partners finds an illegal file, they can identify the watermarks and use that information to track down the pirates,” he adds.

For example, one such partner, MarkMonitor, which acquired sports anti-piracy specialist NetResult, does Internet monitoring for sports leagues. Another, Vobile, does anti-piracy work with the studios. “There’s a de facto ecosystem emerging around NexGuard,” Terpstra says.

The NexGuard ecosystem also includes providers of content security who do not have their own watermarking systems, such as NAGRA, Conax and Viaccess-Orca, he adds. NAGRA and Conax, which are partnering on content security solutions, are the latest additions to the NexGuard fold, allowing them to integrate and certify implementation of NexGuard in set-top boxes and to enable pay TV operators’ headend systems to control the application of watermarking.

The integration also brings into play use of NAGRA sister company Kudelski Security to add forensic monitoring, investigation and response services to the enhanced protection portfolio, notes NAGRA chief architect Philippe Stransky. “Our customers already trust us to provide device certification, lifetime device support, security countermeasures and anti-piracy services,” Stransky says. “With the addition of NexGuard, we demonstrate NAGRA’s capability to securely integrate and manage forensic watermarking for all types of content.”

New Solutions Add to Power Of Analytics to Drive TV Biz

Steven Canepa, GM, global media and entertainment industry, IBM

IBM, ContentWise, Edgeware Introduce Unprecedented Range of Capabilities

By Fred Dawson

October 12, 2015 – New responses to surging demand for advanced data analytics in the premium video OTT marketplace are coming from all directions, ranging from a major commitment to media and entertainment by a revamped cloud-focused IBM to innovative solutions from much smaller entities.

“Business analytics is the new battleground,” says Steven Canepa, general manager for the global media and entertainment industry at IBM. “The disruptors in this industry, the new emergent players in this industry, have data at the center of their business model. They’re going to compete in the marketplace by leveraging that data.”

In this environment the challenge has become competing for people’s time against an ever-expanding range of online options, which can only be done effectively through greater attention to each individual’s needs and interests, Canepa adds. “The notion that there are static, segmented sections of the audiences that you can appeal to with media programming is an outdated idea,” he says. “The question becomes, how do you appeal to those consumers on the terms that are interesting to them?”

With billions spent over the past few years on R&D and acquisitions, IBM has put itself in position to respond to the need for big data analytics across multiple industries with strong emphasis on public cloud-based services that seamlessly integrate with in-house facilities to apply a wide range of algorithmic solutions to specific needs. As Canepa’s title suggests, the video entertainment sector has become a major focus for IBM, which has teamed with Imagine Communications as a key market ally in its pursuit of business in this arena.

“The ability to get the distribution platform in place and the ability to marry with it the customer insights are the two fundamental things that any company has to do to get money, to get value out of these new OTT services,” Canepa says. “This is the future of the industry.”

Clearly, IBM is a force to be reckoned with in this scenario. But media companies and network distributors also have opportunities to look at much smaller entrants offering new approaches to using data to compete more effectively for eyeballs and ad dollars.

One of these is ContentWise, a Milan-based company with a significant customer base outside the U.S., which earlier this year announced it was allocating resources to focus on North America. ContentWise sees a significant opportunity for its approach to pulling together, analyzing and turning data from multiple sources into a compelling force for personalized engagement with end users, says CTO Pancrazio Auteri.

“Operators and content providers face enormous challenges today in handling even the simplest of metadata management tasks, and today’s technologies do not have the functionality to support them,” Auteri says. “Delivering true personalization and a superior user experience is only possible if the underlying data is rich, deep and understood by the personalization system.”

The unmet challenges, as Auteri sees them, include the need to “automate metadata processing, validate enriched data before it goes live, consolidate workflows and export clean and richer metadata to existing content management systems.”

According to Massimiliano Bolondi, senior vice president of sales at ContentWise, these are the capabilities ContentWise brings to the table through its Knowledge Factory data processing platform and other product components. These include predictive browsing that takes users to locations with content duration, type, structure and genre suited to their interests; creation of “pseudo-genres” personalized to each user, and a means of instantly reconfiguring and programming UI elements across all client platforms without having to rely on client upgrades.

The key is to be able to leverage as many data sources as possible with fully automated processes that offer a practical, reliable alternative to manually intensive approaches, Bolondi says. At the same time, he adds, the ContentWise platform is designed to ensure managers will have “the last word on things like what types of data get pulled into digital libraries, the way audience reminders and promotions are handled or which subsets of customers are targeted for testing new ideas.”

Knowledge Factory is a set of metadata handling tools, a workflow manager and advanced semantic algorithms that maximize the value of data sources, he explains. “We built our Knowledge Factory to pull data into a knowledge tree that can be accessed through our workflow and processed by our algorithms to support many applications,” he says.

The metadata tools control the ingestion, mash-up, reconciliation and de-duplication of data from multiple commercial and free data providers, including TMS, Rovi, IMDB, Wikipedia and social media websites like Facebook and Twitter, he adds. The platform supports four analytics sets for generating KPI (Key Performance Indicators), including user activity metrics, financial metrics, engagement metrics and recommendation effectiveness metrics. Managers can drill down to each service, catalog item, audience segment, business rule and search keyword to fill out the details underlying these metrics.
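
As a rough illustration of the reconciliation and de-duplication step being described – a sketch of the general technique, not ContentWise’s actual implementation – the following Python fragment merges records from several providers on a normalized title-and-year key and keeps the richest value seen for each field. The record structure and matching rule are assumptions made for the example.

    # Illustrative multi-source metadata reconciliation; data structures are assumed.
    from collections import defaultdict

    def normalize_key(record: dict) -> tuple:
        """Match records on a normalized title plus release year."""
        return (record["title"].strip().lower(), record.get("year"))

    def reconcile(sources: dict) -> list:
        """Merge records from several providers, de-duplicating by key and
        keeping the longest (presumably richest) value seen for each field."""
        merged = defaultdict(dict)
        for provider, records in sources.items():
            for rec in records:
                key = normalize_key(rec)
                for field, value in rec.items():
                    if value and len(str(value)) > len(str(merged[key].get(field, ""))):
                        merged[key][field] = value
                merged[key].setdefault("providers", set()).add(provider)
        return list(merged.values())

    catalog = reconcile({
        "tms":       [{"title": "Example Movie", "year": 2014, "genre": "Drama"}],
        "wikipedia": [{"title": "Example Movie", "year": 2014,
                       "synopsis": "A longer, richer plot description of the film."}],
    })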

Since shifting its focus to the video entertainment sector in 2007, ContentWise has built a strong base of customers in Europe and other regions outside the U.S., Bolondi notes. For example, Sky Italia has deployed the ContentWise discovery solution to deliver personalized recommendations for its Sky Online service, which was deployed last year with systems integration managed by Ericsson.

“As viewers demand more relevance and convenience in the content they consume, our goal is to make sure they can enjoy moving effortlessly across discovery patterns,” says Sky Italia CTO Pier Paolo Tamma. “Selecting ContentWise’s solution makes this vision a reality.”

Meanwhile, coming at the analytics challenge from another direction is Edgeware, traditionally a supplier of advanced CDN solutions for the MVPD market. Following the previously reported expansion of its Video Consolidation Platform (VCP) to include core as well as edge processing capabilities, Edgeware has added significant data aggregation and analytics solutions that not only serve the needs of network operators but also provide media companies who don’t own their own network facilities an opportunity to gain access to per-session performance analysis that would otherwise be out of reach.

Edgeware’s Convoy 360° Analytics, operating in tandem with the company’s high-density purpose-built Orbit server platform, delivers analysis of real-time and historical business information essential to fulfilling direct-to-consumer OTT business goals, says Matt Parker, director of business development at Edgeware. “This is moving from being just a management interface and portal into a very comprehensive 360 analytics suite that will provide extremely granular information on the content that is being watched, at what frequency, from which location, on what device type, whether it’s VOD, whether it’s live,” Parker explains.

Utilizing the Orbit/Convoy combination, broadcasters can take control over content distribution in ways that are impossible with reliance on third-party distributors, he adds. “By taking ownership of those processing functions in the headend, they’re able to exercise much more control over how the content is ingested, the frequency at which it’s played out across multiple concurrent clients and the management of bulk storage and the way those assets are stored in a logical hierarchy,” he explains. “And then, drilling down, new innovations such as dynamic ad insertion and forensic watermarking are the next generation processes the broadcaster has to take in, because this is how they will monetize next-generation services.”

Equally important, by exploiting these benefits of the VCP portfolio, broadcasters now have access to the 360° analytics platform. “It gives the broadcaster, the original content owner the opportunity to create their own interface using a widget-based approach,” he says. “They can create their own UI layout to get exactly the information they want when they want it to make these actionable decisions through the better use of data.”

Within each dashboard, users can filter by region, distribution, format, devices, content provider and ISP and present the data as a variety of charts, tables or geographical maps. The system also includes support for data export or integration with third-party data processing systems via an open API, Parker notes.

In an emerging environment where multiple flows of live TV content along with high volumes of stored content must be delivered with consistent performance on par with managed network distribution, broadcasters can’t afford to be at the mercy of faulty points in the external distribution chain. To deal with such situations they must be able to prove the fault doesn’t lie at the point of origin.

“In terms of having a poorly encoded piece of content, missing chunks or bits, etc., they can categorically state we know this content was good when it left our origination point because we measured it, stress tested it,” Parker says. “The rest is up to you.”

Adding to the possibilities is the fact that network operators who use the Edgeware technology to support their own CDNs can provide wholesale services to broadcasters that will generate the same range of performance analytics at edge points, he notes. Or broadcasters themselves can negotiate for placement of the edge platform at peering points of access to local markets, as is the case with Hong Kong OTT video provider TVB.COM.

Since 2008 TVB’s myTV service has offered 24-hour live streaming, free TV channels and catch-up with accessibility to TVB’s TV programs on smartphones, tablets and computers. With the increased popularity, myTV traffic volumes grew beyond the capacity that third-party CDN platforms could cost-effectively support, says TVB COO Kenneth Wong.

“Our customers expect nothing less than the highest quality, regardless of the device on which they access the content,” Wong says. “With Edgeware’s VCP, we can cost effectively deliver the quality our customers demand with the ability to rapidly scale and add new services as the market evolves.”

As broadcasters and network operators take advantage of the unique capabilities of solutions offered by the likes of ContentWise and Edgeware, they will also need access to larger fields of data and analytics capabilities like those touted by IBM. Fortunately, in this emerging environment with open APIs enabling integration of different data and analytics platforms into content providers’ workflows, buyers have the ability to put together best-of-breed options that will give them the full benefits of basing operations on comprehensive data analytics.

IBM has been putting a lot of marketing dollars into promoting Watson, its cognitive computing platform, for every type of business and application under the sun. In the media industry that means being able to find metadata, get smarter, search fragmented repositories and get value out of the platform’s ability to interpret human language in readouts of text in real time, Canepa notes.

“We can now essentially read the Web in real time, analyze that data and bring it into the workflow within the media company,” he says. “In Watson we have the logic to be able to federate search across a set of repositories, put intelligence in it so that we can find the right metadata. We can cleanse it; we can do forensics, and we can integrate that automatically into the workflow.”

Such capabilities are critical to enabling the ad campaigns of the future, where media buyers want to get to consumers wherever they’re spending time. “We have to begin to bring intelligence to things like campaign reporting, rate card optimization, ratings prediction,” Canepa says.

“One of the engines we have in that portfolio can take a bunch of disparate data, weight those different attributes and predict what’s going to happen,” he continues. “If you’re a subscriber management system and you want to understand where churn is coming from or be able to predict churn before it happens, that becomes really important. If you’re in a targeted advertising role and you want to predict your efficacy on your marketing spend against a certain sub segment of the audience, the ability to predict a very complex set of attributes and constantly re-evaluate that becomes a new critical path.”
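
The churn example maps onto a very standard predictive-modeling pattern. The sketch below uses a generic logistic regression over synthetic subscriber attributes purely to illustrate the “weight the attributes and predict” idea; the feature names and data are invented for the example and are not drawn from IBM’s tooling.

    # Toy churn-prediction sketch: synthetic features, generic model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Hypothetical per-subscriber attributes: monthly viewing hours,
    # support calls in the last 90 days, months on current contract.
    X = rng.normal(loc=[20.0, 1.0, 18.0], scale=[8.0, 1.5, 10.0], size=(1000, 3))
    # Synthetic labels loosely tied to low viewing plus frequent support calls.
    churned = (X[:, 0] < 12) & (X[:, 1] > 1.5)

    model = LogisticRegression().fit(X, churned)
    risk = model.predict_proba(X)[:, 1]      # churn probability per subscriber
    watch_list = np.argsort(risk)[-50:]      # highest-risk accounts to target first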

The functionalities extend to internal management of the ad sales operations. Canepa cites questions that need answers such as: “Who are my top performing ad sales reps; who are my worst performing ad sales reps? Who’s selling campaigns cross channel? Who do I have make-good exposures with? Where can I do substitutions, deliver a better audience for advertisers at a premium and eliminate financial risk I have on my side of the equation? Those are the kinds of real-time decision-making tools that we can put in the hands of executives when you have a flexible platform architecture that you can extract that data from.”

As Canepa notes, from now on “it’s every media company competing with every other alternative that every consumer has. We all have a unique pattern of usage. We all use a different set of services.”

Old ways of looking at audience divisions by age “don’t tell us anything anymore,” he adds. “Early adopter curves are largely over, and they’re going to continue to collapse. Who is a 23-year-old female today? Each of them is unique with unique patterns of consumption.”

That’s not to say, however, that it’s not possible to use big data to define and target new audience segments. “There are commonalities, and if you can attribute sufficiently what those interests and behaviors are to group those commonalities together, then you can target them as a segment,” Canepa says.

HDR Starts to Roll amid Growing Clarity on Bitrate & Quality Issues

Matthew Goldman, SVP, TV compression technology, Ericsson, and EVP, SMPTE

Minimizing Bandwidth Impact vs. Enabling Backward Compatibility with Better SDR Quality Is a Key Consideration

By Fred Dawson

September 18, 2015 – The pace toward market adoption of high dynamic range-enhanced video has accelerated with the launch of HDR content by Amazon, Walmart’s Vudu, 21st Century Fox and indications of commitment to rollouts in the near future on the part of Comcast, Netflix and others.

While there’s continuing debate over HDR formats, there’s now a lot more clarity on some points than there was a few months ago, thanks in part to the Blu-ray Disc Association’s (BDA’s) release of the new Blu-ray Ultra HD specifications and a bulletin on recommended HDR specifications from the Consumer Electronics Association (CEA). But there’s still a long way to go as content creators, vendors and distributors wrestle with key issues such as bandwidth impact, when the quality of TV displays for mainstream consumption will be sufficient to merit delivering HDR-enhanced content and finding the balance between too much and too little in the way of eye-popping dynamism (see the companion story below).

Driving progress is a widespread desire to deliver a dramatically better consumer viewing experience without waiting for content formatted to the full pixel density of 4K to emerge. As noted by Matthew Goldman, senior vice president of TV compression technology at Ericsson and executive vice president of the Society of Motion Picture and Television Engineers (SMPTE), ever greater numbers of 4K UHD TV sets are entering the market, bringing with them the ability to upscale HD 1080p through rendering tricks that mimic the effect of the higher pixel density delivered by 4K at 3840 x 2160 pixels.

As a result, Goldman notes, there’s little difference in what the viewer sees at a reasonable viewing distance between pure 4K and upscaled 1080p HD content. While 1080 interlaced content presents some challenges to the upscaling process, they’re not insurmountable, he adds.

“The rule of thumb is you get the full impact of HD at three picture heights’ distance – about two and a half meters, which is a typical viewing distance in people’s homes,” he continues. “To get the full benefit of 4K, which is to say, to see a real difference, you have to be half that distance from the screen, which is not what people are accustomed to.”
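
To put numbers on that rule of thumb, assume for illustration a 65-inch 16:9 display: its picture height is roughly 0.8 meters, so three picture heights works out to about 2.4 meters – in line with the typical living-room distance Goldman cites – while the roughly 1.2 meters needed to resolve the extra detail of 4K is closer than most viewers sit.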

All of this has been borne out in focus group tests and by providers of 4K-originated content, who, as previously reported, have found consumers to be underwhelmed by the experience. In contrast, Goldman notes, with HDR, whether it’s offered in conjunction with 4K or 2K pixel density, “You can see the impact of HDR from across the room.”

Given the bandwidth constraints, delivering 4K at four times the density of HD is a potentially costly proposition with a marginal return on the allocation of network capacity compared to HDR. “It’s all about more bang for the bit,” Goldman says.

Just what price will be paid in added bandwidth for HDR apart from 4K depends on many factors, but it will be far lower than the cost of full 4K, even though the sampling rate for encoding HDR (and 4K) is at least 10 bits versus the 8-bit encoding the industry uses today. “Our experiments have shown the bitrate increase could be as little as 0 or maybe as much as 20 percent, depending on a variety of factors,” Goldman says.
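
A quick sanity check on why the increase is so modest: going from 8-bit to 10-bit samples raises the raw precision of each pixel component by only 25 percent, and because compressed bitrate does not scale linearly with sample bit depth – the encoder is mostly coding prediction residuals rather than raw samples – the delivered stream typically grows by considerably less, consistent with the 0-to-20-percent range Goldman cites.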

Ericsson has found that with today’s dynamic encoding capabilities, where the level of compression and hence bandwidth utilization varies widely across picture sequences, HDR in dark spaces actually allows more compression without affecting picture quality than is the case with HD. This tends to balance out bright areas where capturing the nuances requires reducing the amount of compression and raising the bitrate compared to the bitrate for comparable scenes in HD, notes Carl Furgusson, head of strategy for business line compression at Ericsson.

HDR, by definition, means there is more non-linearity in the acceptable compression rate frame to frame than is the case with the standard dynamic range (SDR) used in traditional television. As embodied in the ITU Rec-709 standard, SDR has a luminance range of 100 nits (candelas per square meter) and a palette of 16.78 million colors, while HDR supports many hundreds or even thousands of nits and billions of colors, depending on which HDR format is used, the type of display and which of two advanced color gamuts is in play – the current cinematic DCI P3 standard or the ITU’s Rec-2020.
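
The color counts track directly with bit depth: three 8-bit channels allow 2^24, or about 16.78 million, code-value combinations; 10-bit sampling allows 2^30, roughly 1.07 billion; and 12-bit sampling allows 2^36, roughly 68.7 billion – the figure cited later for Rec-2020-capable displays. (This is simply a count of addressable code values per channel, not a claim about which of those colors a given display can actually reproduce.)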

“We’re finding with HDR the fact that you can reach deeper levels of black means artifacts that a high level of compression of SDR content might produce aren’t perceivable, so you can go to that level of compression with HDR,” Furgusson says. “At the top end of brightness you see more artifacts with HDR than HD at a given level of compression, so you have to spend more bitrate to avoid that.”

More testing will be required to develop guidelines around bitrate expectations with HDR, he adds. Asked whether the impact range of 10-15 percent in additional bandwidth that CableLabs has measured for HDR was in the ballpark, he replies, “You can’t go with a straight rule of thumb on this. But HDR will not have the impact on bitrates people anticipated.”

HDR formats, like Dolby Vision, that use 12-bit rather than 10-bit encoding will have a greater bandwidth impact, possibly in the 20-25 percent range, says Patrick Griffis, who serves as executive director of technology strategy in the office of the CTO at Dolby Laboratories and education vice president at SMPTE. But if backward compatibility isn’t required, the dual streams can be compacted into one, resulting in a lower bandwidth penalty. “We’ve worked with major chip vendors to ensure both options are included,” Griffis says.

At IBC in Amsterdam this month, Envivio, set to be acquired by Ericsson pending shareholder approval, offered a dramatic demonstration of how its encoding technology can be used to lower the bitrate of a single-stream version of Dolby Vision HDR with 4K pixel density. The demo, running on a new 1,000-nit Vizio display slated for commercial release by year’s end, showed fast-motion trailer clips from an HDR-enhanced version of the motion picture Oblivion at a bitrate of just 12 Mbps.

The bitrate impact of the single-stream version of the 12-bit Dolby Vision system on the total bitrate was between 1 and 2 Mbps, or somewhere between 10 and 20 percent, according to an Envivio official. The compromises in the encoding process that had to be made to reach the 12 Mbps bitrate were cleverly obscured with use of aggressive compression levels outside the areas of viewing focus in any given scene, resulting in a high-quality viewing experience that won the requisite approval for the demo from Oblivion director Joseph Kosinski.

The idea of making HD a part of the HDR paradigm, now labeled as HDR+, is really a return to the original concept of UHD, which was that it was more about dynamic range than pixel density, Griffis notes. “UHD is really about creating an immersive viewing experience, but the industry got sidetracked for a while by the CE manufacturers’ introduction of 4K TV sets, which focused discussion of UHD on pixel density,” he says. “Now things are getting back to the original intent.”

That intent has been bolstered by the development of SMPTE 2094, a draft standard moving to adoption that introduces more dynamism into the color transformation process than provided by SMPTE 2086, otherwise known as the “Master Display Color Volume Metadata Supporting High Luminance and Wide Color Gamut.” SMPTE 2086 serves as the metadata component referenced by the SMPTE 2084 Electro-Optical Transfer Function (EOTF), an alternative to the traditional Gamma or Opto-Electric Transfer Function (OETF) that is now part of the BDA’s UHD standard and the HDR specifications recommended by the Consumer Electronics Association.

By providing a means of assigning a dynamic brightness dimension to the rendering of colors by TV displays, SMPTE 2094, an adaptation of the metadata instruction set used in Dolby Vision, brings the full HDR experience into play with 10-bit as well as 12-bit sampling systems. With SMPTE 2084 and ITU Rec-2020 color gamut now specified in both the BDA’s and the CEA’s HDR profiles, a fairly clear HDR roadmap is in place, leaving it to individual industry players to determine whether they want to utilize 12-bit formats like Dolby Vision and the one developed by Technicolor or the 10-bit formats offered by Samsung and others.

Dolby at its IBC booth offered a dramatic demonstration of what can be done with Dolby Vision using SMPTE 2094 to map HDR-originated content in the transfer function on both HDR and SDR displays. The demo, using a 2,000-nit display not yet commercially available, offered a dramatic view of what HDR will look like as such displays enter the market.

At the same time, with a real-time 2094 mapping capability, Dolby also showed how the HDR-originated content delivered through the Rec-709 component of the dual-stream Dolby Vision system could be rendered to greatly enhance the SDR display of that content in comparison to an SDR display of the content that didn’t use the SMPTE 2094 mapping capability. This is another factor that will have to be weighed as network-based distributors consider backward compatibility and its impact on bandwidth.

“The industry is split on the backward compatibility question,” says Ericsson’s Goldman. “Some say it’s essential; others don’t think it’s necessary.” Even for those that do favor backward compatibility, there are other approaches under consideration besides reliance on the dual-stream option that involve separate processing of HDR content for SDR distribution.

But with higher-luminance displays supporting the 68.7 billion colors enabled by Rec-2020 on the near horizon, distributors will have to begin deciding which way they want to go with HDR formats and sampling rates sooner rather than later. Given the enhanced SDR quality that comes from using SMPTE 2094 with the dual-stream 12-bit format, that added benefit will also have to be weighed in determining whether the 12-bit dual-stream route is worth the extra cost in bandwidth.

HDR Requires Care in Addressing Nuances of Human Visual Response

Sean McCarthy, engineering fellow, ARRIS

The Eye’s Reactions to Greater Contrast and Color Depth Pose New Challenges 

September 21, 2015 – Now that MVPDs are turning their focus to HDR, considerations relating to the impact of the technology on viewers that weren’t part of the 4K UHD discussion must be brought into the planning process.

While there’s every reason to expect that the visual impact of the expanded color and contrast of HDR will induce distributors to embrace the technology as a major improvement beyond the 4x increase in pixel density of 4K Ultra HD, the push to bring the HDR viewing experience to the public introduces issues in video production that will require far more attention to the nuances of human visual perception. In other words, if HDR is to live up to its potential as a major leap forward in video quality, providers will have to be extremely careful not to alienate viewers with unintended negative impacts analogous to what happened with 3D TV.

This is the message implicit in a paper delivered at INTX in Chicago this past spring by Sean McCarthy, an expert in visual science who serves as engineering fellow for ARRIS. “Unlike 4K UHD, which has been a fairly straight-forward step in the evolution of display resolution built on the SDR (standard dynamic range) foundation, HDR introduces a new paradigm where the dimensions of the new viewing experience must be defined in keeping with basic principles of the human visual response system,” McCarthy says. “This will impact everything that’s done in the creation and dissemination of video content from initial capture through production, postproduction and processing for distribution.”

While such concerns may not be top of mind at this point among service providers, they are a top priority at motion picture studios where directors are now shooting in HDR. “For next-generation imaging people need to be more aware of the way the eye responds to changes in brightness and other dynamics HDR introduces than has been the case with SDR,” says Patrick Griffis, executive director of technology strategy in the office of the CTO at Dolby Laboratories and vice president of education at the Society of Motion Picture and Television Engineers. “How much difference in brightness you can employ from one scene to the next is a complicated question. It’s a learning process that movie producers are going through as they shoot in HDR.”

As Griffis notes, SMPTE has provided algorithmic tools to simplify dynamic control over contrast and color in the new transfer function designed to support rendering of the HDR parameters on TV screens. The conveyance of parameters maintaining control over contrast in the black-to-white dimension as well as choice and brightness of colors is performed by metadata used in conjunction with the SMPTE 2084 Electro-Optical Transfer Function (EOTF), otherwise known as “Perceptual Quantization (PQ).”
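
For readers who want to see what PQ actually looks like, here is a minimal Python rendering of the ST 2084 curve using the constants published in the standard; it is included purely as a reference sketch, not as any vendor’s implementation.

    # SMPTE ST 2084 "PQ" transfer function, absolute luminance in nits.
    M1 = 2610 / 16384        # ~0.1593
    M2 = 2523 / 4096 * 128   # ~78.84
    C1 = 3424 / 4096         # ~0.8359
    C2 = 2413 / 4096 * 32    # ~18.85
    C3 = 2392 / 4096 * 32    # ~18.69

    def pq_eotf(code: float) -> float:
        """Map a non-linear PQ code value in [0, 1] to display luminance in nits."""
        p = code ** (1.0 / M2)
        return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1.0 / M1)

    def pq_inverse_eotf(nits: float) -> float:
        """Map display luminance in nits back to a PQ code value in [0, 1]."""
        y = (nits / 10000.0) ** M1
        return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

    # Sanity checks: code value 1.0 is the 10,000-nit ceiling, and SDR reference
    # white (100 nits) lands at roughly code value 0.51.
    assert round(pq_eotf(1.0)) == 10000
    assert 0.50 < pq_inverse_eotf(100.0) < 0.52

The curve’s steep allocation of code values to low luminance levels is what allows a 10- or 12-bit signal to span such a wide brightness range without visible banding, and it is also why the tolerable quantization error discussed below governs how many bits are needed.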

The ability to dynamically control color brightness is the latest addition to the toolbox. These capabilities are embodied in the draft standard SMPTE 2094, known as “Content-Dependent Metadata for Color Volume Transformation of High Luminance and Wide Color Gamut Images.”

Learning how to use these tools in the context of maintaining viewing comfort is a process that will also have to be undertaken by those who are responsible for quality control in content distribution, including instances where SDR content is being converted to HDR for subscribers who own HDR-capable TV sets. This is a tricky domain where engineers employing 10-bit HDR systems will find they are more constrained than users of 12-bit HDR systems, Griffis says.

The challenge is to apply PQ as aggressively as possible without triggering artifacts resulting from crossing the threshold of tolerance for quantization error. “The amount of precision you can apply with PQ depends on how much error you can tolerate,” he says. “What we found is that to maximize precision with a higher error tolerance you need 12-bit sampling.”

Based on McCarthy’s analysis, there are many factors relating to how changes in contrast and color affect the viewing experience that must be taken into account with HDR. “Along with responsiveness to degrees of contrast,” he notes, “developers must consider human perceptual factors such as light and dark adaptation; brightness sensitivity; reaction to ambient light; how color is perceived under different conditions, [and] responses to frame-rate flicker that might be introduced with expansions in dynamic range.”

And that’s not all. “Content producers and distributors will have to determine how human visual response in the HDR environment will impact quality parameters for advertising, channel changes between HDR and SDR programming and the presentation of user interfaces, captioning and other textual and graphic elements,” McCarthy says. “The impact of various HDR modes on bitrate and bandwidth requirements will also be an important consideration, especially for MVPDs.”

One example of the difference in impact between SDR rendering in HD and HDR rendering of the same content can be seen in how HDR affects the size of the pupil, which regulates how much light enters the eye. An SDR signal going from low luminosity to a maximum luminance of 100 nits causes the pupil to narrow its light-admitting aperture from 8 millimeters to about 4.5 mm in diameter, McCarthy notes, whereas the pupil diameter contracts to approximately 2 mm when encountering an HDR-enabled luminance of 1,000 nits – roughly a five-fold further reduction in pupil area compared with the SDR case.
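
As a back-of-envelope check on those figures, treating the pupil as a circle of the stated diameter (area = π(d/2)²): an 8 mm pupil has an area of about 50 mm², a 4.5 mm pupil about 16 mm² and a 2 mm pupil about 3 mm². In other words, SDR peak white already shrinks the pupil to roughly a third of its dark-adapted area, and a 1,000-nit HDR highlight shrinks it by roughly another factor of five.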

These differences bring into play considerations about how fast the eye can adapt to increases or reductions in luminosity, since, with HDR, the changes in pupil area from one extreme to the other are much larger than they are with SDR. “Settings for HDR must take into account the impact of luminance on the retinal photoreceptors that determine illuminance in both bright and dark home environments and even outdoor situations in cases where the viewing experience is extended to handheld devices,” McCarthy says. “The level of sensitivity and the speed of adaptation can vary considerably depending on those conditions.”

This is especially significant in instances where the eye has adjusted to a given level of luminosity over an extended period of time in the video sequence – a phenomenon known as “bleaching adaptation.” “The bleaching impact will be an important consideration in determining what average and peak levels of HDR brightness should be in the context of temporal shifts in luminance,” McCarthy notes.

Color, too, is an area requiring special attention with use of HDR. “As luminance increases so does the ability of the human visual system to discriminate between colors at ever smaller gradations,” McCarthy says. “Consequently, more bits would be needed to code color without introducing noticeable errors, particularly at high luminance.”

In other words, a TV show originally recorded in SDR using 8-bit encoding will have to be re-encoded using 10-bit sampling to achieve a satisfactory conversion to HDR. “10-bit encoding may be expected as a minimum bit depth for HDR for any color space,” McCarthy says.
