Content Ecosystem Archive


HEVC Puts High-Quality TV In Play over Mobile and OTT

Giles Wilson, head of TV compression business, Ericsson

By Fred Dawson

April 12, 2013 – Once again it appears the pay TV industry is on the cusp of a transformation in the video service marketplace where the dividing line between past and future will be etched by the commercial introduction of another big leap in digital compression.

This time it’s HEVC (High Efficiency Video Coding), or H.265, the newly ratified successor to AVC (Advanced Video Coding), or H.264, that’s cutting the bitrate for delivering any given resolution of video by anywhere from 40 to 50 percent. This not only has the effect of nearly doubling the bandwidth for video transport across the aggregate fixed and mobile distribution infrastructures at a fraction of the cost of network capacity expansion; it also opens the way to the new level of video quality envisioned with 4K and, eventually, 8K ultra-HD on ever larger screens, with minimum impact on existing bandwidth allocations.

Most important, perhaps, as vendors roll out initial products aimed at achieving greater efficiency in the mobile and over-the-top video domains, the emergence of HEVC provides a way for content suppliers to deliver video at far higher quality than before over the broadband Internet. This means that while pay TV operators are going through the long cycle of HEVC implementation on their managed networks, OTT providers will be leveling the playing field in terms of the viewing experience on smart TVs, tablets, smartphones and other connected devices.

“If you look at where the timeline for rollouts will be for HEVC, one of the constraints will be with legacy set-top boxes,” says Giles Wilson, head of TV compression business at Ericsson, which has been at the forefront of early HEVC product releases. “We’ll see the earliest deployments in mobile and perhaps multiscreen [OTT] services. In terms of traditional broadcast, if there’s a need for new set-tops to support 4K, that may drive implementation of HEVC in the future.”

One of the more impressive demonstrations of what’s in store for mobile and OTT players was mounted recently by Japanese mobile provider NTT DoCoMo, which has begun licensing its in-house developed HEVC software codec for use in smartphones, tablets and other mobile devices within and beyond its service domain. As shown on the website of Tokyo-based publisher DigInfo TV, the carrier in February ran side-by-side comparisons of real-time H.265 and H.264 compression showing what the new system offers at just one megabit-per-second while also demonstrating a 60 frame-per-second large-screen display of real-time H.265-encoded 4K ultra-HD video streamed at just 10 mbps.

The DoCoMo codec “uses a PC to play video four times the size of full HD at 60 fps in real time,” says a DoCoMo spokesman during the demo. “We think 60 fps video with a 4K display size like this is a world first.”

This is a remarkably low bitrate for 4K resolution (4096 x 2304 pixels) at 60 fps. In fact it’s well below what people expect to see at 24 or 30 fps.

For example, Alex Zambelli, former Microsoft video expert and now principal video specialist at online video publisher iStreamPlanet, offers a far more conservative perspective on the HEVC potential based on an assumption of 40 percent rather than 50 percent improvements in compression performance compared to H.264. But, even so, Zambelli anticipates the HEVC bitrate for 4K delivered over the Internet at 30 fps will fall into the 12-15 mbps range. In other words, based on the FCC’s latest report on broadband, 4K at these rates would be widely viewable in U.S. broadband households, where the average access rate has climbed to 15.6 mbps.
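The arithmetic behind these figures is easy to sketch. In the snippet below, the H.264 baseline bitrates for 4K and the Blu-ray comparison point are illustrative assumptions for the sake of the calculation, not numbers from Zambelli or DoCoMo:

```python
# Back-of-the-envelope bitrate arithmetic for the figures quoted above.
# Baseline numbers are illustrative assumptions, not measured values.

def hevc_bitrate(h264_bitrate_mbps: float, savings: float) -> float:
    """Estimated HEVC bitrate given an H.264 baseline and a savings factor."""
    return h264_bitrate_mbps * (1.0 - savings)

def bits_per_pixel(bitrate_mbps: float, width: int, height: int, fps: int) -> float:
    """Compression density: delivered bits per displayed pixel."""
    return (bitrate_mbps * 1_000_000) / (width * height * fps)

# Zambelli-style estimate: assume 4K over H.264 needs roughly 20-25 mbps at
# 30 fps; a 40 percent HEVC saving lands in the 12-15 mbps range he cites.
print(hevc_bitrate(20, 0.40), hevc_bitrate(25, 0.40))  # 12.0 15.0

# DoCoMo demo: 4K (4096 x 2304) at 60 fps in 10 mbps works out to roughly
# 0.018 bits per pixel, versus about 0.5 bits per pixel for an assumed
# 25 mbps Blu-ray 1080p24 stream -- which is why the demo bitrate is so low.
print(round(bits_per_pixel(10, 4096, 2304, 60), 3))
print(round(bits_per_pixel(25, 1920, 1080, 24), 3))
```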

“We’re not comparing Blu-ray quality levels here – we’re comparing 2013 OTT quality levels which are ‘good enough’ but not ideal,” Zambelli notes in a recent blog post. “If the dream of 4K OTT video also carries an implication of high frame rates – e.g. 48 to 120 fps – then the bandwidth requirements will certainly go up.”

While the level of quality achieved by DoCoMo for 60 fps 4K at 10 mbps may fall well short of the parameters deemed appropriate for big-screen display of pay TV or Blu-ray content, it clearly demonstrates the impact HEVC is likely to have on mobile and OTT services. Indeed, with 4K TV sets a long way from making a significant dent in the consumer market, the near-term impact of HEVC will have far more to do with enabling extraordinarily bandwidth-efficient video streaming at current 720p and 1080p HD levels.

“HEVC is important for the industry, allowing compression to keep pace as resolution and quality demands on video rise and the sheer volume of channels and video available increases,” says Avni Rambhia, senior industry analyst of Frost & Sullivan’s Digital Media Practice. “However, the speed of uptake of the format and the rate at which it is able to transform the industry with its benefits depends heavily on the timely delivery of encoding SDKs for content creation, as well as technologies to streamline CE and mobile device support.”

In Ericsson’s case, the move in this direction began with production of HEVC encoders last year even before the standard was ratified. “In September we introduced our SVP 5500 HEVC encoder targeted for applications in mobile networks,” Wilson says. “And last month we announced our end-to-end broadcast solution for LTE using HEVC.”

As previously reported, the emergence of an IP-based broadcast standard, eMBMS (Evolved Multimedia Broadcast Multicast Service), for LTE promises to become another factor in carriers’ efforts to accommodate surging demand for long-form video. In combination with the LTE generational leap in bitrate and the compression efficiencies of HEVC, the ability to deliver live programming in multicast mode positions mobile to have a major impact on the pay TV market.

The Ericsson Broadcast LTE solution, the first of its kind, also includes support for another new standard, MPEG-DASH (Dynamic Adaptive Streaming over HTTP), which helps overcome inefficiencies of multiple adaptive streaming and content protection formats while simplifying monetization of video services delivered to connected devices. Verizon Wireless, the first announced North American customer for the new Ericsson platform, plans to begin commercial applications next year, says Parissa Pandkhou, director of advanced solutions at Verizon.
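The way MPEG-DASH overcomes the multiple-format problem is by describing every codec and bitrate variant of a title in a single XML manifest (the MPD), from which clients pick what they can play. The sketch below builds a minimal such manifest with Python’s standard library; the URLs, bitrates and codec strings are illustrative assumptions, not a production configuration:

```python
# A minimal illustration of why MPEG-DASH simplifies multi-format delivery:
# one XML manifest (the MPD) can describe multiple codecs and bitrates.
# URLs, bitrates and codec strings below are illustrative assumptions.
import xml.etree.ElementTree as ET

mpd = ET.Element("MPD", {
    "xmlns": "urn:mpeg:dash:schema:mpd:2011",
    "type": "static",
    "mediaPresentationDuration": "PT60S",
    "profiles": "urn:mpeg:dash:profile:isoff-on-demand:2011",
})
period = ET.SubElement(mpd, "Period")
aset = ET.SubElement(period, "AdaptationSet", mimeType="video/mp4")

# Two renditions of the same content: an H.264 ladder rung and an HEVC one
# at roughly half the bitrate, reflecting the savings discussed above.
for rep_id, codecs, bandwidth in [
    ("h264-1080p", "avc1.640028", 6_000_000),
    ("hevc-1080p", "hvc1.1.6.L120.90", 3_000_000),
]:
    rep = ET.SubElement(aset, "Representation",
                        id=rep_id, codecs=codecs, bandwidth=str(bandwidth))
    ET.SubElement(rep, "BaseURL").text = f"{rep_id}.mp4"

print(ET.tostring(mpd, encoding="unicode"))
```

A real deployment would add segment timelines, audio adaptation sets and content protection descriptors, but the unifying principle is the same: one manifest, many device targets.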

“Verizon plans to introduce Ericsson LTE Broadcast to give sports fans a whole new experience while watching a game,” Pandkhou says. “We see new opportunities in this technology for sports, concerts and even distance learning and college classes.”

Another announced Ericsson LTE Broadcast customer, Australia’s Telstra, plans to undertake a live network trial in the second half of this year, says Mike Wright, Telstra’s executive director for networks and access technology. “The trial will show how we can improve the delivery of video to customers who want to enjoy the video content on the move,” Wright says. “The key for this solution is the greater network efficiency it will provide, ensuring we will be able to meet a critical business imperative of giving our technology-savvy customers the services they want.”

The processing capacity of targeted devices will be a factor in determining how fast HEVC-compressed video takes hold, Wilson notes. While PCs and some tablets can handle the processing load, most of the current generation of smartphones, while capable of doing the decoding when equipped with an H.265 codec, will burn up too much battery power in the processing to make HEVC a practical option on those devices, he says.

“We’ve been working with our partners on the development of hardware for decode acceleration on mobile devices,” Wilson adds. “One big difference between the mobile market and traditional pay TV is the refresh rate on phones is much faster than TVs or set-top boxes.” Ericsson began demonstrating use of software codecs on PCs and tablets at the NAB Show in Las Vegas.

Another factor that should contribute to early rollouts of HEVC is the fact that suppliers of software-based encoding systems running on off-the-shelf processors say they are able to enable HEVC on customers’ deployed systems via software upgrades. “The fact we can upgrade to HEVC on a software base has become a key incentive in new customers’ purchasing decisions,” says Julien Signès, president and CEO of Envivio, a leading encoding supplier with HEVC demos running at NAB. “MSOs, for example, are looking at our system as a way to converge and scale their encoding requirements without constantly having to purchase new headend equipment.”

Elemental, utilizing CPU and GPU processors with its software-based transcoding platform, is another supplier positioned to support rapid rollout of HEVC. “The computational intensity of HEVC lends itself perfectly to the processing performance advantage available with graphics processing units,” says Keith Wymbs, vice president of marketing at Elemental.

The HEVC/H.265 codec requires up to 10 times more processing power for encoding compared to H.264 and relies on software capable of more complex decisions and tradeoffs across a wider array of decision points, Wymbs notes. Easing the transition to H.265 within legacy MPEG-2 and H.264 infrastructures, software-upgradeable solutions from Elemental can incorporate new compression approaches much more quickly than existing fixed hardware encoding and decoding platforms, such as ASICs and DSPs (digital signal processors), he says.

On another front, Rovi is also taking steps that promise to expedite rollout of HEVC to connected devices, in this case through implementation of the standard on its DivX delivery and playback platform and the launch of a MainConcept encoding SDK for HEVC. “As with H.264, Rovi will release core video encoding and decoding solutions that will be the foundation of a successful HEVC rollout and enable our customers to save money while enhancing the quality of the video services they offer,” says Matt Milne, executive vice president worldwide sales and marketing, Rovi Corporation. “We see HEVC as a huge step, enabling the industry to cost effectively transition more content to high definition formats and, eventually, 4K.”

The new MainConcept SDK offers core professional HEVC encoding for developers serving the broadcast, professional content creation, mobile and consumer industries. MainConcept encoding solutions are already broadly deployed by many of the world’s largest technology companies, Milne notes, which will help streamline the migration to HEVC in a broad range of leading cable, internet and wireless systems.

Early in the second half of this year Rovi plans to introduce HEVC support on DivX, the firm’s widely deployed end-to-end solution for secure adaptive streaming in the OTT market. HEVC over DivX will include advanced features such as support for 1080p, subtitles, multiple language tracks and trick-play functions such as smooth fast forward and rewind for playback. The company says support for HEVC will also be integrated in the next version of its DivX consumer playback software to enable consumers to enjoy high definition content, including 4K, as soon as it is released.


Advances in Asset Management Expand Content Owners’ Clout

Mike Nann, director, marketing communications, Digital Rapids

March 21, 2013 – TV and OTT content suppliers’ ability to leverage assets in response to market opportunities is rapidly becoming a force to be reckoned with in the unfolding multiscreen services arena.

While the arcane details of post-production management of content assets may seem far removed from the trends that are shaping the consumer entertainment business, the truth is technological advances are freeing media companies to orchestrate content flows into TV, OTT and mobile distribution streams in ways that give them far greater leverage to monetize assets than ever before. The market impact will be seen in the emergence of new niche channels, new advertising models and the reshaping of storytelling and other facets of the content itself through use of interactivity, socialization and expanded exposure enabled by the Internet.

Many functions and relationships in the supply chain factor into all this, of course, but the linchpin to this new level of empowerment for content suppliers is the ability to get over the hump of inefficiencies in asset management at the core. One example of how this is happening can be seen in content suppliers’ use of tools supplied by Digital Rapids to integrate high-volume media transformation and all the other processes required for serving multiple outlets under management of a unified workflow system.

As previously reported, the vendor’s Kayak workflow management platform is designed to enable this level of comprehensive integration. Now, with general availability of version 2.0 of Digital Rapids’ Transcode Manager, Kayak has been made an integral part of a specific product release, with more such Kayak-influenced product updates to follow, according to company officials.

The new mode of integrated operations enabled by Kayak has already been put into play by a variety of the firm’s customers, including multiple content owners, broadcast station groups and one cable MSO, in what amounts to a pre-release beta phase of Transcode Manager 2.0 implementation, says Mike Nann, marketing communications director at Digital Rapids. “We also have a major OTT entertainment provider ramping up with Kayak and Transcode Manager 2.0,” Nann says.

In many of these early applications Digital Rapids’ customers are using Kayak as a master workflow template that allows multiple technology and back-office workflows to run in harmony across on-premises and cloud-based resources, he notes. In this role Kayak allows entities to draw on multiple processing components as elements in a catalog, which is to say, as elements that can be activated on servers and assigned specific policies for how they are applied by simply implementing commands on the Kayak interface.

In one case in point that the company can discuss publicly, Turner Broadcasting System, a long-time user of Digital Rapids’ StreamZHD multi-format encoding system and the Transcode Manager, over the past several months has been exploiting the benefits of Kayak in conjunction with Transcode Manager 2.0 to streamline multi-platform distribution across its multiple TV channels. And, Turner, like some other big customers has been leveraging Kayak in other ways as well.

“This platform allows us to efficiently process and deliver content from our news and entertainment services across a wide array of consumer platforms,” comments Keith Chandler, vice president of media and multi-platform operations at Turner. By using Kayak to support everything from multi-stage image processing and transcoding to packaging the media into the multiple formats required by varying viewing devices, Turner is able to address the challenges of processing a vast back-catalog of content with widely varying technical characteristics.

As Turner officials note, the modular, component-based Kayak architecture also makes it possible for them to integrate other systems directly into the workflow. “We have developed our own custom Kayak components, enabling seamless integration with Turner’s existing business systems,” says Brooks Tobey, senior vice president of sales solutions and multiscreen development and delivery at Turner.

Turner’s new operating environment puts into play a way of handling content that has not been available to media companies in the past. The logic-driven automation provided by Kayak allows Turner to adapt the workflows of disparate systems to the source content, minimizing manual effort while maximizing processing efficiency and eliminating unnecessary steps. Functionalities such as metadata tagging leverage the platform’s continuous analysis of media on a frame-by-frame and sample-by-sample basis, officials note.

While some customers like Turner have moved forward aggressively with use of the Kayak workflow platform, others have been waiting for general availability of a specific Digital Rapids product upgrade that brings Kayak into play as an intrinsic part of the upgrade. “Transcode Manager 2.0 is the first widespread availability of a solution that brings our Kayak workflow platform into the implementation of a specific product line,” Nann says. “Many customers wanted it in a packaged form they can work with.”

For such customers the aspects of Kayak they’re interested in may only apply to processes related to Transcode Manager, Nann explains. “There are a lot of functionalities in Kayak they may not need beyond this,” he says. On the other hand, he adds, “As they see the benefits of using Kayak with transcoding they may see the potential for using it in other areas, such as image processing or even document-based workflows.”

The scope of Transcode Manager 2.0 has reached the point where a better term for describing the role played by the platform is media transformation, Nann notes. That’s because the processes go beyond typical file-based format conversion to execution of multiple functions based on frame-by-frame analysis of what’s required for each usage scenario, whether it be the initial shift from master to mezzanine or from mezzanine to distribution locations.

For example, Nann says, leveraging Digital Rapids’ partnership with Dolby, the workflow bakes in audio loudness management for regulatory compliance. The platform also adjusts encoding based on intelligent analysis of the video segments that may have been compiled into the final product: if there’s a letterboxed segment that needs to be upconverted to HD, the system will do that, and if there’s a film segment, the aspect ratio will be adjusted for HD.
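The kind of per-segment decision logic described here can be sketched in a few lines: inspect each source segment’s characteristics and adapt the processing steps, rather than applying one fixed recipe to the whole file. The `Segment` fields and step names below are hypothetical stand-ins, not a real Kayak API:

```python
# Hypothetical sketch of content-adaptive workflow decisions: the plan for
# each segment is derived from analysis of that segment, not preset globally.
# Field names and step names are illustrative, not a real Digital Rapids API.
from dataclasses import dataclass

@dataclass
class Segment:
    letterboxed: bool      # SD letterboxed material inside an HD master
    film_source: bool      # film-originated material needing aspect handling
    loudness_lufs: float   # measured program loudness

def plan_steps(seg: Segment, target_lufs: float = -24.0) -> list:
    steps = []
    if seg.letterboxed:
        steps.append("crop-and-upconvert-to-HD")
    if seg.film_source:
        steps.append("adjust-aspect-ratio-for-HD")
    if abs(seg.loudness_lufs - target_lufs) > 0.5:
        steps.append("normalize-loudness")   # e.g. for loudness compliance
    steps.append("encode")
    return steps

print(plan_steps(Segment(letterboxed=True, film_source=False, loudness_lufs=-19.0)))
# ['crop-and-upconvert-to-HD', 'normalize-loudness', 'encode']
```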

“Rather than starting with a set of encoding priorities, we can be actively modifying encoding as the file goes through the Transcode Manager,” he says. This applies to “transwrapping” as well. “We’re able to change the file container without touching the actual compression,” he adds.

Such flexibility greatly streamlines the creative processes applied to bringing together various content elements in post-production, he says. By reducing the manual work that has to be done to create a seamless master from various components, content suppliers can do a lot more with a lot less.

Digital Rapids’ live and multiscreen encoding will soon be added to the product lines that bring Kayak into play for workflow management related specifically to those functions. “This will be the first migration of our family of live encoding products onto the Kayak platform,” Nann says.


UltraViolet Made Headway In ’12, but Jury Is Still Out

Chuck Parker, VP, Intersection Research

By Todd Marcelle

January 10, 2013 – The cloud of uncertainty surrounding UltraViolet persists going into 2013, with momentum building but the goal of ubiquitous usage tied to access to all the mainstream movies and TV programs consumers might want to own still well out of reach.

The Digital Entertainment Content Ecosystem (DECE) consortium of five of the six major movie studios, Disney being the exception, made major strides with its UltraViolet media storage platform in 2012, including expansion to over seven million registered accounts from just a few hundred thousand at the start of the year and an ever-growing list of participants, including Walmart, Best Buy, Barnes & Noble, Seagate, the BBC and many others. But key steps remain to be taken, including a much larger selection of titles beyond the 7,200 or so now on offer from participating studios and TV networks.

In an analysis of the challenges confronting UltraViolet, Chuck Parker, president of Intersection Research and frequent blog contributor to the Media & Entertainment Services Alliance website, notes that when it comes to availability of content most desired by consumers, as measured by the Internet Movie Database (IMDb) top 100 evergreen titles and the Rentrak top 50 recently released titles, titles available for sale on the UltraViolet platform represent only 50 to 60 percent of the titles on either list. “This isn’t a digital rights issue,” Parker says in a recent blog. “Digital title availability for rental and sell-through on iTunes is nearly ubiquitous. This is a business decision [of the studios] not to support UltraViolet.”

That may seem harsh in light of the fact that Sony Pictures, Warner, Fox, Universal, Paramount, Lionsgate and DreamWorks Animation have all embraced UltraViolet through pre-street date digital releases of select titles via Sony Pictures Store, Best Buy’s CinemaNow, Walmart’s Vudu, PlayStation Store and Google Play. But the hit-or-miss electronic availability of DVD and Blu-ray releases on UltraViolet leaves the consumer in need of alternative sources, undermining the purpose of UltraViolet, which is to compensate for the falloff in hard copy sales by encouraging electronic sell-through through the convenience of a universal cloud storage platform.

Notwithstanding the limited penetration of UltraViolet, electronic sell-through sales in general as well as rentals are growing rapidly, according to the Digital Entertainment Group, which serves as the marketing arm for UltraViolet. Overall disc sales in the third quarter were down by four percent, even as Blu-ray disc sales were up by 13 percent compared to Q3 2011. Disc rentals were off by 50 percent.

By contrast, electronic sell-through sales were up by 37.7 percent. VOD spending climbed more than 8.4 percent while subscription revenues from Internet streaming services grew by 127 percent, DEG said. But the total sales value of hard copy rentals, subscriptions and sales, totaling about $4 billion in Q3, far outdistanced electronic sell-through, rentals and subscription revenues, which totaled about $811 million.

The upshot is that Hollywood continues to have a big problem meeting ROI goals on motion pictures, especially in light of how weak the profit levels are in the area of fastest growth, namely, online subscriptions. Digital subscription services (including Netflix, Amazon Prime, Hulu and, soon, Redbox Instant by Verizon), along with physical rental kiosks and the Netflix disc-by-mail subscription service, earn about one third of the profit per viewing compared to VOD, digital sell-through and physical sales. “Video consumption has never been higher in the U.S. household, but it is the mix of consumption that is hurting Hollywood studios,” Parker says.

UltraViolet officials say a heavier marketing effort beyond the “organic” approach taken so far is in the offing for 2013. And they say the long-delayed adaptation of UltraViolet distributors to the Common File Format will soon allow UltraViolet titles to be downloaded without users having to work with different file formats from each retailer.

At a Digital Hollywood session in October, UltraViolet GM Mark Teitell said the CFF was in business-to-business testing with consumer testing to follow. But he acknowledged DECE still has work to do to facilitate use of CFF in the cloud environment.

Teitell also reported UltraViolet, now available in the U.K. and Canada as well as the U.S., is slated to launch in Australia, New Zealand, Ireland, France and Germany in 2013. But the question remains whether the platform is going to crack through to mainstream adoption in the U.S.

“It is difficult to crow about having retailers signed up when the largest DVD/Blu-ray sales retailer (Amazon), the largest digital video retailer (iTunes), and the largest digital ‘rentailer’ (Xbox) have not signed up for the program,” Parker says. “No matter how you slice up the markets where the consumers you want to attract are currently buying or renting, each one of these companies represents the lion’s share of them, and I would venture to say you cannot create mass adoption without them.”


Monetization Opportunities Take Shape for Multiscreen TV in 2013

Keith Wymbs, VP, marketing, Elemental

January 11, 2013 – Entering 2013 multiscreen distribution of pay TV content is kicking into a new gear, raising the prospects that real money may start flowing into what has been a laborious effort to keep pace with consumer behavior on the part of established TV programmers and distributors.

So far, monetization of long-form video distribution has been the purview of over-the-top players like Netflix, Hulu and Amazon, and there, with the aggressive strategies of Google, Microsoft and myriad others in play, the money curve is sure to keep climbing. Hulu, for example, after registering a 60 percent jump to $420 million in revenues in 2011, logged an even bigger spike of 65 percent in 2012 with a reported $695 million in advertising and subscription sales.

So far, as Tom Morrod, director for consumer and media technology at research firm IHS notes, all the money flowing to providers from online video consumption is a drop in the bucket compared to traditional TV. In Europe, for example, the online video take adds up to about one percent of the overall media revenue pie, counting print, movie box office and everything else, compared to the 54 percent share represented by pay TV subscriptions.

“There’s very little money being generated right now from the multiscreen world, but there’s a lot of money going to the TV set,” Morrod says. But how long can things remain this far out of balance in light of other trends cited by Morrod and other researchers?

Among developed countries worldwide the average number of TVs per household has been at two or above since 2005, according to IHS findings, while the number of PCs per household has steadily increased to a total of two per household as of 2012. Meanwhile, the number of other devices capable of delivering video from the Internet, including smartphones, game consoles, connected set-tops and tablets, has gradually escalated to where, by the end of 2011, the total of such devices in all households in all developed nations matched the number of PCs, Morrod says.

“Within another few years we’re going to have more of all those different connected device types added together than the sum of TVs and PCs added together,” Morrod says. “What this is really showing is a huge fragmentation of device types consumers can use to watch content.”

Just how rapidly that fragmentation is impacting consumption habits in the U.S. can be seen in research performed by Leichtman Research Group, which found that the proportion of U.S. adults who viewed Web video on their TV sets at least once a week had gone from five percent in 2010 to 13 percent in 2012, while the percentage who viewed full TV shows online weekly on all types of devices had gone from six percent to 16 percent over the same timeframe. In a similar vein, Parks Associates last year found that the percentage of smart TV owners who watch online TV shows daily was at 32 percent.

For MVPDs (multichannel video program distributors) the push into multiscreen service delivery has been a defensive mechanism aimed at making sure pay TV services are available to serve this growing propensity to view TV and movies on connected devices. Certainly the program providers, too, have participated in agreements to make their content available to authenticated MVPD subscribers with the same goals in mind.

But, as the subscription model proves to have legs amid growing ad revenues in the OTT domain, the content owners are also looking at ways to generate more revenue through online offerings independent of MVPD affiliations. In fact, Hulu, with a reported three million plus subscribers to its premium offering through Hulu Plus, is threatened by its own success as partners in the venture, including Disney, NBC Universal and Fox, push to free themselves from exclusive licenses in order to make the same high-value content available through other outlets as well.

The flexibility to exploit content distribution opportunities wherever they can be found requires a change in how content is handled at the sources, not only lowering the costs of delivering secure streams to every type of device but minimizing dependence on third parties to execute on the ever more complex technical requirements. As previously reported, over the past year TV programmers have been beefing up and streamlining OTT operations with an eye toward funneling existing and newly developed channels of programming into whatever distribution conduits make business sense, whether directly to consumers over the Internet, through Web aggregators like Hulu or in conjunction with MVPDs’ TV Everywhere initiatives.

With sufficient flexibility to put together niche channels with compelling appeal to certain audiences from their deep reservoirs of content in whatever mixes are suited to their contractual obligations, programming suppliers would have an opportunity to create far more varied programming options than they can within the restrictive multichannel TV environment as well as to maximize exposure of established programs across multiple outlets to whatever extent their deals with MVPDs allow. Now, with such content workflow management capabilities made possible by advances in the technology platforms that support on-the-fly aggregation, transcoding and device-specific secured streaming of new and archived programming in whatever combinations they choose, programmers are preparing to execute on these possibilities.

One sign of what’s in store comes from white label online video publisher thePlatform, which has responded to programmers’ demands for such flexibility with a suite of new “smart workflow” features to simplify video preparation and publishing across multiple formats and devices. Now, says Marty Roberts, senior vice president of sales and marketing at thePlatform, distributors running their workflows through thePlatform’s mpx publishing system will have click-and-execute access to suppliers such as Elemental and Harmonic for next-generation transcoding, Aspera for fast-file transfer and Akamai for its latest HTTP ingest technology on the recently launched Sola Media Solutions portfolio.

“Multiscreen video publishing has never been more complex, and coordinating the workflow is the key to success,” Roberts says. “When you look at the proliferation of devices with different aspect ratios, file formats for delivery and security protocols you see an explosion in the number of files you have to manage for each title. You may need 20 to 25 different files including caption files, different language versions, thumbnails to manage and set up for consumer to have great experiences, etc.”
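The file counts Roberts describes are simple combinatorics: each packaging format, bitrate rung and language multiplies the assets to manage. The specific formats, renditions and languages below are illustrative assumptions chosen to land in the range he cites:

```python
# Why one title balloons to 20-plus managed files: every packaging format,
# bitrate rung and language multiplies the count. The specific values here
# are illustrative assumptions, not thePlatform's actual configuration.
from itertools import product

packaging = ["HLS", "Smooth", "DASH"]   # per-device streaming formats
renditions = ["1080p", "720p", "480p"]  # bitrate-ladder rungs
languages = ["en", "es"]                # audio tracks

streams = [f"{p}/{r}/{lang}" for p, r, lang in
           product(packaging, renditions, languages)]
extras = ["poster-thumbnail", "en-captions", "es-captions"]

print(len(streams) + len(extras))  # 21 managed files for a single title
```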

Nothing is more important to content owners’ ability to activate new business opportunities than the gains achieved with transcoding and streaming systems. “When we looked at this space a couple of years ago we thought transcoding was moving to a commoditized software play without a lot of differentiation,” Roberts says. Now, with the challenges posed by device proliferation, “to their credit these vendors have stepped up and provided some of the most innovative advancements to meet these needs.

“With packaging and encryption all bundled in,” he adds, “they’re able to process content faster than real time. A couple years ago we didn’t think these capabilities were possible.” thePlatform has integrated the Elemental and Harmonic systems to work seamlessly with customers’ workflows on the mpx platform, and it’s looking at support for other transcoding suppliers on mpx as well, including Digital Rapids, Envivio, RGB and Telestream.

Another major requirement underlying ambitious business plans of content suppliers is the ability to instantly access, transfer and ingest files across multiple locations in a dynamically changing distribution environment. In essence, Roberts says, the new paradigm in content distribution is to instantly “grab files from storage and transform them to meet the requirements of all downstream outlets and set them up for delivery to those outlets.”

“We move a lot of files around the network, from customer storage to thePlatform for transcoding, sometimes from our servers to content delivery networks, syndication partners, etc.,” he explains. “Our customers can now use Aspera to move content into thePlatform, and we’ve talked with a number of CDN suppliers about using Aspera to move content to ingest on their systems. Their technology does it in a secure and reliable way that’s much faster than FTP or other traditional protocols.”

The new requirements introduce new challenges that must be addressed through the smart workflow system as well, Roberts notes. “You start to run into some interesting situations,” he says. “For example, what happens if the eighth file set up for a particular title has an error and the other nine are fine? Is the system smart enough to recode just one without starting all over again with all ten?”
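The selective-recovery logic Roberts describes can be sketched in a few lines of Python. This is a minimal illustration, not thePlatform’s actual system; `transcode_title` and the caller-supplied `transcode` function are hypothetical names. The point is that only the failed rendition is reprocessed, rather than the whole set for a title.

```python
# Hypothetical sketch: re-run only the renditions that errored out,
# rather than re-transcoding all ten files for a title.

def transcode_title(renditions, transcode, max_retries=2):
    """Transcode each rendition; retry only the ones that fail.

    `transcode` is an illustrative callable returning True on success.
    Returns the set of rendition names still failing after retries.
    """
    failed = set(renditions)
    for _ in range(max_retries + 1):
        if not failed:
            break
        # Only the failed subset is reprocessed on each pass.
        failed = {name for name in failed if not transcode(name)}
    return failed
```

In the scenario Roberts poses, a transient error on the eighth file would trigger exactly one extra transcode call for that file and none for the other nine.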

Or, to cite another nuance, content owners need to be able to manage all these different files for each title in the context of what the business arrangements are with each downstream distributor. “Maybe all the content going to our website gets a higher priority than what you publish to YouTube, so you have to be smarter about setting up and prioritizing your modes of distribution within the workflow,” he says.

Streamlining metadata management is another major requirement. “You used to have to go back and re-transcode your metafiles to, say, set them up for the Xbox,” Roberts notes. “Now we can analyze the files we have and package them up for streaming with the metafiles to the Xbox without going through those added steps. We’re really reducing the amount of work that has to be done when we look at the capabilities of these new transcoding engines.”

Another technological advance with the potential to further buttress the monetization opportunities for online distributors is the soon-to-be adopted next-generation video compression standard, H.265, also known as HEVC (High Efficiency Video Coding). With the anticipated ratification of the latest draft from the ISO’s MPEG committee, H.265 will quickly enter the commercial mainstream, bringing with it a near doubling of compression ratios in comparison to H.264 AVC (Advanced Video Coding).

This is a landmark moment for online video distribution, especially for mobile, where the confluence of increasing bandwidth on LTE and reduction in bit rates on H.265 has explosive potential, notes Keith Wymbs, vice president of marketing at Elemental. “You’re able to get a very high quality experience at a very low video bit rate to any device of your choice over a wider variety of access networks, whether you’re at a Starbucks, on public transportation or roaming throughout your home,” Wymbs says. “That creates a type of ubiquity that a consumer is able to take advantage of regardless of the stream they’re on.”

Another transformative factor driving new business opportunities for online distributors is the cloud. Elemental, for example, is offering cloud-based support with its core technology to lower customers’ costs, Wymbs notes. “We think cloud over time will become the next kind of technology evolution that really changes things and makes things even more flexible for the premium content providers that are delivering multiscreen offerings,” he says.

Cloud-based support, of course, has always characterized thePlatform’s model. Now, Roberts notes, customers can use the new workflow system to more easily leverage multiple cloud tie-ins such as might come into play if a customer is relying on Elemental’s cloud for transcoding and thePlatform’s for all the publishing components. And customers with their own security and transcoding infrastructure can exploit the functionalities brought into play by thePlatform’s smart workflow on mpx.

“Some customers who don’t have their own transcoding farm take advantage of our cloud transcoding,” Roberts says. “Others who want to manage their security requirements behind their own firewalls have invested in their own transcoding solutions.

“Our remote media processor – RMP – sits remotely at the customer’s location and calls back into mpx for instructions on what to do,” he continues. “mpx says take this file and pass it to Elemental, and when that processing is done move it out to Akamai. We can work in a world where the vast majority of processing is operating in our cloud but can also work with customers in a hybrid fashion.”
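The hub-and-spoke pattern Roberts describes, a remote processor taking ordered instructions from a central publishing system and dispatching each step to a local integration, can be sketched roughly as follows. The names here (`run_workflow`, `fetch_instructions`, the handler keys) are illustrative stand-ins, not mpx or RMP APIs.

```python
# Hypothetical sketch: a remote processor polls the central system
# for an ordered list of steps (e.g. transcode, then hand to a CDN)
# and dispatches each step to a locally registered handler.

def run_workflow(fetch_instructions, handlers):
    """Poll for workflow steps and dispatch each to its handler.

    `fetch_instructions` returns dicts with "action" and "payload";
    `handlers` maps an action name to a local integration function.
    """
    results = []
    for step in fetch_instructions():
        action, payload = step["action"], step["payload"]
        results.append(handlers[action](payload))
    return results
```

The design choice this mirrors is that the central system owns sequencing while the heavy processing can run anywhere, in thePlatform’s cloud or behind a customer’s firewall, simply by registering different handlers.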

The options will continue to expand as customers seek to bring in new vendor partners, Roberts adds. “We’ll continue to invest in those integrations as our customers require them,” he says.

“Because everyone is using standard Web protocols and open documented APIs these integrations have gone really, really well. We’re able to build new plugins in two to three weeks, including load and stress testing.”

The cloud is a big part of Akamai’s strategy with its new Sola Media Solutions platform. “Bringing seamless television experiences across devices is a great market opportunity that requires content providers to tackle major challenges including platform fragmentation, monetization, buffering between content and ads, and understanding who is watching the content,” says Jeremy Helfand, vice president of monetization at Adobe. Leveraging the cloud greatly mitigates the costs, he adds.

The vendor’s offerings include cloud-based transcoding for on-demand content and stream packaging that’s designed to adapt a single file or live stream on-the-fly for delivery to multiple viewing devices, Helfand explains. With support for multiple levels of content protection the architecture is designed to match content protection levels and monetization strategies to specific content and target audiences, he says.

What all this adds up to is a transformation in the monetization opportunities associated with OTT distribution of premium content. Technology advances that enable quality-of-experience suitable for TV-caliber advertising and subscription services in combination with cost-effective means of achieving ubiquitous access are freeing all players to create business models that maximize returns across all outlets.


Advances in Video QoE Control Facilitate SPs’ Efficiency Goals

Steve Liu, VP, video network monitoring, Tektronix

December 11, 2012 – Tektronix has taken ground-breaking steps toward strengthening video service providers’ ability to meet rigorous quality standards through use of better tools at interfaces with transport backbones and at the edges of the network.
To overcome limited monitoring capabilities at points where video is handed off to regional headends, Tektronix has introduced monitors capable of comprehensively identifying and diagnosing quality impediments on programming feeds operating at up to 3 gigabits per second. At the same time, the company has issued a new edge device, the Sentry Edge II, which detects RF modulation and transport stream errors across multiple QAM channels simultaneously, allowing technicians to quickly identify potential problems before they impact subscribers’ quality of experience.

“Both of these products are industry firsts,” says Steve Liu, vice president of video network monitoring at Tektronix. “QoE (quality of experience) monitoring at 3 Gbps is three times the previous bitrate capacity for monitoring devices, and our Sentry Edge II is the first RF quality monitoring device capable of remotely monitoring up to eight channels concurrently.”

The need for QoE monitoring at up to 3 Gbps stems from the growing use of 10 Gbps fiber transport to deliver video to headends, where the handoff at any point might include 500 to 1,000 live video programs, including those earmarked for use with switched digital video systems. Before the addition of 10 gigabit interfaces to the firm’s Sentry and Sentry Verify product lines, the only products suitable for 10 gig networks offered very basic monitoring capabilities, such as registering packet loss, Liu notes.

“You need to look at more than packet loss to understand quality problems,” he says. “We provide the means to do both QoS and QoE at 3 Gbps.”

The firm’s Sentry platform incorporates advanced QoE monitoring capabilities while Sentry Verify handles QoS monitoring, he explains. By having the capacity to perform comprehensive monitoring of baseband signals before they enter transcoders, operators can identify source problems that could impact the viewing experience while avoiding “chasing false alarms” resulting from alerts about inconsequential packet losses, he says.

The new Sentry Edge II is meant to solve another challenge for operators, which is to more efficiently perform RF monitoring of QAM channel output at the network edges, including hubs and local headends. With the choice of four- or eight-tuner models, operators can speed the process of monitoring channels across up to 1 GHz of spectrum, Liu says.

“Exceptional RF analysis across all channels gives operators a proactive way to diagnose RF issues before customers complain,” he says. The rack-mounted server system provides Web interfaces for remote monitoring 24 x 7 and generates email notices if performance is subpar, which, as Liu notes, is a vast improvement over the traditional mode of performing QAM signal spot checks on RF analyzers. If operators want to maintain steady monitoring of their most important premium channels, they can “park some of the tuners on those channels and perform round robin monitoring on the rest,” he says.
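The parked-tuner pattern Liu describes can be sketched as a simple scheduler. This is an illustrative sketch only, with hypothetical names (`plan_scan`); a real Sentry Edge II deployment would handle channel plans and dwell times far more elaborately. Premium channels stay pinned to dedicated tuners while the free tuners cycle through the rest of the lineup in round-robin fashion.

```python
# Hypothetical sketch: pin some tuners on premium channels and cycle
# the remaining tuners through the rest of the QAM lineup.

def plan_scan(channels, parked, free_tuners, cycles=1):
    """Return per-cycle (tuner, channel) assignments for free tuners.

    `parked` channels are assumed to be held by dedicated tuners and
    are excluded from the rotation.
    """
    rotating = [ch for ch in channels if ch not in parked]
    plan = []
    pos = 0
    for _ in range(cycles):
        batch = []
        for tuner in range(free_tuners):
            batch.append((tuner, rotating[pos % len(rotating)]))
            pos += 1
        plan.append(batch)
    return plan
```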

Sentry Edge II performs high-quality MER (modulation error ratio) measurements up to 41 dB, compared to most traditional analyzers, which only register MER variations at 30 dB or below, Liu adds. That level is good enough for confirming there’s a problem at the user end but not for assessing whether degradations at the QAM output are likely to diminish performance at the set-top. “You need to know if the signal is degrading to 36 or 38 dB if you want to be proactive,” he says.
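The proactive thresholds Liu cites suggest a simple classification: a channel is already failing near 30 dB MER, so the useful alerts fire in the 36-38 dB band, before subscribers are affected. The function below is a hedged sketch of that idea; the threshold defaults come from Liu’s numbers, but the status names and structure are invented for illustration.

```python
# Hypothetical sketch: map a measured MER value (dB) onto a monitoring
# status using the proactive thresholds described in the article.

def classify_mer(mer_db, warn_db=38.0, degraded_db=36.0, fail_db=30.0):
    """Classify a QAM channel's MER reading for proactive alerting."""
    if mer_db <= fail_db:
        return "fail"       # subscribers are likely already impacted
    if mer_db <= degraded_db:
        return "degraded"   # act now, before set-tops suffer
    if mer_db <= warn_db:
        return "warning"    # trending down; worth watching
    return "ok"
```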

The new monitoring system allows engineers to quickly isolate problem sources by generating metrics on other variables as well, including RF lock indication (including an LED on the rear panel), input signal level, carrier-to-noise, carrier offset, pre-forward error correction bit error rate and others. “In-home monitoring is important, but you need information from the source in order to correlate issues and understand what’s going on in the network,” Liu says. “The ability to make sense of all the data is quite challenging.”


Quality Assurance Is Moving Multiscreen into Mainstream

Marty Roberts, SVP, sales & marketing, thePlatform

November 21, 2012 – Distributors of high-value video in growing numbers are taking a crucial step toward moving multiscreen services into the mainstream by embracing a variety of approaches to measuring and maintaining quality of experience.
Given uncertainties about ROI and business models, over-the-top and pay TV providers have been reluctant to invest in costly solutions that would give them the same level of quality assurance that has become a mainstay in cable, telco and satellite TV operations. But now, with consumers making clear there’s high demand for a premium TV experience on connected devices of every description, the stakes are too high to forego facing the quality assurance challenges posed by the adaptive rate (AR) streaming mode of distribution over IP networks.

“Miranda [Technologies] is really happy we made the investments in products for this space,” says Mitchell Askenas, director of business development at Miranda, which was recently acquired by cable hardware supplier Belden, Inc. “We were trying to get ahead of the curve, and it looks like it’s paying off.”

Miranda has quietly brought AR quality assurance (QA) into the mix of QoE and QoS issues addressed by its iControl QA system over the past year. The data generated from functionalities added to deal with the complexities of AR are analyzed in concert with data gathered from Miranda’s probes and other network resources to pinpoint and analyze video performance across the network, resulting in the same level of quality assurance for AR streamed content that can be achieved for traditionally delivered pay TV.

OTT suppliers have been especially focused on the QA issue as they seek to provide content useful for TV Everywhere services offered through their multichannel video programming distributor (MVPD) affiliates, Askenas notes. Moreover, programmers are starting to sell advertising inventory unique to the multiscreen streams, which makes QA essential, he adds.

“We’re also seeing a lot more interest from cable operators,” he says. “They want to know what each user experience looks like rather than simply relying on packet analysis.” The reference is to the difference between traditional IP deep packet inspection (DPI) measuring packet losses and delays and deep content inspection (DCI), which looks at what’s happening within the video frames and across frame sequences.

“This year has been a time for learning about the options for our customers,” Askenas says. “I think next year is when we’ll see significant sales.”
A New Bellwether

One important bellwether to the trend is the recently announced decision by white label video publisher thePlatform to provide analytics capabilities distributors can use to turn raw data coming in from different points of the network into a coherent picture of what’s going on. “Our customers are using adaptive streaming, because it delivers a better overall experience for users accessing content over broadband networks,” says Marty Roberts, senior vice president of sales and marketing at thePlatform. “But with the variations in degrees of quality from different CDNs based on the different ways they handle streaming modes like HLS (Apple’s HTTP Live Streaming), Smooth (the Microsoft streaming mode) and Adobe’s HDS (HTTP Dynamic Streaming), they need to be able to analyze the overall experience their customers are getting from all these suppliers.”

That’s a tall order. “Tracking and managing QoS around AR is a little bit harder than it is for traditional modes of distribution,” Roberts says. “There are different formats and protocols and different encryption schemes, so there are technical challenges to monitoring and understanding what the quality is for each user experience.”

To address these challenges thePlatform has partnered with Conviva, whose Conviva Viewer Insights video analytics capabilities will be offered as an integrated component of thePlatform’s mpx video publishing system to provide an additional layer of dynamic reporting within the mpx console at no extra cost to customers. This will allow publishers to quickly access real-time statistics related to the consumer experience, engagement and the relationship between high-quality viewing and audiences, Roberts says.

Now that AR has been widely embraced as the distribution mode for TV Everywhere on the part of big pay TV operators and media companies, maintaining QA “boils down to being good business for them,” Roberts notes. “We’ve seen data showing that if a video takes longer than two seconds to start streaming you will see a real drop off in viewership. Assuring good user experience keeps viewers engaged for longer viewing times, resulting in more ad avails to support monetization as well as higher viewer satisfaction.”

Intrinsic to the partnership is the device-end data-gathering capabilities of the new default plug-in installed with the video players thePlatform provides. “Data from the user experience on each device is piped back to the Conviva servers for analysis and displayed in our console to give our customers a good understanding of what’s going on,” Roberts says. “It’s delivered in a standard report, so there’s no need for customization.”

Conviva’s analysis of this data displayed in mpx will allow content distributors to determine video performance and its impact on viewer engagement across multiple types of video players and streaming protocols, notes Conviva CEO Darren Feher. “The joint solution will allow thePlatform’s customers to see exactly what every single viewer experiences, at the precise moment it happens, providing actionable intelligence to enhance people’s viewing experiences and ultimately improve online video for both viewers and publishers,” Feher says. Metrics include audience-based quantification of video quality on viewer engagement and in-depth diagnostics into video quality issues across CDNs, CDN regions, ISPs and viewer host machines.

The partnership also facilitates upselling thePlatform’s customers to the Conviva Precision Video solution, which utilizes the flow of analytics data to optimize quality of experience by maintaining what Conviva calls “preemptive, seamless stream adjustments” on content as it’s delivered to each user. “Precision allows the client to make real-time decisions that optimize the quality of service,” Roberts says.

“The client says, ‘I’m getting a bad stream from this node in this CDN so let’s switch to another node or to another CDN,’” he explains. “The client doesn’t care what server it’s talking to. If one chunk happens to be coming from one server and the next chunk is from another CDN, that’s fine.”

The reference is to how AR employs a “pull” mode of distribution that is altogether different from the “push” mode of traditional digital TV. Every few seconds an AR-enabled device, by referencing the bitrate options or “adaptation sets” listed for a given piece of content in a manifest file sent from an HTTP server, asks the server to send a fragment or chunk of streamed content at the optimal bitrate, depending on how much bandwidth is available at that moment and how much processing power the device has available for decoding the bit streams.
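The selection step at the heart of that pull loop can be sketched as below. This is a simplified heuristic for illustration, not the logic of any specific client (HLS, Smooth and HDS players each implement their own rate adaptation); `pick_bitrate` and its parameters are hypothetical names.

```python
# Hypothetical sketch: before each chunk request, pick the highest
# bitrate in the manifest's adaptation set that fits both measured
# throughput (with safety headroom) and the device's decoding capacity.

def pick_bitrate(adaptation_set, throughput_bps, decoder_cap_bps,
                 safety=0.8):
    """Return the best sustainable bitrate (bps) from the manifest.

    `safety` leaves headroom so throughput dips don't drain the buffer.
    """
    budget = min(throughput_bps * safety, decoder_cap_bps)
    viable = [b for b in sorted(adaptation_set) if b <= budget]
    # Fall back to the lowest rung rather than stalling entirely.
    return viable[-1] if viable else min(adaptation_set)
```

With this kind of loop, a device on a congested link simply starts requesting chunks from a lower rung, which is why quality can fluctuate without any packets being lost.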

The basic goal is to ensure video is streamed at the highest level of quality that can be sustained at any given time without dropping frames or triggering buffering interruptions in the flow. But AR introduces a wide range of processes that pose challenges to assessing audio and video quality that are new to the premium television environment. And those processes vary from one streaming mode to the next.

Not only are there far more parameters to measure in the AR transcoding, fragmentation and distribution process; there are multiple points in the network where those processes can go wrong, extending from source encoders through origin servers and CDN caching points to all the different types of unmanaged IP-connected devices possessed by end users. Moreover, additional complexities associated with content protection and monetization make the achievement of premium service-level quality assurance all the more daunting.

In some respects Conviva’s Precision Video solution avoids these complications by simply switching the call for chunks from one HTTP server to another throughout the streaming session so as to achieve the best possible quality flow from the CDN tier in the network. But this leaves unaddressed issues such as problems at origin servers or at the content sources that may be contributing to poor performance in the distribution network.

End-to-End QA Challenges

Systems designed to track sources of problems typically employ a combination of DPI and DCI techniques with probes positioned at different points, sometimes in conjunction with plug-ins at the device end such as thePlatform is providing. In some cases traditional DPI isn’t used, but virtually all participants in AR QA agree there must be a means of monitoring packet delivery to ensure an even rate, one that neither overloads the device buffer nor leaves too few buffered packets for smooth video rendering on the device.

This potential for jitter goes to the heart of the fragmentation process, where, when things are going well, a sequence of packets in the video stream is distributed in response to a device request every few seconds. QoS monitoring must be sensitive to which type of AR mode is in play, insofar as sequence durations vary by mode.

Some of the QoS monitoring process is a function of how the fragmentation server is performing; some pertains to what’s happening in transit from the fragmenter to the user device, and some of it is a matter of the time it takes for a device request to get to the fragmenter. Thorough QoS monitoring requires an understanding of what’s happening to interrupt smooth performance when such interruptions occur.

QoE as measured by DCI techniques pertains to the full range of functionalities that determine what the user sees and hears, which means that some aspects of assuring acceptable QoE in the AR domain are the same as what’s required for QoE in traditional premium TV QA. DCI looks at the video stream on a frame-by-frame basis to identify any problems in the encoding process such as blurring, blocking, tiling and splicing errors, taking into account the location and size of impairments as well as their duration and frequency. Gauging audio performance, of course, is also a part of this process, including now the measurement of volume changes between programming and ads to ensure conformance with the Commercial Advertisement Loudness Mitigation (CALM) Act.

DCI, however, gets a little harder in the AR arena compared to legacy pay TV services that employ MPEG-2 compression. Whereas MPEG-2 applies encoding algorithms to fixed-size macroblocks of pixels within each frame, H.264 encoding employs variable-sized macroblocks. This greatly complicates detection of tiling or macroblocking, which occurs when one or more image components within a frame are blurred or delivered as a single color block.

Further complicating matters is the fact that, with AR transcoding, there are multiple streams for each piece of programming that must be monitored with respect to ancillary content feeds, such as audio/video synching and synchronization of closed captioning. This even goes to the need to assure proper synching of alternative language audio or captioning when transnational content distribution is involved.

An important element of QoE that’s unique to AR is the need to go beyond QoS monitoring of streaming performance to track whether the bitrate fluctuations driven by bandwidth availability stay within acceptable bounds. In other words, if bandwidth is persistently limited to the point where the AR system is sending out sub-par quality video, as might happen when an HD TV set receives a stream at a persistently low rate that results in sub-par resolution, the user is not getting a good experience, even though the QoS measures report everything is fine.
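One simple way to operationalize that check, sketched here with hypothetical names (`subpar_share`, a per-session segment log), is to compute the share of viewing time spent below the bitrate rung considered acceptable for the display. A session can report zero stalls and zero packet loss yet still fail this test.

```python
# Hypothetical sketch: flag sessions whose delivered bitrate sat below
# an acceptable rung for too large a share of viewing time, even when
# conventional QoS metrics look clean.

def subpar_share(segments, min_acceptable_bps):
    """segments: (duration_seconds, bitrate_bps) pairs for one session.

    Returns the fraction of viewing time below the acceptable rate.
    """
    total = sum(duration for duration, _ in segments)
    low = sum(duration for duration, bitrate in segments
              if bitrate < min_acceptable_bps)
    return low / total if total else 0.0
```

An operator might, for example, alert on any HD-display session where this fraction exceeds some policy threshold over a viewing window.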

The ability to verify performance of content protection mechanisms on AR premium content is also essential to the overall QA regime. Just as different types of devices operate natively with different types of AR fragmentation systems, they also come equipped to support different types of digital rights management (DRM) systems. This means that each fragment over each AR stream must be assigned an encryption key that will communicate with the embedded device DRM client.
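The bookkeeping implied by that requirement can be sketched as follows. This is a deliberately simplified illustration: `assign_keys` and the wrapper functions are hypothetical stand-ins, and real systems would involve PlayReady, Widevine or similar license-server logic rather than a simple mapping.

```python
# Hypothetical sketch: encrypt each rendition's fragments with a
# content key, wrapped separately for whichever DRM client the
# requesting device embeds.

def assign_keys(fragments, drm_wrappers, content_key):
    """Return {(fragment, drm): wrapped_key} for every supported DRM.

    `drm_wrappers` maps a DRM name to an illustrative key-wrapping
    function standing in for real license-acquisition logic.
    """
    return {
        (frag, drm): wrap(content_key)
        for frag in fragments
        for drm, wrap in drm_wrappers.items()
    }
```

A QA system verifying DRM, as described above, would then confirm that each (fragment, DRM) pairing yields a key the corresponding device client can actually resolve.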

Premium service quality assurance will have to provide verification that the DRM processes are working. These include not just the encryption mechanisms with appropriate synching of keys to DRMs but also enforcement of usage policies tied to rights metadata and to authorizations assigned by back-office systems to individual users.

The Miranda Solutions

“There are so many moving parts you must gather information from if you want to identify where the problems are in the network and how to fix them,” says Miranda’s Askenas. “QoS will tell you where something is going wrong, but if you have certain fixes in place like forward error correction or buffering mechanisms, it won’t tell you whether the customer is having a good or bad experience. And QoE doesn’t tell you whether you have a QoS problem.

“We rely on telemetry from network elements and the components Miranda provides to get to sources of problems,” he continues. “The first issue is to determine exactly what the source of the problem is, which requires correlating a lot of data. You might have an alarm saying something is wrong with an encoder, but you have to determine which of hundreds of streams an operator is delivering over the top are affected.”

The second issue is, “How do you verify whether the cause of the alarm is actually impacting customers? The priority is to concentrate on fixing poor customer experience.”

Then, he says, “Once you know there’s an impact on customer experience, you need to know exactly what that experience looks like. Now you know what the source of the problem is, what its significance is to end users and what precisely needs to be fixed to rectify the situation.”

And beyond all this the data must be aggregated into reports that are useful at the management level. “You need an overall sense of the quality of your service, what your uptime performance looks like, whether you are fulfilling on your commitments to program suppliers, advertisers and subscribers,” Askenas says.

Miranda’s iControl relies on whatever sources of telemetry in the network can be used to perform analysis on QoS, whether that data is coming from DPI probes, routers, cable modems or devices. “There’s a tremendous amount of data to draw from; the trick is to aggregate and analyze it to provide an accurate and thorough measure of QoS end to end,” Askenas says.

“We’re not providing the DPI probes; our focus is on providing the data you need for a deep QoE analysis,” he adds. “We have probes that sit on the video network to go deep into the video and audio analysis of the frame. We look at the true customer experience by identifying things at the macroblock level like pixelation, frame freezes and black spaces in the video, audio issues, performance of closed captioning, whether the metadata is included in the stream.”

One set of probes, the firm’s Kaleido multiviewers, sit beyond the cache points to deliver DCI readings on all the content flowing out of the local cache. The other probes, part of the firm’s Densité infrastructure equipment, look at encoder and origin server outputs, providing a view beyond QoS measures on encoding to look for things such monitors can’t detect.

The QoE analysis also covers critical aspects of ad performance. “We can use fingerprinting technology to coordinate with the ad schedule and determine if the right ad is playing out,” Askenas says. “Our iControl fingerprinting process is built into our hardware. The ad insertion management module is part of the video management system and collects data from various elements in the network. It sits on top of all the functionalities, including the QoE mechanisms as well as fingerprinting readouts, to correlate and provide a clear view of what’s going on with ads.”

Tying all these capabilities into QA on AR streams, Miranda also adds AR-specific metrics having to do with things like fragmentation and buffering. “We’ll abstract an alarm that might be saying there’s too much fragmentation happening on an ESPN stream and analyze whether those fluctuations are really impairing the viewer experience,” Askenas notes. “We’ll look at whether encoders are over-feeding device buffers.”

Rather than relying on its own client software to obtain data from devices, Miranda intends to tap telemetry feeds intrinsic to the players running on connected devices. “The players wrap the decoding with the infrastructure, so we can use iControl to collect and correlate those statistics,” he says. “We aggregate that data with everything else to see where the problems are and what needs to be done to fix them.”

Belgacom, DISH and other Initiatives

Another supplier reporting rising demand for AR QA solutions is Paris-based Witbe, which recently added Belgacom, the incumbent telecom operator in Belgium, to the list of service providers using the firm’s Multiscreen Quality Manager solution. Employing what it calls “QoE Robots,” Witbe’s platform uses connections to Belgacom set-top boxes in several Belgian cities to log onto Belgacom’s TV Everywhere portal. The robots use Belgacom’s TV Everywhere app for iOS and Android to “watch” live TV and order on-demand content across all devices, explains Witbe president Jean-Michel Planche.

“Delivering multiscreen video services can be tricky as one does not control the networks nor the devices used to watch video streams,” Planche notes. “Controlling the quality of experience is crucial to ensure success, protect brand reputation and secure revenues.”

Belgacom engineers, marketers and managers have access to analytic dashboards reporting KPIs (key performance indicators) such as channel change time, video stream quality, portal responsiveness, the delay to launch the app and log into the portal, success ratio when buying on-demand content, etc. KPIs are available per device type and geographic location, enabling management to focus troubleshooting actions and measure the impact of infrastructure investments on the quality delivered.

At the start of the year Witbe contracted to supply its QA solution for DISH Network’s new broadband TV Everywhere service, marking one of the largest deals yet publicized in the AR space. Using Witbe’s QoE Robots, DISH can evaluate service availability, measure application performance, check content integrity and measure perceived quality of video streams delivered to computers, smartphones and tablets, Planche says.

The robots run continuous tests on the DISH broadband feed by replicating user actions on end users’ devices through Wi-Fi connections or 3G/4G cellular networks, Planche explains. The robots interface with multiple types of devices operating in AR or other modes to log into servers, browse program guides, watch live and on-demand TV, configure and access DVR recordings and more.

Witbe, with 12 years’ experience in Europe and two in North America, has other, unannounced North American customers as well, including Comcast and Cogeco of Canada. Comcast is using the platform to run tests of its multiscreen service with the iPad and other connected devices while Cogeco is running set-top box tests, Planche says.

Declaring the “classical market for probes has hit the wall,” Planche describes the Witbe QoE Robot platform as the source of comprehensive intelligence distributors require to run premium services in the user-centric IP services environment. “In the IP world you can have a good backbone and bad service or a bad backbone and good service,” he says. “QoS without collaboration with what the user is seeing is of little value.”

But that’s not to say QoS is not important to the value of what Witbe brings to the table. At the analytics level Witbe correlates the intelligence gathered on QoE by its probes with the QoS metrics other sources deliver to precisely identify the nature and sources of problems. “Our technology is designed to understand the quality of the content the operator is delivering at each strategic point of the network, from the point of ingestion, across the backbone and over the last mile,” Planche says.

While end-to-end QA is the ultimate goal, operators can start slowly with implementation of the Witbe platform to begin gaining control over the AR experience as they explore where they want to go with multiscreen services. “We have different small operators, such as telecom operators in small states like Monaco and Macao, where we can do clever things with just a few robots,” he notes.

Whatever level of penetration a provider wants to reach, the Witbe approach to AR QoE does not constitute a big investment, he adds. “We’re delivering information they never dreamed they could get with such a small investment,” he says.

As a growing number of vendors offer solutions to bolster QA, the ecosystem in general is moving in the direction of ever better performance metrics. As thePlatform’s Marty Roberts notes, now that AR is “table stakes” there’s general recognition that an old saw holds for the multiscreen domain much as it has for any other aspect of network service operations: “You can’t improve what you can’t measure.”

Notably, he adds, CDN suppliers are now generating QoS metrics that can be fed into analytics frameworks like the one thePlatform is leveraging from Conviva. “Akamai is the best example of a CDN supplier with robust analytics tools for measuring QoS experience,” he says. “But all of them have some level of QoS metrics.”


Cloud-Compatible Workflows Spur Content Tech Integration

Brick Eksten, president, Digital Rapids

October 29, 2012 – New approaches to enabling flexibility in technology integration for content distribution through cloud-compatible workflow management systems are gaining traction as linchpins to over-the-top and TV Everywhere expansions of the premium TV sector.
For example, after introducing its Kayak workflow system earlier this year (see March, p. 19), Digital Rapids is reporting its decision to support multi-technology workflow integration on a platform that runs across on-premises and cloud-based resources is paying off with a growing lineup of ecosystem partners. Several dozen technology partners are now making it easier for their customers to integrate with their solutions through Kayak, says Digital Rapids president Brick Eksten.

“We now have a majority of vendors out there with concrete plans on where they want to go and what they want to do with Kayak,” Eksten says, noting the lineup spans suppliers of codec technologies, quality control tools, audio loudness management, digital rights management and more. “We also have large partners on the integration side who sell platforms that run cable systems and studios.”

Two major suppliers of cloud support services, Microsoft with its Azure platform and Amazon with Amazon Web Services (AWS), have moved to cloud-compatible workflow systems as well. This fall Microsoft introduced Workflow Manager 1.0 as the next-generation workflow for its SharePoint collaboration software, which is a core component of Azure. Earlier this year, AWS launched Simple Workflow Service to address key challenges that have impeded complex multi-task implementations of applications running on AWS.

In a blog post Amazon CTO Werner Vogels offers a candid description of the issues that prompted the company to introduce the new workflow management system and likely will lead to growing use of these sorts of workflow integration systems everywhere, including in the multiscreen services space. As suppliers turn to asynchronous and distributed processing models to support independent scalability across loosely coupled parts of their applications, they must develop ways to coordinate multiple distributed components, incurring increased latency and unreliability inherent in remote communications, Vogels notes.

“Today, to accomplish this, developers are forced to write complicated infrastructure that typically involves message queues and databases along with complex logic to synchronize them,” Vogels writes. “All this ‘plumbing’ is extraneous to business logic and makes the application code unnecessarily complicated and hard to maintain.”

Vogels says Amazon’s Simple Workflow service (SWF) makes it easy for developers to architect and implement these tasks, run them in the cloud or on premises and coordinate their flow. SWF manages the execution flow such that the tasks are load balanced across registered workers, inter-task dependencies are respected, concurrency is handled appropriately and child workflows are executed, he adds.
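The coordination “plumbing” Vogels describes can be sketched with a toy dispatcher that honors inter-task dependencies and spreads tasks across registered workers. This is an illustration of the problem SWF automates, not the SWF API itself; the task names and round-robin balancing are hypothetical.

```python
from collections import deque

def run_workflow(tasks, deps, workers):
    """Toy coordinator: run tasks in dependency order, assigning each
    to a worker round-robin. `tasks` maps name -> callable; `deps`
    maps name -> set of prerequisite task names."""
    done, queued, order = set(), set(), []
    ready = deque(t for t in tasks if not deps.get(t))
    queued.update(ready)
    i = 0
    while ready:
        name = ready.popleft()
        worker = workers[i % len(workers)]  # naive "load balancing"
        tasks[name]()                       # execute the task body
        done.add(name)
        order.append((worker, name))
        i += 1
        for t in tasks:                     # unlock tasks whose deps are now met
            if t not in done and t not in queued and deps.get(t, set()) <= done:
                ready.append(t)
                queued.add(t)
    return order

# A hypothetical content workflow: package waits on transcode and QC.
log = []
tasks = {n: (lambda n=n: log.append(n)) for n in ["ingest", "transcode", "qc", "package"]}
deps = {"transcode": {"ingest"}, "qc": {"ingest"}, "package": {"transcode", "qc"}}
run_workflow(tasks, deps, ["worker-1", "worker-2"])
assert log[0] == "ingest" and log[-1] == "package"
```

A real deployment replaces the in-process calls with remote workers polling for tasks, which is exactly the state tracking and synchronization Vogels argues should not be hand-built per application.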

SharePoint Workflows for SharePoint Server 2013 performs similar functions for Azure cloud-hosted customers, the key difference, of course, being that the new Workflow Manager 1.0 is tied specifically to use of the SharePoint platform. According to Jürgen Willis, principal group program manager for SharePoint, the new system allows SharePoint customers to host and manage these long-running workflows with support for deployments that require multi-tenancy support, scalability and high availability.

“Tenants in Workflow Manager may represent the various departments of an enterprise or the customers of an ISV (independent software vendor),” Willis explains in a recent blog. “Multiple Workflow Manager nodes can be joined together into a farm deployment to scale the service.”

Microsoft has added new capabilities for managing system tenants, activities and workflow instances. “This includes repository and version management for published activities and workflows,” Willis says. “Messaging and management are clearly two critical areas for building and maintaining workflow solutions, and this is an area where we will continue to invest as we evolve this technology.”

Whereas SharePoint is based on a service-oriented architecture (SOA), Digital Rapids has positioned Kayak to make it easier to integrate a multitude of applications into a customer’s workflow by avoiding the need to individually integrate each process-specific component of each application onto the SOA system. Kayak provides a template for designing workflows that allows customers to draw on specific processing components as elements in a catalog that can be activated on servers and assigned specific policies, Eksten explains.

Applications are prototyped with the development of the workflow, tested and deployed to be utilized as dictated by whatever workflow processes are brought into play for any given piece of content – depending, for example, on whether the content is to be delivered live or ingested into storage, what the encoding resolutions are, whether metadata should be overlaid or embedded, etc. “We’re blueprinting the workflow to tell the box (server) which processes to pull in and how to run them,” he says.

“If you think of transcoding, formatting, rendering and other steps in distribution, you have to keep everything working together, which is very hard to maintain,” he continues. “That’s the biggest complaint about SOA today. With Kayak, if you want to add a box, you don’t have to think of how it has to be used. You point Kayak at that box and anything you’ve designed now runs on that blank slate. Provisioning is automated and completely dynamic.”
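The catalog-and-blueprint pattern Eksten describes can be sketched in a few lines. This is an illustrative toy, not Kayak’s actual API; the component names and selection rules are hypothetical.

```python
# A catalog of processing components that can be activated on any server.
CATALOG = {
    "deinterlace":    lambda asset: "deinterlaced",
    "transcode_hd":   lambda asset: "hd_profiles",
    "transcode_sd":   lambda asset: "sd_profiles",
    "embed_metadata": lambda asset: "metadata_embedded",
}

def blueprint(asset):
    """A workflow template: choose catalog components per content attributes."""
    steps = ["deinterlace"] if asset["interlaced"] else []
    steps.append("transcode_hd" if asset["height"] >= 720 else "transcode_sd")
    if asset.get("metadata"):
        steps.append("embed_metadata")
    return steps

def run(asset):
    # Any box holding the catalog can execute the blueprint unchanged,
    # which is the "point Kayak at that box" idea in the quote above.
    return [CATALOG[name](asset) for name in blueprint(asset)]

assert run({"interlaced": True, "height": 1080, "metadata": {"title": "demo"}}) == \
    ["deinterlaced", "hd_profiles", "metadata_embedded"]
```

The point of the pattern is that adding a server requires no per-component integration: the blueprint, not the box, carries the knowledge of which processes to run.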

Along with facilitating workflow integration with third-party suppliers’ products, Digital Rapids has integrated the latest iterations of its solutions, including version 2.0 of its Transcode Manager media processing software and StreamZ Live Broadcast multiscreen encoder, with Kayak. At the same time, Eksten notes, beyond content-specific workflows, the Digital Rapids architecture allows customers to integrate back office and other IT workflows through Kayak to support a unified enterprise system that makes it easier to conduct business in today’s device-saturated market.

From the Digital Rapids partners’ perspective, integration into Kayak allows vendor partners to enable customers who have moved into the new workflow system to more easily address the kinds of problems cited by Amazon’s Werner Vogels. “It’s really managing all those virtual apps they have to manage as part of their overall solution,” Eksten says. “It can be a rifle-shot solution or an overall workflow. The beauty of integrating with Kayak is the integration works for them whether they use discrete workflows or integrate directly into the end-to-end customer workflow.”

Building a partner ecosystem of suppliers together with enhancing Digital Rapids’ own products to operate seamlessly across internal customer facilities and the cloud is crucial to drawing customers to Kayak. “The richness of the Kayak partner ecosystem is one of the platform’s key strengths, combining with its unique architecture to let our mutual customers quickly integrate new technologies into their operations while mixing and matching partner solutions to create the optimal workflows for their needs,” says Onkar Parmar, senior partnership manager for Kayak at Digital Rapids.

Comments from Kayak partners buttress this claim. “Digital Rapids’ Kayak platform allows our mutual customers to quickly and flexibly integrate Dolby Digital Plus premium multichannel audio encoding and Dolby’s loudness correction technology into powerful workflows to efficiently realize and differentiate their multiscreen offerings,” says Jean-Christophe Morizur, senior director of e-media professional solutions at Dolby Laboratories.

Similar high praise is offered by Venera Technologies, a supplier of quality control and other test and measurement tools. “The innovative Kayak platform provides a perfect opportunity for Venera to bring our QC technologies to Digital Rapids’ customers, enabling them to enhance their media production and delivery operations with content verification at various stages of their workflows,” says Fereidoon Khosravi, senior vice president of business development for the Americas at Venera. “The ease with which Kayak users can integrate our QC components into powerful workflows is simply amazing.”

Other participants in the Kayak partner ecosystem include Automatic Sync Technologies; BuyDRM; Corpus Media Labs; Digimetrics; DSB Consulting; Empress Media Asset Management, LLC; EZDRM; Hitachi Solutions; Ignite Technologies, Inc.; Interra Systems; Irdeto; Manzanita Systems; Minnetonka Audio Software; National TeleConsultants; PixelTools; R Systems Inc.; Screen Subtitling Systems; Signiant; Solekai Systems; Tata Elxsi, and VidCheck.

One of the early points of connection for use of Kayak in the premium services arena is UltraViolet. For distributors in the UltraViolet ecosystem having a workflow that can support all the points of interaction required for execution on the platform is essential, Eksten notes.

“We’re working with the studios and some of our partners to test on the UltraViolet workflows using Kayak to integrate into their business systems,” he says. “There’s a complex interaction between work performed by various technologies for UltraViolet, including encoding, DRM, multiplexing, as well as the need to integrate on the business side with metadata, registration and authentication.”


Transcoding Advances Intensify Debate over Hardware Strategies

Kevin Wirick, VP & GM, video processing, Motorola Mobility

October 20, 2012 – The vendor-driven battle over digital video encoding strategies has taken a new turn with the introduction of new hardware platforms touting massive processing capacity as software-based systems continue to post new gains in bitrate and distribution efficiencies.

Motorola Mobility and Imagine Communications are publicizing as-yet-unavailable transcoding systems running on purpose-built ASICs (application-specific integrated circuits) at unprecedented processing rates of 3 gigapixels and 20 gigapixels per second per rack unit, respectively, with prospects for major savings in power and space consumption as well as cost-effective approaches to expanding live multiscreen channel counts into the thousands. Meanwhile, transcoding systems designed to run on generic processors continue to make great strides, not only as a function of ever-greater processing power but as a result of advances in encoding and other software-based techniques.

Software-based systems like Elemental’s, which uses a combination of individual or hybrid CPUs and GPUs (graphics processing units), and Envivio’s, designed for Intel CPUs, have so far dominated the multiscreen streaming environment, prompting some traditional hardware-based encoder suppliers like Harmonic to develop software-based systems. But as multiscreen streaming moves from the over-the-top domain into the premium service provider space, Motorola and Imagine have gambled on hardware systems with massive processing capabilities meant to consolidate and cost-effectively expand the range of multiscreen streaming options to include all live TV channels, local broadcast as well as national, which can add up to two or three thousand channels in the case of a Tier 1 MSO.

At a moment when most operators haven’t even begun streaming live channels to connected devices, and when those that have are in most cases delivering only a handful of channels, there’s general agreement the channel count is going to go up amid a great deal of uncertainty about how that can be accomplished cost effectively. With all the devices in play, comprehensive coverage of Apple iOS and Android smartphones and tablets, PCs, Macs, game consoles and smart TVs requires up to 16 encoded profiles per live channel or on-demand file, meaning that handling all requirements for local as well as national programming from a regional headend could require capacity to generate many thousands of profiles at once.
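The scale implied here is easy to make concrete. The channel and profile counts below are the article’s figures; the arithmetic is simply illustrative.

```python
# Back-of-the-envelope scale of a Tier 1 MSO's live encoding job.
channels = 3000            # "two or three thousand" local plus national channels
profiles_per_channel = 16  # up to 16 encoded profiles per live channel
simultaneous_streams = channels * profiles_per_channel
print(simultaneous_streams)  # 48000 concurrently encoded streams
```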

Moreover, there’s a lot of processing required beyond the basic encoding of each stream for each type of device. The transcoder must be able to de-interlace each encoded NTSC file to progressive mode, add IDR (instantaneous decoder refresh) frames to enable SCTE 35-based ad insertion and perform GOP (group of pictures) alignment to ensure smooth transition between fragments sent from adaptive bitrate (ABR) streaming packagers.
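The GOP-alignment requirement can be made concrete: if every profile places an IDR frame on the same timestamps, ABR packagers can cut fragments that switch cleanly between renditions. A minimal sketch follows; the 2-second fragment cadence is an assumption for illustration, not a figure from the article.

```python
def idr_frames(fps, fragment_seconds, total_seconds):
    """Frame indices that must be IDR frames so every ABR profile
    cuts fragments on the same timestamps."""
    return [round(t * fps) for t in range(0, total_seconds + 1, fragment_seconds)]

# Two profiles at different frame rates share fragment boundaries in time:
p30 = idr_frames(30, 2, 10)   # 30 fps profile: [0, 60, 120, 180, 240, 300]
p60 = idr_frames(60, 2, 10)   # 60 fps profile: [0, 120, 240, 360, 480, 600]
assert all(f60 == 2 * f30 for f30, f60 in zip(p30, p60))
```

Because the boundaries fall on the same timestamps in every rendition, a player can jump between bitrates at any fragment edge without a decode glitch.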

Proprietary hardware system advocates assert that pushing the envelope on hardware density and processing power serves to lower the amount of space and power consumed for a given volume of transcoding, far outweighing any cost penalty to be paid for proprietary hardware. Equally, if not more important, the super high processing power of an ASIC purpose built for encoding enables more efficient compression. No matter how many streams a stack of transcoder modules might deliver, the lower the bitrate per stream for a given level of quality, the greater the utilization of bandwidth, which is the most expensive commodity to cope with in the move to multiscreen services.

In fact, notes Kevin Wirick, vice president and general manager of video processing at Motorola Mobility, “the secret of the GT-3 is the latest video technology and our custom video processing algorithms that allow us to get the best video quality in a very small efficient package. An operator can now process a lot more video at different resolutions and provide a higher resolution for different screen formats using our product than with our competitors.”

For example, he explains, the company’s advanced video processing algorithms can exploit the latest processing capabilities of purpose-built ASICs to do much more motion prediction across multiple frames than was previously possible. “So if an operator can only get one megabit through their cable bandwidth and over Wi-Fi to your iPad, with the GT-3 you can have a higher resolution picture than using other people’s transcoders,” he says.

The ability to process video in a single rack unit at 3 gigapixels per second, more than tripling the highest levels of current-generation hardware-based encoders, translates into capacity to process the equivalent of about 48 1080p/30 HD channels. Input versus output configurations vary, depending on types of channels on the input side and the number of profiles per channel to be delivered from the box. Motorola is spec’ing the 1RU unit as supporting up to 24 input channels with up to 16 encoded profiles per channel on the output.

“Compared to server-based approaches we’re at about ten times more density,” Wirick says. “So we get about ten times more video with the same amount of power as somebody using an Intel-based x86 server would get to do adaptive stream transcoding.”

The GT-3 is slated for general availability in the first quarter. “We have interest from the top tier operators who are doing deployments now and are planning new services coming up over the next year,” he says.

Imagine Communications, which hopes to have its new super high-power transcoding product, dubbed “next:,” available for commercial deployments by the end of the second quarter next year, has been more focused on the hardware aspects than the algorithmic aspects of the platform at this point, acknowledges Chris Gordon, vice president for product and marketing at Imagine. “We’re still working on motion extension and mode decisioning tracking, but that’s not our primary focus,” Gordon says, noting the next: platform benefits from the major encoding advances that have made Imagine’s first-generation product a factor in over half the digital premium channel encoding performed in the U.S.

Along with motion extension, which is to say the predictive encoding processes referenced by Motorola’s Kevin Wirick, mode decisioning is one of the major areas of improvement in encoding efficiency enabled by more advanced processors. It’s a process by which the results of different decision paths are compared to determine what is optimal for a given level of resolution, thereby avoiding overuse of resources.

“We’ll continue to tweak our software capabilities,” Gordon says. “But right now our resources are devoted to supporting customer trialing and bringing the product to market on time.”

Imagine’s next: platform will be available in 2RU, 4RU and 10RU iterations. In an apples-to-apples comparison with the 1RU specs of the Motorola GT-3, Gordon notes the 20 gigapixel processing power of next: is the equivalent of 320 HD channels compared to the 48 represented by 3 gigapixels per second. What this means in terms of practical proportions of input channels versus output channels depends, as always, on the number of profiles supported on the output and whether HD or SD channels are in play.
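The gigapixel-to-channel arithmetic both vendors quote is easy to reproduce, treating a 1080p/30 channel as 1920 × 1080 pixels at 30 frames per second:

```python
# One 1080p/30 channel consumes 1920 * 1080 * 30 ≈ 62.2 megapixels/second.
pixels_per_hd_channel = 1920 * 1080 * 30

gt3_channels  = 3_000_000_000 // pixels_per_hd_channel    # Motorola GT-3, 1RU
next_channels = 20_000_000_000 // pixels_per_hd_channel   # Imagine next:

print(gt3_channels, next_channels)  # 48 and 321, matching the quoted ~48 and ~320
```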

In Imagine’s case there’s no limit on the number of video stream profiles per ABR group, which facilitates adjustments to ongoing changes in multiscreen requirements, Gordon notes. The platform also supports all current profiles, including 1080p60. And, like many transcoding platforms, it comes with support for packaging in multiple ABR streaming modes.

Imagine is able to race ahead with this kind of capacity on next-generation ASICs by virtue of its ability to leverage its accomplishments in software, says Richard Stanfield, the company’s CEO. “What the ASIC can’t do, we do in software,” he says. “We can take all the software from our last generation and create a new product, so our time to market is quick.”

The next: platform promises to open markets beyond North America for Imagine, Stanfield notes. “Our business has traditionally been with large Tier 1 North American MSOs,” he says. “This product takes us to the next level with the rest of the world where there’s a strong demand for linear transcoding in IPTV as well as cable. We’ll be able to price below the current market to capture market share.”

Imagine views IPTV operators’ need for gear to replace aging encoders deployed with initial rollouts six or so years ago as the lowest hanging fruit. Right behind that is the demand for multiscreen streaming support from both IPTV and cable operators here and abroad.

Gordon stresses the flexibility of the new platform when it comes to the type of hardware packaging it’s compatible with and the ways in which built-in storage can be employed. Because most of the firm’s MSO customers have deployed the first-generation platform on HP BladeSystem c7000 enclosures, the next: system will frequently be added as another blade on that chassis.

More generally, availability of the next: system with the Imagine ASICs embedded in PCI cards creates an opportunity to place the transcoding on edge servers operators are deploying to support their own CDN (content delivery network) infrastructures. In such cases, the 1 gigabyte of onboard storage in the 2RU version of the platform could be used to accommodate local time-shifted programming, Gordon suggests.

Advances at Elemental

Support for distributed as well as centralized transcoding architectures, of course, is a major selling point of software-based systems with their ability to leverage low-cost COTS (commodity off-the-shelf) servers. How those purported cost advantages stack up against the forthcoming Motorola and Imagine transcoding machines, given the density and power consumption benefits of the latter, remains to be seen.

But it’s clear the software system providers aren’t sitting still, even when it comes to gaining improvements that could impact MPEG-2 encoding for rapidly increasing volumes of on-demand content. Elemental, for example, which built its MPEG-2 encoding algorithms from the ground up as it has with MPEG-4, VC-1 and the emerging HEVC (High Efficiency Video Coding) standard, believes it can get the MPEG-2 rate to below 10 Mbps and possibly down to 8 Mbps without sacrificing quality. The result for an MSO aggressively expanding its VOD file count could be infrastructure savings approaching $1 billion, says Keith Wymbs, vice president of marketing at Elemental.

Along with encoding know-how, Elemental achieves a high level of performance efficiency on its core Linux-based Elemental Server platform through a unique blend of parallel processing utilizing Intel CPUs and GPUs from NVIDIA or the new Sandy Bridge hybrid CPU/GPU from Intel, resulting in a three to seven times density improvement over CPU-only systems, according to Elemental officials. The technology, rather than processing individual macroblocks within each video frame serially, processes all the macroblocks in a frame concurrently.
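The frame-level parallelism described here can be illustrated with a toy example that enumerates a frame’s 16 × 16 macroblock grid and hands all blocks to a pool at once rather than walking them serially. Real encoders do this on GPU hardware; the per-block “work” below is a placeholder.

```python
from concurrent.futures import ThreadPoolExecutor
from math import ceil

def macroblocks(width, height, size=16):
    """Enumerate the 16x16 macroblock grid of one frame (edges padded up)."""
    return [(x, y) for y in range(ceil(height / size))
                   for x in range(ceil(width / size))]

def encode_frame(width, height):
    """Hand every macroblock of a frame to a pool concurrently,
    instead of encoding block (0,0), then (1,0), and so on serially."""
    blocks = macroblocks(width, height)
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(lambda block: block, blocks))  # placeholder work
    return len(results)

# A 1080p frame has 120 x 68 = 8160 macroblocks (1080/16 rounds up to 68).
assert encode_frame(1920, 1080) == 120 * 68
```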

As previously reported (September 2011, p. 10), Comcast is using Elemental’s on-demand transcoding platform for its Xfinity online and mobile service initiatives. Trading out a previous encoding system, Comcast reduced the physical footprint for its Xfinity servers by 75 percent.

Where HEVC is concerned, while the standard is not slated to be completed until well into next year, Elemental believes it has an edge when it comes to having a product that will be ready for commercial deployment once the standard is finalized. “We’re watching the spec closely and implementing aspects as they stabilize,” Wymbs says, noting its implementations so far have achieved a 40 percent reduction in bitrates compared to H.264 (MPEG-4) bitrates. “Our customers will be able to implement the code with software upgrades on Elemental technology they deployed a year ago.”

Elemental also is now delivering a new product, Elemental Stream, which offers premium service providers a way to lower costs of high-volume streaming of live and on-demand content over their networks. Stream, which can be deployed at the encoding location or with CDN resources, allows content to be delivered from the transcoder in a single encoded video format for each bitrate profile by applying the specific DRMs and ABR formats to each user’s stream on the fly.

Elemental Stream supports Apple HTTP Live Streaming (HLS), Adobe HTTP Dynamic Streaming (HDS), Microsoft Smooth Streaming and MPEG-DASH and can apply content protection such as Microsoft PlayReady, Verimatrix VCAS and Motorola SecureMedia, Wymbs says, noting additional profiles could be added in response to new developments. The platform also supports SCTE-35 advertising triggers, closed captioning and subtitle conversion and allows international broadcasters and operators to associate a single video with multiple audio tracks.
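The just-in-time packaging idea described above, one encoded rendition per bitrate with the ABR container and DRM applied per request, can be sketched as a simple lookup. The format and DRM names mirror those listed in the article, but the device-to-format mapping rules are hypothetical.

```python
# Hypothetical device -> (ABR format, content protection) mapping.
RULES = {
    "ios":     ("HLS", "AES-128"),
    "xbox":    ("Smooth Streaming", "PlayReady"),
    "flash":   ("HDS", "Adobe Access"),
    "generic": ("MPEG-DASH", "PlayReady"),
}

def package(rendition, device):
    """Wrap a single encoded rendition on the fly for the requesting device,
    rather than pre-storing one copy per format/DRM combination."""
    abr, drm = RULES.get(device, RULES["generic"])
    return {"source": rendition, "container": abr, "drm": drm}

stream = package("movie_2400kbps.h264", "ios")
assert stream["container"] == "HLS" and stream["drm"] == "AES-128"
```

The storage and transcoding savings come from the fan-out happening at request time: each bitrate is encoded once, while the container and DRM multiply only in flight.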

Envivio Achieves Big Density Gains

Envivio, too, has been racing ahead with its Intel CPU-based system, which recently scored a big win with a still unnamed Tier 1 U.S. MSO for its multiscreen service. The firm’s advances include the 4Caster G4, the latest version of its fully packaged 2RU encoding platform, representing a 6x density improvement over its previous version. That translates to power to transcode into multiple bitrate formats up to 12 HD channels per 2RU chassis, according to Julien Signès, president and CEO of Envivio.

“Envivio 4Caster G4 is the most powerful encoding platform that we have ever offered,” Signès says. “We are providing a broader range of interfaces, the largest number of output formats and the option of high quality or high density configurations.”

The 4Caster G4 platform houses Muse Live, the core Envivio software system, which supports multiple codecs, including the capability to encode in HEVC as that standard takes shape, and transcodes premium content into profiles for live and on-demand multiscreen services on all types of distribution networks. By virtue of its support for IP, ASI and SD/HD-SDI interfaces along with redundant power supplies and hot-swappable nodes, the new platform can be used for all premium service environments, Signès notes.

Muse also runs on HP BladeSystem c7000 and ProLiant BL460c series servers and supports a wide range of additional features such as picture-in-picture, alternative audio languages, closed captions, DVB-Subtitles and DVB-Teletext. This allows Envivio to support a wide range of distributed architectures and pure OTT plays as a complement to the more centralized 4Caster option.

Envivio’s solution for distributed positioning of the stream fragmentation and DRM packaging process for ABR-based services is the Halo Network Media Processor, which the company recently upgraded to support time-shifted service models, such as catch-up TV, start over and network PVR. Signès says the Halo “TV Anytime” functionalities are in trials with multiple operators in Europe and North America, representing still another sign of how all the on-demand services common to the traditional TV realm are now moving into the multiscreen space.

“The new TV Anytime capabilities available on Halo further enhance the multiscreen user experience by allowing operators to deliver time-shifted TV and customized assets,” Signès says. He notes that a key element now available on Halo is Personalized Index Creation (PIC), a new approach enabling dynamic asset creation in the network, including highlights creation and time-shifted TV assets.

This streamlined solution utilizes bits of content already cached in the network to deliver a unique stream per user, he explains. By leveraging the existing caching infrastructure, PIC does not require expensive storage and processing and opens up possibilities for new personalized service offerings.

It will be interesting to see what impact the massive ASICs-based transcoding solutions from Motorola and Imagine have on service providers’ decision making as they ramp up for all-encompassing next-generation multiscreen services. Whether or not they will trigger a swing back to hardware-based systems will depend a lot on the software system suppliers’ ability to build compelling, market-leading software solutions. But they’ll also have to sustain what has been a winning argument about the merits of relying on Moore’s law to generate commodity hardware options that make reliance on proprietary hardware a risky proposition.
