Content Ecosystem Archive


UltraViolet Made Headway In ’12, but Jury Is Still Out

Chuck Parker, VP, Intersection Research

By Todd Marcelle

January 10, 2013 – The cloud of uncertainty surrounding UltraViolet persists going into 2013, with momentum building but the goal of ubiquitous usage tied to access to all the mainstream movies and TV programs consumers might want to own still well out of reach.

The Digital Entertainment Content Ecosystem (DECE) consortium of five of the six major movie studios, Disney being the exception, made major strides with its UltraViolet media storage platform in 2012, including expansion to over seven million registered accounts from just a few hundred thousand at the start of the year and an ever-growing list of participants, including Walmart, Best Buy, Barnes & Noble, Seagate, the BBC and many others. But key steps remain to be taken, including a much larger selection of titles beyond the 7,200 or so now on offer from participating studios and TV networks.

In an analysis of the challenges confronting UltraViolet, Chuck Parker, president of Intersection Research and frequent blog contributor to the Media & Entertainment Services Alliance website, notes that when it comes to availability of the content most desired by consumers, as measured by the Internet Movie Database (IMDb) top 100 evergreen titles and the Rentrak top 50 recently released titles, the titles available for sale on the UltraViolet platform represent only 50 to 60 percent of the titles on either list. “This isn’t a digital rights issue,” Parker says in a recent blog. “Digital title availability for rental and sell-through on iTunes is nearly ubiquitous. This is a business decision [of the studios] not to support UltraViolet.”

That may seem harsh in light of the fact that Sony Pictures, Warner, Fox, Universal, Paramount, Lionsgate and DreamWorks Animation have all embraced UltraViolet through pre-street-date digital releases of select titles via Sony Pictures Store, Best Buy’s CinemaNow, Walmart’s Vudu.com, PlayStation Store and Google Play. But the hit-or-miss electronic availability of DVD and Blu-ray releases on UltraViolet leaves the consumer in need of alternative sources, undermining the purpose of UltraViolet, which is to compensate for the falloff in hard copy sales by encouraging electronic sell-through via the convenience of a universal cloud storage platform.

Notwithstanding the limited penetration of UltraViolet, electronic sell-through sales in general as well as rentals are growing rapidly, according to the Digital Entertainment Group, which serves as the marketing arm for UltraViolet. Overall disc sales in the third quarter were down by four percent, even as Blu-ray disc sales were up by 13 percent compared to Q3 2011. Disc rentals were off by 50 percent. By contrast, electronic sell-through sales were up by 37.7 percent, VOD spending climbed more than 8.4 percent and subscription revenues from Internet streaming services grew by 127 percent, DEG said. But the total sales value of hard copy rentals, subscriptions and sales, totaling about $4 billion in Q3, far outdistanced electronic sell-through, rentals and subscription revenues, which totaled about $811 million.

The upshot is that Hollywood continues to have a big problem meeting ROI goals on motion pictures, especially in light of how weak the profit levels are in the area of fastest growth, namely, online subscriptions. Digital subscription services, including Netflix, Amazon Prime, Hulu and, soon, Redbox Instant by Verizon, along with physical rental kiosks and the Netflix disc-by-mail subscription service, earn about one third of the profit per viewing compared to VOD, digital sell-through and physical sales. “Video consumption has never been higher in the U.S. household, but it is the mix of consumption that is hurting Hollywood studios,” Parker says.

UltraViolet officials say a heavier marketing effort beyond the “organic” approach taken so far is in the offing for 2013. And they say the long-delayed adaptation of UltraViolet distributors to the Common File Format will soon allow UltraViolet titles to be downloaded without users having to work with different file formats from each retailer.

At a Digital Hollywood session in October, UltraViolet GM Mark Teitell said the CFF was in business-to-business testing with consumer testing to follow. But he acknowledged DECE still has work to do to facilitate use of CFF in the cloud environment.

Teitell also reported UltraViolet, now available in the U.K. and Canada as well as the U.S., is slated to launch in Australia, New Zealand, Ireland, France and Germany in 2013. But the question remains whether the platform is going to crack through to mainstream adoption in the U.S.

“It is difficult to crow about having retailers signed up when the largest DVD/Blu-ray sales retailer (Amazon), the largest digital video retailer (iTunes), and the largest digital ‘rentailer’ (Xbox) have not signed up for the program,” Parker says. “No matter how you slice up the markets where the consumers you want to attract are currently buying or renting, each one of these companies represents the lion’s share of them, and I would venture to say you cannot create mass adoption without them.”


Monetization Opportunities Take Shape for Multiscreen TV in 2013

Keith Wymbs, VP, marketing, Elemental

January 11, 2013 – Entering 2013, multiscreen distribution of pay TV content is kicking into a new gear, raising the prospects that real money may start flowing into what has been a laborious effort by established TV programmers and distributors to keep pace with consumer behavior.
 
So far, monetization of long-form video distribution has been the purview of over-the-top players like Netflix, Hulu and Amazon, and there, with the aggressive strategies of Google, Microsoft and myriad others in play, the money curve is sure to keep climbing. Hulu, for example, after registering a 60 percent jump to $420 million in revenues in 2011, logged an even bigger spike of 65 percent in 2012, with a reported $695 million in advertising and subscription sales.

Still, as Tom Morrod, director for consumer and media technology at research firm IHS, notes, all the money flowing to providers from online video consumption is a drop in the bucket compared to traditional TV. In Europe, for example, the online video take adds up to about one percent of the overall media revenue pie, counting print, movie box office and everything else, compared to the 54 percent share represented by pay TV subscriptions.

“There’s very little money being generated right now from the multiscreen world, but there’s a lot of money going to the TV set,” Morrod says. But how long can things remain this far out of balance in light of other trends cited by Morrod and other researchers?

Among developed countries worldwide the average number of TVs per household has been at two or above since 2005, according to IHS findings, while the number of PCs per household has steadily increased to a total of two per household as of 2012. Meanwhile, the number of other devices capable of delivering video from the Internet, including smartphones, game consoles, connected set-tops and tablets, has gradually escalated to where, by the end of 2011, the total of such devices in all households in all developed nations matched the number of PCs, Morrod says.

“Within another few years we’re going to have more of all those different connected device types added together than the sum of TVs and PCs added together,” Morrod says. “What this is really showing is a huge fragmentation of device types consumers can use to watch content.”

Just how rapidly that fragmentation is impacting consumption habits in the U.S. can be seen in research performed by Leichtman Research Group, which found that the proportion of U.S. adults who viewed Web video on their TV sets at least once a week had gone from five percent in 2010 to 13 percent in 2012, while the percentage who viewed full TV shows online weekly on all types of devices had gone from six percent to 16 percent over the same timeframe. In a similar vein, Parks Associates last year found that the percentage of smart TV owners who watch online TV shows daily was at 32 percent.

For MVPDs (multichannel video program distributors) the push into multiscreen service delivery has been a defensive mechanism aimed at making sure pay TV services are available to serve this growing propensity to view TV and movies on connected devices. Certainly the program providers, too, have participated in agreements to make their content available to authenticated MVPD subscribers with the same goals in mind.

But, as the subscription model proves to have legs amid growing ad revenues in the OTT domain, the content owners are also looking at ways to generate more revenue through online offerings independent of MVPD affiliations. In fact, Hulu, with a reported three million plus subscribers to its premium offering through Hulu Plus, is threatened by its own success as partners in the venture, including Disney, NBC Universal and Fox, push to free themselves from exclusive licenses in order to make the same high-value content available through other outlets as well.

The flexibility to exploit content distribution opportunities wherever they can be found requires a change in how content is handled at the sources, not only lowering the costs of delivering secure streams to every type of device but minimizing dependence on third parties to execute on the ever more complex technical requirements. As previously reported, over the past year TV programmers have been beefing up and streamlining OTT operations with an eye toward funneling existing and newly developed channels of programming into whatever distribution conduits make business sense, whether directly to consumers over the Internet, through Web aggregators like Hulu or in conjunction with MVPDs’ TV Everywhere initiatives.

With sufficient flexibility to assemble niche channels with compelling appeal to certain audiences from their deep reservoirs of content, in whatever mixes suit their contractual obligations, programming suppliers would have an opportunity to create far more varied programming options than they can within the restrictive multichannel TV environment, as well as to maximize exposure of established programs across multiple outlets to whatever extent their deals with MVPDs allow. Now, with such content workflow management made possible by advances in the technology platforms that support on-the-fly aggregation, transcoding and device-specific secured streaming of new and archived programming in whatever combinations they choose, programmers are preparing to execute on these possibilities.

One sign of what’s in store comes from white label online video publisher thePlatform, which has responded to programmers’ demands for such flexibility with a suite of new “smart workflow” features to simplify video preparation and publishing across multiple formats and devices. Now, says Marty Roberts, senior vice president of sales and marketing at thePlatform, distributors running their workflows through thePlatform’s mpx publishing system will have click-and-execute access to suppliers such as Elemental and Harmonic for next-generation transcoding, Aspera for fast file transfer and Akamai for its latest HTTP ingest technology on the recently launched Sola Media Solutions portfolio.

“Multiscreen video publishing has never been more complex, and coordinating the workflow is the key to success,” Roberts says. “When you look at the proliferation of devices with different aspect ratios, file formats for delivery and security protocols, you see an explosion in the number of files you have to manage for each title. You may need 20 to 25 different files, including caption files, different language versions, thumbnails to manage and set up for consumers to have great experiences, etc.”
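A toy enumeration makes the arithmetic behind that file explosion concrete. The formats, languages and ancillary assets below are assumptions for illustration, not drawn from thePlatform’s actual workflow:

```python
# Illustrative only: counting per-title assets in the spirit of
# Roberts's "20 to 25 different files" estimate. All specific
# formats and counts here are hypothetical.
from itertools import product

formats = ["HLS", "Smooth", "HDS"]   # assumed delivery formats
languages = ["en", "es"]             # assumed language versions

# One video package per (format, language) combination.
video_files = list(product(formats, languages))

# Ancillary assets that also have to be managed per title.
extras = ["captions_en", "captions_es", "thumbnail_small", "thumbnail_large"]

total = len(video_files) + len(extras)
print(total)  # 10
```

Even this minimal matrix yields ten assets per title; add bitrate tiers, DRM variants and per-retailer packaging and the 20-to-25 figure follows quickly.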

Nothing is more important to content owners’ ability to activate new business opportunities than the gains achieved with transcoding and streaming systems. “When we looked at this space a couple of years ago we thought transcoding was moving to a commoditized software play without a lot of differentiation,” Roberts says. Now, with the challenges posed by device proliferation, “to their credit these vendors have stepped up and provided some of the most innovative advancements to meet these needs.

“With packaging and encryption all bundled in,” he adds, “they’re able to process content faster than real time. A couple years ago we didn’t think these capabilities were possible.” thePlatform has integrated the Elemental and Harmonic systems to work seamlessly with customers’ workflows on the mpx platform, and it’s looking at support for other transcoding suppliers on mpx as well, including Digital Rapids, Envivio, RGB and Telestream.

Another major requirement underlying ambitious business plans of content suppliers is the ability to instantly access, transfer and ingest files across multiple locations in a dynamically changing distribution environment. In essence, Roberts says, the new paradigm in content distribution is to instantly “grab files from storage and transform them to meet the requirements of all downstream outlets and set them up for delivery to those outlets.”

“We move a lot of files around the network, from customer storage to thePlatform for transcoding, sometimes from our servers to content delivery networks, syndication partners, etc.,” he explains. “Our customers can now use Aspera to move content into thePlatform, and we’ve talked with a number of CDN suppliers about using Aspera to move content to ingest on their systems. Their technology does it in a secure and reliable way that’s much faster than FTP or other traditional protocols.”

The new requirements introduce new challenges that must be addressed through the smart workflow system as well, Roberts notes. “You start to run into some interesting situations,” he says. “For example, what happens if the eighth file set up for a particular title has an error and the other nine are fine? Is the system smart enough to recode just one without starting all over again with all ten?”
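The partial-retry behavior Roberts describes can be sketched in a few lines. The job structure and status values here are hypothetical, purely to illustrate re-queuing only the failed rendition rather than restarting the whole set:

```python
# Hedged sketch of the "recode just one" idea: given ten rendition
# jobs for a title where the eighth failed, re-queue only that one.
# Field names and statuses are invented for illustration.

def renditions_to_retry(jobs):
    """Return only the rendition jobs that need to be re-run."""
    return [job for job in jobs if job["status"] == "error"]

jobs = [{"rendition": f"profile_{n}", "status": "ok"} for n in range(1, 11)]
jobs[7]["status"] = "error"  # the eighth file set has an error

retry_queue = renditions_to_retry(jobs)
print([job["rendition"] for job in retry_queue])  # ['profile_8']
```

A workflow system with this property avoids re-transcoding the nine good renditions, which is exactly the smarts Roberts is asking for.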

Or, to cite another nuance, content owners need to be able to manage all these different files for each title in the context of what the business arrangements are with each downstream distributor. “Maybe all the content going to our website gets a higher priority than what you publish to YouTube, so you have to be smarter about setting up and prioritizing your modes of distribution within the workflow,” he says.

Streamlining metadata management is another major requirement. “You used to have to go back and re-transcode your metafiles to, say, set them up for the Xbox,” Roberts notes. “Now we can analyze the files we have and package them up for streaming with the metafiles to the Xbox without going through those added steps. We’re really reducing the amount of work that has to be done when we look at the capabilities of these new transcoding engines.”

Another technological advance with the potential to further buttress the monetization opportunities for online distributors is the soon-to-be-adopted next-generation video compression standard, H.265, also known as HEVC (High Efficiency Video Coding). With the anticipated ratification of the latest draft from the ISO’s MPEG committee, H.265 will quickly enter the commercial mainstream, bringing with it a near doubling of compression ratios in comparison to H.264 AVC (Advanced Video Coding).
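What a near doubling of compression efficiency means in practice is roughly a halving of the bitrate ladder for comparable quality. The AVC bitrates below are assumed example values, not figures from the article or the standard:

```python
# Back-of-the-envelope sketch: if HEVC delivers similar quality at
# about half the bitrate of AVC, an adaptive ladder shrinks in kind.
# The AVC bitrates are illustrative assumptions.

avc_ladder_kbps = [400, 800, 1600, 3200, 6000]

# H.265/HEVC target: roughly the same quality at about half the bitrate.
hevc_ladder_kbps = [rate // 2 for rate in avc_ladder_kbps]

print(hevc_ladder_kbps)  # [200, 400, 800, 1600, 3000]
```

For a mobile viewer, that halving is what lets a top-tier rendition fit into the bandwidth where only a mid-tier AVC stream fit before.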

This is a landmark moment for online video distribution, especially for mobile, where the confluence of increasing bandwidth on LTE and reduction in bit rates on H.265 has explosive potential, notes Keith Wymbs, vice president of marketing at Elemental. “You’re able to get a very high quality experience at a very low video bit rate to any device of your choice over a wider variety of access networks, whether you’re at a Starbucks, on public transportation or roaming throughout your home,” Wymbs says. “That creates a type of ubiquity that a consumer is able to take advantage of regardless of the stream they’re on.”

Another transformative factor driving new business opportunities for online distributors is the cloud. Elemental, for example, is offering cloud-based support with its core technology to lower customers’ costs, Wymbs notes. “We think cloud over time will become the next kind of technology evolution that really changes things and makes things even more flexible for the premium content providers that are delivering multiscreen offerings,” he says.

Cloud-based support, of course, has always characterized thePlatform’s model. Now, Roberts notes, customers can use the new workflow system to more easily leverage multiple cloud tie-ins such as might come into play if a customer is relying on Elemental’s cloud for transcoding and thePlatform’s for all the publishing components. And customers with their own security and transcoding infrastructure can exploit the functionalities brought into play by thePlatform’s smart workflow on mpx.

“Some customers who don’t have their own transcoding farm take advantage of our cloud transcoding,” Roberts says. “Others who want to manage their security requirements behind their own firewalls have invested in their own transcoding solutions.

“Our remote media processor – RMP – sits remotely at the customer’s location and calls back into mpx for instructions on what to do,” he continues. “mpx says take this file and pass it to Elemental, and when that processing is done, move it out to Akamai. We can work in a world where the vast majority of processing is operating in our cloud but can also work with customers in a hybrid fashion.”

The options will continue to expand as customers seek to bring in new vendor partners, Roberts adds. “We’ll continue to invest in those integrations as our customers require them,” he says.

“Because everyone is using standard Web protocols and open documented APIs these integrations have gone really, really well. We’re able to build new plugins in two to three weeks, including load and stress testing.”

The cloud is a big part of Akamai’s strategy with its new Sola Media Solutions platform. “Bringing seamless television experiences across devices is a great market opportunity that requires content providers to tackle major challenges including platform fragmentation, monetization, buffering between content and ads, and understanding who is watching the content,” says Jeremy Helfand, vice president of monetization at Adobe. Leveraging the cloud greatly mitigates the costs, he adds.

The vendor’s offerings include cloud-based transcoding for on-demand content and stream packaging that’s designed to adapt a single file or live stream on-the-fly for delivery to multiple viewing devices, Helfand explains. With support for multiple levels of content protection the architecture is designed to match content protection levels and monetization strategies to specific content and target audiences, he says.

What all this adds up to is a transformation in the monetization opportunities associated with OTT distribution of premium content. Technology advances that enable quality-of-experience suitable for TV-caliber advertising and subscription services in combination with cost-effective means of achieving ubiquitous access are freeing all players to create business models that maximize returns across all outlets.


Advances in Video QoE Control Facilitate SPs’ Efficiency Goals

Steve Liu, VP, video network monitoring, Tektronix

December 11, 2012 – Tektronix has taken ground-breaking steps toward strengthening video service providers’ ability to meet rigorous quality standards through use of better tools at interfaces with transport backbones and at the edges of the network.
 
To overcome limited monitoring capabilities at points where video is handed off to regional headends, Tektronix has introduced monitors capable of comprehensively identifying and diagnosing quality impediments on programming feeds operating at up to 3 gigabits per second. At the same time, the company has released a new edge device, the SentryEdge II, which detects RF modulation and transport stream errors across multiple QAM channels simultaneously, allowing technicians to quickly identify potential problems before they impact subscribers’ quality of experience.

“Both of these products are industry firsts,” says Steve Liu, vice president of video network monitoring at Tektronix. “QoE (quality of experience) monitoring at 3 Gbps is three times the previous bitrate capacity for monitoring devices, and our SentryEdge II is the first RF quality monitoring device capable of remotely monitoring up to eight channels concurrently.”

The need for QoE monitoring at up to 3 Gbps stems from the growing use of 10 Gbps fiber transport to deliver video to headends, where the handoff at any point might include 500 to 1,000 live video programs, including those earmarked for use with switched digital video systems. Before the addition of 10 gigabit interfaces to the firm’s Sentry and Sentry Verify product lines, the only products suitable for 10 gig networks offered very basic monitoring capabilities, such as registering packet loss, Liu notes.

“You need to look at more than packet loss to understand quality problems,” he says. “We provide the means to do both QoS and QoE at 3 Gbps.”

The firm’s Sentry platform incorporates advanced QoE monitoring capabilities while Sentry Verify handles QoS monitoring, he explains. By having the capacity to perform comprehensive monitoring of baseband signals before they enter transcoders, operators can identify source problems that could impact the viewing experience while avoiding “chasing false alarms” resulting from alerts about inconsequential packet losses, he says.

The new Sentry Edge II is meant to solve another challenge for operators, which is to more efficiently perform RF monitoring of QAM channel output at the network edges, including hubs and local headends. With the choice of four- or eight-tuner models, operators can speed the process of monitoring channels across up to 1 GHz of spectrum, Liu says.

“Exceptional RF analysis across all channels gives operators a proactive way to diagnose RF issues before customers complain,” he says. The rack-mounted server system provides Web interfaces for remote monitoring 24 x 7 and generates email notices if performance is subpar, which, as Liu notes, is a vast improvement over the traditional mode of performing QAM signal spot checks on RF analyzers. If operators want to maintain steady monitoring of their most important premium channels, they can “park some of the tuners on those channels and perform round robin monitoring on the rest,” he says.

Sentry Edge II performs high-quality MER (modulation error ratio) measurements up to 41 dB, Liu adds, compared to most traditional analyzers, which only register MER variations at 30 dB or below. That lower range is good enough for assessing whether there’s a problem at the user end but not for judging whether degradations at the QAM output are likely to diminish performance at the set-top. “You need to know if the signal is degrading to 36 or 38 dB if you want to be proactive,” he says.
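The proactive-monitoring idea Liu describes amounts to a simple thresholding rule: act when MER drifts into the 36-38 dB band, well before it falls to the roughly 30 dB level where subscribers already see problems. A minimal sketch, with the classification names invented for illustration:

```python
# Hedged sketch of proactive MER classification. The dB thresholds
# come from the article; the function and labels are illustrative,
# not Tektronix's actual alerting logic.

PROACTIVE_THRESHOLD_DB = 38.0   # below this, start watching closely
CRITICAL_THRESHOLD_DB = 30.0    # below this, viewers likely impacted

def classify_mer(mer_db):
    """Map a MER reading to a hypothetical alert level."""
    if mer_db >= PROACTIVE_THRESHOLD_DB:
        return "healthy"
    if mer_db > CRITICAL_THRESHOLD_DB:
        return "degrading"   # proactive action warranted
    return "critical"

print(classify_mer(41.0))  # healthy
print(classify_mer(36.5))  # degrading
print(classify_mer(29.0))  # critical
```

The point of the 41 dB measurement range is precisely to make the middle "degrading" band visible before it becomes the bottom one.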

The new monitoring system allows engineers to quickly isolate problem sources by generating metrics on other variables as well, including RF lock indication (with an LED on the rear panel), input signal level, carrier-to-noise, carrier offset, pre-forward error correction bit error rate and others. “In-home monitoring is important, but you need information from the source in order to correlate issues and understand what’s going on in the network,” Liu says. “The ability to make sense of all the data is quite challenging.”


Quality Assurance Is Moving Multiscreen into Mainstream

Marty Roberts, SVP, sales & marketing, thePlatform

November 21, 2012 – Distributors of high-value video in growing numbers are taking a crucial step toward moving multiscreen services into the mainstream by embracing a variety of approaches to measuring and maintaining quality of experience.
 
Given uncertainties about ROI and business models, over-the-top and pay TV providers have been reluctant to invest in costly solutions that would give them the same level of quality assurance that has become a mainstay in cable, telco and satellite TV operations. But now, with consumers making clear there’s high demand for a premium TV experience on connected devices of every description, the stakes are too high to forego facing the quality assurance challenges posed by the adaptive rate (AR) streaming mode of distribution over IP networks.

“Miranda [Technologies] is really happy we made the investments in products for this space,” says Mitchell Askenas, director of business development at Miranda, which was recently acquired by cable hardware supplier Belden, Inc. “We were trying to get ahead of the curve, and it looks like it’s paying off.”

Miranda has quietly brought AR quality assurance (QA) into the mix of QoE and QoS issues addressed by its iControl QA system over the past year. The data generated from functionalities added to deal with the complexities of AR are analyzed in concert with data gathered from Miranda’s probes and other network resources to pinpoint and analyze video performance across the network, resulting in the same level of quality assurance for AR streamed content that can be achieved for traditionally delivered pay TV.

OTT suppliers have been especially focused on the QA issue as they seek to provide content useful for TV Everywhere services offered through their multichannel video programming distributor (MVPD) affiliates, Askenas notes. Moreover, programmers are starting to sell advertising inventory unique to the multiscreen streams, which makes QA essential, he adds.

“We’re also seeing a lot more interest from cable operators,” he says. “They want to know what each user experience looks like rather than simply relying on packet analysis.” The reference is to the difference between traditional IP deep packet inspection (DPI) measuring packet losses and delays and deep content inspection (DCI), which looks at what’s happening within the video frames and across frame sequences.

“This year has been a time for learning about the options for our customers,” Askenas says. “I think next year is when we’ll see significant sales.”

A New Bellwether

One important bellwether of the trend is the recently announced decision by white label video publisher thePlatform to provide analytics capabilities distributors can use to turn raw data coming in from different points of the network into a coherent picture of what’s going on. “Our customers are using adaptive streaming, because it delivers a better overall experience for users accessing content over broadband networks,” says Marty Roberts, senior vice president of sales and marketing at thePlatform. “But with the variations in degrees of quality from different CDNs, based on the different ways they handle streaming modes like HLS (Apple’s HTTP Live Streaming), Smooth (the Microsoft streaming mode) and Adobe’s HDS (HTTP Dynamic Streaming), they need to be able to analyze the overall experience their customers are getting from all these suppliers.”

That’s a tall order. “Tracking and managing QoS around AR is a little bit harder than it is for traditional modes of distribution,” Roberts says. “There are different formats and protocols and different encryption schemes, so there are technical challenges to monitoring and understanding what the quality is for each user experience.”

To address these challenges thePlatform has partnered with Conviva, whose Conviva Viewer Insights video analytics capabilities will be offered as an integrated component of thePlatform’s mpx video publishing system to provide an additional layer of dynamic reporting within the mpx console at no extra cost to customers. This will allow publishers to quickly access real-time statistics related to the consumer experience, engagement and the relationship between high-quality viewing and audiences, Roberts says.

Now that AR has been widely embraced as the distribution mode for TV Everywhere on the part of big pay TV operators and media companies, maintaining QA “boils down to being good business for them,” Roberts notes. “We’ve seen data showing that if a video takes longer than two seconds to start streaming you will see a real drop off in viewership. Assuring good user experience keeps viewers engaged for longer viewing times, resulting in more ad avails to support monetization as well as higher viewer satisfaction.”

Intrinsic to the partnership are the device-end data-gathering capabilities of the new default plug-in installed with the video players thePlatform provides. “Data from the user experience on each device is piped back to the Conviva servers for analysis and displayed in our console to give our customers a good understanding of what’s going on,” Roberts says. “It’s delivered in a standard report, so there’s no need for customization.”

Conviva’s analysis of this data displayed in mpx will allow content distributors to determine video performance and its impact on viewer engagement across multiple types of video players and streaming protocols, notes Conviva CEO Darren Feher. “The joint solution will allow thePlatform’s customers to see exactly what every single viewer experiences, at the precise moment it happens, providing actionable intelligence to enhance people’s viewing experiences and ultimately improve online video for both viewers and publishers,” Feher says. Metrics include audience-based quantification of video quality on viewer engagement and in-depth diagnostics into video quality issues across CDNs, CDN regions, ISPs and viewer host machines.

The partnership also facilitates upselling thePlatform’s customers to the Conviva Precision Video solution, which utilizes the flow of analytics data to optimize quality of experience by maintaining what Conviva calls “preemptive, seamless stream adjustments” on content as it’s delivered to each user. “Precision allows the client to make real-time decisions that optimize the quality of service,” Roberts says.

“The client says, ‘I’m getting a bad stream from this node in this CDN so let’s switch to another node or to another CDN,’” he explains. “The client doesn’t care what server it’s talking to. If one chunk happens to be coming from one server and the next chunk is from another CDN, that’s fine.”

The reference is to how AR employs a “pull” mode of distribution that is altogether different from the “push” mode of traditional digital TV. Every few seconds an AR-enabled device, by referencing the bitrate options or “adaptation sets” listed for a given piece of content in a manifest file sent from an HTTP server, asks the server to send a fragment, or chunk, of streamed content at the optimal bitrate, depending on how much bandwidth is available at that moment and how much processing power the device has available for decoding the bit streams.
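Stripped of the buffering heuristics real players add, the per-fragment decision described above reduces to picking the highest rendition in the adaptation set that fits the bandwidth just measured. A minimal sketch, assuming an illustrative manifest:

```python
# Simplified sketch of the AR "pull" selection logic. Real HLS,
# Smooth and HDS clients layer buffer-occupancy heuristics on top;
# the manifest bitrates here are invented for illustration.

def pick_bitrate(adaptation_set_kbps, measured_bandwidth_kbps):
    """Choose the highest rendition that fits available bandwidth."""
    viable = [r for r in sorted(adaptation_set_kbps)
              if r <= measured_bandwidth_kbps]
    # Fall back to the lowest rendition if nothing fits.
    return viable[-1] if viable else min(adaptation_set_kbps)

manifest = [400, 800, 1600, 3200]      # bitrates listed in the manifest
print(pick_bitrate(manifest, 2500))    # 1600: best fit for ~2.5 Mbps
print(pick_bitrate(manifest, 300))     # 400: floor rendition
```

Because the client repeats this choice every few seconds, the stream ratchets up and down as network conditions change, which is the behavior the QA systems discussed here have to measure.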

The basic goal is to ensure video is streamed at the highest level of quality that can be sustained at any given time without dropping frames or triggering buffering interruptions in the flow. But AR introduces a wide range of processes that pose challenges to assessing audio and video quality that are new to the premium television environment. And those processes vary from one streaming mode to the next.

Not only are there far more parameters to measure in the AR transcoding, fragmentation and distribution process; there are multiple points in the network where those processes can go wrong, extending from source encoders through origin servers and CDN caching points to all the different types of unmanaged IP-connected devices possessed by end users. Moreover, additional complexities associated with content protection and monetization make the achievement of premium service-level quality assurance all the more daunting.

In some respects Conviva’s Precision Video solution avoids these complications by simply switching the call for chunks from one HTTP server to another throughout the streaming session so as to achieve the best possible quality flow from the CDN tier in the network. But this leaves unaddressed such issues as problems at origin servers or at the content sources that may be contributing to poor performance in the distribution network.

End-to-End QA Challenges

Systems designed to track sources of problems typically employ a combination of DPI (deep packet inspection) and DCI (deep content inspection) techniques with probes positioned at different points, sometimes in conjunction with plug-ins at the device end such as thePlatform is providing. In some cases traditional DPI isn’t used, but virtually all participants in AR QA agree there needs to be a means of monitoring packet delivery to ensure an even rate that neither overloads the device buffer nor leaves too few buffered packets for smooth video rendering on the device.

This potential for jitter goes to the heart of the fragmentation process, where, when things are going well, a sequence of packets in the video stream is distributed in response to a device request every few seconds. QoS monitoring must be sensitive to which type of AR mode is in play, insofar as sequence durations vary by mode.

Some of the QoS monitoring process is a function of how the fragmentation server is performing; some pertains to what’s happening in transit from the fragmenter to the user device, and some of it is a matter of the time it takes for a device request to get to the fragmenter. Thorough QoS monitoring requires an understanding of what’s happening to interrupt smooth performance when such interruptions occur.
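The buffer-occupancy concern at the center of this monitoring can be expressed as a simple classification. The low- and high-water thresholds below are invented for illustration; real players and monitoring systems tune them per device and service.

```python
# Illustrative sketch of the buffer-occupancy check described above:
# flag both underrun (too little buffered media to keep rendering)
# and overrun (the sender outpacing the device buffer).
# Threshold values are assumptions for the example.

def classify_buffer(level_seconds, low_water=2.0, high_water=30.0):
    """Classify device buffer occupancy, in seconds of buffered media."""
    if level_seconds < low_water:
        return "underrun-risk"   # likely stall or rebuffering event
    if level_seconds > high_water:
        return "overrun-risk"    # delivery over-loading the device buffer
    return "healthy"

print(classify_buffer(0.5))    # underrun-risk
print(classify_buffer(10.0))   # healthy
print(classify_buffer(45.0))   # overrun-risk
```

A QoS monitor sampling this level over time, correlated with fragmenter response times, gives the picture of where an interruption originated.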

QoE as measured by DCI techniques pertains to the full range of functionalities that determine what the user sees and hears, which means that some aspects of assuring acceptable QoE in the AR domain are the same as what’s required for QoE in traditional premium TV QA. DCI looks at the video stream on a frame-by-frame basis to identify any problems in the encoding process such as blurring, blocking, tiling and splicing errors, taking into account the location and size of impairments as well as their duration and frequency. Gauging audio performance, of course, is also a part of this process, including now the measurement of volume changes between programming and ads to ensure conformance with the Commercial Advertisement Loudness Mitigation (CALM) Act.

DCI, however, gets a little harder in the AR arena compared to legacy pay TV services that employ MPEG-2 compression. Whereas MPEG-2 applies encoding algorithms to fixed size macroblocks of pixels within each frame, H.264 encoding employs variable-sized macroblocks. This greatly complicates detection of tiling or macroblocking, which occurs when one or more image components within a frame are blurred or delivered as a single color block.

Further complicating matters is the fact that, with AR transcoding, there are multiple streams for each piece of programming, each of which must be monitored for audio/video synching and synchronization of ancillary feeds such as closed captioning. This extends to the need to assure proper synching of alternative-language audio or captioning when transnational content distribution is involved.

An important element of QoE that’s unique to AR is the need to go beyond the QoS monitoring of streaming performance to keep track of the degree to which the fluctuations in bitrates driven by bandwidth availability are within acceptable bounds. In other words, if the bandwidth availability is persistently limited to a point where the AR system is sending out sub-par quality video, as might happen if an HD TV set is receiving a stream at a persistent rate that results in sub-par resolution, the user is not getting a good experience, even though the QoS measures are reporting everything is fine.
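The point that QoS can report everything is fine while QoE suffers can be illustrated with a simple per-session check. The device classes, bitrate floors and tolerance below are assumptions made for the example, not figures from any deployed system.

```python
# Sketch of the AR-specific QoE check described above: a smooth stream
# delivered persistently at a sub-par rung is still a bad experience.
# Device classes, bitrate floors (kbps) and tolerance are illustrative.

MIN_ACCEPTABLE_KBPS = {"hdtv": 3000, "tablet": 1500, "phone": 600}

def session_qoe(device, delivered_kbps_samples, tolerance=0.2):
    """Flag a session where more than `tolerance` of the chunk samples
    fell below the minimum acceptable bitrate for that device class."""
    floor = MIN_ACCEPTABLE_KBPS[device]
    below = sum(1 for k in delivered_kbps_samples if k < floor)
    return "degraded" if below / len(delivered_kbps_samples) > tolerance else "acceptable"

# Smooth delivery, but stuck at an SD-class rung on an HD set:
print(session_qoe("hdtv", [1500] * 10))          # degraded
print(session_qoe("hdtv", [6000] * 9 + [1500]))  # acceptable: one brief dip
```

Note that no QoS alarm would fire in either case; the distinction only emerges when delivered rungs are judged against what the display warrants.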

The ability to verify performance of content protection mechanisms on AR premium content is also essential to the overall QA regime. Just as different types of devices operate natively with different types of AR fragmentation systems, they also come equipped to support different types of digital rights management (DRM) systems. This means that each fragment in each AR stream must be assigned an encryption key that works with the DRM client embedded in the device.

Premium service quality assurance will have to provide verification that the DRM processes are working. These include not just the encryption mechanisms with appropriate synching of keys to DRMs but also enforcement of usage policies tied to rights metadata and to authorizations assigned by back-office systems to individual users.

The Miranda Solutions

“There are so many moving parts you must gather information from if you want to identify where the problems are in the network and how to fix them,” says Miranda’s Askenas. “QoS will tell you where something is going wrong, but if you have certain fixes in place like forward error correction or buffering mechanisms, it won’t tell you whether the customer is having a good or bad experience. And QoE doesn’t tell you whether you have a QoS problem.

“We rely on telemetry from network elements and the components Miranda provides to get to sources of problems,” he continues. “The first issue is to determine exactly what the source of the problem is, which requires correlating a lot of data. You might have an alarm saying something is wrong with an encoder, but you have to determine which of hundreds of streams an operator is delivering over the top are affected.”

The second issue is, “How do you verify whether the cause of the alarm is actually impacting customers? The priority is to concentrate on fixing poor customer experience.”

Then, he says, “Once you know there’s an impact on customer experience, you need to know exactly what that experience looks like. Now you know what the source of the problem is, what its significance is to end users and what precisely needs to be fixed to rectify the situation.”

And beyond all this the data must be aggregated into reports that are useful at the management level. “You need an overall sense of the quality of your service, what your uptime performance looks like, whether you are fulfilling on your commitments to program suppliers, advertisers and subscribers,” Askenas says.

Miranda’s iControl relies on whatever sources of telemetry in the network can be used to perform analysis on QoS, whether that data is coming from DPI probes, routers, cable modems or devices. “There’s a tremendous amount of data to draw from; the trick is to aggregate and analyze it to provide an accurate and thorough measure of QoS end to end,” Askenas says.

“We’re not providing the DPI probes; our focus is on providing the data you need for a deep QoE analysis,” he adds. “We have probes that sit on the video network to go deep into the video and audio analysis of the frame. We look at the true customer experience by identifying things at the macroblock level like pixilation, frame freezes and black spaces in the video, audio issues, performance of closed captioning, whether the metadata is included in the stream.”

One set of probes, the firm’s Kaleido multiviewers, sit beyond the cache points to deliver DCI readings on all the content flowing out of the local cache. The other probes, part of the firm’s Densité infrastructure equipment, look at encoder and origin server outputs, providing a view beyond QoS measures on encoding to look for things such monitors can’t detect.

The QoE analysis also covers critical aspects of ad performance. “We can use fingerprinting technology to coordinate with the ad schedule and determine if the right ad is playing out,” Askenas says. “Our iControl fingerprinting process is built into our hardware. The ad insertion management module is part of the video management system and collects data from various elements in the network. It sits on top of all the functionalities, including the QoE mechanisms as well as fingerprinting readouts, to correlate and provide a clear view of what’s going on with ads.”

Tying all these capabilities into QA on AR streams, Miranda also adds AR-specific metrics having to do with things like fragmentation and buffering. “We’ll abstract an alarm that might be saying there’s too much fragmentation happening on an ESPN stream and analyze whether those fluctuations are really impairing the viewer experience,” Askenas notes. “We’ll look at whether encoders are over-feeding device buffers.”

Rather than relying on its own client software to obtain data from devices, Miranda intends to tap telemetry feeds intrinsic to players running on connected devices. “The players wrap the decoding with the infrastructure, so we can use iControl to collect and correlate those statistics,” he says. “We aggregate that data with everything else to see where the problems are and what needs to be done to fix them.”

Belgacom, DISH and Other Initiatives

Another supplier reporting rising demand for AR QA solutions is Paris-based Witbe, which recently added Belgacom, the incumbent telecom operator in Belgium, to the list of service providers using the firm’s Multiscreen Quality Manager solution. Employing what it calls “QoE Robots,” Witbe’s platform connects to Belgacom set-top boxes in several Belgian cities to log onto Belgacom’s TV Everywhere portal. The robots use Belgacom’s TV Everywhere app for iOS and Android to “watch” live TV and order on-demand content across all devices, explains Witbe president Jean-Michel Planche.

“Delivering multiscreen video services can be tricky as one does not control the networks nor the devices used to watch video streams,” Planche notes. “Controlling the quality of experience is crucial to ensure success, protect brand reputation and secure revenues.”

Belgacom engineers, marketers and managers have access to analytic dashboards reporting KPIs (key performance indicators) such as channel change time, video stream quality, portal responsiveness, delay to launch the app and log into the portal, success ratio when buying on-demand content, etc. KPIs are available per device type and geographic location, enabling management to focus troubleshooting actions and measure the impact of infrastructure investments on the quality delivered.

At the start of the year Witbe contracted to supply its QA solution for DISH Network’s new broadband TV Everywhere service, marking one of the largest deals yet publicized in the AR space. Using Witbe’s QoE Robots, DISH can evaluate service availability, measure application performance, check content integrity and measure perceived quality of video streams delivered to computers, smartphones and tablets, Planche says.

The robots run continuous tests on the DISH broadband feed by replicating user actions on end users’ devices through Wi-Fi connections or 3G/4G cellular networks, Planche explains. The robots interface with multiple types of devices operating in AR or other modes to log into servers, browse program guides, watch live and on-demand TV, configure and access DVR recordings and more.

Witbe, with 12 years’ experience in Europe and two in North America, has other, unannounced North American customers as well, including Comcast and Cogeco of Canada. Comcast is using the platform to run tests of its multiscreen service with the iPad and other connected devices while Cogeco is running set-top box tests, Planche says.

Declaring the “classical market for probes has hit the wall,” Planche describes the Witbe QoE Robot platform as the source of comprehensive intelligence distributors require to run premium services in the user-centric IP services environment. “In the IP world you can have a good backbone and bad service or a bad backbone and good service,” he says. “QoS without collaboration with what the user is seeing is of little value.”

But that’s not to say QoS is not important to the value of what Witbe brings to the table. At the analytics level Witbe correlates the intelligence gathered on QoE by its probes with what QoS metrics from other sources are delivering to precisely identify the nature and sources of problems. “Our technology is to understand the quality of the content the operator is delivering at each strategic point of the network, from the point of ingestion, across the backbone and over the last mile,” Planche says.

While end-to-end QA is the ultimate goal, operators can start slowly with implementation of the Witbe platform to begin gaining control over the AR experience as they explore where they want to go with multiscreen services. “We have different small operators, such as telecom operators in small states like Monaco and Macao, where we can do clever things with just a few robots,” he notes.

Whatever level of penetration a provider wants to reach, the Witbe approach to AR QA does not constitute a big investment, he adds. “We’re delivering information they never dreamed they could get with such a small investment,” he says.

As a growing number of vendors offer solutions to bolster QA, the ecosystem in general is moving in the direction of ever better performance metrics. As thePlatform’s Marty Roberts notes, now that AR is “table stakes” there’s general recognition that an old saw holds for the multiscreen domain much as it has for any other aspect of network service operations: “You can’t improve what you can’t measure.”

Notably, he adds, CDN suppliers are now generating QoS metrics that can be fed into analytics frameworks like the one thePlatform is leveraging from Conviva. “Akamai is the best example of a CDN supplier with robust analytics tools for measuring QoS experience,” he says. “But all of them have some level of QoS metrics.”


Cloud-Compatible Workflows Spur Content Tech Integration

Brick Eksten, president, Digital Rapids


October 29, 2012 – New approaches to enabling flexibility in technology integration for content distribution through cloud-compatible workflow management systems are gaining traction as linchpins to over-the-top and TV Everywhere expansions of the premium TV sector.
 
For example, after introducing its Kayak workflow system earlier this year (see March, p. 19), Digital Rapids is reporting its decision to support multi-technology workflow integration on a platform that runs across on-premises and cloud-based resources is paying off with a growing lineup of ecosystem partners. Several dozen technology partners are now making it easier for their customers to integrate with their solutions through Kayak, says Digital Rapids president Brick Eksten.

“We now have a majority of vendors out there with concrete plans on where they want to go and what they want to do with Kayak,” Eksten says, noting the lineup spans suppliers of codec technologies, quality control tools, audio loudness management, digital rights management and more. “We also have large partners on the integration side who sell platforms that run cable systems and studios.”

Two major suppliers of cloud support services, Microsoft with its Azure platform and Amazon with Amazon Web Services (AWS), have moved to cloud-compatible workflow systems as well. This fall Microsoft introduced Workflow Manager 1.0 as the next-generation workflow for its SharePoint collaboration software, which is a core component of Azure. Earlier this year, AWS launched Simple Workflow Service to address key challenges that have impeded complex multi-task implementations of applications running on AWS.

In a blog post Amazon CTO Werner Vogels offers a candid description of the issues that prompted the company to introduce the new workflow management system and likely will lead to growing use of these sorts of workflow integration systems everywhere, including in the multiscreen services space. As suppliers turn to asynchronous and distributed processing models to support independent scalability across loosely coupled parts of their applications, they must develop ways to coordinate multiple distributed components, incurring increased latency and unreliability inherent in remote communications, Vogels notes.

“Today, to accomplish this, developers are forced to write complicated infrastructure that typically involves message queues and databases along with complex logic to synchronize them,” Vogels writes. “All this ‘plumbing’ is extraneous to business logic and makes the application code unnecessarily complicated and hard to maintain.”

Vogels says Amazon’s Simple Workflow Service (SWF) makes it easy for developers to architect and implement these tasks, run them in the cloud or on premises and coordinate their flow. SWF manages the execution flow such that tasks are load balanced across registered workers, inter-task dependencies are respected, concurrency is handled appropriately and child workflows are executed, he adds.
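The coordination pattern Vogels describes, in which a scheduler respects inter-task dependencies and hands ready tasks to workers, can be sketched generically. This toy scheduler is not the actual SWF API; the function names and structure are illustrative only.

```python
# Toy sketch (not the SWF API) of dependency-aware task coordination:
# tasks become "ready" only when their prerequisites complete, and the
# application code carries no queue/database "plumbing" of its own.

from collections import deque

def run_workflow(tasks, deps, worker):
    """tasks: {name: payload}; deps: {name: [prerequisite names]}."""
    done, results = set(), {}
    ready = deque(t for t in tasks if not deps.get(t))
    while ready:
        t = ready.popleft()
        results[t] = worker(t, tasks[t])   # dispatch the ready task
        done.add(t)
        # Promote tasks whose prerequisites have all completed.
        for other in tasks:
            if other not in done and other not in ready and \
               all(d in done for d in deps.get(other, [])):
                ready.append(other)
    return results

order = []
run_workflow(
    {"encode": None, "package": None, "publish": None},
    {"package": ["encode"], "publish": ["package"]},
    lambda name, payload: order.append(name),
)
print(order)  # ['encode', 'package', 'publish']
```

In a real distributed deployment the worker call would be an asynchronous dispatch to a remote machine, which is exactly the latency and reliability problem Vogels says SWF is meant to absorb.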

SharePoint Workflows for SharePoint Server 2013 performs similar functions for Azure cloud-hosted customers, the key difference, of course, being that the new Workflow Manager 1.0 is tied specifically to use of the SharePoint platform. According to Jürgen Willis, principal group program manager for SharePoint, the new system allows SharePoint customers to host and manage these long-running workflows with support for deployments that require multi-tenancy support, scalability and high availability.

“Tenants in Workflow Manager may represent the various departments of an enterprise or the customers of an ISV (independent software vendor),” Willis explains in a recent blog. “Multiple Workflow Manager nodes can be joined together into a farm deployment to scale the service.”

Microsoft has added new capabilities for managing system tenants, activities and workflow instances. “This includes repository and version management for published activities and workflows,” Willis says. “Messaging and management are clearly two critical areas for building and maintaining workflow solutions, and this is an area where we will continue to invest as we evolve this technology.”

Whereas SharePoint is based on a service-oriented-architecture (SOA), Digital Rapids has positioned Kayak to make it easier to integrate a multitude of applications into a customer’s workflow by avoiding the need to individually integrate each process-specific component of each application onto the SOA system. Kayak provides a template for designing workflows that allows customers to draw on specific processing components as elements in a catalog that can be activated on servers and assigned specific policies, Eksten explains.

Applications are prototyped with the development of the workflow, tested and deployed to be utilized as dictated by whatever workflow processes are brought into play for any given piece of content – depending, for example, on whether the content is to be delivered live or ingested into storage, what the encoding resolutions are, whether metadata should be overlaid or embedded, etc. “We’re blueprinting the workflow to tell the box (server) which processes to pull in and how to run them,” he says.

“If you think of transcoding, formatting, rendering and other steps in distribution, you have to keep everything working together, which is very hard to maintain,” he continues. “That’s the biggest complaint about SOA today. With Kayak, if you want to add a box, you don’t have to think of how it has to be used. You point Kayak at that box and anything you’ve designed now runs on that blank slate. Provisioning is automated and completely dynamic.”

Along with facilitating workflow integration with third-party suppliers’ products, Digital Rapids has integrated the latest iterations of its solutions, including version 2.0 of its Transcode Manager media processing software and StreamZ Live Broadcast multiscreen encoder, with Kayak. At the same time, Eksten notes, beyond content-specific workflows, the Digital Rapids architecture allows customers to integrate back office and other IT workflows through Kayak to support a unified enterprise system that makes it easier to conduct business in today’s device-saturated market.

From the Digital Rapids partners’ perspective, integration into Kayak allows vendor partners to enable customers who have moved into the new workflow system to more easily address the kinds of problems cited by Amazon’s Werner Vogels. “It’s really managing all those virtual apps they have to manage as part of their overall solution,” Eksten says. “It can be a rifle-shot solution or an overall workflow. The beauty of integrating with Kayak is the integration works for them whether they use discrete workflows or integrate directly into the end-to-end customer workflow.”

Building a partner ecosystem of suppliers together with enhancing Digital Rapids’ own products to operate seamlessly across internal customer facilities and the cloud is crucial to drawing customers to Kayak. “The richness of the Kayak partner ecosystem is one of the platform’s key strengths, combining with its unique architecture to let our mutual customers quickly integrate new technologies into their operations while mixing and matching partner solutions to create the optimal workflows for their needs,” says Onkar Parmar, senior partnership manager for Kayak at Digital Rapids.

Comments from Kayak partners buttress this claim. “Digital Rapids’ Kayak platform allows our mutual customers to quickly and flexibly integrate Dolby Digital Plus premium multichannel audio encoding and Dolby’s loudness correction technology into powerful workflows to efficiently realize and differentiate their multiscreen offerings,” says Jean-Christophe Morizur, senior director of e-media professional solutions at Dolby Laboratories.

Similar high praise is offered by Venera Technologies, a supplier of quality control and other test and measurement tools. “The innovative Kayak platform provides a perfect opportunity for Venera to bring our QC technologies to Digital Rapids’ customers, enabling them to enhance their media production and delivery operations with content verification at various stages of their workflows,” says Fereidoon Khosravi, senior vice president of business development for the Americas at Venera. “The ease with which Kayak users can integrate our QC components into powerful workflows is simply amazing.”

Other participants in the Kayak partner ecosystem include Automatic Sync Technologies; BuyDRM; Corpus Media Labs; Digimetrics; DSB Consulting; Empress Media Asset Management, LLC; EZDRM; Hitachi Solutions; Ignite Technologies, Inc.; Interra Systems; Irdeto; Manzanita Systems; Minnetonka Audio Software; National TeleConsultants; PixelTools; R Systems Inc.; Screen Subtitling Systems; Signiant; Solekai Systems; Tata Elxsi, and VidCheck.

One of the early points of connection for use of Kayak in the premium services arena is UltraViolet. For distributors in the UltraViolet ecosystem having a workflow that can support all the points of interaction required for execution on the platform is essential, Eksten notes.

“We’re working with the studios and some of our partners to test on the UltraViolet workflows using Kayak to integrate into their business systems,” he says. “There’s a complex interaction between work performed by various technologies for UltraViolet, including encoding, DRM, multiplexing, as well as the need to integrate on the business side with metadata, registration and authentication.”


Transcoding Advances Intensify Debate over Hardware Strategies

Kevin Wirick, VP & GM, video processing, Motorola Mobility


October 20, 2012 – The vendor-driven battle over digital video encoding strategies has taken a new turn with the introduction of new hardware platforms touting massive processing capacity, even as software-based systems continue to post new gains in bitrate and distribution efficiencies.

Motorola Mobility and Imagine Communications are publicizing as-yet-unavailable transcoding systems running on purpose-built ASICs (application-specific integrated circuits) at unprecedented processing rates of 3 gigapixels and 20 gigapixels per second per rack unit, respectively, with prospects for major savings in power and space consumption as well as cost-effective approaches to expanding live multiscreen channel counts into the thousands. Meanwhile, transcoding systems designed to run on generic processors continue to make great strides, not only as a function of ever-greater processing power but as a result of advances in encoding and other software-based techniques.

Software-based systems like Elemental’s, which uses a combination of individual or hybrid CPUs and GPUs (graphics processing units), and Envivio’s, designed for Intel CPUs, have so far dominated the multiscreen streaming environment, prompting some traditional hardware-based encoder suppliers like Harmonic to develop software-based systems. But as multiscreen streaming moves from the over-the-top domain into the premium service provider space, Motorola and Imagine have gambled on developing hardware systems with massive processing capabilities meant to consolidate and cost-effectively expand the range of multiscreen streaming options to include all live TV channels, local broadcast as well as national, which can add up to two or three thousand channels in the case of a Tier 1 MSO.

At a moment when most operators haven’t even begun streaming live channels to connected devices and when those that have in most cases are delivering only a handful of channels, there’s general agreement the channel count is going to go up amid a great deal of uncertainty about how that can be accomplished cost effectively. With all the devices in play, comprehensive coverage to all types of Apple iOS and Android smartphones and tablets, PCs, Macs, game consoles and smart TVs requires up to 16 encoded profiles per live channel or on-demand file, meaning handling all requirements for local as well as national programming from a regional headend could require capacity to generate many thousands of profiles at once.
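The arithmetic behind those profile counts is straightforward. The channel counts below are illustrative assumptions, not any operator’s actual lineup.

```python
# Back-of-the-envelope arithmetic behind the profile counts cited above.
# The two lineup sizes are invented for illustration; the 16-profiles-
# per-channel figure comes from the article's device-coverage estimate.

profiles_per_channel = 16   # covering iOS/Android, PCs, consoles, smart TVs
channel_counts = {"small lineup": 200, "Tier 1 MSO": 2500}

total_profiles = {name: n * profiles_per_channel
                  for name, n in channel_counts.items()}
print(total_profiles)  # {'small lineup': 3200, 'Tier 1 MSO': 40000}
```

Even a modest lineup pushes a regional headend into thousands of concurrent encodes, which is the capacity gap the new hardware platforms are aimed at.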

Moreover, there’s a lot of processing required beyond the basic encoding of each stream for each type of device. The transcoder must be able to de-interlace each encoded NTSC file to progressive mode, add IDR (instantaneous decoder refresh) frames to enable SCTE 35-based ad insertion and perform GOP (group of pictures) alignment to ensure smooth transition between fragments sent from adaptive bitrate (ABR) streaming packagers.

Proprietary hardware system advocates assert that pushing the envelope on hardware density and processing power serves to lower the amount of space and power consumed for a given volume of transcoding, far outweighing any cost penalty to be paid for proprietary hardware. Equally, if not more important, the super high processing power of an ASIC purpose built for encoding enables more efficient compression. No matter how many streams a stack of transcoder modules might deliver, the lower the bitrate per stream for a given level of quality, the greater the utilization of bandwidth, which is the most expensive commodity to cope with in the move to multiscreen services.

In fact, notes Kevin Wirick, vice president and general manager of video processing at Motorola Mobility, “the secret of the GT-3 is the latest video technology and our custom video processing algorithms that allow us to get the best video quality in a very small efficient package. An operator can now process a lot more video at different resolutions and provide a higher resolution for different screen formats using our product than with our competitors.”

For example, he explains, the company’s advanced video processing algorithms can exploit the latest processing capabilities of purpose-built ASICs to do much more motion prediction across multiple frames than was previously possible. “So if an operator can only get one megabit through their cable bandwidth and over Wi-Fi to your iPad, with the GT-3 you can have a higher resolution picture than using other people’s transcoders,” he says.

The ability to process video in a one-rack-unit box at 3 gigapixels per second, more than tripling the highest levels of current-generation hardware-based encoders, translates into capacity to process the equivalent of about 48 1080p/30 HD channels. Input versus output configurations vary, depending on the types of channels on the input side and the number of profiles per channel to be delivered from the box. Motorola is spec’ing the 1RU unit as supporting up to 24 input channels with up to 16 encoded profiles per channel on the output.
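The quoted capacity figures can be sanity-checked against the pixel rate of a single 1080p/30 channel; the same calculation applies to the 20-gigapixel figure Imagine cites later in the article.

```python
# Verifying the channel-count equivalents quoted for the GT-3
# (3 Gpixels/s) and Imagine's next: platform (20 Gpixels/s).

def hd_channels(gigapixels_per_second, width=1920, height=1080, fps=30):
    """Number of 1080p channels a given pixel-processing rate supports."""
    pixel_rate = width * height * fps   # pixels per second for one channel
    return int(gigapixels_per_second * 1e9 / pixel_rate)

print(hd_channels(3))    # 48, matching the GT-3 figure
print(hd_channels(20))   # 321, in line with the ~320-channel claim
```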

“Compared to server-based approaches we’re at about ten times more density,” Wirick says. “So we get about ten times more video with the same amount of power as somebody using an Intel-based x86 server would get to do adaptive stream transcoding.”

The GT-3 is slated for general availability in the first quarter. “We have interest from the top tier operators who are doing deployments now and are planning new services coming up over the next year,” he says.

Imagine Communications, which hopes to have its new super high-power transcoding product, dubbed “next:,” available for commercial deployments by the end of the second quarter next year, has been more focused on the hardware aspects than the algorithmic aspects of the platform at this point, acknowledges Chris Gordon, vice president for product and marketing at Imagine. “We’re still working on motion extension and mode decisioning tracking, but that’s not our primary focus,” Gordon says, noting the next: platform benefits from the major encoding advances that have made Imagine’s first-generation product a factor in over half the digital premium channel encoding performed in the U.S.

Along with motion extension, which is to say the predictive encoding processes referenced by Motorola’s Kevin Wirick, mode decisioning is one of the major areas of improvement in encoding efficiency enabled by more advanced processors. It’s a process by which the results of different decision paths are compared to determine what is optimal for a given level of resolution, thereby avoiding overuse of resources.

“We’ll continue to tweak our software capabilities,” Gordon says. “But right now our resources are devoted to supporting customer trialing and bringing the product to market on time.”

Imagine’s next: platform will be available in 2RU, 4RU and 10RU iterations. In an apples-to-apples comparison with the 1RU specs of the Motorola GT-3, Gordon notes the 20 gigapixel processing power of next: is the equivalent of 320 HD channels compared to the 48 represented by 3 gigapixels per second. What this means in terms of practical proportions of input channels versus output channels depends, as always, on the number of profiles supported on the output and whether HD or SD channels are in play.

In Imagine’s case there’s no limit on the number of video stream profiles per ABR group, which facilitates adjustments to ongoing changes in multiscreen requirements, Gordon notes. The platform also supports all current profiles, including 1080p60. And, like many transcoding platforms, it comes with support for packaging in multiple ABR streaming modes.

Imagine is able to race ahead with this kind of capacity on next-generation ASICs by virtue of its ability to leverage its accomplishments in software, says Richard Stanfield, the company’s CEO. “What the ASIC can’t do, we do in software,” he says. “We can take all the software from our last generation and create a new product, so our time to market is quick.”

The next: platform promises to open markets beyond North America for Imagine, Stanfield notes. “Our business has traditionally been with large Tier 1 North American MSOs,” he says. “This product takes us to the next level with the rest of the world where there’s a strong demand for linear transcoding in IPTV as well as cable. We’ll be able to price below the current market to capture market share.”

Imagine views IPTV operators’ need for gear to replace aging encoders deployed with initial rollouts six or so years ago as the lowest hanging fruit. Right behind that is the demand for multiscreen streaming support from both IPTV and cable operators here and abroad.

Gordon stresses the flexibility of the new platform when it comes to the type of hardware packaging it’s compatible with and the ways in which built-in storage can be employed. Because most of the firm’s MSO customers have deployed the first-generation platform on HP BladeSystem c7000 enclosures, the next: system will frequently be added as another blade on that chassis.

More generally, availability of the next: system with the Imagine ASICs embedded in PCI cards creates an opportunity to place the transcoding on edge servers operators are deploying to support their own CDN (content delivery network) infrastructures. In such cases, the 1 gigabyte of onboard storage in the 2RU version of the platform could be used to accommodate local time-shifted programming, Gordon suggests.

Advances at Elemental

Support for distributed as well as centralized transcoding architectures, of course, is a major selling point of software-based systems with their ability to leverage low-cost COTS (commodity off-the-shelf) servers. How those purported cost advantages stack up against the forthcoming Motorola and Imagine transcoding machines, given the density and power consumption benefits of the latter, remains to be seen.

But it’s clear the software system providers aren’t sitting still, even when it comes to gaining improvements that could impact MPEG-2 encoding for rapidly increasing volumes of on-demand content. Elemental, for example, which built its MPEG-2 encoding algorithms from the ground up as it has with MPEG-4, VC-1 and the emerging HEVC (High Efficiency Video Coding) standard, believes it can get the MPEG-2 rate to below 10 Mbps and possibly down to 8 Mbps without sacrificing quality. The result for an MSO aggressively expanding its VOD file count could be infrastructure savings approaching $1 billion, says Keith Wymbs, vice president of marketing at Elemental.
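The scale of such savings is easy to sanity-check with hypothetical numbers; the library size, title length and bitrates below are illustrative assumptions, not Elemental’s or any MSO’s actual figures.

```python
# Hypothetical arithmetic for storage across a VOD library at two MPEG-2 rates.
def library_terabytes(titles, hours_per_title, mbps):
    seconds = titles * hours_per_title * 3600
    return seconds * (mbps * 1e6 / 8) / 1e12  # Mb/s -> bytes/s -> terabytes

before = library_terabytes(300_000, 1.5, 12)  # assumed legacy MPEG-2 rate
after = library_terabytes(300_000, 1.5, 8)    # the sub-10 Mbps target
print(round(before), round(after), round(before - after))  # TB before/after/saved
```

Storage is only one piece of the claimed savings; transport and server infrastructure scale with bitrate in much the same way.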

Along with encoding know-how, Elemental achieves a high level of performance efficiency on its core Linux-based Elemental Server platform through a unique blend of parallel processing utilizing Intel CPUs and GPUs from NVIDIA or the new Sandy Bridge hybrid CPU/GPU from Intel, resulting in a three- to seven-times density improvement over CPU-only systems, according to Elemental officials. Rather than processing individual macroblocks within each video frame serially, the technology processes all the macroblocks in a frame concurrently.
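The serial-versus-concurrent distinction can be sketched in miniature; the per-macroblock work below is a placeholder, since the real GPU kernels are of course proprietary.

```python
from concurrent.futures import ThreadPoolExecutor

MB = 16  # macroblock dimension in pixels

def macroblock_origins(width, height):
    # top-left corners of the 16x16 macroblocks tiling one frame
    return [(x, y) for y in range(0, height, MB) for x in range(0, width, MB)]

def encode_macroblock(origin):
    # stand-in for the real transform/quantize/entropy-code steps
    return origin

def encode_frame_concurrently(width, height):
    # every macroblock in the frame is dispatched at once rather than serially,
    # mimicking the GPU's all-macroblocks-per-frame parallelism in miniature
    with ThreadPoolExecutor() as pool:
        return list(pool.map(encode_macroblock, macroblock_origins(width, height)))

print(len(encode_frame_concurrently(1920, 1088)))  # 8160 macroblocks per HD frame
```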

As previously reported (September 2011, p. 10), Comcast is using Elemental’s on-demand transcoding platform for its Xfinity online and mobile service initiatives. Trading out a previous encoding system, Comcast reduced the physical footprint for its Xfinity servers by 75 percent.

Where HEVC is concerned, while the standard is not slated to be completed until well into next year, Elemental believes it has an edge when it comes to having a product that will be ready for commercial deployment once the standard is finalized. “We’re watching the spec closely and implementing aspects as they stabilize,” Wymbs says, noting its implementations so far have achieved a 40 percent reduction in bitrates compared to H.264 (MPEG-4). “Our customers will be able to implement the code with software upgrades on Elemental technology they deployed a year ago.”

Elemental also is now delivering a new product, Elemental Stream, which offers premium service providers a way to lower costs of high-volume streaming of live and on-demand content over their networks. Stream, which can be deployed at the encoding location or with CDN resources, allows content to be delivered from the transcoder in a single encoded video format for each bitrate profile by applying the specific DRMs and ABR formats to each user’s stream on the fly.

Elemental Stream supports Apple HTTP Live Streaming (HLS), Adobe HTTP Dynamic Streaming (HDS), Microsoft Smooth Streaming and MPEG-DASH and can apply content protection such as Microsoft PlayReady, Verimatrix VCAS and Motorola SecureMedia, Wymbs says, noting additional profiles could be added in response to new developments. The platform also supports SCTE-35 advertising triggers, closed captioning and subtitle conversion and allows international broadcasters and operators to associate a single video with multiple audio tracks.

Envivio Achieves Big Density Gains

Envivio, too, has been racing ahead with its Intel CPU-based system, which recently scored a big win with a still-unnamed Tier 1 U.S. MSO for its multiscreen service. The firm’s advances include the 4Caster G4, the latest version of its fully packaged 2RU encoding platform, representing a 6x density improvement over its previous version. That translates to the power to transcode up to 12 HD channels per 2RU chassis into multiple bitrate formats, according to Julien Signès, president and CEO of Envivio.

“Envivio 4Caster G4 is the most powerful encoding platform that we have ever offered,” Signès says. “We are providing a broader range of interfaces, the largest number of output formats and the option of high quality or high density configurations.”

The 4Caster G4 platform houses Muse Live, the core Envivio software system, which supports multiple codecs, including the capability to encode in HEVC as that standard takes shape, and transcodes premium content into profiles for live and on-demand multiscreen services on all types of distribution networks. By virtue of its support for IP, ASI and SD/HD-SDI interfaces along with redundant power supplies and hot-swappable nodes, the new platform can be used in all premium service environments, Signès notes.

Muse also runs on HP BladeSystem c7000 and ProLiant BL460c series servers and supports a wide range of additional features such as picture-in-picture, alternative audio languages, closed captions, DVB-Subtitles and DVB-Teletext. This allows Envivio to support a wide range of distributed architectures and pure OTT plays as a complement to the more centralized 4Caster option.

Envivio’s solution for distributed positioning of the stream fragmentation and DRM packaging process for ABR-based services is the Halo Network Media Processor, which the company recently upgraded to support time-shifted service models, such as catch-up TV, start over and network PVR. Signès says the Halo “TV Anytime” functionalities are in trials with multiple operators in Europe and North America, representing still another sign of how all the on-demand services common to the traditional TV realm are now moving into the multiscreen space.

“The new TV Anytime capabilities available on Halo further enhance the multiscreen user experience by allowing operators to deliver time-shifted TV and customized assets,” Signès says. He notes that a key element now available on Halo is Personalized Index Creation (PIC), a new approach enabling dynamic asset creation in the network, including highlights creation and time-shifted TV assets.

This streamlined solution utilizes bits of content already cached in the network to deliver a unique stream per user, he explains. By leveraging the existing caching infrastructure, PIC does not require expensive storage and processing and opens up possibilities for new personalized service offerings.

It will be interesting to see what impact the massive ASIC-based transcoding solutions from Motorola and Imagine have on service providers’ decision making as they ramp up for all-encompassing next-generation multiscreen services. Whether or not they trigger a swing back to hardware-based systems will depend largely on the software system suppliers’ ability to build compelling, market-leading software solutions. But those suppliers will also have to sustain what has been a winning argument: that reliance on Moore’s law to generate commodity hardware options makes dependence on proprietary hardware a risky proposition.


New Tech Enhances Viability Of Adaptive Streaming for TV

Brian Collie, CEO, SeaWell Networks


October 3, 2012 – Start-up SeaWell Networks has introduced a multiscreen session-level management platform which could become a significant force for breaking barriers to achieving the personalization and monetization potential of IP-streamed TV services.
 
According to SeaWell officials, the company’s technology has gained traction with two major, unnamed network operating companies in Canada and the U.S. and is helping Avail-TVN serve over 200 operators through its new AnyView TV Everywhere service. Adding to the momentum, three leading providers of advanced advertising support systems have partnered with SeaWell to facilitate personalized advertising in the multiscreen domain.

Time Warner Cable Media president Joan Gillman, who recently joined SeaWell’s board, says the company is “positioned to provide innovative solutions for IP delivered content and advanced advertising.” Avail-TVN CTO Michael Kazmier agrees and spells out why.

“Service providers need new and innovative ways to deliver content securely and efficiently,” Kazmier says. “SeaWell’s solution enables us to deliver to any device, without the overhead of building and maintaining multiple infrastructures and client applications.”

Such comments reflect the fact that prior to SeaWell’s introduction of its Spectrum platform earlier this year, service providers had not found a cost-effective way to manage IP-streamed content on a per-session basis, asserts SeaWell CEO Brian Collie. “The response we’re getting from MSOs is, this is something we need to be able to do,” Collie says.

Essentially, Spectrum is a software system that enables managed delivery of IP-streamed services by performing multiple functions at the network edge, thereby alleviating the bandwidth and processing load between central transcoding locations and CDN caches as well as the need for specialized client software on user devices. By handling requests for video, querying the appropriate back-office authentication servers and taking control of the adaptive streaming manifest process, Spectrum creates a personalized, DRM-secured session based on the device and the business rules set by the network operator, explains Brian Stevenson, director of product management at SeaWell.

“People have been looking for a way to handle session management and management of QoS,” Stevenson says. “HTTP adaptive streaming is a fantastic mechanism as far as it goes, but it doesn’t have a state for managing sessions. By taking over the stream and manifest process we manage the session setup and all the content delivered in real time to make sure end users are getting a cable-like experience.”

This approach represents a big change in how adaptive streaming (AS) works, because it means the device is no longer in control. In unmanaged AS, at the initiation of a video streaming session by any user on any device, the HTTP server sends the device a live manifest file that defines each of the available bit rate profiles for the chosen content.

Throughout the session the device signals which profile should be streamed every few seconds based on what the available access data rate is on the network and how much processing power is available on the device to handle the video stream. Other information can be included in the manifest as well, such as type and source of application the client can expect to see in any given stream fragment or “chunk.”

By assuring the stream adjusts every few seconds to bandwidth conditions AS offers a powerful way to maintain continuity amid fluctuations which otherwise might cause buffering breaks in the stream. But it leaves providers in the dark as to what’s going on at the user end and makes it very difficult to deliver a managed service if every stream has to be tailored to each recipient device as it leaves the transcoder, especially if devices have to be sent special clients to accommodate specific apps such as advertising.
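A bare-bones version of the client-side logic just described, using a hypothetical bitrate ladder, looks like this:

```python
# Minimal unmanaged-ABR selection: every few seconds the client picks the highest
# advertised profile its measured throughput can sustain. Ladder is hypothetical.
LADDER_KBPS = [400, 1200, 2500, 5000]

def pick_profile(measured_kbps, ladder=LADDER_KBPS):
    viable = [p for p in ladder if p <= measured_kbps]
    return viable[-1] if viable else ladder[0]  # fall back to lowest profile

print(pick_profile(3200))  # healthy link -> 2500 kbps profile
print(pick_profile(300))   # congested link -> bottom-of-ladder 400 kbps
```

Note that all the intelligence sits in the client; nothing in this loop tells the operator who is watching what, which is exactly the visibility gap Spectrum targets.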

Concerns over quality control, personalization and costs have limited the usefulness of multiscreen services: only the device types targeted by the transcoders receive a stream, that stream is not personalized, and bringing new devices with variant AS and DRM formats into the mix is hard to do without spending huge amounts on additional processing power at the headend. The SeaWell Spectrum solution has emerged amid internal debates within industry tech circles as to how to address the multiscreen premium service and personalization issues, whether with cable-specific versions of AS tied to PacketCable Multimedia policy servers or by avoiding AS streaming over the network altogether, relying instead on home media gateways to perform transcoding and IP formatting on standard MPEG-2 content for distribution to connected devices in the home.

SeaWell claims its solution deals with all these issues at minimum costs, especially if the operator has cache or other edge servers at hand which can be used to run the Spectrum software program. “We can coordinate Spectrum with a lot of the caching infrastructure that’s out there,” Stevenson says. “So in most cases you don’t need to buy a second box.”

Spectrum manipulates the manifest in the network, creating a personalized session with each device and reacting to new conditions as they occur. “We take a stream delivered in a single format from the transcoder and on the fly tailor each piece of content to the device and specific user characteristics as it’s delivered,” he says.

Content protection is delivered to individual streams, whether files are stored in the clear or encrypted form, through the Spectrum interface with DRM servers. “We interact with all the different DRMs and formats used with Microsoft, Android and Apple formats, as well as different flavors of MPEG DASH (Dynamic Adaptive Streaming over HTTP),” Collie notes. “It’s all done on the fly.”

Where bandwidth management is concerned, the platform allows operators to go beyond the usual approaches to AS where all subscribers are treated alike to allow preferences to be accorded to premium users when congestion starts to impact bitrates. “If you’re a VIP subscriber, you may get a continuous high definition quality stream while people on lower tiers are scaled back to lower resolutions,” Stevenson says.
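In manifest terms, that tiering amounts to filtering the bitrate ladder a given session is allowed to see. The sketch below uses an invented policy table; a real deployment would pull caps from the operator’s business rules rather than hard-code them.

```python
# Sketch of per-session manifest tailoring under congestion. The tier caps are
# invented for illustration, not SeaWell's actual policy model.
TIER_CAPS_KBPS = {"vip": None, "standard": 2500, "basic": 1200}

def tailor_ladder(ladder_kbps, tier, congested):
    cap = TIER_CAPS_KBPS[tier]
    if not congested or cap is None:
        return list(ladder_kbps)        # VIPs keep the full ladder
    return [p for p in ladder_kbps if p <= cap]

ladder = [400, 1200, 2500, 5000]
print(tailor_ladder(ladder, "vip", congested=True))    # [400, 1200, 2500, 5000]
print(tailor_ladder(ladder, "basic", congested=True))  # [400, 1200]
```

Because the client only ever sees the tailored manifest, no special client software is needed to enforce the policy.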

The SeaWell officials also stress the role the Spectrum solution plays in gathering session data for quality performance monitoring, tracking bandwidth consumption for users’ accounts, assessing viewer interaction with apps and ad metrics. All the information about what occurs during the viewing session is gathered together and delivered to internal collection centers as well as to third party partners, Collie notes.

“For example, we can work with a policy management system which may be assigned to take action in the case of network saturation to adjust allocations to assure all users have a high quality of service,” he says. “We export statistics to our own or third-party quality assurance analyzers to get quantitative values to help operators track performance.”

So far, SeaWell customers have been using Spectrum to achieve the basic efficiencies that allow them to deliver streams from single mezzanine files for AS fragmentation, DRM management and client manifest control by Spectrum at the edges. “The goal here is cost reduction and better performance,” Collie says, noting use of Spectrum does away with the need to store content in different formats for different classes of devices.

But, he adds, SeaWell expects the personalized advertising capabilities to come into play as customers put the platform in place and the streaming scales to levels conducive to driving new revenues. “As the advertising delivery ecosystem becomes more complex, Spectrum responds to this challenge by repackaging files for a particular user and device for each session enabling operators to efficiently and cost-effectively expand existing VOD and linear advertising models to any IP device,” he says.

The Spectrum platform can feed the information it is collecting about the device and user into a third-party ad network, enabling personalized delivery of ads with each session, he explains. The ad provider can encode the ads in any AS format, and Spectrum will repackage them on the fly for insertion into the session. The resulting smooth transition between content and ads makes the experience TV like while creating a personal stream immune to conventional ad skipping technologies.

Spectrum has now been integrated into the advanced advertising systems of BlackArrow, ARRIS and Harris. In each case, the joint solution leverages SeaWell technology to eliminate the need to create multiple interfaces and streams for different devices and video formats while using the capabilities of the ad system to determine ad payloads, decisioning and reporting.

Such integrations have important implications for local cable ad sales as well as national spot placements, Stevenson notes. “This is one of the applications MSOs are looking at to deploy with us,” he says. “We look at ad markers like any other splicer, so we know whether there’s an opportunity to place an ad from local cache, and then we perform the formatting on that ad to create the seamless experience on each session.”
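Treating ad markers “like any other splicer” reduces, at the session level, to splicing repackaged ad segments into the content’s segment list at the marker position. A simplified sketch, with illustrative segment names:

```python
# Simplified session-level ad splice: insert repackaged ad segments into the
# content's segment list at an ad-marker index. Segment names are illustrative.
def splice_ads(content_segments, ad_segments, marker_index):
    return content_segments[:marker_index] + ad_segments + content_segments[marker_index:]

session = splice_ads(["c1.ts", "c2.ts", "c3.ts"], ["ad1.ts", "ad2.ts"], marker_index=2)
print(session)  # ['c1.ts', 'c2.ts', 'ad1.ts', 'ad2.ts', 'c3.ts']
```

Since the splice happens in the manifest the client is handed, the ad break plays as part of the same continuous stream, which is what makes the transition seamless and skip-resistant.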

Cable customers are looking at personalizing local ad placements in the IP streams, Collie adds. “We have customers who want to go to sub-zones within DMAs, and one that’s looking at doing ads on a zip-code-plus-four basis,” he says. “The fact is, you can go down to the individual level and roll out that kind of targeting when that becomes part of the gameplan.”


MPEG-DASH Gains Toehold In European OTT Distribution

Steve Christian, VP, marketing, Verimatrix


October 2, 2012 – With ever more vendors supporting MPEG-DASH in their encoders, streaming software, DRM platforms and other products, it appears that the standard has found an early beachhead for commercial implementation in Europe.

Multiscreen service distribution via MPEG-DASH (Dynamic Adaptive Streaming over HTTP) is now possible on the cloud-based OTT service that Spain’s Abertis Telecom and NAGRA are offering to European service providers and broadcasters. This addition to the adaptive streaming (AS) formats already available on the cloud service comes in conjunction with integration of NAGRA’s content protection platform with the ProMedia transcoding system supplied by Harmonic.

“By launching the world’s first OTT service supporting MPEG-DASH, Abertis Telecom and NAGRA are breaking new ground and helping to build the future of multiscreen delivery,” says Thierry Fautier, senior director for convergence solutions at Harmonic. He notes this follows Harmonic’s and other vendors’ participation in the first live public MPEG-DASH trial at the London Olympic Games with Belgian broadcaster VRT.

Together these developments highlight the role European broadcasters are likely to play in building early commercial momentum for MPEG-DASH, thanks to adoption of a new multiscreen streaming version of HbbTV (Hybrid Broadcast Broadband TV). This is the standard which, in its first iteration, supported distribution of premium and free Internet content over broadband connections to smart TVs and hybrid set-tops in combination with over-the-air digital terrestrial TV (DTT).

In the two years since HbbTV was finalized, broadcasters in growing numbers have been seizing on the OTT opportunity in response to the rising penetration of smart TVs and HbbTV-compatible DTT receivers. By 2014, 60 million TV sets in Western Europe, representing half the installed market, will be HbbTV compliant, according to the HbbTV Consortium.

This TV-centric foundation for OTT distribution sets the stage for a move to multiscreen streaming in conjunction with the newly released version 1.5 of HbbTV, which includes support for MPEG-DASH. Notably, the partners in the Abertis-NAGRA Multiscreen Cloud Service report that ten free-to-air broadcasters across Europe are in trials with the new DASH implementation.

Demand for the multiscreen extension of HbbTV is strong, says HbbTV Consortium chairman Klaus Illgner-Fehns. “The publication of version 1.5 of the HbbTV specification responds to strong market demand for new features to be included as soon as possible,” he says. “We are already working towards version 2.0 of the specification.”

Another source of momentum from the content side is Netflix, which, according to Mark Watson, director of streaming standards, has made all its media files compliant with MPEG-DASH. Explaining Netflix’s thinking in a post to the DASH Promoters Group, Watson says this means “the majority of our traffic (and so a sizable chunk of U.S. peak Internet traffic) is compliant to the new MPEG-DASH and Common Encryption file format. We hope more and more devices will choose to support these formats.”

Challenges to Market Adoption

These and other efforts to drive DASH into the mainstream face a formidable array of challenges, starting with Apple’s failure to embrace the standard and extending to the chicken-and-egg conundrum where the absence of a DASH-compliant base of client software in IP devices feeds the reluctance of content suppliers to support DASH in their streaming operations. There are also concerns about licensing costs.

While ISO standardization requires that contributors of intellectual property to the standard agree to reasonable licensing terms, it’s unclear what the terms will be on some aspects of the standard. So far, three major players, Microsoft, Qualcomm and Cisco Systems, have agreed to offer their contributions to DASH royalty free.

Royalty issues also dog the encoding arena. DASH is codec agnostic, but H.264 is the predominant mode in use today, and royalty costs there have become an issue, prompting Google’s WebM royalty-free encoding initiative.

The good news is the combination of industry vendor support and proofs of concept in early demonstrations and trials is starting to move the market, says Steven Christian, vice president of marketing at Verimatrix, a key supporter of content protection on DASH. “In my opinion the DASH format will naturally be the center of things in two to three years’ time,” Christian says.

“But in the meantime,” he adds, “there are a lot of existing devices out there to be supported by today’s streaming platforms. So I think we’ll end up with a multi-format delivery system where DASH plays an increasing role over time.”

DASH Goals

DASH addresses the incompatibilities of proprietary AS systems, which employ a “pull” mode in distribution technology that is altogether different from the “push” mode of traditional digital TV. Every few seconds an AS-enabled device, by referencing the bitrate options or “adaptation sets” listed for a given piece of content in a manifest file sent from an HTTP server, asks the server to send a segment fragment of streamed content at the optimum bitrate, depending on how much bandwidth is available at that moment in time and how much processing power the device has available for decoding the bit streams.

The basic goal is to ensure video is streamed at the highest level of quality that can be sustained at any given time without dropping frames or triggering buffering interruptions in the flow. While the proprietary modes, including most prominently Apple’s HTTP Live Streaming (HLS), Microsoft’s Smooth and Adobe’s HTTP Dynamic Streaming (HDS) all use the MPEG-4 H.264 video codec along with MPEG Advanced Audio Coding (AAC), each uses a different approach to constructing fragments, timing their sequence, formatting manifest files and supporting content protection.

DASH aims to overcome these disparities through server-to-client communications which are delivered in an AS manifest file format known as Media Presentation Description (MPD) to define various segments within the stream, each of which is associated with a uniquely referenced HTTP URL. While critics point to the existence of multiple versions of DASH as a reason to doubt its efficacy, in truth, the consensus now is that the market will embrace two versions predicated on two modes of transporting fragments.
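What an MPD boils down to can be seen in a drastically simplified example; real MPDs carry XML namespaces, period timing and segment templates, all of which this toy omits.

```python
import xml.etree.ElementTree as ET

# Drastically simplified MPD: one adaptation set, two bitrate representations.
MPD = """<MPD><Period><AdaptationSet mimeType="video/mp4">
  <Representation id="low" bandwidth="1200000"/>
  <Representation id="high" bandwidth="5000000"/>
</AdaptationSet></Period></MPD>"""

def representations(mpd_xml):
    # map each representation id to its advertised bandwidth in bits/s
    root = ET.fromstring(mpd_xml)
    return {r.get("id"): int(r.get("bandwidth")) for r in root.iter("Representation")}

print(representations(MPD))  # {'low': 1200000, 'high': 5000000}
```

The client’s job is then the familiar one: pick a representation, resolve its segment URLs and request fragments over plain HTTP.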

These are fragmented MP4 (fMP4) derived from a part of the MPEG-4 standard known as ISO Base Media File Format (BMFF), which is used by Smooth Streaming and HDS, and an approach to fragmentation based on MPEG-2 Transport Stream, which is very close to the proprietary mode used by Apple for HLS. Google’s WebM, which uses another transport mode, appears to be losing steam, as evidenced by Android’s embrace of HLS.

This DASH transport dichotomy isn’t just the result of Apple’s adherence to MPEG-2TS. Many CE manufacturers prefer this choice of transport modes for DASH by virtue of the processing power savings that come with using the same transport that’s used with digital pay TV.

However, one of the attributes of DASH and fMP4 in particular is an intrinsic bridge to MPEG-2TS and HLS whereby the XML-based MPD manifest file is able to present fragments based on the HLS mode of delivery. Thus, DASH-based content formatted for MPEG-2TS will be viewable on any player that conforms to a DASH profile supporting this capability, such as the DASH 264 profile backed by entities comprising the DASH Industry Forum.

The DASH specifications create a standardized means of supporting many other functions as well. These include live streaming as well as progressive download of on-demand content; fast initial startup and seeking; enhanced trick modes and random access capabilities; dual streams for stereoscopic 3D presentations; Multi-view Video Coding used in Blu-ray 3D displays, and, critically, dynamic implementation of diverse protection schemes.

Content Protection

The fact that DASH creates a way to readily communicate to devices what the specific DRM parameters are for a given piece of content overcomes a major cost barrier to scaling connected-device access to premium content. Today, a content supplier who wants to provide a greater level of security than is natively supported on any given AS platform must take specific steps to secure each targeted device with the appropriate DRM client. DASH allows any DASH-compliant DRM to be implemented automatically with no need for intervention by the content supplier.
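In MPD terms this DRM signaling takes the form of ContentProtection descriptors: a simplified adaptation set might advertise Common Encryption plus two DRM system IDs, and each client engages whichever system it supports. The UUIDs below are the published PlayReady and Widevine system identifiers; the XML is otherwise a stripped-down illustration.

```python
import xml.etree.ElementTree as ET

# Simplified DASH adaptation set advertising Common Encryption and two DRM systems.
ADAPTATION_SET = """<AdaptationSet>
  <ContentProtection schemeIdUri="urn:mpeg:dash:mp4protection:2011" value="cenc"/>
  <ContentProtection schemeIdUri="urn:uuid:9a04f079-9840-4286-ab92-e65be0885f95"/>
  <ContentProtection schemeIdUri="urn:uuid:edef8ba9-79d6-4ace-a3c8-27dcd51d21ed"/>
</AdaptationSet>"""

def advertised_protection(xml_text):
    # each client scans these descriptors and engages whichever DRM it supports
    return [cp.get("schemeIdUri") for cp in ET.fromstring(xml_text).iter("ContentProtection")]

print(len(advertised_protection(ADAPTATION_SET)))  # 3 descriptors
```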

Verimatrix, for example, has brought support for the DASH Common Encryption format into the domain of its Video Content Authority System (VCAS), which is a harmonized rights management platform designed to streamline delivery of protection to all types of devices, including iOS, Android, PCs, game consoles, smart TVs and set-top boxes. VCAS for DASH, which was demonstrated at the IBC conference in conjunction with Envivio’s encoding platform, complements the support for PlayReady-protected Smooth Streaming already incorporated into VCAS as well as security enhancements to HLS provided through VCAS ViewRight Web components, Christian notes.

“What we’ve tried to offer is a way to deal with the extra complexity of delivering secured content to multiple types of devices through our multi-rights approach,” he says. “VCAS for DASH is a natural part of the overall strategy.”

What this means for devices already running VCAS, such as smart TVs from Samsung and LG and set-tops from a variety of vendors, is that manufacturers have a ready means of introducing support for DASH on the DRM side with a simple software upgrade. “These suppliers have integrated our client-side technology in such a way that they could easily transition to DASH if they want to take that step,” Christian says.

Overcoming Incompatibilities

Of course, all these DASH capabilities are moot if the client doesn’t support the MPD manifest, which is where Apple’s resistance to DASH comes into play as a way to maintain a proprietary link between its devices and the content it provides. But even here there are ways to break through the incompatibilities, if providers are willing to incur some added expenses at the network edges in exchange for savings attained through uniform use of DASH to stream content to all devices.

This is part of what the Belgium VRT trial was all about. Working in cooperation with the DASH Promoters Group and the European Broadcasting Union, the broadcaster delivered OTT coverage of live Olympics events using only DASH, which meant it had to come up with a way to enable access on any devices not running MPD in their AS players.

“The logical flow of online distribution based on MPEG-DASH is very similar to what is currently deployed in adaptive streaming systems,” VRT says in a paper describing the trial. “One needs only an encoded/packaged stream and an HTTP server to get the job done and play out video to a player that supports MPEG-DASH natively.”

VRT explains how this part of the trial was executed: “The simple part of the workflow is demonstrated by the chain starting with the Elemental Live encoder from Elemental Technologies, which captures the audiovisual content from the SDI feed at the VRT premises. The Apache server located at the CDN of Belgacom picks up the data packages via HTTP GET from the encoder and makes it available by a URL and an MPD file describing how the packages should be interpreted by the player. The Adobe player reads out the MPD, buffers the packages and plays out the video on a device of the end user.”

But the Adobe player only worked on PCs and older Android devices, leaving VRT to create another distribution chain to support iOS and newer Android devices running HLS. In this case, the IP feeds of the DASH-streamed live Olympics content were captured, transcoded and packaged just like any other streamed content via Harmonic’s ProMedia Live encoder and transmitted to an origin server at VRT’s premises, where streaming media software supplied by Wowza performed fragmentation and other streaming functions.

To get the streamed content to the targeted devices required that the streams be captured by a Wowza cache server in the CDN, where they were formatted to the HLS transport and manifest templates in conjunction with the appropriate DRMs. “It gets more complicated when one involves dedicated applications that play out video to devices that do not yet support MPEG-DASH natively,” VRT says.

To round out the DASH proof of concept in its trial, VRT worked with suppliers to show how DASH Common Encryption works with multiple DRMs in streams that were switched seamlessly between protected and unprotected content. Along with DRM support in the workflows for DASH-ready and non-DASH clients, the trial participants set up another distribution chain to show Common Encryption support for Microsoft’s PlayReady DRM.

New Momentum

The VRT trial was meant to show how an application, in this case the Olympics live OTT coverage, could be delivered through DASH to users on all devices. But the more common approach to introducing DASH will be instances where content providers stream a given program using DASH alongside content formatted for the proprietary streaming platforms running on devices that haven’t been made DASH compliant. While, as DASH detractors note, this has the effect of adding still another streaming format to the stack, it also means that as DASH-ready CE devices come to market they can utilize the many benefits of the DASH manifest and multiple-DRM support system with any content and apps that have been positioned to run over DASH.

With the transition last month of the DASH Promoters Group to the more formal DASH Industry Forum, interoperability testing beyond the ad hoc trade show demos and market trials of recent months has become part of the DASH 264 agenda. Over 50 companies now belong to DASH-IF, including founding members Akamai, Ericsson, Microsoft, Netflix, Qualcomm and Samsung.

As interoperability testing unfolds, the pace of real-world implementation will depend on how fast the content/device chicken-and-egg barrier is crossed. As Christian notes, smart TV manufacturers have an especially strong interest in getting DASH embedded in their products. “What we’re seeing is the rollout of DASH services will probably start to happen on devices like smart TVs,” he says.

In part, this is because smart TV manufacturers, with an incentive to build consumers’ content experience on their platforms, want to encourage content suppliers to take advantage of all the service types and feature extensions enabled through the DASH MPD (Media Presentation Description) in a uniform way that broadens participation to the greatest extent possible.

Smart TVs also have a longer life cycle than portable devices, which means DASH needs to be built in before the market moves in that direction. The upshot is that initiatives along these lines could break the chicken-and-egg syndrome: with DASH present on smart TVs or set-tops, content suppliers gain an incentive to add DASH to their streaming profiles.

This is exactly what’s happening with implementation of the aforementioned HbbTV 1.5 in Europe. HbbTV was a focus of many DASH-based demos at IBC, including one featuring time-shifted service staged by AuthenTec (applications for Android and iOS); Adobe (Flash player supporting MPEG-DASH); CodeShop (Unified Streaming Server); DekTec (DVB modulator); Elemental Technologies (encoding); and LG, which introduced the first connected TV prototype supporting HbbTV 1.5 with MPEG-DASH profiles.

Many other firsts involving DASH were on display at IBC, including the industry’s first MPEG-DASH-based live and on-demand video ad-insertion solution from SeaWell Networks, Inc., another member of DASH-IF. The ad-insertion capability was part of SeaWell’s integration of DASH into its library of supported formats, which include Smooth, HLS and HDS.

Using SeaWell’s Spectrum software, network operators are now able to convert all formats dynamically, and deliver IP video content to any connected device, notes Brian Stevenson, director of product management at SeaWell. “With the integration of the international DASH industry standard, Spectrum enables operators to not only store and deliver content in any format, but to now insert advertising seamlessly into the live or VOD stream. This is a market first.”
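SeaWell has not published the mechanics of its Spectrum implementation, but a common way MPEG-DASH expresses server-side ad insertion is as extra Periods spliced into the presentation timeline, since a DASH client treats each Period boundary as a natural content switch. The sketch below, with invented names and durations, splits a content Period at the splice point, inserts an ad Period, and recomputes each Period's start offset.

```python
# Hypothetical sketch of multi-Period ad insertion in a DASH timeline: split
# the content Period at the splice point, insert the ad Period, and recompute
# cumulative Period start times. Not SeaWell's actual implementation.

def insert_ad(periods, ad, at_seconds):
    """periods: list of (name, duration_s); ad: (name, duration_s).
    Returns the new timeline as (name, start_s, duration_s) tuples."""
    out, clock = [], 0.0
    for name, dur in periods:
        if clock <= at_seconds < clock + dur:   # splice lands in this period
            head = at_seconds - clock           # content before the ad
            tail = clock + dur - at_seconds     # content after the ad
            if head:
                out.append((name, head))
            out.append(ad)
            if tail:
                out.append((name, tail))
        else:
            out.append((name, dur))
        clock += dur
    # Assign cumulative start offsets (the role of Period@start in the MPD)
    result, start = [], 0.0
    for name, dur in out:
        result.append((name, start, dur))
        start += dur
    return result

timeline = insert_ad([("movie", 600.0)], ("ad-break", 30.0), at_seconds=300.0)
print(timeline)
```

Because the splice is expressed in the manifest rather than in the media itself, the same mechanism works for live streams (inserting a break as it happens) and for VOD (rewriting the MPD per request), which matches the live-and-on-demand claim SeaWell makes for Spectrum.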

In another instance of new initiatives unveiled at IBC, Digital Rapids says upcoming solutions in its product portfolio powered by its Kayak workflow technology platform will support DASH, including version 2.0 of the Digital Rapids Transcode Manager high-volume media processing software and the StreamZ Live Broadcast integrated broadcast/multiscreen live encoder. Notwithstanding the confusion in the marketplace and the efforts of some players, including network service providers, to devise their own solutions to incompatibility and other streaming deficiencies, there’s an inevitability about DASH and the UltraViolet electronic sell-through standard, which are increasingly linked in standards development, says Brick Eksten, co-founder and president of Digital Rapids.

“We started working with people who were thinking about DASH before there was a real group promoting the standard,” Eksten says, noting there was great interest among CE manufacturers in finding a way out of the adaptive streaming weeds. “While all the various operators out there are building defensive positions with players that offer multiscreen solutions, the market momentum is toward standardization.”
