Multiscreen Monetization Potential Gets Real with Gains in QA for ABR

Kirk George, director of marketing and strategy, IneoQuest

By Fred Dawson
 
November 19, 2013 – In moves that could go a long way toward fostering monetization of multiscreen premium TV services, two of the leading players in quality assurance have taken new steps aimed at supporting better performance awareness in the complicated adaptive bitrate (ABR) streaming domain.

IneoQuest, a pioneer in such efforts that, as previously reported, began offering support for ABR quality assurance in early 2012, has greatly expanded those efforts with the introduction of a new cloud-based iteration of the service and, earlier this year, several enhancements to its probe and analytics capabilities. More recently, Tektronix, which has held a strong position in pay TV video quality assurance since acquiring Mixed Signals and its Sentry product line two years ago, is rolling out Sentry ABR to extend its monitoring and analytics regime to enable quality control (QC) on multiscreen service streams.

Until recently, quality assurance (QA) was deemed an avoidable cost factor in TV Everywhere and OTT video services, on the theory that ABR technology itself prevents buffering stops and starts by adjusting data rates to bandwidth conditions in real time. But now, with the need to monetize content through advertising and viewing fees as consumers watch ever more high-value ABR content on big HDTV screens, QA has become fundamental to making sure the viewing experience on all devices measures up to consumers’ and advertisers’ expectations.
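
The mechanism at issue is easy to sketch. The Python fragment below is a minimal illustration, not any vendor’s player logic, of the throughput-based profile selection an ABR client performs after each segment download; the bitrate ladder and the 80 percent safety margin are invented for the example.

```python
# Minimal sketch of throughput-based ABR rate selection, the mechanism
# described above. The ladder and safety margin are illustrative only.

LADDER_KBPS = [400, 800, 1500, 3000, 5800]  # hypothetical bitrate profiles

def pick_profile(throughput_kbps: float, margin: float = 0.8) -> int:
    """Choose the highest profile that fits within a safety margin of
    the throughput measured while downloading recent segments."""
    budget = throughput_kbps * margin
    eligible = [rate for rate in LADDER_KBPS if rate <= budget]
    return max(eligible) if eligible else min(LADDER_KBPS)

# A client re-runs this after every segment: when throughput drops, it
# steps down the ladder instead of stalling the player to rebuffer.
print(pick_profile(2200))  # -> 1500
```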

There’s no longer any doubt that even minor glitches can have a big impact on viewer behavior and, hence, the bottom line. One landmark study underscoring this fact was conducted last year by researchers from the University of Massachusetts at Amherst and Akamai Technologies, who used data supplied by Akamai to analyze viewing patterns of 6.7 million people across 23 million sessions over a ten-day period.

“Content providers know that reducing the abandonment rate, increasing the play time of each video watched, and enhancing the rate at which viewers return to their site increase opportunities for advertising and upselling, leading to greater revenues,” wrote Akamai Fellow Shunmuga Krishnan and UM computer science researcher Ramesh Sitaraman in their report. “The key question is whether and by how much increased stream quality can cause changes in viewer behavior that are conducive to improved monetization.”

Judging from their results, a little quality improvement could go a long way. They found, for example, that once a video’s start is delayed beyond two seconds, viewers begin abandoning it at a rate that grows by 5.8 percent for every additional second of delay. They also found that for each rebuffering delay equivalent to one percent of the video’s duration, the amount of video viewed drops by five percent.

Another study, sponsored by Azuki Systems and echoing the UMass results, found that a one percent increase in buffering time leads to a three-minute decrease in viewing time. It also found that viewers who experience a video startup failure return to a service at a rate 54 percent lower than viewers who don’t.
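
A back-of-envelope calculation shows how quickly these figures compound. The linear model below is an illustrative simplification built from the rates quoted above, not the studies’ own methodology.

```python
# Back-of-envelope application of the figures quoted above. The linear
# extrapolation is a simplification for illustration only.

def abandonment_rate(startup_delay_s: float) -> float:
    """UMass/Akamai: viewers begin leaving ~2s into a delayed start,
    at roughly 5.8% per additional second of delay."""
    return max(0.0, (startup_delay_s - 2.0) * 0.058)

def viewing_minutes_lost(buffering_pct: float) -> float:
    """Azuki-sponsored study: each 1% increase in buffering time
    corresponds to roughly 3 minutes less viewing."""
    return buffering_pct * 3.0

print(f"{abandonment_rate(5.0):.1%} abandon at a 5-second startup delay")  # 17.4%
print(f"{viewing_minutes_lost(2.0):.0f} minutes lost at 2% buffering")     # 6
```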

“It is important to remember,” said the UM researchers, “that small changes in viewer behavior can lead to large changes in monetization, since the impact of a few percentage points over tens of millions of viewers can accrue to large impact over a period of time.” Apparently awareness of this principle has persuaded distributors that the ability to monitor streams after they leave the streaming packagers at the content delivery network (CDN) cache points and origin servers is critical, notwithstanding the added costs of equipment and software required to support such capabilities.

“Demand for our IQDialogue ASM product has grown a lot faster over the past few months,” says Kirk George, director of marketing and strategy at IneoQuest. “With the amount of video traffic and the buildout of private CDNs, distributors are asking for higher capacity on our probes, which is why we’ve gone to a full 20 gigabit-per-second line rate for monitoring live sessions in the network just six months after announcing 10 Gbps.”

As described by George, the product passively and actively monitors all aspects of video delivery across the entire CDN, covering all device communications as well as asset publishing to origin servers. Now, with an eye toward bringing such capabilities to providers who don’t have their own CDNs, IneoQuest has introduced a cloud version of the technology, in which probes positioned in ten Amazon datacenters worldwide pull content from regional CDNs to get a read on the performance experienced by end users served by those CDNs, across all ABR profiles for all devices.
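
The active side of that pattern is straightforward to sketch. The Python fragment below, with hypothetical URLs and no connection to IneoQuest’s actual implementation, shows the core of what a cloud-hosted probe does: pull every profile of an asset from a regional CDN and record the latency and payload an end user in that region would see.

```python
# Sketch of the active-probing pattern described above: a cloud probe
# fetches each ABR profile from a regional CDN and times the responses.
# URLs are hypothetical; this is not IneoQuest's implementation.
import time
import urllib.request

PROFILES = [
    "https://cdn-east.example.com/asset/video_400k.m3u8",
    "https://cdn-east.example.com/asset/video_1500k.m3u8",
    "https://cdn-east.example.com/asset/video_5800k.m3u8",
]

def probe(url: str) -> dict:
    """Fetch one profile and record what a regional end user would see."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        status = resp.status
        body = resp.read()
    return {"url": url, "status": status, "bytes": len(body),
            "latency_ms": round((time.monotonic() - start) * 1000)}

for result in (probe(u) for u in PROFILES):
    print(result)
```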

Tektronix, too, has seen a big surge in demand for QA in the ABR streaming domain, prompting it to introduce Sentry ABR, which is meant to work in conjunction with quality control mechanisms such as those provided by other Sentry modules that address QA in the encoding and transcoding processes. “MSOs want to make sure they’re offering a better quality of experience in the battle to retain customers,” says Steve Liu, vice president of video network monitoring at Tektronix. “Customers approached us asking that we extend Sentry capabilities into the ABR network. So now we’re working with top MSOs as they look at best ways to implement the technology.”

The role of Sentry ABR is to validate all assets, bitrate profiles and manifest files based on a highly parallel HTTP fragment “fetching” engine, Liu explains. The platform, running on proprietary hardware consuming one rack unit of space at origin servers and CDN caches, supports up to 250 top manifests, which is to say, the data profiles used in ABR streaming to convey to end devices the options available to them as they pull each fragment from the server. By looking at the top-level manifests, Sentry ABR is monitoring up to 12 streaming profiles for each piece of content, which translates into the capacity to monitor over 2,000 profiles simultaneously.
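
For HLS, for instance, validating a top manifest starts with parsing the master playlist to enumerate the variant profiles that must then be fetched and checked. The simplified sketch below illustrates the idea; it is not Tektronix code, and production manifests call for a much fuller parser.

```python
# Simplified illustration of enumerating the variant (profile) playlists
# listed in an HLS master manifest, the first step in validating a "top
# manifest." Real manifests (quoted attributes, etc.) need a fuller parser.

MASTER_M3U8 = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=400000,RESOLUTION=416x234
video_400k.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1500000,RESOLUTION=960x540
video_1500k.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5800000,RESOLUTION=1920x1080
video_5800k.m3u8
"""

def variant_playlists(master_text: str) -> list[tuple[int, str]]:
    """Return (bandwidth, uri) for each variant a monitor must check."""
    variants, pending_bw = [], None
    for line in master_text.splitlines():
        if line.startswith("#EXT-X-STREAM-INF:"):
            attrs = dict(kv.split("=", 1) for kv in
                         line.split(":", 1)[1].split(",") if "=" in kv)
            pending_bw = int(attrs.get("BANDWIDTH", 0))
        elif line and not line.startswith("#") and pending_bw is not None:
            variants.append((pending_bw, line))
            pending_bw = None
    return variants

print(variant_playlists(MASTER_M3U8))  # three profiles to monitor
```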

In the typical QA sequence used with the Sentry product group, the first step is to use a Sentry to identify any quality of experience (QoE) errors in programs when they are ingested at video service providers’ headends. Other modules, also employing deep content inspection (DCI) capabilities to look at the video frame by frame, are positioned to detect any errors created when these programs are transcoded from MPEG-2 into the H.264 format required for ABR.

As Liu notes, this requires more monitors per program than is the case with legacy services, given the multiple transcodes required to serve all devices. The Sentry ABR comes into play at the post-fragmentation output of origin servers and cache servers, allowing the system analytics to compare playlist availability and integrity, network latency and delivery performance for each fragment at both ends to determine the location of any trouble sources.
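
That two-point comparison reduces to a simple pattern: fetch the same fragment from the origin and from a cache, then compare integrity and latency to decide where the trouble lies. The sketch below uses hypothetical hostnames and an arbitrary latency threshold.

```python
# Sketch of the two-point check described above: fetch one fragment from
# origin and edge, compare bytes and timing to localize a fault.
# Hostnames and the 2x latency threshold are hypothetical.
import time
import urllib.request

def fetch(url: str) -> tuple[bytes, float]:
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read()
    return body, (time.monotonic() - start) * 1000

def compare_fragment(path: str) -> None:
    origin_body, origin_ms = fetch(f"https://origin.example.com/{path}")
    edge_body, edge_ms = fetch(f"https://edge.example.com/{path}")
    if origin_body != edge_body:
        print(f"{path}: bytes differ -- fragment corrupted inside the CDN")
    elif edge_ms > 2 * origin_ms:
        print(f"{path}: slow at the edge ({edge_ms:.0f} ms vs {origin_ms:.0f} ms)")
    else:
        print(f"{path}: OK at both ends")

compare_fragment("asset/video_1500k_seg042.ts")
```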

There’s no need to perform DCI at the ABR level, because that’s already been done upstream, Liu notes. “If I have positioned Sentry components to make sure the video isn’t overcompressed and the audio levels are right, there’s no need for another check on those things at the origin or CDN levels,” he says, adding that Sentry ABR can be deployed to work with other vendors’ DCI probes.

The processing density of the Sentry ABR platform, which Liu describes as “our key differentiation,” allows great flexibility in terms of how operators implement the ABR QC capabilities, he adds. “If you want to conserve bandwidth consumed by the ABR flows to the probes, you can set the system to monitor a certain sub-group of channels at a time, or you can do it all at once,” he says.

“As customers are rolling out multiscreen services and adding more devices they can configure Sentry ABR to test different network routes and servers,” he continues. “Maybe this week you want to look at the route from the origins to Philadelphia edge points, and next week you want to test the route to Denver.

“If customers were to wait to add QC until after everything is deployed, it would be much harder. By working with them as they roll out city by city we’re helping them make sure everything works at a smaller scale so they can be sure it will go smoothly all the way to the national scale.”
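
A rotating plan of that kind might be expressed along the following lines; the channel groups, route names and weekly cadence here are invented purely for illustration.

```python
# Hypothetical illustration of rotating QC coverage across channel
# sub-groups and origin-to-edge routes, as described above.

MONITOR_PLAN = [
    # (week, channel sub-group, (origin, edge route under test))
    (1, ["HBO", "ESPN", "CNN"],     ("origin-east", "philadelphia-edge")),
    (2, ["HBO", "ESPN", "CNN"],     ("origin-west", "denver-edge")),
    (3, ["AMC", "FX", "Discovery"], ("origin-east", "philadelphia-edge")),
]

def active_checks(week: int) -> list[tuple[str, str, str]]:
    """Expand the plan into (channel, origin, edge) probe assignments."""
    return [(ch, origin, edge)
            for w, channels, (origin, edge) in MONITOR_PLAN if w == week
            for ch in channels]

for check in active_checks(2):
    print(check)  # e.g. ('HBO', 'origin-west', 'denver-edge')
```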

Another factor contributing to ease of use is the fact that the Sentry ABR system feeds into the Tektronix Medius reporting platform, which generates consolidated status reports and alerts from multiple Sentry units across the network. “If you’re a Sentry customer, you already have the analytics platform in place that you’ll need for the ABR QC application,” he says.

IneoQuest, too, is stressing benefits tied to its analytics capabilities, which have become instrumental in its drive to promote QA technologies as the key to driving monetization of multiscreen services. With several interlocking projects targeting this agenda either just launching or soon to launch in the year ahead, George says the company has begun using big-data analytics to tie together information from network operations and consumer behavior for use in coordinated applications across legacy and multiscreen services.

This enterprise-scale analytics solution provides customized four-dimensional analysis and reporting on years of video data, allowing providers to gain insight into performance over time by understanding problems by program, time, location and error, George explains. “On the operational side we’re looking at outages, mean time to repair, performance of QAM assets on cable networks, what’s happening with bandwidth utilization at the edge,” he says. “Behavior analytics are looking at who’s watching what, where, when and why and on what types of devices.”

“Measuring and storing those together is essential,” he says. “If I only have behavioral data, I can tell you 10,000 people watched a particular piece of content delivered from a CDN and left after 15 seconds, but I can’t tell you why. If I only have operational information I can tell what the problem is in the network, but I can’t tell you how it impacts specific devices and behavior. A couple of years ago, the industry wasn’t ready for this level of analytics. Now they’re telling us they really need it.”
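
At its core, that correlation is a join of behavioral session records with operational error records on shared keys such as location and time window. The sketch below uses synthetic data to show how early abandonment in one region lines up with a network event there; it implies nothing about IneoQuest’s internal data model.

```python
# Sketch of joining behavioral and operational data, as George describes.
# All records are synthetic; column names are invented for the example.
import pandas as pd

behavior = pd.DataFrame({
    "asset":     ["show_123"] * 3,
    "region":    ["boston", "boston", "denver"],
    "hour":      ["2013-11-19 20:00"] * 3,
    "watched_s": [15, 12, 1800],          # Boston viewers bailed early
})

operations = pd.DataFrame({
    "region": ["boston"],
    "hour":   ["2013-11-19 20:00"],
    "error":  ["edge cache 5xx spike"],   # the likely "why"
})

# Left join keeps every session and attaches any co-located network event.
joined = behavior.merge(operations, on=["region", "hour"], how="left")
print(joined[["region", "watched_s", "error"]])
```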

Another part of IneoQuest’s expanding product portfolio is QA tailored to mobile operators’ needs now that video is a big component of the consumer experience. With Intel’s creation of CDN caching support as a component of its new small cell chipsets, IneoQuest has created a small cell version of its QA probe. “Now mobile operators can monitor their video traffic down to the last edge point and tell their advertisers and content suppliers, ‘We can guarantee the service,’” George says.

There are other operational implications, he adds. For example, in Asia, where data caps on mobile use are not employed to the extent they are in North America, users who expect a good video experience on their tablets and handsets will jump to another carrier if they have problems. “It becomes critical to keeping these customers that mobile carriers know their networks are handling this traffic,” he says.

IneoQuest is also looking to make things easier for service providers’ use of QA in conjunction with the telecom industry’s move to software-defined networks (SDNs) and network functions virtualization (NFV), which, as previously reported, has become a rapidly growing priority among carriers of all descriptions. “Service providers are out there trying to comply with SDN and NFV so they can reduce rack space and power and reduce IT expenditures,” George notes.

“If I have two systems, an edge cache on one blade and our probing technology on another blade, that’s two systems I have to purchase and maintain,” he says. “But with virtualization they can use the blade for both apps. We are installed in a few locations still under alpha tests where the probe that monitors CDNs in the cloud is virtualized and implemented on the CDN caching blade without requiring another piece of equipment.”