Coherency Out of Chaos: CableLabs Maps New Strategies

Ralph Brown, CTO, Cable Television Laboratories, Inc.

July 20, 2012 – At a moment of unprecedented change, the cable industry is awash in technical challenges that call for the kinds of coherent, interoperable solutions that have always been the purview of CableLabs. It’s a tall order, one that is inspiring new approaches, which CableLabs CTO Ralph Brown brings to light in this interview with ScreenPlays editor Fred Dawson.

ScreenPlays – Helping the cable industry understand and adjust to fast-changing technology has always been an important part of CableLabs’ role, but I don’t think we’ve ever seen a time when so many areas of technical change have a direct bearing on your members’ business strategies.
Let’s start with a big one – the migration path to IP. Is this something you’re pursuing in terms of specifications or guidance from a holistic perspective?

Ralph Brown – We’re approaching it from a technology-focused architectural perspective. The starting point for us is to define the architectural layers that have to be addressed, no matter what path you take. We want to be able to say to our members: here’s the fundamental architectural layering you need to work with as you make your technology and strategic business decisions.

Without clarity across the architectural layers we can’t expect to have multiple suppliers who can deliver solutions that work within those layers. For example, consumer electronics manufacturers are going to have to see some potential value deriving from their efforts to meet the industry’s new technology requirements, so we believe giving them a framework to work within will help create the credible market they’re looking for.

This is a world that’s divided into two halves – linear content and on-demand content. The underlying problems you have to solve are different from a technological perspective, and how you solve them has a lot to do with the terms set by rights holders for delivering content to devices. Of course, we already have in place the CableLabs-developed OLCA (Online Content Access) specifications for authentication of service, which serve as the underpinning for delivering content via IP. They form the basis for everything we’re talking about when it comes to IP migration.

SP – There are so many moving parts within all those architectural layers. How do you ever get to a comprehensive set of specifications that define how they all work together as you evolve toward an all-IP operating environment?

Brown – Although our specifications are not mandatory, we do try to make them comprehensive and beneficial to the parties implementing them. The challenge in taking a comprehensive approach is that it often comes down to the capacity issue – the bandwidth you have available to accommodate a transition in which you have to support simultaneous transmission of legacy and IP services. Each cable system has different available capacity – how many HD channels you’re carrying, what your high-speed data penetration is, what your broadband data rates are, whether you use SDV (switched digital video). The design criteria are different for each one.

So while it’s easy to say we should all go to IP, it’s challenging to solve on an industry-wide basis. All our members are doing different things. We’re providing solutions where there’s agreement on scale and commonality, largely at the edge. We provide the building blocks; it is up to our cable members when and how to employ them.

SP – For example?

Brown – Some members are looking at hybrid media gateways, the kinds of things we see with the ARRIS initiative and Liberty Global’s Horizon gateway. For any given operator those types of solutions might be a better answer.

SP – One of the most pressing technical issues for operators who choose to create a conduit for delivering premium content over IP end to end is the use of adaptive bitrate streaming, which is a totally different way to deliver video compared with the traditional push model.

Brown – Adaptive bitrate (ABR) streaming is an important new technology, especially from an over-the-top perspective. With OTT there’s no awareness of network capacity, so when you apply adaptive bitrate algorithms, their behavior becomes chaotic. The quality goes up and down frequently as the bitrates adjust to bandwidth conditions over the Internet and access networks.

In a managed network environment, such as that provided by cable operators, you don’t necessarily suffer from that problem. The need for adaptive bitrate may not be there, so your dependency isn’t as great.

SP – Of course, this gets back to the bandwidth question – what your IP migration strategy will be, how much capacity you can assign to a dedicated stream for every type of device on unicast sessions, etc. Meanwhile, some operators are using adaptive streaming. Is there a cable version that might be easier to work with?

Brown – Depending on how you engineer your network, you may want to avail yourself of adaptive streaming, but you could do it in a way that’s more coherent over your network. I can see different models, such as one where the picture quality decreases marginally for everybody when there’s a high level of traffic congestion over your access network, versus one that reallocates bitrates independently for each session. My own eyes tell me that fluctuations in bitrates are a lot less noticeable if the degradation is gradual. So there are a variety of ways to use adaptive streaming that can manage the impact on quality of experience.
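
To make the contrast concrete, here is a minimal sketch in Python of the two models Brown describes: each client picking its own bitrate from measured throughput, versus the network capping every session so quality degrades gradually for everybody. The ladder, the 0.8 safety factor and all names are illustrative assumptions, not from any CableLabs spec.

```python
# Client-side, throughput-driven ABR rung selection (illustrative).
LADDER_KBPS = [1200, 2500, 5000, 8000]  # hypothetical bitrate ladder
SAFETY = 0.8  # headroom so the playback buffer does not drain

def pick_rung(measured_throughput_kbps: float) -> int:
    """Return the highest ladder bitrate the throughput can sustain."""
    budget = measured_throughput_kbps * SAFETY
    sustainable = [b for b in LADDER_KBPS if b <= budget]
    return sustainable[-1] if sustainable else LADDER_KBPS[0]

def managed_cap(total_capacity_kbps: float, sessions: int) -> int:
    """Network-managed variant: share access capacity evenly so every
    session degrades a little, instead of each client oscillating."""
    return pick_rung(total_capacity_kbps / max(sessions, 1))

print(pick_rung(6000.0))         # -> 2500 (5000 exceeds the 4800 budget)
print(managed_cap(40_000.0, 8))  # -> 2500 for every session
```

Each OTT client measuring bandwidth on its own is what produces the chaotic behavior noted above; the managed variant trades peak quality for stability.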

SP – How is CableLabs dealing with the adaptive streaming issues?

Brown – We’re looking at adaptive bitrate technology from the standpoint that it seems to be a transport system for IP video that various industries in the marketplace are interested in standardizing. Before Adobe moved to HTTP (Hypertext Transfer Protocol) adaptive streaming, Microsoft and Apple were already there with their own servers and clients. So now you have three major streaming formats using HTTP infrastructure, all of them different.

But one of the things the marketplace struggles to accommodate is a lack of interoperability. Fortunately, all three systems use AVC (MPEG-4 Advanced Video Coding), so they’re interoperable on the encoding side. Where it breaks down is in how they handle transport and in the types of DRMs (digital rights management systems) they use. But at least they’re all using AES (Advanced Encryption Standard) encryption.

So we’re seeing attempts at standards like MPEG DASH (Dynamic Adaptive Streaming over HTTP), which covers encryption as well as transport. The idea is you don’t have to set up separate encoding and fragmentation systems for each type of streaming platform. To serve different clients that are tied to different DRMs, you only have to deliver the right encryption key.
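
A loose sketch of that idea follows, with every name and URL hypothetical rather than a real DRM API: the media is encoded and AES-encrypted once, and clients tied to different DRMs are simply routed to different license paths for the same key.

```python
from dataclasses import dataclass

@dataclass
class EncryptedSegment:
    key_id: str     # identifies the content key protecting the segment
    payload: bytes  # AES-encrypted media, shared by all client types

# One license route per DRM system; the media is never re-encrypted.
LICENSE_SERVERS = {
    "playready": "https://license.example.com/playready",
    "widevine":  "https://license.example.com/widevine",
}

def license_url(drm_system: str) -> str:
    """Point a client at the license server for its DRM, which delivers
    the same content key over its own protocol."""
    return LICENSE_SERVERS[drm_system]
```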

SP – Is CableLabs actively engaged with the MPEG DASH Working Group?

Brown – We haven’t been active participants in the development of the standard, but it matters a lot to us. We’re interested in seeing the adoption of MPEG DASH, and the MPEG DASH Promoters Group seeks to speed that adoption.

The question for CableLabs is how cable operators might use it. What profiles, bitrates, resolutions, etc. make the most sense for cable? We’ve been looking at that question and, at the right time, could engage in that discussion to define which profiles are right for cable.

The bigger issue is adoption by the client vendors – which vendors’ players will be able to use DASH transparently.

SP – I guess, with Microsoft and Adobe pretty vocal in their backing for MPEG DASH, the question is what is Apple going to do? Obviously, they’ve been engaged with sharing intellectual property on HLS (HTTP Live Streaming), but they haven’t come out in support of the standard.

Brown – I guess I’d say stay tuned. Apple has been one of the most active participants in MPEG DASH and at the W3C (World Wide Web Consortium).

SP – There’s been talk about mitigating some of the hassles with adaptive bitrate streaming by leveraging the PacketCable Multimedia Gate-Set function, which would allow you to do linear multicasting rather than doing everything in adaptive bitrate unicast, but without incurring the channel-access delays associated with traditional IP multicast.

Brown – The infrastructure is there. PacketCable Multimedia is perfect for many IP applications. So far the capabilities have only been lightly tapped. But it all depends on where you want to manage the channel, whether it’s on an individual or aggregated basis. There are a lot of complexities and tradeoffs to be considered with each approach.
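
For readers unfamiliar with the mechanism, here is a rough model of what a PacketCable Multimedia gate represents: a classifier that picks out a flow, plus a bandwidth reservation for it. Field names are simplified for illustration; the real Gate-Set message is carried as COPS protocol objects.

```python
from dataclasses import dataclass

@dataclass
class Classifier:
    src_ip: str
    dst_ip: str
    dst_port: int
    protocol: str  # e.g. "udp"

@dataclass
class Gate:
    classifier: Classifier
    reserved_kbps: int  # guaranteed rate for the matched flow

# A gate reserving bandwidth for one multicast video session
# (addresses and rate are made up for the example):
video_gate = Gate(
    Classifier("10.0.0.1", "232.1.1.1", 5000, "udp"),
    reserved_kbps=3750,
)
```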

The thing that shakes all that up is HEVC [the next-generation MPEG standard, High Efficiency Video Coding, sometimes referred to as H.265], which is scheduled to be completed around the end of the year. Right now we have a focus on streaming that takes HEVC into account because of the timing. With HEVC you’re looking at a two-times efficiency improvement over AVC, which translates to a four-to-nine-times improvement compared with MPEG-2.

The question is, when does HEVC start to have an impact on the market? I think we’ll see a faster transition to HEVC in the IP domain, where you can negotiate codecs with streams as necessary. It could double video quality for the OTT guys overnight.

The challenge for traditional QAM (quadrature amplitude modulation) delivery and broadcast systems is that there are a lot of receivers out there running MPEG-2. So it’s going to be hard to support HEVC without using IP, but, given the bandwidth efficiencies, it’s something you’re going to want to support.
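
A back-of-envelope check on that codec arithmetic, using the usual ~38.8 Mbps payload of a 256-QAM channel; the per-codec HD bitrates are illustrative assumptions.

```python
QAM256_MBPS = 38.8
HD_MBPS = {"MPEG-2": 15.0, "AVC": 7.5, "HEVC": 3.75}  # ~2x gain per step

for codec, rate in HD_MBPS.items():
    print(f"{codec}: {int(QAM256_MBPS // rate)} HD streams per channel")
# MPEG-2: 2, AVC: 5, HEVC: 10, i.e. a 2x gain over AVC and roughly
# 4x over MPEG-2 at these assumed rates.
```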

SP – Obviously this ties into the whole question of how operators address their bandwidth challenges with IP migration.

Brown – A lot of what we’re doing with HEVC applies to later work we’ll be doing on the infrastructure question. For us, it’s about getting the encoding and encryption right for transport so our members can select standards at each level with the interoperability that enables cost-efficient scaling across the landscape.

SP – Speaking of defining the architectural layers, a key question that arises in the migration to IP is the possibility of abstracting the operations layer from all the network components so that you can orchestrate across legacy and wireless networks and newer data-center-based components that are more suited to IP-based operations. In other words, are you looking at some way to put software to work to maximize utilization of all the resources at the cable operator’s disposal – in the headend, the CMTSs, the QAMs, the IP transmission mechanisms, the servers in the data centers, etc.?

Brown – There are several things wrapped up in that question. On the access network side, if you look at HFC [hybrid fiber-coax], CCAP (Converged Cable Access Platform) is probably the key element. Because of how QAMs are utilized with CCAP you can digitize extra spectrum [for IP transport] as you need it.

You don’t need a combining network. All the QAMs are in one place, with each port dedicated to a service area, whether you’re delivering MPEG-2 video or DOCSIS over QAM. That drives integration harder than anything. Today you have a video QAM conversation and a DOCSIS QAM conversation; CCAP allows you to manage all your QAM channels inside a common chassis.

When Comcast’s engineers started working on CMAP [the Converged Multiservice Access Platform, predecessor to CCAP], they were referencing CableLabs’ specs. So CCAP isn’t really introducing new specs, although there’s some work associated with a modular approach to DOCSIS sorting itself out in the CCAP world. I personally never bought into the modular architecture; it doesn’t make sense to me if you’re going to be continually pushed to leverage integration to achieve greater scaling efficiencies. But, in any event, modular CCAP is basically using existing specs.

We’ve developed new operations interfaces, the CCAP OSSI (Operations Support System Interfaces). With CCAP, QAMs are part of the same chassis, so you need provisioning and management interfaces that work in that environment. At this point in CCAP evolution we’re finalizing the specs and putting together a qualification program for CCAP components.
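
A minimal model of the convergence described here: one chassis, one port per service group, every QAM channel assignable to MPEG-2 video or DOCSIS from a common pool. Class and method names are hypothetical, not drawn from the CCAP OSSI specs.

```python
class CcapPort:
    """One downstream port, dedicated to one service group."""

    def __init__(self, service_group: str, qam_channels: int):
        self.service_group = service_group
        self.assignments = {ch: None for ch in range(qam_channels)}

    def assign(self, channel: int, service: str) -> None:
        """service is 'mpeg2-video' or 'docsis'; both draw on one pool."""
        self.assignments[channel] = service

port = CcapPort("SG-1", qam_channels=32)
for ch in range(24):
    port.assign(ch, "mpeg2-video")
for ch in range(24, 32):
    port.assign(ch, "docsis")  # shift channels to IP as demand grows
```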

SP – As you mentioned, CCAP addresses the converged operational question from the HFC side, but beyond CCAP you have integration issues tied to other aspects of the network and the question of how to manage those resources as part of the overall operational environment.

Brown – There’s the content delivery side of that, which begins to span multiple networks. We’ve begun down that path in a couple of ways. On the fiber network, we make the EPON (Ethernet Passive Optical Network) infrastructure look like a cable modem network from an OSSI perspective. We haven’t really seen much EPON deployment at this point, except for cellular backhaul. So we’ve integrated that network.

It gets more challenging when you get into the wireless world. There are two places you can jump into wireless from HFC – through Wi-Fi access points on the outside network or in the home. You can’t get the same levels of guarantees on QoS in the wireless world that you can over the fixed network. Things revert to best effort, but there are some things we can do.

That gets us back to the standards for IP transport – DASH file format, encryption, encoding. Members may want to deliver content for access on IP devices the same way across all networks. So you get into the issue of where you source the different on-demand and linear content for distribution over IP transport, which includes the discussion about CDN (content delivery network) interconnects and disputes about peering. And, of course, content rights.

We build tools, not rules. We try to build the best set of tools to allow you to implement a variety of business models. That’s the tech challenge right now – trying to optimize solutions for whatever problems you need to solve without restriction on what the business models will be. We don’t have all the answers, but we have to forge ahead, knowing our members’ use of technology will be driven by their individual decisions.

SP – There’s now a lot of talk about what comes next after DOCSIS 3.0.

Brown – The common reference these days is DOCSIS 3.x.

SP – Yeah, for a while it was DOCSIS 4.0. But, really, some of the thinking looks at IP transport over Ethernet – the dumb-pipe approach – where all the operations are abstracted at a management layer that talks in IP to all the endpoints and points in between. I guess EPoC (EPON protocol over coax) is the new buzzword there. I imagine you’ve personally focused on these kinds of questions.

Brown – I’d be careful about adding noise to the channel at this point. We’re looking at data over cable in the broadest sense. We’re active in all the choices where we see promise.

The harder question is, how does any of it get deployed? The biggest challenge for us will be moving the upstream split [various approaches to expanding the spectrum used on the coax plant for return signals, including expansions at the lower spectrum tier, at mid-spectrum and at levels above 1 GHz]. It’s a really ugly set of options. Nobody wants to deal with that today, but everybody will eventually.

Right now the debate is about how soon that need arrives. We’re building a set of tools that will make it possible for members to move in that direction when their time arrives.
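
To make the option space concrete, here is an illustrative summary of the splits bracketed above; exact band edges vary by proposal and by plant, so these figures are representative, not definitive.

```python
UPSTREAM_SPLIT_OPTIONS = {
    "low-split (typical today)": "return spectrum up to ~42 MHz",
    "mid-split":                 "return spectrum up to ~85 MHz",
    "high-split":                "return spectrum up to ~200 MHz",
    "top-split":                 "return spectrum above 1 GHz",
}

for name, band in UPSTREAM_SPLIT_OPTIONS.items():
    print(f"{name}: {band}")
```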

In any capital-intensive area of technology discussion it’s easy to talk about point A and point B and the architecture at each. The hard part is knowing when and how to get from A to B in an optimal fashion. The real engineering is in the transition.

SP – And all of that has to be woven into the wireless domain. Where do things stand with specifications for cable Wi-Fi now that this seems to be the wireless path the industry wants to follow?

Brown – As you know, we’ve published a number of specs around the new roaming relationship among MSOs and their commitment to supporting customers’ ability to roam wherever they have access to one of the partners’ hot spots. In addition, there are other specifications and approaches that some of our members will be considering.

SP – What about infrastructure issues related to advanced applications like handoffs between Wi-Fi and LTE or other mobile networks?

Brown – We’ve been looking at this space for many years. When we were developing the PacketCable 2.0 specs, we participated in the 3GPP/IMS specification development. We showed up with contributions that took cable technology into consideration. Initially, they said, who are you? We established our technical credibility, and now all our [PacketCable 2.0 IP Multimedia Subsystem] specs are included in the [3GPP] IMS specs. So IMS covers both cable and wireless.

In terms of how cable operators might use Wi-Fi assets to offer mobility, that’s work we’ve been doing for quite a few years. We’ve got the bases covered for critical infrastructure issues and have supplied tools for different models.

Now our members have choices for putting the radios out there. I think Cablevision, with its Wi-Fi initiative, spurred some other MSOs to move in this direction. They’ve been pretty successful.

SP – Are there new shoes to drop in this area?

Brown – Yes, there are, but I can’t talk about them yet.

SP – In the time we have left I’d like to quickly explore some other work CableLabs has been doing just to get a sense of where things stand at this point. Let’s start with advertising. Here I’m interested in what you might be doing, along with SCTE and your members, to facilitate dynamic advertising in the IP multiscreen service streams.

Brown – Generally, the work with SCTE so far has focused on traditional TV, so now the question is, how does that leverage into the IP world? That opens up a whole new set of technical and architectural decisions that have to be made to support efficient campaign management, dynamic placements and matching advertising content to the streaming modes. Routing content so that ads from the source get to all the right places at the right time is a big issue in the IP world. And then there are questions of measurement that are being addressed.
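
As a loose sketch of the decision flow at issue, the structures below are hypothetical, not the actual SCTE 130 message schema: a placement opportunity goes to an ad decision service, which returns the creative to splice, with device class as the new variable that IP delivery introduces.

```python
from dataclasses import dataclass

@dataclass
class PlacementRequest:
    channel: str
    break_duration_s: int
    device_class: str  # "stb", "tablet", ... the new IP variable

@dataclass
class PlacementDecision:
    ad_asset_id: str
    duration_s: int

def decide(req: PlacementRequest) -> PlacementDecision:
    """Stand-in for an ad decision service (ADS)."""
    # In IP delivery the same opportunity can resolve to different
    # creatives per device class and per streaming format.
    return PlacementDecision(f"ad-{req.device_class}-001",
                             req.break_duration_s)
```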

This gets back to the division between linear and on-demand content, where the technical questions of how you insert ads are different for each domain. Not all things are well defined with regard to rights, delivery modes and how advertising comes into play with HTML5, for example. We’re looking at these questions internally and are working externally with the W3C and the HTML5 Working Group.

Several challenges confront our members in these areas. Not all of them are visible yet. The bottom line is you have to understand content delivery before you solve ad problems, and you can’t solve that independent of an understanding of measurement requirements.

SP – But at least some work from the legacy TV domain would seem to be applicable. For example, do you see SCTE 130 creeping into IP?

Brown – Yes. There’s an existing infrastructure in the field. We want to use that knowledge to whatever extent we can.

SP – Where do things stand now with your work on 3DTV?

Brown – The market has to catch up. Many members have deployed the means of delivering content in 3D. The tools are in place. The big debate now is, looking at 4K [next-generation HD resolution], how do you get to 1080p at 60 frames per second for each eye? We know what has to happen in terms of commitment of bandwidth. But the question is, do the benefits justify that commitment? At this point, with so little 3DTV content in the pipeline, we’ve basically moved on to other priorities. Our members are ready to support the current generation of 3D technology, so we’re not doing a lot of new work at this time.

SP – One topic that’s heating up is M2M (machine-to-machine) communication. We recently reported that SCTE is looking into various interfaces that will help automate some of the things related to its energy management initiative, and how that relates to bandwidth efficiency when it comes to flooding the pipe with automated software upgrades to IP devices. Is there a role here for CableLabs?

Brown – Yes. The bigger question is what requirements that technology must be capable of supporting. Clearly, M2M is happening. And there’s a broader concept [than the route currently pursued with SCTE’s energy initiative], which really goes to the idea of a sensor network.

But if you’re talking about sensors in the abstract, it can be anything. So you start to see verticals showing up in the conversation – energy management, health care, security. It goes on and on.

There are big opportunities there, but articulating what those opportunities are, in terms of technology definitions and requirements, is difficult. I’m fully convinced we’ve only scratched the surface.

SP – One last thing I want to bring up is IPv6. CableLabs has completed much work in terms of the various dual-stack modes, CPE equipment specs, etc. Are you working on anything else at this point?

Brown – All that effort has largely transitioned to deployments. Most of our members are well prepared. The technology is solid. The big question is whether the consumer electronics industry is ready. And some think the telco side has a ways to go.

The transition will happen, and it will take a long time. For us the heavy lifting is pretty much done. Now we’re giving thought to what things will look like once IPv6 is fully implemented as the sole addressing platform, five or ten years out. There are a lot of interesting things to think about.

As we do, we learn more and more about what an all-IPv6 world really means. People haven’t thought much about what happens when you can allocate millions of IP addresses to every household.
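
The arithmetic behind that observation is easy to run: a single /64 subnet already holds 2^64 interface addresses, and a /56 delegation (a common choice, assumed here) gives each home 256 such subnets.

```python
addresses_per_64 = 2 ** 64        # one standard IPv6 subnet
subnets_per_56 = 2 ** (64 - 56)   # subnets in a /56 delegation

print(f"/64 subnet: {addresses_per_64:,} addresses")   # ~1.8e19
print(f"/56 delegation: {subnets_per_56} subnets that size")
```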

SP – Ralph, thanks so much for taking time to discuss all these matters. No question there’s plenty to keep you busy for a long time to come.

Brown – Me and whoever comes next. There’s no end in sight.