Service Providers Archive


Nationwide Edge Datacenter Buildout Positions MSOs for Big Leap Forward

Tom Rutledge

Distributed Cloud Architectures Will Drive Next-Gen Business Models

By Fred Dawson

November 27, 2017 – The largest U.S. cable MSOs have quietly built a springboard to the future anchored by an advanced edge infrastructure that could soon put them well ahead of competitors in the battle for residential and business customers.

Executing construction initiatives backed by Comcast, Charter, Cox and other entities, datacenter specialist EdgeConneX has built upwards of 40 video-optimized edge datacenters across the U.S., according to Phill Lawson-Shanks, chief architect and vice president of innovation at EdgeConneX. “These POPs (points of presence) are now the closest edge locations for 70 to 80 percent of the populations in these areas,” Lawson-Shanks says.

Their primary use so far has been to bring peering points between the Internet and ISPs closer to users in metro regions, some of which traditionally relied on long-haul private fiber links to peering points farther away. These are large multi-use facilities which host public CDN (content delivery network) services sold to Internet content providers while serving as private on/off ramps for cable operators in their dealings with Internet content suppliers.
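The latency benefit of moving peering closer is straightforward propagation arithmetic. As a rough sketch (the distances and the ~200,000 km/s figure for light in fiber are illustrative assumptions, not figures from EdgeConneX):

```python
# Back-of-the-envelope sketch (illustrative, not EdgeConneX data):
# round-trip propagation delay over fiber, showing why replacing a
# long-haul link to a distant peering point with a local edge POP
# shaves meaningful milliseconds.

SPEED_IN_FIBER_KM_S = 200_000  # ~2/3 the speed of light in vacuum

def rtt_ms(distance_km: float) -> float:
    """Round-trip propagation time in milliseconds over fiber."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

far = rtt_ms(1500)   # long-haul link to a peering point ~1,500 km away
near = rtt_ms(50)    # local edge POP ~50 km away
print(f"long-haul RTT: {far:.1f} ms, edge RTT: {near:.1f} ms, "
      f"saved: {far - near:.1f} ms")
```

Propagation is only one component of end-to-end latency, but it sets a hard floor that no amount of server horsepower can remove, which is why physical proximity matters.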

New Cable Strategies

But, from the outset, cable operators backing the edge datacenter initiative had in mind another purpose as well. In contrast to strategies that dictated industry-wide headend centralization over the past ten years, new cable strategies require coordination between centralized and edge cloud facilities to execute on much-discussed IP-based transformations of their businesses.

Speaking of Comcast, Lawson-Shanks says, “They realized that with Xfinity they were going to need extremely low latency in the interactive data exchanges with customers.” The new edge datacenters are designed to accommodate much higher power loads for processing and cooling than a typical local headend can handle, which is essential to supporting the massive amount of data processing with minimal latency that allows operators to personalize and monetize user experiences across all services.

MSOs are beginning to move headend gear into these facilities, including new CCAP (Converged Cable Access Platform) appliances, with an eye toward migrating all processing onto the cloud at central and regional locations. How that workload is partitioned will vary from one MSO to the next, depending on specific service architectures. For example, Lawson-Shanks notes, “Charter does more transcoding in the edge datacenter while Comcast with its all-Xfinity approach to distribution does it centrally.”

While some MSOs have virtualized their stacks using virtual machine engines to operate transcoders, cloud DVR and other applications, they haven’t yet begun taking advantage of the dynamic shifting of workloads enabled by virtualization to maximize efficient utilization of hardware resources. “That will come,” Lawson-Shanks says.

These new edge datacenters are essential to executing new cable business strategies. A case in point is the strategic vision outlined by Charter CEO Tom Rutledge at the Society of Cable Telecommunications Engineers Cable-Tec Expo in Denver last month.

Declaring “We’re not in the video business,” Rutledge asserted Charter’s mandate is to optimize consumer choice with superior performance through sale of “capacity, connectivity and security.” He said this requires “advanced intelligent software” that can leverage a “high-capacity, low-latency, high-compute network” connecting consumers over wireless access points in and outside the home.

A Worldwide Trend

U.S. MSOs’ buildout of edge datacenters parallels efforts by other network service providers worldwide. Some of the other network operators investing heavily in this edge agenda include AT&T, Oath (Verizon’s new amalgam of Verizon Digital Media Services, AOL and Yahoo!), Liberty Global, Sky and several dozen telecoms participating in Ericsson’s global Unified Delivery Network (UDN) initiative.

Many are acting not only to buttress their own B2C service strategies but also to create new B2B wholesale opportunities through video-optimized CDN support for OTT providers’ service flows and dynamic advertising. So far, Comcast, through its Technology Solutions unit, is the only U.S. cable operator offering a wholesale CDN service, which was recently expanded to provide support for live direct-to-consumer services from TV programmers.

Demand for such facilities has triggered an outpouring of virtualized edge datacenter solutions from industry vendors that vastly expand on the capabilities of traditional CDNs, performing processing of content in support of virtually any live or on-demand multiscreen service model. Dynamic, targeted ad insertion, personalization of user experience, adherence to local blackout policies, support for time shifting and feature enhancements of every description can all be performed at the edge, leaving just a short hop for streaming at quality levels suited to display on big screens as well as handheld devices.

For example, Cisco Systems’ Open Media Distribution “cloud-ready” CDN platform, utilizing the company’s Virtualized Video Processing (V2P) technology, combines proprietary innovations with open-source CDN software to perform all the core elements of CDN management, including request routing, load balancing, caching, multi-format ABR (adaptive bitrate) streaming, CDN operations and viewing trend analytics and other advanced capabilities of the video-optimized CDN. As previously reported, Nokia and Imagine Communications are in the hunt as well.
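The request-routing and load-balancing roles named above can be illustrated with a minimal sketch. This is a hypothetical toy, not Cisco's implementation; the cache names and thresholds are invented for illustration:

```python
# Hypothetical sketch (not Cisco's Open Media Distribution code) of the
# request-routing role a CDN control plane plays: send each client to
# the nearest edge cache that still has headroom, falling back toward
# the origin when every edge is saturated.

from dataclasses import dataclass

@dataclass
class EdgeCache:
    name: str
    distance_km: float   # network distance to the requesting client
    load: float          # current utilization, 0.0-1.0

def route_request(caches: list[EdgeCache], max_load: float = 0.85) -> EdgeCache:
    """Return the nearest cache whose load is under the threshold."""
    healthy = [c for c in caches if c.load < max_load]
    if not healthy:
        raise RuntimeError("no edge cache available; fall back to origin")
    return min(healthy, key=lambda c: c.distance_km)

caches = [
    EdgeCache("houston-edge", 40, 0.90),   # closest, but overloaded
    EdgeCache("dallas-edge", 350, 0.40),
    EdgeCache("denver-edge", 1400, 0.10),
]
print(route_request(caches).name)  # skips the overloaded closest cache
```

Production request routers weigh many more signals (link cost, cache contents, client network), but the core trade-off between proximity and load is the one sketched here.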

Nokia, which became a provider of CDN technology by virtue of its acquisition of Alcatel-Lucent and that company’s previously acquired Velocix platform, is heavily promoting advances supporting TV caliber delivery and monetization of OTT. Imagine is touting a suite of edge components supporting just-in-time packaging, manifest manipulation and other advanced capabilities associated with ABR technology.
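Manifest manipulation of the kind Imagine describes can be sketched in a few lines. The example below is illustrative only (not Imagine's product code); it splices an ad break into an HLS media playlist, using EXT-X-DISCONTINUITY tags to signal the format change at the splice points:

```python
# Illustrative sketch (not Imagine Communications' code) of ABR manifest
# manipulation: insert an ad break into an HLS media playlist after the
# Nth content segment, bracketed by EXT-X-DISCONTINUITY markers so the
# player resets its decoders at the splice points.

def insert_ad_break(playlist: str, ad_segments: list[str],
                    after_segment: int) -> str:
    out, seg_count = [], 0
    for line in playlist.strip().splitlines():
        out.append(line)
        if not line.startswith("#"):          # a media segment URI
            seg_count += 1
            if seg_count == after_segment:    # splice point reached
                out.append("#EXT-X-DISCONTINUITY")
                for ad in ad_segments:
                    out.append("#EXTINF:6.0,")
                    out.append(ad)
                out.append("#EXT-X-DISCONTINUITY")
    return "\n".join(out) + "\n"

playlist = """\
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXTINF:6.0,
content_001.ts
#EXTINF:6.0,
content_002.ts
"""
print(insert_ad_break(playlist, ["ad_001.ts"], after_segment=1))
```

Because the ad decision happens at manifest-generation time rather than in the video pipeline, each viewer can receive a different splice from the same cached content segments, which is what makes edge-side dynamic ad insertion economical.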

All three vendors say they provide software support for converting content delivered over IP to the edge into UDP/MPEG-2 transport mode for delivery over legacy pay TV conduits. By enabling operators to unify delivery of video over IP from the core to the edge, these solutions eliminate the need for legacy encoding, DAI and other hardware in regional and central headends.

The EdgeConneX Strategy

EdgeConneX’s role is to build and maintain high-grade colocation spaces for whatever platforms customers choose, as long as equipment measures up to company standards. Its investors include Comcast Ventures, Charter, Cox, Ciena, Brown Brothers Harriman, Providence Equity Partners and Akamai, according to industry reports.

Akamai has a strong CDN presence in many EdgeConneX facilities, Lawson-Shanks notes. Many other non-cable entities are using the facilities as well.

But cable utilization has been a focus from the outset. The first edge datacenter built by EdgeConneX provided support for Comcast operations in the Houston metro area starting about four years ago, well before the MSO introduced wholesale CDN services. As managers explored options for facility locations there and elsewhere, they realized it would be necessary to build new structures or convert existing buildings rather than use cable headend facilities, Lawson-Shanks says. While typical datacenter power requirements average around 7-10 kW per cabinet, the types of servers required to handle processing for the video market require on the order of 20-30 kW per cabinet, he notes.

“We found we could secure a warehouse or other type of building sitting on fiber where costs are relatively low and retrofit them as edge datacenters,” he explains. “Salt Lake City was next. We developed a whole new process to design buildings to absorb heat and operate securely without a need for people on premises. This became the cookie cutter we could use everywhere.”

With the template established, Lawson-Shanks says EdgeConneX was able to transform old buildings into 15,000-square-foot spaces suited for advanced datacenter operations in a matter of months, resulting in construction of 23 sites over the next 24 months. To streamline maintenance across these automated facilities, EdgeConneX created the software platform EdgeOS to provide visibility into all buildings “through a single pane of glass.”

Now, with 40 sites operating in the U.S., EdgeConneX has expanded to other countries and is also creating much larger facilities suited for housing video-optimized datacenters here and abroad to serve as aggregation points for the booming OTT industry. Where the typical edge datacenter built for cable operators draws 2 megawatts of power, the larger datacenters can handle up to 8 MW.

“We just announced an 8 MW facility in Toronto and one in Buenos Aires,” Lawson-Shanks reports. “And we have campuses in Dublin and Amsterdam. Elsewhere in Europe you’ll see more traditional two-plus MW edge sites in cities like Brussels, Milan and Manchester where they don’t have primary Internet peering points.”

The Charter Example

All of this activity points to what could be a very sudden transformation in what consumers are accustomed to getting from cable operators as the shift to DOCSIS 3.1 broadband capacity is completed. The capabilities of what Rutledge called the “high-compute network” allow operators to shift their focus from creating, bundling and marketing services that run separate from what’s available over the Internet to enabling blended services tailored to each subscribing household’s or business’s needs.

“We’ll be able to offer high-capacity two-way interactive products with massive bi-directional gigabit capabilities serving consumers and small businesses all on a single network,” Rutledge said. When it comes to video, Charter will integrate OTT apps and services like Netflix, Hulu and SlingTV “into our UI to make it easy for people to find what they want,” he said. He made clear that by expanding choice, Charter will contribute to breaking up “the big fat video package.”

Along with expanding the connectivity options within the managed service domain, Charter will make security a primary focus in its pitch to residential and commercial customers. Under this new high-profile approach to security, Charter will protect the privacy of its customers, add protections against theft for any content under its managed service control and help protect other suppliers’ products from theft via sharing of subscriber passwords, Rutledge said.

Like CDN services, aspects of this security strategy represent forays into new B2B monetization models enabled by the new network architecture. Speaking of the need for protections against password sharing, Rutledge said, “It’s a significant issue and it’s not well appreciated by the people who are new to distribution,” which he characterized as anyone with an app, including programmers operating in OTT mode. “They have an obligation to protect their product, and they don’t do a very good job, and that’s affecting the business,” he added.

The new intelligent networking platform also has major implications for operations, Rutledge noted. Charter personnel will have visibility into network performance and customer accounts through uniformly configured UIs across all its properties, including recently acquired Time Warner Cable and Bright House systems. “As the customer experience gets better, the average life of the customer relationship goes up and costs go down with less failure,” he said.

Lower costs mean more money can be spent on increasing value for customers. “It’s a virtuous circle,” he said.

The Cable Wireless Agenda

Rutledge also stressed the role wireless will be playing in Charter’s future. There are now some 200 million devices connecting to Wi-Fi in Charter subscribers’ homes with 80 percent of all subscribers’ mobile network traffic running over these Wi-Fi connections, he noted.

He said Charter will implement next-generation Wi-Fi over 802.11ax access points, which will expand premises wireless networking capacity from eight simultaneous streams in the home to 64. With the launch of branded mobile service as MVNOs (mobile virtual network operators), Charter and other MSOs following a similar Wi-Fi-first path will be fully invested as wireless providers in their own right.

“I’m calling it 6G,” Rutledge said, leaving open the possibility that Charter could acquire licensed spectrum to complement operations over unlicensed spectrum in a move to MNO (mobile network operator) status. Noting cable’s high bandwidth reach all the way to small cells wherever they’re positioned, Rutledge characterized 6G as “something we have and the phone guys don’t.”

The idea of adding licensed spectrum, which would enable deployment of 5G small cells, brings into play another aspect of the evolving edge datacenter scenario. “The next step for us is 5G,” says Lawson-Shanks.

Noting the need to aggregate massive amounts of traffic generated by small cells, he says EdgeConneX is working with operators to set up micro-datacenter facilities deeper in their networks to begin testing real-world requirements for utilizing 5G technology. These refrigerator-size or even smaller containers will be co-located near pole-mounted antennas communicating with a cluster of small cells.

The forthcoming trials will set parameters for massive quantities of micro-datacenters that will be deployed in the years ahead, Lawson-Shanks says, noting that how 5G will work in the real world has yet to be determined. “How many people can be served per micro location will be defined based on testing of 5G antennas to end users in multiple scenarios,” he says. “Right now all we have to go on is lab tests and some very limited field tests.”


Resolving Cable Wireless Conundrum Requires Clarity on Network Evolution

Jay Fausch, head of cable sector marketing, Nokia

Interplay Between 5G and Virtualization Is a Key Consideration

By Fred Dawson

August 28, 2017 – Amid another flare-up in speculation over what big cable companies are going to do about mobile, the overarching question is whether they are ready to act on a vision of the future of networking that is coming into focus at warp speed.

The debate over pursuing the MVNO path versus taking the M&A route into a big MNO play is a business question colored for much of this year by speculation about a dizzying array of possible deals, from a Comcast-Charter partnership looking to acquire Sprint or T-Mobile to Sprint seeking a merger with Charter to Comcast and Verizon combining into a $315-billion behemoth. In the latest wrinkles, both Comcast and Charter have dumped a lot of cold water on the rumor mill by saying they’re committed to forging ahead with their MVNO deals with Verizon.

On an earnings call in late July Comcast chairman and CEO Brian Roberts, noting the MSO was already rolling out the Xfinity-branded mobile service using Verizon’s infrastructure, said, “We really feel we’re not missing anything…. I don’t see something happening in that industry that we envy a position that we don’t have today.”

Charter, responding to reports that Sprint owner SoftBank was looking to sell some or all of the company to the MSO, issued a statement saying “While we understand why a deal is attractive for SoftBank, Charter has no interest in acquiring Sprint. We have a very good MVNO relationship with Verizon and intend to launch wireless services to cable customers next year.”

Whether these statements are just negotiating ploys or a signal that everything has hit a dead end remains to be seen. Meanwhile, the real issue for cable operators, big and small, is whether they’re inclined to use their assets to full advantage to create the kind of network that mobile and telecom companies can only aspire to at this point.

Right now, as Verizon CEO Lowell McAdam noted recently, even his company with its extensive FTTH footprint is a long way from where it needs to be. In an interview with Bloomberg that focused on possible acquisitions, McAdam suggested he’d be open to talking about merging with any of three entities that came up in the discussion – Comcast, Disney or CBS. But then he said, “Given what I know about architecture, financial requirement, cultural fit, there’s never a dream deal.”

The dream deal, he added, would be one that involved an entity with all-fiber infrastructure attached to 5G microcells at the end points. “If I can find a company that had the fiber built for this architecture I’d scoop them up in a minute, but they don’t exist,” McAdam said.

Comparing 5G and Wi-Fi

While the telecommunications industry expects to be able to use 5G for mobile services, currently most agendas are focused on the advantages of using the technology for fixed wireless connectivity. 5G radios are designed to support robust performance at millimeter wave frequencies, where an abundance of spectrum allocated to 5G by the FCC and other bodies will enable fixed wireless connectivity at multi-gigabit speeds throughout the premises and in public places.

The mobile standards body 3GPP recently announced it was accelerating 5G standards development, prompting expectations that mobile uses of 5G will be possible sooner than previously expected. In May AT&T announced it could launch 5G mobile as early as the end of 2018, but it will likely be a long time before 5G is widely available as a mobile service. Any practical implementations for mobile will require use of already saturated cellular spectrum, which is likely to lead to pressure on regulators to allocate more spectrum for migration to 5G.

Some cable executives have expressed strong interest in 5G, notwithstanding the prospects for ongoing expansion of Wi-Fi capabilities as embodied in emerging IEEE standards, which rely on some of the same advanced radio technologies used with 5G. For example, one commonly used core technology now employed with widely deployed 802.11ac Wi-Fi chipsets is MIMO (Multiple-Input, Multiple-Output), which uses multiple antenna arrays in transmitters and receivers to support more robust transmissions through spatial separation of bit segments using SDMA (space division multiple access) multiplexing.

The newest 802.11ac chipsets, implementing the second wave of specifications issued for the platform, employ what’s known as Multi-User MIMO (MU-MIMO), which uses precoding to identify multiple receiving devices, thereby enabling a kind of broadcast mode that uses SDMA to transmit content delivered over a given frequency channel to more than one device. This has the effect of doubling or even quadrupling the number of simultaneously supported clients on each access point (AP).

5G goes much farther with Massive MIMO technology, which outdoes the 8×8 antenna configurations envisioned for the third wave of 802.11ac with support for 64×64 configurations in compact, commercially viable components. These antennas can focus the transmission and reception of signal energy into very small regions of space, providing new levels of spectral efficiency and throughput for more users in a dense area without causing interference.

Depending on spectrum availability for Wi-Fi, the MU-MIMO and other advances envisioned with Wave 3 802.11ac are expected to take effective throughput per AP to around 2.8 Gbps compared to 800 Mbps and 1.4 Gbps, respectively, for Wave 1 and 2 802.11ac. (Maximum PHY rates utilizing all available spectrum for the three waves are pegged at 1.3 Gbps, 2.34 Gbps and 6.933 Gbps.)
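The PHY-rate ceilings quoted above follow from standard 802.11ac link parameters. As a sketch, assuming 256-QAM with rate-5/6 coding and the short (3.6 µs) guard interval, the per-stream rate is data subcarriers × bits per symbol × coding rate ÷ symbol time, multiplied by the number of spatial streams:

```python
# Reproducing the 802.11ac PHY ceilings cited in the text from link
# parameters: 256-QAM (8 bits/symbol), rate-5/6 coding, 3.6 us symbol
# time (short guard interval), 234 data subcarriers per 80 MHz channel.

def vht_phy_rate_gbps(streams: int, data_subcarriers: int,
                      bits_per_symbol: int = 8,       # 256-QAM
                      coding_rate: float = 5 / 6,
                      symbol_time_s: float = 3.6e-6) -> float:
    per_stream_bps = data_subcarriers * bits_per_symbol * coding_rate / symbol_time_s
    return streams * per_stream_bps / 1e9

# Wave 1: 3 streams over 80 MHz (234 data subcarriers)
print(round(vht_phy_rate_gbps(3, 234), 2))    # ~1.3 Gbps
# Wave 3: 8 streams over 160 MHz (468 data subcarriers)
print(round(vht_phy_rate_gbps(8, 468), 3))    # ~6.933 Gbps
```

The gap between these PHY ceilings and the "effective throughput" figures in the text reflects MAC overhead, contention and real-world channel conditions, which typically cut usable throughput well below the PHY rate.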

Wi-Fi doesn’t stop there. Specifications for the next generation of Wi-Fi in the existing 2.4 and 5 GHz spectrum zones, 802.11ax, are under development with the goal of pushing the PHY rate to the 10 Gbps level. And vendors are already certifying products supporting the 802.11ad WiGig protocol for transmissions at the 60 GHz tier reaching PHY throughput of 4.6 Gbps. Farther out, an enhancement to WiGig known as 802.11ay will raise the PHY ceiling to 100 Gbps.

In light of these developments it’s no wonder that many cable operators question whether they’ll ever need to use 5G to remain competitive. But it seems safe to say that Wi-Fi will never catch up with 5G as a fixed wireless solution as radio and chip technologies supporting both continue to evolve.

AT&T, describing its 5G rollout plans earlier this year, said it’s already running fixed 5G connectivity in lab tests at up to 14 Gbps. Given the much greater bandwidth available, the potential for mobility and other 5G advances that are not part of the Wi-Fi migration path, there’s a persuasive case to be made for cable operators’ transition to 5G at some point.

Just what the cable industry’s version of McAdam’s dream network might look like (with some coax running between the fiber and the microcells) can be seen in an emerging portfolio of next-generation products coming into the cable industry from the mobile and telecom sectors. Tight integration between HFC and 5G is definitely part of the picture, but so, too, is the power of cloud-orchestrated network virtualization to coordinate functions across a wide array of virtualization-optimized network elements.

As a case in point, several developments underway at Nokia, which acquired Alcatel-Lucent last year, can be pieced together to envision what’s in the offing for cable operators who are ready to move beyond the restrictions of legacy approaches to network migration. These touch on things like multi-terabit routing capabilities, remote PHY access infrastructure, virtualized network elements, distributed cloud architecture and big-data intelligence as well as 5G.

A New Benchmark in Routing

Nokia became a conduit for flowing advanced telecom technology into the cable industry by virtue of Alcatel-Lucent’s success marketing its edge routing solutions to MSOs as broadband took off in cable. With a longstanding corporate unit devoted to cable and the RF requirements of the HFC network, the company is well positioned to propose new ideas to operators and react to the resulting demand by adapting innovations coming out of Nokia Bell Labs and the company’s product development teams to cable’s requirements, notes Jay Fausch, who leads global marketing for Nokia’s cable MSO segment.

“Edge routing got us on the map with cable, and now there’s strong demand for deployment of our routers on cable backbone and core networks as well,” Fausch says. “The opportunities cable operators have to compete in a fixed-line, all-IP gigabit world are playing to our strengths.”

The same is true of the industry’s growing reliance on advanced wireless technology, he adds. “We see great opportunities around mobility in cable,” he says, “As operators address the mobility question and how to keep customers on their networks when they’re not in reach of Wi-Fi, there’s a lot of technology in the Nokia portfolio that can play in that transition.”

When it comes to routers, Nokia’s recently introduced 2.4 terabit-per-second FP4 network processor is driving capabilities in the company’s edge and core routers that will allow cable as well as other telecom companies to keep up with traffic and functional demands of all-IP service operations, Fausch says. “The FP4 is pushing the envelope on capacity without compromising on the capabilities you need to have in edge and core routers,” he notes. “It’s a significant improvement for us.”

The company says the FP4 increases edge routing capacity twofold to threefold across various models in the 7750 SR portfolio and sixfold in its 7950 XRS core router. For example, according to company specs, the single-shelf 7750 SR-14s supports a 144 Tbps configuration and can scale up to 288 Tbps. The 7950 XRS scales to 576 Tbps in a single system through chassis extension, without requiring separate switching shelves.

“Our next generation of terabit class routing leapfrogs other suppliers out there,” Fausch says, a point confirmed by Frank Ostojic, senior vice president and general manager of the ASIC Products Division at Broadcom. “Nokia is charting a course that others will have to follow,” Ostojic says, citing the firm’s use of several cutting-edge silicon technologies. These include 16nm finFET Plus process technology (where a fin-shaped electrode in a field effect transistor allows multiple gates to operate on a single transistor), Broadcom’s embedded SerDes (Serializer/Deserializer, a means of maximizing input/output capacity on chipsets) and advanced packaging.

Of course, in the seesaw battle among routing suppliers this might only be a temporary lead. But timing is essential as operators push ahead with preparations for future requirements. “Router interfaces and optical transport pipes will have to keep up” to enable full exploitation of these capabilities, Fausch acknowledges. Nonetheless, he adds, “This positions us very well for the cycle of scalable infrastructure upgrades taking place today.”

The Cable Virtualization Mandate

Whereas the move to higher-capacity routing is more or less a no-brainer for cable operators, another key but less certain step in preparing to compete in an environment where everybody and everything is connected all the time is choosing an approach to network virtualization. One area of debate where the rubber is meeting the road right now concerns the best approach to virtualizing CCAP (Converged Cable Access Platform) systems.

As previously reported, vendors are lining up on different sides of the debate with introduction of CCAPs that are meant to work with the CableLabs-defined Distributed Access Architecture (DAA) model, which relies on digital optics terminated at the HFC node by Remote PHY electronics that manage modulation, multiplexing, forward error correction and other physical layer processes in the conversion to RF for distribution over coax. The question posed by the new vendor options is, once the CCAP is relieved of performing these PHY layer processes, which components of the DAA-enabled CCAP should be virtualized.

Some vendors are virtualizing both the control and data plane components of the CCAP to run on COTS (commodity off-the-shelf) servers. Nokia, Huawei and possibly others are supporting a Remote MAC-PHY configuration with solutions that virtualize the control plane functions running in core locations to orchestrate provisioning, quality control and tie-ins with other back-office elements but move the CCAP data plane or MAC (Media Access Control) into the node.
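The two splits described above can be summarized as a function-placement table. The sketch below is a hypothetical illustration (the function names are invented shorthand, not CableLabs or vendor terminology) of what moves to the node under each option:

```python
# Hypothetical summary (function names are illustrative shorthand) of
# where CCAP functions land under the two DAA options discussed above:
# Remote PHY moves only physical-layer processing into the node, while
# Remote MAC-PHY also pushes the data plane (MAC) out, leaving a
# virtualized control plane running in core locations.

PHY_FUNCTIONS = {"modulation", "multiplexing", "forward_error_correction"}
MAC_FUNCTIONS = {"docsis_scheduling", "framing", "qos_enforcement"}
CONTROL_FUNCTIONS = {"provisioning", "quality_control", "back_office_integration"}

def placement(architecture: str) -> dict:
    """Map each DAA option to the functions run in the node vs. the core."""
    if architecture == "remote-phy":
        return {"node": PHY_FUNCTIONS,
                "core": MAC_FUNCTIONS | CONTROL_FUNCTIONS}
    if architecture == "remote-mac-phy":
        return {"node": PHY_FUNCTIONS | MAC_FUNCTIONS,
                "core": CONTROL_FUNCTIONS}
    raise ValueError(f"unknown architecture: {architecture}")

for arch in ("remote-phy", "remote-mac-phy"):
    p = placement(arch)
    print(arch, "-> node:", sorted(p["node"]))
```

Seen this way, the debate is about how much intelligence to push into thousands of nodes versus how much to keep in a few virtualized core sites, which is why space, power and operational trade-offs dominate the discussion.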

It’s still unclear which way operators will go, Fausch says. “There’s a lot of lab and field trial activity as operators deal with explosive traffic growth and the need to reduce service group sizes and deal with increased demands on space and power in headends,” he notes. While the Nokia approach offers significant savings in power consumption and space utilization, “not everybody is ready to cross that bridge.”

“We feel traditional players in the CMTS business have a bit of embedded business to protect, so they’re not that anxious to see operators moving to remote PHY applications,” he adds. “But if you don’t go that way, you end up with a lot of big iron in the network you don’t need.”

The idea of a fully distributed virtualization architecture where software-based functions performed on COTS hardware in remote locations can be orchestrated through highly automated cloud-based workflows is gradually taking hold in the broader telecom industry. But it’s an evolutionary process where different carriers are focusing on different areas of virtualization rather than converting to full virtualization all at once. So far, most cable companies have yet to take these first steps.

“Elegant evolution is a pretty big pill to swallow,” Fausch says. “We’re emphasizing that as the industry heads into this all-IP fiber-rich gigabit-enabled world, the more you can recognize that and cloud-enable it to get the flexibility and agility in the network that you need for making adjustments to customer behavior and demand for services, the better.”

“If that’s where you want to go, we can get you there sooner,” he adds. “But until it smacks you in the face, it’s tough to bite the bullet.”

Virtualization and 5G

Where things are going with virtualization necessarily shapes how the industry views use of next-generation wireless. 5G has great significance here, with capabilities enabling allocation of spectrum for data flows tuned to specific service categories.

A just-announced project getting underway in Europe points to what this could mean for service providers of all stripes. The 5G Mobile Network Architecture research project (5G-MoNArch) brings the architectural concepts articulated by phase 1 of Europe’s 5G Infrastructure Public Private Partnership (5G-PPP) into play with industry-driven use cases and two real-world testbed implementations.

Coordinated by Nokia under the auspices of the European Union’s €80-billion Horizon 2020 Framework Programme, the project involves a consortium of 14 industrial and academic partners who have aligned to kick off Phase 2 of 5G-PPP, marking a significant step toward ensuring a common approach to service launches in the years ahead. Ultimately, the flexible and programmable architecture will support the vast variety of services, use cases and applications that will be part of 5G-enabled networks, participants say.

“We follow a shared architecture of what the next-generation communications infrastructure needs to look like to enable and meet the network demands of the next decade,” says Peter Merz, head of end-to-end mobile networks solutions at Nokia Bell Labs. Underscoring the magnitude of what has to be accomplished, he adds, “5G communication needs both private and public entities to invest in the infrastructure and ensure Europe remains competitive.”

The goal of the 5G-MoNArch project is to use network slicing, which capitalizes on the capabilities of software-defined networking (SDN), network functions virtualization (NFV), orchestration and analytics to support a variety of use cases in vertical industries such as automotive, healthcare and media. Network slicing, a technique where the network is logically rather than physically sectorized, is deemed crucial to flexibly mounting and simultaneously supporting various services with widely varying requirements.

In other words, it’s all about maximizing the benefits of multi-terabit throughput in coordination with the service agility enabled through network virtualization. The 5G-MoNArch consortium expects to develop detailed specifications and extensions of 5G architecture utilizing key enabling innovations such as inter-slice control and cross-domain management, experiment-driven modeling and optimization, and a cloud-native protocol stack.
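The slicing concept can be made concrete with a small sketch. The slice names and parameter values below are illustrative assumptions, not 5G-MoNArch specifications; the point is that slices are logical partitions defined by per-service guarantees:

```python
# Hedged sketch of the network-slicing idea: slices are logical
# partitions of one physical network, each carrying its own latency,
# bandwidth and reliability guarantees. Slice names and figures below
# are illustrative, not 5G-MoNArch specifications.

from dataclasses import dataclass

@dataclass
class Slice:
    name: str
    max_latency_ms: float     # guaranteed latency ceiling
    min_bandwidth_mbps: float # guaranteed bandwidth floor
    reliability: float        # target availability, 0.0-1.0

SLICES = [
    Slice("automotive", max_latency_ms=5, min_bandwidth_mbps=10, reliability=0.99999),
    Slice("media", max_latency_ms=100, min_bandwidth_mbps=50, reliability=0.999),
    Slice("iot-metering", max_latency_ms=1000, min_bandwidth_mbps=0.1, reliability=0.99),
]

def pick_slice(required_latency_ms: float, required_mbps: float) -> Slice:
    """Assign a service flow to the first slice whose guarantees cover it."""
    for s in SLICES:
        if (s.max_latency_ms <= required_latency_ms
                and s.min_bandwidth_mbps >= required_mbps):
            return s
    raise RuntimeError("no slice satisfies the request")

# A video flow tolerating 150 ms latency but needing 20 Mbps
print(pick_slice(required_latency_ms=150, required_mbps=20).name)
```

In a real deployment the orchestration layer would also instantiate the virtual network functions backing each slice; the matching logic here only illustrates why logical sectorization lets very different services share one physical network.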

The two use cases earmarked for the project will deploy the architecture in live testbeds, one supporting heavy communications usage in a high tourism urban environment and the other enabling reliable and secure communications in a seaport environment. Such use cases, of course, are far from the traditional role played by cable companies. But in a marketplace where pay TV is no longer the defining service, operators have already gone a long way toward positioning themselves as competitors in the broader telecom environment with superior broadband connectivity and aggressive expansion into ever larger segments of the commercial services market.

Going forward, competitive strength will depend on operators’ ability to deliver value-added services over their broadband pipes, including superior converged entertainment, smart home and other benefits for the residential market as well as support for things like virtual VPNs, SaaS (software-as-a-service), video surveillance, IoT (Internet-of-Things) applications and a host of other extras that are central to serving the fast-evolving business market. Of course, ubiquitous wireless connectivity suited to meeting demand in a video-saturated environment where 4K UHD and virtual reality will be part of the bandwidth-guzzling mix will be mandatory.


Cisco Overcomes Incompatibilities Impeding Datacenter Virtualization

David Ward, CTO & chief architect, service provider division, Cisco Systems

Workflows Operate Seamlessly across VMs, Containers and Bare Metal

By Fred Dawson

August 22, 2017 – Cisco Systems has found a way to orchestrate use of datacenter resources that could provide the flexibility that’s been missing when it comes to getting the most out of virtualization technology.

As described by Dave Ward, Cisco’s CTO and chief architect, the company has created an adaptor layer that frees workflows to work with multiple iterations of virtualization across in-house and external cloud resources. “We can orchestrate across bare metal, containers and VMs (virtual machines) all in the same workflow,” Ward said.

With growing reliance on IP-based software systems running on commodity datacenter hardware rather than purpose-built appliances, producers and distributors of high-value video content know they can save money on infrastructure by using virtualization technology to dynamically apply those resources to multiple applications as needs ebb and flow, even tapping public cloud resources for extra capacity on an as-needed basis. Indeed, high-density multi-core processors merging CPU and GPU functionalities have expanded the range and volume of applications that can run on any server blade to the point where failure to implement virtualization amounts to a huge waste of resources.

The problem is that, as each major advance in virtualization technology proves more effective at cutting costs and enabling dynamic versatility than the last, it’s hard to commit to implementing any given mode knowing a better one is likely in the pipeline. And when a company does put different generations of virtualization technology into play as better options come along, silos emerge where workflows are locked into one or another virtualization environment.

Cisco hopes to put such concerns to rest. “We’re working on virtualizing the entire datacenter – storage objects, orchestration optimization, etc. for ultimate results,” Ward said. But, as with everything in virtualization, it’s a work in progress, which in this case involves deep engagement by a growing list of partners who are making it possible to interface their workflows and specific solutions with the adaptor layer developed by Cisco.

“You need partnerships to get over these hurdles,” he said. “There are a lot of end points and capture devices, and we have to be able to do the entire feature set and take assigned tasks across the entire datacenter infrastructure. We’re working as fast as we can to enable ecosystems around us.”

The emphasis is on enabling optimum use of facilities rather than competing at the workflow level, he added. “We’re the infrastructure company, not the orchestration company,” he said, citing a recently consummated partnership with broadcast production supplier Evertz as a case in point. “Combining our control layer with their orchestration layer is big news for broadcasters, because it puts workflow management in possession of the entire datacenter.”

Such fluidity not only bridges facilities that have been provisioned with different approaches to virtualizing hardware resources over time. Equally important, it frees users to choose the best compute environment for each application in any given workflow. While there is a lot of “religion” in IT circles where debates over the benefits of one approach over another can get intense, the truth is the optimum environment is one where users have complete flexibility of choice.

“We don’t want to let technology barriers or religion get in the way of making the best use of datacenter infrastructure,” Ward said. “We’re making it possible to get past these issues with a solution that ensures absolute frame accuracy across all A/V feeds with high tolerance and availability.”

The Need for Multiple Options

Right now there’s a lot of momentum behind virtualization based on container technology as the successor to virtualization based on VMs running on hypervisors. Hypervisor-based virtualization is a multi-layered technology: hypervisors, typically coordinated by a cloud management platform such as OpenStack that spans the bare-metal servers, enable VMs to operate like independent servers with their own OSs and middleware.

With container technology, a single host OS creates semi-autonomous software modules (containers) that run applications directly on the shared kernel, with no guest OS in between. Containers consume less compute for a given task and are more easily scaled.

In some cases container technology is not only seen as a better option; it’s seen as the only practical option for virtualizing use of resources for certain applications. In those situations performance degradation resulting from the additional processing overhead required with use of hypervisors simply cannot be tolerated.

More generally, there’s much excitement over container technology and its support for microservices. A specific functionality performed by a containerized microservice can be applied across multiple applications, greatly adding to the overall efficiency.

But there are instances where reliance on VMs with hypervisors is the better choice. For example, there may be situations where there’s a need for a wide choice of OSs to ensure that a given application in the workflow benefits from the latest and greatest innovations flowing out of different OS environments.

VMs run independently of each other on shared commodity hardware and so can be optimized to use whatever OS is best for their assigned application, whereas containers are designed as incremental instances of a specific OS. A cluster of Linux-based containers will only work for applications that run on Linux, just as a cluster of Windows-based containers will only work for applications running on Windows.

Moreover, VM technology is more mature, which means some users will prefer to wait awhile before implementing containers, and, when they do, they may want to do so for some applications while leaving others to run in VMs. And there are some situations where it’s better to run an application on a server dedicated to that application, i.e., the bare-metal option, rather than in a virtualized environment.

One case in point where bare metal is the widely preferred option involves use of Apache Hadoop, which, rather than using proprietary computer systems to process and store data, provides an open source means of clustering commodity hardware to enable analysis of massive data sets in parallel. Hadoop has major implications for the media and entertainment space where aggregations of vast data sets flowing in from multiple sources are being used for diagnostics, advanced discovery, personalization, addressable advertising and much else. As an I/O-intensive application focused on reading and writing to storage, Hadoop is sensitive to any performance degradations caused by accessing disks through a virtualized layer.

A New Approach to Using Virtualization

For all these reasons and many more, the emergence of an adaptor layer that allows a given workflow to operate across a mix of virtualization and non-virtualization modes, whether in private or public cloud environments or in hybrid combinations of both, could represent an important breakthrough in service providers’ and broadcasters’ progression to virtualization.

“The industry needs an infrastructure orchestration framework that can tune to support whatever the workflow requires without pre-planning,” Ward said.

At NAB in April Cisco demonstrated the implications of this capability with seamless operation of production workflows across three cloud environments in its booth. One of those clouds was physically supported by servers in the booth of broadcast production systems supplier Blackmagic Design, which was connected to Cisco’s booth by a dedicated fiber link to underscore the ability to work across dispersed resources.

This and the cloud-hosted solutions of other vendors tied to the demo highlighted the ability to leverage virtualization as the broadcast industry transitions from traditional hardware-based solutions to an all-IP environment utilizing COTS (commercial off-the-shelf) facilities. A key factor in smoothing the process has been agreement on adoption of multiple protocols under the SMPTE 2110 umbrella.

These include SMPTE 2022-6, through which payloads utilizing SDI (Serial Digital Interface), the traditional mode of connectivity in the production workflow, are bridged to IP devices over RTP (Real-time Transport Protocol) streams. Cisco demonstrated how its platform brings SMPTE 2022-6 feeds into the virtualization process, ensuring that virtualization can proceed wherever COTS facilities are in use while allowing legacy cameras and other traditional elements of the production workflow to remain in operation.

“With SDN (software-defined networking) we can bring age-old job routing with cameras and other equipment into the datacenter workflow,” Ward said. Using high-speed switches under SDN controller management “we can automate topologies; for example, how customers connect their cameras to the network,” he added.

Another aspect of the tight compatibility between production workflows utilizing the SMPTE 2110 portfolio and the Cisco adaptor layer has to do with the all-important timing mechanisms embodied in the TR-03 recommendation, which relies on the Precision Time Protocol (PTP, standardized as IEEE 1588) to ensure frame-by-frame synchronization of all streams comprising each piece of content. “Synchronization through PTP and 1588 is built into our controllers,” Ward said, noting that such capabilities have been invaluable for virtualizing datacenters used in lightning-fast financial transaction environments.
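The synchronization Ward describes rests on an exchange of four timestamps between a master clock and each slave. As a hedged illustration (the function and the timestamps are invented for the example; real deployments use hardware timestamping), the offset-and-delay calculation at the heart of PTP/IEEE 1588 can be sketched as:

```python
# Sketch of the IEEE 1588 / PTP offset-and-delay calculation.
# t1: master sends Sync; t2: slave receives it;
# t3: slave sends Delay_Req; t4: master receives it.
def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    # Assumes a symmetric network path in both directions.
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock offset vs. master
    return offset, delay

# Illustrative numbers: slave clock 5 ms ahead, 2 ms one-way path delay.
offset, delay = ptp_offset_and_delay(100.000, 100.007, 100.050, 100.047)
```

The slave steers its clock by the computed offset; repeating the exchange continuously is what keeps streams frame-accurate over time.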

“Synching up stream, load management, workflow management – the financial industry has a great need for all these things,” he said. “In all cases, it comes down to how you time the orchestration of systems and fully account for all variables.”

The Proof-of-Concept

The impact such capabilities can have in a traditional TV production environment became clear during the Cisco demo. Each application in the workflow made optimum use of whatever resources were available so that, from the dashboard view of a production manager, everything was running just as it would in a single cloud tied to a single mode of facilities utilization, whether virtual or bare metal.

The bare-metal component of the demo datacenter was run in “metal-as-a-service” mode, which creates an elastic cloud-like environment by treating physical servers like VM instances in the cloud. VMs were set up to run three types of jobs tapping either the bare-metal approach or hypervisor-based virtualization as needs arose. The container environment utilized the scaling capabilities of Kubernetes, the open source technology that has made massive scaling possible by treating multiple servers as a single unit providing capacity to the clustered containers.

The live workflow compositor fed screens with the same views of any application in the workflow in both booth locations. Jobs could be programmed to run in whatever environment was optimal, with prioritization of resource allocations suited to each.

For example, a transcoding job could be assigned to the container segment of the virtualized server arrays where chunks of content were shunted off to individual containers for processing in parallel, reducing what might be a two-and-a-half hour process on dedicated servers to 11 seconds, with everything stitched into a single stream and verified for accuracy. Maximum efficiency was achieved with 100 percent dedication of the available container CPU capacity.

Alternatively, if the overall efficiency of execution in the workflow were to require only partial dedication of container capacity to transcoding, the system could be set to allocate capacity necessary to perform the transcoding job in whatever time frame was optimal in the context of execution of other elements in the workflow – say, five minutes instead of 11 seconds, said Andre Surcouf, distinguished engineer in Cisco’s Chief Technology and Architecture Office.

Such optimization, extending to any combination of applications running in any combination of virtual and non-virtual environments, can be arranged automatically, he added. “This is the efficient play,” he said. “You can run a separate optimization algorithm to pack all you can into the datacenter.”
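A toy version of the packing optimization Surcouf alludes to, under the simplifying assumption that jobs are described only by a CPU demand and servers only by a fixed capacity, is first-fit-decreasing bin packing (a real scheduler also weighs memory, affinity and priorities):

```python
# First-fit-decreasing bin packing: place each job's CPU demand on the
# first server with room, opening a new server only when none fits.
def pack_jobs(demands, capacity):
    servers = []      # remaining capacity per server
    assignment = []   # (demand, server index) pairs
    for d in sorted(demands, reverse=True):
        for i, free in enumerate(servers):
            if d <= free:
                servers[i] -= d
                assignment.append((d, i))
                break
        else:
            servers.append(capacity - d)  # open a new server
            assignment.append((d, len(servers) - 1))
    return len(servers), assignment
```

Packing the largest demands first tends to leave fewer stranded slivers of capacity, which is the point of “packing all you can” into the datacenter.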

With two bidirectional pipelines available into the VM-hypervisor and container virtualization environments, the system was able to run the same application in both to meet high-availability prioritization requirements with no disruption in execution. For example, one job running in the demonstrated workflow required use of the two virtualization pipelines to tile and upscale a video feed to UHD for insertion of logos in sync with the schedule set for the overall workload in a live production.

Similarly, users can set up a second job while the first is running and use CPU capacity as it becomes available, all with the appropriate parameters set for each, Surcouf said. In the case of less time-sensitive transcoding jobs, the system can be set to execute them as capacity is freed up with completion of other tasks in the workflow, he added.

Surcouf also showed how, in the event of failure or underperformance in one virtual environment, the system performs frame-accurate switching of the processing into the other environment, subsequently switching the flow back into the first when the problem is solved, with no phase jitter or other disruption. “When we do transcoding, we can take a large file and destroy the processing on one computer and pick it up on another with no interruption in the flow,” he said. “It’s about being able to efficiently use the notion of primary jobs and back-up jobs running hot as close to 100 percent as possible.”

Bringing Hyperconverged Infrastructure into the Workflow

A major development in datacenter efficiency that workflows will need to interact with is hyperconverged infrastructure (HCI), which can support VM or container approaches to virtualization but at a much more granular level than is possible with legacy infrastructure or converged systems. Unlike these other architectures, storage in HCI solutions is integrated with the computing and networking components within each module or node, typically consuming just one RU of datacenter space. Storage is tiered within each module to provide RAM, solid state and hard disk drive support as needed.

Each module is configured with proportionate allocations of compute and storage capacity as dictated by immediate needs, and users can make adjustments to those allocations as new modules are added to the cluster. Managers accessing the platform through a single user interface can set policies for each application dictating how much of the computing resources it requires and what data components will be instantly available on RAM, positioned for fast-cache access in Flash or offloaded to the FAS (Fabric Attached Storage) or SATA (Serial Advanced Technology Attachment) components of the storage stack.
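A policy of that kind can be pictured as a simple tier-selection rule. The function name, inputs and thresholds below are illustrative assumptions for the sketch, not any vendor’s API:

```python
# Toy per-application tiering policy: pick where a data component lives
# based on how often it is touched and how quickly it must be served.
def place(access_frequency_hz: float, latency_budget_ms: float) -> str:
    """Return the storage tier for a data component."""
    if latency_budget_ms < 1 or access_frequency_hz > 1000:
        return "ram"    # hot data: instantly available in memory
    if latency_budget_ms < 20 or access_frequency_hz > 10:
        return "flash"  # warm data: fast-cache access on solid state
    return "disk"       # cold data: offload to FAS/SATA bulk storage
```

In an HCI cluster the manager would attach rules like these to each application through the single management interface, and the node would stage data across its internal RAM/flash/disk stack accordingly.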

“It’s a great advantage to have the compute, network and storage integrated together with the ability to extend full orchestration across the virtualization and other facilities technology choices,” Ward said. But media and entertainment needs pose challenges related to the size of video file objects in the HCI environment, he added.

“Because video objects are so large, you have to optimize around moving files to where there’s enough CPU available,” he explained. “We’re working to optimize our engine for these workloads so that the principles of hyperconvergence become specific for media.”

And so it goes in the never-ending task of ensuring the datacenter adaptor layer can bring the workflows into whatever comes next. “We’ll find a way to support whatever comes along,” Ward said.


Cable Transformations Illuminate Upside to Rural Broadband Story

Diane Quennoz, SVP, marketing & customer experience

Vyve Shows What Can Be Done with Prudent Investments in Small Systems

By Fred Dawson

August 10, 2017 – As government officials argue over how to deal with the sad state of broadband coverage in rural America, they would do well to consider what the experiences of some aggressive Tier 2 cable operators say about what’s doable with resources at hand.

Of course, not every community and certainly not every farm has access to a cable network, but, too often, those that do are served by antiquated plant run by small operators whose stewardship leaves the wrong impression about the value of those facilities. Getting them up to speed will cost money, but the costs are not so great as to foreclose the likelihood of a reasonable return on investment.

Confidence in that supposition is reflected in the hundreds of millions of dollars flowing into small market cable acquisitions and plant upgrades over the past few years, notwithstanding an industry-wide pay TV margin squeeze that has hit small operators much harder than their larger brethren. There’s been plenty of time to determine whether the earliest expansion strategies pegged to high-speed broadband were a good idea. That the investments keep coming suggests the case has been made.

For example, the pacesetters in implementation of 1 Gig broadband service across large multi-state footprints have been Tier 2 MSOs like Cable One, now reaching 70 percent of 1.7 million homes passed with its GigaONE service, and Midco, which reached the 50 percent mark at midyear with its Xstream Gig service on its way to 100 percent coverage by year’s end. That would put the 1 Gig service in reach of 600,000 households in 335 communities across North and South Dakota, Minnesota, Wisconsin and Kansas.

The sense of opportunity in smaller markets is fueling ever more announcements of data rates in the 100 Mbps to 1 Gbps range and ongoing buyouts of smaller MSOs by larger companies. And with the broadband expansion, most of these companies have made connectivity to businesses a key part of their strategies.

One of the more recent consolidation deals came at the start of the year with Cable One’s $735-million acquisition of NewWave Communications, which, as previously reported, was recapitalized four years ago to become the leading broadband provider for businesses as well as consumers in its markets. With 1 Gig available across much of its 440,000 household footprint, NewWave has built a commercial service business from scratch that now accounts for 13.6 percent of company revenues.

Another even smaller Tier 2 operator demonstrating the success of a broadband-focused strategy is Vyve Broadband, which five years ago began transforming old cable systems into the kinds of high-capacity multi-service operations small towns and villages everywhere are looking for. Today, with HFC networks passing about 315,000 households in nine states, Vyve offers 200 Mbps access in three quarters of its franchises and has reached 1 Gbps in two of them: Shawnee and Ketchum, OK.

With cable systems in Texas, Arkansas, Kansas, Louisiana, Tennessee, Georgia, Colorado and Wyoming as well as Oklahoma, Vyve has put a good deal of capital into interconnecting its markets via fiber rings to facilitate headend consolidation and other efficiencies essential to profitably operating over such an expanse. Last year, the company completed buildout of a 400-mile fiber ring around central and eastern Oklahoma.

The 48-fiber ring feeds seven main city hubs, which provide direct service to over 40 municipalities across the state. It connects with a larger network tied to Tulsa, Oklahoma City, Dallas and Atlanta, and also feeds signals into links connecting to local systems in rural areas of these and the other states.

Vyve is offering a triple-play service with digital TV and voice in all its markets. It also has a unit dedicated to delivering business services, which include optical Ethernet, PRI (Primary Rate Interface) and hosted voice.

The emphasis on broadband hasn’t diverted the company from developing a 140+-channel HD service that would be competitive in any market. Indeed, the company is considering new service elements that would take it a step beyond what’s typically on offer from operators in much larger markets.

“Vyve is always on the cutting front,” says Diane Quennoz, senior vice president of marketing and customer experience at the Rye Brook, NY-based company. For example, she notes, Vyve is just now rolling out the first iteration of a hybrid video service that offers Netflix alongside the MSO’s linear and VOD lineup, with unified navigation through the TiVo UI running on Evolution Digital’s eBOX, which combines QAM-delivered traditional linear TV with IP-delivered VOD and OTT on the HDMI 1 input.

“We’re really excited about bringing OTT subscriptions into the linear lineup,” Quennoz says. “People want live and local TV but many want to get their movies through OTT. Now they can have it all through one input.”

Along with universal navigation, support for recommendations and other features, the TiVo UI offers viewers the option to choose between a grid-style traditional interface and the more graphically rich personalized format taking hold throughout the industry. “With TiVo we’ve always had a very user-friendly UI,” she adds. “As we’ve evolved into whole-home DVR and ultimately the eBOX we’ve been able to maintain the same look and feel while benefitting from the things TiVo has done to make navigation easier.”

Vyve pioneered use of Evolution’s DTA with the TiVo UI three years ago as a way to deliver a feature-rich service to legacy analog as well as HD TV sets. The eBOX IP hybrid STB builds on the DTA capabilities to enable operators to cap QAM-based delivery in a smooth migration to all-IP video. The terminal has caught on among smaller operators, many of whom are deploying it through an affiliation Evolution established with the National Cable Television Cooperative.

Next up may be cloud DVR. Vyve is looking at options but hasn’t made any decisions, Quennoz says.

Where the OTT tie-in leads is anybody’s guess. Quennoz acknowledges skinny bundling of live TV channels by OTT providers has had an impact. “We’ve seen our share of decline in video market for sure,” she acknowledges.

For now Vyve sees Netflix as “the go-to service” for ensuring the company is keeping pace with what customers want. “But let’s see what takes off and what makes sense for consumers,” she says, noting the company may look at options that could include new combinations of OTT and smaller bundles as the market evolves.

What matters is keeping customers engaged through high-speed broadband with offerings of value-added services that they can’t get in pure a la carte OTT mode. Along with the unified navigational advantage of providing live and local programming with OTT on-demand content, Vyve sees opportunities tied to a robust whole-home Wi-Fi service and smart-home applications.

“We have to own the Wi-Fi and the Internet in the home,” Quennoz says. “We’re working through various solutions that go beyond using our DOCSIS 3.0 modems with extenders.”

The search for a whole-home Wi-Fi solution operators can offer as a better experience than traditional modes is accelerating across the industry as the number of wireless devices used to access video and other content from any point in the home or business multiplies. This is another area where Midco has played a leading role: it was the first MSO in North America to deploy the mesh Wi-Fi solution offered by AirTies to support a whole-home wireless service, which it offers at $7.95 per month.

“We’re looking at a couple of providers in the market, conducting tests to see what’s working and what’s affordable,” Quennoz says. “The question is how you calculate the value for customers and sell it. Is it part of our broadband offering or a different product set? But we definitely want to be part of that home Wi-Fi opportunity.”

Where smart-home services are concerned, “we’re looking to see what makes sense for our customers,” she says. “We’re testing stuff every day and trying to determine whether this is something we should own or provide access to. A challenge with a lot of these solutions is the hardware is quite expensive.”

It’s still unclear how far consumers in rural markets will go in embracing the smart-home concept and the many applications associated with the Internet of Things, she adds. Vyve is offering home security in one of its markets, but “I wouldn’t say we’re on the forefront of any solutions,” she says. “But we want to determine where our customers are heading and get there before they need it.”

That’s a pretty good way of summing up how Vyve and the other cable players who see opportunity in the smaller markets are approaching the business. So far, they’re demonstrating it’s a winning strategy.


HDR Tech Bottleneck Slows but Can’t Stop 4K Transition

Steven Corda, VP, business development, SES

Complications Abound, but SES Is Demonstrating They Aren’t Insurmountable

By Fred Dawson

July 3, 2017 – The wait for pervasive availability of 4K UHD TV services may seem interminable as technical complications and a dearth of content continue to impede progress, but there’s every reason to believe the dam will finally break in 2018.

Right now the view from the technical trenches is mixed at best, given the added challenges imposed by HDR (High Dynamic Range) technology, which is widely viewed as essential to creating a viewing experience that significantly differentiates UHD from HD. No one knows how that differentiation will impact revenue streams, but MVPDs, traditional and virtual alike, as well as content producers appear willing to invest heavily to find out.

“4K for us will always go with HDR,” says Joshua Seiden, executive director of Comcast Innovation Labs. The decision to go that route has significantly altered the MSO’s plans, which initially envisioned introduction of a 4K UHD set-top box (STB) supporting UHD services for possible rollout in 2016, followed by an HDR-capable STB later that year to enable delivery of HDR-enhanced content.

Joshua Seiden, executive director, Comcast Innovation Labs

While Comcast hasn’t announced the timing for UHD service introduction, beyond the already supported on-demand 4K sampler service offered to owners of certain Samsung and LG TV sets, the company has set its sights on making HDR based on the HDR10 standard available in time for the 2018 Winter Olympics, scheduled for February 9-25 in Pyeongchang, South Korea. “That’s what we’re targeting,” Seiden says.

The Layer3 TV Agenda

Meanwhile, the pace of MVPDs’ commercial introductions of 4K UHD services, with and without HDR, is quickening across North America. Denver-based startup Layer3 TV, for example, is providing STBs supporting HDR-enhanced 4K to all the subscribers it signs up in currently served markets, which include Los Angeles, Chicago, Washington D.C., Dallas/Ft. Worth and Denver, with New York City and environs slated for launch in the near future.

Layer3’s allHD service, delivered over subscribers’ broadband connections, offers 250 HD channels typically priced at about $85 per month. In Washington the company also offers a fiber-to-the-home option by reselling 100 Mbps full duplex Internet service running on Verizon’s network at a standalone price of $69 or at $125 when bundled with the allHD service.

So far, Layer3 has only offered a limited amount of 4K content in VOD mode. Going a step farther into live event coverage, on June 24, along with a handful of other MVPDs, the company tapped the recently launched North American 4K UHD satellite feed from SES to offer iN DEMAND’s pay-per-view production of the Bellator NYC: Sonnen vs Silva Mixed Martial Arts event at a slight premium over the HD feed.

There’s much more in store once more content becomes available, says David Rapson, senior director of content partnerships at Layer3. “We see an opportunity to take advantage of being first with 4K/HDR in our markets,” Rapson says. “We’d like to have four or five live UHD channels running 24/7 along with VOD content as soon as possible.”

The opportunity is closer at hand than most people realize, he adds. When it comes to 4K content development “there’s a lot being discussed that’s not out publicly,” he says. “Movies will be a good opportunity along with sports and nature programs. And there’s a lot of international content coming, too.”

SES Orchestrates a Head Start for MVPDs

Steven Corda, vice president of business development at SES, agrees. “The pace of channels becoming available for our 4K service is exceeding our expectations,” Corda says. “Some new ones are imminent.”

One factor in the quickening pace is the fact that SES has built a distribution system designed to facilitate implementation by terrestrial MVPDs, he adds. “Every piece of the value chain is resolved,” he says, noting this includes a growing catalog of STBs for telco IPTV and cable operators. “If we hadn’t created an end-to-end solution, things wouldn’t be going this fast.”

SES uses HEVC (High Efficiency Video Coding) to compress the channels for delivery over terrestrial networks at 18 Mbps, performs encryption and other format processing and supplies the local headend reception equipment as part of the package. Corda says the bitrate is likely to fall as HEVC matures.

But what the bitrate may be for MVPDs once live sports and other programming comes into play with HDR remains to be seen. Comcast’s Seiden says that, right now, delivering HDR-enhanced 4K sports content at quality levels meeting Comcast’s requirements requires throughput in the range of 30-35 Mbps.

Clearly, though, the SES UHD service, currently delivering ten 4K UHD channels from three satellites covering the U.S., represents a good starting point for MVPDs who want to get their feet wet. As previously reported, SES is offering a similar service in Europe, which it launched ahead of the U.S. service, and now it’s operating UHD in Latin America as well. In all, the company has 22 UHD channels in operation globally, representing about 43 percent of the available UHD channel count, Corda says.

In the U.S. the service is undergoing testing by about 25 MVPDs with a combined audience approaching 10 million, he adds. Verizon, for example, is collaborating with SES in conjunction with its evaluation of the platform as a way to integrate scalable and dedicated satellite bandwidth into its Ultra HD launch plans. “This marks an important milestone in the development of our Ultra HD solution,” Corda notes.

Other MVPDs publicly named as trial partners include Frontier Communications and several cable operators, including Aureon in Iowa, GVTC Communications in Texas, Highlands Cable Group in North Carolina, KPU Telecommunications in Alaska, Service Electric in Pennsylvania and New Jersey, and Shrewsbury Community Cable in Massachusetts. In addition two MVPDs, Highlands Cable Group, a small cable operator in Highlands, NC, and Marquette-Adams, an independent telco offering IPTV service in Oxford, WI, are using the SES channels to support commercial 4K UHD services.

SES does not negotiate the licensing rights on the nine 4K UHD channels it offers from third parties. In the case of the two commercial MVPD launches, the licensing was mediated by Vivicast Media, a content licensor serving MVPDs worldwide. The role played by Vivicast reflects the extent to which SES has gone to help Tier 3 MVPDs get off the ground with 4K UHD services, Corda says. He also points to the assistance SES provided to Marquette-Adams in testing the Amino 4K STB it chose for the service as another example of the hands-on approach.

These launches reflect the importance of a turnkey 4K UHD service to the fortunes of smaller operators who don’t want to be caught, as they were with HD, at a disadvantage against DBS competitors, Corda notes. But, he adds, SES sees an opportunity for the service extending into the higher MVPD tiers as well. “There are over 900 MVPDs in North America, and we have relationships with all of them,” he says.

The ten 4K UHD channels currently on offer from SES are provided on an a la carte basis. One of the channels comprises content aggregated by SES, such as iN DEMAND PPV events. The others, most of which are not part of traditional pay TV lineups, include Fashion One 4K, Travelxp 4K, 4KUNIVERSE, NASA TV UHD, INSIGHT TV, UHD1, C4K360, Funbox 4K and Nature Relaxation 4K.

Some are better known in the OTT space, such as Insight TV and Fashion One, two English-language channels out of Munich, and Travelxp, an international travel channel originating in English out of India. There are startups as well.

“A few of our channels are from content people who saw they could build channels with their own brands through affiliation with us,” Corda says. For example, 4KUNIVERSE debuted in January on one of the SES satellites offering a mix of documentaries, sports, movies and TV shows aimed at Millennials and Generation-X viewers.

The only SES Ultra HD channel delivered so far with HDR enhancement is Travelxp. Billing itself as the world’s first 4K travel channel, Travelxp is using SES to deliver hundreds of hours of travel programs from all over the world. Its HD service, with a lineup consisting entirely of originally produced travel and lifestyle programming, reaches over 50 million homes globally, the company says.

“Ours is the only commercial HDR channel available in this market,” Corda says. He expects SES will be able to add more before too long. “A number of programmers are looking at HDR,” he says.

The Technical Challenges Posed by HDR

SES has settled for its own purposes one of the more vexing issues MVPDs face with HDR, namely, choosing which transfer function to support. “We’re extremely pleased with what we’re seeing with HLG (Hybrid Log Gamma),” Corda says. “Unlike PQ (Perceptual Quantizer) it doesn’t require use of metadata, and it’s backward compatible with standard dynamic range (SDR) UHD. We looked at HDR10, but it washes out with non-HDR10 TV sets.”

In TV displays the transfer function is the algorithmic instruction set which directs how the display interprets and renders the luminance values of the original production. PQ does this by incorporating metadata into the channel stream that can be interpreted by HDR10-compatible TV sets to render brightness as captured by cameras in the original production in accord with the luminance range supported by any given display.

Dolby, the developer of PQ, offers a two-stream version supporting backward compatibility, in which the basic content signal is delivered in SDR and the metadata enabling HDR rendering is delivered in a separate stream, but this has not been incorporated into the SMPTE and HDR10 standards. HLG instead tweaks the traditional transfer function used in broadcast TV, avoiding metadata altogether, so that SDR TVs can display the picture while HLG-compatible TV sets render it with the luminance enhancements enabled by HDR.
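The difference between the two approaches is easiest to see in the math. Here is a minimal Python sketch of the BT.2100 HLG OETF and the ST 2084 PQ EOTF, using the constants published in those specs (the function names are ours, for illustration only):

```python
import math

def hlg_oetf(e):
    """BT.2100 HLG OETF: map normalized linear scene light E in [0,1]
    to a non-linear signal value E' in [0,1]."""
    a, b = 0.17883277, 0.28466892          # b = 1 - 4a
    c = 0.55991073                          # c = 0.5 - a*ln(4a)
    if e <= 1.0 / 12.0:
        return math.sqrt(3.0 * e)           # gamma-like lower segment
    return a * math.log(12.0 * e - b) + c   # logarithmic upper segment

def pq_eotf(ep):
    """ST 2084 (PQ) EOTF: map a non-linear signal value E' in [0,1]
    to absolute display luminance in nits, up to 10,000."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    y = ep ** (1 / m2)
    return 10000.0 * (max(y - c1, 0.0) / (c2 - c3 * y)) ** (1 / m1)

print(round(hlg_oetf(1.0 / 12.0), 3))  # 0.5: knee between the two segments
print(round(pq_eotf(1.0), 0))          # 10000.0: PQ full scale is 10,000 nits
```

Because HLG’s lower segment behaves like a conventional gamma curve, an SDR set can display the signal directly; PQ values, by contrast, encode absolute luminance, which is why PQ depends on metadata describing the mastering display.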

As previously reported, last year the ITU added HLG to its Rec. 2100 specifications for HDR, which include PQ along with the other components of the HDR domain, such as a minimum luminance range of 1,000 nits (cd/m2 or candela per square meter), the wide color gamut set by ITU’s Rec. 2020, support for 10-bit or 12-bit coding, a wide range of frame rate values, resolution specs for HD, 4K and 8K and much else. HLG is also now accommodated in specifications set for ATSC 3.0, HDMI 2.0b, HEVC and Google’s VP9 codec.

Scott Davis, chief architect, Charter Communications

Notwithstanding the ITU’s accommodation of both the PQ and HLG options with provisions for transcoding from PQ to HLG or vice versa, the industry is increasingly torn over which approach to take. Scott Davis, chief architect for Charter Communications, notes that while HLG solves the backward compatibility problem there are, as we reported last year, concerns “about chromaticity errors that some people have seen in versions of HLG.” Such issues can be dealt with, he says, but “the difficulty becomes, what does the TV support?”

Until this year’s NAB, Davis continues, “I saw a great deal of support for PQ and not so much for HLG.” But at NAB 2017 “I saw an awful lot of HLG. I think we’re back to the place of, which would you like to do? It’s going to be a bit of a negotiation between us as content distributors, the content creators and TV manufacturers over what the process needs to be. I think anybody who thinks this is completely solved is a little premature.”

As Seiden notes, Comcast has committed to HDR10. “HDR10 is the most widely deployed,” he says, in reference to HDR-capable UHD TV sets. “That’s our focus.” But, he adds without elaboration, “From the STB side other standards will be supported.”

This could mean the MSO’s new HDR/4K STBs may be able to transcode from PQ to HLG in cases where the subscriber’s TV set is not HDR10 compatible. Whether this is feasible from a cost standpoint is unclear, but it’s clearly doable based on the process prescribed in the ITU’s Rec. 2100.

Right now, though, it’s very hard for an MVPD to lock onto an approach that can be relied on to satisfy consumers, meet the requirements of content providers and measure up to its own standards of performance, Davis says. Moreover, the transfer function issue is just one of many that don’t lend themselves to easy resolution.

“Let’s start with the easy one,” he says. “What luminance value should we choose? I recall last year going across the floor looking at 400-nit TVs and thinking that’s pretty cool. Since then I’ve had opportunities on a couple different occasions to see produced content at much higher values – 1500 nits.”

Indeed, where 1,000-nit displays were the high-end models a year ago, now manufacturers are said to be preparing to introduce displays with 2,000-nit capabilities. “How do we balance this out appropriately?” Davis asks.

“How do we measure those luminances?” he continues. “How do we derive what the real value is of that TV? Additionally, what happens if somebody shoots a video at 1,000 nits and the TV is 2,000 nits? Do we allow the TV to make a change? Do we do the change in some external box? Or do we clamp at 1,000 nits? These are things we haven’t figured out yet.”
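The alternatives Davis lists (let the TV adapt, remap in an external device, or clamp at the mastered peak) can be expressed as simple luminance-mapping policies. A hypothetical Python sketch, not anything Charter has specified:

```python
def map_luminance(nits_in, display_peak, mastered_peak, policy="clamp"):
    """Illustrative handling of content mastered at one peak luminance
    shown on a display with a different peak (all values in nits)."""
    if policy == "clamp":
        # Never exceed the mastering peak, even on a brighter display.
        return min(nits_in, mastered_peak)
    if policy == "scale":
        # Linearly stretch the mastered range to the display's range
        # (real tone mapping uses perceptual curves, not a linear scale).
        return nits_in * display_peak / mastered_peak
    if policy == "passthrough":
        # Let the TV decide; just cap at what the panel can emit.
        return min(nits_in, display_peak)
    raise ValueError(policy)

# Content mastered at 1,000 nits shown on a 2,000-nit panel:
print(map_luminance(1000, 2000, 1000, "clamp"))        # 1000
print(map_luminance(1000, 2000, 1000, "scale"))        # 2000.0
print(map_luminance(1500, 2000, 1000, "passthrough"))  # 1500
```

Each policy gives a visibly different picture from the same source, which is exactly the consistency problem Davis describes.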

Another point of uncertainty is the coding bit rate used with HDR, where 10 bits is now the norm with 12 bits in the wings. Until now, the standard in digital TV has been 8-bit color coding. Does that mean MVPDs have to deliver two streams for every channel, one for HDR versions and one for SDR? Or is it best to move everything to 10-bit processing?
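The gap between 8-bit and 10-bit coding is easy to quantify. Using the narrow “video” range conventions of BT.709 and BT.2100, in which 8-bit luma runs from code 16 (black) to 235 (peak white) and higher bit depths scale those codes up, a quick Python comparison:

```python
def luma_levels(bit_depth):
    """Narrow ('video') range luma code values per BT.709/BT.2100:
    black = 16 and white = 235, each left-shifted for deeper bit depths."""
    black = 16 << (bit_depth - 8)
    white = 235 << (bit_depth - 8)
    return black, white, white - black  # usable steps from black to white

for bits in (8, 10, 12):
    black, white, steps = luma_levels(bits)
    print(f"{bits}-bit: codes {black}-{white}, {steps} steps")
# 8-bit: codes 16-235, 219 steps
# 10-bit: codes 64-940, 876 steps
# 12-bit: codes 256-3760, 3504 steps
```

Roughly four times as many tonal gradations per extra two bits is what lets 10-bit coding span HDR’s much wider luminance range without visible banding.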

One way or the other, Davis notes, if MVPDs are going to rely on HEVC, also known as H.265, to compress UHD signals to reasonable bitrates, they’ll have to adopt the H.265 Main 10 profile to support 10-bit processing. “Quite a few of our existing decoders don’t understand that,” Davis says, referring to Main 10. “So how do we make sure we don’t give somebody something they can’t watch?”
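Davis’s concern amounts to a capability check before delivery. A hypothetical Python sketch of how a delivery system might pick a rendition a given decoder can actually handle (the data shapes and names are ours, not Charter’s):

```python
def pick_rendition(decoder_profiles, renditions):
    """Prefer the highest-bit-depth rendition the decoder supports:
    HEVC Main handles 8-bit streams, Main 10 is required for 10-bit HDR."""
    for r in sorted(renditions, key=lambda r: -r["bit_depth"]):
        needed = "Main 10" if r["bit_depth"] > 8 else "Main"
        if needed in decoder_profiles:
            return r
    return None  # nothing this decoder can play

renditions = [
    {"name": "UHD HDR", "bit_depth": 10},
    {"name": "UHD SDR", "bit_depth": 8},
]
print(pick_rendition({"Main"}, renditions)["name"])            # UHD SDR
print(pick_rendition({"Main", "Main 10"}, renditions)["name"]) # UHD HDR
```

A legacy decoder that only understands Main quietly falls back to the 8-bit SDR stream instead of receiving something it can’t display.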

The need to utilize Main 10 to support 10-bit coding for HDR has been the key delaying factor for Comcast, Seiden says. “It complicates matters,” he says, noting it has taken a while for chip makers to incorporate Main 10. “We’re taking mezzanine content directly from content providers. The whole workflow has to be developed to support 10-bit HEVC.”

Moreover, he adds, getting to the low latency required with live sports and other linear content “takes a heck of a lot of processing.” Comcast is working with vendor partners to address that problem.

The fact that standards keep changing doesn’t help in the transition to UHD. For example, SMPTE, which standardized the PQ transfer function in ST 2084 and static metadata in ST 2086 in conjunction with the HDR10 specifications, is now finalizing a new standard, ST 2094, to incorporate dynamic metadata as a means of accurately adjusting brightness levels on a scene-by-scene, frame-by-frame basis. With a strong push from Samsung, PQ with dynamic metadata is now coming into the market as the key component of what’s known as HDR10+, which Amazon says it will use with some of its UHD content later this year.

“There’s nothing wrong with new standards,” Davis says. “They just add to the breadth of options.” The problem is deciding which options to pick. “You get nervous at that point in time,” he says.

Unstoppable Momentum

But all these issues will probably be resolved sooner rather than later. As they are, MVPDs will feel intensifying pressure to be among the first with viable HDR/UHD services.

Already OTT providers are racing ahead with HDR-infused UHD content. Netflix and Amazon have led the way so far with the addition of HDR-enhanced programming to 4K portfolios they’ve been building since 2014. Others following suit include Hulu, Vudu, Sony Ultra and UltraFlix4K.

But probably the biggest incentive to accelerating cable operators’ move into 4K UHD is the threat posed by DirecTV, especially now that it has the resources of AT&T to leverage in the anticipated expansion of its nascent 4K UHD service. DirecTV hasn’t implemented HDR yet, but it’s leading the pay TV market with three channels devoted to 4K UHD and enough satellite capacity to support dozens more as content becomes available.

The DBS operator recently shifted from offering UHD channels only at a high premium to including UHD as part of its 145-channel, $50-per-month “Select” plan. A growing component of the 4K programming is live sports, which began with the Masters golf tournament in 2016 and was repeated with two-channel coverage in 2017. The MVPD’s 4K sports coverage also includes occasional broadcasts of live MLB, NBA and Notre Dame football games.


Altice Adopts Security Strategy Suited to Big Expansion Agenda

Dexter Goei, CEO, Altice USA

MSO Says Multiscreen Solution Supports Cost-Efficient Path to UHD & IP Migration

By Fred Dawson

February 8, 2017 – Tier 1 MSO Altice USA has taken a key step toward positioning itself as a formidable competitor across what could become a much larger fixed and possibly mobile footprint by implementing a consolidated approach to securing and managing next-generation services, including UHD.

In a departure from the norm in U.S. cable, the company has tapped Switzerland-based NAGRA to supply content protection and back-office platforms that will serve as the foundation for an aggressive expansion strategy, details of which are gradually coming into public view. The decision to tap a pay TV security supplier that has had limited penetration in the U.S. comports with the company’s intentions “to bring innovative products and services to Altice USA’s Optimum and Suddenlink customers by leveraging our global operational expertise, scale, resources and key strategic partners like NAGRA,” says Altice USA co-president and COO Hakim Boubazine.

By breaking with reliance on the traditional suppliers to those cable systems, the company is blazing a trail that could have implications for other operators looking for solutions essential to getting next-gen TV off the ground. Indeed, in some respects the move is in stride with a major shift in how North American cable operators go about procuring solutions these days.

But while openness to broader selections of solutions beyond those offered by traditional set-top and other equipment suppliers has drawn a growing number of players from abroad, security has been a tough nut to crack. With an embedded base of set-tops that rely on the conditional access systems (CAS) from the dominant CAS suppliers, ARRIS and Technicolor, successors, respectively, to the old General Instrument/Scientific-Atlanta duopoly, it’s been hard for North American operators to embrace other suppliers’ solutions.

The Altice strategy suggests this barrier may soon fall as operators make the transition to a new generation of hybrid set-tops that can support UHD 4K while advancing the migration to all-IP video. From a security standpoint, the ability to manage content protection from a single platform that can cover all the bases in an increasingly fragmented device environment has become fundamental to ensuring the robust security that’s essential to delivering licensed content to every point of subscriber connectivity.

NAGRA’s Connect security platform is designed to meet these goals, says NAGRA COO Pierre Roy, not only by offering advanced CAS/DRM and multi-DRM support, but also by facilitating operators’ ability to meet the more advanced security requirements tied to UHD content, including forensic watermarking technologies and anti-piracy and cybersecurity services. Of course, such capabilities don’t amount to much unless they can be configured to local conditions, which Roy says NAGRA has demonstrated it can do here and in other markets, giving it a leg up when it comes to achieving economies of scale.

“Being selected by Altice USA shows how we can be a global partner to large multi-network operators while adapting to their local infrastructures and requirements,” he says. “This creates economies of scale that reduce operator cost and increase operational efficiency through a single, flexible, global technology partner.”

Boubazine concurs. “We have been impressed by the flexibility NAGRA has shown in adapting to U.S.-specific requirements in a short amount of time,” he says. “This partnership will enable us to design integrated services to meet our customers’ expectations.”

Cost efficiencies, such as those enabled through service integration, are a major goal as well. Altice, now the fourth largest MSO following its acquisitions of Cablevision and Suddenlink, has partially justified its risky bet on U.S. cable by citing cost-cutting opportunities which, in the case of Cablevision, are expected to produce a $900-million savings within three to five years.

Altice isn’t saying much about its next-gen service strategy. But it’s clear the company is eager to move to a uniform approach to delivering services across all networks and user devices, as further evidenced in its choice of the NAGRA MediaLive platform.

MediaLive is designed to enable a flexible all-screen backend management approach to monetizing and delivering services, explains Christopher Schouten, senior director of product marketing at NAGRA. “It can be used in set-top-only situations, multiscreen-only configurations and as a universal IP content delivery backend for both set-tops and personal devices,” he says.

On the security side, Schouten adds, Connect allows Altice to automatically provide security appropriate to whatever device a subscriber is using to access the MSO’s service in compliance with licensing terms. “Each situation has different security and business rules, so it’s important to have one master system that applies to all of them,” he says.

The NAGRA solution can efficiently coexist within legacy U.S. cable systems while avoiding duplication of bandwidth and enabling an open choice of set-top box suppliers, Schouten notes.

“It’s impossible to dump 100 percent of the headend and set-top legacy infrastructure in one go,” he says. “You can imagine this would be for new customers and upgrades to more advanced services. The two solutions (legacy and Connect) will be able to be run in parallel.”

The converged security solution can be extended to mobile, he adds. “Because Connect supports broadcast, unicast, multicast and third-party services like Netflix, it can be used in any environment,” he says. “The multi-DRM management component helps unify the application of business rules across third-party DRMs like Fairplay, PlayReady and Widevine as well as NAGRA DRMs.”

Altice leaders have broadly hinted at expansion plans that could involve other cable acquisitions as well as a move into mobile, much as happened in France with the acquisition of SFR in 2014 and the subsequent combination of fixed and mobile operations under the SFR Group umbrella. In an interview last June with The New York Times, Altice USA CEO Dexter Goei made clear mobile was under consideration. “It is worthwhile knowing that every single one of our businesses in other markets are quad-play, both fixed and mobile broadband,” he said.

In one respect, the company has already expanded beyond the boundaries of its acquired cable systems through an investment in startup Layer3 TV, which it inherited with the Suddenlink takeover, as recently reported by Variety. Layer3, which has made known it intends to make its commercial service debut in Chicago with other, unnamed cities on tap in 2017, recently ran a trial of its 4K-ready service platform in Midland and Kingwood, Texas, apparently in cooperation with Suddenlink.

In its current operating territories Altice has committed to a massive fixed network upgrade plan with the intention to extend fiber in its HFC networks all the way to the premises across all of its Optimum (Cablevision) and most of its Suddenlink footprints over the next five years. This will enable “Generation Gigaspeed” services of up to 10 gigabits-per-second, the company says, noting that it “expects to reinvest efficiency savings to support the buildout without a material change in its overall capital budget”.

Underscoring the company’s belief that converged operational capabilities are key to generating efficiencies everywhere, the Holland-based parent Altice Group, now serving close to 50 million customers on four continents, has adopted what it calls the “Altice Way” as a set of principles for all its operations. These include a commitment to “developing, launching and integrating new products, services and business models, including the creation of next-generation communications access and content convergence platforms with market-leading home hubs.”

The company also said plans include forthcoming launches of Altice Studios to “create original movies and series” and the Altice Channel Factory to “create more new channels.”


On-Boarding OTT Services Just Got a Lot Easier for Pay TV Ops

Jeroen Ghijsen, CEO, Metrological

Liberty Global’s Approach to Netflix Integration May Soon Be Replicated Elsewhere

By Fred Dawson

January 26, 2017 – Pay TV providers looking to include OTT subscription services like Netflix in their programming lineups will be relieved to learn there’s an expeditious alternative to the tortuous procedures they’ve had to employ to integrate such services in the past.

Liberty Global, which last year announced it was going to feature Netflix in its programming guides, has revealed it has performed the integration in a much more straightforward and timely fashion than operators are accustomed to by utilizing technology developed by Metrological. As described by Metrological CEO Jeroen Ghijsen and VP of technology and innovation Wouter van Boesschoten, the new cloud-based process provides a means by which operators everywhere can more easily create OTT service-enhanced user experiences using existing middleware and set-top boxes (STBs).

“By leveraging our experience with browser-based application frameworks, we have standardized key components, simplifying the integration of premium OTT content,” Ghijsen says. “This results in a reduction of the required STB resources, deployment cost and time to market.”

Liberty Global has launched the Netflix app on its Horizon UI in the UK, Ireland, Switzerland and the Netherlands, to be followed in other countries throughout 2017. “Metrological’s Application Platform, which is an integral part of Horizon TV, helped us to streamline this particular Netflix deployment and expedite the time to market,” says Doron Hacmon, chief product officer at Liberty Global. “The flexibility of the platform allows us to continue to innovate by integrating new relevant services in a timely fashion.”

As Doron’s comment implies, the Metrological solution promises to make it easier for operators to continually add third-party OTT services as strategies are refined and new deals are arranged, improving their ability to turn the growing multi-subscription phenomenon to their advantage. About 22 percent of cable subscribers also subscribe to at least one OTT service, according to research conducted by Millward Brown Digital. A new study from Parks Associates finds that 31 percent of U.S. broadband households have multiple OTT service subscriptions, which is nearly one-half of the 63 percent of U.S. broadband households subscribing to at least one OTT service.

Brett Sappington, senior director of research at Parks, says the service-stacking phenomenon has become an important step in the growth of the U.S. OTT video services marketplace. As Sappington notes, a big reason for the surging importance of OTT services to pay TV and multi-OTT service subscribers is the volume of original content they provide that can’t be found anywhere else. “The regular release of high-quality original content, such as The Grand Tour (Amazon) and Gilmore Girls: A Year in the Life (Netflix), ensures the large OTT players will remain a core, consistent subscription among service-stacking households,” he says.

For pay TV providers who want to serve this demand by creating a one-stop-shopping environment for their own and others’ subscription services, having a way to bring those OTT services into the pay subscriber’s navigation window as a routine operational task will become ever more important. “If you can support this through one unified solution that integrates services in compliance with all their requirements in a standardized manner, you can save a lot of time,” Ghijsen says.

Metrological’s hybrid deployment architecture leverages an application framework that acts as a device- and software-agnostic abstraction layer streamlining the engineering and coding requirements for STBs, van Boesschoten explains. This approach also yields a smaller STB resource footprint, enabling operators to deploy premium OTT content on legacy devices, he notes, adding that much of the Liberty deployment involves use of five-year-old STBs.

As previously reported, Metrological has become a leading supplier of solutions designed to facilitate pay TV operators’ multiscreen services strategies. Its Application Platform integrates TV and OTT experiences, providing full lifecycle support for operators’ management of branded TV app stores and OTT content via a cloud-based back-end that also provides real-time business intelligence data and marketing analytics. Operators can execute on these capabilities utilizing Metrological’s App Library, which contains over 300 apps, or they can build their own apps with an open software development kit.

The move away from reliance on apps hosted on the STB, where limited CPU resources restrain operators’ ability to respond to new opportunities, requires use of browser technology that draws on cloud resources fast enough to meet low latency requirements. There was considerable skepticism at Netflix that a cloud-based platform could execute on trick play and other functions intrinsic to its service, van Boesschoten notes.

But, with wide-scale adoption of its cloud technology, including incorporation into the Reference Design Kit (RDK) software stack backed by Comcast, Liberty Global and Time Warner Cable, Metrological has proved its pay TV-optimized HTML5 browser is up to supporting this latest addition to its cloud capabilities. Utilizing an open-source environment known as “WebKit for Wayland,” the browser enables better rendering of apps and next-generation UIs in a multi-device environment along with better control over all applications and resources, van Boesschoten says.

“Our browser gets past the native utilization hurdle,” he says. “It’s very fast with the ability to read data at 60 frames per second.”

In the hybrid deployment architecture used for integrating OTT services there’s a careful balance between functions residing in the cloud and on the STB. “At the hardware level we perform integration for graphics rendering with the CPU,” van Boesschoten says. In an RDK set-top environment, integration with the STB SoC takes just three days, he adds.

Speed to market is greatly aided by Metrological’s integration with GStreamer, a multimedia framework included in the RDK software stack to support secure streaming of content over the home network with a full set of components for managing complex renderings across all networked devices. GStreamer employs a plugin model supporting implementation of a wide range of codecs, filters and other resources that can be mixed and matched through developer-defined pipelines to enable feature-rich multimedia applications.

“We have tremendous experience with GStreamer,” van Boesschoten says. “As long as an application supports GStreamer we can make that app work with whatever STB and middleware environment you bring to the table.” Metrological can achieve the STB-level integration with apps that aren’t compatible with GStreamer, but it takes a little longer, he adds.

At the cloud layer in the hybrid architectural approach Metrological supports all the state functions (pause, rewind, resume, reset), positioning of the app in the UI (whether as a standalone app or as a channel selection or both) and any modifications tied to rendering on different types of devices beyond the STB. Device certification, subscriber authentication and security provisioning live in the cloud as well. A systematic, automated approach to pushing to the STB whatever DRM or other security mechanisms are required by a particular OTT service is critical to quickly mounting such apps, van Boesschoten notes.

In the case of Liberty’s integration with Netflix, all these capabilities were put into play with the existing Cisco middleware platform with a minimum of heavy lifting. “We’ve defined the platform to be useful regardless of whether the STB is running Cisco, ARRIS or somebody else’s middleware,” he says.

There are likely to be many more customers for the new Metrological platform, given how widespread the OTT service integration strategy has become. As previously reported, a recent global survey of operators’ service innovation priorities by the Pay TV Innovation Forum found that on-boarding OTT content was one of the top three priorities among service providers everywhere.

“This is opening an important new business opportunity for us,” Ghijsen says. “Meeting Netflix’s requirements for engagement has been a difficult undertaking for operators. Now we’ve validated it can be done much faster with far less effort.”


Pay TV Operators Worldwide Detail Responses to Disruption

Koby Zontag, VP Media Sales & Business Development, PCCW

Common Thread in Top Innovation Priorities Reflects Consistency of Competitive Threats

November 22, 2016 – Entering 2017 the challenges faced by pay TV providers the world over are remarkably consistent region to region, as evidenced by the results of an extensive global survey of operators undertaken by the Pay TV Innovation Forum initiative spearheaded by NAGRA. At the same time, innovation strategies vary depending on regional market conditions and where any given service provider sits in the intensifying competitive scrum.

In the interview that follows, Simon Trudelle, senior marketing director for NAGRA, provides an overview of the survey process and its findings. We then present excerpts from Pay TV Innovation Forum interviews with six executives from different regions of the world who describe the market conditions, challenges and innovation strategies that characterize their operational environments. Companies represented include AT&T/DirecTV, Liberty Global, Hong Kong’s PCCW, Brazil’s Oi, Link Net-First Media in Indonesia and Telekom Malaysia.

ScreenPlays – It’s great to have this opportunity to catch up with you, Simon, especially in light of some of the research that’s come out of the Pay TV Innovation Forum that NAGRA has been spearheading. Why don’t we begin with your telling us what this is, how long it’s been operating and what its agenda is?

Simon Trudelle, senior product marketing director, NAGRA – It’s a program we launched in Q2 2016. A final report and conclusions were released at IBC 2016.

The program aims to look at the state of innovation in the pay TV industry and really answer the question of what will be driving growth in the years to come at a global level. The approach we’ve taken is to work with a London-based consultancy, MTM. They have been experts in the TV space for over a decade.

They researched the market around the world looking at the top 231 operators across the leading countries and analyzing the state of innovation with each of these operators. And then we opened up the conversation with industry executives. Over 200 people were asked to contribute and to provide their view of what are the priorities in terms of innovation for years to come.

We ran six workshops in different parts of the world: in Europe, in London and Rome; in Asia-Pacific, in Singapore; and in the U.S., in Los Angeles. We also went to Mexico and Brazil. We surveyed executives in each of these regions to capture their input and also ran some surveys and analyzed data to get a complete view of the situation today and where it’s headed.

SP – I don’t know of anybody that has done this. Usually you get research studies that aren’t really talking to distributors. They’re talking to everybody else to get trends and what have you.

Getting them to cooperate was no small feat I imagine. Once you did what did you find out?

Trudelle – Ultimately we realized that there are some obvious leaders worldwide. They’re not specific to one region. We listed major players that are ahead of the curve in many ways and have been able to innovate already and launch new types of services, improving the pay TV experience or even going into what we call adjacencies, new areas of growth for pay TV. We provided a benchmark and ranking of the players. That data is available in the reports.

What comes out is that, in terms of the next steps, we’re going to see more competition driving more innovation. Eighty-three percent of the service providers we surveyed said that competition is going up, and 78 percent said that innovation was the answer.

It means we are reaching a point in the industry where we know things are changing and the opportunities are there to actually grow the pay TV industry. But the recipes will be different, because the technologies and the networks to deploy pay TV services are evolving with IP and cloud technology and data becoming more and more important.

And the other dimension in terms of how to do it better for the future, in the conclusions we not only see a focus on the new technologies but also on partnerships with key vendors to accelerate this innovation process and be more agile in leveraging the best-of-breed players to get there and build the future of pay TV.

SP – Where is that collaboration in the vendor community centered? How does that get done?

Trudelle – We’ve analyzed several models. There are some consortiums that have begun to be put in place. Also, there are some contributions from open-source communities. There are also some service providers among the largest ones that have started making equity investments in some of their partner vendors.

We think there are several models. It will be a mix of them that will make service providers successful. It is certainly a new way of approaching the market. The end game is that service providers have to be in a position where they put the consumer at the center of the experience, and they’re agile enough to move their systems to the next generation of technology.

SP – What did you see as the biggest area of consensus on innovation strategies? Is it revolving around UHD and HDR? Is it starting other services? Is it mounting an over-the-top service?

Trudelle – We looked in particular at nine major categories, and out of that list there were three that stood out more. One is more on the business side, the pricing and packaging of the offering.

The feedback we’re getting from the industry is that we will move progressively away from the one-size-fits-all type of bundle to more segmented, targeted products that respond to the needs of consumer segments. And that has become possible because of OTT delivery, new technologies and new experiences that can be delivered. In the survey, that came out as one of the top priorities.

Then it’s also about improving the offering in terms of content. So on-boarding OTT content, particularly the Netflixes and YouTubes of the world…

SP – A few years ago that survey would have come up near zero on that question.

Trudelle – Absolutely. We started this survey over a year ago, when we already had some signs that it was becoming a reality. And now we’re seeing that happening, with more service providers saying we would like to on-board more content and create the one place where you have access to all the best content. So it’s really giving pay TV its leadership role again as the one place where the best content is available.

The third priority is increasing the reach to all screens – big screens, TV sets, very important, but also bringing the same content to other devices with the on-demand capabilities easily available from all devices. That’s more to address the needs of a younger generation that is consuming content on all these devices.

SP – Obviously, these priorities are all intertwined. They basically feed off each other as the priorities of the industry. That survey really gives us a good idea of what’s on these people’s minds. Were the findings different for North America?

Trudelle – There were trends that are stronger in the North American market. We’ve learned from service providers that there is a great appetite for delivering OTT content, building an app model addressing on-demand consumption anywhere, anytime, and offering more flexible pricing and bundling.

And with the pressure from content owners that are going direct to consumers, this is also bringing service providers to look at the market with a different vision of where it’s headed. We haven’t seen as much of these trends emerging in other parts of the world yet. When we look at the four reports, we see that North America is already addressing challenges that the other regions are only dreaming about.

SP – In this area of collaboration, did anything come up around security and the fact that these new [content licensing] rules that are coming into play will require far more cooperation on enforcements in tracking piracy, which is really a pan-industry kind of agenda?

Trudelle – It does come out in some conversations that there is, especially in markets like those in Latin America, a lot of illegal content available that is hurting the pay TV industry. We at NAGRA work with regional operators to improve anti-piracy efforts as part of Alianza in the region.

But this is potentially holding back growth and weighing on innovation, because consumers find the content they want, but through the wrong channels. That means that service providers, and content owners as well, have to equip themselves at some point with the tools and the technologies to stay in control of the distribution of content and also make sure the experience at the end is better than what you get from a pirated site. So it is both defensive and proactive.

SP – Our audience can go to your website and get your findings from the forum?

Trudelle – Absolutely. These findings are available for free download upon registration. And we’ve also published a number of public interviews with executives that were created as part of the program. They provide, from a service provider perspective, real examples of what’s happening in a given market: insights into how they see innovation in their companies and in the industry, and what they see as the key success factors.

SP – We’ll definitely be watching the site for that input. Thanks much for taking us through this, Simon.

Excerpts from Pay TV Innovation Forum Interviews with Service Provider Executives

United States
Charles Cataldo, Manager Technical Services, DirecTV/AT&T

Pay TV Innovation Forum – How would you describe the state of the US pay-TV industry today?

Cataldo – Ten years ago, a typical pay TV subscriber was a family household. Today, the picture is very different – there might be five members of that household, each of them looking for different content. The ‘one subscription fits all’ model does not work anymore. Pay TV service providers now have to focus on building an ecosystem of products and services that appeals to each member of the household.

In addition, the younger generation has grown up watching YouTube. Their perceptions of and expectations for content are very different from those of a traditional pay TV decision maker. For a long time, I have believed that this would present a great opportunity for video services that sit north of YouTube and south of traditional pay TV. That is exactly the type of standalone OTT subscription service that Major League Baseball (MLB.TV) and HBO (HBO Now) have developed.

PTVIF – What are the innovation priorities for pay TV companies in the USA? 

Cataldo – Pay TV service providers that have physical networks and are experienced in developing great content need to be able to innovate in terms of search engines and content placement on the user interface. On the other hand, OTT service providers, such as Netflix, that have flexible technology platforms and are sensitive to their customer preferences need to be able to establish relationships with major programmers in order to build great content propositions.

At the end of the day, the factors that will determine the success of a pay TV product will be content quality, followed by user experience and ease of navigation, followed by quality of delivery.

PTVIF – Looking ahead, what will be the most exciting areas of opportunity for pay TV service providers?

Cataldo – In terms of content, there is significant unrealized value in standalone OTT content, particularly sports, and mobile content, including mobile-first content and mobile gaming. In terms of business models, there are exciting opportunities to move beyond subscriptions. For example, pay TV service providers can utilize freemium models, where users can choose to pay the full price for the service without advertising, or get the service for free or at a reduced price with advertising. In addition, pay TV service providers can be creative in how they promote their services: instead of buying advertising, they could spend those ad dollars on offering pilot episodes to the public for free.

Shuja Khan, VP Revenue Growth Transformation, Liberty Global

PTVIF – How would you describe the state of the pay TV industry today?

Khan – When you look at the long-term evolution of the pay TV industry, the last five years have been much more disruptive than the previous ten. During the first decade of the century, European pay TV providers were focused on improving their content offerings by, for example, increasing channel lineups, differentiating themselves from free-to-air channels, and investing in their distribution platforms and set-top boxes. Today the focus is on delivering even better experiences for our customers – they are now used to almost continuous app updates compared to the three-to-five-year refresh cycles we used to have. Then it’s also about bringing new content offerings to our platforms with flexible propositions and addressing the exciting new growth opportunities that are opening up with on demand, personalization and the impact of social media.

It feels like we’ve gone from a jog to a sprint triathlon!

PTVIF – Pay TV companies are often perceived as not being especially innovative. Why do you think that is the case?

Khan – In my opinion, what makes pay TV companies successful is their ability to transition breakthrough innovation into mass market adoption. The innovation may have originated in other markets, often niche markets, but what they do is make the technology reliable and easier to use and then package it in a way that is compelling. That for me is still innovation.

PTVIF – Looking forward, what do you see as the key innovation challenges facing the pay TV industry?

Khan – First of all, the pay TV delivery mechanism is very complex. It has so many components to it, and bringing innovation to the whole system is not straightforward, in terms of technology and cost. I think on balance it’s better to get it out, then make sure it’s perfect…and then course correct.

Secondly, organizational design is really important for innovation, and lots of pay TV companies are not designed to be innovative – they’re designed to be efficient. A lot of them are still working in silos, with little collaboration. This is one of the key reasons for some of the transformational changes that I’m involved with at Liberty Global.

Third, there are return-on-investment considerations. Pay TV is a great cash-generating business and has healthy margins, so innovative products and services can face a very high return-on-investment hurdle.

Finally, lots of pay TV operators are worried about disrupting their existing businesses, so innovation is much more likely to come from new entrants or industry outsiders. The best way to address this – and the ROI challenge – is to strategically invest, incubate, rapidly experiment and then integrate.

PTVIF – What steps can pay TV service providers take to develop and grow their businesses?

Khan – Quick ones. Pay TV service providers can’t ignore the disruptive forces facing the industry. They need to identify potential disruptions and take steps to take advantage of them.

First of all, pay TV companies are in general doing a good job addressing the basics, investing in better set-top boxes, great OTT products and very advanced functionalities. Competition is stimulating innovation across the industry.

Secondly, the future is uncertain so we need to place bets. A good way to do that is through corporate venturing. As an investor, you can integrate the new innovation into your business – and could buy the business outright at some point, if it makes sense.

There are also lots of exciting new growth opportunities opening up for pay TV providers outside of their core business. Advertising and data is one area. There is a wealth of data that pay TV service providers can extract, analyze and monetize, leveraging return-path data from set-top boxes and OTT products. It’s a really unique asset that we have and can enable some really exciting new business models.

Thirdly, we need to follow consumer behavior and demand. This is what makes multiscreen or OTT interesting and exciting. Although TV Everywhere services are almost ubiquitous, there are still lots of opportunities to extend content onto new screens – to deliver the next generation of aggregation services and to make the mobile viewing experience easier and more user-friendly.

There is an abundance of opportunity; it’s just a case of prioritizing what’s most likely to provide the best growth.

Hong Kong
Koby Zontag, VP Media Sales and Business Development, PCCW

PTVIF – Do you think innovation is becoming more or less important to the pay TV industry?

Zontag – Innovation is definitely becoming more important to the industry, and companies are investing more in it. It is particularly important for market leaders who need to invest heavily to respond to disruptive technologies and innovate continuously to maintain their market positions. Pay TV service providers will always face potential disruptions. Today, it is OTT services, tomorrow there will be something else, so they have to be ready. It is also important to note that with major Internet businesses, such as Google and Amazon, entering the video market, the lines between different types of TV and video service providers are getting blurred.

PTVIF – Looking ahead, what will be the most exciting areas of opportunity for pay TV service providers?

Zontag – Service providers will focus a lot of their attention on offering great content, so we should see more original and exclusive content in the market and stronger partnerships between content owners and pay TV service providers.

Multiscreen TV Everywhere services will also be very important for pay TV service providers going forward. TV Everywhere is slowly becoming a must-have service for customers and it will soon become part of the most basic pay TV service offering. Commercially, I see a big opportunity to bring more premium content, particularly sports, to consumers, allowing them to, say, watch football finals on the go. The key challenge will be monetizing these TV Everywhere services, but there are various ways to overcome it, such as tiered pricing based on the number of supported devices or higher reliance on advertising revenue.

In terms of adjacent businesses, smart home solutions will be a very important way for pay TV service providers to extend their presence in consumer homes by providing connectivity for all consumer devices.

On the B2B side, targeted TV advertising will be a major opportunity for pay TV service providers as advertisers will be willing to pay more money for effective ways to reach their target audiences. Pay TV service providers, broadcasters and advertisers will have to work together to find a mutually beneficial business model. Today, it might be more lucrative for some broadcasters to sell TV advertising on their own. However, with TV advertising rates getting squeezed by online advertising, it will be just a matter of time before targeted TV advertising becomes a reality.

Ariel Dascal, Head of Digital Innovation, Oi

PTVIF – How would you describe the state of the Brazilian pay-TV market today?

Dascal – There is a clear generational divide in terms of how people consume TV and video content. Under 35s have very distinct viewing habits: they are very technologically savvy, they prefer streaming videos – either on subscription OTT services, YouTube or pirate sites – and consume a lot of content on mobile devices. Selling pay TV packages to them is difficult. They do not see much value in packaging, they want freedom to watch content whenever and wherever they desire. And then we have the older generation who consume TV in the traditional linear way and who are used to buying traditional pay TV services and triple-play bundles.

Although pay TV service providers need to respond to this new market reality, the pay TV industry still has huge growth potential in Brazil. There is a large untapped market, with less than half of the households subscribing to pay TV. Even among the high income households, where penetration is just over 80 percent, there is still a significant base of potential users that pay TV companies could go after.

However, there are three key barriers to further expansion. First is the price of pay TV. Most households that do not subscribe to pay TV services simply cannot afford to at the current price levels. Second, subscribing to pay TV used to be a status symbol, but with the economic crisis many subscribers are dropping their pay TV subscriptions and keeping only their broadband subscriptions. Third, some consumers are leapfrogging pay TV and going from free-to-air TV to non-linear OTT services.

PTVIF – What are the innovation priorities for pay TV companies in Brazil? 

Dascal – The number one priority is the digitization of the pay TV experience in terms of delivering a better end-to-end experience to our customers and reducing our costs of operation and customer acquisition. We need to bring our services into the 21st century. As consumers are comparing pay TV services to Netflix, pay TV service providers need to deliver an interactive digital user experience across all consumer devices.

The second priority is acquiring great content, particularly for various on-demand and streaming propositions. Pay TV service providers face a major challenge in relation to the content industry, which is slow to respond to changing market realities and still follows the traditional approach of managing release windows and selling packages of channels. The content industry is highly susceptible to disruption driven by large Internet businesses, such as Apple and Google, which will allow consumers to get whatever content they want whenever and wherever they want it.

The industry also needs to look for opportunities beyond pay TV and OTT services in areas such as e-commerce, advertising, innovative pricing, new types of content, second-screen applications, mobile-first solutions and home automation and security solutions.

Iris Wee, CMO, Link Net-First Media

PTVIF – What do you think makes the Indonesian pay TV market different?

Wee – The Indonesian pay TV industry has faced a unique set of challenges and opportunities. Historically, pay TV penetration has been low due to a high level of piracy and a very vibrant and competitive free-to-air TV market that offers high-quality local content, providing little incentive for people to switch to pay TV. The pay TV market has been dominated by satellite operators that have primarily pursued aggressive pricing strategies, with little differentiation or innovation.

PTVIF – How would you describe the key developments in the Indonesian pay TV market?

Wee – I think the market is changing. First of all, the traditional DTH satellite providers have realized the limitations of their business model and are now increasingly trying to bundle their services with fixed broadband or 4G mobile data services, usually through partnerships with telcos. In addition, they are trying to move beyond pure price competition and are looking for ways to differentiate their services. However, without being able to support two-way communication, DTH satellite operators are at a big disadvantage. Hybrid set-top boxes might seem like a reasonable next step for them, but this would require significant capital expenditure and a long-term view of the business, which are not supported by the low-ARPU, high-churn nature of the DTH satellite pay TV business.

Secondly, there have been a number of new fiber providers entering the market recently, with pay TV and video playing a significant role in their market penetration strategies. Some of them offer pay TV services as part of their bundle, while others have partnered with OTT players to offer on-demand entertainment bundles.

Finally, the market has seen a number of OTT service launches. It is yet to be seen whether these services are going to be a substantial threat to the traditional pay TV model, but they have definitely been very innovative. OTT service providers recognized that a one-size-fits-all model would not work in the Asian market and adapted their propositions in terms of pricing and content. They have implemented a myriad of content localization techniques, such as subtitling and dubbing, and are actively looking to acquire and produce local content.

PTVIF – Looking ahead, what will be the most exciting areas of opportunity for pay-TV service providers?

Wee – Telcos will drive innovation in pay TV over the coming years, with broadband being key to pay TV market penetration strategies. They will not limit themselves to offering pay TV as a set-top box-based home entertainment service. Their offerings will be agnostic of consumer premises equipment and will include OTT products targeting the on-the-go digital consumer. It is only a matter of time before we will see the proliferation of digital media players, such as Chromecast and Apple TV, and these guys will be ready for that.

For mobile telcos, OTT services will be key to monetizing their mobile data services. We are already seeing a number of telco and OTT partnerships in the market, and these will be ever more important. However, the penetration of these services will heavily depend on pricing and packaging strategies.

Also, if you compare mobile networks in Europe and those in emerging Asian countries, you quickly realize that our networks cannot support a great on-the-go video experience. Some OTT and TV Everywhere services already have download-to-go functionality, and anyone trying to build a successful OTT service will need to support it.

Meanwhile, DTH satellite operators are changing their strategies and moving away from competing solely on price. We are seeing rationalization, investment in new set-top boxes with differentiating functionalities, and more focus on premium customers.

Emily Wee, VP Business and Media Operations, New Media, Telekom Malaysia

PTVIF – Where does innovation rank among the Malaysian pay-TV industry’s top priorities?

Wee – Innovation is definitely one of the top priorities. Pay TV operators have to innovate to keep up with market trends and to protect and enhance their revenue streams. For us, as a challenger in the Malaysian pay TV industry that entered the pay TV business only five years ago, innovation is particularly important. We always need to look for an edge to convince customers to choose us rather than our competitors.

Innovation has become much more important over the last couple of years. The rate of change has accelerated and we are seeing many new players in the market, while consumers have a lot more choice and freedom. A growing number of different businesses are jumping onto the OTT bandwagon, with subscription fees of some OTT services as low as a tenth of the price of traditional pay TV packages. In addition, with technology companies, such as Google and Amazon, and TV manufacturers coming into the game, the urgency for the pay TV industry to innovate and keep ahead is growing.

PTVIF – Looking ahead, what will be the most exciting areas of commercial opportunity for pay-TV service providers?

Wee – There is a substantial opportunity to bring all entertainment together on a single platform. Partnerships with OTT content providers or game developers are where a lot of convergence is happening. The key task and challenge is to ensure that the whole experience fits nicely together.

Great user experience is the missing piece of the puzzle. How can pay TV service providers make it seamless? How can they build a search and recommendations engine that encompasses not only linear content, but also all the on-demand libraries, applications and OTT content? Smart TV manufacturers were the first to attempt that. They have tried to partner with as many content providers as possible in order to drive up the adoption rate of smart TVs. However, the experience has not lived up to expectations. It still feels a bit clunky, with users having to navigate between different standalone apps.

In the OTT space, TV Everywhere is a ‘must do’ for all operators. I think there are also interesting opportunities for pay TV companies to offer standalone OTT services that are differentiated from their core propositions and targeted at new customers outside their footprints. Sky has made it work quite well with Now TV in the UK. However, the jury is still out as to whether this would be applicable to the Malaysian pay TV market, given the differences between the two markets.

Outside the core pay TV and OTT propositions, Internet of Things and smart home solutions would be the first priority. This is particularly true for telcos, which are increasingly focused on owning the connected home. However, it is very early days for Internet of Things and smart home solutions in Malaysia. These solutions will develop much faster in other countries in the region that have higher incomes and higher broadband penetration.
