Applications Archive


Startup Makes Artificial Intelligence Far Easier to Use in M&E Operations

Rajeev Dutt, CEO, DimensionalMechanics, Inc.


DimensionalMechanics Says AI Expertise Is No Longer Essential to Leveraging the Technology

By Fred Dawson

June 22, 2017 – We might not yet have access to genies in bottles, but how about a means by which people with no experience in artificial intelligence can put AI to use in whatever ways are beneficial to their businesses?

This is the vision startup DimensionalMechanics, Inc. (DMI) has brought to fruition for a rapidly expanding base of media and entertainment industry customers who are getting a leg up on competitors by executing tasks that would otherwise be beyond their reach. One example of the M&E operations that stand to benefit from AI is the recently announced strategic alliance between DMI and GrayMeta, a provider of automated metadata collection, curation and search applications to M&E firms such as ABC and CBS.

AI, which not too long ago was dismissed as a futurist fantasy, is now widely used in robotics and myriad routine applications across the enterprise landscape, guiding computers through human-like deep learning that continually adapts task-specific processes to changing conditions. In M&E operations AI is fast becoming an essential component in a wide range of applications such as content recommendation, service personalization, addressable advertising, voice recognition, network diagnostics and much else.

For GrayMeta the ability to quickly implement AI-based solutions in the many scenarios involving use of metadata across the M&E market is a significant benefit to its business, says GrayMeta CEO Tom Szabo. “DimensionalMechanics will give us the ability to rapidly deploy highly customizable AI solutions to dramatically improve content discovery and recommendations for our customers,” Szabo notes.

One of GrayMeta’s recently announced customers is Levels Beyond, which, as previously reported, provides tools to broadcasters enabling instant access to archived video in live sports coverage and other aspects of production. The GrayMeta Platform creates searchable metadata which can identify faces, people, logos, speech, tags, descriptions, on-screen text and other elements in video streams.

GrayMeta founder and executive chairman Tim Stockhaus has also made known the firm’s use of AI in conjunction with integration of its technologies with Microsoft Azure to enable users to easily search and find content in all Microsoft Office products, including emails held on their networks. “As machine learning technologies rapidly evolve, new capabilities and avenues for productivity are being added every day,” Stockhaus notes.

The process of modeling AI architectures suited to the unique needs of each application category and specific solutions within those categories is, as DMI CEO Rajeev Dutt puts it, “closer to art than science.” Applying AI in highly specialized areas with unique modalities suited to meeting each company’s needs typically requires engagement of a team of machine-learning specialists who not only understand AI but have sufficient knowledge of the application category to ensure development of useful AI architectures.

Moreover, ongoing changes in conditions not accounted for in the initial AI modeling process may require sustained involvement of an expert team to ensure the AI mechanisms remain relevant to the tasks at hand. Experts with the right combination of machine-learning and application-specific skills are hard to come by, even if a company can afford to employ them.

“Our core purpose is to make AI more accessible, reducing the difficulties involved and making it easier for people to bring models up faster,” Dutt says. “The second part is about how you manage and modify those models over time.”

DMI customers can build and continually refine AI architectures without requiring the retention of machine-learning specialists, he notes. And for customers who do have access to such expertise, personnel can utilize DMI resources to achieve optimal results much faster, he adds.

As explained by Dutt, an AI architecture is the blueprint for a neural network, an interconnected network of intelligent processing layers, describing their connectivity and points of communication, the machine-learning rates within and across the layers and the overall capacity for learning. “To try and figure out how to build a neural network for specific problems is hard,” he says. “A machine-learning expert can do this, but it’s hard for others, including IT guys.”

But even for machine-learning experts, making AI useful in everyday operations is a major challenge. Not only is it hard to build architectures that are broadly applicable to all conditions; customizing architectures to be more flexible is hard as well.

For example, a highly accurate voice recognition system might be stymied by someone whose voice is outside the pitch ranges the system is designed to work with, or a system adept in identifying certain types of images might draw a blank on others. Ideally, the intrinsic capabilities of machine learning would make the necessary adjustments, but often that’s not the case. “Deep learning models are very vertical,” Dutt notes.

Further complicating matters is the fact that contracting with machine-learning pros to build and adjust AI architectures traditionally has required a willingness on the part of companies to expose data off premises. “A lot of companies are reluctant to use AI if it means compromising data privacy,” Dutt says.

And then there’s the question of payoff. While some AI applications are built solely for internal use, others have marketable potential that companies would like to exploit. But, notes Dutt, there hasn’t been a common marketplace for organizations to share or monetize their AI creations.

DMI has introduced a portfolio of solutions within its NeoPulse Framework that address all these issues, he says. First and foremost, when it comes to building AI models, DMI has demonstrated that IT personnel with no background in machine learning can do the job using DMI’s technology. “You need some coding experience to build sophisticated models,” he says, “but the bigger challenge is getting and curating the data you want to use with the model.”

DMI’s NeoPulse Modeling Language provides customers an efficient and easy-to-use means of automating the process of building AI models. By using AI as an “oracle” to build AI, DMI is continually enhancing customers’ capabilities as its proprietary technology acquires ever more knowhow.

“Oracle gets smarter over time, constantly learning from previous models,” Dutt says. “Very often it’s not easy even for us to decide the right architecture.” In the building process, the platform leverages what’s already been learned to figure out what looks to be the optimal architecture and then begins a testing and refinement process that quickly leads to the best possible starting point, he explains.

“We just built an application involving sentiment detection technology that only took 18 lines of code,” Dutt notes. “If we’d done that with TensorFlow [an AI software library] and Python [the language used with TensorFlow] it would have taken more than 600 lines.”
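DMI has not published the NeoPulse Modeling Language, so the line-count comparison can’t be reproduced literally. But the general idea, a terse declarative spec that an “oracle” expands into a fully parameterized architecture, can be sketched in plain Python. Everything below, from the layer names to the default values, is an illustrative assumption, not DMI’s actual syntax:

```python
# Hypothetical sketch: a tiny "spec compiler" that expands a few
# declarative lines into a full layer-by-layer architecture.
# Defaults stand in for choices an oracle could learn from prior models.

DEFAULTS = {
    "embed":  {"dim": 128},
    "lstm":   {"units": 64, "bidirectional": False},
    "dense":  {"activation": "relu", "dropout": 0.0},
    "output": {"activation": "softmax"},
}

def compile_spec(spec):
    """Expand a terse (layer_type, overrides) list into a full architecture."""
    architecture = []
    for layer_type, overrides in spec:
        params = dict(DEFAULTS[layer_type])  # start from default/learned choices
        params.update(overrides)             # keep the author's explicit choices
        architecture.append({"type": layer_type, **params})
    return architecture

# A sentiment-style model in four declarative lines:
sentiment_spec = [
    ("embed",  {"dim": 256}),
    ("lstm",   {}),
    ("dense",  {"dropout": 0.5}),
    ("output", {"classes": 2}),
]
model = compile_spec(sentiment_spec)
```

The point of the sketch is the ratio Dutt describes: the author writes a few lines of intent, and the oracle role, reduced here to filling in defaults, supplies everything else a low-level TensorFlow implementation would spell out by hand.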

DMI is preparing a new release of Oracle enabling more sophisticated analyses of video. “I can’t say much about this yet, but it will allow an IT team to address some big problems,” Dutt says.

DMI’s new product release, NeoPulse AI Studio, leverages architectures built with Oracle in six general categories of operations important to M&E companies and other entities. These include image recognition, object recognition, recommendations, identification of unacceptable content, detection of infrastructure anomalies associated with intrusion or other malicious activities and character recognition in conjunction with Japanese and other languages using ideographic symbols.

These application frameworks allow IT teams without AI experience to quickly create optimal AI architectures suited to their specific requirements, delivering human-like discernment at speeds, and often with accuracy, beyond traditional norms. As an example, Dutt cites a recent test of AI-enabled search for prohibited adult content, in which the system identified nudity in a grainy old film displayed on a TV screen in the background of a scene from a TV program. The same content had earlier drawn a fine from regulators after going undetected by the programmer’s monitoring team.

In another test, a DMI-developed AI application assigned the task of ranking the likely appeal of headlines for news articles made choices that closely aligned with choices made by producers. “Digital broadcasters want to generate more clicks by making content as interesting as possible to their audiences,” Dutt notes. “Having confidence they can do this automatically whether it’s with the choice of headlines, images or other elements will save them a lot of time.”

DMI’s solutions also address the previously mentioned data privacy concerns that inhibit wider utilization of AI. By enabling creation and ongoing refinement of AI architectures by in-house staff, AI Studio allows customers to keep high-value data on premises and to ensure they are using the most current data to retrain their models, Dutt says.

At the same time, he adds, the company offers a cloud-based licensing model for companies that can’t afford to build models in house. “For example, you can port your voice recognition to run as an AI application on our platform,” Dutt explains. “If someone builds an app that uses voice recognition, they can call on your model and the voice recognition comes back.”

“If your app does really well, millions of users are calling your AI, and we earn royalties on the run times,” he adds. This license-free, royalty-based monetization model also applies to AI applications built in house by customers using AI Studio.

The opportunity to monetize AI-enabled applications in an environment where there’s been no common way for entities to do that is another benefit touted by DMI. DMI’s NeoPulse AI Store operates like a traditional app store enabling organizations to distribute, license and monetize their AI models.

Whether apps are developed on customers’ premises or on the DMI cloud, they can be pushed to the cloud-based AI Store for broader consumption, Dutt says. AI Studio scales exponentially as successful AI models in the store are licensed and used by additional users and organizations, he adds.

While M&E is a great proving ground for building momentum in the AI marketplace, DMI has its sights on many other fields as well. “We’re just at the point where we’ve released our primary product and are heading into the next realm of funding,” Dutt says, noting DMI has raised $6.7 million so far. “We now have 22 pre-orders from Fortune 500 companies.”


OTT Video Biz Gains Support For Smoother App Performance

Marty Roberts, CEO, Wicket Labs


Startup Automates Tracking of APIs that Activate Functions Crucial to Viewer Experience

By Fred Dawson

October 31, 2016 – It looks like the legion of premium video providers putting ever more eggs in the OTT basket may have one less thing to worry about when it comes to malfunctions that can disrupt consumer experience at any moment of engagement in today’s app-dominated entertainment arena.

This particular gotcha has to do with a lack of attention to how all the cloud-based functions that go into supporting any given app are responding to the API (application programming interface) calls that are triggered with each user’s engagement with that app. While the market is awash in application performance management (APM) solutions, there’s still a gap in the ability of many of those systems to ensure app consistency.

Marty Roberts, former co-CEO of thePlatform and now CEO of startup Wicket Labs, says his new company intends to fix this “blind spot in the cloud.” “We’ve been working with over a dozen companies in trials, including cable networks, broadcasters, MCNs (multichannel networks),” Roberts says. “Anyone building business around media is a good candidate for our solution.”

The need for a solution that can automatically keep tabs on how all the APIs associated with any given app are performing became apparent while Roberts was at thePlatform, the Comcast-owned online video publisher that provides OTT support services to many major broadcasters and other entities. “At thePlatform we were doing about 18 billion API calls per month,” he says.
“Often we’d have operational problems with APIs that didn’t show up until consumers started complaining.”

Sometimes, he says, it would take 20 to 30 minutes to figure out which vendor providing support for a given app had a problem. And then it took more time for the vendor to figure out what the problem was.

thePlatform was staffed to deal with such issues on as timely a basis as possible, but individual media companies relying on internally managed apps typically aren’t equipped to set up and run monitoring of all the APIs that connect cloud-based vendor software platforms to their apps. “We started our company six months ago with the mission to lower operational downtime of client sites,” Roberts says.

In a nutshell, Wicket Labs creates a digital map or “Wicket” for each API which automatically identifies API changes and client-impacting errors or outages through a process that mimics inquiries into the API on a predetermined frequency. Wicket Labs’ clients gain dashboard access to a customized Wicket Scorecard that continually tracks performance on all the API connections that contribute to their app’s functionality.
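Wicket Labs hasn’t published how a Wicket works internally, but the mechanism described above, a baseline snapshot of an API that scheduled probes are diffed against, can be sketched in a few lines of Python. The field names and the choice of response schema plus status code as the baseline are assumptions for illustration:

```python
# Hedged sketch of the "Wicket" idea, not Wicket Labs' implementation:
# store a baseline of an API's observable behavior, then diff each
# scheduled probe against it to flag changes and client-impacting errors.

import json

def snapshot(status_code, body):
    """Reduce one API response to the features worth baselining."""
    fields = sorted(json.loads(body).keys())   # the response "schema"
    return {"status": status_code, "fields": fields}

def probe(wicket, status_code, body):
    """Compare a fresh response against the Wicket's stored baseline."""
    current = snapshot(status_code, body)
    findings = []
    if current["status"] != wicket["baseline"]["status"]:
        findings.append("status changed")
    if current["fields"] != wicket["baseline"]["fields"]:
        findings.append("schema changed")
    return findings or ["ok"]

wicket = {
    "name": "video-metadata",            # hypothetical API under watch
    "baseline": snapshot(200, '{"title": "x", "duration": 120}'),
}
# Same shape -> no findings; a renamed field -> schema change flagged.
ok = probe(wicket, 200, '{"title": "y", "duration": 95}')
changed = probe(wicket, 200, '{"name": "y", "duration": 95}')
```

In a production monitor the probe would run on the predetermined frequency the article mentions and the baseline would cover far more than two features, but the diff-against-baseline loop is the core of the approach.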

Given the vulnerabilities incurred with reliance on complex apps that require support from multiple cloud-based elements, it’s surprising to learn how unequipped many providers of premium video are to deal with potential issues. But it is a problem not just in entertainment, but across business operations of every description that depend on cloud-based apps, notes Julie Craig, a research director at the consulting firm Enterprise Management Associates.

Describing results of EMA research in a recent blog, Craig says two thirds of surveyed IT professionals either don’t track API-triggered transactions or do so through the cumbersome process of monitoring API Gateways. Just 32 percent of respondents said they rely on APM solutions to manage the API transactions.

“In essence, the Gateway has become another monitoring silo, which IT organizations are utilizing in standalone mode to track transaction performance and availability,” she says. “So at a time when software is becoming increasingly business-relevant, IT teams are, in too many cases, retreating to the silo monitoring techniques of the past to track and troubleshoot application performance.”

Such manually intensive approaches are impractical for media companies dealing with millions of API calls daily, which means they typically fall into the category of firms that rely on other tools measuring subscribers’ experiences with apps to learn when something is going wrong. As a result, they have no way of identifying causes related to the malfunction of a specific API transaction.

The risks of relying on smooth performance of multiple vendor contributions to a given app are magnified by the number of touchpoints engaged by the media app during the viewing experience. “If you look at a media company’s website, it might have 14 different vendors driving some part of that user experience,” Roberts notes. They can be pulling and using data related to advertising, content management systems, video management systems, presenting thumbnails and content descriptions and much else. “All of those are API calls that go into that app,” he says.

While, theoretically, media companies could put together a means of using data from engineering and other sources to figure out what’s going on across their app APIs, Wicket Labs believes boiling everything down to readily accessible, constantly updated data that gets to the essence of API performance will prove to be the better way to go for most companies. “With so many APIs updating all the time, you need to automate the monitoring and analysis process,” Roberts says.

The Wicket Labs team does the heavy lifting of building the Wickets for each API relevant to a client’s app or apps. “We can look at any of these APIs, fill out the information about it, modify the URL and put it into the roster,” Roberts says.

With so much duplication in vendor resources used for different apps across the ecosystem there’s a cumulative benefit to all who rely on those vendors’ APIs as new Wickets are added. “The more Wickets we build, the better it is for our customers,” he says.

The Wicket Labs platform knows how to call into vendors’ APIs and identify error codes, analyze the performance on those API calls as measured against expected norms and determine how quickly and effectively remedial action impacts performance. “If the vendor is sophisticated enough to have a user testbed or sandbox environment, we can be that user acceptance test for our customers,” Roberts notes.

The Wicket Scorecard uses an intuitive Web and mobile user interface to alert business owners to developments that define how any given Wicket is categorized at any given moment. In the case of “Problem Wickets,” the system presents details about issues and changes impacting the consumer experience, such as outages stemming from timeouts or HTTP server errors, “brown outs” caused by intermittent data errors or slow response times and glitches from unacceptable responses in a data field.

There are also alerts pertaining to “Notable Wickets” which identify and explain changes in an API that are not expected to have a meaningful impact on consumer experience, such as an API schema change or a relatively imperceptible slowdown in performance. And there’s a “Good Wickets” category where customers can view the performance history of a currently smooth functioning API, including the number of times the API has entered a problem state or a notable change has occurred.
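Wicket Labs’ actual scoring rules aren’t public, so as a rough illustration only, here is how check results might map onto those three categories. The thresholds and field names are invented for the sketch:

```python
# Hypothetical mapping of one simulated API check onto a
# Scorecard-style category; thresholds are illustrative assumptions.

def categorize(status_code, response_ms, schema_changed,
               problem_slow_ms=2000, notable_slow_ms=500):
    """Classify a single check as Problem, Notable, or Good."""
    if status_code is None:                 # timeout: no response at all
        return "Problem"
    if status_code >= 500:                  # HTTP server errors
        return "Problem"
    if response_ms >= problem_slow_ms:      # "brown out" level slowdown
        return "Problem"
    if schema_changed:                      # schema change, likely harmless
        return "Notable"
    if response_ms >= notable_slow_ms:      # real but imperceptible slowdown
        return "Notable"
    return "Good"

checks = [
    (200, 120, False),   # healthy
    (503, 80,  False),   # server error
    (None, 0,  False),   # timeout
    (200, 150, True),    # schema change only
    (200, 800, False),   # mild slowdown
]
results = [categorize(*c) for c in checks]
```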

The company has found vendors to be ready and willing to share data essential to keeping tabs on how the APIs are working. “One of the things we looked at early in the process was whether vendors would be okay presenting data back to us,” Roberts says. “But the vendors we’re working with have been great.”

They, like other stakeholders including Wicket Labs partners as well as customers, recognize they need to know why the user experience is broken. “If you receive a phone call asking about what’s happening, it’s better to have more clean data to reference for answers,” he says.

Wicket Labs offers potential customers free access to three Wickets and then charges monthly at various tiers depending on the volume of Wickets included in the customer’s Wicket Scorecard. While the company is focused on building its business in the media and entertainment domain, Roberts says it recognizes there are many other industry sectors that could benefit from a system that tracks API performance with their apps.


New Approach to Developing UIs Fuels OTT Efforts to Win Viewers

Trisha Cooke, VP marketing, You.i TV


You.i TV Scores with Single Codebase Engine for Quickly Mounting Compelling Cross-Platform Experiences

By Fred Dawson

September 16, 2016 – Canadian software innovator You.i TV is making waves in the UI space with a highly flexible development platform that Turner Broadcasting, Sony Crackle, Canadian SVOD operator shomi and a growing number of other players are using to expedite their efforts to raise the bar on user experience.

Turner, for example, has begun to standardize application development on the You.i platform, following successful utilization with apps for TNT, TBS and Turner Classic Movies’ forthcoming direct-to-consumer service FilmStruck. Adding to the momentum, Turner parent Time Warner, Inc.’s investment arm just announced it is leading a $12-million Series B funding round for You.i.

“Delivering video directly to consumers is becoming vital to the media industry, and offering a compelling user interface and app experience is an important piece of this value chain,” says Scott Levine, managing director of Time Warner Investments, who will be joining the You.i TV board. “We were immediately impressed with the You.i TV products, seeing how they create high-quality, unique branded experiences across multiple device platforms, while powering higher engagement rates with users.”

Unlike suppliers of UI solutions that offer MVPDs and other entities fully baked templates complete with recommendation engines and other navigational bells and whistles, You.i gives its customers a toolset that facilitates rapid translation of designers’ visions into practical implementations without the coding hassles that prolong development and prevent full realization of the intended user experience. “We’re enabling the marriage between thinking about what needs to happen in targeting consumers with compelling interfaces and what that interface turns out to be,” says Trisha Cooke, vice president of marketing at You.i.

In so doing, You.i is fueling intensifying competition to win consumers through highly differentiated gateways that convey what’s special about the provider’s offerings in a market awash with me-too viewing options. “You.i Engine brings motion designs to life with pixel-perfect clarity and performance that I’ve never seen before,” says Ann Tebo, director of product management at shomi, the OTT venture backed by Canadian MSOs Rogers Communications and Shaw Communications.

shomi has been using the platform to build an immersive multiscreen experience across iOS, Android, Xbox and PlayStation devices. The range of device platforms developers can reach through the You.i Engine also includes Apple’s tvOS, Amazon Fire, Roku, smart TVs, RDK set-top boxes and more, Cooke notes.

As explained by Cooke, the You.i Engine is an app development platform built on the principles of video game engines, which use design-centric, cross-platform code in conjunction with GPUs (graphics processing units) to expedite development of on-screen presentations. Users of the You.i Engine are able to directly export designs created in Adobe Photoshop and animated in Adobe After Effects into a single codebase that conforms the fully realized UI to every device platform in accord with the requirements of each, Cooke says.

“Production people get an already-coded app to work with,” she notes. “This represents a break with the norm where you see designers handing off their work to the tech guys at a brand, who come back and tell the designer, ‘You need to adjust and compromise so we can work with this’ – to the point that you end up with a fraction of what the design called for.”

Freedom from constraints that have prevented realization of the full potential of innovative designs has been a boon to the cross-platform video aspirations of the Canadian Football League, says Christina Litz, the CFL’s senior vice president of content and marketing. “When we were building CFL Mobile, You.i Engine was the only option that allowed us to realize our vision for the new application without having to compromise on any detail for the fans,” Litz says.

The project took about three months, Cooke notes. “They’re a small league that has been able to do what much bigger entities are doing, which is to use technology to define themselves in the online market.”

You.i customers are finding that once they’ve integrated with the You.i codebase to enable cross-platform rendering of a new UI for one of their brands, they can re-use the codebase for UI development on other brands with minimal recoding. A case in point, Cooke says, is the Canadian content aggregator Corus Entertainment, which leveraged the You.i Engine to create building blocks for their TV Everywhere app, including front-end design, interactions, business logic and back-end integrations, to deliver highly diverse experiences on four core brands across four device platforms. “They were able to launch three brands within six weeks of the first launch,” she notes.

Following its successes with Canadian entertainment outlets, You.i’s engagement with Turner marks an expanding involvement with U.S. entities, including network service providers. “Most of our work now is in the U.S.,” Cooke says, noting conversations there have gone from “academic exercises” to commercial dialogs leading to RFPs.

“Service providers are interested in pursuing the OVP (online video publishing) model,” she adds. “That’s where our customers are going.”

You.i has developed two additional avenues for engaging customers beyond its original approach of directly assisting them with the integrations and other aspects of bringing the You.i Engine into their workflows. Now it offers the You.i Engine as a product that can be implemented in-house by customers’ DIY teams, and it works through channel partners like EPAM and Valtech that have integrated the software into their solutions. Valtech, for example, is the channel partner You.i is working with in the CFL engagement.

“Our channel partners are helping us to expand our reach globally,” Cooke says. “There’s no point of the market we can’t serve.”


Connected Cars Are Catching On With Big Implications for MVPDs

Alan Messer, CTO, global connected customer experience, General Motors


GM, AT&T Lead the Way, but There’s Plenty of Room for Cable Operators

By Fred Dawson

August 5, 2016 – Like everything else in the IP world the business models and opportunities surrounding the connected-car phenomenon are changing at warp speed, signaling that network service providers who may have kicked the Internet-on-wheels tires and passed two or three years ago should think again.

One sign of the fast-changing times is the position occupied by Alan Messer as CTO of global connected customer experience at General Motors. To Messer’s knowledge he is the first to hold such a title in the automotive industry but probably won’t be the only one for long.

“We’re going to see more change in the next five years than we’ve seen in the past 50 in the auto industry,” Messer says, citing use of car connectivity as one of the big four transformative trends underway, which also include the emergence of electric-powered vehicles, autonomous cars and shared use of cars analogous to bike sharing systems now operating in hundreds of cities worldwide.

While tech giants Microsoft, Google and Apple have been evolving various connected-car strategies for several years, AT&T, by far the leading player among network service providers, has established a dominant position through its Drive service. In fact, AT&T’s success raises the question of whether there are meaningful opportunities left for other NSPs, especially cable operators who lack the LTE network support that AT&T has leveraged to gain partnerships with 19 car brands worldwide.

“Absolutely, there’s an opportunity for the local MSOs and MVPDs in general, big time,” says analyst Allan McLennan, who heads up the PADEM Group. “As we advance with network demands, the ability to create new customers and models for connection with over-the-top services of media and entertainment is a natural extension for MVPDs.”

While, according to Parks Associates, as of the start of 2016 only 16 percent of the light vehicles on the road in the U.S. had built-in mobile connectivity to the Internet, a recent study conducted by AT&T and Ericsson found that three out of four consumers consider connected-car services to be an important feature in their next car purchase. Most car manufacturers have at least some models with mobile connections to in-car networks that support a variety of applications.

GM OnStar, the oldest such offering from a car manufacturer, leverages built-in connectivity to support a variety of monthly subscription packages on top of a free basic plan that includes connectivity with data usage surcharges. OnStar premium packages, priced at $20 and up, include offerings such as Protection, covering crash response, roadside assistance and online advisors; Security, with stolen vehicle tracking assistance, ignition block or slowdown of the stolen vehicle and theft alarm notification; and Guidance, with “turn-by-turn navigation” travel assistance like hotel booking and “hands-free calling minutes.”

Chevrolet, the first vehicle brand to offer LTE connectivity on all models, reports a high volume of usage since it introduced the connected service in 2014. “Wireless connectivity has proven to be a beneficial technology for many Chevrolet customers, from contractors who use their Silverado as a mobile office to families using their Suburban on a summer road trip,” says Sandor Piszar, Chevrolet truck marketing director. “As our customers increase their usage of the technology, we are able to make it more affordable for them.”

With 2.1 million connected vehicles purchased so far, Chevrolet customers have consumed more than 3 million gigabytes of data, and Chevy in-vehicle data usage continues to trend upward, Piszar says. For example, more than 60 percent of Suburban owners and passengers use their OnStar 4G LTE Wi-Fi hotspot, with Tahoe and Traverse hotspot usage not far behind. Chevy has cut its data plan rate for all models in half this summer to $10 for 1 gigabyte per month and has added a 4-Gbyte offer priced at $20.

AT&T’s Drive is a value-added service component to the underlying LTE connectivity it provides to Chevy and other brands. Auto makers can choose the services and capabilities that are important to them to complement in-house offerings like those in the OnStar portfolio or Ford’s FordLink service, most of which so far have been directly related to car operations.

As of Q1 2016 about eight million cars were embedded with connectivity to AT&T’s network, including more than 50 percent of all new connected passenger vehicles in the U.S., according to Chris Penrose, senior vice president for Internet of Things (IoT) solutions at AT&T. “It’s incredible to think back to ten years ago when we first started talking with automakers about connecting their cars,” Penrose says. “The interest we are seeing from carmakers and consumers around the world says this revolution is here to stay.”

Beyond providing connectivity and applications of interest to the car companies, it’s clear AT&T’s larger goal is to drive its position in the IoT marketplace. With its launch of a global SIM platform for cars, AT&T has created an environment designed to draw individuals and IoT equipment makers to its network to foster expansion of an IoT ecosystem tied to its brand.

Consumers with Drive service can remotely interact with their AT&T Digital Life smart home service, synching automation of actions such as setting house temperatures, locking and unlocking doors, turning lights on and off, running connected appliances, etc. with the use and location of the car. To help drive development of car-related applications in that ecosystem the company operates the AT&T Drive Studio in Atlanta, which serves as a working lab and showroom that automakers and third parties can use to build and exhibit innovations.

AT&T isn’t limiting its expansion of IoT into the vehicle environment to owners of connected cars. The company also offers a Wi-Fi in-car plug-in device supplied by ZTE called Mobley, which its mobile customers who don’t have connected cars can use to distribute data accessed from their phones to screens and other devices in the car. AT&T unlimited data plan customers, whether or not they have connected cars, can add the ZTE Mobley device to the plan for $40 per month.

Verizon, which has very limited penetration as a supplier to connected-car manufacturers, has also been offering mobile customers an in-vehicle Wi-Fi hotspot service, which it calls Hum. Both AT&T and Verizon are making these hotspot modules available to owners of cars built in 1996 or later, which come with government-mandated On-Board Diagnostics (OBD) ports. OBD provides open access to a vehicle’s diagnostic and performance information, allowing third parties to layer value-added services onto the built-in vehicle platform using smart phones as the front-end interfaces.

But, so far, Verizon’s connected-vehicle services play has been primarily focused on the enterprise fleet management and telematics market. The company recently acquired Telogis, a leader in the vehicular telematics market, to bolster its position in this space.

The upshot of all these developments is that a space has been opened for cable operators to begin leveraging their role in entertainment and information services in ways that suit other needs of connected-car owners that haven’t been the focus of the mobile providers. IoT apps, too, could be part of their play, depending on how deeply they get into that side of the business. Here the role, until such time as they become mobile operators themselves, would be as OTT providers leveraging the underlying connectivity provided by mobile carriers.

“The connected car to me is just another connected exhibition arena for media and entertainment,” says PADEM Group’s McLennan. Indeed, in-car video entertainment is now becoming a practical option with screens positioned for passenger viewing in many new models, creating an environment where cable operators will want to engage with subscribers rather than leaving them to rely solely on OTT video providers.

Cable companies’ local market presence is especially advantageous for creating business models that tie in with car dealers’ needs to generate value-added revenue in the increasingly tight-margined car sales business, McLennan notes. “The dealer revenue model is primarily built off of services – service department, financing, parts, etc., more so than the actual sale of a new car,” he says. “This lends itself to an entirely new perspective and potentially lucrative business model for both the MVPD and the dealer.”

GM’s Messer agrees, noting “the living room on wheels potential is huge.” In fact, he adds, it could be explosive if and when autonomous vehicles take hold.

Beyond simply providing car access to existing content, there’s also an opportunity to build content directly related to the driving experience, such as location-based travelogues that can be curated from the cloud to fit the specific itinerary of a vacationing family. Location-based content with advertising support is something GM is looking at. “We want to enable those services,” Messer says.

Car-specific subscription channels could also be part of the MVPD strategy with a revenue-share for car dealers, McLennan suggests. “A media offering potentially with exclusive packages (front seat/back seat) has strong potential, especially with an already established [dealer] sales channel,” he says.


Vendors Give Broadcasters Tools Enabling OTT Quality Assurance

Kurt Michel, senior director of marketing, IneoQuest

IneoQuest, Tektronix and Edgeware Mitigate Liabilities for Companies that Don’t Own Networks

By Fred Dawson

May 5, 2016 – OTT providers of high-value video content at last have access to quality-assurance solutions suited to assessing whether they’re achieving the end-to-end performance that’s essential to forging ahead with next-generation direct-to-consumer agendas.

Until now, it’s been hard to execute business models predicated on monetizing online delivery of HD- and UHD-caliber content with confidence that goals respecting user experience and fulfillment of advertising commitments are being met. Without that confidence, it’s hard to make a case to consumers and advertisers that online alternatives to traditional pay TV don’t represent a compromise on quality and value.

One vendor that has gone to great lengths to cover all the bases in enabling distributors who don’t own networks to identify and proactively address issues that could damage their value propositions is IneoQuest. Judging from demonstrations of its new FoQus platform solutions at the recent NAB Show in Las Vegas, it appears the company has delivered on commitments to solving the conundrum of how to deliver OTT services with a managed-network level of quality assurance.

NAB also brought to light advancements in this direction on the part of other vendors that have been focused on delivering actionable performance metrics in the direct-to-consumer (DTC) space. Especially noteworthy were new solutions introduced by Tektronix and the feedback from Edgeware regarding the receptivity its previously introduced performance measurement platform is getting from content providers.

Edgeware TV Analytics

Edgeware’s monitoring and analytics product suite, introduced in September as part of the company’s positioning of its solutions for the broadcaster side of the TV ecosystem, has proved to be a point of primary interest to potential customers in this segment, says Johan Bolin, vice president of products at Edgeware. “When we get into meetings with these media companies, the first thing they want to talk about is analytics,” Bolin says. “If they like what they see, it opens discussion about our other solutions.”

The starting point for Edgeware’s TV Analytics platform is the ability to aggregate raw adaptive bit rate (ABR) “chunks” into virtual sessions for meaningful analysis that can be performed on streams at points of origin, CDN edge locations and end devices to gauge bitrates and other indicators of quality on a per-session basis. “If you see the bitrate is way down from the optimum you know you have something happening in the network that’s degrading performance,” Bolin says. “We look at the time stamps on client requests to the server to see how buffering is being managed. If the gaps in requests are unnaturally long, that’s another signal that there’s a problem.”
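The two heuristics Bolin describes — a session bitrate well below the optimum, and unnaturally long gaps between segment requests — can be sketched in a few lines. This is an illustrative reconstruction, not Edgeware’s actual schema or logic; the field names, the 2x-segment-duration gap threshold and the 50-percent-of-target degradation cutoff are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical segment-request log entries; field names are
# illustrative, not Edgeware's actual data model.
@dataclass
class SegmentRequest:
    session_id: str
    timestamp: float   # seconds since session start
    bitrate_kbps: int  # bitrate of the ABR rendition requested

def analyze_session(requests, segment_duration=4.0, target_kbps=5000):
    """Aggregate raw ABR chunk requests into a per-session verdict.

    (1) A sustained average bitrate well below the optimum suggests
    network degradation; (2) gaps between segment requests much longer
    than the segment duration suggest the player is stalling.
    """
    reqs = sorted(requests, key=lambda r: r.timestamp)
    avg_kbps = sum(r.bitrate_kbps for r in reqs) / len(reqs)
    # In steady state a player requests roughly one segment per
    # segment_duration; a gap over 2x that (an assumed threshold)
    # signals rebuffering.
    stall_gaps = [
        b.timestamp - a.timestamp
        for a, b in zip(reqs, reqs[1:])
        if b.timestamp - a.timestamp > 2 * segment_duration
    ]
    return {
        "avg_bitrate_kbps": avg_kbps,
        "degraded": avg_kbps < 0.5 * target_kbps,
        "suspected_stalls": len(stall_gaps),
    }
```

A session requesting 2,000 kbps renditions against a 5,000 kbps target, with one 12-second silence between 4-second segments, would come back flagged as degraded with one suspected stall.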

Without owning the access network, it’s hard for broadcasters to pinpoint the causes of the problems, but at least the Edgeware TV Analytics platform can let them know there’s an issue. Moreover, the platform makes it possible to tie viewer behavior to network behavior in real time and over long durations to provide extremely granular information on what content is being watched, at what frequency, from which location, on what device type and whether it’s live or VOD, Bolin notes. By correlating data from multiple sources, the platform can look into the relationship between network performance and customer satisfaction, viewing behavior across geographies or screens, viewer mobility, screen swapping and other dynamics, he says.

Edgeware customers, tapping a pool of pre-designed widgets for specific analytic applications or working with Edgeware to design new widgets, can create their own interfaces to collect and analyze information for making actionable decisions, Bolin adds. They can filter the information by date, distribution, format, devices, ISP, etc. and present the data as a variety of charts, tables or geographical maps.

The range of possibilities depends on the volume of data available, which depends on what can be pulled from different types of end devices and the degree to which third-party CDN suppliers are willing to share data. Edgeware is developing open APIs that will make it relatively easy to integrate such feeds into the platform, Bolin says.

Advertising, too, is an important target for Edgeware TV Analytics. “We can see what ads played out, whether a session was broken, whether what we’re seeing matches the volume of ad renderings scheduled by the ad decision server,” he says. “We can map all ads geographically to see the frequency of ad playouts in different areas.”

New IneoQuest Solutions

Taking broadcasters even further into the process of identifying and rectifying impediments to the quality of consumer experience, the new IneoQuest FoQus platform essentially eliminates the disadvantages of being a virtual MVPD compared to a network-based MVPD, says Kurt Michel, senior director of marketing at IneoQuest. By providing visibility and advanced analytics intelligence across the entire video distribution value chain, FoQus allows any OTT provider to ensure the delivery of a reliable, consistent viewing experience to consumers, Michel notes.

“We’ve re-invented everything we’ve done in the traditional network service provider space to bring those capabilities into the open Internet environment,” Michel says. “We’ve created a portfolio of products that gives you that same level of end-to-end access to information even though you don’t own the network.”

That’s a tall order, but judging from the demonstration of the FoQus platform in action on a live video feed at NAB, IneoQuest has met the challenge. In the demo, the system analyzed the availability of the asset, its quality in the pre- and post-encoder phases, what the quality was coming out of the CDN and what the status was at the viewing point, all of which pointed to the convention center’s Wi-Fi system as the cause for a low bitrate. The platform also examined how many people were viewing and determined what the quality was for viewing on phones and other devices.

All the information the administrator sought was instantly presented on the UI. Rather than delivering just the raw quality metrics based on PSNR (Peak Signal-to-Noise Ratio) or MSE (Mean Squared Error), the system employed IneoQuest’s iQ-MOS real-time scoring technique, an on-the-fly execution of the Mean Opinion Score method of grading video. MOS grading relies on algorithms that reflect the actual responses of the human visual system as prescribed in the ITU-R BT.500 video assessment recommendations.
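For orientation on what “raw quality metrics” means here: PSNR is a purely numerical distance derived from MSE, with no model of human perception behind it, which is the gap MOS-style scoring fills. A minimal sketch of the standard PSNR-from-MSE computation for 8-bit video, for illustration only:

```python
import math

def mse(ref, test):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)

def psnr(ref, test, max_val=255):
    """Peak Signal-to-Noise Ratio in dB for 8-bit pixels (max_val = 255).

    PSNR = 10 * log10(MAX^2 / MSE). Higher is better; identical frames
    give infinite PSNR. Unlike MOS, this is a pixel-level distance and
    says nothing about how visible the errors are to a human viewer.
    """
    err = mse(ref, test)
    if err == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / err)
```

Two identical frames score infinity; a frame inverted from black to white scores 0 dB, even though neither number alone predicts how a viewer would rate the picture.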

These assessments were performed by the FoQus platform’s modular iQ Engines, which collect, correlate and process data to provide the comprehensive big-picture views that video providers require to manage their business. Different iQ Engines are offered based on the area of analysis they cover – audience analytics or operational analytics – so that customers can meet their current needs and obtain new modules as needed later, Michel notes.

IneoQuest is also offering a subscription-based, cloud-hosted FoQus|Event service to address streaming events over the Internet, he adds. This service, which leverages Amazon EC2 cloud infrastructure to dynamically position FoQus platform elements as needed, can be used both to test the video distribution system prior to the actual event and to monitor the performance, quality and availability when the live event streaming occurs.

The data processing performed by the iQ Engines utilizes data drawn from the network by the FoQus acquisition elements, which are offered as modules dedicated to specific segments of the network. These include Inspector, which measures the quality of content preparation before it enters the network and is well-suited to headends, origin services infrastructure and video testing labs; Surveyor, which measures network performance and content availability across the Internet, CDNs and fixed and mobile access networks; and Spectator, which measures playback quality and the viewer’s response to it with metrics that include the selected content, session time, network type and provider, and key quality metrics such as startup time, bitrate and re-buffering.

Along with allowing non-network owners to scrutinize quality performance at all points in the distribution chain, FoQus allows distributors to determine what the optimum bitrate settings are for any given locality for any type of device, which is to say the minimum bitrates required to hit a given MOS target. For example, the distributor can assess what the bitrate needs to be to hit a MOS score – typically between 3 and 4 – that signals good quality has been achieved on a big screen display. Then bitrates can be set for smaller screens where similar MOS scores can be achieved at lower throughput owing to the lower resolution of those screens.
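The per-screen bitrate optimization described above reduces to a simple selection rule once MOS measurements are in hand: for each device class, take the lowest ladder rung whose measured score clears the target. A sketch under stated assumptions — the function name and the MOS numbers below are invented for illustration, not IneoQuest data:

```python
def min_bitrate_for_mos(measurements, target_mos=3.5):
    """Pick the lowest bitrate that meets a MOS target.

    `measurements` maps bitrate (kbps) -> measured MOS for one device
    class; in practice the scores would come from a system like iQ-MOS.
    Returns None if no rung of the ladder reaches the target.
    """
    candidates = [bps for bps, mos in measurements.items() if mos >= target_mos]
    return min(candidates) if candidates else None

# Illustrative numbers only: smaller screens reach the same MOS at
# lower throughput because of their lower resolution.
big_screen   = {2500: 3.1, 4500: 3.6, 7000: 4.2}
phone_screen = {800: 3.2, 1500: 3.7, 2500: 4.1}
```

With a target in the article’s “good quality” band of 3 to 4, the big screen here needs 4,500 kbps while the phone gets there at 1,500 kbps — the kind of spread that lets a distributor trim throughput per device without dropping below the chosen threshold.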

“We probe for information and create alarms,” says Peter Dawson, chairman and co-founder of IneoQuest, who conducted the NAB demonstration. “We give people the tools they need to harness how many people are impacted by network problems.”

With such information in hand broadcasters have leverage over CDN suppliers, peering exchange systems and others they contract with to rectify the problems. Or they can make adjustments in the bitrates generated from transcoders at points of origin under their control to get to the MOS levels they’re looking for.

“Once you have that score for the content you can report the quality value to the system, which will then tell you where the bitrate you’ve chosen doesn’t deliver that value and why,” Dawson explains. “You can monitor this channel over time and, when the MOS goes below your chosen threshold, the system reports that up to the platform to analyze the problem.”

To get at what’s happening beyond the point of origin, customers can position the platform at the edge of the network, running as software on commodity servers where the customer has their own physical presence or can negotiate space for the software on servers operated by a CDN provider. Or the customer can place IneoQuest appliances in strategic locations to emulate what’s happening at the regional CDN level. Similarly, the FoQus platform can gather real-time information from end user devices whenever possible or rely on strategically placed end-device emulators.

Tektronix Solutions for Broadcasters

Rounding out the latest innovations aimed at ensuring high-quality DTC services for broadcasters and other OTT distributors is a new platform from Tektronix designed to provide testing and diagnostics support for broadcasters’ transition from SDI-based to IP-based production and post-production infrastructure. Tektronix, a leading supplier of end-to-end video quality assurance technology for network service providers, has been focused on quality control (QC) needs of broadcasters for managing video assets through to playout.

For example, as previously reported, Tektronix has addressed the QC requirements faced by suppliers of file-based on-demand content to the proliferating ecosystem of OTT outlets. The Tektronix post-production solution set includes a highly automated QC platform along with a multi-protocol playback tool enabling highly granular monitoring of files’ conformance to OTT distributors’ ABR and other specifications.

In its latest innovation targeting broadcasters, Tektronix is offering what it says is the first hybrid SDI/IP media analysis platform, dubbed “Prism.” The platform diagnoses and correlates both SDI and IP signal types and helps quickly identify the root cause of the error, whether it is in the IP layer or in the content layer, says Charlie Dunn, GM for the Tektronix video product line.

“The transition from SDI to IP is happening in phases, where there’s a need for a test and monitoring system that can provide visibility into new kinds of problems to allow engineers and operators to keep things on a consistent path as they manage hybrid facilities,” Dunn says. “We’re enabling broadcasters to ensure the new systems they’re putting in place will deliver all video content, including 4K UHD with HDR (high dynamic range) enhancements, from post production into playout at the quality levels they expect.”

Dunn notes that Tektronix has expanded its quality assessment capabilities in the broadcast domain to take into account the challenges posed by HDR. “We’re addressing issues like color grading that come into play with HDR,” Dunn says. “For example, there’s a question of shading in live production where you have to make sure that when you shade for HDR, the content isn’t washed out when it’s played on an SDR (standard dynamic range) display. We’re doing research with customers to determine how to put this type of analysis into our solutions.”

The success of Tektronix in addressing quality-assurance needs of broadcasters was underscored with news that NBC Sports Group’s Olympics division will use the vendor’s equipment to handle audio and video testing and live distribution quality monitoring for its production of the games this summer in Rio de Janeiro. The Tektronix equipment, operating across production, post production, transmission and distribution workflows, includes the firm’s WFM8300 Waveform Monitor, which supports numerous UHD formats and ITU-R BT.2020 wide color gamut, Dunn notes.

Terry Adams, vice president of engineering at NBC Olympics, notes that, for the first time his division, which has used Tektronix equipment in its last eight Olympics productions, will be using the vendor’s Sentry probes to monitor distribution performance. “We will be utilizing 12 of the Sentry units located across the country to monitor the hundreds of live production and distribution streams generated in Rio,” Adams says. “Tektronix has incorporated many new features based on requirements we identified during our coverage of the London and Rio Games.”

Clearly, an element essential to the transformation of the TV business in the OTT era is now in place. The risk of flying blind, with no recourse to identifying, let alone rectifying, problems as they occur in real time, has been eliminated from the strategic planning equation.


Getting Real about Virtual Reality

In a galaxy far, far away.

TV Universe Begins to Stir as Technology Gains Momentum

By Fred Dawson

February 1, 2016 – As the hype surrounding virtual reality moves into mainstream entertainment circles, network service providers face still another situation where they have to weigh how far to go in allocating human and infrastructure resources toward a vaguely defined service opportunity.

With the failure of 3D to get off the ground still fresh in their minds and the rollout pace of services supporting 4K UHD and High Dynamic Range (HDR) formatted content still up in the air, network operators may see little reason to dive into serious consideration of VR at this early stage. But they can’t afford to be too late to the party if nascent but accelerating attempts at making VR part of the OTT app mix catch hold.

Decades of VR development and misplaced expectations have given way to an unprecedented burst of enthusiasm buttressed by multiple research projections and a new generation of headsets, production tools and application concepts that are drawing the engagement of major players. These range from the massive $2-billion bet made on VR by Facebook with its purchase of VR technology developer Oculus in 2014 to Google’s aggressive approach to building a mass market through its Cardboard initiative and new YouTube programming to toe-in-the-water activities on the part of entertainment giants such as Netflix, Disney, Discovery Communications, the BBC, DirecTV, Comcast and many more.

A recent Goldman Sachs Group report predicted an $80-billion global market for VR and AR (augmented reality) by 2025 with $45 billion going to hardware sales and the remainder to software applications. On the software side, the $7.4-billion share Goldman Sachs sees going to video entertainment ($3.2 billion) and network-delivered viewing of live events ($4.1 billion) is second only to the $11.6 billion projected for the VR games market by that year, with healthcare, engineering and defense leading in other categories singled out for VR and AR applications.

While Goldman Sachs says AR software revenues will account for 25 percent of the $35-billion software pie, figures projected for the video and event as well as gaming software markets are focused on VR apps. In fact, while Goldman Sachs says VR and AR have the “potential to become the next big computing platform,” it predicts the far greater impact will come from VR, “given VR’s technological progress and momentum” and the fact that AR has “more technological hurdles to overcome, including challenges in display technology and the real-time calibration and processing of the real-world physical environment.”

Goldman Sachs is not alone in predicting a big future for VR. The same week in early December that its report came out, Macquarie Bank analyst Ben Schachter issued an advisory note that echoed Goldman’s core point: “We continue to believe that VR/AR is poised to be the next computing platform,” Schachter says. “And like the transition from desktop to mobile, it will be disruptive.”

While Schachter and others view 2016 as the year that VR surges to an unprecedented level of consumer adoption, in the grand scheme of things the ramp-up in the near term will be relatively slow. “Less will happen in two years than you’d think,” he says, “but more than you can possibly imagine will happen in the next 10….[O]nce these devices begin to get into consumers’ hands and developers launch content that moves beyond the ‘wow’ moment and into uniquely, useful experiences, it will be clear that entertainment, communication and many enterprise functions will change dramatically over the coming decade.”

For 2016, the Consumer Technology Association (formerly Consumer Electronics Association) estimates unit sales of VR headsets such as Oculus Rift, HTC Vive, Sony PlayStation VR and Samsung Gear VR will reach 1.2 million and generate $540 million in revenue, marking a 440 percent increase over 2015. Globally, Deloitte Global, in another study released in December, projects 2.5 million VR headsets sold in 2016 will generate $700 million in revenue with another $300 million generated by sales of ten million game copies selling at anywhere from $5 to $40 per unit.

“We do not expect VR to be used to any great extent in television or movies in 2016,” Deloitte says, noting the absence of content or even much in the way of commercially viable production gear. “By the start of 2016, we anticipate a small range of suitable cameras may have been launched onto the market, but the cost of purchasing or renting professional grade devices may initially be prohibitive for many projects.”

Equally if not more significant is the fact that there’s a steep learning curve ahead for VR filming, where the need to capture a 360-degree perspective means cameras will have to be invisible from all angles and under automated control to keep crew members out of the picture. Sports poses an even bigger problem, given that cameras in the field could obstruct player movement.

Handling the massive file sizes will be a big issue in post-production. “One production level camera features 42 cameras capable of 4K resolution,” Deloitte says. “This captures a gigapixel image (about 500 times the size of a standard smartphone image), and shoots at 30 frames a second. One subsequent challenge of capturing images at this level of resolution will be determining how to store, transmit and edit the files.”
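Some back-of-envelope arithmetic shows why Deloitte flags storage and transmission as the challenge. Assuming uncompressed 24-bit color (an assumption not stated in the report), a gigapixel frame at 30 frames per second implies:

```python
# Rough arithmetic on the Deloitte figures: a gigapixel frame at
# 30 fps, assuming uncompressed 24-bit (3-byte) color. Real rigs
# compress in-camera, so treat this as an upper bound on raw throughput.
pixels_per_frame = 1_000_000_000   # one gigapixel
bytes_per_pixel = 3                # 24-bit RGB, an assumption
fps = 30

raw_bytes_per_second = pixels_per_frame * bytes_per_pixel * fps
raw_gb_per_second = raw_bytes_per_second / 1e9
raw_tb_per_minute = raw_bytes_per_second * 60 / 1e12

print(f"{raw_gb_per_second:.0f} GB/s raw")       # 90 GB/s
print(f"{raw_tb_per_minute:.1f} TB per minute")  # 5.4 TB/min
```

Ninety gigabytes per second of raw capture, or roughly 5.4 terabytes per minute, before any editing copies are made — the scale of the post-production problem the report is pointing at.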

The biggest factor driving the expectations for 2016 has been the improvement in headset technology, known in industry jargon as the HMD (head-mounted display). As noted by Deloitte, the market will consist of two types of VR headsets: fully featured systems and mobile-optimized systems. “Full feature devices will likely be designed for use with either the latest generation games consoles or PCs with advanced graphics cards capable of driving high refresh rates,” Deloitte says. “‘Mobile VR’ incorporates a high-end smartphone’s screen into a special case, enabling the headset to fit more-or-less snugly on the user’s head.” Samsung’s Gear VR, priced at $99 and powered by Oculus technology, is a leading example of this VR category.

The high-end device lineup is led by the Oculus Rift, pre-order priced at $599 with a late March rollout date; the HTC Vive, slated for April rollout with price yet to be announced; and the Sony PlayStation VR, due out later in the year. All three have been widely reviewed in prototype mode, most recently at CES 2016, with enthusiastic feedback from gaming specialists.

For example, Gizmag reviewer Will Shanklin found all three compelling on a visit to CES. “All three give you the basic VR experience of transporting you somewhere else,” Shanklin writes. “If any one of them were the only virtual reality that existed, we’d still be excited about this emerging frontier.”

While there are some consequential technical differences, the overarching consideration for buyers is availability of good content, which gives the edge to Oculus in Shanklin’s opinion. Oculus, in addition to having the backing of Facebook and the broadest range of content already in play, has been drawing developers to the platform through its Oculus Studios with plans to introduce 20 more VR games this year. Moreover, Oculus is teamed with Microsoft’s Xbox One, making the game console an alternative to high-performance PCs as the processing platform for playing games, which is sure to spur additional game development.

Other reviewers have given the content nod to HTC Vive in light of its deal with Valve, a leading video game maker which is leveraging the headset to support an integrated hardware-software gaming system it’s calling SteamVR. And, of course, in the wings is PlayStation VR, whose backers promise to have a boatload of content available with launch.

These headsets far outperform previous VR devices with extremely high-resolution displays – one 1080 x 1200 OLED (Organic LED) display for each eye with a 90 Hz frame rate in the cases of Rift and Vive and a 120 Hz rate with PlayStation VR. This means, with the exception of Oculus Xbox users, consumers using Rift and Vive must also have access to a high-end PC to run the headsets. Oculus is offering a complete package with appropriate PC and headset priced at $1,499.

Reviewers offer varying assessments of how the systems compare from a technological performance standpoint, but Vive seems to be winning highest praise for enhancements that include motion tracking capabilities utilizing two wireless infrared cameras placed at the corners of a room to interact with the headset’s 37 sensors. As a result, scene changes tied to head and bodily movements across a room create the sense of physical exploration in the VR experience. The system has a camera built into the headset to provide a user-activated view of the real surroundings that allows the user to avoid stumbling into things.

Writing about the Vive for online publisher Pocket-Lint, reviewer Stuart Miles notes Vive’s full movement capabilities offer “a more comprehensive range of possibilities than many other units.”  He also is impressed at “how smooth the experience is. Graphically there’s no sign of lag, no delay as you move your head, hands or body…There’s no flicker and the headset is pretty comfortable too, with the soundtrack being completely enveloping.”

But it’s a measure of how far VR veterans have gone in adapting to what others might find off-putting that a reviewer can rave about an immersive VR experience that includes “an umbilical cord of cables coming out of the back” of the headset, not to mention a headset that looks like “a giant scuba diving mask” with a headband “more akin to a gas mask fitting…than skiing goggles” that serves to reduce “some of the front weighting of the unit.”

As the Deloitte report cautions, “Any company that is considering VR in any regard should have a careful look at the likely addressable market. Recent breakthrough technologies that required consumers to wear something on their face have not proven to be mass market successes. While VR headsets may sell better than smart glasses or 3D TV glasses, also consider that using the technology may require a set of behavioral changes that the majority of people do not want to make.”

CableLabs, which in its recent re-organization has listed VR as a major area of ongoing interest, took pains this past year to gauge consumer response to the VR experience by bringing a cross section of non-users into its facilities for a test run. Surprisingly, the response was “overwhelmingly positive,” reports Steve Glennon, principal architect for CableLabs’ Advanced Technology Group.

Writing in a recent blog, Glennon says, “[W]e were surprised by how few expressed any discomfort and how positively regular people described the experience.” Fifty-seven percent of the visitors said they “had to have it,” while 88 percent could see themselves using a head-mounted display within three years. “Only 11% considered the headset to be either uncomfortable or very uncomfortable,” Glennon notes, adding that “96% of those who were cost sensitive would consider a purchase at a $200 price point,” which is well within the range of mobile VR headsets like Samsung’s Gear VR.

But content availability is a big issue. “[W]e asked what would stop people from buying a virtual reality headset,” Glennon says. “High on the list of items was availability of content. Setting aside VR gaming, people didn’t want to spend money on a device that only had a few (or a few hundred) pieces of content.”

Judging from recent developments, a dearth of non-gaming content may not be a problem for long. A new survey of Hollywood content creators jointly sponsored by CTA and NATPE (National Association of Television Program Executives) finds most believe VR represents a game-changing method of storytelling. The consensus of 16 executives interviewed in depth for the study was that, beyond gaming, the strongest genre meriting VR development is horror. Sports and concerts were also cited as promising areas of development.

But respondents also made clear they recognize serious hurdles must be overcome before a significant tide of non-gaming VR content emerges. These include the need to generate a viable model for content creation, including a determination of the endurance cap for sustained viewing, with a clear pathway to monetization.

“The future of VR is dependent on quality content and, with this study we wanted to provide a more comprehensive look at Hollywood’s attitudes on the many opportunities and challenges this technology faces,” says NATPE president and CEO Rod Perth. “This study presents a snapshot of the types of genres that could be adapted to this dynamic technology but also offers a realistic picture of its limitations.”

CableLabs’ Glennon cites three VR content factories that are acting to fill the void, including JauntVR, which recently secured $66 million in investment funding led by Disney, Creative Artists Agency’s Evolution Media Capital and China Media Capital; Immersive Media, whose VR productions include an American Express-sponsored Taylor Swift concert video, and NextVR, which, Glennon notes, “seemingly wants to become the ‘Netflix of VR.’”

NextVR has caught the enthusiasm of Comcast Ventures, which joined with other investors in a recent $30.5-million financing round. “As the preeminent company that can transmit live high definition virtual reality over the Internet, NextVR’s lens-to-lens system captures and delivers immersive experiences for marquee live events,” say Comcast Venture managing director Michael Yang and principal Gavin Teo in a recent blog post.

Recent big TV network forays into VR include a VR-enabled version of a CNN-sponsored Democratic primary debate, Discovery Communications’ Discovery VR, a series of short-form VR experiences in nature, and the BBC’s creation of a VR version of its Strictly Come Dancing TV show. Netflix has broken into VR with an app for viewing through the Samsung Gear VR that puts the viewer in front of a virtual giant screen located in different settings such as a ski lodge to watch any Netflix movies or TV episodes.

Among pay TV distributors, DirecTV, focusing on mobile VR development at its digital innovation lab, has taken the lead with a VR app that takes viewers inside the boxing ring to experience a recorded fight. In a recent interview with Multichannel News, DirecTV senior vice president of digital entertainment Tony Goncalves was quoted as saying, “We’re not sure how VR will evolve and shift from flat, non-immersive TV experiences to the level of almost putting consumers inside a movie, but for pay TV and video content providers, it’s very important to explore new ways of seeing content, and VR falls into that category.”

Perhaps the biggest center of early network-delivered VR activity can be found at Google, which has over 14,000 “spherical videos” running on YouTube, many of them user generated. “[With] VR (virtual reality) we’re starting to see some of the first signs of really incredible storytelling,” says Ben Relles, head of comedy and unscripted at YouTube.

Google is also heavily engaged with developers targeting its $20 Cardboard headset with VR apps delivered over mobile links. For example, The New York Times has distributed Cardboard HMDs to its 1.1 million subscribers in advance of releasing the first in a series of VR short films, The Displaced, a documentary following the lives of three refugee children from South Sudan, eastern Ukraine and Syria.

While all this activity adds up to a drop in the bucket next to mainstream content development for 4K UHD, there’s no reason at this point for pay TV distributors not to take seriously the possibility that before too long they may be packaging VR services for delivery over their broadband pipes. This will represent another strong incentive for building out gigabit access networks, given that, by CableLabs’ estimate, a single VR stream suited for use on high-end systems like the Oculus will consume between 150 and 200 Mbps of bandwidth.
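CableLabs’ estimate is consistent with back-of-the-envelope arithmetic. The sketch below uses illustrative figures that do not come from the article (a combined 2160x1200 headset panel, 90 frames per second, 24 bits per pixel and roughly 30:1 video compression, typical of Oculus Rift-class hardware of the period) and lands in the same 150–200 Mbps range:

```python
# Back-of-the-envelope VR stream bandwidth estimate.
# All figures are illustrative assumptions, not taken from the article.
WIDTH, HEIGHT = 2160, 1200   # combined panel resolution, Oculus Rift-class HMD
FPS = 90                     # refresh rate typical of high-end VR headsets
BITS_PER_PIXEL = 24          # uncompressed RGB
COMPRESSION_RATIO = 30       # rough modern-video-codec compression factor

# Raw (uncompressed) bitrate, then the compressed stream rate.
raw_bps = WIDTH * HEIGHT * FPS * BITS_PER_PIXEL
compressed_mbps = raw_bps / COMPRESSION_RATIO / 1e6

print(f"raw: {raw_bps / 1e9:.1f} Gbps, compressed: ~{compressed_mbps:.0f} Mbps")
# The compressed figure falls within CableLabs' 150-200 Mbps estimate.
```

Under these assumptions the raw feed is about 5.6 Gbps, and compression brings it to roughly 187 Mbps, squarely inside the range cited above.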

“This is not just hot and sexy, a passing fad,” says CableLabs’ Glennon. “It has massive potential to transform lots of what we do, and we can all expect incredible developments in this space.”


ADB Scores Wins in Switch To Solutions for IP Services

Gerald Wood, CMO, ADB

Cloud Middleware, Support for Connected Apps Fill Gaps in B2C and B2B Segments

By Fred Dawson

October 9, 2015 – In another reflection of how profound changes in premium video strategies are impacting suppliers, ADB, known for its advanced set-top boxes, is making waves with a new portfolio of wide-ranging software solutions for the TV and Internet-of-Things markets.

“Over the last couple of years we’ve seen declining margins on hardware amid a lot of tough competition as boxes have become more commoditized,” says ADB CMO Gerald Wood. “We either had to stay with that strategy or move to adding more value to our operations. So we shifted our focus to the software side.”

The result is a new set of application-specific products and services packaged as “Connected Solutions,” buttressed by a cloud-based, device-centric middleware platform that acts as a transition layer between Connected Solutions and industry platforms and protocols. This “ConnectedOS” serves to simplify integration, accelerate speed to market and reduce the costs of delivering connected services, says ADB CEO Peter Balchin.

“This is a new chapter for ADB,” Balchin says. “We believe that in an age of Internet connectivity there is a need for fast, reliable and cost-effective solutions that ensure consumers and businesses are always connected.”

The strategy is already beginning to bear fruit, resulting, for example, in a collaboration with Polish satellite pay TV provider Cyfrowy Polsat aimed at integrating ADB’s Connected Solutions and Personal TV software with the MVPD’s in-house manufactured set-tops and other devices. “Our partnership with ADB will help us to give our customers an enhanced TV experience that will meet their TV anytime, anywhere needs,” says Cyfrowy Polsat CTO Dariusz Dzialkowski. “ADB is a renowned global player with great local knowledge.”

Indeed, Wood notes, Polish universities have been an important recruitment source for ADB in building its software expertise. Now the guiding light for ADB isn’t so much capitalizing on the next advance in hardware technology.

“We’re taking a different approach by following market developments to determine what we need to do,” he says. There are needs aplenty, he adds, including support for “scalable convergence of TV and mobile services, managed devices in the home, personalized EPGs, multiscreen, second screen, cloud DVR, catch-up and OTT affiliations.”

That strategy has already paid off handsomely in the B2B realm where modules from ADB’s Commercial Video Solutions (CVS) portfolio have been deployed by major U.S. MSOs to crack the hotel market. Used with a client application running on smart TVs and set-tops to render hotel-specific UIs, the network-agnostic platform supports linear and VOD TV and OTT content as well as local advertisements and in-house promotions of hotel services, Wood says.

“We’ve been tremendously successful with CVS in the U.S. where large operators, including Time Warner Cable, Cox and Bright House, are using the platform to offer solutions to hotel groups,” he notes. “Through operators in North America and elsewhere, our technology is now operating in some 200,000 hotel rooms.”

Another market ADB is targeting with its OSConnect middleware is the Internet-of-Things (IoT). The company’s efforts there as well as its ability to marry the legacy and OTT video environments were greatly enhanced with its 2010 acquisition of Pirelli Broadband, a leading supplier of broadband gateways, fixed/mobile convergence devices and broadband systems management solutions to telecom operators in Europe and Latin America. “We’re now able to better understand the broadband Internet-facing side of the business and to bridge between broadcast and broadband,” Wood says.

The IoT product suites target both the B2C and B2B markets, says Jamie Mackinlay, vice president of business development at ADB. On the B2C front, ADB is providing operators the means to build a managed multi-app service while its B2B solutions enable Internet-connected sensors to interact with an IT cloud infrastructure to support device-specific applications.

“At a basic level we’re providing an intelligent message bus with business logic designed to interoperate with third parties to pull together the applications and create a compelling user experience,” Mackinlay says. In the B2B scenario he points to ADB’s engagement with Whirlpool, which is bringing IoT capabilities to washing machines, dishwashers, refrigerators and other appliances.

For example, with washing machines the IoT apps include timing of wash runs to coincide with periods of low electricity costs, identification of defects before they inconvenience users and support for marketing tie-ins that allow detergent suppliers to promote their brands by offering advice on efficient use of their products. Or, as another example, in a household with connected refrigerators and connected fitness machines, the refrigerator can advise residents what to eat based on their treadmill activity. “We’ve learned that Whirlpool believes that with these advances they’re now on par with Bosch and far ahead of other competitors,” Mackinlay says.

On the B2C side, ADB’s Personal IoT solution leverages the OSConnect middleware, along with interoperability with Apple, Android and other ecosystems, to enable end-to-end remote provisioning, assurance and control of connected objects. The platform supports creation of a collaborative service model between network operators and domain providers that can be extended on top of traditional offerings as part of bundled service packages.

ADB is finding many other use cases for its software knowhow, including enabling MVPDs to extend OTT as a managed service and providing pure OTT providers a more robust path in direct-to-consumer offerings. Advertising, too, has become a focus, Wood says, noting that advertising platform provider Invidi is utilizing ConnectedOS for programmatic advertising implementations in Europe. “ConnectedOS is also being used in M&A environments to bring different companies’ operating systems together,” he adds.


Open-Source Solution Facilitates Use of Browsers in Pay TV Apps

Thijs Bijleveld, senior vice president, sales & marketing, Metrological

Metrological Makes HTML5 Enhancements Available to Users of the RDK Framework

By Fred Dawson

October 2, 2015 – Support for enabling robust rendering of cloud-based applications on set-top boxes, long the domain of proprietary middleware solutions, is now available as an open-source option, potentially setting in motion a more rapid transition to advanced services for service providers utilizing the RDK and other set-top frameworks.

The new open-source HTML5 browser enhancements, developed by Metrological and now incorporated into the Reference Design Kit software stack, have already had an impact on advanced service initiatives underway at Comcast and Liberty Global, which, along with Time Warner Cable, are the core MSO partners in Reference Design Kit Management, LLC. The two cable giants see the Metrological solution as a way to enable cloud-based applications to run on set-top boxes with the speed and consistency of native apps while avoiding the costs normally associated with such capabilities when they are developed by cloud middleware suppliers.

“With these enhancements for the RDK, we hope to see HTML5 experiences with the visual fidelity and graphics performance normally reserved for native apps,” says Sree Kotay, executive vice president and chief software architect at Comcast Cable. “Under the hood, we’re constantly looking for ways to enhance our X1 Entertainment operating system, and we believe Metrological’s contribution will have significant impact.”

Comcast plans to trial the STB browser software enhancements for use on its RDK-based X1 Platform later this year. Liberty Global, which currently uses an earlier version of Metrological’s browser, plans to upgrade to these new enhancements for its RDK-based Horizon TV platform.

“At Liberty Global we set out to optimize the browser to deliver a high performance consumer experience with support for rich UIs and HTML5 apps on Horizon TV,” says Balan Nair, executive vice president and CTO at Liberty Global. “This browser capability will enable us to more easily customize user experiences and offer high performance TV services, such as our integrated app experience based on the Metrological Application Platform, to our customers in ways that weren’t previously possible.”

Metrological’s open-source approach to overcoming the drawbacks that limit browsers’ usefulness for pulling cloud apps into the panoply of feature options available on pay TV UIs parallels many of the techniques used in proprietary solutions. But, says Thijs Bijleveld, senior vice president of sales and marketing at Metrological, his company is not looking to compete with middleware suppliers.

Instead, the goal is to create a more favorable environment for operators’ use of Metrological’s cloud-based Applications Platform, which, as previously reported, supports a device- and software-agnostic managed service that includes app store deployment, lifecycle management, service assurance and legal content management. “We’re not a browser company and don’t have a business model for that,” Bijleveld says. Rather, by making its solution available on an open-source basis the company hopes to inspire wide-scale adoption where “the better the browser performs the better our apps perform.”

In making RDK a part of its app outreach strategy, Metrological has been utilizing the RDK Emulator, a testing framework, to allow app developers and operators to remotely develop and test apps on top of the RDK. At the same time, the Metrological SDK (software development kit) is designed to allow developers to create apps for other environments as well, including DVB, OpenCable and IP.

Operators are able to tap into all the apps Metrological hosts, currently numbering about 250 with another 50 slated to be added before year’s end, to enhance their main screen TV content with feeds from OTT sources, Bijleveld says. “Increasingly, our platform is being used by operators to distribute niche content such as ethnic programming and sports,” he adds. “With our scheduling and personalization capabilities, if operators have good recommendation engines and subscriber profiles, we can determine which selection of content will be served to a given end user.”

By offering an open SDK, Metrological makes it possible for a global community of developers to create apps for its platform, greatly extending their reach across multiple operators, set-top frameworks and geographic regions, he explains. At the same time, operators can develop apps that will be made available just for their own use.

In growing numbers, operators are recognizing the limits of relying on apps residing natively on the set-top to expand their service portfolios. “At some point, you run up against the limits of CPU power and memory to support apps,” Bijleveld says. “Operators are finding consumers want personalized OTT offerings, which means the operator has to have access to a great number of apps and the mechanisms in place to search and discover the right apps for each person. This can’t be done natively.”

But as such considerations drove operators to explore the use of HTML5-enabled browsers to manage these apps at the set-top, it became clear something had to be done to improve performance, he adds. “During our deliveries we discovered more and more that the browser and even HTML5 needed further improvement and optimization to get the best out of our app platform,” he notes.

“At some point Comcast and LGI acknowledged they would use browsers as the basis for launching new services,” he continues. “That’s when we decided to take on the challenge of improving browser performance to match and even exceed the user experience of native apps running on the STB. Now we’re seeing that the new browser version used by LGI, for example, is performing better than apps that run natively on the same device.”

Metrological has developed the HTML5 browser enhancements utilizing an open-source environment for the browser core known as “WebKit for Wayland.” The enhancements not only enable better rendering of apps and next-generation UIs along with better window management to control multiple applications and improve control over resources; they deliver these improvements with a smaller software footprint and significantly less memory usage, Bijleveld says.

“HTML5 creates a standard, but it doesn’t solve the issue of having to deal with the fact that every device is different,” he explains. “If you want a uniform experience you need an in-house browser supported by a framework that can run cloud-based services consistently across all devices.” And, he adds, operators need to be able to control which devices have access to any given app.

One of the advantages to using an open-source approach to addressing these challenges is that “you can leverage the rapid increase in innovations coming from contributors to open-source browsing,” he notes. “For suppliers of proprietary solutions, it’s a real challenge to keep up with this scale of innovation.”

So far, Metrological has employed over 130 parameters for caching, graphics rendering and other functions, either developed in-house using the open-source core or applied from existing open-source contributions, to create a robust HTML5 browser environment, Bijleveld says. For example, much as some proprietary solutions have done, the Metrological solution takes advantage of OpenGL (Open Graphics Library), a cross-platform API designed to interact with GPUs (graphics processing units) in chipsets to activate hardware-accelerated rendering of 2D and 3D graphics.

Operators can use the Metrological enhancements with any off-the-shelf HTML5-enabled browser to create a robust user experience, or they can combine their own enhancements with those provided by Metrological to further differentiate the user experience. “We’re doing everything we can to enable more open, flexible and robust use of browsers, which is pretty much in line with the RDK approach,” Bijleveld says.

“These innovative new browser enhancements are a prime example of how RDK member companies are using modern technologies to help accelerate the deployment of new TV services,” confirms Steve Heeb, president and general manager of RDK Management. “Metrological is bringing this contribution into the RDK, and thanks to their efforts, it will be available for the entire RDK community.”
