Archives

Accenture, Your Webcast With Hotstar Wasn’t A “World Record”

Here we go again. It seems every year we have a few companies involved in a webcast that are quick to pat themselves on the back and call it a "world record". The latest is Accenture, which is promoting the 2018 Hotstar cricket webcast by saying it "established the world record for the highest observed concurrency on a streaming platform, at 10.3 million concurrent views." Accenture is touting this statement via a case study it has done to highlight its Accenture Video Solutions platform. The problem with the statement is that it's simply false. On May 20th, 2018, the League of Legends final peaked at just over 24.5M simultaneous streams. And unlike Accenture's lower number, the League of Legends number was verified by a third-party company, ESC. Who verified Accenture's number? While the Hotstar webcast was big, calling it a "world record" on a "streaming platform" is simply wrong. It wasn't, and Accenture should know better than to advertise it as such. If you really want to impress us, Accenture, give us numbers about the quality of the stream, not just how many viewers were connected to it.

Technical and Business Reasons Kept Low-Latency Streaming Out Of The Super Bowl

A thread was started on LinkedIn with some suggesting that CBS should have enabled low-latency streaming for the Super Bowl and that the stream should not have had a 20-40 second delay. There are so many wrong assumptions being made by people in the industry around low-latency streaming, at both a business and a technical level. For starters, what value does CBS get, from a pure ROI standpoint, by reducing the latency of the Super Bowl stream? Do they sell more ads? No. Would more people initiate the stream? No. There is ALWAYS a cost vs. quality tradeoff that takes place with any streaming media service.

Netflix could make the quality of their videos better tomorrow by encoding them at 10Mbps. Why don’t they do it? Because there is no business benefit. The idea that CBS or anyone else should enable certain technology or features, just because they can, doesn’t make good business sense. Next up is the idea that delivering low-latency streaming at scale is easy or cheap to do. It’s not. Any major CDN that delivers live events with multi-million simultaneous streams will tell you it’s not easy to do at scale and many today can’t do it for large audiences. [see: Edge Computing Helps Scale Low-Latency Live Streaming, But Challenges Remain]

Not to mention, most CDNs charge more to deliver low-latency streaming, but broadcasters don't want to pay more for it, except in specific use cases. In a recent survey I did of over 100 broadcast and media customers, 80% of them said they wanted ultra-low latency functionality, but were not willing to pay more for it. Many expect the functionality to be part of a standard CDN delivery service, capable of supporting millions of simultaneous viewers. So to justify the extra cost, CBS, or anyone else, needs a business reason to enable it.

Also, when it comes to latency, there are many places it can be introduced into the workflow, not just at the encoding and distribution points. So the idea that any one vendor can offer a solution that a company just "drops in" to their workflow isn't accurate. The bottom line is that broadcasters streaming live events to millions at the same time have to make business decisions about what the end-user experience will look like. So no one should fault CBS or anyone else for not implementing low-latency or 4K or any other feature, unless they know what the company's workflow is, what the limitations are by vendors, what the costs are to enable it, and what KPIs are being used to judge the success of the event.
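To make that workflow point concrete, here is a rough, back-of-the-envelope sketch in Python of how latency stacks up across a typical segmented (HLS/DASH) live workflow. Every number is an illustrative assumption on my part, not a measurement from the CBS stream or any other broadcaster's:

    # Where glass-to-glass latency accumulates in a typical segmented live workflow.
    # All values are illustrative assumptions, not measurements from any real stream.
    latency_budget_s = {
        "capture and production": 3.0,
        "contribution encoding and transport": 2.0,
        "ABR transcode and packaging": 4.0,              # roughly one segment duration
        "origin and CDN propagation": 2.0,
        "player buffer (three 6-second segments)": 18.0,
    }
    total = sum(latency_budget_s.values())
    print(f"Approximate glass-to-glass latency: {total:.0f} seconds")  # ~29 seconds

Getting from that range down to a few seconds means changing nearly every one of those stages, encoder settings, packaging, CDN behavior and player buffering alike, which is exactly why no single vendor can simply "drop in" low latency.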

Leaked Pricing Details Licensing Costs Per Channel For Live OTT Services

Recently, one of the major live OTT services shared with me their licensing costs broken out per channel. I agreed not to disclose which live platform this comes from and the pricing listed doesn’t mean this is what every live TV company pays or that the platform carries all of these specific channels. There are variations based on length of contract and number of subscribers. Most of the pricing listed is for 2 years, but some are 5 years. For this specific unnamed platform, ESPN pricing is 3 years. Also, a few of the content providers require an ad split, while others don’t. The costs are per subscriber, per month.

The channel pricing listed gives a good insight into one of the major costs of running a live OTT platform. When you add the distribution costs and all the technical pieces of the workflow on top of the content costs, it's not possible to run a profitable streaming live TV business. Even at scale, I don't know of any live OTT service that isn't losing money, which is why all of the major services are owned by MVPDs, ISPs, or others that can afford to lose money on the service, since their offering is part of a bigger product ecosystem.

Some of the live TV platforms have told me they think that with the growth of their service they can push back on TV network licensing costs, or drop unwanted networks. But no MVPD to date has been able to do this, so I don’t see the streaming platforms having any better leverage. With content licensing costs rising, it’s the main reason why nearly all of the live streaming TV services raised the cost of their packages by $5 last year. Like it or not, live TV streaming is only going to get more expensive for consumers.

The Advantages of Handling Manifest Manipulation at the Network Edge to Personalize Video Experiences

The ability to personalize viewing experiences at a granular level is one area in which online video separates itself from traditional TV. Consumer expectations are rising, be it for the quality and value of the service or the relevance of the ads they’re seeing. Meanwhile, content providers need to manage regional rights restrictions, secure the content and also monetize it with appropriate ads.

As we all know, content providers don't always apply the most optimal methods and tools to meet these expectations and requirements. Efforts to personalize content today are often inefficient, struggle to scale, and add unnecessary rigidity and cost to video workflows and storage, while limiting what can be personalized at an individual level.

Alternatively, many content providers are using manifest manipulation to personalize the video experience for each viewer. When a stream is requested, video and audio segments are accompanied by a manifest file that acts as a playlist and determines the playback order. The ability to change or customize the manifest dynamically, on a per-user level, opens up numerous opportunities to tailor the viewing experience. In the case of live video, a new manifest is delivered with each segment of video requested, allowing adjustments to be applied dynamically as viewing conditions change. Executing this function at the edge allows content providers to offload complexity from early in the content preparation process and dynamically apply advanced logic for an individual viewer just prior to delivery, while enhancing scale to reach larger audiences.
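As a concrete illustration of the idea, here is a minimal sketch in Python of an edge function that rewrites an HLS master playlist per session, capping the bitrate ladder for a viewer on a constrained device or network. The function and parameter names are hypothetical, and this is not any particular vendor's implementation:

    # Minimal sketch: per-session manifest manipulation at the edge.
    # Rewrites an HLS master playlist so variants above a per-viewer bandwidth
    # cap are removed before the manifest is returned to the player.
    def rewrite_master_playlist(master_m3u8, max_bandwidth):
        out, skip_next_uri = [], False
        for line in master_m3u8.splitlines():
            if line.startswith("#EXT-X-STREAM-INF"):
                # Simplified attribute parsing; enough to read BANDWIDTH.
                attrs = dict(kv.split("=", 1)
                             for kv in line.split(":", 1)[1].split(",") if "=" in kv)
                if int(attrs.get("BANDWIDTH", "0")) > max_bandwidth:
                    skip_next_uri = True   # also drop the variant URI on the next line
                    continue
            elif skip_next_uri and line and not line.startswith("#"):
                skip_next_uri = False
                continue
            out.append(line)
        return "\n".join(out) + "\n"

The underlying video segments stay generic and cacheable; only the small manifest is varied per viewer at request time.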

Examples of functions that can be handled by manifest manipulation include:

  • Monetization: CDNs can work in tandem with ad decision systems to enable dynamic ad insertion on a per-user basis
  • Personalized Streaming: Optimize video playback quality (bitrate selection) based on user, device type, network conditions or geography
  • Content Security: Protect content against unauthorized viewing using scalable session-level encryption, access control and watermarking
  • Content Localization: Apply regional variations including audio tracks, closed captioning and subtitles (see the sketch after this list)
  • DVR and Clip Creation: Dynamically create highlight clips, DVR windows and time shifting
  • Content Rights Compliance: Adhere to content rights restrictions with program replacement, substituting alternate content in restricted regions
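To illustrate just one item from the list above, content localization, here is a hedged sketch in Python that filters the audio and subtitle renditions of an HLS master playlist to the viewer's language at request time. The names are hypothetical and this is not any vendor's API:

    import re

    # Sketch: content localization via manifest manipulation.
    # Keeps only the #EXT-X-MEDIA audio/subtitle renditions matching the
    # viewer's language; a production implementation would also make sure a
    # DEFAULT rendition remains so the playlist stays valid.
    def localize_renditions(master_m3u8, language):
        out = []
        for line in master_m3u8.splitlines():
            if line.startswith("#EXT-X-MEDIA:") and ("TYPE=AUDIO" in line or "TYPE=SUBTITLES" in line):
                m = re.search(r'LANGUAGE="([^"]+)"', line)
                if m and not m.group(1).lower().startswith(language.lower()):
                    continue  # drop renditions in other languages
            out.append(line)
        return "\n".join(out) + "\n"

The same per-request pattern extends to the other items on the list, such as swapping segment URIs for program replacement or inserting ad breaks per viewer.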

When it comes to content providers personalizing content for their viewers, there are three main challenges they face.

Challenge 1: The One-Size-Fits-All Approach to Content Preparation
Delivering personalized content to fragmented audiences is difficult and costly when preparing video in a cookie-cutter fashion. This often entails creating large sets of manifest files with similar bitrate ladders across a wide range of supported formats, codecs and ad copy with additional localized variants. This approach not only places unnecessary complexity on the content creation phase, it adds cost by expanding the content library storage footprint.

When delivering the content to an end-user, this approach can fail to deliver a truly authentic, personalized experience because it's not actually targeted at an individual. The number of permutations created in the content preparation phase is finite, the content is pre-packaged before it ever reaches the end-user, and it rarely feels relevant from the perspective of the viewer.

Challenge 2: Customizing Manifests at Origin
Content providers that host and originate their content in the cloud sometimes attempt to process manifests at origin. Doing so can be extremely compute intensive and cause cloud costs to rise quickly. Creating personalized manifest files at origin is also inefficient, as the static manifest is first created, then passed to a CDN for delivery, but can't be cached efficiently by the network. When a content provider is delivering live or on-demand content to large audiences, this compute-heavy method is intensified by the extra strain that end-user requests place on the origin, which ultimately drives up cost. It can also add undue latency and impact delivery performance.
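A rough calculation, using assumed numbers purely for illustration, shows why origin-built, uncacheable manifests get expensive for a large live event:

    # If personalized manifests are built at origin they can't be shared from CDN
    # cache, so every player refresh goes back to origin. Assumed numbers only.
    viewers = 1_000_000
    refresh_interval_s = 6        # live players re-request the manifest roughly once per segment
    event_duration_s = 3 * 3600   # a three-hour event

    origin_manifest_requests = viewers * event_duration_s // refresh_interval_s
    print(f"Manifest requests hitting origin: {origin_manifest_requests:,}")  # 1,800,000,000

With a generic, cacheable manifest the CDN would absorb nearly all of those requests; personalizing at the edge keeps that offload while still varying the manifest per viewer.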

Challenge 3: Reliance on Clients
Those that rely heavily on their client implementation to personalize viewing are limited to the capabilities of their client stack, which are often inconsistent across platforms. Even though many clients can support a range of the desired personalization capabilities, such as localization, targeted ads and more, scale can be an issue. Client-side dynamic ad insertion implementations can struggle to handle programmatic decisioning quickly and at scale, and content security measures are often reliant upon server-side logic to enforce policies. Meanwhile, managing implementations across the landscape of supported devices that require frequent updating creates a fragmented environment that's hard to maintain. Depending on the end-user client also places additional burden on a technology stack that is often heavy and reliant on third parties. Choosing a client-side strategy to handle personalization functions pushes added complexity onto an already brittle environment where errors can significantly impact playback.

Because of these challenges, many content owners are realizing the advantages of handling manifest manipulation at the network edge. Utilizing manifest manipulation at the edge of the internet enables personalization at greater scale than can be achieved using a client-side approach. A widely distributed network can reach large audiences with customized manifests across the landscape of devices and browsers, without concerns around player updates or heavy reliance on third-party software. For a diagram of how this works, check out a blog post Akamai did on this topic.

It can also be less costly than creating manifests at origin. Executing at a per-session level provides more flexibility than the one-size-fits-all approach by handling requests dynamically at the edge, as opposed to placing undue complexity and rigidity earlier in the content preparation phase. In addition, manifest manipulation offers a graceful way to handle certain issues that happen upstream, for example within the ad decisioning workflow. In a client-side approach, an error where the ad copy isn't delivered in time could result in a blank screen, whereas a server-side implementation could replace the absent ad with other content to avoid the impact on the user experience.
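As a sketch of that server-side fallback, again with hypothetical names and an assumed pre-packaged slate with 6-second segments rather than any specific SSAI product, the edge logic might look something like this:

    # Sketch: fill an ad break in an HLS media playlist, falling back to slate
    # segments if the ad decision response is missing or late. SLATE_SEGMENTS is
    # an assumed, pre-encoded asset; all names are hypothetical.
    SLATE_SEGMENTS = ["slate/seg0.ts", "slate/seg1.ts", "slate/seg2.ts"]

    def fill_ad_break(ad_segments=None):
        segments = ad_segments if ad_segments else SLATE_SEGMENTS
        lines = ["#EXT-X-DISCONTINUITY"]       # signal a splice point to the player
        for uri in segments:
            lines += ["#EXTINF:6.0,", uri]     # assumes 6-second segments
        lines.append("#EXT-X-DISCONTINUITY")   # return to program content
        return lines

Because the substitution happens in the manifest before delivery, the player just sees a continuous playlist and the viewer never gets a blank screen.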

Handling manifest manipulation at the edge offers many advantages around scale, flexibility and intelligence and seems to be the route content owners are moving towards industry wide. I’d love to hear from others in the comment section on how you are addressing this problem, or helping clients address it.

Video: Delivering Incredible End-User Experiences Using Containerized Microservices Closer To Endpoints

At my recent EdgeNext Summit event, one of the key talks was how to allow developers to deliver interactive experiences to end users while ensuring low application response times, leading to higher engagement and happier users. Haseeb Budhani, CEO of Rafay Systems did a presentation on how the company’s platform enables developers to deliver these highly engaging user experiences by running containerized microservices closer to endpoints. Video link: https://www.youtube.com/watch?v=iNyazklVTQw&t=17s

Right Now, It’s Not About VR and Autonomous Cars: See Which Edge Applications and Edge Platforms Are Ready To Go Today

With all the hype of edge computing and edge cloud, everyone is now claiming to have products that are focused on the “Edge”. This branding and edge washing has created a lot of confusion around “What is the Edge”, “Where is the Edge” and “Who owns the Edge”?

Furthermore, the futuristic view around autonomous cars, virtual reality and remote surgery is overused: is this the best we can do for edge use cases? At my recent EdgeNext Summit event, Yves Boudreau, VP of Partnerships and Ecosystem Strategy for Edge Gravity by Ericsson, presented what they have learned over the past 12 months of real edge computing deployments and which applications are likely to exemplify near-term use of the edge. Video link: https://www.youtube.com/watch?v=Vxm9mpltXv8

Understanding Packet Loss and Its Impact On Mobile Content Performance

The transient nature and pervasiveness of packet loss, jitter and other performance problems occurring over a wireless “last mile” are often poorly understood, and hardly ever quantified. At my recent EdgeNext Summit event, Subbu Varadarajan from Zycada illustrated the impact of packet loss on performance over a wireless connection. Based on the analysis of 100+ billion transactions, he demonstrated the scope of packet loss, explained best practices to measure its impact, and showed a live demo of Zycada’s packet loss mitigation over the wireless last mile.

Video link: https://zycada.wistia.com/medias/fdrrrxbagn