The Current State of Ultra-Low Latency Streaming: Little Adoption Outside of Niche Applications; Cost and Scaling Issues Remain
For the past two years we’ve been hearing a lot about low and ultra-low latency streaming, with most of the excitement coming from encoding and CDN vendors looking to up-sell customers on the functionality. But outside of some niche applications, there is very little adoption or demand from customers, and that won’t change anytime soon. This is due to multiple factors, including the lack of an agreed-upon definition of what low and ultra-low latency mean, the additional cost it adds to the entire streaming stack, scalability issues, and a lack of business value for many video applications.
All the CDNs I have spoken to said that, on average, 3% or less of all the video bits they deliver today use DASH-LL with chunked transfer and chunked encoding, with a few CDNs saying it was as low as 1% or less. While Apple’s LL-HLS is also an option, there is no real adoption of it yet, even though CDNs are building out support for it. The numbers are higher when you go to low latency, which some CDNs define as 10 seconds or less, using 1 or 2 second segments, with CDNs saying that, on average, it makes up 20% of the total video bits they deliver.
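For readers who haven’t dug into the mechanics, the reason chunked encoding and chunked transfer matter is that the origin can start handing out pieces of a segment while the rest of it is still being encoded, instead of making the player wait for the complete segment. Below is a minimal sketch of that behavior in Python; the payloads, chunk count and timings are placeholders I made up for illustration, not real CMAF fragments or any vendor’s packager.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import time

class ChunkedSegmentHandler(BaseHTTPRequestHandler):
    # HTTP/1.1 is required for Transfer-Encoding: chunked.
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "video/mp4")
        self.send_header("Transfer-Encoding", "chunked")
        self.end_headers()
        # Pretend the encoder emits four 500 ms CMAF chunks of a 2 s segment.
        # With chunked transfer, a player can start consuming the segment
        # after the first chunk instead of waiting for the whole file to exist.
        for i in range(4):
            payload = f"moof+mdat fragment {i}".encode()  # placeholder bytes
            self.wfile.write(f"{len(payload):X}\r\n".encode() + payload + b"\r\n")
            self.wfile.flush()
            time.sleep(0.5)  # simulate per-chunk encode time
        self.wfile.write(b"0\r\n\r\n")  # terminating zero-length chunk

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), ChunkedSegmentHandler).serve_forever()
```

Fetching http://127.0.0.1:8080/ with curl -v shows the body arriving in four pieces spread over two seconds rather than as one complete download, which is exactly the property that lets a player sit closer to the live edge.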
Low latency and ultra-low latency streaming are hard technical problems to solve on the delivery side of the equation. The established approaches (chunked CMAF and LL-HLS) call for very small segment sizes, which translate into far more requests hitting the cache servers than a non-low-latency stream generates. That can be much more expensive for legacy edge providers to support given the age of their deployed hardware: low-latency delivery is very I/O intensive, and because older hardware can’t handle it, some CDNs have to run a separate network to support it. As a result, some CDNs don’t have a lot of capacity for ultra-low latency delivery, which means they have to charge customers more to support it. Based on recent pricing I have seen in RFPs, many CDNs charge an extra 15-20% on average, per GB delivered.
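To put some rough numbers behind the request-amplification and pricing points, here is a back-of-the-envelope sketch in Python. The 1-2 second segments and the 15-20% surcharge come from the figures above; the 6-second baseline, the roughly 0.33-second LL-HLS part duration, and the sample monthly volume and per-GB rate are assumptions I have picked purely for illustration.

```python
# Back-of-the-envelope request amplification at the CDN edge.
# Assumed durations: a 6 s segment baseline, the 1-2 s segments mentioned
# above for low latency, and ~0.33 s LL-HLS parts. Playlist refreshes and
# audio renditions are ignored to keep the comparison simple.

def requests_per_viewer_hour(object_duration_s: float) -> float:
    """Media-object requests one viewer generates per hour of playback."""
    return 3600 / object_duration_s

profiles = {
    "standard (6 s segments)": 6.0,
    "low latency (2 s segments)": 2.0,
    "low latency (1 s segments)": 1.0,
    "LL-HLS (0.33 s parts)": 0.33,
}

baseline = requests_per_viewer_hour(6.0)
for name, duration in profiles.items():
    r = requests_per_viewer_hour(duration)
    print(f"{name:<28} {r:>8.0f} req/viewer-hr  ({r / baseline:.0f}x baseline)")

# The 15-20% per-GB surcharge some CDNs quote, applied to a sample month.
monthly_tb_delivered = 500   # hypothetical customer volume
price_per_gb = 0.004         # hypothetical blended $/GB rate
base_cost = monthly_tb_delivered * 1000 * price_per_gb
for uplift in (0.15, 0.20):
    print(f"{uplift:.0%} uplift: ${base_cost * uplift:,.0f} extra on a ${base_cost:,.0f} bill")
```

Even with these made-up inputs, the shape of the problem is clear: shrinking the media objects multiplies edge request rates several times over while the bits delivered stay the same, which is where the I/O pressure and the per-GB premium come from.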
Adding to the confusion is the fact that many vendors don’t define exactly what they mean by low or ultra-low latency. Some CDNs have said that low latency is under 10 seconds and ultra-low latency is 2 seconds or less, but many customers don’t define it that way. As an example, FOX recently published a nice blog post on their streaming workflow for the Super Bowl, describing their low-latency stream as “8–12 secs behind” the master feed. They aren’t right or wrong; it’s simply a matter of how each company defines these terms.
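To show how much the label depends on whose thresholds you apply, here is a small sketch that buckets a measured glass-to-glass latency using the CDN definitions quoted above. The cut-offs are only the ones cited in this article, not any formal standard.

```python
def classify(latency_s: float) -> str:
    """Bucket a glass-to-glass latency using the CDN definitions cited above:
    ultra-low is 2 seconds or less, low latency is under 10 seconds."""
    if latency_s <= 2:
        return "ultra-low latency"
    if latency_s < 10:
        return "low latency"
    return "standard latency"

# FOX described its Super Bowl stream as 8-12 seconds behind the master feed,
# which under these thresholds straddles the low/standard boundary.
for latency in (0.8, 2, 8, 12):
    print(f"{latency:>4} s  ->  {classify(latency)}")
```

By that yardstick, part of FOX’s 8–12 second window wouldn’t even count as low latency, while the survey respondents below would call none of it ultra-low. That definitional gap is exactly the problem.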
In Q1 of this year I surveyed just over 100 broadcasters, OTT platforms and publishers, asking them how they define ultra-low latency and which applications they want to deploy it for (see the note at the bottom for the methodology). The results don’t line up with what some vendors are promoting, and that’s part of the problem: there are no agreed-upon expectations. Of those surveyed, 100% said they define ultra-low latency as “2 seconds”, “1 second” or “sub 1 second”; no respondent picked any number higher than 2 seconds. 90% said they were “not willing to pay a third-party CDN more for ultra-low latency live video delivery” and that it “should be part of their standard service”. The biggest downside they noted to ultra-low latency adoption was “cost”, followed by “scalability” and the “impact on ABR implementation.” Of the 10% that were willing to pay more for ultra-low latency delivery, all said they would not pay a premium of more than 10% per GB delivered.
Cost and scaling problems are a big part of the reason why, to date, most of the ultra-low latency delivery we see comes from companies that build their own delivery infrastructure using WebRTC, as I noted in a recent post. Agora has been successful selling their ultra-low latency delivery and I consider them to be the best in the market. They were one of the first to offer an ultra-low latency solution at scale, but note that a large percentage of what they deliver is audio only with no video, is mostly in APAC, and is being used for two-way communications. Agora defines their ultra-low latency solution as “400 – 800 ms” of latency and their low-latency solution as “1,500 – 2,000 ms” of latency. That’s a lot lower than other solutions I have seen on the market, based on how vendors define these terms.
Aside from the technical issues, and more importantly, many customers don’t see a business benefit from deploying ultra-low latency, except for niche applications. It doesn’t allow them to sell more ads, get higher CPMs or extend users’ viewing times. Of the streaming customers I recently surveyed, the most common video use cases they said ultra-low latency would be best suited for were “betting”, “two-way experience (ie: quiz show, chat)”, “surveillance” and “sports”. These are use cases where ultra-low latency can make the experience better and might provide a business ROI to the customer, but they are very specific video use cases. The idea that every sports event will move to ultra-low latency streaming in the near term simply isn’t reality. Right now, no live linear streaming service has deployed ultra-low latency, but with fuboTV having disclosed that they want to add betting to their service down the line, an ultra-low latency solution will be needed there. That makes sense, but it’s not the live streaming itself that’s driving the need; the gambling functionality is the business driver for adopting it.
Live sports streaming is one instance where most consumers would probably say they would like to see ultra-low latency implemented, but it’s not up to the viewer. There is ALWAYS a tradeoff for streaming services between cost, QoE and reliability, and customers don’t deploy technology just because they can; it has to provide a tangible ROI. The bottom line is that broadcasters streaming live events to millions of viewers at the same time have to make business decisions about what the end-user experience will look like. No one should fault any live streaming service for not implementing ultra-low latency, 4K or any other feature unless they know what the company’s workflow is, what limitations its vendors impose, what the costs are to enable it, and what KPIs are being used to judge the success of the deployment.
Note on survey data: My survey was conducted in Q1 of 2021, and 104 broadcasters, OTT platforms and publishers, primarily based in North America and Europe, responded. They were asked the following questions: how they define ultra-low latency; which applications they would use it for; whether they would be willing to pay more for ultra-low latency delivery; what the biggest challenges to ultra-low latency video delivery are; how much latency is considered too much for sporting events; and which delivery vendors they would consider if they were implementing an ultra-low latency solution for live. If you would like more details on the survey, I am happy to provide them, free of charge.