
Unpacking the Edge Compute Hype: What It Really Is and Why It’s Important


The tech industry has always been a prolific producer of hype, and right now no topic is invoked more generically than “edge” and “edge compute”. Everywhere you turn these days, vendors are promoting their “edge solution” with almost no definition, no real-world use cases, no metrics around deployments at scale and few details on how much revenue is being generated. Making matters worse, some industry analysts are publishing reports saying the size of the “edge” market is already in the billions. These reports have no real methodology behind the numbers, don’t define what services they cover and describe unrealistic growth and use cases. It’s why the moment edge compute use cases are mentioned, people fall back on the same examples of IoT, connected cars and augmented reality.

Part of the confusion in the market is due to rampant “edge-washing”, which is vendors seeking to rebrand their existing platforms as edge solutions. Some cloud service providers call their points of presence the edge. CDN platforms are marketed as edge platforms, when in reality the traditional CDN use cases don’t take advantage of any edge compute functions. Some mobile operators even refer to their cell towers as the edge, and a few cloud-based encoding vendors are now using the word “edge” in their services.

Growing interest among the financial community in anything edge-related is helping fuel this phenomenon, even though that community often has very little understanding of what it all means or, more importantly, doesn’t mean. Look at the valuation trends for “edge” or “edge computing” vendors and you’ll see there is plenty of incentive for companies to brand themselves as edge solution providers. This confusion makes it difficult to separate functional fact from marketing fiction. To help dispel it, I’m going to be writing a lot of blog posts this year around edge and edge compute topics with the goal of separating facts from fiction.

The biggest problem is that many vendors use the phrases “edge” and “edge compute” interchangeably, and they are not the same thing. Put simply, the edge is a location, the place in the network that is closest to where the end user or device is. We all know this term, and Akamai has been using it for a long time to reference a physical location in their network. Edge computing refers to a compute model where application workloads run at an edge location, where logic and intelligence are needed. It’s a distributed approach that shifts the computing closer to the user or device being used. This contrasts with the more common scenario where applications run in a centralized data center or in the cloud, which is really just a remote data center usually run by a third party. Edge compute is a service; the “edge” isn’t. You can’t buy “edge”. You are buying CDN services that leverage an edge-based network architecture to perform work at the distributed points of presence closest to where the digital and physical intersect, and this excludes basic caching and forwarding CDN workflows.

When you are deploying an application, the traditional approach would be to host that application on servers in your own data center. More recently, it is likely you would instead choose to host the application in the cloud, with a cloud service provider like Amazon Web Services, Microsoft Azure or Google Cloud Platform. While cloud service providers do offer regional PoPs, most organizations still centralize their deployments in a single region or a small number of regions.

But what if your application serves users in New York, Rome, Tokyo, Guangzhou, Rio de Janeiro, and points in between? The end-user journey to your content begins on the network of their ISP or mobile service provider, then continues over the Internet to whichever cloud PoP or data center the application is running on, which may be half a world away. From an architectural viewpoint, you have to think of all of this as your application infrastructure, and many times the application itself is running far, far away from those users. The idea and value of edge computing turns this around: it pushes the application closer to the users, offering the potential to reduce latency and network congestion, and to deliver a better user experience.

Computing infrastructure has evolved considerably over the years. It began with “bare metal,” physical servers running a single application. Then virtualization came into play, allowing a single physical server to host multiple virtual machines, each with its own operating system and applications. Next came containers, introducing a layer that isolates the application from the operating system, allowing applications to be easily portable across different environments while ensuring uniform operation. All of these computing approaches can be employed in a data center or in the cloud.

In recent years, a new computing alternative has emerged called serverless. This is a zero-management computing environment where an organization can run applications without up-front capital expense and without having to manage the infrastructure. While it is used in the cloud (and could be in a data center, though this would defeat the “zero management” benefit), serverless computing is ideal for running applications at the edge. Of course, where this computing occurs matters when delivering streaming media. Each computing location (on-premises, in the cloud, at the edge) has its pros and cons.

  • On-premises computing, such as in an enterprise data center, offers full control over the infrastructure, including the storage and security of data. But it requires substantial capital expense and costly management. It also means you may need reserve server capacity to handle spikes in demand, capacity that sits idle most of the time, which is an inefficient use of resources. And an on-premises infrastructure will struggle to deliver low-latency access to users who may be halfway around the world.
  • Centralized cloud-based computing eliminates the capital expense and reduces the management overhead, because there are no physical servers to maintain. Plus, it offers the flexibility to scale capacity quickly and efficiently to meet changing workload demands. However, since most organizations centralize their cloud deployments to a single region, this can limit performance and create latency issues.
  • Edge computing offers all the advantages of cloud-based computing plus additional benefits. Application logic executes closer to the end user or device via a globally distributed infrastructure. This dramatically reduces latency and avoids network congestion, with the goal of providing an enhanced and consistent experience for all users.
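To put the latency argument in rough numbers, here is a minimal Python sketch that estimates the physical lower bound on round-trip time over fiber. The distances, the fiber propagation factor and the scenarios are illustrative assumptions, not measurements, and real-world RTTs are always higher due to routing detours, queuing and processing:

```python
# Rough best-case network RTT estimator (illustrative assumptions only).
# Light in fiber travels at roughly two-thirds the speed of light in a
# vacuum; we ignore routing detours, queuing and server processing time,
# so these numbers are hard lower bounds, not predictions.

SPEED_OF_LIGHT_KM_S = 299_792
FIBER_FACTOR = 0.66  # assumed propagation speed in fiber vs. vacuum

def min_rtt_ms(distance_km: float) -> float:
    """Lower bound on round-trip time over fiber for a given distance."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000  # round trip, in milliseconds

# Hypothetical comparison: a Tokyo user reaching a US East cloud region
# versus an edge PoP in their own metro area.
for label, km in [("centralized cloud (~11,000 km away)", 11_000),
                  ("edge PoP (~50 km away)", 50)]:
    print(f"{label}: at least {min_rtt_ms(km):.1f} ms RTT")
```

Even this best-case math shows a gap of well over 100 ms before congestion or routing inefficiencies are considered, which is why proximity, not raw compute power, is the edge’s main selling point.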

There is a trade-off with edge computing, however. The distributed nature of the edge translates into a lower concentration of computing capacity in any one location, which limits what types of workloads can run effectively at the edge. You’re not going to be running your enterprise ERP or CRM application in a cell tower, since there is no business or performance benefit. This leads to the biggest unknown in the market today: which application use cases will best leverage edge compute resources? As an industry, we’re still finding that out.

From a customer use case and deployment standpoint, the edge computing market is so small today that both Akamai and Fastly have told Wall Street that their edge compute services won’t generate significant revenue in the near-term. Speaking about their edge compute services during their Q1 earnings call, Fastly’s CFO said, “2021 is much more just learning what are the use cases, what are the verticals that we can use to land as we lean into 2022 and beyond.” Akamai, which gave out a lot of CAGR numbers at their investor day in March, said they are targeting revenue from “edge applications” to grow 30% in 3-5 years, inside their larger “edge” business, which is expected to grow 2-5% overall in 3-5 years.

Analysts that are calling CDN vendors “edge computing-based CDNs” don’t understand that most CDN services being offered are not leveraging any “edge compute” functions inside the network. You don’t need “edge compute” to deliver video streaming, which, as an example, made up 57% of all the bits Akamai delivered in 2020 across their CDN services, or what they call edge delivery. Akamai accurately defines the video streaming they deliver as “edge delivery”, not “edge compute”. Yet some analysts are taking the proper terminology vendors are using and swapping it out with their own incorrect terms, which only further adds to the confusion in the market.

In simple terms, edge compute is all about moving logic and intelligence to the edge. Not all services or content need an edge compute component, or need to be stored at or delivered from the edge, so we’ll have to wait to see which applications customers use it for. The goal with edge compute isn’t just about improving the user experience but also having a way to measure the impact on the business, with defined KPIs. This isn’t well defined today, but it’s coming over the next few years as we see more use cases and adoption.


CDN Limelight Networks Gives Yearly Revenue Guidance, Update on Turnaround

In my blog post from March of this year I detailed some of the changes Limelight Networks’ new management team is making to set the company back on a path to profitability and accelerated growth. Absent from my post were full-year revenue guidance numbers, as Limelight’s management team was too new at the time to be able to share them with Wall Street. Now, with Limelight having reported Q1 2021 earnings on April 29th, we have better insight into what they expect for the year.

Limelight had revenue of $51.2 million in Q1, down 10% compared to $57.0 million in the first quarter of 2020. This wasn’t surprising, since Limelight’s previous management team didn’t address some network performance issues that resulted in a loss of some traffic. The good news is that Limelight stated during their earnings call that they have since “reduced rebuffer rates by approximately 30%”, “increased network throughput by up to 20% through performance tuning” and believe that over the next 90 days they can create additional performance improvements that will “drive increased market share of traffic from our clients.” For the full year, Limelight expects revenue to be in the range of $220M-$230M, with $20M-$25M in capex spend. Limelight had total revenue of $230.2M in 2020, so at the high end of Limelight’s 2021 projection, the growth of the business would be flat year-over-year.

New management has made some measurable progress addressing some of their short-term headwinds and identifying what they need to work on going forward. Based on some of the changes they have already made, the company expects to benefit from annual cash cost savings of approximately $15M. It’s a good start, but turnarounds don’t happen overnight, and the new management team has only been inside the organization for 90 days. They need to be given more time, at least two quarters of operating the business, before we can expect to see some measurable results and see what growth could look like in Q4 and going into Q1 of 2022. Limelight also announced during their earnings call that they will be holding a strategy update session in early summer to discuss their broader plans to evolve their offerings beyond video, with the goal of taking advantage of their network during low peak times.

Earnings Recap: Brightcove, Google, FB, Verizon, AT&T, Microsoft, Discovery, Comcast, Dish

Here’s a quick recap that highlights the most important numbers you need to know from last week’s Q1 2021 earnings from Brightcove, Google, FB, Verizon, AT&T (HBO Max), Microsoft, Discovery, Comcast (Peacock TV), Dish (Sling TV), Netflix and Twitter. Later this week I’ll cover earnings from Akamai, Fastly, T-Mobile, Fox, Vimeo, Cloudflare, Roku, ViacomCBS and AMC Networks. Disney and fuboTV report the week of May 10th.

  • Brightcove Q1 2021 Earnings: Revenue of $54.8M, up 18% y/o/y, but nearly flat from Q4 revenue of $53.7M; Expects revenue to decline in Q2 to $49.5M-$50.5M. Full year guidance of $211M-$217M. More details: https://bit.ly/2RgZLVE
  • Alphabet Q1 2021 Earnings: Revenue of $55.31B, up 34% y/o/y ($44.6B in advertising); YouTube ad revenue of $6.01B, up 49% y/o/y; cloud revenue of $4.05B, up 46% y/o/y (lost $974M). No details on YouTube TV subs. Added almost 17,000 employees in the quarter. More details: https://lnkd.in/dNrwSVD
  • Facebook Q1 2021 Earnings: Total revenue of $26.1B, up 22% y/o/y; Monthly active users of 2.85B, up 10% y/o/y; Daily active users of 1.88B, up 8% y/o/y; Expects y/o/y total revenue growth rates in Q3/Q4 to significantly decelerate sequentially as they lap periods of increasingly strong growth. More details: https://bit.ly/3tf2o7J
  • Verizon Q1 2021 Earnings: Lost 82,000 pay TV subscribers; added 98,000 Fios internet customers. Has 3.77M pay TV subs. More details: https://lnkd.in/dWrZ9iP
  • AT&T Q1 2021 Earnings: Added 2.7M domestic HBO Max and HBO subscriber net adds; total domestic subscribers of 44.2M and nearly 64M globally. Domestic HBO Max and HBO ARPU of $11.72. WarnerMedia revenue of $8.5B, up 9.8% y/o/y. More details: https://lnkd.in/eZeFy_8
  • Microsoft Q1 2021 Earnings: Total revenue of $41.7B, up 19% y/o/y; Intelligent Cloud revenue of $17.7B, up 33% y/o/y; Productivity and Business Processes revenue of $13.6B, up 15% y/o/y. More details: https://bit.ly/3tdv5BV
  • Discovery Q1 2021 Earnings: Has 15M paying D2C subs, but won’t say how many are Discovery+; Ad supported Discovery+ had over $10 in ARPU; Average viewing time of 3 hours per day, per user. More details: https://lnkd.in/dp8DZG4
  • Comcast Q1 2021 Earnings: Peacock Has 42M Sign-Ups to Date; Lost 491,000 pay TV subscribers; Peacock TV had $91M in revenue on EBITDA loss of $277M. More details: https://bit.ly/3vGnJZw
  • Dish Q1 2021 Earnings: Lost 230,000 pay TV subs; Lost 100,000 Sling TV subs (has 2.37M in total). More details: https://lnkd.in/dD9za-e
  • Netflix Q1 2021 Earnings: Added 3.98M subs (estimate was for 6M); finished the quarter with 208M subs; operating income of $2B more than doubled y/o/y; will spend over $17B on content this year; Q2 2021 guidance of only 1M net new subs. More details: https://bit.ly/33bsdei
  • Twitter Q1 2021 Earnings: Revenue $1.04B, up 28% y/o/y; Average mDAU was 199M, up 20% y/o/y; mDAU growth to slow in coming quarters, when compared to rates during pandemic. More details: https://lnkd.in/dK9PiJf

Too Early To Speculate on The Impact To The CDN Market With the Sale of Verizon’s Media Platform Business

This morning it was announced that private equity firm Apollo Global Management has agreed to acquire Verizon’s Media assets for $5 billion, in a deal expected to close in the second half of this year. Apollo will pay Verizon $4.25 billion in cash, along with preferred interests of $750 million, and Verizon will keep 10% of the new company, which will be named Yahoo. I’m getting many inquiries as to what this means for the CDN market as a whole, since the Verizon Media Platform business (formerly called Verizon Digital Media Services) is part of the sale.

While part of Verizon’s Media Platform business involves content delivery, based in large part on Verizon’s acquisition of CDN EdgeCast in 2013, it’s far too early to speculate what this means for the larger overall CDN market. The Verizon Media Platform business includes a lot of video functionality outside of just video delivery, with ingestion, packaging, data analytics and a deep ad stack for publishers as part of their offering. What pieces of the overall Verizon Media business Apollo will keep, sell, consolidate or double down on with further investment is unknown. For now, it’s business as usual for Verizon’s Media Platform business.

Anyone suggesting that this is good for other CDNs, because there might be less competition in the long run, or bad for other CDNs, because Apollo could double down on their investment in CDN and make the market more competitive, is engaging in pure speculation. It’s too early to know what impact this deal may or may not have on the CDN market.

Netflix Misses Subs Estimate: Added 3.98M Subs In Q1, Will Spend Over $17B on Content This Year

Netflix reported their Q1 2021 earnings, adding 3.98M subscribers in the quarter (estimate was for 6M) and finished the quarter with 208M total subscribers. On the positive side, Netflix reported operating income of $2B which more than doubled year-over-year. The company said they will spend over $17B on content this year and anticipates a strong second half with the return of new seasons of some of their biggest hits and film lineup. More details:

  • Q2 guidance of only 1M net new subs
  • Finished Q1 2021 with 208M paid memberships, up 14% year over year, but below guidance forecast of 210M paid memberships
  • Average revenue per membership in Q1 rose by 6% year-over-year
  • Q1 operating income of $2B, up from $958M in Q1 2020, more than doubled year-over-year. The company exceeded their guidance forecast primarily due to the timing of content spend.
  • Netflix doesn’t believe competitive intensity materially changed in the quarter or was a material factor in the variance as their over-forecast was across all of their regions
  • Netflix believes paid membership growth slowed due to the big Covid-19 pull forward in 2020 and a lighter content slate in the first half of this year, due to Covid-19 production delays

Netflix’s stock is down almost 11% as of 5:12pm ET. Roku is also down 5%, probably seeing an impact from Netflix’s earnings.

The Current State of Ultra-Low Latency Streaming: Little Adoption Outside of Niche Applications, Cost and Scaling Issues Remain

For the past two years we’ve been hearing a lot about low/ultra-low latency streaming, with most of the excitement coming from encoding and CDN vendors looking to up-sell customers on the functionality. But outside of some niche applications, there is very little adoption or demand from customers and that won’t change anytime soon. This is due to multiple factors including a lack of agreed upon definition of what low and ultra-low latency means, the additional cost it adds to the entire streaming stack, scalability issues, and a lack of business value for many video applications.

All the CDNs I have spoken to said that on average, 3% or less of all the video bits they deliver today use DASH-LL with chunked transfer and chunked encoding, with a few CDNs saying it was as low as 1% or less. While Apple’s LL-HLS is also an option, there is no real adoption of it as of yet, even though CDNs are building out for it. The numbers are higher when you go to low-latency, which some CDNs define as 10 seconds or less, using 1 or 2 second segments, with CDNs saying that on average it makes up 20% of the total video bits they deliver.

Low latency and ultra-low latency streaming are hard technical problems to solve on the delivery side of the equation. The established protocols (e.g. chunked CMAF and LL-HLS) call for very small segment sizes, which translate into far more requests to the cache servers than a non-low-latency stream. This can be much more expensive for legacy edge providers to support given the age of their deployed hardware, and it’s why some CDNs have to run a separate network to support it, since low-latency delivery is very I/O intensive and older hardware doesn’t handle it well. As a result, some CDNs don’t have a lot of capacity for ultra-low latency delivery, which means they have to charge customers more to support it. Based on recent pricing I have seen in RFPs, many CDNs charge an extra 15-20% on average, per GB delivered.
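As a back-of-the-envelope illustration of why smaller segments translate into more cache-server requests, here is a short Python sketch; the audience size and segment durations are hypothetical, and it deliberately ignores manifest refreshes, audio-only renditions and request coalescing:

```python
# Requests per second hitting the delivery tier for a live stream, as a
# function of segment duration. Each viewer fetches roughly one media
# segment per segment duration, so halving the segment length doubles
# the request rate for the same audience.

def requests_per_second(viewers: int, segment_seconds: float) -> float:
    return viewers / segment_seconds

viewers = 100_000  # hypothetical concurrent audience
for seconds in (6, 2, 1):
    rps = requests_per_second(viewers, seconds)
    print(f"{seconds}s segments: {rps:,.0f} requests/sec")
```

Moving from 6-second to 1-second segments multiplies request volume by six for the same audience, which is exactly the kind of I/O pressure older cache hardware struggles with.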

Adding to the confusion is the fact that many vendors don’t define what exactly they mean by low or ultra-low latency. Some CDNs have said that low-latency is under 10 seconds and ultra-low latency is 2 seconds or less. But many customers don’t define it that way. As an example, FOX recently published a nice blog post about their streaming workflow for the Super Bowl, calling their low-latency stream “8–12 secs behind” the master feed. They aren’t right or wrong; it’s simply a matter of how each company defines these terms.
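Since the terms are fuzzy, one way to make a definition concrete is to write it down as code. This small Python sketch encodes the thresholds quoted above by some CDNs (2 seconds or less for ultra-low latency, under 10 seconds for low latency); as the FOX example shows, other companies draw these lines differently, so treat the cutoffs as one convention among several:

```python
def classify_latency(glass_to_glass_seconds: float) -> str:
    """Bucket a stream's glass-to-glass delay using one common CDN
    convention: ultra-low is 2 seconds or less, low is under 10 seconds,
    and anything above that is standard HTTP streaming latency."""
    if glass_to_glass_seconds <= 2:
        return "ultra-low latency"
    if glass_to_glass_seconds < 10:
        return "low latency"
    return "standard latency"

# FOX's Super Bowl stream, described as 8-12 seconds behind the master
# feed, straddles the low/standard boundary under this convention.
print(classify_latency(1.5))   # ultra-low latency
print(classify_latency(8.0))   # low latency
print(classify_latency(12.0))  # standard latency
```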

In Q1 of this year I surveyed just over 100 broadcasters, OTT platforms and publishers, asking them how they define ultra-low latency and the applications they want to deploy it for. (See notes at bottom for the methodology.) The results don’t line up with what some vendors are promoting, and that’s part of the problem: there are no agreed-upon expectations. Of those surveyed, 100% said they define ultra-low latency as “2 seconds”, “1 second” or “sub 1 second”. No respondent picked any number higher than 2 seconds. 90% said they were “not willing to pay a third-party CDN more for ultra-low latency live video delivery” and that it “should be part of their standard service”. The biggest downside they noted in ultra-low latency adoption was “cost”, followed by “scalability” and the “impact on ABR implementation.” Of the 10% that were willing to pay more for ultra-low latency delivery, all said they would not pay more than 10% extra per GB delivered.

These cost and scaling problems are why, to date, most of the ultra-low latency delivery we see comes from companies that build their own delivery infrastructure using WebRTC, like I noted in a recent post. Agora has been successful selling their ultra-low latency delivery and I consider them to be the best in the market. They were one of the first to offer an ultra-low latency solution at scale, but note that a large percentage of what they are delivering is audio only, has no video, is mostly in APAC and is being used for two-way communications. Agora defines their ultra-low latency solution as “400 – 800 ms” of latency and their low-latency solution as “1,500 – 2,000 ms” of latency. That’s a lot lower than other solutions I have seen on the market, based on how vendors define these terms.

Aside from the technical issues, and more importantly, many customers don’t see a business benefit from deploying ultra-low latency, except for niche applications. It doesn’t allow them to sell more ads, get higher CPMs or extend users’ viewing times. Of those streaming customers I recently surveyed, the most common video use cases they said ultra-low latency would be best suited for were “betting”, “two-way experience (ie: quiz show, chat)”, “surveillance” and “sports”. These are use cases where ultra-low latency can make the experience better and might provide a business ROI to the customer, but they are very specific video use cases. The idea that every sports event will go to ultra-low latency streaming in the near-term simply isn’t reality. Right now, no live linear streaming service has deployed ultra-low latency, but with fuboTV having disclosed that they want to add betting to their service down the line, an ultra-low latency solution will be needed. That makes sense, but it’s not the live streaming that’s driving the need; the gambling functionality is the business driver for adopting it.

Live sports streaming is one instance where most consumers would probably say they would like to see ultra-low latency implemented, but it’s not up to the viewer. There is ALWAYS a tradeoff for streaming services between cost, QoE and reliability, and customers don’t deploy technology just because they can; it has to provide a tangible ROI. The bottom line is that broadcasters streaming live events to millions at the same time have to make business decisions about what the end-user experience will look like. No one should fault any live streaming service for not implementing ultra-low latency, 4K or any other feature, unless they know what the company’s workflow is, what the limitations are by vendors, what the costs are to enable it, and what KPIs are being used to judge the success of their deployment.

Note on survey data: My survey was conducted in Q1 of 2021 and 104 broadcasters, OTT platforms and publishers responded to the survey, who were primarily based in North America and Europe. They were asked the following questions: how they define ultra-low latency; which applications they would use it for; if they would be willing to pay more for ultra-low latency delivery; the biggest challenges to ultra-low latency video delivery; how much latency is considered too much for sporting events and which delivery vendors they would consider if they were implementing an ultra-low latency solution for live. If you would like more details on the survey, I am happy to provide it, free of charge.

WebRTC is Gaining Deployments for Events With Two-Way Interactivity

While traditional broadcast networks have been able to rely on live content to draw viewers, we all know that younger audiences are spending more time in apps with social experiences. To better connect with young viewers, companies are testing new social streaming experiences that combine Hollywood production, a highly engaging design and in many cases WebRTC technology. (See a previous post I wrote on this topic here: “The Challenges With Ultra-Low Latency Delivery For Real-Time Applications”)

Within the streaming media industry, there is a lot of discussion right now about different low/ultra-low latency technologies for applications requiring two-way interactivity. Many are looking to WebRTC, a specification that allows for real-time communication capabilities on top of an open standard and uses peer-to-peer communication to take video from capture to playback. WebRTC was developed as a standard way to deliver two-way video and provides users with the ability to communicate from within their primary web browser without the need for complicated plug-ins or additional hardware.

WebRTC does pose significant scaling challenges, as few content delivery networks support it natively today. As a result, many companies utilizing WebRTC in their video stack have built out their own delivery infrastructure for their specific application. An example of a social platform doing this would be Caffeine, which built out their own CDN with a few IaaS partners to facilitate the custom stack necessary for them to deliver ultra-low latency relays, along with custom ingest applications to keep latency low glass-to-glass.

Another hurdle to WebRTC streaming is that it incurs a higher cost than traditional HTTP streaming. The low latency space is rapidly evolving in terms of traditional CDNs’ support for ultra-low latency WebRTC relays and HTTP-based low latency standards (LL-HLS and LL-DASH). So cost, sometimes as high as three times regular video delivery, and the ability to scale are still big hurdles for many. You can see what the CDNs are up to with regards to low/ultra-low latency video delivery by reading their posts about it here: Agora, Akamai, Amazon, Limelight, Lumen, Fastly, Verizon, Wowza/Azure.

One problem we have as an industry is that very few companies have put out any real data on how well they have been able to scale their WebRTC-based real-time interactive consumer experiences. One example I know of is that Caffeine disclosed that in 2020, they had 350,000 people tuned in to the biggest event they have done to date, a collaboration with Drake and the Ultimate Rap League (URL). While getting to scale with WebRTC-based video applications is good to see, we can’t really talk about scale unless we also talk about measuring QoE. Most companies run an ABR implementation within WebRTC, adapting the content bitrate to the user’s network connection, similar to multi-variant HTTP streaming but adapting faster thanks to what the WebRTC protocol affords. This is the approach Caffeine has taken, telling me they measure QoE across several dimensions around startup, buffering, dropped frames, network-level issues and video bitrate.

Some want to suggest that low-latency streaming is needed for all streaming events or video applications, but that’s not the case. There are business models where it makes sense, but many others where the streaming experience is passive and doesn’t require two-way interactivity. For platforms that do need it, like Caffeine, people are reacting to one another because of exchanges happening in real time. Chat brings out immediacy amongst participants; whether being called out by a creator or sending digital items to them, fans can change the course of a broadcast in real time, driven by extremely low latency at scale. In these cases, culture, community, tech and production come together to elevate the entertainment to a whole new level. For Caffeine, it works so well that average watch times were over 100 minutes per user in 2020 for their largest live events.

Streaming media technology has transformed traditional forms of media consumption from packaging to distribution. Now with lots of social media streaming taking place, we are seeing interactive experiences continue to evolve, shaping opportunities in content creation, entertainment, monetization and advertising, with live streaming events being the latest. WebRTC is now the go-to technology being used in the video stack for the applications and experiences that need it, but the future of WebRTC won’t be as mainstream as some suggest or for all video services. WebRTC will be a valuable point-solution providing the functionality needed in specific use cases going forward and should see more improvements with regards to scale and distribution in the coming years.