
HDR Streaming Across OTT Services Grows, Setting Up a Battle for the Best User Experience

With AT&T closing its purchase of Time Warner, Viacom merging with CBS, Disney acquiring Fox’s studio and key cable networks, Discovery taking over Scripps Networks and Amazon looking to acquire MGM, content consolidation has been the main focus in the industry. With so many OTT services for consumers to pick from, alongside multiple monetization models (AVOD, SVOD, Free, Hybrid), fragmentation in the market will only continue to grow. We all know that content is king and is the most important element in a streaming media service. But with so many OTT services offering such a good selection of content, the next phase of the OTT industry will be all about the differentiation of quality and experience amongst the services and the direct impact this has on churn and retention.

The fight for eyeballs will primarily be fought over the quality of the user experience, and each of these consolidated players will now own a rich and diversified content portfolio. This can suit all sorts of business models, from offering everything to everyone to targeting niche content at specific audiences. CEOs of these OTT services, and most of the media, tend to focus on the volume of subscribers. For example, Discovery CEO David Zaslav told CNBC recently, “There’s billions of people out there that we could reach in the market.” Most of the attention has focused on how the Discovery-WarnerMedia deal could reach 400M subscribers, Netflix growing beyond its 200M+ global subscribers or Amazon Prime beyond its 175M. Many multiply the number of subscribers per month by ARPU by the total households in a region, and huge numbers start popping out and rousing the financial markets.

However, these numbers being quoted aren’t truly representative of the real opportunity in the market. I would argue that combining companies, brands and content on Excel sheets is a far cry from effectively reaching billions of potential customers with the high QoE needed to keep them from jumping to a competitive service. Whilst secondary to content, streaming services need to keep a keen eye on their technology and what competitive advantages they can offer to reduce churn. One such differentiator for some is the volume of content they offer with support for HDR. By deploying HDR capabilities, media companies could impress their audiences with a richer breadth of color and deeper contrast within HD content, which is still viewed far more than 4K content. Amongst third-party content delivery networks, many tell me that of all the video bits they deliver, less than 5% are in 4K. This makes HDR even more important to support within HD content.

HDR has been on the roadmap for a lot of content owners but historically has been held back largely due to device support. Rendering performance of HDR formats differs across devices, backward compatibility is a mess, APIs are pretty bad and content licensing agreements sometimes allow only for a specific HDR format. While HDR has been a struggle over the last few years, we are finally starting to see some faster adoption amongst streaming media services. For example, last month, Hulu added HDR support to certain shows and movies within its streaming catalog. Most streaming platforms now offer a portion of their video catalog with HDR support and thankfully most TV sets sold are now HDR ready, thanks to support for the HEVC codec. But whilst consumption on smart TVs is significant, HDR is largely excluded from consumption on laptops, tablets, and mobile phones. And that is where you will find the ‘billions of people’ mentioned before. For many of us, our phones and tablets have the most advanced and capable displays that we own, so why restrict HDR delivery to the smart TV?

Contrary to popular belief, HDR is achievable at 1080p and independently of HEVC with MPEG-5 LCEVC, as I previously detailed in a blog post here. LCEVC can add a 10-bit enhancement layer to any underlying codec, including AVC/H.264, thus providing the possibility to upgrade all AVC/H.264 streaming and make it HDR capable, without the need for HEVC. LCEVC is deployable via a software upgrade, so quick rollouts could take place rather than the usual years needed to get hardware updates into a meaningful number of devices. The opportunity to drive up user engagement and ARPU with premium HDR delivery to more of our devices could be a key advantage for one or more of the OTT services in our space. I predict that over the next two years we’re going to see some of the fastest rates of HDR adoption across all streaming media services, and we should keep an eye on what measurable impact this has on the user experience.
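
To make this concrete, here is a minimal, hypothetical sketch (in TypeScript) of how a player might pick a rendition once LCEVC decode is available. The capability flags and rendition labels are my own illustrative assumptions, not part of the LCEVC spec or any particular player SDK.

```typescript
// Hypothetical, simplified player-side rendition selection.
// All flags and labels below are illustrative assumptions.
interface DeviceCaps {
  hdrDisplay: boolean;     // panel can actually render HDR
  hevcHdrDecode: boolean;  // hardware HEVC 10-bit decode available
  lcevcDecode: boolean;    // LCEVC enhancement decodable (can be software)
}

type Rendition = "hevc-hdr" | "avc-lcevc-hdr" | "avc-sdr";

function pickRendition(caps: DeviceCaps): Rendition {
  if (!caps.hdrDisplay) return "avc-sdr";
  if (caps.hevcHdrDecode) return "hevc-hdr";
  // LCEVC adds a 10-bit enhancement layer on top of the AVC/H.264 base,
  // so HDR can still be delivered without an HEVC hardware decoder.
  if (caps.lcevcDecode) return "avc-lcevc-hdr";
  return "avc-sdr";
}

// A phone with a great display but no HEVC 10-bit hardware decode:
console.log(pickRendition({ hdrDisplay: true, hevcHdrDecode: false, lcevcDecode: true }));
// -> "avc-lcevc-hdr"
```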


Former CEO of KIT Digital, Found Guilty on All Charges, Gets 3 Years Probation

Kaleil Isaza Tuzman, the former CEO of KIT Digital who was found guilty of market manipulation, wire fraud, defrauding shareholders, and accounting fraud, was sentenced on September 10th to three years probation by the judge in the case. This is astonishing, as the prosecutors had sought a sentence of 17-1/2 to 22 years in prison. The judge also ordered three years of supervised release.

U.S. District Judge Paul Gardephe sentenced Kaleil to only probation, saying the 10 months Kaleil spent in Colombian prisons was so horrible it would put him at little risk of committing further crimes. “The risk associated with sending Mr. Tuzman back to prison, the risk to his mental health, is just too great,” Gardephe said. “While in many other cases it has been my practice to sentence white-collar defendants for these sorts of crimes to a substantial sentence, in good conscience I can’t do that here.”

It’s a sad state of affairs in our legal system when someone who defrauded investors, lied, caused many employees to lose their jobs and was found guilty on every charge gets probation. Reading through some of the legal filings, though, there appears to be more to the story on what might have impacted his sentence. In a Supplemental Sentencing Memorandum filed on July 7th of this year, it says:

“While incarcerated and during the five years since his release from prison, Kaleil has repeatedly provided material and substantial assistance to the Anti-Corruption Unit of the Colombian Attorney General’s Office, which ultimately resulted in the indictment of a number of government officials in the Colombian National Prison Institute known as “INPEC” (Instituto Nacional Penitenciario y Carcelario)—including the prior warden of La Picota prison, César Augusto Ceballos—on dozens of charges of extortion, assault and murder.” So one wonders if he got a lighter sentence due to information he was providing to the Colombian government.

On the civil side, Kaleil is still being sued by investors in a hotel project who accuse him of stealing $5.4 million and in May 2021, a similar suit was filed in U.S. federal court, asking for $6 million.

For a history of what went on at KIT Digital, you can read my post here from 2013, “Insiders Detail Accounting Irregularities At KIT Digital, Rumors Of A Possible SEC Fraud Investigation”.

Streaming Services Evaluating Their Carbon Footprint, as Consumers Demand Net-Zero-Targets

Right now, almost anyone has access to some sort of video streaming platform that offers the content they value at a satisfactory video quality level, most of the time. But the novelty factor has long worn off and most of the technical improvements are now taken for granted. Of course viewers are increasingly demanding in terms of video quality and absence of buffering, and losing a percentage of viewership due to poor quality means more lost profits than before, but consumers are starting to care about more than just the basics.

Just like in many other industries (think of the car or fashion industries) consumer demands – especially for Generation Z – are now moving beyond “directly observable” features and sustainability is steadily climbing the pecking order of their concerns. To date, this importance has mostly been for physical goods, not digital, but I wonder whether this may be a blind-spot for many in our industry? Remember how many people thought consumers would not care much about where and how their shoes were made? Some large footwear companies sustained heavy losses due to that wrong assumption.

Video streaming businesses should be quick to acknowledge that, whether they like it or not, and whether they believe in global warming or not, they have to have a plan to reach the goal of net-zero emissions. This matters to financial markets, and customer concerns about sustainable practices are here to stay and growing. About half of new capital issues in financial markets are being linked to ESG targets (Environmental, Social and Governance). Sustainability consistently ranks among the top 5 concerns in every survey of Generation Z consumers when it comes to physical goods, and one could argue it’s only a matter of time before this applies to digital services as well.

Back in 2018 I posted that the growth in demand for video streaming had created a capacity gap and that building more and more data centers, plus stacking them with servers, was not a sustainable solution. Likewise, encoding and compression technology has been plagued with diminishing gains for some time, where with each new generation of codec, the increase in compute power required is far greater than the compression efficiency benefit delivered. Combine that with the exponential growth of video services, the move from SD to HD to 4K, the increase in bit depth for HDR, and the dawn of immersive media, and you have a recipe for everything-but-net-zero.

So, what can be done to mitigate the carbon footprint of an activity that is growing exponentially at 40% per year and promises to transmit more pixels at higher bitrates crunched with more power-hungry codecs? Recently Netflix pledged to be carbon neutral by 2022, while media companies like Sky committed to become net zero carbon by 2030. A commonly adopted framework is “Reduce – Retain – Remove”. While many companies accept that they have a duty to “clean up the mess” after polluting, I believe the biggest impact lies in reducing emissions in the first place.

Netflix, on the “Reduce” part of their pledge, aims to reduce emissions by 45% by 2030, and others will surely follow with similar targets. The question is how they can get there. Digital services are starting to review their technology choices to factor in what can be done to reduce emissions. At the forefront of this should be video compression, which typically drives the two most energy-intensive processes in a video delivery workflow: transcoding and delivery.

The trade-off with the latest video compression codecs is that while they increase compression efficiency and reduce energy costs in data transmission (sadly, only for the small fraction of new devices compatible with them), their much higher compute requirements result in increased energy usage for encoding. So the net balance in terms of sustainability is not a slam dunk, especially for operators that deliver video to consumer-owned-and-managed devices such as mobile devices.
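
To see why device coverage matters so much, here is a toy model in TypeScript with made-up numbers: the new codec only saves delivery energy for the fraction of plays that can actually decode it, while the extra encoding energy is paid in full.

```typescript
// Back-of-the-envelope model of the trade-off described above.
// Every number here is an illustrative assumption, not measured data.
interface CodecScenario {
  encodeEnergyKwh: number;  // energy to produce the new encoding ladder
  bitrateSavings: number;   // e.g. 0.4 = 40% fewer bits than the baseline
  deviceCoverage: number;   // fraction of plays able to decode the new codec
}

function netEnergyVsBaseline(
  baselineEncodeKwh: number,
  baselineDeliveryKwh: number,
  s: CodecScenario
): number {
  // Only the covered fraction of traffic benefits from smaller streams;
  // everyone else still receives the baseline encode as a fallback.
  const deliverySaved = baselineDeliveryKwh * s.bitrateSavings * s.deviceCoverage;
  const extraEncode = s.encodeEnergyKwh - baselineEncodeKwh;
  return extraEncode - deliverySaved; // negative = net reduction in energy
}

// A codec that is 10x more expensive to encode, saves 40% of bits,
// but is only decodable on 20% of devices:
console.log(netEnergyVsBaseline(1_000, 50_000, {
  encodeEnergyKwh: 10_000,
  bitrateSavings: 0.4,
  deviceCoverage: 0.2,
})); // 9_000 - 4_000 = 5_000 kWh worse off, in this made-up scenario
```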

One notable option able to improve both video quality and sustainability is MPEG-5 LCEVC, the low-complexity, codec-agnostic enhancement recently standardized by MPEG. LCEVC increases the speed at which encoding is done by up to 3x, thereby decreasing electricity consumption in the data center. At the same time, it reduces transmission requirements, and it does so immediately for a broad portion of the audience, thanks to the possibility of deploying LCEVC to a large number of existing devices, notably all mobile devices. With some help from the main ecosystem players, LCEVC device coverage may become nearly universal very rapidly.

LCEVC is just one of the available technologies with a so-called “negative green premium”: good for the business and good for the environment. Sustainability-enhancing technologies, which earlier may have been fighting for attention among a long list of second-priority profit-optimization interventions, may soon bubble up in priority. The need for sustainability intervention is real and will only become greater in the next few years, so all available solutions should be brought into play. Netflix said it best in their 2020 ESG report: “If we are to succeed in entertaining the world, we need a habitable, stable world to entertain.”

Real-World Use Cases for Edge Computing Explained: A/B Testing, Personalization and Privacy

In a previous blog post, [Unpacking the Edge Compute Hype: What It Really Is and Why It’s Important], I discussed what edge computing is—and what it is not. Edge computing offers the ability to run applications closer to users, dramatically reducing latency and network congestion and providing a better, more consistent user experience. Growing consumer demand for personalized, high-touch experiences is driving the need to move application functionality to the edge. But that doesn’t mean edge compute is right for every use case.

There are some notable limitations and challenges to be aware of. Many industry analysts are predicting every type of workload will move to the edge, which is not accurate. Edge compute requires a microservices architecture that doesn’t rely on monolithic code, and the edge is a new destination for code, so best practices and operational standards are not yet well defined or well understood.

Edge compute also presents some unique challenges around performance, security and reliability. Many microservices require response times in the tens of milliseconds, requiring extremely low latency. Yet providing sophisticated user personalization consumes compute cycles, potentially impacting performance. With edge computing services, there is a trade-off between performance and personalization.

Microservices also rely heavily on APIs, which are a common attack vector for cybercriminals, so protecting API endpoints is critical and easier said than done given the vast number of APIs. Reliability can be a challenge given the “spiky” nature of edge applications due to variations in user traffic, especially during large online events. Given these realities, which functions are the most likely candidates for edge compute in the near term? I think the best use cases fall into four categories.

A/B Testing
This use case involves implementing logic to support marketing campaigns by routing traffic based on request characteristics and collecting data on the results. This enables companies to perform multivariate testing of offers and other elements of the user experience, refining their appeal. This type of experimental decision logic is typically implemented at the origin, requiring a trip to the origin in order to make the A/B decisions on which content to serve to each user. This round-trip adds latency that decreases page performance for the request. It also adds traffic to the origin, increasing congestion and requiring additional infrastructure to handle the traffic.

Placing the logic that governs A/B testing at the edge results in faster page performance and decreased traffic to origin. Serverless compute resources at the edge determine which content to deliver based on the inbound request. Segment information can be stored in a JavaScript bundle or in a key-value store, with content served from the cache. This decreases page load time and reduces the load on the origin infrastructure, yielding a better user experience.
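
As a rough illustration, edge decision logic like this might look something like the sketch below. The request shape and the in-memory key-value map are hypothetical stand-ins, since each edge platform (Akamai EdgeWorkers, Cloudflare Workers, Fastly Compute@Edge and so on) exposes its own APIs.

```typescript
// Minimal sketch of A/B decision logic at the edge (hypothetical shapes).
interface EdgeRequest { url: string; cookies: Map<string, string>; }

// Stand-in for variant content held in an edge key-value store or JS bundle.
const edgeKV = new Map<string, string>([
  ["homepage-hero:v0", "<section>control hero</section>"],
  ["homepage-hero:v1", "<section>test hero</section>"],
]);

// Deterministic hash so a given user always lands in the same arm.
function pickVariant(userId: string, experiment: string, arms = 2): number {
  let h = 0;
  for (const ch of userId + experiment) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return h % arms;
}

function handleRequest(req: EdgeRequest): string {
  const userId = req.cookies.get("uid") ?? "anonymous";
  const variant = pickVariant(userId, "homepage-hero");
  // The decision and the content lookup both happen at the edge,
  // so no round trip to origin is needed.
  return edgeKV.get(`homepage-hero:v${variant}`) ?? "<!-- default content -->";
}

console.log(handleRequest({ url: "/", cookies: new Map([["uid", "user-123"]]) }));
```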

Personalization
Companies are continually seeking to deliver more personalized user experiences to increase customer engagement and loyalty in order to drive profitability. Again, the functions of identifying the user and determining which content to present typically reside at the origin. This usually means personalized content is uncacheable, resulting in low offload and a negative impact on performance. Instead, a serverless edge compute service can be used to detect the characteristics of inbound requests, rapidly identifying unique users and retrieving personalized content. This logic can be written in JavaScript at the edge, and personalized content can be stored in a JavaScript bundle or in a key-value store at the edge. Performing this logic at the edge provides highly personalized user experiences while increasing offload, enabling a faster, more consistent experience.
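
A similarly hedged sketch of this pattern, with hypothetical request fields and an in-memory stand-in for the edge key-value store, might look like this:

```typescript
// Hypothetical sketch of personalization logic at the edge.
interface EdgeRequest {
  cookies: Map<string, string>;
  geoCountry: string;                      // e.g. populated by the platform
  deviceType: "mobile" | "desktop" | "tv";
}

// Stand-in for personalized fragments held in an edge key-value store.
const edgeKV = new Map<string, string>([
  ["promo:US:mobile", "<div>mobile promo for US users</div>"],
  ["promo:default", "<div>generic promo</div>"],
]);

function personalizedFragment(req: EdgeRequest): string {
  // Derive a coarse segment from request characteristics without a trip
  // to origin; an explicit segment cookie wins if one is present.
  const segment = req.cookies.get("segment") ?? `${req.geoCountry}:${req.deviceType}`;
  // The personalized fragment comes from the edge store, so the rest of
  // the page can stay cacheable (higher offload).
  return edgeKV.get(`promo:${segment}`) ?? edgeKV.get("promo:default") ?? "";
}

console.log(personalizedFragment({
  cookies: new Map(),
  geoCountry: "US",
  deviceType: "mobile",
})); // -> mobile promo for US users
```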

Privacy Compliance
Businesses are under growing pressures to safeguard their customers’ privacy and comply with an array of regulations, including GDPR, CCPA, APPI, and others, to avoid penalties. Compliance is particularly challenging for data over which companies may have no control. One important aspect of compliance is tracking consent data. Many organizations have turned to the Transparency and Consent Framework (TCF 2.0) developed by the Interactive Advertising Bureau (IAB) as an industry standard for sending and verifying user consent.

Deploying this functionality as a microservice at the edge makes a lot of sense. When the user consents to tracking, state-tracking cookies that enable a personalized user experience are added to the session. If the user does not consent, the cookie is discarded and the user has a more generic experience that does not involve personal information. Performing these functions at the edge improves offload and enables cacheability, allowing extremely rapid lookups. This improves the user experience while helping ensure privacy compliance.
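
As a simplified illustration, the consent gate might be sketched like this. The cookie names and the consent check are placeholders; a real deployment would decode the IAB TCF 2.0 consent string and honor its specific purposes and vendors rather than just checking that it exists.

```typescript
// Sketch of consent-aware handling at the edge (all names are illustrative).
interface EdgeRequest { cookies: Map<string, string>; }
interface EdgeResponse { body: string; setCookies: string[]; dropCookies: string[]; }

function hasTrackingConsent(req: EdgeRequest): boolean {
  // Placeholder check: a real implementation would parse the TC string.
  const tcString = req.cookies.get("euconsent-v2");
  return tcString !== undefined && tcString.length > 0;
}

function applyConsent(
  req: EdgeRequest,
  personalizedBody: string,
  genericBody: string
): EdgeResponse {
  if (hasTrackingConsent(req)) {
    // Consent given: keep state-tracking cookies, serve the personalized page.
    return { body: personalizedBody, setCookies: ["session_state=opaque-token"], dropCookies: [] };
  }
  // No consent: drop tracking state and serve the generic, cacheable page.
  return { body: genericBody, setCookies: [], dropCookies: ["session_state"] };
}

const resp = applyConsent({ cookies: new Map() }, "<p>hi Jane</p>", "<p>hello</p>");
console.log(resp.body); // -> "<p>hello</p>" since no consent cookie was present
```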

Third-Party Services
Many companies offer “productized” services designed to address specific, high-value needs. For example, the A/B testing discussed earlier is often implemented using such a third-party service in conjunction with core marketing campaign management applications. These third-party services are often tangential to the user’s request flow. When implemented in the critical path of the request flow, they add latency that can affect performance. Moreover, scale and reliability are beyond your control, which means the user experience is too. Now imagine this third-party code running natively on the same serverless edge platform that handles the user’s originating request. Because the code is local, latency is reduced. And the code is now able to scale to meet changing traffic volumes, improving reliability.

One recent example of this was the partnership between Akamai and the Queue-It virtual waiting room service. The service allows online customers to retain their place in line while providing a positive waiting experience and reducing the risk of a website crash due to sudden spikes in volume. The partnership was focused specifically on providing an edge-based virtual waiting room solution to handle traffic during the rush to sign up for COVID vaccinations. The same approach could be used for any online event where traffic spikes are expected, such as ticket reservations for a sought-after concert or theater event, now that these venues are poised to open back up.
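
A heavily simplified sketch of that kind of edge gate is below. The pass cookie, session counter and capacity threshold are all assumptions for illustration, not how Queue-It or Akamai actually implement it.

```typescript
// Illustrative sketch of a virtual waiting-room gate at the edge.
interface EdgeRequest { url: string; cookies: Map<string, string>; }

const MAX_ACTIVE_SESSIONS = 50_000;   // assumed protected-origin capacity
let activeSessions = 49_000;          // stand-in for a shared edge counter

function routeRequest(req: EdgeRequest): "origin" | "waiting-room" {
  // Users already admitted carry a pass and go straight through.
  if (req.cookies.get("wr_pass") === "admitted") return "origin";
  // Otherwise admit only while the origin has headroom; everyone else gets
  // the cached waiting-room page served entirely from the edge.
  if (activeSessions < MAX_ACTIVE_SESSIONS) {
    activeSessions++;
    return "origin";
  }
  return "waiting-room";
}

console.log(routeRequest({ url: "/signup", cookies: new Map() })); // -> "origin"
```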

Conclusion
These examples highlight how important it is to understand and think carefully about what functions make sense to run at the edge. It’s true that some of these use cases may be met by traditional centralized infrastructure. But consider the reduction in overhead, the speed and efficiency of updating functionality, and the performance advantages gained by executing them at the edge. These benefit service providers and users alike. Just as selecting the right applications for edge compute is critical, so is working with the right edge provider. In this regard, proximity matters.

Generally speaking, the closer edge compute resources are to the user, the better. Beware of service providers running code in more centralized nodes that they call “the edge.” And be sure they can deliver the performance, reliability and security needed to meet your service objectives, based on the methodology you choose, while effectively managing risk.

The edge compute industry and the market for these services is an evolving landscape that’s only just starting off. But there is a growing list of use cases that can benefit now from edge compute deployed in a thoughtful way. We should expect to see more use cases over the next 18 months as edge computing adoption continues and companies look at ways to move logic and intelligence to the edge.

Streaming Summit at NAB Show Returns, Call For Speakers Now Open

It’s back! I am happy to announce the return of the NAB Show Streaming Summit, taking place October 11-12 in Las Vegas. The call for speakers is now open and lead gen opportunities are available. The show will be a hybrid event this year, with both in-person and remote presentations. See the website for all the details or contact me with your ideas on how you want to be involved.

The topics covered will be created based on the submissions sent in, but the show covers both business and technology topics including: bundling of content; codecs; transcoding; live streaming; video advertising; packaging and playback; monetization of video; cloud-based workflows; direct-to-consumer models; the video ad stack and other related topics. The Summit does not cover topics pertaining to video editing, pre/post production, audio-only applications, content scripts and talent, content rights and contracts, or video production hardware.

Please reach out to me at (917) 523-4562 or via email anytime if you have questions on the submission process or want to discuss an idea before you submit. I always prefer speaking directly to people about their ideas so I can help tailor your submission to what works best. Interested in moderating a session? Please contact me ASAP!

Apple Using Akamai, Fastly, Cloudflare For Their New iCloud Private Relay Feature

[Updated October 18, 2021: Apple’s iCloud Private Relay feature, which was being powered during the beta by Akamai, Fastly and Cloudflare, will officially launch with iOS 15 in “beta”. It will no longer be enabled by default, due to incompatibility issues with some websites.]

On Monday, Apple announced some new privacy features in iCloud, one of which they are calling Private Relay. The way it works is that when you go to a website using Safari, iCloud Private Relay takes your IP address to connect you to the website and then encrypts the URL so that app developers, and even Apple, don’t know what website you are visiting. The IP and encrypted URL then travel to an intermediary relay station run by what Apple calls a “trusted partner”. In a media interview published yesterday, Apple would not say who the trusted partners are, but I can confirm, based on public details (as shown below; Akamai on the left, Fastly on the right), that Akamai, Fastly and Cloudflare are being used.

On Fastly’s Q1 earnings call, the company said they expect revenue growth to be flat quarter-over-quarter going into Q2, but that revenue growth would accelerate in the second half of this year. The company also increased their revenue guidance range to $380 million to $390 million, up from $375 million to $380 million. Based on the guidance numbers, Fastly would be looking at a pretty large ramp of around 15% sequential growth in the third and fourth quarters. Fastly didn’t give any indication of why they thought revenue might ramp so quickly, but did say that there are “a lot of really important opportunities that are coming our way.” By itself, this new traffic generated from Apple isn’t that large when it comes to overall revenue and is being shared amongst three providers. This news comes at an interesting time, as this morning Fastly had a major outage on their network that lasted about an hour.

Rebuttal to FCC Commissioner: OTT, Cloud and Gaming Services Should Not Pay for Broadband Buildout

Brendan Carr, commissioner of the Federal Communications Commission (FCC), published an op-ed on Newsweek entitled “Ending Big Tech’s Free Ride.” In it, he suggests that companies such as Facebook, Apple, Amazon, Netflix, Microsoft, Google and others should pay a tax for the build-out of broadband networks to reach every American. In his post he blames streaming OTT services, as well as gaming services like Xbox and cloud services like AWS, for the volume of traffic on the Internet. There are a lot of factual problems with his post from both a business and technical standpoint, which is always one of the main problems when regulators get involved in topics like this. They don’t focus on the facts of the case but rather on their “opinions” disguised as facts. The Commissioner references a third-party post-doctoral paper as the basis of his argument, which contains many factual errors when it comes to numbers disclosed by public companies, some of which I highlight below.

The federal government currently collects roughly $9 billion a year through a tax on traditional telephone services—both wireless and wireline. That pot of money, known as the Universal Service Fund, is used to support internet builds in rural areas. The Commissioner suggests that consumers should not have to pay that tax on their phone bill for the buildout of broadband and that the tax should be paid by large tech companies instead. He says that tech companies have been getting a “free ride” and have “avoided paying their fair share.” He writes that “Facebook, Apple, Amazon, Netflix and Google generated nearly $1 trillion in revenues in 2020 alone,” saying it “would take just 0.009 percent of those revenues” to pay for the tax. The Commissioner is ignoring the fact that in 2020, Apple made 60% of their revenue overseas and that much of it comes from hardware, not online services. Apple’s “services” revenue, as the company defines it, made up 19% of their total 2020 revenue. So the relevant number is much, much lower than $1 trillion if you’re counting revenue from actual online services that use broadband to deliver content.

If the Commissioner wants to tax a company that makes hardware, why isn’t Ford Motor Company on the list? They make physical products but also have a “mobility” division that relies on broadband infrastructure for their range of smart city services. Without that broadband infrastructure, Ford would not be able to sell mobility services to cities or generate any revenue for their mobility division. You could extend this notion to all kinds of companies that make revenue from physical goods, or to commerce companies like eBay, Etsy, Target and others. Yet the Commissioner specifically calls out video streaming as the problem and references a paper written in March of this year as his evidence. The problem is that the paper is full of so many factually wrong numbers and definitions that it can’t even get the pricing of streaming services right, something the Commissioner clearly hasn’t noticed.

The paper says YouTube is “on track to earn more than $6 billion in advertising revenue for 2020”. No, YouTube generated $19.7 billion in advertising revenue in 2020. The paper also says that Hulu’s live service costs $4.99 a month, when in actuality it costs $65 a month. It also says that the on-demand version of Hulu costs $11.99 a month, when it costs $5.99 a month. There are many instances of numbers like this in the report that can’t be debated; they are simply wrong. Full stop. The authors say the goal of the paper is to look at the “challenge of four rural broadband providers operating fiber to the home networks to recover the middle mile network costs of streaming video entertainment.” They say that “subscribers pay about $25 per month subscriber to video streaming services to Netflix, YouTube, Amazon Prime, Disney+, and Microsoft.” That’s not accurate. YouTube is free. If they mean YouTube TV, that costs $65 a month. The paper also uses words like “presumed” and “assumptions” when making its arguments, which means those arguments aren’t based on facts.

The paper also points out that the data and methodology used to reach its conclusions “has limitations,” since traffic is measured “differently” amongst broadband providers. So only a slice of the overall data is being used in the report, and the collection methodology isn’t consistent amongst all the providers. We’re only seeing a small window into the data being used in the report, yet the Commissioner is referencing this paper as his “evidence.” The paper also references industry terms from as far back as 2012, saying they have been “adapted” to today, which is always a red flag for accuracy.

The paper also incorrectly states that “The video streaming entertainment providers do not contribute to middle or last mile network costs. The caching services provided by Netflix and YouTube are exclusionary to the proprietary services of these platforms and entail additional costs for rural broadband providers to participate.” Netflix and others have been putting caches inside ISP networks for FREE, which saves the ISP money on transit. Apple will also work with ISPs via their Apple Edge Cache program. For anyone to suggest that big tech companies don’t spend money to build out infrastructure for consumers’ benefit is simply false. Some ISPs choose not to work with content companies offering physical or virtual caches, but that’s a business decision they have made on their own. In addition, when a consumer signs up for a connection to the Internet from an ISP, the ISP is in the business of adding capacity to support whatever content the consumer wants to stream. That is the ISP’s business, and there is no valid argument that an ISP should not have to spend money to support the user.

The paper states that, “Rural broadband providers generally operate at close to breakeven with little to no profit margin. This contrasts with the double-digit profit margins of the Big Streamers.” Disney’s direct-to-consumer streaming division, which includes Disney+, Hulu, ESPN+ and Hotstar, lost $466 million in Q1 of this year. What “double-digit profit margins” is the paper referencing? Again, they don’t know the numbers. The paper also gets wrong many of its explanations of what a CDN is, how it works, and how companies like Netflix connect their network to an ISP like Comcast. The paper also shows the logos of Hulu and Disney+ on a chart listing them under the Internet “backbone” category, when of course the parent owner of those services, The Walt Disney Company, doesn’t own or operate a backbone of any kind. The paper argues that since rural ISPs have no scale, they can’t launch “streaming services of their own” like AT&T has. Of course, this is 100% false, and there are many third-party companies in the market that have packaged together content, ready to go, that any ISP can re-sell as a bundle to their subscribers, no matter how many subscribers they have.

According to a 2020 report from the Government Accountability Office (GAO), the FCC’s number one challenge in targeting and identifying unserved areas for broadband deployment was the accuracy of the FCC’s own broadband deployment data. Congress recently provided the FCC with $98 million to fund more precise and granular maps. You read that right: the FCC was given $98 million to create maps. In March of 2020, Acting FCC Chairwoman Jessica Rosenworcel said these maps could be produced in “a few months,” but that estimate has now been changed to 2022. Some Senators have taken notice of the delay and have demanded answers from the FCC.

It’s easy to suggest that someone else should pay a tax without offering any details on who exactly it would apply to, how much it would be, what classifications would be included or omitted, which services would or would not fall under the rule, how much would need to be collected and over what period of time. But this is exactly what Commissioner Carr has done by calling out companies, by name, that he thinks should pay a tax, all while providing no details or proposal and referencing a paper filled with factual errors. I have contacted the Commissioner’s office and offered him an opportunity to come to the next Streaming Summit at NAB Show, October 11-12, and debate this topic with me in person. If accepted, I will only focus on the facts, not opinions.