
Reviewing Fastly’s New Approach To Load Balancing In The Cloud

Load balancing in the cloud is nothing new. Akamai, Neustar, Dyn, and AWS have been offering DNS-based cloud load balancing for a long time. This cloud-based approach has many benefits over more traditional appliance-based solutions, but there are still a number of shortcomings. Fastly's recently released load balancer product takes an interesting new approach, which the company says addresses many of those original challenges.

But before delving into the merits of cloud-based load balancing, let's take a quick look at the more traditional approach. The global and local load balancing market has long been dominated by appliance vendors like F5, Citrix, A10, and others. While addressing key technology requirements, appliances have several weaknesses. First, they are costly to maintain: you need specialized IT staff to configure and manage them, not to mention the extra space and power they take up in your data center. Then there are the high support costs, sometimes as much as 20-25% of the total yearly cost of the hardware. Appliance-based load balancers are also notoriously difficult to scale; you can't just "turn on another appliance" in response to flash traffic or a sudden surge in the popularity of your website. Finally, these solutions don't fit into the growing cloud model, which requires you to be able to load balance within and between Amazon AWS, Google Cloud, and Microsoft Azure.

Cloud load balancers address the shortcomings of appliance-based solutions. However, the fact that they are built on top of DNS creates some new challenges. Consider a user who wants to connect to www.example.com. A DNS query is generated, and the DNS-based load balancing solution decides which region/location to send the user to based on a few variables. The browser/end-user then caches that answer, typically for a minute or more. There are two key problems with this approach: the DNS time-to-live (TTL) and caching, and the minimal number of variables the load balancer can use to make the optimal decision.

The first major flaw with this approach is that DNS-based load balancing depends on a mechanism that was designed to help with the performance of DNS. DNS has a built-in performance mechanism where the answer returned for a DNS query can be cached for a time period specified by the server. This is called the Time to Live, or TTL, and the lowest value most sites use is between 30-60 seconds. However, most browsers have implemented their own caching layer that can override the TTL specified by the server. In fact, some browsers cache for 5-10 minutes, which is an eternity when a region or data center fails and you need to route end users to a different location. Granted, modern browsers have improved their response time as it relates to TTL, but there are a ton of older browsers and various libraries that still hold on to cached DNS responses for 10+ minutes.
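To make the TTL mechanics concrete, here is a minimal sketch (assuming the third-party dnspython package and using www.example.com purely as a placeholder) that looks up a record and prints the TTL the server is advertising; the point is that browsers and stub resolvers are free to hold on to the answer even longer than this value.

```python
# Minimal sketch: inspect the TTL attached to a DNS answer.
# Assumes the third-party "dnspython" package (pip install dnspython);
# the hostname is a placeholder.
import dns.resolver

answer = dns.resolver.resolve("www.example.com", "A")
print("Records:", [rr.to_text() for rr in answer])
print("Advertised TTL (seconds):", answer.rrset.ttl)
# Browsers and OS resolvers may cache beyond this TTL, which is why
# DNS-based failover can lag well behind the value the server sets.
```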

The second major flaw with DNS-based load balancing solutions is that the provider can only make a decision based on the IP of the querying recursive DNS server or, less frequently (if the provider supports it), the end-user IP. Most often, the DNS-based solution receives a DNS query for www.example.com and the load balancer looks at the IP address of the querying system, which is generally just the end user's DNS resolver and is often not even in the same geography. The DNS-based load balancer has to make decisions based solely on this input. It doesn't know anything about the specific request itself – e.g. the path requested, the type of content, whether the user is logged in or not, particular cookie or header values, etc. It only sees the querying IP address and the hostname, which severely limits its ability to make the best possible decision.

Fastly says their new application-aware load balancer is built in such a way that it avoids these problems. It's basically a SaaS service built on top of their 10+ Tbps platform, which already provides CDN, DDoS protection, and web application firewall (WAF) services. Fastly's load balancer makes all of its load balancing decisions at the HTTP/HTTPS layer, so it can make application-specific decisions on every request, overcoming the two major flaws of the DNS-based solutions. Fastly also provides granular control, including the ability to make different load balancing decisions and set different ratios based on cookie values, headers, whether a user is logged in (and if they are a premium customer), what country they come from, etc. Decisions are also made on every request to the customer's site or API, not just when the DNS cache expires. This allows for sub-second failover to a backup site if the primary is unavailable.
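To illustrate the difference request-level balancing makes, here is a hedged Python sketch of per-request routing logic of the kind described above. It is not Fastly's implementation or API (Fastly exposes this through its edge configuration); the origin hosts, cookie names, and the 80/20 ratio are all hypothetical.

```python
# Illustrative sketch only: the kind of per-request decision an
# application-aware (HTTP-layer) load balancer can make. Origin hosts,
# cookie names, and ratios are hypothetical, not Fastly's actual config.
import random

ORIGINS = {"primary": "origin-a.example.net", "backup": "origin-b.example.net"}

def choose_origin(path, cookies, headers, country, primary_healthy=True):
    # Evaluated on every request, so failover can happen in under a second.
    if not primary_healthy:
        return ORIGINS["backup"]
    # Header-based rule: explicitly route canary traffic to the backup pool.
    if headers.get("X-Canary") == "1":
        return ORIGINS["backup"]
    # Cookie-based rule: keep logged-in premium users on the primary origin.
    if cookies.get("session") and cookies.get("plan") == "premium":
        return ORIGINS["primary"]
    # Geography + path rule: split German API traffic 80/20 across origins.
    if country == "DE" and path.startswith("/api/"):
        return ORIGINS["primary"] if random.random() < 0.8 else ORIGINS["backup"]
    return ORIGINS["primary"]

# Example: a logged-out German API request lands on either origin.
print(choose_origin("/api/v1/items", {}, {}, "DE"))
```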

The other main difference is that Fastly’s Load Balancer, like the rest of their services, is developed on their single edge cloud platform, allowing customers to take advantage of all the benefits of this platform. For example, they can create a proactive feedback loop with real-time streaming logs to identify issues faster and instant configuration changes to address these issues. You can see more about what Fastly is doing with load balancing by checking out their recent video presentation from the CDN Summit last month.


When It Comes To Cache Hit Ratio And CDNs, The Devil Is In The Details

The term "cache hit ratio" is used so widely in the industry that it's hard to tell exactly what it means anymore, or what methodology sits behind how it's measured. When Sandpiper Networks first invented the concept of a CDN (in 1996), and Akamai took it to the next level by distributing the caching proxy Squid on a network of global servers, the focus of that caching was largely images. But now we need to ask ourselves if focusing on overall cache hit ratio as a success metric is the best way to measure performance on a CDN.

In the late '90s, much of the Internet's web applications were served from enterprises with on-premises data centers, generally over much lower bandwidth pipes. One of the core issues Akamai solved was relieving bandwidth constraints at localized enterprise data centers. Caching images was critical to moving bandwidth off the local networks and bringing content closer to the end user.

Fast forward 20 years and the Internet of today is very different. Pipes are bigger, applications are more complicated, and users are more demanding with respect to the performance, availability, and security of those applications. So, in this new Internet, is the total cache hit ratio for an application a good enough metric, or is there a devil in the details? Many CDNs boast of their customers achieving cache hit ratios around 90%, but what does that really mean, and is it really an indicator of good performance?

To get into cache hit ratios we must think about the elements that make up a webpage. Every webpage delivered to a browser is composed of an HTML document plus other assets, including images, CSS files, JS files, and Ajax calls. HTTP Archive tells us that, on average, a web page contains about 104-108 objects coming from 19 different domains. The average breakdown of asset types served per webpage across all HTTP Archive sites tested looks like this:

Most of the assets being delivered per web page are static. On average, 9 may specifically be of content type HTML (and therefore potentially dynamic), but usually only one will be the initial HTML document. An overall cache hit rate for all of these objects tells us what percentage of them are being served from the CDN, but it does not give developers the details they need to truly optimize caching. A modern web application should have most of the images, CSS files, and other static objects served from cache. Does a 90% cache hit ratio on the above page tell you enough about the performance and scalability of the application serving that page? Not at all.

The performance and scalability of a modern web application are often largely dependent on its ability to process and serve the HTML document. Producing the HTML document is very often the largest consumer of compute resources in a web application. When more HTML documents are served from cache, less compute resource is consumed and applications therefore become more scalable.

HTML delivery time is also critical to page load time and start render time, since the HTML document is the first object delivered to the browser and a blocker to all other resources being delivered. Generally, serving HTML from cache can cut HTML delivery time to circa 100ms and significantly improve user experience and the perception of page speed. Customers should seek to understand the cache hit ratio by asset type so developers can specifically target improvements in cache hit rates for each asset type, resulting in faster page load times and a more scalable application.
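If your CDN only reports an overall number, one rough way to see the per-asset-type picture yourself is to group edge logs by content type. Here is a minimal sketch, assuming a CSV log export with content_type and cache_status columns (these field names are placeholders; real CDN log formats vary by provider):

```python
# Minimal sketch: compute cache hit ratio broken down by asset type.
# Assumes a CSV log export with "content_type" and "cache_status" columns;
# real CDN log field names vary by provider.
import csv
from collections import defaultdict

hits = defaultdict(int)
totals = defaultdict(int)

with open("cdn_logs.csv", newline="") as f:
    for row in csv.DictReader(f):
        asset_type = row["content_type"].split(";")[0].strip()  # e.g. "text/html"
        totals[asset_type] += 1
        if row["cache_status"].upper() == "HIT":
            hits[asset_type] += 1

for asset_type in sorted(totals):
    ratio = hits[asset_type] / totals[asset_type]
    print(f"{asset_type:25s} {ratio:6.1%}  ({hits[asset_type]}/{totals[asset_type]})")
```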

For example, seeking closer to 100% cache hit rates for CSS files, JS files, and possibly images would seem appropriate, as would understanding what cache hit rate is being achieved on the HTML.

[*Snapshots from the section.io portal]

While not all HTML can be served from cache, the configurability of cache solutions like Varnish Cache (commercially available through Varnish Software, section.io and Fastly) and improved HTML management options such as HTML streaming (commercially available from Instart Logic and section.io) have made it possible to cache HTML. In addition, new developer tools such as section.io’s Developer PoP allow developers to more safely configure and deploy HTML caching without risking incidents in production.
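As a simple illustration of what caching HTML can look like from the origin side (separate from the vendor-specific tooling mentioned above), here is a hedged Flask sketch; the framework, route, and TTL values are assumptions for the example. The s-maxage directive lets a shared cache such as Varnish serve the document while browsers keep revalidating, so content changes and edge purges still take effect quickly.

```python
# Illustrative sketch: marking the HTML document as cacheable by an edge
# cache (e.g. Varnish). Framework (Flask), route, and TTLs are assumptions
# for this example, not a recommendation from any specific vendor.
from flask import Flask, make_response, render_template_string

app = Flask(__name__)

@app.route("/")
def home():
    resp = make_response(render_template_string("<h1>Hello, cached HTML</h1>"))
    # s-maxage applies to shared caches (CDN/edge); max-age=0 keeps browsers
    # revalidating so changes and purges propagate quickly.
    resp.headers["Cache-Control"] = "public, max-age=0, s-maxage=60"
    return resp

if __name__ == "__main__":
    app.run()
```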

Many CDNs focus on overall cache hit rate because they do not encourage their users to cache HTML. A 90% cache hit rate may sound high, but when you consider that the 10% of elements not cached are the most compute-heavy, a different picture emerges. By exposing the cache hit ratio by asset type, developers can see the full picture of their caching and optimize accordingly. That lets the builders and managers of web applications more effectively understand and improve the performance, scalability, and user experience of their applications, and it is where the industry needs to head.

New Report Reveals Low TV Network Brand Recognition Among Young Millennials, Here's What It Means For Business

A new report from ANATOMY entitled "The Young and the Brandless" ranks seven key TV and OTT networks according to their digital performance and brand recognition among young millennials (18-26). The report reveals what raises a TV network brand's relevance in digital environments.

The biggest difference between TV brands with high and low brand recognition is a user experience-first strategy, ANATOMY’s report suggests. But, according to ANATOMY CEO Gabriella Mirabelli, “While networks consistently indicate that the viewer is at the center of their thinking, they don’t seem to actually analyze how users truly behave.” User experience is the key to making a TV network relevant in digital spaces. TV brands with higher brand recognition among young millennials (e.g., Netflix) are extremely social media savvy. ANATOMY found that these brands know when to post and what to post on to drive higher rates of engagement. For example, according to ANATOMY, Facebook posts published between 12-3 PM generate “236% more engagements” (reactions, shares, and comments).

A TV network's website or app is also an important touchpoint for its brand in digital spaces, but people judge websites quickly, in about "3.42 seconds", according to ANATOMY. TV networks with higher brand recognition had easy-to-use user interfaces on their websites and apps. They made it easy for people to watch shows, discover new content, and find information about shows.

There is a lot of other great data in the report, which you can download for free here.

Media and Web Performance CDN Pricing Survey: Raw Data Now Available

In April I completed my yearly pricing survey, asking customers of third-party content delivery networks what they pay, which vendor(s) they use, and how much their traffic is growing, among a host of other questions. New this year, I also collected data on web performance pricing. If you are interested in purchasing all of the raw data, minus the customers' names, please contact me (917-523-4562). The media CDN pricing raw data is from over 600 customers and the web performance pricing data is from over 50 customers. I can also collect custom data around third-party CDN services.

Conviva Raises 6th Round Of Funding Totaling $40M, Has Raised $112M To Date

This morning Conviva announced their sixth round of funding in the amount of $40M, with money coming from a new investor, Future Fund, along with several existing investors. To date, Conviva has raised a total of $112M. Conviva has been the longest-operating vendor in the market offering content owners the ability to measure the QoE of their OTT offerings, with the company saying they have 200 video publishing brands and service providers as customers, including the likes of HBO, SKY, and Turner.

While the company won't disclose any revenue numbers, the number I keep hearing whispered in the industry is that Conviva did around $70M in revenue in 2016. I have no way to verify that, but a former Conviva employee told me they wanted to do $100M+ in revenue by 2017, which, to me, seems aggressive.

Conviva published their latest viewing-minutes numbers across all customers, citing 80% growth last year to 1 billion minutes per day and expected growth of 150% for 2018. Their customer billing model is based on viewer hours, so you can extrapolate that their revenue would grow with increased viewing time. The company claims to be growing faster than the overall OTT market, which is estimated to be growing at somewhere around 20-30% CAGR.

Conviva told me that 3-5% of the total spend on traditional TV goes to measurement and analytics, and they believe the total available market will be the same for OTT. Their rationale is that continuous, census-based measurement and analytics over the internet and a wide variety of consumer video devices is inherently more valuable than more traditional panel-based statistical approaches. This market has historically been relatively small, but it is now getting much more competitive, so the success of these companies will depend on the market truly experiencing accelerated growth; the recent historic drop in traditional pay-TV subscribers is a good leading indicator that this is the case.

Conviva said they raised this round to fund strategic development of their AI-for-video platform and the sensor network they deploy across all of their publishers' customer viewing devices. Collection and basic measurement will feel downward price pressure as competition and market maturity increase, so they feel this product vision will be key to their success. In speaking with the company, I learned that they have very sophisticated AI and machine learning models that have been trained for years on video viewing data from their large customer base. They stressed the large and diverse data set derived from continuous real-time measurement of all metrics and metadata associated with every second of all video viewing sessions.

This is not just QoE data, but also engagement data, audience data, content metadata, infrastructure metadata, and more. The combination of this continuous and comprehensive data collection and AI purpose-built for video could be a very interesting formula to unleash enormous value for OTT businesses.

How The Right Kind Of Marketing Can Prevent Webinar Burnout

According to a recent benchmarking report from the Content Marketing Institute, 58% of marketers said webinars are part of their content marketing strategy. Webinars initially gained popularity because they allow businesses to educate their existing customers about products and services and they provide an opportunity to meet prospective customers and gain high quality leads. So, why are webinars getting such a bad rap these days?

Marketers spend a great deal of time developing webinar content and we all get tons of email invites to industry webinars. However, no matter how great your content is, your webinar is intended to be a door opener, not a deal closer. By managing this expectation, understanding the basics of webcasting, building webinars that focus on engagement and including the sales team in the process, marketers can turn webinars into key lead gen opportunities.

Though webinars may seem simple, there are a few best practices you must follow in order to create a quality, lead-generating webinar:

  • Collect key audience information: Building an engaging experience that entices viewers to enter your sales funnel starts with a strong webinar landing page. Include all of the essential information around who is presenting the webinar, what topics are being covered, how viewers will benefit from attending, and when the event is taking place. The landing page also needs to include a distinguishable call to action asking viewers to sign up, and it's best to use a contrasting color to ensure it's one of the first things visitors see when they land on your page. The signup form should gather name, email, company, and reason for watching the webinar. While you can ask for more information, it's best to limit the number of fields to around four, since any more may deter sign-ups. Collecting this information will be key for your pre- and post-webinar marketing activities.
  • Prioritize digestible content: To ensure your webcast is effective, make the content easy to read, desirable to share, customer-centric and actionable. Allowing viewers to download the slides or other presentation materials encourages them to actively participate and share the materials. Posting a recording of the webinar online will extend the content’s shelf-life and result in more leads over time.
  • Keep it brief: A recent study found that the average on-demand viewing time for webinars was 39 minutes, while the live viewing time jumped up to 50 minutes. If your webinar is shaping up to be longer than that, identify sections that can be modified before presenting or consider breaking up the presentation into two parts.

Engage with the audience
In a discussion with Luis Ramirez, director of marketing at West’s Unified Communication Services, he highlighted that, “Engaging with your audience is not only key to keeping them attentive during the webinar, it’s also a great way to collect real-time lead generation data. You want to elicit reactions, responses and interactions from your audience that can help shed light on their pain points, challenges and business needs.” One easy way to do this is by building in time for a live Q&A. You can ask for questions beforehand or during the webinar using a chat-style messaging board. If you’re unable to answer all the live or pre-emailed questions in the webinar, following up with an email or phone call not only shows the potential customer that you care but it also provides a lead-in for sales talks.

You can also capture viewers’ attention with a live poll during the webinar. This makes viewers feel like they’re impacting the webinar and will ultimately keep them interested longer. Additionally, some platforms, like West’s Webcast Pro, offer engagement tracking tools that measure and score audience participation. This data can then be used to inform your sales approach and improve future webinars.

Sync up with your sales team
Webcasts are a great way to connect with your customers, but at the end of the day, they are intended to create qualified leads. Demonstrate your webcasts' value by integrating your webcast data, including its sign-up form and engagement metrics, with your CRM system. Engagement data is particularly useful for qualifying leads. For instance, you can segment leads by those who watched the entirety of your webinar and those who watched 50%, and disqualify those who only watched a few minutes. You can also identify follow-up activities based on how participants responded to poll questions.

Since webinars are designed to be lead generation incubators, it’s important to find ways to incorporate your sales team into the process, either by hosting the webcast or leading communications afterward to help connect the dots and initiate potential deals. For instance, it’s a good idea to send a thank you message to those who attended your webinar. However, your sales team can take the extra step and follow up with viewers personally to see if they got the information they were looking for and if they’re interested in learning more about your company.

While companies often treat webinars as just another channel to disseminate information and build out their content, this quickly becomes a wasted investment. To actually gain a return on investment with webinars, companies must master webcasting basics, engage with the audience and align their webinars with their sales tools. Gaining qualified leads is the goal, and companies can use more engaging and interactive webinars to get there.

Why Apple’s HEVC Announcement Is A Big Step Forward For The Streaming Media Industry

The battle for bandwidth is nothing new. As CE manufacturers push the bounds of display technologies, and with 360 and VR production companies demonstrating ever more creative content, the capacity of networks will be taxed to levels much greater than we see today. For this reason, the Apple announcement at their 2017 Worldwide Developers Conference that they are supporting HEVC in High Sierra (macOS) and iOS 11 is going to be a big deal for the streaming media industry. There is little doubt that we are going to need the big bandwidth reduction that HEVC can deliver.

While HEVC has already been established on some level, since Netflix, VUDU, Fandango Now, and Amazon Instant Video have been distributing HEVC-encoded content, that's all been to non-mobile devices to date. But what about the second screen, where more than 50% of viewing time is occurring? With this announcement, Apple set the de facto standard for the premium codec on second-screen devices. We know that H.264 is supported fully across the mobile device ecosystem, and any codec that is going to replace it must have a realistic path to being ubiquitous across devices and operating systems. That's why the argument from some that VP9 will win on mobile never made sense, as I don't see any scenario where Apple would adopt a Google video codec. But prior to Monday morning June 5th, 2017, and the WWDC 2017 HEVC announcement, no one could say for certain that this wouldn't happen.

We'll likely never know what the considerations were for Apple to select HEVC over VP9. With VP9 supported only on Android devices, while HEVC is supported on Apple devices as well as certain Android devices such as the Samsung Galaxy S8 and Galaxy Tab, streaming services now face a conundrum. Do they encode their entire library twice (HEVC and VP9), or only once (HEVC) to cover iOS devices and connected TVs? The decision is an obvious one. HEVC should receive priority over VP9, as most services have too much content to maintain three libraries (H.264, HEVC, VP9). When you consider that HEVC decoding is available in software and hardware for Android, the choice to deploy HEVC as the next-generation codec beyond H.264 seems an obvious one.

With Beamr and other HEVC vendors supporting OTT streaming services in production since 2014, we are well down the road with a proven technology in HEVC. And as we heard from Joe Inzerillo, CTO of BAMTech, during his keynote talk at the Streaming Media East show, serious companies should not be wasting time with "free" technology that ultimately is unproven legally. Though Joe may have been thinking of VP9 when he made this statement, it could have also been referring to the Alliance for Open Media codec AV1, which has been receiving some press of late, mainly for being "free." My issue with AV1 is that the spec is not finalized, so early proofs of concept can be nothing more than just that: proofs of concept that may not see production for at least 18-24 months, if not longer. Then there is the issue of playback support for AV1, where, to put it simply, there is none.

What Apple delivers to the industry with its adoption of HEVC is 1 billion active iOS devices across the globe, at a time when consumer demand for video has never been higher. Until today, OTT services have been limited to the H.264 codec across the massive Apple device ecosystem. I predict that the first "user" of the HEVC capability will be Apple themselves, as they will likely re-encode their entire library, including SD and HD videos, to take advantage of the 40% bitrate reduction that Apple claims HEVC can deliver over H.264. Streaming services with apps in the App Store, or those who deliver content for playback on iOS devices, will need to be mindful that consumers will be able to see the improved experience, bandwidth savings, and higher quality coming from iTunes.

I reached out to Beamr to get their take on the Apple HEVC news, and Mark Donnigan, VP of marketing, made three good points to me. The first is that higher quality at lower bitrates will be a basic requirement to compete successfully in the OTT market. As Mark commented, "Beamr's rationale for making this claim is that consumers are growing to expect ever higher quality video and entertainment services. Thus the service that can deliver the best quality with the least amount of bits (lowest bandwidth) is going to be noticed and in time preferred by consumers." Beamr has been hitting their speed claim hard, saying that they can deliver an 80% speed boost with Beamr 5 compared to x265, which removes the technical overhead of HEVC.

Mark also suggested that, "there is no time to wait for integrating HEVC encoding into content owners video workflow. Though every vendor will make the time is of the essence claim, in this case, it's possible that they aren't stretching things. With iOS 11 and High Sierra public betas rolling out to developers in June, and to users this fall, video distributors who have not yet commissioned an HEVC encoding workflow don't have a good reason to still be waiting." It's well known that outside of Netflix, VUDU, Amazon Instant Video, and a small number of niche content distributors, HEVC is not in wide use today. However, active testing and evaluation of HEVC has been going on for several years now, which means it's possible that there are services closer to going live than some realize.
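For teams starting to evaluate an HEVC workflow, here is a minimal sketch of producing an HEVC rendition alongside an existing H.264 one with ffmpeg; it assumes an ffmpeg build with libx264 and libx265, and the filenames and bitrates are placeholders rather than anyone's recommended settings.

```python
# Minimal sketch: add an HEVC (H.265) rendition next to an H.264 one using
# ffmpeg. Assumes an ffmpeg build with libx264/libx265 on the PATH;
# filenames and bitrates are placeholders only.
import subprocess

SOURCE = "mezzanine.mov"

renditions = [
    ("libx264", "3000k", "out_h264.mp4"),  # existing H.264 baseline
    ("libx265", "1800k", "out_hevc.mp4"),  # HEVC target at a lower bitrate
]

for codec, bitrate, outfile in renditions:
    subprocess.run(
        ["ffmpeg", "-y", "-i", SOURCE,
         "-c:v", codec, "-b:v", bitrate,
         "-c:a", "aac", "-b:a", "128k",
         outfile],
        check=True,
    )
```

Note that for playback on Apple devices, HEVC in MP4 is generally expected to carry the hvc1 sample entry, which ffmpeg can set on the HEVC rendition with -tag:v hvc1.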

Finally, Mark also correctly pointed out that Apple is clearly planning to support HDR with displays and content. With the announcement that the new iMacs will sport a 500-nit display and 10-bit graphics support (needed for HDR), and will be powered by the 7th-generation Intel Kaby Lake processor with Iris Pro GPU, Apple is raising the bar on consumer experience. Not every home may have an HDR-capable TV, but with Apple pushing their display and device capabilities ever higher, consumers will grow to expect HDR content even on their iOS devices. Soon it will not be sufficient to treat the mobile device as a second-class playback screen. As Mark told me, "Services who do not adopt HDR encoding capabilities (and HEVC is the mandatory codec for the HDR10 standard), will find their position in the market difficult to maintain." Studies continue to show that higher resolution is difficult for consumers to see, but HDR can be appreciated by everyone regardless of screen size.

Apple drives many trends in our industry, and history has shown that those who ignore them do so at their peril. Whether you operate a high-end service that differentiates based on video quality and user experience, or a volume-based service where delivery cost is a key factor, HEVC is here. With HEVC as the preferred codec supported by TV manufacturers, adopted by some Android devices, and with Apple bringing on board up to 1 billion HEVC-capable devices, it seems HEVC has been prioritized by the industry as the next-generation codec of choice.