Updated Post: A Summary Of Transparent Caching Architectures
Updated June 8th: After getting some feedback from vendors and carriers in the market about this post, I have updated it with additional thoughts. My intention with the post was to help educate the market on how the architectures work; I was not trying to say one architecture is better than another, since I am not a network engineer and don't operate a carrier network. I don't think the post was neutral enough, so I have re-written some of it and removed the references to performance. Also, in talking with both carriers and MSOs for the piece, it's clear that the two kinds of companies should not be combined in terms of how they deploy these solutions, as MSOs are very different from carriers. I have also updated the title of the post so that it is simply talking about architecture and not performance.
———————————
Two weeks ago, at the Content Delivery Summit, one of the biggest topics being discussed was the technology of transparent caching (definition). Telcos, carriers, MSOs and ISPs are hard at work building out CDN and transparent caching services inside their networks for the delivery of video. Over time, these carriers and ISPs plan to compete with SaaS-based CDNs and take over more control of the delivery of content to their users, since many of them own the last mile. (See CDN Summit video: Transparent Caching: Cost Center or Business Opportunity?)
Different types of carriers take different approaches to deploying transparent caching platforms. Some own vast libraries of video content and want to provide it to their subscribers on as many devices as possible; others are squarely in the business of providing Internet services and are focused on network optimization. I hope to cover the differences among operators in future posts, but that will be for another day.
Recently, I wrote a post entitled "An Overview Of Transparent Caching and Its Role In The CDN Market", and this post looks to supplement that with more technical detail on the various architectures used when integrating a cache system into an operator network.
When evaluating cache systems available on the market, operators typically consider the following parameters: impact on network costs and subscriber quality of experience, ease of deployment and operation, how the solution affects Internet application delivery and the larger Internet video ecosystem, and lastly, how it fits into the operator's long-term content delivery network strategy. Before undertaking a deployment strategy, operators spend a great deal of time testing and evaluating the system's architecture and basic design, which will dictate its overall behavior and determine its ability to meet the operator's requirements. There are three main transparent caching architectures currently on the market: two leverage an in-band architecture and one uses an out-of-band approach.
In-Band Cache Architecture
In-band cache systems redirect Internet traffic to the cache system itself. The user request is analyzed near the point of redirection and, based on an algorithmic decision, specific content is cached or served from the cache. The storage and analysis are typically collocated.
The two in-band architectures differ in the way they manage sessions and make decisions on what content is stored and served from the cache. One in-band architecture is a traditional cache proxy, in which every TCP session from the subscriber is terminated by the cache and a new session is created with the origin. The other approach is a transparent in-band cache, in which the session between the subscriber and origin is preserved, but requested objects are cached or served from cache based on analysis performed by the cache system.
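To make the idea concrete, here is a minimal sketch, in Python, of the kind of decision an in-band cache makes on each request: serve the object locally if it is already stored, otherwise pass the request through to the origin and keep a copy once the object proves popular. The names and the popularity threshold are illustrative assumptions on my part, not any vendor's actual logic.

```python
# A minimal, hypothetical sketch of an in-band cache decision per HTTP request.
# CacheStore-style names and POPULARITY_THRESHOLD are illustrative assumptions.

from collections import Counter

POPULARITY_THRESHOLD = 3     # cache an object after it has been requested this many times
request_counts = Counter()   # tracks how often each object has been requested
cache_store = {}             # object key -> cached bytes

def handle_request(url, fetch_from_origin):
    """Serve from cache if possible; otherwise pass through and maybe cache."""
    request_counts[url] += 1

    if url in cache_store:
        return cache_store[url]            # cache hit: serve locally

    body = fetch_from_origin(url)          # cache miss: let the origin answer

    if request_counts[url] >= POPULARITY_THRESHOLD:
        cache_store[url] = body            # object is popular enough to keep
    return body
```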
Internet traffic can be redirected to in-band cache systems by various methods: 1) policy-based routing (PBR) on currently installed network equipment; 2) from a deep packet inspection solution (existing or newly installed); 3) via the Web Cache Communication Protocol (WCCP); or 4) through application load balancing equipment (existing or newly installed). The cache engine can also be embedded in the redirecting network element, such as a BRAS, CMTS or DPI engine. The deployment method chosen depends on the resources available at the deployment location and the goals of the operator.
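As a rough illustration of the redirection step, the sketch below mimics what a PBR rule or load balancer effectively does: steer cacheable web traffic toward the cache and leave everything else on its normal path. The addresses and the port list are hypothetical, and real redirection happens in the network element rather than in application code.

```python
# Hypothetical illustration of PBR-style redirection logic.
# The next-hop addresses and redirected port list are assumptions for the example.

CACHE_NEXT_HOP = "10.0.0.2"     # assumed address of the in-band cache
DEFAULT_NEXT_HOP = "10.0.0.1"   # assumed normal upstream router
REDIRECT_PORTS = {80}           # typically only plain HTTP is redirected to the cache

def next_hop(dst_port: int) -> str:
    """Pick the forwarding target for a flow based on its destination port."""
    return CACHE_NEXT_HOP if dst_port in REDIRECT_PORTS else DEFAULT_NEXT_HOP

# Example: web traffic goes to the cache, HTTPS goes straight upstream.
assert next_hop(80) == CACHE_NEXT_HOP
assert next_hop(443) == DEFAULT_NEXT_HOP
```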
Out-of-Band Cache Architecture
In an out-of-band cache architecture, the control plane and data plane of the caching system are separated. A route advertising protocol, such as the Border Gateway Protocol (BGP), or static routing is used to direct user Internet request traffic to a cache manager. At the cache manager, the user request is analyzed and, based on an algorithmic decision, the content request is directed to a cache server that has the object in storage and can serve it. If the requested content is not available, the cache manager forwards the subscriber's request to the origin and the content flows directly from the origin to the requester. If the algorithm decides the content should be cached, it tells a cache server to retrieve the content from the Internet so it can be served to the next user. Cache managers are typically located in a centralized network location.
Cache servers are the file storage and delivery elements used in an out-of-band architecture to serve content to subscribers and to retrieve content from an origin. They are connected to both the cache manager and the Internet and may or may not be collocated with the cache manager.
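Putting the two elements together, here is a highly simplified Python sketch of the out-of-band flow described above, with the cache manager acting purely as a control-plane decision point and the cache servers doing the storage and delivery. All class and method names are illustrative assumptions rather than a real product's interface.

```python
# Simplified sketch of an out-of-band caching architecture: a control-plane
# CacheManager decides where each request goes, data-plane CacheServers store
# and deliver content. Names are illustrative, not a vendor API.

class CacheServer:
    def __init__(self, name):
        self.name = name
        self.objects = {}                    # object key -> stored bytes

    def has(self, url):
        return url in self.objects

    def ingest(self, url, fetch_from_origin):
        self.objects[url] = fetch_from_origin(url)   # pull content from the origin

    def serve(self, url):
        return self.objects[url]


class CacheManager:
    """Control-plane element: analyzes requests and picks a data-plane target."""

    def __init__(self, servers):
        self.servers = servers

    def route_request(self, url, fetch_from_origin, should_cache):
        # If any cache server already holds the object, let it serve the user.
        for server in self.servers:
            if server.has(url):
                return server.serve(url)

        # Otherwise the request goes to the origin; optionally tell a cache
        # server to retrieve the object so the next user is served locally.
        if should_cache(url):
            self.servers[0].ingest(url, fetch_from_origin)
        return fetch_from_origin(url)
```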
The above are extremely simplified descriptions of the architectures used by operators to manage the explosion of OTT content on their networks. Operators all bring different goals to their CDN and caching network deployments, and depending on exactly what the short- and long-term objectives are, any of these architectures may be relevant. In my earlier post, I should not have stated that one architecture was better than another, as it all depends on what the carrier is trying to accomplish. There are various value propositions presented by vendors in the transparent caching space and, as I mentioned in my earlier post, there are some key factors to consider when evaluating each of these architectures and vendors.
Eighteen months ago, almost no one was talking about transparent caching, as operators were not yet serious about deploying these kinds of content delivery technologies inside their networks. But if there was one thing we heard loud and clear from the carriers and telcos who spoke at the Content Delivery Summit last month, it's that they are now investing heavily, in both time and money, to deploy transparent caching architectures, amongst other CDN platforms, and that the market for these services is going to grow very fast. I recently completed a study at Frost & Sullivan and we expect the market for transparent caching services to grow to nearly half a billion dollars in the next three years. I'll have more details on those numbers in a future post.