How Mobile App Acceleration SDKs Are Replacing TCP-Based Approaches
TCP/IP, the protocol suite that serves as the communication language of the web, was built for a PC-focused internet whose best days are now behind it. Since the late 1990s, when CDNs like Sandpiper Networks and Akamai came to market, CDNs have done a fantastic job of speeding websites to end users on PCs with good connections to the Internet. When early smartphone adoption began to change how we interacted with the internet, the CDNs sped up mobile websites as well, using many of the same tools they had used in the pre-mobile era.
But the traditional PC-focused Internet and the TCP/IP protocol were never designed to support the fast delivery of mobile apps. Both introduce a number of delays throughout the mobile app delivery process, making fast mobile app performance on end-user devices an elusive goal for most developers.
HTTP/1.1 allows only one request at a time over a single TCP connection. To make multiple requests, a client has to wait for the first request to finish before starting the second, and so on. If one request takes a long time to finish, it holds up the line, and every request queued behind it has to wait. This problem is called head-of-line blocking (HOLB). To work around it, web and app developers started making multiple concurrent TCP connections to boost speeds. Yet that approach doesn’t scale, since maintaining each TCP connection requires memory and CPU resources.
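To make the trade-off concrete, here is a minimal Kotlin sketch, assuming the OkHttp library and hypothetical example URLs, that contrasts sequential requests over HTTP/1.1 with the classic workaround of firing the same requests over several concurrent connections:

```kotlin
import okhttp3.OkHttpClient
import okhttp3.Protocol
import okhttp3.Request
import java.util.concurrent.Executors

// Hypothetical resources an app might need; real apps fetch their own.
val urls = listOf(
    "https://example.com/image1.png",
    "https://example.com/image2.png",
    "https://example.com/styles.css"
)

// Force HTTP/1.1 so each in-flight request occupies a whole TCP connection.
val http11Client = OkHttpClient.Builder()
    .protocols(listOf(Protocol.HTTP_1_1))
    .build()

// Sequential fetches: each call waits for the previous one to finish,
// so one slow response delays everything behind it (head-of-line blocking).
fun fetchSequentially() {
    for (url in urls) {
        http11Client.newCall(Request.Builder().url(url).build()).execute().use { resp ->
            println("$url -> ${resp.code}")
        }
    }
}

// The classic workaround: several concurrent requests, each riding its own
// TCP connection. Faster, but every extra connection costs sockets, memory and CPU.
fun fetchConcurrently() {
    val pool = Executors.newFixedThreadPool(urls.size)
    urls.forEach { url ->
        pool.submit {
            http11Client.newCall(Request.Builder().url(url).build()).execute().use { resp ->
                println("$url -> ${resp.code}")
            }
        }
    }
    pool.shutdown()
}
```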
Google then came up with SPDY, which later became the foundation of the standard we know today as HTTP/2. The idea is to multiplex requests over a single TCP connection. This approach does solve HOLB at the HTTP layer, since you no longer have to wait for one request to finish before starting others. That said, it is still limited by the same HOLB problem at the TCP layer. This is a fundamental limitation of the TCP protocol itself, because TCP requires data to be delivered in order.
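Continuing the sketch above, again assuming OkHttp and placeholder URLs, the same requests can be multiplexed over one connection when the server supports HTTP/2; note the caveat in the closing comment about TCP-level blocking:

```kotlin
import okhttp3.Call
import okhttp3.Callback
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.Response
import java.io.IOException

// Against an ALPN-capable HTTPS server, OkHttp negotiates HTTP/2 by default,
// so these asynchronous calls share a single TCP connection as multiplexed streams.
val h2Client = OkHttpClient()

fun fetchMultiplexed(urls: List<String>) {
    urls.forEach { url ->
        h2Client.newCall(Request.Builder().url(url).build()).enqueue(object : Callback {
            override fun onResponse(call: Call, response: Response) {
                response.use { println("$url via ${it.protocol} -> ${it.code}") }
            }
            override fun onFailure(call: Call, e: IOException) {
                println("$url failed: ${e.message}")
            }
        })
    }
}

// Caveat: a single lost TCP segment still stalls *all* streams on that one
// connection until it is retransmitted -- the TCP-level head-of-line blocking
// described in the article.
```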
I’ve spent some time recently looking at Neumob, a mobile app acceleration company with offices in Silicon Valley and the UK, which focuses its SDK-based solution on apps. Neumob sidesteps the TCP problem by using UDP under the hood for its own protocol, which it calls the Neumob Protocol. UDP doesn’t suffer from HOLB, as it inherently does not require in-order data delivery. Neumob’s focus has been to create a mobile-first protocol, designed for the mobile apps in which 85-90% of all smartphone activity occurs, rather than taking a legacy protocol designed in the 1990s and retrofitting it to work for the mobile world.
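As a rough illustration of why UDP avoids transport-level HOLB (not a description of the Neumob Protocol itself, which isn’t public in this article), here is a small Kotlin sketch that sends independent datagrams to a placeholder address:

```kotlin
import java.net.DatagramPacket
import java.net.DatagramSocket
import java.net.InetAddress

// Each UDP datagram is delivered (or lost) independently, so a lost packet
// does not stall the packets behind it. A UDP-based protocol must supply its
// own loss detection, recovery and ordering on top, as QUIC-style designs do.
// The address and port below are documentation placeholders, not Neumob endpoints.
fun sendIndependentDatagrams(payloads: List<ByteArray>) {
    val address = InetAddress.getByName("203.0.113.10")
    DatagramSocket().use { socket ->
        payloads.forEachIndexed { index, payload ->
            val packet = DatagramPacket(payload, payload.size, address, 9999)
            socket.send(packet) // no ordering or retransmission guarantees here
            println("sent datagram #$index (${payload.size} bytes)")
        }
    }
}
```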
The company’s protocol accelerates everything within a mobile app, including all of those great (but heavy) third-party calls, such as videos, images, ad network SDKs and analytics tools, that make an app what it is. It doesn’t cache for one domain only, and it doesn’t merely tune TCP. Instead, the company says it chose to develop its own robust UDP-based protocol, a 3-POP WAN acceleration architecture and software-defined content routing that together do one thing exceptionally well: speed up the performance of mobile apps, whether their users are in the same city or halfway around the world.
Neumob says one of the differentiating features of its protocol is its network-profiles approach. More than half of the connections the company serves are wireless: 4G, 3G (WCDMA, HSDPA, EVDO_A), 2G (EDGE, CDMA) and so on. Even within the same LTE technology, any given mobile carrier will have different coverage and latencies, and all of these networks have different characteristics. The company says it can detect which network the connection is on and tune connection parameters accordingly. With its SDK, the protocol can detect the mobile carrier, the network technology (WiFi, LTE, HSPA, etc.) and the country in which the device is connecting, then apply different protocol parameters to maximize mobile app speed and reduce errors. It’s a pretty simple approach to a complex problem.
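The detection side of that idea is straightforward on Android; the sketch below, with made-up profile values and no relation to Neumob’s actual parameters or API, shows one way an SDK could inspect the active network, carrier and country before choosing transport settings:

```kotlin
import android.content.Context
import android.net.ConnectivityManager
import android.net.NetworkCapabilities
import android.telephony.TelephonyManager

// Illustrative transport settings; the field names and values are invented.
data class TransportProfile(val initialTimeoutMs: Int, val maxInFlight: Int)

// Requires the ACCESS_NETWORK_STATE permission and API 23+ for activeNetwork.
fun pickProfile(context: Context): TransportProfile {
    val cm = context.getSystemService(Context.CONNECTIVITY_SERVICE) as ConnectivityManager
    val tm = context.getSystemService(Context.TELEPHONY_SERVICE) as TelephonyManager

    val caps = cm.getNetworkCapabilities(cm.activeNetwork)
    val onWifi = caps?.hasTransport(NetworkCapabilities.TRANSPORT_WIFI) == true
    val carrier = tm.networkOperatorName   // e.g. the serving mobile carrier
    val country = tm.networkCountryIso     // country of the serving network

    // Hypothetical profiles keyed on what was detected.
    return if (onWifi) {
        TransportProfile(initialTimeoutMs = 800, maxInFlight = 32)
    } else {
        println("cellular via $carrier in $country")
        TransportProfile(initialTimeoutMs = 2500, maxInFlight = 8)
    }
}
```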
Historically, web-based CDNs have used edge servers to cache static objects efficiently. This works well for small websites with a modest number of calls, but as the total size of typical libraries grew, CDNs introduced a second level of cache at a few aggregation points (called a parent cache, shield cache, super cache, super POP, etc.) near the origin, to improve the cache hit rate at the edge server while reducing trips to the origin. This approach was also useful for accelerating dynamic objects (which are not cacheable and need origin access every time). These days, most CDNs support accelerating dynamic content in their own way, but this 2-POP approach is pretty common. Having an edge POP and another POP near the origin, and using various middle-mile acceleration techniques between the two, is the foundational architecture that allows CDNs to accelerate dynamic content.
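A toy Kotlin sketch of that two-tier lookup (illustrative names only, not any CDN’s real API): the edge POP checks its own cache, then the parent POP near the origin, and only goes to the origin when both miss:

```kotlin
// Edge cache -> parent cache -> origin, filling the caches on the way back.
class TwoTierCdn(
    private val edgeCache: MutableMap<String, ByteArray> = mutableMapOf(),
    private val parentCache: MutableMap<String, ByteArray> = mutableMapOf(),
    private val fetchFromOrigin: (String) -> ByteArray
) {
    fun get(url: String): ByteArray =
        edgeCache[url]                                        // 1. edge POP hit?
            ?: parentCache[url]?.also { edgeCache[url] = it } // 2. parent POP hit?
            ?: fetchFromOrigin(url).also {                    // 3. fall back to origin
                parentCache[url] = it
                edgeCache[url] = it
            }
}
```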
Neumob has expanded this idea to the actual device in the user’s hand. The company says CDNs take what is basically a server-side-only approach, with no information about the device itself, and simply assume it is “a good client”: that it has a good DNS resolver configuration, so it can find a nearby edge POP via DNS (or rely on anycast to do so), and that it knows how to connect using an up-to-date protocol.
Neumob’s approach, by contrast, hosts a small, intelligent proxy on the device itself, via the SDK embedded in the app being used. Traffic from the app travels through Neumob’s small edge server on the device. This gives the protocol unique information about the client and lets Neumob optimize the last mile, from the edge of the internet to the device itself, something the company says was not possible with the traditional CDN approach.
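One common way to interpose on an app’s traffic like this is to run a tiny proxy inside the process and point the HTTP client at it; the Kotlin sketch below, assuming OkHttp and a hypothetical local port rather than Neumob’s actual integration, shows the wiring:

```kotlin
import okhttp3.OkHttpClient
import java.net.InetSocketAddress
import java.net.Proxy

// Hypothetical port owned by an in-process proxy started by the SDK.
const val LOCAL_PROXY_PORT = 18080

// Requests made with this client pass through the on-device proxy, which can
// observe the device's network state locally and speak its own protocol upstream.
val proxiedClient: OkHttpClient = OkHttpClient.Builder()
    .proxy(Proxy(Proxy.Type.HTTP, InetSocketAddress("127.0.0.1", LOCAL_PROXY_PORT)))
    .build()
```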
For example, Neumob can identify that the device is connecting over WiFi, or over LTE via a specific mobile carrier, without guessing, which lets it apply the most appropriate protocol parameters. Neumob can also fall back gracefully when anything bad or unexpected happens during content transmission, which reduces errors; it can also collect more detailed metrics about each request, raise alerts about unusual errors, and more. In effect, this puts an intelligent agent on the device that constantly reports on network conditions.
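The fallback behavior, at least in spirit, looks something like the hedged sketch below, where acceleratedFetch, plainFetch and recordMetric are stand-ins for whatever transport and telemetry hooks an SDK exposes, not real Neumob APIs:

```kotlin
// Try the accelerated transport first; on any failure, record the error and
// drop back to a plain HTTPS fetch so the user still gets the content.
fun fetchWithFallback(
    url: String,
    acceleratedFetch: (String) -> ByteArray,
    plainFetch: (String) -> ByteArray,
    recordMetric: (path: String, success: Boolean) -> Unit
): ByteArray =
    try {
        acceleratedFetch(url).also { recordMetric("accelerated", true) }
    } catch (e: Exception) {
        recordMetric("accelerated", false) // alertable: unusual error on the fast path
        plainFetch(url)                    // graceful degradation keeps the app working
    }
```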
So how does all of this reduce errors within mobile apps? Neumob says it’s important to underscore how effective the UDP-driven protocol is at reducing errors within apps. These errors include timeouts, where an app’s responses effectively freeze and force the user to refresh or navigate elsewhere because images or other content can’t be delivered. Errors can also include blank spaces with missing images, third-party-hosted content that never arrives, and even advertisements that are never seen by the user (and therefore can’t be monetized) because of failed delivery.
Neumob says typical mobile app error rates range from 3% on faster networks such as LTE to over 12% on 2G and 3G networks, and in countries such as India and China. By not being inherently limited by HOLB, the Neumob protocol already gives apps a leg up in reducing these frustrating errors. It also uses innovative loss detection and recovery mechanisms, while providing fine-grained control through the aforementioned third POP implemented right inside the SDK.
As noted above, the traditional PC-focused Internet and the TCP/IP protocol were never designed to support the fast delivery of mobile apps, and the delays they introduce make fast mobile app performance (and low error rates) on end-user devices an elusive goal for most developers. Neumob is looking to address these challenges, and because its protocol has been engineered specifically and exclusively for mobile apps, it by necessity incorporates a variety of improvements and network-driven leaps forward. The company says it can achieve mobile app speed gains of 30-300% and reduce in-app errors by up to 90%.
The SDK revolution, in which app developers can add small bits of code to their apps for everything from robust analytics to advertising solutions, is where that next stage of performance and speed innovation lies. The right SDK can effectively transform the last mobile mile from a latency-filled bottleneck into a lightning-fast conduit for images, files, high-bandwidth videos and more.
It’s a tricky problem for mobile-first infrastructure providers to solve, but therein lies the kernel of the solution: reimagining how we interact with the internet in this newly dominant era of mobile, and of mobile apps, versus the way we did things in the now-fading PC internet and mobile web era.