Once considered just part of the “nuts and bolts” of application infrastructure, APIs have moved swiftly into a leading role in driving digital experiences. For the CDNs that handle this API traffic, the shift brings high expectations for performance and reliability, along with expanding security challenges. The worldwide focus on digital transformation is driving increased adoption of microservices architectures, and APIs have quickly emerged as the standard way to build and connect modern applications, enabling digital experiences where connection speeds are measured in milliseconds.
We use these services, and the APIs that enable them, every day across all kinds of applications: interactive apps for weather, news, and social media; transactional apps for commerce and banking; location services; online gaming; videoconferencing; chatbots. With new microservices coming online daily, expect the proliferation of APIs to continue. Indeed, recent surveys revealed that 77% of organizations are developing and consuming APIs, and 85% of respondents indicated that APIs are critical to their digital transformation initiatives.
API traffic has some specific characteristics that can make it tricky to manage. Transactions are small and highly dynamic, yet they can also be quite compute-intensive. They are sensitive to latency, often measured in milliseconds, and prone to spikes. These realities, together with the proliferation of APIs, create significant challenges for delivering content and services. APIs also represent the most common attack vector for cybercriminals. It has been reported that 90% of web application attacks target APIs, yet API endpoints are often left unprotected due to the sheer number of APIs and the limited resources available to police them. The challenge of policy enforcement is especially complex in organizations with several autonomous development teams building and deploying across hybrid cloud environments.
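One common way to tame that enforcement problem is to route every endpoint through a single shared policy layer, so a newly launched route cannot ship unprotected by accident. The following is a minimal sketch only, assuming an Express-style Node.js service; the header name, key store, and route are illustrative stand-ins rather than a reference implementation:

```ts
// Minimal sketch: centralized API policy enforcement in an Express-style
// service. The key store and header name are illustrative assumptions.
import express from "express";

const app = express();
const validKeys = new Set(["demo-key-123"]); // stand-in for a real key store

// Every route passes through this check, so teams cannot accidentally
// deploy an unprotected endpoint alongside the registered ones.
app.use((req, res, next) => {
  const key = req.header("x-api-key");
  if (!key || !validKeys.has(key)) {
    res.status(401).json({ error: "missing or invalid API key" });
    return;
  }
  next();
});

// Example endpoint; real routes would be mounted by individual teams.
app.get("/v1/orders", (_req, res) => {
  res.json({ orders: [] });
});

app.listen(3000);
```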
Organizations expect API response times in the tens of milliseconds, particularly for public-facing APIs that are critical to the user experience. This can be difficult to achieve given the highly dynamic nature of API traffic, which is often compute-intensive and difficult to cache. Many APIs sit in the critical path for applications, and if they are not delivered, the application can become unusable. That explains why 77% of respondents in a recent survey pointed to API availability as their top deployment concern. Ensuring that availability can be challenging because API traffic tends to come in waves or spike quickly when a single end-user request triggers a series of requests to third-party services. Large online events can also drive up request volumes, creating even greater availability challenges.
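To see why one user action can swamp availability, consider a handler that fans out to several dependencies. A common defensive pattern is to give each downstream call its own timeout, so a single slow third party cannot stall the whole response. Here is a rough sketch, assuming a runtime with global fetch and AbortSignal.timeout (Node 18+ or a modern edge runtime); the service URLs are hypothetical placeholders:

```ts
// Sketch: fan out to third-party services with per-call timeouts so one
// slow dependency cannot consume the whole latency budget.
// All URLs below are hypothetical placeholders.
const SERVICES = [
  "https://rates.example.com/v1/quote",
  "https://inventory.example.com/v1/stock",
  "https://profile.example.com/v1/user",
];

async function fanOut(timeoutMs = 50): Promise<unknown[]> {
  const calls = SERVICES.map((url) =>
    fetch(url, { signal: AbortSignal.timeout(timeoutMs) }).then((r) => r.json())
  );
  // allSettled lets the response degrade gracefully instead of failing
  // outright when one dependency spikes or times out.
  const results = await Promise.allSettled(calls);
  return results.map((r) => (r.status === "fulfilled" ? r.value : null));
}
```

Every dependency gets the same 50ms budget in this sketch; in practice, budgets would be tuned per service.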
Any or all of these issues can significantly impact applications. When public-facing APIs respond slowly or unreliably, the user experience suffers. And if APIs are not secure, they represent serious cyberattack vulnerabilities. Either way, the result is the same: a poor user experience that leads to lost revenue and brand damage. To minimize that risk, companies should start with the fundamental step of API discovery. After all, you can’t manage, secure, and protect what you can’t see. With developers launching new APIs left and right, there are likely many undiscovered API endpoints in any given environment. So it’s critical to discover and protect unregistered APIs and to identify errors and changes to existing APIs.
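What discovery looks like in practice varies, but a rough illustration is to diff observed traffic against a registered inventory and flag anything unknown. In the sketch below, the log format, the registry contents, and the path-normalization rule are all simplifying assumptions:

```ts
// Sketch: rudimentary API discovery by comparing observed request paths
// against a registered inventory. Formats here are illustrative assumptions.
const registered = new Set(["/v1/orders", "/v1/users", "/v1/payments"]);

// Each log line is assumed to look like: "GET /v1/orders/123 200"
function discoverUnregistered(logLines: string[]): Set<string> {
  const unknown = new Set<string>();
  for (const line of logLines) {
    const path = line.split(" ")[1] ?? "";
    // Normalize to the first two path segments: /v1/orders/123 -> /v1/orders
    const endpoint =
      "/" + path.split("/").filter(Boolean).slice(0, 2).join("/");
    if (!registered.has(endpoint)) unknown.add(endpoint);
  }
  return unknown;
}

console.log(
  discoverUnregistered(["GET /v1/orders/42 200", "POST /v1/shadow/tokens 201"])
); // -> Set { "/v1/shadow" }
```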
Content owners also need to think about where application functionality executes. While public clouds have emerged as the “go-to” for all kinds of application workloads, they present some limitations when it comes to handling API transactions. One leading cloud provider achieves response times around 130ms (measured using Cedexis Radar – 50th percentile – Global community), yet many microservices require API response times of less than 50ms. Edge computing offers an attractive alternative. Moving application functionality to the edge puts it in closer proximity to end users, minimizing latency and maximizing performance. Making the edge an extension of your API infrastructure can also help unify your security posture, improving operational efficiency. Load balancing traffic at the edge can improve availability while simplifying management. Moving compute to the edge can improve scalability, allowing you to serve users globally with more network capacity. And executing code at the edge lets developers move business logic closer to users, accelerating time to value.
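As a concrete picture of what executing code at the edge can look like, here is a minimal sketch in the style of a Workers-like edge runtime: cacheable API responses are answered at the nearest point of presence, and only cache misses travel to the origin. The origin hostname and the five-second cache lifetime are placeholder assumptions:

```ts
// Sketch: an edge function in the style of a Workers-like runtime. Cache
// hits are answered at the point of presence; misses are proxied to a
// placeholder origin and cached briefly to absorb traffic spikes.
export default {
  async fetch(request: Request): Promise<Response> {
    const originUrl = new URL(request.url);
    originUrl.hostname = "origin-api.example.com"; // placeholder origin

    // Only GETs are cached in this sketch; other methods pass straight through.
    if (request.method !== "GET") {
      return fetch(new Request(originUrl.toString(), request));
    }

    const cache = await caches.open("api-cache");
    const hit = await cache.match(request);
    if (hit) return hit; // answered at the edge, no origin round trip

    const response = await fetch(new Request(originUrl.toString(), request));

    // Even a few seconds of caching can shield the origin from spikes.
    const cacheable = new Response(response.body, response);
    cacheable.headers.set("Cache-Control", "public, max-age=5");
    await cache.put(request, cacheable.clone());
    return cacheable;
  },
};
```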
Of course, as with cloud providers and CDNs, not all edge compute platforms are created equal. It’s important to look at how many points of presence a platform offers, how globally distributed they are, and how close they sit to users. Does the network allow you to easily deploy microservices in multiple edge locations? These factors have a direct impact on latency. You also want to make sure the network is robust enough to handle the spikes that are common with API traffic. Finally, is the network secure enough to mitigate the risk posed by bad actors targeting API endpoints? The API explosion is far from over. That reality presents a compelling case for viewing the edge as the logical extension of your organization’s API infrastructure, ensuring your users get the experience they expect, whenever and wherever they want it.