Into the cloud with the right API management architecture

API management is in vogue, the cloud is in fashion and containers are the latest hype. But how do you combine these concepts successfully?

Author: Christian Sanabria

Many companies are currently developing cloud architectures and implementing them with the help of ipt. This also includes implementing API management in such environments. The following blog post shows why a distributed architecture makes sense for API management as well, and what such a solution could look like.

How is an API management solution structured? 

An API management solution consists of three main components:

  • Management component (aka Admin Portal): Central point for managing APIs, security and API policies, and API usage metrics (analytics).
  • Gateway component (aka API Gateway): Enforcement point for security and API policies. Delivers API usage metrics to the management component. Obtains the configuration from the management component.
  • Engagement component (aka Developer Portal): Portal for interacting with developers who want to use APIs. Obtains the data from the management component.
Fig. 1: Components of an API management solution
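The division of responsibilities can be illustrated with a small, hypothetical policy record as the management component might store it and the gateway component might enforce it. All field names here are illustrative assumptions, not the schema of any concrete product:

```python
# A hypothetical API record as the management component could hold it.
# The gateway enforces "policies", the engagement component publishes
# "docs_url", and "analytics" is filled from the usage metrics the
# gateway delivers back to the management component.
order_api = {
    "name": "orders-v1",
    "base_path": "/orders",
    "policies": {
        "auth": "oauth2",           # security policy enforced at the gateway
        "rate_limit_per_min": 100,  # API policy enforced at the gateway
    },
    "docs_url": "https://developer.example.com/apis/orders-v1",
    "analytics": {"requests_last_24h": 0},
}

def gateway_must_enforce(api: dict) -> list:
    """Returns the policy names the gateway component has to enforce."""
    return sorted(api["policies"].keys())
```

The point of the split is visible in the record itself: only the `policies` block concerns the gateway, while the rest belongs to the management and engagement components.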

Particularly high demands are placed on the gateway component in terms of security, availability and performance. Scaling is essential here, whereas the requirements for the management and engagement components are generally lower.

What are the requirements for a solution for the cloud?

One of the biggest advantages of a cloud environment with containers is the ability to move components easily between cloud environments, e.g. from a private cloud to a public cloud on Amazon or Google. Likewise, containers can be started dynamically to scale a component for peak loads and stopped again afterwards.

Given these two advantages, a new API management solution should definitely be designed with a distributed architecture, i.e. the individual components should be independently installable and scalable in their own containers. The focus here is mainly on the gateway component.

Since such a solution is a distributed system, the CAP theorem plays a role here: a distributed system can guarantee at most two of the three properties consistency, availability and partition tolerance.

Depending on the requirements of the API management solution, the architecture looks different, especially in the communication between the gateway component and the management component.

If the focus is on consistency, synchronous communication is essential, but this reduces availability or partition tolerance. Conversely, you must accept restrictions in consistency if you place a high value on availability or partition tolerance.
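To make this tradeoff concrete, here is a minimal sketch (my own illustration, not taken from any product) of two rate-limiting strategies a gateway could use. The class and attribute names are assumptions for illustration only:

```python
import threading

class SyncCentralCounter:
    """CP-leaning: every request checks a central counter synchronously.
    Counts stay consistent, but if the central store is unreachable the
    gateway cannot decide the request -> reduced availability."""

    def __init__(self, limit):
        self.limit = limit
        self.count = 0
        self.lock = threading.Lock()
        self.reachable = True  # simulates reachability of the central store

    def allow(self):
        if not self.reachable:
            # No local fallback: without the central store, no decision.
            raise ConnectionError("central store unreachable")
        with self.lock:
            if self.count < self.limit:
                self.count += 1
                return True
            return False

class LocalCounter:
    """AP-leaning: each gateway instance counts locally and reconciles
    later. Always answers, but N instances may together admit up to
    N * limit requests before reconciliation -> reduced consistency."""

    def __init__(self, limit):
        self.limit = limit
        self.count = 0

    def allow(self):
        if self.count < self.limit:
            self.count += 1
            return True
        return False
```

With two `LocalCounter` gateway instances and a limit of 5, up to 10 requests get through before the counters are reconciled; the `SyncCentralCounter` never exceeds the limit, but stops answering when the central store is gone.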

Why are classic solutions unsuitable for the cloud? 

Traditional API management solutions are often based on central API gateways in an appliance form factor, to which an additional engagement component is added. In this case, the API gateways combine the gateway and management components. Scaling is usually done by adding new appliances, with configurations exchanged in a master/slave fashion. Such a setup is not suitable for a distributed cloud architecture.

Most of the classic appliances are now also available as containers and can therefore be operated in the cloud. However, since they are still based on the old concepts, advantages such as dynamic scaling and high distribution cannot be exploited.

In addition, API gateways in many companies are subject to a manual and slow change process, which is very difficult to reconcile with the DevOps requirements of the engagement component or API developers.

Although most API gateways offer APIs for their own configuration, these APIs are, due to the gateways' internal architecture and history, mostly very complex and difficult to integrate into an agile deployment process.

What could a solution for the cloud look like?

In an ideal world, an API management solution could be structured as follows:

  • The management component runs as a scalable container with its own persistent database. The component is not distributed.
  • The gateway component runs as a scalable container without dependencies on surrounding systems. The component can be operated in a distributed manner. Configurations are retrieved asynchronously from the management component using a pull mechanism. Information about API usage (for example, for rate limits) is delivered asynchronously to the management component using a push mechanism.
  • The engagement component runs as a scalable container with its own persistent database. The component can be operated in a distributed manner (for example, for multi-client capability). The information is synchronized bidirectionally with the management component using an asynchronous synchronization mechanism.
Fig. 2: Possible architecture of an API management solution in the cloud
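The gateway's pull/push behavior described above can be sketched as follows. This is a simplified illustration under my own assumptions (an in-process stub stands in for the management component's config and metrics endpoints; real communication would go over HTTP or a message broker):

```python
import queue

class ManagementStub:
    """Stands in for the management component's config and metrics APIs."""

    def __init__(self):
        self.config = {"version": 1, "apis": {"/orders": {"rate_limit": 100}}}
        self.received_metrics = []

    def get_config(self):
        return dict(self.config)

    def post_metrics(self, batch):
        self.received_metrics.extend(batch)

class Gateway:
    """Pulls configuration periodically and pushes usage metrics in
    batches. It keeps serving with its last known configuration even if
    the management component is temporarily unavailable."""

    def __init__(self, mgmt):
        self.mgmt = mgmt
        self.config = {}
        self.metrics = queue.Queue()  # buffered until the next push

    def pull_config(self):
        try:
            self.config = self.mgmt.get_config()
        except ConnectionError:
            pass  # keep serving with the last known configuration

    def handle_request(self, path):
        api = self.config.get("apis", {}).get(path)
        allowed = api is not None
        # Record usage locally; it reaches the management component
        # asynchronously via push_metrics, not per request.
        self.metrics.put({"path": path, "allowed": allowed})
        return allowed

    def push_metrics(self):
        batch = []
        while not self.metrics.empty():
            batch.append(self.metrics.get())
        if batch:
            self.mgmt.post_metrics(batch)
```

The design choice to highlight: `handle_request` never calls the management component, so a management outage degrades only configuration freshness and analytics, never request handling.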

In this architecture, only the gateway component needs to be highly available and fully autonomous. As already mentioned, this results in potential inconsistencies, for example in the analytics in the management component, but especially in the enforcement of rate limits.

If you need full consistency, you need synchronous interfaces between the individual components. As a result, the complete API management platform with all its sub-components must be designed for high availability, which is not always easy, especially where persistence is involved.

What happens now?

A cloud strategy is a focus of many companies: the cloud is 'in', and they hope to save costs and improve time to market. With the transformation to the cloud and the associated distributed architectures with containers or microservices, especially in combination with DevOps, requirements such as scalability, deployment flexibility and automation are becoming increasingly important. Traditional API management approaches with central infrastructures, as well as the associated security requirements, must be rethought.