API management is in vogue, the cloud is in vogue, and containers are hyped. But how do you combine these concepts successfully?
Author: Christian Sanabria
Many companies are currently designing cloud architectures and implementing them with the help of ipt. This includes implementing API management in such environments. This blog post shows why a distributed architecture also makes sense for API management and what such a solution could look like.
An API management solution consists of three main components:
- the gateway component, which sits in the traffic path and enforces security policies and rate limits,
- the management component, which holds the configuration and collects analytics,
- the engagement component, which provides the portal for API developers and consumers.
Particularly high demands are placed on the gateway component in terms of security, availability and performance, so scaling is essential there; the requirements for the management and engagement components are generally lower.
One of the biggest advantages of a cloud environment with containers is the ability to move components easily between cloud environments, e.g. from a private cloud to a public cloud on Amazon or Google. Likewise, additional containers can be started dynamically to scale a component for peak loads and stopped again once the load subsides.
Given these two advantages, a new API management solution should definitely be designed as a distributed architecture, i.e. the individual components should be independently installable and scalable in their own containers. The focus here is mainly on the gateway component.
Since such a solution is a distributed system, the CAP theorem comes into play: of the three properties consistency, availability and partition tolerance, only two can be fully guaranteed at the same time.
Depending on the requirements of the API management solution, the architecture looks different, especially in the communication between the gateway component and the management component.
If the focus is on consistency, synchronous communication is essential, but this reduces availability or partition tolerance. Conversely, you must accept restrictions in consistency if you place a high value on availability and partition tolerance.
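This trade-off can be sketched in a few lines of Python. The sketch is illustrative only: `CentralQuotaStore` stands in for a quota held by the management component, and the two handler functions show a strictly consistent synchronous check versus an available-but-possibly-inconsistent local check.

```python
class CentralQuotaStore:
    """Stands in for the management component's quota store; may be unreachable."""

    def __init__(self, limit):
        self.limit = limit
        self.count = 0
        self.reachable = True

    def try_consume(self):
        if not self.reachable:
            raise ConnectionError("management component unreachable")
        if self.count < self.limit:
            self.count += 1
            return True
        return False


def handle_request_consistent(store):
    # Synchronous check: always consistent, but the gateway cannot
    # serve requests while the management component is down.
    try:
        return store.try_consume()
    except ConnectionError:
        return False  # availability suffers


def handle_request_available(store, local_count, limit):
    # Local check with best-effort synchronization: the gateway stays
    # available, but the global count may temporarily exceed the limit.
    allowed = local_count < limit
    try:
        store.try_consume()  # best-effort update of the central store
    except ConnectionError:
        pass  # reconcile later; accept temporary inconsistency
    return allowed
```

With several gateway replicas each holding a `local_count`, the second strategy can briefly let more requests through than the global limit allows; that is exactly the consistency restriction the text describes.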
Traditional API management solutions are often based on central API gateways in an appliance form factor, to which an additional engagement component is added. In this case the API gateways combine the gateway and management components. Scaling is usually done by adding new appliances, with configurations exchanged in a master/slave fashion. Such a setup is not suitable for a distributed cloud architecture.
Most of the classic appliances are now available as containers and can therefore be operated in the cloud. However, since they are still based on old concepts, advantages such as dynamic scaling and high distribution cannot be exploited.
In addition, API gateways in many companies are subject to a manual and slow change process, which is very difficult to reconcile with the DevOps requirements of the engagement component or API developers.
Although most API gateways themselves offer APIs for configuration, due to their internal architecture and history most of these APIs are very complex and difficult to integrate into an agile deployment process.
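To make this concrete, here is a minimal sketch of what pushing an API definition to a gateway from a deployment pipeline could look like. The endpoint path `/admin/apis`, the payload shape and the bearer-token scheme are hypothetical; real products each expose their own configuration API.

```python
import json
import urllib.request


def publish_api(gateway_url, api_spec, token):
    """Register an API definition with a gateway via its configuration API.

    Hypothetical endpoint and payload; shown only to illustrate treating
    gateway configuration as part of an automated deployment.
    """
    payload = json.dumps(api_spec).encode("utf-8")
    req = urllib.request.Request(
        f"{gateway_url}/admin/apis",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

In a CI/CD pipeline such a call would run right after the service deployment, so the gateway configuration is versioned and rolled out like any other artifact instead of going through a manual change process.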
In an ideal world, an API management solution could be structured as follows:
In this architecture, only the gateway component needs to be highly available and fully autonomous. As already mentioned, this results in potential inconsistencies, for example, in analytics in the management component, but especially in the enforcement of rate limits.
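The analytics inconsistency mentioned above can be sketched as follows (class and method names are illustrative): each autonomous gateway buffers usage events locally and flushes them to the management component in batches, so the central analytics view is only eventually consistent.

```python
from collections import Counter


class AutonomousGateway:
    """Fully autonomous gateway replica that reports usage asynchronously."""

    def __init__(self, name):
        self.name = name
        self.buffer = Counter()  # api_name -> calls since last flush

    def handle(self, api_name):
        # Serve the request locally and record usage for later reporting.
        self.buffer[api_name] += 1

    def flush(self, management):
        # Asynchronous, batched upload: between flushes the management
        # component undercounts, and independently enforced per-gateway
        # rate limits can together exceed a global limit.
        management.ingest(self.name, self.buffer)
        self.buffer = Counter()


class ManagementAnalytics:
    """Stands in for the management component's analytics store."""

    def __init__(self):
        self.totals = Counter()

    def ingest(self, gateway_name, counts):
        self.totals.update(counts)
```

Until every replica has flushed, the management component sees fewer calls than were actually served, which is the price of keeping the gateways available and autonomous.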
If you need complete consistency, you need synchronous interfaces between the individual components. As a result, the complete API management platform with all its sub-components must be designed to be highly available, which is not always easy, especially where persistence is involved.
A cloud strategy is a focus for many companies: cloud is 'in', and they hope to reduce costs and time to market. With the transformation to the cloud and the associated distributed architectures with containers and microservices, especially in combination with DevOps, requirements such as scalability, deployment flexibility and automation are becoming increasingly important. Traditional API management approaches with central infrastructures, as well as the associated security requirements, must be rethought.