Cloud-native applications can be built on containers or on serverless functions. This article explains the respective advantages and disadvantages.
Author: Yu Li
With cloud-native application development, the benefits of cloud computing are fully exploited. It's not about where applications are developed, but how. So the question is how to develop cloud-native applications: using container technologies or serverless functions?
Containers are already used by many companies and are becoming increasingly popular. Serverless is also gaining acceptance, with the cloud provider being responsible for the execution of code.
Depending on the project, one approach is more suitable than the other. It is crucial to understand the two approaches and their advantages and disadvantages. The following article is intended to help in this respect.
This blog compares the two approaches based on six aspects (Figure 1). However, the aspects to be compared must always be adapted to the project and the customer situation.
The basic technologies for operating and orchestrating containers are offered by every cloud provider (e.g. Azure Kubernetes Service or Google Kubernetes Engine). Container-based solutions can therefore be ported between cloud providers with relatively little effort.
Serverless-based solutions, on the other hand, cannot be migrated between cloud providers as easily. This is due to the provider-specific implementation of serverless products such as AWS Lambda and Azure Functions. For example, to migrate an AWS Lambda function to an Azure Function, the triggers (e.g. HTTP triggers, queue triggers, database triggers) would have to be rewritten.
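To illustrate why the triggers are the main porting effort, the sketch below contrasts the handler shapes of the two platforms. The signatures and event fields are simplified for illustration; real code would use the provider SDKs (the Lambda event format from API Gateway and the `azure.functions` package).

```python
# Simplified sketch of the same HTTP-triggered logic on two platforms.
# Event shapes and signatures are illustrative, not the full SDK APIs.

def business_logic(name: str) -> str:
    """Provider-independent core logic."""
    return f"Hello, {name}"

# AWS Lambda style: handler(event, context) with an API Gateway event dict.
def lambda_handler(event, context):
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {"statusCode": 200, "body": business_logic(name)}

# Azure Functions style: a single request object; real code takes a
# func.HttpRequest and returns a func.HttpResponse.
def azure_main(req):
    name = req.params.get("name", "world")
    return business_logic(name)
```

Only `business_logic` survives a migration unchanged; both wrappers, plus the trigger configuration itself (API Gateway setup vs. `function.json` bindings), have to be rewritten.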
Scalability is the strength of serverless. Serverless functions are scaled automatically according to demand, with the scaling carried out entirely by the cloud provider. This saves administrative effort and reduces complexity.
Containers can also scale according to demand. In contrast to serverless, container scaling is the responsibility of the user.
Serverless scales automatically. However, the latency of scaling from the actual state to the target state must be taken into account. If no instance of the serverless function is ready when it is called, the so-called «cold start» problem occurs: there is a latency between the call and the execution of the function, which noticeably degrades the user experience. There are various workarounds against «cold start», such as scheduled pingers, a retry approach or pre-warmers. Cloud providers also offer managed options; Microsoft Azure, for example, offers the Azure Functions Premium plan, which keeps instances warm and thus avoids cold starts.
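A scheduled pinger, for example, simply calls the function at a fixed interval so that the provider keeps an instance loaded. Below is a minimal, provider-neutral sketch; the ping target and interval are assumptions.

```python
import time
from typing import Callable

def keep_warm(ping: Callable[[], None], interval_s: float, rounds: int) -> int:
    """Periodically invoke `ping` (e.g. an HTTP GET against the function's
    endpoint) so that an instance stays warm. Returns the number of pings
    that succeeded."""
    successes = 0
    for _ in range(rounds):
        try:
            ping()
            successes += 1
        except Exception:
            pass  # a failed ping is tolerable; the next round retries
        time.sleep(interval_s)
    return successes
```

In practice this loop would itself run as a cheap scheduled job (cron, an Azure timer trigger, a CloudWatch scheduled event) rather than as a long-running process.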
A container-based solution suffers less from «cold start» problems, as it is usually not scaled down to zero replicas. Scaling up, on the other hand, must be considered. Kubernetes typically scales according to CPU or memory usage. In event-based architectures, Kubernetes therefore often reacts too late, since the number of outstanding messages in a queue does not affect scaling. To address this, cloud providers such as Microsoft Azure are actively promoting solutions like KEDA, which enables fine-grained autoscaling for event-driven Kubernetes workloads.
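With KEDA, the scaling rule is declared next to the workload. The fragment below is an illustrative `ScaledObject` for an Azure Storage queue; the names, queue and thresholds are assumptions, and a real deployment additionally needs a connection secret or a `TriggerAuthentication`.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-scaler              # illustrative name
spec:
  scaleTargetRef:
    name: orders-processor         # the Deployment to scale (assumption)
  minReplicaCount: 0               # scale to zero when the queue is empty
  maxReplicaCount: 20
  triggers:
    - type: azure-queue
      metadata:
        queueName: orders          # illustrative queue name
        queueLength: "5"           # target messages per replica
```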
In summary, there are latency challenges for both serverless and containerized workloads. In our opinion, the solutions available for containers are more mature than those for serverless. Since a direct comparison is difficult here, it is better to understand the challenges and clarify possible solutions early in a project.
In the container ecosystem, there is extensive tooling for analyzing containers. With serverless, however, the runtime is managed by the cloud provider, which poses additional challenges for performance analysis. Without the ability to install agents that retrieve metrics directly from the runtime, one must rely entirely on the capabilities of the cloud platform. Azure Monitor, for example, offers analysis functions with metrics for Azure Functions.
The use of containers has become widespread in software development in recent years. However, containers also have certain limitations. For example, containers in event-based applications cannot be scaled in a fine-grained way out of the box, because Kubernetes scales according to CPU and memory usage rather than according to events (unless an extension such as KEDA is used).
Serverless technologies are able to scale from zero to peak load. For large numbers of events, the cloud provider ensures that the serverless functions scale quickly to handle the events. This makes serverless applications particularly suitable for event-based architectures and business applications with peak loads.
With serverless, more of the responsibility for security is transferred to the cloud provider. Security topics such as Kubernetes security, image vulnerabilities and registry security, which have to be addressed for containers, are less relevant here.
On the other hand, new challenges have to be considered. With serverless, each function is its own security perimeter, which means that each function must be tested individually for vulnerabilities. Serverless functions are also stateless, so stateful information (e.g. session state keyed by a session ID) must be stored in an external service, which brings new requirements for data security.
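The pattern itself is simple: the function keeps nothing in memory between invocations and looks up all per-user state in an external store via a session ID. In this sketch a plain dict stands in for the external store (Redis, Cosmos DB, DynamoDB, etc.):

```python
# Stateless handler: all session state lives in an external store,
# addressed by a session ID. `store` stands in for Redis/Cosmos DB/etc.

def handle_request(session_id: str, store: dict) -> int:
    """Count the requests seen for this session using only external state."""
    count = store.get(session_id, 0) + 1
    store[session_id] = count  # real code would also set a TTL and encrypt
    return count
```

Because the state leaves the function's process, it must now also be secured in transit and at rest, which is exactly the new data-security requirement described above.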
Comparing serverless and containers gives a good picture of their respective strengths and weaknesses. In summary, containers can be used very broadly and offer mature solutions for the challenges that arise. Serverless functions bring the advantage of effortless scaling, but also challenges such as vendor lock-in, latency and limited analysis capabilities. Compared to containers, serverless functions have a narrower field of application and are particularly suitable for event-based architectures and applications with high load peaks.