Continuous Delivery for the Container - Part 1

The first part of this blog series deals with building container images with a continuous delivery pipeline.

Author: Christian Sanabria

Continuous delivery has arrived in Switzerland, and many companies now have a more or less mature continuous delivery pipeline. The cloud, or more specifically containers, is also a hot topic for more and more companies. Bringing the two together is an obvious step. Products such as Red Hat's OpenShift, with its practical features and tool integrations, support the creation of a container-based continuous delivery pipeline for flexible development and faster time-to-market.

The container as foundation

To get started with continuous delivery in the cloud at all, you need suitable base images. There are several ways to obtain such base images and derive your own application images from them. For OpenShift, for example, Red Hat offers a large number of its own images. Public images, such as those from Docker Hub, are another option. A further alternative is to develop your own base images; this development process is the subject of this first part of the blog series «Continuous Delivery for the Container».

Building blocks of a container

Containers are instantiations of images. In Docker, for example, images are created via so-called Dockerfiles. These consist of a series of instructions that, usually starting from a base image, execute build and configuration steps. The base images themselves are developed in the same way.
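As an illustration, a minimal application Dockerfile might look like the following sketch (the image name, package and paths are hypothetical):

```dockerfile
# Build on top of an in-house base image (hypothetical name)
FROM registry.example.com/base/rhel-base:7.4

# Build step: install the packages the application needs
RUN yum install -y java-1.8.0-openjdk-headless && yum clean all

# Configuration steps: add the application and define how it starts
COPY app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]
```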

Fig. 1: Anatomy of an image

Base images usually provide a minimal OS and basic configurations such as network, DNS or NTP. In addition, references to software repositories with released packages and artifact repositories are preconfigured.
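A base image Dockerfile therefore often contains little more than the OS setup; a minimal sketch, assuming an internal package repository (the copied file names and the repository are hypothetical):

```dockerfile
# Start from a minimal OS image
FROM centos:7

# Basic configuration, e.g. NTP (hypothetical config file)
COPY ntp.conf /etc/ntp.conf

# Point the package manager at the internal repository
# that contains only released packages
COPY internal.repo /etc/yum.repos.d/internal.repo
RUN yum -y update && yum clean all
```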

Assembly line work

Dockerfiles are source code, and the build of an image is subject to the same rules as the build of an application. This means that images must be tested before they are released for production. And, as with any application, this is done via a continuous delivery pipeline.

Fig. 2: General Continuous Delivery Pipeline

All steps within the pipeline run fully automatically and are ideally also started by an automated trigger, e.g. by pull requests to a code repository.
In the verify phase, automated static code analysis, e.g. with Sonar, looks for problematic statements in the Dockerfile. In the test phase, automated tests are performed at the operating-system level; ServerSpec, for example, can be used for this.
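Such an OS-level test could look roughly like the following ServerSpec sketch, which checks the NTP and DNS basics mentioned earlier (the image name is hypothetical):

```ruby
require 'serverspec'

# Specinfra's Docker backend (docker-api gem) starts a throwaway
# container from the image under test and runs the checks inside it
set :backend, :docker
set :docker_image, 'registry.example.com/base/rhel-base:7.4'

# The time service must be installed
describe package('ntp') do
  it { should be_installed }
end

# DNS resolution must be configured
describe file('/etc/resolv.conf') do
  its(:content) { should match /nameserver/ }
end
```

The tests run with plain RSpec, so they integrate into the pipeline like any other test suite.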

Quality Control

Since images also contain OS and system configurations, special attention is paid to compliance, particularly in the verify and test phases. Images are subject to the same requirements as production servers or VMs, because containers are instantiated from the same image all the way from the development environment to production.

Compliance includes, for example, enforcing security settings, configurations and other infrastructure policies. Possible tests check whether the sudo command cannot be invoked, whether certain services are running, or whether sensitive directories and files have the appropriate file-system permissions. InSpec, for example, can be used for this.
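Expressed as InSpec controls, the checks just mentioned might look roughly like this sketch (control names and the exact checks are illustrative):

```ruby
# sudo must not be invocable inside the container
control 'no-sudo' do
  impact 1.0
  describe command('sudo') do
    it { should_not exist }
  end
end

# No SSH daemon may run inside the container
control 'no-sshd' do
  describe service('sshd') do
    it { should_not be_running }
  end
end

# Sensitive files must have restrictive permissions
control 'shadow-perms' do
  describe file('/etc/shadow') do
    it { should_not be_readable.by('others') }
  end
end
```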

Moving into the container

Once a base image has been deployed to the image repository, it can be used in a variety of ways. Products such as CA DevTest can be installed on top of it to build an on-demand test environment, which is discussed in the second part of this blog series, «Using Containers for a Continuous Delivery Pipeline». Or it can serve as the basis for source-to-image (S2I) builds that build and deploy in-house developments in a stable environment, as discussed in the third part, «Deploying Containers in a Continuous Delivery Pipeline». Both are likewise done via Dockerfiles and can be deployed automatically via the same pipeline as the base images, although the testing requirements will of course change accordingly.

Maintenance

With a container orchestration platform like OpenShift, the maintenance of such container landscapes can be massively simplified and almost completely automated.
In the case of a security issue, such as the Heartbleed bug in OpenSSL, a new base image with the patched version of the library can easily be created and deployed. OpenShift detects the new version of the base image and can rebuild all images based on it until all containers are running the corrected software.
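In OpenShift, this can be wired up with an image change trigger on the build configuration; a minimal sketch with hypothetical names:

```yaml
apiVersion: v1
kind: BuildConfig
metadata:
  name: my-app
spec:
  source:
    git:
      uri: https://git.example.com/my-app.git
  strategy:
    type: Docker
    dockerStrategy:
      from:
        kind: ImageStreamTag
        name: rhel-base:latest   # the in-house base image
  output:
    to:
      kind: ImageStreamTag
      name: my-app:latest
  triggers:
  # Rebuild my-app whenever a new version of the base image is pushed
  - type: ImageChange
    imageChange: {}
```

Deployments can subscribe to the resulting application image in the same way, so a patched base image automatically propagates to the running containers.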

Fig. 3: Container in the Continuous Delivery Pipeline

The continuous delivery pipeline and its tests ensure that compliance is maintained at all times and that all applications continue to function as expected.

Now what?

With the continuous delivery pipeline for images, the foundation has now been laid for the actual application deployment. It is ensured at all times that all compliance requirements for the containers are met and that security-relevant fixes are rolled out in a controlled and, above all, automated manner, something that is not easily possible with physical servers or VMs.

The remaining parts of the blog series «Continuous Delivery for the Container» will follow over the course of the week. Part 2 («Using Containers for a Continuous Delivery Pipeline») and Part 3 («Deploying Containers in a Continuous Delivery Pipeline») will discuss further strengths of containers in relation to continuous delivery.