Continuous delivery for the container - Part 3

In this section, a strategy is presented for deploying containers in a continuous delivery pipeline and promoting them across several stages.

Deploy containers in a continuous delivery pipeline

Author: Christian Sanabria

The first two parts of this blog series showed how to develop images for instantiating containers using a continuous delivery pipeline, and how to use containers to build such a pipeline. This third and final part presents a strategy for deploying containers in a continuous delivery pipeline and bringing them across multiple stages, from development to production.

Images as new artifacts

Compared to traditional staging, where application artifacts are installed or updated on each target environment, staging with images offers several advantages:

  • Provisioning, deploying and configuring target environments is no longer a separate step; it happens ad hoc when the container is instantiated.
  • The environment configuration is the same on all target environments, because it is created together with the instantiation of the container.
  • The installation or update process has already been executed and tested during image creation and does not need to be repeated.
  • Zero-downtime deployment of an application is easier to achieve by running several container versions in parallel and switching the load balancer in a targeted way.

From source to image

For in-house developments there are basically two ways to create an image:

  • The classic way is to use a Dockerfile to install a pre-packaged artifact from an artifact repository onto a base image.
  • The modern way follows the source-to-image (S2I) principle, which is based on so-called builder images.
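The classic approach can be sketched as a minimal Dockerfile; the base image, artifact path and port below are hypothetical placeholders, not part of the original article:

```dockerfile
# Sketch of the classic approach: install a pre-built artifact onto a base image.
# Base image, artifact name and port are hypothetical examples.
FROM openjdk:8-jre

# Copy the pre-packaged artifact fetched from the artifact repository
COPY target/my-app.jar /opt/app/my-app.jar

EXPOSE 8080
CMD ["java", "-jar", "/opt/app/my-app.jar"]
```

The resulting image is then pushed to a registry and staged from there, instead of re-running the installation on every target environment.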

Builder images take the URL of a code repository as input and contain all the build tools needed to build the application from source code. For the build, a container is instantiated from the builder image and the build process is started. After successful completion, a start script for the newly created artifact is created inside the container. In the last step, an application image is created from this container, which can then be used in the subsequent staging process.
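In OpenShift, this S2I flow is typically described by a BuildConfig. The following is a minimal sketch; the repository URL, builder image and resource names are hypothetical examples:

```yaml
# Sketch of an OpenShift BuildConfig using the source-to-image (S2I) strategy.
# Repository URL, builder image and names are hypothetical examples.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-app
spec:
  source:
    git:
      uri: https://example.com/my-org/my-app.git   # code repository as input
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: java:latest            # builder image containing the build tools
  output:
    to:
      kind: ImageStreamTag
      name: my-app:latest            # resulting application image
  triggers:
    - type: GitHub
      github:
        secret: my-webhook-secret    # new commits trigger a rebuild
    - type: ImageChange              # rebuild when the builder image changes
```

The triggers illustrate how the platform can react to new commits or base-image changes, as described below for the pipeline as a whole.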

This entire pipeline can be fully automated, with the PaaS solution handling the orchestration.

Off into the pipeline

From this point on, the application image can be moved from stage to stage via a continuous delivery pipeline, as explained in the first part of this blog series, Continuous delivery for the container - Part 1. For this purpose, the use of a container orchestration platform such as OpenShift is recommended. Such a Platform-as-a-Service (PaaS) solution already includes all the features and processes needed to build a pipeline around images and containers. It can use triggers, such as a new commit to the code repository or a change to the base image, to initiate the build of a new application image. This new image is instantiated, and the container, including the application running in it, is tested. If the tests pass, the PaaS solution instantiates a new container from the image in the next target environment, in parallel to any container still running the old version of the application. Only once this quality gate has also been passed successfully is the old container removed and the load balancer switched over (blue/green deployment). The process then starts all over again in the next target environment, right up to production.
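The blue/green switch can be sketched with an OpenShift Route that balances traffic between the old and the new version; the service names and weights below are hypothetical examples:

```yaml
# Sketch of a blue/green setup with an OpenShift Route.
# Service names and weights are hypothetical; the weight is only shifted
# to "green" after the new version has passed the quality gate.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
spec:
  to:
    kind: Service
    name: my-app-blue      # old version, currently receiving all traffic
    weight: 100
  alternateBackends:
    - kind: Service
      name: my-app-green   # new version, running in parallel
      weight: 0
```

Switching the weights from 100/0 to 0/100 reassigns the load balancer without interrupting the application, after which the old containers can be removed.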

Past the bouncer

In a fully automated continuous delivery pipeline of this kind, the quality gates are particularly important. All tests required for a stage are carried out automatically at these gates, which means the tests must have high quality and coverage. They can be performed with the help of test tools such as CA DevTest. These tools are also automatically spun up and initialized as containers by the PaaS solution when required, as shown in the second part of this blog series, «Using Containers for a Continuous Delivery Pipeline». If required, you can even go so far as to start a separate database container with cloned data from the target environment for the tests; only after successful testing is the application container repointed to the 'real' database container. This process, too, is automated by the PaaS solution.
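Such a temporary test database can be sketched as a short-lived pod started from an image containing the cloned data; the image name and labels below are hypothetical examples:

```yaml
# Sketch of a temporary test database started from an image with cloned data.
# Image and names are hypothetical; the application container is only
# repointed to the real database after the tests have passed.
apiVersion: v1
kind: Pod
metadata:
  name: testdb-clone
  labels:
    app: testdb
spec:
  containers:
    - name: postgres
      image: registry.example.com/testdb-clone:latest  # DB image with cloned data
      ports:
        - containerPort: 5432
```

The pipeline points the application container at this pod for the test run and deletes it afterwards, so the real database is never touched by the tests.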

Welcome to the Club

With the completion of this blog series, all the building blocks for a successful continuous delivery pipeline, from container to container, are now in place.

The first part, «Creating Containers with a Continuous Delivery Pipeline», showed how to develop containers using a continuous delivery pipeline. The second part, «Using Containers for a Continuous Delivery Pipeline», explained how to use containers to build such a pipeline. This final part demonstrated how to bring containers into production via a continuous delivery pipeline.

The next step is to put these building blocks to use in your own company. ipt can support you in all phases: from the design of an optimally tailored container environment, through the containerization of applications, to the complete rollout of a PaaS solution for implementing your own cloud strategy.