Continuous delivery for containers - Part 2

In the second part of this blog series we show how containers can be used in a continuous delivery pipeline within a cloud environment.

Author: Christian Sanabria

This part covers the use of containers in a continuous delivery pipeline within a cloud environment: mainly for application development, but of course also for the images themselves.

Things we leave behind

The infrastructure in classical software development looks something like this:

  • Developers have a desktop development environment on their own notebook or PC, including build tools, some test tools and a small amount of test data.
  • On the Continuous Integration environment, the same build and test tools are available together with the complete test data, shared across all projects.
  • On all other staging environments, the same test tools and the complete test data are present again, shared across all projects.
  • In addition, there is a central code and artifact repository and a tool for mapping the pipeline.

This setup has some disadvantages:

  • The versions of build and test tools must be kept synchronized across all staging environments, which involves installation effort and downtime.
  • The Continuous Integration environment should always be available and also scalable, which involves maintenance effort.
  • Changes to the test data have side effects on other projects and must be kept consistent, which involves coordination effort.

Brave new world

In the world of containers and container orchestration, the infrastructure looks something like this:

  • Developers still have a desktop development environment, but perhaps also an online development environment at their disposal.
  • All other components of the continuous delivery pipeline are provided as images that can be instantiated as containers and started with the appropriate configuration on demand (see the sketch after this list).
  • A cloud environment and a container orchestration platform such as OpenShift are available.
  • The central code and artifact repository and the pipeline tool still exist.
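
To make this concrete, here is a minimal sketch of starting a build tool on demand, written with the Docker SDK for Python; the Maven image tag and the project path are illustrative assumptions, not part of the original setup:

    import docker

    # Connect to the local Docker daemon (requires the Docker SDK for
    # Python: pip install docker).
    client = docker.from_env()

    # Start a build tool on demand: a Maven container that builds the
    # project mounted from the developer's workspace. Image tag and
    # paths are illustrative.
    logs = client.containers.run(
        "maven:3-eclipse-temurin-17",    # the one centrally maintained tool image
        command="mvn -B package",
        volumes={"/home/dev/my-app": {"bind": "/usr/src/app", "mode": "rw"}},
        working_dir="/usr/src/app",
        remove=True,                     # the container is discarded after the build
    )
    print(logs.decode())

The same pattern applies to every other tool in the pipeline: the tool lives in exactly one image, and every run gets a fresh, identical instance of it.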

With this setup, the disadvantages of the «classic» pipeline can be elegantly eliminated:

  • Only one image per build and test tool needs to be maintained, eliminating the need for synchronization across multiple staging environments.
  • Test data is also provided in a dedicated container on demand; changes have no side effects on other projects, and consistency is guaranteed because every run starts from the state baked into the image (see the sketch after this list).
  • There is no longer a dedicated continuous integration environment, as the required tools are started in containers on demand and scaled by the container orchestration.
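
The test data pattern can be sketched the same way: a hypothetical image my-project/testdata-db, containing a database pre-seeded with the project's test data, is started before the tests and discarded afterwards. Image name, port and container name are assumptions for illustration:

    import docker

    client = docker.from_env()

    def run_integration_tests():
        # Placeholder for the project's actual test run.
        pass

    # Start the project's dedicated test data container on demand.
    # "my-project/testdata-db" is a hypothetical pre-seeded database image.
    testdata = client.containers.run(
        "my-project/testdata-db:1.0",
        detach=True,
        ports={"5432/tcp": 5432},
        name="testdata-my-project",
    )
    try:
        run_integration_tests()
    finally:
        # Discard the container after the tests; the next run starts
        # again from the consistent state baked into the image.
        testdata.remove(force=True)

Because every run instantiates its own copy of the test data, no project can break another project's tests.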

Unbreakable

The use of containers and container orchestration, together with a cloud environment, creates a new continuous delivery pipeline that is ideal for efficient software development and a fast time-to-market. For each project, builds, tests and deployments are independent of other applications across all staging environments, which massively increases stability and reproducibility. The coordination of test data is simplified: only one global state needs to be maintained, which is instantiated as required and discarded after the tests, so it is available in a consistent state at all times. Maintaining a continuous integration environment and several instances of the test tools is no longer necessary; only the images themselves have to be maintained.

Deep Impact

Building such a container-based continuous delivery pipeline comes with a non-trivial initial cost. Above all, the development of the images and the provisioning of the cloud environment should not be underestimated. For the images, we have to think about how configuration or test data can be loaded at container startup (a minimal entrypoint sketch follows below). Especially with standard products, additional work on the image design is to be expected. Likewise, adjustments to processes and interfaces, both electronic and human, may become necessary under certain circumstances.
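
As a sketch of the startup question, the following hypothetical container entrypoint merges a mounted configuration file with environment variables, so the same image can be started with stage-specific settings; the file path and the APP_ prefix are illustrative:

    import json
    import os
    import sys

    # Config file location, e.g. mounted into the container as a volume
    # or ConfigMap; the default path is an assumption.
    CONFIG_FILE = os.environ.get("APP_CONFIG_FILE", "/etc/my-app/config.json")

    def load_config():
        config = {}
        # 1. Optional configuration file provided at container startup.
        if os.path.exists(CONFIG_FILE):
            with open(CONFIG_FILE) as f:
                config.update(json.load(f))
        # 2. Environment variables override file values, so each stage
        #    can run the same image with different settings.
        for key, value in os.environ.items():
            if key.startswith("APP_"):
                config[key[len("APP_"):].lower()] = value
        return config

    if __name__ == "__main__":
        cfg = load_config()
        print(f"starting with configuration: {cfg}", file=sys.stderr)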

Step by Step

However, the initial effort does not have to be made in a big bang: each module of the continuous delivery pipeline can be migrated individually and as needed. This allows sensible planning and implementation without major risks. It is also a big step towards a DevOps organization, because the dedicated containers transfer much more responsibility to the developers: they maintain and deploy the images for their tools themselves (see the sketch below). Operations is relieved accordingly and can concentrate on its core tasks instead of dealing with development and continuous integration environments.
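
What «maintaining and deploying the images themselves» can look like for a developer, sketched again with the Docker SDK for Python; the registry, repository and tag are illustrative:

    import docker

    client = docker.from_env()

    # Build the tool image from the Dockerfile in the current directory.
    image, build_log = client.images.build(
        path=".",
        tag="registry.example.com/team-a/build-tool:1.2.0",
    )

    # Push it to the central registry so the pipeline can instantiate it.
    for line in client.images.push(
        "registry.example.com/team-a/build-tool",
        tag="1.2.0",
        stream=True,
        decode=True,
    ):
        print(line)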

Now what?

With a continuous delivery pipeline for images, and with that pipeline itself provided by means of containers, the only thing missing is the deployment of in-house developments in containers across the different stages. This is done using builder images (source-to-image, S2I), sketched below. In Part 3, «Deploying Containers in a Continuous Delivery Pipeline», we will discuss the strengths of containers with regard to continuous delivery.
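
As a small preview, triggering an S2I build on OpenShift can look roughly like this, here driven from Python via the oc CLI; the builder image, repository URL and application name are illustrative assumptions:

    import subprocess

    # Create a build configuration from a builder image and a source
    # repository (S2I syntax: <builder-image>~<source-repo>).
    subprocess.run(
        ["oc", "new-build",
         "registry.access.redhat.com/ubi9/openjdk-17~https://git.example.com/team-a/my-app.git",
         "--name=my-app"],
        check=True,
    )
    # Start the build and follow its log.
    subprocess.run(["oc", "start-build", "my-app", "--follow"], check=True)

The result is an application image that the orchestration can then deploy across the stages.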