Theory meets reality - The demo app on the Continuous Delivery Pipeline

In a previous blog post we presented the concept of the Continuous Delivery Pipeline. Now it is time to put theory into practice.

Using the demo application, we explain the concrete building blocks of the Continuous Delivery Pipeline. The pipeline becomes tangible, and everyone will recognize typical tools they have already encountered in their daily work.

Theory and practice: a story that has accompanied us for a long time. Just this morning I read that a city forest is to be divided into protection zones only a few hundred meters across. This means that within a few hundred meters of continuous forest I may pick mushrooms in one spot but not in another. I am curious to see how this will be implemented in practice! Practical implementation is also the subject of the Continuous Delivery Pipeline. We have described the concept and also recorded it audiovisually. The proof that the concept actually works was delivered with a practical demo application.

The demo app is a miniature version of an application that in production handles several hundred transactions per second and manages millions of data records. During installations and updates there were repeatedly unforeseen failures or behavioral anomalies, caused by manually entered configuration parameters or non-identical environments. With the demo app we have proven that these cost-intensive failures are avoidable: the entire process was completely automated.

Figure 1: The pipeline of the demo app was realized on the Continuous Delivery Pipeline. It orchestrates the automated process that brings code changes into production.

We have built a Continuous Delivery Pipeline and implemented the demo app on it. The pipeline can be touched and seen working. In the webinar we could only show slides, so a live demo was not possible there. But we would be happy to come by and show you the pipeline in action.

Version Control: To trigger the pipeline, and with it the installation process, we check a change into our code base in the version control system. We use multiple Git repositories to manage application code and configurations.
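
For illustration, checking in a change can look like the following sketch, here using the JGit library; the repository path and commit message are hypothetical, and an ordinary git commit and push from the command line triggers the pipeline just as well.

```java
import org.eclipse.jgit.api.Git;

import java.io.File;

public class TriggerPipeline {
    public static void main(String[] args) throws Exception {
        // Open the local working copy (path is hypothetical)
        try (Git git = Git.open(new File("/work/demo-app"))) {
            git.add().addFilepattern("src/").call();                   // stage the change
            git.commit().setMessage("Adjust invoice rounding").call(); // check it in
            git.push().call();                                         // the CI server notices this within a minute
        }
    }
}
```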

Continuous Integration: The CI server polls every minute to see whether anything has changed. If there are changes, it compiles the code and runs the unit and integration tests. If these succeed, an artifact (a WAR file) is generated and stored in the artifact repository. The CI server then informs the pipeline that the artifacts are ready and the installation can start. As CI server we used JetBrains TeamCity, and as integration test framework JBoss Arquillian.
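
To make this concrete, here is a minimal sketch of an Arquillian integration test; GreetingService is a hypothetical CDI bean standing in for the demo app's real classes:

```java
import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.Test;
import org.junit.runner.RunWith;

import javax.inject.Inject;

import static org.junit.Assert.assertEquals;

@RunWith(Arquillian.class)
public class GreetingServiceIT {

    // Arquillian deploys this micro-WAR into a real container before the tests run
    @Deployment
    public static WebArchive createDeployment() {
        return ShrinkWrap.create(WebArchive.class, "greeting-test.war")
                .addClass(GreetingService.class)
                .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml"); // enable CDI
    }

    @Inject
    private GreetingService greetingService; // injected inside the container

    @Test
    public void greetsByName() {
        assertEquals("Hello, Pipeline!", greetingService.greet("Pipeline"));
    }
}
```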

Configuration Management: Next, the pipeline provides the infrastructure required for the demo app. As configuration management tools we used Chef, Docker and HashiCorp's Vagrant.
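
The project itself used Chef recipes, Docker images and Vagrant boxes for this step. Purely as a Java-level illustration of the same infrastructure-as-code idea, the following sketch uses the Testcontainers library to provide a disposable Docker container; the image and credentials are hypothetical:

```java
import org.testcontainers.containers.GenericContainer;

public class InfrastructureSketch {
    public static void main(String[] args) {
        // Describe the required container declaratively, then let the tooling provide it
        try (GenericContainer<?> db = new GenericContainer<>("postgres:9.6")
                .withEnv("POSTGRES_PASSWORD", "secret")
                .withExposedPorts(5432)) {
            db.start();
            // The pipeline would hand these coordinates on to the following steps
            System.out.println("Database reachable at " + db.getHost() + ":" + db.getMappedPort(5432));
        }
    }
}
```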

Artifact Repository: To ensure that the application and the infrastructure are aligned with the environment, the pipeline retrieves the necessary information (e.g. IP addresses, credentials, ports) from configuration management. The artifacts to be installed (application and database) are retrieved from the artifact repository. The repository is read-only storage: once an artifact is stored there, it can no longer be changed, which makes it auditable. As repository product we used Sonatype Nexus.
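
As a sketch of this retrieval step: a released WAR can be fetched from the repository over plain HTTP. The repository URL and Maven coordinates below are hypothetical; the sketch uses the HttpClient that ships with Java 11:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class FetchArtifact {
    public static void main(String[] args) throws Exception {
        // Hypothetical repository URL; in the pipeline the coordinates come from configuration management
        String url = "https://nexus.example.com/repository/releases/"
                + "com/example/demo-app/1.0.0/demo-app-1.0.0.war";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();

        // Stream the immutable artifact straight to disk
        HttpResponse<Path> response = client.send(request,
                HttpResponse.BodyHandlers.ofFile(Path.of("demo-app.war")));
        System.out.println("Download finished with HTTP status " + response.statusCode());
    }
}
```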

Database Lifecycle Management: Via Database Lifecycle Management (DLM) we create the required database structures and fill them with user data. DLM makes it possible to enforce best practices for database changes company-wide, without manual review. The same infrastructure is used as for the application. Automation minimizes sources of error and brings the speed of database development into line with that of agile software development. We used Flyway, an open source solution, for this.
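
A minimal sketch of this step with Flyway's Java API; the connection details are hypothetical and would, in the pipeline, come from configuration management:

```java
import org.flywaydb.core.Flyway;

public class MigrateDatabase {
    public static void main(String[] args) {
        Flyway flyway = Flyway.configure()
                // Hypothetical connection data; supplied by configuration management in the pipeline
                .dataSource("jdbc:oracle:thin:@db.example.com:1521/DEMO", "demo", "secret")
                // Versioned scripts such as V1__create_tables.sql, V2__seed_user_data.sql
                .locations("classpath:db/migration")
                .load();
        flyway.migrate(); // applies all pending migrations in version order
    }
}
```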

Application Server: The application server (Oracle WebLogic) installed via configuration management is now prepared for the installation of the application. A separate domain is created and configured (database connection, security).

Simulations: To be able to execute the integration tests repeatedly, simulations are started that abstract away the surrounding systems. Repeatable means that the tests do not fail because surrounding systems are unavailable or test data has changed. The simulations were created with CA Service Virtualization.
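
In our setup the simulations came from CA Service Virtualization. Purely to illustrate the idea, the sketch below shows a minimal hand-rolled stub for a surrounding system, built with the HTTP server included in the JDK; the endpoint and payload are hypothetical:

```java
import com.sun.net.httpserver.HttpServer;

import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class PartnerSystemStub {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(9090), 0);

        // Always return the same stable test data, so the tests stay repeatable
        server.createContext("/partner/customers/42", exchange -> {
            byte[] body = "{\"id\":42,\"name\":\"Test Customer\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        System.out.println("Stub for the surrounding system listening on port 9090");
    }
}
```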

Installation: The Web Archive (WAR) containing the application is next installed into the prepared WebLogic domain. The configuration files contained in the WAR are populated with the parameters from configuration management. This task is performed by the bus.
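
Conceptually, the parameter substitution works like the sketch below: placeholders in a configuration file inside the exploded WAR are replaced with values from configuration management. Path and parameter names are hypothetical; the sketch needs Java 11 or later:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;

public class PopulateConfiguration {
    public static void main(String[] args) throws Exception {
        // Hypothetical values; delivered by configuration management in the real pipeline
        Map<String, String> parameters = Map.of(
                "db.url", "jdbc:oracle:thin:@db.example.com:1521/DEMO",
                "app.port", "7001");

        Path config = Path.of("demo-app/WEB-INF/classes/app.properties");
        String content = Files.readString(config);
        for (Map.Entry<String, String> entry : parameters.entrySet()) {
            // Replace placeholders of the form ${db.url} with concrete values
            content = content.replace("${" + entry.getKey() + "}", entry.getValue());
        }
        Files.writeString(config, content);
    }
}
```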

Automated tests: The functional requirements are checked by automated system integration tests, implemented with CA DevTest (see also the blog post: CPO SV 9.0). If the tests pass, the installation is recorded as successful in the bus. If tests fail, it is possible to return to the last release that successfully passed the tests. This is called an automatic rollback.
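
The rollback decision itself is simple control flow, sketched below; deploy and runSystemTests are hypothetical hooks into the pipeline tooling, not CA DevTest API calls:

```java
public class DeployWithRollback {
    public static void main(String[] args) {
        String lastGoodRelease = "1.0.0"; // last version that passed all tests
        String candidate = "1.1.0";

        deploy(candidate);
        if (runSystemTests()) {
            System.out.println("Release " + candidate + " recorded as successful");
        } else {
            // Automatic rollback: reinstall the last release that passed the tests
            System.out.println("Tests failed, rolling back to " + lastGoodRelease);
            deploy(lastGoodRelease);
        }
    }

    static void deploy(String version) { /* hypothetical hand-off to the pipeline */ }

    static boolean runSystemTests() { return true; /* placeholder result */ }
}
```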

Continuous Delivery Pipeline: We implemented the Continuous Delivery Pipeline with CA Release Automation. A key reason for this decision is that hundreds of predefined adapters for tools such as JIRA, Chef, Jenkins or HP Quality Center are included, which makes integration into existing environments easy and fast. The abstract modelling of environments and processes offers great flexibility: an application can be installed on a single machine or on multiple cluster environments, without adjustments and at the push of a button. With the Continuous Delivery Pipeline it is possible to start with the current infrastructure, without profound changes. Building on this platform, the entire process can then be taken step by step into the digital age, together with everyone involved.