Coding Continuous Delivery — Helpful Tools for the Jenkins Pipeline
This article is part 3 of the series “Jenkins Pipeline for continuous delivery”
After the first two parts of this series discussed the basics and the performance of Jenkins Pipelines, this article describes useful tools and methods: shared libraries allow Pipeline code to be reused across different jobs and to be unit tested. In addition, the use of containers with Docker offers advantages in Jenkins Pipelines.
In the following, the Pipeline examples from the first two articles will be successively expanded to demonstrate the features of the Pipeline. In doing so, the changes will be presented in both declarative and scripted syntax. The current status of each extension can be followed and tried out on GitHub (see Jenkinsfile Repository GitHub). For the number stated in the title of each section, there is a branch in both a declarative and a scripted variant that shows the full example. The result of the builds for each branch can also be seen directly on our Jenkins instance (see Cloudogu Open Source Jenkins). As in the first two parts, the features of the Jenkins Pipeline plugin will be shown using a typical Java project. The kitchensink quickstart from WildFly is a useful example here as well. Since this article builds upon the previous examples, the numbering continues from the first two parts, which showed seven examples: a simple Pipeline, own steps, stages, error handling and properties/archiving in the first part, as well as parallelization and nightly builds in the second. Thus, shared libraries are the eighth example.
Shared Libraries (8)
In the examples shown in this series of articles, there are already a few self-written steps, such as mvn() and mailIfStatusChanged(). These are not project-specific and could be stored separately from the Jenkinsfile and thus also be used in other projects. With Jenkins Pipelines, there are currently two options for referencing external files:
- load step: loads a Groovy script file from the Jenkins workspace (i.e. the same repository) and evaluates it. Further steps can then be loaded dynamically.
- Shared libraries: allow the inclusion of external Groovy scripts and classes.
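As a quick sketch of the first option, a script from the same repository could be loaded as follows (file name and content are made up; the required return this is explained below):

// jenkins/maven.groovy in the same repository (hypothetical file)
def mvn(String args) {
    sh "mvn ${args}"
}
return this; // required so that the Pipeline can use the loaded steps

// In the Jenkinsfile:
node {
    checkout scm                            // the file must be in the workspace
    def maven = load 'jenkins/maven.groovy'
    maven.mvn 'clean install'
}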
The load step has a few limitations:
- Classes cannot be loaded, only Groovy scripts (see Groovy scripts vs. classes). With these scripts, additional classes cannot easily be loaded, and inheritance is not possible. For the scripts to be usable in the Pipeline, each script has to end with return this;.
- Only files from the workspace can be used. Therefore, reuse in other projects is not possible.
- The scripts loaded by this step are not shown in the “replay” feature described in the first article. As a result, they are more difficult to develop and debug.
Shared libraries are not subject to these three limitations, which makes them much more flexible. Thus, their use will be described in greater detail in the following.
Currently, a shared library has to be loaded from a repository of its own. Loading it from the repository that is currently being built is not yet possible, but may be at some point in the future (see cps-global-lib-plugin Pull Request 37). This will make it possible to split the Jenkinsfile into various classes/scripts, which in turn increases maintainability and provides the option of writing unit tests. This is also helpful for the development of shared libraries, since these can then be used in their own Jenkinsfile.
The repository for each shared library needs to have a specific directory structure:
- src contains Groovy classes
- vars contains Groovy scripts and documentation
- resources contains other files
A test directory for unit tests and a dedicated build for the library are also recommended.
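Put together, a shared library repository could look roughly like this (all file names are made up):

src/com/mycorp/jenkins/MyHelper.groovy       // Groovy classes, package structure below src
vars/mvn.groovy                              // Groovy script, available as global variable 'mvn'
vars/mvn.txt                                 // optional documentation for the step
resources/settings.xml                       // other files, loadable via the libraryResource step
test/com/mycorp/jenkins/MyHelperTest.groovy  // unit tests (recommended)
pom.xml                                      // dedicated build (recommended)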
To reduce the complexity of the Jenkinsfile from the examples and make the functionality reusable for other projects, a step will be extracted into a shared library in the following example. For the mvn step, an mvn.groovy file is created in the vars directory of the shared library repository (see Listing 1). It contains the method known from the first part of this article series.
def call(def args) {
    def mvnHome = tool 'M3'
    def javaHome = tool 'JDK8'
    withEnv(["JAVA_HOME=${javaHome}", "PATH+MAVEN=${mvnHome}/bin:${env.JAVA_HOME}/bin"]) {
        sh "${mvnHome}/bin/mvn ${args} --batch-mode -V -U -e -Dsurefire.useFile=false"
    }
}
Listing 1
In the Groovy script in Listing 1, however, this method is named call(), following a Groovy convention. Technically, Jenkins creates a global variable for each .groovy file in the vars directory and names it after the file. If this variable is invoked with the call operator (), its call() method is implicitly executed (see Groovy call operator). Since brackets are optional for method calls in Groovy, the steps are invoked in scripted and declarative syntax exactly as before, for example: mvn 'test'.
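Concretely, both of the following invocations end up in the call() method of vars/mvn.groovy from Listing 1:

mvn('test')   // explicit use of the call operator: invokes call('test')
mvn 'test'    // identical, since brackets are optional in Groovy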
There are several options for using a shared library in a Pipeline. First, the shared library must be defined in Jenkins, for which the following options exist:
- Global: Must be set by a Jenkins administrator in the Jenkins configuration. Shared libraries defined therein are available in all projects and are treated as trustworthy. This means that they may execute all Groovy methods, internal Jenkins APIs, etc. Therefore, caution should be exercised. This can, however, also be used, for example, to encapsulate the queries described under nightly builds, which would otherwise require script approval.
- Folder/multibranch: Can be set by authorized project members for a group of build jobs. Shared libraries defined therein are only valid for associated build jobs and are not treated as trustworthy. This means they run in the Groovy sandbox, just like normal Pipelines.
- Automatic: Plugins such as the Pipeline GitHub Library Plugin (see Github Branch Source Plugin) allow for automatic definition of libraries within Pipelines. This makes it possible for shared libraries to be used directly in Jenkins files without prior definition in Jenkins. These shared libraries also run in the Groovy sandbox.
For our example, the GitHub Branch Source Plugin can be used, since the example is hosted on GitHub and therefore requires no further configuration in Jenkins. In the examples for both scripted and declarative syntax, the externally referenced steps (for example, mvn) are made available by including the shared library in the first line of the script:
@Library('github.com/cloudogu/jenkinsfiles@e00bbf0') _
Here, github.com/cloudogu/jenkinsfiles is the name of the shared library, and the version is given after the @, in this case a commit hash. A branch name or tag name could also be used here. It is recommended to pin a fixed version (tag or commit instead of branch) to ensure deterministic behavior. Since the shared library is freshly fetched from its repository in each build, there would otherwise be the risk that a change to the shared library affects the next build without any change to the actual Pipeline script or code. This can lead to unexpected results whose causes are difficult to find.
Alternatively, libraries can be loaded dynamically using the library step. Their steps can then be used only after this step has been called.
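A minimal sketch of dynamic loading, assuming a library named ces-build-lib has been defined in Jenkins (name and version are assumptions):

// steps from the library are only available after this call
def lib = library 'ces-build-lib@e00bbf0'
mvn 'test' // usable from here on
// classes can be accessed via the return value without an import, e.g.
// def helper = lib.com.mycorp.jenkins.MyHelper.new() (hypothetical class)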
As described above, classes can also be created in shared libraries in addition to scripts (in the src directory). If these are contained in packages, they can be imported using import statements after the @Library annotation. In scripted syntax, these classes can be instantiated anywhere in the Pipeline, but in declarative syntax only within the script step. An example of this is the shared library of the Cloudogu EcoSystem (see Cloudogu ces-build-lib).
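A sketch of this pattern; the package, class and constructor follow the ces-build-lib example and are best checked against the library itself:

@Library('github.com/cloudogu/ces-build-lib@master') // better: pin a tag or commit, as recommended above
import com.cloudogu.ces.cesbuildlib.MavenInDocker

node {
    // scripted syntax: classes can be instantiated anywhere;
    // in declarative syntax this has to happen inside a script {} step
    def mvn = new MavenInDocker(this, '3.5.0-jdk-8')
    mvn 'clean install'
}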
Shared libraries also offer the option of writing unit tests. For classes, this is often possible with plain Groovy (see Cloudogu ces-build-lib). For scripts, JenkinsPipelineUnit (see JenkinsPipelineUnit) is useful. With this framework, scripts can be loaded and mocks of the Pipeline steps they use can easily be defined. Listing 2 shows what a test for the step described in Listing 1 could look like.
@Test
void mvn() {
    def shParams = ""
    helper.registerAllowedMethod("tool", [String.class], { paramString -> paramString })
    helper.registerAllowedMethod("sh", [String.class], { paramString -> shParams = paramString })
    helper.registerAllowedMethod("withEnv", [List.class, Closure.class], { paramList, closure ->
        closure.call()
    })
    def script = loadScript('vars/mvn.groovy')
    script.env = new Object() {
        String JAVA_HOME = "javaHome"
    }
    script.call('clean install')
    assert shParams.contains('clean install')
}
Listing 2
Here, a check is performed to determine whether the given parameters are correctly passed on to the sh step. The framework provides the variable helper to the test class via inheritance. As can be seen in Listing 2, plenty of mocking is used: the tool and withEnv steps as well as the global variable env are mocked. This shows that the unit test only checks the underlying logic and of course does not replace a test in a real Jenkins environment. Such integration tests cannot currently be automated. The “replay” feature described in the first article is well suited to the development of shared libraries: there, the shared library can be temporarily modified and executed along with the Jenkinsfile. This avoids many unnecessary commits to the shared library’s repository. This tip is also described in the extensive documentation on shared libraries (see Jenkins Shared Libraries).
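For orientation, the test method from Listing 2 lives inside a test class roughly like the following (a minimal sketch assuming JenkinsPipelineUnit and JUnit 4 on the classpath; the class name is made up):

import com.lesfurets.jenkins.unit.BasePipelineTest
import org.junit.Before
import org.junit.Test

class MvnTest extends BasePipelineTest {

    @Before
    void setUp() {
        super.setUp() // initializes the helper and loadScript() used in Listing 2
    }

    // the @Test method mvn() from Listing 2 goes here
}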
In addition to individual steps, entire Pipelines can be defined in shared libraries (see Standard build example), thus standardizing their stages, for example.
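Sketched out, such a standardized Pipeline could itself be provided as a step (all names are made up):

// vars/javaPipeline.groovy in the shared library
def call() {
    node {
        stage('Checkout') { checkout scm }
        stage('Build') { mvn 'clean install' }
    }
}

// The entire Jenkinsfile of a project then shrinks to:
// @Library('github.com/cloudogu/jenkinsfiles@e00bbf0') _
// javaPipeline()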
In conclusion, here are a few more open source shared libraries:
- Official examples with shared library and Jenkinsfile (see Shared Library demo). Contains classes and scripts.
- Shared library used by Docker Inc. for development (see Shared Library Docker). Contains classes and scripts.
- Shared library used by Firefox Test Engineering (see Shared Library Firefox). Contains scripts with unit tests and Groovy build.
- Shared library of the Cloudogu EcoSystem (see Cloudogu ces-build-lib). Contains classes and scripts with unit tests and Maven build.
Docker (9)
Docker can be used in Jenkins builds to standardize the build and test environment and to deploy applications. Furthermore, port conflicts in parallel builds can be prevented through isolation, as already discussed earlier in this series. Another advantage is that less configuration is needed in Jenkins: only Docker needs to be made available on Jenkins, and the Pipelines can then simply bring along the necessary tools (Java, Maven, Node.js, PaaS CLIs, etc.) in a Docker image.
A Docker host must of course be available in order to use Docker in Pipelines. This is an infrastructure issue that needs to be dealt with outside of Jenkins. Even independent of Docker, it is recommended for production to operate the build executors separately from the Jenkins master, in order to distribute the load and prevent builds from slowing down the response times of the Jenkins web application. This also applies when making Docker available on the build executors: the Docker host of the master (if it exists) should be separated from the Docker host of the build executors. This also ensures that the Jenkins web application remains responsive, independent of the builds. Moreover, the separation of hosts provides additional security, since no access to the Jenkins host is possible in the event of container breakouts (see Security concerns when using Docker).
When setting up a dedicated build executor with Docker, it is recommended to install the Docker client directly and make it available in the PATH. Alternatively, the Docker client can also be installed as a tool in Jenkins. This tool must then (as with Maven and JDK in the examples from the first article of this series) be explicitly stated in the Pipeline. This is currently only possible in scripted syntax, not in declarative syntax (see Pipeline Syntax – Tools).
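In scripted syntax, this could look as follows (a sketch; 'docker-17' stands for the name under which the Docker client tool is configured in Jenkins):

node {
    // use a Docker client that is configured as a Jenkins tool
    docker.withTool('docker-17') {
        docker.image('maven:3.5.0-jdk-8').inside {
            sh 'mvn -version'
        }
    }
}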
As soon as Docker is set up, the declarative syntax offers the option of executing either the entire Pipeline or individual stages within a Docker container. The image underlying the container can either be pulled from a registry (see Listing 3) or built from a Dockerfile.
pipeline {
    agent {
        docker {
            image 'maven:3.5.0-jdk-8'
            label 'docker'
        }
    }
    //...
}
Listing 3
Through the use of the docker parameter in the agent section, the entire Pipeline is executed within a container created from the given image. The image used in Listing 3 ensures that the executables of Maven and the JDK are available in the PATH. Without any further configuration of tools in Jenkins (as was necessary for Maven and JDK in the examples from the first article of this series), it is thus possible to execute, for example, the following step: sh 'mvn test'.
The label set in Listing 3 refers to the Jenkins build executor in this case. It causes the Pipeline to be executed only on build executors that carry the docker label. This best practice is particularly helpful if one has different build executors: if the Pipeline were executed on a build executor that does not have a Docker client available in the PATH, the build would fail. If, however, no build executor with the respective label is available, the build remains in the queue.
Storage of data outside the container is another point that needs to be considered for builds or steps executed in containers. Since each build is executed in a new container, the data contained therein is no longer available for the next run. Jenkins ensures that the workspace is mounted into the container as the working directory. However, this does not happen, for example, for the local Maven repository. While the mvn step used in the previous examples (based on the Jenkins tools) uses the Maven repository of the build executor, in the Docker container a Maven repository is created in the workspace of each build. This costs a bit more storage space, and the first build is slower, but it prevents undesired side effects such as two simultaneously running builds of a Maven multi-module project overwriting each other’s snapshots in the same local repository. If the repository of the build executor is to be used nevertheless, a few adjustments to the Docker image are necessary (see Cloudogu ces-build-lib – Docker). What should be avoided is creating the local Maven repository inside the container: all dependencies would then be reloaded from the Internet in each build, which would increase the duration of each build.
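Such an adjustment could look roughly as follows. This is only a sketch: in addition, the image has to be prepared so that the user executing the build is allowed to write to the mounted directory (ces-build-lib shows a complete solution), and the paths are assumptions:

def mavenImage = docker.image('maven:3.5.0-jdk-8')
// mount the executor's local Maven repository into the container
mavenImage.inside("-v ${env.HOME}/.m2:/var/maven/.m2") {
    sh 'mvn -Dmaven.repo.local=/var/maven/.m2/repository clean install'
}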
The behavior described in Listing 3 in declarative syntax can also be stated in scripted syntax, as shown in Listing 4.
node('docker') {
    // ...
    docker.image('maven:3.5.0-jdk-8').inside {
        // ...
    }
}
Listing 4
As with declarative syntax (see Listing 3), build executors can also be selected via labels in scripted syntax. In scripted syntax (Listing 4), this is done using a parameter of the node step. Docker itself is addressed via a global variable (see Global variable reference Docker). This variable offers even more features (see the sketch after this list), including:
- use of specific Docker registries (helpful for tasks such as continuous delivery with Kubernetes, which is described in the fourth part of this series),
- use of a specific Docker client (defined as a Jenkins tool, as described above),
- building images, tagging them and pushing them to a registry, and
- starting and stopping containers.
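A sketch combining some of these features (registry URL, credentials ID and image name are assumptions):

node('docker') {
    // build an image from the Dockerfile in the workspace and tag it
    def image = docker.build('myorg/myapp:1.0.0')
    docker.withRegistry('https://registry.example.com', 'registry-credentials') {
        image.push()         // push the tag given above
        image.push('latest') // push an additional tag
    }
}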
The docker variable does not always support the latest Docker features. For example, multi-stage Docker images (see Jenkins issue 44609) cannot be built with it. In such cases, Docker’s CLI client can be used directly, for example: sh 'docker build ...'.
Comparing Listing 3 with Listing 4 clearly shows the difference between descriptive (declarative) and imperative (scripted) syntax. Instead of declaring from the outset which container is to be used, the imperative style states at which point something is executed in this container. This is also more flexible: while with declarative syntax the entire Pipeline or individual stages can be executed in containers, with scripted syntax arbitrary individual sections can be executed in containers.
As already described on multiple occasions, scripted syntax can in any case also be used in declarative Pipelines within the script step, or, alternatively, own steps written in scripted syntax can be called. The latter is used in the following to convert the mvn step in the shared library (Listing 1) from Jenkins tools to Docker (compare Listing 5).
def call(def args) {
    docker.image('maven:3.5.0-jdk-8').inside {
        sh "mvn ${args} --batch-mode -V -U -e -Dsurefire.useFile=false"
    }
}
Listing 5
After the shared library is updated as shown in Listing 5, every mvn step in both the scripted and the declarative Pipeline examples runs in a Docker container, without any modification to the Pipelines themselves.
In conclusion, another advanced Docker topic. The scripted Pipeline syntax practically invites nesting of Docker containers, in other words “Docker in Docker” execution. This is not easily possible, since no Docker client is initially available inside a Docker container. However, it is possible to run multiple containers side by side using the docker variable’s withRun() method (see documentation Pipeline Docker).
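For instance, a database container could be run next to the build container, roughly as follows (image names and parameters are assumptions):

// start a database container; it is stopped again after the closure ends
docker.image('postgres:9.6').withRun('-e POSTGRES_PASSWORD=secret') { db ->
    docker.image('maven:3.5.0-jdk-8').inside("--link ${db.id}:db") {
        sh 'mvn verify' // integration tests reach the database under host name "db"
    }
}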
There are, however, also builds that start Docker containers themselves, for example with the Docker Maven plugin (see Docker Maven Plugin). These can be used to start up test environments or execute UI builds, for example. For these builds, “Docker in Docker” must actually be made available. However, it would not make sense to start another Docker host in a Docker container, even if this were possible (see Do Not Use Docker In Docker for CI). Instead, the Docker socket of the build executor can be mounted into the Docker container of the build. Even with this procedure, one should be aware of certain security limitations. Here, the aforementioned separation of the Docker host of the master from the Docker host of the build executor becomes even more important. To make access to the socket possible, a few adjustments to the Docker image are necessary: the user that starts the container must be in the docker group to gain access to the socket, and this user and group must also be created in the image (see for example Cloudogu ces-build-lib – Docker).
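A sketch of mounting the socket; the image name is made up, and the image must be prepared as just described:

// mount the executor's Docker socket so the build itself can start containers
def buildImage = docker.image('myorg/maven-with-docker-client')
buildImage.inside('-v /var/run/docker.sock:/var/run/docker.sock') {
    sh 'mvn clean install' // e.g. integration tests via the Docker Maven plugin
}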
Conclusion and outlook
This article describes how the maintainability of Pipelines can be improved by extracting code into a shared library. This code can then be reused and its quality checked via unit tests. In addition, Docker is presented as a tool with which Pipelines can be executed in a uniform environment, isolated and independent of the configuration of the respective Jenkins instance. These useful tools lay the foundation for the fourth part, in which the Continuous Delivery Pipeline is completed.