Tips & tricks learned during years of using Jenkins

Karol Wybraniec

I would like to share some useful ways of working with Jenkins, as well as methods of coping with various issues that I (and probably many of you) have faced during my career in QA. It is also worth mentioning some reliable plugins. I’ve found them helpful and time-saving, so it pays to gather them together in this short blog post.

Managing jobs in bulk from the command line

Creating and modifying jobs using the graphical user interface is convenient – until we have to modify dozens of them, repeating the same actions over and over: changing field values, directories inside jobs, and so on.

In order to do it faster, some basic knowledge about how jobs are stored is needed. To be more specific, each basic Jenkins job is described by an XML file where its configuration is stored, so all we have to do is modify the job’s config and force Jenkins to reload it.

Here we go with an example.

Let’s create a new general-purpose job and call it test_job. This job is simply going to print the current date in DD/MM/YYYY format. In order to do that, we need to add the following line of shell code in an Execute shell step:

printf "%s\n" "$(date +'%d/%m/%Y')"
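
For orientation, the config.xml of such a freestyle job contains, among other things, a builders section along these lines (trimmed to the relevant part; the exact layout may differ between Jenkins versions):

<project>
  ...
  <builders>
    <hudson.tasks.Shell>
      <command>printf "%s\n" "$(date +'%d/%m/%Y')"</command>
    </hudson.tasks.Shell>
  </builders>
  ...
</project>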

Then let’s create a second job, reload_job_config, whose responsibility will be to reload the configuration of the pointed job. The Jenkins API is going to be used, so a user and their API token have to be provided as job parameters, along with, obviously, the name of the job we want to refresh. In order to do that, check the This job is parameterized checkbox, then create the following three text parameters: user, token, and job_name_. In the Execute shell step:

curl -O "http://$user:$token@$JENKINS_URL/job/$job_name_/config.xml"
curl -X POST "http://$user:$token@$JENKINS_URL/job/$job_name_/config.xml" --data-binary "@config.xml"

Using curl, config.xml (the default name of a job’s config file – if it’s different in your case, add another parameter to customize it) is downloaded to the workspace of reload_job_config. Then the (possibly modified) config is posted back, which makes Jenkins update the job according to the contents of config.xml. $JENKINS_URL is a built-in environment variable, in my case pointing to localhost:8080. The API token can be generated in your profile configuration. Save it somewhere, because it cannot be recovered after generation.

Once we’ve created our test jobs, we can modify the config.xml of test_job on the disk, then reload it. For example, let’s assume that we want to change the format of the date used in test_job.

cd path_to_job_config_file 
sed -i -e 's/%d\/%m\/%Y/%d-%m-%Y/g' config.xml

Now, run reload_job_config with the required parameters and you’re done – the job is reloaded, up to date with config.xml on the disk.

On the basis of this simple example, it’s clear that for repeated changes across many jobs (e.g. changing common paths, commands, etc.) this is a convenient way of applying changes quickly, without using the Jenkins built-in option called Reload Configuration from Disk, which discards the in-memory state and reloads all of the jobs.
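
For instance, here’s a minimal sketch of scripting such a bulk change, assuming the jobs live under $JENKINS_HOME/jobs, the $user and $token variables are set as before, and the test_* name filter is only an illustration:

#!/bin/sh
for dir in "$JENKINS_HOME"/jobs/test_*; do
    job=$(basename "$dir")
    # Apply the same edit to every matching job's config on disk
    sed -i -e 's/%d\/%m\/%Y/%d-%m-%Y/g' "$dir/config.xml"
    # POST the modified config back through the API to reload the job
    curl -X POST "http://$user:$token@$JENKINS_URL/job/$job/config.xml" \
        --data-binary "@$dir/config.xml"
done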

At one point I had a job whose requirements were changing every other day. The Jenkins jobs responsible for it were using the Plot plugin, so changing dozens of plot descriptions manually was a real pain. That’s why I created this solution, and I am sharing it with you now.

Storing job configs in Git as a way of tracking changes

As I mentioned in the previous section, Jenkins stores general-purpose jobs’ definitions in config.xml files. Taking that into consideration, we can easily track (and back up) jobs in Git by creating a cron entry or a Jenkins job.

Assuming we have a job with the Git plugin configured, here’s one of the ways we can back up the config.xml files:

#!/bin/sh
# Go to the backup directory
cd /path_to_backup_directory
# Clear all files except hidden ones (.git)
ls | xargs -n1 -i{} rm -rf "{}"
# Create dirs based on the current list of jobs
ls /path_to_jobs_signatures/jobs | xargs -n1 -i{} mkdir "{}"
# Copy config.xml into each directory
ls | xargs -n1 -i{} cp -R ../jobs/"{}"/config.xml "{}"
# Create a directory for the Jenkins configuration backup
mkdir JenkinsConfig
# Copy Jenkins instance files
ls -I "jobs*" -I "logs" -I "*.log" -I "monitoring" -I "updates" \
-I "userContent" -I "fingerprints" -I "*disk_usage*Factory*" ../ | \
xargs -n1 -i{} cp -R ../"{}" ./JenkinsConfig/"{}"
# Add everything to the staging area
git add .
# Create the commit message
message=$(date +%Y-%m-%d_%H-%M)
# Commit files
git commit -m "Backup $message"
# Push changes
git push origin master

Each line is described by a comment in the code snippet above.
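
If you’d rather schedule the backup with cron than with a Jenkins job, a crontab entry along these lines would do (the script path and name are hypothetical):

# Run the backup script every night at 02:00
0 2 * * * /path_to_backup_directory/backup_configs.sh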

On the other hand, you may use one of the existing plugins designed to track changes, like jobConfigHistory (see https://plugins.jenkins.io/jobConfigHistory for details).

Git plugin and running jobs with a specified fork and branch

The Git plugin is widely used in Jenkins; it is also one of the suggested plugins during installation. With its help, users can run code from different forks and branches at certain steps of the continuous integration process.

I found it useful to create a job git_clone with four string parameters: fork, branch, git_clone_directory, and repository, which allows me to clone any repository (with an SSH key configured, obviously) to a given location on the disk. In my case it was a self-hosted Bitbucket repository, so the repository URL might be different in your case.

Here’s how it looks:
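
Functionally, the job does the equivalent of the following shell command (just a sketch: the real job uses the Git plugin’s SCM section with the parameters substituted into the repository URL and branch specifier, and the Bitbucket host below is an assumption):

#!/bin/sh
# Hypothetical shell equivalent of the git_clone job's configuration
git clone --branch "$branch" \
    "ssh://git@bitbucket.example.com:7999/$fork/$repository.git" \
    "$git_clone_directory"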

This little helper job was reused hundreds of times as a part of pipelines or multi-project jobs.

Running pipelines using the Node plugin to switch between workers

As we know, Jenkins works with distributed agents (servers). In my project, I had multiple servers where Jenkins agents were running, with access to different resources on their hard drives. In order to orchestrate the process with a pipeline, I had to use different agents for specific steps to access the desired data. One of the ways is to use the Node plugin (the Node and Label parameter plugin, which provides the NodeParameterValue used below) together with the agent directive of a Declarative Pipeline.

Here’s an example of how it was managed:

pipeline {
    agent { node { label 'master' } }
    environment {
        PATH = "/usr/sbin:/usr/bin:/sbin:/bin"
    }
    stages {
        stage('Test') {
            steps {
                script {
                    echo "[ INFO ] functional tests"
                }
                build job: 'functional_tests', parameters: [
                    [
                        $class: 'NodeParameterValue', 
                        name: 'node', 
                        labels: ["slave"], 
                        nodeEligibility: [$class: 'AllNodeEligibility']
                    ]
                ]            
            }        
        }
        stage('Parsing') {
            steps {
                script {
                    sh "ls /tests/functional -t | head -1 > lastResults"
                    lastResults = readFile('lastResults').trim()
                    echo "[ INFO ] Parsing last results from: "+lastResults
                }                
                build job: 'parsing_tool', parameters: [
                    [
                            $class: 'NodeParameterValue', 
                            name: 'node', 
                            labels: ["slave"], 
                            nodeEligibility: [$class: 'AllNodeEligibility']
                    ],
                    string(
                            name: 'input',
                            value: '/tests/functional'+lastResults
                    ),
                    string(
                            name: 'output',
                            value: '/results/functional'
                    )
                ]
            }
        }
        stage('Updating dashboard') {
            steps {
                script {
                    echo "[ INFO ] Updating tests results dashboard"
                    // The Python script is placed in the workspace.
                    // Note: the sh step runs /bin/sh, so use the portable
                    // '.' instead of bash's 'source' to activate the env.
                    sh """
                        . /home/py_envs/python363_env/bin/activate
                        python update_dashboard.py
                    """
                }
 
            }
        }
    }
}

A word of explanation about this fragment of the code:

sh "ls /tests/functional -t | head -1 > lastResults "
lastResults = readFile('lastResults').trim()

In a Declarative Pipeline, each sh step runs in a separate shell instance and does not preserve the variables created inside it. One of the workarounds is to save the result to a flat file, then read it back.
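
Alternatively, the sh step itself can return the command’s standard output via its returnStdout option, which avoids the temporary file altogether:

script {
    // returnStdout makes sh return the command's stdout as a String
    lastResults = sh(script: 'ls -t /tests/functional | head -1',
                     returnStdout: true).trim()
    echo "[ INFO ] Parsing last results from: ${lastResults}"
}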

Conclusion

Data about market share in the space of continuous integration tools is hardly consistent – you may find figures putting Jenkins anywhere between 20% and even 50% of the market. Nevertheless, the popularity of this tool is unquestionable, so it’s worth personalizing it with the available plugins and/or writing your own tools, which may improve your effectiveness in everyday tasks. I really hope you’ve found something useful in this text.
