Jenkins Book PDF
Gábor Szabó
This book is for sale at http://leanpub.com/jenkins-book
This is a Leanpub book. Leanpub empowers authors and publishers with the Lean Publishing
process. Lean Publishing is the act of publishing an in-progress ebook using lightweight tools and
many iterations to get reader feedback, pivot until you have the right book and build traction once
you do.
Preface
Changes
    v0.04 2019-04-06
    v0.03 2018-08-23
    v0.02 2018-08-21
    v0.01 2018-08-20
Articles
Jenkins
    Jenkins
    Install Jenkins on Ubuntu
    Jenkins Pipeline - Hello World
    Jenkins Pipeline: running external programs with sh or bat
    Jenkins Pipeline: Send e-mail notifications
    Jenkins Pipeline: Add some text to the job using shortText
    Jenkins CLI: create node
Index
User Handbook
User Handbook overview
Installing Jenkins
    Prerequisites
    Installation platforms
Using Jenkins
Using credentials
    Credential security
    Configuring credentials
Pipeline
    What is Jenkins Pipeline?
    Why Pipeline?
    Pipeline concepts
    Pipeline syntax overview
    Pipeline example
Using a Jenkinsfile
    Creating a Jenkinsfile
    Working with your Jenkinsfile
Dashboard
    Navigation bar
    Pipelines list
    Favorites list
Appendix
Glossary
License
    Code
    Content
About the book
Leanpub provides the opportunity to update an already sold book and for the reader to get the new
edition without any extra payment.
As with all of my other books¹ sold on Leanpub, this one is a work in progress. I will add more pages, update old ones, and hopefully make it a better book over time.
Once you buy the book you can come back to Leanpub at any time and download the new edition.
However you will only get notified about the new edition if you opt-in to that.
You can do that by visiting your library² on Leanpub, clicking on the book image, and making sure
the “New version available” checkbox is checked.
No extra saving step is needed, but it takes a few seconds for the change to actually be saved, so wait until a "Successfully updated!" message appears at the top of the page.
You will have to do it one-by-one with each book for which you’d like to get notifications.
Enjoy the book!
¹https://leanpub.com/u/szabgab
²https://leanpub.com/user_dashboard/library
Preface
This book about Jenkins contains the User handbook and some of my own writings.
License and Copyright information of the “User handbook” can be found at the end of the book.
The formatting is still far from being nice. I am working on it.
There is some code³ that generates the Markua files Leanpub uses to create the actual eBook.
³https://github.com/szabgab/books
Changes
v0.04 2019-04-06
Regenerate the book with all the updates from the Jenkins site.
v0.03 2018-08-23
• Improved display of special section with CAUTION and NOTE.
v0.02 2018-08-21
• Properly formatting many of the code snippets.
• Fix some unicode issues.
v0.01 2018-08-20
• Initial release
Articles
Jenkins
Jenkins
Jenkins⁴ is an automation server. It allows for all kinds of automation. It is primarily used for build automation, Continuous Integration, and Continuous Deployment.

• Install Jenkins on Ubuntu (using Vagrant)
• Vagrant for Jenkins on Ubuntu⁵
• Jenkins Pipeline - Hello World
• Jenkins Pipeline: running external programs with sh or bat, returnStdout, trim
• Jenkins Pipeline: Send e-mail notifications
• Jenkins Pipeline: Add some text to the job using shortText
• Jenkins CLI: create node
• Jenkins Pipeline BuildUser plugin⁶
• Jenkins Pipeline - set and use environment variables⁷
• Jenkins Pipeline: git checkout using reference to speed up cloning large repositories⁸
• Triggers from Version Control Systems
• Report failures
• Send alerts
• Collect test coverage data
• Jenkins slides⁹
Add badges
manager.addBadge("error.gif", "Failed build")
⁴https://jenkins.io/
⁵https://code-maven.com/vagrant-for-jenkins-on-ubuntu
⁶https://code-maven.com/jenkins-pipeline-builduser
⁷https://code-maven.com/jenkins-pipeline-environment-variables
⁸https://code-maven.com/jenkins-git-check-out-using-reference
⁹https://code-maven.com/slides/jenkins/
pipeline {
    agent none
    stages {
        stage('Catch crash') {
            agent { label 'master' }
            steps {
                echo "before crash"
                script {
                    try {
                        sh 'exit 1'
                    } catch (err) {
                        echo "exception caught, going on"
                        println err
                        // java.lang.ClassCastException:
                        // org.jenkinsci.plugins.workflow.steps.EchoStep.message expects
                        // class java.lang.String but received class hudson.AbortException
                    }
                }
                echo "after crash"
            }
        }
        stage('Continue after crash') {
            agent { label 'master' }
            steps {
                echo "stage after crash"
            }
        }
    }
}
An excerpt of the resulting stack trace:

at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:133)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:155)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:159)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:129)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:129)
at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17)
at WorkflowScript.run(WorkflowScript:16)
Git backup

See this gist¹⁰.
Other
echo bat(returnStdout: true, script: 'set')

build(job: 'RevertServerAutomationCloud', parameters: [
    string(name: 'VM_SNAPSHOT', value: 'CleanDb')
])
¹⁰https://gist.github.com/cenkalti/5089392
Options
options {
    ansiColor('xterm')
    timestamps()
}
Scripts
When a Pipeline calls a method that has not been approved by the script security plugin, you will get an error like this:

Scripts not permitted to use method groovy.lang.GroovyObject invokeMethod java.lang.String java.lang.Object (org.jenkinsci.plugins.workflow.cps.EnvActionImpl keys). Administrators can decide whether to approve or reject this signature.
pipeline {
    agent none
    options {
        ansiColor('xterm')
        timestamps()
    }
    parameters {
        string(name: 'machine', defaultValue: 'asos', description: 'Some text input')
        string(name: 'size', defaultValue: '23', description: 'Some number input')
    }
    environment {
        answer = 42
        // machine_name = params.machine // -> Environment variable values must either be
        //                                     single quoted, double quoted, or function calls.
        machine_name = set_machine_name(params.machine)
    }
    stages {
        stage('try') {
Jenkins environment
Even if there is an exception in the environment section, Jenkins will still run the "success" part of the post section. The same problem occurs if there is an exception in one of the functions it calls.

To overcome this we create a variable as the last entry in the environment section and then check for that variable using:

if (! binding.hasVariable('environment_is_set'))

That does not help in case there is an exception in the functions.
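A minimal sketch of this guard pattern (the function and variable names are illustrative; this sketch checks the env entry directly rather than through the binding):

pipeline {
    agent { label 'master' }
    environment {
        machine_name = set_machine_name('asos')
        // deliberately the last entry: it is only set if everything above succeeded
        environment_is_set = 'yes'
    }
    stages {
        stage('work') {
            steps {
                echo "machine: ${env.machine_name}"
            }
        }
    }
    post {
        success {
            script {
                // env.environment_is_set stays null if the environment section threw
                if (env.environment_is_set == null) {
                    error("the environment section did not complete")
                }
                echo "post-success actions go here"
            }
        }
    }
}

def set_machine_name(name) {
    return name
}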
http_request
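Judging by the title, this section concerns the HTTP Request plugin. A minimal sketch of its httpRequest step (assuming the plugin is installed; the URL is illustrative):

script {
    // httpRequest is provided by the HTTP Request plugin
    def response = httpRequest(url: 'https://example.com/api/status')
    println("status: ${response.status}")
    println("content: ${response.content}")
}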
Sending e-mail: in Manage Jenkins - Configure System set the following.

In the Extended E-mail Notification section:

• SMTP server: smtp.office365.com
• Domain name: @company.com
• Advanced - Use SMTP Authentication:
  • User Name: [email protected]
  • Password:
  • SMTP port: 587

In the E-mail Notification section:

• SMTP server: smtp.office365.com
• Default user e-mail suffix: @company.com
• Advanced:
  • User Name: [email protected]
  • Password:
  • SMTP port: 587

Shut down Jenkins (via the Windows services), open the file C:\Program Files (x86)\Jenkins\jenkins.xml and change the arguments line to be:
pipeline {
    agent none
    stages {
        stage('try') {
            agent { label 'master' }
            steps {
                script {
                    //some_strange_name = null
                    //print(some_strange_name)
                    if (true) {
                        print("creating variable")
                        some_strange_name = 1
                    }
                    print(some_strange_name)

                    if (binding.hasVariable('some_strange_name')) {
                        print("has some_strange_name")
                        print(some_strange_name)
                    } else {
                        print("DOES NOT have some_strange_name")
                    }
                }
            }
        }
        stage('try again') {
            agent { label 'master' }
            steps {
                script {
                    if (binding.hasVariable('some_strange_name')) {
                        print("has some_strange_name")
                        print(some_strange_name)
                    } else {
                        print("DOES NOT have some_strange_name")
                    }
                }
            }
        }
    }
}
Parallel stages
pipeline {
    agent { label 'master' }
    stages {
        stage('before') {
            steps {
                println("before")
            }
        }
        stage('para') {
            parallel {
                stage('apple') {
                    steps {
                        println("apple 1")
                        sleep(20 * Math.random())
                        println("apple 2")
                    }
                }
                stage('banana') {
                    steps {
                        println("banana 1")
                        sleep(20 * Math.random())
                        println("banana 2")
                    }
                }
                stage('peach') {
                    steps {
                        println("peach 1")
                        sleep(20 * Math.random())
                        println("peach 2")
                    }
                }
            }
        }
        stage('after') {
            steps {
                println("after")
            }
        }
    }
}
Skip steps
How can a Jenkins pipeline stop early and still indicate that the job is successful? See this Stack Overflow question on a pipeline conditional step stage¹¹.

¹¹https://stackoverflow.com/questions/37690920/jenkins-pipeline-conditional-step-stage
pipeline {
    agent none
    options {
        ansiColor('xterm')
        timestamps()
    }
    parameters {
        booleanParam(defaultValue: false, description: 'Checkbox', name: 'yesno')
    }
    stages {
        stage('first') {
            agent { label 'master' }
            steps {
                script {
                    println("first before")
                    println(params.yesno)
                    done = true
                    return
                    println("first after") // never reached because of the return above
                }
            }
        }
        stage('second') {
            agent { label 'master' }
            when {
                expression {
                    return ! done;
                }
            }
            steps {
                script {
                    println("second")
                }
            }
        }
    }
}
import java.text.SimpleDateFormat

pipeline {
    agent none
    options {
        ansiColor('xterm')
        timestamps()
    }
    parameters {
        string(name: 'machine', defaultValue: '', description: 'Name of the machine')
        string(name: 'size', defaultValue: '23', description: 'The size')
        choice(choices: ['Mercury', 'Venus', 'Earth', 'Mars'], description: 'Pick a planet', name: 'planet')
    }
    stages {
        stage('try') {
            agent { label 'master' }
            steps {
                script {
                    sh "hostname"

                    def data = readJSON text: '{}'
                    data.name = "test-529" as String
                    data.date = new java.text.SimpleDateFormat('yyyyMMddHHmmss').format(new Date())
                    writeJSON(file: 'mydata.json', json: data, pretty: 4)
                    archiveArtifacts artifacts: 'mydata.json', onlyIfSuccessful: true

                    //error("Fail after first artifact")

                    writeJSON(file: 'otherfile.log', json: data, pretty: 4)
                    archiveArtifacts artifacts: '*.log', onlyIfSuccessful: true
                }
            }
        }
    }
}
:^origin/(work|dev)-.*

You can limit this by selecting another option and setting the ancestry date to limit how many days you are willing to go back in time:

• Strategy for choosing what to build
• Choosing strategy: Ancestry
• Maximum Age of Commit: 1
In a pipeline
checkout([
    $class: 'GitSCM',
    branches: [[name: '*/master']],
    doGenerateSubmoduleConfigurations: false,
    extensions: [[
        $class: 'BuildChooserSetting',
        buildChooser: [$class: 'AncestryBuildChooser', ancestorCommitSha1: '', maximumAgeInDays: 1]
    ]],
    submoduleCfg: [],
    userRemoteConfigs: [[]]
])
Exceptions
When you encounter one of those 40-line-long Java stack traces, look for WorkflowScript to locate the source of the problem.
pipeline {
    agent { label 'master' }
    stages {
        stage('only') {
            steps {
                script {
                    println("one")
                    sh "ls -l"
                    println("two")

                    try {
                        //sh "ls -l no_such"
                        a = 10
                        b = 0
                        c = a/b
                    }
                    catch(Exception ex) {
                        //currentBuild.result = 'FAILURE'
                        println("exception")
                        println(ex) // hudson.AbortException: script returned exit code 2
                        println(ex.toString())
                        println(ex.getMessage())
                        println(ex.getStackTrace())
                    }

                    //} catch(ArrayIndexOutOfBoundsException ex) {
                    //    println("Catching the Array out of Bounds exception");
                    //} catch(Exception ex) {

                    println("three")
                    is_it_the_answer(42)
                    echo "try again"
                    is_it_the_answer(23)
                }
            }
        }
        stage('second') {
            steps {
                script {
                    // if (manager.logContains("three")) {
                    //     manager.addWarningBadge("tres")
                    // } else {
                    //     manager.addWarningBadge("nem harom")
                    // }
                }
            }
        }
    }
}

def is_it_the_answer(n) {
    if (n == 42) {
        return 'yes'
    }
    throw new Exception('Nope')
}


try {
    is_it_the_answer(23)
} catch (err) {
    print(err)
    print(err.getMessage())
}
¹²https://wiki.jenkins.io/display/JENKINS/Groovy+Postbuild+Plugin
stage('second') {
    steps {
        script {
            if (manager.logContains("three")) {
                manager.addWarningBadge("drei")
            } else {
                manager.addWarningBadge("kein drei")
            }
        }
    }
}


print("A disk image was created: zorg-1552278641")

def matcher = manager.getLogMatcher(/.* (zorg-\d+).*/)
print(matcher)
if (matcher?.matches()) {
    def image_name = matcher.group(1)
    print(image_name)
    manager.addShortText(image_name)
}
readJSON
no such dsl method¹³
Vagrant.configure(2) do |config|
  config.vm.box = "generic/ubuntu1710"
  config.vm.network "forwarded_port", guest: 8080, host: 8080
  #config.vm.synced_folder "/Users/gabor/work", "/vagrant"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "512"
  end
end
Open a terminal window (on MS Windows, a Command window). Change to the directory you created for this work, then type:

vagrant up

This will take some time, as it first downloads an Ubuntu image and then creates and sets up a new VirtualBox machine from that image. (It took me about 5-10 minutes.)

Once it has completed successfully you can log in to the machine by typing:
vagrant ssh
Install Jenkins
Execute the following commands inside the VirtualBox image (after you ran vagrant ssh and are logged in to the virtual machine).
curl http://localhost:8080/
You will most likely get some message about being forbidden. That’s actually a good sign.
Set up Vagrant
Visit the newly installed Jenkins using your regular browser on your computer by following this
URL:
http://localhost:8080/login?from=%2F
You should see something like this:
Unlock Jenkins

To ensure Jenkins is securely set up by the administrator, a password has been written to the log (not sure where to find it?) and this file on the server:

/var/lib/jenkins/secrets/initialAdminPassword

Please copy the password from either location and paste it below.
Administrator password

da3160af7d0f4c8db649d4b8000380a6
After a while it finishes and shows a new page asking you to Create First Admin User:
I typed in “foobar” as the username, “Foo Bar” as the Full name and my real e-mail address.
Then that’s done you will see a page confirming our success:
Jenkins Menu
It will take you directly to the “Configuration” page of the project that looks like this:
Scroll down to the section called "Pipeline" and paste the following code:
pipeline {
    agent { label 'master' }
    stages {
        stage('build') {
            steps {
                echo "Hello World!"
            }
        }
    }
}
Jenkins Pipeline
Click on “Save”.
It will take you to the menu of the specific project:
If you click on that blue dot on the left hand side it will take you to the “Console Output” that looks
like this:
pipeline {
    agent { label 'master' }
    stages {
        stage('build') {
            steps {
                echo "Hello World!"
                sh "echo Hello from the shell"
                sh "hostname"
                sh "uptime"
            }
        }
    }
}
MS Windows
A few examples in Windows:
Show the name of the computer (a bit like hostname on Unix), and print out the content of the PATH environment variable as seen by Windows (see the sketch below).
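A minimal sketch of what these examples likely looked like, using the bat step (the exact commands are an assumption):

steps {
    bat 'echo %COMPUTERNAME%'   // show the name of the computer
    bat 'echo %PATH%'           // print PATH as Windows sees it
}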
Of course we could have printed the PATH environment variable without invoking an external call:
echo env.PATH
script {
    def disk_size = sh(script: "df / --output=avail | tail -1", returnStdout: true).trim() as Integer
    println("disk_size = ${disk_size}")
}
Full example:
pipeline {
    agent { label 'master' }
    stages {
        stage('build') {
            steps {
                script {
                    def disk_size = sh(script: "df / --output=avail | tail -1", returnStdout: true).trim() as Integer
                    println("disk_size = ${disk_size}")
                }
            }
        }
    }
}
emailext (
    subject: "Job '${env.JOB_NAME} ${env.BUILD_NUMBER}'",
    body: """<p>Check console output at <a href="${env.BUILD_URL}">${env.JOB_NAME}</a></p>""",
    to: "[email protected]",
    from: "[email protected]"
)
script {
    manager.addShortText("Some text")
    manager.addShortText("\ntext")
    manager.addShortText("same line", "black", "lightgreen", "0px", "white")
}
#!/bin/bash

JENKINS_URL=$1
NODE_NAME=$2
NODE_HOME='/home/build/jenkins-node'
EXECUTORS=1
SSH_PORT=22
CRED_ID=$3
LABELS=build
USERID=${USER}

# Note: \$Always must be escaped, otherwise the shell would expand
# $Always to an empty string inside the unquoted heredoc.
cat <<EOF | java -jar ~/bin/jenkins-cli.jar -s ${JENKINS_URL} create-node ${NODE_NAME}
<slave>
  <name>${NODE_NAME}</name>
  <description></description>
  <remoteFS>${NODE_HOME}</remoteFS>
  <numExecutors>${EXECUTORS}</numExecutors>
  <mode>NORMAL</mode>
  <retentionStrategy class="hudson.slaves.RetentionStrategy\$Always"/>
  <launcher class="hudson.plugins.sshslaves.SSHLauncher" plugin="[email protected]">
    <host>${NODE_NAME}</host>
    <port>${SSH_PORT}</port>
    <credentialsId>${CRED_ID}</credentialsId>
  </launcher>
  <label>${LABELS}</label>
  <nodeProperties/>
  <userId>${USERID}</userId>
</slave>
EOF
Notes:

• "Agent" and "node" are often used interchangeably, and in the old days they were called "slaves", so you will still see that word used in some documentation and in the code.
• The original name of the Jenkins project was Hudson, and that name too still appears in a lot of the code.
• Remove the -remoting flag.
Index
• BuildUser
* Jenkins Pipeline: Send e-mail notifications
• Devops
* Jenkins
• Jenkins
* Jenkins
• Vagrant
* Install Jenkins on Ubuntu
• addShortText
* Jenkins Pipeline: Add some text to the job using shortText
• agent
* Jenkins Pipeline - Hello World
• bat
* Jenkins Pipeline: running external programs with sh or bat
• echo
* Jenkins Pipeline - Hello World
• emailext
* Jenkins Pipeline: Send e-mail notifications
• jenkins-cli
* Jenkins CLI: create node
• master
* Jenkins Pipeline - Hello World
• node
* Jenkins Pipeline - Hello World
• pipeline
* Jenkins Pipeline - Hello World
• sh
* Jenkins Pipeline: running external programs with sh or bat
• slave
* Jenkins Pipeline - Hello World
• stage
* Jenkins Pipeline - Hello World
• stages
* Jenkins Pipeline - Hello World
• step
* Jenkins Pipeline - Hello World
User Handbook
User Handbook overview
This page provides an overview of the documentation in the Jenkins User Handbook.
If you want to get up and running with Jenkins, see Installing Jenkins for procedures on how to
install Jenkins on your supported platform of choice.
If you are a typical Jenkins user (of any skill level) who wants to know more about Jenkins usage,
see Using Jenkins. Also refer to the separate Pipeline and Blue Ocean chapters for more information
about these core Jenkins features.
If you are a Jenkins administrator and want to know more about managing Jenkins nodes and
instances, see Managing Jenkins.
If you are a system administrator and want to learn how to back up, restore, and maintain Jenkins servers and nodes, see Jenkins System Administration.
Installing Jenkins
The procedures on this page are for new installations of Jenkins on a single/local machine.
Jenkins is typically run as a standalone application in its own process with the built-in Java servlet²⁰
container/application server (Jetty²¹).
Jenkins can also be run as a servlet in different Java servlet containers such as Apache Tomcat²² or
GlassFish²³. However, instructions for setting up these types of installations are beyond the scope of
this page.
Note: Although this page focuses on local installations of Jenkins, this content can also be used to
help set up Jenkins in production environments.
Prerequisites
Minimum hardware requirements:

• 256 MB of RAM
• 1 GB of drive space (although 10 GB is a recommended minimum if running Jenkins as a Docker container)

Recommended hardware configuration for a small team:

• 1 GB+ of RAM
• 50 GB+ of drive space

Software requirements:

• Java 8 - either a Java Runtime Environment (JRE) or a Java Development Kit (JDK)
Installation platforms
This section describes how to install/run Jenkins on different platforms and operating systems.
Docker
Docker²⁴ is a platform for running applications in an isolated environment called a “container” (or
Docker container). Applications like Jenkins can be downloaded as read-only “images” (or Docker
images), each of which is run in Docker as a container. A Docker container is in effect a “running
instance” of a Docker image. From this perspective, an image is stored permanently more or less
(i.e. insofar as image updates are published), whereas containers are stored temporarily. Read more
about these concepts in the Docker documentation’s Getting Started, Part 1: Orientation and setup²⁵
page.
Docker’s fundamental platform and container design means that a single Docker image (for any
given application like Jenkins) can be run on any supported operating system (macOS, Linux and
Windows) or cloud service (AWS and Azure) which is also running Docker.
Installing Docker
To install Docker on your operating system, visit the Docker store²⁶ website and click the Docker
Community Edition box which is suitable for your operating system or cloud service. Follow the
installation instructions on their website.
Jenkins can also run on Docker Enterprise Edition, which you can access through Docker EE on the
Docker store website.
The jenkinsci/blueocean Docker image is based on the current Jenkins Long-Term Support (LTS) release (which is production-ready), bundled with all Blue Ocean plugins and features. This means that you do not need to install the Blue Ocean plugins separately.
WAR file
The Web application ARchive (WAR) file version of Jenkins can be installed on any operating system
or platform that supports Java.
To download and run the WAR file version of Jenkins:

1. Download the latest stable Jenkins WAR file²⁹ to an appropriate directory on your machine.
2. Open up a terminal/command prompt window to the download directory.
3. Run the command java -jar jenkins.war.
4. Browse to http://localhost:8080 and wait until the Unlock Jenkins page appears.
5. Continue on with the Post-installation setup wizard below.
macOS
To install from the website, using a package:
²⁹http://mirrors.jenkins.io/war-stable/latest/jenkins.war
• Download the latest package³⁰
Linux
Debian/Ubuntu
On Debian-based distributions, such as Ubuntu, you can install Jenkins through apt.
Recent versions are available in an apt repository³¹. Older but stable LTS versions are in this apt
repository³².
This package installation will:

• Set up Jenkins as a daemon launched on start. See /etc/init.d/jenkins for more details.
• Create a 'jenkins' user to run this service.
• Direct console log output to the file /var/log/jenkins/jenkins.log. Check this file if you are troubleshooting Jenkins.
• Populate /etc/default/jenkins with configuration parameters for the launch, e.g. JENKINS_HOME.
• Set Jenkins to listen on port 8080. Access this port with your browser to start configuration.
³⁰http://mirrors.jenkins.io/osx/latest
³¹https://pkg.jenkins.io/debian/
³²https://pkg.jenkins.io/debian-stable/
Fedora
You can install Jenkins through dnf. You need to add the Jenkins repository from the Jenkins website
to the package manager first.
Then, you can install Jenkins. The following command also ensures you have java installed.
You can check the status of the Jenkins service using this systemctl command:
If everything has been set up correctly, you should see an output like this:
Windows
To install from the website, using the installer:
• Download the latest package³³
OpenIndiana Hipster
On a system running OpenIndiana Hipster³⁴ Jenkins can be installed in either the local or global zone using the Image Packaging System (IPS).

IMPORTANT: Disclaimer: This platform is NOT officially supported by the Jenkins team, use it at your own risk. Packaging and integration described in this section is maintained by the OpenIndiana Hipster team, bundling the generic jenkins.war to work in that operating environment.
For the common case of running the newest packaged weekly build as a standalone (Jetty) server,
simply execute:
³³http://mirrors.jenkins.io/windows/latest
³⁴https://www.openindiana.org/
• Create a jenkins user to run the service and to own the directory structures under /var/lib/jenkins.
• Pull the OpenJDK8 and other packages required to execute Jenkins, including the jenkins-core-weekly package with the latest jenkins.war.
• Set up Jenkins as an SMF service instance (svc:/network/http:jenkins) which can then be enabled with the svcadm command demonstrated above.
• Set up Jenkins to listen on port 8080.
• Configure the log output to be managed by SMF at /var/svc/log/network-http:jenkins.log.

CAUTION: Long-Term Support (LTS) Jenkins releases currently do not support OpenZFS-based systems, so no packaging is provided at this time.
Upgrades to the package can be performed by updating the entire operating environment with pkg
update, or specifically for Jenkins core software with:
³⁵http://localhost:8080
NOTE: The procedure for updating the package will restart the currently running Jenkins process. Make sure to prepare it for shutdown and finish all running jobs before updating, if needed.
Generally it should suffice to install Java 8, download the jenkins.war, and run it as a standalone process or under an application server such as Apache Tomcat³⁶.
Some caveats apply:
Also note that forks of the OpenZFS initiative may provide ZFS on various BSD, Linux, and macOS
distributions. Once Jenkins supports detecting ZFS capabilities, rather than relying on the SunOS
check, the above caveats for ZFS integration with Jenkins should be considered.
³⁶https://tomcat.apache.org
³⁷https://wiki.jenkins.io/display/JENKINS/Jenkins+got+java.awt.headless+problem
³⁸https://github.com/kohsuke/libzfs4j
Using Jenkins
This chapter contains topics for typical Jenkins users (of all skill levels) about Jenkins usage which
is outside the scope of the core Jenkins features: Pipeline and Blue Ocean.
If you want to create and configure a Pipeline project through a Jenkinsfile or through Blue Ocean,
or you wish to find out more about these core Jenkins features, refer to the relevant topics within
the respective Pipeline and Blue Ocean chapters.
If you are a Jenkins administrator and want to know more about managing Jenkins nodes and
instances, see Managing Jenkins.
If you are a system administrator and want to learn how to back up, restore, and maintain Jenkins servers and nodes, see Jenkins System Administration.
For an overview of content in the Jenkins User Handbook, see User Handbook overview.
Using credentials
There are numerous 3rd-party sites and applications that can interact with Jenkins, for example,
artifact repositories, cloud-based storage systems and services, and so on.
A systems administrator of such an application can configure credentials in the application for
dedicated use by Jenkins. This would typically be done to “lock down” areas of the application’s
functionality available to Jenkins, usually by applying access controls to these credentials. Once
a Jenkins manager (i.e. a Jenkins user who administers a Jenkins site) adds/configures these
credentials in Jenkins, the credentials can be used by Pipeline projects to interact with these 3rd
party applications.
Note: The Jenkins credentials functionality described on this and related pages is provided by the Credentials Binding plugin.
Jenkins can store the following types of credentials:

• Secret text - a token such as an API token (e.g. a GitHub personal access token),
• Username and password - which could be handled as separate components or as a colon separated string in the format username:password (read more about this in Handling credentials),
• Secret file - which is essentially secret content in a file,
• SSH Username with private key - an SSH public/private key pair³⁹,
• Certificate - a PKCS#12 certificate file (https://tools.ietf.org/html/rfc7292) and optional password, or
• Docker Host Certificate Authentication credentials.
Credential security
To maximize security, credentials configured in Jenkins are stored in an encrypted form on the
master Jenkins instance (encrypted by the Jenkins instance ID) and are only handled in Pipeline
projects via their credential IDs.
This minimizes the chances of exposing the actual credentials themselves to Jenkins users and
hinders the ability to copy functional credentials from one Jenkins instance to another.
³⁹http://www.snailbook.com/protocols.html
Configuring credentials
This section describes procedures for configuring credentials in Jenkins.
Credentials can be added to Jenkins by any Jenkins user who has the Credentials > Create permission
(set through Matrix-based security). These permissions can be configured by a Jenkins user with the
Administer permission. Read more about this in the Authorization section of Managing Security.
Otherwise, any Jenkins user can add and configure credentials if the Authorization settings of your
Jenkins instance’s Configure Global Security settings page is set to the default Logged-in users can
do anything setting or Anyone can do anything setting.
Pipeline

What is Jenkins Pipeline?

Jenkins Pipeline (or simply "Pipeline") is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins. Every change to your software (committed in source control) goes through a complex process on its way to being released. This process involves building the software in a reliable and repeatable manner, as well as progressing the built software (called a "build") through multiple stages of testing and deployment.
Pipeline provides an extensible set of tools for modeling simple-to-complex delivery pipelines "as code" via the Pipeline domain-specific language (DSL)⁴⁰ syntax.
The definition of a Jenkins Pipeline is written into a text file (called a Jenkinsfile) which in turn can be committed to a project's source control repository⁴¹. This is the foundation of "Pipeline-as-code": treating the CD pipeline as a part of the application to be versioned and reviewed like any other code.
Creating a Jenkinsfile and committing it to source control provides a number of immediate benefits:

• Automatically creates a Pipeline build process for all branches and pull requests.
• Code review/iteration on the Pipeline (along with the remaining source code).
• Audit trail for the Pipeline.
• Single source of truth (https://en.wikipedia.org/wiki/Single_source_of_truth) for the Pipeline, which can be viewed and edited by multiple members of the project.
While the syntax for defining a Pipeline, either in the web UI or with a Jenkinsfile, is the same, it is generally considered best practice to define the Pipeline in a Jenkinsfile and check that in to source control.

Many of the individual syntactical components (or "steps") written into a Jenkinsfile, however, are common to both Declarative and Scripted Pipeline. Read more about how these two types of syntax differ in Pipeline concepts and Pipeline syntax overview below.
⁴⁰https://en.wikipedia.org/wiki/Domain-specific_language
⁴¹https://en.wikipedia.org/wiki/Version_control
Why Pipeline?
Jenkins is, fundamentally, an automation engine which supports a number of automation patterns.
Pipeline adds a powerful set of automation tools onto Jenkins, supporting use cases that span from
simple continuous integration to comprehensive CD pipelines. By modeling a series of related tasks,
users can take advantage of the many features of Pipeline:
• Code: Pipelines are implemented in code and typically checked into source control, giving teams the ability to edit, review, and iterate upon their delivery pipeline.
• Durable: Pipelines can survive both planned and unplanned restarts of the Jenkins master.
• Pausable: Pipelines can optionally stop and wait for human input or approval before continuing the Pipeline run.
• Versatile: Pipelines support complex real-world CD requirements, including the ability to fork/join, loop, and perform work in parallel.
• Extensible: The Pipeline plugin supports custom extensions to its DSL⁴⁰ and multiple options for integration with other plugins.
While Jenkins has always allowed rudimentary forms of chaining Freestyle Jobs together to perform sequential tasks (additional plugins have been used to implement complex behaviors utilizing Freestyle Jobs, such as the Copy Artifact, Parameterized Trigger, and Promoted Builds plugins), Pipeline makes this concept a first-class citizen in Jenkins.

Building on the core Jenkins value of extensibility, Pipeline is also extensible both by users with Pipeline Shared Libraries and by plugin developers (see the GitHub Organization Folder plugin).
The flowchart below is an example of one CD scenario easily modeled in Jenkins Pipeline:

[Image: Pipeline Flow]
Pipeline concepts
The following concepts are key aspects of Jenkins Pipeline, which tie in closely to Pipeline syntax
(see the overview below).
Pipeline
A Pipeline is a user-defined model of a CD pipeline. A Pipeline’s code defines your entire build
process, which typically includes stages for building an application, testing it and then delivering it.
Also, a pipeline block is a key part of Declarative Pipeline syntax.
Node
A node is a machine which is part of the Jenkins environment and is capable of executing a Pipeline.
Also, a node block is a key part of Scripted Pipeline syntax.
Stage
A stage block defines a conceptually distinct subset of tasks performed through the entire Pipeline (e.g. "Build", "Test" and "Deploy" stages), which is used by many plugins to visualize or present Jenkins Pipeline status/progress (e.g. Blue Ocean, the Pipeline: Stage View plugin).
Step
A single task. Fundamentally, a step tells Jenkins what to do at a particular point in time (or "step" in the process). For example, to execute the shell command make, use the sh step: sh 'make'. When a plugin extends the Pipeline DSL, that typically means the plugin has implemented a new step.
Pipeline example
Here is an example of a Jenkinsfile using Declarative Pipeline syntax (on the Jenkins website, its Scripted syntax equivalent can be accessed by clicking the Toggle Scripted Pipeline link):
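A reconstruction of the Declarative example that annotations <1>-<6> below refer to (<7> refers to node in the Scripted equivalent, which is not reproduced here):

pipeline { // <1>
    agent any // <2>
    stages {
        stage('Build') { // <3>
            steps { // <4>
                sh 'make' // <5>
            }
        }
        stage('Test') {
            steps {
                sh 'make check'
                junit 'reports/**/*.xml' // <6>
            }
        }
        stage('Deploy') {
            steps {
                sh 'make publish'
            }
        }
    }
}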
<1> pipeline is Declarative Pipeline-specific syntax that defines a "block" containing all content and instructions for executing the entire Pipeline.
<2> agent is Declarative Pipeline-specific syntax that instructs Jenkins to allocate an executor (on a node) and workspace for the entire Pipeline.
<3> stage is a syntax block that describes a stage of this Pipeline. Read more about stage blocks in Declarative Pipeline syntax on the Pipeline syntax page. As mentioned above, stage blocks are optional in Scripted Pipeline syntax.
<4> steps is Declarative Pipeline-specific syntax that describes the steps to be run in this stage.
<5> sh is a Pipeline step (provided by the Pipeline: Nodes and Processes plugin) that executes the given shell command.
<6> junit is another Pipeline step (provided by the JUnit plugin) for aggregating test reports.
<7> node (in the Scripted equivalent) is Scripted Pipeline-specific syntax that instructs Jenkins to execute this Pipeline (and any stages contained within it) on any available agent/node. This is effectively equivalent to agent in Declarative Pipeline-specific syntax.
Read more about Pipeline syntax on the Pipeline Syntax page.
Getting started with Pipeline
As mentioned previously, Jenkins Pipeline is a suite of plugins that supports implementing and integrating continuous delivery pipelines into Jenkins. Pipeline provides an extensible set of tools for modeling simple-to-complex delivery pipelines "as code" via the Pipeline DSL (Domain-specific language⁴²).
This section describes how to get started with creating your Pipeline project in Jenkins and
introduces you to the various ways that a Jenkinsfile can be created and stored.
Prerequisites
To use Jenkins Pipeline, you will need:
• Jenkins 2.x or later (older versions back to 1.642.3 may work but are not recommended)
• Pipeline plugin⁴³, which is installed as part of the "suggested plugins" (specified when running through the Post-installation setup wizard after installing Jenkins).
Read more about how to install and manage plugins in Managing Plugins.
Defining a Pipeline
Both Declarative and Scripted Pipeline are DSLs to describe portions of your software delivery pipeline. Scripted Pipeline is written in a limited form of Groovy syntax⁴⁴.
Relevant components of Groovy syntax will be introduced as required throughout this documenta-
tion, so while an understanding of Groovy is helpful, it is not required to work with Pipeline.
A Pipeline can be created in one of the following ways:
*
In SCM - you can write a Jenkinsfile
the Blue Ocean UI helps you write your Pipeline’s Jenkinsfile and commit it to source control.
Jenkins through the classic UI. manually, which you can commit to your project’s source control
repository. footnoteref:[scm,Source control management⁴⁵]
The syntax for defining a Pipeline with either approach is the same, but while Jenkins supports
entering Pipeline directly into the classic UI, it is generally considered best practice to define the
Pipeline in a Jenkinsfile which Jenkins will then load directly from source control.
In the Script text area of the Pipeline section, enter your Pipeline code. (The same example is used throughout the remainder of this procedure.)
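A reconstruction of the hello-world example these annotations refer to (from the Jenkins handbook; the Scripted equivalent uses node in place of agent):

pipeline {
    agent any // <1>
    stages {
        stage('Stage 1') {
            steps {
                echo 'Hello world!' // <2>
            }
        }
    }
}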
<1> agent instructs Jenkins to allocate an executor (on any available agent/node in the Jenkins environment) and workspace for the entire Pipeline.
<2> echo writes a simple string to the console output.
<3> node (in the Scripted equivalent) effectively does the same as agent (above).

[Image: Example Pipeline in the classic UI]

Note: You can also select from canned Scripted Pipeline examples from the try sample Pipeline option at the top right of the Script text area. Be aware that there are no canned Declarative Pipeline examples available from this field.

1. Click Save to open the Pipeline project/item view page.
2. On this page, click Build Now on the left to run the Pipeline.

[Image: Classic UI left column on an item]

3. Under Build History on the left, click #1 to access the details for this particular Pipeline run.
4. Click Console Output to see the full output from the Pipeline run. The following output shows a successful run of your Pipeline.

[Image: Console Output for the Pipeline]

Notes:
• You can also access the console output directly from the Dashboard by clicking the colored globe to the left of the build number (e.g. #1).
• Defining a Pipeline through the classic UI is convenient for testing Pipeline code snippets, or for handling simple Pipelines or Pipelines that do not require source code to be checked out/cloned from a repository. As mentioned above, unlike Jenkinsfiles you define through Blue Ocean (above) or in source control (below), Jenkinsfiles entered into the Script text area of Pipeline projects are stored by Jenkins itself, within the Jenkins home directory. Therefore, for greater control and flexibility over your Pipeline, particularly for projects in source control that are likely to gain complexity, it is recommended that you use Blue Ocean or source control to define your Jenkinsfile.
In SCM
Complex Pipelines are difficult to write and maintain within the classic UI’s Script text area of the
Pipeline configuration page.
To make this easier, your Pipeline’s Jenkinsfile can be written in a text editor or integrated
development environment (IDE) and committed to source control footnoteref:[scm] (optionally with
the application code that Jenkins will build). Jenkins can then check out your Jenkinsfile from
source control as part of your Pipeline project’s build process and then proceed to execute your
Pipeline.
To configure your Pipeline project to use a Jenkinsfile from source control:

1. Follow the procedure above for defining your Pipeline through the classic UI until you reach step 5 (accessing the Pipeline section on the Pipeline configuration page).
2. From the Definition field, choose the Pipeline script from SCM option.
3. From the SCM field, choose the type of source control system of the repository containing your Jenkinsfile.
4. Complete the fields specific to your repository's source control system. Tip: If you are uncertain of what value to specify for a given field, click its ? icon to the right for more information.
5. In the Script Path field, specify the location (and name) of your Jenkinsfile. This location is the one that Jenkins checks out/clones the repository containing your Jenkinsfile, which should match that of the repository's file structure. The default value of this field assumes that your Jenkinsfile is named "Jenkinsfile" and is located at the root of the repository.
When you update the designated repository, a new build is triggered, as long as the Pipeline is
configured with an SCM polling trigger.
TIP: Since Pipeline code (i.e. Scripted Pipeline in particular) is written in Groovy-like syntax, if your IDE is not correctly syntax highlighting your Jenkinsfile, try inserting the line #!/usr/bin/env groovy at the top of the Jenkinsfile⁴⁶ (see also the Shebang line in Groovy syntax, http://groovy-lang.org/syntax.html#shebangline), which may rectify the issue.
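For illustration, a minimal Jenkinsfile with such a shebang line:

#!/usr/bin/env groovy

pipeline {
    agent any
    stages {
        stage('build') {
            steps {
                echo "Hello World!"
            }
        }
    }
}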
Built-in Documentation
Pipeline ships with built-in documentation features to make it easier to create Pipelines of varying
complexities. This built-in documentation is automatically generated and updated based on the
plugins installed in the Jenkins instance.
The built-in documentation can be found globally at: localhost:8080/pipeline-syntax/⁴⁷, assuming
you have a Jenkins instance running on localhost port 8080. The same documentation is also linked
as Pipeline Syntax in the side-bar for any configured Pipeline project.
[Image: Classic UI left column on an item]
Snippet Generator
The built-in “Snippet Generator” utility is helpful for creating bits of code for individual steps,
discovering new steps provided by plugins, or experimenting with different parameters for a
particular step.
The Snippet Generator is dynamically populated with a list of the steps available to the Jenkins
instance. The number of steps available is dependent on the plugins installed which explicitly expose
steps for use in Pipeline.
To generate a step snippet with the Snippet Generator:
⁴⁶https://en.wikipedia.org/wiki/Shebang_(Unix)
⁴⁷http://localhost:8080/pipeline-syntax/
1. Navigate to the Pipeline Syntax link (referenced above) from a configured Pipeline, or at localhost:8080/pipeline-syntax⁴⁸.
2. Select the desired step in the Sample Step dropdown menu.
3. Use the dynamically populated area below the Sample Step dropdown to configure the selected step.
4. Click Generate Pipeline Script to create a snippet of Pipeline which can be copied and pasted into a Pipeline.
[Image: Snippet Generator]
To access additional information and/or documentation about the step selected, click on the help
icon (indicated by the red arrow in the image above).
Declarative Directive Generator

The Declarative Directive Generator allows you to choose the Declarative directive you are interested in, configure it in a form, and generate the configuration for that directive, which you can then use in your Declarative Pipeline.
To generate a Declarative directive using the Declarative Directive Generator:
1. Navigate to the Pipeline Syntax link (referenced above) from a configured Pipeline, and then click on the Declarative Directive Generator link in the sidepanel, or go directly to localhost:8080/directive-generator⁵².
2. Select the desired directive in the dropdown menu.
3. Use the dynamically populated area below the dropdown to configure the selected directive.
4. Click Generate Directive to create the directive's configuration to copy into your Pipeline.
The Directive Generator can generate configuration for nested directives, such as conditions inside
a when directive, but it cannot generate Pipeline steps. For the contents of directives which contain
steps, such as steps inside a stage or conditions like always or failure inside post, the Directive
Generator adds a placeholder comment instead. You will still need to add steps to your Pipeline by
hand.
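For example, the generated configuration for a stage directive looks like this (a reconstruction of the handbook's example; note the placeholder comment where steps must be added by hand):

stage('Stage 1') {
    steps {
        // One or more steps need to be included within the steps block.
    }
}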
Further Reading
This section merely scratches the surface of what can be done with Jenkins Pipeline, but should
provide enough of a foundation for you to start experimenting with a test Jenkins instance.
In the next section, The Jenkinsfile, more Pipeline steps will be discussed along with patterns for
implementing successful, real-world, Jenkins Pipelines.
Additional Resources
• Pipeline Steps Reference, encompassing all steps provided by plugins distributed in the Jenkins Update Center.
• Pipeline Examples, a community-curated collection of copyable Pipeline examples.
⁵²http://localhost:8080/directive-generator
Using Docker with Pipeline
Many organizations use Docker⁵³ to unify their build and test environments across machines, and to provide an efficient mechanism for deploying applications. From Pipeline version 2.5 onwards, Pipeline has built-in support for interacting with Docker from within a Jenkinsfile.

While this section will cover the basics of utilizing Docker from within a Jenkinsfile, it will not cover the fundamentals of Docker, which can be read about in the Docker Getting Started Guide⁵⁴.
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] sh
[guided-tour] Running shell script
+ node --version
v7.4.0
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Pipeline supports adding custom arguments which are passed to Docker, allowing users to specify custom Docker Volumes⁵⁷ to mount, which can be used for caching data on the agent between Pipeline runs. The following example will cache ~/.m2 between Pipeline runs utilizing the maven container⁵⁸, thereby avoiding the need to re-download dependencies for subsequent runs of the Pipeline (see the sketch below).
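A reconstruction of the Declarative example referenced here (based on the Jenkins handbook):

pipeline {
    agent {
        docker {
            image 'maven:3-alpine'
            // mount the host's ~/.m2 as the container's Maven cache
            args '-v $HOME/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B'
            }
        }
    }
}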
Using a Dockerfile
[[dockerfile]]
For projects which require a more customized execution environment, Pipeline also supports
building and running a container from a Dockerfile in the source repository. In contrast to the
previous approach of using an “off-the-shelf” container, using the agent { dockerfile true }
syntax will build a new image from a Dockerfile rather than pulling one from Docker Hub⁵⁹.
Re-using an example from above, with a more custom Dockerfile:
FROM node:7-alpine

RUN apk add -U subversion

(Dockerfile)
By committing this to the root of the source repository, the Jenkinsfile can be changed to build a container based on this Dockerfile and then run the defined steps using that container (see the sketch below):
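A reconstruction of the example referenced here (based on the Jenkins handbook):

pipeline {
    agent { dockerfile true }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
                sh 'svn --version'
            }
        }
    }
}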
⁵⁷https://docs.docker.com/engine/tutorials/dockervolumes/
⁵⁸https://hub.docker.com/_/maven/
⁵⁹https://hub.docker.com
Running "sidecar" containers

Using Docker in Pipeline can be an effective way to run a service on which the build, or a set of tests, may depend. Similar to the sidecar pattern⁶⁰, Docker Pipeline can run one container "in the background" while performing work in another.
node {
    checkout scm
    /*
     * In order to communicate with the MySQL server, this Pipeline explicitly
     * maps the port (`3306`) to a known port on the host machine.
     */
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw" -p 3306:3306') { c ->
        /* Wait until mysql service is up */
        sh 'while ! mysqladmin ping -h0.0.0.0 --silent; do sleep 1; done'
        /* Run some tests which require MySQL */
        sh 'make check'
    }
}

⁶⁰https://docs.microsoft.com/en-us/azure/architecture/patterns/sidecar
This example can be taken further, utilizing two containers simultaneously. One “sidecar” running
MySQL, and another providing the execution environment, by using the Docker container links⁶¹.
node {
    checkout scm
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
        docker.image('mysql:5').inside("--link ${c.id}:db") {
            /* Wait until mysql service is up */
            sh 'while ! mysqladmin ping -hdb --silent; do sleep 1; done'
        }
        docker.image('centos:7').inside("--link ${c.id}:db") {
            /*
             * Run some tests which require MySQL, and assume that it is
             * available on the host name `db`
             */
            sh 'make check'
        }
    }
}
The above example uses the object exposed by withRun, which has the running container’s ID
available via the id property. Using the container’s ID, the Pipeline can create a link by passing
custom Docker arguments to the inside() method.
The id property can also be useful for inspecting the logs of a running Docker container before the Pipeline exits (see the sketch below):
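A minimal sketch (the handbook's snippet uses docker logs with the container's id):

node {
    checkout scm
    docker.image('mysql:5').withRun('-e "MYSQL_ROOT_PASSWORD=my-secret-pw"') { c ->
        /* ... run work that depends on the container ... */
        sh "docker logs ${c.id}"   // dump the container's logs before withRun stops it
    }
}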
Building containers
In order to create a Docker image, the Docker Pipeline plugin also provides a build() method for creating a new image, from a Dockerfile in the repository, during a Pipeline run.

One major benefit of using the syntax docker.build("my-image-name") is that a Scripted Pipeline can use the return value for subsequent Docker Pipeline calls, for example:
⁶¹https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/
node {
    checkout scm

    def customImage = docker.build("my-image:${env.BUILD_ID}")

    customImage.inside {
        sh 'make test'
    }
}
The return value can also be used to publish the Docker image to Docker Hub⁶², or a custom Registry,
via the push() method, for example:
node {
    checkout scm
    def customImage = docker.build("my-image:${env.BUILD_ID}")
    customImage.push()
}
One common usage of image "tags" is to specify a latest tag for the most recently validated version of a Docker image. The push() method accepts an optional tag parameter, allowing the Pipeline to push the customImage with different tags, for example:
node {
    checkout scm
    def customImage = docker.build("my-image:${env.BUILD_ID}")
    customImage.push()

    customImage.push('latest')
}
The build() method builds the Dockerfile in the current directory by default. This can be
overridden by providing a directory path containing a Dockerfile as the second argument of the
build() method, for example:
⁶²https://hub.docker.com
node {
    checkout scm
    def testImage = docker.build("test-image", "./dockerfiles/test") // <1>

    testImage.inside {
        sh 'make test'
    }
}

<1> builds test-image from the Dockerfile found at ./dockerfiles/test/Dockerfile.
It is also possible to pass other arguments to docker build via the second argument of the build() method, for example to override the default Dockerfile name with -f:

node {
    checkout scm
    def dockerfile = 'Dockerfile.test'
    def customImage = docker.build("my-image:${env.BUILD_ID}", "-f ${dockerfile} ./dockerfiles") // <1>
}

<1> builds my-image:${env.BUILD_ID} from the Dockerfile found at ./dockerfiles/Dockerfile.test.
⁶³https://docs.docker.com/engine/reference/commandline/build/
⁶⁴https://docs.docker.com/swarm/
To select a non-default Docker server, such as with Docker Swarm, the withServer() method should be used:

node {
    checkout scm

    docker.withServer('tcp://swarm.example.com:2376', 'swarm-certs') {
        docker.image('mysql:5').withRun('-p 3306:3306') {
            /* do things */
        }
    }
}
inside() and build() will not work properly with a Docker Swarm server out of the box.

For inside() to work, the Docker server and the Jenkins agent must use the same filesystem, so that the workspace can be mounted.

Currently neither the Jenkins plugin nor the Docker CLI will automatically detect the case that the server is running remotely; a typical symptom would be errors from nested sh commands.

When Jenkins detects that the agent is itself running inside a Docker container, it will automatically pass the --volumes-from argument to the inside container, ensuring that it can share a workspace with the agent.

Additionally some versions of Docker Swarm do not support custom Registries.
⁶⁵https://hub.docker.com
Using a custom registry
By default the Docker Pipeline plugin assumes the default Docker Registry of Docker Hub⁶⁵. In order to use a custom Docker Registry, users of Scripted Pipeline can wrap steps with the withRegistry() method, passing in the custom Registry URL, for example:
1 node {
2 checkout scm
3
4 docker.withRegistry('https://registry.example.com') {
5
6 docker.image('my-custom-image').inside {
7 sh 'make test'
8 }
9 }
10 }
For a Docker Registry which requires authentication, add a “Username/Password” Credentials item
from the Jenkins home page and use the Credentials ID as a second argument to withRegistry():
1 node {
2 checkout scm
3
4 docker.withRegistry('https://registry.example.com', 'credentials-id') {
5
6 def customImage = docker.build("my-image:${env.BUILD_ID}")
7
8 /* Push the container to the custom Registry */
9 customImage.push()
10 }
11 }
Using a Jenkinsfile
This section builds on the information covered in Getting started with Pipeline and introduces more
useful steps, common patterns, and demonstrates some non-trivial Jenkinsfile examples.
Creating a Jenkinsfile, which is checked into source control, provides a number of immediate benefits:

• Code review/iteration on the Pipeline
• Audit trail for the Pipeline
• Single source of truth for the Pipeline, which can be viewed and edited by multiple members of the project
Pipeline supports two syntaxes, Declarative (introduced in Pipeline 2.5) and Scripted Pipeline. Both support building continuous delivery pipelines, and both may be used to define a Pipeline in either the web UI or with a Jenkinsfile, though it’s generally considered a best practice to create a Jenkinsfile and check the file into the source control repository.
Creating a Jenkinsfile
As discussed in the Defining a Pipeline in SCM, a Jenkinsfile is a text file that contains the
definition of a Jenkins Pipeline and is checked into source control. Consider the following Pipeline
which implements a basic three-stage continuous delivery pipeline.
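The rendered Pipeline listing was lost here; a minimal Declarative sketch of such a three-stage pipeline (the echo bodies are placeholders for real build, test, and deploy steps) is:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing..'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying....'
            }
        }
    }
}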
Not all Pipelines will have these same three stages, but it is a good starting point to define them for
most projects. The sections below will demonstrate the creation and execution of a simple Pipeline
in a test installation of Jenkins.
Using a text editor, ideally one which supports Groovy⁶⁶ syntax highlighting, create a new
Jenkinsfile in the root directory of the project.
The Declarative Pipeline example above contains the minimum necessary structure to implement a continuous delivery pipeline. The agent directive, which is required, instructs Jenkins to allocate an executor and workspace for the Pipeline. Without an agent directive, not only is the Declarative Pipeline not valid, it would not be capable of doing any work! By default the agent directive ensures that the source repository is checked out and made available for steps in the subsequent stages.
The stages and steps directives are also required for a valid Declarative Pipeline, as they instruct Jenkins what to execute and in which stage it should be executed.
For more advanced usage with Scripted Pipeline, the example above node is a crucial first step as it allocates an executor and workspace for the Pipeline. In essence, without node, a Pipeline cannot do any work! From within node, the first order of business will be to checkout the source code for this project. Since the Jenkinsfile is being pulled directly from source control, Pipeline provides a quick and easy way to access the right revision of the source code:
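The rendered listing was lost; a minimal Scripted sketch of this first step:

node {
    checkout scm // <1>
    /* ... additional stages and steps ... */
}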
<1> The checkout step will checkout code from source control; scm is a special variable which instructs the checkout step to clone the specific revision which triggered this Pipeline run.
Build
For many projects the beginning of “work” in the Pipeline would be the “build” stage. Typically this
stage of the Pipeline will be where source code is assembled, compiled, or packaged. The Jenkinsfile
is not a replacement for an existing build tool such as GNU/Make, Maven, Gradle, etc, but rather
can be viewed as a glue layer to bind the multiple phases of a project’s development lifecycle (build,
test, deploy, etc) together.
Jenkins has a number of plugins for invoking practically any build tool in general use, but this
example will simply invoke make from a shell step (sh). The sh step assumes the system is Unix/Linux-based; for Windows-based systems, the bat step could be used instead.
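The rendered listing was lost here; a Declarative sketch of such a “Build” stage, matching the callouts below:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make' // <1>
                archiveArtifacts artifacts: '**/target/*.jar', fingerprint: true // <2>
            }
        }
    }
}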
<1> The sh step invokes the make command and will only continue if a zero exit code is returned by the command. Any non-zero exit code will fail the Pipeline. <2> archiveArtifacts captures the files built matching the include pattern (**/target/*.jar) and saves them to the Jenkins master for later retrieval.
TIP: Archiving artifacts is not a substitute for using external artifact repositories such as Artifactory or Nexus and should be considered only for basic reporting and file archival.
⁶⁶http://groovy-lang.org
Test
Running automated tests is a crucial component of any successful continuous delivery process. As
such, Jenkins has a number of test recording, reporting, and visualization facilities provided by a
number of plugins⁶⁷. At a fundamental level, when there are test failures, it is useful to have Jenkins
record the failures for reporting and visualization in the web UI. The example below uses the junit
step, provided by the plugin:junit[JUnit plugin].
In the example below, if tests fail, the Pipeline is marked “unstable”, as denoted by a yellow ball in
the web UI. Based on the recorded test reports, Jenkins can also provide historical trend analysis and
visualization.
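The rendered listing was lost here; a Declarative sketch of such a “Test” stage, matching the callouts below:

pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh 'make check || true' // <1>
                junit '**/target/*.xml' // <2>
            }
        }
    }
}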
<1> Using an inline shell conditional (sh 'make check || true') ensures that the sh step always sees a zero exit code, giving the junit step the opportunity to capture and process the test reports. Alternative approaches to this are covered in more detail in the Handling failure section below. <2> junit captures and associates the JUnit XML files matching the inclusion pattern (**/target/*.xml).
Deploy
Deployment can imply a variety of steps, depending on the project or organization requirements,
and may be anything from publishing built artifacts to an Artifactory server, to pushing code to a
production system.
At this stage of the example Pipeline, both the “Build” and “Test” stages have successfully executed.
In essence, the “Deploy” stage will only execute assuming previous stages completed successfully,
otherwise the Pipeline would have exited early.
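The rendered listing was lost here; a Declarative sketch of such a “Deploy” stage, matching the callout below. The when condition guards against deploying after test failures:

pipeline {
    agent any
    stages {
        stage('Deploy') {
            when {
                expression {
                    currentBuild.result == null || currentBuild.result == 'SUCCESS' // <1>
                }
            }
            steps {
                sh 'make publish'
            }
        }
    }
}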
<1> Accessing the currentBuild.result variable allows the Pipeline to determine if there were any test failures. In which case, the value would be UNSTABLE.
Assuming everything has executed successfully in the example Jenkins Pipeline, each successful
Pipeline run will have associated build artifacts archived, test results reported upon and the full
console output all in Jenkins.
A Scripted Pipeline can include conditional tests (shown above), loops,
try/catch/finally blocks and even functions. The next section will cover this advanced Scripted
Pipeline syntax in more detail.
⁶⁷https://plugins.jenkins.io/?labels=report
String interpolation
Jenkins Pipeline uses rules identical to Groovy⁶⁸ for string interpolation. Groovy’s String interpola-
tion support can be confusing to many newcomers to the language. While Groovy supports declaring
a string with either single quotes, or double quotes, for example:
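The example listing did not survive the conversion; a minimal sketch:

def singlyQuoted = 'Hello'
def doublyQuoted = "World"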
Only the latter string will support the dollar-sign ($) based string interpolation, for example:
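This listing was also lost; a sketch:

def username = 'Jenkins'
echo 'Hello Mr. ${username}'   // prints the literal text: Hello Mr. ${username}
echo "Hello Mr. ${username}"   // prints: Hello Mr. Jenkins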
Understanding how to use string interpolation is vital for using some of Pipeline’s more advanced
features.
Jenkins Pipeline exposes environment variables via the global variable env, which is available from anywhere within a Jenkinsfile. A few of the environment variables it provides include:

BUILD_ID:: The current build ID, identical to BUILD_NUMBER for builds created in Jenkins versions 1.597+
JOB_NAME:: Name of the project of this build, such as “foo” or “foo/bar”.
JENKINS_URL:: Full URL of Jenkins, such as https://example.com:port/jenkins/ (NOTE: only available if Jenkins URL set in “System Configuration”)
Referencing or using these environment variables can be accomplished like accessing any key in a
Groovy Map⁷⁰, for example:
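The rendered listing was lost; a minimal Declarative sketch:

pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                echo "Running ${env.BUILD_ID} on ${env.JENKINS_URL}"
            }
        }
    }
}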
In the case where environment variables need to be set dynamically at run time, this can be done with the use of shell scripts (sh), Windows Batch Scripts (bat) or PowerShell Scripts (powershell). Each script can either returnStatus or returnStdout. More information on scripts⁷¹.
Below is an example in a declarative pipeline using sh (shell) with both returnStatus and
returnStdout.
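The rendered listing was lost; a sketch matching the callouts below:

pipeline {
    agent any // <1>
    environment {
        // Using returnStdout
        CC = """${sh(
                returnStdout: true,
                script: 'echo "clang"'
            )}""" // <2>
        // Using returnStatus
        EXIT_STATUS = """${sh(
                returnStatus: true,
                script: 'exit 1'
            )}"""
    }
    stages {
        stage('Example') {
            steps {
                sh 'echo "CC is $CC, exit status was $EXIT_STATUS"'
            }
        }
    }
}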
<1> An agent must be set at the top level of the pipeline. This will fail if agent is set as agent none. <2> When using returnStdout a trailing whitespace will be appended to the returned string. Use .trim() to remove this.
Handling credentials
Credentials configured in Jenkins can be handled in Pipelines for immediate use. Read more about
using credentials in Jenkins on the Using credentials page.
Jenkins’ declarative Pipeline syntax has the credentials() helper method (used within the environment
directive) which supports secret text, username and password, as well as secret file credentials. If
you want to handle other types of credentials, refer to the For other credential types section (below).
⁷⁰http://groovy-lang.org/syntax.html#_maps
⁷¹https://jenkins.io/doc/pipeline/steps/workflow-durable-task-step
Secret text
The following Pipeline code shows an example of how to create a Pipeline using environment
variables for secret text credentials.
In this example, two secret text credentials are assigned to separate environment variables to access
Amazon Web Services (AWS). These credentials would have been configured in Jenkins with their
respective credential IDs jenkins-aws-secret-key-id and jenkins-aws-secret-access-key.
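The rendered listing was lost; a Declarative sketch matching the callouts below (the step bodies are placeholders):

pipeline {
    agent any
    environment {
        AWS_ACCESS_KEY_ID     = credentials('jenkins-aws-secret-key-id')
        AWS_SECRET_ACCESS_KEY = credentials('jenkins-aws-secret-access-key')
    }
    stages {
        stage('Example stage 1') {
            steps {
                // <1>
            }
        }
        stage('Example stage 2') {
            steps {
                // <2>
            }
        }
    }
}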
<1> You can reference the two credential environment variables (defined in this Pipeline’s environment directive), within this stage’s steps using the syntax $AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY. For example, here you can authenticate to AWS using the secret text credentials assigned to these credential variables. To maintain the security and anonymity of these credentials, if you attempt to retrieve the value of these credential variables from within the Pipeline (e.g. echo $AWS_SECRET_ACCESS_KEY), Jenkins only returns the value “****” to prevent secret information from being written to the console output and any logs. Any sensitive information in credential IDs themselves (such as usernames) are also returned as “****” in the Pipeline run’s output. <2> In this Pipeline example, the credentials assigned to the two AWS_... environment variables are scoped globally for the entire Pipeline, so these credential variables could also be used in this stage’s steps. If, however, the environment directive in this Pipeline were moved to a specific stage (as is the case in the Usernames and passwords Pipeline example below), then these AWS_... environment variables would only be scoped to the steps in that stage.
Usernames and passwords
The following Pipeline code snippets show an example of how to create a Pipeline using environment variables for username and password credentials.
In this example, username and password credentials are assigned to environment variables to access
a Bitbucket repository in a common account or team for your organization; these credentials would
have been configured in Jenkins with the credential ID jenkins-bitbucket-common-creds.
When setting the credential environment variable in the environment directive:
1 environment {
2 BITBUCKET_COMMON_CREDS = credentials('jenkins-bitbucket-common-creds')
3 }
this actually sets the following three environment variables:

• BITBUCKET_COMMON_CREDS - contains a username and a password separated by a colon in the format username:password.
• BITBUCKET_COMMON_CREDS_USR - an additional variable containing the username component only.
• BITBUCKET_COMMON_CREDS_PSW - an additional variable containing the password component only.

These credential environment variables can then be referenced within the stage’s steps using the syntax:

• $BITBUCKET_COMMON_CREDS
• $BITBUCKET_COMMON_CREDS_USR
• $BITBUCKET_COMMON_CREDS_PSW

The following code snippet shows the example Pipeline in its entirety:
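A Declarative sketch of the complete example (the step bodies are placeholders):

pipeline {
    agent any
    stages {
        stage('Example stage 1') {
            environment {
                BITBUCKET_COMMON_CREDS = credentials('jenkins-bitbucket-common-creds')
            }
            steps {
                // use $BITBUCKET_COMMON_CREDS_USR and $BITBUCKET_COMMON_CREDS_PSW here
            }
        }
        stage('Example stage 2') {
            steps {
                // the BITBUCKET_COMMON_CREDS variables are not in scope here
            }
        }
    }
}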
Secret files
As far as Pipelines are concerned, secret files are handled in exactly the same manner as secret text
(above).
Essentially, the only difference between secret text and secret file credentials are that for secret
text, the credential itself is entered directly into Jenkins whereas for a secret file, the credential is
originally stored in a file which is then uploaded to Jenkins.
Unlike secret text, secret files cater for credentials that are:

• too unwieldy to enter directly into Jenkins, and/or
• in binary format, such as a GPG file.
For other credential types
If you need to set credentials in a Pipeline for anything other than secret text, usernames and passwords, or secret files (above) - i.e. SSH keys or certificates, then use Jenkins’ Snippet Generator feature, which you can access through Jenkins’ classic UI.
To access the Snippet Generator for your Pipeline project/item:
1. From the Jenkins home page (i.e. the Dashboard of Jenkins’ classic UI), click the name of your Pipeline project/item.
2. On the left, click Pipeline Syntax and ensure that the Snippet Generator link is in bold at the top-left. (If not, click its link.)
3. From the Sample Step field, choose withCredentials: Bind credentials to variables.
4. Under Bindings, click Add and choose from the dropdown:
• SSH User Private Key - to handle SSH public/private key pair credentials⁷², from which you can specify:
  - Key File Variable - the name of the environment variable that will be bound to these credentials. Jenkins actually assigns this temporary variable to the secure location of the private key file required in the SSH public/private key pair authentication process.
  - Passphrase Variable (Optional) - the name of the environment variable that will be bound to the passphrase⁷³ associated with the SSH public/private key pair.
  - Username Variable (Optional) - the name of the environment variable that will be bound to the username associated with the SSH public/private key pair.
  - Credentials - choose the SSH public/private key credentials stored in Jenkins. The value of this field is the credential ID, which Jenkins writes out to the generated snippet.
• Certificate - to handle PKCS#12 certificates⁷⁴, from which you can specify:
  - Keystore Variable - the name of the environment variable that will be bound to these credentials. Jenkins actually assigns this temporary variable to the secure location of the certificate’s keystore required in the certificate authentication process.
  - Password Variable (Optional) - the name of the environment variable that will be bound to the password associated with the certificate.
  - Alias Variable (Optional) - the name of the environment variable that will be bound to the unique alias associated with the certificate.
  - Credentials - choose the certificate credentials stored in Jenkins. The value of this field is the credential ID, which Jenkins writes out to the generated snippet.
• Docker client certificate - to handle Docker Host Certificate Authentication.
5. Click Generate Pipeline Script and Jenkins generates a withCredentials( ... ) { ... } Pipeline step snippet for the credentials you specified, which you can then copy and paste into your Declarative or Scripted Pipeline code.

Notes:
• The Credentials fields (above) show the names of credentials configured in Jenkins. However, these values are converted to credential IDs after clicking Generate Pipeline Script.
• To combine more than one credential in a single withCredentials( ... ) { ... } Pipeline step, see Combining credentials in one step (below) for details.

⁷²http://www.snailbook.com/protocols.html
⁷³https://tools.ietf.org/html/rfc4251#section-9.4.4
⁷⁴https://tools.ietf.org/html/rfc7292
SSH User Private Key example
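The generated snippet was lost in this copy; it looks approximately like this (the credential ID and variable names are examples):

withCredentials(bindings: [sshUserPrivateKey(credentialsId: 'jenkins-ssh-key-for-abc',
                                             keyFileVariable: 'SSH_KEY_FOR_ABC',
                                             passphraseVariable: '',
                                             usernameVariable: '')]) {
    // some block
}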
The optional passphraseVariable and usernameVariable definitions can be deleted in your final
Pipeline code.
Certificate example
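Approximately (again with example credential ID and variable names):

withCredentials(bindings: [certificate(aliasVariable: '',
                                       credentialsId: 'jenkins-certificate-for-xyz',
                                       keystoreVariable: 'CERTIFICATE_FOR_XYZ',
                                       passwordVariable: 'XYZ-CERTIFICATE-PASSWORD')]) {
    // some block
}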
The optional aliasVariable and passwordVariable variable definitions can be deleted in your final
Pipeline code.
The following code snippet shows an example Pipeline in its entirety, which implements the SSH
User Private Key and Certificate snippets above:
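A Declarative sketch of that combined Pipeline:

pipeline {
    agent any
    stages {
        stage('Example stage 1') {
            steps {
                withCredentials(bindings: [sshUserPrivateKey(credentialsId: 'jenkins-ssh-key-for-abc',
                                                             keyFileVariable: 'SSH_KEY_FOR_ABC'),
                                           certificate(credentialsId: 'jenkins-certificate-for-xyz',
                                                       keystoreVariable: 'CERTIFICATE_FOR_XYZ',
                                                       passwordVariable: 'XYZ-CERTIFICATE-PASSWORD')]) {
                    // <1>
                }
            }
        }
        stage('Example stage 2') {
            steps {
                // the credential variables bound above are not available here
            }
        }
    }
}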
<1> The credential variables bound by withCredentials( ... ) { ... } are scoped to that step’s block only, so these credential variables are not available for use in this ‘Example stage 2’ stage’s steps.
To maintain the security and anonymity of these credentials, if you attempt to retrieve the value
of these credential variables from within these withCredentials( ... ) { ... } steps, the same
behavior described in the Secret text example (above) applies to these SSH public/private key pair
credential and certificate variable types too.
Beware of Groovy string interpolation when handling secrets: interpolating a credential into a double-quoted script body embeds the secret in the command itself, whereas a single-quoted body lets the shell resolve the environment variable securely:

node {
  withCredentials([string(credentialsId: 'mytoken', variable: 'TOKEN')]) {
    sh /* WRONG! */ """
      set +x
      curl -H 'Token: $TOKEN' https://some.api/
    """
    sh /* CORRECT */ '''
      set +x
      curl -H 'Token: $TOKEN' https://some.api/
    '''
  }
}
Combining credentials in one step
Using the Snippet Generator, you can make multiple credentials available within a single withCredentials( ... ) { ... } step by doing the following:
1. From the Jenkins home page (i.e. the Dashboard of Jenkins’ classic UI), click the name of your Pipeline project/item.
2. On the left, click Pipeline Syntax and ensure that the Snippet Generator link is in bold at the top-left. (If not, click its link.)
3. From the Sample Step field, choose withCredentials: Bind credentials to variables.
4. Click Add under Bindings.
5. Choose the credential type to add to the withCredentials( ... ) { ... } step from the dropdown list.
6. Specify the credential Bindings details. Read more about these in the procedure under For other credential types (above).
7. Repeat from “Click Add …” (above) for each (set of) credential/s to add to the withCredentials( ... ) { ... } step.
8. Click Generate Pipeline Script to generate the final withCredentials( ... ) { ... } step snippet.
Handling parameters
Declarative Pipeline supports parameters out-of-the-box, allowing the Pipeline to accept user-
specified parameters at runtime via the parameters directive. Configuring parameters with Scripted
Pipeline is done with the properties step, which can be found in the Snippet Generator.
If you configured your pipeline to accept parameters using the Build with Parameters option, those
parameters are accessible as members of the params variable.
Assuming that a String parameter named “Greeting” has been configured in the Jenkinsfile, the Pipeline can access that parameter via ${params.Greeting}:
properties([parameters([string(defaultValue: 'Hello', description: 'How should I greet the world?', name: 'Greeting')])])

node {
    echo "${params.Greeting} World!"
}
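The Scripted form above is reconstructed from the surviving text; a Declarative sketch of the same Pipeline, using the parameters directive, would be:

pipeline {
    agent any
    parameters {
        string(name: 'Greeting', defaultValue: 'Hello', description: 'How should I greet the world?')
    }
    stages {
        stage('Example') {
            steps {
                echo "${params.Greeting} World!"
            }
        }
    }
}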
Handling failure
Declarative Pipeline supports robust failure handling by default via its post section which allows
declaring a number of different “post conditions” such as: always, unstable, success, failure, and
changed. The Pipeline Syntax section provides more detail on how to use the various post conditions.
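The rendered listing was lost here; a Declarative sketch using post to record test results and notify on failure (the mail recipient is a placeholder):

pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh 'make check'
            }
        }
    }
    post {
        always {
            junit '**/target/*.xml'
        }
        failure {
            mail to: '[email protected]', subject: 'The Pipeline failed :('
        }
    }
}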
An alternative way of handling this, which preserves the early-exit behavior of failures in Pipeline,
while still giving junit the chance to capture test reports, is to use a series of try/finally blocks:
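A Scripted sketch of that try/finally pattern:

node {
    /* ... existing build steps ... */
    stage('Test') {
        try {
            sh 'make check'
        }
        finally {
            junit '**/target/*.xml'
        }
    }
}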
Using multiple agents
In all the previous examples, only a single agent has been used. Not only can this behavior be overridden, but Pipeline allows utilizing multiple agents in the Jenkins environment from within the same Jenkinsfile, which can be helpful for more advanced use-cases such as executing builds/tests across multiple platforms.
In the example below, the “Build” stage will be performed on one agent and the built results will
be reused on two subsequent agents, labelled “linux” and “windows” respectively, during the “Test”
stage.
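The rendered listing was lost here; a Declarative sketch matching the callouts below:

pipeline {
    agent none
    stages {
        stage('Build') {
            agent any
            steps {
                checkout scm
                sh 'make'
                stash includes: '**/target/*.jar', name: 'app' // <1>
            }
        }
        stage('Test on Linux') {
            agent { label 'linux' } // <2>
            steps {
                unstash 'app' // <3>
                sh 'make check'
            }
            post {
                always {
                    junit '**/target/*.xml'
                }
            }
        }
        stage('Test on Windows') {
            agent { label 'windows' }
            steps {
                unstash 'app'
                bat 'make check' // <4>
            }
            post {
                always {
                    junit '**/target/*.xml'
                }
            }
        }
    }
}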
<1> The stash step allows capturing files matching an inclusion pattern (**/target/*.jar) for reuse within the same Pipeline. Once the Pipeline has completed its execution, stashed files are deleted from the Jenkins master. <2> The parameter in agent/node allows for any valid Jenkins label expression. Consult the Pipeline Syntax section for more details. <3> unstash will retrieve the named “stash” from the Jenkins master into the Pipeline’s current workspace. <4> The bat script allows for executing batch scripts on Windows-based platforms.
For convenience, when calling steps taking only one parameter (or only one mandatory parameter),
the parameter name may be omitted, for example:
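A sketch of the two equivalent forms:

sh 'echo hello'               /* short form  */
sh([script: 'echo hello'])    /* long form */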
Parallel execution
The example in the section above runs tests across two different platforms in a linear series. In
practice, if the make check execution takes 30 minutes to complete, the “Test” stage would now take
60 minutes to complete!
Fortunately, Pipeline has built-in functionality for executing portions of Scripted Pipeline in parallel,
implemented in the aptly named parallel step.
Refactoring the example above to use the parallel step:
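The rendered listing was lost here; a Scripted sketch:

stage('Build') {
    /* ... build and stash 'app' as before ... */
}

stage('Test') {
    parallel linux: {
        node('linux') {
            checkout scm
            try {
                unstash 'app'
                sh 'make check'
            }
            finally {
                junit '**/target/*.xml'
            }
        }
    },
    windows: {
        node('windows') {
            /* .. analogous steps using bat .. */
        }
    }
}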
Instead of executing the tests on the “linux” and “windows” labelled nodes in series, they will now
execute in parallel assuming the requisite capacity exists in the Jenkins environment.
Branches and Pull Requests
In the previous section a Jenkinsfile which could be checked into source control was implemented.
This section covers the concept of Multibranch Pipelines which build on the Jenkinsfile foundation
to provide more dynamic and automatic functionality in Jenkins.
To create a Multibranch Pipeline project:

• From the Jenkins home page, click New Item.
• Enter a name for your Pipeline, select Multibranch Pipeline and click OK.

NOTE: Jenkins uses the name of the Pipeline to create directories on disk. Pipeline names which include spaces may uncover bugs in scripts which do not expect paths to contain spaces.

• Add a Branch Source (for example, Git) and enter the location of the repository.
Upon Save, Jenkins automatically scans the designated repository and creates appropriate items for
each branch in the repository which contains a Jenkinsfile.
By default, Jenkins will not automatically re-index the repository for branch additions or deletions
(unless using an Organization Folder), so it is often useful to configure a Multibranch Pipeline to
periodically re-index in the configuration:
(Image: Setting up branch re-indexing)
Running a Pipeline
Multibranch
See the Multibranch documentation for more information.
Parameters
See the Jenkinsfile documentation for more information.
Replay
See the Replay documentation for more information.
How to Use
No additional configuration is needed in the Jenkinsfile to allow you to restart stages in your
Declarative Pipelines. This is an inherent part of Declarative Pipelines and is available automatically.
Restarting from the Classic UI
Once your Pipeline has completed, whether it succeeds or fails, you can go to the side panel for the
run in the classic UI and click on “Restart from Stage”.
(Image: Restart from Stage link)
You will be prompted to choose from a list of top-level stages that were executed in the original
run, in the order they were executed. Stages which were skipped due to an earlier failure will not be
available to be restarted, but stages which were skipped due to a when condition not being satisfied
will be available. The parent stage for a group of parallel stages, or a group of nested stages to be
run sequentially will also not be available - only top-level stages are allowed.
(Image: Choose the stage to restart)
Once you choose a stage to restart from and click submit, a new build, with a new build number, will
be started. All stages before the selected stage will be skipped, and the Pipeline will start executing
at the selected stage. From that point on, the Pipeline will run as normal.
Restarting from the Blue Ocean UI
Restarting stages can also be done in the Blue Ocean UI. Once your Pipeline has completed, whether
it succeeds or fails, you can click on the node which represents the stage. You can then click on the
Restart link for that stage.
Preserving stashes for use with restarted stages
Normally, when you run the stash step in your Pipeline, the resulting stash of artifacts is cleared
when the Pipeline completes, regardless of the result of the Pipeline. Since stash artifacts aren’t
accessible outside of the Pipeline run that created them, this has not created any limitations on
usage. But with Declarative stage restarting, you may want to be able to unstash artifacts from a
stage which ran before the stage you’re restarting from.
To enable this, there is a job property that allows you to configure a maximum number of completed
runs whose stash artifacts should be preserved for reuse in a restarted run. You can specify anywhere
from 1 to 50 as the number of runs to preserve.
This job property can be configured in your Declarative Pipeline’s options section, as below:
1 options {
2 preserveStashes() // <1>
3 // or
4 preserveStashes(buildCount: 5) // <2>
5 }
<1> The default number of runs to preserve is 1, just the most recent completed build. <2> If a number
for buildCount outside of the range of 1 to 50 is specified, the Pipeline will fail with a validation
error.
When a Pipeline completes, it will check to see if any previously completed runs should have their
stash artifacts cleared.
Using Speed/Durability Settings To Reduce Disk I/O Needs
One of the main bottlenecks in Pipeline is that it writes transient data to disk frequently so that
running pipelines can handle an unexpected Jenkins restart or system crash. This durability is useful
for many users but its performance cost can be a problem.
Pipeline now includes features to let users improve performance by reducing how much data is
written to disk and how often it is written – at a small cost to durability. In some special cases, users
may not be able to resume or visualize running Pipelines if Jenkins shuts down suddenly without
getting a chance to write data.
Because these settings include a trade-off of speed vs. durability, they are initially opt-in. To enable
performance-optimized modes, users need to explicitly set a Speed/Durability Setting for Pipelines.
If no explicit choice is made, pipelines currently default to the “maximum durability” setting and
write to disk as they have in the past. There are some I/O optimizations to this mode included in the
same plugin releases, but the benefits are much smaller.
These settings help most for Pipelines that keep large amounts of data in local variables; some common examples:

• A readFile step with a large XML/JSON file, or using configuration information from parsing such a file with one of the Utility Steps⁷⁶.
• A “summary” object containing data from many branches (logs, results, or statistics). Often this is visible because you’ll be adding to it often via add/append or Map.put() operations.
• Large arrays of data or Maps of configuration information.
Atomic writes: All settings except “maximum durability” currently avoid atomic writes. This means that if the operating system running Jenkins fails, data that is buffered for writing to disk will not be flushed and will be lost. This is quite rare, but can happen as a result of container or
virtualization operations that halt the operating system or disconnect storage. Usually this data is
flushed pretty quickly to disk, so the window for data loss is brief. On Linux this flush-to-disk can
be forced by running ‘sync’. In some rare cases this can also result in a build that cannot be loaded.
Keep the maximum-durability setting where losing run data would be unacceptable, for example when:

• you have a pipeline that modifies the state of critical infrastructure
• you do a production deployment

For most other pipelines, the performance-optimized settings can make a huge difference.
• In general try to fit the tool to the job. Consider writing short Shell/Batch/Groovy/Python
scripts when running a complex process using a build agent. Good examples include processing
data, communicating interactively with REST APIs, and parsing/templating larger XML or
JSON files. The sh and bat steps are helpful to invoke these, especially with returnStdout:
true to return the output from this script and save it as a variable (Scripted Pipeline).
• Use the latest versions of the Pipeline plugins and Script Security, if applicable. These include
regular performance improvements.
• Try to simplify Pipeline code by reducing the number of steps run and using simpler Groovy
code for Scripted Pipelines.
• Consolidate sequential steps of the same type if you can, for example by using one Shell step
to invoke a helper script rather than running many steps.
• Try to limit the amount of data written to logs by Pipelines. If you are writing several MB of
log data, such as from a build tool, consider instead writing this to an external file, compressing
it, and archiving it as a build artifact.
• When using Jenkins with more than 6 GB of heap use the
suggested garbage collection tuning options⁷⁷ to minimize garbage collection pause times and
overhead.
• Use @NonCPS-annotated functions when you need more complex Groovy processing. This still runs on masters, so be aware of the complexity of the work, but it is much faster than native Pipeline code because it doesn’t provide durability and uses a faster execution model. Still, be mindful of the CPU cost and offload to executors when the cost becomes too high.
• @NonCPS functions can use a much broader subset of the Groovy language, such as iterators and functional features, which makes them more terse and fast to write.
• @NonCPS functions should not use Pipeline steps internally; however, you can store the result of a Pipeline step to a variable and use it as the input to a @NonCPS function. Gotcha: it’s not guaranteed that use of a step will generate an error (there is an open RFE to implement that), but you should not rely on that behavior. You may see improper handling of exceptions.
• While normal Pipeline is restricted to serializable local variables, @NonCPS functions can use more complex, nonserializable types internally (for example regex matchers, etc). Parameters and return types should still be Serializable, however. Gotcha: improper usages are not guaranteed to raise an error with normal Pipeline (optimizations may mask the issue), but it is unsafe to rely on this behavior.
• General gotcha: when running @NonCPS functions, the actual error can sometimes be swallowed by Pipeline, creating a confusing error message. Combat this by using a try/catch block and potentially using an echo to print the error message as plain text in the catch block.
• The Pipeline DSL is not designed for arbitrary networking and computation tasks - it is intended for CI/CD scripting.

⁷⁷https://jenkins.io/blog/2016/11/21/gc-tuning/
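A hedged sketch of the @NonCPS pattern (the file path and regex are illustrative only):

@NonCPS // plain Groovy execution: no Pipeline steps allowed inside
def extractTestNames(String junitXml) {
    // Iterators and (nonserializable) regex matchers are fine here
    def names = []
    def m = (junitXml =~ /name="([^"]+)"/)
    while (m.find()) {
        names << m.group(1)
    }
    return names.join(', ')   // return something Serializable
}

node {
    // Run the step first, store its Serializable result...
    def xml = readFile 'target/surefire-reports/TEST-example.xml'
    // ...then hand it to the @NonCPS function
    echo "Tests: ${extractTestNames(xml)}"
}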
Extending with Shared Libraries
As Pipeline is adopted for more and more projects in an organization, common patterns are likely to
emerge. Oftentimes it is useful to share parts of Pipelines between various projects to reduce redundancies and keep code “DRY” (Don’t Repeat Yourself, https://en.wikipedia.org/wiki/Don%27t_repeat_yourself).
Pipeline has support for creating “Shared Libraries” which can be defined in external source control
repositories and loaded into existing Pipelines.
Directory structure
The directory structure of a Shared Library repository is as follows:
1 (root)
2 +- src # Groovy source files
3 | +- org
4 | +- foo
5 | +- Bar.groovy # for org.foo.Bar class
6 +- vars
7 | +- foo.groovy # for global 'foo' variable
8 | +- foo.txt # help for 'foo' variable
9 +- resources # resource files (external libraries only)
10 | +- org
11 | +- foo
12 | +- bar.json # static helper data for org.foo.Bar
The src directory should look like standard Java source directory structure. This directory is added
to the classpath when executing Pipelines.
The vars directory hosts script files that are exposed as a variable in Pipelines. The name of the
file is the name of the variable in the Pipeline. So if you had a file called vars/log.groovy with a
function like def info(message)... in it, you can access this function like log.info "hello world"
in the Pipeline. You can put as many functions as you like inside this file. Read on below for more
examples and options.
The basename of each .groovy file should be a Groovy (~ Java) identifier, conventionally camelCased. The matching .txt, if present, can contain documentation, processed through the system’s configured markup formatter (https://wiki.jenkins.io/display/JENKINS/Markup+formatting), so it may really be HTML, Markdown, etc., though the .txt extension is required. This documentation will only be visible on the Global Variable Reference pages (https://jenkins.io/doc/book/pipeline/getting-started/#global-variable-reference) that are accessed from the navigation sidebar of Pipeline jobs that import the shared library. In addition, those jobs must run successfully once before the shared library documentation will be generated.
The Groovy source files in these directories get the same “CPS transformation” as in Scripted
Pipeline.
A resources directory allows the libraryResource step to be used from an external library to load
associated non-Groovy files. Currently this feature is not supported for internal libraries.
Other directories under the root are reserved for future enhancements.
Since these libraries will be globally usable, any Pipeline in the system can utilize functionality
implemented in these libraries.
These libraries are considered “trusted”: they can run any methods in Java, Groovy, Jenkins internal
APIs, Jenkins plugins, or third-party libraries. This allows you to define libraries which encapsulate
individually unsafe APIs in a higher-level wrapper safe for use from any Pipeline. Beware that
anyone able to push commits to this SCM repository could obtain unlimited access to Jenkins.
You need the Overall/RunScripts permission to configure these libraries (normally this will be
granted to Jenkins administrators).
Using libraries
Shared Libraries marked Load implicitly allow Pipelines to immediately use classes or global variables defined by any such libraries. To access other shared libraries, the Jenkinsfile needs to use the @Library annotation, specifying the library’s name:

(Image: Configuring a Global Pipeline Library)
1 @Library('my-shared-library') _
2 /* Using a version specifier, such as branch, tag, etc */
3 @Library('[email protected]') _
4 /* Accessing multiple libraries with one statement */
5 @Library(['my-shared-library', 'otherlib@abc1234']) _
The annotation can be anywhere in the script where an annotation is permitted by Groovy. When
referring to class libraries (with src/ directories), conventionally the annotation goes on an import
statement:
1 @Library('somelib')
2 import com.mycorp.pipeline.somelib.UsefulClass
TIP: For Shared Libraries which only define Global Variables (vars/), or a Jenkinsfile which only needs a Global Variable, the annotation⁷⁸ pattern @Library('my-shared-library') _ may be useful for keeping code concise. In essence, instead of annotating an unnecessary import statement, the symbol _ is annotated.

It is not recommended to import a global variable/function, since this will force the compiler to interpret fields and methods as static even if they were intended to be instance. The Groovy compiler in this case can produce confusing error messages.
Libraries are resolved and loaded during compilation of the script, before it starts executing. This
allows the Groovy compiler to understand the meaning of symbols used in static type checking, and
permits them to be used in type declarations in the script, for example:
1 @Library('somelib')
2 import com.mycorp.pipeline.somelib.Helper
3
4 int useSomeLib(Helper helper) {
5 helper.prepare()
6 return helper.count()
7 }
8
9 echo useSomeLib(new Helper('some text'))
Scripted Pipelines can instead load a library dynamically at runtime using the library step:

1 library 'my-shared-library'
Thereafter, any global variables from that library will be accessible to the script.
Using classes from the src/ directory is also possible, but trickier. Whereas the @Library annotation
prepares the “classpath” of the script prior to compilation, by the time a library step is encountered
⁷⁸http://groovy-lang.org/objectorientation.html#_annotation
the script has already been compiled. Therefore you cannot import or otherwise “statically” refer to
types from the library.
However you may use library classes dynamically (without type checking), accessing them by fully-
qualified name from the return value of the library step. static methods can be invoked using a
Java-like syntax:
1 library('my-shared-library').com.mycorp.pipeline.Utils.someStaticMethod()
You can also access static fields, and call constructors as if they were static methods named new:
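The original example was lost here; a hedged sketch (Utils and its members are hypothetical names):

def pkg = library('my-shared-library').com.mycorp.pipeline
echo pkg.Utils.someStaticField            // read a static field
def util = pkg.Utils.new('some argument') // call a constructor via `new`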
Library versions
The “Default version” for a configured Shared Library is used when “Load implicitly” is checked, or if a Pipeline references the library only by name, for example @Library('my-shared-library') _. If a “Default version” is not defined, the Pipeline must specify a version, for example @Library('my-shared-library@master') _.

The version can also be specified when loading the library with the library step:

1 library 'my-shared-library@master'
Since this is a regular step, that version could be computed rather than a constant as with the
annotation; for example:
1 library "my-shared-library@$BRANCH_NAME"
would load a library using the same SCM branch as the multibranch Jenkinsfile. As another
example, you could pick a library by parameter:
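For example, assuming a string parameter named LIB_VERSION is defined for the job:

library "my-shared-library@${params.LIB_VERSION}"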
Note that the library step may not be used to override the version of an implicitly loaded library.
It is already loaded by the time the script starts, and a library of a given name may not be loaded
twice.
Retrieval Method
The best way to specify the SCM is using an SCM plugin which has been specifically updated to
support a new API for checking out an arbitrary named version (Modern SCM option). As of this
writing, the latest versions of the Git and Subversion plugins support this mode.
(Image: Configuring a ‘Modern SCM’ for a Pipeline Library)
Legacy SCM
SCM plugins which have not yet been updated to support the newer features required by Shared Libraries may still be used via the Legacy SCM option. In this case, include ${library.yourlibrarynamehere.version} wherever a branch/tag/ref may be configured for that particular SCM plugin. This ensures that during checkout of the library’s source code, the SCM plugin will expand this variable to checkout the appropriate version of the library.
(Image: Configuring a ‘Legacy SCM’ for a Pipeline Library)
Dynamic retrieval
If you only specify a library name (optionally with version after @) in the library step, Jenkins will
look for a preconfigured library of that name. (Or in the case of a github.com/owner/repo automatic
library it will load that.)
But you may also specify the retrieval method dynamically, in which case there is no need for the
library to have been predefined in Jenkins. Here is an example:
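A sketch using a Git source (the repository URL and credentials ID are placeholders):

library identifier: 'custom-lib@master', retriever: modernSCM(
    [$class: 'GitSCMSource',
     remote: 'git@git.mycorp.com:my-jenkins-utils.git',
     credentialsId: 'my-private-key'])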
It is best to refer to Pipeline Syntax for the precise syntax for your SCM.
Note that the library version must be specified in these cases.
Writing libraries
At the base level, any valid Groovy code⁷⁹ is okay for use. Different data structures, utility methods,
etc, such as:
1 // src/org/foo/Point.groovy
2 package org.foo
3
4 // point in 3D space
5 class Point {
6 float x,y,z
7 }
Accessing steps
Library classes cannot directly call steps such as sh or git. They can however implement methods,
outside of the scope of an enclosing class, which in turn invoke Pipeline steps, for example:
1 // src/org/foo/Zot.groovy
2 package org.foo
3
4 def checkOutFrom(repo) {
5 git url: "[email protected]:jenkinsci/${repo}"
6 }
7
8 return this
This approach has limitations; for example, it prevents the declaration of a superclass.
Alternately, a set of steps can be passed explicitly using this to a library class, in a constructor, or
just one method:
⁷⁹http://groovy-lang.org/syntax.html
1 package org.foo
2 class Utilities implements Serializable {
3 def steps
4 Utilities(steps) {this.steps = steps}
5 def mvn(args) {
6 steps.sh "${steps.tool 'Maven'}/bin/mvn -o ${args}"
7 }
8 }
When saving state on classes, such as above, the class must implement the Serializable interface.
This ensures that a Pipeline using the class, as seen in the example below, can properly suspend and
resume in Jenkins.
If the library needs to access global variables, such as env, those should be explicitly passed into the
library classes, or methods, in a similar manner.
Instead of passing numerous variables from the Scripted Pipeline into a library one by one, the whole script object can be passed in, giving the library access to its steps, environment, and tools:
1 package org.foo
2 class Utilities {
3 static def mvn(script, args) {
4 script.sh "${script.tool 'Maven'}/bin/mvn -s ${script.env.HOME}/jenkins.xml -o ${args}"
5 }
6 }
The above example shows the script being passed in to one static method, invoked from a Scripted
Pipeline as follows:
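A sketch of that invocation:

@Library('utils') import org.foo.Utilities

Utilities.mvn(this, 'clean package')

Global variables defined in the vars directory work in a similar spirit; for example, a vars/log.groovy file (shown below, together with a Jenkinsfile using it) can define simple logging helpers: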
1 def info(message) {
2 echo "INFO: ${message}"
3 }
4
5 def warning(message) {
6 echo "WARNING: ${message}"
7 }
.vars/log.groovy
1 @Library('utils') _
2
3 log.info 'Starting'
4 log.warning 'Nothing to do!'
.Jenkinsfile
Note that if you wish to use a field in your global variable to hold some state, annotate it as such:

1 @groovy.transform.Field
2 def yourField = [:]
3
4 def yourFunction....
Declarative Pipeline does not allow method calls on objects outside “script” blocks (JENKINS-42360⁸⁰). The method calls above would need to be put inside a script directive:
1 @Library('utils') _
2
3 pipeline {
4 agent none
5 stage ('Example') {
6 steps {
7 // log.info 'Starting' // <1>
8 script { // <2>
9 log.info 'Starting'
10 log.warning 'Nothing to do!'
11 }
12 }
13 }
14 }
.Jenkinsfile
<1> This method call would fail because it is outside a script directive. <2> script directive required to access global variables in Declarative Pipeline.
⁸⁰https://issues.jenkins-ci.org/browse/JENKINS-42360
WARNING: Avoid preserving state in global variables. Avoid defining global variables with methods that interact or preserve state; use a static class or instantiate a local variable of a class instead.
1 // vars/sayHello.groovy
2 def call(String name = 'human') {
3 // Any valid steps can be called from this code, just like in other
4 // Scripted Pipeline
5 echo "Hello, ${name}."
6 }
The Pipeline would then be able to reference and invoke this variable:
1 sayHello 'Joe'
2 sayHello() /* invoke with default arguments */
If called with a block, the call method will receive a Closure⁸¹. The type should be defined explicitly
to clarify the intent of the step, for example:
1 // vars/windows.groovy
2 def call(Closure body) {
3 node('windows') {
4 body()
5 }
6 }
The Pipeline can then use this variable like any built-in step which accepts a block:
⁸¹http://groovy-lang.org/closures.html
1 windows {
2 bat "cmd /?"
3 }
1 // vars/buildPlugin.groovy
2 def call(Map config) {
3 node {
4 git url: "https://github.com/jenkinsci/${config.name}-plugin.git"
5 sh 'mvn install'
6 mail to: '...', subject: "${config.name} plugin build", body: '...'
7 }
8 }
Assuming the script has either been loaded as a Global Shared Library or as a Folder-level Shared
Library the resulting Jenkinsfile will be dramatically simpler:
buildPlugin name: 'git'
There is also a “builder pattern” trick using Groovy’s Closure.DELEGATE_FIRST, which permits
Jenkinsfile to look slightly more like a configuration file than a program, but this is more complex
and error-prone and is not recommended.
⁸²https://search.maven.org/
Third-party Java libraries, typically found in Maven Central⁸², can be grabbed from within trusted library code with the @Grab annotation, for example:
1 @Grab('org.apache.commons:commons-math3:3.4.1')
2 import org.apache.commons.math3.primes.Primes
3 void parallelize(int count) {
4 if (!Primes.isPrime(count)) {
5 error "${count} was not prime"
6 }
7 // …
8 }
Loading resources
External libraries may load adjunct files from a resources/ directory using the libraryResource
step. The argument is a relative pathname, akin to Java resource loading:
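A sketch (the resource path is a placeholder):

def request = libraryResource 'com/mycorp/pipeline/somelib/request.json'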
The file is loaded as a string, suitable for passing to certain APIs or saving to a workspace using
writeFile.
It is advisable to use a unique package structure so you do not accidentally conflict with another library.
1 // vars/evenOrOdd.groovy
2 def call(int buildNumber) {
3 if (buildNumber % 2 == 0) {
4 pipeline {
5 agent any
6 stages {
7 stage('Even Stage') {
8 steps {
9 echo "The build number is even"
10 }
11 }
12 }
13 }
14 } else {
15 pipeline {
16 agent any
17 stages {
18 stage('Odd Stage') {
19 steps {
20 echo "The build number is odd"
21 }
22 }
23 }
24 }
25 }
26 }
1 // Jenkinsfile
2 @Library('my-shared-library') _
3
4 evenOrOdd(currentBuild.getNumber())
Only entire pipelines can be defined in shared libraries as of this time. This can only be done in
vars/*.groovy, and only in a call method. Only one Declarative Pipeline can be executed in a single
build, and if you attempt to execute a second one, your build will fail as a result.
Pipeline Syntax
This section builds on the information introduced in Getting started with Pipeline and should be
treated solely as a reference. For more information on how to use Pipeline syntax in practical
examples, refer to the Using a Jenkinsfile section of this chapter. As of version 2.5 of the Pipeline
plugin, Pipeline supports two discrete syntaxes which are detailed below. For the pros and cons of
each, see the Syntax Comparison section.
As discussed at the start of this chapter, the most fundamental part of a Pipeline is the “step”.
Basically, steps tell Jenkins what to do and serve as the basic building block for both Declarative
and Scripted Pipeline syntax.
For an overview of available steps, please refer to the Pipeline Steps reference which contains a
comprehensive list of steps built into Pipeline as well as steps provided by plugins.
Declarative Pipeline
Declarative Pipeline is a relatively recent addition to Jenkins Pipeline (version 2.5 of the “Pipeline plugin” introduced support for Declarative Pipeline syntax) which presents a more simplified and opinionated syntax on top of the Pipeline sub-systems.
All valid Declarative Pipelines must be enclosed within a pipeline block, for example:
1 pipeline {
2 /* insert Declarative Pipeline here */
3 }
The basic statements and expressions which are valid in Declarative Pipeline follow the same rules as Groovy’s syntax⁸³ with the following exceptions:

• The top-level of the Pipeline must be a block, specifically: pipeline { }
• No semicolons as statement separators. Each statement has to be on its own line.
• Blocks must only consist of Sections, Directives, Steps, or assignment statements.
• A property reference statement is treated as a no-argument method invocation. So, for example, input is treated as input().
You can use the Declarative Directive Generator to help you get started with configuring the
directives and sections in your Declarative Pipeline.
Sections
Sections in Declarative Pipeline typically contain one or more Directives or Steps.
agent
The agent section specifies where the entire Pipeline, or a specific stage, will execute in the Jenkins
environment depending on where the agent section is placed. The section must be defined at the
top-level inside the pipeline block, but stage-level usage is optional.
Required: Yes
Parameters: Described below
Allowed: In the top-level pipeline block and each stage block.
Parameters
In order to support the wide variety of use-cases Pipeline authors may have, the agent section
supports a few different types of parameters. These parameters can be applied at the top-level of
the pipeline block, or within each stage directive.
any:: Execute the Pipeline, or stage, on any available agent. For example: agent any
none:: When applied at the top-level of the pipeline block no global agent will be allocated for the
entire Pipeline run and each stage section will need to contain its own agent section. For example:
agent none
label:: Execute the Pipeline, or stage, on an agent available in the Jenkins environment with the
provided label. For example: agent { label 'my-defined-label' }
node:: agent { node { label 'labelName' } } behaves the same as agent { label 'labelName'
}, but node allows for additional options (such as customWorkspace).
1 agent {
2 docker {
3 image 'maven:3-alpine'
4 label 'my-defined-label'
5 args '-v /tmp:/tmp'
6 }
7 }
1 agent {
2 docker {
3 image 'myregistry.com/node'
4 label 'my-defined-label'
5 registryUrl 'https://myregistry.com/'
6 registryCredentialsId 'myPredefinedCredentialsInJenkins'
7 }
8 }
docker:: Execute the Pipeline, or stage, with the given container which will be dynamically provisioned on a node pre-configured to accept Docker-based Pipelines, or on a node matching the optionally defined label parameter. docker also optionally accepts an args parameter which may contain arguments to pass directly to a docker run invocation, and an alwaysPull option, which will force a docker pull even if the image name is already present. For example: agent { docker 'maven:3-alpine' }, or the first docker block shown above. docker also optionally accepts registryUrl and registryCredentialsId parameters which help to specify the Docker Registry to use and its credentials, as in the second docker block shown above.
1 agent {
2 // Equivalent to "docker build -f Dockerfile.build --build-arg version=1.0.2 ./build/"
3 dockerfile {
4 filename 'Dockerfile.build'
5 dir 'build'
6 label 'my-defined-label'
7 additionalBuildArgs '--build-arg version=1.0.2'
8 args '-v /tmp:/tmp'
9 }
10 }
1 agent {
2 dockerfile {
3 filename 'Dockerfile.build'
4 dir 'build'
5 label 'my-defined-label'
6 registryUrl 'https://myregistry.com/'
7 registryCredentialsId 'myPredefinedCredentialsInJenkins'
8 }
9 }
dockerfile:: Execute the Pipeline, or stage, with a container built from a Dockerfile contained in the source repository. In order to use this option, the Jenkinsfile must be loaded from either a Multibranch Pipeline, or a “Pipeline from SCM.” Conventionally this is the Dockerfile in the root of the source repository: agent { dockerfile true }. If building a Dockerfile in another directory, use the dir option: agent { dockerfile { dir 'someSubDir' } }. If your Dockerfile has another name, you can specify the file name with the filename option. You can pass additional arguments to the docker build ... command with the additionalBuildArgs option, like agent { dockerfile { additionalBuildArgs '--build-arg foo=bar' } }. The first dockerfile block above shows a repository with the file build/Dockerfile.build, expecting a build argument version. dockerfile also optionally accepts registryUrl and registryCredentialsId parameters which help to specify the Docker Registry to use and its credentials, as in the second dockerfile block above.
Common Options
These are a few options that can be applied to two or more agent implementations. They are not required unless explicitly stated.
label:: A string. The label on which to run the Pipeline or individual stage. This option is valid for node, docker and dockerfile, and is required for node.
1 agent {
2 node {
3 label 'my-defined-label'
4 customWorkspace '/some/other/path'
5 }
6 }
customWorkspace:: A string. Run the Pipeline or individual stage this agent is applied to within this custom workspace, rather than the default. It can be either a relative path, in which case the custom workspace will be under the workspace root on the node, or an absolute path, as shown in the example above. This option is valid for node, docker and dockerfile.
reuseNode:: A boolean, false by default. If true, run the container on the node specified at the top-level of the Pipeline, in the same workspace, rather than on a new node entirely. This option is valid for docker and dockerfile, and only has an effect when used on an agent for an individual stage.
args:: A string. Runtime arguments to pass to docker run. This option is valid for docker and dockerfile.
Example
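The rendered listing was lost here; a sketch matching the callout below:

pipeline {
    agent { docker 'maven:3-alpine' } // <1>
    stages {
        stage('Example Build') {
            steps {
                sh 'mvn -B clean verify'
            }
        }
    }
}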
<1> Execute all the steps defined in this Pipeline within a newly created container of the given name and tag (maven:3-alpine).
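A sketch of the second example, matching the callouts below:

pipeline {
    agent none // <1>
    stages {
        stage('Example Build') {
            agent { docker 'maven:3-alpine' } // <2>
            steps {
                echo 'Hello, Maven'
                sh 'mvn --version'
            }
        }
        stage('Example Test') {
            agent { docker 'openjdk:8-jre' } // <3>
            steps {
                echo 'Hello, JDK'
                sh 'java -version'
            }
        }
    }
}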
<1> Defining agent none at the top-level of the Pipeline ensures that an Executor will not be assigned unnecessarily. Using agent none also forces each stage section to contain its own agent section. <2> Execute the steps in this stage in a newly created container using this image. <3> Execute the steps in this stage in a newly created container using a different image from the previous stage.
post
The post section defines one or more additional steps that are run upon the completion of a Pipeline’s
or stage’s run (depending on the location of the post section within the Pipeline). post can support
any of of the following post-condition blocks: always, changed, fixed, regression, aborted, failure,
success, unstable, unsuccessful, and cleanup. These condition blocks allow the execution of steps
inside each condition depending on the completion status of the Pipeline or stage. The condition
blocks are executed in the order shown below.
Required: No
Parameters: None
Allowed: In the top-level pipeline block and each stage block.
Conditions
always:: Run the steps in the post section regardless of the completion status of the Pipeline’s or stage’s run.
changed:: Only run the steps in post if the current Pipeline’s or stage’s run has a different completion status from its previous run.
fixed:: Only run the steps in post if the current Pipeline’s or stage’s run is successful and the previous run failed or was unstable.
regression:: Only run the steps in post if the current Pipeline’s or stage’s run’s status is failure, unstable, or aborted and the previous run was successful.
aborted:: Only run the steps in post if the current Pipeline’s or stage’s run has an “aborted” status, usually due to the Pipeline being manually aborted. This is typically denoted by gray in the web UI.
failure:: Only run the steps in post if the current Pipeline’s or stage’s run has a “failed” status, typically denoted by red in the web UI.
success:: Only run the steps in post if the current Pipeline’s or stage’s run has a “success” status, typically denoted by blue or green in the web UI.
unstable:: Only run the steps in post if the current Pipeline’s or stage’s run has an “unstable” status, usually caused by test failures, code violations, etc. This is typically denoted by yellow in the web UI.
unsuccessful:: Only run the steps in post if the current Pipeline’s or stage’s run does not have a “success” status. This is typically denoted in the web UI depending on the status previously mentioned.
cleanup:: Run the steps in this post condition after every other post condition has been evaluated, regardless of the Pipeline or stage’s status.
Example
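The rendered listing was lost here; a sketch matching the callouts below:

pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                echo 'Hello World'
            }
        }
    }
    post { // <1>
        always { // <2>
            echo 'I will always say Hello again!'
        }
    }
}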
<1> Conventionally, the post section should be placed at the end of the Pipeline. <2> Post-condition blocks contain steps the same as the steps section.
stages
Containing a sequence of one or more stage directives, the stages section is where the bulk of the “work” described by a Pipeline will be located. At a minimum it is recommended that stages contain at least one stage directive for each discrete part of the continuous delivery process, such as Build, Test, and Deploy.
Required: Yes
Parameters: None
Allowed: Only once, inside the pipeline block.
Example
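The rendered listing was lost here; a sketch matching the callout below:

pipeline {
    agent any
    stages { // <1>
        stage('Example') {
            steps {
                echo 'Hello World'
            }
        }
    }
}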
<1> The stages section will typically follow directives such as agent, options, etc.
steps
The steps section defines a series of one or more steps to be executed in a given stage directive.
Required: Yes
Parameters: None
Allowed: Inside each stage block.
Example
The steps section must contain one or more steps.
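A minimal sketch of a steps section containing two steps (the shell command is illustrative and assumes a Unix agent):

pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                echo 'Hello World'
                sh 'date'
            }
        }
    }
}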
Directives
environment
The environment directive specifies a sequence of key-value pairs which will be defined as environment variables for all steps, or stage-specific steps, depending on where the environment directive is located within the Pipeline.
This directive supports a special helper method credentials() which can be used to access pre-defined Credentials by their identifier in the Jenkins environment. For Credentials which are of type "Secret Text", the credentials() method will ensure that the specified environment variable contains the Secret Text contents. For Credentials which are of type "Standard username and password", the specified environment variable will be set to username:password and two additional environment variables will automatically be defined: MYVARNAME_USR and MYVARNAME_PSW, respectively.
Required: No
Parameters: None
Allowed: Inside the pipeline block, or within stage directives.
Example
An environment directive used in the top-level pipeline block applies to all steps within the Pipeline, while an environment directive defined within a stage only applies the given environment variables to steps within that stage. In either location, the environment block can use the credentials() helper method described above to access pre-defined Credentials by their identifier in the Jenkins environment.
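A minimal sketch, assuming a "Secret Text" credential with the identifier my-predefined-secret-text exists in Jenkins:

pipeline {
    agent any
    environment {
        // Applies to all steps in the Pipeline
        CC = 'clang'
    }
    stages {
        stage('Example') {
            environment {
                // Applies only to steps within this stage
                AN_ACCESS_KEY = credentials('my-predefined-secret-text')
            }
            steps {
                sh 'printenv'
            }
        }
    }
}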
options
The options directive allows configuring Pipeline-specific options from within the Pipeline itself.
Pipeline provides a number of these options, such as buildDiscarder, but they may also be provided
by plugins, such as timestamps.
Available Options
buildDiscarder: Persist artifacts and console output for the specified number of recent Pipeline runs. For example: options { buildDiscarder(logRotator(numToKeepStr: '1')) }
checkoutToSubdirectory: Perform the automatic source control checkout in a subdirectory of the workspace. For example: options { checkoutToSubdirectory('foo') }
disableConcurrentBuilds: Disallow concurrent executions of the Pipeline. Can be useful for preventing simultaneous accesses to shared resources, etc. For example: options { disableConcurrentBuilds() }
newContainerPerStage: Used with docker or dockerfile top-level agent. When specified, each stage will run in a new container instance on the same node, rather than all stages running in the same container instance.
overrideIndexTriggers: Allows overriding the default treatment of branch indexing triggers. If branch indexing triggers are disabled at the multibranch or organization level, options { overrideIndexTriggers(true) } will enable them for this job only. Otherwise, options { overrideIndexTriggers(false) } will disable branch indexing triggers for this job only.
preserveStashes: Preserve stashes from completed builds, for use with stage restarting. For example: options { preserveStashes() } to preserve the stashes from the most recent completed build, or options { preserveStashes(buildCount: 5) } to preserve the stashes from the five most recent completed builds.
quietPeriod: Set the quiet period, in seconds, for the Pipeline, overriding the global default. For example: options { quietPeriod(30) }
retry: On failure, retry the entire Pipeline the specified number of times. For example: options { retry(3) }
skipDefaultCheckout: Skip checking out code from source control by default in the agent directive. For example: options { skipDefaultCheckout() }
skipStagesAfterUnstable: Skip stages once the build status has gone to UNSTABLE. For example: options { skipStagesAfterUnstable() }
timeout: Set a timeout period for the Pipeline run, after which Jenkins should abort the Pipeline. For example: options { timeout(time: 1, unit: 'HOURS') }
timestamps: Prepend all console output generated by the Pipeline run with the time at which the line was emitted. For example: options { timestamps() }
parallelsAlwaysFailFast: Set failfast true for all subsequent parallel stages in the pipeline. For example: options { parallelsAlwaysFailFast() }
Example
The example below specifies a global execution timeout of one hour, after which Jenkins will abort the Pipeline run.
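A minimal sketch:

pipeline {
    agent any
    options {
        // Abort the whole Pipeline run after one hour
        timeout(time: 1, unit: 'HOURS')
    }
    stages {
        stage('Example') {
            steps {
                echo 'Hello World'
            }
        }
    }
}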
stage options
The options directive for a stage is similar to the options directive at the root of the Pipeline.
However, the stage-level options can only contain steps like retry, timeout, or timestamps, or
Declarative options that are relevant to a stage, like skipDefaultCheckout.
Inside a stage, the steps in the options directive are invoked before entering the agent or checking
any when conditions.
skipDefaultCheckout: Skip checking out code from source control by default in the agent directive. For example: options { skipDefaultCheckout() }
timeout: Set a timeout period for this stage, after which Jenkins should abort the stage. For example: options { timeout(time: 1, unit: 'HOURS') }
retry: On failure, retry this stage the specified number of times. For example: options { retry(3) }
timestamps: Prepend all console output generated during this stage with the time at which the line was emitted. For example: options { timestamps() }
Example
The example below specifies an execution timeout of one hour for the Example stage, after which Jenkins will abort the Pipeline run.
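A minimal sketch with the timeout scoped to a single stage:

pipeline {
    agent any
    stages {
        stage('Example') {
            options {
                // Applies only to this stage, not the whole Pipeline
                timeout(time: 1, unit: 'HOURS')
            }
            steps {
                echo 'Hello World'
            }
        }
    }
}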
parameters
The parameters directive provides a list of parameters which a user should provide when triggering the Pipeline. The values for these user-specified parameters are made available to Pipeline steps via the params object; see the example below for its specific usage.
Required: No
Parameters: None
Allowed: Only once, inside the pipeline block.
Available Parameters
text: A text parameter, which can contain multiple lines. For example: parameters { text(name: 'DEPLOY_TEXT', defaultValue: 'One\nTwo\nThree\n', description: '') }
file: A file parameter, which specifies a file to be submitted by the user when scheduling a build. For example: parameters { file(name: 'FILE', description: 'Some file to upload') }
password: A password parameter. For example: parameters { password(name: 'PASSWORD', defaultValue: 'SECRET', description: 'A secret password') }
Example
pipeline {
    agent any
    parameters {
        string(name: 'PERSON', defaultValue: 'Mr Jenkins', description: 'Who should I say hello to?')
        text(name: 'BIOGRAPHY', defaultValue: '', description: 'Enter some information about the person')
    }
    stages {
        stage('Example') {
            steps {
                // User-supplied values are read through the params object
                echo "Hello ${params.PERSON}"
            }
        }
    }
}
triggers
The triggers directive defines the automated ways in which the Pipeline should be re-triggered.
For Pipelines which are integrated with a source such as GitHub or BitBucket, triggers may not
be necessary as webhooks-based integration will likely already be present. The triggers currently
available are cron, pollSCM and upstream.
Required: No
Parameters: None
Allowed: Only once, inside the pipeline block.
cron: Accepts a cron-style string to define a regular interval at which the Pipeline should be re-triggered. For example: triggers { cron('H */4 * * 1-5') }
pollSCM: Accepts a cron-style string to define a regular interval at which Jenkins should check for new source changes. If new changes exist, the Pipeline will be re-triggered. For example: triggers { pollSCM('H */4 * * 1-5') }
upstream: Accepts a comma-separated string of jobs and a threshold. When any job in the string finishes with the minimum threshold, the Pipeline will be re-triggered. For example: triggers { upstream(upstreamProjects: 'job1,job2', threshold: hudson.model.Result.SUCCESS) }
Example
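A minimal sketch using the cron trigger described above:

pipeline {
    agent any
    triggers {
        // Every four hours, Monday to Friday, at a hashed minute
        cron('H */4 * * 1-5')
    }
    stages {
        stage('Example') {
            steps {
                echo 'Hello World'
            }
        }
    }
}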
The Jenkins cron syntax follows the syntax of the cron utility⁸⁴ (with minor differences). Specifically, each line consists of 5 fields separated by TAB or whitespace:
MINUTE: Minutes within the hour (0–59)
HOUR: The hour of the day (0–23)
DOM: The day of the month (1–31)
MONTH: The month (1–12)
DOW: The day of the week (0–7), where 0 and 7 are Sunday
To specify multiple values for one field, the following operators are available, in order of precedence:
• * specifies all valid values
• M-N specifies a range of values
• M-N/X or */X steps by intervals of X through the specified range or the whole valid range
• A,B,...,Z enumerates multiple values
To allow periodically scheduled tasks to produce even load on the system, the symbol H (for "hash") should be used wherever possible. For example, using 0 0 * * * for a dozen daily jobs will cause a large spike at midnight. In contrast, using H H * * * would still execute each job once a day, but not all at the same time, making better use of limited resources.
The H symbol can be used with a range. For example, H H(0-7) * * * means some time between
12:00 AM (midnight) to 7:59 AM. You can also use step intervals with H, with or without ranges.
The H symbol can be thought of as a random value over a range, but it actually is a hash of the job
name, not a random function, so that the value remains stable for any given project.
Beware that for the day of month field, short cycles such as */3 or H/3 will not work consistently near the end of most months, due to variable month lengths. For example, */3 will run on the 1st, 4th, …31st days of a long month, then again the next day of the next month. Hashes are always chosen in the 1-28 range, so H/3 will produce a gap between runs of between 3 and 6 days at the end of a month. (Longer cycles will also have inconsistent lengths but the effect may be relatively less noticeable.)
Empty lines and lines that start with # will be ignored as comments.
In addition, @yearly, @annually, @monthly, @weekly, @daily, @midnight, and @hourly are supported
as convenient aliases. These use the hash system for automatic balancing. For example, @hourly is
the same as H * * * * and could mean at any time during the hour. @midnight actually means some
time between 12:00 AM and 2:59 AM.
Jenkins cron syntax examples:
• every fifteen minutes (perhaps at :07, :22, :37, :52): triggers { cron('H/15 * * * *') }
• every ten minutes in the first half of every hour (three times, perhaps at :04, :14, :24): triggers { cron('H(0-29)/10 * * * *') }
• once every two hours at 45 minutes past the hour, starting at 9:45 AM and finishing at 3:45 PM every weekday: triggers { cron('45 9-16/2 * * 1-5') }
• once in every two-hour slot between 9 AM and 5 PM every weekday (perhaps at 10:38 AM, 12:38 PM, 2:38 PM, 4:38 PM): triggers { cron('H H(9-16)/2 * * 1-5') }
• once a day on the 1st and 15th of every month except December: triggers { cron('H H 1,15 1-11 *') }
stage
The stage directive goes in the stages section and should contain a steps section, an optional agent section, or other stage-specific directives. Practically speaking, all of the real work done by a Pipeline will be wrapped in one or more stage directives.
Required: At least one
Parameters: One mandatory parameter, a string for the name of the stage.
Allowed: Inside the stages section.
Example
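A minimal sketch of a single named stage:

pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                echo 'Hello World'
            }
        }
    }
}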
tools
A section defining tools to auto-install and put on the PATH. This is ignored if agent none is specified.
Required: No
Parameters: None
Allowed: Inside the pipeline block or a stage block.
Supported Tools
The tool types supported out of the box are maven, jdk, and gradle.
Example
The tool name must be pre-configured in Jenkins under Manage Jenkins -> Global Tool Configuration.
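A minimal sketch, assuming a Maven installation named apache-maven-3.0.1 has been configured under Global Tool Configuration:

pipeline {
    agent any
    tools {
        // Auto-installs the named Maven and puts it on the PATH
        maven 'apache-maven-3.0.1'
    }
    stages {
        stage('Example') {
            steps {
                sh 'mvn --version'
            }
        }
    }
}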
input
The input directive on a stage allows you to prompt for input, using the input step⁸⁵. The stage will pause after any options have been applied, and before entering the stage's agent or evaluating its when condition. If the input is approved, the stage will then continue. Any parameters provided as part of the input submission will be available in the environment for the rest of the stage.
Configuration options
message: Required. This will be presented to the user when they go to submit the input.
id: An optional identifier for this input. Defaults to the stage name.
ok: Optional text for the "ok" button on the input form.
submitter: An optional comma-separated list of users or external group names who are allowed to submit this input. Defaults to allowing any user.
submitterParameter: An optional name of an environment variable to set with the submitter name, if present.
parameters: An optional list of parameters to prompt the submitter to provide. See the parameters directive for more information.
⁸⁵https://jenkins.io/doc/pipeline/steps/pipeline-input-step/#input-wait-for-interactive-input
Example
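A minimal sketch combining the configuration options above (the submitter names are illustrative):

pipeline {
    agent any
    stages {
        stage('Example') {
            input {
                message 'Should we continue?'
                ok 'Yes, we should.'
                submitter 'alice,bob'
                parameters {
                    string(name: 'PERSON', defaultValue: 'Mr Jenkins', description: 'Who should I say hello to?')
                }
            }
            steps {
                // Parameters from the input submission are available as environment variables
                echo "Hello, ${PERSON}, nice to meet you."
            }
        }
    }
}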
when
The when directive allows the Pipeline to determine whether the stage should be executed depending
on the given condition. The when directive must contain at least one condition. If the when directive
contains more than one condition, all the child conditions must return true for the stage to execute.
This is the same as if the child conditions were nested in an allOf condition (see the examples
below). If an anyOf condition is used, note that the condition skips remaining tests as soon as the
first “true” condition is found.
More complex conditional structures can be built using the nesting conditions: not, allOf, or anyOf.
Nesting conditions may be nested to any arbitrary depth.
Required: No
Parameters: None
Allowed: Inside a stage directive.
Built-in Conditions
branch: Execute the stage when the branch being built matches the branch pattern given. For example: when { branch 'master' }. Note that this only works on a multibranch Pipeline.
buildingTag: Execute the stage when the build is building a tag. Example: when { buildingTag() }
changelog: Execute the stage if the build's SCM changelog contains a given regular expression pattern. For example: when { changelog '.*^\\[DEPENDENCY\\] .+$' }
changeset: Execute the stage if the build's SCM changeset contains one or more files matching the given string or glob. Example: when { changeset "**/*.js" }. By default the path matching will be case-insensitive; this can be turned off with the caseSensitive parameter. For example: when { changeset glob: "ReadMe.*", caseSensitive: true }
changeRequest: Executes the stage if the current build is for a "change request" (a.k.a. Pull Request on GitHub and Bitbucket, Merge Request on GitLab, Change in Gerrit, etc.). When no parameters are passed the stage runs on every change request, for example: when { changeRequest() }. By adding a filter attribute with a parameter to the change request, the stage can be made to run only on matching change requests. Possible attributes are id, target, branch, fork, url, title, author, authorDisplayName, and authorEmail. Each of these corresponds to a CHANGE_* environment variable, for example: when { changeRequest target: 'master' }. The optional parameter comparator may be added after an attribute to specify how any patterns are evaluated for a match: EQUALS for a simple string comparison (the default), GLOB for an ANT-style path glob (same as for example changeset), or REGEXP for regular expression matching. Example: when { changeRequest authorEmail: "[\\w_-.]+@example.com", comparator: 'REGEXP' }
environment: Execute the stage when the specified environment variable is set to the given value. For example: when { environment name: 'DEPLOY_TO', value: 'production' }
equals: Execute the stage when the expected value is equal to the actual value. For example: when { equals expected: 2, actual: currentBuild.number }
expression: Execute the stage when the specified Groovy expression evaluates to true. For example: when { expression { return params.DEBUG_BUILD } } Note that when returning strings from your expressions they must be converted to booleans or return null to evaluate to false. Simply returning "0" or "false" will still evaluate to "true".
tag: Execute the stage if the TAG_NAME variable matches the given pattern. Example: when { tag "release-*" }. If an empty pattern is provided the stage will execute if the TAG_NAME variable exists (same as buildingTag()). The optional parameter comparator may be added after an attribute to specify how any patterns are evaluated for a match: EQUALS for a simple string comparison, GLOB (the default) for an ANT-style path glob (same as for example changeset), or REGEXP for regular expression matching. For example: when { tag pattern: "release-\\d+", comparator: "REGEXP" }
not: Execute the stage when the nested condition is false. Must contain one condition. For example: when { not { branch 'master' } }
allOf: Execute the stage when all of the nested conditions are true. Must contain at least one condition. For example: when { allOf { branch 'master'; environment name: 'DEPLOY_TO', value: 'production' } }
anyOf: Execute the stage when at least one of the nested conditions is true. Must contain at least one condition. For example: when { anyOf { branch 'master'; branch 'staging' } }
triggeredBy: Execute the stage when the current build has been triggered by the param given. For example: when { triggeredBy 'SCMTrigger' } or when { triggeredBy 'TimerTrigger' }
By default, the when condition for a stage will be evaluated after entering the agent for that stage,
if one is defined. However, this can be changed by specifying the beforeAgent option within the
when block. If beforeAgent is set to true, the when condition will be evaluated first, and the agent
will only be entered if the when condition evaluates to true.
By default, the when condition for a stage will be evaluated before the input, if one is defined.
However, this can be changed by specifying the beforeInput option within the when block. If
beforeInput is set to true, the when condition will be evaluated first, and the input will only be
entered if the when condition evaluates to true.
Examples
The when directive can be exercised with a single condition, multiple conditions, nested conditions (not, allOf, anyOf), expression conditions combined with nested conditions, and the beforeAgent, beforeInput, and triggeredBy options; two representative sketches follow.
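Two minimal sketches (branch names, labels, and stage names are illustrative): a single branch condition, and the same condition evaluated before the agent is entered:

pipeline {
    agent any
    stages {
        stage('Example Deploy') {
            when {
                branch 'production'
            }
            steps {
                echo 'Deploying'
            }
        }
    }
}

pipeline {
    agent none
    stages {
        stage('Example Deploy') {
            agent { label 'some-label' }
            when {
                // Evaluate the condition first; only enter the agent if it is true
                beforeAgent true
                branch 'production'
            }
            steps {
                echo 'Deploying'
            }
        }
    }
}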
Sequential Stages
Stages in Declarative Pipeline may declare a list of nested stages to be run within them in sequential
order. Note that a stage must have one and only one of steps, parallel, or stages, the last for
sequential stages. It is not possible to nest a parallel block within a stage directive if that stage
directive is nested within a parallel block itself. However, a stage directive within a parallel block
can use all other functionality of a stage, including agent, tools, when, etc.
Example
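A minimal sketch of a stage containing sequential nested stages (the labels are illustrative):

pipeline {
    agent none
    stages {
        stage('Non-Sequential Stage') {
            agent { label 'linux' }
            steps {
                echo 'On Non-Sequential Stage'
            }
        }
        stage('Sequential') {
            agent { label 'linux' }
            // Nested stages run one after the other on this stage's agent
            stages {
                stage('In Sequential 1') {
                    steps {
                        echo 'In Sequential 1'
                    }
                }
                stage('In Sequential 2') {
                    steps {
                        echo 'In Sequential 2'
                    }
                }
            }
        }
    }
}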
Parallel
Stages in Declarative Pipeline may declare a number of nested stages within a parallel block, which will be executed in parallel. Note that a stage must have one and only one of steps, stages, or parallel. The nested stages cannot contain further parallel stages themselves, but otherwise behave the same as any other stage, including a list of sequential stages within stages. Any stage containing parallel cannot contain agent or tools, since those are not relevant without steps.
In addition, you can force your parallel stages to all be aborted when one of them fails, by adding failFast true to the stage containing the parallel. Another option for adding fail-fast behavior is adding the parallelsAlwaysFailFast() option to the pipeline definition.
Example
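A minimal sketch combining a parallel block with the parallelsAlwaysFailFast() option mentioned above:

pipeline {
    agent any
    options {
        // Abort all parallel stages if one of them fails
        parallelsAlwaysFailFast()
    }
    stages {
        stage('Non-Parallel Stage') {
            steps {
                echo 'This stage will be executed first.'
            }
        }
        stage('Parallel Stage') {
            parallel {
                stage('Branch A') {
                    steps {
                        echo 'On Branch A'
                    }
                }
                stage('Branch B') {
                    steps {
                        echo 'On Branch B'
                    }
                }
            }
        }
    }
}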
Steps
Declarative Pipelines may use all the available steps documented in the Pipeline Steps reference,
which contains a comprehensive list of steps, with the addition of the steps listed below which are
only supported in Declarative Pipeline.
script
The script step takes a block of <<scripted-pipeline>> and executes that in the Declarative Pipeline.
For most use-cases, the script step should be unnecessary in Declarative Pipelines, but it can provide
a useful “escape hatch.” script blocks of non-trivial size and/or complexity should be moved into
Shared Libraries instead.
Example
pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                echo 'Hello World'
                // A script block provides an escape hatch into Scripted Pipeline
                script {
                    def browsers = ['chrome', 'firefox']
                    for (int i = 0; i < browsers.size(); ++i) {
                        echo "Testing the ${browsers[i]} browser"
                    }
                }
            }
        }
    }
}
Scripted Pipeline
Scripted Pipeline, like Declarative Pipeline, is built on top of the underlying Pipeline sub-system. Unlike Declarative, Scripted Pipeline is effectively a general-purpose DSL (domain-specific language⁸⁶) built with Groovy⁸⁷. Most functionality provided by the Groovy language is made available to users of Scripted Pipeline, which means it can be a very expressive and flexible tool with which one can author continuous delivery pipelines.
Flow Control
Scripted Pipeline is serially executed from the top of a Jenkinsfile downwards, like most traditional
scripts in Groovy or other languages. Providing flow control therefore rests on Groovy expressions,
such as the if/else conditionals, for example:
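A minimal sketch of if/else flow control in Scripted Pipeline (the branch name is illustrative):

node {
    stage('Example') {
        if (env.BRANCH_NAME == 'master') {
            echo 'I only execute on the master branch'
        } else {
            echo 'I execute elsewhere'
        }
    }
}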
Another way Scripted Pipeline flow control can be managed is with Groovy’s exception handling
support. When <<scripted-steps>> fail for whatever reason they throw an exception. Handling
behaviors on-error must make use of the try/catch/finally blocks in Groovy, for example:
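A minimal sketch of try/catch handling around a deliberately failing step:

node {
    stage('Example') {
        try {
            sh 'exit 1'
        }
        catch (exc) {
            echo 'Something failed, I should sound the klaxons!'
            // Re-throw so the build is still marked as failed
            throw exc
        }
    }
}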
Steps
As discussed at the start of this chapter, the most fundamental part of a Pipeline is the “step”.
Fundamentally, steps tell Jenkins what to do and serve as the basic building block for both
Declarative and Scripted Pipeline syntax.
Scripted Pipeline does not introduce any steps which are specific to its syntax; Pipeline Steps
reference contains a comprehensive list of steps provided by Pipeline and plugins.
Syntax Comparison
When Jenkins Pipeline was first created, Groovy was selected as the foundation. Jenkins has long shipped with an embedded Groovy engine to provide advanced scripting capabilities for admins and users alike. Additionally, the implementors of Jenkins Pipeline found Groovy to be a solid foundation upon which to build what is now referred to as the "Scripted Pipeline" DSL.
As it is a fully featured programming environment, Scripted Pipeline offers a tremendous amount
of flexibility and extensibility to Jenkins users. The Groovy learning-curve isn’t typically desirable
for all members of a given team, so Declarative Pipeline was created to offer a simpler and more
opinionated syntax for authoring Jenkins Pipeline.
Both are fundamentally the same Pipeline sub-system underneath. They are both durable implementations of "Pipeline as code." They are both able to use steps built into Pipeline or provided by plugins. Both are able to utilize Shared Libraries.
Where they differ, however, is in syntax and flexibility. Declarative limits what is available to the user with a more strict and pre-defined structure, making it an ideal choice for simpler continuous delivery pipelines. Scripted provides very few limits, insofar as the only limits on structure and syntax tend to be defined by Groovy itself, rather than any Pipeline-specific systems, making it an ideal choice for power users and those with more complex requirements. As the name implies, Declarative Pipeline encourages a declarative programming model,⁸⁸ whereas Scripted Pipelines follow a more imperative programming model.⁸⁹
⁸⁸https://en.wikipedia.org/wiki/Declarative_programming
⁸⁹https://en.wikipedia.org/wiki/Imperative_programming
Blue Ocean
This chapter covers all aspects of Blue Ocean's functionality, including how to:
• get started with Blue Ocean - covers how to set up Blue Ocean in Jenkins and access the Blue Ocean UI,
• create a new Pipeline project,
• use the Dashboard,
• use the Activity View,
• use the Pipeline Run Details View, and
• use the Pipeline Editor.
To start using Blue Ocean, see Getting started with Blue Ocean.
New teams have little time to learn how to assemble their own Jenkins experience – they want
to improve their time to market by shipping better software faster. Assembling that ideal Jenkins
experience is something we can work together as a community of Jenkins users and contributors
to define. As time progresses, developers' expectations of good user experience change, and the mission of Blue Ocean enables the Jenkins project to respond.
The Jenkins community has poured its sweat and tears into building the most technically capable and
extensible software automation tool in existence. Not doing anything to revolutionize the Jenkins
developer experience today is just inviting someone else – in closed source – to do it.
Blue Ocean aims to deliver a great experience around Pipeline and be compatible with any freestyle
jobs you already have configured on your Jenkins instance. However, those freestyle jobs will not benefit from any of the features built for Pipelines – for example, Pipeline visualization.
As Blue Ocean is designed to be extensible, it is possible for the Jenkins community to extend Blue
Ocean to support other job types in the future.
Developers can extend Blue Ocean through extension points, just like they can in the Jenkins classic UI. So far, Blue Ocean itself is implemented using these extension points.
Extensions are delivered by plugins as usual. However, plugin developers will need to include some
additional JavaScript to hook into Blue Ocean’s extension points and contribute to the Blue Ocean
user experience.
Navigation Bar
The Activity View includes the standard navigation bar at the top, with a local navigation bar below that. The local navigation bar includes:
• the name of the Pipeline - clicking this returns to the default activity tab of this view,
• a "Favorite" toggle - clicking this adds the Pipeline to, or removes it from, the Dashboard's "Favorites" list for this user, and
• tabs (Activity, Branches, Pull Requests) - clicking one of these will display that tab of the Activity View.
Activity
The default tab of the Activity View, the “Activity” tab, shows a list of the latest completed or in-
progress Runs. Each line in the list shows the status of the Run, id number, commit information,
duration, and when the run completed. Clicking on a Run will bring up the Pipeline Run Details
for that Run. "In Progress" Runs can be aborted from this list by clicking on the "Stop" symbol (a square inside a circle). Runs that have completed can be re-run by clicking the "Re-run" symbol (a counter-clockwise arrow). The list can be filtered by branch or pull request by clicking on the "branch" drop-down in the list header.
This list does not allow runs to be edited or marked as favorites. Those actions can be done from the
“branches” tab.
Branches
The "Branches" tab shows a list of all branches that have a completed or in-progress Run in the current Pipeline. Each line in the list corresponds to a branch in source control (https://en.wikipedia.org/wiki/Source_control_management), showing the overall health of the branch based on recent runs, the status of the most recent run, id number, commit information, duration, and when the run completed.
image:blueocean/activity/branches.png[Branches Tab of Activity View, role=center]
Clicking on a branch in this list will bring up the Pipeline Run Details for the latest completed or in-progress Run of that branch. "In Progress" runs can be aborted from this list by clicking on the "Stop" symbol (a square inside a circle). Branches whose latest run has completed can be run again by clicking the "Play" symbol (a triangle inside a circle). Clicking the "Edit" symbol (similar to a pencil) opens the Pipeline editor on the Pipeline for that branch. Clicking the "Favorite" symbol (a star outline) adds a branch to the favorites list shown on the Dashboard's "Favorites" list for this user. A favorite branch will show a solid star, and clicking it removes this branch from the favorites.
Pull Requests
The “Pull Requests” tab shows a list of all Pull Requests for the current Pipeline that have a completed
or in-progress Run. (Some source control systems call these “Merge Requests”, others do not support
them at all.) Each line in the list corresponds to a pull request in source control, showing the status
of the most recent run, id number, commit information, duration, and when the run completed.
image:blueocean/activity/pull-requests.png[Activity Pull Requests view, role=center]
Blue Ocean displays pull requests separately from branches, but otherwise the Pull Requests list behaves similarly to the Branches list. Clicking on a pull request in this list will bring up the Pipeline Run Details for the latest completed or in-progress Run of that pull request. "In Progress" runs can be aborted from this list by clicking on the "Stop" symbol (a square inside a circle). Pull requests whose latest run has completed can be run again by clicking the "Play" symbol (a triangle inside a circle). Pull requests do not display "Health Icons" and cannot be edited or marked as favorites.
NOTE: By default, when a Pull Request is closed, Jenkins will remove the Pipeline from Jenkins (to be cleaned up at a later date), and runs for that Pull Request will no longer be accessible from Jenkins. That can be changed by changing the configuration of the underlying Multi-branch Pipeline job.
Creating a Pipeline
Blue Ocean makes it easy to create a Pipeline project in Jenkins.
A Pipeline can be generated from an existing Jenkinsfile in source control, or you can use the Blue
Ocean Pipeline editor to create a new Pipeline for you (as a Jenkinsfile that will be committed to
source control).
Local repository
If your URL is a local directory path (e.g. beginning with a forward slash /, such as /home/cloned-git-repos/my-git-repo.git), you can proceed to click the Create Pipeline button.
Blue Ocean will then scan your local repository’s branches for a Jenkinsfile and will commence a
Pipeline run for each branch containing a Jenkinsfile. If Blue Ocean cannot find any Jenkinsfile,
you will be prompted to begin creating one through the Pipeline editor.
Remote repository
Since the Pipeline editor saves edited Pipelines to Git repositories as Jenkinsfiles, Blue Ocean only
supports connections to remote Git repositories over the SSH protocol.
If your URL is for a remote Git repository, then as soon as you begin typing the URL, starting with either:
• ssh:// - e.g. ssh://gituser@git-server-url/git-server-repos-group/my-git-repo.git, or
• user@host:path/to/git/repo.git - e.g. gituser@git-server-url:git-server-repos-group/my-git-repo.git,
Blue Ocean automatically generates an SSH public/private key pair (or presents you with an existing
one) for your current/logged in Jenkins user. This credential is automatically registered in Jenkins
with the following details for this Jenkins user:
• Domain: blueocean-private-key-domain
• ID: jenkins-generated-ssh-key
• Name: <jenkins-username> (jenkins-generated-ssh-key)
You need to ensure that this SSH public/private key pair has been registered with your Git server before continuing. If you have not already done this, follow these two steps. Otherwise, continue on.
1. Configure the SSH public key component of this key pair (which you can copy and paste from the Blue Ocean interface) for the remote Git server's user account (e.g. within the authorized_keys file of the machine's gituser/.ssh directory). Note: This process allows your Jenkins user to access the repositories that your Git server's user account (e.g. gituser) has access to. Read more about this in Setting Up the Server⁹⁶ of the Pro Git documentation⁹⁷.
2. When done, return to the Blue Ocean interface.
⁹⁶https://git-scm.com/book/en/v2/Git-on-the-Server-Setting-Up-the-Server
⁹⁷https://git-scm.com/book/en/v2/
1. In the new tab, sign in to your GitHub account (if necessary) and on the GitHub New Personal Access Token page, specify a brief Token description for your GitHub access token (e.g. Blue Ocean). Note: An access token is usually an alphanumeric string that represents your GitHub account along with permissions to access various GitHub features and areas through your GitHub account. The new access token process (triggered through the Create an access key here link above) has the appropriate permissions pre-selected, which Blue Ocean requires to access and interact with your GitHub account.
2. Scroll down to the end of the page and click Generate token.
3. On the resulting Personal access tokens page, copy your newly generated access token.
4. Back in Blue Ocean, paste the access token into the Your GitHub access token field and click Connect. Your current/logged in Jenkins user now has access to your GitHub account (provided by your access token), so you can now choose your GitHub account/organization and repository. Jenkins registers this credential with the following details for this Jenkins user:
• Domain: blueocean-github-domain
• ID: github
• Name: <jenkins-username>/****** (GitHub Access Token)
At this point, Blue Ocean prompts you to choose your GitHub account or an organization you are a member of, as well as the repository it contains from which to build your Pipeline project.
1. In the Which organization does the repository belong to? section, click either:
• Your GitHub account, to create a Pipeline project for one of your own GitHub repositories or one which you have forked from elsewhere on GitHub, or
• An organization you are a member of, to create a Pipeline project for a GitHub repository located within this organization.
2. In the Choose a repository section, click the repository (within your GitHub account or organization) from which to build your Pipeline project. Tip: If your list of repositories is long, you can filter this list using the Search option.
[.boxshadow] image:blueocean/creating-pipelines/choose-a-repository.png['Choose a repository',width=70%]
3. Click Create Pipeline. Blue Ocean will then scan your repository's branches for a Jenkinsfile and will commence a Pipeline run for each branch containing a Jenkinsfile. If Blue Ocean cannot find any Jenkinsfile, you will be prompted to begin creating one through the Pipeline editor (by clicking Create Pipeline again). Note: Under the hood, a Pipeline project created through Blue Ocean is actually a "multibranch Pipeline". Therefore, Jenkins looks for the presence of at least one Jenkinsfile in any branch of your repository.
At this point, Blue Ocean prompts you to choose your Bitbucket account or a team you are a member
of, as well as the repository it contains from which to build your Pipeline project.
1. In the Which team does the repository belong to? section, click either:
• Your Bitbucket account, to create a Pipeline project for one of your own Bitbucket repositories or one which you have forked from elsewhere on Bitbucket, or
• A team you are a member of, to create a Pipeline project for a Bitbucket repository located within this team.
2. In the Choose a repository section, click the repository (within your Bitbucket account or team) from which to build your Pipeline project. Tip: If your list of repositories is long, you can filter this list using the Search option.
[.boxshadow] image:blueocean/creating-pipelines/choose-a-repository.png['Choose a repository',width=70%]
3. Click Create Pipeline. Blue Ocean will then scan your repository's branches for a Jenkinsfile and will commence a Pipeline run for each branch containing a Jenkinsfile. If Blue Ocean cannot find any Jenkinsfile, you will be prompted to begin creating one through the Pipeline editor (by clicking Create Pipeline again). Note: Under the hood, a Pipeline project created through Blue Ocean is actually a "multibranch Pipeline". Therefore, Jenkins looks for the presence of at least one Jenkinsfile in any branch of your repository.
Dashboard
Blue Ocean’s “Dashboard” is the default view shown when you open Blue Ocean and shows an
overview of all Pipeline projects configured on a Jenkins instance.
The Dashboard consists of a blue navigation bar at the top, the Pipelines list, as well as the Favorites
list.
[.boxshadow] image:blueocean/dashboard/overview.png[Overview of the Dashboard,role=center,width=100%]
Navigation bar
The Dashboard includes the blue-colored navigation bar along the top of the interface.
This bar is divided into two sections - a common section along the top and a contextual section
below. The contextual section changes depending on the current Blue Ocean page you are viewing.
When viewing the Dashboard, the navigation bar's contextual section includes the Pipelines list.
Pipelines list
The “Pipelines” list is the Dashboard’s default list and upon accessing Blue Ocean for the first time,
this is the only list shown on the Dashboard.
This list shows the overall state of each Pipeline configured on the Jenkins instance (which can also
include other Jenkins items). For a given item in this list, the following information is indicated:
HEALTH ,
Dashboard 140
• Adding the default branch of the item’s repository to your current user’s
• Removing the item’s default branch from this list (indicated by an outlined
Favorites list (indicated by a solid “”), and “”).
Clicking on an item in the Pipelines list will display that item’s Activity View.
Favorites list
The Favorites list appears above the Dashboard's default Pipelines list when at least one Pipeline/item is present in your user's Favorites list.
This list provides key information and actions for a core subset of your user's accessible items in the Pipelines list. This key information includes the current run status for an item and its
repository’s branch, as well as other details about the item’s run, including the name of the branch,
the initial part of the commit hash and the time of the last run. Items in this list also include clickable
icons to run or re-run the item on the repository branch indicated.
You should only add an item (or one of the repository’s specific branches) to your Favorites list if
you need to examine that item’s branch on a regular basis. Adding an item’s specific branch to your
Favorites list can be done through the item’s Activity View.
Blue Ocean automatically adds branches or PRs to this list when they contain a run that has changes authored by the current user.
You can also manually remove items from your Favorites list by clicking on the solid star in this list.
When the last item is removed from this list, the list is removed from the interface.
Clicking on an item in the Favorites list will open the Pipeline run details for latest run on the
repository branch or PR indicated.
Health icons
[[pipeline-health]]
Blue Ocean represents the overall health of a Pipeline/item or one of its repository’s branches using
weather icons, which change depending on the number of recent builds that have passed.
Health icons on the Dashboard represent overall Pipeline health, whereas the health icons in the
Branches tab of the Activity View represent the overall health for each branch.
Health icons (best to worst):
• image:blueocean/icons/weather/sunny.svg - Sunny, more than 80% of Runs passing
• image:blueocean/icons/weather/partially-sunny.svg - Partially Sunny, 61% to 80% of Runs passing
• image:blueocean/icons/weather/cloudy.svg - Cloudy, 41% to 60% of Runs passing
• image:blueocean/icons/weather/raining.svg - Raining, 21% to 40% of Runs passing
• image:blueocean/icons/weather/storm.svg - Storm, less than 21% of Runs passing
Run status
Blue Ocean represents the run status of a Pipeline/item or one of its repository’s branches using a
consistent set of icons throughout.
Run status icons:
• image:blueocean/dashboard/status-in-progress.png - In Progress
• image:blueocean/dashboard/status-passed.png - Passed
• image:blueocean/dashboard/status-unstable.png - Unstable
• image:blueocean/dashboard/status-failed.png - Failed
• image:blueocean/dashboard/status-aborted.png - Aborted
Getting started with Blue Ocean
This section describes how to get started with Blue Ocean in Jenkins. It includes instructions for
setting up Blue Ocean on your Jenkins instance as well as how to access the Blue Ocean UI and
return to the Jenkins classic UI.
Blue Ocean can be installed:
• As a suite of plugins on an existing Jenkins instance, or
• As part of Jenkins in Docker, i.e. via the jenkinsci/blueocean Docker image⁹⁸ on Docker Hub⁹⁹.
To install the Blue Ocean suite of plugins on an existing Jenkins instance:
1. If required, ensure you are logged in to Jenkins (as a user with the Administer permission).
2. From the Jenkins home page (i.e. the Dashboard of the Jenkins classic UI), click Manage Jenkins on the left and then Manage Plugins in the center.
3. Click the Available tab and type blue ocean into the Filter text box, which filters the list of plugins to those whose name and/or description contains the words "blue" and "ocean". There is no need to select the check boxes of the other plugins in this filtered list; the main Blue Ocean plugin installs them as dependencies. If you chose the Install without restart button, you may need to restart Jenkins so that Blue Ocean is fully functional.
When you open Blue Ocean and your Jenkins instance:
• already has existing Pipeline projects or other items present, then the Blue Ocean Dashboard is displayed;
• is new or has no Pipeline projects or other items configured, then Blue Ocean displays a Welcome to Jenkins box with a Create a new Pipeline button you can use to begin creating a new Pipeline project. Read more about this in Creating a Pipeline.
[.boxshadow] image:blueocean/creating-pipelines/create-a-new-pipeline-box.png['Welcome to Jenkins - Create a New Pipeline message box',width=50%]
⁹⁸https://hub.docker.com/r/jenkinsci/blueocean/
⁹⁹https://hub.docker.com/
Navigation bar
The Blue Ocean UI has a navigation bar along the top of its interface, which allows you to access
the different views and other features of Blue Ocean.
The navigation bar is divided into two sections - a common section along the top of most Blue Ocean views and a contextual section below. The contextual section is specific to the current Blue Ocean page you are viewing.
The navigation bar's common section includes the following buttons:
• Jenkins logo - takes you to the Dashboard, or reloads this page if you are already viewing it.
• Pipelines - also takes you to the Dashboard, or does nothing if you are already viewing the Dashboard. This button serves a different purpose when you are viewing a Pipeline run details page.
• Administration - takes you to the Manage Jenkins page of the Jenkins classic UI. Note: This button is not available if your Jenkins user does not have the Administer permission (set through Matrix-based security). Read more about this in the Authorization section of Managing Security.
• Go to classic icon - takes you back to the Jenkins classic UI. Read more about this in Switching to the classic UI.
• Logout - logs out your current Jenkins user and returns to the Jenkins login page.
Views that use the standard navigation bar will add another bar below it with options specific to
that view. Some views replace the common navigation bar with one specifically suited to that view.
Limitations
• SCM-based Declarative Pipelines only
• Credentials must have write permission
• Does not have full parity with Declarative Pipeline
• Pipeline re-ordered and comments removed
Navigation bar
The Pipeline Editor includes the standard navigation bar at the top, with a local navigation bar below
that. The local navigation bar includes:
Pipeline settings
By default, the right side of the editor shows the "Pipeline Settings". This sheet can be accessed by clicking anywhere in the Stage editor that is not a Stage or one of the "Add Stage" buttons.
Agent
The “Agent” section controls what agent the Pipeline will use. This is the same as the “agent”
directive.
Environment
The "Environment" section lets us set environment variables for the Pipeline. This is the same as the "environment" directive.
Stage editor
The left side of the editor screen contains the Stage editor, used for creating the stages of a Pipeline.
image:blueocean/editor/stage-editor-basic.png[Stage editor simple, role=center]
Stages can be added to the Pipeline by clicking the “+” button to the right of an existing stage. Parallel
stages can be added by clicking the “+” button below an existing Stage. Stages can be deleted using
the context menu in the stage configuration sheet.
The Stage editor will display the name of each Stage once it has been set. Stages that contain
incomplete or invalid information will display a warning symbol. Pipelines can have validation
errors while they are being edited, but cannot be saved until the errors are fixed.
image:blueocean/editor/stage-editor-error.png[Stage editor with error, role=center]
Stage configuration
Selecting a stage in the Stage editor will open the "Stage Configuration" sheet on the right side. Here we can change the name of the Stage, delete the Stage, and add steps to the Stage.
image:blueocean/editor/stage-configuration.png[Stage Configuration, role=center]
The name of the Stage can be set at the top of the Stage Configuration sheet. The context menu (three dots on the upper right) can be used to delete the current stage. Clicking "Add step" will display the list of available Step types with a search bar at the top. Steps can be deleted using the context menu in the step configuration sheet. Adding a step or selecting an existing step will open the step configuration sheet.
image:blueocean/editor/step-list.png[Step list filtered by ‘file’, role=center]
Step configuration
Selecting a step from the Stage configuration sheet will open the Step Configuration sheet.
image:blueocean/editor/step-configuration.png[Step configuration for JUnit step, role=center]
This sheet will differ depending on the step type, containing whatever fields or controls are needed.
The name of the Step cannot be changed. The context menu (three dots on the upper right) can be used to delete the current step. Fields that contain incomplete or invalid information will display a warning symbol. Pipelines can have validation errors while they are being edited, but cannot be saved until the errors are fixed.
image:blueocean/editor/step-error.png[Step configuration with error, role=center]
Special cases
Blue Ocean is optimized for working with Pipelines in Source Control, but it can display details for other kinds of projects as well. Blue Ocean offers the same tabs for all supported project types, but those tabs may display different information.
Freestyle Projects
For Freestyle projects, Blue Ocean still offers the same tabs, but the Pipeline tab only shows the
console log output. The “Rerun” or “Edit” options are also not shown in the top menu bar.
Matrix projects
Matrix projects are not supported in Blue Ocean. Viewing a Matrix project will redirect to the
“Classic UI” view for that project.
Tabs
Each of the tabs of the Run Detail view provides information on a specific aspect of a run.
Pipeline
This is the default tab and gives an overall view of the flow of this Pipeline Run. It shows each stage
and parallel branch, the steps in those stages, and the console output from those steps. The overview
image above shows a successful Pipeline run. If a particular step during the run fails, this tab will
automatically default to showing the console log from the failed step. The image below shows a
failed Run.
image:blueocean/pipeline-run-details/pipeline-failed.png[Failed Run, role=center]
Changes
image:blueocean/pipeline-run-details/changes-one-change.png[List of Changes for a Run, role=center]
Tests
The "Tests" tab shows information about test results for this run. This tab will only contain information if the Pipeline includes a test result publishing step, such as the "Publish JUnit test results" (junit) step. If no results are recorded, this tab will say that. If all tests pass, this tab will report the total number of passing tests. In the case of failures, the tab will display log details from the failures as shown below.
image:blueocean/pipeline-run-details/tests-unstable.png[Test Results for Unstable Run, role=center]
When the previous Run had failures and the current run fixes those failures, this tab will note the fixed tests and display their logs as well.
image:blueocean/pipeline-run-details/tests-fixed.png[Test Results for Fixed Run, role=center]
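For this tab to have content, the Pipeline must record results; a minimal sketch using the junit step (the test command and report path are illustrative):

pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh './gradlew test'
            }
        }
    }
    post {
        always {
            // Publish JUnit-format XML reports so the Tests tab has data
            junit 'build/test-results/**/*.xml'
        }
    }
}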
Artifacts
The "Artifacts" tab shows a list of any artifacts saved using the "Archive Artifacts" (archive) step. Clicking on an item in the list will download it. The full output log from the Run can be downloaded from this list.
image:blueocean/pipeline-run-details/artifacts-list.png[List of Artifacts from a Run, role=center]
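A minimal sketch of archiving artifacts so they appear in this tab (the build command and path are illustrative):

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make'
                // Saved artifacts are listed in, and downloadable from, the Artifacts tab
                archiveArtifacts artifacts: 'build/libs/**/*.jar', fingerprint: true
            }
        }
    }
}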
Managing Jenkins
This chapter covers how to manage and configure Jenkins masters and nodes.
This chapter is intended for Jenkins administrators. More experienced users may find this infor-
mation useful, but only to the extent that they will understand what is and is not possible for
administrators to do. Individual sections may assume knowledge of information from previous
sections, but such assumptions will be explicitly called out and cross-referenced.
If you are a system administrator and want to learn how to back up, restore, and maintain Jenkins servers and nodes, see Jenkins System Administration.
For an overview of content in the Jenkins User Handbook, see User Handbook overview.
Jenkins CLI
Jenkins has a built-in command line interface that allows users and administrators to access Jenkins
from a script or shell environment. This can be convenient for scripting of routine tasks, bulk updates,
troubleshooting, and more.
The command line interface can be accessed over SSH or with the Jenkins CLI client, a .jar file
distributed with Jenkins.
[WARNING] ==== Use of the CLI client distributed with Jenkins 2.53 and older and Jenkins LTS
2.46.1 and older is not recommended for security reasons.
The client distributed with Jenkins 2.54 and newer and Jenkins LTS 2.46.2 and newer is considered
secure in its default (-http) or -ssh modes, as is using the standard ssh command.
Jenkins 2.165 and newer no longer supports the old (-remoting) mode in either the client or server.
====
With the random SSH port (53801 in this example) and Authentication configured, any modern SSH client may securely execute CLI commands.
Authentication
Whichever user is used for authentication with the Jenkins master must have the Overall/Read permission in order to access the CLI. The user may require additional permissions depending on the commands executed.
Authentication relies on SSH-based public/private key authentication. In order to add an SSH public
key for the appropriate user, navigate to https://JENKINS_URL/user/USERNAME/configure and paste
an SSH public key into the appropriate text area.
image::managing/cli-adding-ssh-public-keys.png[“Adding public SSH keys for a user”, role=center]
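As a sketch, using the example port above and an illustrative username, a command can then be executed with the standard ssh client:

ssh -l kohsuke -p 53801 localhost who-am-i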
Common Commands
Jenkins has a number of built-in CLI commands which can be found in every Jenkins environment,
such as build or list-jobs. Plugins may also provide CLI commands; in order to determine the full
list of commands available in a given Jenkins environment, execute the CLI help command:
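For example, over SSH as configured above (the username and port are illustrative):

ssh -l kohsuke -p 53801 localhost help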
The following list of commands is not comprehensive, but it is a useful starting point for Jenkins
CLI usage.
build
One of the most common and useful CLI commands is build, which allows the user to trigger any
job or Pipeline for which they have permission.
The most basic invocation will simply trigger the job or Pipeline and exit, but with the additional
options a user may also pass parameters, poll SCM, or even follow the console output of the triggered
build or Pipeline run.
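A sketch of a parameterized trigger that also follows the console output (the job name, parameter, and credentials are illustrative):

java -jar jenkins-cli.jar -s http://localhost:8080/ -auth kohsuke:apitoken build my-pipeline -p TARGET=staging -f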
console
Similarly useful is the console command, which retrieves the console output for the specified build
or Pipeline run. When no build number is provided, the console command will output the last
completed build’s console output.
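A sketch retrieving the console output of a specific build (the job name and build number are illustrative):

java -jar jenkins-cli.jar -s http://localhost:8080/ -auth kohsuke:apitoken console my-pipeline 42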
who-am-i
The who-am-i command is helpful for listing the current user’s credentials and permissions available
to the user. This can be useful when debugging the absence of CLI commands due to the lack of
certain permissions.
java -jar jenkins-cli.jar [-s JENKINS_URL] [global options...] command [command options...] [arguments...]
The JENKINS_URL can be specified via the environment variable $JENKINS_URL. Summaries of other
general options can be displayed by running the client with no arguments at all.
This is the default mode, though you may pass the -http option explicitly for clarity.
Authentication is preferably with an -auth option, which takes a username:apitoken argument. Get
your API token from /me/configure:
[WARNING] ==== For security reasons, loading the authentication credentials from a file is the recommended approach. ====
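A sketch of file-based authentication, assuming the file contains a single username:apitoken line:

java -jar jenkins-cli.jar -s http://localhost:8080/ -auth @/home/user/.jenkins-cli who-am-i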
An alternative authentication method is to configure environment variables in a similar way as $JENKINS_URL is used. The username can be specified via the environment variable $JENKINS_USER_ID, while the API token can be specified via the variable $JENKINS_API_TOKEN. Both variables have to be set together.
export JENKINS_USER_ID=kohsuke
export JENKINS_API_TOKEN=abc1234ffe4a
java -jar jenkins-cli.jar [-s JENKINS_URL] command ...
If these environment variables are configured, you can still override the authentication by passing different credentials with the -auth option, which takes precedence over them.
Generally no special system configuration need be done to enable HTTP-based CLI connections.
If you are running Jenkins behind an HTTP(S) reverse proxy, ensure it does not buffer request or
response bodies.
[WARNING] ==== The HTTP(S) connection mode of the CLI does not work correctly behind an
Apache HTTP reverse proxy server using mod_proxy. Workarounds include using a different reverse
proxy such as Nginx or HAProxy, or using the SSH connection mode where possible. See JENKINS-47279¹⁰⁰. ====
Authentication is via SSH keypair. You must select the Jenkins user ID as well:
¹⁰⁰https://issues.jenkins-ci.org/browse/JENKINS-47279
java -jar jenkins-cli.jar [-s JENKINS_URL] -ssh -user kohsuke command ...
In this mode, the client acts essentially like a native ssh command.
By default the client will try to connect to an SSH port on the same host as is used in the JENKINS_URL. If Jenkins is behind an HTTP reverse proxy, this will not generally work, so run Jenkins with the system property -Dorg.jenkinsci.main.modules.sshd.SSHD.hostName=ACTUALHOST to define a hostname or IP address for the SSH endpoint.
You may get the error below and find a log entry just below that concerning mismatched keys:
This means your SSH configuration does not recognize the public key presented by the server. It’s
often the case when you run Jenkins in dev mode and multiple instances of the application are run
under the same SSH port over time.
In a development context, access your ~/.ssh/known_hosts (or C:/Users/<your_name>/.ssh/known_hosts on Windows) and remove the line corresponding to your current SSH port (e.g. [localhost]:3485).
In a production context, check with the Jenkins administrator whether the public key of the server changed recently. If so, ask the administrator to do the steps described above.
UsernameNotFoundException
org.acegisecurity.userdetails.UsernameNotFoundException: <name_you_used>
...
This means your SSH keys were recognized and validated against the stored users but the username
is not valid for the security realm your application is using at the moment. This could occur when
you were using the Jenkins database initially, configured your users, and then switched to another
security realm (like LDAP, etc.) where the defined users do not exist yet.
To solve the problem, ensure your users exist in your configured security realm.
Troubleshooting logs
Installing a plugin
Jenkins provides a couple of different methods for installing plugins on the master:
1. Using the "Plugin Manager" in the web UI.
2. Using the Jenkins CLI install-plugin command.
Each approach will result in the plugin being loaded by Jenkins but may require different levels of
access and trade-offs in order to use.
Both approaches require that the Jenkins master be able to download metadata from an
Update Center, whether the primary Update Center operated by the Jenkins project
(https://updates.jenkins.io) or a custom Update Center.
The plugins are packaged as self-contained .hpi files, which have all the necessary code, images,
and other resources which the plugin needs to operate successfully.
Advanced installation
The Update Center only allows the installation of the most recently released version of a plugin. In
cases where an older release of the plugin is desired, a Jenkins administrator can download an older
.hpi archive and manually install that on the Jenkins master.
Assuming a .hpi file has been downloaded, a logged-in Jenkins administrator may upload the file
from within the web UI:
1. Navigate to the Manage Jenkins > Manage Plugins page in the web UI.
2. Click on the Advanced tab.
3. Choose the .hpi file under the Upload Plugin section.
4. Upload the plugin file.
On the master
Assuming a .hpi file has been explicitly downloaded by a systems administrator, the administrator
can manually place the .hpi file in a specific location on the file system.
Copy the downloaded .hpi file into the JENKINS_HOME/plugins directory on the Jenkins
master (for example, on Debian systems JENKINS_HOME is generally /var/lib/jenkins).
The master will need to be restarted before the plugin is loaded and made available in the Jenkins
environment.
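A minimal sketch of this manual installation on a Debian system; the plugin file name is a hypothetical placeholder:

# Hypothetical plugin file; adjust the name and JENKINS_HOME to your setup.
cp greenballs.hpi /var/lib/jenkins/plugins/
chown jenkins:jenkins /var/lib/jenkins/plugins/greenballs.hpi
# Restart so the plugin is loaded.
systemctl restart jenkins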
Note that the names of the plugin directories in the Update Site are not always the same as the
plugin's display name. Searching plugins.jenkins.io (https://plugins.jenkins.io/) for the desired
plugin will provide the appropriate link to the .hpi files.
Updating a plugin
Updates are listed in the Updates tab of the Manage Plugins page and can be installed by checking
the checkboxes of the desired plugin updates and clicking the Download now and install after restart
button.
[Image: Updates tab in the Plugin Manager]
By default, the Jenkins master will check for updates from the Update Center once every 24 hours. To
manually trigger a check for updates, simply click on the Check now button in the Updates tab.
Removing a plugin
When a plugin is no longer used in a Jenkins environment, it is prudent to remove the plugin from
the Jenkins master. This provides a number of benefits such as reducing memory overhead at boot
or runtime, reducing configuration options in the web UI, and removing the potential for future
conflicts with new plugin updates.
Uninstalling a plugin
The simplest way to uninstall a plugin is to navigate to the Installed tab on the Manage Plugins page.
From there, Jenkins will automatically determine which plugins are safe to uninstall (those which
are not dependencies of other plugins) and present a button for doing so.
Note that if a plugin .hpi file is removed but is still required by other plugins, the Jenkins
master may fail to boot correctly.
Uninstalling a plugin does not remove the configuration that the plugin may have created. If there
are existing jobs/nodes/views/builds/etc configurations that reference data created by the plugin,
during boot Jenkins will warn that some configurations could not be fully loaded and ignore the
unrecognized data.
Since the configuration(s) will be preserved until they are overwritten, re-installing the plugin will
result in those configuration values reappearing.
Jenkins provides a facility for purging configuration left behind by uninstalled plugins. Navigate to
Manage Jenkins and then click on Manage Old Data to review and remove old data.
Disabling a plugin
Disabling a plugin is a softer way to retire a plugin. Jenkins will continue to recognize that the plugin
is installed, but it will not start the plugin, and no extensions contributed from this plugin will be
visible.
A Jenkins administrator may disable a plugin by unchecking the box on the Installed tab of the
Manage Plugins page (see below).
[Image: Installed tab in the Plugin Manager]
A systems administrator may also disable a plugin by creating a file on the Jenkins master, such as:
JENKINS_HOME/plugins/PLUGIN_NAME.hpi.disabled.
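A minimal sketch of the file-based approach; the plugin name is a hypothetical placeholder:

# Hypothetical: disable the "greenballs" plugin from the file system
touch $JENKINS_HOME/plugins/greenballs.hpi.disabled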
The configuration(s) created by the disabled plugin behave as if the plugin were uninstalled, insofar
as they result in warnings on boot but are otherwise ignored.
It is also possible to enable or disable plugins via the Jenkins CLI using the enable-plugin or
disable-plugin commands.
The enable-plugin command receives a list of plugins to be enabled. Any plugins which a selected
plugin depends on will also be enabled by this command.
The disable-plugin command receives a list of plugins to be disabled. The output will display
messages for both successful and failed operations. If you only want to see error messages, the
-quiet option can be specified. The -strategy option controls what action will be taken when one
of the specified plugins is listed as an optional or mandatory dependency of another enabled plugin.
For example, with the all strategy, all dependent plugins are also disabled, no matter whether the
dependency is optional or mandatory.
Just as enabling and disabling plugins from the UI requires a restart to complete the process,
the changes made with the CLI commands will take effect once Jenkins is restarted. The -restart
option forces a safe restart of the instance once the command has successfully finished, so the
changes are applied immediately.
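A sketch of a typical invocation; the credentials and plugin name are placeholders:

# Hypothetical example: disable a plugin and its dependents, then safely restart
java -jar jenkins-cli.jar -s $JENKINS_URL -auth user:apitoken \
    disable-plugin greenballs -strategy all -restart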
Pinned plugins
Note that the pinned plugins feature was removed in Jenkins 2.0. Versions later than Jenkins 2.0
do not bundle plugins, instead providing a wizard to install the most useful plugins.
The notion of pinned plugins applies to plugins that are bundled with Jenkins 1.x, such as the
Matrix Authorization plugin.
By default, whenever Jenkins is upgraded, its bundled plugins overwrite the versions of the plugins
that are currently installed in JENKINS_HOME.
However, when a bundled plugin has been manually updated, Jenkins will mark that plugin as
pinned to the particular version. On the file system, Jenkins creates an empty file called
JENKINS_HOME/plugins/PLUGIN_NAME.hpi.pinned to indicate the pinning.
Pinned plugins will never be overwritten by bundled plugins during Jenkins startup. (Newer versions
of Jenkins do warn you if a pinned plugin is older than what is currently bundled.)
It is safe to update a bundled plugin to a version offered by the Update Center. This is often necessary
to pick up the newest features and fixes. The bundled version is occasionally updated, but not
consistently.
The Plugin Manager allows plugins to be explicitly unpinned. The JENKINS_HOME/plugins/PLUGIN_NAME.hpi.pinned
file can also be manually created/deleted to control the pinning behavior. If the
pinned file is present, Jenkins will use whatever plugin version the user has specified. If the file is
absent, Jenkins will restore the plugin to the default version on startup.
In-process Script Approval
Jenkins, and a number of plugins, allow users to execute Groovy scripts in Jenkins. These scripting
capabilities are provided by:
• Script Console.
• Jenkins Pipeline.
To protect Jenkins from the execution of malicious scripts, these plugins execute user-provided scripts in
a Groovy Sandbox that limits what internal APIs are accessible. Administrators can then use the
"In-process Script Approval" page, provided by the Script Security plugin, to
manage which unsafe methods, if any, should be allowed in the Jenkins environment.
[Image: Entering the In-process Script Approval configuration]
Getting Started
The Script Security plugin is installed automatically by the Post-install Setup
Wizard, although initially no additional scripts or operations are approved for use.
[IMPORTANT] ==== Older versions of this plugin may not be safe to use. Please review the security
warnings listed on the Script Security plugin page in order to ensure that the
plugin is up to date. ====
Security for in-process scripting is provided by two different mechanisms: the Groovy Sandbox
and Script Approval. The first, the Groovy Sandbox, is enabled by default for Jenkins Pipeline,
allowing user-supplied Scripted and Declarative Pipelines to execute without prior Administrator
intervention. The second, Script Approval, allows Administrators to approve or deny unsandboxed
scripts, or allow sandboxed scripts to execute additional methods.
For most instances, the combination of the Groovy Sandbox and the Script Security plugin's built-in list¹⁰²
of approved method signatures will be sufficient. It is strongly recommended that Administrators
only deviate from these defaults if absolutely necessary.
¹⁰²https://github.com/jenkinsci/script-security-plugin/tree/master/src/main/resources/org/jenkinsci/plugins/scriptsecurity/sandbox/
whitelists
Groovy Sandbox
To reduce manual interventions by Administrators, most scripts will run in a Groovy Sandbox by
default, including all Jenkins Pipelines. The sandbox only allows a subset of Groovy's methods
deemed sufficiently safe for "untrusted" access to be executed without prior approval. Scripts using
the Groovy Sandbox are all subject to the same restrictions, therefore a Pipeline authored by an
Administrator is subject to the same restrictions as one authored by a non-administrative user.
When a script attempts to use features or methods unauthorized by the sandbox, the script is halted
immediately, as shown below with Jenkins Pipeline.
[Image: Unauthorized method signature rejected at runtime via Blue Ocean]
The Pipeline above will not execute until an Administrator approves the method signature via the
In-process Script Approval page.
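For illustration, a minimal Scripted Pipeline of the kind the sandbox rejects might look like this sketch (hypothetical job content; Jenkins.instance is an internal API outside the sandbox's built-in list):

// The call to Jenkins.instance is halted by the Groovy Sandbox until an
// administrator approves the corresponding method signature.
node {
    echo "Number of items: ${Jenkins.instance.getAllItems().size()}"
}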
In addition to adding approved method signatures, users may also disable the Groovy Sandbox
entirely as shown below. Disabling the Groovy Sandbox requires that the entire script must be
reviewed and manually approved by an administrator.
[Image: Disabling the Groovy Sandbox for a Pipeline]
Script Approval
Manual approval of entire scripts, or method signatures, by an administrator provides Administrators
with additional flexibility to support more advanced usages of in-process scripting. When
the Groovy Sandbox is disabled, or a method outside of the built-in list is invoked, the Script
Security plugin will check the Administrator-managed list of approved scripts and methods.
For scripts which wish to execute outside of the Groovy Sandbox, the Administrator must
approve the entire script in the In-process Script Approval page:
[Image: Approving an unsandboxed Scripted Pipeline]
Scripts which use the Groovy Sandbox but attempt to execute a currently unapproved method
signature will also be halted by Jenkins, and require an Administrator to approve the specific method
signature before the script is allowed to execute:
[Image: Approving a new method signature]
Approving a method signature can have security implications; for example, approving the
hudson.model.ItemGroup.getItems() method signature would allow a script to bypass built-in
permissions checks.
Instead, it is usually more desirable to click "Approve assuming permissions check", which will cause
the Script Approval engine to allow the method signature, assuming the user running the script has
the permissions to execute the method, such as the Job/Read permission in this example.
Script Console
Plugins can provide additional security realms which may be useful for incorporating Jenkins into
existing identity systems, such as:
• Active Directory
• GitHub Authentication
• Atlassian Crowd 2
Authorization
The Security Realm, or authentication, indicates who can access the Jenkins environment. The
other piece of the puzzle is Authorization, which indicates what they can access in the Jenkins
environment. By default Jenkins supports a few different Authorization options:
Anyone can do anything:: Everyone gets full control of Jenkins, including anonymous users who
haven't logged in. Do not use this setting for anything other than local test Jenkins masters.

Legacy mode:: Behaves exactly the same as Jenkins <1.164. Namely, if a user has the "admin" role, they will
be granted full control over the system, and otherwise (including anonymous users) will only have
read access. Do not use this setting for anything other than local test Jenkins masters.

Logged in users can do anything:: In this mode, every logged-in user gets full control of Jenkins. Depending
on an advanced option, anonymous users get read access to Jenkins, or no access at all. This mode
is useful to force users to log in before taking actions, so that there is an audit trail of users' actions.

Matrix-based security:: This authorization scheme allows for granular control over which users and
groups are able to perform which actions in the Jenkins environment (see the screenshot below).

Project-based Matrix Authorization Strategy:: This authorization scheme is an extension to Matrix-based
security which allows additional access control lists (ACLs) to be defined for each project
separately in the Project configuration screen. This allows granting specific users or groups access
only to specified projects, instead of all projects in the Jenkins environment. The ACLs defined with
Project-based Matrix Authorization are additive, such that access grants defined in the Configure
Global Security screen will be combined with project-specific ACLs.
For most Jenkins environments, Matrix-based security provides the most security and flexibility so
it is recommended as a starting point for “production” environments.
Markup Formatter
Jenkins allows user input in a number of different configuration fields and text areas, which can lead
to users inadvertently, or maliciously, inserting unsafe HTML and/or JavaScript.
By default the Markup Formatter configuration is set to Plain Text, which will escape unsafe
characters such as < and & to their respective character entities.
Using the Safe HTML Markup Formatter allows users and administrators to inject useful and
informative HTML snippets into Project Descriptions and elsewhere.
Caveats
CSRF protection may result in challenges for more advanced usages of Jenkins, such as:
• Some Jenkins features, like the remote API, are more difficult to use when this option is enabled.
Consult the Remote API documentation for more information.
• Accessing Jenkins through a poorly-configured reverse proxy may result in the CSRF HTTP header
being stripped from requests, resulting in protected actions failing.
• Out-dated plugins, not tested with CSRF protection enabled, may not properly function.
Customizing Access
For advanced users who may wish to allow certain access patterns from the agents to the Jenkins
master, Jenkins allows administrators to create specific exemptions from the built-in access control
rules.
[Image: Configure Global Security - Enable Agent ⇒ Master Access Control - Editing Rules]
By following the link highlighted above, an administrator may edit the Commands and File Access
Agent/Master access control rules.
Commands
“Commands” in Jenkins and its plugins are identified by their fully-qualified class names. The
majority of these commands are intended to be executed on agents by a request of a master, but
some of them are intended to be executed on a master by a request of an agent.
Plugins not yet updated for this subsystem may not classify which category each command falls
into, such that when an agent requests that the master execute a command which is not explicitly
allowed, Jenkins will err on the side of caution and refuse to execute the command.
In such cases, Jenkins administrators may "whitelist" (https://en.wikipedia.org/wiki/Whitelist)
certain commands as acceptable for execution on the master.
[Image: Configure Global Security - Enable Agent ⇒ Master Access Control - Editing Rules - Command Whitelisting]
Advanced
Administrators may also whitelist classes by creating files with the .conf extension in the directory
JENKINS_HOME/secrets/whitelisted-callables.d/. The contents of these .conf files should list
command names on separate lines.
The contents of all the .conf files in the directory will be read by Jenkins and combined to create a
default.conf file in the directory which lists all known safe commands. The default.conf file will
be re-written each time Jenkins boots.
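A minimal sketch; the callable class name is a hypothetical placeholder (real names come from the plugin in question):

# Hypothetical fully-qualified class name of a command to whitelist
echo "org.example.plugin.SomeCallable" > $JENKINS_HOME/secrets/whitelisted-callables.d/example.conf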
Jenkins also manages a file named gui.conf, in the whitelisted-callables.d directory, where
commands added via the web UI are written. In order to disable the ability of administrators to
change whitelisted commands from the web UI, place an empty gui.conf file in the directory and
change its permissions such that it is not writable by the operating system user Jenkins runs as.
The File Access Rules are used to validate file access requests made from agents to the master. Each
File Access Rule is a triplet which must contain each of the following elements:

1. allow / deny: if the following two parameters match the current request being considered, an allow
entry would allow the request to be carried out and a deny entry would cause the request to be
rejected, regardless of what later rules might say.
2. operation: the type of the operation requested. The following 6 values exist. The operations can also
be combined by comma-separating the values; the value all indicates all the listed operations are
allowed or denied.
** read: read file content or list directory entries
** write: write file content
** mkdirs: create a new directory
** create: create a file in an existing directory
** delete: delete a file or directory
** stat: read metadata of a file/directory, such as timestamp, length, and file access modes
3. file path: a regular expression that specifies the file paths that match this rule. In addition to the
base regexp syntax, it supports the following tokens:
** <JENKINS_HOME> can be used as a prefix to match the master's JENKINS_HOME directory.
** <BUILDDIR> can be used as a prefix to match the build record directory.
# To avoid hassle of escaping every '\' on Windows, you can use / even on Windows.
deny all <JENKINS_HOME>/secrets/.*
allow all <JENKINS_HOME>/.*

Ordering is very important! The following rules are incorrectly written because the 2nd rule will
never match, and would allow all agents to access all files and folders under JENKINS_HOME:
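A reconstructed sketch of such an incorrect ordering, assuming the same two rules as above:

# Incorrect: the allow rule matches everything first, so the deny rule below never applies.
allow all <JENKINS_HOME>/.*
deny all <JENKINS_HOME>/secrets/.*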
Advanced
Administrators may also add File Access Rules by creating files with the .conf extension in the
directory JENKINS_HOME/secrets/filepath-filters.d/. Jenkins itself generates the 30-default.conf
file on boot in this directory, which contains defaults considered by the Jenkins project to be the
best balance between compatibility and security. In order to disable these built-in defaults, replace
30-default.conf with an empty file which is not writable by the operating system user Jenkins runs
as.
On each boot, Jenkins will read all .conf files in the filepath-filters.d directory in alphabetical
order, therefore it is good practice to name files in a manner which indicates their load order.
Jenkins also manages 50-gui.conf, in the filepath-filters.d directory, where File Access Rules
added via the web UI are written. In order to disable the ability of administrators to change the
File Access Rules from the web UI, place an empty 50-gui.conf file in the directory and change its
permissions such that it is not writable by the operating system user Jenkins runs as.
Disabling
While it is not recommended, if all agents in a Jenkins environment can be considered “trusted” to
the same degree that the master is trusted, the Agent/Master Access Control feature may be disabled.
Additionally, all the users in the Jenkins environment should have the same level of access to all
configured projects.
An administrator can disable Agent/Master Access Control in the web UI by un-checking the
box on the Configure Global Security page. Alternatively an administrator may create a file in
JENKINS_HOME/secrets named slave-to-master-security-kill-switch with the contents of true
and restart Jenkins.
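A minimal sketch of the file-based approach:

# Create the kill-switch file, then restart Jenkins for it to take effect.
echo "true" > $JENKINS_HOME/secrets/slave-to-master-security-kill-switch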
Most Jenkins environments grow over time, requiring their trust models to evolve as the
environment grows. Please consider scheduling regular "check-ups" to review whether any disabled
security settings should be re-enabled.
Configuring the System
Managing Tools
Built-in tool providers
Ant
Git
JDK
Maven
Managing Users
System Administration
This chapter is for system administrators of Jenkins servers and nodes. It covers system
maintenance topics including security, monitoring, and backup/restore.
Users not involved with system-level tasks will find this chapter of limited use. Individual sections
may assume knowledge of information from previous sections, but such assumptions will be
explicitly called out and cross-referenced.
If you are a Jenkins administrator and want to know more about managing Jenkins nodes and
instances, see Managing Jenkins.
For an overview of content in the Jenkins User Handbook, see User Handbook overview.
Backing-up/Restoring Jenkins
Monitoring Jenkins
Securing Jenkins
In the default configuration of Jenkins 1.x, Jenkins does not perform any security checks. This means
that the ability of Jenkins to launch processes and access local files is available to anyone who can
access the Jenkins web UI.
Securing Jenkins has two aspects to it:
• Access control, which ensures users are authenticated when accessing Jenkins and their activities
are authorized.
• Protecting Jenkins against external threats.
Access Control
You should lock down access to the Jenkins UI so that users are authenticated and an appropriate
set of permissions is given to them. This setting is controlled mainly by two axes:
• Security Realm, which determines users and their passwords, as well as what groups the users
belong to.
• Authorization Strategy, which determines who has access to what.
These two axes are orthogonal, and need to be individually configured. For example, you might
choose to use external LDAP or Active Directory as the security realm, and you might choose
“everyone full access once logged in” mode for authorization strategy. Or you might choose to let
Jenkins run its own user database, and perform access control based on the permission/user matrix.
• Matrix-based security (https://wiki.jenkins-ci.org/display/JENKINS/Matrix-based+security):
granting and denying finer-grained permissions
In addition to access control of users, access control for builds limits what builds can do, once started.
The following topics discuss other security features that are on by default. You’ll only need to look
at them when they are causing problems.
• Configuring Content Security Policy (https://wiki.jenkins-ci.org/display/JENKINS/Configuring+Content+Security+Policy):
protect users of Jenkins from malicious builds
• Markup formatting (https://wiki.jenkins-ci.org/display/JENKINS/Markup+formatting):
protect users of Jenkins from malicious users of Jenkins
Disabling Security
One may accidentally set up a security realm / authorization in such a way that one is no longer
able to reconfigure Jenkins.
When this happens, you can fix this by the following steps:
1. Stop Jenkins (the easiest way to do this is to stop the servlet container).
2. Go to $JENKINS_HOME in the file system and find the config.xml file.
3. Open this file in an editor.
4. Look for the <useSecurity>true</useSecurity> element in this file.
5. Replace true with false.
6. Remove the elements authorizationStrategy and securityRealm.
7. Start Jenkins. When Jenkins comes back, it will be in an unsecured mode where everyone gets
full access to the system.
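For example, the relevant element in config.xml changes like this:

<!-- before -->
<useSecurity>true</useSecurity>
<!-- after -->
<useSecurity>false</useSecurity>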
If this is still not working, try renaming or deleting config.xml.
Managing Jenkins with Chef
Managing Jenkins with Puppet
Scaling Jenkins
This chapter will cover topics related to using and managing large scale Jenkins configurations:
large numbers of users, nodes, agents, folders, projects, concurrent jobs, job results and logs, and
even large numbers of masters.
The audience for this chapter is expert Jenkins users, administrators, and those planning large-scale
installations.
If you are a Jenkins administrator and want to know more about managing Jenkins nodes and
instances, see Managing Jenkins.
For an overview of content in the Jenkins User Handbook, see User Handbook overview.
Appendix
These sections are generally intended for Jenkins administrators and system administrators. Each
section covers a different topic independent of the other sections. They are advanced topics, reference
material, and topics that do not fit into other chapters.
[WARNING] ==== To Contributors: Please consider adding material elsewhere before adding it here.
In fact, topics that do not fit elsewhere may even be out of scope for this handbook. See Contributing
to Jenkins for details of how to contact project contributors and discuss what you want to add. ====
Glossary
General Terms
Agent:: An agent is typically a machine, or container, which connects to a Jenkins master and
executes tasks when directed by the master.

Artifact:: An immutable file generated during a Build or Pipeline run which is archived onto the
Jenkins Master for later retrieval by users.

Build:: Result of a single execution of a Project.

Cloud:: A System Configuration which provides dynamic Agent provisioning and allocation, such as
that provided by the Azure VM Agents or Amazon EC2 plugins.

Core:: The primary Jenkins application (jenkins.war) which provides the basic web UI,
configuration, and foundation upon which Plugins can be built.

Downstream:: A configured Pipeline or Project which is triggered as part of the execution of a
separate Pipeline or Project.

Executor:: A slot for execution of work defined by a Pipeline or Project on a Node. A Node may have
zero or more Executors configured, which corresponds to how many concurrent Projects or Pipelines
are able to execute on that Node.

Fingerprint:: A hash considered globally unique to track the usage of an Artifact or other entity
across multiple Pipelines or Projects.

Folder:: An organizational container for Pipelines and/or Projects, similar to folders on a file
system.

Item:: An entity in the web UI corresponding to either a Folder, Pipeline, or Project.

Job:: A deprecated term, synonymous with Project.

Label:: User-defined text for grouping Agents, typically by similar functionality or capability. For
example, linux for Linux-based agents or docker for Docker-capable agents.

Master:: The central, coordinating process which stores configuration, loads plugins, and renders
the various user interfaces for Jenkins.

Node:: A machine which is part of the Jenkins environment and capable of executing Pipelines or
Projects. Both the Master and Agents are considered to be Nodes.

Project:: A user-configured description of work which Jenkins should perform, such as building a
piece of software.

Pipeline:: A user-defined model of a continuous delivery pipeline; for more, read the Pipeline
chapter in this handbook.

Plugin:: An extension to Jenkins functionality provided separately from Jenkins Core.

Publisher:: Part of a Build, after the completion of all configured Steps, which publishes reports,
sends notifications, etc.

Stage:: A part of a Pipeline used for defining a conceptually distinct subset of the entire Pipeline,
for example: "Build", "Test", and "Deploy". Stages are used by many plugins to visualize or present
Jenkins Pipeline status/progress.

Step:: A single task; fundamentally, steps tell Jenkins what to do inside of a Pipeline or Project.

Trigger:: A criterion for triggering a new Pipeline run or Build.

Update Center:: Hosted inventory of plugins and plugin metadata to enable plugin installation from
within Jenkins.

Upstream:: A configured Pipeline or Project which triggers a separate Pipeline or Project as part of
its execution.

Workspace:: A disposable directory on the file system of a Node where work can be done by a
Pipeline or Project. Workspaces are typically left in place after a Build or Pipeline run completes
unless specific Workspace cleanup policies have been put in place on the Jenkins Master.
Introduction
With over 1,000 plugins and countless versions of said plugins in the open-source community, testing
for all possible conflicts before upgrading one or more production Jenkins instances is simply not
feasible. While Jenkins itself will warn of potential incompatibility if it detects that a plugin requires
a newer version of the Jenkins core, there is no automatic way to detect conflicts between plugins
or to automatically quantify the impact of upgrading a plugin before doing the upgrade.
Instead, Jenkins administrators should be made responsible for manually testing their own instance’s
plugin and core version updates before performing them on the production instance. This kind of
testing requires a copy or “test instance” of the production server to act as the sandbox for such tests
and can prevent production master downtime.
Test Instances
A test master is a Jenkins master used solely for testing configurations and plugins in a non-
production environment. For organizations with a mission-critical Jenkins instance, having a test
master is highly recommended.
Upgrading or downgrading either the Jenkins core or any plugins can sometimes have the unin-
tended side effect of crippling another plugin’s functionality or even crashing a master. As of today,
there is no better way to pre-test for such catastrophic conflicts than with a test master.
Test masters should have identical configurations, jobs, and plugins as the production master so
that test upgrades will most closely resemble the outcomes of a similar change on your production
master. For example, installing the Folders plugin while running a version of the Jenkins core older
than 1.554.1 will cause the instance to crash and be inaccessible until the plugin is manually
uninstalled from the plugin folder.
There are many methods for setting up a test instance, but the commonality between them all is
that the $JENKINS_HOME between them is nearly identical. Whether this means that most of
the production master's $JENKINS_HOME folder is version controlled in a service like GitHub and
mounted manually or programmatically to a test server or Docker container, the result is nearly the
same.
It is ideal to first ensure the master is idle (no running or queued jobs) before attempting to
create a test master.
With GitHub + manual commands
You will simply need to open up your command-line interface, "cd" to the folder that contains
the $JENKINS_HOME directory for your production master, and run the "git init" command. For the
location of this folder, please refer to section 3.
It is recommended that before running the "git add" command you create a good .gitignore file.
This file will prevent you from accidentally version-controlling a file that is too large to be stored
in GitHub (anything >50 MB).
Here is an example .gitignore file for a Jenkins master running on OS X:
.DS_Store
.AppleDouble
.LSOverride
Icon
._*
.Spotlight-V100
.Trashes
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk
*.log
*.tmp
*.old
*.jar
*.son
.Xauthority
.bash_history
.bash_profile
.fontconfig
.gitconfig
.gem
.lesshst
.mysql_history
.owner
.ri
.rvm
.ssh
.viminfo
.vnc
bin/
tools/
**/.owner
**/queue.xml
**/fingerprints/
**/shelvedProjects/
**/updates/
**/jobs/*/workspace/
**/war/
/tools/
**/custom_deps/
**/slave/workspace/
**/slave-slave.log.*
cache/
fingerprints/
**/wars/jenkins*.war
*.log
*.zip
*.rrd
*.gz
Once you have a good .gitignore file, you can run the following git commands to commit your
$JENKINS_HOME to a git repository like GitHub:
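A minimal sketch of such commands; the remote repository URL is a placeholder:

git add .
git commit -m "Initial commit of production JENKINS_HOME"
git remote add origin git@github.com:example-org/jenkins-home.git
git push -u origin master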
Now you can install Jenkins on a fresh instance and "git clone" this $JENKINS_HOME from the git
repository to your new instance. You will need to replace the files in the new instance with your
version-controlled files to complete the migration, whether through scripts or through a drag-and-drop
process.
Once this is done, you will need to restart the new test master’s Jenkins service or reload its
configuration from the Jenkins UI (“Manage Jenkins” >> “Reload Configuration”).
With GitHub + Docker (Linux-only)
When it comes to version controlling your $JENKINS_HOME, just follow the instructions in the
previous section.
The next step will be to create a Docker image with configurations identical to your production
instance's: operating system (Linux-only), installed libraries/tools, and open ports. This can be
accomplished through Dockerfiles.
You will then just need to create mounted storage on your Docker server with a clone of your
version-controlled $JENKINS_HOME and a simple image to clone the $JENKINS_HOME into.
For example, we can create a Docker image called jenkins-storage and version control our
$JENKINS_HOME in a GitHub repository known as "demo-joc". The "jenkins-storage" Docker image can be
built from a Dockerfile similar to this:
FROM debian:jessie
RUN apt-get update && apt-get -y upgrade
RUN apt-get install -y --no-install-recommends \
    openjdk-7-jdk \
    openssh-server \
    curl \
    ntp \
    ntpdate \
    git \
    maven \
    less \
    vim
RUN printf "AddressFamily inet" >> /etc/ssh/ssh_config
ENV MAVEN_HOME /usr/bin/mvn
ENV GIT_HOME /usr/bin/git
# Install Docker client
RUN curl https://get.docker.io/builds/Linux/x86_64/docker-latest -o /usr/local/bin/docker
RUN chmod +x /usr/local/bin/docker
RUN groupadd docker
# Create Jenkins user
RUN useradd jenkins -d /home/jenkins
RUN echo "jenkins:jenkins" | chpasswd
RUN usermod -a -G docker jenkins
# Make directories for [masters] JENKINS_HOME, jenkins.war lib and [slaves] remote FS root, ssh privilege separation directory
RUN mkdir /usr/lib/jenkins /var/lib/jenkins /home/jenkins /var/run/sshd
# Set permissions
RUN chown -R jenkins:jenkins /usr/lib/jenkins /var/lib/jenkins /home/jenkins
# Create data folder for cloning
RUN ["mkdir", "/data"]
RUN ["chown", "-R", "jenkins:jenkins", "/data"]
USER jenkins
VOLUME ["/data"]
WORKDIR /data
# USER jenkins
Creating mounted storage for containers would just require something similar to the following
command:
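A hypothetical sketch, assuming the jenkins-storage image and demo-joc repository from above:

# Create a data container whose /data volume holds the cloned JENKINS_HOME
docker run --name jenkins-home-data jenkins-storage \
    git clone https://github.com/example-org/demo-joc.git .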
And Jenkins images that rely on the mounted storage for their $JENKINS_HOME will then need to
point to the mounted volume:
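Again a hypothetical sketch; the Jenkins image name and port mapping are placeholders:

# Run Jenkins with JENKINS_HOME pointing at the shared /data volume
docker run -p 8080:8080 --volumes-from jenkins-home-data \
    -e JENKINS_HOME=/data jenkins/jenkins:lts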
• For bad plugins, you can roll back the plugin from the UI by going to the plugin manager
("Manage Jenkins" >> "Manage Plugins") and going to the "Installed" tab. Jenkins will show a
"downgrade" button next to any plugins that can be downgraded.
• If the UI is unavailable, then enter your $JENKINS_HOME
The Jenkins Metrics Plugin¹⁰⁵ is an open-source plugin which exposes metrics on a Jenkins instance.
Metrics are exposed using the Dropwizard Metrics API¹⁰⁶.
Metrics exposed
The exact list of exposed metrics varies depending on your installed plugins. To get a full
list of available metrics for your own master, run the following script on your master's script
console (https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+Script+Console):
for (j in Jenkins.instance.getExtensionList(jenkins.metrics.api.MetricProvider.class)) {
    for (m in j.getMetricSet()) {
        for (i in m.metrics) {
            println i.getKey()
        }
    }
}
CloudBees has documented¹⁰⁷ the full list of exposed metrics, along with in-depth explanations of
each.
Metrics usage
Metrics are protected by a set of permissions for viewing, accessing the thread dump, and posting
a health check. The Metrics Operational Menu can be accessed via the web UI by visiting
<jenkins-url>/metrics/currentUser, and the 4 menu options (Metrics, Ping, Threads, Healthcheck) lead to a
JSON string containing the requested metrics or thread dump.
Access to the Metrics Servlet can also be provided by issuing API keys. API keys can be configured
from the Jenkins global configuration screen (<jenkins-url>/configure) under the "Metrics" section.
Multiple access keys can be generated, and the permissions associated with those keys can also be
restricted at this level.
¹⁰⁴https://wiki.jenkins-ci.org/display/JENKINS/Obtaining+a+thread+dump
¹⁰⁵https://wiki.jenkins-ci.org/display/JENKINS/Metrics+Plugin
¹⁰⁶https://dropwizard.github.io/metrics/3.1.0
¹⁰⁷https://documentation.cloudbees.com/docs/cje-user-guide/monitoring-sect-reference.html#monitoring-sect-reference-metrics
More information on Metrics basic and advanced usages can be found here¹⁰⁸.
Introduction
As an organization matures from a continuous delivery standpoint, its Jenkins requirements will
similarly grow. This growth is often reflected in the Jenkins master’s architecture, whether that be
“vertical” or “horizontal” growth.
Vertical growth is when a master’s load is increased by having more configured jobs or orchestrating
more frequent builds. This may also mean that more teams are depending on that one master.
Horizontal growth is the creation of additional masters within an organization to accommodate new
teams or projects, rather than adding these things to an existing single master.
There are potential pitfalls associated with each approach to scaling Jenkins, but with careful
planning, many of them can be avoided or managed. Here are some things to consider when choosing
a strategy for scaling your organization’s Jenkins instances:
To bring your Jenkins infrastructure back up to speed, you will need to enhance the master
(increasing memory, number of CPUs, etc.). During the time it takes to maintain and upgrade the
machine, the master together with the entire build environment will be down, jobs will be stopped,
and the whole Jenkins infrastructure will be unusable.
Scaling Jenkins in such a scenario would be extremely painful and would introduce many "idle"
periods where all the resources assigned to your build environment are useless.
Moreover, executing jobs on the master's executors introduces a "security" issue: the "jenkins" user
that Jenkins uses to run the jobs would have full permissions on all Jenkins resources on the master.
This means that, with a simple script, a malicious user could gain direct access to private information
whose integrity and privacy could therefore not be guaranteed.
For all these reasons Jenkins supports the "master/agent" mode, where the workload of building
projects is delegated to multiple agents.
An agent is a machine set up to offload projects from the master. The method with which builds are
scheduled depends on the configuration given to each project. For example, some projects may be
configured to “restrict where this project is run” which ties the project to a specific agent or set of
labeled agents. Other projects which omit this configuration will select an agent from the available
pool in Jenkins.
In a distributed builds environment, the Jenkins master will use its resources to only handle HTTP
requests and manage the build environment. Actual execution of builds will be delegated to the
agents. With this configuration it is possible to horizontally scale an architecture, which allows a
single Jenkins installation to host a large number of projects and build environments.
In order for a machine to be recognized as an agent, it needs to run a specific agent program that
establishes bi-directional communication with the master.
There are different ways to establish a connection between master and agent:
• The SSH connector: Configuring an agent to use the SSH connector is the
preferred and the most stable way to establish master-agent communication. Jenkins has a built-in
SSH client implementation. This means that the Jenkins master can easily communicate with any
machine with an SSH server installed. The only requirement is that the public key of the master is
part of the set of the authorized keys on the agent. Once the host and SSH key is defined for a new
agent, Jenkins will establish a connection to the machine and bootstrap the agent process.
1. Manually: by navigating to the Jenkins master URL in a browser on the agent. Once the Java Web
Start icon is clicked, the agent will be launched on the machine. The downside of this approach is
that the agents cannot be centrally managed by the Jenkins master, and each stop/start/update of the
agent needs to be executed manually on the agent's machine in versions of Jenkins older than 1.611.
This approach is convenient when the master cannot instantiate the connection with the client, for
example: with agents running inside a firewalled network connecting to a master located outside
the firewall.
2. As a service: first you'll need to manually launch the agent using the above method. After manually
launching it, jenkins-slave.exe and jenkins-slave.xml will be created in the agent's work directory.
Now go to the command line and execute the following command:

sc.exe create "<serviceKey>" start= auto binPath= "<path to jenkins-slave.exe>" DisplayName= "<service display name>"
<serviceKey> is the name of the registry key to define this agent service and <service display name>
is the label that will identify the service in the Service Manager interface.
To ensure that restarts are automated, you will need to download an agent .jar newer than v2.37 and
copy it to a permanent location on the machine. The .jar file can be found at:

http://<your-jenkins-host>/jnlpJars/slave.jar
If running a version of Jenkins newer than 1.559, the .jar will be kept up to date each time it connects
to the master.
• The JNLP-HTTP connector: this approach is quite similar to the JNLP-TCP Java Web Start
approach, with the difference in this case being that the agent is executed as headless and the
connection can be tunneled via HTTP(s). The exact command can be found on your JNLP agent's
configuration page:
[Image: JNLP agent launch command]
This approach is convenient for execution as a daemon on Unix.
Windows agent set-up can either follow the standard SSH and JNLP approach or use a more
Windows-specific configuration approach. Windows agents have the following options:
The Jenkins Global configuration page lets you specify the tools needed during the builds (e.g. Ant,
Maven, Java, etc.).
When defining a tool, it is possible to create a pointer to an existing installation by giving the
directory where the program is expected to be on the slave. Another option is to let Jenkins take
care of the installation of a specific version in the given location. It is also possible to specify more
than one installation for the same tool since different jobs may need different versions of the same
tool.
The pre-compiled “Default” option calls whatever is already installed on the slave and exists in the
machine PATH, but this will return a failure if the tool was not already installed and its location
was not added to the PATH system variable.
One best practice to avoid this failure is to configure a job with the assumption that the target slave
does not have the necessary tools installed, and to include the tools’ installation as part of the build
process.
As mentioned previously, slaves should be interchangeable and standardized in order to make them
sharable and to optimize resource usage. Slaves should not be customized for a particular set of jobs,
nor for a particular team.
Lately Jenkins has become more and more popular not only in CI but also in CD, which means that it
must orchestrate jobs and pipelines which involve different teams and technical profiles: developers,
QA people and Dev-Ops people.
In such a scenario, it might make sense to create customized and dedicated slaves: different tools are
usually required by different teams (e.g. Puppet/Chef for the Ops team), and teams' credentials are
usually stored on the slave in order to ensure their protection and privacy.
In order to ensure the execution of a job on a single slave or group of slaves only (e.g. iOS builds
on OS X slaves only), it is possible to tie the job to the slave(s) by specifying the slave's label in the
job configuration page. Note that the restriction has to be replicated in every single job to be tied,
and that the slave won't be protected from being used by other teams.
Cloud build resources can be a solution for a case when it is necessary to maintain a reasonably
small cluster of slaves on-premise while still providing new build resources when needed.
In particular it is possible to offload the execution of the jobs to slaves in the cloud thanks to ad-hoc
plugins which will handle the creation of the cloud resources together with their destruction when
they are not needed anymore:
Designing the best Jenkins architecture for your organization is dependent on how you stratify the
development of your projects and can be constrained by limitations of the existing Jenkins plugins.
The 3 most common forms of stratifying development by masters are:
1. By environment (QA, DEV, etc.) - With this strategy, Jenkins masters are populated by jobs based
on what environment they are deploying to.
• Pros
** Can tailor plugins on masters to be specific to that environment's needs
** Can easily restrict access to an environment to only users who will be using that environment
• Cons
** Reduces ability to create pipelines
** No way to visualize the complete flow across masters
** Outage of a master will block flow of all products
2. By org chart - This strategy is when masters are assigned to divisions within an organization.
• Pros
** Can tailor plugins on masters to be specific to that team's needs
** Can easily restrict access to a division's projects to only users who are within that division
• Cons
** Reduces ability to create cross-division pipelines
** No way to visualize the complete flow across masters
** Outage of a master will block flow of all products
3. Group masters by product lines - When a group of products, with only one critical product in
each group, gets its own Jenkins master.
• Pros
** Entire flows can be visualized because all steps are on one master
** One master's downtime only affects a small subset of products
• Cons
** A strategy for restricting permissions must be devised to keep all users from having access to all
items on a master.
When evaluating these strategies, it is important to weigh them against the vertical and horizontal
scaling pitfalls discussed in the introduction.
Another note is that a smaller number of jobs translates to faster recovery from failures and more
importantly a higher mean time between failures.
Having the best possible estimate of necessary configurations for a Jenkins installation allows an
organization to get started on the right foot with Jenkins and reduces the number of configuration
iterations needed to achieve an optimal installation. The challenge for Jenkins architects is that the
true limit of vertical scaling on a Jenkins master is constrained by whatever hardware is in place for
the master, as well as by harder-to-quantify factors like the types of builds and tests that will be run
on the build nodes.
There is a way to estimate roughly how many masters, jobs, and executors will be needed based on
build needs and the number of developers served. These equations assume that the Jenkins master
will have 5 cores with one core per 100 jobs (500 total jobs/master) and that teams will be divided
into groups of 40.
If you have information on the actual number of available cores on your planned master, you can
adjust the "number of masters" equations accordingly.
The equation for estimating the number of masters and executors needed when the number of
configured jobs is known is as follows:

masters = number of jobs / 500
executors = number of jobs * 0.03
The equation for estimating the maximum number of jobs, masters, and executors needed for an
organization based on the number of developers is as follows:

number of jobs = number of developers * 3.333
number of masters = number of jobs / 500
number of executors = number of jobs * 0.03
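As a quick worked example (the developer count is invented for illustration): an organization with
400 developers would need roughly 400 * 3.333 ≈ 1333 jobs, 1333 / 500 ≈ 3 masters, and
1333 * 0.03 ≈ 40 executors.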
These numbers will provide a good starting point for a Jenkins installation, but adjustments to actual
installation size may be needed based on the types of builds and tests that an installation runs.
It is also recommended to choose a master with consideration for future growth in the number of
plugins or jobs stored in your master's $JENKINS_HOME. Storage is cheap and Jenkins does not
require fast disk access to run well, so it is more advantageous to invest in a larger machine for your
master over a faster one.
Different operating systems for the Jenkins master will also allow for different approaches to
expandable storage:
• Spanned Volumes on Windows - On NTFS devices like Windows, you can create a spanned volume
that allows you to add new volumes to an existing one, but have them behave as a single volume.
To do this, you will have to ensure that Jenkins is installed on a separate partition so that it can be
converted to a spanned volume later.
• Logical Volume Manager for Linux - LVM manages disk drives and allows logical volumes to be
resized on the fly. Many distributions of Linux use LVM when they are installed, but Jenkins should
have its own LVM setup.
• ZFS for Solaris - ZFS is even more flexible than LVM and spanned volumes, and just requires that
the $JENKINS_HOME be on its own filesystem. This makes it easier to create snapshots, backups, etc.
• Symbolic Links - For systems with existing Jenkins installations which cannot use any of the
above-mentioned methods, symbolic links (symlinks) may be used instead to store job folders on
separate volumes with symlinks to those directories.
Additionally, to easily prevent a $JENKINS_HOME folder from becoming bloated, make it mandatory
for jobs to discard build records after a specific time period has passed and/or after a specific
number of builds have been run. This policy can be set on a job's configuration page.
Windows
If you install Jenkins with the Windows installer, Jenkins will be installed as a service and the default
$JENKINS_HOME will be "C:\Program Files (x86)\jenkins".
You can edit the location of your $JENKINS_HOME by opening the jenkins.xml file and editing
the $JENKINS_HOME variable, or by going to the "Manage Jenkins" screen, clicking on the "Install as
Windows Service" option in the menu, and then editing the "Installation Directory" field to point to
another existing directory.
Mac OSX
If you install Jenkins with the OS X installer, you can find and edit the location of your
$JENKINS_HOME by editing the "Macintosh HD/Library/LaunchDaemons" file's $JENKINS_HOME
property.
By default, the $JENKINS_HOME will be set to "Macintosh HD/Users/Shared/Jenkins".
Ubuntu/Debian
If you install Jenkins using a Debian package, you can find and edit the location of your
$JENKINS_HOME by editing your "/etc/default/jenkins" file.
By default, the $JENKINS_HOME will be set to "/var/lib/jenkins" and your $JENKINS_WAR will point
to "/usr/share/jenkins/jenkins.war".
Red Hat/CentOS/Fedora
If you install Jenkins as an RPM package, the default $JENKINS_HOME will be "/var/lib/jenkins".
You can edit the location of your $JENKINS_HOME by editing the "/etc/sysconfig/jenkins" file.
openSUSE
If installing Jenkins as a package using zypper, you'll be able to edit the $JENKINS_HOME by editing
the "/etc/sysconfig/jenkins" file.
The default location for your $JENKINS_HOME will be set to "/var/lib/jenkins" and the $JENKINS_WAR
home will be in "/usr/lib/jenkins".
FreeBSD
If installing Jenkins using a port, the $JENKINS_HOME will be located in whichever directory you
run the "make" command in. It is recommended to create a "/usr/ports/devel/jenkins" folder and
compile Jenkins in that directory.
You will be able to edit the $JENKINS_HOME by editing the "/usr/local/etc/jenkins" file.
OpenBSD
If installing Jenkins using a package, the $JENKINS_HOME is set by default to "/var/jenkins".
If installing Jenkins using a port, the $JENKINS_HOME will be located in whichever directory you
run the "make" command in. It is recommended to create a "/usr/ports/devel/jenkins" folder and
compile Jenkins in that directory.
You will be able to edit the $JENKINS_HOME by editing the "/usr/local/etc/jenkins" file.
Solaris/OpenIndiana
The Jenkins project voted on September 17, 2014 to discontinue Solaris packages.
Anatomy of a $JENKINS_HOME
JENKINS_HOME
+- config.xml (Jenkins root configuration file)
+- *.xml (other site-wide configuration files)
+- identity.key (RSA key pair that identifies an instance)
+- secret.key (deprecated key used for some plugins' secure operations)
+- secret.key.not-so-secret (used for validating $JENKINS_HOME creation date)
+- userContent (files served under your https://server/userContent/)
+- secrets (root directory for the secret+key for credential decryption)
   +- hudson.util.Secret (used for encrypting some Jenkins data)
   +- master.key (used for encrypting the hudson.util.Secret key)
   +- InstanceIdentity.KEY (used to identify this instance)
+- fingerprints (stores fingerprint records, if any)
+- plugins (root directory for all Jenkins plugins)
   +- [PLUGINNAME] (sub directory for each plugin)
      +- META-INF (subdirectory for plugin manifest + pom.xml)
      +- WEB-INF (subdirectory for plugin jar(s) and licenses.xml)
   +- [PLUGINNAME].jpi (.jpi or .hpi file for the plugin)
+- jobs (root directory for all Jenkins jobs)
   +- [JOBNAME] (sub directory for each job)
      +- config.xml (job configuration file)
      +- workspace (working directory for the version control system)
      +- latest (symbolic link to the last successful build)
      +- builds (stores past build records)
         +- [BUILD_ID] (subdirectory for each build)
            +- build.xml (build result summary)
            +- log (log file)
            +- changelog.xml (change log)
   +- [FOLDERNAME] (sub directory for each folder)
      +- config.xml (folder configuration file)
      +- jobs (sub directory for all nested jobs)
Glossary 203
[[segrate-data]]
It is possible to customize some of the layout to better separate pure job configurations
from less durable data, like build data or logs. footnote:[These switches are used to configure out-of-
the-box Jenkins Essentials¹⁰⁹ instances.]
CAUTION: No data migration is handled by Jenkins when using these settings. You either want
to use them from the beginning, or make sure you take into consideration which data you would
like to be moved to the right place before using these switches.
All of your Jenkins-specific configurations that need to be backed up will live in the
$JENKINS_HOME, but it is a best practice to back up only a subset of those files and folders.
Below are a few guidelines to consider when planning your backup strategy.
.Exclusions
When it comes to creating a backup, it is recommended to exclude the following folders
to reduce the size of your backup:
[literal]
/war (the exploded Jenkins war directory)
/cache (downloaded tools)
/tools (extracted tools)
These folders will automatically be recreated the next time a build runs or Jenkins is launched.
.Jobs and Folders
Your job or folder configurations, build histories, archived artifacts, and workspace will exist entirely
within the jobs folder.
The jobs directory, whether nested within a folder or at the root level, is laid out as follows:
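[literal]
+- jobs (root directory for all Jenkins jobs)
    +- [JOBNAME] (sub directory for each job)
        +- config.xml (job configuration file)
        +- workspace (working directory for the version control system)
        +- latest (symbolic link to the last successful build)
        +- builds (stores past build records)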
¹⁰⁹https://jenkins.io/blog/2018/04/06/jenkins-essentials/
If you only need to back up your job configurations, you can opt to back up only the config.xml for
each job. Generally build records and workspaces do not need to be backed up, as workspaces will
be re-created when a job is run and build records are only as important as your organization deems
them.
.System configurations
Your instance’s system configurations exist in the root level of the $JENKINS_HOME folder:
[literal]
+- config.xml (Jenkins root configuration file)
+- *.xml (other site-wide configuration files)
The config.xml is the root configuration file for your Jenkins. It includes configurations for the paths
of installed tools, workspace directory, and slave agent port.
Any .xml file other than config.xml in the root Jenkins folder is a global configuration file for an
installed tool or plugin (e.g. Maven, Git, or Ant). This includes the credentials.xml if the Credentials
plugin is installed.
If you only want to back up your core Jenkins configuration, you only need to back up the config.xml.
.Plugins
Your instance’s plugin files (.hpi and .jpi) and any of their dependent resources (help files, pom.xml
files, etc) will exist in the plugins folder in $JENKINS_HOME.
[literal]
+- plugins (root directory for all Jenkins plugins)
    +- [PLUGINNAME] (sub directory for each plugin)
        +- META-INF (subdirectory for plugin manifest + pom.xml)
        +- WEB-INF (subdirectory for plugin jar(s) and licenses.xml)
    +- [PLUGINNAME].jpi (.jpi or .hpi file for the plugin)
It is recommended to back up the entirety of the plugins folder (.hpi/.jpi files and folders).
.Other data
Other data that you are recommended to back up includes the contents of your secrets folder, your
identity.key, your secret.key, and your secret.key.not-so-secret file.
[literal]
+- identity.key (RSA key pair that identifies an instance)
+- secret.key (used for various secure Jenkins operations)
+- secret.key.not-so-secret (used for validating $JENKINS_HOME creation date)
+- userContent (files served in https://server/userContent/)
+- secrets (directory for the secret+key decryption)
    +- hudson.util.Secret (used for encrypting some Jenkins data)
    +- master.key (used for encrypting the hudson.util.Secret key)
    +- InstanceIdentity.KEY (used to identify this instance)
The identity.key is an RSA key pair that identifies and authenticates the current Jenkins instance.
The secret.key is used to encrypt plugin and other Jenkins data, and to establish a secure connection
between a master and slave.
The secret.key.not-so-secret file is used to validate when the $JENKINS_HOME was created. It is
also meant to be a flag that the secret.key file is a deprecated way of encrypting information.
The files in the secrets folder are used by Jenkins to encrypt and decrypt your instance’s stored
credentials, if any exist. Loss of these files will prevent recovery of any stored credentials.
hudson.util.Secret is used for encrypting some Jenkins data like the credentials.xml, while the
master.key is used for encrypting the hudson.util.Secret key. Finally, the InstanceIdentity.KEY is
used to identify this instance and for producing digital signatures.
In the case of a total machine failure, it is important to ensure that there is a plan in place to get
Jenkins both back online and in its last good state.
If a high availability setup has not been enabled and no backup of that master’s filesystem has
been taken, then a failure of the machine running Jenkins means that all historical build data
and artifacts, job and system configurations, etc. will be lost and the lost configurations will need
to be recreated on a new instance.
1. Backup policy - In addition to creating backups using the previous section’s backup guide, it is
important to establish a policy for selecting which backup should be used when restoring a downed
master.
2. Restoring from a backup - A plan must be put in place on whether the backup should be
restored manually or with scripts when the primary goes down.
Several factors affect how quickly an organization can move to continuous delivery. These
factors include componentizing the application under development, automating the entire pipeline
(within reasonable limits) and freeing up contentious resources.
.Step 1: Make each master highly available
Each Jenkins master needs to be set up such that it is part of a Jenkins cluster.
A proxy (typically HAProxy or F5) then fronts the primary master. The proxy’s job is to continuously
monitor the primary master and route requests to the backup if the primary goes down. To make
the infrastructure more resilient, you can have multiple backup masters configured.
.Step 2: Enable security
Set up an authentication realm that Jenkins will use for its user database.
TIP: If you are trying to set up a proof-of-concept, it is recommended to use the https://wiki.jenkins-
ci.org/display/JENKINS/Mock+Security+Realm+Plugin[Mock Security Realm plugin] for authenti-
cation.
.Step 3: Add build nodes (slaves) to master
Add build servers to your master to ensure that actual build execution happens off of the
master, which is meant to be an orchestration hub, and on a “dumb” machine with sufficient
memory and I/O for a given job or test.
.Step 4: Setup a test instance
A test instance is typically used to test new plugin updates. When a plugin is ready to be used, it
should be installed into the main production update center.
Hardware Recommendations
Introduction
Sizing a Jenkins environment depends on a number of factors, which makes it a very inexact science.
Achieving an optimal configuration requires some experience and experimentation. It is, however,
possible to make a smart approximation to start - especially when designing with Jenkins’ best
practices in mind.
The following outlines these factors and how you can account for them when sizing your
configuration. You are also provided sample configurations and the hardware behind some of the
largest Jenkins installations presented at a Jenkins Scalability Summit.
Your Jenkins master will be serving HTTP requests and storing all of the important information for
your Jenkins instance in its $JENKINS_HOME folder (configurations, build histories, and plugins).
More information on sizing masters based on organizational needs can be found in the Architecting
for Scale section.
The amount of memory Jenkins needs depends on many factors, which is why the RAM
allotted for it can range from 200 MB for a small installation to 70+ GB for a single, massive
Jenkins master. However, you should be able to estimate the RAM required based on your project
build needs.
Each build node connection will take 2-3 threads, which equals about 2 MB or more of memory. You
will also need to factor in CPU overhead for Jenkins if there are a lot of users who will be accessing
the Jenkins user interface.
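As a rough worked example using the figures above, a master with 100 build node connections would need on the order of 200-300 threads and at least 200 MB of memory for those connections alone, before accounting for builds or UI traffic.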
It is generally a bad practice to allocate executors on a master, as builds can quickly overload a
master’s CPU/memory/etc and crash the instance, causing unnecessary downtime. Instead, it is
advisable to set up agents that the Jenkins master can delegate jobs to, keeping the bulk of the work
off of the master itself.
[[fungibility]] Agents are typically generic x86 machines with enough memory to run specific build
types. The agent’s configuration depends on the builds it will be used for and on the tools required
by the same builds.
Configuring a machine to act as an agent inside your infrastructure can be tedious and time
consuming. This is especially true when the same set-up has to be replicated on a large pool
of agents. Because of this, it is ideal to have fungible agents, which are agents that are easily
replaceable. Agents should be generic for all builds rather than customized for a specific job or a
set of jobs. The more generic the agents, the more easily they are interchanged, which in turn
allows for a better use of resources and a reduced impact on productivity if some agents suffer
an outage. Andrew Bayer introduced this concept of “fungibility” as applied to agents during
his presentation https://www.slideshare.net/andrewbayer/seven-habits-of-highly-effective-jenkins-
users-2014-edition[“Seven Habits of Highly Effective Jenkins Users” at the Jenkins User Conference
(Europe, 2014)].
The more automated the environment configuration is, the easier it is to replicate a configuration
onto a new agent machine. Tools for configuration management or a pre-baked image can be
excellent solutions to this end. Containers and virtualization are also popular tools for creating
generic agent environments.
More information on estimating the number of executors needed in a given environment can be
found in the Architecting for Scale section.
Pipeline as Code
Introduction
Pipeline as Code describes a set of features that allow Jenkins users to define pipelined job processes
with code, stored and versioned in a source repository. These features allow Jenkins to discover,
manage, and run jobs for multiple source repositories and branches – eliminating the need for
manual job creation and management.
To use Pipeline as Code, projects must contain a file named Jenkinsfile in the repository root,
which contains a “Pipeline script.”
Additionally, one of the enabling job types, a Multibranch Pipeline project or an Organization
Folder (both described below), needs to be configured in Jenkins.
The Jenkinsfile
Presence of the Jenkinsfile in the root of a repository makes it eligible for Jenkins to automatically
manage and execute jobs based on repository branches.
The Jenkinsfile should contain a Pipeline script, specifying the steps to execute the job. The script
has all the power of Pipeline available, from something as simple as invoking a Maven builder, to
a series of interdependent steps, which have coordinated parallel execution with deployment and
validation phases.
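As a minimal sketch, a Jenkinsfile invoking a Maven build might look like the following (it assumes a Maven installation configured in Jenkins under the name “M3”):

node {
    // Check out the same commit that this Jenkinsfile came from
    checkout scm
    // Locate the Maven installation configured in Jenkins as 'M3'
    def mvnHome = tool 'M3'
    // Run the build in batch mode
    sh "${mvnHome}/bin/mvn -B clean verify"
}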
A simple way to get started with Pipeline is to use the Snippet Generator available in the
configuration screen for a Jenkins Pipeline job. Using the Snippet Generator, you can create a Pipeline
script as you might through the dropdowns in other Jenkins jobs.
Folder Computation
Multibranch Pipeline projects and Organization Folders extend the existing folder functionality by
introducing ‘computed’ folders. Computed folders automatically run a process to manage the folder
contents. In the case of Multibranch Pipeline projects, this computation creates a child item for each
eligible branch of the repository. For Organization Folders, computation populates child items for
repositories as individual Multibranch Pipelines.
Folder computation may happen automatically via webhook callbacks, as branches and repositories
are created or removed. Computation may also be triggered by the Build Trigger defined in the
configuration, which automatically runs a computation task after a period of inactivity (by default,
one day).
[role=”image-border”] image::folder-computation-build-trigger-schedule.png[scaledwidth=”75%”]
Information about the last execution of the folder computation is available in the Folder Computation
section.
[role=”image-border”] image::folder-computation-main.png[scaledwidth=”75%”,width=”75%”]
The log from the last attempt to compute the folder is available from this page. If folder computation
doesn’t result in an expected set of repositories, the log may have useful information to diagnose
the problem.
[role=”image-border”] image::folder-computation-log.png[scaledwidth=”75%”,width=”75%”]
Configuration
Both Multibranch Pipeline projects and Organization Folders have configuration options to allow
precise selection of repositories. These features also allow selection of two types of credentials to
use when connecting to the remote systems:
* scan credentials, which are used for accessing the GitHub or Bitbucket APIs
* checkout credentials, which are used when the repository is cloned from the remote system;
it may be useful to choose an SSH key or “- anonymous -”, which uses the default
credentials configured for the OS user
IMPORTANT: If you are using a GitHub Organization, you should create a GitHub access token¹¹² to
use, to avoid storing your password in Jenkins and to prevent any issues when using the GitHub API.
When using a GitHub access token, you must use standard Username with password credentials,
where the username is the same as your GitHub username and the password is your access token.
Multibranch Pipeline projects are one of the fundamental enabling features for Pipeline as Code.
Changes to the build or deployment procedure can evolve with project requirements and the job
always reflects the current state of the project. It also allows you to configure different jobs for
different branches of the same project, or to forgo a job if appropriate. The Jenkinsfile in the root
directory of a branch or pull request identifies a multibranch project.
NOTE: Multibranch Pipeline projects expose the name of the branch being built with the
BRANCH_NAME environment variable and provide a special checkout scm Pipeline command, which is
guaranteed to check out the specific commit that the Jenkinsfile originated from. If the Jenkinsfile needs
to check out the repository for any reason, make sure to use checkout scm, as it also accounts for
alternate origin repositories to handle things like pull requests.
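For example, a Jenkinsfile might use this variable to decide whether to deploy (a sketch; ./deploy.sh is a hypothetical deployment script and mvn is assumed to be on the agent’s PATH):

node {
    // checkout scm checks out the exact commit this Jenkinsfile came from
    checkout scm
    sh 'mvn -B clean verify'
    // Deploy only when building the master branch
    if (env.BRANCH_NAME == 'master') {
        sh './deploy.sh'   // hypothetical deployment script
    }
}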
To create a Multibranch Pipeline, go to: New Item -> Multibranch Pipeline. Configure the SCM source
as appropriate. There are options for many different types of repositories and services including Git,
Mercurial, Bitbucket, and GitHub. If using GitHub, for example, click Add source, select GitHub and
configure the appropriate owner, scan credentials, and repository.
Additional configuration options are also available to Multibranch Pipeline projects.
After configuring these items and saving the configuration, Jenkins will automatically scan the
repository and import appropriate branches.
¹¹²https://github.com/settings/tokens/new?scopes=repo,public_repo,admin:repo_hook,admin:org_hook&description=Jenkins+Access
Organization Folders
Organization Folders offer a convenient way to let Jenkins automatically manage which
repositories are included in Jenkins. Particularly, if your organization utilizes GitHub
Organizations or Bitbucket Teams, any time a developer creates a new repository with a Jenkinsfile,
Jenkins will automatically detect it and create a Multibranch Pipeline project for it. This alleviates
the need for administrators or developers to manually create projects for these new repositories.
To create an Organization Folder in Jenkins, go to: New Item -> GitHub Organization or New
Item -> Bitbucket Team and follow the configuration steps for each item, making sure to specify
appropriate Scan Credentials and a specific owner for the GitHub Organization or Bitbucket Team
name, respectively.
Other options available are:
• Repository name pattern - a regular expression to specify which repositories are included
• API endpoint - an alternate API endpoint to use with a self-hosted GitHub Enterprise installation
• Checkout credentials - alternate credentials to use when checking out the code (cloning)
After configuring these items and saving the configuration, Jenkins will automatically scan the
organization and import appropriate repositories and resulting branches.
Computed folders can remove items immediately or leave them based on a desired retention strategy.
By default, items will be removed as soon as the folder computation determines they are no longer
present. If your organization requires these items remain available for a longer period of time, simply
configure the Orphaned Item Strategy appropriately. It may be useful to keep items in order to
examine build results of a branch after it’s been removed, for example.
[role=”image-border”] image::orphaned-item-strategy.png[scaledwidth=”75%”]
You may also configure an icon to use for the folder display. For example, it might be useful to
display an aggregate health of the child builds. Alternately, you might reference the same icon you
use in your GitHub organization account.
[role=”image-border”] image::folder-icon.png[scaledwidth=”75%”]
Example
To demonstrate using an Organization Folder to manage repositories, we’ll use the fictitious
organization CloudBeers, Inc.
Go to New Item. Enter ‘cloudbeers’ for the item name. Select GitHub Organization and click OK.
[role=”image-border”] image::screenshot1.png[scaledwidth=”75%”]
Optionally, enter a better descriptive name for the Description, such as ‘CloudBeers GitHub’. In the
Repository Sources section, complete the section for “GitHub Organization”. Make sure the owner
matches the GitHub Organization name exactly, in our case it must be: cloudbeers. This defaults to
the same value that was entered for the item name in the first step. Next, select or add new “Scan
credentials” - we’ll enter our GitHub username and access token as the password.
[role=”image-border”] image::screenshot2.png[scaledwidth=”75%”]
After saving, the “Folder Computation” will run to scan for eligible repositories, followed by
multibranch builds.
[role=”image-border”] image::screenshot3.png[scaledwidth=”75%”]
Refresh the page after the job runs to ensure the view of repositories has been updated.
[role=”image-border”] image::screenshot4.png[scaledwidth=”75%”]
At this point, you’re finished with basic project configuration and can now explore your imported
repositories. You can also investigate the results of the jobs run as part of the initial Folder
Computation.
[role=”image-border”] image::screenshot5.png[scaledwidth=”75%”]
Introduction
Continuous delivery allows organizations to deliver software with lower risk. The path to continuous
delivery starts by modeling the software delivery pipeline used within the organization and then
focusing on the automation of it all. Early, directed feedback, enabled by pipeline automation, allows
software to be delivered more quickly than with traditional methods of delivery.
Jenkins is the Swiss army knife in the software delivery toolchain. Developers and operations
(DevOps) personnel have different mindsets and use different tools to get their respective jobs done.
Since Jenkins integrates with a huge variety of toolsets, it serves as the intersection point between
development and operations teams.
Many organizations have been orchestrating pipelines with existing Jenkins plugins for several
years. As their automation sophistication and their own Jenkins experience increases, organizations
inevitably want to move beyond simple pipelines and create complex flows specific to their delivery
process.
These Jenkins users require a feature that treats complex pipelines as a first-class object, and so the
Pipeline plugin¹¹³ was developed.
¹¹³https://wiki.jenkins-ci.org/display/JENKINS/Pipeline+Plugin
Pre-requisites
Continuous delivery is a process - rather than a tool - and requires a mindset and culture that must
percolate from the top-down within an organization. Once the organization has bought into the
philosophy, the next and most difficult part is mapping the flow of software as it makes its way
from development to production.
The root of such a pipeline will always be an orchestration tool like Jenkins, but there are some
key requirements that such an integral part of the pipeline must satisfy before it can be tasked with
enterprise-critical processes. The tool must also be able to scale with an organization and suitably
automate existing delivery pipelines without changing them.
The Pipeline plugin allows users to create such a pipeline through a new job type called Pipeline.
The flow definition is captured in a Groovy script, thus adding control flow capabilities such as
loops, forks and retries. Pipeline allows for stages with the option to set concurrencies, preventing
multiple builds of the same pipeline from trying to access the same resource at the same time.
Concepts
node('linux') {
    // Check out the sample Maven project from GitHub
    git url: 'https://github.com/jglick/simple-maven-project-with-tests.git'
    // Locate the Maven installation configured in Jenkins as 'M3'
    def mvnHome = tool 'M3'
    // Put Maven on the PATH and run the build
    env.PATH = "${mvnHome}/bin:${env.PATH}"
    sh 'mvn -B clean verify'
}
.Stages
Intra-organizational (or conceptual) boundaries are captured through a primitive called “stages.” A
deployment pipeline consists of various stages, where each subsequent stage builds on the previous
one. The idea is to spend as few resources as possible early in the pipeline and find obvious issues,
rather than spend a lot of computing resources for something that is ultimately discovered to be
broken.
[[throttled-concurrent]] .Throttled stage concurrency with Pipeline image::stage-concurrency.png[scaledwidth="90%"]
Consider a simple pipeline with three stages. A naive implementation of this pipeline can
sequentially trigger each stage on every commit. Thus, the deployment step is triggered immediately after
the Selenium test steps are complete. However, this would mean that the deployment from commit
two overrides the last deployment in motion from commit one. The right approach is for commits
two and three to wait for the deployment from commit one to complete, consolidate all the changes
that have happened since commit one and trigger the deployment. If there is an issue, developers
can easily figure out if the issue was introduced in commit two or commit three.
Pipeline provides this functionality by enhancing the stage primitive. For example, a stage can have
a concurrency level of one defined to indicate that at any point only one thread should be running
through the stage. This ensures that deployments run as often as they should, but no more often.
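As a sketch using the older, non-block stage syntax that this concurrency setting belongs to (the stage name and deployment script are illustrative):

// At most one build at a time may run through the 'Deploy' stage
stage name: 'Deploy', concurrency: 1
node {
    sh './deploy.sh'   // stands in for the real deployment steps
}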
.Restartable flows
All Pipelines are resumable, so if Jenkins needs to be restarted while a flow is running, it should
resume at the same point in its execution after Jenkins starts back up. Similarly, if a flow is running
a lengthy sh or bat step when an agent unexpectedly disconnects, no progress should be lost when
connectivity is restored.
There are some cases when a flow build will have done a great deal of work and proceeded to a point
where a transient error occurred: one which does not reflect the inputs to this build, such as source
code changes. For example, after completing a lengthy build and test of a software component, final
deployment to a server might fail because of network problems.
.Pipeline Stage View
When you have complex build pipelines, it is useful to see the progress of each stage and to see
where build failures are occurring in the pipeline. This can enable users to debug which tests are
failing at which stage or if there are other problems in their pipeline. Many organizations also want
to make their pipelines user-friendly for non-developers without having to develop a homegrown
UI, which can prove to be a lengthy and ongoing development effort.
The Pipeline Stage View feature offers extended visualization of Pipeline build history on the index
page of a flow project. This visualization also includes helpful metrics like average run time by stage
and by build, and a user-friendly interface for interacting with input steps.
.Pipeline Stage View plugin image::workflow-big-responsive.png[scaledwidth=”90%”]
The only prerequisite for this plugin is a pipeline with defined stages in the flow. There can be
as many stages as you desire and they can be in a linear sequence; the stage names will be
displayed as columns in the Stage View interface.
Traceability is important for DevOps teams who need to be able to trace code from commit to
deployment. It enables impact analysis by showing relationships between artifacts and allows for
visibility into the full lifecycle of an artifact, from its code repository to where the artifact is
eventually deployed in production.
Jenkins and the Pipeline feature support tracking versions of artifacts using file fingerprinting, which
allows users to trace which downstream builds are using any given artifact. To fingerprint with
Pipeline, simply add a “fingerprint: true” argument to any artifact archiving step. For example:
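(a minimal sketch using the archiveArtifacts step; the “**/*.war” pattern is illustrative)

// Archive all WAR files produced by the build and record their fingerprints
archiveArtifacts artifacts: '**/*.war', fingerprint: true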
will archive any WAR artifacts created in the Pipeline and fingerprint them for traceability. The
trace log of this artifact and a list of all fingerprinted artifacts in a build will then be available in the
left-hand menu of Jenkins:
To find where an artifact is used and deployed to, simply follow the “more details” link through the
artifact’s name and view the entries for the artifact in its “Usage” list.
[[fingerprinting]] .Fingerprint of a WAR image::fingerprinting.png[scaledwidth=”90%”]
For more information about how fingerprints work, visit the https://wiki.jenkins-ci.org/display/JENKINS/Fingerprint[fingerprint
documentation].
License
Code
This license applies to code contributed to this repository, which is generally used in the construction
of the Jenkins website.
.MIT License
Copyright (c) 2015-2016 CloudBees, Inc, and respective contributors
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and
associated documentation files (the “Software”), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial
portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
Content
The Creative Commons Attribution-ShareAlike 4.0¹¹⁴ license applies, unless otherwise stated, to the
content created as part of this website, such as the Asciidoc or Markdown files inside of content/.
image::https://licensebuttons.net/l/by-sa/4.0/88x31.png[Attribution/Share-a-like]
* Human-readable license text¹¹⁵
* Full legal license text¹¹⁶
¹¹⁴https://creativecommons.org/licenses/by-sa/4.0/
¹¹⁵https://creativecommons.org/licenses/by-sa/4.0/
¹¹⁶https://creativecommons.org/licenses/by-sa/4.0/legalcode