
Jenkins Pipeline - Intermediate

COURSE OBJECTIVES

● After completing this training module, you should understand the following:
■ How to create and modify Pipeline code without using Blue Ocean
■ How to create and configure a Shared Library
■ How to call a Shared Library custom step from a Pipeline
■ How to create and use a Resource File
■ More about creating robust, maintainable Pipelines

COURSE MODULES

1. Recap of Pipeline Fundamentals
2. Prepare for Shared Libraries
3. Create a Shared Library
4. Call Shared Library Functions
5. Best Practices

AUDIENCE

● The course is applicable to:
■ Intermediate Developers
■ Build and Release Engineers

PREREQUISITES

● "Jenkins - Fundamentals" course or CJE/CCJE certification
● "Pipeline - Fundamentals" course
● Students should also have some familiarity with ancillary technologies that are used in this course:
■ Docker
■ Git
■ Apache Maven, Gradle, Ant or NPM
■ Apache Groovy
● The class has been structured so you can do the exercises even if you are not familiar with these tools, but learning them will help you implement your Pipelines when you go back to work.

APPROACH

● This course teaches you how to create and run a Jenkins Declarative Pipeline using shared libraries.
■ Students modify a real-life Pipeline project to build, test and deploy using shared libraries.
● The course uses one project:
■ Lab project: Students are given a list of tasks and are expected to figure out how to implement those tasks in a Pipeline.
CLASSROOM

● Feel free to ask questions if you do not understand the material
● Please avoid questions that are not directly related to the material
● One 15-minute break in the morning session
● One hour for lunch
● Two 15-minute breaks in the afternoon session
HINTS FOR SELF-PACED

● We recommend that you right-click on links to labs and other materials to open the new section in a separate browser tab
● Use the contents in the left frame to navigate between sections
● If you do not want to complete the entire class in one sitting, you can stop at any time
■ When you log back into the class, you are taken to the place where you stopped
and the work you did on previous sections is available
LAB EXERCISE

Introduction

Lab exercises are a key component of your CloudBees Jenkins training. Each student has a
self-contained lab environment that includes all of the plugins and dependencies that are required for this
course.

This workbook contains a sequence of lab exercises through which you will learn to work with Pipelines.

These lab exercises should be completed in sequence.

IMPORTANT
Never try to upgrade plugins on the Lab’s Jenkins instances!

If you encounter problems with your software installation, or if you do not understand any of the instructions, please ask your instructor for help.

Lab exercises

The solution to each task is located at the end of that task. Please try to solve the assignment by yourself
and look at the solutions only if you get stuck or want to validate your work.

IMPORTANT
This training is language-agnostic, so you are not expected to know the language. While we recommend that you familiarize yourself with ancillary technologies such as Apache Maven, Gradle, Ant, NPM, Apache Groovy, Docker and Git/GitHub, you should be able to complete the lab exercises by copying commands and text that are given in the class. You will not need any additional tools.

Understand your lab environment


Before diving into the exercises, you need to familiarize yourself with your lab environment. Your home
page contains links to the facilities you need. When you click a link from this page, it opens in a new tab.

The first two links are for documentation:

● Slides — Slide set used for the lecture portions of this class.
● Labs Document — Workbook for the lab portions of this course.

The last three links are to work environments:

● Jenkins Master​ — The Jenkins Master dashboard; this is the environment you will use for most
of the work in this course.
● Gitserver​ — Git repository page for the projects in this course. If you are used to GitHub, this
page will look familiar although it is actually running Gitea, which gives you a local advanced Git
server with a web interface from which to browse repositories, authenticate, do pull requests and
reviews.
● DevBox ​ — A ​bash​ shell environment that provides a command line interface.

For the labs associated with this class, we will not be using ​DevBox​.

Credentials To Use

Use the id of ​butler​ and the password of ​butler​ for all credentials in your lab environment.

Blue Ocean

The Blue Ocean plugin is installed in your lab environment. To open it:

● Open the Jenkins Master dashboard


● Click "Open Blue Ocean" in the left frame:


● Switch to Classic Web UI
○ Click on the arrow button to switch to "Jenkins Web UI":


○ Click "Open Blue Ocean" in the side bar to switch back.

Introduction

This document explains how to install and start your CloudBees Lab Environment.

Please follow all the steps carefully, before running any Lab exercise.

Local VM: Vagrant + Virtualbox

A Virtual Machine (VM) will be used for hosting your ​Lab Environment​:

● It does not break any local environment


● It does not depend on the host OS you are using
● It is portable with the same behavior for everyone

This VM runs using the VirtualBox hypervisor, and is managed and automated by Vagrant (see
requirements below).

Both of those tools are Open Source, free and multi-platform, but they require:

● Having admin rights on your computer (for installation only)
● Your computer must not itself be a virtual machine: nested virtualization is not supported.

Common Requirements

● An HTML5-compliant web browser is required: Mozilla Firefox, Google Chrome, Microsoft Edge, Apple Safari, Opera
IMPORTANT
Internet Explorer is not supported
● The following ports must be allowed access to your instance’s domain (which is localhost):
○ 5000
○ 5001
○ 5002
○ 5003
○ 20000
○ 30001
○ 30002
● The following protocols must be allowed by any antivirus/firewall software:
○ HTTP
○ HTTPS
○ Websockets
■ Some antivirus software like Kaspersky and McAfee might block websockets silently
■ You can test websockets from this page: ​WebSockets Test
■ For more about security and websockets: ​Testing Websockets
● Even if the training lab is running in offline mode, an open Internet access is ​recommended
○ HTTP Proxy can ​only​ be configured for Jenkins operations

Hardware Requirements

Your machine must meet the following ​hardware​ requirements:

● A 64-bit dual-core Intel-compliant CPU (Intel Pentium/i3 at least)
● 6GB of RAM (the VM will allocate 4GB for itself)
● 20GB of free space on your local hard drive
● One OS from this list:
○ Windows >= 8.1
○ Mac OS >= 10.10
○ Linux "classic" distribution (Ubuntu >= 12.04, Debian >= Jessie, RHEL>= 6)
● The "Virtualization instructions" of your CPU must be enabled (Intel VT-x or AMD SVM)
○ More information here: ​https://forums.virtualbox.org/viewtopic.php?f=6&t=58820
○ Intel official VT page:
http://www.intel.com/content/www/us/en/virtualization/virtualization-technology/int
el-virtualization-technology.html
Software Requirements

Your machine must meet the following ​software​ requirements:

● For All OSes, download and install the latest (64-bit) versions of:
○ VirtualBox​ (An Open Source Hypervisor from Oracle):
■ Downloads page: ​https://www.virtualbox.org/wiki/Downloads
■ Make sure to download the appropriate binary for your OS
We encourage you to download the latest available version of VirtualBox. However, it is worth noting that the last version we tested with this courseware was 6.0.12. So, if you run into trouble with the latest version, please try using this one.


Windows users:
If you have HyperV installed, VirtualBox may throw errors with the code VERR_VMX_NO_VMX.

In this case (Stack Overflow - Vagrant up - VBoxManage.exe error: VT-x is not available (VERR_VMX_NO_VMX)), please disable HyperV temporarily (Disable HyperV):

bcdedit /set hypervisorlaunchtype off

and reboot.


○ Vagrant (An Open Source VM manager):
■ Downloads page: https://www.vagrantup.com/downloads.html
■ Make sure to download the appropriate binary for your OS

We encourage you to download the latest available version of Vagrant. However, it is worth noting that the last version we tested with this courseware was 2.2.5. So, if you run into trouble with the latest version, please try using this one.


● For Windows only, download the latest version of Git for Windows

TIP
Git for Windows provides a bash-compliant shell and an OpenSSH client

Getting Lab Resources

After installing the software prerequisites:

● Right click this ​link to the virtual machine’s ​ZIP​ archive​ to open it in a new tab or window
○ The archive will download to your local disc
● Extract the virtual machine ​ZIP​ archive to your local disc
○ This archive contains your virtual machine image and automated settings in a folder
named ​training-pipeline-intermediate

Starting the Lab Environment

● Open a Command Line on your host OS:


○ On Mac OS, open ​Applications​ , ​Utilities​ , ​Terminal
○ On Windows, open ​Start Menu​ , ​Git Bash
○ On Linux, this can be named ​Command Line​ or ​Terminal

TIP
The command line is required to start the Virtual Machine without having to deal with any specific configuration.


● Using the command line cd, navigate to the un-archived folder, which should be located on your Desktop:

cd ~/Desktop/training-pipeline-intermediate/


TIP
● The ~ special character means "full path to the user home folder"
● Desktop may vary depending on your operating system: it can be lower case, or localized in your operating system’s language.


● Use the command line ls to check the content of this directory. We need to have a file named Vagrantfile here:

ls -1
Vagrantfile

● Now you are able to start the Virtual Machine, using the vagrant command:

vagrant up

The VM should start without a GUI, and without any ​error​:

TIP
If warnings about the VirtualBox version appear, ignore them as long as everything is working well.

Figure 1. Vagrant starting the VM for you

● You need to be able to stop and start the Virtual Machine whenever you want. Let’s do it now:
○ From the training-pipeline-intermediate folder that contains a Vagrantfile:
○ Stop the VM "gracefully" with the vagrant "halt" command:

vagrant halt

TIP
Once the VM is in the stopped state, you can safely do anything else, like stopping your computer.

○ Start the Virtual Machine again:

vagrant up

TIP
Any Vagrant command can be used here. For more information, please check the Vagrant Documentation - https://www.vagrantup.com/docs/cli/

Accessing the Lab Environment

Your ​Lab Environment​ provides a Home Page to use as the entrypoint.

● This page is available on your web browser at this URL: ​Lab Home Page

IMPORTANT
Unless specified differently, any authentication asked by any service in the Lab Environment uses the following:

● Username: butler
● Password: butler


You will see an HTML page that lists the services hosted on your Lab Environment.
● Each service will be detailed in the next steps.

Re-scanning Jenkins project


IMPORTANT
You must re-scan the pipeline-lab project if Jenkins shows it as an empty folder.

The first time you access your ​Jenkins instance​, you should see it populated with an existing
multibranch-pipeline project: ​pipeline-lab​:

Figure 2. Jenkins instance, with pipeline-lab project

When you click the project (pipeline-lab) you should see a list of the existing branches with their
corresponding pipelines; in the lab environment provided, there is only a master branch.

If instead you get a "This folder is empty" message, you must ​re-scan​ the project.
Figure 3. Re-scan Multibranch Pipeline, if folder is empty

After re-scanning, the different branches that have pipelines should appear.

Troubleshooting

General workflow

If you face any issue with the lab during this course, please read this troubleshooting guide first.

If you still cannot use the lab environment, depending on your training type:

● "Trainer led": please ask your trainer for help.


● "Self Paced": please open a ticket by following the instructions found in the "Open a training
ticket" at the start of the course.

TIP
Always double-check your settings: peer review is the best!
Technical Troubleshooting

● If an error was raised during the initial VM startup:


○ If the error is related to ​GuestAdditions​ like the one below:

==> default: Machine booted and ready!


[default] GuestAdditions versions on your host (5.1.8) and guest (5.0.18_Ubuntu r106667) ​do​ not match.
...
The following SSH ​command​ responded with a non-zero ​exit​ status.
Vagrant assumes that this means the ​command​ failed!

apt-get update
...

○ Then remove the plugin vagrant-vbguest by using the command:

vagrant plugin uninstall vagrant-vbguest
○ If the error is related to VT-x is not available:

...
Stderr: VBoxManage.exe: error: VT-x is not available (VERR_VMX_NO_VMX)

○ Make sure you disable the HyperV service as stated in the 'Software Requirements' section of this document
● Is your VM started?
○ Open the VirtualBox GUI and check the state.
○ With your command line, use vagrant status within your labs directory.
○ In your process manager, verify that you have a VBoxHeadless process.
● Is your VM reachable with SSH?
○ Is Vagrant aware of port forwarding (using vagrant port)?
○ In the VirtualBox GUI, do you see the port forwarding?
○ Do you have any firewall rule that would block traffic on your localhost (lo, loopback, etc.) interface, on the forwarded port (generally 2222)?
● When stuck, always try rebooting the VM one time
● If you need to submit an issue (Self Paced training only), try to run your latest ​vagrant​ command
in debug mode (see example below ), and copy-paste the result in a text file or in
https://gist.github.com/

VAGRANT_LOG=debug vagrant up
RECENT FEATURES

RECENT FEATURES ADDED TO DECLARATIVE

● Declarative Directive Generator


● New when Conditions
● New post Conditions
● New options
● input Directive to Stage

DECLARATIVE DIRECTIVE GENERATOR

NEW WHEN CONDITIONS

● equals
● changeRequest
● buildingTag
● tag
● beforeAgent

EQUALS

● Compares two values and returns true if they’re equal


● You can also do "not equals" comparisons using the not { equals …​ } syntax
pipeline {
agent any
stages {
stage(​'Build'​) {
steps {
sh ​'make package'
}
}
stage(​'Test'​) {
when { equals ​expected:​ ​2​, ​actual:​ currentBuild.number }
steps {
sh ​'make check'
}
}
stage(​'Deploy'​) {
steps {
echo ​'Deploying only because this commit is tagged...'
sh ​'make deploy'
}
}
}
}

CHANGEREQUEST

● Returns true if this Pipeline is building a change request, such as a GitHub or Bitbucket pull
request
■ when { changeRequest() }
● You can run more detailed checks by using a filter against the change request, allowing you to ask "was this change request created by user@example.com?"
■ when { changeRequest authorEmail: "user@example.com" }
● You can also do pattern matching against the filters, using a comparator to determine if the pull request was from anyone with an email address ending in @example.com
■ when { changeRequest authorEmail: "[\\w_-.]+@example.com", comparator: 'REGEXP' }
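
To see where these conditions live in a Jenkinsfile, here is a minimal sketch of a stage guarded by changeRequest (the stage name and shell command are illustrative):

stage('PR Checks') {
    when { changeRequest() }
    steps {
        // Runs only when this Pipeline is building a pull request
        sh 'make lint'
    }
}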

BUILDINGTAG

● A simple condition that just checks if the Pipeline is running against a tag in SCM, rather
than a branch or a specific commit reference
■ when { buildingTag() }

TAG

● A more detailed equivalent of buildingTag, allowing you to check against the tag name itself
pipeline {
agent any
stages {
stage(​'Build'​) {
steps {
sh ​'make package'
}
}
stage(​'Test'​) {
when { equals ​expected:​ ​2​, ​actual:​ currentBuild.number }
steps {
sh ​'make check'
}
}
stage(​'Deploy'​) {
when { tag ​"release-*"​ }
steps {
echo ​'Deploying only because this commit is tagged...'
sh ​'make deploy'
}
}
}
}

BEFOREAGENT

● Allows you to specify that the when conditions should be evaluated before entering the
agent for the stage
● When beforeAgent true is specified, you will not have access to the agent’s workspace, but
you can avoid unnecessary SCM checkouts and waiting for a valid agent to be available
pipeline {
    agent none
    stages {
        stage('Example Build') {
            steps {
                echo 'Hello World'
            }
        }
        stage('Example Deploy') {
            agent {
                label "some-label"
            }
            when {
                beforeAgent true
                branch 'production'
            }
            steps {
                echo 'Deploying'
            }
        }
    }
}

NEW POST CONDITIONS

● fixed
● regression

FIXED

● Checks to see if the current run is successful and if the previous run was either failed or
unstable
REGRESSION

● Checks to see if the current run’s status is worse than the previous run’s status
● If the previous run was successful and the current run is unstable, this fires and its block of
steps executes
● It also runs if the previous run was unstable and the current run is a failure, etc
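
A minimal sketch showing both new conditions side by side in a post section (the echo messages are illustrative):

post {
    fixed {
        // Previous run failed or was unstable; this run succeeded
        echo 'Back to normal'
    }
    regression {
        // This run's status is worse than the previous run's
        echo 'This run is worse than the previous run'
    }
}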
NEW OPTIONS

● checkoutToSubdirectory
● newContainerPerStage

CHECKOUTTOSUBDIRECTORY

● Allows you to override the location that the automatic SCM checkout uses
● Using checkoutToSubdirectory("foo"), your Pipeline checks out your repository to
$WORKSPACE/foo, rather than the default of $WORKSPACE
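
A minimal sketch, assuming the repository should be checked out under $WORKSPACE/foo (the subdirectory name and shell command are illustrative):

pipeline {
    agent any
    options {
        checkoutToSubdirectory('foo')
    }
    stages {
        stage('Build') {
            steps {
                // The automatic SCM checkout now lives under foo
                sh 'ls foo'
            }
        }
    }
}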
NEWCONTAINERPERSTAGE

● If you are using a top-level docker or dockerfile agent, this option ensures that each of your stages runs in a fresh container of the same image, rather than reusing one container for all stages (see the sketch below)
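
A minimal sketch with a top-level docker agent (the image and shell commands are illustrative):

pipeline {
    agent {
        docker { image 'maven:3-alpine' }
    }
    options {
        // Each stage below gets a fresh maven:3-alpine container
        newContainerPerStage()
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B package'
            }
        }
        stage('Verify') {
            steps {
                sh 'mvn -B verify'
            }
        }
    }
}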
INPUT DIRECTIVE TO STAGE

pipeline {
    agent any
    stages {
        stage('Example') {
            input {
                message "Should we continue?"
                ok "Yes, we should."
                submitter "alice,bob"
                parameters {
                    string(name: 'PERSON', defaultValue: 'Mr Jenkins', description: 'Who should I say hello to?')
                }
            }
            when {
                equals expected: "Fred", actual: "${PERSON}"
            }
            steps {
                echo "Hello, ${PERSON}, nice to meet you."
            }
        }
    }
}
USING DOCKER WITH PIPELINE

USING DOCKER WITH PIPELINE

Pipeline has built-in support for interacting with Docker from within a Jenkinsfile
CUSTOMIZE THE EXECUTION ENVIRONMENT

pipeline {
agent {
docker { image ​'node:7-alpine'​ }
}
stages {
stage(​'Test'​) {
steps {
sh ​'node --version'
}
}
}
}

CACHING DATA FOR CONTAINERS

● Many build tools download external dependencies and cache them locally for future re-use
● Pipeline supports adding custom arguments that are passed to Docker, allowing users
to specify custom Docker Volumes to mount
■ These can be used for caching data on the agent between Pipeline runs
CACHING DATA FOR CONTAINERS

pipeline {
agent {
docker {
image ​'maven:3-alpine'
args ​'-v $HOME/.m2:/root/.m2'
}
}
stages {
stage(​'Build'​) {
steps {
sh ​'mvn -B'
}
}
}
}


USING MULTIPLE CONTAINERS

● It is increasingly common for code bases to rely on multiple, different technologies


● A repository might have both a Java-based back-end API implementation and a
JavaScript-based front-end implementation
● Combining Docker and Pipeline allows a Jenkinsfile to use multiple types of technologies by combining the agent {} directive with different stages
USING MULTIPLE CONTAINERS

pipeline {
agent none
stages {
stage(​'Back-end'​) {
agent {
docker { image ​'maven:3-alpine'​ }
}
steps {
sh ​'mvn --version'
}
}
stage(​'Front-end'​) {
agent {
docker { image ​'node:7-alpine'​ }
}
steps {
sh ​'node --version'
}
}
}
}

USING A DOCKERFILE

Dockerfile
FROM node:7-alpine
RUN apk add -U subversion
Jenkinsfile
pipeline {
agent { dockerfile ​true​ }
stages {
stage(​'Test'​) {
steps {
sh ​'node --version'
sh ​'svn --version'
}
}
}
}
SPECIFY A DOCKER LABEL

● By default, Pipeline assumes that any configured agent is capable of running


Docker-based Pipelines
● This can be problematic if some of your agents do not have Docker
● Pipeline enables you to specify the agents (by label) to use when running Docker-based Pipelines
■ This can be specified as a global option on the Manage Jenkins page and at the Folder level
SPECIFY A DOCKER LABEL

SCRIPTED PIPELINE

OVERVIEW

● Scripted syntax is a domain specific language based on Apache Groovy


■ Most​ functionality provided by the Groovy language is made available to users
of Scripted syntax, which means it can be a very expressive and flexible tool
for authoring
continuous delivery pipelines
■ Scripted syntax offers a tremendous amount of flexibility and extensibility to
Jenkins users
■ The learning curve for the scripted pipeline syntax is steep, which is not
typically desirable
for all members of a given team
● Declarative syntax offers a simpler and more opinionated syntax for authoring Jenkins
Pipeline
● We are not going to discuss how to implement a Scripted Pipeline in this class
DECLARATIVE AND SCRIPTED

● Have the same Pipeline sub-system underneath


■ Both are durable implementations of "Pipeline as code"
■ Both can use steps built into Pipeline or provided by plugins
■ Both can utilize Shared Libraries

DECLARATIVE VS SCRIPTED
● Declarative limits what is available to the user with a more strict and pre-defined structure,
making it an ideal choice for simpler continuous delivery pipelines
● Scripted provides very few limits
■ The only limits on structure and syntax are defined by Groovy itself, not by
Pipeline-specific systems
■ Useful when you have more complex requirements than Declarative can
support out of the box
■ BUT it has few safeguards against errors you might make
● Use the script step to execute a block of Scripted syntax in a Declarative Pipeline
SO WHAT DOES ALL THIS MEAN?

● Always start with Declarative syntax


● Extend with Shared Libraries
● Use a script step to introduce Scripted syntax only when you really need to
FOR FURTHER READING

● https://jenkins.io/doc/book/pipeline/syntax/#scripted-pipeline

USING A JENKINSFILE

BENEFITS OF USING A JENKINSFILE

● Code review/iteration on the Pipeline


● Audit trail for the Pipeline
● Single source of truth for the Pipeline, which can be viewed
and edited by multiple members of the project

WORKING WITH YOUR JENKINSFILE

STRING INTERPOLATION

● Jenkins Pipeline uses rules identical to Groovy for string interpolation


● Groovy’s String interpolation support can be confusing
STRING INTERPOLATION

● Groovy supports declaring a string with either single quotes or double quotes:
def singlyQuoted = 'Hello'
def doublyQuoted = "World"

STRING INTERPOLATION

● String interpolation only works for strings in double-quotes, not for strings in single-quotes.
■ For example, this code:

def username = 'Jenkins'
echo 'Hello Mr. ${username}'
echo "I said, Hello Mr. ${username}"

■ Results in:

Hello Mr. ${username}
I said, Hello Mr. Jenkins
● You can see that the dollar-sign ($) based string interpolation works for the string that is in
double quotes but does not work for the string in single quotes
USING ENVIRONMENT VARIABLES

pipeline {
agent any
stages {
stage(​'Example'​) {
steps {
echo ​"Running ${env.BUILD_ID} on ${env.JENKINS_URL}"
}
}
}
}
SETTING ENVIRONMENT VARIABLES

● An environment directive used in the top-level pipeline block applies to all steps within the Pipeline
● An environment directive defined within a stage applies only to steps within that stage
SETTING ENVIRONMENT VARIABLES

pipeline {
agent any
environment {
CC = ​'clang'
}
stages {
stage(​'Example'​) {
environment {
DEBUG_FLAGS = ​'-g'
}
steps {
sh ​'printenv'
}
}
}
}

CREDENTIALS

CREDENTIALS

You might be used to seeing something like:

pipeline {
agent any
stages {
stage(​"test"​) {
steps {
withCredentials([usernameColonPassword(​variable:​ ​'SERVICE_CREDS'​, ​credentialsId:
'my-cred-id'​)]) {
sh ​"""
echo "Service user is $SERVICE_CREDS_USR"
echo "Service password is $SERVICE_CREDS_PSW"
curl -u $SERVICE_CREDS https://myservice.example.com
"""
}
}
}
}
}

CREDENTIALS

● The ​environment​ directive supports a special helper method credentials()


■ This can be used to access pre-defined credentials by their identifier
in the Jenkins environment

USERNAME AND PASSWORD

The environment variable specified is set to username:password, and two additional environment variables are defined automatically: MYVARNAME_USR (holding the username) and MYVARNAME_PSW (holding the password).

pipeline {
agent any
environment {
SERVICE_CREDS = credentials('my-predefined-username-password')
}
stages {
stage(​"test"​) {
steps {
sh ​"""
echo "Service user is $SERVICE_CREDS_USR"
echo "Service password is $SERVICE_CREDS_PSW"
curl -u $SERVICE_CREDS https://myservice.example.com
"""
}
}
}
}

SECRET TEXT

The environment variable specified will be set to the Secret Text content
pipeline {
agent any
environment {
SOME_SECRET_TEXT = credentials(​'jenkins-secret-text-id'​)
}
stages {
stage(​"test"​) {
steps {
sh ​"""
echo "secret text is $SOME_SECRET_TEXT"
"""
}
}
}
}

SECRET FILE

The environment variable specified will be set to the location of the file that is temporarily created

pipeline {
agent any
environment {
SOME_SECRET_FILE = credentials(​'jenkins-secret-file-id'​)
}
stages {
stage(​"test"​) {
steps {
sh ​"""
echo "secret file location is $SOME_SECRET_FILE"
"""
}
}
}
}
SSH WITH PRIVATE KEY

The environment variable specified will be set to the location of the SSH key file that is temporarily created, and two additional environment variables may be automatically defined: MYVARNAME_USR (holding the username) and MYVARNAME_PSW (holding the passphrase).

pipeline {
agent any
environment {
SSH_CREDS = credentials('my-predefined-ssh-creds')
}
stages {
stage(​"test"​) {
steps {
sh ​"""
echo "SSH private key is located at $SSH_CREDS"
echo "SSH user is $SSH_CREDS_USR"
echo "SSH passphrase is $SSH_CREDS_PSW"
"""
}
}
}
}
WHAT IF MY CREDENTIALS TYPE ISN’T ONE OF THESE FOUR?

An unsupported credentials type causes the Pipeline to fail with the message:

org.jenkinsci.plugins.credentialsbinding.impl.CredentialNotFoundException: No suitable binding handler could be found for type <unsupportedType>

In that case, you’ll continue to use withCredentials.


PARAMETERS

pipeline {
agent any
parameters {
string(name: 'Greeting', defaultValue: 'Hello', description: 'How should I greet the world?')
}
stages {
stage(​'Example'​) {
steps {
echo ​"${params.Greeting} World!"
}
}
}
}

HANDLING FAILURE

pipeline {
agent any
stages {
stage(​'Test'​) {
steps {
sh ​'make check'
}
}
}
post {
always {
junit ​'**/target/*.xml'
}
failure {
mail to: 'team@example.com', subject: 'The Pipeline failed :('
}
}
}
OPTIONAL STEP ARGUMENTS

Pipeline follows the Groovy language convention of allowing parentheses to be omitted around method
arguments
OPTIONAL STEP ARGUMENTS

These two statements are functionally equivalent:


git ​url:​ ​'git://example.com/amazing-project.git'​, ​branch:​ ​'master'
git (​url:​ ​'git://example.com/amazing-project.git'​, ​branch:​ ​'master'​)

MULTIBRANCH PIPELINES

WHAT IS A MULTIBRANCH PIPELINE ?

● Configured to point to a SCM


● Contains Pipeline Jobs
■ One Pipeline ​per​ SCM branch with a Jenkinsfile
○ Without Multibranch, each Pipeline maps to only one branch of
the SCM
■ Supports Pull Requests as well
■ Jobs are ​automatically​ created/deleted
○ Without Multibranch, no automatic discovery
● Is implemented as a Jenkins job type
■ Basically: it is a ​folder
● All new Pipelines should be created as Multibranch Pipelines
■ All Pipelines created with Blue Ocean are Multibranch
CREATE A MULTIBRANCH PIPELINE USING THE CLASSIC UI

● From the Jenkins Dashboard, click on ​New Item​ in the left frame
■ Enter the name of your new Pipeline in the box that is provided
■ Choose "Multibranch Pipeline" from the list provided and click "OK"
● Choose your SCM from the list under "Branch Sources"
■ Fill in the fields that are displayed to configure your SCM
■ (Optional) Configure a webhook from SCM
● Push a Jenkinsfile on any branch
■ Merge branch: jobs automatically managed
● Everything is automated, which greatly reduces the administrative tasks
CREATE A NEW JOB OF TYPE "MULTIBRANCH PIPELINE"

CONFIGURE THE BRANCH SOURCE (SCM)

CONFIGURE THE BRANCH SOURCE (SCM)


MULTIBRANCH PIPELINES CONFIGURATIONS

● Customizable​ retention policy


■ "Orphaned Item Strategy" configuration section
● Triggers
■ If you aren’t using webhooks to trigger jobs, you can tell the job how often to
run

ORGANIZATION SCANNING

● Currently only works with GitHub Organization folders and Bitbucket Team/Project folders
■ Corresponding branch source plugins must be installed
■ Other SCMs may be supported in the future
● Admin selects the job type associated with the SCM type
■ One credential (API token generally) needed
■ Maps to an "organization folder" or "team/project" as top level
● Each repository maps to a Multibranch pipeline
■ Inside​ the "organization folder" or "team/project"
■ More ​automation
■ Automate ​webhooks​ creation
BUT WHAT IF I’M STILL USING SUBVERSION?

● You can still use Multibranch!


FOR FURTHER READING

● Some recommended readings on this subject:


■ Getting started with Blue Ocean
■ Branches and Pull Requests
■ Pipeline-as-code with Multibranch Workflows in Jenkins

PIPELINE WITHOUT BLUE OCEAN

OVERVIEW

● Pipelines can be implemented and modified outside Blue Ocean


● Use your favorite editor to maintain job syntax
CREATE A NEW MULTIBRANCH JOB

REGISTER THE SCM REPOSITORY


CREATE JENKINSFILE

INTRODUCTION TO SHARED LIBRARIES

WHY USE SHARED LIBRARIES ?

● Allow you to share and reuse Pipeline code


● Scale your Jenkins Pipeline usage
■ Supports collaboration between a large number of teams working on a large
number of projects
● Help administrators manage code sprawl
■ Write once, propagate everywhere
■ Pipeline as code everywhere
● Use tooling to avoid silos
■ Collaborate instead of enforcing

WHAT IS A SHARED LIBRARY ?

● A separate SCM repo that contains reusable custom steps that can be called from Pipelines
● Configured once per Jenkins instance
● Cloned at build time
● Loaded and used as code libraries for Jenkins Pipelines
● Modifications made to a shared library custom step are applied to all Pipelines that call that
custom step
NOTES ABOUT SHARED LIBRARIES

● Extremely powerful
● Learning curve
■ First step is not easy
■ Requires deeper understanding of Pipeline
● Adds some overhead
■ Testing
■ Maintenance
● Many uses
■ Take time to read the documentation
FOR FURTHER READING

● Extending with Shared Libraries


IMPLEMENT SHARED LIBRARIES

HOW TO IMPLEMENT PIPELINE SHARED LIBRARIES

1. Create a separate SCM repository for the shared library


2. Configure a Global Pipeline Library in Jenkins
3. Code the custom step and check it into the shared library SCM repository
4. Call the custom step from your Pipeline
CREATE AN SCM REPOSITORY

SCM DIRECTORY STRUCTURE

● The directory structure of a Shared Library repository is as follows:


(root)
+- src                     # Groovy source files
|   +- org
|       +- foo
|           +- Bar.groovy  # for org.foo.Bar class
+- vars
|   +- foo.groovy          # for global 'foo' variable
|   +- foo.txt             # help for 'foo' variable
+- resources               # resource files (external libraries only)
    +- org
        +- foo
            +- bar.json    # static helper data for org.foo.Bar

SRC DIRECTORY

● The src directory uses a standard Java source directory structure
● This directory is added to the classpath when executing Pipelines
● You should rarely (preferably never) add anything to the src directory
VARS DIRECTORY

● The vars directory contains scripts that define custom steps accessible from a Pipeline.
● All custom steps are defined in the root of the vars directory
■ You cannot use subfolders in vars
● Each file should define one step
■ The file name should be the name of that step, camelCased, with the .groovy suffix.
● The matching .txt file, if present, can contain documentation
■ This documentation is processed through the system’s configured markup formatter

RESOURCES DIRECTORY

● The libraryResource step reads files from the resources directory and returns the content as a plain string
● You can use subdirectories in the resources directory
■ Be sure to give each directory a name that is meaningful to you
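
For example, given the bar.json file from the directory layout shown earlier, a custom step could read it with a single call (a minimal sketch):

// Returns the file content as a plain string
def jsonText = libraryResource 'org/foo/bar.json'
echo jsonText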

OTHER DIRECTORIES

● Other directories under the root of the shared library are reserved for future enhancements

CONFIGURE THE SHARED LIBRARY

HOW TO CONFIGURE A SHARED LIBRARY

Navigate to ​Manage Jenkins​ → ​Configure System​ on the Manage Jenkins page:

SHARED LIBRARY CONFIGURATION NOTES

● Global Libraries configured in Jenkins are considered ​trusted


■ Steps from this library run ​outside​ the Groovy sandbox
● Libraries configured at multibranch/folder level are considered ​not trusted
■ Steps from this library run ​inside​ the Groovy sandbox
■ Prefer libraries at multibranch/folder level to reduce risk to Jenkins server
from libraries outside the sandbox
● Set "Default version" to ​master​ to have Pipelines call custom steps from the master branch
■ If "Allow default version to be overridden" is enabled, a Pipeline can override
this
to call custom steps from other branches using the @Library annotation
● When ​Load implicitly​ is enabled, the default branch is automatically available to all
Pipelines; custom steps can also be loaded manually using a @Library annotation
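
For example, with "Allow default version to be overridden" enabled, a Jenkinsfile can pin the library to another branch or a tag using the @Library annotation (the branch name here is illustrative):

@Library('shared-library@my-feature-branch') _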

LAB EXERCISE

Configure a Global Pipeline Library

Configure a Global Pipeline Library

In the exercise for this section you will:

● Configure a Global Pipeline Library


● Create a simple Jenkinsfile to verify that the library is set up correctly

Task: Configure a Global Pipeline Library

● Click on ​Manage Jenkins​ in the left navigation bar


● Click on ​Configure System
● Scroll down to ​Global Pipeline Libraries
● Click on the ​Add​ button
● Set the following values:
○ Name: ​shared-library
○ Default version: ​master
○ Load implicitly: unchecked
○ Allow default version to be overridden: checked
○ Include @Library changes in job recent changes: checked
● Under ​Retrieval method​, click on ​Modern SCM
● Select the radio button for ​Git​ and enter/select the following values:
○ Project Repository: ​http://localhost:5000/gitserver/butler/shared-library
○ Credentials: butler
● Click Save

Task: Create a simple Jenkinsfile to verify that the library is set up correctly
● Click on ​New Item
● Enter the item name ​test-shared-library​ and select ​Pipeline
● Click ​OK
● Scroll down to the Pipeline text area and paste the following in:

@Library​(​'shared-library'​) _
pipeline {
agent { label ​'java'​ }
stages {
stage(​'verify'​) {
steps {
helloWorld(​name:​ ​'fred'​)
}
}
}
}

● Click ​Save
● Click ​Build Now
● Click on the blue ball to the left of build #1
● Scroll down and verify that you see

Hello world, fred

WRITE SHARED LIBRARY CUSTOM STEPS

CREATE A CUSTOM STEP

● Create a file that has the desired name of our custom step
● Add code to a call() method inside that file
■ Code the custom step exactly as you would code it in a Pipeline
■ If the custom step is for code you created in a Pipeline, you can
basically copy-and-paste that code
● Check the file into the SCM repository
■ For testing, check the new custom step into a branch other than ​master
CREATE A HELLOWORLDSIMPLE CUSTOM STEP

Jenkinsfile
pipeline {
agent any
stages {
stage(​'hello'​) {
steps {
sh ​"echo Hello world, Fred. It is Friday."
}
}
}
}

CREATE A HELLOWORLDSIMPLE CUSTOM STEP

vars/helloWorldSimple.groovy
def call(String name, String dayOfWeek) {
sh ​"echo Hello World ${name}. It is ${dayOfWeek}."
}

LAB EXERCISE

Create a Simple Custom Step

Create a Simple Custom Step

In the exercise for this section you will:

● Create a custom step


● Modify existing Pipeline to use the custom step

Task: Create Custom Step

For this task, we will use the Gitea editor to create the custom step.

● Navigate to the ​shared-library​ Gitea repository


● Make sure the Branch is set to ​master
● Click on the ​vars​ directory
● Click on ​New File​ in the upper right hand corner
● Name the file ​postBuildSuccess.groovy
● Paste in the following content

vars/postBuildSuccess.groovy

def call(Map config = [:]) {
    archiveArtifacts 'target/*.jar'
    stash(name: "${config.stashName}", includes: 'target/**')
}

● Add a commit message and click ​Commit Changes

CALL THE SHARED LIBRARY CUSTOM STEP

LOADING THE LIBRARY

● You can load a shared library from your Pipeline


■ By Annotation
■ By DSL keyword
■ Implicitly
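
A minimal sketch of the first two options, assuming a configured library named shared-library:

// By annotation (optionally pinning a version after @)
@Library('shared-library') _

// By DSL keyword, loading the library dynamically during the run
library 'shared-library'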
HELLOWORLDSIMPLE EXAMPLE

Jenkinsfile
@Library​(​'shared-starter'​) _
pipeline {
agent any
stages {
stage(​'hello'​) {
steps {
helloWorldSimple(​"Fred"​,​"Friday"​)
}
}
}
}
vars/helloWorldSimple.groovy
def call(String name, String dayOfWeek) {
sh ​"echo Hello World ${name}. It is ${dayOfWeek}."
}

HELLOWORLD EXAMPLE

Jenkinsfile
@Library​(​'shared-starter'​) _
pipeline {
agent any
stages {
stage(​'hello'​) {
steps {
helloWorld(​name:​ ​"Fred"​, ​dayOfWeek:​ ​"Friday"​)
}
}
}
}
vars/helloWorld.groovy
def call(Map config = [:]) {
sh ​"echo Hello World ${config.name}. It is ${config.dayOfWeek}."
}

BACK TO THE FUTURE

● In the "Pipelines - Fundamentals" course, we created code to send email and Slack
notifications when a build starts, when it completes and when it fails
● Let’s look at that code and then turn it into a custom step that any Pipeline can call
NOTIFICATIONS WHEN BUILD STARTS

stages {
stage (​'Start'​) {
steps {
​// send build started notifications
slackSend (
color:​ ​'#FFFF00'​,
message:​ ​"STARTED: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' (${env.BUILD_URL})"
)

​// send to email


emailext (
subject:​ ​"STARTED: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]'"​,
body:​ ​"""<p>STARTED: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]':</p>
<p>Check console output at &QUOT;<a href='${env.BUILD_URL}'>${env.JOB_NAME}
[${env.BUILD_NUMBER}]</a>&QUOT;</p>"""​,
recipientProviders:​ [[​$class:​ ​'DevelopersRecipientProvider'​]]
)
}
}
}

NOTIFICATIONS WHEN BUILD SUCCEEDS

post {
success {
slackSend (
color:​ ​'#00FF00'​,
message:​ ​"SUCCESSFUL: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' (${env.BUILD_URL})"
)

emailext (
subject:​ ​"SUCCESSFUL: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]'"​,
body:​ ​"""<p>SUCCESSFUL: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]':</p>
<p>Check console output at &QUOT;<a href='${env.BUILD_URL}'>${env.JOB_NAME}
[${env.BUILD_NUMBER}]</a>&QUOT;</p>"""​,
recipientProviders:​ [[​$class:​ ​'DevelopersRecipientProvider'​]]
)
}
}

NOTIFICATIONS WHEN BUILD FAILS

post {
failure {
slackSend (
color:​ ​'#FF0000'​,
message:​ ​"FAILED: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' (${env.BUILD_URL})"
)

emailext (
subject:​ ​"FAILED: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]'"​,
body:​ ​"""<p>FAILED: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]':</p>
<p>Check console output at &QUOT;<a href='${env.BUILD_URL}'>${env.JOB_NAME}
[${env.BUILD_NUMBER}]</a>&QUOT;</p>"""​,
recipientProviders:​ [[​$class:​ ​'DevelopersRecipientProvider'​]]
)
}
}

HERE WE GO

CUSTOM STEP FOR SENDNOTIFICATIONS

vars/sendNotifications.groovy
def call(Map config = [:]) {
slackSend (
color:​ ​"${config.slackSendColor}"​,
message:​ ​"${config.message}: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]'
(${env.BUILD_URL})"
)

​// send to email


emailext (
subject:​ ​"${config.message}: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]'"​,
body:​ ​"""<p>${config.message}: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]':</p>
<p>Check console output at &QUOT;<a href='${env.BUILD_URL}'>${env.JOB_NAME}
[${env.BUILD_NUMBER}]</a>&QUOT;</p>"""​,
recipientProviders:​ [[​$class:​ ​'DevelopersRecipientProvider'​]]
)
}

NOTIFICATIONS WHEN BUILD STARTS

Jenkinsfile
stages {
stage (​'Start'​) {
steps {
sendNotifications(
slackSendColor:​ ​"#FFFF00"​,
message:​ ​"STARTED"
)
}
}
}

NOTIFICATIONS WHEN BUILD SUCCEEDS

Jenkinsfile
post {
success {
sendNotifications(
slackSendColor:​ ​"#00FF00"​,
message:​ ​"SUCCESSFUL"
)
}
}
NOTIFICATIONS WHEN BUILD FAILS

Jenkinsfile
post {
failure {
sendNotifications(
slackSendColor:​ ​"#FF0000"​,
message:​ ​"FAILED"
)
}
}

LET’S MAKE THE JENKINSFILE EVEN MORE CONCISE

NOTIFICATIONS WHEN BUILD STARTS

Jenkinsfile
stages {
stage (​'Start'​) {
steps {
sendNotificationsStart()
}
}
}
vars/sendNotificationsStart.groovy
def call() {
sendNotifications(
slackSendColor:​ ​"#FFFF00"​,
message:​ ​"STARTED"
)
}

NOTIFICATIONS WHEN BUILD SUCCEEDS

Jenkinsfile
post {
success {
sendNotificationsSuccess()
}
}
vars/sendNotificationsSuccess.groovy
def call() {
sendNotifications(
slackSendColor:​ ​"#00FF00"​,
message:​ ​"SUCCESSFUL"
)
}

NOTIFICATIONS WHEN BUILD FAILS

Jenkinsfile
post {
failure {
sendNotificationsFailure()
}
}
vars/sendNotificationsFailure.groovy
def call() {
sendNotifications(
slackSendColor:​ ​"#FF0000"​,
message:​ ​"FAILED"
)
}
LAB EXERCISE

Use a Custom Step

Use a Custom Step


In the exercise for this section you will:

● Modify existing Pipeline to use the custom step from the previous lab

Task: Modify existing Pipeline to use the custom step

For this task, we will use the Gitea editor to modify the existing Pipeline.

● Navigate to the ​pipeline-lab​ Gitea repository


● Make sure the Branch is set to ​master
● Click on ​Jenkinsfile
● Click on the pencil in the upper right hand corner to enter edit mode
● Add the following line at the top of the file

@Library​(​'shared-library'​) _

● Scroll down to the post { success … } section of the Build Java 7 stage
● Replace

archiveArtifacts ​'target/*.jar'
stash(​name:​ ​'Java 7'​, ​includes:​ '​ target/**'​)

with

postBuildSuccess(​stashName:​ "​ Java 7"​)

Scroll to the bottom and click on ​Commit Changes

● The job should automatically start once the commit has completed.

Solution

Click here to see the solution

Jenkinsfile

@Library​(​'shared-library'​) _
pipeline {
agent none
stages {
stage(​'Fluffy Build'​) {
parallel {
stage(​'Build Java 8'​) {
agent {
node {
label ​'java8'
}
}
steps {
sh ​"./jenkins/build.sh"
}
post {
success {
stash(​name:​ ​'Java 8'​, ​includes:​ ​'target/**'​)
}
}
}
stage(​'Build Java 7'​) {
agent {
node {
label ​'java7'
}
}
steps {
sh ​'./jenkins/build.sh'
}
post {
success {
postBuildSuccess(​stashName:​ ​"Java 7"​)
}
}
}
}
}
stage(​'Fluffy Test'​) {
parallel {
stage(​'Backend Java 8'​) {
agent {
node {
label ​'java8'
}
}
steps {
unstash ​'Java 8'
sh ​'./jenkins/test-backend.sh'
}
post {
always {
junit ​'target/surefire-reports/**/TEST*.xml'
}
}
}
stage(​'Frontend'​) {
agent {
node {
label ​'java8'
}
}
steps {
unstash ​'Java 8'
sh ​'./jenkins/test-frontend.sh'
}
post {
always {
junit ​'target/test-results/**/TEST*.xml'
}
}
}
stage(​'Performance Java 8'​) {
agent {
node {
label ​'java8'
}
}
steps {
unstash ​'Java 8'
sh ​'./jenkins/test-performance.sh'
}
}
stage(​'Static Java 8'​) {
agent {
node {
label ​'java8'
}
}
steps {
unstash ​'Java 8'
sh ​'./jenkins/test-static.sh'
}
}
stage(​'Backend Java 7'​) {
agent {
node {
label ​'java7'
}
}
steps {
unstash ​'Java 7'
sh ​'./jenkins/test-backend.sh'
}
post {
always {
junit ​'target/surefire-reports/**/TEST*.xml'
}
}
}
stage(​'Frontend Java 7'​) {
agent {
node {
label ​'java7'
}
}
steps {
unstash ​'Java 7'
sh ​'./jenkins/test-frontend.sh'
}
post {
always {
junit ​'target/test-results/**/TEST*.xml'
}
}
}
stage(​'Performance Java 7'​) {
agent {
node {
label ​'java7'
}
}
steps {
unstash ​'Java 7'
sh ​'./jenkins/test-performance.sh'
}
}
stage(​'Static Java 7'​) {
agent {
node {
label ​'java7'
}
}
steps {
unstash ​'Java 7'
sh ​'./jenkins/test-static.sh'
}
}
}
}
stage(​'Confirm Deploy'​) {
when {
branch ​'master'
}
steps {
timeout(​time:​ ​3​, ​unit:​ ​'MINUTES'​ ) {
input(​message:​ ​"Okay to Deploy to Staging?"​, ​ok:​ ​"Let's Do it!"​)
}
}
}
stage(​'Fluffy Deploy'​) {
when {
branch ​'master'
}
agent {
node {
label ​'java7'
}
}
steps {
unstash ​'Java 7'
sh ​"./jenkins/deploy.sh ${params.DEPLOY_TO}"
}
}
}
parameters {
string(name: 'DEPLOY_TO', defaultValue: 'dev', description: '')
}
}

LIBRARYRESOURCE

USING LIBRARYRESOURCE

● From our previous example, instead of doing an inline body for the email,
let’s load the body of the message from a file

STARTING POINT

vars/sendNotifications.groovy
def call(Map config = [:]) {
<... removed Slack ...>
​// send to email
emailext (
subject:​ ​"${config.message}: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]'"​,
body:​ ​"""<p>${config.message}: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]':</p>
<p>Check console output at &QUOT;<a href='${env.BUILD_URL}'>${env.JOB_NAME}
[${env.BUILD_NUMBER}]</a>&QUOT;</p>"""​,
recipientProviders:​ [[​$class:​ ​'DevelopersRecipientProvider'​]]
)
}

BODY OF EMAIL

resources/emailtemplates/build-results.html
<p>$message: Job '$applicationName [$buildNumber]':</p>
<p>Check console output at <a href="$buildUrl">$applicationName [$buildNumber]</a></p>
LOAD THE FILE

vars/sendNotifications.groovy
def renderTemplate(input, binding) {
def engine = new groovy.text.GStringTemplateEngine()
def template = engine.createTemplate(input).make(binding)
return template.toString()
}

def call(Map config = [:]) {


<... removed Slack ...>
def rawBody = libraryResource ​'emailtemplates/build-results.html'
def binding = [
    applicationName: env.JOB_NAME,
    buildNumber:     env.BUILD_NUMBER,
    buildUrl:        env.BUILD_URL,
    message:         config.message
]
def emailBody = renderTemplate(rawBody,binding)

​// send to email


emailext (
subject:​ ​"${config.message}: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]'"​,
body:​ emailBody,
recipientProviders:​ [[​$class:​ ​'DevelopersRecipientProvider'​]]
)
}

LAB EXERCISE

Create and Use a Resource File


Create and Use a Resource File

In the exercise for this section you will:

● Create a ​resources​ file


● Create a custom step that loads the ​resources​ file
● Modify existing Pipeline to use the custom step

Task: Create a ​resources​ file

For this task, we will copy the contents of jenkins/build.sh from pipeline-lab to resources/scripts/build.sh in
the shared-library repository.

● Navigate to the ​pipeline-lab​ Gitea repository


● Click on the ​jenkins​ directory
● Copy the contents of ​build.sh​ to the clipboard
● Navigate to the ​shared-library​ Gitea repository
● Click on ​New File​ in the upper right hand corner
● Enter ​resources/scripts/build.sh​ in the "Name your file…​" input field
● Paste in the ​build.sh​ contents from the clipboard
● Add a commit message and click ​Commit Changes

Task: Create a custom step that loads the ​resources​ file

For this task, we will use the Gitea editor to create the custom step.

● Navigate to the ​shared-library​ Gitea repository


● Click on the ​vars​ directory
● Click on ​New File​ in the upper right hand corner
● Name the file ​runLinuxScript.groovy
● Paste in the following content

def call(Map config = [:]) {
    def scriptcontents = libraryResource "scripts/${config.name}"
    writeFile file: "${config.name}", text: scriptcontents
    sh """
    chmod a+x ./${config.name}
    ./${config.name}
    """
}
● Add a commit message and click ​Commit Changes

Task: Modify existing Pipeline to use the custom step

● Navigate to the ​pipeline-lab​ Gitea repository


● Click on ​Jenkinsfile
● Click on the pencil in the upper right hand corner to enter edit mode
● Replace

sh ​'./jenkins/build.sh'

with

runLinuxScript(​name:​ ​"build.sh"​)

● Commit the changes. The job should automatically start.


● Verify the job ran successfully.

Solution

Click here to see the solution

Jenkinsfile

@Library​(​'shared-library'​) _
pipeline {
agent none
stages {
stage(​'Fluffy Build'​) {
parallel {
stage(​'Build Java 8'​) {
agent {
node {
label ​'java8'
}
}
steps {
runLinuxScript(​name:​ ​"build.sh"​)
}
post {
success {
stash(​name:​ ​'Java 8'​, ​includes:​ ​'target/**'​)
}
}
}
stage(​'Build Java 7'​) {
agent {
node {
label ​'java7'
}
}
steps {
runLinuxScript(​name:​ ​"build.sh"​)
}
post {
success {
postBuildSuccess(​stashName:​ ​"Java 7"​)
}
}
}
}
}
stage(​'Fluffy Test'​) {
parallel {
stage(​'Backend Java 8'​) {
agent {
node {
label ​'java8'
}
}
steps {
unstash ​'Java 8'
sh ​'./jenkins/test-backend.sh'
}
post {
always {
junit ​'target/surefire-reports/**/TEST*.xml'
}
}
}
stage(​'Frontend'​) {
agent {
node {
label ​'java8'
}
}
steps {
unstash ​'Java 8'
sh ​'./jenkins/test-frontend.sh'
}
post {
always {
junit ​'target/test-results/**/TEST*.xml'
}
}
}
stage(​'Performance Java 8'​) {
agent {
node {
label ​'java8'
}
}
steps {
unstash ​'Java 8'
sh ​'./jenkins/test-performance.sh'
}
}
stage(​'Static Java 8'​) {
agent {
node {
label ​'java8'
}
}
steps {
unstash ​'Java 8'
sh ​'./jenkins/test-static.sh'
}
}
stage(​'Backend Java 7'​) {
agent {
node {
label ​'java7'
}
}
steps {
unstash ​'Java 7'
sh ​'./jenkins/test-backend.sh'
}
post {
always {
junit ​'target/surefire-reports/**/TEST*.xml'
}
}
}
stage(​'Frontend Java 7'​) {
agent {
node {
label ​'java7'
}
}
steps {
unstash ​'Java 7'
sh ​'./jenkins/test-frontend.sh'
}
post {
always {
junit ​'target/test-results/**/TEST*.xml'
}
}
}
stage(​'Performance Java 7'​) {
agent {
node {
label ​'java7'
}
}
steps {
unstash ​'Java 7'
sh ​'./jenkins/test-performance.sh'
}
}
stage(​'Static Java 7'​) {
agent {
node {
label ​'java7'
}
}
steps {
unstash ​'Java 7'
sh ​'./jenkins/test-static.sh'
}
}
}
}
stage(​'Confirm Deploy'​) {
when {
branch ​'master'
}
steps {
timeout(​time:​ ​3​, ​unit:​ ​'MINUTES'​ ) {
input(​message:​ ​"Okay to Deploy to Staging?"​, ​ok:​ ​"Let's Do it!"​)
}
}
}
stage(​'Fluffy Deploy'​) {
when {
branch ​'master'
}
agent {
node {
label ​'java7'
}
}
steps {
unstash ​'Java 7'
sh ​"./jenkins/deploy.sh ${params.DEPLOY_TO}"
}
}
}
parameters {
string(name: 'DEPLOY_TO', defaultValue: 'dev', description: '')
}
}

MORE SHARED LIBRARY EXAMPLES

SIMPLIFYING JENKINSFILES

DEFINE THE WHOLE PIPELINE AS A CUSTOM STEP

Jenkinsfile
@Library​(​'shared-starter'​) _
helloWorldPipeline(​name:​ ​"Fred"​, ​dayOfWeek:​ ​"Friday"​)
vars/helloWorldPipeline.groovy
def call(Map pipelineParams) {
pipeline {
agent any
stages {
stage(​'hello'​) {
steps {
helloWorld(​name:​ ​"${pipelineParams.name}"​, ​dayOfWeek:​ ​"${pipelineParams.dayOfWeek}"​)
}
}
}
}
}

PIPELINE HAS A (NOT SO) WELL KEPT SECRET

● Pipeline gives you the ability to add your own DSL elements
● Pipeline is itself a DSL, so you can extend it
WHY YOU WOULD WANT YOUR OWN DSL

● To reduce boilerplate by encapsulating common items you do in one DSL statement
● To provide a DSL that prescribes how builds should happen across your team or company
PREVIOUS EXAMPLE AS DSL

Jenkinsfile
@Library​(​'shared-starter'​) _
helloWorldPipeline {
name = ​"Fred"
dayOfWeek = ​"Friday"
}
vars/helloWorldPipeline.groovy
def call(body) {
// Evaluate the closure body against a map so that "name = value"
// assignments inside the braces populate pipelineParams
def pipelineParams = [:]
body.resolveStrategy = Closure.DELEGATE_FIRST
body.delegate = pipelineParams
body()

pipeline {
agent any
stages {
stage(​'hello'​) {
steps {
helloWorld(​name:​ ​"${pipelineParams.name}"​, ​dayOfWeek:​ ​"${pipelineParams.dayOfWeek}"​)
}
}
}
}
}

LAB EXERCISE

Create a Corporate Pipeline

Create a Corporate Pipeline

In the exercise for this section you will:

● Create a custom step


● Modify existing Pipeline to use the custom step

Task: Create Custom Step


For this task, we will use the Gitea editor to create the custom step.

● Navigate to the ​shared-library​ Gitea repository


● Click on the ​vars​ directory
● Click on ​New File​ in the upper right hand corner
● Name the file ​corporatePipeline.groovy
● Paste the following into ​corporatePipeline.groovy​:

def​ call(body) {
​def​ pipelineParams= [:]
body.resolveStrategy = Closure.DELEGATE_FIRST
body.delegate = pipelineParams
body()

!!!REPLACEME!!!
}

● Replace ​!!!REPLACEME!!!​ with the contents from the ​Jenkinsfile​ from the ​pipeline-lab​ repository.
Be sure to ​not​ copy over the ​@Library​ annotation.
● Add a commit message and click ​Commit Changes

Task: Modify the new custom step to use a parameter passed through corporatePipeline

● Click on ​corporatePipeline.groovy
● Click on the pencil in the upper right hand corner to enter edit mode
● Remove the ​parameters​ directive
● Change the ​${params.DEPLOY_TO}​ parameter to ​${pipelineParams.deployTo}
● Add a commit message and click ​Commit Changes

Task: Modify existing Pipeline to use the custom step

For this task, we will use the Gitea editor to modify the existing Pipeline.

● Navigate to the ​pipeline-lab​ Gitea repository


● Click on ​Jenkinsfile
● Click on the pencil in the upper right hand corner to enter edit mode
● Replace all of the contents of Jenkinsfile with

@Library​(​'shared-library'​) _
corporatePipeline {
deployTo = ​"dev"
}

● Commit the changes. The job should automatically start.


● Verify the job ran successfully.

Solution

Click here to see the solution

vars/corporatePipeline.groovy

def call(body) {
    def pipelineParams = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = pipelineParams
    body()

    pipeline {
        agent none
        stages {
            stage('Fluffy Build') {
                parallel {
                    stage('Build Java 8') {
                        agent {
                            node {
                                label 'java8'
                            }
                        }
                        steps {
                            runLinuxScript(name: "build.sh")
                        }
                        post {
                            success {
                                stash(name: 'Java 8', includes: 'target/**')
                            }
                        }
                    }
                    stage('Build Java 7') {
                        agent {
                            node {
                                label 'java7'
                            }
                        }
                        steps {
                            runLinuxScript(name: "build.sh")
                        }
                        post {
                            success {
                                postBuildSuccess(stashName: "Java 7")
                            }
                        }
                    }
                }
            }
            stage('Fluffy Test') {
                parallel {
                    stage('Backend Java 8') {
                        agent {
                            node {
                                label 'java8'
                            }
                        }
                        steps {
                            unstash 'Java 8'
                            sh './jenkins/test-backend.sh'
                        }
                        post {
                            always {
                                junit 'target/surefire-reports/**/TEST*.xml'
                            }
                        }
                    }
                    stage('Frontend') {
                        agent {
                            node {
                                label 'java8'
                            }
                        }
                        steps {
                            unstash 'Java 8'
                            sh './jenkins/test-frontend.sh'
                        }
                        post {
                            always {
                                junit 'target/test-results/**/TEST*.xml'
                            }
                        }
                    }
                    stage('Performance Java 8') {
                        agent {
                            node {
                                label 'java8'
                            }
                        }
                        steps {
                            unstash 'Java 8'
                            sh './jenkins/test-performance.sh'
                        }
                    }
                    stage('Static Java 8') {
                        agent {
                            node {
                                label 'java8'
                            }
                        }
                        steps {
                            unstash 'Java 8'
                            sh './jenkins/test-static.sh'
                        }
                    }
                    stage('Backend Java 7') {
                        agent {
                            node {
                                label 'java7'
                            }
                        }
                        steps {
                            unstash 'Java 7'
                            sh './jenkins/test-backend.sh'
                        }
                        post {
                            always {
                                junit 'target/surefire-reports/**/TEST*.xml'
                            }
                        }
                    }
                    stage('Frontend Java 7') {
                        agent {
                            node {
                                label 'java7'
                            }
                        }
                        steps {
                            unstash 'Java 7'
                            sh './jenkins/test-frontend.sh'
                        }
                        post {
                            always {
                                junit 'target/test-results/**/TEST*.xml'
                            }
                        }
                    }
                    stage('Performance Java 7') {
                        agent {
                            node {
                                label 'java7'
                            }
                        }
                        steps {
                            unstash 'Java 7'
                            sh './jenkins/test-performance.sh'
                        }
                    }
                    stage('Static Java 7') {
                        agent {
                            node {
                                label 'java7'
                            }
                        }
                        steps {
                            unstash 'Java 7'
                            sh './jenkins/test-static.sh'
                        }
                    }
                }
            }
            stage('Confirm Deploy') {
                when {
                    branch 'master'
                }
                steps {
                    timeout(time: 3, unit: 'MINUTES') {
                        input(message: 'Okay to Deploy to Staging?', ok: 'Let\'s Do it!')
                    }
                }
            }
            stage('Fluffy Deploy') {
                when {
                    branch 'master'
                }
                agent {
                    node {
                        label 'java7'
                    }
                }
                steps {
                    unstash 'Java 7'
                    sh "./jenkins/deploy.sh ${pipelineParams.deployTo}"
                }
            }
        }
    }
}

DURABILITY

DURABILITY AND PIPELINE SPEED

● By default, Pipeline writes transient data to disk ​FREQUENTLY


■ Running pipelines lose very little data from a system crash or an unexpected
Jenkins restart
■ This frequent disk I/O can severely degrade Pipeline performance
● Speed/Durability settings allow you to improve performance by reducing the frequency at
which data is written to disk
■ This incurs the risk that some data may be lost if the system crashes or
Jenkins is restarted

WHEN HIGHER-PERFORMANCE DURABILITY SETTINGS HELP

● Most basic Pipelines that just build and test the code
■ They frequently write build and test data and can easily be rerun
● Your Jenkins instance shows high iowait numbers
● Your Jenkins instance uses a networked file system or magnetic storage
● You run many Pipelines at the same time
● You run Pipelines with many steps (more than several hundred)
WHEN NOT TO USE HIGHER-PERFORMANCE DURABILITY SETTINGS

● Higher-Performance Durability Settings do not help if:


■ Your Pipelines mostly wait for shell/batch scripts to finish
■ Your Pipelines are writing large amounts of data to logs
○ This setting does not affect logging
■ You are not running Pipelines
● Higher-Performance settings are NOT recommended for:
■ Pipelines that modify the state of critical infrastructure
■ Pipelines that deploy code to a production environment
DURABILITY SETTINGS

● MAX_SURVIVABILITY​ (default) - Slowest option


■ Writes data frequently
■ Little or no data is lost if Jenkins has a dirty shutdown
● SURVIVABLE_NONATOMIC​ - A bit faster
■ Writes data with every step but avoids atomic writes
■ Faster than MAX_SURVIVABILITY mode, especially on networked filesystems
● PERFORMANCE_OPTIMIZED​ - Fastest option
■ Greatly reduces disk I/O
■ Data may be lost if Jenkins has a dirty shutdown

HOW TO SET - GLOBALLY

● Set under "Manage Jenkins" page on dashboard

HOW TO SET - PIPELINE JOB TYPE

● Overrides global setting


● Set under "Pipeline speed/durability override" at the top of the job configuration


HOW TO SET - MULTIBRANCH JOB TYPE

● Overrides global setting


● Configure a custom Branch Property Strategy under the SCM


HOW TO SET - JENKINSFILE

pipeline {
agent any
stages {
stage(​'Example'​) {
steps {
echo ​'Hello World'
}
}
}
options {
durabilityHint(​'PERFORMANCE_OPTIMIZED'​)
}
}
BEST PRACTICES FOR DURABILITY SETTINGS

● Use ​PERFORMANCE_OPTIMIZED​ mode for most build/test Pipelines


■ Set ​MAX_SURVIVABILITY​ for Pipelines that modify critical infrastructure
■ You can set PERFORMANCE_OPTIMIZED for the global setting, then use the options step to choose a more durable setting for Pipelines where data preservation is more critical (see the sketch below)
● Use either ​MAX_SURVIVABILITY​ or ​SURVIVABLE_NONATOMIC​ for auditing
■ These modes record every step that is run
■ You can set one of these modes for the global setting then use the options
step to choose ​PERFORMANCE_OPTIMIZED​ for build/test Pipelines
● You can force a Pipeline to persist data by pausing it
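A minimal sketch of that override: with the global setting at PERFORMANCE_OPTIMIZED, a critical Pipeline opts back into full durability in its Jenkinsfile (the deploy.sh script name is illustrative):

pipeline {
    agent any
    options {
        // Override a global PERFORMANCE_OPTIMIZED default for this critical Pipeline
        durabilityHint('MAX_SURVIVABILITY')
    }
    stages {
        stage('deploy') {
            steps {
                // deploy.sh is an example script name
                sh './deploy.sh'
            }
        }
    }
}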
FOR FURTHER READING

● Scaling Pipelines

LAB EXERCISE

Durability


In the exercise for this section you will:

● Modify the existing custom step to override the global durability value

Task: Review the logs from a previous job run

● Using Classic view, review the most recent log from a successful master branch run.
● Search for ​Running in Durability level​. You should see a value of ​PERFORMANCE_OPTIMIZED​.

Task: Modify corporatePipeline to use a different durability option

● Navigate to the ​shared-library​ Gitea repository.


● Click on the ​vars​ directory.
● Click on ​corporatePipeline.groovy​.
● Click on the pencil in the upper right hand corner to enter edit mode.
● Using the information from the slides, add an ​option​ to set the durability to the default value.
● Commit the change.
● Back in Jenkins, start a build from the master branch of the ​pipeline-lab​ job.
● Verify that the job ran successfully.

Task: Review the log


● Using Classic view, review the most recent log from a successful master branch run.
● Search the log for ​Running in Durability level​. You should see a value of
PERFORMANCE_OPTIMIZED​.
○ This is a known issue much like the parameters issue we saw in the Blue Ocean
Refresher lab.
● Start another build from the master branch of the ​pipeline-lab​ job.
● Verify that the job ran successfully.
● Search the most recent log for ​Running in Durability level​. You should see the value that you set.

Solution

Click here to see the solution

vars/corporatePipeline.groovy

def​ call(body) {
​def​ pipelineParams = [:]
body.resolveStrategy = Closure.DELEGATE_FIRST
body.delegate = pipelineParams
body()

pipeline {
agent none
stages {
stage(​'Fluffy Build'​) {
parallel {
stage(​'Build Java 8'​) {
agent {
node {
label ​'java8'
}
}
post {
success {
stash(​name:​ ​'Java 8'​, ​includes:​ '​ target/**'​)
}
}
steps {
runLinuxScript(​name:​ "​ build.sh"​)
}
}
stage(​'Build Java 7'​) {
agent {
node {
label ​'java7'
}
}
post {
success {
postBuildSuccess(​stashName:​ "​ Java 7"​)
}
}
steps {
runLinuxScript(​name:​ "​ build.sh"​)
}
}
}
}
stage(​'Fluffy Test'​) {
parallel {
stage(​'Backend Java 8'​) {
agent {
node {
label ​'java8'
}
}
post {
always {
junit ​'target/surefire-reports/**/TEST*.xml'
}
}
steps {
unstash ​'Java 8'
sh ​'./jenkins/test-backend.sh'
}
}
stage(​'Frontend'​) {
agent {
node {
label ​'java8'
}
}
post {
always {
junit ​'target/test-results/**/TEST*.xml'
}
}
steps {
unstash ​'Java 8'
sh ​'./jenkins/test-frontend.sh'
}
}
stage(​'Performance Java 8'​) {
agent {
node {
label ​'java8'
}
}
steps {
unstash ​'Java 8'
sh ​'./jenkins/test-performance.sh'
}
}
stage(​'Static Java 8'​) {
agent {
node {
label ​'java8'
}
}
steps {
unstash ​'Java 8'
sh ​'./jenkins/test-static.sh'
}
}
stage(​'Backend Java 7'​) {
agent {
node {
label ​'java7'
}
}
post {
always {
junit ​'target/surefire-reports/**/TEST*.xml'
}
}
steps {
unstash ​'Java 7'
sh ​'./jenkins/test-backend.sh'
}
}
stage(​'Frontend Java 7'​) {
agent {
node {
label ​'java7'
}
}
post {
always {
junit ​'target/test-results/**/TEST*.xml'
}
}
steps {
unstash ​'Java 7'
sh ​'./jenkins/test-frontend.sh'
}
}
stage(​'Performance Java 7'​) {
agent {
node {
label ​'java7'
}
}
steps {
unstash ​'Java 7'
sh ​'./jenkins/test-performance.sh'
}
}
stage(​'Static Java 7'​) {
agent {
node {
label ​'java7'
}
}
steps {
unstash ​'Java 7'
sh ​'./jenkins/test-static.sh'
}
}
}
}
stage(​'Confirm Deploy'​) {
when {
branch ​'master'
}
steps {
timeout(​time:​ ​3​, ​unit:​ ​'MINUTES'​) {
input(​message:​ ​'Okay to Deploy to Staging?'​, ​ok:​ ​'Let\'s Do it!'​)
}
}
}
stage(​'Fluffy Deploy'​) {
agent {
node {
label ​'java7'
}
}
when {
branch ​'master'
}
steps {
unstash ​'Java 7'
sh ​"./jenkins/deploy.sh ${pipelineParams.deployTo}"
}
}
}
options {
durabilityHint(​'MAX_SURVIVABILITY'​)
}
}
}

SEQUENTIAL STAGES


● Another way to specify stages nested within other stages


● You can run multiple stages in each parallel branch
■ This gives you more visibility into the progress of your Pipeline

SEQUENTIAL STAGES

pipeline {
    agent none
    stages {
        stage("build and deploy on Windows and Linux") {
            parallel {
                stage("windows") {
                    agent { label "windows" }
                    stages {
                        stage("build") {
                            steps {
                                bat "run-build.bat"
                            }
                        }
                        stage("deploy") {
                            when { branch "master" }
                            steps {
                                bat "run-deploy.bat"
                            }
                        }
                    }
                }
                stage("linux") {
                    agent { label "linux" }
                    stages {
                        stage("build") {
                            steps {
                                sh "./run-build.sh"
                            }
                        }
                        stage("deploy") {
                            when { branch "master" }
                            steps {
                                sh "./run-deploy.sh"
                            }
                        }
                    }
                }
            }
        }
    }
}
SEQUENTIAL STAGES

● The sequential stages feature was originally driven by users wanting to have multiple stages in parallel branches
● We also discovered that being able to group multiple stages together with the same agent, environment, when, etc. has many other uses
SEQUENTIAL STAGES

● Use sequential stages to ensure that stages using the same agent use the same workspace, even though you are using multiple agents in your Pipeline
■ Use a parent stage with an agent directive on it
○ Then all the stages inside its stages directive run on the same executor, in the same workspace
SEQUENTIAL STAGES

● Use sequential stages to apply a timeout to a whole group of stages
■ Use a parent stage with nested stages and define a timeout in the parent's options directive
■ That timeout is applied to the execution of the parent, including its nested stages
● Previously, you could only set a timeout for the entire Pipeline or an individual stage
SEQUENTIAL STAGES

pipeline {
    agent none
    stages {
        stage("build and test the project") {
            options {
                timeout(time: 1, unit: 'HOURS')
            }
            agent { docker "our-build-tools-image" }
            stages {
                stage("build") {
                    steps {
                        sh "./build.sh"
                    }
                }
                stage("test") {
                    steps {
                        sh "./test.sh"
                    }
                    post {
                        success {
                            stash name: "artifacts", includes: "artifacts/**/*"
                        }
                    }
                }
            }
        }
...

SEQUENTIAL STAGES

...

        stage("deploy the artifacts if a user confirms") {
            options {
                timeout(time: 3, unit: 'DAYS')
            }
            input {
                message "Should we deploy the project?"
                ok "Yes, we should."
                submitter "alice,bob"
            }
            agent {
                docker "our-deploy-tools-image"
            }
            steps {
                sh "./deploy.sh"
            }
        }
    }
}
LAB EXERCISE

Sequential Stages


In the exercise for this section you will:

● Modify existing custom step to use sequential stages


Task: Make a copy of the corporatePipeline

● Navigate to the ​shared-library​ Gitea repository


● Click on the ​vars​ directory
● Click on ​corporatePipeline.groovy
● Copy the contents of the file
● Click on the ​vars​ directory
● Create a new file
● Name it ​corporatePipelineSequential.groovy
● Paste the contents into the body of the text editor
● Commit the changes to the ​master​ branch

Task: Create Sequential Stages for Java 7 and Java 8

● Click on the pencil in the upper right hand corner of ​corporatePipelineSequential.groovy​ to enter
edit mode
● Modify the pipeline to run parallel sequential stages to replace the existing ​Fluffy Build​ and ​Fluffy
Test​ stages. There should be one sequential stage for ​Java 7​ and one sequential stage for ​Java
8​.
● Commit the changes to ​corporatePipelineSequential.groovy
● Open the ​Jenkinsfile​ in the ​pipeline-lab​ repo and change ​corporatePipeline​ to
corporatePipelineSequential​.
● Commit the changes to the ​master​ branch
● The job should start automatically on the ​master​ branch
● Verify the job completed successfully
● Go take a look at the job visualization using Blue Ocean. Notice the difference from prior runs.

Solution

Click here to see the solution

vars/corporatePipelineSequential.groovy

def call(body) {
    def pipelineParams = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = pipelineParams
    body()

    pipeline {
        agent none
        stages {
            stage('Build and Test Java') {
                parallel {
                    stage('java8') {
                        agent { label 'java8' }
                        stages {
                            stage("build8") {
                                steps {
                                    runLinuxScript(name: "build.sh")
                                }
                                post {
                                    success {
                                        stash(name: 'Java 8', includes: 'target/**')
                                    }
                                }
                            }
                            stage('Backend Java 8') {
                                steps {
                                    unstash 'Java 8'
                                    sh './jenkins/test-backend.sh'
                                }
                                post {
                                    always {
                                        junit 'target/surefire-reports/**/TEST*.xml'
                                    }
                                }
                            }
                            stage('Frontend') {
                                steps {
                                    unstash 'Java 8'
                                    sh './jenkins/test-frontend.sh'
                                }
                                post {
                                    always {
                                        junit 'target/test-results/**/TEST*.xml'
                                    }
                                }
                            }
                            stage('Performance Java 8') {
                                steps {
                                    unstash 'Java 8'
                                    sh './jenkins/test-performance.sh'
                                }
                            }
                            stage('Static Java 8') {
                                steps {
                                    unstash 'Java 8'
                                    sh './jenkins/test-static.sh'
                                }
                            }
                        }
                    }
                    stage('java7') {
                        agent { label 'java7' }
                        stages {
                            stage("build7") {
                                steps {
                                    runLinuxScript(name: "build.sh")
                                }
                                post {
                                    success {
                                        postBuildSuccess(stashName: "Java 7")
                                    }
                                }
                            }
                            stage('Backend Java 7') {
                                steps {
                                    unstash 'Java 7'
                                    sh './jenkins/test-backend.sh'
                                }
                                post {
                                    always {
                                        junit 'target/surefire-reports/**/TEST*.xml'
                                    }
                                }
                            }
                            stage('Frontend Java 7') {
                                steps {
                                    unstash 'Java 7'
                                    sh './jenkins/test-frontend.sh'
                                }
                                post {
                                    always {
                                        junit 'target/test-results/**/TEST*.xml'
                                    }
                                }
                            }
                            stage('Performance Java 7') {
                                steps {
                                    unstash 'Java 7'
                                    sh './jenkins/test-performance.sh'
                                }
                            }
                            stage('Static Java 7') {
                                steps {
                                    unstash 'Java 7'
                                    sh './jenkins/test-static.sh'
                                }
                            }
                        }
                    }
                }
            }
            stage('Confirm Deploy') {
                when { branch 'master' }
                steps {
                    timeout(time: 3, unit: 'MINUTES') {
                        input(message: 'Okay to Deploy to Staging?', ok: 'Let\'s Do it!')
                    }
                }
            }
            stage('Fluffy Deploy') {
                agent { label 'java7' }
                when { branch 'master' }
                steps {
                    unstash 'Java 7'
                    sh "./jenkins/deploy.sh ${pipelineParams.deployTo}"
                }
            }
        }
        options {
            durabilityHint('MAX_SURVIVABILITY')
        }
    }
}

RESTART FROM A STAGE


● You can restart any completed Declarative Pipeline from any top-level stage
that ran in that Pipeline
● This allows you to rerun a Pipeline from a stage that failed due to transient or
environmental considerations

HOW TO USE

● No additional configuration is needed in the Jenkinsfile to allow you to restart stages in your Declarative Pipelines
● Once your Pipeline has completed, whether it succeeds or fails, you can go to the side panel for the run in the classic UI and click on "Restart from Stage"
HOW TO USE

● You are prompted to choose from a list of top-level stages that were executed
in the original run, in the order they were executed
● Stages that were skipped due to an earlier failure are not available to be restarted,
but stages that were skipped due to a when condition not being satisfied are available
● The parent stage for a group of parallel stages, or for a group of nested stages run sequentially, is also not available - only top-level stages are allowed


HOW TO USE
● Once you choose a stage from which to restart and click submit,
a new build, with a new build number, starts
■ All inputs are the same, including SCM information, build parameters,
and the contents of any stash artifacts
● All stages before the selected stage are skipped and the Pipeline
starts executing at the selected stage
● From that point on, the Pipeline runs as normal
PRESERVING STASHES FOR USE WITH RESTARTED STAGES

● Normally, when you run the stash step in your Pipeline, the resulting stash of artifacts
is cleared when the Pipeline completes, regardless of the result of the Pipeline
● Since stash artifacts are not accessible outside of the Pipeline run that created them, this behavior has not previously been a limitation
● With Declarative stage restarting, you may want to be able to ​unstash​ artifacts
from a stage that ran before the stage from which you are restarting
PRESERVING STASHES FOR USE WITH RESTARTED STAGES

● To enable stash preservation, use the preserveStashes job property


■ This allows you to configure a maximum number of completed runs
whose stash artifacts should be preserved for reuse in a restarted run
● You can specify anywhere from 1 to 50 as the number of runs to preserve (see the sketch below)
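A minimal Jenkinsfile sketch of this job property, set from the options directive (the buildCount value and script names are just examples):

pipeline {
    agent any
    options {
        // Keep stashes from the 5 most recent completed runs so that a
        // restarted stage can still unstash them (any value from 1 to 50 works)
        preserveStashes(buildCount: 5)
    }
    stages {
        stage('build') {
            steps {
                sh './build.sh'
                stash(name: 'artifacts', includes: 'target/**')
            }
        }
    }
}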

LAB EXERCISE

Restart Stage


In the exercise for this section you will:

● Modify the existing custom step to preserve stashes so that a stage can be restarted successfully

Task: Modify the corporatePipelineSequential step

● Navigate to the ​shared-library​ Gitea repository


● Click on the ​vars​ directory
● Click on ​corporatePipelineSequential.groovy
● Click on the pencil in the upper right hand corner to enter edit mode
● Add an option to retain the last 5 stashes
● Commit the changes

Task: Run the job from the Classic UI

● Open the job run from the master branch in the Classic UI
● Click on ​Build Now
● Click on the progress bar for the job in the left nav to open the scrolling console log
● Wait for the input stage and then click on ​Abort
● The job should finish in an ​ABORTED​ state

Task: Restart the stage from the Classic UI

● In the breadcrumb at the top of the page, click on the job number that you just aborted
● On the left nav, click on ​Restart from Stage
● From the dropdown, select ​Confirm Deploy
● Click ​Run
● Click on the progress bar for the job in the left nav to open the scrolling console log
● Wait for the input stage and then click on ​Let’s Do It!
● The job should complete successfully

Task: Restart the stage from Blue Ocean

● In the breadcrumb at the top of the page:


○ make note of the job number
○ click on ​master
● Click on the ​Open Blue Ocean​ link in the left nav
● Click on the ​Activity​ tab on the right hand side of the screen
● Click on the line that has the run number that matches the job number for the ​master​ branch
● Click on the ​Confirm Deploy​ green ball in the pipeline
● Click on the blue ​Restart Confirm Deploy​ link in the lower right hand corner

NOTE: You may have to refresh the page (Cmd+R/Ctrl+R) in order to see the correct rendering. This is a known issue with Blue Ocean and Restart from Stage.

● Click on Let's Do It!
● The job should complete successfully

Solution

Click here to see the solution

vars/corporatePipelineSequential.groovy
def​ call(body) {
​def​ pipelineParams= [:]
body.resolveStrategy = Closure.DELEGATE_FIRST
body.delegate = pipelineParams
body()

pipeline {
agent none
stages {
stage(​'Build and Test Java'​) {
parallel {
stage(​'java8'​) {
agent { label ​'java8'​ }
stages {
stage(​"build8"​) {
steps {
runLinuxScript(​name:​ "​ build.sh"​)
}
post {
success {
stash(​name:​ ​'Java 8'​, ​includes:​ '​ target/**'​)
}
}
}
stage(​'Backend Java 8'​) {
steps {
unstash ​'Java 8'
sh ​'./jenkins/test-backend.sh'
}
post {
always {
junit ​'target/surefire-reports/**/TEST*.xml'
}
}
}
stage(​'Frontend'​) {
steps {
unstash ​'Java 8'
sh ​'./jenkins/test-frontend.sh'
}
post {
always {
junit ​'target/test-results/**/TEST*.xml'
}
}
}
stage(​'Performance Java 8'​) {
steps {
unstash ​'Java 8'
sh ​'./jenkins/test-performance.sh'
}
}
stage(​'Static Java 8'​) {
steps {
unstash ​'Java 8'
sh ​'./jenkins/test-static.sh'
}
}
}
}
stage(​'java7'​) {
agent { label ​'java7'​ }
stages {
stage(​"build7"​) {
steps {
runLinuxScript(​name:​ "​ build.sh"​)
}
post {
success {
postBuildSuccess(​stashName:​ "​ Java 7"​)
}
}
}
stage(​'Backend Java 7'​) {
steps {
unstash ​'Java 7'
sh ​'./jenkins/test-backend.sh'
}
post {
always {
junit ​'target/surefire-reports/**/TEST*.xml'
}
}
}
stage(​'Frontend Java 7'​) {
steps {
unstash ​'Java 7'
sh ​'./jenkins/test-frontend.sh'
}
post {
always {
junit ​'target/test-results/**/TEST*.xml'
}
}
}
stage(​'Performance Java 7'​) {
steps {
unstash ​'Java 7'
sh ​'./jenkins/test-performance.sh'
}
}
stage(​'Static Java 7'​) {
steps {
unstash ​'Java 7'
sh ​'./jenkins/test-static.sh'
}
}
}
}
}
}
stage(​'Confirm Deploy'​) {
when { branch ​'master'​ }
steps {
timeout(​time:​ ​3​, ​unit:​ ​'MINUTES'​) {
input(​message:​ ​'Okay to Deploy to Staging?'​, ​ok:​ ​'Let\'s Do it!'​)
}
}
}
stage(​'Fluffy Deploy'​) {
agent { label ​'java7'​ }
when { branch ​'master'​ }
steps {
unstash ​'Java 7'
sh ​"./jenkins/deploy.sh ${pipelineParams.deployTo}"
}
}
}
options {
durabilityHint('MAX_SURVIVABILITY')
preserveStashes(buildCount: 5)
}
}
}

GROOVY SANDBOX

WHAT IS THE GROOVY SANDBOX

● A limited execution environment that is enabled by default
● Allows anyone to run a test script without risking damage to the Jenkins environment
■ Not necessary to wait for an administrator to approve the script
■ When it runs, each method call, object construction and field access is checked against a whitelist of approved operations
○ If an unapproved operation is attempted, the script is killed

HOW TO USE SANDBOX

WHY DOES PIPELINE NEED A SANDBOX?

● The sandbox provides a safe location to test Scripted Pipeline code that
has not been thoroughly tested and reviewed
● "Unsafe" Pipeline code includes calls that are not known to be safe
■ The code itself may actually be benign
● The mischief that unsafe code can do includes:
■ Disclosure of information (secrets, proprietary information, or other
confidential information being accessed by the Pipeline)
■ Modification/deletion of data in the Jenkins master
● Unsafe code can be inserted in a Pipeline intentionally or accidentally
WHITELIST

● The whitelist defines each method call, object construction and field access that can be used
■ The Script Security plugin includes a small default whitelist
■ Plugins may add operations to that list
■ Administrators may add operations to that list
■ Method signatures can be pre-whitelisted with Groovy, either on boot (using init.groovy.d) or in the script console, as in the sketch below
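For illustration, a snippet along these lines can pre-approve a signature; this assumes the Script Security plugin's ScriptApproval API, and the signature string shown is only an example:

// Pre-whitelist a method signature on boot (init.groovy.d) or from the
// script console. The signature below is illustrative.
import org.jenkinsci.plugins.scriptsecurity.scripts.ScriptApproval

ScriptApproval.get().approveSignature('method java.lang.String toUpperCase')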

ADMINISTRATION OF WHITELIST
● When a script fails because it uses an operation that is not in the whitelist, that operation is added to an approval queue
■ The administrator can approve the script and it will run
● The administrator is also given a list of pending operation approvals
■ Click ​Approve​ next to an operation to add it to the whitelist
■ This makes that operation available to all sandboxed scripts on the Jenkins
instance

GUIDELINES FOR WHITELIST

● Consider carefully which operations are whitelisted


■ Most operations that change the state of persisted objects (such as Jenkins
jobs) should not be whitelisted
■ Most ​getSomething​ methods are harmless
○ Some "getter" methods check specific permissions (using an
Access Control List or ACL) whereas scripts are often run by a
system pseudo-user to whom all permissions are granted
○ Unconditional whitelisting of such an operation may allow access
to resources that should be secured
EXAMPLE OF UNSAFE GETTER METHOD

● hudson.model.AbstractItem getParent obtains the folder or Jenkins root that contains a job
■ It is inherently safe
● The possible follow-up call method ​hudson.model.ItemGroup getItems​ lists jobs by name
within a folder by checking ​Jobs/Read
■ Should not be whitelisted unconditionally because it enables the user to read
at least some information from any jobs in that folder, even those that are
protected by the ACL

SAFE HANDLING OF GETTER METHOD

● Administrator can instead click ​Approve assuming permission check​ for ​getItems
■ The call is permitted when run as an actual user who is on the ACL
■ The call is forbidden when run as the system user
■ This button is shown only for method calls and constructors
■ Use it only when you know that Jenkins is doing a permission check
MORE REMARKS ABOUT SCRIPT SECURITY

● Do not​ use Permissive Script Security


■ This makes it trivially easy to root your master
● The Sandbox can be annoying because each script run fails at the first operation that is not whitelisted, so it can take several iterations to get all methods, etc. whitelisted
■ Even so, ​DO NOT DISABLE THE SANDBOX!
■ This would make your Jenkins installation very vulnerable to a catastrophic
security breach
● Script Security and the Sandbox are most important for Scripted Pipeline
■ Declarative Pipeline can include code that is subject to script security,
although it is less common and harder to do
MORE REMARKS ABOUT SCRIPT SECURITY
● Global Shared Libraries with a locked-down repo can be used
to execute unsafe Pipeline code without doing mass whitelisting

FOR FURTHER READING


We have only scratched the surface of this topic. Additional information is available:

● R. Tyler Croy's "Do not disable the Groovy sandbox" blog is a fun discussion of why sandboxes are important and includes an example script that could destroy your Jenkins instance were it allowed to run.
● The Script Security Plugin documentation provides detailed information about the issues and use of sandboxes and other related topics.
● Content Security Policy describes the Content-Security-Policy header and gives guidelines and instructions for relaxing the rules.

OTHER HINTS

KEEP IT SIMPLE!

● Limit the amount of complex logic embedded in the Pipeline
● Scripted Syntax is NOT a general-purpose programming language
■ There's nothing wrong with using Scripted syntax when what you're trying to do is really not a good fit for Declarative syntax, but if your Pipeline is accumulating complex logic, you should simplify your approach

KEEP IT SIMPLE!

● Pipeline code is glue


■ Use just enough Scripted Syntax to connect the Pipeline steps and integrate
tools
○ Delegate more work to agents and reduce the load on masters
■ This makes the Pipeline code:
○ Easier to maintain
○ More robust against bugs
USE COMMAND-LINE TOOLS FOR XML AND JSON PARSING

● Avoid Pipeline XML or JSON parsing using Groovy’s XmlSlurper and JsonSlurper
■ Groovy implementations are complex and very brittle for Pipeline usage
■ XmlSlurper and JsonSlurper carry a high memory and CPU cost in Pipelines
● xmllint​ and ​XMLStarlet​ are command-line tools offering XML extraction using XPath
● jq​ offers the same functionality for JSON
● These extraction tools may be coupled with curl or wget to fetch information from an HTTP API, as in the sketch below
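A minimal sketch of this approach inside a Pipeline step; the endpoint URL and the .version field are illustrative:

// Shell out to curl and jq rather than parsing the JSON with JsonSlurper.
// The endpoint and field name are example values.
def version = sh(
    script: "curl -s https://api.example.com/status | jq -r '.version'",
    returnStdout: true
).trim()
echo "Service version: ${version}"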
USE EXTERNAL SCRIPTS AND TOOLS

● Avoid embedding complex logic in the Pipeline itself


● Instead, use external scripts and tools for complex or CPU-expensive processing
■ Offloads work from the master to external executors
■ Allows for easy scale-out of hardware resources
■ Simplifies testing

Components can be tested in isolation without the full on-master
execution environment
WHEN TO USE COMMAND-LINE TOOLS

● Processing data
● Communicating interactively with REST APIs
● Parsing/templating larger XML or JSON files
● Nontrivial integration with external APIs
● Simulations and complex calculations
● Business logic
COMMAND-LINE CLIENTS FOR APIS

● Many software vendors provide easy command-line clients for their tools
in various programming languages
■ These are often robust, performant and easy to use
● Use shell or batch steps to integrate these tools, which can be written in any language
■ For a Java client, use a command like:
sh "java -jar client.jar $endPointUrl $inputData"
● Avoid inputs that might contain shell metacharacters. A construction like the following solves this problem:
writeFile file: 'input.json', text: inputData
sh 'java -jar client.jar $endPointUrl input.json'
REDUCE THE NUMBER OF STEPS IN THE PIPELINE

● Most well-formed Pipelines contain fewer than 300 steps


● Reducing the number of steps that are called can improve Pipeline and overall Jenkins
performance
■ Each call to sh or bat incurs about 200ms of overhead
■ Information about each step run is written to disk, which adds I/O that slows Jenkins processing
● Other advantages to fewer steps:
■ Simplify the test and debug process
■ Simplify the logic of the Pipeline
HOW TO REDUCE THE NUMBER OF STEPS IN A PIPELINE

● Consolidate several sequential sh or bat steps into a single, external helper shell script that the Pipeline calls as a single step (see the sketch below)
■ Version this script and store it in the source code repository
■ This script can be tested independently of the Pipeline itself
● The tradeoff here is that you cannot just read down the Pipeline to quickly
see each step that it executes
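A hypothetical before-and-after sketch (the commands and script names are illustrative):

// Before: three separate steps, each incurring its own overhead and disk record
sh 'mvn -B clean package'
sh 'cp target/*.jar artifacts/'
sh './jenkins/tag-build.sh'

// After: one versioned helper script invoked as a single step
sh './jenkins/package-and-tag.sh'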
