Continuous Integration (CI) and Continuous Delivery (CD)
A Practical Guide to Designing and Developing Pipelines
Henry van Merode
About the Author
Henry van Merode is a solution architect with more than 30 years of
experience in ICT within several financial organizations. His experience
spans a wide range of technologies and platforms, from IBM mainframes
to cloud systems on AWS and Azure. He developed, designed, and
architected major financial systems such as Internet banking and order
management systems, with a focus on performance, high availability,
reliability, maintainability, and security.
For the last 8 years, Henry has extended his expertise with
continuous integration, continuous delivery, and automated pipelines.
As an Azure DevOps community lead, Henry likes to talk about this
subject and to promote automating the software supply chain to the teams at
his work.
About the Technical Reviewers
Fred Peek is an IT architect from Utrecht, the Netherlands. He has a
master’s degree in electrical engineering from the Eindhoven University
of Technology. He has more than 20 years of experience in the IT industry,
working in software development (Java, C++), software architecture, and
security. Besides IT, he is involved in the audio and music industry as a
recording/mixing engineer, DJ, and Audio Engineering Society (AES)
member.
Acknowledgments
Many thanks to the people of Apress for allowing me to write this book and
for helping me publish it.
Special thanks to my colleagues Fred, Ralph, and Joep for reviewing
the text, providing me with suggestions, and correcting mistakes I made.
And of course, I want to thank my wife Liseth for being supportive.
CHAPTER 1
The Pitfalls of CI/CD
Challenges
At work, I once gave a presentation about continuous integration/
continuous delivery and described how it improves the speed of software
delivery. I explained that using pipelines to automate the software delivery
process was a real game changer. I presented the theory that was written
down in books and articles, until someone from the audience asked me
a question about what the development of pipelines looks like and how,
for example, one should perform unit tests of pipelines themselves. This
question confused me a bit because the theory nicely explains how to
unit test an application in the pipeline but never explains how to unit test
pipelines themselves. Unfortunately, I could not give a satisfying answer,
but this question did make me realize that until then my approach to
creating pipelines was a bit naïve and needed a boost. A scan within the
department I worked at told me I wasn’t the only one who could benefit
The problem with this type of diagram is that it’s fine to explain
the concepts of CI/CD, but I noticed that teams use this as their actual
blueprint and realize along the way that they have to redesign and rewrite
their pipelines. Often, one person is responsible and just starts with a
simple implementation of a pipeline without considering the requirements
or without even knowing that there are (implicit) requirements. For
example, the team works in a certain manner, and that was not taken into
account from the start.
The lack of a structured approach to implementing pipelines is one
of the underlying problems. The “thinking” processes required before the
pipeline implementation starts never happen.
Vulnerabilities
Teams are often unaware that the solutions they incorporate in their
pipeline may contain severe vulnerabilities. For example,
third-party libraries or software are retrieved directly from the Internet,
but from unauthorized sources. This results in a real security risk. Also,
the propagation of secrets, tokens, and credentials is often insecure.
The CI/CD process should be fully automated, and manually moving
secret information around must be prevented. Some of these risks can be
avoided, or at least reduced, by applying mitigating actions.
Pipeline Testing
Consider an assembly line of a car-producing company. The company
produces cars 24 hours a day, 7 days a week. At the front of the assembly
line, the car parts enter. The wheels are mounted to the suspension,
the body is placed on the chassis, the engine is installed, and the seats,
steering wheel, and electronic equipment are installed. Everything is
automated, and at the end of the assembly line, a new car sees the light of day.
What if you are the mechanic who has to replace a large part of this
assembly line? Stopping the assembly line is not an option, and replacing
assembly line parts while running carries a risk. You may end up with a car
with a steering wheel attached to the roof.
This is the underlying problem of the question my colleague
once asked when I gave a presentation about continuous integration
and continuous delivery, “How do I develop and test my pipelines?”
Developing an application and testing it locally works very well for
application code, but not so well for pipeline code. The environment in which
a pipeline builds the application, deploys it, and executes the tests is not
suited to become the develop and test environment of the pipeline itself.
And having a local pipeline environment to develop and test the pipeline is
often not possible.
1 Organizations with on-premises datacenters are trying to catch up slowly, implementing platforms such as OpenShift, exposing their infrastructure through APIs, and making use of Ansible and Terraform to define infrastructure as code.
The number of pipelines may grow if the number of variations is high. I’ve
seen examples in which one small application resulted in multiple pipelines:
one pipeline performing just the CI stage for feature branches, one pipeline
for a regular (snapshot) build, one for a release build, one deployment
pipeline to set up tests, a separate pipeline to perform the—automated—tests,
and a deployment pipeline for the production environment.
Technical Constraints
Something that comes to the surface only after a while is that you may hit
some constraint, often a compute or storage resource. For example, the
code base becomes bigger and bigger, and the source code analysis stage
runs for hours, basically nullifying the CI/CD concept of fast feedback.
Also, the queuing of build jobs may become an issue in case the build
server cannot handle that many builds at the same time. Unfortunately,
these constraints are often difficult to predict, although some aspects can
already be taken into account from the start.
Legacy
Whether we like it or not, a lot of teams still use a legacy way of working. They still
perform too many manual tests, and test environments are often set up manually.
As a branching workflow, Gitflow is still used a lot. This type of workflow has a
few downsides. It is complex, with multiple—long-lived—branches, and it can be
slow to deliver new features because of a strict release cycle.
Summary
You learned about the following topics in this chapter:
• Testing pipelines
• Managing pipelines
• Technical constraints
CHAPTER 2
CI/CD Concepts
This chapter covers the following:
Principles
The foundations of continuous integration/continuous delivery (CI/CD)
were laid down by people like Paul Duvall, Jez Humble, and David Farley,
and they are thoroughly described in their respective books, Continuous
Integration: Improving Software Quality and Reducing Risk (see [5])
and Continuous Delivery: Reliable Software Releases through Build, Test,
and Deployment Automation (see [6]). These books present a couple
of concepts and principles that together make up CI/CD. Let’s try to
summarize CI/CD in a few sentences.
The benefit of continuous integration and continuous delivery is that
application code can be delivered faster to production by automating the
software supply chain. This produces secure code of better quality, provides
faster feedback, and results in a faster time to market of the product.
Continuous integration is based on the fact that application code is
stored in a source control management system (SCM). Every change in this
code triggers an automated build process that produces a build artifact,
which is stored in a central, accessible repository. The build process is
reproducible, so every time the build is executed from the same code, the
same result is expected. The build processes run on a specific machine,
the integration or build server. The integration server is sized in such a way
that the build execution is fast.
Continuous delivery is based on the fact that there is always a stable
mainline of the code, and deployment to production can take place
anytime from that mainline. The mainline is kept production-ready,
facilitated by the automation of deployments and tests. An artifact is
built only once and is retrieved from a central repository. Deployments
to test and production environments are performed in the same way,
and the same artifact is used for all target environments. Each build
is automatically tested on a test machine that resembles the actual
production environment. If it runs on a test machine, it should also run
on the production machine. Various tests are performed to guarantee
that the application meets both the functional and the nonfunctional
requirements. The DevOps team is given full insight into the progress of
the continuous delivery process using fast feedback from the integration
server (via short feedback loops).
This is a concise explanation of CI/CD, which is of course more
thoroughly described in the mentioned books.
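
To make the summary slightly more concrete, the following is a minimal sketch of what such a pipeline could look like, written here as an Azure DevOps YAML pipeline (other ALM platforms offer equivalent constructs). The pool image, artifact path, environment names, and deploy.sh script are assumptions used only for illustration; the essential points are that every change to the mainline triggers the pipeline, the artifact is built and published once, and the very same artifact is deployed to every target environment.

# Minimal CI/CD sketch (assumed names and scripts): build once, deploy the
# same artifact to test and, after approval on the environment, to production.
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest            # assumed build agent image

stages:
  - stage: Build
    jobs:
      - job: build_and_publish
        steps:
          - script: mvn -B clean package              # reproducible build of the checked-out commit
            displayName: Build artifact
          - publish: target/myapp.jar                 # hypothetical artifact path
            artifact: release-candidate

  - stage: DeployTest
    dependsOn: Build
    jobs:
      - deployment: deploy_test
        environment: test                             # assumed environment name
        strategy:
          runOnce:
            deploy:
              steps:
                - download: current
                  artifact: release-candidate
                - script: ./deploy.sh test            # hypothetical deployment script
                  displayName: Deploy to test and run automated tests

  - stage: DeployProduction
    dependsOn: DeployTest
    jobs:
      - deployment: deploy_production
        environment: production                       # approval (dual control) can be configured here
        strategy:
          runOnce:
            deploy:
              steps:
                - download: current
                  artifact: release-candidate         # the exact same artifact, not rebuilt
                - script: ./deploy.sh production
                  displayName: Deploy to production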
Positioning of CI/CD
The IT value chain provides a view of all activities in an organization that
create value for the organization [7]. The IT value chain concept is defined
in the Open Group's IT4IT Reference Architecture and consists of four
pillars, the value streams.
• Strategy to portfolio (S2P) value stream: Aligns the IT
and business road maps and includes activities such as
setting up standards and policies, defining the enterprise
architecture, analyzing service demand, and creating
service road maps.
service rollout. CI/CD covers activities of the software supply chain with
a focus on speeding up software development and maintaining a high-quality
standard.
Traditionally, CI/CD does not cover all activities associated with
software development. CI/CD is usually restricted to build, test, and
deployment. Activities such as planning, requirements analysis, designing,
operating, and monitoring are usually not considered in the scope of CI/
CD; however, we shouldn’t be too narrow-minded here. It does make
sense to keep these activities in mind when realizing pipelines.
Consider the case in which an artifact is deployed to production. It needs
to be monitored. Incidents may occur, which need to be resolved. What if
application monitoring becomes integrated into the pipeline? Issues and
incidents detected by the monitoring system could lead to the automatic
creation of work items, or it could even lead to automated remediation;
an incident detected results in triggering a pipeline that remediates the
incident. Stretching this thought process a bit more, anomalies detected
by artificial intelligence (AI) monitoring may even result in triggering a pipeline
that reconfigures a service before the incident occurs.
It is good to see in practice that some teams stretch their CI/CD
pipeline setup to the max, looking beyond the scope of traditional CI/CD
and considering all steps in software development.
[Figure: The IT value chain with its four value streams—Strategy to Portfolio (drive IT portfolio to business innovation), Requirement to Deploy (build what the business needs, when it needs it), Request to Fulfill (catalog, fulfill & manage service usage), and Detect to Correct (anticipate & resolve production issues)]
CI/CD Journey
I do not know one team that implemented CI/CD in the first iteration.
When I ask a team to think about a solution to deliver software in smaller
increments and more frequently, they agree it is a good idea but difficult to
realize in their context. They give various reasons why this is not possible
or at least very difficult. A generic problem seems to be that teams are
used to a certain way of working, often a way of working that does not
necessarily meet the preconditions of a CI/CD implementation. They find
it hard to let go, especially if the new way of working is not crystal clear
to them or if they don’t realize the necessity to change. And even if they
realize it, they still need to adapt. Change remains difficult.
A recurring problem, for example, deals with the granularity of user
stories or tasks. Some stories or tasks are just put down as one-liners, like
“implement the validation of a digital signature.” A developer commits to
this story and starts coding.
This is what happens: After the validation code is written, it needs to
be tested. This requires additional test code to be written. The test code is
needed to create the digital signature that needs to be validated. But testing
also requires a key pair and a certificate. The key pair and a certificate
signing request (CSR) file are created, and the certificate is obtained from
the local public key infrastructure (PKI) shop (assuming that self-signed
certificates are not allowed in this company). The developer also realizes
that the target environment does not have a file system but an object store.
Storing the certificate on the workstation’s file system works fine for local
testing, but it does not work anymore after the code has been integrated
into the app and deployed to the target environment. So, the code has to
be rewritten, and by the way, additional measures have to be taken from
an access control point of view, so the app can also read the object store.
The story looked simple at first glance but expands along the way. The result is
that the developer holds on to the code and pushes it to the central repository
only after a couple of days, or even longer. The translation from business
requirements to epics, stories, and tasks is not trivial, and decomposing
the work into small, manageable chunks is often a challenge.
Realizing that implementing CI/CD is a journey is the first step of the
transformation process. It is the first hurdle of a bumpy journey. Setting an
ambition level helps in defining this journey. Team members should ask
themselves a couple of questions. Where do we stand six months or one year
from here? What can be improved in our way of working? What do we need to
fix certain impediments our team deals with? Can they be solved by training?
Determining the ambition level can be done with the help of a
continuous delivery maturity model. This model helps assess the team’s
current maturity and works as guidance in their CI/CD journey. There are
several examples of continuous delivery maturity models. The following
one is from the National Institute for the Software Industry (NISI; see
Reference [36] and Figure 2-2). The vertical axis represents the categories
or steps in software development. The horizontal axis represents five
maturity levels, from foundation to expert. These maturity levels indicate
how well a team performs in its continuous delivery practice. It is up to
the team—also driven by the organization’s ambition—to decide in which
areas they need improvement and to what extent. Maybe they don’t want
to be an expert in each category. Create an initial road map, but start small
and expand over time.
Add CI/CD-related work items to the sprint and keep the same pace as the
rest of the team. Give a sprint demo from time to time. Involve other team
members to take up small bits and pieces, once the pipelines are mature
and more or less stable.
Until now, CI/CD has been presented as an abstract concept with a
certain philosophy, but a concept does not run on a real server. The
implementation of CI/CD also involves running pipelines that build, test,
and deploy software. The pipelines themselves are pieces of software
running on a server. This statement forms the basis of this book; a pipeline
is software. So, why shouldn’t you treat pipeline development the same
way as developing an application? With this in mind, consider the steps of
software development.
• Requirements analysis: The first step in software
development is the requirements analysis phase. In
our context, it involves gathering requirements to
understand the problem domain of CI/CD. This also
helps in scoping the implementation.
• Design: Designing pipelines is the process that helps
you understand the flow of the pipelines. It makes
clear which conditions to consider and where the
pipeline takes an alternative path. A design also helps
to determine which tasks are executed and where they
fit best in the pipeline. The design also visualizes which
external systems are involved and how the pipeline
communicates with them.
• Development: This concerns the actual development
of pipelines and the integration with other tools and
surrounding systems.
• Test: The context here is about testing the pipelines
themselves, not testing an application within
a pipeline. Pipelines are also software, and the
This book describes how pipelines are designed and developed from
the viewpoint of software development. Each chapter covers one phase
of the pipeline development process, but on an abstract or semitechnical
level. It provides a structured approach to design and develop pipelines.
The final chapter dives into a use case and uses the strategies of all
previous chapters to design and develop pipelines using Azure DevOps
in combination with AWS. The code used in this chapter is provided as
research material.
If you are looking for an in-depth technical—how-to—book about the
development and implementation of pipelines using specific tools like
Jenkins or Azure DevOps, this is probably not the book you are looking for.
However, if you are looking for guidelines on how to start with CI/CD, how
to design the process and the associated pipelines, and what needs to be
considered during the development and implementation of pipelines, this
book is for you.
Naming Conventions
There is no “standard” glossary for CI/CD, and sometimes the same name
is used in a different context. For example, deploy is also referred to as
release (the verb), but release can also refer to the creation of a release
(candidate), as in the noun.
So, to avoid confusion, this book provides the following definitions.
Note that this is not an exhaustive list. Only the words that need
explanation or that might cause confusion are listed.
Summary
You learned about these topics in this chapter:
• A brief overview of continuous integration and
continuous delivery
• Positioning of continuous integration and continuous
delivery in the software supply chain
• Application life-cycle management (ALM)
• The journey of implementing continuous integration
and continuous delivery
CHAPTER 3
Requirements Analysis
This chapter covers the following:
• Requirements in detail
Overview
Requirements analysis is the first step before the actual design of the
pipeline is drafted and the pipeline is created. Requirements apply to CI/
CD practices, pipelines, the ALM platform, or a combination of all tools
that make up the integration infrastructure. Requirements are derived
from different sources.
• First, there are basic CI/CD principles, which can be
treated as requirements. Become familiar with them. If
you deviate from the basic principles, you must have a
good reason to do so because they form the foundation
This list is not exhaustive but gives an idea of which areas must be
considered. Of course, more areas can be identified, and some maturity
models define areas such as business intelligence, planning, culture,
and organization. These maturity models list some expert/advanced
capabilities such as automated remediation based on (AI) monitoring
and automated prioritization of the backlog based on AI. However, this
book intends to give practical guidelines and not an advanced vision of
CI/CD because most companies will never reach that level. Moreover, in
practice, it is not even always possible to achieve a complete hands-off
software supply chain with all the bells and whistles. Just think of manual
intervention by operators because certain situations are not foreseen and
cannot be solved using a pipeline. Also, costs play an important role in the
realization of an automated software supply chain. This means you always
have to make a weighted choice between requirements that are absolutely
necessary and requirements that are not.
The remaining pages of this chapter describe the areas mentioned in
Table 3-1 in more detail and show some examples of requirements that are
worth checking out.1 These requirements serve the purpose to inspire and
1 I tried to avoid being Captain Obvious. A lot of requirements are implicit and part of CI/CD practice, such as "tests are automated" and "use version control," so they are not listed explicitly.
Way of Working
The way of working can be defined on a business organization level or
team level. It defines the following:
• The way of working of the business organization: The
business organization may use Agile and Scrum,
biweekly sprints, or multiple DevOps teams working on
the development of one product. In some way, these
aspects influence the pipeline design.
Requirement: Choose the release strategy you want, but keep the
mainline production-ready.
Deploying a release once a day, once a week, or once a month is a
requirement the business defines. They probably have good reasons to
release either very often or with larger time intervals. This does not matter.
But it is good practice always to keep your mainline in such a state that
it is possible to deploy whenever you want. Even if you release once a
month, you are still practicing the CI/CD principles if the mainline is in a
production-ready state.
Requirement: Perform manual testing only if needed.
Performing manual testing is a CI/CD anti-pattern, but practice
shows that manual testing or semi-automated testing is still required. The
following are the reasons why:
• The QA team has a backlog converting manual tests to
automated tests.
• The automated test of a newly developed feature is not
yet integrated into the automated test suite. The trick is
therefore to integrate manual testing somehow into the
CI/CD process (a minimal sketch follows this list).
• Automating the test is costly if this particular test is
rarely executed.
• Some tests are very specific, so they cannot be
automated. Usability testing is such an example.
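
One way to integrate the remaining manual tests into the CI/CD process, rather than leaving them outside of it, is to let the pipeline pause until a tester confirms the result. The following minimal sketch uses the Azure DevOps ManualValidation task in an agentless job; the notification address and timeout are assumptions.

# Sketch: pause the pipeline until the manual or semi-automated tests
# have been executed and confirmed by the QA team.
- job: wait_for_manual_test
  pool: server                       # agentless job, required for ManualValidation
  timeoutInMinutes: 4320             # assumed: give testers up to three days
  steps:
    - task: ManualValidation@0
      inputs:
        notifyUsers: qa-team@example.com     # hypothetical distribution list
        instructions: Execute the remaining manual test cases for this release candidate, then resume or reject.
        onTimeout: reject                    # fail the run if nobody responds in time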
Technology
The target environment, the CI/CD framework, the tools, and the
application architecture all influence the realization of a pipeline and its
flow. Here are a few examples:
Information
ALM platforms potentially generate a lot of data that the team can use to
keep informed. Even if the platform has default overviews and notification
options, it still makes sense to think about how a team is informed and
what type of information is shared with the team. Often these tools send
a lot of emails, which results in team members not reading their emails
anymore. Information overloading is a common problem and must be
managed using several strategies.
• Information pull and push: What type of information
is important enough to push to the team members in
the form of a notification—such as an email—and what
type of information is not? In the latter case, a team
member can also actively search for information if it
is needed.
• Display capabilities: Overviews in some ALM tools
don’t always excel in readability. The overview is
often cluttered with all types of build and deployment
information. Sending the information to alternative
tools that provide different views and/or have better
displaying and filtering capabilities may be something
to consider.
• Channel: Preferably use a limited number of options
to inform teams. One tool to push information to the
team and one tool to pull (retrieve) information is more
than enough.
• Classify: Make a classification of types of information.
For example, information about production
deployments should not be combined with information
about deployments in test environments.
Security (General)
Security plays an important role in developing, implementing, and
managing pipelines. The ALM platform or integration server, the related
tools, and the pipelines themselves are potential attack surfaces, so they
need to be protected and monitored. Don’t forget that if applications have
to meet certain standards, such as the Sarbanes–Oxley Act (SOX), Health
Insurance Portability and Accountability Act (HIPAA), or Payment Card
Industry Data Security Standard (PCI DSS), it might be assumed that the
software supply chain also has to comply with these standards. Most of
these standards have a component focused on security in the software
supply chain.
Here is where the NIST Cybersecurity Framework [13] can play a role.
The NIST Cybersecurity Framework is a valuable source helping business
organizations to identify risks, protect resources, detect vulnerabilities,
and respond to and recover from security incidents. It is an extensive
framework and covers various security aspects targeted at people,
processes, and technology. Use the framework as guidance to define CI/
CD security requirements.
For example, one of the categories in the framework deals with supply
chain risk management. Subcategory ID.SC-2 states the following:
ID.SC-2: Suppliers and third-party partners of information
systems, components, and services are identified, prioritized,
and assessed using a cyber-supply chain risk assessment
process.
If this is brought up in the context of external libraries used for building
an application, it is made clear that the origin of such a library must be
assessed first. Just grabbing some software from the Internet and bringing
it into your production environment is not a good idea.
Requirement: Use a vault to store tokens, keys, secrets, and
passwords.
Ideally, all secrets—passwords, tokens, keys, credentials—used by
the application must be stored in a secure vault. Depending on the exact
requirements, this vault may have certain characteristics. It can be a
software vault or a Federal Information Processing Standard (FIPS) 140-2
level 3 compliant hardware security module (HSM). The pipeline has to
make sure that these secrets are stored in the vault, either by generating
them in the vault itself or by securely transferring the secret to the vault.
Some ALM platforms are supported by a vault to store secrets.
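
As an illustration, the following sketch shows a pipeline step that generates a secret and stores it directly in a vault instead of passing it around as plain text. It assumes Azure Key Vault; the service connection, vault name, and secret name are placeholders.

# Sketch: generate a database password and store it in the vault; the value
# is never written to the pipeline log or to a file on the agent.
- task: AzureCLI@2
  displayName: Store generated database password in the vault
  inputs:
    azureSubscription: my-service-connection    # hypothetical service connection
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # Backticks avoid a clash with the $(...) pipeline variable syntax.
      DB_PASSWORD=`openssl rand -base64 32`
      az keyvault secret set --vault-name my-team-vault --name db-password --value "$DB_PASSWORD" > /dev/null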
There is a tendency to say you always need to roll forward, but that
depends on the viewpoint. A simple web application with a corrupted
layout is something completely different than a payment or trading system
with a recovery point objective (RPO) of zero and a recovery time objective (RTO) of nearly zero.
Without judging the situation, one can only conclude that “it depends.”
What is more important in this context is the fact that it must be possible
to perform a rollback or roll forward using a pipeline. A rollback not only
means undeploying the new artifact version and redeploying the old
version, but it also has to execute rollback scripts to reverse the changes
already made in the database, roll back messages in a queue, or roll back
any data already propagated to other systems. Also, a roll forward may
involve more than just installing a fixed app. Any corrupted data needs to
be fixed also.
This is not for the faint of heart, and whatever strategy is used, it
requires some thorough thinking up front and needs to become part of
your test strategy. Without a proper rollback/roll-forward vision, you will
continue to work on your pipeline endlessly. Be prepared for that in the
pipeline design.
Requirement: Only deploy artifacts to production with a higher
version.
This requirement seems to be contradicting the previous requirements
because checking whether the deployment always has a higher version
sort of prevents a deployment rollback. That is also not the intention.
In most cases, a deployment just succeeds, and the installed version
is always the latest one, which has a higher version number than the
previously installed version. An additional check on the existing version
on production versus the version that is going to be deployed prevents the
installation of older versions. This requirement implies that the versioning
scheme has an order. Using a commit hash as a version does not work in
combination with this requirement. In the case of a rollback, this check
should be suppressed, of course.
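
The following sketch shows what such a guard could look like as a script step just before the production deployment; the version endpoint, the version.txt file, and the isRollback variable are assumptions.

# Fail the deployment if the version to be deployed is not strictly higher
# than the version currently running in production. Backticks are used for
# shell command substitution to avoid clashing with the pipeline's own
# $(...) variable syntax.
- script: |
    CURRENT=`curl -fsS https://myapp.example.com/version`        # hypothetical version endpoint
    NEW=`cat version.txt`                                        # version of the release candidate
    HIGHEST=`printf '%s\n%s\n' "$CURRENT" "$NEW" | sort -V | tail -n1`
    if [ "$CURRENT" = "$NEW" ] || [ "$HIGHEST" != "$NEW" ]; then
      echo "Version $NEW is not higher than the deployed version $CURRENT; stopping."
      exit 1
    fi
  displayName: Check that only a higher version is deployed
  condition: ne(variables['isRollback'], 'true')    # suppress the check for an explicit rollback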
2 Not all teams use pull requests.
3 IP whitelisting is not preferred anymore due to maintenance/error-prone situations, especially in cloud environments. Use it only when there's no other option.
4 Maybe not directly, but using a proxy or intermediate repository.
[Figure: Chain of trust for third-party libraries—an (external) library developer publishes libraries on the Internet; the organization retrieves them into a central (trusted) library server, where the artifact signature is continuously validated, before CI/CD uses them and they reach the target environment]
Requirement: Pipeline logs may not contain PII data and secrets.
Derived from the previous requirement, PII data may not be used
at all in a CI/CD pipeline. And even if PII data is needed in the pipeline,
for example, to fill a database table in production, the data needs to be
protected.
Various options are possible to protect the data. The simplest solution
is to store it as a file, secured from reading by other users than the pipeline.
Even better is to encrypt the file. The pipeline decrypts the file as soon
as it is needed to fill the table. The decryption key must also be stored in
a secure location within the pipeline, of course. Another alternative is to
store the file in a vault.
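
As an illustration, the following sketch decrypts an encrypted PII seed file just before it is needed and removes the plain-text copy immediately afterward; the file names, the load script, and the secret passphrase variable are assumptions.

# The PII seed data is stored encrypted; the passphrase comes from a secret
# pipeline variable and is mapped into the environment, never echoed.
- script: |
    openssl enc -d -aes-256-cbc -pbkdf2 -in testdata/customers.csv.enc -out customers.csv -pass env:PII_PASSPHRASE
    ./load_table.sh customers.csv          # hypothetical script that fills the database table
    shred -u customers.csv                 # remove the decrypted file from the agent
  displayName: Decrypt and load PII seed data
  env:
    PII_PASSPHRASE: $(piiPassphrase)       # secret variable, assumed to be defined in the pipeline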
Resource Constraints
Resource constraints affect the pipeline negatively and introduce queuing,
very long execution times, or even a complete halt of the whole ALM
platform/integration server. The underlying reasons are often a lack
of computing (CPU) resources, insufficient disk space, and network
congestion. These usually occur when the pipelines are already put into
use. It seems like these problems suddenly happen to you and you have to
deal with them as soon as they happen, but that’s very short-sighted.
As soon as you start with the design and development of your
pipelines, you should have some idea about the number of apps, the
number of pipelines, and how many pipeline runs are expected. The
sizing of the CI/CD infrastructure is an educated guess, which should at
least give enough confidence that the pipelines can do their work given all
requirements. In addition, some optimizations can be done.
Requirement: Parallelize code analysis scans.
If code analysis consists of multiple scan types, it may take a long time
to complete if all tasks are executed sequentially. A solution is to parallelize
these tasks. It is good practice to include this already in the design because
the different types of code analysis scans do not have any relation to one
another.
Requirement: Parallelize tests.
Not only can code analysis scans take a long time, but especially test
runs are prone to take a long time. Solutions are to execute multiple types
of tests in parallel or parallelize tests of the same type. In the case of the
latter, tests are divided into small groups, and the groups are executed in
parallel. Other approaches are to group tests based on historic timing data
and combine the tests in such a way that the test time of each group is
(almost) the same.
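
Both optimizations map directly onto the parallelism features of most ALM platforms. The sketch below (Azure DevOps YAML, with assumed tool commands and scripts) runs the different code analysis scans as independent parallel jobs and slices one large test suite over several identical agents.

stages:
  - stage: AnalyzeCode
    jobs:                                    # jobs without dependsOn run in parallel
      - job: static_code_analysis
        steps:
          - script: sonar-scanner            # assumed: scanner available on the agent
      - job: dependency_scan
        steps:
          - script: ./run_sca_scan.sh        # hypothetical SCA wrapper script
      - job: credentials_scan
        steps:
          - script: whispers src/            # assumed Whispers invocation

  - stage: PerformTest
    jobs:
      - job: integration_tests
        strategy:
          parallel: 4                        # slice the suite over four identical agents
        steps:
          - script: ./run_tests.sh --slice $(System.JobPositionInPhase) --total $(System.TotalJobsInPhase)
            displayName: Run test slice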
Manageability
Manageability is about organizing your pipelines in such a way that
changes are easy to apply and your code is not redundant and scattered all
over the place.
Requirement: Keep your pipeline code manageable.
Similar to software development, pipeline development can become
complex. Sometimes this cannot be prevented, but that’s all the more reason
to keep development under control. Your pipeline becomes unmanageable
if every hobbyist is given the space to add another hobby script of their
preference. Using technical standards, naming conventions, and development
guidelines is the only way to keep pipeline development manageable.
Requirement: Build once, run anywhere.
“Build once, run anywhere” is a statement originated from the Java and
Docker/container world, which also applies to the context of pipelines.
An application artifact must always be built once using a pipeline, and the
same artifact must be installed in all target environments, both test and
production. Environment-specific properties are deployed as part of the
application deployment.
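
One way to realize this is a single build stage that publishes the artifact once and a reusable deployment job that is instantiated per target environment, with only the environment-specific properties passed in as parameters. The template file name and parameter names below are assumptions.

# deploy-template.yml (hypothetical): the same artifact, different properties.
parameters:
  - name: environmentName
    type: string
  - name: databaseHost
    type: string

jobs:
  - deployment: deploy
    environment: ${{ parameters.environmentName }}
    strategy:
      runOnce:
        deploy:
          steps:
            - download: current
              artifact: release-candidate    # built exactly once in the CI part of the pipeline
            - script: ./deploy.sh --env ${{ parameters.environmentName }} --db ${{ parameters.databaseHost }}
              displayName: Deploy with environment-specific properties

# In the main pipeline the same template is reused for every environment:
#   jobs:
#     - template: deploy-template.yml
#       parameters: { environmentName: test, databaseHost: db.test.example.com }
#     - template: deploy-template.yml
#       parameters: { environmentName: production, databaseHost: db.prod.example.com }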
Operations
Operations tasks must be automated as much as possible. Using a pipeline
to orchestrate these tasks is a logical choice.
Quality Assurance
Quality assurance (QA) involves source code analysis, both static and
dynamic, and testing. Testing is meant here in the broadest sense of the
word. It not only involves various testing types but also the creation and
management of test data and testware. In addition, security testing is
considered part of QA, although security is treated as a separate topic.
Requirement: Application code must be scanned on code quality.
Application code must meet a certain code quality. Static code
scanning is performed on the application code to detect bugs, violations of coding
standards, overly complex code, nonperforming code, etc. The code must
also be checked for—security—vulnerabilities. Scanning code provides
confidence that the code quality of the application code is sufficient.
Dynamic scanning is validating the application in the runtime
environment to determine whether it contains security vulnerabilities
(e.g., using automated fuzzing).
Requirement: Infrastructure code must be scanned on code quality.
In addition to scanning application code, also infrastructure code
must be scanned on code quality. This involves both static scanning
of the infrastructure as code (IaC) and dynamic scanning of the target
environment.
Static scanning involves validating infrastructure code such as AWS
CloudFormation and Azure ARM templates. Dynamic scanning involves
validating whether an infrastructure resource in the target environment is
not misconfigured.
Both types of scanning complement each other, but considering the
“shift-left” principle, most of the issues and misconfigurations should
preferably be detected by static IaC scanning.
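
A sketch of how static IaC scanning could be wired into the pipeline, assuming the infrastructure is defined as AWS CloudFormation templates in an infrastructure/ folder; cfn-lint and Checkov are used here merely as examples of such scanners.

# Shift-left: scan the infrastructure code before anything is provisioned.
- script: |
    pip install --quiet cfn-lint checkov
    cfn-lint infrastructure/*.yaml                   # syntax and best-practice checks
    checkov --directory infrastructure --quiet       # misconfiguration and security checks
  displayName: Static analysis of infrastructure code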
Requirement: Pipeline code must be scanned on code quality.
Because most pipelines are developed as code, they also need to meet
a certain code quality. Although scanning the pipeline code in the pipeline
itself is an option, it is a bit odd. That would be a bit like a fox guarding the henhouse.
Metrics
Metrics are used to assess the state and performance of the teams and the
pipelines.
Define key performance indicators (KPIs) that make sense.
The software supply chain is successful if all KPIs are considered
successful, but defining these KPIs is not easy. When is the software supply
chain considered successful? Of course, this differs for different business
organizations.
KPIs are often defined in business terms that contain words like
efficient, cost-effective, fast time to market, high change success rate,
compliance, etc. However, a good KPI must also be specific, measurable,
achievable, relevant, and time-bound (SMART). So instead of stating that
“the pipelines must be cost-effective,” it is better to define the KPI as “Costs
of the pipelines per month.” The trend of the KPI reveals whether the costs
go up or down, and it is up to the squad or business representative to
determine whether this trend is acceptable.
It is not always possible to find the right metrics in the CI/CD setup
that contribute to a KPI. In the case of the KPI “Costs of the pipelines per
month,” you need to get insight into the actual costs of the ALM platform
or integration server. If the ALM platform is a SaaS solution or if an
integration server runs on the infrastructure of a cloud service provider, it
is easy to get insight into the costs of the resources used. Assume the CI/
CD setup consists of AWS CodeCommit, CodeBuild, and CodeDeploy,
Monitoring
Monitoring is the process to collect data to identify, measure, validate,
visualize, and alert about the following:
• Availability
• Resource use/capacity
• Performance
• Security breaches
Sustainability
Sustainable computing is an emerging trend that focuses on reducing the
carbon footprint generated by the information technology industry. To
put things into perspective, the annual energy consumption of the global
Bitcoin network as of today is roughly 142 TWh, according to the University
of Cambridge (see [1]). That is roughly the annual electricity consumption
of the whole of New York State. These are dazzling numbers. And it is not
only the carbon dioxide footprint of the Bitcoin network that is huge;
trends like AI, Big Data, and other compute-intensive processes also have
a big impact on the environment.
Sustainable computing becomes an important factor in architecting,
designing, implementing, and operating IT systems. This includes
continuous integration and continuous delivery pipelines.
Requirement: Define sustainability goals.
Governance
Governance involves managing the organization and teams in their CI/CD
journey.
Summary
You learned about the following topics in this chapter:
CHAPTER 4
Pipeline Design
This chapter covers the following:
Design
A pipeline design is a specification of how to construct a pipeline. It
describes the following:
• The CI/CD process in general, the pipeline stages that
make up the process, and the individual tasks within a
stage. It describes the process in words and visualizes
the activities that take place within a pipeline.
• The flow of the pipeline. The conditions that shape the
process flow act as gateways, allowing the pipeline to
continue or halt until a certain condition is met. These
gateways also determine possible alternative paths in
the flow.
• The interaction with surrounding systems.
BPMN 2.0
Where the requirements analysis phase helps you understand the
problem domain, the BPMN diagrams help you understand the software
delivery process flow, the individual stages and tasks in the process, and
the interaction with other systems. The notation used in this chapter is
BPMN 2.0.
BPMN 2.0 uses a certain notation with specific icons, called elements.
The set of BPMN 2.0 elements is limited, and because the pipeline flows
are not very complex, a subset of these elements is used throughout
this book.
A remark to the BPMN purists out there: you will probably detect
possible improvements in the models. I would like to hear about them, of course,
but as long as a model describes the essence of the flow, it serves its
purpose. A summary of the most used elements and some basic BPMN
examples are presented in the next paragraphs.
BPMN in Action
A workflow usually has a begin and an end element. In BPMN terminology
these are called events. Between these events, one or more tasks are
executed. This can be an automated or a manual task. A simple BPMN
model with two tasks looks like Figure 4-1.
[Figure 4-1. A simple BPMN model: a pool for system A containing two tasks between a start event and an end event]
Figure 4-1 visualizes system A as a BPMN pool. The pool contains two
tasks enclosed between a start event and an end event. The start and end
events are optional. If the number of tasks becomes very large, they can
be clubbed together into a subprocess. To make BPMN diagrams more
readable, this subprocess can be collapsed, hiding all underlying tasks, as
Figure 4-2 shows.
[Figure 4-2. A collapsed subprocess in system A hides the underlying tasks]
[Figure 4-3. System A with an exclusive gateway: if the result of both automated tasks is successful, the process ends; if at least one task resulted in an error, a manual Handle error task is performed]
Figure 4-4 adds a bit more complexity to the model. The Handle
error task in system A means that previous changes in system B must be
undone. System B has two subsystems called B.1 and B.2, and they both
must be reset to revert all changes. The two subsystems of system B are
depicted as lanes. To inform system B about the fact that the reset must be
performed, the model makes use of an event. The event in the model is a
message intermediate catch event, indicating that the task Perform reset in
subsystem B.1 can receive and process this event. After subsystem B.1 has
been reset, it calls subsystem B.2 to reset.
[Figure 4-4. The same flow as Figure 4-3, extended with system B: when an error is handled in system A, a message event triggers the Perform reset task in subsystem B.1, which then calls subsystem B.2 to reset]
It doesn’t become more complicated than this (at least for the scope
of this book). This makes BPMN a good way to describe the workflow of
a pipeline and helps with the thinking process required to design
pipelines.
Level of Detail
A BPMN diagram describes a certain context, which effectively refers to a
certain level of detail. There are multiple levels to distinguish.
• Global level, to understand the overall process flow.
It is possible to model all levels and the complete workflow in one big
BPMN model, but often readability is improved when the global and detail
models are separated. This is a matter of taste, of course.
1 Some stages are implemented differently, which means that tasks move to a different stage, and the stage ends up with zero tasks, in other words, the [0..n] range. In other contexts, some stages are not applicable, meaning that the stage has zero tasks and is therefore not implemented.
[Figure 4-6. The Generic CI/CD Pipeline, started by a trigger and with exclusive gateways after most stages (e.g., code analysis passed, tests passed)]
As you can see, Figure 4-6 shows the stages of the Generic CI/CD
Pipeline, most of them ending with an exclusive gateway. The exclusive
gateway is a condition that determines whether the stage result was
successful. The pipeline either ends in a success state or ends in an error/failed state.
The Generic CI/CD Pipeline consists of the following stages.
2 Or it is triggered manually, of course.
system that calls the ALM platform/integration server API acts as a trigger.3
Pipelines can call other pipelines, and it is even possible to hook up an
advanced AI monitoring system to your production environment that
detects deviating behavior in the application. This may result in triggering
a pipeline to reconfigure the application or performing remediating
activities on the infrastructure.
To make sure that the pipeline is started by a valid trigger using the
correct trigger data and the correct pipeline configuration, a validation
stage—the Validate entry criteria stage—is added to the Generic CI/CD
Pipeline. The pipeline can proceed only if certain criteria are met. The
following are typical entry criteria validated in this stage (a sketch of such a stage follows this list):
• Validate all mandatory pipeline variables in the
Validate entry criteria stage. If one of the variables is
not (properly) configured, the pipeline stops in the first
stage instead of somewhere at the end of a pipeline run.
• Add a ping task to the Validate entry criteria stage to
make sure that an external system is reachable. The
ping task could send an HTTP request to an external
system and validate the returned HTTP status. If, for
example, a status 503 is returned, the pipeline stops,
because the external system cannot be accessed.
• The branch—passed as an argument in the trigger—for
which a release candidate is going to be built is indeed
the expected branch. For example, only triggers with a
Git event associated with the main branch are allowed,
if the intention is to create a release.
3 On an infrastructure level, this also means that the external system calling the API of the ALM platform must be an authenticated system. So, connections should make use of mTLS, OpenID Connect, or at least some basic authentication.
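
A minimal sketch of such a Validate entry criteria stage is shown below; the variable names and the external system URL are assumptions, and backticks are used for shell command substitution to avoid clashing with the pipeline's own $(...) syntax.

stages:
  - stage: ValidateEntryCriteria
    jobs:
      - job: validate
        steps:
          - script: |
              # All mandatory pipeline variables must be configured.
              for var in RELEASEVERSION TARGETENVIRONMENT ARTIFACTFEED; do
                if [ -z "${!var}" ]; then
                  echo "Mandatory pipeline variable '$var' is not set"; exit 1
                fi
              done
            displayName: Validate mandatory pipeline variables
            env:
              RELEASEVERSION: $(releaseVersion)          # hypothetical pipeline variables
              TARGETENVIRONMENT: $(targetEnvironment)
              ARTIFACTFEED: $(artifactFeed)
          - script: |
              # The external system must be reachable before the pipeline continues.
              STATUS=`curl -s -o /dev/null -w '%{http_code}' https://artifacts.example.com/health`
              if [ "$STATUS" != "200" ]; then echo "External system returned HTTP $STATUS"; exit 1; fi
            displayName: Ping external system
          - script: |
              # Only the expected branch may produce a release candidate.
              if [ "$(Build.SourceBranchName)" != "main" ]; then echo "Not the expected branch"; exit 1; fi
            displayName: Validate trigger branch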
Execute Build
This stage involves building artifacts from code, such as the creation
of a .jar file from Java code or an .exe file from C++ code. The code
associated with a certain branch and certain commit is checked out in the
SCM system, dependencies are downloaded (for example, Java libraries
from Maven Central), and the code is compiled. This is a fully automated
process.
Analyze Code
The Analyze code stage provides confidence that the code quality
requirements are met. Organizations often demand a combination of
checks, sometimes completed with specific validations. Here are some
examples:
• Code quality assurance: Static analyzers that assure
the quality of the software, for example, SonarQube or
SonarCloud to perform static analysis of code to detect
bugs and code smells, OpenClover to validate code
coverage, and Pylint to analyze Python code.
• Static application security testing (SAST): Secure
software by reviewing the source code of the software
to identify sources of vulnerabilities. Tools are, for
example, Checkmarx and Fortify Static Code Analyzer.
• Software composition analysis (SCA): Automated
scans of an application’s codebase to identify security
vulnerabilities and the type of license of all open-
source components used in the build process. These
types of scanners can detect whether an artifact
contains a vulnerable version of log4j, for example.
Tools like Nexus IQ or JFrog Xray fill in this segment.
• Credentials scan: This is an extension of SAST and
scans other types of files for credentials, passwords,
tokens, or other secrets, which are present in a code
repository in plain text. Whispers is an example of such
a tool. Whispers can detect hard-coded credentials in
(property) files.
The Analyze code stage may contain multiple tasks that potentially
delay the pipeline, because some of these tasks can be very slow.
Subsequent chapters point out what the options are to mitigate this.
Package Artifact
Packaging an artifact involves all activities to deliver an artifact that can be
deployed to a test or production environment. Think of .zip, .jar, or .exe
files. This also involves the creation of custom packages in cases where a
dedicated deployment tool is used.
To guarantee the integrity of the artifact, specific measures must be
taken, such as signing a package,4 to make sure the artifact deployed to
production is not compromised. For auditability, this is the point at which
we want to ensure that the package goes to production unchanged.
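
A minimal sketch of such an integrity measure, using a checksum and a detached GnuPG signature; it assumes that the team's private signing key was imported into the agent's keyring in an earlier (omitted) step and that the artifact path is a placeholder.

# Sign the package so that later stages (and auditors) can verify that the
# artifact deployed to production is bit-for-bit the artifact that was built.
- script: |
    sha256sum target/myapp.jar > target/myapp.jar.sha256
    gpg --batch --yes --detach-sign --armor --output target/myapp.jar.asc target/myapp.jar
  displayName: Sign the packaged artifact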
Publish Artifact
Publishing an artifact means that the artifact is stored in an immutable
binary repository such as Artifactory, Nexus, or Azure DevOps Artifacts.
Docker images are pushed to a Docker repository, for example, Nexus 3
and AWS Amazon Elastic Container Registry (ECR).
Publishing an artifact is typically the last stage of continuous
integration, and this is where continuous delivery begins.5 The continuous
delivery stages retrieve the artifact from the repository and use it for
testing and deployment to production. This ensures that the same artifact
is used throughout all environments and not built for every environment
separately.
In addition to the published artifacts, additional information—
metadata of the continuous integration process—can be published. The
version of the artifact, the commit hash of the code, the work items that are
part of the artifact, the developer of a feature, the pull request reviewer(s),
and the unresolved but accepted issues are typical examples of metadata
gathered during continuous integration. This type of information can be
seen as the “contract” with the continuous delivery part of the pipeline,
and it makes sense to gather this kind of metadata and publish it as a
“release note” in a central place where all interested parties can read it.
If needed, test results can be added later to this metadata, so it becomes
clear whether a release candidate is suitable for production (or not). This
metadata can also be used to determine whether the artifact has gone
through all the mandatory steps before it is deployed to production.
4 Signing a package means that a digital signature is created and added to an artifact, to guarantee the integrity of the artifact.
5 Continuous delivery is sometimes used as an overarching concept that includes continuous integration.
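
The following sketch publishes the artifact together with a small machine-readable “release note”; the artifact path is a placeholder, and the metadata values come from predefined pipeline variables.

# Publish the immutable artifact and the metadata that acts as the
# "contract" between continuous integration and continuous delivery.
- script: |
    cat > release-note.json <<EOF
    {
      "artifact": "myapp.jar",
      "version": "$(Build.BuildNumber)",
      "commit": "$(Build.SourceVersion)",
      "requestedFor": "$(Build.RequestedFor)",
      "pipelineRun": "$(Build.BuildId)"
    }
    EOF
  displayName: Generate release metadata
- publish: target/myapp.jar
  artifact: release-candidate
- publish: release-note.json
  artifact: release-note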
environment on the fly. In cases where you can make use of infrastructure
as code—for example, in the cloud—it is relatively easy to create an
infrastructure, but there still may be some issues. Consider, for example,
a long creation time of your infrastructure, or deleting your stacks is
problematic because they have dependencies with resources that cannot
be deleted or at least not easily. Also, test environments used for load and
performance tests cannot be deleted so easily because they often contain
very large databases. Rebuilding the environment would take several
hours. And if a test environment is almost continuously used, it makes no
sense to tear down the environment and rebuild it a second later. That is
why organizations still make use of fixed test environments, even if they
are created using IaC.
On the other hand, keeping test environments intact and leaving them
unused for a longer time should be avoided. Teams must decide whether to
create a test environment once, use it for a longer time, and destroy
it when it is not needed anymore or will not be needed shortly. Also, a combination of
more or less permanent test environments and ephemeral test
environments is possible.
Perform Test
Testing covers a wide range of types from contract tests and integration
tests to usability tests and production acceptance (preproduction) tests,
except for unit tests, which are performed in a dedicated stage. More
details concerning the different test types are discussed later in this
chapter.
It is important to point out that tests should not rely on each other.
Each test must be able to be performed individually, which offers the
possibility to perform tests both sequentially and in parallel. Each time a
test is executed, it is initialized to a certain starting point.
The reason why this is a separate stage, executed only after all tests
have been performed, is that the focus of the pipeline flow should be first
on testing whether the application works properly and second on whether
the infrastructure resources associated with the application are compliant.
6 This applies only to regular deployments and is not a rollback to a previous version because of an incident.
In principle, the exit criteria of the pipeline are the entry criteria of
the target platform. It makes sense to validate the artifact to determine
whether it does comply with the preconditions of a deployment (to
production), especially in cases in which more teams build artifacts for a
shared production environment. The positioning of this stage before the
actual deployment to production also makes sense because that is the last
possible moment to validate the artifact before it is deployed.
This is by definition also the only manual step in the process; all other
stages and tasks are automated.7 Having the dual control stage in the
Generic CI/CD Pipeline also makes sense; otherwise, the pipeline would
have become a continuous deployment pipeline and not a continuous
delivery pipeline.
7 In theory, of course. Often there are still manual test tasks to be performed.
Notify Actors
This stage has a generic name. It deals with informing team members
about the pipeline execution result, both success and failure, but it also
deals with notifying other actor types about the result. Other actors are,
for example, external systems, other pipelines, or specific functions of the
ALM platform/integration server. Informing actors can be implemented as simply as sending an email to the team, or as a more sophisticated activity such as performing an outbound API call to an external system.
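As an illustration, a minimal sketch of such a notification step is shown below. It assumes Azure DevOps predefined variables; the webhook URL variable and the payload fields are hypothetical.

# Sketch of a Notify actors step; the TEAM_WEBHOOK_URL variable and the
# payload fields are hypothetical. condition: always() makes the step run
# for both a successful and a failed pipeline run.
steps:
  - script: |
      curl -X POST "$(TEAM_WEBHOOK_URL)" \
           -H "Content-Type: application/json" \
           -d "{\"pipeline\": \"$(Build.DefinitionName)\", \"result\": \"$(Agent.JobStatus)\"}"
    displayName: Notify actors
    condition: always()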
Design Strategies
As the previous chapter already shows, there are lots of possible
requirements and aspects that influence the design and realization of
a pipeline: the business organizations’ software delivery strategy, the
workflow of the team, security aspects, and certain constraints, both
technical and nontechnical, etc. In the end, the pipeline design and
realization are derivative products of all these aspects, and if one of them is
suboptimal, the pipeline is also suboptimal.
It is important to have a continuous interaction between optimizing the requirements on one hand and the design and realization of the pipelines on the other hand. If, for example, the team's workflow is overly complex, the pipeline design also becomes overly complex.
Requirements
Context Diagram
Although the design phase is abstract, it does make sense to draw a
context diagram containing all actors. Actors are not only the people
who are involved but also the surrounding systems. A context diagram
gives an impression of which interactions take place in the context of CI/
CD. Include everything you already know—including tools—in the context
diagram and use abstract names like SCM, issue tracker, and the SCA tool,
if you do not know which tools are used (yet). A context diagram might
look something like Figure 4-8.
(Figure 4-8: an example context diagram with actors such as Git, SonarQube, an issue tracker, the product owner, the DevOps team, and the test and production environments in their network segments.)
A context diagram is a good way to discuss with the team how the
pipelines interact with all actors. The first version of the context diagram is
probably a simple diagram with some blocks like the previous one, but the
diagram is extended along the way, with more (technical) details added in
later versions. Use the context diagram in the discussions with the team to
point out what is added or changed in the pipeline setup.
Branching Strategy
A branching strategy is a critical element in the way a pipeline is shaped.
At the start of a pipeline design, it must be clear how the team works
and which workflow they adopt. Depending on the type of strategy, the
pipeline flow differs. Some of these strategies are discussed in the next
paragraphs and demonstrate what a possible pipeline design could
look like.
Trunk-Based Workflow
In the context of continuous integration, there is only one workflow, the
trunk-based workflow. All other strategies are not considered continuous
integration, but they are relevant because lots of teams still use a branch-
based workflow.
The trunk-based workflow model is the simplest workflow strategy.
This means that the source code repository (e.g., Git) contains only the
main branch, the trunk. Changes are directly applied to the trunk, and also
release candidates are created from the trunk. The complexity of a trunk-
based pipeline is relatively low compared to other branching strategies.
See Figure 4-9.
What does the workflow look like in practice? Most likely, some kind of issue-tracking software like Jira is used to register work items. A work item—also called a user story or a product backlog item—defines the feature that needs to be built. This feature must be small, preventing the merges of large pieces of code. Keeping the trunk “clean” requires disciplined commit hygiene, and big changes to the trunk must be avoided.
The trunk-based workflow fits perfectly in a pair programming way
of working. In pair programming, two developers are working on a local
copy of the trunk and pushing their software code directly to the trunk.
This results in a release build that can be deployed to production if all
intermediate stages are passed.
This workflow makes pull requests obsolete because there isn’t a separate branch, and reviewing the code is done on the spot. This also reveals an issue with the trunk-based workflow: if not done properly, code reviews are not recorded, and it becomes difficult to trace back the input of colleague developers.
Use case:
A team uses a trunk-based workflow and uses Git as their SCM
system. Team members perform pair programming, which involves
two developers per development session. The review is done by
both developers during development, and one of them performs the
commit/push. There is an organizational audit requirement that states
that all users who reviewed the code need to be registered. It must
be possible to trace back the code commit to a work item. The team
uses an issue tracker system. In this particular case, the test and
production environments are already provisioned.
(Figure: BPMN model of the trunk-based pipeline, from Validate entry criteria through Deploy artifact to production; failed checks such as a failing build, failing unit tests, or failing code analysis route to an error notification.)
This diagram resembles the Generic CI/CD Pipeline, with some minor additions. Added to the diagram is a specification of the stage Validate entry criteria. The first task in this stage is to determine whether the branch to which the code was pushed is indeed the main branch. The stage also checks whether the commit message refers to an existing work item.
(Figure: specification of the Validate entry criteria stage, consisting of the tasks Get commit info from trigger, Check branch, Get the work item ID from the commit message, and Check whether the work item exists via the issue tracker API.)
Modeling the pipeline stages and tasks isn’t that complicated, but
explicitly designing it makes you more aware of the whole process, the
tasks involved, and what exactly needs to be implemented. Because the
trunk-based workflow results in a more or less straightforward pipeline
model, it is the preferred workflow of many teams. There are some
alternatives to the trunk-based workflow, like a trunk with a separate
release branch, but the principle of the workflow remains the same; you
directly push your commit to the trunk.
As shown in the next paragraphs, the pipeline design becomes more
complex as the complexity of the workflow increases.
Feature Branch Workflow
(Figure: a feature branch is created from the main branch.)
The developer commits code to the feature branch. This can be done
several times. If the feature is completed, they create a pull request, so
other developers can review the code. If the colleague developers approve
the pull request, the code of the feature branch is merged back to the main
branch.8 See Figure 4-14.
(Figure 4-14: code is committed to the feature branch several times; the feature branch is created from main and eventually merged back to main.)
A design principle that works out very well is that “Each branch has an
associated pipeline.” The reason is that each branch has its purpose and
its life cycle, so why would the pipeline execution be the same for different
types of branches?
8
From a technical (Git) point of view, you can decide to merge the feature branch
back to main, or rebase main onto the feature branch, to get a cleaner history. In
addition—if the platform supports it—you may define branch policies on the main branch to prevent, for example, merging a feature that does not even build successfully.
The pipeline associated with this feature branch looks like Figure 4-16
in BPMN notation.
(Figure 4-16: BPMN model of the feature branch CI pipeline, consisting of Validate entry criteria, Execute build, Perform unittests, Analyze code, Package artifact, and Publish artifact; failing checks route to an error notification.)
(Figure: the pipeline of the main branch, triggered by an SCM push, contains all stages of the Generic CI/CD Pipeline, from Validate entry criteria through Deploy artifact to production, plus Notify actors.)
Table 4-2 summarizes the tasks performed for, respectively, the feature
and main branch. This is just a proposal, and of course, it is perfectly fine
to deviate from it. Essential, however, is to think about the stages that are
executed for each branch and why.
Feature branch stages: Validate entry criteria, Execute build, Perform unit tests, Analyze code, Package artifact, Publish artifact, and Notify actors.
The reason to execute only the CI stages is that the response to the developer must be almost immediate. It often happens that a build succeeds on the developer’s local machine but not in the pipeline. The feature branch pipeline is the first step to making sure that the code can be built in a pipeline. In addition, the code of a feature branch is committed frequently (to the remote server). To minimize resource consumption, only the proposed stages are executed.

Main branch stages: Validate entry criteria, Execute build, Perform unit tests, Analyze code, Package artifact, Publish artifact, Provision test environment, Deploy artifact to test, Perform test, Validate infrastructure compliance, Validate exit criteria, Perform dual control, Provision production environment, Deploy artifact to production, and Notify actors.
The pipeline associated with the main branch creates a release (candidate) artifact. This artifact is tagged and versioned as a release artifact. All stages of the Generic CI/CD Pipeline are incorporated into the pipeline.
Note on Implementation
This book intends to be as abstract and tool-agnostic as possible. In cases where implementation is discussed, the technical details are kept to a minimum. But there are some pointers concerning the realization of the pipelines.
On a design level, two pipelines are distinguished, one associated with
the feature branch and one associated with the main branch. Of course, it
is perfectly possible to develop two pipeline implementations, but it is also
possible to realize one technical pipeline integrating both logical pipelines.
The technical pipeline makes use of a condition to distinguish between
branches and uses templates or libraries to reuse stages. In Azure DevOps,
for example, validating a branch can be defined as follows:
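The exact listing depends on the setup; as a minimal sketch in Azure DevOps YAML, the condition can test the predefined Build.SourceBranch variable, and the reusable CI stages can come from a (hypothetical) template file:

# Sketch: one technical pipeline that serves both logical pipelines.
# The CI stages run for every branch; the CD stages run only for main.
trigger:
  branches:
    include:
      - main
      - feature/*                         # feature branch naming pattern is an assumption

stages:
  - template: templates/ci-stages.yml     # hypothetical template with the reusable CI stages

  - stage: CD
    # Execute the CD stages only when the push was done on the main branch
    condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')
    jobs:
      - job: DeployToTest
        steps:
          - script: echo "Provision test environment and deploy artifact to test"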
(Figure: BPMN model of the combined 'main' and 'feature' branch pipeline; a condition after the trigger determines whether the push came from the main branch.)
The model starts with a condition to determine whether the Git push came from the main branch or not. If true, all stages of the Generic CI/CD Pipeline are executed. If false, only a subset of these stages is executed.
Gitflow
Gitflow is still used by a lot of teams. It was one of the first workflows developed and is still popular.
At its core, the repository consists of two branches, master and develop. These branches have an infinite lifetime. The master branch contains all code that is deployed to production. The develop branch contains the code that reflects the current state the team is working on. In recent workflows, the name main branch is used in favor of master. To stay aligned with the previous paragraphs, the name main branch is used in the remaining chapters of this book.
(Figure: in Gitflow, a feature branch is created from the develop branch.)
(Figures: a release branch is created from the develop branch; a hotfix branch is created from main and merged back to both main and develop.)
(Figure: the complete Gitflow branching model; hotfix branches are created from main and merged back to main and develop, the develop branch is merged to a release branch, bug fixes on the release branch are merged back to develop, and the release branch is merged back to main.)
Figure 4-24 through Figure 4-27 are the pipelines associated with the
branches of Gitflow.
(Figures 4-24 through 4-27: the feature pipeline contains Validate entry criteria, Execute build, Perform unittests, Package artifact, Publish artifact, and Notify actors; the develop pipeline adds Analyze code, Provision test environment, Deploy artifact to test, and Perform test; the release and hotfix pipelines contain all stages of the Generic CI/CD Pipeline, up to and including Deploy artifact to production.)
All Gitflow pipelines are combined into one BPMN model, as shown in
Figure 4-28.
(Figure 4-28: the combined Gitflow CI/CD pipeline; based on the branch that triggered the run (hotfix, develop, main, or something else), the corresponding pipeline is executed.)
As you noticed, the more branches there are, the more complex the workflow, which translates into a more complex pipeline design. Also notice that the “continuous” aspect diminishes as more branches are involved. The Gitflow model is not considered a proper model for CI/CD because of its complexity, its multiple long-lived branches, and the slow adoption of new features caused by a strict release cycle.
Build Strategy
One could argue there is not much to tell about building an artifact. You select the appropriate build tool and apply the principle “Build once, run anywhere.” In essence, your Execute build stage itself consists of just one task, often executing a single command line.
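As an illustration, a minimal sketch of such a single-task Execute build stage follows; the Maven command is only an example, and any build tool fits here.

# Sketch: an Execute build stage with one task executing one build command.
stages:
  - stage: ExecuteBuild
    jobs:
      - job: Build
        steps:
          - script: mvn -B clean package     # replace with the build tool of your choice
            displayName: Execute build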
After a couple of minutes, the artifact is created, and that’s it. But,
in reality, the creation of an artifact has lots of aspects to be taken into
account. Maybe the build lasts for 10 minutes, half an hour, or even longer.
This breaks the “fast feedback” principle of CI/CD and asks for a strategy
to decrease the build time. And other factors influence the build strategy or
even shape the whole pipeline. What is the build strategy in case the target
environment is the cloud, or what is the build strategy if there are multiple
DevOps teams involved in the development of one integrated system? Let’s
highlight some of the factors associated with a build strategy.
Vertical Scaling
If the build time increases, vertical scaling is an option to speed up build
times. Adding a larger server with a faster processor, more processor cores,
and a faster disk is an option. But vertical scaling does not always help in
the long run if more demanding builds occur. Other build strategies are
needed, from which lots of advantages can be gained and which do not
require any additional hardware.
(Figure: a full build compiles all files on the build server.)
With an incremental build, the build tool determines which source files changed compared to previous builds, and this results in the compilation of just the changed .cpp file. This speeds up the build time considerably. An incremental build is depicted in Figure 4-30. But there are a couple of caveats with incremental builds.
(Figure 4-30: an incremental build; the unchanged files come from a remote cache, and only the one changed file from the SCM repository is compiled by the build task to produce the artifact.)
Note From an audit point of view, one could raise concerns about
which pipeline run is responsible for the creation of an artifact if
incremental builds are used. The pipeline that creates the artifact
running in production must be traceable. But if more pipelines are
involved in the creation of the artifact, they all need to be part of the
audit chain. Maybe the last pipeline run was responsible for only
1 percent of newly compiled code, while the other 99 percent was compiled by other pipeline runs. Although the chance that a clean—full—rebuild would deliver a different artifact compared to an incremental build is small, it is theoretically not zero. In addition, it may even be difficult or impossible to point out which pipeline builds contributed to the creation of an artifact. To circumvent this issue and avoid difficult discussions with the audit department, it may be wise to still use full builds if the build time is acceptable.
Parallel Builds
In addition to full and incremental builds, there is also the option of
parallel builds. A parallel build spreads the compilation of source files over
multiple threads (on one server) or even over multiple servers, depending
on the platform setup. This results in the following strategies:
• Multithreaded builds: A multithreaded build uses multiple threads on one server to build an artifact. Build tools often include a flag that can be set to enable multithreading, even with the option to provide the number of threads or cores. A build can profit enormously from this feature; if multithreading is enabled and four threads are specified, the build can make full use of a multicore CPU (a brief sketch follows below).
(Figure: a multithreaded build uses multiple threads on one build server.)
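As a sketch, enabling multithreading is often no more than a flag on the build command; the Maven -T flag and the value of four threads are illustrative.

# Sketch: a multithreaded Maven build using four build threads (-T 4).
steps:
  - script: mvn -B -T 4 clean package
    displayName: Execute build (4 build threads)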
(Figure: a build distributed over multiple build servers, each executing a subtask, which together produce the artifact.)
(Figure: a pipeline build offloaded to other build servers, which execute the build tasks and deliver the artifacts.)
Pipeline Caching
Deciding on a strategy to reduce the build time involves not only the execution of the build in terms of CPU usage; I/O and networking are also big factors to take into account. External libraries used to build an artifact may be retrieved from a location not close to the ALM/integration platform, for example, Maven libraries from Maven Central, Docker images from Docker Hub, and .NET packages from NuGet. Downloading them from these external locations adds a lot of time to a build task.
Note Caching is used not only for external libraries but also for
incremental builds. Compiled files created in an earlier pipeline run
are stored in a cache. A new pipeline run will look into that cache first
before a source code file is recompiled. Another benefit of caching
is that it becomes possible to apply restricted access policies to a
cache and block it for other pipelines.
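As a hedged sketch of pipeline caching on Azure DevOps, the Cache task can store the local Maven repository between runs; the cache key layout and repository path follow the commonly documented pattern and are illustrative.

# Sketch: cache the local Maven repository between pipeline runs so external
# libraries are not downloaded from Maven Central on every build.
variables:
  MAVEN_CACHE_FOLDER: $(Pipeline.Workspace)/.m2/repository

steps:
  - task: Cache@2
    inputs:
      key: 'maven | "$(Agent.OS)" | **/pom.xml'
      restoreKeys: |
        maven | "$(Agent.OS)"
      path: $(MAVEN_CACHE_FOLDER)
    displayName: Cache Maven local repository

  - script: mvn -B clean package -Dmaven.repo.local=$(MAVEN_CACHE_FOLDER)
    displayName: Execute build

On the first run the cache is empty; subsequent runs restore it and only download libraries that are new or changed.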
Build Targets
In addition to build time, there are other factors to take into account when a build strategy is defined. Consider the target environment. Some target environments require the creation of certain types of artifacts, such as a Spring Boot JAR or a Docker container, but they also impose constraints on these artifacts. Take a Kubernetes cluster, a cloud target, or a mobile phone, for example. Artifacts may be limited regarding storage size, memory footprint, or CPU usage. An artifact for an AWS Lambda function may not exceed a certain file size; it must have a fast startup time, and memory
consumption must be minimized. So, do not focus only on build time
when defining a build strategy, but also take the target environment and
artifact constraints into account. Tools such as Quarkus, Micronaut, and
GraalVM are focused on these aspects and produce artifacts optimized for
a target environment where these constraints are applicable.
Cross-Platform Builds
There are plenty of situations in which one codebase leads to different
artifacts, each specific to a certain target platform or even certain versions
of that platform. Think of applications that must be able to run on both
Windows and Linux or a mobile app developed for both iOS and Android.
The CI pipeline needs to produce multiple types of artifacts, each one
dedicated to running on a specific target platform. A nice feature of various
CI tools and ALM platforms is the Matrix Build strategy. This allows building several artifacts at once, based on the permutation of different language versions, operating systems, and operating system versions. Only one CI pipeline is required to build all artifacts, although multiple types of
build servers/agents could be needed to perform the build for a specific
operating system.
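As a sketch of a matrix build in Azure DevOps YAML; the image names and the Maven command are illustrative.

# Sketch: one CI pipeline builds the artifact for several operating systems
# at once by expanding the matrix into parallel jobs.
jobs:
  - job: Build
    strategy:
      matrix:
        linux:
          imageName: ubuntu-latest
        windows:
          imageName: windows-latest
    pool:
      vmImage: $(imageName)
    steps:
      - script: mvn -B clean package
        displayName: Execute build on $(imageName)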
The deployment (CD) pipeline is separate for each platform. One
deployment pipeline could be dedicated to a Windows environment, while
the other pipeline is based on a deployment to Linux. This is an example of
a fan-out principle. Fan-out applies to stages, tasks, and pipelines.
Figure 4-34 depicts two target environments. The build/deployment
ratio is one-to-many: one continuous integration pipeline and two
continuous delivery pipelines.
Figure 4-34. Cross-platform pipelines: one CI pipeline and two CD pipelines, one per target environment and operating system
A team that builds only a small part of the whole system also tests only that part in isolation; it never tests how its app behaves as part of the whole system.
Assume a situation in which multiple DevOps teams develop one product, running in its specific target environment. Each team delivers artifacts that must be assembled into one product. The assembling
phase is part of a CD pipeline. A setup to accommodate this is to define
CI pipelines managed by individual teams, while the CD pipeline is
managed by a central team, which is also responsible for the stability and
auditability of the production environment. This setup results in a many-
to-one ratio of the number of CI pipelines that perform the build, related to
the CD pipeline that executes tests and deploys the artifact to production.
See Figure 4-38.
(Figure 4-38: the CI pipelines of teams A, B, and C feed the central CD pipeline of team D.)
The pipelines of the DevOps teams A, B, and C typically contain all the
CI stages. These pipelines also contain test stages to perform integration,
system, and contract tests. This gives these teams a feeling of confidence
that their app works properly. The pipeline of the central DevOps team (D)
is responsible for the target environment and includes all the CD stages.
This is also the place where artifacts, produced by the other teams, are
integrated and tested as one integral system. The CD pipeline is triggered
by all other pipelines, using a pipeline-completed trigger.
Combined, the BPMN workflow model—with collapsed versions
of all pipelines—looks like the one in Figure 4-41, in which team A has
connected their pipeline to the central CD pipeline through a trigger
mechanism. The pipeline of team A submits a trigger, which executes the
CD pipeline of team D.
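As a sketch of such a pipeline-completed trigger in Azure DevOps YAML; the pipeline alias and source name are fictitious.

# Sketch: the centralized CD pipeline (team D) starts automatically when the
# CI pipeline of team A completes.
resources:
  pipelines:
    - pipeline: teamA-ci        # alias used within this pipeline definition
      source: TeamA.CI          # name of the triggering CI pipeline
      trigger: true             # run this pipeline when TeamA.CI completes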
(Figure 4-41: a Git push triggers the decentralized pipeline of team A on the CI/CD platform; on completion, it triggers the centralized CD pipeline of team D.)
(Figure: the pipeline of team A in detail; after the CI stages, the test environment is provisioned, the artifact is deployed to test, the tests are performed, the infrastructure compliance is validated, and finally the CD pipeline is triggered.)
The design of the centralized CD pipeline could look like Figure 4-43.
(Figure 4-43: the Continuous Delivery pipeline of team D; it validates the entry criteria, provisions the test environment, deploys the artifact to test, performs the tests, validates the infrastructure compliance and exit criteria, performs dual control, provisions the production environment, deploys the artifact to production, and notifies the actors.)
The first stage of the CD pipeline is the Validate entry criteria stage. In
this stage, the validation of the artifact created by the pipeline of team A is
performed to determine whether it meets certain criteria. For example:
• The trigger must supply a valid reference to the artifact.
Test Strategy
The test strategy outlines the testing approach within the software supply
chain. Decisions about the order of testing, the fact that some tests run
in parallel, the types of tests, which tests are automated, and which are
performed manually all contribute to the test workflow. There is no silver
bullet on how to design the pipeline flow concerning testing, but there are some typical characteristics of testing that make certain pipeline flows more logical than others.
A test strategy cannot be discussed without looking at the different
types of tests in more detail. Tests come in different flavors, each
specialized in a certain area. Some tests focus on functionality, and some
on nonfunctional aspects. Also, the scope of tests differs, from narrow-
scope tests such as unit tests to broad-scope tests such as chain tests.
The question is, how does each type of test impact a pipeline flow? Is
there a logical order for all these different test types? Is there a relation
between the different test types, and to which extent can these test types be
automated? For the latter question, the testing pyramid described by Mike
Cohn in his book Succeeding with Agile comes to the rescue (see [4]). But
before the relationship between the testing pyramid and pipeline design is
handled, here is an overview of possible test types:
• Unit tests: Validate the functional behavior of an
individual unit of source code by writing unit test cases.
Performing unit tests already has a distinct place in the
Generic CI/CD Pipeline. Unit tests are executed just
after the artifact has been built.
• Contract tests: Test the integration between two systems
in isolation, mocking the service provider.
• Integration tests: Validate the interaction between
some components. Where unit tests are performed
on individual components, the integration tests are
performed on a group of components. Integration tests
are functional in nature.
• System tests: Validate whether the system as a whole
meets the functional and (some) nonfunctional
requirements.
• Regression tests: Verify that a code change does not
impact the existing functionality. Regression tests
ensure that the application still performs as expected.
• Acceptance tests: Their purpose is to validate whether
the system works as expected. This is a formal test
because the customer accepts the software if all
business requirements are met.
• UI tests: These are focused on the user interface of an
application. Of course, not all applications have a user
interface, so UI testing is very context-dependent.
The testing pyramid of Mike Cohn distinguishes only a few test types.
In Figure 4-44, an attempt is made to map a range of test types to the
testing pyramid.
(Figure: the tasks of the Perform test stage, ranging from contract, API, integration, system, and end-to-end tests to regression and acceptance tests, security tests, performance/availability tests, preproduction/staging tests, usability tests, and disaster tolerance tests, ordered from relatively easy to automate to too difficult to automate.)
This model ranks the tasks only from “relatively easy to automate” to
“too difficult to automate.” By default, usability and pentests are manual,
and as the model shows, all manual tests are executed at the end of the
stage. We could leave it to this and conclude that a Perform test stage
contains these tasks in the proposed sequence.
But this is not the whole story. Besides the distinction between
“relatively easy to automate” and “too difficult to automate,” there are
more test dimensions to consider. Given the five dimensions listed next,
which one contributes the most to the order of tests? What dimension is
the most important, and which one contributes the least? Let’s propose the
following order:
• Automated vs. manual tests: One of the principles of CI/
CD is that all tests must be automated. The next pages
will demonstrate what the impact is on the pipeline if
manual tests are included in the workflow. A general
rule of thumb is that automated tests are executed
before manual tests. This is the first dimension to
consider.
(Figure: two instances of the main branch pipeline, each containing all stages; the manual test of features 1 and 2 in pipeline instance B has not been performed yet.)
(Figure: a pipeline with a Perform manual test stage; after the CI stages, the test environment is provisioned, the artifact is deployed to test, the automated tests are performed, and the infrastructure compliancy is validated; the manual test is performed before the exit criteria validation, dual control, and deployment to production.)
(Figure: the first pipeline, triggered by the SCM; it executes the CI stages, provisions the test environment, deploys the artifact to test, performs the automated tests, and validates the infrastructure compliance.)
The second pipeline starts with a manual trigger. The person who
started the pipeline is also the one who performs the manual test. The
Deploy artifact to test stage needs to know which artifact must be deployed, so the manual trigger must include an option to select the already built artifact from the repository or use the latest version by default. If multiple
existing test environments are available, the specific test environment on
which the manual test is executed must also be provided as part of the
manual trigger. See Figure 4-49.
(Figure 4-49: the second pipeline, started by a manual trigger; it validates the entry criteria, provisions the test environment, deploys the selected artifact (the latest by default) to test, performs the manual test, validates the infrastructure compliance and exit criteria, performs dual control, provisions the production environment, and deploys the artifact to production.)
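As a sketch, the manual trigger can be modeled with runtime parameters; the parameter names, environment names, and defaults are illustrative.

# Sketch: a manually started pipeline for the manual tests. The tester selects
# the artifact version (latest by default) and the target test environment.
trigger: none            # no SCM trigger; the pipeline is started manually

parameters:
  - name: artifactVersion
    type: string
    default: latest
  - name: testEnvironment
    type: string
    default: test-1
    values:
      - test-1
      - test-2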
The benefit of this approach is that the first pipeline executes all stages
without being blocked by a manual test. The second pipeline is started
only after all manual tests have been executed; otherwise, it makes no
sense to start the pipeline in the first place. Using two pipelines like this
does not result in dangling pipelines.
• Separate pipeline for manual tests: Another approach
is to completely isolate the Perform manual test stage
from the main pipeline and wrap it in a pipeline
dedicated to manual tests. This pipeline is either
manually triggered or triggered from the main pipeline
using a webhook, called by the Perform test stage.

Functional tests
• Contract tests
• Integration tests
• System tests
• Acceptance tests
• Regression tests
• UI testing
• API tests
• End-to-end test
• Usability tests
Nonfunctional tests
• Security tests
• Penetration tests
• DAST tests
• IAST tests
• Preproduction/staging tests
• Performance tests
(Figure: the Perform test stage with contract tests, API tests, system/integration tests, security tests (IAST/DAST), regression/automated acceptance/UI tests, and preproduction/staging tests.)
Conditions:
• There is no maximum to the number of test
environments and parallel jobs in the ALM/integration
platform.
• The QA team consists of only two test engineers who
can perform manual tests in parallel.
(Figure: the Perform test stage with the automated tests running in parallel, followed by the manual tests, the acceptance test, the security pen test, and the disaster tolerance tests in parallel lanes, together with the preproduction/staging tests.)
All automated tests run in parallel. For convenience, the API tests are
combined with the contract tests. The difference between functional and
nonfunctional tests does not matter anymore in the case of parallel tests.
The manual tests are also parallelized. Given that the QA team has only
two test engineers, two parallel lanes are defined. The security pentest
is positioned a bit arbitrarily because often this expertise is not present
within a DevOps QA team. That is solved in the next paragraph.
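As a sketch, the parallel automated tests can be modeled as independent jobs within the Perform test stage; jobs without a dependsOn relation run in parallel, and the test scripts are hypothetical.

# Sketch of the Perform test stage: jobs without dependsOn run in parallel,
# provided enough agents/parallel jobs are available.
stages:
  - stage: PerformTest
    jobs:
      - job: ContractAndApiTests
        steps:
          - script: ./run-contract-and-api-tests.sh
      - job: SystemAndIntegrationTests
        steps:
          - script: ./run-system-integration-tests.sh
      - job: SecurityTests
        steps:
          - script: ./run-security-tests.sh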
(Figure: the Perform test stage model without the security pen test.)
Condition:
• The Automated Security tests last for 2 hours.
Given this condition, the final model of the Perform test stage looks like
Figure 4-53.
(Figure 4-53: the final model of the Perform test stage, containing the system/integration tests, regression/automated acceptance/UI tests, end-to-end tests, performance tests, preproduction/staging tests, acceptance test, and disaster tolerance tests in parallel.)
The Automated security tests task contains the following tasks:
• Perform security test IAST
• Perform security test DAST
Release Strategy
Branching strategy, deployment strategy (which is discussed in the next
paragraph), and release strategy sometimes cause confusion, and people
tend to mix them up. Let’s clarify these concepts.
• Branching strategy involves the process of bringing a
business feature to the main branch (or to a release
branch), with the intention to deploy it to production.
• Deployment strategy defines how the artifact is
deployed to production. The availability classification
of the application is the main driver of the deployment
strategy. If downtime is allowed during deployment,
a different strategy is chosen compared to a case in
which the application must be available 24/7.
Road Map–Based Release
(Figure: in a road map–based release, the production deployment stages, Validate exit criteria, Perform dual control, Provision production environment, and Deploy artifact to production, are started by a manual trigger after a variable amount of time.)
(Figure: BPMN model of the road map–based release. A primary pipeline, triggered by a push to the main branch, executes the stages up to and including Validate infrastructure compliancy. The release manager manually starts the deployment to production, after which a second pipeline validates the exit criteria, performs dual control, provisions the production environment, and deploys the artifact to production.)
Timeboxed Release
Sometimes, there are valid reasons to deploy to production at regular
intervals. A release is timeboxed, meaning that features are added until the
end of the timebox has been reached and the deployment to production is
performed. A timebox is, for example, a Scrum sprint in which the release
is deployed at the end of each sprint. In his blog, Martin Fowler calls this
a release train. The train arrives and leaves at the scheduled times. When
the train leaves the station, all features that stepped into the train go to
production (see [27]). See Figure 4-59.
(Figure 4-59: in a timeboxed release, the production deployment stages, Validate exit criteria, Perform dual control, Provision production environment, and Deploy artifact to production, are started by a scheduled trigger at the end of the timebox.)
This strategy looks similar to the road map–based release strategy with
the exception that the intervals between the releases are fixed, and the
production pipeline is triggered using a schedule.
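As a sketch of such a scheduled trigger in Azure DevOps YAML; the cron expression (every Friday at 18:00 UTC) is illustrative and should match the length of the timebox.

# Sketch: a scheduled trigger that starts the production deployment pipeline
# at the end of each timebox.
schedules:
  - cron: "0 18 * * 5"      # every Friday at 18:00 UTC (5 = Friday)
    displayName: End-of-timebox release
    branches:
      include:
        - main
    always: true             # run even if there are no new changes since the last run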
Regular Release
A regular release means that each business feature committed to the
mainline is deployed to production as soon as possible. This type of
release is possible only if the mainline is kept in a state from where it is
possible to deploy to production at any given moment (this is an important
continuous delivery principle). This is also possible in the two previous
release strategies, but the difference is, in the case of regular releases,
deployments to production are done more often, not once per two weeks,
but maybe once a day or even multiple times a day. In this strategy, just
one pipeline is involved, containing all stages. See Figure 4-62.
Continuous Deployment
Continuous deployment is a “hands-off” process in which the deployment
to production does not pass a manual dual control stage. This means that if
a developer pushes the code to the main branch, the pipeline performs all
stages without manual interference, including deployment to production.
This results in a pipeline that resembles the Generic CI/CD Pipeline but
without the Perform dual control stage. See Figure 4-64.
(Figure 4-64: the continuous deployment pipeline, triggered by an SCM push; it contains all stages of the Generic CI/CD Pipeline except Perform dual control.)
9
The release version in production must always be lower than the deployed
release version.
Re-create Deployment
The re-create deployment is best illustrated by an example. See
Figure 4-65.
Example:
Assume the application is a runnable—Spring Boot—jar, deployed
on two Linux servers. The application runs as a Linux service and
receives HTTP(S) requests from clients. Communication takes place
over the public Internet. Server-side load balancing is performed
using a hardware load balancer (for convenience, in this case, there
is no client-side load balancing applied). The load balancer redirects
the requests to the application instances. Both application instances
are connected to a SQL database.
This strategy involves a couple of tasks. All tasks can be automated. See
Table 4-4 and Figure 4-67.
• Disable nodes in the load balancer pool: Disable servers 1 and 2 in the load balancer pool.
• Wait for a short period (until no requests are received): Wait until the load balancer does not forward any request to the Linux services and the current request is completely processed.
• Stop the Linux services on servers 1 and 2: The Spring Boot app runs as a Linux service. Stop the service using sudo systemctl stop myApp.
• Copy the JAR file with the new version to the target environment: Retrieve all artifacts from the artifact repository and copy the application JAR to the target environment.
• Copy the DB script to the target environment and execute it: This is the script to migrate from database version A to version B.
• Start the Linux services on servers 1 and 2: Start the Spring Boot app again using sudo systemctl start myApp.
• Wait for a couple of seconds: Needed to bootstrap and initialize the apps.
• Enable nodes in the load balancer pool: Route requests to servers 1 and 2 again.
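As a sketch, these tasks could be scripted in the Deploy artifact to production stage, assuming an SSH connection from the platform to the production servers; the load balancer helper script, database migration script, host names, service name, and paths are hypothetical.

# Sketch of the re-create deployment tasks executed from the pipeline.
# lb-ctl.sh, run-db-migration.sh, host names, and paths are hypothetical.
steps:
  - script: |
      ./lb-ctl.sh disable server1 server2          # disable nodes in the load balancer pool
      sleep 30                                      # wait until no requests are received anymore
      ssh deploy@server1 "sudo systemctl stop myApp"
      ssh deploy@server2 "sudo systemctl stop myApp"
      scp myApp-$(Build.BuildNumber).jar deploy@server1:/opt/myApp/myApp.jar
      scp myApp-$(Build.BuildNumber).jar deploy@server2:/opt/myApp/myApp.jar
      ./run-db-migration.sh migrate-A-to-B.sql      # copy and execute the DB script
      ssh deploy@server1 "sudo systemctl start myApp"
      ssh deploy@server2 "sudo systemctl start myApp"
      sleep 10                                      # allow the apps to bootstrap and initialize
      ./lb-ctl.sh enable server1 server2            # enable nodes in the load balancer pool again
    displayName: Deploy artifact to production (re-create deployment)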
(Figure 4-67: BPMN model of the Deploy artifact to production stage for the re-create deployment. The pipeline retrieves the artifacts (JAR file and DB scripts) from the artifact repository, disables the load balancer nodes, stops the Linux services, copies the files, executes the DB script, starts the services again, and re-enables the load balancer nodes; the production environment reports the servers as unhealthy and healthy while the services are stopped and started.)
The model shows how the tasks in the Deploy to production stage result
in the remote execution of these tasks in the production environment.
This also implies the existence of an SSH connection between the ALM/
integration platform and the production environment.
Blue/Green Deployment
In a blue/green deployment strategy, the starting point is an infrastructure
with the old version (version A, the blue version) of the application and
the database. In parallel, a new infrastructure is built, which has the
new version (version B, the green version) installed. The load balancer
instantaneously switches from infrastructure A to B, routing the traffic to
the new version. If the system has a database, two options are possible.
• The new version of the application can work with the
old version of the database.
• The old version of the application can work with the
new version of the database.
Often, the first option is not possible because a new version of the
application usually requires a database change, specific for the new
application version. In the example used in this paragraph, the second
option is used. The starting point in this example is a server pool (server pool A) with servers 1 and 2, both running application version A, and a database with version A.
(Figure: the starting point of the blue/green deployment; server pool A with servers 1 and 2 runs application version A and uses database version A.)
The first step in the deployment is to upgrade the database from version A to version B. The database script is executed, but because the old version of the application still works with the new version of the database, everything should still be working.10 The assumption is that the database
changes can be performed online, of course. After this has been done, a
new infrastructure is built. The new infrastructure contains a server pool
(server pool B) with servers 3 and 4. Application version B is installed on
both servers, but because no requests are sent to the servers yet, servers 3
and 4 are still idle. See Figure 4-69.
10
Not all database changes are backward compatible. Sometimes, some additional
processing or transformation is required in the database using database triggers,
for example.
The setup now consists of two server pools, one with application
version A and one with application version B. Server pool A is enabled and
processes all requests (using database version B). Server pool B is idle.
The essence of a blue/green deployment is that the load balancer switches
from server pool A to server pool B instantly. After the switch, all requests
are sent to the servers in server pool B. Server pool A becomes idle and
does not process any new requests anymore. The infrastructure of server
pool A can be dismantled and used for other purposes. See Figure 4-70.
Figure 4-70. (a) Switch from server pool A to B (server pool A becomes idle). (b) Version B available
• Stop the Linux services in server pool B: The apps in server pool B are stopped, although this is already the case if the new infrastructure is created.
• Copy the JAR file with the new version to the new environment (server pool B): Retrieve all artifacts from the artifact repository and copy the application JAR to the target environment. This concerns the deployment of the new versions on the servers in server pool B.
• Start the Linux services in server pool B: The apps in server pool B are started as a Linux service but do not process any requests yet.
• Enable the nodes of server pool B in the load balancer nodes pool: Enable servers 3 and 4 of server pool B in the load balancer nodes pool.
• Wait for a short period: To allow bootstrapping and initializing the apps; traffic is routed to the apps on server pool B. This is the moment both applications A and B are active.
• Disable the nodes of server pool A in the load balancer nodes pool: Requests to servers 1 and 2 in server pool A are blocked. From this moment, requests are routed only to servers 3 and 4.
• Dismantle the old infrastructure: Servers in server pool A are no longer used and can be decommissioned.
(Figure 4-71: BPMN model of the blue/green deployment; the Provision production environment stage provisions the new infrastructure for server pool B.)
For clarity reasons, the BPMN model in Figure 4-71 does not contain a
connection between the pipeline and the artifact repository, a connection
between the pipeline and the production environment, and the execution
of the remote commands in the production environment.
Rolling Update and Canary Deployment
In a rolling update or canary deployment, the new version is rolled out gradually, so only a part of the users is exposed to the new version. This allows the new version of the application to be tested in a live environment with a small number of users before being deployed to all users. Because both strategies are similar and primarily focused on testing the stability and reliability of a change, they are used interchangeably.
It is best to demonstrate this strategy using an infrastructure with three servers, each with version A installed. The first step in the deployment is again upgrading the database from version A to version B. The database script is executed, but because the old version of the application still works with the new version of the database, everything should still be working fine.
The next step is to disable server 1 in the load balancer pool. HTTP
traffic bleeds dry, and after some time the application on server 1 does
not receive requests anymore. All requests from the Internet are routed to
servers 2 and 3, which are still active. In the meantime, application version
B is deployed to server 1. See Figure 4-72.
(Figure 4-72: three servers running application version A; the database is migrated to version B, server 1 is disabled in the load balancer pool, and application version B is deployed to server 1.)
After that, server 2 is disabled in the load balancer pool, and server 1
is enabled again. At that moment, server 2 is inactive, and servers 1 and 3
are active. Server 1 serves application version B, while server 3 still serves
application version A. Both application versions run at the same time,
but because the database is compatible with both application versions,
everything works fine. In the meantime, application version B is deployed
on server 2.
The next step is to disable server 3 and enable server 2 again. Servers 1
and 2 are active and run application version B, while version B is installed
on server 3. The last step is to enable server 3, and from that moment all
servers serve application version B. See Figure 4-73.
• Copy the DB script to the target environment and execute it: This is the script to migrate from database version A to version B.
The following tasks are repeated in a loop for X = server [1..3]:
• Disable node [X] in the load balancer nodes pool: Block all requests to server [X].
• Wait for a short period: Needed to finish requests that are still being processed.
• Stop the Linux service on server [X].
• Copy the JAR file with the new version to server [X]: Retrieve all artifacts from the artifact repository and copy the application JAR to the target environment.
• Start the Linux service on server [X]: Start the Spring Boot app.
• Wait for a couple of seconds: Needed to bootstrap and initialize the app.
• Enable node [X] in the load balancer nodes pool.
• X = X + 1: Increment X to handle the next server.
This results in the BPMN model shown in Figure 4-74. Take note of
the repeating task with the intermediate conditional event (iteration).
The connection between the pipeline and the artifact repository, the
connection between the pipeline and the production environment, and
the execution of the remote commands in the production environment are
excluded from the model for clarity reasons.
(Figure 4-74: BPMN model of the rolling update, with a repeating task that iterates over X = [1..3].)
(Figure: BPMN model of the A/B test strategy, with X = [1..variable] iterations.)
The BPMN model of the A/B test strategy is similar to the BPMN model
of the previous paragraph, with the exception that a variable is introduced
for A/B testing to control to which extent version B is deployed.
Delegation
An example of role separation concerns a quality assurance engineer who defines test cases, performs manual tests of the application, and automates the test cases as much as possible. Although integration of the automated tests in the pipeline is essential, quality assurance engineers sometimes work in isolation, and the development of automated tests is separated from application development and pipeline development. At a certain moment, however, the automated tests have to be integrated into the pipeline. This can be done using different techniques. One option is to add a Perform test stage to the main pipeline and implement the test tasks within that stage. Another option is to isolate the Perform test stage, implement the stage in a separate pipeline, and let the main pipeline invoke this Perform test pipeline. This means that the main pipeline does include a Perform test stage, but the execution of this stage is delegated to the separate QA pipeline.
(Figure: the main CI/CD pipeline triggers the QA pipeline through a webhook; the QA pipeline performs the tests and notifies the actors.)
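As a sketch, the delegation can be implemented with a webhook call from the main pipeline; the webhook URL and token variables are hypothetical.

# Sketch: the main pipeline triggers the QA pipeline through a webhook and
# passes the artifact version. QA_PIPELINE_WEBHOOK_URL and QA_PIPELINE_TOKEN
# are hypothetical variables.
steps:
  - script: |
      curl -X POST "$(QA_PIPELINE_WEBHOOK_URL)" \
           -H "Authorization: Bearer $(QA_PIPELINE_TOKEN)" \
           -H "Content-Type: application/json" \
           -d "{\"artifactVersion\": \"$(Build.BuildNumber)\"}"
    displayName: Trigger QA pipeline (Perform test)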
Application Architecture
The architecture of the application has a large influence on the pipeline design. A monolithic application consists of one artifact or multiple strongly coupled artifacts. This monolithic architecture differs from a microservice architecture, where the pipeline design typically consists of a separate CI and CD pipeline per microservice.
(Figure: microservices A, B, and C, each with its own CI and CD pipeline; the microservices are messaging enabled.)
Orchestration
Sometimes certain components need to be deployed in a particular order, or certain tasks need to be completed before a component can be deployed. This order of activities can be managed using an orchestrator pipeline. The orchestrator executes tasks and orchestrates
the invocation of other pipelines. Consider a microservice architecture. In
normal conditions, microservices run independently, so an orchestrator
should not be needed at all. However, there could be a change in all
microservices that justifies an order in deployment.11 In Figure 4-79,
the deployment order is managed by the orchestrator, first deploying
microservice B, then microservice C, and finally microservice A. The
orchestrator acts as an automated “runbook” to guarantee the order.
(Figure 4-79: an orchestrator that invokes the CI/CD pipelines of microservices B, C, and A in that order.)
11
Microservices are loosely coupled but not decoupled. If a new mandatory
element is added to an event between two microservices, both microservices are
impacted.
Event-Based CI/CD
All design strategies and considerations so far are based on a predefined
workflow model. From a separation of concerns point of view, the
stages of the workflow are divided over different pipelines. But what if
we take this a level higher and consider an event-based CI/CD model?
Similar to an application architecture in which a monolithic application
is broken down into several microservices, it is also possible to do this
for pipelines. The pipeline stages are developed as microservices, using
an event-driven communication model. Each microservice consumes
events and produces events. The events are specified according to a
well-defined schema containing the metadata each microservice needs.
External systems like source code management systems and issue
trackers are hooked into the eventing framework and also produce and/
or consume events.
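As an illustration, such an event could look like the following sketch; the field names are illustrative and do not follow a formal standard.

# Sketch of an event produced by the Execute build microservice and consumed
# by, for example, the Perform unit tests and Package artifact services.
eventType: build.completed
producer: execute-build-service
subject:
  repository: https://git.example.com/team-a/app
  branch: main
  commit: 9f2c1ab
  artifact: app-1.4.2.jar
result: success
timestamp: "2024-05-01T10:15:00Z"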
The event-based CI/CD model of the Generic CI/CD Pipeline is
transformed into Figure 4-80.
(Figure 4-80: the Generic CI/CD Pipeline transformed into an event-based model; each stage, such as Execute build, is a microservice that consumes and produces events.)
This model has a few benefits over a pipeline model.
Resource Constraints
Resource constraints come to light only when the pipeline is already
developed and deployed. These resource constraints usually manifest
themselves due to a lack of computing or storage resources. This results in poor pipeline performance, or pipelines are put into a queue, waiting for an agent or compute node to become available. The simple
answer to this problem is to add more hardware, but this is only one part
of the story as we have seen. At some point, all options are stretched so
far that other solutions have to be considered. Some of these solutions
are ALM or integration platform related. Other options can be found
in redesigning parts of the pipeline in such a way that their resource
consumption is optimized. Here are some other considerations:
• Revise the build strategy: The build strategy was already
explained earlier. Take a look at your build strategy
again, and determine whether some things can be
changed. Something as simple as pipeline caching
improves performance a lot.
• Priority clause: The regular behavior of ALM/
integration platforms regarding priority is that pipeline
execution is first in, first out (FIFO). The problem is, if
you deploy a “production fix,” the pipeline execution joins the queue and is executed only after all other pipelines in the queue have been processed. There is
no distinction between a regular pipeline run and a
production deployment. Wouldn’t it be great if we
could add a clause like in Listing 4-1 to our pipelines?
priority:
  scope: global               # Concerns the whole organization
  target: prod                # Deals with production; increased prio
  management-class: incident  # Incident; more important
When using a priority clause like this, the particular pipeline queue
is rearranged, and high-priority pipeline instances are moved to the front
of the queue. Certain properties indicate how the queue is rearranged.
Incidents in production have more priority than regular deployments to
production. Regular deployments to production have more priority than
regular deployments to test, etc. Unfortunately, few ALM/integration
platforms offer prioritization of pipelines, and if they do, it is only
rudimentary.
As an alternative to a priority clause, you can also define a pipeline
setup with different execution environments (e.g., runners, executors, or
agents). This way it becomes possible to define separate pipeline “lanes”
in which pipelines of different categories run but don’t interfere with
each other.
• Schedule pipelines: Sometimes there are good reasons
why a stage doesn’t have to be executed multiple times
per day. Analysis of source code can be done as part of
the regular pipeline, but if multiple minor changes are
applied to the codebase daily, the analysis of the source
code often doesn’t show much difference during that
day. It makes sense to schedule source code analysis
once a day in a quiet moment.
• Limit continuous deployments: Resource constraints
can also be present in test environments. Even if the
ALM/integration platform itself is capable of executing
all pipelines fast enough, the test environment may not be able to keep up with the number of deployments.
(Figure: a pipeline in which Execute build, Perform unittests, and Analyze code are executed in sequence.)
Because the Perform unit tests stage depends on the artifact produced
by the Execute build stage, both stages must be executed in sequence. The
Analyze code stage, however, does not necessarily depend on the artifact,
but on the source code in the repository.13 Reordering the stages would
result in a slight change in the design; see Figure 4-82.
13 SonarQube requires an artifact, but this can still be a task detached from the creation of a regular build artifact.
[Figure 4-82: the reordered pipeline design. The Analyze code stage (Perform SonarQube scan, Perform Fortify scan, and Perform Whispers scan) runs in parallel with the Execute build and Perform unit tests stages; a parallel gateway joins both branches before the end of the pipeline.]
This design already reduces the overall processing time of the pipeline.
Also notice the use of the parallel gateway at the end of the Perform
unit tests and Analyze code stages. In workflow modeling, this parallel
gateway represents a "join"; in multithreading, it is called a barrier.
The barrier ensures that both the Perform unit tests and Analyze code
stages are completed before the pipeline continues (in this design
example, the pipeline ends). This can be a requirement in case further
testing should be performed: continue only if both previous stages were
completed successfully. Removing the barrier results in a pipeline design
in which the Analyze code stage still executes in parallel, but the
Perform unit tests branch does not wait for it to be completed (the
barrier is not included in that model), completely disregarding the result
of the Analyze code stage. See Figure 4-83.
[Figure 4-83: the same design without the barrier. The Execute build and Perform unit tests branch ends at End 1, while the Analyze code stage (SonarQube, Fortify, and Whispers scans) ends separately at End 2.]
Taking a closer look at the three code analysis tasks reveals that these
tasks are also independent. Applying further parallelization results
in the design shown in Figure 4-84. The design makes use of a barrier
(parallel gateway) at the end of the Perform unit tests and Analyze code
stages, but the individual tasks of the Analyze code stage also end with
a barrier; the Analyze code stage ends only if all three tasks are
completed.
[Figure 4-84: further parallelization. The Perform SonarQube scan, Perform Fortify scan, and Perform Whispers scan tasks of the Analyze code stage run in parallel and join with the Perform unit tests branch before the pipeline ends.]
[Figure: the code analysis tasks (Perform SonarQube scan, Perform Fortify scan, Perform Whispers scan) placed in a separate pipeline, next to the pipeline with the Execute build and Perform unit tests stages, each with its own start and end.]
• Download package
• Validate integrity
[Figure: pipeline design for a downloaded package, with the stages Validate entry criteria, Download package, Validate integrity, Publish package (internal), Provision test environment, Install and configure in test, Test/validate the application, Approve production installation, Install and configure in production, and Notify actors, with a "Tests passed?" decision before production.]
Summary
You learned about the following topics in this chapter:
CHAPTER 5
Pipeline Development
This chapter covers the following:
each with its characteristics and technical solutions. Still, various generic
topics can be emphasized, even if the implementation is different. This
chapter discusses some of these topics and examples that deal with
pipeline development.
Pipeline Specification
A pipeline specification covers the translation of the logical pipeline design
into a technical definition. This results in one or more files containing
pipeline code executed on an ALM/integration platform.
Scripted Pipelines
A scripted pipeline is either a file containing a scripting language or a
domain-specific language (DSL) language, but it can also consist of a
complete project, supported by a general-purpose programming language.
An example of a scripted pipeline is the Groovy pipeline used in Jenkins.
Atlassian’s Bamboo has the option to develop a pipeline based on a
complete Java project (Bamboo Java Specs).
Besides the benefit that scripted pipelines are just files that can be
put under version control, they are also extremely versatile. You have
full control of the flow and the implementation of the stages and tasks.
However, this can also become a pitfall: if not taken care of, the
pipeline code becomes unreadable. Listing 5-1 shows the simple structure
of a scripted Jenkins pipeline.
node {
    stage('Stage 1') {
        //
    }
    stage('Stage 2') {
        //
    }
    stage('Stage 3') {
        //
    }
}
Declarative Pipelines
Declarative pipelines are similar to scripted pipelines, but they have a
more restricted syntax that preserves the pipeline structure and prevents
the code from becoming bloated and unreadable. Declarative pipelines
intend to be better structured, which makes reading and writing the
pipeline code easier. This does not mean you cannot do the things you can
do with scripted pipelines. It is common to add scripting to a declarative
pipeline, but because of the strict syntax, the scripting has a distinctive
place in the pipeline structure. The trend seems to be shifting toward
the use of declarative pipelines, and especially YAML-based pipelines
dominate the pipeline landscape.
stages {
stage('Validate entry criteria') {
steps {
echo 'Stage: Validate entry criteria'
}
}
stage('Execute build') {
steps {
echo 'Stage: Execute build'
}
}
stage('Perform unit tests') {
steps {
echo 'Stage: Perform unit tests'
}
}
stage('Analyze code') {
when {
branch "main"
}
steps {
echo 'Stage: Analyze code'
}
}
stage('Package artifact') {
steps {
echo 'Stage: Package artifact'
}
}
stage('Publish artifact') {
steps {
echo 'Stage: Publish artifact'
}
}
stage('Provision test environment') {
when {
branch "main"
}
steps {
echo 'Stage: Provision test environment'
}
}
stage('Deploy artifact to test') {
when {
branch "main"
}
steps {
echo 'Stage: Deploy artifact to test'
}
}
stage('Perform test') {
when {
branch "main"
}
steps {
echo 'Stage: Perform test'
}
}
stage('Validate infrastructure compliance') {
when {
branch "main"
}
steps {
echo 'Stage: Validate infrastructure compliance'
}
}
stage('Validate exit criteria') {
when {
branch "main"
}
steps {
echo 'Stage: Validate exit criteria'
}
}
stage('Perform dual control') {
when {
branch "main"
}
steps {
echo 'Stage: Perform dual control'
}
}
stage('Provision production infrastructure') {
when {
branch "main"
}
steps {
echo 'Stage: Provision production infrastructure'
}
}
stage('Deploy artifact to production') {
when {
branch "main"
}
steps {
echo 'Stage: Deploy artifact to production'
}
}
}
Constructs
One of the issues with pipelines is that complex actions sometimes
require a lot of plumbing code. Declarative YAML-based pipelines are
also not very versatile, because YAML is not a real programming language.
Complex setups such as canary deployment or building various versions
for different target environments blow up the pipeline declaration, are
hard to read, and are difficult to maintain unless there are features in the
platform supporting this complexity.
A construct is a generic name for pipeline features that reduce
complexity. Constructs are out-of-the-box features solving problems
not easy to solve otherwise. This paragraph is devoted to some of the
(common) constructs found on various platforms. The examples are not
“taken” from only one platform, but from various ones. Not all platforms
support all constructs. The examples are to show only what is possible.
Triggers
There are several ways to start a pipeline, depending on the context.
Starting a pipeline is based on triggers, and most ALM/integration
platforms support various kinds of triggers. These are the most
common ones:
• SCM trigger: Most common is the SCM trigger that
starts a pipeline after code is committed and pushed
to a source code management repository. The pipeline
builds the artifact based on the branch in which the
code was committed. In addition to code pushes, other
SCM events may lead to triggering a pipeline. One
example is an event submitted after a pull request has
been approved. SCM triggers can be implemented
using webhooks or as an integrated feature of an ALM/
integration platform.
Tip If you plan to incorporate the pipeline file into the same source
code repository as the application, remember that if you use an SCM
trigger, the pipeline by default also runs after you change the pipeline
code itself, which could potentially lead to the deployment of the
application to production (or at least to a test environment). It is
better to move the pipeline code to a separate directory and exclude
this directory from the trigger; this option is provided by several
platforms. An alternative is to exclude the pipeline file(s) based on the
filename or extension if the platform supports this feature.
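As an illustration, an Azure DevOps-style SCM trigger that excludes a
(hypothetical) pipelines directory could look like this:

trigger:
  branches:
    include:
    - main
  paths:
    exclude:
    - pipelines/*

With this trigger, a push that only touches files under the pipelines
directory does not start the pipeline.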
resources:
pipelines:
- pipeline: logical-name-of-this-pipeline
source: pipeline-that-triggers-me
trigger: true
Execution Environment
Modern platforms provide the option to specify in which environment
a pipeline is supposed to run. The various platforms use concepts like
“slave” nodes, runners, executors, or agents, whether grouped into a
pool of servers or containers. In essence, the execution environment is
the environment in which a pipeline runs. This can be in the form of a
Linux or Windows server, but it is also possible to execute a pipeline in a
Docker container running on a (Kubernetes) cluster. These environments
are preconfigured and registered to the ALM/integration platform. These
environments also consist of preconfigured tools. If you want to build an
artifact using Java or Python, the environment must have pre-installed Java
JDK and Python.
In addition to running the whole pipeline in one specific environment,
it is also possible to decompose the pipeline and have each part of the
pipeline run independently. The pipeline is decomposed, often as so-
called jobs. Each job is executed in a specific environment. Jobs of one
220
Chapter 5 Pipeline Development
pipeline may run in the same environment, but jobs may also run in
separate environments. This also means that in these situations there is no
shared memory and passing information between jobs is not always trivial.
Listing 5-5 and Listing 5-6 show some examples.
jobs:
build:
docker:
- image: cimg/openjdk:17.0.3
jobs:
- job: build
pool: myServerPool
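Because jobs may run on separate machines without shared memory,
information often has to be passed explicitly between them. A minimal
Azure DevOps-style sketch, with hypothetical job and variable names,
could look like this:

jobs:
- job: build
  steps:
  - script: echo "##vso[task.setvariable variable=artifactVersion;isOutput=true]1.0.42"
    name: setVersion
- job: deploy
  dependsOn: build
  variables:
    artifactVersion: $[ dependencies.build.outputs['setVersion.artifactVersion'] ]
  steps:
  - script: echo "Deploying version $(artifactVersion)"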
Connections
Pipelines often connect to external systems with a specific endpoint, a
certain protocol, and security credentials. Using curl in the pipeline to
connect to an external Nexus IQ server may work, but it bloats the
pipeline code. A more elegant way is to make use of connectors or service
connections.
- task: NexusIqPipelineTask@1
displayName: 'Nexus IQ policy evaluation'
inputs:
nexusIqService: 'ServiceConnectionNexusIQ'
applicationId: myApp
stage: 'AnalyzeCode'
Variables
Variables in pipelines are similar to variables in a programming language.
Variables can be defined in a pipeline, but certain platforms also provide
the option to define variables outside the pipeline specification, sometimes
grouped with a logical name. Special care needs to be taken concerning
variable scope. As explained earlier, parts of the pipeline—stages or jobs—
can be executed on different runtime environments, which makes sharing
variables more troublesome, or in some cases even impossible.
variables:
- name: endpoint
${{ if eq( parameters['target'], 'test') }}:
value: 'https://mycompany.test.com'
${{ if eq( parameters['target'], 'production') }}:
value: 'https://mycompany.com'
Conditions
Conditions in pipelines are indispensable. Conditions in scripted pipelines
are implemented using an if/then/else construction. Conditions in
declarative pipelines often have a different structure and use keywords
like if, when, or condition, depending on the platform. Some examples of
conditions on different platforms are shown in Listing 5-8, Listing 5-9, and
Listing 5-10.
job:
script: echo "Run Analyze code in case of the main branch"
rules:
- if: $CI_COMMIT_BRANCH == "main"
stage('Analyze code') {
when {
branch "main"
}
steps {
echo 'Run Analyze code in case of the main branch'
}
}
- stage: Analyze_code_stage
displayName: 'Analyze code'
condition: eq(variables['Build.SourceBranchName'], 'main')
jobs:
- job: Analyze_code_job
steps:
- script: echo 'Run Analyze code in case of the
main branch'
Caching
Caching decreases the time to build an artifact. Different platforms have
implemented caching in different ways. In one of the researched platforms
(CircleCI), it is implemented as an integrated construct in the pipeline
declaration and is accessed by using the save_cache and restore_cache
keywords, while in other platforms, caching is added as a marketplace
solution that performs the save and restore actions.
When using the caching feature, it becomes possible to store external
libraries or even compiled code to a “cache store” and use this cache in
subsequent pipeline runs. It is best to explain this using Figure 5-3.
[Figure 5-3: during the first pipeline run, the libraries are downloaded from the internal repository and the local files are stored in a remote cache; during the second run, the cache is downloaded instead.]
- task: Cache@2
inputs:
key: 'maven | "$(Agent.OS)" | **/pom.xml'
restoreKeys: |
maven | "$(Agent.OS)"
maven
path: $(MAVEN_CACHE_FOLDER) # is ./.m2/repository
displayName: cache_maven_local_repo
The trick is to assemble a cache key, using the Maven prefix, the
operating system, and all pom.xml files. The **/pom.xml pattern is used to
calculate the hash of all the pom.xml files. As soon as one of the pom.xml
files changes, the hash changes, and a new cache is saved and restored.
Resolving key:
- maven [string]
- "Linux" [string]
- **/pom.xml [file pattern; matches: 3]
- s/pom.xml --> 7CC04B8124B461613E167AA0D15E62306BDF553750988B6BF21355E641B163DE
Matrix
A matrix is used to declare an action using all permutations of variables
declared in the matrix. The matrix implements a fan-out pattern and can
be used for the implementation of a cross-platform build strategy. Using
a matrix, it becomes possible to define a build for multiple language
versions and multiple target environments. Listing 5-14 shows an example
of a matrix declaration.
jobs:
build:
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: [3.7, 3.8]
os: [ubuntu-latest, macOS-latest, windows-latest]
Deployment Strategy
A deployment strategy can become complex. There are various solutions
to solve this problem. A common—and recommended—solution is to use
a deployment tool with built-in deployment strategies. Examples are AWS
CodeDeploy, which supports canary deployments, and Cloud Foundry CLI
with the blue-green deployment plugin. Using specific deployment tooling
has a lot of benefits, but sometimes it is not possible to use a tool. There
can be a technical or financial constraint that “forces” teams to implement
the deployment strategy in the pipeline itself.
Fortunately, some platforms have features that help implement
deployment strategies in the pipeline. One of these features is the canary
deployment construct shown in Listing 5-15.
jobs:
- deployment:
environment: production
pool:
name: myAgentPool
strategy:
canary:
increments: [10]
preDeploy:
steps:
- script: "Performing initialization"
deploy:
steps:
- script: echo "Deploying…"
routeTraffic:
steps:
- script: echo "Route traffic to updated version"
on:
failure:
steps:
- script: echo "Deployment failed"
success:
steps:
- script: echo "Deployment succeeded"
Auto-cancel
If a pipeline contains a task to sign off a manual test result and this pipeline
is executed multiple times, multiple orphaned pipeline instances pile up
and wait for a manual sign-off. The previous chapter proposes various
solutions. One of them is to use the “auto-cancel” option. With an auto-
cancel construct, all already running instances of the same pipeline are
canceled if a new pipeline instance is started. The new instance always
includes the latest code changes. This means there are no dangling
pipelines anymore.
auto_cancel:
running:
when: "true"
There are similar constructs that almost do the same, but not quite.
Azure DevOps has a “batch” feature. Enabling the “batch” option does not
start any new instance of the pipeline if there is still a running instance.
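A sketch of the batch option in an Azure DevOps-style trigger declaration
(the branch name is just an example):

trigger:
  batch: true
  branches:
    include:
    - main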
On Success/Failure
Just as in regular programming languages, there is a need to add a
try/catch/finally construct in a pipeline. They come in various
flavors. Sometimes—in scripted pipelines—they are just implemented
as try/catch/finally blocks. In declarative pipelines, you see
implementations like a post section, which includes blocks that can be
executed conditionally.
post {
success {
echo 'Stage: Notify actors - success'
}
failure {
echo 'Stage: Notify actors - failure'
}
}
on:
success:
- script: "Notify actors - success"
failure:
- script: "Notify actors - failure"
Fail Fast
One of the key elements in CI/CD is to fail fast and return immediate
feedback. This concept is implemented differently on each platform, and
there is no generic construct that has been adopted by multiple platforms.
A fail fast means that if a stage, job, or task fails, the whole pipeline stops
immediately. The example in Listing 5-19 stops all jobs in the pipeline in
the case of an error.
fail_fast:
stop:
when: "true"
Priority
It was already mentioned earlier, but prioritizing pipelines is a must-have
feature. In addition, it should be possible to define this prioritization on
different levels. A pipeline run solving a production incident should have
priority over previous nonurgent pipeline runs. In addition, priorities
Test Shards
Some platforms, like CircleCI, have the option to "split" one task
and divide the work. The execution of one task is instantiated several
times, and the work is distributed over multiple compute nodes. This is
very efficient when performing tests. Assume that a regression test
contains the execution of a hundred individual tests. A normal task run
executes these hundred tests sequentially, but the workload can also be
spread over multiple instances of that task, for example, five instances
of the same task, each executing 20 tests in parallel. Note that this puts
a requirement on the test set: it must be possible to group tests and run
them independently. Such a group of tests is called a test shard, and the
process to create the shards is called test splitting.
Creating test shards is possible in several ways. A simple algorithm just
takes the hundred test cases and distributes them equally over five shards.
The problem, however, is that you could end up with a shard containing
only tests with a long test duration. A better approach is to divide the
tests based on other characteristics. An optimized approach is to spread
the test set over the five shards based on timing data, which is historic
data from previous test runs. After several runs, the ALM/integration
platform has enough information to divide the tests evenly over the task
instances based on their duration.
Figure 5-4 contains three instances of the same test task. The total work
of Test_task_1.2.1 is spread over the three task instances, each executing
10 tests.
[Figure 5-4: a pipeline with Stage_1.1 (Task_1.1.1) and Stage_1.2, in which the test work is spread over three task instances that execute tests 1..10, 11..20, and 21..30, respectively.]
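A sketch of test splitting, assuming a CircleCI-style configuration and
hypothetical test file names; the timing data used by the split command is
collected from previous runs:

jobs:
  test:
    docker:
      - image: cimg/python:3.11
    parallelism: 3   # three shards, as in Figure 5-4
    steps:
      - checkout
      - run:
          name: Run sharded tests
          command: |
            # Distribute the test files over the shards based on timing data
            TESTFILES=$(circleci tests glob "tests/**/test_*.py" | circleci tests split --split-by=timings)
            python -m pytest $TESTFILES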
pipeline.yml:

extends:
- template: templates/extend.yml
Workflow
Various platforms support pipeline declarations in which the functionality
of the tasks and the workflow are intertwined. This makes it harder to
distinguish functionality from workflow and to understand the flow of the
pipeline. A good alternative is to separate the functionality from the
workflow. The workflow becomes an isolated section of the pipeline
declaration, which improves readability.
Listing 5-21 declares the workflow in a separate section of the pipeline.
It does not include all the details of the jobs, but only their mutual
relation and execution order. The unit_test and acceptance_test jobs are
executed only after the build job has finished. If both test jobs are
completed, the deploy job kicks in.
- deploy:
requires:
- unit_test
- acceptance_test
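Because the listing shows only the last job of the workflow, a complete
CircleCI-style sketch of such a workflow section, matching the description
above, might look like this:

workflows:
  build_test_deploy:
    jobs:
      - build
      - unit_test:
          requires:
            - build
      - acceptance_test:
          requires:
            - build
      - deploy:
          requires:
            - unit_test
            - acceptance_test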
1 Open Policy Agent gets more attention lately and fits nicely into the security-as-code domain (see [32]).
[Figure: repository setup with an IT4IT repo containing the base pipeline code and generic pipeline code (templates) for the whole organization, and a repo per microservice (microservice-1 through microservice-15), each containing the application code, infrastructure code, pipeline code, and generic pipeline code (template) of that microservice.]
[Figure: CI/CD pipeline, proxy layer, and an unauthorized source.]
[Figure: CI/CD pipeline and an unauthorized source.]
[Figure: CI/CD pipeline with a remote cache and an unauthorized source.]
A team uses Jira as their issue tracker system and Git as a source
control management system. A Git commit represents one Jira ticket,
which has to be provided in the commit message. The team uses
the Feature branch workflow. Jenkins is used to build and deploy the
artifact—an AWS Lambda app—to an AWS account. Artifacts are
stored in Sonatype Nexus.
Given this setup, the following are possible actions to be taken in the
pipeline (a sketch of the first few follows the list):
• Tag the Git commit with the release version (for
example, git tag -a v2.3.1 9fceb02).
• The Git commit message contains a reference to the
Jira ticket if the commit is pushed to the repository.
• Add a label to the Jira ticket with the release version. A
Jira REST API is used to create this label.
• Add the release version to a Jenkins build by setting the
release version in the job display name.
• Add the release version to the artifact filename in
Nexus. Tagging is not needed if the artifact name
already contains the version, but it is possible to add a
tag with the Nexus Pro version.
• Tag the AWS Lambda or the AWS Stack with the release
version.
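As a sketch only (shown here as a generic YAML pipeline step rather than
the Jenkins job from the case), the first three actions could be scripted
as follows; the version, commit hash, ticket key, URL, and credentials are
placeholders, and the Jira call assumes the standard issue-update REST
endpoint:

steps:
- script: |
    # Tag the commit with the release version (values are examples)
    git tag -a v2.3.1 -m "Release v2.3.1" 9fceb02
    git push origin v2.3.1

    # Add the release version as a label to the Jira ticket
    curl -u "$JIRA_USER:$JIRA_TOKEN" \
      -X PUT \
      -H "Content-Type: application/json" \
      -d '{"update": {"labels": [{"add": "v2.3.1"}]}}' \
      "https://mycompany.atlassian.net/rest/api/2/issue/MYAPP-123"
  displayName: Tag commit and label Jira ticket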
Environment Repository
A well-developed application does not contain any environment-specific
properties. The artifact must be built once but must be able to run
anywhere. Environment-specific properties must be added during deployment,
for example, by enriching placeholders in a property file with the correct
data. Data such as database credentials or HTTP endpoints are stored in an
environment repository, and as soon as a deployment starts, the
placeholders in the property file are replaced with the database
credentials and HTTP endpoints associated with the target environment to
which the application is deployed.
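A minimal sketch of such an enrichment step; the placeholders are assumed
to follow the ${NAME} convention, and the variable and file names are just
examples:

steps:
- script: |
    # DB_USER and HTTP_ENDPOINT are assumed to be provided by the
    # environment repository for the chosen target environment
    export DB_USER="$(db_user)"
    export HTTP_ENDPOINT="$(http_endpoint)"
    envsubst < application.properties.template > application.properties
  displayName: Replace placeholders with environment-specific values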
There are different types of environment repositories. The type of
environment repository to use also depends on the security classification
of a certain property. Database credentials have a higher risk rating than
an HTTP endpoint, so database credentials should be stored in a more
secure environment repository. Here are some examples:
• Variable in the pipeline: The simplest solution is to
just define properties as (conditional) variables in the
pipeline code itself. During deployment, the target
environment is determined, and a specific set of
variables is used. This solution is easy to implement.
Secrets Management
As mentioned in Chapter 3, secrets—passwords, tokens, keys,
credentials—used by an application preferably must be stored in a vault.
This can be Azure Key Vault, AWS Key Management Services, AWS Secrets
Manager, HashiCorp Vault, or a Hardware Security Module (HSM).
Important to consider is where the secret is created and how it can be
used by the application. Is the source location of the secret the same as
the target location? Or in other words, is the secret created in the location
where it is also used by the application, or is it created somewhere else
and must it be transferred to another destination so the application can
use it? This also raises the question of whether the source and target
locations both meet the secret's security classification and whether the
transport from the source to the target location is secure enough. Cases
exist in which vaults are not used for whatever reason, or the secret
cannot be created in the vault itself and has to be transferred manually
from the source location. Different situations are possible. Let's go
through some options, in order of most secure to least secure:

2 Unclear, however, is whether these encryption/decryption keys are specific to one tenant or whether they are used across tenants.
1. The safest solution is that the target platform in
which the application runs also manages the
secret. The target platform creates the secret in
a vault, and the vault maintains its life cycle (see
Figure 5-12). No pipeline is involved. This is a safe
way to deal with secrets because the secret is not
exposed and may even never leave the vault. Key
rotation is managed by the vault by which the key is
automatically renewed.
[Figure 5-12: the vault creates the secret, and the app uses it directly from the vault.]
Figure 5-13. The pipeline triggers the creation of a secret in the vault
[Figure: the CD pipeline retrieves the secret from the source vault and inserts it into the target vault; the app uses the secret from the target vault.]
[Figure: the secret is created at the source location and stored securely; an ops engineer hands it over to the CD pipeline, which inserts it into the target vault for the app to use.]
Figure 5-16. Manual transfer from source and "injecting" the secret in the artifact
Database Credentials
The secrets management cases in the previous section are a bit abstract,
and a little more clarification seems in order. Consider the credentials of
a database. In Figure 5-17, the database is situated in a highly managed
infrastructure, such as a cloud environment. The pipeline calls an API of
the vault, which acts as an identity provider of the database and creates
the database secret (credentials). Because the app has a trusted
relationship with the vault, it is allowed to use the database secret to
access the database. The vault is responsible for the rotation of the
database secret.
The responsibility of the pipeline is limited. After the initial trigger to
create the database secret, the system, consisting of the vault, the app,
and the database, manages and uses the database secret. This is a secure
solution because the secret in the vault is accessible only by a trusted
party: the app. This trust is based on security policies and other
infrastructure measures.
[Figure 5-17: the CD pipeline triggers the vault to create the database credentials; the app, which has a trusted relation with the vault, retrieves the database secret and accesses the database.]
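A sketch of the initial trigger, assuming AWS Secrets Manager is used as
the vault and reusing the AWSShellScript task from earlier listings; the
secret name, rotation Lambda, and rotation interval are placeholders:

steps:
- task: AWSShellScript@1
  displayName: Trigger creation of the database secret
  inputs:
    awsCredentials: $(aws_connection)
    regionName: $(aws_region)
    scriptType: inline
    inlineScript: |
      # One-off trigger: create the secret in the vault and enable rotation,
      # after which the vault manages the secret's life cycle
      aws secretsmanager create-secret \
        --name myapp/database \
        --secret-string "$INITIAL_CREDENTIALS"
      aws secretsmanager rotate-secret \
        --secret-id myapp/database \
        --rotation-lambda-arn "$ROTATION_LAMBDA_ARN" \
        --rotation-rules AutomaticallyAfterDays=30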
[Figure: an ops engineer stores the secret securely; the CD pipeline injects it while deploying the app.]
Feature Management
Most developers know what a feature flag is. A basic feature flag is an if
statement that determines whether a function in the code is executed
or not. More complex feature flags make it possible to disclose a certain
function only for a specific user group and/or target environment. This
is also the power of using feature flags; functions that were previously
hidden because they were in an experimental state, for example, can
be enabled with the click of a mouse. This makes feature management
[Figure: feature flags are not environment-specific; they are evaluated at runtime in the test and production environments reached through CI/CD.]
Figure 5-20 shows how feature flags can be enabled and disabled—also
for a particular user group and/or environment—in GitLab.
Listing 5-20 contains the if statement with a feature flag called add-
additional-costs. It makes use of the feature management system
Unleash (see [24] for more information).
if (unleash.isEnabled("add-additional-costs")) {
// Additional costs are calculated and added to the booking
} else {
// The booking is processed without additional costs
}
A valid use case is a centrally hosted integration platform, managed by a
specific organizational unit. The platform is shared with multiple DevOps
teams. The integration platform code can also be developed (once) by a
specific team, while each DevOps team makes use of it and manages the
hosting.
It makes sense that a specific IT4IT team develops such a base pipeline.
DevOps teams make use of the base pipeline and configure it according to
their needs.
A specific IT4IT team develops these templates/libraries. DevOps teams
make use of the generic template/library in their pipelines.
There are plenty of code analysis tools to integrate into a pipeline and
analyze the applications' code, but tools that analyze the pipeline code
itself are rather scarce. A specific IT4IT team is required to develop
this kind of tooling. This is usually not something a DevOps team itself
does, because that would be a bit like a fox guarding the henhouse.
This is specific to the DevOps teams themselves, so no central team is
involved; the responsibility lies within the DevOps team.
3 Sometimes some form of testing may be possible using a dry-run flag (like mvn release:prepare -DdryRun=true), but it is still a hacky way of testing the pipeline code.
[Figures: the simplified, extended, and advanced pipeline development setups, showing the DevOps team, the application code and pipeline code, local unit tests, the CI and CD pipelines, a pipeline test environment, and the resulting application and pipeline artifacts.]
mandatory tasks that should not be overwritten. The DevOps team extends
its pipeline from the base pipeline and configures it to its needs, so it can
be used to build, deploy, and test the application.
Creating the base pipeline requires specific knowledge, but
centralizing the development can save a lot of time and money in the end.
Creating a base pipeline also allows enforcement of certain policies or
security restrictions, which become automatically part of the extended
base pipeline.
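A sketch of how a team pipeline could extend such a base pipeline,
assuming Azure DevOps-style templates and hypothetical repository and
parameter names:

# pipeline.yml in the DevOps team's repository
resources:
  repositories:
  - repository: it4it-templates          # central repository of the IT4IT team
    type: git
    name: IT4IT/pipeline-templates

extends:
  template: base-pipeline.yml@it4it-templates
  parameters:
    buildTool: maven
    deployTarget: test

Because the team only supplies parameters, the mandatory stages and
policies defined in the base pipeline cannot easily be bypassed.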
In Figure 5-24, the base pipeline is tested by the IT4IT team—also
making use of a pipeline of pipelines—and the resulting base pipeline
artifact is centrally stored and can be used by the DevOps teams. Of course,
after extending and reconfiguring the base pipeline, the DevOps team
can also perform (unit) tests of their pipeline to make sure that it works as
expected.
[Figure 5-24: the IT4IT team develops and unit tests the base pipeline code, runs it through its own CI/CD pipeline (a pipeline of pipelines) in a pipeline test environment, and publishes the base pipeline artifact; the DevOps team adds its pipeline configuration and uses the resulting application pipeline to build the application artifact from the application code.]
Pipeline Generation
Extending a base pipeline has its limitations. What if the team needs
a completely different pipeline or deviates from the base pipeline so
much that using it is not justified? Instead of creating a base pipeline, the
pipeline used by the DevOps team can also be generated using a pipeline
generator. The feature richness of such a pipeline generator varies from
creating code snippets, which need to be assembled by the DevOps team,
to the generation of a complete customized pipeline that undergoes the
stages also used in regular application-oriented pipelines. The input of
a pipeline generator is a repository managed by a DevOps team. The
pipeline generator scans this repository, detects the configuration,
and starts the creation of artifacts (pipeline code and testware).
Figure 5-25 depicts a setup with a pipeline generator.
These kinds of tools, however, are scarcely available, and if there are
any commercial tools out there, they are not well-known. Until then, it
looks like organizations have to develop these tools themselves. Usually,
this is a task of an IT4IT team dedicated to this job, but to prevent the “not
invented here” syndrome, a cooperation model with DevOps teams is
4 Unit tests and integration tests of pipelines can be combined.
Summary
You learned about the following topics in this chapter:
• Extended development
• Advanced development
• Pipeline generation
• The concept of pipeline of pipelines was explained.
CHAPTER 6
Testing Pipelines
This chapter covers the following:
Testing Pipelines
Pipelines and testing can be highlighted from different viewpoints. Most
books and articles describe how pipelines are used to test an application,
which test frameworks are used, and how everything integrates into the
pipeline. Chapter 4 highlights the importance of a test strategy and how
this reflects on the pipeline design.
Testability of Pipelines
Pipelines are defined as code. Code can be tested. Most declarative
pipeline code (with some exceptions) consists of YAML files or scripts.
Testing them is a challenge. Teams often test the pipelines using trial and
error, sometimes screwing things up because a wrong version of an app
was deployed by accident. In some cases, code from a feature branch was
accidentally tagged with a release version tag, and because of the trial-and-
error nature of developing and testing pipelines, the number of commits
is very high. The once well-organized overview with regular application
pipeline runs is cluttered with a zillion test runs. Testing pipelines is hard
because teams also don’t have the tools to test properly.
Just as with testing applications, pipeline code must be tested in a
test environment. The pipeline test environment must differ from the
environment in which the business application is built, tested, and
deployed; from a pipeline point of view, the environment used to build,
test, and deploy the business application is the production environment.
The pipeline test environment is either a separate ALM platform or
integration server infrastructure, or an infrastructure in which the
separation between the regular pipeline environment and the pipeline test
environment is established in another way. What is important is that the
pipeline must be able to run in a test/sandbox environment, without its
destructive character. It must also be possible to test specific
characteristics of the pipeline. This means the following:
• Checking the configuration of the pipeline and its
components to ensure that they are set up properly and
functioning as expected. This can include things like
• Performance tests
• Acceptance tests
Let’s discuss them in the next few sections and point out how this can
be done.
Unit Tests
Let's face it: test frameworks for pipelines are almost nonexistent or at
least very scarce. Even with the SaaS platforms of big-tech companies,
where you might expect some information or support concerning pipeline
testing, the platforms are mature but testing pipelines is not given much
TLC. Local testing of pipelines within an IDE is very much desired but
often not supported. Mocking a task, so it is not really executed, is a
simple feature, but which provider supports it? Sometimes the only thing
left is to develop something yourself.
As an example, unit testing an Azure DevOps pipeline is explained in
this section. This is a real example using a relatively simple unit test
framework.1
1 The code of this framework is published on the GitHub page of the author; however, it is still experimental at this stage.
[Figure: the app and its pipelines are cloned to a test project.]
With this picture in mind, consider the following steps. The application
code is located in a Git repository in the original Azure DevOps project.
This code is cloned to another repository in an Azure DevOps test
project. This test repository is checked out (manually) and resides on the
workstation of the developer.
The developer starts developing a pipeline, as listed in Listing 6-1. This
is the YAML file with the name pipeline.yml. For readability reasons,
various stages are omitted from this pipeline.
name: $(Date:yyyyMMdd)$(Rev:.r)
parameters:
- name: environment
type: string
default: acctest
values:
- dev
- systest
- acctest
- prod
variables:
- name: aws_connection
value: 486439332092
- name: aws_region
value: us-east-1
stages:
- stage: Execute_build
displayName: Execute build
condition: always()
jobs:
- stage: Analyze_code
displayName: Analyze code
condition: eq(variables['Build.SourceBranchName'], 'main')
jobs:
- job: Tasks
pool: Default
steps:
- script: |
pip install whispers
whispers ./
- stage: Deploy_artifact_to_test
displayName: Deploy artifact to test
condition: eq(variables['Build.SourceBranchName'], 'main')
jobs:
- deployment: Deploy
pool: Default
environment: ${{ parameters.environment }}
strategy:
runOnce:
deploy:
steps:
- task: AWSShellScript@1
inputs:
awsCredentials: $(aws_connection)
regionName: $(aws_region)
scriptType: inline
inlineScript: |
#!/bin/bash
set -ex
export artifact=`find $(Pipeline.Workspace)/. -name 'cdk*.jar'`
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import java.io.IOException;
@BeforeAll
public static void setUpClass() {
System.out.println("setUpClass");
@Test
public void test1() {
// Validate the pipeline flow in case the current branch is a
// feature branch (instead of the main branch)
System.out.println("\nPerform unit test: test.test1");
pipeline.overrideCurrentBranch("myFeature");
try {
pipeline.startPipeline();
}
catch (IOException e) {
e.printStackTrace();
}
assertEquals(RunResult.succeeded, pipeline.getRunResult());
}
@Test
public void test2() {
// Test the build and deploy stages:
// - Use a different AWS account (Ohio based) for deployment
// - Use a different environment (dev instead of acctest) for deployment
// - Skip the 'Analyze code' stage; only the deployment needs to be tested
System.out.println("\nPerform unit test: test.test2");
pipeline.overrideVariable("aws_connection", "
497562947267");
pipeline.overrideVariable("aws_region", "us-east-2");
pipeline.overrideDefaultParameter("environment", "dev");
pipeline.skipStage("Analyze_code");
try {
pipeline.startPipeline();
}
catch (IOException e) {
e.printStackTrace();
}
assertEquals(RunResult.succeeded, pipeline.getRunResult());
}
@AfterAll
public static void tearDown() {
System.out.println("\ntearDown");
}
}
Unit test number 1 (test1) mimics the current branch. What happens
in test1 is that the current branch is replaced with myFeature, so the
pipeline behaves as if it resides in the branch myFeature, even if it resides
in another branch.
The pipeline code in unit test number 2 (test2) is changed by the unit
test framework in such a way that deployment of the application artifact
to AWS does not impact the current application in AWS. In test2 the AWS
account variables are replaced by other values, and the Analyze code stage
is set to skip. This results in a unit test that is performed in a different AWS
account, with account ID 497562947267. The application is even deployed
in a different region (us-east-2; Ohio) and a different virtual environment
(dev). To speed up the test, the Analyze code stage is skipped.
The pipeline, manipulated as part of JUnit test2, results in the code in
Listing 6-3.
parameters:
- name: environment
type: string
default: dev
values:
- dev
- systest
- acctest
- prod
variables:
- name: aws_connection
value: 497562947267
- name: aws_region
value: us-east-2
stages:
- stage: Execute_build
displayName: Execute build
condition: always()
jobs:
- job: Tasks
pool: Default
steps:
- script: echo 'Execute build'
- task: Maven@3
displayName: Maven Package
inputs:
mavenPomFile: pom.xml
condition: always()
- task: CopyFiles@2
displayName: Copy Files to artifact staging directory
inputs:
SourceFolder: $(System.DefaultWorkingDirectory)
Contents: '**/target/*.?(war|jar)'
TargetFolder: $(Build.ArtifactStagingDirectory)
- upload: $(Build.ArtifactStagingDirectory)
artifact: drop
- stage: Analyze_code
displayName: Analyze code
condition: eq(true, false)
jobs:
- job: Tasks
pool: Default
steps:
- script: |
pip install whispers
whispers ./
- stage: Deploy_artifact_to_test
displayName: Deploy artifact to test
condition: eq(variables['Build.SourceBranchName'], 'main')
jobs:
- deployment: Deploy
pool: Default
environment: ${{ parameters.environment }}
strategy:
runOnce:
deploy:
steps:
- task: AWSShellScript@1
inputs:
awsCredentials: $(aws_connection)
regionName: $(aws_region)
scriptType: inline
inlineScript: |
#!/bin/bash
set -ex
export artifact=`find $(Pipeline.Workspace)/. -name 'cdk*.jar'`
Performance Tests
Performance testing in this context does not refer to the performance
tests of the application, but to performance tests of the pipeline itself.
Keep in mind that fast feedback is of utmost importance: the processing
time of the pipeline must be as short as possible. Your pipeline may be
affected by various types of performance penalties.
• The execution time of the pipeline takes too long. One
underlying problem could be that compute and/or
storage capacity is insufficient. This can be solved by
scaling up the infrastructure.
• Another reason why the execution time of a pipeline
takes too long is that the design is not optimized for
speed. The solution can be found in revising the build
strategy and/or redesigning parts of the pipeline.
2 Queuing time was not measured, so this was not taken into account.
3 This application has a relatively large codebase, so the effect of parallelization becomes apparent.
Again, the total pipeline execution time has been brought back
to normal proportions, and the pipeline fully executes well within 10
minutes. Of course, this is just one of the measures to increase pipeline
performance. Experience showed that after applying a combination of
measures such as pipeline caching, parallelization, and multithreaded
builds, pipeline execution time could be reduced by 75 percent.
Acceptance Tests
Whether the development team uses simplified pipeline development
or advanced pipeline development, at some point the pipeline must be
accepted for usage.
Validating the quality of the pipeline in simplified pipeline
development poses risks because the pipeline is not thoroughly tested.
Acceptance tests do not play an explicit role in simplified pipeline
development. Accepting the quality of the pipeline is implicit. It is a
process of changing the pipeline, pushing it to a repo, and watching its
behavior. If it does not work properly, this step is repeated. Accepting the
pipeline is nothing more than continuously implementing the adjusted
pipeline and seeing it working in its normal environment until the
expectations are met.
An acceptance test in advanced pipeline development involves the
execution of all the stages in the assembly line. This includes a Perform test
stage in which the pipeline is executed in a pipeline test environment. If
all stages in the assembly line are passed, the quality of the pipeline can be
considered sufficient, and the pipeline can be implemented (used).
Summary
You learned about the following topics in this chapter:
CHAPTER 7
Pipeline
Implementation
This chapter covers the following:
Pipeline Implementation
The implementation of a pipeline itself is a bit odd. If the implementation
of a pipeline is compared with the implementation of an application, the
pipeline needs to be configured for and deployed to a target environment.
But what is the target environment in the case of pipelines, and can
we speak of the deployment of a pipeline? In the pipeline of pipelines
discussion, the conclusion was that deploying a pipeline to production is
nothing more than pushing the pipeline code to the remote repository and
merging it with the mainline. Figure 7-1 illustrates this behavior.
[Figure 7-1: the pipeline code and application code are pushed to the trunk; the pipeline regular (production) environment runs the pipeline, which deploys the application to the runtime test and runtime production environments.]
Organizational Impact
A pipeline is developed according to requirements and guidelines and
properly tested before it can be used. This means the functional behavior
is according to the specifications, the performance of the pipeline is tested
and meets the criteria, security measures are in place, and the pipeline
meets the compliance specifications of the organization. Because the
pipeline is used by the DevOps team, all team members must be confident
Team Discipline
Even if the team is enthusiastic about automation and working with
pipelines, it still happens that certain things are a bit neglected. Pipeline
implementation also means that the team must be disciplined in certain
areas. Some persistent problems are the following:
• Breaking builds: One of the principles of continuous
integration is that broken builds must be repaired
immediately. Developers are expected to drop what they are
working on and fix the broken pipeline. This is a bit of
wishful thinking; developers often don't react immediately
to this event. That doesn't have to be a problem as long as
it doesn't cause an artifact to be released too late, but
leaving the pipeline broken for one or two days is not a
recommended practice either. One obvious reason why a
pipeline can break is that the committed code is incorrect
and cannot be built. Another reason is that the world
around the pipelines is in flux: external systems can be
down, updated, or no longer accessible; vulnerability
checks are tightened; credentials or certificates expire;
or the ALM/integration platform itself suffers from
technical problems. Teams must repair these broken
pipelines; otherwise, the effort to repair them only
increases as time goes on.
• Disabled quality gates: Good practice is that if the code
analysis detects severe or high-ranked vulnerabilities in
the code, the pipeline “breaks” because the quality gate
kicks in. Some pipelines do not have this quality gate
activated, either by accident or on purpose. The latter is
probably because of the following issue.
Depending on the team and its maturity, there are more persistent
problems. Some teams still manage to bypass the pipeline and deploy to
production in another way, or they perform continuous integration of a
develop branch in the pipeline, while still creating a release artifact from
the main branch on their local development machine. If the pressure is on,
pull requests are approved without looking at the code. This is all part of
growing up, but these problems must be addressed.
Integration Platform
Depending on the type of integration platform used, the responsibilities
of setting up and managing the infrastructure differ. In this context,
Playbook
What is the business impact if an incident or a problem with a pipeline
occurs? A failure of a pipeline may lead to damage. For example, an urgent
application fix is created and needs to be deployed. However, the pipeline
does not work because of an infrastructure failure of the integration
platform. This could damage the continuity of a business process if
318
Chapter 7 Pipeline Implementation
the pipeline is unavailable for a long time. ITIL processes also apply
to pipelines. Playbooks can play a useful role in incident and problem
management processes.
A playbook contains documented investigation methods to detect and
resolve problems. They are useful for investigating incidents or failures.
Playbooks can also be used for pipelines. Drafting pipeline playbooks
can already be started during pipeline testing. Common pipeline failures
and solutions are added to the playbook. Of course, playbooks are never
complete, and after implementation and usage of the pipelines, more
cases will occur. These cases are also added to the playbook.
Application Implementation
It is hard to speak about pipeline implementation without mentioning
application implementation. Application implementation is, after all,
the goal of using a pipeline in the first place. Adding certain features to a
pipeline can contribute to a solid application implementation experience.
Consider using or implementing these features.
Runbook
“A runbook is a set of processes and procedures that you
execute repetitively to support various enterprise tasks.”
Reference [33]
• There are still one-off tasks or activities that are not part
of CI/CD. The start of CI is a commit to a repository.
The end of CD is the deployment of an artifact to
a production environment. Plenty of tasks fall into
the processes before and after CI/CD. Think about
requesting an Azure subscription, configuring the
IAM roles, and assigning team members. In addition,
regular maintenance or migration involves activities
that are also not part of a CI/CD pipeline. Sometimes
these activities are complex and require a detailed
runbook.
• Another reason to use a runbook is the first-time
implementation of a complete system. You don’t have
CI/CD arranged on day one. The implementation
of a new system in production maybe requires the
execution of several pipelines in a specific order;
even in the case of a microservice architecture, some
pipelines need to run in a specific order. Think about
setting up the base infrastructure components used by
all microservices.
Release Note
A release note is a change log, describing the updates of the software. It
may also include proof that all new features are tested and accepted. So, a
[Figure: each release candidate has its own release note, and the metadata accumulates per candidate (metadata 1, metadata 1 + 2); after the final release is deployed to production, the metadata is reset.]
[Figure: the CI/CD pipeline stores the artifact in the artifact repository and the release note metadata in pipeline storage during the Publish artifact and Perform test stages; after a (successful) production deployment event, the Notify actors stage informs the user via the e-mail server, publishes the release note to a wiki, and resets the release note metadata.]
Artifact Promotion
The result of the build, package, and publish stages is an artifact stored
in a binary repository. This artifact is a release candidate, meaning that it
potentially can be deployed to production. But first, it has to run through
various test cycles, so anything can happen along the way. During the test
process, the artifact moves near production, but only a successfully tested
artifact is allowed to be deployed to production. Release candidates that
get stranded somewhere in the test process should be flagged because
potentially there is a risk that the wrong release is deployed to production.
The problem is that all release candidates, both the ones that failed the tests
and the ones that passed the tests, are kept in the same binary repository.
It must be possible to make a distinction between failed release candidates
and successful releases. To make sure that release candidates that failed
during testing are prevented from being deployed to production, a quality
gate can be added; this is an additional check to determine that the artifact
is valid. This check can be implemented in the Validate exit criteria stage.
But based on what information does this quality gate work? There are a
couple of options to prevent the wrong release from being deployed.
• The artifact is promoted from stage to stage. One type
of implementation is that the artifact moves through
different binary repositories. So after integration
testing, acceptance testing, and performance testing,
the artifact is moved from one repository to the next.
The last repository contains the production-ready
releases, so that is the repository used in the Deploy
artifact to the production stage. An extra condition/
quality gate is not even needed because the correct
repository is already used. A big disadvantage of this
solution is that multiple repositories are required and
the artifact is moved several times.
• Another option is to manually promote an artifact. This
feature is offered by some ALM platforms. The problem with
this option is that it is a manual action: a user must
actively change the status of an artifact from prerelease
to release, for example. The dual control stage is already
a manual action, so what is the point of adding more of
them? To be honest, manual artifact promotion is something
to avoid.
• Instead of dragging the same artifact through different
repositories, there are also options to keep all artifacts
in the same repository and provide metadata. After specific
stages and tasks are finished and testing was successful,
the metadata of the artifact is updated (using curl or
Maven, for example), as sketched after this list. Based on
its metadata, the status of the artifact is clear.
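A sketch of such a metadata update, assuming an Artifactory-style
properties API; the repository URL, path, and property name are
placeholders:

steps:
- script: |
    # Mark the release candidate as having passed the tests
    curl -u "$REPO_USER:$REPO_TOKEN" \
      -X PUT \
      "https://repo.mycompany.com/artifactory/api/storage/releases-local/myapp/1.0/myapp-1.0.jar?properties=testStatus=passed"
  displayName: Update artifact metadata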
[Figure: two versions of unittest-1.0-metadata.xml, one in which the release failed the tests and one in which the tests completed successfully.]
Summary
You learned about the following topics in this chapter:
• IaaS solution
• Self-hosting solution
CHAPTER 8
Operational Pipelines
Pipelines are often explained in the context of building, testing, and
deploying an application, but there are plenty of other areas in which
pipelines also play a role. They are not necessarily CI/CD pipelines, but
just pipelines used for different purposes. One area in which the use
of pipelines is beneficial is in performing operational tasks associated
with maintaining an application. Various activities are needed to keep
the application running. These tasks should be automated as much as
possible. Manual operational tasks should be discouraged for several
reasons. Automating tasks speeds up operational activities and makes
them repeatable, which results in more predictable results. In addition, an
automated task is more secure because nobody touches the production
environment with their hands. Here are some examples of operational
pipelines:
Monitor
There is not much difference between monitoring an application running
in a target environment and monitoring the integration platform and
its pipelines. In both cases, similar characteristics are monitored. Is the
infrastructure healthy? Does the application or pipeline perform well, or has a security issue been detected? In addition, you may want to monitor certain business key performance indicators (KPIs), such as the success rate of pipeline runs, or how long it takes between a work item being picked up and the actual deployment of the feature associated with that work item. In general, monitoring falls into a few categories.
• Systems monitoring: The infrastructure of the
underlying ALM/integration platform is monitored.
This is a type of technical monitoring that covers CPU,
disk and memory usage, network congestion, etc.
• Platform monitoring: This can be considered an
extension of systems monitoring. It covers monitoring
the middleware layer of the ALM/integration platform,
including the pipeline performance, health, and
queuing status.
• Business monitoring: This covers monitoring KPIs and
relates to metrics of the CI/CD process. It monitors
the functional and process behavior of the platform
and the pipelines. Monitoring KPIs is very specific to a
team’s needs.
• Security and compliance monitoring: This has the
responsibility of monitoring all security-related aspects
of the platform and pipelines. Pipeline compliance
monitoring is part of this.
Several websites suggest the top four, six, or ten metrics you should monitor. Such lists are arbitrary and should be taken with a grain of salt. In general, you should always monitor aspects of all categories: the technical health of the system, the performance of the system and pipelines, and, for organizations with tight security requirements, vulnerabilities and other security-related aspects of the system. Concerning KPIs, it is up to the team to decide what is important for them, so no recommendation is given here.
Systems Monitoring
If the team or organization manages its integration infrastructure, systems
monitoring must also be organized. Systems monitoring is used to validate
whether the infrastructure is still healthy, but also to determine whether pipelines still run properly and fast. Bottlenecks in the
infrastructure have an immediate effect on pipeline execution.
Systems monitoring is arranged around various system metrics, such
as the following:
• CPU usage
• Memory usage
• Disk usage
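For a self-hosted build node, these basic metrics can be sampled with standard tooling. The following Python sketch is only an illustration (it is not part of the book's setup) and uses the psutil library; the thresholds are arbitrary example values.

# Minimal sketch: sample the basic system metrics mentioned above on a
# self-hosted build node and warn when a threshold is exceeded. psutil is a
# third-party library (pip install psutil); the thresholds are example values.
import psutil

THRESHOLDS = {"cpu": 90.0, "memory": 85.0, "disk": 80.0}   # percent, examples

def sample_metrics() -> dict:
    return {
        "cpu": psutil.cpu_percent(interval=1),       # percent, sampled over 1 second
        "memory": psutil.virtual_memory().percent,   # percent of RAM in use
        "disk": psutil.disk_usage("/").percent,      # percent of the root volume in use
    }

if __name__ == "__main__":
    metrics = sample_metrics()
    for name, value in metrics.items():
        status = "WARN" if value >= THRESHOLDS[name] else "ok"
        print(f"{name:<7} {value:5.1f}%  [{status}]")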
As soon as all the pipelines are triggered at the same time, CPU usage
increases but stays between 90 percent and 95 percent. Memory usage
increases only slightly and stays between 36 percent and 37 percent. CPU
usage is a bit on the high side, but the system is still perfectly able to run
the pipelines. Table 8-1 shows the results.
[Table 8-1: start time, execution time, and total time until completed (= start time + execution time) for each pipeline with two executors.]
What stands out is that if all pipelines are triggered at the same time,
not all pipelines start immediately, and the last pipeline (View Payments)
is finished only after 25 minutes and 54 seconds. This means the developer
receives feedback about the pipeline execution more than 25 minutes after the code was committed and pushed. This is not a surprise
because the number of executors is set to two, meaning that only two
pipelines are executed at the same time. The other pipelines become
pending until one of the executors is available again. This is problematic if
the commit rate is high because each commit triggers a pipeline.
No problem, you would say. Just increase the number of executors to,
let’s say, four. This changes the results slightly, as shown in Table 8-2.
[Table 8-2: start time, execution time, and total time until completed (= start time + execution time) for each pipeline with four executors.]
The start time of the last pipeline (View Payments) is reduced from 16 minutes and 28 seconds to 9 minutes and 19 seconds. That is an improvement, but the overall execution time of most individual pipelines is longer.
Increasing the number of executors smooths out the Total time until
completed, but it comes with a price. The overall execution time of the
pipelines in concurrent runs increases. This is even more dramatic if the
number of executors is increased to six and all pipelines are started at the
same time. See Table 8-3.
¹ Note that View Payments is even faster with four executors than with two. This can be explained by the fact that when this pipeline starts, most other pipelines are already finished, so it has more CPU resources at its disposal.
[Table 8-3: start time, execution time, and total time until completed (= start time + execution time) for each pipeline with six executors.]
This case shows how to play with the number of executors, and it is a
nice example of using systems monitoring to spot bottlenecks in pipeline
processing.
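The trade-off in this case can also be reasoned about with a simple model before changing the real platform. The following Python sketch is a crude simulation, not a measurement of the environment described above: all pipelines are triggered at the same time, a fixed pool of executors runs them, and concurrent runs slow each other down by a contention factor. The pipeline names other than the payment pipelines, the durations, and the contention factor are made-up examples.

# Crude model: pipelines are all triggered at t=0, a fixed pool of executors
# runs them, and concurrent runs slow each other down by a contention factor.
# Durations (in minutes) and the contention factor are made-up example numbers.
import heapq

BASE_DURATIONS = {
    "Process Payment": 10, "Receive Payment": 9, "View Payments": 8,
    "Refund Payment": 7, "Payment Report": 6, "Payment Audit": 5,
}
CONTENTION = 0.15   # each extra concurrent run adds 15% to a run's duration

def simulate(executors: int) -> dict:
    pending = list(BASE_DURATIONS.items())
    running = []                     # heap of (finish_time, pipeline_name)
    finished = {}
    clock = 0.0
    while pending or running:
        # start pipelines as long as an executor is free
        while pending and len(running) < executors:
            name, base = pending.pop(0)
            concurrency = len(running) + 1
            duration = base * (1 + CONTENTION * (concurrency - 1))
            heapq.heappush(running, (clock + duration, name))
        # advance the clock to the next pipeline that finishes
        clock, name = heapq.heappop(running)
        finished[name] = clock       # total time until completed
    return finished

for n in (2, 4, 6):
    results = simulate(n)
    print(f"{n} executors: last pipeline finished after {max(results.values()):.1f} minutes")

The numbers are fictitious, but the pattern matches the case above: fewer executors mean longer queuing, while more executors mean that each concurrent run takes longer.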
Platform Monitoring
Platform monitoring is positioned one level above infrastructural systems
monitoring. Platform monitoring concerns the monitoring of the ALM/
integration platform itself. This includes the platform middleware and the
pipelines. The following are the typical metrics to monitor:
• Queue depth of all nodes/servers/agents (to detect
queuing/pending pipelines).
• Performance of pipelines.
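How queue depth can be read depends on the platform. As an illustration only, the following Python sketch polls a Jenkins controller's standard /queue/api/json endpoint and reports the queue depth and the age of the oldest queued item; the URL and credentials are placeholders, and other ALM platforms expose comparable APIs.

# Minimal sketch: read the build queue depth from a Jenkins controller via its
# JSON API and report how long the oldest item has been waiting. The URL and
# credentials are placeholders; other platforms expose comparable endpoints.
import os
import time
import requests

JENKINS_URL = "https://jenkins.example.com"        # placeholder

def queue_depth() -> tuple[int, float]:
    response = requests.get(
        f"{JENKINS_URL}/queue/api/json",
        auth=(os.environ["JENKINS_USER"], os.environ["JENKINS_TOKEN"]),
        timeout=15,
    )
    response.raise_for_status()
    items = response.json().get("items", [])
    if not items:
        return 0, 0.0
    oldest = min(item["inQueueSince"] for item in items)   # epoch milliseconds
    wait_seconds = time.time() - oldest / 1000.0
    return len(items), wait_seconds

if __name__ == "__main__":
    depth, wait = queue_depth()
    print(f"queue depth: {depth}, oldest item waiting {wait:.0f} s")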
Although this dashboard gives some insight into the latest pipeline runs, it is still a rudimentary dashboard, and in this particular case it was not possible to configure a dashboard in such a way that it fulfilled all the requirements; in particular, information about pipeline performance is missing. Metrics such as "What is the average processing time of the various pipelines?" and "How does it change over time?" are hard to monitor. Questions like "How long does a pipeline run remain in the queue before it is executed?" and "What are the maximum and average queuing times?" are also problematic to monitor, or at least difficult to display in a dashboard. In general, the requirement to spot any degradation or bottleneck in pipeline processing caused by infrastructure or platform issues was difficult to fulfill with the standard options available on the various analyzed platforms.
Business Monitoring
KPIs can be visualized using custom dashboards. A few examples of KPIs
were mentioned in Chapter 3. The next dashboard example visualizes two
KPIs called Lead time and Cycle time. These KPIs need some explanation.
• Lead time measures the time between the moment a work item is created and the moment the associated feature is done. Lead time does not say anything about the performance of the DevOps team. The time between the moment a work item is created and the moment it is pulled into a sprint and picked up by a developer can be very long; a work item can stay on the backlog for a very long time.
• Cycle time gives better insight into the performance of
the team. It measures the time between a developer
committing themselves to a work item and the moment
the code for the particular feature has been developed
and tested.
Figure 8-4 visualizes the difference between Lead time and Cycle time.
[Figure 8-4: Work item status timeline. A work item moves from Create workitem, via Committed, to Done; Lead time spans the period from creation to Done, while Cycle time spans the period from Committed to Done.]
The dashboard shows both KPIs. The average Lead time is 91 days, based on 18 work items (the 10 bugs excluded), while the average Cycle time is 23 days. This means the 18 work items stayed on the backlog for 68 days on average, while the team finished a feature in 23 days on average.
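How these two KPIs follow from work item timestamps is easy to express in code. The following Python sketch is purely illustrative; the dates are made-up examples, and in practice they would be retrieved from the ALM platform's work item API.

# Minimal sketch: compute the average Lead time (created -> done) and Cycle
# time (committed -> done) from work item timestamps. The work items below are
# made-up examples; in practice the dates come from the ALM platform's API.
from datetime import date
from statistics import mean

work_items = [
    {"created": date(2023, 1, 10), "committed": date(2023, 3, 20), "done": date(2023, 4, 5)},
    {"created": date(2023, 2, 1),  "committed": date(2023, 4, 11), "done": date(2023, 5, 2)},
    {"created": date(2023, 2, 15), "committed": date(2023, 5, 1),  "done": date(2023, 5, 25)},
]

lead_times = [(wi["done"] - wi["created"]).days for wi in work_items]
cycle_times = [(wi["done"] - wi["committed"]).days for wi in work_items]

print(f"average lead time : {mean(lead_times):.0f} days")
print(f"average cycle time: {mean(cycle_times):.0f} days")
print(f"average backlog wait (lead - cycle): {mean(lead_times) - mean(cycle_times):.0f} days")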
Security Monitoring
Security monitoring covers a broad range of topics. The integration platform
and infrastructure must be secure, and any vulnerabilities or breaches must
be detected by the monitoring systems. In addition, various checks can be
done on the pipelines themselves. For example, a pipeline has to comply
with the company policies. So, let’s zoom in on two examples.
Application monitoring and monitoring of the target environment
on which the application runs are typically not part of integration
platform and infrastructure monitoring. However, there are a few types
of monitoring that do fall into this category. Consider an application
deployed to a certain target environment. The application may not
be altered once deployed, and if it is changed, it can be changed only
using a pipeline redeploy and not manually. The same applies to the
target environment itself. Once the infrastructure has been provisioned
and applications run on it, any manual change of the infrastructure is
not allowed and should be detected. This type of monitoring can be
considered part of pipeline (security) monitoring.
Figure 8-6 shows an example in which part of the infrastructure—a
stack—is provisioned to an AWS account. The infrastructure is provisioned
using IaC, and once provisioned, it can be changed only by re-provisioning
the updated infrastructure code. In this particular screenshot, the stack has been changed manually, as indicated by the Drift status: it has the value DRIFTED, while it should be IN_SYNC.
Continuous monitoring of infrastructure drift, or of changes in the applications deployed on this infrastructure, is a good way to detect any manual change in the production environment. A cloud service provider like AWS has the tools to check for drift of both the infrastructure and the applications deployed on it.²
² Lambda code signing is a way to determine whether running code has been altered.
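Such a drift check can itself be automated in an operational pipeline. The following Python sketch uses boto3 to start drift detection for a single CloudFormation stack and fails when the stack is DRIFTED; the stack name is an example, and AWS credentials and region are assumed to be supplied by the pipeline.

# Minimal sketch: trigger CloudFormation drift detection for one stack and
# report the result. The stack name is an example; AWS credentials and region
# are assumed to be provided by the pipeline's service connection or environment.
import time
import boto3

STACK_NAME = "myapp-stack"      # example

def detect_drift(stack_name: str) -> str:
    cfn = boto3.client("cloudformation")
    detection_id = cfn.detect_stack_drift(StackName=stack_name)["StackDriftDetectionId"]
    while True:
        status = cfn.describe_stack_drift_detection_status(
            StackDriftDetectionId=detection_id
        )
        if status["DetectionStatus"] != "DETECTION_IN_PROGRESS":
            # IN_SYNC means no manual change; DRIFTED means the stack was altered
            return status.get("StackDriftStatus", "UNKNOWN")
        time.sleep(5)

if __name__ == "__main__":
    drift_status = detect_drift(STACK_NAME)
    print(f"{STACK_NAME}: {drift_status}")
    if drift_status == "DRIFTED":
        raise SystemExit(1)     # fail the pipeline run so the team is alerted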
In a compliance dashboard, teams can view the compliance status of their pipelines. In this particular example, the pipeline is not compliant because the infrastructure validation task is omitted from the pipeline. A short explanation of the problem and the solution is given, as shown here:
This pipeline does not have an 'AWS Infrastructure scanning' stage.
A production environment must be configured in such a way that it meets the company security policies. Add the IT4IT AWS Infrastructure scanning task 2.0 to your pipeline to scan your infrastructure code and test compliance of the pipeline using the Validate button.
[Figure: Compliance view listing the Process Payment, Receive Payment, and View Payments pipelines; the explanation above is shown for the selected pipeline, together with Validate and Close buttons.]
Share Information
Information can be shared in different ways, but beware that information overload of the DevOps team must be prevented. The best way to demonstrate what an information-sharing design could look like, with techniques applied to prevent information overload, is by using a specific case. Of course, this case depicts only one possible solution, and teams have to decide for themselves what their information flow will look like. Consider the following case:
[Figure: BPMN model of the CI/CD pipeline on the CI/CD platform. A git push to the main branch or a feature branch in Git triggers the pipeline, which runs Validate entry criteria, Execute build, and Perform unittests in sequence. Each stage routes to the Notify actors stage both on failure (entry criteria incorrect, build is not OK, unit tests failed) and on success (entry criteria correct, build is OK, unit tests passed), in parallel with the next stage; when the branch is a feature branch, the flow also leads to the Notify actors stage.]
It shows the feature branch workflow with a feature branch and a trunk
(the main branch). The Notify actors stage is responsible for communication
with other actors, and the requirements state that both successful and
unsuccessful results must be communicated. This explains the presence of
the parallel gateway after certain stages. Also note that the diagram does not
have an end event; start and end events in BPMN are optional.
[Figure: BPMN model of the Notify actors stage. The stage starts by validating the branch. For a feature branch, the developer's e-mail address is determined and an e-mail is created. For the main branch (trunk), the path depends on the previous stage (Execute build, Perform unittests, Perform test, Perform dual control, or Deploy artifact to production, including a failed production deployment): the e-mail address of the team or the product owner is determined, an e-mail is created, and a message card is created for the "Release build", "Test", or "Production deployment" channel and added to that channel in MS Teams, after which the Notify actors stage ends.]
In the Notify actors stage, the first validation is on the branch. A different
path is followed for the main branch compared to the feature branches. In the
case of a feature branch, an email is created for the developer and sent to the
developer using an email server. The path of the feature branch stops here.
Notifications, alerts, and incidents are shared with the team. In the
case of incidents, the team should be informed proactively, based on a
push mechanism; one or more team members are informed using an
email, an SMS message, or a WhatsApp message because immediate
action is required. Notifications and alerts can be shared through these same channels, but it is also possible to inform the team with a notification or alert on a dashboard or overview. The team members then have to actively watch the dashboard to stay informed. In all cases, be conservative with the amount of information you push to the team; information is actively pushed only when it is really needed.
The overview in Figure 8-10 gives a nice example of the pipeline runs
of a process booking pipeline in Jenkins. There are some issues with the
latest runs. In one of these runs, the build failed. In the latest run, all stages
were executed properly again; however, the Deploy artifact to production
stage ended with a warning, although the deployment was successful.
Further investigation is needed. Since the overview already gives a nice
indication that something went wrong, the team has to decide whether
they also want to be alerted actively or whether keeping an eye on the stage
view screen is sufficient.
Summary
You learned about the following in this chapter:
• Systems monitoring
• Platform monitoring
• Business monitoring
• Security monitoring
CHAPTER 9
Use Case
This chapter covers the following:
• The results, the gaps and backlog items, and the output
of a running application.
The use case walks through all the steps explained in this book, from requirements analysis to the implementation of a pipeline that runs on an ALM platform. Azure DevOps is the ALM platform used to demonstrate the use case. But even if you don't know anything about Azure DevOps or AWS, this chapter is still valuable because it shows a real case from requirements to implementation.
Requirements Analysis
The requirements of myapp and its first increment—the healthcheck
app—are clear. The healthcheck is realized as an AWS Lambda function
that listens to HTTP requests and writes a log line to a CloudWatch log
after every processed request. The healthcheck Lambda is called every 5
minutes by a CloudWatch schedule.
Because the runtime environment is AWS, the team chooses infrastructure as code (IaC), and they prefer the AWS Cloud Development Kit (CDK) over plain AWS CloudFormation. By using CDK, the infrastructure is fully coded in their favorite programming language, Java.¹
Defining continuous integration, continuous delivery, and pipeline requirements takes a bit more work, so the AWSome team decides to draft a table with all the requirements; see Table 9-1.
¹ AWS CDK supports multiple languages.
Pipeline Design
To get a clear understanding of the environments, the tools, and how
everything is connected, the context diagrams in Figure 9-1 and Figure 9-2
are drafted.
The first diagram represents the Azure DevOps environment, used to
run the pipelines. It consists of two projects. The application is developed
in the main (production) project. This project is also used to run the
pipelines and deploy them to the AWS test and production environments.
A second project—the test project—is a clone of the main project and
is solely used to develop and test the pipelines.
The main Azure DevOps project is connected to both the AWS test
and the AWS production environments. The Azure DevOps test project is
connected only to the AWS test environment, so it cannot deploy to the
AWS production environment. The AWS test environment is represented by account 497562947267.² The AWS production account has the ID 486439332092.
Both Azure DevOps projects are connected to SonarCloud. External
libraries are retrieved from the central Maven repository, and emails are
sent from the Azure DevOps pipelines to the team members.
² Both account numbers 497562947267 and 486439332092 are accounts of the AWSome team. If you want to try the pipelines yourself, you need to request and use your own AWS accounts, of course.
[Figure 9-1: Context diagram of the MyCorp.com Azure DevOps environment. The Azure DevOps test project is a clone of the production project, kept in sync with the latest application and infrastructure code, and is used to develop the 'Deploy' pipeline code. Both projects contain Environments, Pipelines, Library, Service Connections, and Permissions. The production project deploys MyLambda to the AWS test and production environments through the test and production service connections and notifies actors (the product owner and the DevOps engineer) over the Internet; the test project deploys MyLambda only to the AWS test environment.]
The latter two pipelines are represented in Figure 9-5 and Figure 9-6.
[Figure 9-5: Design of the myapp-pipeline, triggered by an SCM trigger on the main branch: Validate entry criteria; Analyze code; Execute build, Perform unit tests, Package artifact, Publish artifact; Provision test environment; Deploy artifact to test; Perform test; Validate infrastructure compliance; Notify actor.
Figure 9-6: Design of the myapp-production-deployment pipeline, triggered manually for a release: Validate exit criteria; Perform dual control; Provision production environment; Deploy artifact to production; Notify actor.]
[Figure: BPMN models of myapp-pipeline and myapp-production-deployment. In myapp-pipeline, the flow runs from Start pipeline through Validate entry criteria, Analyze code, and the combined Build/Unittests/Package/Publish stages; if the branch is main, it continues with Provision test environment, Deploy artifact to test, Perform test, and Validate infrastructure compliance, after which the pipeline ends. Incorrect entry criteria, a failed build or unit tests, a failed code analysis, failed tests, or non-compliant infrastructure route to the Notify actors stage and an error end event. In myapp-production-deployment, incorrect entry/exit criteria or a failed dual control also route to Notify actors and an error end event.]
[Figure: Release version scheme major.minor.patch (for example, 1.0.0).
• patch: starts with 0, range [0 .. x]; increased by 1 after each pipeline run; starts with 0 again if 'minor' or 'major' changes.
• minor: starts with 0, range [0 .. y]; increased by 1 after every successful deployment; starts with 0 again if 'major' changes.
• major: starts with 1, range [1 .. z]; increased by 1 on the first of January of each new year.]
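A minimal Python sketch of these versioning rules is shown below. It only illustrates the logic; the AWSome team keeps these counters in an Azure DevOps variable group and updates them from pipeline tasks.

# Minimal sketch of the release-versioning rules shown above:
# - patch: +1 after each pipeline run, reset when minor or major changes
# - minor: +1 after every successful deployment to production, reset when major changes
# - major: starts at 1 and is increased on the first of January of each new year
from dataclasses import dataclass
from datetime import date

@dataclass
class ReleaseVersion:
    major: int = 1
    minor: int = 0
    patch: int = 0
    last_update_year: int = 2023

    def on_new_year(self, today: date) -> None:
        if today.year > self.last_update_year:
            self.major += today.year - self.last_update_year
            self.minor = 0      # incrementing major resets minor, and therefore patch
            self.patch = 0
            self.last_update_year = today.year

    def on_pipeline_run(self) -> None:
        self.patch += 1

    def on_production_deployment(self) -> None:
        self.minor += 1
        self.patch = 0          # minor changed, so patch starts with 0 again

    def __str__(self) -> str:
        return f"{self.major}.{self.minor}.{self.patch}"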
Pipeline Development
Before the Azure DevOps pipeline code can be used, various preparations
must be made, starting with the creation of the two Azure DevOps projects.
As shown in Figure 9-10, the projects are created in the Azure DevOps
organization called mycorp-com. The projects are called MyApp and
MyApp-test.
³ If the major part is incremented, the minor is reset to zero as a result, so the patch is also reset to zero.
MyApp is the main Azure DevOps project. This is where the team
develops all application and infrastructure code. The MyApp-test project
is a cloned version of the MyApp project. Development and testing of
pipelines happen in the MyApp-test project, so the rest of the team is not
disturbed by pipeline tests. The pipeline code is merged with the code in
the MyApp project after each pipeline feature is finished.
The AWSome team was kind enough to share their code and configuration, and they encourage you to use it and discover what they developed. The code provided for this book must be imported into the myapp Git repository in the MyApp project and cloned in the MyApp-test project. The preparation activities listed in this chapter apply to both projects.
Code Repository
Both Azure DevOps projects consist of three Git repositories. Two of
these repositories contain scanning tools used in the pipelines. The tools
Whispers and Lambdaguard are cloned from GitHub (https://github.com/Skyscanner) into a local repository in the Azure DevOps project to limit dependencies on Internet sources as much as possible. In addition,
Pipeline Creation
In the design phase, three logical pipelines were defined and translated
into two BPMN models. The BPMN models, myapp-pipeline and myapp-
production-deployment, map to two technical pipelines with the same
names, as depicted by the schema in Table 9-2.
Table 9-3 through Table 9-6 show the configuration of the four
variable groups.
Table 9-3 (Name, Value, Additional information):
• azdo-user: [email protected]
• cdk-version: 2.46.0
• myapp-email: [email protected]
• nodejs-version: 16.15.1
• personal-access-token: ******** (a generated personal access token (PAT); you need to generate one yourself in Azure DevOps and add it here)
• pipeline-id: 2 (the pipeline ID of pipeline myapp-pipeline; this value can be different in your case)
• project: MyApp (the value is MyApp-test for the test project)
• rest-api-vg: https://dev.azure.com/mycorp-com/MyApp/_apis/distributedtask/variablegroups/4?api-version=5.0-preview.1 (the Azure DevOps API to update the semver variable group; note that the project in this URL is MyApp-test for the test project, and the value 4 is the semver variable group ID, which may be different in your situation)
Table 9-4 (semver variable group):
• last-update-year: 2023 (used to determine the year of the previous release version)
• minor: 0 (starts with zero, but is updated after every deployment to production)
Table 9-5. Variable Group, test (represents the AWS test environment):
• aws-account: 497562947267 (use your own AWS account if you want to try it yourself)
• aws-region: us-east-1 (the region of your AWS account)
• service-connection-aws-account: ServiceConnectionAWSTest-497562947267 (use your own AWS service connection if you want to try it yourself)
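To check that the personal access token and the rest-api-vg URL work, the semver variable group can simply be read back. The Python sketch below is only a connectivity check; it uses the URL from Table 9-3 and authenticates with the PAT as the basic-auth password (the user name can be left empty).

# Minimal sketch: read the semver variable group through the Azure DevOps REST
# API listed in the rest-api-vg variable, authenticating with a personal access
# token (PAT). Purely a connectivity check before running the pipelines.
import os
import requests

REST_API_VG = (
    "https://dev.azure.com/mycorp-com/MyApp"
    "/_apis/distributedtask/variablegroups/4?api-version=5.0-preview.1"
)

response = requests.get(
    REST_API_VG,
    auth=("", os.environ["AZURE_DEVOPS_PAT"]),   # empty user name + PAT as password
    timeout=30,
)
response.raise_for_status()
group = response.json()
print(group["name"])
for name, variable in group["variables"].items():
    print(f"  {name} = {variable.get('value', '***secret***')}")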
• SonarCloud
Test
This section does not go into much detail on testing pipelines.
Development and testing are done in a separate Azure DevOps project, so
from a pipeline testing point of view, some measures are taken to optimize
pipeline testing.
Executing pipeline myapp-pipeline results in images similar to
Figure 9-22 and Figure 9-23. The first figure represents the stages if the
pipeline is associated with a feature branch. The next figure shows the
stages associated with the main branch. As shown in Figure 9-23, release
version 1.0.3 is created. All the stages are passed, and the application is
deployed to the AWS test environment.
Let’s zoom in on some of the stages. The Analyze code stage consists of
a SonarCloud scan with a build breaker and a Whispers scan, represented
respectively by Figure 9-24 and Listing 9-1.
Both scans show that everything is fine. The build passes the
SonarCloud quality gate and the Whispers scan looks fine (no hard-coded
secrets).
The Perform test stage contains a test task invoking a Cucumber test.
The test is still simple and covers only one test, defined in the mylambda.feature file shown in Listing 9-2.
┌──────────────────────────────────────────┐
Share your Cucumber Report with your team at │
│
│ https://reports.cucumber.io │
│
Activate publishing with one of the following: │
│ │
│ src/test/resources/cucumber.properties: │
│ cucumber.publish.enabled=true │
│ src/test/resources/junit-platform.properties: │
│ cucumber.publish.enabled=true │
│ Environment variable: CUCUMBER_PUBLISH_ENABLED=true │
│ JUnit: @CucumberOptions(publish = true) │
│ │
│ More information at https://cucumber.io/docs/ │
│ cucumber/environment-variables/ │
│ │
│ Disable this message with one of the following: │
│ │
│ src/test/resources/cucumber.properties: │
│ cucumber.publish.quiet=true │
│ src/test/resources/junit-platform.properties: │
│ cucumber.publish.quiet=true │
└──────────────────────────────────────────┘
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0,
Time elapsed: 2.757 s - in mylambda.RunCucumberTest
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO] --------------------------------------------------------
`.::////::.`
./osssssoossssso/.
-osss/-` .-/ssso-
`osso- .++++: -osso`
`oss/ .//oss- /sss`
+ss+ -sss. /sso
.sss` .sssso` `sss. LambdaGuard v2.4.3
-sso :ssooss+ oss-
.sss` /ss+``oss/ `sss.
+ss+ `oss/ .sss/// /sso
`oss/`.oso- -ssso+./sso`
`+sso: .` -oss+`
-osss+-.` `.-+ssso-
./osssssssssssso/.
`.-:////:-.`
Arn............ arn:aws:iam::497562947267:user/
azuredevops
[ 1/1 ] myLambda
Lambdas........ 1
Security....... 2
Triggers....... 1
Resources...... 0
Layers......... 0
Runtimes....... 1
Regions........ 1
Report......... ./mylambda-report/report.html
Log............ ./mylambda-report/lambdaguard.log
Integrity of Artifacts
The security requirement “Only build and deploy artifacts using a
pipeline” states that the integrity of the artifact must be guaranteed, from
building the artifact to running the artifact. A simple measure is applied
to meet this requirement. The first step in this process is to visualize
that the integrity remains the same over all stages in the process. This is
done by creating an SHA256 hash of the built artifact. If the hash of the
lambda running in the AWS target environment is the same as the hash of
the artifact in the pipeline(s), there is high confidence that it is the same
artifact. Generating the SHA256 hash is included in the files pipeline.yml and template/deploy.yml. The pipelines myapp-pipeline and myapp-production-deployment both print the hash in the log, as shown in Listing 9-5 and Listing 9-6.
####Task Permissions
Permissions for this task to call AWS service APIs depend on
the activities in the supplied script.
===============================================================
Configuring credentials for task
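The same comparison can be automated end to end. The following Python sketch is an illustration, not the book's pipeline task: it computes the SHA256 hash of the locally built artifact and compares it with the CodeSha256 value that AWS Lambda reports for the deployed function. Note that Lambda reports the hash base64-encoded (whereas tools such as sha256sum print hex), and the check only holds if the deployment package uploaded to Lambda is byte-for-byte the built artifact; the function name and artifact path are examples.

# Minimal sketch: verify that the artifact built by the pipeline is the same
# package that is running in AWS Lambda. Lambda's CodeSha256 is the
# base64-encoded SHA256 of the deployment package, so the local hash is
# encoded the same way before comparing. Function name and path are examples.
import base64
import hashlib
import boto3

FUNCTION_NAME = "myLambda"                            # example
ARTIFACT_PATH = "target/application-shaded.jar"       # example

def local_code_sha256(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as artifact:
        for chunk in iter(lambda: artifact.read(1024 * 1024), b""):
            digest.update(chunk)
    return base64.b64encode(digest.digest()).decode("ascii")

def deployed_code_sha256(function_name: str) -> str:
    config = boto3.client("lambda").get_function_configuration(FunctionName=function_name)
    return config["CodeSha256"]

if __name__ == "__main__":
    local = local_code_sha256(ARTIFACT_PATH)
    deployed = deployed_code_sha256(FUNCTION_NAME)
    print(f"artifact : {local}")
    print(f"deployed : {deployed}")
    if local != deployed:
        raise SystemExit("Integrity check failed: deployed code differs from the built artifact")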
a lot faster. The wait time before a dual control is performed is many times
greater than the actual execution time of the stages. With these numbers
and the fact that the outcome of all stages looks good, the AWSome team
approves the pipeline, which can be implemented in the MyApp project.
Implementation
Implementation means that the pipeline developed and tested in the
MyApp-test project is pushed to the MyApp project. The first increment
does not cover all requirements, and some mitigating actions are applied.
The team puts work items on the backlog that need to be implemented in
the next couple of iterations. Here is a selection from their backlog:
• Workitem 1: The requirement “Resources associated
with a release cannot be deleted” is not implemented.
This is put on the backlog. Retaining pipelines for a long time can be automated using the Leases API of Azure DevOps, which sets the retention time of a pipeline to "forever" after a deployment to production.
As soon as the pipeline reaches the dual control step, it shows a dialog
similar to Figure 9-33. Members of the Product Owner group must approve
(or reject) it before the deployment to production is performed.
The result is the deployment of the artifact stack, which includes the
myLambda resource, in the AWS production account (see Figure 9-35).
Notice the presence of the release version tag in this stack. This completes
the requirement “All changes are traceable/Tag everything.”
Quality Gate
To prevent an incorrect release version from being deployed to production, an additional quality gate is added to the myapp-production-deployment pipeline. This quality gate prevents release versions for which the stages Analyze code, Perform test, and Validate infrastructure compliance were not executed from being deployed to production.
The pipeline myapp-pipeline creates a "stage completed" file after every successful run of a particular stage. Only release versions for which the files ANALYZE-CODE-COMPLETED, PERFORM-TEST-COMPLETED, and VALIDATE-INFRASTRUCTURE-COMPLIANCE-COMPLETED have been created are considered valid releases. The existence of these files is checked in the Validate entry/exit criteria stage of the pipeline myapp-production-deployment.
Figure 9-37 shows the artifacts of myapp-pipeline. The three “stage
completed” files are listed in the myapp-status folder.
Generating script.
================== Starting Command Output ====================
/usr/bin/bash --noprofile --norc /home/vsts/work/_temp/
bb88fef0-b881-44a8-b3da-06cc6a165198.sh
Stage [Validate infrastructure compliance] was not executed
##[error]Bash exited with code '1'.
Finishing: Validate whether QA stages are completed
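The book implements this check as a Bash task, whose output is shown in the log above. Purely as an illustration of the same idea, a Python version of the check could look like the sketch below; the myapp-status folder name follows the artifact layout of Figure 9-37, and the path must point to wherever the artifact is downloaded in the release pipeline.

# Minimal sketch of the quality gate: deployment to production is allowed only
# if all "stage completed" marker files exist for the release version that is
# about to be deployed. Adjust the folder to where the artifact is downloaded.
import sys
from pathlib import Path

REQUIRED_MARKERS = (
    "ANALYZE-CODE-COMPLETED",
    "PERFORM-TEST-COMPLETED",
    "VALIDATE-INFRASTRUCTURE-COMPLIANCE-COMPLETED",
)

def validate_release(status_folder: str) -> None:
    folder = Path(status_folder)
    missing = [name for name in REQUIRED_MARKERS if not (folder / name).exists()]
    for name in missing:
        stage = name.removesuffix("-COMPLETED").replace("-", " ").capitalize()
        print(f"Stage [{stage}] was not executed")
    if missing:
        sys.exit(1)     # fail the Validate entry/exit criteria stage

if __name__ == "__main__":
    validate_release("myapp-status")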
Summary
You learned about the following topics in this chapter: