Azure DevOps Explained: Get Started With Azure DevOps and Develop Your DevOps Practices
Sjoukje Zaal
Stefano Demiliani
Amit Malik
BIRMINGHAM—MUMBAI
Livery Place
35 Livery Street
Birmingham
B3 2PB, UK.
ISBN 978-1-80056-351-3
www.packt.com
Contributors
Preface
Section 1: DevOps Principles and Azure DevOps Project Management
Chapter 1: Introducing DevOps
Understanding DevOps principles
Principle 1 – Customer-centric action
Principle 2 – Create with the end in mind
Principle 3 – End-to-end responsibility
Principle 4 – Cross-functional autonomous teams
Principle 5 – Continuous improvement
Principle 6 – Automate everything
Plan
Develop
Deliver
Operate
Version control
Infrastructure as Code
Configuration Management
Monitoring
Discovering Azure DevOps services
Azure Boards
Azure Repos
Azure Pipelines
Azure Artifacts
Extension Marketplace
Summary
Further reading
Chapter 2:
Technical requirements
Creating an organization
Creating a project
Work Items
Backlogs
Boards
Sprints
Queries
Summary
Further reading
Section 2: Source Code and Builds
Chapter 3: Source Control Management with Azure DevOps
Technical requirements
Understanding SCM
GitHub Flow
GitLab Flow
Git Flow
Cross-repo policies
Tagging a release
Summary
Chapter 4: Understanding Azure DevOps Pipelines
Technical requirements
Microsoft-hosted agents
Self-hosted agents
Scalars
Dictionaries
Document structure
Multi-stage pipeline
Summary
Chapter 5:
Technical requirements
Summary
Further reading
Chapter 6:
Technical requirements
Microsoft-hosted agents
Self-hosted agents
Environment variables
Summary
Section 3: Artifacts and Deployments
Chapter 7:
Technical requirements
Further reading
Chapter 8:
Technical requirements
An overview of release pipelines
Creating approvals
Summary
Section 4: Advanced Features of Azure DevOps
Chapter 9:
Technical requirements
Summary
Chapter 10: Using Test Plans with Azure DevOps
Technical requirements
Exploratory testing
Summary
Further reading
Chapter 11:
Technical requirements
Summary
After reading this book, you will have a clear and complete picture of what Azure DevOps can offer to improve your development life cycle.
Who this book is for
This book is for solution developers/architects and project managers who want to apply DevOps techniques to their projects and use Azure DevOps to manage the entire process of developing high-quality applications.
What this book covers
Chapter 1, Azure DevOps Overview, gives you a full overview
of the Azure DevOps features and toolsets, such as boards,
repos, pipelines, test plans, and artifacts.
Chapter 10, Using Test Plans with Azure DevOps, shows you
how to manage your project's testing life cycle with test
plans in Azure DevOps.
https://azure.microsoft.com/en-us/services/devops/
Conventions used
There are a number of text conventions used throughout this
book.
using System;
using PartsUnlimited.Models;

namespace AzureArtifacts
{
    class Program
    {
        static void Main() => Console.WriteLine("Hello World!");
    }
}
[Net.ServicePointManager]::SecurityProtocol =
[Net.SecurityProtocolType]::Tls12
# Run a containerized Azure Pipelines agent
# (VSTS_ACCOUNT = organization name, VSTS_TOKEN = personal access token)
docker run \
  -e VSTS_ACCOUNT=<name> \
  -e VSTS_TOKEN=<pat> \
  -it mcr.microsoft.com/azure-pipelines/vsts-agent
Get in touch
Feedback from our readers is always welcome.
Introducing DevOps
For a long time, development and operations were divided into isolated silos with separate concerns and responsibilities. Developers wrote the code and made sure that it worked on their development systems, while the system administrators were responsible for the actual deployment and integration into the organization's IT infrastructure.
This fitted in nicely with the Waterfall Methodology that was used
for most projects. The Waterfall Methodology is based on the
Software Development Life Cycle (SDLC), which has clearly
defined processes for creating software. The Waterfall
Methodology is a breakdown of project deliverables into linear
sequential phases, where each phase depends on the deliverables
of the previous phase. This sequence of events may look as
follows:
Figure 1.1 – Waterfall Methodology
Because of these drawbacks, Agile practices and, later, DevOps (a term coined around 2009) slowly took over the world of software development. They replaced the Waterfall Methodology for most projects that are out there. DevOps is a natural extension of Agile and continuous delivery approaches, and the name stands for development and operations. It is a practice that merges development, IT operations, and quality assurance into one single, continuous set of processes.
Understanding DevOps
principles
There are a lot of different definitions when it comes to DevOps.
Most of them are good at explaining the different aspects of
finding the right flow in delivering software and IT projects. In the
upcoming sections, we will highlight six DevOps principles that we
think are essential when adopting a DevOps way of working.
Principle 1 – Customer-
centric action
Nowadays, it is important that software development projects
have short cycles and feedback loops, with end users and real
customers integrated into the team. To fully meet the customers'
requirements, all activity around building software and products
must involve these clients. DevOps teams and organizations must
continuously invest in products and services that will allow clients
to receive the maximum outcome, while also being as lean as
possible to continuously innovate and change the chosen strategy
when it is no longer working.
Principle 4 – Cross-
functional autonomous
teams
Organizations that work with vertical, fully responsible teams need to let these teams work completely independently throughout the whole life cycle. To enable this, a broad and balanced set of skills is required. Team members need to have T-shaped profiles instead of being old-school IT specialists who are only skilled in their own role. Examples of skills that every team member should have include development, requirement analysis, testing, and administration skills.
Principle 5 – Continuous
improvement
Another part of end-to-end responsibility is that organizations must adapt to change continuously. Circumstances change all the time: new technology is released, customer requirements evolve, and so on. DevOps puts a strong focus on continuous improvement: optimizing for speed and cost, minimizing waste, easing delivery, and continuously improving the software and services that are being built and released. An important activity to embed inside these cycles is experimentation, which allows teams to learn from their failures; this is essential for continuous improvement.
Principle 6 – Automate
everything
To fully adopt and embed a continuous improvement culture, most organizations have a lot of waste and technical debt to eliminate. To work with high cycle rates and to process instant feedback from customers and end users as soon as possible, it is imperative to automate everything. This means that not only should the software development process be automated using continuous delivery (which includes continuous development and integration), but the whole infrastructure landscape needs to be automated as well. The infrastructure also needs to be ready for new ways of working. In this sense, automation is synonymous with the drive to renew the way the team delivers its services to its customers.
In this section, we covered the six principles that are very important when adopting or migrating to a DevOps way of working. In the next few sections, we are going to look at what Azure DevOps has to offer as a tool that supports teams so that they can work in a DevOps-oriented manner.
The following diagram shows the phases that are defined in the
application life cycle:
Figure 1.3 – Application life cycle phases
In the following sections, we'll explain these phases and the
corresponding Microsoft tooling and products in more detail.
Plan
During the planning phase, teams can use Kanban boards and
backlogs to define, track, and lay out the work that needs to be
done in Azure Boards. They can also use GitHub for this. In
GitHub, an issue can be created by suggesting a new idea or
stating that a bug should be tracked. These issues can be
organized and assigned to teams.
Develop
The development phase is supported by Visual Studio Code and
Visual Studio. Visual Studio Code is a cross-platform editor, while
Visual Studio is a Windows- and Mac-only IDE. You can use Azure
DevOps for automated testing and use Azure Pipelines to create
automatic builds for building the source code. Code can be shared
across teams with Azure DevOps or GitHub.
Deliver
The deliver phase is about deploying your applications and
services to target environments. You can use Azure Pipelines to
deploy code automatically to any Azure service or on-premises
environments. You can use Azure Resource Manager templates or
Terraform to spin up environments for your applications or
infrastructure components. You can also integrate Jenkins and
Spinnaker inside your Azure DevOps Pipelines.
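As a minimal sketch of such a deployment step, the following YAML uses the built-in AzureResourceManagerTemplateDeployment task to deploy an ARM template; the service connection name, resource group, and template path are illustrative assumptions:

- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'my-azure-connection'   # assumed service connection name
    subscriptionId: '$(azureSubscriptionId)'                # assumed pipeline variable
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'rg-tailwindtraders'                 # assumed resource group
    location: 'West Europe'
    templateLocation: 'Linked artifact'
    csmFile: '$(Build.SourcesDirectory)/infra/azuredeploy.json'   # assumed template path
    deploymentMode: 'Incremental'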
Operate
In this phase, you implement full-stack monitoring for monitoring
your applications and services. You can also manage your cloud
environment with different automation tools, such as Azure
Automation, Chef, and more. Keeping your applications and
services secure is also part of this phase. Therefore, you can use
features and services such as Azure Policy and Azure Security
Center.
Version control
A version control system, also known as a source control system,
is an essential tool for multi-developer projects. It allows
developers to collaborate on the code and track changes. The
history of all the code files is also maintained in the version
control system. This makes it easy to go back to a different
version of the code files in case of errors or bugs.
Configuration Management
Configuration Management refers to all the items and artifacts that are relevant to the project and the relationships between them. These items are stored, retrieved, uniquely identified, and modified, and include things such as source code, files, and binaries. The configuration management system is the single source of truth for configuration items.
Monitoring
You can use Azure Monitor to practice full-stack continuous
monitoring. The health of your infrastructure and applications can
be integrated into existing dashboards in Grafana, Kibana, and the
Azure portal with Azure Monitor. You can also monitor the
availability, performance, and usage of your applications, whether
they are hosted on-premises or in Azure. Most popular languages and frameworks are supported by Azure Monitor, such as .NET, Java, and Node.js, and they are integrated with DevOps processes and tools in Azure DevOps.
Azure Boards
Azure Boards can be used to plan, track, and discuss work across
teams using the Agile planning tools that are available. Using
Azure Boards, teams can manage their software projects. It also
offers a unique set of capabilities, including native support for
Scrum and Kanban. You can also create customizable dashboards,
and it offers integrated reporting and integration with Microsoft
Teams and Slack.
You can create and track user stories, backlog items, tasks,
features, and bugs that are associated with the project using
Azure Boards.
Azure Repos
Azure Repos provides support for private Git repository hosting
and for Team Foundation Server Control (TFSC). It offers a set
of version control tools that can be used to manage the source
code of every development project, large or small. When you edit
the code, you ask the source control system to create a snapshot
of the files. This snapshot is saved permanently so that it can be
recalled later if needed.
Azure Pipelines
You can use Azure Pipelines to automatically build, test, and
deploy code to make it available to other users and deploy it to
different targets, such as a development, test, acceptance,
and production (DTAP) environment. It combines CI/CD to
automatically build and deploy your code.
Before you can use Azure Pipelines, you should put your code in a version control system, such as Azure Repos. Azure Pipelines can integrate with a number of version control systems, such as Azure Repos, Git, TFVC, GitHub, GitHub Enterprise, Subversion, and Bitbucket Cloud. You can also use Pipelines with most application types, such as Java, JavaScript, Node.js, Python, .NET, C++, Go, PHP, and Xcode. Applications can be deployed to multiple target environments, including container registries, virtual machines, Azure services, or any on-premises or cloud target.
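As a taste of what such a pipeline definition looks like (pipelines are covered in depth in Chapter 4, Understanding Azure DevOps Pipelines; the build command here is an assumption), a minimal azure-pipelines.yml file can be as short as this:

trigger:
- master                     # build every commit pushed to the master branch

pool:
  vmImage: 'ubuntu-latest'   # Microsoft-hosted build agent

steps:
- script: dotnet build --configuration Release   # assumed build command for a .NET project
  displayName: 'Build the solution'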
Azure Artifacts
With Azure Artifacts, you can create and share NuGet, npm,
Python, and Maven packages from private and public sources with
teams in Azure DevOps. These packages can be used in source
code and can be made available to the CI/CD pipelines. With
Azure Artifacts, you can create multiple feeds that you can use to
organize and control access to the packages.
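As a brief, hedged sketch of consuming such a feed from a pipeline (the feed name Test is an assumption), the .NET Core task can restore packages directly from an Azure Artifacts feed:

- task: DotNetCoreCLI@2
  inputs:
    command: 'restore'
    projects: '**/*.csproj'
    feedsToUse: 'select'
    vstsFeed: 'Test'         # assumed name of an Azure Artifacts feed in your organization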
Extension Marketplace
You can download extensions for Azure DevOps from the Visual
Studio Marketplace. These extensions are simple add-ons that can
be used to customize and extend your team's experience with
Azure DevOps. They can help by extending the planning and
tracking of work items, code testing and tracking, pipeline build
and release flows, and collaboration among team members. The
extensions are created by Microsoft and the community.
TIP
For more information about the Tailwind Traders sample
project, refer to the following site:
https://github.com/Microsoft/TailwindTraders. For more
information about the Parts Unlimited example, refer to
https://microsoft.github.io/PartsUnlimited/.
Summary
In this chapter, we covered some of the basics of DevOps and
covered the six different DevOps principles. Then, we covered the
key concepts of Azure DevOps and the different solutions that
Azure DevOps has to offer to support teams throughout each of
the application life cycle phases. After that, we looked at the
different features that Azure DevOps has to offer, and we
introduced and created the two scenarios that we will use in the
upcoming chapters of this book.
Extension Marketplace:
https://marketplace.visualstudio.com/azuredevops
Creating an organization
Creating a project
Technical requirements
To follow this chapter, you need to have an active Azure DevOps
organization. The organization that we'll be using in this chapter
was created in Chapter 1, Azure DevOps Overview.
Understanding processes
and process templates
With Azure Boards, you can manage the work of your software
projects. Teams need tools to support them that can grow and
that are flexible. This includes native support for Scrum and
Kanban, as well as customizable dashboards and integrated
reporting capabilities and tools.
At the start of the project, teams must decide which process and
process templates need to be used to support the project model
that is being used. The process and the templates define the
building blocks of the Work Item tracking system that is used in
Azure Boards.
Agile: Choose Agile when your team uses the Agile planning
process. You can track different types of work, such as
Features, User Stories, and Tasks. These artifacts are
created when you create a new project using the Agile
process. Development and test activities are tracked
separately here, and Agile uses the Kanban board to track
User Stories and bugs. You can also track them on the task
board:
Creating an organization
An organization in Azure DevOps is used to connect groups of
related projects. You can plan and track your work here and
collaborate with others when developing applications. From the
organization level, you can also integrate with other services, set
permissions accordingly, and set up continuous integration and
deployment.
2. Log in with your Microsoft account and from the left menu,
click on New organization:
Figure 2.5 – Creating a new organization
4. Click Continue.
With that, the organization has been created. In the next section,
we are going to learn how to add a new project to this
organization.
Creating a project
After creating a new organization, Azure DevOps automatically
gives you the ability to create a new project. Perform the following
steps:
5. From there, the same wizard for creating a new project will
be displayed.
We have now covered how to create a new organization and add a
project to it. For the remaining sections of this chapter, we are
going to leave this organization and project as-is, and we are
going to use the Tailwind Traders project that we imported in
Chapter 1, Azure DevOps Overview.
Work Items
Teams use Work Items to track all the work for a team. Here, you describe what is needed for the software development project. You can track the features and the requirements, the code defects or bugs, and all other items. The Work Items that are available to you are based on the process that was chosen when the project was created.
Work Items have three different states: new, active, and closed.
During the development process, the team can update the items
accordingly so that everyone has a complete picture of the work
related to the project.
4. Next, from the left menu, select Boards and then Work
items:
Figure 2.10 – Navigating to the Work Items
5. On the next screen, you will see an overview of all the Work
Items that were generated automatically when we created
the Tailwind Traders project:
Now, let's create a new User Story. To do so, click on User Story
from the list. Now, follow these steps:
1. A new window will open where you can specify the values
for the User Story. Add the following:
a) Title: As a user, I want to edit my user profile.
c) Add tag: You can also add tags to this Work Item. These
tags can be used for searching later. I've added a tag called
Profile Improvements.
Figure 2.13 – Linking the item to a specific development
process
2. Related Work: You can also link the item to other items or
GitHub issues, such as parent-child relationships, Tested
By, Duplicate Of, and so on:
Figure 2.14 – Linking the item to related work
3. After filling in these fields, click the Save button at the top-
right-hand side of the screen:
IMPORTANT NOTE
For more information on how to create the different Work Items,
refer to the following website: https://docs.microsoft.com/en-
us/azure/devops/boards/work-items/about-work-items?
view=azure-devops&tabs=agile-process. For more information
about the different fields that are used in the Work Item forms,
refer to this website: https://docs.microsoft.com/en-
us/azure/devops/boards/work-items/guidance/work-item-field?
view=azure-devops.
Backlogs
The product backlog is a roadmap for what teams are planning to
deliver. By adding User Stories, requirements, or backlog items to
it, you can get an overview of all the functionality that needs to be
developed for the project.
2. Next, from the left menu, select Boards and then Backlogs. Then, select Tailwind Traders Team backlogs:
Figure 2.16 – Navigating to the backlog of the project
3. Here, you will see all the different User Stories for the
project, including the one that we created in the previous
demo. From the top-right, you can select the different types
of Work Items that come with the project template:
Figure 2.17 – Different types of Work Items
4. For now, we will stick with the User Stories view. You can also reorder and prioritize the Work Items from here. Let's reprioritize our newly created User Story by dragging it between numbers 2 and 3 in the list:
5. From the backlog, you can also add Work Items to the
different sprints. During creation of the Work Item, we
added this User Story to Sprint 2. From here, we can drag
this item to a different sprint if we want to:
6. You can also change the view to see more Work Items that are related to these User Stories. By clicking on the view options shown on the left-hand side of the screen, you can enable different views. Enable Parent, which displays epics and features:
Figure 2.20 – Displaying the parent items
Boards
Another way to look at the different Work Items you have is by
using boards. Each project comes with a preconfigured Kanban
board that can be used to manage and visualize the flow of the
work.
Sprints
Depending on the project template that was chosen, sprints can have a different name. In our Tailwind Traders project, the Agile project template is being used, which changes the name to Iterations. However, Azure DevOps treats these the same as Sprints.
You can also drag the User Stories to another sprint in here
and reprioritize them if needed.
Queries
You can filter Work Items based on the filter criteria that you provide in Azure DevOps. This way, you can easily get an overview of all the Work Items that are of a particular type, are in a particular state, or have a particular tag. This can be done within a project, but also across different projects.
To create different queries and search for Work Items, perform the
following steps:
3. Then, click on Run query. The result will display the Work
Item that we created in the first step of this section:
Figure 2.29 – Search result
IMPORTANT NOTE
This was a basic example of the search queries that you can
create. For more in-depth information, you can refer to
https://docs.microsoft.com/en-
us/azure/devops/project/search/overview?view=azure-devops.
Further reading
Check out the following links for more information about the
topics that were covered in this chapter:
Source Control
Management with Azure
DevOps
Source control management (SCM) is a vital part of every company that develops software professionally, and of every developer who wants a safe way to store and manage their code.
In this chapter, we'll learn how Azure DevOps can help with
managing source code professionally and securely. In this chapter,
we'll cover the following topics:
By the end of this chapter, you will have learned about all the
concepts you can use to apply SCM techniques to your team using
Azure DevOps.
Technical requirements
To follow this chapter, you need to have an active Azure DevOps
organization and Visual Studio or Visual Studio Code installed on
your development machine.
Understanding SCM
Source control (or version control) is a software practice used to track and manage changes in source code. This is an extremely important practice because it lets you maintain a single source of truth for the code base and helps different developers collaborate on a single software project (where they all work on the same code base).
3. You create a new file in your local repository and then you
save the changes locally (stage and commit).
5. You pull the changes from the remote repository to the local
one (to align your code with the remote repository if other
developers have made modifications).
Snapshots are the way Git keeps track of your code history.
A snapshot essentially records what all your files look like at
a given point in time. You decide when to take a snapshot
and of what files.
# Stage all the modified files
git add .
# Commit the staged changes to the local repository
git commit -m "Describe your changes"
# Pull the latest changes from the remote repository
git pull
# Push your local commits to the remote repository
git push
Exploring branching
strategies
A branch is a version of your code stored in an SCM system. When
using SCM with Git, choosing the best branching strategy to adopt
for your team is crucial because it helps you have a reliable code
base and fast delivery.
With SCM, if you're not using branching, you always have a single
version of your code (master branch) and you always commit to
this branch:
GitHub Flow
GitLab Flow
Git Flow
GitHub Flow
GitHub Flow is one of the most widely used branching strategies
and is quite simple to adopt.
GitLab Flow
GitLab Flow is another popular branching strategy that's widely
used, especially when you need to support multiple environments
(such as production, staging, development, and so on) in your
SCM process. The following diagram represents this flow:
Git Flow
Git Flow is a workflow that's used when you have a scheduled
release cycle. The following diagram represents this flow:
Figure 3.7 – Git Flow
According to this workflow, you have a master branch and a
develop branch that are always live, and then some other
branches that are not always live (can be deleted). The master
branch contains the released code, while the develop branch
contains the code that you're working on.
Every time you add a new feature to your code base, you create a
feature branch, starting from the develop branch, and then you
merge the feature branch into develop when the
implementation is finished. Here, you never merge into the
master branch.
If a serious bug occurs in production, this flow says that you can
create a fix branch from the master, fix the bug, and then merge
this branch into master again directly. You can also merge it into
the release branch if it's present, or into develop otherwise. If
you have merged the code into the release branch, the develop
branch will have the fix when you merge the release branch.
The first step when working with Azure DevOps is to create a new
project inside your organization. When you create a new project
with Azure DevOps, you're prompted to choose the version control
system you want to use (shown in the red box in the following
screenshot):
Figure 3.8 – Create new project
By clicking the OK button, the new project will be created in your
Azure DevOps organization.
Once the project has been provisioned, you can manage your
repositories by going to the Repos hub on the left bar in Azure
DevOps (see the following screenshot). This is where your files will
be stored and where you can start creating repositories and
managing branches, pull requests, and so on:
Figure 3.9 – Repos
Starting from Repos, every developer can clone a repository
locally and work directly from Visual Studio or Visual Studio Code
while being connected to Azure DevOps in order to push code
modifications, pull and create branches, make commits, and start
pull requests.
When you start a new project from scratch, Azure DevOps creates
an empty repository for you. You can load your code into this
repository manually (via upload) or you can clone from a remote
repository (for example, GitHub) to Azure DevOps.
From here, you'll see a window that shows you the clone
repository's URL. You can clone this repository by using the git
clone <Repository URL> command or directly in Visual Studio or
Visual Studio Code by using one of the options shown in the
following screenshot:
Figure 3.13 – Cloning options
Here, I'm cloning the project to Visual Studio Code. Azure DevOps
prompts me to select a folder where I will save the project (local
folder on your development machine), then opens Visual Studio
Code and starts cloning the remote repository:
Once the cloning process has finished, you will have a local copy
of the master branch of the remote repository in the selected
folder:
Figure 3.16 – Local copy of the remote repository
When you click the Import button, the remote GitHub repository
import process will start and you will see an image showing its
progress:
Figure 3.21 – Processing the import repository request
Once the import process has finished, you'll have the code
available in Azure Repos. Please remember that when importing a
repository from GitHub, the history and revision information is also
imported into Azure DevOps for complete traceability:
Figure 3.22 – History of the imported repository
Working with commits,
pushes, and branches
Once you've cloned the remote repository to your local Git repository, you can start coding (creating new files or modifying existing ones).
Every time you create or change a file, Git records the changes in
the local repository. You'll see the Visual Studio Code source
control icon start signaling that a file has been modified. In the
following screenshot, for example, I've added a comment to a file
in my project. After saving this file, the Git engine says that I have
an uncommitted file:
Figure 3.23 – Uncommitted file alert
If you click on the Source Control icon in the left bar, you will
see the uncommitted file. From here, you can select the changes
that you want to commit and stage them. Every commit is done
locally. You can stage a modification by clicking the + icon and
then perform a commit of all your staged files by clicking the
Commit button in the top toolbar. Every commit must have a
message that explains the reason for this commit:
Now, the files are locally committed into your local master branch
(although it's not recommended to do this, as explained later). To
sync these modifications to the online repository in Azure DevOps,
you can click the Synchronize Changes button on the bottom
bar in Visual Studio Code (this visually indicates that you have
some modifications that must be pushed online), as highlighted in
red in the following screenshot:
Figure 3.25 – Modifications to be pushed online
Alternatively, you can select the Git: push command from the command bar, as follows:
Now, all the code modifications have been pushed online to the
master branch. If you go to Azure DevOps in the Repos hub and
select the Commits menu, you will see the history of every
commit for the selected branch:
Figure 3.27 – Commit history
In this way, we're directly working on the master branch. This is
not how you work in a real team of developers because if every
developer commits directly to the master branch, you cannot
guarantee that this branch will be always stable. The best way to
work is by using the previously explained GitHub Flow. So, you
should create a new branch, work on this newly created branch,
and only when the work is finished should you create a pull
request to merge your branch to the master branch.
3. Now, select the name for the new branch (here, it's called
development):
Figure 3.31 – Assigning a branch name
This action can also be done directly from Visual Studio Code with
the Azure Repos extension by using the Team:View History
command:
Figure 3.37 – The Team:View History command from Visual Studio
Code
Here, you have a set of options that you can set to control your
selected branch. We'll look at each of these options in detail in the
following sections.
Require a minimum number of
reviewers
This option allows you to specify the minimum number of reviewers that must approve a code modification. If any reviewer rejects the code changes, the modifications are not approved. If you select Allow completion even if some reviewers vote to wait or reject, the pull request can be completed anyway. The Requestors can approve their own changes option enables the creator of a pull request to approve their own code changes:
Build validation
This section allows you to specify a set of rules for building your
code before the pull request can be completed (useful for catching
problems early). Upon clicking Add build policy, a new panel
appears:
Figure 3.44 – Add build policy
Here, you can specify what build pipeline definition you wish to
apply and if it must be triggered automatically when the branch is
updated or manually. We'll talk about build pipelines in detail in
Chapter 4, Understanding Azure DevOps Pipelines.
From here, you can add a branch protection policy and select one
of these options:
You can view the incoming pull requests for a specific repository
on Azure DevOps by selecting the Pull requests menu from the
Repos hub, as shown in the following screenshot:
Figure 3.49 – Pull requests view
You can also filter this list to view only your pull requests or only
the Active, Completed, or Abandoned pull requests.
This will prompt you to open Azure DevOps. After confirming this,
the pull request window will open.
From Visual Studio, select the Team Explorer panel. From here,
you can click on Pull Requests to start a pull request:
Figure 3.54 – Creating a pull request from Visual Studio
Handling a pull request
All the different ways to handle a pull request that we've
described converge to a unique point: in Azure DevOps, the Pull
requests window opens, and you need to fill in the details of your
pull request activity. As an example, this is the pull request that
we started after the previous commit on the development
branch:
Figure 3.55 – New pull request window
Here, you can immediately see that the pull request merges a
branch into another branch (in my case, development will be
merged into master). You need to provide a title and a
description of this pull request (that clearly describes the changes
and the implementations you made in the merge), as well as
attach links and add team members (users or groups) that will be
responsible for reviewing this pull request. You can also include
work items (this option will be automatically included if you
completed a commit attached to a work item previously).
In the Files section, you can see what this pull request will do in
the destination branch (for every file). As an example, this is what
my pull request shows me:
If you've specified some reviewers, they will see the details of the
code modifications, which means they can add comments and
interact with the developers.
To create the pull request, simply click the Create button.
Once the pull request has been created, you can complete the
pull request by clicking on the Complete button in the top-right
corner of the pull request window (you can do this after the
optional approval phase and after passing the branch rules):
Figure 3.57 – Completing a pull request
Here, you can insert a title and a description for the merge
operation, select the merge type to apply, and select the post-
completion operation to apply (if the associated work items
should be marked as completed after merging and if the source
branch must be deleted after the merge operation).
Regarding the type of merge operation to apply, you can choose
from the following options:
Azure DevOps gives you a nice animated graph that shows the final result of the merge. To complete the pull request, click on Complete merge. If any merge conflicts arise, you need to resolve them first. With this, the merging phase starts:
Figure 3.59 – Completing the pull request
Tagging a release
Git Tags are references that point to specific points in the Git
history. Tags are used in Azure DevOps for marking a particular
release (or branch) with an identifier that will be shared internally
in your team to identify, for example, the "version" of your code
base.
To use tags for your branches, in the Repos hub in Azure DevOps,
go to the Tags menu:
When you click on Create, the tag will be applied to your branch:
Figure 3.63 – Tag applied to a branch
Summary
In this chapter, we learned how to handle source control
management with Azure DevOps and why it's so important when
working in teams when developing code.
In the next chapter, we'll learn how to create build pipelines with
Azure DevOps for implementing CI/CD practices.
Chapter 4:
Understanding Azure
DevOps Pipelines
When adopting Azure DevOps in your organization, one of the most important decisions you must make is how to define the pipeline of your development process. A pipeline is a company-defined model that describes the steps and actions that a code base must go through, from building to the final release phase. It's a key part of any DevOps architecture.
In this chapter, we'll learn how to define and use pipelines with Azure DevOps for building code.
Retention of builds
Multi-stage pipeline
Technical requirements
To follow this chapter, you need to have an active Azure DevOps organization.
Implementing a CI/CD
process
When adopting DevOps in a company, implementing the right DevOps tools with the right DevOps processes is crucial. One of the fundamental flows in a DevOps implementation is the continuous integration (CI) and continuous delivery (CD) process, which can help developers build, test, and distribute a code base in a quicker, more structured, and safer way.
CI is a software engineering practice where developers in a team integrate code modifications into a central repository several times a day. When a code modification is integrated into a particular branch (normally with a pull request, as explained in the previous chapter), a new build is triggered in order to check the code and detect integration bugs quickly. Also, automatic tests (if available) are executed during this phase to check for breakages.
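As a minimal sketch of this idea (the test command is an assumption), a CI pipeline triggers on every integration into the branch and runs the automatic tests:

trigger:
- master                   # every integration into master triggers a new build

pool:
  vmImage: 'ubuntu-latest'

steps:
- script: dotnet test      # assumed command that runs the automatic test suite
  displayName: 'Run automatic tests on every integration'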
Overview of Azure
Pipelines
Azure Pipelines is a cloud service offered by the Azure platform that allows you to automate the building, testing, and releasing phases of your development life cycle (CI/CD). Azure Pipelines works with any language or platform, it's integrated into Azure DevOps, and you can build your code on Windows, Linux, or macOS machines.
A pipeline with multiple jobs in a single stage can be represented as follows:

pool:
  vmImage: 'ubuntu-latest'
jobs:
- job: job1
  steps:
- job: job2
  steps:
If you're using stages when defining your pipeline, this is what is called a fan-out/fan-in scenario:
Here, each stage is a fan-in operation, where all the jobs in the stage (which can consist of multiple tasks that
run in sequence) must be finished before the next stage can be triggered (only one stage can be executing at a
time). We'll talk about multi-stage pipelines later in this chapter.
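As a hedged sketch of a fan-out/fan-in flow (stage and job names are illustrative assumptions), two test stages depend on the build stage, and the deploy stage waits for both of them:

stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    - script: echo "Building"
- stage: TestWindows
  dependsOn: Build           # fans out from Build
  jobs:
  - job: TestOnWindows
    steps:
    - script: echo "Testing on Windows"
- stage: TestLinux
  dependsOn: Build           # fans out from Build
  jobs:
  - job: TestOnLinux
    steps:
    - script: echo "Testing on Linux"
- stage: Deploy
  dependsOn:                 # fans in: waits for both test stages
  - TestWindows
  - TestLinux
  jobs:
  - job: DeployJob
    steps:
    - script: echo "Deploying"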
When defining agents for your pipeline, you have essentially two types of possible agents:
Microsoft-hosted agents: This is a service totally managed by Microsoft and it's cleared on every
execution of the pipeline (on each pipeline execution, you have a fresh new environment).
Self-hosted agents: This is a service that you set up and manage yourself. It can be a custom virtual machine on Azure or a custom on-premises machine inside your infrastructure. On a self-hosted agent, you can install all the software you need for your builds, and this persists across pipeline executions. A self-hosted agent can run on Windows, Linux, or macOS, or in a Docker container.
Microsoft-hosted agents
Microsoft-hosted agents are the simplest way to define an agent for your pipeline. Azure Pipelines provides a Microsoft-hosted agent pool by default, called Azure Pipelines:
By selecting this agent pool, you can choose between different virtual machine images for executing your pipeline. At the time of writing, the available standard agent images are as follows:
Table 4.1 – Available Microsoft-hosted agent images
Each of these images has its own set of software automatically installed. You can install additional tools by
using the pre-defined Tool Installer task in your pipeline definition. More information can be found here:
https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/?view=azure-devops#tool.
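For example, a minimal sketch of one of these tool installer tasks (the Node.js version is an illustrative assumption) is as follows:

- task: NodeTool@0
  inputs:
    versionSpec: '14.x'      # assumed Node.js version to install on the agent
  displayName: 'Install Node.js'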
When you create a pipeline using a Microsoft-hosted agent, you just need to specify the name of the virtual machine image to use for your agent from the preceding table. As an example, this is the definition of a hosted agent that's using Windows Server 2019 with a Visual Studio 2019 image:

- job: Windows
  pool:
    vmImage: 'windows-latest'
Self-hosted agents
While Microsoft-hosted agents are a SaaS service, self-hosted agents are private agents that you can configure as per your needs by using Azure virtual machines or your on-premises infrastructure. You are responsible for providing all the necessary software and tools to execute your pipeline, and for maintaining and upgrading your agent.
Windows
Linux
macOS
Docker
These steps are similar for all the environments. Next, we'll learn how to create a self-hosted Windows agent.
A self-hosted Windows agent is used to build and deploy applications built on top of Microsoft's platforms (such
as .NET applications, Azure cloud apps, and so on) but also for other types of platforms, such as Java and
Android apps.
The first step to perform when creating an agent is to register the agent in your Azure DevOps organization. To
do so, you need to sign into your DevOps organization as an administrator and from the User Settings menu,
click on Personal access tokens:
Figure 4.7 – Personal access tokens
Here, you can create a new personal access token for your organization with an expiration date and with full
access or with a custom defined access level (if you select the custom defined scope, you need to select the
permission you want for each scope). To see the complete list of available scopes, click on the Show all
scopes link at the bottom of this window:
Figure 4.8 – Create a new personal access token
Please check that the Agent Pools scope has the Read & manage permission enabled.
When finished, click on Create and then copy the generated token before closing the window (it will only be
shown once).
Important Note
The user that you will be using for the agent must be a user with permissions to register the agent. You can
check this by going to Organization Settings | Agent pools, selecting the Default pool, and clicking on
Security.
Now, you need to download the agent software and configure it. From Organization Settings | Agent Pools,
select the Default pool and from the Agents tab, click on New agent:
Figure 4.9 – Creating a new agent
The Get the agent window will open. Select Windows as the target platform, select x64 or x86 as your target
agent platform (machine) accordingly, and then click on the Download button:
Figure 4.10 – Agent software download page
This procedure will download a package (normally called vsts-agent-win-x64-2.166.4.zip). You need to extract this package and run the config.cmd script on the agent machine (an Azure VM or your on-premises server, which will act as an agent for your builds):
You can run the agent interactively or as a service; running it as a service is recommended if you want to automate builds.
To register the agent, you need to insert the agent pool, the agent name, and the work folder (you can leave
the default value as-is).
Finally, you need to decide whether your agent must be executed interactively or as a service. As we
mentioned previously, running the agent as a service is recommended, but in many cases, the interactive
option can be helpful because it gives you a console where you can see the status and running UI tests.
In both cases, please be aware of the user account you select for running the agent. The default account is the
built-in Network Service user, but this user normally doesn't have all the needed permissions on local folders.
Using an administrator account can help you solve a lot of problems.
If the setup has been completed successfully, you should see a service running on your agent machine and a
new agent that pops up on your agent pool in Azure DevOps:
Figure 4.13 – New agent created
If you select the agent and then go to the Capabilities section, you will be able to see all its capabilities (OS
version, OS architecture, computer name, software installed, and so on):
Figure 4.14 – Agent capabilities
The agent's capabilities can be automatically discovered by the agent software or added by you (user-defined
capabilities) if you click on the Add a new capability action. Capabilities are used by the pipeline engine to
redirect a particular build to the correct agent according to the required capabilities for the pipeline (demands).
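As a short sketch, a YAML pipeline can target a self-hosted pool and declare the capabilities it demands (the pool name and capability names here are assumptions):

pool:
  name: Default                     # assumed self-hosted agent pool name
  demands:
  - msbuild                         # the agent must expose an msbuild capability
  - Agent.OS -equals Windows_NT     # the agent must run on Windows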
When the agent is online, it's ready to accept the code builds that you queue.
Remember that you can also install multiple agents on the same machine (for example, if you want to execute pipelines or handle jobs in parallel), but this scenario is only recommended if the agents will not share resources.
Self-hosted agents are the way to go when you need a particular environment configuration, when you need a
particular piece of software or tools installed on the agent, and when you need more power for your builds. Self-
hosted agents are also the way to go when you need to preserve the environment between each run of your
builds. A self-hosted agent is normally the right choice when you need to have better control of your agent or
you wish to deploy your build to on-premise environments (not accessible externally). It also normally allows
you to save money.
Now that we've discussed the possible build agents that you can use for your build pipelines, in the next section, we'll provide an overview of YAML, the scripting language that allows you to define a pipeline.
YAML uses indentation to define the structure of object definitions, rather than braces, and quotation marks around strings are usually optional. It's simply a data representation language and is not used for executing commands.
With Azure DevOps, YAML is extremely important because it allows you to define a pipeline by using a script
definition instead of a graphical interface (that cannot be ported between projects).
http://yaml.org/
Scalars
As an example, the following are scalar variables that have been defined in YAML:

Number: 1975
quotedText: "some text description"
notQuotedtext: strings can be also without quotes
boolean: true
nullKeyValue: null

You can also define multi-line keys by using ?, followed by a space, as follows:

? |
  This is a key
  that has multiple lines
: and this is its value

A list is defined by placing each item on its own line, prefixed with a dash; a list item can itself contain a nested object:

- Mercedes
- BMW
- Drivers:
    age: 45
Dictionaries
You can define a Dictionary object by using YAML in the following way:

CarDetails:
  make: Mercedes
  model: GLC220
  fuel: Gasoline
Document structure
YAML uses three dashes, ---, to separate directives from document content and to identify the start of a document. As an example, the following YAML defines two documents in a single file:

--- # Products purchased
- item: Surface 4
  quantity: 1
- item: ...
  quantity: 3
- item: ...
  quantity: 1
--- # Invoice
invoice: 20-198754
date: 2020-05-27
bill-to: C002456
Name: Stefano Demiliani
address:
  lines: |
    Viale Pasubio, 21
  city: Milan
  state: MI
  postal: 20154
ship-to: C002456
product:
- itemNo: ITEM001
  quantity: 1
  price: 1850.00
- sku: ITEM002
  quantity: 2
  price: 65.00
tax: 80.50
total: 1995.50
comments:
Now that we've provided a quick overview of the YAML syntax, in the next section, we'll learn how to create a
build pipeline with Azure DevOps.
The prerequisite to creating a build pipeline with Azure DevOps is obviously to have some code stored inside a
repository.
To create a build pipeline with Azure DevOps, you need to go to the Pipelines hub and select the Pipelines
action:
Figure 4.15 – Build pipeline creation
From here, you can create a new build pipeline by selecting the New pipeline button. When pressed, you will
see the following screen, which asks you for a code repository:
From here, you can define your pipeline in two ways:
1. Using a YAML file to create your pipeline definition. This is what happens when you select the repository in this window.
2. Using the classic editor (graphical user interface). This is what happens when you click on the Use the classic editor link at the bottom of this page.
In the next section, we'll learn how to create a build pipeline by using these two methods.
When you click on the Use the classic editor link, you need to select the repository where your code is stored
(Azure Repos Git, GitHub, GitHub Enterprise Server, Subversion, TFVC, Bitbucket Cloud, or Other Git)
and the branch that the build pipeline will be connected to:
Figure 4.17 – Classic editor pipeline definition
Then, you need to choose a template for the kind of app you're building. You have a set of predefined templates
to choose from (that you can customize later), but you can also start from an empty template:
Figure 4.18 – Pipeline template selection
If predefined templates fit your needs, you can start by using them; otherwise, it's recommended to create a
custom pipeline by selecting the actions you need.
Here, my application that's stored in the Azure DevOps project repository is an ASP.NET web application (an e-
commerce website project called PartsUnlimited; you can find the public repository at the following URL:
https://github.com/Microsoft/PartsUnlimited), so I've selected the ASP.NET template.
When selected, this is the pipeline template that will be created for you automatically:
Figure 4.19 – Pipeline created from a template
The agent job starts by installing the NuGet package manager and restoring the required packages for building
the project in the selected repository. For these actions, the pipeline definition contains the tasks that you can
see in the following screenshot:
Figure 4.21 – NuGet tasks
There's also a task for testing the solution and publishing the test results:
Figure 4.23 – Test Assemblies task
The last steps are for publishing the output of the build process as artifacts:
Figure 4.24 – Publishing tasks
If you select the Variables tab, you will see that there are some parameters that are used during the build
process. Here, you can create your own variables to use inside the pipeline if needed:
Figure 4.25 – Pipeline variables
The next section is called Triggers. Here, you can define the triggers that start your pipeline. By default, no triggers are enabled initially, but you can enable CI so that your pipeline starts automatically on every commit to the selected branch:
Figure 4.26 – Pipeline triggers
Important Note
Enabling CI is a recommended practice if you want every piece of code that's committed on a branch (for
example, on the master branch) to always be tested and safely controlled. In this way, you can be assured that
the code is always working as expected.
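A sketch of the equivalent CI trigger in a YAML pipeline (the branch and path filters are illustrative assumptions) looks like this:

trigger:
  branches:
    include:
    - master                # build every commit to master
  paths:
    exclude:
    - docs/*                # assumed: skip builds for documentation-only changes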
In the Options tab, you can set some options related to your build definition. For example, you can link all the work items to associated changes when a build completes successfully, create work items on build failure, set the status badge for your pipeline, specify timeouts, and so on:
Figure 4.27 – Pipeline options
The Retention tab, on the other hand, is used for configuring the retention policy for this specific pipeline (how
many days to keep artifacts for, the number of days to keep runs and pull requests for, and so on). Doing this
will override the general retention settings. We'll talk about them later in the Retention of builds section.
Once you've finished defining the pipeline, you can click Save & queue to save your definition. By clicking on
Save and run, the pipeline will be placed in a queue and wait for an agent:
When the agent is found, the pipeline is executed and your code is built:
Figure 4.29 – Pipeline execution starting
You can follow the execution of each step of the pipeline and see the related logs. If the pipeline ends
successfully, you can view a summary of its execution:
You can also select the Tests tab to review the test execution status:
Figure 4.31 – Pipeline tests result
In the next section, we'll learn how to create a YAML pipeline for this application.
To start creating a YAML pipeline, go to the Pipeline section in Azure DevOps and click on New Pipeline.
Here, instead of selecting the classic editor (as we did in the previous section), just select the type of repository
where your code is located (Azure Repos Git, GitHub, BitBucket, and so on):
Figure 4.32 – YAML pipeline definition
The system now analyzes your repository and proposes a set of available templates according to the code
stored in the repository itself. You can start from a blank YAML template or you can select a template. Here, I'm
selecting the ASP.NET template:
Figure 4.34 – YAML pipeline – template selection
The system creates a YAML file (called azure-pipelines.yml), as shown in the following screenshot:
Figure 4.35 – YAML pipeline definition
The generated YAML definition contains a set of tasks, just like in the previous example, but here, these tasks are expressed in their YAML form. The complete generated file is as follows:

# ASP.NET
# Add steps that publish symbols, save build artifacts, deploy, and more:
# https://docs.microsoft.com/azure/devops/pipelines/apps/aspnet/build-aspnet-4

trigger:
- master

pool:
  vmImage: 'windows-latest'

variables:
  solution: '**/*.sln'
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Release'

steps:
- task: NuGetToolInstaller@1
- task: NuGetCommand@2
  inputs:
    restoreSolution: '$(solution)'
- task: VSBuild@1
  inputs:
    solution: '$(solution)'
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'
- task: VSTest@2
  inputs:
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'
Here, I add two more tasks for publishing the symbols and the final artifacts of the pipeline:

- task: PublishSymbols@2
  inputs:
    SearchPattern: '**\bin\**\*.pdb'
    PublishSymbols: false
  continueOnError: true
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(build.artifactstagingdirectory)'
    ArtifactName: '$(Parameters.ArtifactName)'
  condition: succeededOrFailed()
As you can see, the YAML file contains the trigger that starts the pipeline (here, this is a commit on the master
branch), the agent pool to use, the pipeline variables, and the sequence of each task to execute (with its
specific parameters).
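As a short sketch of how such pipeline variables are consumed, the macro syntax $(variableName) is expanded when the pipeline runs (the variable name below is an assumption):

variables:
  buildConfiguration: 'Release'     # assumed custom pipeline variable

steps:
- script: echo "Building in $(buildConfiguration) configuration"   # macro syntax is expanded at runtime
  displayName: 'Print the build configuration'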
Click on Save and run as shown in the previous screenshot to queue the pipeline and have it executed. The
following screenshot shows the executed YAML pipeline.
To add new tasks, it's useful to use the assistant tool on the right of the editor frame. It allows you to have a
Tasks list where you can search for a task, fill in the necessary parameters, and then have the final YAML
definition:
When you choose to create a pipeline with YAML, Azure DevOps creates a file that's stored in the same
repository that your code is stored in:
Figure 4.38 – YAML pipeline file created
For a complete reference to the YAML schema for a pipeline, I suggest following this link:
https://docs.microsoft.com/en-us/azure/devops/pipelines/yaml-schema?view=azure-
devops&tabs=schema%2Cparameter-schema
Retention of builds
When you run a pipeline, Azure DevOps logs each step's execution and stores the final artifacts and tests for
each run.
Azure DevOps has a default retention policy for pipeline execution of 30 days. You can change these default
values by going to Project settings | Pipelines | Settings:
Figure 4.39 – Pipeline retention policy
You can also use the Copy files task to store your build and artifacts data in external storage so that you can
preserve them for longer than what's specified in the retention policy:
- task: CopyFiles@2
  inputs:
    SourceFolder: '$(Build.SourcesDirectory)'
    Contents: '**'
    TargetFolder: '\\networkserver\storage\$(Build.BuildNumber)'
Important Note
Remember that any data saved as artifacts with the Publish Build Artifacts task is periodically deleted.
More information about the Copy files task can be found here: https://docs.microsoft.com/en-
us/azure/devops/pipelines/tasks/utility/copy-files?view=azure-devops&tabs=yaml.
Multi-stage pipeline
As we explained previously, you can organize the jobs in your pipeline into stages. Stages are logical
boundaries inside a pipeline flow (units of works that you can assign to an agent) that allow you to isolate the
work, pause the pipeline, and execute checks or other actions. By default, every pipeline is composed of one
stage, but you can create more than one and arrange those stages into a dependency graph.
As an example, this is the skeleton of a multi-stage pipeline:

stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
- stage: Test
  jobs:
  - job: TestOne
    steps:
  - job: TestTwo
    steps:
- stage: Deploy
  jobs:
  - job: Deploy
    steps:
As an example of how to create a multi-stage pipeline with YAML, let's look at a pipeline that builds code in your
repository (with .NET Core SDK) and publishes the artifacts as NuGet packages. The pipeline definition is as
follows. The pipeline uses the stages keyword to identify that this is a multi-stage pipeline.
In the first stage definition (Build), we have the tasks for building the code:

trigger:
- master

stages:
- stage: 'Build'
  variables:
    buildConfiguration: 'Release'
  jobs:
  - job:
    pool:
      vmImage: 'ubuntu-latest'
    workspace:
      clean: all
    steps:
    - task: UseDotNet@2
      inputs:
        packageType: sdk
        version: 2.2.x
        installationPath: $(Agent.ToolsDirectory)/dotnet
    - task: DotNetCoreCLI@2
      inputs:
        command: restore
        projects: '**/*.csproj'
    - task: DotNetCoreCLI@2
      inputs:
        command: build
        projects: '**/*.csproj'
Here, we installed the .NET Core SDK by using the UseDotNet standard task template that's available in Azure DevOps (more information can be found here: https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/tool/dotnet-core-tool-installer?view=azure-devops). After that, we restored the required NuGet packages and built the solution.
Now, we have the task of creating the release version of the NuGet package. This package is saved in the packages/releases folder of the artifact staging directory. Here, we use nobuild: true because this task does not need to rebuild the solution (no further compilation):

- task: DotNetCoreCLI@2
  inputs:
    command: pack
    packDirectory: '$(Build.ArtifactStagingDirectory)/packages/releases'
    nobuild: true
As the next step, we have the task of creating the prerelease version of the NuGet package. In this task, we're using the buildProperties option to add the build number to the package version (for example, if the package version is 2.0.0.0 and the build number is 20200521.1, the package version will be 2.0.0.0.20200521.1). Here, a build of the package is mandatory (for retrieving the build ID):

- task: DotNetCoreCLI@2
  inputs:
    command: pack
    buildProperties: 'VersionSuffix="$(Build.BuildNumber)"'
    packDirectory: '$(Build.ArtifactStagingDirectory)/packages/prereleases'
- publish: '$(Build.ArtifactStagingDirectory)/packages'
  artifact: 'packages'
Next, we need to define the second stage, called PublishPrereleaseNuGetPackage. Here, we skip the checkout of the repository, and the download step downloads the packages artifact that we published in the previous Build stage. Then, the NuGetCommand task publishes the prerelease package to an internal feed in Azure DevOps called Test:

- stage: 'PublishPrereleaseNuGetPackage'
  dependsOn: 'Build'
  condition: succeeded()
  jobs:
  - job:
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - checkout: none
    - download: current
      artifact: 'packages'
    - task: NuGetCommand@2
      inputs:
        command: 'push'
        packagesToPush: '$(Pipeline.Workspace)/packages/prereleases/*.nupkg'
        nuGetFeedType: 'internal'
        publishVstsFeed: 'Test'
Now, we have to define the third stage, called PublishReleaseNuGetPackage, which publishes the release version of our package to NuGet:

- stage: 'PublishReleaseNuGetPackage'
  dependsOn: 'PublishPrereleaseNuGetPackage'
  condition: succeeded()
  jobs:
  - deployment:
    pool:
      vmImage: 'ubuntu-latest'
    environment: 'nuget-org'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: NuGetCommand@2
            inputs:
              command: 'push'
              packagesToPush: '$(Pipeline.Workspace)/packages/releases/*.nupkg'
              nuGetFeedType: 'external'
              publishFeedCredentials: 'NuGet'
This stage uses a deployment job to publish the package to the configured environment (here, this is called nuget-org). An environment is a collection of resources that acts as a deployment target for a pipeline; Azure DevOps records the deployment history against it.
In the NuGetCommand task, we specify the package to push and declare that the feed we're pushing to is external (nuGetFeedType). The feed is selected by the publishFeedCredentials property, which is set to the name of the service connection we created.
Once the environment has been created, in order to publish the package to NuGet, you need to create a new service connection by going to Project Settings | Service Connections | Create Service Connection, selecting NuGet from the list of available service connection types, and then configuring the connection according to your NuGet account:
Figure 4.42 – New NuGet service connection
With that, we have created a multi-stage build pipeline. When the pipeline is executed and all the stages
terminate successfully, you will see a results diagram that looks as follows:
Now that we have understood what a multi-stage pipeline is, we'll create some pipelines with GitHub
repositories in the next section.
By using Azure DevOps and the Azure Pipelines service, you can also create pipelines for a repository stored on GitHub, thus triggering a build pipeline on every commit to a branch inside the GitHub repository. We will do this by following these steps:
1. To use Azure Pipelines to build your GitHub repository, you need to add the Azure DevOps extension to
your GitHub account. From your GitHub page, select the Marketplace link from the top bar and search
for Azure Pipelines. Select the Azure Pipelines extension and click on Set up a plan, as shown in the
following screenshot:
2. Select the Free plan, click the Install it for free button, and then click Complete order and begin
installation.
3. Now, the Azure Pipelines installation will ask you if this app should be available for all your repositories or
only for selected repositories. Select the desired option and click on Install:
Figure 4.45 – Azure Pipelines on GitHub – installation
4. You will now be redirected to Azure DevOps, where you can create a new project (or select an existing
one) for handling the build process. Here, I'm going to create a new project:
7. Click the Save and run button. The pipeline will be queued, wait for an available agent, and then be executed.
Every time you commit code inside your GitHub repository, the build pipeline on Azure DevOps will be
triggered automatically.
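As a reference, a minimal azure-pipelines.yml committed to the GitHub repository could look like the following sketch (assuming a .NET Core project; the trigger branch and the tasks are assumptions to adapt to your repository):

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
# Build every project in the repository on each push to master
- task: DotNetCoreCLI@2
  inputs:
    command: build
    projects: '**/*.csproj'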
If you're building a public repository on GitHub, it's quite useful to show all your users that the code inside this repository has been checked and tested with a build pipeline, together with the result of the latest build. You can do that by placing a badge in your repository.
A badge is a dynamically generated image that reflects the status of a build (never built, success, or fail)
and it's hosted on Azure DevOps.
8. To do so, select your pipeline in Azure DevOps, click on the three dots on the right, and select Status
badge:
9. From here, you can copy the Sample markdown string and place it in the Readme.md file on your
GitHub repository:
Figure 4.52 – Build status badge markdown
Every time a user accesses your repository, they will be able to see the status of the latest build via a graphical
badge:
As an example of how to handle parallel jobs in a pipeline, consider a simple pipeline where you have to
execute three PowerShell scripts called Task 1, Task 2, and Final Task. Task 1 and Task 2 can be executed in
parallel, while Final Task can only be executed when the previous two tasks are completed.
When you start creating a new pipeline (I'm using the classic editor here for simplicity), Azure DevOps creates
an agent job (here, this is called Agent Job 1). You can add your task to this agent. By selecting the agent job,
you can specify the agent pool where this task runs. Here, I want this task to be executed on a Microsoft-hosted
agent pool:
Figure 4.54 – Agent specification
Then, to add a new agent job to your pipeline (for executing the other task independently), click the three dots beside the pipeline and select Add an agent job:
Now, we'll add a second agent job (here, this is called Agent job 2) that runs on a self-hosted agent. This job
will execute the Task 2 PowerShell script:
Figure 4.56 – Agent selection
Finally, we'll add a new agent job (here, this is called Agent Job 3) to execute the Final Task that will run on a
Microsoft-hosted agent. However, this job has dependencies from Agent Job 1 and Agent Job 2:
Figure 4.57 – Agent job dependencies
In this way, the first two tasks start in parallel and the final job will wait until the two previous tasks are
executed.
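Although this example uses the classic editor, the same topology can be expressed in a YAML pipeline through job dependencies. The following is a minimal sketch (job names, pool names, and scripts are illustrative): Task1 and Task2 declare no dependencies, so they run in parallel, while FinalTask waits for both through dependsOn:

jobs:
- job: Task1
  pool:
    vmImage: 'windows-latest'
  steps:
  - powershell: Write-Host "Task 1"
- job: Task2
  pool:
    name: Default   # an assumed self-hosted agent pool
  steps:
  - powershell: Write-Host "Task 2"
- job: FinalTask
  # Runs only after both parallel jobs have completed
  dependsOn:
  - Task1
  - Task2
  pool:
    vmImage: 'windows-latest'
  steps:
  - powershell: Write-Host "Final Task"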
For more information about parallel jobs in an Azure pipeline, I recommend that you check out this page:
https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml
You can create a build agent running on Azure Container Instances (ACI) by using a custom image or by reusing one of Microsoft's available images. To create a build agent running on ACI, you need to create a personal access token for your Azure DevOps organization. To do so, from your Azure DevOps organization home page, open the user settings (top-right corner) and select Personal access tokens.
When you have the personal access token for your agent, you can create an agent on ACI by executing the following command from the Azure CLI (after connecting to your Azure subscription):

az container create -g RESOURCE_GROUP_NAME -n CONTAINER_NAME --image mcr.microsoft.com/azure-pipelines/vsts-agent --cpu 1 --memory 7 --environment-variables VSTS_ACCOUNT=AZURE_DEVOPS_ACCOUNT_NAME VSTS_TOKEN=PERSONAL_ACCESS_TOKEN VSTS_AGENT=AGENT_NAME VSTS_POOL=Default

Here, we have the following:
RESOURCE_GROUP_NAME is the name of your resource group in Azure where this resource will be
created.
CONTAINER_NAME is the name of the ACI container.
AZURE_DEVOPS_ACCOUNT_NAME is the name of your Azure DevOps account.
PERSONAL_ACCESS_TOKEN is the personal access token you created previously.
AGENT_NAME is the name of the build agent that you want to create. This will be displayed on Azure
DevOps.
--image is used to select the name of the Azure Pipelines image for creating your agent, as described
here: https://hub.docker.com/_/microsoft-azure-pipelines-vsts-agent.
VSTS_POOL is used to select the agent pool for your build agent.
Remember that you can start and stop an ACI instance by using the az container stop and az container
start commands. This can help you save money.
If you're using Windows or Linux agents, you can also run a job inside a container (isolated from the host). To run a job inside a container, you need to have Docker installed on the agent, and your pipeline must have permission to access the Docker daemon. If you're using Microsoft-hosted agents, running jobs in containers is currently supported on the windows-2019 and ubuntu-16.04 pool images.
As an example, this is a YAML definition for using a container job in a Windows pipeline:

pool:
  vmImage: 'windows-2019'

container: mcr.microsoft.com/windows/servercore:ltsc2019

steps:
- script: date /t
  workingDirectory: $(Agent.BuildDirectory)
As we mentioned previously, to run a job inside a Windows container, you need to use the windows-2019
image pool. It's required that the kernel version of the host and the container match, so here, we're using the
ltsc2019 tag to retrieve the container's image.
For a Linux-based pipeline, you need to use the ubuntu-16.04 image:

pool:
  vmImage: 'ubuntu-16.04'

container: ubuntu:16.04

steps:
- script: printenv
As you can see, the pipeline creates a container based on the selected image and runs the command (steps)
inside that container.
Summary
In this chapter, we provided an overview of the Azure Pipelines service and saw how to implement a CI/CD process by using Azure DevOps. We saw how to create a build pipeline for code hosted in a repository, both with the classic graphical editor and with a YAML definition, and how to use and create build agents. We also walked through an example of a multi-stage pipeline, used Azure DevOps pipelines to build code hosted in a GitHub repository, and looked at how to use parallel jobs in a build pipeline to improve build performance. Finally, we learned how to create a build agent on Azure Container Instances and how to use container jobs.
In the next chapter, we'll learn how to execute quality tests for our code base in a build pipeline.
Chapter 5: Running Quality Tests in a Build Pipeline
In the previous chapter, we introduced Azure Pipelines and learned how to implement a CI/CD process using Azure DevOps, GitHub, and containers.
Introduction to code coverage testing
Introduction to Feature Flags
Technical requirements
To follow this chapter, you need to have an active Azure DevOps organization. The organization that will be used in this chapter is the Parts Unlimited organization. It was created in Chapter 1, Azure DevOps Overview. You also need to have Visual Studio 2019 installed, which can be downloaded from https://visualstudio.microsoft.com/downloads/. For the last demo, you will need Visual Studio Code with the C# extension installed and the .NET Core SDK, version 3.1 or later.
Benefits of automatic testing
After adding a new feature to your application, you want to know whether it works correctly, given all the possible interactions. You also don't want to break any other features with this new functionality, and you want the code to be easily understood by others, as well as maintainable.
Introduction to unit testing
With unit testing, you break up code into small pieces, called units, that can be tested independently from each other. These units can consist of classes, methods, or single lines of code. The smaller the units, the better: this gives you a clearer view of how your code is performing and allows tests to run fast.
Running unit tests in a build pipeline
Our Parts Unlimited test project already has unit tests created, so it is a good pick for this demo. First, we are going to look at the application and the tests that were created. To do so, we have to clone the repository to our local filesystem and open the solution in Visual Studio.
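When the tests are later run in the pipeline, the YAML side typically comes down to a single test task. The following is a minimal sketch (the glob pattern for the test projects is an assumption; adjust it to your solution layout):

steps:
# Run every test project in the repository and publish the results to the pipeline
- task: DotNetCoreCLI@2
  inputs:
    command: test
    projects: '**/*Tests/*.csproj'
    arguments: '--configuration $(buildConfiguration)'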
Downloading the source code
We are going to create unit tests for the web application for Parts Unlimited. First, we need to clone the repository from Azure DevOps to our filesystem. This will allow us to add the unit tests to it using Visual Studio Code. Therefore, we must take the following steps:
With code coverage testing, you can measure how much of an application's source code is exercised by your tests. Code coverage testing measures how many lines, blocks, and classes are executed while automated tests, such as unit tests, are running.
The more code that's tested, the more confident teams can be about their code changes. By reviewing the outcome of the code coverage
tests, teams can identify what code is not covered by these tests. This information is very helpful as it reduces test debt over time.
Azure DevOps supports code coverage testing from the build pipeline. The Test Assemblies task allows us to collect code coverage testing results, and there is a separate task, called Publish Code Coverage Results, that publishes these results. This task offers out-of-the-box support for popular coverage result formats such as Cobertura and JaCoCo.
Important Note
Cobertura and JaCoCo are both Java tools that calculate the percentage of code that's accessed by tests. For more information about
Cobertura, you can refer to https://cobertura.github.io/cobertura/. For more information about JaCoCo, you can refer to
https://www.eclemma.org/jacoco/.
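In a YAML pipeline, collecting and publishing coverage could look like the following rough sketch (the Cobertura output via the coverlet.collector NuGet package and the report path are assumptions that depend on how your test projects are set up):

steps:
- task: DotNetCoreCLI@2
  inputs:
    command: test
    projects: '**/*Tests/*.csproj'
    # Assumes the test projects reference the coverlet.collector package
    arguments: '--collect:"XPlat Code Coverage"'
# Publish the Cobertura report produced by the collector
- task: PublishCodeCoverageResults@1
  inputs:
    codeCoverageTool: 'Cobertura'
    summaryFileLocation: '$(Agent.TempDirectory)/**/coverage.cobertura.xml'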
In the next section, we are going to look at how to perform code coverage testing by using Azure DevOps.
To perform code coverage testing, we need to open the build pipeline that we created in the previous demo. Let's get started:
With the build pipeline open, select the Edit button in the right-hand corner:
Once the test has run automatically and the build process has finished, you can assign the results to work items that have been added to
the backlog and sprint. For this, you must perform the following steps:
Go back to the build pipeline and select the pipeline that ran last. Click Test from the top menu.
For the results table, make sure that Passed is selected and that Failed and Aborted have been deselected:
You can use a Feature Flag to turn features in your code, such as specific methods or code sections, on or off. This can be extremely helpful when you want to hide (disable) or expose (enable) features in a solution, for instance, features that are not yet complete and ready for release. It also allows you to test code in production for a subset of users: you can enable the code, for instance, based on the login name of the user, and let those users try the features before releasing them to everyone. However, there is a drawback to Feature Flags: they introduce more complexity into your code, so it is better to constrain the number of toggles in your application.
The recommended approach when creating Feature Flags is to keep them outside the application. For instance, a web or app configuration file is a good place to add Feature Flags because you can change them easily, without needing to redeploy the application.
In the next section, we are going to implement a Feature Flag in a .NET Core solution.
In this demonstration, we are going to create a new .NET Core application in Visual Studio Code. Then, we are going to implement a Feature
Flag for this application.
We are going to add a very basic Feature Flag that changes the welcome message from Welcome to Welcome to Learn Azure DevOps. This
is only going to be tested by a subset of users. Therefore, we need to open Visual Studio Code and create a new Razor application with
.NET Core. I have created a new folder on my local filesystem called FeatureFlags for this. Open this folder in Visual Studio Code. Check
the next section for the detailed steps.
Creating a new .NET Core application
To create a new .NET Core application, follow these steps:
With Visual Studio Code open, click on Terminal > New Terminal from the top menu.
In the Terminal, run the following commands to create a new project and open it:
dotnet new webapp -o RazorFeatureFlags
code -r RazorFeatureFlags
The newly created project will now open. Open the Terminal once more and run the following command to test the project:
dotnet run
The output of running this code will look as follows:
Figure 5.26 – Welcome message changed based on the Feature Flag provided
In this demonstration, we added some Feature Flags to our application using Microsoft's FeatureManagement NuGet package. Using these Feature Flags, we changed the welcome message on the home page of the application. This concludes this chapter.
Summary
In this chapter, we covered how to run quality tests in a build pipeline in more depth. With this, you can now run unit tests from the build pipeline and execute coverage tests from Azure DevOps. Lastly, we covered how to create Feature Flags inside an application, which you can use in your future projects as well.
In the next chapter, we are going to focus on how to host build agents in Azure Pipelines.
Further reading
Check out the following links for more information about the topics that were covered in this chapter:
Unit test basics: https://docs.microsoft.com/en-us/visualstudio/test/unit-test-basics?view=vs-2019
Run quality tests in your build pipeline by using Azure Pipelines: https://docs.microsoft.com/en-us/learn/modules/run-quality-tests-build-pipeline/
Explore how to progressively expose your features in production for some or all users: https://docs.microsoft.com/en-us/azure/devops/migrate/phase-features-with-feature-flags?view=azure-devops
Chapter 6: Hosting Your Own Azure Pipeline Agent
In the previous two chapters, we looked at setting up continuous integration through Azure Pipelines while using Microsoft-hosted agents. In this chapter, we'll be building a self-hosted agent and updating the pipeline to use our own agent, rather than the Microsoft-hosted one.
Technical requirements
To follow this chapter, you need to have an active Azure DevOps organization and an Azure subscription to create a VM.
Understanding the types of agents in Azure Pipelines
Azure Pipelines offers two types of agents:
Microsoft-hosted agents
Self-hosted agents
Let's look at them in detail.
Microsoft-hosted agents
Microsoft-hosted agents are fully managed VMs, deployed and managed by Microsoft. You can choose to use a Microsoft-hosted agent with no additional prerequisites or configuration. Microsoft-hosted agents are the simplest option and are available at no additional cost.
Self-hosted agents
Self-hosted agents are servers owned by you, running in any cloud platform or data center that you own. Self-hosted agents are often preferred for various reasons, including security, scalability, and performance.
Planning and setting up your self-hosted Azure pipeline agent
In order to use a self-hosted agent with Azure Pipelines, you will need to set up a machine and configure it for your pipeline requirements. Typically, you would choose the OS version best suited for your project, considering framework, library, and build tool compatibility.
Choosing the right OS/image for the agent VM
The first decision you take while setting up the VM is choosing the OS/image for the server, depending on your target deployment. If you are deploying to an on-premises environment, you may just select one of the supported OS versions (such as Windows Server 2016) and install the necessary software. In the case of cloud deployments, you have multiple options provided in the form of images, which come in various combinations of OS versions and pre-installed tools.
OS support and pre-requisites for installing an Azure Pipelines agent
Azure supports various OS versions to use as a self-hosted agent; based on the OS you choose, there is a set of pre-requisites you'll need to complete before you can install the Azure Pipelines agent on your host.
SUPPORTED OSES
The following list shows the supported OSes:
Windows-based:
Linux-based:
a) CentOS 7, 6
b) Debian 9
c) Fedora 30, 29
f) Oracle Linux 7
ARM32:
a) Debian 9
b) Ubuntu 18.04
macOS-based:
PRE-REQUISITE SOFTWARE
Based on the OS you choose, you will have to install the following pre-requisites before you can set up the host as an Azure pipeline agent:
Windows-based:
Linux/ARM/macOS-based:
Creating a VM in Azure for your project
The PartsUnlimited project is built using .NET Framework 4.5, with Visual Studio as the primary IDE. You can verify this by browsing through the repository in the PartsUnlimited project in your Azure DevOps organization.
IMPORTANT NOTE
Visual Studio 2019-based images are available in the Azure portal directly in the search results.
4. Click Create to start creating a VM. Choose the required subscription, resource group, and other settings based on your preference.
5. On the next pages, you can modify the settings to use a pre-created virtual network, as well as customize the storage settings and other management aspects. Please review the documentation to explore more on VM creation in Azure.
IMPORTANT NOTE
Please follow the Microsoft docs to learn more about creating a VM in Azure: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/quick-create-portal.
6. Log in to the VM upon creation and install the required pre-requisites.
Setting up the build agent
In this section, we'll configure the newly created VM to use as a self-hosted pipeline agent.
SETTING UP AN ACCESS TOKEN FOR AGENT COMMUNICATION
In this task, you will create a personal access token that will be used by the Azure Pipelines agent to communicate with your Azure DevOps organization:
IMPORTANT NOTE
You will need to give additional permissions when creating a token if you plan to use deployment groups (more information here: https://docs.microsoft.com/en-us/azure/devops/pipelines/release/deployment-groups/?view=azure-devops).
INSTALLING AZURE PIPELINES AGENTS
You are now ready to install the Azure Pipelines agent on the VM you created earlier. Let's download the Azure Pipelines agent. Before you start, please log in to the VM created earlier using Remote Desktop:
TIP
If you are unable to download the agent file on your Visual Studio machine, you can use a different browser than Internet Explorer or disable the Enhanced IE Security configuration from Server Manager. You can refer to https://www.wintips.org/how-to-disable-internet-explorer-enhanced-security-configuration-in-server-2016/ to learn how to disable the enhanced Internet Explorer security configuration.
4. Launch an elevated PowerShell window and change to the C: directory root by running the cd C:\ command:

Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory("$HOME\Downloads\vsts-agent-win-x64-2.171.1.zip", "$PWD")
6. It will take a minute or two to extract the files. Please browse to the new directory once it's completed. You should see files as displayed in the following screenshot:
IMPORTANT NOTE
If your agent needs to communicate through a web proxy, you can pass the proxy settings when configuring the agent:
./config.cmd --proxyurl http://127.0.0.1:8888 --proxyusername "myuser" --proxypassword "mypass"
Updating your Azure pipeline to use self-hosted agents
In this section, we'll take the Azure pipeline scenario covered in the previous chapters (PartsUnlimited) and modify it to use our newly created self-hosted agent. This will enable us to run the pipelines on our self-hosted agent, rather than on Microsoft-provided agents.
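In a YAML pipeline, this switch usually only affects the pool section. A minimal sketch (the pool name Default and the agent name are assumptions; use the pool your agent was registered in):

pool:
  name: Default
  demands:
  # Optionally pin the job to a specific agent through one of its capabilities
  - agent.name -equals MyBuildAgent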
Preparing your self-hosted agent to build the Parts Unlimited project
Before we can start using the self-hosted agent, we must prepare it to support building our sample project, PartsUnlimited. The PartsUnlimited project is built using Visual Studio and leverages .NET Framework, Azure development tools, .NET Core, Node.js, and so on. In order to use our self-hosted agent for building the solution, we must install the required dependencies before running the pipeline jobs:
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
Install-Module AzureRM -AllowClobber
7. Install Node.js version 6.x from https://nodejs.org/download/release/v6.12.3. You can download the file named node-v6.12.3-x64.msi and install it using the interactive installer.
Running the Azure pipeline
In this task, we'll run the pipeline job to build the PartsUnlimited solution using our own self-hosted agent:
Using containers as self-hosted agents
Azure Pipelines supports using Docker containers as the compute target for running pipeline jobs. You can use both Windows containers (Windows Server Core/Nano Server) and Linux containers (Ubuntu) to host your agents.
Setting up Windows containers as Azure pipeline agents
In order to use Windows containers as Azure pipeline agents, you need to build the container image first and then run it with your Azure DevOps organization environment variables. Let's look at the process.
BUILDING THE CONTAINER IMAGE
Follow these steps to build the container image:
1. Launch Command Prompt and run the following commands:
mkdir C:\dockeragent
cd C:\dockeragent
2. Create a new file named Dockerfile (no extension) and update it with the following content. You can use Notepad to open the file:
FROM mcr.microsoft.com/windows/servercore:ltsc2019
WORKDIR /azp
COPY start.ps1 .
CMD powershell .\start.ps1
3. Create a new PowerShell file with the name start.ps1 and copy the content from here: https://github.com/PacktPublishing/Learning-Azure-DevOps---B16392/blob/master/Chapter-6/start.ps1.
4. Run the following command to build the container image:
docker build -t dockeragent:latest .
Setting up Linux containers as Azure Pipelines agents
In order to use Linux containers as Azure pipeline agents, you can either use the Docker image published by Microsoft on Docker Hub or build your own Docker image. To run the Microsoft-published image, pass your Azure DevOps organization name and personal access token as environment variables:

docker run \
  -e VSTS_ACCOUNT=<name> \
  -e VSTS_TOKEN=<pat> \
  -it mcr.microsoft.com/azure-pipelines/vsts-agent

Alternatively, you can run the same image on Azure Container Instances:

az container create -g RESOURCE_GROUP_NAME -n CONTAINER_NAME --image mcr.microsoft.com/azure-pipelines/vsts-agent --cpu 1 --memory 7 --environment-variables VSTS_ACCOUNT=AZURE_DEVOPS_ACCOUNT_NAME VSTS_TOKEN=PERSONAL_ACCESS_TOKEN VSTS_AGENT=AGENT_NAME VSTS_POOL=Default
RESOURCE_GROUP_NAME is the name of your resource group in Azure where you want to create this resource.
AZURE_DEVOPS_ACCOUNT_NAME is the name of your Azure DevOps account.
PERSONAL_ACCESS_TOKEN is the personal access token previously created.
Environment variables
Azure DevOps pipeline agents running in containers can be customized further by using additional environment variables. The most commonly used ones are VSTS_ACCOUNT (the Azure DevOps organization name), VSTS_TOKEN (the personal access token), VSTS_AGENT (the display name of the agent), VSTS_POOL (the agent pool the agent should join), and VSTS_WORK (the agent's work directory).
Planning for scale
Azure VM scale set-based agents can be auto-scaled based on the demand from your Azure Pipelines jobs at a given time. There are several reasons why scale set agents can be a better option than dedicated agents:
3. Click Create.
4. Fill in the values as described here:
--Availability zone: Recommended to choose all three for high availability.
--Authentication: Username/password or SSH key.
8. On Scaling, provide an initial instance count and keep Scaling policy set to Manual. Leave the other settings as the default and click Next.
9. On Management, ensure that Upgrade mode is set to Manual. Leave the other settings as the default and click Next.
--Azure Subscription: Select the Azure subscription where you created the VM scale set:
Summary
In this chapter, we looked at using Microsoft-hosted agents and self-hosted agents to run your Azure pipeline jobs. We dug deep into the process of setting up a self-hosted agent and updated our pipelines to use the self-hosted agent.
Chapter 7: Using Artifacts with Azure DevOps
In the previous chapter, we covered how to host build agents in Azure Pipelines. In this chapter, we are going to cover how to use artifacts with Azure DevOps. We will begin by explaining what artifacts are. Then, we will look at how to create them in Azure DevOps, as well as how to produce the artifact package from a build pipeline. Next, we are going to cover how to deploy the feed using a release pipeline. Then, we are going to cover how to set the feed permissions and how to consume the package in Visual Studio. Finally, we are going to cover how to scan for package vulnerabilities using WhiteSource Bolt.
Technical requirements
To follow this chapter, you need to have an active Azure DevOps organization. The organization we'll be using in this chapter is the PartsUnlimited organization, which we created in Chapter 1, Azure DevOps Overview. You also need to have Visual Studio 2019 installed, which can be downloaded from https://visualstudio.microsoft.com/downloads/.
Introducing Azure Artifacts
It is likely that every developer has used a third-party or open source package in their code to add extra functionality and speed up the development process of their application. Using popular, pre-built components that have been used and tested by the community helps you get things done more easily.
Creating an artifact feed with Azure Artifacts
In this demo, we are going to create an artifact feed in Azure Artifacts. Packages are stored in feeds, which are basically organizational constructs that allow us to group packages and manage their permissions. Every package type (NuGet, npm, Maven, Python, and Universal) can be stored in a single feed.
Producing the package using a build pipeline
Now that we have created our feed, we are going to create a build pipeline that automatically creates a package during the build of the project. For this example, you can use the sample project provided in this book's GitHub repository. This sample project consists of all the models from the PartsUnlimited project. We are going to add all the models to a package and distribute it from Artifacts. This way, you can easily share the data model across different projects.
Now that we've built the application and the package from our build pipeline, we can publish the package to the feed that we created in our
first demo.
For this, we need to set the required permissions on the feed. The identity that the build will run under needs to have Contributor
permissions on the feed. Once these permissions have been set, we can extend our pipeline to push the package to the feed.
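As a rough sketch, the pack-and-push portion of such a pipeline could look like this in YAML (the project path and the feed name PartsUnlimitedFeed are assumptions; use your own project and the feed created in the first demo):

steps:
# Pack the models project into a NuGet package (output defaults to the artifact staging directory)
- task: DotNetCoreCLI@2
  inputs:
    command: pack
    packagesToPack: 'PartsUnlimited.Models/*.csproj'
# Push the package to the internal feed
- task: NuGetCommand@2
  inputs:
    command: 'push'
    packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg'
    nuGetFeedType: 'internal'
    publishVstsFeed: 'PartsUnlimitedFeed'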
Setting the required permissions on the feed
To set the required permissions, we need to go to the settings of our feed:
Log in with your Microsoft account and from the left menu, select Artifacts.
Go to the settings of the feed by selecting the Settings button from the top-right menu:
Now that our PartsUnlimited.Models package has been pushed to our feed in Artifacts, we can consume this package from Visual Studio.
In this section, we are going to create a new console app in Visual Studio and connect to the feed from there.
Therefore, we need to perform the following steps:
Open Visual Studio 2019 and create a new .NET Core console application:
Figure 7.14 – Creating a new console package
Once the application has been created, navigate to Azure DevOps and from the left menu, select Artifacts.
From the top menu, select Connect to feed:
WhiteSource Bolt can be used to scan packages for vulnerabilities directly from the build pipeline. It is a developer tool for scanning for
security vulnerabilities in application code, as well as open source applications and packages. It offers extensions that can be installed
through the Azure DevOps marketplace and through GitHub. WhiteSource Bolt can be downloaded free of charge, but this version is limited
to five scans per day, per repository.
Important Note
For more information about WhiteSource Bolt, you can refer to the following website: https://bolt.whitesourcesoftware.com/.
In this section, we are going to install the extension in our Azure DevOps project and implement the tasks that come with it into our existing
build pipeline. Let's get started:
Open a browser and navigate to https://marketplace.visualstudio.com/.
Search for WhiteSource Bolt in the search box and select the WhiteSource Bolt extension:
Figure 7.27 – WhiteSource Bolt vulnerability report
With that, we have installed the WhiteSource Bolt extension and scanned our solution for vulnerabilities before packaging and pushing the NuGet package to our feed in Azure Artifacts.
This concludes this chapter.
Summary
In this chapter, we looked at Azure Artifacts in more depth. First, we set up a feed and created a new NuGet package using the model
classes in the PartsUnlimited project. Then, we created a build pipeline where we packed and pushed the package to the feed
automatically during the build process. Finally, we used the WhiteSource Bolt extension from the Azure DevOps Marketplace to scan the package for
vulnerabilities.
In the next chapter, we are going to focus on how to deploy applications in Azure DevOps using release pipelines.
Further reading
Check out the following links for more information about the topics that were covered in this chapter:
What is Azure Artifacts?: https://docs.microsoft.com/en-us/azure/devops/artifacts/overview?view=azure-devops
Get started with NuGet packages in Azure DevOps Services and TFS: https://docs.microsoft.com/en-us/azure/devops/artifacts/get-started-nuget?view=azure-devops
Chapter 8: Deploying Applications with Azure DevOps
In previous chapters, we saw how you can automate your development processes by using build pipelines for your code. But an important part of the software life cycle is also the release phase. In this chapter, we will give an overview of release pipelines; we'll see how to create a release pipeline with Azure DevOps and how you can automate and improve the deployment of your solutions by using release approvals and multi-stage pipelines.
An overview of release pipelines
Configuring continuous deployment on a release pipeline
Creating a multi-stage release pipeline
Technical requirements
To follow this chapter, you need to have an active Azure DevOps organization. The organization used in this chapter is the PartsUnlimited organization we created in Chapter 1, Azure DevOps Overview.
An overview of release pipelines
Release pipelines permit you to implement the continuous delivery phase of a software life cycle. With a release pipeline, you can automate the process of testing and deliver your solutions (committed code) to the final environments or directly to the customer's site (continuous delivery and continuous deployment). You can deploy to many different types of targets, such as:
A containerized environment, such as Docker or Kubernetes
A serverless environment, such as Azure Functions
Creating a release pipeline with Azure DevOps
The final goal of implementing a complete CI/CD process with DevOps is to automate the deployment of your software to a final environment (for example, the final customer), and to achieve this goal, you need to create a release pipeline.
Figure 8.19 – Pre-deployment conditions definition
In this pane, you can also define other parameters, such as selecting artifact condition(s) to trigger a new deployment (a release will be deployed to this stage only if all the artifact conditions match), setting up a schedule for the deployment, allowing pull request-based releases to be deployed to this stage, selecting the users who can approve or reject deployments to this stage (pre-deployment approvals), defining gates to evaluate before deployment, and defining the behavior when multiple releases are queued for deployment.
You have now created a release pipeline that takes your artifacts and deploys them to the cloud by using Azure DevOps and also by
applying continuous deployment triggers and pre-deployment conditions checks.
In the next section, we'll see how to improve our release pipeline by using multiple stages.
A multi-stage release pipeline is useful when you want to release your applications in multiple steps (stages), such as development, staging, and production. A quite common real-world scenario is deploying an application to a testing environment first. When the tests are finished, the application is moved to a quality acceptance stage, and then, if the customer accepts the release, the application is moved to the production environment.
Here, we'll do the same: starting from the previously created single-stage pipeline, we'll create a new release pipeline with three stages,
called DEV, QA, and Production. Each stage is a deployment target for our pipeline:
In the previously defined pipeline, as a first step, I renamed the Deploy to cloud stage to Production. This will be the final stage of the release
pipeline.
Now, click on the Clone action to clone the defined stage into a new stage:
As previously configured, our release pipeline will move between stages only if the previous stage completes successfully. This is fine for moving from DEV to QA because, in this transition, our application is deployed to a testing environment, but the transition from QA to Production should usually be controlled, because releasing an application into a production environment normally requires an approval.
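Although this chapter uses the classic release pipeline editor, the same DEV/QA/Production flow can be sketched as a YAML multi-stage pipeline with deployment jobs (the environment names and the deployment steps are illustrative; approvals are then configured on the environments themselves rather than in the YAML file):

stages:
- stage: DEV
  jobs:
  - deployment: DeployDev
    pool:
      vmImage: 'ubuntu-latest'
    environment: 'dev'
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo Deploying to DEV
- stage: QA
  dependsOn: DEV
  jobs:
  - deployment: DeployQA
    pool:
      vmImage: 'ubuntu-latest'
    environment: 'qa'
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo Deploying to QA
- stage: Production
  dependsOn: QA
  jobs:
  - deployment: DeployProd
    pool:
      vmImage: 'ubuntu-latest'
    # Approvers can be required on this environment from the Environments page
    environment: 'production'
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo Deploying to Production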
Creating approvals
Let's follow these steps to create approvals:
To create an approval step, from our pipeline definition, select the Pre-deployment conditions properties of the Production stage. Here, go to the
Pre-deployment approvals section and enable it. Then, in the Approvers section, select the users that will be responsible for approving. Please also
check that the The user requesting a release or deployment should not approve it option is not ticked:
Figure 8.39 – Adding a deployment group job
At the time of writing, deployment group jobs are not yet supported in YAML pipelines.
Summary
In this chapter, we had a full overview of how to work with release pipelines in Azure DevOps.
We created a basic release pipeline for the PartsUnlimited project, defined artifacts, and created our first release by adding continuous
deployment conditions.
Then, we improved our pipeline definition by using multiple stages (DEV, QA, and Production), and at the end of this chapter, we saw how to define approvals and gates for managing the release of our code in a more controlled way, as well as the concepts around YAML-based release pipelines. In the next chapter, we'll see how to integrate Azure DevOps with GitHub.
Section 4: Advanced Features of Azure DevOps
In this part, we are going to integrate Azure DevOps with GitHub and cover some real-world examples.
Integrating Azure DevOps with GitHub
GitHub is one of the most popular development platforms, used by open source developers and businesses across the globe to store their code. In this chapter, you will learn how to leverage Azure DevOps' capabilities while you continue to use GitHub as your software development hub.
An overview of Azure DevOps and GitHub integration
Technical requirements
To follow this chapter, you need to have an active Azure DevOps organization and a GitHub account. You can sign up for a GitHub account here: https://github.com/join.
An overview of Azure DevOps and GitHub integration
GitHub and Azure DevOps go hand in hand to provide a superior software development experience for teams, enabling them to ship and release software at a faster pace with minimal effort. In many scenarios, GitHub and Azure DevOps offer competing services (for example, Azure Repos versus GitHub repositories), so it is typically up to you to choose the services that fit your needs and integrate them together into a complete platform setup.
Integrating Azure Pipelines with GitHub enables developers to continue using GitHub as their preferred source control management platform
while leveraging Azure Pipelines' build and release capabilities. Azure Pipelines offers unlimited pipeline job minutes for open source
projects.
We looked at Azure Pipelines in detail previously in this book, so in this section, we'll take a look at how to store our Azure Pipelines
configuration and source code in GitHub and build a CI/CD process with GitHub and Azure DevOps.
Setting up Azure Pipelines and GitHub integration
In order to use Azure Pipelines with GitHub, you must authorize Azure Pipelines to access your GitHub repositories. Let's take a look at the
steps for this:
Log into your Azure DevOps account and select the project we created in the Technical requirements section.
Click on Pipelines > Create Pipeline:
Figure 9.12 – Running an Azure pipeline
Clicking on Save and run will create the pipeline and start its execution. It may take a few minutes for the build job to complete:
Azure Boards is the best place to plan and track your work items. Integrating Azure Boards with GitHub allows you to keep using Azure
Boards as your planning and managing platform while you continue using GitHub as your source control management platform.
By integrating Azure Boards with GitHub, you can link objects from Azure Boards to GitHub. A few examples are as follows:
Work item and Git commit/issue/pull request linking means you can link your work items to the corresponding work being done in GitHub.
You can update your work item's status from GitHub itself.
Overall, integration allows us to track and link the deliverable across the two platforms easily.
Now, let's set up our Azure Boards integration.
Setting up Azure Boards and GitHub integration
Azure Boards is another extension available in GitHub Marketplace. You can configure the integration from both Azure DevOps and GitHub
Marketplace.
Let's set this up with the help of the following steps:
Log into Azure DevOps and browse to your Parts Unlimited project > Project settings > Boards > GitHub connections:
Figure 9.27 – Approving the Azure Boards extension
Upon installing Azure Boards, you should see your GitHub connection listed with a green checkmark, meaning it has been successful:
Figure 9.28 – GitHub connection status
With that, you have set up Azure Boards and GitHub integration.
Adding an Azure Boards Status badge
Like the Azure Pipelines status badge, Azure Boards also provides a status badge that can show stats about the work items inside your
GitHub repository.
In this section, we'll add a status badge from Azure Boards to our GitHub repository with the help of the following steps:
Log into Azure DevOps, browse to Boards, and click on the settings gear icon:
Figure 9.39 – Git comment on the pull request
This was a quick example of how to link GitHub objects to Azure Boards work items by following some simple syntax; that is, AB#<Work Item ID>. As soon as you link the work item to GitHub, your Azure Boards work item will also be updated with a link to the corresponding GitHub object.
Along with linking objects, in this demonstration you also updated the state of the work item by using a simple instruction in the commit message; for example, a commit message such as Fixed AB#123 both links the commit to work item 123 and transitions that work item to the done state. Let's take a look at some of the sample messages you can use:
Figure 9.40 – Sample messages
This concludes how to integrate with Azure Boards and GitHub. In this section, we looked at how to manage tasks better by using Azure
Boards and GitHub together. In the next section, we'll take a look at GitHub Actions.
GitHub Actions is a CI/CD service from GitHub that's used to build and release applications being developed in GitHub repositories. Essentially, GitHub Actions is similar to Azure Pipelines, in that you can set up your build and release pipelines to automate the entire software development life cycle.
GitHub Actions was launched in early 2019 to provide a simple DevOps experience built into GitHub itself. GitHub Actions includes enterprise-grade features, such as support for any language, hosted and self-hosted agents for various OSes, and container images.
It includes various pre-built workflow templates built by the community, which can make it easier for you to build your DevOps pipeline.
It is outside the scope of this book to talk about GitHub Actions in detail, but you can refer to the GitHub Actions documentation at
https://github.com/features/actions to get started.
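As a minimal sketch of what a workflow looks like (the file lives under .github/workflows/ in the repository; the dotnet build command is an assumption for a .NET Core project):

name: build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    # Check out the repository and build it on every push
    - uses: actions/checkout@v2
    - run: dotnet build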
Summary
In this chapter, we looked at how to use GitHub and Azure DevOps together to build an integrated software development platform for our
software teams. To do this, we learned how to set up and manage Azure DevOps pipelines from GitHub, as well as build and integrate CI/CD
solutions.
We also learned about how to plan and track our work better in Azure Boards while doing software development in GitHub. You should now
be able to use GitHub and Azure DevOps together and improve your overall productivity and DevOps experience. You should also be able to
set up integration between the two services and use it in your daily DevOps work.
In the next chapter, we'll look at how to use test plans with Azure DevOps, and later we'll look at several real-world CI/CD examples with the help of Azure DevOps.
Chapter 10: Using Test Plans with Azure DevOps
In the previous chapter, we covered how you can integrate Azure DevOps with GitHub.
Exploratory testing
Technical requirements
To follow this chapter, you need to have an active Azure DevOps organization. The organization used in this chapter is the Parts Unlimited organization that we created in Chapter 1, Azure DevOps Overview. You also need to have Visual Studio 2019 installed, which can be downloaded from https://visualstudio.microsoft.com/downloads/.
The test plan that is used to run and analyze a manual test plan can be downloaded from https://github.com/PacktPublishing/Learning-Azure-DevOps---B16392/tree/master/Chapter%2010.
Introduction to Azure Test Plans
Manual and exploratory testing can be key testing techniques for delivering quality and a great user experience for your applications. In modern software development processes, quality is the responsibility of all team members, including developers, managers, business analysts, and product owners.
Exploratory testing
With exploratory testing, testers explore the application to identify and document potential bugs. It focuses on discovery and relies on the guidance of the individual tester to find defects that are not easily discovered using other types of tests. This type of testing is often referred to as ad hoc testing.
Installing and using the Test & Feedback extension
The Test & Feedback extension can be installed from the Visual Studio Marketplace and is currently available for Chrome and Firefox (version 50.0 and higher). Chrome extensions can also be installed in the Microsoft Edge browser, which is based on Chromium.
Over the years, manual testing has evolved together with the software development process into a more agile approach. With Azure
DevOps, manual testing is integrated into the different agile processes that are supported and can be configured in Azure DevOps.
Important note
The different agile processes that are supported and integrated in Azure DevOps are covered in more detail in Chapter 2, Managing Projects
with Azure DevOps Boards.
Software development teams can begin manual testing right from the Kanban board in Azure Boards. From the board, you can monitor the status of the tests directly from the cards. This way, all team members can get an overview of which tests are connected to the work items and stories, and what the status of those tests is.
In the following image, you can see the tests and statuses that are displayed on the board:
Figure 10.10 – Tests displayed in the work hub
If more advanced testing capabilities are needed, Azure Test Plans can also be used for all the test management needs. The Test hub can
be accessed from the left menu, under Test Plans, and there it offers all the capabilities that are needed for a full testing life cycle.
In the following image, you can see how the Test hub can be accessed from the left menu, together with the different menu options:
In Azure DevOps Test Plans, you can create and manage test plans and test suites for sprints or milestones that are defined for your
software development project. Test Plans offers three main types of test management artifacts: Test plans, Test suites, and Test cases.
These artifacts are all stored in the work repository as special types of work items and can be exported and shared with the different team
members or across different teams. This also enables the integration of the test artifacts with all of the DevOps tasks that are defined for
the project.
The three artifacts have the following capabilities:
Test plans: A test plan groups different test suites, configurations, and individual test cases together. In general, every major milestone in a project
should have its own test plan.
Test suites: A test suite can group different test cases into separate testing scenarios within a single test plan. This makes it easier to see which
scenarios are complete.
Test cases: With test cases, you can validate individual parts of your code or app deployments. They can be added to both test plans and test suites.
They can also be added to multiple test plans and suites if needed. This way, they can be reused effectively without the need to copy them. A test case
is designed to validate a work item in Azure DevOps, such as a feature implementation or a bug fix.
In the next section, we are going to put this theory into practice and see how you can create and manage test plans in Azure DevOps.
Managing test plans, test suites, and test cases
For this demonstration, we are going to use the Parts Unlimited project again. It also has a test plan in Azure DevOps, so we are going to
look at that first. Therefore, we have to follow these steps:
Open a web browser and navigate to https://dev.azure.com/.
Log in with your Microsoft account and select the Parts.Unlimited project. Then, in the left menu, select Test Plans. This will let you navigate to the
test plan that has already been added to the project.
Select Parts.Unlimited_TestPlan1 from the list to open it. The suites of tests are added to this plan. Select As a customer, I would like to store my
credit card details securely. This will open the list of individual test cases that have been added to this suite:
In this demonstration, we are going to run and analyze a manual test plan. For this, we are going to use the test plan that is already added
to the Parts.Unlimited project in Azure DevOps again and import a test suite. The test suite can be downloaded from the GitHub repository
that belongs to this chapter. You can obtain the GitHub URL at the beginning of the chapter from the Technical requirements section:
Open the test plan of the Parts.Unlimited project again in Azure DevOps.
First, we need to add a new static test suite. For this, select the three dots next to Parts.Unlimited_TestPlan1 > New Suite > Static suite. Name the
suite End-to-end tests.
Select the newly created suite and in the top menu, select the import button to import test cases:
Summary
In this chapter, we have covered Azure DevOps Test Plans. We looked at the different features and capabilities and managed test plans, test
suites, and test cases. Then we imported a test case from a CSV file and tested the Parts Unlimited application. Then, we covered
exploratory testing in detail, and we used the Test & Feedback extension to report a bug.
In the next chapter, we are going to focus on real-world CI/CD scenarios with Azure DevOps.
Further reading
Check out the following links for more information about the topics that were covered in this chapter:
Exploratory and manual testing scenarios and capabilities: https://docs.microsoft.com/en-us/azure/devops/test/overview?view=azure-devops
Creating manual test cases: https://docs.microsoft.com/en-us/azure/devops/test/create-test-cases?view=azure-devops
Providing feedback using the Test & Feedback extension: https://docs.microsoft.com/en-us/azure/devops/test/provide-stakeholder-feedback?view=azure-devops
Exploratory testing with the Test & Feedback extension in Connected mode: https://docs.microsoft.com/en-us/azure/devops/test/connected-mode-exploratory-testing?view=azure-devops
Chapter 11: Real-World CI/CD Scenarios with Azure DevOps
In this chapter, we'll show you some sample projects where the continuous integration and continuous delivery (CI/CD) processes are handled by using Azure DevOps. We'll take sample applications and set up a CI/CD pipeline using Azure DevOps to manage the software development, deployment, and upgrade life cycle.
Technical requirements
To follow along with this chapter, you need to have an active Azure DevOps organization and an Azure subscription.
Setting up a CI/CD pipeline for .NET-based applications
A typical .NET-based application includes applications developed using Microsoft's .NET Framework and uses a SQL database in the backend. You may have multiple application layers, such as a frontend, a backend (also known as the middle tier or API tier), and a data tier (SQL Server).
Introduction to the sample application
We'll be using a simple ToDo application for this walkthrough. It's a web-based application that uses a SQL database in the backend.
Preparing the pre-requisite Azure infrastructure
In this section, we'll create the required Azure infrastructure to host the application. We will be creating the following resources:
b) Contoso-ToDo-Production
2. Application components: We'll be creating the following resources for both the staging and production environments:
a) Azure App Service to host the web application
CREATING A RESOURCE GROUP IN AZURE
A resource group is a container that holds resources in the Azure cloud. Typically, a resource group includes resources that you want to manage as a group or that share a similar life cycle. We'll be creating two resource groups: one for production and one for staging. Let's create the resource groups in Azure:
CREATING AZURE APP SERVICE
In this example, we'll take a container-based application and build an end-to-end CI/CD pipeline, using a Python and Redis-based sample application for the purpose of this demonstration.
In this example, we'll be using various Azure resources in the overall solution architecture. This includes the following:
Azure DevOps: CI/CD pipeline
Azure Kubernetes Service (AKS): For hosting the containers
Azure Container Registry (ACR): Container image storage and management
Introduction to the sample app
In this section, we'll be using a sample application called Azure Voting App. It is a standard multi-container-based application that uses
the following components:
The Azure Voting App backend: This will be running on Redis.
The Azure Voting App frontend: Web application built with Python.
You can review the application code here: https://github.com/Azure-Samples/azure-voting-app-redis.
Setting up the required infrastructure
In order to be able to build the pipeline, first we need to set up the required infrastructure, including the AKS cluster and Azure container
registry. We will be creating separate resources for the staging and production environments as a standard best practice; however, it is
possible to use a single environment for both the production and development environments by using a combination of tags and a
Kubernetes namespace.
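For example, a single AKS cluster could be partitioned logically with namespaces; a minimal sketch, assuming kubectl is already connected to the cluster:
kubectl create namespace staging
kubectl create namespace production
# deploy each environment's manifests with --namespace staging or --namespace production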
In this section, we'll be using the Azure command-line interface (CLI) for all infrastructure provisioning tasks.
CREATING THE AZURE RESOURCE GROUP
Let's start by creating an Azure resource group for organizing all the resources for your development and production environments:
Log in to Azure Cloud Shell (https://shell.azure.com) with your Azure credentials.
If this is your first time logging in to Azure Cloud Shell, it will prompt you to create an Azure storage account. Select your subscription and click Create Storage.
Select Bash when prompted to choose a shell type.
Run the following command to list all your subscriptions:
az account list
If you need to select a specific subscription for provisioning resources, run the following command:
az account set --subscription 'Your Subscription Name'
Create a resource group named Contoso-Voting-Stage by running the following command. You can replace the location with a region of your choice:
az group create -l westus -n Contoso-Voting-Stage
Repeat the resource group creation command to create another resource group named Contoso-Voting-Prod for the production environment.
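Assuming the same region, the repeated command would be:
az group create -l westus -n Contoso-Voting-Prod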
You have now created the required resource groups. In the next step, you'll create an Azure Kubernetes cluster.
CREATING AN AZURE KUBERNETES SERVICE
AKS is a managed Kubernetes offering from Microsoft Azure. There are two types of hosts in a Kubernetes cluster – masters (also known as the control plane) and nodes. In the world of AKS, there is no master visible to end users: Microsoft creates and manages the master nodes and hides them from end users. As a user, you only deploy AKS nodes (Kubernetes worker nodes) in your subscription, while the configuration of Kubernetes and the joining of the Microsoft-managed masters happen in the background. With AKS, you only pay for the nodes' infrastructure costs; the masters are provided by Microsoft free of charge.
We will be using AKS to host our containers.
Let's start by creating an AKS cluster:
Log in to Cloud Shell with your Azure credentials.
Run the following command to create an AKS cluster with the default configuration and the latest version:
az aks create --resource-group Contoso-Voting-Stage --name Contoso-Stage-AKS --node-count 1 --enable-addons monitoring --generate-ssh-keys
Let's look at this command in detail:
a) az aks create: The syntax for creating an AKS cluster.
b) --resource-group and --name: The resource group name and the AKS cluster name.
c) --node-count: The number of AKS nodes to create.
d) --enable-addons: This specifies add-ons, such as monitoring and HTTP routing.
e) --generate-ssh-keys: A flag that lets the Azure CLI generate the SSH keys to be used for the agent nodes.
It may take up to 10 minutes for the AKS cluster to be ready. You can review its status by running the following command:
az aks list
Once your cluster is ready, you can load the Kubernetes authentication configuration into your Cloud Shell session by running the following command:
az aks get-credentials --resource-group Contoso-Voting-Stage --name Contoso-Stage-AKS
You can now run kubectl commands to interact with Kubernetes. Run the following command to get a list of all the Kubernetes nodes:
kubectl get nodes
Your Azure Kubernetes cluster is now ready. Repeat the process to create another AKS cluster for the production environment.
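For example, the production cluster could be created with the same parameters (the cluster name Contoso-Prod-AKS is an assumption):
az aks create --resource-group Contoso-Voting-Prod --name Contoso-Prod-AKS --node-count 1 --enable-addons monitoring --generate-ssh-keys
az aks get-credentials --resource-group Contoso-Voting-Prod --name Contoso-Prod-AKS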
CREATING AN AZURE CONTAINER REGISTRY
ACR is a private Docker container registry that's hosted and managed by Microsoft Azure. It is fully compatible with Docker and works in the same way, except that it's managed, hosted, and secured by Microsoft. We will be using ACR to store our container images.
Let's create a container registry for the project:
Log in to Azure Cloud Shell and run the following command to create a container registry:
az acr create --resource-group Contoso-Voting-Stage --name ContosoStageACR --sku Basic
Once your container registry is ready, you can get its status and details by running the following command:
az acr list
INTEGRATING ACR WITH AKS
AKS needs to have permissions to access the container images from ACR in order to run the application. Let's enable access for AKS to
interact with our ACR.
Run the following command to integrate AKS with our ACR:
az aks update -n Contoso-Stage-AKS -g Contoso-Voting-Stage --attach-acr ContosoStageACR
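If you are mirroring the setup for production, a similar pair of commands might look like this (the registry name ContosoProdACR is an assumption; ACR names must be globally unique):
az acr create --resource-group Contoso-Voting-Prod --name ContosoProdACR --sku Basic
az aks update -n Contoso-Prod-AKS -g Contoso-Voting-Prod --attach-acr ContosoProdACR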
Now that our infrastructure is ready, we'll begin with setting up the code repository for the application.
Setting up Azure Repos for the voting application
In this section, we'll create a new Azure DevOps project and import the voting app source code in Azure Repos:
Log in to Azure DevOps and create a new project named Contoso Voting App or any other name of your choice.
Navigate to Azure Repos and click Import a Git repository. Import the Azure voting app repository from https://github.com/Azure-Samples/azure-voting-app-redis:
Figure 11.42 – Importing the repository
Now that our repo is ready, let's start with a build pipeline.
Setting up the CI pipeline
The build pipeline will be responsible for building the container image and pushing it to ACR. Let's get started:
Log in to Azure DevOps and open the Contoso Voting App project.
Navigate to Pipelines and click Create Pipeline.
Click Use the classic editor to create the pipeline with the UI.
Select the Azure repo that you created in the previous section as the source for the pipeline.
For the template, select Docker Container as the template type:
Figure 11.43 – Docker container pipeline template
In the Build an image task configuration, provide the following values:
a) Container Registry Type: Azure Container Registry.
b) Azure subscription: Select your subscription from the dropdown and authorize it.
c) Azure Container Registry: Select your ACR from the dropdown.
d) Action: Build an image.
e) Docker File: The azure-vote/Dockerfile path in the repo root.
f) Check Include Latest Tag:
Figure 11.44 – Push an image
In the Push an image task, select the Azure subscription and ACR again, with the Action set to Push an image. Be sure to check Include Latest Tag.
Once you're done, review both tasks and click Save and Run to start the pipeline job execution.
Review the job logs to see the detailed information about image building and pushing to ACR.
Upon completion, navigate to the Azure portal and open the container registry you created earlier.
Navigate to Repositories; you should see a new image there. Let's open the image and find the image name that we'll need to update in our application's deployment configuration:
Figure 11.45 – Container image in ACR
Make a note of the image pull connection string. We'll need it in the next exercise.
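For reference, the same build and push can also be expressed as a YAML pipeline rather than through the classic editor. The following is a minimal sketch, not the exact pipeline built above; the service connection name ContosoStageACR and the repository name azure-vote-front are assumptions:
trigger:
- master
pool:
  vmImage: 'ubuntu-latest'
steps:
- task: Docker@2
  displayName: 'Build and push the container image'
  inputs:
    containerRegistry: 'ContosoStageACR'  # Docker registry service connection (assumed name)
    repository: 'azure-vote-front'        # image repository in ACR (assumed name)
    command: 'buildAndPush'
    Dockerfile: 'azure-vote/Dockerfile'
    tags: 'latest'
Once pushed, the image can be pulled as <registry login server>/<repository>:<tag>; with the assumed names above, that would be contosostageacr.azurecr.io/azure-vote-front:latest.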
Azure Architecture Center
Azure Architecture Center is a centralized place to find guidance for architecting solutions on Azure using established patterns and practices. There are several sample architectures available around DevOps.
You can access Azure Architecture Center here: https://docs.microsoft.com/en-us/azure/architecture/.
Refer to the following links to learn more about planning the right architecture for DevOps across various infrastructure and application scenarios:
Azure DevOps: https://docs.microsoft.com/en-us/azure/architecture/example-scenario/apps/devops-dotnet-webapp
DevOps with containers: https://docs.microsoft.com/en-us/azure/architecture/example-scenario/apps/devops-with-aks
Microservices with AKS and Azure DevOps: https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/microservices-with-aks
Secure DevOps for AKS: https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/secure-devops-for-kubernetes
Azure DevOps CI/CD pipelines for chatbots: https://docs.microsoft.com/en-us/azure/architecture/example-scenario/apps/devops-cicd-chatbot
CI/CD for Azure VMs: https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/cicd-for-azure-vms
CI/CD for Azure web apps: https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/azure-devops-continuous-integration-and-continuous-deployment-for-azure-web-apps
CI/CD for containers: https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/cicd-for-containers
Container CI/CD using Jenkins and Kubernetes on AKS: https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/container-cicd-using-jenkins-and-kubernetes-on-azure-container-service
DevSecOps in Azure: https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/devsecops-in-azure
DevTest deployment for testing IaaS solutions: https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/dev-test-iaas
DevTest deployment for testing PaaS solutions: https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/dev-test-paas
DevTest deployment for testing microservice solutions: https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/dev-test-microservice
DevTest Image Factory: https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/dev-test-image-factory
Immutable infrastructure CI/CD using Jenkins and Terraform on Azure virtual architecture overview: https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/immutable-infrastructure-cicd-using-jenkins-and-terraform-on-azure-virtual-architecture-overview
DevOps in a hybrid environment: https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/java-cicd-using-jenkins-and-azure-web-apps
Java CI/CD using Jenkins and Azure web apps: https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/java-cicd-using-jenkins-and-azure-web-apps
Run a Jenkins server on Azure: https://docs.microsoft.com/en-us/azure/architecture/example-scenario/apps/jenkins
SharePoint Farm for development testing: https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/sharepoint-farm-devtest
Sharing location in real time using low-cost serverless Azure services: https://docs.microsoft.com/en-us/azure/architecture/example-scenario/signalr/
Summary
In this chapter, we looked at a .NET and SQL-based application and set up a CI/CD pipeline for it using Azure DevOps. We looked at how to manage your production and staging environments through approval workflows.
Similarly, we also looked at a container-based application and walked through setting up an end-to-end CI/CD pipeline for it using ACR and AKS.
Finally, we covered Azure Architecture Center, which you can refer to when planning your DevOps architecture.
This was the final chapter, and we hope you have enjoyed reading this book!
Other Books You May Enjoy
If you enjoyed this book, you may be interested in these other books by Packt:
Kamil Mrzygłód
ISBN: 978-1-83855-145-2
Leave a review - let other readers know what you think
Please share your thoughts on this book with others by leaving a review on the site that you bought it from. If you purchased the book from Amazon, please leave us an honest review on this book's Amazon page. This is vital so that other potential readers can see and use your unbiased opinion to make purchasing decisions, we can understand what our customers think about our products, and our authors can see your feedback on the title that they have worked with Packt to create. It will only take a few minutes of your time, but is valuable to other potential customers, our authors, and Packt. Thank you!