
DEVOPS

Implementing DevOps Solutions and


Practices Using Cisco Platforms
Student Learning Guide Volume
Version

Part Number:
© 2022 Cisco Systems, Inc.

Americas Headquarters: Cisco Systems, Inc., San Jose, CA
Asia Pacific Headquarters: Cisco Systems (USA) Pte. Ltd., Singapore
Europe Headquarters: Cisco Systems International BV, Amsterdam, The Netherlands

Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco website at
http://www.cisco.com/go/offices.

Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To
view a list of Cisco trademarks, go to this URL: http://www.cisco.com/c/en/us/about/legal/trademarks.html. Third-party trademarks
that are mentioned are the property of their respective owners. The use of the word partner does not imply a partnership
relationship between Cisco and any other company. (1110R)

DISCLAIMER OF WARRANTY: THIS CONTENT IS BEING PROVIDED “AS IS” AND AS SUCH MAY INCLUDE TYPOGRAPHICAL,
GRAPHICS, OR FORMATTING ERRORS. CISCO MAKES AND YOU RECEIVE NO WARRANTIES IN CONNECTION WITH
THE CONTENT PROVIDED HEREUNDER, EXPRESS, IMPLIED, STATUTORY OR IN ANY OTHER PROVISION OF THIS
CONTENT OR COMMUNICATION BETWEEN CISCO AND YOU. CISCO SPECIFICALLY DISCLAIMS ALL IMPLIED
WARRANTIES, INCLUDING WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT AND FITNESS FOR A
PARTICULAR PURPOSE, OR ARISING FROM A COURSE OF DEALING, USAGE OR TRADE PRACTICE. This learning
product may contain early release content, and while Cisco believes it to be accurate, it falls subject to the disclaimer above.

Implementing DevOps Solutions and Practices Using Cisco Platforms (DEVOPS) © 2022 Cisco Systems, Inc.
Course Welcome
Thank you for choosing Cisco as your technical learning provider. We recognize that you have many
options to choose from when working toward achieving your technical and professional goals. Our objective
is to help you meet those goals by providing high-quality, collaborative learning experiences.
Before you begin, take a moment to review the primary components in this course, how to access online
support, and opportunities to provide feedback on the course.
Course outline: If you are attending a live, instructor-led training session, your instructor may customize
the course to meet the specific needs of the class. However, you will find a basic outline of the material in
the Course Introduction section.
Course content: You will find detailed information and instructions along with supporting illustrations, self-
check challenges to give you exam practice, and lab activities to give you a real-world experience.
Bias-Free language: Cisco is updating content to be free of offensive or suggestive language. We are
changing terms such as blacklist/whitelist and master/slave to more appropriate alternatives. While we
update our portfolio of products and content, users may see differences between some content and a
product’s user interface or command syntax. Please use your product’s current terminology as found in its
documentation.
Glossary of terms: If you need to review or learn unfamiliar terms used in this course, refer to the Glossary
of Terms section.
Online support: Join the Cisco Learning Network community to participate in study group discussions and
get answers to questions as you prepare for your exam.
Your feedback: We encourage you to submit feedback so that we can continue to improve course quality
and offer the best learning products possible. Your input is valuable to us, and we want to know how the
course has helped with your job and exam performance. There are two ways to submit feedback:
1. Course evaluation survey: If you attend a live, instructor-led training session, then your instructor
will provide a survey on the last day of class. After completing the survey, you’ll receive a course
completion certificate. Once you’ve had a chance to practice what you’ve learned, you’ll receive a
follow-up survey approximately two months after completing the course.
2. Digital kit feedback: Use the Feedback button in the digital version of the course materials to
submit your comments.
We make regular updates to our content in response to your feedback, so please share it with us.
Special thanks to our Cisco Authorized Learning Partners in making these materials available.
Thank you again for choosing Cisco.

Table of Contents


Course Introduction
   Overview
   Course Goal
   Course Flow
   Cisco Training and Certifications
   Student Introductions

Section 1: Introducing the DevOps Model
   DevOps Philosophy
   DevOps Practices
   Discovery 1: Interact with GitLab Continuous Integration
   Summary Challenge
   Answer Key

Section 2: Introducing Containers
   Container-Based Architectures
   Linux Containers
   Docker Overview
   Docker Commands
   Discovery 2: Explore Docker Command-Line Tools
   Summary Challenge
   Answer Key

Section 3: Packaging an Application Using Docker
   Dockerfiles
   Discovery 3: Package and Run a WebApp Container
   Golden Images
   Safe Processing Practices
   Discovery 4: Build and Deploy Multiple Containers to Create a Three-Tier Application
   Summary Challenge
   Answer Key

Section 4: Deploying a Multitier Application
   Linux Networking
   Docker Networking
   Discovery 5: Explore Docker Networking
   Docker Compose
   Discovery 6: Build and Deploy an Application Using Docker Compose
   Summary Challenge
   Answer Key

Section 5: Introducing CI/CD
   Continuous Integration
   CI Tools
   DevOps Pipelines
   Summary Challenge
   Answer Key

Section 6: Building the DevOps Flow
   GitLab Overview
   GitLab CI Overview
   Discovery 7: Implement a Pipeline in GitLab CI
   Continuous Delivery with GitLab
   Discovery 8: Automate the Deployment of an Application
   Summary Challenge
   Answer Key

Section 7: Validating the Application Build Process
   Automated Testing in the CI Flow
   Discovery 9: Validate the Application Build Process
   Summary Challenge
   Answer Key

Section 8: Building an Improved Deployment Flow
   Postdeployment Validation
   Discovery 10: Validate the Deployment and Fix the Infrastructure
   Release Deployment Strategies
   Summary Challenge
   Answer Key

Section 9: Extending DevOps Practices to the Entire Infrastructure
   Introduction to NetDevOps
   Infrastructure as Code
   Discovery 11: Build a YAML IaC Specification for the Test Environment
   Summary Challenge
   Answer Key

Section 10: Implementing On-Demand Test Environments at the Infrastructure Level
   Configuration Management Tools
   Terraform Overview
   Discovery 12: Manage On-Demand Test Environments with Terraform
   Ansible Overview
   Ansible Inventory File
   Use the Cisco IOS Core Configuration Module
   Jinja2 and Ansible Templates
   Basic Jinja2 with YAML
   Configuration Templating with Ansible
   Discovery 13: Build Ansible Playbooks to Manage Infrastructure
   Discovery 14: Integrate the Testing Environment in the CI/CD Pipeline
   Discovery 15: Implement Predeployment Health Checks
   Summary Challenge
   Answer Key

Section 11: Monitoring in NetDevOps
   Introduction to Monitoring, Metrics, and Logs
   Introduction to Elasticsearch, Beats, and Kibana
   Discovery 16: Set Up Logging for the Application Servers and Visualize with Kibana
   Discovery 17: Create System Dashboard Focused on Metrics
   Discovery 18: Use Alerts Through Kibana
   Introduction to Prometheus and Instrumenting Python Code for Observability
   Discovery 19: Instrument Application Monitoring
   Discovery 20: Use Alerts and Thresholds to Notify Webhook Listener and Webex Teams Room
   Summary Challenge
   Answer Key

Section 12: Engineering for Visibility and Stability
   Application Health and Performance
   AppDynamics Overview
   Troubleshoot an Application Using AppDynamics with APM
   Chaos Engineering Principles
   Summary Challenge
   Answer Key

Section 13: Securing DevOps Workflows
   DevSecOps Overview
   Application Security in the CI/CD Pipeline
   Infrastructure Security in the CI/CD Pipeline
   Discovery 21: Secure Infrastructure in the CI/CD Pipeline
   Summary Challenge
   Answer Key

Section 14: Exploring Multicloud Strategies
   Application Deployment to Multiple Environments
   Public Cloud Terminology Primer
   Tracking and Projecting Public Cloud Costs
   High Availability and Disaster Recovery Design Considerations
   IaC for Repeatable Public Cloud Consumption
   Cloud Services Strategy Comparison
   Summary Challenge
   Answer Key

Section 15: Examining Application and Deployment Architectures
   Twelve-Factor Application
   Microservices Architectures
   Summary Challenge
   Answer Key

Section 16: Describing Kubernetes
   Kubernetes Concepts: Nodes, Pods, and Clusters
   Kubernetes Concepts: Storage
   Kubernetes Concepts: Networking
   Kubernetes Concepts: Security
   Kubernetes API Overview
   Discovery 22: Explore Kubernetes Setup and Deploy an Application
   Summary Challenge
   Answer Key

Section 17: Integrating Multiple Data Center Deployments with Kubernetes
   Kubernetes Deployment Patterns
   Kubernetes Failure Scenarios
   Kubernetes Load-Balancing Techniques
   Kubernetes Namespaces
   Kubernetes Deployment via CI/CD Pipelines
   Discovery 23: Explore and Modify a Kubernetes CI/CD Pipeline
   Summary Challenge
   Answer Key

Section 18: Monitoring and Logging In Kubernetes
   Kubernetes Resource Metrics Pipeline
   Kubernetes Full Metrics Pipeline and Logging
   Discovery 24: Kubernetes Monitoring and Metrics—ELK
   Summary Challenge
   Answer Key

Course Introduction

Overview
Implementing DevOps Solutions and Practices Using Cisco Platforms (DEVOPS) is a five-day course that
teaches DevOps practices and how to apply them to deployment automation, enabling automated
configuration, management, and scaling of cloud microservices and infrastructure processes on Cisco
platforms.

Skills and Knowledge


This subtopic lists the knowledge and skills that you should have before beginning this course. It also
includes recommended Cisco learning offerings that may help you meet these prerequisites.
Knowledge and skills you should have before attending this course:
• Basic programming language concepts and familiarity with Python
• Basic understanding of compute virtualization
• Ability to use Linux text-driven interfaces and CLI tools, such as SSH, bash, grep, ip, vim/nano, curl,
ping, traceroute, and Telnet
• Foundational understanding of Linux-based operating system architecture and system utilities
• CCNA-level core networking knowledge
• Foundational understanding of DevOps concepts
• Awareness and familiarity with continuous integration and continuous deployment (CI/CD) concepts
• Hands-on experience with Git
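The CLI fluency these prerequisites call for is modest. A few illustrative one-liners follow; the host addresses and file names are placeholders, and the commands that need a network or a Linux host are commented out so the sketch runs anywhere:

```shell
# grep: filter matching lines from a stream (here fed by printf instead of a log file)
printf 'INFO started\nERROR disk full\nINFO done\n' | grep ERROR

# bash: loop over hosts and test reachability (10.0.0.x are placeholder addresses)
# for h in 10.0.0.1 10.0.0.2; do ping -c1 "$h" >/dev/null && echo "$h up"; done

# curl: fetch only the HTTP response headers from a web service
# curl -sI https://example.com

# ssh + ip: run a command on a remote Linux machine (admin@10.0.0.1 is hypothetical)
# ssh admin@10.0.0.1 'ip -brief address'
```

If reading and adapting lines like these feels routine, you meet the CLI prerequisite.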

Recommended Cisco learning offerings:


• Developing Applications and Automating Workflows using Cisco Core Platforms (DEVASC)
• Developing Applications Using Cisco Core Platforms and APIs (DEVCOR)
• DevNet: start with a curated list of learning content

Course Goal
This topic describes the course goal.
The goal of this course is to teach DevOps practices and how to apply them to deployment automation,
enabling automated configuration, management, and scaling of cloud microservices and infrastructure
processes on Cisco platforms.

Course Flow
The schedule reflects the recommended structure for this course. This structure allows enough time for the
instructor to present the course information and for you to work through the lab activities. The exact timing
of the subject materials and labs depends on the pace of your specific class.
Day 1
• Section 1: Introducing the DevOps Model
• Discovery 1: Interact with GitLab Continuous Integration
• Section 2: Introducing Containers
• Discovery 2: Explore Docker Command-Line Tools
• Section 3: Packaging an Application Using Docker
• Discovery 3: Package and Run a WebApp Container

Day 1 (Cont.)
• Discovery 4: Build and Deploy Multiple Containers to Create a Three-Tier Application
• Section 4: Deploying a Multitier Application
• Discovery 5: Explore Docker Networking
• Discovery 6: Build and Deploy an Application Using Docker Compose
• Section 5: Introducing CI/CD

Day 2
• Section 6: Building the DevOps Flow
• Discovery 7: Implement a Pipeline in GitLab CI
• Discovery 8: Automate the Deployment of an Application
• Section 7: Validating the Application Build Process
• Discovery 9: Validate the Application Build Process
• Section 8: Building an Improved Deployment Flow
• Discovery 10: Validate the Deployment and Fix the Infrastructure
• Section 9: Extending DevOps Practices to the Entire Infrastructure
• Discovery 11: Build a YAML IaC Specification for the Test Environment

Day 3
• Section 10: Implementing On-Demand Test Environments at the Infrastructure Level
• Discovery 12: Manage On-Demand Test Environments with Terraform
• Discovery 13: Build Ansible Playbooks to Manage Infrastructure
• Discovery 14: Integrate the Testing Environment in the CI/CD Pipeline
• Discovery 15: Implement Predeployment Health Checks
• Section 11: Monitoring in NetDevOps
• Discovery 16: Set Up Logging for the Application Servers and Visualize with Kibana

Day 4
• Discovery 17: Create System Dashboard Focused on Metrics
• Discovery 18: Use Alerts Through Kibana
• Discovery 19: Instrument Application Monitoring
• Discovery 20: Use Alerts and Thresholds to Notify Webhook Listener and Webex Teams Room
• Section 12: Engineering for Visibility and Stability
• Section 13: Securing DevOps Workflows
• Discovery 21: Secure Infrastructure in the CI/CD Pipeline
• Section 14: Exploring Multicloud Strategies

Day 5
• Section 15: Examining Application and Deployment Architectures
• Section 16: Describing Kubernetes
• Discovery 22: Explore Kubernetes Setup and Deploy an Application
• Section 17: Integrating Multiple Data Center Deployments with Kubernetes
• Discovery 23: Explore and Modify a Kubernetes CI/CD Pipeline
• Section 18: Monitoring and Logging In Kubernetes
• Discovery 24: Kubernetes Monitoring and Metrics—ELK

Cisco Training and Certifications
Cisco training and certification programs prepare students, network engineers, and software developers for
today’s most critical jobs. The industry needs talented professionals with validated skill sets to power
success in this changing technology landscape. The latest programmable network infrastructure enables
software developers to create new applications and experiences, and enables IT professionals to implement
automation and DevOps workflows on intent-based networks. Cisco training and certifications help all
learners meet the demands of the enterprise.
To learn more about how Cisco programs can help you remain marketable, job-ready, and poised for your
next career goal, visit http://www.cisco.com/c/en/us/training-events/training-certifications/overview.html.

Training Resources
You are encouraged to join the Cisco Learning Network—a dynamic learning community for certified
Cisco professionals and those seeking certification, where you can share questions, suggestions, and
information about the Cisco training and certifications program and other certification-related topics. To
register, visit https://learningnetwork.cisco.com.
The Cisco Learning Network also offers various resources for learning and interaction with members of the
Cisco certification community, including:
• Certification communities: https://learningnetwork.cisco.com/s/communities
• IT training videos and seminars: https://learningnetwork.cisco.com/s/all-media
• Cisco Certifications: https://learningnetwork.cisco.com/s/certifications
• Webinars and events: https://learningnetwork.cisco.com/s/event-list

Cisco Training Services and Cisco DevNet offer hands-on, instructor-led training and self-study:
• Cisco Training Services product and solution training: https://www.cisco.com/c/en/us/training-
events/training-certifications/training/training-services/courses.html
• Cisco DevNet programmability self-study and practice: https://developer.cisco.com/

Student Introductions
• Your name
• Your company
• Job responsibilities
• Skills and knowledge
• Brief history
• Objective

Section 1: Introducing the DevOps Model

Introduction
Development and operations (DevOps) combines business practices, philosophies, and tools to allow
development and operations teams to keep pace with or exceed the speed required to compete in today’s
market. The DevOps model allows faster innovation, faster delivery of product improvements, reliability at
scale, and improved security. This section introduces the DevOps philosophy and why it is so important in
today’s marketplace.

DevOps Philosophy
Need for a New Operational Model
In recent years, certain industry trends and innovations have transformed the way business is done.
Businesses are transforming to include mobility, internet of things, and cloud services to meet market
demands and require agility, simplicity, speed, and innovation to keep up with market trends. This
transformation, often referred to as a digital transformation, requires businesses to introduce new tools,
culture, and processes to change the way things are done, especially regarding how developers and
operations relate to one another.

The speed at which business changes and adapts to new requirements is no longer fast enough to adapt to
current industry expectations. To meet the demands of digital transformation, organizations are exploring
DevOps and how it can be applied within an enterprise to allow a faster, safer, and more reliable state of
operations.

DevOps Demystified
Traditionally, software developers and IT operations are in separate silos. Software developers focus on
creating features and delivering monolithic applications. IT Operations focuses on enabling connectivity and
ensuring a reliable and stable environment so that the developers can deliver value. DevOps methodologies
break down the silos between the development of software and the operations of deploying and maintaining
the software.

The main benefit of siloing development and operations teams was that each team could specialize and use
its expertise to deliver value. Before enterprises embarked on digital transformations, this approach was
common and allowed enterprises to keep up with industry trends.
Now that digitization has disrupted enterprises, IT Operations cannot focus purely on network connectivity
without attending to automation and DevOps culture. Applications are no longer released
monolithically, every few years; now, applications continually have new releases, add new features, and
require regular updates to the IT infrastructure, with the expectation of a quick turnaround.

Dev and Ops: The Problem


First, developers care about writing software. They care about application programming interfaces (APIs),
libraries, and code. The software they write should be of high quality and meet customer expectations.
Success is determined by whether the software does the job as expected and if it was completed and
delivered on time.
The following table presents the differences between the developers and operations.

Developers World
• Care about
– Writing software
– Working code
– APIs
– Libraries
– Sprints
• Success
– Software worked: Test locally and on servers
– Finished sprint

Operations World
• Care about
– Everything is stable
– Standards
– Templates
– Not getting bothered at 2:00 a.m.
• Success
– Software is stable
– Backup and restore works
– Systems are operating within defined thresholds

However, developers traditionally do not pay as much attention to what happens after the software
application is delivered to a production data center. This issue creates a divide between development and
operations.
On the other hand, operations cares about software standards and stability. The Operations Department has
rigid change management windows for rolling out new software. Success for operations is represented by a
stable and functional environment. The drivers and the definition of success are clearly different for
developers and operations. Furthermore, as the business continues to drive the development of new
applications, developers continue to write the software under their strict deadlines, but the Operations
Department often finds that it is difficult to roll out new software because of the backlog of change requests.
Developers and operations look at the world from different perspectives:
• Development wants to release new versions of applications and new products as fast as possible.
• Operations wants a reliable and stable environment.

This figure shows how the “wall of confusion” creates silos within an IT organization. One of the key goals
of DevOps is to break down silos and enhance communication between teams. This goal is extremely
important for organizations that want to deploy software and services faster and more frequently.

The problem lies in the fact that development and operations are often in different, isolated parts of an
organization.
Another way to visualize this issue is to look at the work calendars for each team. If they are using Agile,
development teams are likely completing milestones every two weeks. By contrast, the operations calendar
in the figure depicts the infrequent openings for changes. Due to factors like risk mitigation, actual change
windows may be scheduled weeks or months away from a desired rollout.

Development is trying to serve the business through features and software, and operations is trying to serve
the business through stability. The challenge is for them to work together successfully.
When development and operations teams do not work together, they become isolated from each other and
sometimes from the rest of the organization. This practice creates an unproductive environment for the
implementation of new applications.
Methodologies such as waterfall can create these silos. The difficulty of sharing company resources has
been known to create rifts among teams or a culture of blame and hostility. Teams tend to become more
specialized as they focus on their small slice of a development process.
DevOps seeks to break down these silos and restore communication and collaboration among teams. The
hope is that this practice will result in increased output and quality and make useful information available to
all teams, rather than the compartmentalization that traditional methodologies produce.

What Is DevOps?
Defining DevOps is difficult because of the somewhat ambiguous nature of its philosophies, goals, and role
in the software development lifecycle.
Some key DevOps characteristics are the following:
• Changes operational approach and mindset
• Changes culture
• Enhances the level of communication
• Automates all things
• Delivers software, products, and services faster
• Requires commitment at all levels
• Breaks down silos and improves collaboration

One way to demystify the DevOps paradigm is to investigate the characteristics of organizations that have
adopted the DevOps model. Examining how these organizations are structured, how they produce new
applications, and how they provide continual improvement is very helpful in arriving at a clear definition.
Some key characteristics of these organizations are as follows:
• They embrace new technology.
• They embrace a collaborative culture.
• They maintain a well-defined common goal among teams.

It is also important to note what DevOps is not. DevOps is not hardware that can be purchased or a piece of
software that can be installed. Although there are software tools that are often used in a DevOps culture,
organizations that embrace DevOps are embracing just that: a new culture.

CALMS Model
DevOps uses lean and agile techniques to merge development and operations and deliver services faster,
more frequently, and on time.
Many Web 2.0 companies, such as Google, Netflix, Facebook, and Amazon, have embraced a DevOps
culture that allows them to push tens or even hundreds of changes in a single day. This practice allows these
organizations to deliver new products and features to their customers almost on demand.

The guiding principles of these organizations can be encapsulated in the acronym CALMS:
• Culture: The organization must be ready to make this type of change.
• Automation: The technology must enable faster testing and feature deployment.
• Lean: The management philosophy goal is to reduce all waste.
• Measurement: Information about ongoing operations is shared in real time.
• Sharing: DevOps units work together and are vocal when things go right or wrong.

CALMS Breakdown
The following table outlines the main characteristics of the culture, automation, and lean elements of the
CALMS model.

Culture
• Good habits
– Trust
– Respect
– Supportive
– Collaborative
– No blame, so no victims
– Common goals
• Management evolution

Automation
• Deploy automation
– Ansible
– Chef
– Puppet
– Terraform
• Automation occurs everywhere and between teams.

Lean
• Identify waste
• Learn continuously
• Focus on people
• Optimize the whole

Culture is arguably the most crucial aspect of an organization that is interested in implementing an agile and
DevOps approach to software development. A DevOps culture relies and thrives on traits like mutual trust
and respect among colleagues, support, collaboration, and a sense that no one will be blamed when
something goes wrong. These traits, along with shared goals, give people working for DevOps
organizations the freedom and motivation to innovate, experiment, and collaborate.
However, this type of culture requires a very different flow of information in comparison to more traditional
organizational structures. For example, managers must embrace a collaborative information-sharing
atmosphere and abandon a top-down approach to organizational management.
Within organizations that have embraced the DevOps approach, operations teams get more involved. The
knowledge and tools that are traditionally used only within the software development or systems
administration silos are shared with teams across the organization.
Automation tools such as Ansible, Chef, and Puppet are not new, but breaking down silos and sharing
knowledge is. Today, operations teams work with development teams to use automation techniques to push
changes frequently to help the organization succeed. In a DevOps environment, automation will occur
everywhere and between teams.
The term lean comes from Lean Manufacturing, originally developed by Toyota, a set of principles for
achieving quality, speed, and customer alignment. The primary purpose of Lean is to eliminate anything that
does not add value and work only on what is absolutely necessary. Eliminating waste means eliminating
useless meetings, tasks, and documentation, but it also means eliminating time spent building for unknown
or merely anticipated future needs.
Lean software development is also about learning. You need to structure work to ensure that you are
continuously learning while providing value in the present.
Next, Lean is concerned with people. Typical Lean ideas include putting your team first, responding
promptly, listening, and hearing everyone’s opinion. This practice means that people know it is okay to fail,
but that they are expected to learn from their mistakes.
Finally, Lean places a strong emphasis on what it calls “the system”, the way that the team operates as a
whole.
The following table outlines the main characteristics of the measurement and sharing elements of the
culture, automation, lean, measurement, sharing (CALMS) model.

Measurement
• Mean time to repair (MTTR)
• Number and frequency of outages or performance issues
• Number and cost of resources
• Attitude toward continuous improvement
• Rewards and feelings of success
• Release and deployment
• User acceptance testing
• Measure everything

Sharing
• Share code, ideas, and problems
• Leverage common repositories like GitHub or GitLab
• ChatOps

John Willis, a veteran of the DevOps industry, once said, “If you can’t measure, you can’t improve. A
successful DevOps implementation will measure everything it can as often as it can … performance metrics,
process metrics, and even people-metrics.”
Measurements help you know what is going on in real time and what has happened over time. Many metrics
can be gathered and measured, including the following:
• Number and frequency of software releases
• Volume of defects
• Time and cost per release
• Mean time to repair (MTTR)
• Number and frequency of outages and performance issues
• Revenue and profit impact of outages and performance issues
• Number and cost of resources
• Extraneous lines of code
• Gathering and managing requirements
• Agile development
• Release and deployment
• Unit testing
• User acceptance testing
• Quality assurance
• Application performance monitoring
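As a hedged illustration of the measurement principle, the sketch below computes two of the metrics listed above from incident records; the record field names ("detected", "resolved") are assumptions made for the example, not part of any Cisco tooling.

```python
# Hedged sketch: computing MTTR and outage frequency from incident records.
# The field names and sample data are illustrative assumptions.
from datetime import datetime, timedelta

def mean_time_to_repair(incidents):
    """Average time from detection to resolution, as a timedelta."""
    repairs = [i["resolved"] - i["detected"] for i in incidents]
    return sum(repairs, timedelta()) / len(repairs)

def outage_frequency(incidents, months):
    """Outages per month over the observation window."""
    return len(incidents) / months

incidents = [
    {"detected": datetime(2022, 1, 3, 9, 0), "resolved": datetime(2022, 1, 3, 10, 30)},
    {"detected": datetime(2022, 2, 7, 14, 0), "resolved": datetime(2022, 2, 7, 14, 30)},
]
print(mean_time_to_repair(incidents))         # 1:00:00
print(outage_frequency(incidents, months=2))  # 1.0
```

Metrics like these only become useful when they are gathered continuously and shared, rather than computed once by hand.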

There are other factors that can be more difficult to measure quantitatively but are nevertheless very
important to examine. Some of these measurements are the following:
• Attitude toward continuous improvement
• Obsession with metrics
• Technological experimentation
• Team autonomy
• Rewards and feelings of success
• Hierarchical and political obstacles and annoyances
• Inspiring and fostering creativity
• Organizing teams around projects rather than skill sets
• Constantly dancing on the edge of failure (in a good way)
• Position around business demand

In the DevOps paradigm, sharing is about working together and making sure that everybody has the same
understanding. From code to problems, everything should be shared.
Hubot, for example, is a tool that helps make this effort possible. From an automated change perspective,
Hubot is a chat bot that GitHub developed and made open-source. By using a chat bot like Hubot, you can
ensure that everyone sees everything. It is possible to integrate Hubot into collaboration platforms like
Cisco Jabber and Slack. Common workflows can be automated to allow querying network devices, making
changes, and other actions from a group chat. Because everyone in the group sees what is happening,
communication is improved, and a great opportunity exists to train new engineers and new hires.
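As a hedged sketch of the ChatOps idea described above (not Hubot's actual API), a group-chat command dispatcher might look like the following; the trigger word, command name, and sample output are all assumptions made for illustration.

```python
# Minimal ChatOps-style command dispatcher sketch. The trigger word, command
# names, and sample data are assumptions, not Hubot's real interface.
def show_version(device):
    return f"{device}: version 17.3 (sample data)"

COMMANDS = {"version": show_version}

def handle_message(text):
    """Dispatch group-chat messages of the form 'bot <command> <device>'."""
    parts = text.split()
    if len(parts) == 3 and parts[0] == "bot" and parts[1] in COMMANDS:
        return COMMANDS[parts[1]](parts[2])
    return "Unknown command"

# Everyone in the chat room sees both the request and the reply.
print(handle_message("bot version router1"))  # router1: version 17.3 (sample data)
```

Because the request and reply both appear in the shared room, the whole team, including new hires, learns the workflow by watching it happen.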
1. Which two statements about development and operations are true? (Choose two.)
a. Developers care about writing software.
b. The Network Operations team traditionally does not pay as much attention to what happens after
the software application goes into a production data center.
c. The Network Operations team cares about software standards and stability.
d. The drivers and the definition of success are the same for developers and operations.
e. Developers and operations have the same synergistic mindshare.

DevOps Practices
Agile and Lean methodologies have influenced DevOps practices. Because Agile projects are iterative and
the Lean methodology encourages the creation of a minimum viable product, there needs to be an
associated process and tooling to support them. The process that is used to help deliver Agile and Lean projects
is continuous integration, continuous delivery, and continuous deployment, often referred to as
CI/CD for short.
Continuous integration is the constant merging of development work into the code base so that automated
testing can catch problems early. Continuous delivery is a software package delivery mechanism where
code is staged for review and inspection before release. Continuous deployment relies on both continuous
integration and continuous delivery to automatically release code into production when it is ready, allowing
a constant flow of new features into production.
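The three practices can be sketched as a toy pipeline; the stage behavior and data shapes below are illustrative assumptions, not a real CI/CD system.

```python
# Toy model of CI -> continuous delivery -> continuous deployment.
# All data shapes and stage behavior are illustrative assumptions.
def continuous_integration(commit):
    """Merge work and run automated tests to catch problems early."""
    return all(test() for test in commit["tests"])

def continuous_delivery(commit):
    """Stage a reviewed artifact in a ready-for-production state."""
    return {"artifact": commit["id"], "ready_for_production": True}

def continuous_deployment(release):
    """Automatically release staged code into production when it is ready."""
    return "deployed" if release["ready_for_production"] else "held"

commit = {"id": "abc123", "tests": [lambda: True, lambda: True]}
if continuous_integration(commit):
    release = continuous_delivery(commit)
    print(continuous_deployment(release))  # deployed
```

The key distinction the sketch captures is that delivery stops at "ready for production," while deployment automates the final release step as well.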

Infrastructure as Code
The CI/CD concept can also be applied to managing infrastructure. Infrastructure as Code (IaC) is the
practice of writing high-level code that automates the provisioning and deployment of infrastructure
components. It is not just writing a few scripts; rather, it uses software development practices like the following:
• Version control
• Design patterns
• Testing

The typical flow of using these software development practices to update a piece of infrastructure is as
follows:
• Pull the latest high-level code, representing the infrastructure’s configuration, from a version control
system. Make any changes that are intended for the device and push the code to the version control
system.
• Execute prechecks, such as verifying operational state, on the infrastructure and in a testing
environment. Perform a dry run of the change.
• Push updates to the version control system, which will deploy the updated code and change the
infrastructure’s configuration.
• Perform validation checks to ensure that all intended changes were successful or roll back to the
previous configuration from the version control system.
• Log and report the results.
• Notify the team through automated messages with updates from the system.
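The flow above can be sketched as a Python skeleton; every function here is a placeholder assumption standing in for real version-control and configuration-management tooling.

```python
# Skeleton of the IaC update flow described above. All functions are
# placeholder assumptions, not real version-control or deployment tooling.
def pull_config():
    """Pull the latest high-level configuration from version control."""
    return {"hostname": "edge-router", "ntp_server": "10.0.0.1"}

def precheck(config):
    return True   # e.g., verify operational state in a testing environment

def dry_run(config):
    return True   # simulate the change before deploying it

def deploy(config):
    return {"applied": config}

def validate(result):
    return "applied" in result   # confirm the intended changes succeeded

def rollback():
    return "rolled back to previous configuration"

def notify(message):
    print(message)   # e.g., an automated chat or email update

config = pull_config()
config["ntp_server"] = "10.0.0.2"   # the intended change
if precheck(config) and dry_run(config):
    result = deploy(config)
    notify("change succeeded; results logged" if validate(result) else rollback())
```

In a real pipeline, each placeholder would be a job run by the CI/CD orchestrator, with the version control system as the single source of truth for the configuration.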

CI/CD IaC Toolchain


To see CI/CD in action with several of the tools mentioned, consider the process that a software
development team would follow to release a new version of the company website.
The CI/CD toolchain for an IaC project involves the following components:
• Code editor
• Version control for the code
• CI/CD orchestrator
• Configuration management tools
• Testing and verification
• Monitoring and notification

There are various options for each of these toolchain components. If your company already has some of
these software licenses in production, it might make sense to use your existing licenses before exploring
other options. In other cases, you may be in an environment with no DevOps tools and you will have to
investigate the benefits and trade-offs for each one because there is no one-size-fits-all solution.

CI/CD Example
Using CI/CD for deploying software for a new application involves several of the previously shown tools.
For example, a software development team might need to release a new version of the
company website. In that case, the following workflow would occur:
• The developer pulls the latest software code from GitHub.
• The developer makes the necessary changes to the code and pushes the new commits to the remote
GitHub repository.
• A Jenkins continuous integration server detects code modifications and begins a job.
• The Jenkins job creates a test environment with the proposed changes and runs tests.
• The Jenkins job runs multiple tests including integration tests, smoke tests, and others.
• The test results are reported back to Jenkins.
• The test results are sent to Cisco Webex Teams for the development team to see.
• If the tests pass, the code is deployed to an artifact repository.
• The code is automatically delivered to a ready-for-production state.

Microservices
One of the innovations in deploying Agile and Lean software is a shift from monolithic applications to
applications that are designed to have microservices. A microservice is a small, focused piece of software
that is independently developed, tested, and deployed as a part of a larger application. It is stateless and
loosely coupled and has a programming language and technology that are independent of the other
microservices. A huge benefit of this approach is that using microservices allows you to have highly
scalable and fault-tolerant applications. These microservices more easily integrate into a CI/CD pipeline
because you can update, test, and deploy only the pieces of the application that need to be updated without
impacting or changing the whole application.

Containers
One of the innovations that enabled the use of microservices is containers. Containers have a significantly
faster deployment, faster migrations, less overhead, and a faster restart than normal virtual machines.
Containers are different from virtual machines because containers share a single operating system kernel.
Containers are more useful when you want to run multiple instances of a single application.

Containers helped make DevOps practices possible because they enable better alignment between
developers and operations. Containers create a natural segmentation of effort. They also provide guaranteed
consistency for CI/CD. Because containers can be more quickly and easily deleted and re-created than
virtual machines, they are easier for operations to manage and easier for developers to make frequent
deployments. Containers also allow applications that are built on containers on a laptop to be consistent
when deployed to production. This situation aligns well with the CI/CD process of developing locally and
then pushing to the pipeline.

Monitoring, Logging, and Alerting


Because DevOps includes many automated processes in the CI/CD pipeline, monitoring and logging are
critical for keeping operations moving smoothly. DevOps values measurement (part of the CALMS
acronym), and having precise benchmarks for application performance, container performance, and
infrastructure performance requires software that is meant for those use cases.

1. What does CI/CD stand for?


a. continuous integration / continuous delivery / deployment
b. continuous isolation / continuous demonstration / deployment
c. cycle integration / cycle delivery / deployment
d. circular integration / circular delivery / deployment

Discovery 1: Interact with GitLab Continuous
Integration
Introduction
You will examine the concepts and highlight the value of using version control systems and automated
testing within a continuous integration (CI) pipeline, and then eventually with continuous deployment to
yield a CI/CD pipeline.
Using GitLab, GitLab-CI, and continuous testing, you will walk through a Python script for managing
network infrastructure. The script needs to pass tests before accepting changes and ultimately being
deployed to the master branch that is used for production network operations.
The Python script also requires a couple of input files to work properly, and the project has tests to verify
that the script fails properly when the files are not passed in correctly. You will update the error messages
that these exceptions return, so if the tests fail, it will be clearer for the engineer who submits a request or
update.

Topology

Job Aids

Device Information

Device                Device Description     IP Address       Credentials

Student Workstation   Linux Ubuntu VM        192.168.10.10    student, 1234QWer

GitLab                GitLab Ubuntu Server   192.168.10.20    student, 1234QWer

Task 1: Update a Project
You will browse an existing GitLab project and update it using the built-in web integrated development
environment (IDE). You will simply update basic error messages that are used in the Python script.

Activity

Step 1 In the Student Workstation, open your browser window and navigate to https://git.lab. Log in to GitLab.

Step 2 Once you have logged into GitLab, browse to the explore-pipeline project. It can be found at:
https://git.lab/cisco-devops/explore-pipeline.

Forking the Project
Now you will fork the project. Most Git services and platforms provide this feature, allowing users to copy
the project into their own user profile, in this case, your "student" profile.

Changes are tracked between copies. Changes made to the "upstream" (the original) project are not
automatically brought into the fork, but the project's main page will indicate that the fork is missing newer
commits. Forks also make it easier to make change requests to the upstream project. This feature will be
discussed later.

GitLab provides a Fork option in the upper right corner next to the blue Clone button. Clicking the fork
option will create a fork using your profile.

Step 3 Click Fork to create a fork using your profile.

Step 4 After clicking Fork, GitLab prompts you to select the profile that it should fork to. Click the icon with your
profile. When it completes the fork process, it will take you to that new fork (note the URL).

Before starting to make changes to a project, it is important to create a new branch. Branching allows
developers to work on multiple isolated copies of a project, which keeps unrelated changes independent
from each other.

Step 5 Click the plus (+) icon in the project toolbar to reveal the available options. Choose New branch to go to a
form to create a new branch.

Step 6 Because you will be updating the exit message, name the branch exit-message and click Create branch.

Step 7 After creating the branch, GitLab brings you back to your fork's main page, which is now on the exit-
message branch (previously it was on the master branch).

Now that the environment is set up, it is time to update the error messages. GitLab provides an IDE feature
that works well for small changes like the ones you perform in this lab.

This feature shows you what you will be doing via your own text editor later. For now, use the built-in IDE.

Step 8 Click Web IDE to launch the IDE.

Step 9 The error messages are in the network_operations.py file. In the left panel listing the files and directories,
click the network_operations.py file. Scroll to the bottom, where the parse_args function validates that the
arguments are files and exits early if one of them is not. The messages currently start with "Please
specify valid, readable...", but would read better with an indefinite article: "Please specify a
valid, readable...". Make those updates and click Commit on the bottom left.
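The guide does not reproduce the script body, so the following is only a hedged reconstruction of the validation pattern Step 9 describes; apart from the parse_args name and the message wording, everything here is an assumption.

```python
# Hedged reconstruction of the validation pattern described in Step 9. The
# real network_operations.py is not shown in this guide, so the signature
# and body here are assumptions made for illustration.
import os
import sys

def parse_args(inventory_file, config_file):
    """Exit early if either argument is not a valid, readable file."""
    for path in (inventory_file, config_file):
        if not (os.path.isfile(path) and os.access(path, os.R_OK)):
            sys.exit(f"Please specify a valid, readable file: {path}")
    return inventory_file, config_file
```

Passing a message to sys.exit() raises SystemExit, prints the message to stderr, and returns a nonzero status, which is what the project's tests check for.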

When you click Commit, the commit process starts.

Step 10 The first step is to add the appropriate files to the staging area for the commit. The upper left of the IDE
shows files that have been changed, but not staged. The network_operations.py file should be the only file
there. Click the file and then click the Stage button in the upper right corner.

After clicking the Stage button, the network_operations.py file will move into the Staged changes section
of the IDE. The changes are now ready to be committed.

Step 11 Click the Commit button to bring up the commit page.

Step 12 The next screen asks you to provide a commit message. Add a message in the message box in the lower left
corner and click Commit. The changes will be committed. Use the message Update exit message for
invalid files.

Task 2: Review and Update CI Tests
Now you will see how tests are run for any change within this project. It is possible to have these tests run at
different times, such as upon commit or merge. Here, you will see that tests are executed before submitting
your request upstream.

GitLab's default behavior is to start a new Merge Request back to the repository from which the project
was forked. Other Git products might call this behavior a Pull Request or Change Request.

Before starting a Merge Request, it is a good idea to verify that the CI tests pass for the project to which you
are trying to contribute. Keep in mind that you have not seen much of the core Python script for this project.
However, for even a small change like updating error messages, all CI tests should run and pass before
opening a Merge or Pull Request.

Activity

Step 1 To see the status of the tests, scroll to the bottom of the Merge Request and click the Pipeline tab. You will
learn all about pipelines in upcoming labs; for now, know that a pipeline dictates which tests run, and
when they run.

Step 2 The status indicates that the CI tests failed, so this issue will need to be fixed before submitting a Merge
Request. Hover over the red X icon in the Stages column; this action will bring up the CI test that failed. The
CI test that failed is named pytest. Clicking it will reveal why the test failed.

The CI test output shows that the exit message returned a different message than the tests expected. It is
good and common practice to ensure that you have adequate tests for anything that you store in your Git
service. Tests should cover anything that affects the use of your script, as well as the surrounding
scaffolding, such as making sure that the app has documentation, that the documents are formatted
properly, and that all files are linked properly.

So, in this case, the failure makes sense, because the commit you made updated the exit messages, but not
the tests. This example is a small test to verify output. Many more tests will be covered.
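The pattern behind this kind of CI failure can be reduced to a few lines: the test records an expected string, the script produces an actual string, and the two are compared. Both message values below are illustrative stand-ins, not the lab's real values.

```shell
# Expected message as recorded in the test file (stand-in value)
expected="Invalid file provided, exiting."
# Actual message produced by the script; after the Task 1 commit these
# drifted apart, which is exactly why the pytest job failed.
actual="Invalid file provided, exiting."

if [ "$actual" = "$expected" ]; then
  echo "output check: PASS"
else
  echo "output check: FAIL - update the expected message in the test file"
fi
```

Updating the script without updating the recorded expectation makes this comparison fail, which is the failure you are about to fix.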

Step 3 Review the CI test output.

Step 4 To fix the tests, click the explore-pipeline link to return to the repository's main page.

Step 5 Updating the test file uses the same process as updating the network_operations.py script. The first step is
to make sure that you are on the exit-message branch. GitLab lists the current branch on the left side of the
project toolbar. This box is a drop-down list that allows you to switch between branches.

Note Make sure that you are on the exit-message branch.

Step 6 Once you are on the exit-message branch, click Web IDE in the upper right of the project toolbar to start
editing the test file.

Step 7 To edit the test files, you will first click the tests directory to reveal its files. The tests are in the
test_network_operations.py file. Toward the top of this file, the expected message values are assigned to
variables that are named INTENT_MSG and INVENTORY_MSG. Update these values to match the
updates to network_operations.py, and click Commit to start the commit process.

Step 8 The first step in the commit process is to move the updated file or files to the staging area. Choose the
updated test file and click Stage.

Step 9 When the file is moved to the staging area, click Commit.

Step 10 Provide a Commit Message and click Commit again.

Task 3: Perform the Merge Request
You have updated the project, your tests are passing, and now it is time to perform a merge request from
your branch back into the upstream master branch. The exit-message branch should be ready for a merge
request now.

Activity

Step 1 Verify one last time that the CI tests passed. Scroll to the bottom of the Merge Request and click Pipelines;
the Status should be green.

Step 2 Since the tests pass, it is time to add a message to the Merge Request and submit it to the upstream project.
The Title section at the top of the Merge Request is where you provide a message. Make sure that the
Requestor's Source Branch is exit-message and the Target Branch is master.

Step 3 Scroll to the bottom of the request and click Submit merge request. The merge request is now submitted to
the maintainers of the project for review.
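When a maintainer accepts the Merge Request, the server performs the equivalent of a local git merge of your branch into master. The following self-contained sketch reproduces that flow; the repository, file contents, and messages are stand-ins.

```shell
# Throwaway repo standing in for the upstream project
cd "$(mktemp -d)" && git init -q .
git config user.email "[email protected]" && git config user.name "Student"
git checkout -q -b master                        # target branch of the MR
echo "base" > network_operations.py
git add . && git commit -q -m "Initial commit"

git checkout -q -b exit-message                  # the feature branch
echo "new exit message" >> network_operations.py
git commit -q -am "Update exit message for invalid files"

git checkout -q master                           # back on the target branch
git merge -q --no-ff exit-message -m "Merge branch 'exit-message'"
git log -1 --pretty=%s                           # prints the merge commit subject
```

The --no-ff flag forces a merge commit even when a fast-forward is possible, which mirrors how GitLab records accepted Merge Requests in the history.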

You are going to be working with different applications. These applications could be robust three-tier
web/app/db apps that you must ensure are properly built and tested before deploying new releases.

This overview has shown a few components in GitLab that you will see in more detail later. When you do,
keep in mind that it is more important to understand the mechanics than the GitLab-specific
implementation, because there are many other Git and CI solutions on the market.

Summary Challenge
1. Match the description with the category (Developers, Operations, DevOps):
   • Uses knowledge of software development best practices, automation frameworks, and cultural disciplines
   • Focuses primarily on enabling connectivity to deliver value
   • Focuses on creating features and delivering monolithic applications

2. How is a container different from a virtual machine?
a. Containers share a single operating system kernel.
b. Containers do not share a single operating system kernel.
c. Virtual machines share a single operating system kernel.
d. Virtual machines are faster than containers to boot.
3. What does CALMS stand for?
a. culture, automation, lean, measurement, sharing
b. containers, automation, lean, measurement, sharing
c. culture, automation, lean, monitoring, sharing
d. culture, automation, lean, measurement, silos
4. What is one example of a CALMS measurement in DevOps?
a. Identify waste
b. Ansible
c. No blame postmortem
d. MTTR
5. What does IaC stand for?
a. Infrastructure as Code
b. Isolate as Code
c. Infrastructure always Compiled
d. YAML as Code
6. What is an example of a CI/CD orchestrator?
a. Cisco Webex Teams
b. VIM
c. Jenkins
d. Git
7. What is one benefit of monitoring?
a. DevOps values measurement.
b. DevOps includes many automated processes in CI/CD, especially with containers.
c. Progressively cultivate strategic technologies.
d. Synergistically unleash client-based relationships.

Answer Key
DevOps Philosophy
1. A, C

DevOps Practices
1. A

Summary Challenge
1.

Developers: Focuses on creating features and delivering monolithic applications
Operations: Focuses primarily on enabling connectivity to deliver value
DevOps: Uses knowledge of software development best practices, automation frameworks, and cultural disciplines

2. A
3. A
4. D
5. A
6. C
7. B

Section 2: Introducing Containers

Introduction
Containers are part of many mainstream DevOps architectures and cloud deployments. Application owners
and data center infrastructure teams aim to shorten the development lifecycle, reduce operational costs, and
reduce complexity by deploying containers. Containers are similar to virtual machines in that they have
their own network interfaces, file systems, and resource controls, and are isolated from each other. However,
containers do not maintain their own operating system; they are typically lightweight, fast to start, and can
accommodate more applications per host than VMs.
You will be introduced to container architecture and examine Linux-based containers. Then you will be
introduced to the Docker container format, from the Docker company. After an introduction to the
commands for interacting with the Docker engine, you will explore containers in a discovery exercise that is
focused on downloading, running, adding host folders to containers, and looking at container logs.

Container-Based Architectures
One of the traditional challenges between development and operations is supporting developers to work
more effectively by accommodating faster iterations without compromising the stability of the production
environment. The risk of deploying large monolithic systems or application stacks stems from their
significant complexity and their associated processes for deployment and support during their lifetime.
The idea of containerizing software gained traction because it promised to alleviate some of these
challenges. Containerizing software breaks up complex systems into smaller components that can be
developed, packaged, and deployed relatively independently from each other and the underlying supporting
infrastructure.

Like virtual machines (VMs), containers provide a sandbox for the running application to create a certain
degree of isolation. Such isolation is less strict than VMs, but generally has less overhead and greater
agility. You can build an application in a container on your local environment and then deploy that same
container image in production. This approach will reduce the deployment effort when moving between
different environments and teams.

VMs represented a paradigm shift that allowed applications to run independently inside smaller machines
on top of the same bare-metal server. This shift enabled migrations, use of templates, and better resource
management. Containers created a trade-off between the stricter isolation and operating system variety of
virtualization and the agility and resource optimization of even smaller units of execution.

                         Containers                          Virtual Machines

Run anywhere             Yes (architecture-dependent)        Yes (architecture-independent)

Operating system         Kernel-dependent; shares the        Not dependent on the host
                         host operating system kernel        operating system

Consistent run time      Yes                                 Yes

Application sandboxing   Yes                                 Yes, but stricter

Size on disk             Small                               Large

Overhead                 Low                                 High

Containers also introduced a shift in expected lifetime. Although a VM may take hours to be fully deployed
and be expected to run for weeks or months, a container takes minutes (even seconds) to deploy and is
expected to live for hours or days. By allowing greater agility and ease of iteration without increasing
risk, containers enable faster development cycles; individual instances or versions become less important
overall and are thus expendable.
Operating system kernel-dependent means that a Windows container must run on a Windows host and a
Linux container must run on a Linux host. Currently, a Linux container cannot run on a Windows host and
Windows containers cannot run on Linux hosts. With a rather complex deployment, Linux containers can
run on Windows inside a special Hyper-V-hosted VM that provides Linux kernel support.
Another advantage of containers is improved scale-out capabilities. Containers allow systems to rapidly
scale up the same application across many servers to manage higher demand and scale down as fast when
demand decreases.
Containers transform deployments from being machine-oriented to being application-oriented. This
transformation allows developers to not worry about the details of the host machines and their operating
systems, whereas infrastructure teams can maintain and upgrade their hardware and operating systems
without worrying about specific application dependencies.
For network infrastructure, containers enable a lightweight approach to virtual network functions and use
improved orchestration and lifecycle management solutions for their components. A second benefit is being
able to easily extend functionality of a network element by running a prepackaged containerized application
that does not depend on the operating system of the device, its libraries, or other runtimes to execute. Better
dependency management and ease of deployment simplify adding custom functionality to network devices
while maintaining isolation, so that the main functionality of such devices is not affected.

A container is a set of processes that run inside a specific namespace set. The inside of the container looks
like a VM with isolated processes, networking, and file system access. From the outside, it looks like
normal processes running on the host machine. All these containers share the same host operating system
kernel, are visible as isolated processes, and are managed by a common set of tools. These tools can, for
example, gather telemetry that is related to the containers that are running on the system. Such collected
metrics are tied to a specific application (because a container equals an application) instead of the whole
machine, where metrics would be a mix of signals from multiple applications.

An ecosystem of tools has grown around the application-centric deployment model, including functions
such as naming and service discovery, application-aware load balancing, vertical (instance size) or
horizontal (number of instances) scaling, Domain Name System (DNS) integration, monitoring, logging,
rollout, deployment, and more.
1. What are three properties of a container? (Choose three.)
a. large file size on disk
b. application sandbox
c. low overhead
d. contain secured boot sector
e. independent namespaces
f. must be from a private repository
g. must have a hypervisor present
2. Which kernel does a container use?
a. its own kernel built into the image
b. the kernel of the operating system on which the container is running
c. connection to a local mainframe operating system
d. a multicast source on the network

Linux Containers
Historically, containerization started by providing file system-based isolation (via chroot), which offers each
process a different limited view of the host storage. FreeBSD extended the concept by implementing Jails,
expanding the chroot model by virtualizing access to the file system, the set of users, and networking.
Solaris improved things even further with zones. Linux went through several of its own iterations (VServer,
OpenVZ, Process Containers) before the current-day implementations of Linux Container (LXC), rkt,
Docker, and many other flavors that have built on chroot, Linux Control Groups (cgroups), and
namespaces.
The industry has adopted these projects and they are developed through organizations like the Cloud Native
Computing Foundation (CNCF) and the Linux Foundation (namely, the Open Container Initiative).
The original purpose of chroot, control groups (cgroups, which control and limit resources), and
namespaces (which isolate and virtualize system resources) was to protect applications from their
neighbors sharing the same machine. This protection is not perfect, because it cannot prevent all
interference in resources that the operating system kernel does not manage. However, it can prevent
problems such as the noisy neighbor, where one application uses up all available resources of the system,
leaving other applications unable to run properly.
Container images are the files that make up the application and its supporting libraries. By adding these
images to the mix, a better abstraction was created that isolates applications from the host operating system,
increasing portability and reducing inconsistencies.

To make this abstraction work, a container image must include all the dependencies of an application, such
as system tools and libraries. The only external links from containers are links going through the system
calls provided by the Linux kernel. As such, the container host only needs a minimal set of tools and
libraries, usually those required by the container management system. Unlike in the VM deployment
model, the host does not need to know the various application dependencies.
Container images have their own file system and libraries. You could build a container image from an
Ubuntu base and run it on a Community Enterprise Operating System (CentOS) host machine because both
distributions are Linux-based, as long as they support the same kernel. However, you must also ensure that
the CPU architecture that the binary files are compiled for is the same, because the kernel is shared between
the containers and the host.
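You can see both constraints from the shell: every container on a host reports the host's kernel release, and image binaries must match the machine architecture the host reports. These are standard Linux commands, shown here on the host rather than inside a container.

```shell
uname -r   # kernel release: shared by the host and every container on it
uname -m   # CPU architecture (e.g., x86_64) that the image's binaries must target
```

Running the same two commands inside a container on this host would print the same values.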

Container images are immutable. They include libraries with their versions, configuration, operating system,
folders, applications, and anything else that is needed to run the image. This approach makes it easier to
scan them for known vulnerabilities, sign them, and ensure the integrity of the supply-and-deploy chain.

Linux Namespaces
Namespaces isolate system resources by partitioning critical kernel structures to create virtual
environments. There are different kinds of namespaces:
• pid: Processes
• mnt: Mount points, file systems
• net: Network interfaces, routing, ports, firewall
• user: User IDs and group IDs
• ipc: System V interprocess communication
• uts: System identifiers such as hostname and domain name
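On any modern Linux system you can inspect a process's namespace membership without special privileges: each entry under /proc/<pid>/ns is a handle whose inode number identifies the namespace. (The inode numbers shown in the comments are examples; yours will differ.)

```shell
ls /proc/self/ns             # one symlink per namespace type: net, mnt, pid, uts, ipc, user, ...
readlink /proc/self/ns/net   # e.g., net:[4026531992]
# Two processes share a network namespace when these inode IDs match.
```

Container run times create new entries in these namespace types for each container they start.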

Mount namespaces are built on chroot functionality and give processes a different view of the file system,
usually restricted to certain paths. Each process or container running in that namespace will see the same
root file system.
Network namespaces provide different sets of network interfaces, including the loopback, a separate routing
table, and firewall rules. Interfaces can be moved between the default namespace and any other namespaces
you create later. New interfaces can also be created in a specific namespace. For a process to be able to
communicate with the outside world, you can create a virtual interface that bridges its isolated network
namespace with another and connects to an interface such as eth0 to provide access to the network. Multiple
containers running on the same machine can bind to the same physical network by using different IP
addresses, or bind to an isolated virtual interface and then use Network Address Translation (NAT) overload
to access the physical network through its one IP address.
The process ID (PID) namespace provides process isolation for running containers. When you start a
container, it will have an entry point, which is the application where you want the container to start. The
process in the figure has a PID of 1 inside the container, just like the init daemon is the first process to start
in a normal Linux operating system.

Processes inside the container do not see processes from other containers or the host, but the host will see all
processes that are running (for example, PIDs 1, 2, 3, and 4) and translate PIDs to unique numbers globally.

In the figure, you can identify three PID namespaces: the root or parent, Namespace1, and two children,
Namespace2 and Namespace3. Namespace2 has one process (P2) running inside with PID 1. This process is
also visible to Namespace1 but is mapped to PID 2 because PID 1 is already used by process P1 in
Namespace1. Similarly, process P3 runs in Namespace3 with PID 1, but is visible as PID 3 in the parent
Namespace1.
Therefore, Namespace1 in the figure sees four processes: processes P1 and P4 are its own processes, and
processes P2 and P3 are from its nested namespaces, Namespace2 and Namespace3. However, Processes P2
and P3 do not see any processes outside their own namespaces.
docker-host:root# docker run -it --rm ubuntu /bin/bash
root@79dfd016fe09:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 9 10:32 pts/0 00:00:00 /bin/bash
root 10 1 0 10:32 pts/0 00:00:00 ps -ef

docker-host:root# docker inspect -f '{{.State.Pid}}' 79dfd016fe09


2399

docker-host:root# ps --pid 2399 -o pid,cmd


PID CMD
2399 /bin/bash

The example shows a freshly run Docker container that started one process (/bin/bash) as PID 1. On the host
machine, this process will be seen with a different number. That number is assigned based on the number of
processes running on the machine. You will also notice another process with PID 10, which is a child
process of PID 1. The process with PID 10 is created by the ps -ef command, which was executed from the
shell to list the running processes in the container.

Linux Control Groups


The control groups (cgroups) kernel feature was added in Linux Version 2.6.24 (originally developed by
Google) and isolates and limits the CPU, memory, disk I/O, and network usage of one or more processes.
This framework provides a unified interface with the following functions:
• Accounting: Accounting monitors and measures the resource utilization of a group.
• Control: Control creates a checkpoint, restarts, or freezes groups of processes.
• Prioritization: Prioritization provides a larger share of a resource to a group.
• Resource limiting: Resource limiting imposes upper limits that a group cannot exceed (for example,
memory usage).

Privileged vs. Unprivileged Containers
Privileged containers are containers that have the container user identifier (UID) 0 (root) mapped to UID 0
of the host. An unprivileged container maps its internal UID 0 to a nonroot user on the host system. With
this mapping, an attacker who compromises the application will have a much harder time making it
“escape” from running inside the container and consequently gaining root privileges on the host machine.
Support for this unprivileged UID mapping depends on the container run time. Note that Docker containers
run as privileged by default. You can see in the following output that the root user owns the internal
process.
docker-host:root# docker run -it --rm ubuntu /bin/bash
root@79dfd016fe09:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 9 10:32 pts/0 00:00:00 /bin/bash
root 10 1 0 10:32 pts/0 00:00:00 ps -ef

Note The Docker --privileged flag does not affect the UID mapping. It allows the run time to offer more
capabilities to the container itself, such as full host device access and permissions, similar to the access
and permissions of processes that run on the host. You should use this flag sparingly because it greatly
increases the security risk by reducing that container’s isolation from the host.
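The UID mapping itself is visible in /proc. For a process in the initial user namespace it is the identity map; an unprivileged container would instead show inside-UID 0 mapped to a high, unprivileged outside UID (the "0 100000 65536" value in the comment is a typical example, not a fixed rule).

```shell
cat /proc/self/uid_map   # columns: inside-UID  outside-UID  length
# "0 0 4294967295" means no remapping (the initial user namespace);
# an unprivileged container might show "0 100000 65536" instead.
```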

1. What do Linux containers use to provide isolation from other containers?


a. VM processes
b. run on different physical hypervisor hosts
c. use the –unprivileged flag so that nothing can see inside
d. use of namespaces that allocate processes, storage, networking, users, and others

Docker Overview
Docker is an Apache-licensed open-source platform for developing, packaging, and deploying applications
in containers. It separates applications from infrastructure by providing a robust set of tools to orchestrate
and manage containers that run all sorts of services. Beyond providing a container run time (originally, it
used a Linux Container [LXC]), Docker provides tooling and a platform to manage the entire lifecycle of
containerized applications. This tooling benefits developers, system administrators, and operations by
greatly improving workflows. Docker has become an important part of many DevOps toolchains.
Docker supports the development of applications, building images, managing their distribution, deploying
them in any environment (development, test, or production), and assisting with other advanced needs such
as service discovery and load balancing.
While Docker may run on other operating systems in virtualized environments (macOS or Windows), here
we assume that Docker runs natively on Linux.
Docker focuses on several key concepts that are built on the benefits of container architectures and the
features of Linux containerization technologies:
• Ease of use and portability: Docker containers are packaged as images. Tooling is provided to make
the process of building and distributing these images as easy as possible. A container that is built on
your development laptop will run unmodified on other machines in your own data center or in the cloud
("build once, run anywhere").
• File system layers: These layers are the container files, starting from a base image and adding
application files, libraries, or any other packages until the final version is obtained. Each layer is stored
separately and even shared between different versions that have the same overlapping base files.
• Version control and rollback: Each built image is versioned using a tag system and consists of a series
of combined layers.
• Speed: Containers at their core are lightweight, single-process, fast-starting, and resource-optimized
applications. Therefore, any application can be deployed inside a container, but keep in mind
that this model may not be the best fit in all circumstances.
• Process management: Docker containers are designed to run a single main process. If your application
is made up of multiple separate processes, the Docker strategy is to package each of them in a separate
container. You can circumvent this model by running a supervisor or init-like process in the container
that starts other subprocesses, but by increasing the complexity of a particular container, you will start
losing some of the other advantages of container architectures. Those advantages should motivate you
to build lightweight Docker containers.
• State management: Container images are immutable. Once they are created, they cannot be changed.
Any changes to the file system or other attributes will result in a new image or, if done in real time in a
running container, be ephemeral and disappear when that specific container is gone. Therefore,
containers should not store a permanent state internally. If you need to save the state of a Docker
container, use external mounts (called volumes) to provide data persistence.
• Distribution: Docker images can be stored locally, but the real power comes from using central
repositories (private or public) called registries. Registries are like application stores for Docker images.
The tooling allows for very straightforward build, push, pull, and run actions that make sharing and
running containers simple operations.

A build host that is running Docker can use a Dockerfile to build a container image from source files and
libraries. This freshly built image has a name and version, which are both useful for its distribution. The
next step is to push the image to a central Docker registry and upload the various layers and metadata of the
image. From there, a separate Docker target host can be instructed to run this container. This request causes
the Docker daemon to pull the appropriate image version from the registry and then run it.
The Docker Engine is the client/server application at the core of the Docker architecture. It comprises the
following components:
• A daemon process (dockerd) manages all aspects of the container lifecycle on that machine, such as
images, network, data volumes, containers, and so on.
• A Representational State Transfer (REST) API allows other clients to interact with the daemon.
• A CLI client, the docker command, communicates with a local or remote Docker daemon (via REST
API or Unix sockets).

Docker effectively uses namespaces and control groups to manage resource allocation and isolation of
containers on the Linux system. The Union Filesystem allows Docker to manage the file system layers that
are central to each container image, both at build and at run time. There are many variants of Union
Filesystem available such as Overlay, AUFS, btrfs, vfs, and DeviceMapper.

The container run time is provided by containerd and runc, two open-source projects that Docker donated to
the CNCF and the Open Container Initiative (OCI), respectively. Docker uses the run time to manage the
lifecycle of a container from image specification and transfer to execution, supervision, and access to
system resources.
The REST API provided by the Docker daemon is used behind the scenes by Docker tooling. It is a public
API that is commonly used for external tooling and a whole ecosystem of applications has grown around it.
These applications include Kubernetes for orchestration and Prometheus for monitoring. The API is
documented on the Docker documentation website (https://docs.docker.com/develop/sdk/). There are many
available software development kits (SDKs) and bindings in various programming languages, such as
Python, Ruby, and Go.
Docker manages container networking through one of its built-in network types that uses Linux networking
such as interfaces, routing tables, firewalling and NAT, and proxying. Many third-party network plug-ins
widely support container networking for potentially more advanced network connectivity or integration with
other parts of the infrastructure.
The following is one example of why Docker has become ubiquitous. It takes one command to run a
minimal container that seamlessly accesses the Internet and pings the 8.8.8.8 address.
docker-host:root# docker run busybox ping -c 5 8.8.8.8
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
0f8c40e1270f: Pull complete
Digest: sha256:1303dbf110c57f3edf68d9f5a16c082ec06c4cf7604831669faf2c712260b5a0
Status: Downloaded newer image for busybox:latest
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=61 time=9.945 ms
64 bytes from 8.8.8.8: seq=1 ttl=61 time=12.770 ms
64 bytes from 8.8.8.8: seq=2 ttl=61 time=14.941 ms
64 bytes from 8.8.8.8: seq=3 ttl=61 time=14.906 ms
64 bytes from 8.8.8.8: seq=4 ttl=61 time=10.825 ms

--- 8.8.8.8 ping statistics ---


5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 9.945/12.677/14.941 ms

This command tells the Docker daemon to run the busybox container image and start the ping command
with the -c 5 8.8.8.8 parameters. If that specific image is not found locally, then it will be searched for on
the Docker Hub, the official central Docker public registry available on the Internet. When found, the image
layers are downloaded, hashed, and verified for integrity.
Finally, Docker starts a container from the busybox image and provides a network interface and NAT rules
for default gateway access. Inside the container, the ping command becomes PID 1, executing its five pings
and then exiting. The output from the command is captured in the container logs and also printed to the
terminal in real time.
1. Match the item to the container component.
   Docker Engine: container lifecycle manager
   containerd/runc: container run time
   Namespaces: Linux operating system

Docker Commands
Docker has simple but powerful command-line tooling, which is one of the reasons for its very fast
adoption. The command-line tooling has been significantly developed and improved since the original
public release in 2013. You will look at some of the most common commands that you will use when
working with Docker.
The docker command is the main client interface that you will use to interact with the Docker daemon and
its API. It is written in the Go language and comes with the Docker installation on many architectures and
operating systems. Using subcommands as parameters, the docker command allows you to perform actions
such as running, building, and stopping containers; pulling and pushing container images to registries;
viewing container logs; attaching to a running container; inspecting a container; manipulating network
configuration of a container; and more.
The following are subcommands that are available in Docker 19.03:
attach diff import node rm stats version
build engine info pause rmi stop volume
builder events inspect plugin run swarm wait
commit exec kill port save system
config export load ps search tag
container help login pull secret top
context history logout push service trust
cp image logs rename stack unpause
create images network restart start update

There are many more subcommands and parameters available. As you gain Docker experience, you will
become accustomed to them and expand your toolset. Highly recommended documentation is available
publicly at https://docs.docker.com/engine/reference/commandline/docker/.

Docker run Command


You will use the run subcommand to start a container from a given image, with many possible parameters
for setting its name, adjusting network settings, or specifying the startup command.
docker-host:root# docker run -it --name test ubuntu bash
root@5dfce7291e34:/#

You can achieve the following with the example run command:
• Instruct Docker to run a container image named ubuntu.
• Set the container name to test.
• Start the bash shell as its PID 1.
• Open an interactive (-i) pseudo-terminal (-t).

Note the change in the shell prompt from the host to the container on the second line.
If you need to run the container in the background without any interaction, you can use the detached
parameter (-d). Docker will return a unique ID for the container that was just started. If you do not provide a
name for the container, a name is generated dynamically for it.
docker-host:root# docker run -d ubuntu application
787d8277e45aa1be54942f356dbc436fd42964f8a662472d499dc2fa62bf24cc

Docker container Command
The container subcommand is a very important subcommand that you will use often to perform common
actions on containers.
The container subcommand takes the following parameters:
attach diff kill port rm stop wait
commit exec logs prune run top
cp export ls rename start unpause
create inspect pause restart stats update

In many examples, especially in documentation available on the Internet, you will notice that "shortcuts"
exist for many of these subcommands in the traditional, flat command format.
For example, the docker container ls command (or in its legacy form docker ps), lists all running
containers. If you add the -a parameter, the command lists all containers, including containers that already
finished their execution.
docker-host:root# docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
6766f3b586af ubuntu "bash" 9 minutes ago
Exited (0) 8 minutes ago cranky_lumiere
f144b183a870 busybox "ping -c 5 8.8.8.8" 39 minutes ago
Exited (0) 39 minutes ago agitated_ritchie
031b6c7daf72 ubuntu "/bin/bash" 40 minutes ago
Exited (127) 40 minutes ago nostalgic_lamport
9234943e48c6 ubuntu "ping -c 5 8.8.8.8" 40 minutes ago
Created boring_ishizaka

Container execution ends when the process PID 1 inside the container exits. Containers are not deleted
automatically once their execution has ended. To remove the container, you provide the rm or prune
parameter to the docker container command or perform cleanup later.
docker-host:root# docker container prune
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
Deleted Containers:
6766f3b586afbf12e97fc22c13829bcb3d4188b0ad2a7821e85456112d9fa323
f144b183a870bd508e9eba5e1da53d1a7b4af6dc483f95ae8b7cfccbbb9f76ba
031b6c7daf72f01b0e79f1fafd89cb016a76438be9cd1e4966166986b60c06c2
9234943e48c62cbd0ce40fd8f6d05f0294dbba307d180c429b383d8b94e2663b

Sometimes you need to copy files from or to a container. Keep in mind that anything you copy to a
container exists only for the lifetime of that container and does not modify the original container image.
Remember, the container image is immutable.
In the example, Docker is instructed to run the ubuntu image, set the container name to test, and start the
bash shell. Next, there is a check to see whether a file named hello exists in the container. Then a file hello is
copied into the container using the cp subcommand.
docker-host:root# docker run -it --name test ubuntu bash
root@5141cfabb221:/# cat hello
cat: hello: No such file or directory
docker-host:root# docker container cp hello test:/hello
docker-host:root# docker container exec test cat /hello
world!

The final subcommand, exec, allows you to run an arbitrary command (another process) inside a running
container. In this example, the exec subcommand is used to verify that the file was indeed copied within the
test container.

Working with Container Images


You can use the docker image subcommand to view and manage all container images that are stored
locally on the Docker host. The example uses the docker image ls subcommand and you can see that there
are two images available locally, ubuntu and busybox. To obtain more detailed information about the
container images, including all image metadata, you can use the docker image inspect subcommand.
docker-host:root# docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu latest 775349758637 22 hours ago 64.2MB
busybox latest 6d5fcfe5ff17 45 hours ago 1.22MB

docker-host:root# docker image inspect busybox


[
{
"Id":
"sha256:6d5fcfe5ff170471fcc3c8b47631d6d71202a1fd44cf3c147e50c8de21cf0648",
"RepoTags": [
"busybox:latest"
],
<... output omitted ...>

In the following example, you can see how to remove an image that is no longer needed using the docker
image rm command.
docker-host:root# docker image rm ubuntu:latest
Untagged: ubuntu:latest
Untagged:
ubuntu@sha256:6e9f67fa63b0323e9a1e587fd71c561ba48a034504fb804fd26fd8800039835d
Deleted: sha256:775349758637aff77bf85e2ff0597e86e3e859183ef0baba8b3e8fc8d3cba51c
Deleted: sha256:4fc26b0b0c6903db3b4fe96856034a1bd9411ed963a96c1bc8f03f18ee92ac2a
Deleted: sha256:b53837dafdd21f67e607ae642ce49d326b0c30b39734b6710c682a50a9f932bf
Deleted: sha256:565879c6effe6a013e0b2e492f182b40049f1c083fc582ef61e49a98dca23f7e
Deleted: sha256:cc967c529ced563b7746b663d98248bc571afdb3c012019d7f54d6c092793b8b

All the file system layers that belong to an image are deleted, as long as no other image that shares some of
the same layers still references them. This optimization helps keep disk utilization lower when working
with many container images.
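The layer sharing described above can be sketched as a content-addressed store: each layer is keyed by the SHA-256 digest of its content, and a layer is deleted only when no remaining image references that digest. This is an illustrative sketch of the idea, not Docker's actual implementation:

```python
import hashlib

class LayerStore:
    """Toy content-addressed layer store with reference counting."""

    def __init__(self):
        self.layers = {}    # digest -> layer bytes
        self.refcount = {}  # digest -> number of images using the layer

    def add_image(self, layer_blobs):
        """Store an image's layers; shared layers are stored only once."""
        digests = []
        for blob in layer_blobs:
            digest = "sha256:" + hashlib.sha256(blob).hexdigest()
            if digest not in self.layers:
                self.layers[digest] = blob
            self.refcount[digest] = self.refcount.get(digest, 0) + 1
            digests.append(digest)
        return digests

    def remove_image(self, digests):
        """Drop one reference per layer; delete layers nobody uses."""
        for digest in digests:
            self.refcount[digest] -= 1
            if self.refcount[digest] == 0:
                del self.layers[digest]
                del self.refcount[digest]

store = LayerStore()
ubuntu = store.add_image([b"base-os", b"apt-packages"])
derived = store.add_image([b"base-os", b"my-app"])  # shares the base layer
store.remove_image(ubuntu)
# The shared base layer survives because "derived" still references it,
# while the unshared apt-packages layer is deleted.
print(len(store.layers))  # 2
```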
The docker pull subcommand downloads an image from a registry without starting a container. You only
need to provide the image name to the subcommand; the usage of the Docker Hub as the default registry is implicit.
docker-host:root# docker pull python
Using default tag: latest
latest: Pulling from library/python
c7b7d16361e0: Pull complete
b7a128769df1: Pull complete
1128949d0793: Pull complete
667692510b70: Pull complete
bed4ecf88e6a: Pull complete
<... output omitted ...>
Digest: sha256:514a95a32b86cafafefcecc28673bb647d44c5aadf06203d39c43b9c3f61ed52

Status: Downloaded newer image for python:latest
docker.io/library/python:latest

As you can see in the docker image ls command output, the official python container image is rather large:
932 MB once it is downloaded and made ready for use by the local Docker daemon.
docker-host:root# docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
busybox latest 020584afccce 45 hours ago 1.22MB
python latest d6a7b0694364 13 days ago 932MB

1. What is the main command for listing the currently running Docker containers?
a. docker container ls
b. docker image ls
c. docker show running
d. docker get container

Discovery 2: Explore Docker Command-Line
Tools
Introduction
Becoming proficient in a technology like Docker can take months to years, depending on how often you use
it. This high-level, practical view of the basic commands that are needed for common Docker tasks includes
a brief description of command actions. It provides enough information to allow you to start using Docker
in your network automation journey. You will be building, destroying, running, and working with various
Docker containers.

Topology

Job Aids
Device Information

Device                      Description          FQDN/IP Address    Credentials

Student Workstation         Linux Ubuntu VM      192.168.10.10      student, 1234QWer

GitLab Container Registry   Container Registry   registry.git.lab   student, 1234QWer

Command List
The table describes the commands that are used in this activity. The commands are listed in alphabetical
order so that you can easily locate the information that you need. Refer to this list if you need configuration
command assistance during the lab activity.

Command Description

cd directory_name
To change directories within the Linux file system, use the cd command. You will use this command to enter the directory where the scripts are housed. You can use tab completion to finish the name of the directory after you start typing it.

cat file
The most common use of the cat Linux command is to read the contents of files. It is the most convenient command for this purpose in a Unix-like operating system.

docker container cp source destination
This command copies files to and from a container. The source and destination parameters must be represented with a full path. When referencing the container, the format is container:full_path.

docker container inspect --format='{{ type json_path }}' container
This command inspects the configuration of a container. The --format flag allows you to limit the scope of the viewed configuration. A common type that is used in format is json, and the json_path is the object in dot notation.

docker container ls -a
This command allows you to view the containers that are configured on the host system. The -a flag displays containers that are not up as well.

docker container rm -f container
This command removes containers. The -f flag forces a running container to be removed.

docker container start container
This command starts a container that exists, but is currently stopped.

docker container stop container
This command stops a container that is currently running.

docker container prune
This command removes any containers that are not currently running.

docker exec -it container command
This command allows you to run commands on the container. The command is any valid command on the container. The -i flag is for interactive mode, and the -t flag creates a pseudo-TTY to the container.

docker images
This command allows you to see which images are currently stored locally.

docker login docker_registry
This command allows you to log in to a Docker registry. If you are not already logged in, it will prompt you for your username and password.

docker logs -f --tail num container
This command allows you to view the logs of a given container. The -f flag follows the logs live, and the --tail flag with num indicates how many lines back to start.

docker pull container_registry/gitlab_organization/gitlab_project/container:tag
This command allows you to obtain the container image from the registry. The command does not have spaces around the forward slashes.

docker run -itd --name name container_registry/gitlab_organization/gitlab_project/container:tag command
This command runs a container, pulling the image from the registry first if necessary. The -i flag is for interactive mode, the -t flag creates a pseudo-TTY to the container, and the -d flag runs the container in a detached state. The command is any command that is valid on the container. The --name flag names the container as you intend, rather than randomly generating a name for you.

docker version
This command allows you to view the Docker status, and the version of Docker that is currently running.

ls file
This command allows you to see a file or folder contents.

json_pp
This command is a standard Linux JavaScript Object Notation (JSON) pretty printer.

touch file
This command creates an empty file.
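The dot-notation json_path used with docker container inspect --format can be mimicked in a few lines. This is a sketch of the idea only, not Docker's Go-template engine, and the sample data is a hypothetical fragment shaped like inspect output:

```python
import json

def json_path(data, path):
    """Walk a nested dict using dot notation, e.g. 'State.Running'."""
    node = data
    for key in path.split("."):
        node = node[key]
    return node

# Hypothetical fragment shaped like `docker container inspect` output.
inspect_output = {
    "State": {"Status": "running", "Running": True, "ExitCode": 0},
    "Name": "/dev_alpine",
}

print(json_path(inspect_output, "State.Status"))       # running
print(json.dumps(json_path(inspect_output, "State")))  # the whole State object
```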

Task 1: Pull and Run a Container


You will explore the basics of running a container locally.

Activity

Docker Registry
By default, Docker assumes that you will use https://registry-1.docker.io/ as the public cloud-based
container registry when pulling and pushing containers. For this reason, many commands that you see in
standard Docker documentation will not include the full URL, organization, and project structure that is
contained in the commands included here. Using a custom container registry, such as the GitLab Container
Registry, is standard practice in many client environments. There is no natively supported method from
Docker to change the default search path. In real-life scenarios, this issue is often abstracted away with
tooling such as bash aliases, bash scripts, and make files.

Understanding Top-Level Commands


The Docker command set was initially small, matching a limited set of use cases. As the number of features
grew within Docker, the number of commands grew with it. The command structure was initially flat. As of
Docker Version 1.13, many commands were reorganized under new top-level commands to create a more
nested command set. As an example, the docker ps command became the docker container ls command, with container being the
new top-level command. The top-level commands provide the nested structure that is needed to reasonably
support the number of features that Docker has. These traditional commands can be hidden by setting the
DOCKER_HIDE_LEGACY_COMMANDS environment variable. If this setting becomes the default, this
environment variable will be removed. Here, you will only use the top-level commands because they
support all features and are the long-term strategy.

To understand some of the differences between the commands, examine this short list of equivalents for
future reference.

Traditional Command Top-Level Command

docker images docker image list

docker rmi docker image rm

docker create docker container create

docker ps docker container ls

Ensure That Docker Is Installed and Running

Step 1 In the student workstation, open a terminal window and change the directory to ~/labs/lab02 using the cd
~/labs/lab02 command.

student@student-vm:$ cd ~/labs/lab02/
student@student-vm:labs/lab02$

Step 2 Execute the docker version command and determine which docker version is installed.

The command will fail if Docker is not running.


student@student-vm:labs/lab02$ docker version
Client:
Version: 18.09.7
API version: 1.39
Go version: go1.10.1
Git commit: 2d0083d
Built: Fri Aug 16 14:20:06 2019
OS/Arch: linux/amd64
Experimental: false

Server:
Engine:
Version: 18.09.7
API version: 1.39 (minimum version 1.12)
Go version: go1.10.1
Git commit: 2d0083d
Built: Wed Aug 14 19:41:23 2019
OS/Arch: linux/amd64
Experimental: false
student@student-vm:labs/lab02$

Log in to the GitLab Container Registry


Once you are logged in to a container registry, Docker will store your credentials and you will not be
prompted again.

Step 3 Log in to the GitLab Container Registry using the docker login registry.git.lab command. If prompted, use
the credentials provided in the Job Aids.

student@student-vm:labs/lab02$ docker login registry.git.lab


Username: student
Password:
WARNING! Your password will be stored unencrypted in /home/student/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
student@student-vm:labs/lab02$

View the Credentials File

Step 4 Docker stores the credentials information in the /home/student/.docker/config.json file. View the content of
the file using the cat /home/student/.docker/config.json command.

student@student-vm:labs/lab02$ cat /home/student/.docker/config.json


{
"auths": {
"registry.git.lab": {
"auth": "c3R1ZGVudDoxMjM0UVdlcg=="
}
},
"HttpHeaders": {
"User-Agent": "Docker-Client/18.09.7 (linux)"
}
}
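The auth value in config.json is not encrypted; it is simply the Base64 encoding of username:password, which is why the warning above recommends configuring a credential helper. You can confirm this with a couple of lines of Python:

```python
import base64

# The auth string from /home/student/.docker/config.json shown above.
auth = "c3R1ZGVudDoxMjM0UVdlcg=="

decoded = base64.b64decode(auth).decode()
print(decoded)  # student:1234QWer
```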

Pull and Run the hello-world Container


The first time that you run a container, Docker will download the container image from the registry. When
you run the docker run registry.git.lab/cisco-devops/containers/hello-world:latest command, the URL is
divided into the following fields:
<container_registry>/<gitlab_organization>/<gitlab_project>/<container>:<tag>.

The GitLab attributes will be discussed in detail later; for now, it is important to know that there is
namespace separation for different groupings of containers. The tag is optional and defaults to latest.
When you run the command a second time without the tag, you will notice that Docker refers to the same
image and, because it is already present locally, does not download it again.
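The field breakdown above can be expressed as a small parser. This is a simplified sketch, not Docker's real reference grammar (it assumes the registry host is always present in the reference):

```python
def parse_image_reference(ref):
    """Split registry/path/name[:tag] into fields; tag defaults to latest."""
    path, sep, tag = ref.rpartition(":")
    if not sep or "/" in tag:  # no tag given (any ':' belonged to a port)
        path, tag = ref, "latest"
    registry, _, remainder = path.partition("/")
    *namespace, name = remainder.split("/")
    return {
        "registry": registry,
        "namespace": "/".join(namespace),  # gitlab_organization/gitlab_project
        "name": name,
        "tag": tag,
    }

print(parse_image_reference("registry.git.lab/cisco-devops/containers/hello-world"))
```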

The docker run registry.git.lab/cisco-devops/containers/hello-world:latest command was discussed here
instead of the docker run hello-world command because you will not use the default Docker registry in
this activity.

Step 5 Run the docker run registry.git.lab/cisco-devops/containers/hello-world:latest command and the docker
run registry.git.lab/cisco-devops/containers/hello-world command.

Observe the differences in the output when you run the container for the first time and in
subsequent runs.

student@student-vm:labs/lab02$ docker run
registry.git.lab/cisco-devops/containers/hello-world:latest
Unable to find image 'registry.git.lab/cisco-devops/containers/hello-world:latest'
locally
latest: Pulling from cisco-devops/containers/hello-world
1b930d010525: Pull complete
Digest: sha256:92c7f9c92844bbbb5d0a101b22f7c2a7949e40f8ea90c8b3bc396879d95e899a
Status: Downloaded newer image for registry.git.lab/cisco-devops/containers/hello-
world:latest

Hello from Docker!


This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:


1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/

For more examples and ideas, visit:


https://docs.docker.com/get-started/

student@student-vm:labs/lab02$ docker run


registry.git.lab/cisco-devops/containers/hello-world

Hello from Docker!


This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:


1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/

For more examples and ideas, visit:


https://docs.docker.com/get-started/

student@student-vm:labs/lab02$

View the Containers That You Created
Docker automatically generates a name for each container that you create without explicitly naming it. The names are
autogenerated using the format <adjective>_<scientist>, such as affectionate_margulis or wizardly_hodgkin.
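A toy generator under that same <adjective>_<scientist> convention can be written in a few lines. The word lists here are tiny stand-ins, not Docker's real ones:

```python
import random

# Tiny stand-in word lists; Docker keeps much longer lists internally.
ADJECTIVES = ["affectionate", "wizardly", "elastic", "agitated", "nostalgic"]
SCIENTISTS = ["margulis", "hodgkin", "blackwell", "ritchie", "lamport"]

def random_container_name(rng=random):
    """Return a name in the <adjective>_<scientist> format."""
    return f"{rng.choice(ADJECTIVES)}_{rng.choice(SCIENTISTS)}"

print(random_container_name())  # e.g. wizardly_hodgkin
```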

Step 6 View all containers that have been built and have not been deleted using the docker container ls -a
command and review each column. You can also compare containers that have been built against running
containers by omitting the -a (all) flag to show only the running containers. Because no container is
running, nothing is shown.

student@student-vm:labs/lab02$ docker container ls -a


CONTAINER ID IMAGE
COMMAND CREATED STATUS PORTS
NAMES
969f66f833be registry.git.lab/cisco-devops/containers/hello-world
"/hello" 3 minutes ago Exited (0) 3 minutes ago
affectionate_margulis
8e151c70d521 registry.git.lab/cisco-devops/containers/hello-world:latest
"/hello" 12 minutes ago Exited (0) 12 minutes ago
wizardly_hodgkin
student@student-vm:labs/lab02$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS
PORTS NAMES
student@student-vm:labs/lab02$

Task 2: Explore a Running Container


In the “Pull and Run a Container” task, you ran a container that built, ran, and stopped itself. Now you will
run a container that will stay operational. It is important to understand that containers reference the same
base image but have unique instantiations of that image. This approach allows for optimization of resources,
including caching the local image storage. Downloading an image may take time, depending on the image
size, network resources, and other components. If the image was already downloaded, the container is built
nearly instantaneously.

You ran the hello-world container, which is simply a container that you used to prove a working Docker
instance. Here you will build, run, and connect to an Alpine image, which is a lightweight Linux operating
system.

Activity

Step 1 Pull the Alpine image down without running it. Issue the docker pull
registry.git.lab/cisco-devops/containers/alpine command and view the container download.

student@student-vm:labs/lab02$ docker pull


registry.git.lab/cisco-devops/containers/alpine
Using default tag: latest
latest: Pulling from cisco-devops/containers/alpine
89d9c30c1d48: Pull complete
Digest: sha256:e4355b66995c96b4b468159fc5c7e3540fcef961189ca13fee877798649f531a
Status: Downloaded newer image for
registry.git.lab/cisco-devops/containers/alpine:latest
student@student-vm:labs/lab02$

View the Image

Step 2 Issue the docker images command to view the image. You will notice that the tag is latest, even though a
tag was not set. If you do not explicitly set a tag, the default behavior is to download the image tagged
latest.

student@student-vm:labs/lab02$ docker images


REPOSITORY TAG IMAGE ID
CREATED SIZE
registry.git.lab/cisco-devops/containers/alpine latest 965ea09ff2eb
8 days ago 5.55MB
registry.git.lab/cisco-devops/containers/hello-world latest fce289e99eb9
10 months ago 1.84kB
student@student-vm:labs/lab02$

Run a Single Command on a Docker Container

Step 3 Run the docker run -it registry.git.lab/cisco-devops/containers/alpine sh command to enter the shell of
the Alpine image. Notice the -it flag that initiates the interactive pseudo-terminal session with the container.
Once you are in the container, issue the ls command to confirm that you are now in the container file
system, not the host directory. To stop the container, use the exit command.

student@student-vm:labs/lab02$ docker run -it


registry.git.lab/cisco-devops/containers/alpine sh
/ # ls
bin dev etc home lib media mnt opt proc root run sbin srv
sys tmp usr var
/ # exit
student@student-vm:labs/lab02$ docker container ls -a
CONTAINER ID IMAGE
COMMAND CREATED STATUS PORTS
NAMES
3c92bd5e9d49 registry.git.lab/cisco-devops/containers/alpine "sh"
3 minutes ago Exited (0) 18 seconds ago
elastic_blackwell
969f66f833be registry.git.lab/cisco-devops/containers/hello-world
"/hello" About an hour ago Exited (0) About an hour ago
affectionate_margulis
8e151c70d521 registry.git.lab/cisco-devops/containers/hello-world:latest
"/hello" About an hour ago Exited (0) About an hour ago
wizardly_hodgkin
student@student-vm:labs/lab02$

Run the Container in Detached Mode


Previously, you stopped the container when you exited it. The container ran in foreground mode and was
tied to the user terminal session. Now you will run the same Docker command with the -d (detached mode)
parameter and name the container dev_alpine.

Step 4 Issue the docker run -itd --name dev_alpine registry.git.lab/cisco-devops/containers/alpine sh command
to run the container in detached mode. You can verify that the container is operational by issuing the docker
container ls command.

student@student-vm:labs/lab02$ docker run -itd --name dev_alpine


registry.git.lab/cisco-devops/containers/alpine sh
3fe31846d3b84080af5d74afab5163634638a9191574f88d2a27a7633675d45a
student@student-vm:labs/lab02$ docker container ls
CONTAINER ID IMAGE COMMAND
CREATED STATUS PORTS NAMES
3fe31846d3b8 registry.git.lab/cisco-devops/containers/alpine "sh"
7 seconds ago Up 5 seconds dev_alpine
student@student-vm:labs/lab02$

Execute Commands on a Detached Container


You can connect to an existing container using the docker exec command as you would use the docker run
command for running commands. The docker exec command enables you to interact with the container
commands, but not to change the Docker container after it starts.

Step 5 Issue the command docker exec -it dev_alpine ls to run a single container command.

Step 6 Issue the docker exec -it dev_alpine sh command to interact with the container shell. Once you are finished,
use the exit command to return to the host.

You will notice that you can refer to the container by the name given in an earlier step
(dev_alpine).
student@student-vm:labs/lab02$ docker exec -it dev_alpine ls
bin etc lib mnt proc run srv tmp var
dev home media opt root sbin sys usr
student@student-vm:labs/lab02$ docker exec -it dev_alpine sh
/ # ls
bin dev etc home lib media mnt opt proc root run sbin srv
sys tmp usr var
/ # exit
student@student-vm:labs/lab02$

Stop and Start a Container


Containers can be stopped and started when needed. You can view active containers to determine if the
container is running.

Step 7 Issue the docker container stop dev_alpine command to stop the container.

Step 8 Issue the docker container ls command to verify that the container is no longer running.

Step 9 Issue the docker container ls -a command to verify that the container still exists.

Step 10 Issue the docker container start dev_alpine command to start the container again.

Step 11 Issue the docker exec -it dev_alpine ls command to confirm that the container is working.

student@student-vm:labs/lab02$ docker container stop dev_alpine
dev_alpine
student@student-vm:labs/lab02$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS
PORTS NAMES
student@student-vm:labs/lab02$ docker container ls -a
CONTAINER ID IMAGE
COMMAND CREATED STATUS PORTS
NAMES
f1ecbc1f7b7c registry.git.lab/cisco-devops/containers/alpine
"/bin/sh" 17 minutes ago Exited (137) 9 seconds ago
dev_alpine
3c92bd5e9d49 registry.git.lab/cisco-devops/containers/alpine "sh"
28 minutes ago Exited (0) 24 minutes ago
elastic_blackwell
969f66f833be registry.git.lab/cisco-devops/containers/hello-world
"/hello" About an hour ago Exited (0) About an hour ago
affectionate_margulis
8e151c70d521 registry.git.lab/cisco-devops/containers/hello-world:latest
"/hello" 2 hours ago Exited (0) About an hour ago
wizardly_hodgkin
student@student-vm:labs/lab02$ docker container start dev_alpine
dev_alpine
student@student-vm:labs/lab02$ docker exec -it dev_alpine ls
bin etc lib mnt proc run srv tmp var
dev home media opt root sbin sys usr
student@student-vm:labs/lab02
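One detail worth noting in the docker container ls -a output above: the stopped dev_alpine container shows Exited (137). By the common Unix convention (an interpretation added here, not stated in the lab text), a process killed by a signal reports 128 plus the signal number, and docker stop escalates to SIGKILL (signal 9) if the process does not exit on SIGTERM:

```python
import signal

# 128 + signal number is the conventional exit status for a signal death.
SIGKILL_EXIT = 128 + signal.SIGKILL  # SIGKILL is signal 9 on Linux
print(SIGKILL_EXIT)  # 137, matching "Exited (137)" above
```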

Remove an Existing Container That Is Stopped


You can remove an individual container with the docker container rm command. You can also remove all
stopped containers with the docker container prune command. Running containers can be forcibly
removed as well.

Step 12 Issue the docker container stop dev_alpine command to stop the container.

Step 13 Issue the docker container rm dev_alpine command to remove the container.

Step 14 Issue the docker container ls -a command to confirm that the container was removed.

student@student-vm:labs/lab02$ docker container stop dev_alpine
dev_alpine
student@student-vm:labs/lab02$ docker container rm dev_alpine
dev_alpine
student@student-vm:labs/lab02$ docker container ls -a
CONTAINER ID IMAGE
COMMAND CREATED STATUS PORTS
NAMES
3c92bd5e9d49 registry.git.lab/cisco-devops/containers/alpine "sh"
41 minutes ago Exited (0) 38 minutes ago elastic_blackwell
969f66f833be registry.git.lab/cisco-devops/containers/hello-world
"/hello" 2 hours ago Exited (0) 2 hours ago
affectionate_margulis
8e151c70d521 registry.git.lab/cisco-devops/containers/hello-world:latest
"/hello" 2 hours ago Exited (0) 2 hours ago
wizardly_hodgkin
student@student-vm:labs/lab02$

Step 15 Issue the docker container prune command to remove all stopped containers and enter y to continue.

student@student-vm:labs/lab02$ docker container prune


WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
Deleted Containers:
3c92bd5e9d49c19a167438db4cbe30f77f238c825c3ff5278214c51ee472c6a5
969f66f833be37d7cf5d74592c58981c73bc1747fb21db6d89bc4582c6d9bd7f
8e151c70d5217bcbcc353c72c2fb5791d2f4499bb745af5ef8b8c2d6c78a9728

Total reclaimed space: 8B


student@student-vm:labs/lab02$

Step 16 Issue the docker run -itd --name alpine_force_remove registry.git.lab/cisco-devops/containers/alpine
command to start a container.

Step 17 Issue the docker container ls command to confirm that the container is started and the docker container rm
alpine_force_remove command to observe what happens when you try to remove an active container.

Step 18 Finally, issue the docker container rm -f alpine_force_remove command to force the container to be
removed even though it is active.

student@student-vm:labs/lab02$ docker run -itd --name alpine_force_remove
registry.git.lab/cisco-devops/containers/alpine
adaa0bb13ec51a254668511be5eceaa14b209028125bcc5ec2f0ba1131946231
student@student-vm:labs/lab02$ docker container ls
CONTAINER ID IMAGE COMMAND
CREATED STATUS PORTS NAMES
adaa0bb13ec5 registry.git.lab/cisco-devops/containers/alpine "/bin/sh"
4 seconds ago Up 3 seconds alpine_force_remove
student@student-vm:labs/lab02$ docker container rm alpine_force_remove
Error response from daemon: You cannot remove a running container
adaa0bb13ec51a254668511be5eceaa14b209028125bcc5ec2f0ba1131946231. Stop the container
before attempting removal or force remove
student@student-vm:labs/lab02$ docker container rm -f alpine_force_remove
alpine_force_remove
student@student-vm:labs/lab02$ docker container ls
CONTAINER ID IMAGE COMMAND
CREATED STATUS PORTS NAMES
student@student-vm:labs/lab02$

Task 3: Inspect the Container Configuration


There are many configuration aspects to consider when building a container. Docker's defaults handle many
of these considerations; attributes such as networks, network types, and resource allocation can often be left
at their default settings. However, these attributes affect the design and performance of the service, so it is
important to consider them in any production design.

You will now examine and inspect a container.

Activity

Prepare the Environment

Step 1 Prepare the environment by starting a container with the docker run -itd --name alpine_inspect
registry.git.lab/cisco-devops/containers/alpine command.

student@student-vm:labs/lab02$ docker run -itd --name alpine_inspect registry.git.lab/cisco-devops/containers/alpine
5e381905bff6070adbf06a1d1f0ad8c42676e0e49ed6012211dbcde4b8cc5e66
student@student-vm:labs/lab02$

Inspect the Container

Step 2 Issue the docker container inspect alpine_inspect command to view the complete container configuration.

student@student-vm:labs/lab02$ docker container inspect alpine_inspect
[
{
"Id": "5e381905bff6070adbf06a1d1f0ad8c42676e0e49ed6012211dbcde4b8cc5e66",
"Created": "2019-10-30T03:15:06.309191079Z",
"Path": "/bin/sh",
"Args": [],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 2167,
"ExitCode": 0,
"Error": "",
"StartedAt": "2019-10-30T03:15:07.1478015Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image":
"sha256:965ea09ff2ebd2b9eeec88cd822ce156f6674c7e99be082c7efac3c62f3ff652",
"ResolvConfPath":
"/var/lib/docker/containers/5e381905bff6070adbf06a1d1f0ad8c42676e0e49ed6012211dbcde4b8c
c5e66/resolv.conf",
"HostnamePath":
"/var/lib/docker/containers/5e381905bff6070adbf06a1d1f0ad8c42676e0e49ed6012211dbcde4b8c
c5e66/hostname",
"HostsPath":
"/var/lib/docker/containers/5e381905bff6070adbf06a1d1f0ad8c42676e0e49ed6012211dbcde4b8c
c5e66/hosts",
"LogPath":
"/var/lib/docker/containers/5e381905bff6070adbf06a1d1f0ad8c42676e0e49ed6012211dbcde4b8c
c5e66/5e381905bff6070adbf06a1d1f0ad8c42676e0e49ed6012211dbcde4b8cc5e66-json.log",
"Name": "/alpine_inspect",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "docker-default",
"ExecIDs": null,
"HostConfig": {
"Binds": null,
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "default",
"PortBindings": {},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,

"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "shareable",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": 0,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,

"IOMaximumBandwidth": 0,
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
},
"GraphDriver": {
"Data": {
"LowerDir":
"/var/lib/docker/overlay2/4421b6f20167700d6fc1f24aa358c09a32bd022de25ac75349030c38cb2e9
200-init/diff:/var/lib/docker/
overlay2/136242bdae6db6c0055346290d27989c45dff30ea60cf588253243f3315fb5ef/diff",
"MergedDir":
"/var/lib/docker/overlay2/4421b6f20167700d6fc1f24aa358c09a32bd022de25ac75349030c38cb2e9
200/merged",
"UpperDir":
"/var/lib/docker/overlay2/4421b6f20167700d6fc1f24aa358c09a32bd022de25ac75349030c38cb2e9
200/diff",
"WorkDir":
"/var/lib/docker/overlay2/4421b6f20167700d6fc1f24aa358c09a32bd022de25ac75349030c38cb2e9
200/work"
},
"Name": "overlay2"
},
"Mounts": [],
"Config": {
"Hostname": "5e381905bff6",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": true,
"OpenStdin": true,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"/bin/sh"
],

"ArgsEscaped": true,
"Image": "registry.git.lab/cisco-devops/containers/alpine",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"OnBuild": null,
"Labels": {}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID":
"b664bdbe6ad10668d49c67b69b476eba99ad13fb9368ef0a65d45358ad04f181",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "/var/run/docker/netns/b664bdbe6ad1",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID":
"31da7b7215b64214e08d0e52d8707981118e9024669be123a280d4ac8f3ed0f0",
"Gateway": "172.18.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.18.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:12:00:02",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID":
"ba6756271a373b75fd11cb219ba28bde53aaa4429e0a46fdba7195be6af99dc8",
"EndpointID":
"31da7b7215b64214e08d0e52d8707981118e9024669be123a280d4ac8f3ed0f0",
"Gateway": "172.18.0.1",
"IPAddress": "172.18.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:12:00:02",
"DriverOpts": null
}
}
}
}
]
student@student-vm:labs/lab02$

Examine the key parts of the container configuration, such as the network settings, IP address, and
container state.

Filter the Output
The container configuration includes a lot of information. You can filter the output of the inspect command by
using the --format flag with json as a formatting hint. You will then pipe the command output through a JSON
pretty printer (json_pp), which is included on the host machine.

Step 3 To view the formatted output of the inspect command with and without it being JSON pretty printed
(json_pp), issue the docker container inspect --format='{{json .NetworkSettings }}' alpine_inspect
command and the docker container inspect --format='{{json .NetworkSettings }}' alpine_inspect |
json_pp command.

Note The syntax of the --format flag is complex. You must follow the instructions provided here exactly.

student@student-vm:labs/lab02$ docker container inspect --
format='{{json .NetworkSettings }}' alpine_inspect
{"Bridge":"","SandboxID":"b664bdbe6ad10668d49c67b69b476eba99ad13fb9368ef0a65d45358ad04f
181","HairpinMode":false,"LinkLocalIPv6Address":"","LinkLocalIPv6PrefixLen":0,"Ports":
{},"SandboxKey":"/var/run/docker/netns/
b664bdbe6ad1","SecondaryIPAddresses":null,"SecondaryIPv6Addresses":null,
"EndpointID":"31da7b7215b64214e08d0e52d8707981118e9024669be123a280d4ac8f3ed0f0","Gatewa
y":"172.18.0.1","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"IPAddress":"172.18.0.2"
,"IPPrefixLen":16,"IPv6Gateway":"","MacAddress":"02:42:ac:12:00:02","Networks":
{"bridge":{"IPAMConfig":null,
"Links":null,"Aliases":null,"NetworkID":"ba6756271a373b75fd11cb219ba28bde53aaa4429e0a46
fdba7195be6af99dc8","EndpointID":"31da7b7215b64214e08d0e52d8707981118e9024669be123a280d
4ac8f3ed0f0","Gateway":"172.18.0.1","IPAddress":"172.18.0.2","IPPrefixLen":16,"IPv6Gate
way":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:12:00:02"
,"DriverOpts":null}}}
student@student-vm:labs/lab02$ docker container inspect --
format='{{json .NetworkSettings }}' alpine_inspect | json_pp
{
"SandboxID" : "b664bdbe6ad10668d49c67b69b476eba99ad13fb9368ef0a65d45358ad04f181",
"Networks" : {
"bridge" : {
"Gateway" : "172.18.0.1",
"GlobalIPv6PrefixLen" : 0,
"IPv6Gateway" : "",
"IPPrefixLen" : 16,
"EndpointID" :
"31da7b7215b64214e08d0e52d8707981118e9024669be123a280d4ac8f3ed0f0",
"IPAddress" : "172.18.0.2",
"IPAMConfig" : null,
"MacAddress" : "02:42:ac:12:00:02",
"NetworkID" :
"ba6756271a373b75fd11cb219ba28bde53aaa4429e0a46fdba7195be6af99dc8",
"GlobalIPv6Address" : "",
"Links" : null,
"DriverOpts" : null,
"Aliases" : null
}
},
"MacAddress" : "02:42:ac:12:00:02",
"LinkLocalIPv6Address" : "",
"SecondaryIPAddresses" : null,
"IPv6Gateway" : "",
"Gateway" : "172.18.0.1",
"LinkLocalIPv6PrefixLen" : 0,
"EndpointID" : "31da7b7215b64214e08d0e52d8707981118e9024669be123a280d4ac8f3ed0f0",
"IPPrefixLen" : 16,
"SandboxKey" : "/var/run/docker/netns/b664bdbe6ad1",
"SecondaryIPv6Addresses" : null,
"GlobalIPv6Address" : "",
"Ports" : {},
"Bridge" : "",
"IPAddress" : "172.18.0.2",
"HairpinMode" : false,
"GlobalIPv6PrefixLen" : 0
}
student@student-vm:labs/lab02$

Now that you understand the syntax and output structure, you can further refine the inspect
command output.

Step 4 Issue the docker container inspect --format='{{json .NetworkSettings.Networks }}' alpine_inspect |
json_pp command to view the nested configurations.

student@student-vm:labs/lab02$ docker container inspect --format='{{json .NetworkSettings.Networks }}' alpine_inspect | json_pp
{
"bridge" : {
"IPAMConfig" : null,
"NetworkID" : "ba6756271a373b75fd11cb219ba28bde53aaa4429e0a46fdba7195be6af99dc8",

"GlobalIPv6Address" : "",
"GlobalIPv6PrefixLen" : 0,
"Links" : null,
"EndpointID" :
"31da7b7215b64214e08d0e52d8707981118e9024669be123a280d4ac8f3ed0f0",
"IPPrefixLen" : 16,
"IPv6Gateway" : "",
"Gateway" : "172.18.0.1",
"DriverOpts" : null,
"Aliases" : null,
"IPAddress" : "172.18.0.2",
"MacAddress" : "02:42:ac:12:00:02"
}
}
student@student-vm:labs/lab02$
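The dotted Go template in --format simply walks nested JSON keys. Outside of Docker, you can perform the same extraction in a few lines of Python. This is an illustrative sketch only; the sample document below is a trimmed, hand-built copy of the NetworkSettings values shown in the lab output, not live docker output.

```python
import json

# Trimmed, hypothetical sample of the JSON that `docker container inspect`
# returns (values copied from the lab output above; most keys omitted).
inspect_output = json.loads("""
[{"NetworkSettings": {"Networks": {"bridge": {
    "Gateway": "172.18.0.1",
    "IPAddress": "172.18.0.2",
    "IPPrefixLen": 16,
    "MacAddress": "02:42:ac:12:00:02"}}}}]
""")

# Equivalent of --format='{{json .NetworkSettings.Networks }}':
# each dotted element in the template is one dictionary lookup.
networks = inspect_output[0]["NetworkSettings"]["Networks"]
print(networks["bridge"]["IPAddress"])  # prints 172.18.0.2
```

Tools such as jq follow the same idea: the inspect output is ordinary JSON, so any JSON-aware tool can filter it once you know the key path.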

Step 5 Issue the docker container inspect --format='{{json .State }}' alpine_inspect | json_pp command to view
the container state.

student@student-vm:labs/lab02$ docker container inspect --format='{{json .State }}' alpine_inspect | json_pp
{
"StartedAt" : "2019-10-30T03:15:07.1478015Z",
"FinishedAt" : "0001-01-01T00:00:00Z",
"Running" : true,
"Dead" : false,
"ExitCode" : 0,
"Paused" : false,
"Status" : "running",
"Error" : "",
"Restarting" : false,
"OOMKilled" : false,
"Pid" : 2167
}
student@student-vm:labs/lab02$

Step 6 Issue the docker container stop alpine_inspect command to stop the container and then the docker
container inspect --format='{{json .State }}' alpine_inspect | json_pp command to view the container
state. The container should not run and its status should be exited.

student@student-vm:labs/lab02$ docker container stop alpine_inspect
alpine_inspect
student@student-vm:labs/lab02$ docker container inspect --format='{{json .State }}'
alpine_inspect | json_pp
{
"Status" : "exited",
"Dead" : false,
"OOMKilled" : false,
"Restarting" : false,
"Pid" : 0,
"FinishedAt" : "2019-10-30T03:45:05.621892557Z",
"Running" : false,
"ExitCode" : 137,
"StartedAt" : "2019-10-30T03:15:07.1478015Z",
"Paused" : false,
"Error" : ""
}

Task 4: Add Host Folders to a Container


You will explore multiple options for transferring and synchronizing files between a host machine and a
container. You can copy a file or folder to, from, or between running containers at any time. However,
volumes, attached with the --mount or -v flags, can only be mounted when the container is instantiated.

Activity

Step 1 Set up the container environment by issuing the docker run -itd --name alpine_folder
registry.git.lab/cisco-devops/containers/alpine command.

student@student-vm:labs/lab02$ docker run -itd --name alpine_folder registry.git.lab/cisco-devops/containers/alpine
56c4620603a75cfa64d35edec14909df30a48840ce236b2c5222766510f3d9df
student@student-vm:labs/lab02$

Copy a Folder to the Container


The docker container cp <source> <destination> command copies files and folders to, from, or
between running containers. The container can be either the source or the destination. When you
reference a path inside a container, the format is <container_name>:<path>. Using the copy command,
you can copy files bidirectionally between the host and the container.

Step 2 Copy the app folder from the host machine to the container by issuing the docker container cp app/
alpine_folder:/ command.

student@student-vm:labs/lab02$ docker container cp app/ alpine_folder:/
student@student-vm:labs/lab02$

Verify That the Files Are Copied


When you copy files or folders, you create a physical copy of the file or folder in the container, not a pointer
to the source of the content. This situation is different from mounting volumes, where you create a pointer
from the container to the volume.
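The same copy-versus-pointer distinction exists in an ordinary file system, which makes it easy to demonstrate outside Docker. The following Python sketch is an analogy only, using temporary paths invented for illustration: shutil.copytree behaves like docker container cp, while a symlink behaves like a mounted volume.

```python
import pathlib
import shutil
import tempfile

base = pathlib.Path(tempfile.mkdtemp())
src = base / "app"
src.mkdir()
(src / "README.md").touch()

copied = base / "app_copy"
shutil.copytree(src, copied)   # physical copy, like `docker container cp`

linked = base / "app_link"
linked.symlink_to(src)         # pointer to the source, like a mounted volume

(src / "file1.txt").touch()    # change the source after copying/linking

# The copy is independent of the source; the link is a live, shared view.
print(sorted(p.name for p in copied.iterdir()))  # ['README.md']
print(sorted(p.name for p in linked.iterdir()))  # ['README.md', 'file1.txt']
```

Just as in the lab, a change made after the copy is visible only through the pointer (the link), never in the independent copy.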

You will confirm that the container copy of the file is not pointing to the source file.

Step 3 Issue the docker exec -it alpine_folder sh command to attach to the container shell.

Step 4 Issue the touch app/file1.txt command to change the folder structure.

Step 5 Issue the ls app/ command to review the changes.

Step 6 Issue the exit command to return to the host.

Step 7 Issue the ls app/ command to demonstrate that the initial folder stayed intact.

student@student-vm:labs/lab02$ docker exec -it alpine_folder sh
/ # ls
app bin dev etc home lib media mnt opt proc root run
sbin srv sys tmp usr var ~
/ # ls app/
README.md
/ # touch app/file1.txt
/ # ls app/
README.md file1.txt
/ # exit
student@student-vm:labs/lab02$ ls app/
README.md
student@student-vm:labs/lab02$

Establish a Volume Folder on the Container


You will set the environment to use a volume mount. A volume is a folder that is shared between the host
and the container. Any changes that are made in one folder will be reflected in the other folder. You can
only mount a volume at container instance instantiation.

Use the -v flag to mount, following the format <host_folder>:<container_folder>. For example:
-v ${PWD}/app:/app.

Step 8 Issue the docker run -itd --name alpine_folder_volume -v ${PWD}/app:/app registry.git.lab/cisco-
devops/containers/alpine command to set the container environment with a mounted volume.

Note The volume flag requires a full path, so the ${PWD} shell variable is used to dynamically supply the
current directory.

student@student-vm:labs/lab02$ docker run -itd --name alpine_folder_volume -v ${PWD}/app:/app registry.git.lab/cisco-devops/containers/alpine
67b0fc408b93a03b6d2592993da3fe8147715e72814ea221da0f8455af3977a9
student@student-vm:labs/lab02$

Confirm That the Files Are Shared in a Volume

Step 9 Run the following commands:

• Run the docker exec -it alpine_folder_volume sh command to attach to the container shell.
• Run the ls app/ command to list the content of the folder in the container.
• Run the touch app/file1.txt command to change the folder structure.
• Run the exit command to return to the host.
• Run the ls app/ command to confirm that the folder content was also updated on the host
machine.
student@student-vm:labs/lab02$ docker exec -it alpine_folder_volume sh
/ # ls app/
README.md
/ # touch app/file1.txt
/ # exit
student@student-vm:labs/lab02$ ls app/
file1.txt README.md
student@student-vm:labs/lab02$

Mount a Folder on the Container

Step 10 Issue the command docker run -itd --name alpine_folder_mount --mount
source=${PWD}/app,target=/app,type=bind registry.git.lab/cisco-devops/containers/alpine to start and
mount the folder.

student@student-vm:labs/lab02$ docker run -itd --name alpine_folder_mount --mount source=${PWD}/app,target=/app,type=bind registry.git.lab/cisco-devops/containers/alpine
67b0fc408b93a03b6d2592993da3fe8147715e72814ea221da0f8455af3977a9
student@student-vm:labs/lab02$

Confirm That the Files Are Shared in a Mount

Step 11 Run the following commands:

• Run the docker exec -it alpine_folder_mount sh command to attach to the container shell.
• Run the ls app/ command to list the contents of the folder in the container.
• Run the touch app/file2.txt command to change the folder structure.
• Run the exit command to return to the host.
• Run the ls app/ command to confirm that the folder content was also updated on the host
machine.
student@student-vm:labs/lab02$ docker exec -it alpine_folder_mount sh
/ # ls app/
README.md file1.txt
/ # touch app/file2.txt
/ # exit
student@student-vm:labs/lab02$ ls app/
file1.txt file2.txt README.md
student@student-vm:labs/lab02$

Task 5: Inspect the Container Networking
You will now investigate the networking configuration on both the host and container.

Activity

Step 1 Set up the environment by issuing the docker run -itd --name alpine_net
registry.git.lab/cisco-devops/containers/alpine command.

student@student-vm:labs/lab02$ docker run -itd --name alpine_net registry.git.lab/cisco-devops/containers/alpine
bf15d074af480663822f4157727c839b1b0bcac5b423e6b750e047c1a5a36228
student@student-vm:labs/lab02$

Inspect the Container


You have already inspected the container configuration in several ways. You can run similar commands on the
host and then compare the two outputs.

You will compare the default gateway that the Docker daemon reports for the container with the default
gateway that the container itself reports. You already know the docker container inspect command. The
equivalent command inside the Alpine Linux distribution is the route command. You will also use the exec
command, which allows you to run a command from the host but inside the container. It is important to
distinguish between host-run commands and container-run commands.

Step 2 Compare the default gateway IP address that the Docker daemon reports, using the docker container inspect --
format='{{json .NetworkSettings.Networks.bridge.Gateway }}' alpine_net command, with the default
gateway that the container itself reports, using the docker exec -it alpine_net route command.

student@student-vm:labs/lab02$ docker container inspect --format='{{json .NetworkSettings.Networks.bridge.Gateway }}' alpine_net
"172.18.0.1"
student@student-vm:labs/lab02$ docker exec -it alpine_net route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 172.18.0.1 0.0.0.0 UG 0 0 0 eth0
172.18.0.0 * 255.255.0.0 U 0 0 0 eth0
student@student-vm:labs/lab02$

Step 3 Compare the IP address that the Docker daemon reports, using the docker container inspect --
format='{{json .NetworkSettings.Networks.bridge.IPAddress }}' alpine_net command, with the IP
address that the container itself reports, using the docker exec -it alpine_net ip addr show command.

student@student-vm:labs/lab02$ docker container inspect --
format='{{json .NetworkSettings.Networks.bridge.IPAddress }}' alpine_net
"172.18.0.5"
student@student-vm:labs/lab02$ docker exec -it alpine_net ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
71: eth0@if72: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:12:00:05 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.5/16 brd 172.18.255.255 scope global eth0
valid_lft forever preferred_lft forever
student@student-vm:labs/lab02$

Compare the Output From the docker network inspect Command


The docker container inspect command outputs the configuration parameters in the context of the whole
container, which is a lot of information. If you are only interested in networking parameters, you can use
the docker network inspect command, which returns only the network-specific configuration.

Step 4 Obtain the docker network ID by issuing the docker container inspect --
format='{{json .NetworkSettings.Networks.bridge.NetworkID }}' alpine_net command. Use the returned
network ID to run the docker network inspect <network_id> command. Review the fields in the output.
Note the additional containers that share the same network bridge. You can also compare the returned
information with the outputs from the earlier steps where you obtained the IP address and default gateway.

student@student-vm:labs/lab02$ docker container inspect --
format='{{json .NetworkSettings.Networks.bridge.NetworkID }}' alpine_net
"ba6756271a373b75fd11cb219ba28bde53aaa4429e0a46fdba7195be6af99dc8"

student@student-vm:labs/lab02$ docker network inspect ba6756271a373b75fd11cb219ba28bde53aaa4429e0a46fdba7195be6af99dc8
[
{
"Name": "bridge",
"Id": "ba6756271a373b75fd11cb219ba28bde53aaa4429e0a46fdba7195be6af99dc8",
"Created": "2019-10-29T20:41:49.649384934Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"358fe723ef98046eeae24cd9d155112b39d9c242dab9f6b31b87815b3ef52066": {
"Name": "alpine_folder_mount",
"EndpointID":
"e615cffd49452f3fec7cea990e2e8cbc7cd246ad1fddff2c2f1d2d9053cd024b",
"MacAddress": "02:42:ac:12:00:04",
"IPv4Address": "172.18.0.4/16",
"IPv6Address": ""
},
"56c4620603a75cfa64d35edec14909df30a48840ce236b2c5222766510f3d9df": {
"Name": "alpine_folder",
"EndpointID":
"3d27516254ffa27a4e4fc1deb1fde9506ab847fdf14668c089321deddc39b188",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"67b0fc408b93a03b6d2592993da3fe8147715e72814ea221da0f8455af3977a9": {
"Name": "alpine_folder_volume",
"EndpointID":
"685b3b5b37f4bcce18c62fe41fe8f0f602cfdcf318a2c1c1c47110ae1c7ecc05",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
},

"bf15d074af480663822f4157727c839b1b0bcac5b423e6b750e047c1a5a36228": {
"Name": "alpine_net",
"EndpointID":
"00523fefce36232be21cdcfc2ebbae8b406cdb9c1c9be6cd3af87fef0e69c22f",
"MacAddress": "02:42:ac:12:00:05",
"IPv4Address": "172.18.0.5/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
student@student-vm:labs/lab02$

Task 6: Examine the Container Logs


Activity

Finally, you will investigate the logging status of a container.

Step 1 Set up the environment by issuing the docker run -itd --name alpine_log
registry.git.lab/cisco-devops/containers/alpine sh -c "while true; do $(echo date); sleep 5; done"
command.

Note Notice that a while loop was added to the command. Its only purpose is to generate some tangible log
output for you to examine.

student@student-vm:labs/lab02$ docker run -itd --name alpine_log registry.git.lab/cisco-devops/containers/alpine sh -c "while true; do $(echo date); sleep 5; done"
24f8f54aa14bfb880c0a90a15e0596a571eac68a0a9edee227db2c89cac45d73
student@student-vm:labs/lab02$

View the Logs


The docker logs <container> command allows you to view the container logs, which represent the output
from the command that is running inside the container.

Step 2 Issue the docker logs alpine_log command. The output will show time stamps for every 5 seconds that have
elapsed since the container was started.

student@student-vm:labs/lab02$ docker logs alpine_log
Wed Oct 30 05:20:57 UTC 2019
Wed Oct 30 05:21:02 UTC 2019
Wed Oct 30 05:21:07 UTC 2019
Wed Oct 30 05:21:12 UTC 2019
Wed Oct 30 05:21:17 UTC 2019
Wed Oct 30 05:21:22 UTC 2019
student@student-vm:labs/lab02$

Step 3 Adding the -f option to the docker logs <container> command allows you to follow the log file in real time,
updating the output whenever new data is added. The --tail <string> option tells the docker logs
command how many entries from the end of the log file to show.

Use the docker logs -f --tail 10 alpine_log command to start by showing the last 10 entries from
the log file and then follow changes that may happen in the log file.
student@student-vm:labs/lab02$ docker logs -f --tail 10 alpine_log
Wed Oct 30 05:26:37 UTC 2019
Wed Oct 30 05:26:42 UTC 2019
Wed Oct 30 05:26:47 UTC 2019
Wed Oct 30 05:26:52 UTC 2019
Wed Oct 30 05:26:57 UTC 2019
Wed Oct 30 05:27:02 UTC 2019
Wed Oct 30 05:27:07 UTC 2019
Wed Oct 30 05:27:12 UTC 2019
Wed Oct 30 05:27:17 UTC 2019
Wed Oct 30 05:27:22 UTC 2019
Wed Oct 30 05:27:27 UTC 2019
^C
student@student-vm:labs/lab02$

Step 4 To view the rest of the options that are available for the docker logs command, use the docker logs --help
command.

student@student-vm:labs/lab02$ docker logs --help

Usage: docker logs [OPTIONS] CONTAINER

Fetch the logs of a container

Options:
--details Show extra details provided to logs
-f, --follow Follow log output
--since string Show logs since timestamp (e.g. 2013-01-02T13:23:37) or relative
(e.g. 42m for 42 minutes)
--tail string Number of lines to show from the end of the logs (default "all")

-t, --timestamps Show timestamps


--until string Show logs before a timestamp (e.g. 2013-01-02T13:23:37) or
relative (e.g. 42m for 42 minutes)
student@student-vm:labs/lab02$

As you saw earlier, you can mix and match different options within the same docker logs
command, depending on your logging output needs.
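Conceptually, --tail keeps only the last N lines of the log stream. A bounded buffer shows the idea in a few lines of Python; the timestamps below are invented to mimic the lab container's output and are not real log data.

```python
from collections import deque

# Invented log lines, one every 5 seconds, mimicking the lab container.
log_lines = [f"Wed Oct 30 05:26:{sec:02d} UTC 2019" for sec in range(0, 60, 5)]

# `docker logs --tail 3`, conceptually: a buffer that retains only the
# newest three entries as lines stream in.
tail = deque(log_lines, maxlen=3)
for line in tail:
    print(line)  # only the last three timestamps survive
```

Following with -f then simply keeps appending to this view as new lines arrive, which is why you must interrupt it with Ctrl-C as in the lab output above.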

Summary
You reviewed basic container management with Docker and used several docker commands. You
examined the entire lifecycle of a container, from building and deploying to interacting with and removing
containers. Some of the commands that you used, such as inspecting the container and reviewing the
container logs, also introduced basic troubleshooting approaches. With the knowledge gained and the
basic commands examined here, you should be able to run Docker locally.

Summary Challenge
1. Which three statements describe why containers have become so popular? (Choose three.)
a. Containers break up complex systems into smaller components.
b. Containers include antivirus, antimalware, and endpoint protection software.
c. Containers can be deployed independently from underlying infrastructure.
d. Containers can test images locally on development machines with consistent promotion to
production.
e. Containers were introduced to eliminate the need for VMs.
f. Containers enable the use of inexpensive RAM.
g. Containers eliminate the need for hard drives.
2. Which two dependencies must be fulfilled to run Linux containers anywhere? (Choose two.)
a. hypervisor versions match
b. CPU architectures match
c. DRAM frequencies match
d. Linux kernel versions match
e. Internet access
3. Which feature, introduced in the Linux kernel 2.6.24, enabled the limitation of resource utilization
of Linux containers?
a. control groups
b. security groups
c. group overwatch
d. stretch limiters
4. Which three functions were introduced in Docker Engine Version 18.x? (Choose three.)
a. a REST-API for clients to interact with the Docker daemon
b. a CLI client
c. Docker container run time
d. virtualized file system for containers to run
e. a daemon process to maintain the lifecycle of a particular host
f. namespaces on Linux and Windows hosts
g. security group tags for segmentation of hosts
5. Which command will download an Ubuntu image from a Docker Image Repository if not already
available on the system?
a. docker get pull ubuntu
b. docker image pull ubuntu
c. docker image ls ubuntu
d. image ubuntu –download
6. Which subcommand is involved in the execution of a Docker container?
a. docker image
b. docker run
c. docker exec
d. docker make

7. Which subcommand is used for managing Docker images on the host?
a. docker container
b. docker run
c. docker make
d. docker image
8. Which subcommand is used for managing Docker containers on the host, such as stopping,
restarting, and starting a stopped container?
a. docker container
b. docker run
c. docker make
d. docker image

Answer Key
Container-Based Architectures
1. B, C, E
2. B

Linux Containers
1. D

Docker Overview
1. Docker Engine: Container lifecycle manager
   containerd/runc: Container run time
   Namespaces: Linux operating system

Docker Commands
1. A

Summary Challenge
1. A, C, D
2. B, D
3. A
4. A, B, E
5. B
6. B
7. D
8. A

Section 3: Packaging an Application Using Docker

Introduction
Dockerfiles are used to package applications into container images in a repeatable and source-controlled
environment. A Dockerfile enables both developers and operators to understand how the container is built,
to check which processes are running in a container, and to access good documentation about primary tasks
of the container. A Dockerfile represents the structured format of the container build process. There are
many instructions and arguments that can go into the building of a container image.
Once an image is built, it becomes an artifact that can be stored in a Docker image registry, where
images can be versioned and deployed. Base images are the starting point for any new Docker
image. Many of these base images are from common Linux distributions and may come with a packaged
application, such as Python. As images are tested and approved, they become golden images for use in any
organization.

Dockerfiles
Using Dockerfiles helps you overcome the challenges of building consistent applications and enables
automated builds of Docker images. A Dockerfile includes a set of instructions for building a Docker
image. Inside a Dockerfile are instructions, such as FROM, RUN, and CMD, that tell the build process
how to perform actions while creating the Docker container image. Each instruction is followed by
arguments that provide the information the instruction needs for its execution.

Dockerfile Introduction
A Dockerfile always begins with the FROM instruction and is processed from the top to the bottom of the
Dockerfile. In a special scenario, the ARG instructions can be placed before the FROM instruction. You
will put the ARG instructions first only if the arguments in the ARG instructions will be used in the FROM
instruction, such as a version number. The best practice is to write all instructions in uppercase. This is not a
requirement, but will help you easily distinguish an instruction from instruction arguments.
An argument is information that is used by the instruction. An example of an argument is the source and
destination directory for COPY or ADD instructions. Another example of an argument is a command that
the RUN instruction will execute. In the following example, all the items that are not part of a comment and
appear in lowercase are considered arguments.
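For instance, in this minimal sketch (the image reference and package name are placeholders, not part of the course example that follows), each uppercase word is an instruction and the lowercase text after it is the argument:

```dockerfile
# FROM is the instruction; the image reference is its argument.
FROM python:3.7-slim

# RUN is the instruction; the shell command it executes is its argument.
RUN pip install flask

# CMD is the instruction; the default startup command is its argument.
CMD ["python", "run.py"]
```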

The following are the most commonly used Dockerfile instructions:

Full Layer Instructions    Intermediate Layer Instructions

FROM                       LABEL
COPY                       ENV
ADD                        EXPOSE
RUN                        WORKDIR
CMD

A Dockerfile is then processed by the docker build . command to create a Docker container image.

Dockerfile Example
When looking through the common Dockerfile, the first step is the FROM
registry.git.lab/cisco-devops/containers/python37:latest instruction with an argument. FROM is the
instruction that starts the Dockerfile. The instruction is followed by the image name on which this Docker
container will be based. If you do not provide the URL as part of the registry, Docker assumes that the
image is from the public Docker repository at https://hub.docker.com. In this example, the image will be
pulled from a local registry with the DNS entry of registry.git.lab. The Docker registry is then nested in the
cisco-devops/containers folder. The name of the base container is python37, which indicates that the Python
3.7 build will be used. The colon after python37 is a tag separator. It indicates the Docker image tag on
which the container will be based. Typically, you would use the "latest" tag to represent the latest stable
version, which may not be the newest version of the image. The user who makes the build assigns the tag
value.
FROM registry.git.lab/cisco-devops/containers/python37:latest

# metadata
LABEL description="This is a net inventory flask application"
LABEL maintainer="Cisco <[email protected]>"
LABEL version="0.1"

# copy files over to container
COPY ./ /net_inventory

# sets the working directory
WORKDIR /net_inventory/

# install system packages
RUN apt install -y git vim

# install python packages
RUN pip install -r ./requirements.txt

# set environment to development by default
ENV ENV=DEVELOPMENT

# doesn't actually do anything, just documentation purposes.
EXPOSE 5000/tcp

# start the application
ENTRYPOINT python run.py

The LABEL instruction defines particular values for particular keys. In these key-value pairs, the key is
specified on the left side of the equal sign and the value is defined on the right side. Each key-value pair
must be on the same line. For practical reasons, you define the value in double quotation marks ("), but this
format is not a requirement.
The ADD instruction (with the syntax ADD <src> <dest>) defines the source location of a local file or
URL and copies the corresponding files into the specified destination directory. If the source is a
compressed file, such as a .tar file, the ADD instruction will also extract the file upon completion of the
copy process.
The COPY instruction (with the syntax COPY <src> <dest>) is very similar to the ADD instruction. The
difference is that the COPY instruction takes an explicit source file and directory and copies it to the
destination. In the example, the instruction will copy the data from the local current directory to the
/net_inventory destination directory. The COPY instruction does not extract a compressed file and cannot
copy from a URL.
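A short illustrative fragment of the distinction (the archive name app.tar.gz and the destination paths are placeholders):

```dockerfile
# COPY transfers the file as-is; the archive remains compressed at the destination.
COPY app.tar.gz /tmp/

# ADD copies and also extracts a local .tar archive into the destination directory,
# and unlike COPY it can fetch content from a URL.
ADD app.tar.gz /opt/app/
```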
The WORKDIR instruction (with the syntax WORKDIR /path/to/directory) is equivalent to the cd Linux
command. You are encouraged to use absolute paths when changing the current working directory and not
to rely on the ../ relative paths.
The RUN instruction will execute the command that is specified in the argument from within the container.
Common arguments are package installation commands such as apt install and pip install.
The ENV instruction sets environment variables in the container while it is being built. In this example,
the variable ENV will be set to DEVELOPMENT.
The EXPOSE instruction is a documentation-only instruction that you can use to indicate the ports that will
be used for the container. The default protocol is TCP and does not have to be explicitly defined. In the
example, TCP is explicitly specified and the port number is set to 5000.
The CMD instruction allows you to specify a default command that will be executed when you run the
container with the docker run command without a command to be executed. You can use the CMD
instruction only once within a Dockerfile. If you do not use the CMD instruction, you must use the
ENTRYPOINT instruction.
The ENTRYPOINT instruction sets a main command that will always be executed when the container
starts up.
What is the difference between the CMD and ENTRYPOINT instruction? The CMD instruction will
execute its argument only if you execute the docker run command without an executable argument, and
therefore the docker run command can overrule this instruction. The ENTRYPOINT instruction will
always execute its argument when the container starts up.
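As a hedged sketch (the alpine base image and echo commands are illustrative, not from this course's registry), the two behaviors look like this:

```dockerfile
# CMD supplies only a default: "docker run <image>" prints the default message,
# while "docker run <image> echo hello" replaces the CMD entirely.
FROM alpine
CMD ["echo", "default message"]

# With ENTRYPOINT instead, echo always runs; "docker run <image> hello"
# appends "hello" as an argument rather than replacing the command:
# ENTRYPOINT ["echo"]
```

Note that an ENTRYPOINT can still be overridden explicitly with the docker run --entrypoint flag, but not by simply appending a command.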

Docker Container Size Optimization
The size of the container is an important factor. The Docker container life span is expected to be relatively
short compared to the life span of a VM. When building containers, images are frequently pulled across the
network from the Docker image registry. It is easier for a system to pull a 90-MB container down than a
2-GB container.
Every instruction that adds a layer also increases the size of the container image. To help keep the number of
layers to a minimum, you should try to bundle many commands in a single RUN instruction
wherever possible. For example, instead of running a RUN apt install vim instruction, followed by a RUN
apt install git, and then maybe another RUN instruction to install the next package, you could use one
single RUN instruction:
RUN apt install git vim

In situations where you can run many commands with a single RUN instruction, you can make use of a
more readable syntax, where you list each command in a separate line and use a backslash (\) as the
“continuation” marker. The backslash character tells the interpreter that the next line is part of the same
command set. Here is an example:
RUN apt install git \
vim \
next-package1 \
next-package2

When using Linux-based containers using the Bash shell, you can also use another method of executing
multiple commands with a single instruction set using the double ampersand (&&) combination. For
example, apt update && apt upgrade.
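The two shell mechanics behind these Dockerfile patterns can be tried directly in Bash (the directory and package names below are placeholders used only for demonstration):

```shell
# '&&' chains commands: the second runs only if the first succeeds,
# which is why 'apt update && apt upgrade' fits in a single RUN layer.
mkdir -p /tmp/layer-demo && echo "chain ok"

# A trailing backslash continues the same command on the next line,
# mirroring the multi-line RUN syntax shown above.
echo installed: package-one \
    package-two
```

Running this prints "chain ok" followed by "installed: package-one package-two", showing that both lines of the continued echo are parsed as one command.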

Docker Build Process


Now that you have created a Dockerfile, you can start building the image. You will use the docker build .
command to build an image from the file that is located inside the current directory. The period (.) indicates
that the current directory should be used as the starting point for the files that will be passed to the container
build process.
Using the previous Dockerfile example, the process of building the image will have 11 steps, one for each
instruction within the Dockerfile. The docker build command needs additional parameters to start the build
process. The -t option indicates that the next word will be the tag to be used with the image. Alternatively,
you could use the --tag option. In this example, you will tag the image as dev-net-inv-image. The last
parameter is the path to the files that are being passed to the build process. In this example, it is the working
directory (.).
$ docker build -t dev-net-inv-image .
Sending build context to Docker daemon 58.26MB
Step 1/11 : FROM registry.git.lab/cisco-devops/containers/python37:latest
---> dd4eec63855e
Step 2/11 : LABEL description="This is a net inventory flask application”
---> Running in 679add8eefa4
Removing intermediate container 679add8eefa4
---> 4bbd70343f32
Step 3/11 : LABEL maintainer="Cisco <[email protected]>”
---> Running in 3d7c27c481e3
Removing intermediate container 3d7c27c481e3
---> ce454bfc2958
Step 4/11 : LABEL version="0.1”

---> Running in c8d629639b87
Removing intermediate container c8d629639b87
---> a326d8f88a03

Step 5/11 : COPY ./ /net_inventory
 ---> 39b79cf1dc59
Step 6/11 : WORKDIR /net_inventory/
---> Running in 6813077fdc30
Removing intermediate container 6813077fdc30
---> 0aa7339d2ac3
Step 7/11 : RUN apt install -y git vim
---> Running in 63174b97da49
{...output omitted for brevity...}
Removing intermediate container 63174b97da49
---> df08d0ba9082
Step 8/11 : RUN pip install -r ./requirements.txt
---> 44d65ea36668

Step 9/11 : ENV ENV=DEVELOPMENT
 ---> Running in 742b9cb225ef
Removing intermediate container 742b9cb225ef
---> 8986d252bda3
Step 10/11 : EXPOSE 5000/tcp
---> Running in 78ce74b38938
Removing intermediate container 78ce74b38938
---> 5935d023d9df
Step 11/11 : ENTRYPOINT python run.py
---> Running in 90a5c1f228b1
Removing intermediate container 90a5c1f228b1
---> 86d5cd36cf19
Successfully built 86d5cd36cf19
Successfully tagged dev-net-inv-image:latest

If you plan to build an image from a different Dockerfile, you can use the -f /path/to/file option. An example
would be the docker build -f Dockerfile_frontend . command, which specifies the local build file,
Dockerfile_frontend file, and builds the image from the current working directory.
Here are few use cases of the docker build command:
$ docker build [OPTIONS] PATH|URL|-
$ docker build .
$ docker build -f /path/to/a/Dockerfile .

Docker Build History


The docker image history command is very useful. It shows you the size of the layers that were added to
the container.
$ docker image history registry.git.lab/cisco-devops/containers/python37
IMAGE          CREATED      CREATED BY                                      SIZE     COMMENT
dd4eec63855e   10 days ago  /bin/sh -c pip install alembic==1.2.1 …         92.3MB
e296b1fa4d9d   10 days ago  /bin/sh -c apt install -y git vim python3-de…   310MB
01e15190721f   10 days ago  /bin/sh -c apt update                           16.4MB
05627decfad6   3 weeks ago  /bin/sh -c #(nop) CMD ["python3"]               0B
<missing>      3 weeks ago  /bin/sh -c set -ex; savedAptMark="$(apt-ma…     7.44MB
<missing>      3 weeks ago  /bin/sh -c #(nop) ENV PYTHON_GET_PIP_SHA256…    0B
<...output omitted...>
<missing>      3 weeks ago  /bin/sh -c #(nop) ENV PYTHON_GET_PIP_URL=ht…    0B
<missing>      3 weeks ago  /bin/sh -c #(nop) ENV PYTHON_PIP_VERSION=19…    0B
<missing>      3 weeks ago  /bin/sh -c cd /usr/local/bin && ln -s idle3…    32B
<missing>      3 weeks ago  /bin/sh -c set -ex && savedAptMark="$(apt-…     86.1MB
<missing>      3 weeks ago  /bin/sh -c #(nop) ENV PYTHON_VERSION=3.7.5      0B
<missing>      3 weeks ago  /bin/sh -c #(nop) ENV GPG_KEY=0D96DF4D4110E…    0B
<missing>      3 weeks ago  /bin/sh -c apt-get update && apt-get install…   6.48MB
<missing>      3 weeks ago  /bin/sh -c #(nop) ENV LANG=C.UTF-8              0B
<...output omitted...>
<missing>      3 weeks ago  /bin/sh -c #(nop) ENV PATH=/usr/local/bin:/…    0B
<missing>      3 weeks ago  /bin/sh -c #(nop) CMD ["bash"]                  0B
<missing>      3 weeks ago  /bin/sh -c #(nop) ADD file:37512e59e7c324f9e…   55.3MB

You can see that some instructions, such as the RUN and ADD instructions, add significant layer file size to
the image size. On the other hand, the lightweight instructions, such as the CMD, ENV, or LABEL
instructions, do not increase the image size.

Docker Lifecycle of a Build


The Docker lifecycle starts with creating the Dockerfile. Using the Dockerfile and the docker build
command, you create the Docker image. Once you have the image available, you can start Docker using the
docker run command.

The following command starts the container in detached mode (-d), maps local port 5000 to container port
5000 (-p localport:containerport) with an interactive (-i) pseudo-terminal (-t), and names the container
dev-net-inv (--name containername), using the container image named dev-net-inv-image:
docker run -itd -p 5000:5000 --name dev-net-inv dev-net-inv-image

The detached mode allows you to run the container in the background, without any interaction and without
the need to log in to the container.
The --name option allows you to specify a container name that is different from the image name and
therefore enables you to run multiple containers using the same image.
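For example, using the dev-net-inv-image image from the example above, two independent containers could run from the same image (the second container name and host port here are illustrative):

```
$ docker run -itd -p 5000:5000 --name dev-net-inv dev-net-inv-image
$ docker run -itd -p 5001:5000 --name dev-net-inv-2 dev-net-inv-image
```

Each container needs a unique name and a unique host port, but both share the same underlying image.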

Comparing Docker Container Lifecycles


As you have learned, you can download existing Docker images from the Docker Hub registry. These
images are free and publicly available. Once you download a Docker image, you can start the image and run
the container.
Now you will learn how to use your own, customized Docker images. You need to create the Dockerfile,
build the image, and then run the container.
You can also stop the container from running by using the docker stop <id> command. Stopping the
container does not delete the container. You can restart the container using the docker start <id>
command, where <id> is the hash of the container.
To remove the image from the host, use the docker image rm command.
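Putting the lifecycle together, a hypothetical session might look like this (output omitted; the image and container names match the earlier example):

```
$ docker build -t dev-net-inv-image .                      # Dockerfile -> image
$ docker run -itd --name dev-net-inv dev-net-inv-image     # image -> running container
$ docker stop dev-net-inv                                  # container stopped, not deleted
$ docker start dev-net-inv                                 # same container restarted
$ docker stop dev-net-inv && docker rm dev-net-inv         # delete the container
$ docker image rm dev-net-inv-image                        # remove the image from the host
```

Note that removing an image typically fails while a container created from it still exists, which is why the container is removed first here.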

1. Which Dockerfile instruction indicates the start of the container build?


a. RUN
b. START
c. FROM
d. ARG

Discovery 3: Package and Run a WebApp
Container
Introduction
Becoming proficient in a technology like Docker can take months to years, depending on how often you use
it. Here you will see a high-level, but practical view of how to package and build an application in Docker
using a Dockerfile. During the process, you will learn how a Dockerfile is constructed. Then you will
deploy a web application—a network device inventory application that is based on the Python Flask
framework. Once the container is built and the application is running, you will package the container and
publish it to the GitLab container registry.

Topology

Job Aids
Device Information

Device                      Description          FQDN or IP Address   Credentials

Student Workstation         Linux Ubuntu VM      192.168.10.10        student, 1234QWer

GitLab                      Git Repository       git.lab              student, 1234QWer

GitLab Container Registry   Container Registry   registry.git.lab     student, 1234QWer

Command List
The table describes the commands that are used in this activity. The commands are listed in alphabetical
order so that you can easily locate the information that you need. Refer to this list if you need configuration
command assistance during the lab activity.

Command Description

cd directory name To change directories within the Linux file system, use the cd
command. You will use this command to enter a directory where
the scripts are housed. You can use tab completion to finish the name
of the directory after you start typing it.

cat file The most common use of the cat Linux command is to read the
contents of files. It is the most convenient command for this purpose in
a Unix-like operating system.

docker build -t name:tag -f filename path This command builds a Docker image. The -t flag will
name and tag the image as you specify. The -f flag is used when you are
not using the standard filename of Dockerfile. The path defines the
context for the Docker daemon; normally, it is the "." that is specified.

docker container ls -a This command allows you to view the containers that are configured
on the host system. The -a flag also shows containers that are not up.

docker login docker_registry This command allows you to log in to a Docker registry. In cases
where you are not already logged in, it will prompt you for your
username and password.

docker push container_registry/gitlab_organization/gitlab_project/container:tag This docker
command pushes an image to the registry. The command does not have spaces around the
forward slashes.

docker run -itd -p port --name container container_registry/gitlab_organization/gitlab_project/container:tag command
The command to run, or obtain from a container registry and run, a
container. The -i flag is for interactive, and the -t flag creates a
pseudo-TTY to the container. The -d flag runs the container in the
detached state. The command is any command that is valid on the
container. The --name flag names the container as you intend, rather
than randomly generating a name for you. The -p flag is for port; it can
be in either host_port:container_port format, or port format.

docker tag container_registry/gitlab_organization/gitlab_project/container:tag This command tags
an image. Here, images are generally tagged using the
container_registry/gitlab_organization/gitlab_project/container:tag standard.

docker version This command allows you to view the Docker status and the Docker
version that is currently running.

git clone repository This command downloads or clones a Git repository into the directory
that has the name of the project in the repository definition.

ls This command allows you to see the contents of a folder.

Task 1: Create a Dockerfile


You will create a Dockerfile. You will use Visual Studio Code to edit files and use the terminal embedded
within the application to run commands.

Activity

Step 1 In the student workstation, find and open Visual Studio Code by clicking the Visual Studio Code icon.

Step 2 From the Visual Studio Code top navigation bar, choose Terminal > New Terminal [ctrl-shift-`].

Step 3 Navigate to the terminal section at the bottom of Visual Studio Code.

Step 4 Within the Visual Studio Code terminal, change the directory to ~/labs/lab03 using the cd ~/labs/lab03
command.

student@student-vm:$ cd ~/labs/lab03/
student@student-vm:labs/lab03$

Ensure That Docker Is Installed and Running

Step 5 Execute the docker version command to verify which Docker version is installed. You should be running
version 18.09.7.

student@student-vm:labs/lab03$ docker version
Client:
Version: 18.09.7
API version: 1.39
Go version: go1.10.1
Git commit: 2d0083d
Built: Fri Aug 16 14:20:06 2019
OS/Arch: linux/amd64
Experimental: false

Server:
Engine:
Version: 18.09.7
API version: 1.39 (minimum version 1.12)
Go version: go1.10.1
Git commit: 2d0083d
Built: Wed Aug 14 19:41:23 2019
OS/Arch: linux/amd64
Experimental: false
student@student-vm:labs/lab03$

Log In to the GitLab Container Registry


Once you are logged in to a container registry via the Linux command line, Docker will store your
credentials and you will not be prompted again.

Step 6 Log in to the GitLab container registry with the docker login registry.git.lab command. You will be
prompted for your username and password. Use the credentials that are provided in the Job Aids.

student@student-vm:labs/lab03$ docker login registry.git.lab


Username: student
Password:
WARNING! Your password will be stored unencrypted in /home/student/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
student@student-vm:labs/lab03$

View the Application in GitLab


The application code is hosted in a GitLab repository. Familiarize yourself with GitLab and the code it is
hosting.

Step 7 From the Chrome browser, navigate to https://git.lab.

Step 8 Log in with the credentials that are provided in the Job Aids and click Sign in.

Step 9 From the list of projects, choose the cisco-devops/net_inventory project.

Step 10 Review the layout of the code and the folder structure.

Clone the Repository to Your Student Workstation
The application code is on the GitLab server, but to run the application, you must copy the code to a local
directory.

Step 11 In the Visual Studio Code terminal, run the git clone https://git.lab/cisco-devops/net_inventory command.

Step 12 Change the directory to net_inventory by running the cd net_inventory command. Notice that the bash
string will change to include the context of your Git branch. The context of the Git branch is set in your
~/.bashrc file. This modified bash string helps you by providing a visual reminder of the branch in which
you are located.

Step 13 Use the ls command to verify that you have successfully downloaded the files and folders.

student@student-vm:labs/lab03$ git clone https://git.lab/cisco-devops/net_inventory


Cloning into 'net_inventory'...
warning: redirecting to https://git.lab/cisco-devops/net_inventory.git/
remote: Enumerating objects: 416, done.
remote: Counting objects: 100% (416/416), done.
remote: Compressing objects: 100% (112/112), done.
remote: Total 416 (delta 292), reused 416 (delta 292)
Receiving objects: 100% (416/416), 3.10 MiB | 14.75 MiB/s, done.
Resolving deltas: 100% (292/292), done.
student@student-vm:labs/lab03$ cd net_inventory/
student@student-vm:lab03/net_inventory (master)$ ls
app.py database.db docker-compose.yml Dockerfile Makefile migrations
net_inventory net-inventory-config.yml postgres-data pyproject.toml
requirements.txt run.py setup.py static tests
student@student-vm:lab03/net_inventory (master)$

Navigate to the Directory in Visual Studio Code
To work in Visual Studio Code to create, edit, and delete files, you must open the working directory inside
the editor.

Step 14 From the Visual Studio Code top navigation bar, choose File > Open Folder… [Ctrl-K Ctrl-O].

Step 15 From the Open Folder page, choose student > labs > lab03 > net_inventory and click OK in the top-right
corner. Now you should see the NET_INVENTORY folder in the left pane.

Note When you choose a folder and you have a file open, Visual Studio Code may ask you to save or discard
that opened file. You may need to restart the terminal, depending on how Visual Studio Code will treat
opening a new folder. In that case, follow the instructions from previous steps to open the terminal again.

Create a New Dockerfile in Visual Studio Code
The filename Dockerfile is a standard Docker construct. It is the default name for a text document that
Docker uses. The Dockerfile contains all the commands that you could call from the command line to
assemble a Docker container from a Docker image. You will create the Dockerfile in Visual Studio Code
because the Git application does not currently have an editor.

Step 16 In Visual Studio Code, the folder NET_INVENTORY is in the left-hand EXPLORER pane. Hover over
NET_INVENTORY and click the first New File icon to create a file.

Step 17 The cursor will move to the new file. Set the filename to Dockerfile.

Build the Dockerfile
More than 12 Dockerfile instructions are available for creating a Docker container image. In this activity,
you will use the FROM, LABEL, ADD, WORKDIR, RUN, ENV, EXPOSE, and ENTRYPOINT instructions.

Keyword Description

FROM With this instruction, you define the parent image from which you are
building a container. A Dockerfile must begin with the FROM
instruction.

LABEL This instruction creates metadata for an image. A LABEL is a
key-value pair. To include spaces within a LABEL value, use quotation
marks and backslashes as you would in command-line parsing.

WORKDIR This instruction sets the working directory for any command that is run
within the container.

RUN This instruction executes any commands in a new layer on top of
the current image and commits the results.

ENV This instruction sets an environment variable in the container.

EXPOSE This instruction informs Docker that the container listens on the
specified network ports at run time. The EXPOSE instruction does not
actually publish the port. It functions as a type of documentation
between the person who builds the image and the person who runs the
container, concerning the ports that are intended to be published.

ENTRYPOINT This instruction sets the main command of the image and allows that
image to be run as though it was that command.

The prestaged container image has already defined all the actual package requirements. The following steps
use the RUN instruction to install the git and vim system packages and the Python pip requirements.

Step 18 Set the base image to python37 using the latest tag. Use the FROM
registry.git.lab/cisco-devops/containers/python37:latest instruction.

Step 19 Provide metadata with the following key-value pairs:

• LABEL description="This is a net inventory flask application"
• LABEL maintainer="Cisco <[email protected]>"
• LABEL version="0.1"

Step 20 The contents should be added to the /net_inventory folder. Use the ADD ./ /net_inventory instruction.

Step 21 Set the working directory to the /net_inventory folder using the WORKDIR /net_inventory/ instruction.

Step 22 Upon building, the container should install the git and vim system packages and the Python
requirements. Use the RUN apt install -y git vim and RUN pip install -r ./requirements.txt instructions.

Step 23 Set the environment variable to DEVELOPMENT using the ENV ENV=DEVELOPMENT instruction.

Step 24 The file reader port should be exposed to TCP port 5000. Use the EXPOSE 5000/tcp instruction.

Step 25 On container startup, the run.py script should be executed. Use the ENTRYPOINT python run.py
instruction.

student@student-vm:lab03/net_inventory (master)$ cat Dockerfile
FROM registry.git.lab/cisco-devops/containers/python37:latest

# metadata
LABEL description="This is a net inventory flask application"
LABEL maintainer="Cisco <[email protected]>"
LABEL version="0.1"

# copy files over to container
ADD ./ /net_inventory

# sets the working directory
WORKDIR /net_inventory/

# install system packages
RUN apt install -y git vim

# install python packages
RUN pip install -r ./requirements.txt

# set environment to development by default
ENV ENV=DEVELOPMENT

# doesn't actually do anything, just documentation purposes. forward your port at runtime
EXPOSE 5000/tcp

# start the application
ENTRYPOINT python run.py
student@student-vm:lab03/net_inventory (master)$

Build Your Container from the Dockerfile
You will use the docker build command to build the container image from your Dockerfile. Dockerfile is
the default name for the text document that Docker uses. Therefore, you will not need to specify the
filename of the build process document. If you use a different packaging filename, you need to use the -f
filename flag. When building an image, you must tag the image with the -t tag flag to assign a name to the
built image in the local registry.

Step 26 In the terminal window, execute the docker build -t dev-net-inv-image . command.

Note The period (.) at the end of the command ensures that the image will be built using the local Dockerfile.

student@student-vm:lab03/net_inventory (master)$ docker build -t dev-net-inv-image .
Sending build context to Docker daemon 58.26MB
Step 1/11 : FROM registry.git.lab/cisco-devops/containers/python37:latest
---> dd4eec63855e
Step 2/11 : LABEL description="This is a net inventory flask application"
---> Running in 679add8eefa4
Removing intermediate container 679add8eefa4
---> 4bbd70343f32
Step 3/11 : LABEL maintainer="Cisco <[email protected]>"
---> Running in 3d7c27c481e3
Removing intermediate container 3d7c27c481e3
---> ce454bfc2958
Step 4/11 : LABEL version="0.1"
---> Running in c8d629639b87
Removing intermediate container c8d629639b87
---> a326d8f88a03
Step 5/11 : ADD ./ /net_inventory
---> 39b79cf1dc59
Step 6/11 : WORKDIR /net_inventory/
---> Running in 6813077fdc30
Removing intermediate container 6813077fdc30
---> 0aa7339d2ac3
Step 7/11 : RUN apt install -y git vim
---> Running in 63174b97da49

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

Reading package lists...
Building dependency tree...
Reading state information...
git is already the newest version (1:2.11.0-3+deb9u4).
vim is already the newest version (2:8.0.0197-4+deb9u3).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Removing intermediate container 63174b97da49
---> df08d0ba9082
Step 8/11 : RUN pip install -r ./requirements.txt
---> Running in 3ff5211324cb
Requirement already satisfied: alembic==1.2.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 1)) (1.2.1)
Requirement already satisfied: asn1crypto==0.24.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 2)) (0.24.0)
Requirement already satisfied: attrs==19.2.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 3)) (19.2.0)
Requirement already satisfied: bcrypt==3.1.7 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 4)) (3.1.7)
Requirement already satisfied: black==19.10b0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 5)) (19.10b0)
Requirement already satisfied: certifi==2019.9.11 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 6)) (2019.9.11)
Requirement already satisfied: cffi==1.12.3 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 7)) (1.12.3)
Requirement already satisfied: chardet==3.0.4 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 8)) (3.0.4)
Requirement already satisfied: Click==7.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 9)) (7.0)
Requirement already satisfied: cryptography==2.7 in /usr/local/lib/python3.7/site-

120 Implementing DevOps Solutions and Practices Using Cisco Platforms (DEVOPS) © 2022 Cisco Systems, Inc.
packages (from -r ./requirements.txt (line 10)) (2.7)
Requirement already satisfied: enum34==1.1.6 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 11)) (1.1.6)
Requirement already satisfied: Faker==2.0.2 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 12)) (2.0.2)
Requirement already satisfied: flasgger==0.9.3 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 13)) (0.9.3)
Requirement already satisfied: Flask==1.1.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 14)) (1.1.1)
Requirement already satisfied: Flask-Login==0.4.1 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 15)) (0.4.1)
Requirement already satisfied: flask-marshmallow==0.10.1 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 16)) (0.10.1)
Requirement already satisfied: Flask-Migrate==2.5.2 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 17)) (2.5.2)
Requirement already satisfied: Flask-Script==2.0.6 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 18)) (2.0.6)
Requirement already satisfied: Flask-SQLAlchemy==2.4.1 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 19)) (2.4.1)
Requirement already satisfied: Flask-WTF==0.14.2 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 20)) (0.14.2)
Requirement already satisfied: funcsigs==1.0.2 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 21)) (1.0.2)
Requirement already satisfied: idna==2.8 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 22)) (2.8)
Requirement already satisfied: ipaddress==1.0.22 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 23)) (1.0.22)
Requirement already satisfied: itsdangerous==1.1.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 24)) (1.1.0)
Requirement already satisfied: Jinja2==2.10.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 25)) (2.10.1)
Requirement already satisfied: jsonschema==2.6.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 26)) (2.6.0)
Requirement already satisfied: Mako==1.1.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 27)) (1.1.0)
Requirement already satisfied: MarkupSafe==1.1.1 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 28)) (1.1.1)
Requirement already satisfied: marshmallow==2.20.5 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 29)) (2.20.5)
Requirement already satisfied: marshmallow-sqlalchemy==0.18.0 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 30)) (0.18.0)
Requirement already satisfied: mistune==0.8.4 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 31)) (0.8.4)
Requirement already satisfied: pip==19.3.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 32)) (19.3.1)
Requirement already satisfied: psycopg2==2.8.4 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 33)) (2.8.4)
Requirement already satisfied: pycparser==2.19 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 34)) (2.19)
Requirement already satisfied: pyrsistent==0.15.4 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 35)) (0.15.4)
Requirement already satisfied: python-dateutil==2.8.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 36)) (2.8.0)
Requirement already satisfied: python-dotenv==0.10.3 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 37)) (0.10.3)
Requirement already satisfied: python-editor==1.0.4 in /usr/local/lib/python3.7/site-

packages (from -r ./requirements.txt (line 38)) (1.0.4)
Requirement already satisfied: PyYAML==5.1.2 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 39)) (5.1.2)
Requirement already satisfied: requests==2.22.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 40)) (2.22.0)
Requirement already satisfied: setuptools==41.4.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 41)) (41.4.0)
Requirement already satisfied: six==1.12.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 42)) (1.12.0)
Requirement already satisfied: SQLAlchemy==1.3.8 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 43)) (1.3.8)
Requirement already satisfied: SQLAlchemy-Utils==0.34.2 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 44)) (0.34.2)
Requirement already satisfied: text-unidecode==1.3 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 45)) (1.3)
Requirement already satisfied: typing==3.7.4.1 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 46)) (3.7.4.1)
Requirement already satisfied: urllib3==1.25.6 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 47)) (1.25.6)
Requirement already satisfied: Werkzeug==0.16.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 48)) (0.16.0)
Requirement already satisfied: wheel==0.33.6 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 49)) (0.33.6)
Requirement already satisfied: WTForms==2.2.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 50)) (2.2.1)
Requirement already satisfied: pathspec<1,>=0.6 in /usr/local/lib/python3.7/site-
packages (from black==19.10b0->-r ./requirements.txt (line 5)) (0.6.0)
Requirement already satisfied: typed-ast>=1.4.0 in /usr/local/lib/python3.7/site-
packages (from black==19.10b0->-r ./requirements.txt (line 5)) (1.4.0)
Requirement already satisfied: appdirs in /usr/local/lib/python3.7/site-packages (from
black==19.10b0->-r ./requirements.txt (line 5)) (1.4.3)
Requirement already satisfied: toml>=0.9.4 in /usr/local/lib/python3.7/site-packages
(from black==19.10b0->-r ./requirements.txt (line 5)) (0.10.0)
Requirement already satisfied: regex in /usr/local/lib/python3.7/site-packages (from
black==19.10b0->-r ./requirements.txt (line 5)) (2019.11.1)
Removing intermediate container 3ff5211324cb
---> 44d65ea36668
Step 9/11 : ENV ENV=DEVELOPMENT
---> Running in 742b9cb225ef
Removing intermediate container 742b9cb225ef
---> 8986d252bda3
Step 10/11 : EXPOSE 5000/tcp
---> Running in 78ce74b38938
Removing intermediate container 78ce74b38938
---> 5935d023d9df
Step 11/11 : ENTRYPOINT python run.py
---> Running in 90a5c1f228b1
Removing intermediate container 90a5c1f228b1
---> 86d5cd36cf19
Successfully built 86d5cd36cf19
Successfully tagged dev-net-inv-image:latest
student@student-vm:lab03/net_inventory (master)$

Task 2: Run the Container and View the Application
You will now run the container from the container image that you built using the Dockerfile. You will
verify that the application is running and then explore the application. The application is running as a single
instance but is developed with an architecture that is split between the back end and front end.

Activity

Run the Application Container


You will use the docker run command to run the container image that you created previously, referencing the image tag dev-net-inv-image. You will also use the -p host_port:container_port flag to map TCP port 5000 on the host to TCP port 5000 in the container.

Step 1 Run a container from the dev-net-inv-image image and expose TCP port 5000. Use the docker run -itd -
p 5000:5000 --name dev-net-inv dev-net-inv-image command.

student@student-vm:lab03/net_inventory (master)$ docker run -itd -p 5000:5000 --name dev-net-inv dev-net-inv-image
da3660bb6ec42127474c1e37425fbd2d8c9b961ad3857e8ce815bd3d99ae9da5
student@student-vm:lab03/net_inventory (master)$

Step 2 Verify that the container is running using the docker container ls command.

student@student-vm:lab03/net_inventory (master)$ docker container ls


CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
da3660bb6ec4 dev-net-inv-image "/bin/sh -c 'python …" 13 seconds ago Up
12 seconds 0.0.0.0:5000->5000/tcp dev-net-inv
student@student-vm:lab03/net_inventory (master)$

Explore the Front End of the Application

Step 3 Run the populate_inventory script and enter 127.0.0.1:5000 for the server and port information. The script
will populate the network inventory database. Use the populate_inventory command.

student@student-vm:$ populate_inventory
Enter the server and port info : 127.0.0.1:5000
nyc-rt01: Added successfully
nyc-rt02: Added successfully
rtp-rt01: Added successfully
rtp-rt02: Added successfully

Step 4 Using the Chrome browser, connect to the local TCP 5000 port. Navigate to http://127.0.0.1:5000 to view
the network inventory.

Explore the Back End of the Application
The back end of the application is a simple RESTful application programming interface (API) to manage
the database. The OpenAPI (formerly Swagger) interface lets you describe and interact with the API.

Step 5 From Chrome, navigate to http://127.0.0.1:5000/api/docs and view the OpenAPI interface methods.

Step 6 Click the POST method, expand it, and click Try it out. This action will make the JavaScript Object
Notation (JSON) body of the POST method editable.

Do not update the JSON body. There is no need to adjust the JSON body the first time you run
the POST method. However, additional executions without making the data unique will cause
data conflicts, because this API implements duplicate checks during POST requests. For a
duplicate entry, the API response is an HTTP 400 response.

Step 7 Click Execute and notice the Server response code of 201.

Step 8 Navigate back to the front end of the application at http://127.0.0.1:5000 and notice how the data has
changed.

Task 3: Publish Your Container to the GitLab Container
Registry
You will now push your new image to the GitLab Container Registry.

Activity

Retag the Image

Step 1 Retag your dev-net-inv-image image into net_inventory by issuing the docker tag dev-net-inv-image
registry.git.lab/cisco-devops/containers/net_inventory:latest command.

student@student-vm:lab03/net_inventory (master)$ docker tag dev-net-inv-image registry.git.lab/cisco-devops/containers/net_inventory:latest
student@student-vm:lab03/net_inventory (master)$

Register the Container to the GitLab Container Registry

Step 2 Publish the container to the GitLab Container Registry. Run the docker push registry.git.lab/cisco-
devops/containers/net_inventory:latest command.

student@student-vm:lab03/net_inventory (master)$ docker push registry.git.lab/cisco-devops/containers/net_inventory:latest
The push refers to repository [registry.git.lab/cisco-devops/containers/net_inventory]
1a4be2bbb77b: Pushed
2135f46209e7: Pushed
2e2a752e2550: Pushed
9ebf59de99a3: Mounted from cisco-devops/containers/python37
1f8901027234: Mounted from cisco-devops/containers/python37
581d0eb94046: Mounted from cisco-devops/containers/python37
5833990cb8e5: Layer already exists
86339b326932: Layer already exists
859394076549: Layer already exists
896510bee743: Layer already exists
67ecfc9591c8: Layer already exists
latest: digest: sha256:0ceb2ea75d8ab39be52d4c364bade384b6fb1c6f5f1f74dfb294e1cbf69d73df
size: 2635
student@student-vm:lab03/net_inventory (master)$

View the Container in GitLab


The container is now hosted in the GitLab Container Registry. You can view information about containers
from their respective GitLab repositories.

Step 3 From the Chrome browser, navigate to https://git.lab.

Step 4 Log in with the credentials that are provided in the Job Aids.

Step 5 From the list of projects, choose cisco-devops/containers. From there, you can review the code.

Step 6 In the left pane, hover over Packages and click Container Registry.

Step 7 Navigate to the net_inventory container that you created and expand it for a detailed view.

You have packaged your application and posted it to the GitLab Container Registry.

Task 4: Run the WebApp from the Registry


Now you can execute the entire application as if it were any other container. You will run the application
from the registry.
Activity

Clean Up and Pull the Image


First, you will pull down the image. This action is nearly instantaneous because of the way the Docker cache
system operates: you already have the container image layers locally.

Step 1 If the previous container that you built, named dev-net-inv, is still running, run the docker stop dev-net-inv
command to stop the container.

Step 2 Run the net-inventory image that you created earlier on the GitLab Container Registry and name it net-inv-
reg. Issue the docker run -itd -p 5000:5000 --name net-inv-reg
registry.git.lab/cisco-devops/containers/net_inventory command.

Step 3 Review the status of the new container using the docker container ls command.

student@student-vm:lab03/net_inventory (master)$ docker stop dev-net-inv
student@student-vm:lab03/net_inventory (master)$ docker run -itd -p 5000:5000 --name
net-inv-reg registry.git.lab/cisco-devops/containers/net_inventory
475b2678661f561b0660f9df8107459d97941c6940c11d8e58ee86cbd35c21bc
student@student-vm:lab03/net_inventory (master)$ docker container ls
CONTAINER ID IMAGE COMMAND
CREATED STATUS PORTS NAMES
475b2678661f registry.git.lab/cisco-devops/containers/net_inventory "/bin/sh -
c 'python …" 4 seconds ago Up 3 seconds 0.0.0.0:5000->5000/tcp net-
inv-reg
student@student-vm:lab03/net_inventory (master)

Ensure That the Application Is Running


Using Google Chrome, ensure that your application is running.

Step 4 From the Chrome browser, navigate to http://127.0.0.1:5000 and explore the network inventory.

Note Note that the data added in Task 2 is no longer present in the app. Containers are ephemeral, and the
container from Task 2 was stopped.

Summary
You reviewed the process of building a container from a Dockerfile. You learned how to manage packages,
how to change the working directory, and how to run commands. After you created the new Docker
container image, you turned it into a hosted container image on GitLab Container Registry. Finally, you
used that hosted container image as a source image to build an application directly from the GitLab
Container Registry.

Golden Images
The characteristics of a Docker golden image are similar to those of desktop build or virtual machine (VM)
golden images. The golden image should maintain all the latest security patches, and the latest approved
versions of any software that may be required. For example, there may be a particular version of the Vim
tool that must be used because it passed a security scan. That Vim version must be pinned to the container
build so that every build that is based on that container will maintain that approved version of the tool.

Image Tags
When you are building a golden image, it is recommended that you use a specific tagged version. For
example, with Ubuntu, the default tag named latest has shifted over time. Ubuntu version numbers
correspond to the year and month of the release. There are multiple images that could be used as a base if a
container was built in 2016. The latest image might be the 16.04 Xenial build. Of course, there may have
been changes in the structure when 18.04 was released. If you created an image and the version that is tied
to the latest tag is a different Ubuntu version, you would have created a different image from the same
Dockerfile.

• It is recommended that you build containers from a specific tag version.
• Python is sourced from Debian, as seen by the release name (buster) in the tag.

Ubuntu Image Tags
• ubuntu:18.04, bionic-20191029, bionic, latest
• ubuntu:19.04, disco-20191030, disco
• ubuntu:19.10, eoan-20191017, eoan, rolling
• ubuntu:20.04, focal-20191030, focal, devel
• ubuntu:16.04, xenial-20191024, xenial

Alpine Image Tags
• alpine:3.7
• alpine:3.10.3

Python Image Tags
• python:3.8-buster
• python:3.7.5-buster

To avoid such situations, specify the version number, such as ubuntu:18.04, within a Dockerfile. Any builds
that you create using that Dockerfile will use the same container base image.
You can also “pin” the versions of packages installed from package repositories (such as apt and pip
packages). Pinning verifies that the intended version of each package is installed. Without pinning, new
builds will often install the latest version that is available at build time. Such situations can lead to
unintended outcomes, especially if the new version deprecates a feature or introduces a bug.
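As an illustration, both forms of pinning can be combined in a single Dockerfile. The following is a sketch only; the apt and pip package versions shown are hypothetical placeholders, not versions used in this lab, and the example assumes pip is available in the image:

```dockerfile
# Pin the base image to a specific release tag, never "latest".
FROM ubuntu:18.04

# Pin apt packages to exact, approved versions (versions here are hypothetical).
RUN apt-get update && apt-get install -y \
    git=1:2.17.1-1ubuntu0.4 \
    vim=2:8.0.1453-1ubuntu1.1

# Pin Python packages with "==", typically listed in a requirements.txt file.
RUN pip install Flask==1.1.1 requests==2.22.0
```

Rebuilding from this Dockerfile months later produces the same base image and the same package versions, which is the property a golden image depends on.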

Golden Images
A golden image should be an image that is tested and verified to be fully operational. An image without a
test and verification cannot be trusted and is not a candidate for a golden image. The best approach for your
organization is to define a process (or even an internal organization) with methods for testing and verifying
the images before approving them as golden images.

• Trusted starting point
– Tested
– Approved
• Available at a trusted registry

Common base images include:
• Alpine Linux
• Debian Linux
• Ubuntu Linux
• CentOS Linux

Places to host a base image:
• Docker Hub
• Docker Trusted Registry
• Private Registries

Another important element of golden or trusted images is to store (and access) the source of these images on
a trusted registry. The common images on Docker Hub are examples of images that are regularly checked
for vulnerabilities. These images should constantly be monitored and updated because there could be
vulnerabilities in them that were not identified earlier.
Your organization may choose to host its own registries and have its own base images.
Golden images are generally sourced from one of several locations:
• Docker Hub
• Docker Trusted Registry (commercially supported)
• Private Docker registries such as the following:
– Docker image
– GitLab
– Privately created registry

Uploading Golden Images


To log in to a registry, use the docker login <URL> command. You will be prompted for a username and
password to log in.
docker login <URL>
docker build -t db -f Dockerfile_db .
docker push <registry_url>/<image_name>:<tag>

# docker login git.lab
Username: student
Password:
WARNING! Your password will be stored unencrypted in
/home/student/.docker/config.json.
< ... output omitted ... >

# docker push registry.git.lab/cisco-devops/containers/python37:latest
The push refers to repository [registry.git.lab/cisco-devops/containers/python37]
9b0968105ee3: Pushed
1f8901027234: Pushed
581d0eb94046: Pushed
5833990cb8e5: Pushed
86339b326932: Pushed
859394076549: Pushed
896510bee743: Pushed
67ecfc9591c8: Pushed
latest: digest:
sha256:d25506ce75aa4b219831ae2f8c642b75a51d575b8db7729319382f0c96b70f08 size: 2008

After logging in, you can push images to the registry with the docker push command. The format of the
command is docker push <registry_url>/<image_name>:<tag>. The registry URL in this example is
registry.git.lab/cisco-devops/containers. This URL will take you to the proper directory on the GitLab
server. In the example, the image name python37 is used, and the tag latest is used (after the “:” ).
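The three parts of an image reference can also be pulled apart mechanically. As a small illustration, this shell sketch splits the reference from the example above using standard parameter expansion (the variable names are arbitrary):

```shell
# Full image reference from the push example above.
ref="registry.git.lab/cisco-devops/containers/python37:latest"

tag="${ref##*:}"        # text after the last ":"  -> the tag
path="${ref%:*}"        # everything before the tag
registry="${path%%/*}"  # first path component     -> the registry URL
image="${path#*/}"      # remaining path           -> the image name

echo "$registry"   # registry.git.lab
echo "$image"      # cisco-devops/containers/python37
echo "$tag"        # latest
```

Omitting the tag portion entirely is equivalent to specifying :latest, which is exactly why pinning an explicit tag matters for golden images.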
1. Which option is a public registry that Docker hosts, where the main public images of many Linux
distributions are maintained?
a. Docker Trusted Registry
b. Docker Center
c. Docker Secret
d. Docker Hub

Safe Processing Practices
While you are working with any data or system that is involved in data processing, safe processing practices
should always be used. Working with Docker images is no different. In fact, you need to be even more
careful. The Docker daemon requires root access to the host server, so only trusted users should be allowed
to operate on the Docker host. Docker has some powerful features, including sharing files between Docker
hosts and Docker containers.

• Do not put sensitive information into a Dockerfile.
• Use environment files for passing sensitive information into a Docker container.

# cat env_file_frontend
ENV=FRONTEND
URL=http://netinv_backend:5001

Use of env-files is a great way to put data into a container without having to put the data into source control.
On the host server, an env-file is created and then referenced when starting a container. The data in the file
will be available as environment variables in the container.

• Run with the --env-file option within the docker run command.

# docker run -itd --env-file=env_file_db --name netinv_db db
c2ac3073815b12d2e561dc5b2160eda9235f941a2d27e9a29be98f0160181c5a

# docker run --env-file=env_file_backend -itd -p 5001:5001 --name netinv_backend backend
8906dec88d8f741f2b44823daf586a2684cf00f85a09d6dff28438b0d233d40f

# docker run --env-file=env_file_frontend -itd -p 5000:5000 --name netinv_frontend frontend
d0d618f542a3eba8e1a90898cccb7685a9eeda92bb467af785138ca630eea573

The examples show docker run command executions in a detached state using the --env-file option. The
environment variables that are defined in each file are added to the environment of the corresponding
container. Once in the container, these environment variables can be accessed by applications that run
within the container.
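To see what the container receives, you can reproduce the env-file handling in plain shell. This is a sketch only; the file contents match the env_file_frontend example above, and Docker itself parses each KEY=VALUE line rather than sourcing the file as a script:

```shell
# Recreate the env-file shown earlier.
cat > env_file_frontend <<'EOF'
ENV=FRONTEND
URL=http://netinv_backend:5001
EOF

# A rough shell equivalent of what --env-file provides to the container:
# export each KEY=VALUE pair, then read the variables back.
set -a                     # auto-export every assignment that follows
. ./env_file_frontend
set +a

echo "$ENV"   # FRONTEND
echo "$URL"   # http://netinv_backend:5001
```

Because the file lives only on the Docker host, the credentials it carries never need to be committed to source control alongside the Dockerfile.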
1. Which practice is a best practice for safe handling of data within Docker containers?
a. Use env-files to keep sensitive information out of source control and within the Docker container
itself.
b. Use env-vault to distribute information to containers.
c. If no sensitive information care has been taken, use of this information is forbidden.
d. Run a dedicated server to gather data.

Discovery 4: Build and Deploy Multiple
Containers to Create a Three-Tier Application
Introduction
Modern applications, especially web applications, are often deployed in a distributed architecture. This
architecture means that there are different sets of services (and microservices) for each of the main tiers of
the application, for example, web (front end), application (back-end API), and database and storage tiers, to
name a few. This architecture allows for the application to scale horizontally at each of those tiers without
impacting any other tier.
You will learn how to deploy a three-tier application using Docker as three separate containers. In doing
that, you will learn how to store local variables that are required for services to function as environment
variables.

Topology

Job Aids
Device Information

Device Description FQDN/IP Address Credentials

Student Workstation Linux Ubuntu VM 192.168.10.10 student, 1234QWer

GitLab Git Repository git.lab student, 1234QWer

GitLab Container Registry Container Registry registry.git.lab student, 1234QWer

Command List
The table describes the commands that are used in this activity. The commands are listed in alphabetical
order so that you can easily locate the information that you need. Refer to this list if you need configuration
command assistance during the lab activity.

Command Description

cd directory name To change directories within the Linux file system, use the cd
command. You will use this command to enter into a directory where
the scripts are housed. You can use tab completion to finish the name
of the directory after you start typing it.

cat file The most common use of the cat Linux command is to read the
contents of files. It is the most convenient command for this purpose in
a Unix-like operating system.

docker build -t name:tag -f filename path This command builds a Docker image. The -t flag will name and tag
the image as you specify. The -f flag is used when you are not using
the standard filename of Dockerfile. The path defines the context for
the Docker daemon; normally, it is the “.” that is specified.

docker container ls -a This command allows you to view the containers that are configured
on the host system. The -a flag will indicate to show containers that
are not up as well.

docker container rm -f container_name This command removes a container, and optionally forces its
removal even if it is operational.

docker network create -d type name The docker network create command is used to create networks of
different types, such as a bridge.

docker run -itd -p port --env-file=filename The command to run, or obtain from a container registry and run, a
--network net_name --name container container. The -i flag is for interactive, and the -t flag creates a
container_registry/gitlab_organization/ pseudo-TTY to the container. The -d flag runs the container in the
gitlab_project/container:tag command detached state. The command is any command that is valid on the
container. The --name flag names the container as you intend, rather
than randomly generating a name for you. The -p flag is for port; it
can be in either host_port:container_port format, or port format. The
--env-file flag signifies the environment file. The --network flag allows
you to add containers to networks.

docker version This command allows you to view the Docker status and the Docker
version that is currently running.

git clone repository This command downloads or clones a Git repository into the directory
that has the name of the project in the repository definition.

ls path This command lists the contents of a directory so that you can see its
files and folders.

Dockerfile Instructions
Many instructions are available for Dockerfile to create a Docker container image. In this activity, you will
use the FROM, LABEL, WORKDIR, RUN, ENV, EXPOSE, and ENTRYPOINT instructions.

Keyword Description

FROM With this instruction, you define the parent image from which you are
building a container. A Dockerfile must begin with the FROM
instruction.

LABEL This instruction creates metadata for an image. A LABEL is a key-value
pair. To include spaces within a LABEL value, use quotation
marks and backslashes as you would in command-line parsing.

WORKDIR This instruction sets the working directory for any command that is run
within the container.

RUN This instruction executes any commands in a new layer on top of
the current image and commits the results.

ENV This instruction sets an environment variable in the container.

EXPOSE This instruction informs Docker that the container listens on the
specified network ports at run time. The EXPOSE instruction does not
actually publish the port. It functions as a type of documentation
between the person who builds the image and the person who runs the
container that concerns the ports that are intended to be published.

ENTRYPOINT This instruction sets the main command of the image, and allows that
image to be run as though it was that command.
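Taken together, these instructions form the skeleton of a typical Dockerfile, similar to the ones used in this lab. The following sketch is illustrative only; the label value and file layout are placeholders, and the ADD instruction (which copies files into the image) appears in the lab's Dockerfiles although it is not listed in the table above:

```dockerfile
FROM python:3.7                      # parent image; a Dockerfile must begin with FROM
LABEL maintainer="Cisco <[email protected]>"   # metadata as a key-value pair
ADD ./ /net_inventory                # copy the application code into the image
WORKDIR /net_inventory               # working directory for the commands below
RUN pip install -r requirements.txt  # executes in a new layer and commits the result
ENV ENV=DEVELOPMENT                  # environment variable available in the container
EXPOSE 5000/tcp                      # documents the port the app listens on
ENTRYPOINT ["python", "run.py"]      # main command run when the container starts
```

Note the ordering: instructions that change rarely (FROM, LABEL) come first so that Docker's layer cache can be reused across rebuilds.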

Task 1: Create a Database and Back-End and Front-End Container Images
You will create a Dockerfile for each of the three components of the application: the database tier, the
back-end tier, and the front-end tier.

Activity

Obtain the Application Code


The application code is on the GitLab server, but to run the application, you must copy the code to a local
directory.

Step 1 In the Student Workstation, find and open Visual Studio Code by clicking the Visual Studio Code icon.

Step 2 From the Visual Studio Code top navigation bar, choose Terminal > New Terminal [Ctrl-Shift-`].

Step 3 Navigate to the terminal section at the bottom of the Visual Studio Code.

Step 4 In the Visual Studio Code terminal, change the directory to ~/labs/lab04 using the cd ~/labs/lab04 command.

Step 5 In the Visual Studio Code terminal, run the git clone https://git.lab/cisco-devops/net_inventory command.

Step 6 Change the directory to net_inventory by running the cd net_inventory command.

Step 7 Use the ls command to verify that you have successfully downloaded the files and folders.

student@student-vm:$ cd ~/labs/lab04/
student@student-vm:labs/lab04$ git clone https://git.lab/cisco-devops/net_inventory
Cloning into 'net_inventory'...
warning: redirecting to https://git.lab/cisco-devops/net_inventory.git/
remote: Enumerating objects: 416, done.
remote: Counting objects: 100% (416/416), done.
remote: Compressing objects: 100% (112/112), done.
remote: Total 416 (delta 292), reused 416 (delta 292)
Receiving objects: 100% (416/416), 3.10 MiB | 14.75 MiB/s, done.
Resolving deltas: 100% (292/292), done.
student@student-vm:labs/lab04$ cd net_inventory/
student@student-vm:lab04/net_inventory (master)$ ls
app.py database.db docker-compose.yml Dockerfile Makefile migrations
net_inventory net-inventory-config.yml postgres-data pyproject.toml
requirements.txt run.py setup.py static tests
student@student-vm:lab04/net_inventory (master)$

Create a Dockerfile for the Database


The filename Dockerfile is a standard Docker construct. Because you will be building a three-tier
application, you will need to build three Docker containers from Dockerfiles. For this reason, you will not
use the standard Dockerfile filename, but the filenames Dockerfile_db, Dockerfile_backend, and
Dockerfile_frontend. When building the containers, you will use the -f filename flag to specify Dockerfiles.

Before creating the Dockerfiles, you must open the working directory lab04/net_inventory in the Visual
Studio Code editor.

You will first build the Dockerfile_db Dockerfile. This Dockerfile will be used to build the database
container.

The postgres container requires several options to be set upon creation. These options are often set by
supplying environment variables. In this activity, you will set the following environment variables:
POSTGRES_DB, POSTGRES_USER, POSTGRES_PASSWORD, and PGDATA. The PGDATA variable
will be used to set the actual directory where the data is stored.

Step 8 From the Visual Studio Code top navigation bar, choose File > Open Folder… [Ctrl-K, Ctrl-O].

Step 9 From the Open Folder browser, choose student > labs > lab04 > net_inventory and click OK in the top-
right corner. Now you should see the NET_INVENTORY folder in the left pane.

Step 10 The folder NET_INVENTORY is located in the left-most EXPLORER pane. Hover over
NET_INVENTORY and click the first New File icon to create a file.

Step 11 The cursor will move to the new file. Set the filename to Dockerfile_db.

140 Implementing DevOps Solutions and Practices Using Cisco Platforms (DEVOPS) © 2022 Cisco Systems, Inc.
Instead of creating a new Dockerfile, you can copy an existing Dockerfile template that is stored in the
/home/student/mgmt folder. Use the cp /home/student/mgmt/Dockerfile Dockerfile_db command and then
open Dockerfile_db for editing.

Step 12 Set the base image to postgres using the latest tag. Use the FROM
registry.git.lab/cisco-devops/containers/postgres:latest instruction.

Step 13 Provide metadata with the following key-value pairs:

• LABEL description="This is a postgres db for net inventory Flask app"
• LABEL maintainer="Cisco <[email protected]>"
• LABEL version="0.1"

Step 14 Set the environment variables with the following instructions:

• ENV POSTGRES_DB=net_inventory
• ENV POSTGRES_USER=root
• ENV POSTGRES_PASSWORD=Cisco123
• ENV PGDATA=/var/lib/postgresql/data/pgdata

Step 15 Expose the database port, TCP port 5432. Use the EXPOSE 5432/tcp instruction.

student@student-vm:lab04/net_inventory (master)$ cat Dockerfile_db
FROM registry.git.lab/cisco-devops/containers/postgres:latest

# metadata
LABEL description="This is a postgres db for net inventory Flask app"
LABEL maintainer="Cisco <[email protected]>"
LABEL version="0.1"

ENV POSTGRES_DB=net_inventory
ENV POSTGRES_USER=root
ENV POSTGRES_PASSWORD=Cisco123
ENV PGDATA=/var/lib/postgresql/data/pgdata

# doesn't actually do anything, just documentation purposes. forward your port at runtime
EXPOSE 5432/tcp

student@student-vm:lab04/net_inventory (master)$

Create a Dockerfile for the Back End
The back-end application will be served from port 5001. To connect to the database, it relies on Docker's
ability to make containers reachable by name: on a shared network, a container can be referred to by other
containers using its container name. In this container, you will therefore set the database connection string
to point to the named database container.
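The connection string you will set in Step 22 can be read as a composition of user, password, container name, and database name. The sketch below rebuilds it from its parts; the variable names are illustrative, but the values mirror the lab:

```shell
# Sketch of how the database URI used in Dockerfile_backend is composed.
# The host portion is the database container's *name* (netinv_db); Docker
# resolves it to an IP address on the shared bridge network.
DB_USER=root
DB_PASSWORD=Cisco123
DB_HOST=netinv_db        # container name, not an IP address
DB_NAME=net_inventory

SQLALCHEMY_DATABASE_URI="postgresql+psycopg2://${DB_USER}:${DB_PASSWORD}@${DB_HOST}/${DB_NAME}"
echo "${SQLALCHEMY_DATABASE_URI}"
```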

Step 16 In the Visual Studio Code, create a new file and set the filename to Dockerfile_backend.

Step 17 Set the base image to python37 using the latest tag. Use the FROM
registry.git.lab/cisco-devops/containers/python37:latest instruction.

Step 18 Provide metadata with the following key-value pairs:

• LABEL description="This is a net inventory backend flask application"
• LABEL maintainer="Cisco <[email protected]>"
• LABEL version="0.1"

Step 19 The contents should be added to the /net_inventory folder. Use the ADD ./ /net_inventory instruction.

Step 20 Set the working directory to the /net_inventory folder using the WORKDIR /net_inventory/ instruction.

Step 21 During the build, install the git and vim system packages and the Python requirements. Use the RUN apt install -y git vim
and RUN pip install -r ./requirements.txt instructions.

Step 22 Set the environment variables with the following instructions:

• ENV ENV=BACKEND
• ENV SECRET_KEY=aj8j6PIbaJpJXBS8jjvcylT84G+1UhxjXQFz6pJPNuQ=
• ENV POSTGRES_DB=net_inventory
• ENV SQLALCHEMY_DATABASE_URI='postgresql+psycopg2://root:Cisco123@netinv_db/net_inventory'

Note The ENV variable works together with the YAML configuration file net-inventory-config.yml to indicate that
only the back-end code of the inventory application should run.
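The same image can therefore play different roles depending on this variable. The sketch below is purely illustrative of how a launcher such as run.py might branch on ENV; the real selection logic lives in the lab's application code and net-inventory-config.yml:

```shell
# Illustrative only: branching on the ENV variable to pick a role.
# The role strings are invented for this sketch.
ENV=BACKEND
case "${ENV}" in
  BACKEND)  role="serve the back-end API on port 5001" ;;
  FRONTEND) role="serve the front-end UI on port 5000" ;;
  *)        role="unknown role" ;;
esac
echo "${role}"
```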

Step 23 Expose the back-end port, TCP port 5001. Use the EXPOSE 5001/tcp instruction.

Step 24 On container startup, the run.py script should be executed. Use the ENTRYPOINT python run.py
instruction.

student@student-vm:lab04/net_inventory (master)$ cat Dockerfile_backend
FROM registry.git.lab/cisco-devops/containers/python37:latest

# metadata
LABEL description="This is a net inventory backend flask application"
LABEL maintainer="Cisco <[email protected]>"
LABEL version="0.1"

# copy files over to container
ADD ./ /net_inventory

# sets the working directory
WORKDIR /net_inventory/

# install system packages
RUN apt install -y git vim

# install python packages
RUN pip install -r ./requirements.txt

# Add Environment Variables
ENV ENV=BACKEND
ENV SECRET_KEY=aj8j6PIbaJpJXBS8jjvcylT84G+1UhxjXQFz6pJPNuQ=
ENV POSTGRES_DB=net_inventory
ENV SQLALCHEMY_DATABASE_URI='postgresql+psycopg2://root:Cisco123@netinv_db/net_inventory'

# doesn't actually do anything, just documentation purposes. forward your port at runtime
EXPOSE 5001/tcp

# start the application
ENTRYPOINT python run.py
student@student-vm:lab04/net_inventory (master)$

Create a Dockerfile for the Front End
The front-end application will run on port 5000. It locates the back end through a URL that can be set either
in the net-inventory-config.yml file or through the URL environment variable.
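A common pattern for such a setting is an environment variable with a fallback default. The sketch below is illustrative only; the actual precedence between net-inventory-config.yml and the URL variable is implemented in the lab's application code. The default value mirrors what you will set in Dockerfile_frontend:

```shell
# Illustrative: environment variable with a default fallback.
unset URL                                   # simulate no value supplied
URL="${URL:-http://netinv_backend:5001}"    # default mirrors Dockerfile_frontend
echo "${URL}"
```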

Step 25 In the Visual Studio Code, create a new file and set the filename to Dockerfile_frontend.

Step 26 Set the base image to python37 using the latest tag. Use the FROM
registry.git.lab/cisco-devops/containers/python37:latest instruction.

Step 27 Provide metadata with the following key-value pairs:

• LABEL description="This is a net inventory frontend flask application"
• LABEL maintainer="Cisco <[email protected]>"
• LABEL version="0.1"

Step 28 The contents should be added to the /net_inventory folder. Use the ADD ./ /net_inventory instruction.

Step 29 Set the working directory to the /net_inventory folder using the WORKDIR /net_inventory/ instruction.

Step 30 During the build, install the git and vim system packages and the Python requirements. Use the RUN apt install -y git vim
and RUN pip install -r ./requirements.txt instructions.

Step 31 Set the environment variables with the following instructions:

• ENV ENV=FRONTEND
• ENV URL=http://netinv_backend:5001

Step 32 Expose the front-end port, TCP port 5000. Use the EXPOSE 5000/tcp instruction.

Step 33 On container startup, the run.py script should be executed. Use the ENTRYPOINT python run.py
instruction.

student@student-vm:lab04/net_inventory (master)$ cat Dockerfile_frontend
FROM registry.git.lab/cisco-devops/containers/python37:latest

# metadata
LABEL description="This is a net inventory frontend flask application"
LABEL maintainer="Cisco <[email protected]>"
LABEL version="0.1"

# copy files over to container
ADD ./ /net_inventory

# sets the working directory
WORKDIR /net_inventory/

# install system packages
RUN apt install -y git vim

# install python packages
RUN pip install -r ./requirements.txt

# Add Environment Variables
ENV ENV=FRONTEND
ENV URL=http://netinv_backend:5001

# doesn't actually do anything, just documentation purposes. forward your port at runtime
EXPOSE 5000/tcp

# start the application
ENTRYPOINT python run.py
student@student-vm:lab04/net_inventory (master)$

Build the Three Docker Containers
You will use the docker build command to build the three container images from the three Dockerfiles.
Dockerfile is the default name for the text document that Docker uses. Because you are using three different
filenames, you must use the -f filename flag to specify which Dockerfile to use. When building an image,
you must tag the image with the -t tag flag to assign a name to the built image in the local registry.
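The pattern behind the next three steps can be sketched with a small helper that only prints the command it would run (the helper name is hypothetical, not part of the lab):

```shell
# Dry-run sketch of the build pattern used in Steps 34-36:
#   -t names the resulting image, -f selects the Dockerfile,
#   and the trailing dot sets the build context to the current directory.
build_image() {
  echo "docker build -t $1 -f $2 ."
}
build_image db Dockerfile_db
build_image backend Dockerfile_backend
build_image frontend Dockerfile_frontend
```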

Step 34 First, build the database application image from the Dockerfile_db Dockerfile. In the terminal window, run
the docker build -t db -f Dockerfile_db . command.

Note Make sure that you do not forget to add the period (.) at the end of the command.

student@student-vm:lab04/net_inventory (master)$ docker build -t db -f Dockerfile_db .
Sending build context to Docker daemon 56.43MB
Step 1/9 : FROM registry.git.lab/cisco-devops/containers/postgres:latest
---> 3eda284d1840
Step 2/9 : LABEL description="This is a postgres db for net inventory Flask app"
---> Using cache
---> 183b8e45019a
Step 3/9 : LABEL maintainer="Cisco <[email protected]>"
---> Using cache
---> f128461b0142
Step 4/9 : LABEL version="0.1"
---> Using cache
---> 7b434c55c1a5
Step 5/9 : ENV POSTGRES_DB=net_inventory
---> Using cache
---> e2f9c6ac413b
Step 6/9 : ENV POSTGRES_USER=root
---> Using cache
---> 3a6d6b53d417
Step 7/9 : ENV POSTGRES_PASSWORD=Cisco123
---> Using cache
---> d3615f99af0a
Step 8/9 : ENV PGDATA=/var/lib/postgresql/data/pgdata
---> Using cache
---> b3a0b3b27a04
Step 9/9 : EXPOSE 5432/tcp
---> Using cache
---> 5b26d48b67a5
Successfully built 5b26d48b67a5
Successfully tagged db:latest
student@student-vm:lab04/net_inventory (master)$

Step 35 Now, build the back-end application image from the Dockerfile_backend Dockerfile. In the terminal
window, run the docker build -t backend -f Dockerfile_backend . command.

Note Make sure that you do not forget to add the period (.) at the end of the command.

student@student-vm:lab04/net_inventory (master)$ docker build -t backend -f
Dockerfile_backend .
Sending build context to Docker daemon 56.43MB
Step 1/13 : FROM registry.git.lab/cisco-devops/containers/python37:latest
---> dd4eec63855e
Step 2/13 : LABEL description="This is a net inventory backend flask application"
---> Running in c2951a418b15
Removing intermediate container c2951a418b15
---> 73ab606b73c9
Step 3/13 : LABEL maintainer="Cisco <[email protected]>"
---> Running in 453dc6515fb8
Removing intermediate container 453dc6515fb8
---> aab4b5ecc95d
Step 4/13 : LABEL version="0.1"
---> Running in f676337d7380
Removing intermediate container f676337d7380
---> 193af7003ee8
Step 5/13 : ADD ./ /net_inventory
---> 142c3bdaa189
Step 6/13 : WORKDIR /net_inventory/
---> Running in 850c018b585d
Removing intermediate container 850c018b585d
---> 5d72a66325e0
Step 7/13 : RUN apt install -y git vim
---> Running in d58412ff24e7

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

Reading package lists...


Building dependency tree...
Reading state information...
git is already the newest version (1:2.11.0-3+deb9u4).
vim is already the newest version (2:8.0.0197-4+deb9u3).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Removing intermediate container d58412ff24e7
---> 77e3c2645be5
Step 8/13 : RUN pip install -r ./requirements.txt
---> Running in a44ed583f1d5
Requirement already satisfied: alembic==1.2.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 1)) (1.2.1)
Requirement already satisfied: asn1crypto==0.24.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 2)) (0.24.0)
Requirement already satisfied: attrs==19.2.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 3)) (19.2.0)
Requirement already satisfied: bcrypt==3.1.7 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 4)) (3.1.7)
Requirement already satisfied: black==19.10b0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 5)) (19.10b0)
Requirement already satisfied: certifi==2019.9.11 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 6)) (2019.9.11)
Requirement already satisfied: cffi==1.12.3 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 7)) (1.12.3)
Requirement already satisfied: chardet==3.0.4 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 8)) (3.0.4)
Requirement already satisfied: Click==7.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 9)) (7.0)

Requirement already satisfied: cryptography==2.7 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 10)) (2.7)
Requirement already satisfied: enum34==1.1.6 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 11)) (1.1.6)
Requirement already satisfied: Faker==2.0.2 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 12)) (2.0.2)
Requirement already satisfied: flasgger==0.9.3 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 13)) (0.9.3)
Requirement already satisfied: Flask==1.1.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 14)) (1.1.1)
Requirement already satisfied: Flask-Login==0.4.1 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 15)) (0.4.1)
Requirement already satisfied: flask-marshmallow==0.10.1 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 16)) (0.10.1)
Requirement already satisfied: Flask-Migrate==2.5.2 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 17)) (2.5.2)
Requirement already satisfied: Flask-Script==2.0.6 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 18)) (2.0.6)
Requirement already satisfied: Flask-SQLAlchemy==2.4.1 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 19)) (2.4.1)
Requirement already satisfied: Flask-WTF==0.14.2 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 20)) (0.14.2)
Requirement already satisfied: funcsigs==1.0.2 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 21)) (1.0.2)
Requirement already satisfied: idna==2.8 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 22)) (2.8)
Requirement already satisfied: ipaddress==1.0.22 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 23)) (1.0.22)
Requirement already satisfied: itsdangerous==1.1.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 24)) (1.1.0)
Requirement already satisfied: Jinja2==2.10.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 25)) (2.10.1)
Requirement already satisfied: jsonschema==2.6.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 26)) (2.6.0)
Requirement already satisfied: Mako==1.1.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 27)) (1.1.0)
Requirement already satisfied: MarkupSafe==1.1.1 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 28)) (1.1.1)
Requirement already satisfied: marshmallow==2.20.5 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 29)) (2.20.5)
Requirement already satisfied: marshmallow-sqlalchemy==0.18.0 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 30)) (0.18.0)
Requirement already satisfied: mistune==0.8.4 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 31)) (0.8.4)
Requirement already satisfied: pip==19.3.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 32)) (19.3.1)
Requirement already satisfied: psycopg2==2.8.4 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 33)) (2.8.4)
Requirement already satisfied: pycparser==2.19 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 34)) (2.19)
Requirement already satisfied: pyrsistent==0.15.4 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 35)) (0.15.4)
Requirement already satisfied: python-dateutil==2.8.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 36)) (2.8.0)
Requirement already satisfied: python-dotenv==0.10.3 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 37)) (0.10.3)

Requirement already satisfied: python-editor==1.0.4 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 38)) (1.0.4)
Requirement already satisfied: PyYAML==5.1.2 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 39)) (5.1.2)
Requirement already satisfied: requests==2.22.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 40)) (2.22.0)
Requirement already satisfied: setuptools==41.4.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 41)) (41.4.0)
Requirement already satisfied: six==1.12.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 42)) (1.12.0)
Requirement already satisfied: SQLAlchemy==1.3.8 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 43)) (1.3.8)
Requirement already satisfied: SQLAlchemy-Utils==0.34.2 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 44)) (0.34.2)
Requirement already satisfied: text-unidecode==1.3 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 45)) (1.3)
Requirement already satisfied: typing==3.7.4.1 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 46)) (3.7.4.1)
Requirement already satisfied: urllib3==1.25.6 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 47)) (1.25.6)
Requirement already satisfied: Werkzeug==0.16.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 48)) (0.16.0)
Requirement already satisfied: wheel==0.33.6 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 49)) (0.33.6)
Requirement already satisfied: WTForms==2.2.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 50)) (2.2.1)
Requirement already satisfied: appdirs in /usr/local/lib/python3.7/site-packages (from
black==19.10b0->-r ./requirements.txt (line 5)) (1.4.3)
Requirement already satisfied: typed-ast>=1.4.0 in /usr/local/lib/python3.7/site-
packages (from black==19.10b0->-r ./requirements.txt (line 5)) (1.4.0)
Requirement already satisfied: pathspec<1,>=0.6 in /usr/local/lib/python3.7/site-
packages (from black==19.10b0->-r ./requirements.txt (line 5)) (0.6.0)
Requirement already satisfied: regex in /usr/local/lib/python3.7/site-packages (from
black==19.10b0->-r ./requirements.txt (line 5)) (2019.11.1)
Requirement already satisfied: toml>=0.9.4 in /usr/local/lib/python3.7/site-packages
(from black==19.10b0->-r ./requirements.txt (line 5)) (0.10.0)
Removing intermediate container a44ed583f1d5
---> 9f856784ed5d
Step 9/13 : ENV ENV=BACKEND
---> Running in 8955ef9be627
Removing intermediate container 8955ef9be627
---> 48589122bf17
Step 10/13 : ENV SECRET_KEY=aj8j6PIbaJpJXBS8jjvcylT84G+1UhxjXQFz6pJPNuQ=
---> Running in 34a5138d65f3
Removing intermediate container 34a5138d65f3
---> 440f59784bee
Step 11/13 : ENV
SQLALCHEMY_DATABASE_URI='postgresql+psycopg2://root:Cisco123@netinv_db/net_inventory'
---> Running in 56d9e9fd4435
Removing intermediate container 56d9e9fd4435
---> 90d6c277171b
Step 12/13 : EXPOSE 5001/tcp
---> Running in e8e1a9b5a451
Removing intermediate container e8e1a9b5a451
---> 2b3193434ba6
Step 13/13 : ENTRYPOINT python run.py

---> Running in 759332a51db1
Removing intermediate container 759332a51db1
---> 35bd2e0b4db3
Successfully built 35bd2e0b4db3
Successfully tagged backend:latest
student@student-vm:lab04/net_inventory (master)$

Step 36 Finally, build the front-end application image from the Dockerfile_frontend Dockerfile. In the terminal
window, run the docker build -t frontend -f Dockerfile_frontend . command.

Note Make sure that you do not forget to add the period (.) at the end of the command.

student@student-vm:lab04/net_inventory (master)$ docker build -t frontend -f
Dockerfile_frontend .
Sending build context to Docker daemon 56.43MB
Step 1/11 : FROM registry.git.lab/cisco-devops/containers/python37:latest
---> dd4eec63855e
Step 2/11 : LABEL description="This is a net inventory frontend flask application"
---> Running in 349968cff6d7
Removing intermediate container 349968cff6d7
---> 65011cc7bfb3
Step 3/11 : LABEL maintainer="Cisco <[email protected]>"
---> Running in 4d51da31cd06
Removing intermediate container 4d51da31cd06
---> 65d0376e8f55
Step 4/11 : LABEL version="0.1"
---> Running in 5d18d282c75c
Removing intermediate container 5d18d282c75c
---> c03212a5e5d3
Step 5/11 : ADD ./ /net_inventory
---> 6856dd81e177
Step 6/11 : WORKDIR /net_inventory/
---> Running in b739df2fd79d
Removing intermediate container b739df2fd79d
---> aefeca6da373
Step 7/11 : RUN pip install -r ./requirements.txt
---> Running in 32f878c77016
Requirement already satisfied: alembic==1.2.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 1)) (1.2.1)
Requirement already satisfied: asn1crypto==0.24.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 2)) (0.24.0)
Requirement already satisfied: attrs==19.2.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 3)) (19.2.0)
Requirement already satisfied: bcrypt==3.1.7 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 4)) (3.1.7)
Requirement already satisfied: black==19.10b0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 5)) (19.10b0)
Requirement already satisfied: certifi==2019.9.11 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 6)) (2019.9.11)
Requirement already satisfied: cffi==1.12.3 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 7)) (1.12.3)
Requirement already satisfied: chardet==3.0.4 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 8)) (3.0.4)
Requirement already satisfied: Click==7.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 9)) (7.0)
Requirement already satisfied: cryptography==2.7 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 10)) (2.7)
Requirement already satisfied: enum34==1.1.6 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 11)) (1.1.6)
Requirement already satisfied: Faker==2.0.2 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 12)) (2.0.2)
Requirement already satisfied: flasgger==0.9.3 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 13)) (0.9.3)
Requirement already satisfied: Flask==1.1.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 14)) (1.1.1)
Requirement already satisfied: Flask-Login==0.4.1 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 15)) (0.4.1)
Requirement already satisfied: flask-marshmallow==0.10.1 in

/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 16)) (0.10.1)
Requirement already satisfied: Flask-Migrate==2.5.2 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 17)) (2.5.2)
Requirement already satisfied: Flask-Script==2.0.6 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 18)) (2.0.6)
Requirement already satisfied: Flask-SQLAlchemy==2.4.1 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 19)) (2.4.1)
Requirement already satisfied: Flask-WTF==0.14.2 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 20)) (0.14.2)
Requirement already satisfied: funcsigs==1.0.2 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 21)) (1.0.2)
Requirement already satisfied: idna==2.8 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 22)) (2.8)
Requirement already satisfied: ipaddress==1.0.22 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 23)) (1.0.22)
Requirement already satisfied: itsdangerous==1.1.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 24)) (1.1.0)
Requirement already satisfied: Jinja2==2.10.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 25)) (2.10.1)
Requirement already satisfied: jsonschema==2.6.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 26)) (2.6.0)
Requirement already satisfied: Mako==1.1.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 27)) (1.1.0)
Requirement already satisfied: MarkupSafe==1.1.1 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 28)) (1.1.1)
Requirement already satisfied: marshmallow==2.20.5 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 29)) (2.20.5)
Requirement already satisfied: marshmallow-sqlalchemy==0.18.0 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 30)) (0.18.0)
Requirement already satisfied: mistune==0.8.4 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 31)) (0.8.4)
Requirement already satisfied: pip==19.3.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 32)) (19.3.1)
Requirement already satisfied: psycopg2==2.8.4 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 33)) (2.8.4)
Requirement already satisfied: pycparser==2.19 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 34)) (2.19)
Requirement already satisfied: pyrsistent==0.15.4 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 35)) (0.15.4)
Requirement already satisfied: python-dateutil==2.8.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 36)) (2.8.0)
Requirement already satisfied: python-dotenv==0.10.3 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 37)) (0.10.3)
Requirement already satisfied: python-editor==1.0.4 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 38)) (1.0.4)
Requirement already satisfied: PyYAML==5.1.2 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 39)) (5.1.2)
Requirement already satisfied: requests==2.22.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 40)) (2.22.0)
Requirement already satisfied: setuptools==41.4.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 41)) (41.4.0)
Requirement already satisfied: six==1.12.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 42)) (1.12.0)
Requirement already satisfied: SQLAlchemy==1.3.8 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 43)) (1.3.8)
Requirement already satisfied: SQLAlchemy-Utils==0.34.2 in

/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 44)) (0.34.2)
Requirement already satisfied: text-unidecode==1.3 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 45)) (1.3)
Requirement already satisfied: typing==3.7.4.1 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 46)) (3.7.4.1)
Requirement already satisfied: urllib3==1.25.6 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 47)) (1.25.6)
Requirement already satisfied: Werkzeug==0.16.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 48)) (0.16.0)
Requirement already satisfied: wheel==0.33.6 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 49)) (0.33.6)
Requirement already satisfied: WTForms==2.2.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 50)) (2.2.1)
Requirement already satisfied: toml>=0.9.4 in /usr/local/lib/python3.7/site-packages
(from black==19.10b0->-r ./requirements.txt (line 5)) (0.10.0)
Requirement already satisfied: typed-ast>=1.4.0 in /usr/local/lib/python3.7/site-
packages (from black==19.10b0->-r ./requirements.txt (line 5)) (1.4.0)
Requirement already satisfied: appdirs in /usr/local/lib/python3.7/site-packages (from
black==19.10b0->-r ./requirements.txt (line 5)) (1.4.3)
Requirement already satisfied: regex in /usr/local/lib/python3.7/site-packages (from
black==19.10b0->-r ./requirements.txt (line 5)) (2019.11.1)
Requirement already satisfied: pathspec<1,>=0.6 in /usr/local/lib/python3.7/site-
packages (from black==19.10b0->-r ./requirements.txt (line 5)) (0.6.0)
Removing intermediate container 32f878c77016
---> 5826a29c93ef
Step 8/11 : ENV ENV=FRONTEND
---> Running in 6115a5475793
Removing intermediate container 6115a5475793
---> aa7c537adc8a
Step 9/11 : ENV URL=http://netinv_backend:5001
---> Running in 692e0fb1255b
Removing intermediate container 692e0fb1255b
---> f91d35cb3373
Step 10/11 : EXPOSE 5000/tcp
---> Running in 1f84052b7a04
Removing intermediate container 1f84052b7a04
---> be24ad835d31
Step 11/11 : ENTRYPOINT python run.py
---> Running in 52d787276855
Removing intermediate container 52d787276855
---> e7ada664f182
Successfully built e7ada664f182
Successfully tagged frontend:latest
student@student-vm:lab04/net_inventory (master)$

Task 2: Build the Application


Your application consists of three containers and a bridge for Layer 2 communication between the
containers.

Activity

Build a Bridge for Communication


A bridge network allows containers on the same host to communicate with one another using container
names. Docker resolves container names to IP addresses on the bridge, so the containers do not need to
know the IP addresses of the other containers to communicate with them.

The docker network create command allows you to create various types of networks.

Step 1 Create a bridge network named services_bridge. Use the docker network create -d bridge services_bridge
command.

student@student-vm:lab04/net_inventory (master)$ docker network create -d bridge services_bridge
fd8f8d872e9120ccdd264f0db04e46b31dd9d9a1f1c01d09a6c447b9a2326aac
student@student-vm:lab04/net_inventory (master)$

Create the Application Containers


The --network flag, when used in the docker run command, adds the container to the bridge network. You
will use this flag to run the three containers within the same bridge network.
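The three commands that follow all share the same shape. The sketch below captures that shape with a hypothetical helper that only prints the command it would run, with the port mapping as the optional part:

```shell
# Dry-run sketch of the pattern used in Steps 2-4: each container joins the
# same user-defined bridge via --network so the containers can reach one
# another by name. The helper prints the command; it does not execute it.
run_on_bridge() {
  name="$1"; image="$2"; ports="$3"
  echo "docker run -itd --network services_bridge ${ports:+-p ${ports} }--name ${name} ${image}"
}
run_on_bridge netinv_db db                  # no published port needed
run_on_bridge netinv_backend backend 5001:5001
run_on_bridge netinv_frontend frontend 5000:5000
```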

Step 2 Start the Postgres database container from the db image and name it netinv_db. Use the docker run -itd --
network services_bridge --name netinv_db db command.

Step 3 Start the back-end application container from the backend image, publishing TCP port 5001, and name it
netinv_backend. Use the docker run -itd --network services_bridge -p 5001:5001 --name
netinv_backend backend command.

Step 4 Start the front-end application container from the frontend image, publishing TCP port 5000, and name it
netinv_frontend. Use the docker run -itd --network services_bridge -p 5000:5000 --name
netinv_frontend frontend command.

student@student-vm:lab04/net_inventory (master)$ docker run -itd --network services_bridge --name netinv_db db
96452f1de822a73fc7f25ccd1a50cb60afe7cf24c7b8246298445bf026270bae
student@student-vm:lab04/net_inventory (master)$ docker run -itd --network
services_bridge -p 5001:5001 --name netinv_backend backend
26d53dc480cf16f53d01628ee616f3ea6fdb4c00ddf34d7d790cc652498c7a55
student@student-vm:lab04/net_inventory (master)$ docker run -itd --network
services_bridge -p 5000:5000 --name netinv_frontend frontend
ed0fc8dd0b39833a18464cc52f9bb2601d2934098a7fa636bba0c94fa3eb5a3e
student@student-vm:lab04/net_inventory (master)$

Verify Installation
Verify that the application is running. Check the containers and connect to the front-end and back-end GUI.

Step 5 Verify that the containers are operational. Use the docker container ls command.

student@student-vm:lab04/net_inventory (master)$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
843b5fdce46b backend "/bin/sh -c 'python …" About a minute ago
Up About a minute 0.0.0.0:5001->5001/tcp netinv_backend
64f13e2c2edf frontend "/bin/sh -c 'python …" About a minute ago
Up About a minute 0.0.0.0:5000->5000/tcp netinv_frontend
549eb261b35f db "docker-entrypoint.s…" About a minute ago
Up About a minute 5432/tcp netinv_db
student@student-vm:lab04/net_inventory (master)$

Step 6 Use your browser and connect to the back-end application at http://127.0.0.1:5001/api/docs.

Step 7 Use your browser and connect to the front-end application at http://127.0.0.1:5000/.

Note The front end and back end are served on different TCP ports.

Task 3: Decouple Parameters from the Container Image


Earlier, you specified environmental variables while creating the Dockerfiles. Normally, you do not want to
embed sensitive information in a container image, especially when you push the image to a shared registry or
commit the Dockerfile to a Git repository.

Docker provides a mechanism, known as env files (supplied at run time through the --env-file flag), that
allows you to separate variables from the Dockerfile. Using this mechanism, you gain better control over
where and how secrets are managed.
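The env file format that the --env-file flag expects is one KEY=VALUE pair per line; stray whitespace around the equals sign ends up inside the key or value, which is why the lab warns against it. As a rough illustration — a simplified sketch of the format, not Docker's actual parser — such a file could be read like this:

```python
# Simplified sketch of env_file parsing -- an illustration of the
# KEY=VALUE format, not Docker's actual implementation.

def parse_env_file(text):
    """Return a dict of KEY=VALUE pairs from env_file-style text.

    Blank lines and lines starting with '#' are skipped; everything
    after the first '=' is taken verbatim as the value.
    """
    env = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key] = value  # whitespace around '=' is NOT stripped here
    return env


sample = """POSTGRES_DB=net_inventory
POSTGRES_USER=root
POSTGRES_PASSWORD=Cisco123
PGDATA=/var/lib/postgresql/data/pgdata
"""
print(parse_env_file(sample)["POSTGRES_DB"])  # net_inventory
```

Because the value is taken verbatim, a line such as `POSTGRES_USER = root` would yield the key `POSTGRES_USER ` with a trailing space — hence the warning about additional whitespace.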

Activity

Create the env_files


Your variables are easily visible within the Dockerfiles that you created, where they are specified using the
ENV instruction. You will now create env_files with the ENV data from your Dockerfiles.

Note Be careful and do not include any additional whitespace.

Step 1 Create a file that is named env_file_db and add the following environmental data:

POSTGRES_DB=net_inventory
POSTGRES_USER=root
POSTGRES_PASSWORD=Cisco123
PGDATA=/var/lib/postgresql/data/pgdata

Step 2 Create a file that is named env_file_backend and add the following environmental data:

ENV=BACKEND
SECRET_KEY=aj8j6PIbaJpJXBS8jjvcylT84G+1UhxjXQFz6pJPNuQ=
SQLALCHEMY_DATABASE_URI=postgresql+psycopg2://root:Cisco123@netinv_db/net_inventory

Step 3 Create a file that is named env_file_frontend and add the following environmental data:

ENV=FRONTEND
URL=http://netinv_backend:5001

Step 4 Verify the content of your newly created env_files. Use the cat command.

student@student-vm:lab04/net_inventory (master)$ cat env_file_db
POSTGRES_DB=net_inventory
POSTGRES_USER=root
POSTGRES_PASSWORD=Cisco123
PGDATA=/var/lib/postgresql/data/pgdata
student@student-vm:lab04/net_inventory (master)$ cat env_file_backend
ENV=BACKEND
SECRET_KEY=aj8j6PIbaJpJXBS8jjvcylT84G+1UhxjXQFz6pJPNuQ=
SQLALCHEMY_DATABASE_URI=postgresql+psycopg2://root:Cisco123@netinv_db/net_inventory
student@student-vm:lab04/net_inventory (master)$ cat env_file_frontend
ENV=FRONTEND
URL=http://netinv_backend:5001
student@student-vm:lab04/net_inventory (master)$
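At startup, the containerized application reads these values from its process environment rather than from the image. The following is a hedged sketch of how a Flask-style app might pick them up — the variable names match the env_files above, but the Config class and its fallback defaults are hypothetical, not the lab application's actual code:

```python
import os

# Hypothetical sketch: how an app could read its configuration from the
# process environment at startup. The variable names match the lab's
# env_files; the class and default values are illustrative only.
class Config:
    def __init__(self, environ=os.environ):
        self.env = environ.get("ENV", "DEVELOPMENT")
        self.secret_key = environ.get("SECRET_KEY", "change-me")
        self.db_uri = environ.get(
            "SQLALCHEMY_DATABASE_URI",
            "sqlite://",  # hypothetical fallback for local runs
        )


# Simulate the environment that `docker run --env-file=env_file_backend`
# would inject into the container:
fake_env = {
    "ENV": "BACKEND",
    "SQLALCHEMY_DATABASE_URI":
        "postgresql+psycopg2://root:Cisco123@netinv_db/net_inventory",
}
cfg = Config(environ=fake_env)
print(cfg.env)  # BACKEND
```

Notice that the hostname in the database URI is simply the container name netinv_db; the bridge network's name resolution makes that address work.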

Remove Environment Variables from Dockerfiles


Your env_files are now prepared. Next, you will remove the environmental variables, which are specified
by the ENV instruction, from your Dockerfiles. You will also purge the active containers.

Step 5 Remove all the ENV instruction lines from the Dockerfile_db file.

Step 6 Remove all the ENV instruction lines from the Dockerfile_backend file.

Step 7 Remove all the ENV instruction lines from the Dockerfile_frontend file.

Step 8 Verify your edited Dockerfiles using the cat command.

Step 9 Remove the active frontend container. Run the docker container rm -f netinv_frontend command.

Step 10 Remove the active backend container. Run the docker container rm -f netinv_backend command.

Step 11 Remove the active db container. Run the docker container rm -f netinv_db command.

student@student-vm:lab04/net_inventory (master)$ cat Dockerfile_db
FROM registry.git.lab/cisco-devops/containers/postgres:latest

# metadata
LABEL description="This is a postgres db for net inventory Flask app"
LABEL maintainer="Cisco <[email protected]>"
LABEL version="0.1"

# doesn't actually do anything, just documentation purposes. forward your port at runtime
EXPOSE 5432/tcp

student@student-vm:lab04/net_inventory (master)$ cat Dockerfile_backend
FROM registry.git.lab/cisco-devops/containers/python37:latest

# metadata
LABEL description="This is a net inventory backend flask application"
LABEL maintainer="Cisco <[email protected]>"
LABEL version="0.1"

# copy files over to container
ADD ./ /net_inventory

# sets the working directory
WORKDIR /net_inventory/

# install system packages
RUN apt install -y git vim

# install python packages
RUN pip install -r ./requirements.txt

# doesn't actually do anything, just documentation purposes. forward your port at runtime
EXPOSE 5001/tcp

# start the application
ENTRYPOINT python run.py
student@student-vm:lab04/net_inventory (master)$ cat Dockerfile_frontend
FROM registry.git.lab/cisco-devops/containers/python37:latest

# metadata
LABEL description="This is a net inventory frontend flask application"
LABEL maintainer="Cisco <[email protected]>"
LABEL version="0.1"

# copy files over to container
ADD ./ /net_inventory

# sets the working directory
WORKDIR /net_inventory/

# install python packages
RUN pip install -r ./requirements.txt

# doesn't actually do anything, just documentation purposes. forward your port at runtime
EXPOSE 5000/tcp

# start the application
ENTRYPOINT python run.py
student@student-vm:lab04/net_inventory (master)$ docker container rm -f netinv_backend
netinv_backend
student@student-vm:lab04/net_inventory (master)$ docker container rm -f netinv_frontend
netinv_frontend
student@student-vm:lab04/net_inventory (master)$ docker container rm -f netinv_db
netinv_db
student@student-vm:lab04/net_inventory (master)$

Rebuild the Images Without Hardcoded Environmental Values


You will now build the Docker images once again. This time, your Dockerfiles will not include
environmental variables, so these variables will not be included in the images anymore.

Step 12 Run the docker build -t db -f Dockerfile_db . command.

student@student-vm:lab04/net_inventory (master)$ docker build -t db -f Dockerfile_db .
Sending build context to Docker daemon 56.43MB
Step 1/5 : FROM registry.git.lab/cisco-devops/containers/postgres:latest
---> 3eda284d1840
Step 2/5 : LABEL description="This is a postgres db for net inventory Flask app"
---> Using cache
---> 183b8e45019a
Step 3/5 : LABEL maintainer="Cisco <[email protected]>"
---> Using cache
---> f128461b0142
Step 4/5 : LABEL version="0.1"
---> Using cache
---> 7b434c55c1a5
Step 5/5 : EXPOSE 5432/tcp
---> Running in 5aae8b566649
Removing intermediate container 5aae8b566649
---> 60028e27e749
Successfully built 60028e27e749
Successfully tagged db:latest
student@student-vm:lab04/net_inventory (master)$

Step 13 Run the docker build -t backend -f Dockerfile_backend . command.

student@student-vm:lab04/net_inventory (master)$ docker build -t backend -f
Dockerfile_backend .
Sending build context to Docker daemon 56.43MB
Step 1/10 : FROM registry.git.lab/cisco-devops/containers/python37:latest
---> dd4eec63855e
Step 2/10 : LABEL description="This is a net inventory backend flask application"
---> Using cache
---> 335d5d6ee6c3
Step 3/10 : LABEL maintainer="Cisco <[email protected]>"
---> Using cache
---> 12302dda8fd8
Step 4/10 : LABEL version="0.1"
---> Using cache
---> 812173fb7145
Step 5/10 : ADD ./ /net_inventory
---> e37d94ddd564
Step 6/10 : WORKDIR /net_inventory/
---> Running in 10bbe573826c
Removing intermediate container 10bbe573826c
---> 542178562f9a
Step 7/10 : RUN apt install -y git vim
---> Running in 7ed60f6ad39d

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

Reading package lists...
Building dependency tree...
Reading state information...
git is already the newest version (1:2.11.0-3+deb9u4).
vim is already the newest version (2:8.0.0197-4+deb9u3).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Removing intermediate container 7ed60f6ad39d
---> 492a29a1a0f5
Step 8/10 : RUN pip install -r ./requirements.txt
---> Running in cec2dd0cf53c
Requirement already satisfied: alembic==1.2.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 1)) (1.2.1)
Requirement already satisfied: asn1crypto==0.24.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 2)) (0.24.0)
Requirement already satisfied: attrs==19.2.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 3)) (19.2.0)
Requirement already satisfied: bcrypt==3.1.7 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 4)) (3.1.7)
Requirement already satisfied: black==19.10b0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 5)) (19.10b0)
Requirement already satisfied: certifi==2019.9.11 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 6)) (2019.9.11)
Requirement already satisfied: cffi==1.12.3 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 7)) (1.12.3)
Requirement already satisfied: chardet==3.0.4 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 8)) (3.0.4)
Requirement already satisfied: Click==7.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 9)) (7.0)
Requirement already satisfied: cryptography==2.7 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 10)) (2.7)
Requirement already satisfied: enum34==1.1.6 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 11)) (1.1.6)
Requirement already satisfied: Faker==2.0.2 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 12)) (2.0.2)
Requirement already satisfied: flasgger==0.9.3 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 13)) (0.9.3)
Requirement already satisfied: Flask==1.1.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 14)) (1.1.1)
Requirement already satisfied: Flask-Login==0.4.1 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 15)) (0.4.1)
Requirement already satisfied: flask-marshmallow==0.10.1 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 16)) (0.10.1)
Requirement already satisfied: Flask-Migrate==2.5.2 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 17)) (2.5.2)
Requirement already satisfied: Flask-Script==2.0.6 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 18)) (2.0.6)
Requirement already satisfied: Flask-SQLAlchemy==2.4.1 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 19)) (2.4.1)
Requirement already satisfied: Flask-WTF==0.14.2 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 20)) (0.14.2)
Requirement already satisfied: funcsigs==1.0.2 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 21)) (1.0.2)
Requirement already satisfied: idna==2.8 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 22)) (2.8)
Requirement already satisfied: ipaddress==1.0.22 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 23)) (1.0.22)
Requirement already satisfied: itsdangerous==1.1.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 24)) (1.1.0)
Requirement already satisfied: Jinja2==2.10.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 25)) (2.10.1)
Requirement already satisfied: jsonschema==2.6.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 26)) (2.6.0)
Requirement already satisfied: Mako==1.1.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 27)) (1.1.0)
Requirement already satisfied: MarkupSafe==1.1.1 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 28)) (1.1.1)
Requirement already satisfied: marshmallow==2.20.5 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 29)) (2.20.5)
Requirement already satisfied: marshmallow-sqlalchemy==0.18.0 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 30)) (0.18.0)
Requirement already satisfied: mistune==0.8.4 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 31)) (0.8.4)
Requirement already satisfied: pip==19.3.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 32)) (19.3.1)
Requirement already satisfied: psycopg2==2.8.4 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 33)) (2.8.4)
Requirement already satisfied: pycparser==2.19 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 34)) (2.19)
Requirement already satisfied: pyrsistent==0.15.4 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 35)) (0.15.4)
Requirement already satisfied: python-dateutil==2.8.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 36)) (2.8.0)
Requirement already satisfied: python-dotenv==0.10.3 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 37)) (0.10.3)
Requirement already satisfied: python-editor==1.0.4 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 38)) (1.0.4)
Requirement already satisfied: PyYAML==5.1.2 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 39)) (5.1.2)
Requirement already satisfied: requests==2.22.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 40)) (2.22.0)
Requirement already satisfied: setuptools==41.4.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 41)) (41.4.0)
Requirement already satisfied: six==1.12.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 42)) (1.12.0)
Requirement already satisfied: SQLAlchemy==1.3.8 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 43)) (1.3.8)
Requirement already satisfied: SQLAlchemy-Utils==0.34.2 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 44)) (0.34.2)
Requirement already satisfied: text-unidecode==1.3 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 45)) (1.3)
Requirement already satisfied: typing==3.7.4.1 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 46)) (3.7.4.1)
Requirement already satisfied: urllib3==1.25.6 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 47)) (1.25.6)
Requirement already satisfied: Werkzeug==0.16.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 48)) (0.16.0)
Requirement already satisfied: wheel==0.33.6 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 49)) (0.33.6)
Requirement already satisfied: WTForms==2.2.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 50)) (2.2.1)
Requirement already satisfied: pathspec<1,>=0.6 in /usr/local/lib/python3.7/site-
packages (from black==19.10b0->-r ./requirements.txt (line 5)) (0.6.0)
Requirement already satisfied: typed-ast>=1.4.0 in /usr/local/lib/python3.7/site-
packages (from black==19.10b0->-r ./requirements.txt (line 5)) (1.4.0)
Requirement already satisfied: regex in /usr/local/lib/python3.7/site-packages (from
black==19.10b0->-r ./requirements.txt (line 5)) (2019.11.1)
Requirement already satisfied: appdirs in /usr/local/lib/python3.7/site-packages (from
black==19.10b0->-r ./requirements.txt (line 5)) (1.4.3)
Requirement already satisfied: toml>=0.9.4 in /usr/local/lib/python3.7/site-packages
(from black==19.10b0->-r ./requirements.txt (line 5)) (0.10.0)
Removing intermediate container cec2dd0cf53c
---> fc7b7baed191
Step 9/10 : EXPOSE 5001/tcp
---> Running in f38476ec1ed4
Removing intermediate container f38476ec1ed4
---> 6a7ae27a79d5
Step 10/10 : ENTRYPOINT python run.py
---> Running in 98e6947249ed
Removing intermediate container 98e6947249ed
---> b316c502a4d9
Successfully built b316c502a4d9
Successfully tagged backend:latest
student@student-vm:lab04/net_inventory (master)$

Step 14 Run the docker build -t frontend -f Dockerfile_frontend . command.

student@student-vm:lab04/net_inventory (master)$ docker build -t frontend -f
Dockerfile_frontend .
Sending build context to Docker daemon 56.43MB
Step 1/9 : FROM registry.git.lab/cisco-devops/containers/python37:latest
---> dd4eec63855e
Step 2/9 : LABEL description="This is a net inventory frontend flask application"
---> Using cache
---> 6edd53702798
Step 3/9 : LABEL maintainer="Cisco <[email protected]>"
---> Using cache
---> ef7ef2e2a3e8
Step 4/9 : LABEL version="0.1"
---> Using cache
---> a8dbe4720c17
Step 5/9 : ADD ./ /net_inventory
---> 08c74c5c6dd4
Step 6/9 : WORKDIR /net_inventory/
---> Running in 72de5e71805c
Removing intermediate container 72de5e71805c
---> 66b9e2836a33
Step 7/9 : RUN pip install -r ./requirements.txt
---> Running in 9ddb701eefb9
Requirement already satisfied: alembic==1.2.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 1)) (1.2.1)
Requirement already satisfied: asn1crypto==0.24.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 2)) (0.24.0)
Requirement already satisfied: attrs==19.2.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 3)) (19.2.0)
Requirement already satisfied: bcrypt==3.1.7 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 4)) (3.1.7)
Requirement already satisfied: black==19.10b0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 5)) (19.10b0)
Requirement already satisfied: certifi==2019.9.11 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 6)) (2019.9.11)
Requirement already satisfied: cffi==1.12.3 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 7)) (1.12.3)
Requirement already satisfied: chardet==3.0.4 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 8)) (3.0.4)
Requirement already satisfied: Click==7.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 9)) (7.0)
Requirement already satisfied: cryptography==2.7 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 10)) (2.7)
Requirement already satisfied: enum34==1.1.6 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 11)) (1.1.6)
Requirement already satisfied: Faker==2.0.2 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 12)) (2.0.2)
Requirement already satisfied: flasgger==0.9.3 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 13)) (0.9.3)
Requirement already satisfied: Flask==1.1.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 14)) (1.1.1)
Requirement already satisfied: Flask-Login==0.4.1 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 15)) (0.4.1)
Requirement already satisfied: flask-marshmallow==0.10.1 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 16)) (0.10.1)
Requirement already satisfied: Flask-Migrate==2.5.2 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 17)) (2.5.2)
Requirement already satisfied: Flask-Script==2.0.6 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 18)) (2.0.6)
Requirement already satisfied: Flask-SQLAlchemy==2.4.1 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 19)) (2.4.1)
Requirement already satisfied: Flask-WTF==0.14.2 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 20)) (0.14.2)
Requirement already satisfied: funcsigs==1.0.2 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 21)) (1.0.2)
Requirement already satisfied: idna==2.8 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 22)) (2.8)
Requirement already satisfied: ipaddress==1.0.22 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 23)) (1.0.22)
Requirement already satisfied: itsdangerous==1.1.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 24)) (1.1.0)
Requirement already satisfied: Jinja2==2.10.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 25)) (2.10.1)
Requirement already satisfied: jsonschema==2.6.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 26)) (2.6.0)
Requirement already satisfied: Mako==1.1.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 27)) (1.1.0)
Requirement already satisfied: MarkupSafe==1.1.1 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 28)) (1.1.1)
Requirement already satisfied: marshmallow==2.20.5 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 29)) (2.20.5)
Requirement already satisfied: marshmallow-sqlalchemy==0.18.0 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 30)) (0.18.0)
Requirement already satisfied: mistune==0.8.4 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 31)) (0.8.4)
Requirement already satisfied: pip==19.3.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 32)) (19.3.1)
Requirement already satisfied: psycopg2==2.8.4 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 33)) (2.8.4)
Requirement already satisfied: pycparser==2.19 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 34)) (2.19)
Requirement already satisfied: pyrsistent==0.15.4 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 35)) (0.15.4)
Requirement already satisfied: python-dateutil==2.8.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 36)) (2.8.0)
Requirement already satisfied: python-dotenv==0.10.3 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 37)) (0.10.3)
Requirement already satisfied: python-editor==1.0.4 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 38)) (1.0.4)
Requirement already satisfied: PyYAML==5.1.2 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 39)) (5.1.2)
Requirement already satisfied: requests==2.22.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 40)) (2.22.0)
Requirement already satisfied: setuptools==41.4.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 41)) (41.4.0)
Requirement already satisfied: six==1.12.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 42)) (1.12.0)
Requirement already satisfied: SQLAlchemy==1.3.8 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 43)) (1.3.8)
Requirement already satisfied: SQLAlchemy-Utils==0.34.2 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 44)) (0.34.2)
Requirement already satisfied: text-unidecode==1.3 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 45)) (1.3)
Requirement already satisfied: typing==3.7.4.1 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 46)) (3.7.4.1)
Requirement already satisfied: urllib3==1.25.6 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 47)) (1.25.6)
Requirement already satisfied: Werkzeug==0.16.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 48)) (0.16.0)
Requirement already satisfied: wheel==0.33.6 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 49)) (0.33.6)
Requirement already satisfied: WTForms==2.2.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 50)) (2.2.1)
Requirement already satisfied: typed-ast>=1.4.0 in /usr/local/lib/python3.7/site-
packages (from black==19.10b0->-r ./requirements.txt (line 5)) (1.4.0)
Requirement already satisfied: toml>=0.9.4 in /usr/local/lib/python3.7/site-packages
(from black==19.10b0->-r ./requirements.txt (line 5)) (0.10.0)
Requirement already satisfied: pathspec<1,>=0.6 in /usr/local/lib/python3.7/site-
packages (from black==19.10b0->-r ./requirements.txt (line 5)) (0.6.0)
Requirement already satisfied: appdirs in /usr/local/lib/python3.7/site-packages (from
black==19.10b0->-r ./requirements.txt (line 5)) (1.4.3)
Requirement already satisfied: regex in /usr/local/lib/python3.7/site-packages (from
black==19.10b0->-r ./requirements.txt (line 5)) (2019.11.1)
Removing intermediate container 9ddb701eefb9
---> dd063900d965
Step 8/9 : EXPOSE 5000/tcp
---> Running in 407fb39f2787
Removing intermediate container 407fb39f2787
---> 5453209272a5
Step 9/9 : ENTRYPOINT python run.py
---> Running in 47ee31d7466e
Removing intermediate container 47ee31d7466e
---> 16d68c1b146f
Successfully built 16d68c1b146f
Successfully tagged frontend:latest
student@student-vm:lab04/net_inventory (master)$

Create the Application Containers Using the env_files


You will now create the three containers using the Docker env file mechanism via the
--env-file=env_file_name flag. The referenced env file defines the container's environmental variables.

Step 15 Build the Postgres database container using the docker run -itd --env-file=env_file_db --network
services_bridge --name netinv_db db command.

Step 16 Build the back-end application container using the docker run --env-file=env_file_backend -itd --network
services_bridge -p 5001:5001 --name netinv_backend backend command.

Step 17 Build the front-end application container using the docker run --env-file=env_file_frontend -itd --network
services_bridge -p 5000:5000 --name netinv_frontend frontend command.

student@student-vm:lab04/net_inventory (master)$ docker run -itd --env-file=env_file_db
--network services_bridge --name netinv_db db
c2ac3073815b12d2e561dc5b2160eda9235f941a2d27e9a29be98f0160181c5a
student@student-vm:lab04/net_inventory (master)$ docker run --env-file=env_file_backend
-itd --network services_bridge -p 5001:5001 --name netinv_backend backend
8906dec88d8f741f2b44823daf586a2684cf00f85a09d6dff28438b0d233d40f
student@student-vm:lab04/net_inventory (master)$ docker run --env-
file=env_file_frontend -itd --network services_bridge -p 5000:5000 --name
netinv_frontend frontend
d0d618f542a3eba8e1a90898cccb7685a9eeda92bb467af785138ca630eea573
student@student-vm:lab04/net_inventory (master)$

You may want to explore the newly created containers to verify that they use the same set of
environmental variables as they did previously, when you defined the environmental variables in
the Dockerfile.

Summary
You built a three-tier application by separating each application component into different Dockerfiles. From
those Dockerfiles, you created images, and ran the three containers that built the three-tier application.
Finally, to decouple sensitive data from the Docker image, you created environmental files, which the
container reads when it starts, rather than having those values hardcoded in the image itself.

Summary Challenge
1. Which two instructions are used to move files from a local system into a container build? (Choose
two.)
a. COPY
b. MOVE
c. CP
d. ADD
e. DUPLICATE
2. Which two instructions could add a nonzero-sized layer to a Docker image? (Choose two.)
a. COPY
b. LABEL
c. ENV
d. RUN
e. WORKDIR
3. Which instruction executes regardless of the parameters that are passed when starting a
container?
a. CMD
b. WORKDIR
c. ENV
d. ENTRYPOINT
4. Which instruction affects documentation only and does not actually change anything within the
Docker build process?
a. ENTRYPOINT
b. ENV
c. EXPOSE
d. ADD
e. LABEL
5. Which feature of Docker helps define versioning of images?
a. ImageAddress
b. Tags
c. ENV variables
d. Versioning is not available.
6. Which option allows passing ENV variables into a Docker container when defining a single
container?
a. env-lifecycle
b. Python client library for Docker
c. must be done in a Dockerfile
d. env-file
7. Which option is not a characteristic of a golden image?
a. tested
b. hardened
c. approved within the organization
d. available in a trusted registry

Answer Key
Dockerfiles
1. C

Golden Images
1. D

Safe Processing Practices
1. A

Summary Challenge
1. A, D
2. A, D
3. D
4. C
5. B
6. D
7. B

Section 4: Deploying a Multitier Application

Introduction
An application traditionally has multiple tiers, including presentation, application, and data tiers. Containers
facilitate this three-tier design by placing one or more containers within each tier. When tiers are separated,
the ability to communicate between the tiers is important. Linux networking is at the heart of Docker
containers: it keeps containers isolated while enabling interconnected services within and between tiers.
This section starts with an overview of networking, first with Linux networking and then Docker
networking specifically. After introducing the networking aspects of Docker containers, the discussion
moves to deploying a multitier application with Docker Compose.
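As a preview of where this section is headed, the three docker run commands and the user-defined bridge from the previous lab could be expressed declaratively in a single Compose file. This is a hedged sketch only — the image names, env_file paths, and service names assume the earlier lab layout:

```yaml
# Hypothetical docker-compose.yml equivalent of the earlier lab's
# docker network create and docker run commands.
version: "3"
services:
  netinv_db:
    image: db
    env_file: env_file_db
    networks:
      - services_bridge
  netinv_backend:
    image: backend
    env_file: env_file_backend
    ports:
      - "5001:5001"
    networks:
      - services_bridge
  netinv_frontend:
    image: frontend
    env_file: env_file_frontend
    ports:
      - "5000:5000"
    networks:
      - services_bridge
networks:
  services_bridge:
    driver: bridge
```

With a file like this, docker-compose up -d would create the network and the three containers in one step, and each service would be reachable by its service name.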

Linux Networking
An introduction to Linux networking is an important foundation for working with Docker container
networking. Many of these components are used in some fashion as Docker interacts with the networking
stack of the container host.

Linux Networking: Interfaces


• Access via ip commands
– ip addr
– nmcli connection show
• lo: Loopback
• eth0: First Ethernet interface
• wlan0: Wireless LAN interface
• ens*, vbox*, vmnet*: virtual machine interfaces
• bond0, tap0, tun0: LAG, tap, and tunnel interfaces

Accessing interface-level information is typically done with the ip addr command. The command output
returns the interfaces that are available on the host machine with associated information such as interface
name, MAC address, IP (v4 and v6) address and mask, and broadcast address.
Some common names include “lo” for loopback addresses, eth# for the Ethernet interface, and “wlan” for a
wireless network. Interface prefixes typically associated with virtual interfaces include ens, vbox, and
vmnet. Other typical interface prefixes include bond for link aggregation group (LAG) interfaces and tap or
tun for taps and tunnels. LAG is the concept of joining multiple interfaces together to act as one logical unit.
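The naming conventions above can be sketched as a small lookup table, purely as an illustration. The prefix-to-category mapping below mirrors the bullets above and is not exhaustive:

```python
# Illustrative sketch: classify Linux interface names by common prefix.
# The prefix table mirrors the naming conventions above; it is not exhaustive.
PREFIXES = {
    "lo": "loopback",
    "eth": "Ethernet",
    "wlan": "wireless LAN",
    "ens": "virtual machine",
    "vbox": "virtual machine",
    "vmnet": "virtual machine",
    "bond": "LAG",
    "tap": "tap",
    "tun": "tunnel",
}

def classify(ifname: str) -> str:
    """Return a human-readable category for an interface name."""
    # Check longer prefixes first so "vmnet" is not shadowed by a shorter key.
    for prefix in sorted(PREFIXES, key=len, reverse=True):
        if ifname.startswith(prefix):
            return PREFIXES[prefix]
    return "unknown"

print(classify("eth0"))   # Ethernet
print(classify("bond0"))  # LAG
```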

Linux Networking: Routing


ip route
• Shows a routing table of the device

$ ip route
default via 192.168.10.1 dev ens160 proto static metric 100
169.254.0.0/16 dev ens160 scope link metric 1000
172.17.3.0/24 via 172.17.70.254 dev ens192 proto static metric 101
172.17.4.0/24 via 172.17.70.254 dev ens192 proto static metric 101
172.17.70.0/24 dev ens192 proto kernel scope link src 172.17.70.56 metric 101
172.18.0.0/16 dev docker0 proto kernel scope link src 172.18.0.1
172.19.0.0/16 dev br-fd8f8d872e91 proto kernel scope link src 172.19.0.1 linkdown
172.20.0.0/16 dev br-15d5af845c07 proto kernel scope link src 172.20.0.1
172.24.0.0/16 dev br-b59007db32d8 proto kernel scope link src 172.24.0.1 linkdown
192.168.10.0/24 dev ens160 proto kernel scope link src 192.168.10.10 metric 100
192.168.32.0/20 dev br-abc14106d901 proto kernel scope link src 192.168.32.1 linkdown

The output of the ip route command shows the routing table of the host. The default route (0.0.0.0/0) is listed with the word "default" followed by the path for the route. Like networking devices, a route table has a metric that indicates the preference for a route. When multiple routes match a destination, the route with the longest matching prefix is the route that is taken.
Many of the IPv6 commands are similar to the IPv4 commands. For example, to view the IPv6 routing
table, you add a -6 to the route command (ip -6 route). Some other commands, such as ping, have a ping6
counterpart command to ping an IPv6 address.
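The longest-prefix behavior described above can be sketched with the standard ipaddress module. The routes below are a hypothetical subset of the example table:

```python
import ipaddress

# Hypothetical subset of the example routing table above: prefix -> next hop / device.
ROUTES = {
    "0.0.0.0/0": "via 192.168.10.1 dev ens160",
    "172.17.3.0/24": "via 172.17.70.254 dev ens192",
    "172.18.0.0/16": "dev docker0",
    "192.168.10.0/24": "dev ens160",
}

def lookup(dst: str) -> str:
    """Return the next hop of the most specific (longest prefix) matching route."""
    addr = ipaddress.ip_address(dst)
    candidates = [
        (ipaddress.ip_network(prefix), via)
        for prefix, via in ROUTES.items()
        if addr in ipaddress.ip_network(prefix)
    ]
    # Longest prefix wins: the most specific matching route is preferred.
    best = max(candidates, key=lambda item: item[0].prefixlen)
    return best[1]

print(lookup("172.18.0.5"))  # the /16 beats the default /0
print(lookup("8.8.8.8"))     # only the default route matches
```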

Linux Networking: Namespaces
• Similar concept to the Cisco Open NX-OS Linux network architecture
• Namespaces are allocated and mapped
• Similar concept to a virtual routing and forwarding (VRF) instance
• To route between namespaces, build a bridge network

Namespaces enable isolation within the Linux environment. This isolation extends to processes, storage, and networking. Think of a namespace as similar to a VLAN or virtual routing and forwarding (VRF) instance in the routing environment. An object within a namespace does not natively have access to a peer object in a different namespace. Because of this capability, containers use namespaces to achieve separation at the network level.
The diagram of the Cisco Open Nexus Operating System (NX-OS) architecture is set up as it relates to VRF
instances; a VRF instance is mapped to a Linux namespace. In a Cisco NX-OS switch, a management VRF
instance is assigned to the Linux namespace Management, the default VRF is assigned to the Default
namespace, and so on.

Linux Networking: Firewall


• iptables: host firewall management
– IPv4/IPv6 Stateless and Stateful Packet Filtering
– NAT/PAT

# iptables --list
Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain FORWARD (policy DROP)


target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate
RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere

ACCEPT all -- anywhere anywhere

Chain OUTPUT (policy ACCEPT)


target prot opt source destination

Chain DOCKER (1 references)


target prot opt source destination

Chain DOCKER-ISOLATION-STAGE-1 (1 references)


target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere

Chain DOCKER-ISOLATION-STAGE-2 (1 references)


target prot opt source destination
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere

Chain DOCKER-USER (1 references)


target prot opt source destination
RETURN all -- anywhere anywhere

The command iptables accesses information about the firewall that is associated with the host. The iptables
utility has many of the same features as a traditional Layer 3 firewall. Both stateful and stateless packet
filtering are available. Network address translation (NAT) and port address translation (PAT) functionalities
are included in iptables to facilitate the NAT translations required in some of the Docker networking
functions, which will be covered later.
This figure also illustrates how iptables organizes rules into separate chains, which enables granular policy updates on a chain-by-chain basis. The INPUT chain applies to traffic that is destined for the host itself. The OUTPUT chain applies to traffic that is leaving the host. The other chains that are shown here apply to traffic that is routed through the host into the Docker containers.
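The chain structure in this output can also be explored programmatically. The following sketch parses an abbreviated copy of the iptables --list output above into a dictionary of chains; the parsing logic is illustrative, not a robust iptables parser:

```python
import re

# Abbreviated sample of `iptables --list` output, as shown above.
SAMPLE = """\
Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER all -- anywhere anywhere

Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
"""

def parse_chains(text: str) -> dict:
    """Map each chain name to its policy/reference info and list of rule targets."""
    chains, current = {}, None
    for line in text.splitlines():
        m = re.match(r"Chain (\S+) \((.*)\)", line)
        if m:
            current = m.group(1)
            chains[current] = {"info": m.group(2), "rules": []}
        elif line and current and not line.startswith("target"):
            # First column of each rule line is the target (ACCEPT, DROP, a chain...).
            chains[current]["rules"].append(line.split()[0])
    return chains

chains = parse_chains(SAMPLE)
print(chains["FORWARD"]["info"])   # policy DROP
print(chains["FORWARD"]["rules"])  # ['DOCKER-USER', 'DOCKER']
```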

Linux Networking: Open Services


• ss and netstat
• Both commands can be used to show open ports on a Linux host

$ ss -at
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 127.0.0.1:53117 0.0.0.0:*
LISTEN 0 128 127.0.0.1:41261 0.0.0.0:*
LISTEN 0 128 127.0.0.1:5390 0.0.0.0:*
LISTEN 0 128 127.0.0.1:18766 0.0.0.0:*
LISTEN 0 128 0.0.0.0:http 0.0.0.0:*
LISTEN 0 128 127.0.0.53%lo:domain 0.0.0.0:*
LISTEN 0 128 0.0.0.0:ssh 0.0.0.0:*
LISTEN 0 5 127.0.0.1:ipp 0.0.0.0:*
ESTAB 0 0 172.17.70.56:50481 172.17.70.20:7163
ESTAB 0 0 172.17.70.56:60093 172.17.70.20:7161
ESTAB 0 36 172.17.70.56:ssh 172.17.70.5:50462
ESTAB 0 0 172.17.70.56:53183 172.17.70.20:7162
ESTAB 0 0 172.17.70.56:34127 172.17.70.20:7164

$ netstat -at
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 localhost:53117 0.0.0.0:* LISTEN
tcp 0 0 localhost:41261 0.0.0.0:* LISTEN
tcp 0 0 localhost:5390 0.0.0.0:* LISTEN
tcp 0 0 localhost:18766 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:http 0.0.0.0:* LISTEN
tcp 0 0 localhost:domain 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:ssh 0.0.0.0:* LISTEN
tcp 0 0 localhost:ipp 0.0.0.0:* LISTEN
tcp 0 0 student-vm:50481 172.17.70.20:7163 ESTABLISHED
tcp 0 0 student-vm:60093 172.17.70.20:7161 ESTABLISHED
tcp 0 208 student-vm:ssh 172.17.70.5:50462 ESTABLISHED
tcp 0 0 student-vm:53183 172.17.70.20:7162 ESTABLISHED
tcp 0 0 student-vm:34127 172.17.70.20:7164 ESTABLISHED

To view the open services and ports on a Linux host, use the netstat or the newer ss commands to show
information about the TCP/UDP ports that are in use and listening. The ss command output gives the
current state of a connection, the queue depth, the listening address and port, and the remote peer
connection. This information can be helpful for verifying that a service is running on the appropriate port
and that it is actively listening.
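That verification step can be automated. This sketch filters the LISTEN-state sockets out of ss -at style text; the sample lines are abbreviated from the output above:

```python
# Abbreviated sample of `ss -at` output, as shown above.
SAMPLE = """\
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 0.0.0.0:http 0.0.0.0:*
LISTEN 0 128 0.0.0.0:ssh 0.0.0.0:*
ESTAB 0 0 172.17.70.56:50481 172.17.70.20:7163
"""

def listening_ports(text: str) -> list:
    """Return the local address:port of every socket in the LISTEN state."""
    ports = []
    for line in text.splitlines()[1:]:  # skip the header row
        fields = line.split()
        # fields: State, Recv-Q, Send-Q, Local Address:Port, Peer Address:Port
        if fields and fields[0] == "LISTEN":
            ports.append(fields[3])
    return ports

print(listening_ports(SAMPLE))  # ['0.0.0.0:http', '0.0.0.0:ssh']
```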

Linux Networking: Watch


• Watch repeats the command provided
– Used for watching counters or anything else that should be repeated

Every 2.0s: netstat -i

Kernel Interface table


Iface MTU RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
br-15d5a 1500 0 0 0 0 207 0 0 0 BMRU
br-abc14 1500 271 0 0 0 288 0 0 0 BMU
br-b5900 1500 0 0 0 0 7 0 0 0 BMU
br-fd8f8 1500 1289 0 0 0 1359 0 0 0 BMU
docker0 1500 327848 0 0 0 461163 0 0 0 BMRU
ens160 1500 14573499 0 3249 0 2027655 0 0 0 BMRU
ens192 1500 6831504 0 5 0 3956083 0 0 0 BMRU
lo 65536 54409 0 0 0 54409 0 0 0 LRU
veth5c5d 1500 0 0 0 0 207 0 0 0 BMRU
veth7b34 1500 0 0 0 0 506 0 0 0 BMRU
veth8d6a 1500 0 0 0 0 148 0 0 0 BMRU

The watch command repeats a given command at a regular interval and displays its output. The highlighted section of the example shows the command that is being run and how often it runs. In this instance, watch netstat -i runs the netstat -i command every 2 seconds to show how the interface counters change over time. The first line of the output shows how often the watch command is running and lists the command after the colon. See the manual page (man watch) for options to adjust the interval and other behavior.

Linux Networking: Interface Configuration and Routing


To permanently update the interface configurations or routing with Ubuntu, you need to update the
/etc/network/interfaces file on the host.
• Updates for interface configuration or a routing table are made to the /etc/network/interfaces file.
• Setting an IP address:
auto eth0
iface eth0 inet static
address 172.17.70.56
netmask 255.255.255.0

• Setting a route:
auto eth0
iface eth0 inet static
address 172.17.70.56
netmask 255.255.255.0
up route add -net 10.0.0.0 netmask 255.0.0.0 gw 172.17.70.1
up route add -net 1.1.1.1 netmask 255.255.255.255 gw 172.17.70.1

Often, a skeleton configuration is already in place that you can update, removing the comment character (#) from the beginning of the line. This example shows the assignment of the static IP address 172.17.70.56 to the interface eth0. The address keyword sets the address, and the netmask keyword sets the network mask.
The process is similar for static routes; the same file needs to be updated. However, in this case you would
add up route add -net <network> netmask <network_mask> gw <gateway>.
Updating the same information on a RedHat-based distribution such as CentOS or Fedora requires a file that
is named ifcfg-INTERFACENAME in the directory /etc/sysconfig/network-scripts/ where
INTERFACENAME is the name of the interface. Within that file, you should have the following configured
in the file:
IPADDR=172.17.70.56
NETMASK=255.255.255.0
GATEWAY=172.17.70.1
DNS1=1.0.0.1
DNS2=1.1.1.1
DNS3=8.8.4.4

If these settings are updated, you must restart the networking process for the changes to take effect.
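As an illustration, the ifcfg settings above can be rendered from a few parameters with a small helper. render_ifcfg is a hypothetical name for this sketch, not a system tool:

```python
def render_ifcfg(ipaddr: str, netmask: str, gateway: str, dns: list) -> str:
    """Render the body of a RedHat-style ifcfg-INTERFACENAME file."""
    lines = [
        f"IPADDR={ipaddr}",
        f"NETMASK={netmask}",
        f"GATEWAY={gateway}",
    ]
    # DNS servers are numbered DNS1, DNS2, ... in order of preference.
    lines += [f"DNS{i}={server}" for i, server in enumerate(dns, start=1)]
    return "\n".join(lines)

print(render_ifcfg("172.17.70.56", "255.255.255.0", "172.17.70.1",
                   ["1.0.0.1", "1.1.1.1"]))
```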

Linux Networking: Restart Networking Service
There are three commands that will restart the networking service on a Linux host.
Three options to restart the network service:
• $ /etc/init.d/networking restart
• $ service networking restart
• $ systemctl restart networking

The most basic option is the /etc/init.d/networking restart command. The other options, developed as Linux service management evolved, are service networking restart and systemctl restart networking. Which commands are available depends on which service manager is present on the Linux host on which you are working. The restart options vary between Linux distributions and releases.

Network Operation Commands


Many of the helpful commands in Linux networking are also common on Cisco devices and other non-Linux operating systems. Generally, the commands produce similar output, though minor differences do occur.

The ping command has a few available arguments. The biggest difference between a Cisco IOS device ping
and a Linux device ping is that in Linux there are, by default, no set number of pings to run. It is a
continuous ping. The argument -c 5 (count) sends five packets. Ubuntu 18.04 uses a default packet size of
64 bytes versus an IOS device using 100-byte packets. To increase the size of the packets, use the -s 100
(size) option to modify the size of the ICMP packets.
The traceroute command works the same with traceroute <host>. The output shows the hops along the
way during the test.
Use the ssh command at the shell to connect to another host with an SSH session. The command ssh <username>@<host> logs in to a device; to use a nondefault port, add the -p <port> option. If no port is defined, the ssh command uses port 22 by default. If no username is supplied (eliminating username@), the command uses the locally logged-in user as the username for the remote device.
The command scp is used for file transfers between hosts over SSH, with the dependency that SSH and Secure Copy Protocol (SCP) are enabled. The command takes the form scp [OPTION] [[user@]SRC_HOST:]file1 [[user@]DEST_HOST:]file2. The source and destination can each be a local file or a file on a remote host. Note that scp uses a colon (:) as the separation character between the host information and the filename. To use a different port, use the option -P <port>; otherwise, the command uses port 22 by default.
If there is a device that does not support SSH but does allow Telnet (insecure) access, use the command telnet <host> <port>. Recall that usernames and passwords are sent in cleartext with Telnet, and no credentials are included in the command (unlike SSH).

The nc (netcat) command can be used to quickly test ports and their reachability. This command tests both
TCP- and UDP-based ports and can be used as both a client and server. To listen on a port as a server, use
nc -l <port_number>. To connect as a client, use nc <host> <port>. To test UDP, use the option -u with
the nc command. More information on this command can be found at https://linux.die.net/man/1/nc.
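Under the hood, nc performs ordinary socket operations. This Python sketch reproduces the listener/client exchange on the loopback interface; the ephemeral port selection and echo behavior are illustrative:

```python
import socket
import threading

def serve_once(server: socket.socket) -> None:
    """Accept one connection and echo one message back (like `nc -l`)."""
    conn, _addr = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))

# Bind to an ephemeral port on loopback, like `nc -l <port>`.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# Connect as a client, like `nc 127.0.0.1 <port>`.
with socket.create_connection(("127.0.0.1", port), timeout=5) as client:
    client.sendall(b"ping")
    reply = client.recv(1024)
server.close()
print(reply)  # b'ping'
```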
The tcpdump command captures packets on an interface and can do anything from simply logging packet headers to performing a full packet capture. See https://www.tcpdump.org/ for more information and documentation.
1. Which command provides the routing table of a Linux host?
a. show ip route
b. ip route
c. ip address
d. route print
2. Which system is responsible for firewalls on a Linux host?
a. kitchentable
b. mactables
c. ipdrop
d. iptables

Docker Networking
When Docker containers are deployed for multitier applications, they must be able to communicate information between them. However, you should make sure that only the appropriate entities can access any given service. Conversations directly to the data tier or the application tier should not happen without first going through the presentation tier. The presentation tier should talk to the application tier and not go directly to the data tier. Good segmentation of services enforces this behavior.

Docker Networking: Driver Types


There are several types of network drivers for Docker containers. The figure shows the primary drivers for
the containers on a single host. There are additional drivers that are available for use with Docker Swarm or
Kubernetes. Networking with Kubernetes will be discussed later.

Driver Deployment Model
• Bridge: Host-only Layer 2 software bridge; utilizes NAT to expose services externally
• Host: Uses the host network namespace; all containers use the same interfaces
• Overlay: Encapsulation provided by kernel Virtual Extensible LAN (VXLAN) interfaces; control plane provided by Docker
• macvlan: One IP address per container; no NAT, no encapsulation; less portable and requires some host configuration

A bridge network driver is local to the host on which it is built. You can have multiple bridges, and Docker installs one by default. With a bridge driver, you must use NAT to talk to individual containers from outside the host. Host-based drivers allow the containers to bind directly to the host networking stack; no additional IP addresses are used. Overlay drivers are used for connecting multiple hosts to the same network. Finally, macvlan acts as a bridged network on a virtual machine and extends the LAN segment on which the host resides into the container.

Docker Networking: Overview


Networking typically requires translations of some sort.
• Bridges are built when containers on the same host need to interact with each other
• Similar concept to a VLAN
• To route between containers, build a bridge network

Concerning bridging, a packet will come in on an Ethernet interface, pass through the kernel, and into the
user space in which the container resides.

Docker Networking: Bridge


Here you see two hosts and the bridged networking setup.
• Uses Linux bridges as a method to attach to the host network stack
• veth is a virtual Ethernet interface, must exist through another virtual interface

On Docker host 1, the Linux bridge sits on top of the iptables setup. Combining iptables and eth0 forms the Layer 3 gateway of the network. The containers get addresses from the 172.18.0.0/24 network, yet the external side has the 192.168.2.17 address; NAT translation bridges the two. Containers on the same bridge (namespace) are able to talk to each other, as in traditional Layer 2 networking. Docker uses the same IP addressing within the bridge on every host, so if two separate Docker containers have the same IP address but are on different hosts, there is no conflict. From outside, you must address the external side of the NAT to reach the containers.

Docker Networking: Custom Bridges


To facilitate separation, Docker allows custom bridges to be created within a host.
• Creates Layer 2 separation within the Docker host
• Similar concept to creating a VLAN on a switch

Custom bridges provide the same separation as VLANs within a physical network (campus or data center).
C1 has a different network segment than C2 and C3. C2 and C3 can communicate with each other without
having to go through a policy enforcement point. To create a custom bridge, the command docker network
is used. Specifically, the command docker network create -d bridge my_custom_bridge will create a new
custom bridge named my_custom_bridge.

Docker Networking: Host Mode


With the host network driver, the containers attach to the network stack of the physical host.
• Connects a container to the host network stack
• Similar to installation of the app directly on the host

There are no separate networks, and the namespace separation disappears. So, if there is an nginx container running that exposes port 443, port 443 on the host is exposed. The ports that are needed for the container are bound to the host stack and passed through. Note that only one container can use a given port; there will be errors if two containers attempt to use the same port. To use the host network driver, add the command flag --network=host. This flag binds the ports the container uses directly to the host network stack.

Docker Networking: Overlay
The overlay network driver uses VXLAN to help facilitate a Layer 2 network between physical hosts.
• Extend Layer 2 across Layer 3 boundaries
• Leverages VXLAN Tunnel Endpoint (VTEP) functionality and distributes MAC addresses to hosts in
the Overlay network

A Virtual Extensible LAN (VXLAN) overlay network makes a device (container or host) believe that it has a Layer 2 adjacency with another device across a traditional Layer 3 boundary. A VXLAN Tunnel Endpoint (VTEP) is an endpoint that encapsulates traffic entering the overlay network and de-encapsulates traffic leaving it. Encapsulation can be done in hardware or software.
To help facilitate communication between containers on different hosts, Docker Engine has a local DNS
server component to resolve IP addresses within the overlay network. Within the overlay, statically
programmed address resolution protocol (ARP) entries are used to help the local operating system know
where to send the traffic.
Here is a detailed look at the packet flow that is involved with an overlay:
1. c1 sends a packet that is destined for c2.
2. The VXLAN Tunnel Endpoint (VTEP) sees this packet and encapsulates it to send to the proper host
based on the VXLAN network.
3. The packet goes from host-A to host-B.
4. host-B de-encapsulates the VXLAN header and sends the packet on to c2, which arrives as expected.
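In step 2, the VTEP prepends (along with outer UDP/IP headers) an 8-byte VXLAN header that carries the 24-bit VXLAN network identifier (VNI). A minimal sketch of that header, following the RFC 7348 layout:

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # "I" flag: the VNI field is valid (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags, reserved bits, 24-bit VNI, reserved byte."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # First 32-bit word: flags byte followed by 24 reserved bits.
    # Second 32-bit word: VNI in the top 24 bits, low byte reserved.
    return struct.pack("!II", VXLAN_FLAG_VNI_VALID << 24, vni << 8)

header = vxlan_header(5010)
print(len(header))  # 8
print(header.hex())
```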

Docker Networking: Macvlan
A macvlan driver allows the containers on the local host to expose their network stack to the physical
underlay network.
• Used to extend a unique MAC address of a container onto the host network stack, giving the appearance
of being on the local network, and allowing it to have its own IP address from the local network

From the local network segment, the network will see this container as another device with its own MAC address. The default gateway should be the gateway for the network segment on which the host resides. To create a macvlan network, use the command docker network create -d macvlan --subnet=192.168.10.0/24 --gateway=192.168.10.1 -o parent=ens160 my_custom_macvlan. This command creates a macvlan network with a subnet of 192.168.10.0/24 and assigns it to the parent network interface ens160.

Docker Networking: Network Plug-Ins
Network plug-ins help extend the capabilities within the network stack for Docker. Some of the common
plug-ins are Contiv, Weave, and Kuryr. Each of these plug-ins extends the capabilities of Docker in
different ways.

In 2016, when Cisco acquired ContainerX, it gained access to the Contiv Open Source plug-in. Cisco has
continued to keep Contiv as an Open Source project, but it is still instrumental in helping extend the vision
of the Cisco Data Center into Docker containers. Contiv unifies containers, VMs, and application-centric
infrastructure (ACI) in a single network fabric. This process allows containers to be accessed from bare
metal and VMs in a single policy framework. It works within the ACI framework to extend ACI capabilities
for policy enforcement and connectivity into Docker containers.
Weave is similar in nature to the Docker overlay in that it builds connectivity between containers that reside
on different hosts. It boasts the capability of a mesh network that can tolerate and recover from network
partitions, has service discovery, and enables encryption between all nodes.
Kuryr is a project that enables Docker containers to join an OpenStack framework. It bridges container
frameworks to the OpenStack networking abstraction. The project aims to integrate containers with
OpenStack.
These plug-ins are not the only plug-ins that are available for Docker networking. Many more plug-ins are
available. Calico is a plug-in that is gaining traction in the community. Another source of plug-ins is the
Docker website: https://hub.docker.com/search/?type=plugin.
1. Which Docker networking feature allows the network to natively see a container?
a. host
b. bridge
c. macvlan
d. linvlan

Discovery 5: Explore Docker Networking
Introduction
The basics of networking never change. However, networking shifted with the arrival of virtualization, and now it is shifting again to container-based networking. You will develop a high-level and practical view of Docker networking. You will learn how to create, modify, and view various Docker network types and how they can be applied to Docker containers.

Topology

Job Aids

Chapter 1 Device Information

Device Description FQDN/IP Address Credentials

Student Workstation Linux Ubuntu VM 192.168.10.10 student, 1234QWer

Command List
The table describes the commands that are used in this activity. The commands are listed in alphabetical
order so that you can easily locate the information that you need. Refer to this list if you need configuration
command assistance during the lab activity.

Command Description

cd directory name To change directories within the Linux file system, use the cd
command. You will use this command to enter a directory where the
scripts are housed. You can use tab completion to finish the name of
the directory after you start typing it.

docker container ls -a This command views the containers that are configured on the host
system. The -a flag also shows containers that are not up.


docker container rm -f container This command removes a container, and optionally forces its removal,
even if it is operational.

docker container stop container This command stops a container that is currently running.

docker exec -it container command This command allows you to run commands on the container. The
command is any valid command on the container. The -i flag is for
interactive, and the -t flag is for creating a pseudo-TTY to the
container.

docker network inspect This command displays detailed information on one or more Docker
networks.

docker network ls This command lists the Docker networks

docker run -itd -p port --name container container_registry / gitlab_organization / gitlab_project / container:tag command
This command runs, or obtains from a container registry and then runs, a container. The -i flag is for interactive, the -t flag is for creating a pseudo-TTY to the container, and the -d flag runs the container in a detached state. The command is any command that is valid on the container. The --name flag names the container as you intend, and does not randomly generate a name for you. The -p flag is for port; it can be in either host_port:container_port format, or port format.

git clone repository This command downloads or clones a Git repository into the directory
that is the name of the project in the repository definition.

Task 1: Run a Container and Inspect Networking


Settings
You will build a container and then inspect the various elements of the default networks.

Activity

Run an Alpine Image

Step 1 In the student workstation, find and open Visual Studio Code by clicking the Visual Studio Code icon.

Step 2 From the Visual Studio Code top navigation bar, choose Terminal > New Terminal [Ctrl-Shift-`].

Step 3 Navigate to the terminal section at the bottom of Visual Studio Code.

Step 4 Within the Visual Studio Code terminal, change the directory to ~/labs/lab05 using the cd ~/labs/lab05
command.

Step 5 Issue the docker run -itd --name alpine_net registry.git.lab/cisco-devops/containers/alpine command. This command will pull the image if necessary and run the container.

student@student-vm:$ cd ~/labs/lab05/
student@student-vm:labs/lab05$ docker run -itd --name alpine_net
registry.git.lab/cisco-devops/containers/alpine
76ac1761a55c9ac78f4ca85f6e58196e5593c6f82a6f63244ab67f33d460735f
student@student-vm:labs/lab05$

List the Available Networks

Step 6 Use the docker network ls command to verify that the network drivers are already installed on the host.

student@student-vm:labs/lab05$ docker network ls


NETWORK ID NAME DRIVER SCOPE
ba6756271a37 bridge bridge local
24d8830fa72a host host local
a5e456c5baf1 none null local
fd8f8d872e91 services_bridge bridge local
student@student-vm:labs/lab05$

Inspect the Default Network Bridge


Further investigate the configuration of the network by issuing the docker network inspect network_name
command against the name of the network. The command provides insight into all aspects of the network
including the routing table, identifiers, driver, and containers using the network. Take a moment to
understand the various components by reviewing the output of the command.

Step 7 Issue the docker network inspect bridge command. You will see the subnet and default gateway that all
containers connecting to the default bridge will use.

student@student-vm:labs/lab05$ docker network inspect bridge
[
{
"Name": "bridge",
"Id": "ba6756271a373b75fd11cb219ba28bde53aaa4429e0a46fdba7195be6af99dc8",
"Created": "2019-10-29T20:41:49.649384934Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"76ac1761a55c9ac78f4ca85f6e58196e5593c6f82a6f63244ab67f33d460735f": {
"Name": "alpine_net",
"EndpointID":
"92b647b8bbe2545cbd628fff00a2a880e22060db1947afc96fe627ff51193540",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
student@student-vm:labs/lab05$
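Because docker network inspect emits JSON, fields of interest can be pulled out with a few lines of Python. This sketch works on an abbreviated copy of the output above:

```python
import json

# Abbreviated copy of the `docker network inspect bridge` output above.
INSPECT = """
[
  {
    "Name": "bridge",
    "Driver": "bridge",
    "IPAM": {"Config": [{"Subnet": "172.18.0.0/16", "Gateway": "172.18.0.1"}]},
    "Containers": {
      "76ac1761a55c": {"Name": "alpine_net", "IPv4Address": "172.18.0.2/16"}
    }
  }
]
"""

def summarize(inspect_json: str) -> dict:
    """Return the subnet, gateway, and container name -> address map for a network."""
    net = json.loads(inspect_json)[0]  # inspect output is a JSON list of networks
    ipam = net["IPAM"]["Config"][0]
    return {
        "subnet": ipam["Subnet"],
        "gateway": ipam["Gateway"],
        "containers": {
            c["Name"]: c["IPv4Address"] for c in net["Containers"].values()
        },
    }

print(summarize(INSPECT))
```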

Connect to the Container and Review Network Output


From inside the container, review how the information compares to what Docker reports via the docker
network inspect command.

Step 8 Issue the docker exec -it alpine_net sh command to connect to the alpine_net container.

Step 9 From within the container, issue the ip addr show command to review the output of the IP and MAC
addresses.

Step 10 From within the container, issue the ip route list command to review the routing table. Compare the output
against the Docker reported output.

Step 11 Issue the exit command to return to host.

student@student-vm:labs/lab05$ docker exec -it alpine_net sh


/ # ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
258: eth0@if259: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state
UP
link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.2/16 brd 172.18.255.255 scope global eth0
valid_lft forever preferred_lft forever
/ # ip route list
default via 172.18.0.1 dev eth0
172.18.0.0/16 dev eth0 scope link src 172.18.0.2
/ # exit
student@student-vm:labs/lab05$

Task 2: Use Host Networking


You will examine how the host network exposes all ports without any additional configuration. Note that there are drawbacks to this approach, such as port conflicts with services already exposed on the host itself.

Activity

Run a Container Without a Host Network


Use the standard method for running a container with a simple HTTP server running. In doing so, you will
notice that there is no connection to the port.

Step 1 Build the container and expose the container to TCP port 8000 using the docker run -itd --name py3_net
registry.git.lab/cisco-devops/containers/python37:latest python3 -m http.server 8000 command. This
command will start a simple Python web server within the container and expose it via port 8000. Take note
of the command python3 -m http.server 8000 that is run within the container.

Step 2 Verify the new web server using the Chrome browser by connecting to http://127.0.0.1:8000. You will see
that you cannot connect to the server. A port was not explicitly set and host networking was not used.

student@student-vm:labs/lab05$ docker run -itd --name py3_net registry.git.lab/cisco-


devops/containers/python37:latest python3 -m http.server 8000
e1eae3ea0a5f27f5987f34c0e883121d819262cc20b8c23fcc7ab284caf156d6
student@student-vm:labs/lab05$
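You can confirm the failed connection from step 2 without a browser. The following is a minimal sketch using Python's stdlib socket module; the port_open helper is illustrative and not part of the lab files:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# With no published port and no host networking, the container's
# web server is unreachable from the host, so in this lab the
# check returns False.
print(port_open("127.0.0.1", 8000))
```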

© 2022 Cisco Systems, Inc. Implementing DevOps Solutions and Practices Using Cisco Platforms (DEVOPS) 187
Run a Container with a Host Network
After removing the existing container, you will build a container with a host network enabled and gain
access to the server.

Step 3 Remove the previous container by running the docker container rm -f py3_net command in the terminal.

Step 4 Build the new container with the --network=host flag. Use the docker run --network=host -itd --name
py3_net registry.git.lab/cisco-devops/containers/python37:latest python3 -m http.server 8000
command.

Step 5 Verify the new web server by using the Chrome browser and connecting to http://127.0.0.1:8000. You will
now be able to connect to the server.

student@student-vm:labs/lab05$ docker container rm -f py3_net


py3_net
student@student-vm:labs/lab05$ docker run --network=host -itd --name py3_net
registry.git.lab/cisco-devops/containers/python37:latest python3 -m http.server 8000
ea7f7a180ebe929afd92a4f9c786a232a60491af7fd2dc62cde63d683e8c7ed5
student@student-vm:labs/lab05$

Task 3: Use Custom Bridge Networking
Custom bridges already exist on this host, but they have not been examined in context or explored fully. You
will now create a custom bridge, attach a container to it, and explore its capabilities.

Containers that are assigned to the default bridge network automatically expose all ports to each other and
no ports to the outside world. Containers that are assigned to user-defined bridges can resolve each other by
name or alias and can be attached or unattached in real time, without requiring the container to be stopped.

Activity

Create and Attach a Custom Bridge

Step 1 Inspect the existing IP configuration using the following commands:

• ip address
• ip link
• ip route

Step 2 Create the custom bridge by issuing the docker network create -d bridge my_custom_bridge command.

Step 3 Attach a running container to the newly created network using the docker network connect
my_custom_bridge alpine_net command.

student@student-vm:labs/lab05$ ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen
1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default
qlen 1000
link/ether 00:50:56:9c:42:52 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.10/24 brd 192.168.10.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
259: veth7b34e26@if258: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master
docker0 state UP group default
link/ether 5a:38:22:3a:20:26 brd ff:ff:ff:ff:ff:ff link-netnsid 3
3: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default
qlen 1000
link/ether 00:0e:ef:99:63:10 brd ff:ff:ff:ff:ff:ff
inet 172.17.70.56/24 brd 172.17.70.255 scope global dynamic noprefixroute ens192
valid_lft 171sec preferred_lft 171sec
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group
default
link/ether 02:42:8d:75:de:c3 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 brd 172.18.255.255 scope global docker0
valid_lft forever preferred_lft forever
800: veth8d6a471@if799: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master
docker0 state UP group default
link/ether fe:19:d7:2d:bc:15 brd ff:ff:ff:ff:ff:ff link-netnsid 0
301: br-b59007db32d8: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN group default
link/ether 02:42:6a:3b:98:44 brd ff:ff:ff:ff:ff:ff
inet 172.24.0.1/16 brd 172.24.255.255 scope global br-b59007db32d8
valid_lft forever preferred_lft forever
388: br-abc14106d901: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN group default
link/ether 02:42:26:c8:c0:38 brd ff:ff:ff:ff:ff:ff
inet 192.168.32.1/20 brd 192.168.47.255 scope global br-abc14106d901
valid_lft forever preferred_lft forever
199: br-fd8f8d872e91: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN group default
link/ether 02:42:39:a5:c5:a2 brd ff:ff:ff:ff:ff:ff
inet 172.19.0.1/16 brd 172.19.255.255 scope global br-fd8f8d872e91
valid_lft forever preferred_lft forever
student@student-vm:labs/lab05$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group
default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT
group default qlen 1000
link/ether 00:50:56:9c:42:52 brd ff:ff:ff:ff:ff:ff
259: veth7b34e26@if258: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master
docker0 state UP mode DEFAULT group default
link/ether 5a:38:22:3a:20:26 brd ff:ff:ff:ff:ff:ff link-netnsid 3
3: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT
group default qlen 1000
link/ether 00:0e:ef:99:63:10 brd ff:ff:ff:ff:ff:ff
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode

DEFAULT group default
link/ether 02:42:8d:75:de:c3 brd ff:ff:ff:ff:ff:ff
800: veth8d6a471@if799: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master
docker0 state UP mode DEFAULT group default
link/ether fe:19:d7:2d:bc:15 brd ff:ff:ff:ff:ff:ff link-netnsid 0
301: br-b59007db32d8: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN mode DEFAULT group default
link/ether 02:42:6a:3b:98:44 brd ff:ff:ff:ff:ff:ff
388: br-abc14106d901: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN mode DEFAULT group default
link/ether 02:42:26:c8:c0:38 brd ff:ff:ff:ff:ff:ff
199: br-fd8f8d872e91: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN mode DEFAULT group default
link/ether 02:42:39:a5:c5:a2 brd ff:ff:ff:ff:ff:ff
student@student-vm:labs/lab05$ ip route
default via 192.168.10.1 dev ens160 proto static metric 100
169.254.0.0/16 dev ens160 scope link metric 1000
172.17.3.0/24 via 172.17.70.254 dev ens192 proto static metric 101
172.17.4.0/24 via 172.17.70.254 dev ens192 proto static metric 101
172.17.70.0/24 dev ens192 proto kernel scope link src 172.17.70.56 metric 101
172.18.0.0/16 dev docker0 proto kernel scope link src 172.18.0.1
172.19.0.0/16 dev br-fd8f8d872e91 proto kernel scope link src 172.19.0.1 linkdown
172.24.0.0/16 dev br-b59007db32d8 proto kernel scope link src 172.24.0.1 linkdown
192.168.10.0/24 dev ens160 proto kernel scope link src 192.168.10.10 metric 100
192.168.32.0/20 dev br-abc14106d901 proto kernel scope link src 192.168.32.1 linkdown
student@student-vm:labs/lab05$
student@student-vm:labs/lab05$ docker network create -d bridge my_custom_bridge
68193a2a020c69e684a92c0ecade96f90ec659b3d29026ecb5cafd2b1aeb5a4d
student@student-vm:labs/lab05$ docker network connect my_custom_bridge alpine_net
student@student-vm:labs/lab05$

Investigate the Network

Step 4 Issue the docker network inspect my_custom_bridge command and investigate the custom network's
detailed information. Take note of the subnet information for the new bridge.

Step 5 Inspect the existing IP configuration using the following commands and comparing the new output to the
previous output:

• ip address
• ip link
• ip route

student@student-vm:labs/lab05$ docker network inspect my_custom_bridge
[
{
"Name": "my_custom_bridge",
"Id": "68193a2a020c69e684a92c0ecade96f90ec659b3d29026ecb5cafd2b1aeb5a4d",
"Created": "2019-11-27T13:31:30.478867917Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.20.0.0/16",
"Gateway": "172.20.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"76ac1761a55c9ac78f4ca85f6e58196e5593c6f82a6f63244ab67f33d460735f": {
"Name": "alpine_net",
"EndpointID":
"44e41b6ad296acf4edfdfb63f45821e1c1d3626489d4d742eded6e11b287947c",
"MacAddress": "02:42:ac:14:00:02",
"IPv4Address": "172.20.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
student@student-vm:labs/lab05$ ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen
1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default
qlen 1000
link/ether 00:50:56:9c:42:52 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.10/24 brd 192.168.10.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
259: veth7b34e26@if258: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master
docker0 state UP group default
link/ether 5a:38:22:3a:20:26 brd ff:ff:ff:ff:ff:ff link-netnsid 3
3: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default
qlen 1000

link/ether 00:0e:ef:99:63:10 brd ff:ff:ff:ff:ff:ff
inet 172.17.70.56/24 brd 172.17.70.255 scope global dynamic noprefixroute ens192
valid_lft 223sec preferred_lft 223sec
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group
default
link/ether 02:42:8d:75:de:c3 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 brd 172.18.255.255 scope global docker0
valid_lft forever preferred_lft forever
800: veth8d6a471@if799: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master
docker0 state UP group default
link/ether fe:19:d7:2d:bc:15 brd ff:ff:ff:ff:ff:ff link-netnsid 0
804: br-68193a2a020c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
group default
link/ether 02:42:cc:c9:ea:a9 brd ff:ff:ff:ff:ff:ff
inet 172.20.0.1/16 brd 172.20.255.255 scope global br-68193a2a020c
valid_lft forever preferred_lft forever
806: veth60d150c@if805: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master
br-68193a2a020c state UP group default
link/ether aa:0a:ae:99:dd:21 brd ff:ff:ff:ff:ff:ff link-netnsid 3
301: br-b59007db32d8: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN group default
link/ether 02:42:6a:3b:98:44 brd ff:ff:ff:ff:ff:ff
inet 172.24.0.1/16 brd 172.24.255.255 scope global br-b59007db32d8
valid_lft forever preferred_lft forever
388: br-abc14106d901: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN group default
link/ether 02:42:26:c8:c0:38 brd ff:ff:ff:ff:ff:ff
inet 192.168.32.1/20 brd 192.168.47.255 scope global br-abc14106d901
valid_lft forever preferred_lft forever
199: br-fd8f8d872e91: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN group default
link/ether 02:42:39:a5:c5:a2 brd ff:ff:ff:ff:ff:ff
inet 172.19.0.1/16 brd 172.19.255.255 scope global br-fd8f8d872e91
valid_lft forever preferred_lft forever
student@student-vm:labs/lab05$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group
default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT
group default qlen 1000
link/ether 00:50:56:9c:42:52 brd ff:ff:ff:ff:ff:ff
259: veth7b34e26@if258: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master
docker0 state UP mode DEFAULT group default
link/ether 5a:38:22:3a:20:26 brd ff:ff:ff:ff:ff:ff link-netnsid 3
3: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT
group default qlen 1000
link/ether 00:0e:ef:99:63:10 brd ff:ff:ff:ff:ff:ff
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode
DEFAULT group default
link/ether 02:42:8d:75:de:c3 brd ff:ff:ff:ff:ff:ff
800: veth8d6a471@if799: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master
docker0 state UP mode DEFAULT group default
link/ether fe:19:d7:2d:bc:15 brd ff:ff:ff:ff:ff:ff link-netnsid 0
804: br-68193a2a020c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
mode DEFAULT group default
link/ether 02:42:cc:c9:ea:a9 brd ff:ff:ff:ff:ff:ff

806: veth60d150c@if805: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master
br-68193a2a020c state UP mode DEFAULT group default
link/ether aa:0a:ae:99:dd:21 brd ff:ff:ff:ff:ff:ff link-netnsid 3
301: br-b59007db32d8: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN mode DEFAULT group default
link/ether 02:42:6a:3b:98:44 brd ff:ff:ff:ff:ff:ff
388: br-abc14106d901: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN mode DEFAULT group default
link/ether 02:42:26:c8:c0:38 brd ff:ff:ff:ff:ff:ff
199: br-fd8f8d872e91: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN mode DEFAULT group default
link/ether 02:42:39:a5:c5:a2 brd ff:ff:ff:ff:ff:ff
student@student-vm:labs/lab05$
student@student-vm:labs/lab05$ ip route
default via 192.168.10.1 dev ens160 proto static metric 100
169.254.0.0/16 dev ens160 scope link metric 1000
172.17.3.0/24 via 172.17.70.254 dev ens192 proto static metric 101
172.17.4.0/24 via 172.17.70.254 dev ens192 proto static metric 101
172.17.70.0/24 dev ens192 proto kernel scope link src 172.17.70.56 metric 101
172.18.0.0/16 dev docker0 proto kernel scope link src 172.18.0.1
172.19.0.0/16 dev br-fd8f8d872e91 proto kernel scope link src 172.19.0.1 linkdown
172.20.0.0/16 dev br-68193a2a020c proto kernel scope link src 172.20.0.1
172.24.0.0/16 dev br-b59007db32d8 proto kernel scope link src 172.24.0.1 linkdown
192.168.10.0/24 dev ens160 proto kernel scope link src 192.168.10.10 metric 100
192.168.32.0/20 dev br-abc14106d901 proto kernel scope link src 192.168.32.1 linkdown
student@student-vm:labs/lab05$
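Because docker network inspect emits JSON, its fields can be extracted programmatically instead of read by eye. A sketch using Python's stdlib json module against a trimmed copy of the inspect output shown above (the container ID is shortened for readability):

```python
import json

# Trimmed sample of the `docker network inspect my_custom_bridge` output.
inspect_output = """
[
  {
    "Name": "my_custom_bridge",
    "IPAM": {"Config": [{"Subnet": "172.20.0.0/16", "Gateway": "172.20.0.1"}]},
    "Containers": {
      "76ac1761a55c": {"Name": "alpine_net", "IPv4Address": "172.20.0.2/16"}
    }
  }
]
"""

network = json.loads(inspect_output)[0]
subnet = network["IPAM"]["Config"][0]["Subnet"]
containers = {c["Name"]: c["IPv4Address"] for c in network["Containers"].values()}
print(subnet)      # 172.20.0.0/16
print(containers)  # {'alpine_net': '172.20.0.2/16'}
```

In practice, you would feed the real command output into json.loads rather than a pasted string.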

Task 4: Use Macvlan Networking


Macvlan networking is a Docker capability that exposes the container's network interface directly on the
physical network through the host. This capability is especially helpful for applications that require Layer 2
connectivity.

The --subnet, --gateway, and -o flags are introduced here. The --subnet and --gateway values represent the subnet
and gateway of the network on which the host resides and from which this custom macvlan interface will be
advertised. The -o flag allows you to pass a driver option, which, in this case, is the parent interface.
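For reference, the same macvlan network can also be declared declaratively. The following is a sketch of a Docker Compose networks entry, assuming the same parent interface and addressing that this lab uses on the CLI:

```yaml
networks:
  my_custom_macvlan:
    driver: macvlan
    driver_opts:
      parent: ens160            # maps to the -o parent= CLI flag
    ipam:
      config:
        - subnet: 192.168.10.10/24
          gateway: 192.168.10.1
```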

Activity

Create and Attach a Macvlan Network

Step 1 Create the macvlan network by issuing the docker network create -d macvlan --subnet=192.168.10.10/24 --
gateway=192.168.10.1 -o parent=ens160 my_custom_macvlan command. Note that the driver type is defined as
macvlan, and note the syntax of the parent interface.

Step 2 Attach a running container to the newly created network by using the docker network connect
my_custom_macvlan alpine_net command.

student@student-vm:labs/lab05$ docker network create -d macvlan --
subnet=192.168.10.10/24 --gateway=192.168.10.1 -o parent=ens160 my_custom_macvlan
62910fe42ee8167e71abc316b7d8339dbe55fe1682e15516ce4ab7756728fb41
student@student-vm:labs/lab05$ docker network connect my_custom_macvlan alpine_net
student@student-vm:labs/lab05$

Investigate the Network

Step 3 Issue the docker network inspect my_custom_macvlan command and investigate the detailed custom
network information.

student@student-vm:labs/lab05$ docker network inspect my_custom_macvlan
[
{
"Name": "my_custom_macvlan",
"Id": "62910fe42ee8167e71abc316b7d8339dbe55fe1682e15516ce4ab7756728fb41",
"Created": "2019-11-06T12:36:50.180151603Z",
"Scope": "local",
"Driver": "macvlan",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.10.10/24",
"Gateway": "192.168.10.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"76ac1761a55c9ac78f4ca85f6e58196e5593c6f82a6f63244ab67f33d460735f": {
"Name": "alpine_net",
"EndpointID":
"b2b2e3ee94defa2c80dbc88d50485e5d92ca77032f019384fd2874817e0238dc",
"MacAddress": "02:42:c0:a8:0a:02",
"IPv4Address": "192.168.10.2/24",
"IPv6Address": ""
}
},
"Options": {
"parent": "ens160"
},
"Labels": {}
}
]
student@student-vm:labs/lab05$

Summary
You reviewed some of the Docker networking capabilities, created various network types, and viewed the
status of various networking states on both the host and container.

Docker Compose
Now that Docker networking and Docker build have been discussed, how do you orchestrate and work in a
larger environment? The Docker Compose solution makes working in a larger environment possible. The
environment will have multiple tiers, including an external web, internal web, API, back end, and data tier.
Putting this environment together consistently is a detailed task. A custom shell script to repeat actions may
help, but Docker Compose can manage more than just the containers.
Multitier, multiple-container applications are complicated.
Every new application needs to go through these steps for every new deployment:
• Create new bridges and networks
• Load all Dockerfiles
• Start every container (in the right network)

Creating the proper Linux bridge networks, gathering the Dockerfiles, and starting each container in the
proper network becomes burdensome very quickly. Docker Compose manages this process and captures it in a
YAML file that is easier to read and maintain.

What Is Docker Compose?
Docker Compose is the Docker orchestration and automation tool for managing multicontainer applications,
which are defined in a YAML (.yml) file.

By default, Docker Compose searches for a docker-compose.yml file in the current directory. To start all of the
containers that are defined in the Compose file, the docker-compose up -d command is issued. The -d option
instructs the Docker host to start the containers in a detached state, just as it does with docker run. The
--build flag will build all the images before starting the containers.
To bring down a composed infrastructure, use the docker-compose down command. When issued, the
Docker Compose engine stops the containers and removes the containers and networks that were created
during docker-compose up. Additional flags can be set to also remove the volumes and images that
were created.

In the YAML file, there are three primary top-level definitions: the version of the Compose file, the
services (containers), and the networks that are to be built.

Docker Compose File Configuration
The following are examples to give some context for the YAML file structure.

Note The example is not a complete Docker Compose file.

version: '3'
services:
  netinv_db:
    build:
      context: .
      dockerfile: "Dockerfile_db"
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: 'net_inventory'
      POSTGRES_USER: 'root'
      POSTGRES_PASSWORD: 'Cisco123'

  netinv_frontend:
    build:
      context: .
      dockerfile: "Dockerfile_frontend"
    ports:
      - "5000:5000"
    restart: always
    environment:
      ENV: 'FRONTEND'
      URL: 'http://netinv_backend:5001'
    volumes:
      - .:/app
    depends_on:
      - netinv_backend

networks:
  backend_network:
    driver: "bridge"
  frontend_network:
    driver: "bridge"

There are many commands that can be used in the Docker Compose file. Some of the relevant commands
for services include (but are not limited to) build, ports, environment, env_file, depends_on, and
volumes. Refer to https://docs.docker.com/compose/compose-file/ for the complete list of all the available
Compose configuration options.
The build command holds the configuration options that are applied at build time. Its suboptions include
context, which sets the location for the build; dockerfile, which names the Dockerfile that is used to build the
container; and args, labels, and target.
The ports command maps container ports to host ports on bridge networks. These port mappings are ignored
when using the host network type.
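The short ports syntax supports several forms. An illustrative fragment (the values here are examples, not taken from the lab files):

```yaml
ports:
  - "5000:5000"              # host port 5000 to container port 5000
  - "127.0.0.1:5001:5001"    # publish only on one host address
  - "8080-8081:8080-8081"    # a range of ports
```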

It is not recommended that you put secret information directly into the YAML file for Docker Compose;
these files are strong candidates for inclusion in source control, and sensitive information should never go
into files that are committed to source control. Therefore, the use of the env_file option is recommended.
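As an illustration, secrets can be moved out of the Compose file into an environment file that is excluded from source control. The db.env filename below is an example, not part of the lab files:

```yaml
services:
  netinv_db:
    env_file:
      - db.env   # contains lines such as POSTGRES_PASSWORD=...; keep out of Git
```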
The depends_on configuration option allows dependencies to be satisfied before a particular container is
started. Likely scenarios include requiring that the data tier container is operating before the application
tier container starts, and that the data and application tier containers are operating before the presentation
tier container starts.
The volumes command is used to mount host paths or volumes in the service container. Volumes can also
be part of the top-level key of the YAML file if you want them to be available for each of the defined
services.
1. Which two components that are configured by the docker-compose up -d command will be
removed automatically when docker-compose down is issued? (Choose two.)
a. volumes
b. networks
c. containers
d. images
e. folders

Discovery 6: Build and Deploy an Application
Using Docker Compose
Introduction
Becoming proficient in a technology like Docker can take months to years, depending on how often you use it.
Now you will examine the next level of abstraction within Docker: Docker Compose. Docker Compose
allows you to build multicontainer applications by using a YAML configuration file that references
individual Dockerfiles. Any functionality that can be placed in a Dockerfile, or used in a docker
run command, can be used in Docker Compose to simplify managing and working with services and applications
that require multiple Docker containers.
In this introduction to Docker Compose, you will review the basics of building a multicontainer application
and use the Network Inventory application.

Topology

Job Aids

Device Information

Device Description FQDN/IP Address Credentials

Student Workstation Linux Ubuntu VM 192.168.10.10 student, 1234QWer

Command List
The table describes the commands that are used in this activity. The commands are listed in alphabetical
order so that you can easily locate the information that you need. Refer to this list if you need configuration
command assistance during the lab activity.

Command Description

cd directory name To change directories within the Linux file system, use the cd

command. You will use this command to enter a directory where the
scripts are housed. You can use tab completion to finish the name of
the directory after you start typing it.

cp file_source file_destination This command copies a file from the defined source to the defined
destination.

docker container ls -a This command views the containers that are configured on the host
system. The -a flag also shows containers that are not up.

docker-compose build This command builds the images for the application but does not create or start
any containers.

docker-compose down This command destroys the application by stopping and removing the
containers.

docker-compose up -d This command builds and starts the application. The -d flag runs the
application in the background.

git clone repository This command downloads (clones) a Git repository into a directory that is named
after the project in the repository URL.

Docker Compose Keywords


Docker Compose describes a multicontainer application with a set of YAML keywords. The FROM, LABEL,
WORKDIR, RUN, ENV, EXPOSE, and ENTRYPOINT instructions that you used earlier belong to the
individual Dockerfiles that a Compose file references; the Compose keywords that you will use are
described in the following tables.

Root Keywords
This table shows top-level keywords.

Keyword Description

networks This keyword describes any network-specific instructions. Each key is
the name of a network, with the attributes of that network described within.

services This keyword describes any container-specific instructions. Each key is
the name of a container, with its attributes described within.

version This keyword describes the Docker Compose reference file instruction
set that you are using.

Services Keywords
These keywords describe the container builds that exist within the services root keyword and under the
named definition of the containers. As an example, under the services keyword, you would expect to see
keys for web, db, and app, and within each of these keys, the following keywords would be used to describe
those individual applications.

Keyword Description

build This keyword describes how to build the application. The subkeys
context and dockerfile are used here. The context sets the folder
location for the build, and the dockerfile keyword indicates the filename.

depends_on This keyword describes the order in which containers must be
started by stating the containers on which a service depends. Docker
Compose computes the resulting start order.

env_file This keyword describes the location of a file whose contents will be
loaded as environment variables in the container.

environment This keyword describes a key-value pairing that will create
environment variables in the container.

networks This keyword describes a list of networks to which the container should
be attached.

ports This keyword describes the list of ports that should be published for the
container.

Network Keywords
These keywords describe the networks that are used in the application.

Keyword Description

driver This keyword describes the type of network driver that is used, such as
bridge.

Task 1: Write the Docker Compose Blueprint


You will build the Network Inventory application using a Docker Compose file that is named docker-
compose.yml. The Compose file refers to various Dockerfiles to construct a multicontainer application.

This task introduces the docker-compose command. This command allows you to describe the entire
imaging and build process and to build, run, or destroy in a single command. For a single host instance,
Docker Compose can manage the entire application lifecycle.

Activity

Step 1 In the student workstation, find and open Visual Studio Code by clicking the Visual Studio Code icon.

Step 2 From the Visual Studio Code top navigation bar, choose Terminal > New Terminal [Ctrl-Shift-`].

Step 3 Navigate to the terminal section at the bottom of Visual Studio Code.

Step 4 Within the Visual Studio Code terminal, change the directory to ~/labs/lab06 using the cd ~/labs/lab06
command.

Step 5 Issue the git clone https://git.lab/cisco-devops/net_inventory command.

Step 6 Change the directory to net_inventory by issuing the cd net_inventory command.

Note You should open Visual Studio Code to the same folder.

student@student-vm:$ cd ~/labs/lab06/
student@student-vm:labs/lab06$ git clone https://git.lab/cisco-devops/net_inventory
Cloning into 'net_inventory'...
warning: redirecting to https://git.lab/cisco-devops/net_inventory.git/
remote: Enumerating objects: 416, done.
remote: Counting objects: 100% (416/416), done.
remote: Compressing objects: 100% (114/114), done.
remote: Total 416 (delta 290), reused 416 (delta 290)
Receiving objects: 100% (416/416), 3.10 MiB | 14.16 MiB/s, done.
Resolving deltas: 100% (290/290), done.
student@student-vm:labs/lab06$ cd net_inventory/
student@student-vm:lab06/net_inventory (master)$

Start to Build the Docker Compose File


The docker-compose.yml file was already partially prepared in the lab06 directory; it has two containers
already defined. You will copy that file into the main project directory and then modify it, following
the same structure.

The two top-level attributes are version and services. The version attribute indicates which Compose file
reference is being used, which is 3 in this case. The services keyword describes the containers and each
dictionary key under it is the name of the container.

Note Refer to Job Aids for more information on the keywords.

Step 7 Copy the docker-compose.yml file from the lab06 directory to the net_inventory directory by issuing the
cp ../docker-compose.yml ./ command.

Step 8 Open the docker-compose.yml file in Visual Studio Code.

Step 9 Explore the Docker Compose file and note the build, restart, environment, ports, and depends_on
keywords.

student@student-vm:lab06/net_inventory (master)$ cp ../docker-compose.yml ./


student@student-vm:lab06/net_inventory (master)$

Next, you will copy the back-end service definition within the Compose file and modify the copy to define
the front-end container, as the following steps describe. Remember that the Compose file is YAML, which is
especially sensitive to syntax; a single misplaced space can create an entirely different meaning.

Step 10 In the docker-compose.yml file, copy the section starting with netinv_backend: and ending with - netinv_db.
Paste that code at the end of the file.

Step 11 Modify the name netinv_backend: to netinv_frontend:.

Step 12 Change the dockerfile: value from Dockerfile_backend to Dockerfile_frontend.

Step 13 Modify the ports value from 5001:5001 to 5000:5000.

Step 14 Edit the ENV value from BACKEND to FRONTEND.

Step 15 Rename the SECRET_KEY keyword to URL and change its value from
aj8j6PIbaJpJXBS8jjvcylT84G+1UhxjXQFz6pJPNuQ= to http://netinv_backend:5001.

Step 16 Change the depends_on value from netinv_db to netinv_backend.

Step 17 Save the file by pressing Ctrl-s.

Step 18 View the final configuration by issuing the cat docker-compose.yml command.

student@student-vm:lab06/net_inventory (master)$ cat docker-compose.yml
---
version: '3'
services:

  netinv_db:
    build:
      context: .
      dockerfile: "Dockerfile_db"
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: 'net_inventory'
      POSTGRES_USER: 'root'
      POSTGRES_PASSWORD: 'Cisco123'

  netinv_backend:
    build:
      context: .
      dockerfile: "Dockerfile_backend"
    restart: always
    environment:
      ENV: "BACKEND"
      SECRET_KEY: 'aj8j6PIbaJpJXBS8jjvcylT84G+1UhxjXQFz6pJPNuQ='
      SQLALCHEMY_DATABASE_URI: 'postgresql+psycopg2://root:Cisco123@netinv_db/net_inventory'
    volumes:
      - .:/app
    ports:
      - "5001:5001"
    depends_on:
      - netinv_db

  netinv_frontend:
    build:
      context: .
      dockerfile: "Dockerfile_frontend"
    ports:
      - "5000:5000"
    restart: always
    environment:
      ENV: 'FRONTEND'
      URL: 'http://netinv_backend:5001'
    volumes:
      - .:/app
    depends_on:
      - netinv_backend

student@student-vm:lab06/net_inventory (master)$

Task 2: Connect Containers with Private Networking
Now you will build two networks for the front- and back-end communication and then apply those networks
to their respective containers.

Activity

You will examine a new top-level keyword, networks. This keyword allows you to build networks, similar
to the docker network command set.
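For comparison, creating the same kind of networks by hand would look like the following sketch of the docker network command set (illustrative only; when Compose creates the networks, it prefixes the names with the project name, such as net_inventory_backend_network):

```shell
docker network create --driver bridge backend_network
docker network create --driver bridge frontend_network
docker network ls
```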

Step 1 At the bottom of the docker-compose.yml file, create the keyword networks:. Make sure that there are no
spaces in front of it.

Step 2 Create a backend_network dictionary key with a single key-value pair of driver: bridge added under the key.
Refer to the solution for further clarity on spacing; each dictionary should be two spaces to the right of its
parent.

Step 3 Create a dictionary key for frontend_network at the same indentation level as backend_network, with a
single key-value pair of driver: bridge one line underneath.

Step 4 View the final configuration by issuing the cat docker-compose.yml command.

student@student-vm:lab06/net_inventory (master)$ cat docker-compose.yml
---
version: '3'
services:

  netinv_db:
    build:
      context: .
      dockerfile: "Dockerfile_db"
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: 'net_inventory'
      POSTGRES_USER: 'root'
      POSTGRES_PASSWORD: 'Cisco123'

  netinv_backend:
    build:
      context: .
      dockerfile: "Dockerfile_backend"
    restart: always
    environment:
      ENV: "BACKEND"
      SECRET_KEY: 'aj8j6PIbaJpJXBS8jjvcylT84G+1UhxjXQFz6pJPNuQ='
      SQLALCHEMY_DATABASE_URI: 'postgresql+psycopg2://root:Cisco123@netinv_db/net_inventory'
    volumes:
      - .:/app
    ports:
      - "5001:5001"
    depends_on:
      - netinv_db

  netinv_frontend:
    build:
      context: .
      dockerfile: "Dockerfile_frontend"
    ports:
      - "5000:5000"
    restart: always
    environment:
      ENV: 'FRONTEND'
      URL: 'http://netinv_backend:5001'
    volumes:
      - .:/app
    depends_on:
      - netinv_backend

networks:
  backend_network:
    driver: "bridge"
  frontend_network:
    driver: "bridge"
student@student-vm:lab06/net_inventory (master)$

Add Custom Networks to Containers
So far you have created the networks, but they are not assigned to any containers. Now you will add the
backend_network network to the netinv_db and netinv_backend containers and the frontend_network
network to the netinv_backend and netinv_frontend containers for proper isolation of traffic. In this
scenario, there is no reason for the front end to talk directly to the database.
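The intended isolation can be summarized with a small hypothetical sketch (service and network names mirror the Compose file; this model is illustrative, not part of the lab): containers can reach each other only when they share at least one network.

```python
# Network attachments planned in Steps 5 through 7.
attachments = {
    "netinv_db": {"backend_network"},
    "netinv_backend": {"backend_network", "frontend_network"},
    "netinv_frontend": {"frontend_network"},
}

def can_talk(service_a, service_b):
    """True if the two services share at least one Docker network."""
    return bool(attachments[service_a] & attachments[service_b])

# The back end bridges both tiers, but the front end never reaches the DB.
assert can_talk("netinv_backend", "netinv_db")
assert can_talk("netinv_frontend", "netinv_backend")
assert not can_talk("netinv_frontend", "netinv_db")
```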

Step 5 Create a dictionary key under netinv_db for networks:, with a single list element of: - backend_network.

Step 6 Create a dictionary key under netinv_backend for networks:, with two list elements of: - backend_network,
and - frontend_network.

Step 7 Create a dictionary key under netinv_frontend for networks:, with a single list element of: - frontend_network.

Step 8 View the final configuration by issuing the cat docker-compose.yml command.

student@student-vm:lab06/net_inventory (master)$ cat docker-compose.yml
---
version: '3'
services:

  netinv_db:
    build:
      context: .
      dockerfile: "Dockerfile_db"
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: 'net_inventory'
      POSTGRES_USER: 'root'
      POSTGRES_PASSWORD: 'Cisco123'
    networks:
      - "backend_network"

  netinv_backend:
    build:
      context: .
      dockerfile: "Dockerfile_backend"
    restart: always
    environment:
      ENV: "BACKEND"
      SECRET_KEY: 'aj8j6PIbaJpJXBS8jjvcylT84G+1UhxjXQFz6pJPNuQ='
      SQLALCHEMY_DATABASE_URI: 'postgresql+psycopg2://root:Cisco123@netinv_db/net_inventory'
    volumes:
      - .:/app
    ports:
      - "5001:5001"
    depends_on:
      - netinv_db
    networks:
      - "backend_network"
      - "frontend_network"

  netinv_frontend:
    build:
      context: .
      dockerfile: "Dockerfile_frontend"
    ports:
      - "5000:5000"
    restart: always
    environment:
      ENV: 'FRONTEND'
      URL: 'http://netinv_backend:5001'
    volumes:
      - .:/app
    depends_on:
      - netinv_backend
    networks:
      - "frontend_network"

networks:
  backend_network:
    driver: "bridge"
  frontend_network:
    driver: "bridge"

student@student-vm:lab06/net_inventory (master)$

Task 3: Convert Environment Variables to Files


You will separate the environment variables from the docker-compose file. Using the Docker env_file
construct, you will keep secrets in dedicated files rather than in the docker-compose file itself.

Activity

Copy the Prestaged Environment Files


You will copy the prestaged env files, which hold the environment variables previously defined in the
Compose file, and review the contents of each container's file.

Step 1 Copy the env_file_db file from the parent directory into the local directory with the cp ../env_file_db ./
command.

Step 2 View the file contents by issuing the cat env_file_db command.

Step 3 Copy the file env_file_backend from the parent directory into the local directory using the cp ../env_file_backend ./
command.

Step 4 View the file contents by issuing the cat env_file_backend command.

Step 5 Copy the file env_file_frontend from the parent directory into the local directory by issuing the cp
../env_file_frontend ./ command.

Step 6 View the file contents by issuing the cat env_file_frontend command.

student@student-vm:lab06/net_inventory (master)$ cp ../env_file_db ./


student@student-vm:lab06/net_inventory (master)$ cat env_file_db
POSTGRES_DB=net_inventory
POSTGRES_USER=root
POSTGRES_PASSWORD=Cisco123
PGDATA=/var/lib/postgresql/data/pgdata
student@student-vm:lab06/net_inventory (master)$ cp ../env_file_backend ./
student@student-vm:lab06/net_inventory (master)$ cat env_file_backend
ENV=BACKEND
SECRET_KEY=aj8j6PIbaJpJXBS8jjvcylT84G+1UhxjXQFz6pJPNuQ=
SQLALCHEMY_DATABASE_URI='postgresql+psycopg2://root:Cisco123@netinv_db/net_inventory'
student@student-vm:lab06/net_inventory (master)$ cp ../env_file_frontend ./
student@student-vm:lab06/net_inventory (master)$ cat env_file_frontend
ENV=FRONTEND
URL=http://netinv_backend:5001
student@student-vm:lab06/net_inventory (master)$
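Each env file uses the plain KEY=VALUE format that the env_file option reads, one variable per line. The following is a minimal parser sketch of that format (quote handling differs across Compose versions, so this version takes values literally; it is an illustration, not code the lab requires):

```python
def parse_env_file(text):
    """Parse KEY=VALUE lines, skipping blanks and # comments.

    Values are kept literally (no quote stripping), mirroring one common
    way env_file entries are passed straight into the container.
    """
    env = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")  # split on the first '=' only
        env[key.strip()] = value
    return env

sample = "POSTGRES_DB=net_inventory\nPOSTGRES_USER=root\nPOSTGRES_PASSWORD=Cisco123\n"
print(parse_env_file(sample))
```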

Point the Docker Compose File Reference to the env Files


You will change the references to the environment variables from the file to reference the env files that you
created.

Step 7 In the docker-compose.yml file, for the netinv_db: service, replace the dictionary key and its values for
environment: with the key-value pair of env_file: env_file_db.

Step 8 For the netinv_backend: service, replace the dictionary key and its values for environment: with the
key-value pair of env_file: env_file_backend.

Step 9 For the netinv_frontend: service, replace the dictionary key and its values for environment: with the
key-value pair of env_file: env_file_frontend.

Step 10 View the final configuration by issuing the cat docker-compose.yml command.

student@student-vm:lab06/net_inventory (master)$ cat docker-compose.yml
---
version: '3'
services:

  netinv_db:
    build:
      context: .
      dockerfile: "Dockerfile_db"
    ports:
      - "5432:5432"
    env_file: "env_file_db"
    networks:
      - "backend_network"

  netinv_backend:
    build:
      context: .
      dockerfile: "Dockerfile_backend"
    restart: always
    env_file: "env_file_backend"
    volumes:
      - .:/app
    ports:
      - "5001:5001"
    depends_on:
      - netinv_db
    networks:
      - "backend_network"
      - "frontend_network"

  netinv_frontend:
    build:
      context: .
      dockerfile: "Dockerfile_frontend"
    ports:
      - "5000:5000"
    restart: always
    env_file: "env_file_frontend"
    volumes:
      - .:/app
    depends_on:
      - netinv_backend
    networks:
      - "frontend_network"

networks:
  backend_network:
    driver: "bridge"
  frontend_network:
    driver: "bridge"
student@student-vm:lab06/net_inventory (master)$

Task 4: Deploy and Validate
You will deploy the application using the docker-compose command set for the first time. This command
set assumes that the application is described in a docker-compose.yml file in the same directory or by a file
specified using the -f filename flag. Consult the command list for further information. Now you will build
and deploy the application. This process can be combined in one command if desired.
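For example, the following invocations are an illustrative sketch of the defaults and the -f flag; the first two are equivalent when the file has its default name:

```shell
docker-compose up -d                           # reads ./docker-compose.yml by default
docker-compose -f docker-compose.yml up -d     # same file, named explicitly
docker-compose build && docker-compose up -d   # build first, then deploy
docker-compose up -d --build                   # or build and deploy in one command
```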

Activity

Build the Service


You will build the container images for each service that is defined in the Compose file.

Step 1 Build, but do not deploy, the application by issuing the docker-compose build command from your
terminal. This command will perform the Docker build operation for all containers within the Compose file.

student@student-vm:lab06/net_inventory (master)$ docker-compose build
Building netinv_db
Step 1/5 : FROM registry.git.lab/cisco-devops/containers/postgres:latest
---> 3eda284d1840
Step 2/5 : LABEL description="This is a postgres db for net inventory Flask app"
---> Using cache
---> 183b8e45019a
Step 3/5 : LABEL maintainer="Cisco <[email protected]>"
---> Using cache
---> f128461b0142
Step 4/5 : LABEL version="0.1"
---> Using cache
---> 7b434c55c1a5
Step 5/5 : EXPOSE 5432/tcp
---> Running in 3e80db89624d
Removing intermediate container 3e80db89624d
---> 403575381334
Successfully built 403575381334
Successfully tagged net_inventory_netinv_db:latest
Building netinv_backend
Step 1/10 : FROM registry.git.lab/cisco-devops/containers/python37:latest
---> dd4eec63855e
Step 2/10 : LABEL description="This is a net inventory backend flask application"
---> Running in 3ff7fd365dab
Removing intermediate container 3ff7fd365dab
---> 78d8c7431340
Step 3/10 : LABEL maintainer="Cisco <[email protected]>"
---> Running in 978b2679ce62
Removing intermediate container 978b2679ce62
---> 945cce032bbf
Step 4/10 : LABEL version="0.1"
---> Running in 03d4f1dddb32
Removing intermediate container 03d4f1dddb32
---> 161c3b336022
Step 5/10 : ADD ./ /net_inventory
---> fa9beac43e4a
Step 6/10 : WORKDIR /net_inventory/
---> Running in efd825df974d
Removing intermediate container efd825df974d
---> 25ecf7f5f2fc
Step 7/10 : RUN apt install -y git vim
---> Running in 5bfec0724e27

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

Reading package lists...


Building dependency tree...
Reading state information...
git is already the newest version (1:2.11.0-3+deb9u4).
vim is already the newest version (2:8.0.0197-4+deb9u3).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Removing intermediate container 5bfec0724e27
---> 515a49f531a1
Step 8/10 : RUN pip install -r ./requirements.txt
---> Running in 9bae51f06129
Requirement already satisfied: alembic==1.2.1 in /usr/local/lib/python3.7/site-packages

(from -r ./requirements.txt (line 1)) (1.2.1)
Requirement already satisfied: asn1crypto==0.24.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 2)) (0.24.0)
Requirement already satisfied: attrs==19.2.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 3)) (19.2.0)
Requirement already satisfied: bcrypt==3.1.7 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 4)) (3.1.7)
Requirement already satisfied: black==19.10b0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 5)) (19.10b0)
Requirement already satisfied: certifi==2019.9.11 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 6)) (2019.9.11)
Requirement already satisfied: cffi==1.12.3 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 7)) (1.12.3)
Requirement already satisfied: chardet==3.0.4 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 8)) (3.0.4)
Requirement already satisfied: Click==7.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 9)) (7.0)
Requirement already satisfied: cryptography==2.7 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 10)) (2.7)
Requirement already satisfied: enum34==1.1.6 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 11)) (1.1.6)
Requirement already satisfied: Faker==2.0.2 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 12)) (2.0.2)
Requirement already satisfied: flasgger==0.9.3 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 13)) (0.9.3)
Requirement already satisfied: Flask==1.1.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 14)) (1.1.1)
Requirement already satisfied: Flask-Login==0.4.1 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 15)) (0.4.1)
Requirement already satisfied: flask-marshmallow==0.10.1 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 16)) (0.10.1)
Requirement already satisfied: Flask-Migrate==2.5.2 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 17)) (2.5.2)
Requirement already satisfied: Flask-Script==2.0.6 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 18)) (2.0.6)
Requirement already satisfied: Flask-SQLAlchemy==2.4.1 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 19)) (2.4.1)
Requirement already satisfied: Flask-WTF==0.14.2 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 20)) (0.14.2)
Requirement already satisfied: funcsigs==1.0.2 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 21)) (1.0.2)
Requirement already satisfied: idna==2.8 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 22)) (2.8)
Requirement already satisfied: ipaddress==1.0.22 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 23)) (1.0.22)
Requirement already satisfied: itsdangerous==1.1.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 24)) (1.1.0)
Requirement already satisfied: Jinja2==2.10.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 25)) (2.10.1)
Requirement already satisfied: jsonschema==2.6.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 26)) (2.6.0)
Requirement already satisfied: Mako==1.1.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 27)) (1.1.0)
Requirement already satisfied: MarkupSafe==1.1.1 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 28)) (1.1.1)
Requirement already satisfied: marshmallow==2.20.5 in /usr/local/lib/python3.7/site-

packages (from -r ./requirements.txt (line 29)) (2.20.5)
Requirement already satisfied: marshmallow-sqlalchemy==0.18.0 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 30)) (0.18.0)
Requirement already satisfied: mistune==0.8.4 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 31)) (0.8.4)
Requirement already satisfied: pip==19.3.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 32)) (19.3.1)
Requirement already satisfied: psycopg2==2.8.4 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 33)) (2.8.4)
Requirement already satisfied: pycparser==2.19 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 34)) (2.19)
Requirement already satisfied: pyrsistent==0.15.4 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 35)) (0.15.4)
Requirement already satisfied: python-dateutil==2.8.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 36)) (2.8.0)
Requirement already satisfied: python-dotenv==0.10.3 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 37)) (0.10.3)
Requirement already satisfied: python-editor==1.0.4 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 38)) (1.0.4)
Requirement already satisfied: PyYAML==5.1.2 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 39)) (5.1.2)
Requirement already satisfied: requests==2.22.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 40)) (2.22.0)
Requirement already satisfied: setuptools==41.4.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 41)) (41.4.0)
Requirement already satisfied: six==1.12.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 42)) (1.12.0)
Requirement already satisfied: SQLAlchemy==1.3.8 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 43)) (1.3.8)
Requirement already satisfied: SQLAlchemy-Utils==0.34.2 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 44)) (0.34.2)
Requirement already satisfied: text-unidecode==1.3 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 45)) (1.3)
Requirement already satisfied: typing==3.7.4.1 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 46)) (3.7.4.1)
Requirement already satisfied: urllib3==1.25.6 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 47)) (1.25.6)
Requirement already satisfied: Werkzeug==0.16.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 48)) (0.16.0)
Requirement already satisfied: wheel==0.33.6 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 49)) (0.33.6)
Requirement already satisfied: WTForms==2.2.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 50)) (2.2.1)
Requirement already satisfied: pathspec<1,>=0.6 in /usr/local/lib/python3.7/site-
packages (from black==19.10b0->-r ./requirements.txt (line 5)) (0.6.0)
Requirement already satisfied: regex in /usr/local/lib/python3.7/site-packages (from
black==19.10b0->-r ./requirements.txt (line 5)) (2019.11.1)
Requirement already satisfied: typed-ast>=1.4.0 in /usr/local/lib/python3.7/site-
packages (from black==19.10b0->-r ./requirements.txt (line 5)) (1.4.0)
Requirement already satisfied: toml>=0.9.4 in /usr/local/lib/python3.7/site-packages
(from black==19.10b0->-r ./requirements.txt (line 5)) (0.10.0)
Requirement already satisfied: appdirs in /usr/local/lib/python3.7/site-packages (from
black==19.10b0->-r ./requirements.txt (line 5)) (1.4.3)
Removing intermediate container 9bae51f06129
---> 3f5edc6128bc
Step 9/10 : EXPOSE 5001/tcp

---> Running in 60f26584c12b
Removing intermediate container 60f26584c12b
---> ce3fa38815e7
Step 10/10 : ENTRYPOINT python run.py
---> Running in ef32b6d52023
Removing intermediate container ef32b6d52023
---> 01d9e0088db4
Successfully built 01d9e0088db4
Successfully tagged net_inventory_netinv_backend:latest
Building netinv_frontend
Step 1/9 : FROM registry.git.lab/cisco-devops/containers/python37:latest
---> dd4eec63855e
Step 2/9 : LABEL description="This is a net inventory frontend flask application"
---> Running in 792eeca86f26
Removing intermediate container 792eeca86f26
---> 3554f9a36a93
Step 3/9 : LABEL maintainer="Cisco <[email protected]>"
---> Running in 87a7855333b3
Removing intermediate container 87a7855333b3
---> 38d8f5e2d2d0
Step 4/9 : LABEL version="0.1"
---> Running in 16a1fdd0e40d
Removing intermediate container 16a1fdd0e40d
---> 8eb72dce8358
Step 5/9 : ADD ./ /net_inventory
---> d1f6542a92b9
Step 6/9 : WORKDIR /net_inventory/
---> Running in 719817b07686
Removing intermediate container 719817b07686
---> 7b2043445092
Step 7/9 : RUN pip install -r ./requirements.txt
---> Running in 9e21635af462
Requirement already satisfied: alembic==1.2.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 1)) (1.2.1)
Requirement already satisfied: asn1crypto==0.24.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 2)) (0.24.0)
Requirement already satisfied: attrs==19.2.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 3)) (19.2.0)
Requirement already satisfied: bcrypt==3.1.7 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 4)) (3.1.7)
Requirement already satisfied: black==19.10b0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 5)) (19.10b0)
Requirement already satisfied: certifi==2019.9.11 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 6)) (2019.9.11)
Requirement already satisfied: cffi==1.12.3 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 7)) (1.12.3)
Requirement already satisfied: chardet==3.0.4 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 8)) (3.0.4)
Requirement already satisfied: Click==7.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 9)) (7.0)
Requirement already satisfied: cryptography==2.7 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 10)) (2.7)
Requirement already satisfied: enum34==1.1.6 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 11)) (1.1.6)
Requirement already satisfied: Faker==2.0.2 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 12)) (2.0.2)

Requirement already satisfied: flasgger==0.9.3 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 13)) (0.9.3)
Requirement already satisfied: Flask==1.1.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 14)) (1.1.1)
Requirement already satisfied: Flask-Login==0.4.1 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 15)) (0.4.1)
Requirement already satisfied: flask-marshmallow==0.10.1 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 16)) (0.10.1)
Requirement already satisfied: Flask-Migrate==2.5.2 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 17)) (2.5.2)
Requirement already satisfied: Flask-Script==2.0.6 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 18)) (2.0.6)
Requirement already satisfied: Flask-SQLAlchemy==2.4.1 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 19)) (2.4.1)
Requirement already satisfied: Flask-WTF==0.14.2 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 20)) (0.14.2)
Requirement already satisfied: funcsigs==1.0.2 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 21)) (1.0.2)
Requirement already satisfied: idna==2.8 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 22)) (2.8)
Requirement already satisfied: ipaddress==1.0.22 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 23)) (1.0.22)
Requirement already satisfied: itsdangerous==1.1.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 24)) (1.1.0)
Requirement already satisfied: Jinja2==2.10.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 25)) (2.10.1)
Requirement already satisfied: jsonschema==2.6.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 26)) (2.6.0)
Requirement already satisfied: Mako==1.1.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 27)) (1.1.0)
Requirement already satisfied: MarkupSafe==1.1.1 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 28)) (1.1.1)
Requirement already satisfied: marshmallow==2.20.5 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 29)) (2.20.5)
Requirement already satisfied: marshmallow-sqlalchemy==0.18.0 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 30)) (0.18.0)
Requirement already satisfied: mistune==0.8.4 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 31)) (0.8.4)
Requirement already satisfied: pip==19.3.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 32)) (19.3.1)
Requirement already satisfied: psycopg2==2.8.4 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 33)) (2.8.4)
Requirement already satisfied: pycparser==2.19 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 34)) (2.19)
Requirement already satisfied: pyrsistent==0.15.4 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 35)) (0.15.4)
Requirement already satisfied: python-dateutil==2.8.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 36)) (2.8.0)
Requirement already satisfied: python-dotenv==0.10.3 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 37)) (0.10.3)
Requirement already satisfied: python-editor==1.0.4 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 38)) (1.0.4)
Requirement already satisfied: PyYAML==5.1.2 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 39)) (5.1.2)
Requirement already satisfied: requests==2.22.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 40)) (2.22.0)

Requirement already satisfied: setuptools==41.4.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 41)) (41.4.0)
Requirement already satisfied: six==1.12.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 42)) (1.12.0)
Requirement already satisfied: SQLAlchemy==1.3.8 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 43)) (1.3.8)
Requirement already satisfied: SQLAlchemy-Utils==0.34.2 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 44)) (0.34.2)
Requirement already satisfied: text-unidecode==1.3 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 45)) (1.3)
Requirement already satisfied: typing==3.7.4.1 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 46)) (3.7.4.1)
Requirement already satisfied: urllib3==1.25.6 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 47)) (1.25.6)
Requirement already satisfied: Werkzeug==0.16.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 48)) (0.16.0)
Requirement already satisfied: wheel==0.33.6 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 49)) (0.33.6)
Requirement already satisfied: WTForms==2.2.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 50)) (2.2.1)
Requirement already satisfied: appdirs in /usr/local/lib/python3.7/site-packages (from
black==19.10b0->-r ./requirements.txt (line 5)) (1.4.3)
Requirement already satisfied: pathspec<1,>=0.6 in /usr/local/lib/python3.7/site-
packages (from black==19.10b0->-r ./requirements.txt (line 5)) (0.6.0)
Requirement already satisfied: typed-ast>=1.4.0 in /usr/local/lib/python3.7/site-
packages (from black==19.10b0->-r ./requirements.txt (line 5)) (1.4.0)
Requirement already satisfied: toml>=0.9.4 in /usr/local/lib/python3.7/site-packages
(from black==19.10b0->-r ./requirements.txt (line 5)) (0.10.0)
Requirement already satisfied: regex in /usr/local/lib/python3.7/site-packages (from
black==19.10b0->-r ./requirements.txt (line 5)) (2019.11.1)
Removing intermediate container 9e21635af462
---> 702c295a51f5
Step 8/9 : EXPOSE 5000/tcp
---> Running in ff8187062327
Removing intermediate container ff8187062327
---> 4bc83631278f
Step 9/9 : ENTRYPOINT python run.py
---> Running in 01b64ed8c6b8
Removing intermediate container 01b64ed8c6b8
---> faf65bc8f438
Successfully built faf65bc8f438
Successfully tagged net_inventory_netinv_frontend:latest
student@student-vm:lab06/net_inventory (master)$

Deploy the Application with Docker Compose


You will deploy the application and review the state of the containers after they are started.

Step 2 Deploy the application in the foreground to view the startup output by issuing the docker-compose up
command.

Step 3 Press Ctrl-C to stop the containers.

Step 4 Deploy the application in the background by adding the -d flag and issuing the docker-compose up -d
command.

Step 5 View the state of the containers by issuing the docker container ls command. Note that the containers are
automatically named with the [application]_[service]_[num] format.
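The naming convention can be sketched as follows (the project name defaults to the directory holding the Compose file, net_inventory here; the trailing number starts at 1 and increments when a service is scaled):

```python
project = "net_inventory"  # defaults to the Compose file's directory name
services = ["netinv_db", "netinv_backend", "netinv_frontend"]

# Compose names containers [application]_[service]_[num].
names = [f"{project}_{service}_1" for service in services]
print(names)  # includes net_inventory_netinv_db_1, etc.
```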

Step 6 Verify that the containers are up and operational.

student@student-vm:lab06/net_inventory (master)$ docker-compose up
Creating network "net_inventory_backend_network" with driver "bridge"
Creating network "net_inventory_frontend_network" with driver "bridge"
Creating net_inventory_netinv_db_1 ... done
Creating net_inventory_netinv_backend_1 ... done
Creating net_inventory_netinv_frontend_1 ... done
Attaching to net_inventory_netinv_db_1, net_inventory_netinv_backend_1,
net_inventory_netinv_frontend_1
netinv_backend_1 | * Serving Flask app "app" (lazy loading)
netinv_backend_1 | * Environment: production
netinv_backend_1 | WARNING: This is a development server. Do not use it in a
production deployment.
netinv_backend_1 | Use a production WSGI server instead.
netinv_backend_1 | * Debug mode: on
netinv_backend_1 | * Running on http://0.0.0.0:5001/ (Press CTRL+C to quit)
netinv_backend_1 | * Restarting with stat
netinv_db_1 | The files belonging to this database system will be owned by user
"postgres".
netinv_db_1 | This user must also own the server process.
netinv_db_1 |
netinv_db_1 | The database cluster will be initialized with locale "en_US.utf8".
netinv_db_1 | The default database encoding has accordingly been set to "UTF8".
netinv_db_1 | The default text search configuration will be set to "english".
netinv_db_1 |
netinv_db_1 | Data page checksums are disabled.
netinv_db_1 |
netinv_db_1 | fixing permissions on existing directory
/var/lib/postgresql/data/pgdata ... ok
netinv_db_1 | creating subdirectories ... ok
netinv_db_1 | selecting default max_connections ... 100
netinv_db_1 | selecting default shared_buffers ... 128MB
netinv_db_1 | selecting dynamic shared memory implementation ... posix
netinv_db_1 | creating configuration files ... ok
netinv_db_1 | running bootstrap script ... ok
netinv_db_1 | performing post-bootstrap initialization ... ok
netinv_db_1 | syncing data to disk ...
netinv_db_1 | WARNING: enabling "trust" authentication for local connections
netinv_db_1 | You can change this by editing pg_hba.conf or using the option -A,
or
netinv_db_1 | --auth-local and --auth-host, the next time you run initdb.
netinv_db_1 | ok
netinv_db_1 |
netinv_db_1 | Success. You can now start the database server using:
netinv_db_1 |
netinv_db_1 | pg_ctl -D /var/lib/postgresql/data/pgdata -l logfile start
netinv_db_1 |
netinv_db_1 | waiting for server to start....2019-11-09 19:21:29.713 UTC [41]
LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
netinv_db_1 | 2019-11-09 19:21:29.800 UTC [42] LOG: database system was shut
down at 2019-11-09 19:21:29 UTC
netinv_db_1 | 2019-11-09 19:21:29.897 UTC [41] LOG: database system is ready to
accept connections
netinv_db_1 | done
netinv_db_1 | server started
netinv_db_1 | CREATE DATABASE
netinv_db_1 |

222 Implementing DevOps Solutions and Practices Using Cisco Platforms (DEVOPS) © 2022 Cisco Systems, Inc.
netinv_db_1 |
netinv_db_1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-
initdb.d/*
netinv_db_1 |
netinv_db_1 | 2019-11-09 19:21:30.456 UTC [41] LOG: received fast shutdown
request
netinv_db_1 | waiting for server to shut down....2019-11-09 19:21:30.458 UTC
[41] LOG: aborting any active transactions
netinv_db_1 | 2019-11-09 19:21:30.467 UTC [41] LOG: background worker "logical
replication launcher" (PID 48) exited with exit code 1
netinv_db_1 | 2019-11-09 19:21:30.470 UTC [43] LOG: shutting down
netinv_db_1 | 2019-11-09 19:21:30.513 UTC [41] LOG: database system is shut
down
netinv_db_1 | done
netinv_db_1 | server stopped
netinv_db_1 |
netinv_db_1 | PostgreSQL init process complete; ready for start up.
netinv_db_1 |
netinv_db_1 | 2019-11-09 19:21:30.577 UTC [1] LOG: listening on IPv4 address
"0.0.0.0", port 5432
netinv_db_1 | 2019-11-09 19:21:30.579 UTC [1] LOG: could not create IPv6 socket
for address "::": Address family not supported by protocol
netinv_db_1 | 2019-11-09 19:21:30.591 UTC [1] LOG: listening on Unix socket
"/var/run/postgresql/.s.PGSQL.5432"
netinv_db_1 | 2019-11-09 19:21:30.623 UTC [59] LOG: database system was shut
down at 2019-11-09 19:21:30 UTC
netinv_db_1 | 2019-11-09 19:21:30.636 UTC [1] LOG: database system is ready to
accept connections
netinv_backend_1 | * Debugger is active!
netinv_backend_1 | * Debugger PIN: 666-071-495
netinv_frontend_1 | * Serving Flask app "app" (lazy loading)
netinv_frontend_1 | * Environment: production
netinv_frontend_1 | WARNING: This is a development server. Do not use it in a
production deployment.
netinv_frontend_1 | Use a production WSGI server instead.
netinv_frontend_1 | * Debug mode: on
netinv_frontend_1 | * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
netinv_frontend_1 | * Restarting with stat
netinv_frontend_1 | * Debugger is active!
netinv_frontend_1 | * Debugger PIN: 188-396-762
^CGracefully stopping... (press Ctrl+C again to force)
Stopping net_inventory_netinv_frontend_1 ... done
Stopping net_inventory_netinv_backend_1 ... done
Stopping net_inventory_netinv_db_1 ... done
student@student-vm:lab06/net_inventory (master)$ docker-compose up -d
Creating net_inventory_netinv_db_1 ... done
Creating net_inventory_netinv_backend_1 ... done
Creating net_inventory_netinv_frontend_1 ... done
student@student-vm:lab06/net_inventory (master)$ docker container ls
CONTAINER ID   IMAGE                           COMMAND                  CREATED        STATUS        PORTS                    NAMES
b65354943bfa   net_inventory_netinv_frontend   "/bin/sh -c 'python …"   2 minutes ago  Up 3 seconds  0.0.0.0:5000->5000/tcp   net_inventory_netinv_frontend_1
03b924081b8c   net_inventory_netinv_backend    "/bin/sh -c 'python …"   2 minutes ago  Up 4 seconds  0.0.0.0:5001->5001/tcp   net_inventory_netinv_backend_1
c8047dd1a1cf   net_inventory_netinv_db         "docker-entrypoint.s…"   2 minutes ago  Up 5 seconds  0.0.0.0:5432->5432/tcp   net_inventory_netinv_db_1
student@student-vm:lab06/net_inventory (master)$

Verify the Application

Step 7 Run the populate_inventory command and enter 127.0.0.1:5001 for the server and port information. The
script will populate the network inventory database.

Note Port 5001 is serving the backend API Docker container. Port 5000 is serving the frontend container.

student@student-vm:$ populate_inventory
Enter the server and port info : 127.0.0.1:5001
nyc-rt01: Added successfully
nyc-rt02: Added successfully
rtp-rt01: Added successfully
rtp-rt02: Added successfully

Step 8 Using the Chrome browser, connect to the local TCP 5000 port. Navigate to http://127.0.0.1:5000 to view
the network inventory.

Remove the Application
Finally, you will shut down the containers and application with a single command.

Step 9 Remove the application by issuing the docker-compose down command.

student@student-vm:lab06/net_inventory (master)$ docker-compose down


Stopping net_inventory_netinv_frontend_1 ... done
Stopping net_inventory_netinv_backend_1 ... done
Stopping net_inventory_netinv_db_1 ... done
Removing net_inventory_netinv_frontend_1 ... done
Removing net_inventory_netinv_backend_1 ... done
Removing net_inventory_netinv_db_1 ... done
Removing network net_inventory_backend_network
Removing network net_inventory_frontend_network
student@student-vm:lab06/net_inventory (master)

Summary
You discovered different mechanisms for deploying containers. You added features such as networking and
separation of environment variables and secrets. The docker-compose.yml file was used to describe both
containers and images in a single file. You also learned about the power of the docker-compose command.
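To make the single-file idea concrete, the docker-compose.yml behind this lab might look roughly like the following sketch. The service names, published ports, and network names are inferred from the command output earlier in the lab (for example, net_inventory_netinv_frontend_1 and net_inventory_backend_network), and the Dockerfile names are hypothetical; the actual file in the repository may differ.

```yaml
version: "3"

services:
  netinv_db:
    build:
      context: .
      dockerfile: Dockerfile_db   # hypothetical Dockerfile name
    ports:
      - "5432:5432"
    networks:
      - backend_network

  netinv_backend:
    build:
      context: .
      dockerfile: Dockerfile_backend   # hypothetical Dockerfile name
    ports:
      - "5001:5001"
    networks:
      - backend_network
      - frontend_network

  netinv_frontend:
    build:
      context: .
      dockerfile: Dockerfile_frontend   # hypothetical Dockerfile name
    ports:
      - "5000:5000"
    networks:
      - frontend_network

networks:
  backend_network:
  frontend_network:
```

Note that Docker Compose prefixes the project name (here, net_inventory) onto container and network names, which is why the lab output shows names such as net_inventory_backend_network.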

Summary Challenge
1. Which Linux networking component maps directly to the concept of a VRF instance?
a. iptables
b. ip arp
c. netstat + ss
d. namespaces
2. Which Linux component is responsible for firewall functionality on a Linux host?
a. iptables
b. ip arp
c. netstat + ss
d. namespaces
3. Which Docker networking driver component allows multiple Layer 2 segments to be built?
a. custom bridges
b. default bridges
c. host-based
d. overlay
e. macvlan
4. Which Docker networking driver maps a port in use on the container to a port on the host?
a. custom bridges
b. default bridges
c. host-based
d. overlay
e. macvlan
5. Which Docker networking driver allows a container to get an IP address on the network segment to
which the host device NIC is connected?
a. custom bridges
b. default bridges
c. host-based
d. overlay
e. macvlan
6. Which network plug-in allows Cisco ACI to be extended into Docker containers?
a. Contiv
b. Kuryr
c. OpenStack
d. Weave
7. Which configuration of a Docker Compose YAML file is not a top-level configuration item?
a. volumes
b. networks
c. services
d. images

Answer Key
Linux Networking
1. B
2. D

Docker Networking
1. C

Docker Compose
1. B, C

Summary Challenge
1. D
2. A
3. A
4. C
5. E
6. A
7. D

Section 5: Introducing CI/CD

Introduction
Continuous integration/continuous delivery/continuous deployment, more commonly known as CI/CD, is a
methodology for continuously integrating and continuously delivering solutions.
A CI/CD process is commonly initiated with a request to integrate code into the feature branch of an app,
such as a main (master) or development branch. Tests are then run, and peer review of the change is
requested to approve the implementation. Once the tests are completed, the changes can be merged into the
feature branch.
Continuous delivery is the next step after CI. In this step, everything is packaged and made ready for
production. An artifact is created and made available for download or it can be automatically moved to the
next phase, deployment.
Continuous deployment is an organizational decision to be made. Up to this point, everything has been
automatic. The tests, integration, and creation of an artifact have all been done within the system. If
continuous deployment is used, then the system automatically puts changes into production. In the case of
applications, a website may use continuous deployment, whereas a mobile app that has periodic updates is
likely to use continuous delivery, where the code is promoted to production and updated manually.

Continuous Integration
The purpose of CI is to have code integrated into the feature branch frequently, sometimes several times per
day. By checking code into the repository regularly and receiving frequent feedback, large "merge" conflicts
at the end of a project can be avoided.

CI Overview
A key concept in CI is the need to upload code into the code base frequently, often several times a day.
Some faster integrations may upload multiple times per day, whereas others may upload code only a few
times per week. The frequency also depends on where in the development cycle the project is.
• Developer work is constantly merged with the code base.
• Use of testing catches problems early in the process.

The use of frequent code check-ins, combined with automated testing, helps detect any bugs early in the
development process. This approach contrasts with waiting until the final product is ready to start looking
for problems. With CI, much of the testing is done by a machine, making frequent testing an inexpensive
process. The CI tool helps find little mistakes and notifies you of what needs to be corrected.

CI Benefits
CI increases your speed to market, helps you realize revenue and results faster, and allows you to deliver
new features quicker and more reliably.
• Move faster.
• Improve reliability.

Smaller changes reduce debugging complexity, allow for more efficient debugging, reduce the number of
bugs in a project, and therefore increase uptime. The automated testing that is part of the CI process means
that less time is spent on manual testing to make sure that the code is in a good state. Therefore, you can
spend more time developing enhancements. With more enhancements, a feedback mechanism is required to
make sure that you can work on the features that are needed sooner.

Build: Integration
Integration is the act of merging code. In an organization with multiple developers working in parallel on
the same project, the code must be combined into the single code base.
• Bring code together
• Linting
• Unit tests
• Compile and build components

Integration requires that components work together properly, which requires testing. Testing includes
regression testing to ensure that bugs are not reintroduced. Properly written tests increase the confidence
that the code will have fewer bugs (and hopefully none).

Build Test: Linting
Linting is the act of analyzing code to identify programming errors, bugs, style errors, or incorrect amounts
of white space. This type of testing is very fast and should be done early in the CI testing when problems are
easier to fix. Your organization should work to establish styles so that as multiple individuals work on code,
the coding style remains the same. For example, should a variable for an interface be named intf, interface,
or something else?
• Analyze code to detect logic or style issues.
• Formatting tools

Logic issues, such as unused variables and unused imports, should be avoided. You should not create a
variable and then never use it, or import a module and never use it. Unused variables and imports consume
memory and should only be created when they are needed. These issues are easy for a linter to
catch. Note that many integrated development environments (IDEs) have built-in linter support for specific
programming languages.
The linters listed in the figure are for Python. Other programming languages have similar linters available.
Compilers often manage many of these activities for you. Python and Ruby are interpreted languages with
no compile step to check syntax, which is why linters are especially valuable for these languages.

• Pylint is a style-checking linter. It is one of the oldest and most mature Python linters. It is a bit slower
than some of the other linters but is very complete. Each message is printed with a code at the
beginning: codes that start with "W" are warnings, and codes that start with "E" are errors, each
followed by a four-digit number.
• Pyflakes is a syntax checker for source files. It only looks for errors and is not a style checker. If you
want to use Pyflakes with a format standard such as PEP8, then the Python module flake8 is the
recommended module.
• Bandit is a security analysis tool. It looks for issues like passwords in files and the use of unsafe
modules when there are other modules that are newer and safer. This tool is a static code analyzer; it
does not run the code. It analyzes the code only by looking at it.
• pycodestyle is another code style-analysis tool that is similar to Pylint. It will check your code style
against the PEP8 standard. pydocstyle is a documentation string checker that makes sure that functions
and methods have appropriately formatted doc strings.
• The Black code formatting tool automatically reformats a Python file to a single, consistent style that is
compatible with PEP8. This tool speeds up code review because formatting issues are corrected
automatically rather than discussed manually. Many Python-based projects are now requiring that the
code is "blackened," so that code is consistent. You can run a black --check [filename] command to
check if the code matches the style. If you leave off the --check part of the command and run
black [filename], then the black module will reformat the file based on its understanding of the code
format. This action will not delete or modify the functionality of the Python file.

When working with linters, it is also valuable to check how you can configure the linter. In Pylint, you can
create a file that is named .pylintrc in the directory where you run Pylint that configures its initial settings.
There may be some settings that you do not want Pylint to check. Look at the linter that you want to
implement for more information on how to create a configuration file.
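To make this concrete, here is a short hypothetical Python snippet containing two issues that a linter such as Pylint reports, an unused import (code W0611) and an unused variable (code W0612), even though the code itself runs without error:

```python
import os  # Pylint W0611 (unused-import): imported but never used


def interface_count(interfaces):
    """Return the number of interfaces in the given list."""
    unused_total = 0  # Pylint W0612 (unused-variable): assigned but never used
    return len(interfaces)


print(interface_count(["GigabitEthernet0/1", "GigabitEthernet0/2"]))  # prints 2
```

Running pylint against a file like this exits with a nonzero status and lists both findings, which is why linting makes a cheap early gate in a CI pipeline.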

Build Test: Unit Testing


• Pytest and Unittest packages for testing
– Write tests first.
– Use the test framework to run the test (the first test will fail, because nothing has been written to pass it).
– Write code until the test passes.
• Test-driven development

Unit testing tests the individual components of code. In Python, pytest is one of many modules that can be
used to test code. The idea behind test-driven development (TDD) is to first write a test (which at first will
fail as the code has not been written) to verify the functionality of the function or method. Then you work to
write a function that will pass the tests. After getting a passing test, check to see if anything can be
refactored. Refactoring may include writing a new function to make sure that you are not repeating yourself.
These unit tests are meant to verify the functionality of the code on a unit-by-unit basis. They are not meant
to test the entire code base with one test. By having a test for each function and method within the code,
you gain stronger confidence that this version, and future versions, of the application do not contain bugs
within those functions.
Unit tests and linting tests work by executing a test or script that validates the expected functionality; the
result is communicated through the return (exit) code of the script. If the exit code is 0, everything executed
normally. If the exit code is any number other than 0, the script or program has failed. CI tools use this exit
code to determine a pass or fail. When you write your own tests, as long as an appropriate exit code is
provided at the end of execution, the CI tool can determine whether your test has passed or failed.
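As a sketch of this loop, the following hypothetical example pairs a small function with a pytest-style unit test (the function name, test, and data are illustrative, not from the lab). Written test-first, the test initially fails; the function is then implemented until it passes, and a clean run exits with code 0, which a CI tool reads as a pass:

```python
def normalize_hostname(raw):
    """Return a hostname in canonical form: whitespace stripped, lowercase."""
    return raw.strip().lower()


# pytest discovers and runs functions whose names start with test_.
def test_normalize_hostname():
    assert normalize_hostname("  NYC-RT01 ") == "nyc-rt01"
    assert normalize_hostname("rtp-rt02") == "rtp-rt02"


if __name__ == "__main__":
    # Run directly (without pytest): a failing assertion raises AssertionError,
    # producing a nonzero exit code; a clean run exits 0.
    test_normalize_hostname()
    print("tests passed")
```

Under pytest, the same file is collected automatically and the process exit code (0 for pass, nonzero for fail) is what the CI stage evaluates.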

Final Build
After all the tests are completed successfully and the new code has been peer-reviewed for merging into the
feature branch, the final build steps are executed.
• Build is the final stage before the deployment or delivery.
• Tests have been completed successfully.
• Build an artifact, which is the output of the build run.

The build stage prepares the code for deployment in the staging environment for further user acceptance
testing (UAT). The outcome of the build phase is an artifact that can be stored and maintained as its own
independent item.
1. Which option is not a benefit of using CI?
a. merge difficulties
b. ability to move faster
c. fewer defects
d. improved reliability

CI Tools
CI tools are typically aligned with the ability to run through a defined pipeline and are meant to help in the
build process. The tooling marketplace is quite large for CI. So how do you determine which one to use in
your environment? This topic will help you understand the features that are associated with the tools, but
does not provide detailed product introductions.

Tool Landscape
• Jenkins
• Drone
• Travis
• GitLab

There are many tools to choose from, with varying integrations and associated costs. You will see a few of
them here in more detail, but take a closer look at the tools in the list or even do your own research. The
tools that are covered here are often used in the current software development environment, but there may
be others that better fit your organization.

File Format for CI Tools


The most common file format among CI tools is YAML.

                          Jenkins       Travis-CI       GitLab CI        Drone
Default File              Jenkinsfile   .travis.yml     .gitlab-ci.yml   .drone.yml
Language/Structure        Groovy        YAML            YAML             YAML
Self-Managed Solution     Yes           Contact Travis  Yes              Yes
Cloud Service Available   No            Yes             Yes              Yes

Many of the newer pipeline tools use YAML formats for the structure. The outlier in this group is Jenkins.
Its pipeline script format is written in Groovy, perhaps because it is one of the older CI tools on the market
today.

Jenkins
• Extensible, can move to a full CI/CD tool
• Open Source with a large community; install on your VM or Docker
• Distribute work across machines to scale as widely as necessary
• Self-managed solution only; no SaaS offerings
• https://jenkins.io

Jenkins is one of the original Open Source tools. It remains popular and has the backing of several software
brands. Jenkins is a Java-based application that is very common in software development pipelines. Jenkins
does not have a cloud offering; installations in your environments are via a package installer. Jenkins boasts
installations for Docker, FreeBSD, macOS, RedHat/Centos/Fedora, Ubuntu/Debian, several other Linux
flavors, and a generic Java package. You can also deploy directly to Azure.
Jenkins pipelines are maintained in a Groovy code format and are processed as a Groovy script. The Groovy
file can be maintained by Jenkins, can be picked up from source control, and uses the filename Jenkinsfile
as the default file.
Jenkins also provides a visual build for a pipeline that does not rely on the code itself. There are several
methodologies for creating a CI pipeline.

Jenkins Default File


• Jenkinsfile in root directory of the project
• Written in Groovy
• Pipeline format

pipeline {
    agent any
    environment {
        CI_REGISTRY_IMG_DB = "net_inventory_db"
        <...>
    }

    stages {
        stage('Build') {
            steps {
                echo "BUILD DB"
                <...>
            }
        }
        <...>
    }
}

A Jenkinsfile is the default file that the Jenkins system wants to load if using a file in source control
management (SCM). Jenkins lets you build pipelines outside of SCM. The best practice when working with
Jenkins in a DevOps or network development and operations (NetDevOps) environment is to create
Jenkinsfiles for the project so that it is maintained in SCM.
A Jenkinsfile is written in the Groovy language and typically separates the stages into steps. You can create
multiple stages and multiple steps within a stage. The image in the figure, which is part of the new Jenkins
user interface Blue Ocean, has four stages: Build, Browser Tests, Static Analysis, and Deploy. Within the
Browser Tests stage, there are multiple steps including tests for each of the browser types.

This figure shows the visual pipeline editor, which gives a visual representation of the pipeline and enables
you to add, modify, or delete the pipeline.

This figure shows the continued pipeline stage creation, including a Python script that is to be executed.

This figure shows a successfully completed pipeline. There are multiple phases including Build, Browser
Tests, Static Analysis, and Deploy. In the Browser Tests phase, there are tests for multiple browser types
(steps). You can see that the test was passed, because there are several green check marks throughout the
GUI and a large white check mark on the upper left.

In this figure, there was a failure on testing with the Firefox browser within the Browser Tests phase. This
failure causes the build to fail as a whole, as denoted by the red banner and the white X on the top left.

This screen shows the summary of the builds that are configured on the Jenkins system. There are four
builds that passed the tests, denoted by the green rows and check marks. The red row and the white X at the
front of the row denote the one build that failed.

Jenkins can integrate nicely with GitHub and can show the build status on a particular process. Look at the
bottom section highlighted in the green box. It shows that All checks have passed and that there are no
conflicts on the branch, so the pull request can be merged into the feature branch as requested.

Travis CI
Travis CI provides free testing on Open Source projects on Github.com.
• Free for Open Source projects
• Open Source with a large community; install on your VM or Docker
• Distribute work across machines to scale as widely as necessary.
• Primarily a Software as a Service (SaaS) solution; enterprise and self-managed offerings are available by contacting Travis CI.
• https://travis-ci.org

Travis CI is deeply integrated in the Github.com API interface. Travis CI is a Software as a Service (SaaS)
service that supplies all of the testing on the GitHub nodes. The CI pipeline is defined in a YAML format
that provides for a top-down definition of the stage. This file is often named .travis.yml in a repository and
is the default file that Travis CI looks for when a project is set up to use Travis.

Travis CI File Format
The Travis CI file format is YAML and has the name .travis.yml in the project root directory.
• .travis.yml in root of project directory
• YAML syntax

---
language: "python"
python:
  - "3.7"
env:
  - CI_REG_IMG_DB="net_inventory_db"
  <...>

before_script:
  - "docker login -u $USER -p $CI_REG_PASSWORD https://registry.git.lab"
  - "echo $TRAVIS_COMMIT_MESSAGE"

script:
  - "echo BUILD DB"
  - "docker build -t $CI_REG_IMG_DB -f Dockerfile_db ."
  <...>

The user defines the language to be used, and which version of the language. Environmental variables are
defined under the env key. Then there are actions such as before_script, script, and after_script that can be
executed.

GitLab CI
GitLab describes itself as a “single tool for the entire DevOps lifecycle.” It has many features that are part
of the DevOps toolchain, such as the Git repository, Docker image registry, project board, CI/CD pipelines,
and runners.
• Full featured Git, CI/CD tool
– “Single tool for the entire DevOps lifecycle.” -gitlab.com
• Community and enterprise editions
• Both SaaS and self-managed installations available
• https://gitlab.com

There are community, enterprise, and SaaS offerings for GitLab. There are many more features that may be
of interest, with varying costs. Refer to https://gitlab.com for more information.

GitLab CI File Format


The default file for a GitLab CI pipeline is .gitlab-ci.yml. The format separates each key and value. The
stages are defined at the top level of the YAML file.
• .gitlab-ci.yml in root of project directory
• YAML syntax

stages:
  - "build"
  - "deploy"

variables:
  CI_REGISTRY_IMAGE_DB: "net_inventory_db"
  <...>

before_script:
  - "docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD https://registry.git.lab"
  <...>

build:
  stage: "build"
  script:
    - "echo BUILD DB"
    <...>

The stage definitions are farther down in the file and the CI system knows to look at those keys for
instructions on what to execute.
The following are sample screenshots from the GitLab web pages.

Here you see a sample of the project board, which gives an overview of projects.

The figure is a sample of the pipeline view. It shows that there are five stages in the pipeline: Build,
Prepare, Test, Post-test, and Post-cleanup. The status of the pipeline elements is shown to the left of the
element with several green check marks in place in the Prepare and Test stages.

An issue board helps organize the issue list into a visual representation, similar to a KANBAN board. This
board can be helpful to visualize the work that needs to be done and where it needs to be done.

With GitLab, the request to merge code into a feature branch is considered a Merge Request. This request is
part of the Git repository features.

This figure illustrates the diff engine, which can show the differences between the files. This feature of the
Git process can show the history of a file and help you understand how it has changed over time.

Drone
The Drone.io system uses plug-ins to extend its capabilities. The pipelines are defined in a yml file that is
named .drone.yml by default. There are many plug-ins and they can work with the popular source control
repositories of GitHub, Bitbucket (Cloud and Server), GitLab, Gitea, and Gogs.
• Built on container technology
• Custom plug-ins
• Many built-in pipeline types
• Free offering for Open Source
• Both SaaS and self-managed solutions are available.
• https://drone.io

Runner types include Docker, Exec, SSH, and Digital Ocean. Docker runners are Docker containers that
execute the pipeline. Exec runs locally on the host on which Drone was installed. SSH uses a remote host
over SSH as its runner. Digital Ocean uses a dedicated droplet over SSH within Digital Ocean.
Drone offers a cloud service and free CI for Open Source projects.

Drone File Format


The Drone file is a YAML file that is named .drone.yml. It flows in a similar fashion to many other YAML
file definitions. It has a pipeline key at the top level and the stages are the next level keys within the file.
The commands key is followed by a list of commands that are to be executed.
• .drone.yml in root of project directory
• YAML syntax

pipeline:
  build:
    image: python
    commands:
      - "echo BUILD DB"
      - "docker build -t net_inventory_db -f Dockerfile_db ."
      <...>
      - "docker tag net_inventory_frontend registry.git.lab/cisco-devops/net_inventory/net_inventory_frontend:$$DRONE_COMMIT"
  deploy:
    image: python
    commands:
      - "docker push registry.git.lab/cisco-devops/net_inventory/net_inventory_db:$$DRONE_COMMIT"
      <...>

The GUI has a familiar set of components if you have seen other CI tools.

This figure shows the builds and their status within the Drone system. It shows a white check mark in a
green circle for successful builds and a white X on a red background for failed builds.

This figure shows the status of an ongoing build. It shows the step that is currently executing and the results
over time.

The figure shows the testing result with every test passing. Every test has a white check mark in a green
circle.

This screen shows one of the tests failing during the testing stage. The overall build then fails.

Some Other CI Tools
CI tools are not limited to only a few selections. Here are a few more solutions that are available in the
marketplace. This list is not exhaustive, and there are likely to be more entrants into the market. As long as
they can execute tests, provide feedback on the tests, create an artifact, and maintain build history, the tool
will be viable.
• Azure Pipelines
• Amazon Web Services (AWS) Code Pipeline
• Bamboo
• CircleCI
• Codeship
• GoCD

• Hudson
• Semaphore
• Shippable
• TeamCity
• Visual Studio Team Services
• Wercker

Azure Pipelines by Microsoft is the evolution of Visual Studio Team Services (VSTS). AWS Code Pipeline
is specific to Amazon Web Services (AWS); it is not a standalone public offering but is available within AWS.
Bamboo is part of the Atlassian suite of DevOps products. TeamCity is by JetBrains, which makes various IDEs.
1. Which CI tool does not have a cloud offering, but has the backing of several software brands?
a. GitLab
b. Drone.io
c. Travis
d. Jenkins

DevOps Pipelines
The principles of DevOps have been around for quite some time now. Its initial applicability was to
applications and bridging the gap between application developers and the operations teams who supported
those applications. However, over the past few years, the NetDevOps concept has emerged, which covers
the applicability of DevOps principles, processes, and tools for IT networking professionals with the goal of
increasing uptime, reliability, and predictability, while benefiting from automation. This topic explores
common tools that are used within a NetDevOps pipeline.

DevOps Pipeline
The DevOps pipeline continuously provides feedback to the products developed, starting with the
development phase, which may be split into planning and code development. It is in this phase that the
product is made.

The build stage is where integration, unit tests, and system tests are executed. The code is integrated, peer-
reviewed, and artifacts are created.
The test phase is where the artifacts from the build stage are deployed into a staging environment for user
acceptance testing.
Sometimes organizations break release and deploy into two phases. The release or delivery phase is where
the package is made ready for production but not yet promoted into production. In this interim phase, new
code may be ready for download or may simply be waiting for an individual to move it into the next phase.
The deployment phase is where the code is put into production.
In the operate and monitor phase, feedback is gathered from the running code and the application is
monitored. The monitoring statistics feed back into the planning phase to continue the cycle.

Develop
In the planning portion, feedback is collected and summarized. Sprint planning is based on this feedback
and can feed into roadmaps that will help determine the next task from a code perspective.
• Planning
– Feedback gathering
– Roadmaps
• Code creation
– Standard toolset
– Code style

Within code creation, code should follow an organizational standard, including code style and use of a
standard toolset. If multiple packages manage the same thing (such as Python requests and
urllib), a single standard for code creation prevents one developer from using one library while others
use another.
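One way to enforce such a standard is in the pipeline itself. The following is a minimal sketch of a hypothetical GitLab CI lint job; the tool choices (black, pylint) and paths are illustrative assumptions, not a prescribed toolset:

```yaml
# Hypothetical lint job enforcing an organizational Python standard.
# Tool choices (black, pylint) and the src/ path are illustrative only.
lint:
  stage: test
  script:
    - black --check .   # fail if formatting deviates from the agreed style
    - pylint src/       # enforce the organizational lint rules
```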

Build
The build phase can include many activities. One task is the methodology for getting code into a feature
branch. Typically, when using the modern Git repository tools (such as GitLab and GitHub), you will
upload your written code into your own feature branch with commits. Then you will submit a merge or pull
request (the terminology depends on the repository system). Once the merge or pull request is sent forward,
an automated build process is activated.
• Code commit
• Builds start on a pull or merge request.
• Peer review
• Merge code into branch

The automated build process will have tests that verify the code, including linting and unit tests. If the first
set of tests is passed, system tests begin. If a test fails, the build will stop and fail. It will be up to the
developer to work on the code again and continue with the process from the first step of resubmitting a
merge or pull request.
Once the code passes all the tests and the build succeeds, an artifact or record of the build is created and
the code is sent for the next step, peer review. If the code matches what is required, the reviewer allows it
to merge into the feature branch.
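The build flow above can be sketched as a staged pipeline in which linting and unit tests run before system tests; the stage names, job names, and commands here are illustrative assumptions:

```yaml
# Illustrative sketch: earlier stages must pass before later ones run.
stages:
  - lint
  - unit-test
  - system-test

lint:
  stage: lint
  script:
    - pylint src/           # first gate: static checks

unit-test:
  stage: unit-test
  script:
    - pytest tests/unit     # second gate: unit tests

system-test:
  stage: system-test
  script:
    - pytest tests/system   # runs only if the earlier stages passed
```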

Test
Out-of-band testing occurs in the test phase.
• Out-of-band testing in a staged environment
• May include manual or automated UAT

The application is run in a staged environment and deeper, more functional, tests are run, possibly including
UAT. Any related infrastructure that is built would also be tested here (such as new packages or firewall
rules).

Release or Deploy
Release means that the code is packaged and ready for production.
• Ready for production
• Automatic deployment of a release into production (optional)
• Tooling should enable both delivery (completion of the release phase) or deployment into production.

The release phase is also commonly known as delivery. The package is delivered as a new artifact
that is production-ready. It is not deployed directly into production; you must go through the process
of deploying the package into production manually. Scenarios where this process is the default
include apps that must be submitted to an app store for approval (such as mobile apps), or environments with
restrictions on the hours during which new code can be deployed.
Deployment takes the final step of moving the app into production. Postimplementation tests
are also executed at this point: the new code is verified as operational, and tests confirm that the
production environment did not affect the code's operation.

Operate and Monitor


At this stage of the cycle, the release is in production.
• New release is in production.
• Feedback is gathered from customers.
• The system is monitored for customer analytics, performance, errors, etc.

In this phase, more feedback is gathered from customers and the operations team maintains the
environment.
The monitoring function is in place and gathers feedback on performance. How fast are pages loading? How
fast is the customer getting the information that they are looking for or a response to information submitted?
What are the errors in the logs showing? What are the performance counters showing?

DevOps Pipeline: Docker Pipeline
Consider the Docker pipeline for CI in the DevOps model. In the beginning, a plan is developed based on
the requirements for a container. The Dockerfile is then created that prescribes how the Docker container
will be built.

The Dockerfile is then built out based on specifications. Linters are available for Dockerfiles, and the
file is run through a linter at this point. The docker build command is then used to build the image at the
end. The image is tagged with a version number and added to the Docker Image Registry.
Next, testing is done within a staged environment. A core set of applications that use the container would be
tested here and UAT is completed and verified.
The act of making a release and deploying the container includes uploading the container as a new tag, such
as latest, to a Docker Image Registry. At that point, the container is available for use by anyone who has
access to the registry.
The last stage is to gather feedback from the containers that are being used. Feedback is likely to come from
issues that are submitted or from keeping up-to-date with package dependency updates. For example, if a
new patch has become available for a Flask application, an update should be planned for the upcoming
release of the container.
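The build, tag, and push steps of this Docker pipeline could be expressed as a CI job along these lines; the registry path and image name are placeholders, not the course lab values:

```yaml
# Sketch only: build, tag, and publish an image.
# registry.example.com and myapp are placeholder names.
publish-image:
  stage: deploy
  script:
    - docker build -t myapp:0.1 .
    - docker tag myapp:0.1 registry.example.com/group/myapp:latest
    - docker push registry.example.com/group/myapp:latest
```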

NetDevOps Continuous Integration Pipeline
The CI/CD pipelines make it possible to replicate the production network environment, which is required
for performing network tests through the development process. Developers might use a set of tools to build,
replicate, and simulate network topologies. This approach is helpful for local testing of a proposed change
in code that represents a real network environment. Once verified, the developer could send back the change
in the form of a Git pull request to a Git source control system. The CI/CD systems can deploy the new,
proposed network configuration to a test environment and perform network tests. Those tests could validate
the operational state, verify settings, and check if the network is operating properly in a simulated
environment. If the new configuration passes the tests, the pull request can be merged into the production
(main or master) branch of code. This action could trigger another set of actions—the deployment of the
new, proposed code to the production network. In this stage, the network change happens: network devices
are configured with a set of network changes that are committed to the production branch, and regular
network tests can be executed as a part of the deployment process.

A practical example, represented in the figure, includes the following sequence:


• A network engineer issues a merge request (or equivalent) to GitLab (or equivalent). The pull request
has a few files that have been modified with changes that are required for a routing change. The file
updates are YAML files that are variables for Ansible but will automate the Cisco Network Services
Orchestrator (NSO).
• When the pull request is open, an event is seen and initiates the build and testing process.
• The first step is to build the development server where tests are run. You can think of this server as your
network automation server. Vagrant is used to orchestrate the creation of the virtual environment (this
environment can be one or more servers that are configured in a Vagrantfile). In this example, it is a
Linux server with Ansible installed and a dedicated Cisco NSO server.
• The network topology files are also stored in a source control repository. The Linux server pulls down
these files. Next, the network is automatically deployed on Cisco Virtual Internet and Routing Lab
(VIRL).
• Once the test network is up, the proposed changes are deployed to the test network using Ansible and
Cisco NSO, and tests are executed using Cisco pyATS, a test framework that serves as the foundation of an
end-to-end testing ecosystem.
• Notifications will also be sent to Cisco Webex Teams at every step.
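As a rough sketch, the sequence above might map onto pipeline stages like the following; the commands shown are assumptions for illustration, not the exact lab tooling:

```yaml
# Illustrative NetDevOps pipeline sketch; commands and file names are assumed.
stages:
  - build-env
  - deploy-test
  - validate

build-env:
  stage: build-env
  script:
    - vagrant up                           # bring up the automation server(s)

deploy-test:
  stage: deploy-test
  script:
    - ansible-playbook deploy.yml          # push proposed changes via Ansible/NSO

validate:
  stage: validate
  script:
    - pyats run job tests/network_job.py   # run network tests with pyATS
```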

1. In which phase of the DevOps pipeline would you find merge and pull requests?
a. release and deploy
b. test
c. plan and code
d. build
e. operate and monitor

Summary Challenge
1. What are two benefits of doing CI? (Choose two.)
a. use of tools that are used by software brands
b. move faster
c. satisfy customer need for testing
d. direct feedback from consumers of an application
e. improved reliability
2. Which two testing methods are done within integration? (Choose two.)
a. linting
b. user acceptance testing
c. production integration
d. split horizon
e. unit tests
3. Which tool is used to format Python code?
a. Bandit
b. Pylint
c. Black
d. Orange
e. pydocstyle
4. In which methodology are tests written before the actual code?
a. Custom Coding
b. Test Driven Development
c. Test First, Code Later
d. Limitless Coding
5. Which CI tool is completely open source and does not have a cloud offering?
a. GitLab
b. Drone.io
c. Travis-CI
d. Hudson
e. Jenkins
6. Which DevOps pipeline phase is responsible for creating roadmaps and organizing feedback to
give direction to a project?
a. planning and code development
b. build
c. deploy
d. test
7. Which phase in the Docker Pipeline is responsible for making the containers?
a. planning and code development
b. build
c. test
d. deploy

Answer Key
Continuous Integration
1. A

CI Tools
1. D

DevOps Pipelines
1. D

Summary Challenge
1. B, E
2. A, E
3. C
4. B
5. E
6. A
7. D

Section 6: Building the DevOps Flow

Introduction
Previously, you were introduced to the DevOps pipeline, which is effective in delivering systems
from an organizational perspective. In this section, you will put some of that theory into practice to get an
understanding of how to implement a DevOps flow.
You will build out a DevOps flow using GitLab as a platform. GitLab
(http://www.gitlab.com) focuses on being a single application for all DevOps tooling. You can manage,
plan, create, verify, and package your application stack, and more.

GitLab Overview
GitLab describes its product as a “single application for the entire DevOps lifecycle.” It
comes out of the box with many capabilities including Continuous Integration (CI/CD), Source Code
Management (SCM) via Git, Docker Registry, Package Registries, and more. You can find out more about
the current features and capabilities at https://about.gitlab.com/features/.

GitLab Platform Review

GitLab has three different product offerings with differing levels of licensing and support. GitLab
Community Edition is fully open source with no proprietary code. This edition aims to maintain feature parity
with the Core edition.
GitLab also has a self-managed solution with multiple tiers, based on licensing. This model starts with
Core at the basic level, which has the same features as the Community Edition. Beyond the Core level, the
tiers increase to Starter, Premium, and Ultimate (names may be subject to change).
Finally, GitLab offers the solution as a Software as a Service (SaaS) offering in the cloud. It
is hosted at gitlab.com and has several tiers with feature sets similar to the self-managed solution,
starting at Free and working up to Bronze, Silver, and Gold.

Continuous Integration and Continuous Delivery/Deployment

The Continuous Integration module is similar to many other continuous integration and continuous deployment
(CI/CD) tools on the market today. The system uses YAML-defined processes to complete the pipeline for
CI/CD. The pipeline is defined in the project's .gitlab-ci.yml file and is executed based on that definition.
You can configure the pipeline to run whenever a commit is pushed to the server or when a
merge is pushed to a particular branch.
To enable more scalability of resources, GitLab uses a gitlab-runner application that spreads the resource-
intensive work of running the pipeline across multiple machines. This is configurable, and tags can be used to
identify which gitlab-runner machines run which jobs. GitLab can also autoscale runners to help
save costs and improve resource utilization.

Source Code Management: Git


• Web IDE
• Issue Management and Merge Requests
• Reporting and Visualization

Git functionality within GitLab includes everything you expect from a modern Git repository. There is a browser-based IDE
called Web IDE that allows you to edit code directly on the web page without having to switch to a separate
editor. You also get reporting and visualization of the Git repository with a graph capability, issue tracking,
merge requests (known as pull requests in other systems), a wiki, pipelines, and code snippets.

Docker Registry
• Secure and Private Registry for Docker Images
• Easy Upload and Download of images
• Fully integrated with Git repository management

The Docker Registry within GitLab provides a place to house your own private Docker images. This
removes the need to set up an independent Docker registry and gives you a private registry without
exposing everything to the public (as on Docker Hub).

Agile
• Issue Boards
• Epics and Milestones
• Labels
• Burndown Charts
• Points and Estimation

GitLab helps facilitate Agile development with project management tooling, including epics, user
stories, milestones, labels, points, issues, and burndown charts, to name a few of the features. All of this
tooling supports your Agile development in one place.
1. Which functionality allows for editing of source code right in GitLab?
a. Web IDE
b. Agile Boards
c. Docker Registry
d. CI/CD
2. In which language or format is the CI/CD pipeline defined?
a. JSON
b. Groovy
c. Python
d. YAML

GitLab CI Overview
The key to GitLab CI is the pipeline definition. To deliver a DevOps pipeline, you must define what is
required to complete it. The pipeline is a series of steps that complete the task of validation. If the
first step in the pipeline fails, the entire pipeline fails. A pipeline can be of virtually any size,
although keeping it simple helps in troubleshooting and successfully completing pipelines. You should
group tasks by similarity. For example, when linting your files, it makes sense to lint your Python files and Ansible
playbooks in a single pipeline step/stage rather than breaking them apart.
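For example, a single job that lints both file types might look like this sketch (tool names and file paths are assumptions):

```yaml
# Hypothetical grouped lint job: similar tasks share one stage/job.
lint:
  stage: test
  script:
    - pylint *.py                 # lint the Python files
    - ansible-lint playbook.yml   # lint the Ansible playbook in the same step
```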

GitLab CI: Definition


• Defined in .gitlab-ci.yml
• Defines Jobs, which are assigned to Stages
• Stages define the order in which the pipeline is run

GitLab Continuous Integration (CI) runs with information from a YAML (.yml, .yaml) file named
.gitlab-ci.yml. The pipeline follows the ordered list defined in this file. The
full reference documentation for the pipeline configuration can be found at
https://docs.gitlab.com/ee/ci/yaml/. The file uses job names as the main keys, and each job must have a
unique name. Several reserved words cannot be used as job names because they are
keywords. See the documentation for the full list, which includes the following keywords:
before_script, after_script, stages, services, and image.

GitLab CI: Runners


• Runs the pipeline jobs
• Reports back to GitLab
• Scalable via system runners that can be installed on VMs

A GitLab Runner is the execution engine that GitLab uses to execute the CI/CD pipeline; it runs your jobs
and sends the results back to GitLab. GitLab runners are not tied to the user interface or any other part of the
GitLab installation. They are designed to run on separate infrastructure, which enables the CI/CD tool to
be scalable. When there are more builds than a single server can handle, the system can distribute the
load to additional runners.
A runner can be assigned tags to help allocate which runners handle which tasks. This is helpful if you
want to set up runners inside a security zone to handle that zone's jobs. Or, if there
needs to be a dedicated system for a particular project, a runner can be assigned only to that project and will
execute jobs only for that project. This is defined during setup of the GitLab runner.
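For instance, a job can be pinned to tagged runners with the tags keyword; the tag name and deployment script here are hypothetical:

```yaml
# Hypothetical job that only runs on runners registered with the "dmz" tag.
deploy-dmz:
  stage: deploy
  tags:
    - dmz
  script:
    - ./deploy.sh   # placeholder deployment step
```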

Note You do not need to use a laptop as a runner; the image depicts the capability to do so.

GitLab CI: Script


• script is the only required keyword
• Commands to be executed by the runner
• Can run as a single line or as an array with multiple commands

job1:
  script: "python run.py"
  stage: build

job2:
  script:
    - uname -a
    - python --version
    - python verify_script.py
  stage: test

The script keyword is the only required keyword within a job. This is the main script execution performed
by the runner. Commands run in sequential order; if a command returns an exit code that is not 0, the job
fails and further commands are not executed.

GitLab CI: Stages


• Jobs are assigned to a stage with the stage keyword
• Stages execute in the order defined, for example, build, test, deploy
• Special .pre and .post stages are always at the beginning and the end of the pipeline

stages:
  - build
  - test
  - deploy

The stages keyword defines the order in which things are processed. GitLab reads the stages key from the
.gitlab-ci.yml file; its value is a list giving the order for processing the stages. The
runners run all of the jobs that are defined for a stage in parallel. Once all jobs are done for the defined
stage, the pipeline continues to the next defined stage. Each job must be assigned to a stage. If any of
the jobs fail within the stage, the build fails at that stage and no more jobs are processed beyond
that stage.
There are two special stages, .pre and .post, whose order cannot be changed. The .pre stage is
guaranteed to run first in a pipeline. Likewise, a .post stage will always be the last stage in a pipeline.
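A small sketch showing .pre and .post jobs; the job names and echo commands are placeholders:

```yaml
# .pre runs before all defined stages and .post after them, regardless of
# where these jobs appear in the file.
notify-start:
  stage: .pre
  script:
    - echo "pipeline starting"

notify-end:
  stage: .post
  script:
    - echo "pipeline finished"
```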

GitLab CI: Before and After Scripts


• Ability to define code to execute before the script
• A before_script under the default key applies to all jobs and can be overridden by a more specific
before_script in the job
• after_script is always executed, including on a failure within the script tasks

default:
  before_script:
    - global before script

job:
  before_script:
    - exec instead of global script
  script:
    - my script
  after_script:
    - execute after my script

GitLab CI/CD can run scripts before and after jobs. The before_script: section concatenates
the scripts defined under this key with the main script execution. An after_script is executed after
the job runs, including after a failure. Both before_script and after_script can be defined
globally under a default: key. If they are defined globally in the default key and also within a job, the
script defined within the job is executed instead of the default.

GitLab CI: Only and Except


• Used to define a job policy, such as running only on a specific branch, or on every branch except a specific one
• Use regex to define policy patterns
• only runs the job on the branches defined in its array
• except runs the job on every branch except those defined
• Able to expand beyond just branches

job:
  only:
    - branches@gitlab-org/gitlab
  except:
    - master@gitlab-org/gitlab
    - /^release/.*$/@gitlab-org/gitlab

Within the definition of the pipeline, you may want to run a job only on particular occasions, such as
when there is a merge to the master branch or to a particular development branch. You define
this rule with the only: or except: keys. To run a particular job only when there is a merge to
master, for example, the job would contain the following:
deploy:
  only:
    refs:
      - master

The keyword refs refers to the reference. So, if you had another branch name, such as
devel, or a special value, such as schedules, you could include it in the same position as master.
This can be applied to many more settings. See the reference guide for more details:
https://docs.gitlab.com/ee/ci/yaml/.

GitLab CI: When


• when is used to run jobs when there is a failure, or despite a failure
• Arguments of the when keyword include:
– on_success
– on_failure
– always
– manual
– delayed

stages:
  - build
  - cleanup_build
  - test
  - deploy
  - cleanup

build_job:
  stage: build
  script:
    - make build

cleanup_build_job:
  stage: cleanup_build
  script:
    - cleanup build on fail
  when: on_failure

test_job:
  stage: test
  script:
    - make test

deploy_job:
  stage: deploy
  script:
    - make deploy
  when: manual

cleanup_job:
  stage: cleanup
  script:
    - cleanup after jobs
  when: always

The when keyword is used in particular scenarios, such as cleaning up the environment. It gives you the capability
of running a particular job based on predefined conditions. You might run a cleanup script when there is a
failure in the build, or a cleanup job at the end of the pipeline that you want to run at all
times. Say temporary files are created in the build process; it is good practice to remove
these for proper system hygiene. Arguments for the keyword when include:
• on_success
• on_failure
• always
• manual
• delayed

Manually defined jobs are helpful for Continuous Delivery operations to have a consistent and documented
method to deploy the delivered product to production.

Putting It All Together


stages:
  - build
  - cleanup_build
  - test
  - deploy
  - cleanup

build_job:
  before_script:
    - make setup
  stage: build
  script:
    - make build

cleanup_build_job:
  stage: cleanup_build
  script:
    - cleanup build on fail
  when: on_failure

test_job:
  before_script:
    - make setup
  stage: test
  script:
    - make test
  after_script:
    - make cleanup_test

deploy_job:
  stage: deploy
  script:
    - make deploy
  only:
    - master

cleanup_job:
  stage: cleanup
  script:
    - cleanup after jobs
  when: always

Here you see a .gitlab-ci.yml file that puts together all of the components discussed here. There are still
many more options available on the GitLab CI reference page, which should be consulted when putting
together a DevOps pipeline.
1. In which file is the pipeline defined within the GitLab CI/CD solution?
a. Gitlabfile
b. .gitlab-ci.yml
c. gitlab-cicd.yml
d. buildspec

Discovery 7: Implement a Pipeline in GitLab CI
Introduction
Deployment pipelines can be used to automate the process of building and deploying an endless number of
services. For example, pipelines can perform automated testing and, when all tests pass, generate build
artifacts such as container images, then publish a new version when an image is updated. During this lab,
you will automate the process of updating and registering new containers as they are modified and rebuilt.
In a previous lab, you deployed a single application using the docker push command. In this lab, you will
push container images to the registry when there is a new merge into the master branch.
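The kind of job this lab builds toward could be sketched as follows; this is an assumption-laden outline, not the lab's actual pipeline file, and the image path is illustrative:

```yaml
# Sketch: publish the image only on changes to master. The registry path
# mirrors the lab naming convention but is illustrative here.
publish:
  stage: deploy
  script:
    - docker build -t registry.git.lab/cisco-devops/containers/net_inventory:latest .
    - docker push registry.git.lab/cisco-devops/containers/net_inventory:latest
  only:
    - master
```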

Topology

Job Aid

Device Information

Device Description FQDN/IP Address Credentials

Student Workstation Linux Ubuntu VM 192.168.10.10 student, 1234QWer

GitLab Git Repository git.lab student, 1234QWer

GitLab Container Registry Container registry.git.lab student, 1234QWer


Registry

Command List
The table describes the commands that are used in this activity. The commands are listed in alphabetical
order so that you can easily locate the information that you need. Refer to this list if you need configuration
command assistance during the lab activity.

Command Description

cd directory_name This command changes directories within the Linux file system. You
will use this command to enter the directory where the lab scripts
are housed. You can use tab completion to finish the name of the
directory after you start typing it.

docker build -t name:tag -f filename path The command to build a Docker image. The -t flag names and
tags the image as you specify. The -f flag is used when not using the
standard filename of Dockerfile. The path defines the build context for the
Docker daemon; normally "." is specified.

docker container ls -a The command to view the containers configured on the host system.
The -a flag also shows containers that are not running.

docker login docker_registry Logs in to a Docker registry. If you are not
already logged in, it prompts you for your username and password.

docker network create -d type name The docker network create command is used to create networks of
different types, such as a bridge.

docker push container_registry/gitlab_organization/gitlab_project/container:tag The Docker command to push to the registry. The actual command does not
have spaces around the forward slashes.

docker run -itd -p port --name container container_registry/gitlab_organization/gitlab_project/container:tag command The command to run a
container, pulling the image from a container registry if necessary. The -i flag is for interactive, and the -t flag creates a
pseudo-TTY to the container. The -d flag runs the container in a detached state.
The command is any command valid on the container. The --name
flag names the container as you intend instead of randomly
generating a name. The -p flag is for port mapping; it can be in either
host_port:container_port format or port format.

docker tag image tag The command to tag an image. In this lab, images are generally tagged
using the container_registry/gitlab_organization/gitlab_project/container:tag convention.

git add filename The command to add a file to the git index; use the -A flag to add all files.

git checkout -b branch_name The git command to check out a branch, optionally creating the
branch by applying the -b flag.

git clone repository Downloads or clones a git repository into the directory that is the name
of the project in the repository definition.

git commit -m message The git command to commit the changes locally.


git push repo branch_name The git command to push the branch to the remote git service. The
repo is normally a named remote, such as origin.

Task 1: Manually Deploy Container Images


This task will reinforce how to deploy images manually, preparing you to compare and contrast manual and
automated deployment of container images.

Activity

Step 1 In the student workstation, find and open Visual Studio Code by clicking the Visual Studio Code icon.

Step 2 From the Visual Studio Code top navigation bar, choose Terminal > New Terminal [Ctrl-Shift-`].

Step 3 Navigate to the terminal section at the bottom of Visual Studio Code.

Step 4 Within the Visual Studio Code terminal, change the directory to ~/labs/lab07 using the cd ~/labs/lab07
command.

student@student-vm:$ cd ~/labs/lab07/

Step 5 Issue the git clone https://git.lab/cisco-devops/net_inventory command to clone the net_inventory
repository.

student@student-vm:labs/lab07$ git clone https://git.lab/cisco-devops/net_inventory


Cloning into 'net_inventory'...
warning: redirecting to https://git.lab/cisco-devops/net_inventory.git/
remote: Enumerating objects: 416, done.
remote: Counting objects: 100% (416/416), done.
remote: Compressing objects: 100% (114/114), done.
remote: Total 416 (delta 290), reused 416 (delta 290)
Receiving objects: 100% (416/416), 3.10 MiB | 14.16 MiB/s, done.
Resolving deltas: 100% (290/290), done.

Step 6 Change directory to the net_inventory directory by issuing the cd net_inventory command.

student@student-vm:labs/lab07$ cd net_inventory/
student@student-vm:lab07/net_inventory (master)$

Create and Register Three Docker Container Images


In these steps, you will build Docker images, retag each image, and register those images to the container
registry so that they can later be pulled down from the registry. It is important to perform these steps
manually to fully understand the process before it is automated in upcoming tasks.

Step 7 Build the database image by issuing the docker build -t net_inventory_db -f Dockerfile_db . command.

student@student-vm:lab07/net_inventory (master)$ docker build -t net_inventory_db -f
Dockerfile_db .
Sending build context to Docker daemon 56.61MB
Step 1/5 : FROM registry.git.lab/cisco-devops/containers/postgres:latest
latest: Pulling from cisco-devops/containers/postgres
743f2d6c1f65: Pull complete
5d307000f290: Pull complete
29837b5e9b78: Pull complete
3090df574038: Pull complete
dc0b4463fa0e: Pull complete
1fb834895f59: Pull complete
59169bd605be: Pull complete
a950d631bfe9: Pull complete
19906d8610a9: Pull complete
f073bb1dfb35: Pull complete
d2f60e906bcb: Pull complete
0a8c5d1e3f51: Pull complete
50c8d3614d4f: Pull complete
5d051cf29253: Pull complete
Digest: sha256:08f97554d30e1cfa3ce47c800adb137f651edb6f6956012693bd31ddde97e6a6
Status: Downloaded newer image for
registry.git.lab/cisco-devops/containers/postgres:latest
---> 3eda284d1840
Step 2/5 : LABEL description="This is a postgres db for net inventory Flask app"
---> Running in 72f0574f5dae
Removing intermediate container 72f0574f5dae
---> e2bf48365e31
Step 3/5 : LABEL maintainer="Cisco <[email protected]>"
---> Running in 2a3261ac5500
Removing intermediate container 2a3261ac5500
---> d0272fc3df90
Step 4/5 : LABEL version="0.1"
---> Running in 648a6b13cce3
Removing intermediate container 648a6b13cce3
---> 8c83e7dae2bb
Step 5/5 : EXPOSE 5432/tcp
---> Running in 49c320253dad
Removing intermediate container 49c320253dad
---> cf1cb355cacc
Successfully built cf1cb355cacc
Successfully tagged net_inventory_db:latest

Step 8 Tag the database image to point to the GitLab Container Registry by issuing the docker tag
net_inventory_db registry.git.lab/cisco-devops/containers/net_inventory_db:latest command.

student@student-vm:lab07/net_inventory (master)$ docker tag net_inventory_db registry.git.lab/cisco-devops/containers/net_inventory_db:latest

Step 9 Register the database image to the GitLab Container Registry by issuing the docker push
registry.git.lab/cisco-devops/containers/net_inventory_db:latest command.

student@student-vm:lab07/net_inventory (master)$ docker push registry.git.lab/cisco-
devops/containers/net_inventory_db:latest
The push refers to repository
[registry.git.lab/cisco-devops/containers/net_inventory_db]
2c023ab93af2: Mounted from cisco-devops/containers/postgres
7015c8d0a3f5: Mounted from cisco-devops/containers/postgres
842b24e93f2c: Mounted from cisco-devops/containers/postgres
e4d7bc8584dd: Mounted from cisco-devops/containers/postgres
923ed18d2581: Mounted from cisco-devops/containers/postgres
9e3f39b108ca: Mounted from cisco-devops/containers/postgres
804abd5012d6: Mounted from cisco-devops/containers/postgres
da1749281c4c: Mounted from cisco-devops/containers/postgres
268417188696: Mounted from cisco-devops/containers/postgres
8ffc50431e46: Mounted from cisco-devops/containers/postgres
b4505242243c: Mounted from cisco-devops/containers/postgres
8700c6d5f108: Mounted from cisco-devops/containers/postgres
1e1890158369: Mounted from cisco-devops/containers/postgres
6270adb5794c: Mounted from cisco-devops/containers/postgres
latest: digest: sha256:491f04bcc681f71d67aa65c098ea2277f2c7317a980116eb47b59813dbd0799e
size: 3245
student@student-vm:lab07/net_inventory (master)$

Step 10 Build the back-end image by issuing the docker build -t net_inventory_backend -f Dockerfile_backend .
command.

student@student-vm:lab07/net_inventory (master)$ docker build -t net_inventory_backend -f Dockerfile_backend .

Step 11 Tag the back-end image to point to the GitLab Container Registry by issuing the docker tag
net_inventory_backend registry.git.lab/cisco-devops/containers/net_inventory_backend:latest
command.

student@student-vm:lab07/net_inventory (master)$ docker tag net_inventory_backend registry.git.lab/cisco-devops/containers/net_inventory_backend:latest

Step 12 Register the back-end image to the GitLab Container Registry by issuing the docker push
registry.git.lab/cisco-devops/containers/net_inventory_backend:latest command.

student@student-vm:lab07/net_inventory (master)$ docker push registry.git.lab/cisco-
devops/containers/net_inventory_backend:latest
The push refers to repository
[registry.git.lab/cisco-devops/containers/net_inventory_backend]
e2a3a4f8f81a: Pushed
57275282f8d6: Pushed
d5efea4cd86c: Pushed
9ebf59de99a3: Mounted from cisco-devops/containers/net_inventory
1f8901027234: Mounted from cisco-devops/containers/net_inventory
581d0eb94046: Mounted from cisco-devops/containers/net_inventory
5833990cb8e5: Mounted from cisco-devops/containers/net_inventory
86339b326932: Mounted from cisco-devops/containers/net_inventory
859394076549: Mounted from cisco-devops/containers/net_inventory
896510bee743: Mounted from cisco-devops/containers/net_inventory
67ecfc9591c8: Mounted from cisco-devops/containers/net_inventory
latest: digest: sha256:c03343a6bc54a266b28f6eb0939aeed2d9f3ed1745f7778ce20a86fe60511175
size: 2635
student@student-vm:lab07/net_inventory (master)$

Step 13 Build the front-end image by issuing the docker build -t net_inventory_frontend -f Dockerfile_frontend .
command.

student@student-vm:lab07/net_inventory (master)$ docker build -t net_inventory_frontend
-f Dockerfile_frontend .
Sending build context to Docker daemon 56.61MB
Step 1/9 : FROM registry.git.lab/cisco-devops/containers/python37:latest
---> dd4eec63855e
Step 2/9 : LABEL description="This is a net inventory frontend flask application"
---> Running in 442bfa69a3ad
Removing intermediate container 442bfa69a3ad
---> 2169c24bc795
Step 3/9 : LABEL maintainer="Cisco <[email protected]>"
---> Running in 75100f3bbb11
Removing intermediate container 75100f3bbb11
---> 95352b8c826f
Step 4/9 : LABEL version="0.1"
---> Running in aa7f93eb1402
Removing intermediate container aa7f93eb1402
---> e5f12ef773b6
Step 5/9 : ADD ./ /net_inventory
---> 9d0c2c710d2f
Step 6/9 : WORKDIR /net_inventory/
---> Running in 3cc56c1f10f9
Removing intermediate container 3cc56c1f10f9
---> 42aabd8a9954
Step 7/9 : RUN pip install -r ./requirements.txt
---> Running in 9148a718be6a
Requirement already satisfied: alembic==1.2.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 1)) (1.2.1)
Requirement already satisfied: asn1crypto==0.24.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 2)) (0.24.0)
Requirement already satisfied: attrs==19.2.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 3)) (19.2.0)
Requirement already satisfied: bcrypt==3.1.7 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 4)) (3.1.7)
Requirement already satisfied: black==19.10b0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 5)) (19.10b0)
Requirement already satisfied: certifi==2019.9.11 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 6)) (2019.9.11)
Requirement already satisfied: cffi==1.12.3 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 7)) (1.12.3)
Requirement already satisfied: chardet==3.0.4 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 8)) (3.0.4)
Requirement already satisfied: Click==7.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 9)) (7.0)
Requirement already satisfied: cryptography==2.7 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 10)) (2.7)
Requirement already satisfied: enum34==1.1.6 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 11)) (1.1.6)
Requirement already satisfied: Faker==2.0.2 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 12)) (2.0.2)
Requirement already satisfied: flasgger==0.9.3 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 13)) (0.9.3)
Requirement already satisfied: Flask==1.1.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 14)) (1.1.1)
Requirement already satisfied: Flask-Login==0.4.1 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 15)) (0.4.1)
Requirement already satisfied: flask-marshmallow==0.10.1 in /usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 16)) (0.10.1)
Requirement already satisfied: Flask-Migrate==2.5.2 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 17)) (2.5.2)
Requirement already satisfied: Flask-Script==2.0.6 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 18)) (2.0.6)
Requirement already satisfied: Flask-SQLAlchemy==2.4.1 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 19)) (2.4.1)
Requirement already satisfied: Flask-WTF==0.14.2 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 20)) (0.14.2)
Requirement already satisfied: funcsigs==1.0.2 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 21)) (1.0.2)
Requirement already satisfied: idna==2.8 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 22)) (2.8)
Requirement already satisfied: ipaddress==1.0.22 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 23)) (1.0.22)
Requirement already satisfied: itsdangerous==1.1.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 24)) (1.1.0)
Requirement already satisfied: Jinja2==2.10.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 25)) (2.10.1)
Requirement already satisfied: jsonschema==2.6.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 26)) (2.6.0)
Requirement already satisfied: Mako==1.1.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 27)) (1.1.0)
Requirement already satisfied: MarkupSafe==1.1.1 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 28)) (1.1.1)
Requirement already satisfied: marshmallow==2.20.5 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 29)) (2.20.5)
Requirement already satisfied: marshmallow-sqlalchemy==0.18.0 in
/usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 30)) (0.18.0)
Requirement already satisfied: mistune==0.8.4 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 31)) (0.8.4)
Requirement already satisfied: pip==19.3.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 32)) (19.3.1)
Requirement already satisfied: psycopg2==2.8.4 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 33)) (2.8.4)
Requirement already satisfied: pycparser==2.19 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 34)) (2.19)
Requirement already satisfied: pyrsistent==0.15.4 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 35)) (0.15.4)
Requirement already satisfied: python-dateutil==2.8.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 36)) (2.8.0)
Requirement already satisfied: python-dotenv==0.10.3 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 37)) (0.10.3)
Requirement already satisfied: python-editor==1.0.4 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 38)) (1.0.4)
Requirement already satisfied: PyYAML==5.1.2 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 39)) (5.1.2)
Requirement already satisfied: requests==2.22.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 40)) (2.22.0)
Requirement already satisfied: setuptools==41.4.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 41)) (41.4.0)
Requirement already satisfied: six==1.12.0 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 42)) (1.12.0)
Requirement already satisfied: SQLAlchemy==1.3.8 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 43)) (1.3.8)
Requirement already satisfied: SQLAlchemy-Utils==0.34.2 in /usr/local/lib/python3.7/site-packages (from -r ./requirements.txt (line 44)) (0.34.2)
Requirement already satisfied: text-unidecode==1.3 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 45)) (1.3)
Requirement already satisfied: typing==3.7.4.1 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 46)) (3.7.4.1)
Requirement already satisfied: urllib3==1.25.6 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 47)) (1.25.6)
Requirement already satisfied: Werkzeug==0.16.0 in /usr/local/lib/python3.7/site-
packages (from -r ./requirements.txt (line 48)) (0.16.0)
Requirement already satisfied: wheel==0.33.6 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 49)) (0.33.6)
Requirement already satisfied: WTForms==2.2.1 in /usr/local/lib/python3.7/site-packages
(from -r ./requirements.txt (line 50)) (2.2.1)
Requirement already satisfied: toml>=0.9.4 in /usr/local/lib/python3.7/site-packages
(from black==19.10b0->-r ./requirements.txt (line 5)) (0.10.0)
Requirement already satisfied: typed-ast>=1.4.0 in /usr/local/lib/python3.7/site-
packages (from black==19.10b0->-r ./requirements.txt (line 5)) (1.4.0)
Requirement already satisfied: appdirs in /usr/local/lib/python3.7/site-packages (from
black==19.10b0->-r ./requirements.txt (line 5)) (1.4.3)
Requirement already satisfied: pathspec<1,>=0.6 in /usr/local/lib/python3.7/site-
packages (from black==19.10b0->-r ./requirements.txt (line 5)) (0.6.0)
Requirement already satisfied: regex in /usr/local/lib/python3.7/site-packages (from
black==19.10b0->-r ./requirements.txt (line 5)) (2019.11.1)
Removing intermediate container 9148a718be6a
---> 096dc92888db
Step 8/9 : EXPOSE 5000/tcp
---> Running in c622bdb84f9c
Removing intermediate container c622bdb84f9c
---> 105dc8a4e31f
Step 9/9 : ENTRYPOINT python run.py
---> Running in 1932b12b181c
Removing intermediate container 1932b12b181c
---> 92f96a064902
Successfully built 92f96a064902
Successfully tagged net_inventory_frontend:latest

Step 14 Tag the front-end image to point to the GitLab Container Registry by issuing the docker tag
net_inventory_frontend registry.git.lab/cisco-devops/containers/net_inventory_frontend:latest
command.

student@student-vm:lab07/net_inventory (master)$ docker tag net_inventory_frontend registry.git.lab/cisco-devops/containers/net_inventory_frontend:latest

Step 15 Register the front-end image to the GitLab Container Registry by issuing the docker push
registry.git.lab/cisco-devops/containers/net_inventory_frontend:latest command.

student@student-vm:lab07/net_inventory (master)$ docker push registry.git.lab/cisco-
devops/containers/net_inventory_frontend:latest
The push refers to repository
[registry.git.lab/cisco-devops/containers/net_inventory_frontend]
2274dbe60fb0: Pushed
d6f0b7069f0c: Pushed
9ebf59de99a3: Mounted from cisco-devops/containers/net_inventory_backend
1f8901027234: Mounted from cisco-devops/containers/net_inventory_backend
581d0eb94046: Mounted from cisco-devops/containers/net_inventory_backend
5833990cb8e5: Mounted from cisco-devops/containers/net_inventory_backend
86339b326932: Mounted from cisco-devops/containers/net_inventory_backend
859394076549: Mounted from cisco-devops/containers/net_inventory_backend
896510bee743: Mounted from cisco-devops/containers/net_inventory_backend
67ecfc9591c8: Mounted from cisco-devops/containers/net_inventory_backend
latest: digest: sha256:23219366d14bc623286acd8d439d1ed099968165110307e170c0536ace19dfde
size: 2428
student@student-vm:lab07/net_inventory (master)$

Task 2: Implement a Build Pipeline


This task demonstrates how to implement a build pipeline in GitLab. You will learn about topics such as variables for secrets management, GitLab runners, and pipelines.

Activity

Verify GitLab Readiness


In these steps, you will set a variable for secrets management and ensure that the GitLab runner is operational. The GitLab runner is the component that executes your CI/CD pipeline jobs. Runners support various executors, such as shell, Docker, or SSH; in this lab, you will use a shell runner.
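For reference, a shell runner like the one used here is defined on the runner host in a config.toml file roughly as follows. This is an illustrative sketch, not this lab's actual registration; the name and token are placeholders, and only the URL matches this lab's GitLab server:

```toml
# Sketch of an entry in /etc/gitlab-runner/config.toml for a shell executor.
[[runners]]
  name = "lab-shell-runner"
  url = "https://git.lab/"
  token = "REDACTED"
  executor = "shell"
```

With the shell executor, pipeline jobs run directly in a shell on the runner host, which is why that host needs Docker and Git installed.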

Step 1 From the Chrome browser, navigate to https://git.lab.

Step 2 Log in with the credentials that are provided in the Job Aids and click Sign in.

Step 3 From the list of projects, choose the cisco-devops/net_inventory project.

Step 4 From the left navigation bar, choose Settings > CI/CD.

Note Be careful not to choose the CI/CD link at the root of the navigation bar; that link opens the operational view,
not the configuration settings.

Step 5 Find the section for Variables and click the Expand button.

Step 6 Add a variable key of CI_REGISTRY_USERNAME with value of student, choose the Protected toggle,
and click the Save Variables button.

These variables are exposed as environment variables to jobs within a given pipeline. Project-level security controls who can edit these variables, and GitLab avoids printing them in job logs (unless you do so deliberately, for example by echoing the $CI_REGISTRY_USERNAME variable). In later steps, you will refer to these variables, so it is important to understand where they come from and how they are managed.
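As a sketch of how such a variable is typically consumed, a job can reference it like any other environment variable. This is illustrative only, not this lab's actual .gitlab-ci.yml, and CI_REGISTRY_PASSWORD is an assumed companion variable:

```yaml
# Hypothetical job consuming the protected variable; GitLab injects it into the
# job's environment when the pipeline runs on a protected branch.
build:
  stage: "build"
  script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login registry.git.lab -u "$CI_REGISTRY_USERNAME" --password-stdin
```

Piping the password via --password-stdin keeps it out of the job log and the shell history.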

Step 7 Find the section for Runners and click the Expand button.

Step 8 Verify that there is an active runner. At least one runner must have a green label. Be aware that the randomly
generated value may differ from the one depicted in the picture.

Note The runner should be a shell type, enabled for the project, and configured for any tag.

Review the GitLab CI YAML File

Step 9 Open the .gitlab-ci.yml file in Visual Studio Code. Review the structure and syntax of the YAML content.

Add the Front-End Container to the Prebuilt Pipeline
In these steps, you will update the existing .gitlab-ci.yml file to include the front-end server. The file already
includes the db and back-end containers.

Several git commands are used in this lab:


• The git checkout -b branch_name command creates a new branch and switches to it.
• The git add filename command starts tracking the file and adds it to staging, but does not commit it.
• The git commit -m "your message here" command commits the staged changes locally.
• The git push repo branch_name command pushes the branch to the Git server, where repo is a named
reference to the remote repository, such as origin.
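This sequence of commands can be exercised end to end in a throwaway repository. The sketch below mirrors the branch name and commit message used in this lab, but the repository itself, the file contents, and the user identity are hypothetical:

```shell
# Demonstrate the branch -> stage -> commit workflow in a temporary repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "[email protected]"     # placeholder identity for the demo
git config user.name "Student"
echo "stages: [build, deploy]" > .gitlab-ci.yml
git add .gitlab-ci.yml                        # stage the file without committing it
git commit -q -m "initial commit"             # commit the staged changes locally
git checkout -q -b fe_container               # create the new branch and switch to it
git commit -q --allow-empty -m "ADD FRONTEND TO REGISTRY"
git rev-parse --abbrev-ref HEAD               # prints the current branch name
# 'git push origin fe_container' would then publish the branch to the remote named origin.
```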

Step 10 Create a new branch called fe_container by issuing the git checkout -b fe_container command.

Note In the solution screens below, the underscore does not always render in the pictures, but it is required.
Keep this in mind when comparing your output with the pictures.

Step 11 Within the script: key of the stage: "build" section, copy the last three list elements, starting with echo BUILD
BACKEND, and paste them at the bottom of the YAML list. Use the existing entries as a reference, but
modify them for the front-end application: change each occurrence of BACKEND to FRONTEND.

The result will be the following:

- "echo BUILD FRONTEND"
- "docker build -t $CI_REGISTRY_IMAGE_FRONTEND -f Dockerfile_frontend ."
- "docker tag $CI_REGISTRY_IMAGE_FRONTEND
registry.git.lab/cisco-devops/net_inventory/$CI_REGISTRY_IMAGE_FRONTEND:
$CI_COMMIT_REF_SLUG"

Step 12 Within the script: key of the stage: "deploy" section, copy the last list element and paste it at the bottom of
the list. Change the name from BACKEND to FRONTEND.

- "docker push registry.git.lab/cisco-devops/net_inventory/$CI_REGISTRY_IMAGE_FRONTEND:$CI_COMMIT_REF_SLUG"

Step 13 Press Ctrl-S to save the file.

Step 14 View the changes by issuing the git diff command. You will see the changes that you have made to the code
in standard diff format. Review the output and ensure that it is as expected.

student@student-vm:lab07/net_inventory (fe_container)$ git diff
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 37805b8..916c481 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -20,11 +20,15 @@ build:
- "echo BUILD BACKEND"
- "docker build -t $CI_REGISTRY_IMAGE_BACKEND -f Dockerfile_backend ."
- "docker tag $CI_REGISTRY_IMAGE_BACKEND
registry.git.lab/cisco-devops/net_inventory/$CI_REGISTRY_IMAGE_BACKEND:
$CI_COMMIT_REF_SLUG"
+ - "echo BUILD FRONTEND"
+ - "docker build -t $CI_REGISTRY_IMAGE_FRONTEND -f Dockerfile_frontend ."
+ - "docker tag $CI_REGISTRY_IMAGE_FRONTEND
registry.git.lab/cisco-devops/net_inventory/$CI_REGISTRY_IMAGE_FRONTEND:
$CI_COMMIT_REF_SLUG"

deploy:
stage: "deploy"
script:
- "docker push registry.git.lab/cisco-devops/net_inventory/$CI_REGISTRY_IMAGE_DB:
$CI_COMMIT_REF_SLUG"
- "docker push
registry.git.lab/cisco-devops/net_inventory/$CI_REGISTRY_IMAGE_BACKEND:
$CI_COMMIT_REF_SLUG"
+ - "docker push
registry.git.lab/cisco-devops/net_inventory/$CI_REGISTRY_IMAGE_FRONTEND:
$CI_COMMIT_REF_SLUG"
only:
- "master"

Step 15 Add the file to the git index by issuing the git add .gitlab-ci.yml command.

student@student-vm:lab07/net_inventory (fe_container)$ git add .gitlab-ci.yml

Step 16 Commit the file to git by issuing the git commit -m "ADD FRONTEND TO REGISTRY" command.

student@student-vm:lab07/net_inventory (fe_container)$ git commit -m "ADD FRONTEND TO REGISTRY"
[fe_container 6319cdf] ADD FRONTEND TO REGISTRY
1 file changed, 4 insertions(+)

Step 17 Push the branch up to GitLab by issuing the git push origin fe_container command. When prompted,
provide your GitLab credentials.

student@student-vm:lab07/net_inventory (fe_container)$ git push origin fe_container
Username for 'https://git.lab': student
Password for 'https://[email protected]':
warning: redirecting to https://git.lab/cisco-devops/net_inventory.git/
Counting objects: 3, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 367 bytes | 367.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0)
remote:
remote: To create a merge request for fe_container, visit:
remote: https://git.lab/cisco-devops/net_inventory/merge_requests/new?merge_request
%5Bsource_branch%5D=fe_container
remote:
To https://git.lab/cisco-devops/net_inventory
* [new branch] fe_container -> fe_container
student@student-vm:lab07/net_inventory (fe_container)$

Create a Merge Request


In these steps, you will create a merge request on the GitLab Server to contribute your changes back
upstream to the master branch within the net_inventory project.

Step 18 From the GitLab net_inventory project, choose Merge Request in the left panel.

Step 19 In the upper right corner, click the Create merge request button.

Step 20 Set the source branch to fe_container and the target branch to master, then click the Compare branches and
continue button. If there is only a single branch, you may skip this step and continue with the next step.

Step 21 Scroll down and click the Submit merge request button.

View the Pipeline
In these steps, you will view the status of the pipeline. You can view the details of each job as if you had run
the commands in your bash shell.

Step 22 From the left navigation bar, choose CI/CD > Jobs.

Step 23 Find the job that matches your merge request number, which is likely the topmost job, and select the job ID
number, such as #48. You could have jumped directly to this point from the merge request; however, it is
important to know how to navigate to jobs when you did not submit the merge request yourself.

Step 24 Scroll up and down to review the output. Notice that it contains the commands from the script steps of the
build stage.

Commit the Merge Request to Master
In these steps, you will merge the fe_container branch into master and observe the resulting pipeline.

Step 25 From the left navigation, choose Merge Requests.

Step 26 Find the merge request that matches your commit message of ADD FRONTEND TO REGISTRY and
click it.

Step 27 Click Merge.

Step 28 Click on the number that follows Pipeline, such as #32, and review the job.

Step 29 Click the deploy icon to review the deploy output.

Step 30 Review the docker push commands and note their success.

Step 31 From the left navigation panel, choose Packages > Container Registry.

Step 32 Review the various containers by expanding them.

Task 3: Test the Container Builds


At this point, you have automated the process of building the containers. This task verifies that the automation
is working as expected.

Activity

Build the Application with Local env Files


Start the application containers by using docker run commands that reference the local env files.

Step 1 Change directory to ~/labs/lab07 by issuing the cd ~/labs/lab07 command. This folder contains env files for
the db, back-end, and front-end containers.

student@student-vm:lab07/net_inventory (fe_container)$ cd ~/labs/lab07/
student@student-vm:labs/lab07$
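The env files consumed by docker run --env-file are plain lists of KEY=VALUE pairs, one per line, with #-prefixed comment lines ignored. A hypothetical env_file_db might look like the following; the variable names and values shown are illustrative, and the lab's actual files may differ:

```
# Hypothetical env_file_db contents (KEY=VALUE format, one variable per line)
POSTGRES_USER=net_inventory
POSTGRES_PASSWORD=secret
POSTGRES_DB=inventory
```

Keeping these values in env files, rather than baking them into the images, lets the same image run with different credentials in each environment.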

Step 2 Create a network to attach to the test containers using the docker network create test_bridge command.

student@student-vm:labs/lab07$ docker network create test_bridge
abc14106d9015fdfa26529c6d2cb15d129ded1c537cc56ff794c63d0eb719523
student@student-vm:labs/lab07$

Step 3 Create a test_db container, referencing a local env file using the
docker run -itd --name test_db --network test_bridge --env-file env_file_db registry.git.lab/cisco-
devops/net_inventory/net_inventory_db:master command.

student@student-vm:labs/lab07$ docker run -itd --name test_db --network test_bridge --env-file env_file_db registry.git.lab/cisco-devops/net_inventory/net_inventory_db:master
Unable to find image
'registry.git.lab/cisco-devops/net_inventory/net_inventory_db:master' locally
master: Pulling from cisco-devops/net_inventory/net_inventory_db
743f2d6c1f65: Pull complete
5d307000f290: Pull complete
29837b5e9b78: Pull complete
3090df574038: Pull complete
dc0b4463fa0e: Pull complete
1fb834895f59: Pull complete
59169bd605be: Pull complete
a950d631bfe9: Pull complete
19906d8610a9: Pull complete
f073bb1dfb35: Pull complete
d2f60e906bcb: Pull complete
0a8c5d1e3f51: Pull complete
50c8d3614d4f: Pull complete
5d051cf29253: Pull complete
Digest: sha256:bc8c968b9fc913edce30ad5e2b427cd8538d5c90e0d3a638e29073b1ff0eed39
Status: Downloaded newer image for
registry.git.lab/cisco-devops/net_inventory/net_inventory_db:master
6ac841ecd261f8ead924f9d573446c7c131fe6473e19eef4772ddf90dca38ae5

Step 4 Create a test_backend container, referencing a local env file using the
docker run -itd --name test_backend --network test_bridge -p 5001:5001 --env-file env_file_backend
registry.git.lab/cisco-devops/net_inventory/net_inventory_backend:master command.

student@student-vm:labs/lab07$ docker run -itd --name test_backend --network
test_bridge -p 5001:5001 --env-file env_file_backend
registry.git.lab/cisco-devops/net_inventory/net_inventory_backend:master
Unable to find image
'registry.git.lab/cisco-devops/net_inventory/net_inventory_backend:master' locally
master: Pulling from cisco-devops/net_inventory/net_inventory_backend
80369df48736: Already exists
aaba0609d543: Already exists
f6c315699b29: Already exists
1ed59a75505b: Already exists
69aee1181685: Already exists
128605feeed4: Already exists
4b1a5145a1fa: Already exists
7f6169d4068e: Already exists
a7d3e818cd93: Pull complete
7032720c171b: Pull complete
6e38aa71d074: Pull complete
Digest: sha256:2d0d47bdbbf0ada652af4787eaacd91a207b9c474bc9238271700b316f949cb3
Status: Downloaded newer image for
registry.git.lab/cisco-devops/net_inventory/net_inventory_backend:master
e17bf2d5b5415a01f2e0c8e3f6f3078079bebc2d603081298272c02cad2c4594

Step 5 Create a test_frontend container, referencing a local env file using the
docker run -itd --name test_frontend --network test_bridge -p 5000:5000 --env-file env_file_frontend
registry.git.lab/cisco-devops/net_inventory/net_inventory_frontend:master command.

student@student-vm:labs/lab07$ docker run -itd --name test_frontend --network test_bridge -p 5000:5000 --env-file env_file_frontend registry.git.lab/cisco-devops/net_inventory/net_inventory_frontend:master
Unable to find image
'registry.git.lab/cisco-devops/net_inventory/net_inventory_frontend:master' locally
master: Pulling from cisco-devops/net_inventory/net_inventory_frontend
80369df48736: Already exists
aaba0609d543: Already exists
f6c315699b29: Already exists
1ed59a75505b: Already exists
69aee1181685: Already exists
128605feeed4: Already exists
4b1a5145a1fa: Already exists
7f6169d4068e: Already exists
d169ceae7fa4: Pull complete
ca8d9a5ddf37: Pull complete
Digest: sha256:05828b7a33d0f04226e9e3d524afbf84906549bbcf7f316d22da0209b773b0cb
Status: Downloaded newer image for
registry.git.lab/cisco-devops/net_inventory/net_inventory_frontend:master
fced438fc4e64a9c7e2374b99e588e245389dc4fbf1d11ba6570ec55b173e523
student@student-vm:labs/lab07$

Step 6 From the Chrome browser, navigate to http://localhost:5001/api/docs to verify that the back-end application
is running.

Step 7 From the Chrome browser, navigate to http://localhost:5000 to verify that the front-end application is
running.

Summary
In this lab, you built a pipeline that deploys container images upon successful merge to the master branch.
In doing so, you covered many GitLab features, such as merge requests, secrets management, runners,
pipelines, and the container registry. GitLab-CI is a powerful tool for testing and deployment and this lab is
only intended to be an introduction to those features.

Continuous Delivery with GitLab
You have built out a pipeline that has completed all the testing and merged code together. The next step in
your journey with the pipeline is to complete the delivery of the application/system so that it can be put into
production.

GitLab for Continuous Delivery


• Code Review
• Stage leveraging pipeline for merge to master
• Runner (not GUI) needs access to the hosts being delivered/deployed to

Continuous Delivery is packaging up the application and staging it for promotion into production. During
the review stage, user acceptance testing (UAT) is done by the end user to ensure that requirements are met
and no bugs are discovered. During UAT, it is also common to perform smoke testing. Smoke testing is
preliminary testing to ensure that the application runs and integrates with other systems; it is commonly a
fast, high-level test to ensure compatibility with an environment. Regression tests also run during the
review stage to ensure that existing code functions as expected. If the software does not meet the review
requirements, the application is rejected and is not delivered to production.
Once the code meets the review requirements for the project, it can be delivered. The code is ready for
production and must be manually promoted into the production instance. A network configuration example
would be staging a Border Gateway Protocol (BGP) neighbor configuration but leaving the neighbor
inactive, requiring manual activation to establish the connection.
Continuous Deployment takes a step beyond Continuous Delivery by having the system complete the
promotion to production. All the same CI testing, code review, and delivery steps are completed, providing
confidence that the code is ready for production, so it can be put into production automatically. Commonly,
post-deployment tests are run automatically to verify a successful deployment. If the deployment fails,
alerts can be triggered for automated or manual rollbacks.
1. Which component of the DevOps Pipeline delivers code to the production servers and requires
manual promotion into production?
a. Continuous Packaging
b. Continuous Deployment
c. Continuous Integration
d. Continuous Delivery

Discovery 8: Automate the Deployment of an
Application
Introduction
You are familiar with the CI/CD pipeline. You can publish new Docker images and build artifacts required
to deploy the application. In this lab, you will automate the deployment of an application after a merge
request is officially merged.

Topology

Job Aid

Device Information

Device Description FQDN/IP Address Credentials

Student Workstation Linux Ubuntu VM 192.168.10.10 student, 1234QWer

GitLab Git Repository git.lab student, 1234QWer

GitLab Container Registry Container Registry registry.git.lab student, 1234QWer

k8s1 Kubernetes k8s1 student, 1234QWer

Command List
The table describes the commands that are used in this activity. The commands are listed in alphabetical
order so that you can easily locate the information that you need. Refer to this list if you need configuration
command assistance during the lab activity.

Command Description

cd directory name To change directories within the Linux file system, use the cd
command. You will use this command to enter a directory where the
lab scripts are housed. You can use tab completion to finish the name
of the directory after you start typing it.

docker-compose rm -f The command to remove all docker-compose-created attributes, such
as containers and networks.

docker-compose stop The command to stop the docker-compose application.

docker-compose up -d The command to build and start the application. The -d flag runs
the application in the background.

docker build -t name:tag -f filename The command to build a Docker image. The -t flag names and
path tags the image as you specify. The -f flag is used when the file is not
named with the standard Dockerfile filename. The path defines the build
context for the Docker daemon; normally "." (the current directory) is specified.

export key=value The Linux command to set an environment variable in the current
session. An example would be export ENV=PRODUCTION.

git add filename The command to add a file to the git index. Use the -A flag in place of a filename to add all changed files.

git checkout -b branch_name The git command to check out a branch, and optionally create the
branch applying the -b flag.

git clone repository Downloads or clones a git repository into the directory that is the name
of the project in the repository definition.

git commit -m message The git command to commit the changes locally.

git push repo branch_name The git command to push the branch to the remote git service. The
repo is normally in the form of a named instance, usually a named
remote such as origin.

ssh -tt user@server 'command' The command to SSH to a device; the -tt flag forces pseudo-terminal
allocation so that remote output appears in your terminal. The command is often chained with the && Linux construct.

Task 1: Review Prerequisites to Deploy
There are many things to consider when deploying an application, including analyzing connectivity to a
remote server from the GitLab CI runners. For example, configuring passwordless SSH authentication from
the GitLab server to the target server gives the runner direct access to the application server.
You must ensure that the application server has the software needed to run the new application, such as
Docker and Docker Compose, installed on it. You must also ensure that all variables are set correctly on the
GitLab server. While these are already configured in this lab environment, you will review the steps and
setup for automated application deployment.

Activity

Change the directory and obtain the code for the network inventory application.

Step 1 In the student workstation, find and open Visual Studio Code by clicking the Visual Studio Code icon.

Step 2 From the Visual Studio Code top navigation bar, choose Terminal > New Terminal [Ctrl-Shift-`].

Step 3 Navigate to the terminal section at the bottom of Visual Studio Code.

Step 4 Within the Visual Studio Code terminal, change the directory to labs/lab08 using the cd ~/labs/lab08
command.

student@student-vm:$ cd ~/labs/lab08/

Step 5 Issue the git clone https://git.lab/cisco-devops/net_inventory command to clone the net_inventory
repository.

student@student-vm:labs/lab08$ git clone https://git.lab/cisco-devops/net_inventory


Cloning into 'net_inventory'...
warning: redirecting to https://git.lab/cisco-devops/net_inventory.git/
remote: Enumerating objects: 484, done.
remote: Counting objects: 100% (484/484), done.
remote: Compressing objects: 100% (156/156), done.
remote: Total 484 (delta 335), reused 456 (delta 316)
Receiving objects: 100% (484/484), 3.10 MiB | 13.76 MiB/s, done.
Resolving deltas: 100% (335/335), done.

Step 6 Change directory to the net_inventory directory by issuing cd net_inventory command.

student@student-vm:labs/lab08$ cd net_inventory/
student@student-vm:lab08/net_inventory (master)$

Verify Application Server Readiness


You will work on the k8s1 Kubernetes server to ensure it is ready to deploy the application from the GitLab
CI runner.

Step 7 Establish an SSH session to the k8s1 server using the ssh student@k8s1 command. You will not be
prompted for a password.

student@student-vm:lab08/net_inventory (master)$ ssh student@k8s1
Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-62-generic x86_64)

Last login: Mon Nov 11 06:24:27 2019 from 192.168.10.20

Step 8 SSH key management is a standard practice in Linux environments. You can review the SSH key installed
by the GitLab runner using the cat ~/.ssh/authorized_keys command. You will notice the gitlab-
runner@student-vm entry in the output, proving that the SSH key has been applied. The GitLab runner SSH
key was added to the k8s1 server by issuing the cat ~/.ssh/id_rsa.pub | ssh [email protected]
'cat >> ~/.ssh/authorized_keys' command.

student@k8s1:~$ cat ~/.ssh/authorized_keys


ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABAQDJhXHA552alrKUpHJMrwvXZfM+Ln/JoYw/3GgtpmBVu8uvvqIXdjOlx7q
gdVmX9w2N8P4Ir3JVLiqWHvakRAmAu0J9OZIPJNLOFOhXQhEeo99v6u0/
WfLoin8q65FbJ+qh3WxzH12AcjfnaQiEWmgh1t3So7Gg0YQ+JBYLvCj/W49O33P/
JhsL2GlnT2DSVKEyG05HsmzOfIVi1kosmk1kbYv729i5clYF10SFBDBxF4wB2UFGYbXM2wXSWBy0HM4ZUVqS/
njhYu3qiDo/LuKBSqxiubVtCtEVg1J23KVLL7az4udc3Dy+A0gvBrNUd+n/J1Y2oSGXdyMIxkqreLT5 gitlab-
runner@student-vm

Step 9 Ensure that Docker is installed and operational. Use the docker --version command.

student@k8s1:~$ docker --version


Docker version 18.06.3-ce, build d7080c1

Step 10 Ensure that Docker Compose is installed and operational. Use the docker-compose --version command.

student@k8s1:~$ docker-compose --version


docker-compose version 1.24.1, build 4667896

Step 11 Exit the SSH session to return to your workstation using the exit command.

student@k8s1:~$ exit
logout
Connection to k8s1 closed.
student@student-vm:lab08/net_inventory (master)$

Verify GitLab Server Connectivity


Verify the connectivity and the SSH key from the git.lab server.

Step 12 Establish an SSH session to the git.lab server using the ssh [email protected] command.

student@student-vm:lab08/net_inventory (master)$ ssh [email protected]
Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-60-generic x86_64)

* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage

* Kata Containers are now fully integrated in Charmed Kubernetes 1.16!


Yes, charms take the Krazy out of K8s Kata Kluster Konstruction.

https://ubuntu.com/kubernetes/docs/release-notes

* Canonical Livepatch is available for installation.


- Reduce system reboots and improve kernel security. Activate at:
https://ubuntu.com/livepatch

128 packages can be updated.


98 updates are security updates.

Failed to connect to https://changelogs.ubuntu.com/meta-release-lts. Check your


Internet connection or proxy settings

Last login: Mon Nov 11 05:13:10 2019 from 192.168.10.10


student@gitlab:~$

Step 13 Verify connectivity between the git.lab and k8s1 server by pinging the k8s1 Kubernetes server using the
ping -c 3 k8s1 command.

student@gitlab:~$ ping -c 3 k8s1


PING k8s1 (192.168.10.21) 56(84) bytes of data.
64 bytes from kubernetes.lab (192.168.10.21): icmp_seq=1 ttl=64 time=0.429 ms
64 bytes from kubernetes.lab (192.168.10.21): icmp_seq=2 ttl=64 time=0.575 ms
64 bytes from kubernetes.lab (192.168.10.21): icmp_seq=3 ttl=64 time=0.522 ms

--- k8s1 ping statistics ---


3 packets transmitted, 3 received, 0% packet loss, time 2052ms
rtt min/avg/max/mdev = 0.429/0.508/0.575/0.065 ms

Step 14 Review the user gitlab-runner SSH key. Use the sudo cat /home/gitlab-runner/.ssh/id_rsa.pub command.
Compare the SSH key from this output to the one from the application server in the previous steps. If
prompted, use the credentials that are provided in the Job Aids.

student@gitlab:~$ sudo cat /home/gitlab-runner/.ssh/id_rsa.pub


ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABAQDJhXHA552alrKUpHJMrwvXZfM+Ln/JoYw/3GgtpmBVu8uvvqIXdjOlx7q
gdVmX9w2N8P4Ir3JVLiqWHvakRAmAu0J9OZIPJNLOFOhXQhEeo99v6u0/
WfLoin8q65FbJ+qh3WxzH12AcjfnaQiEWmgh1t3So7Gg0YQ+JBYLvCj/W49O33P/
JhsL2GlnT2DSVKEyG05HsmzOfIVi1kosmk1kbYv729i5clYF10SFBDBxF4wB2UFGYbXM2wXSWBy0HM4ZUVqS/
njhYu3qiDo/LuKBSqxiubVtCtEVg1J23KVLL7az4udc3Dy+A0gvBrNUd+n/J1Y2oSGXdyMIxkqreLT5 gitlab-
runner@student-vm

Step 15 Exit the SSH session to return to your workstation using the exit command.

student@gitlab:~$ exit
logout
Connection to git.lab closed.
student@student-vm:lab08/net_inventory (master)$

Verify GitLab Server


The pipeline was already built. To autodeploy the application, you must safely store the variables in the
GitLab server. Verify that they are set properly.

Step 16 From the Chrome browser, navigate to https://git.lab.

Step 17 Log in with the credentials that are provided in the Job Aids and click Sign in.

Step 18 From the list of projects, choose the cisco-devops/net_inventory project.

Step 19 From the left navigation bar, choose Settings > CI/CD.

Note Be careful not to navigate to the root navigation bar link to CI/CD, as this is the operational component,
not the configuration component.

Step 20 Find the section for Variables and click the Expand button.

Step 21 Ensure that variables are set for CI_REGISTRY_PASSWORD, CI_REGISTRY_USER, SECRET_KEY,
SQLALCHEMY_DATABASE_URI, POSTGRES_DB, POSTGRES_USER, and POSTGRES_PASSWORD.
They are used to deploy the application.
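The deploy step passes these values to the application as environment variables. As an illustration only (the AppConfig class and its default values below are hypothetical, not the actual net_inventory code), an application might consume them like this:

```python
import os

class AppConfig:
    """Hypothetical config object reading the same variable names the lab
    stores as GitLab CI/CD variables. Defaults are placeholders."""
    def __init__(self, env=None):
        env = os.environ if env is None else env
        self.secret_key = env.get("SECRET_KEY", "change-me")
        self.database_uri = env.get("SQLALCHEMY_DATABASE_URI", "sqlite://")
        self.postgres_db = env.get("POSTGRES_DB", "inventory")

# Simulate the environment the deploy script would export on the server.
cfg = AppConfig({"SECRET_KEY": "s3cret", "POSTGRES_DB": "netinv"})
print(cfg.secret_key, cfg.database_uri, cfg.postgres_db)
```

Because each value falls back to a default when the variable is absent, a missing CI/CD variable fails quietly; checking the GitLab settings as in this step avoids that.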

Review the SSH Deploy Script
The script to deploy the application is an SSH command that sends multiple chained commands to deploy
the application. This script will be executed on the GitLab CI runner and will deploy the application to the
k8s1 server. From a high-level perspective, the script sets environment variables, cleans up the environment,
downloads the code from master, and finally uses Docker Compose to bring up the application.

Step 22 Review the gitlab-ci.yml script using the cat .gitlab-ci.yml command. You may already be familiar with the
content of this script, but pay attention to the bottom part of the script.

student@student-vm:lab08/net_inventory (master)$ cat .gitlab-ci.yml
stages:
- "build"
- "deploy"

variables:
CI_REGISTRY_IMAGE_DB: "net_inventory_db"
CI_REGISTRY_IMAGE_BACKEND: "net_inventory_backend"
CI_REGISTRY_IMAGE_FRONTEND: "net_inventory_frontend"

before_script:
- "docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD
https://registry.git.lab"
- "echo $CI_COMMIT_REF_SLUG"

build:
stage: "build"
script:
- "echo BUILD DB"
- "docker build -t $CI_REGISTRY_IMAGE_DB -f Dockerfile_db ."
- "docker tag $CI_REGISTRY_IMAGE_DB
registry.git.lab/cisco-devops/net_inventory/$CI_REGISTRY_IMAGE_DB:$CI_COMMIT_REF_SLUG"
- "echo BUILD BACKEND"
- "docker build -t $CI_REGISTRY_IMAGE_BACKEND -f Dockerfile_backend ."
- "docker tag $CI_REGISTRY_IMAGE_BACKEND
registry.git.lab/cisco-devops/net_inventory/$CI_REGISTRY_IMAGE_BACKEND:
$CI_COMMIT_REF_SLUG"
- "echo BUILD FRONTEND"
- "docker build -t $CI_REGISTRY_IMAGE_FRONTEND -f Dockerfile_frontend ."
- "docker tag $CI_REGISTRY_IMAGE_FRONTEND
registry.git.lab/cisco-devops/net_inventory/$CI_REGISTRY_IMAGE_FRONTEND:
$CI_COMMIT_REF_SLUG"

deploy:
stage: "deploy"
script:
- "docker push registry.git.lab/cisco-devops/net_inventory/$CI_REGISTRY_IMAGE_DB:
$CI_COMMIT_REF_SLUG"
- "docker push
registry.git.lab/cisco-devops/net_inventory/$CI_REGISTRY_IMAGE_BACKEND:
$CI_COMMIT_REF_SLUG"
- "docker push
registry.git.lab/cisco-devops/net_inventory/$CI_REGISTRY_IMAGE_FRONTEND:
$CI_COMMIT_REF_SLUG"
#- >-
# ssh -tt student@k8s1
# "export SECRET_KEY=$SECRET_KEY && export
SQLALCHEMY_DATABASE_URI=$SQLALCHEMY_DATABASE_URI &&
# export POSTGRES_DB=$POSTGRES_DB && export POSTGRES_USER=$POSTGRES_USER &&
# export POSTGRES_PASSWORD=$POSTGRES_PASSWORD &&
# cd /deploy/ &&
# rm -rf ./net_inventory || true && git clone
https://git.lab/cisco-devops/net_inventory.git/ && cd /deploy/net_inventory &&
# docker-compose stop || true && docker-compose rm -f || true && docker-compose
up -d"
only:

- "master"
student@student-vm:lab08/net_inventory (master)$

Explain the SSH Deploy Script


You will focus on the bottom part of the script.

You will notice the - >- construct that starts the newly added code. This is YAML's folded block scalar
syntax: it lets you split a single long string across multiple lines, which the parser folds back into one line
(the trailing - strips the final newline).
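The folding behavior can be illustrated without a YAML parser. The fold_block helper below is a simplified stand-in for YAML's folding rules (real folding also treats blank lines and indentation specially), showing how the multiline script becomes one long command string:

```python
def fold_block(lines):
    """Mimic the effect of YAML's '>-' folded scalar: join the lines with
    single spaces; the '-' chomping indicator drops the trailing newline."""
    return " ".join(line.strip() for line in lines)

# Abbreviated lines in the style of the deploy script above.
script_lines = [
    "ssh -tt student@k8s1",
    '"export SECRET_KEY=$SECRET_KEY &&',
    "cd /deploy/ &&",
    'docker-compose up -d"',
]
print(fold_block(script_lines))
```

The runner therefore receives one single shell command, not four separate ones.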

At this point, the autodeployment part of the script is still commented out.

The autodeployment code starts with the SSH command ssh -tt student@k8s1. The -tt flag forces
pseudo-terminal allocation so that the remote command's output is written to the terminal.

The next few lines of code perform the actual autodeployment. You can see a series of export statements for
SECRET_KEY, SQLALCHEMY_DATABASE_URI, POSTGRES_DB, POSTGRES_USER, and
POSTGRES_PASSWORD in the KEY=$KEY format; each $KEY value is expanded from the corresponding
variable stored in GitLab.

The cd /deploy/ command ensures that execution happens in the correct directory. The rm -rf
./net_inventory || true command removes any existing net_inventory folder. The || true part of the command
ensures that the command always returns success, even if the folder does not exist. You will see this construct
multiple times in the next steps.
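The exit-code behavior of || true can be demonstrated from Python (assuming a POSIX shell; the file name here is arbitrary and intentionally nonexistent):

```python
import subprocess

# Without the guard, a failing command returns a nonzero exit code,
# which would abort the chained deploy command.
failing = subprocess.run("rm ./no_such_file 2>/dev/null", shell=True)

# With '|| true', the pipeline sees success even when rm fails.
guarded = subprocess.run("rm ./no_such_file 2>/dev/null || true", shell=True)

print(failing.returncode != 0, guarded.returncode)
```

This is exactly why the deploy script guards its cleanup commands: a missing folder or an already-stopped compose stack should not fail the pipeline.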

The next command in the autodeployment part of the script, git clone
https://git.lab/cisco-devops/net_inventory.git/, clones the repository from GitLab.

The script then changes the directory to /deploy/net_inventory, where the Docker Compose commands will
run.

Finally, the script cleans up the environment and starts your application using the docker-compose stop ||
true, docker-compose rm -f || true, and docker-compose up -d commands.

Task 2: Trigger Deploy on Successful Build


In this task, you will deploy the application to the k8s1 server. The .gitlab-ci.yml file is predeployed and the
environment was confirmed operational in the previous task. You will update the .gitlab-ci.yml file and
create a merge request to kick off the deployment process.

Activity

Update the CI File

Step 1 Open the .gitlab-ci.yml file in Visual Studio Code.

Step 2 Remove the hash (#) characters that comment out the lines under the deploy stage's script: key. This will
make the deployment configuration active on the next push to GitLab.

Note You must remove only the hash characters and no other character. YAML is very sensitive to
formatting, such as indentation and spaces.

Step 3 Press Ctrl-S to save the file.

Create the Merge Request


Change to the appropriate directory and obtain the code for the network inventory application.

Step 4 Create a new branch called deployapp using the git checkout -b deployapp command.

student@student-vm:lab08/net_inventory (master)$ git checkout -b deployapp


M .gitlab-ci.yml
Switched to a new branch 'deployapp'

Step 5 Add the file to the git index using the git add .gitlab-ci.yml command.

student@student-vm:lab08/net_inventory (deployapp)$ git add .gitlab-ci.yml

Step 6 Commit the file to git using the git commit -m "ADD APPLICATION DEPLOYMENT STEP"
command.

student@student-vm:lab08/net_inventory (deployapp)$ git commit -m "ADD APPLICATION


DEPLOYMENT STEP"
[deployapp 318ab43] ADD APPLICATION DEPLOYMENT STEP
1 file changed, 8 insertions(+), 8 deletions(-)

Step 7 Push the branch to GitLab using the git push origin deployapp command. When prompted, provide your
GitLab credentials.

student@student-vm:lab08/net_inventory (deployapp)$ git push origin deployapp


Username for 'https://git.lab': student
Password for 'https://[email protected]':
warning: redirecting to https://git.lab/cisco-devops/net_inventory.git/
Counting objects: 3, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 345 bytes | 345.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0)
remote:
remote: To create a merge request for deployapp, visit:
remote: https://git.lab/cisco-devops/net_inventory/merge_requests/new?merge_request
%5Bsource_branch%5D=deployapp
remote:
To https://git.lab/cisco-devops/net_inventory
* [new branch] deployapp -> deployapp
student@student-vm:lab08/net_inventory (deployapp)$

Step 8 From the GitLab net_inventory project, choose Merge Request in the left panel.

Step 9 On the upper right corner, click the New merge request button.

Step 10 Set the Source Branch to deployapp, the target branch to master, and click the Compare branches and
continue button. If there is only a single branch, you may skip this step and continue with the next step.

Step 11 Scroll down and click the Submit merge request button.

Step 12 Wait for the job to complete, then click Merge.

Step 13 Click on the number that follows Pipeline, such as #55, and review the job.

Step 14 Click the deploy icon to review the deploy output.

Step 15 Review the output to determine if your job succeeded.

Review the Deployed Application

Step 16 From the Chrome browser, navigate to http://k8s1:5001/api/docs to verify that the back-end application is
running.

Step 17 From the Chrome browser, navigate to http://k8s1:5000 to verify that the front-end application is running.

Summary
In this lab, you reviewed the many requirements needed for continuous deployment. As you have seen, you
must consider connectivity, access, server requirements, and security concerns. Finally, you deployed the
application.

Summary Challenge
1. Which host needs to have access to the servers that code is being delivered to?
a. GitLab GUI
b. GitLab Runner
c. GitLab Tester
d. GitLab Core
2. Which action is not part of the GitLab CI?
a. Code Commit
b. Unit Tests
c. Integration Tests
d. Build
3. Which keyword defines the steps that the pipelines take?
a. steps
b. stages
c. GitLab feet
d. foot
e. clock
4. Which application is responsible for the execution of scripts defined in pipelines?
a. GitLab GUI
b. GitLab Core
c. GitLab Runners
d. Docker containers
e. EIGRP
5. Which exit code status is required to indicate that a GitLab script execution has passed?
a. 10
b. 5
c. 100
d. -1
e. 0
6. Which GitLab CI/CD keyword runs regardless of the success of the script execution?
a. before_script
b. always
c. mandatory
d. after_script
7. Which GitLab CI/CD keyword is used to execute exclusively if the master branch is changed?
a. only
b. except
c. master-only
d. must

Answer Key
GitLab Overview
1. A
2. D

GitLab CI Overview
1. B

Continuous Delivery with GitLab


1. D

Summary Challenge
1. B
2. A
3. B
4. C
5. E
6. D
7. A

Section 7: Validating the Application Build
Process

Introduction
Validation of the build process is important to provide high-quality code and to verify that the application
will work when promoted into production. The build process uses several tools to validate the system,
including linters, code formatters, and unit tests, which are important concepts in completing the
build process.

Automated Testing in the CI Flow


Automation in the CI flow provides a consistent testing methodology. Automation helps speed up
testing and removes the reliance on a human to test the code in the integration phases. This approach helps
accelerate the delivery of applications and gives you the agility and speed to deliver solutions at the same
pace as the DevOps teams.

Automated Testing in the CI Flow


• Linting
• Code formatters
• Security analysis
• Unit tests

The automation and tests that are discussed here run during the verify stage of the pipeline. These
automated tests start with fast style checkers and formatters. These tools examine the code without
executing it and determine whether there are errors in the code. Some linters will also find execution
errors in the code.
After the static code analysis is done, you move on to functional tests of the code. This process runs the
code with unit tests to verify that the code is functioning as expected.

Linting
• Pylint
• Pyflakes
• pycodestyle
• pydocstyle

# pylint net_inventory
*** Module net_inventory.frontend.__init__
net_inventory/frontend/__init__.py:3:0: C0301: Line too long (118/100) (line-too-long)
*** Module net_inventory.shared.utils
net_inventory/shared/utils.py:20:0: C0301: Line too long (110/100) (line-too-long)
*** Module net_inventory.shared.setup
---Your code has been rated at 9.75/10 (previous run: 8.40/10, +1.35)
# pylint net_inventory/
---Your code has been rated at 10.00/10 (previous run: 9.75/10, +0.25)

Linting is the process of checking your code for syntax and logical errors through static code analysis. The
term originates from lint, a classic Unix tool that flagged suspicious constructs in C source code before
compilation, and has since expanded to tools across multiple languages.
In Python, the predominant linters are Pylint, Pyflakes, pycodestyle, and pydocstyle. Pylint is one of the
oldest linters and is considered very mature. Because of its age, many components have been added to its
functionality, and because of the number of checks it performs, it also takes the longest to execute of these
linters. However, the execution time is still very short relative to the process of executing a Python
application or script.

Pylint is customizable with modification of a .pylintrc file. This file maintains the configuration of the
Pylint execution and can indicate which rules should be ignored, which files should be ignored, and many
other settings. There are many methods for installing Pylint based on which operating system you are
running. Using pip to install with the pip install pylint command is a common practice. To generate a
default .pylintrc file, you can execute the following command with the destination of the file listed on the
right-hand side of the “>”: pylint --generate-rcfile > ~/.pylintrc.
Pyflakes is similar to Pylint, but it does not do any style checking; Pylint performs both a style check and a
syntax check. The benefit of Pyflakes is that the syntax is checked very quickly compared to Pylint, but
no style checks are included. Installation can be done with the pip install --upgrade pyflakes command.
Pycodestyle is a very fast linter that checks Python code against the PEP8 standard only, with no syntax
checking. Pyflakes does not have a style checker, but pycodestyle does. Installation can be done with the
pip install --upgrade pycodestyle command.
Pydocstyle is similar to pycodestyle in checking for standards, but it is used to check that the documentation
strings are styled properly. This linter can be installed with the pip install --upgrade pydocstyle command.
There are more linters and style checkers available, but the examples that are discussed here are some of the
most commonly used.
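The core of a style check such as Pylint's line-too-long (C0301), shown in the sample output above, can be sketched in a few lines. This is an illustration of what a linter does internally, not the real Pylint or pycodestyle implementation:

```python
def check_line_length(source, max_length=100):
    """Return (line_number, length) for every line longer than max_length,
    in the spirit of the 'line-too-long' messages in the pylint output."""
    problems = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if len(line) > max_length:
            problems.append((lineno, len(line)))
    return problems

# A short line followed by a deliberately long one (125 characters).
code = "short = 1\n" + "x = " + "1 + " * 30 + "1\n"
print(check_line_length(code, max_length=100))
```

A real linter layers many such checks over a parsed representation of the code, which is why Pylint is slower but far more thorough than this sketch.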

Code Formatting

• Black
• yapf
• autopep8

# black ./
reformatted /app/net_inventory/__init__.py
reformatted /app/net_inventory/backend/models.py
reformatted /app/net_inventory/frontend/__init__.py
reformatted /app/net_inventory/backend/api.py
reformatted /app/net_inventory/shared/config.py
reformatted /app/net_inventory/shared/setup.py
reformatted /app/net_inventory/shared/utils.py
reformatted /app/run.py
reformatted /app/tests/test_routes.py
All done! 9 files reformatted, 8 files left unchanged.

It is important that your code format is consistently used throughout the organization. One of the most
recent styles that is being adopted very quickly is the Black format. The Python Black page states that Black
is an “opinionated code formatter.” The Black formatter will make changes to your code to match an
established style, most commonly the PEP8 standard.
To install Black, issue the command pip install --upgrade black. To have Black autoformat your code, run
the command black <filename>. To execute on all .py files in a directory, run black . (the trailing dot means
the current directory). Finally, if you want to verify that the files conform to the Black style as a step in your
pipeline, execute the command black . --check. This command only checks the files and returns a nonzero
exit code on any difference, which signals a failure to the pipeline.
One of the original code formatters is autopep8. This formatter can enforce much of the PEP8 coding
standard, but does not reformat strings. As long as the code meets the PEP8 standards, even if it is "ugly,"
the formatter will not modify it. Black and yapf can be used to reformat "ugly" code into a consistent
design.
The last code formatter of note is yapf. Google sponsors this project via their GitHub page. yapf is a highly
configurable code formatter. It will not try to fix linting issues or make your code compliant with PEP8
guidelines, but it will format the code so that it matches its standards.
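The --check pattern can be sketched with a toy formatter that only strips trailing whitespace. The real Black does far more, but the exit-code contract with the pipeline is the same idea:

```python
def toy_format(source):
    """Stand-in for a real formatter: normalize trailing whitespace only."""
    return "\n".join(line.rstrip() for line in source.splitlines()) + "\n"

def check(source):
    """Return 0 if the source is already formatted, 1 otherwise -- the
    exit-code contract a CI pipeline relies on with 'black . --check'."""
    return 0 if toy_format(source) == source else 1

# Already formatted vs. trailing-whitespace offender.
print(check("x = 1\n"), check("x = 1   \n"))
```

In a pipeline, that nonzero return value is what stops the job, forcing the developer to run the formatter before merging.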

Static Code Analysis: Security


• Bandit
– Static code analysis
– Configuration file of .bandit in the project directory

Code scanned:
Total lines of code: 201
Total lines skipped (#nosec): 0

Run metrics:
Total issues (by severity):
Undefined: 0.0
Low: 2.0
Medium: 0.0
High: 0.0
Total issues (by confidence):
Undefined: 0.0
Low: 0.0
Medium: 2.0
High: 0.0
Files skipped (0):

Bandit is a static code analysis security scanner. This tool is designed to find common security issues in
Python code. Bandit can scan and search through code and provide a report. If there are any issues that are
identified, the Bandit run will give a nonzero exit code, which tells a CI tool that Bandit found a problem
and should not continue. Bandit is expandable via plug-ins that are installed in a plug-in directory.
To install Bandit, execute the command pip install bandit at the command line. To execute Bandit, run the
command bandit <filename>. To execute against all the files in the directory and its subdirectories, run
bandit -r . (the -r flag recurses through the current directory).
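A Bandit-style check walks the parsed syntax tree looking for risky patterns. The sketch below flags calls to eval (in the spirit of Bandit's B307 check); it illustrates the approach and is not Bandit's actual implementation:

```python
import ast

def find_eval_calls(source):
    """Return the line numbers of bare eval() calls found by walking
    the abstract syntax tree of the given source code."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            hits.append(node.lineno)
    return hits

sample = "x = 1\nresult = eval(user_input)\n"
print(find_eval_calls(sample))
```

Because the scan is static, the code is never executed; the scanner only inspects its structure, which is what makes tools like Bandit safe to run on untrusted branches in CI.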

Unit Tests
# python -m pytest
==================== test session starts ====================
platform linux -- Python 3.7.5, pytest-5.2.2, py-1.8.0, pluggy-0.13.0
rootdir: /app, inifile: pytest.ini
collected 3 items

tests/test_routes.py .F.                              [100%]

========================= FAILURES ==========================
____________ TestNetInventoryAPI.test_get_device ____________
self = <test_routes.TestNetInventoryAPI testMethod=test_get_device>
<...>
tests/test_routes.py:73: AssertionError
=============== 1 failed, 2 passed in 1.46s =================

• unittest
• pytest

Unit testing is the act of individually testing the “units” of code that make up the entire code base. In
Python, this means testing individual classes, methods, or functions to verify that their output, given
known inputs, is consistent. If you pass an integer into a function that squares the integer, it should
always return the same result. If the result differs from what is expected, the unit test fails.
In Python, there are two primary ways to perform unit testing. Built into Python itself is unittest. Unit tests
are created in a separate file, usually with the same filename as the Python file being tested, but prefixed with test_.
You can execute Python unittest with the command python test_file.py. This command executes all the
tests within the test_file.py file and outputs a set of results. The exit code is 0 if all tests are successful and a
nonzero number when a test has failed.
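The squaring example above can be sketched with unittest as follows; the function, class, and file names are illustrative, not from the course repository:

```python
# test_square.py -- a minimal unittest sketch of the squaring example.
import unittest


def square(number):
    """Return the square of an integer."""
    return number * number


class TestSquare(unittest.TestCase):
    def test_known_values(self):
        # Known input -> known output is the core idea of a unit test.
        self.assertEqual(square(3), 9)
        self.assertEqual(square(-4), 16)


# Load and run the test case programmatically; running the file with a
# unittest.main() guard via "python test_square.py" behaves the same way.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSquare)
result = unittest.TextTestRunner(verbosity=2).run(suite)
print("exit code would be", 0 if result.wasSuccessful() else 1)
```

A CI tool only looks at the process exit code, which is why a single failing assertion is enough to stop a pipeline stage.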
In addition to unittest, the pytest tool is commonly used for testing Python code. Pytest is similar to unittest
in that you write tests in a separate file whose name is prefixed with test_. Pytest also prints the results in an
easy-to-read fashion showing where there are failures and successes. When you execute pytest in your
directory, it automatically discovers any files that start with test_ or end in _test. The command to execute is
pytest, and it is common to add the -vv flag to increase the output verbosity; you can
use -v through -vvvv with pytest to get varying degrees of verbosity and output. Pytest is installed with the
pip install --upgrade pytest command.
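The same check in pytest style looks like the sketch below; pytest collects any file named test_*.py and any function named test_*. The names here are illustrative.

```python
# tests/test_square.py -- pytest-style tests are plain functions.


def square(number):
    return number * number


def test_square_positive():
    # No TestCase class is needed; a bare assert is enough for pytest.
    assert square(3) == 9


def test_square_negative():
    assert square(-4) == 16
```

Running pytest -vv in the containing directory would discover and run both functions; because they are plain functions with bare assert statements, they can also be invoked directly.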

DevOps Pipeline

As you implement a CI system and add automation tooling, you will see the benefits of
checking your code into the source control system regularly. On each check-in, the CI
system can initiate the automated tests. These tests provide feedback to you before the new
code is merged into the production branch. If there are errors in the code, you can fix them quickly without impacting
production and resubmit the fixes.
As you work through the DevOps pipeline, you have completed the build phase successfully. This phase
completion gives you the confidence to deploy the application in the staging environment to get feedback
from user acceptance testing (UAT). Once that testing is complete, there is confidence that you can move into
production deployment. Within a Docker pipeline, the new image that has been tested and verified would be
pushed to the Docker registry as part of the deploy phase.
1. Which is the original code formatting tool used with Python?
a. autopep8
b. Black
c. Pycodestyle
d. yapf

Discovery 9: Validate the Application Build Process
Introduction
As applications grow, they become more difficult to maintain. Building an application is the easy part; the
difficult part is ongoing support and maintenance. A mature process verifies the state and quality of the
application across several components, such as linting, code formatting, and unit tests, all during the
merge request process. You will build solutions for each of these testing types.

Topology

Job Aid

Device Information

Device Description FQDN/IP Address Credentials

Student Workstation Linux Ubuntu VM 192.168.10.10 student, 1234QWer

GitLab Git Repository git.lab student, 1234QWer

Command List
The table describes the commands that are used in this activity. The commands are listed in alphabetical
order so that you can easily locate the information that you need. Refer to this list if you need configuration
command assistance during the lab activity.

Command Description

cd directory_name To change directories within the Linux file system, use the cd
command. You will use this command to enter the directory where
the lab scripts are housed. You can use tab completion to finish the
name of the directory after you start typing it.

black folder_name The command to run the Python code formatter called Black, which enforces
code styling standards.

docker run -itd -p port --name The command to pull a container image from a registry (if needed) and run a container.
container container_registry / The -i flag is for interactive, and the -t flag creates a pseudo-
gitlab_organization / gitlab_project / TTY to the container. The -d flag runs the container in a detached state. The
container:tag command command is any command valid on the container. The --name flag
names the container as you intend, rather than randomly generating a
name for you. The -p flag publishes a port; it can be in either
host_port:container_port format or port format.

export key=value The Linux command to set an environment variable in the current
session. An example would be export ENV=PRODUCTION.

pylint folder_name The command to run the Python linter against a specific file or folder.

python -m pytest -v folder_name The python -m component runs a Python library as a script. The
pytest command runs a testing framework called pytest, and the -v flag
provides greater verbosity.

Task 1: Implement Code Linting


Linting is the process of executing a utility or program that inspects software code for idiomatic and
stylistic errors. A linter, or linting tool, helps point out potential issues in your code, such as syntax errors or
violations of best practices in the language of choice. Linting is often the first line of testing to ensure that all
code submitted to a repository follows repository and industry-standard practices. For example, you
can use Python linters or YAML linters to get started.
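As an illustrative sketch (an invented module, not from the lab repository), the comments below name the standard pylint message IDs that the corresponding problems would raise or avoid:

```python
"""A module docstring like this one satisfies pylint's C0114 check."""
# Each comment names the pylint message the line relates to.
import os  # W0611 (unused-import) -- "os" is imported but never used


def list_devices():
    """A docstring here avoids C0116 (missing-function-docstring)."""
    # A line longer than the configured maximum would raise C0301
    # (line-too-long); this one is safely short.
    return ["nyc-rt01", "nyc-rt02"]
```

Running pylint against a module like this would report the unused import but score the rest cleanly, which is exactly the style of per-message-ID output you will see in the lab.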

Activity

Change the directory and obtain the code for the network inventory application.

Step 1 In the student workstation, find and open Visual Studio Code by clicking the Visual Studio Code icon.

Step 2 From the Visual Studio Code top navigation bar, choose Terminal > New Terminal [Ctrl-Shift-`].

Step 3 Navigate to the terminal section at the bottom of Visual Studio Code.

Step 4 Within the Visual Studio Code terminal, change the directory to labs/lab09 using the cd ~/labs/lab09
command.

student@student-vm:$ cd ~/labs/lab09/

Step 5 Issue the git clone https://git.lab/cisco-devops/net_inventory command to clone the net_inventory
repository.

student@student-vm:labs/lab09$ git clone https://git.lab/cisco-devops/net_inventory
Cloning into 'net_inventory'...
warning: redirecting to https://git.lab/cisco-devops/net_inventory.git/
remote: Enumerating objects: 416, done.
remote: Counting objects: 100% (416/416), done.
remote: Compressing objects: 100% (114/114), done.
remote: Total 416 (delta 290), reused 416 (delta 290)
Receiving objects: 100% (416/416), 3.10 MiB | 14.16 MiB/s, done.
Resolving deltas: 100% (290/290), done.

Step 6 Change directory to the net_inventory directory by issuing cd net_inventory command.

student@student-vm:labs/lab09$ cd net_inventory/
student@student-vm:lab09/net_inventory (master)$

Build an Image and Run the Linter


Next, you will build a container to run the tests within. Then you will run the linter and identify the potential
issues.

Step 7 Start a new container based on the python37 image with the name test_lint and mount a local directory /app
inside that container so that you can then run local programs and access local files from within the container.
This will allow you to use the container as an execution engine to run your tests. Use the docker run -it --
name test_lint -v ${PWD}:/app registry.git.lab/cisco-devops/containers/python37:latest sh command.

student@student-vm:lab09/net_inventory (master)$ docker run -it --name test_lint -v ${PWD}:/app registry.git.lab/cisco-devops/containers/python37:latest sh
Unable to find image 'registry.git.lab/cisco-devops/containers/python37:latest' locally
latest: Pulling from cisco-devops/containers/python37
80369df48736: Already exists
aaba0609d543: Already exists
f6c315699b29: Already exists
1ed59a75505b: Already exists
69aee1181685: Already exists
128605feeed4: Already exists
4b1a5145a1fa: Already exists
3c98c4e7fe1a: Pull complete
Digest: sha256:d25506ce75aa4b219831ae2f8c642b75a51d575b8db7729319382f0c96b70f08
Status: Downloaded newer image for
registry.git.lab/cisco-devops/containers/python37:latest
#

Step 8 Within the container, change directory to /app using the cd /app command in the terminal window within the
container.

# cd /app

Step 9 Within the container, run the Python linter. Use the pylint net_inventory command. This command will lint
everything within the net_inventory application directory, recursively linting every file it finds.

Take time to explore the output. As an example, you will notice that the linter gives you
output in the file_name:line_num:col_num: msg_id: description (category) format.
When linting with the pylint command, you also receive a score for your code based on the
code style and quality. Take note of the score in the last line of the output.

After this step, do not exit the container. You should remain inside the container for the rest of
this lab activity.

# pylint net_inventory
************* Module net_inventory
net_inventory/__init__.py:1:0: C0114: Missing module docstring (missing-module-
docstring)
net_inventory/__init__.py:2:0: W0622: Redefining built-in '__package__' (redefined-
builtin)
************* Module net_inventory.frontend.views
net_inventory/frontend/views.py:1:0: C0114: Missing module docstring (missing-module-
docstring)
net_inventory/frontend/views.py:8:0: C0116: Missing function or method docstring
(missing-function-docstring)
************* Module net_inventory.frontend.__init__
net_inventory/frontend/__init__.py:3:0: C0301: Line too long (118/100) (line-too-long)
************* Module net_inventory.frontend
net_inventory/frontend/__init__.py:1:0: C0114: Missing module docstring (missing-
module-docstring)
************* Module net_inventory.shared.utils
net_inventory/shared/utils.py:20:0: C0301: Line too long (110/100) (line-too-long)
net_inventory/shared/utils.py:1:0: C0114: Missing module docstring (missing-module-
docstring)
net_inventory/shared/utils.py:10:0: C0116: Missing function or method docstring
(missing-function-docstring)
net_inventory/shared/utils.py:29:0: C0116: Missing function or method docstring
(missing-function-docstring)
net_inventory/shared/utils.py:33:0: C0116: Missing function or method docstring
(missing-function-docstring)
************* Module net_inventory.shared.setup
net_inventory/shared/setup.py:15:0: C0301: Line too long (109/100) (line-too-long)
net_inventory/shared/setup.py:42:0: C0301: Line too long (104/100) (line-too-long)
net_inventory/shared/setup.py:1:0: C0114: Missing module docstring (missing-module-
docstring)
net_inventory/shared/setup.py:8:0: C0116: Missing function or method docstring
(missing-function-docstring)
net_inventory/shared/setup.py:14:0: C0116: Missing function or method docstring
(missing-function-docstring)
net_inventory/shared/setup.py:20:0: C0116: Missing function or method docstring
(missing-function-docstring)
************* Module net_inventory.shared.database
net_inventory/shared/database.py:1:0: C0114: Missing module docstring (missing-module-
docstring)
************* Module net_inventory.shared.config
net_inventory/shared/config.py:1:0: C0114: Missing module docstring (missing-module-
docstring)
net_inventory/shared/config.py:33:0: C0115: Missing class docstring (missing-class-
docstring)
************* Module net_inventory.backend.api
net_inventory/backend/api.py:1:0: C0114: Missing module docstring (missing-module-
docstring)
************* Module net_inventory.backend.models
net_inventory/backend/models.py:1:0: C0114: Missing module docstring (missing-module-
docstring)
net_inventory/backend/models.py:13:0: C0115: Missing class docstring (missing-class-
docstring)
net_inventory/backend/models.py:36:4: C0116: Missing function or method docstring
(missing-function-docstring)
net_inventory/backend/models.py:40:0: C0115: Missing class docstring (missing-class-
docstring)
************* Module net_inventory.backend
net_inventory/backend/__init__.py:1:0: C0114: Missing module docstring (missing-module-
docstring)

------------------------------------------------------------------
Your code has been rated at 8.40/10 (previous run: 8.40/10, +0.00)
#

Address the Linting Issues


There are three categories of issues in the linter output that you must address to ensure a higher quality of code:
• Docstring issues, which are represented by the C0114, C0115, and C0116 message IDs. In the
steps below, these will be ignored as a matter of policy by adjusting the .pylintrc file.
• A single W0622 message ID instance. You will create a single in-file exception by adding a comment to
the line.
• Issues with message ID C0301, which will be ignored for now and addressed later in the exercise.

Note The .pylintrc file is the configuration file that specifies the issues, errors, and warnings to
check for while linting the code. The file is stored in the root of the net_inventory directory.

Step 10 View the current .pylintrc configuration file using the cat .pylintrc command or opening it in Visual Studio
Code.

The current configuration loads the pylint_flask_sqlalchemy and pylint_flask plug-ins, which
teach the pylint command about the SQLAlchemy library. Without these plug-ins, the linter
cannot resolve the methods that the library provides dynamically.
The settings for max-attributes and max-args are changed from the default of 7. The SQLAlchemy
model has more than 7 columns, which the linter would otherwise flag, but this is a valid use case in
this scenario.
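The lab's .pylintrc is not reproduced here; based on the description above, the relevant portions likely resemble the following sketch (the exact limit values are assumptions):

```ini
[MASTER]
load-plugins=pylint_flask,pylint_flask_sqlalchemy

[DESIGN]
# Raised from the default of 7 because the SQLAlchemy model legitimately
# carries more columns and constructor arguments than that.
max-args=10
max-attributes=10
```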

Step 11 Open the .pylintrc configuration file in Visual Studio Code and update it to include the following two
lines:

[MESSAGES CONTROL]
disable=C0114, C0115, C0116

This addition will disable any comment-based linting errors. Press Ctrl-S to save the file.

Step 12 Now you will disable a check for when a variable or function overrides a built-in variable or function. Open
the net_inventory/__init__.py file in Visual Studio Code and add the comment # pylint:
disable=redefined-builtin at the end of line 2. Line 2 will look as follows:

__package__ = "net-inventory" # pylint: disable=redefined-builtin

Note The linter is a guideline. It is up to you, the developer, to determine which standards your code will or will not
comply with.

Rerun the Linter After Updating pylint Configuration
You will rerun the pylint command after you update the pylint configuration files.

Step 13 Within the container, execute the pylint net_inventory command.

With modifications of the pylint configuration files, you will notice that the only issues left are
of type C0301. These will be addressed in the next task.

Note Do not exit the container.

# pylint net_inventory
************* Module net_inventory.frontend.__init__
net_inventory/frontend/__init__.py:3:0: C0301: Line too long (118/100) (line-too-long)
************* Module net_inventory.shared.utils
net_inventory/shared/utils.py:20:0: C0301: Line too long (110/100) (line-too-long)
************* Module net_inventory.shared.setup
net_inventory/shared/setup.py:15:0: C0301: Line too long (109/100) (line-too-long)
net_inventory/shared/setup.py:42:0: C0301: Line too long (104/100) (line-too-long)

------------------------------------------------------------------
Your code has been rated at 9.75/10 (previous run: 8.40/10, +1.35)

Add Linting to the Test Stage of the Pipeline
The predeployed .gitlab-ci.yml file and the pipeline currently have a few stages. Now you will add a testing
stage to the pipeline by adding it to the .gitlab-ci.yml file. You will add the linting process to the test
stage.

Step 14 Open the .gitlab-ci.yml file in Visual Studio Code.

Step 15 Add the pylint net_inventory/ element as a new line within the stage: “test” script: dictionary key. With
this addition, the script will run the linting process and return a nonzero result if linting issues are identified.
A nonzero linting result stops the build process from completing. Press Ctrl-S to save the file.
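The predeployed pipeline file is not shown in this guide, so the following is only a sketch of how the resulting test stage might look, assuming common GitLab CI conventions; the stage name, image, and any existing script entries in the real lab file may differ:

```yaml
test:
  stage: "test"
  image: registry.git.lab/cisco-devops/containers/python37:latest
  script:
    - pylint net_inventory/  # a nonzero exit code fails the job and stops the pipeline
```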

Task 2: Implement Code Formatting


Code linting ensures that the code conforms to coding and styling rules. However, code linting does not
fix the issues it finds. Code formatting, and specifically Python Black, takes an opinionated view of
formatting. Black's tagline is “the uncompromising code formatter,” and it lives up to its name. Here,
you will use Python Black to update the code formatting to its standards.

Activity

Verify Changes That Python Black Would Create


You will verify the changes that Black would make, without actually making them.

Step 1 While still in the container, run the black --check ./ command to check which files would be reformatted.
With this command, Black runs in check mode on the ./ folder.

# black --check ./
--py36 is deprecated and will be removed in a future version. Use --target-version py36
instead.
would reformat /app/net_inventory/__init__.py
would reformat /app/net_inventory/backend/api.py
would reformat /app/net_inventory/frontend/__init__.py
would reformat /app/net_inventory/backend/models.py
would reformat /app/net_inventory/shared/config.py
would reformat /app/net_inventory/shared/setup.py
would reformat /app/run.py
would reformat /app/net_inventory/shared/utils.py
would reformat /app/tests/test_routes.py
Oh no!
9 files would be reformatted, 8 files would be left unchanged.

Step 2 Run the black --diff ./ command to view the changes that Python Black will make without actually making
the changes.

# black --diff ./
--py36 is deprecated and will be removed in a future version. Use --target-version py36
instead.
--- net_inventory/__init__.py 2019-11-14 03:55:53.977533 +0000
+++ net_inventory/__init__.py 2019-11-14 05:03:04.335982 +0000
@@ -1,3 +1,3 @@
__version__ = "0.5.1"
-__package__ = "net-inventory" # pylint: disable=redefined-builtin
+__package__ = "net-inventory" # pylint: disable=redefined-builtin

reformatted net_inventory/__init__.py
--- net_inventory/backend/api.py 2019-11-14 03:55:10.317576 +0000
+++ net_inventory/backend/api.py 2019-11-14 05:03:04.449725 +0000
@@ -69,11 +69,13 @@
Path Parameters:
hostname (str): hostname
Returns:
JSON object
"""
- device = DB.session.query(Device).filter(Device.hostname == hostname).first() #
pylint: disable=no-member
+ device = (
+ DB.session.query(Device).filter(Device.hostname == hostname).first()
+ ) # pylint: disable=no-member
if not device:
abort(404, {"message": "{} not found".format(hostname)})
for key, val in request.json.items():
setattr(device, key, val)
DB.session.commit()
@@ -90,11 +92,13 @@
Path Parameters:
hostname (str): hostname
Returns:
JSON object
"""
- device = DB.session.query(Device).filter(Device.hostname == hostname).first() #
pylint: disable=no-member
+ device = (
+ DB.session.query(Device).filter(Device.hostname == hostname).first()
+ ) # pylint: disable=no-member
if not device:
abort(404, {"message": "{} not found".format(hostname)})
DB.session.delete(device)
DB.session.commit()
return jsonify(), 200
reformatted net_inventory/backend/api.py
--- net_inventory/frontend/__init__.py 2019-11-11 19:57:19.746452 +0000
+++ net_inventory/frontend/__init__.py 2019-11-14 05:03:04.478936 +0000
@@ -1,4 +1,10 @@
from flask import Blueprint

-VIEW = Blueprint("view", __name__, url_prefix="/views/inventory",


template_folder="templates", static_folder="static")
+VIEW = Blueprint(
+ "view",
+ __name__,

+ url_prefix="/views/inventory",
+ template_folder="templates",
+ static_folder="static",
+)

reformatted net_inventory/frontend/__init__.py
--- net_inventory/backend/models.py 2019-11-14 03:55:16.953569 +0000
+++ net_inventory/backend/models.py 2019-11-14 05:03:04.498906 +0000
@@ -8,30 +8,30 @@

CONFIG = get_config()
KEY = CONFIG["SECRET_KEY"]

-class Device(DB.Model): # pylint: disable=too-few-public-methods


+class Device(DB.Model): # pylint: disable=too-few-public-methods

__tablename__ = "device"

hostname = Column(String, nullable=False, primary_key=True)


ip_address = Column(String, nullable=False, unique=True)
site = Column(String, nullable=False)
role = Column(String, nullable=False)
device_type = Column(String, nullable=False)
- os = Column(String, nullable=False) # pylint: disable=no-member
+ os = Column(String, nullable=False) # pylint: disable=no-member
username = Column(String, nullable=False)
password = Column(EncryptedType(String, KEY), nullable=False)

def __init__(self, hostname, ip_address, site, role, device_type, os, username,


password):
self.hostname = hostname
self.ip_address = ip_address
self.site = site
self.role = role
self.device_type = device_type
- self.os = os # pylint: disable=invalid-name
+ self.os = os # pylint: disable=invalid-name
self.username = username
self.password = password

def as_dict(self):
return {c.name: getattr(self, c.name) for c in self.__table__.columns}
reformatted net_inventory/backend/models.py
--- net_inventory/shared/config.py 2019-11-14 03:55:22.397564 +0000
+++ net_inventory/shared/config.py 2019-11-14 05:03:04.555616 +0000
@@ -14,25 +14,25 @@
config_file = os.path.join(project_path, CONFIG_NAME)
yaml_config = yaml.load(open(config_file, "r"), Loader=yaml.FullLoader)

config = yaml_config[ENV]
# validate_config(config)
- if not config.get('SECRET_KEY'):
- config['SECRET_KEY'] = os.environ.get('SECRET_KEY')
- if not config.get('SQLALCHEMY_DATABASE_URI'):
- config['SQLALCHEMY_DATABASE_URI'] = os.environ.get('SQLALCHEMY_DATABASE_URI')

- if not config.get('URL'):
- config['URL'] = os.environ.get('URL', 'http://127.0.0.1:5000')
+ if not config.get("SECRET_KEY"):
+ config["SECRET_KEY"] = os.environ.get("SECRET_KEY")
+ if not config.get("SQLALCHEMY_DATABASE_URI"):
+ config["SQLALCHEMY_DATABASE_URI"] = os.environ.get("SQLALCHEMY_DATABASE_URI")
+ if not config.get("URL"):
+ config["URL"] = os.environ.get("URL", "http://127.0.0.1:5000")

sql_path = config["SQLALCHEMY_DATABASE_URI"]
# Work around for slash parsing with sqlite
if sql_path and sql_path.startswith("sqlite:///") and not
sql_path.startswith("sqlite:////"):
config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///" + project_path + sql_path[9:]

return config

-class Config(): # pylint: disable=too-few-public-methods


+class Config: # pylint: disable=too-few-public-methods
def __init__(self):
config = get_config()
for key, val in config.items():
setattr(self, key, val)

reformatted net_inventory/shared/config.py
--- net_inventory/shared/setup.py 2019-11-14 03:55:27.649559 +0000
+++ net_inventory/shared/setup.py 2019-11-14 05:03:04.567925 +0000
@@ -10,11 +10,13 @@
module = import_module(item)
app.register_blueprint(module.blueprint)

def configure_logs(app):
- basicConfig(filename=app.config["LOGGING_LOCATION"], level=getattr(logging,
app.config["LOGGING_LEVEL"]))
+ basicConfig(
+ filename=app.config["LOGGING_LOCATION"], level=getattr(logging,
app.config["LOGGING_LEVEL"])
+ )
logger = getLogger()
logger.addHandler(StreamHandler())

def setup_swagger(app):
@@ -37,11 +39,14 @@
template = {
"swagger": "2.0",
"info": {
"title": "NET INVENTORY",
"description": "API FOR NET INVENTORY",
- "contact": {"responsibleOrganization": "Nexi", "responsibleDeveloper":
"DevOps Maintainer"},
+ "contact": {
+ "responsibleOrganization": "Nexi",
+ "responsibleDeveloper": "DevOps Maintainer",

+ },
"version": "1.0",
},
"schemes": ["http", "https"],
"operationId": "getmyData",
}
reformatted net_inventory/shared/setup.py
--- run.py 2019-11-14 03:55:36.773550 +0000
+++ run.py 2019-11-14 05:03:04.591532 +0000
@@ -3,13 +3,16 @@
from flask import redirect

from app import create_app, DB

app = create_app()
-@app.route('/')
+
+
+@app.route("/")
def base():
return redirect("./views/inventory/devices", code=302)
+

Migrate(app, DB)
if __name__ == "__main__":
app.run(host=app.config["HOST"], port=app.config["PORT"])

reformatted run.py
--- net_inventory/shared/utils.py 2019-11-11 19:57:19.746452 +0000
+++ net_inventory/shared/utils.py 2019-11-14 05:03:04.618427 +0000
@@ -15,11 +15,13 @@
api_base_url = current_app.config["URL"] + "/api/v1"

headers = {"Accept": "application/json", "Content-Type": "application/json"}

try:
- req = getattr(requests, method.lower())(api_base_url + url, params=params,
json=json, headers=headers)
+ req = getattr(requests, method.lower())(
+ api_base_url + url, params=params, json=json, headers=headers
+ )
except requests.exceptions.ConnectionError:
return {"message": "ConnectionError"}

if req.status_code == 401:
session.clear()
reformatted net_inventory/shared/utils.py
--- tests/test_routes.py 2019-11-11 17:10:26.539967 +0000
+++ tests/test_routes.py 2019-11-14 05:03:04.726671 +0000
@@ -31,11 +31,14 @@

class TestNetInventoryAPI(unittest.TestCase):
def clean_db(self):
for device in DEVICES:
- self.app.delete("/api/v1/inventory/devices/{}".format(device["hostname"]),

content_type="application/json")
+ self.app.delete(
+ "/api/v1/inventory/devices/{}".format(device["hostname"]),
+ content_type="application/json",
+ )

def test_create_device(self):
rv = self.app.post(
"/api/v1/inventory/devices",
data=json.dumps(
@@ -61,11 +64,13 @@
self.assertEqual(rv.status_code, 200)
self.assertEqual(len(json.loads(rv.get_data())["data"]), 2)
self.clean_db()

def test_get_device(self):
- rv = self.app.get("/api/v1/inventory/devices/{}".format("nyc-rt01"),
content_type="application/json")
+ rv = self.app.get(
+ "/api/v1/inventory/devices/{}".format("nyc-rt01"),
content_type="application/json"
+ )
self.assertEqual(rv.status_code, 200)
self.assertEqual(json.loads(rv.get_data())["data"]["ip_address"],
"10.201.15.11")
self.clean_db()

def setUp(self):
@@ -75,11 +80,15 @@
with app.app_context():
db.reflect()
db.drop_all()
db.create_all()
for device in DEVICES:
- self.app.post("/api/v1/inventory/devices", data=json.dumps(device),
content_type="application/json")
+ self.app.post(
+ "/api/v1/inventory/devices",
+ data=json.dumps(device),
+ content_type="application/json",
+ )

def tearDown(self):
pass

reformatted tests/test_routes.py
All done!
9 files reformatted, 8 files left unchanged.
#

Review Python Black Configuration File
Python Black uses the pyproject.toml configuration file to define the configuration parameters that you
choose to implement (similar to the .pylintrc configuration file used by the pylint command). This file is not a
strict requirement; however, it gives you the opportunity to apply your own styling choices. As an
example, this project is set to a line length of 100 rather than the default line length of 88 characters.

Step 3 Open the pyproject.toml configuration file in Visual Studio Code and review the parameters. As you can see,
the line length is set to 100.

Run the Python Black Command to Implement the Changes


When you run the black command, it will actually implement and enforce the changes.

Step 4 Run the git status command to verify which files have been updated before making changes.

# git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)

modified: .pylintrc
modified: net_inventory/__init__.py
modified: .gitlab-ci.yml

no changes added to commit (use "git add" and/or "git commit -a")

Step 5 Run the black ./ command to make the changes.

# black ./
--py36 is deprecated and will be removed in a future version. Use --target-version py36
instead.
reformatted /app/net_inventory/__init__.py
reformatted /app/net_inventory/backend/models.py
reformatted /app/net_inventory/frontend/__init__.py
reformatted /app/net_inventory/backend/api.py
reformatted /app/net_inventory/shared/config.py
reformatted /app/net_inventory/shared/setup.py
reformatted /app/net_inventory/shared/utils.py
reformatted /app/run.py
reformatted /app/tests/test_routes.py
All done!
9 files reformatted, 8 files left unchanged.

Step 6 Run the git status command again to view the files that have changed. Take note of all the files that have
now been modified. Python Black has automatically made these changes on your behalf.

# git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)

modified: .pylintrc
modified: .gitlab-ci.yml
modified: net_inventory/__init__.py
modified: net_inventory/backend/api.py
modified: net_inventory/backend/models.py
modified: net_inventory/frontend/__init__.py
modified: net_inventory/shared/config.py
modified: net_inventory/shared/setup.py
modified: net_inventory/shared/utils.py
modified: run.py
modified: tests/test_routes.py

no changes added to commit (use "git add" and/or "git commit -a")

Step 7 Rerun the pylint net_inventory command to verify that the issues identified earlier do not show up
anymore.

# pylint net_inventory/

-------------------------------------------------------------------
Your code has been rated at 10.00/10 (previous run: 9.39/10, +0.61)

Add Code Formatting to the Pipeline Test Stage


Add the process of code formatting to the test stage.

Step 8 Open the .gitlab-ci.yml file in Visual Studio Code.

Step 9 Add the black ./ element on a new line within the stage: “test” script: dictionary key. With this addition,
the pipeline will run the black command during the test stage, and the job will fail if the command returns a
nonzero result. Press Ctrl-S to save the file.
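At this point in the lab, the test stage of the .gitlab-ci.yml file might look similar to the following sketch. The job name and exact layout are assumptions; the prestaged lab file is authoritative.

```yaml
# Hypothetical sketch of the test stage after Step 9
test:
  stage: test
  script:
    - pylint net_inventory/
    - black ./
```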

Task 3: Implement Unit Testing


Unit tests are intended to check one individual component (a file, function, or class, for example) at a time. Unit
testing helps ensure that a single piece of code runs by itself, as expected, without any other
dependencies.

The pytest framework is a common method of implementing unit tests in a Python application.
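As a minimal, generic illustration of the pattern (this sketch is not the lab's tests/test_routes.py, and the function under test is hypothetical), pytest collects any function whose name starts with test_ and runs its assertions:

```python
# Hypothetical function under test: normalize a device hostname.
def normalize_hostname(name):
    """Return the hostname stripped of whitespace and lowercased."""
    return name.strip().lower()


# pytest collects any function named test_* and reports assertion failures.
def test_normalize_hostname():
    assert normalize_hostname("  NYC-RT01 ") == "nyc-rt01"
```

Saved in a file such as tests/test_example.py, this test would be collected and run automatically by the python -m pytest command.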

Activity

Run pytest Command and Fix the Issues


Tests will often run a function and then expect a specific output. Tests help ensure that inputs and outputs
do not change unexpectedly, because a change in a function's behavior can affect other systems. In
this example, there is an issue with one of the tests, and you will update it.

Step 1 Set the ENV variable value to DEVELOPMENT using the export ENV=DEVELOPMENT command in
your docker instance terminal. This will set the application to use that environment when running through the
tests.

# export ENV=DEVELOPMENT

Step 2 Run the python -m pytest command. You will review this command in more detail later. For now, note the
issue that it reports.

# python -m pytest
============================== test session starts ===============================
platform linux -- Python 3.7.5, pytest-5.2.2, py-1.8.0, pluggy-0.13.0
rootdir: /app, inifile: pytest.ini
collected 3 items

tests/test_routes.py .F.
[100%]

=================================== FAILURES =====================================
______________________ TestNetInventoryAPI.test_get_device _______________________

self = <test_routes.TestNetInventoryAPI testMethod=test_get_device>

def test_get_device(self):
rv = self.app.get(
"/api/v1/inventory/devices/{}".format("nyc-rt01"),
content_type="application/json"
)
self.assertEqual(rv.status_code, 200)
> self.assertEqual(json.loads(rv.get_data())["data"]["ip_address"],
"10.201.15.12")
E AssertionError: '10.201.15.11' != '10.201.15.12'
E - 10.201.15.11
E ? ^
E + 10.201.15.12
E ? ^

tests/test_routes.py:73: AssertionError
========================= 1 failed, 2 passed in 1.46s ============================
#

Step 3 Notice that the pytest command reported that it expected the result 10.201.15.12 but got
10.201.15.11. You can conclude that there is a mistake in the test script. Open the tests/test_routes.py script in
Visual Studio Code. On line 73, change the IP address to 10.201.15.11. Press Ctrl-S to save the file.

Step 4 Rerun the tests using the python -m pytest command. You will notice that the tests now completed
successfully.

Note The tests are built by creating Python classes in a specific manner. In the earlier output, the test
<test_routes.TestNetInventoryAPI testMethod=test_get_device> failed. That name refers to the specific
method within the class that was not working as expected.

# python -m pytest
============================== test session starts ===============================
platform linux -- Python 3.7.5, pytest-5.2.2, py-1.8.0, pluggy-0.13.0
rootdir: /app, inifile: pytest.ini
collected 3 items

tests/test_routes.py ...
[100%]

=============================== 3 passed in 1.48s ================================
#

Review the pytest Command
Previously, you addressed the open testing issue. Now that it is resolved, it is easier to explain the pytest
command and its various options.

Step 5 If you add the tests/ folder as the target folder, using the python -m pytest tests/ command, you will notice
that the output is the same as in the previous step. This is because the pytest command uses the tests/ folder
as its default folder.

# python -m pytest tests/


============================== test session starts ===============================
platform linux -- Python 3.7.5, pytest-5.2.2, py-1.8.0, pluggy-0.13.0
rootdir: /app, inifile: pytest.ini
collected 3 items

tests/test_routes.py ...
[100%]

=============================== 3 passed in 1.06s ================================

Step 6 If you add the -v verbose flag, you will see more details about the pytest command execution. Use the
python -m pytest -v tests/ command. You can see details about each test and the associated class and
method that is used.

# python -m pytest -v tests/


============================== test session starts ===============================
platform linux -- Python 3.7.5, pytest-5.2.2, py-1.8.0, pluggy-0.13.0 --
/usr/local/bin/python
cachedir: .pytest_cache
rootdir: /app, inifile: pytest.ini
collected 3 items

tests/test_routes.py::TestNetInventoryAPI::test_create_device PASSED
[ 33%]
tests/test_routes.py::TestNetInventoryAPI::test_get_device PASSED
[ 66%]
tests/test_routes.py::TestNetInventoryAPI::test_get_devices PASSED
[100%]

=============================== 3 passed in 1.11s ================================

Step 7 View the pytest.ini configuration file using the cat pytest.ini command. This file provides configuration
parameters that are applied automatically whenever you run the pytest command.

# cat pytest.ini
[pytest]
filterwarnings =
ignore::DeprecationWarning
ignore::_pytest.warning_types.PytestCollectionWarning

Adding Unit Testing to the Test Stage


Similar to the previous task, add the process of unit testing to the test stage.

Step 8 Open the .gitlab-ci.yml file in Visual Studio Code.

Step 9 Add the python -m pytest -v tests/ element on a new line within the stage: “test” script: dictionary key.
With this addition, the pipeline will run the pytest command and kick off the unit tests. Press Ctrl-S to
save the file.
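With all three components in place, the completed test stage might look similar to the following sketch. The job name and the ordering of the script lines are assumptions; the lab file is authoritative.

```yaml
# Hypothetical sketch of the full test stage: lint, format, unit test
test:
  stage: test
  script:
    - pylint net_inventory/
    - black ./
    - python -m pytest -v tests/
```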

Summary
In this lab, you added three different components to the test stage: linting, code formatting, and unit tests.
Notably, each of these components has its own configuration file that can reside as code in your
application code repository.
Developing a pipeline and ensuring that tests pass each time helps ensure that best practices are being
followed. Over time, you will build on your “coverage,” the measure of which lines of code are exercised
by your tests. All these improvements will help you ensure a more consistent process and quality of results.

Summary Challenge
1. In which phase of the NetDevOps flow should unit tests be executed?
a. plan
b. verify
c. monitor
d. configure
2. Which code linter looks at style and syntax?
a. Pylint
b. Pyflakes
c. pycodestyle
d. pydocstyle
3. Which code linter will only check the docstrings of a Python application?
a. Pylint
b. Pyflakes
c. pycodestyle
d. pydocstyle
4. Which code formatter will format strings and the Python file to PEP8 standards?
a. Black
b. yapf
c. autopep8
d. pydocstyle
5. Which option is a security analysis tool for Python?
a. Chef
b. Bandit
c. Ansible
d. pycodestyle
6. Which tool is used to complete unit tests of all files inside a directory instead of a single file at a
time?
a. Bandit
b. yapf
c. Pytest
d. unittest
7. Which Python linter will check the syntax of a Python file and not look at the style?
a. Pylint
b. Pyflakes
c. pycodestyle
d. pydocstyle

Answer Key
Automated Testing in the CI Flow
1. A

Summary Challenge
1. B
2. A
3. D
4. A
5. B
6. C
7. B

Section 8: Building an Improved Deployment Flow

Introduction
When working to deploy an application, you should not deploy and forget about the application. Although
the application build process was successful using the CI/CD system, you cannot simply deploy the
application to the production web server and move on to something else. You should do additional
verification of the application. Does the application respond from the public Internet when it should? Or
maybe the application was deployed for internal use only, in which case, you should verify that the public
Internet cannot reach the application.
Once you have an application ready for deployment, you will need to define a strategy for deploying the
application and its updates. Do you want to have two coexisting production environments so that you can
flip between the application versions? Should you look at a slow rollout that would offer real-world
experience? There are several generally accepted options for deploying applications. There are benefits and
risks that are associated with each deployment methodology and selecting the internal best practice may be
an app-by-app process, or there might be one strategy for the entire organization.
This section will introduce a few of the testing methodologies that are available for deployed applications
and follow up with a discussion of some common deployment methodologies.

Postdeployment Validation
Once you have deployed an application, it is important to know whether the application is up, properly
configured, and working as designed and expected. To accomplish postdeployment validation, there are
several options, from open source to commercial tooling, ranging from native Linux tools all the way to
complex application testing and validation systems.
• Infrastructure Testing
– Connectivity
• Systems Testing
– Docker
– Linux
• Application Testing
– Smoke Tests

– Simulations
– Transaction Tests

There are several methodologies that are part of postdeployment validation of applications. Postdeployment
validation typically falls into three categories:
• Infrastructure testing
• Systems testing
• Application testing

Infrastructure Testing
Infrastructure testing validates that the underlying infrastructure is in place and working. You will initiate
tests from various locations to verify that the application is reachable. You should test inside your firewall
and outside the firewall to mimic external users and systems that may be consuming your application. This
process will test that the firewalls in the path are allowing the necessary ports and completing NAT (in an
IPv4 environment) to the proper location. Is the application server accessible via ICMP but not via a
TCP/UDP port?
• Connectivity testing
• Is the application port listening?

Moreover, for complete infrastructure testing, you should consider testing overall reachability and validating
the configuration and operational state of network devices and appliances including, but not limited to,
routers, switches, firewalls, VPNs, proxies, and load balancers.

Systems Testing
Systems testing involves checking the application at the operating system level to verify that the port on
which the application listens is up and listening. This testing also verifies that the application is accepting
connections only from the appropriate sources: for example, from any Internet address for a public
service, or, for a back-end server, only from the front-end servers. For example, is the
application (or load balancer) listening on ports 80 and 443 if it is a web app? If the application is up, but the
ports are not listening or a connection is not being made, you can conclude that something could be broken
within the infrastructure.
• Port check
• Local application test
• Resource checks
• Transaction rate verification

One basic option is executing a curl command against the application to verify that it is serving the proper
content.
The last systems check involves verifying that the resource allocations for the application are within
appropriate limits. Are there any unexpected increases in system resource usage, such as a memory leak
or a CPU spike over a period of time? In summary, almost anything can be monitored and verified with
something as basic as a Linux command, creating primitive but useful tests for an application.
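Such primitive checks can also be scripted with nothing but the Python standard library. The following is a generic sketch; the hostnames and ports in the usage comments are placeholders, not values defined by this lab.

```python
import socket
from urllib.error import URLError
from urllib.request import urlopen


def port_is_open(host, port, timeout=2.0):
    """TCP-level check: return True if a connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def http_ok(url, timeout=2.0):
    """Application-level check: return True if the URL answers with HTTP 200."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False


# Placeholder usage; substitute your own server and ports:
# port_is_open("k8s1", 5000)
# http_ok("http://k8s1:5000/views/inventory/devices")
```

A port that accepts connections but never returns HTTP 200 fails the second check, which is exactly the distinction between infrastructure and application health described above.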

Verify Docker Container States


Docker containers can be in several different states. By using and querying the Docker API, you can
determine the status of a given container. To see the state on a Docker host, you execute the docker ps or
docker container ps command.
• Created
• Restarting
• Running
• Paused
• Exited
• Dead

The Docker container states include the following:
• Created: This state indicates that the container is created but not started. This situation can happen
when you issue the command docker create to create the container, but it has not been started. A
container can also enter the created state if it has not been able to start successfully.
• Restarting: This state indicates that the container is in the process of being restarted.
• Running: This state indicates the normal operational state of a container in a good healthy state.
• Paused: This state indicates that the processes have been suspended with the command docker pause.
• Exited: This state indicates that the main process running in the container has exited, usually
gracefully.
• Dead: This state indicates that the container has issues with an underlying resource and has been exited.

Verify Status Using Docker Health Checks


Docker provides a mechanism for a health check of the container itself. If a container is initiated in
standalone mode, then the health check mechanism helps provide a status of the application running in the
container. The standalone execution of a container with a failed health check will not have any impact on
the container itself. You can view the health status codes by issuing the docker inspect [container]
command.
• A command for verifying the operational status of the container.
• If options are not specifically defined with the Dockerfile HEALTHCHECK command, the defaults are used.
• The Docker Compose key is healthcheck.

Dockerfile Syntax
HEALTHCHECK CMD curl -f http://localhost:5000/ || exit 1

Default options
--interval=DURATION (default: 30s)
--timeout=DURATION (default: 30s)
--start-period=DURATION (default: 0s)
--retries=N (default: 3)

Docker-Compose Syntax
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:5000"]
  interval: 2m30s
  timeout: 4s
  retries: 3
  start_period: 30s

When a Docker container is in a Docker Swarm environment and the Swarm manager notices that the
container is in an unhealthy state, Docker Swarm will remove that unhealthy container and build a new
instance of the container. This process is called automatic healing.

Note These health checks for automatic healing are only applicable for Docker Swarm, and do not influence
Kubernetes health checks.

To include a health check for the application within a Dockerfile, you add a single line to the Dockerfile
with the command HEALTHCHECK. You have an argument of CMD as part of the command so that the
Docker Engine knows how to execute the health check.
The health check options are defined in the following table. You can modify these options by setting the
flag immediately following the HEALTHCHECK keyword, such as HEALTHCHECK --interval=60s
CMD curl -f http://localhost:5000 with the flag in all lowercase text.

Health Check Option    Description                                          Default

Interval               The time between executions of the health check.     30 seconds

Timeout                The amount of time to wait before considering the    30 seconds
                       check as failed.

Start-Period           The time to wait before starting the health check    0 seconds (immediate startup)
                       test on container startup.

Retries                The number of retries that have to fail before the   3
                       container is considered unhealthy.

Application Testing: Smoke Tests


Smoke tests validate that the application is responding as expected. These tests are the first step in verifying
that functions and methods continue to work once the application is in a production or a
production-like environment. These tests are often included in the build test to ensure that the application
will have the proper responses in production.
• Quick tests to determine the application state.
• Ensure that the build process occurred.
• Check the state of the operating system services.
• Ideally completed programmatically, but can be a manual process.

During this phase, a review of the operating system services and the logs that are associated with the
application is appropriate. Are there unexpected errors in the logs? Are there exceptions along the way that
were not caught properly? Do all the services stay up and functional? These questions are just a few that
you must ask when smoke testing.

Application Testing: Functional Regression Testing


Functional regression testing involves sets of tests to validate that features behave consistently through the
evolution of the application. These tests are generally longer than smoke testing and involve more intense
functionality testing of the application.
• Generally longer than smoke tests.
• Identify functionality of the application.
• Were there any bugs reintroduced that have been seen previously?

Were there any process changes? Is there a new workflow in the new version of the code? A person is often
involved in the initial setup of these tests.

Application Testing: End-to-End Testing


End-to-end testing focuses on testing the application as a whole. This testing is meant to ensure that if there
are any third-party integrations, the system is still functional. This testing models the end-user experience
and utilizes the entire stack of the application.
• Test transactions through the entire stack.
• Verify back-end functionality.
• Test from external sources that simulate end users.
• Selenium WebDriver
• AppDynamics

It is recommended that these tests are done from a source that emulates the end-user experience. If you are
concerned with a web application that is meant to be accessible from the public Internet, then the testing
should involve the public Internet instead of being done internally. This testing will verify that the
authentication, authorization, logging, and access to the entire stack is working as expected.

Selenium WebDriver is recognized by the W3C as a methodology for testing web applications. WebDriver
emulates what a web browser would do. There are drivers for all major browsers including Google Chrome,
Mozilla Firefox, Opera, Safari, Edge, and Internet Explorer. Selenium WebDriver is available in many
popular languages including Python, Ruby, JavaScript, C# (C Sharp), Kotlin, and Java. The syntax is
slightly different in each language, but the functionality is available through these languages.
When you use Selenium WebDriver, you are doing exactly what an individual would do in using a website
—clicking links, using a search functionality, inputting and submitting data, and so on. There are methods
available in Selenium WebDriver to accomplish all these functions, which makes it a good tool to complete
the end-to-end testing in an automated fashion.
AppDynamics, which is part of Cisco, is an Application Performance Management (APM) tool that will
help you determine if there are issues within the application stack. When integrated into the application, you
can get performance metrics about how each layer of the application is responding.
1. Which Docker container state occurs when an underlying component has a failure, such as a
storage failure?
a. restarted
b. exited
c. paused
d. dead

Discovery 10: Validate the Deployment and Fix
the Infrastructure
Introduction
Here, you will create health checks for the application to ensure it is deployed and functioning as expected.
The first health check will use native Docker Compose capabilities. The next will use more generic
capabilities such as ping and curl commands. Finally, you will integrate these testing and validation steps
into the GitLab CI pipeline.

Topology

Job Aid

Device Information

Device Description FQDN/IP Address Credentials

Student Workstation Linux Ubuntu VM 192.168.10.10 student, 1234QWer

GitLab Git Repository git.lab student, 1234QWer

GitLab Container Registry Container Registry registry.git.lab student, 1234QWer

k8s1 Kubernetes k8s1 student, 1234QWer

k8s3 Kubernetes k8s3 student, 1234QWer

asa1 Cisco ASAv asa1 cisco, 1234QWer (enable: cisco)

Command List
The table describes the commands that are used in this activity. The commands are listed in alphabetical
order so that you can easily locate the information that you need. Refer to this list if you need configuration
command assistance during the lab activity.

Command Description

cd directory name To change directories within the Linux file system, use the cd
command. You will use this command to enter into a directory where
the lab scripts are housed. You can use tab completion to finish the
name of the directory after you start typing it.

curl -m 2 -f -s -o file -w "%{header_name}" url Curl is a Linux test command, often used for issuing HTTP
requests. The -m flag sets a maximum wait time, the -f flag causes a failure if a 2XX
HTTP code is not received, the -o flag sends output to a file, the -s flag runs in silent
mode, and the -w flag prints the requested value (such as an HTTP response code).

git add -A filename The command to add a file to the git index, or, with the -A flag, to add all changed files.

git checkout -b branch_name The git command to check out a branch, and optionally create the
branch using the -b flag.

git clone repository Downloads or clones a git repository into the directory that is the name
of the project in the repository definition.

git commit -m message The git command to commit the changes locally.

git push repo branch_name The git command to push the branch to the remote git service. The
repo is normally in the form of a named instance, usually a named
remote such as origin.

ping -w wait_time ip_address Ping and wait for -w seconds before declaring it a failure.

ssh -tt user@server 'command' SSH to a remote device and run a command; the -tt flag forces
pseudo-terminal allocation. The command is often chained with the && Linux construct.

Task 1: Verify Docker Host Health


Using functionality that is natively built into Docker Compose along with standard Linux commands, you
can verify the basic health of your application to ensure it is running properly. Setting up health checks that
are automatically executed post deployment will help ensure that your application is working as expected. In
the following tasks, you will build basic tests in the form of health checks to ensure that the three-tier
application is running as expected.

Activity

Change the directory and obtain the code for the network inventory application.

Step 1 In the student workstation, find and open Visual Studio Code by clicking the Visual Studio Code icon.

Step 2 From the Visual Studio Code top navigation bar, choose Terminal > New Terminal [ctrl-shift-`].

Step 3 Navigate to the terminal section at the bottom of Visual Studio Code.

Step 4 Within the Visual Studio Code terminal, change the directory to ~/labs/lab10 using the cd ~/labs/lab10
command.

student@student-vm:$ cd ~/labs/lab10/

Step 5 Issue the git clone https://git.lab/cisco-devops/net_inventory command to clone the net_inventory
repository.

student@student-vm:labs/lab10$ git clone https://git.lab/cisco-devops/net_inventory


Cloning into 'net_inventory'...
warning: redirecting to https://git.lab/cisco-devops/net_inventory.git/
remote: Enumerating objects: 416, done.
remote: Counting objects: 100% (416/416), done.
remote: Compressing objects: 100% (114/114), done.
remote: Total 416 (delta 290), reused 416 (delta 290)
Receiving objects: 100% (416/416), 3.10 MiB | 14.16 MiB/s, done.
Resolving deltas: 100% (290/290), done.

Step 6 Change directory to the net_inventory directory by issuing cd net_inventory command.

student@student-vm:labs/lab10$ cd net_inventory/
student@student-vm:lab10/net_inventory (master)$

Add Health Checks to the Docker Compose File


The docker-compose.yml file has two health checks already prestaged, but they are commented out. In the
Docker Compose file, there is a dictionary key added for healthcheck:, with key:value pairs for test (which
is the command Docker Compose will run after starting), interval, timeout, and retries.

The health check will start after the container is up and will fail if health checks fail.

Step 7 Open the docker-compose.yml file in Visual Studio Code.

Step 8 Remove the hash (#) characters for the two healthcheck: keys. Review the syntax of the key:value pairs.
Press Ctrl-S to save the file.

Note You must remove only the hash characters and no other character. YAML is very sensitive about spaces,
for example.
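Once uncommented, the healthcheck: keys in docker-compose.yml follow the syntax covered earlier in this section. The following is a hypothetical sketch only; the service name, port, and URL path are assumptions, and the prestaged values in the lab file are authoritative.

```yaml
services:
  backend:                      # assumed service name
    # ...
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5001/api/v1/inventory/devices"]
      interval: 30s
      timeout: 4s
      retries: 3
```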

Update GitLab CI Configuration File Adding Post Deployment Tests
Now you will use the ping and curl commands to verify network connectivity and application health. The
ping command will make use of the -w flag, which indicates how many seconds to wait for a response. The
curl command will use the -f flag to cause a failure if a 4XX code is returned.

You will run these commands from the outside environment, for example from the k8s3 server, to emulate
inbound traffic destined to the k8s1 application server. The GitLab CI server will establish an SSH session to
the k8s3 server to run the actual commands to perform the tests against the k8s1 server. The GitLab server
has been set up to auto log in via SSH keys from the gitlab-runner user using passwordless authentication.

Step 9 Open the .gitlab-ci.yml file in Visual Studio Code.

Step 10 Within the stage: “deploy” script: dictionary key, you will find three commented lines for ping and curl
commands towards the application server. Remove the hash (#) characters from these lines. Press Ctrl-S to
save the file.

Note You must remove only the hash characters and no other character. YAML is very sensitive about spaces,
for example.
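After the hash characters are removed, the deploy stage might contain lines similar to the following sketch. The SSH user and the exact commands are assumptions; the prestaged lines in the lab file are authoritative.

```yaml
deploy:
  stage: deploy
  script:
    # Run the checks from k8s3 to emulate traffic arriving from outside
    - ssh -tt student@k8s3 'ping -w 10 k8s1'
    - ssh -tt student@k8s3 'curl -m 2 -f -s -o /dev/null -w "%{http_code}" http://k8s1:5000/views/inventory/devices'
```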

Task 2: Verify Connectivity
Previously, you built tests to ensure that the application is working as expected. In this task, you will
manually verify that the connectivity, that is required for the tests to execute, is operational and functional.

Activity

Ping Remote Host


From the k8s3 server, ensure that you can ping the remote host, the k8s1 server.

Step 1 Establish an SSH session to the k8s3 server using the ssh student@k8s3 command.

student@student-vm:lab10/net_inventory (master)$ ssh k8s3


Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-62-generic x86_64)

Last login: Mon Nov 18 00:24:07 2019 from 192.168.10.20

Step 2 From the k8s3 server, issue the ping -w 10 k8s1 command to verify the connectivity. You will notice that the
server did in fact respond within the allocated 10 seconds. If this test is successful, you have a fair level of
confidence that the test should pass within the GitLab CI platform.

student@k8s3:~$ ping -w 10 k8s1
PING k8s1 (10.10.1.10) 56(84) bytes of data.
64 bytes from k8s1 (10.10.1.10): icmp_seq=1 ttl=62 time=1.31 ms
64 bytes from k8s1 (10.10.1.10): icmp_seq=2 ttl=62 time=1.26 ms
64 bytes from k8s1 (10.10.1.10): icmp_seq=3 ttl=62 time=1.32 ms
64 bytes from k8s1 (10.10.1.10): icmp_seq=4 ttl=62 time=1.30 ms
64 bytes from k8s1 (10.10.1.10): icmp_seq=5 ttl=62 time=1.26 ms
64 bytes from k8s1 (10.10.1.10): icmp_seq=6 ttl=62 time=1.25 ms
64 bytes from k8s1 (10.10.1.10): icmp_seq=7 ttl=62 time=1.34 ms
64 bytes from k8s1 (10.10.1.10): icmp_seq=8 ttl=62 time=1.22 ms
64 bytes from k8s1 (10.10.1.10): icmp_seq=9 ttl=62 time=1.44 ms
64 bytes from k8s1 (10.10.1.10): icmp_seq=10 ttl=62 time=1.33 ms

--- k8s1 ping statistics ---


10 packets transmitted, 10 received, 0% packet loss, time 9011ms
rtt min/avg/max/mdev = 1.229/1.308/1.447/0.059 ms
student@k8s3:~$

Verify HTTP Connectivity With the curl Command


Though you can ping the application server, it does not mean that the application is operational. You will
now verify HTTP connectivity to the k8s1 server.

Step 3 From the k8s3 server, execute the curl -m 2 -f -s -o /dev/null -w "%{http_code}"
http://k8s1:5000/views/inventory/devices command. Notice the application server port 5000.

You will use the following flags:


• -f: This flag will return a nonzero code and fail if the HTTP code is in the 4XX range.
• -s: This flag makes the command run in silent mode.
• -o: This flag redirects any stdout output to /dev/null.
• -w: This flag indicates which value the command should print; here, the HTTP response code.
The command has succeeded.
student@k8s3:~$ curl -m 2 -f -s -o /dev/null -w "%{http_code}"
http://k8s1:5000/views/inventory/devices
200
student@k8s3:~$
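The same health check can be expressed outside curl. The following is a minimal Python sketch (an illustration using only the standard library, not part of the lab files): it returns the HTTP status code, or 0 to mirror curl's 000 output when no HTTP response arrives at all.

```python
import urllib.error
import urllib.request

def http_health(url: str, timeout: float = 2.0) -> int:
    """Return the HTTP status code for url, or 0 when no HTTP response
    is received (the equivalent of curl's 000 output)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        # The server answered with an error status (4xx/5xx); report it.
        return err.code
    except OSError:
        # Connection refused, DNS failure, or timeout: no HTTP response.
        return 0

# In this lab's terms: http_health("http://k8s1:5000/views/inventory/devices")
# should return 200, the same value that curl -w "%{http_code}" prints.
```

Like curl -f, a CI job can treat any return value other than 200 as a failure.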

Step 4 Now, execute the curl -m 2 -f -s -o /dev/null -w "%{http_code}"
http://k8s1:5001/api/v1/inventory/devices command. Notice that the application server is listening on port 5001.

student@k8s3:~$ curl -m 2 -f -s -o /dev/null -w "%{http_code}"
http://k8s1:5001/api/v1/inventory/devices
000
student@k8s3:~$

This time, the command has failed; the 000 output means that no HTTP response was received at all. There are
two possible reasons: the application may not be operational, or the network may be blocking access. In the
next steps, you will identify the issue.

Verify Connectivity from Your Workstation


You will verify that you can connect to the back-end API from your student workstation. This will rule out
the application itself as the source of the problem.

Step 5 From the Chrome browser, navigate to http://k8s1:5001/api/v1/inventory/devices. You can see that the
application is operational.

Verify the Application

Step 6 Run the populate_inventory command and enter k8s1:5001 for the server and port information. The script
will populate the network inventory database.

Note The containers are now deployed on k8s1 and not on the local student workstation. Port 5001 is serving
the back-end API Docker container. Port 5000 is serving the front-end container.

Note The populate_inventory script uses the API to add new devices.

student@student-vm:$ populate_inventory
Enter the server and port info : k8s1:5001
nyc-rt01: Added successfully
nyc-rt02: Added successfully
rtp-rt01: Added successfully
rtp-rt02: Added successfully

Step 7 Using the Chrome browser, connect to the k8s1 server on TCP port 5001. Navigate to
http://k8s1:5001/api/v1/inventory/devices to view the network inventory.

Task 3: Fix Infrastructure Rules
You identified issues with the HTTP server on your application server, and you confirmed that the application
server itself is operational. The other possible reason for not being able to communicate with the application
server from the k8s3 server is that the network is blocking the access. Remember that the tests added here
will be used to provide greater confidence in merging merge requests when changes are made to the
application. In this task, you will review the configuration of the firewall and update it as required.

Activity

Review Cisco ASA Configuration


Log in to the Cisco ASA and observe the access list called OUTSIDE.

Step 1 Establish an SSH session to the asa1 Cisco ASA using the ssh student@asa1 command. Use the password
that is provided in the Job Aids.

Step 2 On asa1, enter enable mode using the enable command and use the password that is provided in Job Aids.

Step 3 Review the OUTSIDE access list using the show run | in OUTSIDE command. You will notice that
although ICMP and TCP port 5000 are allowed from any host, TCP port 5001 is not listed and is therefore
implicitly denied.

asa1# show run | in OUTSIDE
access-list OUTSIDE extended permit icmp any host 10.10.1.10
access-list OUTSIDE extended permit tcp any host 10.10.1.10 eq 5000
access-group OUTSIDE in interface outside
asa1#

Update the ACL
You will update the access list to allow traffic to reach the API server on TCP port 5001.

Step 4 On asa1, enter configuration mode using the configure terminal command.

asa1# configure terminal

Step 5 Update the access list to allow traffic to TCP port 5001. Use the access-list OUTSIDE extended permit
tcp any host 10.10.1.10 eq 5001 command.

asa1(config)# access-list OUTSIDE extended permit tcp any host 10.10.1.10 eq 5001

Step 6 Exit configuration mode. Use the exit command.

asa1(config)# exit

Step 7 Rerun the show run | in OUTSIDE command to verify that the access list is updated.

asa1# show run | in OUTSIDE
access-list OUTSIDE extended permit icmp any host 10.10.1.10
access-list OUTSIDE extended permit tcp any host 10.10.1.10 eq 5000
access-list OUTSIDE extended permit tcp any host 10.10.1.10 eq 5001
access-group OUTSIDE in interface outside
asa1#

Task 4: Verify Post-Change Connectivity


Now that the infrastructure has been updated to accommodate the traffic, deploy the application and verify
that the application health checks pass.

Activity

Create the Merge Request


Check out a branch and push the code to the GitLab repository.

Step 1 Within the Visual Studio Code terminal, create a new branch called checkapp using the git checkout -b
checkapp command.

student@student-vm:lab10/net_inventory (master)$ git checkout -b checkapp


M .gitlab-ci.yml
M docker-compose.yml
Switched to a new branch 'checkapp'

Step 2 Add the file to the git index using the git add .gitlab-ci.yml docker-compose.yml command.

student@student-vm:lab10/net_inventory (checkapp)$ git add .gitlab-ci.yml docker-compose.yml

Step 3 Commit the file to git using the git commit -m "Add health checks" command.

student@student-vm:lab10/net_inventory (checkapp)$ git commit -m "Add health checks"
[checkapp 535cf96] Add health checks
2 files changed, 15 insertions(+), 15 deletions(-)

Step 4 Push the branch up to GitLab using the git push origin checkapp command. When prompted, provide your
GitLab credentials.

student@student-vm:lab10/net_inventory (checkapp)$ git push origin checkapp
Username for 'https://git.lab': student
Password for 'https://student@git.lab':
warning: redirecting to https://git.lab/cisco-devops/net_inventory.git/
Counting objects: 4, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 435 bytes | 435.00 KiB/s, done.
Total 4 (delta 3), reused 0 (delta 0)
remote:
remote: To create a merge request for checkapp, visit:
remote: https://git.lab/cisco-devops/net_inventory/merge_requests/new?merge_request
%5Bsource_branch%5D=checkapp
remote:
To https://git.lab/cisco-devops/net_inventory
* [new branch] checkapp -> checkapp
student@student-vm:lab10/net_inventory (checkapp)$
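For orientation only, a health-check job of the kind committed in .gitlab-ci.yml here might be sketched as follows. The job name, stage, and exact script lines are illustrative assumptions, not the lab's actual file:

```yaml
# Hypothetical sketch: a CI job that reuses the manual checks from this lab.
healthcheck:
  stage: test
  script:
    # Each command exits nonzero on failure, which fails the job.
    - ping -w 10 k8s1
    - curl -m 2 -f -s -o /dev/null -w "%{http_code}" http://k8s1:5000/views/inventory/devices
    - curl -m 2 -f -s -o /dev/null -w "%{http_code}" http://k8s1:5001/api/v1/inventory/devices
```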

Step 5 From the Chrome browser, navigate to https://git.lab.

Step 6 Log in with the credentials that are provided in the Job Aids and click Sign in.

Step 7 From the list of projects, choose the cisco-devops/net_inventory project.

Step 8 In the upper right corner, click the Create merge request button.
Step 9 Set the Source Branch to checkapp, the target branch to master, and click the Compare branches and
continue button.

Step 10 Scroll down and click the Submit merge request button.

Step 11 Wait for the job to complete, then click Merge.

Step 12 Click on the number that follows the pipeline, such as #69, and review the job.

Step 13 Click the deploy icon to review the deploy output.

Step 14 Review the output to determine whether your job succeeded. If it did, the application was tested properly.

Summary
In this lab, you reviewed aspects of testing an application, both within the context of Docker Compose and
outside it in a more generic fashion. You also integrated health tests with the GitLab CI pipeline for
continuous testing of the application.

Release Deployment Strategies
When you have an application that is being released, there are multiple strategies that can be employed to
ensure that the application is deployed appropriately. This topic discusses several popular application
deployment methods, including the Big Bang, Rolling, Blue-Green, and Canary deployment methodologies.

Big Bang Deployment


In the Big Bang release deployment model, the entire application is upgraded in one window. This
methodology is common in traditional deployments. There is a single maintenance window to complete the
upgrade and all the systems have the same code version. Downtime is required to complete the upgrade and
may be minimized with load balancing.
• Traditional method
• Flip the switch to a new application version
• All systems are the same; no variation in versions

The risk of the Big Bang deployment method is that it is an all-or-nothing success methodology. If the
change is successful, then it will be successful for the entire application. However, if an application error
starts to occur, then there are likely to be errors for everyone. Real production traffic is not sent to the
application until the entire application is in production. Planning needs coordination of all systems, which
may lead to delays in the deployment.

Rolling Deployment
In the Rolling deployment model, you roll out components and infrastructure one at a time, such as an
application and presentation container. You then continue to roll out the updated application one piece of
infrastructure at a time until the entire application has migrated successfully to the new version.
• Upgrade individual components at a time.
• No complete outage of the application
• Test components along the way.
• Swap out components as you go.

This approach is much more modular than the Big Bang model, but it may add complexity and time to move
from one version to the next.
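The one-at-a-time loop can be sketched as follows (a hypothetical illustration: node names and the upgrade and health-check callables are stand-ins for a real orchestrator):

```python
def rolling_upgrade(nodes, upgrade, healthy):
    """Upgrade nodes one at a time; stop at the first node that fails
    its health check. Returns the nodes upgraded successfully."""
    done = []
    for node in nodes:
        upgrade(node)          # swap this one component to the new version
        if not healthy(node):  # test the component before moving on
            break              # halt; remaining nodes keep the old version
        done.append(node)
    return done

# Stubbed example: the second node fails its check, so the rollout
# stops after one node instead of breaking the whole application.
upgraded = rolling_upgrade(
    ["web1", "web2", "web3"],
    upgrade=lambda node: None,
    healthy=lambda node: node != "web2",
)
```

Testing each component before moving on is what prevents a complete outage of the application.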

Blue-Green Deployment
With Blue-Green deployments, you duplicate the production environment. You have a complete set of Blue
production servers that have the current baseline deployment. You also have a complete environment in
standby that is the next version of the application. Once it is time to migrate to the new version of the
application, the entire workload is shifted to the Green application stack. You get the benefit that you can
flip back and forth between application states with the flip of a switch.
• Multiple versions are live at the same time.
• Migrate the application from blue to green set; can move back to blue if the green application has
issues.

Rollback is easy because you return to the previous stack if there is an issue. The downside of the Blue-
Green deployment model is that you have twice the infrastructure to maintain. You will have two different
copies of the environment operating at the same time to be able to switch back and forth between the
environments.
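The cutover itself can be sketched as a pointer flip (hypothetical hostnames; in practice the flip happens at a load balancer or DNS layer):

```python
class BlueGreen:
    """Track two identical environments and which one receives traffic."""

    def __init__(self, blue: str, green: str):
        self.envs = {"blue": blue, "green": green}
        self.active = "blue"  # the current production baseline

    def cut_over(self) -> None:
        # Shift the entire workload to the other stack in one step;
        # calling it again is the rollback.
        self.active = "green" if self.active == "blue" else "blue"

    def target(self) -> str:
        return self.envs[self.active]

lb = BlueGreen(blue="app-v1.example.test", green="app-v2.example.test")
lb.cut_over()  # go live on green
lb.cut_over()  # roll back to blue just as quickly
```

The simplicity of the flip is the appeal; the cost is keeping both stacks deployed and current at all times.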

Canary Deployment
The name comes from the old mining days, in which a canary was used to detect if there were toxic air
levels in the mine. If the canary stopped singing, the miners knew to evacuate. In the Canary deployment
model, you have a second deployment for a limited set of users. You then get a sense of whether the
application is performing properly. Over time, you increase the number of nodes and users on the new
version of the application, and eventually shift the entire application over to the new application version and
the process repeats.
• Route a subset of users to an application.
• Different branches are used for different Canary servers.
• Verify that new features and functions are working without impacting most users.

This model gives you real-life feedback on the application versions. You will know if there are errors
occurring that need attention or if the application is in a good state. This approach is good if you are
risk-averse, because you increment the load on the new version of code over time. The downside is that there
are two (or more) versions of the application in production at a given time. This approach can cause
difficulty in troubleshooting any application issues.
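Routing a fixed subset of users can be sketched with deterministic hashing (the percentage and version labels are illustrative assumptions), so a given user always lands on the same version:

```python
import hashlib

def canary_route(user_id: str, canary_percent: int) -> str:
    """Send roughly canary_percent of users to the canary version."""
    # Hashing the user ID keeps the assignment stable between requests.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "stable"

# Raising canary_percent over time gradually shifts users to the new
# version; at 100, the migration is complete.
```

Stable assignment matters: a user who bounces between versions on every request would see inconsistent behavior and muddy the feedback.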

Release Strategies Comparison


There are benefits and risks that are associated with the various release strategies. There are financial costs
that are involved with the size of the system environment and complexity. There are trade-offs between the
categories that are listed, and others that are not listed. The release deployment strategy that is right for your
organization will differ based on the values that are associated with the strategy. You may be able to have a
minimal amount of downtime that is accounted for in a planned maintenance window to keep complexity
and costs down. Or you may want to have 100 percent uptime with no outage and fast rollbacks.

The Big Bang release method has associated downtime, but is the least complex and requires the least
amount of system resources to complete the strategy.
Rolling releases have a bit more complexity but keep costs down. The downside is that no real
traffic hits the application until it is in production. Therefore, you may have some issues and
versioning control challenges during the full rollout.
Blue-Green releases have the quickest rollback capabilities. If there are issues on the new (green side)
application version, you can switch back to the previous production instance, because it is still 100 percent
in production. This methodology produces the greatest cost, because you are maintaining two environments.
Canary releases have some complexity trade-offs because the rollout includes different versions within the
production environment. This approach creates a longer version mismatch than a Rolling release, but gives you
the ability to see real-world traffic on the Canary systems before continuing to increase the production load
of the new application version.
1. Which method of release has the most cost associated with it from a systems perspective?
a. Big Bang
b. Rolling
c. Blue-Green
d. Canary

Summary Challenge
1. Which type of system tests are expected to be short?
a. infrastructure testing
b. smoke testing
c. functional regression testing
d. end-to-end testing
2. Which type of testing would identify issues with a third-party application integration?
a. performance testing
b. smoke testing
c. functional regression testing
d. end-to-end testing
3. Which Docker orchestration engine can take advantage of the Docker Health Check feature to help
maintain application uptime and availability?
a. Kubernetes
b. Mesos
c. Swarm
d. Marathon
4. Which deployment methodology is the quickest and easiest on infrastructure teams?
a. Big Bang
b. Blue-Green
c. Canary
d. Rolling
5. Which deployment methodology sends a small amount of traffic to a subset of hosts in the
environment that are running the new version of the application with a gradual rollout, assuming
there are no issues with the application?
a. Big Bang
b. Blue-Green
c. Canary
d. Rolling
6. Which release methodology ensures that there are always two operational systems?
a. Big Bang
b. Blue-Green
c. Canary
d. Rolling
7. Which application testing tool is used to emulate an end-user browsing experience?
a. AppDynamics
b. Selenium WebDriver
c. Python Black
d. Python Canary

Answer Key
Postdeployment Validation
1. D

Release Deployment Strategies


1. C

Summary Challenge
1. B
2. D
3. C
4. A
5. C
6. B
7. B

Section 9: Extending DevOps Practices to the
Entire Infrastructure

Introduction
The principles of DevOps have been around for a fairly long time. Initially, they applied to
applications and to bridging the gap between application developers and the operations teams who supported
those applications. However, over the past few years, the concept of NetDevOps has emerged. NetDevOps
is the application of DevOps principles, processes, and tools by IT networking professionals to increase
uptime, reliability, and predictability, while gaining the benefits of automation. This section explores the
NetDevOps culture, tooling, and principles.

Introduction to NetDevOps
NetDevOps is the expansion of DevOps into the network space and overall network industry. Networking
has challenges that other areas of the IT stack do not have. The new term NetDevOps was introduced to
help explain practices that relate to networks in the DevOps era.

DevOps Cycle

Here you see the phases of the DevOps cycle. Starting in the development section, there is a plan, which
includes the prioritization and decision phases in determining what to work on.
Once work is planned, you move into the create phase, where you generate the code and configurations that
will be applied.
Once the code is created, you need to verify that the code is in a good state. This process includes unit
testing, code format verification, and code linting. This code is then packaged for production. Once the
packaging is done, a release is made available.
Once the release is in production, it is important to monitor the application and configuration so that it
meets the planned goals. Are all the key performance indicators (KPIs) being met? Are there new error logs
that are being generated from a release? Is a backout of the recent change necessary? Monitoring the KPIs is
important to help provide feedback into the planning phase.

NetDevOps versus DevOps


• Configuration versus Code
• Build Config versus Build Code
• Test Config versus Test Code
• Release Config versus Release Code

In NetDevOps, code is replaced with configuration. General code maps to configuration, build code to
building the configuration, and test code to how you test the configuration. This mapping continues into the
release cycle. In DevOps, you move the build, test, and release steps into an automated process.

NetDevOps Culture
• DevOps and NetDevOps are about culture and process.
• Moving from a culture of few changes to a culture of rapid incremental changes with testing,
verification, and backout when necessary.

Culture is a significant part of the DevOps adoption. In the past, many organizations were very hesitant to
make changes and tended to put several changes into one window. This approach can cause collisions of
changes and make it more challenging to get exact feedback on the success of a change. The risk level of
changes is often seen as high.
NetDevOps is a “culture of change.” Many more changes are done incrementally, usually one change at a
time. These changes are then tested independently and verified independently. Testing is automated and
done before the change to give confidence that the change will be successful. Automated verification after
the change provides consistency, helps accelerate changes, and creates confidence that the change was done
successfully.

Automation
• Consistent and faster testing
– Incremental development
– Frequent changes
– Continuous testing
• Faster feature deployment

Automation is a major contributor to the drive for DevOps principles. By using automation in the testing
phases, you can increase the frequency of testing proposed changes. Changes are tested, and if the changes
pass the tests, they are ready for the next phase. If the tests fail, feedback is provided. You can then update
the change and rerun the tests. This process continues until the change passes the test.
Automation provides a consistent methodology. When the tests are set up and executed, they are run in the
same way repeatedly. By having the appropriate tests run to meet the feature requirement, you have
confidence in the systems. You know that the function has been tested, and you know the results of the test.
This process is similar to asking a user to run through tests when a network change is complete, except that
the tests are automated and you get consistent results.

Monitoring
• Consistent and quick detection of faults
• Changes are verified via monitoring capabilities.
• Measurement of network key performance indicators (KPIs)
• Feedback loops

Implementation of proper monitoring in a NetDevOps environment is a strong component of getting fast or
immediate feedback. The feedback loop helps you know if a release has changed the environment. For
instance, suppose that you have a quality of service (QoS) change in which you are changing the
differentiated services code point (DSCP) marking of a packet. You will want to make sure that there are
appropriate tests in place to verify that the expected DSCP value for the application occurs both before and
after the QoS change. Did the change cause a change in the CPU performance load?
You should set up the key metrics that are important to your network and monitor these metrics as KPIs.
KPIs can include items such as bandwidth utilization, CPU, and memory. Metrics need to be relevant and
measurable so that you have a good indication of the health of the network and the feedback loop remains
relevant to the changes that are being introduced.
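As a sketch of the feedback loop (metric names and thresholds here are examples, not lab values), a post-change check compares measured KPIs against their thresholds and flags violations that might justify a backout:

```python
def check_kpis(samples: dict, thresholds: dict) -> list:
    """Return the names of KPIs whose measured value exceeds its threshold."""
    return [name for name, value in samples.items()
            if name in thresholds and value > thresholds[name]]

# Example: after a change, CPU exceeds its limit, so the change is a
# candidate for backout; memory and bandwidth are still healthy.
violations = check_kpis(
    {"cpu_percent": 93.0, "memory_percent": 61.0, "bandwidth_mbps": 410.0},
    {"cpu_percent": 80.0, "memory_percent": 85.0, "bandwidth_mbps": 900.0},
)
```

Running the same comparison before and after a change, as in the QoS example above, turns the KPIs into a concrete pass or fail signal.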

Apply DevOps Tools

The following describes some of the available DevOps tools:


• Continuous integration: Tools such as Jenkins, Travis CI, CircleCI, TeamCity, and Visual Studio
Team Services help orchestrate the overall development pipeline and offer more efficient ways of
testing.
• Configuration management: Tools like Puppet, Chef, SaltStack, Terraform, and Ansible are often
used for configuration management and ensure that an application is deployed in the approved and
template-defined configuration every time.
• Collaboration: Systems like Trello, Slack, Cisco Webex Teams, Jira, and Hipchat enhance
collaboration during the development process.
• Working environment: Systems that quickly allow repeatable environments to be created, such as
Vagrant/Packer and Docker, simplify and accelerate development efforts.
• Source and image control: Source code, and more broadly, software artifacts, are stored and pushed
using systems like GitLab, GitHub, Docker Hub, Bitbucket, and JFrog Artifactory.
• Platforms: Cloud resources like OpenStack, Google Cloud Platform, Amazon Web Services,
DigitalOcean, and Microsoft Azure allow platforms to be easily consumed via APIs to build on-demand
and elastic environments.

NetDevOps Pipeline

The tooling choices continue to expand as the NetDevOps pipelines continue to mature. There are new
options and methods for monitoring environments that were not available just a decade ago. The tools that
are used in the build phase are continuing their reach and expanding. CI tools like Jenkins and Travis CI
are also expanding to help provide coverage of the release and deploy phases of the
NetDevOps pipeline. Source-control tools are continuing to help improve code quality and expanding into
some of the CI spaces.
Configuration management tools are important in the release and deploy phase to ensure that the updates are
done automatically. Systems use Ansible, SaltStack, Chef, Terraform, and Puppet to deploy to devices in a
consistent way.
Working environment tools like Docker and Vagrant help provide consistency from development to staging and
into production. With Docker, you get a consistent image that produces consistent containers that can be
used in all phases.
The monitoring and real-time feedback loops are continuing to evolve by providing real-time data to
operations and using tools like chat to provide instant feedback and even control the environment.
Jira, Trello, Slack, and Cisco Webex Teams are all examples of tooling that is helping the planning and
creation phases to evolve as well. From monitoring and tracking the environment to real-time collaboration,
there are many tools that are helping the DevOps and NetDevOps pipelines evolve at a quick pace.
1. Which statement describes the DevOps (and by extension, NetDevOps) culture?
a. Problems occur during changes.
b. Manual testing is used.
c. Change is tested and verified.
d. Changes happen rarely.

Infrastructure as Code
Infrastructure as Code (IaC) is the representation of infrastructure in a code format. The infrastructure
configuration is in a source-controlled system, which is used to derive the final state configuration. Some
characteristics of IaC include having variables that are defined and tracked in source control, data that are
gathered from a source of truth such as IP address management (IPAM), and configurations that are built
from templates that are also in source control.

Infrastructure as Code
• Maintained in source control
• Tested
• Versioned
• Self-documenting

IaC gives you many benefits that developers have used for years. You use a source control system to
maintain the definition of the IaC. Once infrastructure is represented in a file, you get the advantages of
versioning and documenting of changes (who made the change).
With source control, you get a history of who has requested the changes, the ability to have peer review,
multiple approvals to promote a new version into production, and more. The source control systems
continue to make enhancements to these fundamentals.
Testing of configurations can be done once the infrastructure is converted to a code format. This testing
occurs in the CI/CD pipeline rather than live on the box. By developing these methodologies, the confidence
in change success increases, which speeds up changes.
Source control provides versioning. If a change is implemented, but breaks something that testing
methodologies did not catch, it is easier to roll back the change to the previous version of the repository and
redeploy.
Self-documenting is done by the source control system. The documentation includes who requested the
change to the repository, who approved it, and the timing of these actions. You get history of the
configuration over time, not just the current configuration on the device.

BGP and ASA Interfaces as Code
BGP Configuration in YAML
---
local_as: 65004
neighbors:
  - ip: "10.10.4.1"
    remote_as: 65003
  - ip: "10.10.70.1"
    remote_as: 65009
advertised_networks:
  - ip: "10.10.3.0"
    mask: 24
  - ip: "10.10.2.0"
    mask: 24

Cisco ASA Interfaces in YAML


---
GigabitEthernet0/0:
  zone:
    name: "inside"
    security_level: 100
  ip_address:
    ip: "10.10.4.254"
    mask: 24
GigabitEthernet0/1:
  zone:
    name: "outside"
    security_level: 0
  ip_address:
    ip: "10.10.3.1"
    mask: 24

When building the information that defines the network as code, you capture much the same information that
you would otherwise configure on the device. The command syntax is moved out of the definition and
managed in other source code files, leaving the interesting information in the source code, where it can be
easily read and changed.
The first portion of the BGP configuration in YAML shows that the local AS for the device is going to be
65004, as noted by the key local_as with the value 65004. The neighbors are defined within the neighbors
key, and there are two different neighbors that will be configured. The list of neighbors includes the IP
address of the neighbor along with the BGP remote AS that will be configured. The advertised networks are
also represented in this snippet by the advertised_networks key. This list of networks allows this device to
advertise multiple networks. The YAML information is fed into a system that will configure the device.
Notice that the device type is not defined. It could be defined as a Cisco IOS, IOS-XE, or IOS-XR device, a
Cisco Nexus Operating System (NX-OS) device, or a Cisco ASA Adaptive Security Appliance. The details
of the configuration are in a different file that takes information from this snippet.
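To illustrate how such a snippet might be turned into device syntax, here is a hedged Python sketch that renders Cisco IOS-style BGP commands from the same structure. The rendering logic is an assumption for illustration; the system that consumes the YAML may use templates instead:

```python
import ipaddress

def render_bgp(data: dict) -> str:
    """Render IOS-style BGP commands from the YAML-derived structure."""
    lines = [f"router bgp {data['local_as']}"]
    for nbr in data["neighbors"]:
        lines.append(f" neighbor {nbr['ip']} remote-as {nbr['remote_as']}")
    for net in data["advertised_networks"]:
        # ip_network() also validates the prefix as a side effect.
        netmask = ipaddress.ip_network(f"{net['ip']}/{net['mask']}").netmask
        lines.append(f" network {net['ip']} mask {netmask}")
    return "\n".join(lines)

# The same data as the YAML snippet, as the parsed native structure.
bgp = {
    "local_as": 65004,
    "neighbors": [
        {"ip": "10.10.4.1", "remote_as": 65003},
        {"ip": "10.10.70.1", "remote_as": 65009},
    ],
    "advertised_networks": [
        {"ip": "10.10.3.0", "mask": 24},
        {"ip": "10.10.2.0", "mask": 24},
    ],
}
config = render_bgp(bgp)
```

Swapping this renderer for an NX-OS or IOS XR one changes only the rendering, not the data, which is exactly the portability the text describes.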

The Cisco ASA interfaces example has some more specific configuration, because the Cisco ASA appliance
uses zone or interface names. This approach requires there to be a definition of the zone and the associated
security level. You see that for each interface, there is a single zone and a security level that is assigned to
this zone. The ip_address section is as you would expect for any other interface on a device. Because the
syntax is removed, you can see the environment without having to look through configuration details. This
configuration is then easily portable to a new device. For example, if you want to replace a firewall with a
router, you can easily migrate because the YAML definition would not have to change (you could leave the
zone information in the file). You would only need to change the configuration template that is called.

Firewall Policy as Code


---
INSIDE:
  interface: "inside"
  policies:
    - action: "permit"
      protocol: "icmp"
      source:
        ip: "10.10.0.0"
        mask: 22
      destination:
        ip: "10.10.3.0"
        mask: 24

---
OUTSIDE:
  interface: "outside"
  policies:
    - action: "permit"
      protocol: "icmp"
      source:
        ip: "10.10.3.0"
        mask: 24
      destination:
        ip: "10.10.0.0"
        mask: 22

Code can also represent a firewall policy. The inside interface policy and outside interface policy are
defined in the figure. You would have additional rules that are defined for each list item under the key
policies. Within this YAML definition, you would also define the action, protocol, source IP address and
mask, and destination IP address and mask—the full five-tuple firewall policy requirements. If there is an
extra definition, such as a security group tag (SGT), you would add additional keys that would represent the
SGT information. The template engine would then determine if the policy needed to be an SGT source or a
traditional five-tuple rule.
Now that you have seen the examples, what do you do with these files? These files belong in a source
control system where you can track changes, have appropriate approvers, and see the change over time. The
information in the source control system then can be fed into a CI tool to execute tests and verify the
network environment. This testing ensures that the information that is fed into the system is of the proper
type and setup; for example, making sure that an IP address is valid and that the interface names and
descriptions match the description naming scheme. These examples are simply starting examples, and you
will likely come up with additional tests that can be completed.
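Both ideas, rendering from data and basic validity testing, can be sketched together. The ASA-style output and the helper below are illustrative assumptions; the standard library's ipaddress module rejects invalid addresses before anything reaches a device:

```python
import ipaddress

def render_acl(name: str, policy: dict) -> list:
    """Render ASA-style access-list lines from the policy structure,
    validating every address/mask pair along the way."""
    lines = []
    for rule in policy["policies"]:
        # ip_network() raises ValueError on an invalid address or mask,
        # which is the kind of test a CI pipeline would surface.
        src = ipaddress.ip_network(f"{rule['source']['ip']}/{rule['source']['mask']}")
        dst = ipaddress.ip_network(f"{rule['destination']['ip']}/{rule['destination']['mask']}")
        lines.append(
            f"access-list {name} extended {rule['action']} {rule['protocol']} "
            f"{src.network_address} {src.netmask} {dst.network_address} {dst.netmask}"
        )
    return lines

# The INSIDE policy from the YAML example, as the parsed structure.
inside = {
    "interface": "inside",
    "policies": [{
        "action": "permit",
        "protocol": "icmp",
        "source": {"ip": "10.10.0.0", "mask": 22},
        "destination": {"ip": "10.10.3.0", "mask": 24},
    }],
}
acl_lines = render_acl("INSIDE", inside)
```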

© 2022 Cisco Systems, Inc. Implementing DevOps Solutions and Practices Using Cisco Platforms (DEVOPS) 387
When the infrastructure is in a source code repository and is code, the code can be fed to many tools that
can deploy the configuration. You can use Ansible, SaltStack, Terraform, or any tool that meets the
requirements for deploying the code. The tool can be changed over time as the infrastructure is defined in a
code format. As long as there is a method to send the code through the configuration management tool, you
will accomplish the goals of deploying your infrastructure from code.

Source of Truth
• Devices are no longer the source of truth.
• Audit devices by comparing them to the source of truth, and report or fix deviations.
• Devices are configured based on information in the source of truth.
• There may be multiple sources of truth, but only one per data domain (customers, IP addresses, prefixes,
physical addresses, and so on).

Knowing where to get a definitive data point is important. Devices should no longer be the source of this
data. It should be derived from a single source for each data domain. If there are multiple sources, there can
be conflict and no definitive answer. This scenario does not mean that a single tool becomes the source for
every data type. An IPAM tool is the source for prefixes and IP addresses. Physical addresses will come
from a data source that is defined for that purpose.
As the IaC is built, the data from the various sources of truth are combined to build the final configuration
that will go on a device. If a device’s configuration state does not match the state of the IaC system, then the
device should be updated to match the system, either manually or as time and systems mature, automatically
through your NetDevOps pipeline.
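A minimal sketch of that audit, assuming intended and actual interface states have already been collected into flat dicts; the data shapes and function name are illustrative.

```python
# Sketch of a source-of-truth audit: compare intended state with the
# state read from a device and report every deviation. In a pipeline,
# each deviation would be reported or pushed back to the device.

def find_drift(intended, actual):
    """Return {key: (intended_value, actual_value)} for every mismatch."""
    drift = {}
    for key, want in intended.items():
        have = actual.get(key)
        if have != want:
            drift[key] = (want, have)
    return drift


intended = {"GigabitEthernet0/0": "10.10.4.254/24", "GigabitEthernet0/1": "10.10.3.1/24"}
actual = {"GigabitEthernet0/0": "10.10.4.254/24", "GigabitEthernet0/1": "10.10.3.254/24"}

print(find_drift(intended, actual))
# → {'GigabitEthernet0/1': ('10.10.3.1/24', '10.10.3.254/24')}
```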

YAML
• Human-friendly data serialization
• Indentation is part of the structure and is mandatory.
• A dash indicates an item in a list.
• The dictionary uses a colon to denote key: value pairs.

---
inventory:
  device:
    - name: csr1kv1
      version: 16.09
      vendor: cisco
      uptime: '2 days'
      serial: XB96871
      snmp:
        - name: public
          permission: ro
        - name: private
          permission: rw

YAML is a human-friendly data serialization standard that has steadily increased in popularity over the last
few years. YAML uses Python-style indentation to indicate nesting, and a special format to indicate lists
and dictionaries. YAML files are commonly used as configuration files, but can also be used in many other
applications. YAML syntax is very simple, but it needs careful indentation; otherwise, the consuming program
will fail to read the data. It is common to standardize on either a two- or four-space indent, although the
two-space indent is more widely used.
1. Which benefit of Infrastructure as Code provides a historical reference point for the code?
a. CI/CD
b. versioning
c. source of truth
d. planning documentation

Discovery 11: Build a YAML IaC Specification for
the Test Environment
Introduction
Infrastructure as Code is a technique that manages infrastructure configuration via definition files instead of
interactive sessions. The goals of this lab are as follows:
1. Define data models for codifying IP address, BGP, and access control list (ACL) configurations.
2. Build the configuration files for the test environment in YAML according to the defined data models.

The data models will be defined using JavaScript Object Notation (JSON) schema, a popular framework for
defining schema for both JSON and YAML data. In addition to defining the schema for data, JSON Schema
has wide support among programming languages for validating data against the defined schema.
YAML has wide support among programming languages and is used by many DevOps IaC tools, so it is a
natural choice for storing configuration definitions.
The lab provides tests using the pytest testing framework, which uses the PyYAML parser to load the
configuration files and JSON Schema (https://www.jsonschema.net) to validate that the configurations
adhere to the defined schema.

Topology

Job Aid
• Device Information

Device Description IP Address Credentials

Student Workstation Linux Ubuntu VM 192.168.10.10 student, 1234QWer

Command List
The table describes the commands that are used in this activity. The commands are listed in alphabetical
order so that you can easily locate the information that you need. Refer to this list if you need configuration
command assistance during the lab activity.

Command Description

cd directory name To change directories within the Linux file system, use the cd
command. You will use this command to enter a directory where the
lab scripts are housed. You can use tab completion to finish the name
of the directory after you start typing it.

cat file The most common use of the cat Linux command is to read the
contents of files. It is the most convenient command for this purpose in
UNIX-like operating systems.

more file-name To view the content of a file (one full window at a time), use the more
Linux command. Press space to view the next part of the file.

cp source-file destination-file Copies a file from the source to the destination. The files can be
absolute or relative paths.
You may also copy entire folders with the -r flag.

mv source-file destination-file Moves/Renames a file from the source to the destination. The files can
be absolute or relative paths.
You may also move or rename a folder.

code file-name|dir-name Opens the provided file or directory in the graphical editor VS Code. If
it is already running, it will open the file in a new tab.

chmod +x file-name Adds executable permissions to the provided file.

pytest Runs tests in the tests/ directory using the pytest framework.

Task 1: Create an Address Plan for the Infrastructure


You will view and edit the YAML and Python files using Visual Studio Code, which also provides
syntax highlighting and helps ensure proper formatting.

Activity

Design the Address Plan

Step 1 In the student workstation, open a terminal window and change the directory to labs/lab11 using the cd
~/labs/lab11 command.

student@student-vm:$ cd ~/labs/lab11/
student@student-vm:labs/lab11$

The directory structure provides a directory for the four networking devices in the Test Topology. Each
directory has the necessary YAML files for defining the corresponding device’s configuration. There are
also directories for schema definitions and test files, which will validate the YAML files against the schema
definitions.

Step 2 In the terminal window, issue the tree --dirsfirst command to view the directory structure.

student@student-vm:labs/lab11$ tree --dirsfirst


.
├── asa1
│   ├── bgp.yml
│   ├── interfaces.yml
│   └── policies.yml
├── csr1kv1
│   ├── bgp.yml
│   └── interfaces.yml
├── csr1kv2
│   ├── bgp.yml
│   └── interfaces.yml
├── csr1kv3
│   ├── bgp.yml
│   └── interfaces.yml
├── schemas
│   ├── array_properties
│   │   ├── bgp.py
│   │   └── policy.py
│   ├── objects
│   │   ├── bgp.py
│   │   ├── interface.py
│   │   ├── ip.py
│   │   └── policy.py
│   ├── properties
│   │   ├── bgp.py
│   │   ├── interface.py
│   │   ├── ip.py
│   │   └── policy.py
│   ├── bgp.py
│   ├── __init__.py
│   ├── interface.py
│   └── policy.py
├── tests
│   ├── conftest.py
│   ├── __init__.py
│   └── test_config_against_schema.py
├── __init__.py
└── schema_writer.py

Step 3 Open Visual Studio Code for viewing and editing the lab11 directory. Use the code . command.

student@student-vm:labs/lab11$ code .

You will focus on defining the schema for each configuration section. The implementation is provided for
all three csr1kv devices. The asa1 device has its YAML modeled per the provided schema and only
requires filling in the data per the provided topology data. The schema is defined using the jsonschema
Python library, which is the standard implementation of JSON Schema.

The main building blocks for creating a data model are the appropriate types for the data. Because this
lab uses YAML to store data and Python to validate against JSON Schema, it is important to understand
how the data types relate across all three languages. When defining a model with JSON Schema, the
type always uses the name used by JSON. Here is a table that maps the JSON types to their respective
YAML and Python types.

JSON Type YAML Type Python Type

String String String

Number Integer and Float Integer and Float

Array Sequence List

Object Mapping Dictionary

Boolean Boolean Boolean

Null Null None
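The JSON-to-Python column of the table can be verified directly with the standard json module:

```python
# Each JSON type loads as the Python type listed in the table above.
import json

doc = '{"name": "csr1kv1", "version": 16.09, "tags": ["edge"], "snmp": {"ro": true}, "serial": null}'
data = json.loads(doc)

assert isinstance(data["name"], str)       # JSON string  -> Python str
assert isinstance(data["version"], float)  # JSON number  -> Python float
assert isinstance(data["tags"], list)      # JSON array   -> Python list
assert isinstance(data["snmp"], dict)      # JSON object  -> Python dict
assert data["snmp"]["ro"] is True          # JSON boolean -> Python bool
assert data["serial"] is None              # JSON null    -> Python None
```

PyYAML's safe_load produces the corresponding Python types from YAML documents as well, which is why a single schema can validate data stored in either format.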

In addition to defining the type of data that should be supplied, it is also important to apply
further restrictions on what is considered valid data. For example, it might be important to validate
that an email field follows the standard format for email addresses, or that a VLAN number is between 1
and 4094. JSON Schema provides several fields for further restricting what counts as acceptable data.
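To make these restrictions concrete, here is a hand-rolled illustration of the two checks this lab relies on. It is not the jsonschema library, only a sketch of the logic behind the format, minimum, and maximum keywords; the function names are invented.

```python
# Simplified stand-ins for two JSON Schema restrictions: the "ipv4"
# string format and numeric "minimum"/"maximum" bounds. Illustration
# only; the lab uses the jsonschema library for the real checks.
import ipaddress


def valid_ipv4(value):
    """Mimic the 'ipv4' value of the format keyword."""
    try:
        ipaddress.IPv4Address(value)
        return True
    except (ValueError, TypeError):
        return False


def in_range(value, minimum, maximum):
    """Mimic the minimum/maximum keywords for the number type."""
    return (isinstance(value, (int, float))
            and not isinstance(value, bool)
            and minimum <= value <= maximum)


print(valid_ipv4("10.10.4.254"))  # → True
print(valid_ipv4("10.10.4.300"))  # → False: 300 is not a valid octet
print(in_range(4094, 1, 4094))    # → True: a valid VLAN number
print(in_range("24", 0, 32))      # → False: a numeric-looking string is not a number
```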

The main field used for string types is format. An example used in this lab will be IP addresses. JSON
Schema provides formats for both IPv4 and IPv6 addresses. JSON Schema also provides format options for
date, time, email, hostnames, resource identifiers, and means to create custom formats.

The number type provides fields for ensuring the value is a multiple of a given number, or for setting the
upper and lower bounds of acceptable values. This lab makes use of setting the range of acceptable values
using the minimum and maximum fields.

An understanding of these two types is sufficient to start defining properties for the data models. The first
schema you will define is for the device interface configurations. The configurations required to build the
Test Environment include the IP address of each device interface. The asa1 device also includes security zone
definitions.

Since JSON Schema provides validation for IP addresses, it is useful to define the IP address and mask
separately. The IP property should be a string with a format of IPv4. The mask property should be a number
in classless interdomain routing (CIDR) notation, with the minimum and maximum fields restricting
valid masks to values between 0 and 32.

Properties are defined in the ip.py file in the schemas/properties directory.

Step 4 View the schemas/properties/ip.py property file using the cat command.

student@student-vm:labs/lab11$ cat schemas/properties/ip.py

IP_PROPERTY = {
    "type": "string",
    "format": "ipv4"
}

MASK_PROPERTY = {
    "type": "number",
    "minimum": 0,
    "maximum": 32
}

It is also important to group the IP and mask properties together. Defining an object type with these two
properties is the best way to accomplish this. Objects map keys to values, which JSON Schema defines as
properties. Each key defined in an object's properties key has a value of another schema definition (such as
the IP_PROPERTY and MASK_PROPERTY above). JSON Schema defaults to allowing the data to define
additional properties that are not defined in the schema, but this can be changed by setting the
additionalProperties value to False. There is also a required field that takes a list of property names that are
required; for the IP address object, this will be "ip" and "mask" to ensure that each IP has a netmask.

Objects are defined in the ip.py file in the schemas/objects/ directory.

Step 5 Open the schemas/objects/ip.py file in Visual Studio Code using the code schemas/objects/ip.py command.

student@student-vm:labs/lab11$ code schemas/objects/ip.py

Configuring the security zone on the asa1 device includes assigning the interface a name and a security
level. The nameif value is a string, and the security level is a number between 0 and 100. The security
zone schema properties are defined in schemas/properties/interface.py, and are similar to the IP address
properties.

Step 6 View the schemas/properties/interface.py file using the cat command.

student@student-vm:labs/lab11$ cat schemas/properties/interface.py

INTERFACE_ZONE_PROPERTY = {
    "type": "string",
}

INTERFACE_SECURITY_LEVEL_PROPERTY = {
    "type": "number",
    "minimum": 0,
    "maximum": 100,
}

The security zone should also be grouped as an object. Both the zone name and security-level property
fields are required to configure the interfaces on the asa1 device.

In addition to the security zone object, schemas/objects/interface.py defines an interface object that acts as a
container for both the IP Address and security zone objects. Since security zones are only configured on the
ASA, only the IP Address is required.

Step 7 View the schemas/objects/interface.py using the cat command.

student@student-vm:labs/lab11$ cat schemas/objects/interface.py

from ..properties.interface import (
    INTERFACE_ZONE_PROPERTY, INTERFACE_SECURITY_LEVEL_PROPERTY
)
from .ip import IP_OBJECT

INTERFACE_ZONE_OBJECT = {
    "type": "object",
    "properties": {
        "name": INTERFACE_ZONE_PROPERTY,
        "security_level": INTERFACE_SECURITY_LEVEL_PROPERTY,
    },
    "required": ["name", "security_level"],
    "additionalProperties": False,
}
"""
EX:
{
    "zone": {
        "name": "inside",
        "security_level": 100
    }
}
"""

INTERFACE_OBJECT = {
    "type": "object",
    "properties": {
        "zone": INTERFACE_ZONE_OBJECT,
        "ip_address": IP_OBJECT,
    },
    "required": ["ip_address"],
    "additionalProperties": False,
}
"""
EX:
{
    "zone": {
        "name": "inside",
        "security_level": 100
    },
    "ip_address": {
        "ip": "10.1.1.0",
        "mask": 24
    }
}
"""

The last part you must define for the interface schema is mapping the interface name to its corresponding IP
address and security zone. The options are:
• Define an array and add a name property to the above INTERFACE_OBJECT schema
• Define an outer object that maps each interface name to an INTERFACE_OBJECT

In this activity, you will use the second option because each interface name is guaranteed to be unique and it
better organizes the data. The challenge with this structure is that the property names cannot be known
before defining the schema. Fortunately, JSON Schema provides a patternProperties key that works similarly
to the properties key, but uses regular expressions to define the property names. Since this is the top-level
schema, the $schema key is added per JSON Schema standards.
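The pattern the lab uses, ^[A-Z].+\d, is an ordinary regular expression applied to each top-level property name, so you can probe it directly with Python's re module (a quick illustration, separate from the lab files):

```python
# patternProperties applies a regex to property names; this checks which
# interface names the lab's pattern would accept.
import re

PATTERN = re.compile(r"^[A-Z].+\d")

for name in ["GigabitEthernet0/0", "GigabitEthernet0/1", "loopback0", "Mgmt"]:
    print(name, bool(PATTERN.match(name)))
# The two GigabitEthernet names match; "loopback0" fails the leading
# uppercase letter, and "Mgmt" contains no digit.
```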

Step 8 Open the schemas/interface.py file in Visual Studio Code using the code schemas/interface.py command.

student@student-vm:labs/lab11$ code schemas/interface.py

Segmenting schema definitions into individual sections as done in the interface.py file provides reusable
code and ultimately makes the schema definitions more readable. However, when first learning JSON
Schema, it can be helpful to see the full schema definition. The schema_writer.py script in the root directory
takes an argument to specify which schema should be generated, and writes the full schema to a JSON file.

Step 9 Generate the interface schema using the schema_writer script. Use the ./schema_writer.py interface
command.

student@student-vm:labs/lab11$ ./schema_writer.py interface


student@student-vm:labs/lab11$

Step 10 Open the newly created interface_schema.json file in the root directory.

student@student-vm:labs/lab11$ cat interface_schema.json

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "patternProperties": {
    "^[A-Z].+\\d": {
      "type": "object",
      "properties": {
        "zone": {
          "type": "object",
          "properties": {
            "name": {
              "type": "string"
            },
            "security_level": {
              "type": "number",
              "minimum": 0,
              "maximum": 100
            }
          },
          "required": [
            "name",
            "security_level"
          ],
          "additionalProperties": false
        },
        "ip_address": {
          "type": "object",
          "properties": {
            "ip": {
              "type": "string",
              "format": "ipv4"
            },
            "mask": {
              "type": "number",
              "minimum": 0,
              "maximum": 32
            }
          },
          "required": [
            "ip",
            "mask"
          ],
          "additionalProperties": false
        }
      },
      "required": [
        "ip_address"
      ],
      "additionalProperties": false
    }
  },
  "additionalProperties": false
}

student@student-vm:labs/lab11$ code interface_schema.json

Step 11 Using the Interface Schema, fill out the interface configurations in the asa1/interfaces.yml file for the asa1
device in the Test Environment. The management interface should not be included in the configuration
definitions.

ASA1

Interface IP Address Zone Security Level

GigabitEthernet0/0 10.10.4.254/24 inside 100

GigabitEthernet0/1 10.10.3.1/24 outside 0

student@student-vm:labs/lab11$ cat asa1/interfaces.yml


---
GigabitEthernet0/0:
  zone:
    name: "inside"
    security_level: 100
  ip_address:
    ip: "10.10.4.254"
    mask: 24
GigabitEthernet0/1:
  zone:
    name: "outside"
    security_level: 0
  ip_address:
    ip: "10.10.3.1"
    mask: 24

Now that you have updated the configurations, you can validate the configuration files against the schema by
running the pytest command. The pytest files have been predeployed to accept a schema argument to filter
which configuration files are validated against the defined schema. Issuing the pytest command with the
--schema=interfaces argument limits schema validation to only the interface configurations of your four
devices.

Step 12 Validate that the interface configurations adhere to the schema. Use the pytest --schema=interfaces
command.

If you added the configuration parameters to the asa1/interfaces.yml file correctly, pytest will
report four passed tests. In case of failures, check the configuration parameters.
student@student-vm:labs/lab11$ pytest --schema=interfaces

=========================================== test session starts ===========================================
platform linux -- Python 3.6.8, pytest-5.2.2, py-1.8.0, pluggy-0.13.0
rootdir: /home/student/labs/lab11
collected 4 items

tests/test_config_against_schema.py ....                                                            [100%]

============================================ 4 passed in 0.29s =============================================

Step 13 To demonstrate that the schema is properly validating the configuration data, update the final octet of the asa1
GigabitEthernet0/0 interface IP address to 300. This value should cause the validation to fail on one test.

student@student-vm:labs/lab11$ cat asa1/interfaces.yml


---
GigabitEthernet0/0:
  zone:
    name: "inside"
    security_level: 100
  ip_address:
    ip: "10.10.4.300"
    mask: 24
GigabitEthernet0/1:
  zone:
    name: "outside"
    security_level: 0
  ip_address:
    ip: "10.10.3.1"
    mask: 24

Step 14 Re-run the pytest validation against the interface configurations, which should report an error.

student@student-vm:labs/lab11$ pytest --schema=interfaces

=========================================== test session starts ===========================================
platform linux -- Python 3.6.8, pytest-5.2.2, py-1.8.0, pluggy-0.13.0
rootdir: /home/student/labs/lab11
collected 4 items

tests/test_config_against_schema.py ...F                                                            [100%]

================================================= FAILURES =================================================
__________________ test_config_definitions_against_schema[interfaces-model_schema0-asa1] ___________________

hostname = 'asa1', model = 'interfaces'
model_schema = {'$schema': 'http://json-schema.org/draft-07/schema#',
'additionalProperties': False, 'patternProperties': {'^[A-Z].+\...red': ['name',
'security_level'], 'type': 'object'}}, 'required': ['ip_address'], 'type': 'object'}},
'type': 'object'}

    @pytest.mark.parametrize("hostname", DEVICES)
    def test_config_definitions_against_schema(hostname, model, model_schema):
        try:
            with open(f"{hostname}/{model}.yml", encoding="UTF-8") as vars_file:
                model_vars = yaml.safe_load(vars_file)
            jsonschema.validate(
                instance=model_vars,
                schema=model_schema,
>               format_checker=jsonschema.draft7_format_checker
            )

tests/test_config_against_schema.py:17:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

instance = {'GigabitEthernet0/0': {'ip_address': {'ip': '10.10.4.300', 'mask': 24},
'zone': {'name': 'inside', 'security_level': ...GigabitEthernet0/1': {'ip_address':
{'ip': '10.10.3.1', 'mask': 24}, 'zone': {'name': 'outside', 'security_level': 0}}}
schema = {'$schema': 'http://json-schema.org/draft-07/schema#', 'additionalProperties':
False, 'patternProperties': {'^[A-Z].+\...red': ['name', 'security_level'], 'type':
'object'}}, 'required': ['ip_address'], 'type': 'object'}}, 'type': 'object'}
cls = <class 'jsonschema.validators.create.<locals>.Validator'>, args = ()
kwargs = {'format_checker': <jsonschema._format.FormatChecker object at
0x7f89d57300f0>}
validator = <jsonschema.validators.create.<locals>.Validator object at 0x7f89d429c898>
error = <ValidationError: "'10.10.4.300' is not a 'ipv4'">

    def validate(instance, schema, cls=None, *args, **kwargs):

Abbreviated output

        if cls is None:
            cls = validator_for(schema)

        cls.check_schema(schema)
        validator = cls(schema, *args, **kwargs)
        error = exceptions.best_match(validator.iter_errors(instance))
        if error is not None:
>           raise error
E       jsonschema.exceptions.ValidationError: '10.10.4.300' is not a 'ipv4'
E
E       Failed validating 'format' in schema['patternProperties']['^[A-Z].+\\d']['properties']['ip_address']['properties']['ip']:
E           {'format': 'ipv4', 'type': 'string'}
E
E       On instance['GigabitEthernet0/0']['ip_address']['ip']:
E           '10.10.4.300'

../.local/lib/python3.6/site-packages/jsonschema/validators.py:934: ValidationError
======================================= 1 failed, 3 passed in 0.37s =======================================

Pytest provides a lot of useful output. Two of the more important items to point out are:
• The first line after the failure header provides the values for the hostname and model variables, which
map to the file structure; this makes it easy to identify which configuration file is noncompliant.
• The errors at the end clearly indicate that the issue is that the data contains an invalid value for what
should be an IPv4 property.

Another test to perform is to convert the asa1 GigabitEthernet0/1 interface mask property from a number to
a string. You can surround the number with quotes.

Step 15 Restore the asa1 GigabitEthernet0/0 interface IP address value to 10.10.4.254 and add quotes around the asa1
GigabitEthernet0/1 interface subnet mask value ("24"). Verify with the cat asa1/interfaces.yml command.

student@student-vm:labs/lab11$ cat asa1/interfaces.yml


---
GigabitEthernet0/0:
  zone:
    name: "inside"
    security_level: 100
  ip_address:
    ip: "10.10.4.254"
    mask: 24
GigabitEthernet0/1:
  zone:
    name: "outside"
    security_level: 0
  ip_address:
    ip: "10.10.3.1"
    mask: "24"

Step 16 Run the pytest validation against the interface configurations again. The test should report an error.

student@student-vm:labs/lab11$ pytest --schema=interfaces

=========================================== test session starts ===========================================
platform linux -- Python 3.6.8, pytest-5.2.2, py-1.8.0, pluggy-0.13.0
rootdir: /home/student/labs/lab11
collected 4 items

tests/test_config_against_schema.py ...F                                                            [100%]

================================================= FAILURES =================================================
__________________ test_config_definitions_against_schema[interfaces-model_schema0-asa1] ___________________

Abbreviated output

        if cls is None:
            cls = validator_for(schema)

        cls.check_schema(schema)
        validator = cls(schema, *args, **kwargs)
        error = exceptions.best_match(validator.iter_errors(instance))
        if error is not None:
>           raise error
E       jsonschema.exceptions.ValidationError: '24' is not of type 'number'
E
E       Failed validating 'type' in schema['patternProperties']['^[A-Z].+\\d']['properties']['ip_address']['properties']['mask']:
E           {'maximum': 32, 'minimum': 0, 'type': 'number'}
E
E       On instance['GigabitEthernet0/1']['ip_address']['mask']:
E           '24'

../.local/lib/python3.6/site-packages/jsonschema/validators.py:934: ValidationError
======================================= 1 failed, 3 passed in 0.37s =======================================

It is important to point out that strings that look like numbers do not count as numbers in JSON Schema.
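The distinction is easy to reproduce with the standard json module:

```python
# A quoted value stays a string even when it looks numeric; the schema's
# "number" type does not coerce it.
import json

print(type(json.loads('24')))    # → <class 'int'>
print(type(json.loads('"24"')))  # → <class 'str'>
```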

Step 17 Normalize the asa1 GigabitEthernet0/1 interface subnet mask back to a number.

student@student-vm:labs/lab11$ cat asa1/interfaces.yml
---
GigabitEthernet0/0:
  zone:
    name: "inside"
    security_level: 100
  ip_address:
    ip: "10.10.4.254"
    mask: 24
GigabitEthernet0/1:
  zone:
    name: "outside"
    security_level: 0
  ip_address:
    ip: "10.10.3.1"
    mask: 24

Task 2: Create a Routing Plan for the Infrastructure


Activity

Define the Routing


The next schema to define is the schema for BGP. The BGP configuration in the Test Environment requires:
1. Specifying the local AS number
2. Specifying the list of neighbors by their peering IP address and AS number
3. Specifying the list of networks to advertise to peers

The first property to define is straightforward. An AS number should be a number type. This is the only
property defined in the schemas/properties/bgp.py file. The other two properties are defined in the
array_properties directory.

Step 1 View the schemas/properties/bgp.py file.

student@student-vm:labs/lab11$ cat schemas/properties/bgp.py

BGP_AS_PROPERTY = {"type": "number"}

The next two properties are identified as lists, which are called arrays in JSON. JSON Schema uses two
keywords for validating contents of an array:
• The contains keyword is used to ensure that at least one entry in the array adheres to the defined schema
• The items keyword is used to ensure that every entry in the array adheres to the defined schema or
schemas

The items keyword accepts either an array or an object as its value, and which data type is used affects how
JSON Schema validates the contents of an array. If the value is an array, then JSON Schema uses sequence
numbers to map each entry in the data to the schema defined for that sequence number, and validates that
each entry against its schema definition. For example, [{“type”: “number”}, {“type”: “string”}] would
ensure that the first entry is a number and the second entry is a string; any additional entries would not be
validated.

If the value is an object, then JSON Schema will ensure that all entries in the array adhere to the defined
schema. For example, {“type”: “number”, “minimum”: 1} would validate that each entry in the array is a
positive number. A common design pattern for modeling data is to use an array of objects, where each
object contains the same type of data. The best method for validating this type of data is using the items
keyword with a value of an object defining the same schema for each entry.

In addition to validating the type of data that is in an array, JSON Schema also provides mechanisms to:
• Validate its length with the minItems and maxItems fields
• Validate that each entry is unique with the uniqueItems field
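A hand-rolled sketch of what the items keyword (object form) combined with uniqueItems enforces: every entry must pass the same per-item check, and no two entries may be equal. This is an illustration of the logic only; the lab's real validation is done by the jsonschema library.

```python
# Sketch of array validation: "items" with an object value applies one
# check to every entry; "uniqueItems" rejects duplicate entries.

def neighbor_ok(entry):
    """Per-item check: a neighbor needs both 'ip' and 'remote_as' keys."""
    return isinstance(entry, dict) and {"ip", "remote_as"} <= entry.keys()


def valid_array(entries, item_ok):
    """True if every entry passes item_ok and all entries are unique."""
    all_valid = all(item_ok(entry) for entry in entries)
    # dicts are unhashable, so check uniqueness by first-occurrence index
    unique = all(entries.index(entry) == i for i, entry in enumerate(entries))
    return all_valid and unique


neighbors = [{"ip": "10.10.4.1", "remote_as": 65003}]
print(valid_array(neighbors, neighbor_ok))              # → True
print(valid_array(neighbors * 2, neighbor_ok))          # → False: duplicate neighbor
print(valid_array([{"ip": "10.10.4.1"}], neighbor_ok))  # → False: remote_as missing
```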

The contents of the BGP neighbor property is an array of objects that specify the peer IP address and AS
number of each neighbor. The peering IP property can reuse the IP_PROPERTY defined in the previous task, and
the AS number can use the same property as the local AS number. You only need to add an object to link
each peering IP address to its AS number. Since both properties are needed for building an adjacency, they
are defined as required properties. Each neighbor should also be unique.

The contents of the advertised networks property is an array of objects that specify the IP network and subnet
mask for each network. This is identical to the IP_OBJECT that was defined in the previous task. Since
networks only need to be advertised once, each entry should be unique.

Since the advertised networks property is able to reuse the IP_OBJECT property, the only object defined in
schemas/objects/bgp.py is the object for BGP neighbors.

Step 2 View, and then open the schemas/objects/bgp.py file in Visual Studio Code.

student@student-vm:labs/lab11$ cat schemas/objects/bgp.py

from ..properties.ip import IP_PROPERTY
from ..properties.bgp import BGP_AS_PROPERTY

BGP_NEIGHBOR_OBJECT = {
    "type": "object",
    "properties": {
        "ip": IP_PROPERTY,
        "remote_as": BGP_AS_PROPERTY,
    },
    "required": ["ip", "remote_as"],
    "additionalProperties": False,
}
"""
EX:
{
    "ip": "10.1.2.1",
    "remote_as": 65002
}
"""
student@student-vm:labs/lab11$ code schemas/objects/bgp.py

To avoid circular import dependencies, the properties that are arrays are defined in a different directory than
other properties. The BGP schema definition has two array type properties defined.

Step 3 View and then open the schemas/array_properties/bgp.py file in Visual Studio Code.

student@student-vm:labs/lab11$ cat schemas/array_properties/bgp.py

from ..objects.ip import IP_OBJECT
from ..objects.bgp import BGP_NEIGHBOR_OBJECT

BGP_NEIGHBOR_PROPERTY = {
    "type": "array",
    "items": BGP_NEIGHBOR_OBJECT,
    "uniqueItems": True,
}
"""
EX:
[
    {
        "ip": "10.1.2.1",
        "remote_as": 65002
    }
]
"""

BGP_ADVERTISED_NETWORK_PROPERTY = {
    "type": "array",
    "items": IP_OBJECT,
    "uniqueItems": True,
}
"""
EX:
[
    {
        "ip": "10.1.10.0",
        "mask": 24
    }
]
"""

student@student-vm:labs/lab11$ code schemas/array_properties/bgp.py

As mentioned in the beginning, the BGP schema requires that the local AS, BGP neighbors, and advertised
networks be defined. They are all added to the required key in the schema definition.

Step 4 View the schemas/bgp.py file.

student@student-vm:labs/lab11$ cat schemas/bgp.py

from .properties.bgp import BGP_AS_PROPERTY
from .array_properties.bgp import (
    BGP_NEIGHBOR_PROPERTY, BGP_ADVERTISED_NETWORK_PROPERTY
)

BGP_SCHEMA = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "type": "object",
    "properties": {
        "local_as": BGP_AS_PROPERTY,
        "neighbors": BGP_NEIGHBOR_PROPERTY,
        "advertised_networks": BGP_ADVERTISED_NETWORK_PROPERTY,
    },
    "required": ["local_as", "neighbors", "advertised_networks"],
}
"""
EX:
{
    "local_as": 65001,
    "neighbors": [
        {
            "ip": "10.1.2.1",
            "remote_as": 65002
        }
    ],
    "advertised_networks": [
        {
            "ip": "10.1.10.0",
            "mask": 24
        }
    ]
}
"""

Step 5 Generate the bgp schema using the schema_writer script. Use the ./schema_writer.py bgp command.

student@student-vm:labs/lab11$ ./schema_writer.py bgp


student@student-vm:labs/lab11$

Step 6 Open the newly created bgp_schema.json file in the root directory to view the full schema definition.

student@student-vm:labs/lab11$ cat bgp_schema.json

{
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"properties": {
"local_as": {
"type": "number"
},
"neighbors": {
"type": "array",
"items": {
"type": "object",
"properties": {
"ip": {
"type": "string",
"format": "ipv4"
},
"remote_as": {
"type": "number"
}
},
"required": [
"ip",
"remote_as"
],
"additionalProperties": false
},
"uniqueItems": true
},
"advertised_networks": {
"type": "array",
"items": {
"type": "object",
"properties": {
"ip": {
"type": "string",
"format": "ipv4"
},
"mask": {
"type": "number",
"minimum": 0,
"maximum": 32
}
},
"required": [
"ip",
"mask"
],
"additionalProperties": false
},
"uniqueItems": true
}
},
"required": [
"local_as",

410 Implementing DevOps Solutions and Practices Using Cisco Platforms (DEVOPS) © 2022 Cisco Systems, Inc.
"neighbors",
"advertised_networks"
]
}

student@student-vm:labs/lab11$ code bgp_schema.json

Step 7 Using the BGP Schema, complete the BGP configurations in the asa1/bgp.yml file for the asa1 device in the
Test Environment.

ASA1 – AS 65004

Neighbor Address Remote AS Advertised Networks

10.10.4.1 65003 10.10.3.0/24

student@student-vm:labs/lab11$ cat asa1/bgp.yml

---
local_as: 65004
neighbors:
- ip: "10.10.4.1"
remote_as: 65003
advertised_networks:
- ip: "10.10.3.0"
mask: 24

Once you have updated the BGP configurations, you can validate the configuration files against the schema.

Step 8 Validate that the BGP configurations adhere to the schema. Use the pytest --schema=bgp command.

If you added the configuration parameters to the asa1/bgp.yml file correctly, pytest will report
four passed tests. In case of failures, check the configuration parameters.

student@student-vm:labs/lab11$ pytest --schema=bgp

=========================================== test session starts ===========================================
platform linux -- Python 3.6.8, pytest-5.2.2, py-1.8.0, pluggy-0.13.0
rootdir: /home/student/labs/lab11
collected 4 items

tests/test_config_against_schema.py ....
[100%]

============================================ 4 passed in 0.23s ============================================

Step 9 To demonstrate that every entry in the advertised_networks array must adhere to the schema, add a new entry
that is the number 1.

student@student-vm:labs/lab11$ cat asa1/bgp.yml

---
local_as: 65004
neighbors:
- ip: "10.10.4.1"
remote_as: 65003
advertised_networks:
- ip: "10.10.3.0"
mask: 24
- 1

Step 10 Rerun pytest against the BGP configurations. You should receive an error.

student@student-vm:labs/lab11$ pytest --schema=bgp

=========================================== test session starts ===========================================
platform linux -- Python 3.6.8, pytest-5.2.2, py-1.8.0, pluggy-0.13.0
rootdir: /home/student/labs/lab11
collected 4 items
tests/test_config_against_schema.py ...F
[100%]

================================================= FAILURES =================================================
______________________ test_config_definitions_against_schema[bgp-model_schema0-asa1] ______________________

Abbreviated output

> raise error


E jsonschema.exceptions.ValidationError: 1 is not of type 'object'
E
E Failed validating 'type' in schema['properties']['advertised_networks']['items']:
E {'additionalProperties': False,
E 'properties': {'ip': {'format': 'ipv4', 'type': 'string'},
E 'mask': {'maximum': 32,
E 'minimum': 0,
E 'type': 'number'}},
E 'required': ['ip', 'mask'],
E 'type': 'object'}
E
E On instance['advertised_networks'][1]:
E 1

../.local/lib/python3.6/site-packages/jsonschema/validators.py:934: ValidationError
======================================= 1 failed, 3 passed in 0.37s =======================================

The error message indicates that the second entry of advertised_networks is not an object, the only valid
item type, which is the expected error.
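The type check that jsonschema performs on each array item can be sketched in plain Python (a simplified illustration, not the library's implementation). JSON Schema's "object" type corresponds to a Python dict, so the bare number 1 fails:

```python
def check_array_items_are_objects(instance, item_schema):
    """Return (index, message) pairs for items whose JSON type does not
    match the item schema's declared "type" of object."""
    errors = []
    for index, item in enumerate(instance):
        # JSON Schema's "object" type corresponds to a Python dict.
        if item_schema.get("type") == "object" and not isinstance(item, dict):
            errors.append((index, f"{item!r} is not of type 'object'"))
    return errors

item_schema = {"type": "object", "required": ["ip", "mask"]}
advertised_networks = [{"ip": "10.10.3.0", "mask": 24}, 1]

errors = check_array_items_are_objects(advertised_networks, item_schema)
# errors == [(1, "1 is not of type 'object'")]
```

The message mirrors the `1 is not of type 'object'` text in the pytest failure above.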

Step 11 Normalize the advertised_networks property by removing the invalid entry.

student@student-vm:labs/lab11$ cat asa1/bgp.yml

---
local_as: 65004
neighbors:
- ip: "10.10.4.1"
remote_as: 65003
advertised_networks:
- ip: "10.10.3.0"
mask: 24

Task 3: Create a Firewall Policy for the Infrastructure
Activity

Define the Firewall Policy


The last schema to define for the Test Environment is for the firewall policies. The configurations for this
environment will require that each policy has a name, is assigned to an interface, and has a list of rules
defining the access policy. Since each policy name must be unique, that will be used as the key to the top-
level objects in the schema. The name property should be a string, and the policy definitions will be an
array. Each policy element will need to define the protocol, source and destination IPs, and whether traffic
should be permitted or denied. Also, each policy element can define the destination port; when defining the
policy element object, the destination port property should not be required.

The protocol property should be a string type, but it should also restrict the values to valid protocols. JSON
Schema uses the enum field to restrict string types to a range of acceptable values. The value for an enum
field is an array of strings that are considered valid for the property. The protocol property will accept: “ip,”
“tcp,” “udp,” and “icmp” as valid protocols. The source and destination IP properties will reuse the
IP_OBJECT defined in the interface schema, and the destination port property is defined as a number within
the range of 1-65535. The action property is also a string, with “permit” and “deny” enum values.
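The enum check itself is simple membership testing; the following stdlib-only sketch illustrates the behavior (an illustration of the concept, not the jsonschema implementation):

```python
PROTOCOL_PROPERTY = {
    "type": "string",
    "enum": ["ip", "tcp", "udp", "icmp"],
}

def validate_enum(value, prop):
    """Minimal JSON Schema-style enum check: the value must be one of
    the strings listed under "enum"."""
    if "enum" in prop and value not in prop["enum"]:
        raise ValueError(f"{value!r} is not one of {prop['enum']}")

validate_enum("icmp", PROTOCOL_PROPERTY)  # valid protocol, no error raised
```

Passing a value such as "ah" would raise a ValueError, mirroring the schema failure demonstrated later in this task.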

Step 1 Open the schemas/properties/policy.py file. You will notice that the policy action and protocol properties use
the enum field.

student@student-vm:labs/lab11$ cat schemas/properties/policy.py

INTERFACE_PROPERTY = {"type": "string"}

POLICY_ACTION_PROPERTY = {
"type": "string",
"enum": ["permit", "deny"],
}

PROTOCOL_PROPERTY = {
"type": "string",
"enum": ["ip", "tcp", "udp", "icmp"],
}

PROTOCOL_PORT_PROPERTY = {
"type": "number",
"minimum": 1,
"maximum": 65535,
}

The policy object defines five properties. The destination port is not required, but all other properties are
necessary to define a policy entry.

Step 2 Open the schemas/objects/policy.py file in Visual Studio Code.

student@student-vm:labs/lab11$ cat schemas/objects/policy.py

from .ip import IP_OBJECT


from ..properties.policy import (
POLICY_ACTION_PROPERTY, PROTOCOL_PROPERTY, PROTOCOL_PORT_PROPERTY
)

POLICY_OBJECT = {
"type": "object",
"properties": {
"action": POLICY_ACTION_PROPERTY,
"protocol": PROTOCOL_PROPERTY,
"source": IP_OBJECT,
"destination": IP_OBJECT,
"destination_port": PROTOCOL_PORT_PROPERTY,
},
"required": ["action", "protocol", "source", "destination"],
}
"""
EX:
{
"action": "permit",
"protocol": "tcp",
"source": {
"ip": 10.1.0.0",
"mask": 24
},
"destination": {
"ip": "10.2.0.0",
"mask": 24
},
"destination_port": 22
}
"""

student@student-vm:labs/lab11$ code schemas/objects/policy.py

Similar to the BGP array properties, the policy array property defines the type as array, uses an already
defined object, and requires that each entry is unique.

Step 3 Open the schemas/array_properties/policy.py file.

student@student-vm:labs/lab11$ cat schemas/array_properties/policy.py

from ..objects.policy import POLICY_OBJECT

POLICY_PROPERTY = {
"type": "array",
"items": POLICY_OBJECT,
"uniqueItems": True,
}
"""
EX:
[
{
"action": "permit",
"protocol": "tcp",
"source": {
"ip": "10.1.0.0",
"mask": 24
},
"destination": {
"ip": "10.2.0.0",
"mask": 24
},
"destination_port": 22
}
]
"""

In order for a policy to be activated on a firewall, the policy must have the access permissions defined, and
the policy must be assigned to an interface. The policy schema requires both interface and policy properties
to be defined.

Step 4 Open the schemas/policy.py file.

student@student-vm:labs/lab11$ cat schemas/policy.py

from .properties.policy import INTERFACE_PROPERTY


from .array_properties.policy import POLICY_PROPERTY

policy_schema = {
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"patternProperties": {
r"^.": {
"type": "object",
"properties": {
"interface": INTERFACE_PROPERTY,
"policies": POLICY_PROPERTY,
},
"required": ["interface", "policies"],
"additionalProperties": False,
},
},
}
"""
EX:
{
"INSIDE": {
"interface": "inside",
"policy": [
{
"action": "permit",
"protocol": "tcp",
"source": {
"ip": "10.1.1.0",
"mask": 24
},
"destination": {
"ip": "10.1.2.0",
"mask": 24
},
"destination_port": 22
}
]
}
}
"""

student@student-vm:labs/lab11$ code schemas/policy.py
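The patternProperties key applies the nested schema to every top-level key whose name matches the regular expression, and the pattern `^.` matches any non-empty key name. A quick stdlib sketch of that matching behavior (an illustration, not the jsonschema internals):

```python
import re

pattern = r"^."  # the pattern used in policy_schema: any non-empty key name

policy_config = {
    "INSIDE": {"interface": "inside", "policies": []},
    "OUTSIDE": {"interface": "outside", "policies": []},
}

# Each top-level key that matches the pattern must satisfy the nested
# object schema, i.e. it must define both "interface" and "policies".
for name, body in policy_config.items():
    if re.search(pattern, name):
        missing = [k for k in ("interface", "policies") if k not in body]
        assert not missing, f"{name} is missing required keys: {missing}"
```

This is why the policy names INSIDE and OUTSIDE can be arbitrary: any non-empty name is accepted, but its value must conform to the nested object schema.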

Step 5 Generate the policy schema using the schema_writer script. Use the ./schema_writer.py policy command.

student@student-vm:labs/lab11$ ./schema_writer.py policy


student@student-vm:labs/lab11$

Step 6 Open the newly created policy_schema.json file in the root directory to view the full schema definition.

student@student-vm:labs/lab11$ cat policy_schema.json

{
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"patternProperties": {
"^.": {
"type": "object",
"properties": {
"interface": {
"type": "string"
},
"policies": {
"type": "array",
"items": {
"type": "object",
"properties": {
"action": {
"type": "string",
"enum": [
"permit",
"deny"
]
},
"protocol": {
"type": "string",
"enum": [
"ip",
"tcp",
"udp",
"icmp"
]
},
"source": {
"type": "object",
"properties": {
"ip": {
"type": "string",
"format": "ipv4"
},
"mask": {
"type": "number",
"minimum": 0,
"maximum": 32
}
},
"required": [
"ip",
"mask"
],
"additionalProperties": false
},
"destination": {
"type": "object",
"properties": {
"ip": {

© 2022 Cisco Systems, Inc. Implementing DevOps Solutions and Practices Using Cisco Platforms (DEVOPS) 419
"type": "string",
"format": "ipv4"
},
"mask": {
"type": "number",
"minimum": 0,
"maximum": 32
}
},
"required": [
"ip",
"mask"
],
"additionalProperties": false
},
"destination_port": {
"type": "number",
"minimum": 1,
"maximum": 65535
}
},
"required": [
"action",
"protocol",
"source",
"destination"
]
},
"uniqueItems": true
}
},
"required": [
"interface",
"policies"
],
"additionalProperties": false
}
}
}

Step 7 Using the Policy Schema, complete the policy configurations in the asa1/policies.yml file for the asa1 device in the
Test Environment.

ASA1

Policy Name Interface

INSIDE inside

OUTSIDE outside

ASA1 – INSIDE

Action Protocol Source Destination Destination Port


Permit ICMP 10.10.0.0/22 10.10.3.0/24 N/A

ASA1 – OUTSIDE

Action Protocol Source Destination Destination Port

Permit ICMP 10.10.3.0/24 10.10.0.0/22 N/A

student@student-vm:labs/lab11$ cat asa1/policies.yml

---
INSIDE:
interface: "inside"
policies:
- action: "permit"
protocol: "icmp"
source:
ip: "10.10.0.0"
mask: 22
destination:
ip: "10.10.3.0"
mask: 24
OUTSIDE:
interface: "outside"
policies:
- action: "permit"
protocol: "icmp"
source:
ip: "10.10.3.0"
mask: 24
destination:
ip: "10.10.0.0"
mask: 22

Once the policy configurations have been completed, you can validate the configuration files against the
schema by running pytest again. Because all the configurations are now complete, you could also run pytest
without the schema argument.

Step 8 Validate that the policy configurations adhere to the schema. Use the pytest --schema=policies command.

If you added the configuration parameters to the asa1/policies.yml file correctly, pytest will
report four passed tests. In case of failures, check the configuration parameters.

student@student-vm:labs/lab11$ pytest --schema=policies
=========================================== test session starts ===========================================
platform linux -- Python 3.6.8, pytest-5.2.2, py-1.8.0, pluggy-0.13.0
rootdir: /home/student/labs/lab11
collected 12 items

tests/test_config_against_schema.py ................
[100%]

============================================ 12 passed in 0.26s ============================================

Step 9 To demonstrate enum validation, change the ICMP policy in the OUTSIDE policy to use a protocol of “ah”
instead of “icmp.”

student@student-vm:labs/lab11$ cat asa1/policies.yml

---
INSIDE:
interface: "inside"
policies:
- action: "permit"
protocol: "icmp"
source:
ip: "10.10.0.0"
mask: 22
destination:
ip: "10.10.3.0"
mask: 24
OUTSIDE:
interface: "outside"
policies:
- action: "permit"
protocol: "ah"
source:
ip: "10.10.3.0"
mask: 24
destination:
ip: "10.10.0.0"
mask: 22

Step 10 Rerun pytest against the policy configurations. You should receive an error.

student@student-vm:labs/lab11$ pytest --schema=policies
=========================================== test session starts ===========================================
platform linux -- Python 3.6.8, pytest-5.2.2, py-1.8.0, pluggy-0.13.0
rootdir: /home/student/labs/lab11
collected 12 items
tests/test_config_against_schema.py ...........F
[100%]

================================================= FAILURES =================================================
____________________ test_config_definitions_against_schema[policies-model_schema2-asa1] ___________________


Abbreviated output

> raise error
E jsonschema.exceptions.ValidationError: 'ah' is not one of ['ip', 'tcp', 'udp', 'icmp']
E
E Failed validating 'enum' in schema['patternProperties']['^.']['properties']['policies']['items']['properties']['protocol']:
E {'enum': ['ip', 'tcp', 'udp', 'icmp'], 'type': 'string'}
E
E On instance['OUTSIDE']['policies'][0]['protocol']:
E 'ah'

../.local/lib/python3.6/site-packages/jsonschema/validators.py:934: ValidationError
====================================== 1 failed, 11 passed in 0.35s =======================================

The error message indicates that 'ah' is not one of the acceptable strings for the protocol property, which is
the expected error.

Step 11 Normalize the protocol field back to “icmp”.

student@student-vm:labs/lab11$ cat asa1/policies.yml

---
INSIDE:
interface: "inside"
policies:
- action: "permit"
protocol: "icmp"
source:
ip: "10.10.0.0"
mask: 22
destination:
ip: "10.10.3.0"
mask: 24
OUTSIDE:
interface: "outside"
policies:
- action: "permit"
protocol: "icmp"
source:
ip: "10.10.3.0"
mask: 24
destination:
ip: "10.10.0.0"
mask: 22

Summary Challenge
1. Which items of code/config should be done manually?
a. build code and build configuration
b. test code and test configuration
c. release code and release configuration
d. code and configuration
2. Which is not a mindset of a NetDevOps culture?
a. Changes are regular activities.
b. Each change is small.
c. The team is well practiced at the change process.
d. Changes are tested and verified.
e. Changes are significant and complicated.
3. Which option is not a benefit of automation in the NetDevOps environment and culture?
a. incremental development
b. frequent changes
c. no change freezes
d. continuous testing
e. faster feature development
4. Monitoring helps which function within the NetDevOps flow?
a. feedback loops
b. future trending
c. management structure
d. automated pipeline kickoff
5. Which item is not a modern configuration management tool?
a. Chef
b. Puppet
c. Ansible
d. Jira
e. SaltStack
6. Which option is not a benefit of IaC?
a. testing
b. versioning
c. self-documenting
d. IP address management
7. Which character indicates that the following items in a YAML file are part of a list?
a. colon (:)
b. plus (+)
c. hashtag (#)
d. dash (-)

8. Which three options are valid sources of truth in an environment with IaC? (Choose three.)
a. network device
b. server operating system
c. IP address management tool
d. CRM tool
e. service desk system
f. paper at your desk
g. There are hundreds of sources of truth per data domain.

Answer Key
Introduction to NetDevOps
1. C

Infrastructure as Code
1. B

Summary Challenge
1. D
2. E
3. C
4. A
5. D
6. D
7. D
8. C, D, E

Section 10: Implementing On-Demand Test
Environments at the Infrastructure Level

Introduction
Tools that were traditionally used in the systems realm for application deployments are now used as
configuration management automation tools to improve network operations. They include agent-based and
agentless tools, as well as stateful provisioning and orchestration tools: Puppet, Ansible, Chef, and
SaltStack, but also Terraform. These tools are not new. What is new and exciting is how their use in
networking is revolutionizing how infrastructures are managed. There is some overlap among these
platforms, but this section will primarily review Terraform and Ansible and how they can be used to spin
up, and subsequently maintain, infrastructure used for testing.

Configuration Management Tools


Some configuration management tools require an agent, which is a piece of software that must be installed
on the system or device to be managed. In a network automation use case, this requirement could be a
problem because some network devices cannot support running agents and loading general software. In a
situation where the network devices do not support the agent, you can use a tool that supports proxy agents
or a tool that does not require agents at all.
There are two core types of configuration management tools, and a third type that is usually a derivation of
an agent-based tool:
• Agent-based tool: This tool requires an agent to be installed on every device that the configuration
management tool will manage.
• Agentless tool: This tool does not require an installed agent on every device and it will communicate
via SSH or another given API that the device supports.
• Proxy-agent tool: This type of tool does not require an agent on every device, but it does require
some type of “process” or “worker” to communicate with the controller and with the managed device.

Configuration management tools define target end states, while allowing maximum flexibility to automate
one device or 10,000 devices. An example would be to ensure that your networking devices are running the
latest software version.
When you describe what you want to automate in one of these tools, you often use a domain-specific
language or a structure markup language such as YAML.
A domain-specific language is designed specifically to express solutions to problems in a particular domain;
in other words, domain-specific languages are special-purpose computer languages and limited compared to
a language like Python or Ruby, which are general-purpose languages.
Some of the benefits of using these types of tools are fast implementation, lower failure rates, shortened
times between fixes, and, importantly, faster user adoption for nonprogrammers.
This approach also brings the networking environment close to the concept of continuous delivery. It
enables IaC, which is the idea of writing software for the infrastructure, so you deploy your environment
with code rather than in a manual process, which makes it a programmable infrastructure.
From a networking perspective, it is common to deploy changes manually. This change could be adding a
VLAN across a data center or campus, or making daily changes to firewall policies for new applications that
are deployed. When there is a defined manual workflow to perform a set of tasks, proper tools should be
used to automate it. It does not make sense to spend an hour performing a change that could take
just a few minutes using a properly engineered tool. This is where open source tools such as
Puppet, Chef, Ansible, and SaltStack can dramatically reduce the number of manual interactions with the
network.
These tools are often referred to as DevOps tools. More specifically, they are configuration management
and automation tools that are used by organizations that have implemented some form of DevOps practices.
These tools enable you to automate application, infrastructure, and network deployments to a high degree
without the need to do manual programming. An example would be using a language like Python. These
tools reduce the time that it takes to perform certain tasks and offer greater predictability.

Configuration Management for Networking

Puppet was created in 2005 and has been in use longer than Chef and Ansible. Puppet manages systems in a
declarative manner, which means that you define an ideal state of the target system without worrying about
how it happens. In reality, that approach is true for all these tools. Puppet is written in Ruby and refers to its
automation instruction set as Puppet "manifests." The major point to realize is that Puppet is agent-based:
a software agent must be installed on every device that you want to manage with Puppet, such as servers,
routers, switches, and firewalls. It is often not possible to load an agent on many network devices, which
limits the devices that Puppet can manage directly. Proxy devices can work around this limitation, but the
extra setup means that Puppet has a greater barrier to entry when getting started.
Chef, another popular configuration management tool, follows much the same model as Puppet. Chef is
based in Ruby, uses a declarative model, is agent-based, and refers to the Chef automation instruction as
"recipes" (grouped, they are called "cookbooks").
It is often difficult to load agents onto machines to automate them. When it is technically possible, it often
increases the time that is necessary to deploy the solution or tool. Ansible was created as an alternative to
Puppet and Chef. Red Hat acquired Ansible in 2015, and IBM subsequently acquired Red Hat. The two
notable differences among Puppet, Chef, and Ansible are that Ansible is written in Python and that it is
agentless. Being natively agentless significantly lowers the barrier to entry from an automation perspective.
Because Ansible is agentless, it can integrate and automate a device using an API. For example, integrations
can use REST APIs, Network Configuration Protocol (NETCONF), SSH, or even Simple Network
Management Protocol (SNMP), if desired. "Playbooks" are sets of Ansible tasks (instructions) and are used
to automate devices. Each playbook consists of one or more "plays," each of which consists of individual
tasks.
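As a hedged orientation, a minimal playbook might look like the following sketch (the inventory group name is invented for illustration, and the cisco.ios collection is assumed to be installed):

```yaml
---
# One play targeting an inventory group, containing a single task.
- name: Collect version information
  hosts: routers            # inventory group name is an assumption
  gather_facts: false
  tasks:
    - name: Run show version over SSH
      cisco.ios.ios_command:
        commands:
          - show version
```

Each play maps a set of hosts to an ordered list of tasks, and each task invokes one module.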
It is worth noting that each of these tools had its start in automation for applications and cloud
infrastructure. It was not until after each project and company had a significant amount of traction that the
companies started to include network automation in their portfolios.

Salt, by SaltStack, is another agent-based configuration management and orchestration tool similar to
Puppet and Chef. It requires each Salt-managed node to have a Salt "minion" service installed. That is
usually not a blocker on a server, but in the networking world it is often impossible to install custom
software on the network equipment that you want to manage. To solve this problem, Salt offers the proxy
minion: intermediate software that does not need to be installed on the targeted device and instead mediates
between the controller and the managed device. Salt is written in Python and uses a message queue-based
system (ZeroMQ) to push commands and configuration to minions and to allow publish and subscribe
messaging.

Key Concepts for Infrastructure Management


Ansible, Salt, Puppet, and Chef are known as configuration management tools. Terraform is best known for
its ability to create infrastructure, but in a manner that is complementary to configuration management tools.
Terraform tracks the state of the infrastructure and always allows you to declaratively indicate the desired
state of the infrastructure. For example, independent of the number of nodes that are already deployed,
Terraform allows you to declare that there should be N nodes, but does not require that you specify how this
state is to be achieved. This approach is in contrast with doing something imperatively, which requires that
you specify how to accomplish an infrastructure state. For example, if there are more than N nodes, then
destroy x nodes or if there are fewer than N nodes, activate y nodes.
Terraform works well with purpose-built images, so you can create on-demand infrastructure as needed for
your applications. However, it is not always possible to have prebuilt images, making hybrid use of tools
like Terraform and Ansible quite common—using Terraform to instantiate infrastructure and Ansible to
manage it.
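The declarative idea can be sketched in a few lines of Python: the user states only the desired node count, and a toy reconciler (an illustration, not Terraform's actual algorithm) derives the create or destroy actions:

```python
def reconcile(desired_count, current_nodes):
    """Toy declarative reconciler: compare the desired state with the
    current state and return the actions needed to converge."""
    delta = desired_count - len(current_nodes)
    if delta > 0:
        return [("create", i) for i in range(delta)]
    if delta < 0:
        return [("destroy", node) for node in current_nodes[delta:]]
    return []  # already at the desired state

# With two nodes running and three desired, the plan creates one node.
plan = reconcile(3, ["node-a", "node-b"])
# plan == [("create", 0)]
```

The caller never spells out the create or destroy steps; the tool computes them from the difference between desired and current state, which is exactly the declarative contract described above.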
• Mutable infrastructure vs. immutable infrastructure
• Imperative vs. declarative
• Management: controller vs. controller-less
• Managed Nodes: agent vs. agentless

1. Which two configuration management tools are written in Python? (Choose two.)
a. Ansible
b. Puppet
c. Chef
d. SaltStack
e. Juju
f. Rudder

Terraform Overview
Introduction to Terraform
• Infrastructure resource manager
• Compose and combine infrastructure resources to build and maintain a desired state (declarative).
• Plan and execution are distinct actions.
• Manages all resources through APIs.
• Resources and data can be reused within modules.
• Terraform uses core and plug-in components for basic functions and extensibility.
• Tracks the state of the infrastructure.

Terraform is a tool for building and changing infrastructure resources. Terraform acts as a resource manager
for described infrastructure components. Each component is described by using the Terraform HashiCorp
Configuration Language (HCL) syntax and configuration files. Infrastructure changes are performed in a
declarative manner. Terraform composes and combines infrastructure resources to build and maintain a
desired state.
To reach a desired state, Terraform creates an execution plan that describes the actions that are needed, and
later applies the plan. Terraform supports Day 1 operations along with Day 2+, so Terraform can recognize
changes in the configuration files and create incremental plans and execute them. With Terraform, it is
possible to achieve a "dry-run" view of proposed changes. Terraform builds an execution plan, but it does
not execute it immediately. Plan and execution are two distinct actions in Terraform.
Configuration data in Terraform can be stored in multiple files and reused by Terraform. The application
constructs a dependency graph from the configuration file and uses this graph to create plans and refresh the
state. Terraform stores state information about managed infrastructure and configuration. The state is used
to map real-world resources to the user's configuration and keep track of metadata. A local Terraform state
file is used to create plans and perform changes to infrastructure. Before any operation, Terraform performs
a state refresh and updates it with recent information about real infrastructure.
In the Terraform architecture, there are two parts: the core and plug-ins.

The Terraform core uses remote procedure calls (RPCs) to communicate with plug-ins. The Terraform core
is a statically compiled binary written in the Go language and executed in the CLI. The main functions of
the core element are as follows:
• IaC: Reading and interpolating configuration files and modules
• Resource state management
• Creating a resource graph
• Plan execution
• Communication with plug-ins

Terraform was designed to support an infrastructure that is based on plug-ins. All providers and
provisioners that are used by Terraform are considered plug-ins. Terraform plug-ins are written in Go and
expose implementation for a specific service (such as for Cisco Application Centric Infrastructure [Cisco
ACI]). The main functions of plug-ins are as follows:
• Initialization of any included libraries used to make API calls
• Authentication with the infrastructure provider
• Definition of resources that map to specific services

HashiCorp Configuration Language


The main purpose of Terraform language is to declare resources. A resource describes a single object in an
infrastructure. A group of resources can be described in a "module." A module might describe a set of
resources; it also describes relationships between single resources.
• Resources are declared in a Terraform (TF) file.
• The syntax is HCL.
• HCL is human readable.

Terraform code is organized as a collection of configuration files that are kept in the same working
directory. Basic Terraform configuration might contain only a single .tf file. As user infrastructure grows,
configuration can also be expanded into multiple files to efficiently describe infrastructure.
Terraform configuration is defined in terms of an HCL syntax. HCL is a system for defining configuration
languages for applications, so the syntax can also be found in other applications. Terraform has two main
syntax constructs: arguments and blocks.
• Arguments
– Arguments assign a value to a name.
router = "Cisco"

– The identifier appears before the equal sign and the argument's value appears after the equal sign,
just like you would see in Python.
• Blocks
– Blocks are containers for other content. The definition of a block consists of type, labels (such as
provider type or resource name), and the body delimited by { } characters. A block body might
contain any other arguments or blocks (nested). This approach allows for building a hierarchical
model of infrastructure resources.
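For illustration, here is a minimal sketch of the block syntax, using a resource type that appears later in this section (the names and values are placeholders):

```hcl
# Block type "resource" with two labels: the resource type and a local name
resource "vsphere_virtual_machine" "example" {
  # Arguments assign a value to a name
  name   = "example-vm"
  memory = 4096

  # A nested block inside the body builds a hierarchical model
  network_interface {
    adapter_type = "vmxnet3"
  }
}
```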

Terraform Providers
As stated previously, Terraform was designed to support an infrastructure that is based on plug-ins. All
providers that are used by Terraform are plug-ins.
• Providers abstract the API layer of resource providers.
– New resources are available for Terraform to provision and manage.
• Authentication, authorization, and accounting (AAA) configuration is required as part of the provider
definition in the TF file.
• Example: Cisco ACI supports user and X509 certificate-based authentication.

Terraform plug-ins are written in Go and expose implementation for a specific service (such as Cisco ACI).
The main functions of plug-ins are as follows:
• Initialization of any included libraries that are used to make API calls
• Authentication with the infrastructure provider
• Definition of resources that map to specific services

Providers understand API interactions and expose resources. Using providers, Terraform can create,
manage, and update infrastructure resources. The supported infrastructure could be physical machines,
virtual machines, network devices, or containers. Almost any resource types can be supported in Terraform
as new providers can be added to the tool.
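As a sketch, a provider definition carrying the authentication parameters mentioned above might look like the following for Cisco ACI (the attribute names follow the Cisco ACI provider's user-based authentication style; the values are placeholders, and X509 certificate-based authentication would use cert_name and private_key instead of a password):

```hcl
provider "aci" {
  username = "admin"
  password = "C1sco12345"
  url      = "https://apic.example.com"
  insecure = true   # accept a self-signed certificate in a lab environment
}
```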

Arguments Composing Resources

One of the Terraform providers is the Cisco ACI Terraform provider that allows you to manage Cisco ACI
in the data center with Terraform. That provider interacts with Cisco APIC resources (via its APIs). It is
worth noting that this provider requires an authentication setup to properly communicate with Cisco APIC.
The following is an example Terraform configuration file (main.tf):
resource "aci_tenant" "demotenant" {
name = "${var.tenant_name}"
description = "tenant description"
}

To understand how the Cisco ACI Tenant resource is described using HCL syntax, the following examines
this example:
• The resource block has two labels, "aci_tenant" and "demotenant." Those labels describe the
resource type and resource name, respectively.
• Nested fields in the body block: "name" and "description" describe arguments with assigned values.

Because the main configuration file contains a reference to a variable that has not yet been declared, an
example Terraform variables file (variables.tf) may be used to store variables:
variable tenant_name {}

This code just declares the required variable, which is named tenant_name, but at this moment, does not
assign any value to it. The value of the variable is still needed and can be defined in the terraform.tfvars file.
The following is an example Terraform tfvars file (terraform.tfvars):
tenant_name = "tenant-01"

The examples use three different filenames (main.tf, variables.tf, terraform.tfvars). Although using "main" and "variables" in filenames is a good naming convention, Terraform will read and accept configuration code from any file with a .tf file extension. The .tfvars extension is processed differently: Terraform will read variable values only from a file named exactly terraform.tfvars or from files matching the pattern *.auto.tfvars.

The Terraform state is stored in a local file that is named terraform.tfstate. Terraform uses this local state to
create plans and make changes to infrastructure. Prior to any operation, Terraform does a refresh to update
the state with the infrastructure.
The state uses the JSON encoding format. The JSON format is human-readable and allows for integration
with other tools (such as Ansible).
Terraform also manages state backups automatically, so a state backup is always available if a state modification fails or an application bug causes an erroneous update. Due to the sensitivity of the state file, backups of state modifications cannot be disabled.
The following is an example tfstate file:

"aci_application_epg.EPG-10_1_3_0_24": {
    "type": "aci_application_epg",
    "depends_on": [
        "aci_application_profile.Test_APP1",
        "aci_bridge_domain.BD-10_1_3_0_24"
    ],
    "primary": {
        "id": "uni/tn-Test/ap-Test_APP1/epg-EPG-10.1.3.0_24",
        "attributes": {
            "annotation": "",
            "application_profile_dn": "uni/tn-Test/ap-Test_APP1",
            "description": "",
            "exception_tag": "",
            "flood_on_encap": "disabled",
            "fwd_ctrl": "",
            "has_mcast_source": "no",
            "id": "uni/tn-Test/ap-Test_APP1/epg-EPG-10.1.3.0_24",
            "is_attr_based_e_pg": "no",
            "match_t": "AtleastOne",
            "name": "EPG-10.1.3.0_24",
            "name_alias": "",
            "pc_enf_pref": "unenforced",
            "pref_gr_memb": "exclude",
            "prio": "unspecified",
            "relation_fv_rs_bd": "uni/tn-Test/BD-BD-10.1.3.0_24",
            "shutdown": "no"
        },
        "meta": {
            "schema_version": "1"
        },
        "tainted": false
    },
    "deposed": [],
    "provider": "provider.aci"
},

Usage of Variables

Terraform variables are used to pass input parameters into Terraform modules. Because all parameters are supplied as variables, the module code itself does not need to be modified, and a module can be shared between different configurations without altering its source code.
An input variable is declared with the variable keyword:
variable "aci_private_key" {
default = "/home/nvermand/fabric1_admin.key"
}
variable "aci_cert_name" {
default = "admin_cert"
}
variable "provider_profile_dn" { default = "uni/vmmp-VMware" }
variable "bd_subnet" {}
variable "gateway" {}
variable "vmm_domain_dn" {}

The first label after the variable keyword is a name for the variable, which must be unique among all
variables in the same module.
The variable declaration can optionally include a type argument to specify which value types are accepted
for the variable. Also, the variable declaration can include a default argument. If present, the variable is
considered to be optional and the default value will be used if no value is set when calling the module or
running Terraform.
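For example, a declaration can combine both optional arguments (a sketch; the unquoted type constraint syntax shown here is for recent Terraform versions):

```hcl
variable "tenant_name" {
  type    = string        # only string values are accepted
  default = "tenant-01"   # makes the variable optional
}
```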

If no default is set within a variable declaration block, the variable is required. Terraform can load and populate variables from all files with names that match terraform.tfvars or *.auto.tfvars in the current working directory. A tfvars file uses the same HCL syntax as the .tf files that hold the Terraform configuration. Variable files can also be specified with the -var-file command-line option, which may be given multiple times in a single command.
vsphere_compute_cluster = "pod-03"
folder = "ACI/demos"
aci_vm1_name="aci-tf-test1"
aci_vm2_name="aci-tf-test2"
aci_vm1_address = "1.1.1.10"
aci_vm2_address = "1.1.1.11"
bd_subnet = "1.1.1.1/24"
gateway = "1.1.1.1"

There are also additional ways of defining variables:
• Command line: Use the -var option. Values set this way are not saved and must be supplied on every Terraform execution.
• Environment variables: Terraform reads any environment variable named TF_VAR_<name> and uses it to populate the corresponding variable.
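For example, the tenant_name variable declared earlier could be supplied either way (a sketch of the two mechanisms):

```shell
# Supply a value on the command line (not persisted between runs)
terraform apply -var 'tenant_name=tenant-01'

# Or export it as an environment variable; Terraform maps
# TF_VAR_tenant_name to the variable named tenant_name
export TF_VAR_tenant_name=tenant-01
terraform apply
```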

Data Sources vs. Resources

Data sources allow Terraform to fetch information about existing infrastructure so that the Terraform configuration can use it.
Typically, data sources correspond to an infrastructure object type that is accessed via an API. A data source is accessed via a "data" block and can be declared using code similar to the following:
data "vsphere_datacenter" "dc" {
name = "${var.vsphere_datacenter}"
}

The data block requests that Terraform read from a "vsphere_datacenter" data source and export the result under the local name "dc." That name is used to reference the information elsewhere in the configuration. The combined data source type and name form an identifier and must therefore be unique within a module.
Arguments are defined within the body block (between { and }). Which arguments are available depends on the specific data source.
Other Terraform resources can consume data, such as virtual machines:
resource "vsphere_virtual_machine" "csr1kv1" {
name = "csr1kv1"
resource_pool_id = "${data.vsphere_resource_pool.pool.id}"
datastore_id = "${data.vsphere_datastore.datastore.id}"

# << Output truncated >>


}

In the example, to create a new instance within a VMware ESXi API, you would first need to know detailed
information about the resource pool and datastore, such as their internal identifiers. You could query the
API first (using data resources), and reuse that information when creating objects.

When comparing data sources to resources, the primary goal of managed resources is to describe objects that Terraform controls. Both types take arguments and export attributes for use in Terraform configuration, but managed resources are used to create, update, and delete infrastructure, whereas data resources are used only to read information about it. During apply and destroy operations, data resources are not modified (they are read-only), but the infrastructure elements described by managed resources might be.
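The distinction is visible directly in the syntax: the same kind of infrastructure object can appear as a managed resource (created and destroyed by Terraform) or as a data source (only read). A sketch:

```hcl
# Managed resource: Terraform creates, updates, and deletes this object
resource "vsphere_virtual_machine" "vm1" {
  name = "vm1"
  # (remaining arguments omitted)
}

# Data source: Terraform only reads information about an existing object
data "vsphere_datacenter" "dc" {
  name = "Datacenter1"
}
```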

Directed Acyclic Graph

Based on the Terraform configuration, the application builds a dependency graph of resources and walks this graph to generate plans and refresh the state. Multiple node types can exist in the graph:
• Resource node
• Provider configuration node
• Resource meta-node

A standard depth-first traversal is done to walk the graph. Graph walking is done in parallel: a node is walked only after all of its dependencies have been walked.

Terraform Architecture

Typically, configuration of Terraform will include the following elements:
• Single or multiple configuration .tf files
• Single or multiple tfvars variable files
• Single or multiple modules

The Terraform state file is created when a plan is first executed and is refreshed on subsequent operations. The state file stores information about managed infrastructure and configuration, and is used to map the configuration to real-world resources.
Content in configuration or variable files can be organized to reflect roles and functions. For example,
configuration elements that are related to a specific function, provider, or technology are stored in dedicated
files.
Modules can be used to increase code reusability. With the use of modules, the code becomes clearer and
easier to understand for developers, especially with the growth of infrastructure. Multiple developers can
share Terraform configuration files in a version control system (such as Git), and different teams can adopt
them. Structured organization of code helps in administering and maintaining particular functions of
Terraform configuration.

Terraform State Declarations


Terraform state declarations describe the desired infrastructure.
### csr1kv1.tf
resource "vsphere_virtual_machine" "csr1kv1" {
name = "csr1kv1"
resource_pool_id = "${data.vsphere_resource_pool.pool.id}"
datastore_id = "${data.vsphere_datastore.datastore.id}"

num_cpus = 4
memory = 4096
guest_id = "other26xLinux64Guest"
wait_for_guest_net_timeout = 0
wait_for_guest_ip_timeout = 5
shutdown_wait_timeout = 1
force_power_off = true
# Gi1
network_interface {
network_id = "${data.vsphere_network.vm_network_1.id}"
adapter_type = "vmxnet3"
}

# << Output truncated >>
}

Terraform analyzes its configuration content and takes appropriate actions to provision resources. The
following example of code is for a Cisco Cloud Services Router (CSR) 1000V installed on a VMware ESXi
hypervisor:

### csr1kv1 vm
resource "vsphere_virtual_machine" "csr1kv1" {
name = "csr1kv1"
resource_pool_id = "${data.vsphere_resource_pool.pool.id}"
datastore_id = "${data.vsphere_datastore.datastore.id}"

num_cpus = 4
memory = 4096
<<output truncated>>

# Gi1
network_interface {
network_id = "${data.vsphere_network.vm_network_1.id}"
adapter_type = "vmxnet3"
}

# Gi2
network_interface {
network_id = "${data.vsphere_network.vm_network_4.id}"
adapter_type = "vmxnet3"
}

<<output truncated>>

# Gi5 (management)
network_interface {
network_id = "${data.vsphere_network.network.id}"
adapter_type = "vmxnet3"
}

cdrom {
datastore_id = "${data.vsphere_datastore.datastore.id}"
path = "csr1kv1/bootstrap.iso"
}

disk {
label = "disk0"
attach = true
path = "csr1kv1/csr.vmdk"
disk_mode = "independent_nonpersistent"
datastore_id = "${data.vsphere_datastore.datastore.id}"
}

depends_on = [
vsphere_host_port_group.pg
]
}

The main resource type is the vsphere_virtual_machine that is named "csr1kv1." There can be multiple resources of the same type, as long as each has a unique name. The main object contains nested definitions, which are attributes of the Cisco CSR 1000V router. Each compute object needs dedicated CPU and memory, which are defined by attributes. There are also two other VMware-specific parameters, the datastore ID and the resource pool ID. These parameters are retrieved using data sources and tell the hypervisor how this VM should consume the available resources. In the next code blocks, network interfaces are added along with a CD-ROM drive, which is used as part of the bootstrap process.

Ubuntu Linux VM Example
The configuration file for installing an Ubuntu Linux Server is very similar to the configuration file for a Cisco CSR 1000V. Different resources are assigned to the VM, and the hypervisor is informed about a different operating system running in the VM ("guest_id"). The details for the disk are changed, but the most interesting part is the network. Notice that the eth1 interface points to the same data source (vm_network_1) as in the router example, so the Cisco CSR 1000V Gi1 interface is connected to the eth1 interface of the Linux server. Both VMs now share a private virtual network.
### k8s1 vm
resource "vsphere_virtual_machine" "k8s1" {
name = "k8s1"
resource_pool_id = "${data.vsphere_resource_pool.pool.id}"
datastore_id = "${data.vsphere_datastore.datastore.id}"

num_cpus = 2
memory = 8192
guest_id = "ubuntu64Guest"
<<output truncated>>

# eth0 (management)
network_interface {
network_id = "${data.vsphere_network.network.id}"
adapter_type = "vmxnet3"
}

# eth1
network_interface {
network_id = "${data.vsphere_network.vm_network_1.id}"
adapter_type = "vmxnet3"
}

disk {
label = "disk0"
attach = true
path = "k8s1/k8s1.vmdk"
disk_mode = "independent_nonpersistent"
datastore_id = "${data.vsphere_datastore.datastore.id}"
}

depends_on = [
vsphere_host_port_group.pg
]
}

Data Sources
As in the previous example, data sources allow managed resources to be accessed and referenced. VMs attach network interfaces by specifying network identifiers, but those identifiers are first retrieved by data sources.
data "vsphere_network" "vm_network_1" {
name = "vm_network_1"
datacenter_id = "${data.vsphere_datacenter.dc.id}"

depends_on = [
vsphere_host_port_group.pg
]
}

Dependencies
Another interesting part of Terraform configurations is dependencies.
data "vsphere_network" "vm_network_1" {
<<output truncated>>

depends_on = [
vsphere_host_port_group.pg
]
}

resource "vsphere_host_port_group" "pg" {
count = 6
name = "vm_network_${count.index}"
host_system_id = "${data.vsphere_host.esxi_host.id}"
virtual_switch_name = "${vsphere_host_virtual_switch.devnet_lab_vswitch[count.index].name}"
}

Terraform automatically builds a dependency tree, but it is also possible to manually indicate the order in which resources are created. Switches and networks are good examples of this feature: in the virtual networking world, a VM network needs a vSwitch port group to exist first. The depends_on argument lists objects that Terraform must create first; here, the vm_network_1 data source is read only after the "pg" port group has been created.

Terraform: Common CLI Commands


Terraform is a CLI-based application and has several subcommands depending on the expected action:

Command              Description

terraform plan       Generates and displays the execution plan computed by Terraform.

terraform apply      Deploys the infrastructure as described in Terraform configuration files.

terraform destroy    Destroys previously deployed and existing infrastructure as described in Terraform configuration files and in Terraform state files.

terraform graph      Generates a visual representation of either a configuration or an execution plan.

terraform import     Imports existing infrastructure resources into the configuration.

terraform init       Initializes a working directory containing configuration files. Run this command first in a new working directory.

terraform show       Provides human-readable output from a state or plan file.

terraform validate   Validates local configuration files.
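A typical sequence chains these commands together; for example, from the working directory that contains the .tf files:

```shell
terraform init       # download provider plug-ins and initialize the directory
terraform validate   # check the configuration files for errors
terraform plan       # preview the changes Terraform would make
terraform apply      # create or update the infrastructure
terraform destroy    # tear the infrastructure back down
```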

Cisco Terraform Providers


• Cisco ACI
• Cisco ASA Adaptive Security Appliances

Cisco products and solutions are supported in Terraform as providers. A provider is an abstraction layer
between the Terraform application and Cisco solution APIs.
The Cisco ASA provider can manage the firewall-related configuration of Cisco ASA hardware appliances
or virtual devices. The Cisco ASA provider can declaratively manage the following Cisco ASA appliance
configuration elements:
• Access rules
• Network objects and object groups
• Network service groups
• Static routes

The Cisco ACI provider supports more than 90 resources and data sources that cover all aspects of bringing
up and configuring the Cisco ACI infrastructure in on-premises, WAN, access, and cloud environments.
The Terraform Cisco ACI provider also helps customers optimize network compliance and operations, and
maintain a consistent state across the entire multicloud infrastructure.

Integrate Terraform with Ansible

As you know by now, the DevOps ecosystem consists of many different tools. Automation workflows
commonly include multiple tools that are best dedicated to a particular task. The same is true when
integrating Terraform with Ansible. You can have the best of both applications and use both tools in a
single workflow. Terraform is a provisioning tool, whereas Ansible is more focused on configuration
management. A typical example of this process might be the deployment of an instance in a public cloud.
Terraform can be used to create VMs or cloud instances, while Ansible can perform the rest of the
deployment (such as the operating system, application installation, and configuration).
Because they are separate tools, multiple integration options exist:
• Ansible ships with a Terraform module. This module can deploy resources using Terraform and pull
resource information back into Ansible.
• Terraform can call Ansible using "provisioners," which call your shell scripts or Ansible code.

For Ansible to know information about your infrastructure, it needs to access the Terraform state file. The
state file keeps track of the Terraform managed resources and can be consumed by Ansible to access created
resources.

Common methods of exposing the Terraform state file to Ansible are as follows:
• Inline inventory: Ansible can be invoked with command-line parameters that specify host attributes.
• Dynamic inventory: Ansible can read the Terraform state file and create inventory dynamically based
on the Terraform deployment result.
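As a sketch of the inline inventory approach, an address created by Terraform could be handed straight to Ansible on the command line (the output name vm_ip and the playbook name are hypothetical):

```shell
# Read an address from the Terraform state via a declared output value
VM_IP=$(terraform output -raw vm_ip)

# Pass it to Ansible as a one-host inline inventory (note the trailing comma)
ansible-playbook -i "${VM_IP}," configure_vm.yml
```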

1. What is the primary use case for Terraform?
a. building and changing infrastructure
b. configuration management
c. continuous integration/continuous deployment
d. code version control

Discovery 12: Manage On-Demand Test Environments with Terraform
Introduction
Creating an on-demand test environment can be a time-consuming task when performed manually. This activity provides a high-level view of how to describe a test environment with an IaC approach, using configuration files to create and destroy test environments dynamically. During the process, you will learn how to describe infrastructure as code with Terraform. Finally, you will use Terraform to create and destroy a test environment with single commands.

Topology

Job Aid

Device Information

Device                  Description                               FQDN/IP Address           Credentials

Student Workstation     Linux Ubuntu VM                           192.168.10.10             student, 1234QWer

Test ESXi               Test Environment VMware ESXi hypervisor   esxi_test 192.168.10.70   root, 1234QWer

Test Environment ASA1   Test Environment Cisco ASA Firewall       10.99.0.51                cisco, cisco
Command List
The table describes the commands that are used in this activity. The commands are listed in alphabetical
order so that you can easily locate the information that you need. Refer to this list if you need configuration
command assistance during the lab activity.

Command Description

cd directory name To change directories within the Linux file system, use the cd
command. You will use this command to enter into a directory where
the lab scripts are housed. You can use tab completion to finish the
name of the directory after you start typing it.

cat file The most common use of the cat Linux command is to read the
contents of files. It is the most convenient command for this purpose in
UNIX-like operating systems.

terraform plan                    To generate and view the execution plan computed by Terraform

terraform apply                   To deploy infrastructure as described in Terraform configuration files

terraform destroy                 To destroy previously deployed and existing infrastructure as described in Terraform configuration files and in Terraform state files

show running-config access-list   To validate access-list entries on the Cisco ASA firewall

show running-config object        To validate network objects on the Cisco ASA firewall

show running-config route         To validate static routes on the Cisco ASA firewall

ls                                To list the contents of a directory or information about a file

Task 1: Review Terraform Configuration for Test Environment
Understand and analyze the content of the Terraform infrastructure-as-code configuration files.

Activity

List the Terraform Configuration Files

Step 1 In the Student Workstation, open a terminal window and change the directory to ~/labs/lab12/test_env using
the cd ~/labs/lab12/test_env/ command.

student@student-vm:$ cd ~/labs/lab12/test_env/
student@student-vm:lab12/test_env$

Step 2 List the directory content using the ls -al command.

student@student-vm:lab12/test_env$ ls -al
total 56
drwxrwxr-x 3 student student 4096 Nov 6 20:47 .
drwxrwxr-x 4 student student 4096 Nov 6 20:47 ..
-rw-rw-r-- 1 student student 1552 Nov 5 12:19 csr1kv1.tf
-rw-rw-r-- 1 student student 1552 Nov 5 12:19 csr1kv2.tf
-rw-rw-r-- 1 student student 2059 Nov 6 14:34 csr1kv3.tf
-rw-rw-r-- 1 student student 2016 Nov 5 11:47 data_sources.tf
-rw-rw-r-- 1 student student 984 Nov 6 09:39 k8s1.tf
-rw-rw-r-- 1 student student 984 Nov 6 09:39 k8s2.tf
-rw-rw-r-- 1 student student 984 Nov 6 09:40 k8s3.tf
-rwxr-xr-x 1 student student 223 Nov 6 14:34 power_off_asa.sh
-rwxr-xr-x 1 student student 224 Nov 6 14:34 power_on_asa.sh
drwxr-xr-x 3 student student 4096 Nov 4 09:30 .terraform
-rw-rw-r-- 1 student student 158 Nov 4 09:47 variables.tf
-rw-rw-r-- 1 student student 628 Nov 6 14:54 vswitch.tf

Explore the csr1kv1.tf Terraform File

Step 3 Open the csr1kv1.tf file using the cat csr1kv1.tf command.
You will be presented with a Terraform declaration of a virtual machine. The virtual machine will use ESXi vSphere capabilities and types provided within the Terraform tool. In the output, you will see the configuration of the virtual machine name, memory, processors, network interfaces, and storage. Because some of the resources (such as network interfaces) have dependencies, Terraform uses data sources to collect the required information. Terraform is capable of automatically determining the order in which to create resources; however, it is also possible to explicitly declare dependencies with the depends_on block of code.

student@student-vm:lab12/test_env$ cat csr1kv1.tf
### csr1kv1 vm
resource "vsphere_virtual_machine" "csr1kv1" {
name = "csr1kv1"
resource_pool_id = "${data.vsphere_resource_pool.pool.id}"
datastore_id = "${data.vsphere_datastore.datastore.id}"

num_cpus = 4
memory = 4096
guest_id = "other26xLinux64Guest"
wait_for_guest_net_timeout = 0
wait_for_guest_ip_timeout = 5
shutdown_wait_timeout = 1
force_power_off = true

# Gi1
network_interface {
network_id = "${data.vsphere_network.vm_network_1.id}"
adapter_type = "vmxnet3"
}

# Gi2
network_interface {
network_id = "${data.vsphere_network.vm_network_4.id}"
adapter_type = "vmxnet3"
}

# Gi3
network_interface {
network_id = "${data.vsphere_network.vm_network_2.id}"
adapter_type = "vmxnet3"
}

# Gi4 (not connected in topology, declared to preserve ordering)
network_interface {
network_id = "${data.vsphere_network.vm_network_0.id}"
adapter_type = "vmxnet3"
}

# Gi5 (management)
network_interface {
network_id = "${data.vsphere_network.network.id}"
adapter_type = "vmxnet3"
}

cdrom {
datastore_id = "${data.vsphere_datastore.datastore.id}"
path = "csr1kv1/bootstrap.iso"
}

disk {
label = "disk0"
attach = true
path = "csr1kv1/csr.vmdk"
disk_mode = "independent_nonpersistent"
datastore_id = "${data.vsphere_datastore.datastore.id}"

}

depends_on = [
vsphere_host_port_group.pg
]
}

Understand Cisco ASA Provisioning

Step 4 Use the cat csr1kv3.tf command to open the csr1kv3.tf configuration file.

The definition of the csr1kv3 VM is nearly identical to that of the csr1kv1 VM. The
csr1kv3 is a border router. The major difference is in the last block of the configuration,
which defines the Terraform provisioners used for the Cisco ASA deployment process.
Terraform is a highly extensible tool and allows for execution of custom and third-party code.
You will use that capability to demonstrate how to power on and power off a predeployed asa1
device. In the presented example, Terraform transfers the local power_on_asa.sh script to the
ESXi host and remotely executes it every time the csr1kv3 resource is successfully created. It
also transfers power_off_asa.sh and executes it every time the csr1kv3 is successfully
destroyed.

student@student-vm:lab12/test_env$ cat csr1kv3.tf
### csr1kv3 vm
resource "vsphere_virtual_machine" "csr1kv3" {
name = "csr1kv3"
resource_pool_id = "${data.vsphere_resource_pool.pool.id}"
datastore_id = "${data.vsphere_datastore.datastore.id}"

num_cpus = 4
memory = 4096
guest_id = "other26xLinux64Guest"
wait_for_guest_net_timeout = 0
wait_for_guest_ip_timeout = 5
shutdown_wait_timeout = 1
force_power_off = true

# Gi1
network_interface {
network_id = "${data.vsphere_network.vm_network_4.id}"
adapter_type = "vmxnet3"
}

# Gi2
network_interface {
network_id = "${data.vsphere_network.vm_network_5.id}"
adapter_type = "vmxnet3"
}

# Gi3
network_interface {
network_id = "${data.vsphere_network.vm_network_6.id}"
adapter_type = "vmxnet3"
}

# Gi4 (not connected in topology, declared to preserve ordering)
network_interface {
network_id = "${data.vsphere_network.vm_network_0.id}"
adapter_type = "vmxnet3"
}

# Gi5 (management)
network_interface {
network_id = "${data.vsphere_network.network.id}"
adapter_type = "vmxnet3"
}

cdrom {
datastore_id = "${data.vsphere_datastore.datastore.id}"
path = "csr1kv3/bootstrap.iso"
}

disk {
label = "disk0"
attach = true
path = "csr1kv3/csr.vmdk"
disk_mode = "independent_nonpersistent"
datastore_id = "${data.vsphere_datastore.datastore.id}"

}

depends_on = [
vsphere_host_port_group.pg
]

provisioner "remote-exec" {
connection {
type = "ssh"
user = "${var.vsphere_username}"
password = "${var.vsphere_password}"
host = "${var.vsphere_server}"
}
script = "power_on_asa.sh"
}

provisioner "remote-exec" {
when = "destroy"
connection {
type = "ssh"
user = "${var.vsphere_username}"
password = "${var.vsphere_password}"
host = "${var.vsphere_server}"
}
script = "power_off_asa.sh"
}
}

Explore the k8s1.tf, k8s2.tf, and k8s3.tf Terraform Files

Step 5 Use the cat k8s1.tf command to explore the k8s1.tf configuration file. You will be presented with a Terraform declaration of a Linux Ubuntu virtual machine. There are a few differences between this configuration and the csr1kv configurations. The VM points to a different boot disk location on the ESXi datastore and has a different number of virtual interfaces connected. Focus on the interfaces: notice that the network each interface connects to is declared. The ESXi driver (formally, the Terraform "provider") requires specifying the ID (identifier) of the network on the ESXi server.

student@student-vm:lab12/test_env$ cat k8s1.tf
### k8s1 vm
resource "vsphere_virtual_machine" "k8s1" {
name = "k8s1"
resource_pool_id = "${data.vsphere_resource_pool.pool.id}"
datastore_id = "${data.vsphere_datastore.datastore.id}"

num_cpus = 2
memory = 8192
guest_id = "ubuntu64Guest"
wait_for_guest_net_timeout = 0
wait_for_guest_ip_timeout = 5
shutdown_wait_timeout = 1
force_power_off = true

# eth0 (management)
network_interface {
network_id = "${data.vsphere_network.network.id}"
adapter_type = "vmxnet3"
}

# eth1
network_interface {
network_id = "${data.vsphere_network.vm_network_1.id}"
adapter_type = "vmxnet3"
}

disk {
label = "disk0"
attach = true
path = "k8s1/k8s1.vmdk"
disk_mode = "independent_nonpersistent"
datastore_id = "${data.vsphere_datastore.datastore.id}"
}

depends_on = [
vsphere_host_port_group.pg
]
}

Step 6 Repeat the previous step for the k8s2.tf and k8s3.tf configuration files, examining the differences between the three configurations.

Explore the Data Sources Terraform File

Once logged in to the terminal shell, open the data_sources.tf configuration file.

Step 7 Use the cat data_sources.tf command to open the data sources configuration file.

Notice the "provider" block. It is the vSphere plug-in that tells Terraform how to connect to the ESXi server. One of the Terraform features is variables. For brevity, variables are stored in a separate file; they can be referenced by any other Terraform configuration file.

You will also notice the data source declarations used by the resources created during the earlier
steps. The vm_network_1 data source refers to a Terraform object that describes a resource of
type vsphere_network. This resource is located in the specified data center (referenced by
identifier) and is named “vm_network_1.” The name and the data center ID allow Terraform to query
the ESXi API and retrieve the details of the virtual network.
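To tie the pieces together: a data source only reads information from the ESXi API, and resources
then reference its attributes. The pattern used throughout this lab is the following (the second
block is the consuming fragment from k8s1.tf):

# Read an existing network by name within a data center...
data "vsphere_network" "vm_network_1" {
  name          = "vm_network_1"
  datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

# ...and consume its ID in a resource.
network_interface {
  network_id = "${data.vsphere_network.vm_network_1.id}"
}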

student@student-vm:lab12/test_env$ cat data_sources.tf
### Provider
provider "vsphere" {
user = "${var.vsphere_username}"
password = "${var.vsphere_password}"
vsphere_server = "${var.vsphere_server}"
allow_unverified_ssl = true
}

### Data Sources

data "vsphere_datastore" "datastore" {
name = "datastore"
datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_network" "network" {
name = "VM Network"
datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

data "vsphere_network" "vm_network_1" {
name = "vm_network_1"
datacenter_id = "${data.vsphere_datacenter.dc.id}"

depends_on = [
vsphere_host_port_group.pg
]
}

data "vsphere_network" "vm_network_2" {
name = "vm_network_2"
datacenter_id = "${data.vsphere_datacenter.dc.id}"

depends_on = [
vsphere_host_port_group.pg
]
}

data "vsphere_network" "vm_network_3" {
name = "vm_network_3"
datacenter_id = "${data.vsphere_datacenter.dc.id}"

depends_on = [
vsphere_host_port_group.pg
]
}

data "vsphere_network" "vm_network_4" {
name = "vm_network_4"
datacenter_id = "${data.vsphere_datacenter.dc.id}"

depends_on = [
vsphere_host_port_group.pg
]
}

data "vsphere_network" "vm_network_5" {
name = "vm_network_5"
datacenter_id = "${data.vsphere_datacenter.dc.id}"

depends_on = [
vsphere_host_port_group.pg
]
}

data "vsphere_network" "vm_network_6" {
name = "vm_network_6"
datacenter_id = "${data.vsphere_datacenter.dc.id}"

depends_on = [
vsphere_host_port_group.pg
]
}

data "vsphere_network" "vm_network_7" {
name = "vm_network_7"
datacenter_id = "${data.vsphere_datacenter.dc.id}"

depends_on = [
vsphere_host_port_group.pg
]
}

data "vsphere_network" "vm_network_0" {
name = "vm_network_0"
datacenter_id = "${data.vsphere_datacenter.dc.id}"

depends_on = [
vsphere_host_port_group.pg
]
}

data "vsphere_datacenter" "dc" { }

data "vsphere_resource_pool" "pool" { }

Explore the variables.tf Terraform File

Step 8 Use the cat variables.tf command to explore the variables.tf configuration file.

Notice the variables. Variables are a Terraform feature that allows you to declare and store
values. Variables are helpful because they simplify the configuration: a value defined once can
be referenced from anywhere in the Terraform configuration files.
student@student-vm:lab12/test_env$ cat variables.tf
variable "vsphere_server" { default = "192.168.10.70" }
variable "vsphere_username" { default = "root" }
variable "vsphere_password" { default = "password" }
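Because these variables only define default values, they can be overridden without editing the
file. As a sketch (the values shown here are illustrative, not part of the lab), Terraform accepts
overrides on the command line with -var or through TF_VAR_-prefixed environment variables:

student@student-vm:lab12/test_env$ terraform plan -var 'vsphere_server=192.168.10.71'
student@student-vm:lab12/test_env$ export TF_VAR_vsphere_password=newpassword
student@student-vm:lab12/test_env$ terraform plan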

Explore the vswitch.tf Terraform File

Step 9 Use the cat vswitch.tf command to explore the vswitch.tf network configuration file.

Each of the previously declared virtual machines has multiple network interfaces attached to it.
Network interfaces are connected to virtual networks, known as port groups in ESXi. ESXi port
groups are created on ESXi virtual switches. The vswitch.tf configuration file describes the
networking resources consumed by the virtual machine resources. Terraform allows you to create
multiple instances of a resource: the count parameter is used here to create six virtual switches
and six port groups.
student@student-vm:lab12/test_env$ cat vswitch.tf
data "vsphere_host" "esxi_host" {
datacenter_id = "${data.vsphere_datacenter.dc.id}"
}

resource "vsphere_host_virtual_switch" "devnet_lab_vswitch" {
count = 6
name = "devnet_lab_vswitch_${count.index}"
host_system_id = "${data.vsphere_host.esxi_host.id}"

network_adapters = []

active_nics = []
standby_nics = []
}

resource "vsphere_host_port_group" "pg" {
count = 6
name = "vm_network_${count.index}"
host_system_id = "${data.vsphere_host.esxi_host.id}"
virtual_switch_name = "${vsphere_host_virtual_switch.devnet_lab_vswitch[count.index].name}"
}
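The count.index value runs from 0 to count - 1, so the interpolation in the name arguments expands
the single block into six distinct objects. An illustration of what Terraform effectively creates
from the configuration above:

# count = 6 expands each block into six indexed instances:
#   vsphere_host_virtual_switch.devnet_lab_vswitch[0]  ->  "devnet_lab_vswitch_0"
#   ...
#   vsphere_host_virtual_switch.devnet_lab_vswitch[5]  ->  "devnet_lab_vswitch_5"
# Each port group pg[N] ("vm_network_N") is bound to the matching vswitch by index.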

Task 2: Deploy Test Environment Using Terraform

You will create the virtual resources declared in the Terraform configuration files.

Activity

Analyze Terraform Plan

Step 1 Execute the terraform plan command and analyze the output. The terraform plan command generates and
shows the execution plan.

This command helps you analyze the potential impact of the changes made by Terraform. In
the output, you will see the expected actions to build your test environment. Terraform will
create multiple resources declared in the .tf files. At the bottom of the output, you will find a
summary of the actions to be taken. In this case, Terraform is not expected to modify or delete
any existing resources during the deployment step, because you are only initializing and
deploying the test environment.

Analyze the output and validate that no changes were made. The terraform plan command only
provides a plan of deployment that is calculated from the current and intended infrastructure
state. The planning process does not affect any elements of the existing infrastructure.
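Although not required in this lab, a common workflow is to save the generated plan to a file and
later apply exactly that plan, which guarantees that apply performs only the actions you reviewed.
A sketch (the tfplan filename is arbitrary):

student@student-vm:lab12/test_env$ terraform plan -out=tfplan
student@student-vm:lab12/test_env$ terraform apply tfplan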

student@student-vm:lab12/test_env$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.vsphere_datacenter.dc: Refreshing state...
data.vsphere_resource_pool.pool: Refreshing state...
data.vsphere_datastore.datastore: Refreshing state...
data.vsphere_host.esxi_host: Refreshing state...
data.vsphere_network.network: Refreshing state...

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
<= read (data resources)

Terraform will perform the following actions:

# data.vsphere_network.vm_network_0 will be read during apply
# (config refers to values not yet known)
<= data "vsphere_network" "vm_network_0" {
+ datacenter_id = "ha-datacenter"
+ id = (known after apply)
+ name = "vm_network_0"
+ type = (known after apply)
}

# data.vsphere_network.vm_network_1 will be read during apply
# (config refers to values not yet known)
<= data "vsphere_network" "vm_network_1" {
+ datacenter_id = "ha-datacenter"
+ id = (known after apply)
+ name = "vm_network_1"
+ type = (known after apply)
}

# data.vsphere_network.vm_network_2 will be read during apply
# (config refers to values not yet known)
<= data "vsphere_network" "vm_network_2" {
+ datacenter_id = "ha-datacenter"
+ id = (known after apply)
+ name = "vm_network_2"
+ type = (known after apply)
}

# data.vsphere_network.vm_network_3 will be read during apply
# (config refers to values not yet known)
<= data "vsphere_network" "vm_network_3" {
+ datacenter_id = "ha-datacenter"
+ id = (known after apply)
+ name = "vm_network_3"
+ type = (known after apply)
}

# data.vsphere_network.vm_network_4 will be read during apply
# (config refers to values not yet known)
<= data "vsphere_network" "vm_network_4" {
+ datacenter_id = "ha-datacenter"
+ id = (known after apply)
+ name = "vm_network_4"
+ type = (known after apply)
}

# data.vsphere_network.vm_network_5 will be read during apply
# (config refers to values not yet known)
<= data "vsphere_network" "vm_network_5" {
+ datacenter_id = "ha-datacenter"
+ id = (known after apply)
+ name = "vm_network_5"
+ type = (known after apply)
}

# data.vsphere_network.vm_network_6 will be read during apply
# (config refers to values not yet known)
<= data "vsphere_network" "vm_network_6" {
+ datacenter_id = "ha-datacenter"
+ id = (known after apply)
+ name = "vm_network_6"
+ type = (known after apply)
}

# data.vsphere_network.vm_network_7 will be read during apply
# (config refers to values not yet known)
<= data "vsphere_network" "vm_network_7" {
+ datacenter_id = "ha-datacenter"
+ id = (known after apply)
+ name = "vm_network_7"
+ type = (known after apply)
}

# vsphere_host_port_group.pg[0] will be created
+ resource "vsphere_host_port_group" "pg" {
+ computed_policy = (known after apply)
+ host_system_id = "ha-host"
+ id = (known after apply)
+ key = (known after apply)
+ name = "vm_network_0"
+ ports = (known after apply)
+ virtual_switch_name = "devnet_lab_vswitch_0"
+ vlan_id = 0
}

# vsphere_host_port_group.pg[1] will be created
+ resource "vsphere_host_port_group" "pg" {
+ computed_policy = (known after apply)
+ host_system_id = "ha-host"
+ id = (known after apply)
+ key = (known after apply)
+ name = "vm_network_1"
+ ports = (known after apply)

+ virtual_switch_name = "devnet_lab_vswitch_1"
+ vlan_id = 0
}

# vsphere_host_port_group.pg[2] will be created
+ resource "vsphere_host_port_group" "pg" {
+ computed_policy = (known after apply)
+ host_system_id = "ha-host"
+ id = (known after apply)
+ key = (known after apply)
+ name = "vm_network_2"
+ ports = (known after apply)
+ virtual_switch_name = "devnet_lab_vswitch_2"
+ vlan_id = 0
}

# vsphere_host_port_group.pg[3] will be created
+ resource "vsphere_host_port_group" "pg" {
+ computed_policy = (known after apply)
+ host_system_id = "ha-host"
+ id = (known after apply)
+ key = (known after apply)
+ name = "vm_network_3"
+ ports = (known after apply)
+ virtual_switch_name = "devnet_lab_vswitch_3"
+ vlan_id = 0
}

# vsphere_host_port_group.pg[4] will be created
+ resource "vsphere_host_port_group" "pg" {
+ computed_policy = (known after apply)
+ host_system_id = "ha-host"
+ id = (known after apply)
+ key = (known after apply)
+ name = "vm_network_4"
+ ports = (known after apply)
+ virtual_switch_name = "devnet_lab_vswitch_4"
+ vlan_id = 0
}

# vsphere_host_port_group.pg[5] will be created
+ resource "vsphere_host_port_group" "pg" {
+ computed_policy = (known after apply)
+ host_system_id = "ha-host"
+ id = (known after apply)
+ key = (known after apply)
+ name = "vm_network_5"
+ ports = (known after apply)
+ virtual_switch_name = "devnet_lab_vswitch_5"
+ vlan_id = 0
}

# vsphere_host_virtual_switch.devnet_lab_vswitch[0] will be created
+ resource "vsphere_host_virtual_switch" "devnet_lab_vswitch" {
+ active_nics = []
+ allow_forged_transmits = true

+ allow_mac_changes = true
+ allow_promiscuous = false
+ beacon_interval = 1
+ check_beacon = false
+ failback = true
+ host_system_id = "ha-host"
+ id = (known after apply)
+ link_discovery_operation = "listen"
+ link_discovery_protocol = "cdp"
+ mtu = 1500
+ name = "devnet_lab_vswitch_0"
+ network_adapters = []
+ notify_switches = true
+ number_of_ports = 128
+ shaping_enabled = false
+ standby_nics = []
+ teaming_policy = "loadbalance_srcid"
}

# vsphere_host_virtual_switch.devnet_lab_vswitch[1] will be created
+ resource "vsphere_host_virtual_switch" "devnet_lab_vswitch" {
+ active_nics = []
+ allow_forged_transmits = true
+ allow_mac_changes = true
+ allow_promiscuous = false
+ beacon_interval = 1
+ check_beacon = false
+ failback = true
+ host_system_id = "ha-host"
+ id = (known after apply)
+ link_discovery_operation = "listen"
+ link_discovery_protocol = "cdp"
+ mtu = 1500
+ name = "devnet_lab_vswitch_1"
+ network_adapters = []
+ notify_switches = true
+ number_of_ports = 128
+ shaping_enabled = false
+ standby_nics = []
+ teaming_policy = "loadbalance_srcid"
}

# vsphere_host_virtual_switch.devnet_lab_vswitch[2] will be created
+ resource "vsphere_host_virtual_switch" "devnet_lab_vswitch" {
+ active_nics = []
+ allow_forged_transmits = true
+ allow_mac_changes = true
+ allow_promiscuous = false
+ beacon_interval = 1
+ check_beacon = false
+ failback = true
+ host_system_id = "ha-host"
+ id = (known after apply)
+ link_discovery_operation = "listen"
+ link_discovery_protocol = "cdp"
+ mtu = 1500

+ name = "devnet_lab_vswitch_2"
+ network_adapters = []
+ notify_switches = true
+ number_of_ports = 128
+ shaping_enabled = false
+ standby_nics = []
+ teaming_policy = "loadbalance_srcid"
}

# vsphere_host_virtual_switch.devnet_lab_vswitch[3] will be created
+ resource "vsphere_host_virtual_switch" "devnet_lab_vswitch" {
+ active_nics = []
+ allow_forged_transmits = true
+ allow_mac_changes = true
+ allow_promiscuous = false
+ beacon_interval = 1
+ check_beacon = false
+ failback = true
+ host_system_id = "ha-host"
+ id = (known after apply)
+ link_discovery_operation = "listen"
+ link_discovery_protocol = "cdp"
+ mtu = 1500
+ name = "devnet_lab_vswitch_3"
+ network_adapters = []
+ notify_switches = true
+ number_of_ports = 128
+ shaping_enabled = false
+ standby_nics = []
+ teaming_policy = "loadbalance_srcid"
}

# vsphere_host_virtual_switch.devnet_lab_vswitch[4] will be created
+ resource "vsphere_host_virtual_switch" "devnet_lab_vswitch" {
+ active_nics = []
+ allow_forged_transmits = true
+ allow_mac_changes = true
+ allow_promiscuous = false
+ beacon_interval = 1
+ check_beacon = false
+ failback = true
+ host_system_id = "ha-host"
+ id = (known after apply)
+ link_discovery_operation = "listen"
+ link_discovery_protocol = "cdp"
+ mtu = 1500
+ name = "devnet_lab_vswitch_4"
+ network_adapters = []
+ notify_switches = true
+ number_of_ports = 128
+ shaping_enabled = false
+ standby_nics = []
+ teaming_policy = "loadbalance_srcid"
}

# vsphere_host_virtual_switch.devnet_lab_vswitch[5] will be created
+ resource "vsphere_host_virtual_switch" "devnet_lab_vswitch" {
+ active_nics = []
+ allow_forged_transmits = true
+ allow_mac_changes = true
+ allow_promiscuous = false
+ beacon_interval = 1
+ check_beacon = false
+ failback = true
+ host_system_id = "ha-host"
+ id = (known after apply)
+ link_discovery_operation = "listen"
+ link_discovery_protocol = "cdp"
+ mtu = 1500
+ name = "devnet_lab_vswitch_5"
+ network_adapters = []
+ notify_switches = true
+ number_of_ports = 128
+ shaping_enabled = false
+ standby_nics = []
+ teaming_policy = "loadbalance_srcid"
}

# vsphere_virtual_machine.csr1kv1 will be created
+ resource "vsphere_virtual_machine" "csr1kv1" {
+ boot_retry_delay = 10000
+ change_version = (known after apply)
+ cpu_limit = -1
+ cpu_share_count = (known after apply)
+ cpu_share_level = "normal"
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"

+ default_ip_address = (known after apply)
+ ept_rvi_mode = "automatic"
+ firmware = "bios"
+ force_power_off = true
+ guest_id = "other26xLinux64Guest"
+ guest_ip_addresses = (known after apply)
+ host_system_id = (known after apply)
+ hv_mode = "hvAuto"
+ id = (known after apply)
+ imported = (known after apply)
+ latency_sensitivity = "normal"
+ memory = 4096
+ memory_limit = -1
+ memory_share_count = (known after apply)
+ memory_share_level = "normal"
+ migrate_wait_timeout = 30
+ moid = (known after apply)
+ name = "csr1kv1"
+ num_cores_per_socket = 1
+ num_cpus = 4
+ reboot_required = (known after apply)
+ resource_pool_id = "ha-root-pool"
+ run_tools_scripts_after_power_on = true
+ run_tools_scripts_after_resume = true
+ run_tools_scripts_before_guest_shutdown = true

+ run_tools_scripts_before_guest_standby = true
+ scsi_bus_sharing = "noSharing"
+ scsi_controller_count = 1
+ scsi_type = "pvscsi"
+ shutdown_wait_timeout = 1
+ swap_placement_policy = "inherit"
+ uuid = (known after apply)
+ vapp_transport = (known after apply)
+ vmware_tools_status = (known after apply)
+ vmx_path = (known after apply)
+ wait_for_guest_ip_timeout = 5
+ wait_for_guest_net_routable = true
+ wait_for_guest_net_timeout = 0

+ cdrom {
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"
+ device_address = (known after apply)
+ key = (known after apply)
+ path = "csr1kv1/bootstrap.iso"
}

+ disk {
+ attach = true
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"
+ device_address = (known after apply)
+ disk_mode = "independent_nonpersistent"
+ disk_sharing = "sharingNone"
+ eagerly_scrub = false
+ io_limit = -1
+ io_reservation = 0
+ io_share_count = 0
+ io_share_level = "normal"
+ keep_on_remove = false
+ key = 0
+ label = "disk0"
+ path = "csr1kv1/csr.vmdk"
+ thin_provisioned = true
+ unit_number = 0
+ uuid = (known after apply)
+ write_through = false
}

+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1

+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = "HaNetwork-VM Network"
}
}

# vsphere_virtual_machine.csr1kv2 will be created
+ resource "vsphere_virtual_machine" "csr1kv2" {
+ boot_retry_delay = 10000
+ change_version = (known after apply)
+ cpu_limit = -1
+ cpu_share_count = (known after apply)
+ cpu_share_level = "normal"
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"

+ default_ip_address = (known after apply)
+ ept_rvi_mode = "automatic"
+ firmware = "bios"
+ force_power_off = true

+ guest_id = "other26xLinux64Guest"
+ guest_ip_addresses = (known after apply)
+ host_system_id = (known after apply)
+ hv_mode = "hvAuto"
+ id = (known after apply)
+ imported = (known after apply)
+ latency_sensitivity = "normal"
+ memory = 4096
+ memory_limit = -1
+ memory_share_count = (known after apply)
+ memory_share_level = "normal"
+ migrate_wait_timeout = 30
+ moid = (known after apply)
+ name = "csr1kv2"
+ num_cores_per_socket = 1
+ num_cpus = 4
+ reboot_required = (known after apply)
+ resource_pool_id = "ha-root-pool"
+ run_tools_scripts_after_power_on = true
+ run_tools_scripts_after_resume = true
+ run_tools_scripts_before_guest_shutdown = true
+ run_tools_scripts_before_guest_standby = true
+ scsi_bus_sharing = "noSharing"
+ scsi_controller_count = 1
+ scsi_type = "pvscsi"
+ shutdown_wait_timeout = 1
+ swap_placement_policy = "inherit"
+ uuid = (known after apply)
+ vapp_transport = (known after apply)
+ vmware_tools_status = (known after apply)
+ vmx_path = (known after apply)
+ wait_for_guest_ip_timeout = 5
+ wait_for_guest_net_routable = true
+ wait_for_guest_net_timeout = 0

+ cdrom {
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"
+ device_address = (known after apply)
+ key = (known after apply)
+ path = "csr1kv2/bootstrap.iso"
}

+ disk {
+ attach = true
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"
+ device_address = (known after apply)
+ disk_mode = "independent_nonpersistent"
+ disk_sharing = "sharingNone"
+ eagerly_scrub = false
+ io_limit = -1
+ io_reservation = 0
+ io_share_count = 0
+ io_share_level = "normal"
+ keep_on_remove = false
+ key = 0
+ label = "disk0"

+ path = "csr1kv2/csr.vmdk"
+ thin_provisioned = true
+ unit_number = 0
+ uuid = (known after apply)
+ write_through = false
}

+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)

+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = "HaNetwork-VM Network"
}
}

# vsphere_virtual_machine.csr1kv3 will be created
+ resource "vsphere_virtual_machine" "csr1kv3" {
+ boot_retry_delay = 10000
+ change_version = (known after apply)
+ cpu_limit = -1
+ cpu_share_count = (known after apply)
+ cpu_share_level = "normal"
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"

+ default_ip_address = (known after apply)
+ ept_rvi_mode = "automatic"
+ firmware = "bios"
+ force_power_off = true
+ guest_id = "other26xLinux64Guest"
+ guest_ip_addresses = (known after apply)
+ host_system_id = (known after apply)
+ hv_mode = "hvAuto"
+ id = (known after apply)
+ imported = (known after apply)
+ latency_sensitivity = "normal"
+ memory = 4096
+ memory_limit = -1
+ memory_share_count = (known after apply)
+ memory_share_level = "normal"
+ migrate_wait_timeout = 30
+ moid = (known after apply)
+ name = "csr1kv3"
+ num_cores_per_socket = 1
+ num_cpus = 4
+ reboot_required = (known after apply)
+ resource_pool_id = "ha-root-pool"
+ run_tools_scripts_after_power_on = true
+ run_tools_scripts_after_resume = true
+ run_tools_scripts_before_guest_shutdown = true
+ run_tools_scripts_before_guest_standby = true
+ scsi_bus_sharing = "noSharing"
+ scsi_controller_count = 1
+ scsi_type = "pvscsi"
+ shutdown_wait_timeout = 1
+ swap_placement_policy = "inherit"
+ uuid = (known after apply)
+ vapp_transport = (known after apply)
+ vmware_tools_status = (known after apply)
+ vmx_path = (known after apply)
+ wait_for_guest_ip_timeout = 5
+ wait_for_guest_net_routable = true
+ wait_for_guest_net_timeout = 0

+ cdrom {
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"
+ device_address = (known after apply)
+ key = (known after apply)
+ path = "csr1kv3/bootstrap.iso"
}

+ disk {
+ attach = true
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"
+ device_address = (known after apply)
+ disk_mode = "independent_nonpersistent"
+ disk_sharing = "sharingNone"
+ eagerly_scrub = false
+ io_limit = -1
+ io_reservation = 0
+ io_share_count = 0
+ io_share_level = "normal"
+ keep_on_remove = false
+ key = 0
+ label = "disk0"
+ path = "csr1kv3/csr.vmdk"
+ thin_provisioned = true
+ unit_number = 0
+ uuid = (known after apply)
+ write_through = false
}

+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"

+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = "HaNetwork-VM Network"
}
}

# vsphere_virtual_machine.k8s1 will be created
+ resource "vsphere_virtual_machine" "k8s1" {
+ boot_retry_delay = 10000
+ change_version = (known after apply)
+ cpu_limit = -1
+ cpu_share_count = (known after apply)
+ cpu_share_level = "normal"
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"

+ default_ip_address = (known after apply)
+ ept_rvi_mode = "automatic"
+ firmware = "bios"
+ force_power_off = true
+ guest_id = "ubuntu64Guest"
+ guest_ip_addresses = (known after apply)
+ host_system_id = (known after apply)
+ hv_mode = "hvAuto"
+ id = (known after apply)
+ imported = (known after apply)
+ latency_sensitivity = "normal"
+ memory = 8192
+ memory_limit = -1
+ memory_share_count = (known after apply)
+ memory_share_level = "normal"
+ migrate_wait_timeout = 30
+ moid = (known after apply)
+ name = "k8s1"

+ num_cores_per_socket = 1
+ num_cpus = 2
+ reboot_required = (known after apply)
+ resource_pool_id = "ha-root-pool"
+ run_tools_scripts_after_power_on = true
+ run_tools_scripts_after_resume = true
+ run_tools_scripts_before_guest_shutdown = true
+ run_tools_scripts_before_guest_standby = true
+ scsi_bus_sharing = "noSharing"
+ scsi_controller_count = 1
+ scsi_type = "pvscsi"
+ shutdown_wait_timeout = 1
+ swap_placement_policy = "inherit"
+ uuid = (known after apply)
+ vapp_transport = (known after apply)
+ vmware_tools_status = (known after apply)
+ vmx_path = (known after apply)
+ wait_for_guest_ip_timeout = 5
+ wait_for_guest_net_routable = true
+ wait_for_guest_net_timeout = 0

+ disk {
+ attach = true
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"
+ device_address = (known after apply)
+ disk_mode = "independent_nonpersistent"
+ disk_sharing = "sharingNone"
+ eagerly_scrub = false
+ io_limit = -1
+ io_reservation = 0
+ io_share_count = 0
+ io_share_level = "normal"
+ keep_on_remove = false
+ key = 0
+ label = "disk0"
+ path = "k8s1/k8s1.vmdk"
+ thin_provisioned = true
+ unit_number = 0
+ uuid = (known after apply)
+ write_through = false
}

+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = "HaNetwork-VM Network"
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1

+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
}

# vsphere_virtual_machine.k8s2 will be created
+ resource "vsphere_virtual_machine" "k8s2" {
+ boot_retry_delay = 10000
+ change_version = (known after apply)
+ cpu_limit = -1
+ cpu_share_count = (known after apply)
+ cpu_share_level = "normal"
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"

+ default_ip_address = (known after apply)
+ ept_rvi_mode = "automatic"
+ firmware = "bios"
+ force_power_off = true
+ guest_id = "ubuntu64Guest"
+ guest_ip_addresses = (known after apply)
+ host_system_id = (known after apply)
+ hv_mode = "hvAuto"
+ id = (known after apply)
+ imported = (known after apply)
+ latency_sensitivity = "normal"
+ memory = 8192
+ memory_limit = -1
+ memory_share_count = (known after apply)
+ memory_share_level = "normal"
+ migrate_wait_timeout = 30
+ moid = (known after apply)
+ name = "k8s2"
+ num_cores_per_socket = 1
+ num_cpus = 2
+ reboot_required = (known after apply)
+ resource_pool_id = "ha-root-pool"
+ run_tools_scripts_after_power_on = true
+ run_tools_scripts_after_resume = true
+ run_tools_scripts_before_guest_shutdown = true
+ run_tools_scripts_before_guest_standby = true
+ scsi_bus_sharing = "noSharing"
+ scsi_controller_count = 1
+ scsi_type = "pvscsi"
+ shutdown_wait_timeout = 1
+ swap_placement_policy = "inherit"
+ uuid = (known after apply)
+ vapp_transport = (known after apply)
+ vmware_tools_status = (known after apply)
+ vmx_path = (known after apply)
+ wait_for_guest_ip_timeout = 5
+ wait_for_guest_net_routable = true

+ wait_for_guest_net_timeout = 0

+ disk {
+ attach = true
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"
+ device_address = (known after apply)
+ disk_mode = "independent_nonpersistent"
+ disk_sharing = "sharingNone"
+ eagerly_scrub = false
+ io_limit = -1
+ io_reservation = 0
+ io_share_count = 0
+ io_share_level = "normal"
+ keep_on_remove = false
+ key = 0
+ label = "disk0"
+ path = "k8s2/k8s2.vmdk"
+ thin_provisioned = true
+ unit_number = 0
+ uuid = (known after apply)
+ write_through = false
}

+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = "HaNetwork-VM Network"
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
}

# vsphere_virtual_machine.k8s3 will be created


+ resource "vsphere_virtual_machine" "k8s3" {
+ boot_retry_delay = 10000
+ change_version = (known after apply)
+ cpu_limit = -1
+ cpu_share_count = (known after apply)
+ cpu_share_level = "normal"
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"

+ default_ip_address = (known after apply)
+ ept_rvi_mode = "automatic"
+ firmware = "bios"
+ force_power_off = true
+ guest_id = "ubuntu64Guest"
+ guest_ip_addresses = (known after apply)
+ host_system_id = (known after apply)
+ hv_mode = "hvAuto"
+ id = (known after apply)
+ imported = (known after apply)
+ latency_sensitivity = "normal"
+ memory = 8192
+ memory_limit = -1
+ memory_share_count = (known after apply)
+ memory_share_level = "normal"
+ migrate_wait_timeout = 30
+ moid = (known after apply)
+ name = "k8s3"
+ num_cores_per_socket = 1
+ num_cpus = 2
+ reboot_required = (known after apply)
+ resource_pool_id = "ha-root-pool"
+ run_tools_scripts_after_power_on = true
+ run_tools_scripts_after_resume = true
+ run_tools_scripts_before_guest_shutdown = true
+ run_tools_scripts_before_guest_standby = true
+ scsi_bus_sharing = "noSharing"
+ scsi_controller_count = 1
+ scsi_type = "pvscsi"
+ shutdown_wait_timeout = 1
+ swap_placement_policy = "inherit"
+ uuid = (known after apply)
+ vapp_transport = (known after apply)
+ vmware_tools_status = (known after apply)
+ vmx_path = (known after apply)
+ wait_for_guest_ip_timeout = 5
+ wait_for_guest_net_routable = true
+ wait_for_guest_net_timeout = 0

+ disk {
+ attach = true
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"
+ device_address = (known after apply)
+ disk_mode = "independent_nonpersistent"
+ disk_sharing = "sharingNone"
+ eagerly_scrub = false
+ io_limit = -1
+ io_reservation = 0
+ io_share_count = 0
+ io_share_level = "normal"
+ keep_on_remove = false
+ key = 0
+ label = "disk0"
+ path = "k8s3/k8s3.vmdk"
+ thin_provisioned = true
+ unit_number = 0

+ uuid = (known after apply)
+ write_through = false
}

+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = "HaNetwork-VM Network"
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
}

Plan: 18 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You did not specify an "-out" parameter to save this plan, so Terraform
cannot guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Deploy Terraform Configuration

Step 2 Execute the terraform apply command and type yes when prompted for deployment confirmation.
Terraform refreshes its view of the existing infrastructure, compares it against the configuration files, and asks
for a deployment confirmation. Once you confirm, Terraform starts creating resources within ESXi. You will
notice that all resources declared in the .tf files (virtual machines and virtual networks) are created and that the
asa1 host is powered on by the remote provisioner. When it finishes, Terraform prints a summary of the
actions taken.
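The remote provisioner mentioned above is defined in the lab's .tf files. As a rough sketch of what such a block looks like (the vim-cmd command, variable names, and connection details here are assumptions for illustration, not the lab's actual configuration):

```hcl
resource "vsphere_virtual_machine" "asa1" {
  # ... VM arguments as in the lab configuration ...

  # Power the VM on after creation by running a command on the ESXi host.
  provisioner "remote-exec" {
    inline = [
      "vim-cmd vmsvc/power.on $(vim-cmd vmsvc/getallvms | grep asa1 | awk '{print $1}')",
    ]

    connection {
      type     = "ssh"
      host     = var.esxi_host     # assumed variable names
      user     = var.esxi_user
      password = var.esxi_password
    }
  }
}
```

Because provisioners run only at create time, destroying and re-applying the resource is what triggers the power-on again.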

Note When the devices are up and running, you can use the ping command to check their connectivity. The
device names are test_csr1kv1, test_csr1kv2, test_csr1kv3, test_k8s1, test_k8s2, test_k8s3, and
test_asa1.
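The connectivity check in the note above can be sketched as a small loop over the device names; whether each host answers depends on your lab environment, so no particular output is guaranteed:

```shell
# Lab device names from the note above.
hosts="test_csr1kv1 test_csr1kv2 test_csr1kv3 test_k8s1 test_k8s2 test_k8s3 test_asa1"

# Send one ping to each device and report the result.
for host in $hosts; do
  if ping -c 1 -W 2 "$host" > /dev/null 2>&1; then
    echo "$host reachable"
  else
    echo "$host unreachable"
  fi
done
```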

student@student-vm:lab12/test_env$ terraform apply
data.vsphere_datacenter.dc: Refreshing state...
data.vsphere_resource_pool.pool: Refreshing state...
data.vsphere_network.network: Refreshing state...
data.vsphere_datastore.datastore: Refreshing state...
data.vsphere_host.esxi_host: Refreshing state...

An execution plan has been generated and is shown below.


Resource actions are indicated with the following symbols:
+ create
<= read (data resources)

Terraform will perform the following actions:

# data.vsphere_network.vm_network_0 will be read during apply


# (config refers to values not yet known)
<= data "vsphere_network" "vm_network_0" {
+ datacenter_id = "ha-datacenter"
+ id = (known after apply)
+ name = "vm_network_0"
+ type = (known after apply)
}

# data.vsphere_network.vm_network_1 will be read during apply


# (config refers to values not yet known)
<= data "vsphere_network" "vm_network_1" {
+ datacenter_id = "ha-datacenter"
+ id = (known after apply)
+ name = "vm_network_1"
+ type = (known after apply)
}

# data.vsphere_network.vm_network_2 will be read during apply


# (config refers to values not yet known)
<= data "vsphere_network" "vm_network_2" {
+ datacenter_id = "ha-datacenter"
+ id = (known after apply)
+ name = "vm_network_2"
+ type = (known after apply)
}

# data.vsphere_network.vm_network_3 will be read during apply


# (config refers to values not yet known)
<= data "vsphere_network" "vm_network_3" {
+ datacenter_id = "ha-datacenter"
+ id = (known after apply)
+ name = "vm_network_3"
+ type = (known after apply)
}

# data.vsphere_network.vm_network_4 will be read during apply


# (config refers to values not yet known)
<= data "vsphere_network" "vm_network_4" {
+ datacenter_id = "ha-datacenter"
+ id = (known after apply)
+ name = "vm_network_4"

+ type = (known after apply)
}

# data.vsphere_network.vm_network_5 will be read during apply


# (config refers to values not yet known)
<= data "vsphere_network" "vm_network_5" {
+ datacenter_id = "ha-datacenter"
+ id = (known after apply)
+ name = "vm_network_5"
+ type = (known after apply)
}

# data.vsphere_network.vm_network_6 will be read during apply


# (config refers to values not yet known)
<= data "vsphere_network" "vm_network_6" {
+ datacenter_id = "ha-datacenter"
+ id = (known after apply)
+ name = "vm_network_6"
+ type = (known after apply)
}

# data.vsphere_network.vm_network_7 will be read during apply


# (config refers to values not yet known)
<= data "vsphere_network" "vm_network_7" {
+ datacenter_id = "ha-datacenter"
+ id = (known after apply)
+ name = "vm_network_7"
+ type = (known after apply)
}

# vsphere_host_port_group.pg[0] will be created


+ resource "vsphere_host_port_group" "pg" {
+ computed_policy = (known after apply)
+ host_system_id = "ha-host"
+ id = (known after apply)
+ key = (known after apply)
+ name = "vm_network_0"
+ ports = (known after apply)
+ virtual_switch_name = "devnet_lab_vswitch_0"
+ vlan_id = 0
}

# vsphere_host_port_group.pg[1] will be created


+ resource "vsphere_host_port_group" "pg" {
+ computed_policy = (known after apply)
+ host_system_id = "ha-host"
+ id = (known after apply)
+ key = (known after apply)
+ name = "vm_network_1"
+ ports = (known after apply)
+ virtual_switch_name = "devnet_lab_vswitch_1"
+ vlan_id = 0
}

# vsphere_host_port_group.pg[2] will be created


+ resource "vsphere_host_port_group" "pg" {

+ computed_policy = (known after apply)
+ host_system_id = "ha-host"
+ id = (known after apply)
+ key = (known after apply)
+ name = "vm_network_2"
+ ports = (known after apply)
+ virtual_switch_name = "devnet_lab_vswitch_2"
+ vlan_id = 0
}

# vsphere_host_port_group.pg[3] will be created


+ resource "vsphere_host_port_group" "pg" {
+ computed_policy = (known after apply)
+ host_system_id = "ha-host"
+ id = (known after apply)
+ key = (known after apply)
+ name = "vm_network_3"
+ ports = (known after apply)
+ virtual_switch_name = "devnet_lab_vswitch_3"
+ vlan_id = 0
}

# vsphere_host_port_group.pg[4] will be created


+ resource "vsphere_host_port_group" "pg" {
+ computed_policy = (known after apply)
+ host_system_id = "ha-host"
+ id = (known after apply)
+ key = (known after apply)
+ name = "vm_network_4"
+ ports = (known after apply)
+ virtual_switch_name = "devnet_lab_vswitch_4"
+ vlan_id = 0
}

# vsphere_host_port_group.pg[5] will be created


+ resource "vsphere_host_port_group" "pg" {
+ computed_policy = (known after apply)
+ host_system_id = "ha-host"
+ id = (known after apply)
+ key = (known after apply)
+ name = "vm_network_5"
+ ports = (known after apply)
+ virtual_switch_name = "devnet_lab_vswitch_5"
+ vlan_id = 0
}

# vsphere_host_virtual_switch.devnet_lab_vswitch[0] will be created


+ resource "vsphere_host_virtual_switch" "devnet_lab_vswitch" {
+ active_nics = []
+ allow_forged_transmits = true
+ allow_mac_changes = true
+ allow_promiscuous = false
+ beacon_interval = 1
+ check_beacon = false
+ failback = true
+ host_system_id = "ha-host"

+ id = (known after apply)
+ link_discovery_operation = "listen"
+ link_discovery_protocol = "cdp"
+ mtu = 1500
+ name = "devnet_lab_vswitch_0"
+ network_adapters = []
+ notify_switches = true
+ number_of_ports = 128
+ shaping_enabled = false
+ standby_nics = []
+ teaming_policy = "loadbalance_srcid"
}

# vsphere_host_virtual_switch.devnet_lab_vswitch[1] will be created


+ resource "vsphere_host_virtual_switch" "devnet_lab_vswitch" {
+ active_nics = []
+ allow_forged_transmits = true
+ allow_mac_changes = true
+ allow_promiscuous = false
+ beacon_interval = 1
+ check_beacon = false
+ failback = true
+ host_system_id = "ha-host"
+ id = (known after apply)
+ link_discovery_operation = "listen"
+ link_discovery_protocol = "cdp"
+ mtu = 1500
+ name = "devnet_lab_vswitch_1"
+ network_adapters = []
+ notify_switches = true
+ number_of_ports = 128
+ shaping_enabled = false
+ standby_nics = []
+ teaming_policy = "loadbalance_srcid"
}

# vsphere_host_virtual_switch.devnet_lab_vswitch[2] will be created


+ resource "vsphere_host_virtual_switch" "devnet_lab_vswitch" {
+ active_nics = []
+ allow_forged_transmits = true
+ allow_mac_changes = true
+ allow_promiscuous = false
+ beacon_interval = 1
+ check_beacon = false
+ failback = true
+ host_system_id = "ha-host"
+ id = (known after apply)
+ link_discovery_operation = "listen"
+ link_discovery_protocol = "cdp"
+ mtu = 1500
+ name = "devnet_lab_vswitch_2"
+ network_adapters = []
+ notify_switches = true
+ number_of_ports = 128
+ shaping_enabled = false
+ standby_nics = []

+ teaming_policy = "loadbalance_srcid"
}

# vsphere_host_virtual_switch.devnet_lab_vswitch[3] will be created


+ resource "vsphere_host_virtual_switch" "devnet_lab_vswitch" {
+ active_nics = []
+ allow_forged_transmits = true
+ allow_mac_changes = true
+ allow_promiscuous = false
+ beacon_interval = 1
+ check_beacon = false
+ failback = true
+ host_system_id = "ha-host"
+ id = (known after apply)
+ link_discovery_operation = "listen"
+ link_discovery_protocol = "cdp"
+ mtu = 1500
+ name = "devnet_lab_vswitch_3"
+ network_adapters = []
+ notify_switches = true
+ number_of_ports = 128
+ shaping_enabled = false
+ standby_nics = []
+ teaming_policy = "loadbalance_srcid"
}

# vsphere_host_virtual_switch.devnet_lab_vswitch[4] will be created


+ resource "vsphere_host_virtual_switch" "devnet_lab_vswitch" {
+ active_nics = []
+ allow_forged_transmits = true
+ allow_mac_changes = true
+ allow_promiscuous = false
+ beacon_interval = 1
+ check_beacon = false
+ failback = true
+ host_system_id = "ha-host"
+ id = (known after apply)
+ link_discovery_operation = "listen"
+ link_discovery_protocol = "cdp"
+ mtu = 1500
+ name = "devnet_lab_vswitch_4"
+ network_adapters = []
+ notify_switches = true
+ number_of_ports = 128
+ shaping_enabled = false
+ standby_nics = []
+ teaming_policy = "loadbalance_srcid"
}

# vsphere_host_virtual_switch.devnet_lab_vswitch[5] will be created


+ resource "vsphere_host_virtual_switch" "devnet_lab_vswitch" {
+ active_nics = []
+ allow_forged_transmits = true
+ allow_mac_changes = true
+ allow_promiscuous = false
+ beacon_interval = 1

+ check_beacon = false
+ failback = true
+ host_system_id = "ha-host"
+ id = (known after apply)
+ link_discovery_operation = "listen"
+ link_discovery_protocol = "cdp"
+ mtu = 1500
+ name = "devnet_lab_vswitch_5"
+ network_adapters = []
+ notify_switches = true
+ number_of_ports = 128
+ shaping_enabled = false
+ standby_nics = []
+ teaming_policy = "loadbalance_srcid"
}

# vsphere_virtual_machine.csr1kv1 will be created


+ resource "vsphere_virtual_machine" "csr1kv1" {
+ boot_retry_delay = 10000
+ change_version = (known after apply)
+ cpu_limit = -1
+ cpu_share_count = (known after apply)
+ cpu_share_level = "normal"
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"

+ default_ip_address = (known after apply)


+ ept_rvi_mode = "automatic"
+ firmware = "bios"
+ force_power_off = true
+ guest_id = "other26xLinux64Guest"
+ guest_ip_addresses = (known after apply)
+ host_system_id = (known after apply)
+ hv_mode = "hvAuto"
+ id = (known after apply)
+ imported = (known after apply)
+ latency_sensitivity = "normal"
+ memory = 4096
+ memory_limit = -1
+ memory_share_count = (known after apply)
+ memory_share_level = "normal"
+ migrate_wait_timeout = 30
+ moid = (known after apply)
+ name = "csr1kv1"
+ num_cores_per_socket = 1
+ num_cpus = 4
+ reboot_required = (known after apply)
+ resource_pool_id = "ha-root-pool"
+ run_tools_scripts_after_power_on = true
+ run_tools_scripts_after_resume = true
+ run_tools_scripts_before_guest_shutdown = true
+ run_tools_scripts_before_guest_standby = true
+ scsi_bus_sharing = "noSharing"
+ scsi_controller_count = 1
+ scsi_type = "pvscsi"
+ shutdown_wait_timeout = 1
+ swap_placement_policy = "inherit"

+ uuid = (known after apply)
+ vapp_transport = (known after apply)
+ vmware_tools_status = (known after apply)
+ vmx_path = (known after apply)
+ wait_for_guest_ip_timeout = 5
+ wait_for_guest_net_routable = true
+ wait_for_guest_net_timeout = 0

+ cdrom {
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"
+ device_address = (known after apply)
+ key = (known after apply)
+ path = "csr1kv1/bootstrap.iso"
}

+ disk {
+ attach = true
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"
+ device_address = (known after apply)
+ disk_mode = "independent_nonpersistent"
+ disk_sharing = "sharingNone"
+ eagerly_scrub = false
+ io_limit = -1
+ io_reservation = 0
+ io_share_count = 0
+ io_share_level = "normal"
+ keep_on_remove = false
+ key = 0
+ label = "disk0"
+ path = "csr1kv1/csr.vmdk"
+ thin_provisioned = true
+ unit_number = 0
+ uuid = (known after apply)
+ write_through = false
}

+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)

+ network_id = (known after apply)
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = "HaNetwork-VM Network"
}
}

# vsphere_virtual_machine.csr1kv2 will be created


+ resource "vsphere_virtual_machine" "csr1kv2" {
+ boot_retry_delay = 10000
+ change_version = (known after apply)
+ cpu_limit = -1
+ cpu_share_count = (known after apply)
+ cpu_share_level = "normal"
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"

+ default_ip_address = (known after apply)


+ ept_rvi_mode = "automatic"
+ firmware = "bios"
+ force_power_off = true
+ guest_id = "other26xLinux64Guest"
+ guest_ip_addresses = (known after apply)
+ host_system_id = (known after apply)
+ hv_mode = "hvAuto"
+ id = (known after apply)
+ imported = (known after apply)

+ latency_sensitivity = "normal"
+ memory = 4096
+ memory_limit = -1
+ memory_share_count = (known after apply)
+ memory_share_level = "normal"
+ migrate_wait_timeout = 30
+ moid = (known after apply)
+ name = "csr1kv2"
+ num_cores_per_socket = 1
+ num_cpus = 4
+ reboot_required = (known after apply)
+ resource_pool_id = "ha-root-pool"
+ run_tools_scripts_after_power_on = true
+ run_tools_scripts_after_resume = true
+ run_tools_scripts_before_guest_shutdown = true
+ run_tools_scripts_before_guest_standby = true
+ scsi_bus_sharing = "noSharing"
+ scsi_controller_count = 1
+ scsi_type = "pvscsi"
+ shutdown_wait_timeout = 1
+ swap_placement_policy = "inherit"
+ uuid = (known after apply)
+ vapp_transport = (known after apply)
+ vmware_tools_status = (known after apply)
+ vmx_path = (known after apply)
+ wait_for_guest_ip_timeout = 5
+ wait_for_guest_net_routable = true
+ wait_for_guest_net_timeout = 0

+ cdrom {
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"
+ device_address = (known after apply)
+ key = (known after apply)
+ path = "csr1kv2/bootstrap.iso"
}

+ disk {
+ attach = true
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"
+ device_address = (known after apply)
+ disk_mode = "independent_nonpersistent"
+ disk_sharing = "sharingNone"
+ eagerly_scrub = false
+ io_limit = -1
+ io_reservation = 0
+ io_share_count = 0
+ io_share_level = "normal"
+ keep_on_remove = false
+ key = 0
+ label = "disk0"
+ path = "csr1kv2/csr.vmdk"
+ thin_provisioned = true
+ unit_number = 0
+ uuid = (known after apply)
+ write_through = false
}

+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = "HaNetwork-VM Network"
}

}

# vsphere_virtual_machine.csr1kv3 will be created


+ resource "vsphere_virtual_machine" "csr1kv3" {
+ boot_retry_delay = 10000
+ change_version = (known after apply)
+ cpu_limit = -1
+ cpu_share_count = (known after apply)
+ cpu_share_level = "normal"
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"

+ default_ip_address = (known after apply)


+ ept_rvi_mode = "automatic"
+ firmware = "bios"
+ force_power_off = true
+ guest_id = "other26xLinux64Guest"
+ guest_ip_addresses = (known after apply)
+ host_system_id = (known after apply)
+ hv_mode = "hvAuto"
+ id = (known after apply)
+ imported = (known after apply)
+ latency_sensitivity = "normal"
+ memory = 4096
+ memory_limit = -1
+ memory_share_count = (known after apply)
+ memory_share_level = "normal"
+ migrate_wait_timeout = 30
+ moid = (known after apply)
+ name = "csr1kv3"
+ num_cores_per_socket = 1
+ num_cpus = 4
+ reboot_required = (known after apply)
+ resource_pool_id = "ha-root-pool"
+ run_tools_scripts_after_power_on = true
+ run_tools_scripts_after_resume = true
+ run_tools_scripts_before_guest_shutdown = true
+ run_tools_scripts_before_guest_standby = true
+ scsi_bus_sharing = "noSharing"
+ scsi_controller_count = 1
+ scsi_type = "pvscsi"
+ shutdown_wait_timeout = 1
+ swap_placement_policy = "inherit"
+ uuid = (known after apply)
+ vapp_transport = (known after apply)
+ vmware_tools_status = (known after apply)
+ vmx_path = (known after apply)
+ wait_for_guest_ip_timeout = 5
+ wait_for_guest_net_routable = true
+ wait_for_guest_net_timeout = 0

+ cdrom {
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"
+ device_address = (known after apply)
+ key = (known after apply)
+ path = "csr1kv3/bootstrap.iso"
}

+ disk {
+ attach = true
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"
+ device_address = (known after apply)
+ disk_mode = "independent_nonpersistent"
+ disk_sharing = "sharingNone"
+ eagerly_scrub = false
+ io_limit = -1
+ io_reservation = 0
+ io_share_count = 0
+ io_share_level = "normal"
+ keep_on_remove = false
+ key = 0
+ label = "disk0"
+ path = "csr1kv3/csr.vmdk"
+ thin_provisioned = true
+ unit_number = 0
+ uuid = (known after apply)
+ write_through = false
}

+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ network_interface {

+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = "HaNetwork-VM Network"
}
}

# vsphere_virtual_machine.k8s1 will be created


+ resource "vsphere_virtual_machine" "k8s1" {
+ boot_retry_delay = 10000
+ change_version = (known after apply)
+ cpu_limit = -1
+ cpu_share_count = (known after apply)
+ cpu_share_level = "normal"
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"

+ default_ip_address = (known after apply)


+ ept_rvi_mode = "automatic"
+ firmware = "bios"
+ force_power_off = true
+ guest_id = "ubuntu64Guest"
+ guest_ip_addresses = (known after apply)
+ host_system_id = (known after apply)
+ hv_mode = "hvAuto"
+ id = (known after apply)
+ imported = (known after apply)
+ latency_sensitivity = "normal"
+ memory = 8192
+ memory_limit = -1
+ memory_share_count = (known after apply)
+ memory_share_level = "normal"
+ migrate_wait_timeout = 30
+ moid = (known after apply)
+ name = "k8s1"
+ num_cores_per_socket = 1
+ num_cpus = 2
+ reboot_required = (known after apply)
+ resource_pool_id = "ha-root-pool"
+ run_tools_scripts_after_power_on = true
+ run_tools_scripts_after_resume = true

+ run_tools_scripts_before_guest_shutdown = true
+ run_tools_scripts_before_guest_standby = true
+ scsi_bus_sharing = "noSharing"
+ scsi_controller_count = 1
+ scsi_type = "pvscsi"
+ shutdown_wait_timeout = 1
+ swap_placement_policy = "inherit"
+ uuid = (known after apply)
+ vapp_transport = (known after apply)
+ vmware_tools_status = (known after apply)
+ vmx_path = (known after apply)
+ wait_for_guest_ip_timeout = 5
+ wait_for_guest_net_routable = true
+ wait_for_guest_net_timeout = 0

+ disk {
+ attach = true
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"
+ device_address = (known after apply)
+ disk_mode = "independent_nonpersistent"
+ disk_sharing = "sharingNone"
+ eagerly_scrub = false
+ io_limit = -1
+ io_reservation = 0
+ io_share_count = 0
+ io_share_level = "normal"
+ keep_on_remove = false
+ key = 0
+ label = "disk0"
+ path = "k8s1/k8s1.vmdk"
+ thin_provisioned = true
+ unit_number = 0
+ uuid = (known after apply)
+ write_through = false
}

+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = "HaNetwork-VM Network"
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)

+ network_id = (known after apply)
}
}

# vsphere_virtual_machine.k8s2 will be created


+ resource "vsphere_virtual_machine" "k8s2" {
+ boot_retry_delay = 10000
+ change_version = (known after apply)
+ cpu_limit = -1
+ cpu_share_count = (known after apply)
+ cpu_share_level = "normal"
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"

+ default_ip_address = (known after apply)


+ ept_rvi_mode = "automatic"
+ firmware = "bios"
+ force_power_off = true
+ guest_id = "ubuntu64Guest"
+ guest_ip_addresses = (known after apply)
+ host_system_id = (known after apply)
+ hv_mode = "hvAuto"
+ id = (known after apply)
+ imported = (known after apply)
+ latency_sensitivity = "normal"
+ memory = 8192
+ memory_limit = -1
+ memory_share_count = (known after apply)
+ memory_share_level = "normal"
+ migrate_wait_timeout = 30
+ moid = (known after apply)
+ name = "k8s2"
+ num_cores_per_socket = 1
+ num_cpus = 2
+ reboot_required = (known after apply)
+ resource_pool_id = "ha-root-pool"
+ run_tools_scripts_after_power_on = true
+ run_tools_scripts_after_resume = true
+ run_tools_scripts_before_guest_shutdown = true
+ run_tools_scripts_before_guest_standby = true
+ scsi_bus_sharing = "noSharing"
+ scsi_controller_count = 1
+ scsi_type = "pvscsi"
+ shutdown_wait_timeout = 1
+ swap_placement_policy = "inherit"
+ uuid = (known after apply)
+ vapp_transport = (known after apply)
+ vmware_tools_status = (known after apply)
+ vmx_path = (known after apply)
+ wait_for_guest_ip_timeout = 5
+ wait_for_guest_net_routable = true
+ wait_for_guest_net_timeout = 0

+ disk {
+ attach = true
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"
+ device_address = (known after apply)

+ disk_mode = "independent_nonpersistent"
+ disk_sharing = "sharingNone"
+ eagerly_scrub = false
+ io_limit = -1
+ io_reservation = 0
+ io_share_count = 0
+ io_share_level = "normal"
+ keep_on_remove = false
+ key = 0
+ label = "disk0"
+ path = "k8s2/k8s2.vmdk"
+ thin_provisioned = true
+ unit_number = 0
+ uuid = (known after apply)
+ write_through = false
}

+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = "HaNetwork-VM Network"
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
}

# vsphere_virtual_machine.k8s3 will be created


+ resource "vsphere_virtual_machine" "k8s3" {
+ boot_retry_delay = 10000
+ change_version = (known after apply)
+ cpu_limit = -1
+ cpu_share_count = (known after apply)
+ cpu_share_level = "normal"
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"

+ default_ip_address = (known after apply)


+ ept_rvi_mode = "automatic"
+ firmware = "bios"
+ force_power_off = true
+ guest_id = "ubuntu64Guest"
+ guest_ip_addresses = (known after apply)

+ host_system_id = (known after apply)
+ hv_mode = "hvAuto"
+ id = (known after apply)
+ imported = (known after apply)
+ latency_sensitivity = "normal"
+ memory = 8192
+ memory_limit = -1
+ memory_share_count = (known after apply)
+ memory_share_level = "normal"
+ migrate_wait_timeout = 30
+ moid = (known after apply)
+ name = "k8s3"
+ num_cores_per_socket = 1
+ num_cpus = 2
+ reboot_required = (known after apply)
+ resource_pool_id = "ha-root-pool"
+ run_tools_scripts_after_power_on = true
+ run_tools_scripts_after_resume = true
+ run_tools_scripts_before_guest_shutdown = true
+ run_tools_scripts_before_guest_standby = true
+ scsi_bus_sharing = "noSharing"
+ scsi_controller_count = 1
+ scsi_type = "pvscsi"
+ shutdown_wait_timeout = 1
+ swap_placement_policy = "inherit"
+ uuid = (known after apply)
+ vapp_transport = (known after apply)
+ vmware_tools_status = (known after apply)
+ vmx_path = (known after apply)
+ wait_for_guest_ip_timeout = 5
+ wait_for_guest_net_routable = true
+ wait_for_guest_net_timeout = 0

+ disk {
+ attach = true
+ datastore_id = "5d851500-53966784-2df3-0050569c58b8"
+ device_address = (known after apply)
+ disk_mode = "independent_nonpersistent"
+ disk_sharing = "sharingNone"
+ eagerly_scrub = false
+ io_limit = -1
+ io_reservation = 0
+ io_share_count = 0
+ io_share_level = "normal"
+ keep_on_remove = false
+ key = 0
+ label = "disk0"
+ path = "k8s3/k8s3.vmdk"
+ thin_provisioned = true
+ unit_number = 0
+ uuid = (known after apply)
+ write_through = false
}

+ network_interface {
+ adapter_type = "vmxnet3"

+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = "HaNetwork-VM Network"
}
+ network_interface {
+ adapter_type = "vmxnet3"
+ bandwidth_limit = -1
+ bandwidth_reservation = 0
+ bandwidth_share_count = (known after apply)
+ bandwidth_share_level = "normal"
+ device_address = (known after apply)
+ key = (known after apply)
+ mac_address = (known after apply)
+ network_id = (known after apply)
}
}

Plan: 18 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?


Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.

Enter a value: yes

vsphere_host_virtual_switch.devnet_lab_vswitch[0]: Creating...
vsphere_host_virtual_switch.devnet_lab_vswitch[3]: Creating...
vsphere_host_virtual_switch.devnet_lab_vswitch[4]: Creating...
vsphere_host_virtual_switch.devnet_lab_vswitch[5]: Creating...
vsphere_host_virtual_switch.devnet_lab_vswitch[2]: Creating...
vsphere_host_virtual_switch.devnet_lab_vswitch[1]: Creating...
vsphere_host_virtual_switch.devnet_lab_vswitch[2]: Creation complete after 0s [id=tf-
HostVirtualSwitch:ha-host:devnet_lab_vswitch_2]
vsphere_host_virtual_switch.devnet_lab_vswitch[5]: Creation complete after 0s [id=tf-
HostVirtualSwitch:ha-host:devnet_lab_vswitch_5]
vsphere_host_virtual_switch.devnet_lab_vswitch[0]: Creation complete after 0s [id=tf-
HostVirtualSwitch:ha-host:devnet_lab_vswitch_0]
vsphere_host_virtual_switch.devnet_lab_vswitch[3]: Creation complete after 0s [id=tf-
HostVirtualSwitch:ha-host:devnet_lab_vswitch_3]
vsphere_host_virtual_switch.devnet_lab_vswitch[1]: Creation complete after 0s [id=tf-
HostVirtualSwitch:ha-host:devnet_lab_vswitch_1]
vsphere_host_virtual_switch.devnet_lab_vswitch[4]: Creation complete after 0s [id=tf-
HostVirtualSwitch:ha-host:devnet_lab_vswitch_4]
vsphere_host_port_group.pg[2]: Creating...
vsphere_host_port_group.pg[3]: Creating...
vsphere_host_port_group.pg[0]: Creating...
vsphere_host_port_group.pg[4]: Creating...
vsphere_host_port_group.pg[1]: Creating...
vsphere_host_port_group.pg[5]: Creating...
vsphere_host_port_group.pg[2]: Creation complete after 0s [id=tf-HostPortGroup:ha-
host:vm_network_2]

vsphere_host_port_group.pg[3]: Creation complete after 0s [id=tf-HostPortGroup:ha-
host:vm_network_3]
vsphere_host_port_group.pg[5]: Creation complete after 0s [id=tf-HostPortGroup:ha-
host:vm_network_5]
vsphere_host_port_group.pg[0]: Creation complete after 1s [id=tf-HostPortGroup:ha-
host:vm_network_0]
vsphere_host_port_group.pg[4]: Creation complete after 1s [id=tf-HostPortGroup:ha-
host:vm_network_4]
vsphere_host_port_group.pg[1]: Creation complete after 1s [id=tf-HostPortGroup:ha-
host:vm_network_1]
data.vsphere_network.vm_network_3: Refreshing state...
data.vsphere_network.vm_network_2: Refreshing state...
data.vsphere_network.vm_network_1: Refreshing state...
data.vsphere_network.vm_network_5: Refreshing state...
data.vsphere_network.vm_network_6: Refreshing state...
data.vsphere_network.vm_network_4: Refreshing state...
data.vsphere_network.vm_network_0: Refreshing state...
data.vsphere_network.vm_network_7: Refreshing state...
vsphere_virtual_machine.k8s1: Creating...
vsphere_virtual_machine.k8s2: Creating...
vsphere_virtual_machine.k8s3: Creating...
vsphere_virtual_machine.csr1kv2: Creating...
vsphere_virtual_machine.csr1kv3: Creating...
vsphere_virtual_machine.csr1kv1: Creating...
vsphere_virtual_machine.k8s1: Still creating... [10s elapsed]
vsphere_virtual_machine.k8s2: Still creating... [10s elapsed]
vsphere_virtual_machine.k8s3: Still creating... [10s elapsed]
vsphere_virtual_machine.csr1kv2: Still creating... [10s elapsed]
vsphere_virtual_machine.csr1kv3: Still creating... [10s elapsed]
vsphere_virtual_machine.csr1kv1: Still creating... [10s elapsed]
vsphere_virtual_machine.k8s1: Still creating... [20s elapsed]
vsphere_virtual_machine.k8s2: Still creating... [20s elapsed]
vsphere_virtual_machine.k8s3: Still creating... [20s elapsed]
vsphere_virtual_machine.csr1kv2: Still creating... [20s elapsed]
vsphere_virtual_machine.csr1kv3: Still creating... [20s elapsed]
vsphere_virtual_machine.csr1kv1: Still creating... [20s elapsed]
vsphere_virtual_machine.k8s1: Still creating... [30s elapsed]
vsphere_virtual_machine.k8s2: Still creating... [30s elapsed]
vsphere_virtual_machine.k8s3: Still creating... [30s elapsed]
vsphere_virtual_machine.csr1kv2: Still creating... [30s elapsed]
vsphere_virtual_machine.csr1kv3: Still creating... [30s elapsed]
vsphere_virtual_machine.csr1kv1: Still creating... [30s elapsed]
vsphere_virtual_machine.k8s1: Still creating... [40s elapsed]
vsphere_virtual_machine.k8s2: Still creating... [40s elapsed]
vsphere_virtual_machine.k8s3: Still creating... [40s elapsed]
vsphere_virtual_machine.csr1kv2: Still creating... [40s elapsed]
vsphere_virtual_machine.csr1kv3: Still creating... [40s elapsed]
vsphere_virtual_machine.csr1kv1: Still creating... [40s elapsed]
vsphere_virtual_machine.k8s1: Still creating... [50s elapsed]
vsphere_virtual_machine.k8s2: Still creating... [50s elapsed]
vsphere_virtual_machine.k8s3: Still creating... [50s elapsed]
vsphere_virtual_machine.csr1kv2: Still creating... [50s elapsed]
vsphere_virtual_machine.csr1kv3: Still creating... [50s elapsed]
vsphere_virtual_machine.csr1kv1: Still creating... [50s elapsed]
vsphere_virtual_machine.k8s1: Still creating... [1m0s elapsed]
vsphere_virtual_machine.k8s2: Still creating... [1m0s elapsed]

vsphere_virtual_machine.k8s3: Still creating... [1m0s elapsed]
vsphere_virtual_machine.csr1kv2: Still creating... [1m0s elapsed]
vsphere_virtual_machine.csr1kv3: Still creating... [1m0s elapsed]
vsphere_virtual_machine.csr1kv1: Still creating... [1m0s elapsed]
vsphere_virtual_machine.k8s1: Still creating... [1m10s elapsed]
vsphere_virtual_machine.k8s2: Still creating... [1m10s elapsed]
vsphere_virtual_machine.k8s3: Still creating... [1m10s elapsed]
vsphere_virtual_machine.csr1kv2: Still creating... [1m10s elapsed]
vsphere_virtual_machine.csr1kv3: Still creating... [1m10s elapsed]
vsphere_virtual_machine.csr1kv1: Still creating... [1m10s elapsed]
vsphere_virtual_machine.k8s1: Creation complete after 1m18s [id=564dd38f-6003-be29-
2f69-791f39f949d9]
vsphere_virtual_machine.k8s2: Still creating... [1m20s elapsed]
vsphere_virtual_machine.k8s3: Still creating... [1m20s elapsed]
vsphere_virtual_machine.csr1kv2: Still creating... [1m20s elapsed]
vsphere_virtual_machine.csr1kv3: Still creating... [1m20s elapsed]
vsphere_virtual_machine.csr1kv1: Still creating... [1m20s elapsed]
vsphere_virtual_machine.k8s2: Creation complete after 1m22s [id=564d329d-8eee-2da4-
9aee-58484be0d340]
vsphere_virtual_machine.k8s3: Creation complete after 1m24s [id=564dba8c-997a-1661-
ecfd-1b1e3741dde2]
vsphere_virtual_machine.csr1kv2: Still creating... [1m30s elapsed]
vsphere_virtual_machine.csr1kv3: Still creating... [1m30s elapsed]
vsphere_virtual_machine.csr1kv1: Still creating... [1m30s elapsed]
vsphere_virtual_machine.csr1kv2: Still creating... [1m40s elapsed]
vsphere_virtual_machine.csr1kv3: Still creating... [1m40s elapsed]
vsphere_virtual_machine.csr1kv1: Still creating... [1m40s elapsed]
vsphere_virtual_machine.csr1kv2: Still creating... [1m50s elapsed]
vsphere_virtual_machine.csr1kv3: Still creating... [1m50s elapsed]
vsphere_virtual_machine.csr1kv1: Still creating... [1m50s elapsed]
vsphere_virtual_machine.csr1kv2: Still creating... [2m0s elapsed]
vsphere_virtual_machine.csr1kv3: Still creating... [2m0s elapsed]
vsphere_virtual_machine.csr1kv1: Still creating... [2m0s elapsed]
vsphere_virtual_machine.csr1kv2: Still creating... [2m10s elapsed]
vsphere_virtual_machine.csr1kv3: Still creating... [2m10s elapsed]
vsphere_virtual_machine.csr1kv1: Still creating... [2m10s elapsed]
vsphere_virtual_machine.csr1kv2: Still creating... [2m20s elapsed]
vsphere_virtual_machine.csr1kv3: Still creating... [2m20s elapsed]
vsphere_virtual_machine.csr1kv1: Still creating... [2m20s elapsed]
vsphere_virtual_machine.csr1kv2: Still creating... [2m30s elapsed]
vsphere_virtual_machine.csr1kv3: Still creating... [2m30s elapsed]
vsphere_virtual_machine.csr1kv1: Still creating... [2m30s elapsed]
vsphere_virtual_machine.csr1kv2: Still creating... [2m40s elapsed]
vsphere_virtual_machine.csr1kv3: Still creating... [2m40s elapsed]
vsphere_virtual_machine.csr1kv1: Still creating... [2m40s elapsed]
vsphere_virtual_machine.csr1kv2: Still creating... [2m50s elapsed]
vsphere_virtual_machine.csr1kv3: Still creating... [2m50s elapsed]
vsphere_virtual_machine.csr1kv1: Still creating... [2m50s elapsed]
vsphere_virtual_machine.csr1kv2: Still creating... [3m0s elapsed]
vsphere_virtual_machine.csr1kv3: Still creating... [3m0s elapsed]
vsphere_virtual_machine.csr1kv1: Still creating... [3m0s elapsed]
vsphere_virtual_machine.csr1kv2: Creation complete after 3m5s [id=564d8a38-e0b6-4735-
4c70-05d7ef3d58ee]
vsphere_virtual_machine.csr1kv3: Provisioning with 'remote-exec'...
vsphere_virtual_machine.csr1kv3 (remote-exec): Connecting to remote host via SSH...
vsphere_virtual_machine.csr1kv3 (remote-exec): Host: 192.168.10.70

vsphere_virtual_machine.csr1kv3 (remote-exec): User: root
vsphere_virtual_machine.csr1kv3 (remote-exec): Password: true
vsphere_virtual_machine.csr1kv3 (remote-exec): Private key: false
vsphere_virtual_machine.csr1kv3 (remote-exec): Certificate: false
vsphere_virtual_machine.csr1kv3 (remote-exec): SSH Agent: false
vsphere_virtual_machine.csr1kv3 (remote-exec): Checking Host Key: false
vsphere_virtual_machine.csr1kv3 (remote-exec): Connected!
vsphere_virtual_machine.csr1kv1: Creation complete after 3m6s [id=564d74fe-9a5e-fa07-
a44e-af5dfdbf5c04]
vsphere_virtual_machine.csr1kv3 (remote-exec): Powering on VM:
vsphere_virtual_machine.csr1kv3: Creation complete after 3m8s [id=564dcddb-a837-ba79-
de62-2bde4ebad152]

Apply complete! Resources: 18 added, 0 changed, 0 destroyed.

Task 3: Manage Firewall Policies Using Terraform


You will manage the firewall access lists, network objects, and static routes using Terraform.

Activity

Step 1 Change the directory to ~/labs/lab12/asa_policies using the cd ~/labs/lab12/asa_policies command.

student@student-vm:$ cd ~/labs/lab12/asa_policies/
student@student-vm:lab12/asa_policies$

Review the Terraform Configuration Files

Step 2 Using the cat command, check the content of the following files: access_lists.tf, network_objects.tf, and
static_routes.tf.

• access_lists.tf: This file stores definitions of the access rules to be created by Terraform.
• network_objects.tf: This file stores definitions of network objects (hosts, ranges, and subnets).
• static_routes.tf: This file stores definitions of the IP static routes configured by Terraform.

student@student-vm:lab12/asa_policies$ cat access_lists.tf
resource "ciscoasa_acl" "terraform_acl_1" {
name = "terraform_acl_1"
rule {
source = "10.0.0.1/32"
destination = "192.168.0.0/24"
destination_service = "tcp/443"
}
rule {
source = "10.0.1.0/24"
source_service = "udp"
destination = "172.16.0.1/32"
destination_service = "udp/53"
}
rule {
source = "0.0.0.0/0"
destination = "0.0.0.0/0"
destination_service = "icmp/0"
}
}
student@student-vm:lab12/asa_policies$ cat network_objects.tf
resource "ciscoasa_network_object" "ssh_jumphost" {
name = "ssh_jumphost"
value = "192.168.1.1"
}
resource "ciscoasa_network_object" "ssh_jumphost_range" {
name = "ssh_jumphost_range"
value = "192.168.1.1-192.168.1.5"
}
resource "ciscoasa_network_object" "ssh_jumphost_subnet" {
name = "ssh_jumphost_subnet"
value = "192.168.1.0/24"
}
student@student-vm:lab12/asa_policies$ cat static_routes.tf
resource "ciscoasa_static_route" "management_static_route" {
interface = "management"
network = "192.168.255.255/32"
gateway = "10.99.0.1"
}
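
These resources are managed through the ciscoasa Terraform provider, which must also be configured with the firewall's REST API endpoint and credentials (typically in a separate provider file). A minimal provider block might look like the following sketch; the address and credentials shown here are placeholders, and the lab environment may configure the provider differently:

```hcl
provider "ciscoasa" {
  # Management REST API endpoint of the ASA (placeholder address)
  api_url       = "https://10.0.0.10"
  username      = "admin"
  password      = "changeme"
  # Skip TLS certificate verification, e.g. for a self-signed lab certificate
  ssl_no_verify = true
}
```

With the provider configured, running terraform init downloads the provider plugin before the first terraform apply.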

Review the Cisco ASA Running Configuration

Step 3 Before deploying the Terraform configuration, you will validate the running configuration directly on the Cisco
ASA host. None of the configuration described in the Terraform .tf files is expected to be present yet on the
Cisco ASA host test_asa1.

Establish an SSH session to the test_asa1 host using the ssh student@test_asa1 command.
Execute the show running-configuration access-list, show running-configuration object, and
show running-configuration route commands.

asa1# show running-configuration access-list
asa1#
asa1# show running-configuration object
asa1#
asa1# show running-configuration route
route management 0.0.0.0 0.0.0.0 10.99.0.1 1
asa1#

Deploy Cisco ASA Configuration with Terraform

Step 4 Execute the terraform apply command and type yes when prompted for deployment confirmation.
Terraform will refresh its view of the existing infrastructure, compare it with the configuration files, and ask
for deployment confirmation. Once confirmed, Terraform will start creating resources on the Cisco ASA. You
will notice that all the resources declared in the .tf files (access lists, network objects, static routes) are
created.

student@student-vm:lab12/asa_policies$ terraform apply

An execution plan has been generated and is shown below.


Resource actions are indicated with the following symbols:
+ create

Terraform will perform the following actions:

# ciscoasa_acl.terraform_acl_1 will be created


+ resource "ciscoasa_acl" "terraform_acl_1" {
+ id = (known after apply)
+ name = "terraform_acl_1"

+ rule {
+ active = true
+ destination = "192.168.0.0/24"
+ destination_service = "tcp/443"
+ id = (known after apply)
+ log_interval = 300
+ log_status = "Default"
+ permit = true
+ source = "10.0.0.1/32"
}
+ rule {
+ active = true
+ destination = "172.16.0.1/32"
+ destination_service = "udp/53"
+ id = (known after apply)
+ log_interval = 300
+ log_status = "Default"
+ permit = true
+ source = "10.0.1.0/24"
+ source_service = "udp"
}
+ rule {
+ active = true
+ destination = "0.0.0.0/0"
+ destination_service = "icmp/0"
+ id = (known after apply)
+ log_interval = 300
+ log_status = "Default"
+ permit = true
+ source = "0.0.0.0/0"
}
}

# ciscoasa_network_object.ssh_jumphost will be created


+ resource "ciscoasa_network_object" "ssh_jumphost" {
+ id = (known after apply)
+ name = "ssh_jumphost"
+ value = "192.168.1.1"
}

# ciscoasa_network_object.ssh_jumphost_range will be created


+ resource "ciscoasa_network_object" "ssh_jumphost_range" {
+ id = (known after apply)

+ name = "ssh_jumphost_range"
+ value = "192.168.1.1-192.168.1.5"
}

# ciscoasa_network_object.ssh_jumphost_subnet will be created


+ resource "ciscoasa_network_object" "ssh_jumphost_subnet" {
+ id = (known after apply)
+ name = "ssh_jumphost_subnet"
+ value = "192.168.1.0/24"
}

# ciscoasa_static_route.management_static_route will be created


+ resource "ciscoasa_static_route" "management_static_route" {
+ gateway = "10.99.0.1"
+ id = (known after apply)
+ interface = "management"
+ metric = 1
+ network = "192.168.255.255/32"
+ tracked = false
+ tunneled = false
}

Plan: 5 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?


Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.

Enter a value: yes

ciscoasa_network_object.ssh_jumphost_range: Creating...
ciscoasa_network_object.ssh_jumphost: Creating...
ciscoasa_network_object.ssh_jumphost_subnet: Creating...
ciscoasa_static_route.management_static_route: Creating...
ciscoasa_acl.terraform_acl_1: Creating...
ciscoasa_network_object.ssh_jumphost_range: Creation complete after 2s
[id=ssh_jumphost_range]
ciscoasa_network_object.ssh_jumphost: Creation complete after 2s [id=ssh_jumphost]
ciscoasa_network_object.ssh_jumphost_subnet: Creation complete after 2s
[id=ssh_jumphost_subnet]
ciscoasa_static_route.management_static_route: Creation complete after 3s [id=c7ae8484]
ciscoasa_acl.terraform_acl_1: Creation complete after 6s [id=terraform_acl_1]

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

Review the Cisco ASA Running Configuration After the Initial Terraform Deployment

Step 5 Terraform confirmed that the requested changes have been successfully deployed to the asa1 host. Validate the
state of the access list, network objects, and static route configuration after the initial Terraform deployment.

As defined in the Terraform configuration, a new access list named terraform_acl_1, several
network objects, and one static route have been created.

Use the show running-configuration access-list, show running-configuration object, and
show running-configuration route commands within the SSH session already established to the
asa1 host.
asa1# show running-configuration access-list
access-list terraform_acl_1 extended permit tcp host 10.0.0.1 192.168.0.0 255.255.255.0
eq https
access-list terraform_acl_1 extended permit udp 10.0.1.0 255.255.255.0 host 172.16.0.1
eq domain
access-list terraform_acl_1 extended permit icmp any4 any4 echo-reply

asa1# show running-configuration object
object network ssh_jumphost_range
range 192.168.1.1 192.168.1.5
object network ssh_jumphost
host 192.168.1.1
object network ssh_jumphost_subnet
subnet 192.168.1.0 255.255.255.0

asa1# show running-configuration route
route management 0.0.0.0 0.0.0.0 10.99.0.1 1
route management 192.168.255.255 255.255.255.255 10.99.0.1 1
asa1#

Modify Existing Terraform Configuration

Step 6 In a text editor of your choice, edit the access_lists.tf, network_objects.tf, and static_routes.tf Terraform
configuration files. In the access_lists.tf file, add a new deny rule as the first rule in the resource, as follows:

rule {
source = "10.0.0.1/32"
destination = "192.168.0.255/32"
destination_service = "tcp/443"
permit = "false"
}
In the network_objects.tf file, create a new network object as follows:
resource "ciscoasa_network_object" "dns_server" {
name = "dns_server"
value = "192.168.10.10"
}
In the static_routes.tf file, create a new static route as follows:
resource "ciscoasa_static_route" "management_static_route_2" {
interface = "management"
network = "192.168.254.254/32"
gateway = "10.99.0.1"
}
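
Note that the deny rule above quotes the boolean value ("false"), which legacy HCL accepts. In current Terraform versions the idiomatic form is an unquoted boolean; the same rule (a sketch, not part of the lab files) would read:

```hcl
rule {
  source              = "10.0.0.1/32"
  destination         = "192.168.0.255/32"
  destination_service = "tcp/443"
  # Deny traffic matching this rule; an unquoted boolean is the idiomatic HCL form
  permit              = false
}
```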

student@student-vm:lab12/asa_policies$ cat access_lists.tf
resource "ciscoasa_acl" "terraform_acl_1" {
name = "terraform_acl_1"
rule {
source = "10.0.0.1/32"
destination = "192.168.0.255/32"
destination_service = "tcp/443"
permit = "false"
}
rule {
source = "10.0.0.1/32"
destination = "192.168.0.0/24"
destination_service = "tcp/443"
}
rule {
source = "10.0.1.0/24"
source_service = "udp"
destination = "172.16.0.1/32"
destination_service = "udp/53"
}
rule {
source = "0.0.0.0/0"
destination = "0.0.0.0/0"
destination_service = "icmp/0"
}
}

student@student-vm:lab12/asa_policies$ cat network_objects.tf
resource "ciscoasa_network_object" "dns_server" {
name = "dns_server"
value = "192.168.10.10"
}
resource "ciscoasa_network_object" "ssh_jumphost" {
name = "ssh_jumphost"
value = "192.168.1.1"
}
resource "ciscoasa_network_object" "ssh_jumphost_range" {
name = "ssh_jumphost_range"
value = "192.168.1.1-192.168.1.5"
}
resource "ciscoasa_network_object" "ssh_jumphost_subnet" {
name = "ssh_jumphost_subnet"
value = "192.168.1.0/24"
}

student@student-vm:lab12/asa_policies$ cat static_routes.tf
resource "ciscoasa_static_route" "management_static_route" {
interface = "management"
network = "192.168.255.255/32"
gateway = "10.99.0.1"
}

resource "ciscoasa_static_route" "management_static_route_2" {
interface = "management"
network = "192.168.254.254/32"
gateway = "10.99.0.1"
}

Deploy the Modified Cisco ASA Configuration with Terraform

Step 7 Execute the terraform apply command and type yes when prompted for deployment confirmation.
Terraform will refresh its view of the existing infrastructure, compare it with the configuration files, and ask
for deployment confirmation. Once confirmed, Terraform will start creating new resources and modifying
existing ones on the Cisco ASA host. You will notice that all the resources declared in the .tf files (access
lists, network objects, static routes) are applied.

student@student-vm:lab12/asa_policies$ terraform apply
ciscoasa_network_object.ssh_jumphost_range: Refreshing state... [id=ssh_jumphost_range]
ciscoasa_network_object.ssh_jumphost_subnet: Refreshing state...
[id=ssh_jumphost_subnet]
ciscoasa_network_object.ssh_jumphost: Refreshing state... [id=ssh_jumphost]
ciscoasa_static_route.management_static_route: Refreshing state... [id=c7ae8484]
ciscoasa_acl.terraform_acl_1: Refreshing state... [id=terraform_acl_1]

An execution plan has been generated and is shown below.


Resource actions are indicated with the following symbols:
+ create
~ update in-place

Terraform will perform the following actions:

# ciscoasa_acl.terraform_acl_1 will be updated in-place


~ resource "ciscoasa_acl" "terraform_acl_1" {
id = "terraform_acl_1"
name = "terraform_acl_1"

~ rule {
active = true
~ destination = "192.168.0.0/24" -> "192.168.0.255/32"
destination_service = "tcp/443"
id = "3464578925"
log_interval = 300
log_status = "Default"
~ permit = true -> false
remarks = []
source = "10.0.0.1/32"
}
~ rule {
active = true
~ destination = "172.16.0.1/32" -> "192.168.0.0/24"
~ destination_service = "udp/53" -> "tcp/443"
id = "1260360354"
log_interval = 300
log_status = "Default"
permit = true
remarks = []
~ source = "10.0.1.0/24" -> "10.0.0.1/32"
- source_service = "udp" -> null
}
~ rule {
active = true
~ destination = "0.0.0.0/0" -> "172.16.0.1/32"
~ destination_service = "icmp/0" -> "udp/53"
id = "2915814272"
log_interval = 300
log_status = "Default"
permit = true
remarks = []
~ source = "0.0.0.0/0" -> "10.0.1.0/24"
+ source_service = "udp"
}
+ rule {

+ active = true
+ destination = "0.0.0.0/0"
+ destination_service = "icmp/0"
+ log_interval = 300
+ log_status = "Default"
+ permit = true
+ source = "0.0.0.0/0"
}
}

# ciscoasa_network_object.dns_server will be created


+ resource "ciscoasa_network_object" "dns_server" {
+ id = (known after apply)
+ name = "dns_server"
+ value = "192.168.10.10"
}

# ciscoasa_static_route.management_static_route_2 will be created


+ resource "ciscoasa_static_route" "management_static_route_2" {
+ gateway = "10.99.0.1"
+ id = (known after apply)
+ interface = "management"
+ metric = 1
+ network = "192.168.254.254/32"
+ tracked = false
+ tunneled = false
}

Plan: 2 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?


Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.

Enter a value: yes

ciscoasa_network_object.dns_server: Creating...
ciscoasa_static_route.management_static_route_2: Creating...
ciscoasa_acl.terraform_acl_1: Modifying... [id=terraform_acl_1]
ciscoasa_network_object.dns_server: Creation complete after 1s [id=dns_server]
ciscoasa_static_route.management_static_route_2: Creation complete after 1s
[id=1c8d50c6]
ciscoasa_acl.terraform_acl_1: Modifications complete after 2s [id=terraform_acl_1]

Apply complete! Resources: 2 added, 1 changed, 0 destroyed.

Review the Cisco ASA Running Configuration After the Second Terraform Deployment

Step 8 Terraform confirmed that the requested changes have been successfully deployed to the asa1 host. Validate the
state of the access list, network objects, and static route configuration after the second Terraform deployment.
You will notice that Terraform modified the terraform_acl_1 access list by inserting the new rule at the
beginning (preserving the order defined in the access_lists.tf file), and that it created a new network object
and a new static route. Because Terraform compares rule blocks by their position within the resource,
inserting a rule at the top also caused the plan to report in-place updates for the existing rules, even though
their definitions did not change.

Use the show running-configuration access-list, show running-configuration object, and
show running-configuration route commands within the SSH session already established to the
asa1 host.
asa1# show running-configuration access-list
access-list terraform_acl_1 extended deny tcp host 10.0.0.1 host 192.168.0.255 eq https
access-list terraform_acl_1 extended permit tcp host 10.0.0.1 192.168.0.0 255.255.255.0
eq https
access-list terraform_acl_1 extended permit udp 10.0.1.0 255.255.255.0 host 172.16.0.1
eq domain
access-list terraform_acl_1 extended permit icmp any4 any4 echo-reply

asa1# show running-configuration object
object network ssh_jumphost_range
range 192.168.1.1 192.168.1.5
object network ssh_jumphost
host 192.168.1.1
object network ssh_jumphost_subnet
subnet 192.168.1.0 255.255.255.0
object network dns_server
host 192.168.10.10

asa1# show running-configuration route
route management 0.0.0.0 0.0.0.0 10.99.0.1 1
route management 192.168.254.254 255.255.255.255 10.99.0.1 1
route management 192.168.255.255 255.255.255.255 10.99.0.1 1
asa1#

Destroy Firewall Configuration with Terraform

Step 9 Execute the terraform destroy command and type yes when prompted for destroy confirmation. Terraform
will refresh its view of the existing infrastructure, present a destroy plan, and ask for confirmation. Once
confirmed, Terraform will start destroying resources on the Cisco ASA host. You will notice that only the
deployed resources declared in the .tf files (access lists, static routes, network objects) are destroyed; no
other predeployed configuration elements are deleted. Once finished, Terraform prints a summary of the
actions taken.
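
As a safeguard, resources that must never be deleted this way can be protected with Terraform's lifecycle meta-argument. For example, the management static route could be guarded as follows (a hypothetical addition for illustration, not part of the lab files):

```hcl
resource "ciscoasa_static_route" "management_static_route" {
  interface = "management"
  network   = "192.168.255.255/32"
  gateway   = "10.99.0.1"

  lifecycle {
    # Any plan that would delete this resource, including terraform destroy,
    # is rejected with an error instead of being applied
    prevent_destroy = true
  }
}
```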

student@student-vm:lab12/asa_policies$ terraform destroy
ciscoasa_network_object.ssh_jumphost: Refreshing state... [id=ssh_jumphost]
ciscoasa_network_object.ssh_jumphost_range: Refreshing state... [id=ssh_jumphost_range]
ciscoasa_network_object.ssh_jumphost_subnet: Refreshing state...
[id=ssh_jumphost_subnet]
ciscoasa_acl.terraform_acl_1: Refreshing state... [id=terraform_acl_1]
ciscoasa_static_route.management_static_route: Refreshing state... [id=c7ae8484]
ciscoasa_network_object.dns_server: Refreshing state... [id=dns_server]
ciscoasa_static_route.management_static_route_2: Refreshing state... [id=1c8d50c6]

An execution plan has been generated and is shown below.


Resource actions are indicated with the following symbols:
- destroy

Terraform will perform the following actions:

# ciscoasa_acl.terraform_acl_1 will be destroyed


- resource "ciscoasa_acl" "terraform_acl_1" {
- id = "terraform_acl_1" -> null
- name = "terraform_acl_1" -> null

- rule {
- active = true -> null
- destination = "192.168.0.255/32" -> null
- destination_service = "tcp/443" -> null
- id = "1163114762" -> null
- log_interval = 300 -> null
- log_status = "Default" -> null
- permit = false -> null
- remarks = [] -> null
- source = "10.0.0.1/32" -> null
}
- rule {
- active = true -> null
- destination = "192.168.0.0/24" -> null
- destination_service = "tcp/443" -> null
- id = "3464578925" -> null
- log_interval = 300 -> null
- log_status = "Default" -> null
- permit = true -> null
- remarks = [] -> null
- source = "10.0.0.1/32" -> null
}
- rule {
- active = true -> null
- destination = "172.16.0.1/32" -> null
- destination_service = "udp/53" -> null
- id = "1260360354" -> null
- log_interval = 300 -> null
- log_status = "Default" -> null
- permit = true -> null
- remarks = [] -> null
- source = "10.0.1.0/24" -> null
- source_service = "udp" -> null
}
- rule {

- active = true -> null
- destination = "0.0.0.0/0" -> null
- destination_service = "icmp/0" -> null
- id = "2915814272" -> null
- log_interval = 300 -> null
- log_status = "Default" -> null
- permit = true -> null
- remarks = [] -> null
- source = "0.0.0.0/0" -> null
}
}

# ciscoasa_network_object.dns_server will be destroyed


- resource "ciscoasa_network_object" "dns_server" {
- id = "dns_server" -> null
- name = "dns_server" -> null
- value = "192.168.10.10" -> null
}

# ciscoasa_network_object.ssh_jumphost will be destroyed


- resource "ciscoasa_network_object" "ssh_jumphost" {
- id = "ssh_jumphost" -> null
- name = "ssh_jumphost" -> null
- value = "192.168.1.1" -> null
}

# ciscoasa_network_object.ssh_jumphost_range will be destroyed


- resource "ciscoasa_network_object" "ssh_jumphost_range" {
- id = "ssh_jumphost_range" -> null
- name = "ssh_jumphost_range" -> null
- value = "192.168.1.1-192.168.1.5" -> null
}

# ciscoasa_network_object.ssh_jumphost_subnet will be destroyed


- resource "ciscoasa_network_object" "ssh_jumphost_subnet" {
- id = "ssh_jumphost_subnet" -> null
- name = "ssh_jumphost_subnet" -> null
- value = "192.168.1.0/24" -> null
}

# ciscoasa_static_route.management_static_route will be destroyed


- resource "ciscoasa_static_route" "management_static_route" {
- gateway = "10.99.0.1" -> null
- id = "c7ae8484" -> null
- interface = "management" -> null
- metric = 1 -> null
- network = "192.168.255.255/32" -> null
- tracked = false -> null
- tunneled = false -> null
}

# ciscoasa_static_route.management_static_route_2 will be destroyed


- resource "ciscoasa_static_route" "management_static_route_2" {
- gateway = "10.99.0.1" -> null
- id = "1c8d50c6" -> null
- interface = "management" -> null
- metric = 1 -> null
- network = "192.168.254.254/32" -> null
- tracked = false -> null
- tunneled = false -> null
}

Plan: 0 to add, 0 to change, 7 to destroy.

Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.

Enter a value: yes

ciscoasa_network_object.dns_server: Destroying... [id=dns_server]
ciscoasa_network_object.ssh_jumphost_subnet: Destroying... [id=ssh_jumphost_subnet]
ciscoasa_acl.terraform_acl_1: Destroying... [id=terraform_acl_1]
ciscoasa_static_route.management_static_route_2: Destroying... [id=1c8d50c6]
ciscoasa_network_object.ssh_jumphost: Destroying... [id=ssh_jumphost]
ciscoasa_static_route.management_static_route: Destroying... [id=c7ae8484]
ciscoasa_network_object.ssh_jumphost_range: Destroying... [id=ssh_jumphost_range]
ciscoasa_network_object.dns_server: Destruction complete after 1s
ciscoasa_static_route.management_static_route_2: Destruction complete after 1s
ciscoasa_network_object.ssh_jumphost_subnet: Destruction complete after 2s
ciscoasa_network_object.ssh_jumphost_range: Destruction complete after 2s
ciscoasa_network_object.ssh_jumphost: Destruction complete after 3s
ciscoasa_static_route.management_static_route: Destruction complete after 3s
ciscoasa_acl.terraform_acl_1: Destruction complete after 4s

Destroy complete! Resources: 7 destroyed.


student@student-vm:lab12/asa_policies$
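
Typing yes suits an interactive session; in automation (for example, a CI pipeline) the confirmation prompt is commonly suppressed with Terraform's -auto-approve flag. This is a sketch of a standard CLI option, not a step in this lab:

```shell
# Destroy without the interactive confirmation.
# Use with care: as the prompt warns, there is no undo.
terraform destroy -auto-approve
```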

Review the Cisco ASA Running Configuration After Destroying Resources with Terraform

Step 10 The Terraform tool confirmed that all objects described in the .tf configuration files have been
successfully destroyed. Validate the state of the access lists, network objects, and static routes on the
Cisco ASA1 host. Use the show running-configuration access-list, show running-configuration object,
and show running-config route commands within the already established SSH session with the asa1
host.
asa1# show running-configuration access-list
asa1#
asa1# show running-configuration object
asa1#
asa1# show running-configuration route
route management 0.0.0.0 0.0.0.0 10.99.0.1 1
asa1#
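
The same verification can be scripted once the command output has been captured over the existing SSH session. A minimal sketch (the non_default_routes helper is hypothetical, not part of the lab files) that flags any remaining static routes other than the pre-existing default route:

```python
def non_default_routes(show_route_output: str) -> list:
    """Return the 'route ...' lines that are not the default route.

    An empty result means Terraform removed every static route it managed,
    leaving only the pre-existing default route on asa1.
    """
    remaining = []
    for line in show_route_output.splitlines():
        line = line.strip()
        if line.startswith("route ") and " 0.0.0.0 0.0.0.0 " not in line:
            remaining.append(line)
    return remaining


# Output captured from 'show running-config route' after the destroy:
captured = "route management 0.0.0.0 0.0.0.0 10.99.0.1 1"
print(non_default_routes(captured))  # -> []
```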

Task 4: Destroy Test Environment Using Terraform
You will now destroy the virtual resources created by Terraform.

Activity

Step 1 Change the directory to ~/labs/lab12/test_env using the cd ~/labs/lab12/test_env/ command.

student@student-vm:$ cd ~/labs/lab12/test_env/
student@student-vm:lab12/test_env$

Issue the terraform destroy Command

Step 2 Execute the terraform destroy command and type yes when prompted for confirmation. Terraform
will refresh its information about the existing infrastructure against the configuration files and ask you
to confirm the destroy. Once confirmed, Terraform starts destroying resources within ESXi. You will
notice that all deployed resources declared in the .tf files (virtual machines and virtual networks) are
destroyed and that the asa1 host is powered off by the remote provisioner.

When it finishes, Terraform prints a summary of the actions taken. The output shows that
resources are destroyed in a different order than they were created. Terraform determines
the dependencies between resources, so virtual machines are deleted first and the networking
resources (virtual switches and networks) are removed last.
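
This ordering is not declared anywhere by hand; it follows from references between resources in the .tf files. A simplified sketch (resource names mirror the lab output, but required arguments such as the switch's network_adapters are omitted) showing how a port group's reference to its virtual switch creates the dependency:

```hcl
resource "vsphere_host_virtual_switch" "devnet_lab_vswitch" {
  count          = 6
  name           = "devnet_lab_vswitch_${count.index}"
  host_system_id = data.vsphere_host.esxi_host.id
}

resource "vsphere_host_port_group" "pg" {
  count          = 6
  name           = "vm_network_${count.index}"
  host_system_id = data.vsphere_host.esxi_host.id

  # This reference makes Terraform build each switch before its port group,
  # and destroy each port group before its switch.
  virtual_switch_name = vsphere_host_virtual_switch.devnet_lab_vswitch[count.index].name
}
```

On terraform destroy, Terraform walks this dependency graph in reverse, which is why the virtual machines referencing these networks disappear first and the switches last.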

student@student-vm:lab12/test_env$ terraform destroy
data.vsphere_datacenter.dc: Refreshing state...
data.vsphere_resource_pool.pool: Refreshing state...
data.vsphere_network.network: Refreshing state...
data.vsphere_datastore.datastore: Refreshing state...
data.vsphere_host.esxi_host: Refreshing state...
vsphere_host_virtual_switch.devnet_lab_vswitch[0]: Refreshing state... [id=tf-
HostVirtualSwitch:ha-host:devnet_lab_vswitch_0]
vsphere_host_virtual_switch.devnet_lab_vswitch[2]: Refreshing state... [id=tf-
HostVirtualSwitch:ha-host:devnet_lab_vswitch_2]
vsphere_host_virtual_switch.devnet_lab_vswitch[5]: Refreshing state... [id=tf-
HostVirtualSwitch:ha-host:devnet_lab_vswitch_5]
vsphere_host_virtual_switch.devnet_lab_vswitch[3]: Refreshing state... [id=tf-
HostVirtualSwitch:ha-host:devnet_lab_vswitch_3]
vsphere_host_virtual_switch.devnet_lab_vswitch[1]: Refreshing state... [id=tf-
HostVirtualSwitch:ha-host:devnet_lab_vswitch_1]
vsphere_host_virtual_switch.devnet_lab_vswitch[4]: Refreshing state... [id=tf-
HostVirtualSwitch:ha-host:devnet_lab_vswitch_4]
vsphere_host_port_group.pg[1]: Refreshing state... [id=tf-HostPortGroup:ha-
host:vm_network_1]
vsphere_host_port_group.pg[2]: Refreshing state... [id=tf-HostPortGroup:ha-
host:vm_network_2]
vsphere_host_port_group.pg[3]: Refreshing state... [id=tf-HostPortGroup:ha-
host:vm_network_3]
vsphere_host_port_group.pg[4]: Refreshing state... [id=tf-HostPortGroup:ha-
host:vm_network_4]
vsphere_host_port_group.pg[5]: Refreshing state... [id=tf-HostPortGroup:ha-
host:vm_network_5]
vsphere_host_port_group.pg[0]: Refreshing state... [id=tf-HostPortGroup:ha-
host:vm_network_0]
vsphere_virtual_machine.k8s2: Refreshing state... [id=564d329d-8eee-2da4-9aee-
58484be0d340]
vsphere_virtual_machine.csr1kv1: Refreshing state... [id=564d74fe-9a5e-fa07-a44e-
af5dfdbf5c04]
vsphere_virtual_machine.k8s1: Refreshing state... [id=564dd38f-6003-be29-2f69-
791f39f949d9]
vsphere_virtual_machine.csr1kv3: Refreshing state... [id=564dcddb-a837-ba79-de62-
2bde4ebad152]
vsphere_virtual_machine.k8s3: Refreshing state... [id=564dba8c-997a-1661-ecfd-
1b1e3741dde2]
vsphere_virtual_machine.csr1kv2: Refreshing state... [id=564d8a38-e0b6-4735-4c70-
05d7ef3d58ee]

An execution plan has been generated and is shown below.


Resource actions are indicated with the following symbols:
- destroy

Terraform will perform the following actions:

# vsphere_host_port_group.pg[0] will be destroyed


- resource "vsphere_host_port_group" "pg" {
- computed_policy = {
- "allow_forged_transmits" = "true"
- "allow_mac_changes" = "true"
- "allow_promiscuous" = "false"
- "check_beacon" = "false"
- "failback" = "true"
- "notify_switches" = "true"
- "shaping_average_bandwidth" = "0"
- "shaping_burst_size" = "0"
- "shaping_enabled" = "false"
- "shaping_peak_bandwidth" = "0"
- "teaming_policy" = "loadbalance_srcid"
} -> null
- host_system_id = "ha-host" -> null
- id = "tf-HostPortGroup:ha-host:vm_network_0" -> null
- key = "key-vim.host.PortGroup-vm_network_0" -> null
- name = "vm_network_0" -> null
- ports = [
- {
- key = "key-vim.host.PortGroup.Port-134316034"
- mac_addresses = [
- "00:0c:29:bf:5c:22",
]
- type = "virtualMachine"
},
] -> null
- shaping_average_bandwidth = 0 -> null
- shaping_burst_size = 0 -> null
- shaping_peak_bandwidth = 0 -> null
- virtual_switch_name = "devnet_lab_vswitch_0" -> null
- vlan_id = 0 -> null
}

# vsphere_host_port_group.pg[1] will be destroyed


- resource "vsphere_host_port_group" "pg" {
- computed_policy = {
- "allow_forged_transmits" = "true"
- "allow_mac_changes" = "true"
- "allow_promiscuous" = "false"
- "check_beacon" = "false"
- "failback" = "true"
- "notify_switches" = "true"
- "shaping_average_bandwidth" = "0"
- "shaping_burst_size" = "0"
- "shaping_enabled" = "false"
- "shaping_peak_bandwidth" = "0"
- "teaming_policy" = "loadbalance_srcid"
} -> null
- host_system_id = "ha-host" -> null
- id = "tf-HostPortGroup:ha-host:vm_network_1" -> null
- key = "key-vim.host.PortGroup-vm_network_1" -> null
- name = "vm_network_1" -> null
- ports = [
- {
- key = "key-vim.host.PortGroup.Port-151093250"
- mac_addresses = [
- "00:0c:29:f9:49:e3",
]
- type = "virtualMachine"
},
] -> null
- shaping_average_bandwidth = 0 -> null
- shaping_burst_size = 0 -> null
- shaping_peak_bandwidth = 0 -> null
- virtual_switch_name = "devnet_lab_vswitch_1" -> null
- vlan_id = 0 -> null
}

# vsphere_host_port_group.pg[2] will be destroyed


- resource "vsphere_host_port_group" "pg" {
- computed_policy = {
- "allow_forged_transmits" = "true"
- "allow_mac_changes" = "true"
- "allow_promiscuous" = "false"
- "check_beacon" = "false"
- "failback" = "true"
- "notify_switches" = "true"
- "shaping_average_bandwidth" = "0"
- "shaping_burst_size" = "0"
- "shaping_enabled" = "false"
- "shaping_peak_bandwidth" = "0"
- "teaming_policy" = "loadbalance_srcid"
} -> null
- host_system_id = "ha-host" -> null
- id = "tf-HostPortGroup:ha-host:vm_network_2" -> null
- key = "key-vim.host.PortGroup-vm_network_2" -> null
- name = "vm_network_2" -> null
- ports = [
- {
- key = "key-vim.host.PortGroup.Port-100761602"
- mac_addresses = [
- "00:0c:29:bf:5c:18",
]
- type = "virtualMachine"
},
] -> null
- shaping_average_bandwidth = 0 -> null
- shaping_burst_size = 0 -> null
- shaping_peak_bandwidth = 0 -> null
- virtual_switch_name = "devnet_lab_vswitch_2" -> null
- vlan_id = 0 -> null
}

# vsphere_host_port_group.pg[3] will be destroyed


- resource "vsphere_host_port_group" "pg" {
- computed_policy = {
- "allow_forged_transmits" = "true"
- "allow_mac_changes" = "true"
- "allow_promiscuous" = "false"
- "check_beacon" = "false"
- "failback" = "true"
- "notify_switches" = "true"
- "shaping_average_bandwidth" = "0"
- "shaping_burst_size" = "0"
- "shaping_enabled" = "false"
- "shaping_peak_bandwidth" = "0"
- "teaming_policy" = "loadbalance_srcid"
} -> null
- host_system_id = "ha-host" -> null
- id = "tf-HostPortGroup:ha-host:vm_network_3" -> null
- key = "key-vim.host.PortGroup-vm_network_3" -> null
- name = "vm_network_3" -> null
- ports = [
- {
- key = "key-vim.host.PortGroup.Port-167870466"
- mac_addresses = [
- "00:0c:29:e0:d3:4a",
]
- type = "virtualMachine"
},
] -> null
- shaping_average_bandwidth = 0 -> null
- shaping_burst_size = 0 -> null
- shaping_peak_bandwidth = 0 -> null
- virtual_switch_name = "devnet_lab_vswitch_3" -> null
- vlan_id = 0 -> null
}

# vsphere_host_port_group.pg[4] will be destroyed


- resource "vsphere_host_port_group" "pg" {
- computed_policy = {
- "allow_forged_transmits" = "true"
- "allow_mac_changes" = "true"
- "allow_promiscuous" = "false"
- "check_beacon" = "false"
- "failback" = "true"
- "notify_switches" = "true"
- "shaping_average_bandwidth" = "0"
- "shaping_burst_size" = "0"
- "shaping_enabled" = "false"
- "shaping_peak_bandwidth" = "0"
- "teaming_policy" = "loadbalance_srcid"
} -> null
- host_system_id = "ha-host" -> null
- id = "tf-HostPortGroup:ha-host:vm_network_4" -> null
- key = "key-vim.host.PortGroup-vm_network_4" -> null
- name = "vm_network_4" -> null
- ports = [
- {
- key = "key-vim.host.PortGroup.Port-184647682"
- mac_addresses = [
- "00:0c:29:bf:5c:0e",
]
- type = "virtualMachine"
},
] -> null
- shaping_average_bandwidth = 0 -> null
- shaping_burst_size = 0 -> null
- shaping_peak_bandwidth = 0 -> null
- virtual_switch_name = "devnet_lab_vswitch_4" -> null
- vlan_id = 0 -> null
}

# vsphere_host_port_group.pg[5] will be destroyed
- resource "vsphere_host_port_group" "pg" {
- computed_policy = {
- "allow_forged_transmits" = "true"
- "allow_mac_changes" = "true"
- "allow_promiscuous" = "false"
- "check_beacon" = "false"
- "failback" = "true"
- "notify_switches" = "true"
- "shaping_average_bandwidth" = "0"
- "shaping_burst_size" = "0"
- "shaping_enabled" = "false"
- "shaping_peak_bandwidth" = "0"
- "teaming_policy" = "loadbalance_srcid"
} -> null
- host_system_id = "ha-host" -> null
- id = "tf-HostPortGroup:ha-host:vm_network_5" -> null
- key = "key-vim.host.PortGroup-vm_network_5" -> null
- name = "vm_network_5" -> null
- ports = [
- {
- key = "key-vim.host.PortGroup.Port-117538818"
- mac_addresses = [
- "00:0c:29:3d:58:f8",
]
- type = "virtualMachine"
},
] -> null
- shaping_average_bandwidth = 0 -> null
- shaping_burst_size = 0 -> null
- shaping_peak_bandwidth = 0 -> null
- virtual_switch_name = "devnet_lab_vswitch_5" -> null
- vlan_id = 0 -> null
}

# vsphere_host_virtual_switch.devnet_lab_vswitch[0] will be destroyed


- resource "vsphere_host_virtual_switch" "devnet_lab_vswitch" {
- active_nics = [] -> null
- allow_forged_transmits = true -> null
- allow_mac_changes = true -> null
- allow_promiscuous = false -> null
- beacon_interval = 1 -> null
- check_beacon = false -> null
- failback = true -> null
- host_system_id = "ha-host" -> null
- id = "tf-HostVirtualSwitch:ha-host:devnet_lab_vswitch_0"
-> null
- link_discovery_operation = "listen" -> null
- link_discovery_protocol = "cdp" -> null
- mtu = 1500 -> null
- name = "devnet_lab_vswitch_0" -> null
- network_adapters = [] -> null
- notify_switches = true -> null
- number_of_ports = 128 -> null
- shaping_average_bandwidth = 0 -> null
- shaping_burst_size = 0 -> null
- shaping_enabled = false -> null
- shaping_peak_bandwidth = 0 -> null
- standby_nics = [] -> null
- teaming_policy = "loadbalance_srcid" -> null
}

# vsphere_host_virtual_switch.devnet_lab_vswitch[1] will be destroyed


- resource "vsphere_host_virtual_switch" "devnet_lab_vswitch" {
- active_nics = [] -> null
- allow_forged_transmits = true -> null
- allow_mac_changes = true -> null
- allow_promiscuous = false -> null
- beacon_interval = 1 -> null
- check_beacon = false -> null
- failback = true -> null
- host_system_id = "ha-host" -> null
- id = "tf-HostVirtualSwitch:ha-host:devnet_lab_vswitch_1"
-> null
- link_discovery_operation = "listen" -> null
- link_discovery_protocol = "cdp" -> null
- mtu = 1500 -> null
- name = "devnet_lab_vswitch_1" -> null
- network_adapters = [] -> null
- notify_switches = true -> null
- number_of_ports = 128 -> null
- shaping_average_bandwidth = 0 -> null
- shaping_burst_size = 0 -> null
- shaping_enabled = false -> null
- shaping_peak_bandwidth = 0 -> null
- standby_nics = [] -> null
- teaming_policy = "loadbalance_srcid" -> null
}

# vsphere_host_virtual_switch.devnet_lab_vswitch[2] will be destroyed


- resource "vsphere_host_virtual_switch" "devnet_lab_vswitch" {
- active_nics = [] -> null
- allow_forged_transmits = true -> null
- allow_mac_changes = true -> null
- allow_promiscuous = false -> null
- beacon_interval = 1 -> null
- check_beacon = false -> null
- failback = true -> null
- host_system_id = "ha-host" -> null
- id = "tf-HostVirtualSwitch:ha-host:devnet_lab_vswitch_2"
-> null
- link_discovery_operation = "listen" -> null
- link_discovery_protocol = "cdp" -> null
- mtu = 1500 -> null
- name = "devnet_lab_vswitch_2" -> null
- network_adapters = [] -> null
- notify_switches = true -> null
- number_of_ports = 128 -> null
- shaping_average_bandwidth = 0 -> null
- shaping_burst_size = 0 -> null
- shaping_enabled = false -> null
- shaping_peak_bandwidth = 0 -> null
- standby_nics = [] -> null
- teaming_policy = "loadbalance_srcid" -> null
}

# vsphere_host_virtual_switch.devnet_lab_vswitch[3] will be destroyed


- resource "vsphere_host_virtual_switch" "devnet_lab_vswitch" {
- active_nics = [] -> null
- allow_forged_transmits = true -> null
- allow_mac_changes = true -> null
- allow_promiscuous = false -> null
- beacon_interval = 1 -> null
- check_beacon = false -> null
- failback = true -> null
- host_system_id = "ha-host" -> null
- id = "tf-HostVirtualSwitch:ha-host:devnet_lab_vswitch_3"
-> null
- link_discovery_operation = "listen" -> null
- link_discovery_protocol = "cdp" -> null
- mtu = 1500 -> null
- name = "devnet_lab_vswitch_3" -> null
- network_adapters = [] -> null
- notify_switches = true -> null
- number_of_ports = 128 -> null
- shaping_average_bandwidth = 0 -> null
- shaping_burst_size = 0 -> null
- shaping_enabled = false -> null
- shaping_peak_bandwidth = 0 -> null
- standby_nics = [] -> null
- teaming_policy = "loadbalance_srcid" -> null
}

# vsphere_host_virtual_switch.devnet_lab_vswitch[4] will be destroyed


- resource "vsphere_host_virtual_switch" "devnet_lab_vswitch" {
- active_nics = [] -> null
- allow_forged_transmits = true -> null
- allow_mac_changes = true -> null
- allow_promiscuous = false -> null
- beacon_interval = 1 -> null
- check_beacon = false -> null
- failback = true -> null
- host_system_id = "ha-host" -> null
- id = "tf-HostVirtualSwitch:ha-host:devnet_lab_vswitch_4"
-> null
- link_discovery_operation = "listen" -> null
- link_discovery_protocol = "cdp" -> null
- mtu = 1500 -> null
- name = "devnet_lab_vswitch_4" -> null
- network_adapters = [] -> null
- notify_switches = true -> null
- number_of_ports = 128 -> null
- shaping_average_bandwidth = 0 -> null
- shaping_burst_size = 0 -> null
- shaping_enabled = false -> null
- shaping_peak_bandwidth = 0 -> null
- standby_nics = [] -> null
- teaming_policy = "loadbalance_srcid" -> null
}

# vsphere_host_virtual_switch.devnet_lab_vswitch[5] will be destroyed


- resource "vsphere_host_virtual_switch" "devnet_lab_vswitch" {
- active_nics = [] -> null
- allow_forged_transmits = true -> null
- allow_mac_changes = true -> null
- allow_promiscuous = false -> null
- beacon_interval = 1 -> null
- check_beacon = false -> null
- failback = true -> null
- host_system_id = "ha-host" -> null
- id = "tf-HostVirtualSwitch:ha-host:devnet_lab_vswitch_5"
-> null
- link_discovery_operation = "listen" -> null
- link_discovery_protocol = "cdp" -> null
- mtu = 1500 -> null
- name = "devnet_lab_vswitch_5" -> null
- network_adapters = [] -> null
- notify_switches = true -> null
- number_of_ports = 128 -> null
- shaping_average_bandwidth = 0 -> null
- shaping_burst_size = 0 -> null
- shaping_enabled = false -> null
- shaping_peak_bandwidth = 0 -> null
- standby_nics = [] -> null
- teaming_policy = "loadbalance_srcid" -> null
}

# vsphere_virtual_machine.csr1kv1 will be destroyed


- resource "vsphere_virtual_machine" "csr1kv1" {
- boot_delay = 0 -> null
- boot_retry_delay = 10000 -> null
- boot_retry_enabled = false -> null
- change_version = "2019-11-07T11:10:13.078477Z" -> null
- cpu_hot_add_enabled = false -> null
- cpu_hot_remove_enabled = false -> null
- cpu_limit = -1 -> null
- cpu_performance_counters_enabled = false -> null
- cpu_reservation = 0 -> null
- cpu_share_count = 4000 -> null
- cpu_share_level = "normal" -> null
- datastore_id = "5d851500-53966784-2df3-0050569c58b8"
-> null
- default_ip_address = "10.99.0.101" -> null
- efi_secure_boot_enabled = false -> null
- enable_disk_uuid = false -> null
- enable_logging = false -> null
- ept_rvi_mode = "automatic" -> null
- extra_config = {} -> null
- firmware = "bios" -> null
- force_power_off = true -> null
- guest_id = "other26xLinux64Guest" -> null
- guest_ip_addresses = [
- "10.99.0.101",
] -> null
- host_system_id = "ha-host" -> null
- hv_mode = "hvAuto" -> null
- id = "564d74fe-9a5e-fa07-a44e-
af5dfdbf5c04" -> null
- latency_sensitivity = "normal" -> null
- memory = 4096 -> null
- memory_hot_add_enabled = false -> null
- memory_limit = -1 -> null
- memory_reservation = 0 -> null
- memory_share_count = 40960 -> null
- memory_share_level = "normal" -> null
- migrate_wait_timeout = 30 -> null
- moid = "98" -> null
- name = "csr1kv1" -> null
- nested_hv_enabled = false -> null
- num_cores_per_socket = 1 -> null
- num_cpus = 4 -> null
- reboot_required = false -> null
- resource_pool_id = "ha-root-pool" -> null
- run_tools_scripts_after_power_on = true -> null
- run_tools_scripts_after_resume = true -> null
- run_tools_scripts_before_guest_reboot = false -> null
- run_tools_scripts_before_guest_shutdown = true -> null
- run_tools_scripts_before_guest_standby = true -> null
- scsi_bus_sharing = "noSharing" -> null
- scsi_controller_count = 1 -> null
- scsi_type = "pvscsi" -> null
- shutdown_wait_timeout = 1 -> null
- swap_placement_policy = "inherit" -> null
- sync_time_with_host = false -> null
- uuid = "564d74fe-9a5e-fa07-a44e-
af5dfdbf5c04" -> null
- vapp_transport = [] -> null
- vmware_tools_status = "guestToolsRunning" -> null
- vmx_path = "csr1kv1/csr1kv1.vmx" -> null
- wait_for_guest_ip_timeout = 5 -> null
- wait_for_guest_net_routable = true -> null
- wait_for_guest_net_timeout = 0 -> null

- cdrom {
- client_device = false -> null
- datastore_id = "5d851500-53966784-2df3-0050569c58b8" -> null
- device_address = "ide:0:0" -> null
- key = 3000 -> null
- path = "csr1kv1/bootstrap.iso" -> null
}

- disk {
- attach = true -> null
- datastore_id = "5d851500-53966784-2df3-0050569c58b8" -> null
- device_address = "scsi:0:0" -> null
- disk_mode = "independent_nonpersistent" -> null
- disk_sharing = "sharingNone" -> null
- eagerly_scrub = false -> null
- io_limit = -1 -> null
- io_reservation = 0 -> null
- io_share_count = 1000 -> null
- io_share_level = "normal" -> null
- keep_on_remove = false -> null
- key = 2000 -> null
- label = "disk0" -> null
- path = "csr1kv1/csr.vmdk" -> null
- size = 0 -> null
- thin_provisioned = true -> null
- unit_number = 0 -> null
- uuid = "6000C29f-9e58-8138-7791-78a4839e1b72" -> null
- write_through = false -> null
}

- network_interface {
- adapter_type = "vmxnet3" -> null
- bandwidth_limit = -1 -> null
- bandwidth_reservation = 0 -> null
- bandwidth_share_count = 50 -> null
- bandwidth_share_level = "normal" -> null
- device_address = "pci:0:7" -> null
- key = 4000 -> null
- mac_address = "00:0c:29:bf:5c:04" -> null
- network_id = "HaNetwork-vm_network_1" -> null
- use_static_mac = false -> null
}
- network_interface {
- adapter_type = "vmxnet3" -> null
- bandwidth_limit = -1 -> null
- bandwidth_reservation = 0 -> null
- bandwidth_share_count = 50 -> null
- bandwidth_share_level = "normal" -> null
- device_address = "pci:0:8" -> null
- key = 4001 -> null
- mac_address = "00:0c:29:bf:5c:0e" -> null
- network_id = "HaNetwork-vm_network_4" -> null
- use_static_mac = false -> null
}
- network_interface {
- adapter_type = "vmxnet3" -> null
- bandwidth_limit = -1 -> null
- bandwidth_reservation = 0 -> null
- bandwidth_share_count = 50 -> null
- bandwidth_share_level = "normal" -> null
- device_address = "pci:0:9" -> null
- key = 4002 -> null
- mac_address = "00:0c:29:bf:5c:18" -> null
- network_id = "HaNetwork-vm_network_2" -> null
- use_static_mac = false -> null
}
- network_interface {
- adapter_type = "vmxnet3" -> null
- bandwidth_limit = -1 -> null
- bandwidth_reservation = 0 -> null
- bandwidth_share_count = 50 -> null
- bandwidth_share_level = "normal" -> null
- device_address = "pci:0:10" -> null
- key = 4003 -> null
- mac_address = "00:0c:29:bf:5c:22" -> null
- network_id = "HaNetwork-vm_network_0" -> null
- use_static_mac = false -> null
}
- network_interface {
- adapter_type = "vmxnet3" -> null
- bandwidth_limit = -1 -> null
- bandwidth_reservation = 0 -> null
- bandwidth_share_count = 50 -> null
- bandwidth_share_level = "normal" -> null
- device_address = "pci:0:11" -> null
- key = 4004 -> null
- mac_address = "00:0c:29:bf:5c:2c" -> null
- network_id = "HaNetwork-VM Network" -> null
- use_static_mac = false -> null
}
}

# vsphere_virtual_machine.csr1kv2 will be destroyed


- resource "vsphere_virtual_machine" "csr1kv2" {
- boot_delay = 0 -> null
- boot_retry_delay = 10000 -> null
- boot_retry_enabled = false -> null
- change_version = "2019-11-07T11:10:13.002144Z" -> null
- cpu_hot_add_enabled = false -> null
- cpu_hot_remove_enabled = false -> null
- cpu_limit = -1 -> null
- cpu_performance_counters_enabled = false -> null
- cpu_reservation = 0 -> null
- cpu_share_count = 4000 -> null
- cpu_share_level = "normal" -> null
- datastore_id = "5d851500-53966784-2df3-0050569c58b8"
-> null
- default_ip_address = "10.99.0.102" -> null
- efi_secure_boot_enabled = false -> null
- enable_disk_uuid = false -> null
- enable_logging = false -> null
- ept_rvi_mode = "automatic" -> null
- extra_config = {} -> null
- firmware = "bios" -> null
- force_power_off = true -> null
- guest_id = "other26xLinux64Guest" -> null
- guest_ip_addresses = [
- "10.99.0.102",
] -> null
- host_system_id = "ha-host" -> null
- hv_mode = "hvAuto" -> null
- id = "564d8a38-e0b6-4735-4c70-
05d7ef3d58ee" -> null
- latency_sensitivity = "normal" -> null
- memory = 4096 -> null
- memory_hot_add_enabled = false -> null
- memory_limit = -1 -> null
- memory_reservation = 0 -> null
- memory_share_count = 40960 -> null
- memory_share_level = "normal" -> null
- migrate_wait_timeout = 30 -> null
- moid = "101" -> null
- name = "csr1kv2" -> null
- nested_hv_enabled = false -> null
- num_cores_per_socket = 1 -> null
- num_cpus = 4 -> null
- reboot_required = false -> null
- resource_pool_id = "ha-root-pool" -> null
- run_tools_scripts_after_power_on = true -> null
- run_tools_scripts_after_resume = true -> null
- run_tools_scripts_before_guest_reboot = false -> null
- run_tools_scripts_before_guest_shutdown = true -> null
- run_tools_scripts_before_guest_standby = true -> null
- scsi_bus_sharing = "noSharing" -> null
- scsi_controller_count = 1 -> null
- scsi_type = "pvscsi" -> null
- shutdown_wait_timeout = 1 -> null
- swap_placement_policy = "inherit" -> null
- sync_time_with_host = false -> null
- uuid = "564d8a38-e0b6-4735-4c70-
05d7ef3d58ee" -> null
- vapp_transport = [] -> null
- vmware_tools_status = "guestToolsRunning" -> null
- vmx_path = "csr1kv2/csr1kv2.vmx" -> null
- wait_for_guest_ip_timeout = 5 -> null
- wait_for_guest_net_routable = true -> null
- wait_for_guest_net_timeout = 0 -> null

- cdrom {
- client_device = false -> null
- datastore_id = "5d851500-53966784-2df3-0050569c58b8" -> null
- device_address = "ide:0:0" -> null
- key = 3000 -> null
- path = "csr1kv2/bootstrap.iso" -> null
}

- disk {
- attach = true -> null
- datastore_id = "5d851500-53966784-2df3-0050569c58b8" -> null
- device_address = "scsi:0:0" -> null
- disk_mode = "independent_nonpersistent" -> null
- disk_sharing = "sharingNone" -> null
- eagerly_scrub = false -> null
- io_limit = -1 -> null
- io_reservation = 0 -> null
- io_share_count = 1000 -> null
- io_share_level = "normal" -> null
- keep_on_remove = false -> null
- key = 2000 -> null
- label = "disk0" -> null
- path = "csr1kv2/csr.vmdk" -> null
- size = 0 -> null
- thin_provisioned = true -> null
- unit_number = 0 -> null
- uuid = "6000C29f-9e58-8138-7791-78a4839e1b72" -> null
- write_through = false -> null
}

- network_interface {
- adapter_type = "vmxnet3" -> null
- bandwidth_limit = -1 -> null
- bandwidth_reservation = 0 -> null
- bandwidth_share_count = 50 -> null
- bandwidth_share_level = "normal" -> null
- device_address = "pci:0:7" -> null
- key = 4000 -> null
- mac_address = "00:0c:29:3d:58:ee" -> null
- network_id = "HaNetwork-vm_network_3" -> null
- use_static_mac = false -> null
}
- network_interface {
- adapter_type = "vmxnet3" -> null
- bandwidth_limit = -1 -> null
- bandwidth_reservation = 0 -> null
- bandwidth_share_count = 50 -> null
- bandwidth_share_level = "normal" -> null
- device_address = "pci:0:8" -> null
- key = 4001 -> null
- mac_address = "00:0c:29:3d:58:f8" -> null
- network_id = "HaNetwork-vm_network_5" -> null
- use_static_mac = false -> null
}
- network_interface {
- adapter_type = "vmxnet3" -> null
- bandwidth_limit = -1 -> null
- bandwidth_reservation = 0 -> null
- bandwidth_share_count = 50 -> null
- bandwidth_share_level = "normal" -> null
- device_address = "pci:0:9" -> null
- key = 4002 -> null
- mac_address = "00:0c:29:3d:58:02" -> null
- network_id = "HaNetwork-vm_network_2" -> null
- use_static_mac = false -> null
}
- network_interface {
- adapter_type = "vmxnet3" -> null
- bandwidth_limit = -1 -> null
- bandwidth_reservation = 0 -> null
- bandwidth_share_count = 50 -> null
- bandwidth_share_level = "normal" -> null
- device_address = "pci:0:10" -> null
- key = 4003 -> null
- mac_address = "00:0c:29:3d:58:0c" -> null
- network_id = "HaNetwork-vm_network_0" -> null
- use_static_mac = false -> null
}
- network_interface {
- adapter_type = "vmxnet3" -> null
- bandwidth_limit = -1 -> null
- bandwidth_reservation = 0 -> null
- bandwidth_share_count = 50 -> null
- bandwidth_share_level = "normal" -> null
- device_address = "pci:0:11" -> null
- key = 4004 -> null
- mac_address = "00:0c:29:3d:58:16" -> null
- network_id = "HaNetwork-VM Network" -> null
- use_static_mac = false -> null
}
}

# vsphere_virtual_machine.csr1kv3 will be destroyed


- resource "vsphere_virtual_machine" "csr1kv3" {
- boot_delay = 0 -> null
- boot_retry_delay = 10000 -> null
- boot_retry_enabled = false -> null
- change_version = "2019-11-07T11:10:13.016916Z" -> null
- cpu_hot_add_enabled = false -> null
- cpu_hot_remove_enabled = false -> null
- cpu_limit = -1 -> null
- cpu_performance_counters_enabled = false -> null
- cpu_reservation = 0 -> null
- cpu_share_count = 4000 -> null
- cpu_share_level = "normal" -> null
- datastore_id = "5d851500-53966784-2df3-0050569c58b8"
-> null
- default_ip_address = "10.99.0.103" -> null
- efi_secure_boot_enabled = false -> null
- enable_disk_uuid = false -> null
- enable_logging = false -> null
- ept_rvi_mode = "automatic" -> null
- extra_config = {} -> null
- firmware = "bios" -> null
- force_power_off = true -> null
- guest_id = "other26xLinux64Guest" -> null
- guest_ip_addresses = [
- "10.99.0.103",
] -> null
- host_system_id = "ha-host" -> null
- hv_mode = "hvAuto" -> null
- id = "564dcddb-a837-ba79-de62-
2bde4ebad152" -> null
- latency_sensitivity = "normal" -> null
- memory = 4096 -> null
- memory_hot_add_enabled = false -> null
- memory_limit = -1 -> null
- memory_reservation = 0 -> null
- memory_share_count = 40960 -> null
- memory_share_level = "normal" -> null
- migrate_wait_timeout = 30 -> null
- moid = "102" -> null
- name = "csr1kv3" -> null
- nested_hv_enabled = false -> null
- num_cores_per_socket = 1 -> null
- num_cpus = 4 -> null
- reboot_required = false -> null
- resource_pool_id = "ha-root-pool" -> null
- run_tools_scripts_after_power_on = true -> null
- run_tools_scripts_after_resume = true -> null
- run_tools_scripts_before_guest_reboot = false -> null
- run_tools_scripts_before_guest_shutdown = true -> null
- run_tools_scripts_before_guest_standby = true -> null
- scsi_bus_sharing = "noSharing" -> null
- scsi_controller_count = 1 -> null
- scsi_type = "pvscsi" -> null
- shutdown_wait_timeout = 1 -> null
- swap_placement_policy = "inherit" -> null
- sync_time_with_host = false -> null
- uuid = "564dcddb-a837-ba79-de62-
2bde4ebad152" -> null
- vapp_transport = [] -> null
- vmware_tools_status = "guestToolsRunning" -> null
- vmx_path = "csr1kv3/csr1kv3.vmx" -> null
- wait_for_guest_ip_timeout = 5 -> null
- wait_for_guest_net_routable = true -> null
- wait_for_guest_net_timeout = 0 -> null

- cdrom {
- client_device = false -> null
- datastore_id = "5d851500-53966784-2df3-0050569c58b8" -> null
- device_address = "ide:0:0" -> null
- key = 3000 -> null
- path = "csr1kv3/bootstrap.iso" -> null
}

- disk {
- attach = true -> null
- datastore_id = "5d851500-53966784-2df3-0050569c58b8" -> null
- device_address = "scsi:0:0" -> null
- disk_mode = "independent_nonpersistent" -> null
- disk_sharing = "sharingNone" -> null
- eagerly_scrub = false -> null
- io_limit = -1 -> null
- io_reservation = 0 -> null
- io_share_count = 1000 -> null
- io_share_level = "normal" -> null
- keep_on_remove = false -> null
- key = 2000 -> null
- label = "disk0" -> null
- path = "csr1kv3/csr.vmdk" -> null
- size = 0 -> null
- thin_provisioned = true -> null
- unit_number = 0 -> null
- uuid = "6000C29f-9e58-8138-7791-78a4839e1b72" -> null
- write_through = false -> null
}

- network_interface {
- adapter_type = "vmxnet3" -> null
- bandwidth_limit = -1 -> null
- bandwidth_reservation = 0 -> null

- bandwidth_share_count = 50 -> null
- bandwidth_share_level = "normal" -> null
- device_address = "pci:0:7" -> null
- key = 4000 -> null
- mac_address = "00:0c:29:ba:d1:52" -> null
- network_id = "HaNetwork-vm_network_4" -> null
- use_static_mac = false -> null
}
- network_interface {
- adapter_type = "vmxnet3" -> null
- bandwidth_limit = -1 -> null
- bandwidth_reservation = 0 -> null
- bandwidth_share_count = 50 -> null
- bandwidth_share_level = "normal" -> null
- device_address = "pci:0:8" -> null
- key = 4001 -> null
- mac_address = "00:0c:29:ba:d1:5c" -> null
- network_id = "HaNetwork-vm_network_5" -> null
- use_static_mac = false -> null
}
- network_interface {
- adapter_type = "vmxnet3" -> null
- bandwidth_limit = -1 -> null
- bandwidth_reservation = 0 -> null
- bandwidth_share_count = 50 -> null
- bandwidth_share_level = "normal" -> null
- device_address = "pci:0:9" -> null
- key = 4002 -> null
- mac_address = "00:0c:29:ba:d1:66" -> null
- network_id = "HaNetwork-vm_network_6" -> null
- use_static_mac = false -> null
}
- network_interface {
- adapter_type = "vmxnet3" -> null
- bandwidth_limit = -1 -> null
- bandwidth_reservation = 0 -> null
- bandwidth_share_count = 50 -> null
- bandwidth_share_level = "normal" -> null
- device_address = "pci:0:10" -> null
- key = 4003 -> null
- mac_address = "00:0c:29:ba:d1:70" -> null
- network_id = "HaNetwork-vm_network_0" -> null
- use_static_mac = false -> null
}
- network_interface {
- adapter_type = "vmxnet3" -> null
- bandwidth_limit = -1 -> null
- bandwidth_reservation = 0 -> null
- bandwidth_share_count = 50 -> null
- bandwidth_share_level = "normal" -> null
- device_address = "pci:0:11" -> null
- key = 4004 -> null
- mac_address = "00:0c:29:ba:d1:7a" -> null
- network_id = "HaNetwork-VM Network" -> null
- use_static_mac = false -> null
}

}

# vsphere_virtual_machine.k8s1 will be destroyed


- resource "vsphere_virtual_machine" "k8s1" {
- boot_delay = 0 -> null
- boot_retry_delay = 10000 -> null
- boot_retry_enabled = false -> null
- change_version = "2019-11-07T11:10:12.973992Z" -> null

- cpu_hot_add_enabled = false -> null


- cpu_hot_remove_enabled = false -> null
- cpu_limit = -1 -> null
- cpu_performance_counters_enabled = false -> null
- cpu_reservation = 0 -> null
- cpu_share_count = 2000 -> null
- cpu_share_level = "normal" -> null
- datastore_id = "5d851500-53966784-2df3-0050569c58b8"
-> null
- default_ip_address = "10.99.0.21" -> null
- efi_secure_boot_enabled = false -> null
- enable_disk_uuid = false -> null
- enable_logging = false -> null
- ept_rvi_mode = "automatic" -> null
- extra_config = {} -> null
- firmware = "bios" -> null
- force_power_off = true -> null
- guest_id = "ubuntu64Guest" -> null
- guest_ip_addresses = [
- "10.10.1.10",
- "10.99.0.21",
- "fe80::20c:29ff:fef9:49e3",
- "fe80::20c:29ff:fef9:49d9",
] -> null
- host_system_id = "ha-host" -> null
- hv_mode = "hvAuto" -> null
- id = "564dd38f-6003-be29-2f69-
791f39f949d9" -> null
- latency_sensitivity = "normal" -> null
- memory = 8192 -> null
- memory_hot_add_enabled = false -> null
- memory_limit = -1 -> null
- memory_reservation = 0 -> null
- memory_share_count = 81920 -> null
- memory_share_level = "normal" -> null
- migrate_wait_timeout = 30 -> null
- moid = "97" -> null
- name = "k8s1" -> null
- nested_hv_enabled = false -> null
- num_cores_per_socket = 1 -> null
- num_cpus = 2 -> null
- reboot_required = false -> null
- resource_pool_id = "ha-root-pool" -> null
- run_tools_scripts_after_power_on = true -> null
- run_tools_scripts_after_resume = true -> null
- run_tools_scripts_before_guest_reboot = false -> null
- run_tools_scripts_before_guest_shutdown = true -> null

- run_tools_scripts_before_guest_standby = true -> null
- scsi_bus_sharing = "noSharing" -> null
- scsi_controller_count = 1 -> null
- scsi_type = "pvscsi" -> null
- shutdown_wait_timeout = 1 -> null
- swap_placement_policy = "inherit" -> null
- sync_time_with_host = false -> null
- uuid = "564dd38f-6003-be29-2f69-
791f39f949d9" -> null
- vapp_transport = [] -> null
- vmware_tools_status = "guestToolsRunning" -> null
- vmx_path = "k8s1/k8s1.vmx" -> null
- wait_for_guest_ip_timeout = 5 -> null
- wait_for_guest_net_routable = true -> null
- wait_for_guest_net_timeout = 0 -> null

- disk {
- attach = true -> null
- datastore_id = "5d851500-53966784-2df3-0050569c58b8" -> null
- device_address = "scsi:0:0" -> null
- disk_mode = "independent_nonpersistent" -> null
- disk_sharing = "sharingNone" -> null
- eagerly_scrub = false -> null
- io_limit = -1 -> null
- io_reservation = 0 -> null
- io_share_count = 1000 -> null
- io_share_level = "normal" -> null
- keep_on_remove = false -> null
- key = 2000 -> null
- label = "disk0" -> null
- path = "k8s1/k8s1.vmdk" -> null
- size = 0 -> null
- thin_provisioned = true -> null
- unit_number = 0 -> null
- uuid = "6000C297-236d-bf0f-dbba-d984bc79e274" -> null
- write_through = false -> null
}

- network_interface {
- adapter_type = "vmxnet3" -> null
- bandwidth_limit = -1 -> null
- bandwidth_reservation = 0 -> null
- bandwidth_share_count = 50 -> null
- bandwidth_share_level = "normal" -> null
- device_address = "pci:0:7" -> null
- key = 4000 -> null
- mac_address = "00:0c:29:f9:49:d9" -> null
- network_id = "HaNetwork-VM Network" -> null
- use_static_mac = false -> null
}
- network_interface {
- adapter_type = "vmxnet3" -> null
- bandwidth_limit = -1 -> null
- bandwidth_reservation = 0 -> null
- bandwidth_share_count = 50 -> null
- bandwidth_share_level = "normal" -> null

- device_address = "pci:0:8" -> null
- key = 4001 -> null
- mac_address = "00:0c:29:f9:49:e3" -> null
- network_id = "HaNetwork-vm_network_1" -> null
- use_static_mac = false -> null
}
}

# vsphere_virtual_machine.k8s2 will be destroyed


- resource "vsphere_virtual_machine" "k8s2" {
- boot_delay = 0 -> null
- boot_retry_delay = 10000 -> null
- boot_retry_enabled = false -> null
- change_version = "2019-11-07T11:10:12.976311Z" -> null

- cpu_hot_add_enabled = false -> null


- cpu_hot_remove_enabled = false -> null
- cpu_limit = -1 -> null
- cpu_performance_counters_enabled = false -> null
- cpu_reservation = 0 -> null
- cpu_share_count = 2000 -> null
- cpu_share_level = "normal" -> null
- datastore_id = "5d851500-53966784-2df3-0050569c58b8"
-> null
- default_ip_address = "10.99.0.22" -> null
- efi_secure_boot_enabled = false -> null
- enable_disk_uuid = false -> null
- enable_logging = false -> null
- ept_rvi_mode = "automatic" -> null
- extra_config = {} -> null
- firmware = "bios" -> null
- force_power_off = true -> null
- guest_id = "ubuntu64Guest" -> null
- guest_ip_addresses = [
- "10.99.0.22",
- "10.10.2.10",
- "fe80::20c:29ff:fee0:d340",
- "fe80::20c:29ff:fee0:d34a",
] -> null
- host_system_id = "ha-host" -> null
- hv_mode = "hvAuto" -> null
- id = "564d329d-8eee-2da4-9aee-
58484be0d340" -> null
- latency_sensitivity = "normal" -> null
- memory = 8192 -> null
- memory_hot_add_enabled = false -> null
- memory_limit = -1 -> null
- memory_reservation = 0 -> null
- memory_share_count = 81920 -> null
- memory_share_level = "normal" -> null
- migrate_wait_timeout = 30 -> null
- moid = "99" -> null
- name = "k8s2" -> null
- nested_hv_enabled = false -> null
- num_cores_per_socket = 1 -> null
- num_cpus = 2 -> null

- reboot_required = false -> null
- resource_pool_id = "ha-root-pool" -> null
- run_tools_scripts_after_power_on = true -> null
- run_tools_scripts_after_resume = true -> null
- run_tools_scripts_before_guest_reboot = false -> null
- run_tools_scripts_before_guest_shutdown = true -> null
- run_tools_scripts_before_guest_standby = true -> null
- scsi_bus_sharing = "noSharing" -> null
- scsi_controller_count = 1 -> null
- scsi_type = "pvscsi" -> null
- shutdown_wait_timeout = 1 -> null
- swap_placement_policy = "inherit" -> null
- sync_time_with_host = false -> null
- uuid = "564d329d-8eee-2da4-9aee-
58484be0d340" -> null
- vapp_transport = [] -> null
- vmware_tools_status = "guestToolsRunning" -> null
- vmx_path = "k8s2/k8s2.vmx" -> null
- wait_for_guest_ip_timeout = 5 -> null
- wait_for_guest_net_routable = true -> null
- wait_for_guest_net_timeout = 0 -> null

- disk {
- attach = true -> null
- datastore_id = "5d851500-53966784-2df3-0050569c58b8" -> null
- device_address = "scsi:0:0" -> null
- disk_mode = "independent_nonpersistent" -> null
- disk_sharing = "sharingNone" -> null
- eagerly_scrub = false -> null
- io_limit = -1 -> null
- io_reservation = 0 -> null
- io_share_count = 1000 -> null
- io_share_level = "normal" -> null
- keep_on_remove = false -> null
- key = 2000 -> null
- label = "disk0" -> null
- path = "k8s2/k8s2.vmdk" -> null
- size = 0 -> null
- thin_provisioned = true -> null
- unit_number = 0 -> null
- uuid = "6000C29b-50ff-c1f7-2879-17c988a66593" -> null
- write_through = false -> null
}

- network_interface {
- adapter_type = "vmxnet3" -> null
- bandwidth_limit = -1 -> null
- bandwidth_reservation = 0 -> null
- bandwidth_share_count = 50 -> null
- bandwidth_share_level = "normal" -> null
- device_address = "pci:0:7" -> null
- key = 4000 -> null
- mac_address = "00:0c:29:e0:d3:40" -> null
- network_id = "HaNetwork-VM Network" -> null
- use_static_mac = false -> null
}

- network_interface {
- adapter_type = "vmxnet3" -> null
- bandwidth_limit = -1 -> null
- bandwidth_reservation = 0 -> null
- bandwidth_share_count = 50 -> null
- bandwidth_share_level = "normal" -> null
- device_address = "pci:0:8" -> null
- key = 4001 -> null
- mac_address = "00:0c:29:e0:d3:4a" -> null
- network_id = "HaNetwork-vm_network_3" -> null
- use_static_mac = false -> null
}
}

# vsphere_virtual_machine.k8s3 will be destroyed


- resource "vsphere_virtual_machine" "k8s3" {
- boot_delay = 0 -> null
- boot_retry_delay = 10000 -> null
- boot_retry_enabled = false -> null
- change_version = "2019-11-07T11:10:13.061895Z" -> null

- cpu_hot_add_enabled = false -> null


- cpu_hot_remove_enabled = false -> null
- cpu_limit = -1 -> null
- cpu_performance_counters_enabled = false -> null
- cpu_reservation = 0 -> null
- cpu_share_count = 2000 -> null
- cpu_share_level = "normal" -> null
- datastore_id = "5d851500-53966784-2df3-0050569c58b8"
-> null
- default_ip_address = "10.99.0.23" -> null
- efi_secure_boot_enabled = false -> null
- enable_disk_uuid = false -> null
- enable_logging = false -> null
- ept_rvi_mode = "automatic" -> null
- extra_config = {} -> null
- firmware = "bios" -> null
- force_power_off = true -> null
- guest_id = "ubuntu64Guest" -> null
- guest_ip_addresses = [
- "10.99.0.23",
- "10.10.3.10",
- "fe80::20c:29ff:fe41:dde2",
- "fe80::20c:29ff:fe41:ddec",
] -> null
- host_system_id = "ha-host" -> null
- hv_mode = "hvAuto" -> null
- id = "564dba8c-997a-1661-ecfd-
1b1e3741dde2" -> null
- latency_sensitivity = "normal" -> null
- memory = 8192 -> null
- memory_hot_add_enabled = false -> null
- memory_limit = -1 -> null
- memory_reservation = 0 -> null
- memory_share_count = 81920 -> null
- memory_share_level = "normal" -> null

- migrate_wait_timeout = 30 -> null
- moid = "100" -> null
- name = "k8s3" -> null
- nested_hv_enabled = false -> null
- num_cores_per_socket = 1 -> null
- num_cpus = 2 -> null
- reboot_required = false -> null
- resource_pool_id = "ha-root-pool" -> null
- run_tools_scripts_after_power_on = true -> null
- run_tools_scripts_after_resume = true -> null
- run_tools_scripts_before_guest_reboot = false -> null
- run_tools_scripts_before_guest_shutdown = true -> null
- run_tools_scripts_before_guest_standby = true -> null
- scsi_bus_sharing = "noSharing" -> null
- scsi_controller_count = 1 -> null
- scsi_type = "pvscsi" -> null
- shutdown_wait_timeout = 1 -> null
- swap_placement_policy = "inherit" -> null
- sync_time_with_host = false -> null
- uuid = "564dba8c-997a-1661-ecfd-
1b1e3741dde2" -> null
- vapp_transport = [] -> null
- vmware_tools_status = "guestToolsRunning" -> null
- vmx_path = "k8s3/k8s3.vmx" -> null
- wait_for_guest_ip_timeout = 5 -> null
- wait_for_guest_net_routable = true -> null
- wait_for_guest_net_timeout = 0 -> null

- disk {
- attach = true -> null
- datastore_id = "5d851500-53966784-2df3-0050569c58b8" -> null
- device_address = "scsi:0:0" -> null
- disk_mode = "independent_nonpersistent" -> null
- disk_sharing = "sharingNone" -> null
- eagerly_scrub = false -> null
- io_limit = -1 -> null
- io_reservation = 0 -> null
- io_share_count = 1000 -> null
- io_share_level = "normal" -> null
- keep_on_remove = false -> null
- key = 2000 -> null
- label = "disk0" -> null
- path = "k8s3/k8s3.vmdk" -> null
- size = 0 -> null
- thin_provisioned = true -> null
- unit_number = 0 -> null
- uuid = "6000C297-bed1-8440-7fae-42a128bec079" -> null
- write_through = false -> null
}

- network_interface {
- adapter_type = "vmxnet3" -> null
- bandwidth_limit = -1 -> null
- bandwidth_reservation = 0 -> null
- bandwidth_share_count = 50 -> null
- bandwidth_share_level = "normal" -> null

- device_address = "pci:0:7" -> null
- key = 4000 -> null
- mac_address = "00:0c:29:41:dd:e2" -> null
- network_id = "HaNetwork-VM Network" -> null
- use_static_mac = false -> null
}
- network_interface {
- adapter_type = "vmxnet3" -> null
- bandwidth_limit = -1 -> null
- bandwidth_reservation = 0 -> null
- bandwidth_share_count = 50 -> null
- bandwidth_share_level = "normal" -> null
- device_address = "pci:0:8" -> null
- key = 4001 -> null
- mac_address = "00:0c:29:41:dd:ec" -> null
- network_id = "HaNetwork-vm_network_7" -> null
- use_static_mac = false -> null
}
}

Plan: 0 to add, 0 to change, 18 to destroy.

Do you really want to destroy all resources?


Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.

Enter a value: yes

vsphere_virtual_machine.k8s1: Destroying... [id=564dd38f-6003-be29-2f69-791f39f949d9]


vsphere_virtual_machine.csr1kv3: Destroying... [id=564dcddb-a837-ba79-de62-
2bde4ebad152]
vsphere_virtual_machine.csr1kv3: Provisioning with 'remote-exec'...
vsphere_virtual_machine.k8s2: Destroying... [id=564d329d-8eee-2da4-9aee-58484be0d340]
vsphere_virtual_machine.csr1kv3 (remote-exec): Connecting to remote host via SSH...
vsphere_virtual_machine.csr1kv3 (remote-exec): Host: 192.168.10.70
vsphere_virtual_machine.csr1kv3 (remote-exec): User: root
vsphere_virtual_machine.csr1kv3 (remote-exec): Password: true
vsphere_virtual_machine.csr1kv3 (remote-exec): Private key: false
vsphere_virtual_machine.csr1kv3 (remote-exec): Certificate: false
vsphere_virtual_machine.csr1kv3 (remote-exec): SSH Agent: false
vsphere_virtual_machine.csr1kv3 (remote-exec): Checking Host Key: false
vsphere_virtual_machine.k8s3: Destroying... [id=564dba8c-997a-1661-ecfd-1b1e3741dde2]
vsphere_virtual_machine.csr1kv2: Destroying... [id=564d8a38-e0b6-4735-4c70-
05d7ef3d58ee]
vsphere_virtual_machine.csr1kv1: Destroying... [id=564d74fe-9a5e-fa07-a44e-
af5dfdbf5c04]
vsphere_virtual_machine.csr1kv3 (remote-exec): Connected!
vsphere_virtual_machine.csr1kv3 (remote-exec): Powering off VM:
vsphere_virtual_machine.k8s2: Destruction complete after 3s
vsphere_virtual_machine.k8s3: Destruction complete after 3s
vsphere_virtual_machine.k8s1: Destruction complete after 3s
vsphere_virtual_machine.csr1kv2: Destruction complete after 10s
vsphere_virtual_machine.csr1kv3: Still destroying... [id=564dcddb-a837-ba79-de62-
2bde4ebad152, 10s elapsed]
vsphere_virtual_machine.csr1kv1: Still destroying... [id=564d74fe-9a5e-fa07-a44e-
af5dfdbf5c04, 10s elapsed]

vsphere_virtual_machine.csr1kv1: Destruction complete after 12s
vsphere_virtual_machine.csr1kv3: Still destroying... [id=564dcddb-a837-ba79-de62-
2bde4ebad152, 20s elapsed]
vsphere_virtual_machine.csr1kv3: Still destroying... [id=564dcddb-a837-ba79-de62-
2bde4ebad152, 30s elapsed]
vsphere_virtual_machine.csr1kv3: Still destroying... [id=564dcddb-a837-ba79-de62-
2bde4ebad152, 40s elapsed]
vsphere_virtual_machine.csr1kv3: Still destroying... [id=564dcddb-a837-ba79-de62-
2bde4ebad152, 50s elapsed]
vsphere_virtual_machine.csr1kv3: Still destroying... [id=564dcddb-a837-ba79-de62-
2bde4ebad152, 1m0s elapsed]
vsphere_virtual_machine.csr1kv3: Destruction complete after 1m4s
vsphere_host_port_group.pg[0]: Destroying... [id=tf-HostPortGroup:ha-host:vm_network_0]
vsphere_host_port_group.pg[3]: Destroying... [id=tf-HostPortGroup:ha-host:vm_network_3]
vsphere_host_port_group.pg[4]: Destroying... [id=tf-HostPortGroup:ha-host:vm_network_4]
vsphere_host_port_group.pg[2]: Destroying... [id=tf-HostPortGroup:ha-host:vm_network_2]
vsphere_host_port_group.pg[1]: Destroying... [id=tf-HostPortGroup:ha-host:vm_network_1]
vsphere_host_port_group.pg[5]: Destroying... [id=tf-HostPortGroup:ha-host:vm_network_5]
vsphere_host_port_group.pg[0]: Destruction complete after 0s
vsphere_host_port_group.pg[2]: Destruction complete after 0s
vsphere_host_port_group.pg[3]: Destruction complete after 0s
vsphere_host_port_group.pg[1]: Destruction complete after 0s
vsphere_host_port_group.pg[4]: Destruction complete after 0s
vsphere_host_port_group.pg[5]: Destruction complete after 0s
vsphere_host_virtual_switch.devnet_lab_vswitch[0]: Destroying... [id=tf-
HostVirtualSwitch:ha-host:devnet_lab_vswitch_0]
vsphere_host_virtual_switch.devnet_lab_vswitch[5]: Destroying... [id=tf-
HostVirtualSwitch:ha-host:devnet_lab_vswitch_5]
vsphere_host_virtual_switch.devnet_lab_vswitch[4]: Destroying... [id=tf-
HostVirtualSwitch:ha-host:devnet_lab_vswitch_4]
vsphere_host_virtual_switch.devnet_lab_vswitch[2]: Destroying... [id=tf-
HostVirtualSwitch:ha-host:devnet_lab_vswitch_2]
vsphere_host_virtual_switch.devnet_lab_vswitch[3]: Destroying... [id=tf-
HostVirtualSwitch:ha-host:devnet_lab_vswitch_3]
vsphere_host_virtual_switch.devnet_lab_vswitch[1]: Destroying... [id=tf-
HostVirtualSwitch:ha-host:devnet_lab_vswitch_1]
vsphere_host_virtual_switch.devnet_lab_vswitch[0]: Destruction complete after 0s
vsphere_host_virtual_switch.devnet_lab_vswitch[5]: Destruction complete after 0s
vsphere_host_virtual_switch.devnet_lab_vswitch[4]: Destruction complete after 0s
vsphere_host_virtual_switch.devnet_lab_vswitch[1]: Destruction complete after 0s
vsphere_host_virtual_switch.devnet_lab_vswitch[2]: Destruction complete after 0s
vsphere_host_virtual_switch.devnet_lab_vswitch[3]: Destruction complete after 0s

Destroy complete! Resources: 18 destroyed.

Summary
You reviewed the activities that are involved in deploying a test environment with Terraform, including the
content and syntax of Terraform configuration files, environment creation, and environment destruction.
The Terraform configuration files were used to create three CSR 1000v routers and three Ubuntu Linux hosts, and
to power on the asa1 device. Finally, the configuration files were used to destroy 18 resources (virtual
machines, virtual switches, and virtual port groups) with a single shell command.

Ansible Overview
Ansible can be used across many infrastructure domains and can automate many types of Cisco devices,
including service provider, virtual networking, cloud-managed switch, campus LAN, and data center
platforms. A significant benefit is that Ansible replaces many of the manual processes that previously
existed in these domains, so you can now deploy a full stack of devices with a single tool. It also supports
various communication protocols, so whether a device must be reached through a console, Telnet, REST, or
SNMP, Ansible offers the flexibility to interact with it.

An Ansible workstation can be a Linux host with a Python interpreter and SSH functionality. Unlike some
other configuration management tools, Ansible does not require an agent to be installed on remote devices.
In the context of networking, it runs the Python modules locally and uses SSH to access and interact with
the remote device.
Ansible was originally created to automate Linux servers. Eventually, it grew and expanded into other
domains, and it is now popular in the networking industry because of its low barrier to entry.
Ansible automates tasks using Ansible modules, which the community or individuals write to carry out
logic on remote hosts. Modules can be written in almost any language, but most of them are written in Python.
Engineers write sets of automated tasks in Ansible "playbooks." Playbooks are written in standards-based
YAML and contain the logic to orchestrate workflows. This approach keeps them simple and easy to
understand; you are not required to know how to program to start writing playbooks.
The following are a few core components for using Ansible.

Ansible Configuration File


Ansible supports a configuration file, ansible.cfg, which is a text file that holds Ansible configuration
settings for a deployment. The location is configurable, but by default it is found in /etc/ansible/. The
default settings are adequate for most environments, but at times you may need to change SSH settings,
privilege escalation settings, or module paths in the ansible.cfg file.
The ansible.cfg file can reside in three locations, which are searched in this order:
1. ansible.cfg (in the current directory)
2. ansible.cfg (in the home directory)
3. /etc/ansible/ansible.cfg

Ansible recommends that you keep the ansible.cfg file in the root of the project folder to enable users to
have independent ansible.cfg files for each project.
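
As a sketch, a minimal project-local ansible.cfg might look like the following. The specific values shown
here are illustrative lab-style settings, not required ones:

```ini
# ansible.cfg — project-local Ansible settings (illustrative values)
[defaults]
# Use the inventory file kept in this project directory
inventory = ./inventory
# Skip SSH host key prompts; convenient in lab setups, avoid in production
host_key_checking = False
```

Because the search order begins with the current directory, running Ansible from the project root picks up
this file automatically.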

Ansible Inventory File


The Ansible inventory file identifies the hosts that Ansible manages. When executing a playbook, Ansible
needs to know the hosts on which the playbook must act; the list of possible hosts is stored in an inventory
file. By default, the inventory file is stored in the /etc/ansible/ directory and is called hosts. If you rename
the file or change its location, you must use the -i <filename> option when you run the playbook so that
Ansible can find it. As with the ansible.cfg file, it is recommended that you keep the inventory file in the
root of the project directory.
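
As a sketch, an INI-style inventory for the iosxe group used later in this lesson might look like the
following. The hostnames, IP addresses, and group variables shown are illustrative assumptions:

```ini
# inventory — hosts that Ansible manages (hostnames and IPs are hypothetical)
[iosxe]
csr1kv1 ansible_host=10.99.0.101
csr1kv2 ansible_host=10.99.0.102
csr1kv3 ansible_host=10.99.0.103

# Variables applied to every host in the iosxe group
[iosxe:vars]
ansible_network_os=ios
ansible_connection=network_cli
```

A play that targets hosts: iosxe would then act on all three routers in that group.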

Automate Networking Tasks


You can use Ansible with a networking infrastructure for two main functions: managing network
configurations and retrieving network configurations and operational data.
• Managing network configurations: When communicating with remote devices, Ansible can be used
to deploy configuration commands or configuration files. Ansible integrates with Jinja2, which
allows you to create configuration templates based on device configuration files and to generate
configurations automatically, which are then pushed to every device in the inventory.
• Retrieving network configurations and operational data: One of the goals of configuration
management tools is to ensure that the infrastructure is in its desired state. Many of the modules that are
built for Ansible are designed to check whether the current configuration is in its desired state and, if
not, to make the change.
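
As an illustration of the template-driven approach, a Jinja2 template renders per-device configuration from
variables. The template name, variable names, and commands below are hypothetical:

```jinja
{# snmp.j2 — hypothetical Jinja2 template rendered once per inventory device #}
snmp-server community {{ snmp_community }} RO
snmp-server location {{ snmp_location }}
{% for server in ntp_servers %}
ntp server {{ server }}
{% endfor %}
```

Ansible substitutes each device's variable values into the template and can then push the resulting
configuration to every device in the inventory.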

Beyond the previously mentioned core tasks, Ansible can also generate automated reports and perform
continuous compliance on the network. These tasks can be thought of as “applications” after retrieving and
configuring network devices. The capability of writing your own playbooks allows you to create nearly
limitless types of applications for a use case as needed for network automation.

Ansible for the Enterprise


The Ansible engine is a command-line utility that runs on Linux. To deploy Ansible into production for a
larger team, you may need added features such as role-based access control (RBAC) and integration with
Lightweight Directory Access Protocol (LDAP) for authentication.

Ansible AWX is an open source project that offers RBAC, API access, credentials management, logging,
Git integration, reporting, and an intuitive user interface for managing Ansible deployments.
Ansible Tower is the enterprise, commercial version of Ansible AWX. The model is similar to that of
Fedora and Red Hat Enterprise Linux (RHEL): AWX is to Fedora as Tower is to RHEL.
The following is a list of features that Ansible AWX and Ansible Tower offer:
• Tower dashboard: The Ansible Tower user interface offers a friendly graphical framework for your IT
orchestration needs.
• RBAC: RBAC is a method of restricting network access based on the roles of individual users within
an enterprise. It lets employees have access rights only to the information they need to do their jobs and
prevents them from accessing information that does not pertain to them.
• Reporting and controls: Logging is a standalone feature that was introduced in Ansible Tower version
3.1.0. This feature allows you to send detailed logs to several types of third-party external log
aggregation services. Services that are connected to this data feed help you gain insight into Ansible
Tower usage or technical trends. The data can be used to analyze events in the infrastructure, monitor
for anomalies, and correlate events from one service with events in another.
• Fully documented REST API: Everything that you can do in the Ansible Tower user interface can be
done from the API. You can also use it to view everything, ranging from credentials to users.

How Ansible Works


The figure depicts how Ansible was initially built.

Initially, Ansible was created to interact with Linux servers and a typical Ansible implementation requires
remote hosts to have a Python interpreter and SSH enabled.

1. Operations engineers deploy new playbooks or modules through tools like GitHub. Ansible can be
installed on several hosts, and each one is referred to as a “control host.” Playbooks are executed from
the control host.
2. By default, Ansible uses SSH to connect to the device and copies Python files (modules) to remote
hosts. Third-party API integrations (for cloud, network platforms, and so on) are possible with custom
development. Although this method is the default mode of operation, you can also run these Python
modules "locally" on the control host if desired (for example, when generating files).
3. Ansible executes tasks that are described in playbooks. A small program module that is written in
Python is transferred to the device (by default) and executed. When completed, the small program is
removed. On the control host, there is visual feedback, with detailed options to see the status of the
tasks (data returned from the module).
4. The process starts again and continues to run Ansible playbooks.

How Ansible Works for Networks


When working with networking devices, the process starts in a way that is similar to the previous example.
The main difference is that the Python code runs locally on the Ansible control host (where Ansible is
installed) and interfaces with Cisco devices through SSH (or HTTP or NETCONF); it is analogous to
writing Python scripts on a single server.

Remember that no code is copied to the device for network device automation. Python code is executed
when the task is defined in the playbook, and each task carries an instruction to push a configuration file,
send a configuration command to make changes, or send an operational command to receive data. Think of
this process as typing commands into an SSH terminal session on a device, except that instead of managing
a single device at a time, you manage multiple devices at once, as defined in an inventory file.

Your First Ansible Playbook
To execute a workflow with Ansible, you need two files to get started: an Ansible playbook and an
inventory file.

The figure illustrates the following:


1. Inventory: Devices that are defined in the inventory file are potential targets, depending on the value
of the hosts key in the play definition of the playbook.
2. YAML: An Ansible playbook is a YAML document; it starts with three hyphens (---) at the top and
denotes a list of plays (a YAML list).
3. Ansible playbook: The term playbook is a sports analogy. Each playbook file contains one or more
plays and each play contains one or more tasks.
4. Play definition: In the play definition, you define the name of a play, what devices will be targeted for
the current play, and the connection type that is used for that play. A play is simply a mapping of hosts
to tasks.
5. Tasks: Every play contains a list of tasks. The task list contains the automation workflows, which are
executed in sequential order.

Files in the current root of the project include the following:


.
├── ansible.cfg (optional)
├── inventory
└── view_push_snmp.yml

There are no directories, simply three files.

Examine an Ansible Playbook

The figure depicts the following components of an Ansible playbook:


1. Play name: The name is an arbitrary description of the play and an optional play attribute; it is
displayed in the terminal when the play is executed.
2. Hosts: Hosts indicate which devices Ansible operates for the play. The inventory file has a group that is
called iosxe, which can have IP addresses or fully qualified domain names (FQDNs) of the hosts.
3. Connection: The network_cli is a connection plug-in that provides a persistent connection to remote
devices over SSH.
4. Gather facts: Ansible collects device facts by default. Because you execute modules locally, there is no
need to gather facts.
5. Tasks: Each task has the following characteristics:
– It describes the work that is completed on hosts.
– It is executed using modules.
– It is executed on devices that are defined in an inventory file.
– It executes a module using specified parameters (key and value pairs).
6. Modules: Modules have the following characteristics:
– They are mostly written in Python.
– Name: The name is optional, arbitrary text that is displayed when the task is executed.
– The ios_config module used in this task is provided by the Ansible core modules.
– More than one syntax is supported; native YAML is recommended.
– Idempotence: Modules that perform a change should only make the change once (the first
execution). You can run the task 1000 times and it will only occur once. If you see something
different, the module is not idempotent or there is a bug in the module (or the API).
– Modules are parameterized.
• Commands: The parameter belongs to the ios_config module.
• List of commands: The commands are sent to remote devices to make configuration changes.
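The idempotence property described above can be sketched in a few lines of ordinary Python. This is a hypothetical stand-in for what an idempotent module does internally, not Ansible code:

```python
# A minimal illustration of idempotence: "ensure this line is present"
# changes the configuration only on the first run, the way an
# idempotent module such as ios_config behaves.
def ensure_line(config: list, line: str) -> bool:
    """Append line to config if absent; return True if a change was made."""
    if line in config:
        return False  # already compliant: report "ok", change nothing
    config.append(line)  # not compliant: make the change exactly once
    return True

running_config = ["hostname csr1kv1"]
first = ensure_line(running_config, "ip domain-name example.com")
second = ensure_line(running_config, "ip domain-name example.com")
print(first, second)  # prints: True False
```

Run it 1000 times and the change still happens only once; every run after the first reports no change.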

Execute a Playbook
To execute an Ansible playbook in the most basic way, you will need two main files, the inventory and the
playbook.

The ansible-playbook command-line utility is used to execute the playbook.
First, you will enter the ansible-playbook command and use the -i flag to tell Ansible which inventory
file to use.

Note There are other options, so you are not required to use the -i flag to specify an inventory file.

Other options are the following:


• The default inventory file is /etc/ansible/hosts.
• Define (export) an environment variable called ANSIBLE_INVENTORY.
• Override the default in your ansible.cfg file (verify with ansible --version).

After telling Ansible where to find the inventory file, you can add the Ansible playbook that contains all the
automation workflows.
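Putting the two files together, a typical invocation looks like the following. The file names match the project tree shown earlier; the environment-variable form is the alternative mentioned above:

```shell
# Run the playbook, pointing Ansible at the inventory file explicitly
ansible-playbook -i inventory view_push_snmp.yml

# Or export ANSIBLE_INVENTORY once and omit the -i flag afterward
export ANSIBLE_INVENTORY=./inventory
ansible-playbook view_push_snmp.yml
```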

Ansible Documentation
A couple of resources are available to view a more detailed description of what each module does. You can
view the resources directly at the Ansible documentation website at
https://docs.ansible.com/ansible/latest/modules/ios_config_module.html.
You can browse directly to the Ansible documentation from https://docs.ansible.com.
Docs > User Guide > Working With Modules > Module Index > Network Modules


The documentation gives you the following types of information: the module description, parameter
descriptions, the requirements for each parameter, the module author, examples, and return values. In the
Parameter column, you will see the parameter name and its data type (string, Boolean, dictionary, integer,
and so on). The Parameter column also shows the Ansible version in which the parameter was added and
whether the parameter is required. In the Choices/Defaults column, you can see whether the parameter has
choices, such as running, startup, and intended for the diff_against parameter. The Comments column
contains a description of the parameter.

Ansible Documentation Utility
If the workstation does not have Internet access to view the description of a module, Ansible offers built-in
documentation, so you can also view it locally from the Ansible workstation. To view the documentation,
use the ansible-doc utility to better understand the parameters that each module supports.

The utility provides the same information as the Ansible documentation website: the module description,
parameter descriptions, the requirements, the module author, and so on. The following example shows the
ansible-doc output for the ios_config module.

student@student-vm:~$ ansible-doc ios_config
> IOS_CONFIG (/home/student/.local/lib/python3.6/site-packages/ansible/modules/
network/ios/ios_config.py)
Cisco IOS configurations use a simple block indent file syntax for segmenting
configuration into sections. This module provides an
implementation for working with IOS configuration sections in a deterministic
way.
* This module is maintained by The Ansible Network Team
OPTIONS (= is mandatory):
- after
The ordered set of commands to append to the end of the command stack if a
change needs to be made. Just like with `before' this
allows the playbook designer to append a set of commands to be executed after
the command set.
[Default: (null)]
- auth_pass
*Deprecated*
Starting with Ansible 2.5 we recommend using `connection: network_cli' and
`become: yes' with `become_pass'.
For more information please see the L(IOS Platform Options guide,
../network/user_guide/platform_ios.html).
HORIZONTALLINE
Specifies the password to use if required to enter privileged mode on the
remote device. If `authorize' is false, then this
argument does nothing. If the value is not specified in the task, the value of
environment variable `ANSIBLE_NET_AUTH_PASS' will be
used instead.
[Default: (null)]
type: str
- authorize
*Deprecated*
Starting with Ansible 2.5 we recommend using `connection: network_cli' and
`become: yes'.
For more information please see the L(IOS Platform Options guide,
../network/user_guide/platform_ios.html).
HORIZONTALLINE
Instructs the module to enter privileged mode on the remote device before
sending any commands. If not specified, the device will
attempt to execute all commands in non-privileged mode. If the value is not
specified in the task, the value of environment
variable `ANSIBLE_NET_AUTHORIZE' will be used instead.
[Default: False]
type: bool
- backup
This argument will cause the module to create a full backup of the current
`running-config' from the remote device before any
changes are made. If the `backup_options' value is not given, the backup file
is written to the `backup' folder in the playbook
root directory or role root directory, if playbook is part of an ansible role.
If the directory does not exist, it is created.
[Default: no]
type: bool
version_added: 2.2
- backup_options
This is a dict object containing configurable options related to backup file
path. The value of this option is read only when

`backup' is set to `yes', if `backup' is set to `no' this option will be
silently ignored.
[Default: (null)]
suboptions:
dir_path:
description:
- This option provides the path ending with directory name in which the
backup
configuration file will be stored. If the directory does not exist it
will be
first created and the filename is either the value of `filename' or
default
filename as described in `filename' options description. If the path
value
is not given in that case a `backup' directory will be created in the
current
working directory and backup configuration will be copied in `filename'
within
`backup' directory.
type: path
filename:
description:
- The filename to be used to store the backup configuration. If the filename
filename
is not given it will be generated based on the hostname, current time
and date
in format defined by <hostname>_config.<current-date>@<current-time>
type: dict
version_added: 2.8
- before
The ordered set of commands to push on to the command stack if a change needs
to be made. This allows the playbook designer the
opportunity to perform configuration commands prior to pushing any changes
without affecting how the set of commands are matched
against the system.
[Default: (null)]
- defaults
This argument specifies whether or not to collect all defaults when getting
the remote device running config. When enabled, the
module will get the current config by issuing the command `show running-config
all'.
[Default: no]
type: bool
version_added: 2.2
- diff_against
When using the `ansible-playbook --diff' command line argument the module can
generate diffs against different sources.
When this option is configure as `startup', the module will return the diff of
the running-config against the startup-config.
When this option is configured as `intended', the module will return the diff
of the running-config against the configuration
provided in the `intended_config' argument.
When this option is configured as `running', the module will return the before
and after diff of the running-config with respect to
any changes made to the device configuration.
(Choices: running, startup, intended)[Default: (null)]

version_added: 2.4
- diff_ignore_lines
Use this argument to specify one or more lines that should be ignored during
the diff. This is used for lines in the configuration
that are automatically updated by the system. This argument takes a list of
regular expressions or exact line matches.
[Default: (null)]
version_added: 2.4
- intended_config
The `intended_config' provides the master configuration that the node should
conform to and is used to check the final running-
config against. This argument will not modify any settings on the remote
device and is strictly used to check the compliance of the
current device's configuration against. When specifying this argument, the
task should also modify the `diff_against' value and
set it to `intended'.
[Default: (null)]
version_added: 2.4
- lines
The ordered set of commands that should be configured in the section. The
commands must be the exact same commands as found in the
device running-config. Be sure to note the configuration command syntax as
some commands are automatically modified by the device
config parser.
(Aliases: commands)[Default: (null)]
- match
Instructs the module on the way to perform the matching of the set of commands
against the current device config. If match is set
to `line', commands are matched line by line. If match is set to `strict',
command lines are matched with respect to position. If
match is set to `exact', command lines must be an equal match. Finally, if
match is set to `none', the module will not attempt to
compare the source configuration with the running configuration on the remote
device.
(Choices: line, strict, exact, none)[Default: line]
- multiline_delimiter
This argument is used when pushing a multiline configuration element to the
IOS device. It specifies the character to use as the
delimiting character. This only applies to the configuration action.
[Default: @]
version_added: 2.3
- parents
The ordered set of parents that uniquely identify the section or hierarchy the
commands should be checked against. If the parents
argument is omitted, the commands are checked against the set of top level or
global commands.
[Default: (null)]
- provider
*Deprecated*
Starting with Ansible 2.5 we recommend using `connection: network_cli'.
For more information please see the L(IOS Platform Options guide,
../network/user_guide/platform_ios.html).
HORIZONTALLINE
A dict object containing connection details.
[Default: (null)]
suboptions:

auth_pass:
description:
- Specifies the password to use if required to enter privileged mode on
the remote
device. If `authorize' is false, then this argument does nothing. If
the value
is not specified in the task, the value of environment variable
`ANSIBLE_NET_AUTH_PASS'
will be used instead.
type: str
authorize:
default: false
description:
- Instructs the module to enter privileged mode on the remote device
before sending
any commands. If not specified, the device will attempt to execute all
commands
in non-privileged mode. If the value is not specified in the task, the
value
of environment variable `ANSIBLE_NET_AUTHORIZE' will be used instead.
type: bool
host:
description:
- Specifies the DNS host name or address for connecting to the remote
device over
the specified transport. The value of host is used as the destination
address
for the transport.
required: true
type: str
password:
description:
- Specifies the password to use to authenticate the connection to the
remote device. This
value is used to authenticate the SSH session. If the value is not
specified
in the task, the value of environment variable `ANSIBLE_NET_PASSWORD'
will
be used instead.
type: str
port:
default: 22
description:
- Specifies the port to use when building the connection to the remote
device.
type: int
ssh_keyfile:
description:
- Specifies the SSH key to use to authenticate the connection to the
remote device. This
value is the path to the key used to authenticate the SSH session. If
the value
is not specified in the task, the value of environment variable
`ANSIBLE_NET_SSH_KEYFILE'
will be used instead.
type: path

timeout:
default: 10
description:
- Specifies the timeout in seconds for communicating with the network
device for
either connecting or sending commands. If the timeout is exceeded
before the
operation is completed, the module will error.
type: int
username:
description:
- Configures the username to use to authenticate the connection to the
remote
device. This value is used to authenticate the SSH session. If the
value is
not specified in the task, the value of environment variable
`ANSIBLE_NET_USERNAME'
will be used instead.
type: str
type: dict
- replace
Instructs the module on the way to perform the configuration on the device. If
the replace argument is set to `line' then the
modified lines are pushed to the device in configuration mode. If the replace
argument is set to `block' then the entire command
block is pushed to the device in configuration mode if any line is not
correct.
(Choices: line, block)[Default: line]
- running_config
The module, by default, will connect to the remote device and retrieve the
current running-config to use as a base for comparing
against the contents of source. There are times when it is not desirable to
have the task get the current running-config for every
task in a playbook. The `running_config' argument allows the implementer to
pass in the configuration to use as the base config
for comparison.
(Aliases: config)[Default: (null)]
version_added: 2.4
- save_when
When changes are made to the device running-configuration, the changes are not
copied to non-volatile storage by default. Using
this argument will change that before. If the argument is set to `always',
then the running-config will always be copied to the
startup-config and the `modified' flag will always be set to True. If the
argument is set to `modified', then the running-config
will only be copied to the startup-config if it has changed since the last
save to startup-config. If the argument is set to
`never', the running-config will never be copied to the startup-config. If
the argument is set to `changed', then the running-
config will only be copied to the startup-config if the task has made a
change. `changed' was added in Ansible 2.5.
(Choices: always, never, modified, changed)[Default: never]
version_added: 2.4
- src
Specifies the source path to the file that contains the configuration or
configuration template to load. The path to the source

file can either be the full path on the Ansible control host or a relative
path from the playbook or role root directory. This
argument is mutually exclusive with `lines', `parents'.
[Default: (null)]
version_added: 2.2
NOTES:
* Tested against IOS 15.6
* Abbreviated commands are NOT idempotent, see L(Network
FAQ,../network/user_guide/faq.html#why-do-the-config-modules-always-
return-changed-true-with-abbreviated-commands).
* For more information on using Ansible to manage network devices see
the :ref:`Ansible Network Guide <network_guide>`
* For more information on using Ansible to manage Cisco devices see the `Cisco
integration page
<https://www.ansible.com/integrations/networks/cisco>`_.
AUTHOR: Peter Sprygada (@privateip)
METADATA:
status:
- preview
supported_by: network
EXAMPLES:
- name: configure top level configuration
ios_config:
lines: hostname {{ inventory_hostname }}
- name: configure interface settings
ios_config:
lines:
- description test interface
- ip address 172.31.1.1 255.255.255.0
parents: interface Ethernet1
- name: configure ip helpers on multiple interfaces
ios_config:
lines:
- ip helper-address 172.26.1.10
- ip helper-address 172.26.3.8
parents: "{{ item }}"
with_items:
- interface Ethernet1
- interface Ethernet2
- interface GigabitEthernet1
- name: configure policer in Scavenger class
ios_config:
lines:
- conform-action transmit
- exceed-action drop
parents:
- policy-map Foo
- class Scavenger
- police cir 64000
- name: load new acl into device
ios_config:
lines:
- 10 permit ip host 192.0.2.1 any log
- 20 permit ip host 192.0.2.2 any log
- 30 permit ip host 192.0.2.3 any log
- 40 permit ip host 192.0.2.4 any log

- 50 permit ip host 192.0.2.5 any log
parents: ip access-list extended test
before: no ip access-list extended test
match: exact
- name: check the running-config against master config
ios_config:
diff_against: intended
intended_config: "{{ lookup('file', 'master.cfg') }}"
- name: check the startup-config against the running-config
ios_config:
diff_against: startup
diff_ignore_lines:
- ntp clock .*
- name: save running to startup when modified
ios_config:
save_when: modified
- name: for idempotency, use full-form commands
ios_config:
lines:
# - shut
- shutdown
# parents: int gig1/0/11
parents: interface GigabitEthernet1/0/11
# Set boot image based on comparison to a group_var (version) and the version
# that is returned from the `ios_facts` module
- name: SETTING BOOT IMAGE
ios_config:
lines:
- no boot system
- boot system flash bootflash:{{new_image}}
host: "{{ inventory_hostname }}"
when: ansible_net_version != version
- name: render a Jinja2 template onto an IOS device
ios_config:
backup: yes
src: ios_template.j2
- name: configurable backup path
ios_config:
src: ios_template.j2
backup: yes
backup_options:
filename: backup.cfg
dir_path: /home/user
RETURN VALUES:
updates:
description: The set of commands that will be pushed to the remote device
returned: always
type: list
sample: ['hostname foo', 'router ospf 1', 'router-id 192.0.2.1']
commands:
description: The set of commands that will be pushed to the remote device
returned: always
type: list
sample: ['hostname foo', 'router ospf 1', 'router-id 192.0.2.1']
backup_path:
description: The full path to the backup file

returned: when backup is yes
type: str
sample: /playbooks/ansible/backup/ios_config.2016-07-16@22:28:34
filename:
description: The name of the backup file
returned: when backup is yes and filename is not specified in backup options
type: str
sample: ios_config.2016-07-16@22:28:34
shortname:
description: The full path to the backup file excluding the timestamp
returned: when backup is yes and filename is not specified in backup options
type: str
sample: /playbooks/ansible/backup/ios_config
date:
description: The date extracted from the backup file name
returned: when backup is yes
type: str
sample: "2016-07-16"
time:
description: The time extracted from the backup file name
returned: when backup is yes
type: str
sample: "22:28:34"

Commonly Used Modules


Ansible has thousands of modules, which are all found in the Ansible documentation. However, a few key
modules are often used together to automate a given system or network automation workflow. These
modules cover common tasks such as autogenerating configurations, deploying configurations, and
building dynamic reports and automated documentation. Two modules that will be powerful in your
network automation journey are the template and debug modules.

Debug Module
The debug module is the simplest of all. It simply allows you to print a variable or a message to the
terminal; it is analogous to the print() statement in Python.
The following example prints a session cookie to the terminal when the playbook runs:

- name: CREATE VARIABLE TO STORE SESSION COOKIE
  set_fact:
    viptela_cookie: "{{ login_data['set_cookie'] }}"

- name: PRINT THE COOKIE TO THE TERMINAL
  debug:
    var: viptela_cookie

You can also use the debug module to print a custom message embedded with variables:
- name: PRINT THE COOKIE TO THE TERMINAL
  debug:
    msg: "For the controller device {{ inventory_hostname }}, the existing session cookie is {{ viptela_cookie }}"

Note When you use the var parameter, you do not need curly brackets, but you do need them when you
embed variables in a message with the msg parameter.

Template Module
The template module can generate text files from any Jinja2 template as its source. Jinja2 is a templating
engine for Python, and because Ansible is written in Python, Jinja2 was a natural fit for the template
module. The template module is used for many reasons, but two primary use cases are building
configuration templates and generating standard, automated documentation. Because the focus here is
software-defined WAN (SD-WAN) APIs and no CLI commands are sent, the next few examples illustrate
the building of automated documentation.
The following example builds a report that has the hostname and credentials that are required for that
device. This example can use static data that is already defined in the inventory file.
It is common practice to store templates in a directory called templates.
Template: basic-report.j2
The file would have the following text:
Device: {{ inventory_hostname }}
Username: {{ ansible_user }}
Password: {{ ansible_password }}

A task in a playbook that was built to generate documentation would look like the following:
- name: GENERATE BASIC REPORT
  template:
    src: basic-report.j2
    dest: ./reports/{{ inventory_hostname }}.txt

Using a variable name in the filename would allow one file per device to be created. The files would be
stored within the local reports subdirectory. As expected, an example of a generated file would look like the
following:
Device: vmanage
Username: admin
Password: admin

If there were multiple vmanage devices, each would have a different hostname and a different session
cookie. This information could also be added to the template:
Device: {{ inventory_hostname }}
Username: {{ ansible_user }}
Password: {{ ansible_password }}
Cookie: {{ viptela_cookie }}

This code assumes that the set_fact task was executed before the task that generated the report. You can use
any variable within a template including access and looping over any data that is returned from a module.
This ability allows users to issue API calls, register (save) the response, and then use the response within a
template to create an audit trail and build dynamic reports.
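As a sketch, that register-then-template workflow could look like the following. The uri module parameters, the API path, and the template file name here are assumptions for illustration, not the course's actual tasks:

```yaml
- name: GET DEVICE INVENTORY FROM VMANAGE   # hypothetical API call
  uri:
    url: "https://{{ ansible_host }}/dataservice/device"   # assumed endpoint
    method: GET
    headers:
      Cookie: "{{ viptela_cookie }}"   # cookie saved earlier with set_fact
    validate_certs: no
    return_content: yes
  register: device_data                # save the API response

- name: BUILD A REPORT FROM THE REGISTERED RESPONSE
  template:
    src: device-report.j2              # template can loop over device_data
    dest: ./reports/{{ inventory_hostname }}-devices.txt
```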

Task Attributes
Although there are thousands of modules, there are also many task attributes, which are also referred to as
task directives. These attributes appear on the same indentation level as the module name. A few attributes
are used quite commonly: run_once, loop, register, and tags.
In previous examples, the file module was used to ensure that the reports directory existed. That task
looked like the following:
- name: ENSURE DIRECTORY EXISTS
  file:
    path: ./reports
    state: directory

It is quite common to need a directory for each device. You can accomplish this task by using
inventory_hostname as follows:
- name: ENSURE DIRECTORY EXISTS
  file:
    path: ./reports/{{ inventory_hostname }}
    state: directory

However, you may also want nested subdirectories, such as device and site, within the reports directory.

Loop Task Attribute


The loop task attribute can be used to loop over any task. Here is an example:
- name: ENSURE DIRECTORY EXISTS
  file:
    path: ./reports/{{ item }}
    state: directory
  loop: ['device', 'site']

You can also use different syntax to denote a YAML list:


- name: ENSURE DIRECTORY EXISTS
  file:
    path: ./reports/{{ item }}
    state: directory
  loop:
    - device
    - site

Run Once Attribute
In this example, even if the inventory file has 100 devices, the directories need to be created only once. To
optimize the task, the run_once: true attribute can be used.

Using Tags
Tags are used heavily in larger playbooks because they allow users who are executing a playbook to
selectively execute a subset of tasks. You can assign one or more tags to a given play or task.
This task has one assigned tag:
- name: ENSURE DIRECTORY EXISTS
  file:
    path: ./reports
    state: directory
  tags: create_dir

When executing the playbook, the following command could be used to execute that one task:
ansible-playbook playbook.yml --tags=create_dir

- name: ENSURE DIRECTORY EXISTS
  file:
    path: ./reports
    state: directory

- name: ENSURE DIRECTORY EXISTS
  file:
    path: ./reports/{{ item }}
    state: directory
  loop: ['device', 'site']

- name: ENSURE DIRECTORY EXISTS
  file:
    path: ./reports/{{ item }}
    state: directory
  run_once: true
  loop: ['device', 'site']
  register: data
  tags:
    - reboot
    - tunnel
    - device

As seen in the final example, it is also possible to assign multiple tags to a given task.

Data and State Validation
It is true that Ansible is predominantly a configuration management tool. However, once you understand
Ansible, how it connects to devices, and how you can create and register variables, you realize that Ansible
could play a role in a pipeline for some basic to advanced tests. There are other tools (maybe even native
code) that can be used for testing, but Ansible offers a nice alternative if it is already being used as a
configuration management platform.
• Ansible is best known as a configuration management platform.
• It has modules that can be used for validation and testing.

- name: "CHECK IF DEVICE IS REACHABLE"
  wait_for:
    host: "{{ ansible_host }}"
    port: 22
    timeout: 600
  delegate_to: "localhost"

- name: "CONFIRM REACHABILITY TO NEIGHBORS"
  ios_ping:
    dest: "{{ neighbor['ip'] }}"
  loop: "{{ bgp_config['neighbors'] }}"
  loop_control:
    loop_var: "neighbor"

- name: "PING TEST"
  command: "ping -c 1 {{ item }}"
  delegate_to: "{{ inventory_hostname }}"
  loop: "{{ linux_neighbors }}"
  changed_when: False

- name: "COLLECT FACTS INFORMATION"
  ios_facts:
    gather_subset: "all"

- name: "CONFIRM INTERFACES ARE ENABLED"
  assert:
    that:
      - "interface['value']['lineprotocol'] == 'up'"
      - "interface['value']['operstatus'] == 'up'"
    success_msg: "Interface {{ interface['key'] }} is UP"
    fail_msg: "Interface {{ interface['key'] }} is DOWN"
  loop: "{{ ansible_facts['net_interfaces'] | dict2items }}"
  loop_control:
    loop_var: "interface"

Given the many modules that exist in Ansible, it is possible to create robust tests for both infrastructure and
application. The following are a few key modules that help create tests:
• wait_for: This module permits you to ensure that a given port is listening on a remote system. Using
this module also allows you to ensure that ACLs and firewall policies are set properly along the path.
• *_ping: From a network infrastructure perspective, this module allows you to test remote reachability
from each router (or network device).
• command: This module allows you to execute any Linux command from a remote host.
• assert: Ansible allows you to save response data from devices. That information can be used with the
assert module, which defines the conditions that should be met to move forward within a playbook.
• fail: Similar to the assert module, the fail module can be used to fail a host when particular conditions
are met (commonly used with the when directive).

1. Which command is used to execute an Ansible playbook?


a. ansible
b. ansible-playbook
c. ansible-doc
d. ansible-play
2. Which two files are required when executing several automated tasks with Ansible? (Choose two.)
a. vars files
b. inventory file
c. playbook
d. group vars file

Ansible Inventory File
Ansible uses the inventory file to find the devices to target for its automation tasks. You can also say that an
inventory file is a collection of hosts that is optionally sorted into groups that can also potentially include
variable data. Without extra details, when Ansible executes a task on a host from the inventory, it will
connect to the name of the host as it is defined in the inventory file.

In the example, you have an INI-like file that statically defines which devices are to be automated. The
name of the inventory file can be arbitrary, so it can be called hosts, inventory, data center, and so on.
A simple inventory is just an uncategorized list of hostnames, FQDNs, or IP addresses. As the infrastructure
grows, it becomes advantageous to categorize hosts into groups. Groups provide an easy way to target a set
of hosts. Groups can also include other groups, which allows a hierarchy to grow along with the fleet. Hosts
can be grouped for many reasons, such as purpose, locality, or operating system.
Hosts or groups are used in patterns as an entity to target or as an entity to skip from within a target.
Patterns also support wildcards and even regular expressions.
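For example, the --limit flag accepts patterns on the command line. The following is a sketch; the group and host names assume the INI inventory shown next:

```shell
# Target only the dc_east group
ansible-playbook -i inventory playbook.yml --limit 'dc_east'

# Wildcard: any group or host whose name starts with dc_
ansible-playbook -i inventory playbook.yml --limit 'dc_*'

# Everything in all, excluding the dc_west group
ansible-playbook -i inventory playbook.yml --limit 'all:!dc_west'

# Regular expression pattern (leading ~)
ansible-playbook -i inventory playbook.yml --limit '~csr1kv[0-9]+'
```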
The inventory file can be in one of many formats, depending on the inventory plug-ins that you have. For
this example, the format is INI-like and is as follows.
For INI nested groups, you have the following:
csr1kv4
[datacenters:children]
dc_east
dc_west

[dc_east]
10.10.10.1
csr1kv1

[dc_west]
csr1kv2
switch.cisco.com

A YAML version would look like the following example:

---
all:
  hosts:
    csr1kv4:
  children:
    datacenters:
      children:
        dc_east:
          hosts:
            10.10.10.1:
            csr1kv1:
        dc_west:
          hosts:
            csr1kv2:
            switch.cisco.com:

Note The csr1kv4 host is ungrouped but can still be automated if the playbook targets the device itself or
the group called "all."

Host and Group Variables in the Inventory File


Group variables allow you to define variables that belong to a particular group defined in the inventory file.

In the example, a variable that is defined under [all:vars] belongs to all the devices in the inventory file.
There is another group variable section, [dc_east:vars], which states that devices under the [dc_east] group
will use the defined variables, such as ansible_ssh_pass=secret and ansible_network_os=ios.
Because [dc_east:vars] is more specific than [all:vars], ansible_ssh_pass will be secret, not admin.
Consider that there are snowflake (one-of-a-kind) variables that belong to particular devices. You can use
host_vars to define a variable for a particular device. In the example for dc_west, ansible_ssh_pass will be
supersecret for nxos-spine1, whereas for nxos-spine2, ansible_ssh_pass will be admin.
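The inventory that this example describes might look like the following sketch. The figure itself is not reproduced here, so the exact values are assumptions inferred from the text and the table that follows:

```ini
[all:vars]
ansible_user=admin
ansible_ssh_pass=admin

[dc_east]
csr1kv1
csr1kv2

[dc_east:vars]
ansible_ssh_pass=secret
ansible_network_os=ios

[dc_west]
nxos-spine1 ansible_ssh_pass=supersecret
nxos-spine2
```

Here the host variable on the nxos-spine1 line overrides both group and all-level values, while nxos-spine2 falls back to the admin password from [all:vars].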

HOSTS        USERNAME  PASSWORD     LOOPBACK     LOCATION  OPERATING SYSTEM
csr1kv1      Admin     secret       192.168.1.1  AMERS     Cisco IOS Software
csr1kv2      Admin     secret       192.168.1.2  AMERS     Cisco IOS Software
nxos-spine1  Admin     supersecret  192.168.2.1  EMEA      Cisco Nexus Operating System (NX-OS)
nxos-spine2  Admin     admin        192.168.2.2  EMEA     Cisco NX-OS

Expanded Host and Group Variable Management


As the infrastructure begins to grow, it does not make sense to store all the variables in the inventory file.
You can move all the group variables and host variables into special directories, called group_vars and
host_vars, where Ansible will look for the current variables. For the groups directory, you can create
individual YAML files with the names of the groups that are defined in the inventory file and store all the
variables that belong to those particular groups in those YAML files. For the host variables, the YAML files
must have the same names as the hosts in the inventory file.

As those variables become even harder to manage, you can also create directories named after the groups and
split the variables into individual YAML files by type. For example, you can store all your SNMP variables in
one file and all your interface variables in another. You can do the same for host variables by creating a
directory that is named after the host device.
.
├── group_vars
│ ├── ios
│ │ ├── snmp.yml
│ │ └── interfaces.yml
│ └── nxos
│ ├── snmp.yml
│ └── interfaces.yml
├── inventory
└── playbook.yml

Variables can be defined in the inventory file or within a directory that is called group_vars:
• Variables are specific to a group.
• Variables are accessible within playbooks and templates.

Alternatively, you can create a directory that has the same name as the group and keep individual
files in that directory.
.
├── host_vars
│ ├── csr1kv1
│ │ ├── snmp.yml
│ │ └── interfaces.yml
│ └── nxos-spine1
│ ├── snmp.yml
│ └── interfaces.yml
├── inventory
└── playbook.yml

Variables can be defined in the inventory file or within a directory that is called host_vars:
• Variables are specific to a host.
• Variables are accessible within playbooks and templates.

You can alternatively create a directory that has the same name as the host and keep individual files
in that directory. A host variables file does the same job as a group variables file but for a single host.
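For instance, a host-level SNMP variables file might contain the following (the values are illustrative, not the course's exact data):

```yaml
# host_vars/csr1kv1/snmp.yml -- illustrative values only
---
snmp_community: public
snmp_location: NYC_HQ
snmp_contact: NOC_TEAM
```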
1. Which statement about using group and host variables in Ansible is true?
a. Ansible looks for files that have the same name as devices in the inventory for variables.
b. A file in a group_vars subdirectory must have the same name as a group that is defined in the
inventory file.
c. A file in a host_vars subdirectory must have the same name as a host that is defined in the
inventory file.
d. When using the group_vars subdirectory, it must contain files or directories that are the names of
groups.

Use the Cisco IOS Core Configuration Module
The ios_config module was first introduced in Ansible Release 2.1. It allows the user to make configuration
changes and back up the configuration to Cisco IOS devices. Some of the parameters that are commonly
used are commands and src. Technically, lines is the parameter and commands is an alias, because they are
just "lines within a configuration file."

src and lines or commands are mutually exclusive for this module. When you use src, the value would be
the configuration file that you want to deploy on the network. This file can contain several CLI commands.
Each task can use an optional name attribute that maps to arbitrary text that is displayed when you
run the playbook. This attribute provides context about where you are in the playbook execution.
With the ios_config module, you can send a single command or multiple lines of commands. You can also
source a configuration file.
- name: ENSURE STATIC ROUTE EXISTS ON IOS DEVICES TASK 2 in PLAY 1
ios_config:
lines:
- ip route 172.16.1.0 255.255.255.0 172.16.2.1

- name: ENSURE CONFIG EXISTS ON IOS DEVICES TASK 3 in PLAY 1


ios_config:
src: cisco_ios.cfg

The following is an example:


[all:vars]
ansible_ssh_pass=cisco
ansible_user=cisco
ansible_network_os=ios

snmp_community=cisco_corse
snmp_location=CA_HQ
snmp_contact=JOHN_SMITH

[ios]
csr1kv1 node_id=1
csr1kv2 node_id=2
csr1kv3 node_id=3

You can also configure your devices based on variables that are defined in host_vars and group_vars. The
first example shows how you can configure your devices with host_vars variables that are defined in the
inventory file.
---

- name: MAKE CONFIG CHANGES USING HOST_VARS AND GROUP_VARS


hosts: all
connection: network_cli
gather_facts: no

tasks:

- name: CONFIGURE HOSTNAME USING VARIABLES STORED IN HOST_VARS


ios_config:
commands: "hostname nycr-{{ node_id }}"

When running the playbook, you can see which configuration commands are sent to the remote device by using
the verbose (-v) flag.
$ansible-playbook -i inventory core_config.yml -v
PLAY [MAKE CONFIG CHANGES USING HOST_VARS AND GROUP_VARS]
**************************************************************************************
*************

TASK [CONFIGURE HOSTNAME USING VARIABLES STORED IN HOST_VARS]


**************************************************************************************
*************
changed: [csr1kv2] => {"banners": {}, "changed": true, "commands": ["hostname nycr-2"], "updates": ["hostname nycr-2"]}
changed: [csr1kv3] => {"banners": {}, "changed": true, "commands": ["hostname nycr-3"], "updates": ["hostname nycr-3"]}
changed: [csr1kv1] => {"banners": {}, "changed": true, "commands": ["hostname nycr-1"], "updates": ["hostname nycr-1"]}

PLAY RECAP
**************************************************************************************
*************
csr1kv1 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
csr1kv2 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
csr1kv3 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

You can also move all the variables that are currently defined in the inventory into their own YAML files:
host_vars/<hostname>.yml and group_vars/all.yml. In the next example, another task is added to configure the
SNMP community settings using variables from group_vars/all.yml.

[ios]
csr1kv1
csr1kv2
csr1kv3
# host_vars/csr1kv1.yml
---
node_id: 1

# host_vars/csr1kv2.yml
---
node_id: 2

# host_vars/csr1kv3.yml
---
node_id: 3

#group_vars/all.yml
---
ansible_ssh_pass: cisco
ansible_user: cisco
ansible_network_os: ios

snmp_community: cisco_corse
snmp_location: CA_HQ
snmp_contact: JOHN_SMITH

#PLAYBOOK
---

- name: MAKE CONFIG CHANGES USING HOST_VARS AND GROUP_VARS


hosts: all
connection: network_cli
gather_facts: no

tasks:

- name: CONFIGURE HOSTNAME USING VARIABLES STORED IN HOST_VARS


ios_config:
commands: "hostname nycr-{{ node_id }}"

- name: CONFIGURE SNMP COMMUNITY USING VARIABLES STORED IN GROUP_VARS


ios_config:
commands:
- snmp-server community {{ snmp_community }} RO
- snmp-server location {{ snmp_location }}
- snmp-server contact {{ snmp_contact }}

$ansible-playbook -i inventory core_config.yml -v


PLAY [MAKE CONFIG CHANGES USING HOST_VARS AND GROUP_VARS]
**************************************************************************************
*************

TASK [CONFIGURE HOSTNAME USING VARIABLES STORED IN HOST_VARS]


**************************************************************************************
*************
ok: [csr1kv1] => {"changed": false}

ok: [csr1kv2] => {"changed": false}
ok: [csr1kv3] => {"changed": false}

TASK [CONFIGURE SNMP COMMUNITY USING VARIABLES STORED IN GROUP_VARS]


**************************************************************************************
*************
changed: [csr1kv1] => {"banners": {}, "changed": true, "commands": ["snmp-server
community cisco_corse RO", "snmp-server location CA_HQ", "snmp-server contact
JOHN_SMITH"], "updates": ["snmp-server community cisco_corse RO", "snmp-server
location CA_HQ", "snmp-server contact JOHN_SMITH"]}
changed: [csr1kv2] => {"banners": {}, "changed": true, "commands": ["snmp-server
community cisco_corse RO", "snmp-server location CA_HQ", "snmp-server contact
JOHN_SMITH"], "updates": ["snmp-server community cisco_corse RO", "snmp-server
location CA_HQ", "snmp-server contact JOHN_SMITH"]}
changed: [csr1kv3] => {"banners": {}, "changed": true, "commands": ["snmp-server
community cisco_corse RO", "snmp-server location CA_HQ", "snmp-server contact
JOHN_SMITH"], "updates": ["snmp-server community cisco_corse RO", "snmp-server
location CA_HQ", "snmp-server contact JOHN_SMITH"]}

PLAY RECAP
**************************************************************************************
*************
csr1kv1 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
csr1kv2 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
csr1kv3 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

1. Which two parameters can be used to deploy configurations in the configuration module? (Choose
two.)
a. config
b. src
c. commands
d. command
e. source

Jinja2 and Ansible Templates
Typically, network engineers perform countless manual network operations and manual network changes.
The most common workflow is to build a Microsoft Notepad file or Word document and call it a template.
However, that template is only a set of instructions for building a configuration—it is not a real template.
Jinja2 is a templating engine that is purpose-built for Python. This topic discusses the Jinja2 templating
engine and how to build programmatic templates.

Jinja2 Overview

Templating languages have existed for a long time and most of these languages are used in the web
development industry. Much of the web is based on templates, so rather than writing HTML files for every
user profile or every page on a website, developers can build a template and add dynamic values to it based
on the data that is presented on the back-end system.
Based on this model, template languages have a wide variety of relevant use cases including web
development, constructing emails, building reports in a text file, or generating network configurations.
Jinja2 templates for networking provide consistency instead of handcrafting text files full of CLI
commands. With templates, you declare which part of the configuration file must remain static, and which
parts should be dynamic (and use variables). Every experienced network engineer has prepared a
configuration file for a new piece of network equipment, like a switch or router. Sometimes this task must
be performed for many switches that will be part of a data center migration or a new site deployment; it
could even be for the migration of a few commands that exist across a campus that are needed to deploy
IEEE 802.1X.
You can turn your Cisco configuration into a Jinja2 template and autogenerate configuration files with the
following process:
1. Construct a Jinja template file.
2. Place data that changes per device group or per device into a variable.
3. Create a data file (JSON or YAML).
4. Generate configuration files by rendering the template with the data.

One of the primary values of templates for network engineers is achieving configuration consistency. If
implemented correctly, templates reduce the likelihood that a human error can cause issues while making
changes to production network configurations.

Construct a Jinja2 Template from a Network Configuration
Jinja2 is a templating language for Python, so many similarities exist between the two. Most programming
languages have one or more templating languages.

As stated previously, Jinja2 is very common in HTML programming and is used heavily within the Python
Flask web framework. Most template languages are not completely “programming languages,” but a
template language is closely tied to another language that will push data into the templates that were built.
One of the first things to notice about Jinja2 is the variable syntax. Jinja2 uses double curly brackets, for
example, {{ variable }}. This type of syntax denotes a variable.
Jinja2 is a lightweight templating language that supports strings, lists, and dictionaries, just as Python and
Ansible do. Because Jinja2 was built for Python, it also supports conditionals, loops, and more. The goal is to
keep these templates simple. For more information on Jinja2, visit the Jinja2 website at http://jinja.pocoo.org/docs/.
The core idea of the Jinja2 templating language is to pull variables from a data source (variable or data file)
and add them, or render them, with the Jinja2 template. This type of task can be called variable replacement.
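The variable-replacement idea can be illustrated with a minimal standard-library sketch. This is a toy substitute for Jinja2's {{ }} syntax, not the real engine; the function name render and the sample data are illustrative only:

```python
import re

# Toy illustration of Jinja2-style {{ variable }} replacement using only the
# standard library; real templating should use the Jinja2 engine itself.
def render(template: str, data: dict) -> str:
    # Replace each {{ name }} occurrence with the matching value from data.
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: str(data[m.group(1)]), template)

template = (
    "snmp-server community {{ snmp_community }} RO\n"
    "snmp-server location {{ snmp_location }}"
)
data = {"snmp_community": "public", "snmp_location": "United_States"}

print(render(template, data))
# snmp-server community public RO
# snmp-server location United_States
```

Jinja2 does the same substitution, but adds loops, conditionals, and filters on top of it.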
1. Which statement best describes Jinja2?
a. Jinja2 is a programming language that is used with Python.
b. Jinja2 is a templating language that can be used to simplify the generation of a text file when
using Python.
c. Jinja2 is the only templating language that works with networking devices.
d. Jinja2 is a templating language that is purpose-built for templating network configurations.

Basic Jinja2 with YAML
YAML can be described as follows:
• YAML documents start with three hyphens (---). This syntax indicates the beginning of a YAML
document.
• YAML stands for “YAML Ain’t Markup Language” or “Yet Another Markup Language.”
• The goal is to provide human-readable data serialization.
• Engineers often choose between JSON and YAML for data serialization.
• YAML syntax can easily be deserialized into data structures that are used by code.

If you try to compare YAML to other data formats like JSON, it seems to do much the same thing. It
represents constructs like lists, key-value pairs, strings, and integers. However, one of the advantages of
YAML is that it does this task in a very human-readable way. As mentioned before, it is very easy to read
and write, and you can understand the basic data types. This readability is why a huge number of tools use
YAML as the method to define an automation workflow or provide a data set to work with.
Next, you will explore YAML in the context of networking. The current example has a key for hostname
and a value of the device hostname. To access the hostname with Jinja2, it would be as simple as just adding
the key or variable name between the two curly brackets {{ hostname }}. Some other details to consider
with YAML are the following:
• YAML is case-sensitive.
• YAML uses indents for structure.
• YAML uses spaces, not tabs.
• Comments begin with a pound symbol (#).
• Strings do not need quotation marks unless they include special characters.

Basic Jinja2 with YAML

Determining which data you can parameterize can be as simple as figuring out what changes in the Cisco
configuration versus what stays the same. On the left in the example, there is a Cisco configuration; the
interface keyword is Cisco syntax that never changes. However, the interface ID changes for every interface,
so that is data you can parameterize.
On the left of the figure, you have the Jinja2 template and on the right is the data that you will use to
substitute the variables between the Jinja2 syntax.
The following standard configuration file is used to generate the Jinja2 template:
#output_file.cfg
!
interface GigabitEthernet2
description used_jinja2
ip address 10.10.10.1 255.255.255.0
cdp enable
no mop enabled
no mop sysid
!
!
interface GigabitEthernet3
description used_jinja2
ip address 10.10.10.2 255.255.255.0
cdp enable
no mop enabled
no mop sysid
!

Other examples include the following:


Standard config:
snmp-server community public RO
snmp-server community private RW
snmp-server location United_States
snmp-server contact cisco_lab

The Jinja2 template is as follows:


snmp-server community {{ snmp_community_ro }} RO
snmp-server community {{ snmp_community_rw }} RW
snmp-server location {{ snmp_location }}
snmp-server contact {{ snmp_contact }}

The YAML file is as follows:

---
snmp_community_ro: public
snmp_community_rw: private
snmp_location: United_States
snmp_contact: cisco_lab

YAML Data Structures

Similar to other programming languages, YAML has data structures. In Python and other languages, key: value
pairs are called dictionaries or hashes; in YAML, these structures are called mappings. Lists or arrays in
YAML are called sequences, and each item starts with a hyphen (-).
On the left in the example, there is a CLI configuration that shows configurations for each interface. Each
interface has its own configuration, which means that you can add a list of interfaces with its own attributes.
On the right of the figure is the data model that will represent your configuration.
An example of a list of VLANs is as follows:
YAML:
---
vlans:
- 100
- 101
- 102

JSON:
{
"vlans": [
100,
101,
102
]
}

Other examples of nested data structures include the following:

YAML:
---
- vlan_name: web
vlan_id: '10'
vlan_state: active
- vlan_name: app
vlan_id: '20'
vlan_state: active
- vlan_name: DB
vlan_id: '30'
vlan_state: active

JSON:
[
{
"vlan_name": "web",
"vlan_id": "10",
"vlan_state": "active"
},
{
"vlan_name": "app",
"vlan_id": "20",
"vlan_state": "active"
},
{
"vlan_name": "DB",
"vlan_id": "30",
"vlan_state": "active"
}
]
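Because the YAML and JSON forms above describe the same structure, the JSON version loads directly into native data structures; a quick standard-library check (with PyYAML installed, yaml.safe_load() on the YAML version would produce the same result):

```python
import json

# JSON form of the nested VLAN data shown above.
json_text = """
[
  {"vlan_name": "web", "vlan_id": "10", "vlan_state": "active"},
  {"vlan_name": "app", "vlan_id": "20", "vlan_state": "active"},
  {"vlan_name": "DB",  "vlan_id": "30", "vlan_state": "active"}
]
"""

vlans = json.loads(json_text)   # a list of dicts (a YAML sequence of mappings)
print(vlans[0]["vlan_name"])    # -> web
print(len(vlans))               # -> 3
```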

Examine Jinja2 for Loops and Conditionals

As mentioned previously, Jinja2 is similar to Python and is a lightweight programming language. It allows
you to use things like for loops and even add logic to your templates.
• For loop: A for loop opens a code block with {% for ... %} and closes it with {% endfor %}. In the example,
you configure multiple interfaces using the same interface snippet in the Jinja2 template and add the data
that is needed in the YAML file. After Ansible or another tool renders that data, you generate a configuration
file based on the supplied data without modifying the Jinja2 template. The goal is to modify only the data and
never touch the Jinja2 template.
• If conditional logic: Jinja2 supports conditional statements, such as if/else. A conditional block opens with
{% if ... %}, may contain elif and else clauses, and must close with {% endif %}, as shown in the figure.

In the example, notice that you can add a state key to add extra configuration if you need the interface to be
in the up or down state.
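As a hedged sketch (the interfaces and state variable names are illustrative, not the course's exact data model), a template combining a for loop and an if test might look like this:

```jinja2
{% for interface, config in interfaces.items() %}
interface {{ interface }}
 description {{ config["description"] }}
{% if config["state"] == "up" %}
 no shutdown
{% else %}
 shutdown
{% endif %}
{% endfor %}
```

Adding another interface to the YAML data extends the generated configuration without any change to the template itself.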
1. Which data structures can be represented in YAML?
a. string
b. list
c. dictionary
d. nested dictionary
e. dictionaries, lists, strings

Configuration Templating with Ansible
It is important to remember that there are three parts to using templates: writing the template, creating the
data file that contains the configuration data, and using something like Ansible or Python to render the
template and YAML data. This subtopic focuses on Ansible.

The example shows a Jinja2 template that is called interface.j2. The YAML file that contains data is called
csr1kv1.yml. The Ansible rendering engine is used to merge the files and generate a configuration that is
called csr1kv1-output.cfg. All these tasks are accomplished thanks to the Ansible playbook, which the
engineer designs to render the data and push it to the remote devices.
Ansible will look for variables in various locations: the inventory file, the group_vars/ directory, the
host_vars/ directory, or the playbook itself if defined there, without the user having to specify where to find
them. In these examples, the variables are defined in per-host directories under host_vars/.

Render a Jinja2 Template and YAML Data

After building the Jinja2 templates and adding the data to the YAML file, it is time to use Ansible to render
that data and create a configuration file.
1. The template module renders the Jinja2 template and YAML data to generate the configuration file.
2. The src parameter tells the module where to find the Jinja2 template. If the Jinja2 templates are in the
templates directory, you are not required to define the direct path in the parameter unless it is in another
path.
3. At the dest parameter, you will define the path where the configuration file will be generated,
specifically the directory location and the name of the file. In this example, the configuration directory
must be created manually so the template module can add the created files in the correct location. The
filename {{ inventory_hostname }}.cfg is a built-in variable that will be replaced with each device that
is defined in the hosts key in the play definition.

It is worth noting that when you execute a playbook, Ansible gathers all the variables that it can find within
the inventory file, host and group variables, and other locations that are possible. All variables or facts in
Ansible can be used in a playbook or Jinja template.
After running the playbook, the task that is using the template module will generate the configuration files
and store them in the specified dest: path.
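A minimal sketch of such a playbook follows; the play and task names match the run output below, but the exact file paths are assumptions, not the course's verbatim build_config.yml:

```yaml
---
- name: Configure IOS Interfaces
  hosts: all
  connection: network_cli
  gather_facts: no

  tasks:
    - name: Generate Configuration
      template:
        src: interface.j2
        dest: "configs/{{ inventory_hostname }}.cfg"
```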
student@student-vm:~/ansible/t$ ansible-playbook -i inventory build_config.yml
PLAY [Configure IOS Interfaces]
************************************************************************
TASK [Generate Configuration]
************************************************************************
changed: [csr1kv3]
changed: [csr1kv2]
changed: [csr1kv1]
PLAY RECAP ************************************************************************
csr1kv1 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
csr1kv2 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
csr1kv3 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Before:
.
├── build_config.yml
├── configs
├── host_vars
│ ├── csr1kv1
│ │ └── interface.yml
│ ├── csr1kv2
│ │ └── interface.yml
│ └── csr1kv3
│ └── interface.yml
├── inventory
└── templates
└── interface.j2
6 directories, 6 files

After:

.
├── build_config.yml
├── configs
│ ├── csr1kv1.cfg
│ ├── csr1kv2.cfg
│ └── csr1kv3.cfg
├── host_vars
│ ├── csr1kv1
│ │ └── interface.yml
│ ├── csr1kv2
│ │ └── interface.yml
│ └── csr1kv3
│ └── interface.yml
├── inventory
└── templates
└── interface.j2

Deploy Configurations
To push the configurations, you must add another task using the ios_config module and use the src
parameter to tell it where to find the generated configuration files.

The ios_config module can also source the Jinja2 template. In this way, you can build a single task that will
render the Jinja2 template with the data and push the configuration all at once, so no configuration file is
stored locally on the Ansible workstation. The configuration is simply generated and pushed to the remote
device.
- name: Build and Push Configuration
ios_config:
src: interface.j2

1. When Ansible creates a configuration file from a Jinja template, how does Ansible know where to
look for the variables that the template uses?
a. The variables are passed to the template as parameters in the template module.
b. Ansible knows where to find the variables.
c. Variable search paths are defined in the ansible.cfg file.
d. Ansible knows to look only in host_vars and group_vars files.

Discovery 13: Build Ansible Playbooks to
Manage Infrastructure
Introduction
Ansible is best known for configuration management and can be used to manage both production and test
infrastructure. In this activity, you will use Ansible to build out the production infrastructure. You will test
the playbooks that you could use later on the lab test infrastructure. As you know, Terraform can be used to
bootstrap and deploy the test infrastructure comprising, for example, servers, routers, and a firewall. In this
activity, you will see how Ansible can be used to deploy the required configurations on each of the test
devices that exist within the VMware vSphere ESXi (ESXi) nested infrastructure. You will execute the
playbooks against the production infrastructure to minimize the time spent waiting for the lab test
infrastructure to boot.
You will be able to use these playbooks in other lab activities, for example to build lab test topology
infrastructure that is the same as the lab production topology.

Topology

Job Aid

Device Information

Device Description FQDN / IP Address Credentials

Student Workstation Linux Ubuntu VM 192.168.10.10 student, 1234QWer

k8s1 Kubernetes 192.168.10.21 student, 1234QWer

k8s2 Kubernetes 192.168.10.22 student, 1234QWer

k8s3 Kubernetes 192.168.10.23 student, 1234QWer

Device Description FQDN / IP Address Credentials

csr1kv1 Cisco Router 192.168.10.101 student, 1234QWer

csr1kv2 Cisco Router 192.168.10.102 student, 1234QWer

csr1kv3 Cisco Router 192.168.10.103 student, 1234QWer

asa1 Firewall 192.168.10.51 student, 1234QWer (enable: cisco)

Command List
The table describes the commands that are used in this activity. The commands are listed in alphabetical
order so that you can easily locate the information that you need. Refer to this list if you need configuration
command assistance during the lab activity.

Command Description

cd directory name To change directories within the Linux file system, use the cd
command. You will use this command to enter a directory where the
lab scripts are housed. You can use tab completion to finish the name
of the directory after you start typing it.

cat file The most common use of the cat Linux command is to read the
contents of files. It is the most convenient command for this purpose in
UNIX-like operating systems.

more file-name To view the content of a file (one full window at a time), use the more
Linux command. Press space to view the next part of the file.

cp source-file destination-file Copies a file from the source to the destination. The files can be
absolute or relative paths.
You may also copy entire folders with the -r flag.

mv source-file destination-file Moves/Renames a file from the source to the destination. The files can
be absolute or relative paths.
You may also move or rename a folder.

code file-name|dir-name Opens the provided file or directory in the graphical editor VS Code. If
it is already running, it will open the file in a new tab.

ansible-playbook playbook-file-name --list-tasks Lists the task names in an Ansible
playbook.

ansible-playbook playbook-file-name Executes an Ansible playbook to manage
infrastructure.

Task 1: Build Ansible Playbook for Router
Configuration
The tasks in this lab are all performed on the Student Workstation. You will view and edit the YAML files
using Visual Studio Code from the Student Workstation. Visual Studio Code provides syntax highlighting
and helps ensure proper formatting.

Activity

View the Directory Structure

Step 1 In the Student Workstation, open a terminal window and change the directory to ~/labs/lab13 using the cd
~/labs/lab13 command.

student@student-vm:$ cd ~/labs/lab13
student@student-vm:labs/lab13$

Step 2 Use the tree --dirsfirst command to view the directory structure.

The directory structure provides the following content:


• The hosts file that defines the Ansible inventory.
• The templates directory for storing Jinja2 templates to build infrastructure configurations.
• The vars directory structure for storing configuration data of each device.
• The outputs directory for creating artifacts of the configurations being sent to the devices
for configuration.
• Ansible playbooks for creating and deploying the configurations to the infrastructure
devices.

student@student-vm:labs/lab13$ tree --dirsfirst
.
├── files_for_remote_hosts
│ ├── authorized_keys_for_remote_servers
│ ├── git.lab.crt
│ ├── hosts
│ ├── id_rsa
│ └── root-git.lab.crt
├── outputs
│ └── configs
├── templates
│ ├── firewalls.j2
│ └── routers.j2
├── vars
│ ├── asa1
│ │ ├── bgp.yml
│ │ ├── interfaces.yml
│ │ └── policies.yml
│ ├── csr1kv1
│ │ ├── bgp.yml
│ │ └── interfaces.yml
│ ├── csr1kv2
│ │ ├── bgp.yml
│ │ └── interfaces.yml
│ └── csr1kv3
│ ├── bgp.yml
│ └── interfaces.yml
├── ansible.cfg
├── firewalls.yml
├── hosts
├── routers.yml
└── servers.yml

Step 3 Open the lab13 directory for viewing and editing files in Visual Studio Code. Use the code . command.

student@student-vm:labs/lab13$ code .

Build an Ansible Playbook for Router Configuration
You will find the predeployed playbook to deploy router configurations named routers.yml. The ansible-
playbook command provides a --list-tasks flag that shows the names and tags given to plays and tasks in
the playbook.

Step 4 Use the ansible-playbook routers.yml --list-tasks command to view plays and tasks specified in the
routers.yml playbook.

student@student-vm:labs/lab13$ ansible-playbook routers.yml --list-tasks

playbook: routers.yml

play #1 (routers): CREATE AND DEPLOY ROUTER CONFIGURATIONS TAGS: []


tasks:
LOAD CONFIG VARIABLES FROM VARS FILE TAGS: []
BUILD CONFIGURATION ARTIFACTS FOR DEVICES TAGS: []
CHECK IF DEVICE IS REACHABLE TAGS: []
DEPLOY CONFIGURATION ARTIFACTS TO DEVICES TAGS: []
ENABLE RESTCONF ON DEVICES TAGS: []

The first task of the playbook loads the appropriate variables from the vars directory, based on
inventory_hostname Ansible variable.

The second task uses the variables from the first task to build a configuration file for the specified devices.

Ansible uses Jinja2 as its templating engine. The routers.yml playbook uses the templates/routers.j2 Jinja
file for building the appropriate configurations for each device.

Step 5 Open the templates/routers.j2 template in Visual Studio Code using the code templates/routers.j2
command. Examine the template.

The first line in the Jinja2 file is a mechanism for ignoring indentation of control blocks within

the template. Indenting nested control structures (such as for and if/else clauses) helps you to
create more readable templates. The remainder of the file is the actual templating of the
configuration. You will notify two sections in the template; the interface and BGP
configurations.
The interface section loops through each defined interface and adds configuration for its IP
address. The schema definition uses separate fields for the address and the netmask, and the
netmask is stored as an integer in CIDR (prefix length) notation. Ansible provides Jinja2 filters
for working with IP addresses, one of which converts the netmask into different formats.
In Jinja2, filters are ways of extending a template by calling a Python function. The ipaddr filter
takes two arguments: the IP address (for example, 10.1.1.10/24) and the type of data to
retrieve from it. This template specifies netmask, which returns the netmask for the IP address
in the traditional dotted-decimal format.
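The conversion that this filter performs can be sketched in plain Python with the standard-library ipaddress module. This mirrors what ip_cidr | ipaddr("netmask") produces in the template; it is an illustration of the logic, not the filter's actual implementation:

```python
import ipaddress

def cidr_to_netmask(ip_cidr):
    """Return the dotted-decimal netmask for an address in CIDR form,
    mimicking what `ip_cidr | ipaddr("netmask")` renders in Jinja2."""
    network = ipaddress.ip_network(ip_cidr, strict=False)
    return str(network.netmask)

print(cidr_to_netmask("10.1.1.10/24"))  # 255.255.255.0
```

For example, a /22 prefix such as 10.10.0.0/22 renders as 255.255.252.0, which is the format the ASA access lists in this lab expect.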
The BGP section builds configurations for defining the local AS, the local networks to advertise,
and the neighbor information to form adjacencies. Configuring the networks to advertise also
requires converting the netmask to the traditional format. All other configuration in the BGP
section simply places the configuration definitions into the appropriate syntax.
This template contains most of the logic for deploying the infrastructure configurations, so the
actual playbook tasks are fairly straightforward.
student@student-vm:labs/lab13$ cat templates/routers.j2

#jinja2: lstrip_blocks: True


{% if interfaces_config %}
{% for interface, config in interfaces_config.items() %}
{% set ip_addr = config["ip_address"] %}
{% set ip_cidr = ip_addr["ip"] ~ "/" ~ ip_addr["mask"] %}
interface {{ interface }}
ip address {{ ip_addr["ip"] }} {{ ip_cidr | ipaddr("netmask") }}
no shutdown
{% endfor %}
{% endif %}

{% if bgp_config %}
router bgp {{ bgp_config["local_as"] }}
{% for network in bgp_config["advertised_networks"] %}
{% set ip_cidr = network["ip"] ~ "/" ~ network["mask"] %}
network {{ network["ip"] }} mask {{ ip_cidr | ipaddr("netmask") }}
{% endfor %}
{% for neighbor in bgp_config["neighbors"] %}
neighbor {{ neighbor["ip"] }} remote-as {{ neighbor["remote_as"] }}
{% endfor %}
{% endif %}

student@student-vm:labs/lab13$ code templates/routers.j2
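The interface loop in the template above can be emulated in plain Python to preview what Ansible will generate for one device. This is a standard-library mimic of the Jinja2 logic, and the sample interfaces_config data is hypothetical:

```python
import ipaddress

# Hypothetical data in the shape of vars/<host>/interfaces.yml after include_vars.
interfaces_config = {
    "GigabitEthernet1": {"ip_address": {"ip": "10.10.4.1", "mask": 24}},
}

lines = []
for interface, config in interfaces_config.items():
    ip_addr = config["ip_address"]
    # Equivalent of {{ ip_cidr | ipaddr("netmask") }} in the template.
    netmask = ipaddress.ip_network(
        f"{ip_addr['ip']}/{ip_addr['mask']}", strict=False
    ).netmask
    lines += [f"interface {interface}",
              f"ip address {ip_addr['ip']} {netmask}",
              "no shutdown"]

print("\n".join(lines))
```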

Step 6 Open the routers.yml playbook in Visual Studio Code using the code routers.yml command. Examine the
playbook.

The play declaration provides a name, the group of hosts to execute the playbook against, and
disables fact gathering. This play is set to execute against the routers group, which is defined in
the hosts file at the root of the lab13 directory. The routers group contains the csr1kv1, csr1kv2,
and csr1kv3 hosts. With the default configuration, Ansible will execute the tasks sequentially,
but each task will execute against all hosts simultaneously.
The first task in the playbook uses the include_vars module to load variables from YAML files
in the vars directory. The task uses the file argument to tell Ansible which YAML file to load
the variables from, and the name argument to provide a top-level key under which to reference
the new variables. Namespacing the variables is important, as it prevents existing variables, or
variables from other files, from being overwritten. The files that are loaded contain the
configuration data for the infrastructure devices. Each configuration section is defined in a
separate YAML file. The first task uses the loop attribute to iterate through the list of filenames
without the .yml extension. In addition to using these filenames to load the appropriate vars file,
each name is used to predictably namespace the variables. For example, the variables
defined in a host's bgp.yml file will be loaded into a variable named bgp_config.
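This namespacing behavior can be mimicked in plain Python: each file's variables land under their own top-level key, so keys from one file cannot collide with keys from another. The sample data below is hypothetical and stands in for the parsed YAML files:

```python
# Pretend these dicts were parsed from vars/<host>/interfaces.yml and bgp.yml.
loaded_files = {
    "interfaces": {"GigabitEthernet1": {"ip_address": {"ip": "10.10.1.1", "mask": 24}}},
    "bgp": {"local_as": 65001, "neighbors": []},
}

# Mimic `include_vars: name: "{{ item }}_config"` inside `loop: ["interfaces", "bgp"]`.
hostvars = {}
for item, variables in loaded_files.items():
    hostvars[f"{item}_config"] = variables  # namespaced: no key collisions possible

print(sorted(hostvars))  # ['bgp_config', 'interfaces_config']
```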
The second task uses the template module to render a Jinja2 template into an artifact of the
infrastructure device configuration, based on the variables loaded in the first task. This module
requires two arguments: src, the template file used to generate the output, and dest, the path
where the resulting file should be saved. This task saves the configuration of each device to the
outputs/configs directory, using the hostname of each device as the filename.
You will now add a third task that makes sure that devices are accessible and can be managed
before being configured. Name this task CHECK IF DEVICE IS REACHABLE and use the
wait_for Ansible module, which waits for the host to become available. Use the following syntax:

- name: "CHECK IF DEVICE IS REACHABLE"
  wait_for:
    host: "{{ ansible_host }}"
    port: 22
    timeout: 600
  delegate_to: "localhost"

The next task uses the ios_config module to deploy the configurations created in the previous
task to the infrastructure devices. The src argument provides the path to the configuration
artifact. The save_when argument saves the running configuration as the startup
configuration if Ansible sends any configuration commands to the device. It is important to note
that Ansible will only send a command to the device if that command does not already exist in
the running configuration of that device.
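This change-detection behavior can be sketched with a hypothetical helper. Note that this flat line comparison is a simplification; the real ios_config module matches commands hierarchically against the running configuration:

```python
def push_config(running_config, desired_config):
    """Return (commands_sent, changed) the way ios_config behaves:
    lines already present in the running configuration are skipped,
    and `changed` is what drives `save_when: "changed"`."""
    commands = [line for line in desired_config if line not in running_config]
    return commands, bool(commands)

running = ["hostname csr1kv1", "restconf"]
print(push_config(running, ["restconf"]))                 # ([], False) -> nothing sent or saved
print(push_config(running, ["restconf", "ip routing"]))   # (['ip routing'], True) -> saved
```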
At the end of the playbook, you will add another task that enables RESTCONF on the
routers. Name this task ENABLE RESTCONF ON DEVICES and use the following syntax:
- name: "ENABLE RESTCONF ON DEVICES"
  ios_config:
    lines: "restconf"
    save_when: "changed"

student@student-vm:labs/lab13$ cat routers.yml

---
- name: "CREATE AND DEPLOY ROUTER CONFIGURATIONS"
  hosts: "routers"
  gather_facts: no
  tasks:
    - name: "LOAD CONFIG VARIABLES FROM VARS FILE"
      include_vars:
        file: "vars/{{ inventory_hostname }}/{{ item }}.yml"
        name: "{{ item }}_config"
      loop: ["interfaces", "bgp"]
      delegate_to: "localhost"

    - name: "BUILD CONFIGURATION ARTIFACTS FOR DEVICES"
      template:
        src: "templates/routers.j2"
        dest: "outputs/configs/{{ inventory_hostname }}"
      delegate_to: "localhost"

    - name: "CHECK IF DEVICE IS REACHABLE"
      wait_for:
        host: "{{ ansible_host }}"
        port: 22
        timeout: 600
      delegate_to: "localhost"

    - name: "DEPLOY CONFIGURATION ARTIFACTS TO DEVICES"
      ios_config:
        src: "outputs/configs/{{ inventory_hostname }}"
        save_when: "changed"

    - name: "ENABLE RESTCONF ON DEVICES"
      ios_config:
        lines: "restconf"
        save_when: "changed"

student@student-vm:labs/lab13$

Task 2: Build Ansible Playbook for Firewall
Configuration
Activity

You will find the predeployed playbook to deploy firewall configurations named firewalls.yml.

Step 1 Use the ansible-playbook firewalls.yml --list-tasks command to view plays and tasks specified in the
firewalls.yml playbook.

student@student-vm:labs/lab13$ ansible-playbook firewalls.yml --list-tasks

playbook: firewalls.yml

  play #1 (firewalls): CREATE AND DEPLOY FIREWALL CONFIGURATIONS  TAGS: []
    tasks:
      LOAD CONFIG VARIABLES FROM VARS FILE  TAGS: []
      BUILD CONFIGURATION ARTIFACTS FOR DEVICES  TAGS: []
      CHECK IF DEVICE IS REACHABLE  TAGS: []
      DEPLOY CONFIGURATION ARTIFACTS TO DEVICES  TAGS: []

This playbook is similar to the routers.yml playbook; the only difference is that RESTCONF is not
enabled on the firewalls. Open the file in Visual Studio Code to compare it with that playbook.

Step 2 Open the firewalls.yml playbook in Visual Studio Code using the code firewalls.yml command. Examine
the playbook.

The main difference compared to the routers.yml playbook is the addition of configuring ACL
policies.

There are a few other differences:
• The play hosts are the firewalls group, which only references the asa1 host in the hosts
inventory file.
• The loading of variable files includes loading the variables for access policies.
• The Jinja2 template is used for firewalls.
• The asa_config module is used to deploy the configurations.
student@student-vm:labs/lab13$ cat firewalls.yml

---
- name: "CREATE AND DEPLOY FIREWALL CONFIGURATIONS"
  hosts: "firewalls"
  gather_facts: no
  tasks:
    - name: "LOAD CONFIG VARIABLES FROM VARS FILE"
      include_vars:
        file: "vars/{{ inventory_hostname }}/{{ item }}.yml"
        name: "{{ item }}_config"
      loop: ["interfaces", "bgp", "policies"]
      delegate_to: "localhost"

    - name: "BUILD CONFIGURATION ARTIFACTS FOR DEVICES"
      template:
        src: "templates/firewalls.j2"
        dest: "outputs/configs/{{ inventory_hostname }}"
      delegate_to: "localhost"

    - name: "CHECK IF DEVICE IS REACHABLE"
      wait_for:
        host: "{{ ansible_host }}"
        port: 22
        timeout: 600
      delegate_to: "localhost"

    - name: "DEPLOY CONFIGURATION ARTIFACTS TO DEVICES"
      asa_config:
        src: "outputs/configs/{{ inventory_hostname }}"
        save: True

Step 3 Open the templates/firewalls.j2 template in Visual Studio Code using the code templates/firewalls.j2
command. Examine the template.

The first two sections are similar to the routers.j2 template; the only modifications are changes
in syntax. The last two sections are specific to the firewalls. The order of these configurations is
important, as Ansible will try to add the configuration commands in the order provided by the
template. The last section assigns an ACL policy to an interface, which will only be accepted if
the policy already exists, so the third section must create the policy first. These two sections
follow the same format as the previous sections, but use the variables and syntax specific to
building ACL policies.

student@student-vm:labs/lab13$ cat templates/firewalls.j2

#jinja2: lstrip_blocks: True


{% if interfaces_config %}
{% for interface, config in interfaces_config.items() %}
{% set ip_addr = config["ip_address"] %}
{% set ip_cidr = ip_addr["ip"] ~ "/" ~ ip_addr["mask"] %}
{% set zone = config["zone"] %}
interface {{ interface }}
nameif {{ zone["name"] }}
security-level {{ zone["security_level"] }}
ip address {{ ip_addr["ip"] }} {{ ip_cidr | ipaddr("netmask") }}
{% endfor %}
{% endif %}

{% if bgp_config %}
router bgp {{ bgp_config["local_as"] }}
address-family ipv4 unicast
{% for neighbor in bgp_config["neighbors"] %}
neighbor {{ neighbor["ip"] }} remote-as {{ neighbor["remote_as"] }}
{% endfor %}
{% for network in bgp_config["advertised_networks"] %}
{% set ip_cidr = network["ip"] ~ "/" ~ network["mask"] %}
network {{ network["ip"] }} mask {{ ip_cidr | ipaddr("netmask") }}
{% endfor %}
{% endif %}

{% if policies_config %}
{% for policy, config in policies_config.items() %}
{% for entry in config["policies"] %}
{% set src_ip = entry["source"]["ip"] %}
{% set src_mask = entry["source"]["mask"] %}
{% if src_mask == 0 %}
{% set src_addr = "any" %}
{% elif src_mask == 32 %}
{% set src_addr = "host " ~ src_ip %}
{% else %}
{% set src_ip_cidr = src_ip ~ "/" ~ src_mask %}
{% set src_addr = src_ip ~ " " ~ src_ip_cidr | ipaddr("netmask") %}
{% endif %}
{% set dst_ip = entry["destination"]["ip"] %}
{% set dst_mask = entry["destination"]["mask"] %}
{% if dst_mask == 0 %}
{% set dst_addr = "any" %}
{% elif dst_mask == 32 %}
{% set dst_addr = "host " ~ dst_ip %}
{% else %}
{% set dst_ip_cidr = dst_ip ~ "/" ~ dst_mask %}
{% set dst_addr = dst_ip ~ " " ~ dst_ip_cidr | ipaddr("netmask") %}
{% endif %}
access-list {{ policy }} extended {{ entry["action"] }} {{ entry["protocol"] }}
{{ src_addr }} {{ dst_addr }}{% if entry.get("destination_port") %} eq
{{ entry["destination_port"] }}{% endif %}

{% endfor %}
{% endfor %}

{% for policy, config in policies_config.items() %}
access-group {{ policy }} in interface {{ config["interface"] }}
{% endfor %}
{% endif %}

student@student-vm:labs/lab13$ code templates/firewalls.j2
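The mask handling in the policies section above (mask 0 becomes any, mask 32 becomes host <ip>, anything else becomes <ip> <netmask>) can be expressed as a small Python function. This is a sketch assuming IPv4 addresses, mirroring the template's branches rather than reproducing the template itself:

```python
import ipaddress

def acl_address(ip, mask):
    """Render an ASA ACL address token the way the firewalls.j2 branches do."""
    if mask == 0:
        return "any"                   # match-all source or destination
    if mask == 32:
        return f"host {ip}"            # single-host match
    netmask = ipaddress.ip_network(f"{ip}/{mask}", strict=False).netmask
    return f"{ip} {netmask}"           # subnet match in ip/netmask form

print(acl_address("10.10.1.10", 32))   # host 10.10.1.10
print(acl_address("10.10.0.0", 22))    # 10.10.0.0 255.255.252.0
print(acl_address("0.0.0.0", 0))       # any
```

These three outputs correspond exactly to the address tokens that appear in the generated asa1 access lists later in this lab.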

Task 3: Build Ansible Playbook for Server Configuration
You will find the predeployed playbook to deploy server configurations named servers.yml.

Activity

Step 1 Use the ansible-playbook servers.yml --list-tasks command to view plays and tasks specified in the
servers.yml playbook.

student@student-vm:labs/lab13$ ansible-playbook servers.yml --list-tasks

playbook: servers.yml

  play #1 (k8s): INSTALL SERVER PACKAGES  TAGS: []
    tasks:
      RUN THE EQUIVALENT OF 'apt-get update' AS A SEPARATE STEP  TAGS: []
      INSTALL LINUX PACKAGES  TAGS: []
      ADD DOCKER KEY TO APT  TAGS: []
      ADD DOCKER REPO TO APT  TAGS: []
      INSTALL DOCKER PACKAGE  TAGS: []
      CREATE SSH DIR ON REMOTE HOST  TAGS: []
      COPY SSH KEYS, CERTS AND /etc/hosts FOR REMOTE HOSTS  TAGS: []
      UPDATE CERTIFICATES  TAGS: []
      ADD STUDENT USER TO DOCKER GROUP  TAGS: []

This playbook includes two tasks to install the base packages. It uses the apt Ansible module to update APT
and install the packages necessary for installing and running the net_inventory application. The next three
tasks install Docker on the servers. Before APT can install Docker, the Docker repository must be added to
the APT list of package sources, and the GPG key used to verify the legitimacy of the source must be added
to APT. Once APT knows where to find the Docker package and can trust the source, APT can install the
package through the normal process.

In addition to installing the necessary packages, the servers also need the proper SSH keys and certificates
to allow SSH access without password authentication. The next three tasks create the necessary directories
and install the files located in the files_for_remote_hosts directory on the Ansible control host. Finally, the
student user is added to the docker group to provide access to issue Docker commands.

Step 2 Open the servers.yml playbook in Visual Studio Code using the code servers.yml command. Examine the
playbook.

student@student-vm:labs/lab13$ cat servers.yml

---
- name: "INSTALL SERVER PACKAGES"
  hosts: "k8s"
  gather_facts: no
  become: yes
  tasks:
    - name: "RUN THE EQUIVALENT OF 'apt-get update' AS A SEPARATE STEP"
      apt:
        update_cache: yes

    - name: "INSTALL LINUX PACKAGES"
      apt:
        name:
          - "apt-transport-https"
          - "ca-certificates"
          - "curl"
          - "gnupg2"
          - "python3-pip"
          - "software-properties-common"
          - "docker-compose"

    - name: "ADD DOCKER KEY TO APT"
      apt_key:
        url: "https://download.docker.com/linux/ubuntu/gpg"
        state: "present"

    - name: "ADD DOCKER REPO TO APT"
      apt_repository:
        repo: "deb [arch=amd64] https://download.docker.com/linux/ubuntu {{ lookup('pipe', 'echo $(lsb_release -cs)') }} stable"

    - name: "INSTALL DOCKER PACKAGE"
      apt:
        name: "docker-ce"
        update_cache: yes
        state: "latest"

    - name: "CREATE SSH DIR ON REMOTE HOST"
      file:
        path: "/home/student/.ssh"
        state: "directory"
        owner: "student"
        group: "student"
        mode: "0700"

    - name: "COPY SSH KEYS, CERTS AND /etc/hosts FOR REMOTE HOSTS"
      copy:
        src: "{{ item['src'] }}"
        dest: "{{ item['dest'] }}"
        owner: "student"
        group: "student"
        mode: "0600"
      loop:
        - src: "./files_for_remote_hosts/authorized_keys_for_remote_servers"
          dest: "/home/student/.ssh/authorized_keys"
        - src: "./files_for_remote_hosts/hosts"
          dest: "/etc/hosts"
        - src: "./files_for_remote_hosts/git.lab.crt"
          dest: "/usr/local/share/ca-certificates/git.lab.crt"
        - src: "./files_for_remote_hosts/root-git.lab.crt"
          dest: "/usr/local/share/ca-certificates/root-git.lab.crt"
        - src: "./files_for_remote_hosts/id_rsa"
          dest: "/home/student/.ssh/id_rsa"

    - name: "UPDATE CERTIFICATES"
      command:
        cmd: "update-ca-certificates"

    - name: "ADD STUDENT USER TO DOCKER GROUP"
      user:
        name: "student"
        groups: "docker"
        append: "yes"

Task 4: Deploy Configurations to Test Environment
Activity

Deploy Infrastructure Configurations
The test environment is ready to be configured with Ansible. Before running any of the Ansible playbooks,
log in to the asa1 device to verify that the interface IP addresses, BGP, and access lists are not configured.

The Ansible playbooks are designed to build the necessary configurations. After the playbooks are
executed, IP addresses will be assigned, the BGP adjacencies will be formed, and the access policies applied
to the interfaces.

Step 1 In the terminal window, use the ansible-playbook routers.yml command to configure the routers.

student@student-vm:labs/lab13$ ansible-playbook routers.yml

PLAY [CREATE AND DEPLOY ROUTER CONFIGURATIONS] *********************************

TASK [LOAD CONFIG VARIABLES FROM VARS FILE] ************************************
ok: [csr1kv1 -> localhost] => (item=interfaces)
ok: [csr1kv1 -> localhost] => (item=bgp)
ok: [csr1kv2 -> localhost] => (item=interfaces)
ok: [csr1kv3 -> localhost] => (item=interfaces)
ok: [csr1kv2 -> localhost] => (item=bgp)
ok: [csr1kv3 -> localhost] => (item=bgp)

TASK [BUILD CONFIGURATION ARTIFACTS FOR DEVICES] *******************************
ok: [csr1kv2 -> localhost]
ok: [csr1kv3 -> localhost]
ok: [csr1kv1 -> localhost]

TASK [CHECK IF DEVICE IS REACHABLE] ********************************************
ok: [csr1kv1 -> localhost]
ok: [csr1kv2 -> localhost]
ok: [csr1kv3 -> localhost]

TASK [DEPLOY CONFIGURATION ARTIFACTS TO DEVICES] *******************************
changed: [csr1kv1]
changed: [csr1kv3]
changed: [csr1kv2]

TASK [ENABLE RESTCONF ON DEVICES] **********************************************
changed: [csr1kv1]
changed: [csr1kv3]
changed: [csr1kv2]

PLAY RECAP *********************************************************************
csr1kv1 : ok=5 changed=2 unreachable=0 failed=0
csr1kv2 : ok=5 changed=2 unreachable=0 failed=0
csr1kv3 : ok=5 changed=2 unreachable=0 failed=0

Step 2 In the terminal window, use the ansible-playbook firewalls.yml command to configure the firewall.

After you run the playbooks, you will notice that both Ansible playbooks reported changes for
creating the configuration artifacts and deploying those configurations to the devices. The
outputs/commands/asa1 file shows the commands that Ansible sent to the device. Since all
devices are now configured, the asa1 device should have a BGP adjacency with csr1kv3 and
be receiving routes.

student@student-vm:labs/lab13$ ansible-playbook firewalls.yml

PLAY [CREATE AND DEPLOY FIREWALL CONFIGURATIONS] *******************************

TASK [LOAD CONFIG VARIABLES FROM VARS FILE] ************************************
ok: [asa1 -> localhost] => (item=interfaces)
ok: [asa1 -> localhost] => (item=bgp)
ok: [asa1 -> localhost] => (item=policies)

TASK [BUILD CONFIGURATION ARTIFACTS FOR DEVICES] *******************************
ok: [asa1 -> localhost]

TASK [CHECK IF DEVICE IS REACHABLE] ********************************************
ok: [asa1 -> localhost]

TASK [DEPLOY CONFIGURATION ARTIFACTS TO DEVICES] *******************************
changed: [asa1]

PLAY RECAP *********************************************************************
asa1 : ok=4 changed=2 unreachable=0 failed=0

Step 3 Examine the outputs/configs/asa1 file using the cat outputs/configs/asa1 command. You can also
open it in Visual Studio Code using the code outputs/configs/asa1 command.

student@student-vm:labs/lab13$ cat outputs/configs/asa1

interface GigabitEthernet0/0
nameif inside
security-level 100
ip address 10.10.4.254 255.255.255.0
no shutdown
interface GigabitEthernet0/1
nameif outside
security-level 0
ip address 10.10.3.1 255.255.255.0
no shutdown

router bgp 65004
address-family ipv4 unicast
neighbor 10.10.4.1 remote-as 65003
network 10.10.3.0 mask 255.255.255.0

access-list INSIDE extended permit icmp 10.10.0.0 255.255.252.0 10.10.3.0 255.255.255.0
access-list OUTSIDE extended permit icmp any host 10.10.1.10
access-list OUTSIDE extended permit tcp any host 10.10.1.10 eq 5000
access-list OUTSIDE extended permit tcp any host 10.10.1.10 eq 5001

access-group INSIDE in interface inside
access-group OUTSIDE in interface outside

student@student-vm:labs/lab13$ code outputs/configs/asa1

Step 4 Establish an SSH session to the asa1 host using the ssh asa1 command.

student@student-vm:$ ssh asa1
student@asa1's password:
User student logged in to asa1
Type help or '?' for a list of available commands.
asa1>

Step 5 Use the enable command and enter the password to switch to privileged EXEC (enable) mode.

asa1> enable
Password:
asa1#

Step 6 Within the SSH session with the asa1 host, issue the show run access-list and show run access-group
commands to verify the access lists configuration:

asa1# show run access-list
access-list INSIDE extended permit icmp 10.10.0.0 255.255.252.0 10.10.3.0 255.255.255.0
access-list OUTSIDE extended permit icmp any host 10.10.1.10
access-list OUTSIDE extended permit tcp any host 10.10.1.10 eq 5000
access-list OUTSIDE extended permit tcp any host 10.10.1.10 eq 5001

asa1# show run access-group
access-group INSIDE in interface inside
access-group OUTSIDE in interface outside

Step 7 Within the SSH session with the asa1 host, verify the BGP adjacency using the show bgp summary
command.

asa1# show bgp summary

Abbreviated Output

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd
10.10.4.1 4 65003 20 12 6 0 0 00:01:39 3

The output shows that the asa1 host has one neighbor and received three prefixes. The show bgp command
will show what prefixes are received from this neighbor and will also show the AS path to reach each
network. Based on the configurations for the three csr1kv devices, the expected networks and paths are the
following:
• 10.10.1.0/24 via 65003, 65001 (originator)
• 10.10.2.0/24 via 65003, 65002 (originator)
• 10.10.4.0/24 via 65003 (originator)

Step 8 Within the SSH session with the asa1 host, verify the learned BGP paths using the show bgp command.

asa1# show bgp

Abbreviated Output

Network Next Hop Metric LocPrf Weight Path
*> 10.10.1.0/24 10.10.4.1 0 65003 65001 i
*> 10.10.2.0/24 10.10.4.1 0 65003 65002 i
*> 10.10.3.0/24 0.0.0.0 0 32768 i
r> 10.10.4.0/24 10.10.4.1 0 0 65003 i
asa1#

The asa1 host shows that it learned the three expected routes from the csr1kv3 router. This confirms that all
three of the routers were also configured correctly.

The last task is to have the Kubernetes servers install the necessary packages.

Step 9 In the terminal window, use the ansible-playbook servers.yml command to configure the Kubernetes
servers. The number of changed tasks may differ from the output displayed.

student@student-vm:labs/lab13$ ansible-playbook servers.yml

PLAY [INSTALL SERVER PACKAGES] **********************************************

TASK [RUN THE EQUIVALENT OF 'apt-get update' AS A SEPARATE STEP] ************
changed: [k8s1]
changed: [k8s2]
changed: [k8s3]

TASK [INSTALL LINUX PACKAGES] ***********************************************
changed: [k8s1]
changed: [k8s2]
changed: [k8s3]

TASK [ADD DOCKER KEY TO APT] ************************************************
changed: [k8s1]
changed: [k8s2]
changed: [k8s3]

TASK [ADD DOCKER REPO TO APT] ***********************************************
changed: [k8s1]
changed: [k8s2]
changed: [k8s3]

TASK [INSTALL DOCKER PACKAGE] ***********************************************
changed: [k8s1]
changed: [k8s2]
changed: [k8s3]

TASK [CREATE SSH DIR ON REMOTE HOST] ****************************************
changed: [k8s1]
changed: [k8s2]
changed: [k8s3]

TASK [COPY SSH KEYS, CERTS AND /etc/hosts FOR REMOTE HOSTS] *****************
changed: [k8s1]
changed: [k8s2]
changed: [k8s3]

TASK [UPDATE CERTIFICATES] **************************************************
changed: [k8s1]
changed: [k8s2]
changed: [k8s3]

TASK [ADD STUDENT USER TO DOCKER GROUP] *************************************
changed: [k8s1]
changed: [k8s2]
changed: [k8s3]

PLAY RECAP ******************************************************************
k8s1 : ok=9 changed=9 unreachable=0 failed=0
k8s2 : ok=9 changed=9 unreachable=0 failed=0
k8s3 : ok=9 changed=9 unreachable=0 failed=0

The server packages were successfully installed.

Discovery 14: Integrate the Testing Environment in the CI/CD Pipeline
Introduction
In this activity, you will build out the test infrastructure that can be used as part of the CI/CD pipeline. You
will use Terraform to build an on-demand test environment and use the Ansible playbooks to configure each
device. Finally, you will deploy the network inventory application on the test infrastructure to ensure it is
working as expected.

Topology

Job Aid

Device Information

Device                Description                     FQDN / IP Address   Credentials
Student Workstation   Linux Ubuntu VM                 192.168.10.10       student, 1234QWer
ESXi VM               VM in which test infra runs     192.168.10.17
GitLab                Version Control and Container   192.168.10.20       student, 1234QWer
                      Registry

Command List
The table describes the commands that are used in this activity. The commands are listed in alphabetical
order so that you can easily locate the information that you need. Refer to this list if you need configuration
command assistance during the lab activity.

Command Description

cd directory name To change directories within the Linux file system, use the cd
command. You will use this command to enter a directory where the
lab scripts are housed. You can use tab completion to finish the name
of the directory after you start typing it.

python script-file-name.py To initiate a Python script, you need to use the python Linux
command along with the name of the script. You can use tab
completion to finish the name of the script after you start typing it.

cat file The most common use of the cat Linux command is to read the
contents of files. It is the most convenient command for this purpose in
UNIX-like operating systems.

more file-name To view the content of a file (one full window at a time), use the more
Linux command. Press space to view the next part of the file.

pip <command> [options] To install missing Python packages on the Linux VM, use the pip
command.

Task 1: Add a Test IaC Specification to the ThreeTierApp
You will first implement a full test IaC specification for the ThreeTierApp repository.

Activity

Step 1 In the student workstation, open a terminal window and change the directory to ~/labs/lab14 using the cd
~/labs/lab14 command.

student@student-vm:$ cd ~/labs/lab14
student@student-vm:labs/lab14$

Step 2 Use the git clone git@gitlab:cisco-devops/net_inventory_iac.git command to clone the net_inventory_iac
repository. Confirm the repository has been cloned by using the ls -l command.

student@student-vm:labs/lab14$ git clone git@gitlab:cisco-devops/net_inventory_iac.git
Cloning into 'net_inventory_iac'...
remote: Enumerating objects: 610, done.
remote: Counting objects: 100% (610/610), done.
remote: Compressing objects: 100% (252/252), done.
remote: Total 610 (delta 379), reused 565 (delta 341)
Receiving objects: 100% (610/610), 14.22 MiB | 21.16 MiB/s, done.
Resolving deltas: 100% (379/379), done.
student@student-vm:labs/lab14$ ls -l
total 4
drwxrwxr-x 9 student student 4096 Dec 12 11:56 net_inventory_iac
student@student-vm:labs/lab14$

Step 3 Change directory to the net_inventory_iac/iac/terraform/ Terraform directory by issuing the cd
net_inventory_iac/iac/terraform/ command.

student@student-vm:labs/lab14$ cd net_inventory_iac/iac/terraform/
student@student-vm:iac/terraform (master)$

Step 4 List the directory content using the ls -l command to see the Terraform related files.

student@student-vm:iac/terraform (master)$ ls -l
total 104
-rw-rw-r-- 1 student student 1552 Dec 12 11:56 csr1kv1.tf
-rw-rw-r-- 1 student student 1552 Dec 12 11:56 csr1kv2.tf
-rw-rw-r-- 1 student student 2059 Dec 12 11:56 csr1kv3.tf
-rw-rw-r-- 1 student student 2016 Dec 12 11:56 data_sources.tf
-rw-rw-r-- 1 student student 984 Dec 12 11:56 k8s1.tf
-rw-rw-r-- 1 student student 984 Dec 12 11:56 k8s2.tf
-rw-rw-r-- 1 student student 984 Dec 12 11:56 k8s3.tf
-rwxrwxr-x 1 student student 223 Dec 12 11:56 power_off_asa.sh
-rwxrwxr-x 1 student student 224 Dec 12 11:56 power_on_asa.sh
-rw-rw-r-- 1 student student 159 Dec 12 11:56 terraform.tfstate
-rw-rw-r-- 1 student student 56546 Dec 12 11:56 terraform.tfstate.backup
-rw-rw-r-- 1 student student 158 Dec 12 11:56 variables.tf
-rw-rw-r-- 1 student student 628 Dec 12 11:56 vswitch.tf
student@student-vm:iac/terraform (master)$

Step 5 Confirm that Terraform can build the test topology infrastructure. Use the terraform plan command. The
command output is long and contains detailed information about every object that Terraform would build.

student@student-vm:iac/terraform (master)$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.vsphere_datacenter.dc: Refreshing state...
data.vsphere_resource_pool.pool: Refreshing state...
data.vsphere_host.esxi_host: Refreshing state...
data.vsphere_datastore.datastore: Refreshing state...
data.vsphere_network.network: Refreshing state...

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

<...>

  # data.vsphere_network.vm_network_3 will be read during apply
  # (config refers to values not yet known)
 <= data "vsphere_network" "vm_network_3" {
      + datacenter_id = "ha-datacenter"
      + id            = (known after apply)
      + name          = "vm_network_3"
      + type          = (known after apply)

<...>

student@student-vm:iac/terraform (master)$

Step 6 Change directory to the net_inventory_iac/iac/ansible/ Ansible directory by issuing the cd ../ansible
command. Then use the ls -l command to see all available Ansible files.

student@student-vm:iac/terraform (master)$ cd ../ansible
student@student-vm:iac/ansible (master)$ ls -l
total 52
-rw-rw-r-- 1 student student 676 Dec 12 11:56 ansible.cfg
drwxrwxr-x 2 student student 4096 Dec 12 11:56 files_for_remote_hosts
-rw-rw-r-- 1 student student 867 Dec 12 11:56 firewalls.yml
-rw-rw-r-- 1 student student 421 Dec 12 11:56 hosts
drwxrwxr-x 3 student student 4096 Dec 12 11:56 outputs
drwxrwxr-x 2 student student 4096 Dec 12 11:56 plugin_filters
-rw-rw-r-- 1 student student 977 Dec 12 11:56 routers.yml
-rw-rw-r-- 1 student student 2512 Dec 12 11:56 routers_check.yml
-rw-rw-r-- 1 student student 297 Dec 12 11:56 servers
-rw-rw-r-- 1 student student 2099 Dec 12 11:56 servers.yml
-rw-rw-r-- 1 student student 910 Dec 12 11:56 servers_check.yml
drwxrwxr-x 2 student student 4096 Dec 12 11:56 templates
drwxrwxr-x 6 student student 4096 Dec 12 11:56 vars
student@student-vm:iac/ansible (master)$

Step 7 Use the cat routers.yml command to view the routers Ansible playbook. In the playbook, before
configurations are pushed to the csr1kv devices, the wait_for module is used to confirm that the routers are
reachable and can be managed. In the last playbook task, RESTCONF is enabled.

student@student-vm:iac/ansible (master)$ cat routers.yml
---
- name: "CREATE AND DEPLOY ROUTER CONFIGURATIONS"
  hosts: "routers"
  gather_facts: no
  tasks:
    - name: "LOAD CONFIG VARIABLES FROM VARS FILE"
      include_vars:
        file: "vars/{{ inventory_hostname }}/{{ item }}.yml"
        name: "{{ item }}_config"
      loop: ["interfaces", "bgp"]
      delegate_to: "localhost"

    - name: "BUILD CONFIGURATION ARTIFACTS FOR DEVICES"
      template:
        src: "templates/routers.j2"
        dest: "outputs/configs/{{ inventory_hostname }}"
      delegate_to: "localhost"

    - name: "CHECK IF DEVICE IS REACHABLE"
      wait_for:
        host: "{{ ansible_host }}"
        port: 22
        timeout: 600
      delegate_to: "localhost"

    - name: "DEPLOY CONFIGURATION ARTIFACTS TO DEVICES"
      ios_config:
        src: "outputs/configs/{{ inventory_hostname }}"
        save_when: "changed"

    - name: "ENABLE RESTCONF ON DEVICES"
      ios_config:
        lines: "restconf"
        save_when: "changed"

student@student-vm:iac/ansible (master)$
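The template task above renders templates/routers.j2 against the variables loaded by include_vars; the template itself is not shown in this guide. As a rough illustration of the idea only — the interface data and the rendering function below are simplified stand-ins, not the lab's actual routers.j2 — turning interface variables into IOS-style CLI lines might look like this:

```python
# Hypothetical, simplified stand-in for the Jinja2 rendering done by the
# "template" task; the lab's actual templates/routers.j2 is not shown here.
import ipaddress

interfaces_config = {
    "GigabitEthernet2": {"ip_address": {"ip": "10.0.12.1", "mask": 24}},
}

def render_interfaces(config: dict) -> str:
    """Render interface variables into IOS-style CLI lines."""
    lines = []
    for name, params in config.items():
        ip = params["ip_address"]
        # Convert the prefix length to a dotted-decimal mask for IOS syntax.
        dotted = str(ipaddress.ip_network(f"0.0.0.0/{ip['mask']}").netmask)
        lines.append(f"interface {name}")
        lines.append(f" ip address {ip['ip']} {dotted}")
    return "\n".join(lines)

print(render_interfaces(interfaces_config))
```

The rendered text is what the playbook writes to outputs/configs/ and later pushes with ios_config.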

Step 8 Use the cat firewalls.yml command to view the firewall Ansible playbook. The playbook verifies that the
firewalls are reachable and can be managed; at the end of the playbook, the firewall configuration is
deployed.

student@student-vm:iac/ansible (master)$ cat firewalls.yml
---
- name: "CREATE AND DEPLOY FIREWALL CONFIGURATIONS"
  hosts: "firewalls"
  gather_facts: no
  tasks:
    - name: "LOAD CONFIG VARIABLES FROM VARS FILE"
      include_vars:
        file: "vars/{{ inventory_hostname }}/{{ item }}.yml"
        name: "{{ item }}_config"
      loop: ["interfaces", "bgp", "policies"]
      delegate_to: "localhost"

    - name: "BUILD CONFIGURATION ARTIFACTS FOR DEVICES"
      template:
        src: "templates/firewalls.j2"
        dest: "outputs/configs/{{ inventory_hostname }}"
      delegate_to: "localhost"

    - name: "CHECK IF DEVICE IS REACHABLE"
      wait_for:
        host: "{{ ansible_host }}"
        port: 22
        timeout: 600
      delegate_to: "localhost"

    - name: "DEPLOY CONFIGURATION ARTIFACTS TO DEVICES"
      asa_config:
        src: "outputs/configs/{{ inventory_hostname }}"
        save: True

student@student-vm:iac/ansible (master)$

Step 9 Use the cat servers.yml command to view the server Ansible playbook. This playbook installs all packages
that are required to set up the web application on the test servers.

student@student-vm:iac/ansible (master)$ cat servers.yml
---
- name: "INSTALL SERVER PACKAGES"
  hosts: "k8s"
  gather_facts: no
  become: yes
  tasks:
    - name: "RUN THE EQUIVALENT OF 'apt-get update' AS A SEPARATE STEP"
      apt:
        update_cache: yes

    - name: "INSTALL LINUX PACKAGES"
      apt:
        name:
          - "apt-transport-https"
          - "ca-certificates"
          - "curl"
          - "gnupg2"
          - "python3-pip"
          - "software-properties-common"
          - "docker-compose"

    - name: "ADD DOCKER KEY TO APT"
      apt_key:
        url: "https://download.docker.com/linux/ubuntu/gpg"
        state: "present"

    - name: "ADD DOCKER REPO TO APT"
      apt_repository:
        repo: "deb [arch=amd64] https://download.docker.com/linux/ubuntu
          {{ lookup('pipe', 'echo $(lsb_release -cs)') }} stable"

    - name: "INSTALL DOCKER PACKAGE"
      apt:
        name: docker-ce
        update_cache: yes
        state: latest

    - name: "CREATE SSH DIR ON REMOTE HOST"
      file:
        path: "/home/student/.ssh"
        state: "directory"
        owner: "student"
        group: "student"
        mode: "0700"

    - name: "COPY SSH KEYS, CERTS AND /etc/hosts FOR REMOTE HOSTS"
      copy:
        src: "{{ item['src'] }}"
        dest: "{{ item['dest'] }}"
        owner: "student"
        group: "student"
        mode: "0600"
      loop:
        - src: "./files_for_remote_hosts/authorized_keys_for_remote_servers"
          dest: "/home/student/.ssh/authorized_keys"
        - src: "./files_for_remote_hosts/hosts"
          dest: "/etc/hosts"
        - src: "./files_for_remote_hosts/git.lab.crt"
          dest: "/usr/local/share/ca-certificates/git.lab.crt"
        - src: "./files_for_remote_hosts/root-git.lab.crt"
          dest: "/usr/local/share/ca-certificates/root-git.lab.crt"
        - src: "./files_for_remote_hosts/id_rsa"
          dest: "/home/student/.ssh/id_rsa"

    - name: "UPDATE CERTIFICATES"
      command:
        cmd: "update-ca-certificates"

    - name: "ADD STUDENT USER TO DOCKER GROUP"
      user:
        name: "student"
        groups: "docker"
        append: "yes"
student@student-vm:iac/ansible (master)$
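The apt_repository task in the playbook above builds its repo string at run time: the pipe lookup shells out to lsb_release -cs on the managed host to get the Ubuntu codename, and the result is substituted into the Jinja2 expression. A minimal Python sketch of that substitution, using "focal" as a hypothetical codename instead of actually calling lsb_release:

```python
# Sketch of how the "repo" value is assembled; "focal" is a stand-in
# for the output of `lsb_release -cs` on the managed host.
codename = "focal"
repo = f"deb [arch=amd64] https://download.docker.com/linux/ubuntu {codename} stable"
print(repo)
```

On a different Ubuntu release, only the codename in the middle of the string changes, which is why the playbook resolves it dynamically rather than hard-coding it.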

Task 2: Implement a Test Pipeline


You will use Terraform and Ansible to bring up the on-demand test environment as previously designed.
You will integrate it into the CI pipeline such that upon each GitLab merge request, a test environment is
created and deployed with Terraform and Ansible.

Activity

Step 1 In a new terminal window, change to the net_inventory_iac directory by issuing the cd
~/labs/lab14/net_inventory_iac command.

student@student-vm:$ cd ~/labs/lab14/net_inventory_iac
student@student-vm:lab14/net_inventory_iac (master)$

Step 2 View the .gitlab-ci.yml file using the cat .gitlab-ci.yml command. This GitLab pipeline consists of
three stages: test_topology_build, test_build, and test_deploy. The test_build and test_deploy stages are
responsible for setting up the web APP. The test_topology_build stage currently just prints Testing using the
echo Testing command. Before the APP can be built, the test_topology_build stage needs to be updated
with the Terraform and Ansible commands.

student@student-vm:lab14/net_inventory_iac (master)$ cat .gitlab-ci.yml
stages:
  - "test_topology_build"
  - "test_build"
  - "test_deploy"

variables:
  CI_REGISTRY_IMAGE_DB: "net_inventory_db"
  CI_REGISTRY_IMAGE_BACKEND: "net_inventory_backend"
  CI_REGISTRY_IMAGE_FRONTEND: "net_inventory_frontend"

before_script:
  - "docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD https://registry.git.lab"
  - "echo $CI_COMMIT_REF_SLUG"

test_topology_build:
  stage: "test_topology_build"
  script:
    - "echo Testing"
  artifacts:
    paths:
      - "iac/terraform/terraform.tfstate"
    untracked: true
    when: "always"

test_build:
  stage: "test_build"
  script:
    - "echo BUILD DB"
    - "docker build -t $CI_REGISTRY_IMAGE_DB -f Dockerfile_db ."
    - "docker tag $CI_REGISTRY_IMAGE_DB registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_DB:$CI_COMMIT_REF_SLUG"
    - "echo BUILD BACKEND"
    - "docker build -t $CI_REGISTRY_IMAGE_BACKEND -f Dockerfile_backend ."
    - "docker tag $CI_REGISTRY_IMAGE_BACKEND registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_BACKEND:$CI_COMMIT_REF_SLUG"
    - "echo BUILD FRONTEND"
    - "docker build -t $CI_REGISTRY_IMAGE_FRONTEND -f Dockerfile_frontend ."
    - "docker tag $CI_REGISTRY_IMAGE_FRONTEND registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_FRONTEND:$CI_COMMIT_REF_SLUG"
    - "docker push registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_DB:$CI_COMMIT_REF_SLUG"
    - "docker push registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_BACKEND:$CI_COMMIT_REF_SLUG"
    - "docker push registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_FRONTEND:$CI_COMMIT_REF_SLUG"

test_deploy:
  stage: "test_deploy"
  script:
    - >-
      ssh -tt student@test_k8s1
      "export SECRET_KEY=$SECRET_KEY &&
      export SQLALCHEMY_DATABASE_URI=$SQLALCHEMY_DATABASE_URI &&
      export POSTGRES_DB=$POSTGRES_DB && export POSTGRES_USER=$POSTGRES_USER &&
      export POSTGRES_PASSWORD=$POSTGRES_PASSWORD &&
      rm -rf ./net_inventory || true && git clone
      https://git.lab/cisco-devops/net_inventory.git/ && cd ./net_inventory &&
      docker-compose stop || true && docker-compose rm -f || true && docker-compose up -d"
    - "echo PING TEST"
    - ssh -tt student@test_k8s3 "ping -w 10 test_k8s1"
    - "echo CURL TEST"
    - ssh -tt student@test_k8s3 'curl -m 2 -f -s -o /dev/null -w "%{http_code}" http://test_k8s1:5000/views/inventory/devices'
    - "echo CURL TEST"
    - ssh -tt student@test_k8s3 'curl -m 2 -f -s -o /dev/null -w "%{http_code}" http://test_k8s1:5001/api/v1/inventory/devices'
student@student-vm:lab14/net_inventory_iac (master)$
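The two curl commands in the test_deploy stage are minimal smoke tests: -m 2 caps each request at two seconds, -f makes curl exit nonzero on an HTTP error status (which fails the CI job), -s and -o /dev/null suppress output, and -w "%{http_code}" prints only the status code. The following is a rough Python equivalent of that status-code check; it runs against a throwaway local HTTP server rather than test_k8s1, so it is only a sketch of the pattern, not the lab's actual test:

```python
import http.server
import threading
import urllib.request

# Throwaway local server standing in for the test_k8s1 frontend.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url, timeout=2) as resp:  # like curl -m 2
    status = resp.status                              # like -w "%{http_code}"

server.shutdown()
assert status == 200, f"smoke test failed: HTTP {status}"  # like curl -f
print(status)
```

As in the pipeline, the check passes only when the service answers within the timeout and returns a success status.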

Step 3 Open the .gitlab-ci.yml file in a text editor of your choice and replace the echo Testing command with
the following commands in the test_topology_build stage script:

cd iac/terraform/
terraform apply -auto-approve
cd ../ansible/
ansible-playbook -i hosts firewalls.yml
ansible-playbook -i hosts routers.yml
ansible-playbook -i servers servers.yml

student@student-vm:lab14/net_inventory_iac (master)$ cat .gitlab-ci.yml
stages:
  - "test_topology_build"
  - "test_build"
  - "test_deploy"

variables:
  CI_REGISTRY_IMAGE_DB: "net_inventory_db"
  CI_REGISTRY_IMAGE_BACKEND: "net_inventory_backend"
  CI_REGISTRY_IMAGE_FRONTEND: "net_inventory_frontend"

before_script:
  - "docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD https://registry.git.lab"
  - "echo $CI_COMMIT_REF_SLUG"

test_topology_build:
  stage: "test_topology_build"
  script:
    - "cd iac/terraform/"
    - "terraform apply -auto-approve"
    - "cd ../ansible/"
    - "ansible-playbook -i hosts firewalls.yml"
    - "ansible-playbook -i hosts routers.yml"
    - "ansible-playbook -i servers servers.yml"
  artifacts:
    paths:
      - "iac/terraform/terraform.tfstate"
    untracked: true
    when: "always"

test_build:
  stage: "test_build"
  script:
    - "echo BUILD DB"
    - "docker build -t $CI_REGISTRY_IMAGE_DB -f Dockerfile_db ."
    - "docker tag $CI_REGISTRY_IMAGE_DB registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_DB:$CI_COMMIT_REF_SLUG"
    - "echo BUILD BACKEND"
    - "docker build -t $CI_REGISTRY_IMAGE_BACKEND -f Dockerfile_backend ."
    - "docker tag $CI_REGISTRY_IMAGE_BACKEND registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_BACKEND:$CI_COMMIT_REF_SLUG"
    - "echo BUILD FRONTEND"
    - "docker build -t $CI_REGISTRY_IMAGE_FRONTEND -f Dockerfile_frontend ."
    - "docker tag $CI_REGISTRY_IMAGE_FRONTEND registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_FRONTEND:$CI_COMMIT_REF_SLUG"
    - "docker push registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_DB:$CI_COMMIT_REF_SLUG"
    - "docker push registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_BACKEND:$CI_COMMIT_REF_SLUG"
    - "docker push registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_FRONTEND:$CI_COMMIT_REF_SLUG"

test_deploy:
  stage: "test_deploy"
  script:
    - >-
      ssh -tt student@test_k8s1
      "export SECRET_KEY=$SECRET_KEY &&
      export SQLALCHEMY_DATABASE_URI=$SQLALCHEMY_DATABASE_URI &&
      export POSTGRES_DB=$POSTGRES_DB && export POSTGRES_USER=$POSTGRES_USER &&
      export POSTGRES_PASSWORD=$POSTGRES_PASSWORD &&
      rm -rf ./net_inventory || true && git clone
      https://git.lab/cisco-devops/net_inventory.git/ && cd ./net_inventory &&
      docker-compose stop || true && docker-compose rm -f || true && docker-compose up -d"
    - "echo PING TEST"
    - ssh -tt student@test_k8s3 "ping -w 10 test_k8s1"
    - "echo CURL TEST"
    - ssh -tt student@test_k8s3 'curl -m 2 -f -s -o /dev/null -w "%{http_code}" http://test_k8s1:5000/views/inventory/devices'
    - "echo CURL TEST"
    - ssh -tt student@test_k8s3 'curl -m 2 -f -s -o /dev/null -w "%{http_code}" http://test_k8s1:5001/api/v1/inventory/devices'
student@student-vm:lab14/net_inventory_iac (master)$

Step 4 Since you modified the .gitlab-ci.yml file, it has to be committed and pushed back to the remote repository.
As soon as changes are pushed, the GitLab pipeline will be executed. Run the following commands to add,
commit, and push changes to the remote repository:

git status
git add .gitlab-ci.yml
git commit -m "Lab14 changes"
git push origin master

student@student-vm:lab14/net_inventory_iac (master)$ git status
On branch master
Your branch is up to date with 'origin/master'.

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

	modified:   .gitlab-ci.yml

no changes added to commit (use "git add" and/or "git commit -a")
student@student-vm:lab14/net_inventory_iac (master)$ git add .gitlab-ci.yml
student@student-vm:lab14/net_inventory_iac (master)$ git commit -m "Lab14 changes"
[master 9bf1dd3] Lab14 changes
1 file changed, 6 insertions(+), 1 deletion(-)
student@student-vm:lab14/net_inventory_iac (master)$ git push origin master
Counting objects: 3, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 424 bytes | 424.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0)
To git.lab:cisco-devops/net_inventory_iac.git
f5b8a4f..9bf1dd3 master -> master
student@student-vm:lab14/net_inventory_iac (master)$

Step 5 From the Chrome browser, navigate to https://git.lab.

Step 6 Accept the privacy notification and log in with the credentials that are provided in the Job Aids and click
Sign in.

Step 7 From the list of projects, choose the cisco-devops/net_inventory_iac project.

Step 8 From the left navigation bar, choose CI/CD > Jobs. From the list of jobs, find your latest
test_topology_build job and click its status. It will probably still show as running, since you started it a few
seconds ago; it may already be finished if you log in to the GitLab web page after a few minutes. Examine the
job details. You should see the pipeline execution. By scrolling up and down, you can step through the entire
test_topology_build stage. Wait until you can confirm that the Terraform and Ansible tasks completed
successfully. Note that it can take a few minutes to build the entire infrastructure for the test topology.

Step 9 From the left navigation bar, choose CI/CD > Pipelines. Confirm that the latest pipeline completed
successfully. All three stages, test_topology_build, test_build, and test_deploy should be green.

Step 10 In a web browser, open a new tab and use the web app URL http://test_k8s1:5000 to confirm that the APP is
running as expected. By completing this step, you have validated that the infrastructure that you deployed is also
in a good state.

Step 11 The web APP is running as expected, but the inventory is empty. To populate the inventory with some
example devices, use the predeployed populate_inventory script. In your terminal window, change to the
home directory and use the populate_inventory test_k8s1:5001 command.

student@student-vm:lab14/net_inventory_iac (master)$ cd ~
student@student-vm:$ populate_inventory test_k8s1:5001
nyc-rt01: Added successfully
nyc-rt02: Added successfully
rtp-rt01: Added successfully
rtp-rt02: Added successfully
student@student-vm:$
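The populate_inventory script is predeployed in the lab and its source is not shown in this guide. Conceptually, a script like it only needs to POST device records to the backend API endpoint that the pipeline already smoke-tested (/api/v1/inventory/devices). The sketch below is hypothetical: the build_device_request helper and the payload field names (hostname, device_type) are illustrative assumptions, not taken from the lab script; the request is built but deliberately not sent.

```python
import json
import urllib.request

def build_device_request(api_host: str, device: dict) -> urllib.request.Request:
    """Build (but do not send) a POST request adding one device to the inventory."""
    return urllib.request.Request(
        url=f"http://{api_host}/api/v1/inventory/devices",
        data=json.dumps(device).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# One of the devices the lab script reports as "Added successfully".
req = build_device_request("test_k8s1:5001",
                           {"hostname": "nyc-rt01", "device_type": "router"})
print(req.get_method(), req.full_url)
# → POST http://test_k8s1:5001/api/v1/inventory/devices
```

Sending one such request per device and printing the per-device result would produce output like the transcript above.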

Step 12 Refresh the Network Inventory web page. You will see new devices added to the inventory.

Discovery 15: Implement Predeployment Health Checks
Introduction
You built the pipeline such that the Network Inventory application is deployed on-demand, tested, and then
deployed into production. This activity introduces and integrates predeployment infrastructure sanity and
health checks into the pipeline. You are adding another check to be sure that the application will work as
expected when deployed into production.

Topology

Job Aid
• Device Information

Device                Description                              FQDN / IP Address   Credentials

Student Workstation   Linux Ubuntu VM                          192.168.10.10       student, 1234QWer
ESXi VM               VM in which test infra runs              192.168.10.17
GitLab                Version Control and Container Registry   192.168.10.20       student, 1234QWer
k8s1                  Kubernetes                               192.168.10.21       student, 1234QWer
k8s2                  Kubernetes                               192.168.10.22       student, 1234QWer
k8s3                  Kubernetes                               192.168.10.23       student, 1234QWer
asa1                  Firewall                                 192.168.10.51       student, 1234QWer
csr1kv1               Cisco Router                             192.168.10.101      student, 1234QWer
csr1kv2               Cisco Router                             192.168.10.102      student, 1234QWer
csr1kv3               Cisco Router                             192.168.10.103      student, 1234QWer

Command List
The table describes the commands that are used in this activity. The commands are listed in alphabetical
order so that you can easily locate the information that you need. Refer to this list if you need configuration
command assistance during the lab activity.

Command                      Description

cd directory-name            To change directories within the Linux file system, use the cd
                             command. You will use this command to enter a directory where the
                             lab scripts are housed. You can use tab completion to finish the name
                             of the directory after you start typing it.

python script-file-name.py   To initiate a Python script, use the python Linux command along
                             with the name of the script. You can use tab completion to finish the
                             name of the script after you start typing it.

cat file-name                The most common use of the cat Linux command is to read the
                             contents of files. It is the most convenient command for this purpose
                             in UNIX-like operating systems.

more file-name               To view the content of a file (one full window at a time), use the
                             more Linux command. Press the spacebar to view the next part of the
                             file.

pip <command> [options]      Use the pip command to install missing Python packages on the
                             Linux VM.

Task 1: Build Infrastructure and Server Validation


You will review Ansible playbooks that validate the desired state of the created test environment.

Activity

Step 1 In the student workstation, open a terminal window and change to the ~/labs/lab15 directory using the cd
~/labs/lab15 command.

student@student-vm:$ cd ~/labs/lab15
student@student-vm:labs/lab15$

Step 2 Clone the https://git.lab/cisco-devops/net_inventory_iac.git repository using the git clone
git@gitlab:cisco-devops/net_inventory_iac.git command. Confirm that the repository has been cloned by
using the ls -l command.

student@student-vm:labs/lab15$ git clone git@gitlab:cisco-devops/net_inventory_iac.git
Cloning into 'net_inventory_iac'...
remote: Enumerating objects: 610, done.
remote: Counting objects: 100% (610/610), done.
remote: Compressing objects: 100% (252/252), done.
remote: Total 610 (delta 379), reused 565 (delta 341)
Receiving objects: 100% (610/610), 14.22 MiB | 21.16 MiB/s, done.
Resolving deltas: 100% (379/379), done.
student@student-vm:labs/lab15$ ls -l
total 4
drwxrwxr-x 9 student student 4096 Dec 12 11:56 net_inventory_iac
student@student-vm:labs/lab15$

Step 3 Change to the net_inventory_iac/iac/ansible directory by issuing the cd net_inventory_iac/iac/ansible/
command. Then use the ls -l command to see all available Ansible files.

student@student-vm:labs/lab15$ cd net_inventory_iac/iac/ansible/
student@student-vm:iac/ansible (master)$ ls -l
total 52
-rw-rw-r-- 1 student student 676 Dec 12 11:56 ansible.cfg
drwxrwxr-x 2 student student 4096 Dec 12 11:56 files_for_remote_hosts
-rw-rw-r-- 1 student student 867 Dec 12 11:56 firewalls.yml
-rw-rw-r-- 1 student student 421 Dec 12 11:56 hosts
drwxrwxr-x 3 student student 4096 Dec 12 11:56 outputs
drwxrwxr-x 2 student student 4096 Dec 12 11:56 plugin_filters
-rw-rw-r-- 1 student student 977 Dec 12 11:56 routers.yml
-rw-rw-r-- 1 student student 2512 Dec 12 11:56 routers_check.yml
-rw-rw-r-- 1 student student 297 Dec 12 11:56 servers
-rw-rw-r-- 1 student student 2099 Dec 12 11:56 servers.yml
-rw-rw-r-- 1 student student 910 Dec 12 11:56 servers_check.yml
drwxrwxr-x 2 student student 4096 Dec 12 11:56 templates
drwxrwxr-x 6 student student 4096 Dec 12 11:56 vars
student@student-vm:iac/ansible (master)$

Step 4 Use the cat routers_check.yml command to view the routers_check Ansible validation playbook. Before
any checks are executed, the devices need to be reachable, so the first task waits for a successful connection
to each router. The next task loads the variables for each device so that the desired state can be compared
with the actual state of the device. Validation confirms that the required interfaces are enabled and that their
operational status is up. Then the IP addresses of the required interfaces are checked, along with IP
reachability between the three csr1kv devices. The last two tasks verify that BGP sessions are established
between neighbors.

student@student-vm:iac/ansible (master)$ cat routers_check.yml
---
- name: "VALIDATE ROUTERS CONFIGURATIONS"
  hosts: "routers"
  gather_facts: no
  tasks:
    - name: "CHECK IF DEVICE IS REACHABLE"
      wait_for:
        host: "{{ ansible_host }}"
        port: 22
        timeout: 600
      delegate_to: "localhost"

    - name: "LOAD CONFIG VARIABLES FROM VARS FILE"
      include_vars:
        file: "vars/{{ inventory_hostname }}/{{ item }}.yml"
        name: "{{ item }}_config"
      loop: ["interfaces", "bgp"]

    - name: "COLLECT FACTS INFORMATION"
      ios_facts:
        gather_subset: "all"

    - name: "CONFIRM INTERFACES ARE ENABLED"
      assert:
        that:
          - "interface['value']['lineprotocol'] == 'up '"
          - "interface['value']['operstatus'] == 'up'"
        success_msg: "Interface {{ interface['key'] }} is UP"
        fail_msg: "Interface {{ interface['key'] }} is DOWN"
      when: "interface['key'] in interfaces_config.keys() | list"
      loop: "{{ ansible_facts['net_interfaces'] | dict2items }}"
      loop_control:
        loop_var: "interface"

    - name: "CHECK IP ADDRESSES"
      assert:
        that:
          - "interface['value']['ipv4'][0]['address'] ==
            interfaces_config[interface['key']]['ip_address']['ip']"
          - "interface['value']['ipv4'][0]['subnet'] ==
            interfaces_config[interface['key']]['ip_address']['mask'] | string"
        success_msg: "IP address for interface {{ interface['key'] }} is compliant"
        fail_msg: "IP address for interface {{ interface['key'] }} is NON compliant"
      when: "interface['key'] in interfaces_config.keys() | list"
      loop: "{{ ansible_facts['net_interfaces'] | dict2items }}"
      loop_control:
        loop_var: "interface"

    - name: "CONFIRM REACHABILITY TO NEIGHBORS"
      ios_ping:
        dest: "{{ neighbor['ip'] }}"
      loop: "{{ bgp_config['neighbors'] }}"
      loop_control:
        loop_var: "neighbor"

    - name: "CHECK BGP NEIGHBORS STATUS"
      uri:
        url: "https://{{ ansible_host }}/restconf/data/Cisco-IOS-XE-bgp-oper:bgp-state-data"
        url_username: "{{ ansible_user }}"
        url_password: "{{ ansible_ssh_pass }}"
        method: "GET"
        headers:
          Content-Type: "application/yang-data+json"
          Accept: "application/yang-data+json"
        return_content: "yes"
        validate_certs: "no"
      delegate_to: "localhost"
      register: "bgp_neighbors_data"

    - name: "DEBUG BGP NEIGHBORS STATUS"
      debug:
        msg: "{{ bgp_neighbors_data.content | bgp_state_validation }}"

student@student-vm:iac/ansible (master)$
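The last task pipes the raw RESTCONF response through bgp_state_validation, a custom filter that ships with the lab in the plugin_filters directory; its source is not shown in this guide. As a rough sketch of what such a filter might do — parse the Cisco-IOS-XE-bgp-oper payload and report each neighbor's session state — the following is a hypothetical implementation; the payload shape in the sample is a simplified assumption, not captured lab output:

```python
import json

def bgp_state_validation(content: str) -> dict:
    """Hypothetical filter: map each BGP neighbor to its FSM state."""
    data = json.loads(content)
    neighbors = data["Cisco-IOS-XE-bgp-oper:bgp-state-data"]["neighbors"]["neighbor"]
    return {n["neighbor-id"]: n["session-state"] for n in neighbors}

# Simplified sample payload (illustrative only, not captured from the lab).
sample = json.dumps({
    "Cisco-IOS-XE-bgp-oper:bgp-state-data": {
        "neighbors": {
            "neighbor": [
                {"neighbor-id": "10.0.12.2", "session-state": "fsm-established"},
            ]
        }
    }
})
print(bgp_state_validation(sample))  # → {'10.0.12.2': 'fsm-established'}
```

An Ansible filter plugin is just such a Python function registered in a FilterModule class, which is why the playbook can call it with the `|` pipe syntax.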

Now you will review the Ansible playbook, which implements health checks for the state of the servers and
full reachability between test_k8s1, test_k8s2, and test_k8s3 servers.

Step 5 Use the cat servers_check.yml command to view the servers_check Ansible validation playbook. The
gather_facts parameter is set to yes so that the play gathers host facts. These facts are needed to check
reachability between the test servers.

student@student-vm:iac/ansible (master)$ cat servers_check.yml
---
- name: "VALIDATE SERVERS STATE"
  hosts: "k8s"
  gather_facts: "yes"
  tasks:
    - name: "DEBUG ANSIBLE FACTS"
      debug:
        msg: "{{ ansible_facts['eth0']['ipv4']['address'] }}"

    - name: "CHECK TCP PORT 22"
      wait_for:
        host: "{{ ansible_host }}"
        port: 22
        delay: 3
        timeout: 100

    - name: "SET FACT: GET LINUX IP-S"
      set_fact:
        linux_ip_address: "{{ ansible_facts['eth0']['ipv4']['address'] }}"

    - name: "SET FACT: LINUX NEIGHBORS"
      set_fact:
        linux_neighbors: "{{ linux_neighbors | default([]) +
          [ hostvars[item]['linux_ip_address'] ] }}"
      when: "item != inventory_hostname"
      loop: "{{ ansible_play_batch }}"

    - debug: var=linux_neighbors

    - name: "PING TEST"
      command: "ping -c 1 {{ item }}"
      delegate_to: "{{ inventory_hostname }}"
      loop: "{{ linux_neighbors }}"
      changed_when: False
student@student-vm:iac/ansible (master)$
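The two set_fact tasks build, for every host, a list of the other hosts' IP addresses: ansible_play_batch holds all hosts in the play, and the when clause skips the current host so that each server only pings its neighbors. The same accumulation can be sketched in plain Python; the hostnames and addresses below are illustrative stand-ins, but the logic mirrors the playbook's Jinja2 expression:

```python
# hostvars stand-in: each play host and its gathered eth0 address
# (addresses are illustrative, not captured from the lab).
hostvars = {
    "test_k8s1": {"linux_ip_address": "192.168.10.21"},
    "test_k8s2": {"linux_ip_address": "192.168.10.22"},
    "test_k8s3": {"linux_ip_address": "192.168.10.23"},
}
ansible_play_batch = list(hostvars)

def linux_neighbors(inventory_hostname: str) -> list:
    """Mirror of: linux_neighbors | default([]) + [hostvars[item]['linux_ip_address']]."""
    neighbors = []
    for item in ansible_play_batch:
        if item != inventory_hostname:  # the task's "when" clause
            neighbors.append(hostvars[item]["linux_ip_address"])
    return neighbors

print(linux_neighbors("test_k8s1"))  # → ['192.168.10.22', '192.168.10.23']
```

The final PING TEST task then loops over exactly this list, giving a full-mesh reachability check across the test servers.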

Task 2: Integrate Validation into Pipeline


Now you will extend the testing pipeline with the created validation playbooks.

Activity

Step 1 Change to the net_inventory_iac directory by issuing the cd ~/labs/lab15/net_inventory_iac/ command.

student@student-vm:iac/ansible (master)$ cd ~/labs/lab15/net_inventory_iac/
student@student-vm:lab15/net_inventory_iac (master)$

Step 2 Use the cat .gitlab-ci.yml command to view the GitLab pipeline file. The GitLab pipeline consists of several
stages: test_topology_build, test_build, test_deploy, test_cleanup, build, and deploy. The first four stages are
related to the infrastructure in the test topology. The last two stages in the pipeline, the build and deploy stages,
run against the production infrastructure. As soon as all tests pass against the test topology, the web APP is built
and deployed in production. The test_cleanup stage destroys the test topology infrastructure.

student@student-vm:lab15/net_inventory_iac (master)$ cat .gitlab-ci.yml
stages:
  - "test_topology_build"
  - "test_build"
  - "test_deploy"
  - "test_cleanup"
  - "build"
  - "deploy"

variables:
  CI_REGISTRY_IMAGE_DB: "net_inventory_db"
  CI_REGISTRY_IMAGE_BACKEND: "net_inventory_backend"
  CI_REGISTRY_IMAGE_FRONTEND: "net_inventory_frontend"

before_script:
  - "docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD https://registry.git.lab"
  - "echo $CI_COMMIT_REF_SLUG"

test_topology_build:
  stage: "test_topology_build"
  script:
    - "cd iac/terraform/"
    - "terraform apply -auto-approve"
    - "cd ../ansible/"
    - "ansible-playbook -i hosts firewalls.yml"
    - "ansible-playbook -i hosts routers.yml"
    - "ansible-playbook -i servers servers.yml"
  artifacts:
    paths:
      - "iac/terraform/terraform.tfstate"
    untracked: true
    when: "always"

test_build:
  stage: "test_build"
  script:
    - "echo BUILD DB"
    - "docker build -t $CI_REGISTRY_IMAGE_DB -f Dockerfile_db ."
    - "docker tag $CI_REGISTRY_IMAGE_DB registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_DB:$CI_COMMIT_REF_SLUG"
    - "echo BUILD BACKEND"
    - "docker build -t $CI_REGISTRY_IMAGE_BACKEND -f Dockerfile_backend ."
    - "docker tag $CI_REGISTRY_IMAGE_BACKEND registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_BACKEND:$CI_COMMIT_REF_SLUG"
    - "echo BUILD FRONTEND"
    - "docker build -t $CI_REGISTRY_IMAGE_FRONTEND -f Dockerfile_frontend ."
    - "docker tag $CI_REGISTRY_IMAGE_FRONTEND registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_FRONTEND:$CI_COMMIT_REF_SLUG"
    - "docker push registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_DB:$CI_COMMIT_REF_SLUG"
    - "docker push registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_BACKEND:$CI_COMMIT_REF_SLUG"
    - "docker push registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_FRONTEND:$CI_COMMIT_REF_SLUG"

test_deploy:
  stage: "test_deploy"
  script:
    - >-
      ssh -tt student@test_k8s1
      "export SECRET_KEY=$SECRET_KEY &&
      export SQLALCHEMY_DATABASE_URI=$SQLALCHEMY_DATABASE_URI &&
      export POSTGRES_DB=$POSTGRES_DB && export POSTGRES_USER=$POSTGRES_USER &&
      export POSTGRES_PASSWORD=$POSTGRES_PASSWORD &&
      rm -rf ./net_inventory || true && git clone
      https://git.lab/cisco-devops/net_inventory.git/ && cd ./net_inventory &&
      docker-compose stop || true && docker-compose rm -f || true && docker-compose up -d"
    - "echo PING TEST"
    - ssh -tt student@test_k8s3 "ping -w 10 test_k8s1"
    - "echo CURL TEST"
    - ssh -tt student@test_k8s3 'curl -m 2 -f -s -o /dev/null -w "%{http_code}" http://test_k8s1:5000/views/inventory/devices'
    - "echo CURL TEST"
    - ssh -tt student@test_k8s3 'curl -m 2 -f -s -o /dev/null -w "%{http_code}" http://test_k8s1:5001/api/v1/inventory/devices'

test_cleanup:
  stage: "test_cleanup"
  script:
    - "cd iac/terraform/"
    - "terraform destroy -auto-approve"
  when: "always"

build:
  stage: "build"
  script:
    - "echo BUILD DB"
    - "docker build -t $CI_REGISTRY_IMAGE_DB -f Dockerfile_db ."
    - "docker tag $CI_REGISTRY_IMAGE_DB registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_DB:$CI_COMMIT_REF_SLUG"
    - "echo BUILD BACKEND"
    - "docker build -t $CI_REGISTRY_IMAGE_BACKEND -f Dockerfile_backend ."
    - "docker tag $CI_REGISTRY_IMAGE_BACKEND registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_BACKEND:$CI_COMMIT_REF_SLUG"
    - "echo BUILD FRONTEND"
    - "docker build -t $CI_REGISTRY_IMAGE_FRONTEND -f Dockerfile_frontend ."
    - "docker tag $CI_REGISTRY_IMAGE_FRONTEND registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_FRONTEND:$CI_COMMIT_REF_SLUG"
    - "docker push registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_DB:$CI_COMMIT_REF_SLUG"
    - "docker push registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_BACKEND:$CI_COMMIT_REF_SLUG"
    - "docker push registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_FRONTEND:$CI_COMMIT_REF_SLUG"
  only:
    - "master"

deploy:
  stage: "deploy"
  script:
    - >-
      ssh -tt student@k8s1
      "export SECRET_KEY=$SECRET_KEY &&
      export SQLALCHEMY_DATABASE_URI=$SQLALCHEMY_DATABASE_URI &&
      export POSTGRES_DB=$POSTGRES_DB && export POSTGRES_USER=$POSTGRES_USER &&
      export POSTGRES_PASSWORD=$POSTGRES_PASSWORD &&
      rm -rf ./net_inventory || true && git clone
      https://git.lab/cisco-devops/net_inventory.git/ && cd ./net_inventory &&
      docker-compose stop || true && docker-compose rm -f || true && docker-compose up -d"
    - "echo PING TEST"
    - ssh -tt student@k8s3 "ping -w 10 k8s1"
    - "echo CURL TEST"
    - ssh -tt student@k8s3 'curl -m 2 -f -s -o /dev/null -w "%{http_code}" http://k8s1:5000/views/inventory/devices'
    - "echo CURL TEST"
    - ssh -tt student@k8s3 'curl -m 2 -f -s -o /dev/null -w "%{http_code}" http://k8s1:5001/api/v1/inventory/devices'
  only:
    - "master"

student@student-vm:lab15/net_inventory_iac (master)$

Step 3 Use the nano .gitlab-ci.yml command to open the .gitlab-ci.yml file in a text editor. Add the following
commands to the test_topology_build stage script:

ansible-playbook -i hosts routers_check.yml
ansible-playbook -i servers servers_check.yml
student@student-vm:lab15/net_inventory_iac (master)$ nano .gitlab-ci.yml
stages:
- "test_topology_build"
- "test_build"
- "test_deploy"
- "test_cleanup"
- "build"
- "deploy"

variables:
CI_REGISTRY_IMAGE_DB: "net_inventory_db"
CI_REGISTRY_IMAGE_BACKEND: "net_inventory_backend"
CI_REGISTRY_IMAGE_FRONTEND: "net_inventory_frontend"

before_script:
- "docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD
https://registry.git.lab"
- "echo $CI_COMMIT_REF_SLUG"

test_topology_build:
stage: "test_topology_build"
script:
- "cd iac/terraform/"
- "terraform apply -auto-approve"
- "cat terraform.tfstate"
- "cd ../ansible/"
- "ansible-playbook -i hosts firewalls.yml"
- "ansible-playbook -i hosts routers.yml"
- "ansible-playbook -i hosts routers_check.yml"
- "ansible-playbook -i servers servers.yml"
- "ansible-playbook -i servers servers_check.yml"
artifacts:
paths:
- "iac/terraform/terraform.tfstate"
untracked: true
when: "always"

test_build:
stage: "test_build"
script:
- "echo BUILD DB"
- "docker build -t $CI_REGISTRY_IMAGE_DB -f Dockerfile_db ."
- "docker tag $CI_REGISTRY_IMAGE_DB
registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_DB:
$CI_COMMIT_REF_SLUG"
- "echo BUILD BACKEND"
- "docker build -t $CI_REGISTRY_IMAGE_BACKEND -f Dockerfile_backend ."
- "docker tag $CI_REGISTRY_IMAGE_BACKEND
registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_BACKEND:
$CI_COMMIT_REF_SLUG"
- "echo BUILD FRONTEND"
- "docker build -t $CI_REGISTRY_IMAGE_FRONTEND -f Dockerfile_frontend ."
- "docker tag $CI_REGISTRY_IMAGE_FRONTEND
registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_FRONTEND:
$CI_COMMIT_REF_SLUG"
- "docker push
registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_DB:
$CI_COMMIT_REF_SLUG"
- "docker push
registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_BACKEND:
$CI_COMMIT_REF_SLUG"
- "docker push
registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_FRONTEND:
$CI_COMMIT_REF_SLUG"

test_deploy:
stage: "test_deploy"
script:
- >-
ssh -tt student@test_k8s1
"export SECRET_KEY=$SECRET_KEY &&
export SQLALCHEMY_DATABASE_URI=$SQLALCHEMY_DATABASE_URI &&
export POSTGRES_DB=$POSTGRES_DB && export POSTGRES_USER=$POSTGRES_USER &&
export POSTGRES_PASSWORD=$POSTGRES_PASSWORD &&
rm -rf ./net_inventory || true &&
git clone https://git.lab/cisco-devops/net_inventory_iac.git/ ./net_inventory
&& cd ./net_inventory &&
docker-compose stop || true && docker-compose rm -f || true && docker-compose
up -d"
- "echo PING TEST"
- ssh -tt student@test_k8s3 "ping -w 10 test_k8s1"
- "echo CURL TEST"
- ssh -tt student@test_k8s3 'curl -m 2 -f -s -o /dev/null -w "%{http_code}"
http://test_k8s1:5000/views/inventory/devices'
- "echo CURL TEST"
- ssh -tt student@test_k8s3 'curl -m 2 -f -s -o /dev/null -w "%{http_code}"
http://test_k8s1:5001/api/v1/inventory/devices'

test_cleanup:
stage: "test_cleanup"
script:
- "cd iac/terraform/"
- "terraform destroy -auto-approve"
when: "always"

build:
stage: "build"
script:
- "echo BUILD DB"
- "docker build -t $CI_REGISTRY_IMAGE_DB -f Dockerfile_db ."
- "docker tag $CI_REGISTRY_IMAGE_DB
registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_DB:
$CI_COMMIT_REF_SLUG"
- "echo BUILD BACKEND"
- "docker build -t $CI_REGISTRY_IMAGE_BACKEND -f Dockerfile_backend ."
- "docker tag $CI_REGISTRY_IMAGE_BACKEND
registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_BACKEND:
$CI_COMMIT_REF_SLUG"
- "echo BUILD FRONTEND"
- "docker build -t $CI_REGISTRY_IMAGE_FRONTEND -f Dockerfile_frontend ."
- "docker tag $CI_REGISTRY_IMAGE_FRONTEND
registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_FRONTEND:
$CI_COMMIT_REF_SLUG"
- "docker push
registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_DB:
$CI_COMMIT_REF_SLUG"
- "docker push
registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_BACKEND:
$CI_COMMIT_REF_SLUG"
- "docker push
registry.git.lab/cisco-devops/net_inventory_iac/$CI_REGISTRY_IMAGE_FRONTEND:
$CI_COMMIT_REF_SLUG"
only:
- "master"

deploy:
stage: "deploy"
script:
- >-
ssh -tt student@k8s1
"export SECRET_KEY=$SECRET_KEY && export
SQLALCHEMY_DATABASE_URI=$SQLALCHEMY_DATABASE_URI &&
export POSTGRES_DB=$POSTGRES_DB && export POSTGRES_USER=$POSTGRES_USER &&
export POSTGRES_PASSWORD=$POSTGRES_PASSWORD &&
rm -rf ./net_inventory || true && git clone
https://git.lab/cisco-devops/net_inventory.git/ && cd ./net_inventory &&
docker-compose stop || true && docker-compose rm -f || true && docker-compose
up -d"
- "echo PING TEST"
- ssh -tt student@k8s3 "ping -w 10 k8s1"
- "echo CURL TEST"
- ssh -tt student@k8s3 'curl -m 2 -f -s -o /dev/null -w "%{http_code}"
http://k8s1:5000/views/inventory/devices'
- "echo CURL TEST"
- ssh -tt student@k8s3 'curl -m 2 -f -s -o /dev/null -w "%{http_code}"
http://k8s1:5001/api/v1/inventory/devices'
only:
- "master"

student@student-vm:lab15/net_inventory_iac (master)$

Step 4 Since you modified the .gitlab-ci.yml file, it has to be committed and pushed back to the remote repository.
When changes are pushed, the GitLab pipeline will be executed. Run the following commands to add,
commit, and push changes to the remote repository:

git status
git add .gitlab-ci.yml
git commit -m "Lab15 changes"
git push origin master

student@student-vm:lab15/net_inventory_iac (master)$ git status
On branch master
Your branch is up to date with 'origin/master'.

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

modified: .gitlab-ci.yml

no changes added to commit (use "git add" and/or "git commit -a")
student@student-vm:lab15/net_inventory_iac (master)$ git add .gitlab-ci.yml
student@student-vm:lab15/net_inventory_iac (master)$ git commit -m "Lab15 changes"
[master c95d6ee] Lab15 changes
1 file changed, 8 insertions(+), 1 deletion(-)
student@student-vm:lab15/net_inventory_iac (master)$ git push origin master
Counting objects: 4, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 447 bytes | 447.00 KiB/s, done.
Total 4 (delta 2), reused 0 (delta 0)
To git.lab:cisco-devops/net_inventory_iac.git
f430fa7..c95d6ee master -> master
student@student-vm:lab15/net_inventory_iac (master)$

Step 5 From the Chrome browser, navigate to https://git.lab.

Step 6 Log in with the credentials that are provided in the Job Aids and click Sign in.

Step 7 From the list of projects, choose the cisco-devops/net_inventory_iac project.

Step 8 From the left navigation bar, choose CI/CD > Jobs. From the list of jobs, find your latest
test_topology_build job. It will probably still appear as running, because you started it only a few seconds
ago; it may already be finished if you open the GitLab web page a few minutes later. Click the status of your
latest job. The job failed. Review the routers_check playbook results. You will notice errors in validating
neighbor reachability.

Step 9 From the playbook results, you can see that the IP address of the csr1kv1 router was assigned
incorrectly. Therefore, the connectivity to the csr1kv3 router fails. To correct the wrong IP address, use the
nano iac/ansible/vars/csr1kv1/interfaces.yml command to open the interfaces.yml file in a text editor.
Change the GigabitEthernet2 IP address to 172.16.100.1.

student@student-vm:lab15/net_inventory_iac (lab15)$ cat iac/ansible/vars/csr1kv1/interfaces.yml
---
GigabitEthernet1:
ip: "10.10.1.1"
mask: 24
GigabitEthernet2:
ip: "172.16.100.1"
mask: 30
GigabitEthernet3:
ip: "172.16.100.5"
mask: 30
student@student-vm:lab15/net_inventory_iac (lab15)$

Step 10 Since you modified the interfaces.yml file, it has to be committed and pushed back to the remote repository.
Run the following commands to add, commit, and push changes to the remote repository:

git add iac/ansible/vars/csr1kv1/interfaces.yml
git commit -m "Lab15 changes"
git push origin master

student@student-vm:lab15/net_inventory_iac (master)$ git add iac/ansible/vars/csr1kv1/interfaces.yml
student@student-vm:lab15/net_inventory_iac (master)$ git commit -m "Lab15 changes"
[master 984fd14] Lab15 changes
1 file changed, 1 insertion(+), 1 deletion(-)
student@student-vm:lab15/net_inventory_iac (master)$ git push origin master
Counting objects: 7, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (7/7), done.
Writing objects: 100% (7/7), 568 bytes | 568.00 KiB/s, done.
Total 7 (delta 4), reused 2 (delta 0)
To git.lab:cisco-devops/net_inventory_iac.git
c95d6ee..984fd14 master -> master
student@student-vm:lab15/net_inventory_iac (master)$

Step 11 In a web browser, verify that the pipeline was now executed successfully.

Step 12 In a web browser, open a new tab and use the web app URL http://k8s1:5000 to confirm that the app is
running on the production servers as expected.

Step 13 The web app is running as expected, but the inventory is empty. To populate the inventory with some
example devices, use the populate_inventory predeployed script. In your terminal window, change the
directory to the home directory and use the populate_inventory k8s1:5001 command.

student@student-vm:$ populate_inventory k8s1:5001
nyc-rt01: Added successfully
nyc-rt02: Added successfully
rtp-rt01: Added successfully
rtp-rt02: Added successfully
student@student-vm:$

Step 14 Refresh the Network Inventory web page. You will see new devices added to the inventory.

Summary Challenge
1. Which two options represent the proper syntax for basic variable replacement using Jinja2?
(Choose two.)
a. {{ variable }}
b. {variable}
c. {{variable}}
d. {% variable %}
e. { variable }
2. What is the most common source of data that is rendered with a template to create a configuration
file with Ansible?
a. JSON
b. XML
c. YAML
d. INI-like
3. Which module is used in Ansible to generate text files from Jinja templates?
a. temp
b. source
c. dest
d. template
e. copy
4. Which two statements about Jinja2 in Ansible are true? (Choose two.)
a. Jinja2 variables can be used in a playbook.
b. Jinja2 variables can be used in a template.
c. Jinja2 variables can be used on the command line when executing a playbook.
d. Jinja2 variables can be used within the ansible.cfg file.
e. Jinja2 is the only templating language that works with networking devices.
5. Which two parameters are required when using the template module? (Choose two.)
a. source
b. src
c. destination
d. dest
e. template

6. What is the outcome of the following playbook?
---
- name: GENERATE & DEPLOY CONFIGURATIONS
  hosts: all
  connection: network_cli
  gather_facts: no

  tasks:
    - name: GENERATE CONFIGURATIONS
      template:
        src: snmp.j2
        dest: ./configs/snmp_configs.cfg
a. It generates an SNMP configuration file for each device in the inventory file called
snmp_configs.cfg.
b. It generates a single SNMP configuration file called snmp_configs.cfg.
c. It generates an SNMP configuration file for each device in the inventory file called snmp.cfg.
d. It generates a single SNMP configuration file called snmp.cfg.
7. If you have a playbook that generates and deploys configurations, what is one option that you can
use to run certain tasks?
a. Use inventory groups.
b. Use tags on your tasks.
c. Use the register attribute on your tasks.
d. Use tags as a parameter for each module.
8. What is the file extension that is used by a Terraform configuration?
a. .tf
b. .cf
c. .terraform
d. .trf
9. How does Terraform map a configuration to real-world infrastructure?
a. by integrating with and using Ansible’s inventory
b. by creating a backup of the .tfstate state file
c. by creating and maintaining .tfstate state files
d. by creating deployment plans before making changes
10. What are two parts of Terraform architecture? (Choose two.)
a. Terraform state
b. Terraform core
c. Terraform plug-ins
d. Terraform variables
e. Terraform inventory

Answer Key
Configuration Management Tools
1. A, D

Terraform Overview
1. A

Ansible Overview
1. B
2. A, B

Ansible Inventory File
1. D

Use the Cisco IOS Core Configuration Module
1. B, C

Jinja2 and Ansible Templates
1. B

Basic Jinja2 with YAML
1. E

Configuration Templating with Ansible
1. B

Summary Challenge
1. A, C
2. C
3. D
4. A, B
5. B, D
6. B
7. B
8. A
9. C
10. B, C

Section 11: Monitoring in NetDevOps

Introduction
Monitoring in NetDevOps is similar to general monitoring. Many Open Source tools are changing the
monitoring landscape, including new tools that can gather native statistics from applications with minimal
effort. One example is a tool that can add metrics for collection to Python code with as few as two
additional lines of code.
In this section, you will learn about monitoring, including the use of metrics and logs and how they help tell
the monitoring story. You will learn about the Elasticsearch, Logstash, and Kibana (ELK) stack. Finally,
you will examine the Open Source monitoring project Prometheus and how you can use Python libraries to
expose metrics that Prometheus can use and export.

Introduction to Monitoring, Metrics, and Logs


Understanding your NetDevOps environment has multiple facets. You need to measure and gather event
information from your NetDevOps environment. Gathering feedback is important so that you know what is
happening, technically, in the environment. The better you can monitor your systems, the better the
feedback will be on your NetDevOps stack. Monitoring helps provide the key technical metrics involved.
Log collection helps you know if there is an event (positive or negative) in the environment.
Monitoring, metrics gathering, storing and alerting, and log management are very expansive topics. A single
multiple-day course would be required to cover each of these functions and the possible related systems.
This section gives a brief overview of these items.

Monitoring
• SNMP polling
• Streaming telemetry
• NetFlow
• Build notifications
– Chat ops

– Build graphs
• Dependency monitoring

According to the Google dictionary, monitoring is the ability to “observe and check the progress or quality
of (something) over a period of time; keep under systematic review.” “Systematic review” is the important
part of this definition for the NetDevOps pipeline. With monitoring, you observe the status of networking
devices, receive job status notifications, and much more.
The “traditional” method of monitoring a network includes Network Management Station (NMS) polling of
network devices and gathering of information via Simple Network Management Protocol (SNMP). This
method has remained viable as newer tooling continues to offer support for SNMP statistical data gathering.
NMS stations reach out over SNMP to the device and request a Management Information Base (MIB) that
corresponds to the measurement requested. The device responds with the appropriate values, assuming that
authentication and authorization are correct. There are multiple versions of SNMP, versions 1 through 3.
In streaming telemetry, the remote device sends measurement metrics to an NMS without the NMS station
reaching out to the device. This approach is considered more scalable than SNMP polling. Devices can send
the data more frequently and the NMS receives and parses the data. There are no connection establishments
to manage.
A subscription in streaming telemetry is a contract between a subscription service and a subscriber that
specifies the type of data that will be pushed. It defines the languages to be used and the frequency, and it
identifies what will be sent in each push. Streaming telemetry offers two types of publications and
subscriptions: periodic and on-change.

An on-change data point would be routing relationships, such as a Border Gateway Protocol (BGP)
neighbor state. The only time a device will send an update to the subscriber is when the BGP state changes.

A periodic data point is sent at a fixed interval. Interface counters are typical periodic data points: the
subscriber of the data wants to know the utilization numbers at a specific interval.
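As a sketch, a periodic subscription on a Cisco IOS XE device can be configured with the model-driven telemetry commands shown below. The subscription ID, XPath filter, and receiver address are illustrative assumptions, not values from this lab; the periodic interval is given in centiseconds, so 500 pushes the CPU metric every 5 seconds. Replacing update-policy periodic 500 with update-policy on-change would push data only when the value changes.

```
telemetry ietf subscription 101
 encoding encode-kvgpb
 filter xpath /process-cpu-ios-xe-oper:cpu-usage/cpu-utilization/five-seconds
 stream yang-push
 update-policy periodic 500
 receiver ip address 192.168.10.10 57500 protocol grpc-tcp
```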
A reliable streaming telemetry infrastructure may also include a messaging bus that provides transport for
the streaming telemetry before it gets into a stack, such as the ELK stack from Elastic.co.
NetFlow is a protocol that provides information about IP applications that traverse a network. A device with
NetFlow sends network traffic accounting information that can help in usage-based billing, network
planning, and network monitoring. NetFlow version 9 is the latest version and is also the basis for an IETF
standard. Records are created and cached on the device. After a set period of time or amount of data, the
cached information is sent to NetFlow collectors that are configured on the device. The collectors can then
report on the data that was collected and many commercial products provide intelligence and dashboard
information that is based on that data.
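The export behavior described above can be sketched with Flexible NetFlow configuration on a Cisco IOS XE device. The exporter and monitor names, collector address, and interface are illustrative assumptions only:

```
flow exporter NETDEVOPS-EXPORT
 destination 192.168.10.10
 transport udp 2055
 export-protocol netflow-v9

flow monitor NETDEVOPS-MONITOR
 exporter NETDEVOPS-EXPORT
 record netflow ipv4 original-input

interface GigabitEthernet2
 ip flow monitor NETDEVOPS-MONITOR input
```

The monitor caches flow records on the device and the exporter ships them to the collector at 192.168.10.10 over UDP port 2055.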
Notifications from the tooling within the NetDevOps pipeline are helpful in understanding the status of
systems. To enable ChatOps, the tool is integrated with your chat application, such as Cisco Webex Teams,
Slack, Microsoft Teams, or Mattermost, and allows for incoming web hooks. From an incoming web hook,
tools can send status updates such as new pull requests, new issues, or build status (such as start, success,
failures). From ChatOps, you can determine if there are issues that need immediate attention.
Several tools that are available within remote Git repositories can provide metrics about code update
frequency and other items. These metrics can be helpful in understanding if there are significant issues in
the code base, if the current project is actively maintained, or if a project is fresh with much activity. There
are often graphs or logs for the activity that is involved with a project as well.
Dependency monitoring monitors the underlying dependencies of a project. One common third-party library
that is often used with Python projects that use Representational State Transfer (REST) application
programming interfaces (APIs) is the Python requests library. A third party maintains this library in an
Open Source model. You may not be immediately notified when there is a new version or a vulnerability
within the library itself, but using a toolset that can identify when there is a new version of the dependency
is helpful so that you can test your application with this new version.

Metrics
• Infrastructure metrics
• Application metrics
– Prometheus exporters
– Response codes
– Time to serve data
• CI/CD pipeline metrics
– Build time
– Build failures

A metric is a unit of measurement for evaluating an item. Metrics are often seen as part of system Key
Performance Indicators (KPIs). Examples of metrics are CPU utilization, memory utilization, and interface
utilization. Metrics can be consistently measured.
Typical infrastructure-based metrics include CPU utilization, memory utilization, and network interface
utilization. These metrics can be applied to networking equipment but also to hypervisors, host-based
systems, and others.
Application metrics pertain specifically to an application. In a web application, there are some metrics that
are easier to measure, such as the number of HTTP 200, 302, or 404 response codes. Prometheus exporters
in Python can natively generate many of these metrics for Python web frameworks. You may also want to
know how long it took the web server to respond to the data or other custom metrics that are defined in the
software.
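The response-code metric described above can be sketched with nothing more than the Python standard library. The status codes here are made-up sample data; in practice an exporter such as a Prometheus client would collect them from the web framework itself:

```python
from collections import Counter

# Hypothetical sample of HTTP status codes taken from an access log.
status_codes = [200, 200, 404, 200, 302, 200, 404, 500]

# Count occurrences of each response code; these counts are the raw
# material for a metric such as http_responses_total{code="200"}.
counts = Counter(status_codes)

print(counts[200])  # 4
print(counts[404])  # 2
```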
Continuous integration and continuous deployment (CI/CD) pipeline metrics may include how long it took
to build an application. You may want to know how long it took to complete tests, how often builds fail, or
a handful of other possibilities within metrics. If you can measure it, then you can have a metric that is
defined for it. A couple of helpful Python libraries include timeit and the time library for start and stop times
of functions to get exact test times.
On Linux-based systems, you can also use the time command to get the run time of a particular command
execution.
time ping 1.1.1.1 -c 3
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=56 time=14.713 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=41.360 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=56 time=16.250 ms

--- 1.1.1.1 ping statistics ---


3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 14.713/24.108/41.360/12.215 ms
ping 1.1.1.1 -c 3 0.00s user 0.00s system 0% cpu 2.032 total
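The Python timing approach mentioned above can be sketched with the standard-library timeit module. The function being timed is a made-up placeholder standing in for a build or test step:

```python
import timeit

def render_config():
    # Placeholder workload standing in for a real build or test step.
    return "\n".join(f"interface GigabitEthernet{i}" for i in range(10))

# Run the workload 1,000 times and report the total elapsed seconds,
# which could then be recorded as a pipeline metric.
elapsed = timeit.timeit(render_config, number=1000)
print(f"1000 runs took {elapsed:.4f} s")
```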

Logs
• Record of an event
• Syslog
• Firewall logs
• Application-specific logs
• Logs in the CI/CD pipeline
– Build logs
– Test results

A log is a record of an event. Regulatory bodies have logging requirements for systems with varying
degrees of granularity. Logs can be extremely detailed and it is possible to generate enormous amounts of
information. The logging levels, settings, and policies must be determined within your organization, based
on your requirements.
Several types of logs are available depending on the system and the configuration. These logs may reside on
the device itself and stay on the system or may be exported from the host or device to a central log
repository. Syslog is one of the many types of log transport mechanisms and is heavily used with network
devices.
Firewalls log configured events, which may include denied packets, the start of a new state table entry, the
transmission of a packet, and the flow duration and transmitted bits.
At the host or operating system level, there are often system logs that indicate system events. You can get
detailed information about the performance of the system or general information if there are errors that
occur within the operating system.
Applications can generate their own logs. They can be logged to the console of the screen or exported off
the box to a logging destination. Because application-level logs are defined within the application, you will
need to know what is configurable. Application logging needs to be written into the application itself to be
used. If the application does not have logging written into it, logs will not be sent.
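As a minimal sketch of logging written into an application itself, the Python standard library is enough. The logger name is a placeholder, and the in-memory stream stands in for a log file that a shipper such as Filebeat would forward:

```python
import io
import logging

# Route this application's logs to an in-memory stream; in practice the
# handler would write to a file or syslog destination for collection.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s: %(message)s"))

log = logging.getLogger("net_inventory")
log.setLevel(logging.INFO)
log.addHandler(handler)

log.info("backend started")
log.warning("database connection retry %d", 1)

output = stream.getvalue()
print(output)
```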
Email notifications are another example of a logging mechanism. To use this mechanism, you need to have
a library installed that can communicate with an email server and have the proper email server configuration
set. Once set, the application can send applicable logs for the application.
In many pipelines, you can see the results of the build from the console output. The output of each build
step is stored within each build. If there was a failure, you can observe the logged output from the build as
feedback to resolve the failure. If it is a success, you will see all the output that created the success.
1. Which monitoring method uses subscriptions and is considered scalable?
a. SNMP updates
b. SNMP polling
c. streaming telemetry
d. speedy telecom

Introduction to Elasticsearch, Beats, and Kibana
Monitoring in the NetDevOps environment requires tooling to know the status of the environment. You
should collect and store metrics and logs for a period of time, and then, based on the monitoring in place,
create appropriate alerts. In this section, you will see a system that can collect and store metrics and logs,
alert on components of the metrics and logs, and visualize the metrics and logs. You will also examine the
Prometheus exporter and the format of the metrics that Prometheus scrapes.

ELK Stack
• Elasticsearch, Logstash, Kibana
• Elastic.co

The ELK stack is one Open Source log- and metric-gathering tech stack. ELK stands for Elasticsearch,
Logstash, and Kibana. ELK can collect and ingest logs and metrics, store these logs and metrics in a
searchable and scalable format, and using Kibana, visualize, and alert based on the metrics and logs that
were gathered. Information can be found at http://www.elastic.co.
In the data flow, various networks, applications, and systems will send information through Logstash or
directly to Elasticsearch. Kibana is then used to visualize the data, send alerts, and manage the ELK stack.
Users can consume dashboards within the Kibana web interface or alerts can be sent from Kibana to other
systems.

Elasticsearch
• Central storage of data
• RESTful search
• Analytics engine

Elasticsearch is the heart of the ELK stack and is the central data storage component. The storage itself has
a RESTful search capability that can be called using a REST API, which accounts for the openness and
effectiveness of the Elasticsearch system. Elasticsearch stores the data in JavaScript Object Notation
(JSON) format and can search across the entire data store to analyze and find the requested data.
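Because Elasticsearch stores and searches JSON, a query body can be built as an ordinary Python dictionary and serialized before being sent to the REST API. The field name and the search text here are hypothetical:

```python
import json

# Hypothetical Elasticsearch query body: a full-text match on a
# "message" field, returning at most 10 documents.
query = {
    "query": {"match": {"message": "interface down"}},
    "size": 10,
}

# Serialize the body; a REST client would POST it to the index's
# _search endpoint on the Elasticsearch host.
body = json.dumps(query)
print(body)
```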

Logstash
• Server-side data processing
• Transforms data
• Outputs to your own data platform, such as Elasticsearch

Logstash is a server-side processing engine. It ingests data, transforms the data, and sends it off to another
data store, such as Elasticsearch, or to another data platform. You configure the input data types (syslog,
SNMP, and so on), the filters, and what the output should include. Advertised capabilities include
geolocation that is based on IP addresses, creation of structured data from unstructured formats using Grok,
and removal of sensitive information from fields.
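The input, filter, and output stages map directly onto the three blocks of a Logstash pipeline configuration. The following sketch (port number, Grok pattern, and host are illustrative assumptions) receives syslog, tags the message text, and writes to Elasticsearch:

```
input {
  syslog { port => 5140 }
}
filter {
  grok { match => { "message" => "%{GREEDYDATA:event}" } }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
```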

Kibana
• Visualization engine
• Includes drag-and-drop fields
• Visually manage ELK stack

Kibana is the visualization and alerting engine in the ELK stack. Kibana itself is highly customizable and
now features drag-and-drop fields, flexible data sources, and can manage the ELK stack functions. With
Kibana, you can create dashboards from the Elasticsearch data to give visual context to what is happening
within your NetDevOps environment and beyond.
The Kibana interface allows you to send alerts to several destinations. You can customize the interface to
watch various components including CPU, memory, interface utilization, and to notify you of an event via
email, Slack, or outgoing webhook (including for Cisco Webex Teams, PagerDuty, and other destinations
that accept hooks).
Includes the following:
• Histograms
• Line graphs
• Pie charts
• Sunbursts
• Other options

Kibana includes various visual display options including histograms, line graphs, pie charts, sunbursts, and
more, to visually represent data.

Filebeat
• Ship logs to Elasticsearch and Kibana.
• Search and visualize logs.
• Add prebuilt dashboards for system logs.

Filebeat is the log transmission component for the ELK stack. It can be installed on servers and systems as a
forwarder. When enabled, you can gather logs about several preloaded capabilities. Filebeat is configured
with the YAML file /etc/filebeat/filebeat.yml on a Linux system. You can gather system stats by enabling
the system module with the command sudo filebeat modules enable system. Once you have configured the
Filebeat destinations in the configuration file and restarted the service, you can have system logs sent to the
destinations without significant configuration of Filebeat.
Filebeat allows you to search and visualize logs for systems and load some prebuilt dashboards to view logs.
Some other application logs that ship with the Linux version of Filebeat are aws, azure, cisco, iis, iptables,
mysql, netflow, rabbitmq, and redis. There are several more, and you can configure Filebeat to send other
logs to the ELK stack.
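A minimal /etc/filebeat/filebeat.yml sketch that forwards log files and points Filebeat at Elasticsearch and Kibana might look like the following; the hostnames are placeholders, not values from this lab:

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.log

setup.kibana:
  host: "elk.example.com:5601"

output.elasticsearch:
  hosts: ["elk.example.com:9200"]
```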

Metricbeat
• Gathers metrics and sends them to Elasticsearch and Kibana.
• Servers
• Prometheus
• Docker
• Applications
• Prebuilt dashboards for Kibana

Metricbeat is similar to Filebeat, but instead of gathering and forwarding log information, Metricbeat
gathers metric data to send to the rest of the ELK stack. The configuration of Metricbeat is similar to
Filebeat and uses a configuration file that is located at /etc/metricbeat/metricbeat.yml. Metricbeat can gather
metrics from your system (such as CPU and memory), Docker statistics, and scrape Prometheus metrics
locally to send to the ELK stack for storage. There are several prebuilt Kibana visualizations, such as the
example in the figure, to give you a good start on visualizing system metrics.
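Similarly, a minimal /etc/metricbeat/metricbeat.yml sketch enabling the system module could look like this (the hostname is a placeholder):

```yaml
metricbeat.modules:
  - module: system
    metricsets: ["cpu", "memory", "network"]
    period: 10s

output.elasticsearch:
  hosts: ["elk.example.com:9200"]
```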
1. Which is not an advertised feature of Logstash?
a. firewall policy adjustment recommendation
b. geolocation of IP addresses
c. creation of structured data from unstructured data
d. removal of sensitive information from logs

Discovery 16: Set Up Logging for the Application
Servers and Visualize with Kibana
Introduction
As you deploy applications, it becomes ever more critical to monitor the servers the applications run on and
the applications themselves. One way to do that is to use components such as Elasticsearch, Filebeat,
Metricbeat, and Kibana for visualization. This lab will show how to install Filebeat on an Ubuntu server to
collect and send logs to a remote Elasticsearch/Kibana server. You will explore how to configure Filebeat to
extract the logs from a specific host and how to visualize these logs from Kibana using some of the
predefined dashboards. Finally, you will learn how to use Filebeat to collect the logs from all Docker
containers that are part of the net_inventory application.

Topology

Job Aid
Device Information

Device Description FQDN/IP Address Credentials

Student Workstation Linux Ubuntu VM 192.168.10.10 student, 1234QWer

k8s1 Kubernetes 192.168.10.21 student, 1234QWer

k8s2 Kubernetes 192.168.10.22 student, 1234QWer

Command List
The table describes the commands that are used in this activity. The commands are listed in alphabetical
order so that you can easily locate the information that you need. Refer to this list if you need configuration
command assistance during the lab activity.

Command Description

ansible-playbook pb.install_filebeat.yml -K Ansible playbook execution to complete the Filebeat install
on the Kubernetes servers
The flag "-K" (uppercase K) indicates to prompt for the sudo
password. This is the same password as the login indicated
above (1234QWer)

cd directory name To change directories within the Linux file system, use the
cd command. You will use this command to enter into a
directory where the lab scripts are housed. You can use tab
completion to finish the name of the directory after you start
typing it.

/home/student/scripts/copy_filebeat.sh Script to copy via SCP the filebeat Debian installation file.
Copies the file from /tmp/filebeat-7.4.2-amd64.deb to the
Kubernetes servers

/home/student/scripts/generate_ssh_failures.sh Script that will attempt to log in to the Kubernetes servers
with a wrong credential, generating log messages to be
visualized

ssh student@k8s# Uses the SSH application to open a remote terminal
session to the Kubernetes servers, where # is the server
number (1-3)

systemctl This command may be used to introspect and control the
state of the "systemd" system and service manager

sudo filebeat modules enable system Enables the system module for the filebeat process

sudo filebeat modules list List the modules available and enabled/disabled within
filebeat

vi Vim is a highly configurable text editor for efficiently
creating and changing any kind of text. It is included as "vi"
with most UNIX systems

systemctl Keywords
The systemctl command has several keywords that are used for managing Linux operating system
processes. In this exercise you will use root keywords status, enable, and start.

Root systemctl Keywords


These are the top level systemctl command keywords:

Keyword Description

enable process_name This keyword sets the process to run on startup of the operating system

restart process_name This keyword restarts a process that is already running, or starts a
process that was previously stopped

start process_name This keyword starts the process name

status process_name This keyword shows the status of the given processes name

stop process_name This keyword stops a process if it is running, leaves it stopped if
already stopped

Task 1: Install Filebeat on Kubernetes Servers Via Ansible
You will set up Filebeat log shipper via an Ansible playbook. This playbook will set up Filebeat from the
source Debian file located at /tmp/filebeat-7.4.2-amd64.deb to the Kubernetes servers k8s1 and k8s2. The
playbook will copy the file via SCP from the Student Workstation to each of the Kubernetes servers to the
/tmp/filebeat-7.4.2-amd64.deb location.

After the file is copied, it will install the local package, update its configuration YAML file from a template
file (which you will modify), and restart the filebeat service.
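The task names below appear in the play output later in this lab; everything else in this sketch (the host group name, the exact module arguments, and the omitted assert tasks) is an assumption, not the lab's actual pb.install_filebeat.yml. A simplified outline of what such a playbook might look like:

```yaml
---
- name: INSTALL FILEBEAT FROM LOCAL FILE WITH APT
  hosts: k8s          # assumed inventory group for k8s1 and k8s2
  tasks:
    - name: COPY FILEBEAT FROM SERVER TO HOST
      # Push the .deb from the Student Workstation to each server
      copy:
        src: /tmp/filebeat-7.4.2-amd64.deb
        dest: /tmp/filebeat-7.4.2-amd64.deb

    - name: INSTALL FILEBEAT FROM DEB FILE
      become: true
      apt:
        deb: /tmp/filebeat-7.4.2-amd64.deb

    - name: TEMPLATE FILEBEAT YAML FILE TO K8S HOST
      become: true
      template:
        src: templates/filebeat.yml.j2
        dest: /etc/filebeat/filebeat.yml

    - name: RESTART THE FILEBEAT SERVICE
      become: true
      systemd:
        name: filebeat
        state: restarted
```

The real playbook interleaves assert tasks between these steps to verify the system state along the way.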

Activity

You will review the pb.install_filebeat.yml playbook. There are several tasks within the single playbook.
The assert tasks are verifying, at various steps along the play, that the system is in the state that it should be
in.

Step 1 In the student workstation, find and open Visual Studio Code by clicking the Visual Studio Code icon.

Step 2 From the Visual Studio Code top navigation bar, choose Terminal > New Terminal [Ctrl-Shift-`].

Step 3 Navigate to the terminal section at the bottom of Visual Studio Code.

Step 4 Within the Visual Studio Code terminal, change the directory to ~/labs/lab16/playbooks using the cd
~/labs/lab16/playbooks command.

Step 5 Examine the pb.install_filebeat.yml playbook file and note each of the playbook tasks.

Modify the Template File to Deploy Updated Settings
The filebeat.yml.j2 Jinja2 template file starts in the default format that is installed on the host when
Filebeat is installed. You are going to modify the file so that there are new tags available when sending into
Kibana and updating the destination of Kibana and Elasticsearch to match that of the Student Workstation
IP address, 192.168.10.10.

Step 6 In the labs/lab16/playbooks/ templates directory, open the filebeat.yml.j2 file for editing.

Step 7 Modify line 93 from


#tags: ["service-X", "web-tier"]
To
tags: ["docker", "{{ inventory_hostname }}"]

Step 8 Modify line 123 from


#host: "localhost:5601"
To
host: "192.168.10.10:5601"

Step 9 Modify line 150 from
hosts: ["localhost:9200"]
To
hosts: ["192.168.10.10:9200"]

Save the changes.
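After these edits, the affected portions of the rendered configuration for host k8s1 would look roughly like the fragment below. The surrounding keys (setup.kibana, output.elasticsearch) follow the default filebeat.yml layout shown later in this lab; line numbers refer to the steps above.

```yaml
# Line 93: tag every event with "docker" and the inventory hostname
tags: ["docker", "k8s1"]

# Line 123: point the Kibana setup endpoint at the Student Workstation
setup.kibana:
  host: "192.168.10.10:5601"

# Line 150: ship events to Elasticsearch on the Student Workstation
output.elasticsearch:
  hosts: ["192.168.10.10:9200"]
```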

Install the Filebeat Application Via Ansible
Now that you updated the Jinja template for the Filebeat YAML configuration, you can use Ansible to
install and configure Filebeat on the Kubernetes servers. The Ansible playbook is located in the
~/labs/lab16/playbooks directory of the Student Workstation. This pb.install_filebeat.yml playbook will
verify that the file is copied to the correct destination from the previous step and install via the apt process.
The playbook also verifies that the installation was done properly. Manual installation may be required if
the playbook is unsuccessful. See the Appendix task for the manual solution installation.

Step 10 Execute the ansible-playbook pb.install_filebeat.yml -K command to install Filebeat. When prompted,
enter the password used to log in to the k8s servers. Ensure that all the tasks are completed successfully.

student@student-vm:lab16/playbooks$ ansible-playbook pb.install_filebeat.yml -K
SUDO password:

PLAY [INSTALL FILEBEAT FROM LOCAL FILE WITH APT]


******************************
TASK [COPY FILEBEAT FROM SERVER TO HOST] ***************************************
changed: [k8s1 -> localhost]
changed: [k8s2 -> localhost]

TASK [GET FILES IN TMP FOLDER] *************************************************


ok: [k8s2]
ok: [k8s1]

TASK [VERIFY FILEBEAT IS ON SERVER] ********************************************


ok: [k8s1] => {
"changed": false,
"msg": "Filebeat file present, continuing"
}
ok: [k8s2] => {
"changed": false,
"msg": "Filebeat file present, continuing"
}

TASK [INSTALL FILEBEAT FROM DEB FILE] ******************************************


changed: [k8s2]
changed: [k8s1]

TASK [GET STATE OF FILEBEAT] ***************************************************


ok: [k8s2]
ok: [k8s1]

TASK [ASSERT FILEBEAT INSTALLED] ***********************************************


ok: [k8s1] => {
"changed": false,
"msg": "All assertions passed"
}
ok: [k8s2] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [TEMPLATE FILEBEAT YAML FILE TO K8S HOST] *********************************


changed: [k8s2]
changed: [k8s1]

TASK [RESTART THE FILEBEAT SERVICE] ********************************************


changed: [k8s2]
changed: [k8s1]

TASK [VERIFY RESTARTED] ********************************************************


ok: [k8s1] => {
"changed": false,
"msg": "Filebeat successfully restarted"
}
ok: [k8s2] => {
"changed": false,

"msg": "Filebeat successfully restarted"
}

PLAY RECAP *********************************************************************


k8s1 : ok=9 changed=3 unreachable=0 failed=0
k8s2 : ok=9 changed=3 unreachable=0 failed=0

Task 2: Visualize the Logs In Kibana


In this task, you will visualize the system information logs being collected by Filebeat within the Kibana
visualization web application. You will see the syslog information, what commands are being executed with
sudo, and information related to SSH logins with the out-of-the-box dashboards.

Activity

Discover Logs
This activity will get you familiar with the Kibana Discover section.

Step 1 In the Student Workstation, open a web browser and navigate to http://localhost:5601.

Step 2 Choose the Discover icon on the left menu (compass icon) to explore the raw logs received.

Step 3 Choose the index pattern so that it shows filebeat. An index pattern must be selected to view logs.

Step 4 In the search bar, enter agent.hostname, add the equality operator (:), and add the host name k8s1. Here you
are looking for system logs coming from k8s1.

Step 5 Click Update to show the results. There should be no results.

Enable the System Module on Filebeat


The reason there are no logs is that nothing has been configured to be sent within the Filebeat modules.
Filebeat ships by default with several modules installed, but they are not enabled by default. In this case, no
modules are enabled and that is why there are no logs yet. To see all the modules available by default,
execute the sudo filebeat modules list command.

Repeat Steps 6 through 10 on each of the Kubernetes servers (k8s1 and k8s2).

Step 6 Establish an SSH session to the Kubernetes servers k8s#, where # is the server number, using the ssh
student@k8s# command.

Step 7 Use the sudo filebeat modules list command to list the available modules.

Step 8 Enable the system module using the sudo filebeat modules enable system command.

Step 9 Use the sudo filebeat modules list command once again to verify that the modules are now enabled.

Step 10 To ensure that Filebeat will send the system logs, restart the Filebeat process with the sudo systemctl restart
filebeat command.

student@k8s#:~$ sudo filebeat modules list
Enabled:

Disabled:
apache
auditd
aws
cef
cisco
coredns
elasticsearch
envoyproxy
googlecloud
haproxy
ibmmq
icinga
iis
iptables
kafka
kibana
logstash
mongodb
mssql
mysql
nats
netflow
nginx
osquery
panw
postgresql
rabbitmq
redis
santa
suricata
system
traefik
zeek

student@k8s#:~$ sudo filebeat modules enable system


Enabled system

student@k8s#:~$ sudo filebeat modules list


Enabled:
system

Disabled:
apache
auditd
aws
cef
cisco
coredns
elasticsearch
envoyproxy
googlecloud
haproxy

ibmmq
icinga
iis
iptables
kafka
kibana
logstash
mongodb
mssql
mysql
nats
netflow
nginx
osquery
panw
postgresql
rabbitmq
redis
santa
suricata
traefik
zeek
student@k8s#:~$ sudo systemctl restart filebeat
student@k8s#:~$
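Rather than repeating the commands interactively on each server, the sequence above can also be scripted from the Student Workstation. The sketch below only echoes the ssh invocations (a dry run); remove the echo to execute them. The host names match this lab's topology, but the script itself is an illustration, not part of the lab files.

```shell
# Dry-run sketch: print the per-server commands that Steps 6-10 run by hand.
enable_system_module() {
  for host in k8s1 k8s2; do
    echo ssh "student@${host}" 'sudo filebeat modules enable system'
    echo ssh "student@${host}" 'sudo systemctl restart filebeat'
  done
}
enable_system_module
```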

Verify That Logs Are Getting In to Elasticsearch


Now that the Filebeat configuration has been updated to send system logs, you will verify that the logs are
in fact showing up in Kibana. Go back to the browser and refresh the page. There should be syslogs
showing up in the window.

Step 11 In web browser, click Refresh to refresh the page. You should now see logs.

Task 3: Explore Filebeat Predefined Dashboards
Filebeat comes with predefined dashboards. Here you will load them into Kibana and explore the
dashboards.

Activity

Activate the Dashboards


Filebeat comes with predefined dashboards. To load them to your Kibana instance, you need to run the sudo
filebeat setup --dashboards command from the k8s1 server.

Step 1 If needed, establish a new SSH session to the k8s1 server using the ssh student@k8s1 command.

Step 2 Issue the sudo filebeat setup --dashboards command to trigger Filebeat to load the dashboards to Kibana.

student@k8s1:~$ sudo filebeat setup --dashboards


Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
student@k8s1:~$

Explore the Dashboards


Now you will examine the predefined dashboards.

Step 3 Click the Dashboard icon on the left navigation pane (four rectangles of varying size and orientation).

Step 4 Choose [Filebeat System] Syslog dashboard ECS. You will be taken to the syslog page, where you will see
the current syslog commands that have been executed.

Step 5 Click on sudo command to see all sudo commands that have been executed.

Generate Failed Logins and Check the Dashboard
Now you will generate some failed logins to each of the servers with a predeployed script. This will cause
additional SSH connections to appear on the dashboard for SSH logins.
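The lab's predeployed script is not reproduced in the lab files here; based on the output it produces, a hypothetical sketch of such a generator might look like the following (the real generate_ssh_failures.sh may differ). The live ssh attempt is left commented so the sketch can run anywhere.

```shell
# Hypothetical sketch: loop a few rounds of bad-password logins against each server.
generate_ssh_failures() {
  for round in 1 2 3 4; do
    for host in k8s1 k8s2 k8s3; do
      echo "Command: sshpass -p badpass ssh notauser@${host}"
      # On the lab workstation, uncomment to actually attempt the login:
      # sshpass -p badpass ssh -o StrictHostKeyChecking=no "notauser@${host}" || true
    done
  done
}
generate_ssh_failures
```

Each failed attempt lands in the target server's auth log, which Filebeat's system module ships to Elasticsearch, so the attempts surface on the SSH login dashboard.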

Step 6 In the terminal window, ensure that you exit all SSH sessions to the Kubernetes servers using the exit
command.

Step 7 Run the ~/labs/lab16/scripts/generate_ssh_failures.sh command to generate false SSH logins.

student@student-vm:$ ~/labs/lab16/scripts/generate_ssh_failures.sh
Command: sshpass -p badpass ssh notauser@k8s1
Permission denied, please try again.
Command: sshpass -p badpass ssh notauser@k8s2
Permission denied, please try again.
Command: sshpass -p badpass ssh notauser@k8s3
Permission denied, please try again.
Command: sshpass -p badpass ssh notauser@k8s1
Permission denied, please try again.
Command: sshpass -p badpass ssh notauser@k8s2
Permission denied, please try again.
Command: sshpass -p badpass ssh notauser@k8s3
Permission denied, please try again.
Command: sshpass -p badpass ssh notauser@k8s1
Permission denied, please try again.
Command: sshpass -p badpass ssh notauser@k8s2
Permission denied, please try again.
Command: sshpass -p badpass ssh notauser@k8s3
Permission denied, please try again.
Command: sshpass -p badpass ssh notauser@k8s1
Permission denied, please try again.
Command: sshpass -p badpass ssh notauser@k8s2
Permission denied, please try again.
Command: sshpass -p badpass ssh notauser@k8s3
Permission denied, please try again.
student@student-vm:$

Step 8 Once the command execution is completed, go back to the Kibana web browser. Select SSH login to see
activity about the SSH login attempts.

Task 4: (Appendix) Install Filebeat on Kubernetes
Servers Manually
You can use this task to set up Filebeat from the source Debian file located at $HOME/filebeat-7.4.2-
amd64.deb to each of the Kubernetes servers (k8s1 and k8s2). The file will be copied via SCP from the
Student Workstation to each of the Kubernetes servers /tmp/filebeat-7.4.2-amd64.deb location.

Activity

Step 1 Within the Visual Studio Code terminal, change the directory to ~/labs/lab16/scripts using the cd
~/labs/lab16/scripts command.

Step 2 Using your preferred text editor, open the copy_filebeat.sh file for editing. Change the first part of the source
file path from /tmp to $HOME and save the script.

student@student-vm:scripts$ cat copy_filebeat.sh


#! /bin/sh
for i in 1 2 3
do
echo "Command: scp $HOME/filebeat-7.4.2-amd64.deb student@k8s$i:/tmp/filebeat-7.4.2-
amd64.deb"
scp $HOME/filebeat-7.4.2-amd64.deb student@k8s$i:/tmp/filebeat-7.4.2-amd64.deb
done

Step 3 Execute the ./copy_filebeat.sh command to initiate the copying process.

student@student-vm$ cd ~/labs/lab16/scripts
student@student-vm:scripts$ ./copy_filebeat.sh
Command: scp /home/student/filebeat-7.4.2-amd64.deb student@k8s1:/tmp/filebeat-7.4.2-amd64.deb
filebeat-7.4.2-amd64.deb 100% 23MB 91.8MB/s 00:00
Command: scp /home/student/filebeat-7.4.2-amd64.deb student@k8s2:/tmp/filebeat-7.4.2-amd64.deb
filebeat-7.4.2-amd64.deb 100% 23MB 92.4MB/s 00:00
Command: scp /home/student/filebeat-7.4.2-amd64.deb student@k8s3:/tmp/filebeat-7.4.2-amd64.deb
filebeat-7.4.2-amd64.deb 100% 23MB 92.4MB/s 00:00
student@student-vm:scripts$

Install the Filebeat Application Via Ansible


An Ansible playbook is created in the ~/labs/lab16/playbooks directory of the Student Workstation. The
pb.install_filebeat.yml playbook will verify that the file is copied to the correct destination and install via
the apt process. The playbook also verifies that the installation was done properly. Manual installation may
be required if the playbook is unsuccessful.

Step 4 Within the Visual Studio Code terminal, change the directory to ~/labs/lab16/playbooks using the cd
~/labs/lab16/playbooks command.

Step 5 Execute the command ansible-playbook pb.install_filebeat.yml -K to install Filebeat. When prompted,
enter the password used to log in to the k8s servers. Ensure that all the tasks are completed successfully.

student@student-vm$ cd ~/labs/lab16/playbooks
student@student-vm:playbooks$ ansible-playbook pb.install_filebeat.yml -K
SUDO password:

PLAY [PLAY 1: INSTALL FILEBEAT FROM LOCAL FILE WITH APT]


*************************************************

TASK [LIST FILES IN TMP FOLDER]


****************************************************************************
ok: [k8s2]
ok: [k8s1]

TASK [VERIFY FILEBEAT IS ON SERVER]


***********************************************************************
ok: [k8s1] => {
"changed": false,
"msg": "Filebeat file present, continuing"
}
ok: [k8s2] => {
"changed": false,
"msg": "Filebeat file present, continuing"
}

TASK [INSTALL FILEBEAT FROM DEB FILE]


*********************************************************************
ok: [k8s2]
ok: [k8s1]

TASK [GET STATE OF FILEBEAT]


***************************************************************************
ok: [k8s2]
ok: [k8s1]

TASK [ASSERT FILEBEAT INSTALLED]


**************************************************************************
ok: [k8s1] => {
"changed": false,
"msg": "All assertions passed"
}
ok: [k8s2] => {
"changed": false,
"msg": "All assertions passed"
}

PLAY RECAP **************************************************************************


k8s1 : ok=5 changed=0 unreachable=0 failed=0
k8s2 : ok=5 changed=0 unreachable=0 failed=0

Configure Filebeat to Send Logs to Elasticsearch on Student Workstation

On each of the Kubernetes servers, you will modify the Filebeat configuration file to send the logs to the
proper destinations. Before starting the Filebeat process, you will need to make these updates. The
configuration file is located at /etc/filebeat/filebeat.yml. Perform the following steps on all three Kubernetes
servers.

Step 6 Establish an SSH session to the Kubernetes servers k8s#, where # is the server number, using the ssh
student@k8s# command.

Step 7 Open the /etc/filebeat/filebeat.yml file for editing.

Note If you use vi for editing, use the sudo vi /etc/filebeat/filebeat.yml command.

Step 8 Modify line 93 from


#tags: ["service-X", "web-tier"]
To
tags: ["docker"]

Step 9 Modify line 123 from


#host: "localhost:5601"
To
host: "192.168.10.10:5601"

Step 10 Modify line 150 from


hosts: ["localhost:9200"]
To
hosts: ["192.168.10.10:9200"]

#============================== General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
tags: ["docker"]

#============================== Kibana =====================================

setup.kibana:

# Kibana Host
# Scheme and port can be left out and will be set to the default (http and 5601)
# In case you specify and additional path, the scheme is required:
http://localhost:5601/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
host: "192.168.10.10:5601"

#-------------------------- Elasticsearch output ------------------------------


output.elasticsearch:
# Array of hosts to connect to.
hosts: ["192.168.10.10:9200"]

Enable and Start Filebeat
Once Filebeat is successfully installed and configured, you must enable and start the process. Enabling the
process indicates that the process will start when the operating system boots up. Starting the process takes a
process that is stopped or not started and starts it.

Perform the following steps on all three Kubernetes servers.

Step 11 If needed, establish new SSH sessions to the k8s servers using the ssh student@k8s# command, where # is
the server number.

Step 12 Issue the sudo systemctl enable filebeat command, followed by the sudo systemctl start filebeat
command. You can use the systemctl status filebeat command to get the status of the process. You should
see the status of active (running) when it is complete.

Note If prompted for the sudo password, use the password that is provided in the Job Aids.

student@k8s1:~$ sudo systemctl enable filebeat


Synchronizing state of filebeat.service with SysV service script with
/lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable filebeat
student@k8s1:~$ sudo systemctl start filebeat
student@k8s1:~$
student@k8s1:~$ sudo systemctl status filebeat
filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
Loaded: loaded (/lib/systemd/system/filebeat.service; enabled; vendor preset:
enabled)
Active: active (running) since Tue 2019-12-03 17:37:35 UTC; 7s ago
Docs: https://www.elastic.co/products/beats/filebeat
Main PID: 18061 (filebeat)
Tasks: 11 (limit: 4915)
CGroup: /system.slice/filebeat.service
└─18061 /usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -
path.home /usr/share/filebeat -path.config /etc/filebeat -path.d
Dec 03 21:28:32 k8s1 filebeat [18061]: 2019-12-03T21:28:32.411Z INFO
[index-management] idxmgmt/std.go:394 Set setup.template.name to
Dec 03 21:28:32 k8s1 filebeat [18061]: 2019-12-03T21:28:32.411Z INFO
[index-management] idxmgmt/std.go:399 Set setup.template.pattern
Dec 03 21:28:32 k8s1 filebeat [18061]: 2019-12-03T21:28:32.411Z INFO
[index-management] idxmgmt/std.go:433 Set settings.index.lifecyc
Dec 03 21:28:32 k8s1 filebeat [18061]: 2019-12-03T21:28:32.411Z INFO
[index-management] idxmgmt/std.go:437 Set settings.index.lifecyc
Dec 03 21:28:32 k8s1 filebeat [18061]: 2019-12-03T21:28:32.412Z INFO
template/load.go:169 Existing template will be overwritten, as overwrit
Dec 03 21:28:32 k8s1 filebeat [18061]: 2019-12-03T21:28:32.615Z INFO
template/load.go:108 Try loading template filebeat-7.4.2 to Elasticse
Dec 03 21:28:32 k8s1 filebeat [18061]: 2019-12-03T21:28:32.939Z INFO
template/load.go:100 template with name 'filebeat-7.4.2' loaded.
Dec 03 21:28:32 k8s1 filebeat [18061]: 2019-12-03T21:28:32.939Z INFO
[index-management] idxmgmt/std.go:289 Loaded index template.
Dec 03 21:28:33 k8s1 filebeat [18061]: 2019-12-03T21:28:33.274Z INFO
[index-management] idxmgmt/std.go:300 Write alias successfully g
Dec 03 21:28:33 k8s1 filebeat [18061]: 2019-12-03T21:28:33.277Z INFO
pipeline/output.go:105
<...>

Discovery 17: Create System Dashboard
Focused on Metrics
Introduction
You will continue to examine how you can use Kibana along with logging utilities to gain better visibility
into server and application performance. This activity shows how to install Metricbeat on an Ubuntu server
to collect and send logs to a remote Elasticsearch or Kibana server. You will explore how to configure
Metricbeat to extract the logs from a specific application and how to visualize these logs from Kibana using
some of the predefined dashboards. Finally, you will learn how to use Metricbeat to collect the logs from all
Docker containers.

Topology

Job Aid

Device Information

Device Description FQDN/IP Address Credentials

Student Workstation Linux Ubuntu VM 192.168.10.10 student, 1234QWer

k8s1 Kubernetes 192.168.10.21 student, 1234QWer

k8s2 Kubernetes 192.168.10.22 student, 1234QWer

Command List
The table describes the commands that are used in this activity. The commands are listed in alphabetical
order so that you can easily locate the information that you need. Refer to this list if you need configuration
command assistance during the lab activity.

Command Description

ansible-playbook pb.install_metricbeat.yml -K Ansible playbook execution to complete the
Metricbeat install on the Kubernetes servers
The flag “-K” (uppercase K) indicates to prompt for
the sudo password. This is the same password as
the login indicated above (1234QWer)

cd directory name To change directories within the Linux file system,


use the cd command. You will use this command to
enter into a directory where the lab scripts are
housed. You can use tab completion to finish the
name of the directory after you start typing it.

/home/student/labs/lab17/scripts/copy_metricbeat.sh Script to copy via SCP the Metricbeat Debian
installation file. Copies the file from /tmp/metricbeat-
7.4.2-amd64.deb to the Kubernetes servers

ssh student@k8s# Uses the SSH application to open a remote terminal
session to the Kubernetes servers, where # is the
server number (1-3)

systemctl systemctl may be used to introspect and control the
state of the "systemd" system and service manager

sudo metricbeat modules enable system Enables the system module for the Metricbeat
process

sudo metricbeat modules list List the modules available and enabled/disabled
within Metricbeat

systemctl Keywords
The systemctl command has several keywords that are used for managing Linux operating system
processes. In this exercise you will use root keywords status, enable, and start.
Root systemctl Keywords

These are the top level systemctl command keywords:

Keyword Description

enable process_name This keyword sets the process to run on startup of the operating system

Keyword Description

restart process_name This keyword restarts a process that is already running, or starts a
process that was previously stopped

start process_name This keyword starts the process name

status process_name This keyword shows the status of the given processes name

stop process_name This stops a process if it is running, leaves it stopped if already stopped

Task 1: Install Metricbeat on Kubernetes Servers


In this task, you will install Metricbeat metrics shipper via Ansible Playbook. The playbook will set up
Metricbeat from the source Debian file located at /tmp/metricbeat-7.4.2-amd64.deb to each of the
Kubernetes servers (k8s1 and k8s2). The file will get copied via SCP from the Student Workstation to each
of the Kubernetes servers to the /tmp/metricbeat-7.4.2-amd64.deb location. From there, the playbook will
install Metricbeat from the .deb file in the tmp folder. Once installation is verified, the template file will be
used to generate a configuration for Metricbeat.

Activity

Review Ansible Playbook and Jinja Template

Step 1 In the student workstation, find and open Visual Studio Code by clicking the Visual Studio Code icon.

Step 2 From the Visual Studio Code top navigation bar, choose Terminal > New Terminal [Ctrl-Shift-`].

Step 3 Navigate to the terminal section at the bottom of Visual Studio Code.

Step 4 Within the Visual Studio Code terminal, change the directory to ~/labs/lab17/playbooks using the cd
~/labs/lab17/playbooks command.

Step 5 Examine the pb.install_metricbeat.yml playbook file and note each of the playbook tasks.

Update the Jinja Template File
In the templates folder of the playbooks, you will find a metricbeat.yml.j2 Jinja template that is used for
generating the configuration file for the Metricbeat application. This currently is the default file from the
Metricbeat installation. You will be updating it to refer to the host environment.

Step 6 In the templates directory, open the metricbeat.yml.j2 file for editing.

Step 7 Modify line 37 from

#tags: ["service-X", "web-tier"]

To
tags: ["docker", "{{ inventory_hostname }}"]

Step 8 Modify line 67 from

host: "localhost:5601"

To
host: "192.168.10.10:5601"

Step 9 Modify line 94 from

hosts: ["localhost:9200"]

To

hosts: ["192.168.10.10:9200"]

Step 10 Save the changes.
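The playbook's template step amounts to substituting the Student Workstation address for localhost and enabling the tags line. A minimal stand-in using sed, shown here only to illustrate the transformation (the real play renders the Jinja2 template, which also injects the inventory hostname into the tags), is:

```shell
# Minimal sketch of the templating step: rewrite the three default lines that
# the steps above modify. Not the lab's actual mechanism (that is Jinja2).
render_beat_cfg() {
  sed -e 's/^#tags: \["service-X", "web-tier"\]/tags: ["docker"]/' \
      -e 's/localhost:5601/192.168.10.10:5601/' \
      -e 's/localhost:9200/192.168.10.10:9200/'
}

# Feed it the three default lines to see the rewritten output:
printf '%s\n' \
  '#tags: ["service-X", "web-tier"]' \
  'host: "localhost:5601"' \
  'hosts: ["localhost:9200"]' | render_beat_cfg
```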

Run the Ansible Playbook
Now that you have updated the Jinja template, you can use Ansible playbook to install Metricbeat with the
appropriate updates to the configuration file. The playbook copies over the necessary file to the /tmp
directory, runs the apt install, copies the metricbeat.yml configuration file from the modified template, and
finally restarts the service to get Metricbeat up and running.

Step 11 Execute the ansible-playbook pb.install_metricbeat.yml -K command to install Metricbeat. When


prompted, enter the Student Workstation password found in the Job Aids. Ensure that all the tasks are
completed successfully.

student@student-vm:lab17/playbooks$ ansible-playbook pb.install_metricbeat.yml -K
SUDO password:

PLAY [INSTALL METRICBEAT FROM LOCAL FILE WITH APT] *******************************

TASK [COPY METRICBEAT FROM SERVER TO HOST] ***************************************


changed: [k8s1 -> localhost]
changed: [k8s2 -> localhost]

TASK [GET FILES IN TMP FOLDER] ***************************************************


ok: [k8s1]
ok: [k8s2]

TASK [VERIFY METRICBEAT IS ON SERVER] ********************************************


ok: [k8s1] => {
"changed": false,
"msg": "Metricbeat file present, continuing"
}
ok: [k8s2] => {
"changed": false,
"msg": "Metricbeat file present, continuing"
}

TASK [INSTALL METRICBEAT FROM DEB FILE] ******************************************


changed: [k8s2]
changed: [k8s1]

TASK [GET STATE OF METRICBEAT] ***************************************************


ok: [k8s1]
ok: [k8s2]

TASK [ASSERT METRICBEAT INSTALLED] ***********************************************


ok: [k8s1] => {
"changed": false,
"msg": "All assertions passed"
}
ok: [k8s2] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [TEMPLATE METRICBEAT YAML FILE TO K8S HOST] *********************************


ok: [k8s1]
ok: [k8s2]

TASK [RESTART THE METRICBEAT SERVICE] ********************************************


changed: [k8s1]
changed: [k8s2]

TASK [VERIFY RESTARTED] **********************************************************


ok: [k8s1] => {
"changed": false,
"msg": "Metricbeat successfully restarted"
}
ok: [k8s2] => {
"changed": false,

"msg": "Metricbeat successfully restarted"
}

PLAY RECAP ***********************************************************************


k8s1 : ok=9 changed=3 unreachable=0 failed=0
k8s2 : ok=9 changed=3 unreachable=0 failed=0

Task 2: Create a Graph in Kibana


In this task, you will explore common metrics used in monitoring hosts. Metricbeat is installed and is
sending data to Elasticsearch. Kibana is a visualizer that provides quality graphs and historical performance
information. These include metrics such as CPU and memory utilization data.

Activity

Discover Logs
This activity will get you familiar with the Kibana Discover section. First you will refresh the indices to
ensure that all the fields are available and indexed.

Step 1 In the Student Workstation, open a web browser and navigate to http://localhost:5601.

Step 2 Click the management icon on the left menu. Use the expand option if you do not find it.

Step 3 Click Index Patterns and choose the first metricbeat-* option that is available.

Step 4 Click Refresh Field List (a reload icon). This step is required to load the new fields from the incoming
Metricbeat data.

Step 5 Click the Visualize icon on the left menu (graphing icon) to start a visualization.

Step 6 Click Create new visualization and then choose Line. You will be adding a line graph to chart the average
CPU load for K8S1 and K8S2 combined.

Step 7 Set source to metricbeat-*.

Step 8 Add an X-axis. Click +Add under buckets and select X-axis.

Step 9 Set the Aggregation to Date Histogram. This selection will autofill Field to @timestamp and Minimum
interval to Auto.

Step 10 Set the Y-Axis. Expand the Y-axis Count selection, choose Aggregation, and set it to Average. Set the
Field to system.load.1.

Set the Custom label field to cpu 1min.

Step 11 Click Apply changes at the top of the left column (the icon that looks like a play button).

Explore the Graphs
You will explore some of the other graphing options that are available in Kibana version 7.4. In these steps
you will split the previous graph to show the average CPU load from each individual K8s server.

Step 12 Scroll down to the bottom on the left-hand panel of Buckets.

Step 13 Click Add and then Split Series.

Step 14 In Sub aggregation, choose Terms. In the Field, choose agent.hostname.

Step 15 Click Apply at the top of the navigation pane.

Now you can see two line graphs, one for each of the Kubernetes servers.

Add a Filter to the Graph
Now you will add a filter to the graph so that you can filter hosts based on the configured tags. Both servers
should still appear, because the tags applied earlier are the same. In these steps, you will show the results of
only one of the K8s servers by using a filter.

Step 16 On the upper left just below the search bar, click Add filter.

Step 17 Set the Field to tags, Operator to is, and the Value drop-down to docker. Click Save. You now see both
servers again, as they are both tagged with docker.

Step 18 Click x next to the tags to remove the tag.

Step 19 Add a new tag search by setting the Field to tags, Operator to is, and the Value drop-down to k8s2. Click
Save.

Task 3: Explore Predefined Dashboards
Metricbeat comes with predefined dashboards. You will load these dashboards into Kibana and explore the
dashboards.

Activity

Activate the Dashboards


Metricbeat comes with predefined dashboards. To load them to your Kibana instance, you need to run the
sudo metricbeat setup --dashboards command from k8s1.

Step 1 Establish a new SSH session to the k8s1 server using the ssh student@k8s1 command.

Step 2 Issue the sudo metricbeat setup --dashboards command.

student@k8s1:~$ sudo metricbeat setup --dashboards
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
student@k8s1:~$

Explore the Dashboards


You will now explore the predefined dashboards.

Step 3 In the web browser, click the Dashboard icon on the left-hand navigation menu (four rectangles of varying
size and orientation).

Step 4 Search for Metricbeat System, and choose [Metricbeat System] Overview ECS. This is the main System
Overview with no filters, so all hosts that are sending metrics will appear here.

Step 5 In the new search bar within the Dashboard, search for host.name and select host.name in the filter results.

Step 6 Choose : for equals.

Step 7 Choose k8s1 from the next filter pop-up.

Step 8 Click Update on the upper right-hand corner. You will be taken to the system overview of just the k8s1
server.

Step 9 Click System Overview in the navigation for the dashboard. You will be taken to an overview of the system.
Explore the metrics.

Step 10 Click Host Overview in the navigation for the dashboard. You will be taken to an overview of the host.
Explore the metrics.

Step 11 Click Containers overview in the navigation for the dashboard. You will see metrics related to containers
running on the host.

Discovery 18: Use Alerts Through Kibana
Introduction
As more log and metric data is gathered and visualized in your environment, it becomes critical to generate
alerts at specific events or thresholds. This lab will set up an alert to trigger when a particular metric
threshold has been crossed.

Topology

Job Aid

Device Information

Device Description FQDN/IP Address Credentials

Student Workstation Linux Ubuntu VM 192.168.10.10 student, 1234QWer

k8s1 Kubernetes 192.168.10.21 student, 1234QWer

k8s2 Kubernetes 192.168.10.22 student, 1234QWer

Command List
The table describes the commands that are used in this activity. The commands are listed in alphabetical
order so that you can easily locate the information that you need. Refer to this list if you need configuration
command assistance during the lab activity.

Command Description

cd directory name To change directories within the Linux file system, use the cd
command. You will use this command to enter into a directory
where the lab scripts are housed. You can use tab completion
to finish the name of the directory after you start typing it.

./generate_cpu.py Python script to generate CPU usage. Used along with the
alerting lab

netstat -tl Netstat listing of TCP ports listening


-t flag: TCP Ports
-l flag: Listening Ports

ssh student@k8s# Uses the SSH application to open a Remote Terminal session
to the Kubernetes servers, where # is the server number (1-3)

Task 1: Create Alert in Kibana


Activity

Step 1 In the Student Workstation, open a terminal window and verify that Elasticsearch and Kibana are active and
listening. Use the netstat -lt command and locate the connections that run on TCP ports 5601, 9200, and
9300. Then click - to minimize the terminal window.

student@student-vm:$ netstat -lt


Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 localhost:53117 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:5601 0.0.0.0:* LISTEN
tcp 0 0 localhost:13668 0.0.0.0:* LISTEN
tcp 0 0 localhost:10180 0.0.0.0:* LISTEN
tcp 0 0 localhost:41261 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:9200 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:http 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:9300 0.0.0.0:* LISTEN
tcp 0 0 localhost:domain 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:ssh 0.0.0.0:* LISTEN
tcp 0 0 localhost:ipp 0.0.0.0:* LISTEN
tcp 0 0 localhost:6010 0.0.0.0:* LISTEN

Step 2 Open Chrome, navigate to Kibana at localhost:5601 and expand the navigation menu on the left.

Step 3 Choose the Management icon on the left menu (settings cog wheel).

Step 4 In the Elasticsearch section, choose License Management.

Step 5 On the right, in the Start a 30-day trial section, choose Start trial.

Step 6 In the window that pops up, choose Start my trial.

Step 7 In the Elasticsearch section, choose Watcher.

Step 8 Click Create and choose Create threshold alert.

Step 9 In the Create threshold alert page, set Name to watch-cpu.

Step 10 Set the Indices to query field to metricbeat-*.

Step 11 In the Time field, choose @timestamp and set the Run watch every to 1 minute.

Step 12 Set the Match the following condition to "WHEN max() OF system.process.cpu.total.norm.pct OVER
all documents IS ABOVE 0.25 FOR THE LAST 3 minutes."
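The Watcher condition above is essentially a window-max check. The sketch below re-implements that logic in plain Python for illustration only; the function name and the (timestamp, value) sample format are assumptions made for this example, not part of Watcher itself.

```python
import time

def threshold_breached(samples, threshold=0.25, window_s=180, now=None):
    """Return True when the max sample inside the window exceeds threshold.

    samples is a list of (unix_timestamp, value) pairs, mimicking
    "WHEN max() ... IS ABOVE 0.25 FOR THE LAST 3 minutes".
    """
    now = time.time() if now is None else now
    recent = [value for ts, value in samples if now - ts <= window_s]
    return bool(recent) and max(recent) > threshold
```

Samples older than the window are ignored, so a past spike cannot keep the alert firing once it ages out of the last three minutes.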

Step 13 Click Add action and select Logging. This will set logging to the local system. Review the other action
capabilities. The default logging text is predefined, but you can customize it.

Step 14 Click Log a sample message to see a sample. Then, click Create alert.

Step 15 The alert is displayed with status OK, which means it is watching the logs.

Generate CPU Load on the k8s1 Kubernetes Server
There is a Python script available on the k8s1 server that will generate some CPU load, which will trigger
an alert in Kibana. You will execute the script and then verify that the watcher has triggered an alert.

Step 16 In the student workstation, find and open Visual Studio Code by clicking the Visual Studio Code icon.

Step 17 From the Visual Studio Code top navigation bar, choose Terminal > New Terminal [Ctrl-Shift-`].

Step 18 Navigate to the terminal section at the bottom of Visual Studio Code and establish an SSH session to the
Kubernetes server k8s1 using the ssh student@k8s1 command.

Step 19 On the k8s1 server, change the directory to ~/scripts using the cd ~/scripts command.

student@k8s1:~$ cd ~/scripts

Step 20 Execute the ./generate_cpu.py command to generate some CPU load.

student@k8s1:~/scripts$ ./generate_cpu.py
Still working...(1%)
Still working...(2%)
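The actual generate_cpu.py script is not reproduced in this guide. The following is a hypothetical sketch of what such a load generator might look like, assuming it simply busy-loops for a fixed duration while printing progress markers like the ones shown above; the function name and structure are invented for this example.

```python
"""Hypothetical CPU-load generator, modeled on the output shown above."""
import math
import time

def burn_cpu(duration_s=180.0):
    """Busy-loop for duration_s seconds, printing coarse progress markers."""
    start = time.time()
    last_pct = -1
    sink = 0.0
    while (elapsed := time.time() - start) < duration_s:
        sink += math.sqrt(elapsed + 1.0)  # meaningless work to keep a core busy
        pct = int(100 * elapsed / duration_s)
        if pct > last_pct:
            print(f"Still working...({pct}%)")
            last_pct = pct
    return sink
```

Running burn_cpu for a few minutes drives one core to full utilization, which is enough to push system.process.cpu.total.norm.pct over the 0.25 threshold configured in the watch.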

Step 21 In your browser, wait for the alert to be triggered in Kibana.

Introduction to Prometheus and Instrumenting
Python Code for Observability
Prometheus is an Open Source project of the Cloud Native Computing Foundation (CNCF). SoundCloud
originally created Prometheus, which was then moved to the CNCF as a member project. This tool gathers,
stores, and alerts on metrics. Prometheus and other Open Source tools are possible solutions for providing
insight into and visualization of your metrics. Prometheus makes it easy to get metrics about applications.

Open Source Instrumentation Tools


• Databases
– Graphite (Whisper)
– InfluxDB
– Prometheus
• Visualization
– Grafana
– Kibana

There are several Open Source tools that are available and work with Prometheus. There are also several
competing projects that are in a similar segment as Prometheus.
Prometheus stores data in a time-series database (TSDB) that is optimized for storing data through
associated time and values at the time. Prometheus is not the only TSDB available. There are others such as
Whisper, which is part of the Graphite App, InfluxDB, and others.
InfluxDB is one of the more common TSDBs that is used in NetDevOps environments. It is quick to install
and make operational. The database is written in Go and was developed by InfluxData. InfluxDB provides
an SQL-like language that is served on TCP port 8086. Clustering InfluxDB servers requires licensing from
InfluxData.
The Graphite App with the Whisper database is free open-source software that can be used to store, collect,
and visualize time-series data. There are multiple parts to the tooling including Carbon, which is a listener
for time-series data, Whisper (the database), and the Graphite web app.
Grafana and Kibana are just two of the Open Source options for visualizing the data from these TSDBs. To
visualize data with Kibana, you need to have the data in the ELK stack. Working with Logstash to import
data from a TSDB into ELK will allow you to visualize the data in the same fashion as seen in the topic
“Introduction to Elasticsearch, Beats, and Kibana.”

Grafana is a popular Open Source tool that can display data from the TSDBs. With Grafana, you configure
the source of the data as the TSDB. Then you set up dashboards with queries against the TSDB information
to create the dashboards. Here is an example of a dashboard that shows the data volume total over time on
top with an overtime graph of the utilization for a Cisco Live event.

Prometheus
• Open-source system monitoring and alerting
• Originally built at SoundCloud
• CNCF
• Multidimensional data model with time series data by metric name
• Pulls data over HTTP

The Prometheus application is an open-source tool for systems monitoring and alerting. Prometheus was
originally built at SoundCloud and later moved to the CNCF. Prometheus describes itself as a
multidimensional data model with time series by metric name.
There are four main components of the Prometheus system. The Prometheus server in the middle of the
diagram, the Pushgateway seen on the left, the Alertmanager on the upper right, and a web user interface
layer for basic visualization of the data inside the database.

Prometheus Server
• Scrapes and stores time series data (pulls)
• Provides PromQL data source

The Prometheus server is responsible for scraping data from Prometheus exporters, storing it in the TSDB,
and presenting the data through a web server. The data is queried using the PromQL query language, which is
accessible via the HTTP server running on the Prometheus server.
The server configuration dictates how frequently the data is scraped over HTTP from the Prometheus
exporters. Prometheus exporters present metrics over an HTTP interface so that an HTTP GET can
retrieve the data. As long as the data is in the proper format, the server can read the data and put it into the
TSDB. The action is a pull from the exporter, with the server initiating the request.

Prometheus Exporter and Format


• Serves data to be scraped by the Prometheus server.
• Metric names may contain only [a-zA-Z0-9:_].
• snake_case is preferred to camelCase.
• # HELP is a description of the content.
• # TYPE indicates the type of metric.
• metric_name{label} value

# TYPE python_gc_objects_collected_total counter
python_gc_objects_collected_total{generation="0"} 1000.0
python_gc_objects_collected_total{generation="1"} 125.0
python_gc_objects_collected_total{generation="2"} 0.0
# HELP python_gc_objects_uncollectable_total Uncollectable object found during GC
# TYPE python_gc_objects_uncollectable_total counter
python_gc_objects_uncollectable_total{generation="0"} 0.0
python_gc_objects_uncollectable_total{generation="1"} 0.0
python_gc_objects_uncollectable_total{generation="2"} 0.0
# HELP python_info Python platform information
# TYPE python_info gauge
python_info{implementation="CPython",major="3",minor="7",patchlevel="5",version="3.7.5"} 1.0

A Prometheus exporter is an application that presents metrics for pulling or scraping by another application.
Typically, that application is a Prometheus server, but you can view the data yourself by navigating to the
metrics page, typically hosted at /metrics on a web server (though another URL can be defined).
The metrics have a particular format that must be followed. Metric names may contain only alphanumeric
characters, colons (:), and underscores (_). Colons are reserved for special circumstances, leaving
alphanumeric characters and underscores as your primary metric name characters. Snake case (snake_case)
is preferred over camel case (camelCase). The metric definition should begin with # HELP
<help_content> to describe the metric that is being gathered; this field has no bearing on the Prometheus
system itself. The # TYPE line before the metric defines the type of metric being collected.
The metric is then presented as metric_name{label} value. This information is collected into the TSDB.

Prometheus Metric Types
• Counter
• Gauge
• Histogram
• Summary

Prometheus metric types follow a common format that appears across many monitoring systems. You may
recognize the gauge and counter types from SNMP data points, if you have worked with SNMP data.
The counter metric type starts with 0 and counts up. It is a cumulative total of everything that has been seen.
A counter will return to 0 if the data rolls over the maximum size or if the service restarts. An example
would be interface bytes that are transmitted or received.
Gauges are for metrics that can arbitrarily go up and down. Gauges are used for measured values like CPU
utilization or temperature.
Histograms are a data type for metrics that can be sampled, such as buffer queue time or the response time
on a web server. For example, your buckets could cover the range between 0.1 and 0.25, then 0.26 to 0.50,
and so on.
Summaries sample observations like histograms, but summaries give a summary of information, not just
what fits in the buckets. Some example summaries include _sum and _count, which would provide a total
from the histograms.
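To make the counter, gauge, and histogram semantics concrete, here is a toy, standard-library-only sketch of how each type behaves. These classes and the bucket helper are teaching illustrations, not the prometheus_client API.

```python
class Counter:
    """Cumulative metric: can only go up; resets to zero on restart."""
    def __init__(self):
        self.value = 0.0

    def inc(self, amount=1.0):
        if amount < 0:
            raise ValueError("counters can only increase")
        self.value += amount

class Gauge:
    """Point-in-time metric: may be set arbitrarily up or down."""
    def __init__(self):
        self.value = 0.0

    def set(self, value):
        self.value = value

def bucket_counts(observations, buckets=(0.1, 0.25, 0.5, float("inf"))):
    """Cumulative 'le'-style histogram counts, as Prometheus exposes them."""
    return {le: sum(1 for obs in observations if obs <= le) for le in buckets}
```

Note that the histogram counts are cumulative: an observation of 0.2 lands in the 0.25 bucket and every larger bucket, which is why a _count summary can be read off the +Inf bucket.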

Prometheus Pushgateway
• Receiver of data for systems that cannot serve their own metrics page.
• Data is pushed to the gateway.
• Gateway is scraped by the Prometheus server.
• Only when you absolutely must, otherwise attempt to serve an independent metrics system.

A Prometheus Pushgateway is a place where you can gather metrics from jobs that are unable to present
their own metrics page. You would use this feature for short-lived jobs whose life span is too short for the
Prometheus server to scrape them directly. The Pushgateway then presents the metrics to Prometheus.
You would not want to use a Pushgateway all the time, because you lose some of the inherent benefits
of the Prometheus system.
• The system is a single point of failure with a single Pushgateway for jobs.
• You lose the ability to get status of the service.
• The Pushgateway never forgets information that is pushed to it. Such information will remain forever
unless the metric is deleted manually via the API.

Prometheus has also developed an SNMP exporter to help in gathering information about SNMP-enabled
devices and getting the metrics into Prometheus. The SNMP exporter gathers the metrics via SNMP and
then provides a web page that the Prometheus server can scrape. You can find more information about this
exporter at the Prometheus GitHub page: https://github.com/prometheus/snmp_exporter.

Prometheus Alertmanager
• Groups, deduplicates, and routes to the proper alert system.
• Email.
• OpsGenie and PagerDuty.
• Silencing of alerts.

The Prometheus Alertmanager is, as the title describes, an alert manager. It is the part of the system that
manages the necessary alerts. The Alertmanager will group, deduplicate, and route alerts to the proper
system. Routing could be email for some alerts, out to OpsGenie or PagerDuty for other alerts, and out to
your own webhook destination for others or all alerts. The Alertmanager is where you would silence any
planned events for devices as well.
A webhook is a method of sending data over an HTTP POST request to a destination that is listening for
incoming data. This mechanism is very common for exchanging information between applications. For
example, in Cisco Webex Teams, you send a webhook to your organization's URL. The webhook provides
the channel to which you want to post a message, the message itself, and the source of the message. This
information would then be present in the chat channel.
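Because a webhook destination is just an HTTP endpoint accepting a POST with a JSON body, a minimal receiver can be sketched with the Python standard library alone. This is an illustration only: the handler, the helper name, and the alert field names in the payload are invented for this example and do not reflect Alertmanager's actual payload schema.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    """Accept an HTTP POST whose body is a JSON alert payload."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # A real receiver would route, page, or log the alert here;
        # we stash it on the server object for inspection.
        self.server.last_alert = payload
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"received")

    def log_message(self, *args):
        pass  # silence per-request console logging

def make_receiver(port=0):
    """Bind a receiver; port 0 asks the OS for a free ephemeral port."""
    server = HTTPServer(("127.0.0.1", port), WebhookHandler)
    server.last_alert = None
    return server
```

Calling server.handle_request() blocks until one webhook arrives, after which server.last_alert holds the decoded payload.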

Prometheus Python Clients


• Python client
• Web frameworks
– Flask
– Django

Flask app

from prometheus_flask_exporter import PrometheusMetrics
metrics = PrometheusMetrics(app)

Python app (from the client_python GitHub repository)

import random
import time

from prometheus_client import start_http_server, Summary

REQUEST_TIME = Summary('req_proc_seconds', 'Time processing request')

@REQUEST_TIME.time()
def process_request(t):
    """A dummy function that takes some time."""
    time.sleep(t)

if __name__ == '__main__':
    # Start up the server to expose the metrics.
    start_http_server(8000)
    # Generate some requests.
    while True:
        process_request(random.random())

There are various Python Prometheus clients that are available for use in your applications. If you are
looking to integrate Prometheus metrics into your Python web application, both the Django and Flask
frameworks have libraries that accelerate this process. With the Prometheus Flask exporter, it takes an
extra two lines of Python code to make the page operational. With these two lines, you now have a
Prometheus exporter available at the /metrics path of the web application.
There is also a Python client that you can integrate with a Python program that is not a web application. You
will need to establish a web server for Prometheus to scrape, from which Prometheus will get the
metrics and application health information.
In the Python app portion of the figure, you have the import from the Prometheus client, an HTTP server,
and a Summary stat. REQUEST_TIME is assigned the Summary metric type req_proc_seconds with a help
text of 'Time processing request'. The decorated function uses the time library to sleep for a random
number of seconds, creating some random metrics. In the __main__ block, start_http_server(8000)
starts the Prometheus metrics exporter on port 8000. The application then keeps generating work with the
while True: loop, calling the function with a random.random() argument.

1. Which Prometheus metric type would be used for a CPU utilization metric?
a. counter
b. gauge
c. histogram
d. summary

Discovery 19: Instrument Application Monitoring
Introduction
In this activity, you will add new metrics specifically about the Net Inventory application. You will install a
Python module into the Python application, check the changes into the Git repository to autodeploy the
application, set up Metricbeat to send the application's logs to Elasticsearch and Kibana, and explore the
graphs generated with Metricbeat.

Topology

Job Aid
Device Information

Device Description FQDN/IP Address Credentials

Student Workstation Linux Ubuntu VM 192.168.10.10 student, 1234QWer

GitLab Server CI/CD and Git 192.168.10.20 student, 1234QWer


Repository

k8s1 Kubernetes 192.168.10.21 student, 1234QWer

Command List
The table describes the commands that are used in this activity. The commands are listed in alphabetical
order so that you can easily locate the information that you need. Refer to this list if you need configuration
command assistance during the lab activity.

Command Description

ansible-playbook pb.install_metricbeat.yml -K Ansible playbook execution to complete the Metricbeat
install on the Kubernetes servers
The flag "-K" (uppercase K) indicates to prompt for the
sudo password. This is the same password as the login
indicated above (1234QWer)

cd directory name To change directories within the Linux file system, use the
cd command. You will use this command to enter into a
directory where the lab scripts are housed. You can use
tab completion to finish the name of the directory after you
start typing it.

/home/student/scripts/copy_metricbeat.sh Script to copy via SCP the Metricbeat Debian installation
file. Copies the file from /tmp/Metricbeat-7.2.4-amd64.deb
to the Kubernetes servers

/home/student/scripts/generate_ssh_failures.sh Script that will attempt to log in to the Kubernetes servers
with a wrong credential, generating log messages to be
visualized

netstat -tl Netstat listing of TCP ports listening


-t flag: TCP Ports
-l flag: Listening Ports

ssh student@k8s# Uses the SSH application to open a Remote Terminal
session to the Kubernetes servers, where # is the server
number (1-3)

systemctl systemctl may be used to introspect and control the state
of the "systemd" system and service manager

sudo metricbeat modules enable system Enables the system module for the Metricbeat process

sudo metricbeat modules list List the modules available and enabled/disabled within
Metricbeat

vi Vim is a highly configurable text editor for efficiently
creating and changing any kind of text. It is included as "vi"
with most UNIX systems

systemctl Keywords
The systemctl command has several keywords that are used for managing Linux operating system
processes. In this exercise, you will use root keywords status, enable, and start.

Root systemctl Keywords


These are the top level systemctl command keywords:

Keyword Description

enable process_name This keyword sets the process to run on startup of the operating system

restart process_name This keyword restarts a process that is already running, or starts a
process that was previously stopped

start process_name This keyword starts the named process

status process_name This keyword shows the status of the given process name

stop process_name This keyword stops a process if it is running, and leaves it stopped if
already stopped

Task 1: Update Net Inventory with Prometheus Metrics


You will update the Net Inventory application with the Prometheus metrics exporter for Flask, adding a
metrics page to the net_inventory project at http://k8s1:5000/metrics.

Activity

Gather Code Repository and Update Necessary Files

Step 1 In the student workstation, find and open Visual Studio Code by clicking the Visual Studio Code icon.

Step 2 From the Visual Studio Code top navigation bar, choose Terminal > New Terminal [Ctrl-Shift-`].

Step 3 Navigate to the terminal section at the bottom of Visual Studio Code.

Step 4 Within the Visual Studio Code terminal, change the directory to ~/labs/lab19 using the cd ~/labs/lab19
command.

student@student-vm$ cd ~/labs/lab19/
student@student-vm:labs/lab19$

Step 5 Use the git clone git@gitlab:cisco-devops/net_inventory.git command to clone the net_inventory
repository.

student@student-vm:labs/lab19$ git clone https://git.lab/cisco-devops/net_inventory
Cloning into 'net_inventory'...
warning: redirecting to https://git.lab/cisco-devops/net_inventory.git/
remote: Enumerating objects: 588, done.
remote: Counting objects: 100% (588/588), done.
remote: Compressing objects: 100% (172/172), done.
remote: Total 588 (delta 403), reused 587 (delta 403)
Receiving objects: 100% (588/588), 3.11 MiB | 15.86 MiB/s, done.
Resolving deltas: 100% (403/403), done.
student@student-vm:labs/lab19$

Step 6 Change directory to the net_inventory using the cd net_inventory command.

student@student-vm:labs/lab19$ cd net_inventory/
student@student-vm:lab19/net_inventory (master)$

Update the Python Files for Metrics


You will be updating Python files for the Net Inventory project to make application-specific metrics
available. These metrics are not related to the host itself, but to the HTTP services that are running on the
host. After you update the Python files, you will submit a merge request to have the code tested and
verified.

Step 7 Within the Visual Studio Code Explorer, open the ~/labs/lab19/net_inventory/net-inventory-config.yml
file for editing.

Step 8 Modify the FRONTEND: DEBUG: key/subkey value to false and save the file.

Step 9 Open the ~/labs/lab19/net_inventory/app.py file for editing.

Step 10 Below line 5, insert a new line with the following content: from prometheus_flask_exporter import
PrometheusMetrics. This allows the Python application to import the PrometheusMetrics class and
create the metrics page.

Step 11 Insert a new line below the line that says login_manager = LoginManager(). Add the metrics =
PrometheusMetrics(app=None) content. This line initializes the PrometheusMetrics extension
and assigns it to the variable metrics for later use.

Step 12 Add the following lines at the end of the register_extensions(app) function:

try:
    metrics.init_app(app)
except ValueError:
    pass

There are multiple instances of the Net Inventory application being loaded. The first load of
the application works well, but with PrometheusMetrics added, subsequent loads would raise
duplicate-registration errors; the except ValueError clause absorbs them.
The metrics.init_app(app) statement initializes the metrics page for the Flask application and
starts to serve the /metrics page.

Step 13 Save the file.
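The guard you added in Step 12 can be seen in isolation with a small standard-library sketch. The MetricsRegistry class below is a hypothetical stand-in for the Prometheus client's process-wide registry, which raises ValueError when the same metric is registered twice:

```python
# Hypothetical stand-in for the Prometheus client's process-wide registry;
# the real client raises ValueError on duplicate metric registration.
class MetricsRegistry:
    def __init__(self):
        self._names = set()

    def register(self, name):
        if name in self._names:
            raise ValueError("Duplicated timeseries in CollectorRegistry: " + name)
        self._names.add(name)


def init_metrics(registry, name="flask_http_request_total"):
    # Mirrors the guard added to register_extensions(app): the first
    # initialization registers the metric, later ones are silently skipped.
    try:
        registry.register(name)
        return True
    except ValueError:
        return False


registry = MetricsRegistry()
print(init_metrics(registry))  # True: the first load registers the metric
print(init_metrics(registry))  # False: a reload would otherwise crash the app
```

Swallowing the ValueError is what lets the application be loaded more than once without crashing.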

Commit and Push Code


You updated the code. Now you will create a new branch for the code, make a new commit, and push the
commit to the remote git repository.

Step 14 Create a new branch called lab19-add_metrics using the git checkout -b lab19-add_metrics command.

student@student-vm:lab19/net_inventory (master)$ git checkout -b lab19-add_metrics


Switched to a new branch 'lab19-add_metrics'

Step 15 Add the updated files to the git index using the git add -u command.

student@student-vm:lab19/net_inventory (lab19-add_metrics)$ git add -u

Step 16 Commit the file to git using the git commit -m "Lab 19: Added Prometheus Exporter for metrics to Net
Inventory" command.

student@student-vm:lab19/net_inventory (lab19-add_metrics)$ git commit -m "Lab 19: Added Prometheus Exporter for metrics to Net Inventory"
[lab19-add_metrics fd70994] Lab 19: Added Prometheus Exporter for metrics to Net
Inventory
2 files changed, 2 insertions(+)

Step 17 Push the branch to GitLab using the git push -u origin lab19-add_metrics command. When prompted,
provide your GitLab credentials.

student@student-vm:lab19/net_inventory (lab19-add_metrics)$ git push -u origin lab19-add_metrics
Username for 'https://git.lab': student
Password for 'https://[email protected]':
Counting objects: 4, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 388 bytes | 388.00 KiB/s, done.
Total 4 (delta 3), reused 0 (delta 0)
remote:
remote: To create a merge request for lab19-add_metrics, visit:
remote: https://git.lab/cisco-devops/net_inventory/merge_requests/new?merge_request
%5Bsource_branch%5D=lab19-add_metrics
remote:
To https://git.lab/cisco-devops/net_inventory.git
3464d30..fd70994 lab19-add_metrics -> lab19-add_metrics
Branch 'lab19-add_metrics' set up to track remote branch 'lab19-add_metrics' from
'origin'.

Merge Request
Now that the code is in the git remote repository under a new branch, you must submit a merge request to
have the code tested and, later, deployed to the server.

Step 18 From the Chrome browser, navigate to https://git.lab.

Step 19 Accept the privacy notifications, log in with the credentials that are provided in the Job Aids, and click Sign
in.

Step 20 From the list of projects, choose the cisco-devops/net_inventory project.

Step 21 In the upper right corner, click the Create merge request button.

Step 22 Review the autocompleted information, scroll down, and click the Submit merge request button. Then,
click the Merge button to complete the merge.

Step 23 Once the pipeline is completed, open a new browser tab and navigate to http://k8s1:5000, where you
will find the bare Net Inventory application. You can populate the application back end with devices using
the populate_inventory k8s1:5001 script.

Step 24 Open another browser tab and navigate to http://k8s1:5000/metrics. Here you will see text-based metrics
that you can read and export.
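The /metrics page uses the plain-text Prometheus exposition format. The exact metric names depend on the exporter version; a fragment of the Flask exporter's output generally resembles the following (the values shown here are illustrative, not from the lab):

```
# HELP flask_http_request_duration_seconds Flask HTTP request duration in seconds
# TYPE flask_http_request_duration_seconds histogram
flask_http_request_duration_seconds_count{method="GET",path="/",status="200"} 42.0
# HELP flask_http_request_total Total number of HTTP requests
# TYPE flask_http_request_total counter
flask_http_request_total{method="GET",status="200"} 42.0
```

Each line is a metric name, a set of labels in braces, and a value; this is the format that Metricbeat's Prometheus module scrapes later in the lab.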

Install the Metricbeat Application Via Ansible
You will install the Metricbeat metrics shipper via an Ansible playbook. The playbook is located in the
~/labs/lab19/playbooks directory of the Student Workstation. The pb.install_metricbeat.yml playbook copies
the source Debian file, $HOME/metricbeat-7.4.2-amd64.deb, to the hosts, verifies that it was copied
correctly, and installs it via apt. The playbook also verifies that the installation succeeded. Manual
installation may be required if the playbook is unsuccessful.
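As a rough sketch, the structure of such a playbook could look like the following. The task names mirror those shown in the playbook run later in this lab, but the host group, module arguments, and omitted verification tasks are illustrative assumptions, not the lab's exact playbook:

```yaml
---
- name: INSTALL METRICBEAT FROM LOCAL FILE WITH APT
  hosts: k8s          # assumed inventory group containing k8s1 and k8s2
  become: true
  tasks:
    - name: COPY METRICBEAT FROM SERVER TO HOST
      copy:
        src: "{{ lookup('env', 'HOME') }}/metricbeat-7.4.2-amd64.deb"
        dest: /tmp/metricbeat-7.4.2-amd64.deb

    - name: INSTALL METRICBEAT FROM DEB FILE
      apt:
        deb: /tmp/metricbeat-7.4.2-amd64.deb

    - name: TEMPLATE METRICBEAT YAML FILE TO K8S HOST
      template:
        src: templates/metricbeat.yml.j2
        dest: /etc/metricbeat/metricbeat.yml

    - name: RESTART THE METRICBEAT SERVICE
      service:
        name: metricbeat
        state: restarted
```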

Update the Jinja Template File


In the templates folder of the playbooks, there is a Jinja template named metricbeat.yml.j2 that is used for
generating the configuration file for the Metricbeat application. It is currently the default file from the
Metricbeat installation. You will update it to refer to the host environment. You will also use the Jinja
templating language with Ansible so that each host's inventory hostname is assigned as a tag on that host.
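As an illustration of the second point, a tags line templated with {{ inventory_hostname }} renders differently on each host at deploy time (the rendered values assume the lab's k8s1/k8s2 inventory):

```yaml
# Line in templates/metricbeat.yml.j2:
tags: ["docker", "{{ inventory_hostname }}"]

# Rendered by Ansible's template module on each host:
#   on k8s1 -> tags: ["docker", "k8s1"]
#   on k8s2 -> tags: ["docker", "k8s2"]
```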

Step 25 In the ~/labs/lab19/playbooks/templates directory, open the metricbeat.yml.j2 file for editing.

Step 26 Modify line 37 from
#tags: ["service-X", "web-tier"]
To
tags: ["docker", "{{ inventory_hostname }}"]

Step 27 Modify line 67 from

host: "localhost:5601"
To
host: "192.168.10.10:5601"

Step 28 Modify line 94 from

hosts: ["localhost:9200"]
To
hosts: ["192.168.10.10:9200"]

Step 29 Save the changes.

Run the Ansible Playbook
Now that you have updated the Jinja template, you can use the Ansible playbook to install Metricbeat with the
appropriate updates to the configuration file. The playbook copies the necessary file to the /tmp
directory, runs the apt install, copies the metricbeat.yml configuration file from the template you just
modified, and finally restarts the service to get Metricbeat up and running.

Step 30 Execute the ansible-playbook pb.install_metricbeat.yml -K command to install Metricbeat. When
prompted, enter the password used to log in to the k8s servers. Ensure that all the tasks are completed
successfully.

student@student-vm:lab19/playbooks$ ansible-playbook pb.install_metricbeat.yml -K
SUDO password:

PLAY [INSTALL METRICBEAT FROM LOCAL FILE WITH APT] *******************************

TASK [COPY METRICBEAT FROM SERVER TO HOST] ***************************************


changed: [k8s1 -> localhost]
changed: [k8s2 -> localhost]

TASK [GET FILES IN TMP FOLDER] ***************************************************


ok: [k8s1]
ok: [k8s2]

TASK [VERIFY METRICBEAT IS ON SERVER] ********************************************


ok: [k8s1] => {
"changed": false,
"msg": "Metricbeat file present, continuing"
}
ok: [k8s2] => {
"changed": false,
"msg": "Metricbeat file present, continuing"
}

TASK [INSTALL METRICBEAT FROM DEB FILE] ******************************************


changed: [k8s2]
changed: [k8s1]

TASK [GET STATE OF METRICBEAT] ***************************************************


ok: [k8s1]
ok: [k8s2]

TASK [ASSERT METRICBEAT INSTALLED] ***********************************************


ok: [k8s1] => {
"changed": false,
"msg": "All assertions passed"
}
ok: [k8s2] => {
"changed": false,
"msg": "All assertions passed"
}

TASK [TEMPLATE METRICBEAT YAML FILE TO K8S HOST] *********************************


ok: [k8s1]
ok: [k8s2]

TASK [RESTART THE METRICBEAT SERVICE] ********************************************


changed: [k8s1]
changed: [k8s2]

TASK [VERIFY RESTARTED] **********************************************************


ok: [k8s1] => {
"changed": false,
"msg": "Metricbeat successfully restarted"
}
ok: [k8s2] => {
"changed": false,

"msg": "Metricbeat successfully restarted"
}

PLAY RECAP ***********************************************************************


k8s1 : ok=9 changed=3 unreachable=0 failed=0
k8s2 : ok=9 changed=3 unreachable=0 failed=0

Modify Metricbeat Configuration to Gather Prometheus Metrics


Now that Metricbeat is set up on the k8s1 Kubernetes server, you must configure Metricbeat to send the
Prometheus metrics back to the ELK stack on the Student Workstation. To do this, you will enable the
Prometheus module and restart the Metricbeat service.

Step 31 Establish a new SSH session to the k8s1 server using the ssh student@k8s1 command.

student@student-vm:labs$ ssh student@k8s1


Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-62-generic x86_64)

Last login: Thu Dec 12 00:24:33 2019 from 192.168.10.10


student@k8s1:~$

Step 32 Issue the sudo metricbeat modules enable prometheus command to enable the module.

student@k8s1:~$ sudo metricbeat modules enable prometheus


student@k8s1:~$

Step 33 Using the vi editor, verify that the configuration is set up to scrape the local web host. Use the sudo vi
/etc/metricbeat/modules.d/prometheus.yml command. The configuration must look like the following:

- module: prometheus
  period: 10s
  hosts: ["localhost:5000"]
  metrics_path: /metrics

student@k8s1:~$ sudo vi /etc/metricbeat/modules.d/prometheus.yml


[sudo] password for student:
- module: prometheus
  period: 10s
  hosts: ["localhost:5000"]
  metrics_path: /metrics
student@k8s1:~$

Step 34 Restart Metricbeat to start sending Prometheus metrics to the ELK stack with the sudo systemctl restart
metricbeat command.

student@k8s1:~$ sudo systemctl restart metricbeat

Task 2: Visualize a Graph in Kibana


In this task, you will use Kibana to visualize the metrics that are collected and sent into the ELK stack. You
will create a chart graphing the pages requested from the website by a script that curls the Net Inventory
application.

Activity

Discover Logs
This activity will familiarize you with the Kibana Discover section, from which you will reach the
visualization page. You will create new line graphs with Kibana to visualize the application reporting.

The first few steps update the Metricbeat field list in Kibana. Reloading the fields tells Kibana to index any
new fields so that searches and dashboards can use them.

Step 1 In the Student Workstation, open a web browser and navigate to http://localhost:5601.

Step 2 Click the management icon on the left menu. Use the expand option if you do not find it.

Step 3 Click Index Patterns and choose the first metricbeat-* option that is available. Click Create index pattern.

Step 4 Click Refresh Field List (a reload icon).

Step 5 Click the Visualize icon on the left menu (graphing icon) to start a visualization.

Step 6 Click Create new visualization and then choose Line.

Step 7 Set source to metricbeat-*.

Add the X-Series and Y-Series Definitions to the Chart
To display multiple graphs in the chart, you must edit the X-series. This is done within the Buckets part of
the navigation. The order of operations is important; you will receive an error if these steps are done out of
order.

Step 8 In Buckets, click Add.

Step 9 Choose Split Series.

Step 10 In the first X split-series, set Aggregation to Terms and Field to prometheus.labels.path. Click Add to add
the X-axis definition.

Note In the Field box, start typing the field name until it matches the required value.

Step 11 In this second X series, set Sub aggregation to Terms and Field to prometheus.labels.method. Click Add
to add the second X-axis definition.

Step 12 Set the Sub aggregation to Date Histogram and Field set to @timestamp.

Step 13 Optionally, you may want to collapse all X-series Buckets, as these are now complete.

Step 14 Under the metrics heading, expand the Y-axis count. Set the Aggregation to Derivative. This will expand
another selection.

Step 15 Set the new Aggregation to Max and the Field to
prometheus.metrics.flask_http_request_duration_seconds_count. Click the apply button at the top of
the navigation section.
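The Derivative aggregation is needed because flask_http_request_duration_seconds_count is a cumulative counter that only ever increases. Charting the per-bucket difference of the Max value turns it into a per-interval request count. A small Python sketch with illustrative values:

```python
# Max counter value observed in each date-histogram bucket (illustrative data).
cumulative = [100, 100, 120, 120, 125]

# Kibana's Derivative aggregation plots the change between adjacent buckets,
# i.e. the number of requests served during each interval.
derivative = [later - earlier for earlier, later in zip(cumulative, cumulative[1:])]
print(derivative)  # [0, 20, 0, 5] -> the spike marks where traffic was generated
```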

Step 16 Review the graphs that have a flat line.

Generate Traffic and Refresh the Graph


With the visualization in place, you will generate HTTP traffic against the Net Inventory application and
then refresh the chart to see the resulting data points appear in the graph.

Step 17 In the Visual Studio Code terminal, issue the ssh k8s1 command.

student@student-vm:labs$ ssh k8s1


Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-62-generic x86_64)

Last login: Thu Dec 12 02:57:54 2019 from 192.168.10.10

Step 18 Issue the scripts/generate_web_requests.sh command, which will generate 20 web page requests.

student@k8s1:~$ scripts/generate_web_requests.sh
...output omitted for brevity...
student@k8s1:~$

Step 19 After the script is complete, wait a short time (about 10 seconds, which is the scraping frequency of
Metricbeat), then select the refresh button in the web browser's upper right.

Step 20 You will now see a line appear for the spike in request traffic.

Discovery 20: Use Alerts and Thresholds to
Notify Webhook Listener and Webex Teams Room
Introduction
This lab will introduce you to monitoring and how to send a webhook to another system based on
conditions. You will run a Python script that will poll the Prometheus metrics of a web application. Based
on the metrics that are collected, an alert will fire a webhook to a webhook listener that will output the sent
data. In reality, this would be a message to Webex Teams or to another system to notify and start another
process based on the thresholds.

Topology

Job Aid

Device Information

Device Description FQDN/IP Address Credentials

Student Workstation Linux Ubuntu VM 192.168.10.10 student, 1234QWer

GitLab GitLab Ubuntu VM 192.168.10.20 student, 1234QWer

k8s1 Kubernetes Host 192.168.10.21 student, 1234QWer

Command List
The table describes the commands that are used in this activity. The commands are listed in alphabetical
order so that you can easily locate the information that you need. Refer to this list if you need configuration
command assistance during the lab activity.

Command Description

cd directory name To change directories within the Linux file system, use the cd
command. You will use this command to enter into a directory
where the lab scripts are housed. You can use tab completion
to finish the name of the directory after you start typing it.

docker build -t Docker build command to build a container

docker run -it --name webhook1 -p 5010:5010 Docker runtime execution to start a Docker container that
listens for webhooks and displays the content to the screen. Meant as a
simulation/replacement of what Webex Teams or another webhook receiver would receive.

generate_web_requests.sh Shell script that gets the web page from the localhost to
generate traffic load

git clone Command to clone the repository from your GitLab remote
repository

python health_watcher.py Python script to scrape Prometheus metrics and send
webhooks when thresholds are exceeded

ssh student@k8s1 Uses the SSH application to open a Remote Terminal session
to the Kubernetes server

systemctl systemctl may be used to introspect and control the state of the
"systemd" system and service manager

Task 1: Set up the Environment


To simulate the experience that you would see in Webex Teams, you will open three separate terminal
windows. The first will be an SSH session to the k8s1 Kubernetes server. There, you will use a script to
generate HTTP requests to the application. The second terminal window will have the webhook listener
container started and listening for incoming webhooks. This simulates the Webex Teams environment in
the lab; moving to Webex Teams would only require changing the URL, as the data structures are the same.
The third terminal window will have the Python script that is used to watch the Prometheus /metrics page
and send the webhook when the thresholds are exceeded.

Activity

Prepare Terminal Windows


Minimize all active windows before opening the terminal windows.

Step 1 Open three separate terminal windows. Arrange the terminal windows so all three terminal windows are
visible.

Step 2 Open the second and third terminal windows by right-clicking in the application bar at the bottom and
selecting New Terminal until you have three (3) terminal windows open. Arrange the windows however you
see fit. The one that should remain visible is the window for the Docker container where the webhook
listener is going to be run.

Step 3 In one of the terminal windows, establish an SSH session to the k8s1 server using the ssh student@k8s1
command.

student@student-vm:$ ssh student@k8s1


Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-62-generic x86_64)

Last login: Mon Nov 11 06:24:27 2019 from 192.168.10.20


student@k8s1:~$

Step 4 Change the directory to ~/scripts using the cd ~/scripts command. This terminal session is now prepared for
generating HTTP requests.

student@k8s1:~$ cd ~/scripts
student@k8s1:~/scripts$

Clone Repositories onto Student Workstation

Step 5 In the second terminal window, change the directory to ~/labs/lab20 using the cd ~/labs/lab20 command.

Note You can execute this step also using Visual Studio Code, if desired.

student@student-vm:$ cd ~/labs/lab20
student@student-vm:labs/lab20$

Step 6 Issue the git clone https://git.lab/cisco-devops/health_watcher.git command to clone the health_watcher
repository to your Student Workstation.

student@student-vm:labs/lab20$ git clone https://git.lab/cisco-devops/health_watcher.git
Cloning into 'health_watcher'...
remote: Enumerating objects: 4, done.
remote: Counting objects: 100% (4/4), done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 4 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (4/4), done.
student@student-vm:labs/lab20$

Step 7 Issue the git clone https://git.lab/cisco-devops/basic_webhook.git command to clone the basic_webhook
repository to your Student Workstation.

student@student-vm:labs/lab20$ git clone https://git.lab/cisco-devops/basic_webhook.git
Cloning into 'basic_webhook'...
remote: Enumerating objects: 5, done.
remote: Counting objects: 100% (5/5), done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 5 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (5/5), done.

Deploy the Net Inventory Application Via GitLab and Verify the Metrics
Pages
To prepare the application, you need to deploy it with k8s1 as the web host. You will use the
GitLab CI/CD pipeline to deploy the web application. Once the deployment is complete, you should see some
metrics on the http://k8s1:5000/metrics page.

Step 8 From the Chrome browser, navigate to https://git.lab.

Step 9 Log in with the credentials that are provided in the Job Aids and click Sign in.

Step 10 From the list of projects, choose the cisco-devops/net_inventory project.

Step 11 From the left navigation bar, choose CI/CD > Pipelines.

Step 12 Click Run Pipeline at the upper right part of the page to start the pipeline.

Step 13 Click Run Pipeline on the confirmation page.

Step 14 Once the pipeline is completed, open a new browser tab and navigate to http://k8s1:5000/metrics. Here you
will see text-based metrics that you can read and export.

Build and Start the Webhook Listener Container in a Terminal Window
In the second terminal window, you will build the Docker container that has the webhook listener
built into it. The container's code does not have any authentication; it simply takes whatever is
received from a webhook call and prints it to the console.
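The lab's listener is a small Flask app, but the same behavior can be sketched with only the Python standard library: accept a POST, decode the JSON body, print it, and answer 200. The class name and port here are illustrative, not the container's actual code:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and decode the webhook body, then print it to the console,
        # e.g. {'text': 'HTTP hit count rising threshold hit'}
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length) or b"{}"
        print(json.loads(body))
        # No authentication, just like the lab listener: acknowledge anything.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK")


# To run standalone on the same port as the container:
# HTTPServer(("0.0.0.0", 5010), WebhookHandler).serve_forever()
```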

Step 15 In the second terminal window, change directory to basic_webhook using the cd basic_webhook command.

student@student-vm:labs/lab20$ cd basic_webhook
student@student-vm:lab20/basic_webhook (master)$

Step 16 Execute the docker build -t webhook_listener . command to build the webhook_listener Docker container
on the Student Workstation.

student@student-vm:lab20/basic_webhook (master)$ docker build -t webhook_listener .
Sending build context to Docker daemon 58.88kB
Step 1/10 : FROM registry.git.lab/cisco-devops/containers/python37:latest
---> 6d319e0d3165
Step 2/10 : LABEL description="This is a basic webhook listener."
---> Using cache
---> 332b403f0cb9
Step 3/10 : LABEL maintainer="Cisco <[email protected]>"
---> Using cache
---> 678beb201d85
Step 4/10 : LABEL version="0.1"
---> Using cache
---> 74e1de186eff
Step 5/10 : ADD ./ /net_listener
---> dd991dd0eb56
Step 6/10 : WORKDIR /net_listener/
---> Running in b3844a2de589
Removing intermediate container b3844a2de589
---> 966cdd7958a7
Step 7/10 : RUN apt install -y git vim
---> Running in fcad1f024818

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

Reading package lists...


Building dependency tree...
Reading state information...
git is already the newest version (1:2.11.0-3+deb9u4).
vim is already the newest version (2:8.0.0197-4+deb9u3).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Removing intermediate container fcad1f024818
---> 7d34ca429a0d
Step 8/10 : RUN pip install -r ./requirements.txt
---> Running in 730d91baffc9
Requirement already satisfied: flask in /usr/local/lib/python3.7/site-packages
from -r ./requirements.txt (line 1)) (1.1.1)
Requirement already satisfied: itsdangerous>=0.24 in /usr/local/lib/python3.7/site-
packages (from flask->-r ./requirements.txt (line 1)) (1.1.0)
Requirement already satisfied: Jinja2>=2.10.1 in /usr/local/lib/python3.7/site-packages
(from flask->-r ./requirements.txt (line 1)) (2.10.1)
Requirement already satisfied: Werkzeug>=0.15 in /usr/local/lib/python3.7/site-packages
(from flask->-r ./requirements.txt (line 1)) (0.16.0)
Requirement already satisfied: click>=5.1 in /usr/local/lib/python3.7/site-packages
(from flask->-r ./requirements.txt (line 1)) (7.0)
Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.7/site-
packages (from Jinja2>=2.10.1->flask->-r ./requirements.txt (line 1)) (1.1.1)
Removing intermediate container 730d91baffc9
---> 47ee06aedc9f
Step 9/10 : EXPOSE 5010/tcp
---> Running in e0a870d4c79f
Removing intermediate container e0a870d4c79f
---> 7eb88ef311da
Step 10/10 : ENTRYPOINT python webhook.py
---> Running in 3db84e5cbbe2
Removing intermediate container 3db84e5cbbe2
---> 0346802d4a9a

Successfully built 0346802d4a9a
Successfully tagged webhook_listener:latest

Step 17 Issue the docker run -it -p 5010:5010 --name webhook1 webhook_listener command to start the Docker
container for the webhook. This will take you into the container and wait for webhooks to be sent. You may
use the Enter key a few times to help break apart the webhook execution outputs.

student@student-vm:lab20/basic_webhook (master)$ docker run -it -p 5010:5010 --name


webhook1 webhook_listener
* Serving Flask app "webhook" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:5010/ (Press CTRL+C to quit)
{'text': 'HTTP hit count rising threshold hit'}

Start the Python Monitoring App in the Third Terminal Window


In the third window, you will run a monitoring application that will execute locally. It will query the
Prometheus metrics page and then use a regular expression search to find how many page hits have
occurred on the web application. This script then parses and compares to the previous period value and
against a predefined threshold value. If either threshold has been crossed, then a webhook is sent.
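The script itself is provided in the repository; as a rough sketch of the approach just described (the regex, threshold values, and message texts below are assumptions modeled on the lab output, not the actual health_watcher.py):

```python
import re

# Matches lines like: flask_http_request_total{method="GET",status="200"} 18.0
COUNTER_RE = re.compile(r'^flask_http_request_total\{[^}]*\}\s+([0-9.]+)',
                        re.MULTILINE)


def hit_count(metrics_text):
    """Sum the request counter across all label combinations."""
    return sum(float(value) for value in COUNTER_RE.findall(metrics_text))


def alerts(previous, current, rise_limit=10, total_limit=20):
    """Return the webhook payloads that should fire for this poll."""
    fired = []
    if current - previous >= rise_limit:
        fired.append({"text": "HTTP hit count rising threshold hit"})
    if current >= total_limit:
        fired.append({"text": "Webserver has served over %d requests!" % total_limit})
    return fired


sample = ('flask_http_request_total{method="GET",status="200"} 18.0\n'
          'flask_http_request_total{method="POST",status="200"} 4.0\n')
print(hit_count(sample))               # 22.0
print(alerts(2.0, hit_count(sample)))  # both thresholds crossed: two payloads
# A real poll loop would sleep ~10 seconds, re-scrape http://k8s1:5000/metrics,
# and POST each payload to the webhook listener (or a Webex Teams URL).
```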

Step 18 In the third terminal window, change directory to ~/labs/lab20/health_watcher using the cd
~/labs/lab20/health_watcher command.

student@student-vm:labs/lab20$ cd ~/labs/lab20/health_watcher
student@student-vm:lab20/health_watcher (master)$

Step 19 Execute the python health_watcher.py Python script. The command execution will not immediately
complete, as it waits 10 seconds between polls if the config has not been changed. The script is meant to
execute for approximately 200 seconds. Keep that terminal window visible – here you will see notifications
that a webhook has been sent and output of the Webex Teams URL that would be used instead of the local
listener.

student@student-vm:lab20/health_watcher (master)$ python health_watcher.py


Webex Teams URL: https://api.ciscospark.com/v1/webhooks/incoming/<incoming_webhook_url>
Incoming webhook URL is configured/provided within Webex Teams administration

Observe the Results


The monitoring via Python is in place in the third terminal window. You will open the first window and
generate HTTP traffic to achieve the alerts of passing thresholds triggered.

Step 20 Return to the first terminal window that is at the student@k8s1:~/scripts$ prompt.

Step 21 Issue the ./generate_web_requests.sh command, which will initiate 20 requests to the web page.

student@k8s1:~/scripts$ ./generate_web_requests.sh

<<< OUTPUT OMITTED FOR BREVITY – HTML should scroll through >>>
student@k8s1:~/scripts$

Step 22 In the second and third terminal windows, you will see the webhooks being sent and the information being
received.

#### CONTAINER WINDOW #########


{'text': 'HTTP hit count rising threshold hit'}
172.18.0.1 - - [13/Dec/2019 20:03:12] "POST /webhook HTTP/1.1" 200 -
{'text': 'Webserver has served over 20 requests!'}
172.18.0.1 - - [13/Dec/2019 20:03:12] "POST /webhook HTTP/1.1" 200 -

Summary Challenge
1. Which modern monitoring technique is scalable as devices send metrics via subscriptions?
a. SNMP pushing
b. streaming telemetry
c. NetFlow
d. SNMP polling
2. Which type of monitoring is concerned with measurements?
a. pipeline
b. logs
c. error lights
d. metrics
3. Which ELK stack component is responsible for visualization dashboards?
a. Elasticsearch
b. Filebeat
c. Logstash
d. Kibana
4. Which ELK stack component ingests data, transforms it, and sends it elsewhere?
a. Elasticsearch
b. Filebeat
c. Logstash
d. Kibana
5. Which ELK stack component has a RESTful search capability that is highly scalable?
a. Elasticsearch
b. Filebeat
c. Logstash
d. Kibana
6. Which visualization tool is often paired with Prometheus to provide dashboards?
a. Grafana
b. Kibana
c. Canary
d. OpsDashOne
7. Which device in the Prometheus project is responsible for gathering and storing data?
a. Alertmanager
b. Exporter
c. Prometheus server
d. Pushgateway

Answer Key
Introduction to Monitoring, Metrics, and Logs
1. C

Introduction to Elasticsearch, Beats, and Kibana


1. A

Introduction to Prometheus and Instrumenting Python Code


for Observability
1. B

Summary Challenge
1. B
2. D
3. D
4. C
5. A
6. A
7. C

Section 12: Engineering for Visibility and Stability

Introduction
Applications and infrastructure work hand-in-hand to fulfill business outcomes and serve customers.
Gaining a deeper understanding of how applications and infrastructure work is critical for building,
operating, and optimizing increasingly complex systems. This section introduces the concepts and benefits
of application health monitoring, logging, and gathering metrics, in a general context and how they are
realized in practice using management platforms like Cisco AppDynamics. Finally, you will learn about
chaos engineering, where the concept came from, why it is necessary, and what are its main principles and
practices.

Application Health and Performance
Fifteen years ago, most applications ran in the private data center. Today, the environment looks a little
more like the one in the figure: dependencies on local and remote services, and everything is interconnected
in ways that make it difficult for a single person (or team) to fully understand what is going on.

The challenge is to find what is causing an end-user error or slowdown. It might be caused by a single
element within that interconnected topology. It could be thread contention in the authentication services,
before users even get to the actual application.

Configuration changes cause many outages. They can happen on servers or the underlying infrastructure,
and affect connectivity or just policy (network, security, load balancing, and so on).

The process of identifying a problem can be like trying to find a needle in a haystack, which requires
visibility into the application and infrastructure health and performance, especially given the increase in
complexity and scale in modern systems. This growth requires that you think differently about the types of
tools that you use, and the way that you use intelligence and data to resolve issues.
However, tools alone can take you only so far. Their full potential is only unlocked when a shift in culture
occurs at the same time. Traditionally, IT teams worked in a siloed environment, where they could be good
at solving the problems that affected their own siloed part of the IT environment, but often be out of sync
with the actual organizational problems. A network engineer, for example, might check the network and if it
"looks" fine, despite an ongoing application issue, pass the incident on to other teams, who will do the same
thing. When databases perform normally and there are no application error logs, but there are problem
reports coming from customers, a different approach is necessary to quickly track down and solve issues
before their impact on the business becomes critical.

Monitoring and Management
You must consider the differences between monitoring and management. A monitoring solution collects and
aggregates data coming from various software and hardware systems, then facilitates its display for its
users, including reporting and alerting. These systems normally receive numerous metrics and the user
needs to manually look at the data and try to group, analyze, and present it in a way that helps detect or
investigate problems. A monitoring system will also send alerts or notifications when certain thresholds are
crossed or systems become unresponsive or unreachable.
Traditionally, alerts and a large amount of collected historical data are used to identify what went
wrong after a problem occurs, often much later, when the needed granularity is no longer available (for
example, due to database interval averaging).
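A tiny example shows why that lost granularity matters: a one-minute CPU spike that is obvious in the raw samples disappears once the samples are rolled up into a coarse average (the numbers are invented for illustration):

```python
# Hypothetical CPU samples collected once per minute (percent utilization).
# One sample is a short spike to 95 percent.
samples = [20, 22, 95, 21, 19]

def rollup(values):
    """Average raw samples into a single coarse data point, the way
    many monitoring databases downsample historical data."""
    return sum(values) / len(values)

five_minute_avg = rollup(samples)

# The spike is plainly visible in the raw data...
assert max(samples) == 95
# ...but the rolled-up average looks unremarkable, so a post-incident
# investigation that only has averaged history may never see it.
print(f"raw max: {max(samples)}%, 5-minute average: {five_minute_avg:.1f}%")
```

The averaged value (35.4 percent) gives no hint that the machine was briefly near saturation, which is exactly the gap a problem investigation runs into when only averaged history survives.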
There is a gap in this scenario between monitoring individual endpoints or applications and managing the
system as a whole—from its components, to the user experience, to the value-generating elements that drive
the business as a whole. It is attractive to focus on simple issues that can be fixed in a straightforward way
rather than dealing with systemic issues, such as a 10 percent site-level increase in HTTP errors. The first
mindset is that of a reactive organization that focuses only on immediate issues (the traditional IT-as-a-cost-
center approach), whereas the second mindset factors the IT infrastructure quality and performance into its
business drivers instead of just its availability.
Consider an application container management system that runs multiple services. Each service would be
made of one or more containers that spread the load, and is considered online if it has at least one functional
running container. The management system will also track resource utilization, both for individual
containers and for the servers in its fleet that host these services. But the fact that the container
process is running does not mean that the service is healthy. Periodically testing that the service accepts
connections on a specific port, or even validating that it returns the expected data (for example, a known
login web page), builds confidence in its actual ability to do its job. Services can also use
the push model to send heartbeats (or check-ins) to the management system at predefined intervals, or if
there are regularly scheduled jobs, it could be when their execution begins and ends.
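The port-and-content style of health check described above can be sketched in a few lines of Python; the host, port, and expected page content below are placeholders, not values from a real environment:

```python
import socket

def port_is_open(host, port, timeout=2.0):
    """Liveness check: can a TCP connection be opened at all?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def looks_healthy(body, expected="Login"):
    """Health check: does the response contain the content we expect
    (for example, a known login page), not just any bytes?"""
    return expected in body

# Placeholders: point these at a real service to use the checks.
# service_up = port_is_open("app.example.internal", 8080)
print(looks_healthy("<html><title>Login</title></html>"))  # True
```

A management system would run checks like these on a schedule and mark the service degraded when the content validation fails even though the process (and the port) are still up.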
Managing application performance starts with monitoring health and performance metrics, but continues
with real-time analysis, baselining, and identification of anomalies. It tries to identify problems before they
impact the business and service via errors or an inability to cope with increased demand. Understanding and
using the designed elasticity of a service to scale up and down when demand changes is only one example.

Performance Management
Implementing DevOps practices and having good CI/CD improves code quality and enables continuous
improvement of the processes and automation servicing applications during their lifecycle. Combining
continuous automated testing (that uses much of the spare capacity in many IT environments) with a good
telemetry setup (to obtain relevant metrics from that testing) is the key to being successful.
You may think that frequent testing will create bottlenecks, but if it is done right, the opposite happens.
Instead of performing massive performance tests before release time on complex applications, continuously
testing individual services is easier to do and gradually builds confidence in each service and its ability to
withstand problems (its own or in external dependencies). Achieving this goal requires solid knowledge of
how each service works and complete understanding of the metrics that describe the functioning of these
services. Any variation or deviation in these metrics during testing must be correlated with another variable for
you to understand what has happened, explain it, and predict future behavior. You may otherwise end up
with many different results and metrics that changed during testing, but you may not be able to empirically
explain why.
This testing is easier to achieve when the application is not a monolith, but is designed as a loosely coupled
system of multiple services. This approach requires cultural and technological shifts within the organization.

Concerning performance testing and monitoring, it is important to move away from focusing only on large-
scale, complex, systemwide testing. This type of testing is difficult to set up, takes a long time to run, and
when issues are found, it may take much time and effort away from other activities to find the problem,
usually under a tight deadline to release to production.
The following four patterns will help you define better strategies to build, test, and run software that
operates well and delivers the required business outcomes.
• Establish a baseline: Derive the baseline of the application when it is not under any load, using a single
business transaction to perform as many of the flows as possible in an automated environment. In this
best-case scenario, the system is not under any kind of stress, and any transaction should finish in the
shortest possible time. Using performance monitoring and logs, you should have a good idea of which
are the slowest and the fastest calls, which are the most resource-intensive, and so on.
• Find the breaking points: Ramp up a single instance of the application until it gets to a point where it
cannot serve all requests or it completely breaks. Track all possible metrics to gain insight into what is
affected as the service nears its breaking point. Finally, identify the cause of the breakdown and decide
if it makes sense to optimize (sometimes it may be cheaper and safer to increase the available
resources).
• Scale it up: Choose an appropriate number of instances for your application to run in parallel under a
load-balancing regime. The goal is to figure out if the best performance is achieved through horizontal
scaling (more instances) or vertical scaling (more resources, such as memory, CPU, I/O).
• Test for resilience: Take the application up to the breaking point and start lowering the load, observing
how it recovers from being under high stress. Is the application stable running at 90 percent of its
maximum load? Or is it stable at 70 percent? If it occasionally breaks, will it recover to reliable
functionality once the load is within its limits? These tests will help determine if the application needs
to be restarted every time it exceeds a certain point. These tests will also help identify what needs to be
done so that the application can become resilient enough to run at high utilization without risk.

These patterns should be repeated every time changes are made and new code is pushed to a service: find a
new baseline, find the breaking point, determine the scaling factors, and find the code’s resilience. To help
keep this performance testing scalable, usable, and in use, it should require no effort from the developers to
run the tests. The results should always be there as a feedback loop to fine-tune the service and continuously
improve it.
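As a minimal sketch of the "establish a baseline" pattern, the following measures a single transaction repeatedly in an unloaded environment; `call_service` is a stand-in that you would replace with a real business-transaction flow:

```python
import statistics
import time

def call_service():
    """Stand-in for one business-transaction flow; replace this with
    a real request against the service under test."""
    time.sleep(0.001)  # simulate a fast, unloaded response

def measure_baseline(transaction, runs=20):
    """Run one transaction repeatedly with no other load and record
    per-call latency: the best case the service can ever achieve."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        transaction()
        timings.append(time.perf_counter() - start)
    return {
        "fastest": min(timings),
        "slowest": max(timings),
        "mean": statistics.mean(timings),
    }

baseline = measure_baseline(call_service)
print(baseline)
```

The same harness can then be reused for the other patterns: ramp the load until the breaking point, and compare against the no-load numbers captured here.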
In summary, monitoring and managing the health and performance of your applications is critical in
providing a measurable user experience that translates into business value. This process tells the business
that the services deliver what customers want and provides feedback to the IT teams that indicates what
works and what does not work in the context of delivering good quality of service.
1. What are two characteristics of traditional reactive monitoring systems? (Choose two.)
a. They focus mostly on immediate problems.
b. They collect and store logs for just-in-case analysis.
c. They use performance management tools to investigate baseline deviations as they appear.
d. They focus mostly on long-term application performance.
e. They identify problems before they impact business.

AppDynamics Overview
The AppDynamics application performance management (APM) platform enables you to monitor and
manage your entire application-delivery ecosystem, from the client-facing mobile app or browser, through
your network, back-end databases, and application servers. AppDynamics APM gives you a detailed view
across your application landscape and lets you quickly navigate from the global perspective of your
distributed application to the call graphs or exception reports that are generated on individual hosts.

AppDynamics provides an operational view of your code as it runs through an app server agent. This agent
monitors and records the calls that are made to a service, from the entry point and following execution along
its path through the call stack. It then sends data back to its controller (which may be hosted on-premises or
SaaS-based) about code exceptions, error conditions, usage metrics, and exit calls.
Because many of today's applications follow service-oriented architecture (SOA) or microservice designs,
consisting of multiple distributed services with many interconnections and external dependencies,
AppDynamics traces transactions across all of them to provide a complete picture.

The database is an important application component, and AppDynamics deployments can include database
visibility components. Although the app server agent advises you about calls to back-end databases
(reporting client-side metrics and errors), you can also deploy a database agent that collects information
from the database servers and sends it to the controller for analysis and storage. Database analytics features
may use the events service, the document storage component of the platform, which AppDynamics has
optimized for searching and storing high volumes of information.
The server visibility part of AppDynamics enhances your view of the data center. It collects metrics about
the resource utilization and performance of the machines in your environment, and watches how they run
locally and their access to the network infrastructure.
Agents are plug-ins or extensions that monitor the performance of your application code, run time, and
behavior. When deployed, they start monitoring every code path, and assign unique tags to every method
call and request header. This process allows AppDynamics to trace every transaction from start to finish—
even in modern, distributed applications. Agents work with programming languages such as Java, .NET,
Node.js, PHP, Python, C/C++, and cloud platforms like AWS, Microsoft Azure, and Google Cloud.
These agents help you achieve the following:
• Troubleshoot problems such as slow response times and application errors.
• Automatically discover application topology and how components in the application environment work
together to fulfill key business transactions for its users.
• Measure end-to-end business transaction performance, along with the health of individual application
and infrastructure nodes.
• Receive alerts that are based on custom or built-in health rules, including rules against dynamic
performance baselines that alert you to issues in the context of business transactions.
• Analyze your applications at the code execution level using snapshots.

The controller is the collection point for data that is gathered in real time by the agents and provides a single
monitoring, troubleshooting, and analysis point for the entire application landscape in one interface.

Business Transactions and Flow Maps
A business transaction consists of all the required services within your environment that are called upon to
successfully deliver a response to a user-initiated request. These services typically include actions like login,
search, adding something to a shopping cart, and checking out, which will invoke various applications, web
services, third-party APIs, and databases. Business transactions reflect the logical way users interact with
your applications in practice.
AppDynamics performs automatic discovery of business transactions and builds a topology map of how
traffic flows within the application. This map helps you see usage patterns and possibly hidden flows to
better manage the application based on accurate user behavior. While it monitors code execution,
AppDynamics captures metrics and traffic patterns to build its acceptable performance baselines. As you
will see in troubleshooting, these baselines make it very easy to spot issues and then investigate when
services and components are slower than usual, which can potentially affect user experience.

Flow maps show the tiers, nodes, message queues, and databases in your environment that are, by default,
providing live performance data. You can also include nodes that are not currently providing data, which is
helpful when troubleshooting node issues.
The maps highlight the business transactions flowing through them and provide information (when
available) that is derived from their baseline performance. The flow lines use color to indicate the
performance of the services: green indicates that current performance conforms to the baseline, while yellow
indicates that response times are slower than the baseline predicts. When there are no baselines available
for comparison, the flow lines are blue. Finally, solid lines represent synchronous connections, whereas
dashed lines indicate asynchronous connections.
The following figure shows a basic flow map for an e-commerce application in which three server tiers
interact with databases and an Apache ActiveMQ message broker.

Note that the flows between Order-Processing-Services and ECommerce-Fulfillment are baselined and
conforming (in green) with solid lines indicating synchronous connections. Other flows between various
nodes and their message queues are predictably asynchronous, as indicated by the dashed lines.

Next to the flow lines, information is presented about the calls that are made per minute to the tier and the
average time that is required for the request to be serviced (the round-trip time). This information includes
the time that is spent on the network, if applicable to your topology, and the time that the back-end server
spends processing the request. The calls per minute for a given context, such as a tier, must be one or more
for the flow map to display.
Flow maps show different information depending upon the AppDynamics user interface context in which
they appear:
• Cross application flow maps show exit calls between applications within the monitored environment.
• Application flow maps show the topology and activities within an application and display metric values
across all business transactions in the application for the selected time range.
• Tier and node flow maps display metric values across all business transactions for the subset of the
application flow that is related to the selected tier or node.
• Business transaction flow maps display data for a particular business transaction, showing metrics that
are calculated based on all executions of the business transaction during the selected time range.
• Snapshot flow maps illustrate the metrics that are associated with a single snapshot. The metrics that are
shown in the map are specific to a particular execution of the transaction.

End User Monitoring
AppDynamics End User Monitoring (EUM) gives you insights on the performance of your application from
the viewpoint of the end user. It extends data collection and analysis to the mobile application or web
browser, revealing the impact of the network and browser rendering. Also, it will automatically capture
errors, crashes, network requests, page load details, and other metrics.

Using AppDynamics EUM, you can determine the following:
• Where (geographically) your heaviest application load originates.
• Where (geographically) your slowest end-user response times occur.
• How performance varies by location, client type, device, browser and browser version, and network
connection for web requests; and by application and application version, operating system version, device,
and carrier for mobile requests.
• Your slowest web requests and where the problem may lie.
• Your slowest mobile and Internet of Things (IoT) network requests and where the problem may lie.
• How application server performance impacts the performance of your web and mobile traffic.
• Whether your mobile or IoT applications are experiencing errors or crashes and the root cause of the
issues.

Infrastructure Visibility
The root cause of application issues is often most obvious when looking at application, network, server, and
machine metrics that measure infrastructure utilization.

For example, the following infrastructure issues can slow down your application:
• Too much time spent in garbage collection of temporary objects (application metric)
• Packet loss between two nodes that results in retransmissions and slow calls (network metric)
• Inefficient processes that result in high CPU utilization (server metric)
• Excessively high rates of reads and writes on a specific disk or partition (hardware metric)

This functionality enables you to isolate, identify, and troubleshoot these types of issues. Infrastructure
visibility is enabled through a machine agent that runs with an app server agent on the same machine.
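A rough idea of the kind of data a machine-level agent gathers can be sketched with the Python standard library alone; this is an illustration of the metric categories, not how the AppDynamics agent is actually implemented:

```python
import os
import shutil

def collect_machine_metrics(path="/"):
    """Gather a few basic host metrics, similar in spirit to what a
    standalone machine agent reports: CPU capacity and load, and
    disk utilization for a given mount point."""
    disk = shutil.disk_usage(path)
    metrics = {
        "disk_used_pct": 100.0 * disk.used / disk.total,
        "cpu_count": os.cpu_count(),
    }
    # os.getloadavg is not available on Windows, so guard it.
    if hasattr(os, "getloadavg"):
        metrics["load_1min"] = os.getloadavg()[0]
    return metrics

print(collect_machine_metrics())
```

A real agent reports this kind of data continuously to a controller, where it can be baselined and correlated with application metrics such as garbage-collection time or slow calls.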

Infrastructure visibility is composed of a few distinct categories that are based on the functional or product-
focused area that they cover (for example, monitoring containers with Docker Visibility, Pivotal Cloud
Foundry, or Kubernetes):

Network visibility monitors traffic flows, network packets, TCP connections, and ports (network agents use
the app server agents to identify the TCP connections that are used by each application). It includes detailed
metrics about dropped and retransmitted packets, TCP window sizes, connection setup and teardown issues,
high round-trip times, and other performance-impacting issues. It performs automatic mapping of TCP
connections to application flows and detection of intermediate load balancers, and presents all this
information in KPI-based dashboards that facilitate data analysis.

The Standalone Machine Agent collects basic machine metrics, such as CPU and memory utilization,
throughput on network interfaces, and disk and network I/O, with support for additional custom
metrics and execution of automated remediation scripts.
Server visibility monitors local processes, services, and resource utilization as an add-on to the Standalone
Machine Agent. It provides visibility into hardware metrics such as machine availability; disk, CPU, virtual-
memory utilization; and process page faults. It monitors services such as HTTP servers and container
tooling such as Docker to identify run-time issues that may impact application performance.
Database visibility is a standalone Java program that collects performance metrics about your database
instances and database servers and helps you troubleshoot problems such as slow response times and
excessive load. It provides metrics on database activity such as resource-intensive SQL statements, stored
procedures, and SQL query plans, or time spent on fetching, sorting, or waiting on a lock.

Integrating and Extending AppDynamics
A metric is a particular class of measurement, state, or event in the monitored environment. Many of the
built-in metrics relate to the overall performance of the application or business transaction, while others
describe the state of the server infrastructure. They are registered with the controller and then reported at
regular intervals where they are stored, aggregated, and presented for analysis.
You can also extend functionality by creating custom metrics, which are reported using the same machine
agent back to the controller. The AppDynamics platform automatically calculates dynamic baselines for
your metrics, defining what is normal for each metric, based on actual live data from the use of the
application.
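The machine agent's script-extension mechanism reports custom metrics as plain name/value text lines. The sketch below shows the general shape of such a line; the metric path `App|WorkQueue|Depth` and the chosen aggregator are purely illustrative, and the exact syntax accepted depends on your agent version and configuration:

```python
def format_custom_metric(name, value, aggregator="AVERAGE"):
    """Render one metric line in the name=...,value=... style that
    machine-agent script extensions emit on standard output. The
    path prefix and qualifiers here are illustrative assumptions."""
    return (f"name=Custom Metrics|{name},value={int(value)},"
            f"aggregator={aggregator}")

# Hypothetical metric: depth of an internal work queue.
line = format_custom_metric("App|WorkQueue|Depth", 42)
print(line)
```

A script that prints lines like this at the agent's collection interval makes the metric appear in the controller alongside the built-in ones, where baselining and health rules apply to it the same way.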

Detection of anomalies or behavior that does not conform to the baseline is facilitated through the definition
of health rules with various conditions that trigger alerts or remedial actions. Setting thresholds within
AppDynamics helps you maintain the agreed service level by detecting slow, very slow, and stalled
transactions. Thresholds provide a flexible way to associate the right business context with a slow request to
isolate the root cause.
For example, a health rule for business transaction response times defines a critical condition as the
combination of the following factors: an average response time higher than the default baseline by three
standard deviations and a load greater than 50 calls per minute.
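Generically, that combined condition can be expressed as a small function; the sample response times and baseline values below are invented for illustration:

```python
import statistics

def violates_health_rule(response_times_ms, baseline_ms, load_cpm,
                         sigmas=3.0, min_load_cpm=50):
    """Mimic the example rule: critical when the average response time
    exceeds the baseline mean by `sigmas` standard deviations AND the
    load is above `min_load_cpm` calls per minute."""
    mean = statistics.mean(baseline_ms)
    stdev = statistics.stdev(baseline_ms)
    threshold = mean + sigmas * stdev
    current_avg = statistics.mean(response_times_ms)
    return current_avg > threshold and load_cpm > min_load_cpm

baseline = [100, 105, 98, 102, 99, 101]   # historical samples (ms)
# Slow responses under real load trigger the rule...
assert violates_health_rule([150, 160, 155], baseline, load_cpm=80)
# ...but the same slowness at negligible load does not.
assert not violates_health_rule([150, 160, 155], baseline, load_cpm=10)
```

Requiring both conditions is the point: it keeps the rule from firing on a handful of slow requests that carry no real business impact.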
You can always develop your own rules, metrics, and extensions if you cannot find what you need, and the
AppDynamics Exchange provides a repository of numerous extensions you can download, with
functionality as follows:
• Monitoring extensions that add metrics to the existing set that AppDynamics agents have when
originally shipped. These extensions can come from other monitoring systems or services that
AppDynamics does not manage.
• Alerting extensions allow you to integrate AppDynamics with external alerting or ticketing systems and
create custom notification actions.
• Performance testing extensions are also available.

1. Which three types of visibility are part of AppDynamics infrastructure visibility? (Choose three.)
a. server visibility
b. database visibility
c. network visibility
d. application visibility
e. cloud visibility
f. end-user visibility
g. flow map visibility

Troubleshoot an Application Using
AppDynamics with APM
Now you will take a quick look at how a distributed application is monitored through AppDynamics and
what a few workflows look like when you need to troubleshoot errors and performance issues with the various
components of the application.
In this scenario, the application is depicted in the figure as seen by the AppDynamics dashboard. It is called
AD-Financial-Lite-ACI, and it is an in-house demo application that is hosted on Cisco Unified Computing
System (UCS) servers running VMware, which is integrated with Cisco Application Centric Infrastructure
(Cisco ACI).
The demo application consists of Docker container components that are distributed in VMs that run the
Docker stack. Each VM, apart from the application containers, also hosts AppDynamics agents, which are
responsible for gathering status information and infrastructure and application metrics. The demo
application consists of 23 container components that are segregated into 10 application tiers and support
functions (database services and load-generation components). The application containers have been
distributed to an equal number of VMs, with each VM hosting an application component container and the
relevant AppDynamics agent containers.

As you can see in the dashboard screenshot, the AD-Financial application is experiencing increased load
and with it, some slower response times and a few errors. Most nodes are operating within their health
parameters (green), a few have reached their warning thresholds (yellow), and finally, the
OrderProcessingNode has a critical alert.
First, you will investigate the OrderProcessingNode. Click the Node Health link in the right-hand sidebar
to find its health rule violations.

The node's memory utilization has exceeded the rule limit, which triggers when the utilization is more than
90 percent.

By digging into the historical data for the heap utilization, you find that it has increased to 91 percent.

Although this issue has to be resolved, user reports of errors and slow performance have been coming in
that are related to other parts of the application. You will investigate the errors first.

Examining one of these errors, you can see details about the HTTP return code (500 in this case) and the
affected nodes.

Drilling down into the WebFrontEnd node, where the API call returns an error, you can see the full details
of the Java exception stack and investigate further.

With your colleague now on the case investigating the errors, it is time to look at the final issue, the slow
responses. You notice in the application dashboard that there are several transactions that are marked as
very slow by AppDynamics. You will investigate one of these transactions, which seems to be affecting the
WireServices and WebFrontEnd nodes.

You notice that one of these transactions has taken more than 50 seconds, so it looks like a great place to dig
deeper. Open this transaction and you can see the HTTP calls and Java methods that are taking such a long
time.

Finally, you can drill down even further by looking at the Call Graph. Here you find a detailed code-level
snapshot of the methods that are being called, with fully qualified class names and line numbers. You are
also given data about how much time is spent in each section, along with any exit calls that are made from
methods.

This short walk-through of various issues shows how they can be quickly identified from initial reports
when AppDynamics manages the distributed application. This information provides insights into deviations
from known baselines, error tracking, and slow performance issues.
1. Which two types of issues can you troubleshoot using the APM functionality in AppDynamics?
(Choose two.)
a. slow HTTP responses
b. OSPF database exchange
c. Java heap usage
d. ping speed
e. ACI leaf TCAM utilization

Chaos Engineering Principles
With microservices becoming a very popular architectural pattern for software and systems design,
developers gain a smaller, more independent, and focused codebase to work with and more control over
how they deploy their service. Although individual services are simpler, complexity does not simply
disappear. It just shifts into the distributed system of which these services are now part. From an operational
point of view, there are many more moving parts now (as opposed to a monolith), with reverse proxies, load
balancers, firewalls, and other infrastructure support services (what used to be method calls or interprocess
calls within the monolith's host now go over the network). Growing to hundreds or thousands of services
means that you will not be able to understand all the intricacies of how the system works and predict its
behavior (in good and bad times). The classic eight fallacies of distributed computing capture the
assumptions that tend to break in exactly these environments:
1. The network is reliable.
2. Latency is zero.
3. Bandwidth is infinite.
4. The network is secure.
5. Topology does not change.
6. There is one administrator.
7. Transport cost is zero.
8. The network is homogeneous.

Peter Deutsch, http://wiki.c2.com/?EightFallaciesOfDistributedComputing
These fallacies provide plenty of inspiration for designing chaos engineering experiments: for example,
increasing latency, adding packet loss, or forcing constant topology changes and network reconvergence.
Services will block
while they wait for responses that will never come or packets that were long lost. They might also increase
their usage of local resources while they retry, or exhaust remote resources by retrying too aggressively to
get a response. Experimentation with time travel (clocks getting out of sync) or unpredictable circumstances
such as large traffic spikes, CPU overloads, and even data center failures are all within the scope of chaos
engineering.
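A minimal fault-injection wrapper in Python illustrates the idea behind these experiments; `get_balance` is a hypothetical downstream call, and the latency and failure-rate values are experiment parameters you would choose, not recommendations:

```python
import random
import time

def chaos_wrapper(call, latency_s=0.0, failure_rate=0.0, rng=random):
    """Wrap a service call for a chaos experiment: add artificial
    latency and randomly injected failures so you can observe how
    dependent services cope with the turbulence."""
    def wrapped(*args, **kwargs):
        time.sleep(latency_s)                # inject network-style delay
        if rng.random() < failure_rate:      # inject a transport failure
            raise ConnectionError("chaos: injected failure")
        return call(*args, **kwargs)
    return wrapped

def get_balance(account_id):
    """Hypothetical stand-in for a real downstream dependency
    (think of the BalanceServices example that follows)."""
    return {"account": account_id, "balance": 100}

# Experiment: 50 ms extra latency and a 50 percent failure rate.
flaky_get_balance = chaos_wrapper(get_balance, latency_s=0.05,
                                  failure_rate=0.5)
```

Running callers against `flaky_get_balance` instead of `get_balance` reveals whether they time out sensibly, retry with backoff, or pile up blocked threads, which is exactly the evidence of weakness chaos engineering looks for.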
To succeed against the distributed systems trade-offs, you must be able to improve the monitoring,
performance management, and testing of these distributed systems. Chaos engineering can come into play to
help you understand and better manage unexpected failures and performance degradation, so that the result
is a more robust and resilient system.
"Chaos Engineering is the discipline of experimenting on a system in order to build confidence in the
system's capability to withstand turbulent conditions in production."
Principles of Chaos Engineering, https://principlesofchaos.org/
The goal of chaos engineering is to discover and provide evidence of weaknesses in a system before they
become critical issues that affect the business. Users want a reliable system and developers will strive to
build it that way, but the reality is that there are many factors that affect reliability and robustness. These
factors become apparent only when the system faces production conditions (or real-life usage). There are
many types of tests that can be performed on parts of code, individual services, and complete applications,
but the focus here is on tests that experiment with your application to understand how it responds to the
various types of turbulent conditions that happen in production.
The following example has three services that communicate with each other (and potentially with back-end
databases, and so on) and serve users looking to check their account balance from the WebFrontEnd, which
depends on AccountManagement, which in turn depends on BalanceServices.

792 Implementing DevOps Solutions and Practices Using Cisco Platforms (DEVOPS) © 2022 Cisco Systems, Inc.
What should happen and what will happen when AccountManagement goes down? What if BalanceServices
is experiencing increased load from other services and is slow to respond to AccountManagement? What
happens to the WebFrontEnd when AccountManagement responds slowly or times out because
BalanceServices is not available? What happens if the network connection between the services is saturated,
causing packet loss and greatly increased transaction latency due to excessive retransmissions? Some
services will consist of many nodes for scaling purposes, so what happens when only a few of the
AccountManagement services start misbehaving? What is the behavior of WebFrontEnd going to be and,
ultimately, the experience of the users accessing the portal?
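One common way for WebFrontEnd to degrade gracefully when AccountManagement is slow or unavailable is a deadline plus a last-known-good fallback. The sketch below is hypothetical: the cache and the `call_service` callable are illustrative assumptions, not an actual client library, and a real client would enforce the deadline inside the RPC or HTTP stack.

```python
import time

CACHE = {}  # last-known-good balances, used as a degraded fallback

def fetch_account_balance(user_id, call_service, timeout_s=1.0):
    """Call AccountManagement with a deadline; fall back to cache on failure.

    call_service: callable standing in for the real RPC/HTTP client.
    Returns (balance, degraded); degraded=True means stale cached data.
    """
    start = time.monotonic()
    try:
        balance = call_service(user_id)
        # A real client enforces the deadline inside the library;
        # here we simply check elapsed time after the call returns.
        if time.monotonic() - start > timeout_s:
            raise TimeoutError("response exceeded deadline")
        CACHE[user_id] = balance
        return balance, False
    except (TimeoutError, ConnectionError):
        if user_id in CACHE:
            return CACHE[user_id], True  # serve stale data, flag degraded
        raise  # no fallback available; the caller must show an error
```

Chaos experiments then verify that the degraded path actually triggers (and is visible in monitoring) when AccountManagement misbehaves.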
You might think that you have thought of all these scenarios and that services are designed and, hopefully,
implemented to manage all such service degradation issues gracefully—but the question is, how do you
know without actually testing? The likelihood of a developer covering everything from the first
implementation is low, and even later, after some testing, there is a chance of unforeseen and unknown
issues occurring (also known as "dark debt").
With the rapid pace of software development and deployment that the DevOps methodology brings, it is very
challenging to remain highly confident in the stability and resilience of a system in the face of continuous
releases. Adding monitoring and testing throughout the development, staging, and release cycles, and
integrating chaos engineering scenarios, is what builds good coverage and reliability into the system even
before it goes into production.
• Chaos engineering is ...
– Controlled and planned engineering experiments
– Proactive testing before an outage
– Building resilience for unpredictable failures
– Preparing engineers for systems failing
– Game days and automated testing
– Improving SLAs and building confidence
– Revealing weaknesses in complex systems

• Chaos engineering is not ...
– Random experimentation
– Unsupervised
– Unmonitored
– Unexpected
– Accidentally breaking production
– Creating outages

In the early 2000s, Jesse Robbins, with his official title of Master of Disaster at Amazon, created and led a
GameDay—a project inspired by his experience training as a firefighter. GameDay was meant to increase
reliability by purposefully creating major failures regularly, thus preparing Amazon's systems, people, and
applications to better respond to disaster-type scenarios.
GameDay was the precursor to Netflix's creation of what is known today as chaos engineering, during their
migration of the rapidly growing streaming service from their own data centers to AWS around 2010. They
created tools like Chaos Monkey and the Simian Army (which were later open sourced and can now be found
at https://github.com/Netflix/chaosmonkey) that identify groups of systems and randomly terminate one of
their instances. These tests happen in a controlled fashion, during a time when people are around to resolve
issues and learn from the outcomes. Their reasoning was that "Since no single component can guarantee 100
percent uptime (and even the most expensive hardware eventually fails), we have to design a cloud
architecture where individual components can fail without affecting the availability of the entire system."
(Source: https://medium.com/netflix-techblog/the-netflix-simian-army-16e57fbab116.)
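The core Chaos Monkey behavior, identifying groups of systems and randomly terminating one instance per group during supervised hours, can be sketched in a few lines. The group structure and the `terminate` callable here are placeholders and do not reflect the actual Chaos Monkey implementation or any cloud provider API.

```python
import random

def pick_victim(instances):
    """Randomly select one instance from a group for termination."""
    if not instances:
        return None
    return random.choice(instances)

def chaos_round(groups, terminate):
    """For each named group, terminate one random instance.

    groups: dict mapping group name -> list of instance IDs.
    terminate: callable that actually stops an instance (placeholder).
    Returns the victims chosen, so each run can be logged and reviewed.
    """
    victims = {}
    for name, instances in groups.items():
        victim = pick_victim(instances)
        if victim is not None:
            terminate(victim)
            victims[name] = victim
    return victims
```

The essential discipline is not in the random choice but in the surrounding controls: scheduling runs while engineers are present, logging every victim, and reviewing the outcome afterward.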

Practices of Chaos Engineering


There are four main steps in implementing chaos engineering experiments.
1. Start by defining "steady state" as some measurable output of a system that indicates normal behavior.
2. Hypothesize that this steady state will continue in both the control group and the experimental group.
3. Introduce variables that reflect real-world events, like servers that crash, hard drives that malfunction,
network connections that are severed, and so on.
4. Try to disprove the hypothesis by looking for a difference in steady state between the control group
and the experimental group.

Principles of Chaos Engineering, https://principlesofchaos.org/


Experiments are about breaking things in a controlled environment and are planned to build confidence,
contain the damage, and provide useful results.
First, start with the steady state, which means understanding the behavior of the system under normal
conditions. Having a known baseline is important to have something to compare to when introducing
deviations and something to return to at the end of the experiment (it is all about control). Focus on the
business transactions, the end-user experience, or flows that are both measurable (you can gather metrics)
and meaningful. For example, the number of orders for an online retail store will be affected significantly
by an increase in response time, ultimately causing a drop in sales (revenue).
Then, build a hypothesis. What if the shopping cart breaks? Or the load balancer in front of the shopping
cart service is overloaded? Or network latency increases because of a major undersea fiber-optic cable failure?
Involve everyone who is related to that service, from product management to architecture to operations, and
let them answer the question that is posed.
When a hypothesis has been articulated and chosen, it is time to design and run the experiment. Keep in
mind that these experiments are not meant to directly break your production. They should be aimed at
validating systems that you think are resilient and finding out to what degree that assumption holds.
Performing these experiments in production might be scary at first. Although production should be the end
goal, those just starting out should focus on other environments (development, testing, staging) to build
confidence in the system's ability to withstand turbulent conditions. That approach paves the road to
moving some of these experiments into the production environment. Sometimes, there is no alternative to
the conditions of real use at production scale. Learning how to run these experiments safely and in a
controlled fashion is important to the long-term success of any such initiative. You especially need to know
how to limit the extent of problems and not cause harm to the business.
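The four steps can be condensed into a minimal experiment harness: measure the steady-state metric, introduce the fault, measure again, and check whether the hypothesis (steady state continues within a tolerance) survives. The metric, the fault hooks, and the tolerance below are illustrative assumptions.

```python
def run_experiment(measure, inject_fault, remove_fault, tolerance=0.10):
    """Compare steady state against behavior under an injected fault.

    measure: callable returning one steady-state metric
             (for example, orders per minute).
    inject_fault / remove_fault: callables toggling the failure condition.
    Returns (baseline, observed, hypothesis_holds).
    """
    baseline = measure()        # step 1: define the steady state
    inject_fault()              # step 3: introduce a real-world event
    try:
        observed = measure()    # step 4: look for a difference
    finally:
        remove_fault()          # always return the system to steady state
    deviation = abs(observed - baseline) / max(abs(baseline), 1e-9)
    # step 2's hypothesis: the steady state continues within tolerance
    return baseline, observed, deviation <= tolerance
```

Wrapping the fault removal in `finally` reflects the "contain the damage" principle: the experiment must end with the system back in its known-good state even if the measurement itself fails.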

Then, verify the end state and learn from the results to improve the resiliency of the system itself.
Investigate as many relevant metrics as are available, such as the following: How long did it take to detect
the problem, then to notify the appropriate entity? What was the time for graceful degradation (if it
happened) or nonresponse/outage? Did the service recover, how long did it take, and was it fully or partially
successful? How much time was required for complete recovery to steady state?
Finally, you can improve the resiliency of the system by fixing any issues found during these experiments.
Prioritizing these experiments over other development might take a significant cultural change within the
greater organization!
Dark debt lurks in complex systems, and the anomalies it generates are complex failures. Dark debt can be
present anywhere, so it is important to avoid focusing only on infrastructure. Infrastructure may be the most
straightforward entity for which to build hypotheses and design experiments, but you should also consider
people, processes, workflows, and other dependencies beyond the applications and their infrastructure.
1. Which statement regarding chaos engineering is true?
a. Chaos engineering only applies to microservices architectures.
b. Chaos engineering is only performed manually.
c. Chaos engineering tests the ability of a system to withstand turbulent conditions.
d. Chaos engineering is about creating outages.
e. Chaos engineering is random experimentation.

Summary Challenge
1. Which two statements regarding software testing patterns are correct? (Choose two.)
a. Testing a complex application as a whole is a lot easier because you have all the components in
one place.
b. Baselines should be captured when the application is under load.
c. Testing each component as it is developed in a continuous fashion builds knowledge and
confidence in the service.
d. You should never try to overload an application because it might cause other services to break.
e. After taking an application to its breaking point, you should reduce the load and monitor what
happens as it recovers.
2. Which two statements correctly describe AppDynamics flow maps? (Choose two.)
a. Node flow maps display data (metrics) for a specific business transaction.
b. Snapshot flow maps contain metrics that belong to a specific execution of a transaction.
c. Cross application flow maps show exit calls between different applications within the
environment.
d. Database flow maps show the topology and metrics for all the business transactions of that
database.
e. Cross-application flow maps show exit calls within the same application classes.
3. Which two options are metrics that are provided by AppDynamics End User Monitoring? (Choose
two.)
a. the location of the slowest client response times
b. the server with the highest memory usage
c. the region that generates the heaviest application load
d. the application that has the most errors
e. the top database queries
4. Which three statements regarding the flow lines in AppDynamics flow maps are correct? (Choose
three.)
a. Green lines indicate that performance is within the baseline.
b. Blue lines indicate that performance is within the baseline.
c. Red lines indicate asynchronous connections.
d. Yellow lines indicate that performance is worse than the baseline.
e. Dashed lines indicate asynchronous connections.
f. Green lines indicate that there is no baseline available yet.
g. Dashed lines indicate synchronous connections.
5. When troubleshooting an API call error with AppDynamics, which two options are part of the
captured information? (Choose two.)
a. application stack traces with exception details
b. timestamp and error codes
c. connected services status
d. database contents
e. capture of the network packets

6. Which three statements correctly describe chaos engineering? (Choose three.)
a. Chaos engineering uses both controlled and planned engineering experiments.
b. Chaos engineering uses uncontrolled but planned engineering experiments.
c. Chaos engineering is only performed unsupervised.
d. Chaos engineering experiments do not need any monitoring.
e. Chaos engineering experiments aim to break production.
f. Chaos engineering aims to reveal weaknesses in complex systems.
g. Chaos engineering aims to improve resilience and build confidence.
7. Which two statements correctly describe chaos engineering practices? (Choose two.)
a. The steady state is the measurable output of a system under normal load.
b. A hypothesis should be formed about application behavior that is unknown.
c. A hypothesis should test the designed resilience of a service.
d. An experiment should only test a hypothesis that you know will break the application.
e. The steady state is the measurable output of a system under abnormal load.
f. Proving the hypothesis is enough; fixing the problems is not your responsibility.

Answer Key
Application Health and Performance
1. A, B

AppDynamics Overview
1. A, B, C

Troubleshoot an Application Using AppDynamics with APM


1. A, C

Chaos Engineering Principles


1. C

Summary Challenge
1. C, E
2. B, C
3. A, C
4. A, D, E
5. A, B
6. A, F, G
7. A, C

Section 13: Securing DevOps Workflows

Introduction
Security is a very important aspect of software development and infrastructure management. Integrating
information security concepts into DevOps workflows requires a few mindset, tooling, and cultural changes.
First, you will learn about DevSecOps practices and what adding security to DevOps means. Then you will
take a closer look at application security and the types of tools that can be used. Finally, there is a discussion
of securing the infrastructure as a critical element of a resilient system and a few tools are examined that can
achieve these goals.

DevSecOps Overview
When discussing DevOps, it is easy to focus on the two main domains—development and operations—and
how they interact with each other. IT security is an often-overlooked line item that is added after
development takes place, when a separate specialized security team gets to play its part.
To derive all the advantages of the agility in DevOps methodology, security needs to be a shared mindset
that is integrated into the full lifecycle of the application (inception, design, build, test, release, support,
maintenance, and beyond). DevOps means frequent and fast-moving development, but outdated security
practices from the era of months- or year-long development cycles mean many of the benefits of DevOps
cannot be realized.
Some in the industry have coined the term DevSecOps to describe the shared responsibility for security as
part of the DevOps patterns and practices. In short, this approach involves thinking about application and
infrastructure security from the beginning, and adapting the tools and making the cultural changes necessary
to facilitate integration without slowing down the DevOps workflow. Examples are automation and cross-
functional team integration. Although many developers tend to focus on the functional side of their
application, inviting security teams to be involved from the start of the process helps build information
security practices into the code and associated workflows and builds capability across teams by sharing
insights on known threats, tools, mitigations, and training.

One challenge in integrating security into the speed-focused DevOps rhythm is that teams are usually not
aligned well enough, and create blockers, slowdowns, or risk by skipping certain tasks. It is therefore
important to use as much automation as possible so that the quantity and frequency of new code that
developers push can be given the correct InfoSec treatment at the same pace. This approach means
automating tasks such as code analysis, configuration checks, and vulnerability scanning in the testing and
integration pipeline, so that all new code is scanned rather than being rushed due to insufficient resources to
do these tasks manually (which could result in insecure code, leaked data, hardcoded credentials, vulnerable
libraries, and so on).
The evolution from monolithic architectures to microservices, where infrastructure is much more dynamic
and very distributed in nature, has changed how applications are built, deployed, and secured. In the same
way that the introduction of virtualization represented a paradigm shift, today's use of containers and all the
tech that comes with them (orchestrators, overlays, service meshes, and so on) represent another significant
change. Static policies, checklists, and perimeter security do not apply well to cloud-native technology,
which requires that security is in line with the other practices, applied continuously, and integrated at every
stage of the lifecycle.
The rise in popularity of container platforms (for example, Docker and Kubernetes) increased productivity for
their users (developers and system administrators), but it also created a new set of challenges and made
previous tools and workflows obsolete. Containers share the underlying operating system and kernel with
many other containers, yet they can be spun up and down very quickly and across many types of servers and
clouds. The isolation of the guest from its host operating system is not as strict as with virtual machines,
which means, for example, that a new category of attack vectors appeared from within containers. Although
containers must share certain things, like the CPU architecture, with their host
operating system, the libraries and tools inside the container can be from a different source (or distribution,
for example the host can run Ubuntu and the container Alpine Linux) and in various versions that were
frozen into the container filesystem. Because they are immutable, containers can easily hide vulnerabilities,
outdated libraries, and other dependencies unless they are constantly scanned and updated as needed.
Many typical DevOps workflows use cloud environments and products, and therefore share many of the
cloud's security considerations, such as delegating responsibility for security based on trust in public cloud
products, or the risks of using new, unaudited, or immature tools that provide the needed functionality and
scale of operation.
Poorly managed secrets and privileged (elevated) access controls can pose great risk. In highly distributed
and interconnected systems, the potential damage could be significant. An attacker could disrupt operations,
steal data, take down the infrastructure, install back doors, and so on. It is well known that inadequate
secrets management is common in DevOps environments, which rely heavily on sharing things like account
credentials, API tokens, and SSH keys, both in automated tools and orchestrators, or simply on team-
member machines. Using a wide range of tools to manage an even larger number of infrastructure devices,
servers, and applications requires a certain amount of secret sharing, and these tools are often granted broad
privileged access. This situation may give malware or hackers full control of many different systems
and access to the data that is hosted on them (usually of a confidential nature). Therefore, it is critical to
enforce good discipline and provide the least amount of privilege that is required to do the job.
Ultimately, adding security to the DevOps equation means protecting the application as much as the
environment housing it, the infrastructure, the tools, and the CI/CD processes.

Embracing the DevSecOps Model
• Cross-functional collaboration and buy-in to integrate security into the entire lifecycle.
• Enforce policy and governance, but assist teams by ensuring transparency and good communication of
policies.
• Automate security processes and tools to match the speed and scale of the rest of the DevOps flow.
• Perform continuous discovery so that all devices, accounts, and tools are validated and under security
management.

Ensure that everyone takes ownership of implementing security best practices in their roles, so that you can
release secure products with confidence, produced by a cybersecurity-integrated DevOps flow.
Functions such as AWS Identity and Access Management (IAM), privilege management, firewalling and
unified threat management, code review, configuration management, and vulnerability management must be
woven into the entire product lifecycle (product design, development, delivery, operations, support, and so
on).
Security tools should be automated as much as possible. When these tools are integrated and scale well with
the rest of the DevOps environment, you avoid creating a bottleneck and reduce the chance for resistance
from other teams when trying to embed security practices. Minimize the risk of human error by deploying
tools that identify potential threats, problematic code, and vulnerabilities, and that manage credentials,
secrets, and patching.
• Manage vulnerabilities across all environments.
• Perform configuration management, create hardened baselines, and continuously scan to identify and fix
misconfigurations or deviations.
• Manage credentials and secrets.
• Control, monitor, and audit access with privileged access management.
• Segment infrastructure to reduce an attacker's ability to move laterally if a system is compromised.

Scanning for vulnerabilities and identifying potential exploits and other related issues should be done across
development and integration environments. This approach identifies as many weaknesses as possible before
deployment to production. Software should be under scrutiny for its entire lifetime because vulnerabilities
may appear anytime in dependencies and libraries.
Separate the credentials from the code so that they are stored securely and used when needed (for example,
at run time when deployed in the correct environment). Avoid any kind of hardcoding in scripts, tools, and
other files so that they can be rotated or changed with ease through automation rather than manual effort.
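As a minimal illustration of separating credentials from code, the sketch below reads a token from the environment at run time. The variable name `APP_API_TOKEN` is an assumption for this example, and a production setup would more likely pull from a dedicated secrets manager.

```python
import os

def get_api_token():
    """Read the API token from the environment at run time.

    The token never appears in source control, and rotating it means
    updating the environment (or secrets store), not the code.
    """
    token = os.environ.get("APP_API_TOKEN")
    if not token:
        # Fail loudly rather than silently using a hardcoded default.
        raise RuntimeError(
            "APP_API_TOKEN is not set; refusing to fall back to a "
            "hardcoded credential"
        )
    return token
```

Failing fast when the secret is missing makes misconfiguration visible in testing instead of shipping an insecure default.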
By enforcing the least-privileged access model, you can reduce the chances for any attackers to gain access.
Developer and tester access should be restricted to only the specific environments they need to build their
machines, images, and deployment tooling, with privileged access granted as needed but monitored and
logged to ensure that there is an audit trail.

Secure Applications and Their Supporting Infrastructure
Applications are supported by the underlying infrastructure and the tooling that makes it possible for them
to run in their respective environments. These systems also present a surface for attack that is not often
obviously tied to the application itself. For example, continuous integration systems can be compromised to
allow attackers to insert malicious code into the pipeline, steal credentials, or simply disrupt the build and
testing flows. If the pipeline has write access to code repositories, it may even cause further issues by
committing malicious code in many applications or simply allowing theft of the source code.
Adequately protecting the integrity of applications therefore requires that the attack vectors and weaknesses
are mitigated in the build, test, and deployment pipelines. This goal can be achieved by implementing some
of the following strategies.
• Harden CI/CD and build servers, and standardize and treat them like production infrastructure.
• Run CI processes in an isolated environment (such as containers or VMs).
• Ensure that the CI system uses read-only credentials (least privileged) for the code repositories.
• Review all changes that are introduced into version control to prevent CI servers from potentially
running uncontrolled code.
• Standardize and automate the environment, and remove special cases and manual access.
• Integrate security scanners for infrastructure containers and tooling.
• Automate security updates, such as patches for known vulnerabilities.

1. Which two options are valid types of credentials that should be part of a secrets management
policy? (Choose two.)
a. API tokens
b. secret questions
c. account numbers
d. SSH keys
e. IP and port numbers

Application Security in the CI/CD Pipeline
Testing application security is ideally an ongoing process that is integrated into the regular development
process and all the way into operations—essentially throughout the whole lifecycle of a product. As the
number of development teams increases, it is increasingly difficult for operations teams to keep up with all
the deployment and support work, especially because these teams are usually of very different sizes. The
same problem exists in the security arena, where InfoSec teams are seen as bottlenecks, in terms of added
restrictions and slowing down the time to production due to complicated reviews at the end of the
development cycle. If an organization tries to introduce DevOps patterns in its workflows, InfoSec and
compliance can be common blockers if the DevOps teams are not yet ready to embrace the same tooling,
automation, and agility.
Understanding and building better tooling is therefore as important as integrating information security into
the daily work of the DevOps teams, through a combination of education, tooling, and automation. By
inviting InfoSec staff to regular checkpoints during development, and integrating their tools into the CI/CD
pipelines for continuous testing, a shorter feedback loop is created. Therefore, developers find out about
critical issues earlier, when the cost of making corrections (both time and effort) is much lower.
Most development testing is focused on whether a piece of code functions correctly (the happy path), where
everything works as expected. Effective InfoSec and QA testing focuses on the unhappy path, or what
happens when things go wrong and the application receives erroneous or malicious input.

Security testing tools may belong to one of the following categories:
• Static analysis: These tools perform tests directly on the application code without running it. They will
inspect the code for known bad practices and custom-defined coding styles, perform linting, or search
for embedded secrets that should not be there.
• Dependency scanning: These tools are also static analyzers, but worth mentioning separately,
especially in the context of modern applications that have tens or hundreds of external dependencies
(packages, libraries, and so on). Ensuring that dependencies do not have any known vulnerabilities or
that known malicious libraries are not used is therefore very important.
• Dynamic analysis: These tools execute tests on running applications. They monitor memory usage,
functional behavior, or application performance (for example, response times) while interacting with the
application from the outside, just like any other user or attacker would.
• Code signing and integrity verification: Using version control systems that support signed commits
and releases ensures that you can audit and track changes to the person who performed them in a secure
manner. Released code from the automated build system should be signed and verified at deployment
time for integrity and authenticity.
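To give a flavor of the static-analysis category, the sketch below scans source text for patterns that resemble embedded secrets. The two patterns are deliberately simplistic illustrations; real secret scanners ship large, curated rule sets plus entropy checks.

```python
import re

# Illustrative patterns only; production scanners use hundreds of rules.
SECRET_PATTERNS = [
    re.compile(r"""(?i)(password|passwd|api[_-]?key|token)\s*=\s*['"][^'"]+['"]"""),
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
]

def scan_source(text):
    """Return (line_number, line) pairs that look like embedded secrets."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            findings.append((lineno, line.strip()))
    return findings
```

Running such a check in the CI pipeline on every commit means an accidentally committed credential fails the build immediately, rather than surfacing in a later audit.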

In addition to testing application security, it is important to secure the environment in which applications are
built or executed. Environments (servers, VMs, containers, and so on) should be built from a hardened
baseline that reduces risk. Their running instances should be monitored continuously to ensure that there is
no departure from the known hardened configuration (such as key lengths, security settings in running
daemons, or database security). Because new vulnerabilities are found all the time, scanning environments
regularly through automated tests ensures that you will know when updates are needed to either the
configuration or the hardened base images themselves.
Integrating security telemetry (monitoring, logging, and alerting) into the same platforms that development,
testing, and operations use daily, gives everyone a complete view of how the applications are performing.
This approach is especially important when deployed in production environments that are hostile (that is,
outside attackers are constantly scanning for weaknesses and vulnerabilities). One of the desired outcomes
is that security incidents such as breaches become easier to detect and mitigate when they happen, instead of
months or years later when they eventually leak on the Internet or that a third party notices long after the
fact.
Detecting problematic user behavior can be achieved through simple yet effective analysis of metrics such
as the number of successful and unsuccessful logins, user-initiated password resets, or email address (or
other private details) changes. Monitoring environments for operating system configuration changes, service
status, restarts, and cloud infrastructure changes, and applications for SQL injection attacks and cross-site
scripting attacks can also yield interesting information. Of course, the baseline of normal system operation
must be evaluated and adjusted as needed, so you can identify abnormal behavior.
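A metric-based check like the failed-login analysis mentioned above can be sketched simply: count failures per source and flag anything far above the normal baseline. The event format and thresholds are illustrative assumptions that would be tuned against the system's real steady state.

```python
from collections import Counter

def flag_suspicious_logins(events, baseline_failures=3, multiplier=5):
    """Flag source IPs whose failed-login count is far above baseline.

    events: iterable of (source_ip, success) tuples from login logs.
    A source is flagged when its failure count exceeds
    baseline_failures * multiplier.
    """
    failures = Counter(ip for ip, ok in events if not ok)
    threshold = baseline_failures * multiplier
    return sorted(ip for ip, count in failures.items() if count > threshold)
```

Feeding this kind of derived metric into the same telemetry platform the rest of the team uses keeps security signals visible alongside performance data.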

OWASP Top 10 Application Security Risks


The Open Web Application Security Project (OWASP) (https://owasp.org) describes itself as "an open
community dedicated to enabling organizations to develop, purchase, and maintain applications and APIs
that can be trusted."
The OWASP provides many resources for free on its website, such as application security tools,
presentations, videos, cheat sheets, research, conferences, and mailing lists, for the wider dissemination of
application security best practices. The OWASP also publishes the OWASP Top 10, whose primary aim is
to educate software developers, architects, and managers about the most common security weaknesses and
what their consequences are. Guidance is provided on how to protect against these high-risk issues, how to
find them, and how to avoid them.

The latest version of the OWASP Top 10 can be found at https://owasp.org/www-project-top-ten/. At the
time of this writing, the most current release is the 2017 release, which is based on more than 40 data
submissions from firms that specialize in application security and an industry survey that more than 500
individuals completed. This data spans vulnerabilities that were gathered from hundreds of organizations
and more than 100,000 real-world applications and APIs. The Top 10 issues are selected and prioritized
according to this prevalence data, in combination with consensus estimates of their exploitability,
detectability, and impact.
A brief overview of the OWASP Top 10 from the 2017 release is included here for reference, but further
reading of the whole published document is highly encouraged.
• A1:2017-Injection: Injection flaws, such as SQL, NoSQL, OS, and LDAP injection, occur when
untrusted data is sent to an interpreter as part of a command or query. The attacker’s hostile data can
trick the interpreter into executing unintended commands or accessing data without proper
authorization.
• A2:2017-Broken Authentication: Application functions related to authentication and session
management are often implemented incorrectly, allowing attackers to compromise passwords, keys, or
session tokens, or to exploit other implementation flaws to assume other users’ identities, temporarily or
permanently.
• A3:2017-Sensitive Data Exposure: Many web applications and APIs do not properly protect sensitive data, such
as financial, healthcare, and personally identifiable information (PII). Attackers may steal or modify
such weakly protected data to conduct credit card fraud, identity theft, or other crimes. Sensitive data
may be compromised without extra protection, such as encryption at rest or in transit, and requires
special precautions when exchanged with the browser.

• A4:2017-XML External Entities (XXE): Many older or poorly configured XML processors evaluate
external entity references within XML documents. External entities can be used to disclose internal files
using the file URI handler, internal file shares, internal port scanning, remote code execution, and denial
of service attacks.
• A5:2017-Broken Access Control: Restrictions on what authenticated users are allowed to do are often
not properly enforced. Attackers can exploit these flaws to access unauthorized functionality and/or
data, such as access other users' accounts, view sensitive files, modify other users’ data, change access
rights, etc.

• A6:2017-Security Misconfiguration: This is commonly a result of insecure default configurations,
incomplete or ad hoc configurations, open cloud storage, misconfigured HTTP headers, and verbose
error messages containing sensitive information. Not only must all operating systems, frameworks,
libraries, and applications be securely configured, but they must be patched and upgraded in a timely
fashion.
• A7:2017-Cross-Site Scripting (XSS): XSS flaws occur whenever an application includes untrusted
data in a new web page without proper validation or escaping, or updates an existing web page with
user-supplied data using a browser API that can create HTML or JavaScript. XSS allows attackers to
execute scripts in the victim’s browser which can hijack user sessions, deface web sites, or redirect the
user to malicious sites.

• A8:2017-Insecure Deserialization: Insecure deserialization often leads to remote code execution. Even
if deserialization flaws do not result in remote code execution, they can be used to perform attacks,
including replay attacks, injection attacks, and privilege escalation attacks.
• A9:2017-Using Components with Known Vulnerabilities: Components, such as libraries,
frameworks, and other software modules, run with the same privileges as the application. If a vulnerable
component is exploited, such an attack can facilitate serious data loss or server takeover. Applications
and APIs using components with known vulnerabilities may undermine application defenses and enable
various attacks and impacts.

• A10:2017-Insufficient Logging & Monitoring: Insufficient logging and monitoring, coupled with
missing or ineffective integration with incident response, allows attackers to further attack systems,
maintain persistence, pivot to more systems, and tamper, extract, or destroy data. Most breach studies
show time to detect a breach is over 200 days, typically detected by external parties rather than internal
processes or monitoring.
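Several of these risks have standard code-level mitigations. For the injection risk (A1), the canonical defense is to keep untrusted input out of the query text entirely by using parameterized queries. The following Python sketch, using the standard library sqlite3 module with a hypothetical users table, contrasts the unsafe and safe approaches:

```python
import sqlite3

# Hypothetical in-memory database with a users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

# Attacker-controlled input attempting a classic injection.
user_input = "alice' OR '1'='1"

# UNSAFE: string concatenation lets the input rewrite the query,
# returning every row in the table.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# SAFE: a parameterized query treats the input as a literal value,
# so the hostile string matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # [('alice',), ('bob',)] -- every row leaks
print(safe)    # []
```

The same principle (never splice untrusted data into a command) applies equally to NoSQL, OS, and LDAP contexts.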

Application Security Testing


Attackers cause many security incidents by gaining access through the exploitation of known software
issues. Reducing the existence of such bugs early in the process is a significant benefit to any organization.
Several testing tools have been developed to scan and identify such common vulnerabilities and issues
before an application is deployed into production.
Static Application Security Testing (SAST) is an established set of tools that allows developers to scan their
application code for bad or banned practices and vulnerabilities at any stage in the development cycle. This
scanning is usually performed early in the process because the code does not have to run. SAST can also
ensure that code conforms to coding standards.
Dynamic Application Security Testing (DAST) looks for security vulnerabilities and potential weaknesses
in running applications (as opposed to SAST, which only looks at code) without knowing what the actual
code looks like. DAST tools try all sorts of operations to spot common (known) types of vulnerabilities,
such as cross-site scripting or SQL injection. It can also find more complex problems that relate to a
particular workflow that a user must undertake in the application (for example, issues that are hidden behind
an authenticated session).
These categories of testing tools are often used together (although at different stages of the development
process), because they identify different types of issues. DAST finds run-time errors, which SAST cannot
see, whereas SAST finds coding issues, which DAST will not find (because it does not look at code, just the
running application).
• SAST
– The tester has access to the whole application, testing from the inside out (the developer approach).
– The application does not need to be deployed or executed (analyzes source code or the binary).
– Can be run in an automated fashion throughout the development cycle.
– Finds nonrun-time issues early, which should be easier to fix.
• DAST
– The tester has no knowledge of how the application is built inside; testing is from the outside (the
user or hacker approach).
– The application must be running for analysis, but does not need source code access.
– Scanning begins toward the end of the development cycle.

– Finds run-time issues, which may be complex to fix in the same cycle.
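To make the SAST side of this comparison concrete, the following deliberately simplified Python sketch shows the kind of banned-practice pattern matching such tools automate. It is a toy illustration, not a real SAST engine (real tools parse the code rather than matching regexes), and the rule set is hypothetical:

```python
import re

# Hypothetical banned-practice rules a team might enforce.
RULES = {
    r"\beval\(": "use of eval()",
    r"\bpickle\.loads\(": "deserializing untrusted data (A8)",
}

def scan(source: str) -> list:
    """Return (line_number, finding) pairs without ever executing the code."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES.items():
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings

sample = "x = eval(user_input)\nprint(x)\n"
print(scan(sample))  # [(1, 'use of eval()')]
```

Note that the scanner only reads the source, which is exactly why this class of check can run at any stage of development, even before the application can be built.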

Interactive Application Security Testing (IAST) can be considered an evolution of the older pair of SAST
and DAST into the world of modern web and mobile applications that rely heavily on many libraries and
frameworks. It inserts an agent within the application and allows analysis to take place at multiple stages of
the development process (in the development, CI, testing, or even production environments) and in real
time.
Because the IAST agent runs within the application, it has access to all its code, frameworks, data flows,
and run-time configuration. It can analyze requests and responses from other systems with which it
communicates such as other services, databases, or back ends. Therefore, an IAST can gain deeper insights
than SAST and DAST can on their own and verify a wider range of rules.

AppDynamics APM Security Considerations


The APM functionality that is provided by AppDynamics is tightly integrated into applications, which
allows it to offer a rich set of metrics and troubleshooting data that are gathered in real time. AppDynamics collects
data on the performance, health, and resources of an application, its components (transactions, code
libraries), and the related infrastructure (nodes, tiers) that services those components.
Sensitive data that is sent outside your infrastructure can break security policies. Therefore, depending on
how your application processes sensitive data, you may want to consider disabling certain features if you
are using the cloud-hosted or on-premises AppDynamics.
If your environment contains sensitive data that should not be processed by an AppDynamics product or
sent to the AppDynamics cloud-based SaaS instance, you should avoid the following:
• Applications that transmit sensitive data in URL query parameters
• Enabling HTTP request parameter capture
• Enabling bind variable capture
• Applications that send sensitive data in error logs and log files
• Allowing method invocation data collection
• Log captures (or ensure that you mask sensitive values in captured logs)
• Collection of raw SQL statements with actual dynamic parameters

As mentioned previously, AppDynamics offers an on-premises solution for customers who want to maintain
full control of their deployment of software and its collected data. With this type of implementation,
AppDynamics has no access to the software or the data it collects and processes, so customers that are
subject to strict regulatory requirements for data security may want to consider this option.

Adding Security Testing in the Pipeline


Most of the security testing tools that are discussed here should be used in an automated way, which is a
good fit workflow- and culture-wise with the rest of the DevOps-driven machine. These tests can be
integrated at various stages of the development lifecycle, before and inside the CI/CD pipeline.
Before code makes it into the CI/CD pipeline, certain checks can be performed in the development
environment by the IDE (for example, static code checks and linting before committing to source control),
code reviews by team members, and merge-request prechecks such as automated scans for embedded
secrets in code.

Once code is merged and the integration and delivery pipelines have started, a wide range of automated tests
can be performed, such as the tests that are indicated in the figure:
CI Tests:
• Unit tests (especially security-driven negative tests that look at unhappy path behavior)
• Static analysis, code linting, and other checks (such as banned libraries or practices)
• Vulnerability scanning of application components and libraries
• Security smoke testing using third-party tools

CD Tests:
• Application scanning with DAST tools
• Automated attacks
• Acceptance tests for security functionality (authentication and access control, identity management,
auditing, etc.) using behavior-driven development (BDD) testing frameworks

Of course, being able to make the most of the automated integration of these tools requires that the team is
correctly using and adhering to the required workflows. For example, all code changes must be checked into
the repository and code must be checked in frequently (so it is easy to identify the causes). The pipeline
tests should run consistently and quickly so that they do not become a drag or a block to development, but
when the tests fail, the team should prioritize fixing the problems before making more changes.
When developing tests or simply starting out, it may be beneficial to run tests in a separate pipeline so that
initial failures (perhaps not due to issues in the application) will not cause many failed builds immediately.
It is important to find a balance among test coverage, run time, number of false positives, and overall
usefulness, so that their integration into the normal workflow is both useful and welcome, instead of
becoming a nuisance.
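Concretely, a team starting out this way might add a dedicated security job that reports findings without blocking other work. The following GitLab CI fragment is an illustrative sketch only: the stage layout, image, and scanner command (here the open-source safety dependency checker, as one example of vulnerability scanning of libraries) are assumptions, not part of the course lab.

```yaml
stages:
  - build
  - security

dependency_scan:
  stage: security
  image: python:3.11          # illustrative image choice
  script:
    # hypothetical commands; substitute your chosen scanners
    - pip install safety
    - safety check -r requirements.txt
  allow_failure: true          # report findings without failing the pipeline
```

The `allow_failure: true` keyword is what lets a newly introduced scan run visibly while it is still being tuned; once the false-positive rate is acceptable, it can be removed so that findings block the merge.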
1. Which two statements regarding application security testing are correct? (Choose two.)
a. SAST is performed on the application code.
b. DAST is performed on the application code.
c. SAST finds run-time issues in the application.
d. DAST finds run-time issues in the application.
e. SAST cannot be automated.

Infrastructure Security in the CI/CD Pipeline
Using automated integration, build, testing, and deployment pipelines is an important enabler for all the
benefits of implementing DevOps practices, but they introduce another layer of activities and systems that
need to be secured. This issue becomes doubly important when you consider that these systems have access
(read-only though) to your code and can spin up test environments (resources, cost), install and run many
other tools (testing), and then package your code for distribution. If you are also embracing continuous
deployment, then the pipeline becomes a very attractive target for attack, and its compromise becomes
potentially more valuable than finding a vulnerability in a single application.
In addition to providing read access to code, an integration or build system will often protect or have access
to several credentials for other services, or secrets such as passwords and API keys. If the system also has
write access to code repositories, then attackers have much freedom, which allows them to inject back doors
and malware into your applications or code that allows them to steal data.
Protecting these systems from outside threats is not sufficient in itself, even though they will usually
reside on a protected segment of the internal network. Because this infrastructure is critical, it should also be
protected (to a reasonable degree) against inside threats. At the very least, you must ensure that any changes
or unauthorized access can be detected and attributed to a particular individual (no shared credentials).
There are a few things that you can do to better protect this critical infrastructure.
• Harden the CI/CD environments and the automated build tool chain.
• Keep tools and servers updated as if they were in production.
• Secure configuration management tools.
• Protect stored secrets and credentials.
• Lock down distribution repositories and use integrity checking and signed releases.
• Secure access from chat platforms or management systems.
• Log activities and ensure their integrity.
• Monitor everything.

To start with, the servers that house the tooling that is used in the CI/CD environments should be hardened
in the same way as your other production servers. Try to reduce their exposure by firewalling them from the
less-trusted parts of the network and reducing the number of manual changes that are done by privileged
users. Isolation features such as virtualization or containerization should be used to contain operations (and
reduce the attack surface) in case any of the tools are compromised.
You should also harden the automated build tool chain. This tool chain often consists of tools that are easy
to use and therefore have very permissive defaults. Their security features should be reviewed and locked
down. Separate environments, teams, application stacks, and servers into logical groups that do not
interact with each other. This approach reduces the exposure in case one component is compromised.
It may be very tempting not to touch complex build systems once they have grown into a critical function
for the whole product lifecycle. However, keep in mind that they are applications like any others, and new
vulnerabilities will be discovered, either in the tools themselves or the libraries they use. Having a separate
staging or testing environment for upgrading these tools becomes important if you want to avoid downtime
for your main pipelines.
Configuration management tools and all the secrets that are used by them and other pipeline tools should be
well protected because they can provide unrestricted access to systems and sensitive data.

Lock down distribution repositories, because they will house the released binaries of your applications (or
containers) that are eventually deployed into production. Even with trusted repositories, deployment should
not be done blindly, so use integrity checking and signed releases so that only known safe applications are
run.
Secure access from chat platforms or other IT management systems, especially when they can do more than
read the state, for example, in the case of ChatOps. Starting certain tasks from a bot on your Cisco Webex
Teams channel or from a new ticket that is created in your favorite IT Service Management (ITSM)
platform means that these applications have a certain level of access to your integration and build platforms.
All the activities that are happening in these systems can be overwhelming for anyone to track, which is
why having access to logs is important, both for troubleshooting and for investigation after an incident. It is also
important to properly store and back up these systems to ensure that no one can tamper with their data. Logs
should be part of a comprehensive monitoring setup that monitors the well-being of these systems.

Managing Secrets
One of the most common challenges is keeping secrets actually secret. This goal is especially important in
highly distributed systems, like in microservices architectures, where many different services need to talk to
each other, back ends, and databases, with authentication. There will be many usernames, passwords, API
tokens, cryptographic keys, and other secrets that need to be secured.
It is too easy for developers to simply place their set of credentials temporarily alongside their application
while they are trying to solve more difficult problems and then forget about them. This results in those
secrets getting into build systems, repositories, and logs, effectively compromising them.
Secrets should not be stored in code or alongside it, because it is rare that people work in isolation.
Repositories are shared by people within teams and usually by others (even if in read-only mode). If you do
not have any way to store some credentials other than in the code, then you can use tools like git-secret
(store private data in a git repository) or git-crypt (encrypt files in a git repository) with the Git version
control system. The risk is that someone might forget to encrypt before committing to the repository, so it
becomes necessary to add further analysis tools that inspect patches or merge requests for cleartext secrets.
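Such a pre-merge analysis can be as simple as pattern matching over the added lines of a patch. The following Python sketch is a hypothetical, minimal illustration; real secret scanners ship far richer rule sets and entropy checks:

```python
import re

# A couple of illustrative patterns; real scanners ship hundreds.
SECRET_PATTERNS = [
    re.compile(r'(?i)(password|api[_-]?key|token)\s*[:=]\s*"[^"]+"'),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
]

def find_secrets(text: str) -> list:
    """Return line numbers in a patch that look like cleartext secrets."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(lineno)
    return hits

patch = 'debug = True\napi_key = "s3cr3t-value"\n'
print(find_secrets(patch))  # [2]
```

Wired into a merge-request pipeline, a failed check like this stops the cleartext secret before it ever lands in shared history, which is far cheaper than rotating credentials after the fact.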
CI/CD systems often require access to many different secrets to build, test, package, or deploy applications;
several features of the GitLab CI platform are used here for that purpose. Configuration management tools
such as Ansible require, at the very least, some sort of authentication credentials when connecting to the
machines they manage. Thankfully, many tools are available to help lock down these secrets.
Ansible Vault (http://docs.ansible.com/ansible/playbooks_vault.html) is used here to encrypt secrets at rest,
and you will use it shortly.
Although many tools have developed their own specialized secrets management, it may be beneficial when
using many different tools to use a general-purpose secrets management platform for your tools and
applications (a commonly used platform is HashiCorp Vault). Your secrets management platform should
provide the following functionalities:
• Encrypt stored data
• Provide restricted access, with full audit trails and fine-grained access control rules
• Should be highly available, so it does not become a single point of failure
• Have an API for secure access to secrets by other tools

Ansible Vault
When writing Ansible playbooks and inventories, it is often necessary to provide credentials for remote
hosts or network devices. Additional variable files will often include sensitive data as well. Although
Ansible artifacts are all in plaintext and can (and should) be tracked under versioned source control, you
should not include sensitive data such as secrets when committing the files to (for example) a Git repository.
Ansible provides a simple tool called Ansible Vault (command: ansible-vault) that facilitates keeping
sensitive data in encrypted files that can then be shared or safely placed under source control. The ansible-
vault command can encrypt any structured data file that is used by Ansible, including variable files of all
types (including those files that are passed as extra variables with -e @file.yml/json), role variables, and
defaults.
The following is an example of an Ansible inventory file called hosts:
student@student-vm:vault (master)$ cat hosts
[all:vars]
ansible_connection=network_cli
ansible_user=cisco
ansible_password=cisco

[ios:vars]
ansible_network_os=ios

[ios]
csr1 ansible_host=192.168.10.101
csr2 ansible_host=192.168.10.102
csr3 ansible_host=192.168.10.103

If you want to encrypt the whole file, you can run the following command and ansible-vault will ask you
for a vault password (the encryption key), then replace the contents of the file with the ciphertext.
student@student-vm:vault (master)$ ansible-vault encrypt hosts
New Vault password:
Confirm New Vault password:
Encryption successful

student@student-vm:vault (master)$ cat hosts
$ANSIBLE_VAULT;1.1;AES256
363032<output truncated for brevity>3430

You can then test that the inventory file is actually encrypted and cannot be read unless the password for
decryption is provided. The following command invokes Ansible without a playbook, tells it to run the
debug module, and prints the value of the ansible_password variable.
student@student-vm:vault (master)$ ansible localhost -m debug -a
var='ansible_password'
[WARNING]: * Failed to parse /home/student/vault/hosts with ini plugin: Attempting
to decrypt but no vault secrets found

localhost | SUCCESS => {
"ansible_password": "VARIABLE IS NOT DEFINED!"
}

As expected, Ansible cannot parse the encrypted file by itself and has no inventory data to load, which
results in a message that ansible_password is not defined. Now add the --ask-vault-pass parameter to the
previous command and input the password.

student@student-vm:vault (master)$ ansible localhost -m debug -a
var='ansible_password' --ask-vault-pass
Vault password:
localhost | SUCCESS => {
"ansible_password": "cisco"
}

This parameter causes Ansible to use Vault to decrypt the file and then load it as a valid inventory, thus
printing the inventory-defined ansible_password value of cisco.
Ansible Vault also supports encrypting single values inside a YAML file, using the special !vault tag to
mark the variable for special processing. You will see how to perform this operation later.
To understand more about Ansible Vault and explore its other command-line parameters, you can find the
latest version of the documentation at https://docs.ansible.com/ansible/latest/user_guide/vault.html.

Storing Sensitive Variables in CI/CD


CI/CD tools often require a combination of credentials and other secrets to perform their work. Storing and
providing this sensitive information safely at run time for the pipeline scripts is the job of the CI/CD system.
This system also ensures that these scripts are not visible outside of the secure environment in which the
pipeline executes (for example, public logs).
As with most mature systems, GitLab CI/CD offers the ability to define environment variables that will be
provided to the execution environment of the pipeline at run time. These variables can be managed by
navigating to the Settings > CI/CD section of a repository and scrolling down to the Variables section.

Variables can be masked, which means that their value will be hidden in job logs with certain restrictions
(depending on which GitLab version you are running), such as being a single line and using only the Base64
alphabet. Variables can also be protected, which restricts their usage to protected branches or tags (which is
a security feature in GitLab that controls access to specific branches, based on a user's groups).
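As an illustration, the maskability constraints just described can be expressed as a small check run before defining a variable. This Python sketch is an approximation; the exact rules (minimum length, accepted characters) depend on your GitLab version:

```python
import string

# Characters GitLab's masking has historically accepted: the Base64
# alphabet plus '=' padding (an approximation; check your version's docs).
MASKABLE_CHARS = set(string.ascii_letters + string.digits + "+/=")

def is_maskable(value: str) -> bool:
    """Rough check: single line, long enough, Base64-alphabet only."""
    return (
        "\n" not in value
        and len(value) >= 8          # GitLab enforces a minimum length
        and set(value) <= MASKABLE_CHARS
    )

print(is_maskable("c2VjcmV0dG9rZW4="))   # True
print(is_maskable("multi\nline"))        # False
```

A value that fails such a check would still be stored, but its value could appear verbatim in job logs, which defeats the point of treating it as a secret.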
Other CI tools, such as Travis-CI, may also allow you to store variables in an encrypted format in the
pipeline definition file. The values will then be decrypted when the pipeline executes based on the private
key, which is known only to the Travis-CI instance.
notifications:
devops:
rooms:
secure: "DEF5OwLbwB8L7Da..."

Because GitLab does not support encrypted variables, to achieve a similar result, its variables functionality
can be combined with an encryption function that is provided by other tools such as Ansible Vault.
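For example, the vault password itself can be stored as a masked GitLab CI/CD variable and handed to Ansible at run time. The job below is an illustrative sketch; the variable name ANSIBLE_VAULT_PASS, the stage, and the playbook names are assumptions, not part of the course lab.

```yaml
deploy:
  stage: deploy
  script:
    # ANSIBLE_VAULT_PASS is a hypothetical masked variable defined under
    # Settings > CI/CD > Variables; write it to a temporary file so that
    # ansible-playbook can decrypt the vaulted values.
    - echo "$ANSIBLE_VAULT_PASS" > .vault_pass
    - ansible-playbook -i hosts site.yml --vault-password-file .vault_pass
    - rm -f .vault_pass
```

This way the ciphertext lives safely in the repository while the decryption key lives only in the CI system's variable store.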
1. Which statement regarding infrastructure security is true?
a. Tooling infrastructure servers do not need to be kept up-to-date.
b. Because tooling is usually hidden behind firewalls, it does not require very strict security.
c. Credentials that are stored in code are acceptable as long as the repositories are private.
d. CI/CD environments should be hardened as if they are in production.

Discovery 21: Secure Infrastructure in the CI/CD
Pipeline
Security separation between secrets stored in code and the applications that code supports is an important
piece of your overall security posture. In this activity, you will identify sensitive values, encrypt them, and
use the encrypted values in a pipeline that uses Ansible. The encryption technology is Ansible Vault, but the
pipeline will apply GitLab CI capabilities to store environment variables.

Topology

Job Aid

Device Information

Device Description FQDN/IP Address Credentials

Student Workstation Linux Ubuntu VM 192.168.10.10 student, 1234QWer

GitLab Git Repository git.lab student, 1234QWer

csr1kv1 Cisco Router 192.168.10.101 cisco, cisco

csr1kv2 Cisco Router 192.168.10.102 cisco, cisco

csr1kv3 Cisco Router 192.168.10.103 cisco, cisco

Command List
The table describes the commands that are used in this activity. The commands are listed in alphabetical
order so that you can easily locate the information that you need. Refer to this list if you need configuration
command assistance during the lab activity.

Command Description

ansible localhost -m debug -a Runs a single Ansible task from the command line. In this
var='ansible_password' scenario, you are running the debug module.

ansible-vault encrypt_string <value> The Ansible Vault mechanism for encrypting passwords.
--ask-vault-pass

cd directory name To change directories within the Linux file system, use the cd
command. You will use this command to enter into a directory where
the lab scripts are housed. You can use tab completion to finish the
name of the directory after you start typing it.

chmod <0-7><0-7><0-7> filename Changes the permissions and execution capabilities of
the file.

export key=value The Linux command to set an environment variable in the current
session. An example would be export ENV=PRODUCTION.

git add filename Adds a file to the git index. Use git add -A to add all changed files.

git checkout -b branch_name The git command to check out a branch, and optionally create the
branch using the -b flag.

git clone repository Downloads or clones a git repository into the directory that is the name
of the project in the repository definition.

git commit -m message The git command to commit the changes locally.

git push repo branch_name The git command to push the branch to the remote git service. The
repo is normally in the form of a named instance, usually a named
remote such as origin.

Task 1: Identify Critical Data


To push network changes, you will inevitably need to know the proper credentials of the networking
devices to push those configurations. Throughout the process so far, the values have been stored in code,
which goes against best practices. In this task, you will identify and separate the credentials from being
stored in Version Control.

The identification piece will always need to be done. In this use case, however, the separation piece is
complicated by the fact that Ansible does not support encrypted variables in the ini inventory-file format.
As such, the variables must be moved into YAML files.

Activity

Change the directory and obtain the code for the network inventory application.

Step 1 In the student workstation, find and open Visual Studio Code by clicking the Visual Studio Code icon.

Step 2 From the Visual Studio Code top navigation bar, choose Terminal > New Terminal [ctrl-shift-`].

Step 3 Navigate to the terminal section at the bottom of Visual Studio Code.

Step 4 Within the Visual Studio Code terminal, change the directory to labs/lab21 using the cd ~/labs/lab21
command.

student@student-vm:$ cd ~/labs/lab21/

Step 5 Issue the git clone https://git.lab/cisco-devops/net_inventory_iac command to clone the net_inventory_iac
repository.

student@student-vm:labs/lab21$ git clone https://git.lab/cisco-devops/net_inventory_iac
Cloning into 'net_inventory_iac'...
warning: redirecting to https://git.lab/cisco-devops/net_inventory_iac.git/
remote: Enumerating objects: 588, done.
remote: Counting objects: 100% (588/588), done.
remote: Compressing objects: 100% (172/172), done.
remote: Total 588 (delta 403), reused 587 (delta 403)
Receiving objects: 100% (588/588), 3.11 MiB | 15.86 MiB/s, done.
Resolving deltas: 100% (403/403), done.

Step 6 Change directory to the net_inventory_iac directory by issuing the cd net_inventory_iac command.

student@student-vm:labs/lab21$ cd net_inventory_iac/
student@student-vm:lab21/net_inventory_iac (master)$

View the Current Passwords


View the passwords as they currently exist within the ini file.

Step 7 Change directory to the iac/ansible/ directory by issuing the cd iac/ansible/ command.

student@student-vm:lab21/net_inventory_iac (master)$ cd iac/ansible/
student@student-vm:iac/ansible (master)$

Step 8 Examine the hosts file by issuing the cat hosts command. Notice the ansible_password variable definitions.

Note For the sake of brevity, there are two exceptions in this activity. First, only the csr1kv devices are in
scope. Second, you are running this activity on the production environment rather than on the test
environment, to reduce boot-up time. These differences are visible in the .gitlab-ci.yml file. Normally,
these activities would not be done directly on a production environment.

student@student-vm:iac/ansible (master)$ cat hosts
[all:vars]
ansible_connection=network_cli
ansible_user=cisco
ansible_password=cisco

[ios:vars]
ansible_network_os=ios

[ios]
csr1kv1 ansible_host=192.168.10.101
csr1kv2 ansible_host=192.168.10.102
csr1kv3 ansible_host=192.168.10.103

[asa:vars]
ansible_network_os=asa
ansible_become=true
ansible_become_method=enable
ansible_become_pass=cisco

[asa]
asa1 ansible_host=192.168.10.51

[routers:children]
ios

[firewalls:children]
asa
student@student-vm:iac/ansible (master)$

Move Credentials to a YAML File


Ansible leverages a concept of group variables, which predetermines the location of a file where any group
variable can be set. Within Ansible, there is a one-to-one mapping between filenames and groups held in the
group_vars folder. The group all exists by default, and its variables are automatically presumed to be in the
all.yml file.

Because of this ini file limitation, the values must be moved into a YAML file.

Step 9 Within the Visual Studio Code, navigate to the iac/ansible directory and edit the hosts file. In that file, delete
line 4, which contains the ansible_password variable.

Step 10 Within the ansible directory, create a new directory called group_vars. Right-click the ansible directory and
choose New Folder.

Step 11 Within the group_vars directory, create a new file called all.yml. Right-click the group_vars directory and
choose New File.

Step 12 Add a key/value pair ansible_password: cisco.

Step 13 Press Ctrl-S to save the hosts and all.yml files.

Task 2: Secure Ansible Secrets


Ansible Vault provides several mechanisms to secure variables. In this case, variables will be encrypted
inline rather than in a separate encrypted file. First you generate the encrypted value, and then you assign it
to the variable.

Activity

Encrypt the Password


Use Ansible’s command-line utility to encrypt a password. The ansible-vault encrypt_string [value] --
ask-vault-pass command encrypts the value argument.

Step 1 Use the ansible-vault encrypt_string cisco --ask-vault-pass command. The value to encrypt is the
device password cisco. Use 1234QWer when prompted for the vault password and its confirmation.

Note The hash values that you see will be different from the ones shown here. Use the values from your own
terminal.

student@student-vm:iac/ansible (master)$ ansible-vault encrypt_string cisco --ask-vault-pass
New Vault password:
Confirm New Vault password:
!vault |
          $ANSIBLE_VAULT;1.1;AES256
          38333065373730626136356639363236663637386132303937353664623134656364663430393864
          3235383435366463343966646238326235303866353132390a396535666236313432396333306132
          61306335323938356134626364653330636463633131623961376539303834393236646562383162
          6265633363663732620a393135636332346164306564656231316162386661613031613438316538
          3636
Encryption successful
student@student-vm:iac/ansible (master)$

Store Encrypted Password as Variable


You will now copy the encrypted value into the actual YAML file. Note the strict formatting: the !vault tag and the indentation of the encrypted block must be preserved.

Step 2 In the all.yml file, set the ansible_password variable to the encrypted value that you obtained from the
ansible-vault command.

ansible_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          38333065373730626136356639363236663637386132303937353664623134656364663430393864
          3235383435366463343966646238326235303866353132390a396535666236313432396333306132
          61306335323938356134626364653330636463633131623961376539303834393236646562383162
          6265633363663732620a393135636332346164306564656231316162386661613031613438316538
          3636

Step 3 In the ansible folder, create a file called vault_pw.sh. Add the following two lines to this file:

#!/bin/bash
echo ${ANSIBLE_VAULT_PASSWORD}

Step 4 Press Ctrl-S to save the vault_pw.sh and all.yml files.

Review the Encrypted Variables


By default, Ansible does not allow you to supply a vault password directly through an environment
variable, but you can point it at a script that returns the value.

To work around this limitation, a simple Bash script echoes the ANSIBLE_VAULT_PASSWORD
environment variable, and Ansible finds that script through the
ANSIBLE_VAULT_PASSWORD_FILE variable. Because Ansible must run this script on its own, you have to
make the script executable.
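The mechanism can be sketched end to end as follows; recreating vault_pw.sh here just makes the sketch self-contained, and the file contents and password match the lab:

```shell
# Ansible executes the script named by ANSIBLE_VAULT_PASSWORD_FILE and uses
# the script's standard output as the vault password.
export ANSIBLE_VAULT_PASSWORD=1234QWer
printf '#!/bin/bash\necho ${ANSIBLE_VAULT_PASSWORD}\n' > vault_pw.sh
chmod 770 vault_pw.sh   # the script must be executable for Ansible to run it
./vault_pw.sh           # prints 1234QWer
```

Because the secret lives only in the environment, nothing sensitive ends up committed to the repository.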

Step 5 Use the cat group_vars/all.yml command to view the all.yml file.

student@student-vm:iac/ansible (master)$ cat group_vars/all.yml
---
ansible_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          38333065373730626136356639363236663637386132303937353664623134656364663430393864
          3235383435366463343966646238326235303866353132390a396535666236313432396333306132
          61306335323938356134626364653330636463633131623961376539303834393236646562383162
          6265633363663732620a393135636332346164306564656231316162386661613031613438316538
          3636

Step 6 Use the cat vault_pw.sh command to view the vault_pw.sh file.

student@student-vm:iac/ansible (master)$ cat vault_pw.sh
#!/bin/bash
echo ${ANSIBLE_VAULT_PASSWORD}

Step 7 Change the permissions of the vault_pw.sh file by issuing the chmod 770 vault_pw.sh command.

student@student-vm:iac/ansible (master)$ chmod 770 vault_pw.sh
student@student-vm:iac/ansible (master)$

Test the Implementation Locally


You will now prove that the process works locally by setting the environment variables and running a
command to view the ansible_password variable.

Step 8 Set the environment variable ANSIBLE_VAULT_PASSWORD_FILE to vault_pw.sh. Use the export
ANSIBLE_VAULT_PASSWORD_FILE=vault_pw.sh command.

student@student-vm:iac/ansible (master)$ export ANSIBLE_VAULT_PASSWORD_FILE=vault_pw.sh

Step 9 Set the environment variable ANSIBLE_VAULT_PASSWORD to 1234QWer. Use the export
ANSIBLE_VAULT_PASSWORD=1234QWer command.

student@student-vm:iac/ansible (master)$ export ANSIBLE_VAULT_PASSWORD=1234QWer

Step 10 Test that your process works locally. Issue the ansible localhost -m debug -a var='ansible_password'
command.

student@student-vm:iac/ansible (master)$ ansible localhost -m debug -a var='ansible_password'
localhost | SUCCESS => {
"ansible_password": "cisco"
}

Task 3: Secure Credentials in the Pipeline


To secure the Ansible run at this point, you simply need to set two variables in GitLab CI.

Activity

Set the two Ansible Vault variables in the GitLab CI/CD settings of the net_inventory_iac project.

Step 1 From the Chrome browser, navigate to https://git.lab.

Step 2 Accept the privacy notifications, log in with the credentials that are provided in the Job Aids, and click
Sign in.

Step 3 From the list of projects, choose the cisco-devops/net_inventory_iac project.

Step 4 From the left navigation bar, choose Settings > CI/CD.

Step 5 Find the section for Variables and click the Expand button.

Step 6 Add a variable for ANSIBLE_VAULT_PASSWORD_FILE with the value of vault_pw.sh.

Step 7 Add a variable for ANSIBLE_VAULT_PASSWORD with the value of 1234QWer.

Step 8 Click Save variables to save changes.

Commit and Push Code
Now that the code has been updated, you will create a new branch for the code, make a new commit, and
push the commit to the remote git repository.

Step 9 In the terminal window, create a new branch called lab21 using the git checkout -b lab21 command.

student@student-vm:lab21/net_inventory_iac (master)$ git checkout -b lab21
Switched to a new branch 'lab21'

Step 10 Add all updated files to the git index using the git add -A command.

student@student-vm:lab21/net_inventory_iac (lab21)$ git add -A

Step 11 Commit the file to git using the git commit -m "Lab 21: Add encrypted vars" command.

student@student-vm:lab21/net_inventory_iac (lab21)$ git commit -m "Lab 21: Add encrypted vars"
[lab21 90dfc3e] Lab 21: Add encrypted vars
2 files changed, 21 insertions(+)
create mode 100644 iac/ansible/group_vars/all.yml
create mode 100755 iac/ansible/vault_pw.sh

Step 12 Push the branch to GitLab using the git push -u origin lab21 command. When prompted, provide your GitLab
credentials.

student@student-vm:lab21/net_inventory_iac (lab21)$ git push -u origin lab21
Username for 'https://git.lab': student
Password for 'https://[email protected]':
warning: redirecting to https://git.lab/cisco-devops/net_inventory_iac.git/
Counting objects: 8, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (7/7), done.
Writing objects: 100% (8/8), 876 bytes | 876.00 KiB/s, done.
Total 8 (delta 3), reused 0 (delta 0)
remote:
remote: To create a merge request for lab21, visit:
remote: https://git.lab/cisco-devops/net_inventory_iac/merge_requests/new?
merge_request%5Bsource_branch%5D=lab21
remote:
To https://git.lab/cisco-devops/net_inventory_iac
* [new branch] lab21 -> lab21
Branch 'lab21' set up to track remote branch 'lab21' from 'origin'.
student@student-vm:lab21/net_inventory_iac (lab21)$

Merge Request
Now that the code is in the git remote repository under a new branch, you need to submit a merge request to
have the code tested, and then the application deployed to the server.

Step 13 In the web browser, choose the cisco-devops/net_inventory_iac project.

Step 14 Because a branch was recently pushed to the server, GitLab displays a link at the top to submit a merge
request. Click Create merge request to create the merge request.

Step 15 Review the autocompleted information and submit the merge request. Click Submit merge request.

Step 16 Monitor the pipeline jobs, specifically the test_topology_build stage. Determine whether your job succeeded.

Summary
You reviewed how to store encrypted variables in Ansible and keep secrets out of code. The secret moves
from the device password to the vault encryption password, which is handled via environment variables on the
GitLab server.

Summary Challenge
1. Which two options are valid security practices? (Choose two.)
a. manually logging in to servers to perform changes using shared credentials
b. scanning for vulnerabilities only on production systems
c. creating hardened baseline systems for all environments
d. saving shared credentials in code repositories
e. scanning for vulnerabilities across all environments
2. Where should DevSecOps practices be implemented?
a. only on critical customer servers
b. in all stages of a product lifecycle
c. only in production, because it holds important data
d. only in development, to keep production fast and lean
3. Which two statements correctly describe SAST tools? (Choose two.)
a. SAST scans the running application.
b. SAST requires access to the application code.
c. SAST finds run-time issues early in the process.
d. SAST scans can be automated at all stages of the development cycle.
e. SAST can only be run manually.
4. Which two statements correctly describe DAST tools? (Choose two.)
a. DAST scans the running application.
b. DAST analyzes the application code.
c. DAST is generally used toward the end of the development cycle.
d. DAST scans can be automated at all stages of the development cycle.
e. DAST identifies bad coding practices.
5. Which statement regarding infrastructure security is false?
a. All infrastructure services, tools, and servers should be monitored.
b. Logging of activities in CI/CD environments is very useful.
c. Unsigned code should be run if it comes from a trusted repository.
d. Chat platforms should have strictly controlled access.
6. Which statement regarding secrets management is false?
a. Secrets should not be stored in code.
b. Secrets can be stored in code as long as the repository is private.
c. Tooling exists to encrypt secrets while at rest.
d. A central secrets management system should provide a secure API.
7. Which statement regarding secrets management is true?
a. Sensitive data in CI/CD log files does not pose any risk.
b. Secure CI/CD systems can encrypt and mask sensitive data.
c. Secrets have to be hardcoded into scripts because there is no other way to provide them to the
pipeline.
d. Private CI/CD systems do not need secrets management tools.

Answer Key
DevSecOps Overview
1. A, D

Application Security in the CI/CD Pipeline


1. A, D

Infrastructure Security in the CI/CD Pipeline


1. D

Summary Challenge
1. C, E
2. B
3. B, D
4. A, C
5. C
6. B
7. B

Section 14: Exploring Multicloud Strategies

Introduction
Every day, more companies are moving to the cloud. Companies are using the cloud to reduce costs,
improve scale, increase availability, and use multiregion disaster recovery. But what is the cloud? How do
you manage the cloud? What are the benefits of a public cloud? What about a private cloud? These
questions and more are answered in this section.

Application Deployment to Multiple Environments


To understand application deployment in multiple environments, it is good to have a solid understanding of
the environment, its architecture, and its design. There are private and public models of consumption
available. How you access resources can differ from provider to provider. This topic defines the cloud,
discusses cloud types, and addresses how you connect to cloud resources.

Cloud Definition

The National Institute of Standards and Technology (NIST) defined cloud computing in 2011 as follows:
“Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared
pool of configurable computing resources that can be rapidly provisioned and released with minimal
management effort or service provider interaction.”
The essential characteristics of cloud computing include the following:
• Resource pooling: Perhaps the greatest enabler of a cloud service is the ability to pool resources
together. All the other characteristics rely on the ability to make more resources
available when needed. When you have a resource pool with appropriate resources available, you can
provide the characteristics of a cloud.
• Broad network access: Broad network access allows access to systems without significant interaction
with a service provider. The correct controls should still be in place to prevent unauthorized network
access.
• Rapid elasticity: Rapid elasticity allows for a consumer of the cloud service to get more or fewer nodes
of service quickly. If a sudden spike of traffic occurs on a web server, the cloud service can add more
nodes to manage the incoming traffic load. When the spike in traffic subsides, the cloud service scales
the service back to normal size.
• Measured service: Measured service involves knowing how the resource pool is being used and by
whom. This approach allows for proper reporting and possible billing of the correct source.
• On-demand self-service: On-demand self-service is the ability for cloud customers (private or public)
to provision resources with minimum management or service provider interaction. Automation starts the
provisioning and deprovisioning of resources as needed from the self-service tool. When the resources
are available or removed, the end user will be notified of the state change.

Public Cloud Environment


Infrastructure provisioned for use by the general public.

Public cloud environments are defined as infrastructure that is provisioned for use by the general public.
This infrastructure resides on-premises at the cloud provider locations. This category includes Amazon
AWS, Google Cloud Platform, Microsoft Azure, Digital Ocean, OVH, and many others. These providers
are generally for-profit organizations that charge fees to cover the costs of the resource pools that are
required to meet the elasticity characteristics and offer on-demand services. The organizations tend to be
innovative in offering cloud services to customers.

Private Cloud Environment


• Cloud infrastructure provisioned for exclusive use by an organization
• On- or off-premises

A private cloud is provisioned on an organization’s own hardware, typically in its own data center or
colocated facility, for exclusive use by the organization provisioning the cloud. The cloud can be on- or off-
premises. Private clouds aim to provide the same services that public clouds offer to meet the requirements
of internal customers.

Considerations for Deploying to Multiple Clouds


• Performance
• Compliance
• Resilience
• Pricing

When deploying infrastructure and applications to a cloud, several considerations should be taken into
account:
• Will the application perform appropriately in the cloud in which you want to deploy it?
• If using virtual machines, are the VM characteristics appropriately sized?
• What is the latency to the data center locations from the cloud?
• Are there other performance metrics, like packet loss or jitter, to take into consideration?
• What connection types are available to deploy applications?

Compliance efforts may require that the data remains in the country of origin. Are there other data
governance requirements, such as General Data Protection Regulation (GDPR), that dictate where data
needs to be maintained? These requirements and other data privacy controls that are required for your data
will help you decide where to deploy your app.
Resiliency may be important in selecting a cloud provider. Many of the large public cloud providers are
geographically diverse. Local cloud providers may only have a single data center. These facts may drive
you to look at a multicloud deployment model to ensure that your systems are available through a possible
local disruption of service.
Pricing is a major concern when selecting a cloud provider. With a public cloud, you are using someone else’s hardware
investment, and there is generally a cost that is associated with those resources. One cloud provider may be
more cost-effective than others at deploying certain services; for example, Microsoft Azure may be more cost-effective than
some of the others when deploying Microsoft applications, such as Office 365.

Connect Private Clouds to Public Clouds

Connecting to public clouds from an on-premises private cloud, or hybrid cloud, offers multiple network
connection options. This diagram shows a couple of methods of connecting to the Google Cloud platform.
The first and most ubiquitous method of connecting to a public cloud is to use an IPsec VPN tunnel, where
a VPN connection is established from a network device on the private network side to a VPN concentrator
within the cloud provider. The VPN concentrator can then connect your private on-premises environment to
the virtual private cloud (VPC) network within the cloud provider.

The second connection type, which is becoming more common, is a private connection, at times through a
colocation facility. Products such as AWS Direct Connect, Microsoft Azure ExpressRoute, and Google
Cloud Interconnect provide dedicated, private network access to the VPC network without requiring a VPN
connection over the public Internet.

Connect Private Clouds to Multiple Public Clouds

When working with multiple cloud providers, you must consider their design and network access
capabilities. There is no methodology for extending a private access Multiprotocol Label Switching (MPLS)
network to a service provider. There are also no connection providers that provide direct access between
competing cloud service providers.
If you deploy private access such as Azure ExpressRoute or AWS Direct Connect, and you have traffic
flows that route between the cloud providers, the traffic will rely on another routing connection that you
configure and operate.
If you are using the public Internet with VPN connections to cloud services, you may be able to use Cisco
DMVPN with Cisco Cloud Services Routers that are provisioned within each cloud service provider. A
Cisco DMVPN hub set up on the private network has spokes to each of the cloud providers. When
configured to allow spoke-to-spoke traffic flows, this scenario uses the public Internet services that your
cloud providers have available.

Connection Types Compared

When comparing connections to a single cloud provider that has a choice of a dedicated private access
medium or the public Internet, there are some trade-offs to consider.
If you are concerned about throughput, QoS, low latency, inline services, or managed service for the
connection, you will want to look at private access.
If you are concerned about minimizing costs, quick provisioning, location and provider availability, or the
flexibility to connect to other services, you will want to look at using a VPN connection over the Internet for
connectivity to the cloud provider network.

Cloudburst
• Scale out from a private network.
• Meet the need for high demand periods.
• Minimize cloud costs while using internal infrastructure.

A significant reason for verifying that your applications can deploy to multiple clouds is the ability to use
another cloud during burst, or peak, usage times. You may have made a significant investment in your private
cloud environment, but the resource pool of a private cloud is still finite and may run out of capacity for the
application’s needs. Cloudbursting was designed to help with this issue.
In cloudbursting, your application is deployed in one cloud environment, but during times of peak load,
another cloud, usually a public cloud, is used to manage the extra load. This approach will help minimize
the cloud cost for the application while it runs on the already procured hardware in the private cloud, and
only uses the pay-as-you-go public resources when needed. This situation can also apply when using two
public clouds. You purchase a minimum usage amount on one cloud provider to save costs, but purchase
on-demand services from another provider that is more cost-effective than the day-to-day cloud provider.
1. Which is not a consideration for deploying to multiple clouds?
a. performance
b. compliance
c. pricing
d. resilience
e. underlying hardware vendor

Public Cloud Terminology Primer
The fundamentals of the technology that is deployed in the public cloud are the same as those deployed
on-premises in a data center or private cloud. In both cases, application requirements need to be defined,
including connectivity, resources, access, availability, storage, and failover, to name the more common
requirements.

Enterprise Cloud Infrastructure

• Maximum flexibility
• Total ownership
• Could lead to bad design decisions

Within an on-premises enterprise infrastructure, it is common to define the previously mentioned
requirements using terms such as VLANs or virtual routing and forwarding (VRF) instances for networking
connectivity, or to be concerned about the specific network operating system that is running on the data
center switches. From a compute and storage perspective, there are also hundreds of settings that you can
configure within a hypervisor manager such as vCenter. All these configurable settings are possible when
you deploy infrastructure in a data center that you own and manage. This situation leads to maximum
flexibility for building out infrastructure for any application at any time, in theory.
In reality, that flexibility has a downside. Historically, it was not uncommon to see suboptimal
design decisions due to demanding forces within an IT department or enterprise organization. One common
theme is to forgo proper application design and instead ask the systems and network teams to build redundancy
that should actually be built into the application itself.

Cloud Adoption and Technology

As the public cloud continues to gain adoption, there are a few common themes to note, including the
following:
It is critical to have a plan for hybrid and multicloud scenarios. Because you no longer own the underlying
infrastructure, you need to prepare for unknown failures and mitigate risk to ensure that the business and
applications can continue to run if there is any failure whatsoever.
Again, because you do not own the infrastructure, the administrators and application developers must work
within the constraints of the public providers. Therefore, an app team cannot demand Layer 2 connectivity
between sites, as an example. This approach provides less flexibility for end users, but quicker time to
deploy due to fewer overall options when building and deploying apps in a public cloud environment.
Application architectures continue to improve with the move to the public cloud using cloud-first and cloud-
native models that ensure that apps are built with the proper resiliency in mind. Although there are fewer
control options in the public cloud, it offers technology that is usually never built on-premises within an
enterprise—such as autoscaling, serverless technology, and PaaS. Properly deploying within a public cloud
offers tight control of the application without heavy capital expenditures.

Public Cloud Providers

There are several major cloud providers that are used for public-cloud deployments. The best known is
Amazon Web Services, but others include Google Cloud, Azure, and Digital Ocean, which is the smallest of
the four (it offers fewer services compared to the three major providers).
In any of these public cloud platforms, it is possible to spin up a server that has hundreds of gigabytes of
RAM and more than 12 virtual CPUs. This scenario is just one example of their core services (VMs). You
also need to enable things like a public IP address, backups, and so on.
Although these public cloud providers are changing the industry due to the vast number of services they
offer, it is often not an easy shift because of how the technology is presented to its user base. The
technology is fundamentally similar to what is deployed on-premises, but the terminology, the default
settings, the security implications, and the cloud portals take time to learn.
The following discusses how the terminology differs between public cloud providers.

Public Cloud Technology

The most popular service that the major cloud providers offer is the “virtual machine” service.
Although it is very common to deploy VMware ESXi, kernel virtual machines (KVMs), or Microsoft
Hyper-V on-premises to deploy virtual machines, the cloud providers have their own service (each with
varying features) for rapidly deploying a VM.
For example, VMs are called different things by different cloud providers:
• AWS: EC2
• Google Cloud: Compute Engine instance
• Azure: virtual machine
• Digital Ocean: Droplet

You can think of these names as marketing, but the capabilities are different, even for deploying something
like a VM. For example, how many interfaces are supported, are backups supported, are public IPs
supported, how many public IPs can be assigned to the same VM?
Also, the networking connectivity options from an on-premises location to a cloud network differ between
cloud providers. First, the cloud network itself is often a single network (entry point), but it is referred to
as a VPC by some cloud providers or as a VNet by Microsoft within Azure. Digital Ocean does not have
extensive features and only offers a private networking option (a check box) that allows droplet-to-droplet
communication. Second, as it pertains to external connectivity, the larger providers support a
diverse set of features, so it often takes a lot of research to ensure that you are comparing
similar features. These features include direct connect circuits, direct peering, carrier peering, and express
route services. The major cloud providers realize that, for continued adoption, they need to offer high-speed,
private ways of connecting private data centers around the world directly into their highly secure
multitenant cloud environments.

Although networking and compute are two of the primary services, cloud providers offer hundreds of
services. In fact, some deployments may not use networking or compute at all; they may use a
storage-based service, or serverless services that allow users to simply drop in code while the
provider manages the execution and scales accordingly, so that you do not have to manage any
infrastructure.

The following table shows how mature the three major cloud providers are, and how much terminology
you must understand when using and building public and multicloud strategies.

Category | Amazon Web Services | Microsoft Azure | Google Cloud Platform
Regions | Global Infrastructure | Regions | Regions and Zones
Pricing | Cloud Services Pricing | Pricing | Pricing
Basic Compute | EC2 | Virtual Machines | Compute Engine
Containers | ECS, EKS | AKS, Container Instances | Kubernetes Engine
Serverless | Lambda | Functions | Cloud Functions
App Hosting | Elastic Beanstalk | App Service, Service Fabric, Cloud Services | App Engine
Batch Processing | Batch | Batch | —
Object Storage | S3 | Blob Storage | Cloud Storage
Block Storage | EBS | — | Persistent Disk
File Storage | EFS | File Storage | —
Hybrid Storage | Storage Gateway | StorSimple | —
Offline Data Transfer | Snowball, Snowball Edge, Snowmobile | — | Transfer Appliance
Relational/SQL Database | RDS, Aurora | SQL Database, Database for MySQL, Database for PostgreSQL | Cloud SQL, Cloud Spanner
NoSQL Database | DynamoDB | Cosmos DB, Table Storage | Cloud Bigtable, Cloud Datastore
In-Memory Database | Elasticache | Redis Cache | —
Archive/Backup | Glacier | Backup | —
Disaster Recovery | — | Site Recovery | —
Machine Learning | SageMaker, AML, Apache MXNet on AWS, TensorFlow on AWS | Machine Learning | Cloud Machine Learning Engine
Cognitive Services | Comprehend, Lex, Polly, Rekognition, Translate, Transcribe | Cognitive Services | Cloud Natural Language, Cloud Speech API, Cloud Translation API, Cloud Video Intelligence
IoT | IoT Core | IoT Hub, IoT Edge | Cloud IoT Core
Networking | Direct Connect | Virtual Network | Cloud Interconnect, Network Service Tiers
Content Delivery | CloudFront | CDN | Cloud CDN
Big Data Analytics | Athena, EMR, Kinesis | HDInsight, Stream Analytics, Data Lake Analytics, Analysis Services | Cloud Dataflow, Cloud Dataproc
Authentication and Access Management | IAM, Directory Service, Organizations, Single Sign-On | Active Directory, Multi-Factor Authentication | Cloud IAM, Cloud IAP
Security | GuardDuty, Macie, Shield, WAF | Security Center | Cloud DLP, Cloud Security Scanner
Application Lifecycle Management | CodeStar, CodePipeline | Visual Studio Team Services, Visual Studio App Center | —
Cloud Monitoring | CloudWatch, CloudTrail | Monitor, Log Analytics | Stackdriver
Cloud Management | Systems Manager, Management Console | Portal, Policy, Cost Management | Stackdriver
Augmented Reality and Virtual Reality | Sumerian | — | —
Virtual Private Cloud | VPC | VNet | Virtual Private Cloud
Training | Training and Certification | Training | Training Programs
Support | Support | Support | Support
Third-Party Software and Services | Marketplace | Marketplace | Cloud Launcher, Partner Director

1. Which option is not a virtual networking construct in the public cloud?


a. VPC
b. virtual private cloud
c. VNet
d. virtual port channel

Tracking and Projecting Public Cloud Costs
Over the past decade, significant lessons have been learned, and many case studies written, about public cloud versus private cloud and when each makes sense.

Managing Costs in the Cloud

Deploying private clouds, or on-premises infrastructure, is the traditional approach and makes sense from a financial perspective, because assets are often purchased as capital expenses and depreciated over 3 to 7 years. The public cloud has changed things drastically for IT organizations that want to deploy applications and infrastructure: public cloud services are now paid for out of an operating budget (although the finance department can still capitalize this expense). Comparing public and private cloud costs is not a trivial process, and it is equally difficult for finance and the teams that manage IT budgets to project what costs will be once the public cloud is fully embraced. To address this, IT teams must really understand each public cloud service that might be used and create different financial models based on its possible usage. In the public cloud, costs can rise beyond the costs of a private cloud. However, by taking advantage of features like autoscaling, you can increase revenue in ways that are nearly impossible with a private cloud, so the increased revenue from the public cloud can directly fund the additional operating expenses.
For particular applications and workloads, enterprises have had no real choice but to adopt the public cloud, because internal processes, a lack of skills, and time constraints make building equivalent on-premises solutions impractical. For these workloads, and more generally, the public cloud clearly involves variable costs: you simply pay for what you use. However, although cost has been a driver for public cloud adoption, the outcome is often not what was expected.

Public Cloud Consumption


• 80 percent of enterprises believe that managing cloud spending is one of their biggest challenges.
• Cost savings are typically a primary motivation for moving to the public cloud.
• After migration to the public cloud, 53 percent of enterprises said that cost and budget are still a key
problem.
• To eliminate these issues, enterprises continue to look for ways to better manage cloud costs.

Nearly every business evaluates and deploys some type of workload in the public cloud. Most enterprises that migrate to the public cloud find that managing the overall dollars spent is their biggest issue, even though saving costs was the primary driver for moving to the public cloud in the first place.
Environments and organizations are different, and it is important to realize that there are many unknowns
concerning the public cloud. These unknowns include systems and the apps that are being deployed in the
cloud. There is still much to learn about overall management of costs that are related to the public cloud.
However, there are several key objectives that should be employed when using the public cloud.

Managing Public Cloud Costs


• Monitor costs frequently.
• Monitor unused resources.
• Require automation.
• Require proper naming conventions and tags company-wide.
• Consider a standalone cloud cost-management tool.
• Take a systematic approach to cloud cost management.
• Consider vendor lock-in.

It is critical to monitor costs that are related to the public cloud as often as possible. Cloud administrators should know what normal behavior looks like at any given time by using DevOps tools. For example, a collector such as Prometheus can export the right data, which can then be visualized in a platform such as Grafana, so that you can track costs over time.
The speed of deploying new resources is one of the major reasons why IT organizations choose to use the
public cloud. This speed could, in turn, have an unforeseen impact on spending. If administrators only
monitor on a weekly or monthly basis, there is a great chance that they will be surprised by unexpectedly
high charges.
In the interest of minimizing costs, it is also critical to monitor the instances and services that are not being
used. Administrators must understand the cost implications if a service is “on” but “not used,” to ensure that
the right resources and services are powered off.
Automation simplifies monitoring and ensures that money is not being spent on unused resources, which
often happens with traditional compute-based resources. It is so quick to spin up a new VM for development
or testing that team members often forget to spin it down.
Using automation together with strict naming standards and well-organized tags can also provide greater insight into the usage and pricing of a given cloud environment. Of course, like any other standard, it is fundamentally required that all cloud users and administrators enforce the policies for naming and tags.
Tags can indicate several different attributes, such as whether a resource is a production, development, or test workload; which application it belongs to; which department launched or requested it; which compliance requirements it must meet; and what its priority is.
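As an illustration of how tags feed cost reporting, the following sketch groups hypothetical billing rows by a dept tag. The records, tag keys, and costs are invented for this example and do not reflect any real provider's billing-export format.

```python
from collections import defaultdict

# Hypothetical billing-export rows: (resource_name, monthly_cost_usd, tags)
records = [
    ("vm-01", 120.0, {"env": "prod", "dept": "retail"}),
    ("vm-02", 35.5, {"env": "dev", "dept": "retail"}),
    ("db-01", 80.0, {"env": "prod", "dept": "finance"}),
    ("vm-03", 12.0, {}),  # an untagged resource stands out immediately
]

# Sum spending per department; resources missing the tag are flagged.
cost_by_dept = defaultdict(float)
for _, cost, tags in records:
    cost_by_dept[tags.get("dept", "untagged")] += cost

print(dict(cost_by_dept))  # {'retail': 155.5, 'finance': 80.0, 'untagged': 12.0}
```

The "untagged" bucket is the point of enforcing tag policies: untagged spend cannot be attributed to anyone.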
There is no doubt that automation plays a critical role in optimizing cloud costs, but understanding those costs is more difficult. For organizations that try to manage the public cloud using manual monitoring, "eye-balling" statistics, and manual deployments, costs will continue to rise. Organizations that are adopting the cloud are also exploring the use of DevOps and IaC. This approach makes it possible to destroy an instance or service in the cloud and fully rebuild it from scratch within minutes, because the desired state is defined in files that tools such as Terraform, CloudFormation, and Ansible interpret.

For many enterprises, the type of automation solution that might make the most sense is a standalone cloud
cost monitoring and optimization tool. These solutions often cost a fraction of the price of more complex
hybrid cloud management tools. A few examples of these types of tools include RightScale, CloudHealth
Technologies, Turbonomic, Densify, Apptio, and CloudCheckr.
Of course, cloud cost management requires far more than just technology. It also requires that organizations
get the right people and processes in place. The public cloud and its adoption need to happen systematically.
One approach, by Gartner, is a framework for public cloud cost management:
• Plan: Create a forecast to set spending expectations.
• Track: Observe your actual cloud spending and compare it with your budget to detect anomalies before
they become a surprise.
• Reduce: Quickly eliminate resources that waste cloud spending.
• Optimize: Use the provider’s discount models and optimize your workload for cost.
• Mature: Improve and expand your cost management processes on a continual basis.

Finally, administrators need to be aware of vendor lock-in. The world's largest public cloud providers
naturally control the public cloud market. Enterprises should be cautious of becoming overly dependent on
any one provider or of using services that might make it difficult to migrate to a different vendor. These
conditions could drive up costs over time.
1. What is one benefit of the private cloud versus the public cloud?
a. variable costs
b. fixed costs
c. cloud lock-in
d. fewer personnel required to manage on-premises resources

High Availability and Disaster Recovery Design Considerations
What is high availability? How can the cloud help you with high availability? How can you plan and
recover from a disaster?

Define the Requirements


• recovery time objective (RTO)
• recovery point objective (RPO)
• mean time to repair (MTTR)
• mean time between failures (MTBF)
• service-level agreement (SLA)

Before investigating high availability, you need clear requirements or the cost estimation will be incorrect.
The following is a list of key concepts to help define the business requirements for high availability.
• RTO: This concept is the maximum acceptable time for an application to be unavailable.
• RPO: This concept is the maximum acceptable duration for data loss after a disaster.
• MTTR: This concept is the average time that it takes to restore after a failure.
• MTBF: This concept is how long a component is expected to last between outages (for example, a hard
drive).

The RTO and RPO can be discovered through a risk assessment. The MTTR can be estimated and later
refined by looking at the deployment process and disaster recovery tests. The manufacturer can provide the
MTBF.
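These figures combine into the standard steady-state availability formula, availability = MTBF / (MTBF + MTTR). A minimal sketch, using hypothetical numbers rather than figures from the course:

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability: the fraction of time a component is up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical component: fails every 1,000 hours (MTBF) and takes
# 1 hour to restore (MTTR), so it is up about 99.90% of the time.
print(f"{availability(1000, 1) * 100:.2f}%")  # 99.90%
```

Shortening the MTTR (for example, through automated deployments, as discussed later) improves availability just as much as lengthening the MTBF.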
The following numbers will give you an idea of what kind of SLA is required. An SLA defines the level of uptime that you can expect from the provider. SLAs are usually measured in nines, with more nines meaning better uptime. Here is an example downtime chart.

Availability % | Downtime per Year | Downtime per Month | Downtime per Week | Downtime per Day
90% ("one nine") | 36.53 days | 73.05 hours | 16.80 hours | 2.40 hours
99% ("two nines") | 3.65 days | 7.31 hours | 1.68 hours | 14.40 minutes
99.9% ("three nines") | 8.77 hours | 43.83 minutes | 10.08 minutes | 1.44 minutes
99.99% ("four nines") | 52.60 minutes | 4.38 minutes | 1.01 minutes | 8.64 seconds
99.999% ("five nines") | 5.26 minutes | 26.30 seconds | 6.05 seconds | 864.00 milliseconds
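The chart can be reproduced with a short calculation; a sketch, assuming a 365.25-day year (which matches the table's figures):

```python
HOURS_PER_YEAR = 365.25 * 24  # 8,766 hours; 10% of this is the 36.53 days above

def downtime_hours(availability_pct, period_hours):
    """Allowed downtime within a period for a given availability percentage."""
    return (1 - availability_pct / 100) * period_hours

for pct in (90, 99, 99.9, 99.99, 99.999):
    per_year = downtime_hours(pct, HOURS_PER_YEAR)
    per_month = downtime_hours(pct, HOURS_PER_YEAR / 12)
    print(f"{pct}%: {per_year:.2f} h/year, {per_month * 60:.2f} min/month")
```

Each added nine divides the allowed downtime by ten, which is why the jump from three to five nines is so expensive to engineer.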

Building by Following Best Practices

• Failure mode analysis (FMA)
• Design the system to be scaled.
• Create a redundancy plan.
• Build high availability into the design.
• Implement load balancing.
• Create logs and metrics for monitoring.

FMA helps you determine where failure points are in your application. When choosing a service or system
to make highly available, you should first target the failure points that can lead to the largest disruptions.
Design the application or system to be scaled. When discussing scaling, there are two common approaches:
vertical scaling and horizontal scaling. With vertical scaling, you add more resources to an existing system.
Vertical scaling is easy to implement but has a downside in the high cost of memory and processors.
Horizontal scaling, on the other hand, allows you to run an application many times across many low-cost
compute nodes.

Next, you will need to consider a redundancy plan. Based on the business requirements for high availability,
choose which components to make redundant to achieve the SLA required.
When scaling horizontally, the application will require some form of load balancing to ensure that traffic is
sent evenly to the compute nodes.
Finally, add logs and metrics within your application and systems that can be exposed for monitoring.

Data Management
• Choose the right storage to meet the requirements.
• Back up data regularly.
• Verify and restore regularly. If you cannot restore, then you do not have backups.
• Protect your data.

Your actions concerning data management can greatly impact the RPO. When choosing storage for your
application, there are many options. You will need to choose one that fits your requirements for costs,
reliability, encryption, security management, and even location.
You should also back up the data regularly. How often you back up the data depends on the risk assessment.
However, simply backing up the data does not help if the backups are not regularly verified and restores
have not been done. If you cannot restore from a backup, then you do not have a backup.
Protect the data storage. Enable disk encryption if possible and ensure proper user rights. Verify that the
user who is managing the storage does not have access to remove the backups.
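One concrete way to verify a restore is to compare checksums of the original backup and the restored copy; a sketch (the file paths in the comments are hypothetical):

```python
import hashlib

def sha256_of(path):
    """Digest a file in chunks so large backups do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Restore drill: restore the backup to a scratch location, then compare digests.
# A mismatch means the restore (or the backup itself) cannot be trusted.
# assert sha256_of("scratch/db.dump") == sha256_of("backups/db.dump")
```

For databases, a checksum match is necessary but not sufficient; a full drill also starts the restored instance and runs application-level checks against it.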

Automated Deployment Process
• Automate your deployments as much as possible.
• Have a rollback plan for failed deployments.
• Audit your deployments.
• Document your release process.

When implementing a high availability or disaster recovery system, another part of the calculation is the
MTTR. Deployments greatly impact the MTTR. How quickly can an application be deployed on a new
system?
Automating deployments, or as much of them as you can, helps reduce the time that it takes to run a
deployment and limits the chance for human error during the deployment. A high availability system may
have many servers to which the application is deployed, depending on scale. The more systems there are,
the more chances there are for human error. Automation does not have this problem.
Deployments may still fail, so what is the rollback plan? How quickly can you roll back from a failed
deployment? How much impact will the failed deployment have? The rollback plan and deployment
auditing can answer these questions.
Document your release process. Multiple people should know how to deploy a release. Even if releases are
scheduled based on your availability, a disaster may not occur on your timetable. Having clear deployment
instructions that others are familiar with reduces the MTTR.

Monitor Server and Application Health

• Identify KPIs as an early warning alert.
• Maintain application logs and metrics.
• Monitor third-party services on which your application relies.
• Implement health checks.

To maintain the SLA and the MTBF, it is important to have a monitoring solution in place.
Key performance indicators (KPIs) help you find abnormalities in your system performance that can be a sign of a possible or imminent failure. For instance, suppose an API averages a 6-ms response time to requests, and one day the response time jumps to more than 25 ms. That spike is an anomaly and could point to an issue. Tied closely to the KPIs should be a system of alerting: a 19-ms increase may or may not have users reporting issues, and without an alert, the anomaly would go unnoticed.
Maintain application logs and metrics. Logs can tell you about an issue, and a sudden increase in log volume can itself help in detecting one. Metrics can report on load and system health. A sudden spike in load can point to an issue in the application, an issue with the load balancer, or perhaps even a server outage.
While on the topic of monitoring, if your application relies on a third-party service, that service should also be monitored.
Health checks for the application should be implemented and monitored. Running checks to ensure that the application returns the right data can help discover issues even when logs and metrics do not point to problems.
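The response-time example can be turned into a simple alerting rule; a sketch using a standard-deviation threshold (the baseline samples and the threshold of 3 are illustrative assumptions, not a recommended production setting):

```python
import statistics

def is_anomalous(baseline_ms, sample_ms, threshold=3.0):
    """Flag a response time that deviates strongly from the recent baseline."""
    mean = statistics.mean(baseline_ms)
    stdev = statistics.stdev(baseline_ms)
    return abs(sample_ms - mean) > threshold * stdev

baseline = [6.1, 5.9, 6.0, 6.2, 5.8]  # the API's normal ~6-ms responses
print(is_anomalous(baseline, 25.0))   # True: the 25-ms spike should raise an alert
print(is_anomalous(baseline, 6.1))    # False: within normal variation
```

Real monitoring stacks express the same idea as alert rules over time-series data rather than inline code, but the principle of comparing against a learned baseline is the same.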

Test High Availability


• What happens when a single server fails?
• What happens when a zone or region goes offline?
• Run simulated tests.
• Run load tests.

Just as with backups, if high availability is not routinely tested, your system is not highly available. Run
scenarios against your system and verify the expected result. What is the expected result when a single
server fails?
When you rely on a cloud platform like AWS, Google Cloud Platform, or Azure, what happens when one of
their zones or regions goes offline? These scenarios should be tested. You can run simulated tests by
removing servers, detaching storage, or changing network policies to see how the systems recover. You
should also see what happens to your application under load.

High Availability in the Cloud


• Scale servers horizontally.
• Scale all parts of the application.
• Add redundancy by adding a second zone or region.

An application that is hosted in the cloud should be scaled horizontally, not vertically, to achieve high availability. Every compute node that is added increases the achievable availability, but with diminishing returns. Each system in the application should also be scaled to get the benefits of high availability.
Adding a second zone or region can be more beneficial to uptime than horizontal scaling once horizontal scaling reaches the point of diminishing returns. A second region also benefits you if the provider performs maintenance, if an outage occurs, or if someone deletes one of the application clusters.
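The diminishing returns can be quantified with the textbook formula for redundant nodes, 1 - (1 - a)^n. This is an idealized sketch that assumes node failures are independent, which a zone-wide outage violates (and which is exactly why a second zone or region helps):

```python
def parallel_availability(node_availability, n_nodes):
    """Availability of n redundant nodes, assuming failures are independent."""
    return 1 - (1 - node_availability) ** n_nodes

for n in (1, 2, 3):
    print(n, parallel_availability(0.99, n))
# Each added 99% node gains less in absolute terms:
# 1 node ~0.99, 2 nodes ~0.9999, 3 nodes ~0.999999
```

Going from one node to two removes about 1% of downtime; going from two to three removes only about 0.01%.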

Disaster Recovery Plan


• Write a disaster recovery plan.
• Automate as much as you can.
• Document the manual steps.
• Regularly test the disaster recovery plan.

Just as important as high availability is disaster recovery. Disaster recovery time is what gives you the
MTTR component of availability. Every application should have a disaster recovery plan. This plan should
detail the systems, teams, and documentation that is needed to recover from a disaster. Usually, disaster
recovery will involve a separate location for a regional disaster. Automate as much of the disaster recovery
as you can; automating will ensure the speed and the reliability of the recovery.
All steps should be documented, but especially the manual steps. The documentation should be clear and
easy to follow. Regularly test the disaster recovery plan, and hold a meeting afterward to discuss issues or comments
on the plan.

Disaster Recovery in the Cloud


• Moving disaster recovery to the cloud benefits from on-demand systems.
• The disaster recovery site is not limited by physical presence.
• Disaster recovery benefits greatly from IaC.

Moving disaster recovery into the cloud has benefits over using only on-premises systems. The public cloud
providers have on-demand systems. Hardware is readily available and does not have to be purchased ahead
of time. There is no downtime waiting on someone to rack the hardware or provide connectivity to the
hosts.
Another benefit of choosing a cloud provider is that you are not limited to physical presence. This approach
allows the disaster recovery location to be in any region.

IaC allows your infrastructure requirements to be stored and acted upon, rather than existing only in written documentation, and it lets the infrastructure definition live alongside the application code. Coupled with tools such as Terraform, IaC allows disaster recovery to be automated. Many of the cloud providers work with Terraform.
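As a sketch of automating one recovery step, a runbook script might build a non-interactive Terraform invocation. The dr working directory is a hypothetical repository layout; -chdir and -auto-approve are standard Terraform CLI options:

```python
import subprocess  # used in a real drill to execute the command (see comment below)

def terraform_cmd(workdir, action):
    """Build a non-interactive Terraform command for a DR automation step."""
    base = ["terraform", f"-chdir={workdir}", action]
    if action == "apply":
        base.append("-auto-approve")  # no human prompt in the middle of a disaster
    return base

print(terraform_cmd("dr", "apply"))
# ['terraform', '-chdir=dr', 'apply', '-auto-approve']
# In a real recovery drill:
# subprocess.run(terraform_cmd("dr", "apply"), check=True)
```

Building the command in code rather than typing it means the drill and the real recovery run the exact same steps.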
1. Which option is not used to define the business requirements for high availability?
a. RTO
b. FMA
c. MTTR
d. RPO
e. MTBF

IaC for Repeatable Public Cloud Consumption
There are several benefits of running your infrastructure as code, or IaC. You can deploy your infrastructure
in a repeatable fashion to multiple public clouds. You define what the network looks like in your code. Then
you transform the data in your language into the native language of the public cloud infrastructure to which
you are deploying.

Infrastructure as Code

IaC is the practice of representing your infrastructure as code, within text files. You provision your network and infrastructure from maintained files that define the network, rather than individually configuring each component.
As you deploy, the deployment methodology reads the definition file and applies it to the appropriate cloud resource. Using this methodology, you can deploy from one file definition to several cloud providers. Some examples include defining all the interesting components of a switch configuration, such as the switch port
VLAN access or trunk setting, and the VLANs to which this information applies. The file is read and
deployed to the appropriate device. By indicating an interface and VLAN, you can deploy the configuration
to multiple devices. You can start by deploying to Cisco IOS XE switches, and in the next iteration, deploy
to a Cisco Nexus switch. The definition files look the same for both device types. The differences are
implementation details that another component will address.

IaC Benefits
• Repository
• Repeatable; deploy programmatically
• Single source of truth for infrastructure

There are several benefits to running your infrastructure as code. You can maintain your infrastructure
definition in a code repository and have repeatable deployments using programmatic solutions. IaC is a
single source of truth, and you can test the infrastructure before deployment.
By maintaining your infrastructure definition in a code repository that has version control, you gain several benefits. You can control who can merge into the main branch, and how; the main branch should represent the current or intended state of the network. You can also require a certain number of peer reviews before merging: multiple approvers on change-sensitive projects, and fewer approvers on areas that are less critical (but still important).
By using a code repository, there is more accountability because change documentation is built into the
system. You can see who is requesting the change and who approved it. Each of the pieces is time stamped,
which helps maintain the audit trail of the network configuration. With versioning included in many
repository systems, you can revert if a change breaks something in the environment. You simply revert to
the previous version of the main branch, redeploy the infrastructure, and it is back to the previously known
state.
You gain repeatable deployment methodology by deploying from the definition files. This approach helps to
maintain consistency throughout the environment. Because the deployment is done in a programmatic
fashion, you get the same result every time. Maintaining consistent configuration over thousands of
individual ports helps to provide confidence in the network configuration.
By having the configuration in a code repository and deployed consistently, you no longer need to look at
multiple devices to make sure that the configuration is as it should be. You are now able to look at the
source definition file, and that is the source of truth. If the configuration of the system deviates from this
source, the system should be redeployed from the source code so that the device configuration will match
the definition file.
By moving the infrastructure definition into a repository, you can integrate the repository with a CI/CD tool. Once integrated with the CI/CD system, you can write tests to verify that the data being put into the repository is of good quality. For example, you can set up tests to ensure that the VLANs in your data center come only from a certain list, or that an entered IP address is valid.
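Such a CI test might look like the following sketch (the allowed VLAN range is a hypothetical site policy, not a Cisco default):

```python
import ipaddress

ALLOWED_VLANS = set(range(10, 100))  # hypothetical policy for this data center

def validate_entry(vlan, ip):
    """A CI check run against the IaC repository before a merge is allowed."""
    if vlan not in ALLOWED_VLANS:
        raise ValueError(f"VLAN {vlan} is not in the approved range")
    ipaddress.ip_address(ip)  # raises ValueError for an invalid address
    return True

print(validate_entry(20, "192.0.2.10"))  # True
# validate_entry(5, "192.0.2.10")  would raise: VLAN 5 is not in the approved range
```

Running such checks in the pipeline means bad data is rejected at review time, before it can ever be deployed to a device.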

IaC Methods
• Declarative
• Imperative

With tooling that is related to IaC, it is important to understand declarative and imperative methods.
An imperative tool requires that you take all the actions and steps to complete the task, providing exact instructions for every step. You describe how something is accomplished.
With a declarative tool, you indicate the desired state. In the network world, an example would be declaring a desired state in which VLANs 10, 11, and 12 are configured on the switch. The tool needs to know how to add VLANs, how to delete any extra VLANs, and how to verify the current state of the device. The definition indicates what the intended state is, and the tool takes the appropriate actions to get the device into that state.
Terraform is a declarative tool. Ansible has modules that are declarative in nature as well as imperative ones.
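The declarative VLAN example can be sketched as a reconciliation function: the definition records only the desired state, and the tool derives the imperative steps from the difference with the device's current state.

```python
desired_vlans = {10, 11, 12}  # declarative: only the intended end state is written down

def reconcile(current_vlans, desired):
    """Compute the imperative actions a tool must take to reach the desired state."""
    return {
        "add": sorted(desired - current_vlans),
        "remove": sorted(current_vlans - desired),
    }

# A device currently has VLAN 10 plus a stray VLAN 99:
print(reconcile({10, 99}, desired_vlans))  # {'add': [11, 12], 'remove': [99]}
```

Running the same reconciliation twice is a no-op the second time, which is the idempotency that declarative tools such as Terraform rely on.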

IaC Tools
• Ansible
– Cloud modules
– Includes roles for deploying
– Declarative
• Terraform
– Deploys to defined cloud providers
– Declarative

A couple of tools that are heavily relied upon for IaC deployments are Ansible and Terraform.
In an IaC environment, Ansible is used in multiple ways. There are several cloud modules that are generally declarative in nature and help in automating the public cloud. An Ansible-curated list of cloud modules can be found at https://docs.ansible.com/ansible/latest/modules/list_of_cloud_modules.html. Note that many modules are available for each of the major public cloud environments. For AWS, you can manage several components via Ansible, including CodePipeline, CodeCommit, EC2, Amazon Simple Storage Service (S3), and CloudFront. The list of cloud modules is quite comprehensive.
Ansible not only has modules for public cloud providers, but also modules for working with VMware and
OpenStack, among others, for managing private clouds.
Terraform has built-in and community providers for the Terraform system. A provider is responsible for the
API integration and resource manipulation. Providers are typically the cloud or service provider. There are
more than 200 providers, including all the major public cloud providers. Terraform uses your IaC definition
files in HashiCorp Configuration Language to provision the infrastructure defined in the file.
1. Which benefit of IaC provides an audit history?
a. code repository
b. repeatable deployments
c. single source of truth
d. CI/CD testing

Cloud Services Strategy Comparison
A cloud services strategy has several possibilities and requires several considerations. Should you buy a
software service from a single provider or build it yourself? Is there a reason to have a third party involved
to help manage the relationships in a cloud environment? How do you avoid vendor lock-in with a cloud
provider? This topic discusses this decision-making process.

Cloud Services Strategy


• Strategy for buying cloud-based applications vs. building your own.
• Build
– Put together cloud services
– Build individual components and put together
• Buy
– Off-the-shelf product
– Still in the cloud

There are a couple of methodologies for adopting a cloud service strategy. You can build in the cloud with
your own pieces, or you can buy an “off-the-shelf” product that will meet your goals for moving to the
cloud. If you buy a product, you will receive the product delivered and prepackaged. These products are still
considered cloud components because they are not typically in your data center.
Your strategy can involve building your own cloud services. You will be required to build the components
individually to meet your goals. Some flexibility is inherent in this strategy of choosing products that are
open and can interoperate with each other.

Cloud Services: Buy


• SaaS offerings
• All-in-one solution; speed of deployment
– Vendor support
• Less flexibility
– Vendor’s schedule
– Customization may be limited

The benefit of a buy strategy for obtaining cloud services is that you do not have to build the applications; you can purchase what you need. This approach is often called SaaS: the entire application is provided by a service provider, and you get some level of support from the vendor when something is not working in the application. The downside is that you are bound by the vendor's timelines and development for new features and integrations.

Cloud Services: Build


• Requires knowledge, skills, and abilities to develop in the cloud
• Cloud-native building pieces
• Risk of integration sprawl
• Extensibility is a function of app design.

If you are using the build strategy, you build out all the components of the application within the cloud. You build the software and its integrations, and you provide support for the application. This approach requires the knowledge, skills, and abilities to develop and design applications on public cloud infrastructure.
The Cloud Native Computing Foundation (CNCF) has many components that are cloud-friendly; see https://www.cncf.io. The mission of the CNCF is to make cloud-native computing ubiquitous. To help drive this mission, the CNCF fosters open source, vendor-neutral projects. The CNCF maintains a list of open source projects, and other non-open-source projects, that are cloud-friendly at https://landscape.cncf.io.
Without good governance, there is a risk of application and integration sprawl. There are many open source
tools that have similar features, so you may find part of your organization using tool A, and other parts of
the organization using tool B. Having multiple tools completing the same or similar function may bring
operational challenges.

Extending applications that are deployed in the cloud with new features or new integrations is done on your
own timeline. There is no waiting for a solution to be provided by the SaaS provider. This approach gives
the organization greater flexibility and control of the application.

Cloud Managed Services Provider


• Assists with workloads in the cloud.
• Migrations vs. steady state
• Broad range of partners
– Hybrid cloud environments
– Specialize in a particular cloud
• Experienced in building apps in the cloud

Cloud managed service providers are available to help you get started with the cloud integration. They
typically offer assistance from getting started for the first time in the cloud, to migrating applications to
clouds, and on to operating and optimizing your cloud experience.
There is a broad range of partners. Some specialize in a particular public cloud, while others can help in a
multicloud or hybrid cloud deployment. This diversity helps bring the necessary experience of operating in
the public cloud to your organization faster.

Avoiding Cloud Lock-In
• Build with IaC principles
– Build to other clouds
• Multicloud strategy
• Cloud-native components
• Docker containers
A common concern for many organizations is how to avoid getting locked into one provider's services,
with the financial or operational risks that entails. What happens if the cloud provider suddenly increases
prices on a service that is important to the operation? What happens if the cloud provider has service
outages? Or worse, what if the provider suddenly shuts down? Having a good plan for preventing lock-in
is key to surviving such events.
You can prevent lock-in starting from a few different points. If you deploy infrastructure with IaC
principles, you should be able to deploy your infrastructure in multiple environments. With IaC, moving to
a new cloud provider becomes a matter of translating the IaC definitions to the new provider's resources,
rather than rebuilding the environment by hand.
If you have a multicloud strategy, you are already avoiding lock-in by having the workloads in multiple
locations. Using CNCF components is a good step toward a multicloud environment. These components are
optimized to work in a cloud, but not any particular cloud.
Using Docker containers and other container workload tools such as Kubernetes will help with the
multicloud deployment. By nature, containers are meant to be portable between systems, and will therefore
help to prevent lock-in to any particular cloud.

Public vs. Private Clouds
• Public
– AWS, Azure, Google, and so on
• Private
– OpenStack, VMware vCloud
• Hybrid
– Orchestration in place between the public and private cloud
– Kubernetes
Public clouds are computing resources that are available via the public Internet. The typical public cloud
providers are AWS, Azure, Google Cloud Platform, and others. You can rent resources with varying price
points from these third-party systems for each of the components that are necessary.
Private clouds are computing resources that are available for an organization and hosted by the organization.
This system may still be managed, but is typically built into the private network access of an organization.
A characteristic of this system involves heavy automation that helps speed up resource allocation, such that
the time to become operational is similar to a public cloud. You can have a basic Linux VM operating in a
few minutes with a few forms filled out. Examples of some private cloud systems include OpenStack and
VMware vCloud.
Some organizations that use both public and private clouds are using a hybrid cloud. This scenario can also
describe a multicloud environment where you may have workloads in AWS and Azure, both serving the
same application sets. There is orchestration in place to know what is happening within each of the
respective clouds to understand what is necessary to serve the application.
Kubernetes is an excellent container orchestration tool that helps enable an organization to run containers
efficiently. Kubernetes has clusters available for integration with many of the public and private cloud
services, helping an organization operate in a hybrid cloud fashion.
1. Which orchestration tool helps orchestrate containers across clouds?
a. Containerd
b. Docker Swarm
c. Kubernetes
d. Gartner Partner
Summary Challenge
1. Match the virtual machine name to the proper cloud provider:
   Providers: Google, Azure, Digital Ocean, AWS
   Virtual machine names: EC2 Instance, VM, Compute Engine Instance, Droplet
2. Which two steps should be followed to manage public cloud costs? (Choose two.)
a. Use automation.
b. Monitor cloud usage quarterly.
c. Monitor cloud usage as frequently as possible (daily).
d. Do not use a public cloud.
e. Only pick one vendor.
3. Which public cloud feature can be used to offer services that are nearly impossible when deploying
on-premises?
a. autoscaling
b. load balancing
c. VPC
d. VNet
4. Which option is not an essential characteristic of a cloud?
a. resource pooling
b. measured service
c. rapid elasticity
d. portal service displaying usage stats
5. What does FMA help you determine?
a. manual versus automated deployments
b. early warnings of a failure
c. failure points in an application
d. root cause of the failure
6. Which two benefits does hosting an application in the cloud provide? (Choose two.)
a. redundancy through regions or zones
b. vertical scaling across compute nodes
c. high security of the storage
d. horizontal scaling across compute nodes
e. more customer visibility
7. Which benefit of IaC provides an authoritative view of the infrastructure?
a. code repository
b. repeatable deployments
c. single source of truth for infrastructure
d. CI/CD testing
8. Which option is a characteristic of an organization that has a build cloud strategy?
a. software as a service
b. vendor support
c. all in one
d. integration and feature independence
Answer Key
Application Deployment to Multiple Environments
1. E

Public Cloud Terminology Primer
1. D

Tracking and Projecting Public Cloud Costs
1. B

High Availability and Disaster Recovery Design Considerations
1. B

IaC for Repeatable Public Cloud Consumption
1. A

Cloud Services Strategy Comparison
1. C

Summary Challenge
1. AWS: EC2 Instance
   Azure: VM
   Google: Compute Engine Instance
   Digital Ocean: Droplet
2. A, C
3. A
4. D
5. C
6. A, D
7. C
8. D
Section 15: Examining Application and
Deployment Architectures

Introduction
Application design is not only about the functionality an application provides in response to business
needs, but also about how the application is developed, monitored, deployed, debugged, and
scaled. This section looks at how many of these architectural considerations have evolved in the DevOps
and cloud era, and discusses everyday lessons that have been learned in production environments in the
form of the twelve-factor application and the revolution in systems thinking that introduced the
microservices architecture.

Twelve-Factor Application
The twelve-factor application (https://12factor.net/) defines a methodology and guiding principles for
building software as a service (SaaS) applications with portability and resilience when deployed to the web.
It was originally drafted in 2012 by developers who were working for the platform as a service (PaaS)
company Heroku to help people design applications in a cloud-friendly way. These cloud-friendly
applications run on a public cloud, but also embrace concepts such as statelessness, elastic scalability, rapid
deployment, and ephemeral file systems.
The main goals of the twelve-factor application are:
• Minimize cost and time for new developers joining the project by using declarative formats for setup
automation.
• Ensure maximum application portability between execution environments by defining an exact contract
with the underlying operating system.
• Develop an application that is independent of the underlying infrastructure by supporting deployment
on most of the modern cloud platforms.
• Enable continuous deployment by reducing the differences between development and production to a
minimum.
• Ensure application scalability that does not require significant changes in tooling, architecture, or
development practices.

The ability to replicate development environments (with beneficial effects further down the line in testing,
staging, and production) plays an important role in making the codebase maintainable and ensuring
scalability—in this case, not scalability of the application itself, but of the teams, process, and workflows
that are built to support it.
In trying to separate things as much as possible, it is necessary to become (more) independent from the
underlying infrastructure and offer a high degree of portability for the way services are deployed in their
respective environments. Good packaging and tooling also provide benefits when managing many different
versions and operational environments for the same application stack.

The Twelve Factors
As the number of services grows, the benefits of following these principles become more visible. The
following list of principles is not a complete recipe for building a system, but a way of getting the basics
right from the beginning and getting projects off to a good start.
• Codebase: One codebase is tracked in revision control, with many deployments.
• Dependencies: Explicitly declare and isolate dependencies.
• Configuration: Store the configuration in the environment.
• Backing services: Treat backing services as attached resources.

The codebase is a repository of the application's code and is tracked in a version control system (for
example, Git). There is a one-to-one correlation between the app and its codebase, but there will be multiple
deployments of that app (running instances)—test, development, staging, production, and so on—capturing
various versions of the codebase.
An application that has to be composed from multiple code repositories is actually a prime candidate for a
system that needs to be split into multiple components, each adhering to the principles and forming a
distributed application from individual (micro)services.
Second, the app never relies on the implicit existence of systemwide libraries and packages. It explicitly
declares all dependencies (for example, a Gemfile for Ruby or requirements.txt for Python's pip) and uses
tooling that isolates them. The result is that the application can be deployed anywhere, and it becomes
easier to set up repeatable, identical environments for different functions.
The next principle concerns configuration, and refers to any values that can vary across different
deployments or environments. Examples of these values are URLs or other dependent back-end services,
database paths, or credentials for third-party APIs or services. Most importantly, sensitive data like
credentials should not be stored inside the codebase. One easy test is to consider what the impact would be
of open-sourcing the codebase.
External configuration allows for the deployment of immutable builds to multiple different environments
through automation and helps ensure parity. The guideline here is to store the configuration in environment
variables that are provided at the time of deployment and have less of a chance of being accidentally
committed to the codebase repository.
A backing service is any resource that the app consumes over the network during its normal operation, such
as databases or messaging systems. Whether the app is run locally for development or deployed into
production, it should not make any distinction between its backing services. The details of how to access
these services should be stored in its external configuration, allowing for easily swapping a local database
for a cloud-based offering when the environment requires it.
An application should therefore declare its required backing service, but leave it to the environment to
decide the specific instance that it will provide for the application to use.
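The configuration and backing-service factors above can be sketched in Python. The variable names and default URLs below are hypothetical; the point is that the codebase ships without credentials, and the environment decides which concrete backing service the app binds to:

```python
import os

# Hypothetical defaults for local development; production deployments
# override these via environment variables, never via code changes.
DEFAULTS = {
    "DATABASE_URL": "postgresql://localhost:5432/devdb",
    "CACHE_URL": "redis://localhost:6379/0",
}

def get_config(name):
    """Resolve a configuration value from the environment, falling back
    to a local-development default. Credentials never live in the codebase."""
    return os.environ.get(name, DEFAULTS.get(name))

# The app declares that it needs a database, but the environment decides
# which concrete instance backs it (local container, cloud service, ...).
db_url = get_config("DATABASE_URL")
```

Swapping the local database for a cloud-hosted one then requires only a change to the deployment's environment variables, not to the code.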
• Build, release, and run: Strictly separate build and run stages.
• Processes: Execute the app as one or more stateless processes.
• Port binding: Export services via port binding.
• Concurrency: Scale out via the process model.
• Disposability: Maximize robustness with fast startup and graceful shutdown.

To transform an application's code into a deploy, a three-stage process is defined: build the app (getting an
artifact that is based on the code at a specific revision), combine it with a deploy- or environment-specific
configuration (the release), and finally, execute the release into its destination environment.
These three stages must be strictly isolated to ensure that automation is possible, that system maintenance is
as straightforward as possible, and that the "it worked on my machine" syndrome is avoided.
Each twelve-factor app is stateless. If it needs to hold any data, then it should do so through a backing
service, typically a database. The app will use its allocated system memory and filesystem as a temporary
single-transaction cache, but it should be assumed that anything temporarily stored there can and will be
lost. Thus, there should be no assumptions concerning previous filesystem contents, and this temporary
system will only be used for transient data, because it pertains to a single request (or operation).
The app is completely self-contained and should not rely on additional processes at run time to expose its
functionality (for example, a web server), but rather include everything that is required. It binds to (at least)
a port and exposes that port to the outside world, and sometimes it becomes a backing service itself for
another app. This port binding should not require any code changes between different environments and so
be treated as an external configuration.
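As a sketch of the port-binding factor, the minimal Python service below is self-contained (it embeds its own HTTP server rather than relying on an external one) and treats its port as external configuration. The PORT variable name follows common convention but is an assumption, not part of the original text:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        # The app exports its service purely by binding a port.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello")

def make_server():
    # The port comes from the environment, so no code change is needed
    # between development, staging, and production deployments.
    port = int(os.environ.get("PORT", "8080"))
    return HTTPServer(("0.0.0.0", port), Hello)

# In production, the process would then run: make_server().serve_forever()
```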
The scale-out factor of the concurrency model (or horizontal scaling) ties in with the share-nothing stateless
nature of each app. Adding more running instances of the app in parallel is a simple and reliable operation.
All instances operate independently and allow the load to be distributed rather than concentrated on an ever-
growing single instance of the app.
Each individual application instance is disposable and facilitates fast reactions to events such as scaling up
or down due to load or restarting due to abnormal behavior. As such, startup time should be kept fairly short
and the application should try to provide graceful shutdown under most circumstances (that is, on receiving
the termination signal, stop accepting new connections, finish its current requests, and then shut down).
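A minimal sketch of such a graceful shutdown in Python, assuming a hypothetical accept loop gated by a flag:

```python
import signal

shutting_down = False

def handle_sigterm(signum, frame):
    # On the termination signal: stop accepting new work, let in-flight
    # requests finish, then let the process exit.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

def accept_new_request():
    """Gate for a hypothetical accept loop: refuse new connections once
    a termination signal has been received."""
    return not shutting_down
```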
• Development and production parity: Keep development, staging, and production as similar as
possible.
• Logs: Treat logs as event streams.
• Administrative processes: Run administrator and management tasks as one-off processes.

The twelve-factor app is designed for continuous deployment. This scenario is achieved by keeping the
various environments in which the app will live as close to parity as possible, starting with the developer's
own machine. As such, code should be easy to deploy in another environment within minutes or hours, and
developers should be directly involved in its deployment and subsequent feedback loop. Backing services
should not become a compromise and diverge between development and production environments.
Logs provide visibility into the lifecycle of a running instance of the application. They are a stream of
ordered, time-based events that are streamed to a location—the local disk or a remote collector backing
service.
As a stateless process, the application should not be concerned with the storage of its logs. The execution
environment should manage logs, by parsing, storing, and archiving on a dedicated system.
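A sketch of this factor in Python: the app writes its event stream to stdout and leaves routing, storage, and archiving to the execution environment. The logger name and format are illustrative assumptions:

```python
import logging
import sys

# The app's only responsibility is to emit an ordered, time-based event
# stream to stdout; the environment captures and routes it.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

log = logging.getLogger("app")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("order accepted")  # one event in the stream
```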
Administrative and management tasks that are specific to the application should be bundled with the main
service code and executed from the same environment as the application process itself. These tools should
be treated in the same way as the main application code when considering dependency isolation and security
privileges.

More Factors
• Build services API first: Different teams work with each other's public contracts, independent of their
internal development processes.
• Authentication and authorization: Security should never be an afterthought.
• Telemetry and analytics: Health, performance monitoring, and domain-specific telemetry.

In addition to the original 12 factors, three more factors have been suggested in more recent literature as
representative of the evolution of modern distributed applications.
To facilitate multiple integrations across the services that are provided by the application or allow for
different clients (think desktop versus mobile, for example) to be implemented over a common back end, it
is good practice to recognize the API as a first-class artifact of the development process. Front-loading this
effort (as opposed to building an API on an already finished product) allows for early discussions with all
the application stakeholders well before the application has been written past the point of no return.
Whenever an application is built, deployed, and executed, there can be many moving parts, access to
systems, and ultimately to confidential data. Each instance should run signed and authenticated code, have
access only to the services it requires, and give access only to clients that are allowed. These concerns
should be part of the application design from day one, even if some are provided by the underlying
environments rather than by the application code itself.
Monitoring an application is more than just storing and reacting to certain log events. You need to know
how well the app is actually performing in production, but you do not have the same tooling available as
when you are in your development environment. Therefore, third-party application performance monitoring
(APM) tooling provides insights into how well the app is performing by capturing various metrics from
running instances. Domain-specific
telemetry (as opposed to generic APM) captures metrics that are related to the business logic and problems
that are ultimately solved by the application, and is something that cannot be bought off-the-shelf, but rather
should be outlined as part of the application design process.
1. Which of these is one of the main goals of a twelve-factor application?
a. Explicitly declare and isolate dependencies
b. Low overhead
c. Keep development environments similar
d. Develop an application that is independent of the underlying infrastructure
e. Ensure that the codebase is stored in a version control system
Microservices Architectures
The twelve-factor app defined a set of goals and guiding principles that were necessary at the beginning of
what was a new journey for many people. This meant embracing new application design patterns and
moving from monolithic and service-oriented architecture (SOA) to cloud-native architectures, where cloud
means platform as a service (PaaS), not necessarily of the public variety, twelve-factor apps, and
microservices.

Microservices emerged as a real-world, use-driven pattern built on the ongoing growth and metamorphosis
of the challenges of running applications at the scale of the web. Microservices architecture is a way of
structuring an application as a collection of services that are highly scalable, maintainable, and testable. A
loosely coupled system is built from these services and can be deployed independently.
• Microservice architectures are applications that are composed of small services.
• Businesses can construct services to meet their particular needs and can automatically deploy these
services independently, with little need for central management.
• These services often communicate using an HTTP-based API.
• They are written in many programming languages and can use various types of data storage.

Characteristics of Microservice Architectures
There are several general characteristics of microservice architectures.
Although loose coupling works for services that perform different roles, an important concept is that of
cohesion, or keeping related code together. Paraphrasing Robert C. Martin, software engineer and instructor,
when writing code, put things together that change for the same reasons, and separate things that change for
different reasons.
A frequent question might be, “How small should microservices actually be?” The answer depends on how
the service decomposition is done (the pattern). For some, it may be lines of code or redevelopment time
(how many days would be needed for someone to rewrite this service). For example, if decomposition is
based on business capability, then service boundaries are focused on those things that a business uses to
generate value.
Decoupling is essential for realizing many of the advantages of the microservices-based architecture. The
primary principle of decoupling requires that you can answer “yes” to the question, “Can you make a
change to a service and deploy the new version without it affecting anything else?” To answer "yes" here is
not easy, because you have to design your services, their APIs, and interaction surfaces correctly.
You should also consider the trade-off when going for smaller and smaller services. On the one hand,
having the smallest services maximizes the benefits of the particular architecture; on the other hand,
more moving parts increase the complexity of the distributed system, often trading one set of
problems for another.

When you build a system that is composed of multiple loosely coupled but collaborating services,
independent decisions about the internal technology stack can be made when choosing the right tool for
each job, rather than locking in just "one stack that fits all" requirements in a monolithic application. As
long as the interaction surface (the APIs) between services is well-defined and agreed upon (in essence, a
public contract), internal technological alignment matters less. Choosing a different language and its
libraries for a particular job where performance is a driver becomes much easier, although not without
drawbacks. Adding multiple technologies adds overhead and load on the operations teams.
• Products not projects: Teams own their products over the products' full lifetime.
• Smart endpoints and dumb pipes
• Decentralized governance: Not every problem is a nail and not every solution is a hammer.
• Decentralized data management: One database per service
• Infrastructure automation
• Design for failure
– Design apps so that they can tolerate service failures.
– Real-time monitoring of performance (requests per second) and business (orders per minute)
metrics
• Evolutionary design
– Control change without slowing down evolution.
– Design services to be as tolerant as possible to changes in their suppliers.
Project-based development models aim to deliver a piece of software that meets certain specifications.
However, once it is handed over to operations, the team is considered to have finished its job and moves on
to another project. Product-oriented development proposes that a team should own the product that they
build over its entire lifecycle. This approach brings developers in contact with the day-to-day operation of
their code in production and tightens the relationship between software and the business capabilities to
which it is tied. The feedback loop is therefore shorter, and there is a more direct link to the end users who
benefit from the code.
Applications that are built as microservices should be as decoupled and as cohesive as possible. They
should own their domain logic and do one thing well—receive a request, apply its logic as necessary, and
produce a response. The fabric that interconnects these services should be kept as close to a simple message
as possible, because all the code intelligence resides in the services themselves.

Design Patterns

Monolithic applications work with large consolidated databases to hold data, which is often shared across
many applications (with some of the design possibly tied to licensing models). Microservices decentralize
how data is stored and prefer to allow each service to manage its own database and even use different
technologies where it makes sense for specific data structures and performance requirements.
Decentralizing responsibility for data management has a major trade-off because consistency cannot be
guaranteed anymore, or rather is expensive and complex to do, and the architecture needs to accept and
work with the concept of eventual consistency.
Database-per-service is one of the many design patterns that have grown under the microservices
architecture umbrella.
A few other patterns are as follows. Going into detail about all of them is beyond the scope of this topic, but
further research is encouraged.
• Strangler pattern
• Anticorruption layer pattern
• Bulkhead pattern
• Circuit-breaker pattern
• API gateway pattern
• Sidecar pattern
The strangler design pattern characteristics are as follows:
• Transform: Two applications live side-by-side—the newly refactored and the traditional apps.
• Coexist: Users are redirected from the traditional app to the new one. As time passes, the new app
"strangles" the old one by implementing all of its functionality.
• Eliminate: Remove any remaining old functionality from the main site.

The anticorruption layer design pattern characteristics are as follows:
• Translates communications between two systems, allowing one system to remain unchanged (usually
the traditional system), while the other can avoid compromising its design and technological approach.
• Trade-offs are in added latency, data consistency, and scale.

The bulkhead design pattern characteristics are as follows:
• This design helps isolate failure domains and sustain functionality for parts of the whole service when
others are degraded or down.
• Various elements of an application should be isolated into groups based on criteria such as consumer
categories, or load and availability requirements.
• This pattern provides resource isolation (such as memory, CPU, connections) so that a single service
cannot starve others.
• This pattern prevents cascading failures.
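The resource-isolation idea behind the bulkhead can be sketched in Python with a semaphore that caps the concurrent calls one consumer group may make. The class and message names are illustrative, not from the original text:

```python
import threading

class Bulkhead:
    """Cap the number of concurrent calls for one consumer group so a
    single group cannot starve the others of shared resources."""

    def __init__(self, max_concurrent):
        self._slots = threading.BoundedSemaphore(max_concurrent)

    def call(self, fn, *args):
        # Reject immediately when the partition is full rather than
        # queuing, which would let failures cascade.
        if not self._slots.acquire(blocking=False):
            raise RuntimeError("bulkhead full: request rejected")
        try:
            return fn(*args)
        finally:
            self._slots.release()
```

Each consumer category would get its own `Bulkhead` instance, so degradation in one partition leaves the others operational.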

The circuit-breaker design pattern characteristics are as follows:
• When downstream services fail, consumers keep sending requests even though those requests cannot
succeed.
• This pattern protects against potential resource exhaustion and performance impact.
• If a consecutive number of requests to a given service fail, the circuit breaker trips, blocking any other
requests for a given timeout period.
• This pattern provides instant failure feedback to the consumer, so that repeated unsuccessful attempts
stop.
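These characteristics can be sketched in a few lines of Python. The thresholds and class name are illustrative assumptions, a sketch rather than a definitive implementation:

```python
import time

class CircuitBreaker:
    """After `max_failures` consecutive failures the breaker opens and
    rejects calls instantly for `reset_timeout` seconds."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                # Instant failure feedback: no request is even attempted.
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # timeout elapsed: allow a trial call
            self.failures = 0
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # a success resets the consecutive-failure count
        return result
```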

The API gateway design pattern characteristics are as follows:
• Offloading: Enables microservices to delegate shared functionality, such as Secure Sockets Layer
(SSL) certificates or authentication and authorization.
• Aggregation: Compose response data from different microservices before sending it back to the
consumer.
• Routing: Map (for example, a specific URL path to a service) and (reverse) proxy API requests to the
correct corresponding services.
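The routing characteristic can be sketched as a simple prefix table in Python. The service names and ports are hypothetical, and a real gateway would then reverse-proxy the request to the resolved backend:

```python
# Hypothetical route table: URL path prefixes mapped to internal services.
ROUTES = {
    "/orders": "http://orders-svc:8000",
    "/users": "http://users-svc:8000",
}

def route(path):
    """Map an incoming request path to the backing microservice that
    should handle it; the gateway proxies the request to this target."""
    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            return backend + path
    raise LookupError("no route for " + path)
```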
The sidecar design pattern characteristics are as follows:
• The sidecar (like the motorcycle attachment) is a way of deploying supporting features for components
of an application in a separate container to provide encapsulation and isolation.
• It shares the same lifecycle with its parent service.
• It provides related functionality such as monitoring, logging, or networking services.

Other Considerations
Microservices are definitely not the solution to every problem. Often, a monolithic design may indeed be
the right choice. Although a lot has been learned about managing distributed systems, the reality is that they
are complex. There are always trade-offs to be made, and ensuring resilience requires different thinking
from the more classical approaches.
In a monolithic application, modules can directly invoke each other, whereas in a distributed system of
many microservices, a new mechanism for interprocess communication must be implemented, often over a
network infrastructure. Because such RPCs may sometimes be slow to respond or unavailable, services
should be better equipped to manage and reduce gray failures. This scenario, in turn, requires better
monitoring and observability tooling.
When choosing the architecture of a new application, there is no easy recipe to follow and there has been
much discussion in the industry about whether it makes sense to start with a small monolith and decompose
it into microservices. Indeed, if the application is not that complex, it is a lot easier to build it as a monolith
rather than manage the complexity of managing distributed microservices.
Eventually, however, the trade-offs become apparent, especially as the application grows, becomes more
complex, reduces productivity, and increases work. It is therefore a good idea to design a loosely coupled
monolith from the beginning, which makes it easier to break up into separate services before it becomes too
large. Alternatively, after a point, new features can be added as microservices around the original monolith,
and functionality can be reimplemented and extended until the old application can be completely removed.
Organizations, teams, and systems are all different, so many factors will decide whether a microservices
architecture is the right way to go. This decision will require a shift in how you manage testing, monitoring,
deployment, operations, and often necessitates significant cultural changes.
1. Which of the following describes successfully decoupled microservices?
a. You cannot change any service without restarting all applications.
b. You can make a change to a service and deploy it by itself without changing anything else.
c. You can make a change to a service and redeploy all connected services.
d. It is not possible to completely decouple microservices.
Summary Challenge
1. For a twelve-factor application, which statement is a reason for ensuring the ability to replicate
development environments?
a. The time that it takes to bring new people on a project is negligible.
b. If the main developer knows how to start the application, then it can be replicated.
c. The application will not need to run on more than one server.
d. The codebase needs to be run in multiple development, testing, and production environments.
2. For a twelve-factor application, which of the following is not one of the 12 factors?
a. configuration
b. backing services
c. API coverage
d. dependencies
3. For a twelve-factor application, which statement correctly describes the application codebase?
a. The codebase is the shared repository of code for multiple applications.
b. For each application, you will have one codebase and one deploy (for example, staging).
c. A codebase can be linked to multiple deploys (for example, dev, test, prod) using different
versions.
d. An application's codebase does not need version control if it is developed carefully.
4. Which statement correctly describes configuration as one of the twelve-factor principles?
a. Configuration is a set of values that are the same across all deployments of the application.
b. Service URLs can be hardcoded and should not be configurable.
c. Configuration refers to any values that can vary across different environments.
d. Credentials can be safely stored in the codebase as long as it is private.
5. Which option is not a valid configuration item for a twelve-factor app?
a. database paths
b. credentials for third-party APIs
c. maximum repository size
d. required backing services
6. Which two statements about microservices and monolithic applications are correct? (Choose two.)
a. Monolithic applications generally contain only one functional feature.
b. Monolithic applications are composed of multiple functional features.
c. A microservice combines a maximum of two different features to stay simple.
d. Microservices are obtained by decomposing each functional element into its own service.
e. Microservices are defined in an IETF standard.
7. Which two statements correctly describe microservices architectures? (Choose two.)
a. Microservices are managed as part of a project-based development model.
b. Microservices are owned and managed by a team over the lifetime of the service as a product.
c. Microservices should be used for all modern application designs.
d. A microservice cannot tolerate failure and relies on its infrastructure.
e. Microservices are designed to be as tolerant as possible of changes in their suppliers.

Answer Key
Twelve-Factor Application
1. D

Microservices Architectures
1. B

Summary Challenge
1. D
2. C
3. C
4. C
5. C
6. B, D
7. B, E

Section 16: Describing Kubernetes

Introduction
Kubernetes, often stylized as k8s, is an open-source container and service orchestrator that supports both
declarative configuration and automation. The name comes from the Greek word κυβερνήτης, meaning
“helmsman” or “pilot.” Google open-sourced Kubernetes in 2014. In 2015, Google joined with the Linux
Foundation to create the Cloud Native Computing Foundation (CNCF) with Kubernetes as the flagship
application.

Kubernetes Concepts: Nodes, Pods, and Clusters


This topic considers Kubernetes and some of its concepts, such as Kubernetes objects, and how nodes, pods,
and clusters fit into them.

Kubernetes Objects
Compute and storage resources are defined as objects. Each object has a name associated with it. Some of
the most common Kubernetes objects are as follows:
• Pods
• Services
• Volumes
• Namespaces

Kubernetes uses objects to represent the intended state of the cluster. These objects can describe which
containers are running, the storage and network resources that are available to those containers, and the
policies that govern those containers. The object is a “record of intent,” and once created, Kubernetes will
work to maintain the intended state of the object.

Pods
• A pod can include one or more containers.
• Additional containers are known as sidecars.
• A pod includes shared storage and network functions.
• Pods are disposable and have a lifecycle.

A pod is the smallest deployable object in Kubernetes. A pod consists of one or more containers that work
tightly coupled together. A pod with a single container is the most common Kubernetes use case. In cases
where there are multiple colocated containers, the additional containers are known as sidecars.
An example of a multicontainer pod might look like a media conversion site. With a site that can convert
WAV files to MP3, the user would upload the file to the site, the file would be converted, then the user
would download the file. In this instance, one container would house the website, while the other container
would process the file conversion. This approach is similar to the multicontainer Network Inventory app
that you can find in the lab activities of this course—one container per service (web, app or API, database).
Each pod has its own network namespace, a single unique IP address, and access to all network ports.
Within a pod, containers use localhost to communicate with each other. Pods can also share storage,
which is called a volume.
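As a sketch of the multicontainer pattern described above, the following hypothetical manifest defines a pod with a web container and a converter sidecar that share files through an emptyDir volume. All names and images here are illustrative, not taken from the course labs:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: media-converter            # hypothetical pod name
spec:
  containers:
  - name: web                      # serves the upload/download site
    image: nginx:1.21              # placeholder image
    ports:
    - containerPort: 80
    volumeMounts:
    - mountPath: /uploads          # where the site writes uploaded WAV files
      name: shared-files
  - name: converter                # sidecar that performs the WAV-to-MP3 conversion
    image: example/converter:1.0   # hypothetical image
    volumeMounts:
    - mountPath: /uploads
      name: shared-files
  volumes:
  - name: shared-files
    emptyDir: {}                   # scratch space that shares the pod's lifecycle
```

Both containers mount the same volume, so the sidecar can pick up files that the web container writes; the containers can also reach each other over localhost because they share the pod's network namespace.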
Kubernetes manages the pod, not the containers. Pods are disposable. A pod runs on a node until the process
is terminated, a pod is deleted, the pod is evicted, or the node fails. A controller manages the pod lifecycle.
Pods have different states in the lifecycle. The states are pending, running, succeeded, failed, and unknown.
In the pending state, Kubernetes has accepted the pod, but one or more container images have not been
created yet. This state includes the time that is spent downloading the images. In the running state, the pod
has been assigned to a node and all the containers have been created. At least one container is still running.
In the succeeded state, all containers in the pod have terminated successfully. The failed state occurs when
at least one container terminates in failure due to a nonzero exit status or the system terminates a container.
The unknown state occurs when the state of the pod cannot be obtained, usually due to an error in
communicating with the node.

Node
• A node is a worker machine.
• Can be a VM or a physical machine.
• Includes the container runtime interface (CRI), kubelet, and kube-proxy, which is everything needed to
run pods.

A node is another Kubernetes object. A node contains several components:


• container runtime interface (CRI): Several CRIs are available for Kubernetes; the
most common is Docker. Others include CRI-O, containerd, rktlet, and Frakti.
• kubelet: The kubelet ensures that containers are running in the pods. It also interfaces with the API
server of the control plane.
• kube-proxy: Kube-proxy is a network proxy that implements the network policies.

Cluster
• A cluster is a pool of one or more nodes.
• Contains one or more control plane nodes (also known as the API server) to manage worker nodes.

A cluster is composed of one or more nodes that work together. A cluster must have at least one control
plane node, or more if running in high availability. The control plane node is also known as the API server.
The API server is in charge of communication that flows in and out of the cluster, and communication to the
nodes, pods, and services.
The control plane node is also responsible for the controller. The controller is more of a concept than a
component, and is a control loop. A control loop is a nonterminating loop that maintains the measured
process variable at a set point. There are many controllers running; one example is the job controller. The
job controller works with the API server to maintain the deployment of pods.

1. Which three statements about pods are true? (Choose three.)


a. A pod consists of only one container.
b. A pod consists of one or more containers.
c. All pods on a node share a pool of network ports.
d. A pod can share a volume with all containers within itself.
e. A pod shares the static IP address of the node.
f. A pod is permanently on and available.
g. Each pod has its own network namespace.

Kubernetes Concepts: Storage
Many applications need storage. An online store needs to maintain an inventory, a blog needs a place to
store its articles, and a forum needs to store its users and content. Kubernetes allows persistent data to be
stored in volumes. A Kubernetes volume is an abstraction of the methods that allow data to be stored.

Storage Problems
• Files in containers are disposable; they are lost when a container is restarted.
• Multiple containers can run in a pod and should be able to share files.

Why do you need a storage abstraction? Why can’t you use the built-in container storage? Because containers
are disposable, storage that is created within a container is lost when the container is restarted. And if you
run multiple containers in a single pod, the built-in storage of one container would not allow the
containers to share the same files.

Volumes
• An abstraction of the storage for pods.
• Shares the pod lifecycle and not the container lifecycle.
• Available to all containers within a pod.

A volume is an abstraction of storage. A volume shares the pod’s lifecycle. The volume exists as long as the
pod exists. A volume is also available to all containers that run in the pod.
Here is an example of the API for creating a volume with a pod:
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: example-container
    volumeMounts:
    - mountPath: /example-data
      name: example-volume
  volumes:
  - name: example-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory

In the volumes object, there is a volume that is named example-volume. This volume type is hostPath,
which uses a file or directory from the node on which the pod is hosted. The volume is only defined once. A
volume mount is needed for each container in the pod that requires access to this volume.

Types of Volumes
Here are some of the many types of volumes:
• awsElasticBlockStore
• azureDisk
• azureFile
• cephfs
• configMap
• fc (Fibre Channel)
• gcePersistentDisk
• hostPath
• local
• nfs

There are numerous storage options for volumes. Picking an option may seem daunting at first, so here are
some tips and examples to help.
• First, you do not have to pick just one option. Each pod can have its own storage type, and a pod can
have more than one storage type.
• Pick an option that is best suited for the environment. If Kubernetes is hosted in AWS, a readily
available option is to use an AWS Elastic Block Store. If you are hosting on Google Cloud, a GCE
Persistent Disk may be better. If you are hosting on-premises, you might use Fibre Channel or NFS.
• Pick an option that is best suited for the application. If the data only needs to exist while the pod is
running, and does not need to survive pod deletion, then emptyDir is an option. If the data must be
distributed and highly available, then cephfs might be an option.
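For instance, the emptyDir case from the last bullet might be sketched like this; the image, command, and mount path are arbitrary choices for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod
spec:
  containers:
  - name: app
    image: busybox:1.36            # placeholder image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /scratch          # run-time scratch directory
      name: scratch-volume
  volumes:
  - name: scratch-volume
    emptyDir: {}                   # created empty with the pod, deleted with the pod
```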

Persistent Volumes
• Volumes are managed per pod.
• Persistent volumes are managed per cluster.
• PersistentVolumeClaim allows a pod to have access to a persistent volume.
• Persistent volumes allow pods to access a volume without needing to know how the volume is
implemented.

The pod and the pod lifecycle manage volumes, whereas a cluster manages a persistent volume outside the
pod. A persistent volume needs to be created before it is mounted inside a pod. Multiple pods can mount a
persistent volume.
A persistent volume also allows you to “hide” the volume implementation from the developer who is
creating the pod. This approach can be used for security or for ease of use. Persistent volumes can also
limit the storage resources that are available to a pod, ensuring that a pod does not consume more
than its share of disk space.
Pods need to make a claim for storage by using a PersistentVolumeClaim. The following is an example of a
PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 8Gi
  storageClassName: slow
  selector:
    matchLabels:
      release: "stable"
    matchExpressions:
    - {key: environment, operator: In, values: [dev]}

In this example, the name of the PersistentVolumeClaim is myclaim. There is a request for 8 GB of storage.
Instead of naming a specific persistent volume, the claim binds to one that matches the criteria. In this
case, the storageClassName of the persistent volume must be “slow,” the release label must match “stable,”
and the environment label must contain “dev.”
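For comparison, a cluster-managed PersistentVolume that such a claim could bind to might be defined as follows. The NFS server address and export path are hypothetical, and NFS is only one possible backing implementation:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
  labels:
    release: "stable"              # matches the claim's matchLabels
    environment: dev               # matches the claim's matchExpressions
spec:
  capacity:
    storage: 8Gi                   # satisfies the claim's 8-GB request
  accessModes:
  - ReadWriteOnce
  storageClassName: slow           # matches the claim's storageClassName
  nfs:                             # assumed backing store for this sketch
    server: 10.0.0.10              # hypothetical NFS server
    path: /exports/data            # hypothetical export path
```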
1. Which option is not a benefit of a persistent volume?
a. It persists after a pod is terminated.
b. It can be created with the pod.
c. It allows limits to be set on storage.
d. It is easy for developers to implement in the pod.

Kubernetes Concepts: Networking
In addition to storage, pods also have access to networking. In Kubernetes, networking is implemented
using the Kubernetes networking model.

Kubernetes Networking Model


• Every pod has its own IP address.
• Pods can be treated like VMs or physical hosts from the perspective of ports.
• Pods in a node can communicate with all pods within the cluster without NAT.

One benefit of the Kubernetes networking model is that a pod behaves just like a VM or physical host: every
pod gets its own IP address and has access to the full range of network ports. There is no need to bridge the
pod with the node’s network interface. Another benefit is that pods can communicate with each other within
a cluster without NAT. Because pods have full access to their own ports, there is no need for PAT either.

Kubernetes Networking Model Implementation


There are many CNIs. Here are a few of them:
• Cisco Application Centric Infrastructure
• Flannel
• Multus
• Weave Net

Kubernetes uses a CNI (Container Network Interface) plugin to implement the networking model. The CNI is a
common API for connecting containers to networks. There are too many CNIs to list here. As with the
storage types, large cloud providers such as AWS, Azure, and Google Cloud all have a CNI. Flannel is a
simple and easy way to configure a Layer 3 network fabric. Weave Net is a simple-to-use network for
Kubernetes and does not require any configuration to run. Multus allows you to connect multiple network
interfaces to pods instead of the standard one interface per pod. More information can be found at
https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-
networking-model.

Pod Networking Problem


• Pods can be created and destroyed dynamically.
• Some pods (”back ends”) provide functionality to other pods (“front ends”). How do the pods know the
IP addresses of the other pods?

IP addresses are assigned to pods during deployment. Pods are also disposable and can be deleted and re-
created with different IP addresses. In some applications, a pod might need to talk to another pod that is
providing a database. How are pods expected to know the IP addresses of other pods? How can an
application be accessed on a pod if the IP address changes?

Service
• A service solves the pod networking problem.
• A service provides a discovery layer for a pod to communicate with other pods.
• Similar to DNS, but without the Time to Live (TTL) issues.
• Uses the kube-proxy to delegate requests to the specific pods.

A service solves these IP address issues. One of the benefits of Kubernetes is the ease of scale using
replicas. A service provides a discovery layer for the pod. A service works similar to DNS. A selector is
used to specify an application. Any pod that serves that application can be reached using the service. The
service uses the kube-proxy to send requests to one of the pods that services that app. The following is an
example of the API.
apiVersion: v1
kind: Service
metadata:
  name: myService
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

The selector matches pods by their labels (here, any pod labeled app: MyApp) rather than by pod name. The
port is the port that the service makes accessible. The targetPort is the port on the pod. In this example,
they match, but they do not have to.

Service Types
Service types are as follows:
• ClusterIP: Exposes the service on a cluster-internal IP and makes the service only reachable within the
cluster. This service type is the default.
• NodePort: Exposes the service on a static port. Can access externally using <NodeIP>:<NodePort>.
• LoadBalancer: Exposes the service using a cloud provider’s load balancer.
• ExternalName: Maps the service to a CNAME record. No proxying is set up.

You can also use ingress to expose your service.

There are a few different options for service types. ClusterIP exposes the service on an internal cluster IP.
This service type is only reachable from within the cluster. If no service type is provided, the default
ClusterIP type is used.
NodePort exposes the service on the node’s IP address at the specified port. By default, the range of
available ports is 30000 to 32767, but it can be changed with the --service-node-port-range flag on the
Kubernetes API server.
LoadBalancer creates the service on compatible cloud provider external load balancers. The cloud provider
controls the load balancing.
ExternalName maps the service to a DNS CNAME, not a selector.
Some service types build on others; for example, a LoadBalancer service also allocates a NodePort on each
node. More information can be found at https://kubernetes.io/docs/concepts/services-networking/service/.
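As a sketch, a NodePort version of the earlier myService example might look like this; the nodePort value is an arbitrary choice within the default 30000 to 32767 range:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myService
spec:
  type: NodePort                   # the default would be ClusterIP if omitted
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80                       # cluster-internal service port
    targetPort: 80                 # port on the pod
    nodePort: 30080                # externally reachable as <NodeIP>:30080
```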

Ingress
• Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
• Ingress requires an ingress controller.
• Ingress can be configured to give services URLs, terminate SSL and Transport Layer Security (TLS)
connections, and offer name-based virtual hosting.
• Ingress does not expose arbitrary ports or protocols, only HTTP and HTTPS.

Ingress exposes HTTP and HTTPS traffic from outside the cluster. Ingress defines the rules for traffic
routing. In order for ingress rules to take effect, an ingress controller is required. Ingress gives you the
option to provide an external URL for a service. You can also terminate SSL and TLS connections to the
ingress controller to ease configuration of the containers.
Ingress also allows you to offer name-based virtual hosting, so that one IP address can service many URLs.
Ingress only exposes HTTP and HTTPS traffic and does not work for other ports or protocols.
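A minimal Ingress manifest, routing a hypothetical hostname to the myService service shown earlier, might be sketched as follows; the hostname is made up for illustration:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com          # hypothetical external URL for the service
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myService        # the service from the earlier example
            port:
              number: 80
```

An ingress controller must be running in the cluster for this rule to take effect.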

Ingress Controller
• For ingress to work, the cluster must have an ingress controller running.
• Ingress controllers are not started automatically with the cluster—they must be implemented.
• Multiple ingress controllers can be defined and used.
• Kubernetes currently supports and maintains GCE and Nginx controllers.

Ingress controllers allow the use of ingress rules to route HTTP and HTTPS traffic. Ingress controllers are
not built into Kubernetes. You must implement one, just like the CNI. More than one ingress controller can
be created.
Ingress controllers are similar to a reverse proxy service, such as Nginx. In fact, Nginx is one of the ingress
controllers. Kubernetes officially supports and maintains GCE and Nginx controllers, but there are other
controllers as well. Additional ingress controllers can be found at:
https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/#additional-controllers.
Configurations can be different, depending on which ingress controller is chosen.
1. Which is not a valid service type for services?
a. LoadBalancer
b. NodePort
c. ClusterIP
d. ExternalIP
e. ExternalName

Kubernetes Concepts: Security
Security is a major concern, even more so in recent years as data moves into the cloud. A security breach is
devastating to any company. Security should not be taken lightly and must be viewed holistically, not
just at the demilitarized zone (DMZ). This topic examines some options for securing your Kubernetes
cluster.

Cloud Native Security

One method of conceptualizing security is the Four C’s of Cloud Native Security:
• Cloud: The cloud is the infrastructure on which Kubernetes runs. Each cloud operator has security best
practices that you should follow. Here are some general recommendations:
– Access to the Kubernetes control plane node should not be allowed publicly. Access control lists
(ACLs) should permit only the administrators’ IP addresses.
– Nodes should only accept connections from the control plane nodes (using ACLs). If possible,
nodes should not be exposed on the public Internet.
– Kubernetes access to the cloud provider’s API should be limited.
– Access to etcd (the datastore for Kubernetes) should be limited to the control plane nodes only.
– Etcd drives should be encrypted at rest.
• Cluster: You need to secure the components within the Kubernetes cluster.
• Container: You need to secure the containers that run within the pods. Following are some of the
general suggestions for maintaining secure containers:
– Containers should not run in privileged mode unless required.
– Applications in the containers should run under their own user account (not root).
– Containers should only use known images. The FROM tag in containers should be a tagged version
and not latest. These images should be scanned for vulnerabilities using a tool such as Clair from
CoreOS.

910 Implementing DevOps Solutions and Practices Using Cisco Platforms (DEVOPS) © 2022 Cisco Systems, Inc.
– If possible, container images should be signed. TUF and Notary are two tools to consider. If using
Docker as the CRI, Docker Content Trust can be enforced.
• Code: Even the code running the applications needs to be secure. The following are a few suggestions
for securing code:
– Encrypt data transmission and use TLS when possible.
– Only expose necessary ports.
– Regularly scan dependencies to ensure that no Common Vulnerabilities and Exposures (CVEs)
exist.
– When possible, use static code analyzers against the codebase.
– Ensure that code does not allow for SQL injection, cross-site request forgery (CSRF), or cross-site
scripting (XSS) attacks.

Cluster Security Strategies


• RBAC authorization
• Network policies
• Resource quotas
• QoS classes
• Secrets

You need to become familiar with the following security topics. Some of these topics may not look like
security measures, for example, QoS classes. Although they do not enforce any security policies, they
ensure that production systems stay online.
• role-based access control (RBAC) authorization
• Network policies
• Resource quotas
• QoS classes
• Secrets

RBAC Authorization
• Role: Defined within a namespace
• ClusterRole: Defined within a cluster
• RoleBinding: Binds a user or group to a role
• ClusterRoleBinding: Binds a user or group to a ClusterRole

A namespace is a logical separation between systems. There are two options for roles:
1. Role: A role is defined within a namespace.
2. ClusterRole: A ClusterRole is defined within a cluster.

Roles use rules to restrict access to a system. Roles are additive in nature—a user or group can have more
than one role assigned to it.
Roles allow specific HTTP verbs to be performed against a list of resources.
The recognized HTTP verbs are get, list, watch, create, delete, update, and patch.
The apiGroups field lists the API group from the resource’s apiVersion. For example, a Deployment has
apiVersion apps/v1, so it would be apiGroups: [“apps”]. Anything in the core API is just v1, which is
apiGroups: [“”].
For a list of resources and the apiVersion, see the API reference docs for your Kubernetes version.
https://kubernetes.io/docs/reference/.
For a user or group to use these rules, a role must be bound to them. For a role, you will use RoleBinding.
For a ClusterRole, you will use ClusterRoleBinding.

RBAC API Examples

The following example shows a role that grants get and list HTTP verbs to the pods resource in the default
namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: read-pod
rules:
- apiGroups: [""] # Core API
  resources: ["pods"]
  verbs: ["get", "list"]

The following example applies the newly created role to the user John to allow access to read pods in the
default namespace.
apiVersion: rbac.authorization.k8s.io/v1
# This role binding allows "john" to read pods in the "default" namespace.
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: john # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role # this must be Role or ClusterRole
  name: read-pod # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io

Network Policies
• Pods accept traffic from any source by default.
• Network policies specify how pods or groups of pods communicate with each other and other network
entities.
• Policies are additive; the order of evaluation does not affect the result.
• Once a policy is applied, it acts as an allow list. All traffic that is not specified in the applied policies is
blocked.
• The CNI must support network policies for a policy to take effect.

The next security strategy is network policies. By default, pods are not isolated: they allow all traffic to
enter and exit. Network policies specify how pods or groups of pods will communicate with each other and
with network entities outside the cluster.
Just like RBAC roles, network policies are additive, and a pod can have more than one network policy.
Because they only add, the order in which policies are added does not matter. Policies act as allow lists,
which means that they allow only what is specified in the allow list. Any traffic that is not specified is
dropped.
Kubernetes relies on the CNI to apply the network policies, so the CNI that you use must support network
policies.

Network Policies: API Examples

The figure shows two examples of a network policy. The first example defines ingress and egress, but there
are no rules that are listed. In this case, an allow list is created with no rules. When applied to a pod, no
traffic will be allowed in or out of the pod. In addition, the pod selector is an empty object, so this object
will apply to all pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

The second example applies an allow list for any pod that has the role: db label applied to it. The allow list
allows TCP traffic on port 3306 from the 172.17.0.0/16 block to enter the pod.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
    ports:
    - protocol: TCP
      port: 3306

Resource Quotas
• Quotas can be created per namespace.
• Quotas prevent teams from consuming more than their share of resources.
• Quotas can prevent a malicious user from taking down production by consuming all the resources.

Resource quotas are another tool to secure your Kubernetes cluster. Resource quotas define the amount of
memory and CPU that a namespace can consume. The resource quota prevents teams or areas from
consuming more than their allotted resources. One example is separating production and testing
environments: production in one namespace and testing in another. Production could be allowed up to 80
percent of the cluster resources, with testing restricted to the remaining 20 percent.
Implementing resource quotas ensures that a malicious user cannot consume all the cluster resources and
starve other applications.
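A ResourceQuota for a hypothetical testing namespace might be sketched as follows; all of the limit values are illustrative:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: testing-quota
  namespace: testing               # hypothetical namespace
spec:
  hard:
    requests.cpu: "4"              # total CPU the namespace may request
    requests.memory: 8Gi           # total memory the namespace may request
    limits.cpu: "8"                # total CPU limit across all pods
    limits.memory: 16Gi            # total memory limit across all pods
    pods: "20"                     # maximum number of pods in the namespace
```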

QoS Classes

QoS classes are similar to resource quotas, but you do not define them yourself; Kubernetes assigns them
automatically. There are three QoS classes: guaranteed, burstable, and best effort. Pods in the best effort
class can use any amount of CPU or memory, but they are the first to be terminated if resources are
exhausted. Pods in the burstable class are given a minimal resource guarantee but can use more resources
when available. These pods can also be terminated when resources are exhausted and no best effort pods
remain. Guaranteed pods have the highest priority and are not terminated unless they exceed their limits.
A request is the amount of resources that the system will guarantee to a pod. A limit is the maximum
amount of resources the system will allow a container to use.
Kubernetes defines rules for how pods are assigned to a QoS class.
• Guaranteed QoS class
– Every container in the pod must have a memory limit and memory request that are the same.
– Every container in the pod must have a CPU limit and a CPU request that are the same.
• Burstable QoS class
– The pod does not meet the requirements for the guaranteed class.
– At least one container has a memory or CPU request.
• Best effort QoS class
– No container in the pod has a memory or CPU limit or request.
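Following these rules, a pod whose containers set identical requests and limits lands in the guaranteed class. A sketch, with an illustrative image and resource values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-pod
spec:
  containers:
  - name: app
    image: nginx:1.21              # placeholder image
    resources:
      requests:
        cpu: 500m
        memory: 256Mi
      limits:
        cpu: 500m                  # equal to the request
        memory: 256Mi              # equal to the request, so the pod is guaranteed
```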

Secrets
• Secrets allow you to store and manage small amounts of sensitive data such as passwords, tokens, or
keys.
• Secrets can be used as an environmental variable (ENV) when creating the pod.
• Secrets can also be used as a file in a volume mounted on one or more containers in the pod.
• Secrets can be used when pulling images for the pod from a private repository.
• Care must be taken to ensure the secret is not accessible.

Secrets are key-value stores in Kubernetes that can be used to manage small amounts of sensitive data, such
as passwords, tokens, or keys. Secrets have several use cases. They can be used during pod creation to
define environment variables, to pull images for the pod from private repositories, or they can be passed
into the pod as a volume.
Secrets cannot be used to store large amounts of data. They are limited to 1 MiB (mebibyte, slightly larger
than a megabyte). There are also a limited number of secrets that can be created per cluster.
The control plane node (API server) can access all secrets on the cluster, but only the nodes that run pods
with secrets will have access to those secrets that are necessary for the pod. This security measure limits the
number of secrets that can be leaked if a node is compromised.
Secrets can be created through manifest files or through kubectl directly (kubectl is explained later). If
created through a manifest file, the file should not be committed to source control repositories; if it is, the
secret should be considered compromised.
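Note that values placed under a Secret's data field must be base64-encoded first. A small helper sketch (the function name is our own) shows the encoding:

```python
import base64

def to_secret_value(plaintext: str) -> str:
    """Base64-encode a string for use under a Secret's data field."""
    return base64.b64encode(plaintext.encode("utf-8")).decode("ascii")

# The encoded value is what goes into the manifest, not the plaintext.
print(to_secret_value("s3cr3t"))  # -> czNjcjN0
```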
If you are using secrets as a volume, the volume could be accessible if the container is compromised. The
default file permission mode is 0644, but can be changed.
The secret can also be used as an environment variable. If used as an environment variable, the variable
should be overwritten by the application after it is consumed.
RBAC should be configured to limit access to secrets.
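For instance, a namespaced Role can narrow which secrets a subject may read; the sketch below (names are illustrative) grants get on a single named secret only.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: secret-reader                 # illustrative name
rules:
- apiGroups: [""]                     # "" is the core API group
  resources: ["secrets"]
  resourceNames: ["api-credentials"]  # restrict to one named secret
  verbs: ["get"]
```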
1. Which HTTP verb does Kubernetes RBAC not support?
a. get
b. list
c. connect
d. watch
e. patch

Kubernetes API Overview
Kubernetes has an extensive and ever-changing API.

Manifest
• Specifies the desired state of an object.
• Each configuration file can contain multiple manifests.
• Accepts JSON or YAML format.
• YAML is generally preferred because it is easier to read.
• All examples here are in YAML format.

A manifest is a method of interacting with the Kubernetes API. The manifest specifies the desired state of
an object. One or more manifests can be combined into a single configuration file for ease of deployment.
The manifest can be stored in JSON or YAML format. YAML is generally preferred because it is more
user-friendly, and is likely what you will find in examples. The JSON standard also has some limitations in
representing file permissions, because it does not support octal.
All examples in this section are provided in YAML format.

Manifest Example

This example shows two manifests stored in a single YAML file. First, a service is created, then a
deployment for a pod is created. In this example, Nginx is being deployed with three pods (replicas). In
YAML, the --- marker separates the directives from the document, and usually appears at the start of the
file. In a manifest configuration file, the markers separate the individual manifests. The service must be
created before the pods are created, or DNS issues can arise; creating the service first is also a best
practice. The selector in the service matches the deployment's replica pods via the label “app: nginx.”
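The example figure is not reproduced here; a configuration file matching that description might look like the following sketch (the service port and image tag are illustrative):

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
  selector:
    app: nginx          # matches the pods created by the deployment below
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3           # three pods
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```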

Mandatory Fields
The following are mandatory fields in the API:
• apiVersion: Contains the API group (when present) and the version number.
• kind: The REST resource that this object represents.
• metadata: Nested object field that contains attributes of the object.
• namespace: A DNS-compatible label into which objects are subdivided. Defaults to “default.”
• name: A string that uniquely identifies the object within a namespace.

There are a few fields that appear in all the manifest examples. These fields are the mandatory fields for the
Kubernetes API.
apiVersion is the first field in the manifest. The apiVersion contains the API group (when present) and the
version number. The value apps/v1 represents version 1 of the apps API group. Often, the value will simply
be v1, which represents version 1 of the core API group; in these cases, the core group name is left off.
Kind is the Kubernetes object that is being created. This field is the REST resource representation of the
Kubernetes object. Some examples that you have seen are deployment, service, network policy, pod, and
role.
Metadata contains the attributes of the object. Metadata itself is required, but not all the nested objects are.
Namespace is one of the nested metadata objects. The namespace defines the “group” in which the
Kubernetes object is located. It is a DNS-compatible label into which the Kubernetes objects are
subdivided. When namespace is not provided, it defaults to “default.”
Name is another nested metadata object. The name is a unique string that identifies the object within a
namespace.
Although not required, labels and annotations are additional nested metadata objects. Labels are key-value
fields within Kubernetes objects that allow a selector to access them. Annotations allow you to attach
arbitrary data to the objects that a user or tool can view. An example might be the contact information of the
maintainer, build information, time stamps, or even URLs.
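Putting the nested metadata objects together, a metadata block might look like this sketch (all values are illustrative):

```yaml
metadata:
  name: net-inventory
  namespace: default
  labels:
    app: net-inventory          # selectable key-value pair
  annotations:
    maintainer: "[email protected]"  # arbitrary data for users or tools
    build: "2022-03-01"
```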

kubectl
• kubectl is a CLI tool for accessing a Kubernetes cluster.
• kubectl can run commands to retrieve data from the cluster or to create objects on the cluster.
• Easy to create or apply manifest files to the cluster
– Create a resource from a file: kubectl create -f nginx-app.yml
– Apply a configuration to a resource by filename: kubectl apply -f nginx-app.yml

kubectl is a CLI tool for managing and accessing a Kubernetes cluster and is one of the easier API tools to
learn. Kubectl can be used to run commands against the cluster to retrieve data or create objects on the
cluster. Kubernetes has two methods for adding objects to the cluster.
Create is an imperative command in Kubernetes. Imperative commands are used as step-by-step instructions
for Kubernetes. Create will only create the objects.
Apply is a declarative command in Kubernetes. Declarative commands express the desired end state and
allow Kubernetes to choose how best to reach it. Apply will create an object if it does not exist or update
the object if it does exist.
More about imperative and declarative commands in Kubernetes can be found here:
https://kubernetes.io/docs/concepts/overview/working-with-objects/object-management/.
1. Which two fields are required in the metadata? (Choose two.)
a. Name
b. Annotations
c. apiVersion
d. Labels
e. Namespace

Discovery 22: Explore Kubernetes Setup and
Deploy an Application
Introduction
Docker and Docker Compose allow you to easily run containers on a single host. If you need to scale to
multiple hosts or clusters of hosts, Kubernetes provides the solution. Kubernetes is a broad technology with
many moving parts. At its core, Kubernetes (or K8s) provides a platform to schedule and run containers on
clusters of physical and/or virtual machines, managing the lifecycle of containerized applications. K8s is an
open source, extensible platform that can have many different variants, plugins, and customizations. The
DevOps interaction with Kubernetes can be simplified via command line utilities. These command line
utilities lower the barrier of entry to Kubernetes, and provide a user experience similar to the Docker
command line and Docker Compose. In this lab, you will explore and deploy an application using the
Kubernetes command line tool “kubectl.”

Topology

Job Aid

Device Information

Device Description FQDN/IP Address Credentials

Student Workstation Linux Ubuntu VM 192.168.10.10 student, 1234QWer

GitLab Container Registry Container Registry registry.git.lab student, 1234QWer

K8s1 Kubernetes Control Plane Node 192.168.10.21 student, 1234QWer

K8s2 Kubernetes Worker 192.168.10.22 student, 1234QWer

K8s3 Kubernetes Worker 192.168.10.23 student, 1234QWer

asa1 Firewall 192.168.10.51 student, 1234QWer

csr1kv1 Cisco Router 192.168.10.101 student, 1234QWer

csr1kv2 Cisco Router 192.168.10.102 student, 1234QWer

csr1kv3 Cisco Router 192.168.10.103 student, 1234QWer

Command List
The table describes the commands that are used in this activity. The commands are listed in alphabetical
order so that you can easily locate the information that you need. Refer to this list if you need configuration
command assistance during the lab activity.

Command Description

cd directory name To change directories within the Linux file system, use the cd
command. You will use this command to enter a directory where the
lab scripts are housed. You can use tab completion to finish the name
of the directory after you start typing it.

cat file The most common use of the cat Linux command is to read the
contents of files. It is the most convenient command for this purpose in
UNIX-like operating systems.

curl url Provides the ability to access URLs from the command line

kubectl [create, delete, expose, get] Kubernetes command line tool

ls file Provides the ability to see a file or folder contents

Kubernetes and Gitlab Registry


This lab infrastructure has been set up to use a 3-node Kubernetes cluster and a private Docker registry
(Gitlab Container Registry). All images used in this lab are present on the container registry. The
communication between Kubernetes and Gitlab has been established prior to the start of the lab. There are
no other setup steps necessary to begin this lab. The communication between Kubernetes and Gitlab is
being done securely (via https), and a service account has been created for pulling images from Gitlab.

Task 1: Create Deployment via CLI


This task will demonstrate the basics of running a container on the Kubernetes cluster from the command line.

Activity

Step 1 In the student workstation, find and open Visual Studio Code by clicking the Visual Studio Code icon.

Step 2 From the Visual Studio Code top navigation bar, choose Terminal > New Terminal [Ctrl+Shift+`].

Step 3 Navigate to the terminal section at the bottom of Visual Studio Code.

Step 4 Within the Visual Studio Code terminal, change the directory to labs/lab22 using the cd ~/labs/lab22
command.

student@student-vm:$ cd ~/labs/lab22/

Step 5 Issue the git clone https://git.lab/cisco-devops/explore-k8s-setup command to clone the explore-k8s-setup
repository.

student@student-vm:labs/lab22$ git clone https://git.lab/cisco-devops/explore-k8s-setup


Cloning into 'explore-k8s-setup'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
student@student-vm:labs/lab22$ cd explore-k8s-setup
student@student-vm:lab22/explore-k8s-setup (master)$

Step 6 Change directory to the explore-k8s-setup directory by issuing cd explore-k8s-setup command.

student@student-vm:labs/lab22$ cd explore-k8s-setup
student@student-vm:lab22/explore-k8s-setup (master)$

Ensure that kubectl is Installed

Step 7 Execute the kubectl version command and verify which kubectl version is installed. kubectl is a
command-line tool for managing Kubernetes objects and clusters. The command will return an error if
kubectl is not installed or configured yet. Because you are not yet connected to the server, the command
will fail when the server version is queried.

student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl version


Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2",
GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean",
BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc",
Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right
host or port?
student@student-vm:labs/lab22/explore-k8s-setup (master)$

Connect to the Kubernetes Cluster

Step 8 Execute the kubectl config view command to check the available clusters. Then, connect to kube.lab using
the kubectl config use-context kube.lab command.

student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://192.168.10.22:6443
name: k8s2
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://192.168.10.23:6443
name: k8s3
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://192.168.10.21:6443
name: kube.lab
contexts:
- context:
cluster: k8s2
user: k8s2-admin
name: k8s2
- context:
cluster: k8s3
user: k8s3-admin
name: k8s3
- context:
cluster: kube.lab
user: kube.lab-admin
name: kube.lab
current-context: ""
kind: Config
preferences: {}
users:
- name: k8s2-admin
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
- name: k8s3-admin
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
- name: kube.lab-admin
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl config use-context
kube.lab
Switched to context "kube.lab".
student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2",
GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean",
BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc",
Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3",
GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean",
BuildDate:"2019-11-13T11:13:49Z", GoVersion:"go1.12.12", Compiler:"gc",
Platform:"linux/amd64"}

Get Kubernetes Nodes
Now you will ensure that Kubernetes nodes are online.

Step 9 Determine Kubernetes nodes by running the kubectl get nodes command. The Age and Version in your
command output may differ, but there should be three nodes.

student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl get nodes


NAME STATUS ROLES AGE VERSION
k8s1 Ready master 10d v1.16.2
k8s2 Ready <none> 10d v1.16.2
k8s3 Ready <none> 10d v1.16.2

Run a Test Pod in the Kubernetes Cluster

Step 10 Using the kubectl run command, run a simple test pod within the Kubernetes cluster. This will validate the
communication between the Gitlab registry and Kubernetes cluster that you are connected to.

student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl run hello-world \
--image=cisco-devops/containers/hello-world \
--generator=run-pod/v1
pod/hello-world created

Validate Pod Deployment


To validate that the container launched successfully, view the pod count and logs for the pod.

Step 11 Run the kubectl get pods and kubectl logs hello-world commands. You will see the pod name and the
output from the kubectl run command.

Remember, a pod in Kubernetes is a unit of deployment that represents a container or a small
number of containers that share resources.

student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world 0/1 Completed 0 58s
student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl logs hello-world

Hello from Docker!


This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:


1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/

For more examples and ideas, visit:


https://docs.docker.com/get-started/

student@student-vm:labs/lab22/explore-k8s-setup (master)$

Execute Command in the Container


You just used the hello-world container that prints a message and immediately exits. Now you will use a
more fully featured container image based on Alpine Linux. You will also pass a command into this image
at runtime and view the output.

Step 12 Issue the kubectl run command with additional flags to specify the image name, pod version, and command
to be executed once the pod is built. Use the --image=cisco-devops/containers/alpine, --generator=run-
pod/v1 and --command -- "echo" "Hello DEVOPS!" flags.

student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl run k8s-alpine \
--image=cisco-devops/containers/alpine \
--generator=run-pod/v1 \
--command -- "echo" "Hello DEVOPS!"
pod/k8s-alpine created

Step 13 Then view the created pods and logs using the kubectl get pods and kubectl logs k8s-alpine commands.

student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl get pods


NAME READY STATUS RESTARTS AGE
k8s-alpine 0/1 Completed 0 37s
student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl logs k8s-alpine
Hello DEVOPS!
student@student-vm:labs/lab22/explore-k8s-setup (master)$

Delete Pods
To remove pods and deployments, you can utilize the kubectl delete pods [pod_name] command.

Step 14 Check the created pods using the kubectl get pods command. Then use the kubectl delete pods k8s-alpine
command to remove the previously created pods. Finally, validate that the pod was actually removed.

student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl get pods


NAME READY STATUS RESTARTS AGE
k8s-alpine 0/1 Completed 0 9m35s
student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl delete pods k8s-alpine
pod "k8s-alpine" deleted
student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl get pods
No resources found in default namespace.
student@student-vm:labs/lab22/explore-k8s-setup (master)$

Deploy Network Inventory Container


Now you will use the kubectl command to deploy the Network Inventory application. With the --port flag,
you will specify which port the application must listen on.

Step 15 The Network Inventory image net-inventory is in the cisco-devops/containers/ folder. The Network
Inventory application must listen on port 5000. Deploy the Network Inventory application using the kubectl
run net-inventory --image=cisco-devops/containers/net_inventory --port=5000 command.

student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl run net-inventory --image=cisco-devops/containers/net_inventory --port=5000
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a
future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/net-inventory created

Step 16 Validate that the Network Inventory container is running by issuing the kubectl get pods and kubectl get
pods -o wide commands. You will notice that the pod is running on the k8s2 node.

student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl get pods


NAME READY STATUS RESTARTS AGE
net-inventory 1/1 Running 0 40s
student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
NOMINATED NODE READINESS GATES
net-inventory 1/1 Running 0 75s 10.245.2.19 k8s2
<none> <none>
student@student-vm:labs/lab22/explore-k8s-setup (master)$

Expose Service
To make the exposed port accessible to the outside world, you will expose a service within Kubernetes.
Create a service and expose internal port 5100 to container port 5000. Check if the service is running. Once
the service is exposed, validate that the API is accessible from the student workstation, using the public
interface of the Kubernetes control plane or worker nodes.

Step 17 Use the command kubectl expose deployment net-inventory --port=5100 --target-port=5000 --
type=NodePort to expose the service to the outside world. Use the kubectl get svc to view the services that
are running in the Kubernetes cluster and verify that the net-inventory service is running. The Kubernetes
cluster will select a random port to expose it to the outside world. Note the assigned port number.

student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl expose deployment
net-inventory --port=5100 --target-port=5000 --type=NodePort
service/net-inventory exposed
student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11d
net-inventory NodePort 10.111.211.50 <none> 5100:31455/TCP 12s

Step 18 To populate the Network Inventory with some example devices, use the populate_inventory predeployed
script, using the port number identified in the step above. In your terminal window, use the
populate_inventory command.

student@student-vm:labs/lab22/explore-k8s-setup (master)$ populate_inventory


Enter the server and port info : kube.lab:31455
nyc-rt01: Added successfully
nyc-rt02: Added successfully
rtp-rt01: Added successfully
rtp-rt02: Added successfully

Step 19 Now you can access the API using the curl -L kube.lab:31455/api/v1/inventory/devices command. The
command should return a HTTP status 200 OK and a list of devices within the Network Inventory
application.

student@student-vm:labs/lab22/explore-k8s-setup (master)$ curl -L
kube.lab:31455/api/v1/inventory/devices
{
"data": [
{
"device_type": "switch",
"hostname": "nyc-rt01",
"ip_address": "10.100.10.11",
"os": "ios",
"password": "Cisco123",
"role": "dc-spine",
"site": "nyc",
"username": "admin"
},
{
"device_type": "switch",
"hostname": "nyc-rt02",
"ip_address": "10.100.10.12",
"os": "ios",
"password": "Cisco123",
"role": "dc-spine",
"site": "nyc",
"username": "admin"
},
{
"device_type": "switch",
"hostname": "rtp-rt01",
"ip_address": "10.100.20.11",
"os": "ios",
"password": "Cisco123",
"role": "dc-spine",
"site": "nyc",
"username": "admin"
},
{
"device_type": "switch",
"hostname": "rtp-rt02",
"ip_address": "10.100.20.12",
"os": "ios",
"password": "Cisco123",
"role": "dc-spine",
"site": "nyc",
"username": "admin"
}
]
}
student@student-vm:labs/lab22/explore-k8s-setup (master)$

Delete the net-inventory deployment


You created the net-inventory deployment using the kubectl run command. The kubectl delete
deployment [deployment_name] command will remove the specified deployment.

Step 20 Use the kubectl delete deployment net-inventory command to remove the previously created deployments.
Finally, validate that the pods were removed using the kubectl get pods command.

student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl delete deployment
net-inventory
deployment.apps "net-inventory" deleted
student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl get pods
No resources found in default namespace.
student@student-vm:labs/lab22/explore-k8s-setup (master)$

Delete the net-inventory service


Finally, you want to remove the service that was created using the kubectl expose command. The kubectl
delete service [service_name] command will remove the specified service.

Step 21 Check the running Kubernetes services using the kubectl get service command. Then issue the kubectl
delete service net-inventory command to remove the created service. Finally, validate that the net-inventory
service was actually removed.

student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl get service


NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h36m
net-inventory NodePort 10.98.80.72 <none> 5100:30446/TCP 21m
student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl delete service net-inventory
service "net-inventory" deleted
student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h37m
student@student-vm:labs/lab22/explore-k8s-setup (master)$

Task 2: Create a Deployment via Manifest File


In the previous task, you created a Kubernetes deployment, test pod, and service using the kubectl
commands. This approach works for simple deployments. It can quickly become cumbersome to perform
advanced deployments using just the command line flags.

Now, you will review the Kubernetes manifest file to deploy the Network Inventory application and
exposed services, and leverage the kubectl command to complete the deployment. Using manifest files is
the recommended method for Kubernetes application deployments. The manifest file is written in YAML.
The manifest contains directives similar to the flags used with the kubectl run command. The file will also
include a service that will be exposed.

Activity

Review the Manifest File

Step 1 Review the net_inventory.yml manifest file for the net_inventory application. Use the cat
net_inventory.yml command.

student@student-vm:labs/lab22/explore-k8s-setup (master)$ cat net_inventory.yml

apiVersion: apps/v1
kind: Deployment
metadata:
name: net-inventory
labels:
app: net-inventory
spec:
replicas: 1
selector:
matchLabels:
app: net-inventory
template:
metadata:
labels:
app: net-inventory
spec:
containers:
- name: net-inventory
image: cisco-devops/containers/net_inventory
imagePullPolicy: Always
ports:
- containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
labels:
app: net-inventory
name: net-inventory
spec:
ports:
- nodePort: 30500
port: 5100
targetPort: 5000
selector:
app: net-inventory
type: NodePort
student@student-vm:labs/lab22/explore-k8s-setup (master)$

Create the Deployment

Step 2 Create the net_inventory application deployment using the manifest file. Issue the kubectl create -f
net_inventory.yml command. When deployed, verify that the deployment is actually created and that the
service is running. Use the kubectl get deployment and kubectl get svc commands.

student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl create -f
net_inventory.yml
deployment.apps/net-inventory created
service/net-inventory created
student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
net-inventory 1/1 1 1 13s
student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11d
net-inventory NodePort 10.111.252.247 <none> 5100:30500/TCP 26s

Validate the Deployment

Step 3 To populate the Network Inventory with some example devices, use the populate_inventory predeployed
script, this time using port number 30500, as specified in the manifest file.

student@student-vm:labs/lab22/explore-k8s-setup (master)$ populate_inventory


Enter the server and port info: kube.lab:30500
nyc-rt01: Added successfully
nyc-rt02: Added successfully
rtp-rt01: Added successfully
rtp-rt02: Added successfully

Step 4 Now you can access the API using the curl -L kube.lab:30500/api/v1/inventory/devices command. The
command should return an HTTP status 200 OK and a list of devices within the Network Inventory
application.

Step 5 Run the curl -L kube.lab:30500/api/v1/inventory/devices command to validate the deployment. You
should see a list of devices returned, and a 200 OK HTTP status code.

student@student-vm:labs/lab22/explore-k8s-setup (master)$ curl -L
kube.lab:30500/api/v1/inventory/devices
{
"data": [
{
"device_type": "switch",
"hostname": "nyc-rt01",
"ip_address": "10.100.10.11",
"os": "ios",
"password": "Cisco123",
"role": "dc-spine",
"site": "nyc",
"username": "admin"
},
{
"device_type": "switch",
"hostname": "nyc-rt02",
"ip_address": "10.100.10.12",
"os": "ios",
"password": "Cisco123",
"role": "dc-spine",
"site": "nyc",
"username": "admin"
},
{
"device_type": "switch",
"hostname": "rtp-rt01",
"ip_address": "10.100.20.11",
"os": "ios",
"password": "Cisco123",
"role": "dc-spine",
"site": "nyc",
"username": "admin"
},
{
"device_type": "switch",
"hostname": "rtp-rt02",
"ip_address": "10.100.20.12",
"os": "ios",
"password": "Cisco123",
"role": "dc-spine",
"site": "nyc",
"username": "admin"
}
]
}
student@student-vm:labs/lab22/explore-k8s-setup (master)$

Cleanup and Delete Deployment

Step 6 Using the manifest file with service and deployment definitions, and the kubectl delete command, remove
the deployment and service. Issue the kubectl delete -f net_inventory.yml command. Notice that, because
both objects are defined in the manifest file, the command does not require you to delete them separately.
Finally, check the deployment pod status.

student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl delete -f net_inventory.yml
deployment.apps "net-inventory" deleted
service "net-inventory" deleted
student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl get pod
NAME READY STATUS RESTARTS AGE
net-inventory-867f9bc65b-r6rgf 1/1 Terminating 0 19m
student@student-vm:labs/lab22/explore-k8s-setup (master)$ kubectl get pod
No resources found in default namespace.
student@student-vm:labs/lab22/explore-k8s-setup (master)$

Summary
This lab reviewed the steps needed to deploy applications on a Kubernetes cluster using command line
tools. The lab introduced the concepts of pods, deployments, services, and nodes, and focused on two
methods for deploying applications with Kubernetes: command line flags and manifest files. The lab also
covered exposing services via the command line or embedding the service definition in the manifest file.
Finally, both command line and manifest file methodologies were used to deploy and remove a network
inventory application.

Summary Challenge
1. Which option is another name for the control plane node?
a. kube-proxy
b. node
c. kubelet
d. apiServer
2. What are two benefits of the pod’s network namespace? (Choose two.)
a. Ports are automatically mapped between the pod and the node.
b. Containers in a pod can reach each other on the local host.
c. Pods get their own ports.
d. Containers get their own ports.
e. Each container gets its own IP address.
3. Which statement is not true of a volume?
a. It shares the pod’s lifecycle.
b. Each container in the pod shares the volume.
c. It persists after the pod is terminated.
d. There are many different storage drivers for a volume.
4. What does ingress provide?
a. a discovery layer for a pod to communicate with other pods
b. traffic routing for HTTP and HTTPS
c. network access control lists
d. traffic routing for all protocols
5. Which option is a cluster security strategy?
a. static code analyzers
b. encryption of data at rest
c. firewall
d. RBAC authorization
6. What are two rules that Kubernetes uses to determine the burstable QoS class? (Choose two.)
a. At least one container has a memory or CPU request.
b. At least one container has a memory or CPU limit.
c. Does not meet the requirement for the guaranteed QoS class.
d. All containers have a memory request and limit that are the same.
e. All containers have a CPU request and limit that are the same.
7. What is not true of a manifest?
a. It specifies the desired state of the control plane node.
b. It can be stored as YAML.
c. It can be stored as JSON.
d. It specifies the desired state of the object.

Answer Key
Kubernetes Concepts: Nodes, Pods, and Clusters
1. B, D, G

Kubernetes Concepts: Storage


1. B

Kubernetes Concepts: Networking


1. D

Kubernetes Concepts: Security


1. C

Kubernetes API Overview


1. A, E

Summary Challenge
1. D
2. B, C
3. C
4. B
5. D
6. A, C
7. A

Section 17: Integrating Multiple Data Center
Deployments with Kubernetes

Introduction
Kubernetes is built to be scalable. Currently, a single cluster supports up to 5000 nodes, 150,000 total pods, 300,000 total containers, and 110 pods per node. It is possible, and often necessary, to run more than one
cluster, and in deployments with multiple data centers, each data center usually has more than one cluster.

Kubernetes Deployment Patterns


When releasing an application to Kubernetes or deploying updates to that application, you have several
methods for deployment. This topic discusses several methods that are popular in application deployment,
including Big Bang, Blue-Green, Canary, and Rolling deployment methodologies. You will also see an
example of how these methods are applied using Kubernetes.

Big Bang Deployment


• Traditional method
• Flip the switch to a new application version
• All systems are in lockstep; there is no variation in versions

In the Big Bang release deployment model, the entire application is upgraded in one window. This
methodology is common in traditional deployments. There is a single maintenance window to complete the
upgrade and downtime is required.
In Kubernetes, this deployment means that all traffic to the application will fail until the pods on the new
deployment are available. If downtime is not an issue, then this method is a cost-effective and easy
deployment process.

Big Bang, or Re-Create, Deployment

In Kubernetes, the Big Bang deployment is called Recreate. This strategy is built into Kubernetes
deployments. This deployment reuses the existing cluster resources, so there should be no extra costs. To re-
create the application, the manifest for the deployment needs to be adjusted. Under the spec field, you will
add a new line for strategy, and under strategy, set the type to Recreate. It is important that the name field
in the metadata does not change.
Use the kubectl apply -f (filename) command to apply the changes.
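As a minimal sketch, the strategy change might look like the following manifest (the nginx name, label, and image tag are illustrative, not from the course lab):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx            # must not change between versions
spec:
  replicas: 3
  strategy:
    type: Recreate       # all old pods are terminated before new pods start
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25   # the new application version
```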

Rolling Deployment
• The new version is slowly rolled out.
• Automated deployment removes old pods as new pods become available.

In Kubernetes, the Rolling deployment is a little different than traditional deployments. In this method,
Kubernetes replaces some of the replicas with the new deployment. As each new pod comes up, another
replacement occurs.
The replacement settings can be configured before the deployment starts, but once started, there is no way
to control these settings. The configurable settings are as follows:
• maxSurge: This setting specifies how many new pods can be added at one time.
• maxUnavailable: This setting specifies the number of pods that can be down during the update.

There is no downtime with this deployment. Depending on the settings, this method may not use any more
system resources, so costs are minimized during upgrades. However, this strategy is difficult to test during
deployment, and rollbacks cannot occur until after the deployment completes.

Rolling Deployment or Rolling Update


• When deploying a new version, the rollout happens slowly. New pods are brought up and old pods are
terminated, but not all at once.
• Configurable using maxSurge and maxUnavailable

This strategy is built into Kubernetes deployments. To update the application, the manifest for the
deployment needs to be adjusted. Under the spec field, you will add a new line for strategy, and under
strategy, set the type to RollingUpdate. It is important that the name field in the metadata does not change.
Optionally, you can add the maxSurge and maxUnavailable settings.
Use the kubectl apply -f (filename) command to apply the changes.
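A hedged sketch of the manifest with the optional settings (names, counts, and image tag are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx            # must not change between versions
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod may be added at a time
      maxUnavailable: 1    # at most one pod may be down during the update
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25   # the new application version
```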

Blue-Green Deployment
• Multiple versions are live at the same time.
• Migrate the application from the blue to the green set, with the ability to move back to blue if there are
issues with the green application.

The Blue-Green deployment is not a configurable option in Kubernetes, so you need to configure this
deployment yourself, which makes it a little more complex. With the Blue-Green deployment, the goal is to
have both deployments available, then perform a hard cutover between them.
Although this option has a higher resource cost, there are benefits from not having downtime and the ability
to test before cutting over to the green option.
In Kubernetes, there are a few options for Blue-Green deployments. One is to use the Service and the other
is to use Ingress. The choice depends on which option was deployed to provide external access.

Blue-Green Deployment Using Service or Ingress

The native way of configuring Blue-Green deployments in Kubernetes is to use a deployment with a
different name and a different label, in this case, the version number. The name value must be unique
between the two different deployments, because both will remain active. In this example, the existing
deployment is configured with a name nginx-v1, and has two labels that are configured, app: nginx and
version: v1.0.0. The Service has just the base name without the version number, because the Service will
not need two instances. The selector for the Service will match the existing deployment. It is important that
both the app and the version match.
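The existing deployment and Service described above might look like the following sketch (the nginx-v1 name and labels follow the example in the text; the port and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v1            # unique name per version
  labels:
    app: nginx
    version: v1.0.0
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      version: v1.0.0
  template:
    metadata:
      labels:
        app: nginx
        version: v1.0.0
    spec:
      containers:
      - name: nginx
        image: nginx:1.24
---
apiVersion: v1
kind: Service
metadata:
  name: nginx               # base name, without the version number
spec:
  selector:                 # must match both labels on the live deployment
    app: nginx
    version: v1.0.0
  ports:
  - port: 80
```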
• First, deploy the new application. This deployment should be tagged in the label. The old version should
also be tagged.
• The deployment name should be unique as well.
• Because the names are different, both deployments can exist in the same cluster.

To deploy the update to the application, you will make a new deployment for the application. The snippet in
the figure shows the updates to the deployment. The updates to the containers for the new code are not
shown.

The metadata will need to be changed to nginx-v2 because you are deploying version 2. This change will
allow nginx-v1 to be up at the same time as nginx-v2. In addition, the version label will need to be updated
to the new version number.
Use the kubectl apply -f (filename.yml) command to deploy the new version.
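The metadata changes described above might look like the following sketch (the container updates for the new code are omitted, as noted in the text):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v2            # new name so v1 and v2 can run at the same time
  labels:
    app: nginx
    version: v2.0.0         # version label updated to the new version number
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
      version: v2.0.0
  template:
    metadata:
      labels:
        app: nginx
        version: v2.0.0
    # container spec updates for the new code are not shown
```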
• Second, wait for all pods in the new service to finish deploying.
• kubectl rollout status deploy nginx-v2 -w

After deploying to Kubernetes, you need to verify that the deployment was rolled out successfully. You can
verify this by running the kubectl rollout status deploy nginx-v2 -w command.
After you run the command, if the deployment was successful, the output should show that the deployment
rolled out successfully.
• Third, update the existing service.
• In the first command, you are supplying a JSON patch.
• Once everything is verified, remove the existing deployment.

Once the deployment is successful, you may want to verify operation by port-forwarding because the
Service still points to the previous deployment. When ready, you will want to update the Service. This
update can be done easily by running the kubectl patch command and supplying the changes in JSON
patch format. Here is an example:
kubectl patch service nginx -p '{"spec":{"selector":{"version":"v2.0.0"}}}'

This command will update the existing Service resource in Kubernetes and set the selector to look for the
new deployment. Once everything has been verified, you can safely remove the old deployment using the
kubectl delete deploy nginx-v1 command.

Canary Deployment
• Route a subset of users to an application.
• Different branches are used for different Canary servers.
• Verify that new features and functions are working without impacting most users.

Canary deployment in Kubernetes is similar to a rolling update, but gives you control over the rollout. The
Canary deployment is fairly complex and can create challenges when troubleshooting. Costs are minimal because
you use the same cluster resources. In this configuration, both versions are active at the same time, so there
is zero downtime.
Like the Blue-Green deployment, Kubernetes does not have the built-in option for a Canary deployment,
but there are several ways to do a Canary deployment. One option uses the base version of Kubernetes, and
the other options require an ingress controller such as Nginx or Istio.

Canary Deployment Using Replica Scale

An example of the existing deployment is provided in the figure. The existing deployment should look
similar to the Blue-Green deployment. The replicas have been increased to 10 to provide better examples.
The new deployment will also be similar to the Blue-Green deployment, because the name and version
number are updated. The replicas on the new deployment have been set to 1. With a Canary deployment, it
is good to start small, with a subset of traffic, and then increase as you gain confidence in the update.
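A hedged sketch of the two deployments side by side (names, labels, and replica counts follow the example; selectors and pod templates are omitted for brevity):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v1
  labels:
    app: nginx
    version: v1.0.0
spec:
  replicas: 10        # existing deployment carries most of the traffic
  # selector and template omitted for brevity
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-v2
  labels:
    app: nginx
    version: v2.0.0
spec:
  replicas: 1         # canary starts with a small subset of traffic
  # selector and template omitted for brevity
```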
• Here, you will deploy version 2 and scale down version 1. In the example, version 2 has one replica, so
one replica is removed from version 1.
• You can continue increasing replicas on version 2 and reducing them on version 1 until all are on
version 2.

When you apply the new Canary deployment, you will also want to scale down the existing deployment. In
the example, the scale will be reduced from 10 to 9 on the existing deployment, because one replica is being
added on the new deployment.
After verifying the application, you can optionally increase the scale to 50 percent of each version for more
testing. Or you could skip this step and go to the last step.
kubectl scale --replicas=5 deploy nginx-v1

kubectl scale --replicas=5 deploy nginx-v2
Finally, once you are confident and ready to fully transition to the new deployment, you can increase the
scale of the new deployment to match the scale of the old deployment before you started. In this case, there
will be 10 replicas. You will then delete the old deployment.
kubectl scale --replicas=10 deploy nginx-v2
kubectl delete deploy nginx-v1

Release Strategies Comparison

There are benefits and risks that are associated with the various release strategies. There are financial costs
that are involved with the size of the system environment and complexity. There are trade-offs among the
categories that are listed, and others that are not listed. The release deployment strategy that is right for your
organization will differ based on the values that are associated with the strategy. You may be able to have a
minimal amount of downtime that is accounted for in a planned maintenance window to keep complexity
and costs down. Or you may want to have 100 percent uptime with no outage and fast rollbacks.
The Big Bang deployment method has downtime, but is the least complex and requires the least amount of
system resources to complete the strategy.
Rolling deployments have a bit more complexity and keep the costs down. The downside is that no real
traffic is hitting the application until it is in production. The deployment also has to complete before
rollbacks can be made.
The Blue-Green deployment has the quickest rollback capabilities. If there are issues on the new application
version (green side), you can switch back to the previous production instance because it is still 100 percent
in production. This methodology has the highest cost because you are maintaining two instances of the
environment.
The Canary deployment has some complexity trade-offs because the rollout includes different versions
within the production environment. This deployment has a longer period of version mismatch than a rolling
release, but gives you the ability to observe real-world traffic on the Canary systems before continuing to
increase the production load of the new application version.
1. Which deployment pattern allows zero downtime, is cost-effective, and is easy to implement?
a. Big Bang deployment
b. Rolling deployment
c. Blue-Green deployment
d. Canary deployment

Kubernetes Failure Scenarios
Failures are common for any environment. In this topic, you will learn about some of the common failure
scenarios and the options to prevent or recover from them.

VM or Physical Server Shutdown


• VMs or physical servers can be shut down or fail for several reasons.
• What happens if the API server is down?
• What happens if a node is down?

If the API server is shut down or crashing, existing pods and services will still work normally unless they
need the Kubernetes API, like the Prometheus exporter. You will be unable to update the pods, start new
pods, or create new services. New replicas of the pods will not be created. Nodes will stay up, but no new
nodes can be brought up while the API server is down. This situation may not be considered a production
outage. Once the API server is back online, all resources that require the API server will continue to
function.
If one of the nodes fails or is shut down, the pods on that node will also be down. If there are other nodes in
the cluster, and replicas were set, the pods will come up on the other nodes of the cluster. This situation
could be a production outage, depending on the deployment configurations.
To mitigate some of the effects, use the provider’s automatic VM restart to prevent the machine from
staying in a shutdown state. Applications and their containers should be designed for unexpected restarts.
Replication should be set for all deployments. Configure high availability for the API server if constant
access to the API is required.

Network Errors
• Network failures or errors can cause communication loss between nodes and the API server.
• What happens when connectivity to a node is lost?
• What happens when the node loses connectivity to the API server?

Network errors or failures can cause nodes to lose communication with the API server. The node will no
longer be able to check in to the API server and will think the API server is down. The API server will
perceive that the node is down and will move the replicas to other nodes that are available. Pods will still
function just as they would if the API server was shut down.
To mitigate these effects, ensure that deployments are configured with replicas. High availability of the API
server can also mitigate network issues, because nodes can fail over to another API server instance when one
is unreachable.

Data Loss or Unavailable


• Data loss is one of the risks you can face. Sometimes the data is not lost, but the storage is inaccessible.
• What happens to the Kubernetes API server, the nodes, and the pod storage?

If the API server loses the storage back end, the API server should fail to come up. This scenario is similar
to the API server being shut down, except there could be manual recovery of the API server once the
storage is available.
If a node loses access to the storage, then all pods on the node should fail and the pods would be
rescheduled on another node.
To mitigate these effects, ensure that snapshots of the API server storage are taken regularly. Use reliable
storage for the VMs that run the API server or etcd. etcd, a distributed key-value store, is the database that
the Kubernetes API server is built on.

Operator Error
• Kubernetes is complex.
• A small sampling of Kubernetes failures, many caused by operator errors, can be found at https://k8s.af/.
• How does Kubernetes manage operator errors?

Kubernetes is complex, and many issues can occur due to operator error. The Kubernetes Failure Stories
page (https://k8s.af) has a list of Kubernetes failures, many due to operator error. The issues can range from
the loss of a pod or a service to a cluster deletion.
However, you should not be afraid to run Kubernetes. Everyone makes mistakes, but learning from those
mistakes is important. Following some guidelines should help mitigate and minimize the impact of failures.
1. Maintain backups and practice restoring from the backups. A backup is only good if you can restore it.
2. Have a disaster recovery plan and practice disaster recovery often.
3. Limit user permissions through RBAC policies in Kubernetes.
4. Limit user permissions in cloud provider policies.
5. Ensure that all deployments have replicas.
6. Set up independent clusters, preferably in different provider zones.
7. Do not make changes to all clusters at once.

1. If a node fails, what happens to the pods that are running on that node?
a. They are replicated to other nodes.
b. They continue running.
c. The pods are terminated.
d. The pods can no longer access the API server.

Kubernetes Load-Balancing Techniques
Load balancing can mitigate many failure issues, but also allows your application to scale well. Good load
balancing ensures that all pods are running efficiently. Without load balancing, some pods will be
overworked, while others sit idle.

Deployment and ReplicaSet


• You can use a deployment to scale out the ReplicaSet.
• A ReplicaSet ensures that a set number of pods exist in the cluster.
• Replicas can also be used with Horizontal Pod Autoscaling to dynamically scale the number of pods
based on the cluster resources.

Although replicas do not themselves load-balance, they make it easy to scale pods and are a prerequisite for
load balancing. ReplicaSets can create other ReplicaSets, but are easier to manage when they are configured
through a deployment. A ReplicationController is another alternative for managing replicas, but the
preferred method is using deployments.
When pods are deployed, the API server chooses the node to deploy based on the node resources. Combined
with replicas, this approach tries to evenly distribute the replicas between the nodes. The deployment also
ensures that when a node is unavailable, another replica can be run on a different node.
Replicas can also be used with Horizontal Pod Autoscaling. This approach allows pods to dynamically scale
with the cluster resources.
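An autoscaler for a hypothetical nginx deployment could be sketched as follows (the CPU target and replica bounds are illustrative assumptions, not values from the course):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:            # the deployment whose replica count is managed
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU passes 70 percent
```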

DaemonSets
• DaemonSets are built for daemons that do not require scale, but must exist on each node.
• The Prometheus node exporter is an example of a DaemonSet—others are Logstash or Fluentd.
• The DaemonSet ensures that a pod exists on every node in the cluster.

Similar to ReplicaSets, DaemonSets are built to scale, but serve a slightly different purpose. If the scale is
for high availability rather than compute resources, or if the pod is required on every node, then a
DaemonSet is the right tool for the job. A DaemonSet ensures that there is one pod on every node of the
cluster. Applications like the Prometheus node exporter, Logstash, or Fluentd must be on every node.
This approach does not ensure that a minimum number of pods exist in the cluster. Its only job is to ensure
that the configured pod exists on each node in the cluster.
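A minimal sketch of a DaemonSet for the Prometheus node exporter mentioned above (the name and image are illustrative; note that there is no replicas field, because one pod runs per node):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      containers:
      - name: node-exporter
        image: prom/node-exporter   # scheduled once on every node in the cluster
```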

Ingress
• Ingress allows advanced routing and load balancing.
• Ingress controllers offer basic to advanced load-balancing techniques.
• Nginx provides load distribution and load balancing in its ingress controller.
• The F5 Big-IP controller maps pods to their external Big-IP load balancer.

Ingress allows advanced routing and load-balancing functionality. Ingress works on an ingress controller,
and the ingress controller implements load balancing.
Nginx is a common ingress controller and offers load distribution and load balancing. Another popular
ingress controller is the F5 Big-IP controller. Although the ingress controller does not do the load balancing
itself, it sends the pod information to the Big-IP load balancer, which sits outside the Kubernetes cluster.
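An Ingress resource that routes HTTP traffic to a Service might be sketched like this (the hostname, class name, and Service name are assumptions; it presumes an Nginx ingress controller is installed):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  ingressClassName: nginx       # assumes the Nginx ingress controller
  rules:
  - host: app.example.com       # illustrative hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx         # the Service that fronts the pods
            port:
              number: 80
```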

High Availability
• Kubernetes can be set up to run multiple control plane nodes per cluster. These servers are also known
as API servers.
• Kubernetes can use an etcd cluster.

High availability is another type of load balancing. Kubernetes is built on the idea of high availability with
deployments using ReplicaSets and multinode clusters. When using kubeadm to bring up a Kubernetes
cluster, it creates one instance of API server, etcd, scheduler, controller manager, and cluster autoscaler.
The goal of high availability is to allow one of the control planes to fail, but have the others pick up its load
and continue. Running this configuration is very complex. As mentioned earlier, an API server failure does not
usually take down the production system. The nodes and pods will continue running without the API server.

Cloud Load Balancers


• If Kubernetes is hosted in the public cloud, many of these providers integrate with a Service type of
LoadBalancer.
• Provides a hosted external load balancer, instead of relying on the ingress controller.

If Kubernetes is hosted on a public cloud, there is likely a load balancer that is built for you to use. Instead
of using the ingress controller to load-balance the traffic, you can use the hosted external load balancer.
These load balancers integrate with the services. Setting the Service type to LoadBalancer and configuring
the settings per the cloud provider’s instructions will allow Kubernetes to update the load balancer with
information about reaching the pods.
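A minimal sketch of such a Service (the name, selector, and ports are illustrative; provider-specific annotations are omitted because they vary by cloud):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer     # the cloud provider provisions an external load balancer
  selector:
    app: nginx           # pods the load balancer sends traffic to
  ports:
  - port: 80
    targetPort: 80
```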
1. Which option ensures that there is one specific pod on every node?
a. ReplicaSets
b. cloud load balancers
c. high availability
d. ingress
e. DaemonSets

Kubernetes Namespaces
Kubernetes clusters are large enough to be easily shared between teams or environments, but many teams
still want the division of server resources. Namespaces help establish these divisions.

Namespaces

• A namespace is a virtual cluster.


• A namespace is an optional feature that divides cluster resources between teams or environments.
• Namespaces cannot be nested.
• A resource can only be in one namespace.

Namespaces are logical separations between systems and are also known as virtual clusters. Namespaces
help divide a Kubernetes cluster. Names are unique only within a namespace. The same object names can be
used in different namespaces. RBAC and resource quotas can be defined per namespace.
A namespace cannot exist within a namespace—you cannot nest namespaces. A resource that is defined
within a namespace cannot be shared with other namespaces.

Namespace Examples
• Separate namespaces for production and development environments.
• Apply storage resource quotas to the development environment.
• Set a compute resource limit in a development environment.

One use case for namespaces is to separate development environments. Production, test, and development
environments can all exist in the same cluster. Because names are unique per namespace, the same
resources can be tested in each environment simply by changing the namespace. Resource quotas in the test
and development environments help ensure that the production environment has the resources it needs.
• Each development team has its own namespace.
• Each team has permission to manage its own pods, but not pods from other namespaces.
• A team does not need its own cluster.

Another example of using namespaces is for development teams. Each development team has their own
namespace. Through RBAC, a team would have permission to manage its own pods, but not pods that exist
in another namespace. Resource quotas ensure that each team has the resources it needs without affecting
other teams.
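A per-team namespace with a resource quota might be sketched as follows (the team-a name and quota values are hypothetical examples, not from the course):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a            # hypothetical team namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a       # quota applies only within this namespace
spec:
  hard:
    pods: "20"            # illustrative limits
    requests.cpu: "4"
    requests.memory: 8Gi
```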
1. Which statement about namespaces is not true?
a. Namespaces can be nested.
b. Namespaces are virtual clusters.
c. Names are unique within a namespace.
d. Resources in a namespace cannot be shared with other namespaces.
e. The same resource names can be shared with multiple namespaces.

Kubernetes Deployment via CI/CD Pipelines
Kubernetes has a powerful shell client and a resource model that allows resources to be stored in a version
control system and deployed through a CI/CD pipeline.

Why Use CI/CD?


• CI/CD: Continuous integration and continuous deployment
• The goal of CI/CD is to ensure that software is in a state in which it can be deployed to users and fully
automates the deployment.

CI/CD stands for continuous integration and continuous deployment. Continuous integration validates the
code by running tests to confirm that the code meets the established standards and does not break existing
functionality. When validated, CI automates the merge to the main branch (Git) or trunk (Subversion).
Continuous deployment automates the deployment of the software to production or releases the build.
In summary, the goal of CI/CD is to ensure that software is in a state in which it can be deployed to users
and fully automates the deployment.

Why Use CI/CD on Kubernetes?


• Kubernetes is a platform to host applications.
• Kubernetes is built around the idea of automating deployments.
• Deploying to Kubernetes through the pipeline reduces human error.

Kubernetes ties in well with the continuous deployment stage in the CI/CD pipeline. Kubernetes is a
platform to host applications and is built around automating deployments. Deploying through a pipeline
reduces human interaction, which reduces human errors.
To deploy an application to Kubernetes, the container for the application must be built first. Building
containers for Kubernetes is the same as building containers for Docker, if you are using the Docker runtime. The
pipeline should build the containers and push them to a registry. Then the deployment is applied to
Kubernetes through a manifest. The manifest will specify the required number of pods and the location of
the container image on the registry. The pod will pull down the image and start the container. Verification
of the deployment should also happen within the pipeline.
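The pipeline flow just described could be sketched as a minimal GitLab CI configuration (the stage names, registry path, manifest filename, and deployment name are illustrative assumptions):

```yaml
stages:
  - build
  - deploy

build:
  stage: build
  script:
    # build the container and push it to a registry (path is illustrative)
    - docker build -t registry.example.com/net-inventory:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/net-inventory:$CI_COMMIT_SHORT_SHA

deploy:
  stage: deploy
  script:
    # apply a manifest that references the pushed image, then verify the rollout
    - kubectl apply -f deployment.yml
    - kubectl rollout status deploy net-inventory
```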
1. What is a benefit of deploying to Kubernetes using a CI/CD pipeline?
a. Kubernetes was built on speed.
b. Kubernetes was built on ease of use.
c. Kubernetes was built around automating deployments.
d. Kubernetes was built around automation.

Discovery 23: Explore and Modify a Kubernetes
CI/CD Pipeline
Introduction
The power of Kubernetes begins to shine once you begin to implement it in a CI/CD pipeline. In this lab,
you will explore and modify an application running on Kubernetes using Gitlab.

Topology

Job Aid

Device Information

Device                      Description                      FQDN/IP Address    Credentials

Student Workstation         Linux Ubuntu VM                  192.168.10.10      student, 1234QWer

GitLab Container Registry   Container Registry               registry.git.lab   student, 1234QWer

k8s1                        Kubernetes Control Plane Node    192.168.10.21      student, 1234QWer

k8s2                        Kubernetes Worker                192.168.10.22      student, 1234QWer

k8s3                        Kubernetes Worker                192.168.10.23      student, 1234QWer

csr1kv1                     Cisco Router                     192.168.10.101     student, 1234QWer

csr1kv2                     Cisco Router                     192.168.10.102     student, 1234QWer

csr1kv3                     Cisco Router                     192.168.10.103     student, 1234QWer

asa1                        Firewall                         192.168.10.51      student, 1234QWer

Kubernetes and Gitlab Registry


This lab infrastructure has been set up to use two single-node Kubernetes clusters and a private Docker
registry (GitLab Container Registry). All images used in this lab are present on the container registry. The
communication between Kubernetes and GitLab was established prior to the start of the lab, so no other
setup steps are necessary to begin. Communication between Kubernetes and GitLab is secured via HTTPS,
and a service account has been created for pulling images from GitLab.

Command List
The table describes the commands that are used in this activity. The commands are listed in alphabetical
order so that you can easily locate the information that you need. Refer to this list if you need configuration
command assistance during the lab activity.

Command                                   Description

cd directory name                         Changes directories within the Linux file system. You will use this
                                          command to enter the directory where the lab scripts are housed. You
                                          can use tab completion to finish the name of the directory after you
                                          start typing it.

cat file                                  Reads the contents of files. It is the most convenient command for this
                                          purpose in UNIX-like operating systems.

curl url                                  Accesses URLs from the command line.

kubectl [create, delete, expose, get]     The Kubernetes command-line tool.

ls file                                   Lists file or folder contents.

Task 1: Create Deployment to Single Cluster via CI/CD Pipeline

In this task, you will explore the basics of deploying an application to a single Kubernetes cluster using a
CI/CD pipeline. You will be using GitLab's built-in CI/CD pipeline. A GitLab runner has already been
configured to execute the pipeline.

Activity

Step 1 In the student workstation, find and open Visual Studio Code by clicking the Visual Studio Code icon.

Step 2 From the Visual Studio Code top navigation bar, choose Terminal > New Terminal [ctrl-shift-`].

Step 3 Navigate to the terminal section at the bottom of Visual Studio Code.

Step 4 Within the Visual Studio Code terminal, change the directory to labs/lab23 using the cd ~/labs/lab23
command.

student@student-vm:$ cd ~/labs/lab23/

Step 5 Issue the git clone https://git.lab/cisco-devops/explore-k8s-pipeline.git command to clone the
explore-k8s-pipeline repository.

student@student-vm:labs/lab23$ git clone https://git.lab/cisco-devops/explore-k8s-pipeline.git
Cloning into 'explore-k8s-pipeline'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
student@student-vm:labs/lab23 (master)$

Step 6 Change directory to the explore-k8s-pipeline directory by issuing cd explore-k8s-pipeline command.

student@student-vm:labs/lab23$ cd explore-k8s-pipeline
student@student-vm:lab23/explore-k8s-pipeline (master)$

Edit the .gitlab-ci.yml File to Add kubectl Commands


GitLab uses the .gitlab-ci.yml file to control the CI/CD pipeline. You will edit an existing pipeline. Applying
the kustomization (Kubernetes "customization") file works similarly to using a Docker Compose file. In
YAML, the "-" character denotes an item in a list. You will add the commands to the shell script list. Pay
attention to indentation.

Step 7 To switch to the context for deploying to the single node cluster k8s2, add the following line as the first item
in the deploy stage script part of the .gitlab-ci.yml file, one line above the kubectl delete command:

- "kubectl config use-context k8s2"

To apply the kustomization file, add the following line as the last line within the deploy stage
script part.
- "kubectl apply -k ./"

stages:
- "build"
- "deploy"

variables:
CI_REGISTRY_IMAGE_DB: "net_inventory_db"
CI_REGISTRY_IMAGE_BACKEND: "net_inventory_backend"
CI_REGISTRY_IMAGE_FRONTEND: "net_inventory_frontend"

before_script:
- "docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD
https://registry.git.lab"
- "echo $CI_COMMIT_REF_SLUG"

build:
stage: "build"
script:
- "echo BUILD DB"
- "docker build -t $CI_REGISTRY_IMAGE_DB -f Dockerfile_db ."
- "docker tag $CI_REGISTRY_IMAGE_DB registry.git.lab/cisco-devops/explore-k8s-
pipeline/$CI_REGISTRY_IMAGE_DB:$CI_COMMIT_REF_SLUG"
- "echo BUILD BACKEND"
- "docker build -t $CI_REGISTRY_IMAGE_BACKEND -f Dockerfile_backend ."
- "docker tag $CI_REGISTRY_IMAGE_BACKEND registry.git.lab/cisco-devops/explore-k8s-
pipeline/$CI_REGISTRY_IMAGE_BACKEND:$CI_COMMIT_REF_SLUG"
- "echo BUILD FRONTEND"
- "docker build -t $CI_REGISTRY_IMAGE_FRONTEND -f Dockerfile_frontend ."
- "docker tag $CI_REGISTRY_IMAGE_FRONTEND registry.git.lab/cisco-devops/explore-
k8s-pipeline/$CI_REGISTRY_IMAGE_FRONTEND:$CI_COMMIT_REF_SLUG"
- "docker push
registry.git.lab/cisco-devops/explore-k8s-pipeline/$CI_REGISTRY_IMAGE_DB:
$CI_COMMIT_REF_SLUG"
- "docker push
registry.git.lab/cisco-devops/explore-k8s-pipeline/$CI_REGISTRY_IMAGE_BACKEND:
$CI_COMMIT_REF_SLUG"
- "docker push
registry.git.lab/cisco-devops/explore-k8s-pipeline/$CI_REGISTRY_IMAGE_FRONTEND:
$CI_COMMIT_REF_SLUG"

deploy:
stage: "deploy"
script:
- "kubectl config use-context k8s2"
- "kubectl delete secret net-inventory || true"
- "kubectl create secret generic net-inventory --from-literal=secret-
key=$SECRET_KEY \
--from-literal=sqlalchemy-database-uri=$SQLALCHEMY_DATABASE_URI \
--from-literal=postgres-db=$POSTGRES_DB \
--from-literal=postgres-password=$POSTGRES_PASSWORD \
--from-literal=postgres-user=$POSTGRES_USER"
- "kubectl apply -k ./"

only:
- "master"

Save, Commit, and Push the Changes to Start Pipeline Using Visual
Studio Code Integrated Functionalities
You can press Ctrl-S to save the changes that you made in the .gitlab-ci.yml file. Then you must add,
commit, and push the changes to the remote repository to start the pipeline.

Here you will use the easy-to-use integrated environment of Visual Studio Code to achieve the same results.
The following few screen captures show the procedure.

Confirm Ports and Verify Deployment
With Kubernetes in place, you must verify that the service ports in the manifest file for each application exist
and are properly defined. In this activity, the two manifest files are netinv_frontend.yml and the
netinv_backend.yml. Then you will verify the deployment on GitLab. The application needs some data, so
you will populate the inventory and finally validate that the Network Inventory application works as
expected.

Step 8 Open the netinv_frontend.yml and the netinv_backend.yml manifest files. Confirm that the nodePort
variable is set to port 30500 in the netinv_frontend.yml manifest file and to port 30501 in the
netinv_backend.yml manifest file. If needed, make corrections and save changes.

Step 9 From the Chrome browser, navigate to https://git.lab. Log in with the credentials that are provided in the
Job Aids and click Sign in.

Step 10 From the list of projects, choose the cisco-devops/explore-k8s-pipeline project. The Pipeline shows a green
check mark for both the build stage and the deploy stage.

Step 11 To populate the Network Inventory with some example devices, use the populate_inventory predeployed
script. Use port number 30501, as defined as the nodePort parameter of the netinv_backend.yml manifest
file. The Kubernetes cluster name is k8s2. Use the populate_inventory k8s2:30501 command.

student@student-vm:labs/lab23/explore-k8s-pipeline (master)$ populate_inventory k8s2:30501
nyc-rt01: Added successfully
nyc-rt02: Added successfully
rtp-rt01: Added successfully
rtp-rt02: Added successfully
student@student-vm:labs/lab23/explore-k8s-pipeline (master)$

Step 12 In a web browser, open a new tab and use the Network Inventory front-end web app URL http://k8s2:30500
to confirm that the app is running as expected. Port number 30500 is defined as the nodePort parameter of the
netinv_frontend.yml manifest file.

Task 2: Create a Deployment to Multiple Clusters Via CI/CD Pipeline
Previously, you created a Kubernetes deployment via the GitLab CI/CD pipeline. That method works well
with one Kubernetes control plane node. But if you want to deploy to multiple clouds, each with its own
instance of Kubernetes, then you need to modify your deployment method.

The kubectl config use-context command uses contexts to switch between clusters. Using that command,
you can deploy to one cluster, then switch to another, and so on.
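The switch-then-apply sequence can be sketched as a small helper that builds the kubectl command list for each cluster. The helper below is illustrative only and is not part of the course files; the context names k8s2 and k8s3 match this lab:

```python
# Sketch: generate the kubectl command sequence for deploying the same
# manifests to several clusters by switching contexts, as described above.
# The cluster names match this lab (k8s2, k8s3); the function itself is
# illustrative and not part of the course files.

def deploy_commands(contexts, kustomize_dir="./"):
    """Return the ordered kubectl commands for each cluster context."""
    commands = []
    for ctx in contexts:
        # Switch kubectl to the target cluster ...
        commands.append(["kubectl", "config", "use-context", ctx])
        # ... then apply the kustomization directory to that cluster.
        commands.append(["kubectl", "apply", "-k", kustomize_dir])
    return commands

for cmd in deploy_commands(["k8s2", "k8s3"]):
    print(" ".join(cmd))
```

Each cluster gets the same two-step sequence, which is exactly what the two deploy jobs in this task perform.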

In this task, you will run multiple jobs in the same stage.

Activity

Step 1 Use the git checkout -b multiple_deployments origin/multiple_deployments command to check out the
branch named multiple_deployments. The -b option creates a new local branch. The multiple_deployments
argument is the local branch name. The origin/multiple_deployments is the name of the remote branch you
are tracking.

student@student-vm:lab23/explore-k8s-pipeline (master)$ git checkout -b multiple_deployments origin/multiple_deployments
Branch 'multiple_deployments' set up to track remote branch 'multiple_deployments' from
'origin'.
Switched to a new branch 'multiple_deployments'
student@student-vm:lab23/explore-k8s-pipeline (multiple_deployments)$

Edit .gitlab-ci.yml File to Add kubectl Commands
Gitlab uses the .gitlab-ci.yml file to control the CI/CD pipeline. You will edit an existing pipeline with two
deploy jobs. Pay attention to indentation.

Step 2 To switch to the context for deploying to the k8s2 server, add the following line as the first item in the
deploy stage script part deploy_k8s2 job of the .gitlab-ci.yml file, one line above the kubectl delete
command:

- "kubectl config use-context k8s2"

To apply the kustomization file, add the following line as the last line within the deploy stage
script part.
- "kubectl apply -k ./"

Step 3 To switch to the context for deploying to the k8s3 server, add the following line as the first item in the
deploy stage script part deploy_k8s3 job of the .gitlab-ci.yml file, one line above the kubectl delete
command:

- "kubectl config use-context k8s3"

To apply the kustomization file, add the following line as the last line within the deploy stage
script part.
- "kubectl apply -k ./"

student@student-vm:labs/lab23/explore-k8s-pipeline (multiple_deployments)$ cat .gitlab-ci.yml
stages:
- "build"
- "deploy"

variables:
CI_REGISTRY_IMAGE_DB: "net_inventory_db"
CI_REGISTRY_IMAGE_BACKEND: "net_inventory_backend"
CI_REGISTRY_IMAGE_FRONTEND: "net_inventory_frontend"

before_script:
- "docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD
https://registry.git.lab"
- "echo $CI_COMMIT_REF_SLUG"

build:
stage: "build"
script:
- "echo BUILD DB"
- "docker build -t $CI_REGISTRY_IMAGE_DB -f Dockerfile_db ."
- "docker tag $CI_REGISTRY_IMAGE_DB registry.git.lab/cisco-devops/explore-k8s-
pipeline/$CI_REGISTRY_IMAGE_DB:$CI_COMMIT_REF_SLUG"
- "echo BUILD BACKEND"
- "docker build -t $CI_REGISTRY_IMAGE_BACKEND -f Dockerfile_backend ."
- "docker tag $CI_REGISTRY_IMAGE_BACKEND registry.git.lab/cisco-devops/explore-k8s-
pipeline/$CI_REGISTRY_IMAGE_BACKEND:$CI_COMMIT_REF_SLUG"
- "echo BUILD FRONTEND"
- "docker build -t $CI_REGISTRY_IMAGE_FRONTEND -f Dockerfile_frontend ."
- "docker tag $CI_REGISTRY_IMAGE_FRONTEND registry.git.lab/cisco-devops/explore-
k8s-pipeline/$CI_REGISTRY_IMAGE_FRONTEND:$CI_COMMIT_REF_SLUG"
- "docker push
registry.git.lab/cisco-devops/explore-k8s-pipeline/$CI_REGISTRY_IMAGE_DB:
$CI_COMMIT_REF_SLUG"
- "docker push
registry.git.lab/cisco-devops/explore-k8s-pipeline/$CI_REGISTRY_IMAGE_BACKEND:
$CI_COMMIT_REF_SLUG"
- "docker push
registry.git.lab/cisco-devops/explore-k8s-pipeline/$CI_REGISTRY_IMAGE_FRONTEND:
$CI_COMMIT_REF_SLUG"

deploy_k8s2:
stage: "deploy"
script:
- "kubectl config use-context k8s2"
- "kubectl delete secret net-inventory || true"
- "kubectl create secret generic net-inventory --from-literal=secret-
key=$SECRET_KEY \
--from-literal=sqlalchemy-database-uri=$SQLALCHEMY_DATABASE_URI \
--from-literal=postgres-db=$POSTGRES_DB \
--from-literal=postgres-password=$POSTGRES_PASSWORD \
--from-literal=postgres-user=$POSTGRES_USER"
- "kubectl apply -k ./"

deploy_k8s3:
stage: "deploy"

script:
- "kubectl config use-context k8s3"
- "kubectl delete secret net-inventory || true"
- "kubectl create secret generic net-inventory --from-literal=secret-
key=$SECRET_KEY \
--from-literal=sqlalchemy-database-uri=$SQLALCHEMY_DATABASE_URI \
--from-literal=postgres-db=$POSTGRES_DB \
--from-literal=postgres-password=$POSTGRES_PASSWORD \
--from-literal=postgres-user=$POSTGRES_USER"
- "kubectl apply -k ./"

Save, Commit, and Push the Changes to Start Pipeline Using Terminal
Window

Step 4 Press Ctrl-S to save the changes that you made in the .gitlab-ci.yml file. You must then commit the file and
push it back to the remote repository. When changes are pushed, the GitLab pipeline will be executed. Run
the following commands to add, commit, and push changes to the remote repository:

git add .gitlab-ci.yml
git commit -m "Deploying to k8s2 and k8s3"
git push origin

student@student-vm:labs/lab23/explore-k8s-pipeline (multiple_deployments)$ git add .gitlab-ci.yml
student@student-vm:labs/lab23/explore-k8s-pipeline (multiple_deployments)$ git commit -m "Deploying to k8s2 and k8s3"
[multiple_deployments 1524aa7] Deploying to k8s2 and k8s3
1 file changed, 4 insertions(+)
student@student-vm:labs/lab23/explore-k8s-pipeline (multiple_deployments)$ git push origin
Username for 'https://git.lab': student
Password for 'https://git.lab': 1234QWer

Verify Deployment

Step 5 From the Chrome browser, navigate to https://git.lab.

Step 6 From the list of projects, choose the cisco-devops/explore-k8s-pipeline project. The Pipeline shows a green
check mark for both the build stage and the deploy stage. You will note that there are now two jobs in the
deploy stage.

Step 7 To populate the Network Inventory with some example devices, use the populate_inventory predeployed
script. Use port number 30501, as defined as the nodePort parameter of the netinv_backend.yml manifest
file. The Kubernetes cluster name is k8s2. Use the populate_inventory k8s2:30501 command.

student@student-vm:labs/lab23/explore-k8s-pipeline (master)$ populate_inventory k8s2:30501
nyc-rt01: Added successfully
nyc-rt02: Added successfully
rtp-rt01: Added successfully
rtp-rt02: Added successfully
student@student-vm:labs/lab23/explore-k8s-pipeline (master)$ populate_inventory k8s3:30501
nyc-rt01: Added successfully
nyc-rt02: Added successfully
rtp-rt01: Added successfully
rtp-rt02: Added successfully
student@student-vm:labs/lab23/explore-k8s-pipeline (master)$

Step 8 In a web browser, open a new tab and use the Network Inventory front-end web app URL http://k8s2:30500
to confirm that the app is running as expected. Port number 30500 is defined as the nodePort
parameter of the netinv_frontend.yml manifest file.
Perform the same validation for the k8s3 server at URL http://k8s3:30500.

Summary
This lab reviewed the steps needed to deploy applications on a Kubernetes cluster using a CI/CD pipeline.
This lab focused on two scenarios, deploying to a single Kubernetes cluster, and deploying to multiple
Kubernetes clusters on different clouds.

Summary Challenge
1. Match the deployment type with its advantages and disadvantages.
Canary
Some downtime, not complex, little to no upgrade costs,
no real traffic testing
Big Bang
Zero downtime, not very complex, little to no upgrade
costs, no real traffic testing
Blue-Green
Zero downtime, complex, large upgrade costs, allows real
traffic testing
Rolling
Zero downtime, very complex, little to no upgrade costs,
allows real traffic testing

2. Which two deployment patterns are built into Kubernetes deployments? (Choose two.)
a. Blue-Green deployment
b. rolling update
c. Canary deployment
d. A/B Testing deployment
e. Big Bang deployment
3. What happens to pods when the API server does not respond?
a. Pods are terminated.
b. Pods report a critical state.
c. Pods continue to run.
d. Pods pause until the API server is available.
4. What are two options to mitigate storage failures? (Choose two.)
a. Take frequent snapshots.
b. Use reliable storage.
c. Have a disaster recovery plan and test the plan regularly.
d. Limit user permissions through RBAC.
e. Set up multiple clusters, preferably in different zones.
5. Which statement describes the purpose of ReplicaSets?
a. Ensure that every node has one pod from a deployment.
b. Communicate the pod IP address to external load balancers.
c. Ensure that a maximum number of pods exist in the cluster.
d. Ensure that a set number of pods exist in the cluster.
6. Which Kubernetes object cannot be applied to a namespace?
a. pod
b. namespace
c. role
d. resource quota

7. What does a pod use when a deployment is created?
a. a container image
b. gitlab-ci.yml
c. the build
d. the binary file

Answer Key
Kubernetes Deployment Patterns
1. B

Kubernetes Failure Scenarios
1. A

Kubernetes Load-Balancing Techniques
1. E

Kubernetes Namespaces
1. A

Kubernetes Deployment via CI/CD Pipelines
1. C

Summary Challenge
1.
Big Bang: Some downtime, not complex, little to no upgrade costs, no real traffic testing
Rolling: Zero downtime, not very complex, little to no upgrade costs, no real traffic testing
Blue-Green: Zero downtime, complex, large upgrade costs, allows real traffic testing
Canary: Zero downtime, very complex, little to no upgrade costs, allows real traffic testing

2. B, E
3. C
4. A, B
5. D
6. B
7. A

Section 18: Monitoring and Logging In Kubernetes

Introduction
Kubernetes is a great way to run and scale applications, but how do you monitor the applications and system
when your application is spread across a cluster? In this section, you will explore monitoring and logging in
Kubernetes.

Kubernetes Resource Metrics Pipeline


Metrics give you a view of how a system is performing. Through metrics, you can discover if a system is
overutilized or underutilized. You can see if a node goes offline or stops reporting metrics. Metrics can help
in planning resource upgrades. Kubernetes has a method for displaying and publishing the resource metrics
for pods and nodes.

Metrics
• Raw measurements of resource usage
• Low-level usage summaries from the operating system
• Higher-level data tied to Kubernetes
• Can report total capacity or load

Metrics are raw measurements of resource usage. Metrics can be low-level usage summaries from the
operating system or high-level data from services or applications. Metrics can report based on the total
capacity or the load.
CPU is represented in CPU units where 1 cpu is equivalent to 1 core or 1 virtual cpu. Often, you will see
data like 100m, which is “one hundred millicpu” or “one hundred millicores.” The CPU unit is an absolute
quantity, not relative. 100m is the same amount on a single-core, quad-core, or a 64-core machine.
Memory is represented using bytes in the power of 2 prefix, which is also known as a binary prefix. Power
of 10 is the other common standard. MB stands for megabyte and is in the power of 10 format. MiB stands
for mebibyte and is the power of 2 format. 1 KiB (kibibyte) is 1024 bytes, and 1 KB (kilobyte) is 1000
bytes.
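As a quick illustration of the units above, the following sketch expresses them in Python; the sample values are for demonstration only, not measurements from the lab cluster:

```python
# Illustration of Kubernetes resource units. Millicores are an absolute
# 1/1000 of a CPU; memory uses binary prefixes (Ki, Mi, Gi) or decimal
# prefixes (K, M, G). The sample values are for demonstration only.

def millicores_to_cores(m):
    """100m -> 0.1 CPU, the same amount on a 1-core or 64-core node."""
    return m / 1000

KIB = 1024       # binary prefix: 1 KiB = 1024 bytes
KB = 1000        # decimal prefix: 1 KB = 1000 bytes
MIB = 1024 ** 2  # 1 MiB = 1,048,576 bytes
MB = 1000 ** 2   # 1 MB = 1,000,000 bytes

print(millicores_to_cores(100))  # 0.1
print(MIB - MB)                  # 48576: a mebibyte is larger than a megabyte
```

The difference matters when reading manifests: a pod requesting 256Mi gets roughly 4.9 percent more memory than one requesting 256M.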

Resource Metrics Pipeline


• CPU, memory, and other usage metrics are available through the Metrics API.
• The kubectl top command can display these metrics.

$ kubectl top node
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s1   224m         11%    3308Mi          41%
k8s2   93m          9%     2113Mi          60%
k8s3   151m         10%    2845Mi          49%

In Kubernetes, CPU, memory, and other usage metrics are available through the Metrics API. The Metrics
API is available to users and to the system. Horizontal Pod Autoscaling uses the Metrics API to determine
the scale of pods.
The kubectl top command can be run to display the metrics. In the example, kubectl top node is run to see
the CPU and memory metrics for all the nodes in a cluster.
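The Horizontal Pod Autoscaler mentioned above applies a simple rule to the metrics it reads: desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric), as described in the Kubernetes HPA documentation. A sketch with illustrative numbers, not values from the lab cluster:

```python
# Sketch of the replica-count rule the Horizontal Pod Autoscaler
# documentation describes:
#   desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)
# The sample numbers below are illustrative only.
from math import ceil

def desired_replicas(current_replicas, current_metric, target_metric):
    return ceil(current_replicas * current_metric / target_metric)

print(desired_replicas(4, 200, 100))  # usage at twice the target doubles the pods
print(desired_replicas(4, 50, 100))   # usage at half the target halves the pods
```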

Metrics API
• The Kubernetes Metrics API gives the current resource usage.
• The metrics are not stored by Kubernetes, so there is no history.
• The API requires the metrics server to be deployed.

The Kubernetes Metrics API is located at the /apis/metrics.k8s.io path. The Metrics API gives the current
resource usage; the data is kept in memory and overwritten at each collection cycle. There is no storage in
Kubernetes for metrics history. This data must be stored outside of Kubernetes to create historical metrics.
The Kubernetes API server forwards traffic to the metrics server using the kube-aggregator. The metrics
server must be installed for the Metrics API to function.
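Because the API keeps only the most recent sample, building a metrics history means polling and storing the data outside the cluster. The following sketch simulates such a collector with fabricated samples; a real collector would query /apis/metrics.k8s.io and write to a time-series store:

```python
# Sketch of an external metrics collector. The Metrics API keeps only the
# latest sample, so history must be accumulated outside Kubernetes. The
# samples below are fabricated (timestamp, cpu_millicores) pairs.
from collections import deque

def collect(history, sample, max_samples=1000):
    """Append one (timestamp, cpu_millicores) sample, keeping a bounded window."""
    history.append(sample)
    while len(history) > max_samples:
        history.popleft()  # drop the oldest sample once the window is full

history = deque()
for sample in [(0, 224), (10, 230), (20, 218)]:  # fabricated samples
    collect(history, sample)
print(len(history))  # 3 samples retained
```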

Metrics Server
• Aggregator of resource usage for the cluster.
• Collects metrics from the Kubelet Summary API on each node.

The metrics server is an aggregator of resource usage metrics for the cluster. The metrics server replaces
Heapster and was built to follow the Kubernetes API design, which allows RBAC controls to be placed on
the metrics.
The metrics server collects metrics from the Summary API endpoint on the kubelet. It collects this data
from every node in the cluster. The metrics server is an add-on package for Kubernetes and is provided
through the kube-up.sh script. If the kube-up.sh script was not used in building the cluster, the metrics
server can be manually installed.
1. How is memory quantified in metrics?
a. kibibytes, mebibytes, gibibytes, and so on
b. kilobytes, megabytes, gigabytes, and so on
c. kilacpu, millicpu, CPU, and so on
d. kilobits, megabits, gigabits, and so on

Kubernetes Full Metrics Pipeline and Logging
CPU, memory, and storage metrics from pods and nodes may not be enough metrics data for your use case.
If you need more metrics data, you can implement the full metrics pipeline and custom logging.

Full Metrics Pipeline


Custom.metrics.k8s.io
• Implement custom metrics for Kubernetes objects.

External.metrics.k8s.io
• Implement custom metrics for external resources.

The full metrics pipeline is an extendable option in Kubernetes. The API locations custom.metrics.k8s.io
and external.metrics.k8s.io are available for anyone to implement metrics. Kubernetes does not include
custom or external metrics.
You can implement custom metrics for Kubernetes objects at custom.metrics.k8s.io. Pods, namespaces, and
volumes are examples of Kubernetes objects.
You can implement custom metrics for external resources at external.metrics.k8s.io. It could be for an
application that is running on Kubernetes or for an external load balancer.

Examples of Full Metrics


• Kubernetes does not include a full metrics pipeline, but you can use a third-party pipeline.
• Stackdriver offers metrics, logs, monitoring, and alerts.
• Prometheus offers metrics, monitoring, and alerts.

Kubernetes defines the API for custom and external metrics, but not the data itself. Implementation is left to
third-party systems or you can create your own implementation. Two examples of full metrics are
Stackdriver and Prometheus.
Google Stackdriver is a unified monitoring and logging system that provides alerts, debugging, error
reporting, tracing, logging, dashboards, and more. More information can be found at
https://cloud.google.com/stackdriver/.
Prometheus is an open-source monitoring solution that is simple to use, but has a powerful query language.
Both solutions support the custom.metrics.k8s.io and external.metrics.k8s.io locations.

Logging Architecture
• By default, containers write to stdout and stderr.
• The Docker logging driver converts the log messages into JSON format.
• The logs are stored in a directory per node.
• Logs can be viewed using kubectl logs.

Like metrics, logging has some built-in support in Kubernetes. Containers write to stdout and stderr, unless
a sidecar (helper container) captures logs. Kubernetes uses the Docker logging driver to capture the output
on stdout and stderr, and then converts them into JSON format. These logs are stored in the directory
/var/log/containers on the node running the pod. Logs can be viewed by using the kubectl logs command.
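To make the JSON format concrete, the following sketch parses one log record of the shape the json-file driver writes (keys "log", "stream", and "time"); the sample record itself is fabricated:

```python
# The Docker json-file logging driver stores each container log line as a
# JSON object with "log", "stream", and "time" keys, which is what the
# files under /var/log/containers hold. The sample record is fabricated.
import json

sample = ('{"log":"backend started on port 5000\\n",'
          '"stream":"stdout","time":"2022-01-01T12:00:00.000000000Z"}')

record = json.loads(sample)
print(record["stream"])       # which stream the container wrote to
print(record["log"].strip())  # the original log message
```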

Node Logging
• Nodes log system components to journald if systemd is enabled, otherwise they are logged to /var/log.
• Logs should be set up to rotate using logrotate, or disk space will be an issue.
• There is no clusterwide logging, but logs can be collected in a central location using third-party
providers.

Nodes also log data for system components such as the kube-proxy or kubelet. If the operating system uses
systemd, the logs are written to journald. Otherwise, logs are written to /var/log. Depending on the
installation method, log rotation may not be enabled for the Kubernetes component logs. Kubernetes
recommends using the Linux tool, logrotate, to configure log rotation, so that the logs do not fill up the disk
space. There is no clusterwide logging, but logs can be collected from the nodes and sent to a central
logging system using a third-party provider.

Third-Party Logging
• Logs contain valuable information to help diagnose issues, and to catch issues when they happen or
even before they occur.
• Monitoring and alerts should be set up to alert engineers of issues.
• Kubernetes does not have built-in monitoring, but relies on tools like the Elastic (ELK) stack or Google
Stackdriver.

Logs contain valuable information to help diagnose issues, and catch issues when they happen or even
before they occur. For this reason, logging should be sent to a location that can be monitored and action can
be taken. Monitoring and alerts are valuable tools for notifying engineers of issues or potential issues. One
example is a disk space alert that notifies a systems engineer when disk usage exceeds 85 percent. This alert
would hopefully prevent the disk usage from hitting 100 percent and causing production issues.
Kubernetes does not have built-in monitoring, but relies on tools like the Elastic stack from Elastic.co (also
known as the ELK stack) or Google Stackdriver.
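The disk-space alert described above reduces to a simple threshold check. A minimal sketch, using the 85 percent figure from the example; the usage numbers are fabricated:

```python
# Minimal threshold-alert sketch for the 85 percent disk example above.
# The usage figures are sample values, not real measurements.

def disk_alert(used_bytes, total_bytes, threshold_pct=85):
    """Return an alert string when usage reaches the threshold, else None."""
    pct = 100 * used_bytes / total_bytes
    if pct >= threshold_pct:
        return f"ALERT: disk {pct:.0f}% full (threshold {threshold_pct}%)"
    return None

print(disk_alert(90, 100))  # fires the alert
print(disk_alert(50, 100))  # None: below the threshold
```

A real monitoring system would feed this kind of check from collected node metrics and route the alert to an engineer before the disk fills.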
1. Which statement about Kubernetes full metrics is not true?
a. Custom.metrics.k8s.io is for custom metrics of Kubernetes objects.
b. Kubernetes includes a full metrics pipeline.
c. Kubernetes does not include a full metrics pipeline.
d. External.metrics.k8s.io is for custom metrics of external resources.

Discovery 24: Kubernetes Monitoring and Metrics—ELK
Introduction
There are many different methods for exploring monitoring in Kubernetes. One popular method is using the
ELK (Elasticsearch, Logstash, Kibana) stack for both metrics and logs. You will explore the potentials of
Filebeat and Metricbeat for monitoring a Kubernetes cluster. You will also use Metricbeat built-in
dashboards for Kibana and Kubernetes.

Topology

Job Aid

Device Information

Device Description FQDN/IP Address Credentials

Student Workstation Linux Ubuntu VM 192.168.10.10 student, 1234QWer

GitLab Container Registry   Container Registry   registry.git.lab   student, 1234QWer

k8s1   Kubernetes Control Plane Node   192.168.10.21   student, 1234QWer

k8s2   Kubernetes Worker   192.168.10.22   student, 1234QWer

k8s3   Kubernetes Worker   192.168.10.23   student, 1234QWer


csr1kv1 Cisco Router 192.168.10.101 student, 1234QWer

csr1kv2 Cisco Router 192.168.10.102 student, 1234QWer

csr1kv3 Cisco Router 192.168.10.103 student, 1234QWer

asa1 Firewall 192.168.10.51 student, 1234QWer

Command List
The table describes the commands that are used in this activity. The commands are listed in alphabetical
order so that you can easily locate the information that you need. Refer to this list if you need configuration
command assistance during the lab activity.

Command Description

cd directory name To change directories within the Linux file system, use the cd
command. You will use this command to enter a directory where
the lab scripts are housed. You can use tab completion to finish the
name of the directory after you start typing it.

cat file The most common use of the cat Linux command is to read the
contents of files. It is the most convenient command for this purpose in
UNIX-like operating systems.

curl url Gives the ability to access URLs from the command line

kubectl [create, delete, expose, get] Kubernetes command-line tool

ls file Provides the ability to list file or folder contents.

Task 1: Configure Monitoring and Metrics


You will discover the basics of the Filebeat and Metricbeat modules and how they interface with Kubernetes.
Metricbeat will use data from the kube-state-metrics service, which can be found at
https://github.com/kubernetes/kube-state-metrics.

Activity

Step 1 In the student workstation, find and open Visual Studio Code by clicking the Visual Studio Code icon.

Step 2 From the Visual Studio Code top navigation bar, choose Terminal > New Terminal [ctrl-shift-`].

Step 3 Navigate to the terminal section at the bottom of Visual Studio Code.

Step 4 Within the Visual Studio Code terminal, change the directory to ~/labs/lab24 using the cd ~/labs/lab24
command.

student@student-vm:$ cd ~/labs/lab24/
student@student-vm:labs/lab24$

View Filebeat and Metricbeat Kubernetes Configuration


Filebeat and Metricbeat use YAML to define their configuration. You will be using Ansible to deploy the
configuration to the servers. Before deploying, you will view the template files for Filebeat and Metricbeat
Kubernetes.

Step 5 Open and review the templates/filebeat.yml.j2 file in the playbooks directory. Do not make changes to the
file. Pay attention to the type and path configuration. The type is set to container to allow Filebeat to
read the container logs. The path is set to /var/log/containers/*.log.

student@student-vm:labs/lab24$ cd playbooks
student@student-vm:labs/lab24/playbooks$ cat templates/filebeat.yml.j2

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so


# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: container

# Change to true to enable this input configuration.


enabled: true

# Paths that should be crawled and fetched. Glob based paths.


paths:
- /var/log/containers/*.log
#- c:\programdata\elasticsearch\logs\*

# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
#exclude_lines: ['^DBG']

# Include lines. A list of regular expressions to match. It exports the lines that
are
# matching any regular expression from the list.
#include_lines: ['^ERR', '^WARN']

# Exclude files. A list of regular expressions to match. Filebeat drops the files
that
# are matching any regular expression from the list. By default, no files are
dropped.
#exclude_files: ['.gz$']

# Optional additional fields. These fields can be freely picked


# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1

### Multiline options

# Multiline can be used for log messages spanning multiple lines. This is common
# for Java Stack Traces or C-Line Continuation

# The regexp Pattern that has to be matched. The example pattern matches all lines
# starting with [
#multiline.pattern: ^\[

# Defines if the pattern set under pattern should be negated or not. Default is false.
#multiline.negate: false

# Match can be set to "after" or "before". It is used to define if lines should be
# append to a pattern that was (not) matched before or after or as long as a pattern
# is not matched based on negate.
# Note: After is the equivalent to previous and before is the equivalent to next in Logstash
#multiline.match: after
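
The .j2 extension indicates a Jinja2 template: Ansible renders any placeholder variables before writing the final filebeat.yml to the target host. As a rough sketch of that rendering step (using Python's standard-library string.Template instead of Jinja2, so the placeholder syntax differs; the log_path variable name is illustrative, not from the lab files):

```python
from string import Template

# Simplified stand-in for a Filebeat template. Jinja2 would use
# {{ log_path }}; string.Template uses $log_path -- the rendering
# idea (substitute variables, write out plain YAML) is the same.
template = Template("""\
filebeat.inputs:
- type: container
  enabled: true
  paths:
    - $log_path
""")

# Value that Ansible would supply from inventory or playbook vars;
# this matches the path used in this lab.
rendered = template.substitute(log_path="/var/log/containers/*.log")
print(rendered)
```

Ansible's template module performs this same render-then-copy operation for each host in the play.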

Step 6 Open and review the templates/kubernetes.yml.j2 file in the playbooks directory. Pay attention to the Node
and State metrics configuration.
The metricsets under the Node metrics define the data points that interest you in this activity. The default
period of 10s is recommended. Three hosts are defined, and Metricbeat needs to be installed on one of these
hosts. The bearer_token_file is also defined.
The metricsets under the State metrics are provided by the kube-state-metrics service, which is a Kubernetes
add-on. Certain parts of the dashboard require this module on top of the built-in kubelet data in the Node
metrics.

student@student-vm:labs/lab24/playbooks$ cat templates/kubernetes.yml.j2
# Module: kubernetes
# Docs: https://www.elastic.co/guide/en/beats/metricbeat/7.4/metricbeat-module-kubernetes.html

# Node metrics, from kubelet:


- module: kubernetes
metricsets:
- container
- node
- pod
- system
- volume
period: 10s
hosts: ["https://k8s1:10250","https://k8s2:10250","https://k8s3:10250"]
bearer_token_file: /home/student/.kube_bearer/token
ssl.verification_mode: none
add_metadata: true
kube_config: ~/.kube/config
# Controller Metrics, from ControllerManager
- module: kubernetes
metricsets:
- controllermanager
period: 10s
hosts: ["http://k8s1:10252","http://k8s2:10252","http://k8s3:10252"]
bearer_token_file: /home/student/.kube_bearer/token
ssl.verification_mode: none
#username: "user"
#password: "secret"

# Enriching parameters:
#add_metadata: true
#labels.dedot: true
#annotations.dedot: true
# When used outside the cluster:
#host: kube.lab
#kube_config: ~/.kube/config

# State metrics from kube-state-metrics service:


- module: kubernetes
metricsets:
- state_node
- state_deployment
- state_replicaset
- state_statefulset
- state_pod
- state_container
- state_cronjob
period: 10s
hosts: ["localhost:30880"]
add_metadata: true
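
Metricbeat authenticates to the kubelet API with the token read from bearer_token_file. Under the hood, each poll is an ordinary HTTPS call with an Authorization header against the kubelet's Summary API. A minimal sketch with Python's standard-library urllib (the host and port come from the lab topology above; the token value is a placeholder, and the request is built but not sent):

```python
import urllib.request

# Metricbeat reads this from bearer_token_file; a real token is a long
# JWT issued by Kubernetes -- this value is only a placeholder.
token = "example-service-account-token"

# kubelet stats endpoint that the Node metricsets poll
# (port 10250, as in the hosts list above).
req = urllib.request.Request(
    "https://k8s1:10250/stats/summary",
    headers={"Authorization": f"Bearer {token}"},
)

# With ssl.verification_mode: none, Metricbeat skips certificate checks;
# an equivalent urllib call would pass an unverified SSL context.
print(req.full_url, req.get_header("Authorization"))
```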

Deploy Filebeat and Metricbeat Application

Step 7 You will use Ansible to deploy Filebeat, Metricbeat, and the configurations above. You are already in the
playbooks directory.

Step 8 Deploy Filebeat by running the ansible-playbook pb.install_filebeat.yml -K command. It will prompt you
for the superuser password, which is the Kubernetes student password that is provided in the Job Aids.

Step 9 Deploy Metricbeat by running the ansible-playbook pb.install_metricbeat.yml -K command. Enter the
sudo password when prompted.

Task 2: View Logs and Metrics in Kibana


The Filebeat and Metricbeat applications that send logs and metrics to the ELK stack are now installed. You
will view the logs in Kibana and use one of Metricbeat's built-in dashboards to view the metrics.

Metricbeat and Filebeat come with a number of dashboards that you can optionally install with the sudo
metricbeat setup --dashboards and sudo filebeat setup --dashboards commands respectively. The
dashboards in this activity have already been installed for you.

Activity

Step 1 In the Student Workstation, open a web browser and navigate to localhost:5601/app/kibana.
From the Kibana home page, choose Discover to view items that match the selected index pattern, search
and filter results, and view events on a timeline. If not already set, set the index pattern to filebeat-*.
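
Kibana's Discover page reads from the Elasticsearch indices that match the filebeat-* pattern. The same documents can be queried directly over Elasticsearch's REST API. A sketch assuming Elasticsearch on its default port 9200 (the lab does not state where Elasticsearch listens, so treat the URL as illustrative); the request is built but not sent:

```python
import json
import urllib.request

# A match_all search against any filebeat-* index -- the same documents
# that Discover lists under that index pattern.
query = {"query": {"match_all": {}}, "size": 5}

req = urllib.request.Request(
    "http://localhost:9200/filebeat-*/_search",  # assumed default ES port
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# Sending would be: urllib.request.urlopen(req) -- skipped here because
# the lab environment is not available in this sketch.
print(req.full_url, req.method)
```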

Enable and View the Kubernetes Overview ECS Dashboard

Step 2 In the left navigation menu, choose Dashboards to get to a collection of visualizations, searches, and maps.
These metrics are typically displayed in real time.

Step 3 On the Dashboards page, search for Kubernetes, then choose [Metricbeat Kubernetes] Overview ECS.
Review the Nodes, Deployments, and Desired Pods.

Create a New Deployment Using the kubectl Command and View the
Changes on the Dashboard

Step 4 Use the kubectl run net-inventory --image=cisco-devops/container/net_inventory --port=5000 command
to deploy the net-inventory app. Note that this use of the command is deprecated; the recommended method is
to create a deployment using a manifest file, as done in Discovery Lab 23. For the purpose of this lab, you can
ignore the warning.

student@student-vm:labs/lab24/playbooks$ kubectl run net-inventory --image=cisco-devops/container/net_inventory --port=5000
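
The deprecated kubectl run shortcut corresponds to a Deployment object. As a sketch of the recommended manifest-based approach, the equivalent spec can be built as a Python dict and serialized (Kubernetes accepts JSON manifests as well as YAML; the field names follow the apps/v1 Deployment schema, and the image and port mirror the command above):

```python
import json

# Equivalent Deployment spec for the kubectl run command above.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "net-inventory"},
    "spec": {
        "replicas": 1,
        "selector": {"matchLabels": {"app": "net-inventory"}},
        "template": {
            "metadata": {"labels": {"app": "net-inventory"}},
            "spec": {
                "containers": [{
                    "name": "net-inventory",
                    "image": "cisco-devops/container/net_inventory",
                    "ports": [{"containerPort": 5000}],
                }]
            },
        },
    },
}

# Serialize the manifest; written to a file, it could be applied with
# `kubectl apply -f net-inventory.json`.
manifest = json.dumps(deployment, indent=2)
print(manifest)
```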

Step 5 Refresh the Kibana Dashboard browser page and examine the changes.
As you can see, the number of Desired Pods increased.

Summary
You reviewed the Kubernetes logs and metrics in Kibana using the Filebeat and Metricbeat applications. Both
the logs and the metrics give you the tools to quickly examine your Kubernetes cluster and identify
potential problems. Several other tools and systems also provide log and metrics visualization.

Summary Challenge
1. Which option is not a function of metrics in Kubernetes?
a. low-level usage summaries from the operating system
b. can report total capacity or load
c. raw measurements of resource usage
d. logging data
2. Which two statements about the Metrics API are true? (Choose two.)
a. Kubernetes gives the average metrics over time.
b. Metrics history can be accessed at /apis/metrics/historical.
c. Kubernetes does not store the metrics.
d. The Metrics API uses the Kubernetes API model to allow RBAC controls.
e. The Metrics API requires Heapster to be deployed.
3. Which two statements about the metrics server are true? (Choose two.)
a. It aggregates the resource usage for the cluster.
b. It collects metrics from the Summary API on kubelet.
c. It collects metrics from the pods Summary API.
d. It is a core component of Kubernetes.
e. It aggregates the resource usage of an individual node.
4. What are two examples of full metrics pipelines? (Choose two.)
a. ELK Stack
b. Google Stackdriver
c. Elastic Stack
d. Fluentd
e. Prometheus
5. Which option identifies the location to which containers write by default?
a. Docker logging driver
b. /var/log on the node
c. /var/log on the pod
d. stdout and stderr
6. Which statement about node logging is not true?
a. Logging goes to journald if the system is enabled on the operating system.
b. Logging goes to the Kubernetes logging server.
c. Logging falls back to /var/logs.
d. Logging should be set up with logrotate.
7. Which two logging options are available for central log collection? (Choose two.)
a. Google Stackdriver
b. Kubernetes Logging Server
c. Prometheus Log Writer
d. Kubernetes etcd
e. Elastic Stack

Answer Key
Kubernetes Resource Metrics Pipeline
1. A

Kubernetes Full Metrics Pipeline and Logging


1. B

Summary Challenge
1. D
2. C, D
3. A, B
4. B, E
5. D
6. B
7. A, E

