

INDEX
LIST OF EXPERIMENTS

S.NO DATE NAME OF THE EXPERIMENT PAGE NO. MARKS AWARDED REMARKS

1 Software engineering and Agile software development
2 Development & Testing with Agile: Extreme Programming
3 DevOps adoption in projects
4 Implementation of CI/CD with Java and open source stack
(Configure the web application and version control using Git commands and version control operations.)
5 Implementation of CI/CD with Java and open source stack
(Configure a static code analyzer which will perform static analysis of the web application code and identify the coding practices that are not appropriate. Configure the profiles and dashboard of the static code analysis tool.)
6 Implementation of CI/CD with Java and open source stack
(Write a build script to build the application using a build automation tool like Maven. Create a folder structure that will run the build script and invoke the various software development build stages. This script should invoke the static analysis tool and unit test cases and deploy the application to a web application server like Tomcat.)
7 Implementation of CI/CD with Java and open source stack
(Configure the Jenkins tool with the required paths, path variables, users and pipeline views.)
8 Implementation of CI/CD with Java and open source stack
(Configure the Jenkins pipeline to call the build script jobs and configure it to run whenever there is a change made to an application in the version control system. Make a change to the background colour of the landing page of the web application and check if the configured pipeline runs.)
9 Implementation of CI/CD with Java and open source stack
(Create a pipeline view of the Jenkins pipeline used in Exercise 8. Configure it with user-defined messages.)
10 Implementation of CI/CD with Java and open source stack
(In the configured Jenkins pipeline created in Exercises 8 and 9, implement quality gates for static analysis of code.)
11 Implementation of CI/CD with Java and open source stack
(In the configured Jenkins pipeline created in Exercises 8 and 9, implement quality gates for unit testing.)
12 Course end assessment
(In the configured Jenkins pipeline created in Exercises 8 and 9, implement quality gates for code coverage.)
1. Software engineering and Agile software development
Software Development Life Cycle
Software Development Life Cycle is the application of standard business practices to building software applications.
It’s typically divided into six to eight steps: Planning, Requirements, Design, Build, Document, Test, Deploy,
Maintain. Some project managers will combine, split, or omit steps, depending on the project’s scope. These are
the core components recommended for all software development projects.
SDLC is a way to measure and improve the development process. It allows a fine-grain analysis of each step of the
process. This, in turn, helps companies maximize efficiency at each stage. As computing power increases, it places
a higher demand on software and developers. Companies must reduce costs, deliver software faster, and meet or
exceed their customers’ needs. SDLC helps achieve these goals by identifying inefficiencies and excess costs and fixing them so the process runs smoothly.

How the Software Development Life Cycle Works


The Software Development Life Cycle simply outlines each task required to put together a software application.
This helps to reduce waste and increase the efficiency of the development process. Monitoring also ensures the
project stays on track, and continues to be a feasible investment for the company.
Many companies will subdivide these steps into smaller units. Planning might be broken into technology research,
marketing research, and a cost-benefit analysis. Other steps can merge with each other. The Testing phase can
run concurrently with the Development phase, since developers need to fix errors that occur during testing.

The Seven Phases of the SDLC

1. Planning
In the Planning phase, project leaders evaluate the terms of the project. This includes calculating labor and material
costs, creating a timetable with target goals, and creating the project’s teams and leadership structure.
Planning can also include feedback from stakeholders. Stakeholders are anyone who stands to benefit from the
application. Try to get feedback from potential customers, developers, subject matter experts, and sales reps.
Planning should clearly define the scope and purpose of the application. It plots the course and provisions the team
to effectively create the software. It also sets boundaries to help keep the project from expanding or shifting from its
original purpose.

2. Define Requirements
Defining requirements, considered part of planning, determines what the application is supposed to do and what it needs. For example, a social media application would require the ability to connect with a friend. An
inventory program might require a search feature.
Requirements also include defining the resources needed to build the project. For example, a team might develop
software to control a custom manufacturing machine. The machine is a requirement in the process.

3. Design and Prototyping


The Design phase models the way a software application will work. Some aspects of the design include:
a. Architecture – Specifies programming language, industry practices, overall design, and use of any templates
or boilerplate
b. User Interface – Defines the ways customers interact with the software, and how the software responds to
input
c. Platforms – Defines the platforms on which the software will run, such as Apple, Android, Windows version,
Linux, or even gaming consoles
d. Programming – Not just the programming language, but including methods of solving problems and performing
tasks in the application
e. Communications – Defines the methods that the application can communicate with other assets, such as a
central server or other instances of the application
f. Security – Defines the measures taken to secure the application, and may include SSL traffic encryption,
password protection, and secure storage of user credentials
Prototyping can be a part of the Design phase. A prototype is like one of the early versions of software in the
Iterative software development model. It demonstrates a basic idea of how the application looks and works. This
“hands-on” design can be shown to stakeholders. Use feedback to improve the application. It’s less expensive to change a prototype than to rewrite code once the Development phase is underway.

4. Software Development
This is the actual writing of the program. A small project might be written by a single developer, while a large project
might be broken up and worked on by several teams. Use an Access Control or Source Code Management application
in this phase. These systems help developers track changes to the code. They also help ensure compatibility
between different team projects and to make sure target goals are being met.
The coding process includes many other tasks. Many developers need to brush up on skills or work as a team.
Finding and fixing errors and glitches is critical. Tasks often hold up the development process, such as waiting for
test results or compiling code so an application can run. SDLC can anticipate these delays so that developers can
be tasked with other duties.
Software developers appreciate instructions and explanations. Documentation can be a formal process, including writing a user guide for the application. It can also be informal, like comments in the source code that explain why a
developer used a certain procedure. Even companies that strive to create software that’s easy and intuitive benefit
from the documentation.
Documentation can be a quick guided tour of the application’s basic features that display on the first launch. It can
be video tutorials for complex tasks. Written documentation like user guides, troubleshooting guides, and FAQ’s
help users solve problems or technical questions.

5. Testing
It’s critical to test an application before making it available to users. Much of the testing can be automated, like
security testing. Other testing can only be done in a specific environment – consider creating a simulated
production environment for complex deployments. Testing should ensure that each function works correctly.
Different parts of the application should also be tested to work seamlessly together, and performance should be tested to reduce hangs or lags in processing. The testing phase helps reduce the number of bugs and glitches that users encounter. This leads to higher user satisfaction and a better usage rate.
6. Deployment
In the deployment phase, the application is made available to users. Many companies prefer to automate the
deployment phase. This can be as simple as a payment portal and download link on the company website. It could
also be downloading an application on a smartphone.

Deployment can also be complex. Upgrading a company-wide database to a newly-developed application is one
example. Because there are several other systems used by the database, integrating the upgrade can take more
time and effort.

7. Operations and Maintenance


At this point, the development cycle is almost finished. The application is done and being used in the field. The
Operation and Maintenance phase is still important, though. In this phase, users discover bugs that weren’t found
during testing. These errors need to be resolved, which can spawn new development cycles.

In addition to bug fixes, models like Iterative development plan additional features in future releases. For each new
release, a new Development Cycle can be launched.

Software Process Model


A software process model is an abstraction of the software development process. The models specify the stages
and order of a process. So, think of this as a representation of the order of activities of the process and
the sequence in which they are performed.
A model will define the following:
 The tasks to be performed
 The input and output of each task
 The pre and post conditions for each task
 The flow and sequence of each task
The goal of a software process model is to provide guidance for controlling and coordinating the tasks to achieve
the end product and objectives as effectively as possible.

There are many kinds of process models for meeting different requirements. We refer to these as SDLC
models (Software Development Life Cycle models). The most popular and important SDLC models are as follows:
 Waterfall model
 V model
 Incremental model
 RAD model
 Agile model
 Iterative model
 Prototype model
 Spiral model
Factors in choosing a software process
Choosing the right software process model for your project can be difficult. If you know your requirements well, it
will be easier to select a model that best matches your needs. You need to keep the following factors in mind when
selecting your software process model:
Project requirements: Before you choose a model, take some time to go through the project requirements and
clarify them alongside your organizations or team’s expectations. Will the user need to specify requirements in
detail after each iterative session? Will the requirements change during the development process?
Project size: Consider the size of the project you will be working on. Larger projects mean bigger teams, so you’ll
need more extensive and elaborate project management plans.
Project complexity: Complex projects may not have clear requirements. The requirements may change often, and
the cost of delay is high. Ask yourself if the project requires constant monitoring or feedback from the client.
Cost of delay: Is the project highly time-bound with a huge cost of delay, or are the timelines flexible?
Customer involvement: Do you need to consult the customers during the process? Does the user need to
participate in all phases?
Familiarity with technology: This involves the developers’ knowledge and experience with the project domain,
software tools, language, and methods needed for development.
Project resources: This involves the amount and availability of funds, staff, and other resources.

Types of software process models


As we mentioned before, there are multiple kinds of software process models that each meet different
requirements. Below, we will look at the top seven types of software process models that you should know.

a. Waterfall Model
The waterfall model is a sequential, plan-driven process where you must plan and schedule all your activities
before starting the project. Each activity in the waterfall model is represented as a separate phase arranged in
linear order.
It has the following phases:
 Requirements
 Design
 Implementation
 Testing
 Deployment
 Maintenance
Each of these phases produces one or more documents that need to be approved before the next phase begins.
However, in practice, these phases are very likely to overlap and may feed information to one another.
The software process isn’t linear, so the documents produced may need to be modified to reflect changes.
The waterfall model is easy to understand and follow, and it doesn’t require a lot of customer involvement after the specification is done. However, it is inflexible: it can’t adapt to changes, and there is no way to see or try the software until the last phase.
The waterfall model has a rigid structure, so it should be used in cases where the requirements are understood
completely and unlikely to radically change.

b. V Model
The V model (Verification and Validation model) is an extension of the waterfall model. All the requirements are
gathered at the start and cannot be changed. You have a corresponding testing activity for each stage. For every
phase in the development cycle, there is an associated testing phase.
The corresponding testing phase of each development phase is planned in parallel.
The V model is highly disciplined, easy to understand, and makes project management easier, but it isn’t good for complex projects or projects with unclear or changing requirements. Its discipline and stage-by-stage testing make the V model a good choice for software where downtimes and failures are unacceptable.

c. Incremental Model
The incremental model divides the system’s functionality into small increments that are delivered one after the other
in quick succession. The most important functionality is implemented in the initial increments.
The subsequent increments expand on the previous ones until everything has been updated and implemented.
Incremental development is based on developing an initial implementation, exposing it to user feedback, and
evolving it through new versions. The process’s activities are interwoven with feedback.
Each iteration passes through the requirements, design, coding, and testing stages.
The incremental model lets stakeholders and developers see results with the first increment. If the stakeholders
don’t like anything, everyone finds out a lot sooner. It is efficient as the developers only focus on what is important
and bugs are fixed as they arise, but you need a clear and complete definition of the whole system before you start.
The incremental model is great for projects that have loosely-coupled parts and projects with complete and clear
requirements.

d. Iterative Model
The iterative development model develops a system by building small portions of all the features. This helps to meet the initial scope quickly and release it for feedback.
In the iterative model, you start off by implementing a small set of the software requirements. These are
then enhanced iteratively in the evolving versions until the system is completed. This process model starts with part
of the software, which is then implemented and reviewed to identify further requirements.
Like the incremental model, the iterative model allows you to see the results at the early stages of development.
This makes it easy to identify and fix any functional or design flaws. It also makes it easier to manage risk and
change requirements.
The deadline and budget may change throughout the development process, especially for large complex projects.
The iterative model is a good choice for large software that can be easily broken down into modules.

e. RAD Model
The Rapid Application Development (RAD) model is based on iterative development and prototyping with little
planning involved. You develop functional modules in parallel for faster product delivery. It involves the following
phases:
1. Business modelling
2. Data modelling
3. Process modelling
4. Application generation
5. Testing and turnover
The RAD concept focuses on gathering requirements using focus groups and workshops, reusing software
components, and informal communication.
The RAD model accommodates changing requirements, reduces development time, and increases the reusability
of components. But it can be complex to manage. Therefore, the RAD model is great for systems that need to be
produced in a short time and have known requirements.

f. Spiral Model
The spiral model is a risk driven iterative software process model. The spiral model delivers projects in loops.
Unlike other process models, its steps aren’t activities but phases for addressing whatever problem has the
greatest risk of causing a failure.
It was designed to combine the best features of the waterfall model with risk assessment.
You have the following phases for each cycle:
1. Address the highest-risk problem and determine the objective and alternate solutions
2. Evaluate the alternatives and identify the risks involved and possible solutions
3. Develop a solution and verify if it’s acceptable
4. Plan for the next cycle

You develop the concept in the first few cycles, and then it evolves into an implementation. Though this model is
great for managing uncertainty, it can be difficult to have stable documentation. The spiral model can be used for
projects with unclear needs or projects still in research and development.

g. Agile model

The agile process model encourages continuous iterations of development and testing. Each incremental part is
developed over an iteration, and each iteration is designed to be small and manageable so it can be completed
within a few weeks.

Each iteration focuses on implementing a small set of features completely. It involves customers in the
development process and minimizes documentation by using informal communication.

Agile development considers the following:

 Requirements are assumed to change


 The system evolves over a series of short iterations
 Customers are involved during each iteration
 Documentation is done only when needed

Though agile provides a very realistic approach to software development, it isn’t great for complex projects. It can
also present challenges during transfers as there is very little documentation. Agile is great for projects
with changing requirements.

Some commonly used agile methodologies include:

 Scrum: One of the most popular agile models, Scrum consists of iterations called sprints. Each sprint is
two to four weeks long and is preceded by planning. You cannot make changes after the sprint activities
have been defined.
 Extreme Programming (XP): With Extreme Programming, an iteration can last one to two weeks. XP uses
pair programming, continuous integration, test-driven development and test automation, small releases, and
simple software design.
 Kanban: Kanban focuses on visualization, and if any iterations are used they are kept very short. You use a Kanban board that gives a clear representation of all project activities, their number, the people responsible, and their progress.

The Agile Lifecycle and Its Methodologies
Agile software development (or system development) methodologies
include Scrum, Kanban, Scrumban, Disciplined Agile 2.0, adaptive software development, Agile modelling, extreme
programming (XP), feature driven development (FDD), and Lean software development.
The goal of each Agile method is to adapt to change and deliver working software as quickly as possible. Each
methodology has slight variations in the phases of software development. Furthermore, even though the goal is the
same, each team’s process flow may vary depending on each specific project or situation. As an example, the full
Agile software development lifecycle includes the concept, inception, construction, release, production, and
retirement phases.
Agile software development is a more flexible approach than the Waterfall model’s strictly set phases. As a result,
many teams are moving toward Agile’s adaptive methodology and moving away from the predictive Waterfall
methodology when developing software.
The conventional Waterfall development method follows strict phases, sticking to the original requirements and
design plan created at the beginning of the project. A project manager spends time negotiating milestones,
features, resources, working at length in the planning stages of a project, usually developing a full-blown project
plan that details how the work will be moved through many gates to completion.
Customers finalize requirements before development begins and then a lengthy development process occurs, with
the project manager tracking every movement of the project through each handoff and finally on to delivery. If
everything goes well, this process produces an on-time, on-budget release.
But the chief drawbacks to this approach are well-documented: it is not responsive to change and it takes a long
time to deliver working software. When technology forms the field of play and drives every change, a six-month (or
longer) release cycle, with requirements chiselled in stone, does not meet the business need.
The history behind Agile software development is one of frustration with the traditional waterfall methodology. Agile
is designed to accommodate change and the need for faster software development. The project leader typically
facilitates the work of the development team, eliminates bottlenecks, and helps the team stay focused in order to
deliver software iterations on a regular basis. It is less about milestones than it is about hours, feature selection,
prioritization, and meetings.
Unlike the Waterfall model, the development team ultimately decides at the beginning of a sprint (or iteration) what
can be accomplished in the timeframe and sets out to build a series of features, delivering working software that
can be installed in a production environment at the end of the sprint. Since Agile software development methods
(such as Dynamic Systems Development Method- DSDM) are flexible, most are suitable for method tailoring –
where development teams can adapt the flow to meet the needs of the product.

The Agile Process Flow


The Agile process flow includes concept, initiation, iteration or construction, release, production, and retirement, as
described below:
1. Concept: Envision and prioritize projects.
2. Inception: Identify team members, allocate funds, and discuss initial environments and requirements.
3. Iteration or Construction: The development team works to deliver working software based on iteration
requirements and feedback.
4. Release: Quality assurance (QA) testing, internal and external training, documentation development, and the
iteration is put into production.
5. Production: Ongoing software support.
6. Retirement: End-of-life activities, including customer notification and migration.
There may be many projects operating simultaneously, multiple sprints/iterations running on different product lines,
and a variety of customers, both external and internal, with a range of business needs.

Agile Software Development Workflow


The Agile software development lifecycle is dominated by the iterative process. Each iteration delivers the next
piece of the development puzzle: software and supporting elements (e.g. documentation) available for use by
customers, until the final product is complete. Each iteration is usually two to four weeks in length and has a fixed
completion time. The iteration process is methodical and the scope of each iteration is only as broad as the allotted
time allows.
Multiple iterations will take place during the Agile software development lifecycle and each follows its own workflow.
During an iteration, customers and business stakeholders provide feedback to ensure that the features meet their
needs.
A typical iteration process flow can be visualized as follows:
 Requirements: Define the requirements for the iteration based on the product backlog, sprint backlog, and
customer and stakeholder feedback.
 Development: Design and develop software based on defined requirements.
 Testing: Quality assurance (QA) testing, internal and external training, documentation development.
 Delivery: Integrate and deliver the working iteration into production.
 Feedback: Review customer and stakeholder feedback and work it into the requirements of the next iteration.

Agile Software Development Workflow Diagram

While you may feed additional features into the product backlog throughout the project, the rest of the process
repeats until the product backlog has been cleared. As a result, the Agile software development process flow is a
loop rather than a linear process.

Agile Scrum Workflow


The flow of work in Scrum is directed via a series of meetings, as described below:
Sprint planning is used to choose the work that will be incorporated into an upcoming Sprint based on the product
backlog.
A daily Scrum is a short meeting where each participant answers the following questions:
 What work did you do yesterday?
 What work will you do today?
 What obstacles are in your way?
The Scrum master, who manages the meetings, uses the data gathered to update the burndown chart and look for
ways to remove the obstacles that were identified.
A sprint review is a meeting at the end of each Sprint to evaluate what was completed and to review the product
backlog and determine what still needs to be done. Reviews focus on the product.
Finally, the sprint retrospective meeting at the end of each Sprint covers what worked well and what can be improved. Retrospectives focus on the process.

Making the Agile Process Work for You
As with any methodology, there are advantages and disadvantages. The Agile method is more suitable in situations where customers and project stakeholders
are available to provide input, functional portions of software are needed quickly, flexibility is desired to
accommodate changing requirements, and the team is co-located and able to collaborate effectively.

As with any change, integrating Agile processes into your business can be overwhelming. Here are four activities
that will help support the adoption of Agile workflow:
 Daily Meetings: Host consistent or daily stand-up meetings to maintain open communication, hold workers
accountable, and keep each iteration moving forward.
 Live Demonstrations: Deliver live demonstrations of each iteration’s final product to show progress.
 Share Feedback: Receive feedback from stakeholders and customers and share it with the entire team before
the next iteration begins.
 Remain Agile: Make changes to your process based on feedback to ensure each iteration improves the last.


2. Development & Testing with Agile: Extreme Programming
With software engineering existing in such a fast-paced environment, traditional project management approaches
are no longer viable. That means that IT professionals must find new ways to handle frequently changing
development tasks.
Sharing this idea and focusing on the existing incremental development techniques, 17 software specialists
introduced the Agile project management philosophy in 2001. Principles of flexible, fast, and collaboration-centred
software development were outlined in the Agile Manifesto.
Extreme Programming (XP) is one of the numerous Agile frameworks applied by IT companies. But its key feature
— emphasis on technical aspects of software development — distinguishes XP from the other approaches.
Software engineer Kent Beck introduced XP in the 1990s with the goal of finding ways to write high-quality software quickly and to adapt to customers’ changing requirements. In 1999, he refined the XP approach in the book Extreme Programming Explained: Embrace Change.
XP is a set of engineering practices. Developers have to go beyond their capabilities while performing these
practices. That’s where the “extreme” in the framework’s title comes from. To get a better understanding of these
practices, we’ll start with describing XP’s lifecycle and the roles engaged in the process.
Extreme Programming takes the effective principles and practices to extreme levels.
 Code reviews are effective as the code is reviewed all the time.
 Testing is effective as there is continuous regression and testing.
 Design is effective as everybody needs to do refactoring daily.
 Integration testing is important, as teams integrate and test several times a day.
 Short iterations are effective, with the planning game driving release planning and iteration planning.

The process and roles of extreme programming


The XP framework normally involves 5 phases or stages of the development process that iterate continuously:
1. Planning, the first stage, is when the customer meets the development team and presents the requirements in
the form of user stories to describe the desired result. The team then estimates the stories and creates a
release plan broken down into iterations needed to cover the required functionality part after part. If one or
more of the stories can’t be estimated, so-called spikes can be introduced which means that further research is
needed.
2. Designing is actually a part of the planning process, but can be set apart to emphasize its importance. It’s
related to one of the main XP values that we’ll discuss below: simplicity. A good design brings logic and structure to the system and allows the team to avoid unnecessary complexities and redundancies.
3. Coding is the phase during which the actual code is created by implementing specific XP practices such as
coding standards, pair programming, continuous integration, and collective code ownership (the entire list is
described below).
4. Testing is the core of extreme programming. It is the regular activity that involves both unit tests (automated
testing to determine if the developed feature works properly) and acceptance tests (customer testing to verify
that the overall system is created according to the initial requirements).
5. Listening is all about constant communication and feedback. The customers and project managers are
involved to describe the business logic and value that is expected.

Such a development process entails the cooperation between several participants, each having his or her own
tasks and responsibilities. Extreme programming puts people in the centre of the system, emphasizing the value
and importance of such social skills as communication, cooperation, responsiveness, and feedback. So, these roles
are commonly associated with XP:
1. Customers are expected to be heavily engaged in the development process by creating user stories,
providing continuous feedback, and making all the necessary business decisions related to the project.
2. Programmers or developers are the team members that actually create the product. They are responsible for
implementing user stories and conducting user tests (sometimes a separate Tester role is set apart). Since XP
is usually associated with cross-functional teams, the skill set of such members can be different.
3. Trackers or managers link customers and developers. It’s not a required role and can be performed by one of
the developers. These people organize the meetups, regulate discussions, and keep track of important
progress KPIs.
4. Coaches can be included in the teams as mentors to help with understanding the XP practices. It’s usually an
outside assistant or external consultant who is not involved in the development process, but has used XP
before and so can help avoid mistakes.
Values and principles of extreme programming
In the late 1990s, Kent Beck summarized a set of values and principles that describe extreme programming and lead to more effective cooperation within the team and, ultimately, higher product quality.

Values of extreme programming
XP has simple rules that are based on 5 values to guide the teamwork:
1. Communication. Everyone on a team works jointly at every stage of the project.
2. Simplicity. Developers strive to write simple code bringing more value to a product, as it saves time and effort.
3. Feedback. Team members deliver software frequently, get feedback about it, and improve a product
according to the new requirements.
4. Respect. Every person assigned to a project contributes to a common goal.
5. Courage. Programmers objectively evaluate their own results without making excuses and are always ready
to respond to changes.
These values represent a specific mindset of motivated team players who do their best on the way to
achieving a common goal. XP principles derive from these values and reflect them in more concrete ways.
Principles of extreme programming
Most researchers identify 5 XP principles:
1. Rapid feedback. Team members understand the given feedback and react to it right away.
2. Assumed simplicity. Developers need to focus on the job that is important at the moment and follow YAGNI
(You Ain’t Gonna Need It) and DRY (Don’t Repeat Yourself) principles.
3. Incremental changes. Small changes made to a product step by step work better than big ones made at
once.
4. Embracing change. If a client thinks a product needs to be changed, programmers should support this
decision and plan how to implement new requirements.
5. Quality work. A team that works well makes a valuable product and feels proud of it.
Having discussed the main values and principles of XP, let’s take a closer look at the practices inherent in this
framework.
Extreme programming practices
The practices of XP are a set of specific rules and methods that distinguish it from other methodologies. When
used in conjunction, they reinforce each other, help mitigate the risks of the development process, and lead to the
expected high-quality result. XP suggests using 12 practices while developing software which can be clustered into
four groups.
Test-Driven Development
Is it possible to write clear code quickly? The answer is yes, according to XP practitioners. The quality of software derives from short development cycles that, in turn, allow for frequent feedback. And valuable feedback comes from good testing. XP teams practice the test-driven development (TDD) technique, which entails writing an automated unit test before the code itself. Under this approach, every piece of code must pass its tests before release, so software engineers focus on writing code that accomplishes the needed function. This is how TDD lets programmers use immediate feedback to produce reliable software.
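As a minimal sketch of the test-first rhythm (the Calculator class and its test are hypothetical examples, not from the course material), the developer writes the failing test before the production code exists:

import junit.framework.TestCase;

// Step 1: write the test first. It fails (or does not even compile)
// until the production class below is written.
public class CalculatorTest extends TestCase {
    public void testAddTwoNumbers() {
        Calculator calc = new Calculator();
        assertEquals(5, calc.add(2, 3)); // red first, then green
    }
}

// Step 2: write just enough production code to make the test pass.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

Once the test passes, the developer refactors and repeats the cycle for the next small piece of behaviour.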
The Planning Game
This is a meeting that occurs at the beginning of an iteration cycle. The development team and the customer get
together to discuss and approve a product’s features. At the end of the planning game, developers plan for the
upcoming iteration and release, assigning tasks for each of them.
On-site Customer
As we already mentioned, according to XP, the end customer should fully participate in development. The customer
should be present all the time to answer team questions, set priorities, and resolve disputes if necessary.
Pair Programming
This practice requires two programmers to work jointly on the same code. While the first developer focuses on
writing, the other one reviews code, suggests improvements, and fixes mistakes along the way. Such teamwork
results in high-quality software and faster knowledge sharing, but takes about 15 percent more time. In this regard, it’s more reasonable to try pair programming on long-term projects.
Code Refactoring
To deliver business value with well-designed software in every short iteration, XP teams also use refactoring. The
goal of this technique is to continuously improve code. Refactoring is about removing redundancy, eliminating
unnecessary functions, increasing code coherency, and at the same time decoupling elements. “Keep your code clean and simple, so you can easily understand and modify it when required” would be the advice of any XP team member.
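As a small, hypothetical illustration (the fare example is not from the course material), a refactoring removes duplication without changing behaviour:

// Before refactoring: the same discount logic is duplicated in two branches.
double fare(int age, double base) {
    if (age < 12) {
        return base * 0.5;
    } else if (age >= 65) {
        return base * 0.5;
    } else {
        return base;
    }
}

// After refactoring: the duplication is removed and the intent is named.
double fare(int age, double base) {
    boolean discounted = age < 12 || age >= 65;
    return discounted ? base * 0.5 : base;
}

The observable behaviour is identical; only the internal structure improves.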

Continuous Integration
Developers always keep the system fully integrated. XP teams take iterative development to another level because they commit and integrate code multiple times a day. XP practitioners understand the
importance of communication. Programmers discuss which parts of the code can be re-used or shared. This way,
they know exactly what functionality they need to develop. The policy of shared code helps eliminate integration
problems. In addition, automated testing allows developers to detect and fix errors before deployment.
Small Releases
This practice suggests releasing the MVP quickly and further developing the product by making small and
incremental updates. Small releases allow developers to frequently receive feedback, detect bugs early, and
monitor how the product works in production. One of the methods of doing so is the continuous integration practice
(CI) we mentioned before.
Simple Design
The best design for software is the simplest one that works. If any complexity is found, it should be removed. The
right design should pass all tests, have no duplicate code, and contain the fewest possible methods and classes. It
should also clearly reflect the programmer’s intent.
XP practitioners highlight that chances to simplify design are higher after the product has been in production for
some time. Don Wells advises writing code for those features you plan to implement right away rather than writing it
in advance for other future features: “The best approach is to create code only for the features you are
implementing while you search for enough knowledge to reveal the simplest design. Then refactor incrementally to
implement your new understanding and design.”
Coding Standards
A team must have common sets of coding practices, using the same formats and styles for code writing.
Application of standards allows all team members to read, share, and refactor code with ease, track who worked on
certain pieces of code, as well as make the learning faster for other programmers. Code written according to the
same rules encourages collective ownership.
Collective Code Ownership
This practice declares a whole team’s responsibility for the design of a system. Each team member can review and
update code. Developers that have access to code won’t get into a situation in which they don’t know the right
place to add a new feature. The practice helps avoid code duplication. The implementation of collective code
ownership encourages the team to cooperate more and feel free to bring new ideas.
System Metaphor
System metaphor stands for a simple design that has a set of certain qualities. First, a design and its structure must
be understandable to new people. They should be able to start working on it without spending too much time examining specifications. Second, the naming of classes and methods should be coherent. Developers should aim
at naming an object as if it already existed, which makes the overall system design understandable.
40-Hour Week
XP projects require developers to work fast, be efficient, and sustain the product’s quality. To adhere to these
requirements, they should feel well and rested. Keeping the work-life balance prevents professionals from burnout.
In XP, working hours must not exceed 45 hours a week. Overtime in one week is possible only if there will be none the week after.
Advantages and disadvantages of XP
XP practices have been debated for decades, as its approach and methods are rather controversial in a
number of aspects and can’t be applied in just any project. Here, we’ll try to define the pros and cons of XP
methodology.

Extreme programming advantages


So, the XP framework can be beneficial and help reduce development time and costs for the following reasons:
 Continuous testing and refactoring practices help create stable well-performing systems with minimal
debugging;
 Simplicity value implies creating a clear, concise code that is easy to read and change in the future if needed;
 The minimalistic iterative approach to development ensures that the workable results can be delivered very
soon and only necessary features are built;
 Documentation is reduced as bulky requirements documents are substituted by user stories;
 No or very little overtime is practiced;
 Constant communication provides a high level of visibility and accountability and allows all team members to
keep up with the project progress;
 Pair programming has proven to result in higher-quality products with fewer bugs; most research participants
also reported enjoying such collaboration more and feeling more confident about their job;
 Customer engagement ensures their satisfaction as their participation in the development and testing process
can directly influence the result, getting them exactly what they wanted.
Extreme programming disadvantages
On the other hand, XP has a number of disadvantages that have to be considered when deciding on which
framework to choose for your next project:
 In many instances, the customer has no clear picture of the end result, which makes it almost unrealistic to
accurately estimate scope, cost, and time;
 Regular meetings with customers often take a great deal of time that could instead be spent on actual code
writing;
 Documentation can be scarce and lack clear requirements and specifications, leading to project scope creep;
 The rapid transition from traditional methods of software development to extreme programming demands
significant cultural and structural changes;
 Pair programming takes more time and doesn’t always work right due to the human factor and character
incompatibility;
 XP works best with collocated teams and customers present in person to conduct face-to-face meetings,
limiting its application with distributed teams;
 Sometimes customers have neither the desire, time, nor expertise to participate in product development.
Considering tight deadlines, it can become a source of stress as either no valuable feedback is provided, or a
non-technical representative attempts to manage tech specialists with little or no knowledge of the process;
 Some authors also mention overfocusing on code over design, lack of quality assurance, code duplication, and
poor results with inexperienced developers.
Any company can apply the XP principles in its projects; however, it’s important to understand both the good
and the bad sides. Read on to find out how XP is different from other methodologies and when applying its
techniques would be the best choice.
Comparison of XP to other frameworks

When to use XP
Now that we discussed the XP methodology pros and cons and identified its place among other agile frameworks,
we can talk about the cases when it’s applicable. It’s important to make sure a company’s size, structure, and
expertise, as well as the staff’s knowledge base allow for applying XP practices. These are the factors to consider.
Highly-adaptive development. Some systems don’t have a fixed set of functional requirements and imply frequent changes. XP was designed to help development teams adapt to fast-changing requirements.
Risky projects. Teams applying XP practices are more likely to avoid problems connected with working on a
new system, especially when a customer sets strict deadlines for a project. Additionally, a high level of customer
engagement reduces the risk of their not accepting the end product.

Small teams. XP practices are efficient for teams that don’t exceed 12 people. Managing such groups is usually
easier, communication is more efficient, and it takes less time to conduct meetings and brainstorming sessions.
Automated testing. Another factor that can influence the choice of XP is the developers’ ability to create and
run unit tests, as well as availability of the necessary testing tools.
Readiness to accept new culture and knowledge. XP is different from traditional approaches to software
development, and the way some of its practices should be implemented might not be obvious. So, it’s important
that your organization and team members are ready to embrace change. It’s also worth inviting an experienced
coach if you don’t have previous involvement with XP.
Customer participation. As XP requires customers, developers and managers to work side-by-side, make sure
your client is always available to provide input until a project ends.
Agility principles are becoming increasingly popular as they prove their effectiveness. Even though extreme
programming is not the most widespread methodology, it offers a lot of sensible practices that can benefit
software development and are worth considering for implementation in your projects.
Test Driven Development
• Write test first, then write code to make it work
• A form of Design by Contract
• Every test must pass at every build
• Supports Continuous Integration
Unit Testing
• Based on the idea that classes should contain their own tests
• Highly localized; test(s) work within a single package
• Tests the interfaces to other packages, but just assumes other packages work
Why Unit Testing?
• Better able to exercise all code
• Can write tests before writing code:
• Helps programmer to focus on the interface rather than the implementation
• Provides a clear finish point: when the test works
• Cuts down significantly on debugging time
• Run tests every time code is compiled
• If new code breaks a previously-passed test, bug location is easier to pinpoint
Unit Testing Difficulties
• Detraction: seem to be writing code twice
• Many programmers have never learned to write tests or even to think about tests
• Overhead of test framework
The JUnit Testing Framework
• Used for writing unit tests in Java
• Helps automate testing process
• Provides some basic constructs for running tests
Structure
• Any class that contains tests must subclass the TestCase class
• Typically, one for each class being tested
• The JUnit framework allows tests to be grouped into suites
• TestSuites can contain TestCases or other TestSuites
• Makes it easy to build a range of large test suites and run the tests automatically
Junit Example: I/O Class
• The test must have a constructor:
class FileReaderTester extends TestCase {
    public FileReaderTester(String name) {
        super(name);
    }
}
First step: Set up a test fixture
• A test fixture is the set of objects that act as samples for testing. In the case of I/O testing, a test file:
data.txt
Bradman 99.94 52 80 10 6996 334 29
Pollock 60.97 23 41 4 2256 274 7
Headley 60.83 22 40 4 2256 270* 10
Sutcliffe 60.73 54 84 9 4555 194 16
Manipulating the test fixture
• TestCase provides:
• protected void setUp() – creates objects
• protected void tearDown() – removes them
• Important to execute both methods for each test so that the tests are isolated from each other; thus can
run them in any order
setUp & tearDown
class FileReaderTester ...
    protected void setUp() {
        try {
            _input = new FileReader("data.txt");
        } catch (FileNotFoundException e) {
            throw new RuntimeException("unable to open test file");
        }
    }

    protected void tearDown() {
        try {
            _input.close();
        } catch (IOException e) {
            throw new RuntimeException("error on closing test file");
        }
    }
Create the first test
public void testRead() throws IOException {
    char ch = ' '; // initialized so the compiler sees a definitely assigned value
    for (int i = 0; i < 4; i++) {
        ch = (char) _input.read();
    }
    Assert.assertEquals('d', ch);
}
• assertEquals is the JUnit assertion that compares the expected value with the actual one
How to run the test
• Create a test suite:
class FileReaderTester ...
    public static Test suite() {
        TestSuite suite = new TestSuite();
        suite.addTest(new FileReaderTester("testRead"));
        return suite;
    }
• The test is bound to the method testRead()
• Use a separate TestRunner class
• can use a GUI version, but character interface can be called within the code
class FileReaderTester ...
    public static void main(String[] args) {
        junit.textui.TestRunner.run(suite());
    }
TestRunner success output
.
Time: 0.110
OK (1 tests)
• JUnit prints a period (“.”) for every test run
• JUnit prints a single “OK” if no test fails
TestRunner failure output
public void testRead() throws IOException {
    char ch = ' ';
    for (int i = 0; i < 4; i++) {
        ch = (char) _input.read();
    }
    Assert.assertEquals('2', ch); // deliberate error
}
Result:
.F
Time: 0.220
FAILURES!!!
Test Results:
Run: 1 Failures: 1 Errors: 0
There was 1 failure:
1) FileReaderTester.testRead expected:<2> but was:<d>
Usefulness of Failures
• Can start by making tests fail, to prove:
• the test does actually run
• the test is actually testing what it’s supposed to
• A common testing error is to be testing something other than what is supposed to be tested
Catching errors
• In addition to catching failures (assertions that are false), the JUnit framework also catches errors (unexpected exceptions)
public void testRead() throws IOException {
    char ch = ' ';
    _input.close();
    for (int i = 0; i < 4; i++) {
        ch = (char) _input.read(); // will throw exception
    }
    Assert.assertEquals('d', ch);
}

Result:
.E
Time: 0.110
!!!FAILURES!!!
Test Results:
Run: 1 Failures: 0 Errors: 1
There was 1 error:
1) FileReaderTester.testRead java.io.IOException: Stream closed
Running multiple tests
• Write new test methods
• public void testReadAtEnd()
• Put them in the suite to run them:
• suite.addTest(new FileReaderTester("testReadAtEnd"));
• JUnit has a lazy-programmer shortcut:
• Naming convention: “testX()”
• Replace main() method with:
public static void main(String[] args) {
    junit.textui.TestRunner.run(new TestSuite(FileReaderTester.class));
}
Can run a Master Test Suite
class MasterTester extends TestCase {
    public static void main(String[] args) {
        junit.textui.TestRunner.run(suite());
    }
    public static Test suite() {
        TestSuite result = new TestSuite();
        result.addTest(new TestSuite(FileReaderTester.class));
        result.addTest(new TestSuite(FileWriterTester.class));
        // and so on...
        return result;
    }
}
User-defined comments in JUnit
public void testReadBoundaries() throws IOException {
    assertEquals("read first char", 'B', _input.read());
    char ch = ' ';
    for (int i = 1; i < 140; i++) {
        ch = (char) _input.read();
    }
    Assert.assertEquals("read last char", '6', _input.read());
    Assert.assertEquals("read at end", -1, _input.read());
}
Testing philosophies
• Testing should be risk-driven
• “test every public method” is not enough
• A little testing goes a long way
• Better to focus on complex code and areas that are at most risk of going wrong
• Helps to keep the task of test-writing to a doable size
• Focus on boundary conditions and special conditions that make the test fail
• e.g., for an I/O class, an empty file (see the sketch below)
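As a sketch of that advice in the style of the FileReaderTester above (the empty.txt fixture is a hypothetical example), a boundary test for an empty file checks that the very first read already signals end of stream:

public void testReadEmptyFile() throws IOException {
    FileReader input = new FileReader("empty.txt"); // assumed zero-byte fixture file
    try {
        // Boundary condition: the first read on an empty file must return -1.
        Assert.assertEquals("read on empty file", -1, input.read());
    } finally {
        input.close(); // release the file even if the assertion fails
    }
}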

3. DevOps adoption in projects
Software Engineering Concepts you must know before doing this course
 Phases of software development life cycle (SDLC)
 Values and principles of Agile software development
Awareness of the following concepts is required:
 Version control
 Programming Java web applications
 Unit testing, code coverage and build
 Quality assurance testing (QA)
 Deployment and release to production
 Maintenance and support of applications

What is DevOps?
Traditionally, software development, testing, and deployment are considered separate activities. Deployments to customer environments are scheduled activities and may be done on a yearly or half-yearly basis. Hence customers need to wait a long time to see a working version of the software. DevOps is a set of practices whose adoption in projects provides the following benefits.

DevOps can be represented as follows:

This leads to continuous delivery of valuable, high-quality software to customers, i.e., it also helps realize the benefits of Agility. These are also the benefits that are expected by the “Pura Vida” IT group. Having understood the benefits that DevOps offers, let us take a look at the industry definitions.
DevOps Adoption in Projects
DevOps adoption requires focus on the People, Process, and Technology aspects. A sample roadmap is provided here for your reference.

The coach employed at “Pura Vida” suggests starting with the technology aspects.

Technology Aspects
Let us look at the actions required for DevOps adoption from a technology perspective first.

Technology challenges in "Pura Vida"

 Lack of automation in the software development lifecycle, and hence loss of quality due to error-prone manual repetition of steps (e.g., tests)
 Defects generated due to inconsistent environments for testing and deployment
 Delays in testing due to infrastructure unavailability
 Brittle point-to-point integration between modules

The DevOps and Lean coach suggests the identification of capabilities to set up an automated development and delivery pipeline.
The teams are asked to identify the tools to alleviate these challenges. Let us look at the capabilities to be built in
and the popular tool stacks.

Aligning capabilities

The coach at "Pura Vida" sets out to align the various capabilities across the teams, categorizing them under Business, Dev, Test, Operations, and Support.
The capabilities to be aligned are shown in the figure below.

This is crucial because DevOps is a set of capabilities, across the IT Value Stream (PLAN – BUILD – RUN), which
enhances Throughput, Quality & Business Value.

Aligning capabilities: Business

Let us focus on the capabilities to be aligned from the business perspective


Policies/procedures/methods
Agile

 Why? - A rapid development approach like Agile helps in faster delivery of valuable software
 What? - Agile is a time boxed, iterative approach to software delivery that builds software incrementally from
the start of the project, instead of trying to deliver it all at once near the end
 Benefits - Rapid development of valuable software

Big room planning

 Why? - For effective planning and execution, with everyone literally together in the same room in real time
 What? - This brings all the stakeholders who are responsible for delivery of software (business, dev, test,
program management teams) together in a single room for about two days
 Benefits - Improves communication between the teams and promotes a collaborative working relationships
which is the basis for Agile

Lean
 Why? - Elimination of waste and bottlenecks in automated software development and delivery
 What? - This is achieved by collaboration, by ‘shifting left’ operational concerns early in the development
lifecycle, by eliminating waste, rework and over-production i.e. using Lean principles in DevOps
 Benefits - Teams eliminate the bottlenecks in the DevOps pipeline, making it more efficient and productive

Minimum capabilities/Practices
Automated acceptance tests

 Why? - Acceptance tests help ensure that the right code is being built


 What? - These are meant for business teams to check if the functionality is working for the user. These can be
automated using tools like FitNesse
 Benefits - Brings transparency and speed to delivery of software

Good to have capabilities/Practices
Rapid prototyping

 Why? - For rapid and iterative design and delivery of software


 What? - It involves using prototyping tools and collaborative review by stakeholders and refinement based on
feedback
 Benefits - Provides a feel of the product early, room for customization, saves cost and time (if tools are used)
and minimizes design flaws

Roles and responsibilities


Business teams should

 be ready to adopt the Agile approach and collaborate with the development teams for faster development and
delivery of valuable software
 adopt lean practices and eliminate wasteful processes, documentation and the like
 be ready to allow iterations to happen and provide early feedback
 ensure that requirements are available and clear on time

If business teams align with the capabilities mentioned, the following issues faced by “Pura Vida” would get alleviated.

Alleviated issues

 Lack of automation in the software development lifecycle, and hence loss of quality due to error-prone manual
repetition of steps – in this case, the automation of acceptance tests

Aligning capabilities: Dev teams

Let us now focus on the capabilities from the Development team perspective
Policies/procedures/methods
Agile

 Why? - A rapid development approach like Agile helps in faster delivery of valuable software
 What? - Agile is a time boxed, iterative approach to software delivery that builds software incrementally from
the start of the project, instead of trying to deliver it all at once near the end
 Benefits - Early feedback, work division into small units, transparency in the process – all these motivate teams
and improve their productivity

Feature toggle

 Why? - When new features/enhancements need to be released to the whole user base, or to a chosen subset, in a
live system without needing to change the code, feature toggling is used
 What? - Let us take an example where there are four teams A-D working on a release. Team A is working on a
feature that will take considerable time to develop and test. Instead of teams B, C and D waiting for A
to complete, they introduce a feature toggle in the code

Current scenario:
method sampleMethod(){
// code which is currently in use, prior to the new feature being introduced
}
Using a feature toggle:
method sampleMethod(){
boolean useNewFeature = false;
// useNewFeature = true; // This will be used when the new feature developed by Team A is ready for release
if( useNewFeature ){
return newMethod();
}else{
return oldMethod();
}
}
method oldMethod(){
// current implementation of the method, before Team A's enhancement
}
method newMethod(){
// enhanced implementation of the method being developed by Team A
}
Types of toggles

 Release toggles
 Experiment toggles
 Ops toggles
 Permission toggles

Benefits of toggles
 Release features to a selected cohort of users to get their responses prior to release to the entire user base
 Provide permissions to access certain features to a selected user base
Incremental design

 Why? - We need to design solutions that lend themselves to change as the code adapts to changing requirements

 What? - Incremental design is based on the SOLID principles. S.O.L.I.D is an acronym for the first five object-
oriented design (OOD) principles by Robert C. Martin

These are briefly described below:

 S – Single Responsibility Principle: a class should have only one reason to change
 O – Open/Closed Principle: software entities should be open for extension but closed for modification
 L – Liskov Substitution Principle: subtypes must be substitutable for their base types
 I – Interface Segregation Principle: clients should not be forced to depend on interfaces they do not use
 D – Dependency Inversion Principle: depend on abstractions, not on concrete implementations
 Benefits - These principles, when combined, make it easy for a developer to develop software that is
easy to maintain and extend. They help write code without “smells”, and help achieve low coupling, high
cohesion and strong encapsulation in code
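
As an illustration, here is a minimal Java sketch of the Dependency Inversion principle (the interface and class
names are illustrative assumptions, not taken from any specific library):

// High-level code depends on an abstraction, not on a concrete implementation
interface MessageSender {
    void send(String message);
}

class EmailSender implements MessageSender {
    public void send(String message) { System.out.println("Email: " + message); }
}

class OrderService {
    private final MessageSender sender;
    OrderService(MessageSender sender) { this.sender = sender; } // dependency is injected
    void confirm() { sender.send("Order confirmed"); }
}

Swapping EmailSender for, say, an SMS implementation requires no change to OrderService, which keeps the design
open to change as requirements evolve.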

Microservices

 Why? - Traditionally, enterprise applications are often built in three main parts. The server-side application
becomes a “monolith”, a single logical executable. Any change to the system will involve building and
deploying this monolith. In fully automated deployable environments, it becomes necessary to develop small
deployable components instead of one single process which ties the change cycles together.

 What? - It is an architectural style of developing a single application as a suite of small services, each running
in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These
services are built around business capabilities and independently deployable by fully automated deployment
machinery
 Benefits
o Improves isolation of faults. A large application may not be much affected by the failure of a single module
o Commitment to a single stack of tools can be eliminated. Dependencies may be lighter compared to
monolithic applications
o Very useful in fully automated deployment pipeline for faster delivery of components

Minimum capabilities/Practices

Continuous integration pipeline

 Why? - The continuous integration practice is mandatory to ensure that defects are captured early and a working
version of the software is always available, by automating the different development stages through integration of
tools. This is a must-have to ensure continuous, frequent and automated delivery & deployment of software
to customers
 What? - It is a software development practice adopted as part of extreme programming (XP). The image below
shows the activities in the development phase that need to be automated to form the Continuous Integration
pipeline. It helps in automating the build process, enabling frequent integration, code quality checks and unit
testing without any manual intervention, by use of various open-source, custom-built and/or licensed tools. The
sequence, and whether an activity will be done at all, is decided by the orchestrator – i.e. the continuous integration
tool – based on the gating criteria set for the project

 Benefits
o Reduced risks and fewer integration defects
o Helps detect bugs and remove them faster
o Less integration time due to automation
o Avoids cumulative bugs due to frequent integration
o Helps in frequent deployment

Note: The details of these activities and tools involved will be discussed in the subsequent sections.

If Dev teams align with the capabilities mentioned, the following issues faced by “Pura Vida” would get alleviated.

Issues

 Lack of automation in the software development lifecycle, and hence loss of quality due to error-prone manual
repetition of steps – here, the automation of build cycle activities
 Brittle point-to-point integration between modules – resolved due to continuous integration of modules

Aligning capabilities: Test teams

Let us focus on the capabilities from Test team perspective.

Policies/procedures/methods

Progressive test automation

 Why? - In order to expedite and automate the testing right from the beginning and to ensure consistent delivery
of valuable software, progressive testing is required. It also helps to detect bugs early and quickly
 What? - It is an automation method where the test modules are tested in various stages of software
development. The test code is written simultaneously with the development code. It helps test new and evolving
functionality suggested by customers, and has hence gained significance in software development using an Agile
approach. Testing teams are involved right from the beginning. They write end-to-end automated test cases that
are run continuously. Mocking and service virtualization are used when the required components are not available
(more on this in a later section). Some service providers provide end-to-end automated testing services
 Benefits
o Expedites the testing process, helps detect and fix bugs early
o Increased testing coverage and shorter testing cycle time
o Testers are involved right from the beginning and hence the test cases are close to real world and
requirements of end users

Minimum capabilities/Practices

Continuous and automated testing

 Why? - In an automated development and delivery pipeline, integration will be done frequently, as we saw in the
previous section. Hence test cases need to be run frequently. Test cases act as the gate for verifying that the
developed software meets the required functionality. This is possible only if the tests are automated
 What? - Involves using automated testing tools which can execute tests, provide outcomes of the tests in the
form of reports, and can be run repeatedly. Test automation frameworks such as JUnit help in this

In the automated development and delivery pipeline, the following tests and their management are automated
mandatorily. These tests are invoked in an automated fashion by the orchestrator – i.e. the continuous integration
tool. The following testing-related automation is required:

Test and test data management automation

 Functional test
 Test and Test data management
 Performance test
 Security test

 Benefits
o Increases the depth and scope of test coverage which helps in improving software quality
o Ensures repeatability of running tests whenever required and helps in continuous integration of valuable
software

Note: The details of these activities and tools involved will be discussed in the subsequent sections.

Good to have capabilities/Practices

Service virtualization

 Why? - In traditional software development, testing starts after the integration of all the components that are
needed – e.g. performance testing is delayed and testers may skip it for want of time. This leads to defects being
found late in the cycle, which are costly to fix. It also impacts the delivery speed. Hence service virtualization is
required to “shift left” and detect defects early in the cycle (say, during unit testing) by simulating the non-available
components of the application
 What? - Service virtualization helps in simulating application dependencies and beginning testing earlier. Virtual
components can be data, business rules, I/O configurations etc.
o Features

 Are light-weight and hence testing is inexpensive. Example: If we have a legacy system on top of which
business logic enhancements are done, setting up the latter every time for testing is cumbersome and
costly
 Creates a virtual asset which simulates the behaviour of the components which are impossible to access
or unavailable
 Components can be deployed in virtual environments
 Created by recording of the live communication among the components that are used for testing
 Provide logs representing the communication among the components
 Analysis of service interface specifications like WSDL
 Assets listen for requests and return an appropriate response. Example: Listen to an SQL statement and
return the source rows from the database as per the query
 Benefits
o Reduce cost of fixing defects
o Decreases project risk
o Improves the speed of delivery
o Helps emulate the unavailable components/environments and represents a more realistic behaviour
(stubs/mocks help in skipping unavailable components)

If test teams align with the capabilities mentioned, the following issues faced by “Pura Vida” would get
alleviated.

 Defects generated due to inconsistent environments for testing and deployment – here, testing need not wait for all
the components to be made available; simulation helps remove defects early and testing is done in a “close-to-real”
environment

Aligning capabilities: Infra teams

Let us focus on the capabilities from the Infra team perspective.

Policies/procedures/methods

On- demand infrastructure

 Why? - There are fluctuating demands for infrastructure due to changing requirements. For
example, in an automated software development and delivery pipeline, testing environments will be required on
a continuous basis for short periods of time. Maintaining environments to service peak requirements on a
continuous basis may be costly for organizations, while maintaining minimal resources will lead to delays in
delivery. Hence the shift towards on-demand infrastructure is prevalent in the industry today, with
HP, IBM, Microsoft and Sun Microsystems being prominent vendors
 What? - This is a model where computing resources are made available to the user as needed. The resources
may be maintained within the user's enterprise, or made available by a service provider. This is implemented
through virtualization and cloud or dynamic allocation of VMs
 Benefits
o Infrastructure is provided when required
o Minimises the capex and revex tied up in IT infrastructure
o Reduced carbon footprint and maintenance costs of infrastructure

Infra-as-code

 Why? - Configuration/re-configuration of servers done frequently (due to fast and frequent releases) and
manually (with the help of scripts) is a time-consuming and tedious process with lots of scope for errors
 What? - It is also called programmable infrastructure, which involves writing code in a high-level or descriptive
language to manage configurations and to provision infrastructure and deployments in an automated fashion.
This uses the proven software development practices that are used to write application code. It
is different from infrastructure automation, where the steps for configuration of servers are simply repeated
 Benefits
o Developers can also engage in writing code for infrastructure provisioning, deployment and configuration
o Development and testing is faster and simpler which further aids in speed of deployment and delivery
o Helps avoid “snowflake servers” (servers that are difficult to reproduce due to their complex configurations).
Once the configuration is automated (using infra-as-code), it can be used by anyone to create servers of the
same configuration. This will also ensure consistency in development, testing and deployment
environments
o Since standard development practices like version control is followed, it is easier to maintain the changes
made to the environments

Just enough infra

 Why? - Often, software quality suffers due to lack of proper testing. The latter may be due to the fact that enough
infrastructure is not available when required. Hence on-demand, just-enough infrastructure is required so that
environments can be made available for testing and deployment, and released when not needed
 What? - Involves provisioning infrastructure from a cloud or a virtual setup when and as much as required. May be
provided on demand (just enough infra) through Infrastructure as a Service (IaaS) by vendors
 Benefits
o Cost-effective to host and test applications
o Aids in fast delivery, testing and deployments

Minimum capabilities/Practices

CD automation

 Why? - Business requires frequent delivery of valuable software with efficiency. Continuous delivery helps
create a repeatable, reliable and incrementally improving process for ensuring this
 What? - CD allows a constant flow of changes into production through an automated software production pipeline
called the "CD Pipeline". This involves Continuous Validation (CV) followed by Continuous Delivery. Quality is built
into the pipeline. The pipeline provides feedback to the team and visibility into the flow of changes to everyone
involved in delivering the new feature(s). We have seen CI and CV earlier. CD therefore is a series of
practices to ensure that quality code can be deployed quickly and safely to production by delivering every change/new
feature to a production-like environment. Since automation is used, the confidence level that this would work
well in the production environment is high. With the push of a button, the change can be deployed to the production
environment. This is called continuous deployment. Continuous deployment may not be practical in all
organizations due to regulatory and other processes, though it should be the goal of every organization. It follows
continuous delivery
 Benefits
o Customers can realize early ROI
o Since it is based on automation, repeatability is ensured and quality software is delivered to pre-production
environment which ensures that the same will work well in production environment also

Release management

 Why? - Traditionally, release management involves the complex process of planning, designing, building, testing
and deploying new software and hardware in the production environment. Integrity needs to be maintained while
releasing the correct version. Traditionally, this process is very stressful and inefficient, involving a lot of manual
work and co-ordination. Also, since the ops and dev teams work in isolation, there are surprises, delays
and errors in releases. There is a lot of documentation that needs to be read prior to every deployment. A
path and workflow need to be defined through a system to allow for fast delivery of software
 What? - The automated tools for release management allow the integration of the management and execution
of releases. They help teams to plan, track and execute the releases through an integrated interface. They allow
the approvals and notification to the concerned for various stages in the delivery pipeline. Release plans can
therefore be run quickly
 Benefits
o Errors in releases can be reduced
o The workflow is automated and can be tracked through a system
o Helps in bringing speed to releases

Good to have capabilities/Practices

Database deploy

 Why? - In the continuous delivery pipeline, since database deployment has fundamental differences from
application deployment and processes, the former is often done in a manual fashion. Hence the benefits of continuous
delivery pipelines are not optimized, and delays may result. To align database deployments with CD
practices, automated database deployments are required
 What? - The automated database deployment tools generate a single deployment script that contains the meta
data and structure changes. It also contains the details of the changes in terms of configuration management
 Benefits
o High levels of visibility into database deployments
o Prevents errors due to manual scripts for database deployments
o Provides an interface to package, verify, deploy, and promote database changes as done with application
code and integration of database deployments to the CD pipeline

Infrastructure layer and environment management

 Why? - The various steps that we saw earlier (including environment provisioning, testing, deployments etc.)
need an infrastructure to be in place for implementation. This layer is responsible for infrastructure
management. The environment management tools help in this
 What? - The infrastructure layer can be managed by tools like Chef, which help spin up virtual machines,
sync them, make changes across multiple servers etc. The virtual machines mimic servers, including the
complete operating system, drivers, binaries etc. They run on top of a hypervisor system which in turn runs on
top of another operating system
 Benefits - Provides the necessary hardware for automated deployments and the environment management
tools help in managing and maintaining them

Containerization

 Why? - A virtual machine, as seen in the earlier section, has its own operating system. Hence precious operating
system resources are duplicated across virtual machines. In order to ensure that workloads share the
same resources, containerization is required
 What? - It allows applications to share a single host operating system and relevant binaries, drivers etc.
This is called operating-system-level virtualization
 Benefits
o Containers are smaller in size, easier to migrate and require less memory
o Allows a server to host multiple containers instead of spinning up full virtual machines

Aligning capabilities: Operational teams

Let us focus on the capabilities from Ops team perspective.

Policies/procedures/methods

Predictive monitoring

 Why? - In case of automated deployment and delivery it becomes important to help support teams predict an
issue before it arises. This is why predictive monitoring is required
 What? - This is done through predictive monitoring tools which analyze various elements of the IT environment
in a way that enables the IT teams to predict an issue before it turns into a full-fledged problem and disrupts the
services
 Benefits - Will help support teams identify potential issues proactively, before they result in disruption of services

Self-healing

 Why? - Systems that are created are not perfect. Services may fail due to increases in load or induced bugs.
Making systems resilient, so that they recuperate from failures and predict them in the near future, is required;
hence self-healing systems need to be developed
 What? - This involves making the system take decisions based on continuous checking and optimization of its
state, and adapt to changing conditions. This creates a responsive system that is capable of responding to
changes and recuperating from failures. Self-healing systems can be divided into three levels – application
level, system level and hardware level
 Benefits - Helps support teams by monitoring and resolving issues before they disrupt services, giving systems a
way to heal themselves

Minimum capabilities/Practices

Incident management

 Why? - With the large-scale explosion of data centers and virtualization, the scale and fragmentation of IT alerts
have increased dramatically. Hence the manual way of resolving alerts – constantly filtering through noisy
alerts, connecting them to identify the bigger issue, prioritizing and escalating them to the concerned teams, and
managing the alerts by hand – should be avoided
 What? - A centralized incident management solution avoids redundant alerts. It combines all the monitoring
systems and provides an easy tracking mechanism by which support teams can respond
 Benefits
o Helps support teams respond to alerts quickly and easily
o Since the automated pipelines may have several tools and layers, incident management tools help
centralize the alerts and hence faster responses to them

Support analytics

 Why? - Faster release cycles demand automated deployment to get applications out faster, and they demand
discovering and diagnosing production issues quickly through actionable analytics. Focusing
on business metrics is important in DevOps environments. Deriving these metrics, and the data to meet the key
performance indicators, becomes essential – hence the need for support analytics
 What? - Tools for support analytics do a deep search of the data, perform centralized logging and parsing, and
display the data in a neat way
 Benefits - These tools help in collaboration across teams and show exactly what is happening to the business,
from the data that is stored and logged

Good to have capabilities/Practices

Monitoring dashboard

 Why? - In order to measure the success of DevOps adoption and also measure the health of the pipelines,
monitoring dashboards are required
 What? - A dashboard provides a complete view of the pipeline. The dashboard can be based on different
perspectives. Some examples -
o Business performance dashboard – May depict the revenue, speed of deployment, defect status etc. This
can be for both technical and non-technical teams
o End user dashboard – may provide code and API specific metrics like error rates, pipeline status etc.
 Benefits - Single point where teams get visibility of the DevOps implementation

Tool stack and its implementation

Now that the capabilities are understood, let us look at how to use tools to put these capabilities into action in the team.

The coach now elaborates the parameters to be considered while choosing the tools which will lead to DevOps
adoption.

Parameters for tool consideration:

The aim of choosing the tool stack is to build an automated pipeline using tools for performing the various software
development, testing, deployment and release activities. This helps in creating rapid, reliable and repeated releases
of working software with low risk and minimal manual overhead. Here are the principles to be considered while
choosing tools.

Principles to be considered

 Repeatability : The automated pipeline needs to be executed frequently and multiple times with consistency
 Reliability : The automated pipeline should ensure reliable software
 End to end automation : The activities from coding to release should be automated
 100% source control : All the artifacts involved in the pipeline need to be version controlled (ex. Source code,
automated test cases, reports, binaries etc.)
 Auto build quality : Pipeline should have quality auto-built by way of gating conditions
 Done is released : Pipeline should ensure that “done-ness” as per the definition of done is only released to
production
 Continuous feedback : Tools provide continuous feedback by way of reports
 Customer appetite for tooling : The availability of budget from customer, existing tools and alliances, technology
used in the project, feasibility of automation

Note: Gating conditions represent the criteria to be met by a build cycle activity in order to move to the next activity.
For example, the source code should meet the quality rules in order to move to the unit testing stage. These conditions
can be configured in the various tools. They ensure only quality code gets integrated.

Popular tools for DevOps:

The DevOps and Lean coach now provide a snapshot of the tools that are available for the various activities as shown
in the figure below:

The tools stacks are evolving and there are many vendors in this area.

Practical tips

 Tooling is a consulting exercise.

The DevOps and Lean coach suggests using the Java and open source stack for the system development at "Pura
Vida" for the reasons mentioned below.

Reasons

 The tools involved in this stack are primarily open source, free and powerful
 The project is an application development project using Java stack
 Quick availability of these tools and no overhead of maintenance of licenses

The team is advised to get OSS (Open Source Software) compliance for the open source tools prior to
installation. OSS compliance refers to compliance in terms of using approved and supported source code. There
should be a policy and process to check the usage, purchase (as not all open source software is free), management
and compliance (some software can be used for training but payment is required for commercial use). Tools like
Black Duck help in checking OSS compliance.

The coach explains the various activities and tooling in this stack to the entire team.

Continuous Integration and continuous delivery:

Continuous integration – Activities involved

Let us revisit the steps in continuous integration.

The development team commits the source code and automated test cases to the version control system. The
continuous integration tool is scheduled to run at a specific frequency (a best practice is to run daily). It polls the version
control system (the specific repository where the code is committed) for changes and triggers the various tools which can
be used to automate the build cycle activities. The CI moves to the next stage only if the previous stage meets the gating
conditions set. These stages are explained below:

1. The CI tool invokes the static code analysis tool. This checks the quality of code against the set rules.
2. The CI tool then invokes the automated unit test cases. Only if they pass or as per the gating criteria set, the
next stage is invoked.
3. In order to check if the unit test cases cover the code, the code coverage tool is invoked next by the CI tool.
4. The CI tool then invokes the build script for building the executable file. At the end of the integration, the binary
file is baselined into a binary repository.
5. The continuous integration tool may trigger the environment provisioning tool, which provisions and
configures the test environments (functional, performance testing etc.).
6. The CI tool then triggers the deployment automation tool to deploy the application into the provisioned environment
and executes the automated QA test cases; if the gating conditions for the execution of the tests are met, it
moves to the next stage.
7. The CI tool then invokes tools to provision the pre-production deployment environment. The deployment tool
deploys the working version of the software to the configured pre-prod environment and runs all the automated
acceptance tests.
8. If the acceptance tests pass, the same provisioning and deployment steps are repeated for the production
environment and the software is deployed automatically after going through the release management process.
9. The workflow and approval mechanisms are configured in the release management tools. The changes
done get reflected in the production environment, i.e. release to production.

Note: If the deployment process is also automated seamlessly, then it is called continuous deployment.

Continuous integration – Tools involved

Let us take a look at the features and functionality of the tools involved in the Java stack for Continuous Integration.

Continuous integration pipeline using java and open-source stack of tools:

Application life cycle management(ALM) tool

If agility is adopted through Scrum or any other method, an ALM tool can be used for tracking and managing the user
stories, team activities and ceremonies. This tool can be linked to the entire lifecycle, and once the integration is
completed, the user story is marked complete in the ALM tool. Jira, from Atlassian, is a popular tool.

Rapid prototyping

As part of the design phase, a rapid prototyping and design tool may be used. The UXPin tool helps with UI design and
prototyping.

These two stages are prior to the build and may be done based on the context of the project. In a project where an
agile approach to software development is not adopted, the build cycle automation can still be adopted. This is
explained in the following section.

Version control system

A distributed version control system would be useful if the teams are distributed. A proper branching structure needs
to be set up so that continuous integration can be aligned to the appropriate branch. Git is a popular choice, owing
to its distributed nature and the fact that a lot of DevOps tool stack development is happening
around Git.

Other tools like SVN and CVS may also be used.

Static and dynamic code analysis

This step involves the execution of the quality rules defined in the static code analyzer on the source code and test
cases, to ensure that the code meets quality guidelines for emergent design and incremental software development.
These rules are configurable, new rules may be added, and gating conditions (threshold conditions for code quality)
can be set in the static analyzer tool.

SonarQube is a popular tool which supports C#, Java and C++, among other languages. Others include
CheckStyle, FindBugs, StyleCop and JSHint. A combination of tools can also be used in this stage.

Dynamic code analysis may also be done by executing the code and analyzing it for performance, memory
utilization etc. JProbe may be used for dynamic code quality analysis.

Code review

The code review process may be manual or automated using tools. Tools like Jupiter and Crucible (from Atlassian)
help.

Unit testing

Unit tests need to be automated so that they can be run multiple times. A standard xUnit framework like JUnit can be
used. Such frameworks help maintain and reuse test objects and also help execute them for unit testing. Gating
conditions, in terms of the tolerance level for test case failures, can be set to ensure that the code is of good quality
and meets the functionality.

Code coverage

In order to check the coverage of the source code by the test cases, a code coverage tool is used. This helps check
the quality of the test cases, as the number of test cases is not in itself a sufficient measure of the quality of testing.

JaCoCo and Cobertura are popular tools, as they are lightweight and report code coverage granularly, in terms
of class/file/package-level coverage. The reports are visual and hence helpful. Gating conditions, in terms of the
coverage criteria (condition/statement/line), can be set to ensure the quality of the test cases.

Build automation

The code which meets the quality conditions (gating) set needs to be integrated and built. Automated build tools are
used for this purpose. These tools help in creating versions of the binary file. This is done through a build script. The
latter is used by the continuous integration tools to sequence and orchestrate the build phase activities.

ANT, Maven, Grunt and Gradle are popular tools.
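
With Maven, for example, the main build stages map to standard lifecycle phases that can be invoked from the
command line (a minimal sketch):

mvn clean → removes the output of previous builds (the target folder)
mvn compile → compiles the source code
mvn test → runs the automated unit tests
mvn package → builds the deployable artifact (e.g. a .war file)

The continuous integration tool typically invokes such goals through the build script.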

Baseline in artifact/binary repository

The built artifacts need to be stored in the binary repository in order for the next stages in the pipeline to pick the
artifacts up.

An artifact repository ensures -

 Dependencies (libraries) are version controlled


 Dependencies can be shared
 Stores information about
o User who triggered the build
o Modules which were built
o Source control (branches) that were used
o Dependencies used
o Environment variables
o Packages installed

The following are the benefits -

 The right version of the build is used for QA and the completely tested version goes for release
 Ensures that any changes to source code to meet the quality and testing needs are routed only through the
source code repository
 Structures the deployment properly

Artifactory from JFrog is a useful tool. Nexus can also be used.

Continuous integration:

The mentioned stages and activities are orchestrated by the continuous integration tool. This tool may use plug-ins
to connect to the other mentioned tools in the pipeline and invoke them in a sequence. This tool may use build script
tasks/goals or powershell/shell scripts to execute the build lifecycle activities. Gating conditions can be set in them.
They provide a central dashboard to view the various reports created by the tools in the lifecycle stages. They can
be configured to run at a specific frequency and also to poll the version control repository for changes.

Jenkins is a popular tool in this area as it comes with thousands of plug-ins to connect to tools, is light-weight and
open source. Other tools include Bamboo from Atlassian and Hudson.
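
As an illustration, a minimal declarative Jenkins pipeline script (a Jenkinsfile) orchestrating part of such a sequence
might look like the sketch below. The stage names and the Maven/SonarQube goals are illustrative assumptions for a
Maven-based project, not a prescribed setup:

pipeline {
    agent any
    stages {
        stage('Static analysis') {
            steps { sh 'mvn sonar:sonar' } // invokes the SonarQube analysis (scanner must be configured)
        }
        stage('Unit test') {
            steps { sh 'mvn test' } // runs the JUnit tests; coverage via a plugin like JaCoCo
        }
        stage('Build') {
            steps { sh 'mvn package' } // produces the deployable binary for the artifact repository
        }
    }
}

Because each sh step fails its stage on a non-zero exit status, later stages never run on code that did not pass the
earlier ones – one way the gating conditions are enforced.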

This section has covered the continuous integration pipeline, which ensures that quality code – code that follows the
coding and quality rules and is unit tested with the requisite coverage criteria – is built and baselined. This in turn reduces the
chances of defects getting carried to the testing stage, and ensures that the code is maintainable and emergent enough to meet
the incremental design principles and changing requirements.

Here is the table which consists of the tools suggested by the DevOps & Lean coach for the Pura Vida team and the
reasons for the choice are mentioned.

Continuous delivery- tools involved:

Continuous Delivery involves Continuous Validation followed by deployment to pre-production. Let us focus on
Continuous Validation tools in Java and Open Source Stack.

Continuous Validation

Database deployment

Since there are fundamental differences between application code and database code, automated database
deployment is often missed out. Database deployment tools employ deployment scripts which enable safe database
deployment, and also provide visual interfaces to detect and depict configuration drift and give feedback. Thus
packaging, verification, deployment and promotion of database changes can be done just like application code with
these tools, at very low risk.

Tools like DBMaestro, Liquibase etc. are popular.

Functional/Acceptance testing

Functional and acceptance tests can be automated using tools. This ensures that the tests are written once and can
be repeatedly run.

Selenium allows writing automated web application UI tests in any programming language against any HTTP website,
using any mainstream JavaScript-enabled browser. FitNesse automated acceptance tests are powerful tools for fixing
a broken requirements process. Cucumber merges specification and test documentation into one cohesive whole.

FitNesse, Selenium, Protractor and Cucumber are popular tools.
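
For instance, a minimal Selenium UI test in Java might look like the sketch below (the application URL and the
element id are illustrative assumptions):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LandingPageTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver(); // launches a Chrome browser session
        try {
            driver.get("http://localhost:8080/app/login"); // assumed application URL
            driver.findElement(By.id("username")).sendKeys("testuser"); // locate a field and type into it
        } finally {
            driver.quit(); // always close the browser session
        }
    }
}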

Test and defect management

A test and defect management tool provides a centralized repository for tracking defects across projects. Such tools
help in automated notification of assignments and also track the defect management process. They provide reporting
based on categorization of defects by project, severity and priority.

Zephyr and Bugzilla are popular tools in this area.

Test data management

During the test life cycle, the amount of data that is generated for testing the application is enormous. Live data may
also be used for testing. A sub-set of this data can be maintained, since it may not be available all the time. Test data
management (TDM) deals with the maintenance of the test data.

Datafinder and Datamaker are popular tools in this area.

Service virtualization

These tools help in designing and executing automated unit, functional, regression, load and performance tests for
distributed application architectures which leverage SOA and BPM architectures. They help emulate the behavior of
specific components in heterogeneous component-based applications such as API-driven applications, cloud-based
applications and service-oriented architectures.

LISA is a popular tool (proprietary in nature). Mockito and SOAPUI are other tools.
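
As a small example, Mockito can stand in for a component that is not yet available in a unit test. The sketch below is
illustrative; the PaymentGateway interface and its charge method are assumed names, not part of any real library:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import static org.junit.Assert.assertTrue;

public class CheckoutTest {
    interface PaymentGateway { boolean charge(int amountInCents); } // assumed unavailable dependency

    @org.junit.Test
    public void paymentSucceedsForValidAmount() {
        PaymentGateway gateway = mock(PaymentGateway.class); // simulate the missing component
        when(gateway.charge(100)).thenReturn(true); // stub its behaviour for this scenario
        assertTrue(gateway.charge(100)); // the test can now run without the real gateway
    }
}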

Performance testing

Performance testing ensures software applications will perform well under their expected workload, and provides
information about the application’s speed, stability and scalability. Automated performance testing frameworks enable
repeatedly building new load test agendas without recording a new script, eliminating the additional work that may
have been caused by changes to the UI.

JMeter is a popular tool.
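
Once a test plan (a .jmx file) has been created, JMeter can run it headlessly from the command line – a sketch with
illustrative file names:

jmeter -n -t loadtest.jmx -l results.jtl

Here -n runs JMeter in non-GUI mode, -t names the test plan and -l names the results file.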

Security testing

Automated security testing can be applied to tests that are done to check for known vulnerabilities. Other tests may
still be manual.

AppScan is a popular tool.

Continuous Delivery And Release Management:

Let us now focus on automated deploy and release tools from the Java and Open Source Stack.

Environment management/Containerization

In a continuous effort to streamline infrastructure, optimize servers and ensure stability of applications, cloud and
virtualization are being used in DevOps. Containerization is complementary to virtualization and is playing a great
role in shaping the cloud computing space. Containers provide dedicated environments for applications to run in, and they
can be deployed and run anywhere without creating an entire virtual machine setup for each application.
Containerization virtualizes the operating system so that multiple applications can run isolated on a single host. Each
application is given access to a single operating system kernel. Once the application is built and QA tested, the
deployment can be done through containers.

Docker is a popular containerization tool.
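
A minimal Dockerfile for packaging a Java web application into a container might look like the sketch below (the
Tomcat base image tag and the .war file name are illustrative assumptions):

# Start from an official Tomcat base image
FROM tomcat:9.0
# Copy the built web application into Tomcat's deployment folder
COPY target/sample-app.war /usr/local/tomcat/webapps/
# Tomcat listens on port 8080 by default
EXPOSE 8080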

Infrastructure management (virtual/cloud) can be done (allocation, deallocation) through automated scripts and
environment management systems like OpenStack, Chef, Puppet etc. These can be used to configure, allocate and
de-allocate test environments as well. Thus they provide on-demand infra and just-enough infra.

CD automation

This involves the automation of all the activities of deployment through tools. Thus every code change goes through
all the steps in the pipeline and reaches the pre-production environment in an automated fashion.

Go is an example of an open source CD automation tool. Other popular proprietary tools include XLDeploy from
Xebia Labs, UDeploy from IBM, and CARA from CA Technologies.

Release management

Release management involves the management of software releases by co-ordinating the design, build and
configuration of the releases, acceptance, rollout planning, release communications, training activities, distribution
and installation of releases, and reporting to management on the quality and operations of the release.

A release management tool helps teams plan, execute and track a release through every stage of the delivery
lifecycle.

XLRelease and URelease are popular proprietary tools.

Incident management/Support analytics/monitoring dashboard

Since the entire pipeline is automated, any failure in production can cause great impact. Incident management tools
analyze, monitor, provide alerts and audit trails, and foster communication between the teams to minimize the impact
on business operations and ensure that the best possible levels of service quality are maintained.

The ELK stack – ElasticSearch, Logstash and Kibana – comprises popular tools in the open source stack.

Hardware Backbone:

Finally, to provide the required infrastructure for the automated pipeline and tools to run smoothly, the infrastructure
layer is important. It provides suitable environments for applications to run, application life cycle management and
centralized management of applications, supports distributed environments, and provides easy integration and
maintenance. Let us look at the tools involved in the hardware backbone.

VMware, AWS etc. may be used for infrastructure provisioning and management. Virtualization is the process of
creating a software-based (or virtual) representation of something rather than a physical one. Virtualization can be
applied to applications, servers, storage and networks. It helps reduce costs while improving efficiency and helps
in achieving speed. VMware is a virtualization and cloud computing software provider, whose products are based on
the bare-metal hypervisor ESX/ESXi for the x86 architecture. Amazon Web Services (AWS) is a secure cloud services
platform, offering compute power, database storage, content delivery and other functionality to help businesses scale
and grow. These can be used in the CI-CD pipeline.

Here is the table which consists of the tools suggested by the DevOps and Lean coach for the "Pura Vida" team and
the reasons for the choice are mentioned.

People Aspects:

Now that the tool aspects have been finalized, the coach mentions the importance of moving towards the right team
structure for implementing DevOps.

People challenges in "Pura Vida"

 The Dev, QA and Ops teams are separate


 Conflict in goals
o Dev team goals: Adapt to rapid changes and their implementation
o QA team goals: Write and execute test cases
o Ops team goals: Achieve stability & reliability
 Little collaboration between the Dev, QA and Ops teams, and no shared responsibility
 Ops team not sensitized to the requirements and urgency of deployments
 Decision making is centralized and not autonomous
 Geographically distributed teams

The DevOps & Lean coach now elaborates on the people models available and the parameters to be considered
for choosing among them.

Team structure-possible organization:

Agile software development has broken down some of the isolation between requirements, analysis, development
and testing teams. The objective of DevOps is to remove the silos between development (including testing) and
operations teams and bring about collaboration between the teams.

However, since there are separate teams, and since team members may have niche skills rather than skills across
the software development lifecycle, there could be a phased approach to creating a pure DevOps team. Here are some
of the possible team structures.

Note: Traditional speed: Refers to organizations with big business systems and the way IT was run until Agile
methods came in.

High-speed IT: When businesses became IT-centric, and dynamic changes and digitization of businesses happened,
organizations had to change levers and adopt agility in projects.

Team structure model 1: Separate dev and Ops teams

Key issue: Lack of collaboration between the teams as they are in silos.

However, it may not be possible to merge the teams, so the key is to improve the collaboration through common
interventions.

Salient Features

 Development and operations are separate (may apply to QA teams also)


 Interventions planned at regular intervals with no overhead processes

 Teams keep separate backlogs but take each other’s stories in their backlogs
 Ops team gets knowledge about upcoming features, major design changes, possible impact on production
 Dev team understands what causes outages/ defects better, improves Dev processes to reduce impact (e.g.
specific logging, perf testing for a cycle)
 Dev team improves dev processes over time by understanding Ops defects/outages better

Team structure model 2: One team with Ops extension

When a pure DevOps team cannot be formed, a model closer to the pure DevOps team can be constructed.

Salient Features

 A horizontal Ops team forms a backbone for all development teams
 It provides 24x7 support and performs the tasks which have a larger impact on IT, e.g. patch deployment
 A few Ops team members can become part of the Dev team and perform tasks which are application specific
 The Ops representative will focus on all the Ops activities which are specific to this team/application, while all
centralized Ops activities will be taken up by the horizontal Ops team

When speed is increased, deployments become faster. Teams then realize that support service levels start dropping. That
is when teams understand the importance of collaboration between the development and ops teams.

Team structure model 3: Pure DevOps

The teams may be merged, although the DevOps skill set (end-to-end skills across the software lifecycle) may not be
readily available.

Salient features

 An embedded team can be created by hiring people with blended skills, or by cross-training/on-the-job learning by
the Dev & Ops teams in each other’s skills
 The team has a single backlog with both Dev & Ops tasks
 Each team member is capable of selecting any item & working on it

Process Aspects:

DevOps involves a change in culture and mindset. The process to be followed plays a major role.

Process challenges in "Pura Vida"

 Delays due to formal knowledge transfer from Dev to Ops for every release
 Tedious Change Management process requiring lots of approvals
 Complex Release Management with manual checks impacts the operational efficiency

These challenges led to various issues. The coach suggests that the development and operations teams follow a unified
process.

When teams are merged, which process to follow gains more significance.

Model 1: Dev and Ops separate

The development team may be following an agile approach, say Scrum, as shown in the figure. The Ops team may
be following another process, here ITIL (Information Technology Infrastructure Library, a framework for
streamlining maintenance activities). Here is how the process can be fine-tuned for DevOps adoption.

Process model

 Dev in Scrum and Ops in any iterative model


 Governance team (Program Manager) for conflict resolution
 Few team members cross-participate in daily standups

Limitations

 Frequent conflicts and less appreciation for each other’s work


 Cross skilling of talents is not possible

Model 2: Dev and Ops separate but following similar process

The development and operations teams may be following an Agile approach, say Scrum (by the development team)
and Kanban (by the operations team). Here is how the process can be fine-tuned for DevOps adoption.

Process model

 One group does Scrum and the other Kanban as ONE team
 Two different product backlogs (PB), but single PO
 Dev team works on user stories and Ops works on high priority Kanban PB
 Any inter-dependent work items are prioritized by the PO to resolve dependencies on time. Daily standups are held
by both teams
Limitations

 Cross-skilling of talents is limited

Model 3: Unified process

When there is a unified DevOps team, they can follow a unified process like Scrumban as shown in the figure.

Process model

 Single PO with one PB


 Based on history, planned vs. unplanned effort is considered across the board
 The team works on the prioritized user stories
 High priority is set for unplanned high-severity incidents; a team member having expertise in the area takes them
up and resolves them
 Cross-skilling of talents is possible

The development team at "Pura Vida" follows Scrum and the operations team adopts ITIL. The team starts with the first
model and then moves towards a unified process at "Pura Vida" for the reasons stated below.

Reasons

 The development and operations teams have been separate at "Pura Vida". The team structure is optimized
with this arrangement currently and hence it is difficult to have a unified process immediately.
 Over time, as the groups start to become cross-skilled, a unified process can be adopted.

3. Implementation of CICD with Java and open-source
stack : Source Code Management
Here is the table which consists of the tools suggested by the coach for the "Pura Vida" team and the reasons for
the choice are mentioned.

The team now sets out to implement this CICD pipeline using the tools mentioned in the table. Here are the steps
suggested.

Let us look at each of the steps one by one.
Version Control:
Version control using source code repository

The "Pura Vida" team has the following challenges:

They are presently using a centralized version control system, and hence the developers need to be connected to the
central server all the time to fetch the code and check in.

1. Since teams are distributed, they need a tool which can support this.
2. There is no tracking of local changes in the central server, and the development team finds it difficult to reconcile
local changes
3. Branching is very complex and tedious in the existing version control system
4. The code base is increasing in size and the team needs a robust version control system that can handle this
size

The coach suggests a distributed version control system which will help mitigate the mentioned challenges. Let us
understand the version control activities using a distributed version control system like Git.

Version control of source code

Need

Version control systems are a category of software tools that help a software team manage changes to source code
over time. In a continuous integration and deployment pipeline setup, software developers working in teams are
continually writing new source code and changing existing source code. Version control tracks every individual
change by each contributor and helps prevent concurrent work from conflicting.

What is a version control system?

A version control system tracks changes to source code over time and helps in reconciling code with earlier
versions written by the team. Here is the basic vocabulary and its description:

Git features

 Open source distributed version control system


 A Git clone is a full-fledged repository
 Version control is possible even when not on a network
 Branching is a very inexpensive operation
 Has strong support for non-linear development
 Fully distributed
 Able to handle large projects like the Linux kernel efficiently (speed and data size)

Other version control systems include Mercurial, BitKeeper, Darcs, Bazaar, Perforce, IBM Rational ClearCase and
Visual Studio Team Foundation Server.

Here are the advantages of a distributed version control system over a centralized version control system:
 Each machine is a clone of the server, hence developers can work offline too, and multiple copies of the code are
available as backups
 Local changes are also tracked
 Changes made in one part of the software can be incompatible with those made by another developer working
at the same time. This problem, called a conflict, should be discovered and resolved in an orderly manner without
blocking the work of the rest of the team. Git allows pushing of the code only when conflicts are resolved.

Git workflow

Here is how developers work with Git.

1. There is a central trunk which contains the integrated code/test cases of the team, which will be used for release.
2. The snapshot of changes for an item is stored in the form of a database of snapshots in a compressed format.
3. There may be several branches created for the different teams/team members to work on before merging with the
central trunk.
4. A development team member clones from the respective branch to the local machine (via the clone command).
This creates a local Git repository on the local machine. The area where changes/new code development will be
done is called the working directory, which is not tracked by the version control tool.
5. The development team member makes changes in the working directory until they are ready to be committed.
6. The development team member stages the changes (via the add command) and commits them (via the commit
command), which records the changes in the local Git repository (repository database). Staging can happen
several times during the course of development.
7. When the team member has a collection of changes ready, or as per the process laid out, they push the changes
to the branch shared by the team (via the merge and push commands). Conflicts may arise as the code is shared,
in which case the conflicts are resolved (via a conflict resolution tool) and the code is then pushed.
8. Before starting the next iteration of changes, the development team member fetches the latest version of the
code from the branch (via the fetch or pull command), merges the changes (via the merge command) and repeats
steps 5, 6 and 7.

This is shown in the figure below.

Common commands in Git
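
Frequently used Git commands include the following (each shown with a one-line description):

git init → initialise a new local repository in the current folder
git clone <url> → copy a remote repository to the local machine
git status → show the state of the working directory and staging area
git add <files> → stage changes for the next commit
git commit -m "message" → record the staged changes in the local repository
git push → send local commits to the remote repository
git pull → fetch and merge changes from the remote repository
git branch → list, create or delete branches
git checkout <branch> → switch to another branch
git merge <branch> → merge changes from another branch into the current one
git log → show the commit history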

Practical tips

 Commit often, push once


 Commit early and often
 Have a process and workflow for the team, with commit and push frequencies clearly defined
 Create an adequate number of branches – not too many, not too few
 Write useful commit messages
 Commit code which is of good quality, tested and follows the definition of done
 Do not overwrite commit history

Demonstration

Please refer to the next section for a demonstration of the usage of Git commands using the EGit plugin for Eclipse.

The development team at "Pura Vida" will have their challenges mitigated with Git for the following reasons:

 Since it is a DVCS, multiple teams can work in parallel and can work offline on the local changes
 Git can handle huge files due to the compression logic followed in storing the change snapshots
 Staging ensures that local changes are tracked and the history shows the entire list of transactions done

Practical Implementation of Git & GitHub in project


Create a Java Maven application in any of your system drives.
Open a command prompt and navigate to the project root folder that we created above.
Enter the command: git init

git init → Initialises an empty Git repository in the current working directory/folder. (By default the
.git folder is hidden. Go to the file explorer and enable View > Show hidden files. Use ls -la before
and after the git init command; ls -la will also show the hidden .git folder.)

<current working directory/folder> git:(master) → Says that the current working
directory is under Git control and the default branch is master.

Configure Git Author Name And Email:


Since Git is a distributed version control system, many members will be working on and committing
changes. To track all the changes in the Git version history, we record an author name and email to find out who
committed particular changes.

git config --global user.name <Name>

git config --global user.email <Email>
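
For example (the name and email shown are placeholders):

git config --global user.name "Jane Developer"
git config --global user.email "jane@example.com"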

Note: Make sure to give the same name and email address across all the applications we are going to use for source
code management.

Pushing local changes to a remote repository using the GitHub Desktop application

Changes made to your project in your working directory can be pushed to the remote repository using Git Bash, Git
GUI, GitHub Desktop, Sourcetree etc. Here we are going to use GitHub Desktop. Click the Current Repository
dropdown and click on Add existing repository.

Choose the project folder that we created earlier and click on Add repository. Once the repository is added in the
GitHub Desktop app, you can push it to a remote repository on GitHub (you need to have an account created on the
GitHub website - https://github.com). Click on Push. After the push, the code is moved to the remote repository.

4. Implementation of CICD with Java and open source
stack : Static code analysis

The "Pura Vida" team is facing the following challenges:

 Faster deployments are being done but the quality of code has taken a backseat
 Several issues are reported and the team is doing a lot of rework
 It is increasingly becoming difficult to bring in changes to the design and code

The coach suggests the usage of a code analysis tool (static and dynamic) to ensure that developers check in only
quality code, which will help mitigate the mentioned challenges. Let us understand static code analysis and how it
helps ensure quality of code with the help of a static analysis tool like SonarQube.

Analysis of source code for quality

Need

In the name of fast deployment, code quality should not be compromised; doing so has long-lasting effects on speed
and can bring productivity to zero. Hence only quality code should be pushed to the version control system. This
applies to all code – source code, test code, automated scripts etc.

What is static and dynamic code analysis?

Analyzing code without executing it is known as static code analysis; executing it and analyzing it (for example, for
performance) is called dynamic code analysis. Static analysis helps detect issues like coding standard violations,
design principle violations and redundant code, which are referred to as technical debt. If such technical debt is
not repaid, it can accumulate, making it harder to change the code at a later time.

The development team needs to fix the issues to ensure the technical debt is removed. This analysis is usually
performed after coding, compilation and build of the source code. CheckStyle, FindBugs, SonarQube and PMD are
examples of static code analysis tools.

Technical debt

 Is a metaphor developed by Ward Cunningham (similar to financial debt)
 Is incurred by doing development quick and dirty
 Would need extra effort to fix the "dirty" parts in future (similar to interest payments)
 Team can choose to continue putting in extra effort due to the dirty pieces, or refactor the code for better design and clean code
 Teams need to remove technical debt over time by:
o Spending time on refactoring in every sprint
o Improving the definition of done to allow clean coding
o Using quality analyzer tools for checking the quality of code
o Factoring time for the debt into release and sprint planning

Sonarqube features

SonarQube is a web-based open source tool to manage code quality. It has the following features -

 Can check the source code for -
o adherence to standard architecture and design principles
o comments
o coding rules
o code complexity
o duplication in code
 Covers many languages like Java, C, C++
 Has rules, thresholds and alerts that can be configured online

Working of sonarqube

1. SonarQube has a list of built-in rules for different languages
2. Rules can be configured as per needs; such a set of activated rules is called a profile. SonarQube allows users to create custom quality profiles covering various parameters like defects, code smells, issues etc. The default profile is called "Sonar way". Customized profiles can be created and attached to projects by activating the required set of rules
3. There are three ways to execute SonarQube analysis:
o Sonar runner (command line)
o Build script
o SonarLint (plugin to Eclipse)
4. When these profiles are applied to a project, analysis is performed and a dashboard is created
5. The dashboard provides the following details:
o Code demographics – no. of lines of code, files etc.
o Bugs in code
o Code smells
o Code coverage details
o Duplications in code
o Technical debt
o Ratings based on bugs, vulnerabilities and code smells
6. Quality gates can be applied to ensure that code that does not pass the quality conditions does not move forward to the next stage
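
As an illustration of the build-script route, a Maven project can typically trigger the analysis with the sonar goal; the server URL below is a placeholder for your own SonarQube instance:

mvn clean verify sonar:sonar -Dsonar.host.url=http://localhost:9000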

Calculation of technical debt:

Technical debt ratio = Total rework effort in minutes / Total original effort in minutes

Let us understand the formula.

Total rework effort in minutes:

 Each rule is associated with a rework effort. If the rule is violated, it adds to the total rework effort.

Total original effort:

 The original number of lines of code is multiplied by the original effort per line. Sonar considers the original effort to be 30 minutes for each line of code to flow through the entire SDLC.
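
As a worked example of the formula: a project with 1,000 lines of code has a total original effort of 1,000 × 30 = 30,000 minutes; if the violated rules add up to 1,500 minutes of rework, the technical debt ratio is 1,500 / 30,000 = 5% (which, under SonarQube's default rating grid, is still rated A).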

The equation yields a percentage which is graded from A to E as a SQALE rating. D and E indicate that code quality
is very bad; A and B indicate good quality, and C indicates deterioration of the code. SQALE is a methodology that was
developed by inspearit and then open sourced; the SonarQube implementation of SQALE is based solely on rules and
issues.

Practical tips

 Create profiles with an increasing number of rules so that teams are not overwhelmed with too many rules in the beginning
 A mix of tools can be used to check quality
 It is important for teams to fix issues at the earliest and improve the quality
 Issues can be resolved based on priority; the categories of issues can be seen in the SonarQube dashboard

The development team at "Pura Vida" will have their challenges mitigated with SonarQube for the following reasons:

 Code quality will be ensured from design and clean-coding perspectives, which will go a long way in ensuring that the code is maintainable and able to adapt to changes quickly
 This will help ensure code quality with speed

Unit testing and code coverage

The "Pura Vida" team is facing the following challenges:

 Since the development of requirements is incremental, unit testing needs to be done frequently. Presently it is manual and hence is becoming tedious.
 Due to evolving code and design, regression testing also needs to be done, which presently is time consuming and is not being done properly, leading to many defects slipping to production
 The team is also finding that the quality of test cases is not good and is unable to measure it

The coach suggests the usage of an automated unit testing and code coverage tool to help resolve the issues mentioned. Let us understand unit testing and code coverage using tools like JUnit and Cobertura respectively.
Automated unit testing and code coverage

Need

Automated unit testing is used to ensure repeatability of tests and bring speed to the unit testing process. Code
coverage measures the quality of test cases. Together, they ensure that bugs are detected faster and human errors are eliminated.

What is automated unit testing and code coverage?

Test automation is the use of special software (separate from the software being tested) to control the execution of
tests and the comparison of actual outcomes with predicted outcomes. Automated code coverage -

 Provides a quantitative measurement of the testing effort
 Indicates redundancy in test cases
 May lead to early defect detection as coverage increases
 Points to untested areas of code
 Does not imply that the code is bug free – it only means that the code has been exercised by tests (if the tests are flaky, code coverage cannot help)

Popular tools for unit testing

 JUnit
 NUnit
 MSTest

Popular tools for code coverage

 Cobertura
 Emma
 JaCoCo

JUnit features

 Is an open source Java testing framework used to write and run repeatable tests
 Is an instance of the xUnit architecture for unit testing frameworks
 Features-
o Assertions for testing expected results
o Test fixtures for sharing common test data
o Test suites for easily organizing and running tests
o Graphical and textual test runners

Working of Junit test cases

JUnit is an open source Java testing framework used to write and run repeatable automated test cases. It is the Java
version of xUnit architecture for unit and regression testing frameworks written by Erich Gamma and Kent Beck. Here
are the common terms used -

 Annotations to identify test methods
 Assertions for verifying expected results
 Test fixtures for sharing common test data
 Test suites for easily organizing and running tests
 Graphical and textual test runners

JUnit benefits

 Simple and takes less time to learn for a Java Programmer as no additional knowledge is required
 Can be run automatically and require no manual intervention
 Easy to integrate with IDEs like Eclipse and NetBeans, and also with build tools like Ant and Maven
 Makes it possible for the software developer to easily correct bugs as they are found
 Helps the programmer to always be ready with a working version of defect-free code

Test suite workflow

 JUnit runs a suite of tests and reports the results. For each test in the test suite:
o It calls setUp(). This method should create any objects that you may need for testing
o It calls one test method. This method may comprise multiple tests, i.e., it may make multiple calls to the method you are testing
o It calls tearDown(). This method should remove any objects you created
 Each test method checks a condition (assertion) and reports back to the test runner whether the test failed or succeeded
 The test runner uses the results to report to the user. The report indicates the tests which passed (green), the tests which failed (red) and the overall status.
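
As a minimal sketch of this workflow, a JUnit 4 test class (matching the junit 4.13.2 dependency used in the pom.xml later in this document) could look like the following; the Calculator class under test is hypothetical:

import static org.junit.Assert.assertEquals;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class CalculatorTest {

    private Calculator calculator; // hypothetical class under test

    @Before
    public void setUp() {
        // create any objects needed for the test
        calculator = new Calculator();
    }

    @Test
    public void addReturnsSumOfTwoNumbers() {
        // assertion verifying the expected result
        assertEquals(5, calculator.add(2, 3));
    }

    @After
    public void tearDown() {
        // remove objects created in setUp()
        calculator = null;
    }
}

A test runner (for example, the one built into Eclipse, or Maven's Surefire plugin) then reports each test as passed (green) or failed (red).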

Code coverage using JaCoCo

JaCoCo is an open source code coverage tool which can be plugged into Eclipse. JaCoCo is invoked through build
scripts.

JaCoCo features

 Free Java tool that calculates the percentage of code accessed by tests
 Useful to identify parts of the Java code lacking testing
 Can be called using Ant and Maven scripts

JaCoCo generates detailed reports that show different coverage-related parameters like line coverage,
method coverage, class coverage and so on. Here are snapshots of the reports generated by JaCoCo.
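
With the JaCoCo Maven plugin configured (as in the pom.xml shown later in this document), the coverage report can typically be generated from the command line with:

mvn clean test jacoco:report

By default the HTML report is written under target/site/jacoco.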

Practical tips

 Separate common set up and teardown logic into test support services utilized by the appropriate test cases.
 Test code to be treated on par with production/source code.
 Review of test cases is also mandatory.
 Avoid slow running tests.
 Unit tests are so named because they each test one unit of code.
 Developers to run the entire suite.
 Use mock objects.
 When Test Driven Development is used along with test automation, the results are impressive.

5. Implementation of CICD with Java and open source
stack : Build automation
Build automation

The "Pura vida" team is facing the following challenges:

 The current build process is manual and very cumbersome as the instructions for build are in several pages of
a word document
 As the steps in build lifecycle are getting automated and moving towards a quicker frequency of builds, manual
builds would be very slow
 The stakeholders require a continuous flow of working versions of the application and are constantly asking the development team for it

The coach suggests the usage of a build automation tool to ensure a continuous integration of working versions of
the application. Let us understand the build automation using Maven.

Need

Developers are paid to invest their effort in solving business problems by developing software programs to build a
software system. In the process of building a software system, developers have to do the following technical activities
periodically several times in a day/week.

 Getting the latest version of the program files from Configuration Server
 Compiling the programs
 Performing Static analysis on the programs/code to get feedback on design
 Running automated test (Unit/System test cases)
 Collecting Metrics on Code Coverage of each program during Unit Testing
 Packaging the programs into some form of binaries like .war or .jar or .ear files
 Placing the binary file into a central repository from where the SIT or UAT team can pick up for deployment and
testing.

All the above steps together constitute the build process of a software system.

If the above activities have to be done manually by developers, then significant amount of developer’s valuable time
will be eaten away on these mundane technical activities. Instead, by automating the above activities, developers
can invest the same effort on solving business problems by devising best in class algorithms, designs and programs.

In order to achieve the above automation, we adopt a process called build-automation. Primary goal of Build
Automation is to automate manual operations in the build process.

What is build automation?

A build automation tool helps in the following –

 Compilation of code
 Testing and integrating the changes
 Packaging the binaries
 Deployment to test server

Popular tools

Some of the build tools that are widely used in industry are:

 Apache ANT
 Apache Buildr
 Sbt
 Tup
 Gradle
 Visual build
 Apache Maven
 Grunt

Maven features

Maven is a software management and comprehension tool based on the concept of the Project Object Model (POM),
which can manage project build, reporting and documentation. In simple terms, Maven is a build automation tool:
Maven generates the build for us.

Workflow of Maven

Maven has a few built-in lifecycle goals like clean, compile etc. Other user-defined goals and project-related
configurations are specified in the pom.xml of the project. When a build is run, Maven executes a set of goals.

Maven's local repository is by default located in the '.m2' folder within the current user's folder. Maven uses this
repository for the plugins required to run the build, downloaded from the central Maven repository or, internally in
Infosys, from the Infy-Nexus repository. For executing the goals mentioned in the pom.xml, if Maven doesn't find the
necessary files in the local repository, it connects to the central/Infy-Nexus repository and downloads them. Maven
uses the target folder to save build data and build reports.

How to write a build script in Maven?

 The Maven client plug-in should be downloaded into the Eclipse IDE
 Create a Maven project that comes with a default pom.xml
 Define the dependencies in pom.xml that specify which external jar files and plug-ins are required to be downloaded from the Maven central repository (usually on the internet) onto the local machine to create the project build
 Provide the necessary authentication and authorization credentials in 'settings.xml', available in the '.m2' folder
 Specify appropriate goals (commands) and trigger the execution of pom.xml
 Maven has lifecycle goals to perform the below activities:
o Clean up the working directories
o Compile source code
o Copy resources
o Compile and run tests
o Package project
o Deploy project

Maven can be considered as a tool that can be used for building and managing any java-based project.

Maven POM.xml script example

Explanation for important tags in the below POM.xml are given with ‘->’ symbol.

Maven's architecture is based on the assembly of various plug-ins, which makes it modular.

Plug-ins are the way in which we can extend the functionality of an application. Based on this philosophy, Maven
uses various plug-ins and extends its functionality accordingly, making its architecture modular. In fact, the Maven
compiler itself is a plug-in inside the Maven system.

Execution of Maven script

After all the necessary configurations are done in pom.xml and settings.xml, right click on 'pom.xml', choose 'Run
As -> Maven build…' and enter one or more of the below mentioned goals:

 clean – cleans up the 'target' folder in Eclipse before the build process is initiated
 compile – compiles all the .java files and creates the .class files
 sonar:sonar – triggers the static code analyzer tool SonarQube, which generates the static code metrics
 test – runs the JUnit test cases written and kept in the folder 'src\test\java\ut…'
 cobertura:cobertura – consolidates the outcome of the test goal and generates the code coverage report, making it available in the '\site\cobertura\' folder in Eclipse
 war:war – creates a .war (Web Archive) file
 deploy – deploys the .war file to a pre-defined server
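
Equivalently, if Maven is installed and on the system path, the same goals can be chained from a command prompt in the project root, for example:

mvn clean test
mvn clean package sonar:sonar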

Maven benefits

Maven is intended to make the day-to-day work of Java developers easier and generally help with the comprehension
of any java-based project.

 Maven has many in-built templates, and using Maven has multi-fold benefits, some of which are:
o Creates a project source code template that defines a project structure, bringing in the following advantages:
- Clear separation of concerns in development
- Discipline in coding
o Imports all the necessary libraries that are dependencies to create an application
o Builds and packages the application by compiling the source code and packaging it into a .jar file

Practical tips

 Build every time a change is checked in
 Ensure that the steps of static/dynamic analysis, unit testing, code coverage etc. are also included
 If the build is broken, the development team needs to fix it immediately
You can see the demonstration of the usage of Maven for orchestration of build lifecycle activities in the subsequent
section.

The development team at "Pura Vida" will have their challenges mitigated with Maven for the following reasons:

 Build automation will ensure a quick and continuous integration of valuable software

Artifact repository

The "Pura Vida" team foresees the following challenges when the entire SDLC is automated:

 Many versions of binary files (ex. .jar files, libraries) are involved in the development. Presently the sharing is through a common library folder; they foresee that this approach will not work as there are many people involved
 They also need an approach to version control the binaries created at the end of the build cycle
 They should have appropriate folder structures to identify Dev tested, QA tested, staged and release stages

The coach suggests the usage of an artifact repository to help this situation. Let us understand the need for an
artifact repository in an automated CICD pipeline with the help of a tool like Artifactory.

Artifact repository

Need

In traditional development-

 Dependencies are managed by using a static list
 It is not a good practice to check in dependent libraries
 Dependencies are kept as a common library which is used by all developers; tools like Maven download the latest version of the dependent libraries
 When the code base and inter-dependencies increase, the versions of the dependencies are not kept track of, leading to version-related and consistency problems

In an enterprise system with CI -

 It becomes difficult to reuse common libraries by the development team members in their builds
 Version control of the dependencies and the build package becomes essential

What is an artifact repository?

This is a software repository for binary artifacts and their corresponding metadata, which is akin to a software version
control system for source code. It is a way of versioning binary artifacts, for ex. jars, wars, ears, fully fledged
applications, libraries or collections of libraries that are packaged. This is illustrated in the diagram below:

An artifact repository has the following features -

 Dependencies (libraries) are version controlled
 Dependencies can be shared
 Stores information about:
o User who triggered the build
o Modules which were built
o Source control (branches) that was used
o Dependencies used
o Environment variables
o Packages installed

Popular tools

Tools include-

 Artifactory
 Nexus

Benefits of using an artifact repository

 The right version of the build is used for QA and the completely tested version goes for release
 Ensures that any changes to source code to meet the quality and testing needs are routed only through the
source code repository
 Structures the deployment properly

Artifactory - Features

Artifactory is an open source tool from JFrog. It has the following features -

 Web-driven repository manager from JFrog
 Has four major modules:
o Home: general information, user details, time and version
o Artifacts: helps browse repositories in the system
o Build: helps view CI server projects
o Admin: configure users and perform administration and maintenance activities
 Jenkins is configured with Artifactory using the plugin
 Artifactory provides a number of plug-ins to support connectivity with other tools:
o Build tools like Gradle, Ivy, Ant, MSBuild
o Source code version control tools like SVN, Git, Perforce
o CI tools like Jenkins, TFS, Bamboo, TeamCity

Orchestration of build – continuous integration

The coach at "Pura Vida" has advised the automation of the build cycle, which you have understood in the earlier
section. The team now has the following questions:

1. How will these stages be orchestrated, and would that be manual?
2. How frequently would this need to be done?
3. Will the stages be sequenced by the build script and run manually, or will a tool be used?
4. Will the sequenced stages continue to run even if one of the stages fails?

The coach suggests the usage of an orchestrator tool and explains how it works. Let us understand the need and
working of a CI orchestration tool like Jenkins and its role in an automated CICD pipeline.

Orchestration of build activities leading to continuous integration

Need

When the need is to have a continuous delivery of valuable software to customers, the first step is to build it and
integrate it continuously. This will ensure that all the code entering the system will move to QA stage only if it passes
the conditions that are mentioned in the build cycle activities. This ensures that a no-touch pipeline is created and
runs automatically.

What is orchestration?

The integration needs to be done using an automated pipeline, synchronizing the build activities at regular intervals.
An orchestration tool helps invoke the appropriate tools/steps of the build cycle regularly (as per the configured
duration) and apply the conditions (called gates) that need to be satisfied before the build moves to the different
stages. Tools like Jenkins, TFS and TeamCity help in the orchestration of the build activities.

Maven Build & Test results

After creating a Java Maven application in Eclipse IDE, paste the following code in pom.xml and save the file. Make
sure to give your project details in the highlighted area.

pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.mycompany.app</groupId>
<artifactId>simple-java-maven-app</artifactId>
<packaging>jar</packaging>
<version>1.0-SNAPSHOT</version>
<name>simple-java-maven-app</name>
<url>http://maven.apache.org</url>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.13.2</version>
<scope>test</scope>
</dependency>

<dependency>
<groupId>org.jacoco</groupId>
<artifactId>jacoco-maven-plugin</artifactId>
<version>0.8.6</version>
</dependency>

</dependencies>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<!--<sonar.host.url>http://192.168.20.56:9000/</sonar.host.url>-->

<sonar.host.url>http://192.168.117.51:9000/</sonar.host.url>
<sonar.login>admin</sonar.login>
<sonar.password>Ramesh@2005</sonar.password>

<!-- JaCoCo Properties -->


<jacoco.version>0.8.6</jacoco.version>
<sonar.java.coveragePlugin>jacoco</sonar.java.coveragePlugin>
<sonar.dynamicAnalysis>reuseReports</sonar.dynamicAnalysis>
<!--

<sonar.jacoco.reportPath>${project.basedir}/target/jacoco.exec</sonar.jacoco.reportPath>
<sonar.language>java</sonar.language>
<sonar.coverage.jacoco.xmlReportPaths>C:/Users/RAMESH B/eclipse-workspace/simple-java-maven-app/target/jacoco-report/jacoco.xml</sonar.coverage.jacoco.xmlReportPaths>
-->

</properties>
<build>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.5.1</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
</plugins>
</pluginManagement>
<plugins>
<plugin>
<!-- Build an executable JAR -->
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<version>3.2.0</version>
<configuration>
<archive>
<manifest>
<addClasspath>true</addClasspath>
<classpathPrefix>lib/</classpathPrefix>
<mainClass>com.mycompany.app.App</mainClass>
</manifest>
</archive>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-enforcer-plugin</artifactId>
<version>3.0.0-M3</version>
<executions>
<execution>
<id>enforce-maven</id>
<goals>
<goal>enforce</goal>
</goals>
<configuration>
<rules>
<requireMavenVersion>
<version>[3.5.4,)</version>
</requireMavenVersion>
</rules>
</configuration>
</execution>
</executions>
</plugin>

<plugin>
<groupId>org.jacoco</groupId>
<artifactId>jacoco-maven-plugin</artifactId>
<version>${jacoco.version}</version>
<executions>
<execution>
<id>prepare-agent</id>
<goals>
<goal>prepare-agent</goal>
</goals>
</execution>
<execution>
<id>report</id>
<phase>prepare-package</phase>
<goals>
<goal>report</goal>
</goals>
</execution>
<execution>
<id>post-unit-test</id>
<phase>test</phase>
<goals>
<goal>report</goal>
</goals>
<configuration>
<!-- Sets the path to the file which contains the execution
data. -->

<dataFile>target/jacoco.exec</dataFile>
<!-- Sets the output directory for the code coverage report.
-->
<outputDirectory>target/jacoco-ut</outputDirectory>
</configuration>
</execution>
</executions>
<configuration>
<systemPropertyVariables>
<jacoco-agent.destfile>target/jacoco.exec</jacoco-agent.destfile>
</systemPropertyVariables>
</configuration>
</plugin>

</plugins>
</build>
</project>

In the Eclipse IDE, right click on your project and click on Maven Build. Then you will see the build & test results
as shown below.

6. Implementation of CICD with Java and open source
stack : Jenkins
Jenkins is an open source, very popular and powerful tool. It has the following features-

 An open source continuous integration server written in Java
 Performs "jobs" and is configurable
 More than 1000 plugins available

Jenkins workflow

1. A build script containing the various targets for executing the build cycle activities is available (pl. refer to the earlier section on build automation)
2. These targets are used by Jenkins for orchestration
3. Jenkins is configured -
o Paths to the executables of tools are provided
o Users are created with permissions
o Environment variables are set (ex. JAVA_HOME, MVN_HOME)
o Plugins for the required tools are uploaded
o Email configurations are done
4. The frequency interval for integration (i.e. start of orchestration) is configured
5. The repository from which the code and test cases are to be pulled is configured
6. The jobs (upstream and downstream – invoker and invoked respectively) are configured as per the build lifecycle
7. Gating conditions are configured
8. Mailer configuration (to list, mail body and when) is done so that notifications can be made (ex. when the build is broken)
9. A pipeline view is created to see the status of the builds
10. Reports of unit testing and coverage are configured for viewing on the Jenkins dashboard itself
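
For example, the integration frequency in step 4 is typically given in cron-style syntax under the job's Build Triggers: a schedule of H/5 * * * * polls roughly every five minutes, while * * * * * polls every minute (as used in the pipeline configuration shown later in this document).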

Dynamic environment provisioning and management

Need

A testing environment is a setup of software and hardware in which the QA (Quality Assurance) teams can test a
build. This may consist of pre-production or staging environments. This may be a scaled down version of the
production environment to help detect pre-production defects. There may be dedicated test environments for
developers to test, QA to test, integration testing and business readiness or acceptance testing.

Generally, these environments are unorganized with ad-hoc management and hence incur high operational and
maintenance costs. Effective and efficient management of test environments with structured automation can help in
this case, and hence dynamic environment provisioning and management is required.

Continuous testing and deployment are possible only when the required infrastructure is configured and available on
demand, just in time. Also, it is becoming increasingly difficult to manage complex IT infrastructure dependencies.
Hence a virtual environment is needed, along with a tool to configure it, make it available on demand and release it
when not needed.

What is environment provisioning?

Release management in traditional setups is a complex and time-consuming process. Templates are used for
maintaining the configuration and the work is done manually. However, in a DevOps setup with an automated CICD
pipeline, this may not be sustainable. Hence environment provisioning and management is automated using scripts.

An application environment consists of infrastructure, configuration and dependencies. The infrastructure defines
where the application will run and the various dependencies available to it. The configuration states how the
application will behave in the provided infrastructure. Dependencies are the other modules on which the application
depends, and also include libraries like jars.

The following are the typical activities:

 Request for environment
 Planning for provisioning of the environment
 Governance for the process and abstraction of the unnecessary data from the user
 Infrastructure provisioning and testing

An automated provisioning system will set up the environment, help scale when the application grows in size and
deallocate when the need is over.

Popular tools

 Azure DevOps Releases
 Ansible
 Chef
 Puppet
 ServiceNow

This automation thus helps streamline repetitive tasks and activities, which eventually results in speed and cost
reduction.

Practical tips

 Do not create too many automation scripts, creating islands (ex. one script for starting the VM, one for configuration etc.)
 Track wasted environments and release them
 Keep costs down by shutting down unwanted environments
 Have a governance process for environment provisioning
 Be aware that there is an initial investment needed for cloud and virtual environments
 Consider containerization using tools like Docker, as containers make applications portable and manageable

The development team at "Pura Vida" will have their QA and environment automated to achieve high speeds and
continuous delivery of features.

Dynamic environment provisioning and continuous testing
The "Pura Vida" team has automated the CI pipeline. However, the unit-tested working version it produces cannot be
deployed to production as-is.

They now need a way to perform QA activities continuously after CI is done; only then can the software be delivered
quickly. Presently this is manual.

They are unable to automate the testing because the environments for testing are not available when they need it.

The coach suggests the usage of just in time and on demand provisioning of testing environments. He also suggests
the usage of automated functional and performance tests. Let us understand the on-demand provisioning, QA
automation and their orchestration using Azure DevOps.

Continuous testing

Need

The software developed today has the objective of a seamless user experience connecting dependent processes,
systems and infrastructure. Testing such complex systems can pose a big challenge if done manually, as teams want
to ensure high quality software delivered at great speed. Hence continuous testing plays an important role in an
automated CICD pipeline.

What is continuous testing?

Continuous testing involves applying the principles of agility to the testing process. It requires the testing to be
automated and integrated with the CI process. QA testers configure automated test cases for functional,
performance, security tests etc. These tests are automatically invoked by a continuous orchestration tool.

Functional testing

Functional testing validates the software application against the business requirements to ensure that all features are
functioning as expected. This is done to minimize the risk of potential bugs. Here are the features of functional testing -

 Involves black box testing, where quality assurance experts focus only on software functionality without testing
the internal code structure
 Critical to any software, as it verifies that it is ready for release

Performance testing

Performance testing is crucial to determine that the web application under test will satisfy high load requirements. It
can be used to analyze overall server performance under heavy load. Load testing involves modelling the expected
usage by simulating multiple users accessing the web services concurrently. Every web server has a maximum load
capacity; when the load goes beyond that limit, the web server starts responding slowly and produces errors. The
purpose of stress testing is to find the maximum load the web server can handle.

Functional testing

Popular tools

 Selenium
 Telerik Test Studio
 Coded UI Test
 UFT (HPE Unified Functional Testing)

Functional testing using Selenium

Selenium - Features

Selenium is an open source functional testing tool. Here are some of the features-

 Is a web testing tool which uses simple scripts to run tests directly within a browser
 Is a portable software testing framework for web applications that provides a record/playback tool for authoring
tests without learning a test scripting language (Selenium IDE)

Sample test cases

Here is a snapshot of the test cases written in Selenium to test the login screen of a web application.
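
For illustration, a comparable WebDriver-based login test in Java might look like the following; the URL, element locators and expected title are hypothetical, and the sketch assumes the selenium-java dependency and a ChromeDriver installation:

import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginTest {

    @Test
    public void validLoginShowsHomePage() {
        WebDriver driver = new ChromeDriver();         // opens a Chrome browser session
        try {
            driver.get("http://localhost:8080/login"); // hypothetical login page URL
            driver.findElement(By.id("username")).sendKeys("testuser");
            driver.findElement(By.id("password")).sendKeys("testpass");
            driver.findElement(By.id("loginButton")).click();
            // verify that the login landed on the expected page
            assertTrue(driver.getTitle().contains("Home"));
        } finally {
            driver.quit();                             // always close the browser
        }
    }
}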

Performance testing


Popular tools

 WebLOAD
 LoadUI NG Pro
 SmartMeter.io
 LoadView
 Apache JMeter
 LoadRunner

JMeter - Features

JMeter is an open source performance testing tool. Here are the features -

 Can be used to test the performance of both static resources such as JavaScript and HTML, and dynamic resources such as JSPs, Servlets and AJAX
 Can discover the maximum number of concurrent users that the website can handle
 Provides a variety of graphical analyses of performance reports

Release management

The "Pura Vida" IT team is following a manual deployment process, which has the following issues -

 It is person dependent, time consuming and error prone
 Due to the manual process, the team realized that important steps in a release were accidentally missed, incorrect versions of software were shipped and fatal errors were not spotted at the right time

All this confusion was resulting in customer dissatisfaction and deployment of a defective end product in the
production environment. The coach suggests automating the release management process. Let us understand release
management activities.

Release management

Need

When an Agile approach to software development is undertaken and CI is adopted, a working version is created every
iteration. The development teams keep creating features with great frequency; if these are not released, the release
process can become a bottleneck. In order to make releases faster with an improved cycle time, the release process
needs to be automated.

What is release management?

Release management bridges the gap between Dev (test) and Ops. It uses logical policies and reduces the uncertainty
of releases when changes come in. Here are the activities involved in release management.

Benefits of release management

 Scalability – the team can automatically scale infrastructure to meet the growing needs of the project and customer. The platform can work with any application and provide the same experience.
 Centralized control – the automated process can launch a release of the software with the click of a button, and the team can track the status of application deployment centrally.
 Minimized downtime – helps to maximize application availability during the release process.
 Fully owned and managed services – eliminates the need to host, maintain, back up, and scale your own source control servers.
 Improved quality and speed – ensures that the correct version of the software is shipped to the customer with frequent releases.

Continuous Delivery and Deployment

The "Pura Vida" IT team, along with scale, is also looking at a deployment model that is cost effective, fast, less risky
and leaves no room for errors, so as to remain competitive. The IT team wants to ensure that they are releasing the
right product with quality built in, by putting quality gates for different levels of automated tests and code metrics for
every change. Currently they have a manual system of deployment, and the following are the challenges that the team
is facing right now.

The coach suggests that full automation of the delivery process will break up "Pura Vida" software builds into
stages, applying quality gates at each stage. Once a binary build is successful, it can be deployed in various
environments for testing purposes. Let us understand continuous delivery and continuous deployment.
Continuous deployment

Need

Automation of the deployment process reduces the chances of human error, resulting in speed, higher product quality
and an overall increase in productivity.

What is continuous deployment?

This is an extension of continuous integration which is intended to minimize the lead time between receiving a request
and delivery to production. To achieve this, the various stages up to deployment are automated.

Here are the perceived benefits of automation of the deployment process.

 Automation leads to on-time and frequent product releases
 Consolidated access to all tools, process and resource data leads to faster troubleshooting
 Effective collaboration between Dev, QA and Ops teams leads to higher quality and higher customer satisfaction
 A centralized view of all deployment activities and outcomes leads to faster and lower-effort audits
 Automation of repetitive tasks brings the focus back to actual testing
 Deployment is made effortless and frictionless, with the click of a button, without compromising security
 Teams can scale from a single application to an enterprise IT portfolio because of automation
 A harmonious workflow can connect your existing tools and technologies such as CI providers, DevOps tools, or scripts
 Workflows can be created across the development, testing and production environments
 Teams and processes are integrated with a unified pipeline
 Both cloud-native and traditional applications can be shipped in a unified pipeline
 Downtime is minimized – application availability is maximized during the software deployment process, and the deployment can easily be stopped or rolled back if there are errors

Popular Tools

Proprietary tools

 Microsoft Visual Studio
 IBM UrbanCode Deploy
 AWS CodeDeploy
 Bamboo

Open source tools

 GoCD is an open-source project sponsored by ThoughtWorks Inc.
 Capistrano is an open-source deployment tool programmed in Ruby.
 Travis CI can be synced to your GitHub account and allows you to automate testing and deployment.
 BuildBot is an open-source Python-based CI framework that describes itself as a "framework with batteries included".

Jenkins may be used in the initial phases for deployment, but for complex environments it would be ideal to use a
sophisticated tool as per the list mentioned above.

Continuous Integration vs. continuous delivery vs. continuous deployment

The table below provides insights into these terms
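
In brief, the distinction commonly drawn is:

 Continuous integration – every check-in triggers an automated build and test cycle, so integration problems surface immediately
 Continuous delivery – every change that passes the automated pipeline is kept in a releasable state; the actual push to production remains a manual business decision
 Continuous deployment – every change that passes all pipeline stages is released to production automatically, with no manual step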

Continuous deployment and release management

Release management helps in the automation of the deployment and testing of software in environments like unit
testing, QA, integration, acceptance testing etc. Each of these steps may be automated, or some of them can be
manual, though the former is recommended. Release management takes continuous deployment to the next level by
automating the movement of code from build all the way to the production environment.

Best practices and tips:

 Have only a single version control system for all documents
 Build binaries only once and deploy the same everywhere
 Run smoke tests frequently to ensure that crucial elements of the application work
 To eliminate surprises after deployment, deploy the application on a production-like environment. This includes infrastructure, operating system, databases, network topology, firewalls and configuration.
 Ensure every check-in of the source code triggers the CICD pipeline so that no code goes untested to deployment
 Using a database build automation tool and database release and verification processes will ensure stability in the CICD pipeline

Benefits of continuous deployment

The "Pura Vida" team can resolve the issues of poor quality deployments and the delay in deployments by trying to
automate the steps in continuous deployment overseen with the release management process. Presently they plan
to automate till staging environment and the deployment to production will be done manually. They would consider
automating the release management in the next quarter by adoption of the appropriate tools like IBM uDeploy and
uRelease.

Containerization

The “Pura Vida” team feels that there are going to be development, test, staging and later deployment environments.
They want to know if there is a way by which the Dev team will get identical environments and tool stacks all the time.
The coach recommends stateless architecture and containerization.

Let us understand containerization.

Need

Containers allow development teams to work in identical environments and stacks. They simplify the process of
deploying an image (OS, tools etc.) automatically, in very little time, across multiple environments.

What is containerization?

As seen in the capabilities for DevOps section, Software containers are a form of OS virtualization where the running
container includes just the minimum operating system resources, memory and services required to run an application
or service.

The benefits are as follows:

 Instant startup of operating system resources
 Container environments can be replicated, template-ized and validated for production deployments
 Greater performance with security

OS virtualization provides instant startup; reliable execution comes from the template and namespace isolation, and
greater performance comes from better resource governance. This makes containers ideal for application development
and testing. It also assures that if the container works in the staging/dev environment, it will work in the production
environment too.

Containerization using Docker

Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers
allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies,
and ship it all out as one package. Docker containers wrap up software and its dependencies into a standardized
unit for software development that includes everything it needs to run: code, runtime, system tools and libraries. This
guarantees that your application will always run the same and makes collaboration as simple as sharing a container
image.
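
For example, once a Dockerfile describing the application image exists, building and running it typically takes just two commands (the image name is illustrative):

docker build -t puravida-app .
docker run -d -p 8080:8080 puravida-app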

Best practices

 Security and governance is of utmost importance when containers are used for production deployments. A
minimalist OS which is hardened and patched should be used as the host OS and monitoring needs to be done
continuously on security vulnerabilities.
 The container must contain continuous monitoring tools to provide visualization and analytics
 Containers are transitory and hence data should be made persistent and protected after the association with the
container is over
 Containers present a potential situation where many virtual machines are spawned and lying unused. A proper
container lifecycle management should be put in place in association with the CICD pipeline

Popular tools

 Docker
 Kubernetes
 Apache Mesos
 Microsoft containers


The "Pura Vida" team is planning to use Docker with the Jenkins pipeline in future for deployment of the application
in dev and test environments, in their next phase of improvement in the DevOps implementation.

Gating conditions in a CICD pipeline

Need

Gating conditions are set to ensure that unstable code does not move between the different stages in the build
pipeline. They ensure that, for every build, only code that passes the specified tests goes for QA and release (called a
stable build).

What is gating?

Gating refers to the checks made on the source code to ensure code quality and test coverage, and that it passes all
the tests. It helps automate checks on code quality, test coverage and passing of all tests in every round of the build.
If any of these fail (unstable code), the code is reverted back to the last stable version.

How is gating achieved?

Gating can be done in the different stages of the CICD pipeline to ensure that, unless the condition is met, the next
stage in the pipeline is not executed. Gating conditions can be applied in tools like SonarQube, in a CI tool like
Jenkins, or using PowerShell scripts. For example:

 Static code analysis stage – set quality gates in SonarQube. Quality gates can be set in SonarQube with various conditions based on bugs, vulnerabilities and code smells.
 Unit tests – set conditions for unit test passing in Jenkins. A snapshot is shown below.
 Code coverage – set coverage conditions in Jenkins using a PowerShell script or a plugin (ex. the JaCoCo plugin for Jenkins). A snapshot is shown below.

Metrics to track CICD practices

Adoption of DevOps is a journey, and hence measurement and optimization of the automation and process need to be
done to obtain the perceived benefits.

The DevOps & Lean coach suggests the metrics that can be measured and tracked by the team at "Pura Vida".

Unified metrics need to be provided to the teams with a common goal identified. This will ensure collaboration
between the teams for achieving a common goal of speed.

Here are the common metrics that are tracked for continuous integration. These are provided by the various tools
which are used for automating the pipeline.

Here are the common metrics that are tracked for continuous delivery and deployment. These are provided by the
various tools which are used for automating the pipeline.

Configuring Pipeline

The pipeline job configuration page in Jenkins is organized into the tabs General, Build Triggers, Advanced Project
Options and Pipeline. The settings used for this pipeline are summarized below; options not mentioned were left unset.

General

 Description: (plain text)
 Other options on this tab: Discard old builds, Do not allow concurrent builds, Do not allow the pipeline to resume if the controller restarts, GitHub project, Pipeline speed/durability override, Preserve stashes from completed builds, This project is parameterized, Throttle builds

Build Triggers

 Available triggers and options: Build after other projects are built, Build periodically, GitHub Branches, GitHub Pull Request Builder, GitHub Pull Requests, GitHub hook trigger for GITScm polling, Ignore post-commit hooks, Disable this project, Quiet period, Trigger builds remotely (e.g., from scripts)
 Poll SCM: enabled, with Schedule * * * * *. Jenkins warns: Do you really mean "every minute" when you say "* * * * *"? Perhaps you meant "H * * * *" to poll once per hour.

Pipeline

 Definition: Pipeline script from SCM, with SCM set to Git
 Repository URL: https://github.com/brams17011983/simple-java-maven-app.git
 Credentials: added if the repository requires authentication
 Branch Specifier: blank for 'any'
 Repository browser and Additional Behaviours: defaults
 Script Path: ./jenkins/Jenkinsfile
 Lightweight checkout: optional

Jenkins Declarative Pipeline

pipeline
{
environment {
EMAIL_TO = '[email protected]'
EMAIL_FROM = '[email protected]'
}

agent any
stages {
stage('Build') {
steps {
sh 'mvn -B -DskipTests clean package'
}
}

stage('ExecuteSonarQubeReport'){
steps{
sh "mvn clean sonar:sonar"
}
}

stage('Test') {
steps {
sh 'mvn test'
}
post {
always {
junit 'target/surefire-reports/*.xml'
}
}
}
stage('Deliver') {
steps {
sh './jenkins/scripts/deliver.sh'
}
}
}

post{

always{
emailext to: "${EMAIL_TO}",
from: "${EMAIL_FROM}",
subject: "CSE-A Pipeline Build is over .. Build # is ..${env.BUILD_NUMBER} and Build status is.. ${currentBuild.result}.",
body: "CSE-A Pipeline Build is over .. Build # is ..${env.BUILD_NUMBER} and Build status is.. ${currentBuild.result}."
/* ,
replyTo: "${EMAIL_TO}" */
}
/*
success{
emailext to: '[email protected]',
subject: "Pipeline Build is over .. Build # is ..${env.BUILD_NUMBER} and Build status is.. ${currentBuild.result}.",
body: "Pipeline Build is over .. Build # is ..${env.BUILD_NUMBER} and Build status is.. ${currentBuild.result}.",
replyTo: '[email protected]'
}

failure{
emailext to: '[email protected]',
subject: "Pipeline Build is over .. Build # is ..${env.BUILD_NUMBER} and Build status is.. ${currentBuild.result}.",
body: "Pipeline Build is over .. Build # is ..${env.BUILD_NUMBER} and Build status is.. ${currentBuild.result}.",
replyTo: '[email protected]'
}*/
}
}
7. Implementation of CICD with Java and open source
stack : Configure the Jenkins pipeline to call the build
script jobs and configure to run it whenever there is a
change made to an application in the version control
system
Make some changes to the code in Remote repository and commit the changes.

Since we have configured the pipeline to poll SCM every minute, the pipeline gets executed automatically when
there is a change made in SCM. You can see the Git polling timing details under the "Git Polling Log" tab of the
specific pipeline.
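
(A Poll SCM schedule of * * * * * corresponds to polling once every minute; Jenkins itself suggests H * * * * when hourly polling is sufficient, to spread the load.)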

8. Implementation of CICD with Java and open source
stack : Configure it with user defined messages
Jenkins is a complete CI/CD tool that can start its process from the moment you push your latest code to Git, and
end after notifying you about the success/failure/error status of your CI/CD pipeline.

Notification is one of the key aspects of this pipeline: you don't want to keep looking at the Jenkins screen to know the
status of your pipeline; rather, you want to continue with your next piece of work and get an update when the pipeline
completes.

After executing the pipeline, we are going to send a notification mail with build details such as whether the build was
successful, the build number etc. The mail will be sent to the specified email addresses in case of success or
failure of the build.

The code to send a notification email in the Jenkins pipeline:

/* Declaring Environment Variables */


environment {
EMAIL_TO = '[email protected]'
EMAIL_FROM = '[email protected]'
}

/* Script to send email notification */


post{

always{
emailext to: "${EMAIL_TO}",
from: "${EMAIL_FROM}",
subject: "Changed Pipeline Build is over .. Build # is ..${env.BUILD_NUMBER} and Build status is.. ${currentBuild.result}.",
body: "Changed Pipeline Build is over .. Build # is ..${env.BUILD_NUMBER} and Build status is.. ${currentBuild.result}."

}
}

Console Output of Pipeline after build.

[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] emailext
Sending mail from default account using custom from address [email protected]
messageContentType = text/plain; charset=UTF-8
Adding recipients from project recipient list
Analyzing: [email protected]
Looking for: [email protected]
starting at: 0
firstFoundIdx: 0
firstFoundIdx-substring: [email protected]
=> found type: 0
Analyzing: [email protected]
Looking for: [email protected]
starting at: 0
firstFoundIdx: 0
firstFoundIdx-substring: [email protected]
=> found type: 0
Analyzing: [email protected]
Looking for: [email protected]
starting at: 0
firstFoundIdx: 0
firstFoundIdx-substring: [email protected]
=> found type: 0
Adding recipients from trigger recipient list
Successfully created MimeMessage
Sending email to: [email protected]
DEBUG: getProvider() returning
jakarta.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Oracle]
DEBUG SMTP: need username and password for authentication
DEBUG SMTP: protocolConnect returning false, host=smtp.mail.yahoo.com, user=DESKTOP-
8AGRSCH$, password=<null>
DEBUG SMTP: useEhlo true, useAuth true
DEBUG SMTP: trying to connect to host "smtp.mail.yahoo.com", port 465, isSSL false
220 smtp.mail.yahoo.com ESMTP ready
DEBUG SMTP: connected to host "smtp.mail.yahoo.com", port: 465
EHLO DESKTOP-8AGRSCH
250-hermes--production-sg3-6c8895b545-nh2c9 Hello DESKTOP-8AGRSCH [175.101.120.147])
250-PIPELINING
250-ENHANCEDSTATUSCODES
250-8BITMIME
250-SIZE 41697280
250 AUTH PLAIN LOGIN XOAUTH2 OAUTHBEARER
DEBUG SMTP: Found extension "PIPELINING", arg ""
DEBUG SMTP: Found extension "ENHANCEDSTATUSCODES", arg ""
DEBUG SMTP: Found extension "8BITMIME", arg ""
DEBUG SMTP: Found extension "SIZE", arg "41697280"
DEBUG SMTP: Found extension "AUTH", arg "PLAIN LOGIN XOAUTH2 OAUTHBEARER"
DEBUG SMTP: STARTTLS requested but already using SSL
DEBUG SMTP: protocolConnect login, host=smtp.mail.yahoo.com, [email protected],
password=<non-null>
DEBUG SMTP: Attempt to authenticate using mechanisms: LOGIN PLAIN DIGEST-MD5 NTLM
XOAUTH2
DEBUG SMTP: Using mechanism LOGIN
DEBUG SMTP: AUTH LOGIN command trace suppressed
DEBUG SMTP: AUTH LOGIN succeeded
DEBUG SMTP: use8bit false
MAIL FROM:<[email protected]>
250 2.1.0 Sender <[email protected]> OK
DEBUG SMTP: sendPartial set
RCPT TO:<[email protected]>
250 2.1.5 Recipient <[email protected]> OK
DEBUG SMTP: Verified Addresses
DEBUG SMTP: [email protected]
DATA
354 Ok Send data ending with <CRLF>.<CRLF>
Date: Tue, 22 Nov 2022 16:30:08 +0530 (IST)
From: [email protected]
To: [email protected]
Message-ID: <1317370782.63.1669114810335@DESKTOP-8AGRSCH>
Subject: Changed Pipeline Build is over .. Build # is ..198 and Build status
is.. SUCCESS.
MIME-Version: 1.0
Content-Type: multipart/mixed;
boundary=" --- =_Part_62_2101512606.1669114808244"
X-Jenkins-Job: IT PIPELINE
X-Jenkins-Result: SUCCESS
List-ID:

------=_Part_62_2101512606.1669114808244
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Changed Pipeline Build is over .. Build # is ..198 and Build status is.. SUCCESS.
------=_Part_62_2101512606.1669114808244--
.
250 OK , completed
DEBUG SMTP: message successfully delivered to mail server
QUIT
221 Service Closing transmission
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS

Email Notification for Build status - Success

Email Notification for Build status - Failure

9. Implementation of CICD with Java and open source stack: implement quality gates for static analysis of code
Configure Webhook in SonarQube
Log in to SonarQube.
Click on the Administration tab at the top.
Click on the Security tab on the left pane and disable "Enable local webhooks validation" as shown in the image below.

Click on the Configuration menu and select Webhooks as shown in the image below.

In the Create Webhook page, provide a valid name and URL.

Name : <any name>
URL : <your Jenkins URL on the system>/sonarqube-webhook/

96
Once the webhook is created successfully, create a quality gate and add conditions to it as shown below.

Go to the Jenkinsfile in the GitHub project repository, add the following stage to the Jenkins pipeline, and commit the changes.
stage('QualityGate') {
    steps {
        timeout(time: 1, unit: 'HOURS') {
            waitForQualityGate abortPipeline: true
        }
    }
}
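Note that waitForQualityGate only receives a result if a SonarQube analysis ran earlier in the same build inside a withSonarQubeEnv block; that is what allows the webhook configured above to correlate the analysis with this pipeline run. A minimal sketch of such a preceding stage, assuming a SonarQube server entry named 'MySonarServer' is configured in Jenkins (the name is a placeholder):

stage('SonarQube Analysis') {
    steps {
        // Injects the server URL and authentication token configured
        // under Manage Jenkins -> System -> SonarQube servers
        withSonarQubeEnv('MySonarServer') {
            // Assumes a Maven project; use 'bat' instead of 'sh' on Windows
            sh 'mvn clean verify sonar:sonar'
        }
    }
}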

Go to the Jenkins dashboard, select your project and click on Build Now.

In the SonarQube dashboard, you will see the project analysis marked as failed. The reason for the failure is that we created a condition requiring the number of Bugs to be 0.

Update the quality gate's Bugs condition in SonarQube.

Build the pipeline in Jenkins again, and now you can see the project analysis marked as passed in the SonarQube dashboard.

10. Implementation of CICD with Java and open source stack: implement quality gates for unit testing
Create a new quality gate and add a condition for Unit Test Success (%) to be 100, as shown below.

Go to the Jenkins dashboard, select your project and click on Build Now.
After the build completes, you can see the test results of the pipeline.

Open SonarQube and you will see the project analysis marked as Success, with all the test cases passed.

11. Implementation of CICD with Java and open source stack: implement quality gates for code coverage
Code coverage, also called test coverage, is a measure of how much of the application’s code has been run in testing.
Essentially, it's a metric that many teams use to check the quality of their tests because it represents the percentage
of the production code that has been tested and run.

This gives development teams reassurance that their programs have been broadly tested for bugs and should be
relatively error-free.

SonarQube and JaCoCo

SonarQube inspects and evaluates everything that affects our codebase, from minor styling details to critical design
errors. This enables developers to access and track code analysis data ranging from styling errors, potential bugs
and code defects, to design inefficiencies, code duplication, lack of test coverage and excess complexity.

It also lets us define a quality gate, which is a set of measure-based boolean conditions. Through this, SonarQube helps us know whether our code is production-ready.

SonarQube is used in integration with JaCoCo, a free code coverage library for Java.

Maven Configuration

1. Download SonarQube

We can download SonarQube from its official website.


To start SonarQube, run the file named StartSonar.bat for a Windows machine or the file sonar.sh for Linux or
macOS. The file is in the bin directory of the extracted download.

2. Set Properties for SonarQube and JaCoCo

Let's first add the necessary properties that define the JaCoCo version, plugin name, report path and sonar
language:
<properties>
    <!-- JaCoCo Properties -->
    <jacoco.version>0.8.6</jacoco.version>
    <sonar.java.coveragePlugin>jacoco</sonar.java.coveragePlugin>
    <sonar.dynamicAnalysis>reuseReports</sonar.dynamicAnalysis>
    <sonar.jacoco.reportPath>${project.basedir}/../target/jacoco.exec</sonar.jacoco.reportPath>
    <sonar.language>java</sonar.language>
</properties>
The property sonar.jacoco.reportPath specifies the location where the JaCoCo report will be generated.
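As an aside: on recent SonarQube versions, importing the binary jacoco.exec file via sonar.jacoco.reportPath is deprecated in favour of JaCoCo's XML report, configured with a property like the one below (the path shown is JaCoCo's default XML report location; adjust it to your project):

<sonar.coverage.jacoco.xmlReportPaths>${project.build.directory}/site/jacoco/jacoco.xml</sonar.coverage.jacoco.xmlReportPaths>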

3. Dependencies and Plugins for JaCoCo

The JaCoCo Maven plugin provides access to the JaCoCo runtime agent, which records execution coverage
data and creates a code coverage report.
Now let's have a look at the dependency we'll add to our pom.xml file:
<dependency>
    <groupId>org.jacoco</groupId>
    <artifactId>jacoco-maven-plugin</artifactId>
    <version>0.8.6</version>
</dependency>
Next, let's configure the plugin that integrates our Maven project with JaCoCo:
<plugin>
    <groupId>org.jacoco</groupId>
    <artifactId>jacoco-maven-plugin</artifactId>
    <version>${jacoco.version}</version>
    <executions>
        <execution>
            <id>jacoco-initialize</id>
            <goals>
                <goal>prepare-agent</goal>
            </goals>
        </execution>
        <execution>
            <id>jacoco-site</id>
            <phase>package</phase>
            <goals>
                <goal>report</goal>
            </goals>
        </execution>
    </executions>
</plugin>

SonarQube in Action

Now that we've defined the required dependency and plugin in our pom.xml file, we'll run mvn clean install to
build our project.
Then we'll start the SonarQube server before running the command mvn sonar:sonar.
Once this command runs successfully, it will give us a link to the dashboard of our project's code coverage
report:

Notice that it creates a file named jacoco.exec in the target folder of the project.
This file contains the code coverage result that SonarQube will use:

It also creates a dashboard in the SonarQube portal.


This dashboard shows the coverage report with all the issues, security vulnerabilities, maintainability metrics
and code duplication blocks found in our code:

Code Coverage Implementation

Go to Jenkins and build the pipeline that we created earlier; a sketch of the corresponding pipeline stages follows below.
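A minimal sketch, assuming the Maven/JaCoCo setup above, of how the coverage steps can be wired into the Jenkins pipeline; the stage names and the 'MySonarServer' server entry are placeholders, not part of the original pipeline:

stage('Build & Test') {
    steps {
        // Runs the unit tests; the jacoco-maven-plugin bound in pom.xml
        // writes target/jacoco.exec during the test phase
        sh 'mvn clean install'   // use 'bat' instead of 'sh' on Windows
    }
}
stage('Coverage Analysis') {
    steps {
        // Publishes the analysis, including the JaCoCo coverage data,
        // to the SonarQube server configured in Jenkins
        withSonarQubeEnv('MySonarServer') {
            sh 'mvn sonar:sonar'
        }
    }
}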

