DevOps Course File
(2024 - 2025)
COURSE FILE
SUBJECT: DEVOPS
ACADEMIC YEAR: 2024-2025
REGULATION: R22
SUBJECT CODE:
INDEX
S.NO  TOPIC
1. PEOs, POs, PSOs
2. Syllabus Copy
5. Lesson Plan
Unit Wise Lecture Notes
a) Notes of Units
b) Assignment Questions
e) Objective Questions
PEO1: The graduates of the program will understand the concepts and principles of Computer Science
and Engineering inclusive of basic sciences.
PEO2: The program enables the learners to provide the technical skills necessary to design and implement
computer systems and applications, to conduct open-ended problem solving, and apply critical thinking.
PEO3: The graduates of the program will practice the profession by working effectively in teams,
communicating in written and oral form, and upholding ethics, integrity, leadership, and social responsibility
through safe engineering, enabling them to contribute their might for the good of the human race.
PEO4: The program encourages the students to treat learning as a lifelong activity and as a means to the creative
discovery, development, and implementation of technology, as well as to keep up with the dynamic nature
of the Computer Science and Engineering discipline.
PROGRAM OUTCOMES
Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals, and
an engineering specialization to the solution of complex engineering problems.
Problem analysis: Identify, formulate, review research literature, and analyze complex engineering
problems reaching substantiated conclusions using first principles of mathematics, natural sciences, and
engineering sciences.
Design/development of solutions: Design solutions for complex engineering problems and design system
components or processes that meet the specified needs with appropriate consideration for the public health
and safety, and the cultural, societal, and environmental considerations.
Conduct investigations of complex problems: Use research-based knowledge and research methods
including design of experiments, analysis and interpretation of data, and synthesis of the information to
provide valid conclusions.
Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern engineering
and IT tools including prediction and modeling to complex engineering activities with an understanding of
the limitations.
The engineer and society: Apply reasoning informed by the contextual knowledge to assess societal,
health, safety, legal and cultural issues and the consequent responsibilities relevant to the professional
engineering practice.
Environment and sustainability: Understand the impact of the professional engineering solutions in
societal and environmental contexts, and demonstrate the knowledge of, and need for sustainable
development.
Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of the
engineering practice.
Individual and team work: Function effectively as an individual, and as a member or leader in diverse
teams, and in multidisciplinary settings.
Project management and finance: Demonstrate knowledge and understanding of the engineering and
management principles and apply these to one’s own work, as a member and leader in a team, to manage
projects and in multidisciplinary environments.
Life-long learning: Recognize the need for, and have the preparation and ability to
engage in independent and life-long learning in the broadest context of technological change.
PSO1: Design and development of software applications by using data mining techniques.
PSO2: Enrichment of graduates with global certifications to produce reliable software solutions.
VISION
MISSION
M2. To provide state-of-the-art infrastructure to create an ecosystem for nurturing research, consultancy, and
entrepreneurship.
M3. To inculcate professional behavior and ethical values, and to promote social responsibility through the
teaching-learning process and collaboration.
Course Objectives:
The main objectives of this course are to:
Course Outcomes:
On successful completion of this course, students will be able to:
Syllabus
UNIT-I
Introduction:
Introduction, Agile development model, DevOps, and ITIL. DevOps process and Continuous Delivery,
Release management, Scrum, Kanban, delivery pipeline, bottlenecks, examples.
UNIT-II
Software development models and DevOps:
DevOps Lifecycle for Business Agility, DevOps, and Continuous Testing. DevOps influence on
Architecture: Introducing software architecture, The monolithic scenario, Architecture rules of thumb,
The separation of concerns, Handling database migrations, Microservices, and the data tier, DevOps,
architecture, and resilience.
UNIT–III
Introduction to project management: The need for source code control, The history of source code
management, Roles and code, source code management system and migrations, Shared authentication,
Hosted Git servers, Different Git server implementations, Docker intermission, Gerrit, The pull request
model, GitLab.
UNIT-IV
Integrating the system:
Build systems, Jenkins build server, Managing build dependencies, Jenkins plugins, and file system
layout, The host server, Build slaves, Software on the host, Triggers, Job chaining and
Build pipelines, Build servers and infrastructure as code, Building by dependency order, Build phases,
Alternative build servers, Collating quality measures.
UNIT-V
Testing Tools and automation: Various types of testing, Automation of testing Pros and cons, Selenium -
Introduction, Selenium features, JavaScript testing, Testing backend integration points, Test-driven
development, REPL-driven development Deployment of the system: Deployment systems, Virtualization
stacks, code execution at the client, Puppet master and agents, Ansible, Deployment tools: Chef, Salt Stack
and Docker
Lesson Plan
PROGRAM: B.TECH (CS) DEGREE: UG YEAR & SEM: III YEAR & II-SEM A.Y: 2024-25
COURSE: DevOps FACULTY NAME: J NARESH KUMAR
S.NO | Topic to be Covered | Required Classes | Cumulative Classes | Date of Completion | Text Book | Teaching Aid
1 UNIT-I: Introduction to DevOps 1 1 TB1 Chalk & Talk
2 Agile development model 1 2 TB1 Chalk & Talk
3 DevOps, and ITIL 1 3 TB1 Chalk & Talk
4 DevOps process and Continuous Delivery 1 4 TB1 Chalk & Talk
5 DevOps process and Continuous Delivery 1 5 TB1 Chalk & Talk
6 Release management 1 6 TB1 Chalk & Talk
7 Scrum, Kanban 1 7 TB1 Chalk & Talk
8 Scrum, Kanban 1 8 TB1 Chalk & Talk
9 delivery pipeline 1 9 TB1 Chalk & Talk
10 bottlenecks, examples. 1 10 TB1 Chalk & Talk
11 UNIT-II: DevOps Lifecycle for Business Agility 1 11 TB1 Chalk & Talk
12 DevOps, and Continuous Testing 1 12 TB1 Chalk & Talk
13 DevOps influence on Architecture 1 13 TB1 Chalk & Talk
14 Introducing software architecture 1 14 TB1 Chalk & Talk
15 The monolithic scenario 1 15 TB1 Chalk & Talk
16 Architecture rules of thumb 1 16 TB1 Chalk & Talk
17 The separation of concerns 1 17 TB1 Chalk & Talk
18 Handling database migrations 1 18 TB1 Chalk & Talk
19 Microservices, and the data tier 1 19 TB1 Chalk & Talk
20 DevOps, architecture, and resilience 1 20 TB1 Chalk & Talk
21 UNIT-III: The need for source code control 1 21 TB1 Chalk & Talk
22 The history of source code management 1 22 TB1 Chalk & Talk
23 Roles and code, source code management system and migrations 1 23 TB1 Chalk & Talk
24 Shared authentication 1 24 TB1 Chalk & Talk
25 Hosted Git servers 1 25 TB1 Chalk & Talk
26 Different Git server implementations 1 26 TB1 Chalk & Talk
27 Docker intermission 1 27 TB1 Chalk & Talk
28 Gerrit, The pull request model, GitLab 2 28 TB1 Chalk & Talk
29 Gerrit, The pull request model, GitLab 1 29 TB1 Chalk & Talk
30 UNIT-IV: Integrating the system: Build systems 1 30 TB1 Chalk & Talk
31 Jenkins build server 1 31 TB1 Chalk & Talk
32 Managing build dependencies 1 32 TB1 Chalk & Talk
33 Jenkins plugins and file system layout, 1 33 TB1 Chalk & Talk
34 The host server, Build slaves 1 34 TB1 Chalk & Talk
35 Software on the host, Triggers 1 35 TB1 Chalk & Talk
36 Job chaining and Build pipelines 1 36 TB2 Chalk & Talk
37 Build servers and infrastructure as code 1 37 TB2 Chalk & Talk
38 Building by dependency order 1 38 TB2 Chalk & Talk
39 Build phases 1 39 TB2 Chalk & Talk
40 Alternative build servers 1 40 TB1 Chalk & Talk
41 Collating quality measures 1 41 TB2 Chalk & Talk
42 UNIT-V: Testing Tools and automation 1 42 TB2 Chalk & Talk
43 Various types of testing 1 43 TB2 Chalk & Talk
44 Automation of testing Pros and cons 1 44 TB2 Chalk & Talk
45 Selenium - Introduction, Selenium features 1 45 TB1 Chalk & Talk
46 JavaScript testing 2 46 TB2 Chalk & Talk
47 Testing backend integration points 1 47 TB1 Chalk & Talk
48 Test-driven development 1 48 TB2 Chalk & Talk
49 REPL-driven development, Deployment of the system 1 49 TB2 Chalk & Talk
50 Deployment systems, Virtualization stacks 1 50 TB2 Chalk & Talk
51 code execution at the client 1 51 TB2 Chalk & Talk
52 Puppet master and agents 1 52 TB2 Chalk & Talk
53 Ansible, Deployment tools: 1 53 TB2 Chalk & Talk
54 Chef, Salt Stack and Docker 1 54 TB2 Chalk & Talk
Lecture Notes
Unit 1
Introduction
History
A software life cycle model (also termed a process model) is a pictorial and diagrammatic
representation of the software life cycle. A life cycle model represents all the methods required to
make a software product transition through its life cycle stages. It also captures the structure in
which these methods are to be undertaken.
Stage 1: Planning and Requirement Analysis
The senior members of the team perform planning and requirement analysis with inputs from all the
stakeholders and domain experts or SMEs in the industry.
Planning for the quality assurance requirements and identification of the risks associated with
the project is also done at this stage.
The business analyst and project organizer set up a meeting with the client to gather all the data, like
what the customer wants to build, who will be the end user, and what the objective of the product is.
Before creating a product, a core understanding or knowledge of the product is very necessary.
For example, a client wants to have an application which concerns money transactions. In this
case, the requirements have to be precise: what kind of operations will be done, how they will
be done, in which currency they will be done, etc.
Once the required functionality is gathered, an analysis is performed to audit the feasibility of
developing the product. In case of any ambiguity, a further discussion is set up.
Stage 2: Defining Requirements
Once the requirement analysis is done, the next stage is to clearly represent and document the
software requirements and get them accepted by the project stakeholders.
Stage 3: Designing the Software
The next phase is about bringing down all the knowledge of requirements, analysis, and design of the
software project. This phase is the product of the last two, like inputs from the customer and
requirement gathering.
Stage 4: Developing the Project
In this phase of the SDLC, the actual development begins, and the programming is built. The
implementation of the design begins with writing code. Developers have to follow the coding
guidelines described by their management, and programming tools like compilers, interpreters,
debuggers, etc. are used to develop and implement the code.
Stage 5: Testing
After the code is generated, it is tested against the requirements to make sure that the product is
solving the needs addressed and gathered during the requirements stage.
During this stage, unit testing, integration testing, system testing, and acceptance testing are done.
Stage 6: Deployment
Once the software is certified, and no bugs or errors are reported, it is deployed.
Then, based on the assessment, the software may be released as it is or with suggested
enhancements in the object segment.
After the software is deployed, its maintenance begins.
Stage 7: Maintenance
Once the client starts using the developed system, the real issues come up and need
to be solved from time to time.
Waterfall Model
Winston Royce introduced the Waterfall Model in 1970. This model has five phases:
requirements analysis and specification; design; implementation and unit testing; integration and
system testing; and operation and maintenance. The steps always follow in this order and do not
overlap. The developer must complete every phase before the next phase begins. This model is
named the "Waterfall Model" because its diagrammatic representation resembles a cascade of
waterfalls.
1. Requirements analysis and specification phase: The aim of this phase is to understand the
exact requirements of the customer and to document them properly. Both the customer and the
software developer work together to document all the functions, performance, and
interfacing requirements of the software. It describes the "what" of the system to be produced and
not the "how." In this phase, a large document called the Software Requirement Specification
(SRS) document is created, which contains a detailed description of what the system will do in
common language.
2. Design Phase: This phase aims to transform the requirements gathered in the SRS into a
suitable form which permits further coding in a programming language. It defines the overall
software architecture together with high-level and detailed design. All this work is documented as
a Software Design Document (SDD).
3. Implementation and unit testing: During this phase, the design is implemented. If the SDD is
complete, the implementation or coding phase proceeds smoothly, because all the information
needed by software developers is contained in the SDD.
During testing, the code is thoroughly examined and modified. Small modules are tested in
isolation initially. After that, these modules are tested by writing some overhead code to check the
interaction between these modules and the flow of intermediate output.
4. Integration and System Testing: This phase is highly crucial, as the quality of the end
product is determined by the effectiveness of the testing carried out. Better output leads to
satisfied customers, lower maintenance costs, and accurate results. Unit testing determines the
efficiency of individual modules. However, in this phase, the modules are tested for their
interactions with each other and with the system.
5. Operation and maintenance phase: Maintenance is the task performed by every user once
the software has been delivered to the customer, installed, and operational.
Advantages of Waterfall model
o This model is simple to implement; also, the number of resources required for it is minimal.
o The requirements are simple and explicitly declared; they remain unchanged during the entire project
development.
o The start and end points for each phase are fixed, which makes it easy to track progress.
o The release date for the complete product, as well as its final cost, can be determined before development.
o It gives easy control and clarity to the customer due to a strict reporting system.
Disadvantages of Waterfall model
o In this model, the risk factor is higher, so this model is not suitable for large and complex
projects.
o This model cannot accept changes in requirements during development.
o It becomes tough to go back to a previous phase. For example, if the application has now shifted to
the coding phase and there is a change in requirements, it becomes tough to go back and change it.
o Since the testing is done at a later stage, it does not allow identifying the challenges and risks in the
earlier phases, so a risk reduction strategy is difficult to prepare.
Introduction
This DevOps tutorial will help you to learn DevOps basics and provide in-depth knowledge of
various DevOps tools such as Git, Ansible, Docker, Puppet, Jenkins, Chef, Nagios, and Kubernetes.
What is DevOps?
Why DevOps?
o In 2009, the first conference named DevOpsdays was held in Ghent, Belgium. Belgian
consultant Patrick Debois founded the conference.
o In 2012, the State of DevOps report was launched and conceived by Alanna Brown at
Puppet.
o In 2014, the annual State of DevOps report was published by Nicole Forsgren, Jez
Humble, Gene Kim, and others. They found DevOps adoption was accelerating in 2014
as well.
o In 2015, Nicole Forsgren, Gene Kim, and Jez Humble founded DORA (DevOps Research
and Assessment).
o In 2017, Nicole Forsgren, Gene Kim, and Jez Humble published "Accelerate: Building
and Scaling High Performing Technology Organizations".
Agile development
The agile software development process frequently takes feedback on a workable product. The
workable product is delivered in iterations of 1 to 4 weeks.
ITIL
ITIL is an abbreviation of Information Technology Infrastructure Library.
It is a framework which helps IT professionals deliver the best IT services. This
framework is a set of best practices to create and improve the process of ITSM (IT Service
Management). It provides a framework within an organization, which helps in planning,
measuring, and implementing the services of IT.
The main motive of this framework is that the resources are used in such a way that the
customers get better services and the business gets the profit.
It is not a standard but a collection of best-practice guidelines.
Service Lifecycle in ITIL
1. Service Strategy
2. Service Design
3. Service Transition
4. Service Operation
5. Continual Service Improvement
Service Strategy
Service Strategy is the first and initial stage in the lifecycle of the ITIL framework. The main
aim of this stage is to offer a strategy, on the basis of the current market scenario and
business perspective, for the services of IT.
This stage mainly defines the plans, position, patterns, and perspective which are required for a
service provider. It establishes the principles and policies which guide the whole lifecycle of IT
service.
Following are the various essential services or processes which come under the Service
Strategy stage:
o Financial Management
o Demand Management
o Service Portfolio Management
o Business Relationship Management
o Strategy Management
Strategy Management:
According to version 3 (V3) of ITIL, this process includes the following activities for IT services:
1. Identification of Opportunities
2. Identification of Constraints
3. Organizational Positioning
4. Planning
5. Execution
Following are the three sub-processes which come under this management process:
1. Strategic Service Assessment
2. Service Strategy Definition
3. Service Strategy Execution
Financial Management:
This process helps in determining and controlling all the costs which are associated with the services of
an IT organization. It also contains the following three basic activities:
Following are the four sub-processes which come under this management process:
1. Financial Management Support
2. Financial Planning
3. Financial Analysis and Reporting
4. Service Invoicing
Demand Management
This management process is critical and most important in this stage. It helps the service
providers to understand and predict customer demand for the IT services. Demand
management is a process which also works with the process of Capacity Management. Following
are the basic objectives of this process:
o This process balances the resources demand and supply.
o It also manages or maintains the quality of service.
Following are the activities of this process:
1. Analysing the current usage of IT services
2. Anticipating the future demands for the services of IT
3. Influencing consumption by technical or financial means
Following are the two sub-processes which come under this management process:
1. Demand Prognosis
2. Demand Control
Business Relationship Management
This management process is responsible for maintaining a positive and good relationship
between the service provider and their customers. It also identifies the needs of a customer and
then ensures that the services are implemented by the service provider to meet those requirements.
This process was released as a new process in ITIL 2011.
According to version 3 (V3) of ITIL, this process performs the following various activities:
o This process is used to represent the service provider to the customer in a positive manner.
o This process identifies the business needs of a customer.
o It also acts as a mediator if there is any case of conflicting requirements from different businesses.
Following are the sub-processes which come under this management process:
1. Maintain Customer Relationships
2. Identify Service Requirements
3. Sign up Customers to Standard Services
4. Customer Satisfaction Survey
5. Handle Customer Complaints
6. Monitor Customer Complaints
Service Portfolio Management
This management process defines the set of customer-oriented services which are provided by a
service provider to meet the customer requirements. The primary goal of this process is to
maintain the service portfolio.
Following are the three types of services under this management process:
1. Live Services
2. Retired Services
3. Service Pipeline
Following are the three sub-processes which come under this management process:
1. Define and analyse the new or changed services of IT
2. Approve the changes or new IT services
3. Service Portfolio review
Service Design
It is the second phase or stage in the lifecycle of a service in the ITIL framework. This stage
provides the blueprint for the IT services. The main goal of this stage is to design the new IT
services. We can also change the existing services in this stage.
Following are the various essential services or processes which come under the Service Design
stage:
o Service Level Management
o Capacity Management
o Availability Management
o Risk Management
o Service Continuity Management
o Service Catalogue Management
o Information Security Management
o Supplier Management
o Compliance Management
o Architecture Management
Service Level Management
In this process, the Service Level Manager is the process owner. This management process was fully
redesigned in ITIL 2011.
Service Level Management deals with the following two different types of agreements:
1. Operational Level Agreement
2. Service Level Agreement
o It manages and reviews all the IT services to match Service Level Agreements.
o It determines, negotiates, and agrees on the requirements for the new or changed IT services.
Following are the four sub-processes which come under this management process:
1. Maintenance of the SLM framework
2. Identifying the requirements of services
3. Agreements sign-off and activation of the IT services
4. Service Level Monitoring and Reporting
Capacity Management
Following are the four sub-processes which come under this management process:
1. Business Capacity Management
2. Service Capacity Management
3. Component Capacity Management
4. Capacity Management Reporting
Availability Management
In this process, the Availability Manager is the owner. This management process has a
responsibility to ensure that the services of IT meet the agreed availability goals. This process
also confirms that services which are new or changed do not affect the existing services.
According to version 3 (V3) of ITIL, this process contains the following two activities:
1. Reactive Activity
2. Proactive Activity
Following are the sub-processes which come under this management process:
1. Design the IT services for availability
2. Availability Testing
3. Availability Monitoring and Reporting
Risk Management
In this process, the Risk Manager is the owner. This management process allows the risk
manager to check, assess, and control the business risks. If any risk is identified in the process of
business, an entry for that risk is created in the ITIL Risk Register.
According to version 3 (V3) of ITIL, this process performs the following activities in the
given order:
o It identifies the threats.
o It finds the probability and impact of risk.
o It checks the ways of reducing those risks.
o It always monitors the risk factors.
Following are the sub-processes which come under this management process:
1. Risk Management Support
2. Impact on Business and Risk Analysis
3. Monitoring the Risks
4. Assessment of Required Risk Mitigation
Service Catalogue Management (SCM)
In this process, the Service Catalogue Manager is the owner. This management process allows
the Catalogue Manager to give comprehensive information about all the other management processes.
It contains the services in the service operation phase which are presently active.
It is a process which certifies that the service catalogue is maintained, produced, and contains all
the accurate information for all the operational IT services.
The service catalogue has two types:
1. BSC or Business Service Catalogue
2. TSC or Technical Service Catalogue
Under this management process, no sub-process is specified or defined.
Service Continuity Management
In this process, the IT Service Continuity Manager is specified as the owner. It allows the
continuity manager to manage the risks which could impact the services of IT.
The ITSCM consists of the following four activities or stages:
1. Initiation
2. Requirements and Strategy
3. Implementation
4. Ongoing Operation
Information Security Management
In this process, the Information Security Manager is specified as the owner. The main aim of
this management process is to verify the confidentiality, integrity, and availability of the data,
information, and services of an IT organization.
According to version 3 (V3) of ITIL, this process performs the following four activities:
1. Plan
2. Implement
3. Evaluation
4. Maintain
Following are the sub-processes which come under this management process:
1. Design of Security Controls
2. Validation and Testing of Security
3. Management of Security Incidents
4. Security Review
Supplier Management
In this process, the Supplier Manager is the owner. This process also works with financial and knowledge
management, which helps in selecting the suppliers on the basis of previous knowledge.
Following are the various activities which are involved in this process:
o It manages the sub-contracted suppliers.
o It manages the relationship with the suppliers.
o It helps in implementing the supplier policy.
o It also manages the supplier policy and supports the SCMIS.
o It also manages or maintains the performance of the suppliers.
According to version 3 (V3) of ITIL, following are the six sub-processes which come under
this management process:
1. Provide the Framework of Supplier Management
2. Evaluation and selection of new contracts and suppliers
3. Establish the new contracts and suppliers
4. Process the standard orders
5. Contract and Supplier Review
6. Contract Renewal or Termination
Compliance Management
In this process, the Compliance Manager plays the role of the owner. This management process
allows the compliance manager to check and address all the issues which are associated with
regulatory and non-regulatory compliance.
Under this compliance management process, no sub-process is specified or defined.
Here, the role of the Compliance Manager is to certify that the guidelines, legal requirements, and
standards are being followed properly. This manager works in parallel with the following
three managers:
1. Information Security Manager
2. Financial Manager
3. Service Design Manager
Architecture Management
In this process, the Enterprise Architect plays the role of the owner. The main aim of the Enterprise
Architect is to maintain and manage the architecture of the Enterprise.
Service Transition
Service Transition is the third stage in the lifecycle of the ITIL Management Framework.
The main goal of this stage is to build, test, and develop the new or modified services of IT. This
stage of the service lifecycle manages the risks to the existing services. It also certifies that the value of a
business is obtained.
This stage also makes sure that the new and changed IT services meet the expectations of the
business as defined in the previous two stages of Service Strategy and Service Design in the
lifecycle.
It can also easily manage or maintain the transition of new or modified IT services from the Service
Design stage to the Service Operation stage.
Following are the various essential services or processes which come under the Service
Transition stage:
o Change Management
o Release and Deployment Management
o Service Asset and Configuration Management
o Knowledge Management
o Project Management (Transition Planning and Support)
o Service Validation and Testing
o Change Evaluation
Change Management
In this process, the Change Manager plays the role of the owner. The Change Manager controls or
manages the service lifecycle of all changes. It also allows the Change Manager to implement all
the essential changes required with the least disruption of IT services.
This management process also allows its owner to recognize and stop any unintended change
activity. Actually, this management process is tightly bound with the process "Service Asset and
Configuration Management".
There are three types of changes:
1. Normal Change
2. Standard Change
3. Emergency Change
All these changes are also known as the Change Models.
According to version 3 (V3) of ITIL, following are the eleven sub-processes which come
under this Change Management process:
1. Change Management Support
2. RFC (Request for Change) Logging and Review
3. Change Assessment by the Owner (Change Manager)
4. Assess and Implement the Emergency Changes
5. Assessment of Change Proposals
6. Change Scheduling and Planning
7. Change Assessment by the CAB
8. Change Development Authorization
9. Implementation or Deployment of Change
10. Minor Change Deployment
11. Post-Implementation Review and Change Closure
Release and Deployment Management
In this process, the Release Manager plays the role of the owner. Sometimes, this process is
also known as the 'ITIL Release Management Process'.
This process allows the Release Manager to manage, plan, and control the updates and
releases of IT services to the real environment.
There are three types of releases:
1. Minor Release
2. Major Release
3. Emergency Release
According to version 3 (V3) of ITIL, following are the six sub-processes which come under
this management process:
1. Release Management Support
2. Release Planning
3. Release Build
4. Release Deployment
5. Early Life Support
6. Release Closure
ServiceAssetandConfigurationManagement
Inthisprocess,theConfigurationManagerplaysaroleasanowner. This
1. AssetManagement
2. ConfigurationManagement
The aimofthis management process is to manage the information about the (CIs) Configuration
Items which are needed to deliver the services of IT. It contains information about versions,
baselines, and the relationships between assets.
1. PlanningandManagement
2. ConfigurationControlandIdentification
3. StatusAccountingandreporting
4. Auditand Verification
5. ManagetheInformation
Knowledge Management
In this process, the Knowledge Manager plays the role of the owner. This management process
helps the Knowledge Manager by analysing, storing, and sharing the knowledge and the data or
information across an entire IT organization.
Transition Planning and Support
In this process, the Project Manager plays the role of the owner. This management process
manages the service transition projects. Sometimes, this process is also known as the Project
Management process.
In this process, the project manager is accountable for planning and coordinating resources to
deploy IT services within time, cost, and quality estimates.
o It manages the issues and risks.
o It defines the tasks and activities which are to be performed by the separate processes.
o It groups releases of the same type together.
o It manages each individual deployment as a separate project.
According to version 3 (V3) of ITIL, following are the four sub-processes which come under
this Project Management process:
1. Initiate the Project
2. Planning and Coordination of a Project
3. Project Control
4. Project Communication and Reporting
Service Validation and Testing
In this process, the Test Manager plays the role of the owner. The main goal of this management
process is to verify whether the deployed releases and the resulting IT services meet the
customer expectations.
It also checks whether the operations of IT are able to support the new IT services after
deployment. This process allows the Test Manager to remove or delete the errors which are
observed at the first phase of the Service Operation stage in the lifecycle.
Following are the various activities which are performed under this process:
o Validation and Test Management
o Planning and Design
o Verification of Test Plan and Design
o Preparation of the Test Environment
o Testing
o Evaluate Exit Criteria and Report
o Clean-up and Closure
Following are the sub-processes which come under this management process:
1. Test Model Definition
2. Release Component Acquisition
3. Release Test
4. Service Acceptance Testing
Change Evaluation
In this process, the Change Manager plays the role of the owner. The goal of this management
process is to avoid the risks which are associated with major changes, thereby reducing the
chances of failures.
This process is started and controlled by Change Management and performed by the change
manager.
Following are the various activities which are performed under this process:
o It can easily identify the risks.
o It evaluates the effects of a change.
According to version 3 (V3) of ITIL, following are the four sub-processes which come under
this management process:
1. Change Evaluation prior to Planning
2. Change Evaluation prior to Build
3. Change Evaluation prior to Deployment
4. Change Evaluation after Deployment
Service Operations
Service Operations is the fourth stage in the lifecycle of ITIL. This stage provides the guidelines
about how to maintain and manage the stability in services of IT, which helps in achieving the
agreed level targets of service delivery.
This stage is also responsible for monitoring the services of IT and fulfilling the requests. In this
stage, all the plans of transition and design are measured and executed for the actual efficiency. It is also
responsible for resolving the incidents and carrying out the operational tasks.
Following are the various essential services or processes which come under the stage of
Service Operations:
o Event Management
o Access Management
o Problem Management
o Incident Management
o Application Management
o Technical Management
Event Management
In this process, the IT Operations Manager plays the role of the owner. The main goal of this
management process is to make sure that the services of IT and CIs are constantly monitored. It
also helps in categorizing the events so that appropriate action can be taken if needed.
In this management process, the process owner takes all the responsibilities of processes and
functions for the multiple service operations.
Following are the various purposes of the Event Management process:
o It allows the IT Operations Manager to decide the appropriate action for the events.
o It also provides the trigger for the execution of management activities of many services.
o It helps in providing the basis for service assurance and service improvement.
The event monitoring tools are divided into two types, which are defined by Version 3 (V3) of
ITIL:
1. Active Monitoring Tool
2. Passive Monitoring Tool
Events are classified into three types:
1. Warning
2. Informational
3. Exception
Following are the activities which come under this management process:
1. Event Monitoring and Notification
2. First-level Correlation and Event Filtering
3. Second-level Correlation and Response Selection
4. Event Review and Closure
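A small sketch of first-level event filtering and classification follows; the sample events and the
threshold are invented for illustration, not part of any ITIL specification:

# A small sketch of first-level event filtering: classify each event as
# Informational, Warning, or Exception and pick a response. The sample
# events and the 80% threshold are invented for illustration.

events = [
    {"source": "disk", "metric": "usage_pct", "value": 62},
    {"source": "disk", "metric": "usage_pct", "value": 85},
    {"source": "app",  "metric": "status",    "value": "crashed"},
]

def classify(event):
    if event["value"] == "crashed":
        return "Exception"        # service failure: needs action now
    if event["metric"] == "usage_pct" and event["value"] >= 80:
        return "Warning"          # unusual but not yet a failure
    return "Informational"        # normal operation: just record it

actions = {
    "Informational": "log only",
    "Warning": "raise alert for review",
    "Exception": "open incident and notify on-call",
}

for event in events:
    kind = classify(event)
    print(f"{event['source']}: {kind} -> {actions[kind]}")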
Access Management
In this process, the Access Manager plays the role of the owner. This type of management process is
also sometimes called 'Identity Management' or 'Rights Management'.
In this management process, the owner of the process follows those policies and guidelines which
are defined by ISM ('Information Security Management').
Following are the six activities which come under this management process and are followed
sequentially:
1. Request Access
2. Verification
3. Providing Rights
4. Monitoring or Observing the Identity Status
5. Logging and Tracking Status
6. Restricting or Removing Rights
Following are the sub-processes which come under this management process:
1. Maintenance of the Catalogue of User Roles and Access Profiles
2. Processing of User Access Requests
Problem Management
In this process, the Problem Manager plays the role of the owner. The main goal of this
management process is to maintain or manage the life cycle of all the problems which happen in
the services of IT. In the ITIL framework, a problem is referred to as "the unknown cause of
one or more incidents".
It helps in finding the root cause of the problem. It also helps in maintaining the information
about the problems.
Following are the ten activities which come under this management process and are followed
sequentially. These ten activities are also called the lifecycle of Problem Management:
1. Problem Detection
2. Problem Logging
3. Categorization of a Problem
4. Prioritization of a Problem
5. Investigation and Diagnosis of a Problem
6. Identify Workaround
7. Raising a Known Error Record
8. Resolution of a Problem
9. Problem Closure
10. Major Problem Review
Incident Management
In this process, the Incident Manager plays the role of the owner. The main goal of this
management process is to maintain or manage the life cycle of all the incidents which happen in
the services of IT.
This management process maintains the satisfaction of users by managing the quality of IT
service. It increases the visibility of incidents.
According to version 3 (V3) of ITIL, following are the nine sub-processes which come under
this management process:
1. Incident Management Support
2. Incident Logging and Categorization
3. Pro-active User Information
4. First-Level Support for Immediate Incident Resolution
5. Second-Level Support for Incident Resolution
6. Handling of Major Incidents
7. Incident Monitoring and Escalation
8. Closure and Evaluation of Incident
9. Management Reporting of Incident
Application Management
In this function, the Application Analyst plays the role of the owner.
This management function maintains or improves the applications throughout the entire service
lifecycle. This function plays an important and essential role in application and system
management.
Under this management function, no sub-process is specified or defined. But this management
function is divided into the following six activities or stages:
Technical Management
The role of the Technical Analyst is to develop the skills which are required to operate and
manage the IT infrastructure.
Continual Service Improvement
It is the fifth stage in the lifecycle of the ITIL service. This stage helps to identify and implement
strategies, which are used for providing better services in the future.
o It improves the quality of services by learning from past failures.
o It also helps in analyzing and reviewing the improvement opportunities in every phase of the service
lifecycle.
o It also evaluates the service level achievement results.
o It also describes the best guidelines to achieve large-scale improvements in the quality of service.
o It also helps in describing the concept of KPI, which is a process-metrics-driven approach for evaluating
and reviewing the performance of the services.
Following are the various essential services or processes which come under the stage of CSI:
o Service Review
o Process Evaluation
o Definition of CSI Initiatives
o Monitoring of CSI Initiatives
This stage follows a six-step approach (pre-defined questions) for planning,
reviewing, and implementing the improvement process:
Service Review
In this process, the CSI Manager plays the role of the owner. The main aim of this management
process is to review the services of business and infrastructure on a regular basis.
Sometimes, this process is also called "ITIL Service Review and Reporting". Under this
management process, no sub-process is specified or defined.
Process Evaluation
In this process, the Process Architect plays the role of the owner. The main aim of this
management process is to evaluate the processes of IT services on a regular basis. This process
accepts inputs from the process of Service Review and provides its output to the process of Definition of
CSI Initiatives.
In this process, the process owner is responsible for maintaining and managing the process
architecture and also ensures that all the processes of services cooperate in a seamless way.
Following are the sub-processes which come under this management process:
1. Process Management Support
2. Process Benchmarking
3. Process Maturity Assessment
4. Process Audit
5. Process Control and Review
Definition of CSI Initiatives
In this process, the CSI Manager plays the role of the owner. This management process is also
known as the "Definition of Improvement Initiatives".
Definition of CSI Initiatives is a process which is used for describing the particular
initiatives whose aim is to improve the quality of IT services and processes.
In this process, the CSI Manager (process owner) is accountable for managing and maintaining
the CSI register and also helps in taking good decisions regarding improvement initiatives.
Under this management process, no sub-process is specified or defined.
Monitoring of CSI Initiatives
Under this management process, no sub-process is specified or defined.
Advantages of ITIL
Following are the various advantages or benefits of ITIL:
1. One of the best advantages of ITIL is that it helps in increasing customer satisfaction.
2. It allows managers to improve the decision-making process.
3. It is also used for creating a clear structure of an organization.
4. It also helps managers by controlling the infrastructure services.
5. It improves the interaction between the customers and the service provider.
6. With the help of this framework, service delivery is also improved.
7. It establishes the framework of ITSM for the organization.
DevOps Process
The DevOps process flow
The DevOps process flow is all about agility and automation. Each phase in the DevOps lifecycle
focuses on closing the loop between development and operations and driving production through
continuous development, integration, testing, monitoring and feedback, delivery, and
deployment.
Continuous development is an umbrella term that describes the iterative process for developing
software to be delivered to customers. It involves continuous integration, continuous testing,
continuous delivery, and continuous deployment.
Continuous integration
Continuous integration (CI) ensures the most up-to-date and validated code is always readily available
to developers. CI helps prevent costly delays in development by allowing multiple developers to
work on the same source code with confidence, rather than waiting to integrate separate sections
of code all at once on release day.
This practice is a crucial component of the DevOps process flow, which aims to combine speed
and agility with reliability and security.
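To make the CI idea concrete, here is a minimal sketch of the automated check every change runs
through before it is merged. The stage names and the toy pass/fail logic are illustrative
assumptions; in practice a CI server such as Jenkins runs real build and test commands:

# A minimal sketch of a continuous integration check. Each change runs
# the same ordered stages; a failure at any stage blocks the merge.

def build(change):
    # Compile/package the change; here, just check it is well formed.
    return "code" in change

def run_unit_tests(change):
    # Run the automated test suite; here, a stand-in predicate.
    return change.get("tests_pass", False)

def run_ci_pipeline(change):
    """Run each CI stage in order; stop at the first failure."""
    for stage in (build, run_unit_tests):
        if not stage(change):
            return f"FAILED at {stage.__name__}: do not merge"
    return "PASSED: safe to merge into the shared mainline"

# Every developer's commit gets the same automated verdict.
print(run_ci_pipeline({"code": "...", "tests_pass": True}))
print(run_ci_pipeline({"code": "...", "tests_pass": False}))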
Continuous testing
Continuous testing is a verification process that allows developers to ensure the code actually
works the way it was intended to in a live environment. Testing can surface bugs and particular
aspects of the product that may need fixing or improvement, and these can be pushed back to the
development stages for continued improvement.
Continuous monitoring and feedback
Throughout the development pipeline, your team should have measures in place for continuous
monitoring and feedback of the products and systems. Again, the majority of the monitoring
process should be automated to provide continuous feedback.
This process allows IT operations to identify issues and notify developers in real time.
Continuous feedback ensures higher security and system reliability as well as more agile
responses when issues do arise.
Continuous delivery
Continuous delivery (CD) is the next logical step from CI. Code changes are automatically built,
tested, and packaged for release into production. The goal is to release updates to the users
rapidly and sustainably.
To do this, CD automates the release process (building on the automated testing in CI) so that
new builds can be released at the click of a button.
Continuous deployment
For the seasoned DevOps organization, continuous deployment may be the better option over
CD. Continuous deployment is the fully automated version of CD, with no human (i.e., manual)
intervention necessary.
Continuous Delivery
Continuous delivery is an approach where teams release quality products frequently and
predictably, from source code repository to production, in an automated fashion.
Some organizations release products manually by handing them off from one team to the next,
which is illustrated in the diagram below. Typically, developers are at the left end of this
spectrum and operations personnel are at the receiving end. This creates delays at every hand-off,
leading to frustrated teams and dissatisfied customers. The product eventually goes live
through a tedious and error-prone process that delays revenue generation.
How does continuous delivery work?
A continuous delivery pipeline could have a manual gate right before production. A manual gate
requires human intervention, and there could be scenarios in your organization that require
manual gates in pipelines. Some manual gates might be questionable, whereas some could be
legitimate. One legitimate scenario allows the business team to make a last-minute release
decision. The engineering team keeps a shippable version of the product ready after every sprint,
and the business team makes the final call to release the product to all customers, or a cross-
section of the population, or perhaps to people who live in a certain geographical location.
The architecture of the product that flows through the pipeline is a key factor that determines the
anatomy of the continuous delivery pipeline. A highly coupled product architecture generates a
complicated graphical pipeline pattern where various pipelines could get entangled before
eventually making it to production.
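A minimal sketch of such a pipeline with an optional manual gate before production follows; the
stage names and the approval prompt are illustrative assumptions, not any vendor's API. Removing
the gate (require_approval=False) turns the same pipeline into continuous deployment:

# A minimal sketch of a delivery pipeline with a manual gate before
# production. Stage names and the approval hook are assumptions.

def build(artifact):
    print(f"built {artifact}")
    return True

def test(artifact):
    print(f"tested {artifact}")
    return True

def stage(artifact):
    print(f"staged {artifact}")
    return True

def manual_gate(artifact):
    # In continuous *delivery*, a human makes the final release call;
    # in continuous *deployment*, this gate is removed entirely.
    answer = input(f"Release {artifact} to production? [y/N] ")
    return answer.strip().lower() == "y"

def run_delivery_pipeline(artifact, require_approval=True):
    for step in (build, test, stage):
        if not step(artifact):
            return "pipeline failed"
    if require_approval and not manual_gate(artifact):
        return "held at manual gate (shippable, not shipped)"
    print(f"deployed {artifact} to production")
    return "released"

if __name__ == "__main__":
    print(run_delivery_pipeline("myapp-1.4.2"))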
Loosely coupled components make up subsystems - the smallest deployable and runnable units.
For example, a server is a subsystem. A microservice running in a container is also an example of
a subsystem. This is the subsystem phase. As opposed to components, subsystems can be stood
up and tested.
The software delivery pipeline is a product in its own right and should be a priority for
businesses. Otherwise, you should not send revenue-generating products through it. Continuous
delivery adds value in three ways: it improves the velocity, productivity, and sustainability of
software development teams.
Velocity
Velocity means responsible speed and not suicidal speed. Pipelines are meant to ship quality
products to customers. Unless teams are disciplined, pipelines can shoot faulty code to
production, only faster! Automated software delivery pipelines help organizations respond to
market changes better.
Productivity
A spike in productivity results when tedious tasks, like submitting a change request for every
change that goes to production, can be performed by pipelines instead of humans. This lets scrum teams
focus on products that wow the world, instead of draining their energy on logistics. And that can
make team members happier, more engaged in their work, and want to stay on the team longer.
Sustainability
Sustainability is key for all businesses, not just tech. "Software is eating the world" is no longer
true - software has already consumed the world! Every company at the end of the day, whether
in healthcare, finance, retail, or some other domain, uses technology to differentiate and
outmaneuver their competition. Automation helps reduce or eliminate manual tasks that are error-
prone and repetitive, thus positioning the business to innovate better and faster to meet their
customers' needs.
Release Management
Now that most software has moved from hard-and-fast release dates to the software-as-a-service (SaaS)
business model, release management has become a constant process that works alongside
development. This is especially true for businesses that have converted to utilizing continuous
delivery pipelines that see new releases occurring at blistering rates. DevOps now plays a large
role in many of the duties that were originally considered to be under the purview of release
management roles; however, DevOps has not resulted in the obsolescence of release
management.
Advantages of Release Management for DevOps
With the transition to DevOps practices, deployment duties have shifted onto the shoulders of the
DevOps teams. This doesn't remove the need for release management; instead, it modifies the
data points that matter most to the new role release management performs.
Release management acts as a method for filling the data gap in DevOps. The planning of
implementation and rollback safety nets is part of the DevOps world, but release management
still needs to keep tabs on applications, their components, and the promotion schedule as part of
change orders. The key to managing software releases in a way that keeps pace with DevOps
deployment schedules is automated management tools.
Aligning business & IT goals
The modern business is under more pressure than ever to continuously deliver new features and
boost their value to customers. Buyers have come to expect that their software evolves and
continues to develop innovative ways to meet their needs. Businesses create an outside
perspective to glean insights into their customer needs. However, IT has to have an inside
perspective to develop these features.
Release management provides a critical bridge between these two gaps in perspective. It
coordinates between IT work and business goals to maximize the success of each release.
Release management balances customer desires with development work to deliver the greatest
value to users.
Minimizes organizational risk
Software products contain millions of interconnected parts that create an enormous risk of failure.
Users are often affected differently by bugs depending on their other software,
applications, and tools. Plus, faster deployments to production increase the overall risk that faulty code
and bugs slip through the cracks.
Release management minimizes the risk of failure by employing various strategies. Testing and
governance can catch critical faulty sections of code before they reach the customer. Deployment
plans ensure there are enough team members and resources to address any potential issues before
they affect users. All dependencies between the millions of interconnected parts are recognized and
understood.
Direct accelerating change
The move towards CI/CD and increases in automation ensure that the acceleration will only
increase. However, it also means increased risk, unmet governance requirements, and potential
disorder. Release management helps promote a culture of excellence to scale DevOps to an
organizational level.
Release management best practices
As DevOps increases and changes accelerate, it is critical to have best practices in place to
ensure that releases move as quickly as possible. Well-refined processes enable DevOps teams to work
more effectively and efficiently. Some best practices to improve your processes include:
Define clear criteria for success
Well-defined requirements in releases and testing will create more dependable releases. Everyone
should clearly understand when things are actually ready to ship.
Well-defined means that the criteria cannot be subjective. Any subjective criteria will keep you
from learning from mistakes and refining your release management process to identify what
works best. The criteria also need to be defined for every team member. Release managers, quality
supervisors, product vendors, and product owners must all have an agreed-upon set of criteria
before starting a project.
Minimize downtime
DevOps is about creating an ideal customer experience. Likewise, the goal of release
management is to minimize the amount of disruption that customers feel with updates.
Strive to consistently reduce customer impact and downtime with active monitoring, proactive
testing, and real-time collaborative alerts that quickly notify you of issues during a
release. A good release manager will be able to identify any problems before the customer does.
The team can resolve incidents quickly and experience a successful release when proactive
efforts are combined with a collaborative response plan.
Optimize your staging environment
The staging environment requires constant upkeep. Maintaining an environment that is as close
as possible to your production one ensures smoother and more successful releases. From QA to product
owners, the whole team must maintain the staging environment by running tests and combing
through staging to find potential issues with deployment. Identifying problems in staging before
deploying to production is only possible with the right staging environment.
Maintaining a staging environment that is as close as possible to production will enable DevOps
teams to confirm that all releases will meet acceptance criteria more quickly.
Strive for immutability
Whenever possible, aim to create new updates as opposed to modifying existing ones. Immutable
programming drives teams to build entirely new configurations instead of changing existing
structures. These new updates reduce the risk of bugs and errors that typically happen when
modifying current configurations.
The inherently reliable releases will result in more satisfied customers and employees.
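A small sketch of the immutable idea in code, using Python's frozen dataclasses (the config fields
are invented for illustration): instead of mutating a deployed configuration, each change produces
a brand-new object, so the old version stays intact as a rollback target.

from dataclasses import dataclass, replace

# Illustrative config; in practice this could be an image tag plus settings.
@dataclass(frozen=True)
class ReleaseConfig:
    image: str
    replicas: int

v1 = ReleaseConfig(image="myapp:1.4.1", replicas=3)

# Mutation is forbidden: v1.replicas = 5 would raise FrozenInstanceError.
# Instead, every change creates a new configuration object.
v2 = replace(v1, image="myapp:1.4.2")

print(v1)  # unchanged: rollback target
print(v2)  # the new release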
Keep detailed records
Good records management on any release/deployment artifacts is critical. From release notes to
binaries to a compilation of known errors, records are vital for reproducing entire sets of assets. In
most cases, tacit knowledge is not enough.
Focus on the team
Well-defined and implemented DevOps procedures will usually create a more effective release
management structure. They enable best practices for testing and cooperation during the
complete delivery lifecycle.
Although automation is a critical aspect of DevOps and release management, it aims to enhance
team productivity. The more that release management and DevOps focus on decreasing human
error and improving operational efficiency, the more they'll start to quickly release dependable
services.
Scrum
Scrum is a framework used by teams to manage work and solve problems collaboratively in short
cycles. Scrum implements the principles of Agile as a concrete set of artifacts, practices, and
roles.
The Scrum lifecycle
The diagram below details the iterative Scrum lifecycle. The entire lifecycle is completed in fixed time
periods called sprints. A sprint is typically one to four weeks long.
Scrum roles
Product owner
The product owner is responsible for what the team builds, and why they build it. The product
owner is responsible for keeping the backlog of work up to date and in priority order.
Scrum master
The Scrum master ensures that the Scrum process is followed by the team. Scrum masters are
continually on the lookout for how the team can improve, while also resolving impediments and
other blocking issues that arise during the sprint. Scrum masters are part coach, part team
member, and part cheerleader.
Scrum team
The members of the Scrum team actually build the product. The team owns the engineering of
the product, and the quality that goes with it.
Product backlog
The product backlog is a prioritized list of work the team can deliver. The product owner is
responsible for adding, changing, and reprioritizing the backlog as needed. The items at the top of
the backlog should always be ready for the team to execute on.
Plan the sprint
In sprint planning, the team chooses backlog items to work on in the upcoming sprint. The team
chooses backlog items based on priority and what they believe they can complete in the sprint.
The sprint backlog is the list of items the team plans to deliver in the sprint. Often, each item on
the sprint backlog is broken down into tasks. Once all members agree the sprint backlog is
achievable, the sprint starts.
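As a rough sketch of that selection rule (the backlog items and the capacity figure are made up
for illustration): take items in priority order until the team's estimated sprint capacity is used up.

# A rough sketch of sprint planning: pull items in priority order until
# the team's estimated capacity is used up. Items are invented.

backlog = [  # (priority, item, estimated hours); 1 = highest priority
    (1, "login page", 16),
    (2, "password reset", 12),
    (3, "audit logging", 20),
    (4, "dark mode", 8),
]

def plan_sprint(backlog, capacity_hours):
    sprint_backlog, remaining = [], capacity_hours
    for _, item, hours in sorted(backlog):  # iterate by priority
        if hours <= remaining:
            sprint_backlog.append(item)
            remaining -= hours
    return sprint_backlog

print(plan_sprint(backlog, capacity_hours=40))
# ['login page', 'password reset', 'dark mode'] - audit logging waits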
Execute the sprint
Once the sprint starts, the team executes on the sprint backlog. Scrum does not specify how the
team should execute. The team decides how to manage its own work.
Scrum defines a practice called a daily Scrum, often called the daily standup. The daily Scrum is a
daily meeting limited to fifteen minutes. Team members often stand during the meeting to ensure
it stays brief. Each team member briefly reports their progress since yesterday, the plans for
today, and anything impeding their progress.
To aid the daily Scrum, teams often review two artifacts:
Task board
The task board lists each backlog item the team is working on, broken down into the tasks
required to complete it. Tasks are placed in To do, In progress, and Done columns based on their
status. The board provides a visual way to track the progress of each backlog item.
Sprint burndown chart
The sprint burndown is a graph that plots the daily total of remaining work, typically shown in
hours. The burndown chart provides a visual way of showing whether the team is on track to
complete all the work by the end of the sprint.
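A tiny sketch of the arithmetic behind a burndown chart (the daily remaining-hours figures are
invented): each day, the remaining hours across all sprint tasks are summed and compared against
an ideal straight-line burn.

# A tiny sketch of burndown arithmetic; the figures are invented.

sprint_days = 10
total_hours = 80
remaining_by_day = [80, 74, 70, 61, 58, 50, 41, 30, 18, 6]

for day, remaining in enumerate(remaining_by_day, start=1):
    # The ideal line burns the same share of work every day.
    ideal = total_hours * (1 - day / sprint_days)
    status = "on track" if remaining <= ideal else "behind"
    print(f"day {day:2}: remaining={remaining:3}h ideal={ideal:5.1f}h -> {status}")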
Sprint review and sprint retrospective
At the end of the sprint, the team performs two practices:
Sprint review
The team demonstrates what they've accomplished to stakeholders. They demo the software and
show its value.
Sprint retrospective
The team takes time to reflect on what went well and which areas need improvement. The
outcome of the retrospective is a set of actions for the next sprint.
Increment
Repeat, learn, improve
The entire cycle is repeated for the next sprint. Sprint planning selects the next items on the
product backlog and the cycle repeats. While the team executes the sprint, the product owner
ensures the items at the top of the backlog are ready to execute in the following sprint.
This shorter, iterative cycle provides the team with lots of opportunities to learn and improve. A
traditional project often has a long lifecycle, say 6-12 months. While a team can learn from a
traditional project, the opportunities are far fewer than for a team that executes in two-week sprints,
for example.
This iterative cycle is, in many ways, the essence of Agile.
Scrum is very popular because it provides just enough framework to guide teams while giving
them flexibility in how they execute. Its concepts are simple and easy to learn. Teams can get
started quickly and learn as they go. All of this makes Scrum a great choice for teams just starting
to implement Agile principles.
Kanban
Kanban is a Japanese term that means signboard or billboard. An industrial engineer named
Taiichi Ohno developed Kanban at Toyota Motor Corporation to improve manufacturing
efficiency.
Although Kanban was created for manufacturing, software development shares many of the same
goals, such as increasing flow and throughput. Software development teams can improve their
efficiency and deliver value to users faster by using Kanban guiding principles and methods.
Kanban principles
Visualize work
Understanding development team status and work progress can be challenging. Work progress and current state are easier to understand when presented visually rather than as a list of work items or a document.
Visualization of work is a key principle that Kanban addresses primarily through Kanban boards. These boards use cards organized by progress to communicate overall status. Visualizing work as cards in different states on a board helps to easily see the big picture of where a project currently stands, as well as identify potential bottlenecks that could affect productivity.
Use a pull model
Kanban focuses on maintaining an agreed-upon level of quality that must be met before considering work done. To support this model, stakeholders don't push work on teams that are already working at capacity. Instead, stakeholders add requests to a backlog that a team pulls into their workflow as capacity becomes available.
Impose a WIP limit
Teams that try to work on too many things at once can suffer from reduced productivity due to frequent and costly context switching. The team is busy, but work doesn't get done, resulting in unacceptably high lead times. Limiting the number of backlog items a team can work on at a time helps increase focus while reducing context switching. The items the team is currently working on are called work in progress (WIP).

Teams decide on a WIP limit, or maximum number of items they can work on at one time. A well-disciplined team makes sure not to exceed their WIP limit. If teams exceed their WIP limits, they investigate the reason and work to address the root cause.
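To make the idea concrete, here is a minimal Python sketch of a WIP-limit check. The board columns, limits, and cards are hypothetical and not tied to any particular tool:

from collections import defaultdict

WIP_LIMITS = {"In progress": 3, "Testing": 2}  # agreed-upon limits per column

# Each card is (title, column); the first and last columns carry no WIP limit.
cards = [
    ("Login page", "In progress"),
    ("Search API", "In progress"),
    ("Payment flow", "In progress"),
    ("Email alerts", "Testing"),
    ("Profile page", "To do"),
]

counts = defaultdict(int)
for _, column in cards:
    counts[column] += 1

for column, limit in WIP_LIMITS.items():
    if counts[column] > limit:
        print(f"'{column}' exceeds its WIP limit ({counts[column]}/{limit}): find the root cause")
    else:
        print(f"'{column}' is within its WIP limit ({counts[column]}/{limit})")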
Measure continuous improvement
Kanban boards
Software development-based Kanban boards display cards that correspond to product backlog
items. The cards include links to other items, such as tasks and test cases. Teams can customize
the cards to include information relevant to their process.
On a Kanban board, the WIP limit applies to all in-progress columns. WIP limits don't apply to
the first and last columns, because those columns represent work that hasn't started or is
completed. Kanban boards help teams stay within WIP limits by drawing attention to columns
that exceed the limits. Teams can then determine a course of action to remove the bottleneck.
Cumulative flow diagrams
A cumulative flow diagram (CFD) is particularly useful for identifying trends over time, including bottlenecks and other disruptions to progress velocity. A good CFD shows a consistent upward trend while a team is working on a project. The colored areas across the chart should be roughly parallel if the team is working within their WIP limits.
A bulge in one or more of the colored areas usually indicates a bottleneck or impediment in the team's flow. For example, if the completed work (shown in green) stays flat while the testing state (shown in blue) keeps growing, there is probably a bottleneck in testing.
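The data behind a CFD is just a daily count of items in each state. The following Python sketch uses invented numbers that reproduce the bottleneck pattern described above (flat completed work, growing testing column):

days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
# Cumulative counts of items that have reached each state by each day.
done =        [3, 3, 3, 3, 3]    # flat: downstream work is not finishing
testing =     [4, 6, 8, 10, 12]  # growing: items are piling up in testing
in_progress = [6, 7, 8, 9, 10]

for i, day in enumerate(days):
    print(f"{day}: done={done[i]:2d}  testing={testing[i]:2d}  in progress={in_progress[i]:2d}")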
Kanban and Scrum in Agile development
While broadly fitting under the umbrella of Agile development, Scrum and Kanban are quite different.
● Scrum focuses on fixed-length sprints, while Kanban is a continuous flow model.
● Scrum has defined roles, while Kanban doesn't define any team roles.
● Scrum uses velocity as a key metric, while Kanban uses cycle time.
Teams commonly adopt aspects of both Scrum and Kanban to help them work most effectively.
Regardless of which characteristics they choose, teams can always review and adapt until they
find the best fit. Teams should start simple and not lose sight of the importance of delivering
value regularly to users.
Kanban with GitHub
Kanban with Azure Boards
Azure Boards provides a comprehensive Kanban solution for DevOps planning. Azure Boards has deep integration across Azure DevOps, and can also be part of Azure Boards-GitHub integration.
● For more information, see Reasons to use Azure Boards to plan and track your work.
● The Learn module Choose an Agile approach to software development provides hands-on Kanban experience in Azure Boards.
Delivery Pipeline
A DevOps pipeline is a set of automated processes and tools that allows both developers and operations professionals to work cohesively to build and deploy code to a production environment.
While a DevOps pipeline can differ by organization, it typically includes build automation/continuous integration, automated testing, validation, and reporting. It may also include one or more manual gates that require human intervention before code is allowed to proceed.
Being continuous is the distinguishing characteristic of a DevOps pipeline. This includes continuous integration, continuous delivery/deployment (CI/CD), continuous feedback, and continuous operations. Instead of one-off tests or scheduled deployments, each function occurs on an ongoing basis.
Considerations for building a DevOps pipeline
Since there isn't one standard DevOps pipeline, an organization's design and implementation of a DevOps pipeline depends on its technology stack, a DevOps engineer's level of experience, budget, and more. A DevOps engineer should have wide-ranging knowledge of both development and operations, including coding, infrastructure management, system administration, and DevOps toolchains.
Plus, each organization has a different technology stack that can impact the process. For example, if your codebase is Node.js, factors include whether you use a local proxy npm registry, and whether you download the source code and run `npm install` at every stage in the pipeline, or do it once and generate an artifact that moves through the pipeline. Or, if an application is container-based, you need to decide whether to use a local or remote container registry, build the container once and move it through the pipeline, or rebuild it at every stage.
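The "build once, promote the artifact" option mentioned above can be sketched as follows. This is a toy Python model with a made-up build function and stage list; a real pipeline would live in a CI/CD system rather than a script:

import hashlib

def build_artifact(source: str) -> bytes:
    """Stand-in for the one-time build step (e.g. compile and package)."""
    return f"built:{source}".encode()

def checksum(artifact: bytes) -> str:
    return hashlib.sha256(artifact).hexdigest()[:12]

artifact = build_artifact("app-1.4.2")
digest = checksum(artifact)

# The same artifact moves through every stage; each stage verifies it
# instead of rebuilding, so what is tested is exactly what ships.
for stage in ["test", "staging", "production"]:
    assert checksum(artifact) == digest, "artifact changed between stages!"
    print(f"{stage}: deploying artifact {digest}")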
While every pipeline is unique, most organizations use similar fundamental components. Each step is evaluated for success before moving on to the next stage of the pipeline. In the event of a failure, the pipeline is stopped, and feedback is provided to the developer.
Components of a DevOps pipeline
1. Continuous integration/continuous delivery/deployment (CI/CD)
Continuous integration is the practice of making frequent commits to a common source code repository. It means continuously integrating code changes into the existing code base so that any conflicts between different developers' code changes are quickly identified and relatively easy to remediate. This practice is critically important to increasing deployment efficiency.
We believe that trunk-based development is a requirement of continuous integration. If you are not making frequent commits to a common branch in a shared source code repository, you are not doing continuous integration. If your build and test processes are automated but your developers are working on isolated, long-living feature branches that are infrequently integrated into a shared branch, you are also not doing continuous integration.
Continuous delivery ensures that the "main" or "trunk" branch of an application's source code is always in a releasable state. In other words, if management came to your desk at 4:30 PM on a Friday and said, "We need the latest version released right now," that version could be deployed with the push of a button and without fear of failure.
This means having a pre-production environment that is as close to identical to the production
environment as possible and ensuring that automated tests are executed, so that every variable
that might cause a failure is identified before code is merged into the main or trunk branch.
Continuous deployment entails having a level of continuous testing and operations that is so
robust, new versions of software are validated and deployed into a production environment
without requiring any human intervention.
This is rare and in most cases unnecessary. It is typically only the unicorn businesses who have hundreds or thousands of developers and have many releases each day that require, or even want to have, this level of automation.
To simplify the difference between continuous delivery and continuous deployment, think of
delivery as the FedEx person handing you a box, and deployment as you opening that box and
using what’s inside. If a change to the product is required between the time you receive the box
and when you open it, the manufacturer is in trouble!
2. Continuous feedback
Continuous testing is a critical component of every DevOps pipeline and one of the primary enablers of continuous feedback. In a DevOps process, changes move continuously from development to testing to deployment, which leads not only to faster releases, but a higher quality product. This means having automated tests throughout your pipeline, including unit tests that run on every build change, smoke tests, functional tests, and end-to-end tests.
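As a small illustration of the kind of unit test that runs on every build, here is a Python sketch. The apply_discount function is hypothetical, standing in for any piece of business logic:

def apply_discount(price: float, percent: float) -> float:
    """Business logic under test (hypothetical)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_apply_discount_rejects_bad_percent():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")

if __name__ == "__main__":
    test_apply_discount()
    test_apply_discount_rejects_bad_percent()
    print("all tests passed")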
Continuous monitoring is another important component of continuous feedback. A DevOps
approach entails using continuous monitoring in the staging, testing, and even development
environments. It is sometimes useful to monitor pre-production environments for anomalous
behavior, but in general this is an approach used to continuously assess the health and
performance of applications in production.
Numerous tools and services exist to provide this functionality, and this may involve anything from monitoring your on-premises or cloud infrastructure (server resources, networking, and so on) to monitoring the performance of your application or its API interfaces.
3. Continuous operations
Continuous operations is a relatively new and less common term, and definitions vary. One way to interpret it is as "continuous uptime". Consider, for example, a blue/green deployment strategy in which you have two separate production environments, one that is "blue" (publicly accessible) and one that is "green" (not publicly accessible). In this situation, new code would be deployed to the green environment, and when it was confirmed to be functional, a switch would be flipped (usually on a load balancer) and traffic would switch from the "blue" system to the "green" system. The result is no downtime for the end users.
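The blue/green switch can be sketched as follows. The environment table and health checks are hypothetical; a real switch would update a load balancer rather than a Python dictionary:

environments = {
    "blue": {"version": "1.4.1", "healthy": True},   # currently live
    "green": {"version": "1.4.2", "healthy": True},  # newly deployed
}
live = "blue"

def switch_traffic(current: str) -> str:
    """Flip traffic to the idle environment if it passes health checks."""
    idle = "green" if current == "blue" else "blue"
    if not environments[idle]["healthy"]:
        raise RuntimeError(f"{idle} failed health checks; staying on {current}")
    print(f"Routing traffic: {current} -> {idle} (v{environments[idle]['version']})")
    return idle

live = switch_traffic(live)  # end users see no downtime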
Another way to think of continuous operations is as continuous alerting. This is the notion that engineering staff is on-call and notified if any performance anomalies in the application or infrastructure occur. In most cases, continuous alerting goes hand in hand with continuous monitoring.
One of the main goals of DevOps is to improve the overall workflow in the software development lifecycle (SDLC). The flow of work is often described as WIP, or work in progress. Improving WIP can be accomplished by a variety of means. In order to effectively remove bottlenecks that decrease the flow of WIP, one must first analyze the people, process, and technology aspects of the entire SDLC.

These are the 11 bottlenecks that have the biggest impact on the flow of work.
1. Inconsistent Environments
In almost every company I have worked for or consulted with, a huge amount of waste exists because the various environments (dev, test, stage, prod) are configured differently. I call this "environment hell". How many times have you heard a developer say "it worked on my laptop"? As code moves from one environment to the next, software breaks because of the different configurations within each environment. I have seen teams waste days and even weeks fixing bugs that are due to environmental issues and are not due to errors within the code. Inconsistent environments are the number one killer of agility.
2. Manual Intervention
Manual intervention leads to human error and non-repeatable processes. Two areas where manual intervention can disrupt agility the most are in testing and deployments. If testing is performed manually, it is impossible to implement continuous integration and continuous delivery in an agile manner (if at all). Also, manual testing increases the chance of producing defects, creating unplanned work. When deployments are performed fully or partially manually, the risk of deployment failure increases significantly, which lowers quality and reliability and increases unplanned work.
Automate the build and deployment processes and implement a test automation methodology like test-driven development (TDD).
3. SDLC Maturity
The maturity of a team's software development lifecycle (SDLC) has a direct impact on their ability to deliver software. There is nothing new here; SDLC maturity has plagued IT for decades. In the age of DevOps, where we strive to deliver software in shorter increments with a high degree of reliability and quality, it is even more critical for a team to have a mature process.

Some companies I visit are still practicing waterfall methodologies. These companies struggle with DevOps because they don't have any experience with agile. But not all companies that practice agile do it well. Some are early in their agile journey, while others have implemented what I call "Wagile": waterfall tendencies with agile terminology sprinkled in. I have seen teams who have implemented Kanban but struggle with the prioritization and control of WIP. I have seen scrum teams struggle to complete the story points that they promised. It takes time to get really good at agile.
Invest in training and hold blameless post mortems to continuously solicit feedback and improve.
4. Legacy Change Management Processes
Many companies have had their change management processes in place for years and are comfortable with them. The problem is that these processes were created back when companies were deploying and updating back office solutions or infrastructure changes that happened infrequently. Fast forward to today's environments, where applications are made of many small components or microservices that can be changed and deployed quickly, and all of a sudden the process gets in the way.
Many large companies with well-established ITIL processes struggle with DevOps. In these environments I have seen development teams implement highly automated CI/CD processes only to stop and wait for weekly manual review gates. Sometimes these teams have to go through multiple reviews (security, operations, code, and change control). What is worse is that there is often a long line to wait in for reviews, causing a review process to slip another week. Many of these reviews are just rubber stamp approvals that could be entirely avoided with some minor modifications to the existing processes.
Companies with legacy processes need to look at how they can modernize processes to be
more agile instead of being the reason why their company can’t move fast enough.
5. Lack of Operational Maturity
Moving to a DevOps model often requires a different approach to operations. Some companies are accustomed to supporting back office applications that change infrequently. It requires a different mindset to support software delivered as a service that is always on, and deployed frequently.
With DevOps, operations is no longer just something Ops does. Developers now must have tools so they can support applications. Often I encounter companies that only monitor infrastructure. In the DevOps model, developers need access to logging solutions, application performance monitoring (APM) tools, web and mobile analytics, and advanced alerting and notification solutions. Processes like change management, problem management, request management, incident management, access management, and many others often need to be modernized to allow for more agility and transparency. With DevOps, operations is a team sport.
6. Siloed QA
Too often I see clients who have a separate QA department that is not fully integrated with the development team. The code is thrown over the wall and then testing begins. Bugs are detected and sent back to developers, who then have to quickly fix, build, and redeploy. This process is repeated until there is no time remaining and teams are left to agree on what defects they can tolerate and promote to production. This is a death spiral in action. With every release, they introduce more technical debt into the system, lowering its quality and reliability and increasing unplanned work. There is a better way.
The better way is to block bugs from moving forward in the development process. This is accomplished by building automated test harnesses and by automatically failing the build if any of the tests fail. This is what continuous integration is designed for. Testing must be part of the development process, not a handoff that is performed after development. Developers need to play a bigger part in testing and testers need to play a bigger part in development. This strikes fear in some testers, and not all testers can make the transition.
7. Automating Waste
A very common pattern I run into is the automation of waste. This occurs when a team declares itself a DevOps team or a person declares themselves a DevOps engineer and immediately starts writing hundreds or thousands of lines of Chef or Puppet scripts to automate their existing processes. The problem is that many of the existing processes are bottlenecks and need to be changed. Automating waste is like pouring concrete around unbalanced support beams. It makes bad design permanent.
Automate processes after the bottlenecks are removed.
8. Competing or Misaligned Incentives and Lack of Shared Ownership
This bottleneck has plagued IT for years but is more profound when attempting to be agile. In fact, this issue is at the heart of why DevOps came to be in the first place. Developers are incented for speed to market, and operations is incented to ensure security, reliability, availability, and governance. The incentives are conflicting. Instead, everyone should be incented for customer satisfaction, with a high degree of agility, reliability, and quality (which is what DevOps is all about). If every team is not marching towards the same goals, then there will be a never-ending battle of priorities and resources. If all teams' goals are in support of the goals I mentioned above, and everyone is measured in a way that enforces those incentives, then everyone wins, especially the customer.
9. Dependence on Heroic Efforts
When heroic efforts are necessary to succeed, then a team is in a dark place. This often means working insane hours, being reactive instead of proactive, and being highly reliant on luck and chance. The biggest causes of this are a lack of automation, too much tribal knowledge, immature operational processes, and even poor management. The culture of heroism often leads to burnout, high turnover, and poor customer satisfaction.
If your organization relies on heroes, find the root causes that create these dependencies and fix them fast.
10. Governance as an Afterthought
When DevOps starts as a grassroots initiative, there is typically little attention paid to the question "how does this scale?" It is much easier to show some success in a small isolated team and for an initial project. But once the DevOps initiative starts scaling to larger projects running on far more infrastructure, or once it starts spreading to other teams, it can come crashing down without proper governance in place. This is very similar to building software in the cloud. How many times have you seen a small team whip out their credit card and build an amazing solution on AWS? Easy to do, right? Then a year later the costs are spiraling out of control as they lose sight of how many servers are in use and what is running on them. They all have different versions of third-party products and libraries on them. Suddenly, it is not so easy anymore.
With DevOps, the same thing can happen without the appropriate controls in place. Many companies start their DevOps journey with a team of innovators and are able to score some major wins. But when they take that model to other teams it all falls down. There are numerous reasons that this happens. Is the organization ready to manage infrastructure and operations across multiple teams? Are there common shared services available, like central logging and monitoring solutions, or is each team rolling their own? Is there a common security architecture that everyone can adhere to? Can the teams provision their own infrastructure from a self-service portal, or are they all dependent on a single-queue ticketing system? I could go on, but you get the point. It is easier to cut some corners when there is one team to manage, but to scale we must look at the entire service catalog. DevOps will not scale without the appropriate level of governance in place.
Assign an owner and start building a plan for scaling DevOps across the organization.
11. Lack of Executive Sponsorship
The most successful companies have top-level support for their DevOps initiative. One of my clients is making a heavy investment in DevOps training and will run a large number of employees through the program. Companies with top-level support make DevOps a priority. They break down barriers, drive organizational change, improve incentive plans, communicate "why" they are doing DevOps, and fund the initiative. When there is no top-level support, DevOps becomes much more challenging and often becomes a new silo. Don't let this stop you from starting a grassroots initiative. Many sponsored initiatives started as grassroots initiatives. These grassroots teams measured their success and pitched their executives. Sometimes when executives see the results and the ROI, they become the champions for furthering the cause. My point is, it is hard to get dev and ops to work together with common goals when it is not supported at the highest levels. It is difficult to transform a company to DevOps if it is not supported at the highest levels.
If running a grassroots effort, gather before and after metrics and be prepared to sell and
evangelize DevOps upward.
Unit 2
Software Development Life Cycle models and DevOps
● Agile
● Lean
● Waterfall
● Iterative
● Spiral
● DevOps
Each of these approaches varies in some ways from the others, but all have a common purpose:
to help teams deliver high-quality software as quickly and cost-effectively as possible.
1. Agile
The Agile model first emerged in 2001 and has since become the de facto industry standard.
Some businesses value the Agile methodology so much that they apply it to other types of
projects, including nontech initiatives.
In the Agile model, fast failure is a good thing. This approach produces ongoing release cycles,
each featuring small, incremental changes from the previous release. At each iteration, the
product is tested. The Agile model helps teams identify and address small issues on projects
before they evolve into more significant problems, and it engages business stakeholders to give
feedback throughout the development process.
2. Lean
The Lean model for software development is inspired by "lean" manufacturing practices and
principles. The seven Lean principles (in this order) are: eliminate waste, amplify learning,
decideaslateaspossible,deliverasfast aspossible,empowertheteam, build inintegrityandsee the
whole.
The Lean process is about working only on what must be worked on at the time, so there’s no
room for multitasking. Project teams are also focused on finding opportunities to cut waste at
every turn throughout the SDLC process, from dropping unnecessary meetings to reducing
documentation.
The Agile model is actually a Lean method for the SDLC, but with some notable differences. One is how each prioritizes customer satisfaction: Agile makes it the top priority from the outset, creating a flexible process where project teams can respond quickly to stakeholder feedback throughout the SDLC. Lean, meanwhile, emphasizes the elimination of waste as a way to create more overall value for customers, which, in turn, helps to enhance satisfaction.
3. Waterfall
Some experts argue that the Waterfall model was never meant to be a process model for real
projects. Regardless, Waterfall is widely considered the oldest of the structured SDLC
methodologies. It’s also a very straightforward approach: finish one phase, then move on to the
next. No going back. Each stage relies on information from the previous stage and has its own
project plan.
The downside of Waterfall is its rigidity. Sure, it's easy to understand and simple to manage. But early delays can throw off the entire project timeline. With little room for revisions once a stage is completed, problems can't be fixed until you get to the maintenance stage. This model doesn't work well if flexibility is needed or if the project is long-term and ongoing.
Even more rigid is the related Verification and Validation model — or V-shaped model. This
linear development methodology sprang from the Waterfall approach. It’s characterized by a
corresponding testing phase for each development stage. Like Waterfall, each stage begins only
after the previous one has ended. This SDLC model can be useful, provided your project has no
unknown requirements.
4. Iterative
The Iterative model is repetition incarnate. Instead of starting with fully known requirements,
project teams implement a set of software requirements, then test, evaluate and pinpoint further
requirements. Anew versionofthe software is produced witheachphase, or iteration. Rinse and
repeat until the complete system is ready.
One example of an Iterative model is the Rational Unified Process (RUP), developed by IBM's Rational Software division. RUP is a process product, designed to enhance team productivity for a wide range of projects and organizations.

RUP divides the development process into four phases: inception, elaboration, construction, and transition. Each phase of the project involves business modeling, analysis and design, implementation, testing, and deployment.
5. Spiral
One of the most flexible SDLC methodologies, Spiral takes a cue from the Iterative model and its repetition. The project passes through four phases (planning, risk analysis, engineering, and evaluation) over and over in a figurative spiral until completed, allowing for multiple rounds of refinement.
The Spiral model is typically used for large projects. It enables development teams to build a highly customized product and incorporate user feedback early on. Another benefit of this SDLC model is risk management. Each iteration starts by looking ahead to potential risks and figuring out how best to avoid or mitigate them.
6. DevOps
The DevOps methodology is a relative newcomer to the SDLC scene. It emerged from two trends: the application of Agile and Lean practices to operations work, and the general shift in business toward seeing the value of collaboration between development and operations staff at all stages of the SDLC process.
In a DevOps model, Developers and Operations teams work together closely — and sometimes
as one team — to accelerate innovation and the deployment of higher-quality and more reliable
software products and functionalities. Updates to products are small but frequent. Discipline,
continuous feedback and process improvement, and automation of manual development
processes are all hallmarks of the DevOps model.
Amazon Web Services describes DevOps as the combination of cultural philosophies, practices,
and tools that increases an organization’s ability to deliver applications and services at high
velocity, evolving and improving products at a faster pace than organizations using traditional
software development and infrastructure management processes. So like many SDLC models,
DevOps is not only an approach to planning and executing work, but also a philosophy that
demands a nontraditional mindset in an organization.
Choosing the right SDLC methodology for your software development project requires careful
thought. But keep in mind that a model for planning and guiding your project is only one
ingredient for success. Even more important is assembling a solid team of skilled talent
committed to moving the project forward through every unexpected challenge or setback.
DevOps Lifecycle
Learning DevOps is not complete without understanding the DevOps lifecycle phases. The
DevOps lifecycle includes seven phases as given below:
1) Continuous Development
This phase involves the planning and coding of the software. The vision of the project is decided during the planning phase, and the developers begin developing the code for the application. There are no DevOps tools required for planning, but there are several tools for maintaining the code.
2) Continuous Integration
This stage is the heart of the entire DevOps lifecycle. It is a software development practice in which developers are required to commit changes to the source code more frequently. This may be on a daily or weekly basis. Every commit is then built, which allows early detection of problems if they are present. Building the code involves not only compilation but also unit testing, integration testing, code review, and packaging.
The code supporting new functionality is continuously integrated with the existing code. Therefore, there is continuous development of software. The updated code needs to be integrated continuously and smoothly with the systems to reflect changes to the end users.
Jenkins is a popular tool used in this phase. Whenever there is a change in the Git repository, Jenkins fetches the updated code and prepares a build of that code, which is an executable file in the form of a WAR or JAR. This build is then forwarded to the test server or the production server.
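The loop Jenkins automates can be sketched in Python. All of the helper names below are invented stand-ins; a real setup would use Jenkins jobs triggered by Git webhooks:

def fetch_latest_commit() -> str:
    return "a1b2c3d"  # stand-in for querying the Git repository

def build(commit: str) -> str:
    # Stand-in for compiling, unit testing, and packaging (e.g. a JAR or WAR)
    return f"app-{commit}.jar"

def forward_to_test_server(artifact: str) -> None:
    print(f"forwarded {artifact} to the test server")

last_built = None
commit = fetch_latest_commit()
if commit != last_built:              # a change was detected in the repository
    artifact = build(commit)          # every commit is built...
    forward_to_test_server(artifact)  # ...and the build moves onward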
3) Continuous Testing
In this phase, the developed software is continuously tested for bugs. For constant testing, automation testing tools such as TestNG, JUnit, and Selenium are used. These tools allow QAs to test multiple code bases thoroughly in parallel to ensure that there are no flaws in the functionality. In this phase, Docker containers can be used for simulating the test environment. Selenium does the automation testing, and TestNG generates the reports. This entire testing phase can be automated with the help of a continuous integration tool such as Jenkins.
Automation testing saves a lot of time and effort for executing the tests instead of doing this
manually. Apart from that, report generation is a big plus. The task of evaluating the test cases
that failed in a test suite gets simpler. Also, we can schedule the execution of the test cases at
predefined times. After testing, the code is continuously integrated with the existing code.
4) Continuous Monitoring
Monitoring is a phase that involves all the operational factors of the entire DevOps process, where important information about the use of the software is recorded and carefully processed to find out trends and identify problem areas. Usually, the monitoring is integrated within the operational capabilities of the software application.

It may occur in the form of documentation files, or it may produce large-scale data about the application parameters when the application is in continuous use. System errors such as "server not reachable" or low memory are resolved in this phase. Monitoring maintains the security and availability of the service.
5) Continuous Feedback
The application development is consistently improved by analyzing the results from the
operations of the software. This is carried out by placing the critical phase of constant feedback
between the operations and the development of the next version of the current software
application.
Continuity is the essential factor in DevOps, as it removes the unnecessary steps that are otherwise required to take a software application from development, through finding its issues in use, to producing a better version. A lack of continuity kills the efficiency that may be possible with the app and reduces the number of interested customers.
6) Continuous Deployment
In this phase, the code is deployed to the production servers. It is also essential to ensure that the code is correctly used on all the servers.

The new code is deployed continuously, and configuration management tools play an essential role in executing tasks frequently and quickly. Some popular tools used in this phase are Chef, Puppet, Ansible, and SaltStack.
7) Continuous Operations
All DevOps operations are based on continuity, with complete automation of the release process, allowing the organization to shorten release cycles and keep the application continuously available to users.
Introducing software architecture: the DevOps model
The DevOps model goes through several phases governed by cross-discipline teams.
Those phases are as follows:
Planning, Identify, and Track: Using the latest in project management tools and agile practices, track ideas and workflows visually. This gives all important stakeholders a clear pathway to prioritization and better results. With better oversight, project managers can ensure teams are on the right track and aware of potential obstacles and pitfalls. All applicable teams can better work together to solve any problems in the development process.
Development Phase: Version control systems help developers continuously code, ensuring one patch connects seamlessly with the master branch. Each complete feature triggers the developer to submit a request that, if approved, allows the changes to replace existing code. Development is ongoing.
Testing Phase: After a build is completed in development, it is sent to QA testing. Catching bugs is important to the user experience; in DevOps, bug testing happens early and often. Practices like continuous integration allow developers to use automation to build and test as a cornerstone of continuous development.
Deployment Phase: In the deployment phase, most businesses strive to achieve continuous delivery. This means enterprises have mastered the art of manual deployment. After bugs have been detected and resolved, and the user experience has been perfected, a final team is responsible for the manual deployment. By contrast, continuous deployment is a DevOps approach that automates deployment after QA testing has been completed.
Management Phase: During the post-deployment management phase, organizations monitor and maintain the DevOps architecture in place. This is achieved by reading and interpreting data from users, ensuring security, availability, and more.
Benefits of DevOps Architecture
A properly implemented DevOps approach comes with a number of benefits. These include the
following that we selected to highlight:
Decreased Cost: Operational cost is a primary concern for businesses, and DevOps helps organizations keep their costs low. Because efficiency gets a boost with DevOps practices, software production increases and businesses see decreases in overall cost for production.
Customers are Served: User experience, and by design user feedback, is important to the DevOps process. By gathering information from clients and acting on it, those who practice DevOps ensure that clients' wants and needs get honored, and customer satisfaction reaches new highs.
It Gets More Efficient with Time: DevOps simplifies the development lifecycle, which in previous iterations had been increasingly complex. This ensures greater efficiency throughout a DevOps organization, as does the fact that gathering requirements also gets easier. In DevOps, requirements gathering is a streamlined process; a culture of accountability, collaboration, and transparency makes requirements gathering a smooth team effort where no stone is left unturned.
The monolithic scenario
Monolithic software is designed to be self-contained, wherein the program's components or functions are tightly coupled rather than loosely coupled, like in modular software programs. In a monolithic architecture, each component and its associated components must all be present for code to be executed or compiled and for the software to run.
Monolithic applications are single-tiered, which means multiple components are combined into
one large application. Consequently, they tend to have large codebases, which can be
cumbersome to manage over time.
Furthermore, if one program component must be updated, other elements may also require
rewriting, and the whole application has to be recompiled and tested. The process can be time-
consuming and may limit the agility and speed of software development teams. Despite these
issues, the approach is still in use because it does offer some advantages. Also, many early
applications were developed as monolithic software, so the approach cannot be completely
disregarded when those applications are still in use and require updates.
What is monolithic architecture?
A monolithic architecture is the traditional unified model for the design of a software program. Monolithic, in this context, means "composed all in one piece." According to the Cambridge dictionary, the adjective monolithic also means both "too large" and "unable to be changed."
Benefits of monolithic architecture
There are benefits to monolithic architectures, which is why many applications are still created using this development paradigm. For one, monolithic programs may have better throughput than modular applications. They may also be easier to test and debug because, with fewer elements, there are fewer testing variables and scenarios that come into play.
At the beginning of the software development lifecycle, it is usually easier to go with the monolithic architecture since development can be simpler during the early stages. A single codebase also simplifies logging, configuration management, application performance monitoring, and other development concerns. Deployment can also be easier, by copying the packaged application to a server. Finally, multiple copies of the application can be placed behind a load balancer to scale it horizontally.
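That horizontal-scaling idea can be sketched with a toy round-robin load balancer in Python (an in-process simulation only; real deployments use a dedicated load balancer):

import itertools

servers = ["app-server-1", "app-server-2", "app-server-3"]  # identical copies
rotation = itertools.cycle(servers)

def route(request_id: int) -> str:
    """Send each incoming request to the next copy in rotation."""
    return f"request {request_id} -> {next(rotation)}"

for i in range(6):
    print(route(i))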
That said, the monolithic approach is usually better for simple, lightweight applications. For more complex applications with frequent expected code changes or evolving scalability requirements, this approach is not suitable.
Drawbacks of monolithic architecture
Generally, monolithic architectures suffer from drawbacks that can delay application development and deployment. These drawbacks become especially significant when the product's complexity increases or when the development team grows in size.
The code base of monolithic applications can be difficult to understand because it may be extensive, which can make it difficult for new developers to modify the code to meet changing business or technical requirements. As requirements evolve or become more complex, it becomes difficult to correctly implement changes without hampering the quality of the code and affecting the overall operation of the application.
Following each update to a monolithic application, developers must compile the entire codebase
and redeploy the full application rather than just the part that was updated. This makes
continuous or regular deployments difficult, which then affects the application's and team's
agility.
The application's size can also increase startup time and add to delays. In some cases, different parts of the application may have conflicting resource requirements. This makes it harder to find the resources required to scale the application.
Architecture Rules of Thumb
1. There is always a bottleneck. Even in a serverless system or one you think will "infinitely" scale, pressure will always be created elsewhere. For example, if your API scales, does your database also scale? If your database scales, does your email system? In modern cloud systems, there are so many components that scalability is not always the goal. Throttling systems are sometimes the best choice.
2. Your data model is linked to the scalability of your application. If your table design is garbage, your queries will be cumbersome, so accessing data will be slow. When designing a database (NoSQL or SQL), carefully consider your access pattern and what data you will have to filter. For example, with DynamoDB, you need to consider what "key" you will use to retrieve data. If that field is not set as the primary or sort key, it will force you to use a scan rather than a faster query (see the sketch after this list).
3. Scalability is mainly linked with cost. When you get to a large scale, consider systems where this relationship does not track linearly. If, like many, you have systems on RDS and ECS, these will scale nicely. But the downside is that as you scale, you will pay directly for that increased capacity. It's common for these workloads to cost $50,000 per month at scale. The solution is to migrate these workloads to serverless systems proactively.
4. Favour systems that require little tuning to make fast. The days of configuring your own servers are over. AWS, GCP, and Azure all provide fantastic systems that don't need expert knowledge to achieve outstanding performance.
5. Use infrastructure as code. Terraform makes it easy to build repeatable and version-controlled infrastructure. It creates an ethos of collaboration and reduces errors by defining infrastructure in code rather than "missing" a critical checkbox.
6. Use a PaaS if you're at less than 100k MAUs. With Heroku, Fly, and Render, there is no need to spend hours configuring AWS and messing around with your application build process. Platform-as-a-service should be leveraged to deploy quickly and focus on the product.
7. Outsource systems outside of the market you are in. Don't roll your own CMS or Auth, even if it costs you tonnes. If you go to the pricing page of many third-party systems, for enterprise scale, the cost is insane - think $10,000 a month for an authentication system! "I could make that in a week," you think. That may be true, but it doesn't consider the long-term maintenance and the time you cannot spend on your core product. Where possible, buy off the shelf.
8. You have three levers: quality, cost, and time. You have to balance them accordingly. You have, at best, 100 "points" to distribute between the three. Of course, you always want to maintain quality, so the other levers to pull are time and cost.
9. Design your APIs as open-source contracts. Leveraging tools such as OpenAPI/Swagger (not a sponsor, just a fan!) allows you to create "contracts" between your front-end and back-end teams. This reduces bugs by having the shape of the requests and responses agreed upon ahead of time.
10. Start with a simple system first (Gall's law). Gall's law states: "All complex systems that work evolved from simpler systems that worked. If you want to build a complex system that works, build a simpler system first, and then improve it over time." You should avoid going after shiny technology when creating a new software architecture. Focus on simple, proven systems: S3 for your static website, ECS for your API, RDS for your database, and so on. After that, you can chop and change your workload to add these fancy technologies into the mix.
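To illustrate rule 2 above, here is a small boto3 sketch of the query-versus-scan distinction in DynamoDB. The "Orders" table, its "customer_id" partition key, and the "email" attribute are all hypothetical, and running this requires AWS credentials and an existing table:

import boto3
from boto3.dynamodb.conditions import Key, Attr

table = boto3.resource("dynamodb").Table("Orders")

# Fast: a query uses the partition key the table was designed around.
by_key = table.query(KeyConditionExpression=Key("customer_id").eq("C-1001"))

# Slow: filtering on a non-key attribute forces a full-table scan.
by_scan = table.scan(FilterExpression=Attr("email").eq("a@example.com"))

print(len(by_key["Items"]), len(by_scan["Items"]))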
The Separation of Concerns
How is separation of concerns achieved?
Separation of concerns in software architecture is achieved by the establishment of boundaries. A boundary is any logical or physical constraint which delineates a given set of responsibilities. Some examples of boundaries would include the use of methods, objects, components, and services to define core behavior within an application; projects, solutions, and folder hierarchies for source organization; and application layers and tiers for processing organization.
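A minimal Python sketch of these boundaries, using hypothetical classes, keeps the data-access, business-logic, and presentation concerns behind separate interfaces:

from typing import Optional

class UserRepository:
    """Data-access concern: how users are stored and fetched."""
    def __init__(self):
        self._users = {1: "Ada", 2: "Grace"}
    def find(self, user_id: int) -> Optional[str]:
        return self._users.get(user_id)

class GreetingService:
    """Business-logic concern: what greeting to produce."""
    def __init__(self, repo: UserRepository):
        self._repo = repo
    def greet(self, user_id: int) -> str:
        name = self._repo.find(user_id)
        return f"Hello, {name}!" if name else "Hello, guest!"

def render(message: str) -> None:
    """Presentation concern: how output is shown."""
    print(message)

# Each layer can change independently: swap the repository for a real
# database, or the renderer for a web view, without touching the rest.
render(GreetingService(UserRepository()).greet(1))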
Separation of concerns - advantages
Separation of Concerns implemented in software architecture would have several advantages:
1. Lack of duplication and singularity of purpose of the individual components render the overall system easier to maintain.
2. The system becomes more stable as a byproduct of the increased maintainability.
3. The strategies required to ensure that each component only concerns itself with a single set of cohesive responsibilities often result in natural extensibility points.
4. The decoupling which results from requiring components to focus on a single purpose leads to components which are more easily reused in other systems, or different contexts within the same system.
5. The increase in maintainability and extensibility can have a major impact on the marketability and adoption rate of the system.
There are several flavors of Separation of Concerns: Horizontal Separation, Vertical Separation, Data Separation, and Aspect Separation. In this article, we will restrict ourselves to Horizontal and Aspect separation of concerns.
Handling database migrations
Introduction
What are database migrations?
Database migrations, also known as schema migrations, database schema migrations, or simply migrations, are controlled sets of changes developed to modify the structure of the objects within a relational database. Migrations help transition database schemas from their current state to a new desired state, whether that involves adding tables and columns, removing elements, splitting fields, or changing types and constraints.
While preventing data loss is generally one of the goals of migration software, changes that drop or destructively modify structures that currently house data can result in deletion. To cope with this, migration is often a supervised process involving inspecting the resulting change scripts and making any modifications necessary to preserve important information.
What are the advantages of migration tools?
Migrations are helpful because they allow database schemas to evolve as requirements change.
They help developers plan, validate, and safely apply schema changes to their environments.
These compartmentalized changes are defined on a granular level and describe the
transformations that must take place to move between various "versions" of the database.
In general, migration systems create artifacts or files that can be shared, applied to multiple database systems, and stored in version control. This helps construct a history of modifications to the database that can be closely tied to accompanying code changes in the client applications. The database schema and the application's assumptions about that structure can evolve in tandem.
Some other benefits include being allowed (and sometimes required) to manually tweak the
process by separating the generation of the list of operations from the execution of them. Each
change can be audited, tested, and modified to ensure that the correct results are obtained while
still relying on automation for the majority of the process.
State based migration
State based migration software creates artifacts that describe how to recreate the desired
database state from scratch. The files that it produces can be applied to an empty relational
database system to bring it fully up to date.
After the artifacts describing the desired state are created, the actual migration involves comparing the generated files against the current state of the database. This process allows the software to analyze the difference between the two states and generate a new file or files to bring the current database schema in line with the schema described by the files. These change operations are then applied to the database to reach the goal state.
What to keep in mind with state based migrations
Like almost all migrations, state based migration files must be carefully examined by knowledgeable developers to oversee the process. Both the files describing the desired final state and the files that outline the operations to bring the current database into compliance must be reviewed to ensure that the transformations will not lead to data loss. For example, if the generated operations attempt to rename a table by deleting the current one and recreating it with its new name, a knowledgeable human must recognize this and intervene to prevent data loss.
State based migrations can feel rather clumsy if there are frequent major changes to the database
schema that require this type of manual intervention. Because of this overhead, this technique is
often better suited for scenarios where the schema is well-thought out ahead of time with
fundamental changes occurring infrequently.
However, state based migrations do have the advantage of producing files that fully describe the database state in a single context. This can help new developers onboard more quickly, and works well with workflows in version control systems, since conflicting changes introduced by code branches can be resolved easily.
Change based migrations
The major alternative to state based migrations is a change based migration system. Change based migrations also produce files that alter the existing structures in a database to arrive at the desired state. Rather than discovering the differences between the desired database state and the current one, this approach builds off of a known database state to define the operations to bring it into the new state. Successive migration files are produced to modify the database further, creating a series of change files that can reproduce the final database state when applied consecutively.
Because change based migrations work by outlining the operations required from a known database state to the desired one, an unbroken chain of migration files is necessary from the initial starting point. This system requires an initial state, which may be an empty database system or files describing the starting structure; the files describing the operations that take the schema through each transformation; and a defined order in which the migration files must be applied.
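A toy change based migration runner might look like the following Python sketch. The migration names, SQL, and tracking table are hypothetical; real tools (Flyway, Alembic, and others) implement the same pattern far more robustly:

import sqlite3

MIGRATIONS = [
    ("0001_create_users", "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"),
    ("0002_add_email",    "ALTER TABLE users ADD COLUMN email TEXT"),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)")

applied = {row[0] for row in conn.execute("SELECT name FROM schema_migrations")}

for name, sql in MIGRATIONS:  # order matters: each change builds on the last
    if name in applied:
        continue
    conn.execute(sql)
    conn.execute("INSERT INTO schema_migrations (name) VALUES (?)", (name,))
    print(f"applied {name}")
conn.commit()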
What to keep in mind with change based migrations
Change based migrations trace the provenance of the database schema design back to the original structure through the series of transformation scripts that they create. This can help illustrate the evolution of the database structure, but is less helpful for understanding the complete state of the database at any one point, since the changes described in each file modify the structure produced by the last migration file.
Since the previous state is so important to change based systems, the system often uses a table within the database itself to track which migration files have been applied. This helps the software understand what state the system is currently in without having to analyze the current structure and compare it against the desired state, known only by compiling the entire series of migration files.
The disadvantage of this approach is that the current state of the database isn't described in the
code base after the initial point. Each migration file builds off of the previous one, so while the
changes are nicely compartmentalized, the entire database state at any one point is much harder
to reason about. Furthermore, because the order of operations is so important, it can be more
difficult to resolve conflicts produced by developers making conflicting changes.
Change based systems, however, do have the advantage of allowing for quick, iterative changes to the database structure. Instead of the time-intensive process of analyzing the current state of the database, comparing it to the desired state, creating files to perform the necessary operations, and applying them to the database, change based systems assume the current state of the database based on the previous changes. This generally makes changes more lightweight, but does make out-of-band changes to the database especially dangerous, since migrations can leave the target systems in an undefined state.
Microservices
With microservices, however, each unit is independently deployable but can communicate with the others when necessary. Developers can now achieve the scalability, simplicity, and flexibility needed to create highly sophisticated software.
How does microservices architecture work?
The key benefits of microservices architecture
Microservices architecture presents developers and engineers with a number of benefits that
monoliths cannot provide. Here are a few of the most notable.
1. Less development effort
Smaller development teams can work in parallel on different components to update existing
functionalities. This makes it significantly easier to identify hot services, scale independently
from the rest of the application, and improve the application.
2. Improved scalability
3. Independent deployment
Each microservice constituting an application needs to be a full stack. This enables microservices to be deployed independently at any point. Since microservices are granular in nature, development teams can work on one microservice, fix errors, then redeploy it without redeploying the entire application.
Microservice architecture is agile and thus does not need a congressional act to modify the program by adding or changing a line of code or adding or eliminating features. This approach also helps streamline business structures through improved resilience and fault isolation.
4. Error isolation
In monolithic applications, the failure of even a small component of the overall application can
make it inaccessible. In some cases, determining the error could also be tedious. With
microservices, isolating the problem-causing component is easy since the entire application is
divided into standalone, fully functional software units. If errors occur, other non-related units
will still continue to function.
5. Integration with various tech stacks
With microservices, developers have the freedom to pick the tech stack best suited for one
particular microservice and its functions. Instead of opting for one standardized tech stack
encompassing all of an application’s functions, they have complete control over their options.
Data processing
Since applications running on microservice architecture can handle more simultaneous requests,
microservices can process large amounts of information in less time. This allows for faster and
more efficient application performance.
Media content
Companies like Netflix and Amazon Prime Video handle billions of API requests daily. Services such as OTT platforms offering users massive media content will benefit from deploying a microservices architecture. Microservices will ensure that the plethora of requests for different subdomains worldwide is processed without delays or errors.
Website migration
Website migration involves a substantial change and redevelopment of a website's major areas, such as its domain, structure, user interface, etc. Using microservices will help you avoid business-damaging downtime and ensure your migration plans execute smoothly without any hassles.
Transactions and invoices
Microservices are perfect for applications handling high payments and transaction volumes and
generating invoices for the same. The failure of an application to process payments can cause
huge losses for companies. With the help of microservices, the transaction functionality can be
made more robust without changing the rest of the application.
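As a sketch of that idea, the (hypothetical) payment boundary below is hardened with retries without touching any other part of the application. The fake service fails once to show the retry path:

import time

_calls = {"count": 0}

def payment_service(order_id: str) -> str:
    """Stand-in for an HTTP call; fails transiently on the first attempt."""
    _calls["count"] += 1
    if _calls["count"] == 1:
        raise ConnectionError("payment service temporarily unavailable")
    return f"payment accepted for {order_id}"

def call_with_retries(order_id: str, attempts: int = 3) -> str:
    for attempt in range(1, attempts + 1):
        try:
            return payment_service(order_id)
        except ConnectionError:
            if attempt == attempts:
                raise
            time.sleep(0.1 * attempt)  # brief backoff before retrying

print(call_with_retries("ORD-42"))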
Microservices tools
Building a microservices architecture requires a mix of tools and processes to perform the core
building tasks and support the overall framework. Some of these tools are listed below.
1. Operating system
The most basic tool required to build an application is an operating system (OS). One operating system that allows great flexibility in development and use is Linux. It offers a largely self-contained environment for executing program code and a series of options for large and small applications in terms of security, storage, and networking.
2. Programming languages
One of the benefits of using a microservices architecture is that you can use a variety of
programming languages across applications for different services. Different programming
languages have different utilities deployed based on the nature of the microservice.
3. API management and testing tools
The various services need to communicate when building an application using a microservices architecture. This is accomplished using application programming interfaces (APIs). For APIs to work optimally and desirably, they need to be constantly monitored, managed, and tested, and API management and testing tools are essential for this.
4. Messaging tools
Messaging tools enable microservices to communicate both internally and externally. RabbitMQ and Apache Kafka are examples of messaging tools deployed as part of a microservice system.
5. Toolkits
Toolkits in a microservices architecture are tools used to build and develop applications.
Different toolkits are available to developers, and these kits fulfill different purposes. Fabric8 and Seneca are some examples of microservices toolkits.
6. Architectural frameworks
7. Orchestration tools
A container is a set of executables, codes, libraries, and files necessary to run a microservice. Container orchestration tools provide a framework to manage and optimize containers within microservices architecture systems.
8. Monitoring tools
Once a microservices application is up and running, you must constantly monitor it to ensure everything is working smoothly and as intended. Monitoring tools help developers stay on top of the application's work and avoid potential bugs or glitches.
9. Serverless tools
Serverless tools further add flexibility and mobility to the various microservices within an application by eliminating server dependency. This helps in the easier rationalization and division of application tasks.
Microservices vs monolithic architecture
With monolithic architectures, all processes are tightly coupled and run as a single service. This means that if one process of the application experiences a spike in demand, the entire architecture must be scaled. Adding or improving a monolithic application's features becomes more complex as the code base grows. This complexity limits experimentation and makes it difficult to implement new ideas. Monolithic architectures add risk for application availability because many dependent and tightly coupled processes increase the impact of a single process failure.
Data tier
The data tier in DevOps refers to the layer of the application architecture that is responsible for
storing, retrieving, and processing data. The data tier is typically composed of databases, data
warehouses, and data processing systems that manage large amounts of structured and
unstructured data.
In DevOps, the data tier is considered an important aspect of the overall application architecture
and is typically managed as part of the DevOps process. This includes:
1. Data backup and recovery: Implementing data backup and recovery strategies to ensure that data can be recovered in case of failures or disruptions.
2. Data security: Implementing data security measures to protect sensitive information and comply with regulations.
3. Data integration: Integrating data from multiple sources to provide a unified view of data and support business decisions.
By integrating data management into the DevOps process, teams can ensure that data is properly managed and protected, and that data-driven applications and services perform well and deliver value to customers.
DevOps architecture and resilience
Development and operations both play essential roles in delivering applications. Development comprises analyzing the requirements and designing, developing, and testing the software components or frameworks. Operations consist of the administrative processes, services, and support for the software. When development and operations are combined in collaboration, DevOps architecture is the solution that closes the gap between deployment and operations, so delivery can be faster.
DevOps architecture is used for applications hosted on cloud platforms and for large distributed applications. Agile development is used in the DevOps architecture so that integration and delivery can be continuous. When the development and operations teams work separately from each other, it is time-consuming to design, test, and deploy, and if the teams are not in sync, it may cause a delay in delivery. So DevOps enables the teams to address their shortcomings and increases productivity.
Below are the various components that are used in the DevOps architecture:
1) Build
Without DevOps, the cost of resource consumption was evaluated based on pre-defined individual usage with fixed hardware allocation. With DevOps, the use of the cloud and the sharing of resources come into the picture, and the build depends on the user's need, which is a mechanism to control the usage of resources or capacity.
2) Code
Many good practices, such as using Git, enable the code to be used for the business, help to track changes, give notification about the reason behind a difference between the actual and the expected output, and, if necessary, allow reverting to the code originally developed. The code can be appropriately arranged in files, folders, etc., and it can be reused.
3) Test
The application will be ready for production after testing. Manual testing consumes more time in testing and moving the code to the output. The testing can be automated, which decreases the testing time so that the time to deploy the code to production can be reduced, as automating the running of the scripts removes many manual steps.
4) Plan
DevOps uses the Agile methodology to plan the development. With the operations and development teams in sync, it helps in organizing the work to plan accordingly, which increases productivity.
5) Monitor
Continuous monitoring is used to identify any risk of failure and to track the health of the application, so that issues can be found and resolved quickly.
6) Deploy
Many systems can support the scheduler for automated deployment. The cloud management platform enables users to capture accurate insights and view the optimization scenario and analytics on trends by the deployment of dashboards.
7) Operate
DevOps changes the traditional approach of developing and testing separately. The teams operate in a collaborative way, where both teams actively participate throughout the service lifecycle. The operations team interacts with developers, and they come up with a monitoring plan which serves the IT and business requirements.
8) Release
A release is the deployment of the application to the production environment. Deployment to lower environments is usually automated, while the release to production is often gated by a manual approval to lessen the impact on customers.
DevOps resilience
DevOps resilience refers to the ability of a DevOps system to withstand and recover from failures and disruptions. This means ensuring that the systems and processes used in DevOps are robust, scalable, and able to adapt to changing conditions. Some of the key components of DevOps resilience include:
1. Infrastructure automation: Automating infrastructure deployment, scaling, and
management helps to ensure that systems are deployed consistently and are easier to
manage in case of failures or disruptions.
2. Monitoring and logging: Monitoring systems, applications, and infrastructure in real time and collecting logs can help detect and diagnose issues quickly, reducing downtime.
3. Disaster recovery: Having a well-designed disaster recovery plan and regularly testing it
can help ensure that systems can quickly recover from disruptions.
4. Continuous testing: Continuously testing systems and applications can help identify and
fix issues before they become critical.
5. High availability: Designing systems for high availability helps to ensure that systems
remain up and running even in the event of failures or disruptions.
By focusing on these components, DevOps teams can create a resilient and adaptive DevOps
system that is able to deliver high-quality applications and services, even in the face of failures
and disruptions.
Unit 3
Introduction to project management
The need for source code control:
Source code control (also known as version control) is an essential part of DevOps practices.
Here are a few reasons why:
Collaboration: Source code control allows multiple team members to work on the same
codebase simultaneously and track each other's changes.
Traceability: Source code control systems provide a complete history of changes to the code,
enabling teams to trace bugs, understand why specific changes were made, and roll back to
previous versions if necessary.
Branching and merging: Teams can create separate branches for different features or bug fixes, then merge the changes back into the main codebase. This helps to ensure that different parts of the code can be developed independently, without interfering with each other.
Continuous integration and delivery: Source code control systems are integral to continuous
integration and delivery (CI/CD) pipelines, where changes to the code are automatically built,
tested, and deployed to production.
In summary, source code control is a critical component of DevOps practices, as it enables teams to collaborate, manage changes to code, and automate the delivery of software.
Roles:
● Development team: responsible for writing and testing code.
● Operations team: responsible for the deployment and maintenance of the code in production.
● DevOps team: responsible for bridging the gap between development and operations, ensuring that code is delivered quickly and reliably to production.
Code:
● Code is the backbone of DevOps and represents the software that is being developed,
tested, deployed, and maintained.
● Code is managed using source code control systems like Git, which provide a way to
track changes to the code over time, collaborate on the code with other team members,
and automate the build, test, and deployment process.
● Code is continuously integrated and tested, ensuring that any changes to the code do not
cause unintended consequences in the production environment.
In conclusion, both roles and code play a critical role in DevOps. Teams work together to ensure that code is developed, tested, and delivered quickly and reliably to production, while operations teams maintain the code in production and respond to any issues that arise.
Overall, SCM has been an important part of the evolution of DevOps, enabling teams to
collaborate, manage code changes, and automate the software delivery process.
Source code management system and migrations
● A source code management (SCM) system is a software application that provides version control for source code. It tracks changes made to the code over time, enabling teams to revert to previous versions if necessary, and helps ensure that code can be collaborated on by multiple team members.
● SCM systems typically provide features such as version tracking, branching and merging, change history, and rollback capabilities. Some popular SCM systems include Git, Subversion, Mercurial, and Microsoft Team Foundation Server.
● Source code management (SCM) systems are often used to manage code migrations, which are the process of moving code from one environment to another. This is typically done as part of a software development project, where code is moved from a development environment to a testing environment and finally to a production environment.
SCM systems provide a number of benefits for managing code migrations, including:
1. Version control
2. Branching and merging
3. Rollback
4. Collaboration
5. Automation
1) Version control: SCM systems keep a record of all changes to the code, enabling teams to track the code as it moves through different environments.
Purpose of Version Control:
● Multiple people can work simultaneously on a single project. Everyone works on and edits their own copy of the files, and it is up to them when they wish to share the changes they have made with the rest of the team.
● It also enables one person to use multiple computers to work on a project, so it is valuable even if you are working by yourself.
● It integrates the work that is done simultaneously by different members of the team. In some rare cases, when conflicting edits are made by two people to the same line of a file, human assistance is requested by the version control system in deciding what should be done.
● Version control provides access to the historical versions of a project. This is insurance against computer crashes or data loss. If any mistake is made, you can easily roll back to a previous version. It is also possible to undo specific edits without losing the work done in the meanwhile. It can easily be known when, why, and by whom any part of a file was edited.
Benefits of the version control system:
● Enhances the project development speed by providing efficient collaboration,
● Leverages the skills of the employees and expedites product delivery through better communication and assistance,
● Reduces the possibility of errors and conflicts during project development through traceability of every small change,
● Employees or contributors of the project can contribute from anywhere, irrespective of their geographical location, through the VCS,
● For each different contributor to the project, a different working copy is maintained and not merged into the main file unless the working copy is validated (the most popular examples are Git, Helix Core, and Microsoft TFS),
● Helps in recovery in case of any disaster or contingent situation,
● Informs us about who, what, when, and why changes have been made.
Types of Version Control Systems:
● Local Version Control Systems
● Centralized Version Control Systems
● Distributed Version Control Systems
Local Version Control Systems: This is one of the simplest forms and has a database that keeps all the changes to files under revision control. RCS is one of the most common VCS tools. It keeps patch sets (differences between files) in a special format on disk. By adding up all the patches, it can then re-create what any file looked like at any point in time.
Centralized Version Control Systems: Centralized version control systems contain just one repository globally, and every user needs to commit for their changes to be reflected in the repository. It is possible for others to see your changes by updating.
Two things are required to make your changes visible to others:
● You commit
● They update
The benefit of CVCS (Centralized Version Control Systems) is that it enables collaboration amongst developers and provides insight, to a certain extent, into what everyone else is doing on the project. It allows administrators fine-grained control over who can do what.
It has some downsides as well, which led to the development of DVCS. The most obvious is the single point of failure that the centralized repository represents: if it goes down, collaboration and saving versioned changes are not possible during that period. What if the hard disk of the central database becomes corrupted, and proper backups haven't been kept? You lose absolutely everything.
Distributed Version Control Systems:
Distributed version control systems contain multiple repositories. Each user has their own repository and working copy. Just committing your changes will not give others access to them, because a commit only reflects those changes in your local repository; you need to push them in order to make them visible on the central repository. Similarly, when you update, you do not get others' changes unless you have first pulled those changes into your repository.
To make your changes visible to others, four things are required:
● You commit
● You push
● They pull
● They update
The most popular distributed version control systems are Git and Mercurial. They help us overcome the problem of a single point of failure.
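As a minimal sketch of this commit/push/pull flow (assuming a remote named origin and a main branch; names are illustrative):

git commit -am "Describe the change"   # record the change in your local repository
git push origin main                   # make it visible on the central repository
# ...then, on a teammate's machine:
git pull origin main                   # fetch and merge the change into their repository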
2) Branching and merging: Teams can create separate branches of code for different
environments, making it easier to manage the migration process.
Branching and merging are key concepts in Git-based version control systems, and are widely
used in DevOps to manage the development of software.
Branching in Git allows developers to create a separate line of development for a new feature or bug fix. This allows developers to make changes to the code without affecting the main branch, and to collaborate with others on the same feature or bug fix.
Merging in Git is the process of integrating changes made in one branch into another branch. In
DevOps, merging is often used to integrate changes made in a feature branch into the main
branch, incorporating the changes into the codebase.
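For example, a hypothetical feature branch can be created and later merged back like this (the branch names are illustrative):

git checkout -b feature/login    # create and switch to a feature branch
git commit -am "Add login form"  # commit work on the branch
git checkout main                # switch back to the main branch
git merge feature/login          # integrate the feature into main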
Branching and merging thus let work proceed in isolation and be integrated in a controlled way, which provides several benefits in DevOps.
3) Rollback: In the event of a problem during a migration, teams can quickly revert to a previous version of the code.
Rollback in DevOps refers to the process of reverting a change or returning to a previous version of a system, application, or infrastructure component. Rollback is an important capability in DevOps, as it provides a way to quickly and efficiently revert changes that have unintended consequences or cause problems in production.
There are several approaches to rollback in DevOps, including:
Version control: By using a version control system, such as Git, DevOps teams can revert to a
previous version of the code by checking out an earlier commit.
Infrastructure as code: By using infrastructure as code tools, such as Terraform or Ansible,
DevOps teams can roll back changes to their infrastructure by re-applying an earlier version of
the code.
Continuous delivery pipelines: DevOps teams can use continuous delivery pipelines to
automate the rollback process, by automatically reverting changes to a previous version of the
code or infrastructure if tests fail or other problems are detected.
Snapshots: DevOps teams can use snapshots to quickly restore an earlier version of a system or
infrastructure component.
Overall, rollback is an important capability in DevOps, providing a way to quickly revert changes that have unintended consequences or cause problems in production. By using a combination of version control, infrastructure as code, continuous delivery pipelines, and snapshots, DevOps teams can ensure that their systems and applications can be quickly and easily rolled back to a previous version if needed.
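As a minimal sketch of the version-control approach to rollback (the commit hash is illustrative):

git log --oneline        # find the commit that introduced the problem
git revert a1b2c3d       # create a new commit that undoes commit a1b2c3d
# or restore a single file from an older commit without rewriting history:
git checkout a1b2c3d -- path/to/file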
4) Collaboration: SCM systems enable teams to collaborate on code migrations, with team members working on different aspects of the migration process simultaneously.
Collaboration is a key aspect of DevOps, as it helps to bring together development, operations,
and other teams to work together towards a common goal of delivering high-quality software
quickly and efficiently.
In DevOps, collaboration is facilitated by a range of tools and practices, including:
Version control systems: By using a version control system, such as Git, teams can collaborate
on code development, track changes to source code, and merge code changes from multiple
contributors.
Continuous integration and continuous deployment (CI/CD): By automating the build, test, and deployment of code, CI/CD pipelines help to streamline the development process and reduce the risk of introducing bugs or other issues into the codebase.
Code review: By using code review tools, such as pull requests, teams can collaborate on code development, share feedback, and ensure that changes are thoroughly reviewed and tested before they are integrated into the codebase.
Issue tracking: By using issue tracking tools, such as JIRA or GitHub Issues, teams can collaborate on resolving bugs, tracking progress, and managing the development of new features.
Communication tools: By using communication tools, such as Slack or Microsoft Teams, teams can collaborate and coordinate their work, share information, and resolve problems quickly and efficiently.
Overall, collaboration is a critical component of DevOps, helping teams to work together
effectively and efficiently to deliver high-quality software. By using a range of tools and
practices to facilitate collaboration, DevOps teams can improve the transparency, speed, and
quality of their software development processes.
5) Automation: Many SCM systems integrate with continuous integration and delivery (CI/CD) pipelines, enabling teams to automate the migration process.
In conclusion, SCM systems play a critical role in managing code migrations. They provide a
way to track code changes, collaborate on migrations, and automate the migration process,
enabling teams to deliver code quickly and reliably to production.
Shared authentication
Shared authentication in DevOps refers to the practice of using a common identity management system to control access to the various tools, resources, and systems used in software development and operations. This helps to simplify the process of managing users and permissions and ensures that everyone has the necessary access to perform their jobs. Examples of shared authentication systems include Active Directory, LDAP, and SAML-based identity providers.
Hosted Git servers
Hosted Git servers are online platforms that provide Git repository hosting services for software development teams. They are widely used in DevOps to centralize version control of source code, track changes, and collaborate on code development. Some popular hosted Git servers include GitHub, GitLab, and Bitbucket. These platforms offer features such as pull requests, code reviews, issue tracking, and continuous integration/continuous deployment (CI/CD) pipelines. By using a hosted Git server, DevOps teams can streamline their development processes and collaborate more efficiently on code projects.
Different Git server implementations
There are several different Git server implementations that organizations can use to host their Git repositories. Some of the most popular include:
GitHub: One of the largest Git repository hosting services, GitHub is widely used by developers for version control, collaboration, and code sharing.
GitLab: An open-source Git repository management platform that provides version control,
issue tracking, code review, and more.
Bitbucket: A web-based Git repository hosting service that provides version control, issue
tracking, and project management tools.
Gitea: An open-source Git server that is designed to be lightweight, fast, and easy to use.
Gogs: Another open-source Git server, Gogs is designed for small teams and organizations and
provides a simple, user-friendly interface.
GitBucket: A Git server written in Scala that provides a wide range of features, including issue
tracking, pull requests, and code reviews.
Organizations can choose the Git server implementation that best fits their needs, taking into
account factors such as cost, scalability, and security requirements.
Docker intermission
Docker is an open-source project with a friendly-whale logo that facilitates the deployment of
applications in software containers. It is a set of PaaS products that deliver containers (software
packages) using OS-level virtualization. It embodies resource isolation features of the Linux
kernel but offers a friendly API.
In simple words, Docker is a tool or platform designed to simplify the process of creating, deploying, packaging, and shipping applications along with their parts, such as libraries and other dependencies. Its primary purpose is to automate the application deployment process using operating-system-level virtualization on Linux. It allows multiple containers to run on the same hardware, provides high productivity, maintains isolated applications, and facilitates seamless configuration.
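As a quick, hedged sketch of the basic Docker workflow (the image name, tag, and ports are illustrative):

docker build -t myapp:1.0 .          # package the app and its dependencies into an image
docker run -d -p 8080:80 myapp:1.0   # start an isolated container from that image
docker ps                            # list the running containers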
Docker benefits include:
● High ROI and cost savings
● Productivity and standardization
● Maintenance and compatibility
● Rapid deployment
● Faster configurations
● Seamless portability
● Continuous testing and deployment
● Isolation, segregation, and security
Docker vs. Virtual Machines
Space allocation: You cannot share data volumes with VMs, but you can share and reuse them
among various Docker containers.
Portability: With VMs, you can face compatibility issues while porting across different
platforms; Docker is easily portable.
On both counts, Docker is the clear winner.
Gerrit
Gerrit is a web-based code review tool which is integrated with Git and built on top of the Git version control system (it helps developers work together and maintain the history of their work). It allows merging changes into the Git repository when you are done with the code reviews.
Gerrit was developed by Shawn Pearce at Google and is written in Java, Servlet, and GWT (Google Web Toolkit). The stable release of Gerrit is 2.12.2, published on March 11, 2016, and licensed under Apache License v2.
Why Use Gerrit?
Following are certain reasons why you should use Gerrit.
● You can easily find the error in the source code using Gerrit.
● You can work with Gerrit if you have a regular Git client; there is no need to install any Gerrit client.
● Gerrit can be used as an intermediary between developers and Git repositories.
Features of Gerrit
● Gerrit is a free and open-source Git version control system.
● The user interface of Gerrit is built on Google Web Toolkit.
● It is a lightweight framework for reviewing every commit.
● Gerrit acts as a repository, which allows pushing the code and creates the review for your commit.
Advantages of Gerrit
● Gerrit provides access control for Git repositories and a web front-end for code review.
● You can push the code without using additional command-line tools.
● Gerrit can allow or decline permission on the repository level and down to the branch level.
● Gerrit is supported by Eclipse.
Disadvantages of Gerrit
● Reviewing, verifying, and resubmitting the code commits slows down the time to market.
● Gerrit can work only with Git.
● Gerrit is slow, and it's not possible to change the sort order in which changes are listed.
● You need administrator rights to add a repository on Gerrit.
What is Gerrit?
Gerrit is a highly extensible and configurable tool for web-based code review and repository management for projects using the Git version control system. Gerrit is equally useful where all users are trusted committers, for example, as might be the case with closed-source commercial development.
Use case of Gerrit
● Knowledge exchange:
o The code review process allows newcomers to see the code of other, more experienced developers.
o Developers can get feedback on their suggested changes.
o Experienced developers can help to evaluate the impact on the whole code.
o Shared code ownership: by reviewing the code of other developers, the whole team gets a solid knowledge of the complete code base.
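As an illustrative sketch, a change is typically pushed to Gerrit for review through the refs/for/ namespace rather than directly to the branch (the branch name is illustrative):

git commit -am "Fix validation bug"
git push origin HEAD:refs/for/master   # creates a review in Gerrit instead of updating master directly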
The pull request model
Pull request is a feature of Git-based version control systems that allows developers to propose changes to a Git repository and request feedback or approval from other team members. It is widely used in DevOps to facilitate collaboration and code review in the software development process.
In the pull request model, a developer creates a new branch in a Git repository, makes changes to the code, and then opens a pull request to merge the changes into the main branch. Other team members can then review the changes, provide feedback, and approve or reject the request.
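A minimal sketch of this workflow using the GitHub CLI (the branch name, title, and body are illustrative):

git checkout -b feature/report-export
git commit -am "Add CSV export"
git push -u origin feature/report-export
gh pr create --title "Add CSV export" --body "Exports reports as CSV files"   # opens the pull request for review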
Pull Requests are a mechanism popularized by GitHub, used to help facilitate merging of work, particularly in the context of open-source projects. A contributor works on their contribution in a fork (clone) of the central repository. Once their contribution is finished, they create a pull request to notify the owner of the central repository that their work is ready to be merged into the mainline. Tooling supports and encourages code review of the contribution before accepting the request. Pull requests have become widely used in software development, but critics are concerned by the addition of integration friction which can prevent continuous integration.
Once the maintainer gets the message, she can then examine the commits to decide if they are ready to go into mainline. If not, she can then suggest changes to the contributor, who then has the opportunity to adjust their submission. Once all is OK, the maintainer can then merge, either with a regular merge/rebase or by applying the patches from the final email.
So that's how pull requests work, but should we use them, and if so, how? To answer that question, I like to step back from the mechanism and think about how it works in the context of a source code management workflow. To help me think about that, I wrote down a series of patterns for managing source code branching. I find understanding these (specifically the Base and Integration patterns) clarifies the role of pull requests.
In terms of these patterns, pull requests are a mechanism designed to implement a combination of Feature Branching and Pre-Integration Reviews. Thus, to assess the usefulness of pull requests, we first need to consider how applicable those patterns are to our situation. Like most patterns, they are sometimes valuable and sometimes a pain in the neck - we have to examine them based on our specific context. Feature Branching is a good way of packaging together a logical contribution so that it can be assessed, accepted, or deferred as a single unit. This makes a lot of sense when contributors are not trusted to commit directly to mainline. But Feature Branching comes at a cost, which is that it usually limits the frequency of integration, leading to complicated merges and deterring refactoring. Pre-Integration Reviews provide a clear place to do code review at the cost of a significant increase in integration friction. [1]
That's a drastic summary of the situation (I need a lot more words to explain this further in the feature branching article), but it boils down to the fact that the value of these patterns, and thus the value of pull requests, rests mostly on the social structure of the team. Some teams work better with pull requests; some teams would find pull requests a severe drag on their effectiveness. I suspect that since pull requests are so popular, a lot of teams are using them by default when they would do better without them.
While pull requests are built for Feature Branches, teams can use them within a Continuous Integration environment. To do this they need to ensure that pull requests are small enough, and the team responsive enough, to follow the CI rule of thumb that everybody does Mainline Integration at least daily. (And I should remind everyone that Mainline Integration is more than just merging the current mainline into the feature branch.) Using the ship/show/ask classification can be an effective way to integrate pull requests into a more CI-friendly workflow.
The wide usage of pull requests has encouraged a wider use of code review, since pull requests provide a clear point for Pre-Integration Review, together with tooling that encourages it. Code review is a Good Thing, but we must remember that a pull request isn't the only mechanism we can use for it. Many teams find great value in the continuous review afforded by Pair Programming. To avoid reducing integration frequency, we can carry out post-integration code review in several ways. A formal process can record a review for each commit, or a tech lead can examine risky commits every couple of days. Perhaps the most powerful form of code review is one that's frequently ignored. A team that takes the attitude that the codebase is a fluid system, one that can be steadily refined with repeated iteration, carries out Refinement Code Review every time a developer looks at existing code. I often hear people say that pull requests are necessary because without them you can't do code reviews - that's rubbish. Pre-integration code review is just one way to do code reviews, and for many teams it isn't the best choice.
The pull request model provides several benefits in DevOps:
Improved code quality: Pull requests encourage collaboration and code review, helping to catch potential bugs and issues before they make it into the main codebase.
Increased transparency: Pull requests provide a clear audit trail of all changes made to the code, making it easier to understand how code has evolved over time.
Better collaboration: Pull requests allow developers to share their work and get feedback from others, improving collaboration and communication within the development team.
GitLab
GitLab is an open-source Git repository management platform that provides a wide range of
features for software development teams. It is commonly used in DevOps for version control,
issue tracking, code review, and continuous integration/continuous deployment (CI/CD)
pipelines.
GitLab provides a centralized platform for teams to manage their Git repositories, track changes
to source code, and collaborate on code development. It offers a range of tools to support code
review and collaboration, including pull requests, code comments, and merge request approvals.
In addition, GitLab provides a CI/CD pipeline tool that allows teams to automate the process of
building, testing, and deploying code. This helps to streamline the development process and
reduce the risk of introducing bugs or other issues into the codebase.
Overall, GitLab is a comprehensive Git repository management platform that provides a wide range of tools and features for software development teams. By using GitLab, DevOps teams can improve the efficiency, transparency, and collaboration of their software development processes.
What is Git?
Git is a distributed version control system, which means that a local clone of the project is a
complete version control repository. These fully functional local repositories make it easy to
work offline or remotely. Developers commit their work locally, and then sync their copy of the
repository with the copy on the server. This paradigm differs from centralized version control
where clients must synchronize code with a server before creating new versions of code.
Git's flexibility and popularity make it a great choice for any team. Many developers and college graduates already know how to use Git. Git's user community has created resources to train developers, and Git's popularity makes it easy to get help when needed. Nearly every development environment has Git support, and Git command-line tools are implemented on every major operating system.
Git basics
Every time work is saved, Git creates a commit. A commit is a snapshot of all files at a point in time. If a file hasn't changed from one commit to the next, Git uses the previously stored file.
This design differs from other systems that store an initial version of a file and keep a record of
deltas over time.
Commits create links to other commits, forming a graph of the development history. It's possible to revert code to a previous commit, inspect how files changed from one commit to the next, and review information such as where and when changes were made. Commits are identified in Git by a unique cryptographic hash of the contents of the commit. Because everything is hashed, it's impossible to make changes, lose information, or corrupt files without Git detecting it.
Branches
Each developer saves changes to their own local code repository. As a result, there can be many different changes based off the same commit. Git provides tools for isolating changes and later merging them back together. Branches, which are lightweight pointers to work in progress, manage this separation. Once work created in a branch is finished, it can be merged back into the team's main (or trunk) branch.
Files and commits
Files in Git are in one of three states: modified, staged, or committed. When a file is first modified, the changes exist only in the working directory. They aren't yet part of a commit or the development history. The developer must stage the changed files to be included in the commit. The staging area contains all changes to include in the next commit. Once the developer is happy with the staged files, the files are packaged as a commit with a message describing what changed. This commit becomes part of the development history.
Staging lets developers pick which file changes to save in a commit in order to break down large changes into a series of smaller commits. By reducing the scope of commits, it's easier to review the commit history to find specific file changes.
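A small sketch of the modified → staged → committed flow (the file name and message are illustrative):

git status                                    # show files modified in the working directory
git add src/app.py                            # stage just this file for the next commit
git commit -m "Fix off-by-one in pagination"  # package the staged changes as a commit
git log --oneline                             # review the resulting development history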
Benefits of Git
The benefits of Git are many.
Simultaneous development
Everyone has their own local copy of code and can work simultaneously on their own branches.
Git works offline since almost every operation is local.
Faster releases
Branches allow for flexible and simultaneous development. The main branch contains stable,
high-quality code from which you release. Feature branches contain work in progress, which are
merged into the main branch upon completion. By separating the release branch from
development in progress, it's easier to manage stable code and ship updates more quickly.
Built-in integration
Due to its popularity, Git integrates into most tools and products. Every major IDE has built-in
Git support, and many tools support continuous integration, continuous deployment, automated
testing, work item tracking, metrics, and reporting feature integration with Git. This integration
simplifies the day-to-day workflow.
Strong community support
Git is open source and has become the de facto standard for version control, so there is a large community producing documentation, tutorials, and tooling, and help is easy to find.
Git works with any team
Using Git with a source code management tool increases a team's productivity by encouraging
collaboration, enforcing policies, automating processes, and improving visibilityand traceability
of work. The team can settle on individual tools for version control, work item tracking, and
continuous integration and deployment. Or, they can choose a solution like GitHub or Azure DevOps that supports all of these tasks in one place.
Pull requests
Use pull requests to discuss code changes with the team before merging them into the main branch. The discussions in pull requests are invaluable for ensuring code quality and increasing knowledge across your team. Platforms like GitHub and Azure DevOps offer a rich pull request
experience where developers can browse file changes, leave comments, inspect commits, view
builds, and vote to approve the code.
Branch policies
Teams can configure GitHub and Azure DevOps to enforce consistent workflows and processes across the team. They can set up branch policies to ensure that pull requests meet requirements before completion. Branch policies protect important branches by preventing direct pushes,
requiring reviewers, and ensuring clean builds.
Unit 4
Integrating the system
Build systems
A build system is a key component in DevOps, and it plays an important role in the software
development and delivery process. It automates the process of compiling and packaging source
code into a deployable artifact, allowing for efficient and consistent builds.
Herearesomeofthekeyfunctionsperformedbyabuild system:
Compilation: The build system compiles the source code into a machine-executable format, such
as a binary or an executable jar file.
Dependency Management: The build system ensures that all required dependencies are
available and properly integrated into the build artifact. This can include external libraries,
components, and other resources needed to run the application.
Testing: The build system runs automated tests to ensure that the code is functioning as intended,
and to catch any issues early in the development process.
Packaging: The build system packages the compiled code and its dependencies into a single,
deployable artifact, such as a Docker image or a tar archive.
Version Control: The build system integrates with version control systems, such as Git, to track
changes to the code and manage releases.
Continuous Integration: The build system can be configured to run builds automatically
whenever changes are made to the code, allowing for fast feedback and continuous integration of new
code into the main branch.
Deployment: The build system can be integrated with deployment tools and processes to
automate the deployment of the build artifact to production environments.
In DevOps, it's important to have a build system that is fast, reliable, and scalable, and that can
integrate with other tools and processes in the software development and delivery pipeline. There are many build systems available, each with its own set of features and capabilities, and choosing the right one will depend on the specific needs of the project and team.
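As a hedged illustration of what a build system automates, a Maven-based Java project (the project specifics are assumed) would run the standard lifecycle phases:

mvn compile    # compile the source code
mvn test       # run the automated unit tests
mvn package    # bundle the compiled code and resources into a deployable jar
mvn deploy     # publish the artifact to a repository for downstream deployment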
Jenkins build server
What is Jenkins?
Jenkins is an open source automation tool written in Java programming language that allows
continuous integration.
With the help of Jenkins, organizations can speed up the software development process through
automation. Jenkins adds development life-cycle processes of all kinds, including build,
document, test, package, stage, deploy, static analysis, and much more.
Jenkins achieves CI (Continuous Integration) with the help of plugins. Plugins are used to allow the integration of various DevOps stages. If you want to integrate a particular tool, you have to install the plugins for that tool. For example: Maven 2 Project, Git, HTML Publisher, Amazon EC2, etc.
For example: if an organization is developing a project, then Jenkins will continuously test your project builds and show you the errors in the early stages of your development.
Possible steps executed by Jenkins are, for example:
o Perform a software build using a build system like Gradle or Apache Maven
o Execute a shell script
o Archive a build result
o Run software tests
Jenkins workflow
Jenkins Master-Slave Architecture
As you can see in the diagram provided above, on the left is the remote source code repository. The Jenkins server accesses the master environment on the left side, and the master environment can push down to multiple other Jenkins slave environments to distribute the workload.
That lets you run multiple builds, tests, and production environments across the entire architecture. Jenkins slaves can be running different build versions of the code for different operating systems, and the server master controls how each of the builds operates.
Supported on a master-slave architecture, Jenkins comprises many slaves working for a master.
This architecture - the Jenkins Distributed Build - can run identical test cases in different
environments. Results are collected and combined on the master node for monitoring.
Jenkins Applications
1. Increased Code Coverage
Code coverage is determined by the number of lines of code a component has and how many of
them get executed. Jenkins increases code coverage which ultimately promotes a transparent
development process among the team members.
2. No Broken Code
Jenkins ensures that the code is good and tested well through continuous integration. The final
code is merged only when all the tests are successful. This makes sure that no broken code is
shipped into production.
Jenkins offers many attractive features for developers:
● Easy Installation
● Easy Configuration
● Available Plugins
● Extensible
Jenkins can be extended by means of its plugin architecture, providing nearly endless possibilities for
what it can do.
● Easy Distribution
Jenkins can easily distribute work across multiple machines for faster builds, tests, and deployments
across multiple platforms.
● Free Open Source
Jenkins is an open-source resource backed by heavy community support.
As a part of our learning about what Jenkins is, let us next learn about the Jenkins architecture.
Jenkins build server
Jenkins is a popular open-source automation server that helps developers automate parts of the software
development process. A Jenkins build server is responsible for building, testing, and deploying software
projects.
A Jenkins build server is typically set up on a dedicated machine or a virtual machine, and is used to
manage the continuous integration and continuous delivery (CI/CD) pipeline for a software project. The
build server is configured with all the necessary tools, dependencies, and plugins to build, test, and deploy the project.
The build process in Jenkins typically starts with code being committed to a version control system (such as Git), which triggers a build on the Jenkins server. The Jenkins server then checks out the code, builds it, runs tests on it, and if everything is successful, deploys the code to a staging or production environment.
Jenkins has a large community of developers who have created hundreds of plugins that extend its
functionality, so it's easy to find plugins to support specific tools, technologies, and workflows. For
example, there are plugins for integrating with cloud infrastructure, running security scans, deploying to
various platforms, and more.
Overall, a Jenkins build server can greatly improve the efficiency and reliability of the software
development process by automating repetitive tasks, reducing the risk of manual errors, and enabling
developers to focus on writing code.
Managing build dependencies
Managing build dependencies is an important aspect of continuous integration and continuous
delivery (CI/CD) pipelines. In software development, dependencies refer to external libraries,
tools, or resources that a project relies on to build, test, and deploy. Proper management of
dependencies can ensure that builds are repeatable and that the build environment is consistent
and up-to-date.
Here are some common practices for managing build dependencies in Jenkins:
Dependency Management Tools: Utilize tools such as Maven, Gradle, or npm to manage dependencies and automate the process of downloading and installing required dependencies for a build.
Version Pinning: Specify exact versions of dependencies to ensure builds are consistent and
repeatable.
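For instance (the package name and version are illustrative), npm can pin an exact version so that every build resolves the same dependency:

npm install lodash@4.17.21 --save-exact   # records "4.17.21" rather than a "^4.17.21" range in package.json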
Beyond dependency management, Jenkins relies on plugins to integrate with the wider toolchain. Some commonly used plugins include:
Git Plugin: This plugin integrates Jenkins with the Git version control system, allowing you to pull code changes, build and test them, and deploy the code to production.
Maven Plugin: This plugin integrates Jenkins with Apache Maven, a build automation tool commonly used in Java projects.
Amazon Web Services (AWS) Plugin: This plugin allows you to integrate Jenkins with
Amazon Web Services (AWS), making it easier to run builds, tests, and deployments on AWS
infrastructure.
Slack Plugin: This plugin integrates Jenkins with Slack, allowing you to receive notifications about build status, failures, and other important events in your Slack channels.
Blue Ocean Plugin: This plugin provides a new and modern user interface for Jenkins, making it easier to use and navigate.
Pipeline Plugin: This plugin provides a simple way to define and manage complex CI/CD pipelines in Jenkins.
Jenkins plugins are easy to install and can be managed through the Jenkins web interface. There are hundreds of plugins available, covering a wide range of tools, technologies, and use cases, so you can easily find the plugins that best meet your needs.
By using plugins, you can greatly improve the efficiency and automation of your software
development process, and make it easier to integrate Jenkins with the tools and workflows you
use.
Git Plugin
The Git Plugin is a popular plugin for Jenkins that integrates the Jenkins automation server with the Git version control system. This plugin allows you to pull code changes from a Git
repository, build and test the code, and deploy it to production.
With the Git Plugin, you can configure Jenkins to automatically build and test your code
whenever changes are pushed to the Git repository. You can also configure it to build and test
code on a schedule, such as once a day or once a week.
The Git Plugin provides a number of features for managing code changes, including:
Branch and Tag builds: You can configure Jenkins to build specific branches or tags from your Git repository.
Pull Requests: You can configure Jenkins to build and test pull requests from your Git repository, allowing you to validate code changes before merging them into the main branch.
Build Triggers: You can configure Jenkins to build and test code changes whenever changes are pushed to the Git repository or on a schedule.
Code Quality Metrics: The Git Plugin integrates with tools such as SonarQube to provide code
quality metrics, allowing you to track and improve the quality of your code over time.
Notification and Reporting: The Git Plugin provides notifications and reports on build status,
failures, and other important events. You can configure Jenkins to send notifications via email,
Slack, or other communication channels.
By using the Git Plugin, you can streamline your software development process and make it easier to manage code changes and collaborate with other developers on your team.
Filesystem layout
In DevOps, the filesystem layout refers to the organization and structure of files and directories on the systems and servers used for software development and deployment. A well-designed filesystem layout is critical for efficient and reliable operations in a DevOps environment.
Here are some common elements of a filesystem layout in DevOps:
Code Repository: A central code repository, such as Git, is used to store and manage source code, configuration files, and other artifacts.
Build Artifacts: Build artifacts, such as compiled code, are stored in a designated directory for
easy access and management.
Dependencies: Directories for storing dependencies, such as libraries and tools, are designated
for easy management and version control.
Configuration Files: Configuration files, such as YAML or JSON files, are stored in a designated directory for easy access and management.
Log Files: Log files generated by applications, builds, and deployments are stored in a designated directory for easy access and management.
Backup and Recovery: Directories for storing backups and recovery data are designated for easy management and to ensure business continuity.
Environment-specific Directories: Directories are designated for each environment, such as
development, test, and production, to ensure that the correct configuration files and artifacts are
used for each environment.
By following a well-designed filesystem layout in a DevOps environment, you can improve the efficiency, reliability, and security of your software development and deployment processes.
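A hypothetical layout following these conventions might look like this (all directory names are illustrative):

project/
├── src/       # application source code, tracked in the code repository
├── build/     # build artifacts produced by CI
├── deps/      # dependencies such as libraries and tools
├── config/    # YAML/JSON configuration files
├── logs/      # application, build, and deployment logs
├── backup/    # backup and recovery data
└── envs/
    ├── dev/   # development environment configuration
    ├── test/  # test environment configuration
    └── prod/  # production environment configuration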
The host server
In Jenkins, a host server refers to the physical or virtual machine that runs the Jenkins automation
server. The host server is responsible for running the Jenkins process and providing resources,
such as memory, storage, and CPU, for executing builds and other tasks.
The host server can be either a standalone machine or part of a network or cloud-based
infrastructure. When running Jenkins on a standalone machine, the host server is responsible for
all aspects of the Jenkins installation, including setup, configuration, and maintenance.
When running Jenkins on a network or cloud-based infrastructure, the host server is responsible
for providing resources for the Jenkins process, but the setup, configuration, and maintenance
may be managed by other components of the infrastructure.
By providing the necessary resources and ensuring the stability and reliability of the host server,
you can ensure the efficient operation of Jenkins and the success of your software development
and deployment processes.
To host a server in Jenkins, you'll need to follow these steps:
Install Jenkins: You can install Jenkins on a server by downloading the Jenkins WAR file, deploying it to a servlet container such as Apache Tomcat, and starting the server (a minimal sketch follows these steps).
Configure Jenkins: Once Jenkins is up and running, you can access its web interface to
configure and manage the build environment. You can install plugins, set up security, and
configure build jobs.
Create a Build Job: To build your project, you'll need to create a build job in Jenkins. This will
define the steps involved in building your project, such as checking out the code from version
control, compiling the code, running tests, and packaging the application.
Schedule Builds: You can configure your build job to run automatically at a specific time or
when certain conditions are met. You can also trigger builds manually from the web interface.
Monitor Builds: Jenkins provides a variety of tools for monitoring builds, such as build history, build console output, and build artifacts. You can use these tools to keep track of the status of your builds and to diagnose problems when they occur.
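As a minimal sketch of the install-and-start step (the port is illustrative; the WAR can instead be deployed to Tomcat), Jenkins can be run directly from its WAR file:

wget https://get.jenkins.io/war-stable/latest/jenkins.war   # download the stable Jenkins WAR
java -jar jenkins.war --httpPort=8080                       # start Jenkins on port 8080
# then browse to http://localhost:8080 to complete the web-based setup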
Build slaves
The standard Jenkins installation includes Jenkins master, and in this setup, the master will be
managing all our build system's tasks. If we're working on a number of projects, we can run
numerous jobs on each one. Some projects require the use of specific nodes, which necessitates
the use of slave nodes.
The Jenkins master is in charge of scheduling jobs, assigning slave nodes, and sending
builds to slave nodes for execution. It will also keep track of the slave node state (offline or
online), retrieve build results from slave nodes, and display them on the terminal output. In most
installations, multiple slave nodes will be assigned to the task of building jobs.
Before we get started, let's double-check that we have the prerequisites in place for adding a slave node: a running Jenkins master, and a slave machine with Java installed that is reachable over SSH.
To configure the master server, we'll log in to the Jenkins server and follow the steps below.
First, we'll go to “Manage Jenkins -> Manage Nodes -> New Node” to create a new node:
On the next screen, we enter the “Node Name” (slaveNode1), select “Permanent Agent”,
then click “OK”:
After clicking “OK”, we'll be taken to a screen with a new form where we need to fill out the slave node's information. We're considering the slave node to be running on a Linux operating system, hence the launch method is set to “Launch agents via SSH”.
In the same way, we'll add relevant details, such as the name, description, and a
number of executors.
We'll save our work by pressing the “Save” button. The “Labels” with the name
“slaveNode1” will help us to set up jobs on this slave node:
4. Building the Project on Slave Nodes
Now that our master and slave nodes are ready, we'll discuss the steps for building the project on the slave node.
For this, we start by clicking “New Item” in the top left corner of the dashboard.
Next, we need to enter the name of our project in the “Enter an item name” field, select the “Pipeline” project type, and then click the “OK” button.
On the next screen, we'll enter a “Description” (optional) and navigate to the “Pipeline” section.
Make sure the “Definition” field has the Pipeline script option selected.
After this, we copy and paste the following scripted Pipeline into the “Script” field:
node('slaveNode1') {
    stage('Build') {
        sh '''echo build steps'''
    }
    stage('Test') {
        sh '''echo test steps'''
    }
}
Next, we click on the “Save” button. This will redirect to the Pipeline view page.
On the left pane, we click the “Build Now” button to execute our Pipeline. After Pipeline
execution is completed, we'll see the Pipeline view:
We can verify the history of the executed build under the Build History by clicking the build number. As shown above, when we click on the build number and select “Console Output”, we can see that the pipeline ran on our slaveNode1 machine.
Software on the host
To run software on the host in Jenkins, you need to have the necessary dependencies and tools
installed on the host machine. The exact software you'll need will depend on the specific
requirements of your project and build process. Some common tools and software used in Jenkins
include:
Java: Jenkins is written in Java and requires Java to be installed on the host machine.
Trigger
These are the most common Jenkins build triggers:
● Trigger builds remotely
● Build after other projects are built
● Build periodically
● GitHub hook trigger for Git SCM polling
● Poll SCM
1. Trigger builds remotely:
If you want to trigger your project build from anywhere at any time, then you should select the Trigger builds remotely option from the build triggers.
JENKINS_URL/job/JobName/build?token=TOKEN_NAME
TOKEN_NAME: the token you provided while selecting this build trigger.
//Example:
http://e330c73d.ngrok.io/job/test/build?token=12345
Whenever you hit this URL from anywhere, your project build will start.
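For example, the trigger can be fired from a script (the host, job name, and token are illustrative):

curl "http://jenkins.example.com/job/test/build?token=12345"   # queues a build of the test job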
2. Build after other projects are built
In this, you must specify the project (job) names in the Projects to watch field section and select one of the following options:
● Trigger only if the build is stable
● Trigger even if the build is unstable
● Trigger even if the build fails
After that, it starts watching the specified projects in the Projects to watch section. Whenever the build of a specified project completes (whether stable, unstable, or failed, according to your selected option), this project's build is invoked.
3) Build periodically:
You must specify the periodic schedule of the project build in the Schedule field section.
This field follows the syntax of cron (with minor differences). Specifically, each line consists of 5 fields separated by TAB or whitespace:
MINUTE HOUR DOM MONTH DOW
MINUTE  The minute within the hour (0–59)
HOUR    The hour of the day (0–23)
DOM     The day of the month (1–31)
MONTH   The month (1–12)
DOW     The day of the week (0–7), where 0 and 7 are Sunday
To specify multiple values for one field, the following operators are available. In order of precedence:
● * specifies all valid values
● M-N specifies a range of values
● M-N/X or */X steps by intervals of X through the specified range or the whole valid range
● A,B,...,Z enumerates multiple values
Examples:
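A few illustrative schedule lines (these follow the standard Jenkins cron syntax, where H hashes the job name to spread load):

# every fifteen minutes (perhaps at :07, :22, :37, :52)
H/15 * * * *
# once a day, somewhere between 04:00 and 04:59
H 4 * * *
# at 22:00 on every weekday night
0 22 * * 1-5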
After the project build is successfully scheduled, the scheduler will invoke the build periodically according to your specified schedule.
4) GitHub webhook trigger for Git SCM polling:
GitHub webhooks in Jenkins are used to trigger the build whenever a developer commits something to the branch.
http://e330c73d.ngrok.io/github-webhook
Here https://e330c73d.ngrok.io/ is the IP and port where my Jenkins is running.
5) Poll SCM:
Poll SCM periodically polls the SCM to check whether changes were made (i.e. new commits)
and builds the project if new commits were pushed since the last build.
You must schedule the polling interval in the Schedule field, as explained above in the Build periodically section; see that section to know how to schedule. After it is successfully scheduled, the scheduler polls the SCM at the specified interval and builds the project if new commits were pushed since the last build.
Job chaining
Job chaining in Jenkins refers to the process of linking multiple build jobs together in a sequence.
When one job completes, the next job in the sequence is automatically triggered. This allows you
to create a pipeline of builds that are dependent on each other, so you can automate the entire
build process.
There are several ways to chain jobs in Jenkins:
Build Trigger: You can use the build trigger in Jenkins to start one job after another. This is
done by configuring the upstream job to trigger the downstream job when it completes.
Jenkinsfile: If you are using Jenkins Pipeline, you can write a Jenkinsfile to define the steps in
your build pipeline. The Jenkinsfile can contain multiple stages, each of which represents a
separate build job in the pipeline.
Job DSL plugin: The Job DSL plugin allows you to programmatically create and manage Jenkins jobs. You can use this plugin to create a series of jobs that are linked together and run in sequence.
Multi-Job plugin: The Multi-Job plugin allows you to create a single job that runs multiple build
steps, each of which can be a separate build job. This plugin is useful if you have a build pipeline
that requires multiple build jobs to be run in parallel.
By chaining jobs in Jenkins, you can automate the entire build process and ensure that each step is completed before the next step is started. This can help to improve the efficiency and reliability of your build process, and allow you to quickly and easily make changes to your build pipeline.
Build pipelines
A build pipeline in DevOps is a set of automated processes that compile, build, and test software, and prepare it for deployment. A build pipeline represents the end-to-end flow of code changes from development to production.
Thestepsinvolvedinatypicalbuildpipelineinclude:
CodeCommit:DeveloperscommitcodechangestoaversioncontrolsystemsuchasGit.
Build and Compile: The code is built and compiled, and any necessary dependencies are
resolved.
Unit Testing: Automated unit tests are run to validate the code changes.
Integration Testing: Automated integration tests are run to validate that the code integrates
correctly with other parts of the system.
Staging: The code is deployed to a staging environment for further testing and validation.
Release: If the code passes all tests, it is deployed to the production environment.
Monitoring: The deployed code is monitored for performance and stability.
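As a minimal sketch, these steps can be expressed as stages of a declarative Jenkinsfile (the stage names, shell commands, and deploy script below are illustrative placeholders, not a prescribed setup):

pipeline {
    agent any
    stages {
        stage('Build and Compile') {
            steps { sh 'mvn -B compile' }        // resolve dependencies and compile
        }
        stage('Unit Test') {
            steps { sh 'mvn -B test' }           // run automated unit tests
        }
        stage('Staging') {
            steps { sh './deploy.sh staging' }   // hypothetical deployment script
        }
        stage('Release') {
            when { branch 'main' }               // e.g. release only from main (multibranch pipelines)
            steps { sh './deploy.sh production' }
        }
    }
}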
A build pipeline can be managed using a continuous integration tool such as Jenkins, TravisCI,
or CircleCI. These tools automate the build process, allowing you to quickly and easily make
changes to the pipeline, and ensuring that the pipeline is consistent and reliable.
In DevOps, the build pipeline is a critical component of the continuous delivery process, and is used to ensure that code changes are tested, validated, and deployed to production as quickly and efficiently as possible. By automating the build pipeline, you can reduce the time and effort required to deploy code changes, and improve the speed and quality of your software delivery process.
Build servers
When you're developing and deploying software, one of the first things to figure out is how to
take your code and deploy your working application to a production environment where people
can interact with your software.
Most development teams understand the importance of version control to coordinate code
commits, and build servers to compile and package their software, but Continuous Integration
(CI) is a big topic.
Build servers have 3 main purposes:
● Compiling committed code from your repository many times a day
● Running automatic tests to validate code
● Creating deployable packages and handing off to a deployment tool, like Octopus Deploy
Without a build server you're slowed down by complicated, manual processes and the needless
time constraints they introduce. For example, without a build server:
● Your team will likely need to commit code before a daily deadline or during change windows
● After that deadline passes, no one can commit again until someone manually creates and tests a build
● If there are problems with the code, the deadlines and manual processes further delay the fixes
Without a build server, the team battles unnecessary hurdles that automation removes. A build server will repeat these tasks for you throughout the day, and without those human-caused delays.
But CI doesn’t just mean less time spent on manual tasks or the death of arbitrary deadlines,
either. By automatically taking these steps many times a day, you fix problems sooner and your
results become more predictable. Build servers ultimately help you deploy through your pipeline with more confidence.
Building servers in DevOps involves several steps:
Requirements gathering: Determine the requirements for the server, such as hardware
specifications, operating system, and software components needed.
Server provisioning: Choose a method for provisioning the server, such as physical installation, virtualization, or cloud computing.
Operating System installation: Install the chosen operating system on the server.
Software configuration: Install and configure the necessary software components, such as web
servers, databases, and middleware.
Network configuration: Set up network connectivity, such as IP addresses, hostnames, and
firewall rules.
Security configuration: Configure security measures, such as user authentication, access
control, and encryption.
Monitoring and maintenance: Implement monitoring and maintenance processes, such as
logging, backup, and disaster recovery.
Deployment: Deploy the application to the server and test it to ensure it is functioning as
expected.
Throughout the process, it is important to automate as much as possible using tools such as Ansible, Chef, or Puppet to ensure consistency and efficiency in building servers.
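For example, a single Ansible ad-hoc command can apply one software configuration step across a whole group of servers at once (the webservers group and the nginx package here are illustrative placeholders):

ansible webservers -m apt -a "name=nginx state=present" --become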
Infrastructure as code
IaC is a key DevOps practice and a component of continuous delivery. With IaC, DevOps teams can work together with a unified set of practices and tools to deliver applications and their supporting infrastructure rapidly and reliably at scale.
IaC evolved to solve the problem of environment drift in release pipelines. Without IaC, teams
must maintain deployment environment settings individually. Over time, each environment
becomes a "snowflake," a unique configuration that can't be reproduced automatically.
Inconsistency among environments can cause deployment issues. Infrastructure administration
and maintenance involve manual processes that are error prone and hard to track.
IaC avoids manual configuration and enforces consistency by representing desired environment states via well-documented code in formats such as JSON. Infrastructure deployments with IaC are repeatable and prevent runtime issues caused by configuration drift or missing dependencies.
Release pipelines execute the environment descriptions and version configuration models to
configure target environments. To make changes, the team edits the source, not the target.
Idempotence, the ability of a given operation to always produce the same result, is an important
IaC principle. A deployment command always sets the target environment into the same
configuration, regardless of the environment's starting state. Idempotency is achieved by either
automatically configuring the existing target, or by discarding the existing target and recreating a
fresh environment.
IaC can be achieved by using tools such as Terraform, CloudFormation, or Ansible to define infrastructure components in a file that can be versioned, tested, and deployed in a consistent and automated manner.
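As a small illustration, an IaC tool such as Terraform describes infrastructure declaratively; the resource below is a generic example, not part of this course's setup:

# main.tf: declares an S3 bucket; running "terraform apply" repeatedly
# converges the real infrastructure to this same desired state (idempotence)
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket"
}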
Benefits of IaC include:
Speed: IaC enables quick and efficient provisioning and deployment of infrastructure.
Consistency: By using code to define and manage infrastructure, it is easier to ensure consistency across multiple environments.
Version control: Infrastructure components can be versioned, allowing for rollback to previous
versions if necessary.
Overall, IaC is a key component of modern DevOps practices, enabling organizations to manage their infrastructure in a more efficient, reliable, and scalable way.
Building by dependency order
Building by dependency order in DevOps is the process of ensuring that the components of a
system are built and deployed in the correct sequence, based on their dependencies. This is
necessary to ensure that the system functions as intended, and that components are deployed in
the right order so that they can interact correctly with each other.
The steps involved in building by dependency order in DevOps include:
Define dependencies: Identify all the components of the system and the dependencies between
them. This can be represented in a diagram or as a list.
Determine the build order: Based on the dependencies, determine the correct order in which
components should be built and deployed.
Automate the build process: Use tools such as Jenkins, TravisCI, or CircleCI to automate the build and deployment process. This allows for consistency and repeatability in the build process.
Monitor progress: Monitor the progress of the build and deployment process to ensure that
components are deployed in the correct order and that the system is functioning as expected.
Test and validate: Test the system after deployment to ensure that all components are
functioning as intended and that dependencies are resolved correctly.
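The "determine the build order" step is essentially a topological sort of the dependency graph. A minimal sketch using Python's standard library (the component names are hypothetical):

from graphlib import TopologicalSorter  # Python 3.9+

# Map each component to the set of components it depends on.
dependencies = {
    "web-app": {"auth-lib", "api"},
    "api": {"database"},
    "auth-lib": set(),
    "database": set(),
}

# static_order() yields each component only after all of its
# dependencies, giving a valid build order.
print(list(TopologicalSorter(dependencies).static_order()))
# e.g. ['auth-lib', 'database', 'api', 'web-app']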
Build phases
In DevOps, there are several phases in the build process, including:
Planning: Define the project requirements, identify the dependencies, and create a build plan.
Code development: Write the code and implement features, fixing bugs along the way.
Continuous Integration (CI): Automatically build and test the code as it is committed to a
version control system.
Continuous Delivery (CD): Automatically deploy code changes to a testing environment, where they can be tested and validated.
Deployment: Deploy the code changes to a production environment, after they have passed
testing in a pre-production environment.
Monitoring: Continuously monitor the system to ensure that it is functioning as expected, and to detect and resolve any issues that may arise.
Maintenance: Continuously maintain and update the system, fixing bugs, adding new features,
and ensuring its stability.
These phases help to ensure that the build process is efficient, reliable, and consistent, and that
code changes are validated and deployed in a controlled manner. Automation is a key aspect of
DevOps, and it helps to make these phases more efficient and less prone to human error.
In continuous integration (CI), this is where we build the application for the first time. The build stage is the first stretch of a CI/CD pipeline, and it automates steps like downloading dependencies, installing tools, and compiling.
Besides building code, build automation includes using tools to check that the code is safe and follows best practices. The build stage usually ends in the artifact generation step, where we create a production-ready package. Once this is done, the testing stage can begin.
Here, we will focus on build automation.
Build automation verifies that the application, at a given code commit, can qualify for further testing. We can divide it into four parts:
1. Compilation: the first step builds the application.
2. Linting: checks the code for programmatic and stylistic errors.
3. Code analysis: using automated source-checking tools, we control the code's quality.
4. Artifact generation: the last step packages the application for release or deployment.
Alternative build servers
There are several alternative build servers in DevOps, including:
Jenkins - an open-source, Java-based automation server that supports various plugins and
integrations.
TravisCI - a cloud-based, open-source CI/CD platform that integrates with GitHub.
CircleCI - a cloud-based, continuous integration and delivery platform that supports multiple
languages and integrates with several platforms.
GitLab CI/CD - an integrated CI/CD solution within GitLab that allows for complete project
and pipeline management.
Bitbucket Pipelines - a CI/CD solution within Bitbucket that allows for pipeline creation and
management within the code repository.
AWS CodeBuild - a fully managed build service that compiles source code, runs tests, and
produces software packages that are ready to deploy.
Azure Pipelines - a CI/CD solution within Microsoft Azure that supports multiple platforms and programming languages.
Collating quality measures
In DevOps, collating quality measures is an important part of the continuous improvement
process. The following are some common quality measures used in DevOps to evaluate the
quality of software systems:
Continuous Integration (CI) metrics - metrics that track the success rate of automated builds and tests, such as build duration and test pass rate.
Continuous Deployment (CD) metrics - metrics that track the success rate of deployments, such
as deployment frequency and time to deployment.
Code review metrics - metrics that track the effectiveness of code reviews, such as review
completion time and code review feedback.
Performance metrics - measures of system performance in production, such as response time and resource utilization.
User experience metrics - measures of how users interact with the system, such as click-through
rate and error rate.
Security metrics - measures of the security of the system, such as the number of security vulnerabilities and the frequency of security updates.
Incident response metrics - metrics that track the effectiveness of incident response, such as mean time to resolution (MTTR) and incident frequency.
By regularly collating these quality measures, DevOps teams can identify areas for improvement, track progress over time, and make informed decisions about the quality of their systems.
Unit 5
Testing Tools and automation
Various types of testing
The purpose of having a testing type is to confirm the behavior of the AUT (Application Under Test).
Software testing is mainly divided into two parts, which are as follows:
o Manual Testing
o Automation Testing
What is Manual Testing?
Testing any software or an application according to the client's needs without using any
automation tool is known as manual testing.
In other words, we can say that it is a procedure of verification and validation. Manual testing is used to verify the behavior of an application or software against the requirements specification.
We do not require any precise knowledge of any testing tool to execute manual test cases. We can easily prepare the test document while performing manual testing on any application.
Classification of Manual Testing
In software testing, manual testing can be further classified into three different types of testing, which are as follows:
o White Box Testing
o Black Box Testing
o Grey Box Testing
For better understanding, let's see them one by one:
White Box Testing
In white-box testing, the developer will inspect every line of code before handing it over to the
testing team or the concerned test engineers.
Subsequently, the code is visible to developers throughout testing; that's why this process is known as WBT (White Box Testing).
The purpose of implementing white box testing is to emphasize the flow of inputs and outputs through the software and to enhance the security of an application.
White box testing is also known as open box testing, glass box testing, structural testing, clear box testing, and transparent box testing.
Black Box Testing
Another type of manual testing is black-box testing. In this testing, the test engineer will analyze the software against the requirements, identify any defects or bugs, and send it back to the development team.
Then, the developers will fix those defects, do one round of white box testing, and send it to the
testing team.
Here, fixing the bugs means the defect is resolved, and the particular feature is working
according to the given requirement.
The main objective of implementing black box testing is to specify the business needs or the customer's requirements.
In other words, we can say that black box testing is a process of checking the functionality of an application as per the customer's requirements. The source code is not visible in this testing; that's why it is known as black-box testing.
Types of Black Box Testing
o Functional Testing
o Non-functional Testing
Functional Testing
In functional testing, all the components are tested by giving the value, defining the output, and
validating the actual output with the expected value.
Just like another type of testing is divided into several parts, functional testing is also classified
into various categories.
o Unit Testing
o Integration Testing
o System Testing
1. Unit Testing
Unit testing is the first level of functional testing used to test any software. In this, the test engineer tests each module of an application independently; testing all the module functionality in isolation is called unit testing.
The primary objective of executing the unit testing is to confirm the unit components with their
performance. Here, a unit is defined as a single testable function ofa software or an application.
And it is verified throughout the specified application development phase.
2. Integration Testing
Once we have successfully completed unit testing, we move on to integration testing. It is the second level of functional testing, in which we test the data flow between dependent modules or the interface between two features; this is called integration testing.
The purpose of executing integration testing is to test the accuracy of communication between the modules.
Types of Integration Testing
o Incremental Testing
o Non-Incremental Testing
Incremental Integration Testing
In incremental integration testing, we start with two logically related modules; if these modules are working fine, we add one more module and test again, and we can continue the same process to get better results.
In other words, incrementally adding up the modules and testing the data flow between them is known as incremental integration testing.
Incremental integration testing can be further classified into two parts, which are as follows:
1. Top-down Incremental Integration Testing
2. Bottom-up Incremental Integration Testing
Let's see a brief introduction of these types of integration testing:
1. Top-down Incremental Integration Testing
In this approach, we will add the modules step by step or incrementally and test the data flow
between them. We have to ensure that the modules we are adding are the child of the earlier
ones.
2. Bottom-up Incremental Integration Testing
In the bottom-up approach, we will add the modules incrementally and check the data flow
between modules. And also, ensure that the module we are adding is the parent of the earlier
ones.
Non-Incremental Integration Testing
Whenever the data flow is complex and it is very difficult to classify a parent and a child, we go for the non-incremental integration approach. The non-incremental method is also known as the Big Bang method.
3. System Testing
Whenever we are done with the unit and integration testing, we can proceed with the system
testing.
In this type of testing, we go through each attribute of the software and test whether the end feature works according to the business requirements, analyzing the software product as a complete system.
Non-functional Testing
The next part of black-box testing is non-functional testing. It provides detailed information on
software product performance and used technologies.
Non-functional testing will help us minimize the risk of production and related costs of the
software.
Non-functional testing is categorized into different types of testing, which we are going to discuss further:
o Performance Testing
o Usability Testing
o Compatibility Testing
1. Performance Testing
Performance testing includes the following types of testing:
o Load Testing
o Stress Testing
o Scalability Testing
o Stability Testing
o Load Testing
While executing performance testing, we apply some load on the particular application to check the application's performance; this is known as load testing. Here, the load could be less than or equal to the desired load.
It will help us to detect the highest operating volume of the software and bottlenecks.
o Stress Testing
It is used to analyze the robustness of the software when pushed beyond its common functional limits.
Primarily, stress testing is used for critical software, but it can also be used for all types of
software applications.
o Scalability Testing
In scalability testing, we check the ability of the system, processes, or database to scale up and meet growing demand. In this testing, the test cases are designed and implemented efficiently.
o Stability Testing
Stability testing is a procedure where we evaluate the application's performance by applying the
load for a precise time.
It mainly checks the stability problems of the application and the efficiency of the developed product. In this type of testing, we can rapidly find the system's defects even in a stressful situation.
2. Usability Testing
Another type of non-functional testing is usability testing. In usability testing, we will analyze
the user-friendliness of an application and detect the bugs in the software's end-user interface.
o The application should be easy to understand, which means that all the features must be
visible to end-users.
o The application's look and feel should be good, which means the application should be pleasant to look at and should invite the end-user to use it.
3. Compatibility Testing
In compatibility testing, we check how the application behaves in different environments, such as different hardware, operating systems, browsers, and network conditions.
Grey Box Testing
Another part of manual testing is grey box testing. It is a combination of black box and white box testing, since grey box testing includes access to internal code for designing test cases. Grey box testing is performed by a person who knows coding as well as testing.
In other words, we can say that if a single person performs both white box and black-box testing, it is considered grey box testing.
Automation Testing
The most significant part of software testing is automation testing. It uses specific tools to execute manually designed test cases without any human interference.
Automation testing is the best way to enhance the efficiency, productivity, and coverage of
Software testing.
In other words, we can say that testing an application by using tools is known as automation testing.
We go for automation testing when there are frequent releases or several regression cycles on the application or software. We cannot write test scripts or perform automation testing without understanding a programming language.
Other Types of Software Testing
In software testing, we also have some other types of testing that are not part of the types discussed above, but they are required while testing any software or application.
o Smoke Testing
o Sanity Testing
o Regression Testing
o User Acceptance Testing
o Exploratory Testing
o Adhoc Testing
o Security Testing
o Globalization Testing
Smoke Testing
In smoke testing, we test an application's basic and critical features before doing one round of deep and rigorous testing; that is, we test the core workflow before checking all possible positive and negative values.
Analyzing the workflow of the application's core and main functions is the main objective of performing smoke testing.
Sanity Testing
It is used to ensure that all the bugs have been fixed and that no new issues have come into existence due to these changes. Sanity testing is unscripted, which means it is not documented. It checks the correctness of the newly added features and components.
Regression Testing
Regression testing is the most suitable testing for automation tools. As per the project type and
accessibility of resources, regression testing can be similar to Retesting.
Whenever a bug is fixed by the developers, testing the other features of the application that might be affected by the bug fix is known as regression testing.
In other words, we can say that whenever there is a new release of a project, we perform regression testing, because a new feature may affect the old features of earlier releases.
User Acceptance Testing
User acceptance testing (UAT) is done by an individual team known as the domain expert/customer, or by the client. Getting to know the application before accepting the final product is called user acceptance testing.
In user acceptance testing, we analyze the business scenarios and real-time scenarios in a distinct environment called the UAT environment. In this testing, we test the application before it goes for customer approval.
Exploratory Testing
We go for exploratory testing when the requirements are missing, early iteration is required, the testing team has experienced testers for a critical application, or a new test engineer has entered the team.
To execute the exploratory testing, we will first go through the application in all possible ways,
make a test document, understand the flow of the application, and then test the application.
Adhoc Testing
Testing the application randomly as soon as the build is in the checked sequence is known as Adhoc testing.
It is also called monkey testing or gorilla testing. In Adhoc testing, we check the application against the client's requirements; that's why it is also known as negative testing.
An end-user using the application casually may detect a bug, while a specialized test engineer who uses the software thoroughly may not identify a similar defect.
Security Testing
It is an essential part of software testing, used to determine the weaknesses, risks, or threats in the software application.
The execution of security testing helps us to avoid malicious attacks from outsiders and ensure our software application's security.
In other words, we can say that security testing is mainly used to ensure that the data remains safe throughout the software's working process.
Globalization Testing
Another type of software testing is globalization testing. Globalization testing is used to check whether the developed software supports multiple languages. Here, the word globalization means adapting the application or software for various languages.
Globalization testing is used to make sure that the application will support multiple languages
and multiple features.
In present scenarios, we can see the enhancement in several technologies as the applications are
prepared to be used globally.
Conclusion
In this tutorial, we have discussed various types of software testing. But there is still a list of more than 100 categories of testing. However, each kind of testing is not used in all types of projects.
We have discussed the most commonly used types of Software Testing like black-box testing,
white box testing, functional testing, non-functional testing, regression testing, Adhoc
testing, etc.
Also, there are alternate classifications or processes used in diverse organizations, but the general
concept is similar all over the place.
These testing types, processes, and execution approaches keep changing when the project,
requirements, and scope change.
Automation of Testing: Pros and Cons
Pros of Automated Testing: Automated testing has the following advantages:
1. Automated testing improves the coverage of testing, as automated execution of test cases is faster than manual execution.
2. Automated testing reduces the dependency of testing on the availability of the test engineers.
3. Automated testing provides round-the-clock coverage, as automated tests can be run at all times in a 24*7 environment.
4. Automated testing takes far fewer resources in execution as compared to manual testing.
5. It helps to train the test engineers to increase their knowledge by producing a repository of different tests.
6. It helps in testing that is not possible without automation, such as reliability testing, stress testing, and load and performance testing.
7. It includes all other activities like selecting the right product build, generating the right test data, and analyzing the results.
8. It acts as a test data generator and produces maximum test data to cover a large number of inputs and expected outputs for result comparison.
9. Automated testing has fewer chances of error and is hence more reliable.
10. With automated testing, test engineers have free time and can focus on other creative tasks.
Cons of Automated Testing: Automated testing has the following disadvantages:
1. Automated testing is much more expensive than manual testing.
2. It can also become inconvenient and burdensome to decide who should automate and who should train.
3. Its adoption is limited, as many organisations do not prefer test automation.
4. Automated testing also requires additionally trained and skilled people.
5. Automated testing only removes the mechanical execution of the testing process; the creation of test cases still requires testing professionals.
Selenium
Introduction
Selenium is one of the most widely used open-source Web UI (User Interface) automation testing suites. It was originally developed by Jason Huggins in 2004 as an internal tool at ThoughtWorks.
Selenium supports automation across different browsers, platforms, and programming languages.
Selenium can be easily deployed on platforms such as Windows, Linux, Solaris, and Macintosh. Moreover, it supports OSs (Operating Systems) for mobile applications like iOS, Windows Mobile, and Android.
Selenium supports a variety of programming languages through the use of drivers specific to each language.
Currently, Selenium WebDriver is most popular with Java and C#. Selenium test scripts can be
coded in any of the supported programming languages and can be run directly in most modern
web browsers. Browsers supported by Selenium include Internet Explorer, Mozilla Firefox,
Google Chrome and Safari.
Selenium can be used to automate functional tests and can be integrated with automation test
tools such as Maven, Jenkins, and Docker to achieve continuous testing. It can also be integrated with tools such as TestNG and JUnit for managing test cases and generating reports.
Selenium Features
o Selenium is an open-source and portable web testing framework.
o Selenium IDE provides a playback and record feature for authoring tests without the need to learn a test scripting language.
o It can be considered a leading testing platform, helping testers to record their actions and export them as a reusable script with a simple-to-understand and easy-to-use interface.
o Selenium supports various operating systems, browsers, and programming languages. Following is the list:
o Programming Languages: C#, Java, Python, PHP, Ruby, Perl, and JavaScript
o Operating Systems: Android, iOS, Windows, Linux, Mac, Solaris.
o Browsers: Google Chrome, Mozilla Firefox, Internet Explorer, Edge, Opera, Safari, etc.
o It also supports parallel test execution, which reduces time and increases the efficiency of tests.
o Selenium can be integrated with frameworks like Ant and Maven for source code compilation.
o Selenium can also be integrated with testing frameworks like TestNG for application testing and generating reports.
o Selenium requires fewer resources as compared to other automation test tools.
o The WebDriver API has been incorporated into Selenium, which is one of the most important changes made to Selenium.
o Selenium WebDriver does not require server installation; test scripts interact directly with the browser.
o Selenium commands are categorized in terms of different classes, which makes them easier to understand and implement.
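As a short illustration, a Selenium WebDriver script in Python might look like the following (it assumes a local Chrome browser and matching driver are available; the page and assertion are examples):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()                 # launch a local Chrome browser
driver.get("https://example.com")          # open the page under test
heading = driver.find_element(By.TAG_NAME, "h1")
assert heading.text == "Example Domain"     # simple functional check
driver.quit()                               # always release the browser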
Testing backend integration points
The term backend generally refers to server-side deployment. Here the process happens entirely in the backend, which is not shown to the user; only the expected results are shown to the user. In every web application, there is a backend language to accomplish the task.
For example, while uploading the details of students to the database, the database will store all the details. When there is a need to display the details of the students, it will simply fetch all the details and display them. Here, it shows only the result, not the process of how it fetches the details.
Backend testing is a testing method that checks the database or server side of the web application. The main purpose of backend testing is to check the application layer and the database layer. It finds errors or bugs in the database or on the server side.
For implementing backend testing, the backend test engineer should also have some knowledge of the particular server-side or database language. It is also known as database testing.
Importance of Backend Testing: Backend testing is a must, because if anything goes wrong or an error occurs on the server side, the task will not proceed further, the output may differ, or problems such as data loss or deadlock may arise.
The following are the different types of backend testing:
1. Structural Testing
2. Functional Testing
3. Non-Functional Testing
Let's discuss each of these types of backend testing.
1. Structural Testing
Structural testing is the process of validating all the elements that are present inside the data repository and are primarily used for data storage. It involves checking the objects of front-end development against the database mapping objects.
Types of Structural Testing: The following are the different types of structural testing:
a) Schema Testing: In schema testing, the tester checks for correctly mapped objects. This is also known as mapping testing. It ensures that the objects of the front-end and the objects of the back-end are correctly matched or mapped. It mainly focuses on schema objects such as tables, views, indexes, clusters, etc. In this testing, the tester finds issues in mapped objects like tables, views, etc.
b) Table and Column Testing: This ensures that the table and column properties are correctly mapped.
● It ensures that the table and column names are correctly mapped on both the front-end side and the server side.
● It validates that the data type of each column is correctly specified.
● It ensures the correct naming of the column values of the database.
● It detects unused tables and columns.
● It validates whether users are able to give correct input as per the requirements.
For example, if we mention a data type for a column on the server side that is different from the front-end, it will raise an error.
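Checks like these can often be automated with queries against the database catalog; for example, using the standard information_schema (the students table here is hypothetical):

-- list the declared data type of every column in the students table
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'students';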
c) Key and Indexes Testing: This validates the keys and indexes of the columns.
● It ensures that the specified key constraints are correctly provided; for example, that the primary key for a column is correctly defined as per the given requirements.
● It ensures the correct references of foreign keys to the parent table.
● It checks the length and size of the indexes.
● It ensures the creation of clustered and non-clustered indexes for the table as per the requirements.
● It validates the naming conventions of the keys.
d) Trigger Testing: This ensures that the executed triggers fulfill the required conditions of the DML transactions.
● It validates whether the triggers update the data correctly when they are executed.
● It checks that the coding conventions are followed correctly during the coding phase of the triggers.
● It ensures that the trigger functionalities for update, delete, and insert work correctly.
2. Functional Testing
Functional testing is the process of validating that the transactions and operations made by the end-users meet the requirements.
Types of Functional Testing: The following are the different types of functional testing:
a) Black Box Testing:
● Black box testing is the process of checking the functionalities of the integration of the database.
● This testing is carried out at an early stage of development and hence is very helpful in reducing errors.
● It consists of various techniques such as boundary analysis, equivalence partitioning, and cause-effect graphing.
● These techniques are helpful in checking the functionality of the database.
● The best example is a user login page. If the entered username and password are correct, it will allow the user in and redirect to the next page.
b) White Box Testing:
● White box testing is the process of validating the internal structure of the database.
● Here, the specified details are hidden from the user.
● The database triggers, functions, views, queries, and cursors are checked in this testing.
● It validates the database schema, database tables, etc.
● Here, coding errors in the triggers can be easily found.
● Errors in the queries can also be handled in white box testing, and hence internal errors are easily eliminated.
3. Non-Functional Testing
Non-functional testing is the process of performing load testing and stress testing, and of checking that the minimum system requirements are met. It also detects risks and errors and optimizes the performance of the database.
a) Load Testing:
● Load testing involves testing the performance and scalability of the database.
● It determines how the software behaves when it is used by many users simultaneously.
● It focuses on good load management.
● For example, if the web application is accessed by multiple users at the same time and this does not create any traffic problems, the load testing has completed successfully.
b) Stress Testing:
● Stress testing is also known as endurance testing. Stress testing is a testing process that is performed to identify the breakpoint of the system.
● In this testing, an application is loaded until the stage at which the system fails.
● This point is known as the breakpoint of the database system.
● It evaluates and analyzes the software after the system failure. In case of error detection, it displays the error messages.
● For example, if users enter wrong login information, it will throw an error message.
Backend Testing Process
1. Set up the Test Environment: When the coding process is done for the application, set up the test environment by choosing a proper testing tool for back-end testing. This includes choosing the right team to test the entire back-end environment with a proper schedule. Record all the testing processes in documents, or update them in software, to keep track of all the processes.
2. Generate the Test Cases: Once the tool and the team are ready for the testing process, generate the test cases as per the business requirements. An automation tool can analyze the code and generate all possible test cases for the developed code. If the process is manual, the tester will have to write the possible test cases in the testing tool to ensure the correctness of the code.
3. Execution of Test Cases: Once the test cases are generated, the tester or quality analyst needs to execute those test cases on the developed code. If the tool is automated, it will generate and execute the test cases by itself; otherwise, the tester needs to write and execute those test cases. It will indicate whether the execution of the test cases succeeded or not.
4. Analyzing the Test Cases: After the execution of the test cases, the result for each test case is reported, indicating whether it executed successfully or not. If an error occurs in a test case, the report highlights where the particular error was raised, and in some cases the automation tool will give hints regarding the issues to solve the error.
5. The tester or quality analyst should analyze the code again and fix the issues if an error occurred.
6. Submission of Test Reports: This is the last stage in the testing process. Here, all the details are recorded, such as who is responsible for testing, the tool used in the testing process, the number of test cases generated, the number of test cases executed successfully or not, the time taken to execute each test case, the number of times test cases failed, and the number of times errors occurred. These details are either documented or updated in the software. The report is then submitted to the respective team.
Backend Testing Validation
The following are some of the factors for backend testing validation:
● Performance Check: It validates the performance of each individual test and the system behavior.
● Sequence Testing: Backend testing validates that the tests are distributed according to their priority.
● Database Server Validations: This ensures that the data fed in for the tests is correct.
● Functions Testing: Here, the test validates the consistency of the transactions of the database.
● Key and Indexes: Here, the test ensures that the accurate constraints and the rules of constraints and indexes are followed properly.
● Data Integrity Testing: A technique in which data is verified in the database to check whether it is accurate and functions as per the requirements.
● Database Tables: It ensures that the created tables and the queries for the output provide the expected result.
● Database Triggers: Backend testing validates the correctness of the functionality of triggers.
● Stored Procedures: Backend testing validates that the functions, return statements, calls to other events, etc. are correctly implemented as per the requirements.
● Schema: Backend testing validates that the data is organized in a correct way as per the business requirements and confirms the outcome.
Tools for Backend Testing
The following are some of the tools for backend testing:
1. LoadRunner:
● It is a stress testing tool.
● It is an automated performance and testing automation tool for analyzing system behavior and performance while generating actual load.
2. Empirix TEST Suite:
● It was acquired by Oracle from Empirix. It is a load testing tool.
● It validates the scalability along with the functionality of the application under heavy load.
● The Empirix TEST Suite may prove effective for delivering applications with improved quality.
3. Stored Procedure Testing Tools - LINQ:
● It is a powerful tool that allows the user to inspect their projects.
● It tracks all the ORM calls and database queries issued through the ORM.
● It enables one to see the performance of the data access code and to easily identify performance issues.
4. Unit Testing Tools - SQLUnit, DBFit, NDbUnit:
● SQLUnit: SQLUnit is a unit testing framework for regression and unit testing of database stored procedures.
● DBFit: It is a part of FitNesse and manages stored procedures and custom procedures. It accomplishes database testing through either Java or .NET and runs from the command line.
● NDbUnit: It performs database unit tests for the system either before or after the execution of the other parts of the system.
5. Data Factory Tools:
● These tools work as data managers and data generators for backend database testing.
● They are used to validate queries with a huge set of data.
● They allow performing both stress and load testing.
6. SQLMap:
● It is an open-source tool.
● It is used for performing penetration testing to automate the process of detecting SQL injection flaws.
● Powerful detection of errors leads to efficient testing and results in the expected behavior of the requirements.
7. phpMyAdmin:
● This is a software tool written in PHP.
● It is developed to handle databases, and we can execute test queries to ensure the correctness of the result as a whole and even for a separate table.
8. Automatic Efficient Test Generator (AETG):
● It mechanically generates possible tests from user-defined requirements.
● It is based on algorithms that use ideas from statistical experimental design theory to reduce the number of tests needed for a specific level of test coverage of the input test space.
9. HammerDB:
● It is an open-source tool for load testing.
● It validates the activity replay functionality for the Oracle database.
● It is based on industry standards like the TPC-C and TPC-H benchmarks.
10. SQLTest:
● SQLTest uses the open-source tSQLt framework, with views, stored procedures, and functions.
● This tool stores database objects in a separate schema, and if changes occur, there is no need for a clean-up process.
● It allows running unit test cases for a SQL Server database.
Advantages of Backend Testing
The following are some of the benefits of backend testing:
● Errors are easily detectable at an early stage.
● It avoids deadlock creation on the server side.
● Web load management is easily achieved.
● The functionality of the database is maintained properly.
● It reduces data loss.
● It enhances the functioning of the system.
● It ensures the security and protection of the system.
● While doing backend testing, errors in the UI parts can also be detected and fixed.
● It provides coverage of all possible test cases.
Disadvantages of Backend Testing
The following are some of the disadvantages of backend testing:
● Good domain knowledge is required.
● Providing test cases for testing requires special attention.
● The investment in organizational costs is higher.
● It takes more time to test.
● If many tests fail, it can, in some cases, lead to a crash on the server side.
● Errors or unexpected results from one test case scenario can affect the other system results as well.
Test-driven development
Test-driven development (TDD) is a software development approach in which test cases are developed to specify and validate what the code will do. In simple terms, test cases for each functionality are created and tested first, and if a test fails, new code is written in order to pass the test, keeping the code simple and bug-free.
Test-Driven Development starts with designing and developing tests for every small
functionality of an application. The TDD framework instructs developers to write new code only if an automated test has failed. This avoids duplication of code.
The simple concept of TDD is to write and correct the failed tests before writing new code (before development). This helps to avoid duplication of code, as we write a small amount of code at a time in order to pass the tests. (Tests are nothing but requirement conditions that we need to test to fulfill.)
Test-driven development is the process of developing and running automated tests before the actual development of the application. Hence, TDD is sometimes also called Test First Development.
How to perform a TDD test
The following steps define how to perform a TDD test:
1. Add a test.
2. Run all tests and see if any new test fails.
3. Write some code.
4. Run tests and refactor code.
5. Repeat.
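A minimal sketch of this cycle in Python's unittest (the add function is a hypothetical example): the test is written first and fails until just enough production code is added to pass it.

import unittest

# Step 1: write the test first; it fails while add() is unimplemented.
class TestAdd(unittest.TestCase):
    def test_add_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

# Step 3: write just enough production code to make the test pass.
def add(a, b):
    return a + b

if __name__ == "__main__":
    unittest.main()  # Steps 2 and 4: run all tests, then refactor while they stay green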
TDD vs. Traditional Testing
Below are the main differences between test-driven development and traditional testing:
The TDD approach is primarily a specification technique. It ensures that your source code is thoroughly tested at a confirmatory level.
● With traditional testing, a successful test finds one or more defects. It is the same in TDD: when a test fails, you have made progress, because you know that you need to resolve the problem.
● TDD ensures that your system actually meets the requirements defined for it. It helps to build your confidence in your system.
● In TDD, more focus is on the production code, verifying that testing works properly. In traditional testing, more focus is on test case design: whether the test will show the proper/improper execution of the application in order to fulfill the requirements.
● In TDD, you aim for 100% test coverage: every single line of code is tested, unlike in traditional testing.
● The combination of traditional testing and TDD leads to the importance of testing the system rather than the perfection of the system.
● In Agile Modeling (AM), you should "test with a purpose". You should know why you are testing something and at what level it needs to be tested.
What are acceptance TDD and developer TDD?
There are two levels of TDD:
1. Acceptance TDD (ATDD): With ATDD you write a single acceptance test. This test fulfills the requirement of the specification or satisfies the behavior of the system. After that, you write just enough production/functionality code to fulfill that acceptance test. Acceptance tests focus on the overall behavior of the system. ATDD is also known as Behavior-Driven Development (BDD).
2. Developer TDD: With developer TDD you write a single developer test, i.e. a unit test, and then just enough production code to fulfill that test. The unit test focuses on every small functionality of the system. Developer TDD is simply called TDD. The main goal of ATDD and TDD is to specify detailed, executable requirements for your solution on a just-in-time (JIT) basis. JIT means taking into consideration only those requirements that are needed in the system, thereby increasing efficiency.
REPL-driven development
REPL-driven development (Read-Eval-Print Loop) is an interactive programming approach that allows developers to execute code snippets and see their results immediately. This enables developers to test their code quickly and iteratively, and helps them to understand the behavior of their code as they work.
In a REPL environment, developers can type in code snippets, and the environment will
immediately evaluate the code and return the results. This allows developers to test small bits of
code and quickly see the results, without having to create a full-fledged application.
REPL-driven development is commonly used in dynamic programming languages such as
Python, JavaScript, and Ruby. Some popular REPL environments include the Python REPL,
Node.js REPL, and IRB (Interactive Ruby).
Benefits of REPL-driven development include immediate feedback on each code snippet, rapid iteration and prototyping, and easier exploration and debugging of code behavior.
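For example, a short interactive Python session shows the loop in action (the add function is only an illustration):

>>> def add(a, b):
...     return a + b
...
>>> add(2, 3)
5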
Deployment of the system:
In DevOps, deployment systems are responsible for automating the release of software updates and applications from development to production. Some popular deployment systems include:
Jenkins: an open-source automation server that provides plugins to support building, deploying, and automating any project.
Ansible: an open-source platform that provides a simple way to automate software provisioning, configuration management, and application deployment.
Docker: a platform that enables developers to create, deploy, and run applications in containers.
Azure DevOps: a Microsoft product that provides an end-to-end DevOps solution for
developing, delivering, and deploying applications on multiple platforms.
Virtualization stacks
In DevOps, virtualization refers to the creation of virtual machines, containers, or environments
that allow multiple operating systems to run on a single physical machine. The following are
some of the commonly used virtualization stacks in DevOps:
Docker: An open-source platform for automating the deployment, scaling, and management of
containerized applications.
Code execution at the client
In DevOps, code execution at the client refers to the process of executing code or scripts on client devices or machines. This can be accomplished in several ways, including:
Client-side scripting languages: JavaScript (together with HTML and CSS) is the most commonly used client-side technology; it runs in a web browser and allows developers to create dynamic, interactive web pages.
Remote execution tools: Tools such as SSH, Telnet, or Remote Desktop Protocol (RDP) allow
developers to remotely execute commands and scripts on client devices.
Configuration management tools: Tools such as Ansible, Puppet, or Chef use agent-based or
agentless architectures to manage and configure client devices, allowing developers to execute
code and scripts remotely.
Mobile apps: Mobile applications can also run code on client devices, allowing developers to
create dynamic, interactive experiences for users.
These methods are used in DevOps to automate various tasks, such as application deployment,
software updates, or system configuration, on client devices. By executing code on the client
side, DevOps teams can improve the speed, reliability, and security of their software delivery
process.
Puppet master and agents:
Puppet Architecture
Here, the client is referred to as a Puppet agent/slave/node, and the server is referred to as a Puppet master.
Puppet Master
The Puppet master handles all the configuration-related processes in the form of puppet code. It is a Linux-based system on which the puppet master software is installed. The puppet master must run on Linux. It uses the puppet agent to apply the configuration to nodes.
This is the place where SSL certificates are checked and marked.
Puppet Slave or Agent
Puppet agents are the real working systems, used by the client. The agent software is installed on the client machine, and it is maintained and managed by the puppet master. Agents have a puppet agent service running inside them.
The agent machine can be configured on any operating system, such as Windows, Linux, Solaris, or Mac OS.
Config Repository
The config repository is the storage area where all the server- and node-related configurations are stored, and we can pull these configurations as per requirements.
Facts
Facts are used for determining the present state of any agent. Changes on any target machine are made based on facts. Puppet's facts are predefined, and they can also be customized.
Catalog
A catalog describes the desired state of every resource that the master manages on a node. The catalog workflow performs the following functions:
o First of all, an agent node sends its facts to the master or server and requests a catalog.
o The master or server compiles and returns the catalog for the node, with the help of some information accessed by the master.
o Then the agent applies the catalog to the node by checking every resource mentioned in the catalog. If it identifies resources that are not in their desired state, it makes the necessary adjustments to fix them. Or, in no-op mode, it determines the adjustments that would be required to reconcile the catalog.
o And finally, the agent sends a report back to the master.
Puppet Master-Slave Communication
The Puppet master and slave communicate via a secure encrypted channel using SSL (Secure Socket Layer). The communication over this channel proceeds as follows:
o The Puppet slave requests the Puppet master's certificate.
o The Puppet master sends the master certificate to the puppet slave in response to the request.
o The Puppet master requests the slave's certificate from the Puppet slave.
o The Puppet slave sends the requested slave certificate to the puppet master.
o The Puppet slave sends a request for data to the puppet master.
o Finally, the master sends the data to the puppet slave as per the request.
Puppet Blocks
Puppet provides the flexibility to integrate reports with third-party tools using Puppet APIs.
The four types of Puppet building blocks are:
1. Resources
2. Classes
3. Manifest
4. Modules
Puppet Resources:
Puppet resources are the building blocks of Puppet; each resource describes a single unit of configuration, such as a package or a service.
Puppet Manifests:
The manifests directory contains the puppet DSL files. Those files have a .pp extension, which stands for puppet program. The puppet code consists of definitions or declarations of Puppet classes.
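As a small sketch, a manifest such as site.pp might declare a package and its service (the ntp package is only an example):

# site.pp: keep the ntp package installed and its service running
package { 'ntp':
  ensure => installed,
}
service { 'ntp':
  ensure  => running,
  require => Package['ntp'],  # start the service only after the package exists
}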
Puppet Modules:
Modules are collections of files and directories, such as manifests and class definitions. They are the reusable and sharable units in Puppet.
For example, the MySQL module installs and configures MySQL, and the Jenkins module manages Jenkins.
Ansible:
Ansible is a simple open-source IT engine which automates application deployment, intra-service orchestration, cloud provisioning, and many other IT tasks.
Ansible is easy to deploy because it does not use any agents or custom security infrastructure.
Ansible uses playbooks to describe automation jobs, and playbooks use a very simple language, YAML (a human-readable data serialization language, commonly used for configuration files, but usable in many applications where data is being stored), which is very easy for humans to understand, read, and write. The advantage is that even IT infrastructure support staff can read and understand a playbook and debug it if needed.
Ansible is designed for multi-tier deployment. Ansible does not manage one system at a time; it models IT infrastructure by describing how all of your systems are interrelated. Ansible is completely agentless, which means it works by connecting to your nodes through SSH (by default). If you want another method of connection, like Kerberos, Ansible gives you that option.
After connecting to your nodes, Ansible pushes small programs called "Ansible modules". Ansible runs those modules on your nodes and removes them when finished. Ansible manages your inventory in simple text files (the hosts file). Ansible uses the hosts file, in which one can group hosts and control the actions on a specific group in the playbooks.
Sample Hosts File
This is the content of the hosts file:

# File name: hosts
# Description: Inventory file for your application. Defines machine type abc-node
# to deploy specific artifacts.
# Defines machine type def-node to upload metadata.

[abc-node]
# server1 ansible_host=<target machine for DU deployment> ansible_user=<Ansible user> ansible_connection=ssh
server1 ansible_host=<your host name> ansible_user=<your unix user> ansible_connection=ssh

[def-node]
# server2 ansible_host=<target machine for artifact upload> ansible_user=<Ansible user> ansible_connection=ssh
server2 ansible_host=<host> ansible_user=<user> ansible_connection=ssh
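Once this inventory is saved, it can be exercised directly; for example, a connectivity check against the abc-node group (assuming the file is named hosts):

ansible -i hosts abc-node -m ping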
What is Configuration Management?
Configuration management is the practice of maintaining the configuration of systems (software versions, settings, and hardware) in a known, consistent, desired state, and of tracking the changes made to it over time.
Ansible Workflow
Ansible works by connecting to your nodes and pushing out small programs, called Ansible modules, to them. Ansible then executes these modules and removes them when finished. The library of modules can reside on any machine, and no daemons, servers, or databases are required.
In this workflow, the Management Node is the controlling node that controls the entire execution of the playbook. The inventory file provides the list of hosts where the Ansible modules need to be run. The Management Node makes an SSH connection, executes the small modules on the hosts' machines, and installs the software.
Ansible removes the modules once they have finished. It connects to the host machine, executes the instructions, and, once the task completes successfully, removes the code that was copied onto the host machine.
Terms used in Ansible
Fact: The information fetched from the client system from the global variables with the gather-facts operation.
Inventory: A file containing the data regarding the Ansible client-server.
Handler: A task that is called only if a notifier is present.
Tag: A name set to a task that can be used later on to issue just that specific task or group of tasks.
Ansible Architecture
The Ansible orchestration engine interacts with a user who is writing the Ansible playbook to execute the Ansible orchestration, and it interacts with the services of private or public clouds and a configuration management database. Its components are described below:
Inventory
Inventory is a list of nodes or hosts, with their IP addresses, databases, servers, etc., which need to be managed.
APIs
The Ansible APIs work as the transport for cloud services, public or private.
Modules
Modules are the small programs that Ansible pushes out to the nodes to perform automation tasks; they are executed and then removed when finished.
Plugins
A plugin is a piece of code that extends the core functionality of Ansible. There are many useful plugins, and you can also write your own.
Playbooks
Playbooks consist of your written code. They are written in YAML format, which describes the tasks and executes them through Ansible. Also, you can launch tasks synchronously or asynchronously with playbooks.
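A minimal playbook sketch is shown below; the abc-node group matches the sample hosts file above, and the nginx package is illustrative:

---
- name: Install and start nginx on the abc-node group
  hosts: abc-node
  become: yes
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running
      service:
        name: nginx
        state: started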
Hosts
Hosts are the node systems (such as Linux or Windows machines) that are automated by Ansible.
Networking
Ansible is used to automate different networks, and it uses a simple, secure, and powerful agentless automation framework for IT operations and development. It uses a data model that is separated from the Ansible automation engine and spans different hardware quite easily.
Cloud
A cloud is a network of remote servers on which you can store, manage, and process data. These servers are hosted on the internet and store the data remotely rather than on a local server. You can launch resources and instances on the cloud, connect them to your servers, and operate your tasks remotely.
CMDB
A CMDB (Configuration Management Database) is a repository that acts as a data warehouse for IT installations and their configurations.
Puppet Components
Following are the key components of Puppet:
o Manifests
o Module
o Resource
o Factor
o M-collective
o Catalogs
o Class
o Nodes
Let's understand these components in detail:
Manifests
Manifests are files specifying the configuration details for a Puppet slave. The extension for manifest files is .pp, which means Puppet Policy. These files consist of Puppet scripts describing the configuration for the slave.
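A minimal manifest sketch, assuming a Debian-based slave; the file name site.pp and the ntp package are illustrative:

# site.pp - ensure the ntp package is installed and its service is running
package { 'ntp':
  ensure => installed,
}

service { 'ntp':
  ensure  => running,
  enable  => true,
  require => Package['ntp'],  # start the service only after the package is installed
}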
Module
A Puppet module is a set of manifests and data, where the data can be files, facts, or templates. A module follows a specific directory structure. Modules allow the Puppet program to be split into multiple manifests. Modules are simply self-contained bundles of data or code.
Let's understand the module with the following image:
Resource
Resources are the basic unit of system configuration modeling. They are the predefined functions that run at the backend to perform the necessary operations in Puppet.
Each Puppet resource defines certain elements of the system, such as a particular service or package.
Factor
The factor collects facts about a Puppet slave. These facts are used for determining the present state of any agent; changes on any target machine are made based on facts. Puppet's facts are predefined and customized.
M-Collective
M-Collective is a framework that enables parallel execution of several jobs on multiple slaves. This framework performs several functions.
Catalogs
All the required states of slave resources are described in the catalog.
Class
Like other programming languages, Puppet also supports classes to organize code in a better way. A Puppet class is a collection of various resources that are grouped into a single unit.
Nodes
Nodes are the locations where the Puppet slaves are installed; they are used to manage all the clients and servers.
Deployment Tools
Chef
Chef is an open-source technology developed by Opscode. Adam Jacob, co-founder of Opscode, is known as the founder of Chef. This technology uses Ruby encoding to develop basic building blocks such as recipes and cookbooks. Chef is used in infrastructure automation and helps in reducing manual and repetitive tasks for infrastructure management.
Chef has its own conventions for the different building blocks required to manage and automate infrastructure.
Why Chef?
Chef is a configuration management technology used to automate infrastructure provisioning. It is developed on the basis of the Ruby DSL language. It is used to streamline the task of configuring and managing a company's servers. It has the capability to integrate with any cloud technology.
In DevOps, we use Chef to deploy and manage servers and applications, both in-house and in the cloud.
Features of Chef
Following are the most prominent features of Chef −
● Chef uses the popular Ruby language to create a domain-specific language.
● Chef does not make assumptions about the current status of a node. It uses its own mechanisms to get the current status of a machine.
● Chef is ideal for deploying and managing cloud servers, storage, and software.
Advantages of Chef
Chef offers the following advantages −
● Lower barrier to entry − As Chef uses the native Ruby language for configuration, a standard configuration language, it can be easily picked up by anyone with some development experience.
● Excellent integration with cloud − Using the knife utility, it can be easily integrated with any cloud technology. It is the best tool for an organization that wishes to distribute its infrastructure across a multi-cloud environment.
Disadvantages of Chef
Some of the major drawbacks of Chef are as follows −
● One of the biggest disadvantages of Chef is the way cookbooks are controlled. They need constant attention so that the people working on them do not interfere with others' cookbooks.
● Only Chef Solo is available.
● In the current situation, it is only a good fit for the AWS cloud.
● It is not very easy to learn if the person is not familiar with Ruby.
● Documentation is still lacking.
Key Building Blocks of Chef
Recipe
A recipe can be defined as a collection of attributes used to manage the infrastructure. The attributes present in a recipe are used to change the existing state of, or to set up, a particular infrastructure node. They are loaded during a Chef client run and compared with the existing attributes of the node (machine). The node is then brought to the state defined in the node resources of the recipe. The recipe is the main workhorse of the cookbook.
Cookbook
A cookbook is a collection of recipes. Cookbooks are the basic building blocks that get uploaded to the Chef server. When a Chef run takes place, it ensures that the recipes inside a cookbook bring the given infrastructure to the desired state as listed in the recipes.
Resource
A resource is the basic component of a recipe, used to manage the infrastructure with different kinds of states. There can be multiple resources in a recipe, which help in configuring and managing the infrastructure. For example (a short recipe combining several of these resources follows the list) −
● package − Manages the packages on a node
● service − Manages the services on a node
● user − Manages the users on the node
● group − Manages groups
● template − Manages the files with embedded Ruby templates
● cookbook_file − Transfers files from the files subdirectory in the cookbook to a location on the node
● file − Manages the contents of a file on the node
● directory − Manages the directories on the node
● execute − Executes a command on the node
● cron − Edits an existing cron file on the node
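As a minimal sketch (assuming a Debian-family node; the nginx package and the file path are illustrative), a recipe combining the package, service, and file resources might look like:

# recipes/default.rb - install nginx, enable its service, and drop a landing page
package 'nginx' do
  action :install
end

service 'nginx' do
  action [:enable, :start]
end

file '/var/www/html/index.html' do
  content '<h1>Hello from Chef</h1>'
  mode '0644'
end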
Chef Architecture
● Chef works on a three-tier client-server model wherein the working units, such as cookbooks, are developed on the Chef workstation. From command-line utilities such as knife, they are uploaded to the Chef server, and all the nodes present in the architecture are registered with the Chef server.
● In order to get a working Chef infrastructure in place, we need to set up multiple things in sequence.
● In the above setup, we have the following components.
● Chef Workstation
● This is the location where all the configurations are developed. The Chef workstation is installed on the local machine. The detailed configuration structure is discussed in the later chapters of this tutorial.
● Chef Server
● This works as a centralized working unit of the Chef setup, where all the configuration files are uploaded after development. There are different kinds of Chef server: some are hosted Chef servers, whereas some are built on-premise.
● Chef Nodes
● They are the actual machines which are going to be managed by the Chef server. All the nodes can have different kinds of setup as per requirement. The Chef client is the key component of all the nodes; it helps in setting up the communication between the Chef server and the Chef node. Another component of a Chef node is Ohai, which helps in getting the current state of any node at a given point in time.
SaltStack
SaltStack is open-source configuration management software and a remote execution engine. Salt is a command-line tool. While written in Python, SaltStack configuration management is language-agnostic and simple. The Salt platform uses the push model for executing commands via the SSH protocol. The default configuration system is YAML with Jinja templates. Salt primarily competes with Puppet, Chef, and Ansible.
Salt provides many features when compared to other competing tools. Some of these important features are listed below.
● Fault tolerance − Salt minions can connect to multiple masters at one time by configuring the master configuration parameter as a YAML list of all the available masters. Any master can direct commands to the Salt infrastructure.
● Flexible − The entire management approach of Salt is very flexible. It can be implemented to follow the most popular systems-management models, such as Agent and Server, Agent-only, Server-only, or all of the above in the same environment.
● Scalable configuration management − SaltStack is designed to handle ten thousand minions per master.
● Parallel execution model − Salt can execute commands on remote systems in a parallel manner.
● Python API − Salt provides a simple programming interface, and it was designed to be modular and easily extensible, to make it easy to mold to diverse applications.
● Easy to set up − Salt is easy to set up and provides a single remote-execution architecture that can manage the diverse requirements of any number of servers.
● Language agnostic − Salt state configuration files, the templating engine, and the file type support any type of language.
Benefits of SaltStack
Being simple as well as a feature-rich system, Salt provides many benefits, which can be summarized as follows −
● Robust − Salt is a powerful and robust configuration management framework that works across tens of thousands of systems.
● Authentication − Salt manages simple SSH key pairs for authentication.
● Secure − Salt manages secure data using an encrypted protocol.
● Fast − Salt is a very fast, lightweight communication bus that provides the foundation for a remote execution engine.
● Virtual machine automation − The Salt Virt Cloud Controller capability is used for automation.
● Infrastructure as data, not code − Salt provides simple deployment, model-driven configuration management, and a command execution framework.
Introduction to ZeroMQ
Salt is based on the ZeroMQ library, which is an embeddable networking library. It is a lightweight and fast messaging library. The basic implementation is in C/C++, and native implementations for several languages, including Java and .NET, are available.
ZeroMQ is a broker-less peer-to-peer message-processing library. ZeroMQ allows you to design a complex communication system easily.
ZeroMQ comes with the following five basic patterns −
● Synchronous Request/Response − Used for sending a request and receiving subsequent replies for each one sent.
● Asynchronous Request/Response − The requestor initiates the conversation by sending a Request message and waits for a Response message. The provider waits for incoming Request messages and replies with Response messages.
● Publish/Subscribe − Used for distributing data from a single process (e.g. a publisher) to multiple recipients (e.g. subscribers).
● Push/Pull − Used for distributing data to connected nodes.
● Exclusive Pair − Used for connecting two peers together, forming a pair.
ZeroMQ is a highly flexible networking tool for exchanging messages among clusters, cloud, and other multi-system environments. ZeroMQ is the default transport library presented in SaltStack.
SaltStack – Architecture
The architecture of SaltStack is designed to work with any number of servers, from local network systems to deployments across different data centers. The architecture is a simple server/client model with the needed functionality built into a single set of daemons.
Take a look at the following illustration. It shows the different components of the SaltStack architecture (a few example commands follow the list).
● SaltMaster − SaltMaster is the master daemon. A SaltMaster is used to send commands and configurations to the Salt slaves. A single master can manage multiple minions.
● SaltMinions − SaltMinion is the slave daemon. A Salt minion receives commands and configuration from the SaltMaster.
● Execution − Modules and ad hoc commands executed from the command line against one or more minions. It performs real-time monitoring.
● Formulas − Formulas are pre-written Salt states. They are as open-ended as Salt states themselves and can be used for tasks such as installing a package, configuring and starting a service, setting up users or permissions, and many other common tasks.
● Grains − Grains is an interface that provides information specific to a minion. The information available through the grains interface is static. Grains get loaded when the Salt minion starts, which means that the information in grains is unchanging. Therefore, grains information could be about the running kernel or the operating system. It is case insensitive.
● Pillar − A pillar is an interface that generates and stores highly sensitive data specific to a particular minion, such as cryptographic keys and passwords. It stores data in key/value pairs, and the data is managed in a similar way to the Salt state tree.
● Top file − Matches Salt states and pillar data to Salt minions.
● Runners − A runner is a module located inside the SaltMaster that performs tasks such as checking job status and connection status, reading data from external APIs, querying connected Salt minions, and more.
● Returners − Return data from Salt minions to another system.
● Reactor − Responsible for triggering reactions when events occur in your SaltStack environment.
● SaltCloud − SaltCloud provides a powerful interface to interact with cloud hosts.
● SaltSSH − Run Salt commands over SSH on systems without using a Salt minion.
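To make these components concrete, here are a few illustrative master-side commands, assuming a SaltMaster with connected minions (the minion ID web01 is a made-up example):

# Check which minions respond (runs the test.ping execution module on all minions)
salt '*' test.ping

# Install a package on one minion ad hoc (pkg.install is a standard execution module)
salt 'web01' pkg.install nginx

# Apply the states mapped in the top file to all minions
salt '*' state.apply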
Docker
Docker is a container management service. The keywords of Docker are develop, ship and run anywhere. The whole idea of Docker is for developers to easily develop applications and ship them into containers, which can then be deployed anywhere.
The initial release of Docker was in March 2013, and since then it has become the buzzword for modern-world development, especially for Agile-based projects.
Features of Docker
● Docker has the ability to reduce the size of development by providing a smaller footprint of the operating system via containers.
● With containers, it becomes easier for teams across different units, such as development, QA and operations, to work seamlessly across applications.
● You can deploy Docker containers anywhere: on any physical and virtual machines and even on the cloud.
● Since Docker containers are pretty lightweight, they are very easily scalable.
Components of Docker
Docker has the following components:
● Docker for Mac − It allows one to run Docker containers on the Mac OS.
● Docker for Linux − It allows one to run Docker containers on the Linux OS.
● Docker for Windows − It allows one to run Docker containers on the Windows OS.
● Docker Engine − It is used for building Docker images and creating Docker containers.
● Docker Hub − This is the registry which is used to host various Docker images.
● Docker Compose − This is used to define applications using multiple Docker containers.
Docker architecture
● Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface. Another Docker client is Docker Compose, which lets you work with applications consisting of a set of containers.
The Docker daemon
The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.
The Docker client
The Docker client (docker) is the primary way that many Docker users interact with Docker. When you use commands such as docker run, the client sends these commands to dockerd, which carries them out. The docker command uses the Docker API. The Docker client can communicate with more than one daemon.
Docker Desktop
Docker Desktop is an easy-to-install application for your Mac, Windows or Linux environment that enables you to build and share containerized applications and microservices. Docker Desktop includes the Docker daemon (dockerd), the Docker client (docker), Docker Compose, Docker Content Trust, Kubernetes, and Credential Helper. For more information, see Docker Desktop.
Docker registries
A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can even run your own private registry.
When you use the docker pull or docker run commands, the required images are pulled from your configured registry. When you use the docker push command, your image is pushed to your configured registry.
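A short sketch of that pull/push flow, assuming a private registry reachable at registry.example.com (an illustrative hostname):

# Pull an image from the default registry (Docker Hub)
docker pull ubuntu:22.04

# Tag it for the private registry
docker tag ubuntu:22.04 registry.example.com/myteam/ubuntu:22.04

# Push the tagged image to the configured registry
docker push registry.example.com/myteam/ubuntu:22.04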
Docker objects
When you use Docker, you are creating and using images, containers, networks, volumes, plugins, and other objects. This section is a brief overview of some of those objects.
Images
An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization. For example, you may build an image which is based on the ubuntu image, but installs the Apache web server and your application, as well as the configuration details needed to make your application run.
You might create your own images or you might only use those created by others and published in a registry. To build your own image, you create a Dockerfile with a simple syntax for defining the steps needed to create the image and run it. Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image, only those layers which have changed are rebuilt. This is part of what makes images so lightweight, small, and fast, when compared to other virtualization technologies.
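A minimal Dockerfile sketch matching the ubuntu-plus-Apache example above (the content path ./site/ is illustrative):

# Build on the ubuntu base image; each instruction below creates a layer
FROM ubuntu:22.04

# Install the Apache web server
RUN apt-get update && apt-get install -y apache2

# Copy the application's static content into Apache's document root
COPY ./site/ /var/www/html/

# Expose the HTTP port and run Apache in the foreground so the container stays alive
EXPOSE 80
CMD ["apache2ctl", "-D", "FOREGROUND"]

Building it with docker build -t my-apache . and then rebuilding after a change to ./site/ re-runs only the COPY layer onward, thanks to layer caching.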
Containers
A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.
By default, a container is relatively well isolated from other containers and its host machine. You can control how isolated a container's network, storage, or other underlying subsystems are from other containers or from the host machine.
A container is defined by its image as well as any configuration options you provide to it when you create or start it. When a container is removed, any changes to its state that are not stored in persistent storage disappear.
Example docker run command
The following command runs an ubuntu container, attaches interactively to your local command-line session, and runs /bin/bash.
$ docker run -i -t ubuntu /bin/bash
When you run this command, the following happens (assuming you are using the default registry configuration):
1. If you do not have the ubuntu image locally, Docker pulls it from your configured registry, as though you had run docker pull ubuntu manually.
2. Docker creates a new container, as though you had run a docker container create command manually.
3. Docker allocates a read-write filesystem to the container, as its final layer. This allows a running container to create or modify files and directories in its local filesystem.
4. Docker creates a network interface to connect the container to the default network, since you did not specify any networking options. This includes assigning an IP address to the container. By default, containers can connect to external networks using the host machine's network connection.
5. Docker starts the container and executes /bin/bash. Because the container is running interactively and attached to your terminal (due to the -i and -t flags), you can provide input using your keyboard while the output is logged to your terminal.
6. When you type exit to terminate the /bin/bash command, the container stops but is not removed. You can start it again or remove it (see the commands after this list).
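To follow up on step 6, a couple of illustrative commands (replace <container-id> with the ID reported by docker ps -a):

# List all containers, including the stopped one from the example
docker ps -a

# Restart the stopped container and reattach to it interactively
docker start -ai <container-id>

# Or remove it permanently
docker rm <container-id>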
The underlying technology
Docker is written in the Go programming language and takes advantage of several features of the Linux kernel to deliver its functionality. Docker uses a technology called namespaces to provide the isolated workspace called the container. When you run a container, Docker creates a set of namespaces for that container.
These namespaces provide a layer of isolation. Each aspect of a container runs in a separate namespace, and its access is limited to that namespace.
Previous Question Papers
Unit 1:
PART A:
1) Explain briefly about SDLC?
2) What is waterfall model?
3) What is agile model?
4) Why DevOps?
5) What is DevOps?
6) What is ITIL?
7) What is continuous development?
8) What is continuous integration?
9) What is continuous testing?
10) What is continuous delivery?
11) What is continuous deployment?
12) What is Scrum?
13) What is Kanban?
PART B:
1) What is the difference between Agile and DevOps?
2) What are the differences between agile and waterfall model?
3) Explain DevOps process flow in detail?
4) What is continuous delivery and how does it work?
5) Explain components of delivery pipeline?
Unit 2:
PART A:
1. What are different software development life cycle models?
2. What is data tier in DevOps?
3. What is monolithic architecture?
4. What are benefits of monolithic architecture?
PART B:
1. Explain DevOps lifecycle in detail?
2. What are DevOps components?
3. Explain about DevOps architecture in detail?
4. What are microservices and how does microservices architecture work?
5. Explain architecture rules of thumb?
6. Explain database migration?
7. What are the advantages of migration tools?
Unit 3:
PART A:
1. What is the need for source code control in DevOps?
2. What are the roles and code in DevOps?
3. What are the benefits of source code management in DevOps?
4. What is shared authentication in DevOps?
5. What is pull request model in DevOps?
PART B:
1. What is version control? Explain types of version control systems and benefits of version control systems?
2. What is Gerrit and explain the architecture of Gerrit?
3. What is Docker intermission and what are the differences between Docker and a virtual machine?
4. Explain Gerrit and its architecture?
Unit 4:
PART A:
1) What is Git plugin?
2) What is managing build dependencies in DevOps?
3) What are build pipelines?
4) What is job chaining?
5) Explain collating quality measures?
6) What are alternative build servers?
PART B:
1. What is Jenkins and explain its workflow?
2. Explain Jenkins master-slave architecture and Jenkins applications?
3. What are build slaves in DevOps?
4. What is infrastructure as code in DevOps?
5. What are different build phases in DevOps?
Unit 5:
PART A:
1. What is JavaScript testing?
2. What are the different tools used for backend testing?
3. Explain TDD vs traditional testing?
4. What is acceptance TDD and developer TDD?
5. What is REPL-driven development?
6. What is deployment of the system?
7. What is virtualization of the stack?
PART B:
1. What is testing and explain different types of testing?
2. Pros and cons of testing?
3. What is Selenium and explain Selenium features?
4. What are backend integration points and explain backend testing validation?
5. What are advantages and disadvantages of backend testing?
6. What is TDD and how is it performed?
Mid Question Paper
Unit wise objective questions.
a. Two programmers working on the same task
b. Two programmers working on different tasks
c. One programmer working alone
d. None of the above
Answer: a. Two programmers working on the same task
6) What is ITIL (Information Technology Infrastructure Library)?
a. A set of best practices for IT service management
b. A framework for managing and delivering IT services
c. A methodology for continuous delivery and integration
d. All of the above
Answer: a. A set of best practices for IT service management
7) What is the main goal of ITIL?
a. To improve the quality of IT services
b. To reduce the cost of IT services
c. To improve the efficiency of IT service delivery
d. All of the above
Answer: d. All of the above
8) What is the relationship between ITIL and DevOps?
a. ITIL and DevOps are completely separate and have no relationship
b. ITIL is a methodology that can be used to support DevOps
c. DevOps is a methodology that can be used to support ITIL
d. Both ITIL and DevOps are completely integrated and cannot be used separately
Answer: b. ITIL is a methodology that can be used to support DevOps
9) Which ITIL process is concerned with the delivery of IT services to customers?
a. Incident Management
b. Service Delivery
c. Service Level Management
d. Capacity Management
Answer: b. Service Delivery
10) What is the purpose of the Change Management process in ITIL?
a. To ensure that changes to IT services are properly planned and tested
b. To minimize the disruption caused by changes to IT services
c. To ensure that changes are implemented in a controlled and coordinated manner
d. All of the above
Answer: d. All of the above
11) Which ITIL process is concerned with the management of IT service continuity?
a. Incident Management
b. Service Delivery
c. Service Level Management
d. Continuity Management
Answer: d. Continuity Management
12) What is the main purpose of using Kanban in DevOps?
a) To increase efficiency in the development process
b) To manage the entire software development lifecycle
c) To increase speed and agility in delivering software
d) To visualize the flow of work
Answer: d
13) What is the main difference between Scrum and Kanban?
a) To assign tasks to team members
b) To ensure work is only started when there is capacity available
c) To control the flow of work
d) To prioritize tasks
Answer: b
16) What is the main purpose of a delivery pipeline in DevOps?
a) To automate the software delivery process
b) To manage the entire software development lifecycle
c) To increase speed and agility in delivering software
d) To visualize the flow of work
Answer: a
17) What are the main stages in a typical delivery pipeline?
a) Development, testing, deployment
b) Requirements gathering, design, coding
c) Planning, execution, monitoring
d) Continuous integration, continuous delivery, continuous deployment
Answer: d
18) What is the purpose of continuous integration in a delivery pipeline?
a) To automate the testing process
b) To integrate code changes from multiple developers
c) To deploy software to production
d) To manage the entire software development lifecycle
Answer: b
19) What is the purpose of continuous delivery in a delivery pipeline?
a) To automate the testing process
b) To integrate code changes from multiple developers
c) To deploy software to production with a single click
d) To manage the entire software development lifecycle
Answer: c
20) What is the purpose of continuous deployment in a delivery pipeline?
a) To automate the testing process
b) To integrate code changes from multiple developers
c) To deploy software to production with a single click
d) To automatically deploy software to production whenever changes are made
Answer: d
OBJECTIVE TYPE UNIT WISE QUESTIONS
UNIT-2
1) What are the main stages of the DevOps lifecycle?
a) Development, testing, deployment
b) Plan, code, deploy
c) Continuous integration, continuous delivery, continuous deployment
d) Plan, build, test, release, deploy, operate, monitor
Answer: d
2) What is the purpose of the "plan" stage in the DevOps lifecycle?
a) To plan the development and deployment process
b) To build the software
c) To test the software
d) To release the software
Answer: a
3) What is the purpose of the "build" stage in the DevOps lifecycle?
a) To plan the development and deployment process
b) To build the software
c) To test the software
d) To release the software
Answer: b
4) What is the purpose of the "test" stage in the DevOps lifecycle?
a) To plan the development and deployment process
b) To build the software
c) To test the software
d) To release the software
Answer: c
5) What is the purpose of the "release" stage in the DevOps lifecycle?
a) To plan the development and deployment process
b) To build the software
c) To test the software
d) To release the software
Answer: d
6) What is the purpose of the "deploy" stage in the DevOps lifecycle?
a) To deploy the software to production
b) To build the software
c) To test the software
d) To release the software
Answer: a
7) What is the purpose of the "operate" stage in the DevOps lifecycle?
a) To operate and maintain the software
b) To build the software
c) To test the software
d) To release the software
Answer: a
8) What is the purpose of the "monitor" stage in the DevOps lifecycle?
a) To monitor the performance and stability of the software
b) To build the software
c) To test the software
d) To release the software
Answer: a
9) What is a monolithic architecture in DevOps?
a) An architecture in which all components are tightly coupled and cannot be separated
b) An architecture in which components are loosely coupled and can be separated
c) An architecture in which components are dependent on each other
d) An architecture in which components are independent of each other
Answer: a
10) What are the main benefits of a monolithic architecture in DevOps?
a) Scalability and flexibility
b) Ease of deployment
c) Simplicity and ease of maintenance
d) Isolation of components
Answer: c
11) What are the main drawbacks of a monolithic architecture in DevOps?
a) Scalability and flexibility
b) Ease of deployment
c) Simplicity and ease of maintenance
d) Isolation of components
Answer: a
12) How does a monolithic architecture impact deployment in DevOps?
a) Deployment is difficult because all components are tightly coupled
b) Deployment is easy because all components are loosely coupled
c) Deployment is not impacted by the architecture
d) Deployment is made more complex because of the inter-dependencies of components
Answer: a
13) How does a monolithic architecture impact scalability in DevOps?
a) Scalability is difficult because all components are tightly coupled
b) Scalability is easy because all components are loosely coupled
c) Scalability is not impacted by the architecture
d) Scalability is made more complex because of the inter-dependencies of components
Answer: a
14) What is the main purpose of database migrations in DevOps?
a) To move data from one database to another
b) To change the schema of a database
c) To store data in a database
d) To retrieve data from a database
Answer: b
15) What are the main challenges of handling database migrations in DevOps?
a) Data loss and downtime
b) Incompatibility with different database systems
c) Lack of automation
d) All of the above
Answer: d
16) How can database migrations be automated in DevOps?
a) By using manual scripts
b) By using database migration tools
c) By using continuous integration and continuous deployment (CI/CD) pipelines
d) By using database backup tools
Answer: c
17) What is the purpose of using database migration tools in DevOps?
a) To automate the process of database migrations
b) To store data in a database
c) To retrieve data from a database
d) To move data from one database to another
Answer: a
18) How can database downtime be minimized during migrations in DevOps?
a) By using manual scripts
b) By using database migration tools
c) By using continuous integration and continuous deployment (CI/CD) pipelines
d) By using database backup tools
Answer: b
19) What is a microservice architecture in DevOps?
a) An architecture in which a large application is divided into small, independent services
b) An architecture in which a large application is tightly coupled and cannot be separated
c) An architecture in which a large application is loosely coupled and can be separated
d) An architecture in which a large application is dependent on a single service
Answer: a
20) What are the main benefits of using microservices in DevOps?
a) Scalability and flexibility
b) Ease of deployment
c) Simplicity and ease of maintenance
d) All of the above
Answer: d
21) What are the main drawbacks of using microservices in DevOps?
a) Complexity of managing multiple services
b) Inter-service communication overhead
c) Lack of scalability
d) All of the above
Answer: a
22) How does using microservices impact deployment in DevOps?
a) Deployment is more complex because multiple services must be deployed
b) Deployment is simpler because services can be deployed independently
c) Deployment is not impacted by the architecture
d) Deployment is made easier because of the inter-dependencies of services
Answer: b
OBJECTIVE TYPE UNIT WISE QUESTIONS
UNIT-3
1) What is the purpose of source code management in DevOps?
a) To manage and track changes to source code
b) To store source code
c) To compile source code
d) To distribute source code
Answer: a
2) What are the main benefits of using source code management in DevOps?
a) Improved collaboration and coordination between developers
b) Increased visibility into code changes
c) Better organization of source code
d) All of the above
Answer: d
3) What are the main tools used for source code management in DevOps?
a) Git
b) Subversion
c) Mercurial
d) All of the above
Answer: d
4) How does using source code management impact deployment in DevOps?
a) Deployment is not impacted by source code management
b) Deployment is made more complex because of the need to manage code changes
c) Deployment is simplified because code changes are tracked and can be easily rolled back
d) Deployment is made easier because code changes are automatically compiled
Answer: c
5) How does using source code management impact collaboration between developers in DevOps?
a) Collaboration is not impacted by source code management
b) Collaboration is made more complex because of the need to manage code changes
c) Collaboration is simplified because code changes are tracked and can be easily reviewed
d) Collaboration is made easier because code changes are automatically compiled
Answer: c
6) What is a migration in DevOps?
a) A process of moving data from one location to another
b) A process of changing infrastructure
c) A process of updating software
d) A process of changing development processes
Answer: a
7) What are the main benefits of using migrations in DevOps?
a) Improved stability of systems
b) Increased efficiency of systems
c) Better ability to scale systems
d) All of the above
Answer: d
8) What are the main challenges associated with migrations in DevOps?
a) Data loss
b) Downtime
c) Increased complexity of systems
d) All of the above
Answer: d
9) How do migrations impact deployment in DevOps?
a) Deployment is not impacted by migrations
b) Deployment is made more complex because of the need to manage data migrations
c) Deployment is simplified because migrations are automated
d) Deployment is made easier because migrations are automatically performed
Answer: b
10) How do migrations impact collaboration between teams in DevOps?
a) Collaboration is not impacted by migrations
b) Collaboration is made more complex because of the need to coordinate migrations
c) Collaboration is simplified because migrations are tracked and can be easily reviewed
d) Collaboration is made easier because migrations are automatically performed
Answer: b
10) What is shared authentication in DevOps?
a) A system for sharing authentication credentials between different systems
b) A system for storing authentication credentials
c) A system for managing authentication credentials
d) A system for distributing authentication credentials
Answer: a
11) What are the main benefits of using shared authentication in DevOps?
a) Improved security
b) Increased efficiency
c) Better ability to manage authentication credentials
d) All of the above
Answer: d
12) What are the main challenges associated with shared authentication in DevOps?
a) Lack of control over authentication credentials
b) Increased risk of unauthorized access
c) Increased complexity of systems
d) All of the above
Answer: d
13) How does shared authentication impact deployment in DevOps?
a) Deployment is not impacted by shared authentication
b) Deployment is made more complex because of the need to manage shared authentication credentials
c) Deployment is simplified because authentication is centralized
d) Deployment is made easier because authentication is automatically performed
Answer: b
14) How does shared authentication impact collaboration between teams in DevOps?
a) Collaboration is not impacted by shared authentication
b) Collaboration is made more complex because of the need to coordinate shared authentication credentials
c) Collaboration is simplified because authentication is centralized
d) Collaboration is made easier because authentication is automatically performed.
15) What is Git?
a) A version control system
b) A file backup system
c) A project management tool
d) A software distribution platform
Answer: a
16) What are the main benefits of using Git in software development?
a) Improved collaboration
b) Increased efficiency
c) Better ability to manage code changes
d) All of the above
Answer: d
17) What is the default branch in a Git repository?
a) master
b) develop
c) trunk
d) main
Answer: a
18) How does Git handle conflicts between multiple code changes?
a) Git automatically merges changes
b) Git prompts the user to manually resolve conflicts
c) Git discards conflicting changes
d) Git stores conflicting changes as separate branches
Answer: b
19) What is the purpose of a Git stash?
a) To save changes temporarily without committing them
b) To discard changes
c) To revert code changes
d) To store changes as a new branch
Answer: a
20) What is GitHub?
a) A version control system
b) A code hosting platform
c) A project management tool
d) A software distribution platform
Answer: b
21) What are the main benefits of using GitHub in software development?
a) Improved collaboration
b) Increased visibility of code changes
c) Better ability to manage code changes
d) All of the above
Answer: d
22) What is a GitHub repository?
a) A collection of code and related files
b) A place to store code backups
c) A project management tool
d) A software distribution platform
Answer: a
23) What is a pull request in GitHub?
a) A request for code changes to be merged into a repository
b) A request for code to be stored in a repository
c) A request for a repository to be deleted
d) A request for code to be reviewed
Answer: a
24) What is a GitHub issue?
a) A place to report bugs or request features
b) A place to store code backups
c) A project management tool
d) A software distribution platform
Answer: a
25) What is Docker?
a) A virtual machine software
b) A containerization platform
c) A configuration management tool
d) A software distribution platform
Answer: b
26) What are the main benefits of using Docker in software development?
a) Improved application portability
b) Increased efficiency in deploying applications
c) Better ability to manage dependencies
d) All of the above
Answer: d
27) What is a Docker image?
a) A pre-configured environment for running applications
b) A set of instructions for building containers
c) A place to store configuration data
d) A way to manage container resources
Answer: a
28) What is a Docker container?
a) A pre-configured environment for running applications
b) A set of instructions for building containers
c) A place to store configuration data
d) A running instance of a Docker image
Answer: d
29) What is the purpose of a Dockerfile?
a) To store configuration data for a Docker container
b) To specify the steps to build a Docker image
c) To run a Docker container
d) To manage container resources
Answer: b
30) What is Gerrit?
a) A version control system
b) A code hosting platform
c) A code review tool
d) A software distribution platform
Answer: c
31) What are the main benefits of using Gerrit in software development?
a) Improved collaboration
b) Increased visibility of code changes
c) Better ability to manage code changes
d) All of the above
Answer: d
32) What is a Gerrit change?
a) A set of code changes in a repository
b) A request for code changes to be merged
c) A request for code review
d) A place to store code backups
Answer: a
33) What is a Gerrit patchset?
a) A new version of a change in Gerrit
b) A request for code changes to be merged
c) A request for code review
d) A place to store code backups
Answer: a
34) What is a Gerrit review?
a) An evaluation of code changes in Gerrit
b) A request for code changes to be merged
c) A request for code review
d) A place to store code backups
Answer: a
UNIT-4
1) What is Jenkins?
a) A virtual machine software
b) A continuous integration and continuous delivery (CI/CD) tool
c) A configuration management tool
d) A software distribution platform
Answer: b
2) What are the main benefits of using Jenkins in software development?
a) Improved collaboration
b) Increased efficiency in software delivery
c) Better ability to manage build and deployment processes
d) All of the above
Answer: d
3) What is a Jenkins job?
a) A set of instructions for building and deploying software
b) A place to store code backups
c) A project management tool
d) A software distribution platform
Answer: a
4) What is a Jenkins build?
a) The process of building and compiling software
b) A place to store build artifacts
c) A request for code review
d) A running instance of a Docker image
Answer: a
5) What is a Jenkins pipeline?
a) A set of instructions for building and deploying software
b) A continuous delivery pipeline
c) A place to store configuration data
d) A way to manage container resources
Answer: b
6) What is a Jenkins plugin?
a) A software component that adds functionality to Jenkins
b) A version control system
c) A code review tool
d) A software distribution platform
Answer: a
7) What are the main benefits of using Jenkins plugins in software development?
a) Improved efficiency in software delivery
b) Increased flexibility in customizing Jenkins
c) Better ability to integrate with other tools and systems
d) All of the above
Answer: d
8) What is the purpose of the Jenkins Git plugin?
a) To integrate Git version control with Jenkins
b) To manage Jenkins jobs
c) To automate code review processes
d) To distribute software packages
Answer: a
9) What is the purpose of the Jenkins Pipeline plugin?
a) To define and manage Jenkins pipelines
b) To automate code review processes
c) To distribute software packages
d) To manage Docker images
Answer: a
10) What is the purpose of the Jenkins Deployment Pipeline plugin?
a) To automate deployment processes in Jenkins
b) To manage Jenkins jobs
c) To integrate version control with Jenkins
d) To distribute software packages
Answer: a
11) What is a trigger in DevOps?
a) An event or condition that initiates a process or action
b) A version control system
c) A code review tool
d) A software distribution platform
Answer: a
12) What are the main types of triggers in DevOps?
a) Scheduled triggers
b) Event-based triggers
c) Manual triggers
d) All of the above
Answer: d
13) What is the purpose of scheduled triggers in DevOps?
a) To initiate processes or actions at pre-determined times
b) To respond to events or conditions
c) To manually initiate processes or actions
d) To distribute software packages
Answer: a
14) What is the purpose of event-based triggers in DevOps?
a) To respond to events or conditions
b) To initiate processes or actions at pre-determined times
c) To manually initiate processes or actions
d) To distribute software packages
Answer: a
15) What is the purpose of manual triggers in DevOps?
a) To manually initiate processes or actions
b) To respond to events or conditions
c) To initiate processes or actions at pre-determined times
d) To distribute software packages
Answer: a
16) What is the purpose of build pipelines in DevOps?
a) To automate the process of building software
b) To manage version control
c) To automate code review processes
d) To distribute software packages
Answer: a
17) What is the difference between build pipelines and orchestration in DevOps?
a) Build pipelines are a series of automated steps for building software, while orchestration involves coordinating and automating the various steps and processes involved in software delivery
b) Build pipelines are a manual process, while orchestration involves automating processes
c) Build pipelines are only for code review, while orchestration involves the entire software delivery process
d) There is no difference, they refer to the same thing
Answer: a
18) What are the benefits of using build pipelines in DevOps?
a) Improved efficiency in software delivery
b) Increased transparency in software development processes
c) Better ability to identify and resolve problems early in the development process
d) All of the above
Answer: d
19) What are the benefits of using orchestration in DevOps?
a) Improved efficiency in software delivery
b) Increased collaboration among teams
c) Improved ability to scale processes and systems
d) All of the above
Answer: d
20) How does orchestration in DevOps help with continuous delivery and continuous deployment?
a) By coordinating and automating the various steps and processes involved in software delivery
b) By manual review and approval of every step in the delivery process
c) By only building software, without coordinating and automating delivery processes
d) By only distributing software packages, without coordinating and automating delivery processes
Answer: a
OBJECTIVE TYPE UNIT WISE QUESTIONS
UNIT-5
1) What is the main goal of testing in DevOps?
a) To ensure that software is of high quality and meets customer requirements
b) To increase development speed
c) To implement version control
d) To automate code review processes
Answer: a
2) What are the benefits of incorporating testing into the DevOps process?
a) Faster time-to-market for software releases
b) Improved software quality and reliability
c) Increased transparency in the development process
d) All of the above
Answer: d
4) What is the role of automation in testing in DevOps?
a) Automation helps to make testing faster, more efficient, and more reliable
b) Automation is not necessary in testing
c) Automation slows down the testing process
d) Automation only makes testing more manual
Answer: a
5) What is the purpose of test-driven development (TDD) in DevOps?
a) To ensure that code meets requirements before it is even written
b) To test code after it has been written
c) To manually review and approve code
d) To only distribute software packages
Answer: a
6) What is Selenium used for in software testing?
a) Automated testing of web applications
b) Automated testing of desktop applications
c) Automated testing of mobile applications
d) Automated testing of command-line applications
Answer: a
7) What programming languages can be used with Selenium?
a) Java, Python, Ruby, and C#
b) Assembly language only
c) Swift only
d) Visual Basic only
Answer: a
8) What is the Selenium WebDriver?
a) A library for automating web browsers
b) A library for automating desktop applications
c) A library for automating mobile applications
d) A library for automating command-line applications
Answer: a
9) What are the components of the Selenium Suite?
a) Selenium WebDriver, Selenium Grid, and Selenium IDE
b) Selenium WebDriver only
c) Selenium Grid only
d) Selenium IDE only
Answer: a
10) What is the purpose of Selenium Grid in the Selenium Suite?
a) To distribute tests across multiple machines and environments for parallel execution
b) To run tests sequentially on a single machine
c) To manually review and approve tests
d) To only distribute test results
Answer: a
11) What is the main goal of JavaScript testing in DevOps?
a) To ensure the functionality and reliability of JavaScript code
b) To ensure the functionality and reliability of only server-side code
c) To ensure the functionality and reliability of only database code
d) To ensure the functionality and reliability of only HTML and CSS code
Answer: a
13) What is unit testing in JavaScript testing?
a) Testing individual units of JavaScript code in isolation
b) Testing the entire JavaScript application as a whole
c) Testing only server-side code
d) Testing only database code
Answer: a
15) How can JavaScript testing improve the speed and reliability of software delivery in DevOps?
a) By quickly identifying and resolving issues in JavaScript code, reducing the risk of causing problems in later stages of the software delivery process
b) By slowing down the software delivery process
c) By having no impact on the software delivery process
d) By increasing the manual effort required for software delivery
Answer: a
16) What is Puppet Master?
a) An open-source configuration management tool
b) A version control system
c) A cloud service provider
d) A continuous integration tool
Answer: a
17) What is the main purpose of using Puppet Master in DevOps?
a) To automate the configuration and management of IT infrastructure
b) To automate the development process
c) To automate the deployment process
d) To automate all stages of the software delivery process
Answer: a
18) What kind of infrastructure can be managed using Puppet Master?
a) Physical servers, virtual machines, and cloud-based systems
b) Only physical servers
c) Only virtual machines
d) Only cloud-based systems
Answer: a
19) What is a Puppet module in Puppet Master?
a) A pre-written set of Puppet code that can be used to automate specific tasks
b) A manual process that requires manual coding
c) A tool for code collaboration
d) A tool for code deployment
Answer: a
20) What are the benefits of using Puppet Master in DevOps?
a) Improved speed, reliability, and consistency of IT infrastructure management
b) Increased manual effort required for IT infrastructure management
c) No impact on IT infrastructure management
d) Slower and less reliable IT infrastructure management
Answer: a
21) What is Ansible?
a) An open-source configuration management tool
b) A version control system
c) A cloud service provider
d) A continuous integration tool
Answer: a
22) What is the main purpose of using Ansible in DevOps?
a) To automate the configuration and management of IT infrastructure
b) To automate the development process
c) To automate the deployment process
d) To automate all stages of the software delivery process
Answer: a
23) What kind of infrastructure can be managed using Ansible?
a) Physical servers, virtual machines, and cloud-based systems
b) Only physical servers
c) Only virtual machines
d) Only cloud-based systems
Answer: a
24) What is an Ansible playbook in Ansible?
a) A pre-written set of Ansible code that can be used to automate specific tasks
b) A manual process that requires manual coding
c) A tool for code collaboration
d) A tool for code deployment
Answer: a
25) What are the benefits of using Ansible in DevOps?
a) Improved speed, reliability, and consistency of IT infrastructure management
b) Increased manual effort required for IT infrastructure management
c) No impact on IT infrastructure management
d) Slower and less reliable IT infrastructure management
Answer: a
26) What is Chef?
a) An open-source configuration management tool
b) A version control system
c) A cloud service provider
d) A continuous integration tool
Answer: a
27) What is the main purpose of using Chef in DevOps?
a) To automate the configuration and management of IT infrastructure
b) To automate the development process
c) To automate the deployment process
d) To automate all stages of the software delivery process
Answer: a
28) What kind of infrastructure can be managed using Chef?
a) Physical servers, virtual machines, and cloud-based systems
b) Only physical servers
c) Only virtual machines
d) Only cloud-based systems
Answer: a
29) What is a Chef recipe in Chef?
a) A pre-written set of Chef code that can be used to automate specific tasks
b) A manual process that requires manual coding
c) A tool for code collaboration
d) A tool for code deployment
Answer: a
30) What are the benefits of using Chef in DevOps?
a) Improved speed, reliability, and consistency of IT infrastructure management
b) Increased manual effort required for IT infrastructure management
c) No impact on IT infrastructure management
d) Slower and less reliable IT infrastructure management
Answer: a
Question bank with Answers
UNIT 1
Short Answer Questions:
1. Explain in detail about Agile Development Model with a neat diagram?
2. DevOps and ITIL are not mutually exclusive? Justify.
3. Explain in detail about DevOps Continuous Delivery Pipeline?
4. List the different possible cases for Bottlenecks in CI/CD?
5. With an example explain DevOps Process?
Iterative and Incremental: Agile breaks down projects into small, manageable units
called iterations or sprints. Each iteration includes planning, development, testing, and
review.
Customer Collaboration: Agile emphasizes working closely with customers and
stakeholders to gather feedback and ensure the product meets their needs.
Flexibility: Agile allows for changes and adjustments based on feedback and
evolving requirements.
Example Diagram:
2. DevOps and ITIL are not mutually exclusive? Justify.
DevOps and ITIL (Information Technology Infrastructure Library) can coexist and
complement each other. Here's how:
3. Explain in detail about DevOps Continuous Delivery Pipeline?
The DevOps Continuous Delivery (CD) Pipeline is a series of automated steps to deliver new
software versions quickly and safely. It includes:
Version Control: Code changes are tracked using version control systems like Git.
Continuous Integration (CI): Developers' changes are merged into a shared
repository frequently, with automated builds and tests ensuring code quality.
This pipeline ensures code changes are tested, integrated, and delivered to production quickly
and reliably.
4. List the different possible cases for Bottlenecks in CI/CD
Slow Build Processes: Long build times can delay the entire pipeline.
Long-running Tests: Extensive test suites can slow down the feedback loop.
Manual Approval Gates: Waiting for manual approvals can create delays.
3. Test: Automated tests run to verify the code changes. If tests pass, the build proceeds.
4. Deploy: The build is deployed to a staging environment for further testing and
validation.
5. Review: The changes are reviewed by the team and stakeholders. Feedback is
gathered and any issues are addressed.
1. Compare Agile and DevOps, and explain their complementary nature in achieving
efficient software development and delivery.
2. Discuss the principles and practices of DevOps that improve collaboration and
efficiency in IT operations.
5. Compare and contrast Scrum and Kanban as Agile methodologies, their support for
DevOps, and contribution to software delivery.
6. Explain the concept of a delivery pipeline in DevOps, its stages, and popular
tools/technologies used.
8. Analyze the relationship between DevOps and ITIL, and how to effectively
incorporate ITIL practices within a DevOps culture.
9. Explore the role of automation in DevOps, its benefits, challenges, and examples of
popular automation tools.
10. Investigate the importance of monitoring and feedback loops in DevOps, and how
organizations can leverage them for continuous improvement, with examples.
UNIT 2
Short Answer Questions:
The DevOps lifecycle is a series of stages that focus on enhancing collaboration between
development and operations teams to achieve continuous delivery and business agility. Here
are the key stages:
1. Plan: Involves planning and defining project goals, including requirements and tasks.
2. Code: Development of the software, focusing on writing and refining code.
4. Test: Automated and manual testing to ensure code quality and functionality.
These stages are iterative and aim to improve agility by enabling faster and more reliable
software delivery.
Automated Test Suites: Integration of unit, integration, and end-to-end tests that run
automatically.
Feedback Loop: Immediate feedback on code quality, allowing developers to address
issues promptly.
Shift-left Testing: Testing early in the development cycle to catch defects sooner.
Database migrations involve changing the database schema as the application evolves. Here
are some best practices:
Version Control: Keep track of database schema changes using version control tools.
Automated Migrations: Use migration tools like Flyway or Liquibase to automate
the migration process.
Example Diagram:
6. Discuss about Resilience in DevOps
Resilience in DevOps refers to the ability of systems to handle and recover from failures
gracefully. Key practices include:
1. Discuss the impact of DevOps on achieving business agility and provide examples of
companies that have adopted DevOps for faster software delivery and increased
customer responsiveness.
2. Explore continuous testing in DevOps, its contribution to software quality, and the
challenges and benefits of implementing it.
5. Discuss how DevOps ensures the resilience and robustness of software systems and
provide examples of organizations using DevOps for building resilient architectures.
7. Explore the challenges and best practices for handling database migrations in DevOps
and discuss available tools.
9. Discuss the impact of DevOps on software quality and reliability, providing examples
of improvements achieved through DevOps practices.
10. Analyze the challenges and benefits of implementing microservices in the data tier,
considering the alignment with DevOps principles and implications for data
management, scalability, and maintenance.
UNIT 3
Short Answer Questions:
Source Code Control (also known as Version Control) is essential for managing changes to
source code over time. Here are the key reasons for its necessity:
Backup and Recovery: Provides a backup of the codebase, ensuring that code can be
recovered in case of data loss.
Accountability: Tracks who made specific changes and why, enhancing transparency
and accountability.
Project Managers: Monitor progress, track changes, and ensure that development
aligns with project goals.
Data Export: Export data from the current SCM system, including repository history,
branches, and tags.
Data Import: Import the data into the target SCM system, ensuring that history and
metadata are preserved.
Validation: Validate the migration by comparing the data in both systems and
conducting thorough testing.
Training: Provide training and documentation to help team members transition to the
new system.
Shared Authentication refers to a method where multiple systems or services share a common
authentication mechanism. Key points include:
Single Sign-On (SSO): Users can log in once and access multiple systems without re-
entering credentials.
Centralized Management: Authentication is managed centrally, simplifying user
management and access control.
Hosted Git Servers are platforms that provide Git repository hosting and related services.
Here are the key features and benefits:
Collaboration Tools: Includes features like pull requests, code reviews, issue
tracking, and wikis to facilitate collaboration.
Popular Hosted Git Servers: GitHub, GitLab, Bitbucket, and Azure Repos.
Mutual Exclusion: Ensures that only one process accesses a critical section at a time,
preventing race conditions.
Busy Waiting: Uses busy waiting (spinning) to wait for access to the critical section.
Flags and Turn Variables: Utilizes flags to indicate the intent to enter the critical
section and a turn variable to manage access.
Historical Importance: One of the earliest solutions to the mutual exclusion problem
in operating systems.
Create Pull Request: The developer creates a pull request to merge the feature
branch into the main branch.
Code Review: Team members review the code, provide feedback, and suggest
improvements.
Merge: Once the code is approved, the pull request is merged into the main branch,
incorporating the changes.
Benefits: Encourages code review, collaboration, and quality control, ensuring that
only vetted changes are integrated into the codebase.
GitLab is a web-based DevOps platform that provides Git repository hosting and a
comprehensive suite of tools for software development, integration, and delivery. Key
features include:
Issue Tracking: Integrated issue tracking system for managing project tasks and
bugs.
Merge Requests: Facilitates code reviews and collaboration through merge requests
(similar to pull requests).
Security: Advanced security features like static and dynamic application security
testing (SAST/DAST), dependency scanning, and container scanning.
Open Source and Enterprise: Available in both open-source and enterprise editions,
catering to different needs and scales.
1. Explain the significance of source code control in project management, its history,
and its role in version control and collaboration.
2. Discuss the roles of developers, testers, and release managers in source code
management and how collaboration among them leads to project success.
3. Explore source code management systems, their importance, key features, and how
they enable efficient code management and version control.
4. Investigate challenges and best practices for source code migrations, with examples of
successful strategies.
6. Compare hosted Git servers like GitHub, GitLab, and Bitbucket, discussing their
features, advantages, and limitations.
9. Discuss Gerrit as a code review and collaboration tool, its features, and examples of
successful implementation.
UNIT 4
Short Answer Questions:
There are several popular build systems used in software development today:
Gradle: A flexible build automation tool that supports multiple languages and
integrates well with other tools.
TeamCity: A commercial CI/CD server from JetBrains, known for its powerful build
management features.
Travis CI: A cloud-based CI service used to build and test software projects hosted
on GitHub.
CircleCI: A CI/CD platform that automates the software development process, with
support for Docker.
Bamboo: A CI/CD server from Atlassian, designed to work seamlessly with JIRA
and Bitbucket.
GitLab CI/CD: An integrated CI/CD system within GitLab that automates the
building, testing, and deployment of code.
Azure DevOps Pipelines: Part of Azure DevOps Services, providing build and
release management for cloud and on-premises solutions.
GitHub Actions: A CI/CD service that allows developers to automate workflows
directly within GitHub repositories.
1. Install Jenkins: Download and install Jenkins on your server or use a cloud-based
version.
2. Set Up Jenkins: Configure Jenkins, including installing necessary plugins (e.g., Git
plugin, Maven plugin).
3. Create a New Job: In the Jenkins dashboard, click "New Item" to create a new job
and choose the type of job (e.g., Freestyle project, Pipeline).
4. Configure Source Code Management: Specify the repository URL (e.g., Git, SVN)
and credentials for Jenkins to access the source code.
5. Set Build Triggers: Define build triggers (e.g., Poll SCM, Git hooks) to automate the
build process when code changes are detected.
6. Define Build Steps: Add build steps to compile code, run tests, and package artifacts
(e.g., using Maven, Gradle, or custom scripts).
8. Save and Run: Save the job configuration and run the build. Jenkins will execute the
defined steps and provide build results.
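Once Jenkins is up, the same server can also be reached from scripts through its remote API. A minimal sanity-check sketch using the third-party python-jenkins client (the URL and credentials here are placeholders):
python
import jenkins  # pip install python-jenkins

server = jenkins.Jenkins('http://jenkins.example.com:8080',
                         username='admin', password='<api-token>')
print('Connected as:', server.get_whoami()['fullName'])
print('Jenkins version:', server.get_version())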
Build Slaves (also known as Build Agents) are nodes in a Jenkins build environment that perform the actual build work. Here's how they work:
Master-Agent Architecture: The Jenkins master (controller) schedules jobs and delegates the actual build execution to agents.
Distribution and Scalability: Running builds on multiple agents in parallel spreads the load and shortens feedback time.
Labels: Agents are tagged with labels (e.g., linux, windows, docker) so that jobs run only on nodes with the required tools and environment.
Connectivity: Agents typically connect to the master over SSH or an inbound (JNLP) connection.
Triggers in build automation are mechanisms that initiate a build process based on specific
events or conditions. Key points include:
Poll SCM: Periodically checks the source code repository for changes and triggers a
build if changes are detected.
Webhook Triggers: Uses webhooks to automatically trigger builds when code
changes are pushed to the repository.
Manual Triggers: Allows users to manually trigger builds from the Jenkins
dashboard or command line.
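As an illustration of a manual trigger fired outside the dashboard, the sketch below starts a build through Jenkins' remote API using the python-jenkins client; the job name and credentials are placeholders.
python
import jenkins  # pip install python-jenkins

server = jenkins.Jenkins('http://jenkins.example.com:8080',
                         username='admin', password='<api-token>')
server.build_job('app-build')            # queue a build of the job 'app-build'
info = server.get_job_info('app-build')  # inspect the job afterwards
print('Last completed build:', info['lastCompletedBuild'])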
Job Chaining and Build Pipelines in Jenkins are used to automate complex build processes by
linking multiple jobs together. Here’s how they work:
Job Chaining: Configures a series of dependent jobs, where the completion of one
job triggers the next job in the chain. This is useful for tasks like compiling code,
running tests, and deploying artifacts in sequence.
Build Pipelines: Represents a series of automated steps that code goes through from
development to production. Pipelines are defined using the Jenkinsfile, a script that
outlines the stages, steps, and conditions for each part of the process.
Stages and Steps: Pipelines are divided into stages (e.g., Build, Test, Deploy), each
containing a series of steps (e.g., shell commands, scripts).
Example:
groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'make build' }
        }
        stage('Test') {
            steps { sh 'make test' }
        }
        stage('Deploy') {
            steps { sh 'make deploy' }
        }
    }
}
6. How do you create builds in dependency order?
Creating builds by dependency order involves ensuring that builds are executed in the correct
sequence based on their dependencies. Here’s how to achieve this:
4. Use Pipeline Syntax: Define the build order within a Jenkins pipeline using stages
and steps, ensuring that stages are executed in the correct sequence.
Example:
groovy
pipeline {
    agent any
    stages {
        stage('Build Library') {
            steps {
                sh 'make build-library'
            }
        }
        stage('Build Application') {
            steps {
                sh 'make build-application'
            }
        }
        stage('Test Application') {
            steps {
                sh 'make test-application'
            }
        }
    }
}
This setup ensures that the library is built before the application and the application is tested
after the build.
Long Answer Questions:
1. Discuss the role of build systems in DevOps, their key components, and how they automate the software build process. Provide examples of popular build systems.
2. Explore the features and capabilities of the Jenkins build server, its role in continuous
integration and delivery, and the benefits and challenges of using Jenkins in DevOps.
4. Discuss the significance of Jenkins plugins in extending its functionality for tasks like
code analysis, testing, and deployment. Provide examples of popular Jenkins plugins.
5. Analyze the importance of file system layout in build server configurations, its impact
on the build process and artifact management, and best practices for designing an
efficient layout.
6. Explain the concept of build slaves in Jenkins, their role in distributed build
execution, scalability, and performance improvement. Discuss strategies for
configuring and managing build slaves effectively.
7. Investigate triggers in build automation, the types available in Jenkins, and how they
initiate the build process based on various scenarios.
8. Explore job chaining and build pipelines in Jenkins, their role in automating complex
build processes and deployment workflows, and the benefits of using them. Provide
examples of successful implementations.
9. Discuss infrastructure as code (IaC) in the context of build servers, its facilitation of
provisioning, configuration, and management. Explain the advantages of using IaC
tools for build server infrastructure.
10. Compare alternative build servers like Bamboo, TeamCity, and CircleCI with Jenkins,
discussing their features, advantages, limitations, and recommendations for choosing
the appropriate one.
11. Discuss the importance of collecting and analyzing quality metrics during the build
process, such as code coverage, static code analysis, and test results. Explain how
integrating quality measures enhances software quality and continuous improvement
in DevOps.
UNIT 5
Short Answer Questions:
1. What are the Pros and Cons of Automated Testing?
2. Write short notes on Selenium and list out its features.
Write short notes on the following configuration management tools:
i) Puppet
ii) Chef
iii) Ansible
iv) SaltStack
Pros:
Speed and Efficiency: Automated tests run faster than manual tests, allowing for
quick feedback on code changes.
Consistency: Automated tests are less prone to human error and provide consistent
results.
Reusability: Once written, automated tests can be reused across multiple projects and
different environments.
Scalability: Automated testing can handle large volumes of tests that would be
impractical to execute manually.
Cons:
Initial Investment: Writing automated tests requires time and effort, which can be
significant for large projects.
Maintenance: Automated tests need regular updates and maintenance to stay relevant
with evolving code.
Lack of Human Insight: Automated tests may not catch certain issues that a human
tester would, such as UI/UX problems or complex use cases.
Selenium is an open-source suite of tools for automating web browsers, widely used for testing web applications. Key features include:
Selenium WebDriver: Provides a programming interface to drive browsers natively, with bindings for languages such as Java, Python, C#, JavaScript, and Ruby.
Cross-Browser Support: Works with major browsers, including Chrome, Firefox, Edge, and Safari.
Selenium Grid: Allows for parallel test execution on multiple machines, increasing testing efficiency.
Record and Playback: Selenium IDE offers a record and playback tool for creating
test scripts without writing code.
Integration: Can be integrated with other tools like Maven, Jenkins, and TestNG for
continuous testing and CI/CD pipelines.
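A minimal Selenium sketch in Python (Selenium 4 API) is shown below; it assumes a Chrome driver is available on the PATH and uses a placeholder URL.
python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()               # launch a browser session
try:
    driver.get('https://example.com')     # open the page under test
    heading = driver.find_element(By.TAG_NAME, 'h1')
    assert 'Example' in heading.text      # simple functional check
    print('Page title:', driver.title)
finally:
    driver.quit()                         # always release the browser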
JavaScript testing involves verifying the functionality and performance of JavaScript code using various testing frameworks and tools. Key aspects include:
Unit Testing: Tests individual functions or modules in isolation; frameworks such as Jest, Mocha, and Jasmine are widely used.
End-to-End (E2E) Testing: Simulates user interactions with the entire application to
ensure it works as expected from start to finish. Selenium and Cypress are popular
choices.
Code Coverage: Measures the percentage of code executed during tests, helping
identify untested parts of the codebase. Tools like Istanbul and Codecov provide
coverage reports.
Automation: Automated JavaScript tests are often integrated into CI/CD pipelines to
ensure code quality and reliability with every code change.
Test-Driven Development (TDD):
Definition: TDD is a software development approach where tests are written before the actual code.
Process: Involves writing a failing test, writing the minimum code to pass the test,
and then refactoring the code while keeping the test passing.
Benefits: Ensures code quality, encourages better design, and provides a safety net for
refactoring.
Usage: Commonly used in Agile and DevOps practices for developing reliable and
maintainable code.
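A minimal TDD sketch using pytest (the function and test names here are illustrative, not from the course material): the test is written first and fails until slugify() is implemented; the implementation is then the minimum code that makes it pass, and refactoring happens with the test as a safety net.
python
import re

# Step 1: write a failing test (run with: pytest this_file.py).
def test_slugify_lowercases_and_hyphenates():
    assert slugify('Hello DevOps World') == 'hello-devops-world'

# Step 2: write the minimum code that makes the test pass.
def slugify(text):
    return re.sub(r'\s+', '-', text.strip().lower())

# Step 3: refactor freely; the test keeps the behavior pinned down.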
REPL-Driven Development:
Definition: A development style in which code is written and evaluated incrementally in a Read-Eval-Print Loop (REPL), an interactive session that executes each expression immediately.
Benefits: Provides rapid feedback, allows for quick experimentation, and facilitates learning and debugging.
Usage: Often used in dynamic languages like Python, JavaScript, and Ruby for
exploratory programming and quick prototyping.
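For example, a short Python REPL session illustrates the loop of defining, calling, and immediately inspecting code:
python
>>> def greet(name):
...     return f'Hello, {name}!'
...
>>> greet('DevOps')
'Hello, DevOps!'
>>> greet('DevOps').upper()    # tweak and re-evaluate instantly
'HELLO, DEVOPS!'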
i) Puppet: Puppet is a configuration management tool that uses a declarative, model-based language to describe the desired state of systems. A Puppet server distributes "manifests" to agent nodes, which periodically converge their configuration to the declared state.
ii) Chef: Chef is a configuration management tool that automates infrastructure provisioning and application deployment. It uses a domain-specific language (DSL) based on Ruby to write "recipes" that define how systems should be configured.
iii) Ansible: Ansible is an open-source automation tool used for configuration management, application deployment, and task automation. It uses simple, human-readable YAML files called "playbooks" to define automation tasks and is agentless, relying on SSH for communication.
iv) SaltStack: SaltStack (Salt) is an automation and configuration management tool designed for speed and scale. It typically uses a master-minion architecture with a fast message bus (ZeroMQ) and defines configurations in YAML-based "state" files.
Long Answer Questions:
1. Discuss the different types of testing in DevOps, their significance, and contributions to software quality. Provide examples of testing techniques and frameworks used.
2. Explore the benefits and challenges of test automation in software development, its
impact on efficiency and accuracy, and best practices for implementing it in DevOps.
3. Explain the features and capabilities of Selenium as a popular testing tool, including
web application testing. Discuss its advantages and limitations in a DevOps context.
4. Discuss the challenges and approaches for testing backend integration points in
software applications. Provide examples of tools used in testing backend integrations
in DevOps.
5. Explore test-driven development (TDD) and its role in ensuring code quality and test
coverage. Discuss the principles, benefits, and challenges of implementing TDD in
DevOps.
6. Discuss REPL-driven development and its benefits for iterative testing and rapid code
prototyping. Explain how it aligns with DevOps principles and facilitates faster
feedback loops.
10. Compare and contrast deployment tools like Puppet, Ansible, Chef, SaltStack, and Docker. Discuss their features, benefits, and use cases in automating deployment and infrastructure management in DevOps.