DevOps Unit 1
What is DevOps?
DevOps is a collaboration between Development and IT Operations that makes software production and deployment automated and repeatable. DevOps helps increase an organization's speed in delivering software applications and services. The term 'DevOps' is a combination of 'Development' and 'Operations.'
It allows organizations to serve their customers better and compete more strongly in the market.
In simple words, DevOps can be defined as an alignment of development and IT operations with better communication and collaboration.
Before DevOps, the development and operations teams worked in complete isolation.
Testing and deployment were isolated activities done after design and build, so they consumed more time than the actual build cycles.
Without DevOps, team members spend a large amount of their time testing, deploying, and designing instead of building the project.
Manual code deployment leads to human errors in production.
The coding and operations teams have separate timelines and are not in sync, causing further delays.
Business stakeholders demand an increased rate of software delivery. According to a Forrester Consulting study, only 17% of teams can deliver software fast enough, which proves this pain point.
How is DevOps different from traditional IT
In this DevOps training, let's compare the traditional software waterfall model with DevOps to understand the changes DevOps brings.
We assume the application is scheduled to go live in 2 weeks and coding is 80% done. We also assume the application is a fresh launch, and the process of buying servers to ship the code has just begun:
Old Process vs DevOps

Old Process: After placing an order for new servers, the Development team works on testing. The Operations team works on the extensive paperwork required in enterprises to deploy the infrastructure.
DevOps: After placing an order for new servers, the Development and Operations teams work together on the paperwork to set up the new servers. This results in better visibility of infrastructure requirements.

Old Process: Projections about failover, redundancy, data center locations, and storage requirements are skewed, as no inputs are available from the developers, who have deep knowledge of the application.
DevOps: Projections about failover, redundancy, disaster recovery, data center locations, and storage requirements are pretty accurate due to the inputs from the developers.

Old Process: The operations team has no clue about the progress of the Development team, so it develops a monitoring plan as per its own understanding.
DevOps: The Operations team is completely aware of the developers' progress. Operations teams interact with developers and jointly develop a monitoring plan that caters to IT and business needs. They also use advanced Application Performance Monitoring (APM) tools.

Old Process: Before go-live, load testing crashes the application, and the release is delayed.
DevOps: Before go-live, load testing makes the application a bit slow. The development team quickly fixes the bottlenecks, and the application is released on time.
4. Time to market: DevOps reduces the time to market up to 50% through streamlined
software delivery. It is particularly the case for digital and mobile applications.
5. Greater Quality: DevOps helps the team improve application development quality by
incorporating infrastructure issues.
6. Reduced Risk: DevOps incorporates security aspects in the software delivery lifecycle, and
it helps reduce defects across the lifecycle.
7. Resiliency: The Operational state of the software system is more stable, secure, and changes
are auditable.
8. Cost Efficiency: DevOps offers cost efficiency in the software development process, which
is always an aspiration of IT management.
9. Breaks larger code base into small pieces: DevOps is based on the agile programming
method. Therefore, it allows breaking larger codebases into smaller and manageable chunks.
DevOps Workflow
Workflows provide a visual overview of the sequence in which input is provided, the actions performed, and the output generated for an operations process.
Workflows also give teams the ability to separate and arrange the jobs that users request most, and to mirror their ideal process in the configuration of those jobs.
How is DevOps different from Agile? DevOps Vs Agile
Stakeholders and the communication chain in a typical IT process (Agile process).
DevOps addresses gaps in Developer and IT Operations communications (DevOps process).

Agile: Emphasizes breaking down barriers between developers and management.
DevOps: Is about the software deployment and operations teams.

Agile: Addresses gaps between customer requirements and development teams.
DevOps: Addresses the gap between the development and Operations teams.
DevOps Principles
Here are six principles that are essential when adopting DevOps:
1. Customer-Centric Action: The DevOps team must constantly take customer-centric action
to invest in products and services.
2. End-to-End Responsibility: The DevOps team needs to provide performance support until the products or services reach end of life. This enhances the level of responsibility and the quality of the products engineered.
4. Automate everything: Automation is a vital principle of the DevOps process, and this is
not only for software development but also for the entire infrastructure landscape.
5. Work as one team: In the DevOps culture, the designer, developer, and tester are already
defined, and all they need to do is work as one team with complete collaboration.
6. Monitor and test everything: The DevOps team needs robust monitoring and testing procedures.
The DevOps approach needs frequent, incremental changes to code versions, which requires frequent deployment and testing regimens. Although DevOps engineers only occasionally need to code from scratch, they must know the basics of software development languages.
A DevOps engineer will work with development team staff to tackle the coding and scripting
needed to connect code elements, like libraries or software development kits.
Roles, Responsibilities, and Skills of a DevOps Engineer
DevOps engineers work full-time and are responsible for the production and ongoing maintenance of a software application's platform.
Following are some of the roles, responsibilities, and skills expected of DevOps engineers:
DevOps Architecture
Development and operations both play essential roles in delivering applications. Development comprises analyzing the requirements and designing, developing, and testing the software components or frameworks.
Operations consists of the administrative processes, services, and support for the software.
When development and operations are combined and collaborate, the DevOps architecture is the solution that fixes the gap between the development and operations teams, so that delivery can be faster.
DevOps architecture is used for applications hosted on cloud platforms and for large distributed applications. Agile development is used in the DevOps architecture so that integration and delivery can be continuous. When the development and operations teams work separately from each other, it is time-consuming to design, test, and deploy. And if the teams are not in sync with each other, it may cause a delay in delivery. So DevOps enables the teams to fix their shortcomings and increases productivity.
Below are the various components that are used in the DevOps architecture:
1) Build
Without DevOps, the cost of resource consumption was evaluated based on pre-defined individual usage with fixed hardware allocation. With DevOps, the use of cloud and the sharing of resources come into the picture, and the build depends on the user's need, which is a mechanism to control the usage of resources or capacity.
2) Code
Good practices, such as using a version control tool like Git, ensure that code is written for the business, changes are tracked, the team is notified about the reason behind a difference between the actual and the expected output, and, if necessary, the code can be reverted to the original version. The code can be appropriately arranged in files, folders, etc., and it can be reused.
3) Test
The application will be ready for production after testing. With manual testing, more time is consumed in testing and moving the code to the output. Testing can be automated, which decreases testing time so that the time to deploy the code to production is reduced, since automating the running of the scripts removes many manual steps.
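To make this concrete, here is a minimal, hypothetical automated unit test in Python; the function and file names are invented for the example, and a CI tool would simply run this on every change instead of a tester checking the behaviour by hand.

```python
# test_pricing.py - minimal, hypothetical automated test (names are illustrative).
# A pipeline can run this on every change, replacing a manual verification step.
import unittest


def apply_discount(price, percent):
    """Toy business function used only to demonstrate automated testing."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class TestApplyDiscount(unittest.TestCase):
    def test_regular_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)


if __name__ == "__main__":
    unittest.main()  # the exit code signals pass/fail to the pipeline
```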
4) Plan
DevOps uses the Agile methodology to plan development. With the operations and development teams in sync, work can be organized and planned accordingly to increase productivity.
5) Monitor
Continuous monitoring is used to identify any risk of failure. It also helps in tracking the system accurately so that the health of the application can be checked. Monitoring becomes easier with services whose log data can be monitored through third-party tools such as Splunk.
6) Deploy
Many systems support schedulers for automated deployment. A cloud management platform enables users to capture accurate insights and view the optimization scenario and analytics on trends through deployment dashboards.
7) Operate
DevOps changes the traditional approach of developing and testing separately. The teams operate in a collaborative way where both teams actively participate throughout the service lifecycle. The operations team interacts with developers, and together they come up with a monitoring plan which serves the IT and business requirements.
8) Release
Deployment to an environment can be done by automation, but when the deployment is made to the production environment, it is done by manual triggering. Many release-management processes keep production deployment manual in order to lessen the impact on customers.
1. Automation
Automation most effectively reduces time consumption, specifically during the testing and deployment phases. Productivity increases and releases are made quicker through automation, with fewer issues, since tests are executed more rigorously. This leads to catching bugs sooner so that they can be fixed more easily. For continuous delivery, each code change goes through automated tests and cloud-based builds, which promotes production through automated deploys.
2. Collaboration
The Development and Operations teams collaborate as a single DevOps team, which improves the cultural model as the teams become more effective and productive, strengthening accountability and ownership. The teams share their responsibilities and work closely in sync, which in turn makes deployment to production faster.
3. Integration
Applications need to be integrated with other components in the environment. The integration phase is where the existing code is integrated with new functionality and then testing takes place. Continuous integration and testing enable continuous development. The frequency of releases and of micro-services leads to significant operational challenges. To overcome such challenges, continuous integration and delivery are implemented to deliver in a quicker, safer, and more reliable manner.
4. Configuration Management
This ensures that the application only interacts with the resources concerned with the environment in which it runs. Configuration files are created in which the configuration external to the application is separated from the source code. The configuration file can be written at deployment time, or it can be loaded at run time depending on the environment in which the application is running.
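A minimal sketch of this idea in Python, assuming hypothetical APP_ENV and DATABASE_URL settings that live outside the source code and are supplied per environment:

```python
# config.py - illustrative only: configuration kept external to the application code.
# The setting names (APP_ENV, DATABASE_URL) and file layout are assumptions.
import json
import os


def load_config():
    """Read settings from the environment, falling back to a per-environment file."""
    env = os.environ.get("APP_ENV", "development")
    config = {"env": env}

    # Values injected at deployment time (for example by the pipeline) win ...
    if "DATABASE_URL" in os.environ:
        config["database_url"] = os.environ["DATABASE_URL"]
    else:
        # ... otherwise fall back to a file shipped for that environment.
        with open(f"config.{env}.json") as fh:
            config.update(json.load(fh))
    return config


if __name__ == "__main__":
    print(load_config())
```

The same source code can then run unchanged in test and production, with only the external configuration differing between environments.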
DevOps Orchestration
DevOps orchestration is a logical and necessary step for any DevOps shop that is in the
process of, or has completed, implementing automation. Organizations generally start with a
local solution and then, after achieving success, orchestrate their best practices through a
technology that unifies connectivity into one solid process. That's because automation can only go so far in maximizing efficiency, so orchestration in DevOps is needed if you want to take your releases to the next level.
To illustrate this concept, I'm going to discuss DevOps orchestration in general and how
practices like DevOps provisioning orchestration and DevOps release orchestration can apply
both on-site and in the cloud.
DevOps automation is a process by which a single, repeatable task, such as launching an app
or changing a database entry, is made capable of running without human intervention, both
on PCs and in the cloud. In comparison, DevOps orchestration is the automation of numerous
tasks that run at the same time in a way that minimizes production issues and time to market.
Automation applies to functions that are common to one area, such as launching a web
server, or integrating a web app, or changing a database entry. But when all of these functions
must work together, DevOps orchestration is required.
DevOps orchestration is not simply putting separate tasks together; it does much more than that. It streamlines the entire workflow by centralizing all tools used across teams, along with their data, to keep track of process and completion status throughout.
DevOps Applications
1. Application of DevOps in the Online Financial Trading Company
In this online financial trading company, the methodology for testing, building, and development was automated. Using DevOps, deployment was done within 45 seconds, whereas these deployments used to take the employees long nights and weekends. The time of the overall process was reduced, and client interest increased.
5. Application to GM Financial
Regression testing time was reduced by 93%, which in turn reduced the loan funding period by a factor of five.
1) Jenkins
This tool uses Java with plugins, which help in enhancing Continuous Integration. Jenkins is widely popular, with more than 1 million users, so you also get access to a thriving and helpful community of developers.
2) Git
It is a version control system that lets teams collaborate on a project at the same time. Developers can track changes in their files and improve the product regularly. Git is widely popular among tech companies, and many of them consider it a must-have for their tech professionals.
You can save different versions of your code with Git. You can use GitHub for hosting repositories as well. GitHub allows you to connect Slack with your ongoing projects so your team can easily communicate regarding the project.
3) Bamboo
Bamboo is similar to Jenkins in that it helps you automate your delivery pipeline. The difference is the price: Jenkins is free, but Bamboo is not. Is Bamboo worth paying for? Well, Bamboo has many functionalities which are set up beforehand; with Jenkins, you would have had to build those functionalities yourself, and that takes a lot of effort and time. Bamboo also doesn't require you to use many plugins because it can do those tasks itself. It has a great UI, and it integrates with BitBucket and many other Atlassian products.
4) Kubernetes
Kubernetes deserves a place on this DevOps tools list for obvious reasons. First, it is a fantastic
container orchestration platform. Second, it has taken the industry by storm.
When you have many containers to take care of, scaling the tasks becomes immensely
challenging. Kubernetes helps you in solving that problem by automating the management of
your containers.
It is an open-source platform, so you don't have to worry about access problems, and Kubernetes can let you scale your containers without increasing your team. You can use public cloud infrastructure or hybrid infrastructure to your advantage. The tool can also self-heal containers, which means it can restart failed containers, kill unresponsive containers, and replace them.
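As a purely conceptual sketch (this is not Kubernetes code), self-healing boils down to a reconciliation loop that compares the desired state with the observed state and restarts whatever is missing; all names below are invented for the illustration.

```python
# Conceptual sketch only - NOT Kubernetes code. It illustrates the reconciliation
# idea behind self-healing: compare desired state with observed state and restart
# whatever is missing.
desired_replicas = 3
running = {"web-1", "web-2", "web-3"}   # pretend this is what the runtime reports


def start_container(name: str) -> None:
    print(f"restarting {name}")
    running.add(name)


def reconcile() -> None:
    """Bring the observed state back to the desired number of replicas."""
    for i in range(1, desired_replicas + 1):
        name = f"web-{i}"
        if name not in running:
            start_container(name)


if __name__ == "__main__":
    running.discard("web-2")   # simulate a crashed container
    reconcile()                # an orchestrator runs a loop like this continuously
    print("running:", sorted(running))
```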
5) Vagrant
You can build and manage virtual machine environments with Vagrant, and it lets you do all that in a single workflow. You can use it on Mac, Windows, and Linux.
It provides you with an ideal development environment for better productivity and efficiency. It can easily integrate with multiple kinds of IDEs and configuration management tools such as Salt, Chef, and Ansible.
Because it works on local systems, your team members won't have to give up their existing technologies or operating systems. Vagrant's enhanced development environments certainly make DevOps easier for your team. That's why we have kept it in our DevOps tools list.
6) Prometheus
Prometheus is an open-source service monitoring system, which you can use for free. It has
multiple custom libraries you can implement quickly.
It identifies time series through metric names and key-value pairs. You can use its different
modes for data visualization as well. Because of its functional sharding and federation, scaling
the projects is quite easy.
It also enables multiple integrations from different platforms, such as Docker and StatsD. It
supports more than ten languages. Overall, you can easily say it is among the top DevOps tools
because of its utility.
7) Splunk
Splunk makes machine data more accessible and valuable. It enables your organization to use
the available data in a better fashion. With its help, you can easily monitor and analyze the
available data and act accordingly. Splunk also lets you get a unified look at all the IT data
present in your enterprise.
You can also deliver insights by using augmented reality and mobile devices with the help of Splunk. From security to IT, Splunk finds uses in many areas. It is one of the best automation tools for DevOps because of the valuable insights it provides to the user. You can use Splunk in numerous ways according to your organization's requirements.
Some companies also use Splunk for business analytics and IoT analytics. The point is, you
can use this tool for finding valuable data insights for all the sections of your organization and
use them better.
8) Sumologic
Sumologic is a popular CI platform for DevSecOps. It enables organizations to develop and
secure their applications on the cloud. It can detect Indicators of Compromise quickly, which
lets you investigate and resolve the threat faster.
Its real-time analytics platform helps organizations use data for predictive analysis. For monitoring and securing your cloud applications, Sumologic is a strong choice: thanks to the power of the elastic cloud, you can scale it infinitely (in theory).
A DevOps pipeline is a set of practices that the development (Dev) and operations (Ops) teams
implement to build, test, and deploy software faster and easier. One of the primary purposes of
a pipeline is to keep the software development process organized and focused.
The term "pipeline" might be a bit misleading, though. An assembly line in a car factory might
be a more appropriate analogy since software development is a continuous cycle.
Before the manufacturer releases the car to the public, it must pass through numerous assembly
stages, tests, and quality checks. Workers have to build the chassis, add the motor, wheels,
doors, electronics, and a finishing paint job to make it appealing to customers.
From this simplified explanation, you can conclude that a DevOps pipeline consists of the
build, test, and deploy stages.
Ensuring that the code moves from one stage to the next seamlessly requires implementing several DevOps strategies and practices. The most important among them are continuous integration and continuous delivery (CI/CD).
Continuous Integration
Continuous integration (CI) is a method of integrating small chunks of code from multiple
developers into a shared code repository as often as possible. With a CI strategy, you can
automatically test the code for errors without having to wait on other team members to
contribute their code.
One of the key benefits of CI is that it helps large teams prevent what is known as integration
hell.
In the early days of software development, developers had to wait for a long time to submit
their code. That delay significantly increased the risk of code-integration conflicts and the
deployment of bad code. As opposed to the old way of doing things, CI encourages developers
to submit their code daily. As a result, they can catch errors faster and, ultimately, spend less
time fixing them.
At the heart of CI is a central source control system. Its primary purpose is to help teams
organize their code, track changes, and enable automated testing.
In a typical CI set-up, whenever a developer pushes new code to the shared code repository,
automation kicks in to compile the new and existing code into a build. If the build process fails,
developers get an alert which informs them which lines of code need to be reworked.
Making sure only quality code passes through the pipeline is of paramount importance.
Therefore, the entire process is repeated every time someone submits new code to the shared
repository.
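As a rough sketch of that idea, and not any particular CI product's configuration, the check triggered on every push can be as simple as a script that builds the code, runs the test suite, and signals failure through its exit code; the commands below are assumptions for the example.

```python
# ci_check.py - illustrative sketch of the check a CI server runs on every push.
# The build and test commands are assumptions; a real project would call its own tools.
import subprocess
import sys

STEPS = [
    ["python", "-m", "compileall", "src"],                    # "build": code must at least compile
    ["python", "-m", "unittest", "discover", "-s", "tests"],  # run the automated test suite
]


def main() -> int:
    for cmd in STEPS:
        print("running:", " ".join(cmd))
        result = subprocess.run(cmd)
        if result.returncode != 0:
            # A non-zero exit code is what marks the build as failed and
            # triggers the alert to the developer who pushed the change.
            print("step failed, rejecting this build")
            return result.returncode
    print("all steps passed, build can move down the pipeline")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```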
Continuous Delivery
Continuous delivery (CD) is an extension of CI. It involves speeding up the release process by
encouraging developers to release code to production in incremental chunks.
Having passed the CI stage, the code build moves to a holding area. At this point in the pipeline,
it's up to you to decide whether to push the build to production or hold it for further evaluation.
In a typical DevOps scenario, developers first push their code into a production-like
environment to assess how it behaves. However, the new build can also go live right away, and
developers can deploy it at any time with a push of a button.
To take full advantage of continuous delivery, deploy code updates as often as possible. The
release frequency depends on the workflow, but it's usually daily, weekly, or monthly.
Releasing code in smaller chunks is much easier to troubleshoot compared to releasing all
changes at once. As a result, you avoid bottlenecks and merge conflicts, thus maintaining a
steady, continuous integration pipeline flow.
Continuous Deployment
Continuous delivery and continuous deployment are similar in many ways, but there are critical
differences between the two.
While continuous delivery enables development teams to deploy software, features, and code
updates manually, continuous deployment is all about automating the entire release cycle.
At the continuous deployment stage, code updates are released automatically to the end-user
without any manual interventions. However, implementing an automated release strategy can
be dangerous. If it fails to mitigate all errors detected along the way, bad code will get deployed
to production. In the worst-case scenario, this may cause the application to break or users to
experience downtime.
Automated deployments should only be used when releasing minor code updates. In case
something goes wrong, you can roll back the changes without causing the app to malfunction.
Leveraging the full potential of continuous deployment requires robust testing frameworks that ensure the new code is truly error-free and ready to be immediately deployed to production.
Continuous Testing
Continuous testing is a practice of running tests as often as possible at every stage of the
development process to detect issues before reaching the production environment.
Implementing a continuous testing strategy allows quick evaluation of the business risks of
specific release candidates in the delivery pipeline.
The scope of testing should cover both functional and non-functional tests. This includes running unit, system, and integration tests, as well as tests that deal with the security and performance aspects of an app and its server infrastructure.
Continuous testing encompasses a broader sense of quality control that includes risk
assessment and compliance with internal policies.
Continuous Operations
To reap the benefits of continuous operations, you need to have a robust automation and
orchestration architecture that can handle continuous performance monitoring of servers,
databases, containers, networks, services, and applications.
There are no fixed rules as to how you should structure the pipeline. DevOps teams add and
remove certain stages depending on their specific workflows. Still, four core stages make up
almost every pipeline: develop, build, test, and deploy.
That set-up can be extended by adding two more stages - plan and monitor - since they are
also quite common in professional DevOps environments.
Plan
The planning stage involves planning out the entire workflow before developers start coding.
In this stage, product managers and project managers play an essential role. It's their job to
create a development roadmap that will guide the whole team along the process.
After gathering feedback and relevant information from users and stakeholders, the work is
broken down into a list of tasks. By segmenting the project into smaller, manageable chunks,
teams can deliver results faster, resolve issues on the spot, and adapt to sudden changes easier.
In a DevOps environment, teams work in sprints - a shorter period of time (usually two weeks
long) during which individual team members work on their assigned tasks.
Develop
In the Develop stage, developers start coding. Depending on the programming language, developers install the appropriate IDEs (Python IDEs, Java IDEs, etc.), code editors, and other technologies on their local machines for maximum productivity.
In most cases, developers have to follow certain coding styles and standards to ensure a uniform
coding pattern. This makes it easier for any team member to read and understand the code.
When developers are ready to submit their code, they make a pull request to the shared source
code repository. Team members can then manually review the newly submitted code and merge
it with the master branch by approving the initial pull request.
Build
The build phase of a DevOps pipeline is crucial because it allows developers to detect errors
in the code before they make their way down the pipeline and cause a major disaster.
After the newly written code has been merged with the shared repository, developers run a
series of automated tests. In a typical scenario, the pull request initiates an automated process
that compiles the code into a build - a deployable package or an executable.
Keep in mind that some programming languages don't need to be compiled. For example,
applications written in Java and C need to be compiled to run, while those written
in PHP and Python do not.
If there is a problem with the code, the build fails, and the developer is notified of the issues.
If that happens, the initial pull request also fails.
Developers repeat this process every time they submit to the shared repository to ensure only
error-free code continues down the pipeline.
Test
If the build is successful, it moves to the testing phase. There, developers run manual and
automated tests to validate the integrity of the code further.
In most cases, a User Acceptance Test is performed. People interact with the app as the end-
user to determine if the code requires additional changes before sending it to production. At
this stage, it's also common to perform security, performance, and load testing.
Deploy
When the build reaches the Deploy stage, the software is ready to be pushed to production. An
automated deployment method is used if the code only needs minor changes. However, if the
application has gone through a major overhaul, the build is first deployed to a production-like
environment to monitor how the newly added code will behave.
Implementing a blue-green deployment strategy is also common when releasing significant
updates.
A blue-green deployment means having two identical production environments where one
environment hosts the current application while the other hosts the updated version. To release
the changes to the end-user, developers can simply forward all requests to the appropriate
servers. If there are problems, developers can simply revert to the previous production
environment without causing service disruptions.
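The switching idea can be illustrated with a toy sketch; the environment names and the routing function below are hypothetical and stand in for whatever load balancer or router actually forwards the requests.

```python
# blue_green.py - toy illustration of blue-green switching, not a real deployment tool.
environments = {
    "blue": "current production version",
    "green": "newly deployed version",
}

active = "blue"  # all user requests currently go to the blue environment


def route_request(path: str) -> str:
    """Forward an incoming request to whichever environment is active."""
    return f"{path} handled by {active} ({environments[active]})"


def switch_to(target: str) -> None:
    """Release: point all traffic at the other environment in one step."""
    global active
    active = target


if __name__ == "__main__":
    print(route_request("/checkout"))   # served by blue
    switch_to("green")                  # cut over to the new version
    print(route_request("/checkout"))   # served by green
    switch_to("blue")                   # rollback is just switching back
```

Rolling back is simply switching the active pointer back, which is why blue-green releases avoid service disruptions.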
Monitor
At this final stage in the DevOps pipeline, operations teams are hard at work continuously
monitoring the infrastructure, systems, and applications to make sure everything is running
smoothly. They collect valuable data from logs, analytics, and monitoring systems as well as
feedback from users to uncover any performance issues.
Feedback gathered at the Monitor stage is used to improve the overall efficiency of the DevOps
pipeline. It's good practice to tweak the pipeline after each release cycle to eliminate potential
bottlenecks or other issues that might hinder productivity.
Now that you have a better understanding of what a DevOps pipeline is and how it works, let's explore the steps required to create a CI/CD pipeline.
Before you and the team start building and deploying code, decide where to store the source
code. GitHub is by far the most popular code-hosting website. GitLab and BitBucket are
powerful alternatives.
To start using GitHub, open a free account, and create a shared repository. To push code to
GitHub, first install Git on the local machine. Once you finish writing the code, push it to the
shared source code repository. If multiple developers are working on the same project, other
team members usually manually review the new code before merging it with the master branch.
Once the code is on GitHub, the next step is to test it. Running tests against the code helps
prevent errors, bugs, or typos from being deployed to users.
Numerous tests can determine if the code is production-ready. Deciding which analyses to run
depends on the scope of the project and the programming languages used to run the app.
Two of the most popular solutions for creating builds are Jenkins and Travis-CI. Jenkins is
completely free and open-source, while Travis-CI is a hosted solution that is also free but only
for open-source projects.
To start running tests, install Jenkins on a server and connect it to the GitHub repository. You
can then configure Jenkins to run every time changes are made to the code in the shared
repository. It compiles the code and creates a build. During the build process, Jenkins
automatically alerts if it encounters any issues.
For example, you would run unit tests before functional tests, since functional tests usually take more time to complete. If the build passes the testing phase with flying colors, you can deploy the code to production or to a production-like environment for further evaluation.
Deploy to Production
Before deploying the code to production, first set up the server infrastructure. For instance, for
deploying a web app, you need to install a web server like Apache. Assuming the app will be
running in the cloud, you'll most likely deploy it to a virtual machine.
For apps that require the full processing potential of the physical hardware, you can deploy to
dedicated servers or bare metal cloud servers.
There are two ways to deploy an app: manually or automatically. At first, it is best to deploy code manually to get a feel for the deployment process. Later, automation can speed up the process, but only if you are confident there are barriers that will stop bad code from ending up in production.
Develop
Developing is the stage where the ideas from planning are executed into code. The ideas come
to life as a product. This stage requires software configuration management, repository
management and build tools, and automated Continuous Integration tools for incorporating this
stage with the following ones.
Test
A crucial part that examines the product and service and makes sure they work in real time and
under different conditions (even extreme ones, sometimes). This stage requires many different
kinds of tests, mainly functional tests, performance or load tests, and service virtualization tests.
It‟s also important to test compatibility and integrations with third-party services. The data
from the tests needs to be managed and analyzed in rich reports for improving the product
according to test results.
Release
Once a stage that stood out on its own and caused many a night with no sleep for developers,
now the release stage is becoming agile and integrating with the Continuous Delivery process.
Therefore, the discussion of this part can't revolve only around tools; it needs to cover methodologies as well. Regarding tools, this stage requires deployment tools.
Operate
We now have a working product, but how can we maximize the features we've planned, developed, tested, and released? This is what this stage is for. A big part of it is implementing the best UX, monitoring infrastructure with APMs and aggregators, and analyzing Business Intelligence (BI). This stage ensures our users get the most out of the product and can use it error-free.
Obviously, this work cycle isn't one-directional. We might use tools from a certain stage, move on to the next, go back a stage, jump ahead two stages, and so on. Essentially, it all comes down to a feedback loop. You plan and develop. The test fails, so you develop again. The test passes, you release it, and you get information about customer satisfaction through measurement tools like Google Analytics or A/B testing. Then, you re-discuss the same feature to get better satisfaction out of the product, develop it again, etc. The most important part is that you cover all stages, as we will do in the upcoming weeks.
Adopting DevOps offers a cultural change in the workforce by enabling engineers to cross the
barrier between the development teams and operations teams.
Progressive Collaboration
DevOps promises to bridge the gap between the two where both employ bottom-up and top-
down feedback from each other. With DevOps, when development seeks operational help or
when operations require immediate development, both remain ready for each other at any given
time. In such a scenario, the software development culture brings in to focus combined
development instead of individual goals; The development environment becomes more
progressive as all the team members work in cohesion towards a common goal.
Processing Acceleration
With conjoined operational and developmental paradigms, the communication lag between the
two is reduced to null. Organizations continuously strive for a better edge over their competing
rivals, and if such acceleration is not achieved, the organization will have to succumb to
competing forces— innovation will be slower, and the product market will decay.
Shorter Recovery Time
DevOps deployment works on a more focused and exclusive approach, which makes issues easier to spot; this makes error rectification faster and easier to implement. The resolution of problems is inherently quicker, as troubleshooting takes place at the current development level only, within a single team. Thus, the overall time for recovery and rectification is drastically reduced.
Lower Failure Rate
The bridged departments yield shorter development cycles, which result in rapid production. The entire process becomes modular, wherein issues related to configuration, application code, and infrastructure become more apparent and accessible early. A decrease in error count also positively affects the success rate of development. Therefore, very few fixes will be required to attain fully functional code for the desired output.
Higher Job Satisfaction
DevOps fosters equality by bringing different officials at the same level of interaction.
DevOps serves as a handy tool for achieving that feat; it enables the workforce to work in
cohesion where chances for failure are minimal, and production is rapid. As a result, the
processing becomes efficient and workspace more promising.
The DevOps adoption requires focus on the People, Process and Technology aspects.
Technology challenges
Lack of automation in the software development lifecycle, and hence loss of quality due to error-prone repetition of steps (e.g., tests)
Defects generated due to inconsistent environments for testing and deployment
Delays in testing due to infrastructure unavailability
Brittle point-to-point integration between modules
Aligning Capabilities:
Business perspectives:
Rapid prototyping
For rapid and iterative design and delivery of software
It involves using prototyping tools and collaborative review by stakeholders and
refinement based on feedback
Benefits - Provides a feel of the product early, room for customization, saves cost and
time (if tools are used) and minimizes design flaws
Be ready to adopt an Agile approach and collaborate with the development teams for faster development and delivery of valuable software
Adopt lean practices and eliminate wasteful processes, documentation, and the like
Be ready to allow iterations to happen and provide early feedback
Ensure requirements are available and clear on time
Alleviated issues
Lack of automation in the software development lifecycle and hence loss of quality due to error-prone repetition of steps - in this case, automation of acceptance tests
Well-defined and automated acceptance criteria and clarity on requirements
Continuous integration practice is mandatory to ensure that defects are captured early and a working version of the software is always available, by automating the different development stages through integration of tools. This is a must-have to ensure continuous, frequent, and automated delivery and deployment of software to customers.
Continuous integration is a software development practice adopted as part of Extreme Programming (XP). The image below shows the activities in the development phase that need to be automated to form the Continuous Integration pipeline. It helps in automating the build process, enabling frequent integration, code quality checks, and unit testing without any manual intervention, by use of various open-source, custom-built and/or licensed tools. The sequence, and whether an activity will be done at all, is decided by the orchestrator, i.e. the continuous integration tool, based on the gating criteria set for the project.
Benefits
o Reduced risks and fewer integration defects
o Helps detect bugs and remove them faster
o Less integration time due to automation
o Avoids cumulative bugs due to frequent integration
o Helps in frequent deployment
If Dev teams align with the capabilities mentioned, the following issues faced by "Project" would get alleviated.
Issues
Lack of automation in the software development lifecycle and hence loss of quality due to error-prone repetition of steps - here, the automation of activities in the build cycle
Brittle point-to-point integration between modules - resolved due to continuous integration of modules
In the automated development and delivery pipeline, the following tests and their management are mandatorily automated. These tests are invoked in an automated fashion by the orchestrator, i.e. the continuous integration tool. The testing-related automation required covers:
Functional test
Test and Test data management
Performance test
Security test
Benefits
o Increases the depth and scope of test coverage which helps in improving
software quality
o Ensures repeatability of running tests whenever required and helps in
continuous integration of valuable software
Service virtualization
In traditional software development, testing starts after the integration of all the components that are needed; for example, performance testing is delayed and testers may skip it for want of time. This leads to defects being found late in the cycle, when they are costly to fix, and it also impacts the delivery speed. Hence service virtualization is required to "shift left" and detect defects early in the cycle (say, during unit testing) by simulating the non-available components of the application (a minimal stub is sketched after the benefits list below).
Service virtualization helps in simulating application dependencies so testing can begin earlier. Virtual components can be data, business rules, I/O configurations, etc.
o Features
Are lightweight, and hence testing is inexpensive. Example: if we have a legacy system on top of which business logic enhancements are done, setting up the legacy system every time for testing is cumbersome and costly
Creates a virtual asset which simulates the behaviour of the components
which are impossible to access or unavailable
Components can be deployed in virtual environments
Created by recording of the live communication among the components
that are used for testing
Provide logs representing the communication among the components
Analysis of service interface specifications like WSDL
Assets listen for requests and return an appropriate response. Example:
Listen to an SQL statement and return the source rows from the database
as per the query
Benefits
o Reduce cost of fixing defects
o Decreases project risk
o Improves the speed of delivery
o Helps emulate the unavailable components/environments and represents a more
realistic behaviour (stubs/mocks help in skipping unavailable components)
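As a minimal illustration of such a virtual asset, the sketch below stands in for a hypothetical, unavailable pricing service by listening for requests and returning canned responses; the endpoint and payload are invented for the example (a real virtual asset would typically be generated by recording live traffic between the components).

```python
# virtual_pricing_service.py - illustrative stub of an unavailable dependency.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_RESPONSES = {
    "/price/ABC123": {"sku": "ABC123", "price": 42.50, "currency": "USD"},
}


class VirtualServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Listen for a request and return the appropriate recorded response,
        # just like the real (but unavailable) component would.
        body = CANNED_RESPONSES.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)


if __name__ == "__main__":
    # Tests point their pricing-service URL at http://localhost:8099 instead of
    # the real system, so they can run before that system is available.
    HTTPServer(("localhost", 8099), VirtualServiceHandler).serve_forever()
```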
CD automation
Release management
Database deploy
The various steps that we saw earlier (including environment provisioning, testing,
deployments etc.) need an infrastructure to be in place for implementation. This layer
is responsible for infrastructure management. The environment management tools help
in this
The infrastructure layer can be managed by tools like Chef, which help spin up virtual machines, keep them in sync, and make changes across multiple servers. The virtual machines mimic servers, including the complete operating system, drivers, binaries, etc. They run on top of a hypervisor, which in turn runs on top of another operating system.
Benefits - Provides the necessary hardware for automated deployments and the
environment management tools help in managing and maintaining them
Containerization
A virtual machine, as seen in the earlier section, has its own operating system, so precious operating system resources are wasted across virtual machines. In order to ensure that workloads share the same resources, containerization is required.
It allows containers to share a single host operating system and the relevant binaries, drivers, etc. This is called operating-system-level virtualization.
Benefits
o Containers are smaller in size, easier to migrate, and require less memory
o Allows a server to host multiple containers instead of virtual machines being
spun
Incident management
With the large-scale explosion of data centers and virtualization, the scale and fragmentation of IT alerts have increased dramatically. Hence the manual way of resolving alerts, such as constantly filtering through noisy alerts, connecting them to identify the bigger issue, prioritizing and escalating them to the concerned teams, and manually managing the alerts, should be avoided.
Centralized incident management solutions avoid redundant alerts. They combine all the monitoring systems and provide an easy tracking mechanism by which support teams can respond.
Benefits
o Helps support teams respond to alerts quickly and easily
o Since the automated pipelines may have several tools and layers, incident
management tools help centralize the alerts and hence faster responses to them
Support analytics
Faster release cycles demand automated deployment to get applications out faster, and they demand discovering and diagnosing production issues by gaining insight quickly through actionable analytics. Focusing on business metrics is important in DevOps environments. Deriving these metrics, and the data needed to meet the key performance indicators, becomes essential; hence the need for support analytics.
Tools for support analytics do a deep search of the data, perform centralized logging and parsing, and display the data in a neat way.
Benefits - These tools help in collaboration across teams and provide exactly what is
happening to the business from the data that is stored and logged
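A small sketch of the kind of centralized log parsing these tools perform, assuming a made-up access-log format; it turns raw log lines into a simple business-facing metric (error rate per service):

```python
# support_analytics.py - toy log parser; the log format below is an assumption.
# Real support-analytics tools do the same thing at much larger scale.
from collections import Counter

LOG_LINES = [
    "2024-01-10T10:00:01 checkout 200",
    "2024-01-10T10:00:02 checkout 500",
    "2024-01-10T10:00:03 payments 200",
    "2024-01-10T10:00:04 payments 200",
]


def error_rate_per_service(lines):
    """Parse '<timestamp> <service> <status>' lines and compute error rate per service."""
    total, errors = Counter(), Counter()
    for line in lines:
        _, service, status = line.split()
        total[service] += 1
        if status.startswith("5"):
            errors[service] += 1
    return {svc: errors[svc] / total[svc] for svc in total}


if __name__ == "__main__":
    for service, rate in error_rate_per_service(LOG_LINES).items():
        print(f"{service}: {rate:.0%} error rate")
```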
Monitoring dashboard
In order to measure the success of DevOps adoption and also measure the health of the
pipelines, monitoring dashboards are required
A dashboard provides a complete view of the pipeline. The dashboard can be based on
different perspectives. Some examples -
o Business performance dashboard – May depict the revenue, speed of
deployment, defect status etc. This can be for both technical and non-technical
teams
o End user dashboard – may provide code and API specific metrics like error
rates, pipeline status etc.
Benefits - Single point where teams get visibility of the DevOps implementation
Now that the capabilities are understood, let us look at how to use tools to actionize the capabilities in the team.
The aim of choosing the tool stack is to build an automated pipeline using tools for performing the various software development, testing, deployment, and release activities. This helps in creating rapid, reliable, and repeated releases of working software with low risk and minimal manual overhead. Here are the principles to be considered while choosing tools.
Principles to be considered
The tool stacks are evolving and there are many vendors in this area.
Practical tips
The DevOps and Lean coach suggests using the Java and open source stack for the system
development at "Project" for the reasons mentioned below.
Reasons
The tools involved in this stack are primarily open source, free and powerful
The project is an application development project using Java stack
Quick availability of these tools and no overhead of maintenance of licenses
The team is advised to get OSS (Open Source Software) compliance clearance for the open source tools prior to installation. OSS compliance refers to compliance in terms of using approved and supported source code. There should be a policy and process to check usage, purchase (as not all open source software is free), management, and compliance (some tools can be used for training but require payment when used commercially). Tools like Black Duck help in checking OSS compliance.
People Aspects
The DevOps & Lean coach now elaborates on the people models available and the parameters to be considered for choosing them.
Agile software development has broken down some of the isolation between requirements,
analysis, development and testing teams. The objective of DevOps is to remove the silos
between development (including testing) and operations teams and bring about collaboration
between the teams.
However, since there are separate teams, and also because a team may have niche skills rather than skills across the software development lifecycle, there could be a phased approach to creating a pure DevOps team. Here are some of the possible team structures.
Key issue: Lack of collaboration between the teams as they are in silos.
However, it may not be possible to merge the teams, so the key is to improve the collaboration
through common interventions.
Salient Features
Teams keep separate backlogs but take each other’s stories in their backlogs
Ops team gets knowledge about upcoming features, major design changes, possible
impact on production
Dev team understands what causes outages/ defects better, improves Dev processes to
reduce impact (e.g. specific logging, perf testing for a cycle)
Dev team improves dev processes over time by understanding Ops defects/outages
better
When a pure DevOps team cannot be constructed, a model closer to the pure DevOps team can be constructed.
Salient Features
When speed is increased, deployments are faster. Then teams realize that support service levels
start dropping. That is when teams understand the importance of collaboration between
development and ops teams.
Salient features
Embedded team can be created by hiring people with blended skills or cross-training/on
the job learning by Dev & Ops teams for each other’s skills
Team has single backlog with both Dev & Ops tasks
Each team member is capable of selecting any item and working on it
Process Aspects:
Delays due to formal knowledge transfer from Dev to Ops for every release
Tedious Change Management process requiring lots of approvals
Complex Release Management with manual checks impacts the operational efficiency
These challenges led to various issues. The coach suggests that the development and operations teams follow a unified process.
When teams are merged, which process to follow gains more significance.
Process model
Limitations
The development and operations teams may be following an Agile approach, say Scrum (by the development team) and Kanban (by the operations team). Here is how the process can be fine-tuned for DevOps adoption.
Process model
One group does Scrum and the other Kanban as ONE team
Two different product backlogs (PB), but single PO
Dev team works on user stories and Ops works on high priority Kanban PB
Any inter-dependent work items are prioritized by PO to resolve dependencies on time.
Daily standup by both teams
Limitations
Process model
The development team at "Project" follows Scrum and the operations team adopts ITIL. The team starts with the first model and then moves towards a unified process at "Project" for the reasons stated below.
Reasons
The development and operations teams have been separate at "Project". The team
structure is optimized with this arrangement currently and hence it is difficult to have a
unified process immediately.
Over time, as the groups start to become cross-skilled, a unified process can be adopted.