
DEVOPS MATERIAL (UNIT -2)

SYLLABUS (UNIT - II):


Software development models and DevOps:
DevOps Lifecycle for Business Agility, DevOps and Continuous Testing. DevOps influence on
Architecture: Introducing software architecture, The monolithic scenario, Architecture rules of
thumb, The separation of concerns, Handling database migrations, Microservices, and the data
tier, DevOps, architecture and resilience.

I ) Software Development Life Cycle models:

● Agile
● Lean
● Waterfall
● Iterative
● Spiral
● DevOps

Each of these approaches varies in some ways from the others, but all have a common purpose: to
help teams deliver high-quality software as quickly and cost-effectively as possible.

1. Agile

The Agile model first emerged in 2001 and has since become the de facto industry standard.
Some businesses value the Agile methodology so much that they apply it to other types of
projects, including nontech initiatives.

In the Agile model, fast failure is a good thing. This approach produces ongoing release cycles,
each featuring small, incremental changes from the previous release. At each iteration, the product
is tested. The Agile model helps teams identify and address small issues on projects before they
evolve into more significant problems, and it engages business stakeholders to give feedback
throughout the development process.

As part of their embrace of this methodology, many teams also apply an Agile framework known
as Scrum to help structure more complex development projects. Scrum teams work in sprints,
which usually last two to four weeks, to complete assigned tasks. Daily Scrum meetings help the
whole team monitor progress throughout the project. And the ScrumMaster is tasked with keeping
the team focused on its goal.


2. Lean

The Lean model for software development is inspired by "lean" manufacturing practices and
principles. The seven Lean principles are, in order: eliminate waste, amplify learning, decide as
late as possible, deliver as fast as possible, empower the team, build integrity in, and see the
whole.

The Lean process is about working only on what must be worked on at the time, so there’s no
room for multitasking. Project teams are also focused on finding opportunities to cut waste at
every turn throughout the SDLC process, from dropping unnecessary meetings to reducing
documentation.

The Agile model is actually a Lean method for the SDLC, but with some notable differences. One
is how each prioritizes customer satisfaction: Agile makes it the top priority from the outset,
creating a flexible process where project teams can respond quickly to stakeholder feedback
throughout the SDLC. Lean, meanwhile, emphasizes the elimination of waste as a way to create
more overall value for customers — which, in turn, helps to enhance satisfaction.

3. Waterfall

Some experts argue that the Waterfall model was never meant to be a process model for real
projects. Regardless, Waterfall is widely considered the oldest of the structured SDLC
methodologies. It’s also a very straightforward approach: finish one phase, then move on to the
next. No going back. Each stage relies on information from the previous stage and has its own
project plan.

The downside of Waterfall is its rigidity. Sure, it’s easy to understand and simple to manage. But
early delays can throw off the entire project timeline. With little room for revisions once a stage is
completed, problems can’t be fixed until you get to the maintenance stage. This model doesn’t
work well if flexibility is needed or if the project is long-term and ongoing.

Even more rigid is the related Verification and Validation model — or V-shaped model. This
linear development methodology sprang from the Waterfall approach. It’s characterized by a
corresponding testing phase for each development stage. Like Waterfall, each stage begins only
after the previous one has ended. This SDLC model can be useful, provided your project has no
unknown requirements.

4. Iterative

The Iterative model is repetition incarnate. Instead of starting with fully known requirements,
project teams implement a set of software requirements, then test, evaluate and pinpoint further


requirements. A new version of the software is produced with each phase, or iteration. Rinse and
repeat until the complete system is ready.

One advantage of the Iterative model over other common SDLC methodologies is that it produces a
working version of the project early in the process and makes it less expensive to implement
changes. One disadvantage: repetitive processes can consume resources quickly.

One example of an Iterative model is the Rational Unified Process (RUP), developed by IBM’s
Rational Software division. RUP is a process product, designed to enhance team productivity for
a wide range of projects and organizations.

RUP divides the development process into four phases:

● Inception, when the idea for a project is set
● Elaboration, when the project is further defined and resources are evaluated
● Construction, when the project is developed and completed
● Transition, when the product is released

Each phase of the project involves business modeling, analysis and design, implementation,
testing, and deployment.

5. Spiral

One of the most flexible SDLC methodologies, Spiral takes a cue from the Iterative model and its
repetition. The project passes through four phases (planning, risk analysis, engineering and
evaluation) over and over in a figurative spiral until completed, allowing for multiple rounds of
refinement.

The Spiral model is typically used for large projects. It enables development teams to build a
highly customized product and incorporate user feedback early on. Another benefit of this SDLC
model is risk management. Each iteration starts by looking ahead to potential risks and figuring
out how best to avoid or mitigate them.

6. DevOps

The DevOps methodology is a relative newcomer to the SDLC scene. It emerged from two
trends: the application of Agile and Lean practices to operations work, and the general shift in
business toward seeing the value of collaboration between development and operations staff at all
stages of the SDLC process.


In a DevOps model, Developers and Operations teams work together closely — and sometimes as
one team — to accelerate innovation and the deployment of higher-quality and more reliable
software products and functionalities. Updates to products are small but frequent. Discipline,
continuous feedback and process improvement, and automation of manual development processes
are all hallmarks of the DevOps model.

Amazon Web Services describes DevOps as the combination of cultural philosophies, practices,
and tools that increases an organization’s ability to deliver applications and services at high
velocity, evolving and improving products at a faster pace than organizations using traditional
software development and infrastructure management processes. So like many SDLC models,
DevOps is not only an approach to planning and executing work, but also a philosophy that
demands a nontraditional mindset in an organization.

Choosing the right SDLC methodology for your software development project requires careful
thought. But keep in mind that a model for planning and guiding your project is only one
ingredient for success. Even more important is assembling a solid team of skilled talent
committed to moving the project forward through every unexpected challenge or setback.

II) DevOps Lifecycle:


DevOps defines an agile relationship between development and operations. It is a process practiced
by the development team and operations engineers together, from the beginning of the product
through to its final stage.


Learning DevOps is not complete without understanding the DevOps lifecycle phases. The
DevOps lifecycle includes seven phases as given below:

1) Continuous Development

This phase involves the planning and coding of the software. The vision of the project is decided
during the planning phase. And the developers begin developing the code for the application.
There are no DevOps tools that are required for planning, but there are several tools for
maintaining the code.

This stage involves identifying business requirements and defining the scope of the project. It
includes collaboration between the development and operations teams to understand the technical
requirements and identify potential challenges.

2) Continuous Integration

This stage is the heart of the entire DevOps lifecycle. It is a software development practice in
which developers are required to commit changes to the source code frequently, often on a daily or
weekly basis. Every commit is then built, which allows early detection of problems if any are
present. Building the code involves not only compilation but also unit testing, integration
testing, code review, and packaging.

The code supporting new functionality is continuously integrated with the existing code.
Therefore, there is continuous development of software. The updated code needs to be integrated
continuously and smoothly with the systems to reflect changes to the end-users.


Jenkins is a popular tool used in this phase. Whenever there is a change in the Git repository,
Jenkins fetches the updated code and prepares a build of it, producing an executable artifact such
as a WAR or JAR file. This build is then forwarded to the test server or the production server.
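
As an illustration of this fetch-build-test cycle, the sketch below shows, in plain Python, the kind of polling loop that a CI server such as Jenkins automates. It is a simplified sketch: the repository path and the Maven build and test commands are hypothetical placeholders, not a real project configuration.

```python
import subprocess
import time

REPO_DIR = "/tmp/demo-repo"      # hypothetical local clone of the Git repository
BUILD_CMD = ["mvn", "package"]   # placeholder build step (e.g., produces a WAR/JAR)
TEST_CMD = ["mvn", "test"]       # placeholder unit/integration test step

def run(cmd):
    """Run a command inside the repository and return True on success."""
    return subprocess.run(cmd, cwd=REPO_DIR).returncode == 0

def poll_and_build(interval_seconds=60):
    """Naive CI loop: pull the latest code, then build and test each new commit."""
    last_commit = None
    while True:
        subprocess.run(["git", "pull"], cwd=REPO_DIR)
        head = subprocess.run(["git", "rev-parse", "HEAD"], cwd=REPO_DIR,
                              capture_output=True, text=True).stdout.strip()
        if head != last_commit:
            last_commit = head
            print(f"New commit {head[:8]}: building...")
            if run(BUILD_CMD) and run(TEST_CMD):
                print("Build and tests passed; artifact ready for the test server.")
            else:
                print("Build failed; notify the team so the defect is caught early.")
        time.sleep(interval_seconds)

if __name__ == "__main__":
    poll_and_build()
```

In practice, Jenkins layers pipelines, plugins, and distributed build agents on top of this basic poll-build-test loop.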

3) Continuous Testing

In this phase, the developed software is continuously tested for bugs. For continuous testing,
automation testing tools such as TestNG, JUnit, and Selenium are used. These tools allow QAs
to test multiple code bases thoroughly in parallel to ensure that there are no flaws in the
functionality. In this phase, Docker containers can be used to simulate the test environment.

Selenium performs the automation testing, and TestNG generates the reports. This entire testing
phase can be automated with the help of a continuous integration tool such as Jenkins.

Automated testing saves a great deal of time and effort compared with executing tests manually.
Apart from that, report generation is a big plus: the task of evaluating the test cases that
failed in a test suite gets simpler. Test-case execution can also be scheduled at predefined
times. After testing, the code is continuously integrated with the existing code.
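
The following is a minimal example of the kind of automated test that runs in this phase, written with Python's built-in unittest module for illustration (JUnit and TestNG play the same role for Java). The discount function is a made-up stand-in for real application code.

```python
import unittest

def discount(price: float, percent: float) -> float:
    """Hypothetical business function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_zero_discount(self):
        self.assertEqual(discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            discount(100.0, 150)

if __name__ == "__main__":
    # A CI tool such as Jenkins would run this suite after every commit
    # and publish a pass/fail report from the results.
    unittest.main()
```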

4) Continuous Monitoring

Monitoring is a phase that involves all the operational factors of the entire DevOps process, where
important information about the use of the software is recorded and carefully processed to find
out trends and identify problem areas. Usually, the monitoring is integrated within the operational
capabilities of the software application.

Monitoring output may take the form of documentation files or large-scale data about the
application's parameters while the application is in continuous use. System errors such as
"server not reachable" or low memory are resolved in this phase, which maintains the security and
availability of the service.
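
A minimal monitoring sketch, assuming each service exposes a hypothetical /health HTTP endpoint, might look like the following. Real deployments use dedicated monitoring tools, but the principle of polling services and raising alerts is the same.

```python
import time
import urllib.error
import urllib.request

SERVICES = {
    "web": "http://localhost:8080/health",   # hypothetical health endpoints
    "api": "http://localhost:9090/health",
}

def is_healthy(url, timeout=5):
    """Return True if the service answers its health endpoint with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def monitor(interval_seconds=30):
    """Poll every service and flag problem areas, e.g. 'server not reachable'."""
    while True:
        for name, url in SERVICES.items():
            if is_healthy(url):
                print(f"[OK]    {name}")
            else:
                print(f"[ALERT] {name} unreachable -- raise an incident")
        time.sleep(interval_seconds)

if __name__ == "__main__":
    monitor()
```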

5) Continuous Feedback

The application development is consistently improved by analyzing the results from the
operations of the software. This is carried out by placing the critical phase of constant feedback
between the operations and the development of the next version of the current software
application.

Continuity is the essential factor in DevOps: it removes the unnecessary steps otherwise required
to take a software application from development, through real use and issue discovery, to a better
version. Without that continuity, the efficiency the application could achieve is lost and the
number of interested customers shrinks.

6) Continuous Deployment

In this phase, the code is deployed to the production servers. It is also essential to ensure that
the code is deployed correctly and consistently on all the servers.

The new code is deployed continuously, and configuration management tools play an essential
role in executing tasks frequently and quickly. Here are some popular tools which are used in this
phase, such as Chef, Puppet, Ansible, and SaltStack.

Containerization tools also play an essential role in the deployment phase. Vagrant and Docker are
popular tools used for this purpose. They help to produce consistency across the development,
staging, testing, and production environments, and they also help in scaling instances up and down
smoothly.

Containerization tools help to maintain consistency across the environments where the application
is developed, tested, and deployed. Because they package and replicate the same dependencies and
packages used in the development, testing, and staging environments, there is little chance of
errors or failures in the production environment. Containerization also makes the application easy
to run on different computers.
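
To make the idea of controlled, consistent deployment concrete, here is a minimal rolling-deployment sketch in Python. It is an illustration only: the host names, the artifact name, and the scp/ssh commands are placeholder assumptions standing in for whatever a tool such as Ansible or Chef would actually run.

```python
import subprocess
import sys

SERVERS = ["app1.example.com", "app2.example.com", "app3.example.com"]  # hypothetical hosts
ARTIFACT = "app-1.4.2.jar"                                              # hypothetical build output

def deploy_to(host: str) -> bool:
    """Copy the artifact to one server and restart the service there.
    Both commands are placeholders for what a configuration tool would do."""
    copy = subprocess.run(["scp", ARTIFACT, f"deploy@{host}:/opt/app/"])
    if copy.returncode != 0:
        return False
    restart = subprocess.run(["ssh", f"deploy@{host}", "sudo systemctl restart app"])
    return restart.returncode == 0

def rolling_deploy():
    """Deploy to one server at a time so the others keep serving traffic."""
    for host in SERVERS:
        print(f"Deploying to {host}...")
        if not deploy_to(host):
            print(f"Deployment to {host} failed; stopping the rollout.")
            sys.exit(1)
        print(f"{host} updated.")
    print("All servers are running the new version consistently.")

if __name__ == "__main__":
    rolling_deploy()
```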


7) Continuous Operations

The DevOps lifecycle is an iterative process, and organizations continuously improve their
processes and tools to optimize the software delivery process. Continuous improvement is achieved
through collaboration, feedback, and continuous learning.

By implementing DevOps practices, organizations can improve their business agility by delivering
software faster, more frequently, and with higher quality. The DevOps lifecycle enables
organizations to respond to market changes quickly, iterate on their software, and continuously
improve their processes and tools to stay ahead of the competition.

III) DevOps and Continuous Testing:

Continuous testing is a key component of the DevOps approach. It involves the use of automated
testing tools and practices to test software throughout the development lifecycle, from development to
production. The goal is to provide fast and reliable feedback on the quality of the software, which
enables teams to catch defects early in the development process and reduce the risk of issues in
production.

DevOps and continuous testing work hand in hand to help organizations achieve faster and more
reliable software delivery. DevOps practices such as continuous integration (CI) and continuous
deployment (CD) enable teams to build, test, and deploy software automatically and continuously,
while continuous testing ensures that software is thoroughly tested at each stage of the process.

Continuous testing involves various types of automated tests, including unit tests, integration tests,
and end-to-end tests. These tests are automated and run frequently, providing rapid feedback on the
quality of the code. As a result, defects are caught early in the development process, reducing the time
and cost of fixing them.

Continuous testing also helps to improve the overall quality of the software. By ensuring that software
is thoroughly tested, teams can identify and address issues before they reach production, reducing the
risk of downtime or other issues that can impact the user experience.

In summary, DevOps and continuous testing are two closely related practices that work together to
enable faster, more reliable software delivery. By adopting these practices, organizations can improve
their agility, reduce the time and cost of software delivery, and improve the quality of their software.

IV) DevOps Influence on Architecture - Introducing Software Architecture:

Software architecture refers to the high-level design of a software system. It defines
the components, their interactions, and the overall structure of the system. The architecture provides a


roadmap for the development process and helps ensure that the system meets the requirements of the
stakeholders.

DevOps has a significant influence on software architecture. DevOps emphasizes collaboration,
automation, and continuous delivery, which require a different approach to software architecture.

Here are some ways in which DevOps influences software architecture:

Modularity: DevOps emphasizes modularity, where software is broken down into smaller
components. This approach requires a software architecture that is designed to support modularity,
such as microservices architecture. The architecture should also allow for the easy integration of new
components and the removal of obsolete ones.

Automation: DevOps relies heavily on automation, which means that the software architecture
should be designed with automation in mind. The architecture should be designed to facilitate the
automation of tasks such as build, testing, and deployment. Continuous integration and continuous
deployment (CI/CD) pipelines should be integrated into the architecture to enable automated testing,
build, and deployment of software.

Scalability: DevOps requires software architectures that are highly scalable and can handle frequent
updates and releases. The architecture should be designed to allow for horizontal scaling, where
additional instances of the application can be added to support increased demand. The architecture
should also be designed to support the rapid deployment of new features and updates.

Resilience: DevOps emphasizes the importance of resilience in software architecture. The
architecture should be designed to withstand failures, and the deployment should be designed with
redundancy to ensure that the system remains operational in the event of failures.

Security: Security is a critical concern in DevOps, and software architecture should be designed with
security in mind. Security considerations should be incorporated into the architecture from the start,
and security testing should be included in the automated testing process.

In summary, DevOps has a significant influence on software architecture. Software architectures
should be designed with modularity, automation, scalability, resilience, and security in mind to
support the DevOps principles of collaboration, automation, and continuous delivery.

Introducing software architecture


DevOps Model

The DevOps model goes through several phases governed by cross-discipline teams.
Those phases are as follows:


Planning, Identify, and Track: Using the latest in project management tools and agile practices,
track ideas and workflows visually. This gives all important stakeholders a clear pathway to
prioritization and better results. With better oversight, project managers can ensure teams are on
the right track and aware of potential obstacles and pitfalls. All applicable teams can better work
together to solve any problems in the development process.

Development Phase: Version control systems help developers continuously code, ensuring one
patch connects seamlessly with the master branch. Each complete feature triggers the developer to
submit a request that, if approved, allows the changes to replace existing code. Development is
ongoing.

Testing Phase: After a build is completed in development, it is sent to QA testing. Catching bugs
is important to the user experience, so in DevOps bug testing happens early and often. Practices like
continuous integration allow developers to use automation to build and test as a cornerstone of
continuous development.

Deployment Phase: In the deployment phase, most businesses strive to achieve continuous delivery,
in which releases are prepared automatically but the final push to production remains a manual
decision. After bugs have been detected and resolved and the user experience has been refined, a
designated team triggers that manual deployment. By contrast, continuous deployment is a DevOps
approach that automates deployment entirely once QA testing has been completed.

Management Phase: During the post-deployment management phase, organizations monitor and maintain
the DevOps architecture in place. This is achieved by reading and interpreting data from users and
by ensuring security, availability, and more.

Benefits of DevOps Architecture

A properly implemented DevOps approach comes with a number of benefits. These include the
following that we selected to highlight:

Decreased Cost: Operational cost is a primary concern for businesses, and DevOps helps
organizations keep costs low. Because efficiency gets a boost with DevOps practices, software
production increases and overall production costs fall.


Increased Productivity and Release Speed: With shorter development cycles and streamlined
processes, teams are more productive and software is deployed more quickly.

Customers Are Served: User experience, and by extension user feedback, is important to the DevOps
process. By gathering information from clients and acting on it, those who practice DevOps ensure
that clients' wants and needs are honored and customer satisfaction reaches new highs.

It Gets More Efficient with Time: DevOps simplifies the development lifecycle, which in previous
iterations had become increasingly complex. This brings greater efficiency throughout a DevOps
organization, and gathering requirements also gets easier. In DevOps, requirements gathering is a
streamlined process: a culture of accountability, collaboration, and transparency makes it a
smooth team effort where no stone is left unturned.

V) The monolithic scenario:


In DevOps, the monolithic scenario refers to a software architecture in which all components of
an application are tightly integrated into a single, self-contained unit. In this scenario, all features
of an application are developed, tested, and deployed together as a single package.

Monolithic software is designed to be self-contained: the program's components or functions are
tightly coupled rather than loosely coupled, as in modular software programs. In a monolithic
architecture, each component and its associated components must all be present for code to be
executed or compiled and for the software to run.

Monolithic applications are single-tiered, which means multiple components are combined into
one large application. Consequently, they tend to have large codebases, which can be
cumbersome to manage over time.

Furthermore, if one program component must be updated, other elements may also require
rewriting, and the whole application has to be recompiled and tested. The process can be time-
consuming and may limit the agility and speed of software development teams. Despite these
issues, the approach is still in use because it does offer some advantages. Also, many early
applications were developed as monolithic software, so the approach cannot be completely
disregarded when those applications are still in use and require updates.


What is monolithic architecture?


A monolithic architecture is the traditional unified model for the design of a software program.
Monolithic, in this context, means "composed all in one piece." According to the Cambridge
dictionary, the adjective monolithic also means both "too large" and "unable to be changed."

Benefits/Advantages of monolithic architecture

There are benefits to monolithic architectures, which is why many applications are still created
using this development paradigm. For one, monolithic programs may have better throughput than
modular applications. They may also be easier to test and debug because, with fewer elements,
there are fewer testing variables and scenarios that come into play.

At the beginning of the software development lifecycle, it is usually easier to go with the
monolithic architecture since development can be simpler during the early stages. A single
codebase also simplifies logging, configuration management, application performance
monitoring and other development concerns. Deployment can also be easier by copying the
packaged application to a server. Finally, multiple copies of the application can be placed behind
a load balancer to scale it horizontally.

That said, the monolithic approach is usually better for simple, lightweight applications. For more
complex applications with frequent expected code changes or evolving scalability requirements,
this approach is not suitable.

Drawbacks of monolithic architecture

Generally, monolithic architectures suffer from drawbacks that can delay application development
and deployment. These drawbacks become especially significant when the product's complexity
increases or when the development team grows in size.

The code base of monolithic applications can be difficult to understand because they may be
extensive, which can make it difficult for new developers to modify the code to meet changing
business or technical requirements. As requirements evolve or become more complex, it becomes
difficult to correctly implement changes without hampering the quality of the code and affecting
the overall operation of the application.

Following each update to a monolithic application, developers must compile the entire codebase
and redeploy the full application rather than just the part that was updated. This makes
continuous or regular deployments difficult, which in turn affects the application's and the
team's agility.

The application's size can also increase startup time and add to delays. In some cases, different
parts of the application may have conflicting resource requirements. This makes it harder to find
the resources required to scale the application.

However, the monolithic scenario can be simpler to manage and deploy, as there is only one
package to manage. It can also be more suitable for smaller applications with fewer moving parts.

Overall, whether to use a monolithic or a microservices architecture depends on the specific needs
and requirements of the application and organization.

VI) Architecture Rules of Thumb:

Here are some general architecture rules of thumb in DevOps:

1. Design for failure: Plan for potential failures and build redundancy and fault tolerance into the
architecture. Use techniques such as load balancing, autoscaling, and backup and recovery to
ensure resilience (a small retry sketch follows this list).

2. Keep it simple: Simplify the architecture to reduce complexity and increase maintainability.
Avoid unnecessary components and dependencies that can introduce complexity and increase the
likelihood of errors.

3. Automate everything: Use automation tools to streamline the development, testing, deployment,
and maintenance processes. Automate tasks such as testing, provisioning, and configuration
management to reduce the risk of human error and increase efficiency.

4. Use modular architecture: Use a modular architecture to break down complex systems into
smaller, more manageable components. This can make it easier to deploy and manage changes, as
well as isolate issues and reduce the risk of system-wide failures.

5. Use cloud-native architecture: Use cloud-native architectures and design patterns to take
advantage of the scalability, flexibility, and resilience of cloud environments. This includes
using microservices, serverless computing, and containerization to deploy and manage applications
in the cloud.

6. Emphasize security: Build security into the architecture from the beginning, including secure
coding practices, access controls, and encryption. Use tools such as vulnerability scanning and
threat modeling to identify and mitigate potential security risks.
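
As an illustration of rule 1, here is a small, self-contained Python sketch of retrying a failing call with exponential backoff and jitter, one common fault-tolerance technique. The flaky service call is simulated with a random failure; in a real system it would be a network or database call.

```python
import random
import time

def call_flaky_service() -> str:
    """Stand-in for a network call that sometimes fails."""
    if random.random() < 0.5:
        raise ConnectionError("service temporarily unavailable")
    return "response payload"

def call_with_retries(max_attempts=5, base_delay=0.5):
    """Retry with exponential backoff and jitter -- one way to 'design for failure'."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call_flaky_service()
        except ConnectionError as exc:
            if attempt == max_attempts:
                raise  # exhausted: let a higher layer (or an alert) handle it
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.1)
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay:.2f}s")
            time.sleep(delay)

if __name__ == "__main__":
    print(call_with_retries())
```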

These rules of thumb are not exhaustive, and the specifics of a particular architecture
will depend on the needs and constraints of the application and organization. However, following
these guidelines can help to create a more scalable, resilient, and secure architecture.


VII) The Separation of Concerns:

Separation of concerns is a software architecture design pattern/principle for separating an
application into distinct sections, so that each section addresses a separate concern. At its
essence, separation of concerns is about order. The overall goal is to establish a well-organized
system where each part fulfills a meaningful and intuitive role while maximizing its ability to
adapt to change.

The separation of concerns in DevOps is an important principle that emphasizes the need to
separate the concerns of different teams involved in software development and operations. It aims
to promote collaboration and efficient workflows among teams with different roles and
responsibilities.

In the context of DevOps, there are generally three main teams involved: development,
operations, and quality assurance. Each team has a specific set of responsibilities, and the
separation of concerns helps to ensure that these responsibilities are clearly defined and that each
team can focus on its own tasks without interfering with the tasks of others.

For example, the development team is responsible for writing code and implementing new
features, while the operations team is responsible for deploying and maintaining the software in
production environments. The quality assurance team, on the other hand, is responsible for
testing and ensuring the quality of the software.

By separating these concerns, each team can work more efficiently and effectively, without
stepping on each other's toes. This can lead to faster release cycles, better quality software, and
more efficient use of resources.

In addition to separating the concerns of different teams, DevOps also emphasizes the need for
collaboration and communication between teams. This can be achieved through various tools and
practices, such as continuous integration and delivery, automated testing, and agile
methodologies.

Overall, the separation of concerns is an important principle in DevOps that helps to ensure the
success of software development and operations projects.
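
A tiny, hypothetical Python example makes the principle concrete: storage, business rules, and presentation each live in their own component, so a change to one concern does not ripple through the others. All names here are invented for illustration.

```python
# Data-access concern: only this class knows how users are stored.
class UserRepository:
    def __init__(self):
        self._users = {}  # in-memory store, standing in for a real database

    def save(self, user_id: str, email: str):
        self._users[user_id] = {"email": email}

    def find(self, user_id: str):
        return self._users.get(user_id)

# Business-logic concern: validation rules live here, not in storage or UI code.
class UserService:
    def __init__(self, repo: UserRepository):
        self._repo = repo

    def register(self, user_id: str, email: str):
        if "@" not in email:
            raise ValueError("invalid email address")
        self._repo.save(user_id, email)

# Presentation concern: formatting output, kept apart from logic and storage.
def show_user(repo: UserRepository, user_id: str) -> str:
    user = repo.find(user_id)
    return f"User {user_id}: {user['email']}" if user else f"User {user_id} not found"

if __name__ == "__main__":
    repo = UserRepository()
    UserService(repo).register("u1", "dev@example.com")
    print(show_user(repo, "u1"))
```

Because each concern is isolated, the in-memory store could be swapped for a real database without touching the validation or display code.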

Advantages of Separation of concerns:


Separation of Concerns implemented in software architecture would have several advantages:

1. Lack of duplication and singularity of purpose of the individual components render the overall
system easier to maintain.
2. The system becomes more stable as a byproduct of the increased maintainability.
3. The strategies required to ensure that each component only concerns itself with a single set of
cohesive responsibilities often result in natural extensibility points.

4. The decoupling which results from requiring components to focus on a single purpose leads to
components which are more easily reused in other systems, or different contexts within the same
system.
5. The increase in maintainability and extensibility can have a major impact on the marketability and
adoption rate of the system.
There are several flavors of Separation of Concerns: Horizontal Separation, Vertical Separation,
Data Separation, and Aspect Separation. This material restricts itself to Horizontal and Aspect
Separation of Concerns.

VIII) Handling Database Migrations in DevOps:

Database Schema:

A database schema is a blueprint or plan that defines the structure of a database, including the
tables, columns, data types, constraints, relationships, and other characteristics of the data stored
in the database. In other words, a database schema describes how the data is organized and how it
can be accessed and manipulated.

A database schema is typically created during the design phase of a database application and is
based on the requirements and specifications of the application. The schema defines the logical
structure of the database, which can be implemented using various database management systems
(DBMS), such as MySQL, Oracle, SQL Server, and PostgreSQL.

What are database migrations?


Database migrations, also known as schema migrations, database schema migrations, or simply
migrations, are controlled sets of changes developed to modify the structure of the objects within
a relational database. Migrations help transition database schemas from their current state to a
new desired state, whether that involves adding tables and columns, removing elements, splitting
fields, or changing types and constraints.

Best practices for handling database migrations in DevOps include the following:

Automate migrations: Use tools like Liquibase or Flyway to automate database migrations.
These tools can help to ensure that schema changes are applied consistently across all
environments, and can also help to reduce the risk of errors or downtime during migrations.

Test thoroughly: Test database migrations thoroughly before deploying them to production. Use
automated testing tools to ensure that schema changes do not cause data loss or other unintended
consequences.

Use rolling updates: Use rolling updates to apply schema changes to databases in a controlled
and gradual manner. This can help to reduce the risk of downtime or errors during migrations.


Coordinate with other teams: Coordinate with other teams, such as operations and database
administrators, to ensure that database migrations are executed properly and do not conflict with
other changes being made to the system.

Monitor performance: Monitor database performance before and after migrations to ensure that
the system is running smoothly and that there are no unexpected bottlenecks or issues.

Backup and recovery: Backup the database before migrating to a new schema, and have a
recovery plan in place in case of any issues during the migration process.

By following these best practices, organizations can manage database migrations in a way that
minimizes risk, ensures consistency across environments, and reduces downtime and errors.
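
The core mechanism behind tools like Liquibase and Flyway can be sketched in a few lines of Python. The example below, using the built-in sqlite3 module and made-up table changes, applies versioned migrations in order and records the current schema version, which is essentially what the real tools do in a far more robust way.

```python
import sqlite3

# Ordered, versioned schema changes -- the same idea Flyway encodes in files
# named V1__create_users.sql, V2__add_email.sql, and so on.
MIGRATIONS = [
    (1, "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)"),
    (2, "ALTER TABLE users ADD COLUMN email TEXT"),
]

def migrate(db_path: str = "app.db"):
    """Apply any migrations newer than the database's recorded version."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    current = row[0] or 0
    for version, statement in MIGRATIONS:
        if version > current:
            print(f"Applying migration {version}...")
            conn.execute(statement)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
            conn.commit()
    conn.close()
    print("Database schema is up to date.")

if __name__ == "__main__":
    migrate()
```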

Liquibase:

Liquibase is an open-source database migration tool that helps to manage and automate the
process of database schema changes. It allows developers to track, version, and deploy changes to
database schemas in a consistent and repeatable way, which can help to improve the quality and
reliability of database-driven applications.

Database migrations can be a complex and error-prone process, especially in large and distributed
systems. Liquibase simplifies this process by providing a simple and declarative way to define
database schema changes, which can be stored in a version control system and shared among
team members. This helps to ensure that all developers are working from the same codebase and
that changes are made in a consistent and controlled manner.

Liquibase supports a wide range of databases, including Oracle, MySQL, SQL Server,
PostgreSQL, and many others. It uses XML files to define changes to database schemas, which
can be easily modified and versioned using standard version control tools.

One of the key benefits of Liquibase is its ability to roll back changes, which can be critical in the
event of errors or issues during the migration process. Liquibase also provides a range of tools
and plugins(In DevOps, a plugin is a software component that extends the functionality of an
existing tool or platform. Plugins are often used to integrate different tools and systems within the
DevOps tool chain, allowing for automated workflows and streamlined processes.) for integration
with popular build and deployment tools, such as Maven, Gradle, and Jenkins.

Overall, Liquibase is a powerful and flexible tool for managing database schema changes, which
can help to improve the quality, reliability, and agility of database-driven applications.

Flyway:
Flyway is an open-source database migration tool that automates the process of database schema
changes. It allows developers to version, track, and migrate database changes in a simple,
repeatable, and automated way.


Flyway supports a wide range of databases, including Oracle, MySQL, SQL Server, PostgreSQL,
and many others. It uses plain SQL scripts to define database schema changes, which can be
easily modified and versioned using standard version control tools such as Git.

Flyway is a powerful and flexible tool for managing database schema changes, which can help to
improve the quality, reliability, and agility of database-driven applications. It allows developers to
focus on developing new features and functionality, rather than worrying about manual database
schema changes and migrations.

What are the advantages of migration tools?

Migrations are helpful because they allow database schemas to evolve as requirements change.
They help developers plan, validate, and safely apply schema changes to their environments.
These compartmentalized changes are defined on a granular level and describe the
transformations that must take place to move between various "versions" of the database.

In general, migration systems create artifacts or files that can be shared, applied to multiple
database systems, and stored in version control. This helps construct a history of modifications to
the database that can be closely tied to accompanying code changes in the client applications. The
database schema and the application's assumptions about that structure can evolve in tandem.

Some other benefits include being allowed (and sometimes required) to manually tweak the
process by separating the generation of the list of operations from the execution of them. Each
change can be audited, tested, and modified to ensure that the correct results are obtained while
still relying on automation for the majority of the process.

IX) Microservices:
Microservices, often referred to as microservices architecture, is an architectural approach that
involves dividing large applications into smaller, functional units capable of functioning and
communicating independently.
This approach arose in response to the limitations of monolithic architecture. Because monoliths
are large containers holding all the software components of an application, they are severely
limited: inflexible, unreliable, and often slow to develop.
With microservices, however, each unit is independently deployable yet can communicate with the
others when necessary. Developers can achieve the scalability, simplicity, and flexibility needed
to create highly sophisticated software.
Microservices is an architectural pattern that has become popular in DevOps as it promotes
flexibility, scalability, and agility in software development. It involves breaking down a large,
monolithic application into small, independent services, each with its own functionality and well-
defined interfaces. These services can be developed, deployed, and scaled independently,
allowing organizations to quickly iterate and respond to changing business requirements.

The key benefits of microservices architecture

Microservices architecture presents developers and engineers with a number of benefits that
monoliths cannot provide. Here are a few of the most notable.

1. Less development effort

Smaller development teams can work in parallel on different components to update existing
functionalities. This makes it significantly easier to identify hot services, scale independently
from the rest of the application, and improve the application.


2. Improved scalability

Microservices launch individual services independently, and each may be developed in a different
language or technology; because the services communicate over well-defined interfaces, teams can
choose the most efficient tech stack for each service without worrying about whether the stacks
will work well together. These small services also consume relatively less infrastructure than
monolithic applications, because each component can be scaled precisely to its own requirements.

3. Independent deployment

Each microservice constituting an application needs to be a full stack. This enables microservices
to be deployed independently at any point. Since microservices are granular in nature,
development teams can work on one microservice, fix errors, then redeploy it without redeploying
the entire application.

Microservices architecture is agile: adding or changing a line of code, or adding or eliminating
features, does not require a monumental approval process. The approach also helps streamline
business structures through improved resilience and fault isolation.

4. Error isolation

In monolithic applications, the failure of even a small component of the overall application can
make it inaccessible. In some cases, determining the error could also be tedious. With
microservices, isolating the problem-causing component is easy since the entire application is
divided into standalone, fully functional software units. If errors occur, other non-related units
will still continue to function.

5. Integration with various tech stacks

With microservices, developers have the freedom to pick the tech stack best suited for one
particular microservice and its functions. Instead of opting for one standardized tech stack
encompassing all of an application’s functions, they have complete control over their options.

Here are some additional considerations for using microservices in DevOps:

1. Service discovery and communication: As microservices are often distributed across multiple
servers or containers, it is important to have a reliable way for them to discover and communicate
with each other. This can be achieved through the use of service registries (A service registry is a


database for the storage of data structures for application-level communication. It serves as a
central location where app developers can register and find the schemas used for particular apps)
and service meshes (A service mesh is a dedicated infrastructure layer that controls service-to-
service communication over a network. This method enables separate parts of an application to
communicate with each other. Service meshes appear commonly in concert with cloud-based
applications). A minimal registry sketch appears after this list.

2. API gateway: An API gateway acts as a central point of entry for all requests to the
microservices architecture. (An API gateway is a software pattern that sits in front of an
application programming interface (API) or group of microservices to facilitate requests and the
delivery of data and services; it acts as a "front door" through which applications access data,
business logic, or functionality from backend services.) It can provide features such as
authentication, rate limiting, and load balancing.

3. Database per service: To maintain loose coupling between services, it is often recommended
to have a separate database for each microservice.

4. DevOps team structure: Microservices can lead to a more decentralized and autonomous team
structure, with each team responsible for a specific set of services. This can require changes to the
organization's DevOps processes and culture.

5. Testing: Testing microservices can be challenging due to the number of services involved and
their distributed nature. Automated testing, including unit testing, integration testing, and end-to-
end testing, is essential to ensure that the system works as intended.
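
As promised above, here is a toy Python sketch of a service registry with heartbeat-based expiry. It is a simplification for illustration; production systems use tools such as Consul or etcd, and the service names and addresses here are invented.

```python
import time

class ServiceRegistry:
    """Toy in-memory service registry with heartbeat-based expiry."""
    def __init__(self, ttl_seconds: float = 30.0):
        self._ttl = ttl_seconds
        self._instances = {}   # service name -> {address: last_heartbeat_time}

    def register(self, service: str, address: str):
        self._instances.setdefault(service, {})[address] = time.time()

    def heartbeat(self, service: str, address: str):
        self.register(service, address)   # refreshing is the same as registering

    def lookup(self, service: str):
        """Return only the instances whose heartbeat has not expired."""
        now = time.time()
        alive = {addr: seen
                 for addr, seen in self._instances.get(service, {}).items()
                 if now - seen < self._ttl}
        self._instances[service] = alive
        return list(alive)

if __name__ == "__main__":
    registry = ServiceRegistry(ttl_seconds=30)
    registry.register("orders", "10.0.0.5:8000")
    registry.register("orders", "10.0.0.6:8000")
    print(registry.lookup("orders"))   # a caller discovers both live instances
```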

By considering these additional factors, organizations can successfully implement microservices in
DevOps and benefit from increased flexibility, scalability, and agility in software development.

Microservices vs monolithic architecture:

With monolithic architectures, all processes are tightly coupled and run as a single service. This
means that if one process of the application experiences a spike in demand, the entire architecture
must be scaled. Adding or improving a monolithic application’s features becomes more complex
as the code base grows. This complexity limits experimentation and makes it difficult to
implement new ideas. Monolithic architectures add risk for application availability because many
dependent and tightly coupled processes increase the impact of a single process failure.

With a microservices architecture, an application is built as independent components that run each
application process as a service. These services communicate via a well-defined interface using
lightweight APIs. Services are built for business capabilities and each service performs a single
function. Because they are independently run, each service can be updated, deployed, and scaled
to meet demand for specific functions of an application.


X) Data Tier:
The data tier in DevOps refers to the layer of the application architecture that is responsible for
storing, retrieving, and processing data. The data tier is typically composed of databases, data
warehouses, and data processing systems that manage large amounts of structured and
unstructured data.

The data tier consists of a database and a program for managing read and write access to a
database. This tier may also be referred to as the storage tier and can be hosted on-premises or in
the cloud.

The data tier is an important component of any software system and is critical to the success of
DevOps. The data tier includes the databases, data storage systems, and data processing systems
that support the application. Effective management of the data tier is essential for ensuring data
availability, reliability, and scalability.

In DevOps, the data tier is considered an important aspect of the overall application architecture
and is typically managed as part of the DevOps process. This includes:

1. Data management and migration: Ensuring that data is properly managed and migrated
as part of the software delivery pipeline.

2. Data backup and recovery: Implementing data backup and recovery strategies to ensure
that data can be recovered in case of failures or disruptions (a small backup sketch follows below).

3. Data security: Implementing data security measures to protect sensitive information and
comply with regulations.

4. Data performance optimization: Optimizing data performance to ensure that applications
and services perform well, even with large amounts of data.

5. Data integration: Integrating data from multiple sources to provide a unified view of data
and support business decisions.

By integrating data management into the DevOps process, teams can ensure that data is properly
managed and protected, and that data-driven applications and services perform well and deliver
value to customers.
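
To illustrate point 2 above, here is a small backup-and-restore sketch using Python's built-in sqlite3 online backup API. The database path is a placeholder; real data tiers would use the backup tooling of their own DBMS, but the pattern of taking timestamped, consistent copies and keeping a restore path is the same.

```python
import shutil
import sqlite3
from datetime import datetime

DB_PATH = "app.db"   # hypothetical production database file

def backup(db_path: str = DB_PATH) -> str:
    """Take a consistent copy of a SQLite database using its online backup API."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    backup_path = f"{db_path}.{stamp}.bak"
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(backup_path)
    with dst:
        src.backup(dst)          # copies the database even while it is in use
    src.close()
    dst.close()
    return backup_path

def restore(backup_path: str, db_path: str = DB_PATH):
    """Recovery plan: replace the live file with a known-good backup."""
    shutil.copyfile(backup_path, db_path)

if __name__ == "__main__":
    sqlite3.connect(DB_PATH).close()   # ensure a database file exists for the demo
    saved = backup()
    print(f"Backup written to {saved}")
```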

Here are some considerations for managing the data tier in DevOps:

1. Automation: Automation is crucial for managing the data tier in DevOps. Automation tools,
such as configuration management and infrastructure as code, can be used to deploy and manage
the data tier.


2. Version control: The data tier should be treated like any other code in the DevOps pipeline
and should be version controlled. This allows changes to the data tier to be tracked, rolled back if
necessary, and replicated across environments.

3. Continuous Integration and Continuous Deployment (CI/CD): A CI/CD pipeline can be used to
automate the deployment and testing of the data tier. This ensures that any changes to the data
tier are thoroughly tested before they are deployed to production.

4. Monitoring and alerting: Effective monitoring and alerting of the data tier is essential for
detecting and responding to issues quickly. Metrics such as database performance, storage usage,
and data processing time should be monitored and alerts should be set up for any anomalies.

5. Security: Data security is critical, and data tier management should include best practices for
data encryption, access control, and compliance.

By considering these factors and implementing best practices for managing the data tier,
organizations can ensure that their data is available, reliable, and scalable, which is critical for the
success of DevOps.

XI) DevOps Architecture and Resilience:

DevOps architecture is focused on creating highly resilient systems that can quickly recover from
failures and continue to operate smoothly. Resilience is achieved by designing systems that can
tolerate and adapt to failures, and by automating the recovery process.

Development and operations both play essential roles in delivering applications. Development
comprises analyzing the requirements and designing, developing, and testing the software
components or frameworks.


The operation consists of the administrative processes, services, and support for the software.
When development and operations are combined and collaborate, DevOps architecture is the solution
that closes the gap between development and operations, so delivery can be faster.

DevOps architecture is used for applications hosted on cloud platforms and for large distributed
applications. Agile development is used in the DevOps architecture so that integration and
delivery can be continuous. When the development and operations teams work separately from each
other, designing, testing, and deploying is time-consuming, and if the teams are not in sync,
delivery may be delayed. DevOps enables the teams to fix their shortcomings and increases
productivity.

Below are the various components that are used in the DevOps architecture:

Build: Without DevOps, the cost of resource consumption was evaluated based on pre-defined
individual usage with fixed hardware allocation. With DevOps, cloud usage and resource sharing
come into the picture, and the build is driven by the user's need, which becomes the mechanism
for controlling the usage of resources or capacity.

Code: Good practices built around tools such as Git ensure that code is written for the business,
make it possible to track changes, provide notification of the reasons behind differences between
actual and expected output, and allow reverting to the original code if necessary. Code can be
appropriately arranged in files and folders, and it can be reused.

Test: The application is ready for production only after it has been tested. Manual testing
consumes more time, both in testing and in moving the code onward. Testing can be automated, which
decreases the time for testing so that the time to deploy the code to production is reduced, since
automating the running of scripts removes many manual steps.

Plan: DevOps uses the Agile methodology to plan development. With the operations and development
teams in sync, work can be organized and planned accordingly, increasing productivity.

Monitor: Continuous monitoring is used to identify any risk of failure. It also helps in tracking
the system accurately so that the health of the application can be checked. Monitoring becomes
easier when services' log data is monitored through third-party tools such as Splunk.
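
A simplified version of this kind of log monitoring can be sketched in Python. The log file name, the line format, and the alert threshold below are assumptions for illustration; tools like Splunk do this at scale with far richer queries.

```python
import re
from collections import Counter

ERROR_PATTERN = re.compile(r"\b(ERROR|FATAL)\b")
ALERT_THRESHOLD = 5    # hypothetical: alert when a component logs 5+ errors

def scan_log(path: str = "app.log"):
    """Count errors per component, the way a log tool aggregates log data.
    Assumes each log line starts with a component name."""
    errors = Counter()
    with open(path) as log:
        for line in log:
            if ERROR_PATTERN.search(line):
                tokens = line.split()
                component = tokens[0] if tokens else "unknown"
                errors[component] += 1
    for component, count in errors.items():
        if count >= ALERT_THRESHOLD:
            print(f"[ALERT] {component}: {count} errors -- investigate")
        else:
            print(f"[ok]    {component}: {count} errors")

if __name__ == "__main__":
    scan_log()
```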

Deploy: Many systems support schedulers for automated deployment. Cloud management platforms
enable users to capture accurate insights, view optimization scenarios, and analyze trends through
dashboards.

Operate: DevOps changes the traditional approach of developing and testing separately. The teams
operate collaboratively, with both teams actively participating throughout the service lifecycle.
The operations team interacts with developers, and together they come up with a monitoring plan
that serves the IT and business requirements.

Release: Deployment to an environment can be automated, but deployment to the production
environment is done by manual triggering. Most release management processes deliberately keep
production deployment manual in order to lessen the impact on customers.
DevOps resilience
DevOps resilience refers to the ability of a DevOps system to withstand and recover from failures
and disruptions. This means ensuring that the systems and processes used in DevOps are robust,
scalable, and able to adapt to changing conditions.
Here are some key principles for building resilient DevOps architecture:

1. Redundancy: Redundancy involves having multiple instances of critical components, such as
servers or databases, running in parallel. This ensures that if one instance fails, the others can
continue to operate, and the system remains available.

2. Load balancing: Load balancing distributes traffic across multiple instances of a component,
ensuring that no single instance is overloaded. This prevents a single point of failure and
ensures that the system can handle increases in traffic (a small dispatcher sketch follows this list).

3. Monitoring: Monitoring the system is essential for detecting and diagnosing issues. By monitoring
metrics such as system performance, resource utilization, and error rates, teams can identify issues
early and take corrective action.


4. Automation: Automation enables quick recovery from failures. By automating processes such as
backup and recovery, deployment, and scaling, teams can reduce the time to restore service when
issues arise.

5. Testing: Resilient systems are thoroughly tested to ensure that they can withstand failures. This
includes testing for common failure scenarios and testing the recovery process to ensure that it is
reliable.
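
The sketch below ties the redundancy and load-balancing principles together: a round-robin dispatcher in Python that skips instances marked unhealthy, so traffic keeps flowing when one instance fails. It is a toy model; real systems use dedicated load balancers such as HAProxy or cloud-native equivalents, and the addresses here are invented.

```python
import itertools

class LoadBalancer:
    """Round-robin dispatcher that skips instances marked unhealthy."""
    def __init__(self, backends):
        self._backends = backends
        self._healthy = set(backends)
        self._cycle = itertools.cycle(backends)

    def mark_down(self, backend):
        self._healthy.discard(backend)   # e.g., after a failed health check

    def mark_up(self, backend):
        self._healthy.add(backend)       # instance recovered

    def next_backend(self):
        """Return the next healthy backend; redundancy keeps the service available."""
        for _ in range(len(self._backends)):
            candidate = next(self._cycle)
            if candidate in self._healthy:
                return candidate
        raise RuntimeError("no healthy backends left")

if __name__ == "__main__":
    lb = LoadBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
    lb.mark_down("10.0.0.2")                 # simulate one instance failing
    for _ in range(4):
        print("request ->", lb.next_backend())   # traffic flows to the survivors
```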

By incorporating these principles into the architecture of DevOps systems, organizations can create
highly resilient systems that can withstand failures and continue to operate smoothly. This is essential
for ensuring the availability and reliability of critical business applications.

UNIT-II:- Software development models and DevOps


DevOps Lifecycle for Business Agility, DevOps, and Continuous Testing. DevOps influence on
Architecture: Introducing software architecture, The monolithic scenario, Architecture rules of
thumb, The separation of concerns, Handling database migrations, Microservices, and the data
tier, DevOps, architecture, and resilience.
PART A:
1) What are the different software development lifecycle models?
2) What are the advantages of migration tools?
3) What is the data tier in DevOps?
4) What is monolithic architecture?
5) What are the benefits of monolithic architecture?
PART B:
1) Explain the DevOps lifecycle in detail.
2) What are the DevOps components?
3) Explain DevOps architecture and resilience in detail.
4) What are microservices, and how does microservices architecture work?
5) Explain the architecture rules of thumb.
6) Explain database migration.
7) Write a brief note on software architecture. Explain the monolithic scenario.

PREPARED BY
G.SATISH KUMAR (M.Tech)
VEC-KHAMMAM.
