
UNIT - II

Software development models and DevOps


• Software Development Life Cycle models
• Agile
• Lean
• Waterfall
• Iterative
• Spiral
• DevOps
Agile
• In the Agile model, fast failure is a good thing.
• This approach produces ongoing release cycles, each featuring
small, incremental changes from the previous release.
• The Agile model helps teams identify and address small issues
on projects before they evolve into more significant problems,
and it engages business stakeholders to give feedback
throughout the development process.
• Within this methodology, many teams also apply an Agile framework
known as Scrum to help structure more complex development
projects. Scrum teams work in sprints, which usually last two
to four weeks, to complete assigned tasks.
• Daily Scrum meetings help the whole team monitor progress
throughout the project. And the ScrumMaster is tasked with
keeping the team focused on its goal.
Lean
• Lean is defined as a set of management practices to
improve efficiency and effectiveness by eliminating
waste.
• The Lean model for software development is inspired
by "lean" manufacturing practices and principles.
• The seven Lean principles (in this order) are:
eliminate waste, amplify learning, decide as late as
possible, deliver as fast as possible, empower the
team, build in integrity and see the whole.
• The Lean process is about working only on what
must be worked on at the time, so there’s no room
for multitasking.
• Project teams are also focused on finding
opportunities to cut waste at every turn throughout
the SDLC process, from dropping unnecessary
meetings to reducing documentation.
Waterfall

• Waterfall is widely considered the oldest of the
structured SDLC methodologies.
• It’s also a very straightforward approach: finish one
phase, then move on to the next.
• No going back. Each stage relies on information from
the previous stage and has its own project plan.
Iterative
• The Iterative model is repetition incarnate.
• Instead of starting with fully known requirements,
project teams implement a set of software
requirements, then test, evaluate and pinpoint
further requirements.
• A new version of the software is produced with each
phase, or iteration. Rinse and repeat until the
complete system is ready.
Spiral
• One of the most flexible SDLC methodologies, Spiral
takes a cue from the Iterative model and its repetition.
• The project passes through four phases (planning, risk
analysis, engineering and evaluation) over and over in a
figurative spiral until completed, allowing for multiple
rounds of refinement.
• The Spiral model is typically used for large projects. It
enables development teams to build a highly customized
product and incorporate user feedback early on.
• Another benefit of this SDLC model is risk management.
Each iteration starts by looking ahead to potential risks
and figuring out how best to avoid or mitigate them.
DevOps
• The DevOps methodology is a relative newcomer to
the SDLC scene.
• It emerged from two trends: the application of Agile
and Lean practices to operations work, and the
general shift in business toward seeing the value of
collaboration between development and operations
staff at all stages of the SDLC process.
DevOps Lifecycle
• Continuous Development
• This phase involves the planning and coding of the
software.
• The vision of the project is decided during the
planning phase. And the developers begin developing
the code for the application.
• There are no DevOps tools that are required for
planning, but there are several tools for maintaining
the code.
• Continuous Integration
• This stage is the heart of the entire DevOps lifecycle.
• It is a software development practice in which developers are
required to commit changes to the source code frequently,
often on a daily or weekly basis. Every commit is then built,
which allows early detection of problems if they are present.
• Building the code involves not only compilation but also
unit testing, integration testing, code review, and packaging.
• The code supporting new functionality is
continuously integrated with the existing code.
Therefore, there is continuous development of
software. The updated code needs to be integrated
continuously and smoothly with the systems to
reflect changes to the end-users.
Jenkins is a popular tool used in this phase. Whenever there is a
change in the Git repository, Jenkins fetches the updated code and
prepares a build of that code, an executable in the form of a WAR or
JAR file. This build is then forwarded to the test server or the
production server.
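The commit-build-test flow that such a CI server automates can be sketched as a plain Python script; the step functions and the commit structure below are illustrative stand-ins, not the Jenkins API:

```python
# A toy continuous-integration pipeline: every commit triggers a build,
# and each build runs its steps in order, stopping at the first failure.
# The steps and the commit dict are hypothetical stand-ins for a real CI setup.

def run_pipeline(commit, steps):
    """Run each named build step for a commit; stop at the first failure."""
    results = {}
    for name, step in steps:
        ok = step(commit)
        results[name] = "passed" if ok else "failed"
        if not ok:  # early detection of problems is the point of CI
            break
    return results

# Illustrative steps for a commit represented as a dict of changed files.
steps = [
    ("compile", lambda c: all(f.endswith(".py") for f in c["files"])),
    ("unit-test", lambda c: c.get("tests_pass", True)),
    ("package", lambda c: True),
]

good = run_pipeline({"files": ["app.py"], "tests_pass": True}, steps)
bad = run_pipeline({"files": ["app.py"], "tests_pass": False}, steps)
```

A failing commit never reaches the packaging step, which is how frequent small commits surface problems early.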
• Continuous Testing
• In this phase, the developed software is continuously
tested for bugs.
• For constant testing, automation testing tools such
as TestNG, JUnit, Selenium, etc are used.
• These tools allow QAs to test multiple code-bases
thoroughly in parallel to ensure that there is no flaw
in the functionality.
• In this phase, Docker Containers can be used for
simulating the test environment.
Selenium does the automation testing, and TestNG
generates the reports. This entire testing phase can be
automated with the help of a Continuous Integration tool
called Jenkins.

• Automation testing saves a lot of time and effort in
executing the tests compared to doing this manually.
• Apart from that, report generation is a big plus. The task of
evaluating the test cases that failed in a test suite gets
simpler. Also, we can schedule the execution of the test
cases at predefined times. After testing, the code is
continuously integrated with the existing code.
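A minimal sketch of such an automated test run, using Python's built-in unittest module as a stand-in for Java-side tools like JUnit or TestNG (the discount function is a hypothetical example):

```python
import unittest

def apply_discount(price, percent):
    """Example business function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    """Automated checks that a CI server could run on every commit."""

    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically, as a CI job would; the result object
# is what feeds the pass/fail report.
suite = unittest.TestLoader().loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the suite runs unattended, it can be scheduled at predefined times and its report makes failed cases easy to spot.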
• Continuous Monitoring
• Monitoring is a phase that involves all the
operational factors of the entire DevOps process,
where important information about the use of the
software is recorded and carefully processed to identify
trends and pinpoint problem areas. Usually, the
monitoring is integrated within the operational
capabilities of the software application.
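A toy sketch of this idea in Python: record a metric about the running software and flag a problem area when the recent trend crosses a threshold (the window size and threshold are illustrative):

```python
from collections import deque

class LatencyMonitor:
    """Record request latencies and flag when the recent average exceeds
    a threshold -- a tiny sketch of continuous monitoring, not a real
    monitoring tool."""

    def __init__(self, window=5, threshold_ms=200.0):
        self.samples = deque(maxlen=window)  # keep only recent samples
        self.threshold_ms = threshold_ms

    def record(self, latency_ms):
        self.samples.append(latency_ms)

    def average(self):
        return sum(self.samples) / len(self.samples)

    def unhealthy(self):
        # Only judge once a full window of data has been collected.
        return (len(self.samples) == self.samples.maxlen
                and self.average() > self.threshold_ms)

mon = LatencyMonitor()
for ms in [100, 120, 110, 105, 115]:   # normal operation
    mon.record(ms)
healthy_before = not mon.unhealthy()
for ms in [400, 450, 500, 480, 460]:   # latency trend degrades
    mon.record(ms)
unhealthy_after = mon.unhealthy()
```

In a real system such a signal would be wired into alerting, so the operations side learns about the trend before users do.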
• Continuous Feedback
• The application development is consistently
improved by analyzing the results from the
operations of the software.
• This is carried out by placing the critical phase of
constant feedback between the operations and the
development of the next version of the current
software application.
• Continuous Deployment
• In this phase, the code is deployed to the production
servers. It is also essential to ensure that the code is
correctly deployed on all the servers.
• The new code is deployed continuously, and
configuration management tools play an essential
role in executing tasks frequently and quickly. Here
are some popular tools which are used in this phase,
such as Chef, Puppet, Ansible, and SaltStack.
• Containerization tools are also playing an essential role in
the deployment phase. Vagrant and Docker are popular
tools that are used for this purpose.
• These tools help to produce consistency across the
development, staging, testing, and production
environments. They also help in scaling instances up and
down smoothly.
• Containerization tools help to maintain consistency across
the environments where the application is tested,
developed, and deployed.
• This reduces the chance of errors or failures in the production
environment, as the same dependencies and packages used in the
testing, development, and staging environments are packaged and
replicated. It also makes the application easy to run on
different computers.
DevOps influence on Architecture

• DevOps Model
• The DevOps model goes through several
phases governed by cross-discipline teams.
Those phases are as follows:
• Planning, Identify and Track
• Development Phase
• Testing Phase
• Deployment Phase
• Management Phase
Benefits of DevOps Architecture

• Decrease Cost
• Increased Productivity and Release Time
• Customers are Served
• It Gets More Efficient with Time
The monolithic scenario

• Monolithic software is designed to be self-contained, wherein
the program's components or functions are tightly coupled
rather than loosely coupled, as in modular software programs.
• In a monolithic architecture, each component and its
associated components must all be present for code to be
executed or compiled and for the software to run.
• Monolithic applications are single-tiered, which means
multiple components are combined into one large application.
Consequently, they tend to have large codebases, which can
be cumbersome to manage over time.
• What is monolithic architecture?
• A monolithic architecture is the traditional unified model for
the design of a software program. Monolithic, in this context,
means "composed all in one piece." According to the
Cambridge dictionary, the adjective monolithic also means
both "too large" and "unable to be changed."
Benefits of monolithic architecture

• Monolithic programs may have better throughput than
modular applications.
• They may also be easier to test and debug because, with
fewer elements, there are fewer testing variables and
scenarios that come into play.
• A single codebase also simplifies logging, configuration
management, application performance monitoring and other
development concerns.
• That said, the monolithic approach is usually better for
simple, lightweight applications. For more complex
applications with frequent expected code changes or evolving
scalability requirements, this approach is not suitable.
Drawbacks of monolithic architecture

• Delayed application development and deployment.


• The code base of a monolithic application can be difficult to
understand because it may be extensive, which can make it
difficult for new developers to modify the code to meet
changing business or technical requirements.
• The application's size can also increase startup time and add
to delays. In some cases, different parts of the application
may have conflicting resource requirements. This makes it
harder to find the resources required to scale the application.
Architecture Rules of Thumb

• There is always a bottleneck


• Your data model is linked to the scalability of your application
• Scalability is mainly linked with cost. When you get to a large
scale, consider systems where this relationship does not track
linearly
• Favour systems that require little tuning to run fast
• Use infrastructure as code
• Use a PaaS if you’re at less than 100k MAUs(monthly active
users )
• Outsource systems outside of the market you are in. Don’t roll
your own CMS or Auth, even if it costs you tonnes
• You have three levers: quality, cost and time. You have to
balance them accordingly.
• Design your APIs as open-source contracts.
• Start with a simple system first (Gall’s law).
The Separation of Concerns

• Separation of concerns is a software architecture
design pattern/principle for separating an
application into distinct sections, so each section
addresses a separate concern.
• At its essence, Separation of concerns is about
order.
• The overall goal of separation of concerns is to
establish a well-organized system where each part
fulfills a meaningful and intuitive role while
maximizing its ability to adapt to change.
• How is separation of concerns achieved
• Separation of concerns in software
architecture is achieved by the establishment
of boundaries.
• A boundary is any logical or physical
constraint which delineates a given set of
responsibilities.
Separation of concerns - advantages
• Lack of duplication and singularity of purpose of the
individual components render the overall system easier to
maintain.
• The system becomes more stable as a byproduct of the
increased maintainability.
• Each component only concerns itself with a single set of
responsibilities
• The decoupling which results from requiring components to
focus on a single purpose leads to components which are
more easily reused in other systems.
• The increase in maintainability and extensibility can have a
major impact on the marketability and adoption rate of the
system.
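The boundaries described above can be sketched in Python by keeping the data-access concern and the business concern in separate components; the class and method names are illustrative:

```python
class UserRepository:
    """Data-access concern: only knows how to store and fetch users."""

    def __init__(self):
        self._users = {}

    def save(self, user_id, name):
        self._users[user_id] = name

    def find(self, user_id):
        return self._users.get(user_id)

class GreetingService:
    """Business concern: builds greetings. It knows nothing about how
    users are stored beyond the repository boundary it is handed."""

    def __init__(self, repository):
        self.repository = repository

    def greet(self, user_id):
        name = self.repository.find(user_id)
        return f"Hello, {name}!" if name else "Hello, guest!"

repo = UserRepository()
repo.save(1, "Ada")
service = GreetingService(repo)
```

Because the repository boundary is the only coupling point, the storage implementation can change (say, to a real database) without touching the business logic, which is the maintainability payoff listed above.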
Handling database migrations

• What are database migrations?


• Database migrations, also known as schema migrations,
database schema migrations, or simply migrations, are
controlled sets of changes developed to modify the structure
of the objects within a relational database.
• Migrations help transition database schemas from their
current state to a new desired state, whether that involves
adding tables and columns, removing elements, splitting fields,
or changing types and constraints.
• The goals of database migration software are to make
database changes repeatable, shareable, and testable without
loss of data.
• What are the advantages of migration tools?
• Migrations are helpful because they allow database
schemas to evolve as requirements change.
• They help developers plan, validate, and safely apply
schema changes to their environments.
• In general, migration systems create artifacts or files that
can be shared, applied to multiple database systems, and
stored in version control.
• Each change can be audited, tested, and modified to ensure
that the correct results are obtained while still relying on
automation for the majority of the process.
• State based migration
• State based migration software creates artifacts
that describe how to recreate the desired database
state from scratch.
• The files that it produces can be applied to an
empty relational database system to bring it fully up
to date.
• state based migrations do have the advantage of
producing files that fully describe the database
state in a single context.
• Change based migrations
• The major alternative to state based migrations is a
change based migration system.
• Change based migrations also produce files that alter the
existing structures in a database to arrive at the desired
state.
• This approach builds off of a known database state to
define the operations to bring it into the new state.
• Change based systems, however, do have the advantage of
allowing for quick, iterative changes to the database
structure.
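The change based approach can be sketched with a minimal migration runner in Python: each migration is a named, ordered change, and the runner records applied versions so reruns only apply what is new. The dict-based schema is a stand-in for a real database and its schema-history table:

```python
# A toy change-based migration runner. Each migration is an ordered,
# named change; the runner records which versions have been applied so
# a second run is a no-op. Real migration tools follow the same idea
# with SQL files and a history table.

def migrate(schema, applied, migrations):
    """Apply, in order, every migration not yet recorded in `applied`."""
    for version, change in migrations:
        if version not in applied:
            change(schema)
            applied.add(version)
    return schema

migrations = [
    ("001_create_users", lambda s: s.setdefault("users", ["id"])),
    ("002_add_email", lambda s: s["users"].append("email")),
]

schema, applied = {}, set()
migrate(schema, applied, migrations)   # applies both changes in order
migrate(schema, applied, migrations)   # rerun: nothing new, no-op
```

Note how the second run changes nothing: this repeatability is what makes migrations shareable and testable across environments, and applying the full list to an empty schema is exactly what a state based tool's from-scratch artifacts achieve.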
Microservices
• Microservices architecture is an architectural approach that
involves dividing large applications into smaller, functional units
capable of functioning and communicating independently.
• This approach arose in response to the limitations of monolithic
architecture. Because monoliths are large containers holding all
software components of an application, they are severely
limited: inflexible, unreliable, and often slow to develop.
• With microservices, however, each unit is independently
deployable, and units can communicate with each other when
necessary. Developers can now achieve
the scalability, simplicity, and flexibility needed to create highly
sophisticated software.
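The independently deployable, communicating units described above can be sketched in Python, with plain method calls standing in for the HTTP/REST requests real microservices would exchange; the service names and message format are illustrative:

```python
# Two toy "services", each an independent unit owning its own data and
# communicating only through explicit request messages -- a stand-in
# for network calls between real microservices.

class InventoryService:
    """Owns stock data; nothing else may touch it directly."""

    def __init__(self):
        self._stock = {"book": 3}

    def handle(self, request):
        if request["action"] == "reserve":
            item = request["item"]
            if self._stock.get(item, 0) > 0:
                self._stock[item] -= 1
                return {"ok": True}
        return {"ok": False}

class OrderService:
    """Depends on inventory only through its message interface, so either
    service could be redeployed or rewritten independently."""

    def __init__(self, inventory):
        self.inventory = inventory

    def place_order(self, item):
        reply = self.inventory.handle({"action": "reserve", "item": item})
        return "confirmed" if reply["ok"] else "rejected"

orders = OrderService(InventoryService())
```

Because each unit hides its data behind a message interface, one service can be scaled or replaced without recompiling the other, which is the flexibility monoliths lack.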
How does microservices architecture work?
The key benefits of microservices architecture
What is the microservices architecture used for?

• Microservices architecture makes app
development quicker and more efficient. Agile
deployment capabilities combined with the flexible
application of different technologies drastically
reduce the duration of the development cycle.
• Data processing
• Media content
• Website migration
• Transactions and invoices
Microservices tools
Data tier

• The data tier in DevOps refers to the layer of the
application architecture that is responsible for
storing, retrieving, and processing data.
• The data tier is typically composed of databases, data
warehouses, and data processing systems that
manage large amounts of structured and
unstructured data.
• In DevOps, the data tier is considered an important
aspect of the overall application architecture and is
typically managed as part of the DevOps process.
• Data management and migration:
• Data backup and recovery:
• Data security:
• Data performance optimization:
• Data integration:
DevOps architecture and resilience

• Development and operations both play essential roles in
delivering applications. The development side comprises analyzing
the requirements, designing, developing, and testing the
software components or frameworks.
• The operation consists of the administrative processes, services,
and support for the software.
• When development and operations teams collaborate, the
DevOps architecture is the solution that closes the gap between
development and operations, so delivery can be faster.
• DevOps architecture is used for the applications hosted on the
cloud platform and large distributed applications.
• Agile Development is used in the DevOps architecture so that
integration and delivery can be continuous.
• When the development and operations team works separately
from each other, then it is time-consuming to design, test,
and deploy.
• And if the teams are not in sync with each other, it may
cause a delay in delivery. So DevOps enables the teams to
address their shortcomings and increases productivity.
• The various components that are used in the
DevOps architecture
DevOps resilience

• DevOps resilience refers to the ability of a
DevOps system to withstand and recover from
failures and disruptions.
• This means ensuring that the systems and
processes used in DevOps are robust, scalable,
and able to adapt to changing conditions.
• Some of the key components of DevOps resilience include:
• Infrastructure automation:
• Monitoring and logging:
• Disaster recovery:
• Continuous testing:
• High availability:
• By focusing on these components, DevOps teams can
create a resilient and adaptive DevOps system that is
able to deliver high-quality applications and services,
even in the face of failures and disruptions.
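One small building block of such resilience, retrying a failed operation with exponential backoff, can be sketched in Python; the flaky deployment function below simulates a transient disruption:

```python
import time

def with_retries(operation, attempts=3, base_delay=0.01):
    """Call `operation`, retrying with exponential backoff on failure --
    a tiny sketch of one resilience building block."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure
            time.sleep(base_delay * (2 ** attempt))  # back off, then retry

calls = {"count": 0}

def flaky_deploy():
    """Illustrative operation that fails twice, then succeeds --
    simulating a transient disruption during deployment."""
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("transient failure")
    return "deployed"

result = with_retries(flaky_deploy)
```

The backoff keeps a struggling system from being hammered by immediate retries, while the final re-raise ensures a persistent failure is still reported rather than silently swallowed.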
