
Unit 6

Advanced Techniques in Cloud Computing

Future Trends in Cloud Computing, Mobile Cloud,


Autonomic Cloud Computing: Comet Cloud.
Multimedia Cloud: IPTV,
Energy Aware Cloud Computing,
Jungle Computing,
Distributed Cloud Computing Vs Edge Computing.
Containers,
Docker, and Kubernetes,
Introduction to DevOps.
Future Trends in Cloud Computing
Following are some future trends in cloud computing:
1. Large businesses will create their own private cloud systems.
2. Cloud security will become more trusted and normalized.
3. Cloud adoption will have an upward trend.
4. More applications will proceed into the cloud.
5. Compression methods will be significant in reducing storage charges.
6. Analytics will become a major cloud characteristic.
7. Cloud computing will become more customizable.
8. Cloud computing will reshape and shrink IT departments.
9. Large cloud databases will centralize enormous amounts of data.
10. Mobile devices will take advantage of offloaded computing.
Hybrid cloud will play a significant role in cloud computing. It offers the
blending and coordination of off-site cloud services with on-premises
applications or infrastructure services.
Cloud computing concepts can be applied to infrastructure and data-centre
investments to maximize effectiveness and agility.
Mobile Cloud:
• MCC stands for Mobile Cloud Computing, which is defined as a combination
of mobile computing, cloud computing, and wireless networks that come
together to provide rich computational resources to mobile users, network
operators, and cloud computing providers.
• Mobile Cloud Computing is meant to make it possible for rich mobile
applications to be executed on a wide range of mobile devices.
• In this technology, data processing, and data storage happen outside of mobile
devices.
• Mobile Cloud Computing applications leverage this IT architecture to generate
the following advantages:
1. Extended battery life.
2. Improvement in data storage capacity and processing power.
3. Improved synchronization of data due to the “store in one place, accessible
from anywhere” platform theme.
4. Improved reliability and scalability.
5. Ease of integration.
• In recent developments of MCC, resources are virtualized and assigned
across groups of distributed computers rather than on local computers or
servers, and are provided to mobile devices such as mobile phones, portable
terminals and so on.
• At the same time, various applications based on MCC have been developed
and provided to users, such as Google’s Gmail, Maps and Navigation
systems for mobile, voice search and other applications on the Android
platform, MobileMe from Apple, and Live Mesh from Microsoft.
MCC thus combines cloud computing and mobile computing. Mobile devices
such as laptops, PDAs and smartphones connect to a hotspot or base station
via 3G, Wi-Fi or GPRS.
The computing requirements on the mobile devices themselves are limited,
because the major processing phases are shifted to the cloud.
MCC can therefore be achieved even with low-cost mobile devices or
non-smartphones, using cross-platform middleware.
Mobile users use a web browser to send service requests to the cloud.
Cloud computing represents a huge opportunity for the mobile industry as a
whole.
Ubiquitous mobile cloud computing will benefit the telecom industry as a
whole by making it much more attractive for application service providers
to create new services, or to enrich existing services, using the capabilities,
information and intelligence provided by mobile and fixed telecom
operators.
Key Requirements for Mobile Cloud Computing:

In addition to the basic requirements for exposing network services, there
are some important features of MCC that make seamless service delivery
possible in a cross-network environment. From the perspective of the
enterprise solution provider or web/mobile application developer, the aims
of the MCC platform are as follows:
• A clear and simple API for transparent access, without requiring knowledge
of the underlying network.
• The capability to use a single SLA for deploying applications across
multiple networks.
• Maintenance of network-specific policies.
Autonomic Cloud Computing: Comet Cloud:

• Autonomic computing is one of those ideas that has received much
attention in the field of cloud computing.
• Autonomic computing is a self-managing form of computing.
• It refers to the capability of computer systems to autonomously manage
and regulate their activities in areas such as performance, security, and
resource use, minimizing the need for outside intervention.
• It is to be noted that in the area of cloud computing the autonomic systems
are vital for the management of large and distributed cloud resources.
• Autonomic computing is so called because its concept is based on the
autonomic nervous system of the human body, which controls vital
functions without any conscious direction from the brain.
• In the same way, autonomic computing systems are expected to manage
their own routine work and react to changes in their environment without
the intervention of the user.
• This self-managing capacity is more useful in complex and large-scale
environments, especially in cloud computing.
Key Characteristics of Autonomic Computing:
Autonomic computing systems in cloud environments exhibit four primary
characteristics (a small illustrative sketch follows the list):
• 1. Self-Configuration: The automated adaptation to changes in the system
environment.
• 2. Self-healing: The capacity to identify, diagnose, and fix faults without
requiring external assistance.
• 3. Self-Optimization: The continuous monitoring and modification of
resources to achieve peak performance.
• 4. Self-Protection: The proactive detection and mitigation of security
threats and vulnerabilities.
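
The sketch below is a minimal, illustrative example (in Python) of how an autonomic manager could combine self-healing and self-optimization in a single monitor-and-act loop. It is not tied to any particular cloud API: check_health, restart_node and scale_out are hypothetical placeholder functions, and the cluster object is assumed to expose a node list and a utilization figure.

    import time

    def check_health(node):
        # Hypothetical probe: return True if the node responds and is healthy.
        ...

    def restart_node(node):
        # Hypothetical self-healing action: restart or replace a failed node.
        ...

    def scale_out(cluster):
        # Hypothetical self-optimization action: add capacity to the cluster.
        ...

    def autonomic_loop(cluster, target_utilization=0.7, interval=30):
        # Monitor -> analyze -> act, repeatedly, without operator intervention.
        while True:
            for node in cluster.nodes:
                if not check_health(node):      # self-healing: detect and repair faults
                    restart_node(node)
            if cluster.average_utilization() > target_utilization:
                scale_out(cluster)              # self-optimization: react to load
            time.sleep(interval)                # re-evaluate periodically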
Comet Cloud:
• Comet-Cloud is an open-source platform for building cloud computing
infrastructure.
• It provides a scalable and fault-tolerant architecture that can support a wide
range of cloud-based applications.
The architecture of the Comet-Cloud platform consists of the following layers:
• User Interface Layer: The user interface layer provides a web-based
interface that allows users to interact with the Comet-Cloud platform. This
includes a dashboard that provides real-time information about the status of
the cloud infrastructure, as well as tools for managing user accounts, virtual
machines, and storage.
• Application Layer: The application layer is where the user's applications are
deployed and executed. The applications can be developed using any
programming language or development tool that is compatible with the
underlying virtual machine environment. The Comet-Cloud platform
provides a set of APIs and libraries that make it easy for developers to build
and deploy cloud-based applications.
• Management Layer: The management layer provides a set of core services
that support the execution of applications in the Comet-Cloud environment.
These services include scheduling, resource management, load balancing,
and fault tolerance. The Comet-Cloud platform provides a rich set of APIs
and libraries that make it easy for developers to access these services.
• Virtualization Layer: The virtualization layer provides the infrastructure for
virtualizing hardware resources such as CPU, memory, and storage. The
Comet-Cloud platform uses the Xen hypervisor to provide virtualization
capabilities, which allows multiple virtual machines to run on a single
physical machine.
• Infrastructure Layer: The infrastructure layer represents the underlying
physical infrastructure that supports the Comet-Cloud environment. This
includes the servers, clusters, and other resources that are used to execute
applications in the Comet-Cloud environment. The Comet-Cloud platform
provides a set of APIs and libraries that make it easy for developers to
access and manage these resources.

In summary, the Comet-Cloud architecture provides a scalable and
fault-tolerant platform for building cloud-based applications. The platform
abstracts away the complexity of managing the underlying infrastructure,
allowing developers to focus on building and deploying their applications.
Comet-Cloud is a highly flexible and modular cloud computing platform
that can be customized and extended to meet the needs of various
organizations and businesses.
Multimedia Cloud: IPTV

Introduction:
Cloud computing has opened up opportunities for media operators who serve
content providers, IPTV (Internet Protocol Television) operators and
multimedia players.
For multimedia players, adopting cloud computing is often one of the key
priorities for the coming years.
Some media players, for example media post-production companies, already
use these kinds of cloud-based service capabilities to manage digital
delivery.
Multimedia companies that are looking to move their storage requirements
into the cloud are expected to be among the first adopters.
The cost savings and return on investment for these kinds of services have
accelerated the growth of the cloud computing services market.
The relationships within the media and content value chain are more exposed
because of cloud computing.
IPTV (Internet Protocol Television):
IPTV offers a revenue opportunity for media operators looking to use cloud
computing services.
To use these services, a set-top box (STB) is normally needed.
To reduce costs, the processing power and graphics capabilities of STBs are
limited.
Providers are often unable to take advantage of the latest, more powerful
STBs offered at low cost.
The reason is that replacing the installed base is not economically viable; as a
result, the low-cost, less capable STBs offer fewer services, and innovation
in delivering media to the TV is limited.
IPTV providers have to overcome the barriers imposed by these less capable
STBs, with their limited processing power, in order to:
• Deliver a very good service with high-quality graphics
• Be competitive with other emerging video service providers using new STBs.
IPTV (Internet Protocol television) is a service that provides television
programming and other video content using the Transmission Control
Protocol/Internet Protocol (TCP/IP) suite, as opposed to broadcast TV,
cable TV or satellite signals.
An IPTV service, typically distributed by a service provider, delivers live TV
programs or on-demand video content via IP networks.
By adopting cloud computing services to manage the STB, an IPTV provider
can offer customers services and applications that are not available on the
STB itself, including applications that are more resource-intensive than even
the latest STBs can handle.
This kind of approach results in lower cost compared with replacing old
STBs, for the following reasons:
i) Resources are shared.
ii) The cost of using cloud services is much lower than the cost of replacing
old STBs.
iii) Moving complexity into the cloud simplifies operations at the customer end.
Energy Aware Cloud Computing:

• Energy efficiency is increasingly important in information and communication
technologies.
• The reasons are the increased use of advanced technologies, rising energy
costs and the need to reduce GHG (Greenhouse Gas) emissions. These
factors call for energy-efficient technologies that decrease overall energy
consumption for computation, communication and storage.
• Cloud computing has recently attracted attention as a promising approach for
delivering these advanced technology services by utilizing data-centre
resources.
• Analyses show that an organization or company that switched to the cloud
saved around 68-87% of the energy used for its office computing, with a
corresponding reduction in carbon emissions.
• A research report analyzed the energy efficiency benefits of cloud computing,
which includes an assessment for SaaS, PaaS and IaaS markets. The study
examines the drivers and technical developments related to cloud computing.
Following are some key issues related to cloud computing energy efficiency.
• The cost advantage of public cloud computing providers over traditional
data centers.
• The computing objectives of commercial cloud providers.
• Strategies among cloud computing providers regarding energy efficiency.
• Improvement of sustainability when shifting to the cloud.
• The kind of ROI (Return on Investment) that cloud computing delivers
from an energy-efficiency perspective.
• The impact of cloud computing on carbon emissions from data-centre
operations.
Three points stand out from these results:
1. First, by migrating to the cloud, industries can achieve significant energy
savings and reduced pollution, in addition to savings from reduced server
purchases and maintenance.
2. Second, the reduction in energy consumption was larger than the reduced
number of servers alone would explain. This was due to two factors: the
organization's own server usage, and hence the power consumed, is lower, and
the cloud provider's servers operate in facilities with better PUE (Power Usage
Effectiveness), which further reduces energy consumption.
3. Third, the results do not reflect the energy consumed by the client devices.
There are two models of office computing to compare: the standard in-house
model and the cloud-based model enabled by Google Apps.
Migration to Google Apps affects energy consumption in the following three
ways:
1. Reduces direct server energy by 70-90%: far fewer servers are required, and
Google's servers are more fully loaded and run more efficiently. An
organization that hosts its own IT services must provision redundant servers
for safety and extra servers to handle peak demand; the result is more servers
running at a utilization of 10% or less.
2. Reduces energy for server cooling by 70-90%: more energy consumed by a
server means more heat produced, which makes the air-conditioning systems
work harder than usual. Roughly 1.5 watts of cooling are required for each
watt of direct power consumed by a server. In addition, large corporate data
centers indirectly consume about 0.5 watts of power for each watt of direct
power consumption (see the worked example after this list).
3. Increases energy use by 2-3% through the use of Google's servers and
heavier network traffic: the impressive savings from points 1 and 2 are not
free; as a result of using cloud-based services, some energy consumption is
added by Google's servers and by the extra traffic on the Internet.
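
As a rough, illustrative back-of-the-envelope calculation (a sketch only, using the ~1.5 W cooling and ~0.5 W overhead figures quoted in point 2), each watt of direct server power implies roughly 3 W of total facility power, so cutting direct server energy by 70-90% shrinks the associated cooling and overhead energy along with it:

    direct = 1.0                 # 1 W of direct server power
    cooling = 1.5 * direct       # ~1.5 W of cooling per watt of server power (figure from point 2)
    overhead = 0.5 * direct      # ~0.5 W of other data-centre overhead per watt (figure from point 2)
    total_per_watt = direct + cooling + overhead   # ~3.0 W of facility power per server watt

    for reduction in (0.70, 0.90):
        remaining = total_per_watt * (1 - reduction)
        print(f"{reduction:.0%} cut in server energy leaves ~{remaining:.1f} W of every {total_per_watt:.1f} W")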
Jungle Computing

• Jungle computing is a Distributed Computing paradigm.


• A jungle computing system lets the user exploit all of the computing
resources available in the environment, which may include clusters,
clouds, grids, desktop grids, supercomputers, stand-alone machines
and mobile devices.

Reasons to use Jungle Computing:

• There are many reasons to use jungle computing systems. Running an
application on a single system may not be possible, because it may
need more computing power than is available there. A single system
may also not satisfy all the requirements of an application, because
the computational requirements of different parts of the application
can differ.
All resources in a jungle computing system are equal in one sense: each
provides some amount of processing power, memory and storage.
End users do not need to care whether a resource is located in a remote
cloud or in a cluster down the hall, as long as the compute resource runs
their application effectively.
A jungle computing system is heterogeneous, because the properties of its
resources differ in processor architecture, memory capacity and performance.
In the absence of central administration of these unrelated systems, installed
software such as libraries and compilers may also differ.
This heterogeneity makes it hard to run applications across several resources.
An application may have to be recompiled, or even partially rewritten, for
each resource in order to handle the differences in available software and
hardware, and different middleware client software may be required to
access each resource.
Jungle computing systems may also lack good connectivity between resources.
This aspect reduces the usage of jungle computing.
Distributed Cloud Computing Vs Edge Computing

Edge computing and distributed computing are two computing approaches that
aim to enhance performance, efficiency, and scalability.
Edge computing focuses on placing computational resources, such as processing
power and storage, closer to the data source or end-users. This proximity
enables real-time data processing, reduces latency, and minimizes the need
for data transfer to remote servers or the cloud. Edge computing is
particularly beneficial for applications that require low latency, high
responsiveness, and efficient bandwidth usage.
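A toy sketch of the edge idea follows (the send_to_cloud function is a hypothetical placeholder; a real edge node would use a protocol such as HTTP or MQTT): raw readings are processed where they are produced, and only a compact summary is forwarded, which cuts both latency and the amount of data sent to the central cloud.

    def summarize(readings):
        # Process the raw data locally, at the edge, instead of shipping it all upstream.
        return {"count": len(readings),
                "mean": sum(readings) / len(readings),
                "max": max(readings)}

    def send_to_cloud(summary):
        # Hypothetical uplink to the central cloud; only the small summary crosses the network.
        print("uploading", summary)

    raw_readings = [21.3, 21.4, 22.0, 25.7, 21.9]   # data produced at the source (e.g. a sensor)
    send_to_cloud(summarize(raw_readings))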
Advantages of Edge Computing

1. Low Latency: Edge computing makes data processing faster by putting
computing resources near where the data is created or used. Data does not
have to travel long distances, which leads to quicker response times and
immediate processing.
2. Offline Operation: Because processing happens locally at the edge, devices
and applications can continue to operate even when connectivity to the
central cloud or data center is slow, intermittent, or temporarily unavailable.
3. Improved Performance: Edge computing boosts performance by processing
data in close proximity to its source, reducing the need for distant cloud or
central data centers. This results in quicker data processing, improved
application performance, and a more satisfying user experience, especially for
time-sensitive applications such as IoT, autonomous vehicles, and augmented
reality.
Distributed computing involves utilizing multiple interconnected nodes or
machines to perform processing and storage tasks. The workload is divided
and distributed among these nodes, allowing for parallel execution and
increased computational capacity. Distributed computing enables efficient
handling of large-scale workloads, improved fault tolerance, and scalability.
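The core idea, dividing a workload and executing the parts in parallel, can be illustrated with a small single-machine sketch using Python's multiprocessing module; the worker processes here stand in for the interconnected nodes of a real distributed system, which would additionally handle networking and fault tolerance.

    from multiprocessing import Pool

    def process_chunk(chunk):
        # Each worker handles one part of the divided workload in parallel.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        chunks = [data[i::4] for i in range(4)]              # divide the workload into 4 parts
        with Pool(processes=4) as pool:
            partial_results = pool.map(process_chunk, chunks)    # parallel execution
        print("combined result:", sum(partial_results))          # aggregate the partial results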
Advantages of Distributed Computing:

1. Scalability: Distributed computing enables organizations to expand their
computing capabilities effectively by dividing the workload among multiple
machines or nodes. This allows them to manage bigger workloads and meet
the needs of more users without relying solely on a single central system.
2. Bandwidth Optimization: When distributed nodes are placed near the data, as
in edge deployments, only the necessary data or summarized information needs
to be sent to the cloud or central servers. As a result, less data is transmitted
over the network, saving bandwidth and reducing the associated costs.
3. Geographic Distribution: Distributed computing lets organizations place their
computing resources in multiple locations or data centers to serve users
worldwide. This decreases delays and speeds up response times, resulting in
an improved user experience. Additionally, it enables effortless global
expansion.
Difference Between Edge Computing and Distributed Computing
In short, edge computing pushes computation and storage toward the data
source to reduce latency, while distributed computing spreads a workload
across many interconnected nodes to gain scale, parallelism, and fault
tolerance; the two approaches are complementary and are often combined.
Containers
• Containers are packages of software that contain all of the necessary elements
to run in any environment.
• In this way, containers virtualize the operating system and run anywhere, from
a private data center to the public cloud or even on a developer’s personal
laptop.
• Containers are lightweight packages of your application code together with
dependencies such as specific versions of programming language runtimes
and libraries required to run your software services.
• Containers make it easy to share CPU, memory, storage, and network
resources at the operating systems level and offer a logical packaging
mechanism in which applications can be abstracted from the environment in
which they actually run.
• From Gmail to YouTube to Search, everything at Google runs in containers.
Containerization allows Google's development teams to move fast, deploy
software efficiently, and operate at an unprecedented scale.
Benefits of containers:
• Separation of responsibility:
Containerization provides a clear separation of responsibility, as developers
focus on application logic and dependencies, while IT operations teams can
focus on deployment and management instead of application details such as
specific software versions and configurations.
• Workload portability:
Containers can run virtually anywhere, greatly easing development and
deployment: on Linux, Windows, and Mac operating systems; on virtual
machines or on physical servers; on a developer’s machine or in data centers
on-premises; and of course, in the public cloud.
• Application isolation:
Containers virtualize CPU, memory, storage, and network resources at the
operating system level, providing developers with a view of the OS logically
isolated from other applications.
Use of Containers:
Containers offer a logical packaging mechanism in which applications can be
abstracted from the environment in which they actually run. This decoupling
allows container-based applications to be deployed easily and consistently,
regardless of whether the target environment is a private data center, the
public cloud, or even a developer’s personal laptop.

• Agile development: Containers allow your developers to move much more
quickly by avoiding concerns about dependencies and environments.
• Efficient operations: Containers are lightweight and allow you to use just the
computing resources you need. This lets you run your applications efficiently.
• Run anywhere: Containers are able to run virtually anywhere. Wherever you
want to run your software, you can use containers.
Docker and Kubernetes

Kubernetes and Docker are both popular technologies for containerized
development, but they have different roles:
• Docker: A container runtime that allows users to build, test, and deploy
applications into containers. Containers are standardized units that
include everything an application needs to run, such as libraries,
system tools, and code.
• Kubernetes: A container orchestration tool that allows users to manage,
coordinate, and schedule containers at scale. Kubernetes can automate
tasks like scaling, load balancing, and self-healing.
Docker and Kubernetes are complementary technologies that can be
used together to streamline the development process and ensure
application security.
Docker
• Docker is a platform used to containerize your software: you can easily build
your application, package it together with the dependencies it requires into a
container, and then easily ship that container to run on other machines (a
small sketch follows these points).
• Docker simplifies the DevOps methodology by allowing developers to create
templates called images, from which you can create lightweight,
virtual-machine-like environments called containers.
• Docker makes things easier for the software industry by giving it the
capabilities to automate infrastructure, isolate applications, maintain
consistency, and improve resource utilization.
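
The sketch below illustrates this build-and-run workflow using the Docker SDK for Python (the docker package). It assumes a local Docker daemon is running and that a Dockerfile describing the application and its dependencies exists in the current directory; the image tag is an example value, not something prescribed by the course material.

    import docker

    client = docker.from_env()                       # connect to the local Docker daemon

    # Build an image from the Dockerfile in the current directory: this packages
    # the application code together with the dependencies it needs.
    image, build_logs = client.images.build(path=".", tag="myapp:1.0")

    # Run the packaged application as an isolated container and capture its output.
    output = client.containers.run("myapp:1.0", remove=True)
    print(output)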
Key Features of Docker:
The following are the key features of docker:
• Easy configuration: This is one of Docker's key features; you can deploy your
code with less time and effort, because Docker can be used in a wide variety
of environments. The infrastructure requirements are no longer tied to the
environment of the application, which makes configuring the system easier
and faster.
• You can use Swarm: Swarm is a clustering and scheduling tool for Docker
containers. It uses the Docker API as its front end, which lets various tools
act as the controller, and it lets us manage a cluster of Docker hosts as a
single virtual host. It is a self-organizing group of engines that enables
pluggable back ends.
• Manages security: Docker allows us to store secrets in the swarm itself and
then choose which services get access to which secrets. The engine includes
commands for creating and inspecting secrets.
• Services: A service is a list of tasks that lets us specify the desired state of
containers inside a cluster. Each task represents one instance of a container
that should be running, and Swarm schedules these tasks across the nodes.
• More productivity: By easing technical configuration and enabling rapid
deployment of applications, Docker increases productivity. It not only runs
applications in isolated environments but also reduces the resources they
require.
Docker Advantages:
The following are the advantages of docker:
• Build the app only once: An application inside a container can run on any
system that has Docker installed, so there is no need to build and configure
the app multiple times on different platforms.
• More sleep and less worry: With Docker, you test your application inside a
container and ship it inside a container. This means the environment in which
you test is identical to the one on which the app will run in production.
• Portability: Docker containers can run on any platform: a local system,
Amazon EC2, Google Cloud, VirtualBox, and so on.
• Version control: Like Git, Docker has a built-in version control system. Docker
containers work much like Git repositories, allowing you to commit changes to
your Docker images and version-control them.
Docker Disadvantages:
The following are the disadvantages of docker:
• Missing features: A number of features are still in progress, such as container
self-registration, self-inspection, copying files from the host to a container,
and more.
• Data in the container: When a container goes down, its data needs a backup
and recovery strategy; although there are several solutions for this, they are
not yet automated or very scalable.
• Graphical applications: Docker was designed for deploying server applications
that do not require a graphical interface; there are some creative strategies,
such as X11 forwarding, that you can use to run GUI applications inside a
container.
• The benefit can be small: Generally, only applications designed to run as a
discrete set of microservices stand to gain the most from containers;
otherwise, Docker's main real benefit is that it simplifies application delivery
by providing an easy packaging mechanism.
Kubernetes:
Kubernetes is a container management (orchestration) system originally
developed at Google and written in the Go language. It helps you to manage
containerized applications in various types of physical, virtual, and cloud
environments. Kubernetes is a highly flexible tool for delivering even
complex applications consistently.
Key Feature of Kubernetes
The following are the key features of Kubernetes:
• Runs everywhere: It is an open-source tool and gives you the freedom to use
on-premises, public and hybrid cloud infrastructure, letting you move your
workload anywhere you want.
• Automation: Kubernetes automates scheduling; for instance, it decides on
which suitable host a newly launched container will run.
• Interaction: Kubernetes can manage several clusters at the same time, and it
allows not only horizontal but also vertical scaling.
• Additional services: As well as managing containers, Kubernetes offers
security, networking and storage services.
• Self-monitoring: Kubernetes also provides self-monitoring, constantly
checking the health of the nodes and of the containers themselves.
Kubernetes Advantages:
The following are the advantages of Kubernetes:
• Automatic container schedule: Kubernetes may reschedule a container from
one node to another to increase resource utilization. This means you get more
work out of the same number of machines, which saves money.
• Service discovery: When you have a bunch of services that need to
communicate with each other it’s critical that they are able to find each other
first. This is especially true because containers are automatically scheduled
and may potentially get moved around. Thankfully, Kubernetes makes it easy
for containers to communicate with each other.
• Self-Healing: Kubernetes automatically monitors containers and reschedules
them if they crash or are terminated when they shouldn’t. Kubernetes will also
reschedule containers in the event that the node that they’re living on fails.
• Rolling Upgrades: Kubernetes has the ability to perform rolling updates, in
which old containers are judiciously swapped out for a new version of the
same container, all without disrupting the service provided by the running
application (a small sketch follows this list).
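
The following is a small sketch of triggering such a rolling upgrade with the official Kubernetes Python client (the kubernetes package); the deployment name, namespace and image tag are example values. Changing the image in the Pod template makes Kubernetes replace the old containers with the new version gradually, without taking the service down.

    from kubernetes import client, config

    config.load_kube_config()                 # use the local kubeconfig for cluster credentials
    apps = client.AppsV1Api()

    # Patching the Pod template's image triggers a rolling update: old Pods are
    # swapped out for Pods running the new version, a few at a time.
    patch = {"spec": {"template": {"spec": {
        "containers": [{"name": "web", "image": "myapp:2.0"}]}}}}

    apps.patch_namespaced_deployment(name="web", namespace="default", body=patch)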
Kubernetes Disadvantages:
The following are the disadvantages of Kubernetes:
• Steep learning curve: Kubernetes is not an easy platform to learn, even for
the most experienced developers and DevOps engineers.
• Install & configure: Kubernetes consists of multiple no. of components that
should be configured and installed separately to initialize the cluster. if you
install Kubernetes manually you should also configure the security which
includes creating a certificate authority & issuing the certificate
• No high availability: Kubernetes does not provide high availability mode by
default to create a fault-tolerant cluster you have to manually configure HA
for your etc cluster.
• Compatibility issues: Sometimes when you have containers you may need to
use Docker with communities. But at that time communities were not
compatible with existing Docker CLI and composing tools. And it requires
more effort during the migration whenever you have to migrate to a stateless It
actually requires many efforts.
Using Kubernetes with Docker:
Kubernetes will serve as a container orchestration tool when used with Docker,
and Docker will assist us in creating the images needed to execute containers
in Kubernetes. All container deployments, scaling, and scheduling to the
correct node in the cluster may be handled by Kubernetes.
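
A minimal sketch of this hand-off, again using the official Kubernetes Python client (names, labels, the replica count and the image tag are example values): a Docker-built image is described as a Deployment, and Kubernetes then schedules the requested number of containers onto suitable nodes in the cluster and keeps them running.

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    container = client.V1Container(
        name="web", image="myapp:1.0",                       # the Docker-built image to run
        ports=[client.V1ContainerPort(container_port=8080)])
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "web"}),
        spec=client.V1PodSpec(containers=[container]))
    spec = client.V1DeploymentSpec(
        replicas=3,                                          # Kubernetes keeps 3 copies running
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=template)
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"), spec=spec)

    # Kubernetes schedules the containers onto suitable nodes and restarts them if they fail.
    apps.create_namespaced_deployment(namespace="default", body=deployment)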


Kubernetes vs Docker: Comparison

In brief, Docker is a container runtime for building and running individual
containers, while Kubernetes is an orchestration system that manages,
schedules, and scales many containers across a cluster; the two are typically
used together.
Introduction to DevOps:
DevOps is a combination of two words: Development and Operations. It is a
culture that promotes having the development and operations processes
work collectively.
By fostering collaboration and leveraging automation technologies, DevOps
enables faster, more reliable code deployment to production in an efficient and
repeatable manner.
DevOps promotes collaboration between Development and Operations team to
deploy code to production faster in an automated & repeatable way.
The goal of DevOps is to increase an organization’s speed when it comes to
delivering applications and services. Many companies have successfully
implemented DevOps to enhance their user experience including Amazon,
Netflix, etc.
DevOps helps to increase organization speed to deliver applications and
services. It also allows organizations to serve their customers better and
compete more strongly in the market.
DevOps is nothing but a practice or methodology of making "Developers" and
"Operations" folks work together.
Why DevOps?
Why do we need DevOps instead of older methods?

• Traditionally, the operations and development teams worked in complete
isolation.
• Testing and deployment were performed only after design and build, which is
why they consumed more time than the actual build cycles.
• Without DevOps, team members spend a large amount of time on designing,
testing, and deploying instead of building the project.
• Manual code deployment leads to human errors in production.
• The coding and operations teams had separate timelines and were not in sync,
causing further delays.
How DevOps Works?
• DevOps will remove the “siloed” conditions between the development team
and operations team. In many cases these two teams will work together for the
entire application lifecycle, from development and test to deployment to
operations, and develop a range of skills not limited to a single function.
• Teams in charge of security and quality assurance may also integrate more
closely with development and operations over the course of an application’s
lifecycle under various DevOps models. DevSecOps is the term used when
security is a top priority for all members of a DevOps team.
• These teams employ procedures to automate labor-intensive, manual processes
that were slow in the past. They employ a technological stack and tooling that
facilitate the swift and dependable operation and evolution of apps.
• A team’s velocity is further increased by these technologies, which also assist
engineers in independently completing activities (such as provisioning
infrastructure or delivering code) that ordinarily would have needed assistance
from other teams.
DevOps Architecture Features:
• Here are some key features of the DevOps architecture:

1) Automation
• Automation can greatly reduce time consumption, especially during the
testing and deployment phases. Productivity increases and releases are made
quicker by automation, which also helps catch bugs early so that they can be
fixed easily. For continuous delivery, each code change is verified through
automated tests, cloud-based services, and automated builds, which promotes
deployment to production using automated deploys (a minimal sketch follows
this paragraph).
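
A minimal sketch of one such automated step follows (the test command, image name and use of the Docker CLI are illustrative assumptions, not prescribed tools): the automated tests act as a gate, and only if they pass is the container image built and published for deployment.

    import subprocess
    import sys

    def run(cmd):
        # Run one pipeline step; stop the whole pipeline if the step fails.
        print("+", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            sys.exit("pipeline step failed: " + " ".join(cmd))

    run(["python", "-m", "pytest"])                       # automated tests gate the release
    run(["docker", "build", "-t", "myapp:latest", "."])   # automated build of the image
    run(["docker", "push", "myapp:latest"])               # publish the image for deployment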
2) Collaboration
The Development and Operations teams collaborate as a single DevOps team,
which improves the cultural model as the teams become more productive,
strengthening accountability and ownership. The teams share their
responsibilities and work closely in sync, which in turn makes deployment to
production faster.
3) Integration
Applications need to be integrated with other components in the environment.
The integration phase is where the existing code is combined with new
functionality and then tested. Continuous integration and testing enable
continuous development. The frequency of releases and the use of microservices
lead to significant operational challenges. To overcome such problems,
continuous integration and delivery are implemented to deliver in a quicker,
safer, and reliable manner.
4) Configuration management
It ensures that the application interacts only with those resources that are
relevant to the environment in which it runs. Configuration is not hard-coded
into the application; instead, the configuration that is external to the
application is kept separate from the source code. The configuration file can
be written during deployment, or it can be loaded at run time, depending on
the environment in which the application is running (a small sketch follows
this paragraph).
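
A small sketch of this idea (the variable names and default values are illustrative assumptions): the application reads its configuration from the environment at run time instead of hard-coding it, so the same code runs unchanged in development, test, and production.

    import os

    # Externalized configuration: the values come from the deployment environment,
    # not from the source code.
    config = {
        "environment": os.environ.get("APP_ENV", "development"),
        "database_url": os.environ.get("DATABASE_URL", "sqlite:///dev.db"),
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
    }

    print(f"Starting in {config['environment']} mode, logging at {config['log_level']}")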
DevOps Life Cycle:
• The DevOps lifecycle is the set of phases that DevOps teams follow,
combining Development and Operations duties for faster software delivery.
DevOps follows a set of practices that cover coding, building, testing,
releasing, deploying, operating, monitoring, and planning. The DevOps
lifecycle moves through phases such as continuous development, continuous
integration, continuous testing, continuous monitoring, and continuous
feedback.
• Each phase of the DevOps lifecycle is associated with tools and technologies
used to carry it out. Many of the frequently used tools are open source and
are adopted based on business requirements. The DevOps lifecycle is easy to
manage and supports quality delivery.
7 Cs of DevOps
1. Continuous Development
2. Continuous Integration
3. Continuous Testing
4. Continuous Deployment/Continuous Delivery
5. Continuous Monitoring
6. Continuous Feedback
7. Continuous Operations
DevOps Advantages and Disadvantages
Here are some advantages and disadvantages that DevOps can have for business,
such as:
Advantages:
• DevOps is an excellent approach for quick development and deployment of
applications.
• It responds faster to the market changes to improve business growth.
• DevOps increases business profit by decreasing software delivery time and
delivery costs.
• DevOps streamlines the delivery process, which gives clarity on product
development and delivery.
• It improves customer experience and satisfaction.
• DevOps simplifies collaboration and places all tools in the cloud for customers
to access.
• DevOps means collective responsibility, which leads to better team
engagement and productivity.
Disadvantages:
• DevOps professionals and expert developers are in short supply.
• Developing with DevOps can be expensive.
• Adopting new DevOps technologies in industry is hard to manage in a
short time.
• Lack of DevOps knowledge can be a problem in the continuous integration of
automation projects.
