UNIT-2

CLOUD CONCEPTS AND TECHNOLOGIES


➢VIRTUALIZATION
➢LOAD BALANCING
➢REPLICATION
➢SOFTWARE DEFINED NETWORKING
➢NETWORK FUNCTION VIRTUALIZATION (NFV)
MIGRATING INTO A CLOUD
➢THE SEVEN-STEP MODEL OF MIGRATION INTO A CLOUD
➢MIGRATION RISKS AND MITIGATION
VIRTUALIZATION
➢ Virtualization refers to the partitioning of the resources of a
physical system (such as computing, storage, network and
memory) into multiple virtual resources.
➢ Virtualization creates a virtual layer using hypervisor
software, which manages the resources assigned to the virtual
instances. The newly formed virtual representations are known
as virtual machines (VMs).
• Virtualization is a technique for separating a service from the underlying
physical delivery of that service.
• It is the process of creating a virtual version of something like computer
hardware.
• It was initially developed during the mainframe era.
• It involves using specialized software to create a virtual or software-created
version of a computing resource rather than the actual version of
the same resource. With the help of virtualization, multiple operating
systems and applications can run on the same machine and the same
hardware at the same time, increasing the utilization and flexibility of the
hardware.
• In other words, virtualization is one of the main cost-effective, hardware-reducing, and
energy-saving techniques used by cloud providers.
Virtualization allows sharing of a single physical instance of a resource or an
application among multiple customers and organizations at one time.
• It does this by assigning a logical name to physical storage and providing a
pointer to that physical resource on demand.
• The term virtualization is often synonymous with hardware virtualization,
which plays a fundamental role in efficiently delivering Infrastructure-as-a-
Service (IaaS) solutions for cloud computing.
• Moreover, virtualization technologies provide a virtual environment for not
only executing applications but also for storage, memory, and networking.
How Virtualization Works in Cloud Computing
➢ Virtualization has a prominent impact on Cloud
Computing. In the case of cloud computing, users store
data in the cloud, but with the help of Virtualization,
users have the extra benefit of sharing the infrastructure.
➢ Cloud vendors take care of the required physical
resources, but these services can be expensive, which
impacts every user and organization.
➢ Virtualization helps users and organizations obtain the
services a company requires from external (third-party)
providers over shared infrastructure, which helps in
reducing costs to the company.
➢ This is the way virtualization works in cloud computing.
➢The virtualization layer consists of a hypervisor or a
virtual machine monitor (VMM).
➢ Hypervisor presents a virtual operating platform to a
guest operating system (OS).
Type-1 Hypervisor (also called a native hypervisor)
• Type-1 or native hypervisors run directly on the host
hardware, control the hardware, and monitor the
guest operating systems.
Type-2 Hypervisor (also called a hosted hypervisor)
• Type-2 or hosted hypervisors run on top of
a conventional (main/host) operating system and
monitor the guest operating systems.
Benefits of Virtualization
• More flexible and efficient allocation of
resources.
• Enhance development productivity.
• It lowers the cost of IT infrastructure.
• Remote access and rapid scalability.
• High availability and disaster recovery.
• Pay-per-use of the IT infrastructure on demand.
• Enables running multiple operating systems.
Types of Virtualization

1. Full Virtualization
2. Para-Virtualization
3. Hardware Virtualization
1. Full Virtualization
• In full virtualization, the virtualization layer completely decouples
the guest OS from the underlying hardware.
• The guest OS requires no modification and is not aware that it is
being virtualized.
• Full virtualization is enabled by direct execution of user requests and
binary translation of OS requests.
• Microsoft and Parallels systems are examples of full virtualization.
2. Para-Virtualization
• In para-virtualization, the guest OS is modified to enable
communication with the hypervisor to improve performance and
efficiency.
• The guest OS kernel is modified to replace non-virtualizable
instructions with hyper-calls that communicate directly with the
virtualization layer hypervisor.
• VMware and Xen are some examples of para-virtualization.
3. Hardware Virtualization
• Hardware-assisted virtualization is enabled by
hardware features such as Intel’s Virtualization
Technology (VT-x) and AMD’s AMD-V.
• In hardware-assisted virtualization, privileged and
sensitive calls are set to automatically trap to the
hypervisor.
• Thus, there is no need for either binary translation or
para-virtualization.
LOAD BALANCING
➢ Load balancing distributes workloads across multiple
servers to meet application workload demands.
➢ Load balancing is the method that allows you to maintain a proper balance of the amount of work
being done on different devices or pieces of hardware equipment.
➢ Typically, the load is balanced between different servers, or
between the CPU and hard drives in a single cloud server.
➢ Load balancing was introduced for various reasons.
➢ One of them is to improve the speed and performance of each single device, and the other is to
protect individual devices from hitting their limits, which would reduce their performance.
➢ Cloud load balancing is defined as dividing workload and computing properties in cloud
computing.
➢ It enables enterprises to manage workload demands or application demands by distributing
resources among multiple computers, networks or servers.
➢ Cloud load balancing involves managing the movement of workload traffic and demands over
the Internet.
➢ Traffic on the Internet is growing rapidly, at a rate of almost 100% annually.
➢ Therefore, the workload on servers is increasing rapidly, leading to overloading of the
servers, mainly the popular web servers.
➢ There are two primary solutions to overcome the problem of overloading on the server:
➢ The first is a single-server solution, in which the server is upgraded to a higher-performance server.
However, the new server may also be overloaded soon, demanding another upgrade. Moreover,
the upgrading process is arduous and expensive.
➢ The second is a multiple-server solution, in which a scalable service system is built on a cluster of
servers. This makes it more cost-effective and more scalable to build a server cluster system for
network services.
• A load balancer acts as the “traffic cop” sitting in front of your
servers and routing client requests across all servers capable of
fulfilling those requests in a manner that maximizes speed and
capacity utilization and ensures that no one server is
overworked, which could degrade performance.
• If a single server goes down, the load balancer redirects traffic
to the remaining online servers.
• When a new server is added to the server group, the load
balancer automatically starts to send requests to it.
In this manner, a load balancer performs the following
functions:
• Distributes client requests or network load efficiently across
multiple servers
• Ensures high availability and reliability by sending requests
only to servers that are online
• Provides the flexibility to add or subtract servers as demand
dictates
Load balancing solutions can be categorized into two types:
1) Software-based load balancers: Software-based load
balancers run on standard hardware (desktop, PC) and
standard operating systems.
2) Hardware-based load balancers: Hardware-based load
balancers are dedicated boxes that contain application-specific
integrated circuits (ASICs) optimized for a particular
use. ASICs allow network traffic to be forwarded at high speeds
and are often used for transport-level load balancing, because
hardware-based load balancing is faster than a software
solution.
Load Balancing Algorithms
Different load balancing algorithms provide different benefits; the choice of load
balancing method depends on your needs (a short code sketch of three of these algorithms follows the list):
• Round Robin – Requests are distributed across the group of servers sequentially.
• Least Connections – A new request is sent to the server with the fewest current
connections to clients. The relative computing capacity of each server is factored
into determining which one has the least connections.
• Least Time – Sends requests to the server selected by a formula that combines the
fastest response time and fewest active connections. Exclusive to NGINX Plus.
• Hash – Distributes requests based on a key you define, such as the client IP address
or the request URL. NGINX Plus can optionally apply a consistent hash to minimize
redistribution of loads if the set of upstream servers changes.
• IP Hash – The IP address of the client is used to determine which server receives the
request.
• Random with Two Choices – Picks two servers at random and sends the request to
the one selected by then applying the Least Connections algorithm (or, for
NGINX Plus, the Least Time algorithm, if so configured).
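The following minimal Python sketch illustrates three of these algorithms; the server names and data structures are illustrative assumptions, not part of any load-balancer product:

```python
# Minimal sketches of three scheduling algorithms described above.
from itertools import cycle

servers = ["server-a", "server-b", "server-c"]

# Round Robin: hand requests to servers in a fixed, repeating order.
_rr = cycle(servers)
def round_robin():
    return next(_rr)

# Least Connections: pick the server with the fewest active connections.
active = {s: 0 for s in servers}
def least_connections():
    target = min(active, key=active.get)
    active[target] += 1              # a connection is opened
    return target

def release(server):
    active[server] -= 1              # a connection is closed

# IP Hash: the same client IP always maps to the same server.
# (Real balancers use a stable hash; this byte sum is only illustrative.)
def ip_hash(client_ip):
    return servers[sum(client_ip.encode()) % len(servers)]

print([round_robin() for _ in range(4)])   # a, b, c, a
print(least_connections())                 # server with fewest connections
print(ip_hash("203.0.113.7"))              # deterministic for this IP
```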
Benefits of Load Balancing
• Reduced downtime
• Scalable
• Redundancy
• Flexibility
• Efficiency
Replication
➢ Replication is used to create and maintain multiple copies
of the data in the cloud.
➢ Cloud enables rapid implementation of replication
solutions for disaster recovery for organizations.
➢ With cloud-based data replication, organizations can plan
for disaster recovery without making any capital
expenditures on purchasing, configuring and managing
secondary site locations.
➢ Data replication is also used to decrease access latency in a cloud
environment.
[Figure: original data in primary storage, copied to replica 1 and replica 2]
Replication in Cloud Computing
• Replication refers to storing the same data in several different locations, usually with
synchronization between these data sources.
• Replication in cloud computing is done partly for backup and partly to
reduce response times, especially for read requests.
• The simplest form of data replication in a cloud computing environment is to
store a copy of a file, analogous to copying and pasting a file in
any modern operating system.
• Replication is the reproduction of the original data in unchanged form.
• In the frequently encountered master/slave replication, a distinction is made
between the original data (primary data) and the dependent copies. With peer
copies (as in version control) the data sets must be merged
(synchronized).
• It is sometimes important to know which data sets the replicas must contain.
• Depending on the type of replication, a certain period of time lies between
the creation or processing of the primary data and its replication.
• This period is usually referred to as latency.
➢ Replication is synchronous when a change operation on a data object can only
complete successfully once it has also been performed on the
replicas.
➢ In order to implement this technology, a protocol ensures the
indivisibility of transactions: the commit protocol.
➢ If there is latency between the processing of the primary data and its
replication, the replication is asynchronous.
➢ With synchronous replication the data are identical at all times. A simple variant of asynchronous
replication is “File Transfer Replication”, the transfer of files via
FTP or SSH.
➢ The replicas then represent only a snapshot of the primary data at
a specific time. At the database level this can happen at short time
intervals: the transaction logs are transported from one server
to another and read into the database.
➢ Assuming an intact network, the latency corresponds to the time
interval at which the transaction logs are written.
➢ Four methods can be used for replication in cloud computing:
merge replication, primary copy, snapshot replication, and standby
replication. A toy sketch contrasting synchronous and asynchronous writes follows.
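The sketch below contrasts the two modes in Python; the Replica class, the simulated latency, and the in-memory stores are assumptions made for illustration, not a real replication protocol:

```python
# Toy contrast of synchronous vs. asynchronous replication.
import threading
import time

class Replica:
    """An illustrative replica holding a copy of the data."""
    def __init__(self, name):
        self.name = name
        self.store = {}

    def write(self, key, value):
        time.sleep(0.01)              # simulated network/disk latency
        self.store[key] = value

primary = {}
replicas = [Replica("replica-1"), Replica("replica-2")]

def synchronous_write(key, value):
    """Completes only after every replica has applied the change."""
    primary[key] = value
    for r in replicas:
        r.write(key, value)           # caller blocks until each replica acks

def asynchronous_write(key, value):
    """Returns immediately; replicas catch up later (the latency period)."""
    primary[key] = value
    for r in replicas:
        threading.Thread(target=r.write, args=(key, value)).start()

synchronous_write("order:42", "paid")      # replicas identical on return
asynchronous_write("order:43", "shipped")  # replicas briefly lag the primary
```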
Types of Replication

1) Array-based Replication
2) Host-based Replication
3) Network-based Replication
1) Array-based Replication
• An array-based data replication strategy uses built-in
software to automatically replicate data.
• With this type of data replication, the software is used in
compatible storage arrays to copy data between each.
• Using this method has several advantages and
disadvantages.
2)Host-Based Data Replication
• Host-based data replication uses the servers to copy data
from one site to another site.
• Host-based replication software usually includes options like
compression, encryption, and throttling, as well as failover.
• Using this method has several advantages and
disadvantages.
3) Network-Based Data Replication
• Network-based data replication uses a device or
appliance that sits on the network in the path of the
data to manage replication.
• The data is then copied to a second device. These
devices usually have proprietary replication technology
but can be used with any host server and storage
hardware.
Software Defined Networking
➢ Software-Defined Networking (SDN) is a networking architecture that
separates the control plane from the data plane and centralizes the network
controller.
➢ To understand software-defined networks, we need to understand the various
planes involved in networking.
Data Plane
Control Plane
1) Data plane: All activities involving, as well as resulting from, data packets
sent by the end user belong to this plane. This includes:
• Forwarding of packets.
• Segmentation and reassembly of data.
• Replication of packets for multicasting.
2) Control plane: All activities that are necessary to perform data plane activities but
that do not involve end-user data packets belong to this plane. In other words, this is
the brain of the network.
The activities of the control plane include:
• Making routing tables.
• Setting packet handling policies.
Two types of architecture
1) Conventional network architecture
➢ The control plane and data plane are coupled.
➢ The control plane is the part of the network that carries
the signaling and routing message traffic, while
the data plane is the part of the network that carries
the payload data traffic.
2) SDN Architecture
➢ The control and data planes are decoupled and the
network controller is centralized.
A typical SDN architecture consists of three layers.
1) Application layer: It contains the typical network
applications like intrusion detection, firewall, and load
balancing
2) Control layer: It consists of the SDN controller, which
acts as the brain of the network. It also provides
hardware abstraction to the applications written on top
of it.
3) Infrastructure layer: This consists of physical
switches, which form the data plane and carry out the
actual movement of data packets.
• The layers communicate via a set of interfaces called
the northbound APIs (between the application and
control layers) and the southbound APIs (between the
control and infrastructure layers).
Northbound APIs:
➢ Northbound APIs are used to communicate between the
SDN controller and the services and applications running
over the network.
Southbound APIs:
➢ Southbound APIs allow the controller to modify the switch/router
configurations.
Applications can now communicate with an SDN
controller, which can configure our switches and
routers almost in real time.
SDN Architecture
➢ In a traditional network, each switch has its own data
plane as well as the control plane.
➢ The control planes of the various switches
exchange topology information and hence construct a
forwarding table that decides where an incoming data
packet has to be forwarded via the data plane.
➢ Software-defined networking (SDN) is an approach via
which we take the control plane away from the switch
and assign it to a centralized unit called the SDN
controller.
➢ Hence, a network administrator can shape traffic via a
centralized console without having to touch the
individual switches.
➢ The data plane still resides in the switch and when a packet
enters a switch, its forwarding activity is decided based on the
entries of flow tables, which are pre-assigned by the controller.
➢ A flow table consists of match fields (like input port number
and packet header) and instructions. The packet is first
matched against the match fields of the flow table entries.
➢ Then the instructions of the corresponding flow entry are
executed. The instructions can be forwarding the packet via
one or multiple ports, dropping the packet, or adding headers
to the packet.
➢ If a packet doesn’t find a corresponding match in the flow
table, the switch queries the controller, which sends a new
flow entry to the switch.
➢ The switch forwards or drops the packet based on this flow
entry, as modelled in the sketch below.
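The sketch below models this flow-table behaviour in plain Python; the match-field names, instructions, and the controller’s response are illustrative assumptions, not the actual OpenFlow wire format:

```python
# A simplified SDN switch: match a packet against flow entries, fall back
# to the controller on a table miss.

flow_table = [
    # (match fields, instruction)
    ({"in_port": 1, "dst_ip": "10.0.0.2"}, ("forward", [2])),
    ({"in_port": 2, "dst_ip": "10.0.0.1"}, ("forward", [1])),
    ({"dst_ip": "10.0.0.255"},             ("forward", [1, 2, 3])),  # flood
]

def query_controller(packet):
    """Table miss: the controller installs a new flow entry (here, a drop rule)."""
    new_entry = ({"dst_ip": packet["dst_ip"]}, ("drop", []))
    flow_table.append(new_entry)
    return new_entry[1]

def handle_packet(packet):
    for match, instruction in flow_table:
        if all(packet.get(field) == value for field, value in match.items()):
            return instruction        # e.g. forward out ports, or drop
    return query_controller(packet)

print(handle_packet({"in_port": 1, "dst_ip": "10.0.0.2"}))  # ('forward', [2])
print(handle_packet({"in_port": 3, "dst_ip": "10.9.9.9"}))  # miss -> new rule
```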
Key Elements of SDN
1) Centralized Network Controller
• With the control and data planes decoupled and the
network controller centralized, network administrators can rapidly
configure the network.
2) Programmable Open APIs
• The SDN architecture supports programmable open APIs for the
interface between the SDN application and control layers
(northbound interface). These open APIs allow
implementing various network services such as routing, quality
of service (QoS), access control, etc.
3) Standard Communication Interface (OpenFlow)
• The SDN architecture uses a standard communication interface
between the control and infrastructure layers (southbound
interface). OpenFlow, which is defined by the Open
Networking Foundation (ONF), is the broadly accepted SDN
protocol for the southbound interface.
Importance of SDN
1) Better Network Connectivity: SDN provides better
network connectivity for sales, services, and internal
communications. SDN also helps in faster data sharing.
2) Better Deployment of Applications: Deployment of new
applications, services, and business models can be sped
up using software-defined networking.
3)Better Security: Software-defined network provides better
visibility throughout the network. Operators can create
separate zones for devices that require different levels of
security. SDN networks give more freedom to operators.
4)Better Control with High Speed: Software-defined networking
provides better speed than other networking types by applying
an open standard software-based controller.
NETWORK FUNCTION VIRTUALIZATION (NFV)
• NFV is a technology that leverages virtualization to consolidate
heterogeneous network devices onto industry-standard,
high-volume servers, switches and storage.
NFV Architecture:
➢An individual proprietary hardware
component, such as a router, switch, gateway,
firewall, load balancer, or intrusion detection
system, performs a specific networking
function in a typical network architecture.
➢ A virtualized network replaces these pieces of
hardware with software programs that run on
virtual machines to carry out the networking
operations.
• NFV is complementary to SDN as NFV can provide
the infrastructure on which SDN can run.
• NFV and SDN are mutually beneficial to each other
but not dependent.
• Network functions can be virtualized without SDN;
similarly, SDN can run without NFV.
• NFV comprises network functions implemented in
software that run on virtualized resources in the
cloud.
• NFV enables a separation of the network functions,
which are implemented in software, from the
underlying hardware.
Three key elements of the NFV architecture (a toy sketch of these elements follows the list):

1) Virtualized Network Function (VNF): A VNF is a software
implementation of a network function which is capable
of running over the NFV Infrastructure (NFVI).

2) NFV Infrastructure (NFVI): NFVI includes the compute,
network and storage resources that are virtualized.

3) NFV Management and Orchestration: NFV Management
and Orchestration focuses on all virtualization-specific
management tasks and covers the orchestration and
lifecycle management of physical and/or software
resources that support the infrastructure virtualization,
and the lifecycle management of VNFs.
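A toy Python sketch of how the three elements relate; all class and method names are illustrative assumptions, not a real ETSI MANO implementation:

```python
# Toy model: VNFs run on virtualized NFVI resources under an orchestrator.

class NFVI:
    """Virtualized compute/network/storage pool."""
    def __init__(self, vcpus):
        self.free_vcpus = vcpus

    def allocate(self, vcpus):
        if vcpus > self.free_vcpus:
            raise RuntimeError("insufficient NFVI capacity")
        self.free_vcpus -= vcpus

class VNF:
    """A network function (e.g. a firewall) implemented in software."""
    def __init__(self, name, vcpus):
        self.name, self.vcpus, self.running = name, vcpus, False

class Orchestrator:
    """Handles virtualization-specific lifecycle management of VNFs."""
    def __init__(self, nfvi):
        self.nfvi = nfvi
        self.vnfs = []

    def instantiate(self, vnf):
        self.nfvi.allocate(vnf.vcpus)   # place the VNF on virtual resources
        vnf.running = True
        self.vnfs.append(vnf)

mano = Orchestrator(NFVI(vcpus=8))
mano.instantiate(VNF("virtual-firewall", vcpus=2))
mano.instantiate(VNF("virtual-load-balancer", vcpus=2))
print(mano.nfvi.free_vcpus)             # 4 vCPUs left in the pool
```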
THE SEVEN-STEP MODEL OF MIGRATION INTO A CLOUD

What Is Migration?
➢ It is the process of moving data, applications or
other business elements to a cloud computing
environment.
➢ Migrating a model to the cloud can help in
several ways, such as:
• Improving scalability
• Flexibility
• Accessibility
➢ There are seven steps to follow when
migrating a model to the cloud:
Step 1: Choose the right cloud provider (Assessment step)
• The first step in migrating your model to the cloud is to choose
a cloud provider that aligns with your needs, budget, and
model requirements.
• Consider factors such as compliance, privacy, and security.
Step 2: Prepare your data (Isolation step)
• Before migrating to the cloud, you need to prepare your
data.
• Ensure your data is clean, well organized, and in a
format that is compatible with your chosen cloud provider.
Step 3: Choose your cloud storage (Mapping step)
• Once your data is prepared, you need to choose your cloud
storage. This is where your data is stored in the cloud.
• There are many cloud storage services, such as GCP Cloud
Storage, AWS S3, or Azure Blob Storage; a small upload sketch
follows this step.
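As a hedged example of this step, the sketch below uploads a prepared dataset to AWS S3 with boto3; the bucket name and file paths are placeholders, and configured AWS credentials are assumed:

```python
# Upload prepared data to cloud object storage (AWS S3 via boto3).
import boto3

s3 = boto3.client("s3")

# Push the cleaned dataset to the chosen bucket.
s3.upload_file(
    Filename="data/cleaned_dataset.csv",   # local file (assumed path)
    Bucket="my-model-migration-bucket",    # illustrative bucket name
    Key="datasets/cleaned_dataset.csv",    # object key inside the bucket
)

# Later, the model running in the cloud can read it back.
s3.download_file(
    "my-model-migration-bucket",
    "datasets/cleaned_dataset.csv",
    "/tmp/cleaned_dataset.csv",
)
```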
Step 4: Set up your cloud computing resources and deploy your
model (Re-architect step)
• If you want to run a model in the cloud, you will need to set up your
cloud computing resources.
• This includes selecting the appropriate instance type and setting up
a virtual machine (VM) or container for your model. After setting up
your computing resource, it is time to deploy your model to the
cloud.
• This includes packaging your model into a container or virtual
machine image and deploying it to your cloud computing resource.
• During deployment some functionality may be lost, so parts of the
application may need to be re-architected.
Step 5: Augmentation step
• This is the most important step for the business case behind the
migration: by leveraging the intrinsic features of the cloud
computing service, we augment our enterprise application.
Step 6: Test your model
• Once your model is deployed, you need to test it to ensure
that it is working correctly.
• This involves running test data through your model and
comparing the results with your expected output.
Step 7: Monitor and maintain your model
• After the model is deployed and tested, it is important to
monitor and maintain it.
• This includes monitoring performance, updating the
model as needed, and ensuring your data stays up to date.
• Migrating a machine learning model to the cloud can be
a complex process, but following the above seven steps helps
ensure a smooth and successful migration, keeping the
model scalable and accessible.
MIGRATION RISKS AND MITIGATION

1) No clear cloud migration strategy in place
2) Incompatibility of the existing architecture
3) Data loss
4) Wasted costs
5) Added latency
6) Lack of visibility and control
7) Security
1) No clear cloud migration strategy in place
➢ First and foremost, you have to decide whether to go
with one cloud provider or opt for managing multiple cloud
platforms. Each strategy has its pros and cons.
➢ If you settle on one cloud provider, there is a risk of vendor
lock-in.
➢ On the other side, you can make your code work with more
than one cloud provider and balance workloads between
several cloud platforms.
➢ This is more expensive and complicated, as each provider offers
different services and tools for management, but it gives a
certain degree of freedom.
➢ AWS + Microsoft Azure is a popular multi-cloud combination
today.
➢ 78% of organizations are currently using both AWS and Azure
to avoid common risks in cloud migration.
2) Incompatibility of the existing architecture
➢ Incompatibility slows migration to the cloud, as companies
have to find people with sufficient IT skills who can
make the entire architecture “fit for the cloud” at
the speed they require.
3) Data loss
➢ Before the migration, it is vital to make sure that all
your data is backed up, especially the files that
you’ll be migrating.
➢ During the migration process, you may encounter
issues such as corrupt, incomplete, or missing
files. If you have a backup, you’ll be able to
easily correct any errors by restoring the data to its
original state.
4) Wasted costs
➢ In cloud computing, you pay for compute, storage and data transfer.
And each cloud vendor offers a range of different instance types,
storage services, and transfer options depending on your use case,
cost requirements, and performance expectations.
➢ Finding the best fit can be a complex challenge. Companies that fail to
figure out what they need usually waste money because they
don’t use the opportunities they have to the full.
5)Added latency
➢ Unwanted latency is one of the most underestimated risks in cloud
migration.
➢ It can occur when you access applications, databases, and services in
the cloud. Latency is especially critical for IoT devices, e-commerce
websites, video streaming solutions, and cloud gaming platforms
where customer experience is crucial.
➢ If you have applications that require immediate responses, a delay
of a few seconds can pose serious damage to your business. It can
not only lead to frustration and disappointment but also impact your
brand reputation.
6)Lack of visibility and control
➢ Visibility in the public cloud is among the top risks in cloud migration.
➢ It affects network and application performance. If you rely on your own
on-premises data centers, you have full control over your resources, including
physical hosts, networks, and data centers.
➢ But when switching to external cloud services, the responsibility for some
of the policies moves to cloud providers, depending on the type of service.
As a result, the company lacks visibility into public cloud workloads.
7)Security
➢ Moving data to the cloud involves a lot of security risks:
• Compliance violations
• Contractual breaches
• Insecure APIs
• Issues on the provider’s side
• Misconfigured servers
• Malware
• External attacks
• Accidental errors, insider threats, etc.
MITIGATION

What is Mitigation?
➢ Mitigation means minimizing the risk of unauthorized data
access in the cloud.
➢ Mitigation techniques:
1. Tighten Up Access Controls
2. Prevent a Data Breach or Data Loss
3. Secure Application Programming Interfaces
4. Build a Strategy
5. Broaden Your Visibility
6. Identify Misconfiguration
7. Protect Shared Technology
1)Tighten Up Access Controls
➢ If an unwanted person gains access to your systems or
cloud-based resources, you’re faced with an automatic
security concern, which could potentially be as dangerous
as a full-blown data breach.
➢ Techniques for Access Controls (a TOTP sketch follows this list):
• Enable multi-factor authentication to tighten your
security.
• Implement stringent policies for removing access for past
employees.
• Educate users about social engineering tactics, strong
passwords and phishing attacks.
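For example, the time-based one-time passwords (TOTP) used by most authenticator apps can be sketched with the pyotp library; the enrolment flow here is deliberately simplified:

```python
# Second factor with time-based one-time passwords (TOTP).
import pyotp

# Enrolment: generate a per-user secret and share it with the user's
# authenticator app (usually via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Login: the user types the 6-digit code currently shown by their app.
code_from_user = totp.now()      # in reality this comes from the user

if totp.verify(code_from_user):
    print("second factor accepted")
else:
    print("access denied")
```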
2)Prevent a Data Breach or Data Loss
➢Mitigation Techniques for Data Breaches
• Use a firewall.
• Encrypt data at rest (see the sketch after this list).
• Develop a sound and efficient incident response
plan.
• Perform pen testing on your cloud resources.
➢ Mitigation Techniques for Data Loss
• Back up consistently.
• Restore your capabilities quickly.
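A minimal sketch of encrypting data at rest, using Fernet from the Python cryptography package; key storage and rotation are out of scope here and would normally be handled by a key management service:

```python
# Encrypt data before it lands on cloud storage; decrypt on restore.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # keep this in a KMS, never in code
f = Fernet(key)

plaintext = b"customer records to be stored in the cloud"
ciphertext = f.encrypt(plaintext)    # this is what gets written to disk

# Only holders of the key can restore the original data, e.g. when
# recovering from a backup after data loss.
assert f.decrypt(ciphertext) == plaintext
```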
3) Secure Application Programming Interfaces
➢ Cloud infrastructures use application programming
interfaces (APIs) to retrieve information from cloud-based
systems and send it to your connected
devices.
➢ Mitigation Techniques for Insecure APIs (a small TLS sketch
follows this list):
• Perform penetration testing on API endpoints to
identify vulnerabilities.
• Use secure sockets layer (SSL/TLS) to encrypt data for
transmission.
• Implement proper controls to limit access to API
protocols.
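A small sketch of calling an API endpoint over TLS with the Python requests library; the URL and bearer token are hypothetical placeholders:

```python
# Call a cloud API over TLS with certificate verification enabled.
import requests

response = requests.get(
    "https://api.example-cloud.com/v1/status",    # hypothetical endpoint
    headers={"Authorization": "Bearer <token>"},  # access control on the API
    verify=True,    # verify the server's TLS certificate (keep the default on)
    timeout=10,
)
response.raise_for_status()
print(response.json())
```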
4)Build a Strategy
➢Your cloud environment can quickly become
complicated and disparate without a larger
strategy. A hodgepodge environment can quickly
become difficult to manage when ad hoc services
are continually added to meet operational needs.
➢Mitigation Technique for Lack of Strategy
• Develop a cloud strategy that provides a robust
security infrastructure and aligns with your business
objectives.
• Perform regular penetration testing to check for any
vulnerabilities in your framework.
5)Broaden Your Visibility
➢ Limited visibility into your data model leaves you
vulnerable in places you can’t anticipate. As the
saying goes, you can’t protect what you can’t see.
➢Mitigation Technique for Limited Visibility
• Use a web application firewall to check for anomalous
traffic.
• Implement a cloud access security broker (CASB) to
keep an eye on outbound activities.
• Adopt a zero-trust model.
6)Identify Misconfiguration
➢It’s difficult to anticipate what kind of security
vulnerability you’ll be battling if you don’t
know where the misconfiguration has
occurred. Common examples include excessive
permissions, security holes left unpatched or
unrestricted port access.
➢Mitigation Techniques for Misconfiguration
• Implement an intrusion detection system (IDS), an
automated solution that continually scans for
anomalies.
• Review and upgrade your access control policies.
• Beef up your incident response plan.
7)Protect Shared Technology
➢ By sharing computing resources, you open
yourself up to the possibility that a breach on
the cloud infrastructure may also constitute a
potential incident on your data residing on
those systems.
➢Mitigation Techniques for Shared Tenancy
• Adopt a defense-in-depth strategy to protect cloud
resources.
• Encrypt data at rest and in transit.
• Select a cloud vendor with robust security
protocols.
