Unit2 PDF
1. Full Virtualization
2. Para-Virtualization
3. Hardware Virtualization
1. Full Virtualization
• In full virtualization, the virtualization layer completely decouples
the guest OS from the underlying hardware.
• The guest OS requires no modification and is not aware that it is
being virtualized.
• Full virtualization is enabled by direct execution of user requests and
binary translation of OS requests.
• Microsoft and Parallels systems are examples of full virtualization.
2. Para-Virtualization
• In para-virtualization, the guest OS is aware that it is being virtualized: it is modified so that privileged operations are replaced with hypercalls to the virtualization layer.
• Because the guest cooperates with the hypervisor, para-virtualization avoids the overhead of binary translation; Xen is the classic example.
Replication in Cloud Computing
• Replication refers to storing multiple copies of the same data in several different locations, usually kept consistent by synchronizing these data sources.
• Replication in cloud computing is done partly for backup and partly to reduce response times, especially for read requests.
• The simplest form of data replication in a cloud computing environment is storing a copy of a file, essentially the copy-and-paste operation found in any modern operating system.
• Replication is the reproduction of the original data in unchanged form.
• In the frequently encountered master/slave replication, a distinction is made between the original data (primary data) and the dependent copies. With peer copies (as in version control), the data sets must be merged (synchronization).
• Sometimes it is important to know which data sets must have the replicas.
• Depending on the type of replication, a certain period of time lies between the processing and creation of the primary data and their replication.
• This period is usually referred to as latency.
➢ Synchronous replication means that a change operation on a data object completes successfully only if it has also been performed on the replicas.
➢ To implement this, a protocol that ensures the indivisibility of transactions is used: the commit protocol.
➢ With synchronous replication, the data are identical at all times.
➢ If there is a delay between the processing of the primary data and the replication, the replication is asynchronous.
➢ A simple variant of asynchronous replication is "File Transfer Replication": the transfer of files via FTP or SSH.
➢ The replicas then represent only a snapshot of the primary data at a specific time. At the database level this can happen at short intervals: the transaction logs are transported from one server to another and read into the database.
➢ Assuming an intact network, the latency corresponds to the time interval in which the transaction logs are written.
➢ Four methods can be used for replication in cloud computing: merge replication, primary copy, snapshot replication, and standby replication.
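The difference between synchronous and asynchronous replication described above can be sketched in a few lines of Python. This is a hypothetical in-memory model: `Replica`, `sync_write`, `async_write`, and `drain` are illustrative names, not a real replication API.

```python
# Hypothetical in-memory sketch of synchronous vs. asynchronous replication.

class Replica:
    def __init__(self, name):
        self.name = name
        self.data = {}

    def apply(self, key, value):
        self.data[key] = value
        return True  # acknowledge the write

def sync_write(primary, replicas, key, value):
    """Synchronous: the write succeeds only once every replica applied it."""
    primary.apply(key, value)
    return all(r.apply(key, value) for r in replicas)

def async_write(primary, replicas, queue, key, value):
    """Asynchronous: the write completes on the primary; replicas are
    updated later. The gap between the two moments is the latency."""
    primary.apply(key, value)
    queue.append((key, value))  # drained later by a background process

def drain(replicas, queue):
    """Background process that brings the replicas up to date."""
    while queue:
        key, value = queue.pop(0)
        for r in replicas:
            r.apply(key, value)

primary = Replica("primary")
replicas = [Replica("slave-1"), Replica("slave-2")]
queue = []

sync_write(primary, replicas, "a", 1)          # replicas identical at once
async_write(primary, replicas, queue, "b", 2)  # replicas lag until drained
drain(replicas, queue)
```

Until `drain` runs, the replicas hold only a snapshot of the primary data, which mirrors the asynchronous case above.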
Types of Replication
1) Array-based Replication
2) Host-based Replication
3) Network-based Replication
1) Array-based Replication
• An array-based data replication strategy uses built-in
software to automatically replicate data.
• With this type of data replication, software built into compatible storage arrays copies data between the arrays.
• Using this method has several advantages and
disadvantages.
2) Host-Based Data Replication
• Host-based data replication uses the servers to copy data
from one site to another site.
• Host-based replication software usually includes options like compression, encryption, and throttling, as well as failover.
• Using this method has several advantages and
disadvantages.
3) Network-Based Data Replication
• Network-based data replication uses a device or
appliance that sits on the network in the path of the
data to manage replication.
• The data is then copied to a second device. These
devices usually have proprietary replication technology
but can be used with any host server and storage
hardware.
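The core idea of host-based replication, where the server itself copies data to another site, can be sketched as follows. This is illustrative only: the file names and "sites" are made up, and real products layer compression, encryption, throttling, and failover on top of the basic copy-and-verify loop.

```python
# Sketch of host-based file replication: the host copies a file to a second
# location and verifies the replica with a checksum.
import hashlib
import os
import shutil
import tempfile

def checksum(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def replicate(source, target):
    shutil.copy2(source, target)  # copy contents and metadata
    if checksum(source) != checksum(target):
        raise IOError("replica does not match the primary data")

site_a = tempfile.mkdtemp()  # stand-in for the primary site
site_b = tempfile.mkdtemp()  # stand-in for the remote site
source = os.path.join(site_a, "orders.db")
with open(source, "w") as f:
    f.write("primary data")

target = os.path.join(site_b, "orders.db")
replicate(source, target)
```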
Software Defined Networking
➢ Software-Defined Networking (SDN) is a networking architecture that
separates the control plane from the data plane and centralizes the network
controller.
➢ To understand software-defined networks, we need to understand the various
planes involved in networking.
Data Plane
Control Plane
1) Data plane: All activities involving, as well as resulting from, data packets sent by the end user belong to this plane. This includes:
• Forwarding of packets.
• Segmentation and reassembly of data.
• Replication of packets for multicasting.
2) Control plane: All activities that are necessary to perform data plane activities but do not involve end-user data packets belong to this plane. In other words, this is the brain of the network.
The activities of the control plane include:
• Making routing tables.
• Setting packet handling policies.
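The division of labor between the two planes can be shown with a toy example: the control plane computes the routing table once, and the data plane only looks packets up in it. The networks and port numbers below are made up for illustration.

```python
# Toy separation of control plane (route computation) and data plane
# (per-packet forwarding).

def build_routing_table(links):
    """Control plane: derive a routing table from known topology.
    `links` maps a destination network to a next-hop port."""
    table = dict(links)
    table["default"] = 0  # fall back to port 0 for unknown destinations
    return table

def forward(table, packet):
    """Data plane: a pure table lookup, no route computation here."""
    return table.get(packet["dst"], table["default"])

table = build_routing_table({"10.0.1.0/24": 1, "10.0.2.0/24": 2})
port = forward(table, {"dst": "10.0.2.0/24", "payload": b"hello"})
```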
Two types of architecture
1) Conventional network architecture
➢ The control plane and data plane are coupled.
➢ The control plane is the part of the network that carries the signaling and routing message traffic.
➢ The data plane is the part of the network that carries the payload data traffic.
2) SDN Architecture
➢ The control and data planes are decoupled and the
network controller is centralized.
A typical SDN architecture consists of three layers.
1) Application layer: It contains the typical network applications like intrusion detection, firewall, and load balancing.
2) Control layer: It consists of the SDN controller, which acts as the brain of the network. It also provides hardware abstraction to the applications written on top of it.
3) Infrastructure layer: This consists of physical switches, which form the data plane and carry out the actual movement of data packets.
• The layers communicate via a set of interfaces called the northbound APIs (between the application and control layers) and the southbound APIs (between the control and infrastructure layers).
Northbound APIs:
➢ SDN uses these APIs to communicate between the SDN controller and the services and applications running over the network.
Southbound APIs:
➢ These allow the controller to modify the switch/router configurations.
Applications can thus communicate with an SDN controller, which can configure switches and routers almost in real time.
SDN Architecture
➢ In a traditional network, each switch has its own data
plane as well as the control plane.
➢ The control planes of the various switches exchange topology information and hence construct a forwarding table that decides where an incoming data packet has to be forwarded via the data plane.
➢ Software-defined networking (SDN) is an approach via
which we take the control plane away from the switch
and assign it to a centralized unit called the SDN
controller.
➢ Hence, a network administrator can shape traffic via a
centralized console without having to touch the
individual switches.
➢ The data plane still resides in the switch and when a packet
enters a switch, its forwarding activity is decided based on the
entries of flow tables, which are pre-assigned by the controller.
➢ A flow table consists of match fields (like input port number
and packet header) and instructions. The packet is first
matched against the match fields of the flow table entries.
➢ Then the instructions of the corresponding flow entry are
executed. The instructions can be forwarding the packet via
one or multiple ports, dropping the packet, or adding headers
to the packet.
➢ If a packet doesn’t find a corresponding match in the flow
table, the switch queries the controller which sends a new
flow entry to the switch.
➢ The switch forwards or drops the packet based on this flow
entry
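The flow-table behavior described above, matching on fields, executing instructions, and querying the controller on a table miss, can be sketched as a toy lookup. The match fields and port numbers are illustrative; a real switch would speak OpenFlow to the controller rather than call a local function.

```python
# Toy flow-table lookup with table-miss handling by a mock controller.

flow_table = [
    # (match fields, instruction)
    ({"in_port": 1, "eth_dst": "aa:bb"}, ("forward", [2])),
    ({"in_port": 2, "eth_dst": "cc:dd"}, ("drop", None)),
]

def controller_query(packet):
    """Mock controller: install a rule forwarding this flow out port 3."""
    entry = ({"in_port": packet["in_port"], "eth_dst": packet["eth_dst"]},
             ("forward", [3]))
    flow_table.append(entry)
    return entry

def handle(packet):
    for match, instruction in flow_table:
        if all(packet.get(k) == v for k, v in match.items()):
            return instruction
    # Table miss: ask the controller, which pushes a new flow entry.
    _, instruction = controller_query(packet)
    return instruction

hit = handle({"in_port": 1, "eth_dst": "aa:bb"})   # matches an entry
miss = handle({"in_port": 9, "eth_dst": "ee:ff"})  # miss -> controller
```

After the miss, the new flow entry sits in the table, so subsequent packets of that flow are handled without contacting the controller.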
Key Elements of SDN
1)Centralized Network Controller
• With the control and data planes decoupled and the network controller centralized, network administrators can rapidly configure the network.
2)Programmable Open APIs
• The SDN architecture supports programmable open APIs for the interface between the SDN application and control layers (northbound interface). These open APIs allow implementing various network services such as routing, quality of service (QoS), access control, etc.
3) Standard Communication Interface (OpenFlow)
• The SDN architecture uses a standard communication interface between the control and infrastructure layers (southbound interface). OpenFlow, which is defined by the Open Networking Foundation (ONF), is the broadly accepted SDN protocol for the southbound interface.
Importance of SDN
1) Better Network Connectivity: SDN provides better network connectivity for sales, services, and internal communications. SDN also helps in faster data sharing.
2) Better Deployment of Applications: Deployment of new applications, services, and business models can be sped up using software-defined networking.
3) Better Security: A software-defined network provides better visibility throughout the network. Operators can create separate zones for devices that require different levels of security. SDN networks give more freedom to operators.
4) Better Control with High Speed: Software-defined networking provides better speed than other networking types by applying an open-standard, software-based controller.
NETWORK FUNCTION VIRTUALIZATION (NFV)
• A technology that leverages virtualization to consolidate heterogeneous network devices onto industry-standard high-volume servers, switches, and storage.
NFV Architecture:
➢An individual proprietary hardware
component, such as a router, switch, gateway,
firewall, load balancer, or intrusion detection
system, performs a specific networking
function in a typical network architecture.
➢ A virtualized network replaces these pieces of hardware with software programs operating on virtual machines that carry out the networking operations.
• NFV is complementary to SDN as NFV can provide
the infrastructure on which SDN can run.
• NFV and SDN are mutually beneficial to each other
but not dependent.
• Network functions can be virtualized without SDN; similarly, SDN can run without NFV.
• NFV comprises network functions implemented in software that run on virtualized resources in the cloud.
• NFV enables a separation of the network functions, which are implemented in software, from the underlying hardware.
The 3 key elements of the NFV architecture are:
1) Virtualized Network Functions (VNFs): software implementations of network functions such as routing, firewalling, or load balancing.
2) NFV Infrastructure (NFVI): the physical and virtual compute, storage, and network resources on which the VNFs run.
3) NFV Management and Orchestration (MANO): the framework that manages the NFVI and orchestrates the lifecycle of the VNFs.
What Is Migration?
➢ It is the process of moving data, applications, or other business elements to a cloud computing environment.
➢ Migrating a model to the cloud can help in several ways, such as:
• Improving scalability
• Flexibility
• Accessibility
➢There are seven steps to follow when
migrating a model to the cloud:
Step 1: Choose the right cloud provider (Assessment step)
• The first step in migrating your model to the cloud is to choose a cloud provider that aligns with your needs, budget, and model requirements.
• Consider factors such as compliance, privacy, and security.
Step 2: Prepare your data (Isolation step)
• Before migrating to the cloud, you need to prepare your data.
• Ensure your data is clean, well organized, and in a format compatible with your chosen cloud provider.
Step 3: Choose your cloud storage (Mapping step)
• Once your data is prepared, you need to choose your cloud storage.
• This is where your data will be stored in the cloud. There are many cloud storage services, such as GCP Cloud Storage, AWS S3, or Azure Blob Storage.
Step 4: Set up your cloud computing resources and deploy your model (Re-architect step)
• If you want to run a model in the cloud, you will need to set up your cloud computing resources.
• This includes selecting the appropriate instance type and setting up a virtual machine (VM) or container for your model. After setting up your computing resources, it is time to deploy your model to the cloud.
• This involves packaging your model into a container or virtual machine image and deploying it to your cloud computing resource.
• During deployment, some functionality may be lost, so some parts of the application may need to be re-architected.
Step 5: Augmentation step
• This is the most important step for the business, and the reason we migrate to the cloud: by leveraging the internal features of the cloud computing service, we augment our enterprise.
Step 6: Test your Model
• Once your model is deployed, you need to test it to ensure that it is working correctly.
• That involves running test data through your model and comparing the results with your expected output.
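Running test data through the model and comparing results against expected output can be sketched like this. Here `model` is a local stand-in for a call to the deployed cloud endpoint, and the test cases are made up for illustration.

```python
# Sketch of testing a deployed model: run known inputs through it and
# collect any cases where the output differs from the expectation.

def model(x):
    return 2 * x + 1  # placeholder for the real prediction call

test_cases = [
    (0, 1),
    (3, 7),
    (10, 21),
]

def run_tests(predict, cases):
    """Return the cases where the model output differs from expectations."""
    return [(x, expected, predict(x))
            for x, expected in cases
            if predict(x) != expected]

failures = run_tests(model, test_cases)  # empty list = model behaves as expected
```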
Step 7: Monitor and maintain your Model
• After the model is deployed and tested, it is important to
monitor and maintain it.
• That includes monitoring performance, updating the model as needed, and ensuring your data stays up to date.
• Migrating a machine learning model to the cloud can be a complex process, but following the above 7 steps can help ensure a smooth and successful migration, keeping your model scalable and accessible.
MIGRATION RISKS AND MITIGATION
What is Mitigation?
➢Minimizing the risk of unauthorized data
access in the cloud.
➢Mitigation Techniques for Access Controls
1. Tighten Up Access Controls
2. Prevent a Data Breach or Data Loss
3. Secure Application Programming Interfaces
4. Build a Strategy
5. Broaden Your Visibility
6. Identify Misconfiguration
7. Protect Shared Technology
1)Tighten Up Access Controls
➢ If an unwanted person gains access to your systems or
cloud-based resources, you’re faced with an automatic
security concern, which could potentially be as dangerous
as a full-blown data breach.
➢ Techniques for Access Controls
• Enable multi-factor authentication to tighten your
security.
• Implement stringent policies for removing access for past
employees.
• Educate users about social engineering tactics, strong
passwords and phishing attacks.
2)Prevent a Data Breach or Data Loss
➢Mitigation Techniques for Data Breaches
• Use a firewall.
• Encrypt data at rest.
• Develop a sound and efficient incident response
plan.
• Perform pen testing on your cloud resources.
➢ Mitigation Techniques for Data Loss
• Back up consistently.
• Restore your capabilities quickly.
3)Secure Application Programming Interfaces
➢Cloud infrastructures use application programming interfaces (APIs) to retrieve information from cloud-based systems and send it to your connected devices.
➢Mitigation Technique for Insecure APIs
• Perform penetration testing on API endpoints to
identify vulnerabilities.
• Use secure sockets layer (SSL) to encrypt data for
transmission.
• Implement proper controls to limit access to API
protocols.
4)Build a Strategy
➢Your cloud environment can quickly become
complicated and disparate without a larger
strategy. A hodgepodge environment can quickly
become difficult to manage when ad hoc services
are continually added to meet operational needs
➢Mitigation Technique for Lack of Strategy
• Develop a cloud strategy that provides a robust
security infrastructure and aligns with your business
objectives.
• Perform regular penetration testing to check for any
vulnerabilities in your framework.
5)Broaden Your Visibility
➢ Limited visibility into your data model leaves you
vulnerable in places you can’t anticipate. As the
saying goes, you can’t protect what you can’t see.
➢Mitigation Technique for Limited Visibility
• Use a web application firewall to check for anomalous
traffic.
• Implement a cloud access security broker (CASB) to
keep an eye on outbound activities.
• Adopt a zero-trust model.
6)Identify Misconfiguration
➢It’s difficult to anticipate what kind of security
vulnerability you’ll be battling if you don’t
know where the misconfiguration has
occurred. Common examples include excessive
permissions, security holes left unpatched or
unrestricted port access.
➢Mitigation Techniques for Misconfiguration
• Implement an intrusion detection system (IDS), an
automated solution that continually scans for
anomalies.
• Review and upgrade your access control policies.
• Beef up your incident response plan.
7)Protect Shared Technology
➢ By sharing computing resources, you open
yourself up to the possibility that a breach on
the cloud infrastructure may also constitute a
potential incident on your data residing on
those systems.
➢Mitigation Techniques for Shared Tenancy
• Adopt a defense-in-depth strategy to protect cloud
resources.
• Encrypt data at rest and in transit.
• Select a cloud vendor with robust security
protocols.