Cloud computing
Emulation
In emulation, the virtual machine replicates hardware, allowing the guest operating system to operate
independently of the underlying physical hardware.
This means that the guest OS can run without any modifications, making it versatile for various systems.
Since it simulates the entire hardware environment, emulation is particularly useful for running software
designed for different architectures, but it can be slower due to the overhead of translating instructions.
Paravirtualization
Paravirtualization requires the host operating system to provide a specific virtual machine interface for the guest
operating system.
In this setup, the guest OS must be modified or ported to interact with this interface, allowing it to access
hardware more efficiently through the host VM.
While this results in improved performance compared to emulation, it limits compatibility since the guest OS
must be tailored to work with the host's virtualization interface.
Full Virtualization
Full virtualization refers to the ability to run a program, most likely an operating system, directly on top of a
virtual machine and without any modification, as though it were run on the raw hardware.
To make this possible, virtual machine managers are required to provide a complete emulation of the entire
underlying hardware.
The principal advantage of full virtualization is complete isolation, which leads to enhanced security, ease of
emulation of different architectures, and coexistence of different systems on the same platform.
Whereas it is a desired goal for many virtualization solutions, full virtualization poses important concerns related
to performance and technical implementation.
A key challenge is the interception of privileged instructions such as I/O instructions: Since they change the state
of the resources exposed by the host, they have to be contained within the virtual machine manager.
A simple solution to achieve full virtualization is to provide a virtual environment for all the instructions, thus
posing some limits on performance.
A successful and efficient implementation of full virtualization is obtained with a combination of hardware and
software, not allowing potentially harmful instructions to be executed directly on the host.
What are the different types of virtualization?
Server virtualization
Storage virtualization
Network virtualization
Data virtualization
Application virtualization
Desktop virtualization
Programming language-level virtualization
SERVER VIRTUALIZATION
Server virtualization is a process that partitions a physical server into multiple virtual servers.
It is an efficient and cost-effective way to use server resources and deploy IT services in an organization.
Without server virtualization, physical servers use only a small amount of their processing capacity, leaving
devices idle.
Benefits of Server Virtualization
Resource Optimization: Maximizes the use of physical server resources by distributing workloads across multiple
VMs.
Cost Efficiency: Reduces hardware costs and power consumption by consolidating servers.
Scalability: Easily scale resources up or down as needed without significant physical changes.
Disaster Recovery: Simplifies backup and recovery processes by allowing VMs to be replicated or moved to
different locations.
Isolation: Enhances security by isolating applications and workloads in separate VMs.
STORAGE VIRTUALIZATION
Storage virtualization combines the functions of physical storage devices such as network attached storage (NAS)
and storage area network (SAN).
You can pool the storage hardware in your data center, even if it is from different vendors or of different types.
Storage virtualization uses all your physical data storage and creates a large unit of virtual storage that you can
assign and control by using management software.
IT administrators can streamline storage activities, such as archiving, backup, and recovery, because they can
combine multiple network storage devices virtually into a single storage device.
Benefits of Storage Virtualization
Resource Optimization: Improves the utilization of storage devices by aggregating storage pools.
Scalability: Easily scale storage up or down as business needs change without significant downtime or
reconfiguration.
Simplified Management: Centralizes storage management, allowing administrators to manage storage resources
through a single interface.
Network virtualization
Network virtualization combines hardware appliances and specific software for the creation and management of a
virtual network.
Network virtualization can aggregate different physical networks into a single logical network (external network
virtualization) or provide network-like functionality to an operating system partition (internal network
virtualization).
The result of external network virtualization is generally a virtual LAN (VLAN).
A VLAN is an aggregation of hosts that communicate with each other as though they were located in the same
broadcast domain.
SDN Controllers: Centralized software that manages network devices and configurations, enabling dynamic
adjustments to traffic flow.
Hypervisors: Used in virtualized environments to manage network resources alongside server virtualization.
Overlay Networks: Virtual networks built on top of existing physical networks, often using protocols like VXLAN
or GRE to encapsulate packets.
DATA VIRTUALIZATION
Modern organizations collect data from several sources and store it in different formats.
They might also store data in different places, such as in a cloud infrastructure and an on-premises data center.
Data virtualization creates a software layer between this data and the applications that need it.
Data virtualization tools process an application’s data request and return results in a suitable format.
Thus, organizations use data virtualization solutions to increase flexibility for data integration and support
cross-functional data analysis.
Data Sources: Connects to various data sources (structured and unstructured) such as SQL databases, NoSQL
databases, data lakes, and cloud storage.
Virtual Data Layer: A layer that integrates, transforms, and prepares data for access, often utilizing data federation
or real-time data integration techniques.
Querying and Access: Users can query the virtual data layer using standard SQL or other query languages,
allowing for seamless interaction with the underlying data sources.
APPLICATION VIRTUALIZATION
Application virtualization decouples applications from the underlying platform, allowing them to run on operating
systems other than the ones for which they were designed.
For example, users can run a Microsoft Windows application on a Linux machine without changing the machine
configuration.
To achieve application virtualization, follow these practices:
Application streaming – Users stream the application from a remote server, so it runs only on the end user's device
when needed.
Server-based application virtualization – Users can access the remote application from their browser or client
interface without installing it.
Local application virtualization – The application code is shipped with its own environment to run on all operating
systems without changes.
DESKTOP VIRTUALIZATION
Desktop virtualization is a technology that allows users to run desktop environments in a virtualized setting,
enabling them to access their desktops from any device with an internet connection.
Most organizations have nontechnical staff that use desktop operating systems to run common business applications.
For instance, you might have the following staff:
A customer service team that requires a desktop computer with Windows 10 and customer-relationship
management software
A marketing team that requires Windows Vista for sales applications
You can use desktop virtualization to run these different desktop operating systems on virtual machines, which your
teams can access remotely.
Hypervisors: Software like VMware vSphere, Microsoft Hyper-V, or Citrix Hypervisor is used to create and
manage the virtual machines that run the desktops.
Programming language-level virtualization
Programming language-level virtualization is mostly used to achieve ease of deployment of applications, managed
execution, and portability across different platforms and operating systems.
It consists of a virtual machine executing the byte code of a program, which is the result of the compilation process.
Compilers that target this technology produce a binary format representing the machine code for an
abstract architecture.
The characteristics of this architecture vary from implementation to implementation.
Generally these virtual machines constitute a simplification of the underlying hardware instruction set and provide
some high-level instructions that map some of the features of the languages compiled for them.
At runtime, the byte code can be either interpreted or compiled on the fly (just-in-time compiled) against the
underlying hardware instruction set.
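The idea above can be sketched with a toy stack-based virtual machine. This is an illustrative simplification, not any real runtime: the "bytecode" is a list of (opcode, operand) pairs rather than a binary format, and the instruction names are invented for the example.

```python
# A toy stack-based virtual machine. The bytecode format and opcodes
# (PUSH, ADD, MUL) are hypothetical, chosen only to illustrate how a
# program compiled for an abstract architecture is interpreted at runtime.
def run(bytecode):
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4 expressed in the abstract instruction set:
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)]
print(run(program))  # 20
```

The same bytecode could run on any platform that provides this interpreter, which is the portability property the text describes.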
What is load balancing?
The technology used to distribute service requests to resources is referred to as load balancing.
Load balancing can be implemented in hardware or in software.
Load balancing is an optimization technique; it can be used to increase utilization and throughput, lower latency,
reduce response time, and avoid system overload.
Load balancing is the method of distributing network traffic equally across a pool of resources that support an
application.
Modern applications must process millions of users simultaneously and return the correct text, videos, images,
and other data to each user in a fast and reliable manner.
To handle such high volumes of traffic, most applications have many resource servers with duplicate data
between them.
A load balancer is a device that sits between the user and the server group and acts as an invisible facilitator,
ensuring that all resource servers are used equally.
The following network resources can be load balanced:
Network interfaces and services such as DNS, FTP, and HTTP
Connections through intelligent switches
Processing through computer system assignment
Storage resources
Access to application instances
Importance:
Without load balancing, cloud computing would be very difficult to manage.
Load balancing provides the necessary redundancy to make an intrinsically unreliable system reliable through
managed redirection.
It also provides fault tolerance when coupled with a failover mechanism.
Load balancing is nearly always a feature of server farms, computer clusters, and high-availability applications.
Application Delivery Controller (ADC)
An Application Delivery Controller (ADC) combines a load balancer and an application server; it is placed between a
firewall or router and a server farm providing Web services.
An Application Delivery Controller is assigned a virtual IP address (VIP) that it maps to a pool of servers based on
application specific criteria.
An ADC is a combination network and application layer device.
You also may come across ADCs referred to as a content switch, multilayer switch, or Web switch.
These vendors, among others, sell ADC systems:
A10 Networks (http://www.a10networks.com/)
Barracuda Networks (http://www.barracudanetworks.com/)
Brocade Communication Systems (http://www.brocade.com/)
Cisco Systems (http://www.cisco.com/)
Citrix Systems (http://www.citrix.com/)
WHAT ARE THE BENEFITS OF LOAD BALANCING?
Application availability
Server failure or maintenance can increase application downtime, making your application unavailable to visitors.
Load balancers increase the fault tolerance of your systems by automatically detecting server problems and
redirecting client traffic to available servers.
We can use load balancing to make these tasks easier:
Run application server maintenance or upgrades without application downtime
Provide automatic disaster recovery to backup sites
Perform health checks and prevent issues that can cause downtime
WHAT ARE THE BENEFITS OF LOAD BALANCING?
Application scalability
You can use load balancers to direct network traffic intelligently among multiple servers.
Your applications can handle thousands of client requests because load balancing does the following:
Prevents traffic bottlenecks at any one server
Predicts application traffic so that you can add or remove different servers, if needed
Adds redundancy to your system so that you can scale with confidence
Application performance
Load balancers improve application performance by reducing response time and network latency. They
perform several critical tasks such as the following:
Distribute the load evenly between servers to improve application performance.
Redirect client requests to a geographically closer server to reduce latency.
Ensure the reliability and performance of physical and virtual computing resources.
WHAT ARE THE BENEFITS OF LOAD BALANCING?
Application security
Load balancers come with built-in security features to add another layer of security to your internet applications.
They are a useful tool to deal with distributed denial-of-service (DDoS) attacks, in which attackers flood an
application server with millions of concurrent requests that cause server failure.
Load balancers can also do the following:
Monitor traffic and block malicious content
Automatically redirect attack traffic to multiple backend servers to minimize impact
Route traffic through a group of network firewalls for additional security
WHAT ARE LOAD BALANCING ALGORITHMS?
Load balancing algorithms fall into two categories: static and dynamic.
Static load balancing algorithms follow fixed rules and are independent of the current server state.
Dynamic load balancing algorithms examine the current state of the servers before distributing traffic.
ROUND-ROBIN METHOD
Round-Robin Load Balancing is a method of distributing incoming network traffic across multiple servers in a
sequential manner.
Each request is sent to the next server in a predetermined list, cycling through the servers in order.
This approach ensures that all servers receive an equal number of requests over time.
Simple to implement but does not consider server load or capacity.
No Session Persistence: Each request is treated independently, making it unsuitable for applications that
require session persistence (sticky sessions).
Performance Variability: If one server is slower or experiences higher latency, it still receives the same share of
requests as faster servers, which can degrade overall application performance.
Use Cases: Suitable for applications with uniform request sizes and processing times.
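The rotation described above can be sketched in a few lines. The server names are hypothetical placeholders for a real backend pool.

```python
from itertools import cycle

# Hypothetical server pool; in practice these would be backend addresses.
servers = ["server-a", "server-b", "server-c"]
rr = cycle(servers)

def next_server():
    """Return the next server in fixed rotation, ignoring server state."""
    return next(rr)

# Nine requests are spread evenly: each server receives exactly three.
assignments = [next_server() for _ in range(9)]
print(assignments.count("server-a"))  # 3
```

Note that the rotation consults no server metrics at all, which is exactly why round-robin is classed as a static algorithm.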
WEIGHTED ROUND-ROBIN METHOD
In weighted round-robin load balancing, you can assign different weights to each server based on their priority
or capacity.
Servers with higher weights receive a proportionally larger share of the incoming application traffic.
Advantages:
Better resource utilization compared to simple Round Robin.
Can accommodate servers of varying capacities effectively.
Disadvantages:
Requires periodic adjustment of weights based on server performance.
Still doesn’t adapt to real-time load fluctuations.
Use Cases:
Useful in environments with heterogeneous server capacities, like a mix of high-performance and standard
servers.
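One simple way to realize the weighting, sketched below, is to expand the rotation so each server appears as many times as its weight. The weights and server names are illustrative assumptions.

```python
# Hypothetical weights: server-a has twice the capacity of the others.
weights = {"server-a": 2, "server-b": 1, "server-c": 1}

# Expand the rotation so each server appears as often as its weight.
rotation = [s for s, w in weights.items() for _ in range(w)]

def dispatch(n):
    """Assign n requests by cycling through the weighted rotation."""
    return [rotation[i % len(rotation)] for i in range(n)]

# Over 8 requests, server-a gets 4 and the others get 2 each.
print(dispatch(8).count("server-a"))  # 4
```

Production balancers typically use a smoother interleaving than this naive expansion, but the traffic proportions are the same.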
IP HASH METHOD
In the IP hash method, the load balancer performs a mathematical computation, called hashing, on the client IP
address.
It converts the client IP address to a number, which is then mapped to individual servers.
Advantages:
Provides session persistence, ensuring clients are consistently routed to the same server.
Can help in caching and maintaining user sessions effectively.
Disadvantages:
Uneven distribution of requests if many clients share the same IP (e.g., in corporate networks).
Limited flexibility if server capacities change.
Use Cases: Ideal for applications requiring session stickiness, such as e-commerce sites.
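The hash-and-map step can be sketched as follows. The choice of SHA-256 and the example IP addresses are assumptions for illustration; real balancers may use faster non-cryptographic hashes.

```python
import hashlib

# Hypothetical backend pool.
servers = ["server-a", "server-b", "server-c"]

def pick(client_ip):
    # Hash the client IP to a number, then map it onto the pool.
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# The same client IP always maps to the same server (session persistence).
assert pick("203.0.113.7") == pick("203.0.113.7")
```

Because the mapping depends only on the client address, it survives restarts of the balancer, but changing the pool size remaps most clients.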
LEAST CONNECTION METHOD
A connection is an open communication channel between a client and a server.
When the client sends the first request to the server, they authenticate and establish an active connection between each
other.
In the least connection method, the load balancer checks which servers have the fewest active connections and sends
traffic to those servers.
This method assumes that all connections require equal processing power for all servers.
Advantages:
Adapts in real-time, leading to improved performance and responsiveness.
Reduces the risk of server overload.
Disadvantages:
May not consider the processing power of connections; a server may have few connections but could be heavily loaded.
Requires more complex algorithms for tracking connections.
Use Cases:
Effective in environments where requests have varied processing times, such as web applications with dynamic content.
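The selection rule above reduces to a minimum over a table of active-connection counts. The counts and server names below are hypothetical.

```python
# Hypothetical snapshot of active connections per server.
active = {"server-a": 12, "server-b": 3, "server-c": 7}

def pick():
    # Choose the server with the fewest active connections,
    # then record the new connection it will carry.
    server = min(active, key=active.get)
    active[server] += 1
    return server

print(pick())  # "server-b"
```

Each assignment updates the table, so repeated requests gradually even out the connection counts across the pool.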
WEIGHTED LEAST CONNECTION METHOD
Weighted least connection algorithms assume that some servers can handle more active connections than others.
Therefore, you can assign different weights or capacities to each server, and the load balancer sends the new client
requests to the server with the least connections by capacity.
Advantages:
Balances requests more effectively among servers of different capacities.
Improves overall performance by ensuring that powerful servers handle more connections.
Disadvantages:
More complex to implement and maintain.
Weights must be regularly adjusted to reflect performance changes.
Use Cases:
Beneficial for applications where servers have significantly different processing power.
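The "least connections by capacity" rule can be sketched by normalizing each server's connection count by its weight. The capacities and counts are illustrative assumptions.

```python
# Hypothetical relative capacities and current active connections.
capacity = {"server-a": 4, "server-b": 1}
active = {"server-a": 6, "server-b": 2}

def pick():
    # Fewest connections relative to capacity wins.
    return min(active, key=lambda s: active[s] / capacity[s])

# server-a scores 6/4 = 1.5, server-b scores 2/1 = 2.0, so server-a wins
# even though it has more raw connections.
print(pick())  # "server-a"
```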
LEAST RESPONSE TIME METHOD
The response time is the total time that the server takes to process the incoming requests and send a response.
The least response time method combines the server response time and the active connections to determine the
best server.
Load balancers use this algorithm to ensure faster service for all users.
Advantages:
Optimizes user experience by minimizing latency.
Adjusts dynamically to changes in server load and performance.
Disadvantages:
Requires continuous monitoring of response times, which adds overhead.
May not account for servers that are temporarily overloaded but have previously fast response times.
Use Cases:
Suitable for high-traffic web applications where user experience is critical.
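One way to combine the two signals is sketched below; scoring each server as connections times average response time is an illustrative choice, not a standard formula, and the figures are hypothetical.

```python
# Hypothetical per-server metrics: active connections and
# average response time in milliseconds.
stats = {
    "server-a": {"conns": 10, "rt_ms": 40},
    "server-b": {"conns": 4, "rt_ms": 90},
}

def pick():
    # Score each server by connections x response time; lower is better.
    return min(stats, key=lambda s: stats[s]["conns"] * stats[s]["rt_ms"])

# server-a scores 400, server-b scores 360, so server-b is chosen despite
# its slower responses, because it carries far fewer connections.
print(pick())  # "server-b"
```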
RESOURCE-BASED METHOD
In the resource-based method, load balancers distribute traffic by analyzing the current server load.
Specialized software called an agent runs on each server and calculates usage of server resources, such as its
computing capacity and memory.
Then, the load balancer checks the agent for sufficient free resources before distributing traffic to that server.
Advantages:
Highly adaptive and can lead to optimal resource use across the server pool.
Capable of balancing loads based on real-time server health.
Disadvantages:
More complex to implement, requiring sophisticated monitoring and analytics tools.
Potentially higher overhead from constant resource monitoring.
Use Cases:
Ideal for large-scale applications with fluctuating workloads, such as cloud-native applications.
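The agent-reporting scheme can be sketched as follows. The report format, the 0.8 utilization threshold, and the server figures are all assumptions for illustration.

```python
# Hypothetical agent reports: fraction of CPU and memory in use per server.
reports = {
    "server-a": {"cpu": 0.85, "mem": 0.60},
    "server-b": {"cpu": 0.30, "mem": 0.40},
}

def pick(threshold=0.8):
    # Skip servers whose busiest resource exceeds the threshold,
    # then prefer the least-loaded remaining server.
    eligible = {s: max(r.values()) for s, r in reports.items()
                if max(r.values()) < threshold}
    return min(eligible, key=eligible.get) if eligible else None

# server-a is excluded (CPU at 85%), so server-b receives the traffic.
print(pick())  # "server-b"
```

Returning None when no server is eligible is where a real balancer would queue, shed, or overflow traffic; that policy is outside this sketch.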
WHAT ARE THE TYPES OF LOAD BALANCING?
Application load balancing:
Distributes incoming application traffic based on specific application-level criteria.
Can make intelligent routing decisions based on HTTP headers, cookies, and data in the request.
Supports features like SSL termination, URL-based routing, and session persistence.
Use Cases: Ideal for web applications, microservices, and environments where traffic needs to be directed based
on application logic.
Network load balancing:
Distributes network traffic across multiple servers or resources at the transport layer.
Routes traffic based on IP address and TCP/UDP port information without inspecting packet contents.
Provides faster routing due to lower overhead and can handle a variety of protocols.
Use Cases: Suitable for applications requiring high throughput and low latency, such as database connections,
VoIP, and non-HTTP traffic.
Global server load balancing:
Global server load balancing occurs across several geographically distributed servers.
For example, companies can have servers in multiple data centers, in different countries, and in third-party cloud
providers around the globe.
In this case, local load balancers manage the application load within a region or zone.
They attempt to redirect traffic to a server destination that is geographically closer to the client.
They might redirect traffic to servers outside the client’s geographic zone only in case of server failure.
Use Cases: Essential for global applications that require high availability and optimal performance across different
regions.
DNS load balancing
Utilizes DNS records to distribute incoming requests among multiple servers, directing users based on DNS
resolution.
A domain can correspond to a website, a mail system, a print server, or another service that is made accessible
through the internet.