
*Load Balancing*

Load balancing is the method of distributing work evenly across multiple devices or pieces of hardware. Typically, the load is balanced between different servers, or between the CPUs and hard drives within a single cloud server.

Load balancing was introduced for several reasons. One is to improve the speed and performance of each individual device; another is to protect devices from hitting their limits, which would degrade their performance.

Cloud load balancing is defined as distributing workloads and computing resources in cloud computing. It enables enterprises to manage workload or application demands by distributing resources among multiple computers, networks, or servers. Cloud load balancing involves managing the movement of workload traffic and demands over the Internet.

Traffic on the Internet is growing rapidly, increasing by almost 100% annually. The workload on servers is therefore rising quickly, leading to server overload, especially for popular web servers. There are two primary solutions to the problem of server overload:

o The first is a single-server solution, in which the server is upgraded to a higher-performance server. However, the new server may also become overloaded soon, demanding another upgrade, and the upgrading process is arduous and expensive.

o The second is a multiple-server solution, in which a scalable service system is built on a cluster of servers. This makes it more cost-effective and more scalable to build a server cluster system for network services.

Cloud-based servers can achieve more precise scalability and availability by using server farm load balancing. Load balancing is beneficial with almost any type of service, such as HTTP, SMTP, DNS, FTP, and POP/IMAP.

It also increases reliability through redundancy. A dedicated hardware device or program provides the balancing service.

Different Types of Load Balancing Algorithms in Cloud Computing:

1. Static Algorithm

Static algorithms are built for systems with very little variation in load. The entire traffic is divided equally among the servers. This approach requires in-depth knowledge of server resources, determined at the beginning of the implementation, to achieve good processor performance.

However, load-shifting decisions do not depend on the current state of the system. A major drawback of static load balancing is that the task assignment is fixed once it has been made, so the load cannot be moved to other devices at runtime.
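
As a sketch of this idea, assuming a hypothetical list of tasks and servers, a static scheme might fix the task-to-server mapping up front by position, never consulting runtime load:

```python
def static_assign(tasks, servers):
    """Divide tasks equally among servers by position; the mapping is
    decided once, before execution, and is never revisited at runtime."""
    return {task: servers[i % len(servers)] for i, task in enumerate(tasks)}

# The assignment ignores how busy each server actually is.
assignment = static_assign(["t1", "t2", "t3", "t4"], ["s1", "s2"])
```

If "s1" happens to receive all of the long-running tasks, nothing in the scheme will ever rebalance them.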

2. Dynamic Algorithm

The dynamic algorithm first finds the lightest-loaded server in the entire network and gives it priority for load balancing. This requires real-time communication with the network, which can add to the system's traffic. Here, the current state of the system is used to control the load.

The defining characteristic of dynamic algorithms is that load-transfer decisions are made based on the current system state. In such a system, processes can move from a highly used machine to an underutilized machine in real time.
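
A minimal sketch of this idea (the server names and load numbers are invented) picks the lightest server at dispatch time and updates the observed state:

```python
def pick_lightest(loads):
    """Return the server with the lowest current load -- a decision
    made dynamically from the present system state."""
    return min(loads, key=loads.get)

def dispatch(task_cost, loads):
    server = pick_lightest(loads)
    loads[server] += task_cost  # update the state the next decision sees
    return server

loads = {"s1": 30, "s2": 10, "s3": 55}
first = dispatch(5, loads)  # goes to s2, currently the lightest server
```
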

3. Round Robin Algorithm

As the name suggests, the round robin load balancing algorithm uses the round-robin method to assign jobs. It first randomly selects a starting node and then assigns tasks to the remaining nodes in a round-robin manner. This is one of the easiest methods of load balancing.

Processors are assigned to each process circularly, without defining any priority. The algorithm gives fast response when the workload is distributed uniformly among the processes. However, different processes have different load times, so some nodes may become heavily loaded while others remain under-utilised.
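
A round-robin rotation is easy to sketch with Python's itertools.cycle (the node names here are illustrative):

```python
from itertools import cycle

servers = ["node-a", "node-b", "node-c"]
rotation = cycle(servers)

def assign(task):
    """Give each incoming task to the next server in circular order,
    with no notion of priority or current load."""
    return next(rotation)

order = [assign(t) for t in ["t1", "t2", "t3", "t4"]]
```
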

4. Weighted Round Robin Load Balancing Algorithm

The weighted round robin load balancing algorithm was developed to address the most challenging issue of the round robin algorithm. In this algorithm, each server is assigned a weight, and tasks are distributed according to these weight values.

Processors with higher capacity are given a higher weight, so higher-capacity servers receive more tasks. When servers reach their full load level, they receive a steady flow of traffic.
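
One simple (non-interleaved) way to realize this, assuming hypothetical server weights, is to repeat each server in the rotation in proportion to its weight:

```python
def weighted_rotation(weights):
    """Yield servers in proportion to their weights: a server with
    weight 3 appears three times per cycle of a weight-1 server."""
    while True:
        for server, weight in weights.items():
            for _ in range(weight):
                yield server

gen = weighted_rotation({"big": 3, "small": 1})
one_cycle = [next(gen) for _ in range(4)]
```

Production balancers often use a "smooth" interleaved variant so the heavy server's turns are spread out over the cycle; this burst version only illustrates the proportions.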

5. Opportunistic Load Balancing Algorithm

The opportunistic load balancing (OLB) algorithm tries to keep every node busy. It never considers the current workload of each system: regardless of a node's current workload, OLB distributes all unfinished tasks to the available nodes.

Tasks may therefore be processed slowly, because OLB does not take a node's execution time into account, which can cause bottlenecks even while some nodes remain free.
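
A rough sketch of OLB dispatch (the task and node names are invented): every pending task is handed out in simple rotation so no node sits idle, but nothing ever checks load or expected runtime:

```python
from collections import deque

def olb_dispatch(tasks, nodes):
    """Assign all unfinished tasks across nodes in arbitrary rotation,
    ignoring each node's current load and each task's execution time."""
    queue, plan = deque(tasks), {n: [] for n in nodes}
    while queue:
        for node in nodes:
            if not queue:
                break
            plan[node].append(queue.popleft())
    return plan

plan = olb_dispatch(["t1", "t2", "t3", "t4", "t5"], ["n1", "n2"])
```

Every node ends up busy, but if "n1" draws the slow tasks it becomes a bottleneck while "n2" finishes early.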

6. Minimum To Minimum Load Balancing Algorithm

Under the minimum-to-minimum (Min-Min) load balancing algorithm, the minimum completion time of every pending task is calculated first. Among these, the task with the overall minimum value is selected, and the work is scheduled on the corresponding machine for that minimum time.

The ready time of that machine is then updated for the remaining tasks, and the scheduled task is removed from the list. This process continues until the final task is assigned. The algorithm works best where small tasks outnumber large tasks.
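
The procedure can be sketched as follows, assuming a made-up table of estimated execution times per task and machine:

```python
def min_min(exec_times):
    """Min-Min scheduling sketch. exec_times[task][machine] is the
    estimated run time. Repeatedly pick the task whose best-case
    completion time is smallest, schedule it on that machine, update
    the machine's ready time, and drop the task from the list."""
    machines = next(iter(exec_times.values())).keys()
    ready = {m: 0 for m in machines}   # when each machine becomes free
    pending = dict(exec_times)
    schedule = []
    while pending:
        task, machine, finish = min(
            ((t, m, ready[m] + cost)
             for t, costs in pending.items()
             for m, cost in costs.items()),
            key=lambda x: x[2],
        )
        schedule.append((task, machine))
        ready[machine] = finish
        del pending[task]
    return schedule

plan = min_min({"small": {"m1": 2, "m2": 3},
                "big":   {"m1": 10, "m2": 8}})
```

Here "small" is scheduled first on "m1" (completion time 2), after which "big" goes to "m2" rather than waiting behind it.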

Load balancing solutions can be categorized into two types –

o Software-based load balancers: Software-based load balancers run on standard hardware (desktop, PC) and standard operating systems.

o Hardware-based load balancers: Hardware-based load balancers are dedicated boxes that contain application-specific integrated circuits (ASICs) optimized for a particular use. ASICs allow network traffic to be forwarded at high speeds and are often used for transport-level load balancing, because hardware-based load balancing is faster than a software solution.

Major Examples of Load Balancers -

o Direct Routing Request Dispatch Technique: This method of request dispatch is similar to the one implemented in IBM's NetDispatcher. A real server and the load balancer share a virtual IP address. The load balancer has an interface configured with the virtual IP address that accepts request packets and routes them directly to the selected server.

o Dispatcher-Based Load Balancing Cluster: A dispatcher performs intelligent load balancing, using server availability, workload, capacity and other user-defined parameters to regulate where TCP/IP requests are sent. The dispatcher module of a load balancer can split HTTP requests among the nodes in a cluster. The dispatcher divides the load among the multiple servers in a cluster, so that services from different nodes act like one virtual service on a single IP address; consumers interact with the cluster as if it were a single server, with no knowledge of the back-end infrastructure.

o Linux Virtual Server: This is an open-source, enhanced load balancing solution used to build highly scalable and highly available network services such as HTTP, POP3, FTP, SMTP, media streaming and caching, and Voice over Internet Protocol (VoIP). It is a simple and powerful product designed for load balancing and fail-over. The load balancer itself is the primary entry point into the server cluster system. It runs IP Virtual Server (IPVS), which implements transport-layer load balancing in the Linux kernel, also known as layer-4 switching.

Types of Load Balancing

You will need to understand the different types of load balancing for your network. Server load balancing distributes load among a group of servers (for example, for relational databases), global server load balancing distributes traffic across different geographic locations, and DNS load balancing ensures domain name functionality. Load balancers can also be cloud-based.

Network Load Balancing

Network load balancing uses network-layer information to decide where network traffic should be sent. This is accomplished through Layer 4 load balancing, which handles TCP/UDP traffic. It is the fastest load balancing solution, but it cannot make content-based decisions when distributing traffic across servers.

HTTP(S) load balancing

HTTP(S) load balancing is the oldest type of load balancing, and it relies on Layer 7, meaning it operates at the application layer. It is the most flexible type of load balancing because it lets you make delivery decisions based on information retrieved from the HTTP request itself.
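
A toy sketch of such layer-7, content-based routing (the path prefixes and pool names are invented): the balancer reads the HTTP request path, which a Layer 4 balancer never sees, and picks a backend pool accordingly:

```python
ROUTES = {
    "/api/": ["api-1", "api-2"],   # API traffic gets its own pool
    "/static/": ["cdn-1"],         # static assets go to a cache node
}
DEFAULT_POOL = ["web-1", "web-2"]

def route(path):
    """Pick a backend pool by URL prefix, then a server within the
    pool via a simple deterministic hash of the path."""
    for prefix, pool in ROUTES.items():
        if path.startswith(prefix):
            break
    else:
        pool = DEFAULT_POOL
    return pool[sum(path.encode()) % len(pool)]
```
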

Internal Load Balancing

It is very similar to network load balancing but is leveraged to balance the infrastructure internally.

Load balancers can be further divided into hardware, software and virtual load
balancers.

Hardware Load Balancer

It relies on dedicated physical hardware to distribute network and application traffic. Such devices can handle a large traffic volume, but they come with a hefty price tag and have limited flexibility.

Software Load Balancer


It comes in open-source or commercial form and must be installed before it can be used. Software load balancers are more economical than hardware solutions.

Virtual Load Balancer

It differs from a software load balancer in that the load balancing software is deployed on a virtual machine rather than on a dedicated hardware device.

WHY IS CLOUD LOAD BALANCING IMPORTANT IN CLOUD COMPUTING? Here are some of the key benefits of load balancing in cloud computing.

Offers better performance

The technology of load balancing is less expensive and also easy to implement. This allows companies to work on client applications much faster and deliver better results at a lower cost.

Helps Maintain Website Traffic

Cloud load balancing can provide scalability to control website traffic. By using effective load balancers, it is possible to manage high-end traffic, which is achieved using network equipment and servers. E-commerce companies that need to deal with multiple visitors every second use cloud load balancing to manage and distribute workloads.

Can Handle Sudden Bursts in Traffic

Load balancers can handle sudden bursts in traffic. For example, when university results are published, a website may go down under too many requests. With a load balancer, one does not need to worry about the traffic surge: whatever the size of the traffic, the load balancer will divide the entire load of the website equally across the different servers and deliver maximum results in minimum response time.

Greater Flexibility

The main reason for using a load balancer is to protect the website from sudden
crashes. When the workload is distributed among different network servers or units, if
a single node fails, the load is transferred to another node. It offers flexibility, scalability
and the ability to handle traffic better.

Because of these characteristics, load balancers are beneficial in cloud environments: they avoid placing a heavy workload on a single server.

Conclusion
Thousands of people may access a website at the same time, which makes it challenging for the application to manage the load coming from all of these requests at once and can sometimes lead to system failure. Load balancing addresses this by spreading the requests across multiple servers.

*Scalability and Elasticity*

Scalability and elasticity are important characteristics of load balancing in distributed computing systems. Let's explore each of them:

Scalability: Scalability refers to the ability of a system to handle an increasing amount of work or accommodate a growing number of users or resources. In the context of load balancing, scalability means that the system can efficiently distribute incoming requests across multiple nodes or servers as the workload grows.

To achieve scalability in load balancing, several techniques are commonly employed:

1. Horizontal Scaling: This involves adding more nodes or servers to the system to
handle increased load. Load balancers distribute incoming requests across these
additional resources, allowing the system to handle a larger volume of traffic.

2. Load Balancer Redundancy: To ensure high availability and avoid single points of
failure, load balancers themselves can be scaled by implementing redundancy.
Multiple load balancers can be deployed in parallel, distributing the load across them
and providing fault tolerance. If one load balancer fails, others can take over
seamlessly.

3. Dynamic Configuration: Scalable load balancing systems often have dynamic configurations that allow for automatic adjustment of resources based on demand. This includes dynamically adding or removing nodes from the load balancing pool based on factors like CPU utilization, network traffic, or predefined thresholds.
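
Such dynamic reconfiguration can be sketched with a simple threshold rule (the node names and thresholds are illustrative):

```python
def adjust_pool(pool, cpu_utils, scale_out_at=0.75, scale_in_at=0.25):
    """Grow the pool when average CPU crosses the upper threshold,
    shrink it when it falls under the lower one (never below 1 node)."""
    avg = sum(cpu_utils) / len(cpu_utils)
    if avg > scale_out_at:
        pool = pool + [f"node-{len(pool) + 1}"]
    elif avg < scale_in_at and len(pool) > 1:
        pool = pool[:-1]
    return pool

grown = adjust_pool(["node-1"], [0.9])                  # overloaded: add a node
shrunk = adjust_pool(["node-1", "node-2"], [0.1, 0.1])  # idle: drop one
```
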

Elasticity: Elasticity is closely related to scalability but emphasizes the ability of a system to dynamically adapt its resource allocation in response to workload changes. In load balancing, elasticity refers to the ability to scale resources up or down based on real-time demand.

Elastic load balancing can be achieved through:

1. Auto Scaling: Auto scaling allows the system to automatically adjust the number of
nodes or servers based on predefined metrics or policies. When the workload
increases, new nodes can be provisioned to handle the additional load, and when the
demand decreases, unnecessary resources can be removed.
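
One common style of auto-scaling policy ("target tracking") sizes the fleet so that a per-node metric moves toward a target value; a sketch with invented numbers:

```python
import math

def desired_capacity(current_nodes, metric_per_node, target_per_node):
    """Scale the node count so the per-node metric approaches the
    target; rounding up biases toward keeping spare capacity."""
    return max(1, math.ceil(current_nodes * metric_per_node / target_per_node))

scale_out = desired_capacity(4, 90, 60)  # load above target: grow to 6
scale_in = desired_capacity(4, 30, 60)   # load below target: shrink to 2
```
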

2. Load Balancer Health Monitoring: Elastic load balancing systems continuously monitor the health and performance of individual nodes or servers. If a node becomes overloaded or unresponsive, the load balancer can dynamically redirect traffic to healthier nodes, ensuring efficient resource utilization.
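
Health-based redirection can be sketched as filtering the pool through a probe before any traffic is routed (the probe and node names are hypothetical):

```python
def healthy_pool(nodes, probe):
    """Keep only nodes whose health probe passes; the balancer then
    routes traffic exclusively to this filtered pool."""
    return [n for n in nodes if probe(n)]

status = {"n1": True, "n2": False, "n3": True}  # n2 is unresponsive
pool = healthy_pool(["n1", "n2", "n3"], lambda n: status[n])
```
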

3. Dynamic Load Distribution: Elastic load balancers can intelligently distribute incoming requests based on real-time conditions. For example, they can route requests to nodes with lower resource utilization or closer proximity to minimize latency.

By combining scalability and elasticity, load balancing systems can efficiently distribute
workload across distributed resources, ensuring optimal performance, responsiveness,
and resource utilization. These characteristics are particularly important in cloud
computing environments, where workloads can vary significantly over time.

*Cloud services and platforms: Compute services*

Compute services are a fundamental component of cloud computing platforms. They provide the necessary computing resources to run applications, process data, and perform various computational tasks. Here are some prominent compute services offered by cloud providers:

1. Amazon EC2 (Elastic Compute Cloud): EC2 is a web service provided by Amazon
Web Services (AWS) that offers resizable virtual servers in the cloud. It allows users to
rent virtual machines (EC2 instances) and provides flexibility in terms of instance types,
operating systems, and configurations. EC2 instances can be rapidly scaled up or down
based on demand, offering a highly scalable compute infrastructure.

2. Microsoft Azure Virtual Machines: Azure Virtual Machines provide users with on-
demand, scalable computing resources in the Microsoft Azure cloud. Users can deploy
virtual machines with various operating systems and configurations, choosing from a
wide range of instance types to meet their specific requirements.

3. Google Compute Engine: Compute Engine is the Infrastructure as a Service (IaaS) offering of Google Cloud Platform (GCP). It allows users to create and manage virtual machines with customizable configurations, including options for various CPU and memory sizes. Compute Engine provides scalable and flexible compute resources in the Google Cloud environment.

4. IBM Virtual Servers: IBM Cloud offers Virtual Servers, which are scalable and
customizable compute resources. Users can choose from a variety of instance types,
including bare metal servers, virtual machines, and GPU-enabled instances. IBM Virtual
Servers provide the flexibility to customize network and storage configurations
according to specific workload needs.

5. Oracle Compute: Oracle Cloud Infrastructure (OCI) provides compute services through Oracle Compute, allowing users to provision and manage virtual machines in the Oracle Cloud. It offers a range of compute shapes, including general-purpose instances, memory-optimized instances, and GPU instances, enabling users to optimize their compute resources for different workloads.

These compute services provide the necessary infrastructure to deploy and manage
applications, whether they require simple virtual machines or more specialized
instances. They offer scalability, flexibility, and on-demand provisioning, allowing users
to scale their compute resources up or down based on workload demands.
Additionally, these services often integrate with other cloud services like storage,
networking, and databases, enabling users to build comprehensive cloud-based
solutions.

*Storage services*

1. Amazon S3 (Simple Storage Service): Amazon S3 is a highly scalable object storage service provided by AWS. It allows users to store and retrieve any amount of data from anywhere on the web. S3 provides high durability, availability, and low-latency access to data. It is commonly used for backup and restore, data archiving, content distribution, and hosting static websites.

2. Azure Blob Storage: Azure Blob Storage is a scalable object storage service in
Microsoft Azure. It offers high availability, durability, and global accessibility for storing
large amounts of unstructured data, such as documents, images, videos, and log files.
Blob Storage provides various storage tiers to optimize costs based on data access
patterns.

3. Google Cloud Storage: Google Cloud Storage is a scalable and secure object storage
service in Google Cloud Platform (GCP). It provides a simple and cost-effective solution
for storing and retrieving unstructured data. Google Cloud Storage offers multiple
storage classes, including multi-regional, regional, and nearline, to meet different
performance and cost requirements.

4. IBM Cloud Object Storage: IBM Cloud Object Storage is a scalable and secure storage service offered by IBM Cloud. It provides durable and highly available storage for large volumes of unstructured data. IBM Cloud Object Storage supports different storage tiers, data encryption, and integration with other IBM Cloud services.

*Application Services*

1. AWS Lambda: AWS Lambda is a serverless compute service provided by AWS. It allows developers to run code without provisioning or managing servers. Lambda functions can be triggered by various events, such as changes in data, API calls, or scheduled events. It is commonly used for building event-driven architectures, data processing, and executing small, self-contained tasks.
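
For a sense of the programming model, here is a minimal Python Lambda handler sketch. The (event, context) signature is the standard Lambda Python interface; the event shape and the API Gateway style JSON response shown here are illustrative assumptions:

```python
import json

def handler(event, context):
    """Runs on demand when the function is triggered; there is no
    server to provision or manage. 'event' carries the trigger payload."""
    name = (event or {}).get("name", "world")  # hypothetical event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```
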
2. Azure Functions: Azure Functions is a serverless compute service in Microsoft Azure.
It enables developers to run event-triggered code in a serverless environment. Azure
Functions supports multiple programming languages and integrates with various
Azure services, making it suitable for building event-driven applications, data
processing pipelines, and microservices.

3. Google Cloud Functions: Google Cloud Functions is a serverless compute service in GCP. It allows developers to write and deploy event-driven functions that automatically scale based on demand. Cloud Functions can be triggered by various events from Google Cloud services, HTTP requests, or Pub/Sub messages.

4. IBM Cloud Functions: IBM Cloud Functions is a serverless compute service offered by IBM Cloud. It allows developers to run event-driven functions in a serverless environment. IBM Cloud Functions supports multiple programming languages and integrates with other IBM Cloud services, making it suitable for building serverless applications and event-driven architectures.

These storage services and application services provided by cloud computing platforms offer scalable, reliable, and cost-effective solutions for data storage, processing, and application development. They enable organizations to leverage the benefits of cloud computing while reducing the burden of managing infrastructure and focusing more on their core business goals.
