Load Balancing Algorithms-Worksheet1

Uploaded by Mones Qasaiemh

Load balancing algorithms

Introduction
Load balancing algorithms are critical tools for managing network
traffic and ensuring optimal performance. However, with the myriad of
options available, it can be challenging to determine which is the best
fit for your unique needs.
The Importance of Load Balancing Algorithms

Load balancing algorithms play a pivotal role in managing network
traffic and optimizing performance. They distribute network traffic
evenly across multiple servers, ensuring that no single server becomes
overwhelmed with too much traffic. This helps to prevent server
crashes and slow loading times, which can be detrimental to user
experience and can have a significant impact on a company's bottom
line.

Moreover, load balancing algorithms can help to increase the
availability and reliability of applications and services. By distributing
traffic across multiple servers, they can ensure that if one server goes
down, the others can still handle the load. This can greatly reduce
downtime and ensure that users can always access the services they
need.

Round Robin Algorithm

The Round Robin algorithm is a basic load balancing technique that
sequentially distributes requests across all servers in a loop. Each
request is assigned to the next server in the queue, ensuring an even
distribution of traffic. However, it doesn't account for the current load
or capacity of the servers, which can lead to imbalances if servers vary
in performance or workload. This algorithm is most effective in
environments where all servers have similar capabilities and workloads.
Many techniques are used when load balancing traffic, and round robin
is a key one. Round robin load balancers rotate the server that traffic is
being transmitted to. Traffic will go to Server A before passing to Server
B and so on.
While some load balancing techniques regularly perform health checks
on servers to find out which ones are at maximum capacity and which
ones are not, round robin load balancing sends the same number of
requests to each server. This makes it ideal for information systems with
servers that each have the same capacity. At the same time, a round
robin load balancer will check up on a server as it transmits traffic to
it, so that it can bypass that server should it fail.
For information systems with application servers that do not match
each other in capacity, weighted round robin load balancing is an ideal
solution. It prevents servers with smaller capacities from failing while
keeping servers with higher capacities from becoming overloaded. This
in turn means the system won't lag.
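To make the rotation concrete, here is a minimal sketch of a round robin balancer in Python. The server names and the `healthy` set are illustrative; a real balancer would update health status through periodic health checks rather than manual calls.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal round robin balancer: rotates through servers in order,
    skipping any server currently marked unhealthy."""

    def __init__(self, servers):
        self.servers = list(servers)
        self._cycle = cycle(self.servers)
        # In a real system this set would be maintained by health checks.
        self.healthy = set(self.servers)

    def next_server(self):
        for _ in range(len(self.servers)):
            server = next(self._cycle)
            if server in self.healthy:   # bypass a failed server
                return server
        raise RuntimeError("no healthy servers available")

lb = RoundRobinBalancer(["A", "B", "C"])
print([lb.next_server() for _ in range(4)])   # ['A', 'B', 'C', 'A'] -- even rotation
lb.healthy.discard("B")                       # simulate B failing a health check
print([lb.next_server() for _ in range(3)])   # ['C', 'A', 'C'] -- B is bypassed
```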
RR Scheduling Example:

There are six processes named P1, P2, P3, P4, P5 and P6. Their
arrival time and burst time are given below in the table. The time
quantum of the system is 4 units.

The completion time, turnaround time and waiting time will be calculated as shown in the table
below.
Turn Around Time = Completion Time - Arrival Time
Waiting Time = Turn Around Time - Burst Time

Avg Waiting Time = (12+16+6+8+15+11)/6 = 68/6 ≈ 11.33 units
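The worksheet's table of arrival and burst times is not reproduced here, so the sketch below uses its own small hypothetical process set, but it applies the same formulas given above (Turn Around Time = Completion Time - Arrival Time; Waiting Time = Turn Around Time - Burst Time) with a configurable time quantum:

```python
from collections import deque

def rr_schedule(procs, quantum):
    """Round Robin CPU scheduling. procs: list of (name, arrival, burst).
    Returns {name: (completion, turnaround, waiting)}."""
    procs = sorted(procs, key=lambda p: p[1])
    burst = {n: b for n, _, b in procs}
    arrival = {n: a for n, a, _ in procs}
    remaining = dict(burst)
    queue, time, i, done = deque(), 0, 0, {}
    while len(done) < len(procs):
        while i < len(procs) and procs[i][1] <= time:   # admit new arrivals
            queue.append(procs[i][0]); i += 1
        if not queue:
            time = procs[i][1]                          # CPU idle: jump to next arrival
            continue
        name = queue.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        while i < len(procs) and procs[i][1] <= time:   # arrivals during this slice queue first
            queue.append(procs[i][0]); i += 1
        if remaining[name]:
            queue.append(name)                          # quantum expired, back of the queue
        else:
            tat = time - arrival[name]                  # turnaround = completion - arrival
            done[name] = (time, tat, tat - burst[name]) # waiting = turnaround - burst
    return done

# Hypothetical example: two processes, both arriving at 0, burst 3, quantum 2
print(rr_schedule([("P1", 0, 3), ("P2", 0, 3)], quantum=2))
# {'P1': (5, 5, 2), 'P2': (6, 6, 3)}
```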

More ref: https://www.youtube.com/watch?v=Ej-gsTkCrHU


Least Connections Algorithm

The Least Connections algorithm directs incoming requests to the
server with the fewest active connections, assuming that these servers
have more available capacity to handle new traffic. While it helps
balance workloads by considering the number of active connections, it
doesn't factor in the varying processing power of servers. This can lead
to inefficiencies in environments where servers have different
performance levels. However, it is effective in scenarios where traffic is
unpredictable and fluctuates in intensity.

Ref: https://www.youtube.com/watch?v=tAAmZ3bz8AA
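A minimal sketch of the idea, with hypothetical server names: each new request goes to the server with the fewest connections currently open, and finished requests release their connection.

```python
class LeastConnectionsBalancer:
    """Minimal least-connections balancer: picks the server with the
    fewest active connections for each new request."""

    def __init__(self, servers):
        self.active = {s: 0 for s in servers}   # server -> open connection count

    def acquire(self):
        server = min(self.active, key=self.active.get)  # fewest active connections
        self.active[server] += 1
        return server

    def release(self, server):
        self.active[server] -= 1   # a request finished; free the connection

lb = LeastConnectionsBalancer(["A", "B"])
print(lb.acquire())   # A (tie broken by listing order)
print(lb.acquire())   # B (A now has 1 active connection)
lb.release("A")       # A's request finishes
print(lb.acquire())   # A again, since it now has fewer active connections
```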

Weighted Distribution Algorithm

The Weighted Distribution algorithm goes a step further than the Least
Connections algorithm by taking into account the processing power of
each server. In this algorithm, each server is assigned a weight based on
its capacity. Servers with higher weights receive more requests than
servers with lower weights.
This algorithm can be an excellent choice for environments where
servers have different capabilities. It ensures that more powerful
servers handle a larger share of the traffic, which can help to prevent
less powerful servers from becoming overwhelmed.
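One simple way to realize weighted distribution, sketched below with hypothetical weights, is to repeat each server in the rotation in proportion to its weight. Production balancers typically use a smoother interleaving (e.g. smooth weighted round robin), but the share of traffic each server receives is the same.

```python
from itertools import cycle

class WeightedBalancer:
    """Minimal weighted distribution: a server with weight w appears
    w times in the rotation, so it receives w times the traffic."""

    def __init__(self, weights):
        # e.g. {"big": 3, "small": 1} -> big, big, big, small, repeating
        self._cycle = cycle([s for s, w in weights.items() for _ in range(w)])

    def next_server(self):
        return next(self._cycle)

lb = WeightedBalancer({"big": 3, "small": 1})
picks = [lb.next_server() for _ in range(8)]
print(picks.count("big"), picks.count("small"))   # 6 2 -- a 3:1 split
```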
Session Persistence Algorithm

The Session Persistence algorithm, also known as the Sticky Session
algorithm, directs all requests from a particular user session to the
same server. This can be useful for applications that need to
maintain session information or state between requests, such as e-
commerce shopping carts or web-based email services.
However, this algorithm can lead to imbalances if some user sessions
generate more traffic than others. Therefore, it may not be the best
choice for environments where traffic is highly unpredictable or varies
widely in intensity.
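The stickiness can be sketched as a small mapping from session ID to server: the first request of a session is placed round-robin, and every later request with the same session ID reuses that assignment. Session IDs and server names below are hypothetical; real balancers usually carry the session ID in a cookie.

```python
class StickySessionBalancer:
    """Minimal sticky-session balancer: new sessions are assigned
    round-robin; existing sessions always return to their server."""

    def __init__(self, servers):
        self.servers = list(servers)
        self._next = 0
        self._assignments = {}   # session_id -> server

    def route(self, session_id):
        if session_id not in self._assignments:
            self._assignments[session_id] = self.servers[self._next % len(self.servers)]
            self._next += 1
        return self._assignments[session_id]

lb = StickySessionBalancer(["A", "B"])
print(lb.route("cart-42"))   # A
print(lb.route("cart-99"))   # B
print(lb.route("cart-42"))   # A again: the session sticks to its server
```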

IP Hash Algorithm

The IP Hash algorithm uses the IP address of the client and server to
determine where to route traffic. This algorithm provides a consistent
mapping, meaning that as long as the number of servers remains the
same, a client will always be directed to the same server.
This can be beneficial for applications that need to maintain state
between requests. However, it can lead to imbalances if a large number
of requests come from a small number of IP addresses. Therefore, it
may not be the best choice for environments where traffic is
concentrated in a few client IP addresses.
Ref: https://www.youtube.com/watch?v=dBmxNsS3BGE
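A minimal sketch of the hashing step, using only the client IP for simplicity (some implementations also mix in the server-side address): a stable hash of the address, taken modulo the server count, picks the server, so the same client always lands on the same server while the server list is unchanged.

```python
import hashlib

def ip_hash_route(client_ip, servers):
    """Map a client IP to a server via a stable hash. The same client
    reaches the same server as long as the server list is unchanged."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(servers)
    return servers[index]

servers = ["A", "B", "C"]
first = ip_hash_route("203.0.113.7", servers)
# Consistency: repeated lookups for one client always give the same server.
assert all(ip_hash_route("203.0.113.7", servers) == first for _ in range(5))
```

Note the use of `hashlib` rather than Python's built-in `hash()`, which is randomized between runs and would break the consistent mapping across balancer restarts.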
Code Example: Load Balancing with TensorFlow on Colab

Configuration:

1. Runtime Type: Python 3
   • Description: Default programming language for machine learning and data
     science tasks. Python 3 should be selected for most modern projects.
2. Hardware Accelerators:
   • CPU: General-purpose processor, slower for deep learning and large
     datasets.
   • T4 GPU: Powerful for parallel computations, ideal for training machine
     learning models. Fast and free in Colab.
   • A100 GPU: Advanced, highly powerful GPU for large-scale AI tasks. Available
     only with Colab Pro+.
   • L4 GPU: Designed for AI inference tasks, available in premium Colab tiers.
   • TPU v2-8: Specially optimized for TensorFlow, excellent for large-scale deep
     learning tasks. Free to use in Colab.
3. Purchase Additional Compute Units
   • Description: Upgrade to Colab Pro or Pro+ for better GPUs (A100, L4),
     longer runtimes, and higher resource priority.

Step 1: Enable GPU on Google Colab

In Google Colab, go to Runtime -> Change runtime type, and select GPU
as the hardware accelerator.

Step 2: Import Libraries and Set Up Devices
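The original worksheet's code listing did not survive extraction, so the following is a minimal sketch of what such a round-robin CPU/GPU distribution might look like. The matrix size and task count are illustrative, and the sketch falls back to CPU-only when no GPU is present.

```python
import time
import tensorflow as tf

# Build the device list: CPU always, GPU only if the runtime has one.
devices = ["/CPU:0"]
if tf.config.list_physical_devices("GPU"):
    devices.append("/GPU:0")

a = tf.random.normal((500, 500))
b = tf.random.normal((500, 500))

for i in range(4):
    dev = devices[i % len(devices)]       # round-robin device selection
    with tf.device(dev):
        start = time.perf_counter()
        c = tf.matmul(a, b)
        _ = c.numpy()                     # force execution before stopping the timer
        elapsed = time.perf_counter() - start
    print(f"Task {i}: {dev} took {elapsed:.4f} s")
```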


When you run this code, TensorFlow will distribute the matrix multiplication tasks between the
CPU and GPU using a round-robin approach. The time taken for each task on the different
devices will be printed, and you'll notice that the GPU (if available) typically performs faster
than the CPU.
