
LOAD BALANCING
&
LOAD SHARING
APPROACH

Presented by: GROUP-9
Presented to: Dr. Abhishek Kumar Shukla
GROUP MEMBERS
01 Abhay Jhalani - 21BCE10727
02 Shubham Saxena - 21BCE11301
03 Pranav Nehe - 21BCE11069
04 Harshal Wakchaure - 21BCE10983
05 Mrityunjay Kumar - 21BCE11411
06 Yashraj Jadhav - 21BCE10331
07 Ashutosh Mishra - 21BCE11217
Table of Contents

01 Introduction to Load Balancing & Load Sharing
02 Load Balancing: Types, Goals & Algorithms
03 Load Sharing: Types, Benefits, and Algorithms
04 Comparison between Load Balancing & Load Sharing
05 Load Balancing and Load Sharing Techniques
06 Real-world Examples and Case Studies
07 Challenges and Future Trends
08 Conclusion
Introduction to Load Balancing
and Load Sharing
Load balancing and load sharing are two techniques
used in computer networking and distributed
systems to distribute incoming requests or tasks
among multiple resources to optimize performance,
maximize resource utilization, and ensure high
availability.
Load balancing is a broader concept that
encompasses various strategies for distributing
workloads across multiple resources, while load
sharing is a specific form of load balancing focused
on dynamically sharing the workload among
available resources to achieve optimal performance
and resource utilization.
Load Balancing

Load balancing is the method of distributing network traffic equally across a
pool of resources that support an application. Modern applications must
process millions of users simultaneously and return the correct text, videos,
images, and other data to each user in a fast and reliable manner.
To handle such high volumes of traffic, most applications have many
resource servers with duplicate data between them.
A load balancer is a device that sits between the user and the server group
and acts as an invisible facilitator, ensuring that all resource servers are
used equally.
The load balancer assigns the first task to the first server and the next
task to the next available server, continuing in this fashion.
How it works?

Load balancers handle incoming requests from users for information and other services. They sit
between the servers that handle those requests and the internet. Once a request is received, the
load balancer first determines which server in a pool is available and online and then routes the
request to that server. During times of heavy loads, a load balancer acts promptly and can
dynamically add servers in response to spikes in traffic. Conversely, load balancers can drop
servers if demand is low.
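The flow described above can be sketched as a minimal round-robin dispatcher. This is an illustrative sketch, not any particular product's API; the server names and the online/offline tracking are made up for the example:

```python
from itertools import cycle

class LoadBalancer:
    """Minimal round-robin load balancer sketch (illustrative names)."""

    def __init__(self, servers):
        self.servers = list(servers)
        self._pool = cycle(self.servers)   # endless round-robin iterator
        self.online = set(self.servers)    # servers currently healthy

    def mark_offline(self, server):
        self.online.discard(server)

    def mark_online(self, server):
        self.online.add(server)

    def route(self, request):
        """Pick the next available server and route the request to it."""
        for _ in range(len(self.servers)):
            server = next(self._pool)
            if server in self.online:      # skip servers that are down
                return server, request
        raise RuntimeError("no servers available")

lb = LoadBalancer(["srv-a", "srv-b", "srv-c"])
lb.mark_offline("srv-b")
print(lb.route("GET /index.html"))  # → ('srv-a', 'GET /index.html')
```

Dropping or adding servers at runtime, as the paragraph describes, corresponds here to `mark_offline`/`mark_online`; a real balancer would drive these from periodic health checks.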
Why Load Balancing?

Security: A load balancer adds a layer of safety to your site with little or no
change to your application.
Protect applications from emerging threats: A Web Application Firewall
(WAF) in the load balancer shields your site.
Authenticate user access: The load balancer can require a username and
password before granting access to your site, protecting against
unauthorized access.
Protect against DDoS attacks: The load balancer can detect and drop
distributed denial-of-service (DDoS) traffic before it reaches your site.
Performance: Load balancers can reduce the load on your web servers and
optimize traffic for a better user experience.
Traffic compression: A load balancer can compress site traffic, giving your
users a much better experience on your site.
Types of Load Balancing
Load Sharing

In networking, load sharing denotes a router sharing the forwarding of
traffic across multiple paths when they are available in the routing table.
When equal-cost paths exist, forwarding follows the load-sharing
algorithm. In load-sharing systems, all nodes share the overall workload,
and the failure of some nodes increases the pressure on the rest of the
nodes. The load-sharing approach ensures that no node is kept idle, so that
each node can share the load. Load sharing works on the notion of
splitting traffic across connections. Load sharing is simpler to implement
than load balancing.
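The router behaviour above is often per-flow: a hash of the flow's addresses picks one of the equal-cost paths, so all packets of one connection follow the same path and are not reordered. A minimal sketch (the addresses and path names are invented for illustration):

```python
import zlib

def pick_path(src_ip, dst_ip, paths):
    """Per-flow load sharing: hash the flow so every packet of one
    connection takes the same equal-cost path."""
    flow_key = f"{src_ip}->{dst_ip}".encode()
    return paths[zlib.crc32(flow_key) % len(paths)]

paths = ["via-10.0.0.1", "via-10.0.0.2"]
# The same flow always maps to the same path:
assert pick_path("192.168.1.5", "8.8.8.8", paths) == \
       pick_path("192.168.1.5", "8.8.8.8", paths)
```

Real routers typically hash more fields (ports, protocol) and use hardware hash functions, but the splitting principle is the same.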
Types of Load Sharing
Static Load Sharing:
Assignments of tasks to nodes are predetermined and do not change during runtime.
Works well when the workload is predictable and uniform across tasks.

Dynamic Load Sharing:
Tasks are dynamically assigned to nodes based on their current load and availability.
More flexible and can adapt to changing workloads and system conditions.

Centralized Load Sharing:
A single centralized coordinator or scheduler assigns tasks to nodes.
Simpler to implement but can become a bottleneck and a single point of failure.

Decentralized Load Sharing:
Nodes coordinate among themselves to distribute tasks without a central authority.
Scales better and improves fault tolerance but can be complex to implement due to the need for
efficient communication and decision-making protocols.

Hybrid Load Sharing:
Combines aspects of both centralized and decentralized methods to leverage the benefits of
both.
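The static/dynamic distinction above fits in a few lines. The node names and the load dictionary are illustrative:

```python
def assign_static(tasks, nodes):
    """Static: task i always goes to node i % n, fixed before runtime."""
    return {t: nodes[i % len(nodes)] for i, t in enumerate(tasks)}

def assign_dynamic(task, load):
    """Dynamic: send each task to whichever node is least loaded now."""
    node = min(load, key=load.get)
    load[node] += 1          # that node's load grows by one task
    return node

print(assign_static(["t1", "t2", "t3"], ["n1", "n2"]))
# → {'t1': 'n1', 't2': 'n2', 't3': 'n1'}

load = {"n1": 5, "n2": 2, "n3": 7}
print(assign_dynamic("task-x", load))  # → 'n2' (the least-loaded node)
```

A centralized scheduler would run `assign_dynamic` in one place; in a decentralized design, each node would need a protocol to learn its peers' load before making the same decision.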
Benefits of Load Sharing
Improved Performance:
Efficiently distributing tasks ensures that no single node becomes a bottleneck, thereby enhancing the system's
overall performance.

Scalability:
Load sharing enables systems to handle increasing loads simply by adding more nodes to the network.

Fault Tolerance:
Distributing tasks among multiple nodes can prevent system failures due to the breakdown of a single node.
Tasks can be reallocated from a failed node to others.

Resource Utilization:
Maximizes the use of available resources, ensuring that all parts of the system are contributing to the workload,
reducing idle times.

Flexibility:
Adapts to changes in the workload and the operational status of nodes in real time.
Comparison between Load Balancing and
Load Sharing
Load balancing and load sharing are both techniques used in distributed systems to manage and
distribute workloads effectively. However, they differ in their approach and implementation:

1. Definition:
Load Balancing: Load balancing refers to the process of distributing incoming network traffic or
workload across multiple servers or resources to ensure optimal resource utilization, reliability,
and performance.
Load Sharing: Load sharing involves sharing or distributing workload among multiple resources to
achieve better resource utilization and performance. It can be centralized or decentralized.
2. Objective:
Load Balancing: The primary objective of load balancing is to optimize resource usage, maximize
throughput, minimize response time, and avoid overloading any single resource.
Load Sharing: The main objective of load sharing is to evenly distribute the workload among
multiple resources to improve performance, avoid bottlenecks, and enhance fault tolerance.
3. Approach:

Load Balancing: Load balancing typically involves the use of algorithms to distribute workload
based on factors such as server capacity, current load, and proximity to the client.

Load Sharing: Load sharing can be achieved through various approaches, including equal load
sharing, proportional load sharing, and dynamic load sharing.

4. Implementation:

Load Balancing: Load balancing can be implemented using hardware-based solutions (e.g., load
balancer devices) or software-based solutions (e.g., load balancing algorithms implemented in
software).

Load Sharing: Load sharing can be implemented using centralized or decentralized
approaches, depending on the system architecture and requirements.
5. Fault Tolerance:

Load Balancing: Load balancing can improve fault tolerance by distributing workload away
from failed or overloaded resources to healthy resources.

Load Sharing: Load sharing can also improve fault tolerance by distributing workload among
multiple resources, reducing the impact of failures on the overall system.

6. Scalability:

Load Balancing: Load balancing can improve scalability by enabling the addition of more
resources to the system without significantly impacting performance.

Load Sharing: Load sharing can also improve scalability by distributing workload among
multiple resources, allowing the system to handle a larger number of requests or users.
Load Balancing Techniques:

Round Robin: Distributes requests evenly across servers in a cyclic manner.
Weighted Round Robin: Assigns different weights to servers based on their
capacity.
Least Connection: Routes requests to the server with the fewest active connections.
Least Response Time: Directs requests to the server with the shortest response time.
IP Hash: Uses the client's IP address to determine the server for request routing.
Content-Based Routing: Routes requests based on specific attributes or content.
Dynamic Load Balancing Algorithms: Adaptively adjusts server selection based on
real-time metrics.
Software-Defined Networking (SDN): Allows centralized control for dynamic load
balancing.
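Two of the selection rules above can be sketched directly; the server names, weights, and connection counts here are illustrative:

```python
def least_connections(active):
    """Least Connection: pick the server with the fewest active connections."""
    return min(active, key=active.get)

def weighted_round_robin(servers, weights):
    """Weighted Round Robin: yield servers in proportion to their capacity."""
    while True:
        for server, weight in zip(servers, weights):
            for _ in range(weight):
                yield server

active = {"s1": 12, "s2": 4, "s3": 9}
print(least_connections(active))  # → 's2'

wrr = weighted_round_robin(["big", "small"], [3, 1])
print([next(wrr) for _ in range(4)])  # → ['big', 'big', 'big', 'small']
```

Least Response Time and Dynamic algorithms follow the same shape, only with latency measurements or real-time metrics in place of the connection counts.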
Load Sharing Techniques:

Task Partitioning: Divides workload into smaller tasks distributed among resources.
Work Queues: Centralized or distributed queues for task distribution.
Task Migration: Dynamically moves tasks between resources based on workload.
Dynamic Resource Allocation: Provisions or deprovisions resources based on
workload demands.
Load-based Routing: Routes traffic based on current load on each resource.
Parallelism and Parallel Processing: Exploits parallel execution for efficient
workload distribution.
Replication and Redundancy: Replicates data or services for workload distribution
and fault tolerance.
Task Scheduling: Determines when and where to execute tasks based on various
factors.
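Task partitioning and a pull-based work queue, the first two techniques above, can be sketched together; the squaring step stands in for real processing:

```python
from queue import Queue
from threading import Thread

def partition(work, n):
    """Task partitioning: split a workload into n roughly equal chunks."""
    return [work[i::n] for i in range(n)]

# Work queue: idle workers pull the next task, so load spreads itself.
def worker(q, results):
    while True:
        item = q.get()
        if item is None:          # poison pill: shut this worker down
            break
        results.append(item * item)   # stand-in for real processing
        q.task_done()

q, results = Queue(), []
threads = [Thread(target=worker, args=(q, results)) for _ in range(3)]
for t in threads:
    t.start()
for item in range(10):
    q.put(item)
q.join()                          # wait until every task is processed
for t in threads:
    q.put(None)
for t in threads:
    t.join()
print(sorted(results))            # squares of 0..9
```

Task migration and dynamic resource allocation extend this picture: tasks would move between queues, and workers would be started or stopped as the queue depth changes.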
Load Balancing Examples:
Content Delivery Networks (CDNs): CDNs use load balancing to distribute web
content (such as images, videos, and scripts) across multiple edge servers. When
a user requests content, the CDN directs the request to the nearest or least
congested server, reducing latency and improving user experience.
E-commerce Websites: Online retailers often employ load balancing to manage
high traffic volumes during peak shopping periods. Load balancers distribute
incoming requests across multiple web servers, ensuring fast response times and
preventing server overload.
Online Gaming: Multiplayer online games use load balancing to distribute player
traffic across game servers. Load balancers ensure fair gameplay by evenly
distributing players across servers and preventing server congestion, leading to a
smoother gaming experience for all players.
Load Sharing Examples:
Distributed Computing Platforms: Platforms like Apache Hadoop and Apache
Spark utilize load sharing techniques to distribute data processing tasks across a
cluster of nodes. Tasks are partitioned and distributed among available nodes,
allowing for parallel processing and efficient resource utilization.
Peer-to-Peer (P2P) File Sharing: P2P file sharing networks distribute file downloads
among participating peers. When a user requests a file, the file is divided into
smaller chunks and distributed across multiple peers. Each peer shares its
available chunks with others, allowing for faster downloads and reducing the load
on any single peer.
Distributed Database Systems: Distributed databases, such as Google Bigtable
and Apache Cassandra, use load sharing to distribute data storage and
processing tasks across multiple nodes. Data is partitioned and replicated across
nodes, ensuring high availability, fault tolerance, and efficient query processing.
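The key-to-node mapping used by stores like Cassandra is commonly a consistent-hash ring. Below is a simplified sketch; the node names, the virtual-node count, and the use of MD5 are illustrative choices, not Cassandra's actual implementation:

```python
import hashlib
from bisect import bisect

class ConsistentHashRing:
    """Sketch of mapping keys to storage nodes on a hash ring."""

    def __init__(self, nodes, vnodes=100):
        # Each node gets many virtual positions so load spreads evenly.
        self.ring = sorted(
            (self._h(f"{node}#{i}"), node)
            for node in nodes for i in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _h(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, key):
        """A key belongs to the first ring position at or after its hash."""
        idx = bisect(self.keys, self._h(key)) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("user:42"))
```

The payoff of this scheme is that adding or removing a node remaps only the keys adjacent to it on the ring, rather than reshuffling all data.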
Challenges And Future Trends
Load balancing and load sharing are crucial aspects of distributed systems, and they face several challenges
and trends in the present and future:
Scalability: As systems grow in size and complexity, ensuring efficient load balancing becomes more
challenging. Future trends involve developing scalable algorithms and architectures capable of handling
increasingly large workloads.
Dynamic Workloads: Systems often experience fluctuating workloads, which require dynamic load
balancing strategies. Future trends focus on adaptive load balancing techniques that can adjust
in real time based on changes in demand.
Heterogeneity: With the proliferation of diverse computing platforms (cloud, edge, IoT devices), load
balancing must cater to heterogeneous environments. Future trends involve designing load balancing
solutions that can efficiently distribute workloads across different types of resources.
Fault Tolerance: Load balancers need to be resilient to failures and ensure high availability of services.
Future trends include integrating fault tolerance mechanisms into load balancing algorithms to handle
failures gracefully.
Cost Optimization: In cloud environments, cost optimization is essential. Future trends involve load
balancing algorithms that consider both performance metrics and cost factors to optimize resource
utilization and minimize expenses.
Security: Load balancers are potential targets for cyberattacks. Future trends include enhancing load
balancer security through techniques such as encryption, authentication, and anomaly detection to
prevent unauthorized access and ensure data integrity.
Edge Computing: With the rise of edge computing, load balancing becomes critical for distributing
workloads across edge devices efficiently. Future trends involve developing lightweight load
balancing mechanisms tailored for edge environments with limited resources and intermittent
connectivity.
Machine Learning Integration: Leveraging machine learning techniques for intelligent load balancing
is a growing trend. Future advancements may include using predictive analytics and reinforcement
learning to optimize load distribution based on historical data and real-time insights.
Containerization and Orchestration: With the widespread adoption of containerization and
orchestration platforms like Kubernetes, load balancing solutions need to integrate seamlessly with
these technologies. Future trends involve developing load balancers specifically designed for
containerized environments.
Multi-Cloud Load Balancing: Organizations increasingly rely on multiple cloud providers for
redundancy and flexibility. Future trends include load balancing solutions capable of efficiently
distributing workloads across multiple cloud environments while minimizing latency and maximizing
performance.
Conclusion
In conclusion, understanding load balancing and load sharing is
paramount in the design and management of parallel and distributed
systems.
Incorporating both load balancing and load sharing strategies in
parallel and distributed systems design is essential for achieving high
performance, scalability, and reliability. As technologies continue to
evolve and computing demands grow, mastering these concepts will
be crucial for building robust and resilient distributed systems capable
of meeting the challenges of tomorrow's computing landscape.
Thank
you!!
