
Mod4_DC MU

1) Compare and contrast load balancing and load sharing using examples.

Load Balancing Example:

 Goal: Equalize load across all servers.

 Action:

o Server A sends 2.5 tasks to D and 2.5 to B (if fractional tasks are supported or
can be approximated).

o The final task distribution might be A: 10, B: 7, C: 10, D: 8 — as close to equal as possible.

Focus: Equal load → high overhead, since every node's load must be tracked and rebalanced.

Load Sharing Example:

 Goal: Prevent Server D from being idle.

 Action:

o Server A offloads 5 tasks to Server D.

o Final task distribution: A: 10, B: 5, C: 10, D: 5.

Focus: Avoid idle nodes → low overhead and simple decision-making.
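The load-sharing decision above can be sketched in Python. This is a minimal illustration, not a real scheduler: the start state (A: 15, B: 5, C: 10, D: 0) and the batch size of 5 are taken from the example, and the function name is made up for this sketch.

```python
# Minimal sketch of load SHARING: move one batch of tasks from the busiest
# node to an idle node. No attempt is made to equalize all loads, which is
# why the overhead stays low compared with load balancing.
def load_share(loads, idle_threshold=0, batch=5):
    sender = max(loads, key=loads.get)      # busiest node
    receiver = min(loads, key=loads.get)    # least-loaded node
    if loads[receiver] <= idle_threshold:   # act only if a node is idle
        loads[sender] -= batch
        loads[receiver] += batch
    return loads

print(load_share({"A": 15, "B": 5, "C": 10, "D": 0}))
# {'A': 10, 'B': 5, 'C': 10, 'D': 5}  -- matches the example's final state
```

A balancing policy would instead keep redistributing until every node sits near the mean, which requires tracking all loads continuously.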

2) Illustrate the concept of process and thread with an example.

A process is a running instance of a program. When we start a program, it does not execute instantly; it goes through several steps, and this step-by-step execution is called a process.

 A process can create other smaller processes.

o The one that creates is called the parent process.


o The ones it creates are called child processes or clones.

 Each process has its own memory space and does not share it with other processes.

 A process is an active entity in the system.

How Does a Process Work?

1. The program is first converted into binary code and loaded into the computer’s memory.

2. The process needs resources to run, like:

o Registers (to hold data or instructions)

o Program Counter (keeps track of the next instruction)

o Stack (stores active function calls or subroutines)

3. Each running instance of a program becomes a separate process.

Features of a Process

 Every new process needs a system call (like fork()) to be created.

 Each process has its own memory space (called an address space).

 Processes are independent from one another.

 To communicate, processes need IPC (Inter-Process Communication).

 Processes don’t need to be synchronized with each other all the time.

What is a Thread?

A thread is a smaller part of a process. It is often called a lightweight process.

 A process can have one or more threads.

 All threads in the same process share some parts like:

o Code segment

o Data segment

o Files

 But each thread has its own:

o Registers

o Stack

o Program counter
How Does a Thread Work?

 When a process starts, the OS gives it memory and resources.

 All threads inside that process share those resources.

 Threads help make applications faster and more efficient.

 Although only one thread runs on a CPU core at a time, the OS quickly switches between
threads (called context switching) to make it feel like they run in parallel.

 If only one thread runs → Single-threaded process

If multiple threads run → Multithreaded process
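The sharing described above can be shown with a small Python sketch (names are illustrative): all threads of one process append into the same list, because the data segment is shared, while each thread still has its own stack and registers.

```python
# Sketch: a multithreaded process whose threads share one data structure.
import threading

shared = []                 # lives in the shared data segment
lock = threading.Lock()     # guards the shared update

def worker(tid):
    with lock:
        shared.append(tid)  # every thread writes into the SAME list

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(shared))  # [0, 1, 2, 3] -- one list, filled in by all four threads
```

Contrast this with the process example: there, the parent never saw the child's update without explicit IPC.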

Types of Threads

1. User-Level Threads

 Managed in user space by a thread library, not by the OS.

 Fast and easy to create.

 The OS treats all user-level threads as one single process.

 Created using user-level libraries, not system calls.

2. Kernel-Level Threads

 Managed directly by the OS kernel.

 Slower to create and manage than user-level threads, because every thread operation involves the kernel.

 Created using system calls.

 OS sees and controls each thread individually.


3) Different policies in load balancing and load sharing

1. Load Estimation Policy

Purpose:

To measure the workload of each node to determine which nodes are overloaded or
underloaded.

How it works:

 Measures the workload of each node in real-time.

 Several methods are used:

Types:

 Memoryless Method: Assumes all processes have the same remaining service time.

 Past Repeats: Uses previous execution time to predict remaining time.

 Distribution Method: Predicts future behaviour using known service-time patterns.

 CPU Utilization: Tracks actual CPU usage over time, the standard in modern
systems.

In Load Sharing:

Simplified to counting active processes or monitoring CPU idle state.
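A common way to track CPU usage over time is an exponentially weighted moving average of recent samples. This sketch is an assumption for illustration, not the document's prescribed formula; alpha is an assumed tuning knob.

```python
# Hypothetical sketch of a CPU-utilization estimator: recent samples count
# more than old ones, so the estimate reacts to load changes without jitter.
def smoothed_load(samples, alpha=0.5):
    estimate = 0.0
    for s in samples:
        estimate = alpha * s + (1 - alpha) * estimate  # favour recent samples
    return estimate

print(round(smoothed_load([0.2, 0.4, 0.8]), 3))  # 0.525
```

A node would compare this estimate against its thresholds to decide whether it is overloaded or underloaded.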

2. Process Transfer Policy

Purpose:

Determines when and under what conditions a node should transfer processes.

How it works:
 Based on a threshold system:

o Nodes exceeding a threshold are overloaded.

o Nodes below a threshold are underloaded.

Types:

 Single-Threshold Policy: Uses one threshold to trigger process transfers, though this can be
unstable.

 To reduce this instability, the double-threshold policy, also known as the high-low
policy, has been proposed.

 Double-Threshold Policy (High-Low Policy):

o High mark: Overloaded node transfers processes.

o Low mark: Underloaded node receives processes.

o Normal region: Maintains current state.

 Why it matters: Prevents oscillation and thrashing by limiting unnecessary process
movements.
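The high-low policy can be sketched as a simple classifier. The two marks below are assumed values chosen for illustration; real systems tune them to the workload.

```python
# Sketch of the double-threshold (high-low) policy.
HIGH_MARK, LOW_MARK = 8, 3   # assumed threshold values

def classify(load):
    if load > HIGH_MARK:
        return "overloaded"    # should transfer processes out
    if load < LOW_MARK:
        return "underloaded"   # may accept transferred processes
    return "normal"            # stay put: this gap prevents oscillation

print(classify(10), classify(1), classify(5))  # overloaded underloaded normal
```

The "normal" band between the two marks is what damps thrashing: a node that just received work does not immediately become a sender again.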

3. State Information Exchange Policy

Purpose:

Manages how nodes share their load information with each other.

How it works:

 Better state information leads to improved decisions.


 Balances accuracy with communication overhead.

Types:

 Periodic Broadcast: Nodes send regular load updates.
o ❌ May congest the network in large systems.

 Broadcast When State Changes: Nodes update others only when their state changes.
o ✅ Minimizes unnecessary messages.

 On-Demand Exchange: Nodes request information as needed.
o ✅ Efficient and targeted.

 Polling: A node queries others sequentially until it finds a suitable partner.
o ✅ Works well in large systems.
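The polling approach can be sketched as follows. The loads, threshold, and probe limit are assumptions for illustration; the point is that a node queries peers one at a time and stops early, instead of broadcasting to everyone.

```python
# Sketch of polling: sequential queries with an upper bound on probes.
def poll_for_partner(loads, threshold=3, probe_limit=3):
    for node in list(loads)[:probe_limit]:  # ask at most probe_limit peers
        if loads[node] < threshold:
            return node                     # suitable partner found, stop
    return None                             # give up after the probe limit

print(poll_for_partner({"B": 5, "C": 2, "D": 0}))  # C
```

Bounding the number of probes is why polling scales: message cost per decision stays constant even as the system grows.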

4. Location Policy

Purpose:

Selects which nodes to involve in process transfers.


Types:

 Sender-Initiated:
o Overloaded nodes seek underloaded partners.
o Optimal for light to moderate loads.

 Receiver-Initiated:
o Underloaded nodes seek overloaded partners.
o Best during high system load.

 Shortest:
o Selects the node with the least load from a sample.

 Bidding Method:
o Nodes bid based on their capacity.
o A central manager selects the best bid.
o ❌ Requires significant communication.

 Pairing:
o Nodes form exclusive load-sharing pairs.
o ✅ Reduces overall system communication.
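The Shortest policy above can be sketched in a few lines. The sample size k and the seeded random source are assumptions for illustration; the essential idea is to probe only a few nodes rather than all of them.

```python
import random

# Sketch of the "Shortest" location policy: probe a random sample of k nodes
# and pick the least loaded one from that sample.
def shortest(loads, k=2, rng=random.Random(0)):
    probed = rng.sample(list(loads), k)   # query only k nodes, not all
    return min(probed, key=loads.get)     # lightest node among those probed

print(shortest({"A": 5, "B": 1, "C": 3, "D": 7}, k=4))  # B (full sample here)
```

Sampling trades a little accuracy for much lower communication cost, which is the same trade-off the state information exchange policies make.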

5. Priority Assignment Policy

Purpose:

Determines processing order between local and remote processes.

Types:

 Selfish:
o Prioritizes local processes.
o Delays remote process execution.
 Altruistic:
o Prioritizes remote processes.
o Improves system-wide performance despite local delays.
 Intermediate:
o Prioritizes local processes when they dominate.
o Favours remote processes when local load is light.

6. Migration Limiting Policy

Purpose:

Controls how often processes can move between nodes.

Types:

 Uncontrolled:
o Allows unlimited process migrations.
o Risks system instability.
 Controlled:
o Uses a counter to limit migrations.
o Stops migrations when limit is reached.
 Irrevocable:
o Permits only one migration per process.
o Simple but rigid.
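The controlled policy amounts to a per-process counter check. The limit of 3 below is an assumed value; setting it to 1 gives the irrevocable policy.

```python
# Sketch of the Controlled migration-limiting policy.
MAX_MIGRATIONS = 3   # assumed limit; 1 would make migration irrevocable

def may_migrate(migrations_so_far):
    return migrations_so_far < MAX_MIGRATIONS  # refuse once the limit is hit

print(may_migrate(0), may_migrate(3))  # True False
```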

A. Justify how Load Balancing is Useful in Distributed Systems (10 Marks)

🔷 Introduction

Load balancing is a technique used in distributed systems to distribute workloads evenly
across multiple nodes (computers or servers). Its main goal is to ensure that no single node is
overwhelmed while others sit underutilized. This improves the system's overall
performance, reliability, and scalability.

🔷 1. Improves Resource Utilization

In distributed systems, resources like CPU, memory, and storage are spread across multiple
nodes. Load balancing ensures these resources are used efficiently by preventing any single
node from being overburdened while others remain idle.

Example: In a file storage system, file requests are distributed to storage servers to avoid
overloading any one of them.

🔷 2. Enhances Performance and Speed

By distributing tasks evenly, load balancing reduces processing delays and increases the
throughput of the system. It ensures faster response times for users.

Example: In a web application, user requests are balanced across web servers to ensure faster
page loads.

🔷 3. Provides Fault Tolerance and High Availability

If one node fails or goes offline, the load balancer redirects traffic to the remaining active
nodes. This prevents system downtime and ensures continuous service.
Example: On an e-commerce site, even if one server fails during peak hours, the load balancer
routes traffic to backup servers.

🔷 4. Supports Scalability

As demand grows, new nodes can be added to the system. A load balancer automatically
starts distributing load to new nodes, enabling horizontal scaling without interrupting the
service.

Example: During a festival sale, more servers are added to handle increased traffic.

🔷 5. Avoids Bottlenecks and Improves Reliability

Without load balancing, too many requests to a single node can cause bottlenecks, slowing
down the entire system. Load balancing prevents this by spreading the load evenly.

🔷 6. Cost-Efficiency

Balanced resource usage means fewer idle machines and better ROI. It reduces the need for
over-provisioning and saves operational costs.

✅ Diagram (Optional for Full Marks – if allowed):

            +--------------+
User Req →  | LoadBalancer |  → Server A
            |              |  → Server B
            |              |  → Server C
            +--------------+
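The diagram above can be sketched as a minimal round-robin balancer. This is one simple strategy among many (least-connections, weighted, etc.); the server names come from the diagram and the rest is illustrative.

```python
from itertools import cycle

# Minimal round-robin sketch: each incoming request goes to the next
# server in rotation, so no single server accumulates all the load.
servers = cycle(["Server A", "Server B", "Server C"])

def route(request):
    return next(servers)   # ignore request contents; just rotate

assigned = [route(f"req{i}") for i in range(4)]
print(assigned)  # wraps back to Server A on the fourth request
```

Round-robin needs no load information at all, which makes it cheap; policies like Shortest or bidding spend messages to make better placements.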
Extra Questions

Code migration involves transferring software code from one environment to another,
which is crucial in distributed systems where different parts of an application run on
separate computers or servers. This process helps in managing and updating the system
efficiently.

E.g., searching for information on the internet: a search query can be implemented as a
small mobile program, called a mobile agent, which moves from site to site. We may
achieve a linear speedup over a single program instance by making several copies of
such a program and sending each to a different site.

Why Code Migration?

Reduce Network Bandwidth:

 Migrate part of the client application to the database server to perform many database
operations there, sending only the result across the network.

 Migrate part of the server application to the client.

 Migrate part of the database server to the client to process forms on the client side.
This reduces database operations over the network.

e.g. XSS – Javascript, Mobile Agent, Dynamic Configuration of Distributed Systems

 Code migration techniques

Code migration refers to moving code (and possibly its execution state) from one system
(host) to another in a distributed environment. There are several techniques used to perform
this migration effectively, depending on the purpose and complexity.

1. Static Code Migration

o Definition: Only the code is moved, not the execution state.


o The destination host receives the code and starts its execution from the
beginning.
o This technique is simple and commonly used when code doesn't depend on
runtime state.

Example: Transferring a script file or executable to another server for scheduled execution.

Use Case: Task scheduling, background processing.

2. Strong Code Migration

 Definition: Both the code and its execution state (variables, call stack, etc.) are
transferred.
 The program continues from the exact point it left off after migration.
 This technique is more complex but allows seamless transfer of long-running
processes.

Example: Pausing a game on one device and resuming on another without restarting.

Use Case: Mobile agents, real-time distributed applications.

3. Weak Code Migration

 Definition: Only the code and static data are transferred, but not the execution state.
 The code starts execution from the beginning on the new host.
 It's a compromise between static and strong migration.

Example: Downloading a module from a central server and executing it on a local machine.

Use Case: Client-side plugins, software updates.
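Weak migration can be illustrated in a few lines of Python. The string below stands in for the network transfer, and exec() stands in for the receiver loading the shipped module; both are assumptions for this sketch. Note what is absent: no variables, call stack, or program counter travel with the code.

```python
# Illustrative sketch of WEAK migration: only source code is shipped,
# and the receiver starts executing it from the beginning.
migrated_code = "def task():\n    return sum(range(5))\n"  # 'transferred' source

namespace = {}
exec(migrated_code, namespace)  # receiver loads the transferred code fresh
result = namespace["task"]()    # execution starts from the beginning
print(result)  # 10
```

Strong migration would additionally have to capture and restore the running stack and instruction pointer, which is why it is far more complex to implement.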

4. Manual Code Migration

 Definition: Code is manually copied and set up on the destination system.


 Requires human effort to deploy and configure.
 Suitable for small-scale systems or for controlled deployment.

Example: Admin uploads code to a remote server via FTP or SCP.

Use Case: Deployment of websites, one-time setup tasks.

5. Automated (Agent-based) Code Migration

 Definition: Uses software agents that can move themselves (with their code and data)
to another system.
 Agents can communicate, make decisions, and migrate autonomously.
 Often used in AI-based and intelligent systems.

Example: An agent that moves between servers to collect logs or monitor activities.

Use Case: Network monitoring, data collection, distributed AI.

 Issues in code migration

Code migration means moving code (programs or parts of them) from one machine to another
in a distributed system. While it’s useful, it also brings several challenges. Below are the
main challenges explained in simple terms:

🔁 Code Migration Issues – Simplified Explanation


1. Data Consistency and Synchronization

👉 Problem:
Data may be stored on different computers. When you migrate code, you must make sure all
systems have the same, correct data.

✅ Solutions:

 Copy data properly: Use smart ways to copy and update data (like master-slave or
multi-master replication).
 Use agreement protocols: Use methods like Paxos or Raft so all systems agree on
data updates.
 Use transactions: Treat data updates like a package — either everything happens or
nothing happens, to avoid broken states.

2. Network Latency and Performance

👉 Problem:
When migrating code or data over a network, delays (latency) can slow things down or cause
problems.

✅ Solutions:

 Improve the network: Use faster connections and better settings to reduce delays.
 Send only what’s needed: Break large data into parts and send only what's necessary.
 Don’t wait unnecessarily: Use asynchronous operations so the system doesn't
freeze while waiting.

3. Security

👉 Problem:
Distributed systems are more open to attacks, so keeping everything safe during migration
is important.

✅ Solutions:

 Encrypt data: Use secure connections (like SSL/TLS) so data can't be stolen in
transit.
 Limit access: Make sure only the right people or systems can access the data and
code.
 Check regularly: Perform security checks often to find and fix problems.
4. Versioning and Rollbacks

👉 Problem:
During migration, multiple versions of code or data might be used. If something goes wrong,
you should be able to go back to the previous version.

✅ Solutions:

 Track changes: Use tools like Git to manage code and data versions.
 Rollback plan: Always have a way to undo changes in case something fails.
 Test first: Test the migration process carefully before doing it live.

5. Other Important Points

 Platform Compatibility: Make sure the new system supports your code and tools.
 Compliance: Follow all laws and rules (e.g., data privacy regulations).
 Ongoing Maintenance: Plan how you’ll support and maintain the new system.
 Development Limits: Be aware of language, hardware, or system restrictions.
 Technical Problems: Expect some bugs or errors — be ready to fix them.
 Cost and Risk: Always estimate the cost and possible risks before migrating.
