
PDC Notes. 2.

Parallelism is a technique used to enhance computational efficiency by dividing a large problem into smaller sub-problems that can be solved simultaneously. There are two main types of parallelism: Data Parallelism, where the same operation is executed on different data sets, and Control Parallelism, where different tasks are executed concurrently. Amdahl's Law explains the limitations of speed improvement in parallel execution due to sequential parts of a task, emphasizing the importance of optimizing for bottlenecks in parallel computing.

Uploaded by

Abhinav Anand

Parallelism is a technique used to increase computational efficiency. A large problem is divided into smaller sub-problems, which can be solved simultaneously. There are two main types of parallelism: Data Parallelism and Control Parallelism. Let us look at both in detail:

1. Data Parallelism

Data parallelism occurs when the same operation is executed simultaneously on different data sets across multiple processors. In other words, a large data set is divided into small parts, each processor is given one part, and it applies the same operation to that part.

Example:
If a 4x4 matrix has to be processed and 4 processors are available, each processor handles one row. All rows are computed simultaneously, and at the end the results are combined. This makes the computation faster.

Another example:
Suppose there is a very large image file whose brightness has to be adjusted. If we divide the image into 4 parts and each processor adjusts the brightness of its assigned part, the whole process speeds up.
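The brightness example above can be sketched in Python. This is a minimal illustration, not from the original notes: it assumes an "image" is just a flat list of pixel values and uses multiprocessing.Pool to apply the same operation to every part.

```python
from multiprocessing import Pool

def adjust_brightness(chunk, delta=10):
    # Same operation (add delta, clamp to 0-255) applied to every pixel value
    return [min(255, max(0, p + delta)) for p in chunk]

def parallel_brightness(pixels, workers=4):
    # Split the "image" into equal parts, one per worker
    n = len(pixels) // workers
    chunks = [pixels[i * n:(i + 1) * n] for i in range(workers - 1)]
    chunks.append(pixels[(workers - 1) * n:])
    with Pool(workers) as pool:
        results = pool.map(adjust_brightness, chunks)
    # Combine the partial results back into one image
    return [p for part in results for p in part]

if __name__ == "__main__":
    image = [0, 100, 200, 250, 50, 60, 70, 80]
    print(parallel_brightness(image))
```

Every worker runs the identical function; only the data differs, which is exactly the data-parallel pattern.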

2. Control Parallelism

Control parallelism occurs when different tasks are executed simultaneously on different processors. That is, one processor performs one task, another processor performs a different task, and so on, with all of them working in parallel. This often forms a pipeline structure, where the output of one processor becomes the input of the next.

Example:
To process an image in an image processing system:
• Processor 1 performs edge detection,
• Processor 2 performs image smoothing,
• Processor 3 performs color correction.

All three work simultaneously, so the whole process is faster.

Another example:
Suppose mobile phones are being assembled in a factory.
• Processor 1 installs the motherboard,
• Processor 2 fits the battery,
• Processor 3 attaches the screen.

If all these steps happened sequentially (one after another), it would take much longer; parallel processing makes the assembly fast.
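The pipeline idea can be sketched with threads and queues. This is a hypothetical illustration: the three "processors" are threads, and the image-processing stages are stand-in string operations, not real filters.

```python
import threading
import queue

def stage(func, inbox, outbox):
    # Each "processor" runs a *different* task; its output feeds the next stage
    while True:
        item = inbox.get()
        if item is None:          # sentinel: shut the pipeline down
            outbox.put(None)
            break
        outbox.put(func(item))

q1, q2, q3, done = (queue.Queue() for _ in range(4))
threads = [
    threading.Thread(target=stage, args=(lambda x: x + "-edges", q1, q2)),
    threading.Thread(target=stage, args=(lambda x: x + "-smooth", q2, q3)),
    threading.Thread(target=stage, args=(lambda x: x + "-color", q3, done)),
]
for t in threads:
    t.start()
for img in ["img1", "img2", "img3"]:   # images enter the first stage
    q1.put(img)
q1.put(None)

results = []
while (r := done.get()) is not None:
    results.append(r)
for t in threads:
    t.join()
print(results)
```

While image 2 is in the edge-detection stage, image 1 can already be in the smoothing stage, which is where the speedup comes from.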

Summary
1. Data Parallelism → Same operation on different data, executed simultaneously (e.g., matrix processing, image brightness adjustment).
2. Control Parallelism → Different tasks executed simultaneously (e.g., image processing pipeline, mobile assembly).

Both techniques are used to increase computational efficiency, and each is applied according to the type of application.

Amdahl’s Law – Simple Explanation

Amdahl’s Law is a rule that tells us how much the execution speed of a task can improve if we use multiple processors.

Even when we parallelize a task, some part of it remains sequential (executed step by step). Amdahl’s Law states that if some part of a task cannot be parallelized, then no matter how many processors we add, performance can only improve up to a limit.

Amdahl’s Law Formula:

S = \frac{1}{(1 - P) + \frac{P}{N}}

where P is the fraction of the task that can be parallelized and N is the number of processors. Suppose P = 0.6 (60% of the task is parallelizable).

If we use 2 processors:
S = \frac{1}{(1 - 0.6) + \frac{0.6}{2}} = \frac{1}{0.4 + 0.3} = \frac{1}{0.7} \approx 1.43
So with 2 processors, execution becomes 1.43x faster, but not 2x!

If we use 10 processors:

S = \frac{1}{(1 - 0.6) + \frac{0.6}{10}} = \frac{1}{0.4 + 0.06} = \frac{1}{0.46} \approx 2.17
So even with 10 processors, execution is only 2.17x faster, not 10x!
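The two calculations above can be checked with a small helper function (an illustration, not part of the original notes):

```python
def amdahl_speedup(p, n):
    """Speedup for parallel fraction p on n processors: S = 1 / ((1-p) + p/n)."""
    return 1 / ((1 - p) + p / n)

print(round(amdahl_speedup(0.6, 2), 2))      # 2 processors  -> about 1.43
print(round(amdahl_speedup(0.6, 10), 2))     # 10 processors -> about 2.17
print(round(amdahl_speedup(0.6, 10**9), 2))  # limit as n grows: 1/(1-p) = 2.5
```

The last line shows the hard ceiling: with a 40% sequential part, no number of processors can give more than 2.5x speedup.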

Meaning of Amdahl’s Law

• Every task has a limit to how much of it can be parallelized.
• If part of a task is sequential, then no matter how many processors are used, the speedup can only go up to a limit.
• In parallel computing, no task can be made fully parallel.

Why Amdahl’s Law is Important

• It shows that parallelization has a limit, and it helps identify bottlenecks.
• It helps set realistic expectations, so that we do not waste resources by adding more processors.
• It is important for system architects and developers when designing efficient parallel systems.

Conclusion

Amdahl’s Law explains that the speedup you get is proportional to how much of the task can be parallelized; if part of the task is sequential, the speedup can only go up to a limit.
Therefore, before optimizing in parallel computing, it is essential to remove sequential bottlenecks!

Parallel Processors in Parallel and Distributed Computing

The main concept of parallel computing is to split a task across multiple processors and execute it simultaneously, so that speed and efficiency increase.

The organization of parallel processors is very important, and it is classified in two ways:
1. Static Interconnections
• Fixed, predefined connections that do not change during execution.
• Examples:
• Bus-based systems (all processors share a common bus)
• Crossbar switches (multiple processors and memory modules are directly interconnected)
• Multistage Interconnection Networks (MINs) (hierarchical switching networks)
2. Dynamic Interconnections
• These are flexible and can be reconfigured during execution.
• Examples:
• Packet-switched networks (data packets are sent through routers)
• Switch-based networks (a network switching mechanism is used)
• Dynamically adjustable paths (the network changes dynamically, which increases flexibility)

Flynn’s Taxonomy (Classification of Parallel Processors)

Parallel processors are classified based on their instruction and data streams:
1. SISD (Single Instruction, Single Data)
• A single instruction is applied to a single data set (traditional processors, like normal PCs).
2. SIMD (Single Instruction, Multiple Data)
• A single instruction is applied to multiple data streams (like GPUs).
3. MISD (Multiple Instruction, Single Data)
• Multiple instructions are applied to the same data (rarely used).
4. MIMD (Multiple Instruction, Multiple Data)
• Different processors apply different instructions to different data (used in modern supercomputers).

Topology of Interconnections (Processor Connection Types)

In parallel systems, the interconnection of processors and memory is very important. Some common topologies:
1. Bus-Based
• All processors share a common bus.
2. Ring
• Processors are connected in a circular structure.
3. Mesh
• Processors are arranged in a grid, where each node is connected to its neighbors.
4. Hypercube
• A multi-dimensional cube structure that provides fast communication.
5. Tree
• A hierarchical structure, which is good for broadcasting.

Shared Memory Multiprocessors

In a shared memory system, all processors can access a global memory.
It has two main types:
1. UMA (Uniform Memory Access)
• All processors can access memory with equal latency.
2. NUMA (Non-Uniform Memory Access)
• Memory is accessed with different latencies, depending on the structure of the system.

Advantages:
• Fast communication
• Simple programming

Disadvantages:
• Memory contention (if too many processors access the same memory at the same time, a bottleneck can occur)
• Synchronization overhead (maintaining data consistency is difficult)

Distributed Memory Networks

In a distributed memory architecture, each processor has its own local memory, and processors communicate through a network.

Key Features:
• There is no shared global memory
• Scalable architecture, which is suitable for large systems
Advantages:
• No memory contention (because each processor uses its own memory)
Disadvantages:
• Complex programming
• Higher latency (because message passing takes time)

Conclusion

Parallel processors are used in computing to improve speed and efficiency.
Systems are classified on the basis of Flynn’s Taxonomy, and performance depends on the structure of the interconnections. Shared and distributed memory systems both have their own advantages and disadvantages, and it is important to select a suitable architecture according to the system design.

Parallel Algorithms in Computing

The main concept of parallel computing is to break a large task into small independent tasks and execute them on multiple processors, so that computation time is reduced.

Parallel algorithms involve 2 important strategies:

1. Choosing the right partitioning and mapping techniques
2. Minimizing communication overhead

1. Matrix Multiplication in Parallel Computing

Matrix multiplication is an important operation used in scientific computing and machine learning. It can be implemented efficiently in parallel computing.

Basic Idea
• Given A (an m × n matrix) and B (an n × p matrix), the result C is an m × p matrix.
• In parallel computing, the computation of the C matrix is distributed across multiple processors.

Parallel Algorithms for Matrix Multiplication

1. Row-wise Partitioning
• The rows of matrix A are divided among multiple processors.
• Each processor performs the calculations corresponding to its rows.
2. Cannon’s Algorithm
• Designed for a 2D grid of processors.
• Each processor operates on blocks of A and B.
• Its communication pattern is efficient, which reduces overhead.
3. Fox’s Algorithm
• Used for square processor grids.
• Rows and blocks are broadcast so that processors can compute in parallel.
4. Strassen’s Algorithm
• Uses a divide-and-conquer approach, which is faster than traditional matrix multiplication.
• It can be parallelized, but it incurs extra communication overhead.

Challenges:
• Load balancing (each processor should get an equal amount of work)
• Communication overhead (data transfer should be minimized)
• Scalability (performance should stay efficient as processors are added)
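Row-wise partitioning (algorithm 1 above) can be sketched as follows. This is a minimal illustration assuming small Python lists-of-lists; each row of A becomes an independent task for the pool.

```python
from multiprocessing import Pool

def row_times_matrix(args):
    # One processor's share: a single row of A multiplied by all of B
    row, B = args
    return [sum(a * b for a, b in zip(row, col)) for col in zip(*B)]

def parallel_matmul(A, B, workers=2):
    # Row-wise partitioning: each row of A is an independent task
    with Pool(workers) as pool:
        return pool.map(row_times_matrix, [(row, B) for row in A])

if __name__ == "__main__":
    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(parallel_matmul(A, B))  # [[19, 22], [43, 50]]
```

Note the communication cost hidden here: B is shipped to every worker, which is the kind of overhead Cannon's and Fox's algorithms are designed to reduce.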

2. Sorting in Parallel Computing

Efficiently sorting large datasets is a big challenge in parallel computing. Parallel sorting algorithms are used for this.

Popular Parallel Sorting Algorithms

1. Parallel Merge Sort
• A divide-and-conquer algorithm that is well suited to parallel execution.
• Steps:
1. Divide the data into small parts
2. Sort each part on a separate processor (using Quicksort/Heapsort)
3. Merge in parallel to obtain the final sorted array
2. Bitonic Sort
• A sorting-network-based algorithm, best for datasets whose size is a power of 2.
• It creates bitonic sequences (one increasing and one decreasing sequence), then merges them in parallel.
• It is well suited to hardware implementations.
3. Odd-Even Transposition Sort
• Uses a simple iterative compare-and-swap technique.
• In each phase, adjacent elements are compared and swapped.
• Efficient to parallelize, but can be slow on large datasets.
4. Sample Sort
• A combination of partitioning and parallelization.
• A small sample of the dataset is sorted, then the data is divided based on pivots.
• Each processor sorts its assigned partition independently, then all partitions are merged.
• Best for unevenly distributed data.
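The parallel merge sort steps above can be sketched like this. A minimal illustration: parts are sorted in worker processes with Python's built-in sort, then the sorted runs are merged with heapq.merge.

```python
from multiprocessing import Pool
from heapq import merge

def parallel_merge_sort(data, workers=4):
    # Step 1: split the data into one part per worker
    n = max(1, len(data) // workers)
    chunks = [data[i:i + n] for i in range(0, len(data), n)]
    # Step 2: sort each part on a separate processor
    with Pool(workers) as pool:
        sorted_chunks = pool.map(sorted, chunks)
    # Step 3: merge the sorted runs into the final array
    return list(merge(*sorted_chunks))

if __name__ == "__main__":
    print(parallel_merge_sort([9, 3, 7, 1, 8, 2, 6, 4, 5]))
```

The final merge here is sequential; a full parallel merge sort would also merge pairs of runs in parallel, at the cost of extra communication.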

Challenges in Parallel Sorting:

• Communication overhead (data transfer during partitioning and merging takes time)
• Maintaining global ordering (local sorting can be fast, but the whole dataset must be globally sorted)
• Minimizing synchronization cost (processors should work efficiently without excessive waiting)

3. Searching in Parallel Computing

Searching is a very common operation in large-scale systems. Parallel algorithms make searching efficient and fast.

Popular Parallel Searching Algorithms

1. Parallel Binary Search
• The array is divided among multiple processors.
• Each processor performs binary search on its sub-array.
• The results are combined to produce the final answer.
2. Hash-Based Searching
• Data is distributed among multiple processors based on a hash function.
• When a search query arrives, it is routed to the correct processor.
• Efficient for large databases and distributed systems.
3. Breadth-First Search (BFS) in Parallel
• A good fit for graph traversal.
• The nodes of the current level are processed together.
• The nodes of the next level are generated simultaneously.
• Used in social networks, AI, and shortest-path finding.
4. Depth-First Search (DFS) in Parallel
• Parallelizing DFS is somewhat challenging because of its recursion and stack usage.
• DFS is parallelized using task-based parallelism.
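The partition-and-search idea behind these algorithms can be sketched as a simple parallel linear search over chunks (an illustration only; a real parallel binary search would assume sorted sub-arrays):

```python
from multiprocessing import Pool

def search_chunk(args):
    # Each processor scans its own partition for the target
    offset, chunk, target = args
    for i, v in enumerate(chunk):
        if v == target:
            return offset + i   # global index of the hit
    return -1

def parallel_search(data, target, workers=4):
    n = max(1, len(data) // workers)
    tasks = [(i, data[i:i + n], target) for i in range(0, len(data), n)]
    with Pool(workers) as pool:
        hits = [r for r in pool.map(search_chunk, tasks) if r != -1]
    # Combine the per-partition results into the final answer
    return min(hits) if hits else -1

if __name__ == "__main__":
    print(parallel_search([4, 8, 15, 16, 23, 42], 23))  # 4
```

The "combine results" step is trivial here, but in hash-based searching it disappears entirely: the hash function routes the query straight to the one partition that can contain the key.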

Challenges in Parallel Searching:

• Load balancing (all processors should get equal work)
• Synchronization issues (correct results must be ensured)
• Efficient memory access patterns (cache and RAM latency should be minimized)

Conclusion

Parallel algorithms are used to improve speed and efficiency.
• For matrix multiplication, Cannon’s, Fox’s, and Strassen’s algorithms are used.
• For sorting, Parallel Merge Sort, Bitonic Sort, Odd-Even Transposition Sort, and Sample Sort are important.
• For searching, Parallel Binary Search, Hash-Based Searching, BFS, and DFS are efficient solutions.

The main goal of parallel computing is to distribute the workload efficiently and minimize execution time!

Leader Election and Mutual Exclusion in Distributed Systems – Simple Explanation

Leader Election and Mutual Exclusion are very important concepts in distributed systems, needed for the coordination and synchronization of multiple nodes or processes.

1. Leader Election

The main goal of leader election is to elect a single process or node in the system as the “leader”, which will coordinate all the others.
Why is leader election necessary?
• Coordination: The leader manages resources and synchronizes the actions of the nodes.
• Fault Tolerance: If the leader fails, a new leader must be elected so that the system keeps running smoothly.
• Consistency: The leader ensures that all nodes have the same state, keeping data and actions consistent.

Leader Election Algorithms:

1. Bully Algorithm
• Every node is given a unique ID.
• If a node suspects that the leader has failed, it starts an election.
• It sends an election request to all nodes with a higher ID than its own.
• If a higher node responds, that node takes over the election.
• The node with the highest ID becomes the final leader.
Pros:
• Simple and effective.
Cons:
• High communication overhead in large systems.
2. Ring Algorithm
• All nodes are arranged in a logical ring.
• When a node observes that the leader has failed, it starts an election message, which circulates around the ring.
• Each node adds its ID and forwards the message to the next node.
• When the message returns, the node with the highest ID becomes the leader.
Pros:
• Low message complexity, since a single message circulates.
Cons:
• If the ring structure is unstable, the algorithm can fail.
3. Raft Consensus Algorithm
• A strong choice for maintaining consistency in distributed systems.
• Nodes start in “Follower” mode.
• If a node does not hear from the leader, it becomes a “Candidate” and starts a vote.
• The node that wins the most votes becomes the leader.
• The leader synchronizes the other nodes and enforces decisions.
Pros:
• Simpler and easier to implement than the Paxos algorithm.
• Good at handling failures.
Cons:
• If multiple nodes fail, the election can be slow.
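The Bully algorithm's outcome can be simulated with a toy function. This is a simplification that skips real message passing and timeouts; the node IDs are hypothetical.

```python
def bully_election(alive_ids, initiator):
    # Toy model of the Bully algorithm: the initiator challenges all
    # higher-ID alive nodes; whoever has the highest ID ends up as leader.
    candidate = initiator
    while True:
        higher = [n for n in alive_ids if n > candidate]
        if not higher:
            return candidate     # no bigger node responded: candidate wins
        candidate = max(higher)  # a higher node takes over the election

print(bully_election([1, 3, 5, 7], initiator=1))  # prints 7
```

In a real implementation each "takes over" step is a round of ELECTION/OK messages with timeouts, which is where the algorithm's high message overhead comes from.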

2. Mutual Exclusion in Distributed Systems

Mutual exclusion means that only one process may enter the critical section (CS) at a time, so that race conditions and conflicts are avoided.

Mutual Exclusion Algorithms:

1. Centralized Algorithm
• A central coordinator handles all requests.
• Any process that wants to enter the CS asks the coordinator for permission.
• When the process finishes its work, it notifies the coordinator so that the next request can be served.
Pros:
• Simple, and few messages are needed.
Cons:
• Single point of failure (if the coordinator fails, the system stops).
• A bottleneck can form if too many requests arrive.
2. Token-Based Algorithm
• A special token circulates in the system.
• Any process that wants to enter the critical section must hold the token.
• If the token is with another process, the others have to wait.
Pros:
• Low communication overhead.
Cons:
• If the token is lost or duplicated, the system can fail.
3. Ricart-Agrawala Algorithm
• A distributed approach with no central coordinator.
• Any process that wants to enter the CS sends a request to all nodes.
• A node that is not using the critical section grants permission.
• When the process finishes using the CS, it notifies the waiting processes.
Pros:
• No single point of failure.
Cons:
• Many messages are exchanged, which increases communication overhead.
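The token-based algorithm can be simulated in a few lines. This toy version assumes a fixed ring order and models only who gets the critical section, not real message passing or token loss.

```python
def token_ring(nodes, wants_cs, rounds=2):
    """Pass a single token around a ring; only the holder may enter the CS."""
    order = []
    for _ in range(rounds):
        for node in nodes:           # the token circulates in ring order
            if node in wants_cs:     # the holder enters its critical section
                order.append(node)
                wants_cs.discard(node)
    return order

print(token_ring(["A", "B", "C", "D"], wants_cs={"C", "A"}))  # ['A', 'C']
```

Because there is exactly one token, mutual exclusion holds by construction; the weak point (as noted above) is what happens if that token is lost or duplicated.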

3. Lamport’s Logical Clocks

• Used for time synchronization (event ordering) in distributed systems.
• Each request is assigned a timestamp, which ensures that all operations execute in the correct order.

Applications:
• Distributed file locking
• Managing shared resources in cloud computing
• Maintaining consistency in replicated databases
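Lamport's clock rules (increment on a local event or send, take max(local, received) + 1 on receipt) can be sketched as:

```python
class LamportClock:
    """Logical clock: tick on local events, merge on message receipt."""
    def __init__(self):
        self.time = 0

    def tick(self):
        # Local event or message send: advance by one
        self.time += 1
        return self.time

    def receive(self, msg_time):
        # Merge rule: max(local, message timestamp) + 1
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
a.tick()              # A: local event    -> 1
t = a.tick()          # A: send message   -> 2
b.tick()              # B: local event    -> 1
print(b.receive(t))   # B: receive        -> max(1, 2) + 1 = 3
```

The receive rule guarantees that a message's receipt is always timestamped after its send, which is exactly the ordering property mutual exclusion algorithms like Ricart-Agrawala rely on.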

4. Requirements of Mutual Exclusion Algorithms

1. No Deadlock
• No two processes should wait for each other indefinitely.
2. No Starvation
• Every process should get access to the critical section in finite time.
3. Fairness
• Requests should be served in the order they arrive.
4. Fault Tolerance
• If a node fails, the system should be able to detect the problem and recover on its own.

Difference Between Leader Election and Mutual Exclusion

• Leader Election selects one node to coordinate the whole system; Mutual Exclusion controls which process may enter a shared critical section at any moment.
• Leader election typically runs at startup or when the current leader fails; mutual exclusion runs every time a shared resource is accessed.
Scheduling and Load Balancing in Distributed Systems – Simple Explanation

Scheduling and Load Balancing are very important concepts in distributed systems, used to distribute tasks efficiently and optimize execution.

1. Scheduling in Distributed Systems

Scheduling means assigning tasks to the available resources (processors, nodes, servers), so that execution is fast, response time is low, and resources are used efficiently.

Types of Scheduling:
1. Static Scheduling
• Task allocation is decided before execution.
• Full knowledge of the system’s resources and tasks must be available in advance.
• Example: task allocation in grid computing.
2. Dynamic Scheduling
• Task allocation is decided at runtime.
• This approach is useful when the workload is unpredictable.
• Example: load balancing in cloud computing.

Important Features of Scheduling:

• Minimize execution time – so that tasks complete quickly.
• Ensure load balancing – the workload is distributed equally.
• Energy efficiency – reduce power consumption.
• Resource utilization – make maximum use of all available resources.
• Scalability – scheduling keeps working efficiently even if the system grows.

2. Load Balancing in Distributed Systems

The main goal of load balancing is to distribute the workload evenly across all nodes or processors, so that no single node is overloaded and the system runs fast.

Types of Load Balancing Techniques:

1. Static Load Balancing
• Task distribution is decided before execution.
• Examples:
• Round Robin Algorithm (tasks are distributed in circular order).
• Least Connection Algorithm (the task goes to the node with the fewest active connections).
2. Dynamic Load Balancing
• The workload is adjusted based on real-time monitoring.
• Examples:
• Weighted Round Robin (powerful nodes are given more tasks).
• Dynamic Least Connection (the task is assigned to the least busy node).

3. Load Balancing Strategies

A) Centralized Load Balancing

• A single node manages the load of the whole system.
• Advantages:
• Simple, and a global view of the whole system is available.
• Disadvantages:
• It can be a single point of failure.
• It can create a bottleneck in the system.

B) Distributed Load Balancing

• All nodes manage load distribution together.
• Advantages:
• Higher fault tolerance (if one node fails, the system keeps working).
• Scalable (it keeps working even as the system grows).
• Disadvantages:
• Communication overhead can increase (nodes must synchronize continuously).

C) Hierarchical Load Balancing

• A combination of the centralized and distributed approaches.
• Nodes are organized in a hierarchy, with a manager at each level.
• Advantages:
• A balanced approach that provides both fault tolerance and efficiency.

4. Popular Scheduling and Load Balancing Algorithms

A) Scheduling Algorithms
1. Round Robin Algorithm
• Tasks are assigned in circular order.
• Simple and easy to implement.
• If task sizes differ, an imbalance can occur.
2. Least Loaded Algorithm
• The task is assigned to the node with the least load.
• Uses real-time load monitoring.
3. First Come, First Serve (FCFS) Algorithm
• The task that arrives first is executed first.
• Simple and easy to implement.
• Load balancing is not optimized.

B) Load Balancing Algorithms

1. Weighted Fair Scheduling
• Tasks are distributed based on the capabilities of the nodes.
• Powerful nodes get more tasks, weaker nodes get fewer.
2. Ant Colony Optimization (ACO)
• Based on the food-searching behavior of ants.
• Gradually learns the best paths and optimizes the workload.
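Round Robin and Least Loaded assignment can be compared with a small sketch. This illustration assumes each task is just an integer cost and the node names are hypothetical.

```python
def round_robin(tasks, nodes):
    # Assign tasks in circular order, ignoring node load
    assignment = {n: [] for n in nodes}
    for i, t in enumerate(tasks):
        assignment[nodes[i % len(nodes)]].append(t)
    return assignment

def least_loaded(tasks, nodes):
    # Assign each task to the node with the smallest current load (sum of costs)
    load = {n: 0 for n in nodes}
    assignment = {n: [] for n in nodes}
    for t in tasks:
        target = min(nodes, key=lambda n: load[n])
        assignment[target].append(t)
        load[target] += t
    return assignment

tasks = [5, 1, 1, 1, 5, 1]
print(round_robin(tasks, ["n1", "n2"]))   # n1 gets both heavy tasks
print(least_loaded(tasks, ["n1", "n2"]))  # loads end up 6 vs 8
```

With unequal task sizes, Round Robin leaves one node with a load of 11 versus 3, while Least Loaded ends at 6 versus 8, illustrating the imbalance issue noted above.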

5. Challenges in Scheduling and Load Balancing

1. Resource Heterogeneity
• Different nodes have different processing power, memory, and bandwidth.
2. Dynamic Environments
• The availability of tasks and resources can change at runtime.
3. Scalability
• Scheduling and load balancing must remain efficient as the system grows.
4. Fault Tolerance
• If a node fails, the system should keep working smoothly.
5. Energy Efficiency
• Maintain a balance between performance and power consumption.
6. Security
• Task execution and communication must be secure.

Conclusion
• The main goal of scheduling and load balancing is to optimize system performance, speed up execution, and use resources efficiently.
• Static scheduling and load balancing are predefined, whereas dynamic approaches adapt at runtime.
• Algorithms such as Round Robin, Least Loaded, Weighted Fair Scheduling, and ACO are used for load balancing.
• Scalability, fault tolerance, and security are the major challenges of scheduling and load balancing.
