Q Bank Ut2: What Are The Issues in Designing Load-Balancing Algorithms?
Issues:
1. Load Estimation Policy
• This policy defines how the system measures the workload of a computer (or node).
• If the workload is not estimated correctly, some systems may be overloaded while others remain underutilized.
Example:
• A web server hosting a popular website like Facebook needs to predict how many users will visit in the
next few minutes. If the server fails to estimate correctly, it may crash due to overload.
2. Process Transfer Policy
• This policy decides whether a task should be executed on the current system or transferred to another system.
• If a wrong decision is made, it may increase delay and reduce efficiency.
Example:
• In cloud computing, if a virtual machine (VM) is moved unnecessarily to another server, it will
increase network delay and waste computing power.
3. State Information Exchange Policy
• This policy decides how frequently nodes exchange information about their workload.
• Exchanging state information too often wastes network bandwidth.
Example:
• In Google Cloud Storage, if data about server load is updated every second, it will consume too much
network bandwidth.
4. Location Policy
• This policy decides which node a task selected for transfer should be sent to.
• A poor choice of destination increases communication delay.
Example:
• Suppose you are using Amazon Shopping and your request is sent to a distant server instead of a
nearby one. This will increase website loading time.
5. Priority Assignment Policy
• This policy decides the priority of executing local and remote (migrated) processes at a node.
• If priorities are assigned poorly, important tasks may be delayed.
Example:
• In a hospital management system, if an emergency patient's request is given low priority, it could
lead to serious health risks.
6. Migration Limiting Policy
• This policy defines how many times a process can be moved from one system to another.
• If a task keeps migrating too often, it increases delay and reduces efficiency.
Example:
• In online multiplayer games like PUBG or Call of Duty, if the game server keeps shifting players
between different servers, it will cause game lag and poor performance.
Millions of people watch YouTube videos at the same time. To handle this, YouTube’s servers use load
balancing. If one server gets too many video requests, the system automatically redirects users to another
server. This prevents buffering and ensures smooth video streaming.
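The redirect-to-the-least-busy-server behavior described above can be sketched as a simple least-loaded dispatch policy. This is a minimal illustration with hypothetical names, not any real system's API:

```python
# Minimal sketch of a least-loaded dispatch policy (illustrative names).

def pick_server(loads):
    """Return the index of the server with the lowest current load."""
    return min(range(len(loads)), key=lambda i: loads[i])

def dispatch(loads, requests):
    """Assign each request (with some load cost) to the least-loaded
    server, updating the load table as we go."""
    assignments = []
    for cost in requests:
        i = pick_server(loads)
        loads[i] += cost
        assignments.append(i)
    return assignments

loads = [0, 0, 0]
print(dispatch(loads, [5, 3, 2, 4]))   # [0, 1, 2, 2]
```

Real load balancers combine a policy like this with the load estimation and state information exchange policies above, since the load table is only as good as the measurements feeding it.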
What is the difference between Load Balancing and Load Sharing?
Explain the concepts of process migration and write features of a good process migration
mechanism
Process migration means moving a running process from one computer (or processor) to another
without stopping it. This helps in better load balancing, fault tolerance, and resource utilization.
Imagine you’re editing a document on Google Docs on your office PC. Suddenly, you need to leave, so
you open the same document on your laptop at home—and guess what? It opens exactly where you left
off!
In a similar way, in computing, a running process can be moved from one machine (or CPU) to another
without restarting. This helps in reducing load, improving performance, and handling system failures.
Execution is Suspended
• At a certain point, the process is paused (freezing time starts) so it can be transferred to another
node.
Transfer of Control
• The process moves from the source node to the destination node.
• This involves copying the process state, memory, and execution details.
Execution is Resumed
• The process restarts on the destination node from the saved state and continues from where it was paused.
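The migration steps above can be modeled in miniature. This toy sketch serializes a trivial "process" object with pickling; real migration moves the address space, registers, and open resources, and all names here are hypothetical:

```python
import pickle

# Toy model: a "process" is just a counter with saved state.

class Process:
    def __init__(self):
        self.counter = 0
    def step(self):
        self.counter += 1

def migrate(proc):
    """Freeze: serialize the state. Transfer: the bytes cross the network.
    Resume: rebuild the process on the destination node."""
    frozen = pickle.dumps(proc)          # execution is suspended
    restored = pickle.loads(frozen)      # state arrives at the destination
    return restored

p = Process()
p.step(); p.step()
q = migrate(p)
q.step()
print(q.counter)   # 3: execution continues where it left off
```

The time between `dumps` and `loads` corresponds to the freezing time; keeping it short is exactly the "minimal interference" property discussed next.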
B) Minimal Interference:
• Minimal interference means that when a process migrates, its execution should not be affected much. This is possible when the freezing time (the period during which the process is temporarily paused for data transfer) is as short as possible. A good migration mechanism ensures that the process resumes quickly and runs smoothly without any problems.
• Example: Suppose you are downloading a large file on your laptop and switch Wi-Fi networks midway. If the download continues without restarting, the system handled the transition with minimal interference, exactly how an efficient process migration should behave.
Architecture of DSM
A DSM architecture consists of multiple nodes connected by a high-speed communication network.
(i) Nodes
Each node has one or more CPUs and its own local memory.
Nodes exchange data through the communication network.
Advantages of DSM
✅ Simplifies Programming: Developers see a single shared memory space, which makes coding and debugging easier.
✅ Efficient Communication: Direct memory access is faster than message passing.
✅ Scalability: New nodes can be added easily, which makes the system flexible.
Limitations of DSM
❌ Can be inefficient for client-server models, because not every client needs direct memory access.
❌ Synchronization issues can arise if multiple nodes modify the same data.
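The shared-memory illusion can be simulated in a few lines with write-invalidate caching, one common DSM coherence strategy. This is a toy sketch with illustrative names; a real DSM works at page granularity with OS or hardware support:

```python
# Toy simulation of DSM nodes sharing one backing store, with
# write-invalidate caching to keep copies coherent.

class DSMNode:
    def __init__(self, name, backing):
        self.name = name
        self.backing = backing   # the shared "network" store
        self.cache = {}          # this node's local copies

    def read(self, addr):
        if addr not in self.cache:               # cache miss: fetch remotely
            self.cache[addr] = self.backing["mem"][addr]
        return self.cache[addr]

    def write(self, addr, value, peers):
        self.backing["mem"][addr] = value
        self.cache[addr] = value
        for p in peers:                          # invalidate stale copies
            p.cache.pop(addr, None)

backing = {"mem": {0: 10}}
a, b = DSMNode("A", backing), DSMNode("B", backing)
print(b.read(0))          # 10 (fetched into B's cache)
a.write(0, 42, peers=[b])
print(b.read(0))          # 42: B's stale copy was invalidated
```

Without the invalidation step, node B would keep serving the stale value 10, which is exactly the synchronization issue noted above.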
Explain any five data consistency models.
Data Consistency Models
Data consistency models define how data is accessed and updated in a distributed system so that no inconsistency arises. These models ensure that when multiple users work on shared data, they get correct and expected results.
1. Strict Consistency
Definition:
This is the strongest consistency model: every read operation always returns the result of the latest write, no matter how far apart the system's nodes are or how much delay exists.
This means that whenever data is updated, the update must be reflected across the entire system immediately.
Example:
Online Banking System: If someone withdraws ₹500, the updated balance must appear immediately.
Checking from another ATM must never show the old balance.
Problem: This model is very hard to implement in practice in distributed systems, because network latency and delays always exist.
✅ Advantage:
Gives the most accurate results.
❌ Disadvantage:
Can be very slow, because every update must propagate instantly across the whole system.
2. Sequential Consistency
Definition:
All processes see operations in the same order, but the actual execution times may differ.
It gives no real-time guarantee; it only ensures that the operations that execute appear in one consistent order to everyone.
Example:
Multiplayer Game:
If a player first drops a bomb and then fires, all players must see these actions in that order.
However, another player's actions may execute in parallel.
✅ Advantage:
Easier to implement than strict consistency.
❌ Disadvantage:
Does not guarantee real-time order; it only maintains a consistent order.
3. Linearizability
Definition:
Operations follow real-time order and appear to execute on a single global timeline.
It is close to strict consistency, but slightly more flexible.
Example:
Online Shopping Cart:
If a customer adds an item to the cart, it should be reflected in the system immediately.
If they check out at the same moment as another user, the system decides based on the latest stock.
✅ Advantage:
Better user experience, because updates appear in real time.
❌ Disadvantage:
High overhead, especially in distributed environments.
4. Causal Consistency
Definition:
It maintains the order only of causally related operations, not of independent ones.
If one action is linked to another, the result of the first action must be visible before the second.
Example:
Social Media (Facebook, Instagram):
If Alice makes a post and Bob replies to it, Alice's post must appear first, then Bob's reply.
But if Charlie makes an independent post, its order may vary.
✅ Advantage:
Maintains a balance between performance and consistency.
❌ Disadvantage:
Not accurate in every scenario, especially when operations are independent.
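One standard way to enforce causal ordering is with vector clocks, which are not discussed above but make the idea concrete. A minimal sketch of the delivery check (names are illustrative):

```python
# Causal delivery check with vector clocks. m_vc is the clock attached
# to a message from sender s; local_vc is the receiver's current clock.

def causally_deliverable(m_vc, local_vc, s):
    """Deliver only if this is the next message expected from s AND the
    receiver has already seen everything the sender had seen."""
    if m_vc[s] != local_vc[s] + 1:
        return False
    return all(m_vc[k] <= local_vc[k] for k in range(len(m_vc)) if k != s)

# A receiver that has seen nothing yet (clocks: [Alice, Bob]):
print(causally_deliverable([1, 0], [0, 0], s=0))  # True: Alice's post
print(causally_deliverable([1, 1], [0, 0], s=1))  # False: Bob's reply
                                                  # depends on Alice's post
```

In the Alice/Bob example, the check holds back Bob's reply until Alice's post has been delivered, while an independent post from Charlie would pass immediately.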
5. FIFO (PRAM) Consistency
Definition:
The operations of a single process must be seen in the same order everywhere, no matter which node they execute on.
It does not guarantee a global ordering; it only maintains per-process order.
Example:
Messaging System (WhatsApp, Telegram):
If a user sends "Hello", "How are you?", and "Bye", every receiver must see them in that same order.
But messages from different users may be interleaved in any order.
✅ Advantage:
A simple and efficient consistency model.
❌ Disadvantage:
There is no global order across updates from different processes; only each individual process's order is followed.
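Per-process ordering of this kind is typically enforced with per-sender sequence numbers. A minimal sketch (illustrative names; not any real messaging system's protocol):

```python
from collections import defaultdict

# FIFO delivery with per-sender sequence numbers: messages from one
# sender deliver in send order; different senders may interleave freely.

class FifoReceiver:
    def __init__(self):
        self.next_seq = defaultdict(int)   # next expected seq per sender
        self.held = defaultdict(dict)      # out-of-order buffer
        self.delivered = []

    def receive(self, sender, seq, msg):
        self.held[sender][seq] = msg
        # deliver every in-order message now available from this sender
        while self.next_seq[sender] in self.held[sender]:
            m = self.held[sender].pop(self.next_seq[sender])
            self.delivered.append((sender, m))
            self.next_seq[sender] += 1

r = FifoReceiver()
r.receive("A", 1, "How are you?")   # arrives early, so it is held back
r.receive("A", 0, "Hello")          # now both deliver, in send order
print(r.delivered)                  # [('A', 'Hello'), ('A', 'How are you?')]
```

Note that the buffer only orders messages within one sender; it imposes no order between "A" and some other sender "B", which is exactly the disadvantage listed above.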
Describe two phase commit protocol.
The Two-Phase Commit Protocol is a distributed transaction management technique that ensures a transaction executing across multiple sites either commits everywhere or aborts everywhere. The protocol is designed to maintain atomicity, so that no site is left in an inconsistent state. In distributed databases, transactions run on different servers, which makes atomicity hard to maintain. If one site fails or a network issue occurs, the transaction can be left incomplete. The Two-Phase Commit Protocol is used to solve this problem.
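The protocol's decision logic can be sketched compactly: phase 1 collects votes, phase 2 applies one uniform decision. Participants here are simulated in-process and all names are illustrative; a real implementation also logs each step to stable storage for recovery:

```python
# Minimal sketch of the two-phase commit decision logic.

def two_phase_commit(participants):
    """Phase 1 (voting): ask every site to prepare. Phase 2 (decision):
    commit only if all voted yes; otherwise abort everywhere."""
    votes = [p.prepare() for p in participants]        # phase 1
    decision = "COMMIT" if all(votes) else "ABORT"
    for p in participants:                             # phase 2
        p.commit() if decision == "COMMIT" else p.abort()
    return decision

class Site:
    def __init__(self, can_commit):
        self.can_commit = can_commit
        self.state = "ACTIVE"
    def prepare(self):
        return self.can_commit   # vote yes only if able to commit
    def commit(self):
        self.state = "COMMITTED"
    def abort(self):
        self.state = "ABORTED"

sites = [Site(True), Site(True), Site(False)]
print(two_phase_commit(sites))       # ABORT: one site voted no
print([s.state for s in sites])      # all ABORTED, so atomicity holds
```

A single "no" vote forces every site to abort; only a unanimous "yes" commits, which is what keeps all sites in a consistent state.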
Here is a detailed breakdown of the components shown in the NFS architecture diagram, using the terms from the image:
Client Side:
• Application Program:
o This is the software that uses files. For example, a word processor, video editing software, or a database app.
o When an application needs to read or write a file, it makes a request to the operating system.
• UNIX Kernel:
o This is the core of the client computer's operating system.
o It controls the client's hardware and software resources.
o It receives file access requests from application programs.
o It works with the Virtual File System (VFS) and the NFS Client to handle file operations.
• Virtual File System (VFS):
o This is a layer inside the UNIX kernel.
o It gives applications a consistent way to use different file systems, whether the files are on the local computer or on the network.
o It translates file access requests from applications into commands the file system can understand.
o It hides the difference between local and network files.
• NFS Client:
o This software lets the client computer communicate with the NFS server.
o It takes file access requests from the VFS and converts them into NFS protocol requests.
o It sends these requests over the network to the NFS server.
o It receives file data and responses from the NFS server and passes them back to the VFS.
• UNIX file system:
o This is the local file system on the client computer.
o It stores the files that physically reside on the client's storage devices.
• Other file system:
o This represents any other type of file system that may exist on the client computer, apart from the standard UNIX file system.
Server Side:
• Application Program:
o Like the client, the server computer can also run applications that need to access the shared files.
• UNIX Kernel:
o This is the core of the server's operating system.
o It controls the server's hardware and software resources.
o It receives NFS requests from clients.
o It works with the VFS and the NFS Server to process file operations.
• Virtual File System (VFS):
o On the server, it handles requests for files stored on the server's local file systems.
• NFS Server:
o This software runs on the server and listens for NFS requests from clients.
o It receives requests, accesses the requested files from the server's file systems, and sends the file data back to the clients.
• UNIX file system:
o This is the file system on the server that stores the files shared with clients.
Other Components:
• NFS Protocol:
o This is the set of rules the client and server use to communicate.
o It specifies the format of requests and responses, so that both machines can understand each other.
o It is represented by the arrow between the client and the server in the diagram.
• UNIX system calls:
o These are the requests that application programs make to the UNIX kernel.
• Operations on local files:
o This is when the client computer reads or writes files stored on its own local storage.
• Operations on remote files:
o This is when the client computer reads or writes files stored on the NFS server.
Example: Google Drive is a DFS, where you can log in from any device and access your files.
2. User Mobility
A user is not tied to any specific node; they can access their files from any system or location.
Example: Using cloud storage services such as Dropbox or OneDrive, users can access their files from anywhere.
3. Performance
The performance of a DFS should be equal to or better than a centralized file system. The system's load is distributed across multiple servers, which keeps response times fast.
Example: Netflix uses a DFS that stores videos on different servers, so users get a fast streaming experience.
4. Simplicity
The design of a DFS resembles a centralized file system, so it stays easy for both users and developers.
Example: In Google Drive or SharePoint, users can store and share files without any technical knowledge.
5. Scalability
Even if many more nodes are added to the system, performance should not degrade. A DFS is built for large-scale networks.
Example: YouTube's DFS stores videos on millions of servers, making them available to users globally.
6. Fault Tolerance
Even if a node crashes, a DFS keeps backup copies that can recover the data.
Example: Amazon Web Services (AWS) stores files at multiple locations, so data stays safe even if one data center fails.
7. Synchronization
If multiple users access a file simultaneously, the system maintains consistency.
Example: In Google Docs, multiple users can work on a document at the same time without any data loss.
8. Security
A DFS provides data encryption, authentication, and access control so that files are protected from unauthorized access.
Example: A bank's DFS uses secure login and encrypted storage to keep sensitive customer data safe.
9. Heterogeneity
A DFS is compatible with different operating systems, hardware, and storage devices.
Example: Dropbox or Google Drive can be accessed on Windows, Mac, Linux, Android, and iOS.
10. Data Replication
Example: Facebook's DFS stores user data on different servers, so there is no risk of data loss.
Advantages of DFS ✅
High Availability – Files are stored at multiple locations, so they remain accessible even after a failure.
Better Performance – Load balancing and caching techniques improve performance.
Scalability – A DFS can be expanded easily.
User Convenience – Users can access and share files from any device.
Security & Backup – Encryption and data replication protect against data loss and unauthorized access.
Disadvantages of DFS ❌
Complex Management – Managing multiple servers and replicas can be difficult.
Network Dependency – Performance can degrade on a slow or unstable network.
Data Consistency Issues – If multiple users edit the same file, synchronization problems can occur.
High Cost – Maintaining a DFS requires high-end servers and backup storage.
Security Risks – Without proper encryption and authentication, a DFS can be vulnerable.
Explain File accessing models.