Ds Part B

The document outlines the disadvantages of distributed systems, including complexity in management, security risks, and challenges in maintaining data consistency. It discusses concurrency control techniques, comparing pessimistic and optimistic methods, and highlights issues such as data inconsistency and deadlocks. Additionally, it covers the differences between synchronous and asynchronous execution, agreement protocols, and the concepts of faults, errors, and failures in distributed systems.

Q.1. Briefly explain the disadvantages of a distributed system.
1. More Complex to Manage:
A distributed system involves many computers working together, which
makes it more difficult to manage and coordinate compared to a single
system.
2. Security Risks:
Since data is stored on different computers, there are more chances for
unauthorized access or data breaches.
3. Depends on Network:
Distributed systems need a strong and reliable network. If the network is
slow or fails, the system may not work properly.
4. Difficult to Find and Fix Problems:
If something goes wrong, it can be hard to find the exact problem because
the system is spread across many computers.
5. Keeping Data Consistent is Hard:
It is challenging to make sure that all the computers have the same and
correct data at all times.
6. Slower Response Time:
Sometimes, because the system has to send data over a network, it may
take more time to complete tasks.
7. Higher Cost:
Using many computers, maintaining them, and setting up communication
between them can cost more than using a single system.

Concurrency Control in Distributed Transactions
When many users or systems access a distributed database (data stored across
different computers) at the same time, we must ensure:
 Data remains accurate and consistent.
 No transaction interferes with another.
This is done using Concurrency Control Techniques, which are mainly of two
types:

🔐 1. Pessimistic Concurrency Control


"Let's lock things just in case someone else tries to access them!"
These techniques assume conflicts will happen, so they prevent them by
locking the data before using it.
✅ Techniques:
a) Isolation Levels
 Decide how much a transaction is allowed to see or interfere with others.
 Common levels:
1. Read Uncommitted – Reads uncommitted data. Fastest but unsafe (dirty
reads possible).
2. Read Committed – Reads only committed data. Safer, but data may still
change between reads.
3. Repeatable Read – Same data gives same results if read again. Prevents
changes but new rows (phantoms) can appear.
4. Serializable – Highest safety. Transactions run like one at a time. Prevents
all read issues.
b) Two-Phase Locking (2PL)
 Has two phases:
1. Growing Phase – Collects all locks needed.
2. Shrinking Phase – Releases the locks.
 Ensures data correctness.
 Problem: Can cause deadlocks (where two transactions wait forever).
c) Distributed Lock Manager (DLM)
 A lock controller that works across the network of databases.
 Makes sure two systems don’t write the same data at the same time.

d) Multiple Granularity Lock


 You can lock big items (like a whole table) or small items (like a row).
 Example: Locking just a row instead of an entire table improves
performance.
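The two phases of 2PL can be sketched in Python (a minimal single-process illustration; the class and method names are hypothetical, not a real DBMS API):

```python
import threading

class TwoPhaseTransaction:
    """Sketch of two-phase locking: all locks are acquired in the
    growing phase, and only released in the shrinking phase."""

    def __init__(self, locks):
        self.locks = locks          # data item -> threading.Lock
        self.held = []              # items locked so far
        self.shrinking = False      # once True, no new locks allowed

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violation: cannot lock after releasing")
        self.locks[item].acquire()  # growing phase
        self.held.append(item)

    def release_all(self):
        self.shrinking = True       # shrinking phase begins
        for item in self.held:
            self.locks[item].release()
        self.held = []

locks = {"X": threading.Lock(), "Y": threading.Lock()}
t1 = TwoPhaseTransaction(locks)
t1.lock("X")
t1.lock("Y")        # growing phase: collect all needed locks
t1.release_all()    # shrinking phase: release them together
```

Because no lock may be taken after the first release, any interleaving of such transactions is serializable (though, as noted above, deadlocks are still possible).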

🚀 2. Optimistic Concurrency Control


"Let’s assume everything will go fine – if not, we’ll fix it at the end!"
These techniques assume conflicts are rare, so transactions proceed without
locking. At the end, a check (validation) is done before saving changes.
✅ Techniques:
a) Time-Stamp Based
 Every transaction gets a timestamp (a unique time).
 Operations are allowed only if they follow the correct time order.
 Prevents newer transactions from overwriting older ones incorrectly.
b) MVCC (Multi-Version Concurrency Control)
 Keeps multiple versions of a data item.
 Readers and writers don’t block each other.
 Readers see a snapshot of the data as it was when they started.
c) Snapshot Isolation
 Each transaction sees a snapshot of the database when it started.
 It won’t see changes made by others during its work.
 Good balance of safety and performance.
d) CRDTs (Conflict-Free Replicated Data Types)
 Used when data is copied across multiple places (like in cloud systems).
 Automatically merge data without conflict.
 Common in real-time collaborative apps like Google Docs.
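A grow-only counter (G-Counter), one of the simplest CRDTs, can be sketched like this (illustrative names; real CRDT libraries differ):

```python
class GCounter:
    """Grow-only counter CRDT: each replica increments its own slot;
    merging takes the element-wise maximum, so merges never conflict."""

    def __init__(self, replica_id, n_replicas):
        self.id = replica_id
        self.counts = [0] * n_replicas

    def increment(self):
        self.counts[self.id] += 1

    def value(self):
        return sum(self.counts)

    def merge(self, other):
        # Element-wise max is commutative, associative, and idempotent,
        # so replicas converge no matter the order of merges.
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

# Two replicas update concurrently, then sync.
a, b = GCounter(0, 2), GCounter(1, 2)
a.increment(); a.increment()   # replica 0 counts 2 events
b.increment()                  # replica 1 counts 1 event
a.merge(b); b.merge(a)
print(a.value(), b.value())    # both converge to 3
```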

Issues in concurrency control in a distributed system:
1. Data Inconsistency
When many users try to change the same data at once, the data can
become incorrect or not match.
2. Deadlock
Two or more processes wait for each other’s tasks to finish, and
nothing moves forward. This is harder to manage in different systems.
3. Replication Problems
It’s hard to keep copies of data the same on all servers when many
users are using them at the same time.
4. Delay in Communication
Since the system parts are in different places, messages may take time,
causing delays and poor coordination.
5. Clock Mismatch
Different systems may show different times, which makes it hard to
know the correct order of events.
6. Handling Failures
If one system stops working while doing a task, it’s hard to decide what
to do with the incomplete work.
7. Extra Processing Work
To keep things in order, extra methods like locking are used. These
slow down the system a bit.

Q.2. Compare Synchronous vs Asynchronous Execution in Distributed Systems
In a distributed system, multiple components located on different networked
computers communicate and coordinate their actions by passing messages.
One of the critical aspects of communication in such systems is whether it is
synchronous or asynchronous.
1. Synchronous Execution
Definition:
In synchronous execution, the caller (or sender) waits for the operation to
complete before proceeding. The communication or interaction is blocking.
Characteristics:
 The sender waits for a response or acknowledgment.
 Both sender and receiver must be active and ready to communicate at the
same time.
 Simple to design and reason about.
 May lead to idle waiting, reducing performance.
Example in Distributed Systems:
 Remote Procedure Call (RPC) where the caller waits for the callee to finish
the task and send back a result.
Advantages:
 Easier to understand and debug.
 Predictable behavior due to sequencing.
Disadvantages:
 Inefficient use of resources (waiting time).
 Vulnerable to network delays or failures—can block the entire system.

2. Asynchronous Execution
Definition:
In asynchronous execution, the caller sends a request and continues its
execution without waiting for a response. The communication is non-
blocking.
Characteristics:
 Sender does not wait for the receiver.
 Receiver can process the request at a later time.
 More complex to design and handle responses or errors.
Example in Distributed Systems:
 Message queues or event-driven systems (e.g., RabbitMQ, Kafka) where
messages are processed independently of the sender’s workflow.
Advantages:
 Better resource utilization and performance.
 More resilient to delays and partial failures.
Disadvantages:
 Harder to implement and manage (requires callbacks, polling, or
notification mechanisms).
 Debugging and error handling can be more complex.

Comparison Table

Feature           | Synchronous                 | Asynchronous
------------------|-----------------------------|--------------------------------------
Blocking          | Yes                         | No
Speed             | Slower (waits for response) | Faster (non-blocking)
Complexity        | Simple to implement         | More complex logic
Efficiency        | Less efficient              | More efficient
Failure Handling  | More vulnerable             | More resilient
Use Case Examples | RPC, HTTP request/response  | Messaging systems, event-driven apps
Ques… Difference between Physical Clock and Logical Clock:

Physical Clock                             | Logical Clock
-------------------------------------------|------------------------------------------
Shows real-world time (like a wall clock). | Shows the order of events in a system.
Measured in hours, minutes, and seconds.   | Measured by counting events.
Used in real-time systems and computers.   | Used in distributed systems.
Needs to be accurate and synchronized.     | Just keeps track of what happened first.
Example: 2:30 PM on your phone.            | Example: Event A happens before Event B.

In short:
 Physical Clock = Real time
 Logical Clock = Order of events
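A Lamport logical clock, the classic way to order events without real time, can be sketched as (hypothetical class, not a library API):

```python
class LamportClock:
    """Logical clock: counts events rather than wall-clock time."""

    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1
        return self.time

    def send(self):
        self.time += 1
        return self.time            # timestamp travels with the message

    def receive(self, msg_time):
        # Jump past the sender's clock so the receive is ordered
        # after the send, whatever the local clock says.
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
a.local_event()        # A: 1
ts = a.send()          # A: 2, message stamped 2
b.local_event()        # B: 1
rcv = b.receive(ts)    # B: max(1, 2) + 1 = 3
print(ts, rcv)         # send happens-before receive: 2 < 3
```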

Ques…. Explain different kinds of problems that are associated with coordination and agreement in a distributed system
Agreement protocols ensure all nodes (computers) in a distributed
system agree on a common value or decision, even if some nodes fail
or give wrong data.

🧾 Why Do We Need Agreement Protocols?


✅ 1. Consistency
 All nodes should have the same result of a computation or decision.
 Example: In a banking system, if one server says “Transaction Success” and
another says “Fail,” that’s a big issue.
🔄 2. Coordination
 Nodes must work together, like voting on what action to take.
 Example: Electing a leader among nodes.
🔒 3. Fault Tolerance
 Some nodes may crash or behave badly, but the rest still need to agree
and function.

🧾 Types of Agreement Problems


1. Consensus Problem
 Goal: All non-faulty nodes must agree on the same value.
 Example: Nodes vote on a value (e.g., yes or no), and all should agree.
2. Byzantine Agreement Problem
 Some nodes may act maliciously (lie or send false messages).
 Goal: Honest nodes should still reach the same decision.
 Example: A faulty node tells Node A “YES” and Node B “NO”. We need a
protocol that can detect and ignore liars.

🔄 Types of Agreement Protocols


🧾 A. Two-Phase Commit (2PC) – used for transactions
Phase 1: Prepare Phase
 Coordinator asks all participants: “Are you ready to commit?”
 Participants respond: Yes / No
Phase 2: Commit / Abort
 If all say Yes → Commit
 If any say No → Abort
🧾 Pros:
 Simple and widely used in databases.
🔴 Cons:
 Blocking: If the coordinator fails after asking, participants may get stuck
waiting.
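The two phases of 2PC can be sketched as follows (illustrative in-process code; a real coordinator must also log its decision and handle timeouts):

```python
def two_phase_commit(participants):
    """Sketch of 2PC: Phase 1 collects votes; Phase 2 commits only
    if every participant voted yes, otherwise aborts everywhere."""
    # Phase 1: prepare — "Are you ready to commit?"
    votes = [p.prepare() for p in participants]
    decision = "commit" if all(votes) else "abort"
    # Phase 2: commit / abort — everyone applies the same decision
    for p in participants:
        p.finish(decision)
    return decision

class Participant:
    def __init__(self, ready):
        self.ready = ready      # whether this node can commit
        self.state = None
    def prepare(self):
        return self.ready       # vote yes / no
    def finish(self, decision):
        self.state = decision

ok = [Participant(True), Participant(True)]
bad = [Participant(True), Participant(False)]
print(two_phase_commit(ok))    # commit
print(two_phase_commit(bad))   # abort — one "no" vetoes the transaction
```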

🧾 B. Three-Phase Commit (3PC) – improved version of 2PC


Adds a third phase to avoid blocking.
Phases:
1. Can Commit? – Same as 2PC's prepare phase.
2. Pre-Commit – A "safe" zone before final commit.
3. Do Commit – Final commit step.
🧾 Pros:
 More reliable than 2PC, non-blocking under many conditions.
🔴 Cons:
 Still doesn’t work well in some network partitions or if nodes lie.

🧾 C. Paxos Algorithm – Consensus in fault-prone systems


Used where nodes might crash but not lie.
Roles:
 Proposer: Suggests a value.
 Acceptor: Votes on values.
 Learner: Learns the final agreed value.
🔵 Paxos Process:
1. Proposer sends a proposal (value + proposal ID).
2. Acceptors reply if it's the highest proposal so far.
3. If majority agree → value is chosen.
🧾 Features:
 Tolerates failures.
 Very reliable but complex to implement.

🧾 D. Raft Algorithm – Easier to understand than Paxos


Used in many systems (like etcd, Consul, Kubernetes).
Raft Process:
1. Leader Election – Choose a leader among nodes.
2. Log Replication – Leader sends decisions to followers.
3. Commit – Once majority accepts, it’s final.
🧾 Advantages:
 Clearer steps than Paxos.
 Strong consistency and reliable.

🔷 E. Byzantine Fault Tolerant Protocols (e.g., PBFT)


Handles Byzantine faults – where nodes can lie, cheat, or act randomly.
PBFT (Practical Byzantine Fault Tolerance):
 Works well up to f faults in 3f+1 nodes
(e.g., can handle 1 faulty node out of 4).
 Used in blockchains and critical systems.
🧾 Pros:
 Very secure.
 Handles bad behavior.
🔴 Cons:
 More overhead and complex messages.
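The 3f+1 bound above can be checked with a one-line calculation (illustrative helper name):

```python
def max_byzantine_faults(n):
    """PBFT-style bound: with n replicas, at most f = (n - 1) // 3
    Byzantine nodes can be tolerated, since n >= 3f + 1 is required."""
    return (n - 1) // 3

for n in (4, 7, 10):
    f = max_byzantine_faults(n)
    print(n, "replicas tolerate", f, "faulty;", n >= 3 * f + 1)
```

So 4 replicas tolerate 1 liar, 7 tolerate 2, and fewer than 4 replicas tolerate none at all.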
6 major problems associated with coordination
and agreement in distributed systems

✅ 1. Crash Failures
💡 What It Is:
 A node (computer/server) stops working or goes offline unexpectedly.
❌ Problem:
 Other nodes don’t know whether it’s slow or truly dead.
 They may get stuck waiting for a response.
🛠 Real Example:
 In a banking app, if a server handling payment crashes during a
transaction, others can’t confirm whether to complete or cancel the
transaction.

✅ 2. Network Partitioning (Communication Failure)


💡 What It Is:
 The network gets split into disconnected parts.
 Nodes in one part can’t talk to others.
❌ Problem:
 Causes inconsistent decisions, because each partition may act separately.
🛠 Real Example:
 Half the system says "Add item to cart," the other half doesn’t get the
message — leading to missing or duplicated actions.

✅ 3. Byzantine Failures (Malicious or Faulty Nodes)


💡 What It Is:
 A node behaves wrongly, sends false messages, or lies on purpose.
 This is the most dangerous kind of failure.
❌ Problem:
 Nodes may not know which one to trust.
 Can lead to wrong decisions or data corruption.
🛠 Real Example:
 A corrupted node says “Transaction Approved” to one node, and
“Declined” to another. No one knows the real answer.

✅ 4. Message Delay or Loss


💡 What It Is:
 Messages between nodes are delayed, or sometimes lost.
❌ Problem:
 One node may assume another has failed, and take wrong action (like
aborting a task too early).
🛠 Real Example:
 A node waits for a vote, but the message got delayed. It aborts the
transaction, even though all others were ready to commit.

✅ 5. No Global Clock (Clock Synchronization Issues)


💡 What It Is:
 Each node has its own local clock, and they are often not synchronized.
❌ Problem:
 Makes it hard to know the order of events (who sent a message first?).
 Can lead to conflicts or data errors.
🛠 Real Example:
 Two users update the same file at nearly the same time, but the system
can’t decide which update came first.
✅ 6. Leader Election Conflicts
💡 What It Is:
 Many protocols require a leader node to coordinate actions.
 Problems arise if:
o Multiple nodes think they are the leader, or
o Frequent failures trigger re-elections.
❌ Problem:
 Causes confusion, delays, and inconsistent decisions.
🛠 Real Example:
 Two servers both think they are handling user logins — leading to
duplicate sessions or conflicting responses.

Briefly explain fault, error, and failure in a distributed system.
1. Fault:
A fault is the underlying defect or cause in a component (e.g., hardware,
software, network) that may lead to an error.
o Example: A server crashes or a disk gets corrupted.
2. Error:
An error is the incorrect state caused by a fault. It is a deviation from the
correct state inside the system.
o Example: A corrupted memory value due to a fault.
3. Failure:
A failure occurs when the system does not deliver the expected service to
the user. It is the visible result of an error.
o Example: A website is down or returns incorrect data.
Summary:
 Fault → Error → Failure
 Faults are causes, errors are internal effects, and failures are external
consequences.

🔍 What is Fault Tolerance?

1. Definition: Fault tolerance enables a system to continue functioning
correctly even when some components fail.
2. Importance: Essential for ensuring system reliability, availability, and
consistency in distributed systems.
3. Objective: Minimize the impact of failures and maintain uninterrupted
service.

⚠️ Types of Faults in Distributed Systems
1. Transient Faults
o Definition: Temporary faults that occur once and then disappear.
o Characteristics:
 Short-lived
 Often caused by environmental disturbances (e.g.,
electromagnetic interference)
 Detection: Challenging to identify due to their brief nature
2. Intermittent Faults
o Definition: Faults that occur at irregular intervals and vanish
unpredictably.
o Characteristics:
 Sporadic in nature
 Harder to diagnose than transient faults
 May indicate deeper hardware or software issues
3. Permanent Faults
o Definition: Faults that remain until the component is repaired or
replaced.
o Characteristics:
 Persistent and repeatable
 Easier to detect than transient or intermittent faults
 Usually caused by hardware failures

🛠️ Phases of Fault Tolerance


1. Fault Detection
o Continuous monitoring to identify deviations from normal system
behavior.
o Tools & methods: heartbeats, watchdog timers, logs.
2. Fault Diagnosis
o Analyzing detected faults to find the root cause.
o Helps distinguish between transient, intermittent, and permanent
faults.
3. Evidence Generation
o Documenting fault occurrences, their causes, and potential solutions.
o Important for audits, debugging, and improving fault tolerance
strategies.
4. Recovery
o Restoring the system to a normal state.
o Techniques include:
 Reconfiguration: Rerouting tasks or connections
 Resynchronization: Realigning system states or clocks
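The fault detection phase above can be sketched with a heartbeat monitor (hypothetical class; times are passed in explicitly to keep the sketch deterministic):

```python
class HeartbeatDetector:
    """Sketch of heartbeat-based fault detection: a node is suspected
    of having failed if its last heartbeat is older than the timeout."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}         # node -> time of last heartbeat

    def heartbeat(self, node, now):
        self.last_seen[node] = now  # node reported "I am alive"

    def suspected(self, now):
        return [n for n, t in self.last_seen.items()
                if now - t > self.timeout]

d = HeartbeatDetector(timeout=3)
d.heartbeat("A", now=0)
d.heartbeat("B", now=0)
d.heartbeat("A", now=5)     # A keeps reporting; B has gone silent
print(d.suspected(now=6))   # ['B'] — B missed its deadline
```

Note that, as discussed under message delays, a "suspected" node may merely be slow; the detector cannot distinguish a crash from a delay.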

🧾 Types of Fault Tolerance


1. Hardware Fault Tolerance
o Uses redundant hardware components (e.g., dual power supplies,
RAID systems).
o Ensures continuity if hardware fails.
2. Software Fault Tolerance
o Employs error-detection and correction techniques during execution
(e.g., N-version programming, recovery blocks).
o Detects and corrects software faults dynamically.
3. System Fault Tolerance
o A hybrid approach combining hardware and software fault tolerance.
o Ensures overall system resilience through layered strategies.

🔄 Fault Tolerance Strategies


1. Replication
o Multiple copies of data or services are maintained to ensure
availability.
o Used in databases, cloud services, microservices.
2. Redundancy
o Extra components (hardware/software) act as backups.
o Examples: redundant servers, network paths.
3. Failover Mechanisms
o Automatic switching to a standby component/system upon fault
detection.
o Ensures minimal disruption and high availability.

Q.7. What do you understand by Byzantine Agreement? Explain
Byzantine Agreement (also known as Byzantine Generals Problem) is a
fundamental problem in distributed systems and fault tolerance, where the
goal is to reach consensus among distributed nodes (or processes), even if some
of them behave maliciously or incorrectly.
It ensures that all non-faulty (honest) nodes agree on a common value or
decision, despite the presence of Byzantine faults — faults that cause nodes to
act arbitrarily, including lying, sending conflicting information, or being silent.

🧾 Key Requirements of Byzantine Agreement:


1. Agreement: All non-faulty nodes must agree on the same value.
2. Validity: If all non-faulty nodes propose the same value, then that must be
the decision.
3. Termination: Every non-faulty node must eventually decide on a value.

🐞 What is a Byzantine Fault?


 A Byzantine fault is the most severe type of fault where a node may:
o Fail silently
o Send incorrect or conflicting messages to different nodes
o Be controlled by an attacker
 These faults are unpredictable and hard to detect
✅ Real-World Applications:
 Blockchain and Cryptocurrencies
 Secure voting systems
 Aerospace systems
 Distributed databases

Ques… Explain transport level communication services for building distributed applications.
🌐 What is Transport Level Communication?
 It is a way for two computers in a distributed system to send and receive
data over a network.
 It is part of the Transport Layer in the OSI model.
 Helps different parts of a distributed application (like client and server) talk
to each other smoothly and reliably.

🚀 Main Services Provided by Transport Layer:


1. Connection Establishment and Termination
 A connection must be set up before sending data.
 After data transfer, the connection is closed properly.
 This ensures both computers are ready to communicate.
 📌 Example: Like calling someone – you say "Hello", they reply, and then
you start talking.

2. Reliable Data Transfer


 Ensures data is delivered correctly and in order.
 If data is lost or damaged, it is resent.
 No data is duplicated or missed.
 📌 Example: Like sending a parcel with a delivery receipt – the sender
knows it was received.

3. Flow Control
 Makes sure the sender does not send data too fast for the receiver.
 Controls the speed of data transfer.
 Helps avoid overloading the receiver.
 📌 Example: Like talking slowly so the other person can write notes.

4. Error Control
 Detects and fixes errors in data.
 Adds a checksum to check for mistakes during transmission.
 📌 Example: Like double-checking a message to ensure there are no
spelling mistakes.
5. Multiplexing and Demultiplexing
 Multiplexing: Allows many apps to use the network at the same time.
 Demultiplexing: Sends data to the correct application based on port
numbers.
 📌 Example: Like a post office sorting letters based on the recipient's
address.

6. Congestion Control
 Prevents network overload when too much data is sent at once.
 Slows down data transfer if the network is too busy.
 📌 Example: Like waiting in a traffic jam – only a few cars (data packets)
can go at a time.

✅ Common Protocols at Transport Layer:


1. TCP – Transmission Control Protocol
2. UDP – User Datagram Protocol
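The checksum idea behind error control can be sketched with the 16-bit ones'-complement sum that TCP and UDP use (simplified; real implementations also cover a pseudo-header):

```python
def internet_checksum(data: bytes) -> int:
    """Sketch of the 16-bit ones'-complement checksum used for error
    control: the receiver recomputes it to detect corruption in transit."""
    if len(data) % 2:
        data += b"\x00"                           # pad to a whole 16-bit word
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # add each 16-bit word
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF                        # ones' complement of the sum

packet = b"data"
cs = internet_checksum(packet)
# The receiver sums the payload together with the checksum:
# an undamaged packet verifies to 0.
check = internet_checksum(packet + cs.to_bytes(2, "big"))
print(check)    # 0
```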

Ques… Adapter Design Pattern


✅ What is Adapter Design Pattern?
The Adapter Design Pattern is a structural design pattern that allows objects
with incompatible interfaces to work together. It acts as a bridge between two
incompatible interfaces by converting one interface into another that a client
expects.
Key Points:
1. Purpose:
 To resolve incompatibility between two classes or systems
 To allow integration between components without modifying existing
code.
 Often used when integrating legacy systems with new modules.
⚙️ Structure:
1. Target Interface: The interface expected by the client.
2. Adaptee: The existing class or system with a different interface.
3. Adapter: Implements the target interface and translates requests to the
adaptee.

1. Class Adapter (Inheritance-based)


 Inherits from both the Target interface and the Adaptee class.
 Tightly coupled due to inheritance.
 Works only in languages that support multiple inheritance (e.g., C++).

2. Object Adapter (Composition-based)


 Uses composition: the adapter holds a reference to the Adaptee.
 Implements the Target interface and delegates calls to the adaptee.
 More flexible and widely used (especially in Java, C#).
3. Two-way Adapter
 Implements both Target and Adaptee interfaces.
 Enables objects of either type to work with the other.
 Useful when bi-directional communication is required.

4. Interface Adapter (Default Adapter)


 Provides default (empty) implementations of all interface methods.
 Subclasses can override only the needed methods.
 Common in Java using abstract adapter classes or default interface
methods (Java 8+).
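An object adapter can be sketched in Python (all class names here are hypothetical examples):

```python
# Target interface: what the client expects.
class MediaPlayer:
    def play(self, filename: str) -> str:
        raise NotImplementedError

# Adaptee: an existing (legacy) class with an incompatible interface.
class LegacyAudioLib:
    def start_playback(self, path: str) -> str:
        return f"legacy playing {path}"

# Object adapter: implements the target and delegates to the adaptee.
class AudioAdapter(MediaPlayer):
    def __init__(self, adaptee: LegacyAudioLib):
        self.adaptee = adaptee      # composition, not inheritance

    def play(self, filename: str) -> str:
        # Translate the client's call into the adaptee's API.
        return self.adaptee.start_playback(filename)

player: MediaPlayer = AudioAdapter(LegacyAudioLib())
print(player.play("song.mp3"))   # client code only ever sees MediaPlayer
```

The client never touches LegacyAudioLib directly, so the legacy code is integrated without being modified.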

Ques… Does using timestamping for concurrency control ensure serializability? Discuss.
✅ What is Timestamp-based Concurrency Control?
In DBMS (Database Management Systems), Timestamp Ordering Protocol is a
method used to control concurrency. Each transaction is assigned a unique
timestamp when it enters the system. These timestamps help decide the
execution order of transactions.
The idea:
 Every data item has two timestamps:
o Read_TS(X) → The largest timestamp of any transaction that
successfully read X.
o Write_TS(X) → The largest timestamp of any transaction that
successfully wrote X.

🔄 How it works (Simple Explanation):


Let’s say:
 T1 has timestamp TS(T1) = 5
 T2 has timestamp TS(T2) = 10
This means T1 is older than T2. So, if both want to access the same data item X,
the DBMS ensures that T1 acts before T2 on X.
The DBMS checks:
1. If a read/write operation by a transaction violates timestamp ordering.
2. If no violation, the operation is allowed.
3. If it violates, the transaction is aborted and restarted with a new
timestamp.

✔️ Does it ensure serializability?


Yes, timestamp ordering ensures conflict-serializability. That means:
 The final result of concurrent transactions will be the same as if they were
executed in some serial order (based on timestamps).
 Even if transactions are interleaved, the logical effect is as if they were run
one after the other.

❌ Limitations:
 It can lead to more aborts, especially in high-conflict situations.
 A transaction whose operation arrives “too late” (after a younger
transaction has already read or written the item) is aborted and restarted,
so long-running transactions may be aborted repeatedly (starvation).
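The read/write checks of basic timestamp ordering can be sketched for a single data item X (illustrative class, not a DBMS API):

```python
class TimestampOrdering:
    """Sketch of the basic timestamp-ordering checks on one item X."""

    def __init__(self):
        self.read_ts = 0    # Read_TS(X): largest TS that read X
        self.write_ts = 0   # Write_TS(X): largest TS that wrote X

    def read(self, ts):
        if ts < self.write_ts:       # X was already written by a younger txn
            return "abort"
        self.read_ts = max(self.read_ts, ts)
        return "ok"

    def write(self, ts):
        if ts < self.read_ts or ts < self.write_ts:
            return "abort"           # a younger txn already read or wrote X
        self.write_ts = ts
        return "ok"

x = TimestampOrdering()
print(x.read(10))    # ok: Read_TS(X) becomes 10
print(x.write(5))    # abort: TS 5 is older than the reader with TS 10
print(x.write(12))   # ok: Write_TS(X) becomes 12
```

Every allowed operation respects timestamp order, which is why the resulting schedules are conflict-serializable.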

Ques… Discuss connectionless communication between client and server using sockets.
🔹 What is Connectionless Communication?
Connectionless communication is a form of communication in which:
 No persistent connection is established between client and server.
 Each message (or datagram) is independent.
 No prior handshake is performed before data is sent.
 No guarantee of delivery, order, or duplicate protection is provided.
 It is typically faster but less reliable compared to connection-oriented
communication.

🔹 Sockets and UDP in Distributed Systems


In distributed systems, sockets provide an endpoint for sending or receiving
data. When implementing connectionless communication, sockets use the UDP
protocol instead of TCP.
 UDP sockets are used to send and receive discrete packets of data.
 Both client and server create their own sockets.
 The sender sends data packets to the recipient’s IP address and port
number.
🔹 How It Works (Client-Server Model)
1. Server Side (UDP Socket)
 The server creates a socket using socket(AF_INET, SOCK_DGRAM, 0).
 It binds the socket to an IP address and a specific port using bind().
 It waits for datagrams using recvfrom().
 No listen() or accept() is used (unlike TCP).
2. Client Side
 The client also creates a UDP socket.
 It sends a message using sendto() function to the server's IP and port.
 It can optionally wait for a response using recvfrom().
Since no connection is established, each interaction is standalone.
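The exchange above can be shown with Python's socket module, using the loopback address so both ends run in one process (a minimal sketch):

```python
import socket

# Server socket: SOCK_DGRAM selects UDP; no listen()/accept() needed.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))            # port 0 = let the OS pick one
addr = server.getsockname()

# Client socket: no connection is set up; sendto() names the target.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"temperature=21.5", addr)

# Server receives one standalone datagram, plus the sender's address.
data, sender = server.recvfrom(1024)
print(data.decode())                     # temperature=21.5

client.close()
server.close()
```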

🔹 Example Use Case in Distributed System


Imagine a sensor network in a smart city application:
 Each sensor (client) sends temperature data periodically to a central server.
 Sensors don’t wait for a response or acknowledgment.
 The server receives this data and logs or processes it.
 Here, UDP suits well because speed is more important than reliability.

🔹 Advantages
 Lightweight and Fast: Ideal for real-time systems and scenarios with
frequent short messages.
 Scalable: Supports large number of clients due to no connection overhead.
 Simple: Minimal setup compared to TCP.

🔹 Disadvantages
 Unreliable: No guarantee that messages will reach their destination.
 No Congestion Control: May lead to packet loss during network
congestion.
 Not Suitable for Sensitive Data: Not ideal where accuracy or order is
critical.

Explain distributed shared memory with the help of a suitable diagram.
What is Distributed Shared Memory (DSM)?
Distributed Shared Memory (DSM) is a software or hardware mechanism that
enables processes running on different nodes of a distributed system to share a
common virtual address space. While each node has its own physical memory,
DSM makes it appear as if all nodes are accessing the same memory, thus
abstracting inter-process communication (IPC).
 DSM does not use physical shared memory.
 Instead, it provides a shared virtual memory abstraction.
 Data is moved between the main memories of different nodes
transparently.
 Developers can program as if memory is shared, even though it's
distributed.
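Since the question asks for a diagram, the DSM abstraction can be sketched as:

```
  Node 1            Node 2            Node 3
+----------+     +----------+     +----------+
|   CPU    |     |   CPU    |     |   CPU    |
|  Local   |     |  Local   |     |  Local   |
|  Memory  |     |  Memory  |     |  Memory  |
+----+-----+     +----+-----+     +----+-----+
     |                |                |
     +---- Communication Network ------+
                      |
     +--------------------------------+
     |  Shared Virtual Address Space  |
     |  (one memory image that every  |
     |   process appears to access)   |
     +--------------------------------+
```

Each node keeps its own physical memory, but the DSM layer moves data blocks over the network so that all processes see the single shared address space at the bottom.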

Types of Distributed Shared Memory


1. On-Chip Memory DSM
o Located within the CPU chip.
o Memory is directly connected to address lines.
o Provides very high speed, but is expensive and complex.
2. Bus-Based Multiprocessors
o CPUs and memory are connected via a shared bus.
o Simultaneous memory access by multiple CPUs is controlled through
algorithms.
o Cache memory is used to reduce traffic and improve performance.
3. Ring-Based Multiprocessors
o Nodes are connected in a ring topology.
o No centralized memory; data is passed around via token ring.
o A single address space is divided and shared across nodes.
Advantages of DSM
 ✅ Simplified Programming
Developers work with a single memory abstraction, no need to manage
message passing.
 ✅ Portability
DSM programs can be ported easily between systems due to their
common interface.
 ✅ Locality of Data
DSM fetches blocks of data, improving performance by anticipating future
needs.
 ✅ On-Demand Data Movement
Data is moved only when needed, saving bandwidth and reducing latency.
 ✅ Large Virtual Memory Space
Total memory available is the sum of all nodes' memory, enabling
memory-intensive applications.
 ✅ Performance Boost
Speeds up data access and improves overall system efficiency.

Disadvantages of DSM
 ❌ Slow Accessibility
Data access is slower than local memory access, due to network latency.
 ❌ Consistency Challenges
Maintaining memory consistency across nodes is complex and error-
prone.
 ❌ Inefficient Messaging
DSM may use asynchronous message passing, which isn't always optimal.
Ques… Explain how microkernels can be used to organize an operating system in a client-server fashion.
Ans… A microkernel is the minimalistic core of an operating system that
provides only the essential services such as:
 Low-level memory management
 Inter-process communication (IPC)
 Basic scheduling
In a client-server model, the OS is split into small, isolated processes where:
 Client: Requests services (e.g., file access, device handling)
 Server: Provides the requested services
 Microkernel: Acts as a mediator using IPC
This model helps in better modularity, security, and fault isolation.
⚙️ Working of Microkernel in Client-Server OS
Step-by-Step Process
1. User App (Client) Makes a Request
A user application (e.g., text editor) needs to access a file. It acts as a
client.
2. Request Sent via IPC
The client sends an IPC message to the file system server through the
microkernel.
3. Microkernel Mediates
The microkernel receives the request and forwards it to the appropriate
server (e.g., File Server).
4. Server Processes Request
The File Server receives the message, processes it (e.g., reads the file), and
prepares the response.
5. Response Sent Back via Microkernel
The File Server sends the result back to the client using IPC, again via the
microkernel.
6. Client Receives Response
The application now has access to the data it requested.
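The six steps above can be simulated in-process with message queues standing in for IPC (all names are hypothetical; a real microkernel uses kernel-managed message ports):

```python
import queue

# Each component gets a mailbox; the "microkernel" only routes messages.
mailboxes = {"file_server": queue.Queue(), "client": queue.Queue()}

def kernel_send(dest, msg):
    # The microkernel's only job in this sketch: deliver the message.
    mailboxes[dest].put(msg)

def file_server_step():
    # User-space server: handle one request, reply via the kernel.
    req = mailboxes["file_server"].get()
    kernel_send(req["reply_to"], {"data": f"contents of {req['path']}"})

# Steps 1-2: the client sends an IPC request through the kernel.
kernel_send("file_server", {"path": "/etc/motd", "reply_to": "client"})
# Steps 3-4: the kernel delivered it; the file server processes it.
file_server_step()
# Steps 5-6: the client receives the response.
reply = mailboxes["client"].get()
print(reply["data"])    # contents of /etc/motd
```

Because the file server is an ordinary user-space process, a crash in it leaves the kernel and the other servers running, which is the fault isolation benefit described below.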

✅ Features of Microkernel-Based Operating System


1. Minimal Core
Only essential services like CPU scheduling, memory management, and IPC
are handled by the kernel.
2. Modularity
System services (like file systems, device drivers) are run as separate
modules in user space.
3. Fault Isolation
Crashes in one service (like a driver) won’t crash the whole system;
improves stability.

✅ Advantages of Microkernel
1. Security & Reliability
Smaller kernel means less attack surface and fewer chances of system
crashes.
2. Modularity & Maintainability
Easy to update, debug, or replace services without affecting the whole OS.
3. Portability & Scalability
Can be easily adapted to different hardware or embedded systems.

⚠️ Disadvantages of Microkernel
1. Slower Performance
More context switches and message passing between components reduce
speed.
2. Higher Complexity
Designing and managing IPC between services is more complex.
3. More Memory Usage
Running multiple services in user space uses more RAM.
