Unit IV
This unit covers two main topics: Distributed Operating Systems and Multiprocessor Operating Systems. Each of these
areas involves unique architectures, design issues, and functionalities. Here’s a detailed breakdown:
a. Overview
Distributed Operating Systems (DOS) allow a collection of independent computers to work together and present
themselves to users as a single coherent system.
Key Characteristics:
• Transparency: The distribution of resources should be transparent to users (location, migration, replication, and
concurrency transparency).
• Scalability: The system should efficiently handle an increasing number of users and resources.
• Fault Tolerance: The system should continue functioning despite failures.
Architectural Models:
• Client-Server Model: Clients request services, and servers provide them. This model is widely used in networked
applications.
• Peer-to-Peer Model: All nodes can act as both clients and servers. This model enhances resource sharing and
redundancy.
• Middleware: Provides a layer that abstracts the complexity of distributed systems, facilitating communication
between distributed components.
b. Design Issues
1. Communication: Efficient communication protocols (e.g., RPC - Remote Procedure Call) are essential for
processes to communicate over a network.
2. Resource Management: Strategies for managing resources across multiple nodes, including load balancing and
resource allocation.
3. Synchronization: Mechanisms for coordinating processes that may be distributed across different nodes.
4. Security: Ensuring secure communication and resource access in a distributed environment.
5. Failure Management: Strategies for handling node failures, including redundancy and recovery techniques.
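The RPC-style communication mentioned above can be illustrated with a toy sketch. Real RPC frameworks handle networking, stubs, and error recovery; here the "network" is simulated by passing JSON strings between a client stub and a server dispatcher, and all names (`rpc_call`, `dispatch`, `PROCEDURES`) are illustrative, not part of any real library.

```python
import json

# Toy RPC sketch: the server exposes named procedures; the client stub
# marshals a call into JSON, "sends" it, and unmarshals the reply.
PROCEDURES = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

def dispatch(request):
    """Server side: unmarshal the request, invoke, marshal the result."""
    msg = json.loads(request)
    result = PROCEDURES[msg["proc"]](*msg["args"])
    return json.dumps({"result": result})

def rpc_call(name, *args):
    """Client stub: looks like a local call, hides the message exchange."""
    request = json.dumps({"proc": name, "args": list(args)})
    reply = dispatch(request)          # stands in for a network round trip
    return json.loads(reply)["result"]

print(rpc_call("add", 2, 3))   # 5
```

The point of the stub is transparency: the caller writes `rpc_call("add", 2, 3)` as if the procedure were local, while marshalling and transport stay hidden.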
c. Distributed Mutual Exclusion
Distributed mutual exclusion ensures that multiple processes in a distributed system can access shared resources
without conflict.
Techniques:
1. Token-Based Algorithms: A unique token circulates among the nodes, and only the current holder may enter the
critical section (e.g., the Suzuki-Kasami broadcast algorithm, Raymond's tree-based algorithm).
2. Timestamp-Based Algorithms: Each request is tagged with a logical timestamp, and access is granted in
timestamp order (e.g., Lamport's algorithm, the Ricart-Agrawala algorithm).
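The ordering rule behind timestamp-based algorithms can be sketched as follows. The actual message exchange (REQUEST/REPLY traffic) is elided; the sketch only shows how pending requests are totally ordered by (timestamp, process id), with the process id breaking ties, which is the core of Lamport-style mutual exclusion.

```python
import heapq

# Each pending request is a (logical timestamp, process id) pair.
# The smallest pair enters the critical section next; the pid breaks
# ties so that every node agrees on a single total order.
requests = [(3, "P2"), (1, "P1"), (3, "P1"), (2, "P3")]
heapq.heapify(requests)

entry_order = [heapq.heappop(requests)[1] for _ in range(4)]
print(entry_order)  # ['P1', 'P3', 'P1', 'P2']
```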
d. Deadlock Detection
In a distributed system, deadlocks can occur when multiple processes wait indefinitely for resources held by each other.
Detection Strategies:
1. Wait-For Graphs: Construct a wait-for graph at each node and periodically check it for cycles; a cycle indicates a
deadlock.
2. Centralized Algorithms: Use a central coordinator to monitor and detect deadlocks across the system.
3. Distributed Algorithms: Each process maintains a local view and exchanges information with others to detect
deadlocks.
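The wait-for-graph check above can be sketched with a small cycle detector. The graph representation and function name are illustrative: an edge A → B means "process A waits for a resource held by process B", and a cycle means deadlock.

```python
# Sketch of deadlock detection on a wait-for graph via recursive DFS
# with three colors: 0 = unvisited, 1 = on the current path, 2 = done.
def has_cycle(graph):
    color = {node: 0 for node in graph}

    def dfs(node):
        color[node] = 1
        for nxt in graph.get(node, []):
            if color.get(nxt, 0) == 1:
                return True            # back edge: cycle (deadlock) found
            if color.get(nxt, 0) == 0 and dfs(nxt):
                return True
        color[node] = 2
        return False

    return any(color[n] == 0 and dfs(n) for n in graph)

print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False
```

In a centralized scheme the coordinator would run this check on the merged global graph; in a distributed scheme each node runs it on its local view and exchanges edges with its neighbors.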
e. Distributed Shared Memory
Distributed shared memory (DSM) allows processes on different nodes to share memory locations as if they were in the
same address space.
Techniques:
1. Replication: Keeping copies of shared memory at various nodes for faster access.
2. Consistency Models: Ensuring that all processes view the memory in a consistent state (e.g., strong consistency
vs. eventual consistency).
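A toy sketch of replication with a write-update policy: every write is pushed to all replicas immediately, approximating strong consistency, whereas delaying or batching the propagation would yield only eventual consistency. The `DSMNode` class and address values here are illustrative, not a real DSM protocol.

```python
# Toy DSM sketch: each node keeps a local replica of shared memory.
class DSMNode:
    def __init__(self):
        self.pages = {}

    def read(self, addr):
        return self.pages.get(addr)    # local read: no network hop

nodes = [DSMNode(), DSMNode(), DSMNode()]

def write(addr, value):
    for n in nodes:                    # write-update: push to every replica
        n.pages[addr] = value

write(0x10, "hello")
print([n.read(0x10) for n in nodes])   # ['hello', 'hello', 'hello']
```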
f. Distributed Scheduling
Distributed scheduling involves assigning tasks to various processors in a distributed system to optimize resource usage
and minimize execution time.
Approaches:
1. Global Scheduling: A central scheduler decides the task allocation for all nodes.
2. Local Scheduling: Each node schedules its tasks independently, which may lead to inefficiencies but reduces
overhead.
3. Heuristic-Based Scheduling: Uses heuristics to optimize task allocation based on current system state.
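One common heuristic for the third approach is greedy least-loaded assignment: each incoming task goes to whichever processor currently has the smallest total load. A minimal sketch, with task costs and processor counts chosen purely for illustration:

```python
# Greedy heuristic scheduler: place each task on the least-loaded processor.
def schedule(tasks, num_procs):
    loads = [0] * num_procs
    placement = []
    for cost in tasks:
        p = loads.index(min(loads))    # pick the least-loaded processor
        loads[p] += cost
        placement.append(p)
    return placement, loads

placement, loads = schedule([5, 3, 8, 2, 4], num_procs=2)
print(placement, loads)   # [0, 1, 1, 0, 0] [11, 11]
```

The heuristic is fast and needs only current load information, but it is not optimal: a scheduler that knew all task costs in advance could sometimes find a better partition.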
Multiprocessor Operating Systems
a. Architecture
1. Symmetric Multiprocessing (SMP): All processors share the same memory and I/O resources. They have equal
access to memory and can run multiple processes simultaneously.
2. Asymmetric Multiprocessing (AMP): One master processor controls the system and assigns tasks to slave
processors, which handle specific tasks.
b. Design Issues
When designing an operating system for multiprocessor systems, several issues arise:
1. Process Synchronization: Mechanisms to ensure that processes running on different processors do not interfere
with each other when accessing shared resources.
2. Load Balancing: Distributing workloads evenly across processors to optimize resource usage.
3. Memory Management: Strategies for managing memory in a way that supports multiple processors efficiently,
including shared vs. private memory.
4. Inter-Processor Communication: Ensuring efficient communication between processors for data sharing and task
coordination.
c. Threads
Threads allow for concurrent execution within a process, making better use of CPU resources in multiprocessor systems.
Types of Threads:
1. User-Level Threads: Managed entirely by a user-space library; they are cheap to create and switch, but a blocking
system call can stall all threads in the process.
2. Kernel-Level Threads: Managed by the OS kernel; individual threads can be scheduled on different processors, at
the cost of heavier context switches.
d. Process Synchronization
Process synchronization in multiprocessor systems can be challenging due to multiple CPUs accessing shared data.
Synchronization Mechanisms:
1. Spinlocks: A processor busy-waits until the lock becomes free; efficient for short critical sections on
multiprocessors.
2. Semaphores and Mutexes: Blocking primitives that suspend waiting processes instead of spinning.
3. Barriers: Force a group of processes to wait until all have reached a common point before any proceeds.
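The core idea of mutex-based synchronization can be sketched with threads sharing a counter. Without the lock, increments from different threads could interleave and lose updates; with it, the read-modify-write is atomic. The worker function and counts are illustrative.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:            # only one thread in the critical section
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 -- every increment preserved
```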
e. Process Scheduling
Process scheduling determines how processes are assigned to CPUs for execution.
Scheduling Algorithms:
1. Round Robin: Each process gets a fixed time slice in a cyclic order.
2. Priority Scheduling: Processes are scheduled based on priority levels.
3. Load Balancing: Ensures that processes are evenly distributed among processors to maximize throughput.
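Round-robin scheduling from the list above can be simulated in a few lines: each process runs for at most one time quantum, then rejoins the back of the ready queue if it still has work left. Burst times and the quantum are illustrative.

```python
from collections import deque

# Round-robin simulation over (pid, remaining burst time) pairs.
def round_robin(bursts, quantum):
    queue = deque(bursts.items())
    order = []
    while queue:
        pid, remaining = queue.popleft()
        order.append(pid)              # pid runs for one quantum
        remaining -= quantum
        if remaining > 0:
            queue.append((pid, remaining))   # unfinished: back of the queue
    return order

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# ['P1', 'P2', 'P3', 'P1', 'P2', 'P1']
```

Note the trade-off the quantum controls: a small quantum improves responsiveness but raises context-switch overhead; a large one degenerates toward first-come, first-served.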
f. Memory Management
Effective memory management is crucial in multiprocessor operating systems to prevent bottlenecks.
Strategies:
1. Shared Memory: Allows multiple processors to access the same physical memory.
2. Distributed Memory: Each processor has its own local memory, reducing contention.
3. Cache Coherence: Ensures that changes in one processor's cache are reflected in others.
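Cache coherence can be sketched with a toy write-invalidate policy: when one processor writes a line, every other cache's copy of that line is invalidated, so a later read there must re-fetch the fresh value from memory. This is a drastic simplification of real protocols such as MSI/MESI, and all names here are illustrative.

```python
memory = {"x": 0}
caches = [dict(), dict()]   # one private cache per processor

def read(cpu, var):
    if var not in caches[cpu]:
        caches[cpu][var] = memory[var]   # miss: fill from memory
    return caches[cpu][var]

def write(cpu, var, value):
    memory[var] = value
    caches[cpu][var] = value
    for i, c in enumerate(caches):       # invalidate all other copies
        if i != cpu:
            c.pop(var, None)

read(0, "x"); read(1, "x")   # both caches now hold x = 0
write(0, "x", 42)            # CPU 0 writes; CPU 1's copy is invalidated
print(read(1, "x"))          # 42, re-fetched from memory
```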
g. Reliability and Fault Tolerance
Ensuring reliability and fault tolerance is critical in multiprocessor systems to handle failures gracefully.
Techniques:
1. Redundancy: Using multiple processors to perform the same task ensures that failure of one does not
compromise the system.
2. Checkpointing: Saving the state of processes periodically to recover from failures.
3. Exception Handling: Mechanisms to handle unexpected conditions and restore normal operation.
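Checkpointing can be sketched by periodically serializing process state so that, after a failure, execution resumes from the last saved snapshot instead of restarting from the beginning. The loop, state shape, and checkpoint interval are illustrative.

```python
import pickle

state = {"step": 0, "total": 0}
checkpoint = None

for i in range(1, 11):
    state["step"] = i
    state["total"] += i
    if i % 5 == 0:                      # take a checkpoint every 5 steps
        checkpoint = pickle.dumps(state)

state = {"step": -1, "total": -1}       # simulate a crash / lost state
state = pickle.loads(checkpoint)        # recover from the last checkpoint
print(state)  # {'step': 10, 'total': 55}
```

The interval is a trade-off: frequent checkpoints shrink the amount of work lost on failure but add steady-state overhead.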
Conclusion
Understanding advanced operating systems, including distributed and multiprocessor systems, is essential for designing
and managing efficient, reliable, and scalable systems. These concepts provide the foundational knowledge necessary to
tackle complex challenges in contemporary computing environments, including performance optimization, resource
management, and fault tolerance.