Operating System - Unit 4
INTER PROCESS COMMUNICATION AND SYNCHRONIZATION
TYPES OF IPC
IPC refers to the methods and mechanisms that allow processes to communicate
with each other. Since processes run independently and can be executed
concurrently, they often need to exchange information to coordinate their actions
or share data. IPC facilitates this communication in various ways:
1. Message Passing: Processes send and receive messages to exchange information.
This can be done using various protocols and methods such as pipes, queues, or
message-passing systems.
2. Shared Memory: Processes can access a common memory region where they can
read from and write to. This approach allows fast communication but requires
synchronization to avoid conflicts.
3. Semaphores and Mutexes: These are synchronization tools that can be used to
manage access to shared resources and prevent race conditions.
4. Sockets: Used for communication between processes on different machines over a
network.
5. Files: Processes can read from and write to files as a form of communication.
MODELS OF IPC
SHARED MEMORY
Shared memory is an inter-process communication (IPC) method that
allows multiple processes to access and communicate by sharing a
common area of memory.
It is one of the fastest IPC mechanisms because processes can directly read
from and write to the same memory space without copying data.
However, proper synchronization (using semaphores or mutexes) is required
to avoid conflicts when multiple processes try to access the memory at the
same time. It's commonly used in high-performance applications for
efficient data sharing between processes.
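As a concrete illustration, here is a minimal sketch of shared-memory IPC using the POSIX shm_open/mmap interface. The object name "/demo_shm" and the message text are illustrative, and fork()/wait() stand in for the semaphore-based synchronization a real application would use:

```c
/* Compile with: cc shm_demo.c  (add -lrt on older glibc) */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    const char *name = "/demo_shm";     /* illustrative object name */
    const size_t size = 4096;

    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);  /* create/open object */
    if (fd == -1) { perror("shm_open"); return 1; }
    ftruncate(fd, size);                               /* set its size */

    char *region = mmap(NULL, size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);            /* map into our space */
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    if (fork() == 0) {                 /* child writes directly to the region */
        strcpy(region, "hello from child");
        return 0;
    }
    wait(NULL);                        /* crude ordering for this demo only */
    printf("parent read: %s\n", region);

    munmap(region, size);
    shm_unlink(name);
    return 0;
}
```

Note that no data is copied between the processes: both read and write the same physical pages, which is why this mechanism is so fast and why synchronization matters.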
MESSAGE PASSING
Message passing is a method of inter-process communication (IPC) where
processes exchange data by sending and receiving messages. Unlike shared
memory, in message passing, processes do not share memory space but instead
communicate by passing discrete messages through the operating system's
messaging channels (like pipes, message queues, or sockets).
The OS must provide two primitives, send() and receive(), for message passing.
Key Points:
• No shared memory: Processes exchange data by sending messages instead of
sharing memory.
• Synchronization built-in: The sending and receiving of messages are usually
synchronized, making it easier to avoid conflicts.
• Safe for distributed systems: It can be used across different machines over a
network, unlike shared memory, which is limited to processes on the same
machine.
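As a concrete illustration, here is a minimal sketch of message passing between related processes using a pipe, where write() plays the role of send() and read() the role of receive():

```c
/* Compile with: cc pipe_demo.c */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];                       /* fds[0] = read end, fds[1] = write end */
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                /* child: sender */
        close(fds[0]);
        const char *msg = "hello via pipe";
        write(fds[1], msg, strlen(msg) + 1);   /* send() */
        close(fds[1]);
        return 0;
    }

    close(fds[1]);                    /* parent: receiver */
    char buf[64];
    read(fds[0], buf, sizeof buf);    /* receive(): blocks until data arrives */
    printf("received: %s\n", buf);
    close(fds[0]);
    wait(NULL);
    return 0;
}
```

The blocking read() illustrates the built-in synchronization mentioned above: the receiver simply waits until a message is available.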
MESSAGE PASSING THROUGH COMMUNICATION LINKS
DIRECT COMMUNICATION
The sender specifies the name of the process to which it wants to send the message, and
the receiver specifies the name of the sender from which it wants to receive it
(e.g., process P sends directly to process Q).
INDIRECT COMMUNICATION
Neither the sender nor the receiver names the other; messages are sent to and received from a shared mailbox (port).
Synchronization
Synchronization is about coordinating the execution of processes to ensure
that they operate correctly when accessing shared resources. Without
proper synchronization, concurrent processes may interfere with each
other, leading to issues like data corruption or unexpected behavior.
Synchronization ensures:
1. Mutual Exclusion: Only one process can access a critical section of code or
resource at a time. This prevents race conditions where multiple processes
might modify data simultaneously.
2. Deadlock Prevention: Avoiding situations where processes are stuck waiting
for each other indefinitely.
3. Starvation Prevention: Ensuring that every process gets a chance to access
resources and is not perpetually denied.
4. Ordering: Coordinating the order of operations so that processes work
together correctly, following the intended sequence.
Need for IPC and Synchronization
MUTUAL EXCLUSION
Only one process may be inside the critical section (CS) at a time; if mutual exclusion fails, a race condition arises.
RACE CONDITION
Initially shared = 10

Process 1           Process 2
int X = shared;     int Y = shared;
X++;                Y--;
sleep(1);           sleep(1);
shared = X;         shared = Y;
If the two processes ran strictly one after the other there would be no issue; but because
each is interrupted (the sleep) between reading and writing shared, the final value may be
9 or 11 when ideally it should be 10.
This is called RACE CONDITION.
When two or more processes access the same shared variable at the same time, the final
value of the variable depends on which process writes it last. This problem is known as a
race condition.
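The same race can be reproduced with two POSIX threads; this minimal sketch mirrors the example above, with sleep(1) forcing the bad interleaving:

```c
/* Compile with: cc race.c -pthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

int shared = 10;

void *increment(void *arg) {
    int x = shared;   /* X = shared */
    x++;              /* X++ */
    sleep(1);         /* interrupt point: the other thread runs here */
    shared = x;       /* shared = X */
    return NULL;
}

void *decrement(void *arg) {
    int y = shared;   /* Y = shared */
    y--;              /* Y-- */
    sleep(1);
    shared = y;       /* shared = Y */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, decrement, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Typically prints 9 or 11, not the expected 10: whichever thread
       stores last overwrites the other's update. */
    printf("shared = %d\n", shared);
    return 0;
}
```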
MORE ON RACE CONDITION
A race condition occurs when two or more processes or threads try to access and
modify shared resources (such as memory or variables) simultaneously, and the final
outcome depends on the order in which the processes execute. This can lead to
unpredictable or incorrect behavior in programs because the processes "race" to
complete, and the timing of their execution affects the result.
Concept: Race conditions are a core topic of process synchronization in operating
systems.
• Why it happens: It occurs in multi-threaded or multi-process environments when multiple
threads/processes don't have proper synchronization while accessing shared resources.
• Critical Section: A race condition typically happens when processes access the critical
section (shared resources) without proper controls like locks or semaphores, leading to
inconsistent or unexpected results.
How to Prevent Race Conditions:
• Synchronization Mechanisms: Use techniques like locks, mutexes, semaphores, or monitors
to ensure that only one process can access the critical section at a time, preventing
concurrent modifications of shared resources.
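For instance, the race shown earlier disappears once the load-modify-store on shared is wrapped in a pthread mutex, making it a proper critical section; a minimal sketch:

```c
/* Compile with: cc fixed.c -pthread */
#include <pthread.h>
#include <stdio.h>

int shared = 10;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *arg) {
    pthread_mutex_lock(&lock);    /* enter critical section */
    shared++;                     /* read-modify-write is now indivisible
                                     with respect to other lockers */
    pthread_mutex_unlock(&lock);  /* exit critical section */
    return NULL;
}

void *decrement(void *arg) {
    pthread_mutex_lock(&lock);
    shared--;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, decrement, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);  /* always 10 now */
    return 0;
}
```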
PROGRESS
• Condition: If no process is executing in its critical section and there are processes that wish
to enter their critical sections, then the selection of the next process to enter the critical
section must not be postponed indefinitely.
• Explanation: The system should make progress and not get stuck. In other words, if a process
is waiting to enter the critical section, and no other process is in the critical section, the
system should eventually allow the waiting process to enter.
• WILLINGNESS: the decision about which process enters the CS next must consider only
those processes that want to enter the CS, and the decision must be made in finite time.
• FINITE TIME: the next process must enter the CS within finite time; if this condition is not
fulfilled, deadlock can occur.
• Progress: Ensures that the system makes progress and avoids deadlock by not indefinitely
postponing requests.
BOUNDED WAIT
Condition: There must be a limit on the number of times other processes can enter their
critical sections after a process has made a request to enter its critical section and before
the request is granted.
Explanation: A process should not be waiting forever. There should be a bound on how
long a process might have to wait before it gets access to its critical section.
A process requesting the CS must eventually get its turn.
Bounded waiting is expressed in terms of the number of processes admitted ahead of a
waiter, not in units of time.
If bounded waiting fails, starvation may occur.
Starvation occurs when a process never gets the chance to enter its critical
section or access a resource because other processes keep getting ahead of it
indefinitely. This could happen if there’s no upper bound on how long a process
might wait for access, meaning that a process can be indefinitely delayed while
other processes continue to access the resource.
IMPLEMENTING THE BOUNDED BUFFER (PRODUCER-CONSUMER)
•Initialization:
•Set up a shared buffer with a fixed size.
•Initialize pointers or indices to keep track of where to add and remove items (often referred to
as in and out indices).
•Producer Process:
•Produce an item.
•Check if the buffer is full. If it is, the producer must wait.
•Add the item to the buffer at the position indicated by the in index.
•Update the in index (wrap around if necessary).
•Signal that there is a new item available (conceptually).
•Consumer Process:
•Check if the buffer is empty. If it is, the consumer must wait.
•Remove an item from the buffer at the position indicated by the out index.
•Update the out index (wrap around if necessary).
•Signal that there is space available in the buffer (conceptually).
Bounded Buffer Problem
• Specific Definition: This is a specific instance of the Producer-Consumer Problem where
the buffer has a fixed, finite size. It emphasizes managing a limited-size buffer and
coordinating between producers and consumers under these constraints.
• Key Challenges: Same as the Producer-Consumer Problem but with the added
constraint of a buffer with a fixed capacity, which introduces specific issues related to
buffer fullness and emptiness.
Key Similarities
1. Shared Resource: Both involve a shared buffer that must be managed correctly to prevent
issues.
2. Synchronization: Both problems require synchronization mechanisms to manage access to the
shared buffer.
3. Mutual Exclusion: Both need to ensure that only one process (producer or consumer) accesses
the buffer at a time.
Key Differences
1. Buffer Size Constraint: The Bounded Buffer Problem specifically addresses the constraints and
challenges associated with a buffer of limited size, whereas the general Producer-Consumer
Problem may not specify the buffer size constraint.
2. Focus on Buffer Management: The Bounded Buffer Problem often emphasizes how to handle the
buffer when it reaches its maximum capacity or is empty, which is a critical aspect of managing
a finite-sized buffer.
SEMAPHORES
A semaphore is a tool used in computer science to manage how multiple programs or processes
access shared resources, like memory or files, without causing conflicts.
Semaphore Components
• Value: The semaphore maintains an integer value that indicates the number of available
resources or slots.
• Queue: Internally, semaphores maintain a queue of processes or threads that are waiting for
the semaphore to become available.
Semaphore Operations
P Operation (Wait/Proberen): Decrements the semaphore value. If the value is less than or
equal to 0, the process or thread is blocked and added to the semaphore's queue until the
value becomes positive.
V Operation (Signal/Verhogen): Increments the semaphore value. If there are processes or
threads waiting in the queue, one of them is unblocked and allowed to proceed.
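In POSIX, the P and V operations correspond to sem_wait() and sem_post(); a minimal sketch of their use around a critical section:

```c
/* Compile with: cc sem_demo.c -pthread */
#include <semaphore.h>

int main(void) {
    sem_t s;
    sem_init(&s, 0, 1);   /* 0 = shared between threads; initial value 1 */

    sem_wait(&s);         /* P: decrement; blocks if no slot is available */
    /* ... critical section / use of the shared resource ... */
    sem_post(&s);         /* V: increment; may wake a blocked waiter */

    sem_destroy(&s);
    return 0;
}
```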
Types of semaphores
1. Binary Semaphore
Definition: A binary semaphore can take only two values: 0 or 1. It is a special case of a counting
semaphore with a maximum count of 1.
Characteristics:
• Values: The semaphore can either be in the "locked" state (0) or "unlocked" state (1).
• Usage: Often used to implement mutual exclusion (mutex) in concurrent programming, ensuring
that only one process or thread can access a critical section at a time.
Operations:
• P Operation (Wait/Acquire/Proberen):
• Action: Decreases the value of the semaphore.
• Behavior: If the value is 1, it changes to 0, and the process or thread gains access. If the value is 0, the
process or thread is blocked and must wait until the semaphore is released.
• V Operation (Signal/Release/Verhogen):
• Action: Increases the value of the semaphore.
• Behavior: If the value is 0, it changes to 1, potentially unblocking a waiting process or thread. If the value is
1, it remains unchanged.
2. Counting Semaphore
Definition: A counting semaphore can take a range of non-negative integer values. It is used
to manage access to a finite number of resources.
Characteristics:
• Values: The semaphore can have any non-negative integer value, representing the number
of available resources or slots.
• Usage: Used to control access to a pool of identical resources, such as a set of printers,
database connections, or seats in a room.
Operations:
• P Operation (Wait/Acquire/Proberen):
• Action: Decreases the value of the semaphore.
• Behavior: If the value is greater than 0, it decreases by 1, indicating a resource has been allocated. If
the value is 0, the process or thread is blocked and must wait until a resource becomes available.
• V Operation (Signal/Release/Verhogen):
• Action: Increases the value of the semaphore.
• Behavior: If the value was less than the maximum, it increases by 1, indicating a resource has been
released. This may unblock a waiting process or thread.
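A minimal sketch of a counting semaphore guarding a pool of three identical resources (e.g., database connections); the pool size and thread count are illustrative:

```c
/* Compile with: cc pool.c -pthread */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t pool;                       /* counts free connections */

void *worker(void *arg) {
    sem_wait(&pool);              /* P: acquire a connection; block if none free */
    printf("thread %ld using a connection\n", (long)arg);
    sem_post(&pool);              /* V: return the connection to the pool */
    return NULL;
}

int main(void) {
    sem_init(&pool, 0, 3);        /* three resources available initially */
    pthread_t t[5];
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&pool);
    return 0;
}
```

At most three of the five threads can hold a connection at once; the other two block in sem_wait() until a connection is returned.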
SYNCHRONIZATION AND QUEUING IMPLEMENTATION OF SEMAPHORES
•Synchronization:
•Definition: Semaphores are used to synchronize processes or threads, controlling their access to
shared resources. They ensure that processes are executed in a coordinated manner, allowing orderly
access and preventing conflicts or concurrent access issues.
•The queuing implementation of semaphores is a method used to manage and coordinate processes or
threads that are waiting to acquire or release a semaphore. This implementation ensures that processes
are handled in an orderly and fair manner, typically using a first-come, first-served (FIFO) queue.
•Waiting Queue:
•When a process cannot proceed because the semaphore value is not favorable (e.g., it's zero), the process is
blocked and added to a queue.
•FIFO Order:
•Processes are managed in a First-In-First-Out (FIFO) order. The process that arrives first is the first to be
granted access when the semaphore value becomes positive.
•Blocking and Waking:
•Blocking: When a process performs a P operation (wait) and the semaphore value is zero or less, it is
placed in the waiting queue and blocked.
•Waking: When a process performs a V operation (signal), the semaphore value is incremented, and if
there are processes in the queue, one is removed and allowed to proceed.
•Fairness:
•This approach ensures that processes are granted access to resources based on the order they arrived,
preventing starvation and promoting fairness.
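The classic textbook form of this queuing implementation can be sketched as C-style pseudocode. This is illustrative, not runnable: enqueue(), dequeue(), block(), and wakeup() are hypothetical stand-ins for the kernel's queue and suspend/resume primitives, and this version uses the convention in which the value may go negative, its magnitude counting the blocked waiters:

```c
typedef struct {
    int value;
    struct process_queue *q;   /* FIFO queue of waiting processes */
} semaphore;

void P(semaphore *s) {         /* wait */
    s->value--;
    if (s->value < 0) {
        enqueue(s->q, current_process);  /* add the caller to the queue */
        block();                         /* suspend the caller */
    }
}

void V(semaphore *s) {         /* signal */
    s->value++;
    if (s->value <= 0) {
        struct process *p = dequeue(s->q);  /* FIFO: first arrival first */
        wakeup(p);                          /* resume it */
    }
}
```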
PRODUCER CONSUMER USING SEMAPHORE
•Shared Buffer:
•A bounded buffer (or circular buffer) is used to store items produced by the
producer and consumed by the consumer. It has a fixed size.
•Semaphores Used:
•empty Semaphore: Counts the number of empty slots in the buffer. Initialized to
the buffer size.
•full Semaphore: Counts the number of filled slots in the buffer. Initialized to 0.
•mutex Semaphore: Ensures mutual exclusion when accessing the buffer.
Initialized to 1.
Producer Process:
•Wait Operation on empty: The producer performs a P operation (wait) on the empty
semaphore before adding an item to the buffer. If no empty slots are available, the
producer is blocked.
•Wait Operation on mutex: The producer performs a P operation (wait) on the mutex
semaphore to gain exclusive access to the buffer.
•Add Item to Buffer: The producer places the new item in the buffer at the in index.
•Signal Operation on mutex: The producer performs a V operation (signal) on the mutex
semaphore to release access to the buffer.
•Signal Operation on full: The producer performs a V operation (signal) on the full
semaphore to indicate that there is now a new item available in the buffer.
Consumer Process:
•Wait Operation on full: The consumer performs a P operation (wait) on the full
semaphore before removing an item from the buffer. If no items are available,
the consumer is blocked.
•Wait Operation on mutex: The consumer performs a P operation (wait) on the mutex
semaphore to gain exclusive access to the buffer.
•Remove Item from Buffer: The consumer removes an item from the buffer at the out index.
•Signal Operation on mutex: The consumer performs a V operation (signal) on the mutex
semaphore to release access to the buffer.
•Signal Operation on empty: The consumer performs a V operation (signal) on the empty
semaphore to indicate that a slot in the buffer has been freed.
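Putting the three semaphores together, here is a minimal runnable sketch with POSIX threads; BUFFER_SIZE, ITEMS, and the produced values are illustrative:

```c
/* Compile with: cc prodcons.c -pthread */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFFER_SIZE 5
#define ITEMS 10

int buffer[BUFFER_SIZE];
int in = 0, out = 0;      /* circular-buffer indices */

sem_t empty;              /* counts empty slots, initialized to BUFFER_SIZE */
sem_t full;               /* counts filled slots, initialized to 0 */
sem_t mutex;              /* mutual exclusion on the buffer, initialized to 1 */

void *producer(void *arg) {
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&empty);                 /* P(empty): wait for a free slot */
        sem_wait(&mutex);                 /* P(mutex): lock the buffer */
        buffer[in] = i;                   /* add item at `in` */
        in = (in + 1) % BUFFER_SIZE;      /* wrap around if necessary */
        sem_post(&mutex);                 /* V(mutex): release the buffer */
        sem_post(&full);                  /* V(full): an item is available */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full);                  /* P(full): wait for an item */
        sem_wait(&mutex);
        int item = buffer[out];           /* remove item at `out` */
        out = (out + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&empty);                 /* V(empty): a slot was freed */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty, 0, BUFFER_SIZE);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```

Waiting on empty/full before mutex matters: taking mutex first and then blocking on a counting semaphore while holding it would deadlock the other party.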
CONDITIONS FOR DEADLOCK
MUTUAL EXCLUSION
•Definition: This condition refers to the fact that resources involved in the deadlock
must be non-shareable, meaning that only one process can use a resource at a
time. If a resource is held by one process, other processes requesting that resource
must wait.
•Example: Imagine a printer. If one process is printing, another process must wait
until the first process releases the printer before it can start printing. The printer can
only be used by one process at a time, and this exclusivity creates the potential for
deadlock.
•Key Point: If resources were shareable, deadlock could be avoided, as multiple
processes could access the same resource simultaneously.
HOLD AND WAIT
•Definition: This condition occurs when a process is holding at least one resource
and is waiting to acquire additional resources that are currently being held by other
processes.
•Example: Process P1 holds Resource A and waits for Resource B, while Process
P2 holds Resource B and waits for Resource A. Both processes hold resources
and are waiting for others, which creates the deadlock.
•Key Point: If processes were required to request and receive all the resources
they need at once, deadlock could be avoided, since no process would be able to
hold a resource while waiting for more.
NO PREEMPTION
• Definition: This condition means that resources cannot be forcibly taken
away (preempted) from a process once they are allocated. The process
must release the resource voluntarily after it has completed its task.
• Example: Suppose a process P1 has acquired a lock on Resource A. Even if
another process P2 needs Resource A to proceed, the operating system
cannot forcibly take it away from P1. Instead, P2 has to wait until P1
voluntarily releases the resource, possibly leading to deadlock.
• Key Point: If preemption were allowed, the operating system could forcibly
take resources from waiting processes and assign them to other processes,
thus breaking the deadlock cycle.
CIRCULAR WAIT
•Definition: This condition happens when there is a circular chain of processes, with
each process waiting for a resource that the next process in the chain holds. In such a
scenario, no process can proceed because each one is waiting on the next.
•Example: Consider three processes P1, P2, and P3:
•P1 is waiting for a resource held by P2.
•P2 is waiting for a resource held by P3.
•P3 is waiting for a resource held by P1.
•This circular dependency means none of the processes can proceed, leading to
deadlock.
•Key Point: Breaking the circular chain of waiting processes would prevent deadlock.
DEADLOCK HANDLING
Preventing Mutual Exclusion
• Concept: Mutual exclusion ensures that only one process can access a
resource at a time. Deadlock prevention removes or limits mutual exclusion.
• How to Prevent:
• Shareable Resources: If a resource can be shared between multiple processes
(e.g., read-only files or databases), mutual exclusion is no longer required. This
can prevent deadlock for certain types of resources.
• Non-shareable Resources: Some resources, like printers or CPU cycles, cannot be
shared, so mutual exclusion is necessary for these types of resources. In such
cases, the system cannot prevent deadlock by attacking mutual exclusion, but it
can work on the other conditions.
• Drawback: Not all resources can be made shareable, especially physical
devices like printers, which need exclusive access to prevent interference.
Preventing Hold and Wait
• Concept: In the hold and wait condition, a process holds at least one
resource while waiting for other resources. Deadlock prevention can
remove this condition by ensuring that processes do not hold resources
while waiting.
• How to Prevent:
• Request All Resources at Once: Require that processes request all the resources
they need at the start. If they cannot get all the resources, they must release any
they are holding and start again.
• Resource Allocation Protocol: A process may only request resources when it is not
holding any. Alternatively, processes may only request resources when they
already hold some but must release all held resources if their request for more is
denied.
• Drawback: Requiring processes to request all resources at once can lead to
poor resource utilization. Processes may unnecessarily hold resources they
don't need immediately, leading to resource wastage or delays in other
processes.
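As an illustration of the "release everything and retry" protocol described above, here is a minimal pthread sketch; the lock names are illustrative. The thread never holds one resource while blocking on another, which removes the hold-and-wait condition:

```c
/* Compile with: cc holdwait.c -pthread */
#include <pthread.h>
#include <sched.h>

pthread_mutex_t resA = PTHREAD_MUTEX_INITIALIZER;  /* resource A */
pthread_mutex_t resB = PTHREAD_MUTEX_INITIALIZER;  /* resource B */

/* Acquire both locks without ever holding one while blocking on the other. */
void acquire_both(void) {
    for (;;) {
        pthread_mutex_lock(&resA);
        if (pthread_mutex_trylock(&resB) == 0)
            return;                     /* got both: proceed */
        pthread_mutex_unlock(&resA);    /* release what we hold ... */
        sched_yield();                  /* ... back off, then retry */
    }
}

int main(void) {
    acquire_both();
    /* ... use both resources ... */
    pthread_mutex_unlock(&resB);
    pthread_mutex_unlock(&resA);
    return 0;
}
```

The retry loop is exactly the drawback noted above: the thread may spin and release repeatedly, wasting work and delaying progress.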
Preventing No Preemption
• Concept: In the no preemption condition, once a process acquires a resource,
the resource cannot be forcibly taken away. Deadlock prevention removes this
condition by allowing preemption.
• How to Prevent:
• Preempt Resources: If a process is holding a resource and requests another resource
that is not available, the operating system can preempt (take away) the held resources
and allocate them to another process. The preempted process is temporarily
suspended, and its resources are given to other waiting processes.
• Preemptable Resources: Some resources can be preempted (e.g., CPU cycles or
memory pages), while others (e.g., printers) cannot be safely preempted. Preemption
works well when the resources can be saved and restored without causing errors.
• Drawback: Not all resources can be preempted safely. For example, if a process
is in the middle of printing, preempting the printer could lead to incomplete or
corrupted output. Preemption also adds overhead, as the system needs to track
and manage the preempted resources.
Preventing Circular Wait
• Concept: Circular wait is the condition where a circular chain of processes
exists, each process waiting for a resource held by the next. Deadlock
prevention breaks this circular chain by ensuring that no such cycle can form.
• How to Prevent:
• Impose Resource Ordering: Assign a numerical ordering to all resources.
Processes can only request resources in an increasing order (i.e., they can only
request a resource with a higher number than the one they currently hold). This
prevents a cycle from forming because processes cannot go back and request
lower-numbered resources.
• Drawback: This strategy can limit the flexibility of resource allocation. Processes
may need to request resources in a specific order, which might not align with
their actual resource needs. This can also lead to inefficiency or
underutilization.
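A minimal sketch of resource ordering with two pthread mutexes, where every thread must lock the lower-numbered resource first (the numbering is illustrative):

```c
/* Compile with: cc ordering.c -pthread */
#include <pthread.h>

pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;  /* resource #1 */
pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;  /* resource #2 */

/* Both threads follow the same global order (#1 before #2), so the
   circular wait "A holds #1 and wants #2, B holds #2 and wants #1"
   can never form. */
void *thread_a(void *arg) {
    pthread_mutex_lock(&lock1);
    pthread_mutex_lock(&lock2);
    /* ... use both resources ... */
    pthread_mutex_unlock(&lock2);
    pthread_mutex_unlock(&lock1);
    return NULL;
}

void *thread_b(void *arg) {
    pthread_mutex_lock(&lock1);   /* same order, even if B needs #2 first */
    pthread_mutex_lock(&lock2);
    /* ... */
    pthread_mutex_unlock(&lock2);
    pthread_mutex_unlock(&lock1);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, thread_a, NULL);
    pthread_create(&b, NULL, thread_b, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```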
Summary of Deadlock Prevention
• Mutual Exclusion: make resources shareable where possible; not feasible for devices
needing exclusive access.
• Hold and Wait: require all resources to be requested at once; can cause poor utilization.
• No Preemption: allow held resources to be preempted; not safe for all resource types.
• Circular Wait: impose a global ordering on resource requests; limits allocation flexibility.
Banker's Algorithm
The Banker’s Algorithm is the most commonly used deadlock avoidance
algorithm. It was proposed by Edsger Dijkstra and is based on the idea of a
banker who makes loans to customers. The banker must ensure that the
total amount of loans given out does not exceed a safe limit, where each
customer (process) can eventually pay back the loan (complete
execution).
Steps of the Banker's Algorithm:
1.Data Structures:
•Available[]: Number of available instances for each resource type.
•Max[][]: Maximum demand of each process for each resource.
•Allocation[][]: Current allocation of each resource type for each process.
•Need[][]: Remaining resource needs of each process (calculated as Max[][] -
Allocation[][]).
2.Safety Check:
•When a process requests resources, the system pretends to allocate those resources and
checks if it results in a safe state.
•If it does result in a safe state, the resources are allocated; otherwise, the request is
denied.
3.Request Handling:
•For a process Pi requesting resources Request_i:
•If Request_i ≤ Need_i and Request_i ≤ Available, the system allocates the resources
temporarily.
•The system then checks if the resulting state is safe. If it is safe, the allocation is
made permanent. If not, the system rolls back the temporary allocation.
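A minimal sketch of the safety check from step 2, for N processes and M resource types; the array names mirror Available[], Allocation[][], and Need[][] above, and the sizes are illustrative:

```c
#include <stdbool.h>

#define N 5   /* processes */
#define M 3   /* resource types */

bool is_safe(int avail[M], int alloc[N][M], int need[N][M]) {
    int work[M];                       /* resources currently free */
    bool finish[N] = { false };
    for (int j = 0; j < M; j++) work[j] = avail[j];

    for (int done = 0; done < N; ) {
        bool progressed = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool can_run = true;       /* Need_i <= Work ? */
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {             /* pretend Pi runs to completion */
                for (int j = 0; j < M; j++)
                    work[j] += alloc[i][j];   /* it returns its allocation */
                finish[i] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed) return false; /* no process can finish: unsafe */
    }
    return true;                       /* a safe sequence exists */
}
```

In step 3, a request is made permanent only if is_safe() still returns true after the tentative allocation; otherwise the allocation is rolled back.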
Pros of the Banker's Algorithm:
• It can help the system avoid deadlock altogether by dynamically checking
each request.
Cons of the Banker's Algorithm:
• Requires processes to specify their maximum resource needs in advance,
which may not always be possible or efficient.
• Increases computational overhead due to frequent checks for safe states.
• Not suitable for systems where the resource requirements change
dynamically or are unpredictable.
DEADLOCK RECOVERY
Deadlock Recovery is a method used to regain system functionality after a
deadlock has occurred. Unlike deadlock prevention and avoidance, which
aim to prevent deadlocks from happening, deadlock recovery is
concerned with detecting a deadlock after it has occurred and then
taking action to resolve it.
A. Process Termination (Kill the Process)
• Abort All Deadlocked Processes:
• All processes involved in the deadlock are terminated. This is a brute-force
method that guarantees the deadlock will be resolved, but it can result in
significant data loss.
• Abort One Process at a Time:
• Abort one process at a time from the deadlocked processes until the deadlock is
resolved. The process to abort can be selected based on criteria such as priority,
process runtime, or resources used.
B. Resource Preemption
Preempt Resources: Temporarily take resources away from one process and
allocate them to another to break the deadlock.
Rollback: Roll back the process to a previous safe state, then re-execute it from
that state, ensuring it does not request the same resource set that led to the
deadlock.
** Selecting a process for preemption is not easy; several factors should be kept in mind:
• The process holding the minimum number of resources
• The process that has run for the shortest duration of time
• The process with the largest remaining runtime
RESOURCE REQUEST AND RESOURCE RELEASE
Resource Request and Resource Release are two fundamental operations
involved in managing resources in an operating system.
They are critical for controlling access to shared resources, ensuring smooth
process execution, and avoiding issues like deadlock.
Resource Request
•Definition: When a process needs to use a resource (like CPU time, memory, or a
device), it sends a request to the operating system to allocate that resource.
•Purpose: The resource request ensures that processes do not use resources they
are not allowed to access and helps the operating system keep track of resource
usage.
•Process:
1.The process identifies the resources it needs (e.g., a file or a printer).
2.It sends a request to the operating system for those resources.
3.The operating system checks the availability of the requested resources:
•If available, it allocates them to the process.
•If not available, the process is put in a waiting state until the resources are
free.
•Example:
•A word processing program (process) wants to print a document. It sends a request
for the printer (resource) to the operating system.
Resource Release
•Definition: When a process no longer needs a resource, it releases it back to the
operating system so that it can be used by other processes.
•Purpose: Releasing resources helps in resource management, prevents resource
leaks, and ensures that resources are available for other processes that need them.
•Process:
1.The process informs the operating system that it no longer needs the resource.
2.The operating system marks the resource as available.
3.If there are other processes waiting for the resource, one of them can now acquire
it.
•Example:
•After printing the document, the word processing program releases the printer,
making it available for other processes that might need it.