
UNIT 4: OPERATING SYSTEM
INTER PROCESS COMMUNICATION AND SYNCHRONIZATION
TYPES OF PROCESS

INDEPENDENT (NON-COOPERATING)
 They cannot affect or be affected by the execution of other processes in the system.
 They never share any resources.

COOPERATING
 They can affect or be affected by the execution of other processes in the system.
 They must share some resources such as variables, files, pieces of code, etc.
Inter-Process Communication (IPC)

 IPC refers to the methods and mechanisms that allow processes to communicate
with each other. Since processes run independently and can be executed
concurrently, they often need to exchange information to coordinate their actions
or share data. IPC facilitates this communication in various ways:
1. Message Passing: Processes send and receive messages to exchange information.
This can be done using various protocols and methods such as pipes, queues, or
message-passing systems.
2. Shared Memory: Processes can access a common memory region where they can
read from and write to. This approach allows fast communication but requires
synchronization to avoid conflicts.
3. Semaphores and Mutexes: These are synchronization tools that can be used to
manage access to shared resources and prevent race conditions.
4. Sockets: Used for communication between processes on different machines over a
network.
5. Files: Processes can read from and write to files as a form of communication.
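
As a minimal illustration of (1), the sketch below passes a message from a parent process to its child through a POSIX pipe. It is a sketch assuming a Unix-like system; the message text is arbitrary and error handling is omitted.

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                      /* fd[0] = read end, fd[1] = write end */
    char buf[32];

    if (pipe(fd) == -1) return 1;   /* create the pipe before forking */

    if (fork() == 0) {              /* child: the receiver */
        close(fd[1]);               /* close the unused write end */
        read(fd[0], buf, sizeof buf);
        printf("child received: %s\n", buf);
        close(fd[0]);
    } else {                        /* parent: the sender */
        close(fd[0]);               /* close the unused read end */
        write(fd[1], "hello", 6);   /* 6 bytes includes the '\0' */
        close(fd[1]);
        wait(NULL);                 /* reap the child */
    }
    return 0;
}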
MODELS OF IPC
SHARED MEMORY
 Shared memory is an inter-process communication (IPC) method that
allows multiple processes to access and communicate by sharing a
common area of memory.
 It is one of the fastest IPC mechanisms because processes can directly read
from and write to the same memory space without copying data.
 However, proper synchronization (using semaphores or mutexes) is required
to avoid conflicts when multiple processes try to access the memory at the
same time. It's commonly used in high-performance applications for
efficient data sharing between processes.
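
A minimal POSIX shared-memory sketch (the object name "/demo_shm" and the 4096-byte size are invented; error handling omitted). It only shows that both processes see the same bytes without copying; the sleep() is a crude stand-in for the real synchronization discussed above.

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <stdio.h>

int main(void) {
    /* create (or open) a named shared-memory object and size it */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);

    /* map it; any process mapping the same name sees the same bytes */
    char *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    if (fork() == 0) {                 /* child writes directly into the region */
        sprintf(mem, "written by child");
    } else {
        sleep(1);                      /* crude stand-in for a proper semaphore */
        printf("parent reads: %s\n", mem);
        shm_unlink("/demo_shm");       /* remove the object when done */
    }
    return 0;
}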
MESSAGE PASSING
 Message passing is a method of inter-process communication (IPC) where
processes exchange data by sending and receiving messages. Unlike shared
memory, in message passing, processes do not share memory space but instead
communicate by passing discrete messages through the operating system's
messaging channels (like pipes, message queues, or sockets).
 OS must implement two functions: send() and receive() for message passing
 Key Points:
• No shared memory: Processes exchange data by sending messages instead of
sharing memory.
• Synchronization built-in: The sending and receiving of messages are usually
synchronized, making it easier to avoid conflicts.
• Safe for distributed systems: It can be used across different machines over a
network, unlike shared memory, which is limited to processes on the same
machine.
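
The send()/receive() pair mentioned above maps onto POSIX message queues as mq_send()/mq_receive(); a minimal single-process sketch (the queue name "/demo_mq" is invented, error handling is omitted, and the 8192-byte buffer matches the common default message size).

#include <fcntl.h>      /* O_CREAT, O_RDWR */
#include <mqueue.h>
#include <stdio.h>

int main(void) {
    /* open (creating if needed) a kernel-managed message queue */
    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, NULL);
    char buf[8192];                         /* must be >= the queue's mq_msgsize */

    mq_send(mq, "hello", 6, 0);             /* send(): copies the message into the queue */
    mq_receive(mq, buf, sizeof buf, NULL);  /* receive(): blocks while the queue is empty */
    printf("got: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_mq");                  /* remove the queue name */
    return 0;
}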
MESSAGE PASSING THROUGH
COMMUNICATION LINKS
 DIRECT COMMUNICATION
 The sender specifies the name of the process to which it wants to send the message.
 The receiver specifies the name of the sender from which it wants to receive the
message.

P ↔ Q (a direct communication link between the two processes)

 INDIRECT COMMUNICATION
 No need to specify the name of the sender or receiver; messages are sent to and
received from a shared mailbox (port)
Synchronization
 Synchronization is about coordinating the execution of processes to ensure
that they operate correctly when accessing shared resources. Without
proper synchronization, concurrent processes may interfere with each
other, leading to issues like data corruption or unexpected behavior.
Synchronization ensures:
1. Mutual Exclusion: Only one process can access a critical section of code or
resource at a time. This prevents race conditions where multiple processes
might modify data simultaneously.
2. Deadlock Prevention: Avoiding situations where processes are stuck waiting
for each other indefinitely.
3. Starvation Prevention: Ensuring that every process gets a chance to access
resources and is not perpetually denied.
4. Ordering: Coordinating the order of operations so that processes work
together correctly, following the intended sequence.
Need for IPC and Synchronization

1. Resource Sharing: Multiple processes often need to access and modify
shared resources, such as files or variables. IPC allows them to share these
resources efficiently and safely.
2. Coordination: Processes that need to work together or depend on each
other's results must communicate and synchronize their actions to ensure
proper sequencing and consistency.
3. Performance Optimization: Proper IPC and synchronization can improve
performance by allowing parallel execution and avoiding bottlenecks or
contention for resources.
4. Deadlock and Race Condition Prevention: Without synchronization,
processes may enter deadlock or experience race conditions, leading to
incorrect results or system crashes.
CRITICAL SECTION and MUTUAL EXCLUSION
 1. Critical Section:
 A critical section is a part of a program (code) where a process accesses shared resources,
such as variables, files, or memory. Since multiple processes might attempt to access these
shared resources simultaneously, it is important to control their access to avoid data
inconsistencies or corruption. So the critical section problem means designing a way for
cooperative processes to access shared resources without creating data inconsistencies.
 If one process is in CS, then no other process can enter its CS.
 2. Mutual Exclusion:
 Mutual exclusion is a principle used to prevent multiple processes from entering their critical
sections at the same time. It ensures that only one process is allowed to execute in its critical
section at a time, thereby protecting shared resources from being accessed concurrently by
multiple processes.
 Real-life Analogy:
 Imagine you have a single bathroom (shared resource) in a house, and multiple people
(processes) want to use it. If everyone tries to enter the bathroom at the same time, there
would be chaos. To avoid this, there’s a lock on the door (mutual exclusion) which ensures
that only one person can use the bathroom (enter the critical section) at a time. Once the
person is done, they unlock the door so someone else can use it.
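
In code, the bathroom lock is a mutex. A minimal pthreads sketch (the shared counter and iteration count are illustrative): two threads increment a shared counter, and the lock/unlock pair around the critical section guarantees the final value.

#include <pthread.h>
#include <stdio.h>

long counter = 0;                           /* shared resource */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);          /* "lock the bathroom door" */
        counter++;                          /* critical section */
        pthread_mutex_unlock(&lock);        /* "unlock so the next person can enter" */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);     /* always 200000 with the mutex */
    return 0;
}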
SHARED RESOURCES
 Shared resources refer to any data, memory, or resources that multiple processes or
threads access concurrently. When processes or threads share resources, they need to
coordinate their access to ensure that the resource remains consistent and that the
operations performed on it are safe and correct.
 Examples of Shared Resources:
• Variables: Global or static variables that multiple threads or processes can read or write to.
• Memory: Areas of memory (like arrays or buffers) that different threads or processes access.
• Files: Files that multiple processes may read from or write to.
• Devices: Hardware devices (like printers or disk drives) that can be accessed by multiple
processes.
• Databases: Database tables or records accessed by multiple processes or threads.
 Consider a single printer in an office (the shared resource). If multiple employees
(processes) send print jobs (requests) to the printer simultaneously without any
coordination, the printer might become overloaded or print jobs might get mixed up.
To manage this, there’s usually a print queue (synchronization mechanism) that ensures
each job is processed one at a time.
Why Mutual Exclusion is Needed

1. Data Integrity: When multiple processes or threads attempt to read from or
write to a shared resource simultaneously, it can lead to data corruption or
inconsistent results. Mutual exclusion ensures that only one process or
thread can access the resource at a time, maintaining data integrity.
2. Preventing Race Conditions: A race condition occurs when the outcome of
a process depends on the timing or order of other uncontrollable events.
Mutual exclusion helps avoid race conditions by ensuring that processes do
not simultaneously access critical sections.
3. Avoiding Inconsistent States: Shared resources may have constraints or
expected states. Concurrent access without mutual exclusion could lead
to an inconsistent or invalid state of the resource.
CRITICAL SECTION
 A critical section is a segment of code or a part of a system where shared resources
(such as variables, files, or hardware) are accessed or modified. Because multiple
processes or threads might attempt to access these shared resources concurrently,
the critical section must be protected to ensure data integrity and consistency.
 Key Points about Critical Sections
1. Shared Resources: A critical section typically involves shared resources that need to be
accessed by multiple processes or threads. Without proper management, concurrent
access to these resources can lead to problems such as data corruption or inconsistent
results.
2. Mutual Exclusion: To prevent interference and ensure that only one process or thread can
access the shared resource at a time, mutual exclusion mechanisms are employed. These
mechanisms ensure that when one process or thread is executing within the critical section,
others are excluded from entering.
3. Synchronization: Critical sections are managed using synchronization techniques such as
mutexes, semaphores, or locks. These tools ensure that only one process or thread can
execute within the critical section at any given time, preventing conflicts.
4. Problem Avoidance: Without proper handling of critical sections, issues like race conditions,
deadlocks, and data inconsistency can occur. Ensuring that critical sections are well-
managed helps maintain system stability and correctness.
Example of a Critical Section
 Scenario: Consider a bank's online system where multiple users can withdraw
money from their accounts simultaneously. The system needs to handle these
transactions carefully to ensure that no two users can withdraw more money than
what is available in the account.
 Critical Section: The portion of code responsible for checking the account
balance and updating it after a withdrawal is a critical section. Only one
transaction should be processed at a time to ensure that the account balance
remains accurate and no overdraft occurs.
 Management: To manage this critical section, synchronization tools like locks or
semaphores are used. When a transaction is being processed, other transactions
must wait until the critical section is free. This ensures that all transactions are
handled correctly and consistently.
 In summary, a critical section is a crucial part of a program where access to
shared resources must be carefully controlled to avoid conflicts and ensure
accurate operation.
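
A hedged sketch of that withdrawal critical section in C (the account structure and helper are invented for illustration). Because the balance check and the update happen under one lock, two concurrent withdrawals can never both see the same old balance.

#include <pthread.h>

typedef struct {
    int balance;
    pthread_mutex_t lock;    /* protects balance; pthread_mutex_init() before first use */
} account_t;

/* Returns 1 on success, 0 if funds are insufficient. */
int withdraw(account_t *a, int amount) {
    int ok = 0;
    pthread_mutex_lock(&a->lock);           /* enter the critical section */
    if (a->balance >= amount) {             /* check the balance ...            */
        a->balance -= amount;               /* ... and update it, atomically together */
        ok = 1;
    }
    pthread_mutex_unlock(&a->lock);         /* leave the critical section */
    return ok;
}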
CRITICAL SECTION PROBLEM
WHAT IS CRITICAL SECTION PROBLEM
(how to implement CS properly)
It has to satisfy 3 conditions
1. Mutual exclusion
2. Progress
3. Bounded wait

 MUTUAL EXCLUSION
 Only 1 process may be inside the CS at a time. If mutual exclusion fails, a race condition arises.
RACE CONDITION
Initially shared=10

Process 1            Process 2
int X = shared       int Y = shared
X++                  Y--
sleep(1)             sleep(1)
shared = X           shared = Y

 If the processes were executed sequentially there would be no issue; however, because of
the interrupt (sleep), the reads and writes interleave, and the final output may be 9 or 11
(ideally it should be 10).
 This is called a RACE CONDITION.
 When two or more processes want to access the same (shared) variable at the same time,
the final value of the variable depends on which process writes it last. This problem is
known as a race condition.
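
The table above can be reproduced directly with two threads; a sketch (pthreads assumed). The sleep() forces the bad interleaving, so the program prints 9 or 11 rather than the ideal 10.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

int shared = 10;

void *inc(void *arg) {
    int x = shared;     /* read */
    x++;                /* modify */
    sleep(1);           /* the interrupt point from the slide */
    shared = x;         /* write back a stale value */
    return NULL;
}

void *dec(void *arg) {
    int y = shared;
    y--;
    sleep(1);
    shared = y;
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, inc, NULL);
    pthread_create(&t2, NULL, dec, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);   /* 9 or 11, not 10 */
    return 0;
}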
MORE ON RACE CONDITION
 A race condition occurs when two or more processes or threads try to access and
modify shared resources (such as memory or variables) simultaneously, and the final
outcome depends on the order in which the processes execute. This can lead to
unpredictable or incorrect behavior in programs because the processes "race" to
complete, and the timing of their execution affects the result.
 Concept: Race conditions are a concept related to Process Synchronization in
Operating Systems.
• Why it happens: It occurs in multi-threaded or multi-process environments when multiple
threads/processes don't have proper synchronization while accessing shared resources.
• Critical Section: A race condition typically happens when processes access the critical
section (shared resources) without proper controls like locks or semaphores, leading to
inconsistent or unexpected results.
 How to Prevent Race Conditions:
• Synchronization Mechanisms: Use techniques like locks, mutexes, semaphores, or monitors
to ensure that only one process can access the critical section at a time, preventing
concurrent modifications of shared resources.
PROGRESS
• Condition: If no process is executing in its critical section and there are processes that wish
to enter their critical sections, then the selection of the next process to enter the critical
section must not be postponed indefinitely.
• Explanation: The system should make progress and not get stuck. In other words, if a process
is waiting to enter the critical section, and no other process is in the critical section, the
system should eventually allow the waiting process to enter.

• WILLINGNESS: the decision about who will enter the CS next must consider only those
processes that want to enter the CS, and this decision must be made in finite time.
• FINITE TIME: the next process must enter the CS within a finite time. If this condition is not
fulfilled, deadlock can occur.

• Progress: Ensures that the system makes progress and avoids deadlock by not indefinitely
postponing requests.
BOUNDED WAIT

 Condition: There must be a limit on the number of times other processes can enter their
critical sections after a process has made a request to enter its critical section and before
the request is granted.
 Explanation: A process should not be waiting forever. There should be a bound on how
long a process might have to wait before it gets access to its critical section.
 A process requesting the CS must eventually get its turn.
 Bounded wait is mainly concerned with the number of processes, not time.
 If bounded wait fails, starvation may occur.
 Starvation occurs when a process never gets the chance to enter its critical
section or access a resource because other processes keep getting ahead of it
indefinitely. This could happen if there’s no upper bound on how long a process
might wait for access, meaning that a process can be indefinitely delayed while
other processes continue to access the resource.
HOW TO IMPLEMENT MUTUAL EXCLUSION

•Mutex (Mutual Exclusion Object):


•A mutex is a synchronization primitive that is used to protect a critical section. Only one thread can acquire
the mutex at a time, ensuring exclusive access to the critical section.
•How it Works: When a thread wants to enter the critical section, it must first acquire the mutex. If another
thread already holds the mutex, the requesting thread must wait. Once the thread completes its task, it
releases the mutex, allowing other threads to acquire it.
•Semaphores:
•Description: Semaphores are another synchronization tool that can be used to control access to a critical
section. They use a counter to manage access.
•How it Works: A binary semaphore can be used like a mutex. When a thread enters
the critical section, it decrements the semaphore. When the thread exits, it increments the semaphore. Only
one thread can decrement the semaphore to zero at a time, ensuring mutual exclusion.
•Locks:
•Description: Locks are similar to mutexes but may offer additional features like read-write locking.
•How it Works: Locks provide mechanisms to ensure mutual exclusion, where only one thread can hold the
lock and access the critical section.
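
A sketch of the binary-semaphore approach with POSIX unnamed semaphores (the shared counter and thread bodies are illustrative): initializing the semaphore to 1 makes sem_wait/sem_post behave like lock/unlock.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t s;            /* binary semaphore: initialized to 1 below */
int shared = 0;

void *worker(void *arg) {
    sem_wait(&s);   /* P: decrement; blocks while the value is 0 */
    shared++;       /* critical section */
    sem_post(&s);   /* V: increment; wakes one waiter if any */
    return NULL;
}

int main(void) {
    sem_init(&s, 0, 1);   /* 0 = shared between threads, initial value 1 */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);
    sem_destroy(&s);
    return 0;
}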
PRODUCER CONSUMER PROBLEM
 The Producer-Consumer Problem is a classic synchronization problem in computer
science that involves two types of processes—producers and consumers—that share a
common, finite-size buffer.
 General Concept: It involves two processes or threads—producers and consumers—that
share a common resource (buffer). Producers add items to the buffer, and consumers
take items from the buffer. The challenge is to ensure proper synchronization to avoid
issues like race conditions, deadlock, and starvation.
 Buffer Size: The buffer can be of any size, including unbounded. In the general version,
the buffer may not have a predefined limit.
Problem Description
• Producer: A process that generates data and puts it into a shared buffer.
• Consumer: A process that retrieves data from the buffer and processes it.
• Buffer: A shared storage area where producers place items and consumers take items from. The buffer has
a limited capacity.
 Key Challenges
1. Mutual Exclusion: Only one process should access the buffer at a time to avoid inconsistencies.
2. Buffer Capacity: The buffer can hold only a finite number of items. If the buffer is full, producers must wait
before adding more items. If the buffer is empty, consumers must wait before consuming items.
3. Synchronization: Producers and consumers must work together so that they don't interfere with each other
and the buffer's state is managed properly.
 Conditions for Solution
1. Mutual Exclusion: Ensures that only one process (either producer or consumer) can access the buffer at a
time.
2. Progress: If no producer is in the critical section and there are items to be consumed, then a consumer
must be able to consume.
3. Bounded Waiting: There must be a limit on the number of times other processes are allowed to access the
buffer before a waiting process gets its turn.
SOLUTION

•Initialization:
•Set up a shared buffer with a fixed size.
•Initialize pointers or indices to keep track of where to add and remove items (often referred to
as in and out indices).

•Producer Process:
•Produce an item.
•Check if the buffer is full. If it is, the producer must wait.
•Add the item to the buffer at the position indicated by the in index.
•Update the in index (wrap around if necessary).
•Signal that there is a new item available (conceptually).

•Consumer Process:
•Check if the buffer is empty. If it is, the consumer must wait.
•Remove an item from the buffer at the position indicated by the out index.
•Update the out index (wrap around if necessary).
•Signal that there is space available in the buffer (conceptually).
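
The in/out bookkeeping in the steps above is usually a circular buffer; a minimal sketch (N is an arbitrary size; the waiting and signalling steps are deliberately left out here because the semaphore solution later in the unit supplies them).

#define N 8
int buffer[N];
int in = 0, out = 0;        /* next slot to fill / next slot to empty */

void put(int item) {        /* producer side (assumes a free slot exists) */
    buffer[in] = item;
    in = (in + 1) % N;      /* wrap around at the end of the array */
}

int get(void) {             /* consumer side (assumes a filled slot exists) */
    int item = buffer[out];
    out = (out + 1) % N;
    return item;
}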
Bounded Buffer Problem
• Specific Definition: This is a specific instance of the Producer-Consumer Problem where
the buffer has a fixed, finite size. It emphasizes managing a limited-size buffer and
coordinating between producers and consumers under these constraints.
• Key Challenges: Same as the Producer-Consumer Problem but with the added
constraint of a buffer with a fixed capacity, which introduces specific issues related to
buffer fullness and emptiness.
 Key Similarities
1. Shared Resource: Both involve a shared buffer that must be managed correctly to prevent
issues.
2. Synchronization: Both problems require synchronization mechanisms to manage access to the
shared buffer.
3. Mutual Exclusion: Both need to ensure that only one process (producer or consumer) accesses
the buffer at a time.
 Key Differences
1. Buffer Size Constraint: The Bounded Buffer Problem specifically addresses the constraints and
challenges associated with a buffer of limited size, whereas the general Producer-Consumer
Problem may not specify the buffer size constraint.
2. Focus on Buffer Management: The Bounded Buffer Problem often emphasizes how to handle the
buffer when it reaches its maximum capacity or is empty, which is a critical aspect of managing
a finite-sized buffer.
SEMAPHORES
 A semaphore is a tool used in computer science to manage how multiple programs or processes
access shared resources, like memory or files, without causing conflicts.
 Semaphore Components
• Value: The semaphore maintains an integer value that indicates the number of available
resources or slots.
• Queue: Internally, semaphores maintain a queue of processes or threads that are waiting for
the semaphore to become available.
 Semaphore Operations
 P Operation (Wait/Proberen): Decrements the semaphore value. If the resulting value is
negative, the process or thread is blocked and added to the semaphore's queue until a
V operation releases it.
 V Operation (Signal/Verhogen): Increments the semaphore value. If there are processes or
threads waiting in the queue, one of them is unblocked and allowed to proceed.
Types of semaphores
 1. Binary Semaphore
 Definition: A binary semaphore can take only two values: 0 or 1. It is a special case of a counting
semaphore with a maximum count of 1.
 Characteristics:
• Values: The semaphore can either be in the "locked" state (0) or "unlocked" state (1).
• Usage: Often used to implement mutual exclusion (mutex) in concurrent programming, ensuring
that only one process or thread can access a critical section at a time.
 Operations:
• P Operation (Wait/Acquire/Proberen):
• Action: Decreases the value of the semaphore.
• Behavior: If the value is 1, it changes to 0, and the process or thread gains access. If the value is 0, the
process or thread is blocked and must wait until the semaphore is released.
• V Operation (Signal/Release/Verhogen):
• Action: Increases the value of the semaphore.
• Behavior: If the value is 0, it changes to 1, potentially unblocking a waiting process or thread. If the value is
1, it remains unchanged.
 2. Counting Semaphore
 Definition: A counting semaphore can take a range of non-negative integer values. It is used
to manage access to a finite number of resources.
 Characteristics:
• Values: The semaphore can have any non-negative integer value, representing the number
of available resources or slots.
• Usage: Used to control access to a pool of identical resources, such as a set of printers,
database connections, or seats in a room.
 Operations:
• P Operation (Wait/Acquire/Proberen):
• Action: Decreases the value of the semaphore.
• Behavior: If the value is greater than 0, it decreases by 1, indicating a resource has been allocated. If
the value is 0, the process or thread is blocked and must wait until a resource becomes available.

• V Operation (Signal/Release/Verhogen):
• Action: Increases the value of the semaphore.
• Behavior: If the value was less than the maximum, it increases by 1, indicating a resource has been
released. This may unblock a waiting process or thread.
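
A counting-semaphore sketch for such a pool (here, hypothetically, 3 database connections shared by 5 threads): at most 3 threads are inside the "using a connection" section at once.

#include <semaphore.h>
#include <pthread.h>
#include <unistd.h>
#include <stdio.h>

sem_t pool;                      /* counts free connections */

void *client(void *arg) {
    sem_wait(&pool);             /* P: take a connection; blocks when all 3 are busy */
    printf("thread %ld using a connection\n", (long)arg);
    sleep(1);                    /* pretend to do work */
    sem_post(&pool);             /* V: return the connection to the pool */
    return NULL;
}

int main(void) {
    sem_init(&pool, 0, 3);       /* 3 identical resources available */
    pthread_t t[5];
    for (long i = 0; i < 5; i++) pthread_create(&t[i], NULL, client, (void *)i);
    for (int i = 0; i < 5; i++)  pthread_join(t[i], NULL);
    sem_destroy(&pool);
    return 0;
}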
REAL LIFE ANALOGY

•Binary Semaphore (Mutex):


•Analogy: A single conference room where only one team can book it at a time.
•Operations:
•P Operation: Team requests the room. If available (value = 1), they book it (value = 0). If not,
they wait.
•V Operation: Team releases the room. Value returns to 1, and any waiting teams are notified.
•Counting Semaphore:
•Analogy: Multiple conference rooms available. Semaphore value represents the number of
rooms available.
•Operations:
•P Operation: Team requests a room. Semaphore value decrements by 1 as a room is
booked. If value is 0, they wait.
•V Operation: Team finishes and releases a room. Semaphore value increments by 1. A
waiting team is granted access if any are waiting.
BUSY WAIT IMPLEMENTATION
 Busy wait implementation is a method used in synchronization where a process continuously checks
for a condition to become true instead of being put to sleep or waiting in a queue. This approach
involves the process actively "busy waiting" for a resource or condition to be available.
 How Busy Waiting Works
 In busy waiting, a process repeatedly checks a condition or status flag to see if it can proceed. The process
remains active and continuously polls the condition without releasing the CPU or relinquishing control. This
can lead to inefficiencies, especially if the condition is not likely to change soon.
 Detailed Explanation
1. Condition Checking:
1. The process keeps checking a shared variable or flag to determine if it can proceed.
2. If the condition is not met, the process continues to loop and check the condition repeatedly.
2. No Blocking:
1. Unlike other synchronization methods where a process may be put to sleep or moved to a waiting queue, busy waiting
keeps the process in an active state.
2. This can lead to high CPU usage, as the process consumes CPU cycles without making progress.
3. Example Scenario:
1. Shared Resource Access: Suppose a process needs exclusive access to a critical section or resource protected by a
semaphore. Instead of being put to sleep while it waits, the process continuously checks if the semaphore indicates
availability.
 REAL LIFE: STANDING AT THE RESTAURANT DOOR AND REPEATEDLY CHECKING WHETHER A TABLE IS FREE
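
In code, busy waiting is a spin loop; a C11 test-and-set sketch. The acquiring thread never sleeps and never joins a queue; it simply burns CPU cycles until the flag is cleared.

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;

void acquire(void) {
    /* Spin: repeatedly test-and-set until we observe the flag clear.
       The CPU stays busy the whole time -- no blocking, no queue. */
    while (atomic_flag_test_and_set(&lock))
        ;                        /* busy wait */
}

void release(void) {
    atomic_flag_clear(&lock);    /* open the door for the next spinner */
}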
CHARACTERISTICS OF SEMAPHORES
•Atomic Operations:
•Definition: Semaphore operations, namely P (wait) and V (signal), are executed as atomic actions.
This means each operation is completed fully without interruption or interference from other
processes, ensuring consistent state changes and preventing race conditions.

•Synchronization:
•Definition: Semaphores are used to synchronize processes or threads, controlling their access to
shared resources. They ensure that processes are executed in a coordinated manner, allowing orderly
access and preventing conflicts or concurrent access issues.

•Blocking and Waking:


•Definition: When a process requests access to a semaphore but cannot proceed (e.g., because the
semaphore value is 0), it is blocked and placed in a waiting queue. Once the semaphore value is
increased (e.g., through a V operation), the process is removed from the queue and resumed,
effectively "waking" it up.
•Types:
•Binary Semaphore:
•Definition: A type of semaphore that can only have two values, 0 or 1. It is used to
implement mutual exclusion, ensuring that only one process can access a critical section or
resource at a time.
•Counting Semaphore:
•Definition: A type of semaphore that can hold a non-negative integer value. It is used to
manage access to a pool of identical resources, where the value represents the number of
available resources.
•Value Representation:
•Definition: The value of a semaphore reflects the number of resources available or the status
of a critical section. For binary semaphores, the value indicates whether the resource is
available (1) or not (0). For counting semaphores, the value represents the count of available
resources.
1.Operations:
•P Operation (Wait/Acquire):
•Definition: This operation decrements the semaphore value. If the value becomes
negative, the process is blocked and added to a waiting queue until the semaphore
value is incremented. It is used to request access to a critical section or resource.
•V Operation (Signal/Release):
•Definition: This operation increments the semaphore value. If there are any processes
in the waiting queue, one of them is unblocked and granted access to the critical section
or resource. It is used to release the resource and potentially wake up waiting
processes.
2.Queue Management:
•Definition: Processes or threads that cannot proceed because of a semaphore's state are
placed in a queue. This queue manages the order of processes waiting for the semaphore,
typically using a First-In-First-Out (FIFO) method to ensure fairness and prevent starvation.
3.Mutual Exclusion and Coordination:
•Definition: Semaphores enforce mutual exclusion by ensuring that only one process can
access a critical section or resource at a time (for binary semaphores) or coordinate access
among multiple processes in a controlled manner (for counting semaphores). They help
prevent conflicts and ensure that resources are used efficiently and safely.
Queuing Implementation of Semaphores:

•The queuing implementation of semaphores is a method used to manage and coordinate processes or
threads that are waiting to acquire or release a semaphore. This implementation ensures that processes
are handled in an orderly and fair manner, typically using a first-come, first-served (FIFO) queue.
•Waiting Queue:
•When a process cannot proceed because the semaphore value is not favorable (e.g., it's zero), the process is
blocked and added to a queue.
•FIFO Order:
•Processes are managed in a First-In-First-Out (FIFO) order. The process that arrives first is the first to be
granted access when the semaphore value becomes positive.
•Blocking and Waking:
•Blocking: When a process performs a P operation (wait) and the semaphore value is zero or less, it is
placed in the waiting queue and blocked.
•Waking: When a process performs a V operation (signal), the semaphore value is incremented, and if
there are processes in the queue, one is removed and allowed to proceed.
•Fairness:
•This approach ensures that processes are granted access to resources based on the order they arrived,
preventing starvation and promoting fairness.
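
A textbook-style sketch of this queuing implementation (enqueue, dequeue, block, wakeup and current_process stand for assumed kernel primitives, and both operations are assumed to execute atomically, per the characteristics above).

typedef struct {
    int value;                   /* may go negative: -n means n waiters */
    struct process *queue;       /* FIFO list of blocked processes */
} semaphore;

void P(semaphore *S) {           /* wait */
    S->value--;
    if (S->value < 0) {
        enqueue(S->queue, current_process);  /* join the FIFO queue */
        block();                             /* sleep; no busy waiting */
    }
}

void V(semaphore *S) {           /* signal */
    S->value++;
    if (S->value <= 0) {                     /* someone is still waiting */
        struct process *p = dequeue(S->queue);
        wakeup(p);                           /* first-come, first-served */
    }
}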
PRODUCER CONSUMER USING SEMAPHORE

•Shared Buffer:
•A bounded buffer (or circular buffer) is used to store items produced by the
producer and consumed by the consumer. It has a fixed size.
•Semaphores Used:
•empty Semaphore: Counts the number of empty slots in the buffer. Initialized to
the buffer size.
•full Semaphore: Counts the number of filled slots in the buffer. Initialized to 0.
•mutex Semaphore: Ensures mutual exclusion when accessing the buffer.
Initialized to 1.
Producer Process:
•Wait Operation on empty: The producer performs a P operation (wait) on the empty
semaphore before adding an item to the buffer. If no empty slots are available, the
producer is blocked.

•Wait Operation on mutex: The producer performs a P operation (wait) on the mutex
semaphore to gain exclusive access to the buffer.

•Add Item to Buffer: The producer adds an item to the buffer.

•Signal Operation on mutex: The producer performs a V operation (signal) on the mutex
semaphore to release access to the buffer.

•Signal Operation on full: The producer performs a V operation (signal) on the full
semaphore to indicate that there is now a new item available in the buffer.
Consumer Process:
•Wait Operation on full: The consumer performs a P operation (wait) on the full
semaphore before removing an item from the buffer. If no items are available,
the consumer is blocked.

•Wait Operation on mutex: The consumer performs a P operation (wait) on the mutex
semaphore to gain exclusive access to the buffer.

•Remove Item from Buffer: The consumer removes an item from the buffer.

•Signal Operation on mutex: The consumer performs a V operation (signal) on the
mutex semaphore to release access to the buffer.

•Signal Operation on empty: The consumer performs a V operation (signal) on the
empty semaphore to indicate that there is now an additional empty slot available
in the buffer.
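
Putting both sides together in the usual pseudocode form (N is the buffer size; P/V are the semaphore operations defined earlier; produce_item, insert, remove_item and consume are placeholders):

semaphore mutex = 1;     /* mutual exclusion on the buffer */
semaphore empty = N;     /* counts empty slots */
semaphore full  = 0;     /* counts filled slots */

/* Producer */
while (true) {
    item = produce_item();
    P(empty);              /* wait for a free slot (blocks if the buffer is full) */
    P(mutex);              /* lock the buffer */
    insert(item);          /* put item at the 'in' index, advance with wrap-around */
    V(mutex);              /* unlock the buffer */
    V(full);               /* signal: one more filled slot */
}

/* Consumer */
while (true) {
    P(full);               /* wait for an item (blocks if the buffer is empty) */
    P(mutex);              /* lock the buffer */
    item = remove_item();  /* take item from the 'out' index */
    V(mutex);              /* unlock the buffer */
    V(empty);              /* signal: one more empty slot */
    consume(item);
}

The order of the two waits matters: if the producer took mutex before empty, it could go to sleep on a full buffer while holding the lock, and the consumer could never get in to empty a slot, deadlocking both.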
READERS WRITERS PROBLEM
 The Readers-Writers Problem is a classic synchronization problem that involves
coordinating access to shared data among multiple readers and writers. The challenge
is to ensure that:
1. Multiple readers can read the shared data simultaneously without conflicts.
2. Only one writer can modify the shared data at any given time.
3. No reader should read the data while a writer is modifying it (to avoid inconsistent reads).
 The Problem:
• Readers: Processes that only read data, which do not interfere with each other.
• Writers: Processes that modify the data, which must have exclusive access to prevent data
corruption.
 Key Issues:
1. Simultaneous Reading: Multiple readers can read the data simultaneously, as they are not
changing it.
2. Exclusive Writing: When a writer wants to modify the data, no other process (reader or writer)
can access the data.
3. Synchronization: Proper synchronization is required to prevent race conditions, where both
reading and writing may happen simultaneously, leading to data inconsistency.
 Synchronization Rules:
1. Multiple Readers - Single Writer: If a writer is modifying the data, no reader can access it.
However, if no writer is active, multiple readers can access the data simultaneously.
2. Writers Get Exclusive Access: Only one writer can modify the data at a time, and it must
exclude all readers to avoid inconsistent states.
 Two Variations:
1. Reader-Preference Solution: Readers get priority, meaning if there is a reader waiting, it
can proceed, even if a writer is also waiting. This may lead to writer starvation, where a
writer waits indefinitely.
2. Writer-Preference Solution: Writers get priority. If a writer is waiting, it will be given access
before any new readers. This prevents writer starvation but may delay readers.
 Real-Life Analogy:
• Think of a shared database in a library where multiple people (readers) can
simultaneously look up information without disturbing each other. But if someone (a
writer) wants to update the database, they need exclusive access, and no one else can
read or write until they finish.
Solution Using Semaphores:
•mutex semaphore (for mutual exclusion): Used to protect the readers_count variable shared by
the readers.
•readers_count: An integer counter to track the number of active readers.
•Writer semaphore (wrt): Ensures only one writer (and no readers) accesses the data at a time.
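
The reader-preference variant, sketched in its classic form (P/V as before). Note how the first reader locks writers out and the last reader lets them back in, which is exactly why writers can starve in this variation.

semaphore mutex = 1;    /* protects readers_count */
semaphore wrt   = 1;    /* writer exclusion */
int readers_count = 0;

/* Reader */
P(mutex);
readers_count++;
if (readers_count == 1) P(wrt);   /* first reader locks out writers */
V(mutex);
/* ... read the shared data ... */
P(mutex);
readers_count--;
if (readers_count == 0) V(wrt);   /* last reader lets writers in */
V(mutex);

/* Writer */
P(wrt);                           /* exclusive access */
/* ... modify the shared data ... */
V(wrt);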
DEADLOCK

 Deadlock is a state in a multitasking environment where two or more
processes cannot proceed with their execution because each is waiting for
resources held by the other processes. This situation causes the processes to
remain in a perpetual waiting state, leading to a complete halt in their
execution.
 Resources can be anything required by processes to execute, such as CPU
time, memory, files, and I/O devices.
(Figure: P1 holds resource A and requests resource B, while P2 holds B and requests A.)

 We have processes P1 and P2
 P1 has A, P2 has B
 P1 needs B and P2 needs A for completion
 The processes are in deadlock:
 A set of processes is said to be in deadlock when every process in the set waits
for an event that can only be caused by another process in the set.
 A single process can never be in a deadlock
CONDITIONS TO OCCUR DEADLOCK

 Deadlock occurs when a set of processes becomes permanently blocked
because each process is waiting for a resource that another process holds.
For deadlock to happen, four specific conditions must occur simultaneously.
 These conditions are referred to as the necessary conditions for deadlock. If
any one of these conditions is prevented, deadlock cannot occur.
 There are 4 conditions for deadlock to occur. It means that if all 4
conditions are satisfied then deadlock may or may not occur.
 But if any condition fails then deadlock will definitely not occur.
1. MUTUAL EXCLUSION

•Definition: This condition refers to the fact that resources involved in the deadlock
must be non-shareable, meaning that only one process can use a resource at a
time. If a resource is held by one process, other processes requesting that resource
must wait.
•Example: Imagine a printer. If one process is printing, another process must wait
until the first process releases the printer before it can start printing. The printer can
only be used by one process at a time, and this exclusivity creates the potential for
deadlock.
•Key Point: If resources were shareable, deadlock could be avoided, as multiple
processes could access the same resource simultaneously.
HOLD AND WAIT
•Definition: This condition occurs when a process is holding at least one resource
and is waiting to acquire additional resources that are currently being held by other
processes.
•Example: Process P1 holds Resource A and waits for Resource B, while Process
P2 holds Resource B and waits for Resource A. Both processes hold resources
and are waiting for others, which creates the deadlock.
•Key Point: If processes were required to request and receive all the resources
they need at once, deadlock could be avoided, since no process would be able to
hold a resource while waiting for more.

NO PREEMPTION
• Definition: This condition means that resources cannot be forcibly taken
away (preempted) from a process once they are allocated. The process
must release the resource voluntarily after it has completed its task.
• Example: Suppose a process P1 has acquired a lock on Resource A. Even if
another process P2 needs Resource A to proceed, the operating system
cannot forcibly take it away from P1. Instead, P2 has to wait until P1
voluntarily releases the resource, possibly leading to deadlock.
• Key Point: If preemption were allowed, the operating system could forcibly
take resources from waiting processes and assign them to other processes,
thus breaking the deadlock cycle.
CIRCULAR WAIT

•Definition: This condition happens when there is a circular chain of processes, with
each process waiting for a resource that the next process in the chain holds. In such a
scenario, no process can proceed because each one is waiting on the next.
•Example: Consider three processes P1, P2, and P3:
•P1 is waiting for a resource held by P2.
•P2 is waiting for a resource held by P3.
•P3 is waiting for a resource held by P1.
•This circular dependency means none of the processes can proceed, leading to
deadlock.
•Key Point: Breaking the circular chain of waiting processes would prevent deadlock.
DEADLOCK HANDLING

 There are 3 ways to handle deadlock


1. Deadlock prevention
2. Deadlock avoidance
3. Deadlock detection and recovery
DEADLOCK PREVENTION

 Deadlock prevention is a strategy used in operating systems to avoid
deadlock situations by ensuring that one or more of the four necessary
conditions for deadlock cannot occur.
 These conditions, if all met, can lead to a deadlock, so preventing at least
one of them guarantees that deadlock will not happen.
Preventing Mutual Exclusion

• Concept: Mutual exclusion ensures that only one process can access a
resource at a time. Deadlock prevention removes or limits mutual exclusion.
• How to Prevent:
• Shareable Resources: If a resource can be shared between multiple processes
(e.g., read-only files or databases), mutual exclusion is no longer required. This
can prevent deadlock for certain types of resources.
• Non-shareable Resources: Some resources, like printers or CPU cycles, cannot be
shared, so mutual exclusion is necessary for these types of resources. In such
cases, the system cannot prevent deadlock by attacking mutual exclusion, but it
can work on the other conditions.
• Drawback: Not all resources can be made shareable, especially physical
devices like printers, which need exclusive access to prevent interference.
Preventing Hold and Wait
• Concept: In the hold and wait condition, a process holds at least one
resource while waiting for other resources. Deadlock prevention can
remove this condition by ensuring that processes do not hold resources
while waiting.
• How to Prevent:
• Request All Resources at Once: Require that processes request all the resources
they need at the start. If they cannot get all the resources, they must release any
they are holding and start again.
• Resource Allocation Protocol: A process may only request resources when it is not
holding any. Alternatively, processes may only request resources when they
already hold some but must release all held resources if their request for more is
denied.
• Drawback: Requiring processes to request all resources at once can lead to
poor resource utilization. Processes may unnecessarily hold resources they
don't need immediately, leading to resource wastage or delays in other
processes.
Preventing No Preemption
• Concept: In the no preemption condition, once a process acquires a resource,
the resource cannot be forcibly taken away. Deadlock prevention removes this
condition by allowing preemption.
• How to Prevent:
• Preempt Resources: If a process is holding a resource and requests another resource
that is not available, the operating system can preempt (take away) the held resources
and allocate them to another process. The preempted process is temporarily
suspended, and its resources are given to other waiting processes.
• Preemptable Resources: Some resources can be preempted (e.g., CPU cycles or
memory pages), while others (e.g., printers) cannot be safely preempted. Preemption
works well when the resources can be saved and restored without causing errors.
• Drawback: Not all resources can be preempted safely. For example, if a process
is in the middle of printing, preempting the printer could lead to incomplete or
corrupted output. Preemption also adds overhead, as the system needs to track
and manage the preempted resources.
Preventing Circular Wait
• Concept: Circular wait is the condition where a circular chain of processes
exists, each process waiting for a resource held by the next. Deadlock
prevention breaks this circular chain by ensuring that no such cycle can form.
• How to Prevent:
• Impose Resource Ordering: Assign a numerical ordering to all resources.
Processes can only request resources in an increasing order (i.e., they can only
request a resource with a higher number than the one they currently hold). This
prevents a cycle from forming because processes cannot go back and request
lower-numbered resources.
• Drawback: This strategy can limit the flexibility of resource allocation. Processes
may need to request resources in a specific order, which might not align with
their actual resource needs. This can also lead to inefficiency or
underutilization.
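
Resource ordering in practice, sketched with two pthread mutexes (the numbering is illustrative): every thread that needs both resources takes them in increasing order, so the circular chain can never close.

#include <pthread.h>

pthread_mutex_t resource1 = PTHREAD_MUTEX_INITIALIZER;  /* ordering number 1 */
pthread_mutex_t resource2 = PTHREAD_MUTEX_INITIALIZER;  /* ordering number 2 */

/* Because every thread locks in increasing order, no thread can ever
   hold resource2 while waiting for resource1 -- the cycle cannot form. */
void use_both(void) {
    pthread_mutex_lock(&resource1);   /* lower number first */
    pthread_mutex_lock(&resource2);   /* then the higher number */
    /* ... use both resources ... */
    pthread_mutex_unlock(&resource2);
    pthread_mutex_unlock(&resource1);
}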
Summary of Deadlock Prevention

 Deadlock prevention ensures that at least one of the four conditions
necessary for deadlock does not occur:
• Mutual Exclusion: Make resources sharable where possible.
• Hold and Wait: Require processes to request all resources at once or release
held resources before requesting new ones.
• No Preemption: Allow the operating system to forcibly take resources away
from processes in specific situations.
• Circular Wait: Impose an ordering or hierarchy on resources to prevent
circular waiting.
DEADLOCK AVOIDANCE
 Deadlock avoidance is a strategy used by operating systems to ensure that
a system never enters a deadlock state. Unlike deadlock prevention, which
outright denies one or more of the conditions necessary for deadlock,
deadlock avoidance allows the system to allocate resources dynamically
as needed, while making sure that it will not lead to a deadlock.
 Deadlock avoidance requires the operating system to have additional
knowledge about which resources each process will request and release
during its execution. The system uses this information to avoid unsafe states
where a deadlock might occur.
 Conditions for Deadlock Avoidance
 The system must meet certain conditions to ensure deadlock avoidance:
1. Process Behavior Prediction: The operating system must know, in advance,
the maximum number of resources each process may request during its
lifetime.
2. State Safety Check: Before allocating resources to a process, the operating
system must check whether doing so will keep the system in a "safe state."
Safe and Unsafe States

1. Safe State: A state is considered safe if there exists a sequence of all
processes such that each process can complete by being allocated the
resources it needs, even if all other processes request their maximum
resources. In a safe state, the system can always avoid deadlock by
carefully allocating resources.
2. Unsafe State: A state is unsafe if no such sequence exists, meaning that
there’s a possibility that deadlock might occur, although it has not
happened yet.
 The system transitions between safe and unsafe states during resource
allocation. Deadlock avoidance ensures that the system stays in safe states
by not granting resource requests that could lead to an unsafe state.
BANKERS ALGORITHM

 Banker's Algorithm
 The Banker’s Algorithm is the most commonly used deadlock avoidance
algorithm. It was proposed by Edsger Dijkstra and is based on the idea of a
banker who makes loans to customers. The banker must ensure that the
total amount of loans given out does not exceed a safe limit, where each
customer (process) can eventually pay back the loan (complete
execution).
Steps of the Banker's Algorithm:
1.Data Structures:
•Available[]: Number of available instances for each resource type.
•Max[][]: Maximum demand of each process for each resource.
•Allocation[][]: Current allocation of each resource type for each process.
•Need[][]: Remaining resource needs of each process (calculated as Max[][] -
Allocation[][]).
2.Safety Check:
•When a process requests resources, the system pretends to allocate those resources and
checks if it results in a safe state.
•If it does result in a safe state, the resources are allocated; otherwise, the request is
denied.
3.Request Handling:
•For a process Pi requesting resources:
•If Request[i] ≤ Need[i] and Request[i] ≤ Available, the system allocates the resources
temporarily.
•The system then checks if the resulting state is safe. If it is safe, the allocation is
made permanent. If not, the system rolls back the temporary allocation.
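
A sketch of the safety check at the heart of the algorithm (the fixed sizes N and M are illustrative, and Need is assumed to be precomputed as Max - Allocation):

#include <stdbool.h>

#define N 5   /* processes */
#define M 3   /* resource types */

int Available[M];
int Max[N][M], Allocation[N][M], Need[N][M];   /* Need = Max - Allocation */

bool is_safe(void) {
    int work[M];
    bool finish[N] = { false };
    for (int j = 0; j < M; j++) work[j] = Available[j];

    /* Repeatedly find a process whose Need fits in work; "run" it to
       completion and reclaim its Allocation. */
    for (int done = 0; done < N; ) {
        bool found = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool fits = true;
            for (int j = 0; j < M; j++)
                if (Need[i][j] > work[j]) { fits = false; break; }
            if (fits) {
                for (int j = 0; j < M; j++) work[j] += Allocation[i][j];
                finish[i] = true;
                found = true;
                done++;
            }
        }
        if (!found) return false;   /* no runnable process left: unsafe state */
    }
    return true;                    /* a safe sequence exists */
}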
 Pros of the Banker's Algorithm:
• It can help the system avoid deadlock altogether by dynamically checking
each request.
 Cons of the Banker's Algorithm:
• Requires processes to specify their maximum resource needs in advance,
which may not always be possible or efficient.
• Increases computational overhead due to frequent checks for safe states.
• Not suitable for systems where the resource requirements change
dynamically or are unpredictable.
DEADLOCK RECOVERY
 Deadlock Recovery is a method used to regain system functionality after a
deadlock has occurred. Unlike deadlock prevention and avoidance, which
aim to prevent deadlocks from happening, deadlock recovery is
concerned with detecting a deadlock after it has occurred and then
taking action to resolve it.
 A. Process Termination/ Kill the process
• Abort All Deadlocked Processes:
• All processes involved in the deadlock are terminated. This is a brute-force
method that guarantees the deadlock will be resolved, but it can result in
significant data loss.
• Abort One Process at a Time:
• Abort one process at a time from the deadlocked processes until the deadlock is
resolved. The process to abort can be selected based on criteria such as priority,
process runtime, or resources used.
 B. Resource Preemption
 Preempt Resources: Temporarily take resources away from one process and
allocate them to another to break the deadlock.
 Rollback: Roll back the process to a previous safe state, then re-execute it from
that state, ensuring it does not request the same resource set that led to the
deadlock.

** Selecting a process for preemption is not easy, but several factors should be
kept in mind:
Process holding the minimum resources
Process which has run for the smallest duration of time
Process which has the largest runtime remaining
RESOURCE REQUEST AND RESOURCE RELEASE
 Resource Request and Resource Release are two fundamental operations
involved in managing resources in an operating system.
 They are critical for controlling access to shared resources, ensuring smooth
process execution, and avoiding issues like deadlock.
Resource Request

•Definition: When a process needs to use a resource (like CPU time, memory, or a
device), it sends a request to the operating system to allocate that resource.
•Purpose: The resource request ensures that processes do not use resources they
are not allowed to access and helps the operating system keep track of resource
usage.
•Process:
1.The process identifies the resources it needs (e.g., a file or a printer).
2.It sends a request to the operating system for those resources.
3.The operating system checks the availability of the requested resources:
•If available, it allocates them to the process.
•If not available, the process is put in a waiting state until the resources are
free.
•Example:
•A word processing program (process) wants to print a document. It sends a request
for the printer (resource) to the operating system.
Resource Release
•Definition: When a process no longer needs a resource, it releases it back to the
operating system so that it can be used by other processes.
•Purpose: Releasing resources helps in resource management, prevents resource
leaks, and ensures that resources are available for other processes that need them.
•Process:
1.The process informs the operating system that it no longer needs the resource.
2.The operating system marks the resource as available.
3.If there are other processes waiting for the resource, one of them can now acquire
it.
•Example:
•After printing the document, the word processing program releases the printer,
making it available for other processes that might need it.
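
The request/release cycle maps naturally onto a semaphore-guarded resource; a sketch for the printer example (the names are invented, and initialization is shown as a comment):

#include <semaphore.h>

sem_t printer;                   /* 1 = printer free, 0 = in use */

void print_document(const char *doc) {
    sem_wait(&printer);          /* resource request: block until the printer is free */
    /* ... send doc to the printer ... */
    sem_post(&printer);          /* resource release: wake a waiting process, if any */
}

/* at startup: sem_init(&printer, 0, 1); */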
