
UNIT-III

Synchronization Tools: The Critical Section Problem, Peterson's Solution, Mutex Locks, Semaphores, Monitors, Classic Problems of Synchronization.
Deadlocks: System Model, Deadlock Characterization, Methods for Handling Deadlocks, Deadlock Prevention, Deadlock Avoidance, Deadlock Detection, Recovery from Deadlock.

1. Critical Section Problem in OS:
It is used to implement process synchronization. The critical section is the part of the program where shared memory is accessed, such as shared variables and shared files.
Solutions to the Critical Section Problem:
Any solution to the critical section problem must satisfy three conditions.
1. Mutual exclusion
At any time only one process may be in the critical section; only when that process comes out of the critical section can another process enter it.
2. Progress
If no process is executing in the critical section and some processes want to enter it, the selection of the next process to enter cannot be postponed indefinitely.
3. Bounded Waiting
A bound must exist on the number of times other processes may enter the critical section after a process has made a request to enter and before that request is granted.

General structure of process Pi
• Entry section
It ensures that only one process can enter the critical section. If a process is in the critical section, other processes are not allowed to enter. Before entering the critical section, the corresponding process locks the critical section.
• Critical section
It is the part of the program where the shared data is accessed.
• Exit section
It allows a process to leave the critical section; here the corresponding process unlocks the critical section.

do
{
   Entry section
      Critical section
   Exit section
      Remainder section
} while(true);

2. Peterson's Solution:
Peterson's Solution is a software mechanism implemented in user mode. It is a busy-waiting solution and can be implemented for only two processes. It provides a good algorithm for solving the critical section problem. The solution is restricted to two processes that alternate execution between their critical sections and remainder sections. It uses two shared variables:
1. int turn;
It indicates whose turn it is to enter the critical section; turn may be either 0 or 1.
2. boolean flag[2];
The flag array indicates whether a process is ready to enter the critical section; flag[i] = true indicates that Pi is ready to enter.

Peterson's solution provides a solution to the following problems:
• It ensures that if a process is in the critical section, no other process is allowed to enter it. This property is termed mutual exclusion.
• If more than one process wants to enter the critical section, the process that should enter the critical region first must be established. This is termed progress.
• There is a limit to the number of requests that processes can make to enter the critical region, provided that a process has already requested to enter and is waiting. This is termed bounded waiting.
• It provides platform neutrality, as the solution runs in user mode and does not require any permission from the kernel.

Algorithm for process Pi
do
{
   flag[i] = true;
   turn = j;
   while (flag[j] == true && turn == j);
   // critical section
   flag[i] = false;
   // remainder section
} while(true);
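As an illustration only (not part of the original notes), the algorithm above can be sketched in C with POSIX threads. The names turn and flag follow the notes; the two worker threads, the shared counter, and the ITERS count are assumptions for the demo. On modern hardware a busy-waiting solution like this also needs atomic (or otherwise ordered) accesses to behave correctly, which is why C11 atomics are used here.

/* Sketch: Peterson's solution for two threads (assumed demo, not from the notes). */
#include <pthread.h>
#include <stdio.h>
#include <stdatomic.h>

#define ITERS 100000

atomic_int flag[2];          /* flag[i] = 1 means thread i wants to enter  */
atomic_int turn;             /* whose turn it is when both want to enter   */
long shared_counter = 0;     /* resource protected by the critical section */

void *worker(void *arg) {
    int i = *(int *)arg;     /* this thread's id: 0 or 1 */
    int j = 1 - i;           /* the other thread's id    */
    for (int k = 0; k < ITERS; k++) {
        atomic_store(&flag[i], 1);                          /* entry section */
        atomic_store(&turn, j);
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                                               /* busy wait */
        shared_counter++;                                   /* critical section */
        atomic_store(&flag[i], 0);                          /* exit section */
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected %d)\n", shared_counter, 2 * ITERS);
    return 0;
}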
3. Mutex Locks:
A Mutex (Mutual Exclusion) Lock is a synchronization mechanism used in multithreading to prevent multiple threads from accessing a shared resource simultaneously. It ensures that only one thread can execute a critical section at a time, avoiding race conditions and data inconsistency.

Components of Mutex Locks
1. Mutex Variable
• This is a special variable that represents the lock's state.
• It can have two states:
a. Locked (Occupied) → A thread is using the resource.
b. Unlocked (Available) → The resource is free for other threads.
2. Lock Acquisition
• A thread tries to acquire the lock to access the resource.
• If the lock is available, it takes control of the resource.
• If the lock is already taken, the thread waits until it becomes free.
3. Lock Release
• Once the thread finishes using the resource, it releases the lock.
• This allows other threads to acquire the resource.

Types of Mutex Locks
Mutex (Mutual Exclusion) locks help in controlling access to shared resources in multithreading. Different types of mutex locks provide various features to handle concurrency effectively.
1. Recursive Mutex
• Allows the same thread to lock the mutex multiple times without blocking itself.
• The thread must unlock it as many times as it was locked before it becomes available.
Example: Used in a directory traversal system, where a thread locks a directory and enters subdirectories without getting blocked.
2. Error-Checking Mutex
• Detects errors such as a thread trying to lock a mutex it already holds.
• Helps in debugging concurrency issues.
Example: Used in multi-threaded programs where multiple threads update a shared counter. If a thread mistakenly tries to lock an already locked mutex, an error is reported.
3. Timed Mutex
• A thread tries to acquire the mutex for a limited time.
• If the lock is not available within the given time, the operation fails.
Example: Used in real-time systems where tasks need resources within a time limit; otherwise, they switch to an alternative action.
4. Priority Inheritance Mutex
• Prevents priority inversion, where a low-priority thread blocks a high-priority thread.
• Temporarily raises the priority of the low-priority thread holding the lock to match the highest waiting thread.
Example: Used in real-time operating systems where critical high-priority tasks must not be blocked by lower-priority tasks.
5. Read-Write Mutex
• Allows multiple threads to read a resource simultaneously, but only one thread to write at a time.
• Writers get exclusive access, ensuring data consistency.
Example: Used in video streaming, where multiple threads read video frames while one thread writes new frames.
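A minimal pthread-based sketch of a mutex protecting a shared counter (an assumed example, not from the notes; the worker function and iteration count are illustrative):

/* Sketch: a pthread mutex guarding a shared counter (assumed demo). */
#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
long counter = 0;

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* lock acquisition: blocks if already held */
        counter++;                     /* critical section */
        pthread_mutex_unlock(&lock);   /* lock release */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 200000 with the mutex in place */
    return 0;
}

In the pthread library, the recursive and error-checking variants described above correspond to attributes selected with pthread_mutexattr_settype, and pthread_mutex_timedlock gives timed-mutex behaviour.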
4. Semaphores:
Semaphores are one of the easiest and most widely used process synchronization tools. Semaphores are simply integer variables used to coordinate the activities of multiple processes in a computer system. They are used to enforce mutual exclusion, avoid race conditions, and implement synchronization between processes. Semaphores are basically used to implement critical sections in code; critical sections are regions where only one process can execute at once. Binary semaphores have both advantages and disadvantages.

Operations on Semaphores:
1. wait() (P operation)
• Decreases the semaphore value.
• If the value becomes negative, the process waits (blocks) until it can proceed.
2. signal() (V operation)
• Increases the semaphore value.
• If processes are waiting, one is unblocked and allowed to proceed.

Types of Semaphores
Semaphores are classified into two main types:
a) Binary Semaphore (Mutex)
• It has only two values: 0 (locked) and 1 (unlocked).
• Used for mutual exclusion, ensuring that only one process accesses a shared resource at a time.
• It behaves like a lock (mutex) mechanism.
• Example: Controlling access to a critical section.
b) Counting Semaphore
• It can have values greater than 1, allowing multiple processes to access a resource concurrently.
• Useful for controlling access to a pool of resources (e.g., database connections).
• Example: Managing N printers in a network where multiple processes can print at once.

Advantages of Semaphores
• Efficient allocation: Semaphores allow for a more efficient allocation of system resources, which means that memory can be used more effectively.
• Control over multiple processes: Semaphores give you control over multiple processes, which means you can allocate memory to specific tasks as needed.
• Increased performance: Semaphore-based memory management can result in increased performance and improved system responsiveness.
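A small sketch of wait()/signal() using POSIX unnamed semaphores (sem_init, sem_wait, sem_post). The pool size of 3 and the five worker threads are assumptions made for illustration:

/* Sketch: a counting semaphore limiting access to a pool of 3 resources (assumed demo). */
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

sem_t pool;                      /* counting semaphore, initial value 3 */

void *user(void *arg) {
    long id = (long)arg;
    sem_wait(&pool);             /* wait(): take one unit of the resource */
    printf("thread %ld is using a resource\n", id);
    sleep(1);                    /* simulate work */
    printf("thread %ld is done\n", id);
    sem_post(&pool);             /* signal(): return the resource */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&pool, 0, 3);       /* at most 3 users inside at once */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, user, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&pool);
    return 0;
}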
5. Monitors:
A monitor is an object that contains both the data and the procedures needed to perform allocation of a shared resource. To accomplish resource allocation using monitors, a process must call a monitor entry routine. Many processes may want to enter the monitor at the same time, but only one process at a time is allowed to enter. Data inside a monitor may be either global to all routines within the monitor or local to a specific routine. Monitor data is accessible only within the monitor; there is no way for processes outside the monitor to access monitor data. This is a form of information hiding.
If a process calls a monitor entry routine while no other process is executing inside the monitor, the process acquires a lock on the monitor and enters it. While a process is in the monitor, other processes may not enter the monitor to acquire the resource. If a process calls a monitor entry routine while the monitor is locked, the monitor makes the calling process wait outside the monitor until the lock on the monitor is released. The process that has the resource calls a monitor entry routine to release the resource. This routine could free the resource and wait for another requesting process to arrive; the monitor entry routine calls signal to allow one of the waiting processes to enter the monitor and acquire the resource. The monitor gives higher priority to waiting processes than to newly arriving ones.
Syntax:
monitor monitor_name {
   // declaring shared variables
   variables_declaration;
   condition_variables;
   procedure p1(...) {
      ...
   };
   procedure p2(...) {
      ...
   };
   ...
   procedure pn(...) {
      ...
   };
   {
      initialisation_code();
   }
}
The monitor provides condition variables along with two operations on them, i.e. wait and signal.
wait(condition variable)
signal(condition variable)
Every condition variable has an associated queue. A process calling wait on a particular condition variable is placed into the queue associated with that condition variable. A process calling signal on a particular condition variable causes a process waiting on that condition variable to be removed from the queue associated with it.
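Monitors are a language construct and C has no monitor keyword, but a monitor can be approximated with a mutex plus a condition variable. The sketch below is an assumed illustration (not from the notes) of a tiny resource-allocator monitor with acquire/release entry routines:

/* Sketch: a monitor-like resource allocator built from a mutex and a condition variable. */
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER; /* the monitor lock   */
static pthread_cond_t  resource_free = PTHREAD_COND_INITIALIZER; /* condition variable */
static bool in_use = false;                                      /* monitor data       */

/* Monitor entry routine: acquire the shared resource. */
void acquire_resource(void) {
    pthread_mutex_lock(&monitor_lock);          /* only one process inside the monitor */
    while (in_use)
        pthread_cond_wait(&resource_free, &monitor_lock);  /* wait(condition variable) */
    in_use = true;
    pthread_mutex_unlock(&monitor_lock);
}

/* Monitor entry routine: release the shared resource. */
void release_resource(void) {
    pthread_mutex_lock(&monitor_lock);
    in_use = false;
    pthread_cond_signal(&resource_free);        /* signal(condition variable) */
    pthread_mutex_unlock(&monitor_lock);
}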

6. Classic problems of synchronization:
1) Bounded-buffer problem
Two processes share a common, fixed-size buffer. The producer puts information into the buffer and the consumer takes it out. The problem arises when the producer wants to put a new item into the buffer but the buffer is already full; the solution is for the producer to wait until the consumer has consumed at least one item. Similarly, if the consumer wants to remove an item from the buffer and sees that the buffer is empty, it goes to sleep until the producer puts something into the buffer and wakes it up.
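A common semaphore-based sketch of the bounded buffer (assumed illustration; the buffer size N and the empty/full/mutex semaphore names are conventional, not taken from the notes):

/* Sketch: producer-consumer with counting semaphores (assumed demo). */
#include <semaphore.h>

#define N 10                 /* buffer capacity */

int buffer[N];
int in = 0, out = 0;

sem_t empty_slots;           /* counts free slots, starts at N         */
sem_t full_slots;            /* counts filled slots, starts at 0       */
sem_t mutex;                 /* binary semaphore protecting the buffer */

void init_buffer(void) {
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
}

void producer_put(int item) {
    sem_wait(&empty_slots);          /* wait until there is room          */
    sem_wait(&mutex);
    buffer[in] = item;               /* critical section: add the item    */
    in = (in + 1) % N;
    sem_post(&mutex);
    sem_post(&full_slots);           /* signal that an item is available  */
}

int consumer_get(void) {
    sem_wait(&full_slots);           /* wait until there is an item       */
    sem_wait(&mutex);
    int item = buffer[out];          /* critical section: remove the item */
    out = (out + 1) % N;
    sem_post(&mutex);
    sem_post(&empty_slots);          /* signal that a slot is free        */
    return item;
}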
2) The readers-writers problem
A database is to be shared among several concurrent processes. Some processes may want only to read the database, while others may want to update it. If two readers access the shared data simultaneously there is no problem, but if a writer and some other process access the database simultaneously, a problem arises. Writers must have exclusive access to the shared database while writing to it. This problem is known as the readers-writers problem.
First readers-writers problem: no reader should be kept waiting unless a writer has already obtained permission to use the shared resource. Second readers-writers problem: once a writer is ready, that writer performs its write as soon as possible. A process wishing to modify the shared data must request the lock in write mode. Multiple processes are permitted to concurrently acquire a reader-writer lock in read mode, but only one process may acquire the lock for writing, as exclusive access is required for writers.
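A semaphore sketch of the first readers-writers problem (reader priority). This is a standard textbook pattern, shown here as an assumed illustration; read_count, rw_mutex and count_mutex are conventional names, and both semaphores are assumed to be initialised to 1 with sem_init:

/* Sketch: first readers-writers problem with semaphores (assumed demo). */
#include <semaphore.h>

sem_t rw_mutex;        /* exclusive access for writers, initial value 1 */
sem_t count_mutex;     /* protects read_count, initial value 1          */
int read_count = 0;    /* how many readers are currently inside         */

void writer_enter(void) { sem_wait(&rw_mutex); }
void writer_exit(void)  { sem_post(&rw_mutex); }

void reader_enter(void) {
    sem_wait(&count_mutex);
    read_count++;
    if (read_count == 1)          /* first reader locks out writers */
        sem_wait(&rw_mutex);
    sem_post(&count_mutex);
}

void reader_exit(void) {
    sem_wait(&count_mutex);
    read_count--;
    if (read_count == 0)          /* last reader lets writers in again */
        sem_post(&rw_mutex);
    sem_post(&count_mutex);
}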
3) Dining Philosophers problem
Five philosophers are seated on five chairs around a table. Each philosopher has a plate full of noodles, and each philosopher needs a pair of forks to eat. There are only five forks available altogether, with exactly one fork between any two plates of noodles. In order to eat, a philosopher picks up two forks, one to his left and the other to his right. If he is successful in obtaining both forks, he starts eating; after some time, he stops eating and puts both forks down.
What if all five philosophers decide to eat at the same time? All five would attempt to pick up two forks at once, so none of them would succeed. One simple solution is to represent each fork with a semaphore: a philosopher tries to grab a fork by executing a wait() operation on that semaphore and releases his forks by executing signal() operations. This solution guarantees that no two neighbours are eating simultaneously, but it can deadlock: if all five philosophers become hungry simultaneously and each grabs his left fork, every philosopher will be delayed forever.
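The fork-per-semaphore idea sketched in C (assumed illustration). To avoid the deadlock described above, this version makes the last philosopher pick up the right fork first, which is one common fix; it is not prescribed by the notes:

/* Sketch: dining philosophers with one semaphore per fork (assumed demo).   */
/* Deadlock is broken by making philosopher N-1 take the right fork first.   */
#include <semaphore.h>
#include <unistd.h>

#define N 5
sem_t fork_sem[N];                 /* one binary semaphore per fork */

void init_forks(void) {
    for (int i = 0; i < N; i++)
        sem_init(&fork_sem[i], 0, 1);
}

void *philosopher(void *arg) {
    long i = (long)arg;
    int left = i, right = (i + 1) % N;
    int first  = (i == N - 1) ? right : left;   /* break the circular wait */
    int second = (i == N - 1) ? left : right;
    for (int round = 0; round < 3; round++) {
        sem_wait(&fork_sem[first]);    /* pick up first fork  */
        sem_wait(&fork_sem[second]);   /* pick up second fork */
        sleep(1);                      /* eating              */
        sem_post(&fork_sem[second]);   /* put both forks down */
        sem_post(&fork_sem[first]);
        /* thinking */
    }
    return NULL;
}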
7. System Model:
When a process requests a resource and the resource is not available at that time, the process enters a waiting state. A waiting process may never change its state, because the resources it has requested are held by other waiting processes. This situation is called deadlock.
A system consists of a finite number of resources that are distributed among a number of processes. These resources are partitioned into several types, each consisting of some number of identical instances.
A process must request a resource before using it, and it must release the resource after using it. It can request as many resources as it needs to carry out its designated task, but the number of resources requested may not exceed the total number of resources available in the system.
A process may utilize a resource in only the following sequence:
Request: If the request cannot be granted immediately, then the requesting process must wait until it can acquire the resource.
Use: The process can operate on the resource.
Release: The process releases the resource after using it.
A deadlock may involve different types of resources. For example, consider a system with one printer and one tape drive. Suppose process Pi currently holds the printer and process Pj holds the tape drive. If process Pi requests the tape drive and process Pj requests the printer, then a deadlock occurs.
Multithreaded programs are good candidates for deadlock because multiple threads compete for shared resources.

8. Deadlock characterization:
A deadlock can be characterized in two ways.
1. Necessary conditions
a. Mutual Exclusion: At a time only one process can use a resource. If another process requests that resource, the requesting process must wait until the resource has been released.
b. Hold and Wait: A process is holding at least one resource and is waiting to acquire additional resources that are held by other processes.
c. No Preemption: A resource can be released only voluntarily by the process holding it, after that process has completed its task.
d. Circular Wait: There exists a set of waiting processes {P0, P1, P2, ..., Pn} such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, and so on, with Pn waiting for a resource held by P0.
2. Resource Allocation Graph
Deadlocks are described using a directed graph called the system resource allocation graph. The graph consists of a set of vertices (V) and a set of edges (E).
Types of Vertices:
a. Process Vertex: Represented as a circle (○).
b. Resource Vertex: Represented as a rectangle (□). There are two types of resource vertices:
• Single Instance
• Multiple Instance
Types of Edges:
Request Edge (P → R): A directed edge from a process to a resource (process P requests resource R).
Assignment Edge (R → P): A directed edge from a resource to a process (resource R is assigned to process P).
Example 1 (Single-Instance RAG)
If there is a cycle in the resource allocation graph and each resource in the cycle has only one instance, then the processes are in deadlock. For example, if process P1 holds resource R1, process P2 holds resource R2, and P1 is waiting for R2 while P2 is waiting for R1, then P1 and P2 are in deadlock.
Here is another example, which shows processes P1 and P2 acquiring resources R1 and R2 while process P3 is waiting to acquire both resources. In this example there is no deadlock because there is no circular dependency. So a cycle in a single-instance resource type is a sufficient condition for deadlock.
Example 2 (Multi-Instance RAG)
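The single-instance cycle in Example 1 is exactly what happens when two threads take two locks in opposite orders. A small C sketch (assumed illustration; the lock names R1 and R2 mirror the example) that will usually deadlock:

/* Sketch: the circular-wait deadlock from Example 1, with two mutexes (assumed demo). */
#include <pthread.h>
#include <unistd.h>

pthread_mutex_t R1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t R2 = PTHREAD_MUTEX_INITIALIZER;

void *p1(void *arg) {               /* P1 holds R1 and waits for R2 */
    (void)arg;
    pthread_mutex_lock(&R1);
    sleep(1);                       /* give P2 time to grab R2      */
    pthread_mutex_lock(&R2);        /* blocks forever: deadlock     */
    pthread_mutex_unlock(&R2);
    pthread_mutex_unlock(&R1);
    return NULL;
}

void *p2(void *arg) {               /* P2 holds R2 and waits for R1 */
    (void)arg;
    pthread_mutex_lock(&R2);
    sleep(1);
    pthread_mutex_lock(&R1);        /* blocks forever: deadlock     */
    pthread_mutex_unlock(&R1);
    pthread_mutex_unlock(&R2);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);         /* never returns once the cycle forms */
    pthread_join(t2, NULL);
    return 0;
}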
9. Methods for Handling Deadlocks

1) Deadlock Prevention
A good analogy for deadlock prevention is taking medicine before the disease appears: we make sure that the system will never enter a deadlock state. We can prevent a deadlock by eliminating at least one of the four necessary conditions.
a. Mutual Exclusion: It is not possible to eliminate mutual exclusion, because some resources can be used by only one process at a time; if we try to share such a resource among multiple processes, it will give wrong results.
b. Hold and Wait: Hold and wait is a condition in which a process holds one resource while simultaneously waiting for another resource that is held by a different process; the process cannot continue until it gets all the required resources. It is possible to eliminate hold and wait in two ways:
• By eliminating Hold: a process may request a resource only when it is not holding any resources.
• By eliminating Wait: a process may start execution only once it has obtained all the resources it needs.
c. No Preemption: It is possible to eliminate this condition by allowing preemption. For example, if a process holds R1, R2, R3 and waits for R4, it must release R1, R2 and R3 and then put in a request for all four resources together.
d. Circular Wait: It is possible to eliminate circular wait by assigning a priority (ordering) to each resource; a process can then request resources only in increasing order of priority (see the lock-ordering sketch below).
• R1 → R2 → R3 (a process must request in this order).
• If a process holds R2, it cannot request R1, breaking the circular dependency.
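A sketch of the resource-ordering idea in C (assumed illustration): every thread acquires the two locks in the same fixed order, R1 before R2, so the cycle from the earlier deadlock demo cannot form:

/* Sketch: breaking circular wait by imposing a global lock order (assumed demo). */
#include <pthread.h>

pthread_mutex_t R1 = PTHREAD_MUTEX_INITIALIZER;   /* order: R1 before R2, always */
pthread_mutex_t R2 = PTHREAD_MUTEX_INITIALIZER;

void task_a(void) {
    pthread_mutex_lock(&R1);      /* lower-ordered resource first */
    pthread_mutex_lock(&R2);
    /* use both resources */
    pthread_mutex_unlock(&R2);
    pthread_mutex_unlock(&R1);
}

void task_b(void) {
    pthread_mutex_lock(&R1);      /* same order as task_a, even though this  */
    pthread_mutex_lock(&R2);      /* task "needs" R2 first: no circular wait */
    /* use both resources */
    pthread_mutex_unlock(&R2);
    pthread_mutex_unlock(&R1);
}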

2) Deadlock Avoidance
Deadlock avoidance requires that each process provide additional information in advance: the maximum number of resources of each resource type that it may need. When a process requests a resource, the operating system decides whether the request can be satisfied immediately or whether the process must wait.
A resource-allocation state is defined by the number of available and allocated resources and the maximum requirements of all processes in the system.
Safe State
• A state is safe if the system can allocate all resources requested by all processes (up to their stated maximums) without entering a deadlock state.
• More formally, a state is safe if there exists a safe sequence of processes {P0, P1, P2, ..., Pn} such that all of the resource requests for Pi can be granted using the resources currently allocated to Pi and all processes Pj where j < i. (I.e. if all the processes prior to Pi finish and free up their resources, then Pi will be able to finish also, using the resources they have freed up.)
• If a safe sequence does not exist, then the system is in an unsafe state, which MAY lead to deadlock. (All safe states are deadlock free, but not all unsafe states lead to deadlocks.)

Figure: Safe, unsafe, and deadlocked state spaces.
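A sketch of the safe-state test for a single resource type (an assumed simplification of the Banker's-algorithm safety check; the arrays alloc[] and max_need[] and the process count are illustrative, not from the notes). It repeatedly looks for a process whose remaining need fits in the currently available resources, pretends it finishes, and reclaims its allocation:

/* Sketch: safe-state check for one resource type (simplified Banker's-style test). */
#include <stdbool.h>

#define NPROC 5

/* Returns true if a safe sequence exists for the given allocation state. */
bool is_safe(int available, const int alloc[NPROC], const int max_need[NPROC]) {
    int work = available;            /* resources currently free */
    bool finished[NPROC] = { false };
    for (int done = 0; done < NPROC; ) {
        bool progress = false;
        for (int i = 0; i < NPROC; i++) {
            /* Pi can finish if its remaining need fits in what is free. */
            if (!finished[i] && max_need[i] - alloc[i] <= work) {
                work += alloc[i];    /* Pi finishes and releases its resources */
                finished[i] = true;
                progress = true;
                done++;
            }
        }
        if (!progress)
            return false;            /* no process can finish: unsafe state */
    }
    return true;                     /* every process could finish in some order */
}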
Resource Allocation Graph (for deadlock avoidance):
If every resource type contains only a single instance, the resource allocation graph can be used for avoidance, with three kinds of edges:
Claim edge: the process may request the resource in the future.
Request edge: when a process wants to use a resource, it puts in a request for the resource.
Assignment edge: if the resource is available, the operating system allocates the resource to the process.

Figure: Resource allocation graph for deadlock avoidance

• Consider, for example, what happens when process P2 requests resource R2:
• The resulting resource-allocation graph would have a cycle in it, and so the request cannot be granted.

Figure: An unsafe state in a resource allocation graph
3) Deadlock Detection
• Allow the system to enter a deadlock state, detect it, and then recover.
• If every resource type contains a single instance, then use a variant of the resource allocation graph called the wait-for graph.
• We can obtain the wait-for graph from the resource allocation graph by removing the resource nodes and collapsing the appropriate edges.
• An edge Pi → Pj in the wait-for graph implies that Pi is waiting for Pj to release a resource that Pi needs.
• An edge Pi → Pj exists in the wait-for graph if and only if the resource allocation graph contains the two edges Pi → Rq and Rq → Pj for some resource Rq.
• The system periodically searches the wait-for graph for a cycle; if a cycle exists, a deadlock exists.

Figure: (a) Resource allocation graph. (b) Corresponding wait-for graph
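Detection on the wait-for graph amounts to cycle detection in a directed graph. A DFS-based sketch in C (assumed illustration; the adjacency matrix wait_for[i][j] meaning "Pi waits for Pj" and the process count are illustrative):

/* Sketch: cycle detection in a wait-for graph via DFS (assumed demo). */
#include <stdbool.h>

#define NPROC 4

/* wait_for[i][j] = true means process Pi is waiting for process Pj. */
bool wait_for[NPROC][NPROC];

static bool dfs(int p, bool visited[], bool on_stack[]) {
    visited[p] = true;
    on_stack[p] = true;                 /* p is on the current DFS path */
    for (int q = 0; q < NPROC; q++) {
        if (!wait_for[p][q])
            continue;
        if (on_stack[q])                /* back edge: a cycle, hence deadlock */
            return true;
        if (!visited[q] && dfs(q, visited, on_stack))
            return true;
    }
    on_stack[p] = false;
    return false;
}

/* Returns true if the wait-for graph contains a cycle (a deadlock). */
bool deadlock_detected(void) {
    bool visited[NPROC] = { false };
    bool on_stack[NPROC] = { false };
    for (int p = 0; p < NPROC; p++)
        if (!visited[p] && dfs(p, visited, on_stack))
            return true;
    return false;
}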

4) Deadlock Recovery
There are three basic approaches to recovery from deadlock:
1. Inform the system operator, and allow him/her to take manual intervention.
2. Terminate one or more processes involved in the deadlock.
3. Preempt resources.

1. Process Termination
• Two basic approaches, both of which recover resources allocated to terminated processes:
o Terminate all processes involved in the deadlock. This definitely solves the deadlock, but at the expense of terminating more processes than would be absolutely necessary.
o Terminate processes one by one until the deadlock is broken. This is more conservative, but requires doing deadlock detection after each step.
• In the latter case there are many factors that can go into deciding which processes to terminate next:
1. Process priorities.
2. How long the process has been running, and how close it is to finishing.
3. How many and what type of resources the process is holding. (Are they easy to preempt and restore?)
4. How many more resources the process needs in order to complete.
5. How many processes will need to be terminated.
6. Whether the process is interactive or batch.
7. (Whether or not the process has made non-restorable changes to any resource.)

2. Resource Preemption
• When preempting resources to relieve deadlock, there are three important issues to be addressed:
1. Selecting a victim - Deciding which resources to preempt from which processes involves many of the same decision criteria outlined above.
2. Rollback - Ideally one would like to roll back a preempted process to a safe state prior to the point at which that resource was originally allocated to the process. Unfortunately it can be difficult or impossible to determine what such a safe state is, and so the only safe rollback is to roll back all the way to the beginning. (I.e. abort the process and make it start over.)
3. Starvation - How do you guarantee that a process won't starve because its resources are constantly being preempted? One option would be to use a priority system, and increase the priority of a process every time its resources get preempted. Eventually it should get a high enough priority that it won't get preempted any more.

IMPORTANT QUESTIONS
• Classic Problems of Synchronization
• Methods for handling Deadlocks
• Deadlock characterization
• Semaphores
• Peterson's Solution
• Mutex locks
• Critical section problem
