Unit III Complete (1) Os
Uploaded by Yogesh Gaur

UNIT III

Deadlock:

A set of processes is deadlocked when every process in the set is waiting for a resource that is currently
allocated to another process in the set, and that resource can be released only when its holder makes
progress, which the holder cannot do because it is itself waiting.

3.1 Necessary Conditions:

There are four conditions that are necessary to achieve deadlock:

1 Mutual Exclusion - At least one resource must be held in a non-sharable mode. If any other process
requests this resource, then that process must wait for the resource to be released.
2 Hold and Wait - A process must be simultaneously holding at least one resource and waiting for at
least one resource that is currently being held by some other process.
3 No preemption - Once a process is holding a resource, then that resource cannot be taken away from
that process until the process voluntarily releases it.
4 Circular Wait - A set of processes { P0, P1, P2, . . ., PN } must exist such that every P[ i ] is waiting
for P[ ( i + 1 ) % ( N + 1 ) ].

3.2 Resource-Allocation Graph:

In some cases deadlocks can be understood more clearly through the use of Resource-Allocation Graphs,
having the following properties:

• A set of resource categories, { R1, R2, R3, . . ., RN }, which appear as square nodes on the graph.
Dots inside the resource nodes indicate specific instances of the resource.
• A set of processes, { P1, P2, P3, . . ., PN }
• Request Edges - A set of directed arcs from Pi to Rj, indicating that process Pi has requested Rj, and
is currently waiting for that resource to become available.
• Assignment Edges - A set of directed arcs from Rj to Pi indicating that resource Rj has been
allocated to process Pi, and that Pi is currently holding resource Rj.

Note that a request edge can be converted into an assignment edge by reversing the direction of the arc
when the request is granted. ( However note also that request edges point to the category box, whereas
assignment edges emanate from a particular instance dot within the box. )

For example:

• If a resource-allocation graph contains no cycles, then the system is not deadlocked. ( When looking
for cycles, remember that these are directed graphs. ) See the example in Figure 7.2 above.
• If a resource-allocation graph does contain cycles AND each resource category contains only a
single instance, then a deadlock exists.

• If a resource category contains more than one instance, then the presence of a cycle in the resource-
allocation graph indicates the possibility of a deadlock, but does not guarantee one. Consider, for
example, Figures 7.3 and 7.4 below:
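The cycle test described above can be sketched in Python. This is a minimal sketch, assuming the graph is given as a list of directed edges; the process and resource names (P1, R1, ...) are illustrative:

```python
from collections import defaultdict

def has_cycle(edges):
    """Detect a cycle in a directed graph given as (src, dst) pairs.

    Request edges go from a process to a resource (P -> R);
    assignment edges go from a resource to a process (R -> P).
    """
    graph = defaultdict(list)
    nodes = set()
    for src, dst in edges:
        graph[src].append(dst)
        nodes.update((src, dst))

    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on the DFS stack / done
    color = {n: WHITE for n in nodes}

    def dfs(n):
        color[n] = GRAY
        for m in graph[n]:
            if color[m] == GRAY:                 # back edge: a cycle exists
                return True
            if color[m] == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in nodes)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1: circular wait.
deadlocked = [("P1", "R2"), ("R2", "P2"), ("P2", "R1"), ("R1", "P1")]
# P1 holds R1 and has requested R2, which no one holds: no cycle.
safe = [("P1", "R2"), ("R1", "P1")]

print(has_cycle(deadlocked))  # True
print(has_cycle(safe))        # False
```

Since each resource here has a single instance, a cycle is both necessary and sufficient for deadlock, matching the second bullet above.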

3.3 Deadlock Prevention:

• Deadlocks can be prevented by preventing at least one of the four required conditions:

3.3.1 Attacking the Mutual Exclusion Condition

• Shared resources such as read-only files do not lead to deadlocks.

• Unfortunately some resources, such as printers and tape drives, require exclusive access by a single
process, so in general this condition cannot be denied.

3.3.2 Attacking the Hold and Wait Condition

• To prevent this condition processes must be prevented from holding one or more resources while
simultaneously waiting for one or more others. There are several possibilities for this:

o Require that all processes request all resources at one time. This can be wasteful of system
resources if a process needs one resource early in its execution and doesn't need some other
resource until much later.

o Require that processes holding resources must release them before requesting new resources,
and then re-acquire the released resources along with the new ones in a single new request.
This can be a problem if a process has partially completed an operation using a resource and
then fails to get it re-allocated after releasing it.

o Either of the methods described above can lead to starvation.
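The all-or-nothing idea behind these schemes can be sketched with Python locks; `acquire_all` is a hypothetical helper written for illustration, not a standard library API:

```python
import threading

def acquire_all(locks, timeout=0.1):
    """Try to acquire every lock; on any failure, release them all.

    A process therefore never holds some resources while blocking on
    others, which breaks the hold-and-wait condition.
    """
    held = []
    for lock in locks:
        if lock.acquire(timeout=timeout):
            held.append(lock)
        else:
            for h in held:          # roll back: release everything acquired so far
                h.release()
            return False
    return True

a, b = threading.Lock(), threading.Lock()
if acquire_all([a, b]):
    try:
        pass                        # critical section using both resources
    finally:
        for lock in (a, b):
            lock.release()
```

As the notes point out, repeated failed rounds of `acquire_all` can still starve a process, even though deadlock itself is prevented.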


3.3.3 Attacking the No Preemption condition

• Preempting resources from processes can prevent this condition, in cases where preemption is
possible.

o One approach is that if a process is forced to wait when requesting a new resource, then all
other resources previously held by this process are implicitly released, forcing this process to
re-acquire the old resources along with the new resources in a single request.

o Another approach is that when a resource is requested and not available, then the system
looks to see what other processes currently have those resources and are themselves blocked
waiting for some other resource. If such a process is found, then some of their resources may
get preempted and added to the list of resources for which the process is waiting.

3.3.4 Attacking the Circular Wait Condition

• One way to avoid circular wait is to number all resources, and to require that processes request
resources only in strictly increasing (or decreasing) order.

• In other words, in order to request resource Rj, a process must first release all Ri such that i >= j.

• One big challenge in this scheme is determining the relative ordering of the different resources.
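Resource ordering can be sketched in Python as follows; the resource names and their ranks are illustrative:

```python
import threading

# Assign every resource a fixed global rank; all processes must lock in
# ascending rank order.
RANK = {"tape": 1, "printer": 2, "plotter": 3}
LOCKS = {name: threading.Lock() for name in RANK}

def acquire_in_order(names):
    """Acquire the named resources in increasing rank order.

    Because every process requests resources in the same global order,
    no cycle of waits can form, so circular wait is impossible.
    """
    ordered = sorted(names, key=RANK.__getitem__)
    for name in ordered:
        LOCKS[name].acquire()
    return ordered

def release_all(names):
    for name in names:
        LOCKS[name].release()

held = acquire_in_order(["printer", "tape"])   # locks tape first regardless
print(held)  # ['tape', 'printer']
release_all(held)
```

Note that the caller's request order is irrelevant; the global ranking alone decides the acquisition order.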

3.4 Deadlock Avoidance

• The general idea behind deadlock avoidance is to never let the system enter a state from which a
deadlock could occur. This requires more information about each process and tends to lead to low
device utilization.

• In some algorithms the scheduler only needs to know the maximum number of each resource that a
process might potentially use. In more complex algorithms the scheduler can also take advantage of
the schedule of exactly what resources may be needed in what order.

• When a scheduler sees that starting a process or granting resource requests may lead to future
deadlocks, then that process is just not started or the request is not granted.

• A resource allocation state is defined by the number of available and allocated resources and the
maximum requirements of all processes in the system.

3.4.1 Safe State

• A state is safe if the system can allocate all resources requested by all processes (up to their stated
maximums) without entering a deadlock state.

• A state is safe if there exists a safe sequence of processes {P0, P1, P2, ..., PN} such that all of the
resource requests of each Pi can be satisfied using the currently available resources plus the
resources held by all processes Pj with j < i. (i.e. if all the processes prior to Pi finish and free up
their resources, then Pi will be able to finish also, using the resources that they have freed up.)

• If a safe sequence does not exist, then the system is in an unsafe state, which MAY lead to deadlock.
(All safe states are deadlock free, but not all unsafe states lead to deadlocks.)
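The search for a safe sequence can be sketched as a greedy algorithm in Python. The state below is a classic textbook example with three resource types (A, B, C) and five processes:

```python
def find_safe_sequence(available, allocation, need):
    """Return a safe sequence of process indices, or None if the state is unsafe.

    available:  free units of each resource type
    allocation: allocation[i][j] = units of resource j held by process i
    need:       need[i][j] = units of resource j that process i may still request
    """
    work = list(available)
    n = len(allocation)
    finished = [False] * n
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # Pi can run to completion and return everything it holds.
                for j in range(len(work)):
                    work[j] += allocation[i][j]
                finished[i] = True
                sequence.append(i)
                progress = True
    return sequence if all(finished) else None

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(find_safe_sequence(available, allocation, need))  # [1, 3, 4, 0, 2]
```

A safe state may admit several safe sequences; this greedy version simply returns the first one it finds.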

3.4.2 Resource-Allocation Graph Algorithm

• If resource categories have only single instances of their resources, then deadlock states can be
detected by cycles in the resource-allocation graphs.

• In this case, unsafe states can be recognized and avoided by augmenting the resource-allocation
graph with claim edges, noted by dashed lines, which point from a process to a resource that it may
request in the future.

• In order for this technique to work, all claim edges must be added to the graph for any particular
process before that process is allowed to request any resources.

• When a process makes a request, the claim edge Pi->Rj is converted to a request edge; when the
resource is allocated it becomes an assignment edge; and when the resource is released, the
assignment edge reverts back to a claim edge.

• This approach works by denying requests that would produce cycles in the resource-allocation
graph, taking claim edges into account.

3.4.3 Banker's Algorithm

• For resource categories that contain more than one instance the resource-allocation graph method
does not work, and more complex methods must be chosen.

• The Banker's Algorithm gets its name because it is a method that bankers could use to assure that
when they lend out resources they will still be able to satisfy all their clients. (A banker won't loan
out a little money to start building a house unless they are assured that they will later be able to loan
out the rest of the money to finish the house.)
• When a process starts up, it must state in advance the maximum allocation of resources it may
request, up to the amount available on the system.

• When a request is made, the scheduler determines whether granting the request would leave the
system in a safe state. If not, then the process must wait until the request can be granted safely.

• The banker's algorithm relies on several key data structures: (where n is the number of processes
and m is the number of resource categories)

o Available[ m ] indicates how many resources are currently available of each type.

o Max[ n ][ m ] indicates the maximum demand of each process of each resource.

o Allocation[ n ][ m ] indicates the number of each resource category allocated to each
process.

o Need[ n ][ m ] indicates the remaining resources needed of each type for each process.
( Note that Need[ i ][ j ] = Max[ i ][ j ] - Allocation[ i ][ j ] for all i, j. )

• With these data structures in place we have a way of determining whether a particular state is safe,
and we are ready to look at the Banker's algorithm itself.

• This algorithm determines if a new request is safe, and grants it only if it is safe to do so.

• When a request is made (that does not exceed currently available resources), pretend it has been
granted, and then see if the resulting state is a safe one. If so, grant the request, and if not, deny the
request, as follows:

o Let Request[ n ][ m ] indicate the number of resources of each type currently requested by
processes. If Request[ i ] > Need[ i ] for any process i, raise an error condition.

o If Request[ i ] > Available for any process i, then that process must wait for resources to
become available. Otherwise, continue to the next step.

o Check to see if the request can be granted safely, by pretending it has been granted and then
seeing if the resulting state is safe. If so, grant the request, and if not, then the process must
wait until its request can be granted safely. The procedure for granting a request ( or
pretending to for testing purposes ) is:

▪ Available = Available – Request [ i ]

▪ Allocation [ i ] = Allocation [ i ] + Request [ i ]

▪ Need [ i ] = Need [ i ] - Request [ i ]
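The request-handling steps above, together with the safety check they rely on, can be sketched in Python; `request_resources` and `is_safe` are illustrative names, and the example state is the classic textbook one:

```python
def is_safe(available, allocation, need):
    """Safety check: can every process still finish in some order?"""
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(need[i][j] <= work[j] for j in range(len(work))):
                for j in range(len(work)):      # Pi finishes, returns its resources
                    work[j] += allocation[i][j]
                finished[i] = True
                progress = True
    return all(finished)

def request_resources(i, request, available, allocation, need):
    """Banker's algorithm: grant process i's request only if the result is safe.

    Mutates the state and returns True on success; returns False (state
    untouched) when the process must wait.
    """
    m = len(available)
    if any(request[j] > need[i][j] for j in range(m)):
        raise ValueError("process exceeded its declared maximum")
    if any(request[j] > available[j] for j in range(m)):
        return False                            # must wait for resources
    for j in range(m):                          # pretend to grant the request
        available[j] -= request[j]
        allocation[i][j] += request[j]
        need[i][j] -= request[j]
    if is_safe(available, allocation, need):
        return True
    for j in range(m):                          # unsafe: roll the grant back
        available[j] += request[j]
        allocation[i][j] -= request[j]
        need[i][j] += request[j]
    return False

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(request_resources(1, [1, 0, 2], available, allocation, need))  # True  (safe)
print(request_resources(0, [0, 2, 0], available, allocation, need))  # False (unsafe)
```

The second request is denied even though enough resources are physically available, because granting it would leave the system with no safe sequence.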

3.5 Deadlock Ignorance:


• Also known as the Ostrich Algorithm.
• Deadlock ignorance is the most widely used approach among all the mechanisms.
• In this approach, the operating system assumes that deadlock never occurs and simply ignores it.
This approach is best suited for single-user systems where the machine is used only for browsing
and other everyday tasks.
• Running a deadlock-handling mechanism all the time hurts performance; if a deadlock happens,
say, 1 time out of 100, it is completely unnecessary to pay that cost continuously.
• In these types of systems, the user simply restarts the computer in the case of deadlock. Windows
and Linux mainly use this approach.

3.6 Deadlock Detection:

• If deadlocks are not avoided, then another approach is to detect when they have occurred and
recover somehow.

• When should the deadlock detection be done? Frequently, or infrequently? - The answer may
depend on how frequently deadlocks are expected to occur, as well as the possible consequences of
not catching them immediately.

• There are two obvious approaches, each with trade-offs:

1. Do deadlock detection after every resource allocation which cannot be immediately granted.
This has the advantage of detecting the deadlock right away, while the minimum number of
processes are involved in the deadlock. The down side of this approach is the extensive
overhead and performance hit caused by checking for deadlocks so frequently.

2. Do deadlock detection only when there is some clue that a deadlock may have occurred,
such as when CPU utilization drops below some threshold, say 40%. The advantage is that
deadlock detection runs much less frequently, but the downside is that it becomes
impossible to tell which processes caused the original deadlock, so deadlock recovery can
be more complicated and damaging to more processes.

3.7 Deadlock Recovery:

There are two basic approaches to recovery from deadlock.

3.7.1 Process Termination

Two basic approaches, both of which recover resources allocated to terminated processes:
• Terminate all processes involved in the deadlock. This definitely solves the deadlock, but at the
expense of terminating more processes than would be absolutely necessary.
• Terminate processes one by one until the deadlock is broken. This is more conservative, but requires
doing deadlock detection after each step.

In the latter case there are many factors that can go into deciding which processes to terminate next:

• Process priorities.
• How long the process has been running, and how close it is to finishing.
• How many and what type of resources the process is holding.
• How many more resources the process needs to complete.
• How many processes would need to be terminated.
• Whether the process is interactive or batch.

3.7.2 Resource Preemption

When preempting resources to relieve deadlock, there are three important issues to be addressed:

1. Selecting a victim - Deciding which resources to preempt from which processes involves
many of the same decision criteria outlined above.

2. Rollback - Ideally one would like to roll back a preempted process to a safe state prior to the
point at which that resource was originally allocated to the process. Unfortunately it can be
difficult or impossible to determine what such a safe state is, and so the only safe rollback is
to roll back all the way back to the beginning.

3. Starvation - How do you guarantee that a process won't starve because its resources are
constantly being preempted? One option would be to use a priority system, and increase the
priority of a process every time its resources get preempted. Eventually it should get a high
enough priority that it won't get preempted any more.

3.8 Types of Devices:

• Block devices
o provide the main interface to all disk devices in a system.
o include all devices that allow random access to completely independent, fixed-size blocks of
data, such as hard disks, floppy disks, CD-ROMs, and flash memory.
o are typically used to store file systems, but direct access to a block device is also allowed so
that programs can create and repair the file system that the device contains.
• Character devices
o include most other devices, such as mice and keyboards.
o The fundamental difference between block and character devices is random access—block
devices may be accessed randomly, while character devices are only accessed serially.
• Network devices
o Users cannot directly transfer data to network devices
o they must communicate indirectly by opening a connection to the kernel's networking
subsystem.

3.9 Device Drivers

Device drivers are the software through which the kernel of a computer communicates with different
hardware, without having to go into the details of how the hardware works. A driver controls a hardware
part attached to the computer and allows the computer to use it by providing a suitable interface, so the
operating system need not concern itself with the internals of the device.

Device Driver Types -

There are device drivers for almost every device associated with a computer, from the BIOS to even virtual
machines and more. Device drivers can be broadly classified into two categories:

• Kernel Device Drivers are the generic device drivers that load with the operating system into
memory as part of the operating system; not the entire driver is loaded, but a pointer to it, so that
the driver can be invoked as soon as it is required. Drivers pertaining to the BIOS, motherboard,
processor, and similar hardware form part of the kernel software.

o A problem with Kernel Device Drivers is that when one of them is invoked, it is loaded into
the RAM and cannot be moved to page file (virtual memory). Thus, a number of device
drivers running at the same time can slow down machines. That is why there is a minimum
system requirement for each operating system.

• User Mode Device Drivers are the ones usually triggered by users during their session on a
computer, typically for devices the user attaches beyond the kernel-managed hardware. Drivers
for most Plug and Play devices fall into this category. User-mode drivers can be paged out to disk
so that they do not tie up system resources; however, drivers for latency-sensitive devices such as
gaming hardware are usually kept in main memory (RAM).

3.10 Disk Scheduling Algorithms

Disk scheduling is done by operating systems to schedule I/O requests arriving for the disk. Disk
scheduling is also known as I/O scheduling.

Disk scheduling is important because:


• Multiple I/O requests may arrive by different processes and only one I/O request can be served at a
time by the disk controller. Thus other I/O requests need to wait in the waiting queue and need to be
scheduled.
• Two or more requests may be far from each other, resulting in greater disk arm movement.
• Hard drives are one of the slowest parts of the computer system and thus need to be accessed in an
efficient manner.

Two quantities are used to compare disk scheduling algorithms:

• Seek Time: the time taken to move the disk arm to the track where the data is to be read or written.
A disk scheduling algorithm that gives a lower average seek time is better.
• Rotational Latency: the time taken for the desired sector of the disk to rotate under the read/write
head. A disk scheduling algorithm that gives lower rotational latency is better.

3.10.1 FCFS

FCFS is the simplest of all the Disk Scheduling Algorithms. In FCFS, the requests are addressed in the
order they arrive in the disk queue.

Advantages:

• Every request gets a fair chance

• No indefinite postponement

Disadvantages:

• Does not try to optimize seek time

• May not provide the best possible service

3.10.2 SSTF

In SSTF (Shortest Seek Time First), the request with the shortest seek time from the current head position is
executed first. The seek time of every pending request is calculated in advance, and requests are scheduled
accordingly; as a result, the request nearest the disk arm gets executed first. SSTF is certainly an
improvement over FCFS, as it decreases the average response time and increases the throughput of the system.

Advantages:

• Average Response Time decreases

• Throughput increases

Disadvantages:
• Overhead to calculate seek time in advance

• Can cause Starvation for a request if it has higher seek time as compared to incoming requests

• High variance of response time as SSTF favours only some requests
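The behavior of FCFS and SSTF can be compared by computing total head movement on a sample request queue; the queue, starting cylinder, and function names below are illustrative:

```python
def fcfs_seek(start, requests):
    """Total head movement when requests are serviced in arrival order."""
    total, pos = 0, start
    for cyl in requests:
        total += abs(cyl - pos)
        pos = cyl
    return total

def sstf_seek(start, requests):
    """Total head movement when the nearest pending request is always next."""
    pending = list(requests)
    total, pos = 0, start
    while pending:
        nearest = min(pending, key=lambda cyl: abs(cyl - pos))
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return total

queue = [82, 170, 43, 140, 24, 16, 190]   # pending cylinder numbers, head at 50
print(fcfs_seek(50, queue))  # 642
print(sstf_seek(50, queue))  # 208
```

SSTF's much smaller total illustrates the throughput gain, while its greedy choice of the nearest request is exactly what makes starvation of far-away requests possible.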

3.10.3 Scan

In the Scan algorithm the disk arm moves in a particular direction, servicing the requests in its path; after
reaching the end of the disk, it reverses direction and services the requests on its way back. This algorithm
works like an elevator and hence is also known as the elevator algorithm. As a result, requests in the middle
range of the disk are serviced more often, while requests arriving just behind the disk arm have to wait.

Advantages:

• High throughput

• Low variance of response time

• Average response time

Disadvantages:

• Long waiting time for requests for locations just visited by disk arm

3.10.4 C - Scan

In the Scan algorithm, after reversing direction the disk arm re-scans the path it has just scanned. It may
therefore happen that many requests are waiting at the other end of the disk while zero or few requests are
pending in the just-scanned area.

The C - Scan algorithm avoids these situations: instead of reversing direction, the disk arm jumps to the
other end of the disk and starts servicing requests from there. The disk arm thus moves in a circular
fashion, which is why the algorithm is known as C - Scan (Circular Scan).

Advantages:

• Provides more uniform wait time compared to Scan

3.10.5 Look

It is similar to the Scan disk scheduling algorithm, except that the disk arm, instead of going all the way to
the end of the disk, goes only as far as the last request to be serviced in front of the head, and then reverses
direction from there. This avoids the extra delay caused by unnecessary traversal to the end of the disk.
3.10.6 C - Look

Just as Look is similar to the Scan algorithm, C - Look is similar to the C - Scan disk scheduling algorithm.
In C - Look, the disk arm, instead of going to the end of the disk, goes only as far as the last request to be
serviced in front of the head, and from there jumps to the farthest pending request at the other end. This too
avoids the extra delay caused by unnecessary traversal to the end of the disk.
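The four sweep-based policies can be compared by generating each service order and summing the head movement. This is a sketch with an illustrative 200-cylinder disk and request queue; note that textbooks differ on whether C - Scan's return seek is counted, and this sketch counts it:

```python
def head_movement(start, order):
    """Sum of arm movements servicing cylinders in the given order."""
    total, pos = 0, start
    for cyl in order:
        total += abs(cyl - pos)
        pos = cyl
    return total

def scan_order(start, requests, disk_size=200):
    # Service upward, touch the last cylinder, then sweep back down.
    up   = sorted(r for r in requests if r >= start)
    down = sorted((r for r in requests if r < start), reverse=True)
    return up + [disk_size - 1] + down

def cscan_order(start, requests, disk_size=200):
    # Sweep to the end, jump back to cylinder 0, keep moving upward.
    up  = sorted(r for r in requests if r >= start)
    low = sorted(r for r in requests if r < start)
    return up + [disk_size - 1, 0] + low       # the return seek is counted

def look_order(start, requests):
    # Like Scan, but reverse at the last pending request, not the disk edge.
    up   = sorted(r for r in requests if r >= start)
    down = sorted((r for r in requests if r < start), reverse=True)
    return up + down

def clook_order(start, requests):
    # Like C - Scan, but jump only to the lowest pending request.
    up  = sorted(r for r in requests if r >= start)
    low = sorted(r for r in requests if r < start)
    return up + low

queue = [82, 170, 43, 140, 24, 16, 190]        # head starts at cylinder 50
print(head_movement(50, scan_order(50, queue)))   # 332
print(head_movement(50, cscan_order(50, queue)))  # 391
print(head_movement(50, look_order(50, queue)))   # 314
print(head_movement(50, clook_order(50, queue)))  # 341
```

On this queue Look beats Scan and C - Look beats C - Scan, showing exactly the end-of-disk traversal that the Look variants eliminate.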
