
Deadlock in Operating System

The document discusses deadlock in operating systems, describing it as a situation where processes are blocked due to holding resources while waiting for others. It outlines necessary conditions for deadlock, methods for handling it, and various memory management techniques including contiguous memory allocation, paging, and segmentation. Additionally, it differentiates between spooling and buffering in input/output subsystems, highlighting their roles in improving performance and efficiency.

Uploaded by

Prakesh Shrestha

Deadlock in Operating System

A process in an operating system uses resources in the following way:
1) Requests the resource
2) Uses the resource
3) Releases the resource

Deadlock is a situation where a set of processes are blocked because each process
is holding a resource and waiting for another resource acquired by some other
process.
Consider two trains approaching each other on the same single track: once they are face to face, neither can move. A similar situation occurs in an operating system when two or more processes hold some resources and wait for resources held by the other(s). For example, Process 1 holds Resource 1 and waits for Resource 2, which has been acquired by Process 2, while Process 2 waits for Resource 1.
Deadlock can arise only if the following four conditions hold simultaneously
(Necessary Conditions):
Mutual Exclusion: At least one resource is non-sharable (only one process can use it at a time).
Hold and Wait: A process holds at least one resource while waiting for additional resources held by other processes.
No Preemption: A resource cannot be taken away from a process; the process must release it voluntarily.
Circular Wait: A set of processes wait for each other in a circular chain.

Methods for handling deadlock

There are three ways to handle deadlock:
1) Deadlock prevention or avoidance: The idea is to stop a deadlock before it can occur. The system checks each resource request before granting it to make sure it cannot lead to deadlock; if there is even a slight chance that granting a request may lead to deadlock in the future, the request is not granted.
Prevention works by negating one of the above-mentioned necessary conditions for deadlock.
Avoidance is forward-looking in nature: a process requesting a resource is allocated the resource only if there is no possibility of a deadlock occurring.

2) Deadlock detection and recovery: Let deadlock occur, detect it, and then use preemption (or process termination) to recover once it has occurred.

3) Ignore deadlock: If deadlock is very rare, let it happen and reboot the system. This is the approach that both Windows and UNIX take.
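Negating the circular wait condition, one common prevention technique, can be sketched by imposing a fixed global order on resource acquisition. The ranking scheme below is an illustrative assumption, not something defined in the text above:

```python
import threading

# Assumption for this sketch: every lock has a global rank, and a process
# may only acquire locks in increasing rank order.  With no opposite
# acquisition orders possible, a circular wait can never form.
locks = {1: threading.Lock(), 2: threading.Lock()}

def acquire_in_order(*ranks):
    """Acquire the requested locks in ascending rank order."""
    acquired = []
    for rank in sorted(ranks):
        locks[rank].acquire()
        acquired.append(rank)
    return acquired

def release_all(ranks):
    for rank in reversed(ranks):
        locks[rank].release()

# Both "processes" want locks 1 and 2, but regardless of how they ask,
# each one acquires rank 1 before rank 2.
order_a = acquire_in_order(2, 1)
release_all(order_a)
order_b = acquire_in_order(1, 2)
release_all(order_b)
print(order_a, order_b)  # both acquired in the order [1, 2]
```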

Memory Management

Main Memory refers to a physical memory that is the internal memory to the
computer. The word main is used to distinguish it from external mass storage
devices such as disk drives. Main memory is also known as RAM. The computer is
able to change only data that is in main memory. Therefore, every program we
execute and every file we access must be copied from a storage device into main
memory.
All programs are loaded into main memory for execution. Sometimes the complete program is loaded into memory, but sometimes a certain part or routine of the program is loaded into main memory only when it is called by the program. This mechanism is called Dynamic Loading, and it enhances performance.

Also, at times one program depends on some other program. In such a case, rather than loading all the dependent programs at once, the system links the dependent programs to the main executing program when they are required. This mechanism is known as Dynamic Linking.

Swapping

A process needs to be in memory for execution, but sometimes there is not enough main memory to hold all the currently active processes in a timesharing system. So, excess processes are kept on disk and brought in to run dynamically. Swapping is the process of bringing each process into main memory, running it for a while, and then putting it back on the disk.

Contiguous Memory Allocation

In contiguous memory allocation each process is contained in a single contiguous block of memory. Memory is divided into several fixed-size partitions, and each partition contains exactly one process. When a partition is free, a process is selected from the input queue and loaded into it. The free blocks of memory are known as holes; the set of holes is searched to determine which hole is best to allocate.

Memory Protection

Memory protection is a mechanism by which we control memory access rights on a computer. Its main aim is to prevent a process from accessing memory that has not been allocated to it. This prevents a bug within one process from affecting other processes or the operating system itself; instead, a segmentation fault or storage violation exception is sent to the offending process, generally terminating it.

Partition Allocation Methods in Memory Management

In the operating system, the following are four common memory management
techniques.
Single contiguous allocation: The simplest allocation method, used by MS-DOS. All memory (except some reserved for the OS) is available to a single process.

Partitioned allocation: Memory is divided into different blocks or partitions, and each process is allocated a partition according to its requirement.

Paged memory management: Memory is divided into fixed-size units called page frames; used in a virtual memory environment.

Segmented memory management: Memory is divided into different segments (a segment is a logical grouping of the process's data or code). In this scheme, allocated memory does not have to be contiguous.

Most operating systems (for example, Windows and Linux) use segmentation with paging: a process is divided into segments, and individual segments have pages.

In Partition Allocation, when there is more than one partition freely available to
accommodate a process’s request, a partition must be selected. To choose a
particular partition, a partition allocation method is needed. A partition allocation
method is considered better if it avoids internal fragmentation.

Below are the various partition allocation schemes:

1. First Fit: Allocate the first partition from the top of main memory that is large enough.

2. Best Fit: Allocate the process to the smallest sufficient partition among the freely available partitions.

3. Worst Fit: Allocate the process to the largest sufficient partition among the freely available partitions in main memory.

4. Next Fit: Similar to first fit, but the search for a sufficient partition starts from the point of the last allocation.

Is Best-Fit really best?

Although best fit minimizes wasted space, it consumes a lot of processor time searching for the block closest to the required size. Best fit may also perform worse than the other algorithms in some cases.
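The four schemes above can be sketched as one function over a list of free-hole sizes; the hole sizes and the index-based return convention are illustrative assumptions:

```python
def allocate(holes, request, strategy, last_index=0):
    """Return the index of the chosen hole, or None if no hole fits.

    holes:      free-hole sizes, ordered from the top of main memory
    request:    size the process needs
    strategy:   "first", "best", "worst", or "next"
    last_index: where the previous allocation ended (used by next fit)
    """
    candidates = [(i, size) for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if strategy == "first":
        return candidates[0][0]                        # first sufficient hole
    if strategy == "best":
        return min(candidates, key=lambda c: c[1])[0]  # smallest sufficient hole
    if strategy == "worst":
        return max(candidates, key=lambda c: c[1])[0]  # largest sufficient hole
    if strategy == "next":
        # first sufficient hole at or after the last allocation point,
        # wrapping back to the top if none is found past it
        after = [i for i, _ in candidates if i >= last_index]
        return after[0] if after else candidates[0][0]
    raise ValueError(strategy)

holes = [100, 500, 200, 300, 600]
print(allocate(holes, 212, "first"))                # 1 (500 is first big enough)
print(allocate(holes, 212, "best"))                 # 3 (300 is smallest fit)
print(allocate(holes, 212, "worst"))                # 4 (600 is largest fit)
print(allocate(holes, 212, "next", last_index=2))   # 3 (first fit after index 2)
```

Note how best fit must scan every candidate to find the minimum, which is the search cost discussed above.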
Fragmentation

Fragmentation occurs in a dynamic memory allocation system when most of the free blocks are too small to satisfy any request. It is generally described as the inability to use the available memory.

As processes are loaded into and removed from memory, free holes appear. Enough total free memory may exist to satisfy a request, but it is non-contiguous: the memory is fragmented into a large number of small holes. This phenomenon is known as External Fragmentation.

Also, at times physical memory is broken into fixed-size blocks and memory is allocated in units of the block size. The memory allocated to a process may then be slightly larger than the requested memory. The difference between the allocated and the requested memory is known as Internal Fragmentation: memory that is internal to a partition but is of no use.
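Internal fragmentation under fixed-size blocks is simple arithmetic; the 4 KB block size and the request sizes below are made-up example values:

```python
import math

BLOCK_SIZE = 4096  # assumed fixed block size in bytes

def internal_fragmentation(request):
    """Bytes wasted when `request` bytes are rounded up to whole blocks."""
    blocks = math.ceil(request / BLOCK_SIZE)
    return blocks * BLOCK_SIZE - request

print(internal_fragmentation(4096))  # 0     (exact fit, nothing wasted)
print(internal_fragmentation(5000))  # 3192  (two blocks = 8192; 8192 - 5000)
print(internal_fragmentation(1))     # 4095  (almost a whole block wasted)
```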

Paging

One solution to the fragmentation problem is Paging. Paging is a memory management mechanism that allows the physical address space of a process to be non-contiguous. Logical memory is divided into blocks of equal size called pages, and physical memory is divided into blocks of the same size called frames. The pages belonging to a process are loaded into whatever memory frames are available.

Page Table

A Page Table is the data structure used by a virtual memory system in a computer operating system to store the mapping between virtual addresses and physical addresses.

A virtual address (also known as a logical address) is generated by the CPU, while a physical address is the address that actually exists in memory.
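Page-table translation can be sketched by splitting the virtual address into a page number and an offset; the 4 KB page size and the page-table contents here are assumptions for illustration:

```python
PAGE_SIZE = 4096   # assumed 4 KB pages, so the offset occupies 12 bits
OFFSET_BITS = 12

# Hypothetical page table: page number -> frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(virtual_addr):
    """Map a virtual address to a physical address via the page table."""
    page = virtual_addr >> OFFSET_BITS        # high bits: page number
    offset = virtual_addr & (PAGE_SIZE - 1)   # low bits: offset within the page
    frame = page_table[page]                  # raises KeyError if the page is unmapped
    return (frame << OFFSET_BITS) | offset

# Virtual address 4100 = page 1, offset 4 -> frame 2, offset 4 = 8196
print(translate(4100))  # 8196
```

The offset passes through unchanged; only the page number is replaced by a frame number, which is what lets frames sit anywhere in physical memory.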

Segmentation

Segmentation is another memory management scheme, one that supports the user view of memory. Segmentation allows breaking the virtual address space of a single process into segments that may be placed in non-contiguous areas of physical memory.
Segmentation with Paging

Both paging and segmentation have their advantages and disadvantages, so it is better to combine the two schemes to improve on each. The combined scheme is known as paged segmentation. Each segment in this scheme is divided into pages, and each segment maintains its own page table. The logical address is thus divided into the following 3 parts:

 Segment number (S)
 Page number (P)
 The displacement or offset number (D)
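The three-part split can be sketched with assumed bit widths; the 4/8/12-bit layout and the per-segment page tables below are illustrative assumptions:

```python
# Assumed logical address layout: | segment (4b) | page (8b) | offset (12b) |
SEG_BITS, PAGE_BITS, OFFSET_BITS = 4, 8, 12

# Hypothetical per-segment page tables: segment -> {page -> frame}
segment_tables = {0: {0: 3, 1: 9}, 1: {0: 4}}

def split(logical_addr):
    """Split a logical address into (segment, page, offset)."""
    offset = logical_addr & ((1 << OFFSET_BITS) - 1)
    page = (logical_addr >> OFFSET_BITS) & ((1 << PAGE_BITS) - 1)
    segment = logical_addr >> (OFFSET_BITS + PAGE_BITS)
    return segment, page, offset

def translate(logical_addr):
    s, p, d = split(logical_addr)
    frame = segment_tables[s][p]   # segment table first, then that segment's page table
    return (frame << OFFSET_BITS) | d

# Address 4103: segment 0, page 1, offset 7
print(split(4103))      # (0, 1, 7)
print(translate(4103))  # frame 9 -> 9 * 4096 + 7 = 36871
```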

Difference between Spooling and Buffering

There are two ways by which input/output subsystems can improve the performance and efficiency of the computer by using memory space in main memory or on the disk: spooling and buffering.

Spooling –
Spooling stands for Simultaneous Peripheral Operation OnLine. A spool is similar to a buffer in that it holds the jobs for a device until the device is ready to accept them. It treats the disk as a huge buffer that can store many jobs for a device until the output device is ready to accept them.

Buffering –
The main memory has an area called a buffer that is used to hold data temporarily while it is being transmitted either between two devices or between a device and an application. Buffering is the act of storing data temporarily in the buffer. It helps match the speeds of the data stream's sender and receiver: when one side is slower than the other, a buffer in the main memory of the receiver accumulates the bytes received from the sender until they can be consumed.
The basic difference between spooling and buffering is that spooling overlaps the input/output of one job with the execution of another job, while buffering overlaps the input/output of one job with the execution of the same job.

Differences between Spooling and Buffering –

 The key difference between spooling and buffering is that spooling can handle the input/output of one job along with the computation of another job at the same time, while buffering handles the input/output of one job along with that same job's computation.
 Spooling stands for Simultaneous Peripheral Operation OnLine, whereas buffering is not an acronym.
 Spooling is more efficient than buffering, as spooling can overlap the processing of two jobs at a time.
 Buffering uses a limited area in main memory, while spooling uses the disk as a huge buffer.
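Buffering within a single job can be sketched with Python's thread-safe queue as the in-memory buffer; the buffer size, the data, and the artificial delay are arbitrary assumptions:

```python
import queue
import threading
import time

buffer = queue.Queue(maxsize=4)   # assumed small buffer in "main memory"
received = []

def producer():
    # Fast sender: puts bytes into the buffer, blocking whenever it is full.
    for byte in b"SPOOL":
        buffer.put(byte)
    buffer.put(None)              # sentinel: end of stream

def consumer():
    # Slow receiver: drains the buffer at its own pace.
    while True:
        item = buffer.get()
        if item is None:
            break
        time.sleep(0.01)          # simulate a slower device
        received.append(item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()

print(bytes(received))  # b'SPOOL' -- all data arrives despite the speed mismatch
```

The buffer absorbs the speed mismatch between the two sides of the same job, which is exactly the role described above; a spool would instead park whole jobs on disk for a later, independent computation.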
