Deadlock in Operating System
Deadlock is a situation where a set of processes are blocked because each process
is holding a resource and waiting for another resource acquired by some other
process.
Consider the example of two trains heading toward each other on a single track:
once they are in front of each other, neither can move. A similar situation occurs
in operating systems when two or more processes hold some resources and wait for
resources held by the other(s). For example, in the diagram below, Process 1 is
holding Resource 1 and waiting for Resource 2, which is held by Process 2, while
Process 2 is waiting for Resource 1.
Deadlock can arise only if the following four conditions hold simultaneously
(Necessary Conditions):
Mutual Exclusion: At least one resource is non-sharable (only one process can
use it at a time).
Hold and Wait: A process is holding at least one resource while waiting to
acquire additional resources.
No Preemption: A resource cannot be taken from a process; the process must
release the resource voluntarily.
Circular Wait: A set of processes are waiting for each other in a circular chain.
There are three common ways of handling deadlock:
1) Deadlock prevention or avoidance: Do not allow the system to enter a
deadlocked state, for example by breaking one of the four conditions above or by
using an avoidance algorithm such as the Banker's algorithm.
2) Deadlock detection and recovery: Let deadlock occur, detect it, and then
recover, for example by preempting resources or terminating a process.
3) Ignore deadlock: If deadlock is very rare, then let it happen and reboot the
system. This is the approach that both Windows and UNIX take.
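The circular wait condition can be checked directly: build a "wait-for" graph in which each blocked process points to the process holding the resource it needs, and look for a cycle. The sketch below uses illustrative process names, not any real OS interface.

```python
# A minimal sketch of deadlock detection via circular wait. The wait-for
# graph maps each blocked process to the process it is waiting on; a cycle
# in this graph means the four deadlock conditions have come together.

def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph given as {process: waited_on}."""
    for start in wait_for:
        seen = set()
        node = start
        while node in wait_for:      # follow the chain of waits
            if node in seen:         # revisited a process: circular wait
                return True
            seen.add(node)
            node = wait_for[node]
    return False

# Process 1 waits for Process 2, which in turn waits for Process 1.
print(has_deadlock({"P1": "P2", "P2": "P1"}))  # True: circular wait
print(has_deadlock({"P1": "P2"}))              # False: P2 is not waiting
```

Real detectors track resources as well as processes, but any deadlock among single-instance resources shows up as exactly this kind of cycle.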
Memory Management
Main memory refers to the physical memory that is internal to the computer. The
word main is used to distinguish it from external mass storage devices such as
disk drives. Main memory is also known as RAM. The computer can change only data
that is in main memory; therefore, every program we execute and every file we
access must first be copied from a storage device into main memory.
All programs are loaded into main memory for execution. Sometimes the complete
program is loaded into memory, but sometimes a certain part or routine of the
program is loaded into main memory only when it is called by the program. This
mechanism is called Dynamic Loading, and it improves performance.
Also, at times one program depends on another program. In such a case, rather
than loading all the dependent programs up front, the system links a dependent
program to the main executing program when it is required. This mechanism is
known as Dynamic Linking.
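Dynamic loading is visible in everyday tools. As one small illustration (not an OS-level mechanism, but the same idea), Python's standard `importlib` brings a module into memory only at the moment it is requested:

```python
# Dynamic loading in miniature: the json module is not part of the running
# program until import_module is called, at which point it is loaded on
# demand and becomes usable.
import importlib
import sys

mod = importlib.import_module("json")  # loaded at the point of the call
print("json" in sys.modules)           # True: the routine is now resident
print(mod.dumps([1, 2]))               # and immediately usable
```

The OS-level analogue is loading an overlay or a shared library routine into a process's address space only when it is first invoked.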
Swapping
A process needs to be in memory for execution. But sometimes there is not enough
main memory to hold all the currently active processes in a timesharing system.
So, excess processes are kept on disk and brought in to run dynamically. Swapping
is the process of bringing each process into main memory, running it for a while,
and then putting it back on the disk.
Memory Protection
In the operating system, the following are four common memory management
techniques.
Single contiguous allocation: The simplest allocation method, used by MS-DOS.
All memory (except some reserved for the OS) is available to a single process.
Partitioned allocation: Memory is divided into different blocks or partitions,
and each process is allocated a partition.
Paged memory management: Memory is divided into fixed-sized units called page
frames; used in a virtual memory environment.
Segmented memory management: Memory is divided into variable-sized segments
corresponding to logical divisions of a program.
Most operating systems (for example, Windows and Linux) use segmentation with
paging: a process is divided into segments, and individual segments have pages.
In Partition Allocation, when there is more than one partition freely available to
accommodate a process’s request, a partition must be selected. To choose a
particular partition, a partition allocation method is needed. A partition allocation
method is considered better if it avoids internal fragmentation.
1. First Fit: Allocate the first sufficient partition found from the top of main
memory.
2. Best Fit: Allocate the smallest sufficient partition among the freely
available partitions.
3. Worst Fit: Allocate the largest sufficient partition among the freely
available partitions in main memory.
4. Next Fit: Similar to first fit, but the search for a sufficient partition
starts from the point of the last allocation.
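The four strategies can be sketched over a simple list of free-partition sizes; each function returns the index of the chosen partition, or None when nothing fits (the sizes below are illustrative).

```python
# First/best/worst/next fit over a list of free-partition sizes.

def first_fit(blocks, size):
    # First block from the top that is large enough.
    return next((i for i, b in enumerate(blocks) if b >= size), None)

def best_fit(blocks, size):
    # Smallest block that is still large enough.
    fits = [(b, i) for i, b in enumerate(blocks) if b >= size]
    return min(fits)[1] if fits else None

def worst_fit(blocks, size):
    # Largest sufficient block.
    fits = [(b, i) for i, b in enumerate(blocks) if b >= size]
    return max(fits)[1] if fits else None

def next_fit(blocks, size, last):
    # Like first fit, but the scan resumes after the last allocation point.
    n = len(blocks)
    for k in range(n):
        i = (last + 1 + k) % n
        if blocks[i] >= size:
            return i
    return None

free = [100, 500, 200, 300, 600]
print(first_fit(free, 212))      # 1: 500 is the first sufficient block
print(best_fit(free, 212))       # 3: 300 is the smallest sufficient block
print(worst_fit(free, 212))      # 4: 600 is the largest block
print(next_fit(free, 212, 3))    # 4: scan resumes after index 3
```

Note how best fit leaves the smallest leftover hole (300 - 212 = 88) while worst fit leaves the largest usable remainder, which is exactly the trade-off the definitions above describe.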
In such a situation, processes are repeatedly loaded into and removed from
memory. As a result, free holes exist that could together satisfy a request, but
they are noncontiguous, i.e. the memory is fragmented into a large number of
small holes. This phenomenon is known as External Fragmentation.
Also, at times the physical memory is broken into fixed-size blocks and memory is
allocated in units of block size. The memory allocated to a process may then be
slightly larger than the requested memory. The difference between the allocated
and the required memory is known as internal fragmentation, i.e. memory that is
internal to a partition but is of no use.
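The internal-fragmentation arithmetic is just rounding a request up to a whole number of blocks (the 4-unit block size below is illustrative):

```python
# Internal fragmentation under fixed-size block allocation: the request is
# rounded up to whole blocks, and the slack inside the last block is wasted.
import math

def internal_fragmentation(request, block_size):
    allocated = math.ceil(request / block_size) * block_size
    return allocated - request

print(internal_fragmentation(13, 4))  # 3: 16 units allocated, 13 requested
print(internal_fragmentation(16, 4))  # 0: the request fills its blocks exactly
```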
Paging
Page Table
A page table is the data structure used by a virtual memory system in a computer
operating system to store the mapping between virtual addresses and physical
addresses.
A virtual address, also known as a logical address, is generated by the CPU,
while a physical address is an address that actually exists in memory.
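The mapping works by splitting a virtual address into a page number and an offset; the page table turns the page number into a frame number and the offset is carried over unchanged. A minimal model (page size and table contents are illustrative):

```python
# Virtual-to-physical translation through a page table.

PAGE_SIZE = 4096                 # 4 KB pages
page_table = {0: 5, 1: 2, 2: 7}  # page number -> frame number

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table[page]     # an unmapped page here would be a page fault
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 2*4096 + 4 = 8196
```

Real hardware walks a multi-level table and caches translations in a TLB, but the page-number/offset split is the same.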
Segmentation
Both paging and segmentation have their advantages and disadvantages, so it is
better to combine the two schemes to improve on each. The combined scheme is
known as segmentation with paging (or paged segmentation). Each segment in this
scheme is divided into pages, and a separate page table is maintained for each
segment. So the logical address is divided into the following 3 parts:
Segment number (S)
Page number (P)
The displacement or offset (D)
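The three-part address (S, P, D) translates in two steps: the segment number selects a per-segment page table, the page number selects a frame, and the offset is added. A sketch with illustrative sizes and table contents:

```python
# Address translation under segmentation with paging: one page table per
# segment, indexed by the (S, P, D) parts of the logical address.

PAGE_SIZE = 1024
segment_tables = {          # segment number -> its own {page -> frame} table
    0: {0: 3, 1: 8},
    1: {0: 5},
}

def translate(segment, page, offset):
    frame = segment_tables[segment][page]  # step 1: S picks the table, P picks the frame
    return frame * PAGE_SIZE + offset      # step 2: D is added on

print(translate(0, 1, 100))  # segment 0, page 1 -> frame 8 -> 8*1024 + 100 = 8292
```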
Input/output subsystems can improve the performance and efficiency of the
computer in two ways, by using memory space in main memory or on the disk: these
two are spooling and buffering.
Spooling –
Spooling stands for Simultaneous Peripheral Operation On-Line. A spool is similar
to a buffer in that it holds the jobs for a device until the device is ready to
accept them. It treats the disk as a huge buffer that can store as many jobs as
needed until the output devices are ready to accept them.
Buffering –
The main memory has an area called a buffer that is used to hold data temporarily
while it is being transmitted either between two devices or between a device and
an application. Buffering is the act of storing data temporarily in the buffer.
It helps in matching the speeds of the data stream between the sender and the
receiver: if the sender transmits faster than the receiver can consume, a buffer
in the receiver's main memory accumulates the bytes received from the sender, and
vice versa.
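This speed-matching role is exactly the classic bounded-buffer (producer-consumer) pattern; Python's standard `queue.Queue` is a thread-safe bounded buffer, used here as a small in-memory model:

```python
# A bounded in-memory buffer absorbing the speed mismatch between a fast
# producer (sender) and a consumer (receiver) draining at its own pace.
import queue
import threading

buf = queue.Queue(maxsize=4)  # the buffer: at most 4 items in flight
received = []

def producer():
    for i in range(8):        # the sender emits 8 items
        buf.put(i)            # blocks when the buffer is full
    buf.put(None)             # end-of-stream marker

def consumer():
    while (item := buf.get()) is not None:
        received.append(item) # the receiver drains items one by one

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()
print(received)  # [0, 1, 2, 3, 4, 5, 6, 7]
```

The `maxsize` bound is what makes this a buffer rather than an unbounded spool: when the buffer fills, the fast side blocks until the slow side catches up.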
The key difference between spooling and buffering is that spooling overlaps the
input/output of one job with the computation of another job, while buffering
overlaps the input/output of one job with the computation of the same job.
Spooling stands for Simultaneous Peripheral Operation On-Line, whereas buffering
is not an acronym.
Spooling is more efficient than buffering, since spooling can overlap the
processing of two jobs at a time.
Buffering uses limited area in main memory while Spooling uses the disk as
a huge buffer.