Operating System
UNIT III
Topics
Deadlocks
What is a deadlock?
Every process needs some resources to complete its execution, and resources are granted in a sequential order:
1. The process requests a resource.
2. The OS grants the resource if it is available; otherwise the process waits.
3. The process uses the resource and releases it on completion.
Let us assume that there are three processes P1, P2 and P3, and three different resources R1, R2 and R3. R1 is assigned to P1, R2 is assigned to P2 and R3 is assigned to P3. If P1 now requests R2, P2 requests R3, and P3 requests R1, all three processes wait forever: this circular standstill is a deadlock. A deadlock can arise only if the following four conditions hold simultaneously.
Mutual Exclusion
At least one resource must be non-sharable, i.e., it can be held by only one process at a time. In the diagram below, there is a single instance of Resource 1 and it is held by Process 1 only.
Hold and Wait
A process can hold multiple resources and still request more resources
from other processes which are holding them. In the diagram given below,
Process 2 holds Resource 2 and Resource 3 and is requesting the Resource
1 which is held by Process 1.
No Preemption
A resource cannot be preempted from a process by force. A process
can only release a resource voluntarily. In the diagram below, Process 2
cannot preempt Resource 1 from Process 1. It will only be released
when Process 1 relinquishes it voluntarily after its execution is
complete.
Circular Wait
A process is waiting for the resource held by the second process, which is
waiting for the resource held by the third process and so on, till the last
process is waiting for a resource held by the first process. This forms a circular
chain. For example: Process 1 is allocated Resource2 and it is requesting
Resource 1. Similarly, Process 2 is allocated Resource 1 and it is requesting
Resource 2. This forms a circular wait loop.
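To make the four conditions concrete, here is a minimal Python sketch (not from the source material) in which two threads acquire two locks in opposite order; a timeout is used so the demo reports the circular wait instead of hanging forever.

```python
# Two threads, two locks, opposite acquisition order -> deadlock.
import threading
import time

lock_r1 = threading.Lock()   # Resource 1: mutual exclusion
lock_r2 = threading.Lock()   # Resource 2: mutual exclusion

def worker(first, second, name):
    with first:                          # hold the first resource ...
        time.sleep(0.1)                  # give the other thread time to grab its lock
        # ... and wait: request the second resource while holding the first.
        if second.acquire(timeout=1.0):  # no preemption: we can only wait or give up
            second.release()
            print(f"{name} finished")
        else:
            print(f"{name} is deadlocked (circular wait)")

# P1 wants R1 then R2; P2 wants R2 then R1 -> circular wait.
t1 = threading.Thread(target=worker, args=(lock_r1, lock_r2, "P1"))
t2 = threading.Thread(target=worker, args=(lock_r2, lock_r1, "P2"))
t1.start(); t2.start(); t1.join(); t2.join()
```

With the sleep in place, both threads almost always report a deadlock; remove the opposite ordering and both finish normally.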
Handling Deadlocks
There are three approaches to deal with deadlocks.
1. Deadlock Prevention
2. Deadlock Avoidance
3. Deadlock Detection
Deadlock Prevention
The strategy of deadlock prevention is to design the system in such a way that the possibility of
deadlock is excluded. Indirect method prevent the occurrence of one of three necessary condition of
deadlock i.e., mutual exclusion, no pre-emption and hold and wait. Direct method prevent the
occurrence of circular wait.
Prevention techniques –
Hold and Wait – this condition can be prevented by requiring that a process request all of its required resources at one time, blocking the process until all of the requests can be granted simultaneously (a sketch of this all-or-nothing rule follows this list). This simple rule alone does not yield good results, so two variants are used:
• If a process that is holding some resources requests another resource that cannot be immediately allocated to it, all resources it currently holds are released and, if necessary, requested again together with the additional resource.
• If a process requests a resource that is currently held by another process, the OS may preempt the second process and require it to release its resources. This works only if the two processes do not have the same priority.
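As an illustration of the all-or-nothing rule above, here is a hedged Python sketch (names and structure are illustrative, not from the source): a process either acquires every lock it needs or releases whatever it managed to grab and retries, so it never holds one resource while waiting for another.

```python
# Hold-and-wait prevention: acquire all resources atomically or none at all.
import threading
import time

def acquire_all(locks):
    """Acquire every lock or none (all-or-nothing), preventing hold-and-wait."""
    while True:
        acquired = []
        for lock in locks:
            if lock.acquire(blocking=False):   # try without waiting
                acquired.append(lock)
            else:
                for held in acquired:          # back out: release everything held
                    held.release()
                break
        else:
            return                             # got them all
        time.sleep(0.01)                       # brief pause before retrying

r1, r2 = threading.Lock(), threading.Lock()
acquire_all([r1, r2])
print("got both resources together")
r1.release(); r2.release()
```

The cost, as the text notes, is that resources may sit reserved long before they are used.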
Circular Wait – one way to ensure that this condition never holds is to impose a total ordering of all resource types and to require that each process requests resources in increasing order of enumeration, i.e., if a process has been allocated resources of type R, it may subsequently request only resources of types that follow R in the ordering (see the sketch below).
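A minimal sketch of this ordering rule in Python, assuming a made-up global enumeration of resource types: locks are always acquired in ascending rank, so the opposite-order pattern behind circular wait cannot arise.

```python
# Circular-wait prevention via a total ordering of resource types.
import threading

RANK = {"R1": 1, "R2": 2, "R3": 3}          # global enumeration of resource types
locks = {name: threading.Lock() for name in RANK}

def acquire_in_order(names):
    """Acquire the named resources in increasing rank order."""
    for name in sorted(names, key=RANK.get):
        locks[name].acquire()

def release_all(names):
    for name in names:
        locks[name].release()

# Even if a process "thinks of" R3 before R1, it still locks R1 first,
# so the opposite-order pattern that caused the earlier deadlock is impossible.
acquire_in_order(["R3", "R1"])
release_all(["R1", "R3"])
print("acquired and released in rank order")
```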
Deadlock Avoidance
This approach allows the first three necessary conditions of deadlock but makes judicious choices to ensure that the deadlock point is never reached. It allows more concurrency than deadlock prevention. A decision is made dynamically whether the current resource-allocation request will, if granted, potentially lead to deadlock; this requires knowledge of future process requests. Two techniques are used to avoid deadlock: refusing to start a process whose demands might lead to deadlock, and refusing an individual resource request that would leave the system in an unsafe state (the classic example is the Banker's algorithm, sketched below).
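As a sketch of the resource-allocation-denial idea, here is the classic Banker's safety check in Python; the process counts and resource numbers below are invented purely for illustration.

```python
# Banker's algorithm safety check: a state is safe if some ordering of the
# processes lets every one of them acquire its remaining need and finish.
def is_safe(available, alloc, need):
    work = available[:]                      # resources free right now
    finish = [False] * len(alloc)
    while True:
        progressed = False
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # process i can run to completion and return its allocation
                work = [w + a for w, a in zip(work, alloc[i])]
                finish[i] = True
                progressed = True
        if not progressed:
            return all(finish)               # safe iff everyone could finish

# Example state: 3 processes, 2 resource types (illustrative numbers).
available = [3, 2]
alloc = [[1, 0], [2, 1], [0, 1]]
need  = [[2, 2], [1, 0], [1, 1]]             # max demand minus current allocation
print(is_safe(available, alloc, need))       # True -> this state is safe
```

A request is granted only if the state that would result still passes this check; otherwise the requesting process waits.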
Disadvantages :
• This technique does not limit resource access or restrict process actions.
Deadlock Recovery
A traditional operating system such as Windows doesn't deal with deadlock recovery, as it is a time- and space-consuming process. Real-time operating systems do use deadlock recovery.
When a deadlock detection algorithm determines that a deadlock has occurred in the system, the system must recover from that deadlock. There are two approaches to breaking a deadlock:
1. Process Termination:
To eliminate the deadlock, we can simply kill one or more processes. For this, we use two
methods:
• (a). Abort all the Deadlocked Processes:
Aborting all the processes will certainly break the deadlock, but at great expense. The deadlocked processes may have computed for a long time, and the results of those partial computations must be discarded and will probably have to be recomputed later.
• (b). Abort one process at a time until the deadlock is eliminated:
Abort one deadlocked process at a time until the deadlock cycle is eliminated from the system. This method may incur considerable overhead, because after aborting each process we must run the deadlock detection algorithm again to check whether any processes are still deadlocked.
2. Resource Preemption:
To eliminate deadlocks using resource preemption, we preempt some resources from processes and give those resources to other processes. This method raises three issues –
(a). Selecting a victim:
We must determine which resources and which processes are to be preempted, choosing the victim that minimizes the cost of preemption.
(b). Rollback:
We must determine what should be done with the process from which resources are
preempted. One simple idea is total rollback. That means abort the process and restart it.
(c). Starvation:
In a system, it may happen that the same process is always picked as a victim. As a result, that process will never complete its designated task. This situation is called starvation and must be avoided. One solution is to ensure that a process can be picked as a victim only a finite number of times.
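For context, deadlock detection is commonly described as finding a cycle in a wait-for graph: an edge P → Q means P is waiting for a resource held by Q, and the recovery methods above then pick a victim from the cycle. Below is a minimal Python sketch with made-up graph data.

```python
# Deadlock detection: find a cycle in a wait-for graph by depth-first search.
def find_cycle(wait_for):
    """Return a list of processes forming a cycle, or None."""
    WHITE, GREY, BLACK = 0, 1, 2     # unvisited / on current path / finished
    color = {p: WHITE for p in wait_for}
    stack = []

    def dfs(p):
        color[p] = GREY
        stack.append(p)
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GREY:        # back edge -> cycle found
                return stack[stack.index(q):]
            if color.get(q, WHITE) == WHITE:
                found = dfs(q)
                if found:
                    return found
        color[p] = BLACK
        stack.pop()
        return None

    for p in wait_for:
        if color[p] == WHITE:
            cycle = dfs(p)
            if cycle:
                return cycle
    return None

# P1 waits for P2, P2 waits for P3, P3 waits for P1 -> deadlock.
print(find_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # ['P1', 'P2', 'P3']
```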
Device Management
[Diagram: the operating system's major components – memory manager, file system manager, process manager, device manager, network manager, and security manager.]
Device Management Technique
An operating system manages communication with devices through their respective drivers. This OS component provides a uniform interface for accessing devices of varied physical attributes. Device management in an operating system involves:
• Keeping track of all devices; the program responsible for this is called the I/O controller.
• Monitoring the status of each device, such as storage drives, printers, and other peripheral devices.
• Enforcing preset policies and deciding which process gets a device, when, and for how long.
• Allocating and de-allocating devices efficiently. De-allocation happens at two levels: at the process level, when the I/O command has been executed and the device is temporarily released, and at the job level, when the job is finished and the device is permanently released.
• Optimizing the performance of individual devices.
Types of Devices
Peripheral devices can be categorized into three types:
• Dedicated devices
• Shared devices
• Virtual devices
The differences among them are a function of the characteristics of the devices as well as how they are managed by the Device Manager.
Dedicated Devices
Such devices are dedicated, i.e., assigned to only one job at a time, until that job releases them. Devices like printers, tape drives, and plotters demand such an allocation scheme, since it would be awkward if several users shared them at the same time. The disadvantage of such devices is the inefficiency that results from allocating the device to a single user for the entire duration of job execution, even though the device is not in use 100% of the time.
Shared Devices
These devices can be allocated to several processes. A disk (DASD) can be shared among several processes at the same time by interleaving their requests. The interleaving is carefully controlled by the Device Manager, and all issues must be resolved on the basis of predetermined policies.
Virtual Devices
These devices are a combination of the first two types: they are dedicated devices transformed into shared devices. For example, a printer can be converted into a shareable device via a spooling program that re-routes all print requests to a disk. A print job is not sent straight to the printer; instead, it goes to the disk (the spool) until it is fully prepared with all the necessary sequencing and formatting, and then it goes to the printer. This technique can transform one printer into several virtual printers, which leads to better performance and utilization (a toy sketch follows).
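A toy Python sketch of the spooling idea (the queue name and job strings are invented): jobs accumulate in a spool queue, and a single daemon drains them to the one real printer, so each writer effectively sees its own virtual printer.

```python
# Spooling: many writers submit jobs; one daemon feeds the dedicated device.
import queue
import threading

spool = queue.Queue()

def printer_daemon():
    while True:
        job = spool.get()
        if job is None:                 # sentinel: shut the daemon down
            break
        print(f"printing: {job}")       # the one real, dedicated device

daemon = threading.Thread(target=printer_daemon)
daemon.start()

# Several "users" submit jobs; none of them waits for the printer itself.
for user in ("alice", "bob", "carol"):
    spool.put(f"{user}'s report")

spool.put(None)                         # no more jobs
daemon.join()
```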
Input/Output Devices
Input/output devices are the devices responsible for the input/output operations in a computer system. They fall into two categories:
• Block devices
• Character devices
Block Devices
A block device stores information in fixed-size blocks, each with its own address, so the device is addressable and supports seek operations; disks are the common example.
Character Devices
A character device sends or accepts a stream of characters with no block structure; it is not addressable and has no seek operation. Printers, network interfaces, and keyboards are examples.
In the storage hierarchy, the storage systems above the electronic disk are volatile, whereas those below it are non-volatile. An electronic disk can be designed to be either volatile or non-volatile. During normal operation, the electronic disk stores data in a large DRAM array, which is volatile. But many electronic disk devices contain a hidden magnetic hard disk and a battery for backup power: if external power is interrupted, the electronic disk controller copies the data from RAM to the magnetic disk, and when external power is restored, the controller copies the data back into RAM.
Buffering
• Buffering is done to deal effectively with a speed mismatch between the producer and the consumer of a data stream.
• A buffer is created in main memory to accumulate the bytes received from, say, a modem.
• After the data is received in the buffer, it is transferred from the buffer to disk in a single operation.
• This transfer is not instantaneous, so the modem needs another buffer in which to store additional incoming data.
• When the first buffer is filled, a request is made to transfer its data to disk.
• The modem then starts filling the second buffer with the additional incoming data while the data in the first buffer is being transferred to disk.
• When the second buffer fills, the modem switches back to the first buffer while the data from the second buffer is transferred to the disk.
• The use of two buffers decouples the producer and the consumer of the data, thus relaxing the timing requirements between them.
• Buffering also accommodates devices that have different data-transfer sizes.
Types of various I/O buffering techniques :
1. Single buffer :
A single buffer is provided by the operating system in the system portion of main memory.
Block oriented –
• After taking the input, the block is transferred to the user space by the process, and the process then requests another block.
• Two blocks work simultaneously: while one block of data is processed by the user process, the next block is being read in.
Stream oriented –
• Line-at-a-time operation is used for scroll-mode terminals; the user inputs one line at a time, with a carriage return signaling the end of the line.
• Byte-at-a-time operation is used on forms-mode terminals, where each keystroke is significant.
2. Double buffer :
Block oriented –
• A process can transfer data to (or from) one buffer while the operating system empties (or fills) the other.
• Double buffering is also known as buffer swapping.
• A major disadvantage of double buffering is that the complexity of the process is increased.
• If the process performs rapid bursts of I/O, double buffering may be insufficient.
Stream oriented –
• With line-at-a-time I/O, the user process need not be suspended for input or output unless the process runs ahead of the double buffer.
3. Circular buffer :
• When more than two buffers are used, the collection of buffers is itself referred to as a circular buffer.
• Here the data is not passed directly from the producer to the consumer, because the data could change if buffers were overwritten before they had been consumed.
• The producer can only fill up to buffer i−1 while data in buffer i is waiting to be consumed (see the sketch after this list).
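A minimal Python sketch of a circular buffer implementing the fill-up-to-i−1 rule from the last bullet; the buffer size is illustrative.

```python
# Circular buffer: the producer never overwrites an unconsumed slot, because
# it stops when only one free slot remains (fills at most up to buffer i-1).
class CircularBuffer:
    def __init__(self, n):
        self.slots = [None] * n
        self.head = 0            # next slot the consumer reads
        self.tail = 0            # next slot the producer fills
        self.n = n

    def put(self, item):
        if (self.tail + 1) % self.n == self.head:
            raise BufferError("full: producer may only fill up to buffer i-1")
        self.slots[self.tail] = item
        self.tail = (self.tail + 1) % self.n

    def get(self):
        if self.head == self.tail:
            raise BufferError("empty")
        item = self.slots[self.head]
        self.head = (self.head + 1) % self.n
        return item

buf = CircularBuffer(4)          # 4 slots, so at most 3 blocks in flight at once
for block in ("b0", "b1", "b2"):
    buf.put(block)
print(buf.get(), buf.get(), buf.get())   # b0 b1 b2, in arrival order
```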
Secondary storage structure
Secondary storage devices are those devices whose memory is non-volatile, meaning the stored data remains intact even if the system is turned off. Here are a few things worth noting about secondary storage.
Tracks at the same distance from the centre form a cylinder. A read-write head is used to read data from a sector of the magnetic disk.
• Transfer rate: the rate at which data moves from the disk to the computer.
• Random access time: the sum of the seek time and the rotational latency. Seek time is the time taken by the arm to move to the required track. Rotational latency is the time taken for the required sector in the track to rotate under the head (a small worked example follows this list).
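A small worked example of random access time = seek time + rotational latency; the 7200 RPM spindle speed and 9 ms average seek time are illustrative figures, not taken from the text.

```python
# Average rotational latency is half a revolution; add the average seek time.
rpm = 7200
avg_seek_ms = 9.0

ms_per_revolution = 60_000 / rpm                 # ~8.33 ms per full turn
avg_rotational_latency_ms = ms_per_revolution / 2

access_ms = avg_seek_ms + avg_rotational_latency_ms
print(f"average random access time: {access_ms:.2f} ms")   # 13.17 ms
```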
Even though the disk is physically arranged as sectors and tracks, the data is logically arranged and addressed as an array of fixed-size blocks. The size of a block can be 512 or 1024 bytes. Each logical block is mapped to a sector on the disk, sequentially; in this way, each sector on the disk has a logical address (a sketch of this mapping follows).
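A hedged sketch of this sequential mapping, using the classic cylinder-head-sector-to-logical-block formula; the geometry constants are invented for illustration.

```python
# CHS -> logical block address: number the sectors track by track through a
# cylinder, then cylinder by cylinder across the disk.
HEADS = 4                 # tracks (surfaces) per cylinder
SECTORS_PER_TRACK = 32    # sectors per track

def chs_to_lba(cylinder, head, sector):
    """Sectors are traditionally numbered from 1 within a track."""
    return (cylinder * HEADS + head) * SECTORS_PER_TRACK + (sector - 1)

print(chs_to_lba(0, 0, 1))   # first sector of the disk -> logical block 0
print(chs_to_lba(2, 1, 5))   # (2*4 + 1)*32 + 4 = 292
```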
First Come First Serve (FCFS)
Assume the head is initially at cylinder 56. The head serves the requests in the order given in the queue, i.e., 56→98→183→…→67.
Shortest Seek Time First (SSTF)
Here the request closest to the current head position is chosen first. Consider the previous example, with the same disk queue.
SCAN
This algorithm is also called the elevator algorithm because of its behavior. First the head moves in one direction (say, backward) and serves all the requests in its path; then it moves in the opposite direction and serves the remaining requests. This behavior is similar to that of an elevator. Let's take the previous example (a sketch comparing SSTF and SCAN follows).
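A minimal Python sketch comparing total head movement under SSTF and the elevator algorithm. Since the full request queue is not reproduced in the text, the queue and starting position below are illustrative.

```python
# Disk scheduling: total head movement under SSTF and SCAN (LOOK variant,
# which reverses at the last request, as the elevator description suggests).
def sstf(head, requests):
    """Serve the closest pending request first; return total head movement."""
    pending, moved = list(requests), 0
    while pending:
        nxt = min(pending, key=lambda c: abs(c - head))
        moved += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return moved

def scan(head, requests):
    """Sweep backward (toward cylinder 0) serving requests, then reverse."""
    lower = sorted(c for c in requests if c <= head)[::-1]
    upper = sorted(c for c in requests if c > head)
    moved, pos = 0, head
    for c in lower + upper:
        moved += abs(c - pos)
        pos = c
    return moved

queue, head = [98, 183, 37, 122, 14, 124, 65, 67], 56   # illustrative data
print("SSTF total movement:", sstf(head, queue))
print("SCAN total movement:", scan(head, queue))
```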
Disk Formatting
Low-level formatting fills the disk with a special data structure for each sector. The data structure for a sector consists of a header, a data area, and a trailer. The header and trailer contain information used by the disk controller, such as a sector number and an error-correcting code (ECC).
To use a disk to hold files, the operating system still needs to record its own data structures on the disk.
It does so in two steps. The first step is to partition the disk into one or more groups of cylinders. The
operating system can treat each partition as though it were a separate disk. For instance, one partition
can hold a copy of the operating system’s executable code, while another holds user files. After
partitioning, the second step is logical formatting (or creation of a file system). In this step, the operating
system stores the initial file-system data structures onto the disk.
Boot block
When a computer is powered up or rebooted, it needs to have an initial program to run. This initial
program is called the bootstrap program. It initializes all aspects of the system (i.e. from CPU registers to
device controllers and the contents of main memory) and then starts the operating system.
To do its job, the bootstrap program finds the operating system kernel on disk, loads that kernel into
memory, and jumps to an initial address to begin the operating-system execution.
For most computers, the bootstrap is stored in read-only memory (ROM). This location is convenient
because ROM needs no initialization and is at a fixed location that the processor can start executing when
powered up or reset. And since ROM is read-only, it cannot be infected by a computer virus. The problem
is that changing this bootstrap code requires changing the ROM hardware chips.
For this reason, most systems store a tiny bootstrap loader program in the boot ROM whose only job is to bring in a full bootstrap program from disk. The full bootstrap program can be changed easily: a new version is simply written onto the disk. The full bootstrap program is stored in a partition, at a fixed location on the disk, called the boot blocks. A disk that has a boot partition is called a boot disk or system disk.
Bad Blocks
Since disks have moving parts and small tolerances, they are prone to failure. Sometimes the
failure is complete, and the disk needs to be replaced, and its contents restored from backup
media to the new disk.
More frequently, one or more sectors become defective. Most disks even come from the
factory with bad blocks. Depending on the disk and controller in use, these blocks are handled
in a variety of ways.
The controller maintains a list of bad blocks on the disk. The list is initialized during the low-
level format at the factory and is updated over the life of the disk. The controller can be told to
replace each bad sector logically with one of the spare sectors. This scheme is known as sector
sparing or forwarding.
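A toy sketch of sector sparing in Python: the controller's bad-block list becomes a remap table consulted on every access. The sector numbers are invented.

```python
# Sector sparing (forwarding): bad logical blocks are redirected to spares.
SPARE_START = 10_000                 # first spare sector (made-up geometry)
remap = {}                           # bad block -> spare sector
next_spare = SPARE_START

def mark_bad(block):
    """Forward a bad block to the next free spare sector."""
    global next_spare
    remap[block] = next_spare
    next_spare += 1

def translate(block):
    """Every read/write goes through the remap table first."""
    return remap.get(block, block)

mark_bad(87)                         # e.g. sector 87 failed its ECC check
print(translate(87))                 # -> 10000 (the spare that replaced it)
print(translate(88))                 # -> 88    (healthy sector, unchanged)
```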
Swap-space management
Swap-Space :
The area on the disk where the swapped out processes are stored is called swap space.
Swap-Space Management :
Swap-space management is another low-level task of the operating system. Disk space is used by virtual memory as an extension of main memory. Since disk access is much slower than memory access, using swap space significantly decreases system performance. The goal of swap-space implementation is therefore to provide the best possible throughput for the virtual memory system.
Swap-Space Use :
Swap space is used by different operating systems in various ways. Systems that implement swapping may use swap space to hold an entire process image, including its code and data segments. Paging systems may simply store pages that have been pushed out of main memory. The amount of swap space needed on a system can vary from a few megabytes to gigabytes, depending on the amount of physical memory, the amount of virtual memory it is backing, and the way in which the virtual memory is used.
An alternative is to create the swap space in a separate raw partition. No file system is present in this space; rather, a swap-space storage manager is used to allocate and de-allocate blocks from the raw partition. This manager uses algorithms optimized for speed rather than storage efficiency, because swap space must be accessed as quickly as possible. Internal fragmentation increases, but this is acceptable because the life span of data in swap space is much shorter than that of files in the file system. The raw-partition approach creates a fixed amount of swap space when the disk is partitioned.
Some operating systems, such as Linux, are flexible and can swap both in raw partitions and in file-system space.
Disk Reliability
Reliability is the ability of the disk system to accommodate a single- or multi-disk failure and still
remain available to the users.
Adding redundancy almost always increases the reliability of the disk system. The most
common way to add redundancy is to implement a Redundant Array of Inexpensive Disks
(RAID).
• Hardware — The most commonly used hardware RAID levels are: RAID 0, RAID 1, RAID 5,
and RAID 10. The main differences between these RAID levels focus on reliability and
performance as previously defined.
• Software — Software RAID can be less expensive. However, it is almost always much slower
than hardware RAID, because it places a burden on the main system CPU to manage the
extra disk I/O.
Some practical notes on the hardware RAID types:
Be sure that your disks are able to be hot swapped so repairs can be made without bringing
down the system. Remember that there is a performance penalty during the resynchronization
period of the disks.
On a read, the disk that has its read/write heads positioned closer to the data will retrieve
information. This data retrieval technique is known as an optimistic read. An optimistic read can
provide a maximum of 15 percent improvement in performance over a conventional disk. When
setting up mirrors, it is important to consider which physical disks are being used for primary and
parity information, and to balance the I/O across physical disks rather than logical disks.
CAUTION: Progress Software Corporation recommends not using RAID 5 for database systems.
It is possible to have both high reliability and high performance. However, the cost of a system that delivers both of these characteristics is higher than that of a system that delivers only one of the two.
Thank you
P.S.
**For further reference and more in-depth knowledge of the topics covered, you can follow up on the following links, from which this material has been taken:
• Unit-4: Device management – B.C.A study (bcastudyguide.com)
• Operating Systems: Device Management (slideshare.net)
• OS Deadlocks Introduction – javatpoint
• Introduction of Deadlock in Operating System - GeeksforGeeks
**This presentation has been created for the sole purpose of education as directed
by our teacher. It was a team effort with all the members having contributed.
From Roll No _ 41 to Roll No_60