
Lesson 3: Process Management

IT 311: Applied Operating System


Lesson 3: Process Management

 Process Description and Control
 Threads
 Deadlock
 Starvation

IT 311: Applied Operating System


Objectives
 Define the term process and explain the relationship
between processes and process control blocks.
 Explain the concept of a process state and discuss the state
transitions the processes undergo.
 List and describe the purpose of the data structures and data
structure elements used by an OS to manage processes.
 Assess the requirements for process control by the OS.
 Understand the issues involved in the execution of OS code.
 Understand the distinction between process and thread.
 Describe the basic design issues for threads.

IT 311: Applied Operating System


Objectives
 Explain the difference between user-level threads and kernel-
level threads.
 List and explain the conditions for deadlock.
 Define deadlock prevention and describe deadlock prevention
strategies related to each of the conditions for deadlock.
 Explain the difference between deadlock prevention and
deadlock avoidance.
 Understand two approaches to deadlock avoidance.
 Explain the fundamental difference in approach between
deadlock detection and deadlock prevention or avoidance.
 Understand how an integrated deadlock strategy can be
designed.

IT 311: Applied Operating System


Process Description and Control

WHAT IS A PROCESS?
The most fundamental concept in a modern OS is the
process. The principal function of the OS is to create,
manage, and terminate processes. While processes are
active, the OS must see that each is allocated time for
execution by the processor, coordinate their activities,
manage conflicting demands, and allocate system resources
to processes.

IT 311: Applied Operating System


Process Description and Control

WHAT IS A PROCESS? cont…


1.A computer platform consists of a collection of hardware resources, such as the
processor, main memory, I/O modules, timers, disk drives, and so on.
2.Computer applications are developed to perform some task. Typically, they accept
input from the outside world, perform some processing, and generate output.
3.It is inefficient for applications to be written directly for a given hardware platform.
The principal reasons for this are as follows:
1. Numerous applications can be developed for the same platform. Thus, it
makes sense to develop common routines for accessing the computer’s
resources.
2. The processor itself provides only limited support for multiprogramming.
Software is needed to manage the sharing of the processor and other
resources by multiple applications at the same time.
3. When multiple applications are active at the same time, it is necessary to
protect the data, I/O use, and other resource use of each application from
the others.

IT 311: Applied Operating System


Process Description and Control

WHAT IS A PROCESS? cont...


4.The OS was developed to provide a convenient, feature-rich, secure, and
consistent interface for applications to use. The OS is a layer of software between
the applications and the computer hardware that supports applications and
utilities.
5.We can think of the OS as providing a uniform, abstract representation of
resources that can be requested and accessed by applications. Resources
include main memory, network interfaces, file systems, and so on.

IT 311: Applied Operating System


Process Description and Control

Processes and Process Control Blocks


There are several definitions of the term process, including:
 A program in execution.
 An instance of a program running on a computer.
 The entity that can be assigned to and executed on a processor.
 A unit of activity characterized by the execution of a sequence of
instructions, a current state, and an associated set of system
resources.
A process is an entity that consists of a number of
elements. Two essential elements of a process are:
 program code (which may be shared with other processes that are
executing the same program)
 a set of data associated with that code.

IT 311: Applied Operating System


Process Description and Control

Processes and Process Control Blocks cont…


While the program is executing, this process can be uniquely
characterized by a number of elements, including the
following:
 Identifier: A unique identifier associated with this process, to
distinguish it from all other processes.
 State: The current state of the process (e.g., if it is currently executing, it is in the Running state).
 Priority: Priority level relative to other processes.
 Program counter: The address of the next instruction in the
program to be executed.
 Memory pointers: Include pointers to the program code and data
associated with this process, plus any memory blocks shared with
other processes.

IT 311: Applied Operating System


Process Description and Control

Processes and Process Control Blocks cont…


 Context data: These are data that are present in registers in the
processor while the process is executing.
 I/O status information: Includes outstanding I/O requests, I/O
devices assigned to this process, a list of files in use by the
process, and so on.
 Accounting information: May include the amount of processor time
and clock time used, time limits, account numbers, and so on.

IT 311: Applied Operating System


Process Description and Control

Processes and Process Control Blocks cont…


The information in the preceding list is
stored in a data structure, typically called
a process control block, which is created
and managed by the OS.
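
As a concrete illustration, the following C sketch shows what a simplified process control block might look like. It is a hypothetical structure for a teaching kernel; the field names mirror the elements listed above and are not taken from any real OS.

#include <stdint.h>

typedef enum { NEW, READY, RUNNING, BLOCKED, EXIT_STATE } state_t;

typedef struct pcb {
    uint32_t     pid;              /* Identifier: unique process ID            */
    state_t      state;            /* State: New, Ready, Running, Blocked, ... */
    int          priority;         /* Priority relative to other processes     */
    uintptr_t    program_counter;  /* Address of the next instruction          */
    void        *code_base;        /* Memory pointers: program code            */
    void        *data_base;        /* Memory pointers: process data            */
    uint64_t     registers[16];    /* Context data: saved processor registers  */
    int          open_files[16];   /* I/O status information: files in use     */
    uint64_t     cpu_time_used;    /* Accounting: processor time consumed      */
    struct pcb  *next;             /* Link field used by the OS's queues       */
} pcb_t;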

IT 311: Applied Operating System


Process Description and Control

Process States
For a program to be executed, a process, or task, is created
for that program.
We can characterize the behavior of an individual process by
listing the sequence of instructions that execute for that
process. Such a listing is referred to as a trace of the
process. We can characterize the behavior of the processor by
showing how the traces of the various processes are
interleaved.
A small dispatcher program switches the processor from one
process to another.

IT 311: Applied Operating System


Process Description and Control

Process States cont…


The figure shows the traces of each of the processes during the early
part of their execution.

IT 311: Applied Operating System


Process Description and Control

A Two-State Process Model


A process is either being executed by a processor, or it
isn’t. In this model, a process may be in one of the two
states: Running or Not Running, as shown in Figure a.
Processes that are not running must be kept in some sort of
queue, waiting their turn to execute. Figure b suggests a
structure. There is a single queue in which each entry is a
pointer to the process control block of a particular process.
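
A minimal C sketch of that queueing structure, assuming a hypothetical pcb_t type: Not Running processes wait in a single linked queue of PCB pointers, the dispatcher takes the head when the processor becomes free, and a paused process re-enters at the tail. All names are illustrative.

#include <stddef.h>

typedef enum { RUNNING, NOT_RUNNING } two_state_t;
typedef struct pcb { int pid; two_state_t state; struct pcb *next; } pcb_t;

/* Single queue of Not Running processes: head and tail pointers. */
static pcb_t *queue_head, *queue_tail;

static void enqueue(pcb_t *p) {            /* pause: process re-enters the queue  */
    p->state = NOT_RUNNING;
    p->next = NULL;
    if (queue_tail) queue_tail->next = p; else queue_head = p;
    queue_tail = p;
}

static pcb_t *dispatch(void) {             /* dispatch: next process gets the CPU */
    pcb_t *p = queue_head;
    if (p == NULL) return NULL;            /* queue empty: nothing to run         */
    queue_head = p->next;
    if (queue_head == NULL) queue_tail = NULL;
    p->state = RUNNING;
    return p;
}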

IT 311: Applied Operating System


Process Description and Control

A Two-State Process Model cont…


[Figure: Two-state process model: (a) state transition diagram; (b) queueing diagram]

IT 311: Applied Operating System


Process Description and Control

The Creation and Termination of Processes


Before refining the simple two-state model, it will be useful to discuss the
creation and termination of processes; ultimately, and
regardless of the model of process behavior that is used, the
life of a process is bounded by its creation and termination.
Process Creation
 When a new process is to be added to those currently being
managed, the OS builds the data structures used to
manage the process and allocates address space in
main memory to the process.

IT 311: Applied Operating System


Process Description and Control

The Creation and Termination of Processes cont…


Four common events lead to the creation of a process.

IT 311: Applied Operating System


Process Description and Control

The Creation and Termination of Processes cont…


Process Termination
A table summarizes typical reasons for process termination.

IT 311: Applied Operating System


Process Description and Control

The Creation and Termination of Processes cont…


[Table: Typical reasons for process termination]

IT 311: Applied Operating System


Process Description and Control

Five-State Model


Running: The process that is currently being executed.
Ready: A process that is prepared to execute when given the
opportunity.
Blocked/Waiting: A process that cannot execute until some event
occurs, such as the completion of an I/O operation.
Waiting is a frequently used alternative term for Blocked as a process
state. Generally, we will use Blocked, but the terms are interchangeable.
New: A process that has just been created but has not yet been
admitted to the pool of executable processes by the OS.
Exit: A process that has been released from the pool of executable
processes by the OS, either because it halted or because it aborted for
some reason.
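
The state names and a few of the transitions just described (Admit, Dispatch, Timeout, Event Wait, Event Occurs, Release) can be sketched directly in C; this is a toy illustration with invented function names, not an OS implementation.

#include <stdio.h>

/* The five process states from the model. */
typedef enum { NEW, READY, RUNNING, BLOCKED, EXIT_STATE } state_t;

typedef struct { int pid; state_t state; } process_t;

/* Admit: New -> Ready, once the OS accepts the process. */
void admit(process_t *p)        { if (p->state == NEW)     p->state = READY;      }
/* Dispatch: Ready -> Running, when the scheduler picks the process. */
void dispatch(process_t *p)     { if (p->state == READY)   p->state = RUNNING;    }
/* Timeout: Running -> Ready, e.g., the time slice expires. */
void timeout(process_t *p)      { if (p->state == RUNNING) p->state = READY;      }
/* Event wait: Running -> Blocked, e.g., waiting for I/O completion. */
void event_wait(process_t *p)   { if (p->state == RUNNING) p->state = BLOCKED;    }
/* Event occurs: Blocked -> Ready, the awaited event has happened. */
void event_occurs(process_t *p) { if (p->state == BLOCKED) p->state = READY;      }
/* Release: Running -> Exit, normal or abnormal termination. */
void release(process_t *p)      { if (p->state == RUNNING) p->state = EXIT_STATE; }

int main(void) {
    process_t p = { 1, NEW };
    admit(&p); dispatch(&p); event_wait(&p); event_occurs(&p);
    dispatch(&p); release(&p);
    printf("final state: %d\n", p.state);   /* prints 4 (EXIT_STATE) */
    return 0;
}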

IT 311: Applied Operating System


Process Description and Control

Five-State Model cont…

IT 311: Applied Operating System


Process Description and Control

Process Control
Modes of Execution
Because the OS manages processes, we need to distinguish between the
mode of processor execution normally associated with the OS and
that normally associated with user programs.
Most processors support at least two modes of execution. Certain
instructions can be executed only in the more-privileged mode; these
include reading or altering a control register, such as the
PSW, primitive I/O instructions, and instructions that relate to
memory management.
 The less-privileged mode is often referred to as the user mode,
because user programs typically would execute in this mode.
 The more-privileged mode is referred to as the system mode,
control mode, or kernel mode.

IT 311: Applied Operating System


Process Description and Control

Process Control cont…


Process Creation
We have seen the events that lead to the creation of a new process. Having discussed
the data structures associated with a process, we are now in a position to
describe briefly the steps involved in actually creating a process.
To create a new process, the OS proceeds as follows (a code sketch follows the list):
 Assign a unique process identifier to the new process.
 Allocate space for the process.
 Initialize the process control block.
 Set the appropriate linkages.
 Create or expand other data structures.
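
The following minimal C sketch mirrors those five steps for a hypothetical teaching kernel; helper names such as alloc_pid, alloc_address_space, and ready_queue_push are invented for illustration and do not refer to any real kernel API.

#include <stdlib.h>

typedef enum { NEW, READY, RUNNING, BLOCKED, EXIT_STATE } state_t;
typedef struct pcb {
    int pid; state_t state; int priority;
    void *addr_space; struct pcb *next;
} pcb_t;

static int    next_pid = 1;
static pcb_t *ready_queue = NULL;

static int   alloc_pid(void)               { return next_pid++; }   /* unique ID   */
static void *alloc_address_space(size_t n) { return malloc(n); }    /* main memory */
static void  ready_queue_push(pcb_t *p)    { p->next = ready_queue; ready_queue = p; }

pcb_t *create_process(size_t image_size, int priority) {
    pcb_t *p = calloc(1, sizeof *p);                  /* space for the PCB itself        */
    if (p == NULL) return NULL;
    p->pid        = alloc_pid();                      /* 1. assign a unique identifier   */
    p->addr_space = alloc_address_space(image_size);  /* 2. allocate space               */
    p->state      = NEW;                              /* 3. initialize the PCB           */
    p->priority   = priority;
    ready_queue_push(p);                              /* 4. set the appropriate linkages */
    /* 5. create or expand other data structures (e.g., accounting) would go here. */
    return p;
}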

IT 311: Applied Operating System


Process Description and Control

Process Control cont…


Process Switching
At some point, a running process is interrupted, and the OS assigns
another process to the Running state and turns control over
to that process. This raises several design issues.
First, what events trigger a process switch? Second, we must
recognize the distinction between mode switching and process
switching. Finally, what must the OS do to the various data
structures under its control to achieve a process switch?
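
To show the kind of data-structure work involved, here is a schematic process switch in C for a hypothetical kernel. Real context saving and restoring is architecture-specific assembly; here it is represented only by copying a register array, and all names are illustrative.

#include <string.h>

#define NREGS 16
typedef enum { READY, RUNNING, BLOCKED } state_t;
typedef struct pcb { int pid; state_t state;
                     unsigned long regs[NREGS]; struct pcb *next; } pcb_t;

static pcb_t *ready_queue;                 /* simple FIFO of ready PCBs */

static void enqueue(pcb_t *p) {            /* append p to the ready queue */
    p->next = NULL;
    if (ready_queue == NULL) { ready_queue = p; return; }
    pcb_t *q = ready_queue;
    while (q->next) q = q->next;
    q->next = p;
}
static pcb_t *dequeue(void) {              /* take the head of the ready queue */
    pcb_t *p = ready_queue;
    if (p) ready_queue = p->next;
    return p;
}

/* Switch away from `current` (e.g., on a clock interrupt) to the next ready process. */
pcb_t *process_switch(pcb_t *current, const unsigned long cpu_regs[NREGS],
                      unsigned long new_regs[NREGS]) {
    memcpy(current->regs, cpu_regs, sizeof current->regs); /* save context in its PCB */
    current->state = READY;                                /* Running -> Ready        */
    enqueue(current);                                      /* back on the ready queue */

    pcb_t *next = dequeue();        /* never NULL here: current was just enqueued     */
    next->state = RUNNING;                                 /* Ready -> Running        */
    memcpy(new_regs, next->regs, sizeof next->regs);       /* restore its context     */
    return next;
}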

IT 311: Applied Operating System


Process Description and Control

Process Control cont…


When to Switch Processes
A process switch may occur any time that the OS has
gained control from the currently running process. Possible
events that may give control to the OS include interrupts,
traps, and supervisor (system) calls.

IT 311: Applied Operating System


Process Description and Control

Process Control cont…


Interrupt examples include the following:
 Clock interrupt: The OS determines whether the currently running process
has been executing for the maximum allowable unit of time, referred to as
a time slice. That is, a time slice is the maximum amount of time that a
process can execute before being interrupted. If so, this process must be
switched to a Ready state and another process dispatched.
 I/O interrupt: The OS determines what I/O action has occurred. If the I/O
action constitutes an event for which one or more processes are waiting,
then the OS moves all of the corresponding blocked processes to the
Ready state (and Blocked/Suspend processes to the Ready/Suspend
state). The OS must then decide whether to resume execution of the
process currently in the Running state, or to preempt that process for a
higher-priority Ready process.

IT 311: Applied Operating System


Process Description and Control

Process Control cont…


Interrupt examples include the following:
 Memory fault: The processor encounters a virtual memory address
reference for a word that is not in main memory. The OS must bring in the
block (page or segment) of memory containing the reference from
secondary memory to main memory. After the I/O request is issued to
bring in the block of memory, the process with the memory fault is placed
in a blocked state; the OS then performs a process switch to resume
execution of another process. After the desired block is brought into
memory, that process is placed in the Ready state.

IT 311: Applied Operating System


Threads

 Some operating systems distinguish the concepts of
process and thread, the former related to resource
ownership, and the latter related to program execution.
This approach may lead to improved efficiency and coding
convenience.
 This may be done using either user-level threads or
kernel-level threads.
 User-level threads are unknown to the OS and are created and
managed by a threads library that runs in the user space of a
process. User-level threads are very efficient because a mode
switch is not required to switch from one thread to another.
 Kernel-level threads are threads within a process that are
maintained by the kernel.

IT 311: Applied Operating System


Threads

PROCESSES AND THREADS


The concept of a process embodies two characteristics:
 Resource ownership: A process includes a virtual address space to hold
the process image; recall that the process image is the collection of
program, data, stack, and attributes defined in the process control block.
From time to time, a process may be allocated control or ownership of
resources, such as main memory, I/O channels, I/O devices, and files.
The OS performs a protection function to prevent unwanted interference
between processes with respect to resources.
 Scheduling/execution: The execution of a process follows an execution
path (trace) through one or more programs. This execution may be
interleaved with that of other processes.

IT 311: Applied Operating System


Threads

PROCESSES AND THREADS cont…


Multithreading – refers to the ability of an OS to support multiple,
concurrent paths of execution within a single process. The traditional
approach of a single thread of execution per process, in which the concept
of a thread is not recognized, is referred to as a single-threaded approach.

The two arrangements shown in the left half of the
figure are single-threaded approaches. MS-DOS is
an example of an OS that supports a single-user
process and a single thread.
Other operating systems, such as some variants
of UNIX, support multiple user processes, but only
support one thread per process.
The right half of Figure depicts multithreaded
approaches. A Java runtime environment is an
example of a system of one process with multiple
threads.
Of interest in this section is the use of multiple
processes, each of which supports multiple threads.
This approach is taken in Windows, Solaris, and
many modern versions of UNIX, among others.
IT 311: Applied Operating System
Threads

PROCESSES AND THREADS cont…


The key benefits of threads derive from the performance implications:
1.It takes far less time to create a new thread in an existing process, than to
create a brand-new process. Studies done by the Mach developers show that
thread creation is ten times faster than process creation in UNIX.
2.It takes less time to terminate a thread than a process.
3.It takes less time to switch between two threads within the same process than
to switch between processes.
4.Threads enhance efficiency in communication between different executing
programs. In most operating systems, communication between independent
processes requires the intervention of the kernel to provide protection and the
mechanisms needed for communication. However, because threads within the
same process share memory and files, they can communicate with each other
without invoking the kernel.

IT 311: Applied Operating System


Threads

PROCESSES AND THREADS cont…


Four examples of the uses of threads in a single-user multiprocessing
system:
1.Foreground and background work: For example, in a spreadsheet program, one
thread could display menus and read user input, while another thread executes
user commands and updates the spreadsheet. This arrangement often increases
the perceived speed of the application by allowing the program to prompt for the
next command before the previous command is complete.
2.Asynchronous processing: Asynchronous elements in the program can be
implemented as threads. For example, as a protection against power failure, one
can design a word processor to write its random access memory (RAM) buffer to
disk once every minute. A thread can be created whose sole job is periodic
backup and that schedules itself directly with the OS; there is no need for fancy
code in the main program to provide for time checks or to coordinate input and
output.

IT 311: Applied Operating System


Threads

PROCESSES AND THREADS cont…


Four examples of the uses of threads in a single-user multiprocessing
system:
3.Speed of execution: A multithreaded process can compute one batch of data
while reading the next batch from a device. On a multiprocessor system, multiple
threads from the same process may be able to execute simultaneously. Thus,
even though one thread may be blocked for an I/O operation to read in a batch of
data, another thread may be executing.
4.Modular program structure: Programs that involve a variety of activities or a
variety of sources and destinations of input and output may be easier to design
and implement using threads.

IT 311: Applied Operating System


Threads

PROCESSES AND THREADS cont…


Thread Functionality
•Thread States: The key states for a thread are Running, Ready, and Blocked.
 There are four basic thread operations associated with a change in thread
state; a POSIX threads sketch follows the list.
1.Spawn: Typically, when a new process is spawned, a thread for that process is also
spawned. The new thread is provided with its own register context and stack space and
placed on the Ready queue.
2.Block: When a thread needs to wait for an event, it will block (saving its user registers,
program counter, and stack pointers). The processor may then turn to the execution of
another ready thread in the same or a different process.
3.Unblock: When the event for which a thread is blocked occurs, the thread is moved to the
Ready queue.
4.Finish: When a thread completes, its register context and stacks are deallocated.
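
As one concrete illustration, the POSIX threads (pthreads) API exposes these operations: pthread_create corresponds to Spawn, returning from the thread function corresponds to Finish, and pthread_join blocks the caller until that event occurs. A minimal sketch (compile with -pthread):

#include <pthread.h>
#include <stdio.h>

/* The new thread's entry point; returning from it corresponds to Finish. */
static void *worker(void *arg) {
    int id = *(int *)arg;
    printf("worker %d running\n", id);
    return NULL;                          /* Finish: thread terminates */
}

int main(void) {
    pthread_t tid;
    int id = 1;

    /* Spawn: create a new thread with its own register context and stack. */
    if (pthread_create(&tid, NULL, worker, &id) != 0) {
        perror("pthread_create");
        return 1;
    }

    /* Block: the caller waits until the worker finishes;
     * Unblock happens automatically when that event occurs. */
    pthread_join(tid, NULL);
    printf("worker joined\n");
    return 0;
}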

IT 311: Applied Operating System


Threads

PROCESSES AND THREADS cont…


•Thread Synchronization: All of the threads of a process share the same
address space and other resources, such as open files. Any alteration of a
resource by one thread affects the environment of the other threads in the same
process. It is therefore necessary to synchronize the activities of the various
threads so that they do not interfere with each other or corrupt data structures.
•The issues raised and the techniques used in the synchronization of threads are,
in general, the same as for the synchronization of processes.
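
For instance, a POSIX mutex is one common way to keep two threads in the same process from corrupting a shared counter; the sketch below is only illustrative and compiles with -pthread.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                          /* shared data               */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);                /* only one thread at a time */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);           /* always 200000 with the lock */
    return 0;
}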

IT 311: Applied Operating System


Threads

TYPES OF THREADS
There are two broad categories of thread implementation: user-level
threads (ULTs) and kernel-level threads (KLTs).
User-Level Threads – In a pure ULT facility, all of the work of thread
management is done by the application, and the kernel is not aware of
the existence of threads.
Figure (a) illustrates the pure ULT approach. Any application can be
programmed to be multithreaded by using a threads library, which is a
package of routines for ULT management.
The threads library contains code for creating and destroying threads,
for passing messages and data between threads, for scheduling thread
execution, and for saving and restoring thread contexts.
IT 311: Applied Operating System
Threads

TYPES OF THREADS cont…


There are a number of advantages to the use of ULTs instead of KLTs,
including the following:
 Thread switching does not require kernel-mode privileges because all of
the thread management data structures are within the user address
space of a single process. Therefore, the process does not switch to the
kernel mode to do thread management. This saves the overhead of two
mode switches (user to kernel; kernel back to user).
 Scheduling can be application specific. One application may benefit most
from a simple round-robin scheduling algorithm, while another might
benefit from a priority-based scheduling algorithm. The scheduling
algorithm can be tailored to the application without disturbing the
underlying OS scheduler.
 ULTs can run on any OS. No changes are required to the underlying
kernel to support ULTs. The threads library is a set of application-level
functions shared by all applications.

IT 311: Applied Operating System


Threads

TYPES OF THREADS cont…


There are two distinct disadvantages of ULTs compared to KLTs:
 In a typical OS, many system calls are blocking. As a result, when a
ULT executes a system call, not only is that thread blocked, but all of
the threads within the process are blocked as well.
 In a pure ULT strategy, a multithreaded application cannot take
advantage of multiprocessing. A kernel assigns one process to only
one processor at a time. Therefore, only a single thread within a
process can execute at a time. In effect, we have application-level
multiprogramming within a single process. While this
multiprogramming can result in a significant speedup of the
application, there are applications that would benefit from the ability
to execute portions of code simultaneously.

IT 311: Applied Operating System


Threads

TYPES OF THREADS cont…


Kernel-Level Threads – In a pure KLT facility, all of the work of thread
management is done by the kernel. There is no thread management
code in the application level, simply an application programming
interface (API) to the kernel thread facility. Windows is an example of this
approach.
Figure (b) depicts the pure KLT approach. The kernel maintains
context information for the process as a whole and for individual
threads within the process. Scheduling by the kernel is done on a
thread basis.

IT 311: Applied Operating System


Threads

TYPES OF THREADS cont…


Combined Approaches– Some operating systems provide a
combined ULT/KLT facility (see Figure (c)). In a combined system,
thread creation is done completely in user space, as is the bulk of the
scheduling and synchronization of threads within an application. The
multiple ULTs from a single application are mapped onto some (smaller
or equal) number of KLTs. The programmer may adjust the number of
KLTs for a particular application and processor to achieve the best
overall results.

IT 311: Applied Operating System


Threads

 Other Arrangements – The concepts of resource allocation and
dispatching unit have traditionally been embodied in the single
concept of the process; that is, as a 1:1 relationship between
threads and processes.
 Relationship between Threads and Processes

IT 311: Applied Operating System


Threads

MULTICORE AND MULTITHREADING


The use of a multicore system to support a single
application with multiple threads (such as might occur on a
workstation, a video game console, or a personal computer
running a processor-intense application) raises issues of
performance and application design. In this section, we first
look at some of the performance implications of a
multithreaded application on a multicore system, then
describe a specific example of an application designed to
exploit multicore capabilities.

IT 311: Applied Operating System


Threads

MULTICORE AND MULTITHREADING cont…


following examples:
 Multithreaded native applications: Multithreaded applications
are characterized by having a small number of highly threaded
processes. Examples of threaded applications include Lotus
Domino or Siebel CRM (Customer Relationship Manager).
 Multiprocess applications: Multiprocess applications are
characterized by the presence of many single-threaded
processes. Examples of multiprocess applications include the
Oracle database, SAP, and PeopleSoft.

IT 311: Applied Operating System


Threads

MULTICORE AND MULTITHREADING cont…


following examples:
 Java applications: Java applications embrace threading in a
fundamental way. Not only does the Java language greatly
facilitate multithreaded applications, but the Java Virtual Machine
is a multithreaded process that provides scheduling and memory
management for Java applications. Java applications that can
benefit directly from multicore resources include application
servers such as Oracle’s Java Application Server, BEA’s
Weblogic, IBM’s Websphere, and the open-source Tomcat
application server. All applications that use a Java 2 Platform,
Enterprise Edition (J2EE platform) application server can
immediately benefit from multicore technology.

IT 311: Applied Operating System


Threads

MULTICORE AND MULTITHREADING cont…


following examples:
 Multi-instance applications: Even if an individual application
does not scale to take advantage of a large number of threads, it
is still possible to gain from multicore architecture by running
multiple instances of the application in parallel. If multiple
application instances require some degree of isolation,
virtualization technology (for the hardware or the operating
system) can be used to provide each of them with its own
separate and secure environment.

IT 311: Applied Operating System


Deadlock and Starvation

 Deadlock is the blocking of a set of processes that either
compete for system resources or communicate with each other.
 There are three general approaches to dealing with deadlock:
prevention, detection, and avoidance.
 Deadlock prevention guarantees that deadlock will not
occur, by assuring that one of the necessary conditions for
deadlock is not met.
 Deadlock detection is needed if the OS is always willing to
grant resource requests; periodically, the OS must check for
deadlock and take action to break the deadlock.
 Deadlock avoidance involves the analysis of each new
resource request to determine if it could lead to deadlock,
and granting it only if deadlock is not possible.

IT 311: Applied Operating System


Deadlock and Starvation

PRINCIPLES OF DEADLOCK
Deadlock can be defined as the permanent blocking of a set
of processes that either compete for system resources or
communicate with each other. A set of processes is
deadlocked when each process in the set is blocked awaiting
an event (typically the freeing up of some requested
resource) that can only be triggered by another blocked
process in the set.
Deadlock is permanent because none of the events is ever
triggered. Unlike other problems in concurrent process
management, there is no efficient solution in the general
case.

IT 311: Applied Operating System


Deadlock and Starvation

PRINCIPLES OF DEADLOCK cont…


All deadlocks involve conflicting needs for resources by two or more
processes. A common example is the traffic deadlock. Figure shows a
situation in which four cars have arrived at a four-way stop intersection at
approximately the same time. The four quadrants of the intersection are
the resources over which control is needed. In particular, if all four cars
wish to go straight through the intersection, the resource requirements
are as follows:
 Car 1, traveling north, needs
quadrants a and b.
 Car 2, traveling west, needs
quadrants b and c.
 Car 3, traveling south, needs
quadrants c and d.
 Car 4, traveling east, needs
quadrants d and a.

IT 311: Applied Operating System


Deadlock and Starvation

PRINCIPLES OF DEADLOCK cont…


Example of Deadlock

IT 311: Applied Operating System


Deadlock and Starvation

PRINCIPLES OF DEADLOCK cont…


Example of No Deadlock

IT 311: Applied Operating System


Deadlock and Starvation

PRINCIPLES OF DEADLOCK cont…


Two general categories of resources can be distinguished:
 Reusable
 Consumable
A reusable resource is one that can be safely used by only
one process at a time and is not depleted by that use.
Processes obtain resource units that they later release for
reuse by other processes. Examples of reusable resources
include processors, I/O channels, main and secondary
memory, devices, and data structures (such as files,
databases, and semaphores).

IT 311: Applied Operating System


Deadlock and Starvation

PRINCIPLES OF DEADLOCK cont…


A consumable resource is one that can be created
(produced) and destroyed (consumed). Typically, there is no
limit on the number of consumable resources of a particular
type. An unblocked producing process may create any
number of such resources. When a resource is acquired by a
consuming process, the resource ceases to exist. Examples
of consumable resources are interrupts, signals, messages,
and information in I/O buffers.

IT 311: Applied Operating System


Deadlock and Starvation

PRINCIPLES OF DEADLOCK cont…


Resource Allocation Graphs
 A useful tool in characterizing the allocation of
resources to processes is the resource allocation
graph, introduced by Holt. The resource allocation
graph is a directed graph that depicts a state of the
system of resources and processes, with each process
and each resource represented by a node.
 A graph edge directed from a process to a resource
indicates a resource that has been requested by the
process but not yet granted.

IT 311: Applied Operating System


Deadlock and Starvation

PRINCIPLES OF DEADLOCK cont…


Examples of Resource Allocation Graphs

IT 311: Applied Operating System


Deadlock and Starvation

PRINCIPLES OF DEADLOCK cont…


The Conditions for Deadlock
Three conditions of policy must be present for a deadlock to be
possible:
1.Mutual exclusion. Only one process may use a resource at a time. No process
may access a resource unit that has been allocated to another process.
2.Hold and wait. A process may hold allocated resources while awaiting
assignment of other resources.
3.No preemption. No resource can be forcibly removed from a process holding it.
The first three conditions are necessary, but not sufficient, for a deadlock to
exist. For deadlock to actually take place, a fourth condition is required:
4.Circular wait. A closed chain of processes exists, such that each process holds
at least one resource needed by the next process in the chain. (A small code
illustration of all four conditions follows.)
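
To make the four conditions concrete, the hypothetical C sketch below has two threads acquire two POSIX mutexes in opposite orders. With the deliberate sleeps, each thread ends up holding one lock while waiting for the other, so mutual exclusion, hold-and-wait, no preemption, and circular wait are all present and the program hangs.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t res_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t res_b = PTHREAD_MUTEX_INITIALIZER;

static void *p1(void *arg) {
    (void)arg;
    pthread_mutex_lock(&res_a);     /* P1 holds A ...                   */
    sleep(1);                       /* ... long enough for P2 to run    */
    pthread_mutex_lock(&res_b);     /* ... and waits for B (held by P2) */
    pthread_mutex_unlock(&res_b);
    pthread_mutex_unlock(&res_a);
    return NULL;
}

static void *p2(void *arg) {
    (void)arg;
    pthread_mutex_lock(&res_b);     /* P2 holds B ...                   */
    sleep(1);
    pthread_mutex_lock(&res_a);     /* ... and waits for A (held by P1) */
    pthread_mutex_unlock(&res_a);
    pthread_mutex_unlock(&res_b);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);         /* with the sleeps, this join never returns */
    pthread_join(t2, NULL);
    puts("never reached: the two threads are deadlocked");
    return 0;
}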

IT 311: Applied Operating System


Deadlock and Starvation

PRINCIPLES OF DEADLOCK cont…


The four conditions listed above, taken together, are necessary and sufficient for deadlock. To summarize:

IT 311: Applied Operating System


Deadlock and Starvation

DEADLOCK PREVENTION
The strategy of deadlock prevention is, simply put, to design a
system in such a way that the possibility of deadlock is excluded.
Techniques can be applied to each of the four conditions (a lock-ordering
sketch follows the list).
 Mutual exclusion generally cannot be disallowed; some resources, such as
files, may allow multiple accesses for reads but only exclusive access for
writes. Even in this case, deadlock can occur if more than one process
requires write permission.
 The hold-and-wait condition can be prevented by requiring that a process
request all of its required resources at one time and blocking the process
until all requests can be granted simultaneously.
 No preemption can be prevented by requiring a process to release resources
it holds (or by preempting them); this approach is practical only when applied
to resources whose state can be easily saved and restored later, as is the
case with a processor.
 The circular wait condition can be prevented by defining a linear ordering
of resource types.
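
A minimal sketch of the circular-wait prevention idea: every thread that needs both resources acquires them in the same fixed linear order (A before B), so the circular chain seen in the earlier example cannot form. Names are illustrative.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t res_a = PTHREAD_MUTEX_INITIALIZER;  /* ordering index 1 */
static pthread_mutex_t res_b = PTHREAD_MUTEX_INITIALIZER;  /* ordering index 2 */

/* All code paths lock resources in increasing order: A, then B.
 * A circular wait (one thread holds A and wants B while another
 * holds B and wants A) therefore cannot arise. */
static void use_both(void (*work)(void)) {
    pthread_mutex_lock(&res_a);
    pthread_mutex_lock(&res_b);
    work();
    pthread_mutex_unlock(&res_b);   /* release in reverse order */
    pthread_mutex_unlock(&res_a);
}

static void report(void) { puts("using resources A and B"); }

int main(void) {
    use_both(report);
    return 0;
}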

IT 311: Applied Operating System


Deadlock and Starvation

DEADLOCK AVOIDANCE
An approach to solving the deadlock problem that differs
subtly from deadlock prevention is deadlock avoidance.
The term avoidance is a bit confusing. In fact, one could consider the strategies
discussed in this section to be examples of deadlock prevention because they
indeed prevent the occurrence of a deadlock.
Two approaches to deadlock avoidance (a safety-check sketch follows the list):
 Do not start a process if its demands might lead to deadlock.
 Do not grant an incremental resource request to a process if this
allocation might lead to deadlock.
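
The second approach is commonly implemented with a banker's-algorithm-style safety check: grant a request only if, after the allocation, some ordering of the processes can still run to completion with the resources that remain. The lesson does not spell the algorithm out, so the C sketch below is only an illustration with fixed, small matrices.

#include <stdbool.h>
#include <stdio.h>

#define NPROC 3
#define NRES  2

/* is_safe: returns true if every process can finish in some order, given
 * currently available resources, current allocations, and remaining needs
 * (maximum claim minus current allocation). */
static bool is_safe(int avail[NRES],
                    int alloc[NPROC][NRES],
                    int need[NPROC][NRES]) {
    int work[NRES];
    bool finished[NPROC] = { false };
    for (int r = 0; r < NRES; r++) work[r] = avail[r];

    for (int done = 0; done < NPROC; ) {
        bool progressed = false;
        for (int p = 0; p < NPROC; p++) {
            if (finished[p]) continue;
            bool can_run = true;
            for (int r = 0; r < NRES; r++)
                if (need[p][r] > work[r]) { can_run = false; break; }
            if (can_run) {                       /* p can finish and release its resources */
                for (int r = 0; r < NRES; r++) work[r] += alloc[p][r];
                finished[p] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed) return false;           /* no process can proceed: unsafe state */
    }
    return true;
}

int main(void) {
    int avail[NRES]         = { 1, 1 };
    int alloc[NPROC][NRES]  = { {1, 0}, {0, 1}, {1, 1} };
    int need [NPROC][NRES]  = { {1, 1}, {1, 0}, {0, 0} };
    printf("state is %s\n", is_safe(avail, alloc, need) ? "safe" : "unsafe");
    return 0;
}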

IT 311: Applied Operating System


Deadlock and Starvation

DEADLOCK DETECTION
Deadlock detection strategies do not limit resource access or restrict
process actions. With deadlock detection, requested resources are
granted to processes whenever possible. Periodically, the OS performs
an algorithm that allows it to detect the circular wait condition.
Deadlock Detection Algorithm
 A check for deadlock can be made as frequently as each resource
request, or less frequently, depending on how likely it is for a deadlock to
occur.
 Checking at each resource request has two advantages: It leads to early
detection, and the algorithm is relatively simple because it is based on
incremental changes to the state of the system. On the other hand, such
frequent checks consume considerable processor time.

IT 311: Applied Operating System


Deadlock and Starvation

DEADLOCK DETECTION cont…


Recovery
 Once deadlock has been detected, some strategy is needed for
recovery. The following are possible approaches, listed in the
order of increasing sophistication:
1. Abort all deadlocked processes.
2. Back up each deadlocked process to some previously
defined checkpoint, and restart all processes.
3. Successively abort deadlocked processes until deadlock no
longer exists.
4. Successively preempt resources until deadlock no longer
exists.

IT 311: Applied Operating System


Deadlock and Starvation

DEADLOCK DETECTION cont…


 For (3) and (4) in the previous slide, the selection
criteria could be one of the following. Choose the
process with the:
least amount of processor time consumed so far.
least amount of output produced so far.
most estimated time remaining.
least total resources allocated so far.
lowest priority.

IT 311: Applied Operating System


Deadlock and Starvation

AN INTEGRATED DEADLOCK STRATEGY


There are strengths and weaknesses to all of the strategies for
dealing with deadlock. Rather than attempting to design an OS
facility that employs only one of these strategies, it might be more
efficient to use different strategies in different situations. One suggested
approach is as follows:
 Group resources into a number of different resource classes.
 Use the linear ordering strategy defined previously for the
prevention of circular wait to prevent deadlocks between resource
classes.
 Within a resource class, use the algorithm that is most appropriate
for that class.

IT 311: Applied Operating System


Deadlock and Starvation

AN INTEGRATED DEADLOCK STRATEGY cont…


As an example of this technique, consider the following
classes of resources:
 Swappable space: Blocks of memory on secondary storage for
use in swapping processes
 Process resources: Assignable devices, such as tape drives, and
files
 Main memory: Assignable to processes in pages or segments
 Internal resources: Such as I/O channels

IT 311: Applied Operating System


Deadlock and Starvation

AN INTEGRATED DEADLOCK STRATEGY cont…


The order of the preceding list represents the order in which resources
are assigned. The order is a reasonable one, considering the sequence of
steps that a process may follow during its lifetime. Within each class, the
following strategies could be used:
 Swappable space: Prevention of deadlocks by requiring that all of the required
resources that may be used be allocated at one time, as in the hold-and-wait
prevention strategy. This strategy is reasonable if the maximum storage requirements
are known, which is often the case. Deadlock avoidance is also a possibility.
 Process resources: Avoidance will often be effective in this category, because it is
reasonable to expect processes to declare ahead of time the resources that they will
require in this class. Prevention by means of resource ordering within this class is also
possible.
 Main memory: Prevention by preemption appears to be the most appropriate strategy
for main memory. When a process is preempted, it is simply swapped to secondary
memory, freeing space to resolve the deadlock.
 Internal resources: Prevention by means of resource ordering can be used.

IT 311: Applied Operating System
