
Process Management

1
Processes

2
Requirements of an
Operating System
• Alternate the execution of multiple
processes to maximize processor
utilization while providing reasonable
response time
• Allocate resources to processes
• Support interprocess communication and
user creation of processes

3
Concepts
• Computer platform consists of a collection of
hardware resources
• Computer applications are developed to
perform some task
• Inefficient for applications to be written
directly for a given hardware platform
• Operating system provides a convenient-to-
use, feature-rich, secure, and consistent
interface for applications to use
• OS provides a uniform, abstract representation
of resources that can be requested and
accessed by application
4
Manage Execution of
Applications
• Resources made available to multiple
applications
• Processor is switched among multiple
applications
• The processor and I/O devices can be
used efficiently

5
Process
• A program in execution
• An instance of a program running on a
computer
• The entity that can be assigned to and
executed on a processor
• A unit of activity characterized by the
execution of a sequence of instructions,
a current state, and an associated set of
system resources
6
Process Elements
• Identifier
• State
• Priority
• Program counter
• Memory pointers
• Context data
• I/O status information
• Accounting information
7
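The elements above can be collected into a single record, which is essentially what a PCB is. A minimal Python sketch (the field names are illustrative, not taken from any particular OS):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Toy process control block holding the elements listed above."""
    pid: int                       # Identifier
    state: str = "new"             # State
    priority: int = 0              # Priority
    program_counter: int = 0       # Program counter
    memory_pointers: list = field(default_factory=list)  # Memory pointers
    context: dict = field(default_factory=dict)          # Context data (saved registers)
    io_status: dict = field(default_factory=dict)        # I/O status information
    cpu_time_used: float = 0.0     # Accounting information

pcb = PCB(pid=42, priority=5)
print(pcb.state)  # a newly created process starts as "new"
```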
Process Control Block
• Contains the process elements
• Created and managed by the operating
system
• Allows support for multiple processes

8
Process Control Block

9
Trace of Process
• Sequence of instructions that execute for
a process
• Dispatcher switches the processor from
one process to another

10
Example Execution

11
Trace of Processes

12
13
Two-State Process Model
• Process may be in one of two states
– Running
– Not-running

14
Not-Running Process in a
Queue

15
Process Creation

16
Process Termination

17
Process Termination

18
Processes
• Not-running
– ready to execute
• Blocked
– waiting for I/O
• Dispatcher cannot just select the process
that has been in the queue the longest
because it may be blocked

19
A Five-State Model
• Running – process in execution; actually
using the CPU.
• Ready – ready for execution; just waiting to be
assigned to a processor.
• Blocked – waiting for some event to occur
before it can continue execution.
• New – the process is created but not yet
admitted to the pool of executable processes.
• Exit (Terminated) – the process has finished
execution or has been aborted.
20
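The legal moves between these five states can be written down as a small transition table. A sketch in Python (the transition set follows the state descriptions above):

```python
# Legal transitions in the five-state model (a minimal sketch).
TRANSITIONS = {
    "new":     {"ready"},                     # admit
    "ready":   {"running"},                   # dispatch
    "running": {"ready", "blocked", "exit"},  # timeout, event wait, terminate
    "blocked": {"ready"},                     # awaited event occurs
    "exit":    set(),                         # no transitions out of Exit
}

def move(state, target):
    """Return the new state, refusing any illegal transition."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

s = "new"
for nxt in ("ready", "running", "blocked", "ready", "running", "exit"):
    s = move(s, nxt)
print(s)  # exit
```

Note that a Blocked process cannot go straight to Running: it must pass through Ready and be dispatched again.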
Five-State Process Model

21
Process States

22
Using Two Queues

23
Multiple Blocked Queues

24
Suspended Processes
• Processor is faster than I/O so all
processes could be waiting for I/O
• Swap these processes to disk to free up
more memory
• Blocked state becomes suspend state
when swapped to disk
• Two new states
– Blocked/Suspend
– Ready/Suspend
25
One Suspend State

26
Two Suspend States

27
Reasons for Process
Suspension

28
Processes and Resources

29
Operating System Control
Structures
• Information about the current status of
each process and resource
• Tables are constructed for each entity the
operating system manages

30
Memory Tables
• Allocation of main memory to processes
• Allocation of secondary memory to
processes
• Protection attributes for access to shared
memory regions
• Information needed to manage virtual
memory

31
I/O Tables
• I/O device is available or assigned
• Status of I/O operation
• Location in main memory being used as
the source or destination of the I/O
transfer

32
File Tables
• Existence of files
• Location on secondary memory
• Current Status
• Attributes
• Sometimes this information is
maintained by a file management system

33
Process Table
• Where process is located
• Attributes in the process control block
– Program
– Data
– Stack

34
Process Image

35
36
Process Control Block
• Process identification
– Identifiers
• Numeric identifiers that may be stored with the
process control block include
– Identifier of this process
– Identifier of the process that created this process
(parent process)
– User identifier

37
Process Control Block
• Processor State Information
– User-Visible Registers
• A user-visible register is one that may be
referenced by means of the machine language
that the processor executes while in user mode.
Typically, there are from 8 to 32 of these
registers, although some RISC implementations
have over 100.

38
Process Control Block
• Processor State Information
– Control and Status Registers
These are a variety of processor registers that are
employed to control the operation of the processor. These
include
• Program counter: Contains the address of the next
instruction to be fetched
• Condition codes: Result of the most recent arithmetic or
logical operation (e.g., sign, zero, carry, equal, overflow)
• Status information: Includes interrupt enabled/disabled
flags, execution mode

39
Process Control Block
• Processor State Information
– Stack Pointers
• Each process has one or more last-in-first-out
(LIFO) system stacks associated with it. A stack
is used to store parameters and calling addresses
for procedure and system calls. The stack
pointer points to the top of the stack.

40
Process Control Block
• Process Control Information
– Scheduling and State Information
This is information that is needed by the operating system to
perform its scheduling function. Typical items of
information:
• Process state: defines the readiness of the process to be
scheduled for execution (e.g., running, ready, waiting,
halted).
• Priority: One or more fields may be used to describe the
scheduling priority of the process. In some systems, several
values are required (e.g., default, current, highest-allowable).
• Scheduling-related information: This will depend on the
scheduling algorithm used. Examples are the amount of time
that the process has been waiting and the amount of time
that the process executed the last time it was running.
• Event: Identity of the event the process is awaiting before it
can be resumed.
41
Process Control Block
• Process Control Information
– Data Structuring
• A process may be linked to other processes in a
queue, ring, or some other structure. For
example, all processes in a waiting state for a
particular priority level may be linked in a
queue. A process may exhibit a parent-child
(creator-created) relationship with another
process. The process control block may contain
pointers to other processes to support these
structures.
42
Process Control Block
• Process Control Information
– Interprocess Communication
• Various flags, signals, and messages may be associated
with communication between two independent processes.
Some or all of this information may be maintained in the
process control block.
– Process Privileges
• Processes are granted privileges in terms of the memory
that may be accessed and the types of instructions that
may be executed. In addition, privileges may apply to the
use of system utilities and services.

43
Process Control Block
• Process Control Information
– Memory Management
• This section may include pointers to segment
and/or page tables that describe the virtual
memory assigned to this process.
– Resource Ownership and Utilization
• Resources controlled by the process may be
indicated, such as opened files. A history of
utilization of the processor or other resources
may also be included; this information may be
needed by the scheduler.

44
Process Creation
• A process may create several new
processes during the course of execution.
The creating process is called the parent
process and the newly-created processes
are all called the children of that process.
Each child process may in turn create
other processes, forming a tree of
processes.

45
Process Creation
• Parent and children share all resources,
or children share subset of parent’s
resources, or parent and child share no
resources.
• Parent and children may execute
concurrently or parent waits until child
terminates.

46
Process Creation
• Assign a unique process identifier
• Allocate space for the process
• Initialize process control block
• Set up appropriate linkages
– Ex: add new process to linked list used for
scheduling queue
• Create or expand other data structures
– Ex: maintain an accounting file
47
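The creation steps above can be sketched in a few lines of Python (all names are hypothetical; a real kernel manipulates raw memory and kernel data structures, not Python objects):

```python
import itertools

_pid_counter = itertools.count(1)   # source of unique process identifiers
ready_queue = []                    # scheduling queue (a linked list in a real OS)
accounting = {}                     # other data structures, e.g. an accounting file

def create_process(priority=0, memory_words=1024):
    pid = next(_pid_counter)                     # 1. assign a unique PID
    image = {"memory": [0] * memory_words}       # 2. allocate space for the process
    pcb = {"pid": pid, "state": "ready",         # 3. initialize the PCB
           "priority": priority, "pc": 0, "image": image}
    ready_queue.append(pcb)                      # 4. link into the scheduling queue
    accounting[pid] = {"cpu_time": 0.0}          # 5. create/expand other structures
    return pcb

p = create_process(priority=3)
print(p["pid"], len(ready_queue))  # 1 1
```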
When to Switch a Process
• Clock interrupt
– process has executed for the maximum
allowable time slice
• I/O interrupt
• Memory fault
– the referenced address is in virtual memory and
must be brought into main memory

48
When to Switch a Process
• Trap
– error or exception occurred
– may cause process to be moved to Exit state
• Supervisor call
– such as file open

49
Change of Process State
• Save context of processor including
program counter and other registers
• Update the process control block of the
process that is currently in the Running
state
• Move process control block to
appropriate queue – ready; blocked;
ready/suspend
• Select another process for execution
50
Change of Process State
• Update the process control block of the
process selected
• Update memory-management data
structures
• Restore context of the selected process

51
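The save/update/restore steps above can be mimicked with plain dictionaries standing in for the CPU and the PCBs (a toy sketch, not real kernel code):

```python
def context_switch(cpu, old_pcb, new_pcb, queues):
    # 1. Save processor context (program counter, registers) of the old process
    old_pcb["pc"] = cpu["pc"]
    old_pcb["registers"] = dict(cpu["registers"])
    # 2. Update the PCB of the process that was running
    old_pcb["state"] = "ready"
    # 3. Move its PCB to the appropriate queue (ready, in this sketch)
    queues["ready"].append(old_pcb)
    # 4. new_pcb was already selected for execution; update its PCB
    new_pcb["state"] = "running"
    # 5. Restore the context of the selected process
    cpu["pc"] = new_pcb["pc"]
    cpu["registers"] = dict(new_pcb["registers"])

cpu = {"pc": 100, "registers": {"r0": 7}}
a = {"pid": 1, "pc": 0, "registers": {}, "state": "running"}
b = {"pid": 2, "pc": 200, "registers": {"r0": 9}, "state": "ready"}
queues = {"ready": []}
context_switch(cpu, a, b, queues)
print(cpu["pc"], a["state"], b["state"])  # 200 ready running
```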
Process Termination
• Process executes its last statement and
asks the OS to delete it (exit system call)
• Process may return output to its parent
(via the wait system call)
• Process resources deallocated by the OS
• A parent may terminate execution of
children processes (via the abort system
call).
52
Interprocess Communication

53
Interprocess Communication
• Interprocess communication (IPC)
provides a mechanism to allow
processes to communicate and to
synchronize their actions.
• In the producer-consumer problem,
cooperating processes communicate in a
shared-memory environment (the
common buffer pool).

54
Cooperating Processes
• Processes may be either independent or cooperating.
• Independent processes cannot affect or be affected by
the execution of another process.
• Cooperating process can affect or be affected by the
execution of another process.
Advantages of process cooperation:
– Information sharing
– Computation speed-up
– Modularity
– Convenience

55
Concurrent execution that involves
cooperation among processes requires
mechanisms to allow processes to
communicate with each other (IPC) and
to synchronize their actions.

To illustrate the concept of cooperating
processes, consider the producer-consumer
(bounded-buffer) problem:

56
The Bounded-Buffer Problem

57
A producer process produces info that is
consumed by a consumer process.
Examples:
– A print program produces characters that
are consumed by the print device driver,
– A compiler may produce assembly
language code which is consumed by the
assembler.

58
• To allow producer-consumer processes to run
concurrently, we need a buffer of items that
can be filled by the producer and emptied by
the consumer.
• The producer and consumer must be
synchronized so that the consumer does not try
to consume an item that has not yet been
produced.
• If the buffer is of fixed size, the consumer
must wait when the buffer is empty and the
producer must wait if the buffer is full.

59
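This synchronization is exactly what a blocking bounded queue provides. A minimal sketch using Python's thread-safe `queue.Queue` (a stand-in for the shared buffer; the sentinel value is an assumption of this sketch):

```python
import queue
import threading

buf = queue.Queue(maxsize=4)   # fixed-size buffer: producer blocks when full,
                               # consumer blocks when empty
consumed = []

def producer():
    for i in range(10):
        buf.put(i)             # waits if the buffer is full
    buf.put(None)              # sentinel: nothing more to produce

def consumer():
    while True:
        item = buf.get()       # waits if the buffer is empty
        if item is None:
            break
        consumed.append(item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(consumed)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```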
Interprocess Communication
2 basic communication models:
• Shared-memory model
• Message-passing model
In the shared-memory model, processes gain
access to regions of memory owned by other
processes. Some form of agreement has to
take place in order for this to occur. Processes
may then exchange information by
reading/writing data in these shared areas.

60
IPC, contd.
In the shared memory model, the processes
are responsible for ensuring that they are
not writing to the same location
simultaneously.

Shared-memory and message-passing are
not mutually exclusive; they can both be
used simultaneously in an OS.

61
Message Passing Model
In the message-passing model, the sending process
transmits a set of data values (a message) through a
specified communication channel or port; the
receiving process indicates its acceptance of the
message.
Communication may be synchronous or blocking,
meaning the sender waits until the receiver performs a
receive operation; OR it may be asynchronous or
non-blocking, meaning that the message is placed into a
queue waiting for the receiver to accept it, and the
sending process can proceed as soon as the message is
placed in a local buffer.
62
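Python's `queue.Queue` can illustrate the asynchronous case: `put` returns as soon as the message is queued, while a blocking `get` models a synchronous receive (a sketch with the standard library standing in for a real IPC channel):

```python
import queue

mailbox = queue.Queue()  # unbounded queue modelling an asynchronous channel

# Non-blocking (asynchronous) send: the sender continues as soon as the
# message is queued, whether or not the receiver has called receive yet.
mailbox.put("hello")
mailbox.put("world")

# Synchronous (blocking) receive: block=True waits for a message to arrive;
# the timeout just keeps this demo from waiting forever.
first = mailbox.get(block=True, timeout=1)
second = mailbox.get(block=True, timeout=1)
print(first, second)  # hello world
```

A bounded `Queue(maxsize=n)` would additionally make the *send* block when the queue is full, which is the bounded-capacity case discussed later.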
Basic Structure of an IPC Facility

• Two operations provided:
– Send (message)
– Receive (message)
• Messages sent can be of fixed or
variable size
• Communication link must exist (logical
link)

63
Processes wishing to communicate must
do so either directly or indirectly.
Direct communication:
– Each process that wants to communicate
must explicitly name the recipient or sender
of the info. The primitives are defined as
follows:
• Send (P, message)
• Receive (Q, message)

64
Direct Communication, contd.
The link in this scheme has the following
properties:
• A link is established automatically
between every pair of processes
• A link is associated with exactly two
processes
• The link is usually bi-directional, but
may be uni-directional.
65
Indirect Communication
Messages are sent to and received from
mailboxes (or ports).
• Each mailbox has a unique identification.
• Two processes can communicate only if they
have a shared mailbox.
The send and receive primitives are defined as
follows:
Send (A, message) – send message to mailbox A
Receive (A, message) – receive message from
mailbox A
66
Indirect Communication, contd.
In this scheme, the communication link
has the following properties:
• A link is established between a pair of
processes only if they have a shared
mailbox.
• A link may be associated with more than
two processes
• The link may be either uni-directional or
bi-directional.
67
Link Capacity (Buffering)
A link has some capacity which determines the
number of messages that can temporarily
reside in it. Can be viewed as a queue of
messages attached to the link. The queue can
be implemented as follows:
• Zero capacity – the queue has max. length 0.
i.e. the link can have no messages waiting in it.
Sender must wait until the receiver receives
the message.

68
Link Capacity, contd.
• Bounded capacity – the queue has a finite
length, n. If the queue is not full when a new
message is sent, the message is placed in the
queue and the sender can continue execution
without waiting.
• Unbounded capacity – the queue has
potentially infinite length, thus any number of
messages can wait in it. The sender is never
delayed

69
In the non-zero capacity cases, a process
does not know whether a message has
arrived at its destination after the send
operation. The sender must
communicate explicitly with the receiver
to find out whether the message was
received.
Example:
Suppose process P sends a message to
process Q and can continue its execution
only after the message is received.
70
Process P executes the following
sequence:
send (Q, message)
receive (Q, message)

Process Q executes the sequence:


receive (P, message)
send (P, “acknowledgement”)

71
Lost Messages
3 basic methods for dealing with lost messages:
• The OS is responsible for detecting this event
and re-sending the message.
• The sending process is responsible for
detecting this event and for re-transmitting the
message, if it wants to do so.
• The OS is responsible for detecting this event,
then notifies the sender that the message has
been lost. The sending process can proceed as
it chooses.
72
How do we detect that a message is lost?

The most common detection method is the use of
timeouts.
When a message is sent, an acknowledgement is
always sent back. The OS or process may then
specify a time interval during which it expects
the acknowledgement to arrive. If this time
interval elapses before the ack arrives, the OS
or process may assume that the message is
lost. The message is then resent.
A mechanism must exist to distinguish between
lost and delayed messages.
73
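The timeout-and-retransmit scheme can be sketched with two threads and queues standing in for the channel; the loss of the first copy is simulated deliberately (all names are hypothetical):

```python
import queue
import threading

channel = queue.Queue()   # message channel to the receiver
acks = queue.Queue()      # acknowledgements flowing back

def receiver():
    dropped = False       # simulate losing exactly the first message
    while True:
        msg = channel.get()
        if msg is None:
            return                # shutdown signal
        if not dropped:
            dropped = True        # "lose" the first copy: no ack sent
            continue
        acks.put("ack")           # acknowledge every message that arrives

def send_reliably(msg, timeout=0.1, max_retries=5):
    """Resend until an ack arrives within the timeout (a minimal sketch)."""
    for attempt in range(max_retries):
        channel.put(msg)
        try:
            acks.get(timeout=timeout)
            return attempt + 1    # number of transmissions that were needed
        except queue.Empty:
            continue              # timeout elapsed: assume lost, resend
    raise RuntimeError("message could not be delivered")

t = threading.Thread(target=receiver)
t.start()
tries = send_reliably("data")
channel.put(None)   # stop the receiver
t.join()
print(tries)  # 2: the first copy was "lost", the retransmission got through
```

A real protocol would also attach sequence numbers so a delayed (not lost) message and its retransmission can be told apart.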
Scheduling

74
Process Scheduling
• Idea of multi-tasking
• A process alternates between CPU-burst and I/O-burst cycles
– During an I/O burst, the CPU is idle
– Exploit the idleness to better achieve parallel tasking
• On a uniprocessor system, switching between processes is so
fast that it gives an illusion of parallelism
• Determine which process should be next in line for the
CPU
– Selects from among the processes that are ready to execute
– Done by short term scheduler

75
Issues to Consider
• Efficiency
– If scheduler is invoked every 100 ms & it takes 10 ms to
decide the order of execution of the processes then 10/(100 +
10) = 9 % of the CPU is being used simply for scheduling the
work
• Fairness
– Give each process a fair share of the CPU
• Priority
– Allow more important processes to run first
• Context switch overhead
– Saving the PCB of one process, loading the PCB of another

76
Scheduling Queue
• Processes entering the system are put
into a job queue.
• Processes in memory waiting to be
executed are kept in a list called the
ready queue.
• The ready queue is generally stored as a
linked list.
• Processes waiting for a particular device
are placed in an I/O queue.
77
Ready Queue

• Processes that are resident in main memory and in the
ready state are kept in a ready queue
– A process waits in the ready queue until selected
• Unless a process terminates, it will eventually be put
back into the ready queue
• Similarly, the OS keeps device queues for processes
waiting for I/O
78
Ready Queue
Representation

79
Schedulers
A process migrates between various
scheduling queues throughout its
lifetime. The process of selecting from
these queues is carried out by a
scheduler.
2 types of scheduler
• Long term scheduler (job scheduler)
• Short term scheduler (CPU scheduler)
80
The short-term scheduler:
• Selects from among the processes that are
ready to execute and allocates the CPU to one
of them
• Must select a new process for the CPU
frequently
• Must be very fast.
The long-term scheduler:
• Selects processes from a batch system and
loads them into memory for execution
• Executes less frequently

81
Some OSs, such as time-sharing systems, may have an
intermediate scheduler called the medium-term
scheduler. The idea is that it is sometimes
advantageous to remove processes from memory
temporarily, thereby reducing the degree of
multiprogramming. The process can then be re-
introduced into memory at a later time. This is called
swapping.
82
The CPU Scheduler
Selects from among processes in memory that
are ready to execute and allocates the CPU to
one of them. CPU scheduling decisions take
place when a process:
i. Switches from running to waiting state
ii. Switches from running to ready state
iii. Switches from waiting to ready
iv. Terminates
Scheduling under i. and iv. is non-preemptive;
otherwise the scheduling scheme is
preemptive.
83
Non-Preemptive vs. Preemptive
• A process can give up CPU in two ways
• Non-preemptive: A process voluntarily gives up CPU
– I/O request
• Process is blocked, then when request ready it is put back
into ready queue
– A process creates a new child/sub process (more later)
– Finishes executing its instructions (process termination)
• PCB and resources assigned are de-allocated
• Preemptive: A process is forced to give up the CPU
– Interrupted due to higher priority process
– Each process has fixed time-slice to use CPU

84
The Dispatcher
The module that gives control of the CPU
to the process selected by the short-term
scheduler. This involves:
– Switching context
– Switching to user mode
– Jumping to the proper location in the user
program to restart that program.

85
Context Switch

86
Scheduling Criteria
Different CPU scheduling algorithms have
different properties. The criteria used for
comparing these algorithms include:
• CPU utilization: keep the CPU as busy as
possible. Should range from 50% (lightly
loaded system) to 90% for a heavily used
system.
• Throughput: # of processes that complete
their execution per time unit. This may be 1
process/hr for long jobs, or 10
processes/second for short transactions.
87
Scheduling Criteria, contd.
• Turnaround time: amount of time to execute a particular
process
– Waiting in Ready Queue + Executing on CPU + doing I/O
• Response time: amount of time it takes from when a request was
submitted until the first response is produced
– Ideal for interactive systems
• Note:
– For simplicity of illustration and discussion, only one CPU burst per
process is used in the examples
– Measure of comparison is done with Average Waiting Time
• Waiting Time is the sum of the periods spent waiting in ready
queue
– Context switch time is negligible

88
Scheduling Scheme: FCFS
• First Come First Served (FCFS)
– The process that requests the CPU first is allocated
the CPU
• Implemented using a FIFO queue
– When a process enters the ready queue, its PCB is
linked to the tail
– The process at the head of the queue is given to the
CPU
• FCFS is non-preemptive and hence easy to implement
• Wait time varies substantially if the processes that
come first are CPU intensive
89
FCFS Example (Execution Time)
Process Execution Time
P1 24
P2 3
P3 3
• Suppose that the processes arrive in the order: P1 , P2
, P3
The Gantt/Time Chart for the schedule is:
P1 P2 P3

0 24 27 30
• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
90
FCFS Example (contd..)
• Suppose that the processes arrive in the order:
P2 , P3 , P1
• The time chart for the schedule is:
P2 P3 P1
0 3 6 30
• Waiting time for P1 = 6; P2 = 0; P3 = 3
• Average waiting time: (6 + 0 + 3)/3 = 3
– Much better than previous case
• Convoy effect: all processes wait for the one big
process to get off the CPU

91
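Both orderings above can be checked with a short script (a sketch written for this example, not part of the slides):

```python
def fcfs_waiting_times(execution_times):
    """Waiting time of each process under FCFS, in arrival order."""
    waits, clock = [], 0
    for burst in execution_times:
        waits.append(clock)   # a process waits until all earlier ones finish
        clock += burst
    return waits

order1 = [24, 3, 3]           # P1, P2, P3
order2 = [3, 3, 24]           # P2, P3, P1
w1, w2 = fcfs_waiting_times(order1), fcfs_waiting_times(order2)
print(w1, sum(w1) / 3)        # [0, 24, 27] 17.0
print(w2, sum(w2) / 3)        # [0, 3, 6] 3.0
```

The gap between 17.0 and 3.0 is the convoy effect in miniature: one long process at the head of the queue inflates everyone else's wait.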
Shortest Job First (SJF) Scheduling

• The process with the shortest execution time takes priority over
others
• Associate with each process the length of its next CPU execution
time
2 schemes:
• Non-preemptive
– Once the CPU is given to the process, it cannot be preempted
until its CPU execution time is complete.
• If 2 short jobs with same execution time then use their time of
arrival to break the tie
• Preemptive
– If a new process arrives with CPU execution time less than
remaining time of current executing process, preempt

92
SJF with Preemption Example
Process Arrival Time Next CPU Ex Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
• SJF (preemptive)
P1 P2 P3 P2 P4 P1
0 2 4 5 7 11 16
• Average waiting time = (9 + 1 + 0 +2)/4 = 3
• Also known as Shortest Remaining Time First

93
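The preemptive SJF schedule above can be reproduced with a tick-by-tick simulation (a sketch; here ties are broken by dictionary insertion order, which matches the example):

```python
def srtf(processes):
    """Preemptive SJF (shortest remaining time first): at each tick, run the
    arrived process with the least remaining time. Returns waiting times."""
    remaining = {p: burst for p, (arrive, burst) in processes.items()}
    finish, clock = {}, 0
    while remaining:
        ready = [p for p in remaining if processes[p][0] <= clock]
        if not ready:
            clock += 1                 # CPU idle until the next arrival
            continue
        p = min(ready, key=lambda q: remaining[q])  # shortest remaining time
        remaining[p] -= 1              # run p for one time unit
        clock += 1
        if remaining[p] == 0:
            del remaining[p]
            finish[p] = clock
    # waiting time = finish - arrival - burst
    return {p: finish[p] - arrive - burst
            for p, (arrive, burst) in processes.items()}

procs = {"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)}
waits = srtf(procs)
print(waits, sum(waits.values()) / 4)  # {'P1': 9, 'P2': 1, 'P3': 0, 'P4': 2} 3.0
```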
SJF Advantages and Disadvantages

Advantage
• SJF is optimal – gives minimum average
waiting time for a given set of processes
Disadvantage
• Need to have a good heuristic to guess
the next CPU execution time
• A steady stream of short-duration processes can
starve longer ones
94
Priority Scheduling
• A priority number (integer) is associated with each
process
• CPU is allocated to the process with the highest
priority (in the example below, a smaller number means higher priority)
– If processes have same priority then schedule according to
FCFS
• SJF is an example of a priority scheduling
• Problem: Starvation – low priority processes may
never execute
• Solution: Aging – as time progresses increase the
priority of the process

95
Priority Scheduling, contd.
Example: The following processes arrive at time 0 in the
order – P1, P2, P3, P4, P5.
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2
P2 P5 P1 P3 P4
0 1 6 16 18 19

96
Priority Scheduling, contd.
The average wait time is:
(0 + 1 + 6 + 16 +18)/5 = 8.2 ms
• Priority scheduling can be either preemptive or
non-preemptive.
• A major problem with priority scheduling algorithms
is indefinite block or starvation. Low priority
processes could wait indefinitely for the CPU.
• A solution to the problem of starvation is aging.
Aging is a technique of gradually increasing the
priority of processes that wait in the system a long
time.

97
Round Robin (RR) Scheduling

• Each process gets a small unit of CPU time called time


quantum or slice
– After this time has elapsed, the process is preempted and
added to the end of the ready queue
• If there are n processes in the ready queue and the
time quantum is q, then each process gets 1/n of the
CPU time in chunks of at most q time units at once
– No process waits more than (n-1)q time units until its next
time quantum.
• Performance
– If q is large then RR becomes like FCFS
– q should not be so small such that it requires too many
context switches
98
RR Scheduling, contd.
The ready queue can be implemented as a FIFO queue of
processes. New processes are added to the tail of the
queue. The scheduler picks the first process from the
ready queue, sets a timer to interrupt after 1 time
quantum and then dispatches the process. One of two
things will happen:
• The process may have a CPU burst of less than 1 time quantum,
or
• CPU burst of the currently executing process is longer than 1 time
quantum. In this case, the timer will go off, causing an interrupt; a
context switch is then executed and the process is put at the tail of
the ready queue.

99
RR Scheduling, contd.
The average waiting time under the RR scheme is often
quite long. Consider the following set of processes,
the time quantum is set at 4:
Process CPU Burst Time
P1 24
P2 3
P3 3

P1 P2 P3 P1 P1 P1 P1 P1

0 4 7 10 14 18 22 26 30

100
RR Scheduling, contd.
The average waiting time is 17/3 = 5.66.

RR scheduling is appropriate for time-sharing
systems.

101
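The RR schedule above can be reproduced with a small simulation (a sketch; all processes are assumed to arrive at time 0, as in the example):

```python
from collections import deque

def round_robin(bursts, quantum=4):
    """Waiting time per process under RR; all processes arrive at time 0."""
    remaining = dict(bursts)
    last_ran = {p: 0 for p in bursts}   # when each process last left the CPU
    waits = {p: 0 for p in bursts}
    ready, clock = deque(bursts), 0
    while ready:
        p = ready.popleft()
        waits[p] += clock - last_ran[p]  # time just spent in the ready queue
        run = min(quantum, remaining[p])
        clock += run                     # run for one quantum (or less)
        remaining[p] -= run
        last_ran[p] = clock
        if remaining[p] > 0:
            ready.append(p)              # quantum expired: back of the queue
    return waits

waits = round_robin({"P1": 24, "P2": 3, "P3": 3})
print(waits, round(sum(waits.values()) / 3, 2))  # {'P1': 6, 'P2': 4, 'P3': 7} 5.67
```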
What do real systems use?
• Multi-level Feedback queues
– N priority levels
– Priority scheduling between levels
– RR within a level
– Quantum size decreases as priority level increases
– A process in a given level is not scheduled until all
higher priority queues are empty
– If a process does not complete within its quantum at a
priority level, it is moved to the next lower priority
level
• Aging
102
Two synchronization problems associated
with interprocess communication and
CPU scheduling are race condition and
deadlock.

103
Deadlocks

104
The following is drawn from a law passed
in Kansas in the early part of the 20th
century:
“When two trains approach each other at
a crossing, both shall come to a full stop
and neither shall start up again until the
other has gone.”
The above is a good illustration of a
deadlock situation.

105
Deadlock
• Permanent blocking of a set of processes
that either compete for system resources
or communicate with each other
• No efficient solution
• Involve conflicting needs for resources
by two or more processes

106
107
Resources
• There are three kinds of resources:
– Sharable
– Serially reusable
– Consumable
• Sharable resources can be used by more
than one process at a time.
• A consumable resource can only be used
by one process, and the resource gets
“used up.”
108
Serially Reusable Resources
• Used by only one process at a time and not
depleted by that use
• Processes obtain resources that they later
release for reuse by other processes
• Processors, I/O channels, main and secondary
memory, devices, and data structures such as
files and databases
• Deadlock occurs if each process holds one
resource and requests the other

109
Example of Deadlock

110
Another Example of Deadlock
• Space is available for allocation of
200Kbytes, and the following sequence
of events occur
P1 P2
... ...
Request 80 Kbytes; Request 70 Kbytes;
... ...
Request 60 Kbytes; Request 80 Kbytes;

• Deadlock occurs if both processes
progress to their second request
111
Consumable Resources
• Created (produced) and destroyed
(consumed)
• Interrupts, signals, messages, and
information in I/O buffers
• May take a rare combination of events to
cause deadlock

112
Resource Allocation Graphs
• Directed graph that depicts a state of the
system of resources and processes

113
Resource Allocation Graphs

114
Conditions for Deadlock
• Mutual exclusion
– Only one process may use a resource at a
time
• Hold-and-wait
– A process may hold allocated resources
while awaiting assignment of others
• No preemption
– No resource can be forcibly removed from a
process holding it
115
Conditions for Deadlock
• Circular wait
– A closed chain of processes exists, such that each
process holds at least one resource needed by the
next process in the chain

116
117
Possibility of Deadlock
• Mutual Exclusion
• No preemption
• Hold and wait

118
Existence of Deadlock
• Mutual Exclusion
• No preemption
• Hold and wait
• Circular wait

119
Methods for Handling Deadlocks
• Ensure that the system will never enter a
deadlock state.
• Allow the system to enter a deadlock
state and then recover.
• Ignore the problem and pretend that
deadlocks never occur in the system;
used by most operating systems,
including UNIX.

120
Deadlock Prevention
• Mutual Exclusion
– Not required for sharable resources; must
hold for non-sharable resources
• Hold and Wait
– Require process to request and be allocated
all of its required resources before it begins
execution, or allow process to request
resources only when the process has none.

121
Deadlock Prevention
• No Preemption
– If a process that is holding some resources
requests another resource that cannot be
immediately allocated to it, then all
resources currently being held are released.
– Preempted resources are added to the list of
resources for which the process is waiting.
– Process will be restarted only when it can
regain its old resources, as well as the new
ones that it is requesting.
122
Deadlock Prevention
• Circular Wait
– Impose a total ordering of all resource
types, and require that each process requests
resources in an increasing order of
enumeration.

123
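The resource-ordering rule can be illustrated with ordinary locks: if every process acquires locks only in ascending index order, no circular wait can form (a toy sketch; a real system would enforce the ordering, not rely on convention):

```python
import threading

# A total ordering on the resources: lock i may only be requested
# after all needed locks with smaller indices are already held.
locks = [threading.Lock() for _ in range(3)]

def acquire_in_order(needed):
    """Acquire the needed locks in the globally agreed order."""
    order = sorted(needed)          # ascending order of enumeration
    for i in order:
        locks[i].acquire()
    return order

def release_all(held):
    for i in reversed(held):
        locks[i].release()

# Two "processes" may *want* overlapping resources in different orders,
# but both end up requesting them as 0 then 2 - never 2 then 0:
held = acquire_in_order({2, 0})
release_all(held)
print(held)  # [0, 2]
```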
Deadlock Avoidance
• A decision is made dynamically whether
the current resource allocation request
will, if granted, potentially lead to a
deadlock
• Requires knowledge of future process
request

124
Two Approaches to
Deadlock Avoidance
• Do not start a process if its demands
might lead to deadlock
• Do not grant an incremental resource
request to a process if this allocation
might lead to deadlock

125
Resource Allocation Denial
• Referred to as the banker’s algorithm
• State of the system is the current
allocation of resources to processes
• Safe state is where there is at least one
sequence that does not result in deadlock
• Unsafe state is a state that is not safe

126
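The safe-state test at the heart of the banker's algorithm can be sketched as follows (one common formulation; the variable names and the single-resource example are illustrative, not from the slides):

```python
def is_safe(available, max_claim, allocated):
    """Banker's algorithm safety test: does some order exist in which
    every process can obtain its maximum claim and finish?"""
    work = list(available)
    finished = [False] * len(max_claim)
    progressed = True
    while progressed:
        progressed = False
        for i, done in enumerate(finished):
            # need = what process i may still request
            need = [m - a for m, a in zip(max_claim[i], allocated[i])]
            if not done and all(n <= w for n, w in zip(need, work)):
                # process i can run to completion, then releases everything
                work = [w + a for w, a in zip(work, allocated[i])]
                finished[i] = True
                progressed = True
    return all(finished)

# One resource type with 10 units: 7 allocated, 3 still available -> safe.
print(is_safe([3], max_claim=[[7], [5], [4]], allocated=[[3], [2], [2]]))  # True
# Same claims, but only 1 unit left and more already handed out -> unsafe.
print(is_safe([1], max_claim=[[7], [5], [4]], allocated=[[4], [3], [2]]))  # False
```

A request is granted only if the state that would result still passes this test; otherwise the requesting process waits.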
Deadlock Avoidance
• Maximum resource requirement must be
stated in advance
• Processes under consideration must be
independent; no synchronization
requirements
• There must be a fixed number of
resources to allocate
• No process may exit while holding
resources
127
Strategies once Deadlock
Detected
• Abort all deadlocked processes
• Back up each deadlocked process to
some previously defined checkpoint, and
restart all process
– Original deadlock may occur
• Successively abort deadlocked processes
until deadlock no longer exists
• Successively preempt resources until
deadlock no longer exists
128
Selection Criteria Deadlocked
Processes
• Least amount of processor time
consumed so far
• Least number of lines of output
produced so far
• Most estimated time remaining
• Least total resources allocated so far
• Lowest priority

129
Strengths and Weaknesses of the
Strategies

130
Race Condition

131
Process Synchronization
The problem of process synchronization
arises from the need to share resources.
This sharing requires coordination and
cooperation to ensure correct operation.
Consider the following scenario: A husband and
wife (2 processes) attempt to deposit cash to
the same bank account (a joint account)
simultaneously.
Suppose the husband wishes to deposit $300
and the wife wishes to deposit $500.
132
• If one completes before the other starts,
the combined effect would be to add
$800 to the balance.
• However, if they both try to deposit at
the exact same time what would be the
effect?
• Suppose the initial balance is $30 and
the two processes run on different CPUs.
One possible result would be:

133
Process P1 loads 30 into its register.
Process P2 loads 30 into its register.
P1 adds 300 to its register, giving 330
P2 adds 500 to its register, giving 530
P1 stores 330 in Balance
P2 stores 530 in Balance
The net effect is to add only $500 to the
balance!
This situation is known in OS term as a
race condition.
134
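The load/add/store interleaving above can be reproduced step by step, with Python generators standing in for the two processes (a demonstration sketch; the names are made up for this example):

```python
def deposit_steps(balance_get, balance_set, amount):
    """Yield between the three machine-level steps of a deposit:
    load the balance, add the amount, store the result."""
    reg = balance_get()     # load Balance into a "register"
    yield
    reg += amount           # add the deposit amount
    yield
    balance_set(reg)        # store the register back to Balance
    yield

balance = 30
get = lambda: balance
def put(v):
    global balance
    balance = v

# Interleave the two deposits exactly as in the trace above:
p1 = deposit_steps(get, put, 300)   # husband
p2 = deposit_steps(get, put, 500)   # wife
next(p1)   # P1 loads 30
next(p2)   # P2 loads 30
next(p1)   # P1 adds 300, giving 330
next(p2)   # P2 adds 500, giving 530
next(p1)   # P1 stores 330
next(p2)   # P2 stores 530, overwriting P1's update
print(balance)  # 530, not the expected 830: the race lost $300
```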
A race condition occurs when the
scheduling of two processes is so critical
that the various orders of scheduling
them result in different computations.

Race conditions result from the sharing of


data or resources among two or more
processes.

135
How do we avoid Race Conditions?

• Each process has a segment of code in
which shared memory is accessed. This
segment is called the critical section.
• Must ensure that when one process is
executing in its critical section, no other
process is allowed to execute in the
critical section – mutual exclusion.
• This condition alone is not sufficient to
avoid race conditions.
136
The following conditions must also hold:
• Mutual exclusion
• Progress: the decision on which process enters
its critical section next cannot be postponed indefinitely.
• Bounded waiting: a bound must exist on the
number of times other processes are allowed to
enter their critical sections after a process has
made a request to enter its critical section.
• No assumptions regarding the relative speed of
execution of the processes must be made.

137
Interrupts

138
What Are Interrupts?
• Interrupts alter a program’s flow of
control
– Behavior is similar to a procedure call
• Interrupt causes transfer of control to an
interrupt service routine (ISR)
• ISR is also called a handler
• When the ISR is completed, the original
program resumes execution
• Interrupts provide an efficient way to
handle unanticipated events
Interrupts
• Interrupt the normal sequencing of the
processor
• Most I/O devices are slower than the
processor
– Processor must pause to wait for device

140
Examples of interrupts

 Mouse moved.
 Disk drive at sector/track position (old days).
 Keyboard key pressed.
 Printer out of paper.
 Video card wants memory access.
 Modem sending or receiving.
 USB scanner has data.
Events that may trigger an interrupt:

• The completion of an I/O operation


• Arrival of a higher priority process
• Division by zero
• Invalid memory access
• A request for a specific OS service e.g.
restarting a device

142
Types of interrupts
 Synchronous/Asynchronous: Synchronous if it
occurs at the same place, every time the program is
executed with the same data and memory allocation.
Asynchronous interrupts are those that occur
unexpectedly.

 Internal/External: Internal interrupts arise from illegal or
erroneous use of an instruction or data; they are also called
traps. External interrupts arise from I/O devices, timing
devices, or circuits such as the power supply.

 Software/Hardware: A software interrupt is initiated by
executing an instruction.
Classes of Interrupts

144
Interrupt Handler
• Program to service a particular I/O
device
• Generally part of the operating system
• Suspends the normal sequence of
execution

145
Interrupt Cycle

146
Interrupt Cycle
• Processor checks for interrupts
• If no interrupts fetch the next instruction
for the current program
• If an interrupt is pending, suspend
execution of the current program, and
execute the interrupt-handler routine

147
Simple Interrupt Processing

148
THE END

149
