OS Unit-2

This document covers process scheduling in operating systems, detailing the basic concepts, CPU scheduling types, and various algorithms such as FCFS, SJF, and Round Robin. It also discusses deadlocks, their characterization, and methods for handling them, including prevention and avoidance strategies. Key scheduling criteria like CPU utilization, throughput, and turnaround time are highlighted to compare different scheduling algorithms.


UNIT-2

PROCESS SCHEDULING
Basic Concepts
• In a single-processor system,
▪ Only one process may run at a time.
▪ Other processes must wait until the CPU is rescheduled.
• Objective of multiprogramming:
▪ To have some process running at all times, in order to maximize
CPU utilization.
CPU Scheduler
• This scheduler
▪ selects a waiting-process from the ready-queue and
▪ allocates CPU to the waiting-process.
• The ready-queue may be implemented as a FIFO queue, a priority queue, a tree, or an unordered linked list.
• The records in the queues are generally process control blocks (PCBs) of the
processes.

CPU Scheduling
Four situations under which CPU scheduling decisions take place:
1. When a process switches from the running state to the waiting state. For ex; I/O
request.
2. When a process switches from the running state to the ready state. For ex:
when an interrupt occurs.
3. When a process switches from the waiting state to the ready state. For ex:
completion of I/O.
4. When a process terminates.
• Scheduling under 1 and 4 is non-preemptive. Scheduling under 2 and 3 is preemptive.

Non Preemptive Scheduling


• Once the CPU has been allocated to a process, the process keeps the CPU
until it releases the CPU either
▪ by terminating or
▪ by switching to the waiting state.

Preemptive Scheduling
• This is driven by the idea of prioritized computation.
• Processes that are runnable may be temporarily suspended
• Disadvantages:
1. Incurs a cost associated with access to shared-data.
2. Affects the design of the OS kernel.

Dispatcher
• It gives control of the CPU to the process selected by the short-term scheduler.
• The function involves:
1. Switching context
2. Switching to user mode and
3. Jumping to the proper location in the user program to restart that program
• It should be as fast as possible, since it is invoked during every process switch.
• Dispatch latency means the time taken by the dispatcher to
▪ stop one process and
▪ start another running.

SCHEDULING CRITERIA:
The choice of algorithm for a particular situation depends upon the properties of the
various algorithms. Many criteria have been suggested for comparing CPU-scheduling
algorithms. The criteria include the following:

CPU utilization: We want to keep the CPU as busy as possible. Conceptually, CPU utilization
can range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly
loaded system) to 90 percent (for a heavily used system).

Throughput: This is the number of processes completed per unit of time. The unit of time may
be one minute or one hour.
Turnaround time: This important criterion tells how long it takes to execute a
process. The interval from the time of submission of a process to the time of completion is the
turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory,
waiting in the ready queue, executing on the CPU, and doing I/O.

Waiting time: This is the sum of the periods a process spends waiting in the ready queue.
Response time: This is the interval from the submission of a request until the first response is
produced, not until the complete output is displayed to the user.

SCHEDULING ALGORITHMS
CPU scheduling deals with the problem of deciding which of the processes in the ready
queue is to be allocated the CPU.
Following are some scheduling algorithms:
1. FCFS scheduling (First Come First Served)
2. SJF scheduling (Shortest Job First)
3. Priority scheduling
4. SRTF scheduling
5. Round Robin scheduling
6. Multilevel Queue scheduling
7. Multilevel Feedback Queue scheduling

Non-Preemptive Scheduling
• FCFS scheduling (First Come First Served)
• SJF scheduling (Shortest Job First)
• Priority scheduling

Preemptive Scheduling
• SRTF scheduling
• Round Robin scheduling

1. FCFS Scheduling
• The process that requests the CPU first is allocated the CPU first.
• The implementation is easily done using a FIFO queue.
• Procedure:
1. When a process enters the ready-queue, its PCB is linked onto the tail of the queue.
2. When the CPU is free, the CPU is allocated to the process at the queue’s head.
3. The running process is then removed from the queue.
Advantage:
1. Code is simple to write & understand.
Disadvantages:
1. Convoy effect: All other processes wait for one big process to get off the CPU.
2. Non-preemptive (a process keeps the CPU until it releases it).
3. Not good for time-sharing systems.
4. The average waiting time is generally not minimal.

• Example: Suppose that the processes arrive at time 0 in the order P1, P2, P3, with
CPU-burst times of 24, 3, and 3 ms respectively.
• The Gantt chart for the schedule is: P1 (0-24), P2 (24-27), P3 (27-30).

• Waiting time for P1 = 0; P2 = 24; P3 = 27.

Average waiting time: (0 + 24 + 27)/3 = 17 ms
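The FCFS computation above can be sketched in a few lines of Python (the burst times of 24, 3 and 3 ms are the values implied by the quoted waiting times):

```python
# FCFS: each process waits for the total burst time of everything queued before it.
def fcfs_waiting_times(bursts):
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)       # time spent waiting before its first (and only) run
        elapsed += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])  # P1, P2, P3
print(waits)                            # [0, 24, 27]
print(sum(waits) / len(waits))          # 17.0
```

Reordering the list so that the long burst comes last drops the average sharply, which is exactly the convoy effect described above.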

2. SJF Scheduling
• The CPU is assigned to the process that has the smallest next CPU burst.
• If two processes have the same length CPU burst, FCFS scheduling is used to
break the tie.
• For long-term scheduling in a batch system, we can use the process time
limit specified by the user as the 'length'.
• SJF can't be implemented at the level of short-term scheduling, because
there is no way to know the length of the next CPU burst.
• Advantage:
1. SJF is optimal, i.e. it gives the minimum average waiting time
for a given set of processes.
• Disadvantage:
1. Determining the length of the next CPU burst is difficult.

• An SJF algorithm may be either 1) non-preemptive or 2) preemptive.


1. Non-preemptive SJF
The current process is allowed to finish its CPU burst.
2. Preemptive SJF
If the new process has a shorter next CPU burst than what is left of the
executing process, that process is preempted. It is also known as SRTF scheduling
(Shortest-Remaining-Time-First).

• Example (for non-preemptive SJF): Consider the following set of processes, all
arriving at time 0, with the length of the CPU-burst time given in milliseconds:
P1 = 6, P2 = 8, P3 = 7, P4 = 3.

• For non-preemptive SJF, the Gantt chart is: P4 (0-3), P1 (3-9), P3 (9-16), P2 (16-24).

• Waiting time for P1 = 3; P2 = 16; P3 = 9; P4 = 0. Average waiting time: (3 + 16 + 9 +
0)/4 = 7 ms
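A minimal sketch of non-preemptive SJF for processes that all arrive at time 0 (the bursts of 6, 8, 7 and 3 ms are the assumed example values consistent with the waiting times above):

```python
# Non-preemptive SJF, all arrivals at time 0: run the bursts in ascending order.
def sjf_waiting_times(bursts):
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed          # everything scheduled earlier was shorter
        elapsed += bursts[i]
    return waits

waits = sjf_waiting_times([6, 8, 7, 3])  # P1..P4
print(waits)                             # [3, 16, 9, 0]
print(sum(waits) / len(waits))           # 7.0
```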

3. Preemptive SJF / SRTF
• Example: Consider the following set of processes, with arrival times and the
length of the CPU-burst time given in milliseconds: P1 (arrival 0, burst 8),
P2 (arrival 1, burst 4), P3 (arrival 2, burst 9), P4 (arrival 3, burst 5).

• For preemptive SJF, the Gantt chart is: P1 (0-1), P2 (1-5), P4 (5-10), P1 (10-17), P3 (17-26).

• The average waiting time is ((10 - 1) + (1 - 1) + (17 - 2) + (5 - 3))/4 = 26/4 = 6.5 ms.
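SRTF can be sketched as a millisecond-by-millisecond simulation (the arrival times 0, 1, 2, 3 and bursts 8, 4, 9, 5 ms are the assumed example values consistent with the computation above):

```python
# SRTF: at every tick, run the ready process with the shortest remaining time,
# preempting whatever ran before.
def srtf_waiting_times(arrivals, bursts):
    n = len(bursts)
    remaining = list(bursts)
    finish = [0] * n
    t = 0
    while any(remaining):
        ready = [i for i in range(n) if arrivals[i] <= t and remaining[i] > 0]
        if not ready:
            t += 1
            continue
        i = min(ready, key=lambda j: remaining[j])
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            finish[i] = t
    # waiting time = turnaround time - burst time
    return [finish[i] - arrivals[i] - bursts[i] for i in range(n)]

waits = srtf_waiting_times([0, 1, 2, 3], [8, 4, 9, 5])
print(waits)                    # [9, 0, 15, 2]
print(sum(waits) / len(waits))  # 6.5
```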

4. Priority Scheduling
• A priority is associated with each process.
• The CPU is allocated to the process with the highest priority.
• Equal-priority processes are scheduled in FCFS order
• Advantage:
▪ Higher priority processes can be executed first.
• Disadvantage:
▪ Indefinite blocking, where low-priority processes are left waiting
indefinitely for CPU. Solution: Aging is a technique of increasing
priority of processes that wait in system for a long time.
• Example: Consider the following set of processes, assumed to have arrived at time
0, in the order P1, P2, ..., P5, with CPU-burst times 10, 1, 2, 1, 5 ms and
priorities 3, 1, 4, 5, 2 respectively (a smaller number denotes a higher priority;
the original table image is missing, and these are the standard values consistent
with the stated average).
• The Gantt chart for the schedule is: P2 (0-1), P5 (1-6), P1 (6-16), P3 (16-18), P4 (18-19).

• The average waiting time is 8.2 milliseconds.
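Priority scheduling with simultaneous arrivals is the same sort step as SJF, only keyed on priority. A sketch (bursts and priorities are the assumed example values above):

```python
# Non-preemptive priority scheduling, all arrivals at time 0.
# A lower priority number means a higher priority.
def priority_waiting_times(bursts, priorities):
    order = sorted(range(len(bursts)), key=lambda i: priorities[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

waits = priority_waiting_times([10, 1, 2, 1, 5], [3, 1, 4, 5, 2])  # P1..P5
print(waits)                    # [6, 0, 16, 18, 1]
print(sum(waits) / len(waits))  # 8.2
```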

5. Round Robin Scheduling


• Designed especially for time-sharing systems.
• It is similar to FCFS scheduling, but with preemption.
• A small unit of time, called a time quantum (or time slice), is defined.
• A time quantum generally ranges from 10 to 100 ms.
• The ready-queue is treated as a circular queue.
• The CPU scheduler
▪ goes around the ready-queue and
▪ allocates the CPU to each process for a time interval of up to 1
time quantum.
• To implement:
The ready-queue is kept as a FIFO queue of processes
• CPU scheduler
1. Picks the first process from the ready-queue.
2. Sets a timer to interrupt after 1 time quantum and
3. Dispatches the process.
• One of two things will then happen.
1. The process may have a CPU burst of less than 1 time quantum. In this
case, the process itself will release the CPU voluntarily.
2. If the CPU burst of the currently running process is longer than 1 time
quantum, the timer will go off and will cause an interrupt to the OS.
The process will be put at the tail of the ready-queue.
• Advantage:
▪ Better response time than SJF.
• Disadvantage:
▪ Higher average turnaround time than SJF.
• Example: Consider the following set of processes that arrive at time 0, with
CPU-burst times P1 = 24, P2 = 3, P3 = 3 ms and a time quantum of 4 ms.
• The Gantt chart for the schedule is: P1 (0-4), P2 (4-7), P3 (7-10), P1 (10-30).

• The average waiting time is 17/3 = 5.66 milliseconds.
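The circular ready-queue can be sketched with a deque (bursts of 24, 3 and 3 ms and a 4 ms quantum are assumed, consistent with the 17/3 ms average above):

```python
from collections import deque

# Round robin: run each process for at most one quantum; if it is not finished,
# put it back at the tail of the ready-queue.
def rr_waiting_times(bursts, quantum):
    n = len(bursts)
    remaining = list(bursts)
    finish = [0] * n
    queue = deque(range(n))             # all processes arrive at time 0
    t = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)             # quantum expired: back to the tail
        else:
            finish[i] = t
    # waiting time = finish - burst (arrival is 0 for every process)
    return [finish[i] - bursts[i] for i in range(n)]

print(rr_waiting_times([24, 3, 3], 4))  # [6, 4, 7] -> average 17/3
```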

Multilevel Queue Scheduling


• Useful for situations in which processes are easily classified into different groups.
• For example, a common division is made between foreground (or interactive) processes and
background (or batch) processes.
• The ready-queue is partitioned into several separate queues (Figure 2.19).
• The processes are permanently assigned to one queue based on some property like
▪ memory size
▪ process priority or
▪ process type.
• Each queue has its own scheduling algorithm.
For example, separate queues might be used for foreground and background processes.

Fig: Multilevel queue scheduling

Multilevel Feedback Queue Scheduling


• A process may move between queues
• The basic idea: Separate processes according to the features of their CPU bursts.
For example
1. If a process uses too much CPU time, it will be moved to a lower-priority queue.
This scheme leaves I/O-bound and interactive processes in the higher-priority
queues.
2. If a process waits too long in a lower-priority queue, it may be moved to a higher-
priority queue. This form of aging prevents starvation.

Figure 2.20 Multilevel feedback queues

In general, a multilevel feedback queue scheduler is defined by the following parameters:


1. The number of queues.
2. The scheduling algorithm for each queue.
3. The method used to determine when to upgrade a process to a higher priority queue.
4. The method used to determine when to demote a process to a lower priority queue.
5. The method used to determine which queue a process will enter when
that process needs service
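A toy sketch of these parameters for a two-level feedback queue; the queue count, the per-level quanta and the demote-on-quantum-expiry rule below are illustrative assumptions, not a fixed design:

```python
from collections import deque

# Two-level multilevel feedback queue: level 0 (quantum 4) has higher priority;
# a process that uses up its quantum is demoted to level 1 (quantum 8).
def mlfq_finish_times(bursts, quanta=(4, 8)):
    remaining = dict(enumerate(bursts))
    queues = [deque(remaining), deque()]     # every process starts in the top queue
    t, finish = 0, {}
    while any(queues):
        level = 0 if queues[0] else 1        # always serve the top queue first
        i = queues[level].popleft()
        run = min(quanta[level], remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] == 0:
            finish[i] = t
        else:
            queues[1].append(i)              # quantum expired: demote
    return finish

print(mlfq_finish_times([10, 3]))            # {1: 7, 0: 13}
```

Note how the short (interactive-style) burst finishes quickly while the CPU-bound process is pushed down, matching the idea described above.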

Deadlocks
• Deadlock is a situation in which each process in a set is waiting for a resource
held by another process in the set; because every holder is itself waiting, none of
the processes ever changes its state.
• For example, let P1 and P2 be two processes and R1 and R2 two resources, where
P1 holds R1 while waiting for R2, and P2 holds R2 while waiting for R1.
• Here P1 and P2 will never change their state from waiting. This situation is called
deadlock.
System Model
• A system consists of a finite number of resources to be distributed among a
number of competing processes.
• If a system has two CPUs, then the resource type CPU has two instances. Similarly, the
resource type printer may have five instances.
• If a process requests an instance, the allocation should satisfy the request.
• A process must request a resource before using it, and must release the resource after using it.
• A process may request as many resources as it requires to carry out its task.
• A process may utilize a resource in only the following sequence:
• Request. The process requests the resource. If the request cannot be granted immediately
(for example, if the resource is being used by another process), then the requesting process
must wait until it can acquire the resource.
• Use. The process can operate on the resource.
• Release. The process releases the resource.
The request and release of resources may be system calls.

Deadlock Characterization
The features that characterize deadlock are
• Necessary Conditions
• Resource-Allocation Graph

Necessary Conditions
A deadlock situation can arise if the following four conditions occur simultaneously in a
system:
• Mutual exclusion. One process at a time can use the resource. If another process
requests that resource, the requesting process must be delayed until the resource
has been released.
• Hold and wait. A process must be holding at least one resource and waiting to
acquire additional resources that are currently being held by other processes.
• No preemption. Resources cannot be preempted; that is, a resource can be released
only voluntarily by the process holding it, after that process has completed its task.
• Circular wait. A set P0, P1, ..., Pn of waiting processes must exist such that P0 is
waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn−1 is
waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
Resource-Allocation Graph

• Deadlocks can be described more precisely in terms of a directed graph


called a system resource-allocation graph.
• This graph consists of a set of vertices V and a set of edges E.
• The set of vertices V is partitioned into two different types
• P = {P1, P2, ..., Pn}, the set consisting of all the active processes in the system.
• R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.
• Pi → Rj signifies that process Pi has requested an instance of resource type Rj.
It is called a request edge.
• Rj → Pi signifies that an instance of resource type Rj has been allocated to
process Pi. It is called an assignment edge.
• Pictorially, we represent process Pi as a circle and each resource
type Rj as a rectangle.
• In the example graph (figure not shown), two minimal cycles exist in the system:
• P1 → R1 → P2 → R3 → P3 → R2 → P1
• P2 → R3 → P3 → R2 → P2
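Finding such cycles is a plain depth-first search over the directed graph. A sketch (the adjacency encoding below is a hypothetical reconstruction of the two cycles listed above, since the figure itself is not shown):

```python
# Cycle detection in a directed resource-allocation graph via DFS colouring:
# a "gray" node reached again along the current path means a back edge, i.e. a cycle.
def has_cycle(edges):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def dfs(v):
        color[v] = GRAY
        for w in edges.get(v, []):
            if color.get(w, WHITE) == GRAY:
                return True                 # back edge: cycle found
            if color.get(w, WHITE) == WHITE and dfs(w):
                return True
        color[v] = BLACK
        return False

    return any(color.get(v, WHITE) == WHITE and dfs(v) for v in edges)

# Request and assignment edges encoding the two cycles above.
rag = {"P1": ["R1"], "R1": ["P2"], "P2": ["R3"], "R3": ["P3"],
       "P3": ["R2"], "R2": ["P1", "P2"]}
print(has_cycle(rag))   # True
```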

Methods for Handling Deadlocks


• To ensure that deadlocks never occur, the system can use either
• Deadlock prevention – Ensures that deadlock can never occur in the system,
by making at least one of the necessary conditions impossible.
• Deadlock avoidance – Avoids deadlock, i.e. does not grant a resource
request from a process if granting it may lead to deadlock.
• If a system does not employ either a deadlock-prevention or a deadlock-
avoidance algorithm, then a deadlock situation may arise. In this case following
can be done
• Deadlock detection & recovery – Allows the system to enter deadlock state &
then recover

Deadlock Prevention
• For a deadlock to occur, each of the four necessary conditions must hold.
• By ensuring that at least one of these conditions cannot hold, we can prevent
the occurrence of a deadlock.
• Mutual Exclusion
– The mutual-exclusion condition cannot be denied for intrinsically
nonsharable resources; a printer, for example, cannot be
simultaneously shared by several processes.
– Sharable resources, such as read-only files, do not require the mutual-
exclusion condition.
– Several processes can open a read-only file at the same time.

• Hold and Wait


– To ensure that the hold-and-wait condition never occurs in the system, we
must guarantee that, whenever a process requests a resource, it does not
hold any other resources.
– One protocol that we can use requires each process to request and be
allocated all its resources before it begins execution.
– An alternative protocol allows a process to request resources only
when it has none of the resources.
Disadvantages
– Resource utilization may be low – Processes need to know in advance
what resources they need
– Starvation is possible - A process that needs several popular resources
may have to wait indefinitely.

• No Preemption
– If a process is holding some resources and requests another resource that
cannot be immediately allocated to it (that is, the process must wait),
then all resources the process is currently holding are preempted.
– Alternatively, if a process requests some resources, we first check
whether they are available. If they are, we allocate them. If they are not,
we check whether they are allocated to some other process that is
waiting for additional resources.
– If so, we preempt the desired resources from the waiting process and
allocate them to the requesting process.
• Circular wait
– Let R = {R1, R2, ..., Rm} be the set of resource types.
– We assign a unique integer F(Ri) to each resource type, which allows us to
compare two resources and to determine whether one precedes another in the
enumeration.
– Each process can then request resources only in increasing order of this
enumeration.
– If a process holds Ri and then requests Rj, the request is legal only if
F(Rj) > F(Ri).
– Alternatively, whenever a process requests an instance of resource type Rj,
it must first have released any resource Ri such that F(Ri) >= F(Rj).
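The ordering rule can be sketched as follows; the resource names and the ranking function F are hypothetical examples:

```python
# Circular-wait prevention: impose a total order F on resource types and
# acquire resources only in increasing order of F, so no cycle can form.
F = {"tape_drive": 1, "disk_drive": 5, "printer": 12}   # assumed example ranking

def acquisition_order(resources):
    # Sort the requested resources so each later request Rj satisfies F(Rj) > F(Ri).
    return sorted(resources, key=lambda r: F[r])

print(acquisition_order(["printer", "tape_drive", "disk_drive"]))
# ['tape_drive', 'disk_drive', 'printer']
```

Because every process acquires in the same global order, two processes can never each hold a resource the other wants next.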

Deadlock Avoidance

• The idea is to grant only those requests for available resources that cannot
possibly result in a deadlock state.
• The algorithm dynamically examines the resource-allocation state to ensure that
a circular-wait condition can never exist.

Safe State
• The critical part of deadlock avoidance is the safe state.
• A state is safe if the system can allocate resources to each process in
some order and still avoid a deadlock.
• A system is in a safe state only if there exists a safe sequence.
• A sequence of processes <P1, P2, ..., Pn> is a safe sequence if, for each Pi,
the resources that Pi can still request can be satisfied by the currently available
resources plus the resources held by all Pj with j < i.
• If the resources that process Pi needs are not immediately available,
then Pi can wait until all Pj with j < i have finished.
• A Safe state is not a deadlock state
• A deadlock state is an unsafe state
• Not all unsafe states are deadlocks; however, an unsafe state may lead to a
deadlock.

Deadlock avoidance algorithm


• Resource-Allocation-Graph Algorithm
• Banker’s Algorithm

1. Resource-Allocation-Graph Algorithm
• If every resource in the system has only one instance, then the RAG can be
used for deadlock avoidance.
• In addition to the request and assignment edges, it contains a new type of
edge called a claim edge.
• A claim edge is represented in the graph by - - - -> (dashed)
• When a process Pi requests resource Rj, the claim edge
Pi - - - > Rj is converted to a request edge.
• Similarly, when a resource Rj is released by Pi, the assignment edge Rj → Pi is
converted back to a claim edge Pi - - - > Rj.
• Suppose that process Pi requests resource Rj. The request can be granted only if
converting the request edge Pi → Rj to an assignment edge Rj → Pi does not
result in the formation of a cycle in the resource-allocation graph.
• If no cycle exists, then the allocation of the resource will leave the system in a
safe state.
• If a cycle is found, then the allocation will put the system in an unsafe state.
• In that case, process Pi will have to wait for its requests to be satisfied.

• Consider the following RAG (figure not shown).

• This system is in a safe state: there are no cycles in the graph.


• Let us now assign R2 to P2.

• The system is now in an unsafe state, since the graph contains a cycle.


2. Banker’s Algorithm

• The name was chosen because the algorithm could be used in a banking system, to
ensure that the bank never allocates its available cash in such a way that it can
no longer satisfy the needs of all its customers.
• When a new process enters the system, it must declare the maximum
number of instances of each resource type that it may need.
• This number may not exceed the total number of resources in the system.
• When a user requests a set of resources, the system must determine
whether the allocation of these resources will leave the system in a safe
state.
• The resources are allocated only if the allocation leaves the system in a safe
state; otherwise, the process must wait.

Data structures for banker’s algorithm

• Let n be the number of processes in the system and m the number of

resource types.

• Available: A vector of length m. If Available[j] = k, then k instances of resource type Rj are available.


• Max: An n × m matrix that defines the maximum demand of each process. If
Max[i][j] = k, then process Pi may request at most k instances of resource type
Rj.
• Allocation: An n × m matrix that defines the number of resources of each type
currently allocated to each process. If Allocation[i][j] = k, then process Pi is
currently allocated k instances of resource type Rj.
• Need: An n × m matrix that indicates the remaining resource need of each process. If
Need[i][j] = k, then process Pi may need k more instances of resource type
Rj to complete its task. Note that Need[i][j] = Max[i][j] - Allocation[i][j].

An Illustrative Example - Banker’s algorithm


• Consider a system with five processes P0 through P4 and three resource types A,
B, and C.

• Resource type A has ten instances, resource type B has five instances, and
resource type C has seven instances.

• The following is the resource-allocation state (the original table image is
missing; the values below are the standard textbook figures, consistent with the
walk-through that follows):

Process    Allocation (A B C)    Max (A B C)
P0         0 1 0                 7 5 3
P1         2 0 0                 3 2 2
P2         3 0 2                 9 0 2
P3         2 1 1                 2 2 2
P4         0 0 2                 4 3 3
Available: 3 3 2

• The system is currently in a safe state, since the sequence
<P1, P3, P4, P2, P0> satisfies the safety criteria.
• Now suppose that process P1 requests one additional instance of resource type
A and two instances of resource type C,
• so Request1 = (1, 0, 2).
• P1 acquires the requested resources and releases everything after completion. The
available resources are then (5, 3, 2).
• P3 next requests and acquires the resources (0, 1, 1), and releases everything
after completion. The available resources are then (7, 4, 3).
• Similarly, after P4 completes, the available resources are (7, 4, 5).
• After P2 completes, the available resources are (10, 4, 7).
• Finally, after P0 completes, the available resources are (10, 5, 7).
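The safety check behind this walk-through can be sketched as follows. The Allocation and Max matrices are the standard textbook values assumed for this example; note that the greedy scan below may report a different, equally valid safe sequence than <P1, P3, P4, P2, P0>:

```python
# Banker's safety algorithm: repeatedly find a process whose Need fits in Work,
# pretend it runs to completion, and reclaim its allocation.
def is_safe(available, allocation, need):
    n, m = len(allocation), len(available)
    work, finished, sequence = list(available), [False] * n, []
    while len(sequence) < n:
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                work = [work[j] + allocation[i][j] for j in range(m)]
                finished[i] = True
                sequence.append(i)
                break
        else:
            return False, None          # no process can proceed: unsafe
    return True, sequence

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
need = [[maximum[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]
print(is_safe([3, 3, 2], allocation, need))  # (True, [1, 3, 0, 2, 4])
```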
Deadlock Detection
• If a system does not employ either a deadlock-prevention or a deadlock- avoidance
algorithm, then a deadlock situation may occur. In this environment, the system
may provide:
• An algorithm that examines the state of the system to determine
whether a deadlock has occurred
• An algorithm to recover from the deadlock

Types of detection algorithm


• Single Instance of Each Resource Type
– This uses a variant of the resource-allocation graph called a wait-for
graph. A wait-for graph is obtained by removing the resource nodes and
collapsing the appropriate edges.
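With single-instance resources, collapsing the resource nodes can be sketched directly (the two-process example below is hypothetical):

```python
# Build a wait-for graph: Pi -> Pk whenever Pi waits for a resource held by Pk.
# request maps each waiting process to the single resource it wants;
# assignment maps each resource to the single process holding it.
def wait_for_graph(request, assignment):
    return {p: [assignment[r]] for p, r in request.items() if r in assignment}

wfg = wait_for_graph({"P1": "R1", "P2": "R2"}, {"R1": "P2", "R2": "P1"})
print(wfg)  # {'P1': ['P2'], 'P2': ['P1']} -- the cycle P1 -> P2 -> P1 means deadlock
```

A deadlock exists if and only if this collapsed graph contains a cycle, which an ordinary DFS can detect.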

• Several Instances of a Resource Type


– This algorithm uses data structures similar to those used in the banker's
algorithm.

Recovery from Deadlock


• When a detection algorithm determines that a deadlock exists, the
system may inform the operator that a deadlock has occurred and let the operator
deal with the deadlock manually.

• Another possibility is to let the system recover from the deadlock


automatically.

• One is simply to abort one or more processes to break the circular wait.

• The other is to preempt some resources from one or more of the deadlocked
processes.
• Process Termination

– Abort all deadlocked processes

– Abort one process at a time until the deadlock cycle is eliminated

• Resource Preemption

– Selecting a victim

– Rollback

– Starvation
