Chapter 2: OS Scheduling

Topics: Processes, Process Memory, Process State, Process Control Block, Scheduling
A process is:
A program in execution.
An instance of a program running on a computer.
The entity that can be assigned to and executed on a processor.
A unit of activity characterized by a single sequential thread of execution, a current state, and an associated set of system resources (memory image, open files, locks, etc.).
When a program is loaded into memory and becomes a process, its memory image is divided into four sections: stack, heap, text, and data (text: the program code; data: global and static variables; heap: memory allocated dynamically at run time; stack: function parameters, return addresses, and local variables).
A program is a passive entity, such as a file containing a list of instructions stored on disk (often called an executable file). In contrast, a process is an active entity, with a program counter specifying the next instruction to execute and a set of associated resources.
When a process executes, it passes through different
states. The state of a process is defined in part by the
current activity of that process.
These states may differ between operating systems, and their names are not standardized.
A Process may be in one of the following states:
1. New
2. Ready
3. Running
4. Waiting
5. Terminated
New - The process is being created.
Ready - The process is waiting to be assigned to a processor.
Running - Instructions are being executed.
Waiting - The process is waiting for some event to occur (such as completion of I/O).
Terminated - The process has finished execution.
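As an illustration (added here, not part of the original slides), the five states and the typical transitions between them can be written down in a few lines of Python:

from enum import Enum, auto

class ProcessState(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    TERMINATED = auto()

# Typical transitions between the five states (illustrative sketch).
TRANSITIONS = {
    ProcessState.NEW: {ProcessState.READY},                        # admitted
    ProcessState.READY: {ProcessState.RUNNING},                    # dispatched by the scheduler
    ProcessState.RUNNING: {ProcessState.READY,                     # interrupt / quantum expiry
                           ProcessState.WAITING,                   # I/O or event wait
                           ProcessState.TERMINATED},               # exit
    ProcessState.WAITING: {ProcessState.READY},                    # I/O or event completion
    ProcessState.TERMINATED: set(),
}

def can_move(src, dst):
    """Return True if the transition src -> dst is allowed."""
    return dst in TRANSITIONS[src]

assert can_move(ProcessState.RUNNING, ProcessState.WAITING)
assert not can_move(ProcessState.WAITING, ProcessState.RUNNING)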
Long-Term Scheduler (Job Scheduler) =>
Selects jobs from the job pool (held on disk) and loads them into the ready queue.
The long-term scheduler executes much less frequently.
The long-term scheduler may need to be invoked only
when a process leaves the system.
Because of the longer interval between executions, the
long-term scheduler can afford to take more time to decide
which process should be selected for execution.
Controls the degree of multiprogramming.
Mostly used in batch systems.
The job mix contains both CPU-bound and I/O-bound processes:
An I/O-bound process is one that spends more of its time doing
I/O than it spends doing computations.
A CPU-bound process, in contrast, generates I/O requests
infrequently, using more of its time doing computations.
The long-term scheduler should therefore select a good mix of I/O-bound and CPU-bound processes.
If all processes are I/O bound, the ready queue will almost always
be empty, and the short-term scheduler will have little to do.
If all processes are CPU bound, the I/O waiting queue will
almost always be empty, devices will go unused, and again the
system will be unbalanced.
The system with the best performance will thus have a
combination of CPU-bound and I/O-bound processes.
Short-Term Scheduler =>
Also called the CPU scheduler.
Selects a process from the ready queue and allocates the CPU to it.
The short-term scheduler must select a new process for the
CPU frequently.
The short-term scheduler executes at least once every 100
milliseconds.
Because of the short time between executions, the short-term
scheduler must be fast.
Medium-Term Scheduler =>
Time-sharing systems introduced an additional, intermediate level of scheduling: the medium-term scheduler.
It removes processes from memory (and from active contention for the CPU) and thus reduces the degree of multiprogramming.
Later, the process can be reintroduced into memory, and its
execution can be continued where it left off. This scheme is called
swapping.
The process is swapped out, and is later swapped in, by the
medium-term scheduler.
Swapping may be necessary to improve the process mix or
because a change in memory requirements has overcommitted
available memory, requiring memory to be freed up.
Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
A typical process involves both I/O time and CPU time.
In a uniprogramming system such as MS-DOS, the time spent waiting for I/O is wasted, and the CPU is idle during this time.
In multiprogramming systems, one process can use
CPU while another is waiting for I/O. This is possible only
with process scheduling.
Process execution consists of a cycle of CPU Burst and I/O Burst.
Process alternates between these two states.
Process execution begins with a CPU burst, which is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on.
A CPU burst is when the process is being executed on the CPU; an I/O burst is when the process is waiting for I/O before further execution.
CPU-scheduling decisions may take place under the following
four circumstances:
1. When a process switches from the running state to the
waiting state (for example, as the result of an I/O request or
an invocation of wait() for the termination of a child process).
2. When a process switches from the running state to the ready
state (for example, when an interrupt occurs)
3. When a process switches from the waiting state to the ready
state (for example, at completion of I/O)
4. When a process terminates.
For conditions 1 and 4 there is no choice - A new process
must be selected.
For conditions 2 and 3 there is a choice - To either continue
running the current process, or select a different one.
If scheduling takes place only under conditions 1 and 4,
the system is said to be non-preemptive, or
cooperative. Under these conditions, once a process starts
running it keeps running, until it either voluntarily blocks or
until it finishes. Otherwise the system is said to be
preemptive.
There are several different criteria to consider when trying to
select the "best" scheduling algorithm for a particular
situation and environment, including:
CPU utilization - keeping the CPU as busy as possible.
Throughput - The number of processes that are completed per
time unit.
For long processes, this rate may be one process per hour; for
short transactions, it may be ten processes per second.
Turnaround time - The interval from the time of submission of a
process to the time of completion is the turnaround time.
It is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
Waiting time - amount of time that a process spends waiting in
the ready queue.
Waiting time is the sum of the periods spent waiting in the
ready queue.
Response time - time from the submission of a request until the
first response is produced.
This measure is the time it takes to start responding, not
the time it takes to output the response.
First-come, first-served (FCFS) is the simplest scheduling algorithm.
The process that requests the CPU first is allocated the
CPU first.
The implementation of the FCFS policy is easily managed
with a FIFO queue.
When a process enters the ready queue, its PCB is linked
onto the tail of the queue. When the CPU is free, it is
allocated to the process at the head of the queue.
The running process is then removed from the queue.
On the negative side, the average waiting time under the
FCFS policy is often quite long.
The average waiting time is not minimal and may vary
substantially if the process’s CPU burst times vary
greatly.
The FCFS scheduling algorithm is nonpreemptive.
Example 1 => Consider the following processes, all arriving at time 0 in the order P1, P2, P3, with the length of the CPU burst given in milliseconds:

    Process   Burst Time
    P1        24
    P2        3
    P3        3

Gantt chart representation=>
    P1 (0-24) | P2 (24-27) | P3 (27-30)

Calculation=>
Waiting Time = Service Time - Arrival Time
(service time = the time at which the process first gets the CPU)
    P1 => 0 - 0 = 0
    P2 => 24 - 0 = 24
    P3 => 27 - 0 = 27
    AWT = (0 + 24 + 27) / 3 = 17 ms

Turnaround Time = Waiting Time + Burst Time
    P1 => 0 + 24 = 24
    P2 => 24 + 3 = 27
    P3 => 27 + 3 = 30
    Avg. Turnaround Time = (24 + 27 + 30) / 3 = 27 ms
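The figures above can be reproduced with a short simulation sketch in Python (the function name, the (name, arrival, burst) tuples, and the printing are illustrative additions, not from the slides):

def fcfs(processes):
    """FCFS scheduling sketch. processes is a list of (name, arrival, burst)
    tuples given in arrival order. Returns per-process (waiting, turnaround)
    times plus the averages."""
    time, results = 0, {}
    for name, arrival, burst in processes:
        start = max(time, arrival)        # CPU may sit idle until the process arrives
        waiting = start - arrival         # waiting time = service time - arrival time
        results[name] = (waiting, waiting + burst)
        time = start + burst
    n = len(processes)
    avg_wt = sum(w for w, _ in results.values()) / n
    avg_tt = sum(t for _, t in results.values()) / n
    return results, avg_wt, avg_tt

# Example 1 above: bursts 24, 3, 3 ms, all arriving at time 0.
res, awt, att = fcfs([("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)])
print(res)       # {'P1': (0, 24), 'P2': (24, 27), 'P3': (27, 30)}
print(awt, att)  # 17.0 27.0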
Example 2 => Consider the following processes, with arrival times and CPU burst lengths given in milliseconds:

    Process   Arrival Time   Burst Time
    P1        0              5
    P2        1              3
    P3        2              8
    P4        3              6

Gantt chart representation=>
    P1 (0-5) | P2 (5-8) | P3 (8-16) | P4 (16-22)

Calculation=>
Waiting Time = Service Time - Arrival Time
    P1 => 0 - 0 = 0
    P2 => 5 - 1 = 4
    P3 => 8 - 2 = 6
    P4 => 16 - 3 = 13
    AWT = (0 + 4 + 6 + 13) / 4 = 5.75 ms

Turnaround Time = Waiting Time + Burst Time
    P1 => 0 + 5 = 5
    P2 => 4 + 3 = 7
    P3 => 6 + 8 = 14
    P4 => 13 + 6 = 19
    ATT = (5 + 7 + 14 + 19) / 4 = 11.25 ms
Shortest-Job-First (SJF) Scheduling=>
The CPU is assigned to the process with the smallest next CPU burst; if two processes have bursts of the same length, FCFS order breaks the tie.

Example 1 => Consider the following processes, all arriving at time 0, with the length of the CPU burst given in milliseconds:

    Process   Burst Time
    P1        18
    P2        2
    P3        2
    P4        6

Gantt chart representation=>
    P2 (0-2) | P3 (2-4) | P4 (4-10) | P1 (10-28)

Calculation=>
Waiting Time = Service Time - Arrival Time
    P1 => 10 - 0 = 10
    P2 => 0 - 0 = 0
    P3 => 2 - 0 = 2
    P4 => 4 - 0 = 4
    AWT = (10 + 0 + 2 + 4) / 4 = 4 ms

Turnaround Time = Waiting Time + Burst Time
    P1 => 10 + 18 = 28
    P2 => 0 + 2 = 2
    P3 => 2 + 2 = 4
    P4 => 4 + 6 = 10
    Avg. Turnaround Time = (28 + 2 + 4 + 10) / 4 = 11 ms
Example 2 => Consider the following set of processes, with arrival times and the length of the CPU burst given in milliseconds (non-preemptive SJF):

    Process   Arrival Time   Burst Time
    P1        0              7
    P2        2              4
    P3        4              1
    P4        5              4

Gantt chart representation=>
    P1 (0-7) | P3 (7-8) | P2 (8-12) | P4 (12-16)

Calculation=>
Waiting Time = Service Time - Arrival Time
    P1 => 0 - 0 = 0
    P2 => 8 - 2 = 6
    P3 => 7 - 4 = 3
    P4 => 12 - 5 = 7
    AWT = (0 + 6 + 3 + 7) / 4 = 4 ms

Turnaround Time = Waiting Time + Burst Time
    P1 => 0 + 7 = 7
    P2 => 6 + 4 = 10
    P3 => 3 + 1 = 4
    P4 => 7 + 4 = 11
    Avg. Turnaround Time = (7 + 10 + 4 + 11) / 4 = 8 ms
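A sketch of non-preemptive SJF that reproduces the waiting and turnaround times of Example 2 (Python; the data layout and function name are illustrative assumptions, not from the slides):

def sjf_nonpreemptive(processes):
    """Non-preemptive SJF sketch: processes is a list of (name, arrival, burst).
    At each scheduling point the arrived process with the shortest burst runs
    to completion. Returns {name: (waiting, turnaround)}."""
    remaining = list(processes)
    time, results = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                                   # CPU idle until the next arrival
            time = min(p[1] for p in remaining)
            continue
        name, arrival, burst = min(ready, key=lambda p: (p[2], p[1]))  # shortest burst, FCFS tie-break
        waiting = time - arrival
        results[name] = (waiting, waiting + burst)
        time += burst
        remaining.remove((name, arrival, burst))
    return results

# Example 2 above.
print(sjf_nonpreemptive([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
# {'P1': (0, 7), 'P3': (3, 4), 'P2': (6, 10), 'P4': (7, 11)}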
Preemptive SJF scheduling is sometimes called shortest-remaining-time-first (SRTF) scheduling.
Example 1 => Consider the same set of processes, with arrival times and the length of the CPU burst given in milliseconds:

    Process   Arrival Time   Burst Time
    P1        0              7
    P2        2              4
    P3        4              1
    P4        5              4

Gantt chart representation=>
    P1 (0-2) | P2 (2-4) | P3 (4-5) | P2 (5-7) | P4 (7-11) | P1 (11-16)

Calculation=>
Waiting Time = (time the process last gets the CPU) - (ms already executed) - Arrival Time
(equivalently, Turnaround Time - Burst Time)
    P1 => 11 - 2 - 0 = 9
    P2 => 5 - 2 - 2 = 1
    P3 => 4 - 0 - 4 = 0
    P4 => 7 - 0 - 5 = 2
    AWT = (9 + 1 + 0 + 2) / 4 = 3 ms

Turnaround Time = Waiting Time + Burst Time
    P1 => 9 + 7 = 16
    P2 => 1 + 4 = 5
    P3 => 0 + 1 = 1
    P4 => 2 + 4 = 6
    Avg. Turnaround Time = (16 + 5 + 1 + 6) / 4 = 7 ms
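A corresponding sketch of SRTF, simulated in 1 ms steps (Python; again an illustrative addition, not the slides' own code):

def srtf(processes):
    """Shortest-remaining-time-first (preemptive SJF) sketch, in 1 ms steps.
    processes is a list of (name, arrival, burst).
    Returns {name: (waiting, turnaround)}."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: at for name, at, _ in processes}
    finish, time = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1
            continue
        # Pick the arrived process with the least remaining work.
        current = min(ready, key=lambda n: (remaining[n], arrival[n]))
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            finish[current] = time
            del remaining[current]
    return {n: (finish[n] - arrival[n] - burst, finish[n] - arrival[n])
            for n, _, burst in processes}

# Example 1 above.
print(srtf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
# {'P1': (9, 16), 'P2': (1, 5), 'P3': (0, 1), 'P4': (2, 6)}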
Problems with SJF Scheduling=>
The real difficulty with SJF is knowing the length of the next CPU burst.
Although the SJF algorithm is optimal (it gives the minimum average waiting time for a given set of processes), it cannot be implemented at the level of short-term scheduling, because there is no way to know the length of the next CPU burst.
One approach is to try to approximate SJF scheduling: we may not know the length of the next CPU burst, but we may be able to predict it, expecting the next burst to be similar in length to the previous ones.
Thus, by computing an approximation of the length of the next CPU burst, we can pick the process with the shortest predicted CPU burst.
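One common way to make this prediction is exponential averaging of the measured burst lengths; a minimal sketch follows (the value of alpha and the initial guess tau0 are assumed example values, not from the slides):

def predict_next_burst(actual_bursts, alpha=0.5, tau0=10.0):
    """Exponential averaging of CPU-burst lengths:
        tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n)
    where t(n) is the measured length of the nth burst and tau(n) is the
    previous prediction. alpha and the initial guess tau0 are assumptions."""
    tau = tau0
    for t in actual_bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau

# With alpha = 0.5 and an initial guess of 10 ms, measured bursts of
# 6, 4, 6, 4 ms give a next-burst prediction of 5 ms.
print(predict_next_burst([6, 4, 6, 4]))   # 5.0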
The SJF algorithm is a special case of the general priority
scheduling algorithm.
A priority is associated with each process, and the CPU is
allocated to the process with the highest priority. Equal
priority processes are scheduled in FCFS order.
The SJF algorithm is simply a priority algorithm where the
priority (p) is the inverse of the (predicted) next CPU burst.
The larger the CPU burst, the lower the priority, and vice
versa. This algorithm can be either preemptive or non-preemptive.
In practice, priorities are implemented using integers within a
fixed range, but there is no agreed-upon convention as to whether
"high" priorities use large numbers or small numbers.
Example 1 => Consider the following set of processes, all arriving at time 0, with the length of the CPU burst given in milliseconds. A smaller priority number means a higher priority; non-preemptive priority scheduling:

    Process   Burst Time   Priority
    P1        10           3
    P2        1            1
    P3        2            4
    P4        1            5
    P5        5            2

Gantt chart representation=>
    P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19)

Calculation=>
Waiting Time = Service Time - Arrival Time
    P1 => 6 - 0 = 6
    P2 => 0 - 0 = 0
    P3 => 16 - 0 = 16
    P4 => 18 - 0 = 18
    P5 => 1 - 0 = 1
    AWT = (6 + 0 + 16 + 18 + 1) / 5 = 8.2 ms

Turnaround Time = Waiting Time + Burst Time
    P1 => 6 + 10 = 16
    P2 => 0 + 1 = 1
    P3 => 16 + 2 = 18
    P4 => 18 + 1 = 19
    P5 => 1 + 5 = 6
    Avg. Turnaround Time = (16 + 1 + 18 + 19 + 6) / 5 = 12 ms
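A minimal sketch of non-preemptive priority scheduling that reproduces Example 1 (Python; smaller number = higher priority, as in the example; the names and data layout are illustrative):

def priority_nonpreemptive(processes):
    """Non-preemptive priority scheduling sketch (smaller number = higher priority).
    processes is a list of (name, arrival, burst, priority).
    Returns {name: (waiting, turnaround)}."""
    remaining = list(processes)
    time, results = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                                    # idle until the next arrival
            time = min(p[1] for p in remaining)
            continue
        proc = min(ready, key=lambda p: (p[3], p[1]))    # highest priority, FCFS tie-break
        name, arrival, burst, _ = proc
        waiting = time - arrival
        results[name] = (waiting, waiting + burst)
        time += burst
        remaining.remove(proc)
    return results

# Example 1 above (all arrivals at time 0).
print(priority_nonpreemptive([("P1", 0, 10, 3), ("P2", 0, 1, 1), ("P3", 0, 2, 4),
                              ("P4", 0, 1, 5), ("P5", 0, 5, 2)]))
# {'P2': (0, 1), 'P5': (1, 6), 'P1': (6, 16), 'P3': (16, 18), 'P4': (18, 19)}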
Example 2 => Consider the following set of processes, with arrival times, CPU burst lengths (in milliseconds), and priorities (non-preemptive priority scheduling):

    Process   Arrival Time   Burst Time   Priority
    P1        0              10           3
    P2        2              3            1
    P3        3              9            2
    P4        3              5            4

Gantt chart representation=>
    P1 (0-10) | P2 (10-13) | P3 (13-22) | P4 (22-27)
Priority (Preemptive), Example 1 => Consider the same set of processes as in Example 2 above, now scheduled with preemptive priority scheduling:

    Process   Arrival Time   Burst Time   Priority
    P1        0              10           3
    P2        2              3            1
    P3        3              9            2
    P4        3              5            4

Gantt chart representation=>
    P1 (0-2) | P2 (2-5) | P3 (5-14) | P1 (14-22) | P4 (22-27)

Calculation=>
Waiting Time = (time the process last gets the CPU) - (ms already executed) - Arrival Time
    P1 => 14 - 2 - 0 = 12
    P2 => 2 - 0 - 2 = 0
    P3 => 5 - 0 - 3 = 2
    P4 => 22 - 0 - 3 = 19
    AWT = (12 + 0 + 2 + 19) / 4 = 8.25 ms

Turnaround Time = Waiting Time + Burst Time
    P1 => 12 + 10 = 22
    P2 => 0 + 3 = 3
    P3 => 2 + 9 = 11
    P4 => 19 + 5 = 24
    Avg. Turnaround Time = (22 + 3 + 11 + 24) / 4 = 15 ms
Priority (Preemptive), Example 2 =>

    Process   Arrival Time   Burst Time   Priority
    P1        0              8            3
    P2        1              1            1
    P3        2              3            2
    P4        3              2            3
    P5        4              6            4

Gantt chart representation=>
    P1 (0-1) | P2 (1-2) | P3 (2-5) | P1 (5-12) | P4 (12-14) | P5 (14-20)
Priorities can be assigned either internally or
externally.
Internal priorities are assigned by the OS using criteria such
as average burst time, ratio of CPU to I/O activity, system
resource use, and other factors available to the kernel.
External priorities are assigned by users, based on the
importance of the job, fees paid, politics, etc.
Priority scheduling can suffer from a major problem known
as indefinite blocking, or starvation, in which a low-
priority task can wait forever because there are always some
other jobs around that have higher priority.
If this problem is allowed to occur, then processes will either run
eventually when the system load lightens, or will eventually
get lost when the system is shut down or crashes. (There are
rumors of jobs that have been stuck for years.)
One common solution to this problem is aging, in which
priorities of jobs increase the longer they wait.
Under this scheme a low-priority job will eventually get its
priority raised high enough that it gets run.
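A minimal sketch of aging (Python; the aging interval and the increment below are assumed values, not from the slides):

AGING_INTERVAL = 100   # assumed: re-evaluate priorities every 100 ms
AGING_STEP = 1         # assumed: raise priority by one level per interval

def age_ready_queue(ready_queue, now):
    """Aging sketch: the longer a process has waited in the ready queue, the
    higher its effective priority becomes (smaller number = higher priority).
    Each entry is a dict with 'base_priority' and 'enqueued_at' fields."""
    for proc in ready_queue:
        waited = now - proc["enqueued_at"]
        boost = (waited // AGING_INTERVAL) * AGING_STEP
        proc["priority"] = max(0, proc["base_priority"] - boost)

# A job with base priority 20 that has waited 500 ms is treated as priority 15,
# so it will eventually overtake newer, nominally higher-priority jobs.
queue = [{"name": "Pold", "base_priority": 20, "enqueued_at": 0}]
age_ready_queue(queue, now=500)
print(queue[0]["priority"])   # 15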
Priority scheduling can be either preemptive or non-
preemptive.
When a process arrives at the ready queue, its priority is
compared with the priority of the currently running
process.
A preemptive priority scheduling algorithm will preempt
the CPU if the priority of the newly arrived process is
higher than the priority of the currently running process.
A nonpreemptive priority scheduling algorithm will
simply put the new process at the head of the ready
queue.
The round-robin (RR) scheduling algorithm is designed
especially for timesharing systems.
Round-robin scheduling is similar to FCFS scheduling, except that each process is given the CPU only for a limited amount of time, called the time quantum (or time slice).
When a process is given the CPU, a timer is set for whatever
value has been set for a time quantum.
If the process finishes its burst before the time quantum expires, it releases the CPU voluntarily, just as under the normal FCFS algorithm.
If the timer goes off first, then the process is swapped out
of the CPU and moved to the back end of the ready queue.
The ready queue is maintained as a circular queue, so
when all processes have had a turn, then the scheduler
gives the first process another turn, and so on.
RR scheduling can give the effect of all processes
sharing the CPU equally, although the average wait
time can be longer than with other scheduling algorithms.
Example 1 => Consider the following set of processes, all arriving at time 0, with the length of the CPU burst given in milliseconds. Time quantum = 3 ms:

    Process   Burst Time
    P1        18
    P2        2
    P3        2
    P4        6

Gantt chart representation=>
    P1 (0-3) | P2 (3-5) | P3 (5-7) | P4 (7-10) | P1 (10-13) | P4 (13-16) | P1 (16-28)

Calculation=>
Waiting Time = (time the process last gets the CPU) - (ms already executed) - Arrival Time
    P1 => 16 - 3 - 3 - 0 = 10
    P2 => 3 - 0 - 0 = 3
    P3 => 5 - 0 - 0 = 5
    P4 => 13 - 3 - 0 = 10
    AWT = (10 + 3 + 5 + 10) / 4 = 7 ms

Turnaround Time = Waiting Time + Burst Time
    P1 => 10 + 18 = 28
    P2 => 3 + 2 = 5
    P3 => 5 + 2 = 7
    P4 => 10 + 6 = 16
    Avg. Turnaround Time = (28 + 5 + 7 + 16) / 4 = 14 ms
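A short round-robin sketch that reproduces Example 1 (Python; it assumes all processes arrive at time 0, as in the example; names and layout are illustrative):

from collections import deque

def round_robin(processes, quantum):
    """Round-robin sketch with all processes arriving at time 0.
    processes is a list of (name, burst); returns {name: (waiting, turnaround)}."""
    remaining = {name: burst for name, burst in processes}
    burst_of = dict(processes)
    queue = deque(name for name, _ in processes)
    time, finish = 0, {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])   # run for one quantum or until done
        time += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = time
        else:
            queue.append(name)                # back to the tail of the ready queue
    return {name: (finish[name] - burst_of[name], finish[name])
            for name, _ in processes}

# Example 1 above, time quantum = 3 ms.
print(round_robin([("P1", 18), ("P2", 2), ("P3", 2), ("P4", 6)], quantum=3))
# {'P1': (10, 28), 'P2': (3, 5), 'P3': (5, 7), 'P4': (10, 16)}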
Example 2 => Consider the following set of processes, with the length of the CPU burst given in milliseconds. Time slice = 10 ms:

    Process   Arrival Time   Burst Time
    P1        0              10
    P2        0              29
    P3        0              3
    P4        0              7
    P5        0              12

Gantt chart representation=>
    P1 (0-10) | P2 (10-20) | P3 (20-23) | P4 (23-30) | P5 (30-40) | P2 (40-50) | P5 (50-52) | P2 (52-61)

Calculation=>
Waiting Time = (time the process last gets the CPU) - (ms already executed) - Arrival Time
    P1 => 0 - 0 = 0
    P2 => 52 - 10 - 10 - 0 = 32
    P3 => 20 - 0 = 20
    P4 => 23 - 0 = 23
    P5 => 50 - 10 - 0 = 40
    AWT = (0 + 32 + 20 + 23 + 40) / 5 = 23 ms

Turnaround Time = Waiting Time + Burst Time
    P1 => 0 + 10 = 10
    P2 => 32 + 29 = 61
    P3 => 20 + 3 = 23
    P4 => 23 + 7 = 30
    P5 => 40 + 12 = 52
    Avg. Turnaround Time = (10 + 61 + 23 + 30 + 52) / 5 = 35.2 ms
In the RR scheduling algorithm, no process is allocated the
CPU for more than 1 time quantum in a row (unless it is
the only runnable process).
If a process's CPU burst exceeds 1 time quantum, that process is preempted and is put back in the ready queue. The RR scheduling algorithm is thus preemptive.
The performance of RR is sensitive to the time quantum selected. If the quantum is very large, RR degenerates to the FCFS algorithm; if it is very small, each of the n processes gets 1/n of the processor time and they appear to share the CPU equally.
But a real system incurs overhead for every context switch, and the smaller the time quantum, the more context switches there are.
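A rough illustration of this trade-off (the quantum lengths and the 0.1 ms context-switch cost below are assumed figures, not from the slides):

def context_switch_overhead(quantum_ms, switch_ms):
    """Fraction of CPU time spent on context switches when every quantum
    ends with one switch (a deliberate simplification)."""
    return switch_ms / (quantum_ms + switch_ms)

# Assumed 0.1 ms per context switch:
print(round(context_switch_overhead(10, 0.1), 3))   # 0.01  -> about 1% overhead
print(round(context_switch_overhead(1, 0.1), 3))    # 0.091 -> about 9% overhead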