

Operating Systems

Lecture 4 - Process Management /2

Ralph Tambala

MUST . CSIT

Outline

1 Scheduling Goals

2 CPU Scheduling Criteria

3 Scheduling Policies
First Come First Served
Shortest Job First
Shortest Remaining Time First
Round Robin
Priority

4 Additional Info

Important Terms

Ready queue – This queue holds the set of all processes residing in
main memory that are ready and waiting to execute.
Burst time – The time, in milliseconds, required by a process for its
execution.
Arrival time – The time at which the process arrives in the ready
queue.


Scheduling Goals
Process scheduling, also known as CPU scheduling, is the problem of
deciding when and to which processes the CPU should be allocated.
The part of the operating system concerned with this decision is called the
scheduler, and the algorithm it uses is called the scheduling algorithm (policy).
The following are general goals:
Minimize waiting time
◦ Processes should not wait long in the ready queue
Maximize CPU utilization
◦ CPU should not be idle
Maximize throughput
◦ Complete as many processes as possible per unit time
Minimize response time
◦ The system should respond to requests as quickly as possible
Fairness
◦ Give each process a fair share of CPU

CPU Scheduling Criteria


The criteria include the following:
1 CPU utilization
The main objective of any CPU scheduling algorithm is to keep the
CPU as busy as possible. It is measured as a percentage (0-100%).
2 Throughput
Throughput is the number of jobs completed per unit time.
Higher throughput usually means a better-utilized system.
3 Response time
Response time is the time from when a request is submitted until the
first response is produced.
4 Turnaround time
Turnaround time is the total amount of time spent by a process from
entering the ready state for the first time to its completion.
Mathematically, turnaround time = Time of completion – Arrival time
5 Waiting time
It is the total time spent by a process waiting in the ready queue.
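It can also be computed as: waiting time = Turnaround time – Burst time.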

Scheduling Policies

CPU scheduling involves the allocation and de-allocation of the CPU
to a job. There are various ways to select the next job to service;
these are known as scheduling policies.
In this lecture, we will look at five different policies listed below:
1 First Come First Served
2 Shortest Job First
3 Shortest Remaining Time First
4 Round Robin
5 Priority-based


First Come First Served / FCFS (1)

In First Come First Served (FCFS), a job is handled based on arrival
time: the earlier a job arrives, the earlier it is served.
FCFS is non-preemptive: once the processor starts executing a
process, it must finish it before executing another.
Pros:
FCFS has a simple implementation: a first-in, first-out (FIFO) queue.
It is considered fair as it does not make any distinction between
submitted jobs.
Cons:
When processes with longer burst times arrive earlier than shorter
processes, they delay every job after them. This is called the convoy
effect because later processes are held up behind a long-running first
process. See the examples in the next slide.
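A minimal Python sketch (not part of the original slides) of how FCFS waiting times could be computed; the function name fcfs_waiting_times is illustrative only:

# Each process is a (pid, arrival, burst) tuple; earliest arrival is served first.
def fcfs_waiting_times(processes):
    time = 0
    waits = {}
    for pid, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)    # CPU may sit idle until the job arrives
        waits[pid] = time - arrival  # time spent in the ready queue
        time += burst                # run the job to completion
    return waits

# Example from the next slide (all jobs arrive at 0 ms):
print(fcfs_waiting_times([("P1", 0, 8), ("P2", 0, 5), ("P3", 0, 2)]))
# {'P1': 0, 'P2': 8, 'P3': 13}  -> average 7 ms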


First Come First Served / FCFS (2)


Given the following:
Process ID Arrival time (ms) Burst time (ms)

P1 0 8
P2 0 5
P3 0 2

Consider the average waiting time under different arrival orders:
◦ Arrival order: P1, P2, P3 will give us the following Gantt chart
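Reconstructed timeline (from the burst times above): P1: 0-8 ms, P2: 8-13 ms, P3: 13-15 ms.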

Waiting time for P1 = 0, P2 = 8, P3 = 13


∴ Average waiting time = (0 + 8 + 13)/3 = 7 ms
◦ Arrival order: P3, P2, P1 will give us the following Gantt chart
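Reconstructed timeline (from the burst times above): P3: 0-2 ms, P2: 2-7 ms, P1: 7-15 ms.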

Waiting time for P1 = 7, P2 = 2, P3 = 0


∴ Average waiting time = (7 + 2 + 0)/3 = 3 ms

Shortest Job First / SJF (1)

Shortest job first (SJF), also known as shortest job next (SJN), is a
scheduling policy that selects for execution the waiting process with
the smallest execution time.
SJF is non-preemptive.
SJF applies FCFS if processes have the same burst time
Pros:
◦ Minimizes average wait time
Cons:
◦ May lead to starvation of long jobs. Starvation occurs when a process
that is ready to run waits indefinitely for the CPU because of its low priority.


Shortest Job First / SJF (2)

Given the following:


Process ID Arrival time (ms) Burst time (ms)

P1 0 14
P2 0 4
P3 0 3
P4 0 7

The order of execution will be: P3, P2, P4, P1 (processes with smallest
burst time first).
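Reconstructed timeline (from the burst times above): P3: 0-3 ms, P2: 3-7 ms, P4: 7-14 ms, P1: 14-28 ms.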

Waiting time for each process: P1 = 14, P2 = 3, P3 = 0, P4 = 7


∴ Average waiting time = (14 + 3 + 0 + 7)/4 = 6 ms


Shortest Remaining Time First / SRTF (1)

SRTF is a preemptive scheduling policy in which the process with the
least processing time remaining is executed first.
SRTF is also referred to as preemptive SJF.
Pros:
◦ Further reduces average waiting time.
Cons:
◦ SRTF may also lead to starvation.


Shortest Remaining Time First / SRTF (2)

Given the following:


Process ID Arrival time (ms) Burst time (ms)

P1 0 8
P2 1 2
P3 4 3
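Reconstructed timeline (from the arrival and burst times above): P1: 0-1 ms, P2: 1-3 ms, P1: 3-4 ms, P3: 4-7 ms, P1: 7-13 ms.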

Waiting time for: P1 = (0 + 2 + 3) = 5, P2 = 0, P3 = 0


∴ Average waiting time = (5 + 0 + 0)/3 = 1.67 ms
Turnaround time for each process:
P1 = (13 − 0) = 13, P2 = (3 − 1) = 2, P3 = (7 − 4) = 3
∴ Average turnaround time = (13 + 2 + 3)/3 = 6 ms


Shortest Remaining Time First / SRTF (3)

Step-by-step explanation of the example:
At 0 ms, we have only P1, so it gets executed for 1 ms.
At 1 ms, P2 also arrives. Now, P1 needs 7 ms more to be executed,
and P2 needs only 2 ms. So, P2 is executed by preempting P1 .
P2 gets completed at time 3 ms, and now no new process has arrived.
So, after the completion of P2 , again P1 is sent for execution.
Now, P1 has been executed for 1 ms, and we have an arrival of new
process P3 at time 4 ms. Now, P1 needs 6 ms more and P3 needs
only 3 ms. So, P3 is executed by preempting P1 .
P3 gets completed at time 7 ms, and after that, we have the arrival of
no other process. So again, P1 is sent for execution, and it gets
completed at 13 ms.
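A rough Python sketch (not from the lecture) of the SRTF decision rule, simulated in 1 ms steps; function and variable names are illustrative:

# Each process is a (pid, arrival, burst) tuple.
def srtf_waiting_times(processes):
    remaining = {pid: burst for pid, _, burst in processes}
    arrival = {pid: arr for pid, arr, _ in processes}
    completion = {}
    time = 0
    while remaining:
        ready = [p for p in remaining if arrival[p] <= time]
        if not ready:
            time += 1                       # nothing has arrived yet; CPU idles
            continue
        current = min(ready, key=lambda p: remaining[p])  # least remaining time
        remaining[current] -= 1             # run it for 1 ms
        time += 1
        if remaining[current] == 0:
            completion[current] = time
            del remaining[current]
    # waiting time = turnaround time - burst time
    return {pid: completion[pid] - arr - burst for pid, arr, burst in processes}

print(srtf_waiting_times([("P1", 0, 8), ("P2", 1, 2), ("P3", 4, 3)]))
# {'P1': 5, 'P2': 0, 'P3': 0}  -> average 1.67 ms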


Round Robin / RR (1)


RR is preemptive.
Run each process for a time slice, then move it to the end of the FIFO
ready queue.
The time slice is called a time quantum, or simply quantum.
Time quantum size is crucial to system performance and varies from
100 ms to 1-2 s in real-world applications.
CPU is equally shared among all active processes.
If job CPU cycle > time quantum
◦ Job preempted and placed at end of ready queue
◦ Information saved in its PCB
If job CPU cycle < time quantum
◦ Job finished: allocated resources released and job returned to user
◦ Interrupted by I/O request: information saved in PCB and linked to
I/O queue
Once I/O request satisfied
◦ Job returns to end of ready queue and awaits CPU

Round Robin / RR (2)

Consider the following:


Process ID Arrival time (ms) Burst time (ms)

P1 0 4
P2 0 3
P3 0 5

(A) Gantt chart given a quantum = 2 ms
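Reconstructed timeline (from the table above): P1: 0-2 ms, P2: 2-4 ms, P3: 4-6 ms, P1: 6-8 ms, P2: 8-9 ms, P3: 9-11 ms, P3: 11-12 ms.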

Waiting time for:


P1 = (6 − 2) = 4, P2 = 2 + (8 − 4) = 6, P3 = 4 + (9 − 6) = 7
∴ Average waiting time = (4 + 6 + 7)/3 = 5.67 ms


Round Robin / RR (3)

Step-by-step explanation of (A):
At 0 ms, say we have processes which arrived in this order: P1, P2, P3.
P1 gets the first 2 ms.
At 2 ms, the time quantum for P1 expires. P2 is executed by
preempting P1, which still has 2 ms of burst time left, so it moves to
the end of the queue. New queue becomes P2, P3, P1.
At 4 ms, the time quantum for P2 expires. P3 is executed by
preempting P2 which still has 1 ms burst time left so P2 moves to the
end of the queue. New queue becomes P3 , P1 , P2 .
At 6ms, the time quantum for P3 expires. P1 is executed by
preempting P3 which still has 3 ms burst time left so P3 moves to the
end of the queue. New queue becomes P1 , P2 , P3 .
And so on and so forth...
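A minimal Python sketch (not from the slides) of the round-robin bookkeeping described above; it assumes all processes arrive at time 0, as in example (A), and the function name is illustrative:

from collections import deque

# Each process is a (pid, arrival, burst) tuple, all with arrival = 0 here.
def rr_waiting_times(processes, quantum):
    remaining = {pid: burst for pid, _, burst in processes}
    queue = deque(pid for pid, _, _ in processes)
    completion = {}
    time = 0
    while queue:
        pid = queue.popleft()
        run = min(quantum, remaining[pid])   # run one quantum, or less if the job finishes
        time += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            completion[pid] = time
        else:
            queue.append(pid)                # preempted: back to the end of the queue
    return {pid: completion[pid] - arrival - burst for pid, arrival, burst in processes}

print(rr_waiting_times([("P1", 0, 4), ("P2", 0, 3), ("P3", 0, 5)], quantum=2))
# {'P1': 4, 'P2': 6, 'P3': 7}  -> average 5.67 ms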


Round Robin / RR (4)

(B) Given a long quantum (e.g. quantum = 5 ms), RR behaves like FCFS
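Reconstructed timeline (from the table above): P1: 0-4 ms, P2: 4-7 ms, P3: 7-12 ms, identical to the FCFS order.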

Waiting time for: P1 = 0, P2 = 4, P3 = 7


∴ Average waiting time = (0 + 4 + 7)/3 = 3.67 ms
Pros:
Fair as every process gets an equal share of the CPU.
RR is cyclic in nature, so there is no starvation.
Cons:
Setting the quantum too short increases the overhead and lowers
CPU efficiency (too many context switches, as in example A).
Setting it too long may cause poor response to short processes.
Average waiting time under the RR policy is often long.


Priority (1)

In the real world, not all processes are created equal.
In the priority scheduling policy, each process is assigned a priority.
Priority scheduling picks the process in the ready queue with the
highest priority.
Pros:
◦ Mechanism to provide relative importance to processes
Cons:
◦ Could lead to starvation of low-priority processes
Priorities can be set internally (by CPU scheduler) or externally (by
users)


Priority (2)

Given the following:


Process ID Arrival time (ms) Burst time (ms) Priority

P1 0 4 1
P2 2 3 2
P3 3 2 1
P4 8 5 3

Gantt chart
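Reconstructed timeline (assuming a smaller priority number means higher priority and non-preemptive scheduling, consistent with the waiting times below): P1: 0-4 ms, P3: 4-6 ms, P2: 6-9 ms, P4: 9-14 ms.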

Waiting time for:


P1 = 0, P2 = 6 − 2 = 4, P3 = 4 − 3 = 1, P4 = 9 − 8 = 1
∴ Average waiting time = (0 + 4 + 1 + 1)/4 = 1.5 ms


Additional Info
Work through all the examples provided in this lecture again on your own.
For each:
1 Draw Gantt chart
2 Calculate average waiting time
3 Calculate average turnaround time
4 Ask for help when stuck
Let’s all participate in forum discussions!

