OPERATING SYSTEMS
UNIT NO: 2
Syllabus :
CPU scheduling, goals of scheduling,
CPU scheduling algorithms: FCFS, SJF, SRTF, RR, Priority based.
Inter-process communication: process cooperation and synchronization, race condition,
critical section, mutual exclusion and implementation, semaphores, classical
inter-process communication problems.
CPU scheduling
CPU scheduling is the basis of multi-programmed operating systems. By switching
the CPU among processes, the operating system can make the computer more
productive.
In a single-processor system, only one process can run at a time; any others must
wait until the CPU is free and can be rescheduled. The objective of multiprogramming
is to have some process running at all times, to maximize CPU utilization.
The idea is relatively simple. A process is executed until it must wait, typically for
the
completion of some I/O request. In a simple computer system, the CPU then just sits
idle. All this waiting time is wasted; no useful work is accomplished.
With multiprogramming, we try to use this time productively. Several processes are
kept in memory at one time. When one process has to wait, the operating system takes
the CPU away from that process and gives the CPU to another process. This pattern
continues. Every time one process has to wait, another process can take over use of the
CPU.
Scheduling of this kind is a fundamental operating-system function. Almost all
computer
resources are scheduled before use. The CPU is, of course, one of the primary computer
resources.
Thus, its scheduling is central to operating-system design.
• There are two categories of scheduling:
1. Non-preemptive: Once a process starts executing, it does not stop until it completes execution. The resource (CPU) cannot be taken away from a process until the process finishes; switching occurs only when the running process terminates or moves to the waiting state.
2. Preemptive: A running process can be stopped when an interrupt occurs. The OS allocates the resource (CPU) to a process for a fixed amount of time, after which the process is switched from the running state to the ready state (and blocked processes move from the waiting state to the ready state when their wait completes).
Goals of scheduling
The primary objective of CPU scheduling is to keep as many jobs running at a time as possible, i.e. to maximize CPU utilization.
• On a single-CPU system, the goal is to keep one job running at all times.
After studying this unit, you should be able to:
• Describe various CPU scheduling algorithms
• Assess CPU scheduling algorithms based on scheduling criteria
• Explain the issues related to multiprocessor and multicore scheduling
• Describe various real-time scheduling algorithms
• Describe the scheduling algorithms used in the Windows, Linux, and Solaris operating systems
• Apply modeling and simulation to evaluate CPU scheduling
Scheduling Criteria:
CPU utilization – the CPU should be kept as busy as possible; utilization should be maximized.
Throughput – the number of processes that complete their execution per unit time; it should be maximized.
Turnaround time – the total time taken by a process from its submission to its completion (the time taken for execution); it should be minimized.
Waiting time – the time a process spends waiting in the ready queue; it should be minimized, and the CPU should be allocated fairly.
Response time – the time until the process produces its first response; it should be minimized.
Terms in each scheduling process.
• Arrival time(AT) is the time at which the process arrives in the ready queue for
execution, and it is given in our table when we need to calculate the average
waiting time.
• Completion time(CT) is the time at which the process finishes its execution.
• Turnaround time(TT) is the difference between completion time and arrival
time, i.e. turnaround time = Completion time- arrival time ( TT = CT – AT )
• Burst time (BT) is the CPU time required by the process for its execution.
• Waiting time (WT) is the difference between turnaround time and burst time, i.e.
Waiting time= Turnaround time – Burst time
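As a quick illustration of these two formulas, here is a minimal C sketch; the arrival, burst and completion values are made up purely for the example and are not from any problem in these notes.

#include <stdio.h>

int main(void) {
    /* Hypothetical values for one process */
    int at = 2, bt = 5, ct = 9;

    int tat = ct - at;   /* Turnaround time = Completion time - Arrival time */
    int wt  = tat - bt;  /* Waiting time    = Turnaround time - Burst time   */

    printf("TAT = %d, WT = %d\n", tat, wt);  /* prints TAT = 7, WT = 2 */
    return 0;
}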
CPU–I/O Burst Cycle
Process execution consists of a cycle of CPU execution and I/O wait. Processes alternate between these two states. Process execution begins with a CPU burst. That is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on. Eventually, the final CPU burst ends with a system request to terminate execution.
CPU burst: the amount of time a process uses the CPU until it starts waiting for some input or is interrupted by another process. I/O burst (input/output burst): the amount of time a process waits for input/output before needing the CPU again.
The CPU time is the time taken by the CPU to execute the process, while the I/O time is the time taken by the process to perform some I/O operation. Burst time is the total time taken by the process for its execution on the CPU.
Scheduling Criteria
There are several different criteria to consider when trying to select the "best"
scheduling algorithm for a particular situation and environment, including:
CPU utilization - Ideally the CPU would be busy 100% of the time, so as to waste 0
CPU cycles. On a real system CPU usage should range from 40% ( lightly loaded ) to
90% ( heavily loaded. )
Throughput - Number of processes completed per unit time. May range from 10 /
second to 1 / hour depending on the specific processes.
Turnaround time - Time required for a particular process to complete, from submission
time to completion. ( Wall clock time. )
Waiting time - How much time processes spend in the ready queue waiting their turn to
get on the CPU.
( Load average - The average number of processes sitting in the ready queue waiting
their turn to get into the CPU. Reported in 1-minute, 5-minute, and 15-minute averages
by "uptime" and "who". )
Response time - The time taken in an interactive program from the issuance of a command to the beginning of a response to that command.
Scheduling Algorithms : Types of scheduling algorithms
1. First-Come, First-served Scheduling(FCFS)
2. Shortest-Job-first Scheduling (SJF)
3. Shortest Remaining Time First (SRTF)
4. Round Robin Scheduling (RR)
5. Priority Scheduling(PR)
6. Multilevel Queue Scheduling
7. Multilevel Feedback Queue Scheduling
First-Come, First-served Scheduling(FCFS)
It is an algorithm that executes queued requests and processes in the order of their arrival time. The process that requests the CPU first is allocated the CPU first. It is always non-preemptive in nature.
Jobs are always executed on a first-come, first-served basis.
The process that comes first is executed first, and the next process starts only after the previous one has fully executed.
This method is poor in performance, and the general wait time is quite high.
A real-life example of FCFS is buying a movie ticket at the ticket counter: customers are served in the order in which they arrive. Similarly, in an online sale, the buyer who is first to express interest, pay, and pick up gets the item; the seller does not hold the item for anyone else.
Advantages of FCFS
1) The FCFS algorithm doesn't include any complex logic; it just puts the process
requests in a queue and executes them one by one
2) The simplest form of a CPU scheduling algorithm
3) Easy to program
4) Eventually, every process will get a chance to run, so starvation doesn't occur.
Disadvantages of FCFS
1) It is a Non-Preemptive CPU scheduling algorithm, so after the process has been
allocated to the CPU, it will never release the CPU until it finishes executing.
2) The Average Waiting Time is high.
3) Short processes that are at the back of the queue have to wait for the long process
at the front to finish.
4) Resource utilization in parallel is not possible, which leads to the convoy effect
and hence poor resource (CPU, I/O, etc.) utilization.
Example (FCFS):
Process Id   Arrival time   Burst time   Completion time   Turn Around time   Waiting time
P1           0              2            2                 2 – 0 = 2          2 – 2 = 0
P2           3              1            4                 4 – 3 = 1          1 – 1 = 0
P3           5              6            11                11 – 5 = 6         6 – 6 = 0
Problem-
Consider the set of 5 processes whose arrival time and burst time are given below-
Process Id   Arrival time   Burst time
P1           3              4
P2           5              3
P3           0              2
P4           5              1
P5           4              3
If the CPU scheduling policy is FCFS, calculate the average waiting time and average turn around time.
Solution-
Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time
Process Id   Exit time   Turn Around time   Waiting time
P1           7           7 – 3 = 4          4 – 4 = 0
P2           13          13 – 5 = 8         8 – 3 = 5
P3           2           2 – 0 = 2          2 – 2 = 0
P4           14          14 – 5 = 9         9 – 1 = 8
P5           10          10 – 4 = 6         6 – 3 = 3
Now,
Average Turn Around time = (4 + 8 + 2 + 9 + 6) / 5 = 29 / 5 = 5.8 unit
Average waiting time = (0 + 5 + 0 + 8 + 3) / 5 = 16 / 5 = 3.2 unit
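The same FCFS result can be reproduced programmatically. The C sketch below is only an illustration: the processes are listed in their order of arrival (P3, P1, P5, P2, P4, using the arrival and burst times from the problem above), and the loop lets the CPU idle whenever the next process has not yet arrived. Array and variable names are my own.

#include <stdio.h>

int main(void) {
    /* Processes listed in order of arrival: P3, P1, P5, P2, P4 */
    const char *id[] = {"P3", "P1", "P5", "P2", "P4"};
    int at[] = {0, 3, 4, 5, 5};
    int bt[] = {2, 4, 3, 3, 1};
    int n = 5, time = 0;
    double total_tat = 0, total_wt = 0;

    for (int i = 0; i < n; i++) {
        if (time < at[i])          /* CPU idles until the next process arrives */
            time = at[i];
        time += bt[i];             /* completion (exit) time of this process   */
        int tat = time - at[i];    /* turnaround = completion - arrival        */
        int wt  = tat - bt[i];     /* waiting    = turnaround - burst          */
        printf("%s: CT=%d TAT=%d WT=%d\n", id[i], time, tat, wt);
        total_tat += tat;
        total_wt  += wt;
    }
    printf("Avg TAT = %.1f, Avg WT = %.1f\n", total_tat / n, total_wt / n);
    return 0;   /* prints Avg TAT = 5.8, Avg WT = 3.2, matching the solution above */
}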
Shortest-Job-First Scheduling (SJF)
In SJF, the CPU is allocated to the process with the smallest burst time among the processes that have arrived; in the non-preemptive form, the selected process then runs to completion.
Disadvantages of SJF
1. It may suffer from the problem of starvation.
2. It is not practically implementable, because the exact burst time of a process can't be known
in advance.
Problem-01:
Consider the set of 5 processes whose arrival time and burst time are given below-
If the CPU scheduling policy is SJF non-preemptive, calculate the average waiting
time and average turn around time.
Process Id   Arrival time   Burst time
P1           3              1
P2           1              4
P3           4              2
P4           0              6
P5           2              3
SOLUTION:
Gantt Chart- P4 (0–6) | P1 (6–7) | P3 (7–9) | P5 (9–12) | P2 (12–16)
Now, we know-
Turn Around time = Completion time – Arrival time
Waiting time = Turn Around time – Burst time
Process Id   Arrival time   Burst time   Completion time   Turn Around time   Waiting time
P1           3              1            7                 7 – 3 = 4          4 – 1 = 3
P2           1              4            16                16 – 1 = 15        15 – 4 = 11
P3           4              2            9                 9 – 4 = 5          5 – 2 = 3
P4           0              6            6                 6 – 0 = 6          6 – 6 = 0
P5           2              3            12                12 – 2 = 10        10 – 3 = 7
Now,
Average Turn Around time = (4 + 15 + 5 + 6 + 10) / 5 = 40 / 5 = 8 unit
Average waiting time = (3 + 11 + 3 + 0 + 7) / 5 = 24 / 5 = 4.8 unit
Problem-02:
If the CPU scheduling policy is SJF non-pre-emptive, calculate the average
waiting time and average turn around time.
Process   Arrival time   Burst time
P0        0              3
P1        2              6
P2        4              4
P3        6              5
P4        8              2
Gantt chart
P0 (0–3) | P1 (3–9) | P4 (9–11) | P2 (11–15) | P3 (15–20)
Process   Arrival time   Burst time   CT   TAT   WT
P0        0              3            3    3     0
P1        2              6            9    7     1
P2        4              4            15   11    7
P3        6              5            20   14    9
P4        8              2            11   3     1
Now,
Average Turn Around Time = (3 + 7 + 11 + 14 + 3) / 5 = 38/ 5 = 7.6 unit
Average Waiting Time = (0 + 1 + 7 + 9 + 1) / 5 = 18 / 5 = 3.6 unit
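The non-preemptive SJF decision rule ("among the processes that have already arrived, run the one with the smallest burst time to completion") can also be written as a short simulation. This is a rough C sketch using the data of Problem-02; the array names are my own, not part of the problem.

#include <stdio.h>

int main(void) {
    /* Data from Problem-02: P0..P4 */
    int at[] = {0, 2, 4, 6, 8};
    int bt[] = {3, 6, 4, 5, 2};
    int n = 5, done[5] = {0}, time = 0, finished = 0;

    while (finished < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)               /* among arrived, unfinished jobs...   */
            if (!done[i] && at[i] <= time &&
                (pick == -1 || bt[i] < bt[pick]))
                pick = i;                         /* ...choose the smallest burst        */
        if (pick == -1) { time++; continue; }     /* nothing has arrived yet: CPU idles  */
        time += bt[pick];                         /* non-preemptive: run to completion   */
        printf("P%d: CT=%d TAT=%d WT=%d\n",
               pick, time, time - at[pick], time - at[pick] - bt[pick]);
        done[pick] = 1;
        finished++;
    }
    return 0;   /* reproduces the CT/TAT/WT values in the table above */
}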
SJF (Preemptive) (Shortest Remaining Time First -SRTF) Scheduling Algorithm
It is the pre-emptive mode of Shortest Job First (SJF) scheduling.
In this algorithm, the process which has the short burst time is executed by the CPU.
There is no need to have the same arrival time for all the processes.
If another process was having the shortest burst time then the current process which is
executing get stopped in between the execution, and the new arrival process will be
executed first.
Example-1: Consider the following table of arrival time and burst time for four
processes P1, P2, P3, and P4.
Process   Arrival time   Burst time
P1        0 ms           8 ms
P2        1 ms           4 ms
P3        2 ms           9 ms
P4        3 ms           5 ms
Gantt chart- P1 (0–1) | P2 (1–5) | P4 (5–10) | P1 (10–17) | P3 (17–26)
As we know,
• Turn Around time = Completion time – arrival time
• Waiting Time = Turn around time – burst time
Process   Arrival Time   Burst Time   CT   TAT   WT
P1        0 ms           8 ms         17   17    9
P2        1 ms           4 ms         5    4     0
P3        2 ms           9 ms         26   24    15
P4        3 ms           5 ms         10   7     2
Now,
• Average Turn around time = 52/4 = 13
• Average waiting time = 26/4 = 6.5
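SRTF can be simulated one time unit at a time: at every tick the CPU is given to the arrived process with the least remaining burst time. A minimal C sketch with the four processes of Example-1 (array names are illustrative only):

#include <stdio.h>

int main(void) {
    /* Example-1 data: P1..P4 */
    int at[]  = {0, 1, 2, 3};
    int bt[]  = {8, 4, 9, 5};
    int rem[] = {8, 4, 9, 5};             /* remaining burst time */
    int n = 4, finished = 0;

    for (int time = 0; finished < n; time++) {
        int pick = -1;
        for (int i = 0; i < n; i++)       /* shortest remaining time among arrived jobs */
            if (rem[i] > 0 && at[i] <= time &&
                (pick == -1 || rem[i] < rem[pick]))
                pick = i;
        if (pick == -1) continue;         /* CPU idle this tick */
        rem[pick]--;                      /* run the chosen process for one time unit */
        if (rem[pick] == 0) {
            int ct = time + 1;
            printf("P%d: CT=%d TAT=%d WT=%d\n",
                   pick + 1, ct, ct - at[pick], ct - at[pick] - bt[pick]);
            finished++;
        }
    }
    return 0;   /* reproduces CT = 17, 5, 26, 10 as in the table above */
}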
PRACTICE Example
Q. Given the arrival time and burst time of 3 jobs in the table below. Calculate the
Average waiting time of the system.
Process ID   Arrival Time   Burst Time   Completion Time   Turn Around Time   Waiting Time
1            0              9            13
2            1              4            5
3            2              9            22
Solution:
Problem-01:
Consider the set of 3 processes whose burst times are given below (all processes arrive at time 0)-
Process Id   Arrival time   Burst time
P1           0              4
P2           0              3
P3           0              5
If the CPU scheduling policy is Round Robin with time quantum = 2 unit,
calculate the average waiting time and average turn around time.
Solution-
Gantt chart- P1 | P2 | P3 | P1 | P2 | P3 | P3
Process Queue   Burst time   CT   TAT = CT – AT   WT = TAT – BT
P1              4            8    8 – 0 = 8       8 – 4 = 4
P2              3            9    9 – 0 = 9       9 – 3 = 6
P3              5            12   12 – 0 = 12     12 – 5 = 7
Now,
Average Turn Around time = (8 + 9 + 12) / 3 = 29 / 3 = 9.67 unit
Average waiting time = (4 + 6 + 7) / 3 = 17 / 3 = 5.67 unit
Problem-02:
Consider the following set of processes scheduled with Round Robin (all arrive at time 0)-
Process Queue   Burst time   TAT   WT
P1              21
P2              3
P3              6
P4              2
Gantt chart- P1 | P2 | P3 | P4 | P1 | P3 | P1 | P1 | P1 (the slice lengths correspond to a time quantum of 5 units)
Problem-03:
Consider the set of 5 processes whose arrival time and burst time are given below-
Process Id   Arrival time   Burst time
P1           0              5
P2           1              3
P3           2              1
P4           3              2
P5           4              3
If the CPU scheduling policy is Round Robin with time quantum = 2 unit,
calculate the average waiting time and average turn around time.
Solution-
Gantt chart- P1 (0–2) | P2 (2–4) | P3 (4–5) | P1 (5–7) | P4 (7–9) | P5 (9–11) | P2 (11–12) | P1 (12–13) | P5 (13–14)
Ready Queue- P1 P2 P3 P1 P4 P5 P2 P1 P5
Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time
• RT =CPU first time -AT
Process Id   Exit time   Turn Around time   Waiting time   Response Time (RT) = CPU first time – AT
P1           13          13 – 0 = 13        13 – 5 = 8     0 – 0 = 0
P2           12          12 – 1 = 11        11 – 3 = 8     2 – 1 = 1
P3           5           5 – 2 = 3          3 – 1 = 2      4 – 2 = 2
P4           9           9 – 3 = 6          6 – 2 = 4      7 – 3 = 4
P5           14          14 – 4 = 10        10 – 3 = 7     9 – 4 = 5
Now,
• Average Turn Around time = (13 + 11 + 3 + 6 + 10) / 5 = 43 / 5 = 8.6 unit
• Average waiting time = (8 + 8 + 2 + 4 + 7) / 5 = 29 / 5 = 5.8 unit
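Round Robin can be sketched with a FIFO ready queue: the process at the front runs for at most one time quantum and, if it still has work left, rejoins the back of the queue after any newly arrived processes have been added. The C sketch below uses the data of Problem-03 with quantum = 2; it assumes the ready queue never becomes empty while some process has yet to arrive (true for this data set), and it reproduces the exit times in the table above. Names are my own.

#include <stdio.h>

int main(void) {
    /* Data from Problem-03: P1..P5, time quantum = 2 */
    int at[]  = {0, 1, 2, 3, 4};
    int bt[]  = {5, 3, 1, 2, 3};
    int rem[] = {5, 3, 1, 2, 3};
    int n = 5, q = 2, time = 0, queued[5] = {0};
    int queue[64], head = 0, tail = 0;

    queue[tail++] = 0;                        /* P1 arrives at t = 0 */
    queued[0] = 1;

    while (head < tail) {
        int i = queue[head++];                /* dispatch the process at the front      */
        int run = rem[i] < q ? rem[i] : q;    /* run for at most one quantum            */
        time += run;
        rem[i] -= run;
        for (int j = 0; j < n; j++)           /* enqueue processes that arrived during  */
            if (!queued[j] && at[j] <= time) {/* (or at the end of) this time slice     */
                queue[tail++] = j;
                queued[j] = 1;
            }
        if (rem[i] > 0)
            queue[tail++] = i;                /* preempted: back of the ready queue     */
        else
            printf("P%d: exit=%d TAT=%d WT=%d\n",
                   i + 1, time, time - at[i], time - at[i] - bt[i]);
    }
    return 0;   /* prints exits 5, 9, 12, 13, 14 as in the table above */
}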
Problem-04:
Consider the set of 6 processes whose arrival time and burst time are given below-
Solution-
Gantt chart-
Ready Queue- P1, P2, P3, P1, P4, P5, P2, P6, P5, P2, P6, P5
Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time
Now,
Average Turn Around time = (27 + 23 + 30 + 29 + 4 + 15) / 6 = 128 / 6 = 21.33 unit
Average waiting time = (22 + 17 + 23 + 20 + 2 + 12) / 6 = 96 / 6 = 16 unit
Priority Scheduling :
• A priority number (integer) is associated with each process
• The CPU is allocated to the process with the highest priority (smallest integer ≡
highest priority).
• Preemptive
• Non-preemptive
• SJF is a priority scheduling where priority is the predicted next CPU burst time.
• Problem ≡ Starvation – low priority processes may never execute.
• Solution ≡ Aging – as time progresses increase the priority of the process.
Advantages-
It considers the priority of the processes and allows the important processes to run first.
Priority scheduling in preemptive mode is well suited for real-time operating systems.
Disadvantages-
Processes with lower priority may starve for the CPU.
The waiting time and response time of a process cannot be predicted, since they depend on the priorities of the other processes.
Problem-01:
Consider the set of 5 processes whose arrival time, burst time and priority are given below-
Process Id   Arrival time   Burst time   Priority
P1           0              4            2
P2           1              3            3
P3           2              1            4
P4           3              5            5
P5           4              2            5
If the CPU scheduling policy is priority non-preemptive, calculate the average waiting time and average turn around time. (Higher number represents higher priority)
Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time
Now,
• Average Turn Around time = (4 + 14 + 10 + 6 + 7) / 5 = 41 / 5 = 8.2 unit
• Average waiting time = (0 + 11 + 9 + 1 + 5) / 5 = 26 / 5 = 5.2 unit
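Implementation-wise, non-preemptive priority scheduling is the same selection loop as non-preemptive SJF with the comparison key changed from burst time to priority. Below is a rough C sketch using the data of Problem-01, assuming (as stated for Problem-02) that a higher number means a higher priority and that ties are broken by earlier arrival; the names are illustrative.

#include <stdio.h>

int main(void) {
    /* Data from Problem-01; higher number = higher priority (assumed) */
    int at[]   = {0, 1, 2, 3, 4};
    int bt[]   = {4, 3, 1, 5, 2};
    int prio[] = {2, 3, 4, 5, 5};
    int n = 5, done[5] = {0}, time = 0, finished = 0;

    while (finished < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)            /* highest priority among arrived jobs */
            if (!done[i] && at[i] <= time &&
                (pick == -1 || prio[i] > prio[pick] ||
                 (prio[i] == prio[pick] && at[i] < at[pick])))
                pick = i;
        if (pick == -1) { time++; continue; }  /* CPU idle: no process has arrived   */
        time += bt[pick];                      /* non-preemptive: run to completion  */
        printf("P%d: CT=%d TAT=%d WT=%d\n",
               pick + 1, time, time - at[pick], time - at[pick] - bt[pick]);
        done[pick] = 1;
        finished++;
    }
    return 0;   /* reproduces TAT = 4, 14, 10, 6, 7 as in the averages above */
}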
Problem-02:
Consider the set of 5 processes whose arrival time, burst time and priority are given below-
Process Id   Arrival time   Burst time   Priority
P1           0              4            2
P2           1              3            3
P3           2              1            4
P4           3              5            5
P5           4              2            5
If the CPU scheduling policy is priority preemptive, calculate the average waiting time
and average turn around time. (Higher number represents higher priority)
Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time
Process Id   Exit time   Turn Around time   Waiting time
P1 15 15 – 0 = 15 15 – 4 = 11
P2 12 12 – 1 = 11 11 – 3 = 8
P3 3 3–2=1 1–1=0
P4 8 8–3=5 5–5=0
P5 10 10 – 4 = 6 6–2=4
Now,
Average Turn Around time = (15 + 11 + 1 + 5 + 6) / 5 = 38 / 5 = 7.6 unit
Average waiting time = (11 + 8 + 0 + 0 + 4) / 5 = 23 / 5 = 4.6 unit
Inter-process communication: the producer–consumer (bounded buffer) problem
Cooperating processes can communicate through a shared, bounded buffer: the producer puts items into the buffer and the consumer takes them out. This buffer is a space of a certain size in the memory of the system which is used for storage.
Code for Producer process:
while (true) {
    /* produce an item and put it in nextProduced */
    while (count == BUFFER_SIZE)
        ;   /* do nothing: the buffer is full (count holds the number of items in the buffer) */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;   /* circular buffer: in wraps around to 0 */
    count++;
}
Consumer Process
while (true) {
    while (count == 0)
        ;   /* do nothing: the buffer is empty */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;   /* circular buffer: out wraps around to 0 */
    count--;
    /* consume the item in nextConsumed */
}
Although the producer and consumer are correct on their own, they may not work correctly when run concurrently: both update the shared variable count, and count++ and count-- are each implemented as a load, an arithmetic operation and a store, so their instructions can interleave and leave count with a wrong value. Such a situation, where the result depends on the order in which the processes access shared data, is a race condition, and the code that manipulates the shared data is a critical section. Hence, the critical-section problem is to design a protocol that the processes can use to cooperate (to synchronize with each other).
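This race can actually be demonstrated with POSIX threads. The sketch below (iteration count arbitrary; compile with gcc -pthread) runs count++ and count-- concurrently many times and usually ends with a value different from the expected 0. It is a demonstration sketch, not part of the original notes.

#include <pthread.h>
#include <stdio.h>

#define N 1000000
int count = 0;                      /* shared, unprotected */

void *producer(void *arg) {         /* stands in for N executions of count++ */
    for (int i = 0; i < N; i++)
        count++;                    /* load, increment, store: not atomic */
    return NULL;
}

void *consumer(void *arg) {         /* stands in for N executions of count-- */
    for (int i = 0; i < N; i++)
        count--;
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("count = %d (expected 0)\n", count);   /* usually nonzero: a race condition */
    return 0;
}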
Solution of critical section problem for two processes
1) Software based
a)Peterson’s solution
b) Dekker’s solution
2) Hardware based
a) Interrupt disabling
b) Special machine instructions
3) Operating system based
a) Semaphore
b) Monitor
Critical-Section Problem Solution for Two Processes
Algorithm 1 (works on the basis of turn)
● Shared variables: (here turn is the shared variable)
● int turn;
initially turn = 0
● turn == i ⇒ Pi can enter its critical section
● Process Pi
do {
    while (turn != i)
        ;   /* busy wait while it is not Pi's turn */
    critical section
    turn = j;   /* hand the turn over to the other process */
    remainder section   /* code that does not access the shared data */
} while (1);
Satisfies mutual exclusion, but not progress (the two processes must strictly alternate).
Algorithm 2 (works on the basis of flags) FOR TWO PROCESSES
● Shared variables
● boolean flag[2];
initially flag[0] = flag[1] = false
● flag[i] = true ⇒ Pi is ready to enter its critical section
● Process Pi
do {
    flag[i] = true;   /* Pi announces that it wants to enter */
    while (flag[j])
        ;   /* busy wait while Pj is also interested; Pi will not enter its CS */
    critical section
    flag[i] = false;
    remainder section
} while (1);
● Satisfies mutual exclusion, but not the progress requirement (if both processes set their flags at the same time, both wait forever).
Algorithm 3 (Peterson's Solution) (combination of 1 & 2)
● Combines the shared variables of algorithms 1 and 2.
● Process Pi (Pj is the other process)
do {
    flag[i] = true;   /* Pi wants to enter */
    turn = j;   /* but gives the other process the turn first */
    while (flag[j] && turn == j)
        ;   /* busy wait only while Pj is interested and it is Pj's turn */
    critical section
    flag[i] = false;
    remainder section
} while (1);
● Meets all three requirements (mutual exclusion, progress, bounded waiting); solves the critical-section problem for two processes.
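Peterson's solution can be exercised with two POSIX threads. The sketch below is only a demonstration of the logic: variable and function names are my own, and on modern out-of-order processors plain (even volatile) variables are not sufficient without memory fences, so this is an illustrative sketch rather than production synchronization code. Compile with gcc -pthread.

#include <pthread.h>
#include <stdio.h>

volatile int flag[2] = {0, 0};      /* flag[i] = 1: Pi wants to enter        */
volatile int turn = 0;              /* whose turn it is to defer to          */
int counter = 0;                    /* shared data protected by the lock     */

void enter_cs(int i) {
    int j = 1 - i;
    flag[i] = 1;                    /* announce interest                     */
    turn = j;                       /* give the other process priority       */
    while (flag[j] && turn == j)
        ;                           /* busy wait while the other is inside   */
}

void exit_cs(int i) {
    flag[i] = 0;                    /* leave: the other process may proceed  */
}

void *worker(void *arg) {
    int i = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        enter_cs(i);
        counter++;                  /* critical section                      */
        exit_cs(i);
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %d (expected 200000)\n", counter);
    return 0;
}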
Bakery Algorithm for n processes
Bakery Algorithm is an algorithm that basically works as a generalized solution for the
critical section problem
The algorithm preserves the first come first serve property.
Critical section for n processes
Before entering its critical section, each process receives a ticket (token) number. The holder of the
smallest number enters the critical section.
If processes Pi and Pj receive the same number,
if i < j, then
Pi is served first;
else
Pj is served first.
The numbering scheme always generates numbers in increasing order of enumeration;
i.e., 1,2,3,4,5...
● Notation: lexicographical order on (ticket #, process id #)
● The ticket number is compared first; if the tickets are equal, the process id is compared next, i.e.
● (a, b) < (c, d) if a < c, or if a = c and b < d
● max(a0, …, an−1) is a number k such that k ≥ ai for i = 0, …, n − 1
● Shared data: choosing is an array [0..n − 1] of boolean values and number is an array [0..n − 1] of integer values.
boolean choosing[n];
int number[n];
Both are initialized to false and zero respectively.
do {
    choosing[i] = true;   /* Pi wants to enter its CS and starts choosing a ticket */
    number[i] = max(number[0], number[1], …, number[n – 1]) + 1;   /* take the next ticket number */
    choosing[i] = false;   /* ticket received */
    for (j = 0; j < n; j++) {   /* check every other process */
        while (choosing[j])
            ;   /* wait while Pj is still choosing its ticket number */
        while ((number[j] != 0) && ((number[j], j) < (number[i], i)))
            ;   /* wait while Pj holds a smaller (ticket, id) pair */
    }
    critical section
    number[i] = 0;   /* ticket becomes zero after completing the critical section */
    remainder section
} while (1);
Semaphore:
A semaphore is an integer variable that is shared by multiple processes.
It is a synchronization tool that (in its blocking implementation) does not require busy waiting.
A semaphore acts as an integer flag indicating whether it is safe to proceed.
Two standard operations modify S: wait() and signal(), originally called P() and V().
It is less complicated than the earlier solutions.
The semaphore can only be accessed via these two indivisible (atomic) operations:
1) wait(S) {
       while (S <= 0)
           ;   // no-op: busy wait until S becomes positive
       S--;
   }
2) signal(S) {
       S++;
   }
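For comparison, POSIX exposes counting semaphores whose sem_wait() and sem_post() calls play the roles of wait(S) and signal(S), blocking instead of busy waiting. Below is a minimal sketch that uses one semaphore initialized to 1 as a mutual-exclusion lock around a shared counter; thread counts and names are illustrative choices (compile with gcc -pthread).

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t mutex;                         /* binary semaphore, initialized to 1 */
int shared = 0;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);            /* wait(S): decrement, block if S == 0 */
        shared++;                    /* critical section */
        sem_post(&mutex);            /* signal(S): increment, wake a waiter */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    sem_init(&mutex, 0, 1);          /* shared between threads, initial value 1 */
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("shared = %d (expected 200000)\n", shared);
    sem_destroy(&mutex);
    return 0;
}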
Dining-Philosopher Problem
Five philosophers sit around a table and alternate between thinking and eating; each philosopher needs the two chopsticks adjacent to them in order to eat, so neighbouring philosophers compete for shared chopsticks. It is a classical synchronization problem because a naive solution (every philosopher picks up the left chopstick and then waits for the right one) can deadlock.
THE BOUNDED BUFFER ( PRODUCER / CONSUMER ) PROBLEM:
This is the same producer / consumer problem as before. But now we'll do it with
signals and waits. Remember: a wait decreases its argument and a signal increases its
argument.
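A sketch of the bounded-buffer solution with semaphores follows. It uses the three classical semaphores: mutex (initial value 1) protecting the buffer, empty (initial value BUFFER_SIZE) counting free slots, and full (initial value 0) counting filled slots. The buffer size, item counts and names below are illustrative choices, not something fixed by these notes (compile with gcc -pthread).

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define BUFFER_SIZE 5

int buffer[BUFFER_SIZE];
int in = 0, out = 0;

sem_t mutex;                         /* protects buffer, in, out        */
sem_t empty;                         /* number of free slots            */
sem_t full;                          /* number of filled slots          */

void *producer(void *arg) {
    for (int item = 0; item < 20; item++) {
        sem_wait(&empty);            /* wait for a free slot            */
        sem_wait(&mutex);
        buffer[in] = item;           /* put the item into the buffer    */
        in = (in + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&full);             /* one more filled slot            */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 20; i++) {
        sem_wait(&full);             /* wait for a filled slot          */
        sem_wait(&mutex);
        int item = buffer[out];      /* take the item out of the buffer */
        out = (out + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&empty);            /* one more free slot              */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&mutex, 0, 1);
    sem_init(&empty, 0, BUFFER_SIZE);
    sem_init(&full, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}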