Unit 2

This document discusses operating system processes and CPU scheduling algorithms. It defines key terms like process, CPU burst, I/O burst, scheduling criteria. It then describes several common CPU scheduling algorithms like FCFS, SJF, SRTF, Round Robin and Priority scheduling. For each algorithm it discusses the scheduling technique, advantages and disadvantages. The goals of scheduling are to maximize CPU utilization and minimize waiting time and turnaround time for processes.

IV-SEMESTER

OPERATING SYSTEMS
UNIT NO: 2
Syllabus:
CPU scheduling, goals of scheduling,
CPU scheduling algorithms: FCFS, SJF, SRTF, RR, Priority based.
Inter-process communication: process cooperation and synchronization, race condition,
critical section, mutual exclusion and implementation, semaphores, classical
inter-process communication problems.

A program under execution is called a process.

In computing, a process is an instance of a computer program that is being executed by one or more threads. A process may be made up of multiple threads of execution that execute instructions concurrently.
A process in an operating system passes through different states, from its creation to its completion.
A process may change its state because of events such as I/O requests, interrupt routines, synchronization between processes, process scheduling decisions, etc.
Process execution consists of a cycle of CPU execution and I/O wait; processes alternate between these two states.
Process execution begins with a CPU burst. That is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on.
The final CPU burst ends with a system request to terminate execution.
CPU cycle – the time required for the execution of one simple processor operation (such as an addition).
Burst time – the total time taken by the process for its execution on the CPU.
CPU burst – the period when the process is being executed on the CPU.
I/O burst – the period when the process is waiting for I/O before it can continue execution.
I/O wait time – measures the amount of time the CPU waits for disk I/O operations to complete.
Scheduling
Scheduling is the process of selecting a process from the ready queue and allotting the CPU to it for execution.
It may also involve removing an active task from the processor and replacing it with a new one.
A process moves through states such as ready, waiting, and running.

CPU scheduling
• CPU scheduling is the basis of multi-programmed operating systems. By switching the CPU among processes, the operating system can make the computer more productive.
• In a single-processor system, only one process can run at a time; any others must wait until the CPU is free and can be rescheduled. The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization.
• The idea is relatively simple. A process is executed until it must wait, typically for the completion of some I/O request. In a simple computer system, the CPU then just sits idle. All this waiting time is wasted; no useful work is accomplished.
• With multiprogramming, we try to use this time productively. Several processes are kept in memory at one time. When one process has to wait, the operating system takes the CPU away from that process and gives the CPU to another process. This pattern continues: every time one process has to wait, another process can take over use of the CPU.
• Scheduling of this kind is a fundamental operating-system function. Almost all computer resources are scheduled before use. The CPU is, of course, one of the primary computer resources; thus, its scheduling is central to operating-system design.
• There are two categories of scheduling:
1. Non-preemptive: once a process starts executing, it is not stopped until it completes execution. The CPU cannot be taken from a process until the process finishes; switching occurs only when the running process terminates or moves to a waiting state.
2. Preemptive: a running process can be stopped when an interrupt is generated. The OS allocates the CPU to a process for a fixed amount of time; on preemption, the process switches from the running state to the ready state, or from the waiting state to the ready state.
Goals of scheduling
The primary objective of CPU scheduling is to keep as many jobs as possible making progress at any given time.
• On a single-CPU system, the goal is to keep some job running at all times.
After studying this unit, you should be able to:
• Describe various CPU scheduling algorithms
• Assess CPU scheduling algorithms based on scheduling criteria
• Explain the issues related to multiprocessor and multicore scheduling
• Describe various real-time scheduling algorithms
• Describe the scheduling algorithms used in the Windows, Linux, and Solaris operating systems
• Apply modeling and simulation to evaluate CPU scheduling algorithms

Scheduling Criteria:
CPU utilization – keep the CPU as busy as possible; utilization should be maximized.
Throughput – the number of processes that complete their execution per unit time; it should be maximized.
Turnaround time – the total time taken from a process's submission to its completion; it should be minimized.
Waiting time – the time a process spends in the ready queue; it should be minimized, and CPU allocation should be fair.
Response time – the time from submission of a request until the process gives its first response; it should be minimized.
Terms used in each scheduling problem:
• Arrival time (AT) is the time at which the process arrives in the ready queue for execution; it is given in the problem table when we need to calculate the average waiting time.
• Completion time (CT) is the time at which the process finishes its execution.
• Turn around time (TAT) is the difference between completion time and arrival time, i.e. TAT = CT – AT.
• Burst time (BT) is the CPU time required by the process for its execution.
• Waiting time (WT) is the difference between turn around time and burst time, i.e. WT = TAT – BT.
CPU–I/O Burst Cycle
As noted above, process execution alternates between CPU bursts and I/O bursts, ending with a final CPU burst that issues a system request to terminate execution.
CPU burst – the amount of time a process uses the CPU until it starts waiting for some input or is interrupted by another process.
I/O burst – the amount of time a process waits for input/output before needing CPU time again.
CPU time is the time taken by the CPU to execute the process, while I/O time is the time taken by the process to perform I/O operations. Burst time is the total time taken by the process for its execution on the CPU.
Scheduling Criteria
There are several different criteria to consider when trying to select the "best"
scheduling algorithm for a particular situation and environment, including:
CPU utilization - Ideally the CPU would be busy 100% of the time, so as to waste 0
CPU cycles. On a real system CPU usage should range from 40% ( lightly loaded ) to
90% ( heavily loaded. )
Throughput - Number of processes completed per unit time. May range from 10 /
second to 1 / hour depending on the specific processes.
Turnaround time - Time required for a particular process to complete, from submission
time to completion. ( Wall clock time. )
Waiting time - How much time processes spend in the ready queue waiting their turn to
get on the CPU.
( Load average - The average number of processes sitting in the ready queue waiting
their turn to get into the CPU. Reported in 1-minute, 5-minute, and 15-minute averages
by "uptime" and "who". )
Response time - The time taken in an interactive program from the issuance of a command to the commencement of a response to that command.
Scheduling Algorithms : Types of scheduling algorithms
1. First-Come, First-served Scheduling(FCFS)
2. Shortest-Job-first Scheduling (SJF)
3. Shortest Remaining Time First (SRTF)
4. Round Robin Scheduling (RR)
5. Priority Scheduling(PR)
6. Multilevel Queue Scheduling
7. Multilevel Feedback Queue Scheduling
First-Come, First-served Scheduling(FCFS)
It is an algorithm that executes queued requests and processes in order of their arrival time. The process that requests the CPU first gets the CPU allocation first. FCFS is always non-preemptive in nature.
Jobs are always executed on a first-come, first-served basis: the process that arrives first is executed first, and the next process starts only after the previous one has fully completed.
This method is poor in performance, and the average wait time is quite high.
A real-life example of the FCFS method is buying a movie ticket at the ticket counter: customers are served in the order in which they join the queue. Similarly, in an online sale, the buyer who requests first gets to pay for and pick up the item; the seller does not hold the item for anyone else.
Advantages of FCFS
1) The FCFS algorithm doesn't include any complex logic; it just puts the process requests in a queue and executes them one by one.
2) The simplest form of a CPU scheduling algorithm
3) Easy to program
4) Eventually, every process will get a chance to run, so starvation doesn't occur.
Disadvantages of FCFS
1) It is a Non-Preemptive CPU scheduling algorithm, so after the process has been
allocated to the CPU, it will never release the CPU until it finishes executing.
2) The Average Waiting Time is high.
3) Short processes that are at the back of the queue have to wait for the long process
at the front to finish.
4) Parallel utilization of resources is not possible, which leads to the Convoy Effect and hence poor resource (CPU, I/O, etc.) utilization.

What is Convoy Effect?


The Convoy Effect is a phenomenon in which the entire Operating System slows down
due to a few slower processes in the system.
When Central processing unit (CPU) time is allotted to a process, the FCFS algorithm
assures that other processes only get Central processing unit (CPU) time when the
current one is finished.
This essentially leads to poor utilization of resources and hence poor performance.
Difference between convoy effect and starvation
• Convoy Effect is a phenomenon linked with the FCFS (First Come First Serve)
algorithm, in which the entire Operating System slows down because of a few
slow processes.
• However, starvation arises when a process has to wait for an indefinite period
of time to acquire the resource it needs.
• Starvation is a resource-management problem in which a process does not get the resources it needs because they are being used by other processes.

FCFS Scheduling Algorithms


Problem-01:
• Consider the set of 3 processes whose arrival time and burst time are given
below- If the CPU scheduling policy is FCFS,
• Calculate the average waiting time and average turn around time.
Process Id   Arrival time   Burst time
P1           0              2
P2           3              1
P3           5              6
Solution-

In the Gantt chart, the black box represents the idle time of the CPU: P1 runs from 0 to 2, the CPU is idle from 2 to 3, P2 runs from 3 to 4, the CPU is idle from 4 to 5, and P3 runs from 5 to 11.


Now, we know-
• Turn Around time = Completion Time – Arrival Time
• Waiting time = Turn Around Time – Burst Time
Process Id   Arrival time   Burst time   Completion Time   Turn Around time (CT – AT)   Waiting time (TAT – BT)
P1           0              2            2                 2 – 0 = 2                    2 – 2 = 0
P2           3              1            4                 4 – 3 = 1                    1 – 1 = 0
P3           5              6            11                11 – 5 = 6                   6 – 6 = 0

Average Turn Around time = (2 + 1 + 6) / 3 = 9 / 3 = 3 unit


Average waiting time = (0 + 0 + 0) / 3 = 0 / 3 = 0 unit
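The computation above can be sketched as a short FCFS simulation in Python (a minimal illustration; the function name `fcfs` and the tuple layout are my own choices, not from the notes):

```python
def fcfs(processes):
    """Non-preemptive FCFS: serve processes strictly in arrival order.

    processes: list of (pid, arrival_time, burst_time).
    Returns a dict pid -> (completion, turnaround, waiting).
    """
    result, time = {}, 0
    for pid, at, bt in sorted(processes, key=lambda p: p[1]):
        time = max(time, at)          # CPU sits idle until the process arrives
        time += bt                    # run the whole burst without preemption
        tat = time - at               # turnaround = completion - arrival
        result[pid] = (time, tat, tat - bt)
    return result

# Problem-01 data: (pid, arrival, burst)
table = fcfs([("P1", 0, 2), ("P2", 3, 1), ("P3", 5, 6)])
```

Running it reproduces the table above: every waiting time is 0, and the average turnaround time is (2 + 1 + 6) / 3 = 3 units.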
Problem-02: Consider the set of 5 processes whose arrival time and burst time are
given below-

Process Id Arrival time Burst time

P1 3 4

P2 5 3

P3 0 2

P4 5 1

P5 4 3

Calculate the average waiting time and average turn around time.
Solution-

Gantt chart: P3 runs 0–2, the CPU is idle 2–3, P1 runs 3–7, P5 runs 7–10, P2 runs 10–13, P4 runs 13–14.
Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time

Process Id   Exit time   Turn Around time   Waiting time
P1           7           7 – 3 = 4          4 – 4 = 0
P2           13          13 – 5 = 8         8 – 3 = 5
P3           2           2 – 0 = 2          2 – 2 = 0
P4           14          14 – 5 = 9         9 – 1 = 8
P5           10          10 – 4 = 6         6 – 3 = 3

Now,
Average Turn Around time = (4 + 8 + 2 + 9 + 6) / 5 = 29 / 5 = 5.8 unit
Average waiting time = (0 + 5 + 0 + 8 + 3) / 5 = 16 / 5 = 3.2 unit

Example-3: using FCFS Calculate the average waiting time.


Process   Arrival Time   Burst Time
P1        2              6
P2        5              2
P3        1              8
P4        0              3
P5        4              4
SOLUTION
Order of execution (by arrival): P4 (0–3), P3 (3–11), P1 (11–17), P5 (17–21), P2 (21–23).
Waiting times: P4 = 0, P3 = 2, P1 = 9, P5 = 13, P2 = 16.
Ans: Average waiting time = (0 + 2 + 9 + 13 + 16) / 5 = 40 / 5 = 8


Shortest-Job-First (SJF)
It is also called Shortest Job Next (SJN) scheduling. It exists in both preemptive and non-preemptive forms.
In this scheduling algorithm, the process with the shortest burst time (duration) is processed first by the CPU.
The processor must know the burst time of all processes in advance; in the simplest formulation, all processes are assumed to arrive at the same time.
In SJF scheduling, the process with the lowest burst time, among the list of available processes in the ready queue, is scheduled next.
SJF by itself usually refers to the non-preemptive mode of this scheduling.
The preemptive mode of SJF scheduling is known as the Shortest Remaining Time First (SRTF) scheduling algorithm.
It is of two types:
1. Non-preemptive (SJF)
2. Preemptive (SRTF)

Advantages of SJF
1. Maximum throughput
2. Minimum average waiting and turnaround time

Disadvantages of SJF
1. May suffer with the problem of starvation
2. It is not implementable because the exact Burst time for a process can't be known
in advance.
Problem-01:
Consider the set of 5 processes whose arrival time and burst time are given below-
If the CPU scheduling policy is SJF non-preemptive, calculate the average waiting
time and average turn around time.
Process Id   Arrival time   Burst time
P1           3              1
P2           1              4
P3           4              2
P4           0              6
P5           2              3
SOLUTION:
Gantt Chart: P4 runs 0–6, P1 runs 6–7, P3 runs 7–9, P5 runs 9–12, P2 runs 12–16.
Now, we know-
Turn Around time = Completion time – Arrival time
Waiting time = Turn Around time – Burst time

Process Id   Completion time   Turn Around time   Waiting time
P1           7                 7 – 3 = 4          4 – 1 = 3
P2           16                16 – 1 = 15        15 – 4 = 11
P3           9                 9 – 4 = 5          5 – 2 = 3
P4           6                 6 – 0 = 6          6 – 6 = 0
P5           12                12 – 2 = 10        10 – 3 = 7

Now,
Average Turn Around time = (4 + 15 + 5 + 6 + 10) / 5 = 40 / 5 = 8 unit
Average waiting time = (3 + 11 + 3 + 0 + 7) / 5 = 24 / 5 = 4.8 unit
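The non-preemptive SJF procedure can be sketched in Python with a min-heap of arrived processes keyed on burst time (a minimal sketch; the function name and tuple layout are illustrative):

```python
import heapq

def sjf(processes):
    """Non-preemptive SJF: among arrived processes, run the shortest burst.

    processes: list of (pid, arrival_time, burst_time).
    Returns a dict pid -> (completion, turnaround, waiting).
    """
    procs = sorted(processes, key=lambda p: p[1])
    time, i, ready, result = 0, 0, [], {}
    while i < len(procs) or ready:
        # admit everything that has arrived by the current time
        while i < len(procs) and procs[i][1] <= time:
            pid, at, bt = procs[i]
            heapq.heappush(ready, (bt, at, pid))   # min-heap keyed on burst
            i += 1
        if not ready:                              # CPU idle: jump to next arrival
            time = procs[i][1]
            continue
        bt, at, pid = heapq.heappop(ready)
        time += bt                                 # run to completion
        result[pid] = (time, time - at, time - at - bt)
    return result

# Problem-01 data
table = sjf([("P1", 3, 1), ("P2", 1, 4), ("P3", 4, 2), ("P4", 0, 6), ("P5", 2, 3)])
```

Running it reproduces the table above, with average turnaround 40 / 5 = 8 units and average waiting time 24 / 5 = 4.8 units.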


Problem-02:
If the CPU scheduling policy is SJF non-pre-emptive, calculate the average
waiting time and average turn around time.

Process Id Arrival time Burst time


P0 0 3
P1 2 6
P2 4 4
P3 6 5
P4 8 2

Gantt chart: P0 | P1 | P4 | P2 | P3
(P0 runs 0–3, P1 runs 3–9, P4 runs 9–11, P2 runs 11–15, P3 runs 15–20.)

Process Arrival time Burst time CT TAT WT

P0 0 3 3 3 0
P1 2 6 9 7 1
P2 4 4 15 11 7
P3 6 5 20 14 9
P4 8 2 11 3 1

Now,
Average Turn Around Time = (3 + 7 + 11 + 14 + 3) / 5 = 38 / 5 = 7.6 unit
Average Waiting Time = (0 + 1 + 7 + 9 + 1) / 5 = 18 / 5 = 3.6 unit
SJF (Preemptive) (Shortest Remaining Time First -SRTF) Scheduling Algorithm
It is the pre-emptive mode of Shortest Job First (SJF) scheduling.
In this algorithm, the CPU always executes the process with the shortest remaining burst time. There is no need for all processes to have the same arrival time.
If a newly arrived process has a shorter remaining time than the process currently executing, the current process is stopped in between its execution, and the newly arrived process is executed first.
Example-1: Consider the following table of arrival time and burst time for four processes P1, P2, P3, and P4.

Process Arrival Time Burst Time

P1 0 ms 8 ms

P2 1 ms 4 ms

P3 2 ms 9 ms

P4 3 ms 5 ms

calculate average waiting time and turn around time


solution:
Gantt chart: P1 | P2 | P4 | P1 | P3
(P1 runs 0–1, P2 runs 1–5, P4 runs 5–10, P1 resumes 10–17, P3 runs 17–26.)

As we know,
• Turn Around time = Completion time – arrival time
• Waiting Time = Turn around time – burst time
Process   Arrival Time   Burst Time   CT   TAT   WT
P1        0 ms           8 ms         17   17    9
P2        1 ms           4 ms         5    4     0
P3        2 ms           9 ms         26   24    15
P4        3 ms           5 ms         10   7     2

Now,
• Average Turn around time = 52/4 = 13
• Average waiting time = 26/4 = 6.5
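The SRTF schedule above can be sketched as a small event-driven simulation in Python, using a min-heap keyed on remaining time (a minimal sketch; names are illustrative):

```python
import heapq

def srtf(processes):
    """Preemptive SJF (SRTF): always run the shortest remaining time.

    processes: list of (pid, arrival_time, burst_time).
    Returns a dict pid -> (completion, turnaround, waiting).
    """
    burst = {pid: bt for pid, _, bt in processes}
    arrivals = sorted(processes, key=lambda p: p[1])
    time, i, ready, done = 0, 0, [], {}
    while i < len(arrivals) or ready:
        while i < len(arrivals) and arrivals[i][1] <= time:
            pid, at, bt = arrivals[i]
            heapq.heappush(ready, (bt, at, pid))   # min-heap on remaining time
            i += 1
        if not ready:
            time = arrivals[i][1]                  # CPU idle until next arrival
            continue
        rem, at, pid = heapq.heappop(ready)
        next_at = arrivals[i][1] if i < len(arrivals) else float("inf")
        run = min(rem, next_at - time)             # run until done or a new arrival
        time += run
        if rem - run > 0:                          # preempted: back into the heap
            heapq.heappush(ready, (rem - run, at, pid))
        else:
            tat = time - at
            done[pid] = (time, tat, tat - burst[pid])  # CT, TAT, WT
    return done

# Example-1 data
table = srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
```

Running it reproduces the table above: average turnaround 52 / 4 = 13 and average waiting time 26 / 4 = 6.5.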

PRACTICE Example
Q. Given the arrival time and burst time of 3 jobs in the table below. Calculate the
Average waiting time of the system.
Process ID   Arrival Time   Burst Time   Completion Time   Turn Around Time   Waiting Time
1            0              9            13                13                 4
2            1              4            5                 4                  0
3            2              9            22                20                 11

Solution:
(SRTF order: P1 runs 0–1, P2 runs 1–5, P1 resumes 5–13, P3 runs 13–22.)

Avg Waiting Time = (4 + 0 + 11) / 3 = 15 / 3 = 5 units

Round Robin (RR) (Preemptive):

Round Robin is a preemptive process scheduling algorithm. Each process is provided a fixed time to execute, called a quantum (or time slice).
Once a process has executed for the given time period, it is preempted and another process executes for its time period.
Characteristics:
• Round Robin is a pre-emptive algorithm.
• A fixed time, called the quantum or time slice, is allotted to each process for execution.
• Once a process has executed for its quantum, it is preempted and the next process in the ready queue executes.
• Round Robin is one of the oldest, fairest, and simplest algorithms.
• Context switching is used to save the states of preempted processes.

Problem-01:
Consider the set of 3 processes whose burst time are given below-

Process Queue Burst time


P1 4
P2 3
P3 5

If the CPU scheduling policy is Round Robin with time quantum = 2 units (all arrival times = 0), calculate the average waiting time and average turn around time.

Gantt chart: P1 | P2 | P3 | P1 | P2 | P3 | P3

Process Queue   Burst time   CT   TAT = CT – AT   WT = TAT – BT
P1              4            8    8 – 0 = 8       8 – 4 = 4
P2              3            9    9 – 0 = 9       9 – 3 = 6
P3              5            12   12 – 0 = 12     12 – 5 = 7
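The Round Robin schedule can be sketched with a simple queue simulation in Python (a minimal sketch; names are illustrative, and arrivals during a time slice join the ready queue before the preempted process, matching the ready queues in the worked problems):

```python
from collections import deque

def round_robin(processes, quantum):
    """Round Robin scheduling with a fixed time quantum.

    processes: list of (pid, arrival_time, burst_time).
    Returns a dict pid -> (completion, turnaround, waiting).
    """
    procs = sorted(processes, key=lambda p: p[1])
    burst = {pid: bt for pid, _, bt in procs}
    arrive = {pid: at for pid, at, _ in procs}
    rem = dict(burst)
    q, time, i, done = deque(), 0, 0, {}
    while i < len(procs) or q:
        if not q and procs[i][1] > time:
            time = procs[i][1]                     # CPU idles until next arrival
        while i < len(procs) and procs[i][1] <= time:
            q.append(procs[i][0]); i += 1
        pid = q.popleft()
        run = min(quantum, rem[pid])               # one time slice (or less)
        time += run
        rem[pid] -= run
        # arrivals during this slice enter the queue before the preempted process
        while i < len(procs) and procs[i][1] <= time:
            q.append(procs[i][0]); i += 1
        if rem[pid] > 0:
            q.append(pid)                          # preempted: back of the queue
        else:
            tat = time - arrive[pid]
            done[pid] = (time, tat, tat - burst[pid])  # CT, TAT, WT
    return done

# Problem-01 data (all arrivals at 0), quantum = 2
table = round_robin([("P1", 0, 4), ("P2", 0, 3), ("P3", 0, 5)], 2)
```

Running it reproduces the table above (CT 8, 9, 12; WT 4, 6, 7).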

Q.2: Round Robin with time quantum = 5; all arrival times = 0. Complete the TAT and WT columns.

Process Queue   Burst time   TAT   WT
P1              21
P2              3
P3              6
P4              2

Gantt chart: P1 | P2 | P3 | P4 | P1 | P3 | P1 | P1 | P1

Problem-03:
Consider the set of 5 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time


P1 0 5
P2 1 3
P3 2 1
P4 3 2
P5 4 3

If the CPU scheduling policy is Round Robin with time quantum = 2 unit,
calculate the average waiting time and average turn around time.

Solution-

Ready Queue- P1 P2 P3 P1 P4 P5 P2 P1 P5

Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time
• RT =CPU first time -AT

Process Id   Exit time   Turn Around time   Waiting time   Response Time (RT) = CPU first time – AT
P1           13          13 – 0 = 13        13 – 5 = 8     0 – 0 = 0
P2           12          12 – 1 = 11        11 – 3 = 8     2 – 1 = 1
P3           5           5 – 2 = 3          3 – 1 = 2      4 – 2 = 2
P4           9           9 – 3 = 6          6 – 2 = 4      7 – 3 = 4
P5           14          14 – 4 = 10        10 – 3 = 7     9 – 4 = 5

Now,
• Average Turn Around time = (13 + 11 + 3 + 6 + 10) / 5 = 43 / 5 = 8.6 unit
• Average waiting time = (8 + 8 + 2 + 4 + 7) / 5 = 29 / 5 = 5.8 unit

Problem-04:

Consider the set of 6 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time


P1 0 4
P2 1 5
P3 2 2
P4 3 1
P5 4 6
P6 6 3

If the CPU scheduling policy is Round Robin with time quantum = 2,


calculate the average waiting time and average turn around time.

Solution-
Gantt chart-
Ready Queue- P1, P2, P3, P1, P4, P5, P2, P6, P5, P2, P6, P5
Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time


P1 8 8–0=8 8–4=4
P2 18 18 – 1 = 17 17 – 5 = 12
P3 6 6–2=4 4–2=2
P4 9 9–3=6 6–1=5
P5 21 21 – 4 = 17 17 – 6 = 11
P6 19 19 – 6 = 13 13 – 3 = 10
Now,
Average Turn Around time = (8 + 17 + 4 + 6 + 17 + 13) / 6 = 65 / 6 = 10.83 unit
Average waiting time = (4 + 12 + 2 + 5 + 11 + 10) / 6 = 44 / 6 = 7.33 unit
Problem-05:
Consider the set of 6 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time


P1 5 5
P2 4 6
P3 3 7
P4 1 9
P5 2 2
P6 6 3
If the CPU scheduling policy is Round Robin with time quantum = 3, calculate the
average waiting time and average turn around time.
Solution-
Ready Queue- P4, P5, P3, P2, P4, P1, P6, P3, P2, P4, P1, P3

Now, we know-
• Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time

Process Id Exit time Turn Around time Waiting time


P1 32 32 – 5 = 27 27 – 5 = 22
P2 27 27 – 4 = 23 23 – 6 = 17
P3 33 33 – 3 = 30 30 – 7 = 23
P4 30 30 – 1 = 29 29 – 9 = 20
P5 6 6–2=4 4–2=2
P6 21 21 – 6 = 15 15 – 3 = 12

Now,
Average Turn Around time = (27 + 23 + 30 + 29 + 4 + 15) / 6 = 128 / 6 = 21.33 unit
Average waiting time = (22 + 17 + 23 + 20 + 2 + 12) / 6 = 96 / 6 = 16 unit

Priority Scheduling:
• A priority number (integer) is associated with each process.
• The CPU is allocated to the process with the highest priority. (Conventions differ: in many textbooks the smallest integer means the highest priority; the problems below explicitly use higher number = higher priority.)
• It can be preemptive or non-preemptive.
• SJF is a priority scheduling algorithm where the priority is the predicted next CPU burst time.
• Problem ≡ Starvation – low-priority processes may never execute.
• Solution ≡ Aging – as time progresses, increase the priority of waiting processes.
Advantages-
It considers the priority of the processes and allows the important processes to run first.
Priority scheduling in preemptive mode is best suited for real time operating system.

Disadvantages-
Processes with lower priority may starve for the CPU.
Waiting time and response time are unpredictable and can be very long for low-priority processes.

Problem-01:
Consider the set of 5 processes whose arrival time and burst time are given below-

Process Id   Arrival time   Burst time   Priority
P1           0              4            2
P2           1              3            3
P3           2              1            4
P4           3              5            5
P5           4              2            5

If the CPU scheduling policy is priority non-preemptive,


calculate the average waiting time and average turn around time. (Higher number
represents higher priority)
Solution-
Gantt Chart: P1 runs 0–4, P4 runs 4–9, P5 runs 9–11, P3 runs 11–12, P2 runs 12–15.

Now, we know-
Turn Around time = Exit time – Arrival time
Waiting time = Turn Around time – Burst time

Process Id   Exit time   Turn Around time   Waiting time
P1           4           4 – 0 = 4          4 – 4 = 0
P2           15          15 – 1 = 14        14 – 3 = 11
P3           12          12 – 2 = 10        10 – 1 = 9
P4           9           9 – 3 = 6          6 – 5 = 1
P5           11          11 – 4 = 7         7 – 2 = 5

Now,
• Average Turn Around time = (4 + 14 + 10 + 6 + 7) / 5 = 41 / 5 = 8.2 unit
• Average waiting time = (0 + 11 + 9 + 1 + 5) / 5 = 26 / 5 = 5.2 unit
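The non-preemptive priority schedule can be sketched in Python, using this problem's "higher number = higher priority" convention and earlier arrival as the tie-breaker (a minimal sketch; names are illustrative):

```python
import heapq

def priority_np(processes):
    """Non-preemptive priority scheduling; higher number = higher priority.

    processes: list of (pid, arrival_time, burst_time, priority).
    Returns a dict pid -> (completion, turnaround, waiting).
    """
    procs = sorted(processes, key=lambda p: p[1])
    time, i, ready, done = 0, 0, [], {}
    while i < len(procs) or ready:
        while i < len(procs) and procs[i][1] <= time:
            pid, at, bt, pr = procs[i]
            # negate priority to turn heapq's min-heap into a max-heap;
            # ties broken by earlier arrival time
            heapq.heappush(ready, (-pr, at, pid, bt))
            i += 1
        if not ready:
            time = procs[i][1]                 # CPU idle until next arrival
            continue
        _, at, pid, bt = heapq.heappop(ready)
        time += bt                             # run to completion (non-preemptive)
        tat = time - at
        done[pid] = (time, tat, tat - bt)      # CT, TAT, WT
    return done

# Problem-01 data: (pid, arrival, burst, priority)
table = priority_np([("P1", 0, 4, 2), ("P2", 1, 3, 3), ("P3", 2, 1, 4),
                     ("P4", 3, 5, 5), ("P5", 4, 2, 5)])
```

Running it reproduces the table above (exit times 4, 15, 12, 9, 11).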

Problem-02:
Consider the set of 5 processes whose arrival time and burst time are given below-

Process Id Arrival time Burst time Priority

P1 0 4 2

P2 1 3 3

P3 2 1 4

P4 3 5 5

P5 4 2 5

If the CPU scheduling policy is priority preemptive, calculate the average waiting time
and average turn around time. (Higher number represents higher priority)

Now, we know-
• Turn Around time = Exit time – Arrival time
• Waiting time = Turn Around time – Burst time
(Gantt Chart: P1 runs 0–1, P2 runs 1–2, P3 runs 2–3, P4 runs 3–8, P5 runs 8–10, P2 resumes 10–12, P1 resumes 12–15.)

Process Id   Exit time   Turn Around time   Waiting time
P1           15          15 – 0 = 15        15 – 4 = 11
P2           12          12 – 1 = 11        11 – 3 = 8
P3           3           3 – 2 = 1          1 – 1 = 0
P4           8           8 – 3 = 5          5 – 5 = 0
P5           10          10 – 4 = 6         6 – 2 = 4
Now,
Average Turn Around time = (15 + 11 + 1 + 5 + 6) / 5 = 38 / 5 = 7.6 unit
Average waiting time = (11 + 8 + 0 + 0 + 4) / 5 = 23 / 5 = 4.6 unit

Multilevel Queue (MLQ) CPU Scheduling


Multilevel queue scheduling is one such CPU scheduling algorithm where the
tasks to be performed by the CPU are divided into different groups based on various
properties
Multilevel queue scheduling is a type of CPU scheduling in which the processes
in the ready state are divided into different groups, each group having its own
scheduling needs. The ready queue is divided into different queues according to
different properties of the process like memory size, process priority, or process type.
The processes are typically divided into three different queues, each with its own scheduling algorithm:
1. System processes (highest priority), e.g. scheduled with Round Robin
2. Interactive processes (medium priority), e.g. scheduled with SJF
3. Batch processes (background, lowest priority), e.g. scheduled with FCFS
When a new process comes, that process is added to one of the above-given three
process queues based on the classification specified for those process queues.
Let us assume that an interactive process like online gaming wants to utilize the
CPU.
Then this process will be placed in the Interactive processes queue. Or
if any process owned by the operating system itself comes, it is placed in
the System Processes queue, and similarly for other processes.
Advantages of Multilevel Queue Scheduling
1. It allows us to apply different scheduling algorithms for different processes.
2. It offers a low scheduling overhead, i.e., the time taken by the dispatcher to move
the process from the ready state to the running state is low.
Disadvantages of Multilevel Queue Scheduling
1. There are chances of starvation for lower-priority processes. If higher-priority
processes keep coming, then the lower-priority processes won't get an opportunity
to go into the running state.
2. Multilevel queue scheduling is inflexible.
Multilevel Feedback Queue Scheduling
Multilevel feedback queue scheduling is a CPU scheduling algorithm that assigns
processes to different queues based on their priority and history of resource usage.
The algorithm uses multiple queues with different priorities and time quantum, and
processes are moved between queues based on their behaviour and performance,
improving overall system efficiency. (to remove the starvation in previous method)
Multilevel Feedback Queue criteria (assuming three queues Q1, Q2, Q3 in decreasing order of priority):
At the start, the processes in Q1 are run until Q1 is empty.
Processes in Q2 will run only when Q1 is empty.
When Q1 and Q2 are both empty, Q3 processes will run.
A process scheduled in Q2 takes precedence over a process scheduled in Q3; similarly, if a process arrives in a higher-priority queue while a Q3 process is running, it preempts the Q3 process. Within Q3, processes are run on an FCFS basis.
Advantages of Multilevel Feedback Queue Scheduling
The MFQS algorithm is a flexible scheduling method.
It allows various processes to switch across queues.
Prevents CPU overload.
After a specific amount of time, a mechanism known as Aging helps move a lower
priority activity to the next higher priority queue.

Disadvantages of Multilevel Feedback Queue Scheduling


It is the most challenging algorithm for scheduling.
Other methods are required to choose the optimum scheduler.
There may be CPU overheads associated with this operation.
Inter-process communication: process cooperation and synchronization
Inter-process communication (IPC) refers to the mechanisms an operating system provides that allow processes to manage shared data.
IPC is used by programs to communicate data to each other and to synchronize their activities. Semaphores, shared memory, and message queues are common methods of inter-process communication.
Co-operating Processes:
❑ An independent process is not affected by other running processes.
❑ Cooperating processes can affect, or be affected by, other processes.
❑ A cooperating process is one which shares data with another process.
❑ Cooperating processes interact with each other via Inter-Process Communication (IPC).
❑ As cooperating processes share resources, a deadlock condition may arise.
❑ To avoid deadlocks, operating systems typically use algorithms such as the Banker's algorithm to manage and allocate resources to processes.
Why cooperating processes? (Advantages)
• Information sharing: access to the same files or data.
• Computational speedup: subtasks can be performed in parallel, increasing computation speed.
• Modularity: complicated tasks can be divided into smaller subtasks.
• Convenience: a user can perform several tasks at once, such as compiling, printing, and editing.

Producer Consumer Process


In an operating system, the Producer is a process which produces data items, and the Consumer is a process which consumes the data items produced by the Producer. Both the Producer and the Consumer share a common memory buffer.

This buffer is a space of a certain size in the memory of the system which is used for storage.
Code for the Producer process:

while (true) {
    /* produce an item and put it in nextProduced */
    while (count == BUFFER_SIZE)
        ;                              /* buffer full: do nothing (busy-wait) */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;       /* circular buffer */
    count++;
}

Code for the Consumer process:

while (true) {
    while (count == 0)
        ;                              /* buffer empty: do nothing (busy-wait) */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
    /* consume the item in nextConsumed */
}

The Producer-Consumer problem is a classical multi-process synchronization problem


in operating systems.
The problem is defined as follows: there is a fixed-size buffer, a Producer process, and a Consumer process.
The Producer creates data items and adds them to the shared buffer; the Consumer removes and consumes data items from the buffer.

What are the Producer-Consumer Problems?


Producer Process should not produce any data when the shared buffer is full.
Consumer Process should not consume any data when the shared buffer is empty.
The access to the shared buffer should be mutually exclusive i.e at a time only one
process should be able to access the shared buffer and make changes to it.
Solution for the Producer-Consumer problem
To solve the Producer-Consumer problem, three semaphore variables are typically used: a mutex for exclusive access to the buffer, plus counting semaphores for the empty and full slots.
Semaphores are variables used to indicate the number of resources available in the system at a particular time; semaphore variables are used to achieve process synchronization.
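A minimal sketch of this three-semaphore solution using Python's threading module (the names `empty`, `full`, and `mutex` follow the usual textbook convention; the buffer size and item values are arbitrary):

```python
import threading
from collections import deque

BUFFER_SIZE = 4
buffer = deque()
empty = threading.Semaphore(BUFFER_SIZE)  # counts free slots (initially all free)
full = threading.Semaphore(0)             # counts filled slots (initially none)
mutex = threading.Lock()                  # mutual exclusion on the buffer itself

def producer(items):
    for item in items:
        empty.acquire()                   # block while the buffer is full
        with mutex:
            buffer.append(item)
        full.release()                    # signal: one more item available

consumed = []
def consumer(n):
    for _ in range(n):
        full.acquire()                    # block while the buffer is empty
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                   # signal: one more free slot

items = list(range(10))
p = threading.Thread(target=producer, args=(items,))
c = threading.Thread(target=consumer, args=(len(items),))
p.start(); c.start(); p.join(); c.join()
```

The semaphores replace the busy-waiting on `count` in the pseudocode above: the producer blocks when no slot is empty, the consumer blocks when no slot is full, and the lock keeps buffer updates mutually exclusive.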
Race Condition:
We face a race condition in the producer-consumer problem. A race condition is a situation in which two or more threads or processes read and write some shared data, and the final result depends on the order in which the accesses happen.
For example, a race occurs when two threads access a shared variable at the same time: the first thread reads the variable, the second thread reads the same value, both update their own copies, and one update overwrites (loses) the other.
● count++ could be implemented as
register1 = count
register1 = register1 + 1
count = register1
● count-- could be implemented as
register2 = count
register2 = register2 - 1
count = register2
● Consider this execution interleaving with “count = 5” initially:
S0: producer execute register1 = count {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = count {register2 = 5}
S3: consumer execute register2 = register2 - 1 {register2 = 4}
S4: producer execute count = register1 {count = 6 }
S5: consumer execute count = register2 {count = 4}
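The lost-update interleaving S0–S5 can be replayed directly in straight-line Python, with each statement standing in for one machine instruction:

```python
# Replay the S0-S5 interleaving with count = 5 initially
count = 5

register1 = count              # S0: producer reads count        -> register1 = 5
register1 = register1 + 1      # S1: producer increments locally -> register1 = 6
register2 = count              # S2: consumer reads count        -> register2 = 5
register2 = register2 - 1      # S3: consumer decrements locally -> register2 = 4
count = register1              # S4: producer writes back        -> count = 6
count = register2              # S5: consumer writes back        -> count = 4
```

After one produce and one consume, `count` should still be 5; this interleaving leaves it at 4 because the producer's update is lost.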
To solve this problem, process synchronization can be used.
Critical-Section Problem:
When more than one process tries to access the same code segment, that segment is known as the critical section.
The critical section contains shared variables or resources which need to be synchronized to maintain the consistency of data.
Problems Caused by Critical Section in OS
Race Condition
Deadlock
Solutions to the Critical Section Problem in OS:
Semaphores, Peterson’s Algorithm, Bakery Algorithm
The critical section problem is to make sure that only one process should be in a
critical section at a time.
When a process is in the critical section, no other processes are allowed to enter the
critical section.
This solves the race condition.
The requirements for a solution to the critical section problem are:
Mutual Exclusion, Progress, and Bounded Waiting.

Solution to Critical-Section Problem


Any solution to the critical section problem must satisfy the following
requirements:
Requirements:
1. Mutual Exclusion - When one process is executing in its critical section, no
other process is allowed to execute in its critical section.
2. Progress - If no process is executing in its critical section and some processes
wish to enter their critical sections, then the selection of the process that enters
next cannot be postponed indefinitely.
3. Bounded Waiting - A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made a
request to enter its critical section and before that request is granted.
● Assume that each process executes at a nonzero speed
● No assumption concerning relative speed of the N processes
The general structure of these solutions: each process repeatedly executes an entry
section (where it requests permission to enter), the critical section itself, an exit
section, and finally a remainder section.
Hence, the critical-section problem is to design a protocol that the processes can use
to cooperate (i.e., to synchronize with each other).
Solution of critical section problem for two processes
1) Software based
a) Peterson's solution
b) Dekker's solution
2) Hardware based
a) Interrupt disabling
b) Special machine instructions
3) Operating system based
a) Semaphore
b) Monitor
Critical-Section Problem Solution for Two Processes
Algorithm 1 (works on the basis of turn)
● Shared variable: int turn; initially turn = 0
● turn == i ⇒ Pi can enter its critical section
● Process Pi:
do {
    while (turn != i)
        ;              // busy-wait while it is not Pi's turn
    /* critical section */
    turn = j;          // hand the turn to the other process
    /* remainder section (non-shared work) */
} while (1);
● Satisfies mutual exclusion, but not progress: the processes must strictly alternate,
so if Pj never wants to enter, Pi cannot enter twice in a row.
Algorithm 2 (works on the basis of flags; for two processes)
● Shared variable: boolean flag[2]; initially flag[0] = flag[1] = false
● flag[i] == true ⇒ Pi is ready to enter its critical section
● Process Pi:
do {
    flag[i] = true;    // announce that Pi wants to enter
    while (flag[j])
        ;              // busy-wait while Pj is also ready
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (1);
● Satisfies mutual exclusion, but not the progress requirement: if both processes set
their flags at the same time, each waits for the other forever.
Algorithm 3 (Peterson's Solution; combines algorithms 1 and 2)
● Uses the shared variables of algorithms 1 and 2: int turn and boolean flag[2].
● Process Pi (with j = 1 - i):
do {
    flag[i] = true;    // Pi is ready
    turn = j;          // but yields the turn to Pj
    while (flag[j] && turn == j)
        ;              // wait only if Pj is ready and it is Pj's turn
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (1);
● Meets all three requirements; solves the critical-section problem for two processes.
Bakery Algorithm for n processes
Bakery Algorithm is an algorithm that basically works as a generalized solution for the
critical section problem
The algorithm preserves the first come first serve property.
Critical section for n processes:
Before entering its critical section, a process receives a token (ticket) number. The
holder of the smallest number enters the critical section.
If processes Pi and Pj receive the same number,
if i < j, then
Pi is served first;
else
Pj is served first.
The numbering scheme always generates numbers in increasing order of enumeration;
i.e., 1,2,3,4,5...
● Notation -: lexicographical order (ticket #, process id #)
● Firstly the ticket number is compared. If same then the process ID is compared
next, i.e.-
● (a,b) < (c,d) if a < c or if a = c and b < d
● max(a0, …, an-1) is a number k such that k ≥ ai for i = 0, …, n − 1
● Shared data - choosing is an array [0..n – 1] of boolean values & number is an
array [0..n – 1] of integer values.
boolean choosing[n];
int number[n];
Both are initialized to False & Zero respectively.
do {
    choosing[i] = true;                 // Pi starts choosing a ticket
    number[i] = max(number[0], number[1], …, number[n-1]) + 1;  // take the next ticket
    choosing[i] = false;                // ticket received
    for (j = 0; j < n; j++) {           // check every other process
        while (choosing[j])
            ;                           // wait while Pj is still choosing its ticket
        while ((number[j] != 0) && ((number[j], j) < (number[i], i)))
            ;                           // wait while Pj holds a smaller (ticket, id) pair
    }
    /* critical section */
    number[i] = 0;                      // token number becomes zero on completion
    /* remainder section */
} while (1);
Semaphore :
A semaphore is an integer variable that is shared by multiple processes.
It is a synchronization tool that (in its blocking form) does not require busy waiting.
The semaphore is an integer flag indicating whether it is safe to proceed.
Two standard operations modify S: wait() and signal(), originally called P() and V().
Semaphores are less complicated than the earlier software solutions.
A semaphore can only be accessed via these two indivisible (atomic) operations:
1) wait (S) {
       while (S <= 0)
           ;        // no-op: busy-wait until S becomes positive
       S--;
   }
2) signal (S) {
       S++;
   }
Semaphore Implementation with no Busy Waiting:
With each semaphore there is an associated waiting queue. Each entry in a waiting
queue has two data items:
value (of type integer)
pointer to next record in the list
Two operations:
block – place the process invoking the operation on the appropriate waiting queue.
wakeup – remove one of the processes in the waiting queue and place it in the ready queue.
Implementation of wait:
wait(S) {
    value--;
    if (value < 0) {
        add this process to the waiting queue;
        block();
    }
}
Implementation of signal:
signal(S) {
    value++;
    if (value <= 0) {
        remove a process P from the waiting queue;
        wakeup(P);
    }
}
Classical Problems of Synchronization
Reader-Writer Problem
Producer-Consumer Problem (Bounded Buffer Problem)
Dining-Philosophers Problem
THE BOUNDED BUFFER (PRODUCER / CONSUMER) PROBLEM:
This is the same producer/consumer problem as before, but now solved with semaphore
waits and signals. Remember: a wait decreases the semaphore's value and a signal
increases it.
BINARY_SEMAPHORE mutex = 1;              // can only be 0 or 1
COUNTING_SEMAPHORE empty = n, full = 0;  // can take on any integer value