Operating system notes

CPU scheduling is essential for multiprogrammed operating systems to enhance CPU utilization by managing process execution and minimizing idle time. It involves various algorithms, including First-Come, First-Served, Shortest-Job-First, and Round-Robin, each with specific advantages and performance metrics such as waiting time and turnaround time. Effective scheduling aims to optimize CPU utilization, throughput, and response time while addressing issues like starvation and context-switching overhead.


CPU Scheduling

• CPU scheduling is the basis of multiprogrammed operating systems.

• By switching the CPU among processes, the operating system can make the computer more
productive.

Why CPU Scheduling?


Purpose:

To improve CPU utilization.

Problem:

When a process executes and has to wait for an I/O request to complete, the CPU simply sits idle.
This waiting time is wasted; no useful work is accomplished.

Solution:

With multiprogramming, we try to use this time productively: when one process has to wait, the
operating system takes the CPU away from that process and gives it to another process.

Basic Concepts of CPU Scheduling

• Scheduling is a fundamental operating system function.

• Almost all computer resources are scheduled before use.

• The CPU is one of the primary computer resources.

• CPU Burst: a period during which the process executes on the CPU.

• I/O Burst: a period during which the process waits for I/O to complete.

• Process execution begins with a CPU burst, which is followed by an I/O burst, then
another CPU burst, and so on (alternating).
Alternating Sequence of CPU And I/O Bursts

Preemptive Scheduling

• A new process (if one exists in the ready queue) must be selected for execution.

• Windows 95 and all subsequent versions of Windows have used preemptive scheduling.

• It requires special hardware (for example, a timer) for scheduling.

Non-preemptive or cooperative Scheduling

• Once the CPU has been allocated to a process, the process keeps the CPU until it releases the
CPU either by terminating or by switching to the waiting state.

• This scheduling method was used by Microsoft Windows 3.x.

• It does not require special hardware (for example, a timer) for scheduling.

Preemptive Scheduling

CPU-scheduling decisions may take place under the following four circumstances:

1. When a process switches from the running state to the waiting state (for example, as the result of an
I/O request or an invocation of wait for the termination of one of the child processes)

2. When a process switches from the running state to the ready state (for example, when an interrupt
occurs)

3. When a process switches from the waiting state to the ready state (for example, at completion of I/O)

4. When a process terminates

• CPU Scheduler:

Whenever the CPU becomes idle, the short-term scheduler (CPU scheduler) selects a process from the
processes in memory that are ready to execute and allocates the CPU to that process.
• Dispatcher:

The dispatcher is the module that gives control of the CPU to the process selected by the short-term
scheduler. This function involves the following:

• Switching context

• Switching to user mode

• Jumping to the proper location in the user program to restart that program

• Dispatch latency.

The time it takes for the dispatcher to stop one process and start another running is known as
the dispatch latency.

Scheduling Criteria
• Scheduling criteria are used for comparing different CPU scheduling algorithms.

• Many criteria have been suggested for comparing CPU scheduling algorithms.

• The criteria include the following:

CPU utilization.

Keep the CPU as busy as possible.

Throughput.

The number of processes that are completed per time unit is called the throughput.

Turnaround time.

The interval from the time of submission of a process to the time of completion is the turnaround time.

Waiting time.

The amount of time that a process spends waiting in the ready queue.

Response time.

The time from the submission of a request until the first response is produced.

Optimization Criteria
It is desirable to

1. Maximize CPU utilization


2. Maximize throughput
3. Minimize turnaround time
4. Minimize start time
5. Minimize waiting time
6. Minimize response time

In most cases, we strive to optimize the average measure of each metric.

In other cases, it is more important to optimize the minimum or maximum values rather than the
average.
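The relationships among these criteria can be sketched in a few lines of code. This is a minimal illustration (the function name `metrics` is ours, not a standard API), assuming the arrival, burst, and completion times of a process are known:

```python
# Minimal sketch: deriving turnaround and waiting time for one process
# from its arrival, burst, and completion times.
def metrics(arrival, burst, completion):
    """Return (turnaround, waiting) for one process."""
    turnaround = completion - arrival     # total time in the system
    waiting = turnaround - burst          # time spent in the ready queue
    return turnaround, waiting

# A process arriving at t=2 with a 4 ms burst that completes at t=12:
print(metrics(2, 4, 12))  # (10, 6)
```

The identity waiting = turnaround − burst holds for any single-CPU schedule and is used implicitly in the worked examples that follow.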

Scheduling Algorithms
• CPU scheduling deals with the problem of deciding which of the processes in the ready queue is
to be allocated the CPU.

• There are many different CPU scheduling algorithms.

1. First-Come, First-Served Scheduling

2. Shortest-Job-First Scheduling

3. Priority Scheduling

4. Round-Robin Scheduling

5. Multilevel Queue Scheduling

6. Multilevel Feedback-Queue Scheduling

Single Processor Scheduling Algorithms

First-Come, First-Served Scheduling

• The FCFS scheduling algorithm is non-preemptive.

• Once the CPU has been allocated to a process, that process keeps the CPU until it releases it
either by terminating or by requesting I/O

• It is a troublesome algorithm for time-sharing systems

• FCFS is the same as FIFO.

• Simple, fair, but poor performance.

• Average queueing time may be long.

• What are the average queueing and residence times for this scenario?
• How do average queueing and residence times depend on ordering of these processes in the
queue?

Process Burst Time

P1 24

P2 3

P3 3

With FCFS, the process that requests the CPU first is allocated the CPU first.

Suppose that the processes arrive in the order: P1, P2, P3.

The Gantt chart for the schedule is:

| P1 | P2 | P3 |
0    24   27   30
Waiting time for process P1 = 0 milliseconds

Waiting time for process P2 = 24 milliseconds

Waiting time for process P3 = 27 milliseconds

Average waiting time = (P1 + P2 + P3)/3

Average waiting time: (0+24+27)/3 = 17

Average turn-around time: (24+27+30)/3=27

Suppose that the processes arrive in the order: P2 , P3 , P1

The Gantt chart for the schedule is:

| P2 | P3 | P1 |
0    3    6    30
Waiting time for P1 = 6; P2 = 0; P3 = 3

Average waiting time: (6 + 0 + 3)/3 = 3

(Much better than previous)

Average turn-around time: (3 + 6 + 30)/3 = 13
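Both orderings above can be checked with a short simulation. This is an illustrative sketch (the function and variable names are ours), assuming all processes arrive at time 0:

```python
# Sketch of FCFS scheduling: processes run to completion in queue order.
def fcfs(order, burst):
    """order: process names in arrival order; burst: name -> burst time.
    Returns (average waiting time, average turnaround time)."""
    t, waits, turns = 0, [], []
    for p in order:
        waits.append(t)        # waiting time = start time (all arrive at 0)
        t += burst[p]
        turns.append(t)        # turnaround time = completion time
    return sum(waits) / len(waits), sum(turns) / len(turns)

burst = {"P1": 24, "P2": 3, "P3": 3}
print(fcfs(["P1", "P2", "P3"], burst))  # (17.0, 27.0)
print(fcfs(["P2", "P3", "P1"], burst))  # (3.0, 13.0)
```

Running both orders reproduces the two averages above and makes the effect of queue order easy to experiment with.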

Comparison of both cases

• In the first case, all the other processes wait for one long-running process to finish using the CPU
(this is known as the convoy effect).

• This problem results in lower CPU and device utilization.

The second case shows that

• Higher utilization might be possible if the short processes were allowed to run first.

Shortest-Job-First Scheduling

• The SJF algorithm associates with each process the length of its next CPU burst.

• When the CPU becomes available, it is assigned to the process that has the smallest next CPU
burst (in the case of matching bursts, FCFS is used)

• Two schemes:

– Non-preemptive – once the CPU is given to the process, it cannot be preempted until it
completes its CPU burst.

– Preemptive – if a new process arrives with a CPU burst length less than the remaining
time of the currently executing process, preempt. This scheme is known as Shortest-
Remaining-Time-First (SRTF)

Example #1
Non-Preemptive SJF(simultaneous arrival)

Process Arrival Time Burst Time (milliseconds)

P1 0.0 6

P2 0.0 4

P3 0.0 1

P4 0.0 5
The Gantt chart for the schedule is:

P3 P2 P4 P1

0 1 5 10 16

• Waiting time for

P1 = 10; P2 = 1; P3 = 0, P4 = 5

• Average waiting time = (0 + 1 + 5 + 10)/4

= 4 milliseconds

• Average turn-around time :

= (1 + 5 + 10 + 16)/4

= 8 milliseconds
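With simultaneous arrivals, non-preemptive SJF amounts to sorting the processes by burst length and running them FCFS in that order. A hedged sketch (names are illustrative) that reproduces the averages above:

```python
# Sketch of non-preemptive SJF when all processes arrive at time 0:
# sort by burst length, then run each to completion in that order.
def sjf_simultaneous(procs):
    """procs: list of (name, burst). Returns name -> (waiting, turnaround)."""
    t, result = 0, {}
    for name, burst in sorted(procs, key=lambda p: p[1]):
        result[name] = (t, t + burst)   # waiting = start, turnaround = completion
        t += burst
    return result

r = sjf_simultaneous([("P1", 6), ("P2", 4), ("P3", 1), ("P4", 5)])
print(sum(w for w, _ in r.values()) / 4)    # 4.0
print(sum(ta for _, ta in r.values()) / 4)  # 8.0
```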

Example #2

Non-Preemptive SJF (varied arrival times)

Process Arrival Time Burst Time

P1 0.0 7

P2 2.0 4

P3 4.0 1

P4 5.0 4

SJF (non-preemptive, varied arrival times)

The Gantt chart for the schedule is:

| P1 | P3 | P2 | P4 |
0    7    8    12   16
Calculation of Waiting Time

Here, waiting time = Start Time – Arrival Time (each process runs without interruption once started).

Waiting time for P1 = (0 – 0.0) = 0 milliseconds (starts at 0, arrived at 0.0)

Waiting time for P2 = (8 – 2.0) = 6 milliseconds (starts at 8, arrived at 2.0)

Waiting time for P3 = (7 – 4.0) = 3 milliseconds (starts at 7, arrived at 4.0)

Waiting time for P4 = (12 – 5.0) = 7 milliseconds (starts at 12, arrived at 5.0)

Calculation of Turn-Around Time

Turn-around time = Completion Time – Arrival Time.

Turn-around time for P1 = (7 – 0.0) = 7 milliseconds (completes at 7, arrived at 0.0)

Turn-around time for P2 = (12 – 2.0) = 10 milliseconds (completes at 12, arrived at 2.0)

Turn-around time for P3 = (8 – 4.0) = 4 milliseconds (completes at 8, arrived at 4.0)

Turn-around time for P4 = (16 – 5.0) = 11 milliseconds (completes at 16, arrived at 5.0)
• Average waiting time
= ( (0 – 0) + (8 – 2) + (7 – 4) + (12 – 5) )/4
= (0 + 6 + 3 + 7)/4

= 4 milliseconds

• Average turn-around time:


= ( (7 – 0) + (12 – 2) + (8 - 4) + (16 – 5))/4
= ( 7 + 10 + 4 + 11)/4

= 8 milliseconds
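The schedule above can be reproduced with a short simulation of non-preemptive SJF. This is a sketch (function and variable names are ours), breaking burst-length ties by arrival time:

```python
# Sketch of non-preemptive SJF with varied arrival times: whenever the CPU
# is free, pick the ready process with the shortest burst (earliest arrival on ties).
def sjf_nonpreemptive(procs):
    """procs: list of (name, arrival, burst). Returns name -> (waiting, turnaround)."""
    pending, t, result = list(procs), 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= t]
        if not ready:                       # CPU idle: jump to the next arrival
            t = min(p[1] for p in pending)
            continue
        name, arrival, burst = min(ready, key=lambda p: (p[2], p[1]))
        result[name] = (t - arrival, t + burst - arrival)  # waiting, turnaround
        t += burst
        pending.remove((name, arrival, burst))
    return result

r = sjf_nonpreemptive([("P1", 0.0, 7), ("P2", 2.0, 4), ("P3", 4.0, 1), ("P4", 5.0, 4)])
print(sum(w for w, _ in r.values()) / 4)    # 4.0
print(sum(ta for _, ta in r.values()) / 4)  # 8.0
```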

Example #3
Preemptive SJF(Shortest-remaining-time-first)

Process Arrival Time Burst Time

P1 0.0 7

P2 2.0 4

P3 4.0 1

P4 5.0 4

SJF (preemptive, varied arrival times)

Gantt Chart

| P1 | P2 | P3 | P2 | P4 | P1 |
0    2    4    5    7    11   16
Each waiting interval = (time the process resumes) – (time it last became ready).

P1 arrives at 0.0 and runs immediately: first wait = (0 – 0.0) = 0 milliseconds.
P1 is preempted at 2 and resumes at 11: second wait = (11 – 2) = 9 milliseconds.

Total waiting time for P1 = 0 + 9 = 9 milliseconds.

P2 arrives at 2.0 and runs at 2: first wait = (2 – 2.0) = 0 milliseconds.
P2 is preempted at 4 and resumes at 5: second wait = (5 – 4) = 1 millisecond.

Total waiting time for P2 = 0 + 1 = 1 millisecond.

P3 arrives at 4.0 and runs at 4: waiting time = (4 – 4.0) = 0 milliseconds.

P4 arrives at 5.0 and runs at 7: waiting time = (7 – 5.0) = 2 milliseconds.

• Average waiting time

= ( [(0 – 0) + (11 - 2)] + [(2 – 2) + (5 – 4)] + (4 - 4) + (7 – 5) )/4


= (9 + 1 + 0 + 2)/4
= 3 milliseconds

• Average turn-around time (Completion Time – Arrival Time)

= ( (16 – 0) + (7 – 2) + (5 – 4) + (11 – 5) )/4

= (16 + 5 + 1 + 6)/4

= 7 milliseconds
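The SRTF schedule above can be reproduced by simulating the CPU in 1 ms ticks: at every tick, run the ready process with the least remaining time. This is an illustrative sketch (names are ours; ties in remaining time are broken arbitrarily, which does not matter for this workload):

```python
# Sketch of preemptive SJF (SRTF), simulated in 1 ms ticks.
def srtf(procs):
    """procs: list of (name, arrival, burst). Returns name -> (waiting, turnaround)."""
    arrival = {n: a for n, a, b in procs}
    burst = {n: b for n, a, b in procs}
    remaining = dict(burst)
    t, result = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= t]
        if not ready:                 # no process has arrived yet: CPU idles
            t += 1
            continue
        n = min(ready, key=lambda name: remaining[name])
        remaining[n] -= 1             # run the chosen process for one tick
        t += 1
        if remaining[n] == 0:
            del remaining[n]
            turnaround = t - arrival[n]
            result[n] = (turnaround - burst[n], turnaround)
    return result

r = srtf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(sum(w for w, _ in r.values()) / 4)  # 3.0
```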

Priority Scheduling

The SJF algorithm is a special case of the general priority scheduling algorithm.

A priority number (integer) is associated with each process.

The CPU is allocated to the process with the highest priority (smallest integer = highest
priority)

• Priority scheduling can be either preemptive or non-preemptive

– A preemptive approach will preempt the CPU if the priority of the newly-
arrived process is higher than the priority of the currently running process.

– A non-preemptive approach will simply put the new process (with the
highest priority) at the head of the ready queue

Process Burst Time Priority

P1 10 3
P2 1 1

P3 2 3

P4 1 4

P5 5 2

Gantt Chart

| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19

• Average waiting time


= ( 6 + 0 + 16 + 18 + 1 )/5
= 41/5
= 8.2 milliseconds

• Average turn-around time

= (16 + 1 + 18 + 19 + 6)/5

= 60/5
= 12 milliseconds

• SJF is a priority scheduling algorithm where priority is the predicted next CPU
burst time

• The main problem with priority scheduling is starvation, that is, low priority
processes may never execute.

• A solution is aging; as time progresses, the priority of a process in the ready
queue is increased.
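The worked example above can be checked with a short sketch of non-preemptive priority scheduling (function name is ours), assuming all processes arrive at time 0 and a smaller number means higher priority:

```python
# Sketch of non-preemptive priority scheduling with simultaneous arrivals:
# run processes in increasing priority number (smaller number = higher priority).
def priority_schedule(procs):
    """procs: list of (name, burst, priority). Returns name -> (waiting, turnaround)."""
    t, result = 0, {}
    for name, burst, _ in sorted(procs, key=lambda p: p[2]):  # stable sort keeps FCFS on ties
        result[name] = (t, t + burst)    # waiting = start time, turnaround = completion
        t += burst
    return result

r = priority_schedule([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 3),
                       ("P4", 1, 4), ("P5", 5, 2)])
print(sum(w for w, _ in r.values()) / 5)  # 8.2
```

Note that a starvation-prone workload is easy to construct here: any process whose priority number is larger than a steady stream of arrivals would wait indefinitely, which is what aging is meant to prevent.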

Round-Robin Scheduling
In the round robin algorithm, each process gets a small unit of CPU time (a time
quantum), usually 10-100 milliseconds.

When this time has elapsed, the process is preempted and added to the end of the ready
queue.

If there are n processes in the ready queue and the time quantum is q, then each
process gets 1/n of the CPU time in chunks of at most q time units at once.

No process waits more than (n-1)q time units.

• Performance of the round robin algorithm

– q large ⇒ behaves like FCFS

– q small ⇒ q must be greater than the context-switch time; otherwise, the
overhead is too high

• One rule of thumb is that 80% of the CPU bursts should be shorter than the time
quantum

Example
RR with Time Quantum = 20

Process Burst Time

P1 53

P2 17

P3 68

P4 24

The Gantt chart is:

| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3 |
0    20   37   57   77   97   117  121  134  154  162
Calculation of Waiting Time

Each waiting interval = (time the process resumes) – (time it last stopped running).

P1 first runs at 0: first wait = 0 milliseconds; it stops at 20 and resumes at 77:
second wait = (77 – 20) = 57 milliseconds; it stops at 97 and resumes at 121:
third wait = (121 – 97) = 24 milliseconds.

Total waiting time for P1 = 0 + 57 + 24 = 81 milliseconds.

P2 first runs at 20 and finishes in one quantum: waiting time for P2 = (20 – 0) = 20 milliseconds.

P3 first runs at 37: first wait = 37 milliseconds; second wait = (97 – 57) = 40 milliseconds;
third wait = (134 – 117) = 17 milliseconds; fourth wait = (154 – 154) = 0 milliseconds.

Total waiting time for P3 = 37 + 40 + 17 + 0 = 94 milliseconds.

P4 first runs at 57: first wait = 57 milliseconds; second wait = (117 – 77) = 40 milliseconds.

Total waiting time for P4 = 57 + 40 = 97 milliseconds.

• Average waiting time

= ( [0 + (77 – 20) + (121 – 97)] + (20 – 0) +

[37 + (97 – 57) + (134 – 117) + (154 – 154)] +

[57 + (117 – 77)] ) / 4

= ( (0 + 57 + 24) + 20 + (37 + 40 + 17 + 0) + (57 + 40) ) / 4
= (81 + 20 + 94 + 97)/4
= 292 / 4

= 73 milliseconds

Calculation of Turn-Around Time

• Average turn-around time

= (134 + 37 + 162 + 121) / 4

= 113.5 milliseconds

• RR typically gives a higher average turnaround time than SJF, but better response time.
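The RR example can be reproduced with a queue-based simulation. This is a sketch (names are ours), assuming all processes arrive at time 0 in queue order and ignoring context-switch cost:

```python
from collections import deque

# Sketch of round-robin with a fixed time quantum; a process that does not
# finish within its quantum is preempted and requeued at the back.
def round_robin(procs, quantum):
    """procs: list of (name, burst). Returns name -> (waiting, turnaround)."""
    burst = dict(procs)
    remaining = dict(procs)
    queue = deque(name for name, _ in procs)
    t, result = 0, {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        if remaining[name] > 0:
            queue.append(name)                   # quantum expired: requeue
        else:
            result[name] = (t - burst[name], t)  # waiting = completion - burst
    return result

r = round_robin([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], 20)
print(sum(w for w, _ in r.values()) / 4)    # 73.0
print(sum(ta for _, ta in r.values()) / 4)  # 113.5
```

Varying the `quantum` argument is an easy way to see the behavior noted earlier: a very large quantum reduces RR to FCFS, while a very small one multiplies the number of preemptions.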

Multilevel Queue Scheduling

• Multi-level queue scheduling is used when processes can be classified into groups.

• For example,
– foreground (interactive) processes and

– background (batch) processes

– The two types of processes have different response-time requirements
and so may have different scheduling needs.

– Also, foreground processes may have priority (externally defined) over
background processes.

• A multi-level queue scheduling algorithm partitions the ready queue into several
separate queues.

• The processes are permanently assigned to one queue, generally based on some
property of the process such as memory size, process priority, or process type.

• Each queue has its own scheduling algorithm

– The foreground queue might be scheduled using an RR algorithm.

– The background queue might be scheduled using an FCFS algorithm.

• In addition, there needs to be scheduling among the queues, which is commonly
implemented as fixed-priority pre-emptive scheduling.

– The foreground queue may have absolute priority over the background
queue.

One example of a multi-level queue is a set of five queues (such as system processes, interactive processes, interactive editing processes, batch processes, and student processes).
• Each queue has absolute priority over lower priority queues.

• For example, no process in the batch queue can run unless the queues above it
are empty.

• However, this can result in starvation for the processes in the lower priority
queues.

• Another possibility is to time slice among the queues.

• Each queue gets a certain portion of the CPU time, which it can then schedule
among its various processes.

– The foreground queue can be given 80% of the CPU time for RR
scheduling

– The background queue can be given 20% of the CPU time for FCFS
scheduling

Multilevel Feedback-Queue Scheduling


• In multi-level feedback queue scheduling, a process can move between the
various queues; aging can be implemented in this way.

• A multilevel-feedback-queue scheduler is defined by the following parameters:

– Number of queues.

– Scheduling algorithms for each queue.

– Method used to determine when to promote a process.

– Method used to determine when to demote a process.

– Method used to determine which queue a process will enter when that
process needs service.

• Scheduling

– A new job enters queue Q0 (RR) and is placed at the end.

– When it gains the CPU, the job receives 8 milliseconds.

– If it does not finish in 8 milliseconds, the job is moved to the end of queue Q1.

– A Q1 (RR) job receives 16 milliseconds.

– If it still does not complete, it is pre-empted and moved to queue Q2 (FCFS).
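The three-queue scheme above can be sketched in code. This is a simplified illustration (names are ours): all jobs are present at time 0, and promotion/aging as well as preemption by new arrivals are omitted for brevity.

```python
from collections import deque

# Sketch of the three-queue MLFQ described above: Q0 is RR with q=8, Q1 is RR
# with q=16, Q2 is FCFS. A job that exhausts its quantum is demoted one level.
def mlfq(jobs):
    """jobs: list of (name, burst), all present at time 0.
    Returns [(name, completion time)] in completion order."""
    quanta = [8, 16, None]                 # None = run to completion (FCFS)
    queues = [deque(jobs), deque(), deque()]
    t, done = 0, []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty queue
        name, remaining = queues[level].popleft()
        run = remaining if quanta[level] is None else min(quanta[level], remaining)
        t += run
        if remaining - run > 0:
            queues[level + 1].append((name, remaining - run))  # demote
        else:
            done.append((name, t))
    return done

# A 5 ms job finishes in Q0; a 30 ms job uses 8 ms in Q0, 16 ms in Q1, 6 ms in Q2.
print(mlfq([("A", 5), ("B", 30)]))  # [('A', 5), ('B', 35)]
```

In this scheme short, interactive-style jobs finish quickly in Q0, while long CPU-bound jobs sink to the FCFS queue, which is exactly the behavior the feedback queues are designed to produce.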
Multiple-Processor Scheduling

• If multiple CPUs are available, load sharing among them becomes possible; the
scheduling problem becomes more complex.

• Here, the processors are identical (homogeneous) in terms of their functionality.

– We can use any available processor to run any process in the queue.

• Two approaches:

1. Asymmetric multiprocessing and

2. Symmetric multiprocessing.

Asymmetric multiprocessing (ASMP)

• One processor handles all scheduling decisions, I/O processing, and other system
activities.

• The other processors execute only user code.

• Because only one processor accesses the system data structures, the need for
data sharing is reduced.

Symmetric multiprocessing (SMP)

• Each processor schedules itself.

• All processes may be in a common ready queue, or each processor may have its own ready queue.

• Either way, each processor examines the ready queue and selects a
process to execute.

• Efficient use of the CPUs requires load balancing to keep the workload
evenly distributed.

• In a Push migration approach, a specific task regularly checks the processor loads
and redistributes the waiting processes as needed.
• In a Pull migration approach, an idle processor pulls a waiting
job from the queue of a busy processor.

• Virtually all modern operating systems support SMP, including Windows XP, Solaris, Linux,
and Mac OS X.

Techniques for Algorithm Evaluation

• Deterministic modelling – takes a particular predetermined workload and defines the
performance of each algorithm for that workload.
Process Burst Time
P1 10
P2 29
P3 3
P4 7
P5 12

– Using FCFS scheduling

– Using non-preemptive SJF scheduling

– Using round robin scheduling

(Time quantum = 10ms)
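Deterministic modelling can be carried out directly in code for the workload above. This is a sketch (helper names are ours), assuming all five processes arrive at time 0; note that with simultaneous arrivals, SJF is just FCFS over the burst-sorted list:

```python
from collections import deque

bursts = [("P1", 10), ("P2", 29), ("P3", 3), ("P4", 7), ("P5", 12)]

def avg_wait_fcfs(procs):
    """Average waiting time when processes run to completion in list order."""
    t, total = 0, 0
    for _, b in procs:
        total += t            # waiting = start time (all arrive at 0)
        t += b
    return total / len(procs)

def avg_wait_rr(procs, q):
    """Average waiting time under round-robin with quantum q."""
    burst = dict(procs)
    queue = deque(procs)
    t, total = 0, 0
    while queue:
        name, rem = queue.popleft()
        run = min(q, rem)
        t += run
        if rem > run:
            queue.append((name, rem - run))
        else:
            total += t - burst[name]   # waiting = completion - burst
    return total / len(procs)

print(avg_wait_fcfs(bursts))                              # 28.0 (FCFS)
print(avg_wait_fcfs(sorted(bursts, key=lambda p: p[1])))  # 13.0 (SJF)
print(avg_wait_rr(bursts, 10))                            # 23.0 (RR, q = 10)
```

For this particular workload the deterministic model shows SJF giving less than half the average waiting time of FCFS, with RR in between, which is the kind of direct, exact comparison deterministic modelling is good for.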
