Lecture 5

The document outlines key concepts in CPU scheduling for operating systems, including various scheduling algorithms such as First-Come, First-Served, Shortest-Job-First, and Round-Robin. It discusses the roles of the CPU scheduler and dispatcher, as well as the criteria for effective scheduling, such as CPU utilization and response time. Additionally, it addresses issues like race conditions, starvation, and the importance of aging in priority scheduling, along with the distinction between soft and hard real-time systems.

Uploaded by

Ahmed Khaled

Assiut University

Course Title: Operating Systems


Course Code: CS321

Prof. Khaled F. Hussain
Reference
• Silberschatz, Abraham, Peter B. Galvin, and Greg Gagne. Operating System Concepts. John Wiley & Sons.
CPU Scheduling
Alternating sequence of CPU and I/O bursts
Histogram of CPU-burst durations
CPU scheduler (short-term scheduler)
• The scheduler selects a process from the processes in memory that are ready to
execute and allocates the CPU to that process.
• Under nonpreemptive scheduling, once the CPU has been allocated to a process, the
process keeps the CPU until it releases the CPU either by terminating or by switching to
the waiting state.
• Unfortunately, preemptive scheduling can result in race conditions when data are
shared among several processes. Consider the case of two processes that share data.
While one process is updating the data, it is preempted so that the second process can
run. The second process then tries to read the data, which are in an inconsistent state.
Dispatcher
• Another component involved in the CPU-scheduling function is the dispatcher.
• The dispatcher is the module that gives control of the CPU to the process selected by
the short-term scheduler. This function involves the following:
• Switching context
• Switching to user mode
• Jumping to the proper location in the user program to restart that program
Scheduling Criteria
• CPU utilization: we want to keep the CPU as busy as possible.
• Throughput: the number of processes that are completed per time unit.
• Turnaround time: the interval from the time of submission of a process to the time of
completion.
• Waiting time: the sum of the periods spent waiting in the ready queue.
• Response time: the time from the submission of a request until the first response is
produced; that is, the time it takes to start responding.

• It is desirable to maximize CPU utilization and throughput and to minimize turnaround
time, waiting time, and response time.
Scheduling Algorithms
• First-Come, First-Served Scheduling
• Shortest-Job-First Scheduling
• Priority Scheduling
• Round-Robin Scheduling
• Multilevel Queue Scheduling
First-Come, First-Served Scheduling (FCFS)
• The implementation of the FCFS policy is easily managed with a FIFO queue. When the
CPU is free, it is allocated to the process at the head of the queue. The running process
is then removed from the queue.

Process Burst Time


P1 24
P2 3
P3 3
• If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the
result shown in the following Gantt chart
• The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and
27 milliseconds for process P3. Thus, the average waiting time is (0 + 24 + 27)/3 = 17
milliseconds.
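The Gantt chart itself is a slide image, but the waiting times can be reproduced with a short sketch (the helper name below is illustrative, not from the slides):

```python
def fcfs_waiting_times(bursts):
    """FCFS waiting time of each process: the sum of the bursts of
    all processes that run before it (all arrivals at time 0)."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # this process waits for everything already run
        elapsed += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])   # arrival order P1, P2, P3
print(waits, sum(waits) / len(waits))    # [0, 24, 27] 17.0
```

Calling it with the order P2, P3, P1 instead (`fcfs_waiting_times([3, 3, 24])`) gives waits of 0, 3, and 6 ms, reproducing the 3 ms average on the next slide.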
First-Come, First-Served Scheduling (Cont.)
• If the processes arrive in the order P2, P3, P1, however, the results will be as shown in the
following Gantt chart:

• The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds. This reduction is substantial.

• Convoy effect: all the other processes wait for the one big process to get off the CPU.
• Note that the FCFS scheduling algorithm is nonpreemptive.
Shortest-Job-First Scheduling (SJF)
• This algorithm associates with each process the length of the process’s next CPU burst. When
the CPU is available, it is assigned to the process that has the smallest next CPU burst.
• If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie.
Process Burst Time
P1 6
P2 8
P3 7
P4 3
• Using SJF scheduling, we would schedule these processes according to the following Gantt
chart:
Shortest-Job-First Scheduling (SJF) (Cont.)
• The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2,
9 milliseconds for process P3, and 0 milliseconds for process P4. Thus, the average waiting
time is (3 + 16 + 9 + 0)/4 = 7 milliseconds.
• By comparison, if we were using the FCFS scheduling scheme, the average waiting time
would be 10.25 milliseconds.
• The SJF scheduling algorithm is provably optimal, in that it gives the minimum
average waiting time for a given set of processes.
• The real difficulty with the SJF algorithm is knowing the length of the next CPU
request.
• Although the SJF algorithm is optimal, it cannot be implemented at the level of short-term
CPU scheduling. With short-term scheduling, there is no way to know the length of the
next CPU burst. One approach to this problem is to try to approximate SJF scheduling. We
may not know the length of the next CPU burst, but we may be able to predict its value.
• By computing an approximation of the length of the next CPU burst, we can pick the
process with the shortest predicted CPU burst.
Shortest-Job-First Scheduling (SJF) (Cont.)
• The next CPU burst is generally predicted as an exponential average of the
measured lengths of previous CPU bursts.
• Let tn be the length of the nth CPU burst, and let τn+1 be our predicted value for the
next CPU burst. Then, for α, 0 ≤ α ≤ 1, define
τn+1 = α tn + (1 − α) τn.
• The value of tn contains the most recent information, while τn stores the past history.
• The parameter α controls the relative weight of recent and past history in our
prediction. If α = 0, then τn+1 = τn, and recent history has no effect (current conditions
are assumed to be transient). If α = 1, then τn+1 = tn, and only the most recent CPU
burst matters (history is assumed to be old and irrelevant).
• More commonly, α = 1/2, so recent history and past history are equally weighted.
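The recurrence above can be sketched directly. The initial guess τ0 is an assumed parameter here; the textbook leaves it as a constant or an overall system average:

```python
def predict_next_burst(history, alpha=0.5, tau0=10.0):
    """Exponential average of past CPU-burst lengths:
    tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n.
    tau0 is an assumed starting estimate for a new process."""
    tau = tau0
    for t in history:                      # fold in each measured burst t_n
        tau = alpha * t + (1 - alpha) * tau
    return tau

# With alpha = 1/2 and tau0 = 10, measured bursts of 6 then 4 ms give
# predictions 8.0 then 6.0.
print(predict_next_burst([6, 4]))  # 6.0
```

Note that with α = 1/2, each older burst's weight halves at every step, so stale history fades quickly.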
• The SJF algorithm can be either preemptive or nonpreemptive. Preemptive SJF
scheduling is sometimes called shortest-remaining-time-first scheduling.
Example: preemptive SJF schedule

• Process P1 is started at time 0, since it is the only process in the queue.


• Process P2 arrives at time 1. The remaining time for process P1 (7 milliseconds) is larger than the time
required by process P2 (4 milliseconds), so process P1 is preempted, and process P2 is scheduled.
• The average waiting time for this example is [(10 − 1) + (1 − 1) + (17 − 2) + (5 − 3)]/4 = 26/4 = 6.5
milliseconds.
• Nonpreemptive SJF scheduling would result in an average waiting time of 7.75 milliseconds.
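The process table and Gantt chart for this example appear only as slide images. The waiting times quoted above are consistent with the textbook's table, assumed here: P1 arrives at 0 with burst 8, P2 at 1 with burst 4, P3 at 2 with burst 9, P4 at 3 with burst 5. A small simulation sketch reproduces them:

```python
import heapq

def srtf_waiting_times(procs):
    """Shortest-remaining-time-first (preemptive SJF) simulation.
    procs: list of (arrival, burst). Returns each process's waiting time
    (finish - arrival - burst)."""
    events = sorted((a, i, b) for i, (a, b) in enumerate(procs))
    ready = []                      # min-heap of (remaining time, index)
    finish = [0] * len(procs)
    time, k, done = 0, 0, 0
    while done < len(procs):
        while k < len(events) and events[k][0] <= time:
            a, i, b = events[k]     # admit every process that has arrived
            heapq.heappush(ready, (b, i))
            k += 1
        if not ready:               # CPU idle until the next arrival
            time = events[k][0]
            continue
        rem, i = heapq.heappop(ready)
        next_arrival = events[k][0] if k < len(events) else float("inf")
        run = min(rem, next_arrival - time)   # run until done or preempted
        time += run
        rem -= run
        if rem == 0:
            finish[i] = time
            done += 1
        else:
            heapq.heappush(ready, (rem, i))
    return [finish[i] - a - b for i, (a, b) in enumerate(procs)]

print(srtf_waiting_times([(0, 8), (1, 4), (2, 9), (3, 5)]))  # [9, 0, 15, 2] -> average 6.5
```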
Priority Scheduling
• The SJF algorithm is a special case of the general priority-scheduling algorithm. An SJF
algorithm is simply a priority algorithm where the priority (p) is the inverse of the (predicted) next
CPU burst.
• A priority is associated with each process, and the CPU is allocated to the process with the highest
priority.
• Equal-priority processes are scheduled in FCFS order.
• Priority scheduling can be either preemptive or nonpreemptive.
• In this text, we assume that low numbers represent high priority.
• The average waiting time is 8.2 milliseconds.
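The process table behind the 8.2 ms figure is a slide image. Assuming the standard textbook example (P1: burst 10, priority 3; P2: 1, 1; P3: 2, 4; P4: 1, 5; P5: 5, 2), a short sketch reproduces it:

```python
def priority_waiting_times(procs):
    """Nonpreemptive priority scheduling, all processes arriving at time 0.
    procs: list of (burst, priority); lower number = higher priority.
    Python's stable sort breaks priority ties in FCFS order."""
    order = sorted(range(len(procs)), key=lambda i: procs[i][1])
    waits = [0] * len(procs)
    time = 0
    for i in order:                 # run in priority order, accumulating time
        waits[i] = time
        time += procs[i][0]
    return waits

waits = priority_waiting_times([(10, 3), (1, 1), (2, 4), (1, 5), (5, 2)])
print(waits, sum(waits) / len(waits))   # [6, 0, 16, 18, 1] 8.2
```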
Priority Scheduling (Cont.)
• A major problem with priority scheduling algorithms is indefinite blocking, or
starvation. A process that is ready to run but waiting for the CPU can be considered
blocked. In a heavily loaded computer system, a priority scheduling algorithm can leave
some low-priority processes waiting indefinitely.
• (Rumor has it that when they shut down the IBM 7094 at MIT in 1973, they found a low-
priority process that had been submitted in 1967 and had not yet been run.)
• A solution to the problem of indefinite blockage of low-priority processes is aging.
Aging involves gradually increasing the priority of processes that wait in the system for
a long time.
Round-Robin Scheduling
• The round-robin (RR) scheduling algorithm is designed especially for timesharing systems.
It is similar to FCFS scheduling, but preemption is added to enable the system to switch between
processes.
• A small unit of time, called a time quantum or time slice, is defined. A time quantum is
generally from 10 to 100 milliseconds in length.
• The ready queue is treated as a circular queue. The CPU scheduler goes around the ready
queue, allocating the CPU to each process for a time interval of up to 1 time quantum.
• The average waiting time under the RR policy is often long.
• The RR scheduling algorithm is thus preemptive.
• If there are n processes in the ready queue and the time quantum is q, then each process gets
1/n of the CPU time in chunks of at most q time units.
• Each process must wait no longer than (n − 1) × q time units until its next time quantum.
• If the time quantum is extremely large, the RR policy is the same as the FCFS policy.
• If the time quantum is extremely small (say, 1 millisecond), the RR approach can result in a
large number of context switches.
Round-Robin Scheduling (Cont.)
Process Burst Time
P1 24
P2 3
P3 3

• Let’s calculate the average waiting time for this schedule. P1 waits for 6 milliseconds
(10 - 4), P2 waits for 4 milliseconds, and P3 waits for 7 milliseconds. Thus, the average
waiting time is 17/3 = 5.66 milliseconds.
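The Gantt chart for this schedule is a slide image. With bursts of 24, 3, and 3 ms, a 4 ms quantum, and all processes arriving at time 0 (an assumption consistent with the example), a minimal simulation reproduces the waiting times:

```python
from collections import deque

def rr_waiting_times(bursts, quantum):
    """Round-robin simulation with all processes arriving at time 0.
    Waiting time = completion time - burst time."""
    queue = deque((i, b) for i, b in enumerate(bursts))
    finish = [0] * len(bursts)
    time = 0
    while queue:
        i, rem = queue.popleft()
        run = min(rem, quantum)     # run for at most one quantum
        time += run
        rem -= run
        if rem:
            queue.append((i, rem))  # preempted: back to the tail of the queue
        else:
            finish[i] = time
    return [finish[i] - b for i, b in enumerate(bursts)]

print(rr_waiting_times([24, 3, 3], quantum=4))  # [6, 4, 7] -> average 17/3
```

Raising the quantum above 24 makes this schedule identical to FCFS, matching the earlier slide's point about extremely large quanta.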
Multilevel Queue Scheduling
• A multilevel queue scheduling algorithm partitions the ready queue into several
separate queues.
• Each queue has its own scheduling algorithm.
• For example, a division is made between foreground (interactive) processes and
background (batch) processes.
• These two types of processes have different response-time requirements and so may
have different scheduling needs. In addition, foreground processes may have priority
(externally defined) over background processes.
Multilevel Feedback Queue Scheduling
• The multilevel feedback queue scheduling algorithm allows a process to move
between queues.
• The idea is to separate processes according to the characteristics of their CPU bursts. If
a process uses too much CPU time, it will be moved to a lower-priority queue. This
scheme leaves I/O-bound and interactive processes in the higher-priority queues. In
addition, a process that waits too long in a lower-priority queue may be moved to a
higher-priority queue. This form of aging prevents starvation.
Multilevel Feedback Queue Scheduling (Cont.)
• In general, a multilevel feedback queue scheduler is defined by the following
parameters:
• The number of queues
• The scheduling algorithm for each queue
• The method used to determine when to upgrade a process to a higher priority
queue
• The method used to determine when to demote a process to a lower priority queue
• The method used to determine which queue a process will enter when that process
needs service
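As a concrete instance of these parameters, here is a minimal sketch of one common configuration (the three queues, the 8 and 16 ms quanta, and the names below are illustrative assumptions, not fixed by the slides): two round-robin levels plus a FCFS bottom level, with demotion on quantum expiry and all processes starting in the top queue.

```python
from collections import deque

def mlfq_schedule(bursts, quanta=(8, 16)):
    """Sketch of a three-level feedback queue: two round-robin levels with
    the given quanta, plus a FCFS bottom level that runs to completion.
    All processes arrive at time 0 and enter the top queue; a process that
    exhausts its quantum is demoted one level. Returns the run trace as a
    list of (pid, slice_length) pairs."""
    levels = [deque(range(len(bursts))), deque(), deque()]
    remaining = list(bursts)
    trace = []
    while any(levels):
        lvl = next(i for i, q in enumerate(levels) if q)  # highest nonempty queue
        pid = levels[lvl].popleft()
        run = min(remaining[pid], quanta[lvl]) if lvl < 2 else remaining[pid]
        remaining[pid] -= run
        trace.append((pid, run))
        if remaining[pid] > 0:
            levels[lvl + 1].append(pid)   # demote: used its whole quantum
    return trace

# A 20 ms CPU-bound job is demoted after 8 ms; the short 5 ms job
# finishes entirely in the top queue.
print(mlfq_schedule([20, 5]))  # [(0, 8), (1, 5), (0, 12)]
```

This sketch omits the aging (promotion) path described above; a fuller version would also move long-waiting processes back up a level.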
Real-Time CPU Scheduling
• We can distinguish between soft real-time systems and hard real-time systems:
• Soft real-time systems provide no guarantee as to when a critical real-time
process will be scheduled. They guarantee only that the process will be given
preference over noncritical processes.
• Hard real-time systems have stricter requirements. A task must be serviced by
its deadline; service after the deadline has expired is the same as no service at all.
