
5. CPU SCHEDULING

5.1 Basic Concepts:


The objective of multiprogramming is to have some process running at all times, to
maximize CPU utilization.

CPU–I/O Burst Cycle: process execution consists of a cycle of CPU execution
and I/O wait. Processes alternate between these two states.
Process execution begins with a CPU burst. That is followed by an I/O burst,
which is followed by another CPU burst, then another I/O burst, and so on.

A CPU-bound process is one that spends most of its time executing
instructions on the processor.
A process that is I/O-bound spends most of its time waiting for input and
output operations to complete.

CPU Scheduler: -

Whenever the CPU becomes idle, the OS must select one of the processes in the ready
queue to be executed. The short-term scheduler carries out this selection: it selects
from among the processes in memory that are ready to execute and allocates the CPU
to one of them.

The ready queue is not necessarily a FIFO queue. Depending on the scheduling
algorithm, it may be implemented as a FIFO queue, a priority queue, a tree,
or simply an unordered linked list.
Preemptive scheduling: - CPU scheduling decisions may take place under the following
four conditions.

1) When a process switches from the running state to the waiting state.

2) When a process switches from the running state to the ready state.

3) When a process switches from the waiting state to the ready state.

4) When a process terminates.

When scheduling takes place only under conditions 1 and 4, it is known as
non-preemptive scheduling. When scheduling can also take place under
conditions 2 and 3, it is known as preemptive scheduling.

In non-preemptive scheduling, once the CPU is allocated to a process, the
process will not release the CPU until its task is over or it needs to wait for I/O.

Another component involved in the CPU scheduling function is the DISPATCHER.
It is the module that gives control of the CPU to the process selected by the
short-term scheduler.
This function involves the following:
• Switching context from one process to another
• Switching to user mode
• Jumping to the proper location in the user program to resume that program

The time it takes for the dispatcher to stop one process and start another
running is known as the dispatch latency.

5.2 Scheduling Criteria: -

In choosing which algorithm to use in a particular situation, we must consider the
properties of the various algorithms. The criteria used for comparison include the
following.
1) CPU utilization: - We want to keep the CPU as busy as possible. CPU utilization may
range from 0 to 100 percent. In real systems it should range from 40% to 90%.
2) Throughput: - The number of processes that are completed per unit time is called
throughput.
3) Turnaround time: - The interval from the time of submission of a process to the time
of its completion is known as turnaround time.
4) Waiting time: - Waiting time is the sum of the periods spent waiting in the ready
queue. The time spent waiting for I/O is not counted as waiting time.
5) Response time: - The time from the submission of a request until the first response
is produced is called response time.
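The two derived criteria above can be computed from a process's timestamps. A minimal sketch (the function names are illustrative, not from the source):

```python
# Sketch of turnaround and waiting time for one process:
#   turnaround time = completion time - submission (arrival) time
#   waiting time    = turnaround time - total CPU burst time
def turnaround_time(arrival, completion):
    return completion - arrival

def waiting_time(arrival, completion, burst):
    # Time spent waiting in the ready queue; I/O wait is not counted.
    return turnaround_time(arrival, completion) - burst

# A process submitted at t=0 that completes at t=30 after 24 ms of CPU:
print(turnaround_time(0, 30))   # 30
print(waiting_time(0, 30, 24))  # 6
```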
5.3 Scheduling algorithms: -
CPU scheduling deals with the problem of deciding to which of the processes in the
ready queue the CPU is to be allocated. There are many CPU scheduling algorithms.
1) FCFS (first come first serve)
2) SJF (shortest job first)
3) Priority Scheduling algorithm
4) Round Robin Scheduling Algorithm.
FCFS: - The simplest algorithm is FCFS. Here the CPU is allocated in the order the
processes request it, i.e. the process that requests the CPU first is allocated the
CPU first. The implementation of the FCFS policy is easily managed with a FIFO queue.
When a process enters the ready queue, its PCB is linked onto the tail of the
queue. When the CPU is free, it is allocated to the process at the head of the queue.
The running process is then removed from the queue. The code for FCFS is also very
simple and easy to understand, but the average waiting time and average turnaround
time are high.
EX: - process burst time

P1 24
P2 3
P3 3
Gantt Chart: -
|   P1   |  P2  |  P3  |
0        24     27     30

The waiting time for P1 is 0
The waiting time for P2 is 24
The waiting time for P3 is 27
The average waiting time = (0+24+27)/3 = 17 milliseconds.
The turnaround time for P1 = 24
The turnaround time for P2 = 27
The turnaround time for P3 = 30
The average turnaround time = (24+27+30)/3 = 27 milliseconds.
NOTE: - Once the CPU has been allocated to a process, that process keeps the CPU
until it releases it.
Convoy effect: - When one big process is executing on the CPU, all other processes
must wait. This is known as the convoy effect.
The FCFS algorithm is non-preemptive.
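The example above can be reproduced with a short FCFS sketch (process names and burst times taken from the text; all processes assumed to arrive at t=0, as stated):

```python
# A minimal FCFS simulation: processes are served strictly in order.
def fcfs(bursts):
    """bursts: list of (name, burst); returns (waiting, turnaround) dicts."""
    waiting, turnaround, clock = {}, {}, 0
    for name, burst in bursts:       # the ready queue is a plain FIFO
        waiting[name] = clock        # time spent waiting before dispatch
        clock += burst
        turnaround[name] = clock     # completion time (arrival = 0)
    return waiting, turnaround

w, t = fcfs([("P1", 24), ("P2", 3), ("P3", 3)])
print(w)                    # {'P1': 0, 'P2': 24, 'P3': 27}
print(sum(w.values()) / 3)  # 17.0
print(sum(t.values()) / 3)  # 27.0
```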

Shortest Job First scheduling: -

In this scheduling algorithm the shortest job is executed first, then the next
shortest, and so on. SJF is the optimal (best) of these algorithms.
If two processes have the same burst length, FCFS is used to break the tie.
Ex: - process CPU burst time

P1 24
P2 3
P3 3
Gantt chart: -
|  P2  |  P3  |   P1   |
0      3      6        30

Waiting time for P1 = 6
Waiting time for P2 = 0
Waiting time for P3 = 3
Average waiting time = (6+0+3)/3 = 9/3 = 3 milliseconds.
Turnaround time of P1 = 30
Turnaround time of P2 = 3
Turnaround time of P3 = 6
Average turnaround time = (30+3+6)/3 = 13 milliseconds.
Advantages: -
1) The average waiting time is minimum.
2) This algorithm is optimal.
Disadvantages: - It is very difficult to predict the length of the next CPU burst.
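One common workaround, sketched below (the alpha value and initial guess are illustrative, not from the source), is to predict the next burst as an exponential average of the measured lengths of previous bursts: tau_next = alpha * t_last + (1 - alpha) * tau_current.

```python
# Exponential averaging of past CPU burst lengths to estimate the next one.
def predict_next_burst(history, alpha=0.5, tau0=10.0):
    tau = tau0                          # initial guess for the first burst
    for t in history:                   # fold in each measured burst length
        tau = alpha * t + (1 - alpha) * tau
    return tau

# With alpha = 0.5 the estimate is halfway between the last measured
# burst and the previous prediction at every step:
print(predict_next_burst([6, 4, 6, 4]))  # 5.0
```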

SJF may be either preemptive or non-preemptive. This algorithm is just a special
case of the priority algorithm.
The preemptive SJF scheduling algorithm is sometimes called shortest-remaining-
time-first (SRTF).

SRTF (Shortest remaining time first): -

In this scheduling algorithm all the processes may not arrive at the ready queue at
the same time. Scheduling the processes as they arrive, applying the idea of SJF to
the remaining burst time, is known as shortest remaining time first.
This is a preemptive algorithm.
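As an illustration (the arrival and burst times below are made up, not from the source), SRTF can be sketched as a step-by-step simulation: at every time unit the process with the least remaining time runs, so a new arrival with a shorter remainder preempts the running process.

```python
import heapq

def srtf(procs):
    """procs: list of (name, arrival, burst); returns completion times."""
    procs = sorted(procs, key=lambda p: p[1])       # order by arrival time
    ready, done, clock, i = [], {}, 0, 0
    while len(done) < len(procs):
        # Admit every process that has arrived by the current time.
        while i < len(procs) and procs[i][1] <= clock:
            name, _arr, burst = procs[i]
            heapq.heappush(ready, (burst, name))    # keyed by remaining time
            i += 1
        if not ready:
            clock = procs[i][1]                     # CPU idle: jump ahead
            continue
        remaining, name = heapq.heappop(ready)
        clock += 1                                  # run shortest job 1 unit
        if remaining == 1:
            done[name] = clock                      # finished
        else:
            heapq.heappush(ready, (remaining - 1, name))
    return done

# P3 (shortest) finishes first even though it arrives last:
print(srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2)]))
```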

Priority scheduling algorithm: -

A priority is associated with each process, and the CPU is allocated to the process
with the highest priority. Equal-priority processes are scheduled in FCFS order.
As an example, consider the following set of processes, assumed to have arrived at
time 0 in the order P1, P2, P3, P4, P5, with the length of the CPU burst time given
in milliseconds. (We assume that low numbers represent high priority.)
Processes Burst time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Gantt chart: -
| P2 |  P5  |   P1   | P3 | P4 |
0    1      6        16   18   19

Priorities can be defined either internally or externally. Priority scheduling can
be either preemptive or non-preemptive. When a process arrives at the ready queue,
its priority is compared with the priority of the currently running process.
✓ A preemptive priority-scheduling algorithm will preempt the CPU if the priority
of the newly arrived process is higher than the priority of the currently running
process.
A non-preemptive priority-scheduling algorithm will simply put the new process at
the head of the ready queue.
✓ A major problem with priority scheduling is indefinite blocking, or
starvation. A process that is ready to run but lacks the CPU can be considered
blocked, waiting for the CPU. Since the higher-priority processes are always
executed first, lower-priority processes may wait a very long time for the CPU.
✓ A solution to the problem of indefinite blocking of lower-priority processes is
aging. Aging is a technique of gradually increasing the priority of processes that
wait in the system for a long time.
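A small aging sketch (illustrative, not from the source): on every scheduling pass, each waiting process's priority number is decreased, i.e. its priority is raised, so a starved process eventually reaches the top. Low numbers mean high priority, as in the example above.

```python
# Pick the highest-priority process and age everyone left waiting.
def pick_and_age(ready, aging_step=1):
    """ready: dict name -> priority number; returns the chosen name."""
    chosen = min(ready, key=ready.get)       # lowest number = highest priority
    for name in ready:
        if name != chosen and ready[name] > 0:
            ready[name] -= aging_step        # waiting processes creep upward
    return chosen

queue = {"P1": 3, "P2": 1, "P3": 4}
print(pick_and_age(queue))   # P2 runs first; the others are aged
print(queue)                 # {'P1': 2, 'P2': 1, 'P3': 3}
```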

Round-Robin Scheduling: -

It is designed especially for time-sharing systems. It is similar to FCFS
scheduling, but preemption is added to switch between processes. A small unit of
time, called the time quantum or time slice, is defined. The ready queue is treated
as a circular queue. The CPU scheduler goes around the ready queue, allocating the
CPU to each process for an interval of up to 1 time quantum.
✓ To implement RR scheduling, we keep the ready queue as a FIFO queue of
processes.
✓ New processes are added to the tail of the ready queue.
✓ The CPU scheduler picks the first process from the ready queue, sets a timer to
interrupt after 1 time quantum, and dispatches the process.
Consider the following set of processes that arrive at time 0, with the length of
the CPU burst time given in milliseconds.
Process burst time
P1 24
P2 3
P3 3
Here the time quantum is 4 milliseconds.
Gantt chart: -
|  P1  | P2 | P3 |  P1  |  P1  |  P1  |  P1  |  P1  |
0      4    7    10     14     18     22     26     30

The waiting time for P1 = (10-4) = 6 milliseconds
The waiting time for P2 = 4 milliseconds
The waiting time for P3 = 7 milliseconds
The average waiting time = 17/3 = 5.66 milliseconds
If a process's CPU burst exceeds the time quantum, that process is preempted and is
put back in the ready queue.
✓ With a time quantum of 4 milliseconds, process P1 gets the first 4
milliseconds. Since it requires another 20 milliseconds,
✓ it is preempted after the first time quantum, and the CPU is given to the next
process in the queue, process P2. Since process P2 does not need 4 ms,
✓ it quits before its time quantum expires, and the CPU is then given to the next
process, process P3.
✓ Once each process has received 1 time quantum, the CPU returns to process P1 for
an additional time quantum.
✓ The performance of the RR algorithm depends heavily on the size of the time
quantum. At one extreme, if the time quantum is very large, the RR policy is the
same as the FCFS policy. If the quantum is very small, the RR approach is called
processor sharing.
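The round-robin example above can be reproduced with a short simulation (quantum of 4 ms and burst times from the text; all processes arrive at t=0):

```python
from collections import deque

# Round-robin: each process runs for at most one quantum, then rejoins
# the tail of the ready queue if it still has work left.
def round_robin(bursts, quantum=4):
    """bursts: list of (name, burst); returns per-process waiting times."""
    queue = deque(bursts)                        # circular FIFO ready queue
    remaining = dict(bursts)
    last_ran = {name: 0 for name, _ in bursts}   # end of last CPU slice
    waiting = {name: 0 for name, _ in bursts}
    clock = 0
    while queue:
        name, _ = queue.popleft()
        waiting[name] += clock - last_ran[name]  # time spent queued
        run = min(quantum, remaining[name])
        clock += run
        remaining[name] -= run
        last_ran[name] = clock
        if remaining[name]:                      # quantum expired: preempt
            queue.append((name, remaining[name]))
    return waiting

w = round_robin([("P1", 24), ("P2", 3), ("P3", 3)])
print(w)                    # {'P1': 6, 'P2': 4, 'P3': 7}
print(sum(w.values()) / 3)  # ~5.67 ms, the 17/3 average from the text
```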

Multilevel queue scheduling: -

Another class of scheduling algorithms has been created for classes of processes
with different needs. A common division is made between foreground processes and
background processes. These two types have different response-time requirements and
so have different scheduling needs.

✓ A multilevel queue scheduling algorithm partitions the ready queue into several
separate queues.
✓ The processes are permanently assigned to one queue based on some property of
the process; each queue has its own scheduling algorithm.
✓ For example, separate queues might be used for foreground and background
processes.
✓ The foreground queue might be scheduled by an RR algorithm, while the background
queue is scheduled by an FCFS algorithm.
Multilevel Feedback Queue Scheduling: -
✓ In a multilevel queue-scheduling algorithm, processes are permanently assigned
to a queue on entry to the system. Processes do not move between the queues.
This setup has the advantage of low scheduling overhead, but it is inflexible.
✓ Multilevel feedback queue scheduling, however, allows a process to move
between queues. If a process uses too much CPU time, it is moved to a
lower-priority queue. Similarly, a process that waits too long in a lower-priority
queue may be moved to a higher-priority queue. This form of aging prevents
starvation.

Ex: - Consider a multilevel feedback queue scheduler with three queues, numbered
from 0 to 2. The scheduler first executes all processes in queue 0. Only when
queue 0 is empty will it execute processes in queue 1. Similarly, a process in
queue 2 will be executed only if queues 0 and 1 are empty. A process that arrives
for queue 1 will preempt a process in queue 2. A process in queue 1 will in turn be
preempted by a process arriving for queue 0. A process entering the ready queue is
put in queue 0. A process in queue 0 is given a time quantum of 8 ms. If it does
not finish within this time, it is moved to the tail of queue 1. If queue 0 is
empty, the process at the head of queue 1 is given a quantum of 16 ms. If it does
not complete, it is preempted and put into queue 2.
Processes in queue 2 are run on an FCFS basis, only when queues 0 and 1 are empty.
A multilevel feedback queue scheduler is defined by the following parameters.
1) The number of queues.
2) The scheduling algorithm for each queue.
3) The method used to determine when to upgrade a process to a higher-priority
queue.
4) The method used to determine when to demote a process to a lower-priority
queue.
5) The method used to determine which queue a process will enter when that
process needs service.
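The three-queue example above can be sketched as follows (a simplified model: preemption by new arrivals and promotion/aging are omitted, and all processes are assumed present at t=0):

```python
from collections import deque

# Three-level feedback queues: quantum 8 in queue 0, quantum 16 in
# queue 1, FCFS (run to completion) in queue 2. A process that uses
# its whole quantum without finishing is demoted one level.
def mlfq(bursts):
    """bursts: list of (name, burst); returns completion times."""
    queues = [deque(bursts), deque(), deque()]
    quanta = [8, 16, None]                  # None = FCFS, no quantum
    finish, clock = {}, 0
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)
        name, remaining = queues[level].popleft()
        q = quanta[level]
        run = remaining if q is None else min(q, remaining)
        clock += run
        if run == remaining:
            finish[name] = clock            # done within its quantum
        else:                               # quantum exhausted: demote
            queues[level + 1].append((name, remaining - run))
    return finish

# P1 (30 ms) drops through all three levels; P2 (6 ms) finishes in queue 0:
print(mlfq([("P1", 30), ("P2", 6)]))  # {'P2': 14, 'P1': 36}
```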

5.4 Thread Scheduling: -

✓ On most modern operating systems, kernel-level threads are scheduled by
the operating system.
✓ User-level threads are managed by a thread library, and the kernel is unaware of
them.
✓ To run on a CPU, user-level threads must ultimately be mapped to an associated
kernel-level thread, via a lightweight process (LWP).
Contention Scope
✓ On systems implementing the many-to-one and many-to-many models, the thread
library schedules user-level threads to run on an available LWP. This scheme is
known as process contention scope (PCS), since competition for the CPU takes
place among threads belonging to the same process.
✓ When we say the thread library schedules user threads onto available LWPs, we
do not mean that the threads are actually running on a CPU, as that further
requires the operating system to schedule the LWP's kernel thread onto a physical
CPU core.
✓ To decide which kernel-level thread to schedule onto a CPU, the kernel uses
system-contention scope (SCS). Competition for the CPU with SCS scheduling
takes place among all threads in the system.
✓ Systems using the one-to-one model schedule threads using only SCS. Typically,
PCS is done according to priority: the scheduler selects the runnable thread with
the highest priority to run.
✓ User-level thread priorities are set by the programmer and are not adjusted by the
thread library, although some thread libraries may allow the programmer to
change the priority of a thread.
5.5 Multiple-processor Scheduling: -
If multiple CPUs are available, the scheduling problem becomes correspondingly more
complex.
✓ The term multiprocessor traditionally referred to systems that provided multiple
physical processors.
Approaches to Multiple-Processor Scheduling
Asymmetric multiprocessing: One approach to CPU scheduling in a multiprocessor
system has all scheduling decisions, I/O processing, and other system activities
handled by a single processor, the master server.
✓ The other processors execute only user code.
✓ Advantage: This is simple because only one CPU accesses the system data
structures, reducing the need for data sharing.
✓ Disadvantage: System performance may be reduced, since the master server can
become a bottleneck.
Symmetric multiprocessing (SMP): The standard approach for supporting
multiprocessors, in which each processor is self-scheduling. Scheduling proceeds by
having the scheduler for each processor examine the ready queue and select a process
to run. Two possible scheduling strategies:
1. All threads may be in a common ready queue.
2. Each processor may have its own private queue.
5.6 Real-time scheduling: -
Real-time computing is divided into two types.
✓ Hard real-time systems are generally required to complete a critical task within a
guaranteed amount of time.
✓ Soft real-time computing is less restrictive.
✓ Implementing soft real-time functionality requires careful design of the scheduler
and related aspects of the OS. First, the system must have priority scheduling, and
real-time processes must have the highest priority.
