OS UNIT-2 (Chapter 4, CPU Scheduling)
Scheduling Criteria:
CPU scheduling is the process of determining which process or task is to be executed by the
central processing unit (CPU) at any given time. It is an important component of modern
operating systems that allows multiple processes to share a single processor.
CPU Scheduling Criteria
Some common CPU scheduling criteria are given below:
1. CPU Utilization
CPU utilization measures the percentage of time the CPU is busy processing tasks. The goal
is to keep the CPU as busy as possible. CPU utilization may range from 0% to 100%.
2. Throughput: Throughput is the amount of work completed in a unit of time, i.e., the number
of processes completed per unit of time.
3. Turn Around Time: The interval from the time of submission of a process to the time of its
completion is called Turn Around Time.
Turn Around Time = Waiting Time + Execution Time.
4. Waiting Time: The sum of the periods a process spends waiting in the ready queue is called waiting time.
Waiting Time = Turn Around Time – Burst Time.
5. Response Time : The time from submission of a request until the first response is produced
is called response time.
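The criteria above can be computed directly from the formulas in the notes. The following is an illustrative sketch (not part of the original notes) for a single non-preemptive process with known arrival, start, and burst times:

```python
# Compute scheduling criteria for one non-preemptive process.
# Assumes the process runs to completion once started.

def metrics(arrival, start, burst):
    completion = start + burst          # Completion Time
    turnaround = completion - arrival   # Turn Around Time = Completion - Arrival
    waiting = turnaround - burst        # Waiting Time = Turn Around Time - Burst Time
    response = start - arrival          # Response Time (first response = first run here)
    return turnaround, waiting, response

tat, wt, rt = metrics(arrival=0, start=3, burst=5)
print(tat, wt, rt)  # 8 3 3
```

Note that Turn Around Time = Waiting Time + Execution (Burst) Time holds: 8 = 3 + 5.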
[
Arrival Time : Time at which the process arrives in the ready queue.
Completion Time : Time at which the process completes its execution.
Burst Time : Time required by a process for CPU execution.
Dispatcher is the module that gives control of the CPU to the process selected by
the short-term scheduler. This function involves:
Switching context
Switching to user mode
Jumping to the proper location in the user program to restart that program from
where it left off last time.
The dispatcher should be as fast as possible, given that it is invoked during every process
switch. The time taken by the dispatcher to stop one process and start another process is known
as the Dispatch Latency.
]
NOTE: To increase system performance, CPU utilization and throughput must be
increased, while waiting time, response time, and turnaround time must be decreased.
Scheduling Algorithm
CPU scheduling deals with the problem of deciding which of the processes in the ready queue
is to be allocated the CPU.
1. First Come First Serve Scheduling [FCFS].
2. Shortest Job First Scheduling [SJF].
3. Priority Scheduling.
4. Round Robin Scheduling.
5. Multi-Level Queue Scheduling.
6. Multiple Feedback Queue Scheduling.
1. First Come First Serve Scheduling [FCFS].
FCFS is an operating system scheduling algorithm that automatically executes queued
requests and processes in order of their arrival. It is the easiest and simplest CPU scheduling
algorithm.
The process that requests the CPU first gets the CPU allocation first. This is managed with
a FIFO queue.
As the process enters the ready queue, its PCB (Process Control Block) is linked with the tail
of the queue and, when the CPU becomes free, it should be assigned to the process at the
beginning of the queue.
Advantages of FCFS: It is simple, easy to understand, and easy to implement, and every
process eventually gets its turn in arrival order.
Disadvantages of FCFS: It is non-preemptive, the average waiting time is often high, and it
suffers from the convoy effect.
Convoy Effect: If a process with a high burst time occupies the CPU at the front of the ready
queue, then processes with lower burst times are blocked behind it and may wait a very long
time for the CPU. This is known as the convoy effect.
The convoy effect is seen in the FCFS algorithm when the burst time of the first job entering
the ready queue is the highest of all.
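A minimal FCFS sketch (illustrative, not from the notes) makes the convoy effect concrete. Each process is a tuple of (name, arrival time, burst time), and the CPU serves them strictly in arrival order:

```python
# First Come First Serve: serve processes in arrival order from a FIFO queue.

def fcfs(processes):
    time, results = 0, {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)        # CPU may sit idle until the process arrives
        waiting = time - arrival         # time spent in the ready queue
        time += burst                    # run to completion (non-preemptive)
        results[name] = (waiting, waiting + burst)  # (waiting time, turnaround time)
    return results

# Convoy effect: the long job P1 arrives first, so the short jobs wait behind it.
print(fcfs([("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]))
```

Here P2 and P3 need only 3 time units each but wait 23 and 25 units behind P1.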
2. Shortest Job First Scheduling [SJF]:
In SJF, the CPU is assigned to the process in the ready queue with the smallest burst time. In
the preemptive variant, Shortest Remaining Time First (SRTF), a newly arrived process with a
shorter remaining burst preempts the running process.
Advantages: SJF gives the minimum average waiting time for a given set of processes.
Disadvantages: Burst times must be known or estimated in advance, and long processes may
starve if short jobs keep arriving.
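A non-preemptive SJF sketch (illustrative, not from the notes): at each decision point the scheduler picks the process that has already arrived and has the smallest burst time.

```python
# Shortest Job First (non-preemptive): among arrived processes, run the one
# with the smallest burst time to completion, then decide again.

def sjf(processes):
    pending = list(processes)            # each entry: (name, arrival, burst)
    time, waits = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                    # CPU idle: jump to the next arrival
            time = min(p[1] for p in pending)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])
        waits[name] = time - arrival     # waiting time in the ready queue
        time += burst
        pending.remove((name, arrival, burst))
    return waits

print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
```

After P1 finishes at time 7, the shortest ready job P3 runs before the earlier-arrived P2.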
3. Priority Scheduling:
The processes with higher priority should be carried out first, whereas jobs with equal priorities
are carried out on a round-robin or FCFS basis. Priority depends upon memory requirements,
time requirements, etc.
Characteristics of Priority Scheduling
This method provides a good mechanism where the relative importance of each process
may be precisely defined.
Suitable for applications with fluctuating time and resource requirements.
If the system eventually crashes, all low priority processes get lost.
If high priority processes take lots of CPU time, then the lower priority processes may
starve and will be postponed for an indefinite time.
This scheduling algorithm may leave some low priority processes waiting indefinitely.
A process will be blocked when it is ready to run but has to wait for the CPU because
some other process is running currently.
Starvation: In a heavily loaded computer system, a steady stream of higher-priority processes
can prevent a low-priority process from ever getting the CPU. This indefinite blocking of a
low-priority process is called starvation.
Aging: A solution to the problem of indefinite blocking of low-priority processes is called
aging.
Aging is a technique of gradually increasing the priority of processes that wait in the system
for a long time.
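Priority selection with aging can be sketched as follows (illustrative, not from the notes; here a lower number means higher priority, and the aging step size is an assumed parameter):

```python
# Priority scheduling with aging: pick the highest-priority ready job, then
# raise (age) the priority of every job left waiting so none starves forever.

def pick_with_aging(ready, age_step=1):
    """ready: list of {"name", "priority"} dicts; lower number = higher priority."""
    ready.sort(key=lambda job: job["priority"])
    chosen = ready.pop(0)                     # dispatch the highest-priority job
    for job in ready:                         # aging: waiting jobs gain priority
        job["priority"] = max(0, job["priority"] - age_step)
    return chosen

queue = [{"name": "low", "priority": 9}, {"name": "high", "priority": 1}]
print(pick_with_aging(queue)["name"])   # picks "high"; "low" ages from 9 to 8
```

Repeated calls keep lifting the low-priority job until it is eventually chosen.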
4. Round-Robin Scheduling
In round-robin scheduling, each ready task runs turn by turn in a cyclic queue for
a limited time slice. This algorithm also offers starvation-free execution of processes.
Round robin is a pre-emptive algorithm.
The CPU is shifted to the next process after a fixed time interval, which is called the time
quantum/time slice.
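The cyclic behaviour can be sketched as follows (illustrative, not from the notes): each process runs for at most one quantum, then is preempted and rejoins the tail of the queue.

```python
# Round robin: run each process for at most one quantum; if it still has work
# left, preempt it and move it to the back of the cyclic queue.

from collections import deque

def round_robin(bursts, quantum):
    queue = deque(bursts.items())        # bursts: {name: remaining burst time}
    order = []                           # dispatch order of the CPU
    while queue:
        name, remaining = queue.popleft()
        order.append(name)
        if remaining > quantum:          # preempted: rejoin at the tail
            queue.append((name, remaining - quantum))
    return order

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# ['P1', 'P2', 'P3', 'P1', 'P2', 'P1']
```

A smaller quantum gives better response time but more context switches; a very large quantum degenerates into FCFS.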
5. Multi-Level Queue Scheduling:
The ready queue is partitioned into several separate queues under the multilevel queue
scheduling technique. Each process is permanently assigned to one queue based on some
property, such as memory size, process priority, or process type. Each queue has its own
scheduling algorithm. Some queues are used for foreground [interactive] processes, while
others are used for background [non-interactive] processes. The foreground queue may be
scheduled using a round-robin method, and the background queue can be scheduled using an
FCFS strategy.
6. Multilevel Feedback Queue Scheduling:
Unlike multilevel queue scheduling, where processes are permanently assigned to a queue
when they enter the system, multilevel feedback queue scheduling enables a process to switch
between queues. If a process consumes too much processor time, it is moved to a lower-
priority queue. A process waiting in a lower-priority queue for too long may be shifted to a
higher-priority queue. This form of aging prevents starvation.
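The demote/promote movement between queues can be sketched as follows (an illustrative class, not from the notes; queue 0 is the highest priority and the number of levels is an assumed parameter):

```python
# Multilevel feedback queue movement: a CPU-hungry job is demoted one level;
# a job that has waited too long is promoted back up (aging).

class MLFQ:
    def __init__(self, levels=3):
        self.queues = [[] for _ in range(levels)]   # index 0 = highest priority

    def add(self, job, level=0):
        self.queues[level].append(job)              # new jobs enter at the top

    def demote(self, job, level):
        new = min(level + 1, len(self.queues) - 1)  # used its whole quantum
        self.queues[new].append(job)
        return new

    def promote(self, job, level):
        new = max(level - 1, 0)                     # aging lifts a starved job
        self.queues[new].append(job)
        return new

q = MLFQ()
print(q.demote("hog", 0), q.promote("starved", 2))  # 1 1
```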
In general, a multilevel feedback queue scheduler is defined by parameters such as the number
of queues, the scheduling algorithm for each queue, and the method for determining which
queue a process will enter when it requires service.
Multiprocessor Scheduling:
A multi-processor is a system that has more than one processor but shares the same
memory, bus, and input/output devices. In multi-processor scheduling, multiple
processors (CPUs) share the load to handle the execution of processes smoothly. Scheduling
on a multi-processor is more complex than on a single-processor system.
The multiple CPUs in the system share a common bus, memory, and other I/O devices.
There are two types of multiprocessor systems:
Homogenous – Systems in which processors are identical in terms of their functionality.
Heterogeneous - Systems in which processors are of different types.
Processor Affinity
A process has an affinity for the processor on which it is currently running; this is called
processor affinity. When a process runs on a processor, the data it accessed most recently
populates that processor's cache memory, so subsequent data accesses by the process are
often satisfied from the cache.
Soft Affinity: The operating system tries to keep a process running on the same
processor but does not guarantee it. This is called soft affinity.
Hard Affinity: The process can specify the subset of processors on which it is allowed to
run, and the operating system will not migrate it to any other processor. This is called hard
affinity.
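On Linux, hard affinity can be observed directly through the `os` module (an illustrative, Linux-only sketch; these calls are not available on all platforms):

```python
# Hard affinity on Linux: os.sched_setaffinity pins a process to a chosen set
# of CPUs, and the kernel will not migrate it off that set.

import os

mask = os.sched_getaffinity(0)           # CPUs the current process may run on
print("allowed CPUs:", sorted(mask))

os.sched_setaffinity(0, {min(mask)})     # pin ourselves to a single CPU
print("pinned to:", os.sched_getaffinity(0))
```

After the call, the scheduler keeps this process on the one remaining CPU in its affinity mask.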
Load Balancing:
In a multi-processor system, all processors may not have the same workload. Some may have
a long ready queue, while others may be sitting idle. To solve this problem, load
balancing comes into the picture. Load Balancing is the phenomenon of distributing workload
so that the processors have an even workload in a symmetric multi-processor system.
Push Migration: In push migration, a specific task periodically checks the load on each
processor. Some processors may have long queues while others are idle. If the workload is
unevenly distributed, it extracts load from the overloaded processor and pushes it to an idle
or less busy processor.
Pull Migration: In pull migration, an idle processor will extract the load from an
overloaded processor itself.
Multi-Core Processors:
A multi-core processor places multiple computing cores on a single physical chip. Each core
appears to the operating system as a separate processor, so the multiprocessor scheduling
issues above also apply across the cores of one chip.
Real-Time Scheduling:
Hard real time system: It is required to execute and complete a given critical task within a
guaranteed period of time. The process is submitted for execution along with the amount of
time within which it must be completed.
If the scheduler can guarantee that the process will be completed within the specified time, it
admits the process; otherwise it rejects the request. This is called resource reservation.
Resource reservation requires the scheduler to know exactly how long each operating system
function takes to perform.
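The admit-or-reject decision of resource reservation can be sketched with a simple utilization-based admission test (an illustrative assumption, not from the notes): each periodic task needs `burst` CPU units every `period`, and the scheduler admits a new task only while total utilization stays at or below 100%.

```python
# Resource-reservation sketch: admit a periodic hard real-time task only if
# the scheduler can still guarantee all deadlines, approximated here by the
# condition that total CPU utilization (burst/period summed) stays <= 1.0.

def admit(tasks, new_task):
    """tasks and new_task are (burst, period) tuples. True if admitted."""
    total = sum(b / p for b, p in tasks) + new_task[0] / new_task[1]
    return total <= 1.0

admitted = [(2, 10), (3, 15)]            # current utilization: 0.2 + 0.2 = 0.4
print(admit(admitted, (5, 10)))          # 0.4 + 0.5 = 0.9 -> admitted (True)
print(admit(admitted, (7, 10)))          # 0.4 + 0.7 = 1.1 -> rejected (False)
```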
Soft real time system: These systems are less restrictive. They simply ensure that a critical
real-time task gets priority over other tasks and retains that priority until it completes. They
have fewer timing constraints and do not support deadline scheduling.