OS
CPU SCHEDULING
Scheduling refers to a set of policies and mechanisms built into
the operating system that govern the order in which the work
to be done by a computer system is completed. A scheduler is
an OS module that selects the next job to be admitted into the
system and the next process to run. The primary objective of
scheduling is to optimize system performance in accordance
with the criteria deemed most important by the system designer.
There are three types of schedulers in a complex operating
system.
i. Long term scheduler
ii. Short term scheduler
iii. Medium term scheduler
Scheduling Criteria
• CPU utilization: We want to keep the CPU as busy as possible.
Conceptually, CPU utilization can range from 0 to 100 percent.
In a real system, it should range from 40 percent (for a lightly
loaded system) to 90 percent (for a heavily loaded system).
• Throughput: If the CPU is busy executing processes, then
work is being done. One measure of work is the number of
processes that are completed per time unit, called
throughput. For long processes, this rate may be one process
per hour; for short transactions, it may be ten processes per
second.
• Turnaround time: From the point of view of a particular
process, the important criterion is how long it takes to execute
that process. The interval from the time of
submission of a process to the time of completion is the
turnaround time. Turnaround time is the sum of the periods
spent waiting to get into memory, waiting in the ready
queue, executing on the CPU, and doing I/O.
• Waiting time: The CPU-scheduling algorithm does not affect
the amount of time during which a process executes or does
I/O. It affects only the amount of time that a
process spends waiting in the ready queue. Waiting time is the
sum of the periods spent waiting in the ready queue.
• Response time: In an interactive system, turnaround time
may not be the best criterion. Often, a process can produce
some output fairly early and can continue computing new
results while previous results are being output to the user. Thus,
another measure is the time from the submission of a request
until the first response is produced. This measure, called
response time, is the time it takes to start responding,
not the time it takes to output the response. The turnaround
time is generally limited by the speed of the output device.
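The turnaround and waiting definitions above can be checked with a tiny sketch; all the times below are made-up example numbers:

```python
# Sketch of the criteria above for one process (made-up numbers):
# turnaround = completion - submission; waiting = turnaround - (CPU burst + I/O).
submission, completion = 0, 30      # process submitted at t=0, finishes at t=30
cpu_time, io_time = 10, 5           # time actually spent executing / doing I/O

turnaround = completion - submission
waiting = turnaround - cpu_time - io_time
print(turnaround, waiting)   # 30 15
```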
SCHEDULING ALGORITHMS
First-Come, First-Served Scheduling
By far the simplest CPU-scheduling algorithm is the first-come,
first-served (FCFS) scheduling algorithm. With this scheme,
the process that requests the CPU first is allocated the CPU
first. The implementation of the FCFS policy is easily managed
with a FIFO queue. When a process enters the ready queue, its
PCB is linked onto the tail of the queue. When the CPU is free,
it is allocated to the process at the head of the queue.
The running process is then removed from the queue. The code
for FCFS scheduling is simple to write and understand. On the
negative side, the average waiting time under the FCFS policy is
often quite long. Consider the following set of processes that
arrive at time 0, with the length of the CPU burst given in
milliseconds:
Process Burst Time
P1 24
P2 3
P3 3
If the processes arrive in the order P1, P2, P3, and are served in
FCFS order, we get the result shown in the following Gantt
chart, which is a bar chart that illustrates a particular schedule,
including the start and finish times of each of the participating
processes:
| P1 (0-24) | P2 (24-27) | P3 (27-30) |
The average waiting time here is (0 + 24 + 27)/3 = 17 milliseconds.
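The FCFS schedule above can be reproduced with a short sketch; the burst values are taken from the table, and all processes are assumed to arrive at time 0:

```python
# FCFS: serve processes in arrival order. Burst lengths from the
# example above (P1=24, P2=3, P3=3 ms, all arriving at time 0).
def fcfs(bursts):
    """Return (start, finish, waiting) for each burst, in arrival order."""
    schedule, clock = [], 0
    for burst in bursts:
        start = clock
        finish = start + burst
        schedule.append((start, finish, start))  # waiting = start, since arrival is 0
        clock = finish
    return schedule

sched = fcfs([24, 3, 3])          # P1, P2, P3
avg_wait = sum(w for _, _, w in sched) / len(sched)
print(sched)      # [(0, 24, 0), (24, 27, 24), (27, 30, 27)]
print(avg_wait)   # 17.0
```

Note how one long burst at the head of the queue (P1) makes every later process wait, which is why the FCFS average waiting time is often high.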
Shortest-Job-First Scheduling
A different approach to CPU scheduling is the shortest-job-first
(SJF) scheduling algorithm. This algorithm associates with
each process the length of the process’s next CPU burst. When
the CPU is available, it is assigned to the process that has the
smallest next CPU burst. If the next CPU bursts of two
processes are the same, FCFS scheduling is used to break the
tie. Note that a more appropriate term for this scheduling
method would be the shortest-next- CPU-burst algorithm,
because scheduling depends on the length of the next CPU
burst of a process, rather than its total length.
Process Burst Time
P1 6
P2 8
P3 7
P4 3
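Assuming, as in the FCFS example, that all four processes arrive at time 0, the SJF order and waiting times for the table above can be sketched as:

```python
# SJF (non-preemptive): always pick the shortest next CPU burst.
# Burst lengths from the table above; all assumed to arrive at time 0.
def sjf(bursts):
    """bursts: dict name -> burst length. Returns {name: waiting_time}."""
    waiting, clock = {}, 0
    # Sort by burst length; ties broken FCFS (here approximated by name order).
    for name, burst in sorted(bursts.items(), key=lambda kv: (kv[1], kv[0])):
        waiting[name] = clock
        clock += burst
    return waiting

w = sjf({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(w)                          # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(w.values()) / len(w))   # 7.0
```

SJF gives the minimum average waiting time for a given set of processes (7 ms here, versus 10.25 ms if the same processes were served in FCFS order).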
Shortest-Seek-Time-First (SSTF) Scheduling
SSTF is a disk-scheduling policy: it services the pending request
with the minimum seek time from the current head position.
Advantages:
Average response time decreases
Throughput increases
Disadvantages:
Overhead of calculating seek time in advance
Can cause starvation for a request whose seek time is high
compared with incoming requests
High variance of response time, as SSTF favours only some
requests
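The SSTF behaviour described above can be sketched as follows; the head position (53) and the request queue are made-up example values:

```python
# SSTF: repeatedly service the pending request closest to the current
# head position. Head position and request cylinders are example values.
def sstf(head, requests):
    """Return (service_order, total_head_movement) under SSTF."""
    order, total = [], 0
    pending = list(requests)
    while pending:
        nxt = min(pending, key=lambda cyl: abs(cyl - head))  # nearest request
        total += abs(nxt - head)
        head = nxt
        order.append(nxt)
        pending.remove(nxt)
    return order, total

order, moved = sstf(53, [98, 183, 37, 122, 14, 124, 65, 67])
print(order)   # [65, 67, 37, 14, 98, 122, 124, 183]
print(moved)   # 236 cylinders of total head movement
```

Notice the starvation risk: a distant request (183 here) is serviced last, and would keep losing to any new requests arriving near the head.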
SCAN Scheduling
In the SCAN algorithm, the disk arm starts at one end of the
disk and moves toward the other end, servicing requests as it
reaches each cylinder, until it gets to the other end of the disk.
At the other end, the direction of head movement is reversed,
and servicing continues. The head continuously scans back and
forth across the disk. The SCAN algorithm is sometimes called
the elevator algorithm , since the disk arm behaves just like an
elevator in a building, first servicing all the requests going up
and then reversing to service requests the other way.
Advantages:
High throughput
Low variance of response time
Low average response time
Disadvantages:
Long waiting time for requests for locations just visited by the
disk arm.
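A simplified sketch of the SCAN service order is below. Strictly, this version reverses at the last pending request (the variant usually called LOOK), whereas classic SCAN continues to the physical end of the disk; the head position and queue are made-up values:

```python
# SCAN (elevator, LOOK-style simplification): sweep in one direction,
# servicing requests in order, then reverse. Example values only.
def scan(head, requests, direction="down"):
    """Return the order in which SCAN services the pending requests."""
    lower = sorted((c for c in requests if c < head), reverse=True)  # below head
    upper = sorted(c for c in requests if c >= head)                 # above head
    return lower + upper if direction == "down" else upper + lower

print(scan(53, [98, 183, 37, 122, 14, 124, 65, 67]))
# [37, 14, 65, 67, 98, 122, 124, 183]
```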
C-SCAN Scheduling
Circular SCAN (C-SCAN) scheduling is a variant of SCAN
designed to provide a more uniform wait time. Like SCAN, C-SCAN
moves the head from one end of the disk to the other,
servicing requests along the way. When the head reaches the
other end, however, it immediately returns
to the beginning of the disk without servicing any requests on
the return trip. The C-SCAN scheduling algorithm essentially
treats the cylinders as a circular list that wraps around from the
final cylinder to the first one.
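The circular wrap-around can be sketched with the same made-up head position and queue as before (again simplified to reverse at the requests rather than the physical disk ends):

```python
# C-SCAN (simplified): sweep upward servicing requests, then jump back
# and continue from the lowest pending cylinder; nothing is serviced on
# the return trip. Head position and queue are example values.
def c_scan(head, requests):
    """Return the C-SCAN service order for an upward-sweeping head."""
    upper = sorted(c for c in requests if c >= head)  # serviced on the sweep
    lower = sorted(c for c in requests if c < head)   # serviced after wrap-around
    return upper + lower

print(c_scan(53, [98, 183, 37, 122, 14, 124, 65, 67]))
# [65, 67, 98, 122, 124, 183, 14, 37]
```

Because every request waits at most one full sweep, the wait times are more uniform than under SCAN, at the cost of the unproductive return seek.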
Parity
Parity is an interesting method used to rebuild data in case of
failure of one of the disks. Although it is interesting to
understand how parity works, you will find relatively little
documentation about it on the internet. Parity makes use of a
famous mathematical binary operation called "XOR" (exclusive OR).
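A minimal sketch of an XOR parity rebuild, RAID-style; the data-block values are made up:

```python
# XOR parity: the parity block is the XOR of all data blocks, so any
# single lost block equals the XOR of the surviving blocks and parity.
d1, d2, d3 = 0b1011, 0b0110, 0b1100   # made-up 4-bit data blocks
parity = d1 ^ d2 ^ d3                 # stored on the parity disk

# Suppose the disk holding d2 fails: rebuild it from the survivors.
rebuilt = d1 ^ d3 ^ parity
print(bin(rebuilt), rebuilt == d2)    # 0b110 True
```

This works because XOR is its own inverse: XOR-ing the parity with every surviving block cancels each of them out, leaving exactly the missing block.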