Operating system notes
• By switching the CPU among processes, the operating system can make the computer more
productive.
Problem:
When a process executes and has to wait for the completion of an I/O request, the CPU sits idle.
This waiting time is wasted; no useful work is accomplished.
Solution:
Multiprogramming tries to use this time productively: when one process has to wait, the
operating system takes the CPU away from that process and gives the CPU to another process.
• Process execution begins with a CPU burst, which is followed by an I/O burst, then another
CPU burst, and so on, alternating until the process terminates.
Alternating Sequence of CPU And I/O Bursts
Preemptive Scheduling
• Under non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the
CPU until it releases the CPU either by terminating or by switching to the waiting state.
• Non-preemptive scheduling does not require special hardware (for example, a timer).
• Under preemptive scheduling, the running process can be interrupted and a new process (if one exists
in the ready queue) must be selected for execution.
• All modern versions of the Windows operating system use preemptive scheduling.
Preemptive Scheduling
CPU-scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example, as the result of an
I/O request or an invocation of wait for the termination of one of the child processes)
2. When a process switches from the running state to the ready state (for example, when an interrupt
occurs)
3. When a process switches from the waiting state to the ready state (for example, at completion of I/O)
4. When a process terminates
Under circumstances 1 and 4 there is no choice but to select a new process (non-preemptive); under
circumstances 2 and 3 the scheduler may preempt the running process.
• CPU Scheduler:
Whenever the CPU becomes idle, the short-term scheduler (CPU scheduler) selects a process from the
processes in memory that are ready to execute and allocates the CPU to that process.
• Dispatcher:
The dispatcher is the module that gives control of the CPU to the process selected by the short-term
scheduler. This function involves the following:
• Switching context
• Switching to user mode
• Jumping to the proper location in the user program to restart that program
• Dispatch latency.
The time it takes for the dispatcher to stop one process and start another running is known as
the dispatch latency.
Scheduling Criteria
• Scheduling criteria are used for comparing different CPU-scheduling algorithms; many such
criteria have been suggested.
CPU utilization.
The CPU should be kept as busy as possible.
Throughput.
The number of processes that are completed per time unit is called the throughput.
Turnaround time.
The interval from the time of submission of a process to the time of completion is the turnaround time.
Waiting time.
The amount of time that a process spends waiting in the ready queue.
Response time.
The time from the submission of a request until the first response is produced.
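As an illustrative sketch (the helper names are mine, not from the notes), the per-process criteria follow directly from a process's arrival, CPU burst, and completion times:

```python
# Illustrative helpers for the criteria above; the function names are mine.

def turnaround_time(arrival, completion):
    """Interval from submission of a process to its completion."""
    return completion - arrival

def waiting_time(arrival, completion, burst):
    """Time spent waiting in the ready queue: turnaround minus CPU time."""
    return turnaround_time(arrival, completion) - burst

# Example: a process arrives at t=2, uses 4 ms of CPU, and finishes at t=12.
print(turnaround_time(2, 12))    # 10 ms in the system
print(waiting_time(2, 12, 4))    # 6 ms in the ready queue
```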
Optimization Criteria
It is desirable to maximize CPU utilization and throughput and to minimize turnaround time,
waiting time, and response time.
In other cases, it is more important to optimize the minimum or maximum values rather than the
average (for example, to minimize the maximum response time).
Scheduling Algorithms
• CPU scheduling deals with the problem of deciding which of the processes in the ready queue is
to be allocated the CPU.
1. First-Come, First-Served Scheduling
2. Shortest-Job-First Scheduling
3. Priority Scheduling
4. Round-Robin Scheduling
First-Come, First-Served Scheduling
• Once the CPU has been allocated to a process, that process keeps the CPU until it releases it
either by terminating or by requesting I/O.
• What are the average queueing and residence times for this scenario?
• How do average queueing and residence times depend on ordering of these processes in the
queue?
Process   Burst Time (ms)
P1        24
P2        3
P3        3
With FCFS, the process that requests the CPU first is allocated the CPU first.
Suppose that the processes arrive in the order: P1, P2, P3.
Gantt chart:
| P1 | P2 | P3 | with boundaries at 0, 24, 27, 30.
Waiting time for P1 = 0; P2 = 24; P3 = 27 milliseconds
Average waiting time = ( 0 + 24 + 27 )/3 = 17 milliseconds
Suppose instead that the processes arrive in the order: P2, P3, P1.
Gantt chart:
| P2 | P3 | P1 | with boundaries at 0, 3, 6, 30.
Waiting time for P1 = 6; P2 = 0; P3 = 3 milliseconds
Average waiting time = ( 6 + 0 + 3 )/3 = 3 milliseconds
• Convoy effect: all the other processes wait for one long-running process to finish using the CPU.
• Higher utilization might be possible if the short processes were allowed to run first.
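The two orderings above can be checked with a short sketch (the function name is mine, not from the notes):

```python
# Illustrative sketch: FCFS waiting times for the two orderings above
# (burst times P1 = 24, P2 = 3, P3 = 3).

def fcfs_waiting_times(bursts):
    """Return each process's waiting time under FCFS, in queue order."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # each process waits for all earlier ones
        clock += burst
    return waits

print(fcfs_waiting_times([24, 3, 3]))   # order P1, P2, P3 -> [0, 24, 27]
print(fcfs_waiting_times([3, 3, 24]))   # order P2, P3, P1 -> [0, 3, 6]
```

Running the short processes first cuts the average wait from 17 ms to 3 ms, which is exactly the convoy effect avoided.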
Shortest-Job-First Scheduling
• The SJF algorithm associates with each process the length of its next CPU burst.
• When the CPU becomes available, it is assigned to the process that has the smallest next CPU
burst (in the case of matching bursts, FCFS is used)
• Two schemes:
– Non-preemptive – once the CPU is given to the process, it cannot be preempted until it
completes its CPU burst.
– Preemptive – if a new process arrives with a CPU burst length less than the remaining
time of the currently executing process, preempt. This scheme is known as Shortest-
Remaining-Time-First (SRTF)
Example #1
Non-Preemptive SJF (simultaneous arrival)
Process   Arrival Time   Burst Time (ms)
P1        0.0            6
P2        0.0            4
P3        0.0            1
P4        0.0            5
The Gantt chart for the schedule is:
| P3 | P2 | P4 | P1 | with boundaries at 0, 1, 5, 10, 16.
Waiting time: P1 = 10; P2 = 1; P3 = 0; P4 = 5
• Average waiting time = ( 10 + 1 + 0 + 5 )/4 = 4 milliseconds
• Average turnaround time = ( 1 + 5 + 10 + 16 )/4 = 8 milliseconds
Example #2
Non-Preemptive SJF (varying arrival times)
Process   Arrival Time   Burst Time (ms)
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4
Gantt chart:
| P1 | P3 | P2 | P4 | with boundaries at 0, 7, 8, 12, 16.
Calculation of Waiting Time (waiting time = start time – arrival time)
Waiting time for P1 = ( 0 – 0.0 ) = 0 milliseconds
Waiting time for P2 = ( 8 – 2.0 ) = 6 milliseconds
Waiting time for P3 = ( 7 – 4.0 ) = 3 milliseconds
Waiting time for P4 = ( 12 – 5.0 ) = 7 milliseconds
Calculation of Turnaround Time (turnaround time = completion time – arrival time)
Turnaround time for P1 = ( 7 – 0.0 ) = 7 milliseconds
Turnaround time for P2 = ( 12 – 2.0 ) = 10 milliseconds
Turnaround time for P3 = ( 8 – 4.0 ) = 4 milliseconds
Turnaround time for P4 = ( 16 – 5.0 ) = 11 milliseconds
• Average waiting time = ( 0 + 6 + 3 + 7 )/4 = 4 milliseconds
• Average turnaround time = ( 7 + 10 + 4 + 11 )/4 = 8 milliseconds
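Example #2 can be reproduced with a short sketch of non-preemptive SJF (the function name and data layout are mine, not from the notes):

```python
# Illustrative sketch of non-preemptive SJF with arrival times (Example #2).

def sjf_nonpreemptive(procs):
    """procs: name -> (arrival, burst). Returns name -> completion time."""
    remaining = dict(procs)
    clock, done = 0, {}
    while remaining:
        ready = [p for p, (arrival, _) in remaining.items() if arrival <= clock]
        if not ready:                         # CPU idle until the next arrival
            clock = min(a for a, _ in remaining.values())
            continue
        # shortest next CPU burst wins; ties broken FCFS (earlier arrival)
        p = min(ready, key=lambda q: (remaining[q][1], remaining[q][0]))
        clock += remaining.pop(p)[1]
        done[p] = clock
    return done

procs = {"P1": (0.0, 7), "P2": (2.0, 4), "P3": (4.0, 1), "P4": (5.0, 4)}
print(sjf_nonpreemptive(procs))   # P1 at 7, P3 at 8, P2 at 12, P4 at 16
```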
Example #3
Preemptive SJF (Shortest-Remaining-Time-First)
Process   Arrival Time   Burst Time (ms)
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4
Gantt Chart
| P1 | P2 | P3 | P2 | P4 | P1 | with boundaries at 0, 2, 4, 5, 7, 11, 16.
Calculation of Waiting Time
P1 arrives at 0.0, runs from 0 to 2, and then waits from its preemption at 2 until 11:
Waiting time for P1 = ( 0 – 0.0 ) + ( 11 – 2 ) = 9 milliseconds
Waiting time for P2 = ( 2 – 2.0 ) + ( 5 – 4 ) = 1 millisecond
Waiting time for P3 = ( 4 – 4.0 ) = 0 milliseconds
Waiting time for P4 = ( 7 – 5.0 ) = 2 milliseconds
• Average waiting time = ( 9 + 1 + 0 + 2 )/4 = 3 milliseconds
• Average turnaround time = ( (16 – 0) + (7 – 2) + (5 – 4) + (11 – 5) )/4 = 7 milliseconds
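The preemptive schedule above can be reproduced with a simple 1-ms-step simulation; this is an illustrative sketch, not the notes' own code:

```python
# Illustrative 1-ms step simulation of SRTF (preemptive SJF) for Example #3.

def srtf(procs):
    """procs: name -> (arrival, burst). Returns name -> completion time."""
    left = {p: burst for p, (_, burst) in procs.items()}   # remaining time
    clock, done = 0, {}
    while left:
        ready = [p for p in left if procs[p][0] <= clock]
        if not ready:                 # nothing has arrived yet
            clock += 1
            continue
        p = min(ready, key=lambda q: left[q])   # shortest remaining time runs
        left[p] -= 1                            # run it for one millisecond
        clock += 1
        if left[p] == 0:
            done[p] = clock
            del left[p]
    return done

procs = {"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)}
print(srtf(procs))   # P3 done at 5, P2 at 7, P4 at 11, P1 at 16
```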
Priority Scheduling
The SJF algorithm is a special case of the general priority scheduling algorithm.
The CPU is allocated to the process with the highest priority (smallest integer = highest
priority)
– A preemptive approach will preempt the CPU if the priority of the newly-
arrived process is higher than the priority of the currently running process.
– A non-preemptive approach will simply put the new process (with the
highest priority) at the head of the ready queue
Process   Burst Time (ms)   Priority
P1        10                3
P2        1                 1
P3        2                 3
P4        1                 4
P5        5                 2
Gantt Chart
| P2 | P5 | P1 | P3 | P4 | with boundaries at 0, 1, 6, 16, 18, 19.
Waiting time for P1 = 6; P2 = 0; P3 = 16; P4 = 18; P5 = 1
• Average waiting time = ( 6 + 0 + 16 + 18 + 1 )/5 = 8.2 milliseconds
• SJF is a priority scheduling algorithm where priority is the predicted next CPU
burst time
• The main problem with priority scheduling is starvation: low-priority
processes may never execute.
• A common solution is aging: gradually increasing the priority of processes
that have waited in the system for a long time.
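A minimal sketch of non-preemptive priority scheduling on the table above (smaller number = higher priority, all arrivals at time 0); the function name is illustrative:

```python
# Illustrative sketch of non-preemptive priority scheduling.

def priority_schedule(procs):
    """procs: list of (name, burst, priority). Returns (run order, waits)."""
    order = sorted(procs, key=lambda p: p[2])     # highest priority first
    waits, clock = {}, 0
    for name, burst, _ in order:
        waits[name] = clock                       # waits behind earlier picks
        clock += burst
    return [name for name, _, _ in order], waits

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 3), ("P4", 1, 4), ("P5", 5, 2)]
order, waits = priority_schedule(procs)
print(order)                              # ['P2', 'P5', 'P1', 'P3', 'P4']
print(sum(waits.values()) / len(waits))   # average waiting time: 8.2
```

Python's `sorted` is stable, so processes with equal priority (P1 and P3) keep their original FCFS order.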
Round-Robin Scheduling
In the round robin algorithm, each process gets a small unit of CPU time (a time
quantum), usually 10-100 milliseconds.
When this time has elapsed, the process is preempted and added to the end of the ready
queue.
If there are n processes in the ready queue and the time quantum is q, then each
process gets 1/n of the CPU time in chunks of at most q time units at once.
– q large ⇒ RR behaves the same as FCFS
– q small ⇒ q must still be greater than the context-switch time; otherwise, the
overhead is too high
• One rule of thumb is that 80% of the CPU bursts should be shorter than the time
quantum
Example
RR with Time Quantum = 20
Process   Burst Time (ms)
P1        53
P2        17
P3        68
P4        24
Gantt chart:
| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3 |
with boundaries at 0, 20, 37, 57, 77, 97, 117, 121, 134, 154, 162.
Waiting time for P1 = ( 77 – 20 ) + ( 121 – 97 ) = 57 + 24 = 81 milliseconds
Waiting time for P2 = 20 milliseconds
Waiting time for P3 = 37 + ( 97 – 57 ) + ( 134 – 117 ) = 94 milliseconds
Waiting time for P4 = 57 + ( 117 – 77 ) = 97 milliseconds
• Average waiting time = ( 81 + 20 + 94 + 97 )/4 = 73 milliseconds
• Average turnaround time = ( 134 + 37 + 162 + 121 )/4 = 113.5 milliseconds
• Typically, RR gives a higher average turnaround time than SJF, but better response time.
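A minimal sketch of this example (the function name is mine), assuming all four processes are in the ready queue at time 0:

```python
# Illustrative sketch of round-robin with quantum 20 for the example above.
from collections import deque

def round_robin(bursts, quantum):
    """bursts: name -> burst time. Returns name -> completion time."""
    queue = deque(bursts.items())
    clock, done = 0, {}
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)               # run one quantum, or finish
        clock += run
        if left > run:
            queue.append((name, left - run))   # back of the ready queue
        else:
            done[name] = clock
    return done

done = round_robin({"P1": 53, "P2": 17, "P3": 68, "P4": 24}, 20)
print(done)                             # P2: 37, P4: 121, P1: 134, P3: 162
print(sum(done.values()) / len(done))   # average turnaround: 113.5
```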
Multilevel Queue Scheduling
• A multi-level queue scheduling algorithm partitions the ready queue into several
separate queues.
• For example, processes may be divided into
– foreground (interactive) processes and
– background (batch) processes.
• The processes are permanently assigned to one queue, generally based on some
property of the process such as memory size, process priority, or process type.
– The foreground queue may have absolute priority over the background
queue.
One example of a multi-level queue is the following set of five queues, listed from highest
to lowest priority: system processes, interactive processes, interactive editing processes,
batch processes, and student processes.
• Each queue has absolute priority over lower priority queues.
• For example, no process in the batch queue can run unless the queues above it
are empty.
• However, this can result in starvation for the processes in the lower priority
queues.
• Each queue gets a certain portion of the CPU time, which it can then schedule
among its various processes.
– The foreground queue can be given 80% of the CPU time for RR
scheduling
– The background queue can be given 20% of the CPU time for FCFS
scheduling
• A multilevel queue scheduler is defined by parameters such as:
– The number of queues.
– The method used to determine which queue a process will enter when that
process needs service.
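The absolute-priority dispatch rule described above can be sketched as follows; the queue names and contents are made up for illustration:

```python
# Illustrative sketch of dispatch from a multi-level queue with absolute
# priority between queues.

def pick_next(queues):
    """queues: FIFO lists, highest priority first. A lower queue runs only
    when every queue above it is empty."""
    for queue in queues:
        if queue:
            return queue.pop(0)
    return None          # no ready process anywhere

system, interactive, batch = ["pager"], ["editor"], ["payroll"]
print(pick_next([system, interactive, batch]))   # 'pager' runs first
print(pick_next([system, interactive, batch]))   # then 'editor'
print(pick_next([system, interactive, batch]))   # then 'payroll'
```

As the notes observe, if the higher queues never empty, the batch queue starves.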
Multiple-Processor Scheduling
• If multiple CPUs are available, load sharing among them becomes possible, and the
scheduling problem becomes correspondingly more complex.
– We can use any available processor to run any process in the queue.
• Two approaches:
1. Asymmetric multiprocessing
• One processor (the master) handles all scheduling decisions, I/O processing, and other system
activities; the remaining processors execute only user code.
• Because only one processor accesses the system data structures, the need for
data sharing is reduced.
2. Symmetric multiprocessing (SMP)
• Each processor is self-scheduling: each processor examines the ready queue and selects a
process to execute.
• Efficient use of the CPUs requires load balancing to keep the workload
evenly distributed.