Lecture 6 - CPU Scheduling
Chapter 5: CPU Scheduling
• Basic Concepts
• Scheduling Criteria
• Scheduling Algorithms
• Thread Scheduling
• Multiple-Processor Scheduling
• Operating Systems Examples
• Algorithm Evaluation
Basic Concepts
• The CPU is one of the primary computer resources; its scheduling is
central to operating-system design.
• In a single-processor system:
• only one process can run at a time;
• any others must wait until the CPU is free and can be rescheduled.
• Maximum CPU utilization obtained with multiprogramming
• A process is executed until it must wait (e.g., for an I/O request to complete).
• Several processes are kept in memory at one time.
• When one process must wait, the operating system takes the CPU away from
that process and gives the CPU to another process.
• This pattern continues.
• Every time one process must wait, another process can take over use of the
CPU.
CPU – I/O Burst Cycle
• Process execution consists of a cycle of
CPU execution and I/O wait.
• Processes alternate between these two
states.
• Process execution begins with a CPU
burst, followed by an I/O burst, which is
followed by another CPU burst, then
another I/O burst, and so on.
• The final CPU burst ends with a system
request to terminate execution.
• The durations of CPU bursts have been
measured.
• An I/O-bound program typically has
many short CPU bursts.
• A CPU-bound program might have a few
long CPU bursts.
CPU Scheduler
• Selects from among the processes in ready queue, and allocates the CPU to one
of them
• Queue may be ordered in various ways
• CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state (e.g., an I/O request, or an invocation of
wait() for the termination of a child process) (non-preemptive scheduling)
2. Switches from running to ready state (e.g., when an interrupt occurs) (preemptive scheduling)
3. Switches from waiting to ready (e.g., at completion of I/O) (preemptive scheduling)
4. Terminates (non-preemptive scheduling)
• Under non-preemptive scheduling, once the CPU has been allocated to a process,
the process keeps the CPU until it releases the CPU, either by terminating or by
switching to the waiting state.
• All other scheduling is preemptive
• Consider access to shared data
• Consider preemption while in kernel mode
• Consider interrupts occurring during crucial OS activities
Dispatcher
• Dispatcher module gives control of the CPU to the process selected by the short-
term scheduler; this involves:
• switching context
• switching to user mode
• jumping to the proper location in the user program to restart that
program
• Dispatch latency – time it takes for the dispatcher to stop one process and start
another running
Scheduling Criteria
• Many criteria have been suggested for comparing CPU-scheduling algorithms.
• CPU utilization: keep the CPU as busy as possible (ranges from 0% to 100%).
• Throughput: # of processes that complete their execution per time unit
• Turnaround time: amount of time to execute a particular process (the sum of
the periods spent waiting to get into memory, waiting in the ready queue,
executing on the CPU, and doing I/O).
• Waiting time: the sum of the periods spent waiting in the ready queue.
• Response time: amount of time it takes from when a request was submitted
until the first response is produced.
• Scheduling Requirements:
• Max CPU utilization
• Max throughput
• Min turnaround time
• Min waiting time
• Min response time
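As a small worked illustration of how the criteria above relate (the numbers are made up, not from the slides), turnaround and waiting time for a single process can be derived from its arrival, completion, and total CPU-burst times, assuming the process performs no I/O:

```python
# Illustrative numbers (assumed): a process arrives at t=0, finishes at t=30,
# and used the CPU for 24 time units in total, with no I/O.
arrival, completion, cpu_burst_total = 0, 30, 24

turnaround = completion - arrival        # total time spent in the system
waiting = turnaround - cpu_burst_total   # time spent sitting in the ready queue
print(turnaround, waiting)               # 30 6
```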
Scheduling Algorithms
1. First-Come, First-Served Scheduling (FCFS)
• The process that requests the CPU first is allocated the CPU first.
• Can be Implemented using a FIFO queue.
• When a process enters the ready queue, its PCB is placed on the tail
of the queue.
• CPU is allocated to the process at the head of the queue.
• The running process is then removed from the queue.
• The code for FCFS scheduling is simple to write and understand.
• Disadvantages:
• The average waiting time is often quite long.
• All the other processes wait for one big process to get off the CPU (the
convoy effect).
• This effect results in lower CPU and device utilization than might be possible
if the shorter processes were allowed to go first.
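A minimal sketch of FCFS waiting-time computation, assuming all processes arrive at time 0; the burst values (24, 3, 3) follow the classic textbook example, where one long job in front produces a long average wait:

```python
def fcfs_waiting_times(bursts):
    """Return each process's waiting time under FCFS (all arrive at t=0)."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # a process waits for everything queued before it
        clock += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])
print(waits)                      # [0, 24, 27]
print(sum(waits) / len(waits))    # average waiting time = 17.0
```

Reordering the same bursts as (3, 3, 24) drops the average wait to 3.0, which is exactly the effect SJF exploits.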
Scheduling Algorithms
2. Shortest Job First (SJF)
• Associate with each process the length of its next CPU burst
• Use these lengths to schedule the process with the shortest time
• SJF is optimal – gives minimum average waiting time for a given set of
processes
• The difficulty is knowing the length of the next CPU request
• Could ask the user
• Disadvantage:
• How to know the length of the next CPU
request.
• For long-term (job) scheduling in a batch
system, we can use as the length the process
time limit that a user specifies when
submitting the job.
• Users are motivated to estimate the time
limit accurately, since a lower value may
mean faster response.
• SJF scheduling is used frequently in long-term
scheduling.
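A minimal sketch of non-preemptive SJF, assuming all processes arrive at time 0 and that the next-burst lengths are known exactly (sidestepping the difficulty discussed above); the bursts (6, 8, 7, 3) are a common textbook example:

```python
def sjf_waiting_times(bursts):
    """Non-preemptive SJF; all processes arrive at t=0.
    Returns waiting times in the original process order."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])  # shortest first
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock          # waits for all shorter jobs scheduled before it
        clock += bursts[i]
    return waits

w = sjf_waiting_times([6, 8, 7, 3])   # P1..P4
print(w)                  # [3, 16, 9, 0]
print(sum(w) / len(w))    # 7.0 (FCFS order would give 10.25)
```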
Determining Length of Next CPU Burst
• We may not know the length of the next CPU burst, but we may be able to predict its
value.
• the next CPU burst will be similar in length to the previous ones.
• Computing an approximation of the length of the next CPU burst, we can pick the
process with the shortest predicted CPU burst.
• an exponential average of the measured lengths of previous CPU bursts.
Let tₙ be the length of the nth CPU burst, and let
τₙ₊₁ be our predicted value for the next CPU
burst. Then, for α, 0 ≤ α ≤ 1, define:
      τₙ₊₁ = α tₙ + (1 − α) τₙ
Examples of Exponential Averaging
• α = 0
• τₙ₊₁ = τₙ
• Recent history does not count
• α = 1
• τₙ₊₁ = tₙ
• Only the actual last CPU burst counts
• If we expand the formula, we get:
τₙ₊₁ = α tₙ + (1 − α) α tₙ₋₁ + …
+ (1 − α)ʲ α tₙ₋ⱼ + …
+ (1 − α)ⁿ⁺¹ τ₀
• Since both α and (1 − α) are less than or equal to 1,
each successive term has less weight than its
predecessor
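The exponential average can be sketched directly; α = 0.5 and the initial guess τ₀ = 10 below are common illustrative values, not prescribed by the formula:

```python
def predict_bursts(measured, alpha=0.5, tau0=10):
    """Return the prediction tau made *before* each measured burst,
    using tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n."""
    tau, preds = tau0, []
    for t in measured:
        preds.append(tau)                  # what we would have predicted
        tau = alpha * t + (1 - alpha) * tau  # fold in the observed burst
    return preds

print(predict_bursts([6, 4, 6, 4, 13, 13, 13]))
# [10, 8.0, 6.0, 6.0, 5.0, 9.0, 11.0]
```

Note how the prediction lags behind but eventually tracks the jump from short (4-6) to long (13) bursts.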
Example of Shortest-remaining-time-first
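The Gantt-chart example that accompanied this slide did not survive extraction; as a hedged reconstruction, here is a small simulation using arrival/burst pairs from the standard textbook example (P1: 0/8, P2: 1/4, P3: 2/9, P4: 3/5), which yields an average waiting time of 6.5:

```python
import heapq

def srtf_avg_wait(jobs):
    """jobs: list of (arrival, burst). Returns the average waiting time
    under shortest-remaining-time-first (preemptive SJF)."""
    jobs = sorted(jobs)                 # by arrival time
    ready, waits = [], []               # min-heap of [remaining, arrival, burst]
    t, i = 0, 0
    while i < len(jobs) or ready:
        if not ready:                   # CPU idle: jump to the next arrival
            t = max(t, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(ready, [jobs[i][1], jobs[i][0], jobs[i][1]])
            i += 1
        rem, arr, burst = heapq.heappop(ready)
        # run until this job finishes or the next arrival (a preemption point)
        run = rem if i == len(jobs) else min(rem, jobs[i][0] - t)
        t += run
        if rem == run:
            waits.append(t - arr - burst)   # waiting = turnaround - burst
        else:
            heapq.heappush(ready, [rem - run, arr, burst])
    return sum(waits) / len(waits)

print(srtf_avg_wait([(0, 8), (1, 4), (2, 9), (3, 5)]))   # 6.5
```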
Priority Scheduling
• A priority number (integer) is associated with each process
• Equal Priorities are served using FCFS algorithm
• The CPU is allocated to the process with the highest priority (smallest integer)
• SJF is a special case of priority scheduling, where the priority is the inverse of the predicted next CPU burst time
• Problem Starvation – low priority processes may never execute
• Solution Aging – as time progresses increase the priority of the process
• Average waiting time = [P1(6) + P2(0) + P3(16) + P4(18) + P5(1)] / 5 = 41/5 = 8.2
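A sketch of non-preemptive priority scheduling with all arrivals at time 0. The burst/priority pairs below are assumed from the standard textbook example that produces the per-process waits in the average above (smaller number = higher priority):

```python
def priority_waiting_times(procs):
    """procs: dict name -> (burst, priority); all arrive at t=0.
    Returns each process's waiting time."""
    order = sorted(procs, key=lambda p: procs[p][1])  # highest priority first
    waits, clock = {}, 0
    for name in order:
        waits[name] = clock
        clock += procs[name][0]
    return waits

procs = {"P1": (10, 3), "P2": (1, 1), "P3": (2, 4), "P4": (1, 5), "P5": (5, 2)}
w = priority_waiting_times(procs)
print(w)                         # {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
print(sum(w.values()) / len(w))  # 8.2
```

An aging fix would periodically decrement each waiting process's priority number so that no entry can starve at the back of the sort order.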
Round Robin (RR)
• Each process gets a small unit of CPU time (time quantum q), usually 10-100 ms.
• After this time, the process is preempted, added to the end of the ready queue.
• If there are n processes in the ready queue and the time quantum is q, then each
process gets 1/n of the CPU time in chunks of at most q time units at once.
• Timer interrupts every quantum
to schedule next process
• Performance
• q large ⇒ RR behaves the same as FIFO
• q small ⇒ q must be large with
respect to context-switch time,
otherwise the overhead is too high
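A minimal RR sketch, assuming all processes arrive at time 0 and ignoring context-switch cost; with bursts (24, 3, 3) and q = 4, the short jobs finish far sooner than they would waiting behind the long one under FCFS:

```python
from collections import deque

def rr_waiting_times(bursts, q):
    """Round robin with time quantum q; all processes arrive at t=0.
    Waiting time = completion - burst when arrival is 0."""
    ready = deque(enumerate(bursts))        # FIFO of (pid, remaining)
    clock, waits = 0, [0] * len(bursts)
    while ready:
        pid, rem = ready.popleft()
        run = min(q, rem)                   # run one quantum (or to completion)
        clock += run
        if rem > run:
            ready.append((pid, rem - run))  # preempted: back of the queue
        else:
            waits[pid] = clock - bursts[pid]
    return waits

print(rr_waiting_times([24, 3, 3], q=4))    # [6, 4, 7]
```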
Time Quantum and Context Switch Time
Turnaround Time Varies With The Time Quantum
• A rule of thumb: 80% of CPU bursts should be shorter than q
Multilevel Queue
• Ready queue is partitioned into separate queues, e.g.:
• Foreground (interactive)
• Background (batch)
• Processes are permanently assigned to one queue
• Each queue has its own scheduling algorithm:
• foreground – RR
• background – FCFS
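A sketch of the dispatch decision between the two queues. The slides fix only the per-queue algorithms (RR for foreground, FCFS for background); giving the foreground queue absolute priority over the background queue is one common inter-queue policy, assumed here:

```python
from collections import deque

def pick_next(foreground, background):
    """Choose which queue the dispatcher serves next; foreground (interactive)
    jobs are assumed to have absolute priority over background (batch) jobs."""
    if foreground:
        return "foreground"   # served internally with RR
    if background:
        return "background"   # served internally with FCFS
    return None               # CPU idle

fg, bg = deque(["editor"]), deque(["batch_job"])
print(pick_next(fg, bg))      # foreground
fg.popleft()
print(pick_next(fg, bg))      # background
```

With absolute priority, a steady stream of foreground jobs can starve the background queue, which is why some systems instead time-slice between the queues.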