Lect 3

Chapter 5 discusses CPU scheduling, covering basic concepts, types of schedulers, scheduling objectives, and various scheduling algorithms such as FCFS, SJF, and Round Robin. It highlights the importance of scheduling in process management and the criteria for evaluating scheduling performance. Additionally, it addresses multiprocessor scheduling challenges, including processor affinity and load balancing, as well as real-time scheduling techniques like Rate Monotonic and Earliest Deadline First.

Uploaded by asnake ketema

Chapter 5: CPU Scheduling

● Scheduling
○ Basic Concepts of scheduling
○ Scheduling Criteria
○ Scheduling Algorithms
○ Algorithm Evaluation
CPU Scheduling
● In discussing process management and synchronization, we
talked about context switching among processes/threads on the
ready queue
● But we glossed over the details of exactly which thread is
chosen from the ready queue
● Making this decision is called scheduling
● Scheduling is the task of selecting a process/thread from the ready
queue and letting it run on the CPU
○ This is done by the scheduler, also called the dispatcher
Types of Scheduler

● Non-preemptive scheduler
○ A process remains scheduled until it voluntarily relinquishes
the CPU
● Preemptive scheduler
○ A process may be descheduled at any time
When to schedule?

● A new job starts
● The running job exits
● The running job is blocked
● I/O interrupt (some processes will be ready)
● Timer interrupt
○ Every 10 milliseconds (Linux 2.4)
○ Every 1 millisecond (Linux 2.6)
Scheduling Objectives

● Fairness (nobody cries)
● Priority (ladies first)
● Efficiency (make the best use of equipment)
● Encourage good behavior (good boy/girl)
● Support heavy load (degrade gracefully)
● Adapt to different environments (interactive, real-time,
multimedia, etc.)
Scheduling/Performance Criteria

● CPU utilization – keep the CPU as busy as possible
● Throughput – # of processes that complete their execution per time unit
● Turnaround time – the interval from the time of submission of a
process to the time of completion.
○ Sum of the periods spent waiting to get into memory, waiting in the
ready queue, executing on the CPU, and doing I/O.
● Waiting time – amount of time a process has been waiting in the
ready queue
● Response time – amount of time from when a request was
submitted until the first response is produced, not the final output (for
time-sharing environments)
Different Systems, Different Focuses

● Batch systems (e.g., billing, accounts receivable, accounts payable)
○ Max throughput, max CPU utilization
● Interactive systems (e.g., our PC)
○ Min response time
● Real-time systems (e.g., an airplane)
○ Priority, meeting deadlines
■ Example: on an airplane, Flight Control has strictly higher
priority than Environmental Control
Scheduling Algorithms

● Batch systems
○ First Come First Served (FCFS)
○ Shortest Job First (SJF)
● Interactive systems
○ Shortest Remaining Time First (SRTF)
○ Priority Scheduling
○ Round Robin
○ Multilevel Queue & Multilevel Feedback Queue
● Real-time systems
○ Earliest Deadline First Scheduling
○ Rate Monotonic Scheduling
Gantt Chart

● Illustrates how processes/jobs are scheduled over time on the CPU
● Example: job A runs from time 0 to 10, B from 10 to 12, and C from 12 to 16:

| A | B | C |
0 10 12 16
SCHEDULING ALGORITHMS
1. First-Come, First-Served (FCFS) Scheduling
● Jobs are executed on a first-come, first-served basis.
● Easy to understand and implement.
● Poor in performance, as the average waiting time is high.

Process Burst Time
P1 24
P2 3
P3 3

● Suppose that the processes arrive in the order: P1, P2, P3
The Gantt chart for the schedule is:

| P1 | P2 | P3 |
0 24 27 30

● Waiting time for P1 = 0; P2 = 24; P3 = 27
● Average waiting time: (0 + 24 + 27)/3 = 17
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order: P2, P3, P1
● The Gantt chart for the schedule is:

| P2 | P3 | P1 |
0 3 6 30

● Waiting time for P1 = 6; P2 = 0; P3 = 3
● Average waiting time: (6 + 0 + 3)/3 = 3
● Much better than previous case
● Drawback of FCFS: large average waiting time when short processes wait
behind a long process. This is called the convoy effect
● Note also that the FCFS scheduling algorithm is nonpreemptive.
● Troublesome for timesharing systems
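The FCFS arithmetic above can be checked with a few lines of code. This is a minimal sketch (the function name `fcfs_waiting_times` is ours, not from the slides), assuming all processes arrive at time 0 in the given order:

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS, all arriving at time 0.

    bursts: list of CPU burst times in arrival order.
    """
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # each process waits for all earlier ones
        clock += burst
    return waits

# Order P1, P2, P3 (bursts 24, 3, 3): waits 0, 24, 27 -> average 17
print(fcfs_waiting_times([24, 3, 3]))
# Order P2, P3, P1 (bursts 3, 3, 24): waits 0, 3, 6 -> average 3
print(fcfs_waiting_times([3, 3, 24]))
```

Running both orders makes the convoy effect concrete: the same three jobs give an average wait of 17 or 3 depending only on who goes first.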
2. Shortest-Job-First (SJF) Scheduling

● Associate with each process the length of its next CPU burst
○ And then, use these lengths to schedule the process
with the shortest time
● SJF is optimal – gives minimum average waiting time for a
given set of processes if all arrive simultaneously
○ The difficulty is knowing the length of the next CPU
request. One possibility:
■ Could ask the user
Example of SJF

Process Arrival Time Burst Time
P1 0.0 6
P2 2.0 8
P3 4.0 7
P4 5.0 3

● SJF scheduling chart (treating all four processes as available at time 0):

| P4 | P1 | P3 | P2 |
0 3 9 16 24

● Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
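When all processes are available at time 0 (which is what the waiting times 3, 16, 9, 0 assume), non-preemptive SJF reduces to "sort by burst length and run in order". A minimal sketch (function name ours):

```python
def sjf_waiting_times(bursts):
    """Non-preemptive SJF waiting times, all processes arriving at time 0.

    bursts: dict mapping process name -> next CPU burst length.
    """
    waits, clock = {}, 0
    for name in sorted(bursts, key=bursts.get):  # shortest burst first
        waits[name] = clock
        clock += bursts[name]
    return waits

# Run order P4, P1, P3, P2 -> waits P4=0, P1=3, P3=9, P2=16; average 7
print(sjf_waiting_times({"P1": 6, "P2": 8, "P3": 7, "P4": 3}))
```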


SJF is not always optimal

● Is SJF optimal if not all the processes are available simultaneously?
● What if both arrived simultaneously?

| P2 | P1 |
0 2 12

● P2 waiting time: 0 - 0 = 0
● P1 waiting time: 2 - 0 = 2
● The average waiting time: (2 + 0)/2 = 1
3.Shortest-Remaining-Time-First

● The SJF algorithm can be either


○ preemptive or
○ nonpreemptive.
● A non-preemptive SJF algorithm allows the currently running
process to finish its CPU burst.
○ The examples of SJF we saw earlier are non-preemptive
● A preemptive SJF algorithm preempts the currently executing
process if the newly arrived process's burst is shorter than what is left
of the currently executing process.
● Preemptive SJF scheduling is sometimes called shortest-
remaining-time-first scheduling
Example of Shortest-remaining-time-first

● Now we add the concepts of varying arrival times and preemption to the analysis
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
● Preemptive SJF Gantt Chart

| P1 | P2 | P4 | P1 | P3 |
0 1 5 10 17 26

● WT for P1: (0-0) + (10-1) = 9; WT for P2: 1 - 1 = 0
● WT for P3: 17 - 2 = 15; WT for P4: 5 - 3 = 2
● Therefore, the AWT: [(10-1) + (1-1) + (17-2) + (5-3)]/4 = 26/4 = 6.5


● Exercise:
○ What is the AWT for nonpreemptive SJF?
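The preemptive schedule above can be reproduced with a unit-time simulation: at every tick, run the ready process with the least remaining time. A minimal sketch (names ours), assuming integer arrival and burst times:

```python
def srtf_waiting_times(procs):
    """Shortest-remaining-time-first. procs: name -> (arrival, burst)."""
    left = {n: b for n, (a, b) in procs.items()}
    finish, clock = {}, 0
    while left:
        ready = [n for n in left if procs[n][0] <= clock]
        if not ready:             # CPU idle until the next arrival
            clock += 1
            continue
        n = min(ready, key=lambda x: left[x])  # least remaining time wins
        left[n] -= 1
        clock += 1
        if left[n] == 0:
            finish[n] = clock
            del left[n]
    # waiting time = turnaround time - burst time
    return {n: finish[n] - a - b for n, (a, b) in procs.items()}

procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}
print(srtf_waiting_times(procs))  # P1: 9, P2: 0, P3: 15, P4: 2 -> AWT 6.5
```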
4. Priority Scheduling

● A priority number (integer) is associated with each process
● The CPU is allocated to the process with the highest priority
(smallest integer ≡ highest priority)
○ Preemptive
○ Nonpreemptive

● Problem ≡ Starvation – low-priority processes may never
execute
● Solution ≡ Aging – as time progresses, increase the priority of the
process
Example of Priority Scheduling

Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

● Priority scheduling Gantt Chart

| P2 | P5 | P1 | P3 | P4 |
0 1 6 16 18 19

● Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2
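With all processes arriving at time 0, nonpreemptive priority scheduling follows the same "sort and run" pattern as SJF, keyed on priority instead of burst length. A minimal sketch (function name ours):

```python
def priority_waiting_times(procs):
    """Nonpreemptive priority scheduling, all processes arriving at time 0.

    procs: name -> (burst, priority); smaller number = higher priority.
    """
    waits, clock = {}, 0
    for name in sorted(procs, key=lambda n: procs[n][1]):
        waits[name] = clock
        clock += procs[name][0]
    return waits

procs = {"P1": (10, 3), "P2": (1, 1), "P3": (2, 4), "P4": (1, 5), "P5": (5, 2)}
# Run order P2, P5, P1, P3, P4 -> waits 0, 1, 6, 16, 18; average 8.2
print(priority_waiting_times(procs))
```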
5. Round Robin (RR)
● Each process gets a small unit of CPU time (time quantum q), usually
10-100 milliseconds.
● After this time has elapsed, the process is preempted and added to the
end of the ready queue.
● If there are n processes in the ready queue and the time quantum is q,
then each process gets 1/n of the CPU time in chunks of at most q time
units at once.
○ No process waits more than (n-1)q time units.
● Timer interrupts every quantum to schedule next process
● Performance
○ q large ⇒ behaves like FCFS
○ q small ⇒ context-switch overhead dominates; q must be large with
respect to the context-switch time, otherwise overhead is too high
Time Quantum and Context Switch Time
Example of RR with Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3
● The Gantt chart is:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0 4 7 10 14 18 22 26 30

● Typically, higher average turnaround time than SJF, but better
response time
● q should be large compared to context switch time
● q usually 10ms to 100ms, context switch < 10 usec
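The RR schedule above is easy to reproduce with a FIFO queue: take a process from the head, run it for at most q, and requeue it if it isn't finished. A minimal sketch (names ours), assuming all processes arrive at time 0 and ignoring context-switch cost:

```python
from collections import deque

def rr_waiting_times(bursts, q):
    """Round Robin waiting times. bursts: list of (name, burst); q: quantum."""
    left = dict(bursts)
    queue = deque(name for name, _ in bursts)
    finish, clock = {}, 0
    while queue:
        name = queue.popleft()
        run = min(q, left[name])   # run for one quantum or until done
        clock += run
        left[name] -= run
        if left[name] > 0:
            queue.append(name)     # back to the tail of the ready queue
        else:
            finish[name] = clock
    # waiting time = finish time - burst time (all arrive at 0)
    return {name: finish[name] - burst for name, burst in bursts}

# P1 waits 6, P2 waits 4, P3 waits 7
print(rr_waiting_times([("P1", 24), ("P2", 3), ("P3", 3)], q=4))
```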
6. Multilevel Queue
● Ready queue is partitioned into separate queues, eg:
○ foreground (interactive), background (batch)
● Processes remain permanently in a given queue
● Each queue has its own scheduling algorithm:
○ foreground – RR
○ background – FCFS
● Scheduling must be done between the queues:
○ Fixed priority preemptive scheduling; (i.e., serve all from foreground
then from background). Possibility of starvation.
○ Time slice – each queue gets a certain amount of CPU time which it can
schedule amongst its processes;
■ Like, 80% to foreground in RR
■ 20% to background in FCFS
Multilevel Queue Scheduling
7. Multilevel Feedback Queue

● A process can move between the various queues
○ Aging can be implemented this way
● Multilevel-feedback-queue scheduler defined by the following
parameters:
○ number of queues
○ scheduling algorithms for each queue
○ method used to determine when to upgrade or demote a process
○ method used to determine which queue a process will enter when
that process needs service
Example of Multilevel Feedback Queue

● Three queues:
○ Q0 – RR with time quantum 8 milliseconds
○ Q1 – RR time quantum 16 milliseconds
○ Q2 – FCFS

● Scheduling
○ A new job enters queue Q0 which is served FCFS
■ When it gains CPU, job receives 8 milliseconds
■ If it does not finish in 8 milliseconds, job is moved to queue Q1
○ At Q1 job is again served FCFS and receives 16 additional milliseconds
■ If it still does not complete, it is preempted and moved to queue Q2
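Considering a single job in isolation (no competing processes), the three-queue example can be traced directly: the job gets up to 8 ms in Q0, up to 16 more in Q1, and any remainder runs to completion in Q2. A hypothetical sketch of that demotion path (function name ours):

```python
def mlfq_trace(burst):
    """Queues visited by one job in the Q0(q=8)/Q1(q=16)/Q2(FCFS) example,
    assuming no other jobs compete for the CPU."""
    trace, left = [], burst
    for queue, quantum in (("Q0", 8), ("Q1", 16)):
        run = min(quantum, left)
        trace.append((queue, run))
        left -= run
        if left == 0:
            return trace          # finished before being demoted further
    trace.append(("Q2", left))    # remainder runs to completion in Q2
    return trace

print(mlfq_trace(5))    # a 5 ms job finishes in Q0: [("Q0", 5)]
print(mlfq_trace(30))   # a 30 ms job: [("Q0", 8), ("Q1", 16), ("Q2", 6)]
```

Short interactive jobs finish in Q0 with high priority, while long CPU-bound jobs sink to Q2, which is how this scheme approximates SJF without knowing burst lengths in advance.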
MULTIPROCESSOR SCHEDULING
Affinity and Load balancing
Multiple-Processor Scheduling

● CPU scheduling becomes more complex when multiple CPUs are
involved
● Approaches to multiple-processor scheduling
○ Asymmetric multiprocessing, in which one processor is the master,
controlling all activities and running all kernel code, while the others
run only user code.
■ This approach is relatively simple, as there is no need to share critical
system data.
○ Symmetric multiprocessing (SMP), where each processor schedules
its own jobs, either from a common ready queue or from separate
ready queues for each processor.
■ Currently the most common approach (Windows XP, Solaris, Linux, etc.)
Processor affinity
● Processors contain cache memory, which speeds up repeated accesses
to the same memory locations.
● If a process were to switch from one processor to another each time it
got a time slice, the data in the cache (for that process) would have to be
invalidated on the first processor and
○ the cache for the second processor would have to be repopulated from main
memory,
○ thereby obviating the benefit of the cache.
● Therefore SMP systems attempt to keep each process on the same
processor. This is called processor affinity.
○ Soft affinity – the system attempts to keep processes on the same
processor but makes no guarantees.
○ Hard affinity – a process specifies that it is not to be moved between
processors, e.g., in Linux and some other OSes.
NUMA and CPU Scheduling
● Main memory architecture can also affect process affinity: in NUMA
(Non-Uniform Memory Access) systems, particular CPUs have faster access to
memory on the same chip or board than to memory located elsewhere.
● If a process has an affinity for a particular CPU, it should preferentially
be assigned memory storage in "local" fast-access areas.
● Note that memory-placement algorithms can also consider affinity.
Multiple-Processor Scheduling – Load Balancing

● Obviously an important goal in a multiprocessor system is to
balance the load between processors,
○ so that one processor won't be sitting idle while another is
overloaded.
● Load balancing attempts to keep the workload evenly distributed
○ Systems using a common ready queue are naturally self-balancing, and
do not need any special handling.
○ However, systems maintaining separate ready queues for each
processor need load-balancing mechanisms.
Multiple-Processor Scheduling – Load Balancing

● Load balancing can be achieved through:
○ Push migration – a periodic task checks the load on each processor and, if
an imbalance is found, pushes tasks from overloaded CPUs to other CPUs
○ Pull migration – an idle processor pulls a waiting task from a busy processor
○ Push and pull migration are not mutually exclusive.
● Note: load balancing works against affinity
○ Load balancing – moving processes from CPU to CPU to achieve balance
○ Affinity – keeping a process running on the same processor
○ If not carefully managed, the savings gained by balancing the system
can be lost in rebuilding caches.
● One option is to only allow migration when the imbalance surpasses a given
threshold.
REAL-TIME SCHEDULING
Rate Monotonic Scheduling
Earliest Deadline Scheduling
Real-time scheduling algorithms

● Real-time scheduling algorithms schedule tasks in real-time
systems, where tasks have strict timing requirements and
deadlines.
● The primary objective of these algorithms is to ensure that tasks
meet their deadlines while making efficient use of resources.
● Two of the most common RT scheduling algorithms are:
○ Rate Monotonic: uses preemptive scheduling with static
priorities (the priority of a task never changes); tasks with shorter
periods are given higher priorities.
○ Earliest Deadline First: uses preemptive scheduling with dynamic
priorities – tasks with earlier deadlines have higher priorities.
■ EDF dynamically updates priorities as tasks arrive or complete.
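EDF's rule, always run the ready task with the earliest absolute deadline, can be shown with a tiny unit-time simulation of periodic tasks whose deadlines equal their periods. A minimal sketch (the task set and names are ours, for illustration only):

```python
def edf_run_sequence(tasks, horizon):
    """Preemptive EDF for periodic tasks with deadline = period.

    tasks: name -> (period, burst). Returns the task run at each time unit
    (None when the CPU is idle).
    """
    state = {n: {"release": 0, "deadline": 0, "left": 0} for n in tasks}
    sequence = []
    for t in range(horizon):
        for n, (period, burst) in tasks.items():
            if t == state[n]["release"]:        # a new job of this task arrives
                state[n]["left"] = burst
                state[n]["deadline"] = t + period
                state[n]["release"] = t + period
        ready = [n for n in state if state[n]["left"] > 0]
        if not ready:
            sequence.append(None)               # CPU idle this tick
            continue
        n = min(ready, key=lambda x: state[x]["deadline"])  # earliest deadline
        state[n]["left"] -= 1
        sequence.append(n)
    return sequence

# Task A: period 4, burst 1; task B: period 6, burst 2
seq = edf_run_sequence({"A": (4, 1), "B": (6, 2)}, horizon=12)
print(seq)  # at t=0, A runs first: its deadline (4) beats B's (6)
```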
Scheduling Summary
● Scheduler (dispatcher) is part of the OS that gets invoked when a context switch
needs to happen
● Scheduling algorithm determines which process runs, where processes are placed
on queues
● Many potential goals of scheduling algorithms
○ Utilization, throughput, wait time, response time, etc.
● Various algorithms to meet these goals
○ FCFS/FIFO, SJF, Priority, RR
● Can combine algorithms
○ Multilevel queues, multilevel feedback queues
● Issues in multiprocessor scheduling
○ Affinity, load balancing
● Real time scheduling
○ RMS, EDFS
