Scheduling (Computing) - Wikipedia


Scheduling (computing)

In computing, scheduling is the action of assigning resources to perform tasks. The resources may be processors, network links or expansion cards. The tasks may be threads, processes or data flows.

The scheduling activity is carried out by a process called a scheduler. Schedulers are often designed so as to keep all computer resources busy (as in load balancing), allow multiple users to share system resources effectively, or to achieve a target quality-of-service.

Scheduling is fundamental to computation itself, and an intrinsic part of the execution model of a computer system; the concept of scheduling makes it possible to have computer multitasking with a single central processing unit (CPU).

Goals
A scheduler may aim at one or more goals, for example:

maximizing throughput (the total amount of work completed per time unit);
minimizing wait time (time from work becoming ready until the first point it begins execution);
minimizing latency or response time (time from work becoming ready until it is finished in the case of batch activity,[1][2][3] or until the system responds and hands the first output to the user in the case of interactive activity);[4]
maximizing fairness (equal CPU time to each process, or more generally appropriate times according to the priority and workload of each process).
In practice, these goals often conflict (e.g. throughput versus latency), thus a scheduler will implement a suitable compromise. Preference is measured by any one of the concerns mentioned above, depending upon the user's needs and objectives.
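These goals can be made concrete with a small calculation. The sketch below is an illustration, not from the article; the job names, arrival times, and burst times are hypothetical. It computes waiting time, turnaround time, and throughput for jobs run back-to-back in a fixed order:

```python
# Compute common scheduling metrics for jobs run to completion in list order.
# Each job is (name, arrival_time, burst_time); all values are hypothetical.
jobs = [("A", 0, 5), ("B", 1, 3), ("C", 2, 1)]

t = 0
metrics = {}
for name, arrival, burst in jobs:
    start = max(t, arrival)          # a job cannot start before it arrives
    finish = start + burst
    metrics[name] = {
        "waiting": start - arrival,      # ready until first execution
        "turnaround": finish - arrival,  # ready until completion
    }
    t = finish

throughput = len(jobs) / t  # jobs completed per unit time
print(metrics, throughput)
```

Changing the order of `jobs` changes the per-job waiting and turnaround times, which is exactly the trade-off the scheduling disciplines below navigate.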

In real-time environments, such as embedded systems for automatic control in industry (for example robotics), the scheduler also must ensure that processes can meet deadlines; this is crucial for keeping the system stable. Scheduled tasks can also be distributed to remote devices across a network and managed through an administrative back end.

Types of operating system schedulers

The scheduler is an operating system module that selects the next jobs to be admitted into the system and the next process to run. Operating systems may feature up to three distinct scheduler types: a long-term scheduler (also known as an admission scheduler or high-level scheduler), a mid-term or medium-term scheduler, and a short-term scheduler. The names suggest the relative frequency with which their functions are performed.

Process scheduler

The process scheduler is a part of the operating system that decides which process runs at a certain point in time. It usually has the ability to pause a running process, move it to the back of the running queue and start a new process; such a scheduler is known as a preemptive scheduler, otherwise it is a cooperative scheduler.[5]

We distinguish between long-term scheduling, medium-term scheduling, and short-term scheduling based on how often decisions must be made.[6]

Long-term scheduling

The long-term scheduler, or admission scheduler, decides which jobs or processes are to be admitted to the ready queue (in main memory); that is, when an attempt is made to execute a program, its admission to the set of currently executing processes is either authorized or delayed by the long-term scheduler. Thus, this scheduler dictates what processes are to run on a system, the degree of concurrency to be supported at any one time – whether many or few processes are to be executed concurrently, and how the split between I/O-intensive and CPU-intensive processes is to be handled. The long-term scheduler is responsible for controlling the degree of multiprogramming.

In general, most processes can be described as either I/O-bound or CPU-bound. An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations. A CPU-bound process, in contrast, generates I/O requests infrequently, using more of its time doing computations. It is important that a long-term scheduler selects a good process mix of I/O-bound and CPU-bound processes. If all processes are I/O-bound, the ready queue will almost always be empty, and the short-term scheduler will have little to do. On the other hand, if all processes are CPU-bound, the I/O waiting queue will almost always be empty, devices will go unused, and again the system will be unbalanced. The system with the best performance will thus have a combination of CPU-bound and I/O-bound processes. In modern operating systems, this is used to make sure that real-time processes get enough CPU time to finish their tasks.[7]

Long-term scheduling is also important in large-scale systems such as batch processing systems, computer clusters, supercomputers, and render farms. For example, in concurrent systems, coscheduling of interacting processes is often required to prevent them from blocking due to waiting on each other. In these cases, special-purpose job scheduler software is typically used to assist these functions, in addition to any underlying admission scheduling support in the operating system.

Some operating systems only allow new tasks to be added if it is sure all real-time deadlines can still be met. The specific heuristic algorithm used by an operating system to accept or reject new tasks is the admission control mechanism.[8]

Medium-term scheduling

The medium-term scheduler temporarily removes processes from main memory and places them in secondary memory (such as a hard disk drive) or vice versa, which is commonly referred to as swapping out or swapping in (also incorrectly as paging out or paging in). The medium-term scheduler may decide to swap out a process that has not been active for some time, a process that has a low priority, a process that is page faulting frequently, or a process that is taking up a large amount of memory, in order to free up main memory for other processes, swapping the process back in later when more memory is available, or when the process has been unblocked and is no longer waiting for a resource.

In many systems today (those that support mapping virtual address space to secondary storage other than the swap file), the medium-term scheduler may actually perform the role of the long-term scheduler, by treating binaries as swapped-out processes upon their execution. In this way, when a segment of the binary is required it can be swapped in on demand, or lazy loaded, also called demand paging.

Short-term scheduling

The short-term scheduler (also known as the CPU scheduler) decides which of the ready, in-memory processes is to be executed (allocated a CPU) after a clock interrupt, an I/O interrupt, an operating system call or another form of signal. Thus the short-term scheduler makes scheduling decisions much more frequently than the long-term or mid-term schedulers – a scheduling decision will at a minimum have to be made after every time slice, and these are very short. This scheduler can be preemptive, implying that it is capable of forcibly removing processes from a CPU when it decides to allocate that CPU to another process, or non-preemptive (also known as voluntary or co-operative), in which case the scheduler is unable to force processes off the CPU.

A preemptive scheduler relies upon a programmable interval timer which invokes an interrupt handler that runs in kernel mode and implements the scheduling function.

Dispatcher

Another component that is involved in the CPU-scheduling function is the dispatcher, which is the module that gives control of the CPU to the process selected by the short-term scheduler. It receives control in kernel mode as the result of an interrupt or system call. The functions of a dispatcher involve the following:

Context switches, in which the dispatcher saves the state (also known as context) of the process or thread that was previously running; the dispatcher then loads the initial or previously saved state of the new process.
Switching to user mode.
Jumping to the proper location in the user program to restart that program, as indicated by its new state.
The dispatcher should be as fast as possible since it is invoked during every process switch. During the context switches, the processor is virtually idle for a fraction of time, thus unnecessary context switches should be avoided. The time it takes for the dispatcher to stop one process and start another is known as the dispatch latency.[7]: 155

Scheduling disciplines

A scheduling discipline (also called scheduling policy or scheduling algorithm) is an algorithm used for distributing resources among parties which simultaneously and asynchronously request them. Scheduling disciplines are used in routers (to handle packet traffic) as well as in operating systems (to share CPU time among both threads and processes), disk drives (I/O scheduling), printers (print spooler), most embedded systems, etc.

The main purposes of scheduling algorithms are to minimize resource starvation and to ensure fairness amongst the parties utilizing the resources. Scheduling deals with the problem of deciding which of the outstanding requests is to be allocated resources. There are many different scheduling algorithms. In this section, we introduce several of them.

In packet-switched computer networks and other statistical multiplexing, the notion of a scheduling algorithm is used as an alternative to first-come first-served queuing of data packets.

The simplest best-effort scheduling algorithms are round-robin, fair queuing (a max-min fair scheduling algorithm), proportional-fair scheduling and maximum throughput. If differentiated or guaranteed quality of service is offered, as opposed to best-effort communication, weighted fair queuing may be utilized.

In advanced packet radio wireless networks such as the HSDPA (High-Speed Downlink Packet Access) 3.5G cellular system, channel-dependent scheduling may be used to take advantage of channel state information. If the channel conditions are favourable, the throughput and system spectral efficiency may be increased. In even more advanced systems such as LTE, the scheduling is combined with channel-dependent packet-by-packet dynamic channel allocation, or by assigning OFDMA multi-carriers or other frequency-domain equalization components to the users that best can utilize them.[9]

First come, first served

A sample thread pool (green boxes) with a queue (FIFO) of waiting tasks (blue) and a queue of completed tasks (yellow)

First in, first out (FIFO), also known as first come, first served (FCFS), is the simplest scheduling algorithm. FIFO simply queues processes in the order that they arrive in the ready queue. This is commonly used for a task queue, for example as illustrated in this section.
Since context switches only occur upon process termination, and no reorganization of the process queue is required, scheduling overhead is minimal.
Throughput can be low, because long processes can hold the CPU, causing the short processes to wait for a long time (known as the convoy effect).
No starvation, because each process gets a chance to be executed after a definite time.
Turnaround time, waiting time and response time depend on the order of arrival and can be high for the same reasons as above.
No prioritization occurs, thus this system has trouble meeting process deadlines.
The lack of prioritization means that as long as every process eventually completes, there is no starvation. In an environment where some processes might not complete, there can be starvation.
It is based on queuing.
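The convoy effect mentioned above is easy to demonstrate. In this sketch (the burst lengths are hypothetical), one long job queued in front of short ones inflates the average waiting time compared with the reverse order:

```python
def fifo_waits(bursts):
    """Waiting time of each job when all arrive at t=0 and run in list order."""
    waits, t = [], 0
    for b in bursts:
        waits.append(t)  # each job waits for everything queued before it
        t += b
    return waits

long_first = fifo_waits([100, 1, 1])   # waits: [0, 100, 101]
short_first = fifo_waits([1, 1, 100])  # waits: [0, 1, 2]
print(sum(long_first) / 3, sum(short_first) / 3)  # average wait: 67.0 vs 1.0
```

The total work is identical in both cases; only the order changes, yet the average waiting time differs by nearly two orders of magnitude.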

Priority scheduling

Earliest deadline first (EDF) or least time to go is a dynamic scheduling algorithm used in real-time operating systems to place processes in a priority queue. Whenever a scheduling event occurs (a task finishes, a new task is released, etc.), the queue will be searched for the process closest to its deadline, which will be the next to be scheduled for execution.
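A minimal EDF sketch: keeping the ready tasks in a min-heap keyed by absolute deadline means the task closest to its deadline is always at the root, so each scheduling event selects it directly. The task names and deadlines here are hypothetical:

```python
import heapq

# (absolute_deadline, task_name) pairs; the heap orders by deadline.
ready = []
for deadline, task in [(30, "log"), (10, "sensor"), (20, "control")]:
    heapq.heappush(ready, (deadline, task))

# On each scheduling event, the task with the earliest deadline runs next.
order = [heapq.heappop(ready)[1] for _ in range(len(ready))]
print(order)  # ['sensor', 'control', 'log']
```

A real EDF implementation would also reinsert periodic tasks with their next deadline after each release, which this sketch omits.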
Shortest remaining time first

Similar to shortest job first (SJF). With this strategy the scheduler arranges processes with the least estimated processing time remaining to be next in the queue. This requires advance knowledge or estimations about the time required for a process to complete.

If a shorter process arrives during another process' execution, the currently running process is interrupted (known as preemption), dividing that process into two separate computing blocks. This creates excess overhead through additional context switching. The scheduler must also place each incoming process into a specific place in the queue, creating additional overhead.
This algorithm is designed for maximum throughput in most scenarios.
Waiting time and response time increase as the process's computational requirements increase. Since turnaround time is based on waiting time plus processing time, longer processes are significantly affected by this. Overall waiting time is smaller than in FIFO, however, since no process has to wait for the termination of the longest process.
No particular attention is given to deadlines; the programmer can only attempt to make processes with deadlines as short as possible.
Starvation is possible, especially in a busy system with many small processes being run.
To use this policy we should have at least two processes of different priority.
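The preemption behaviour described above can be simulated in a few lines. This sketch (arrival and burst values are hypothetical) advances time one unit at a time and always runs the arrived process with the least remaining work, preempting as needed:

```python
def srtf(procs):
    """procs: dict name -> (arrival, burst). Returns completion times.
    Each tick, run the arrived process with the least remaining time."""
    remaining = {n: b for n, (a, b) in procs.items()}
    done, t = {}, 0
    while remaining:
        ready = [n for n in remaining if procs[n][0] <= t]
        if not ready:           # nothing has arrived yet; idle one tick
            t += 1
            continue
        n = min(ready, key=lambda x: remaining[x])  # shortest remaining time
        remaining[n] -= 1
        t += 1
        if remaining[n] == 0:
            done[n] = t
            del remaining[n]
    return done

# P3 (burst 1) preempts P2, and P1 (the longest job) finishes last.
print(srtf({"P1": (0, 7), "P2": (2, 4), "P3": (4, 1)}))
```

Note how the longest job, P1, is repeatedly pushed back whenever shorter work arrives; with a steady stream of short jobs it would starve.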

Fixed-priority pre-emptive scheduling

The operating system assigns a fixed-priority rank to every process, and the scheduler arranges the processes in the ready queue in order of their priority. Lower-priority processes get interrupted by incoming higher-priority processes.

Overhead is not minimal, nor is it significant.
FPPS has no particular advantage in terms of throughput over FIFO scheduling.
If the number of rankings is limited, it can be characterized as a collection of FIFO queues, one for each priority ranking. Processes in lower-priority queues are selected only when all of the higher-priority queues are empty.
Waiting time and response time depend on the priority of the process. Higher-priority processes have smaller waiting and response times.
Deadlines can be met by giving processes with deadlines a higher priority.
Starvation of lower-priority processes is possible with large numbers of high-priority processes queuing for CPU time.
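The characterization above, a collection of FIFO queues, one per priority ranking, can be sketched directly. The priority levels and task names here are hypothetical:

```python
from collections import deque

# One FIFO queue per fixed priority rank; lower number = higher priority.
queues = {0: deque(), 1: deque(), 2: deque()}

def admit(task, priority):
    queues[priority].append(task)

def pick_next():
    """Scan ranks in order; a lower queue is served only when all higher ones are empty."""
    for prio in sorted(queues):
        if queues[prio]:
            return queues[prio].popleft()
    return None

admit("batch-report", 2)
admit("ui-redraw", 0)
admit("disk-flush", 1)
print(pick_next(), pick_next(), pick_next())  # ui-redraw disk-flush batch-report
```

The starvation risk noted above is visible here: as long as tasks keep arriving at priority 0, `pick_next` never reaches the priority-2 queue.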

Round-robin scheduling

The scheduler assigns a fixed time unit per process, and cycles through them. If a process completes within that time slice it is terminated; otherwise it is rescheduled after all other processes have been given a chance.

RR scheduling involves extensive overhead, especially with a small time unit.
Balanced throughput between FCFS/FIFO and SJF/SRTF: shorter jobs are completed faster than in FIFO and longer processes are completed faster than in SJF.
Good average response time; waiting time is dependent on the number of processes, not the average process length.
Because of high waiting times, deadlines are rarely met in a pure RR system.
Starvation can never occur, since no priority is given. The order of time unit allocation is based upon process arrival time, similar to FIFO.
If the time slice is large it becomes FCFS/FIFO; if it is short then it becomes SJF/SRTF.
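The cycling behaviour can be sketched with a queue. In this illustration the time quantum and burst lengths are hypothetical, and all processes are assumed ready at t=0:

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: dict name -> total CPU time needed. Returns completion order."""
    queue = deque(bursts)       # processes cycle in arrival order
    remaining = dict(bursts)
    order = []
    while queue:
        name = queue.popleft()
        remaining[name] -= quantum
        if remaining[name] <= 0:   # finished within its time slice
            order.append(name)
        else:                      # rescheduled after all other processes
            queue.append(name)
    return order

# The shortest job (C) finishes first even though it arrived last in the queue.
print(round_robin({"A": 3, "B": 5, "C": 2}, quantum=2))
```

With `quantum=10` the same call degenerates to FIFO order, matching the observation above about large time slices.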

Multilevel queue scheduling

This is used for situations in which processes are easily divided into different groups. For example, a common division is made between foreground (interactive) processes and background (batch) processes. These two types of processes have different response-time requirements and so may have different scheduling needs. It is very useful for shared memory problems.

Work-conserving schedulers

A work-conserving scheduler is a scheduler that always tries to keep the scheduled resources busy, if there are submitted jobs ready to be scheduled. In contrast, a non-work conserving scheduler is a scheduler that, in some cases, may leave the scheduled resources idle despite the presence of jobs ready to be scheduled.

Scheduling optimization problems

There are several scheduling problems in which the goal is to decide which job goes to which station at what time, such that the total makespan is minimized:

Job-shop scheduling – there are n jobs and m identical stations. Each job should be executed on a single machine. This is usually regarded as an online problem.
Open-shop scheduling – there are n jobs and m different stations. Each job should spend some time at each station, in a free order.
Flow-shop scheduling – there are n jobs and m different stations. Each job should spend some time at each station, in a pre-determined order.

Manual scheduling

A very common method in embedded systems is to schedule jobs manually. This can for example be done in a time-multiplexed fashion. Sometimes the kernel is divided into three or more parts: manual scheduling, preemptive, and interrupt level. Exact methods for scheduling jobs are often proprietary.

No resource starvation problems
Very high predictability; allows implementation of hard real-time systems
Almost no overhead
May not be optimal for all applications
Effectiveness is completely dependent on the implementation

Choosing a scheduling algorithm

When designing an operating system, a programmer must consider which scheduling algorithm will perform best for the use the system is going to see. There is no universal best scheduling algorithm, and many operating systems use extended versions or combinations of the scheduling algorithms above.

For example, Windows NT/XP/Vista uses a multilevel feedback queue, a combination of fixed-priority preemptive scheduling, round-robin, and first in, first out algorithms. In this system, threads can dynamically increase or decrease in priority depending on whether a thread has already been serviced, or has been waiting extensively. Every priority level is represented by its own queue, with round-robin scheduling among the high-priority threads and FIFO among the lower-priority ones. In this sense, response time is short for most threads, and short but critical system threads get completed very quickly. Since threads can only use one time unit of the round-robin in the highest-priority queue, starvation can be a problem for longer high-priority threads.

Operating system process scheduler implementations

The algorithm used may be as simple as round-robin in which each process is given equal time (for instance 1 ms, usually between 1 ms and 100 ms) in a cycling list. So, process A executes for 1 ms, then process B, then process C, then back to process A.

More advanced algorithms take into account process priority, or the importance of the process. This allows some processes to use more time than other processes. The kernel always uses whatever resources it needs to ensure proper functioning of the system, and so can be said to have infinite priority. In SMP systems, processor affinity is considered to increase overall system performance, even if it may cause a process itself to run more slowly. This generally improves performance by reducing cache thrashing.

OS/360 and successors

IBM OS/360 was available with three different schedulers. The differences were such that the variants were often considered three different operating systems:

The Single Sequential Scheduler option, also known as the Primary Control Program (PCP), provided sequential execution of a single stream of jobs.
The Multiple Sequential Scheduler option, known as Multiprogramming with a Fixed Number of Tasks (MFT), provided execution of multiple concurrent jobs. Execution was governed by a priority which had a default for each stream or could be requested separately for each job. MFT version II added subtasks (threads), which executed at a priority based on that of the parent job. Each job stream defined the maximum amount of memory which could be used by any job in that stream.
The Multiple Priority Schedulers option, or Multiprogramming with a Variable Number of Tasks (MVT), featured subtasks from the start; each job requested the priority and memory it required before execution.
Later virtual storage versions of MVS added a Workload Manager feature to the scheduler, which schedules processor resources according to an elaborate scheme defined by the installation.

Windows

Very early MS-DOS and Microsoft Windows systems were non-multitasking, and as such did not feature a scheduler. Windows 3.1x used a non-preemptive scheduler, meaning that it did not interrupt programs. It relied on the program to end or tell the OS that it didn't need the processor so that it could move on to another process. This is usually called cooperative multitasking. Windows 95 introduced a rudimentary preemptive scheduler; however, for legacy support it opted to let 16-bit applications run without preemption.[10]

Windows NT-based operating systems use a multilevel feedback queue. 32 priority levels are defined, 0 through 31, with priorities 0 through 15 being normal priorities and priorities 16 through 31 being soft real-time priorities, requiring privileges to assign. 0 is reserved for the operating system. User interfaces and APIs work with priority classes for the process and the threads in the process, which are then combined by the system into the absolute priority level.

The kernel may change the priority level of a thread depending on its I/O and CPU usage and whether it is interactive (i.e. accepts and responds to input from humans), raising the priority of interactive and I/O-bound processes and lowering that of CPU-bound processes, to increase the responsiveness of interactive applications.[11] The scheduler was modified in Windows Vista to use the cycle counter register of modern processors to keep track of exactly how many CPU cycles a thread has executed, rather than just using an interval-timer interrupt routine.[12] Vista also uses a priority scheduler for the I/O queue so that disk defragmenters and other such programs do not interfere with foreground operations.[13]

Classic Mac OS and macOS

Mac OS 9 uses cooperative scheduling for threads, where one process controls multiple cooperative threads, and also provides preemptive scheduling for multiprocessing tasks. The kernel schedules multiprocessing tasks using a preemptive scheduling algorithm. All Process Manager processes run within a special multiprocessing task, called the blue task. Those processes are scheduled cooperatively, using a round-robin scheduling algorithm; a process yields control of the processor to another process by explicitly calling a blocking function such as WaitNextEvent. Each process has its own copy of the Thread Manager that schedules that process's threads cooperatively; a thread yields control of the processor to another thread by calling YieldToAnyThread or YieldToThread.[14]

macOS uses a multilevel feedback queue, with four priority bands for threads – normal, system high priority, kernel mode only, and real-time.[15] Threads are scheduled preemptively; macOS also supports cooperatively scheduled threads in its implementation of the Thread Manager in Carbon.[14]

AIX

In AIX Version 4 there are three possible values for thread scheduling policy:

First In, First Out: Once a thread with this policy is scheduled, it runs to completion unless it is blocked, it voluntarily yields control of the CPU, or a higher-priority thread becomes dispatchable. Only fixed-priority threads can have a FIFO scheduling policy.
Round Robin: This is similar to the AIX Version 3 scheduler round-robin scheme based on 10 ms time slices. When a RR thread has control at the end of the time slice, it moves to the tail of the queue of dispatchable threads of its priority. Only fixed-priority threads can have a Round Robin scheduling policy.
OTHER: This policy is defined by POSIX1003.4a as implementation-defined. In AIX Version 4, this policy is defined to be equivalent to RR, except that it applies to threads with non-fixed priority. The recalculation of the running thread's priority value at each clock interrupt means that a thread may lose control because its priority value has risen above that of another dispatchable thread. This is the AIX Version 3 behavior.
Threads are primarily of interest for applications that currently consist of several asynchronous processes. These applications might impose a lighter load on the system if converted to a multithreaded structure.

AIX 5 implements the following scheduling policies: FIFO, round robin, and a fair round robin. The FIFO policy has three different implementations: FIFO, FIFO2, and FIFO3. The round robin policy is named SCHED_RR in AIX, and the fair round robin is called SCHED_OTHER.[16]

Linux

Linux 1.2
Linux 1.2 used a round-robin scheduling policy.[17]

Linux 2.2
Linux 2.2 added scheduling classes and support for symmet ric mult iprocessing (SMP).[17]
A highly simplified structure of the Linux kernel, which includes process schedulers, I/O schedulers, and packet schedulers

Linux 2.4

In Linux 2.4,[17] an O(n) scheduler with a multilevel feedback queue with priority levels ranging from 0 to 140 was used; 0–99 were reserved for real-time tasks and 100–140 were considered nice task levels. For real-time tasks, the time quantum for switching processes was approximately 200 ms, and for nice tasks approximately 10 ms. The scheduler ran through the run queue of all ready processes, letting the highest priority processes go first and run through their time slices, after which they were placed in an expired queue. When the active queue was empty the expired queue became the active queue and vice versa.

However, some enterprise Linux distributions such as SUSE Linux Enterprise Server replaced this scheduler with a backport of the O(1) scheduler (which was maintained by Alan Cox in his Linux 2.4-ac kernel series) to the Linux 2.4 kernel used by the distribution.

Linux 2.6.0 to Linux 2.6.22

In versions 2.6.0 to 2.6.22, the kernel used an O(1) scheduler developed by Ingo Molnár and many other kernel developers during the Linux 2.5 development. For many kernels in that time frame, Con Kolivas developed patch sets which improved interactivity with this scheduler or even replaced it with his own schedulers.

Linux 2.6.23 to Linux 6.5

Con Kolivas' work, most significantly his implementation of fair scheduling named Rotating Staircase Deadline (RSDL), inspired Ingo Molnár to develop the Completely Fair Scheduler (CFS) as a replacement for the earlier O(1) scheduler, crediting Kolivas in his announcement.[18] CFS is the first implementation of a fair queuing process scheduler widely used in a general-purpose operating system.[19]

The CFS uses a well-studied, classic scheduling algorithm called fair queuing originally invented for packet networks. Fair queuing had been previously applied to CPU scheduling under the name stride scheduling. The fair queuing CFS scheduler has a scheduling complexity of O(log N), where N is the number of tasks in the runqueue. Choosing a task can be done in constant time, but reinserting a task after it has run requires O(log N) operations, because the run queue is implemented as a red–black tree.
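The pick-then-reinsert pattern behind that complexity can be illustrated with a min-heap keyed by virtual runtime standing in for CFS's red–black tree: choosing the task with the least virtual runtime is cheap, while reinserting it after a slice costs O(log N). This is a deliberately simplified sketch; the task names are hypothetical and real CFS details such as nice-level weighting and sleeper fairness are omitted:

```python
import heapq

# (virtual_runtime, task_name); the heap plays the role of CFS's red-black tree.
runqueue = [(0.0, "editor"), (0.0, "compiler"), (0.0, "daemon")]
heapq.heapify(runqueue)

schedule = []
for _ in range(6):
    vruntime, task = heapq.heappop(runqueue)  # leftmost task: least vruntime
    schedule.append(task)
    # Charge one time slice of vruntime, then reinsert: O(log N) per decision.
    heapq.heappush(runqueue, (vruntime + 1.0, task))
print(schedule)
```

Because every task is reinserted with its accumulated virtual runtime, CPU time evens out: over the six slices above, each of the three tasks runs exactly twice.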

The Brain Fuck Scheduler, also created by Con Kolivas, is an alternative to the CFS.

Linux 6.6

In 2023, Peter Zijlstra proposed replacing CFS with an earliest eligible virtual deadline first scheduling (EEVDF) process scheduler.[20][21] The aim was to remove the need for CFS latency nice patches.[22]

FreeBSD

FreeBSD uses a multilevel feedback queue with priorities ranging from 0–255. 0–63 are reserved for interrupts, 64–127 for the top half of the kernel, 128–159 for real-time user threads, 160–223 for time-shared user threads, and 224–255 for idle user threads. Also, like Linux, it uses the active queue setup, but it also has an idle queue.[23]

NetBSD

NetBSD uses a multilevel feedback queue with priorities ranging from 0–223. 0–63 are reserved for time-shared threads (default, SCHED_OTHER policy), 64–95 for user threads which entered kernel space, 96–128 for kernel threads, 128–191 for user real-time threads (SCHED_FIFO and SCHED_RR policies), and 192–223 for software interrupts.

Solaris

Solaris uses a multilevel feedback queue with priorities ranging between 0 and 169. Priorities 0–59 are reserved for time-shared threads, 60–99 for system threads, 100–159 for real-time threads, and 160–169 for low priority interrupts. Unlike Linux,[23] when a process is done using its time quantum, it is given a new priority and put back in the queue. Solaris 9 introduced two new scheduling classes, namely the fixed-priority class and the fair share class. The threads with fixed priority have the same priority range as that of the time-sharing class, but their priorities are not dynamically adjusted. The fair share scheduling class uses CPU shares to prioritize threads for scheduling decisions. CPU shares indicate the entitlement to CPU resources. They are allocated to a set of processes, which are collectively known as a project.[7]

Summary

Operating System | Preemption | Algorithm
Amiga OS | Yes | Prioritized round-robin scheduling
FreeBSD | Yes | Multilevel feedback queue
Linux kernel before 2.6.0 | Yes | Multilevel feedback queue
Linux kernel 2.6.0–2.6.23 | Yes | O(1) scheduler
Linux kernel after 2.6.23 | Yes | Completely Fair Scheduler
Linux kernel 6.6 and later | Yes | Earliest eligible virtual deadline first scheduling (EEVDF)
classic Mac OS pre-9 | None | Cooperative scheduler
Mac OS 9 | Some | Preemptive scheduler for MP tasks, and cooperative for processes and threads
macOS | Yes | Multilevel feedback queue
NetBSD | Yes | Multilevel feedback queue
Solaris | Yes | Multilevel feedback queue
Windows 3.1x | None | Cooperative scheduler
Windows 95, 98, Me | Half | Preemptive scheduler for 32-bit processes, and cooperative for 16-bit processes
Windows NT (including 2000, XP, Vista, 7, and Server) | Yes | Multilevel feedback queue

See also

Activity selection problem


Aging (scheduling)
Atropos scheduler
Automated planning and scheduling
Cyclic executive
Dynamic priority scheduling
Foreground-background
Interruptible operating system
Least slack time scheduling
Lottery scheduling
Priority inversion
Process states
Queuing theory
Rate-monotonic scheduling
Resource-Task Network
Scheduling (production processes)
Stochastic scheduling
Time-utility function

Notes

1. Liu, C. L.; Layland, James W. (January 1973). "Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment". Journal of the ACM. ACM. 20 (1): 46–61. doi:10.1145/321738.321743. S2CID 207669821. "We define the response time of a request for a certain task to be the time span between the request and the end of the response to that request."
2. Kleinrock, Leonard (1976). Queueing Systems, Vol. 2: Computer Applications (1 ed.). Wiley-Interscience. p. 171 (https://archive.org/details/queueingsystems00klei/page/171). ISBN 047149111X. "For a customer requiring x sec of service, his response time will equal his service time x plus his waiting time."
3. Feitelson, Dror G. (2015). Workload Modeling for Computer Systems Performance Evaluation (http://www.cs.huji.ac.il/~feit/wlmod/). Cambridge University Press. Section 8.4 (Page 422) in Version 1.03 of the freely available manuscript. ISBN 9781107078239. Retrieved 2015-10-17. "if we denote the time that a job waits in the queue by tw, and the time it actually runs by tr, then the response time is r = tw + tr."
4. Silberschatz, Abraham; Galvin, Peter Baer; Gagne, Greg (2012). Operating System Concepts (9 ed.). Wiley Publishing. p. 187. ISBN 978-0470128725. "In an interactive system, turnaround time may not be the best criterion. Often, a process can produce some output fairly early and can continue computing new results while previous results are being output to the user. Thus, another measure is the time from the submission of a request until the first response is produced. This measure, called response time, is the time it takes to start responding, not the time it takes to output the response."
5. Krzyzanowski, Paul (2014-02-19). "Process Scheduling: Who gets to run next?" (https://www.cs.rutgers.edu/~pxk/416/notes/07-scheduling.html). cs.rutgers.edu. Retrieved 2023-06-19.
6. Finkel, Raphael (1988). "Chapter 2: Time Management". An Operating Systems Vade Mecum (https://www.yumpu.com/en/docu
ment/read/32199214/an-operating-system
s-vade-mecum) . Prentice Hall. p. 27.
7. Abraham Silberschatz; Peter Baer Galvin;
Greg Gagne (2013). Operating System
Concepts. Vol. 9. John Wiley & Sons, Inc.
ISBN 978-1-118-06333-0.
8. Robert Kroeger (2004). "Admission Control
for Independently-authored Realtime
Applications" (https://citeseerx.ist.psu.edu/
viewdoc/download?doi=10.1.1.71.3977&re
p=rep1&type=pdf) . UWSpace.
http://hdl.handle.net/10012/1170 .
Section "2.6 Admission Control". p. 33.
9. Guowang Miao; Jens Zander; Ki Won Sung;
Ben Slimane (2016). Fundamentals of
Mobile Data Networks. Cambridge
University Press. ISBN 978-1107143210.
10. Early Windows (https://web.archive.org/we
b/*/www.jgcampbell.com/caos/html/node
13.html) at the Wayback Machine (archive
index)
11. Sriram Krishnan. "A Tale of Two Schedulers
Windows NT and Windows CE" (https://we
b.archive.org/web/20120722015555/htt
p://sriramk.com/schedulers.html) .
Archived from the original (http://sriramk.c
om/schedulers.html) on July 22, 2012.
12. "Windows Administration: Inside the
Windows Vista Kernel: Part 1" (https://tech
net.microsoft.com/en-us/magazine/cc162
494.aspx) . Technet.microsoft.com. 2016-
11-14. Retrieved 2016-12-09.
13. "Archived copy" (https://web.archive.org/w
eb/20080219174631/http://blog.gabefros
t.com/?p=25) . blog.gabefrost.com.
Archived from the original (http://blog.gabe
frost.com/?p=25) on 19 February 2008.
Retrieved 15 January 2022.
14. "Technical Note TN2028: Threading
Architectures" (https://developer.apple.co
m/library/archive/technotes/tn/tn2028.ht
ml) . developer.apple.com. Retrieved
2019-01-15.
15. "Mach Scheduling and Thread Interfaces"
(https://developer.apple.com/library/archiv
e/documentation/Darwin/Conceptual/Kern
elProgramming/scheduler/scheduler.html)
. developer.apple.com. Retrieved
2019-01-15.
16. [1] (http://www.ibm.com/developerworks/a
ix/library/au-aix5_cpu/index.html#N100F
6) Archived (https://web.archive.org/web/
20110811094049/http://www.ibm.com/de
veloperworks/aix/library/au-aix5_cpu/inde
x.html) 2011-08-11 at the Wayback
Machine
17. Jones, M. (2018-09-18) [first published on
2009-12-14]. "Inside the Linux 2.6
Completely Fair Scheduler" (https://develop
er.ibm.com/tutorials/l-completely-fair-sch
eduler/) . developer.ibm.com. Retrieved
2024-02-07.
18. Molnár, Ingo (2007-04-13). "[patch] Modular
Scheduler Core and Completely Fair
Scheduler [CFS]" (https://lwn.net/Articles/2
30501/) . linux-kernel (Mailing list).
19. Tong Li; Dan Baumberger; Scott Hahn.
"Efficient and Scalable Multiprocessor Fair
Scheduling Using Distributed Weighted
Round-Robin" (http://happyli.org/tongli/pap
ers/dwrr.pdf) (PDF). Happyli.org. Retrieved
2016-12-09.
20. "EEVDF Scheduler May Be Ready For
Landing With Linux 6.6" (https://www.phoro
nix.com/news/Linux-6.6-EEVDF-Likely) .
Phoronix. Retrieved 2023-08-31.
21. "EEVDF Scheduler Merged For Linux 6.6,
Intel Hybrid Cluster Scheduling Re-
Introduced" (https://www.phoronix.com/ne
ws/Linux-6.6-EEVDF-Merged) .
www.phoronix.com. Retrieved 2024-02-07.
22. "An EEVDF CPU scheduler for Linux
[LWN.net]" (https://lwn.net/Articles/92537
1/) . LWN.net. Retrieved 2023-08-31.
23. "Comparison of Solaris, Linux, and FreeBSD
Kernels" (https://web.archive.org/web/200
80807124435/http://cn.opensolaris.org/fil
es/solaris_linux_bsd_cmp.pdf) (PDF).
Archived from the original (http://cn.opens
olaris.org/files/solaris_linux_bsd_cmp.pd
f) (PDF) on August 7, 2008.

References

Błażewicz, Jacek; Ecker, K.H.; Pesch, E.;
Schmidt, G.; Weglarz, J. (2001). Scheduling
computer and manufacturing processes
(2 ed.). Berlin [u.a.]: Springer. ISBN 3-540-
41931-4.
Stallings, William (2004). Operating Systems
Internals and Design Principles (https://archiv
e.org/details/operatingsystems00stal)
(fourth ed.). Prentice Hall. ISBN 0-13-031999-
6.
Information on the Linux 2.6 O(1)-scheduler
(https://github.com/bdaehlie/linux-cpu-sched
uler-docs/)

Further reading

Operating Systems: Three Easy Pieces
(https://pages.cs.wisc.edu/~remzi/OS
TEP/) by Remzi H. Arpaci-Dusseau and
Andrea C. Arpaci-Dusseau. Arpaci-
Dusseau Books, 2014. Relevant
chapters: Scheduling: Introduction (htt
p://pages.cs.wisc.edu/~remzi/OSTEP/c
pu-sched.pdf) Multi-level Feedback
Queue (http://pages.cs.wisc.edu/~remz
i/OSTEP/cpu-sched-mlfq.pdf)
Proportional-share Scheduling (http://pa
ges.cs.wisc.edu/~remzi/OSTEP/cpu-sc
hed-lottery.pdf) Multiprocessor
Scheduling (http://pages.cs.wisc.edu/~r
emzi/OSTEP/cpu-sched-multi.pdf)
Brief discussion of Job Scheduling
algorithms (http://www.cs.sunysb.edu/~
algorith/files/scheduling.shtml)
Understanding the Linux Kernel: Chapter
10 Process Scheduling (https://web.arch
ive.org/web/20060613130106/http://ore
illy.com/catalog/linuxkernel/chapter/ch
10.html)
Kerneltrap: Linux kernel scheduler
articles (http://kerneltrap.org/scheduler)
AIX CPU monitoring and tuning (https://
web.archive.org/web/20110811094049/
http://www.ibm.com/developerworks/ai
x/library/au-aix5_cpu/index.html#N100
F6)
Josh Aas' introduction to the Linux
2.6.8.1 CPU scheduler implementation
(https://github.com/bdaehlie/linux-cpu-s
cheduler-docs/)
Peter Brucker, Sigrid Knust. Complexity
results for scheduling problems [2] (htt
p://www.mathematik.uni-osnabrueck.d
e/research/OR/class/)
TORSCHE Scheduling Toolbox for
Matlab (http://rtime.felk.cvut.cz/schedul
ing-toolbox) is a toolbox of scheduling
and graph algorithms.
A survey on cellular networks packet
scheduling (https://ieeexplore.ieee.org/x
pl/articleDetails.jsp?arnumber=622679
5)
Large-scale cluster management at
Google with Borg (https://static.googleu
sercontent.com/media/research.googl
e.com/en/us/pubs/archive/43438.pdf)

Retrieved from
"https://en.wikipedia.org/w/index.php?
title=Scheduling_(computing)&oldid=1204756587"

This page was last edited on 7 February 2024, at 22:49 (UTC).
Content is available under CC BY-SA 4.0 unless otherwise noted.
