Fedora 12 Scheduling Criteria & Algorithms

Fedora 12 is based on the Linux kernel and employs scheduling algorithms such as first come first serve, shortest job first, priority, and round robin to determine which processes are allocated CPU time. The Linux scheduler has evolved over time, from a single run queue with round-robin scheduling in early versions to more advanced algorithms such as the Completely Fair Scheduler in recent kernels, which aims to give every process a fair share of the CPU.



Fedora 12 Specifications
Fedora 12 is an OS based on the Linux kernel, developed by the community-supported Fedora Project and sponsored by Red Hat. Fedora 12 builds on the Fedora Core 6 development environment; Fedora Core 6 uses a 2.6.18-based Linux kernel.

History of Linux kernel


The Linux kernel was initially conceived and created by Linus Torvalds in 1991. He was influenced by the operating system MINIX, written by Andrew S. Tanenbaum.

Linux kernel specification


Linux is a monolithic kernel
Preemptive kernel
Supports symmetric multiprocessing
Written in C using the GCC compiler

Linux kernel versions


The Linux Kernel Archives is the official site for downloading the standard Linux kernel. 3.0.1 is the latest stable kernel release; it was released on 05-08-2011. However, we are talking about the 2.6.37.6 kernel in Fedora 12.

Scheduling

Scheduling means selecting processes one by one from the ready queue and allocating the CPU to them. We are going to talk more about scheduling.

Scheduling decisions may occur when a process:
Switches from running to waiting state
Switches from running to ready state
Switches from waiting to ready state
Terminates

Scheduling Policy

Process Preemption
A process is preempted when the dynamic priority of the currently running process is lower than that of a process waiting in the ready queue. A process may also be preempted when its time quantum expires.

How Long Must a Time Quantum Last?
Quantum duration too short: the system overhead caused by task switching becomes excessively high.
Quantum duration too long: processes no longer appear to be executed concurrently, the response time of interactive applications degrades, and the responsiveness of the system degrades.
The rule of thumb adopted by Linux is: choose a duration as long as possible while keeping good system response time.

What are the Scheduling Criteria?

CPU utilization: keep the CPU as busy as possible
Throughput: number of processes that complete their execution per time unit
Turnaround time: amount of time to execute a particular process
Waiting time: amount of time a process has been waiting in the ready queue
Response time: amount of time from when a request was submitted until the first response is produced, not the output (for a time-sharing environment)

Scheduling Algorithms

First Come First Serve Scheduling
Shortest Job First Scheduling
Priority Scheduling
Round-Robin Scheduling
Multilevel Queue Scheduling
Multilevel Feedback-Queue Scheduling

First Come First Serve Scheduling (FCFS)


Process   Burst time
P1        24
P2        3
P3        3

First Come First Serve Scheduling


The average waiting time under this policy is usually quite long.

Waiting time: P1 = 0; P2 = 24; P3 = 27
Average waiting time = (0 + 24 + 27) / 3 = 17
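To make this calculation concrete, here is a minimal C sketch (an illustration added here, not part of the original slides) that computes the FCFS waiting times for the example above; the process names and burst values simply mirror the example.

#include <stdio.h>

/* FCFS: processes run in arrival order, so each process waits
 * for the sum of the bursts of all processes ahead of it. */
int main(void)
{
    const char *name[]  = {"P1", "P2", "P3"};
    int burst[] = {24, 3, 3};
    int n = 3, wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("Waiting time for %s = %d\n", name[i], wait);
        total_wait += wait;
        wait += burst[i];   /* the next process waits this much longer */
    }
    printf("Average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}

Running this prints waiting times 0, 24, and 27 and an average of 17.00, matching the numbers above.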

First Come First Serve Scheduling


Suppose we change the arrival order of the jobs to P2, P3, P1. The waiting times become P2 = 0, P3 = 3, P1 = 6, so the average waiting time drops to (0 + 3 + 6) / 3 = 3.

First Come First Serve Scheduling

Consider a system with one CPU-bound process and many I/O-bound processes. A convoy effect occurs: all the other processes wait for the one big process to get off the CPU. The FCFS scheduling algorithm is non-preemptive.

Shortest Job First Scheduling (SJF)


This algorithm associates with each process the length of the process's next CPU burst.
If there is a tie, FCFS is used. In other words, this algorithm can also be regarded as the shortest-next-CPU-burst algorithm.

Shortest Job First Scheduling

SJF is optimal: it gives the minimum average waiting time for a given set of processes.

Example
Process   Burst time
P1        6
P2        8
P3        7
P4        3

FCFS average waiting time: (0 + 6 + 14 + 21) / 4 = 10.25
SJF average waiting time: (3 + 16 + 9 + 0) / 4 = 7
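As an illustration (not from the slides), the following C sketch sorts the four bursts above shortest-first and recomputes the average waiting time, assuming all processes are available at time 0 as in the example.

#include <stdio.h>
#include <stdlib.h>

/* Non-preemptive SJF with all processes available at time 0:
 * run the shortest bursts first, then compute the waiting times
 * exactly as for FCFS on the sorted order. */
static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    int burst[] = {6, 8, 7, 3};                  /* P1..P4 from the example */
    int n = 4, wait = 0, total_wait = 0;

    qsort(burst, n, sizeof burst[0], cmp_int);   /* shortest job first */

    for (int i = 0; i < n; i++) {
        total_wait += wait;
        wait += burst[i];
    }
    printf("SJF average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}

The sorted order is 3, 6, 7, 8, giving waiting times 0, 3, 9, 16 and the average of 7 shown above.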

Shortest Job First Scheduling

Two schemes:
Non-preemptive: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
Preemptive: if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).

Shortest Job First Scheduling (Non-preemptive)

Shortest Job First Scheduling (Preemptive)

Priority Scheduling
A priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority). Priority scheduling can be preemptive or non-preemptive. SJF is a special case of priority scheduling where the priority is the predicted next CPU burst time.

Priority Scheduling
Process   Burst time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

The average waiting time = (6 + 0 + 16 + 18 + 1) / 5 = 8.2

Priority Scheduling
Process   Burst time   Priority   Arrival time
P1        10           3          0.0
P2        1            1          1.0
P3        2            4          2.0
P4        1            5          3.0
P5        5            2          4.0

Draw the Gantt chart for both preemptive and non-preemptive priority scheduling, and compute the waiting times.

Priority Scheduling
Problem: starvation. Low-priority processes may never execute.
Solution: aging. As time progresses, increase the priority of waiting processes (a small sketch of this idea follows).
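Below is a minimal C sketch of the aging idea; the struct fields, the AGE_INTERVAL constant, and the age() function are hypothetical names invented for this illustration, not part of any real scheduler.

#include <stdio.h>

/* Hypothetical process record for illustrating aging; a smaller
 * priority value means a higher priority, as in the slides. */
struct proc {
    const char *name;
    int priority;   /* current effective priority */
    int waiting;    /* ticks spent waiting in the ready queue */
};

#define AGE_INTERVAL 10   /* every 10 ticks of waiting, boost by one level */

static void age(struct proc *p, int nproc)
{
    for (int i = 0; i < nproc; i++) {
        p[i].waiting++;
        if (p[i].waiting % AGE_INTERVAL == 0 && p[i].priority > 0)
            p[i].priority--;          /* lower number = higher priority */
    }
}

int main(void)
{
    struct proc ready[] = { {"P4", 5, 0}, {"P3", 4, 0} };

    for (int tick = 0; tick < 50; tick++)
        age(ready, 2);

    printf("%s priority after aging: %d\n", ready[0].name, ready[0].priority);
    printf("%s priority after aging: %d\n", ready[1].name, ready[1].priority);
    return 0;
}

After enough waiting, even the lowest-priority process reaches the highest priority level, so it cannot starve indefinitely.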

Round-Robin Scheduling

Round-Robin is designed especially for time-sharing systems. It is similar to FCFS but adds the concept of preemption. A small unit of time, called the time quantum, is defined.

Round-Robin Scheduling

Each process gets a small unit of CPU time (the time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.

Round-Robin Scheduling

Round-Robin Scheduling

If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.

Round-Robin Scheduling

Performance:
q large => behaves like FIFO
q small => q must still be large with respect to the context-switch time, otherwise the overhead is too high

Typically, round-robin gives higher average turnaround time than SJF, but better response time.
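The following C sketch (illustrative only; the quantum value of 4 is arbitrary) simulates round-robin over the earlier FCFS bursts, showing how each process receives the CPU in chunks of at most one quantum per turn.

#include <stdio.h>

/* Simple round-robin simulation: cycle through the ready processes,
 * giving each at most QUANTUM time units per turn until all finish. */
#define QUANTUM 4

int main(void)
{
    const char *name[]      = {"P1", "P2", "P3"};
    int         remaining[] = {24, 3, 3};
    int         n = 3, done = 0, clock = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0)
                continue;
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            printf("t=%2d: %s runs for %d\n", clock, name[i], slice);
            clock += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0)
                done++;
        }
    }
    return 0;
}

With a larger quantum the trace collapses toward FCFS; with a tiny quantum the number of switches (and hence overhead) grows quickly.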

Round-Robin Scheduling

Linux Scheduler History


The Linux scheduler's history is short, but the popularity it has gained and the development it has undergone make it easy to forget that it was introduced only in the later part of the 20th century. Below are some of the schedulers through this brief history:

Linux Version        Scheduler
Linux pre 2.5        Multilevel Feedback Queue
Linux 2.5-2.6.23     O(1) Scheduler
Linux post 2.6.23    Completely Fair Scheduler

LINUX Schedulers

The Linux operating system is a time-sharing system, so the concept of short-term and long-term schedulers is taken to a higher level by introducing an additional intermediate level of scheduling, commonly known as medium-term scheduling.

Key ideas behind medium term scheduling


Remove a process from memory for a while
Later, the process is reintroduced into memory and executed

Linux 1.2 Scheduler

The Linux 1.2 scheduler used a circular queue for runnable task management and operated with a round-robin scheduling policy.
Features of the 1.2 scheduler:
Efficient for adding and removing processes (with a lock to protect the structure)
Simple and fast
Not complex to handle

2.2 and 2.4 Schedulers

Linux 2.2 scheduler
The Linux 2.2 scheduler introduced the idea of scheduling classes, permitting scheduling policies for real-time tasks, non-preemptive tasks, and non-real-time tasks.

Linux 2.4 scheduler
Linux 2.4 included a relatively simple scheduler that operated in O(N) time. The 2.4 scheduler divided time into epochs, and within each epoch every task was allowed to execute up to its time slice.

Linux 2.6 O(1) Scheduler


Reduced scheduling algorithm complexity from O(n) to O(1).
Better support for SMP systems, addressing the single run-queue lock and cache problems of the earlier scheduler.
Preemptive: a higher-priority process can preempt a running process with lower priority.

Priority and interactivity effect on time slice

Nice Value vs. Static Priority and Quantum

Static Priority        Nice   Quantum
100 (high priority)    -20    800 ms
110                    -10    600 ms
120                      0    100 ms
130                    +10     50 ms
139 (low priority)     +19      5 ms
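The quantum column above matches the base time-slice rule commonly described for the 2.6 O(1) scheduler. The sketch below reproduces the table's values under the assumption that static priority = nice + 120; the function and constant names are illustrative, not the kernel's.

#include <stdio.h>

/* Base time slice (in ms) as a function of static priority (100..139),
 * reproducing the table above: a higher priority (lower number) gets a
 * longer quantum. */
static int base_timeslice_ms(int static_prio)
{
    if (static_prio < 120)
        return (140 - static_prio) * 20;   /* high-priority range */
    else
        return (140 - static_prio) * 5;    /* normal/low-priority range */
}

int main(void)
{
    int nice[] = {-20, -10, 0, 10, 19};

    for (int i = 0; i < 5; i++) {
        int prio = nice[i] + 120;          /* assumed nice-to-priority mapping */
        printf("nice %+3d -> static priority %d -> quantum %d ms\n",
               nice[i], prio, base_timeslice_ms(prio));
    }
    return 0;
}

This prints 800, 600, 100, 50, and 5 ms for the five nice values in the table.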

Rotating Staircase Deadline Scheduler

Con Kolivas introduced this scheduler, which brought in the concept of fair scheduling, borrowed from queuing theory.

What is Fair Scheduling


A scheduling strategy for time-sharing systems in which CPU usage is distributed equally among system users or groups, not among the processes.

For example, if four users (A, B, C, D) are concurrently executing one process each, each user gets 25% of the whole CPU power (CPU cycles).

Time Sharing (continued)

(Diagram: the CPU's total power is 100%; users A, B, C, and D each get 25%.)

What if user B has two processes? User B still gets 25% in total, so each of B's processes (B.1 and B.2) gets 12.5%.

Where Did We Come From?


Pre-2.6 schedulers didn't utilize SMP very well: the single run-queue lock meant idle processors sat awaiting lock release.
Preemption was not possible: a lower-priority task could execute while a high-priority task waited.
O(n) complexity: the scheduler slows down with larger numbers of tasks.

CFS The Future is Now!


Completely Fair Scheduler
Merged into the 2.6.23 kernel
Runs the task with the gravest need
Guarantees fairness in CPU usage
No run queues! Uses a time-ordered red-black binary tree; the leftmost node is the next process to run.

Red/Black Tree Rules


(Diagram: an example red/black tree with keys such as 27, 19, 34, 31, 65, 49, and 98, and NIL leaf nodes.)

Red/Black Tree Rules (continued)

1. Every node has two children, each colored either red or black.
2. Every tree leaf node is colored black.
3. Every red node has both of its children colored black.
4. Every path from the root to a tree leaf contains the same number of black nodes.

(Diagram: Scheduler, Dispatcher, CPU.)

How CFS Works

The scheduler stores the records of runnable tasks in a red-black tree, using the spent processor time as the key. This allows it to efficiently pick the process that has used the least amount of time (stored in the leftmost node of the tree). After the process runs, its spent execution time is updated and it is returned to the tree, where it normally takes some other position. The new leftmost node is then picked from the tree, and the iteration repeats (a simplified sketch follows).
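As a hedged illustration of that loop (not the kernel's actual code), here is a simplified C sketch: the task that has spent the least processor time is always picked, charged for the time it runs, and conceptually reinserted. For brevity it scans a small array instead of maintaining a red-black tree, so the pick is O(n) here rather than the real scheduler's O(log n).

#include <stdio.h>

/* Simplified CFS-style selection: always run the task that has spent
 * the least processor time so far (the "leftmost" task). */
struct task {
    const char *name;
    unsigned long runtime;   /* spent execution time (the tree key) */
};

static struct task *pick_next(struct task *t, int n)
{
    struct task *leftmost = &t[0];
    for (int i = 1; i < n; i++)
        if (t[i].runtime < leftmost->runtime)
            leftmost = &t[i];
    return leftmost;
}

int main(void)
{
    struct task rq[] = { {"A", 0}, {"B", 0}, {"C", 0} };

    for (int tick = 0; tick < 6; tick++) {
        struct task *next = pick_next(rq, 3);
        next->runtime += 10;   /* run it and charge 10 time units */
        printf("tick %d: ran %s (runtime now %lu)\n",
               tick, next->name, next->runtime);
    }
    return 0;
}

Each task is chosen in turn because running it pushes its spent time above the others', which is exactly the fairness behaviour described above.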

CFS Features (cont)


No timeslices!... sort of.
Uses wait_runtime (per task) and fair_clock (queue-wide); processes build up CPU debt.
Different priorities spend time differently: a half-priority task sees time pass twice as fast.
O(log n) complexity: only marginally slower than O(1) at very large numbers of inputs.

Digging into the CFS Data Structures

CFS has three primary structures: task_struct, sched_entity, and cfs_rq.
task_struct is the top-level entity, containing things such as task priorities, the scheduling class, and the sched_entity struct. (sched.h, L1117)
sched_entity includes a node for the RB tree and the vruntime statistic, among others. (sched.h, L1041)
cfs_rq contains the root node, the task group (more on this later), etc. (sched.c, L424)

Priorities and more

While CFS does not directly use priorities or priority queues, it does use them to modulate vruntime buildup. Priority is inverse to its effect: a higher-priority task accumulates vruntime more slowly, since it needs more CPU time. Likewise, a low-priority task has its vruntime increase more quickly, causing it to be preempted earlier.
Nice value: a lower value means higher priority. It is a relative priority, not an absolute one (a sketch of this scaling follows).
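A small C sketch of how weight modulates vruntime buildup (the scaling formula and the example weights are illustrative; they are not taken verbatim from the kernel): the same amount of real execution time adds less vruntime to a heavier, higher-priority task.

#include <stdio.h>

/* Illustrative vruntime update: actual execution time is scaled by
 * BASE_WEIGHT / task_weight, so a heavier (higher-priority) task
 * accumulates vruntime more slowly and therefore runs longer before
 * being preempted.  The weights below are example values only. */
#define BASE_WEIGHT 1024

static unsigned long vruntime_delta(unsigned long delta_exec,
                                    unsigned long weight)
{
    return delta_exec * BASE_WEIGHT / weight;
}

int main(void)
{
    /* The same 10 ms of real execution charged to two different tasks. */
    printf("normal task (weight 1024): vruntime += %lu\n",
           vruntime_delta(10, 1024));   /* += 10 */
    printf("heavy task  (weight 2048): vruntime += %lu\n",
           vruntime_delta(10, 2048));   /* += 5  */
    return 0;
}

A task with half the weight accumulates vruntime twice as fast, which is the "time passes twice as fast" effect mentioned earlier.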

...that's it?


The CFS algorithm is, as stated, a lot simpler than the previous one, and does not require many of the old variables. Preemption time is variable, depending on priorities and actual running time, so we don't need to assign tasks a given timeslice.

Other additions


CFS introduced group scheduling in release 2.6.24, adding another level of fairness. Tasks can be grouped together, for example by the user who owns them. CFS can then be applied at the group level as well as the individual task level. So, for three groups, it would give each about a third of the CPU time, and then divide that time up among the tasks in each group.

Modular scheduling


Alongside the initial CFS release came the notion of modular scheduling and scheduling classes. This allows various scheduling policies to be implemented independently of the generic scheduler. sched.c, which we have seen, contains that generic code. When schedule() is called, it calls pick_next_task(), which looks at the task's class and calls the class-appropriate method. Let's look at the sched_class struct... (sched.h, L976). A simplified sketch of the idea follows.
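To illustrate the idea of a scheduling class as a table of operations, here is a simplified C sketch. It only mirrors the concept; the real struct sched_class in sched.h has more members and different signatures, and the toy "fair" class below is invented for this example.

#include <stdio.h>

/* Simplified model of a scheduling class: the generic scheduler only
 * knows this table of function pointers; each policy (fair, real-time,
 * ...) supplies its own implementations. */
struct task {
    const char *name;
};

struct sched_class_ops {
    const char *name;
    struct task *(*pick_next_task)(void);
};

/* A toy "fair" class with a single ready task, for illustration only. */
static struct task fair_task = { "cfs-task" };

static struct task *fair_pick_next_task(void)
{
    return &fair_task;
}

static struct sched_class_ops fair_sched_class = {
    .name           = "fair",
    .pick_next_task = fair_pick_next_task,
};

int main(void)
{
    /* The generic schedule() path: ask the class for the next task to run. */
    struct task *next = fair_sched_class.pick_next_task();
    printf("class %s picked %s\n", fair_sched_class.name, next->name);
    return 0;
}

Because the generic code only calls through the operations table, a new policy can be added by supplying a new table rather than modifying the core scheduler.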

Scheduling classes!


Two scheduling classes are currently implemented: sched_fair and sched_rt. sched_fair is CFS, which we have been talking about this whole time. sched_rt handles real-time processes and does not use CFS; it is basically the same as the previous scheduler. CFS is mainly used for non-real-time tasks.
