Fedora 12 Scheduling Criteria & Algorithms
Fedora 12 Specifications
Fedora 12 is an OS based on the Linux kernel, developed by the community-supported Fedora Project and sponsored by Red Hat. Fedora 12 uses the Fedora Core 6 development environment; Fedora Core 6 uses the 2.6.18-based Linux kernel.
CPU scheduling selects processes one by one from the ready queue and allocates the CPU to them. Scheduling decisions may take place when a process:
switches from the running to the waiting state
switches from the running to the ready state
switches from the waiting to the ready state
terminates
Scheduling Policy
Process Preemption
A process is preempted when the dynamic priority of the currently running process is lower than that of a process waiting in the ready queue. A process may also be preempted when its time quantum expires.
How Long Must a Time Quantum Last?
Quantum too short: the system overhead caused by task switching becomes excessively high.
Quantum too long: processes no longer appear to be executed concurrently, which degrades the response time of interactive applications and the responsiveness of the system.
The rule of thumb adopted by Linux is: choose a duration as long as possible while keeping good system response time.
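To make the quantum trade-off concrete, here is a small, hypothetical calculation (not from the slides): assuming a fixed context-switch cost of 0.1 ms, it prints the fraction of CPU time lost to switching for several candidate quantum lengths.

#include <stdio.h>

int main(void)
{
    const double switch_cost_ms = 0.1;               /* assumed cost of one context switch */
    const double quanta_ms[] = { 1, 10, 100, 1000 }; /* candidate quantum lengths */
    const int n = sizeof quanta_ms / sizeof quanta_ms[0];

    for (int i = 0; i < n; i++) {
        /* fraction of each quantum+switch cycle spent on the switch itself */
        double overhead = switch_cost_ms / (quanta_ms[i] + switch_cost_ms);
        printf("quantum %7.1f ms -> %5.2f%% of CPU time spent switching\n",
               quanta_ms[i], overhead * 100.0);
    }
    return 0;
}

A 1 ms quantum loses roughly 9% of the CPU to switching, while a 100 ms quantum loses about 0.1%, which is why the quantum is kept large relative to the switch cost.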
Scheduling Algorithms
First-Come, First-Served Scheduling
Shortest-Job-First Scheduling
Priority Scheduling
Round-Robin Scheduling
Multilevel Queue Scheduling
Multilevel Feedback-Queue Scheduling
First-Come, First-Served Scheduling
Waiting time under FCFS depends heavily on arrival order. Consider one CPU-bound process and many I/O-bound processes: there is a convoy effect, as all the other processes wait for the one big process to get off the CPU. The FCFS scheduling algorithm is non-preemptive.
Shortest-Job-First Scheduling
The SJF algorithm associates with each process the length of the process's next CPU burst, and assigns the CPU to the process with the shortest one. If there is a tie, FCFS is used. In other words, this algorithm can also be regarded as the shortest-next-CPU-burst algorithm. SJF is optimal: it gives the minimum average waiting time for a given set of processes.
Example
Process   Burst time
P1        6
P2        8
P3        7
P4        3
FCFS average waiting time: (0+6+14+21)/4 = 10.25. SJF average waiting time: (3+16+9+0)/4 = 7.
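As a cross-check of the figures above, here is a minimal C sketch (not part of the original material) that recomputes the FCFS and SJF average waiting times for these four bursts, assuming all processes arrive at time 0.

#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

/* Average waiting time when the bursts run back-to-back in the given order. */
static double avg_wait(const int *burst, int n)
{
    double total = 0.0;
    int clock = 0;
    for (int i = 0; i < n; i++) {
        total += clock;             /* waiting time = start time here */
        clock += burst[i];
    }
    return total / n;
}

int main(void)
{
    int fcfs[] = { 6, 8, 7, 3 };    /* P1..P4 in arrival order */
    int sjf[]  = { 6, 8, 7, 3 };
    int n = 4;

    qsort(sjf, n, sizeof sjf[0], cmp_int);  /* SJF: shortest burst first */

    printf("FCFS average waiting time: %.2f\n", avg_wait(fcfs, n)); /* 10.25 */
    printf("SJF  average waiting time: %.2f\n", avg_wait(sjf, n));  /* 7.00  */
    return 0;
}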
Priority Scheduling
A priority number (an integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority). Priority scheduling may be preemptive or non-preemptive. SJF is a special case of priority scheduling in which the priority is the predicted length of the next CPU burst.
Priority Scheduling
Process   Burst time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2
With all processes arriving at time 0, the average waiting time = (6+0+16+18+1)/5 = 8.2.
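The following small C sketch (illustrative, not from the slides) reproduces the 8.2 result by dispatching the five processes in non-preemptive priority order, assuming they all arrive at time 0.

#include <stdio.h>

struct proc { const char *name; int burst; int prio; };

int main(void)
{
    struct proc p[] = {
        { "P1", 10, 3 }, { "P2", 1, 1 }, { "P3", 2, 4 },
        { "P4",  1, 5 }, { "P5", 5, 2 },
    };
    int n = 5, clock = 0;
    double total_wait = 0.0;

    /* Repeatedly pick the unfinished process with the best (smallest) priority. */
    for (int done = 0; done < n; done++) {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (p[i].burst > 0 && (best < 0 || p[i].prio < p[best].prio))
                best = i;
        printf("%s waits %d, then runs for %d\n", p[best].name, clock, p[best].burst);
        total_wait += clock;
        clock += p[best].burst;
        p[best].burst = 0;          /* mark as finished */
    }
    printf("average waiting time: %.1f\n", total_wait / n); /* 8.2 */
    return 0;
}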
Priority Scheduling
Process   Burst time   Priority   Arrival time
P1        10           3          0.0
P2        1            1          1.0
P3        2            4          2.0
P4        1            5          3.0
P5        5            2          4.0
Draw the Gantt chart for both the preemptive and the non-preemptive case, and compute the waiting time of each process.
Priority Scheduling
Problem: starvation. Low-priority processes may never execute. Solution: aging. As time progresses, increase the priority of processes that have been waiting.
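A minimal sketch of aging, assuming a hypothetical tick-driven ready queue (illustrative only, not how Linux implements it): each waiting process is bumped one priority level after a fixed number of ticks, so a starved process eventually reaches the top.

struct task {
    int priority;       /* smaller number = higher priority */
    int waiting_ticks;  /* ticks spent in the ready queue   */
};

#define AGING_INTERVAL 100  /* assumed: boost after 100 ticks of waiting */
#define MAX_PRIORITY   0

/* Called once per timer tick for every task still in the ready queue. */
static void age_ready_queue(struct task *ready, int n)
{
    for (int i = 0; i < n; i++) {
        ready[i].waiting_ticks++;
        if (ready[i].waiting_ticks >= AGING_INTERVAL &&
            ready[i].priority > MAX_PRIORITY) {
            ready[i].priority--;          /* raise priority one step */
            ready[i].waiting_ticks = 0;
        }
    }
}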
Round-Robin Scheduling
Round-Robin is designed especially for time-sharing systems. It is similar to FCFS, but adds preemption. A small unit of time, called the time quantum, is defined.
Round-Robin Scheduling
Each process gets a small unit of CPU time (the time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
Round-Robin Scheduling
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Round-Robin Scheduling
Performance: when q is large, RR behaves like FIFO; when q is small, q must still be large with respect to the context-switch time, otherwise the switching overhead is too high. Typically, the quantum is chosen so that a context switch costs only a small fraction of q.
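To illustrate round-robin behaviour, here is a small simulation (not from the slides) that reuses the earlier bursts {6, 8, 7, 3} with an assumed quantum of 4, all processes arriving at time 0 and context switches treated as free.

#include <stdio.h>

int main(void)
{
    int remaining[] = { 6, 8, 7, 3 };   /* P1..P4 */
    int wait[4] = { 0 };                /* accumulated waiting time */
    const int n = 4, quantum = 4;
    int left = n;

    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0)
                continue;
            int run = remaining[i] < quantum ? remaining[i] : quantum;
            /* every other unfinished process waits while i is running */
            for (int j = 0; j < n; j++)
                if (j != i && remaining[j] > 0)
                    wait[j] += run;
            remaining[i] -= run;
            if (remaining[i] == 0)
                left--;
        }
    }
    for (int i = 0; i < n; i++)
        printf("P%d waited %d\n", i + 1, wait[i]);
    return 0;
}

With q = 4 the Gantt chart is P1, P2, P3, P4, P1, P2, P3, giving waiting times of 11, 13, 17, and 12: worse on average than SJF, but no process waits for one long burst to finish.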
LINUX Schedulers
The Linux operating system is a time-sharing system, so the concept of short-term and long-term schedulers is taken to a higher level by introducing an additional intermediate level of scheduling, commonly known as medium-term scheduling.
The medium-term scheduler removes processes from memory for a while; later the process is reintroduced into memory and its execution is resumed.
The Linux 1.2 scheduler used a circular queue for runnable-task management and operated with a round-robin scheduling policy. Features of the 1.2 scheduler: efficient adding and removing of processes (with a lock to protect the structure), simple and fast, and not complex to maintain.
Linux 2.2 scheduler: introduced the idea of scheduling classes, permitting different scheduling policies for real-time tasks, non-preemptible tasks, and non-real-time tasks.
Linux 2.4 scheduler: a relatively simple scheduler that operated in O(N) time. It divided time into epochs, and within each epoch every task was allowed to execute up to its time slice.
Con Kolivas introduced the concept of fair scheduling, borrowed from queuing theory: CPU usage is distributed equally among system users or groups, not among individual processes.
For example, if four users (A, B, C, D) are each concurrently running one process, each user gets 25% of the total CPU power (CPU cycles).
A: 25%, B: 25%, C: 25%, D: 25%.
What happens if user B starts a second process? A, C, and D still get 25% each, while B's 25% share is split between its two processes: B.1 gets 12.5% and B.2 gets 12.5%.
Drawbacks of the earlier schedulers:
A single run-queue lock meant idle processors sat waiting for the lock to be released.
Preemption was not possible, so a lower-priority task could keep executing while a higher-priority task waited.
O(n) complexity.
Fair Scheduler
The Completely Fair Scheduler (CFS) was merged into the 2.6.23 kernel. It runs the task with the gravest need for CPU time and guarantees fairness in CPU usage.
No run queues! CFS uses a time-ordered red-black binary tree; the leftmost node is the next process to run.
[Figure: example red-black tree with node keys 19, 5, 2, 34, 31, 49, 65, 98; NL marks the null leaf nodes]
Red-Black Trees, Continued
1. Every node has two children, each colored either red or black.
2. Every tree leaf node is colored black.
3. Every red node has both of its children colored black.
4. Every path from the root to a tree leaf contains the same number of black nodes.
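As a small illustration of properties 3 and 4, here is a toy C checker (not kernel code; the node layout is invented for the example) that computes the black-height of a subtree and rejects trees that violate either rule.

#include <stddef.h>

enum color { RED, BLACK };

struct rb_node {
    int key;
    enum color color;
    struct rb_node *left, *right;
};

/* Returns the black-height of the subtree, or -1 if a property is violated.
 * NULL children stand in for the black leaf nodes. */
static int check_rb(const struct rb_node *n)
{
    if (n == NULL)
        return 1;                       /* a NULL leaf counts as one black node */

    /* Property 3: a red node must have black (or NULL) children. */
    if (n->color == RED) {
        if ((n->left && n->left->color == RED) ||
            (n->right && n->right->color == RED))
            return -1;
    }

    int lh = check_rb(n->left);
    int rh = check_rb(n->right);
    if (lh < 0 || rh < 0 || lh != rh)   /* property 4: equal black-height on both sides */
        return -1;

    return lh + (n->color == BLACK ? 1 : 0);
}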
[Figure: the scheduler picks the next task and the dispatcher hands it the CPU]
The scheduler stores the runnable tasks in a red-black tree, using the CPU time each task has consumed as the key. This allows it to efficiently pick the process that has used the least amount of CPU time (it is stored in the leftmost node of the tree). After the selected process runs, its consumed execution time is updated and it is returned to the tree, where it normally takes some other position. The new leftmost node is then picked from the tree, and the iteration repeats.
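The loop just described can be sketched as follows. This is a conceptual outline only: the cfs_task fields and tree_* helpers are hypothetical stand-ins, not the kernel's actual API.

struct cfs_task {
    unsigned long long runtime_ns;   /* key: CPU time consumed so far */
    /* ... */
};

struct rb_tree;                                              /* hypothetical tree type        */
struct cfs_task *tree_remove_leftmost(struct rb_tree *t);    /* task with the smallest key    */
void tree_insert(struct rb_tree *t, struct cfs_task *task);  /* reinsert, keyed by runtime_ns */
unsigned long long run_for_a_while(struct cfs_task *task);   /* returns nanoseconds executed  */

void schedule_loop(struct rb_tree *tasks)
{
    for (;;) {
        /* leftmost node = task that has received the least CPU time */
        struct cfs_task *next = tree_remove_leftmost(tasks);

        /* let it run, then charge it for the time it actually used */
        next->runtime_ns += run_for_a_while(next);

        /* reinserting with the larger key usually moves it to the right,
         * so the other tasks get their turn at the leftmost position */
        tree_insert(tasks, next);
    }
}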
timeslices!... sort of
CFS uses wait_runtime (per task) and fair_clock (queue-wide). Processes build up CPU "debt" while they wait. Different priorities spend time at different rates: a task at half the priority sees time pass twice as fast.
O(log n) complexity
CFS has three primary structures: task_struct, sched_entity, and cfs_rq. task_struct is the top-level entity, containing things such as the task's priorities, its scheduling class, and its sched_entity struct (sched.h, L1117). sched_entity includes a node for the RB tree and the vruntime statistic, among others (sched.h, L1041). cfs_rq contains the root node of the tree, the task group (more on this later), etc. (sched.c, L424).
While CFS does not directly use priorities or priority queues, it does use priorities to modulate vruntime buildup. Here priority works inversely to its effect: a higher-priority task accumulates vruntime more slowly, since it deserves more CPU time, while a low-priority task has its vruntime increase more quickly, causing it to be preempted earlier. Priority is expressed as a nice value, where a lower value means higher priority; it is a relative priority, not an absolute one...
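One way to picture this is to scale the charged virtual runtime by the task's weight, in the spirit of the kernel's calculation. This is a simplified sketch, not the kernel code; the non-default weights below are approximate values from the nice-to-weight table, with nice 0 weighing 1024.

#include <stdio.h>

#define NICE_0_WEIGHT 1024ULL

/* Scale the real runtime delta by NICE_0_WEIGHT / weight:
 * heavier (higher-priority) tasks are charged less virtual time. */
static unsigned long long vruntime_delta(unsigned long long delta_exec_ns,
                                         unsigned long long weight)
{
    return delta_exec_ns * NICE_0_WEIGHT / weight;
}

int main(void)
{
    unsigned long long delta = 10ULL * 1000 * 1000;   /* task actually ran for 10 ms */

    printf("nice  0 (weight 1024):  +%llu ns of vruntime\n", vruntime_delta(delta, 1024));
    printf("nice -5 (weight ~3121): +%llu ns of vruntime\n", vruntime_delta(delta, 3121));
    printf("nice  5 (weight ~335):  +%llu ns of vruntime\n", vruntime_delta(delta, 335));
    return 0;
}

The nice -5 task is charged only about a third of the virtual time for the same real time, so it stays near the left of the tree longer and is scheduled more often.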
...that's it?
The CFS algorithm is, as stated, a lot simpler than the previous one, and does not require many of the old variables. The preemption time is variable, depending on priorities and actual running time, so we do not need to assign tasks a fixed timeslice.
Other additions
CFS introduced group scheduling in release 2.6.24, adding another level of fairness. Tasks can be grouped together, for example by the user who owns them. CFS can then be applied at the group level as well as at the individual-task level: with three groups, it gives each group about a third of the CPU time, and then divides that time among the tasks in each group.
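A back-of-the-envelope sketch of that split (the group names and task counts are made up for illustration):

#include <stdio.h>

int main(void)
{
    const char *groups[] = { "alice", "bob", "carol" };
    const int tasks_per_group[] = { 1, 2, 4 };
    const int ngroups = 3;

    for (int g = 0; g < ngroups; g++) {
        double group_share = 100.0 / ngroups;                  /* ~33.3% per group          */
        double task_share  = group_share / tasks_per_group[g]; /* split inside the group    */
        printf("group %-5s: %.1f%% total, %.1f%% per task\n",
               groups[g], group_share, task_share);
    }
    return 0;
}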
Modular scheduling
Alongside the initial CFS release came the notion of modular scheduling and scheduling classes. This allows various scheduling policies to be implemented independently of the generic scheduler. sched.c, which we have seen, contains that generic code. When schedule() is called, it calls pick_next_task(), which looks at the task's class and calls the class-appropriate method. Let's look at the sched_class struct (sched.h, L976).
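A stripped-down sketch of the idea follows; the real struct in sched.h has more hooks and different signatures, so treat this as illustrative only.

struct rq;            /* per-CPU run queue (opaque here) */
struct task_struct;   /* a task (opaque here)            */

struct sched_class {
    const struct sched_class *next;   /* classes are kept in priority order */

    void (*enqueue_task)(struct rq *rq, struct task_struct *p);
    void (*dequeue_task)(struct rq *rq, struct task_struct *p);
    struct task_struct *(*pick_next_task)(struct rq *rq);
    void (*task_tick)(struct rq *rq, struct task_struct *p);
};

/* The generic scheduler walks the classes (e.g. real-time before fair)
 * and returns the first runnable task it finds. */
struct task_struct *generic_pick_next_task(struct rq *rq,
                                           const struct sched_class *highest)
{
    for (const struct sched_class *sc = highest; sc; sc = sc->next) {
        struct task_struct *p = sc->pick_next_task(rq);
        if (p)
            return p;
    }
    return 0;   /* nothing runnable; the idle task would run instead */
}

Because the generic code only ever calls through these function pointers, a new policy can be added by supplying another sched_class without touching the core scheduler.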
Scheduling classes!
Two scheduling classes are currently implemented: sched_fair and sched_rt. sched_fair is CFS, which we have been talking about this whole time. sched_rt handles real-time processes and does not use CFS; it is basically the same as the previous scheduler. CFS is used for the non-real-time tasks.