2012-8-2 Scheduling of Real-Time Processes


T.B. Skaali, Department of Physics, University of Oslo


FYS 4220 / 9220 2012 / #8

Real Time and Embedded Data Systems and Computing

Scheduling of Real-Time processes, part 2 of 2
Could there be more scheduling algorithms
than those presented in the previous lecture?

Yes indeed!
Pick your own scheduling strategy
The following is a list of common scheduling practices and disciplines (ref Wikipedia):
Borrowed-Virtual-Time Scheduling (BVT)
Completely Fair Scheduler (CFS)
Critical Path Method of Scheduling
Deadline-monotonic scheduling (DMS)
Deficit round robin (DRR)
Earliest deadline first scheduling (EDF)
Elastic Round Robin
Fair-share scheduling
First In, First Out (FIFO), also known as First Come First Served (FCFS)
Gang scheduling
Genetic Anticipatory
Highest response ratio next (HRRN)
Interval scheduling
Last In, First Out (LIFO)
Job Shop Scheduling
Least-connection scheduling
Least slack time scheduling (LST)
List scheduling
Lottery Scheduling
Multilevel queue
Multilevel Feedback Queue
Never queue scheduling
O(1) scheduler
Proportional Share Scheduling
Rate-monotonic scheduling (RMS)
Round-robin scheduling (RR)
Shortest expected delay scheduling
Shortest job next (SJN)
Shortest remaining time (SRT)
Staircase Deadline scheduler (SD)
"Take" Scheduling
Two-level scheduling
Weighted fair queuing (WFQ)
Weighted least-connection scheduling
Weighted round robin (WRR)
Group Ratio Round-Robin: O(1)
The computer scientists must have had great fun in coming up with names!
Scheduling of real-time activities
To repeat: an absolute requirement in hard Real-Time systems is that deadlines are met! Missing a deadline may have serious to catastrophic consequences. This requirement is the baseline for true Real-Time Operating Systems!
Note that an RTOS should have guaranteed worst-case reaction times. However, the actual values are obviously processor dependent. Finding these numbers is another issue; you may have to dig them out yourself!
In soft Real-Time systems deadlines may however occasionally be missed without leading to catastrophe. For such applications standard OSs can be used (Windows, Linux, etc.).
Since Linux is popular, and a good choice for many embedded applications, a quick introduction will be given in a later lecture.
Multitasking under VxWorks
Multitasking provides the fundamental mechanism for an
application to control and react to multiple, discrete real-
world events. The VxWorks real-time kernel, wind,
provides the basic multitasking environment. Multitasking
creates the appearance of many threads of execution
running concurrently when, in fact, the kernel interleaves
their execution on the basis of a scheduling algorithm.
A concurrent activity is called a task in VxWorks parlance.
Each task has its own context, which is the CPU
environment and system resources that the task sees
each time it is scheduled to run by the kernel. On a
context switch, a task's context is saved in the task
control block (TCB).
A TCB can be accessed through the taskTcb(task_ID) routine.
VxWorks POSIX and Wind scheduling
POSIX scheduling is based on processes, while Wind scheduling is based on tasks.
Tasks and processes differ in several ways. Most notably, tasks can address memory directly while processes cannot; and processes inherit only some specific attributes from their parent process, while tasks operate in exactly the same environment as the parent task.
Tasks and processes are alike in that they can be scheduled independently.
VxWorks documentation uses the term preemptive priority scheduling, while the POSIX standard uses the term FIFO. This difference is purely one of nomenclature: both describe the same priority-based policy.
The POSIX scheduling algorithms are applied on a process-by-process basis. The Wind methodology, on the other hand, applies scheduling algorithms on a system-wide basis: either all tasks use a round-robin scheme, or all use a preemptive priority scheme.
The POSIX priority numbering scheme is the inverse of the Wind scheme. In POSIX, the higher
the number, the higher the priority; in the Wind scheme, the lower the number, the higher the
priority, where 0 is the highest priority. Accordingly, the priority numbers used with the POSIX
scheduling library (schedPxLib) do not match those used and reported by all other components
of VxWorks. You can override this default by setting the global
variable posixPriorityNumbering to FALSE. If you do this, the Wind numbering scheme
(smaller number = higher priority) is used by schedPxLib, and its priority numbers match those
used by the other components of VxWorks.
VxWorks Real-Time Processes (RTPs), Wind River White Paper, 2005
VxWorks has historically focused on supporting a lightweight, kernel-based threading
model. It distinguished itself as an operating system that is highly scalable and robust,
yet lightweight and fast. The focus was on real-time support (for example, keeping the
maximum time to respond to an interrupt to an absolute minimum), low task-switching
costs, and easy access to hardware. Essentially, VxWorks tried to stay out of the developer's way, and placed few restrictions on what he or she could do.
Times have changed. The preponderance of CPUs with a memory management unit
(MMU) means that many developers in the device software space wish to take
advantage of these MMUs to provide partitioning of their systems and protect
themselves from crashing a system if they make a programming error. The most
common and familiar model for this is a process model, where fully linked applications
run in a distinct section of memory and are largely walled off from other applications
and services in the system.
Kernel mode, by comparison, trades off abstraction and protection of kernel components for the requirement of having direct access to hardware, tightly bound interaction with the kernel, and potentially higher responsiveness and performance. This trade-off requires that kernel development operate at a more sophisticated programming level, where the developer is more cognizant of subtle interactions with the hardware and the risk of hitting unrecoverable fault conditions. More fun!
VxWorks Real-Time Processes (RTPs), Wind River White Paper, 2005
An RTP itself is not a schedulable
entity. The execution unit within
an RTP is a VxWorks task, and
there may be multiple tasks
executing within an RTP. Tasks in
the same RTP share its address
space and memory context, and
cannot exist beyond the lifespan of
the RTP.
Can RTPs be implemented under
the vxsim simulator? Well, there
is the option of building a Real
Time Process project, but how it is
done I have not understood!
VxWorks Kernel programming: Task Control
Call Description
taskSpawn() Spawns (creates and activates) a new task
taskCreate() Creates, but does not activate, a new task
taskInit() Initializes a new task
taskInitExcStk() Initializes a task with stacks at specified addresses
taskOpen() Opens a task (or optionally creates one, if it does not exist)
taskActivate() Activates an initialized task
The use of the taskSpawn() routine is demonstrated in the RTlab programs. The arguments are the new task's name (ASCII string), the task's priority, an options word, the stack size, the address of the main (start) routine, and 10 arguments to be passed to the main routine as startup parameters.

id = taskSpawn (name, priority, options, stacksize, main, arg1, ..., arg10);

Task Creation Routines
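
As a concrete illustration, the sketch below spawns a simple task with taskSpawn(); the task name tBlink, the entry routine, the priority 100, the 4096-byte stack and the delay period are made-up example values, not taken from the RTlab programs.

#include <vxWorks.h>
#include <taskLib.h>
#include <stdio.h>

/* hypothetical entry routine; only the first of the 10 possible
   startup arguments is actually used here */
void blinkTask (int period, int a2, int a3, int a4, int a5,
                int a6, int a7, int a8, int a9, int a10)
{
    for (;;)
    {
        printf ("tick\n");
        taskDelay (period);    /* suspend for 'period' clock ticks */
    }
}

void startBlink (void)
{
    int tid;

    /* name, priority (0-255, lower number = higher priority), options,
       stack size, entry point, then up to 10 integer arguments */
    tid = taskSpawn ("tBlink", 100, 0, 4096, (FUNCPTR) blinkTask,
                     60, 0, 0, 0, 0, 0, 0, 0, 0, 0);
    if (tid == ERROR)
        printf ("taskSpawn failed\n");
}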
VxWorks Kernel programming: Task Scheduling
Call Description
kernelTimeSlice() Controls round-robin scheduling
taskPrioritySet() Changes the priority of a task
taskLock() Disables task rescheduling
taskUnlock() Enables task rescheduling
Tasks are assigned a priority when created. One can also change a task's priority while it is executing by calling taskPrioritySet().

The kernel has 256 priority levels, numbered 0 through 255. Priority 0 is
the highest and 255 is the lowest. All application tasks should be in the
priority range from 100 to 255.

kernelTimeSlice(int ticks) /* time-slice in ticks or 0 to disable round-robin */

Task Scheduler Control Routines
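
A minimal sketch of the calls above in use; the task ID argument, the 50 ms time slice and the priority value 90 are illustrative choices only.

#include <vxWorks.h>
#include <taskLib.h>
#include <kernelLib.h>
#include <sysLib.h>

void schedDemo (int tid)
{
    /* enable round-robin with roughly a 50 ms slice;
       sysClkRateGet() returns the clock rate in ticks per second */
    kernelTimeSlice (sysClkRateGet () / 20);

    /* raise the priority of an existing task (lower number = higher) */
    taskPrioritySet (tid, 90);

    /* a short critical region protected against rescheduling;
       note that taskLock() does not disable interrupts */
    taskLock ();
    /* ... touch data shared with other tasks ... */
    taskUnlock ();

    kernelTimeSlice (0);    /* disable round-robin again */
}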
VxWorks Kernel programming: Tasking Extensions
Call Description
taskCreateHookAdd() Adds a routine to be called at every task create
taskCreateHookDelete() Deletes a previously added task create routine
taskSwitchHookAdd() Adds a routine to be called at every task switch
taskSwitchHookDelete() Deletes a previously added task switch routine
taskDeleteHookAdd() Adds a routine to be called at every task delete
taskDeleteHookDelete() Deletes a previously added task delete routine
To allow additional task-related facilities to be added to the system, VxWorks provides hook routines that allow additional routines to be invoked whenever a task is created, a task context switch occurs, or a task is deleted. User-installed switch hook routines are called within the kernel context and therefore do not have access to all VxWorks facilities; see the documentation for the routines that are allowed.

Task Create, Switch, and Delete Hooks
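
As a sketch, the hook below merely counts context switches; the function and counter names are invented for the example, and a real hook must respect the kernel-context restrictions mentioned above.

#include <vxWorks.h>
#include <taskLib.h>
#include <taskHookLib.h>

static unsigned int switchCount;    /* illustrative statistic */

/* called by the kernel on every context switch */
void mySwitchHook (WIND_TCB *pOldTcb, WIND_TCB *pNewTcb)
{
    switchCount++;
}

STATUS installSwitchHook (void)
{
    return taskSwitchHookAdd ((FUNCPTR) mySwitchHook);
    /* later: taskSwitchHookDelete ((FUNCPTR) mySwitchHook); */
}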
VxWorks Task Control Block
Each task has its own context, which is the CPU environment and
system resources that the task sees each time it is scheduled to run by
the kernel. On a context switch, a task's context is saved in the task
control block (TCB). A task's context includes:
a thread of execution; that is, the task's program counter
the task's virtual memory context (if process support is included)
the CPU registers and (optionally) coprocessor registers
stacks for dynamic variables and function calls
I/O assignments for standard input, output, and error
a delay timer
a time-slice timer
kernel control structures
signal handlers
task variables
error status (errno)
debugging and performance monitoring values
Note that in conformance with the POSIX standard, all tasks in a
process share the same environment variables (unlike kernel tasks,
which each have their own set of environment variables).
A VxWorks task will be in one of the states listed on the next page.
void ti (int taskNameOrId)
Displays complete information from a task's TCB
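
A small sketch of reaching the TCB from code with taskTcb(); the routine name checkTask is made up for the example, and ti remains the convenient way to display the TCB contents from the shell.

#include <vxWorks.h>
#include <taskLib.h>
#include <stdio.h>

void checkTask (int tid)
{
    WIND_TCB *pTcb = taskTcb (tid);   /* NULL if tid does not name a task */

    if (pTcb == NULL)
        printf ("no such task: %#x\n", tid);
    /* the WIND_TCB fields (priority, errorStatus, ...) are declared in
       taskLib.h; which fields are safe to read is version dependent */
}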
VxWorks Task state transitions
Wind task state diagram and task transitions
See taskLib for the task...( ) routines. Any system call resulting in a transition may affect scheduling!
VxWorks task scheduling
The default algorithm is priority-based preemptive scheduling.
With a preemptive priority-based scheduler, each task has a
priority and the kernel ensures that the CPU is allocated to the
highest priority task that is ready to run. This scheduling method is
preemptive in that if a task that has higher priority than the current
task becomes ready to run, the kernel immediately saves the
current task's context and switches to the context of the higher
priority task
A round-robin scheduling algorithm attempts to share the CPU
fairly among all ready tasks of the same priority.
Note: VxWorks provides the following kernel scheduler facilities:
The VxWorks native scheduler, which provides options for preemptive priority-
based scheduling or round-robin scheduling.
A POSIX thread scheduler that provides POSIX thread scheduling support in
user-space (processes) while keeping the VxWorks task scheduling.
A kernel scheduler replacement framework that allows users to implement
customized schedulers. See documentation for more information.
VxWorks task scheduling (cont)
Priority-based preemptive
VxWorks supplies routines for preemption locks, which prevent context switches.
Figure 3-2 : Priority Preemption


Round-Robin
Figure 3-3 : Round-Robin Scheduling


Priority inversion - I
Consider there is a task L, with low priority. This task requires resource R. Consider that L is running and it acquires resource R. Now, there is another task H, with high priority. This task also requires resource R. Consider H starts after L has acquired resource R. Now H has to wait until L relinquishes resource R. If a medium-priority task now preempts L while it holds R, the high-priority task H is in effect made to wait for the medium-priority task as well; the priorities have been inverted.

In some cases, priority inversion can occur without causing immediate harm: the delayed execution of the high priority task goes unnoticed, and eventually the low priority task releases the shared resource. However, there are also many situations in which priority inversion can cause serious problems. If the high priority task is left starved of the resources, it might lead to a system malfunction or the triggering of pre-defined corrective measures, such as a watchdog timer resetting the entire system. The trouble experienced by the Mars Pathfinder lander is a classic example of problems caused by priority inversion in real-time systems.

The existence of this problem has been known since the 1970s, but there is
no fool-proof method to predict the situation. There are however many
existing solutions, of which the most common ones are:
Priority inversion - II
Some solutions to the problem:

Disabling all interrupts to protect critical sections
Brute force!

A priority ceiling
With priority ceilings, the shared mutex process (that runs the operating system
code) has a characteristic (high) priority of its own, which is assigned to the task
locking the mutex. This works well, provided the other high-priority task(s) that try to access the mutex do not have a priority higher than the ceiling.

Priority inheritance
Under the policy of priority inheritance, whenever a high priority task has to wait for some resource shared with an executing low priority task, the low priority task is temporarily assigned the priority of the highest-priority waiting task for the duration of its own use of the shared resource, thus keeping medium priority tasks from preempting the (originally) low priority task, and thereby affecting the waiting high priority task as well. Once the resource is released, the low priority task continues at its original priority level.
Priority Inversion
To illustrate an extreme example of priority inversion,
consider the executions of four periodic processes: a, b, c
and d; and two resources: Q and V

Process   Priority   Execution Sequence   Release Time
a         1          EQQQQE               0
b         2          EE                   2
c         3          EVVE                 2
d         4          EEQVE                4


Example of Priority Inversion
[Gantt chart: processes a, b, c, d on the vertical axis, time 0-18 on the horizontal axis; legend: Executing, Executing with Q locked, Executing with V locked, Preempted, Blocked. Process d has the highest priority.]
Priority Inheritance
If process p is blocking process q, then p runs with q's
priority. In the diagram below, q corresponds to d, while
both a and c can be p

[Gantt chart: the same task set scheduled with priority inheritance; processes a, b, c and d(q) over time 0-18.]
VxWorks Mutual-Exclusion semaphores and Priority inversion
The mutual-exclusion semaphore is a specialized binary semaphore
designed to address issues inherent in mutual exclusion, including
priority inversion, deletion safety, and recursive access to resources.
The fundamental behavior of the mutual-exclusion semaphore is
identical to the binary semaphore, with the following exceptions:
It can be used only for mutual exclusion.
It can be given only by the task that took it.
The semFlush( ) operation is illegal.
Priority inversion problem: illustrated on the next slide.
VxWorks: Priority inheritance
In the figure below, priority inheritance solves the problem of priority
inversion by elevating the priority of t3 to the priority of t1 during the time
t1 is blocked on the semaphore. This protects t3, and indirectly t1, from
preemption by t2. The following example creates a mutual-exclusion
semaphore that uses the priority inheritance algorithm:
semId = semMCreate (SEM_Q_PRIORITY | SEM_INVERSION_SAFE);
Other VxWorks facilities which implement priority inheritance: next page
Figure 3-11 : Priority Inversion
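
A short sketch of how such an inversion-safe mutex is typically wrapped around a shared resource; the resource and routine names below are invented for the example.

#include <vxWorks.h>
#include <semLib.h>

static SEM_ID resourceSem;
static int sharedCounter;     /* the protected resource */

STATUS resourceInit (void)
{
    resourceSem = semMCreate (SEM_Q_PRIORITY | SEM_INVERSION_SAFE);
    return (resourceSem == NULL) ? ERROR : OK;
}

void resourceUpdate (void)
{
    semTake (resourceSem, WAIT_FOREVER);   /* owner inherits the priority of a
                                              blocked higher-priority task */
    sharedCounter++;
    semGive (resourceSem);                 /* only the owning task may give it */
}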


VxWorks POSIX Mutexes and Condition Variables
Thread mutexes (mutual exclusion variables) and condition variables
provide compatibility with the POSIX 1003.1c standard. They perform
essentially the same role as VxWorks mutual exclusion and binary
semaphores (and are in fact implemented using them). They are available
with pthreadLib. Like POSIX threads, mutexes and condition variables
have attributes associated with them. Mutex attributes are held in a data
type called pthread_mutexattr_t, which contains two attributes, protocol
and prioceiling.
The protocol mutex attribute describes how the mutex variable deals with
the priority inversion problem described in the section for VxWorks mutual-
exclusion semaphores.
Attribute Name: protocol
Possible Values: PTHREAD_PRIO_INHERIT (default) and PTHREAD_PRIO_PROTECT
Access Routines: pthread_mutexattr_getprotocol( ) and pthread_mutexattr_setprotocol( )
Because it might not be desirable to elevate a lower-priority thread to a
too-high priority, POSIX defines the notion of priority ceiling. Mutual-
exclusion variables created with priority protection use the
PTHREAD_PRIO_PROTECT value.
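
A minimal sketch of creating a mutex with the inheritance protocol through the attribute routines named above; the helper name and the ceiling value mentioned in the comment are illustrative, and error checking is omitted.

#include <pthread.h>

static pthread_mutex_t lock;

int lockInit (void)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init (&attr);
    pthread_mutexattr_setprotocol (&attr, PTHREAD_PRIO_INHERIT);
    /* for PTHREAD_PRIO_PROTECT one would also set a ceiling, e.g.
       pthread_mutexattr_setprioceiling (&attr, 95); */
    return pthread_mutex_init (&lock, &attr);
}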
POSIX
POSIX supports priority-based scheduling, and has options
to support priority inheritance and ceiling protocols
Priorities may be set dynamically
Within the priority-based facilities, there are four policies:
FIFO: a process/thread runs until it completes or it is blocked
Round-Robin: a process/thread runs until it completes or it is blocked
or its time quantum has expired
Sporadic Server: a process/thread runs as a sporadic server
OTHER: an implementation-defined policy
For each policy, there is a minimum range of priorities that
must be supported; 32 for FIFO and round-robin
The scheduling policy can be set on a per-process and a per-thread basis
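
As an illustration, the sketch below creates a thread under the FIFO policy with an explicit priority; the value 50 and the routine names are example choices, and on many systems such a call requires suitable privileges.

#include <pthread.h>
#include <sched.h>
#include <stddef.h>

static void *worker (void *arg) { return arg; }    /* placeholder thread body */

int spawnFifoThread (pthread_t *tid)
{
    pthread_attr_t attr;
    struct sched_param param;

    pthread_attr_init (&attr);
    /* do not inherit the creator's policy; use the attributes set below */
    pthread_attr_setinheritsched (&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy (&attr, SCHED_FIFO);
    param.sched_priority = 50;
    pthread_attr_setschedparam (&attr, &param);

    return pthread_create (tid, &attr, worker, NULL);
}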
POSIX
Threads may be created with a system contention
option, in which case they compete with other system
threads according to their policy and priority
Alternatively, threads can be created with a process
contention option where they must compete with other
threads (created with a process contention) in the
parent process
It is unspecified how such threads are scheduled relative to
threads in other processes or to threads with global contention
A specific implementation must decide which to support
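
A small sketch of selecting the contention scope through the thread attributes; as noted above, an implementation may support only one of the two scopes, in which case the call can fail.

#include <pthread.h>

int useSystemScope (pthread_attr_t *attr)
{
    /* compete with all threads in the system ... */
    return pthread_attr_setscope (attr, PTHREAD_SCOPE_SYSTEM);
    /* ... or only with threads in the parent process:
       pthread_attr_setscope (attr, PTHREAD_SCOPE_PROCESS); */
}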
Other POSIX Facilities
POSIX allows:

priority inheritance, as well as the priority protect (ceiling) protocol (= ICPP), to be associated with mutexes
message queues to be priority ordered
functions for dynamically getting and setting a thread's
priority
threads to indicate whether their attributes should be
inherited by any child thread they create
