Multitasking (Overview)

Multitasking allows an RTOS to handle multiple tasks by scheduling them while avoiding issues like race conditions. It involves decomposing applications into small, schedulable tasks and using scheduling algorithms to ensure tasks meet deadlines. Each task has its own context and tasks share non-stack memory, requiring synchronization. A task can be in a ready, blocked, or running state and the scheduler manages state changes as tasks interact with the kernel.


RTOS

Multitasking
Introduction to Multitasking

• Simple software applications are typically designed to run sequentially, one instruction at a time, in a pre-
determined chain of instructions.
• This scheme is inappropriate for real-time embedded applications, which generally handle multiple inputs and
outputs within tight time constraints.

• Real-time systems have to be responsive and predictable, and must meet their deadlines.
• Hence, they are designed for concurrency.
• Concurrent design requires developers to decompose the application into small, schedulable tasks
or processes  Multitasking

• Multitasking is the ability of the RTOS to handle multiple activities within set deadlines by appropriately
scheduling multiple tasks, while avoiding race conditions and deadlocks and facilitating inter-task
communication.

 With multiple tasks running in the system, it is necessary to schedule them

The multitasking discussions are carried out in the context of uniprocessor environments. (WHY?)

…Introduction to Multitasking - concurrency
Concurrent Design:
• Two or more processes that are active at the same time on a system
• Usually access common resources, either by design or by accident
• We want such processes to execute and the resources to be used in such a way that the net effect is – processes
executing serially in some order. That is, we want concurrent processes to
 have a serializable schedule
 Be logically consistent
 But… this does not happen automatically!

• Concurrent design allows system multitasking to meet performance and timing requirements for RTES.

• Each task has its own context: PC, stack, and registers (SP and others); the rest is shared amongst all tasks.
• Shared data allows inter-task communication. Tasks share all global data with all other tasks, ISRs, and the kernel.
– Key point: non-stack memory is shared.
• Issue: a task can be preempted by another task (just like preemption by an ISR)  shared data problem!
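A minimal sketch of the shared data problem (the variable name and the preemption point are illustrative, not from the slides): two tasks update the same shared counter with a non-atomic read-modify-write, and an update is lost if preemption lands between the read and the write.

    /* Shared (non-stack) data, visible to every task and ISR. */
    volatile int shared_count = 0;

    void task_A(void)
    {
        int x = shared_count;   /* read                                            */
        x = x + 1;              /* modify                                          */
                                /* <-- if task_A is preempted here by another task */
                                /*     or an ISR that also updates shared_count,   */
                                /*     that other update is overwritten (lost)     */
        shared_count = x;       /* write back                                      */
    }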
…Introduction to Multitasking - Scheduling
 Work is split between ISRs and tasks that are prioritized
 Scheduler (a part of kernel) multitasks in such a way that many threads of execution appear to be
running concurrently; however, the kernel is actually interleaving executions sequentially, based
on a preset scheduling algorithm.
 The scheduler must ensure that the appropriate task runs at the right time.
 Various scheduling algorithms exist, each focusing on different selection criteria and decision
instants (i.e. when scheduling is applied).
 Careful selection is necessary, based on application requirements.
• Pre-emptive vs. non-preemptive
• Static vs. Dynamic
• System tasks and User tasks: Only user tasks to be scheduled.
• Interrupts are not scheduled. Interrupt has higher priority than any task. Interrupts themselves
are prioritized.
• Worst-case response time for the highest-priority task
– the sum of the execution times of all the ISRs; lower-priority tasks are simply preempted (a short worked example follows)
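As a quick illustration (the numbers are assumed, not taken from the slides): if the system has three ISRs with worst-case execution times of 20 µs, 50 µs and 80 µs, then in the worst case all three fire back-to-back just as the highest-priority task becomes ready, and its worst-case response time is 20 + 50 + 80 = 150 µs before it gets the CPU.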
…Introduction to Multitasking - System Tasks

When the kernel first starts, it creates its own set of system tasks and allocates the appropriate
priority for each from a set of reserved priority levels.
 Application should avoid using reserved priority levels for its tasks
 Application should not modify these reserved priority levels

Examples of system tasks include:

i. Initialization or startup task — initializes the system and creates and starts system tasks,
ii. Idle task — uses up processor idle cycles when no other activity is present (a minimal sketch follows this list),
iii. Logging task — logs system messages,
iv. Exception-handling task — handles exceptions,
v. Debug agent task — allows debugging with a host debugger, and
vi. Other system tasks — might be created during initialization, depending on which
other components are included with the kernel.
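A minimal sketch of an idle task (the low-power hook is an assumption; real kernels differ in the details):

    /* Idle task: scheduled only when no other task is ready to run. */
    void idle_task(void *arg)
    {
        (void)arg;
        for (;;) {
            /* optionally enter a low-power / wait-for-interrupt state here */
        }
    }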

Overview of a Process

Process Overview

What is a process or task?
 For the user:
• An instance of a program in execution
 For the OS:
• A unit of resource allocation and scheduling (a schedulable entity)

When are processes created?
 When a user logs in
 When a user runs a program on a desktop OS
 When the OS wants to perform a task
 A process may spawn another process to perform an independent (sub)task

Example of a process:
• "vi" being run by 5 users at the same time – one program, but 5 different instances, i.e. 5 different processes, each a schedulable entity

Processes and Resources
 The OS has to manage processes competing for common resources:
• CPU
• Memory
• IO devices
 It does this by maintaining process tables.
Terms - Process, Tasks and Threads
Process or Task:
• A process is often referred to as an instance of a program being executed; it is an independent thread of execution.
• Processes differ from threads in that they have their own private virtual memory spaces. Hence, they provide
better memory protection, at the expense of performance and memory overhead.

Threads:
• Multiple execution units which run in the context of a process, within the memory space of that single process
• Also known as lightweight processes
• The resources of the process are shared among its threads, including a common memory space
• Each thread has to be controlled separately (creation, termination, synchronization, etc.)
• Note: in POSIX terminology (e.g. Linux and Windows OS), a task is called a thread
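A minimal POSIX-threads sketch of this point (the global variable and the thread count are illustrative): both threads run inside one process and therefore see the same global data.

    #include <pthread.h>
    #include <stdio.h>

    int shared = 0;                       /* one copy, visible to all threads of the process */

    static void *worker(void *arg)
    {
        (void)arg;
        shared++;                         /* both threads touch the same variable            */
        return NULL;                      /* (unsynchronized; fine only for illustration)    */
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared = %d\n", shared);  /* the threads shared the process's memory space   */
        return 0;
    }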

Different RTOSes use different terminologies; for some RTOSes, "process" and "task" mean the same thing, and in that
case a thread is the lightweight process/task. Refer to the RTOS programmer's manual. For the sake of simplicity,
we will henceforth use "task" to mean either a task or a process.
Process Description

Process Image typically consists of:

1) User program
2) User data and stack
• Program data, user stack (modifiable)
3) System stack
• For system calls
4) Process/Task Control Block (PCB/TCB)
• Info needed by the OS to control the process

Task object
Upon creation, each task has an associated name, a unique ID, a priority, a task control block (TCB), a stack, and a task routine.

…Process Description - Process Control Block

PCB typically stores:


 Process Identification
• IDs of the processes, user and parent process, etc.
 Processor state info
• CPU registers including Program Counter, Stack pointer, etc. and status
 Process Control information
• Scheduling, state, priority, signal, memory management, resource access information
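A minimal sketch of what a TCB might contain, written in C (the field names and types are illustrative assumptions; every kernel defines its own layout):

    typedef enum { TASK_READY, TASK_RUNNING, TASK_BLOCKED } task_state_t;

    /* Hypothetical task control block: one per task, owned by the kernel. */
    typedef struct tcb {
        unsigned int  task_id;      /* unique ID                            */
        char          name[16];     /* task name                            */
        unsigned int  priority;     /* scheduling priority                  */
        task_state_t  state;        /* ready / running / blocked            */
        void         *pc;           /* saved program counter                */
        void         *sp;           /* saved stack pointer                  */
        unsigned long regs[16];     /* other saved CPU registers            */
        struct tcb   *next;         /* link for the ready or blocked list   */
    } tcb_t;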

Task States and Scheduling

• A task is an independent thread of execution that can compete, based on a predefined scheduling algorithm, with other concurrent tasks for processor execution time.
• A task is a schedulable entity.
• Hence, at any point of time each task exists in one of a small number of states.
• As the real-time embedded system runs, each task moves from one state to another, according to the logic of a simple Finite State Machine (FSM).

• Generally, three main states are used in typical preemptive-scheduling kernels:
 ready state
 blocked state
 running state
Note: some RTOSes may have a few more states. For example, VxWorks has states like "Pend" and "Delayed", but these are basically just "Blocked" states.

• The kernel implements each task's FSM (a minimal sketch follows this list).
• The kernel maintains the current state of all tasks.
• When any executing task makes a system call into the kernel, the kernel's scheduler first determines which
tasks need to change states and then makes those changes.
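A minimal sketch of the state changes the kernel performs, reusing the hypothetical tcb_t from the earlier TCB sketch (the function names are illustrative):

    /* Illustrative helpers a scheduler might call while driving each task's FSM. */
    void block_task(tcb_t *t)    { t->state = TASK_BLOCKED; /* move it to a wait list    */ }
    void unblock_task(tcb_t *t)  { t->state = TASK_READY;   /* move it to the ready list */ }
    void dispatch_task(tcb_t *t) { t->state = TASK_RUNNING; /* hand it the CPU           */ }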
…Task States and Scheduling

[Figure: task state diagram. Transition labels visible: "interrupt" (a running task is interrupted), "completion of ISR and task is currently highest priority" (it resumes running), and "completion of ISR and task no longer has highest priority" (it returns to the ready state).]
Task States and Scheduling (continue)

• The FSM of the tasks shown earlier is self-explanatory.

Worth mentioning, a running task can move to the blocked state in any of the following ways:
a) by making a call that requests an unavailable resource,
b) by making a call that requests to wait for an event to occur, or
c) by making a call to delay the task for some duration.

A blocked task can get unblocked in one of the following three ways:
a) a semaphore token for which the task is waiting is released,
b) a message on which the task is waiting arrives in a message queue, or
c) a time delay imposed on the task expires.
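A minimal sketch of the three blocking calls, written with FreeRTOS-style APIs purely for illustration (the slides do not name a specific RTOS; sem, queue and msg are handles/variables assumed to be created elsewhere):

    xSemaphoreTake(sem, portMAX_DELAY);         /* a) block until the semaphore token is released */
    xQueueReceive(queue, &msg, portMAX_DELAY);  /* b) block until a message arrives in the queue  */
    vTaskDelay(pdMS_TO_TICKS(100));             /* c) block for a time delay (100 ms here)        */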

Notes on Scheduling

• The RTOS scheduler chooses which task to run based on task priority

– The highest priority ready task is always selected


• RTOS manages scheduling of the tasks by keeping track of their state
• A preemptive RTOS will stop the execution of a lower priority task as soon as a higher-priority
task becomes ready to run
• A blocked task may get unblocked when the purpose for which it was blocked is "served", either by an ISR
or by another task.
• The unblocked task goes to either the ready state or the running state, depending on its priority relative to
the currently executing task.
• Unlike schedulers in GPOS, the RTOS scheduler does not attempt to be fair
– Low priority tasks can be starved; CPU can be hogged
– Responsibility of application designer (not OS!) to make sure all tasks get what they need
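A minimal sketch of priority-based selection, reusing the hypothetical tcb_t from the earlier sketch (one ready list per priority level; the list layout and priority count are assumptions):

    #define NUM_PRIORITIES 32

    /* Head of the ready list for each priority; higher index = higher priority (assumed). */
    extern tcb_t *ready_list[NUM_PRIORITIES];

    /* The scheduler always picks the head of the highest non-empty ready list. */
    tcb_t *highest_priority_ready(void)
    {
        for (int p = NUM_PRIORITIES - 1; p >= 0; p--) {
            if (ready_list[p] != NULL)
                return ready_list[p];
        }
        return NULL;    /* unreachable if an idle task is always kept ready */
    }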

Scheduling of Concurrent processes
• Concurrent processes usually access common resources, either by accident or by design
• We want such processes to execute and the resources to be used such that the net effect is of the
processes executing serially in some order. I.e. we want concurrent processes to –
 Have a serializable schedule
 Be logically consistent
• This does not happen automatically!

Serializability:
• A transaction schedule is said to be serializable or has the Serializability property, if its outcome is
equal to the outcome of its transactions executed serially, i.e., sequentially without overlapping in
time.
• Transactions are normally executed concurrently (they overlap), since this is the most efficient way.
• Serializability is the major correctness criterion for concurrent transactions' executions.
• It gives the highest level of isolation between transactions.

Serializable Schedule
• Consider two processes A and B sharing a common resource, variable M.
• Initially, Let M = 5

Process A: Process B:
A1: x=M B1: y=M
A2: x = x +1 B2: y=y-1
A3: M=x B3: M=y

If A and B run concurrently, we expect that no matter in what order they are scheduled and run,
M=5 should still hold at the end, which is the outcome of serial execution of either
{A1,A2,A3,B1,B2,B3} or {B1,B2,B3,A1,A2,A3}.
Execution order Result
Case-1: {A1,A2,A3,B1,B2,B3} M=5 (Correct)  serialized schedule
Case-2: {B1,B2,B3,A1,A2,A3} M=5 (Correct)  serialized schedule
Case-3: {A1,B1,B2,A2,A3,B3} M=4 (Incorrect)  Non-serialized schedule
Case-4: {A1,B1,B2,A2,B3,A3} M=6 (Incorrect)  Non-serialized schedule
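The same effect can be reproduced with POSIX threads (a hedged sketch: A and B become threads of one process, and the faulty interleaving is not guaranteed to occur on any particular run):

    #include <pthread.h>
    #include <stdio.h>

    int M = 5;                                   /* shared variable, initially 5 */

    static void *proc_A(void *arg) { (void)arg; int x = M; x = x + 1; M = x; return NULL; }  /* A1,A2,A3 */
    static void *proc_B(void *arg) { (void)arg; int y = M; y = y - 1; M = y; return NULL; }  /* B1,B2,B3 */

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, proc_A, NULL);
        pthread_create(&b, NULL, proc_B, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("M = %d\n", M);                   /* usually 5; 4 or 6 under a non-serializable interleaving */
        return 0;
    }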

Processor Modes

• How does an OS protect itself from user tasks, and user tasks from each other?
• By hardware support: the CPU has two hardware modes.


 User Mode:
• User code runs in this mode
 Privileged mode (Kernel mode):
• For OS code
• Entered on any interrupt (h/w or s/w)
• Certain instructions can be executed only in this mode; executing them in user mode causes an exception/interrupt
• System calls also cause an interrupt (trap), as sketched below
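A minimal sketch of how a system call typically enters privileged mode on an ARM Cortex-M target (the SVC number and wrapper name are assumptions; other architectures use a different trap instruction):

    /* Hypothetical user-mode wrapper: the SVC instruction raises a software interrupt,  */
    /* the CPU switches to privileged (handler) mode, the kernel's SVC handler performs  */
    /* the requested service, and execution then returns to user mode.                   */
    static inline void my_syscall_yield(void)
    {
        __asm volatile ("svc #0");   /* '0' identifies the requested service (assumed numbering) */
    }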

The Context Switch

• Every time a new task is created, the kernel also creates and maintains an associated Task
Control Block (TCB). TCBs are system data structures that the kernel uses to maintain task
context, which mainly consists of the state of the CPU registers.

• The current process may be interrupted:
1) By a h/w interrupt (e.g. clock, I/O device)
2) By a s/w interrupt (trap, system call)
3) By another process of higher priority

[Figure: timeline showing Task-1 executing, then a context switch (taking "Context Switch Time"), then Task-2 executing.]

A. When a task is interrupted by an interrupt (hardware or software), the system:

i. switches to privileged mode,
ii. saves the current context of the running task in its TCB,
iii. services the interrupt, and
iv. restores the context and resumes the task.
…The Context Switch
B. When the kernel’s scheduler determines that it needs to stop running task 1 and start
running task 2, it takes the following steps:
1) The kernel saves task 1’s context information in its TCB.
2) It loads task 2’s context information from its TCB, which becomes the current thread of
execution.
3) If the scheduler later needs to run task 1 again, another context switch takes place and task 1
continues from where it left off just before the context switch.
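A minimal sketch of steps 1) and 2), reusing the hypothetical tcb_t from earlier; save_context/restore_context stand in for the architecture-specific register save/restore, which is normally written in assembly:

    /* Hypothetical dispatcher routine: hand the CPU from 'from' to 'to'. */
    void context_switch(tcb_t *from, tcb_t *to)
    {
        save_context(from);     /* 1) store PC, SP and the other registers into task 1's TCB    */
        restore_context(to);    /* 2) load task 2's saved registers; execution now continues in */
                                /*    task 2 exactly where it previously left off                */
    }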

Context Switch Time: Time it takes for the scheduler to switch from one task to another .

• It is relatively insignificant compared to most operations that a task performs.


• If the application's design includes frequent context switching, it results in performance
overhead.
• When the scheduler determines a context switch is necessary, it relies on an associated
module, called the dispatcher, to make that switch happen.

…The Context Switch

Few points on Context switching worth mentioning here…


• A process switch always involves a context switch, but the converse is not true: a context switch may or
may not involve a process switch.
• During the interrupt processing, the RTOS (scheduler) may decide to switch the current
process.
• Process switch is more expensive than Context switch
• Disabling context switching is one of the mechanisms to protect critical section (To be studied
later)
• Context switching can become a major (and unavoidable) overhead when the number of context
switches is too high within a short period.

Points to remember

• Most real-time kernels provide task objects and task-management services that allow
developers to meet the requirements of real-time applications.
• Applications can contain system tasks or user-created tasks, each of which has a name, a
unique ID, a priority, a task control block (TCB), a stack, and a task routine.
• A real-time application is composed of multiple concurrent tasks that are independent threads
of execution, competing on their own for processor execution time.
• Tasks can be in one of three primary states during their lifetime: ready, running, and blocked.
• Priority-based, preemptive scheduling kernels that allow multiple tasks to be assigned to the
same priority use task-ready lists to help scheduled tasks run.

…Points to remember

• Serializability is the major correctness criterion for concurrent transactions' executions.


• The OS protects itself from user tasks, and user tasks from each other, by hardware support. User
processes run in user mode, whereas the OS runs in privileged mode (kernel mode).
• A process can be preempted by another, higher-priority task or interrupted by an interrupt. In
both cases, the context of the currently executing task is saved in its TCB.
• During the interrupt processing, the RTOS (scheduler) may decide to switch the current
process.
• Disabling context switching is one of the mechanisms to protect critical section (To be studied
later)
• Context switching can become a major (and unavoidable) overhead when the number of context
switches is too high within a short period.

Thank You
