RTOS Based Embedded Design (Unit 4 Btech)

This document provides an overview of Real-Time Operating Systems (RTOS) and their components, including tasks, processes, and threads, as well as the differences between an RTOS and a General Purpose Operating System (GPOS). It covers concepts such as multitasking, scheduling, inter-process communication, and the structure of an RTOS, highlighting the advantages and disadvantages of using an RTOS in embedded system design. Additionally, it discusses various types of scheduling and the importance of task prioritization in real-time applications.

UNIT IV: RTOS BASED EMBEDDED SYSTEM DESIGN

4.1 Introduction to basic concepts of RTOS: task, process & threads, interrupt routines in RTOS
4.2 Multiprocessing and multitasking
4.3 Preemptive and non-preemptive scheduling
4.4 Task communication: shared memory
4.5 Message passing
4.6 Inter-process communication
4.7 Synchronization between processes: semaphores, mailbox, pipes
4.8 Priority inversion, priority inheritance
4.9 Comparison of real-time operating systems: VxWorks, µC/OS-II, RTLinux
Difference Between RTOS & GPOS
• An RTOS is lightweight and small in size compared to a GPOS.
• A GPOS is made for high-end, general-purpose systems such as a personal computer, a workstation, or a server.
• The basic difference between a low-end system and a high-end system lies in its hardware configuration.
Contd..
• A General Purpose Operating System (GPOS) is a complete OS that supports process management, memory management, I/O devices, file systems and a user interface.
• In a GPOS, processes are created dynamically to perform user commands.
4.1 Introduction to Basic Concepts of RTOS
• An RTOS is a variant of OS that operates in a constrained environment in which computer memory and processing power are limited. Moreover, it often needs to provide its services within a definite amount of time.
• Hard, soft & firm RTOS
• Example RTOS: VxWorks, pSOS, Nucleus, RTLinux…

Difference between Program and Process

1. Program :
When we execute a program that was just compiled, the OS will generate a process
to execute the program. Execution of the program starts via GUI mouse clicks,
command line entry of its name, etc. A program is a passive entity as it resides in the
secondary memory, such as the contents of a file stored on disk. One program can
have several processes.
2. Process :
The term process (Job) refers to program code that has been loaded into a
computer’s memory so that it can be executed by the central processing unit (CPU).
A process can be described as an instance of a program running on a computer or
as an entity that can be assigned to and executed on a processor. A program
becomes a process when loaded into memory and thus is an active entity.
Sr.No — Program vs Process

1. Program: contains a set of instructions designed to complete a specific task.
   Process: an instance of an executing program.

2. Program: a passive entity, as it resides in secondary memory.
   Process: an active entity, as it is created during execution and loaded into main memory.

3. Program: exists at a single place and continues to exist until it is deleted.
   Process: exists for a limited span of time, as it gets terminated after the completion of its task.

4. Program: a static entity.
   Process: a dynamic entity.

5. Program: has no resource requirement; it only needs memory space for storing its instructions.
   Process: has a high resource requirement; it needs resources like the CPU, memory address space, and I/O during its lifetime.

6. Program: does not have any control block.
   Process: has its own control block, called the Process Control Block (PCB).
4.1 Operating System
A system software, which provides:
• Task management — creation, block, run, delay, suspend, resume, deletion
• Memory management — allocation, freeing, de-allocation
• Device management — configure, initiate, register with OS, read, listen, write, accept, deregister
4.1 Operating System (contd.)
A system software, which also provides:
• I/O device subsystem management — display (LCD, touch screen), printer, USB ports
• Network device subsystem management — Ethernet, Internet, WiFi
• Middleware — TCP/IP stack for telecommunications
• Key applications — clock, mail, Internet Explorer, search, access to external libraries such as maps
4.1 RTOS
▪ A real-time OS (RTOS) is an intermediate layer between hardware devices and application software
▪ "Real-time" means keeping deadlines, not speed
▪ Advantages of an RTOS in SoC design:
  • Shorter development time
  • Less porting effort
  • Better reusability
▪ Disadvantages:
  • More system resources needed
  • Future development is confined to the chosen RTOS
4.1 RTOS
• A multitasking operating system with hard or soft real-time constraints
• An OS for systems that have time limits for servicing tasks and interrupts
• Enables defining of time constraints
• Enables execution of concurrent tasks (or processes or threads)
• Enables setting of the rules
• Assigning priorities
• Predictable latencies
4.1 Soft and Hard Real Time OS
▪ Soft real-time
• Tasks are performed by the system as fast as possible, but
tasks don’t have to finish by specific times
• Priority scheduling
• Multimedia streaming

▪ Hard real-time
• Tasks have to be performed correctly and on time
• Deadline scheduling
• Aircraft controller, Nuclear reactor controller
4.1 Structure of an RTOS (layered, top to bottom)

Applications
RTOS kernel
BSP
Custom hardware
4.1 Components of RTOS
• The most important component of an RTOS is its kernel (monolithic or microkernel).
• The BSP (Board Support Package) makes an RTOS target-specific: it is processor-specific code for the processor on which we want the RTOS to run.
4.1 RTOS KERNEL
4.1 Tasks
• A task is defined as an embedded program computational unit that runs on the CPU under the state control of the kernel of an OS.
• It has a state, which at an instant is defined by its status (running, blocked, or finished) and by its structure: its data, objects, resources and control block.
4.1 Task
▪ A task is an instance of a program
▪ A task thinks that it has the CPU all to itself
▪ Each task is assigned a unique priority
▪ Each task has its own stack
▪ Each task has its own set of CPU registers (backed up in its stack)
▪ The task is the basic unit for scheduling
▪ Task status is stored in the Task Control Block (TCB)
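The TCB fields introduced below can be sketched as a C struct. This is a minimal sketch: the field names and layout are illustrative, not the actual uC/OS-II OS_TCB definition.

```c
#include <stdint.h>

/* Illustrative task states and TCB layout (not the real OS_TCB). */
typedef enum { TASK_DORMANT, TASK_READY, TASK_RUNNING, TASK_WAITING } task_state_t;

typedef struct tcb {
    task_state_t state;      /* current task state                      */
    uint32_t    *stack_ptr;  /* saved stack pointer of the task         */
    uint8_t      priority;   /* unique priority, doubles as the task ID */
    struct tcb  *next;       /* link pointer to the next TCB            */
} tcb_t;

/* Create a TCB for a freshly registered (not yet started) task. */
tcb_t tcb_init(uint8_t prio) {
    tcb_t t = { TASK_DORMANT, 0, prio, 0 };
    return t;
}
```

The link pointer is what lets the kernel chain TCBs into ready lists or a global task list, which is why it sits next to the priority field.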
4.1 Task Structure
A task is structured as either:
▪ an infinite loop, or
▪ a self-deleting function.

Task with infinite-loop structure:

    void ExampleTask(void *pdata)
    {
        for (;;) {
            /* User Code */
            /* System Call */
            /* User Code */
        }
    }

Task that deletes itself:

    void ExampleTask(void *pdata)
    {
        /* User Code */
        OSTaskDel(OS_PRIO_SELF);
    }
4.1 Task States
A task moves between five states:
• Dormant → Ready: task create
• Ready → Running: the task becomes the highest-priority ready task
• Running → Ready: the task is preempted
• Running → Waiting: the task pends on an event
• Waiting → Ready: the task gets the event
• Running → ISR: an interrupt occurs; on interrupt exit the task resumes
• Ready / Running / Waiting → Dormant: task delete
4.1 Task Priority
▪ Unique priorities (also used as task identifiers)
▪ 64 priorities max (8 reserved)
▪ Always run the highest-priority task that is READY
▪ Priorities can be changed dynamically
4.1 Task Control Block
uC/OS-II uses a TCB to keep a record of each task:
• State
• Stack pointer
• Priority
• Misc…
• Link pointer
4.1 Task Control Block (cont.)
4.1 PROCESS
A process is a program in execution.
• Starting a new process is a heavy job for the OS: memory has to be allocated, and lots of data structures and code must be copied.
• This includes memory pages (in virtual memory and in physical RAM) for code, data, stack, heap, and for file and other descriptors; registers in the CPU; queues for scheduling; signals and IPC; etc.
4.1 PROCESS (contd.)
• A process consists of a sequentially executable program under state control by an OS.
• The state during the running of a process is represented by the process state, the process structure (its data, objects and resources) and the process control block.
• A process runs when scheduled by the OS, which gives control of the CPU to the process.
4.1 Process
• A process runs instructions, and continuous changes of its state take place as the PC changes.
• A process is defined as a computation unit that executes on a CPU and whose state changes under the control of the kernel of an OS.
• Process status: running, blocked or finished.
• Process structure: its data, objects, resources and PCB.
4.1 Process State
A process changes state as it executes:
• new → ready: admitted
• ready → running: scheduler dispatch
• running → ready: interrupt
• running → waiting: I/O or event wait
• waiting → ready: I/O or event completion
• running → terminated: exit
4.1 Process States
• New - The process is being created.

• Running - Instructions are being executed.

• Waiting - Waiting for some event to occur.

• Ready - Waiting to be assigned to a processor.

• Terminated - Process has finished execution.
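The five-state model above can be captured as a small transition function. This is a sketch: the event names are illustrative labels for the transitions listed in the state diagram.

```c
/* States and events of the classic five-state process model. */
typedef enum { P_NEW, P_READY, P_RUNNING, P_WAITING, P_TERMINATED } pstate_t;
typedef enum { EV_ADMIT, EV_DISPATCH, EV_INTERRUPT,
               EV_IO_WAIT, EV_IO_DONE, EV_EXIT } pevent_t;

/* Returns the next state for a given (state, event) pair, or -1 if the
 * transition is not allowed by the five-state model. */
int next_state(pstate_t s, pevent_t e) {
    switch (s) {
    case P_NEW:     return e == EV_ADMIT    ? P_READY   : -1;
    case P_READY:   return e == EV_DISPATCH ? P_RUNNING : -1;
    case P_RUNNING:
        if (e == EV_INTERRUPT) return P_READY;      /* preempted        */
        if (e == EV_IO_WAIT)   return P_WAITING;    /* blocks on I/O    */
        if (e == EV_EXIT)      return P_TERMINATED; /* finished         */
        return -1;
    case P_WAITING: return e == EV_IO_DONE  ? P_READY : -1;
    default:        return -1;                      /* terminated: none */
    }
}
```

Note that a waiting process can never be dispatched directly; it must first become ready when its I/O or event completes.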



4.1 Process Control Block
Contains information associated with each process:
• Process state - e.g. new, ready, running etc.
• Process number - process ID
• Program counter - address of the next instruction to be executed
• CPU registers - general-purpose registers, stack pointer etc.
• CPU scheduling information - process priority, queue pointers
• Memory management information - base/limit information
• Accounting information - time limits, process number
• I/O status information - list of I/O devices allocated
4.1 Process Scheduling
A process (PCB) moves from queue to queue. When does it move, and where? That is a scheduling decision.

4.1 Process Scheduling Queues
• Job queue - set of all processes in the system
• Ready queue - set of all processes residing in main memory, ready and waiting to execute
• Device queues - sets of processes waiting for an I/O device
• Processes migrate between the various queues
• Queue structures - typically a linked list, circular list etc.

4.1 Thread
A thread is a "lightweight" process, in the sense that different threads share the same address space.
• Threads share global and "static" variables, file descriptors, signal bookkeeping, code area, and heap, but each has its own thread status, program counter, registers, and stack.
• Threads have shorter creation and context-switch times, and faster IPC.
• A context switch must save the state of the currently running task (registers, stack pointer, PC, etc.) and restore that of the new task.
4.1 Thread (contd.)
• A thread consists of sequentially executable program code under state control by an OS.
• The state information of a thread is represented by the thread state (started, running, blocked or finished).
• Thread structure: its data, objects, a subset of the process resources, and the thread stack.
4.1 Multithreading Operating System
• Multithreading operating systems have the ability to execute different parts of a program, called threads, simultaneously.
• These threads are mutually exclusive parts of the program and can be executed simultaneously without interfering with each other.
4.1 Multithreading Operating System
Advantages of a multithreaded operating system:
• Increases CPU utilization by reducing idle time
• Mutually exclusive threads of the same application can be executed simultaneously

Disadvantages of a multithreaded operating system:
• If not properly programmed, multiple threads can interfere with each other when sharing hardware resources such as caches.
• There is a chance that the computer might hang while handling multiple threads; even so, multithreading remains one of the best features of an operating system.
4.1 Interrupt
▪ Interrupt controller
A device that accepts up to 22 interrupt sources from other peripherals and signals the ARM processor.
▪ Interrupt handler
A routine executed whenever an interrupt occurs. It determines the interrupt source and calls the corresponding ISR. Usually provided by the OS.
▪ Interrupt service routine (ISR)
A service routine specific to each interrupt source. Usually provided by the hardware manufacturer.
4.1 Interrupt
▪ A peripheral sends an interrupt request to the interrupt controller
▪ The interrupt controller sends the masked request to the ARM processor
▪ The ARM processor executes the interrupt handler to determine the interrupt source

(Figure: Peripheral A and Peripheral B → Interrupt Controller → ARM Processor, over the bus)
4.1 Interrupt
▪ Default interrupt handler: uHALr_TrapIRQ()
1. Save all registers in APCS-compliant manner
2. Call StartIRQ(), if defined
3. Determine interrupt source, call the corresponding
interrupt service routine (ISR)
4. Call FinishIRQ(), if defined
5. Return from interrupt
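Step 3 of the handler (determine the source and call its ISR) is often a table lookup over a pending-interrupt register. A minimal sketch, assuming the pending word is mirrored in a variable and ISR entry points are registered in a table (all names here are illustrative, not uHAL API):

```c
#include <stdint.h>

#define NUM_IRQ 22                        /* sources the controller accepts */

typedef void (*isr_t)(void);

static isr_t   isr_table[NUM_IRQ];        /* one registered ISR per source  */
static uint32_t pending;                  /* simulated pending-IRQ register */

static int  uart_hits = 0;                /* demo: count UART interrupts    */
static void uart_isr(void) { uart_hits++; }

/* Dispatch every pending interrupt to its registered ISR (step 3 above);
 * acknowledge each source by clearing its pending bit. Returns the number
 * of ISRs called. */
int dispatch_irqs(void) {
    int served = 0;
    for (int i = 0; i < NUM_IRQ; i++) {
        if ((pending & (1u << i)) && isr_table[i]) {
            isr_table[i]();               /* call the source-specific ISR */
            pending &= ~(1u << i);        /* acknowledge the source       */
            served++;
        }
    }
    return served;
}
```

On real hardware, `pending` would be a read of the controller's status register and the acknowledge would be a write to its clear register.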
4.1 Interrupt
▪ While an interrupt is being handled, no further interrupts are accepted
▪ To achieve real-time behaviour, the period during which interrupts are disabled should be as short as possible
▪ Do only the necessary work in the ISR; leave other jobs to a deferred task
4.2 Multiprocessing Operating System
• A multiprocessing system is a computer hardware configuration that includes more than one independent processing unit. The term multiprocessing is generally used to refer to large computer hardware complexes found in major scientific and commercial applications.
• The user can view the operating system as a powerful uniprocessor system.
4.2 Multiprocessing Operating System
Advantages of a multiprocessing operating system:
• Due to the multiplicity of processors, multiprocessor systems have better performance (shorter response times and higher throughput) than single-processor systems.
• In a properly designed multiprocessor system, if one of the processors breaks down, the other processor(s) automatically take over the system workload until repairs are made. Hence, a complete breakdown of such systems can be avoided.
4.2 Multiprocessing Operating System
Disadvantages of a multiprocessing operating system:
• Expensive to procure and maintain, so these systems are not suited to daily use.
• Requires immense overhead to schedule, balance, and coordinate the input, output, and processing activities of multiple processors.
4.2 Multitasking Operating System
• Multitasking operating systems allow more than one program to run at a time.
• A multitasking OS gives you the perception of two or more tasks/jobs/processes running at the same time.
• It does this by dividing system resources amongst these tasks/jobs/processes,
• and by switching between the tasks/jobs/processes very rapidly, over and over again, while they execute.
4.2 Multitasking Operating System
• There are two basic types of multitasking: preemptive and cooperative.
• In preemptive multitasking, the operating system parcels out CPU time slices to each program.
• In cooperative multitasking, each program can control the CPU for as long as it needs it. If a program is not using the CPU, however, it can allow another program to use it temporarily.
4.2 Multitasking Operating System
Advantages of a multitasking operating system:
• Multitasking increases CPU utilization.
• Multiple tasks can be handled at a given time.

Disadvantages of a multitasking operating system:
• To perform multitasking, the speed of the processor must be very high.
• There is a chance that the computer might hang while handling multiple processes; even so, multitasking remains one of the best features of an operating system.
4.3 Schedulers
• Also called “dispatchers”
• Schedulers are parts of the kernel responsible
for determining which task runs next
• Most real-time kernels use priority-based
scheduling
– Each task is assigned a priority based on its
importance
– The priority is application-specific
4.3 Priority-Based Kernels
There are two types:
– Non-preemptive
– Preemptive
4.3 Non-Preemptive Kernels
• Perform “cooperative multitasking”
– Each task must explicitly give up control of the CPU
– This must be done frequently to maintain the illusion of
concurrency
• Asynchronous events are still handled by ISRs
– ISRs can make a higher-priority task ready to run
– But ISRs always return to the interrupted tasks
4.3 Non-Preemptive Kernels (cont.)
4.3 Advantages of Non-Preemptive Kernels
• Interrupt latency is typically low
• Can use non-reentrant functions without fear of corruption by another task
  – Because each task runs to completion before it relinquishes the CPU
  – However, non-reentrant functions should not be allowed to give up control of the CPU
• Task-level response time is given by the time of the longest task
  – Much lower than with foreground/background systems
4.3 Advantages of Non-Preemptive Kernels

• Less need to guard shared data through the


use of semaphores
– However, this rule is not absolute
– Shared I/O devices can still require the use of
mutual exclusion semaphores
– A task might still need exclusive access to a printer
4.3 Disadvantages of Non-Preemptive Kernels
• Responsiveness
– A higher priority task might have to wait for a long
time
– Response time is nondeterministic
• Very few commercial kernels are non-
preemptive
4.3 Preemptive Kernels
• The highest-priority task ready to run is always
given control of the CPU
– If an ISR makes a higher-priority task ready, the
higher-priority task is resumed (instead of the
interrupted task)
• Most commercial real-time kernels are
preemptive
4.3 Preemptive Kernels (cont.)
4.3 Advantages of Preemptive Kernels
• Execution of the highest-priority task is deterministic
• Task-level response time is minimized

Disadvantages of Preemptive Kernels
• Should not use non-reentrant functions unless exclusive access to these functions is ensured
4.3 Task Scheduling
▪ Preemptive: a low-priority task is running when an interrupt occurs. The ISR makes a high-priority task ready, so on interrupt exit the kernel resumes the high-priority task instead of the interrupted one. The low-priority task continues only after the high-priority task relinquishes the CPU.
4.3 Task Scheduling
▪ Non-preemptive: a low-priority task is running when an interrupt occurs. The ISR makes a high-priority task ready, but the ISR returns to the interrupted low-priority task. The high-priority task runs only after the low-priority task relinquishes the CPU.
4.3 Non-Preemptive Scheduling
• Why non-preemptive? Non-preemptive scheduling is more efficient than preemptive scheduling, since preemption incurs context-switching overhead, which can be significant in fine-grained multithreading systems.
4.3 Basic Real-Time Scheduling
• First Come First Served (FCFS)
• Round Robin (RR)
• Shortest Job First (SJF)
4.3 First Come First Served (FCFS)
• Simple "first in, first out" queue
• Long average waiting time
• Penalizes I/O-bound processes
• Non-preemptive
4.3 Example:
Round-Robin Scheduling
4.3 Round Robin (RR)
• FCFS + preemption with a time quantum
• Performance (average waiting time) depends strongly on the size of the time quantum.
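The effect of the quantum can be checked with a small simulation. This sketch assumes all tasks arrive at t = 0 and there are at most 16 tasks; the burst values in the usage note are illustrative.

```c
/* Simulate round-robin on n CPU bursts (all arriving at t = 0) with the
 * given time quantum; returns the total waiting time over all processes,
 * where waiting time = completion time - burst time. */
int rr_total_wait(const int *burst, int n, int quantum) {
    int rem[16];                          /* remaining burst per process */
    int wait = 0, t = 0, done = 0;
    for (int i = 0; i < n; i++) rem[i] = burst[i];
    while (done < n) {
        for (int i = 0; i < n; i++) {     /* one scheduling round        */
            if (rem[i] == 0) continue;
            int slice = rem[i] < quantum ? rem[i] : quantum;
            rem[i] -= slice;
            t += slice;
            if (rem[i] == 0) {            /* process i just completed    */
                wait += t - burst[i];
                done++;
            }
        }
    }
    return wait;
}
```

For bursts {3, 3, 3}, quantum 1 gives a total wait of 15 time units, while quantum 3 (which degenerates to FCFS here) gives 9: a larger quantum means fewer context switches, but less interleaving between tasks.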
4.3 Shortest Job First (SJF)
• Optimal with respect to average waiting time.
• Requires profiling of the execution times of tasks.
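To see why shortest-first helps, compare the total waiting time for the same bursts run back-to-back in arrival order (FCFS) versus sorted ascending (SJF). The burst values in the usage note are illustrative.

```c
/* Total waiting time when bursts run back-to-back in the given order
 * (FCFS). For SJF, pass the same bursts sorted in ascending order. */
int fcfs_total_wait(const int *burst, int n) {
    int wait = 0, t = 0;
    for (int i = 0; i < n; i++) {
        wait += t;       /* process i waited until everyone before it ran */
        t += burst[i];
    }
    return wait;
}
```

For bursts {6, 8, 7, 3}, arrival order gives 0 + 6 + 14 + 21 = 41 units of waiting, while the SJF order {3, 6, 7, 8} gives 0 + 3 + 9 + 16 = 28: putting short jobs first makes fewer processes wait behind long ones.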
4.4 Shared Memory Communication
Two programs, each with its own virtual address space (code, data, heap, stack), map a common shared region into both address spaces.
• Communication occurs by "simply" reading/writing the shared pages
  – Really low-overhead communication
  – Introduces complex synchronization problems
4.5 Message Passing Communication
• Messages are collections of data objects and their structures.
• Messages have a header containing system-dependent control information, and a message body that can be of fixed or variable size.
• When a process interacts with another, two requirements have to be satisfied: synchronization and communication.
4.5 What is message passing?
• Data transfer plus synchronization: Process 0 asks "May I send?", Process 1 answers "Yes", and only then does the data move from Process 0 to Process 1.
• Requires cooperation of sender and receiver
• The cooperation is not always apparent in the code
4.5 Quick review of MPI Message passing

• Basic terms
– nonblocking - Operation does not wait for
completion
– synchronous - Completion of send requires initiation
(but not completion) of receive
– ready - Correct send requires a matching receive
– asynchronous - communication and computation
take place simultaneously, not an MPI concept
(implementations may use asynchronous methods)
4.5 Basic Send/Receive modes
• MPI_Send
– Sends data. May wait for matching receive. Depends on
implementation, message size, and possibly history of
computation
• MPI_Recv
– Receives data
• MPI_Ssend
– Waits for matching receive to start
• MPI_Rsend
– Expects matching receive to be posted
4.5 Nonblocking Modes
• MPI_Isend
– Does not complete until send buffer available
• MPI_Irsend
– Expects matching receive to be posted when called
• MPI_Issend
– Does not complete until buffer available and matching
receive posted
• MPI_Irecv
– Does not complete until receive buffer available (e.g.,
message received)
4.5 Completion
• MPI_Test
– Nonblocking test for the completion of a nonblocking
operation
• MPI_Wait
– Blocking test
• MPI_Testall, MPI_Waitall
– For all in a collection of requests
• MPI_Testany, MPI_Waitany
• MPI_Testsome, MPI_Waitsome
• MPI_Cancel (MPI_Test_cancelled)
4.5 Message Passing
(Figure: messages travelling between CPU 1 and CPU 2)
4.5 Message Passing Communication
Message passing must provide both synchronization and communication.

Fixed-length messages:
• Easy to implement
• Minimize processing and storage overhead

Variable-length messages:
• Require dynamic memory allocation, so fragmentation can occur
4.5 Basic Communication Primitives
• Two generic message-passing primitives for sending and receiving messages:
    send (destination, message)
    receive (source, message)
  where source or destination = {process name, link, mailbox, port}

Addressing - direct and indirect

1) Direct send/receive communication primitives
Communicating entities are addressed by process names (global process identifiers).
4.5 Basic Communication Primitives
A global process identifier can be made unique by concatenating the network host address with the locally generated process ID. This scheme implies that only one direct logical communication path exists between any pair of sending and receiving processes.

Symmetric addressing: both processes have to be explicitly named in the communication primitives.
Asymmetric addressing: only the sender needs to indicate the recipient.

2) Indirect send/receive communication primitives
Messages are not sent directly from sender to receiver, but to a shared data structure.
4.5 Basic Communication Primitives
Multiple clients might request services from one of multiple servers; for this we use mailboxes: an abstraction of a finite-size FIFO queue maintained by the kernel.
4.5 Synchronization and Buffering
There are three typical combinations:

1) Blocking send, blocking receive
Both receiver and sender are blocked until the message is delivered (provides tight synchronization between processes).

2) Non-blocking send, blocking receive
The sender can continue execution after sending a message; the receiver is blocked until the message arrives (the most useful combination).

3) Non-blocking send, non-blocking receive
Neither party waits.
4.6 Inter-Process Communication

• Processes can communicate through shared areas of memory


– the Mutual Exclusion problem and Critical Sections
• Semaphores - a synchronisation abstraction
• Monitors - a higher level abstraction
• Inter-Process Message Passing much more useful for information
transfer
– can also be used just for synchronisation
– can co-exist with shared memory communication
• Two basic operations : send(message) and receive(message)
– message contents can be anything mutually comprehensible
• data, remote procedure calls, executable code etc.
– usually contains standard fields
• destination process ID, sending process ID for any reply
• message length
• data type, data etc.
• Fixed-length messages:
  – simple to implement - can have a pool of standard-sized buffers
    • low overheads and efficient for small lengths
  – copying overheads if the fixed length is too long
  – can be inconvenient for user processes with a variable amount of data to pass
    • may need a sequence of messages to pass all the data
    • long messages may be better passed another way, e.g. FTP
    • copying is probably involved, sometimes multiple copies into the kernel and out
• Variable-length messages:
  – more difficult to implement - may need a heap with garbage collection
    • more overheads and less efficient, memory fragmentation
  – more convenient for user processes
4.6 IPC – unicast and multicast
• In distributed computing, two or more processes engage in IPC
using a protocol agreed upon by the processes. A process may
be a sender at some points during a protocol, a receiver at
other points.

• When communication is from one process to a single other


process, the IPC is said to be a unicast, e.g., Socket
communication. When communication is from one process to
a group of processes, the IPC is said to be a multicast, e.g.,
Publish/Subscribe Message model, a topic that we will
explore in a later chapter.
4.6 Unicast vs. Multicast
4.6 Interprocess Communications in Distributed
Computing
4.6 INTERPROCESS COMMUNICATION
• Processes executing concurrently in the operating system may be either
independent or cooperating processes.

• Reasons for providing an environment that allows process cooperation.


1) Information Sharing
Several users may be interested in the same piece of information.
2) Computational Speed up
Process can be divided into sub tasks to run faster, speed up can be
achieved if the computer has multiple processing elements.
3) Modularity
Dividing the system functions into separate processes or threads.
4) Convenience
Even an individual user may work on many tasks at the same time.
4.6 COMMUNICATION MODELS
• Cooperating processes require IPC mechanism that allow them to
exchange data and information. Communication can take place either by
Shared memory or Message passing Mechanisms.
Shared Memory:
1) Processes exchange information by reading and writing data to the shared region.
2) Faster than message passing, as it can be done at memory speeds within a computer.
3) System calls are required only to establish the shared memory regions.

Message Passing:
• A mechanism that allows processes to communicate and synchronize their actions without sharing the same address space; it is particularly useful in distributed environments.
4.6 Interprocess communication
• OS provides interprocess communication
mechanisms:
– various efficiencies;
– communication power.
• Interprocess communication (IPC): OS provides
mechanisms so that processes can pass data.
• Two types of semantics:
– blocking: sending process waits for response;
– non-blocking: sending process continues.
4.6 Interprocess communication
• Shared memory:
– processes have some memory in common;
– must cooperate to avoid destroying/missing
messages.

• Message passing:
– processes send messages along a communication
channel---no common address space.
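A minimal message-passing sketch using a POSIX pipe between a parent and a child process: the payload travels through a kernel channel, not through a common address space. The function name and message text are illustrative.

```c
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

/* Send 'msg' from a child process to the parent over a pipe; the parent
 * copies what it receives into 'out' (of size outlen). Returns 0 on
 * success, -1 on failure. */
int pipe_roundtrip(const char *msg, char *out, size_t outlen) {
    int fd[2];
    if (pipe(fd) != 0) return -1;
    pid_t pid = fork();
    if (pid < 0) return -1;
    if (pid == 0) {                        /* child: the sender          */
        close(fd[0]);                      /* close unused read end      */
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                          /* parent: the receiver       */
    ssize_t n = read(fd[0], out, outlen);  /* blocks until data arrives  */
    close(fd[0]);
    waitpid(pid, NULL, 0);                 /* reap the child             */
    return n > 0 ? 0 : -1;
}
```

The blocking `read` here is exactly the "non-blocking send, blocking receive" combination described in section 4.5.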
4.6 Blocking, Deadlock, and Timeouts
• Blocking operations issued in the wrong sequence can cause deadlocks.
• Deadlocks should be avoided; alternatively, timeouts can be used to detect them.
• Example: P1 is waiting for P2's data while P2 is waiting for P1's data.
4.6 Using Threads for Asynchronous IPC
• When using an IPC programming interface, it is important to note whether the operations are synchronous or asynchronous.
• If only blocking operations are provided for send and/or receive, then it is the programmer's responsibility to use child processes or threads if asynchronous operations are desired.
4.6 Deadlocks and Timeouts
• Connect and receive operations can result in indefinite
blocking
• For example, a blocking connect request can result in the
requesting process to be suspended indefinitely if the
connection is unfulfilled or cannot be fulfilled, perhaps as a
result of a breakdown in the network .
• It is generally unacceptable for a requesting process to “hang”
indefinitely. Indefinite blocking can be avoided by using
timeout.
• Indefinite blocking may also be caused by a deadlock
4.7 Semaphores
• A semaphore is a key that your code acquires in order
to continue execution
• If the key is already in use, the requesting task is
suspended until the key is released
• There are two types
– Binary semaphores
• 0 or 1
– Counting semaphores
• >= 0
4.7 Semaphore Operations
• Initialize (or create)
– Value must be provided
– Waiting list is initially empty
• Wait (or pend)
– Used for acquiring the semaphore
– If the semaphore is available (the semaphore value is positive), the
value is decremented, and the task is not blocked
– Otherwise, the task is blocked and placed in the waiting list
– Most kernels allow you to specify a timeout
– If the timeout occurs, the task will be unblocked and an error code will
be returned to the task
4.7 Semaphore Operations
• Signal (or post)
– Used for releasing the semaphore
– If no task is waiting, the semaphore value is
incremented
– Otherwise, make one of the waiting tasks ready to
run but the value is not incremented
– Which waiting task to receive the key?
• Highest-priority waiting task
• First waiting task
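The wait/signal rules above can be sketched as a toy counting semaphore. To stay single-threaded, this sketch has `ksem_wait_try()` report whether the caller would have blocked instead of actually blocking; it is illustrative, not a kernel implementation.

```c
/* Toy counting semaphore: value = available "keys",
 * waiting = number of tasks queued on the semaphore. */
typedef struct {
    int value;
    int waiting;
} ksem_t;

void ksem_init(ksem_t *s, int initial) { s->value = initial; s->waiting = 0; }

/* Wait (pend): returns 1 if the semaphore was acquired, 0 if the caller
 * would block (in which case it joins the waiting list). */
int ksem_wait_try(ksem_t *s) {
    if (s->value > 0) { s->value--; return 1; }
    s->waiting++;
    return 0;
}

/* Signal (post): wake one waiter if any, otherwise increment the value. */
void ksem_signal(ksem_t *s) {
    if (s->waiting > 0) s->waiting--;  /* a waiter gets the key directly */
    else s->value++;
}
```

Note the signal rule from the slide: when a waiter is woken, the value is not incremented, because the key passes straight to the woken task.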
4.7 Semaphore Example
(pseudo-code; Wait and Signal stand for the kernel's pend and post calls)

    Semaphore *s;
    Time timeout;
    INT8U error_code;

    timeout = 0;                    /* 0 = wait forever       */
    Wait(s, timeout, &error_code);  /* acquire the semaphore  */
    /* access shared data */
    Signal(s);                      /* release the semaphore  */
4.7 Applications of Binary Semaphores

• Suppose task 1 prints “I am Task 1!”


• Task 2 prints “I am Task 2!”
• If they were allowed to print at the same time,
it could result in:
I Ia amm T Tasask k1!2!
• Solution:
– Binary semaphore
4.7 Semaphore
▪ A semaphore serves as a key to the resource
▪ A flag represents the status of the resource
▪ Prevents re-entering a critical region
▪ Can be extended to a counting semaphore

Example: Task 1 and Task 2 both send data via RS-232; each must request the semaphore before using the port and release it afterwards.
4.7 Semaphore

Using Semaphore in uC/OS-II


▪ OSSemCreate()
▪ OSSemPend()
▪ OSSemPost()
▪ OSSemQuery()
▪ OSSemAccept()
4.7 Message Mailbox
▪ Used for inter-process communication (IPC)
▪ A pointer in the mailbox points to the transmitted message
▪ Tasks and ISRs post a message to the mailbox; a task pends on the mailbox to receive it
4.7 Message Queues
▪ An array of mailboxes
▪ FIFO & LIFO configuration
▪ Tasks and ISRs post messages to the queue; a task pends on the queue to receive them
4.7 MailBox & Queues
Using a mailbox in uC/OS-II:
▪ OSMboxCreate()
▪ OSMboxPend()
▪ OSMboxPost()
▪ OSMboxQuery()
▪ OSMboxAccept()

Using a queue in uC/OS-II:
▪ OSQCreate()
▪ OSQPend()
▪ OSQPost()
▪ OSQPostFront()
▪ OSQQuery()
▪ OSQAccept()
▪ OSQFlush()
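The mailbox post/accept semantics can be sketched as a one-slot message buffer. This models the non-blocking OSMboxAccept() style of receive; the implementation is illustrative, not the real uC/OS-II code.

```c
#include <stddef.h>

/* Toy one-slot mailbox: the slot holds a pointer to the message,
 * and NULL means the mailbox is empty. */
typedef struct {
    void *msg;
} mbox_t;

void mbox_init(mbox_t *m) { m->msg = NULL; }

/* Post: deposit a message; fails (returns -1) if the slot is full. */
int mbox_post(mbox_t *m, void *msg) {
    if (m->msg != NULL) return -1;
    m->msg = msg;
    return 0;
}

/* Accept: non-blocking receive; returns the message, or NULL if empty. */
void *mbox_accept(mbox_t *m) {
    void *msg = m->msg;
    m->msg = NULL;
    return msg;
}
```

A blocking pend would simply put the calling task on a waiting list instead of returning NULL; a message queue generalizes this to an array of such slots.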
4.7 Memory Management
▪ Semi-dynamic memory allocation
▪ Allocate statically, dispatch dynamically
▪ Explore memory requirements at design time

The memory is allocated statically while the OS initializes and partitioned into memory blocks; the blocks are then dispatched dynamically to tasks while the OS is running.
4.7 Memory Management
Using memory in uC/OS-II:
▪ OSMemCreate()
▪ OSMemGet()
▪ OSMemPut()
▪ OSMemQuery()
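The allocate-statically/dispatch-dynamically scheme can be sketched as a fixed-block pool in the spirit of OSMemCreate()/OSMemGet()/OSMemPut(). The sizes and names here are illustrative, not the uC/OS-II implementation.

```c
#include <stddef.h>

#define NBLOCKS  4               /* blocks in the partition  */
#define BLKSIZE 32               /* bytes per block          */

static unsigned char pool[NBLOCKS][BLKSIZE];  /* static storage       */
static void *free_list[NBLOCKS];              /* stack of free blocks */
static int   nfree;

/* OSMemCreate()-style: partition the static pool into blocks. */
void mem_create(void) {
    for (int i = 0; i < NBLOCKS; i++) free_list[i] = pool[i];
    nfree = NBLOCKS;
}

/* OSMemGet()-style: O(1) allocation, never calls malloc, so it is safe
 * and deterministic enough to use from real-time tasks. */
void *mem_get(void) {
    return nfree > 0 ? free_list[--nfree] : NULL;
}

/* OSMemPut()-style: return a block to the partition. */
void mem_put(void *blk) {
    if (blk && nfree < NBLOCKS) free_list[nfree++] = blk;
}
```

Because every block has the same size, there is no fragmentation and allocation time is constant, which is exactly why RTOSes prefer this over a general-purpose heap.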
4.8 Priority Inversion
Timeline (H = high, M = medium, L = low priority):
(1) Task 3 (L) is running.
(2) Task 3 acquires the semaphore.
(3) Task 1 (H) preempts Task 3.
(4) Task 1 runs.
(5) Task 1 fails to get the semaphore and blocks.
(6) Task 3 resumes.
(7) Task 2 (M) preempts Task 3.
(8) Task 2 runs to completion.
(9) Task 3 resumes.
(10) Task 3 runs.
(11) Task 3 releases the semaphore.
(12) Task 1 finally acquires the semaphore and runs.
Between (5) and (11), Task 1 (H) is effectively held below Task 2 (M): this is the inversion.
4.8 Priority Inversion
• A long-standing term; it became a buzzword after the "Pathfinder" mission on Mars (RTOS: VxWorks).

• It covers many situations, and may even be used intentionally to prevent starvation.

• Unbounded priority inversion: priority inversion that occurs in an unpredictable manner. This is the case we care about.
4.8 Priority Inversion

• The problem is that τ1 and τ3 share a common data
structure, locking a mutex to control access to it. Thus,
if τ3 locks the mutex, and τ1 tries to lock it, τ1 must
wait. Every now and then, when τ3 has the mutex
locked, and τ1 is waiting for it, τ2 runs because it has
a higher priority than τ3. Thus, τ1 must wait for both
τ3 and τ2 to complete, and fails to reset the timer
before it expires.
4.8 Priority Inversion

• Solutions: the priority inheritance protocol and the
priority ceiling protocol.
• Priority ceiling: raises the priority of any
locking thread to the priority ceiling of each
lock.
• Priority inheritance: raises the priority of a
thread only when holding a lock causes it to
block a higher-priority thread.
4.8 Priority Inversion

• Priority(P1) > Priority(P2)
• P1 and P2 share a critical section (CS)
• P1 must wait until P2 exits the CS even though Priority(P1) > Priority(P2)
• The maximum direct blocking time equals the time P2 needs to execute its CS
– It is a direct consequence of mutual exclusion
• In general, however, the blocking time is not bounded by the CS of the lower-priority
process, because medium-priority processes can preempt P2 inside its CS
4.8 Priority inversion
• Typical characterization of priority inversion
– A medium-priority task preempts a lower-priority task which is using a
shared resource on which a higher priority task is blocked
– If the higher-priority task would otherwise be ready to run, but a
medium-priority task is currently running instead, a priority inversion is
said to occur
4.8 Priority Inheritance
Basic protocol [Sha 1990]
1. A job J uses its assigned priority, unless it is in its CS and blocks higher-priority
jobs
In that case, J inherits PH, the highest priority of the jobs blocked by J
When J exits the CS, it resumes the priority it had at the point of entry into
the CS
2. Priority inheritance is transitive

Advantage
• Transparent to the scheduler
Disadvantages
• Deadlock is possible if semaphores are used carelessly
• Chained blocking: if P accesses n resources locked by lower-priority processes,
P must wait for n critical sections
4.8 Priority Inheritance

(Figure: Deadlocks)
4.8 Priority Inheritance :
Chained Blocking
• A weakness of the priority inheritance protocol is that it does
not prevent chained blocking.

• Suppose a medium priority thread attempts to take a mutex


owned by a low priority thread, but while the low priority
thread's priority is elevated to medium by priority inheritance,
a high priority thread becomes runnable and attempts to take
another mutex already owned by the medium priority thread.
The medium priority thread's priority is increased to high, but
the high priority thread now must wait for both the low
priority thread and the medium priority thread to complete
before it can run again.
4.8 Priority Inheritance :
Chained Blocking

• The chain of blocking critical sections can extend to


include the critical sections of any threads that might
access the same mutex. Not only does this make it much
more difficult for the system designer to compute
overhead, but since the system designer must compute
the worst case overhead, the chained blocking
phenomenon may result in a much less efficient system.

• These blocking factors are added into the computation


time for tasks in the RMA analysis, potentially rendering
the system unschedulable.
4.9 μC/OS-II
▪ Written by Jean J. Labrosse in ANSI C
▪ A portable, ROMable, scalable, preemptive, real-time,
multitasking kernel
▪ Used in hundreds of products since its introduction in 1992
▪ Certified by the FAA for use in commercial aircraft
▪ Available in ARM Firmware Suite (AFS)
▪ Over 90 ports for free download
4.9 μC/OS-II Features

▪ Portable: runs on 8-bit to 64-bit CPUs
▪ ROMable: small memory footprint
▪ Scalable: features selected at compile time
▪ Multitasking: preemptive scheduling, up to 64 tasks
4.9 μC/OS-II vs. μHAL
▪ uHAL is shipped in the ARM Firmware Suite
▪ uHAL is a basic library that enables simple applications to run
on a variety of ARM-based development systems
▪ uC/OS-II uses uHAL to access ARM-based hardware

(Figure: software layers. Top: uC/OS-II and the user application, plus AFS utilities. Below them: C and C++ libraries. Below those: uHAL routines and AFS support routines. Bottom: the development board.)
4.9 RT-Linux
• RT-tasks cannot use standard OS calls
(www.fsmlabs.com)

(Figure: RT-Linux architecture. Hardware interrupts are caught by the RT-Linux layer first; its RT-scheduler dispatches the RT-tasks, while the entire Linux kernel, with its own scheduler, drivers, I/O, and applications such as Init, Bash, and Mozilla, runs as the lowest-priority task and receives only the interrupts forwarded to it.)
4.9 RTLinux

• An additional layer between the Linux kernel and the
hardware

• Worst-case dispatch latency on x86: about 15 μs


4.9 RTLinux: Basic Idea
4.9 Posix RT-extensions to Linux
• The standard scheduler can be replaced by a POSIX 1.b
scheduler implementing priorities for RT tasks

(Figure: RT-tasks and standard applications such as Init, Bash, and Mozilla all run on top of the Linux kernel under the POSIX 1.b scheduler, above the drivers, I/O, interrupts, and hardware.)

⧫ Special RT-calls and standard OS calls are available.
⧫ Easy programming, but no guarantee of meeting deadlines.
