
UNIT 1

“Real Time Embedded Systems and Components “ – SAM SIEWERT


INTRODUCTION TO REAL TIME
EMBEDDED SYSTEMS

• A brief history of real time system


• A brief history of embedded system
RESOURCE ANALYSIS
Processing: microprocessors and microcontrollers, often networked together
Memory: all storage elements, including volatile and non-volatile memory
I/O: interfaces that encode sensed data
•Schedule multiplexed execution of multiple services on a single processor.
•Multiplexing the CPU by preempting a running thread, saving its state, and dispatching a new thread is
called a thread context switch.
•Scheduling involves implementing a policy, whereas preemption and dispatch are the context-switching
mechanisms.
•CPU resources should be sufficient for a given set of service threads to execute and meet their
deadlines.
•Reliability depends on the repeatability of service execution prior to deadlines.
•If the deadlines can be guaranteed, the system is safe; and because its behaviour in providing services
by well-defined deadlines does not change over time, it is considered deterministic.
• Four parameters are important:
•Speed: clock rate for instruction execution
•Efficiency: CPI (clocks per instruction) or IPC (instructions per clock)
•Algorithm complexity: Ci, the instruction count on the longest path of service i; if Ci is not
known, the worst case should be used (WCET, worst-case execution time)
•Service frequency: Ti, the service release period
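The Ci and Ti parameters combine into a CPU utilization figure, U = Σ Ci/Ti, which underlies the feasibility tests discussed later. A minimal C sketch of the calculation follows; the three-service set and its Ci/Ti values (in milliseconds) are assumptions for illustration only.

```c
#include <stdio.h>

#define NUM_SERVICES 3   /* hypothetical service set for illustration */

int main(void)
{
    double C[NUM_SERVICES] = {1.0, 2.0, 4.0};    /* WCET per release, ms */
    double T[NUM_SERVICES] = {10.0, 20.0, 40.0}; /* release periods, ms  */
    double U = 0.0;

    for (int i = 0; i < NUM_SERVICES; i++)
        U += C[i] / T[i];        /* each service loads the CPU by Ci/Ti */

    printf("Total CPU utilization U = %.2f (%.0f%%)\n", U, U * 100.0);
    return 0;
}
```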
RESOURCE ANALYSIS

Input and output channels between processor cores and devices are one of the most important
resources in real time embedded systems. The coverage of I/O resource management includes

•Latency: arbitration latency for shared I/O interfaces, read latency, time to transit from
device to CPU core, write latency
•Bandwidth(BW): avg. bytes transferred per unit time
•Queue depth: write buffer stall, read buffers
•CPU coupling: DMA channels help decouple the CPU from I/O
Memory resources are designed based upon cost, capacity and access latency
•Memory hierarchy from least to most latency
- Level 1 cache: single cycle access, Harvard Architecture (separate data and instruction
caches), locked for use as fast memory
- Level 2 cache: few or no wait states, typically unified, backs L1 cache
RESOURCE ANALYSIS

- MMRs ( memory mapped registers)


- Main memory: SRAM, SDRAM, DDR; processor bus interface and controller
- MMIO(Memory mapped I/O) devices
- Nonvolatile memory like flash, EEPROM and battery backed SRAM : slowest read/write access,
require algorithm for block erase
- Allocation of data, code, stack, heap to physical hierarchy will significantly affect performance.
A system may experience problems meeting service deadlines because it is
- CPU bound: insufficient execution cycles during release period and due to inefficiency in
execution
- I/O bound: too much total I/O latency during the release period and /or poor scheduling of I/O
during execution
- Memory bound: insufficient memory capacity or too much memory access latency during the
release period
RESOURCE ANALYSIS

The resource margin that a real-time embedded
system is designed to maintain depends
upon a number of higher-level design factors:
-System cost
-Reliability required
-Availability required
-Risk of oversubscribing resources
-Impact of oversubscription
RESOURCE ANALYSIS

Some basic guidelines for resource sizing and margin maintenance


- CPU: the set of proposed services must be allocated to processors so that each processor in
the system meets the Lehoczky, Sha, Ding theorem for feasibility.
-I/O: total I/O latency for a given service should never exceed the response deadline or the
service release period. Execution and I/O can be overlapped so that the response time is
not the simple sum of I/O latency and execution time
- Memory: total memory capacity should be sufficient for the worst case static and dynamic
memory requirements for all services. The memory access latency summed with I/O
latency should not exceed the service release period.
REAL TIME UTILITY
Hard real time service utility
• The service is said to be released when the service is ready to start
execution following a service request, most often initiated by an
interrupt
• Utility of the service producing a response any time prior to the
deadline relative to the request is full.
• At the instant following the deadline, the utility not only becomes
zero, but actually negative.
• A late response might actually be worse than no response
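As a minimal illustration (not from the source), the hard real-time utility curve can be written as a simple function of completion time; the -1.0 penalty value is an arbitrary placeholder:

```c
/* Hard real-time utility sketch: full utility for any response at or
 * before the deadline, negative utility afterwards (a late response
 * may be worse than no response at all). */
double hard_rt_utility(double completion_time, double deadline)
{
    return (completion_time <= deadline) ? 1.0 : -1.0;
}
```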
REAL TIME UTILITY
Isochronal service utility
• In this service, the utility is negative up to the deadline, full at the
deadline and negative again after the deadline as in the figure.
• For an isochronal service early completion of response requires the
response to be held or buffered up to the deadline if it is computed
early.
• Hard real time services require correct timing to avoid loss of life and /
or assets.
REAL TIME UTILITY
Best effort service utility
• Basically, the non-real-time service has no real deadline, because full utility is
realized whenever a best-effort application finally produces a result.
• On a desktop system, there is no limit on how much the CPU can be
oversubscribed and how much I/O backlog may be generated.
• A desktop storage system interface, for example, is typically serviced on a best-effort basis.
REAL TIME UTILITY
Soft real time utility curve
• Soft real-time utility is analogous to receiving partial credit for late homework: a service that
produces a late response still provides some utility to the system.
• The definition of soft real-time clearly falls between the extremes of the hard real-time
and the best-effort utility curves.
• If some partial utility can be realized for a late response, a well-designed system
may want to allow for some fixed amount of service overrun.
• Some function greater than or equal to zero exists for soft real-time service
responses after the response deadline. If that function is identically zero, a well-designed
system will simply terminate the service after the deadline, and a service
drop will occur.
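As a minimal sketch (not from the source), one possible soft real-time utility curve gives full utility up to the deadline and then decays linearly to zero over an allowed overrun window; the window length and the linear decay shape are assumptions for illustration:

```c
/* Soft real-time utility sketch: full utility before the deadline,
 * linearly decaying partial utility during an allowed overrun window,
 * and zero utility afterwards (the release would be dropped). */
double soft_rt_utility(double completion_time, double deadline,
                       double allowed_overrun)
{
    if (completion_time <= deadline)
        return 1.0;                              /* on time: full utility  */
    if (completion_time >= deadline + allowed_overrun)
        return 0.0;                              /* too late: service drop */
    return 1.0 - (completion_time - deadline) / allowed_overrun;
}
```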
REAL TIME UTILITY
Anytime service utility
• A policy known as the anytime algorithm is analogous to receiving partial credit for partially completed
homework and partial utility for a partially complete service.
• The concept of an anytime algorithm can only be implemented for services where iterative refinement is
possible, that is, the algorithm produces an initial solution long before the deadline, but can produce a better
solution (response) if allowed to continue processing up to the deadline for response.
• Anytime algorithms have been used most for robotic and AI (Artificial Intelligence) real-time applications
where iterative refinement can be beneficial.
• Anytime algorithms are designed to be terminated at their deadlines and produce the best solution anytime, so by
definition anytime services do not overrun deadlines, but rather provide some partial utility solution anytime
following their release.
• For example, in the robot path-planning scenario, paths and partial paths
could be saved in memory for reuse when the robot returns to a
location occupied previously. This is the concept of iterative refinement.
REAL TIME UTILITY
Soft isochronal service utility
• Isochronal service can achieve partial utility for responses prior to the deadline, full utility at
the deadline, and again partial utility after the deadline. For example, in a continuous media
system, it may not be possible to hold early responses until the deadline, and it may also be
beneficial to produce a late response to allow overrun.
• Allowing indefinite overrun of any soft real time service can ultimately be a problem. If
overruns continue to occur and service releases start to overlap for the same service, the loading
simply climbs higher and higher.
• An overly conscientious student who continues to work on more and more late homework ultimately
lowers grades in all courses to the point that total failure is the outcome.
SCHEDULING CLASSES
PREEMPTION FIXED PRIORITY
POLICY
For hard real time systems where proof that services will not miss deadlines is desired, RM is an obvious choice. For example, RM is often used for commercial
aircraft, for satellite systems, or any other system where failure to meet all deadlines can result in significant loss of life and/ or assets.
When a context switch occurs and a task/ service is preempted prior to running to completion, this is called interference. In a fixed priority preemptive system with
only one service/task, interference is not possible.
Liu and Layland recognized this and proposed what they believed to be a reasonable set of assumptions and constraints on real systems to formulate a deterministic
model.
A1: All services are requested on a periodic basis; the period is constant
A2: Completion time < period
A3: Service requests are independent (no known phasing)
A4: Runtime is known and deterministic (WCET may be used)
C1: Deadline = period by definition
C2: Fixed-priority, preemptive, run-to-completion scheduling
A5: Critical instant- longest response time for a service occurs when all system services are requested simultaneously
Note: Rate Monotonic (RM): priority is assigned according to release period (the shorter the period, the higher the priority)
Deadline Monotonic (DM): priority is assigned according to deadline (the shorter the deadline, the higher the priority)

S1 Makes Deadline if prio(S1) > prio(S2)


PREEMPTION FIXED PRIORITY
POLICY

S1 makes its deadline if prio(S1) > prio(S2); S1 misses its deadline if prio(S2) > prio(S1)

The conclusion that can be drawn is that for a two-service system, the RM policy is optimal.
RM, the Rate Monotonic policy, assigns the highest priority to the service with the shortest release period.
PREEMPTION FIXED PRIORITY
POLICY

Liu and Layland proposed this simple feasibility test, which they call the RM Least Upper Bound (RM LUB):

U = Σ(i=1..m) Ci/Ti ≤ m(2^(1/m) − 1)

U: Utilization of the CPU resource achievable
Ci: Execution time of Service i
m: Total number of services in the system sharing common CPU resources
Ti: Release period of Service i
Given Services S1, S2 with periods T1 and T2 and execution times C1 and C2, assume
T2 > T1; for example, T1 = 2, T2 = 5, C1 = 1, C2 = 1. Then if prio(S1) > prio(S2), the actual utilization of
70% is lower than the RM LUB of about 82.8% for two services, and the system is feasible by inspection.


In this example, the RM LUB is safely exceeded: given Services S1, S2 with periods T1 and T2 and execution times C1 and C2, and
assuming T2 > T1, for example T1 = 2, T2 = 5, C1 = 1, C2 = 2, the utilization is 90%, yet if prio(S1) > prio(S2) the set is still feasible (see the figure).
For real-time scheduling feasibility tests, sufficient therefore means that passing the test guarantees that the
proposed service set will not miss deadlines.
However, failing a sufficient feasibility test does not imply that the proposed service set will miss deadlines.
An N&S (Necessary and Sufficient) feasibility test is exact: if a service set passes the N&S feasibility test
it will not miss deadlines, and if it fails to pass the N&S feasibility test, it is guaranteed to miss deadlines.
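A minimal C sketch of the RM LUB check follows; the service set mirrors the first example above (T1 = 2, T2 = 5, C1 = 1, C2 = 1), and the function reports "inconclusive" rather than "infeasible" when the bound is exceeded, because the LUB is only a sufficient test.

```c
#include <math.h>
#include <stdio.h>

/* Sufficient-only RM feasibility check using the Liu and Layland
 * least upper bound: U = sum(Ci/Ti) <= m * (2^(1/m) - 1).
 * Returns 1 if the bound holds (guaranteed feasible), 0 if the test
 * is inconclusive and an exact N&S test is still needed. */
int rm_lub_feasible(const double C[], const double T[], int m)
{
    double U = 0.0;
    for (int i = 0; i < m; i++)
        U += C[i] / T[i];

    double bound = m * (pow(2.0, 1.0 / m) - 1.0);
    printf("U = %.3f, RM LUB = %.3f\n", U, bound);
    return U <= bound;
}

int main(void)
{
    double C[] = {1.0, 1.0};   /* execution times from the slide example */
    double T[] = {2.0, 5.0};   /* release periods from the slide example */
    printf("RM LUB result: %s\n",
           rm_lub_feasible(C, T, 2) ? "feasible" : "inconclusive");
    return 0;
}
```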
FEASIBILITY TEST

Feasibility tests provide a binary result that indicates whether a set of services (threads or tasks) can be scheduled given
their Ci, Ti, and Di specifications.
So, the input is an array of service identifiers(Si) and specification for each and the output is TRUE if the set can
be safely scheduled so that none of the deadlines will be missed and FALSE if any one of the deadlines might be
missed. There are two types of feasibility tests:
1.Sufficient
2.Necessary and Sufficient(N&S)
Sufficient feasibility tests will always fail a service set that is not real-time safe (i.e., one that can miss deadlines).
However, a sufficient test will occasionally also fail a service set that is in fact real-time safe.
Sufficient feasibility tests are therefore not precise. The sufficient tests are conservative because they will never pass an
unsafe set of services.
N&S tests are precise. An N&S feasibility test will not pass a service set that is unsafe and likewise will not fail any
set that is safe.
The RM LUB is a sufficient test and therefore safe, but it will fail service sets that actually can be safely scheduled.
FEASIBILITY TEST

Relationship between sufficient and N&S feasibility tests.


NECESSARY AND SUFFICIENT
(N&S) FEASIBILITY

Two algorithms for determination of N&S feasibility testing with RM policy are easily employed:
1.Scheduling Point Test
2.Completion Time Test
•By the Lehoczky, Sha, Ding theorem, if a set of services can be shown to meet all deadlines from the critical
instant up to the longest deadline of all tasks in the set, then the set is feasible.
•The critical instant assumption from Liu and Layland states that, in the worst case, all services might be
requested at the same point in time.
•Lehoczky, Sha, and Ding introduced an iterative test for this theorem called the Scheduling Point Test
NECESSARY AND SUFFICIENT
(N&S) FEASIBILITY

Scheduling Point Test
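A common formulation of the Lehoczky, Sha, Ding Scheduling Point Test is sketched below in C; it assumes services are indexed in RM priority order (shortest period first) with deadlines equal to periods, which may differ from the book's exact notation. Service Si is feasible if some scheduling point l·Tk (k ≤ i, l = 1..⌊Ti/Tk⌋) can contain the cumulative demand of S1..Si.

```c
#include <math.h>

/* Scheduling Point Test sketch (one common formulation).
 * C[], T[] hold execution times and periods, indexed 0..i in RM
 * priority order (shortest period first, deadlines equal to periods).
 * Returns 1 if service i meets its deadline from the critical instant. */
int scheduling_point_feasible(const double C[], const double T[], int i)
{
    for (int k = 0; k <= i; k++) {
        int lmax = (int)floor(T[i] / T[k]);
        for (int l = 1; l <= lmax; l++) {
            double point = l * T[k];            /* candidate scheduling point */
            double demand = 0.0;
            for (int j = 0; j <= i; j++)        /* demand of S0..Si by 'point' */
                demand += C[j] * ceil(point / T[j]);
            if (demand <= point)
                return 1;                       /* all demand fits: feasible   */
        }
    }
    return 0;                                   /* no point works: infeasible  */
}

/* The whole service set is feasible if this holds for every service i. */
```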


NECESSARY AND SUFFICIENT
(N&S) FEASIBILITY

Completion Time test

Passing this test requires proving that a_n(t) is less than or equal to the deadline for Sn, which proves
that Sn is feasible.
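A minimal sketch of the Completion Time Test using the standard iterative response-time recurrence follows; the exact formulation is an assumption, and services are indexed 0..n in RM priority order with D[n] the deadline of Sn.

```c
#include <math.h>

/* Completion Time Test sketch: iterate the completion time a_n(t) of
 * service n until it converges, then compare it against deadline D[n].
 *   a(0)   = C[0] + ... + C[n]
 *   a(k+1) = C[n] + sum over j < n of ceil(a(k)/T[j]) * C[j]
 * Returns 1 if Sn is feasible, 0 if a_n(t) exceeds the deadline. */
int completion_time_feasible(const double C[], const double T[],
                             const double D[], int n)
{
    double a = 0.0;
    for (int j = 0; j <= n; j++)
        a += C[j];                      /* first guess: one release of each    */

    for (;;) {
        double next = C[n];
        for (int j = 0; j < n; j++)     /* interference from higher priorities */
            next += ceil(a / T[j]) * C[j];
        if (next > D[n])
            return 0;                   /* completion time exceeds the deadline */
        if (next == a)
            return 1;                   /* converged at or before the deadline  */
        a = next;
    }
}
```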
DEADLINE MONOTONIC POLICY

Deadline-monotonic (DM) policy is very similar to RM except that the highest priority is assigned to the
service with the shortest deadline. The DM policy is a fixed-priority policy.
The DM policy eliminates the original RM assumption that service period must equal service deadline and allows
RM theory to be applied even in scenarios where the deadline is less than the period. This is useful for dealing with
significant output latency.
Ci is the execution time for service i, and Ii is the interference time service i experiences over its deadline interval Di,
the time period since the request for service.

For all services i from 1 to n, service i is feasible if Ci + Ii ≤ Di: if the deadline interval is long enough to contain the service execution time interval
plus all interfering execution time intervals, then the service is feasible. If all services are feasible, then the system
is feasible (real-time safe).
Interference to service Si is due to preemption by all higher-priority services S1 to Si-1:

Ii = Σ(j=1..i-1) ⌈Di/Tj⌉ × Cj, where ⌈Di/Tj⌉ is the worst-case number of releases of Sj over the deadline interval Di for Si.
Because the interference term counts the worst-case number of releases, interference is over-accounted: the last
interfering release may be only partial.
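A C sketch of this DM check follows, assuming services are already sorted in DM priority order (shortest deadline first), so index order equals priority order:

```c
#include <math.h>

/* DM feasibility sketch: service i is feasible if Ci + Ii <= Di, with
 * interference Ii = sum over higher-priority j of ceil(Di/Tj) * Cj.
 * The test is conservative: the last interfering release is counted
 * in full even when it only partially overlaps the deadline interval. */
int dm_feasible(const double C[], const double T[], const double D[], int n)
{
    for (int i = 0; i < n; i++) {
        double interference = 0.0;
        for (int j = 0; j < i; j++)       /* higher-priority services only */
            interference += ceil(D[i] / T[j]) * C[j];
        if (C[i] + interference > D[i])
            return 0;                     /* service i may miss its deadline */
    }
    return 1;                             /* every service is feasible       */
}
```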
DYNAMIC PRIORITY POLICIES

The policy Liu and Layland specified in their paper later became known as EDF (Earliest Deadline First), a
dynamic-priority policy. The policy is called EDF because the scheduler gives the highest priority to the service that
has the soonest deadline.
The EDF scheduler must be able to insert a newly released thread into the ready queue based upon its time to deadline relative to
the time to deadline of all other threads; the insertion has a complexity on the order of n, O(n),
where n is the number of threads on the queue. By comparison, a fixed-priority policy scheduler can be
implemented with complexity that is O(1), or constant time, using priority queues.
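A minimal sketch of EDF dispatch over a ready queue follows; the struct and function names are hypothetical, for illustration only. Selecting (or inserting) by absolute deadline is the O(n) step discussed above.

```c
#include <stddef.h>

/* Hypothetical ready-queue entry for illustrating EDF dispatch. */
struct ready_entry {
    int    service_id;
    double abs_deadline;   /* release time + relative deadline */
};

/* Return the id of the ready service with the earliest absolute
 * deadline, or -1 if nothing is ready. An O(n) scan of the queue. */
int edf_pick_next(const struct ready_entry q[], size_t n)
{
    if (n == 0)
        return -1;
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (q[i].abs_deadline < q[best].abs_deadline)
            best = i;
    return q[best].service_id;
}
```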
LLF is a dynamic-priority policy where services on the ready queue are assigned higher priority if their
laxity is the least. Laxity is the time difference between their deadline and remaining computation time.
By comparison, for a fixed-priority policy such as RM, in an overload,
all services of lower priority than the service that is overrunning
may miss their deadlines, yet all services of higher priority are
guaranteed not to be affected, as shown in the figure.
DYNAMIC PRIORITY POLICIES

The overrunning service has a time to deadline that is negative, because the deadline has passed, so it continues
to be the highest-priority service and continues to cause others to wait and potentially miss deadlines.
In an overrun scenario, a common policy is to terminate the release of a service that has overrun.
This causes a service dropout.
In some sense, all priority-encoding policies, dynamic or static, miss the point: what we really want to do is
encode which service is most important and make sure that it gets the resources it needs first.
1. Generalization and reducing constraints for RM application
2. Solving problems related to RM application for real systems
3. Devising alternative policies for deadline-driven scheduling
4. Devising new soft real-time policies to reduce the margin required by RM policy

Self Study :
Worst-case execution time (WCET)
Deadlock and Livelock
THANK YOU
