RTS Unit 1 Notes

A real-time system is one that responds to inputs within a specified time constraint. It is crucial for systems like traffic signal controllers or aircraft systems. Real-time systems can be classified as hard, firm, or soft based on how critical meeting timing deadlines is. Hard systems fail if deadlines are missed, while firm and soft systems can tolerate some lateness with degraded quality. Key aspects of real-time systems include scheduling, concurrency, and predictable behavior.


Real Time Systems

Module 1: Introduction

Definition of Real-Time System

A computer system whose correct operation depends on the time at which it responds is known as a Real-Time System (RTS). Equivalently, an RTS is a system that must satisfy bounded response-time constraints or risk system failure.
In real-time computing, correctness therefore depends not only on the logical result but also on the time at which the result is produced. A deadline is the instant, measured from the occurrence of an event, by which the response to that event must be completed.

Key Terms of Real-Time System

 The ability of the operating system to provide the required level of service within a bounded response time
 It responds to inputs within a guaranteed (real-time) interval
 Each task must be completed within a specified time limit
 Real-time computing is not simply fast computing; meeting deadlines predictably matters more than raw speed
 Necessary signalling between interrupt routines and task code is handled by the RTS
 Real-time systems often operate in a largely static, well-characterized environment
 Real-time programming may involve assembly coding, priority interrupt programming, and writing device drivers

Example of Real-time system

 Real-life control applications such as a traffic signal controller, a nuclear reactor, or an aircraft
 Mobile phones, digital cameras, microwave ovens
 Avionics, radar control systems, industrial process control
 Command and control systems
 Multimedia systems
 Electronic warhead control systems
 Missile tracking systems

Logical / Temporal Correctness

 If a system misses its time limit, the resulting action may be cancelled or may be allowed to continue, depending on the
system requirements.
 The system's specification therefore includes both logical and temporal correctness

Logical correctness: Indicates that the system should produce the correct output (must be verifiable)

Temporal correctness: Implies that the system must produce its output at the correct (right) time
CPU Architectures & Structure of a Real-Time System

The central component of an RTOS is its kernel, which may be structured as a monolithic kernel or as a microkernel.

1. Monolithic kernel

2. Microkernel
Characteristics of the Real-Time System

 Consistency
 High cost of failure
 Requires concurrency and multiprogramming
 Reliability and fault-tolerance requirements
 Scalability
 Predictable behaviour
 Performance
 Computation: offline (precomputed) or online (dynamic)
 Priority: static or dynamic
 Scheduling: preemptive or non-preemptive

Type of Real-Time System

Real-time systems can be classified as

1. Hard Real-Time Systems


2. Soft Real-Time Systems
3. Firm Real-Time System

1. Soft Real-Time System:

 This system should meet its timing constraints, but a response-time overrun does not lead to disastrous damage; only the quality of the result suffers.
 The utility of the result decreases (drops) with time after the deadline

Example:

1. Exam hall: a delay of up to 30 minutes is allowed; if you are later than that, you will not be allowed to appear in the exam.
2. Word processors
3. Airline reservation system
4. Games
5. Simulations

2. Hard Real-Time System:

 Deadline overruns are not tolerable and must not occur.


 Missing even one time limit can have serious or catastrophic consequences.
 If a task does not meet its deadline, the system is considered to have failed.
 A hard time limit is put on a job because late results produced by the job after the deadline can have disastrous
consequences.
Example: Onboard computer of a moving aircraft

1. Measures velocity, height, position, acceleration, air pressure, etc.


2. At scheduled times, collects data from the sensors (within 20 minutes)
3. Compares these data against the stored values

3. Firm Real-Time System:

 The system does not fail when a deadline is missed; the late result is simply discarded (ignored).
 Results produced past the deadline have no utility but cause no damage.
 In video applications, for example, a new frame must arrive every 1/30 of a second.
 The receiver does not wait for a late frame; if a frame (say the 6th) does not arrive in time, it switches to the next frame.
 Firm deadlines typically arise in periodic activities; multimedia systems are typical firm real-time systems.

Example: Video Conferencing

Applications of Real Time System


The two most common classes of real-time applications are real-time databases and multimedia applications.

Real-Time Databases

The term real-time database systems refers to a diverse spectrum of information systems, ranging from stock price
quotation systems, to track records databases, to real-time file systems. What distinguishes these databases from non-real-
time databases is the perishable nature of the data maintained by them. Specifically, a real-time database contains data
objects, called image objects that represent real-world objects. The attributes of an image object are those of the
represented real world object.

For example, an air traffic control database contains image objects that represent aircraft in the coverage area. The
attributes of such an image object include the position and heading of the aircraft. The values of these attributes are
updated periodically based on the measured values of the actual position and heading provided by the radar system.
Without this update, the stored position and heading will deviate more and more from the actual position and heading. In
this sense, the quality of stored data degrades. This is why we say that real-time data are perishable. In contrast, an
underlying assumption of non-real-time databases (e.g., a payroll database) is that in the absence of updates the data
contained in them remain good (i.e., the database remains in some consistent state satisfying all the data integrity
constraints of the database).

Absolute Temporal Consistency

The age of a data object measures how up-to-date the information provided by the object is. The age of an image object at
any time is the length of time since the instant of the last update, that is, when its value is made equal to that of the real-
world object it represents. The age of a data object whose value is computed from the values of other objects is equal to
the oldest of the ages of those objects.

A set of data objects is said to be absolutely (temporally) consistent if the maximum age of the objects in the set is no
greater than a certain threshold.

Relative Temporal Consistency

A set of data objects is said to be relatively consistent if the maximum difference in ages of the objects in the set is no
greater than the relative consistency threshold used by the application. The column labeled “Rel. Cons.” in Table 1–1 gives
typical values of this threshold.

Multimedia Applications

A multimedia application may process, store, transmit, and display any number of video streams, audio streams, images,
graphics, and text. A video stream is a sequence of data frames which encodes a video. An audio stream encodes a voice,
sound, or music. Without compression, the storage space and transmission bandwidth required by a video are enormous.
(As an example, we consider a small 100 × 100-pixel, 30-frames/second color video. If uncompressed, the video requires
a transmission bandwidth of 2.7 Mbits per second when the value of each component at each pixel is encoded with 3 bits.)
Therefore, a video stream, as well as the associated audio stream, is invariably compressed as soon as it is captured.

MPEG Compression/ Decompression

This compression standard makes use of 3 techniques. They are motion compensation for reducing temporal redundancy,
discrete cosine transform for reducing spatial redundancy and entropy encoding for reducing the number of bits required
to encode all the information.

Motion Estimation

In this step, motion analysis and estimation are performed. Consecutive video frames are not independent, and hence a significant amount of
compression can be achieved. For this purpose each image is divided into 16 × 16-pixel square pieces called macroblocks.

Only the frames 1 + αk, for k = 0, 1, 2, …, are encoded independently of other frames, where α is an application-specified integer
constant. These frames are called I (intra-coded) frames and serve as the points for random access into the video. The smaller the value of
α, the more finely the video can be accessed at random, but the poorer the compression ratio; a good compromise is α = 9. The frames between
consecutive I-frames are called P and B frames. When α is 9, the frames produced follow the pattern I, B, B, P, B, B, P, B, B, I, B, B, P, …: for every
k ≥ 0, frames 1 + 9k + 3 and 1 + 9k + 6 are P (predictive-coded) frames. A P-frame is generated by prediction from the previous I- or P-frame, while a B
(bidirectionally predicted) frame is predicted from both the preceding and the following I- or P-frames.

The important differences between Hard, Firm and Soft Real Time Systems are as follow:

Hard Real Time System: A hard real time task is one that is constrained to produce its results within certain predefined
time bounds. The system is considered to have failed whenever any of its hard real time tasks does not produce its
required results before the specified time bound.

Firm Real Time System: Unlike a hard real time task, even when a firm real time task does not complete within its
deadline, the system does not fail. The late results are merely discarded. In other words, the utility of the results computed
by a firm real time task becomes zero after the deadline.

Soft Real Time System: Soft real time tasks also have time bounds associated with them. However, unlike hard and firm
real time tasks, the timing constraints on soft real time tasks are not expressed as absolute values. Instead, the constraints
are expressed in terms of the average response times required.

Release Time

Release time is the time instant at which a job becomes available for execution. A job can be scheduled and executed at
any time at or after its release time. The execution of a job starts only when the processor is assigned to it and its resource and dependency
conditions are met.

Response Time

Response time is the length of time taken to completely process a job: it is measured from the release time of the job to
the time instant when it completes. It is not the same as the execution time, since a job may not execute continually.

Deadlines and Timing Constraints


The instant at which a job completes execution is called its completion time. The deadline of a job is the instant of time by
which its execution is required to be completed. Deadlines fall into two categories:

Relative deadline – the maximum allowable job response time

Absolute deadline – the instant of time by which a job is required to be completed. It is often called simply the deadline. The
absolute deadline is the sum of the release time and the relative deadline.

I.e. absolute deadline = release time + relative deadline


As shown in the figure, the feasible interval of a job Ji is the interval (ri, di].

A constraint imposed on the timing behavior of a job is called a timing constraint. Deadlines are the typical example of timing constraints,
i.e. a timing constraint of a job can be specified in terms of its release time and its relative or absolute deadline.

For example:

Consider a system that monitors and controls a heating furnace and requires 20 ms to initialize when turned on. After initialization,
every 100 ms the system:

 Samples and reads the temperature sensor


 Computes the control-law for the furnace to process temperature readings, determine the correct flow rates of fuel,
air and coolant
 Adjusts flow rates to match computed values

The periodic computations can be stated in terms of release times of the jobs computing the control-law: J0, J1, …, Jk, …
as shown in fig below

The release time of Jk is computed as below.

Release time = 20 + (k × 100) ms

Suppose each job must complete before the release of the next job; then

 the relative deadline of Jk is 100 ms


 the absolute deadline of Jk is 20 + ((k + 1) × 100) ms

Alternatively, each control-law computation may be required to finish sooner i.e. the relative deadline is smaller than the
time between jobs, allowing some slack time for other jobs. The difference between the completion time and the earliest
possible completion time is called the slack time.
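As a quick illustration of these timing parameters, here is a minimal Python sketch (not part of the original notes) that lists the release time and absolute deadline of the first few control-law jobs Jk, using the 20 ms initialization time and 100 ms period stated above; the function names are ours, chosen only for illustration.

```python
# Timing parameters of the furnace control-law jobs J0, J1, ..., Jk
# (values taken from the example above: 20 ms initialization, 100 ms period).

INIT_TIME_MS = 20      # system initialization time
PERIOD_MS = 100        # control-law period
REL_DEADLINE_MS = 100  # each job must finish before the next release

def release_time(k: int) -> int:
    """Release time of job Jk in ms: 20 + k * 100."""
    return INIT_TIME_MS + k * PERIOD_MS

def absolute_deadline(k: int) -> int:
    """Absolute deadline of Jk = release time + relative deadline."""
    return release_time(k) + REL_DEADLINE_MS

if __name__ == "__main__":
    for k in range(4):
        print(f"J{k}: release = {release_time(k)} ms, "
              f"deadline = {absolute_deadline(k)} ms")
    # J0: release = 20 ms, deadline = 120 ms
    # J1: release = 120 ms, deadline = 220 ms, and so on.
```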
Hard and Soft Timing Constraints
Timing Constraint

Timing constraints are classified as hard or soft based on the functional criticality of jobs, the usefulness of late results,
and the deterministic or probabilistic nature of the constraints.

A timing constraint or deadline is hard if the failure to meet it is considered to be a fatal fault. A hard deadline is imposed
on a job because a late result produced by the job after the deadline may have disastrous consequences. E.g.

 a late command to stop a train may cause a collision


 a bomb dropped too late may hit a civilian population instead of the intended military

A timing constraint or deadline is soft if a few deadline misses do no serious harm; only the system’s overall
performance becomes poorer. The performance degrades as more and more jobs with soft deadlines
complete late, so late completion of a job is undesirable but not fatal.

Hard Timing Constraints and Temporal Quality-of-Service Guarantees:

If the timing constraint of a job is hard, the job is called a hard real-time job. For such jobs, validation is required: we must
demonstrate that the system always meets its timing constraints, either by a provably correct and efficient analysis procedure or by exhaustive
simulation and testing.

If a soft timing constraint is imposed on a job, the job is called a soft real-time job and no such validation is required.
Instead, the timing behavior only has to meet some statistical requirement, i.e., a timing constraint
specified in terms of statistical averages.

Temporal quality of service is measured in terms of parameters such as response time and jitter. When the system is required to
validate and guarantee these parameters over the defined timing constraints, the timing constraints are called hard.

Some real-time systems are operated to give the best quality of service they can, but a slight violation of the timing constraints
does no harm and no validation is needed; such timing constraints are called soft.

Hence validation of the temporal quality of service is mandatory for a hard real-time system, where the guarantee is absolute,
whereas a soft real-time system provides only best-effort (statistical) quality of service. Timing constraints can
be expressed in many ways:

 Deterministic: a constraint expressed in terms of fixed numeric values


o e.g. the relative deadline of every control-law computation is 50 ms; or, the response time of at most 1 out of
5 consecutive control-law computations may exceed 50 ms
 Probabilistic: a constraint expressed in terms of a probability
o e.g. the probability of the response time exceeding 50 ms is less than 0.2
 In terms of some usefulness function
o e.g. the usefulness of every control-law computation is at least 0.8

Hard Real-Time System


If a job must never miss its deadline, then the system is called a hard real-time system. For a hard real-time system, every
deadline must be met. If the system fails to meet a deadline even once, the system is said to
have failed.
A hard real-time system is also known as an immediate real-time system. It is hardware or software that must operate
within the confines of a stringent deadline. The application is considered to have failed if it does not complete its function
within the allocated time span. Some examples of hard real-time systems are medical applications like pacemakers,
aircraft control systems and anti-lock brakes. Some characteristics of hard real-time systems are:

 The hard real-time system is called guaranteed


 Response-time requirements are hard
 Often safety critical
 Sizes of data files are small to medium
 Peak-load performance must be predictable
 Uses autonomous error detection
 Requires short-term data integrity

Soft Real-Time System


If some deadlines can occasionally be missed with acceptably low probability, the system is called a soft real-time
system. In a soft real-time system, even if the system fails to meet the deadline one or more times, the system is still
not considered to have failed.

For example, streaming audio-video.

 The soft real-time system is called best effort


 Response-time requirements are soft
 Not safety critical
 Sizes of data files are large
 Peak-load performance may degrade
 Uses user-assisted error detection
 Requires long-term data integrity

A reference model of real time system

Processor and Resource


Some elements of the system model are sometimes termed processors and sometimes resources, depending on how
they are used by the model. For example, in a distributed system a computation job may invoke a server on a remote
processor.

 If we need to account for how the response time of this job is affected by the other work scheduled on the remote
processor, then the remote server is modelled as a processor.
 Otherwise, the remote server can be modelled as a resource required by the job.

There are no fixed rules to guide us in deciding whether to model something as a processor or as a resource, or to guide us
in many other modelling choices; it depends on the design criteria. A good model gives us better insight into the
real-time problems considered during modelling, while a bad model can confuse us and lead to a poor design and
implementation. For example, we can model transactions that query and update a database as jobs; these jobs execute on a
database server, so the database server acts as the processor. If the database server uses a locking mechanism to ensure data
integrity, then a transaction also needs the locks on the data objects it reads or writes in order to proceed. The locks on the
data objects held by the database server are modelled as resources.

Real Time Workload Parameters


The workload specifies the number of jobs assigned to a processor. The workload on processors consists of jobs, each of which is a unit
of work to be allocated processor time and other resources. The real-time workload parameters are:

1. Number of tasks or jobs in the system:


In many embedded systems the number of tasks is fixed for each operational mode, and these numbers are known in
advance. In some other systems the number of tasks may change as the system executes, but the number of
tasks with hard timing constraints is known at all times. When the satisfaction of timing constraints is to be guaranteed,
the admission and deletion of hard real-time tasks is usually done under the control of the run-time system.

2. Run- time system:

The run-time system must maintain information on all existing hard real-time tasks, including the number of such tasks,
and all their real-time constraints and resource requirements.

Temporal Parameters of Job:

Each job Ji is characterized by its temporal parameters, interconnection parameters and functional parameters. Its temporal
parameters tell us its timing constraints and behaviour, its interconnection parameters tell us how it depends on other jobs and how other
jobs depend on it, and its functional parameters specify the intrinsic properties of the job. The temporal parameters of a job
are:

 Release time ri
 Absolute deadline di
 Relative deadline Di
 Feasible interval (ri, di]

Where di and Di are usually derived from the timing requirements of Ji, other jobs in the same task as Ji, and the overall
system.

Periodic Task Model


The periodic task model is a well-known deterministic workload model and is best suited to hard real-time systems. In this
model each task τi is composed of a sequence of jobs; the period of τi is the minimal inter-
arrival time between consecutive jobs, and the task computation time is the maximum computation time among all jobs of τi.
Such a task is called a periodic task.

With its various extensions, the model characterizes accurately many traditional hard real-time applications, such as
digital control, real-time monitoring, and constant bit-rate voice/video transmission. The jobs of a given task are released
at regular intervals and are modeled as periodic, with period p. The accuracy of the model decreases with increasing release-time jitter. Suppose a
task Ti is a series of periodic jobs Jij; the task may be described with the following parameters:

 pi - period, minimum inter-release interval between jobs in Task Ti.


 ei - maximum execution time for jobs in task Ti.
 rij - release time of the jth Job in Task i (Jij in Ti).
 φi - phase of Task Ti, equal to ri1, i.e. the release time of the first job in the task Ti.
 H – hyperperiod = least common multiple of pi for all i: H = lcm(pi). The number of jobs in a hyper-
period is equal to the sum of H/pi over all i.
 ui - utilization of Task Ti and is equal to ei/pi.
 U - Total utilization = Sum over all ui.
 di - absolute deadline
 Di - relative deadline
 (ri, di] - feasible interval
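The parameters above lend themselves to a short calculation. The sketch below is a minimal Python illustration (not from the notes) that, for a hypothetical set of periodic tasks given as (period, execution time) pairs, computes the hyperperiod H = lcm(pi), the utilizations ui = ei/pi, the total utilization U, and the number of jobs per hyperperiod.

```python
from math import lcm

# Hypothetical periodic tasks: (period p_i, worst-case execution time e_i).
tasks = [(4, 1), (5, 2), (10, 2)]

H = lcm(*(p for p, _ in tasks))              # hyperperiod = lcm of all periods
utilizations = [e / p for p, e in tasks]     # u_i = e_i / p_i
U = sum(utilizations)                        # total utilization
jobs_per_hyperperiod = sum(H // p for p, _ in tasks)

print(f"H = {H}, U = {U:.2f}, jobs per hyperperiod = {jobs_per_hyperperiod}")
# H = 20, U = 0.85, jobs per hyperperiod = 11
```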

Sporadic Jobs:

Most real-time systems have to respond to external events which occur randomly. When such an event occurs the system
executes a set of jobs in response. The release times of those jobs are not known until the event triggering them occurs.
These jobs are called sporadic jobs or aperiodic jobs because they are released at random times.
Tasks containing jobs that are released at random time instants and have hard deadlines are called sporadic
tasks. Sporadic tasks are treated as hard real-time tasks: ensuring that their deadlines are met is the primary concern,
whereas minimizing their response times is of secondary importance. For example,

 An autopilot is required to respond to a pilot’s command to disengage the autopilot and switch to manual control
within a specified time.
 A fault tolerant system may be required to detect a fault and recover from it in time to prevent disaster

When a task or job has no deadline or only a soft deadline, it is called an aperiodic task or job. For example,

 An operator adjusts the sensitivity of a radar system. The radar must continue to operate and in the near future
change its sensitivity.

For example, a periodic task τi with ri = 2, Ti = 5, ei = 2, Di = 5 releases its jobs as sketched below (and this continues indefinitely).
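Since the figure for this example is not reproduced in these notes, a minimal Python sketch of the same job stream, using the stated parameters, is:

```python
# Jobs of the periodic task with phase r_i = 2, period T_i = 5,
# execution time e_i = 2 and relative deadline D_i = 5.
PHASE, PERIOD, EXEC, REL_DL = 2, 5, 2, 5

for j in range(4):                       # first four jobs only
    release = PHASE + j * PERIOD
    deadline = release + REL_DL
    print(f"J_{j+1}: released at {release}, runs {EXEC} units, "
          f"absolute deadline {deadline}")
# J_1: released at 2, absolute deadline 7
# J_2: released at 7, absolute deadline 12, and so on.
```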

Release Time of Sporadic Job:

 The release times of sporadic and aperiodic jobs are random


 The system model gives the probability distribution A(x) of the release time of such a job.
 A(x) gives us the probability that the release time of a job is at or earlier than x, or, in the case of inter-release times,
that the inter-release time is less than or equal to x.

Arrival Time:

Rather than speaking of release times for aperiodic jobs, we sometimes use the terms arrival time and inter-arrival time,
as commonly used in queueing theory. An aperiodic job arrives when it is released. A(x) is then the arrival-time distribution (or
inter-arrival-time distribution), where A(x) is the probability that the release time of a job is at or earlier than x.

Characterization of Execution Time:

The execution time ei of a job Ji lies in the range [ei-, ei+], where ei- is the minimum execution time and ei+ is the maximum
execution time of Ji. The values ei- and ei+ of every hard real-time job Ji must be known even when the actual value
of ei is not. For the purpose of meeting deadlines it suffices to know ei+, since the actual execution time ei can never exceed
ei+.

Maximum Execution Time:

 For the purpose of determining whether each job can always complete by its deadline, it suffices to know its
maximum execution time.
 In most deterministic models used to characterize hard real-time applications, the term execution time ei of each
job Ji specifically means its maximum execution time.
 However we don’t mean that the actual execution time is fixed and known, only that it never exceeds our ei
(which may actually be ei+).
Precedence Constraint
Data-flow and control dependencies between jobs can constrain the order in which the jobs can be executed. Jobs
and tasks have two main types of dependencies: mutual exclusion and precedence constraints. When a job Ji can start only
after another job Jk finishes, the constraint is called a precedence constraint. If jobs can execute in any
order, they are said to be independent; similarly, a task or job that executes without any dependency on other tasks is
called an independent task or job. For example, consider an information server; some of its precedence constraints are:
 Before a query is processed and the requested information retrieved, its authorization to access the information
must first be checked.

 The retrieval job cannot begin execution before the authentication job completes.
 The communication job that forwards the information to the requester cannot begin until the retrieval job
completes.

Similarly, in a radar surveillance system, the signal-processing task is the producer of track records and the tracker task
is the consumer; then

 Each tracker job processes the track records produced by a signal processing job.
 The tracker job is precedence constrained.

Precedence Graph and Task graph:

A precedence relation on a set of jobs is a relation that determines the precedence constraints among individual jobs. It is
denoted by a partial-order relation (<). A job Ji is a predecessor of another job Jk (and Jk is a successor of Ji) if Jk cannot
begin execution until the execution of Ji completes. This is written Ji < Jk. Then

 Ji is an immediate predecessor of Jk (and Jk is an immediate successor of Ji) if Ji < Jk and there is no other job Jj
such that Ji < Jj < Jk
 Two jobs Ji and Jk are independent when neither Ji < Jk nor Jk < Ji
 A job with predecessors is ready for execution when the time is at or after its release time and all of its
predecessors are completed

A precedence graph is a directed graph which represents the precedence constraints among a set of jobs J where each
vertex represents a job in J. There is a directed edge from vertex Ji to vertex Jk when the job Ji is an immediate
predecessor of job Jk. For example, consider a system containing nine non-preemptable jobs named Ji, for i = 1, 2, ..., 9. J1 is the
immediate predecessor of J9, and J4 is the immediate predecessor of J5, J6, J7, and J8. There are no other precedence
constraints, and Ji has a higher priority than Jk if i < k. The precedence graph can then be constructed accordingly; a small sketch in code is given after the following paragraph.

The second row of the figure represents jobs in a periodic task with phase 2, period 3, and relative deadline 3. The jobs in it are dependent:
the first job is the immediate predecessor of the second job, the second job is the immediate predecessor of the third job,
and so on. The precedence graph of the jobs in this task is a chain. A subgraph being a chain indicates that for every pair of
jobs Ji and Jk in the subgraph, either Ji < Jk or Jk < Ji; hence the jobs must be executed in serial order.
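Since the precedence graph figure is not reproduced here, the sketch below (illustrative Python, not from the source) encodes the nine-job example as a map of immediate predecessors and shows how readiness of a job can be tested once its predecessors have completed; the function name is ours.

```python
# Immediate predecessors of the nine-job example: J1 -> J9, J4 -> J5..J8.
predecessors = {
    "J9": {"J1"},
    "J5": {"J4"}, "J6": {"J4"}, "J7": {"J4"}, "J8": {"J4"},
    # J1, J2, J3, J4 have no predecessors.
}

def is_ready(job: str, completed: set, now: float, release: float) -> bool:
    """A job is ready when the current time is at or after its release time
    and all of its immediate predecessors have completed."""
    return now >= release and predecessors.get(job, set()) <= completed

completed = {"J4"}
print(is_ready("J5", completed, now=3, release=0))   # True:  J4 is done
print(is_ready("J9", completed, now=3, release=0))   # False: J1 not done yet
```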
Data Dependency
Data dependencies cannot be captured by a precedence graph. In many real-time systems jobs communicate via shared data,
so the data of one job depends on another job; this is called a data dependency. Often the designer chooses not to
synchronize producer and consumer jobs: the producer places the data in a shared address space, and the consumer simply reads
whatever data is there whenever it needs it.

In this case the precedence graph will show the producer and consumer jobs as independent, since they are apparently not
constrained to run in turn.

In a task graph, data dependencies are represented explicitly by data-dependency edges among jobs. There is a data-
dependency edge from the vertex Ji to the vertex Jk in the task graph if the job Jk consumes data generated by Ji or the job Ji
sends messages to Jk. A parameter of an edge from Ji to Jk is the volume of data transferred from Ji to Jk.

In multiple processor systems the volume of data to be transferred can be used to make decisions about scheduling of jobs
on processors.

Sometimes the scheduler may not be able to schedule data dependent jobs independently. To ensure data integrity
some locking mechanism must be used to ensure that only one job can access the shared data at a time. This leads
to resource contention, which may also constrain the way jobs execute. However this constraint is imposed by
scheduling and resource control algorithms. It is not a precedence constraint because it is not an intrinsic
constraint on the execution order of jobs.

Functional Parameters
While scheduling and resource control decisions are made independently of most functional characteristics of jobs, there
are several functional properties that do affect these decisions. The workload model must explicitly describe these
properties using functional parameters:

 Preemptivity
 Criticality
 Optional execution
 Laxity type

Preemptivity of Jobs:
Execution of jobs can often be interleaved. The scheduler may suspend the execution of a less urgent job and give the
processor to a more urgent job. Later, the less urgent job can resume its execution. This interruption of job execution is
called preemption. A job is preemptable if its execution can be suspended at any time to allow the execution of other jobs
and can later be resumed from the point of suspension.

A job is non-preemptable if it must be executed from start to completion without interruption. This constraint may be
imposed because its execution, if suspended, must be executed again from the beginning. Sometimes a job may be
preemptable everywhere except for a small portion which is constrained to be non-preemptable.

An example is an interrupt handling job. An interrupt handling job usually begins by saving the state of the processor.
This small portion of the job is non-preemptable since suspending the execution may cause serious errors in the data
structures shared by the jobs.

During preemption the system must first save the state of the preempted job at the time of preemption so that it can
resume the job from that state. Then the system must prepare the execution environment for the preempting job before
starting the job. These actions are called a context switch. The amount of time required to accomplish a context switch is
called a context- switch time. The terms context switch and context-switch time are used to mean the overhead work done
during preemption, and the time required to accomplish this work.

For example, in the case of CPU jobs, the state of the preempted job includes the contents of the CPU registers. After
saving the contents of the registers in memory and before the preempting job can start, the operating system must load the
new register values, clear pipelines, perhaps clear the caches, etc.

Criticality of Jobs:

In any system, jobs are not equally important. The importance (or criticality) of a job is a positive number that indicates
how critical a job is with respect to other jobs. It is also expressed using the terms priority and weight. The more important a job,
the higher its priority or the larger its weight. During an overload when it is not possible to schedule all the jobs to meet
their deadlines, it may make sense to sacrifice the less critical jobs, so that the more critical jobs meet their deadlines. For
this reason, some scheduling algorithms try to optimize weighted performance measures, taking into account the
importance of jobs.

Optional Executions:

This parameter identifies the jobs (or portions of jobs) that are optional and those that are mandatory.

Laxity type or Laxity function:

The laxity type indicates the relative importance of a timing constraint, for example hard versus soft. It may
be supplemented with a utility function (for soft constraints) that gives the usefulness of a result as a function of its degree of
tardiness.

Resource Parameters of Jobs


A job requires a processor and some resources throughout its execution. The resource parameters of each job give us the
type of processor and the units of each resource type required by the job, and the time intervals during its execution when
the resources are required. These parameters are needed to support resource-management decisions.

The resource parameters of jobs give us a partial view of the processors and resources from the perspective of the
applications. Sometimes we also need to describe the characteristics of processors and resources independently of the application.
For this purpose, the resources themselves have parameters.

One resource parameter is preemptivity. A resource is non-preemptable if each unit of the resource is constrained to be used
serially: once a unit of a non-preemptable resource is allocated to a job, other jobs needing the unit must wait until the job
completes its use. If jobs can use every unit of a resource in an interleaved way, the resource is preemptable. A lock on a
data object is an example of a non-preemptable resource. This does not mean that the job holding the lock is non-preemptable on other
resources or on the processor: the transaction can be preempted on the processor by other transactions not waiting for the
locks.

Resource Graph:

A resource graph describes the configuration of resources. There is a vertex Ri for every processor or resource Ri in the
system. The attributes of the vertex are the parameters of the resource.

The resource type of a resource tells us whether the resource is a processor or a passive resource, and its number gives us
the number of available units. Edges in resource graphs represent the relationship among resources. There are 2 types of
edges in resource graphs.

1. Is-a-part-of edge:

An edge from vertex Ri to vertex Rk meaning that Rk is a component of Ri is called an is-a-part-of edge; e.g. a memory is
part of a computer and so is a monitor. The subgraph containing all the is-a-part-of edges is a forest. The root of each tree
represents a major component, with its subcomponents represented by the other vertices; e.g. the resource graph of a system
containing 2 computers consists of 2 trees, the root of each tree representing a computer with children of this vertex
including its CPUs etc.

2. Accessibility edges:

Some edges in resource graphs represent connectivity between components. These edges are called accessibility edges.
For example if there is a connection between two CPUs in the two computers, then each CPU is accessible from the other
computer and there is an accessibility edge from each computer to the CPU of the other computer.

Each accessibility edge may have several parameters. For example, a parameter of an accessibility edge from a processor
Pi to another Pk is the cost of sending a unit of data from a job executing on Pi to a job executing on Pk.

Scheduling Hierarchy

The figure ‘model of a real-time system’ shows the three elements of our model of real-time systems. The application
system is represented by

 a task graph which gives the processor time and resource requirements of jobs, their timing constraints and
dependencies
 A resource graph describing the resources available to execute the application system, their attributes and rules
governing their use
 and, between these graphs, the scheduling and resource access-control algorithms used by the operating system

Jobs are scheduled and allocated resources according to a chosen set of scheduling algorithms and resource access-control
protocols. The scheduler is a module that implements these algorithms.

The scheduler assigns processors to jobs or, equivalently, assigns jobs to processors. A schedule is an assignment by the
scheduler of all the jobs in the system to the available processors. For the jobs to execute properly, a schedule must be valid.

A valid schedule is a schedule that satisfies the following conditions:

 Every processor is assigned to at most one job at any time


 Every job is assigned to at most one processor at any time
 No job is scheduled before its release time
 Depending on the scheduling algorithms used, the total amount of processor time assigned to every job is equal to
its maximum or actual execution time
 All the precedence and resource usage constraints are satisfied

Beyond validity, we also want a schedule to be feasible and the scheduling algorithm to be optimal, in the following sense.

A valid schedule is a feasible schedule if every job completes by its deadline and in general meets its timing constraints.
A set of jobs is schedulable according to a scheduling algorithm if when using the algorithm the scheduler always
produces a feasible schedule.

A hard real-time scheduling algorithm is optimal if using the algorithm the scheduler always produces a feasible schedule
if the given set of jobs has feasible schedules. If an optimal algorithm cannot find a feasible schedule, we can conclude
that a given set of jobs cannot feasibly be scheduled by any algorithm.
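As a minimal illustration of the validity conditions listed above, the Python sketch below (our own simplified representation, not from the notes) treats a schedule as a list of (job, processor, start, end) intervals and checks only the release-time and single-assignment conditions; the precedence and resource conditions are ignored in this sketch.

```python
from itertools import combinations

# A schedule as a list of execution intervals: (job, processor, start, end).
schedule = [("J1", "P1", 0, 3), ("J2", "P2", 0, 1), ("J3", "P2", 1, 3)]
release = {"J1": 0, "J2": 0, "J3": 1}

def overlaps(a, b) -> bool:
    """Two intervals overlap if each starts before the other ends."""
    return a[2] < b[3] and b[2] < a[3]

def is_valid(schedule, release) -> bool:
    # No job is scheduled before its release time.
    if any(start < release[job] for job, _, start, _ in schedule):
        return False
    # Each processor runs at most one job, and each job runs on at most
    # one processor, at any time.
    for a, b in combinations(schedule, 2):
        same_cpu = a[1] == b[1]
        same_job = a[0] == b[0]
        if (same_cpu or same_job) and overlaps(a, b):
            return False
    return True

print(is_valid(schedule, release))   # True for the intervals above
```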

Real-time scheduling
Approaches to Real-Time Scheduling

Clock Driven Approach


Clock-driven scheduling is also called time-driven scheduling. When scheduling is clock-driven, decisions about which jobs
execute when are made at specific time instants. Typically, in a clock-driven scheduling system, all the parameters
of hard real-time jobs are fixed and known in advance.

A schedule of the jobs is computed off-line and is stored for use at run-time. The scheduler schedules the jobs according
to this schedule at each scheduling decision time. Hence scheduling overhead at run-time is minimized. Scheduling
decisions are usually made at regularly spaced time instants.

One way to implement this is to use a hardware timer set to expire periodically which causes an interrupt which invokes
the scheduler. When the system is initialized, the scheduler selects and schedules the jobs that will execute until the next
scheduling decision time and then blocks itself waiting for the expiration of the timer. When the timer expires, the
scheduler repeats these actions.
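A minimal sketch of the timer-driven loop just described is given below, in Python. The table of frame-to-job entries, the frame length and the job names are all invented for illustration, and a real system would block on a hardware timer interrupt rather than call sleep().

```python
import time

# Pre-computed off-line schedule: which job runs in each frame of the cycle.
# The table, frame length and job names below are invented for illustration.
FRAME = 0.1                      # scheduling decision every 100 ms
CYCLE = 4                        # 4 frames per cycle; frame 3 is deliberately idle
table = {0: "sample_sensor", 1: "compute_control_law", 2: "adjust_valves"}

def run_job(name: str) -> None:
    print(f"running {name}")     # placeholder for the real job body

def cyclic_executive(num_frames: int) -> None:
    for k in range(num_frames):
        job = table.get(k % CYCLE)
        if job is not None:      # no entry for this frame: processor stays idle
            run_job(job)
        time.sleep(FRAME)        # stand-in for blocking on the periodic timer

cyclic_executive(num_frames=8)
```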

Round-Robin Approach

The round-robin approach is commonly used for scheduling time-shared applications. When jobs are scheduled in a
round-robin system, every job joins a first-in-first-out (FIFO) queue when it becomes ready for execution. The job at the
head of the queue executes for at most one-time slice. If the job does not complete by the end of the time slice, it is
preempted and placed at the end of the queue to wait for its next turn.

When there are n ready jobs in the queue, each job gets one time slice every n time slices, that is, once per round. The length of the time
slice is relatively short (typically tens of milliseconds), so the execution of each job begins almost immediately after it
becomes ready.
Generally, each job gets 1/nth share of the processor when there are n jobs ready for execution. This is why the round-
robin algorithm is also known as the processor-sharing algorithm.

Weighted Round-Robin Approach:

The weighted round-robin algorithm is used for scheduling real-time traffic in high-speed switched networks. In this
approach, different jobs may be given different weights rather than equal shares of the processor.

In weighted round robin each job Ji is assigned a weight Wi; the job receives Wi consecutive time slices each
round, and the duration of a round equals the sum of the weights of all the jobs ready for execution. We can speed up
or slow down the progress of each job by adjusting its weight.
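A minimal sketch of this policy, assuming jobs given as hypothetical (name, weight, remaining time in slices) triples, is:

```python
from collections import deque

# (job name, weight W_i, remaining execution time in slices) - hypothetical values.
ready = deque([("J1", 2, 3), ("J2", 1, 2), ("J3", 3, 4)])

t = 0
while ready:
    job, weight, remaining = ready.popleft()
    slices = min(weight, remaining)          # W_i consecutive slices per round
    t += slices
    remaining -= slices
    print(f"t={t}: {job} ran {slices} slice(s), {remaining} left")
    if remaining > 0:
        ready.append((job, weight, remaining))   # back of the queue for the next round
```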

Because only a fraction of the processor time is allocated to each job, a round-robin scheduler delays the completion of every job. If
round-robin scheduling is used to schedule precedence-constrained jobs, the response time of the whole set of jobs can become very
large.

For this reason, the weighted round-robin approach is generally unsuitable for scheduling such jobs.

For example, consider two sets of jobs J1 = {J1,1, J1,2} and J2 = {J2,1, J2,2}:

 The release times of all jobs are 0


 The execution times of all jobs are 1
 J1,1 and J2,1 execute on processor P1

 J1,2 and J2,2 execute on processor P2

 Suppose that J1,1 is the predecessor of J1,2

 Suppose that J2,1 is the predecessor of J2,2

Figure (a) shows that with weighted round-robin scheduling both sets of jobs complete approximately at time 4. If instead the jobs
are executed one after the other on each processor, one of the chains can complete at time 2 and the other
at time 3, as shown in figure (b).

Suppose, however, that the result of the first job in each set is piped to the second job in the set, so that the second job can execute
after each one or a few time slices of the former complete. Then it is better to schedule the jobs on a round-robin basis,
because both sets can complete a few time slices after time 2.

In a switched network a downstream switch can begin to transmit an earlier portion of the message as soon as it receives
the portion. It does not have to wait for the arrival of the rest of the message. The weighted round-robin approach does not
require a sorted priority queue, only a round-robin queue. This is a distinct advantage for scheduling message
transmissions in ultrahigh-speed networks since fast priority queues are very expensive
Priority Driven Approach

The term priority-driven algorithms refers to a class of scheduling algorithms that never leave any resource idle
intentionally: a resource becomes idle only when no job requires it for execution. It is an event-driven
approach to job scheduling in which scheduling decisions are made only when jobs are released or complete. Other commonly
used terms for this approach are greedy scheduling, list scheduling, and work-conserving scheduling.

A priority-driven algorithm is said to be greedy because it tries to make locally optimal decisions: leaving a resource
idle while some job is ready to use it is not locally optimal.

The term list scheduling is also used because any priority-driven algorithm can be implemented by assigning priorities to
jobs. In this approach, Jobs ready for execution are placed in one or more queues ordered by the priorities of the jobs. At
any scheduling decision time, the jobs with the highest priorities are scheduled and executed on the available processors.
Hence a priority-driven scheduling algorithm is defined largely by the list of priorities it assigns to jobs.

It is also called work-conserving scheduling since, when a processor or resource is available and some job can use it to
make progress, a priority-driven algorithm never makes that job wait, i.e. as soon as a job completes, another ready job enters
execution.

Examples include:

Most scheduling algorithms used in non-real-time systems are priority-driven. Examples include:

 FIFO (first-in-first-out) and LIFO (last-in-first-out) algorithms, which assign priorities to jobs based on their
release times
 SETF (shortest-execution-time-first) and LETF (longest-execution-time-first) algorithms, which assign priorities
based on job execution times.

Real-time priority scheduling assigns priorities based on deadlines or some other timing constraint; examples are:

 Earliest deadline first


 Least slack time first

Consider a task graph with the following conditions:

 Jobs J1, J2, …, J8, where Ji has higher priority than Jk if i < k.


 Jobs are scheduled on two processors P1 and P2
 Jobs communicate via shared memory, so communication cost is negligible
 The schedulers keep one common priority queue of ready jobs

When the jobs are scheduled with a preemptive priority-driven approach, they execute as follows:

 At time 0, jobs J1, J2, and J7 are ready for execution.


 They are the only jobs in the priority queue at this time.
 Since J1 and J2 have higher priorities than J7, they are ahead of J7 in the queue and hence are scheduled first.
 At time 1, J2 completes and hence J3 becomes ready. J3 is placed in the priority queue ahead of J7 and is scheduled on
P2, the processor freed by J2.
 At time 3, both J1 and J3 complete. J5 is still not released, so J4 and J7 are scheduled.
 At time 4, J5 is released. Now there are three ready jobs; J7 has the lowest priority among them, so it is preempted and J4
and J5 have the processors.
 At time 5, J4 completes. J7 resumes on processor P1.
 At time 6, J5 completes. Because J7 is not yet completed, both J6 and J8 are not yet ready for execution. Thus processor P2
becomes idle.
 J7 finally completes at time 8. J6 and J8 can now be scheduled on the processors.

When the jobs are scheduled with a non-preemptive priority-driven approach, they execute as follows:

 Before time 4 this schedule is the same as the preemptive one.


 However, at time 4, when J5 is released, both processors are busy. J5 has to wait until J4 completes at time 5 before it
can begin execution.
 It turns out that for this system, postponement of the higher-priority job benefits the set of jobs as a whole.
 The entire set completes one time unit earlier under the non-preemptive schedule.

Dynamic vs Static Systems


Dynamic Systems

If jobs are scheduled on multiple processors and a job can be dispatched from the priority run queue to any of the
processors, the system is called a dynamic system. In such a system a job migrates if it starts execution on one processor
and is resumed on a different processor.
Static Systems

If jobs are partitioned into subsystems, and each subsystem is bound statically to a processor, the system is called a static system.

A static system provides poorer performance than a dynamic system in terms of overall response time,
but it is possible to validate static systems, whereas this is not always true for dynamic systems. Hence most hard real-time
systems are static.

Effective Release Times and Deadlines


Sometimes the release time of a job may be later than that of its successors, or its deadline may be earlier than that
specified for its predecessors; in such cases the concepts of effective release time and effective deadline come into play.

Effective release time

If a job has no predecessors, its effective release time is equal to its release time. If it has predecessors, its
effective release time is the maximum of its release time and the effective release times of its predecessors.

Effective deadline

If a job has no successors, its effective deadline is equal to its deadline. If it has successors, its effective
deadline is the minimum of its deadline and the effective deadlines of its successors.

On a multiprocessor, effective release times and deadlines must be used to obtain an accurate analysis. On a single
processor with preemptable jobs, however, the distinction is unnecessary in the following sense: it is feasible to schedule any
set of jobs according to their actual release times and deadlines if and only if it is feasible to schedule the set
according to their effective release times and deadlines. A scheduler can therefore use the effective release times and deadlines
as if all the jobs were independent and ignore all the precedence constraints.

Consider the following example, whose task graph is given in the accompanying figure.

Effective release times

 The numbers in brackets next to each job are its given release time and deadline.
 Because J1 and J2 have no predecessors, their effective release times are their given release times, 2 and 0
respectively.
 The given release time of J3 is 1, but the latest effective release time of its predecessors is 2 (that of J1), so its
effective release time is 2.
 The effective release times of J4, J5, J6, J7 are 4, 2, 4, 6 respectively.

Effective deadlines

 J6 and J7 have no successors, so their effective deadlines are their given deadlines, 20 and 21 respectively.
 Since the effective deadlines of the successors of J4 and J5 are later than the given deadlines of J4 and J5, the
effective deadlines of J4 and J5 are equal to their given deadlines, 9 and 8 respectively.
 However, the given deadline of J3 (12) is larger than the minimum effective deadline (8) of its successors, so the effective
deadline of J3 is 8.
 Similarly, the effective deadlines of J1 and J2 are 8 and 7 respectively.
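The two rules above translate directly into a forward and a backward pass over the precedence graph. The sketch below is a generic Python illustration on a small hypothetical graph (it is not the figure referred to in the notes, which is not reproduced here); the job names, edges and times are invented.

```python
# Effective release times and deadlines over a precedence DAG.
# Hypothetical example: J1 -> J3, J2 -> J3, J3 -> J4.
succ = {"J1": ["J3"], "J2": ["J3"], "J3": ["J4"], "J4": []}
pred = {"J1": [], "J2": [], "J3": ["J1", "J2"], "J4": ["J3"]}
release = {"J1": 2, "J2": 0, "J3": 1, "J4": 3}
deadline = {"J1": 10, "J2": 9, "J3": 12, "J4": 8}

order = ["J1", "J2", "J3", "J4"]          # any topological order of the DAG

eff_release = {}
for j in order:                            # forward pass over predecessors
    eff_release[j] = max([release[j]] + [eff_release[p] for p in pred[j]])

eff_deadline = {}
for j in reversed(order):                  # backward pass over successors
    eff_deadline[j] = min([deadline[j]] + [eff_deadline[s] for s in succ[j]])

print(eff_release)    # {'J1': 2, 'J2': 0, 'J3': 2, 'J4': 3}
print(eff_deadline)   # {'J1': 8, 'J2': 8, 'J3': 8, 'J4': 8}
```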
Priority Scheduling Based on Deadline
Priority scheduling algorithms based on deadlines are:

1. Earliest deadline first (EDF)


2. Least slack time first (LST)

Earliest deadline first (EDF):

EDF is a priority scheduling algorithm that assigns priorities to jobs based on their deadlines: the earlier the
deadline, the higher the priority. It requires knowledge of deadlines only.
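A minimal sketch of preemptive EDF on one processor follows, assuming independent jobs given as hypothetical [release time, remaining execution time, absolute deadline] entries and one time unit per scheduling decision; this is an illustrative Python simulation, not a production scheduler.

```python
# Preemptive EDF on a single processor, one time unit per step.
# Each job: [release time, remaining execution time, absolute deadline].
jobs = {"J1": [0, 3, 6], "J2": [2, 2, 4], "J3": [4, 1, 9]}   # hypothetical values

t = 0
while any(rem > 0 for _, rem, _ in jobs.values()):
    ready = {j: v for j, v in jobs.items() if v[0] <= t and v[1] > 0}
    if ready:
        # EDF: among the ready jobs, run the one with the earliest absolute deadline.
        j = min(ready, key=lambda name: ready[name][2])
        jobs[j][1] -= 1
        print(f"t={t}: run {j}")
        if jobs[j][1] == 0 and t + 1 > jobs[j][2]:
            print(f"  {j} missed its deadline {jobs[j][2]}")
    t += 1
```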

Least Slack Time first (LST):

Suppose a job Ji has deadline di, execution time ei, and was released at time ri. Then at any time t < di:

 remaining execution time t_rem = ei − (processor time the job has already received); if the job has executed
continually since ri, this equals ei − (t − ri)


 slack time t_slack = di − t − t_rem

In this approach, priorities are assigned to jobs based on slack time: the smaller the slack time, the higher the priority.
LST is more complex to implement than EDF because it requires knowledge of execution times as well as deadlines.
Since the actual execution time is often difficult to know a priori (it depends on the data), a worst-case estimate is usually
used. For example:

Job J1 of the example is released at time 0, has its deadline at time 6, and has execution time 3. Hence its slack is 3 at time 0.
The job starts to execute at time 0. As long as it executes, its slack remains 3, because at any time t before its completion its
slack is 6 − t − (3 − t) = 3.

Suppose J1 is preempted at time 2 by J3 which executes from time 2 to 4. During this interval the slack of J1 decreases
from 3 to 1. At time 4 the remaining execution time of J1 is 1, so its slack is 6 - 4 - 1 = 1. The LST algorithm assigns
priorities to jobs based on their slacks. The smaller the slack, the higher the priority.
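The slack arithmetic in the J1 example above can be checked with a few lines of Python; the helper function below is ours and simply evaluates the formula t_slack = di − t − t_rem with the remaining execution time tracked explicitly.

```python
def slack(t: float, abs_deadline: float, remaining_exec: float) -> float:
    """Slack of a job at time t: d_i - t - remaining execution time."""
    return abs_deadline - t - remaining_exec

# J1: released at 0, absolute deadline 6, execution time 3.
print(slack(t=0, abs_deadline=6, remaining_exec=3))  # 3  (before it starts)
print(slack(t=2, abs_deadline=6, remaining_exec=1))  # 3  (it ran from 0 to 2)
print(slack(t=4, abs_deadline=6, remaining_exec=1))  # 1  (preempted from 2 to 4)
```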

Optimality of EDF and LST:

An algorithm is optimal if it always produces a feasible schedule whenever one exists. EDF and LST are optimal in this
sense on a single processor, as long as preemption is allowed and jobs do not contend for resources.

Optimality of EDF (Proof):

Theorem: When preemption is allowed and jobs do not contend for resources, the EDF algorithm can produce a feasible
schedule of a set J of jobs with arbitrary release times and deadlines on a processor if and only if J has feasible schedules.

Proof:

To show the optimality of EDF, it suffices to show that any feasible schedule can be transformed into an EDF
schedule.

Suppose that in a feasible schedule a job Ji is scheduled to execute before a job Jk, but Ji’s deadline is later than Jk’s. There are two possibilities:

 The release time of Jk is after Ji completes; in that case the two jobs are already in EDF order.
 The release time of Jk is before the end of the interval in which Ji executes.

In the second case it is always possible to swap Ji and Jk (or portions of them), precisely because Ji’s deadline is later than Jk’s.

After performing all such swaps, we can move any job that follows an idle period forward into the idle period.

The result is an EDF schedule. Conversely, if EDF fails to produce a feasible schedule, the same transformation argument shows
that no feasible schedule exists. Hence the optimality of EDF is verified.

Latest-Release-Time (LRT) Algorithm:

The Latest-Release-Time algorithm treats release times as deadlines and deadlines as release times and schedules jobs
backwards, from the latest deadline of all jobs toward the current time, in a priority-driven manner. The ‘priorities’ are
based on release times: the later the release time, the higher the ‘priority’. Because it may leave the processor idle when there are jobs
awaiting execution, the LRT algorithm is not a priority-driven algorithm. For example:

In the following example, the number next to the job is the execution time and the feasible interval follows it.

The latest deadline is 8, so time starts at 8 and goes back to 0. At time 8, J2 is “ready” and is scheduled. At time 7, J3 is
also “ready” but because J2 has a later release time, it has a higher priority, so J2 is scheduled from 7 to 6.

When J2 “completes” at time 6, J1 is “ready” however J3 has a higher priority so is scheduled from 6 to 4.

Finally J1 is scheduled from 4 to 1.


The following corollary states that the LRT algorithm is also optimal under the same conditions that the EDF algorithm is
optimal. Its proof follows straightforwardly from the proof of EDF.

“When preemption is allowed and jobs do not contend for resources, the LRT algorithm can produce a feasible schedule
of a set J of jobs with arbitrary release times and deadlines on a processor if and only if feasible schedules of J exist”.

Non-optimality of EDF and LST


EDF and LST algorithms are not optimal if preemption is not allowed or there is more than one processor. Consider the
following 3 independent non-preemptable jobs J1, J2, J3, with release times 0, 2, 4, execution times 3, 6, 4, and
deadlines 10, 14, 12 respectively.

Both EDF and LST would produce the infeasible schedule shown in fig (a), whereas a feasible schedule, shown in fig (b), is possible,
but it requires leaving the processor idle for a period.

Now consider two processors and three jobs J1, J2, J3, with execution times 1, 1, 5 and deadlines 1, 2, 5 respectively,
all with release time 0.

Then EDF gives the infeasible schedule (a), whereas LST gives a feasible schedule (b); in general, however, LST is also non-
optimal for multiprocessors.

Hence neither EDF nor LST is optimal when jobs are non-preemptable or when there is more than one processor.

Offline vs Online Scheduling


Scheduling that makes use of a pre-computed schedule of all hard real-time jobs, i.e. a schedule computed offline
before the system begins to execute, is called off-line scheduling. The computation is based on knowledge of the release times,
processor-time requirements and resource requirements of all jobs for all time. An example of offline scheduling is
clock-driven scheduling.
When the operation mode of the system changes, a new schedule specifying when each job in the new mode executes is also pre-
computed and stored for use. The major disadvantage of offline scheduling is its inflexibility; it is only useful for deterministic
systems. In return, it shows deterministic timing behavior and can optimize the utilization of resources up to 100%.

When the scheduler makes each scheduling decision without knowledge of the jobs that will be released in the future, and the
parameters of each job become known to the scheduler only after the job is released, the scheduling is called online scheduling. An
example is priority-driven scheduling.

Online scheduling is suitable for a system whose future workload is unpredictable. When there is only one processor, optimal
online algorithms exist.
