
Global Institute of Technology, Jaipur
ITS-1, IT Park, EPIP, Sitapura, Jaipur 302022 (Rajasthan)

RTU Paper Solution: VI and VIII Semester University Examination 2019

Branch: CSE
Subject Name: Real Time Systems
Paper Code: 8CS4.2A
Semester: VIII (Year: 4th)
Date of Exam: 20/04/2019

Q1(A). What is a Real Time System? Differentiate between soft and hard RTS.
Ans. A real-time system is a system that is subject to real-time constraints: its response must be guaranteed within a specified timing constraint, i.e., the system must meet specified deadlines. Examples: flight control systems, real-time monitors, etc.

Differences between Soft Real-Time and Hard Real-Time Systems: The response time requirements of hard real-time systems are on the order of milliseconds or less, and missing them can result in a catastrophe. In contrast, the response time requirements of soft real-time systems are higher and not very stringent. In a hard real-time system, the peak-load performance must be predictable and must not violate the predefined deadlines; in a soft real-time system, degraded operation under a rarely occurring peak load can be tolerated. A hard real-time system must remain synchronous with the state of the environment in all cases, whereas a soft real-time system may slow down its responses when the load is very high. Hard real-time systems are often safety-critical. Hard real-time systems have small data files and real-time databases, and temporal accuracy is often the main concern. Soft real-time systems, for example on-line reservation systems, have larger databases and require long-term data integrity. If an error occurs in a soft real-time system, the computation is rolled back to a previously established checkpoint to initiate a recovery action; in hard real-time systems, roll-back/recovery is of limited use.

Characteristic            Hard RTS           Soft RTS
Peak-load performance     Predictable        Degraded
Response time             Hard, required     Soft, desired
Safety                    Often critical     Non-critical
Size of data files        Small/medium       Large
Redundancy type           Active             Checkpoint recovery
Data integrity            Short-term         Long-term

Q1(B). Explain the concept of a task. Also explain task parameters.

Ans. Computing and communication systems perform many kinds of work. We call each unit of work that is scheduled and executed by the system a job, and a set of related jobs which jointly provide some system function a task.

There are two types of tasks in real-time systems:


1. Periodic tasks
2. Dynamic tasks
Periodic Tasks: In a periodic task, jobs are released at regular intervals; a periodic task is one which repeats itself after a fixed time interval. A periodic task is denoted by a 4-tuple Ti = <Φi, Pi, ei, Di>, where:
 Φi is the phase of the task, i.e., the release time of the first job of the task. If the phase is not given, the release time of the first job is assumed to be zero.
 Pi is the period of the task, i.e., the time interval between the release times of two consecutive jobs.
 ei is the execution time of the task.
 Di is the relative deadline of the task.
Dynamic Tasks: A dynamic task is a sequential program that is invoked by the occurrence of an event. An event may be generated by processes external to the system or by processes internal to the system. Dynamically arriving tasks can be categorized based on their criticality and on the knowledge of their occurrence times.
1. Aperiodic Tasks: In this type of task, jobs are released at arbitrary time intervals i.e.
randomly. Aperiodic tasks have soft deadlines or no deadlines.
2. Sporadic Tasks: They are similar to aperiodic tasks, i.e., they repeat at random instants. The only difference is that sporadic tasks have hard deadlines. A sporadic task is denoted by a 3-tuple Ti = (ei, gi, Di), where ei is the execution time of the task, gi is the minimum separation between the occurrences of two consecutive instances of the task, and Di is the relative deadline of the task.
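These task models translate directly into code. Below is a minimal illustrative sketch in Python (the class and field names are chosen here for illustration, not taken from the question) of the 4-tuple periodic-task model and the 3-tuple sporadic-task model, together with two derived quantities used in later questions: per-task utilization ei/Pi and the hyperperiod (the least common multiple of the periods).

```python
from dataclasses import dataclass
from math import lcm

@dataclass
class PeriodicTask:
    # Ti = <phase, period, execution time, relative deadline>
    phase: float = 0.0       # phi_i: release time of the first job (0 if unspecified)
    period: float = 1.0      # P_i: interval between consecutive releases
    wcet: float = 0.0        # e_i: execution time
    deadline: float = None   # D_i: relative deadline (defaults to the period)

    def __post_init__(self):
        if self.deadline is None:
            self.deadline = self.period

    @property
    def utilization(self):
        return self.wcet / self.period

@dataclass
class SporadicTask:
    # Ti = (e_i, g_i, D_i): execution time, minimum separation, relative deadline
    wcet: float
    min_separation: float
    deadline: float

def hyperperiod(tasks):
    """Length of one hyperperiod: least common multiple of the (integer) periods."""
    return lcm(*(int(t.period) for t in tasks))

if __name__ == "__main__":
    tasks = [PeriodicTask(0, 4, 1), PeriodicTask(0, 5, 2), PeriodicTask(0, 20, 5)]
    print(sum(t.utilization for t in tasks))   # total utilization of the three tasks (0.9)
    print(hyperperiod(tasks))                  # 20
```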

OR
Q1. Briefly Describe
Block diagram of RTS
Resource Graph: In a resource graph, there is a vertex Ri for every processor or resource Ri in the system. The attributes of the vertex are the parameters of the resource. In particular, the resource type of a resource tells us whether the resource is a processor or a (passive) resource, and its number gives us the number of available units. Just as edges in a task graph represent different types of dependencies among jobs, edges in a resource graph represent the relationships among resources, using different types of edges.
SCHEDULING HIERARCHY: The application system is represented by a task graph, which sits at the top of the scheduling hierarchy. This graph gives the processor-time and resource requirements of jobs, the timing constraints of each job, and the dependencies among jobs. The resource graph describes the amounts of the resources available to execute the application system, the attributes of the resources, and the rules governing their usage. Between them are the scheduling and resource access-control algorithms used by the operating system.

Scheduler and Schedules: Jobs are scheduled and allocated resources according to a chosen set of scheduling algorithms and resource access-control protocols. The module which implements these algorithms is called the scheduler.

By a schedule, we mean an assignment, produced by the scheduler, of all the jobs in the system to the available processors. Here we do not question the correctness of the scheduler; rather, we assume that the scheduler works correctly. A valid schedule satisfies the following conditions:

a. Every processor is assigned to at most one job at any time.

b. Every job is assigned at most one processor at any time.

c. No job is scheduled before its release time.


d. Depending on the scheduling algorithm(s) used, the total amount of
processor time assigned to every job is equal to its maximum or actual execution time.

e. All the precedence and resource usage constraints are satisfied

Timing Characteristics. The workload generated by each multivariate, multirate digital controller
consists of a few periodic control-law computations. Their periods range from a few milliseconds to
a few seconds. A control system may contain numerous digital controllers, each of which deals with
some attribute of the plant. Together they demand tens or hundreds of control laws be computed
periodically, some of them continuously and others only when requested by the operator or in
reaction to some events. The control laws of each multirate controller may have harmonic periods.
They typically use the data produced by each other as inputs and are said to be a rate group. On the
other hand, there is no control theoretical reason to make sampling periods of different rate groups
related in a harmonic way.

Each control-law computation can begin shortly after the beginning of each sampling period, when the most recent sensor data become available. It is natural to want the computation to complete and, hence, the sensor data to be processed before the data taken in the next period become available. This objective is met when the response time of each control-law computation never exceeds the sampling period. The response time of the computation can vary from period to period. In some systems it is necessary to keep this variation small so that the digital control outputs produced by the controller become available at instants more regularly spaced in time. In this case, a timing jitter requirement is imposed on the control-law computation: the variation in its response time must not exceed some chosen threshold.

Tracking. Strong noise and man-made interferences, including electronic countermeasures such as jamming, can lead the signal processing and detection process to wrong conclusions about the presence of objects. A track record on a nonexistent object is called a false return. An application that examines all the track records in order to sort out false returns from real ones and update the trajectories of detected objects is called a tracker. Using the jargon of the subject area, we say that the tracker assigns each measured value (i.e., the tuple of position and velocity contained in each of the track records generated in a scan) to a trajectory. If the trajectory is an existing one, the measured value assigned to it gives the current position and velocity of the object moving along the trajectory. If the trajectory is new, the measured value gives the position and velocity of a possible new object. The tracker runs on one or more data processors which communicate with the signal processors via shared memory.

Gating. Typically, tracking is carried out in two steps: gating and data association. Gating is the process of
putting each measured value into one of two categories depending on whether it can or cannot be tentatively
assigned to one or more established trajectories. The gating process tentatively assigns a measured value to an
established trajectory if it is within a threshold distance G away from the predicted current position and
velocity of the object moving along the trajectory. (Below, we call the distance between the measured and
predicted values the distance of the assignment.) The threshold G is called the track gate. It is chosen so that
the probability of a valid measured value falling in the region bounded by a sphere of radius G centered
around a predicted value is a desired constant.

Unit-II

Q2 A). Differentiate between fixed, jittered, and sporadic release times.

The release time of a job is the instant of time at which the job becomes available for execution. The
job can be scheduled and executed at any time at or after its release time whenever its data and
control dependency conditions are met.
Fixed, Jittered, and Sporadic Release Times

In many systems, we do not know exactly when each job will be released. In other words, we do not know the actual release time ri of each job Ji; we only know that ri is in a range [ri-, ri+]. ri can be as early as the earliest release time ri- and as late as the latest release time ri+. Indeed, some models assume that only the range of ri is known and call this range the jitter in ri, or release-time jitter. Sometimes, the jitter is negligibly small compared with the values of other temporal parameters. If, for all practical purposes, we can approximate the actual release time of each job by its earliest or latest release time, then we say that the job has a fixed release time.
Almost every real-time system is required to respond to external events which occur at random
instants of time. When such an event occurs, the system executes a set of jobs in response. The
release times of these jobs are not known until the event triggering them occurs. These jobs are
called sporadic jobs or aperiodic jobs because they are released at random time instants. (We
will return shortly to discuss the difference between these two types of jobs.) For example, the
pilot may disengage the autopilot system at any time. When this occurs, the autopilot system
changes from cruise mode to standby mode. The jobs that execute to accomplish this mode
change are sporadic jobs.
The release times of sporadic and aperiodic jobs are random variables. The model of the system gives the probability distribution of the release time of such a job or, when there is a stream of similar sporadic or aperiodic jobs, the probability distribution of the interrelease time (i.e., the
length of the time interval between the release times of two consecutive jobs in the stream). The release-time (or interrelease-time) distribution A(x) gives us the probability that the release time of the job is at or earlier than x (or that the interrelease time of the stream of jobs is equal to or less than x) for all valid values of x. Rather than speaking of release times of aperiodic jobs, we sometimes use the terms arrival time and interarrival time, which are commonly used in queueing theory: an aperiodic job arrives when it is released, and A(x) is then the arrival-time (or interarrival-time) distribution.

Q2 B) Explain the priority-driven approach in RTS.

The term priority-driven algorithms refers to a large class of scheduling algorithms that never leave any resource idle intentionally: a resource idles only when no job requiring the resource is ready for execution. Scheduling decisions are made when events such as releases and completions of jobs occur; hence, priority-driven algorithms are event-driven. Other commonly used names for this approach are greedy scheduling, list scheduling, and work-conserving scheduling. A priority-driven algorithm is greedy because it tries to make locally optimal decisions. Leaving a resource idle while some job is ready to use the resource is not locally optimal, so when a processor or resource is available and some job can use it to make progress, such an algorithm never makes the job wait. A priority-driven algorithm can be implemented by assigning priorities to jobs. Jobs ready for execution are placed in one or more queues ordered by the priorities of the jobs. At any scheduling decision time, the jobs with the highest priorities are scheduled and executed on the available processors. Hence, a priority-driven scheduling algorithm is defined to a great extent by the list of priorities it assigns to jobs; the priority list and other rules, such as whether preemption is allowed, define the scheduling algorithm completely. Priorities may be assigned on various bases, for example on the basis of job execution times, and because we can dynamically change the priorities of jobs, even round-robin scheduling can be thought of as priority-driven. In the example below, eight jobs are scheduled preemptively on two processors, P1 and P2, according to a fixed priority list:

Job    Release time    Execution time
J1     0               3
J2     0               1
J3     0               2
J4     0               2
J5     4               2
J6     0               4
J7     0               4
J8     0               1

At time 0, the two ready jobs with the highest priorities are scheduled on processors P1 and P2. At time 1, J2 completes and, hence, J3 becomes ready. J3 is placed in the priority queue ahead of J7 and is scheduled on P2, the processor freed by J2.
At time 3, both J1 and J3 complete. J5 is still not released. J4 and J7 are scheduled.
At time 4, J5 is released. Now there are three ready jobs. J7 has the lowest priority among them.
Consequently, it is preempted. J4 and J5 have the processors.
At time 5, J4 completes. J7 resumes on processor P1.
At time 6, J5 completes. Because J7 is not yet completed, both J6 and J8 are not ready for
execution. Consequently, processor P2 becomes idle.
J7 finally completes at time 8. J6 and J8 can now be scheduled.
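The schedule trace above can be reproduced with a small simulation. The following Python sketch performs preemptive priority-driven (list) scheduling in unit time steps on two processors. The priority list (J1 highest through J8 lowest) and the precedence edges J2 -> J3, J7 -> J6, and J7 -> J8 are assumptions inferred from the trace; they are not stated explicitly in the question.

```python
# Minimal sketch of preemptive priority-driven (list) scheduling on two processors.
# Job data comes from the table above; the precedence edges and priority order are
# assumptions inferred from the schedule trace.

jobs = {
    "J1": {"release": 0, "exec": 3}, "J2": {"release": 0, "exec": 1},
    "J3": {"release": 0, "exec": 2}, "J4": {"release": 0, "exec": 2},
    "J5": {"release": 4, "exec": 2}, "J6": {"release": 0, "exec": 4},
    "J7": {"release": 0, "exec": 4}, "J8": {"release": 0, "exec": 1},
}
predecessors = {"J3": {"J2"}, "J6": {"J7"}, "J8": {"J7"}}   # inferred dependencies
priority = list(jobs)            # earlier in the list = higher priority
NUM_PROCESSORS = 2

remaining = {j: jobs[j]["exec"] for j in jobs}
done, t, trace = set(), 0, []

while len(done) < len(jobs):
    # A job is ready if it is released, unfinished, and all its predecessors are done.
    ready = [j for j in priority
             if jobs[j]["release"] <= t and j not in done
             and predecessors.get(j, set()) <= done]
    running = ready[:NUM_PROCESSORS]          # highest-priority ready jobs get the processors
    trace.append((t, running))
    for j in running:
        remaining[j] -= 1
        if remaining[j] == 0:
            done.add(j)
    t += 1

for step, running in trace:
    print(step, running)
```

Running this sketch shows J7 being preempted at time 4, P2 idling at time 6, and J7 completing at time 8, matching the trace described above.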

OR

Q2A). Explain the concepts of Precedence Graph and Task Graph.

Ans. Precedence Graph and Task Graph

A partial-order relation, called a precedence relation, is defined over the set of jobs to specify the precedence constraints among jobs. A job Ji is a predecessor of another job Jk (and Jk a successor of Ji) if Jk cannot begin execution until the execution of Ji completes. A shorthand notation to state this fact is Ji < Jk. Ji is an immediate predecessor of Jk (and Jk is an immediate successor of Ji) if Ji < Jk and there is no other job Jj such that Ji < Jj < Jk. Two jobs Ji and Jk are independent when neither Ji < Jk nor Jk < Ji. A job with predecessors is ready for execution when the time is at or after its release time and all of its predecessors are completed.
A classical way to represent the precedence constraints among jobs in a set J is by a directed graph G = (J, <). Each vertex in this graph represents a job in J. We will call each vertex by
the name of the job represented by it. There is a directed edge from the vertex Ji to the vertex Jk
when the job Ji is an immediate predecessor of the job Jk .
A task graph, which gives us a general way to describe the application system, is an extended precedence graph. As in a precedence graph, the vertices in a task graph represent jobs; in a figure they are typically drawn as circles and squares. (Here we ignore the difference between the types of jobs represented by them; the need to differentiate them arises only later.) For simplicity, we show only the job attributes that are of interest to us. The numbers in the brackets above each job give its feasible interval. The edges in the graph represent dependencies among jobs. If all the edges are precedence edges, representing precedence constraints, then the graph is a precedence graph.

For example, consider a system described by a task graph containing two periodic tasks. The task whose jobs are represented by the vertices in the top row has phase 0, period 2, and relative deadline 7. The jobs in it are independent; there are no edges to or from these jobs. In other words, the jobs released in later periods are ready for execution as soon as they are released, even though some job released earlier is not yet complete. This is the usual assumption about periodic tasks. The vertices in the second row represent jobs in a periodic task with phase 2, period 3, and relative deadline 3. The jobs in it are dependent: the first job is the immediate predecessor of the second job, the second job is the immediate predecessor of the third job, and so on. The precedence graph of (the jobs in) this task is a chain. A subgraph being a chain indicates that for every pair of jobs Ji and Jk in the subgraph, either Ji < Jk or Jk < Ji; hence the jobs must be executed in serial order.
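The precedence relation "<" can be represented in code as the transitive closure of the immediate-predecessor edges. The sketch below (Python; the job names and the four-job chain length are hypothetical, chosen to mirror the dependent task just described) builds such a chain and then answers questions such as whether one job precedes another or whether two jobs are independent.

```python
# Minimal sketch: a precedence graph stored as immediate-predecessor edges, with the
# full partial order "<" derived by transitive closure.

from itertools import product

jobs = ["J1", "J2", "J3", "J4"]                          # hypothetical job names
immediate = {("J1", "J2"), ("J2", "J3"), ("J3", "J4")}   # edges Ji -> Jk (a chain)

def transitive_closure(edges):
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(closure, repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

precedes = transitive_closure(immediate)

def independent(ji, jk):
    """Two jobs are independent when neither precedes the other."""
    return (ji, jk) not in precedes and (jk, ji) not in precedes

print(("J1", "J4") in precedes)   # True: J1 < J4 through the chain
print(independent("J2", "J4"))    # False: J2 < J4, so they must run in serial order
```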

Q2B. Differentiate between on-line and off-line scheduling.

Ans. OFF-LINE VERSUS ON-LINE SCHEDULING A clock-driven scheduler typically


makes use of a pre-computed schedule of all hard real-time jobs. This schedule is computed
off-line before the system begins to execute, and the computation is based on the knowledge of
the release times and processor-time/resource requirements of all the jobs for all times. When
the operation mode of the system changes, the new schedule specifying when each job in the
new mode executes is also pre-computed and stored for use. In this case, we say that
scheduling is (done) off-line, and the pre-computed schedules are off-line schedules.

The disadvantage of off-line scheduling is inflexibility. This approach is possible only when
the system is deterministic, meaning that the system provides some fixed set(s) of functions
and that the release times and processor-time/resource demands of all its jobs are known and do
not vary or vary only slightly. For a deterministic system, however, off-line scheduling has
several advantages, the deterministic timing behavior of the resultant system being one of
them. Because the computation of the schedules is done off-line, the complexity of the scheduling algorithm(s) used for this purpose is not important.

On-Line Scheduling. We say that scheduling is done on-line, or that we use an on-line scheduling algorithm, if the scheduler makes each scheduling decision without knowledge about the jobs that will be released in the future; the parameters of each job become known to the on-line scheduler only after the job is released. The priority-driven algorithms are on-line algorithms. In some systems, the admission of each new task depends on the outcome of an acceptance test that is based on the parameters of the new task and of the tasks admitted earlier; such an acceptance test is performed on-line. Clearly, on-line scheduling is the only option in a system whose future workload is unpredictable. An on-line scheduler can accommodate dynamic variations in user demands and resource availability. The price of this flexibility and adaptability is a reduced ability of the scheduler to make the best use of system resources. Without prior knowledge about future jobs, the scheduler cannot make optimal scheduling decisions, while a clairvoyant scheduler that knows about all future jobs can. When there is only one processor and the jobs are independent and preemptable, optimal on-line algorithms do exist; the EDF algorithm is one of them.

Unit-3

Q3 A). With reference to the cyclic scheduler, explain:

i). Frames and Major Cycles


A restriction imposed by this structure is that scheduling decisions are made periodically, rather than at
arbitrary times. The scheduling decision times partition the time line into intervals called frames. Every
frame has length f ; f is the frame size. Because scheduling decisions are made only at the beginning of
every frame, there is no preemption within each frame. The phase of each periodic task is a nonnegative
integer multiple of the frame size. In other words, the first job of every task is released at the beginning
of some frame. In addition to choosing which job to execute, we want the scheduler to carry out
monitoring and enforcement actions at the beginning of each frame. In particular, we want the scheduler
to check whether every job scheduled in the frame has indeed been released and is ready for execution.
We also want the scheduler to check whether there is any overrun and take the necessary error handling
action whenever it finds any erroneous condition. These design objectives make some choices of frame
size more desirable than the others.

Frame Size Constraints

Ideally, we want the frames to be sufficiently long so that every job can start and complete its
execution within a frame. In this way, no job will be preempted. We can meet this objective if
we make the frame size f larger than the execution time ei of every task Ti

f >= max(ei) for 1 <= i <= n.
To keep the length of the cyclic schedule as short as possible, the frame size f should be chosen so that it divides H, the length of the hyperperiod of the system. This condition is met when f divides the period pi of at least one task Ti, that is, pi mod f = 0 for at least one i. When this condition is met, there is an integer number of frames in each hyperperiod. We let F denote this number and call a hyperperiod that begins at the beginning of the (kF + 1)st frame, for any k = 0, 1, . . ., a major cycle.

The third constraint is that the frame size must satisfy 2f - gcd(pi, f) <= Di for every task Ti; this ensures that there is at least one full frame between the release time and deadline of every job.

Job Slices
Sometimes, the given parameters of a task system cannot meet all three frame-size constraints simultaneously. An example is the system T = {(4, 1), (5, 2, 7), (20, 5)}: to satisfy f >= max(ei) we must have f >= 5, but to satisfy 2f - gcd(pi, f) <= Di we must have f <= 4. In this situation we are forced to partition each job in a task that has a large execution time into slices (i.e., subjobs) with smaller execution times. (When the job is a message transmission, we divide the message into several segments. When the job is a computation, we partition the program into procedures, each of which is to be executed nonpreemptively.)
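The three frame-size constraints can be checked mechanically. The following Python sketch (the function name is chosen here for illustration) enumerates candidate frame sizes for a task set given as (period, execution time, relative deadline) tuples. For T = {(4, 1), (5, 2, 7), (20, 5)} it finds no feasible frame size, while after one possible slicing of T3 = (20, 5) into subjobs of 1, 3, and 1 units, f = 4 becomes feasible.

```python
# Minimal sketch: enumerate frame sizes satisfying the three constraints stated above.
# Tasks are (period, execution time[, relative deadline]); D = p where no deadline is listed.

from math import gcd, lcm

def feasible_frame_sizes(tasks):
    tasks = [(p, e, d if d is not None else p)
             for (p, e, d) in ((t + (None,))[:3] for t in tasks)]
    H = lcm(*(int(p) for p, _, _ in tasks))
    max_e = max(e for _, e, _ in tasks)
    sizes = []
    for f in range(1, H + 1):
        c1 = f >= max_e                                             # every job fits in one frame
        c2 = any(p % f == 0 for p, _, _ in tasks)                   # f divides at least one period
        c3 = all(2 * f - gcd(int(p), f) <= d for p, _, d in tasks)  # deadline constraint
        if c1 and c2 and c3:
            sizes.append(f)
    return sizes

print(feasible_frame_sizes([(4, 1), (5, 2, 7), (20, 5)]))
# -> []: no feasible frame size, so job slicing is needed
print(feasible_frame_sizes([(4, 1), (5, 2, 7), (20, 1), (20, 3, 20), (20, 1)]))
# -> [4]: with T3 sliced into subjobs of 1, 3, and 1 units, f = 4 becomes feasible
```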

Q3 B.) Explain the Rate-Monotonic (RM) Algorithm.
Ans. A well-known fixed-priority algorithm is the rate-monotonic algorithm. This algorithm assigns priorities to tasks based on their periods: the shorter the period, the higher the priority. The rate (of job releases) of a task is the inverse of its period; hence, the higher its rate, the higher its priority. We refer to this algorithm as the RM algorithm for short, and to a schedule produced by the algorithm as an RM schedule.

For example, consider a system containing three tasks: T1 = (4, 1), T2 = (5, 2), and T3 = (20, 5). The priority of T1 is the highest because its rate is the highest (or equivalently, its period is the shortest). Each job in this task is placed at the head of the priority queue and is executed as soon as it is released.

The resulting RM schedule over one hyperperiod (0 to 20) is: T1 (0-1), T2 (1-3), T3 (3-4), T1 (4-5), T2 (5-7), T3 (7-8), T1 (8-9), T3 (9-10), T2 (10-12), T1 (12-13), T3 (13-15), T2 (15-16), T1 (16-17), T2 (17-18), idle (18-20).
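This RM schedule can be generated by a short simulation. The sketch below (Python, unit time steps, illustrative only) releases jobs at multiples of each period and always runs the highest-priority ready task; its output reproduces the timeline listed above.

```python
# Minimal sketch: a unit-time-step simulation of rate-monotonic scheduling for the
# three tasks above over one hyperperiod (lcm(4, 5, 20) = 20).

from math import lcm

tasks = {"T1": (4, 1), "T2": (5, 2), "T3": (20, 5)}       # (period, execution time)
order = sorted(tasks, key=lambda name: tasks[name][0])     # RM: shorter period = higher priority
H = lcm(*(p for p, _ in tasks.values()))

remaining = {name: 0 for name in tasks}
timeline = []
for t in range(H):
    for name, (p, e) in tasks.items():
        if t % p == 0:                 # a new job of this task is released
            remaining[name] += e
    ready = [name for name in order if remaining[name] > 0]
    if ready:
        current = ready[0]             # highest-priority ready task executes
        remaining[current] -= 1
        timeline.append(current)
    else:
        timeline.append("idle")

print(timeline)
# ['T1', 'T2', 'T2', 'T3', 'T1', 'T2', 'T2', 'T3', 'T1', 'T3', ...]
```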

OR

Q3.A). Explain the optimality of the RM and DM algorithms.


OPTIMALITY OF THE RM AND DM ALGORITHMS

In fixed-priority scheduling, we index the tasks in decreasing order of their priorities except where stated otherwise; in other words, the task Ti has a higher priority than the task Tk if i < k. By indexing the tasks in this manner, our discussion implicitly takes the scheduling algorithm into consideration. We sometimes refer to the priority of a task Ti as πi. The πi's are positive integers 1, 2, . . . , n, with 1 being the highest priority and n the lowest. The total utilization U is the sum of the utilizations ui = ei/pi of all the tasks, and we assume here that the tasks have distinct priorities. Because they assign fixed priorities to tasks, fixed-priority algorithms cannot be optimal: such an algorithm may fail to schedule some systems for which feasible schedules exist. To demonstrate this fact, consider a system consisting of two tasks: T1 = (2, 1) and T2 = (5, 2.5).

Total utilization: U = 1/2 + 2.5/5 = 0.5 + 0.5 = 1.

The tasks are feasible. J1,1 and J1,2 can complete in time only if they have a higher priority than J2,1; in other words, in the time interval (0, 4], T1 must have a higher priority than T2. However, at time 4 when J1,3 is released, J2,1 can complete in time only if T2 (i.e., J2,1) has a higher priority than T1 (i.e., J1,3). This change in the relative priorities of the tasks is not allowed by any fixed-priority algorithm, whereas a dynamic-priority algorithm such as EDF can schedule the system.
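A small simulation makes the argument concrete. The sketch below (Python; function and variable names are illustrative) tries both fixed-priority assignments for T1 = (2, 1) and T2 = (5, 2.5) in steps of 0.5 time units and reports the first deadline miss in each case; a dynamic-priority schedule such as EDF, by contrast, meets every deadline for this system.

```python
# Minimal sketch: both fixed-priority orders miss a deadline for T1 = (2, 1), T2 = (5, 2.5).

STEP = 0.5
H = 10.0                               # one hyperperiod of the two tasks

def simulate(tasks, priority_order):
    """tasks: {name: (period, execution time)}; relative deadlines equal periods."""
    remaining = {n: 0.0 for n in tasks}
    deadline = {n: None for n in tasks}
    t = 0.0
    while t < H:
        for n, (p, e) in tasks.items():
            if (t / p).is_integer():   # a new job of task n is released at t
                remaining[n] += e
                deadline[n] = t + p
        ready = [n for n in priority_order if remaining[n] > 0]
        if ready:
            remaining[ready[0]] -= STEP
        t += STEP
        for n in tasks:                # check for a job still unfinished at its deadline
            if remaining[n] > 1e-9 and deadline[n] is not None and t >= deadline[n]:
                return f"{n} misses its deadline at t = {deadline[n]}"
    return "no deadline missed"

tasks = {"T1": (2, 1), "T2": (5, 2.5)}
print(simulate(tasks, ["T1", "T2"]))   # T2 misses its deadline at t = 5.0
print(simulate(tasks, ["T2", "T1"]))   # T1 misses its deadline at t = 2.0
```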

While the RM algorithm is not optimal for tasks with arbitrary periods, it is optimal in the special case
when the periodic tasks in the system are simply periodic and the deadlines of the tasks are no less than
their respective periods. A system of periodic tasks is simply periodic if, for every pair of tasks Ti and Tk in the system such that pi < pk, pk is an integer multiple of pi.

A system of simply periodic, independent, preemptable tasks whose relative deadlines are equal to or
larger than their periods is schedulable on one processor according to the RM algorithm if and only if its

total utilization is equal to or less than 1.

Q3B). Show that the periodic tasks T1 = (10, 2), T2 = (15, 5), and T3 = (25, 9) are schedulable by the rate-monotonic algorithm.

Ans. Step 1: Complete the 4-tuples of the tasks, Ti = <Φi, Pi, ei, Di>:

T1 = (0, 10, 2, 10)

T2 = (0, 15, 5, 15)

T3 = (0, 25, 9, 25)

The RM schedule over the interval 0 to 25 is: T1 (0-2), T2 (2-7), T3 (7-10), T1 (10-12), T3 (12-15), T2 (15-20), T1 (20-22), T3 (22-25).

Priority order: T1 > T2 > T3 (T1 has the shortest period).

At time 0, T1, T2, and T3 are all ready; T1 has the highest priority, so it executes until time 2.

At time 2, T2 and T3 are ready; T2 has the higher priority, so it executes until time 7.

At time 7, only T3 is ready; it executes until time 10.

At time 10, T1 and T3 are ready; T1 has the higher priority, so it executes until time 12.

At time 12, only T3 is ready; it executes until time 15.

At time 15, T2 and T3 are ready; T2 has the higher priority, so it executes until time 20.

At time 20, T1 and T3 are ready; T1 has the higher priority, so it executes until time 22.

At time 22, only T3 is ready; it executes until time 25.

Since all tasks are in phase, time 0 is a critical instant, and the first job of every task meets its deadline (T1 at 2 <= 10, T2 at 7 <= 15, T3 at 25 <= 25). Hence the task set is schedulable by the rate-monotonic algorithm.
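The same conclusion can be double-checked with the standard time-demand (response-time) analysis for fixed-priority tasks: the worst-case response time Ri of task Ti is the smallest fixed point of Ri = ei + sum over higher-priority tasks Tj of ceil(Ri/pj) * ej. This analysis is not part of the original answer; the sketch below is an illustrative Python implementation.

```python
# Minimal sketch: fixed-point response-time analysis for the task set under RM priorities.

from math import ceil

tasks = [(10, 2), (15, 5), (25, 9)]        # (period, execution time), in RM priority order

def response_time(i, tasks):
    """Smallest fixed point of R = e_i + sum(ceil(R / p_j) * e_j) over higher-priority tasks."""
    p_i, e_i = tasks[i]
    r = e_i
    while True:
        demand = e_i + sum(ceil(r / p_j) * e_j for p_j, e_j in tasks[:i])
        if demand == r or demand > p_i:    # converged, or already past the deadline
            return demand
        r = demand

for i, (p, _) in enumerate(tasks):
    r = response_time(i, tasks)
    print(f"T{i+1}: worst-case response time {r}, deadline {p}, schedulable: {r <= p}")
# T1: 2 <= 10, T2: 7 <= 15, T3: 25 <= 25 -> the whole task set is RM-schedulable
```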



Unit-4

Q4 A) Explain deferrable servers and their operation.

Ans. Periodic tasks can be scheduled by the scheduling techniques discussed above, but aperiodic tasks cannot be handled the same way, so we need special approaches for running aperiodic jobs, such as DEFERRABLE SERVERS.

A deferrable server is the simplest of the bandwidth-preserving servers. Like a poller, the execution budget of a deferrable server with period ps and execution budget es is replenished periodically with period ps. Unlike a poller, however, when a deferrable server finds no aperiodic job ready for execution, it preserves its budget.
Operations of Deferrable Servers

Specifically, the consumption and replenishment rules that define a deferrable server ( ps , es ) are as
follows.
Consumption Rule:- The execution budget of the server is consumed at the rate of one per unit time
whenever the server executes.

Replenishment Rule:- The execution budget of the server is set to es at the time instants kps, for k = 0, 1, 2, . . . . The server is not allowed to accumulate its budget from period to period; stated another way, any budget held by the server immediately before each replenishment time is lost.

Example. A deferrable server TDS = (3, 1) has the highest priority. The periodic tasks T1 = (2.0, 3.5, 1.5) and T2 = (6.5, 0.5) and the server are scheduled rate-monotonically. Suppose that an aperiodic job A with execution time 1.7 arrives at time 2.8.
At time 0, the server is given 1 unit of budget. The budget stays at 1 until time 2.8. When A arrives, the deferrable server
executes the job. Its budget decreases as it executes.

Immediately before the replenishment time 3.0, its budget is equal to 0.8. This 0.8 unit is lost at time
3.0, but the server acquires a new unit of budget. Hence, the server continues to execute.
At time 4.0, its budget is exhausted. The server is suspended, and the aperiodic job A waits.

At time 6.0, its budget is replenished, and the server resumes executing A.

At time 6.5, job A completes. The server still has 0.5 unit of budget. Since no aperiodic job waits in the
queue, the server suspends itself holding this budget.
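The consumption and replenishment rules can be captured in a few lines. The following Python class is a minimal illustrative sketch of the budget bookkeeping only (the class and method names are not from the text); deciding when the server actually gets the processor is left to the fixed-priority scheduler.

```python
# Minimal sketch of the two deferrable-server rules above: budget bookkeeping only.

class DeferrableServer:
    def __init__(self, period, budget):
        self.period = period          # p_s
        self.capacity = budget        # e_s
        self.budget = budget

    def replenish_if_due(self, t):
        # Replenishment rule: at every multiple of p_s the budget is set back to e_s;
        # any budget held just before this instant is simply lost (never accumulated).
        if t % self.period == 0:
            self.budget = self.capacity

    def execute(self, dt):
        # Consumption rule: the budget is consumed at rate one per unit time only
        # while the server actually executes an aperiodic job.
        run = min(dt, self.budget)
        self.budget -= run
        return run                    # amount of aperiodic work actually served

server = DeferrableServer(period=3, budget=1)
# Because the unused budget is preserved between replenishments, an aperiodic job
# arriving at, say, t = 2.8 can be served immediately, unlike with a poller.
```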

Q4B). Explain the Slack Stealing Algorithm.

Ans. The Slack Stealing Algorithm

A natural way to improve the response times of aperiodic jobs is to execute them ahead of the periodic jobs whenever possible. This approach is called slack stealing. For the slack-stealing scheme described below to work, every periodic job slice must be scheduled in a frame that ends no later than its deadline. Let the total amount of time allocated to all the slices scheduled in frame k be xk. The slack (time) available in the frame is equal to f - xk at the beginning of the frame. If the aperiodic job queue is nonempty at this time, the cyclic executive can let aperiodic jobs execute for this amount of time without causing any job to miss its deadline.

When an aperiodic job executes ahead of slices of periodic tasks, it consumes the slack in the
frame. After y units of slack time are used by aperiodic jobs, the available slack is reduced to f
− xk − y. The cyclic executive can let aperiodic jobs execute in frame k as long as there is slack,
that is, the available slack f − xk − y in the frame is larger than 0.

When the cyclic executive finds the aperiodic job queue empty, it lets the periodic task server
execute the next slice in the current block. The amount of slack remains the same during this
execution. As long as there is slack, the cyclic executive returns to examine the aperiodic job
queue after each slice completes.
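The slack accounting described above is easy to sketch. The Python function below (names chosen for illustration) computes the slack f - xk available at the start of a frame and lets pending aperiodic jobs consume it before the periodic slices run, as in the example that follows.

```python
# Minimal sketch of per-frame slack accounting: slack starts at f - x_k and shrinks
# as aperiodic jobs borrow time ahead of the periodic slices.

def run_frame(f, x_k, aperiodic_queue):
    """f: frame size, x_k: time allocated to periodic slices in this frame,
    aperiodic_queue: remaining execution times of pending aperiodic jobs."""
    slack = f - x_k                      # slack available at the start of the frame
    served = 0.0
    while aperiodic_queue and slack > 0:
        # let the job at the head of the queue steal slack ahead of the periodic slices
        used = min(aperiodic_queue[0], slack)
        aperiodic_queue[0] -= used
        slack -= used
        served += used
        if aperiodic_queue[0] == 0:
            aperiodic_queue.pop(0)
    # the periodic slices (x_k units) then execute; they always fit because only the
    # slack, never their allocated time, was given to the aperiodic jobs
    return served

print(run_frame(f=4, x_k=3, aperiodic_queue=[1.5]))   # 1.0 unit of A1 is served, as below
```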

As an example, consider the first major cycle in the cyclic schedule of the periodic tasks, and three aperiodic jobs A1, A2, and A3. Their release times are immediately before 4, 9.5, and 10.5, and their execution times are 1.5, 0.5, and 2, respectively. First suppose the cyclic executive schedules aperiodic jobs only after the slices of periodic tasks in each frame are completed. Then the execution of A1 starts at time 7; it does not complete at time 8 when the frame ends and is, therefore, preempted. It is resumed at time 10 after both slices in the next frame complete, so its response time is 6.5. A2 executes after A1 completes and has a response time equal to 1.5. Similarly, A3 follows A2, is preempted once, and completes at the end of the following frame; the response time of A3 is 5.5. The average response time of these three jobs is 4.5. Now suppose instead that the cyclic executive does slack stealing. At time 4, the cyclic executive finds A1 in the aperiodic job queue and there is 1 unit of slack, so it lets A1 execute. At time 5, there is no more slack; it preempts A1 and lets the periodic task slices in the frame execute.

OR

Q4A). Explain any two server-based priority scheduling algorithms.

Ans. POLLING SERVERS. Polling is a commonly used way to execute aperiodic jobs. In our terminology, a poller or polling server (ps, es) is a periodic task: ps is its polling period, and es is its execution time. The poller is ready for execution periodically at integer multiples of ps and is scheduled together with the periodic tasks in the system according to the given priority-driven algorithm. When it executes, it examines the aperiodic job queue. If the queue is nonempty, the poller executes the job at the head of the queue. The poller suspends its execution or is suspended by the scheduler either when it has executed for es units of time in the period or when the aperiodic job queue becomes empty, whichever occurs sooner. It is ready for execution again at the beginning of the next polling period. On the other hand, if at the beginning of a polling period the poller finds the aperiodic job queue empty, it suspends immediately; it will not be ready for execution and able to examine the queue again until the next polling period.

A task that behaves much like a periodic task and is created for the purpose of executing aperiodic jobs is called a periodic server. A periodic server (ps, es) is defined partially by its period ps and execution time es. (Roughly speaking, the server never executes for more than es units of time in any time interval of length ps; however, this statement is not true for some kinds of servers.) The parameter es is called the execution budget (or simply budget) of the server. The ratio us = es/ps is the size of the server. A poller (ps, es) is a kind of periodic server. At the beginning of each period, the budget of the poller is set to es. We say that its budget is replenished (by es units) and call a time instant when the server budget is replenished a replenishment time.

For example, consider the fixed-priority periodic tasks T1 = (3, 1) and T2 = (10, 4). The poller has period 2.5 and execution budget 0.5; it is treated by the scheduler as the periodic task (2.5, 0.5).

SPORADIC SERVERS

A deferrable server may delay lower-priority tasks for more time than a periodic task with the same period and execution time. Sporadic servers are designed to improve on the deferrable server in this respect.
The consumption and replenishment rules of sporadic server algorithms ensure that each sporadic server
with period ps and budget es never demands more processor time than the periodic task ( ps , es ) in any
time interval. Consequently, we can treat the sporadic server exactly like the periodic task ( ps , es )
when we check for the schedulability of the system. A system of periodic tasks containing a sporadic
server may be schedulable while the same system containing a deferrable server with the same
parameters is not.

Simple Sporadic Server. In its simplest form, a sporadic server is governed by the following consumption and replenishment rules; we call such a server a simple sporadic server. A way to implement the server is to have the scheduler monitor the busy intervals of the higher-priority subsystem TH and maintain the information BEGIN and END (the beginning and end of the latest such busy interval).

Consumption Rules of Simple Fixed-Priority Sporadic Server: At any time t after tr, the server's execution budget is consumed at the rate of 1 per unit time until the budget is exhausted, whenever either one of the following two conditions is true. When these conditions are not true, the server holds its budget.

C1 The server is executing.

C2 The server has executed since tr and END < t .

Replenishment Rules of Simple Fixed-Priority Sporadic Server:

R1. Initially, when the system begins execution, and each time the budget is replenished, the execution budget is set to es and tr is set to the current time.

R2. At time tf (the first instant after tr at which the server begins to execute), if END = tf, then te = max(tr, BEGIN); if END < tf, then te = tf. The next replenishment time is set at te + ps.

R3 The next replenishment occurs at the next replenishment time, except under the following
conditions. Under these conditions, replenishment is done at times stated below.

If the next replenishment time te + ps is earlier than tf, the budget is replenished as soon as it is exhausted.

If the system T becomes idle before the next replenishment time te + ps and becomes busy again at tb, the budget is replenished at min(te + ps, tb).

Rules C1 and R1 are self-explanatory. Equivalently, rule C2 says that the server consumes its budget at any time t if it has executed since tr but at t it is suspended and the higher-priority subsystem TH is idle. Rule R2 says that the next replenishment time is ps units after tr (i.e., the effective replenishment time te is tr) only if the higher-priority subsystem TH has been busy throughout the interval (tr, tf). Otherwise, te is later; it is the latest instant at which an equal or lower-priority task executes (or the system is idle) in (tr, tf).

Q4B). Explain the schedulability of fixed-priority (RM and DM) scheduling.

Ans. A well-known fixed-priority algorithm is the rate-monotonic algorithm [LiLa]. This algorithm assigns priorities to tasks based on their periods: the shorter the period, the higher the priority. The rate (of job releases) of a task is the inverse of its period; hence, the higher its rate, the higher its priority. We refer to this algorithm as the RM algorithm for short, and to a schedule produced by the algorithm as an RM schedule. Consider the RM schedule of the system considered earlier, which contains three tasks: T1 = (4, 1), T2 = (5, 2), and T3 = (20, 5). The priority of T1 is the highest because its rate is the highest (or equivalently, its period is the shortest). Each job in this task is placed at the head of the priority queue and is executed as soon as the job is released. T2 has the next highest priority; its jobs execute in the background of T1. For this reason, the execution of the first job in T2 is delayed until the first job in T1 completes, and the fourth job in T2 is preempted at time 16 when the fifth job in T1 is released. Similarly, T3 executes in the background of T1 and T2; the jobs in T3 execute only when there is no job of the higher-priority tasks ready for execution. Since there is always at least one job ready for execution until time 18, the processor never idles until that time.

Another example often used for RM scheduling is the pair of in-phase tasks T1 = (2, 0.9) and T2 = (5, 2.3). A second well-known fixed-priority algorithm is the deadline-monotonic algorithm, called the DM algorithm hereafter. This algorithm assigns priorities to tasks according to their relative deadlines: the shorter the relative deadline, the higher the priority. As an example, consider a system of three tasks T1 = (50, 50, 25, 100), T2 = (0, 62.5, 10, 20), and T3 = (0, 125, 25, 50). Their utilizations are 0.5, 0.16, and 0.2, respectively, and the total utilization is 0.86. According to the DM algorithm, T2 has the highest priority because its relative deadline, 20, is the shortest among the tasks, while T1, with a relative deadline of 100, has the lowest priority. Under the resulting DM schedule, all the tasks meet their deadlines.

Clearly, when the relative deadline of every task is proportional to its period, the RM and DM algorithms are identical. When the relative deadlines are arbitrary, the DM algorithm performs better in the sense that it can sometimes produce a feasible schedule when the RM algorithm fails, while the RM algorithm always fails when the DM algorithm fails. The example above illustrates this fact: the three tasks that are feasible when scheduled deadline-monotonically are not all schedulable rate-monotonically. According to the RM algorithm, T1 has the highest priority and T3 has the lowest; because the priorities of the tasks with short relative deadlines are then too low, these tasks cannot meet all their deadlines.

UNIT-5

Q5. A) Explain the effects of resource contention:

i) Priority inversion
ii) Timing anomalies
iii) Deadlock

Ans. Priority Inversion, Timing Anomalies, and Deadlock

Priority inversion can occur when the execution of some jobs or portions of jobs is nonpreemptable. Resource contention among jobs can also cause priority inversion: because resources are allocated to jobs on a nonpreemptive basis, a higher-priority job can be blocked by a lower-priority job if the jobs conflict, even when the execution of both jobs is preemptable. In the standard example, the lowest-priority job J3 first blocks J2 and then blocks J1 while it holds the resource R; as a result, priority inversion occurs in the intervals (4, 6] and (8, 9].

When priority inversion occurs, timing anomalies invariably follow. Suppose the three jobs are the same as above, except that the critical section in J3 is [R; 2.5], i.e., the execution time of the critical section in J3 is shortened by 1.5. If we had not been warned about timing anomalies, our intuition might tell us that, as a consequence of this reduction in J3's execution time, all jobs should complete sooner. Indeed, this reduction does allow jobs J2 and J3 to complete sooner. Unfortunately, rather than meeting its deadline at 14, J1 misses its deadline because it does not complete until 14.5.

Deadlock can occur when two jobs both require resources X and Y: one of them holds X and requests Y, while the other holds Y and requests X. This circular wait of the jobs for each other is a deadlock, and the conditions that allow it to occur are well known.

Q5B) Explain the rules of the Basic Priority-Ceiling Protocol.
Ans. The priority-ceiling protocol extends the priority-inheritance protocol to prevent deadlocks and to
further reduce the blocking time. This protocol makes two key assumptions:

The assigned priorities of all jobs are fixed.

The resources required by all jobs are known a priori before the execution of any job begins.

The protocol makes use of a parameter, called the priority ceiling, of every resource. The priority ceiling of any resource Ri is the highest priority of all the jobs that require Ri and is denoted by Π(Ri). At any time t, the current priority ceiling of the system is equal to the highest priority ceiling of the resources that are in use at the time, if some resources are in use. If all the resources are free at the time, the current ceiling is the lowest possible level, lower than the priority of any job.

Rules of Basic Priority-Ceiling Protocol

Scheduling Rule:

At its release time t , the current priority π(t ) of every job J is equal to its assigned priority. The job remains at this priority
except under the condition stated in rule 3.

Every ready job J is scheduled preemptively and in a priority-driven manner at its current priority π(t ).

Allocation Rule: Whenever a job J requests a resource R at time t, one of the following occurs:
(a) R is held by another job: J's request fails and J becomes blocked.
(b) R is free:
(i) If J's priority π(t) is higher than the current priority ceiling of the system, R is allocated to J.
(ii) If J's priority π(t) is not higher than the current priority ceiling, R is allocated to J only if J is the job holding the resource(s) whose priority ceiling is equal to the current ceiling; otherwise, J's request is denied and J becomes blocked.

Priority-Inheritance Rule: When J becomes blocked, the job Jl which blocks J inherits the current priority π(t
) of J . Jl executes at its inherited priority until the time when it releases every resource whose priority ceiling
is equal to or higher than π(t ); at that time, the priority of Jl returns to its priority πl (t ) at the time t when it
was granted the resource(s).
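The allocation rule can be expressed as a small decision function. The sketch below (Python; all names are illustrative, and it uses the convention from the earlier discussion that a smaller number means a higher priority) decides whether a requested free resource may be granted under the basic priority-ceiling protocol.

```python
# Minimal sketch of the priority-ceiling allocation rule. `holders` maps each in-use
# resource to the job holding it, and `ceiling` maps each resource to its priority
# ceiling (smaller number = higher priority).

def current_ceiling(holders, ceiling, lowest=float("inf")):
    """Highest priority ceiling among all resources currently in use."""
    return min((ceiling[r] for r in holders), default=lowest)

def can_allocate(job, job_priority, resource, holders, ceiling):
    if resource in holders:                       # (a) R is held by another job
        return False
    sys_ceiling = current_ceiling(holders, ceiling)
    if job_priority < sys_ceiling:                # (b i) priority higher than the ceiling
        return True
    # (b ii) allowed only if this job itself holds the resource(s) whose ceiling
    # equals the current system ceiling
    return any(holders[r] == job and ceiling[r] == sys_ceiling for r in holders)

ceiling = {"R1": 1, "R2": 2}                      # priority ceilings of the resources
holders = {"R1": "J1"}                            # J1 currently holds R1
print(can_allocate("J2", 2, "R2", holders, ceiling))   # False: J2 is blocked
print(can_allocate("J1", 1, "R2", holders, ceiling))   # True: J1 holds the ceiling resource
```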

Q5C). Explain briefly the Convex-Ceiling Protocol.


Ans. Convex-Ceiling Protocol

The resource access-control protocols discussed so far do not ensure serializability. For example, both the NPCS and PC (priority- and preemption-ceiling) protocols allow a higher-priority job Jh to read and write a data object X between two disjoint critical sections of a lower-priority job Jl during which Jl also reads and writes X. The value of X thus produced may not be the same as the value produced by either of the two possible serial sequences (i.e., all the reads and writes of Jl either precede or follow those of Jh).

Definition and Capability. As with the priority-ceiling protocol, at any time t when the scheduler receives a request to access an object R for the first time from any job J, it computes the system ceiling, which is equal to the highest of the priority-ceiling functions of all the jobs in the system. The convex-ceiling protocol is defined by the following rules.

Rules of Convex-Ceiling Protocol

Scheduling Rule: At any time, jobs that are not suspended are scheduled on the processor in a
preemptive, priority-driven manner. Upon its release, the current priority of every job Ji is its
assigned priority πi . It executes at this priority except when the inheritance rule is applied
Allocation Rule: When a job Ji requests access to a data object R for the first time:
1. If the priority of Ji is higher than the system ceiling, Ji is allowed to access R.
2. If the priority of Ji is not higher than the system ceiling, Ji is allowed to access R only if the system ceiling is due to Ji itself (i.e., equal to Ji's own priority-ceiling function); otherwise Ji is suspended.
Priority-Inheritance Rule: When Ji becomes suspended, the job Jl whose priority-ceiling function is equal to
the system ceiling at the time inherits the current priority πi (t ) of Ji

OR
Q5 Write short notes on:
1. Non-preemptive critical sections
Ans. NONPREEMPTIVE CRITICAL SECTIONS

The simplest way to control access to resources is to schedule all critical sections on the processor nonpreemptively. (In other words, when a job requests a resource, it is always allocated the resource, and when a job holds any resource, it executes at a priority higher than the priorities of all jobs.) This protocol is called the Nonpreemptive Critical Section (NPCS) protocol. Because no job is ever preempted when it holds any resource, deadlock can never occur.

The most important advantage of the NPCS protocol is its simplicity, especially when the numbers
of resource units are arbitrary. The protocol does not need any prior knowledge about resource
requirements of jobs. It is simple to implement and can be used in both fixed-priority and dynamic-
priority systems. It is clearly a good protocol when all the critical sections are short and when most
of the jobs conflict with each other. An obvious shortcoming of this protocol is that every job can be
blocked by every lower-priority job that requires some resource even when there is no resource
conflict between them. When the resource requirements of all jobs are known, an improvement is to
let a job holding any resource execute at the highest priority of all jobs requiring the resource.

2. Resource Conflicts and Blocking

Two jobs conflict with one another, or have a resource conflict, if some of the resources they require
are of the same type. The jobs contend for a resource when one job requests a resource that the other
job already has.
When the scheduler does not grant ηi units of resource Ri to the job requesting them, the lock request
L(Ri , ηi ) of the job fails (or is denied). When its lock request fails, the job is blocked and loses the
processor. A blocked job is removed from the ready job queue. It stays blocked until the scheduler
grants it ηi units of Ri for which the job is waiting. At that time, the job becomes unblocked, is moved back to the ready job queue, and executes when it is scheduled.

3. Preemption-Ceiling Protocol
We can avoid paying the time or storage overhead of the dynamic-priority-ceiling protocol for a class of dynamic-priority systems which includes deadline-driven systems; we call systems in this class fixed preemption-level systems. For a fixed preemption-level system there is a simpler approach to controlling resource accesses. The approach is based on the observation that the potentials for resource contention in such a dynamic-priority system do not change with time, just as in fixed-priority systems, and hence can be analyzed statically. The observation is supported by the following facts:

The fact that a job Jh has a higher priority than another job Jl and they both require some resource does not
imply that Jl can directly block Jh . This blocking can occur only when it is possible for Jh to preempt Jl .

For some dynamic priority assignments, it is possible to determine a priori the possibility that jobs in each periodic task will preempt the jobs in other periodic tasks.
Because of fact 1, when determining whether a free resource can be granted to a job, it is not necessary to be
concerned with the resource requirements of all higher-priority jobs; only those that can preempt the job. Fact
2 means that for some dynamic priority systems, the possibility that each periodic task will preempt every
other periodic task does not change with time, just as in fixed-priority systems. In a deadline-driven system,
no job in a periodic task with a smaller relative deadline is ever preempted by jobs in periodic tasks with
identical or larger relative deadlines, despite the fact that some jobs in the latter may have higher priorities.

4. Extended Priority Exchange

Here we give two different definitions of a protocol that is simpler than the priority-ceiling protocol but has the same worst-case performance as the priority-ceiling protocol. The different definitions arise from two different motivations, one of them being to provide stack-sharing capability. The rules are as follows.

1. Scheduling Rule: After a job is released, it is blocked from starting execution until its assigned priority is higher than the current ceiling Π̂(t) of the system. At all times, jobs that are not blocked are scheduled on the processor in a priority-driven, preemptive manner according to their assigned priorities.
2. Allocation Rule: Whenever a job requests a resource, it is allocated the resource.
We note that, according to the scheduling rule, when a job begins to execute, all the resources it will ever need during its execution are free. (Otherwise, if one of the resources it will need were not free, the ceiling of the system would be equal to or higher than its priority.) This is why the allocation rule can be as simple as stated above. More importantly, no job is ever blocked once its execution begins. Likewise, when a job J is preempted, all the resources the preempting job will require are free, ensuring that the preempting job can always complete so that J can resume. Consequently, deadlock can never occur.
5. RTS requirements:

Designing real-time systems is a challenging task. Most of the challenge comes from the fact that real-time systems have to interact with real-world entities, and these interactions can get fairly complex. A typical real-time system might be interacting with thousands of such entities at the same time. For example, a telephone switching system routinely handles calls from tens of thousands of subscribers; the system has to connect each call differently, and the exact sequence of events in a call can vary a great deal. The main requirements that arise from these issues include:

 Real-time response
 Recovering from failures
 Working with distributed architectures
 Asynchronous communication
6. Clock-Driven Scheduling
Ans:- As the name implies, when scheduling is clock-driven (also called time-driven), decisions on
what jobs execute at what times are made at specific time instants. These instants are chosen a priori
before the system begins execution. Typically, in a system that uses clock-driven scheduling, all the
parameters of hard real-time jobs are fixed and known. A schedule of the jobs is computed off-line
and is stored for use at run time. The scheduler schedules the jobs according to this schedule at each
scheduling decision time. In this way, scheduling overhead during run-time can be minimized.

As an example, we consider a system that contains four independent periodic tasks. They are T1 = (4,
1), T2 = (5, 1.8), T3 = (20, 1), and T4 = (20, 2). Their utilizations are 0.25, 0.36, 0.05, and 0.1,
respectively, and the total utilization is 0.76. It suffices to construct a static schedule for the first
hyperperiod of the tasks. Since the least common multiple of all periods is 20, the length of each
hyperperiod is 20. The entire schedule consists of replicated segments of length 20. Figure 5–1 shows
such a schedule segment on one processor. We see that T1 starts execution at time 0, 4, 9.8, 13.8, and
so on; T2 starts execution at 2, 8, 12, 18, and so on. All tasks meet their deadlines.

(The timeline referenced here shows one arbitrary static schedule for these four tasks over the hyperperiod from 0 to 20.)
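A clock-driven scheduler is typically implemented as a table-driven cyclic executive. The Python sketch below replays a precomputed (start time, task) table once per hyperperiod; the T1 and T2 start times follow the values quoted above, while the T3 and T4 entries and all function names are illustrative placeholders, not the exact schedule of the original figure.

```python
# Minimal sketch of a table-driven cyclic executive for clock-driven scheduling.

import time

HYPERPERIOD = 20.0
# Precomputed off-line: (start time within the hyperperiod, task to dispatch).
# The T1/T2 entries follow the start times quoted in the text; the T3 and T4
# slots are hypothetical placeholders.
SCHEDULE_TABLE = [
    (0.0, "T1"), (1.0, "T3"), (2.0, "T2"),
    (4.0, "T1"), (5.0, "T4"), (8.0, "T2"),
    (9.8, "T1"), (12.0, "T2"), (13.8, "T1"), (18.0, "T2"),
]

# Stand-ins for the real periodic jobs (e.g., control-law computations).
TASK_BODIES = {"T1": lambda: None, "T2": lambda: None,
               "T3": lambda: None, "T4": lambda: None}

def cyclic_executive():
    """Dispatch jobs at the precomputed decision times, hyperperiod after hyperperiod."""
    cycle_start = time.monotonic()
    while True:                                   # runs forever; invoke from the system's main loop
        for offset, task in SCHEDULE_TABLE:
            wake_at = cycle_start + offset
            time.sleep(max(0.0, wake_at - time.monotonic()))   # wait for the decision time
            TASK_BODIES[task]()                                # execute the scheduled job
        cycle_start += HYPERPERIOD
```

Because all scheduling decisions are fixed in the table, run-time overhead is limited to looking up the next entry and waiting until its start time, which is exactly the advantage of clock-driven scheduling described above.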
