Real Time Systems Notes
PART – A
UNIT – 1
Introduction to Real-Time Systems:
Historical Background, RTS Definition, Classification of Real-Time Systems, Time Constraints,
Classification of Programs.
6 Hours
UNIT - 2
Concepts of Computer Control:
Introduction, Sequence Control, Loop Control, Supervisory Control, Centralized Computer Control,
Distributed System, Human-Computer interface, Benefits of Computer Control Systems.
6 Hours
UNIT- 3
Computer Hardware Requirements for RTS:
Introduction, General Purpose Computer, Single Chip Microcontroller, Specialized Processors,
Process-Related Interfaces, Data Transfer Techniques, Communications, Standard Interface.
6 Hours
UNIT- 4
Languages for Real-Time Applications:
Introduction, Syntax Layout and Readability, Declaration and Initialization of Variables and
Constants, Modularity and Variables, Compilation, Data Types, Control Structures, Exception
Handling, Low-Level Facilities, Coroutines, Interrupts and Device Handling, Concurrency,
Real-Time Support, Overview of Real-Time Languages.
8 Hours
2 https://vtupro.com
PART –B
UNIT -5&6
Operating Systems:
Introduction, Real-Time Multi-Tasking OS, Scheduling Strategies, Priority Structures, Task
Management, Scheduler and Real-Time Clock Interrupt Handlers, Memory Management, Code
Sharing, Resource Control, Task Co-operation and Communication, Mutual Exclusion, Data
Transfer, Liveness, Minimum OS Kernel, Examples.
12 Hours
UNIT-7
Design of RTSs - General Introduction:
Introduction, Specification Documentation, Preliminary Design, Single-Program Approach,
Foreground/Background, Multi-Tasking Approach, Mutual Exclusion, Monitors.
8 Hours
UNIT -8
RTS Development Methodologies:
Introduction, Yourdon Methodology, Requirements Definition for a Drying Oven, Ward and Mellor
Method, Hatley and Pirbhai Method.
6 Hours
Text Books:
1. Real-Time Computer Control - An Introduction, Stuart Bennett, 2nd Edn., Pearson
Education, 2005.
Reference Books:
1. Real-Time Systems Design and Analysis, Phillip A. Laplante, Second Edition, PHI, 2005.
2. Real-Time Systems Development, Rob Williams, Elsevier, 2006.
3. Embedded Systems, Raj Kamal, Tata McGraw-Hill, India, 2005.
UNIT – 1
Introduction to Real-Time Systems
Historical Background, RTS Definition, Classification of Real-Time Systems, Time Constraints,
Classification of Programs.
The origin of the term real-time computing is unclear. It was probably first used either
with Project Whirlwind, a flight simulator developed by IBM for the U.S. Navy in 1947, or with
SAGE, the Semi-Automatic Ground Environment air defense system developed for the U.S. Air
Force in the early 1950s. Modern real-time systems, such as those that control nuclear power
stations, military aircraft weapons systems, or medical monitoring equipment, are complex, yet
they still exhibit characteristics of systems developed from the 1940s through the 1960s. Moreover,
today's real-time systems are a product of the growth of the computer industry and of increasingly
demanding system requirements.
The earliest proposal to use a computer operating in real time as part of a control system
was made in a paper by Brown and Campbell in 1950. It shows a computer in both the feedback
and feed forward loops. The diagram is shown below:
8 https://vtupro.com
In the late 1960s real-time operating systems were developed and various process
FORTRAN compilers made their appearance. The problems and costs involved in attempting
to do everything in one computer led users to retreat to smaller systems, for which the newly
developing minicomputers (DEC PDP-8, PDP-11, Data General Nova, Honeywell 316, etc.)
proved ideally suited.
Real-time systems are those which must produce correct responses within a
definite time limit.
An alternative definition is:
A real-time system reads inputs from the plant and sends control signals to the plant
at times determined by plant operational considerations.
Clock-based tasks are typically referred to as cyclic or periodic tasks, where these terms can
imply either that the task is to run once per time period T (or cycle time T), or that it is to run at
exactly T-unit intervals.
The completion of the operations within the specified time is dependent on the number of
operations to be performed and the speed of the computer.
Synchronization is usually obtained by adding a clock to the computer system - referred to as a
real-time clock - and using signals from this clock to interrupt the operations of the computer at
predetermined fixed time intervals.
For example, in process plant operation the computer may carry out the plant input, plant
output and control tasks in response to the clock interrupt or, if the clock interrupt has been set at a
faster rate than the sampling rate, it may count each interrupt until it is time to run the tasks.
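The tick-counting scheme just described can be sketched as follows. This is an illustrative simulation, not code from the text: the tick and sampling periods, and the names `ClockScheduler` and `run_control_tasks`, are assumptions chosen to match the example (a clock interrupt faster than the sampling rate).

```python
# Sketch of a clock-based task: the real-time clock interrupt fires every
# tick; the handler counts ticks and releases the control tasks once per
# sampling interval.

TICK_MS = 1                       # assumed clock interrupt period
SAMPLE_MS = 10                    # assumed controller sampling interval
TICKS_PER_SAMPLE = SAMPLE_MS // TICK_MS

class ClockScheduler:
    def __init__(self):
        self.tick_count = 0
        self.samples_run = 0

    def on_clock_interrupt(self):
        """Called once per clock tick; runs the tasks every Nth tick."""
        self.tick_count += 1
        if self.tick_count >= TICKS_PER_SAMPLE:
            self.tick_count = 0
            self.run_control_tasks()

    def run_control_tasks(self):
        # plant input, control calculation and plant output would go here
        self.samples_run += 1

sched = ClockScheduler()
for _ in range(100):              # simulate 100 ms of clock interrupts
    sched.on_clock_interrupt()
print(sched.samples_run)          # -> 10 (one task release per 10 ms)
```

The same counter could drive several task groups at different rates by comparing against different tick counts, which is how the multi-rate case mentioned below is usually handled.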
In larger systems the tasks may be subdivided into groups for controlling different parts of
the plant, and these may need to run at different sampling rates. A task or process comprises some
code, its associated data and a control block data structure which the operating system uses to
define and manipulate the task.
1.3.3 INTERACTIVE SYSTEMS:
Interactive systems probably represent the largest class of real-time systems and cover such
systems as automatic bank tellers; reservation systems for hotels, airlines and car rental companies;
computerized tills, etc. The real-time requirement is usually expressed in terms such as 'the average
response time must not exceed ... ‘
For example, an automatic bank teller system might require an average response time not
exceeding 20 seconds. Superficially this type of system seems similar to the event-based system in
that it apparently responds to a signal from the plant (in this case usually a person), but it is different
because it responds at a time determined by the internal state of the computer and without any
reference to the environment. An automatic bank teller does not know that you are about to miss a
train, or that it is raining hard and you are getting wet: its response depends on how busy the
communication lines and central computers are (and, of course, on the state of your account).
Many interactive systems give the impression that they are clock based in that they are
capable of displaying the date and time; they do indeed have a real-time clock which enables them to
keep track of time.
1.4 TIME CONSTRAINTS:
Real-time systems are divided into two categories:
• Hard real-time: systems that must satisfy the deadline on each
and every occasion.
• Soft real-time: systems for which an occasional failure to meet
a deadline does not compromise the correctness of the system.
A typical example of a hard real-time control system is the temperature control loop of the hot-air
blower system described above. In control terms, the temperature loop is a sampled data system.
Design of a suitable control algorithm for this system involves the choice of the sampling interval Ts.
If we assume that a suitable sampling interval is 10 ms, then at 10 ms intervals the input value must
be read, the control calculation carried out, the new output value calculated, and the output value sent
to the heater drive.
As an example of hard time constraints associated with event-based tasks let us assume that
the hot-air blower is being used to dry a component which will be damaged if exposed to
temperatures greater than 50°C for more than 10 seconds. Allowing for the time taken for the air to
travel from the heater to the outlet and the cooling time of the heater element - and for a margin of
safety - the alarm response requirement may be, say, that over temperature should be detected and
the heater switched off within seven seconds of the over temperature occurring. The general form of
this type of constraint is that the computer must respond to the event within some specified maximum
time.
An automatic bank teller provides an example of a system with a soft time constraint. A
typical system is event initiated in that it is started by the customer placing their card in the machine.
The time constraint on the machine responding will be specified in terms of an average response time
of, say, 10 seconds, with the average being measured over a 24 hour period. (Note that if the system
has been carefully specified there will also be a maximum time; say 30 seconds, within which the
system should respond.) The actual response time will vary: if you have used such a system you will
have learned that the response time obtained between 12 and 2 p.m. on a Friday is very different from
that at 10 a.m. on a Sunday.
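The bank teller's soft constraint, as specified above, combines an average over a measurement period with an optional hard cap on any single response. A minimal sketch of such a check, using the text's example figures (10 s average, 30 s maximum); the function name and sample data are invented for illustration:

```python
# Illustrative check of a soft time constraint: the average response time
# over the measurement period must not exceed avg_limit, and no single
# response may exceed max_limit.

def meets_soft_constraint(response_times_s, avg_limit=10.0, max_limit=30.0):
    avg = sum(response_times_s) / len(response_times_s)
    return avg <= avg_limit and max(response_times_s) <= max_limit

quiet_sunday = [3, 4, 5, 6, 4]        # hypothetical 10 a.m. Sunday samples
busy_friday = [8, 12, 31, 9, 10]      # hypothetical Friday lunchtime samples

print(meets_soft_constraint(quiet_sunday))   # -> True
print(meets_soft_constraint(busy_friday))    # -> False (31 s breaches the cap,
                                             #    and the average exceeds 10 s)
```

In practice the averaging window (the text suggests 24 hours) matters as much as the limits themselves, since load varies strongly with time of day.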
A hard time constraint obviously represents a much more severe constraint on the
performance of the system than a soft time constraint and such systems present a difficult challenge
both to hardware and to software designers. Most real-time systems contain a mixture of activities
that can be classified as clock based, event based, and interactive with both hard and soft time
constraints (they will also contain activities which are not real time). A system designer will attempt
to reduce the number of activities (tasks) that are subject to a hard time constraint.
1.5 CLASSIFICATION OF PROGRAMS:
The importance of separating the various activities carried out by computer control systems
into real-time and non-real-time tasks, and in subdividing real-time tasks into the two different types,
arises from the different levels of difficulty of designing and implementing the different types of
computer program. Experimental studies have shown clearly that certain types of program,
particularly those involving real time and interface operations, are substantially more difficult to
construct than, for instance, standard data processing programs (Shooman, 1983; Pressman,
1992). The division of software into small, coherent modules is an important design technique and one
of the guidelines for module division that we introduce is to put activities with different types of time
constraints into separate modules.
Theoretical work on mathematical techniques for proving the correctness of a program, and
the development of formal specification languages such as Z and VDM, has clarified the
understanding of the differences between different types of program. Pyle (1979), drawing on the work
of Wirth (1977), presented definitions identifying three types of programming:
• Sequential;
• Multi-tasking; and
• Real-time.
The definitions are based on the kind of arguments which would have to be made in order to verify,
that is to develop a formal proof of correctness for programs of each type.
1.5.1 SEQUENTIAL:
In classical sequential programming actions are strictly ordered as a time sequence: the
behavior of the program depends only on the effects of the individual actions and their order; the
time taken to perform the action is not of consequence. Verification, therefore, requires two kinds of
argument:
1. That a particular statement defines a stated action; and
2. That the various program structures produce a stated sequence of events.
1.5.2 MULTI-TASKING:
A multi-task program differs from the classical sequential program in that the actions it is
required to perform are not necessarily disjoint in time; it may be necessary for several actions to be
performed in parallel. Note that the sequential relationships between the actions may still be
important. Such a program may be built from a number of parts (processes or tasks are the names
used for the parts) which are themselves partly sequential, but which are executed concurrently and
which communicate through shared variables and synchronization signals.
Verification requires the application of arguments for sequential programs with some additions. The
task (processes) can be verified separately only if the constituent variables of each task (process) are
distinct. If the variables are shared, the potential concurrency makes the effect of the program
unpredictable (and hence not capable of verification) unless there is some further rule that governs the
sequencing of the several actions of the tasks (processes). The tasks can proceed at any speed: the
correctness depends on the actions of the synchronizing procedure.
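The hazard described above - shared variables making a concurrent program unverifiable unless a rule governs the interleaving - can be demonstrated in a few lines. This sketch uses Python threads as a stand-in for the tasks; the lock plays the role of the "further rule" (mutual exclusion) that makes the outcome deterministic and hence verifiable:

```python
# Two concurrent tasks increment a shared variable. Without the lock the
# read-modify-write interleavings would make the final value unpredictable;
# with it, the result is deterministic.
import threading

counter = 0
lock = threading.Lock()

def task(n_increments):
    global counter
    for _ in range(n_increments):
        with lock:                    # synchronization rule: mutual exclusion
            counter += 1

threads = [threading.Thread(target=task, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # -> 40000, only guaranteed because of the lock
```

Verifying each task separately is now possible: the lock ensures the shared variable is updated atomically, so each task's effect can be argued in isolation.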
1.5.3 REAL-TIME:
A real-time program differs from the previous types in that, in addition to its actions not
necessarily being disjoint in time, the sequence of some of its actions is not determined by the
designer but by the environment - that is, by events occurring in the outside world which occur in real
time and without reference to the internal operations of the computer. Such events cannot be made to
conform to the intertask synchronization rules.
A real-time program can still be divided into a number of tasks but communication between
the tasks cannot necessarily wait for a synchronization signal: the environment task cannot be
delayed. (Note that in process control applications the main environment task is usually that of
keeping real time, that is a real-time clock task. It is this task which provides the timing for the
scanning tasks which gather information from the outside world about the process.) In real-time
programs, in contrast to the two previous types of program, the actual time taken by an action is an
essential factor in the process of verification. We shall assume that we are concerned with real-time
software and references to sequential and multi-tasking programs should be taken to imply that the
program is real time. Non-real-time programs will be referred to as standard programs.
Consideration of the types of reasoning necessary for the verification of programs is
important, not because we, as engineers, are seeking a method of formal proof, but because we are
seeking to understand the factors which need to be considered when designing real-time software.
Experience shows that the design of real-time software is significantly more difficult than the design
of sequential software. The problems of real-time software design have not been helped by the fact
that the early high-level languages were sequential in nature and they did not allow direct access to
many of the detailed features of the computer hardware.
As a consequence, real-time features had to be built into the operating system which was
written in the assembly language of the machine by teams of specialist programmers. The cost of
producing such operating systems was high and they had therefore to be general purpose so that they
could be used in a wide range of applications in order to reduce the unit cost of producing them.
These operating systems could be tailored, that is they could be reassembled to exclude or include
certain features, for example to change the number of tasks which could be handled, or to change the
number of input/output devices and types of device. Such changes could usually only be made by the
supplier.
Expected Questions:
1. Explain the difference between a real-time program and a non-real-time program.
Why are real-time programs more difficult to verify than non-real-time programs?
2. To design a computer-based system to control all the operations of a retail petrol
(gasoline) station (control of pumps, cash receipts, sales figures, deliveries, etc.),
what type of real-time system would you expect to use?
3. Which of the following systems would you classify as real-time?
In each case give reasons for your answer and classify the real-time systems as
hard or soft.
(a) A simulation program run by an engineer on a personal computer.
(b) An airline seat-reservation system with on-line terminals.
4. An automatic bank teller system works by polling each teller in turn. Some tellers are located
outside buildings and others inside. How could the polling system be organized to ensure that
the waiting time at the outside locations is less than at the inside locations?
5. Explain the precision required for the analog-to-digital and digital-to-analog converters, taking
the hot-air blower as an example.
UNIT – 2
Concepts of Computer Control
Introduction, Sequence Control, Loop Control, Supervisory Control, Centralized Computer Control,
Distributed System, Human-Computer interface, Benefits of Computer Control Systems.
BATCH:
The term batch is used to describe processes in which a sequence of operations is carried out
to produce a quantity of a product - the batch - and in which the sequence is then repeated to produce
further batches. The specification of the product or the exact composition may be changed between
the different runs.
A typical example of batch production is the rolling of sheet steel. An ingot is passed through the
rolling mill to produce a particular gauge of steel; the next ingot may be either of a different
composition or rolled to a different thickness and hence will require different settings of the rolling
mill.
An important measure in batch production is set-up time (or change-over time), that is, the
time taken to prepare the equipment for the next production batch. This is wasted time in that no
output is being produced; the ratio between operation time (the time during which the product is
being produced) and set-up time is important in determining a suitable batch size.
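The trade-off between operation time and set-up time can be made concrete with a simple figure of merit: the fraction of total time spent actually producing. This formula and the numbers in it are illustrative assumptions, not from the text, but they show why larger batches amortize a fixed change-over cost:

```python
# Hypothetical figure of merit for batch sizing: the fraction of total time
# spent producing. A fixed set-up time is amortized over the batch.

def productive_fraction(setup_h, time_per_unit_h, batch_size):
    operation = time_per_unit_h * batch_size
    return operation / (setup_h + operation)

# Assumed figures: 2 h change-over, 0.1 h of operation time per unit.
print(round(productive_fraction(2.0, 0.1, 20), 3))    # -> 0.5
print(round(productive_fraction(2.0, 0.1, 200), 3))   # -> 0.909
```

This is exactly why shortening set-up time (as the NC machine tools mentioned below did) makes smaller batches economic: the productive fraction stays high even at a small batch size.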
In mechanical production the advent of the NC (Numerically Controlled) machine tool which
can be set up in a much shorter time than the earlier automatic machine tool has led to a reduction in
the size of batch considered to be economic.
CONTINUOUS:
The term continuous is used for systems in which production is maintained for long periods of
time without interruption, typically over several months or even years. An example of a continuous
system is the catalytic cracking of oil in which the crude oil enters at one end and the various
products - fractionates – are removed as the process continues. The ratio of the different fractions can
be changed but this is done without halting the process.
Continuous systems may produce batches, in that the product composition may be changed
from time to time, but they are still classified as continuous since the change in composition is made
without halting the production process.
A problem which occurs in continuous processes is that during change-over from one
specification to the next, the output of the plant is often not within the product tolerance and must be
scrapped. Hence it is financially important that the change be made as quickly and smoothly as
possible. There is a trend to convert processes to continuous operation - or, if the whole process
cannot be converted, part of the process.
For example, in the baking industry bread dough is produced in batches but continuous ovens are
frequently used to bake it, whereby the loaves are placed on a conveyor which moves slowly
through the oven. An important problem in mixed mode systems, that is systems in which batches are
produced on a continuous basis, is the tracking of material through the process; it is obviously
necessary to be able to identify a particular batch at all times.
LABORATORY SYSTEMS:
Laboratory-based systems are frequently of the operator-initiated type in that the computer is
used to control some complex experimental test or some complex equipment used for routine testing.
A typical example is the control and analysis of data from a vapour phase chromatograph.
Another example is the testing of an audiometer, a device used to test hearing. The audiometer
has to produce sound levels at different frequencies; it is complex in that the actual level produced is
a function of frequency since the sensitivity of the human ear varies with frequency. Each audiometer
has to be tested against a sound-level meter and a test certificate produced. This is done by using a
sound-level meter connected to a computer and using the output from the computer to drive the
audiometer through its frequency range. The results printed out from the test computer provide the
test certificate.
As with attempts to classify systems as batch or continuous, it can be difficult at times to
classify a system solely as a laboratory system. The production of steel using the electric arc furnace involves
complex calculations to determine the appropriate mix of scrap, raw materials and alloying additives.
As the melt progresses samples of the steel are taken and analyzed using a spectrometer. Typically
this instrument is connected to a computer which analyses the results and calculates the necessary
adjustment to the additives. The computer used may well be the computer which is controlling the arc
furnace itself.
In whatever way the application is classified the activities being carried out will include:
• Data acquisition;
• Sequence control;
• Loop control (DDC);
• Supervisory control;
• Data analysis;
• Data storage; and
• Human-computer interfacing (HCI).
7. When the timer indicates that the reaction is complete, switch off the controller
and open valve C to cool down the reactor contents. Switch off the stirrer.
8. Monitor the temperature; when the contents have cooled, open valve E to
remove the product from the reactor.
When implemented by computer all of the above actions and timings would be based upon
software. For a large chemical plant such sequences can become very lengthy and intricate and, to
ensure efficient operating, several sequences may take place in parallel.
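Steps 7 and 8 above, expressed as software, might look like the following sketch. The device-interface calls (`switch_off`, `open_valve`, `read_temperature`) and the cool-down threshold are hypothetical stand-ins for the plant I/O the text describes; the simulated plant exists only so the sequence can be exercised:

```python
# Sketch of the reactor shutdown sequence (steps 7 and 8).

COOL_THRESHOLD_C = 40.0    # assumed "contents have cooled" temperature

def run_shutdown_sequence(plant):
    # Step 7: reaction complete - stop control, start cooling, stop stirrer.
    plant.switch_off("controller")
    plant.open_valve("C")
    plant.switch_off("stirrer")
    # Step 8: monitor the temperature; when cooled, drain the product.
    while plant.read_temperature() > COOL_THRESHOLD_C:
        plant.wait_for_next_sample()
    plant.open_valve("E")

class SimulatedPlant:
    """Minimal stand-in that cools by 10 degrees per sampling interval."""
    def __init__(self, temp_c):
        self.temp_c = temp_c
        self.log = []
    def switch_off(self, device):
        self.log.append(("off", device))
    def open_valve(self, valve):
        self.log.append(("open", valve))
    def read_temperature(self):
        return self.temp_c
    def wait_for_next_sample(self):
        self.temp_c -= 10.0

plant = SimulatedPlant(80.0)
run_shutdown_sequence(plant)
print(plant.log[-1])   # -> ('open', 'E'): product drained last, after cooling
```

In a real implementation the polling loop would of course be driven by the real-time clock rather than a busy wait, and several such sequences (for R1, R2, the mixing tank) would run as parallel tasks.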
The processes carried out in the single reactor vessel shown in Figure 2.1 are often only part
of a larger process, as shown in Figure 2.2. In this plant two reactor vessels (R1 and R2) are used
alternately, so that the processes of preparing for the next batch and cleaning up after a batch can be
carried out in parallel with the actual production. Assuming that R1 has been filled with the mixture
and the catalyst, and the reaction is in progress, there will be for R1: loop control of the temperature
and pressure; operation of the stirrer; and timing of the reaction (and possibly some in process
measurement to determine the state of the reaction). In parallel with this, vessel R2 will be cleaned -
the wash down sequence - and the next batch of raw material will be measured and mixed in the
mixing tank.
Meanwhile, the previous batch will be thinned down and transferred to the appropriate storage
tank and, if there is to be a change of product or a change in product quality, the thin-down tank will
be cleaned. Once this is done the next batch can be loaded into R2 and then, assuming that the
reaction in R1 is complete, the contents of R1 will be transferred to the thin-down tank and the wash
down procedure for R1 initiated. The various sequences of operations required can become complex
and there may also be complex decisions to be made as to when to begin a sequence. The sequence
initiation may be left to a human operator or the computer may be programmed to supervise the
operations (supervisory control - see below). The decision to use human or computer supervision is
often very difficult to make.
The aim is usually to minimize the time during which the reaction vessels are idle since this is
unproductive time. The calculations needed and the evaluation of options can be complex,
particularly if, for example, reaction times vary with product mix, and therefore it would be expected
that decisions made using computer supervisory control would give the best results. However, it is
difficult using computer control to obtain the same flexibility that can be achieved using a human
operator (and to match the ingenuity of good operators). As a consequence many supervisory systems
are mixed; the computer is programmed to carry out the necessary supervisory calculations and to
present its decisions for confirmation or rejection by the operator, or alternatively it presents a range
of options to the operator.
In most batch systems there is also, in addition to the sequence control, some continuous
feedback control: for example, control of temperatures, pressures, flows, speeds or currents. In
process control terminology continuous feedback control is referred to as loop control or modulating
control, and in modern systems this would be carried out using DDC.
A similar mixture of sequence, loop and supervisory control can be found in continuous
systems. Consider the float glass process shown in Figure 2.3. The raw material - sand, powdered
glass and fluxes (the frit) - is mixed in batches and fed into the furnace. It melts rapidly to form a
molten mixture which flows through the furnace. As the molten glass moves through the furnace it is
refined. The process requires accurate control of temperature in order to maintain quality and to keep
fuel costs to a minimum - heating the furnace to a higher temperature than is necessary wastes energy
and increases costs. The molten glass flows out of the furnace and forms a ribbon on the float bath;
again, temperature control is important as the glass ribbon has to cool sufficiently so that it can pass
over rollers without damaging its surface.
The continuous ribbon passes into the lehr where it is annealed and where temperature
control is again required. It then passes under the cutters which cut it into sheets of the required size;
automatic stackers then lift the sheets from the production line. The whole of this process is
controlled by several computers and involves loop, sequence and supervisory control. Sequence
control systems can vary from the large - the start-up of a large boiler turbine unit in a power station
when some 20000 operations and checks may have to be made - to the small - starting a domestic
washing machine. Most sequence control systems are simple and frequently have no loop control.
They are systems which in the past would have been controlled by relays, discrete logic, or integrated
circuit logic units. Examples are simple presses where the sequence might be: locate blank, spray
lubricant, lower press, raise press, remove article, spray lubricant. Such applications are now commonly
handled by special computer systems known as programmable logic controllers (PLCs).
In direct digital control (DDC) the computer is in the feedback loop, as shown in Figure 2.4.
The system shown in Figure 2.4 is assumed to involve several control loops, all of which are handled
within one computer.
A consequence of the computer being in the feedback loop is that it forms a critical
component in terms of the reliability of the system and hence great care is needed to ensure that, in
the event of the failure or malfunctioning of the computer, the plant remains in a safe condition. The
usual means of ensuring safety are to limit the DDC unit to making incremental changes to the
actuators on the plant; and to limit the rate of change of the actuator settings (the actuators are labeled
A in Figure 2.4).
PID CONTROL:
The PID control algorithm has the general form

m(t) = Kp [ e(t) + (1/Ti) ∫0^t e(τ) dτ + Td de(t)/dt ]

where e(t) = r(t) - c(t); c(t) is the measured variable, r(t) is the reference value or set point, and
e(t) is the error; Kp is the overall controller gain; Ti is the integral action time; and Td is the
derivative action time.
For a wide range of industrial processes it is difficult to improve on the control performance
that can be obtained by using either PI or PID control (except at considerable expense), and it is for
this reason that these algorithms are widely used. For the majority of systems PI control is all that is
necessary. Using a control signal that is made proportional to the error between the desired value of
an output and the actual value of the output is an obvious and (hopefully) a reasonable strategy.
Choosing the value of Kp involves a compromise: a high value of Kp gives a small steady-state error
and a fast response, but the response will be oscillatory and may be unacceptable in many
applications; a low value gives a slow response and a large steady-state error. By adding the integral
action term the steady-state error can be reduced to zero since the integral term, as its name implies,
integrates the error signal with respect to time. For a given error value the rate at which the integral
term increases is determined by the integral action time Ti. The major advantage of incorporating an
integral term arises from the fact that it compensates for changes that occur in the process being
controlled.
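A minimal discrete-time sketch of the algorithm above, using rectangular integration for the integral term and a backward difference for the derivative; Kp, Ti and Td are the symbols from the formula, and the sampling interval ts, the gains and the set point in the usage example are assumed values chosen only to illustrate the behaviour:

```python
# Discrete-time PID sketch: m = Kp*(e + integral/Ti + Td*derivative),
# matching the continuous form given in the text.

class PID:
    def __init__(self, kp, ti, td, ts):
        self.kp, self.ti, self.td, self.ts = kp, ti, td, ts
        self.integral = 0.0          # running approximation of the integral
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement               # e(t) = r(t) - c(t)
        self.integral += error * self.ts             # rectangular integration
        derivative = (error - self.prev_error) / self.ts
        self.prev_error = error
        return self.kp * (error
                          + self.integral / self.ti
                          + self.td * derivative)

# PI behaviour (Td = 0): with a persistent error the output grows each step,
# which is what drives the steady-state error to zero on a real plant.
pid = PID(kp=2.0, ti=1.0, td=0.0, ts=0.1)
out1 = pid.update(setpoint=1.0, measurement=0.0)
out2 = pid.update(setpoint=1.0, measurement=0.0)
print(out1, out2)   # output rises: approximately 2.2, then 2.4
```

Industrial DDC implementations usually add refinements not shown here (integral wind-up limiting, derivative filtering, bumpless auto/manual transfer), but the core calculation is this small.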
A purely proportional controller operates correctly only under one particular set of process
conditions: changes in the load on the process or some environmental condition will result in a
steady-state error; the integral term compensates for these changes and reduces the error to zero. For a
few processes which are subjected to sudden disturbances the addition of the derivative term can give
improved performance. Because derivative action produces a control signal that is related to the rate
of change of the error signal, it anticipates the error and hence acts to reduce the error that would
otherwise arise from the disturbance.
In fact, because the PID controller copes perfectly adequately with 90% of all control
problems, it provides a strong deterrent to the adoption of new control system design techniques.
DDC may be applied either to a single-loop system implemented on a small microprocessor or to a
large system involving several hundred loops. The loops may be cascaded, that is with the output or
actuation signal of one loop acting as the set point for another loop, signals may be added together
(ratio loops) and conditional switches may be used to alter signal connections.
A typical industrial system is shown in Figure 2.5. This is a steam boiler control system. The
steam pressure is controlled by regulating the supply of fuel oil to the burner, but in order to comply
with the pollution regulations a particular mix of air and fuel is required. We are not concerned with
how this is achieved but with the elements which are required to implement the chosen control
system.
DDC APPLICATIONS:
The steam pressure control system generates an actuation signal which is fed to an
auto/manual bias station. If the station is switched to auto then the actuation signal is transmitted; if it
is in manual mode a signal which has been entered manually (say, from a keyboard) is transmitted.
The signal from the bias station is connected to two units, a high signal selector and a low signal
selector each of which has two inputs and one output. The signal from the low selector provides the
set point for the DDC loop controlling the oil flow, the signal from the high selector provides the set
point for the air flow controller (two cascade loops). A ratio unit is installed in the air flow
measurement line.
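The selector arrangement just described can be sketched as follows. The text does not say what the second input of each selector is; pairing the pressure controller's actuation signal u with the measured flows is my reading of the classic "cross-limiting" combustion scheme, so treat the signal pairing, function name and ratio handling as assumptions:

```python
# Sketch of the high/low selector logic: the low selector's output is the
# oil-flow set point, the high selector's output (via the ratio unit) is
# the air-flow set point.

def selector_setpoints(u, measured_air, measured_oil, air_fuel_ratio=1.0):
    # Oil may rise only as fast as the air already present allows:
    oil_setpoint = min(u, measured_air / air_fuel_ratio)
    # Air must lead: take the higher of the demand and the current oil flow:
    air_setpoint = max(u, measured_oil) * air_fuel_ratio
    return oil_setpoint, air_setpoint

# Load increase: demand u jumps to 0.8 while both flows are still at 0.5.
oil_sp, air_sp = selector_setpoints(0.8, measured_air=0.5, measured_oil=0.5)
print(oil_sp, air_sp)   # oil held at 0.5 until the air catches up; air -> 0.8
```

The effect is that on a load increase the air leads the fuel, and on a load decrease the fuel leads the air, so the mixture never goes fuel-rich - which is the pollution-compliance behaviour the text is describing.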
DDC is not necessarily limited to simple feedback control as shown in Figure 2.6. It is
possible to use techniques such as inferential, feed forward and adaptive or self-tuning control.
Inferential control, illustrated in Figure 2.7, is the term applied to control where the variables on
which the feedback control is to be based cannot be measured directly, but have to be 'inferred' from
measurements of some other quantity.
ADAPTIVE CONTROL:
Adaptive control can take several forms. Three of the most common are:
• Preprogrammed adaptive control (gain-scheduled control);
• Self-tuning; and
• Model-reference adaptive control.
Programmed adaptive control is illustrated in Figure 2.10a. The adaptive, or adjustment,
mechanism makes preset changes on the basis of changes in auxiliary process measurements. For
example, in a reaction vessel a measurement of the level of liquid in the vessel (an indicator of the
volume of liquid in the vessel) might be used to change the gain of the temperature controller; in
many aircraft controls the measured air speed is used to select controller parameters according to a
preset schedule.
An alternative form is shown in Figure 2.10b in which measurements of changes in the
external environment are used to select the gain or other controller parameters. For example, in an
aircraft auto stabilizer, control parameters may be changed according to the external air pressure.
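Gain scheduling of this kind can be sketched in code. The following Python fragment is illustrative only: the airspeed breakpoints, gain values and function names are invented, not taken from the text, but the structure — a preset table consulted on each control cycle — is the essence of preprogrammed adaptive control.

```python
# Hypothetical sketch of preprogrammed (gain-scheduled) adaptive control:
# the controller gain is not computed online but looked up from a preset
# schedule keyed on an auxiliary measurement (here, airspeed in m/s).
# The breakpoints and gains are illustrative values, not from the text.

GAIN_SCHEDULE = [          # (airspeed upper bound, proportional gain)
    (100.0, 2.0),          # low speed: large gain needed
    (250.0, 1.2),          # cruise
    (float("inf"), 0.7),   # high speed: control surfaces more effective
]

def scheduled_gain(airspeed):
    """Select the preset controller gain for the measured airspeed."""
    for upper_bound, gain in GAIN_SCHEDULE:
        if airspeed < upper_bound:
            return gain
    raise ValueError("schedule must cover all airspeeds")

def control_action(error, airspeed):
    """Proportional control with a gain chosen by the schedule."""
    return scheduled_gain(airspeed) * error
```

The adjustment mechanism makes no online estimate of the plant: the schedule is fixed in advance, which is what distinguishes this form from self-tuning control.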
The adoption of computers for process control has increased the range of activities that can be
performed, for not only can the computer system directly control the operation of the plant, but also it
can provide managers and engineers with a comprehensive picture of the status of the plant
operations. It is in this supervisory role and in the presentation of information to the plant operator -
large rooms full of dials and switches have been replaced by VDUs and keyboards - that the major
changes have been made: the techniques used in the basic feedback control of the plant have changed
little from the days when pneumatically operated three-term controllers were the norm. Direct digital
control (DDC) is often simply the computer implementation of the techniques used for the traditional
analog controllers.
Many of the early computer control schemes used the computer in a supervisory role and not
for DDC. The main reasons for this were (a) computers in the early days were not always very
reliable and caution dictated that the plant should still be able to run in the event of a computer
failure; (b) computers were very expensive and it was not economically viable to use a computer to
replace the analog control equipment in current use. A computer system that was used to adjust the
set points of the existing analog control system in an optimum manner (to minimize energy or to
maximize production) could perhaps be economically justified. The basic idea of supervisory control
is illustrated in Figure 2.13 (compare this with Figure 2.4).
2. In the event of failure or overloading of a particular unit all or some of the work can be transferred
to other units.
In other words, the work is not divided by function and allocated to a particular computer as
in hierarchical systems: instead, the total work is divided up and spread across several computers.
This is a conceptually simple and attractive approach - many hands make light work - but it poses
difficult hardware and software problems since, in order to gain the advantages of the approach,
allocation of the tasks between computers has to be dynamic, that is there has to be some mechanism
which can assess the work to be done and the present load on each computer in order to allocate
work. Because each computer needs access to all the information in the system, high-bandwidth data
highways are necessary. There has been considerable progress in developing such highways and the
various types are discussed below:
Computer scientists and engineers are also carrying out considerable research on multi-
processor computer systems and this work could lead to totally distributed systems becoming
feasible. There is also a more practical approach to distributing the computing load whereby no
attempt is made to provide for the dynamic allocation of resources but
instead a simple ad hoc division is adopted with, for example, one computer performing all non-
plant input and output, one computer performing all DDC calculations, another performing data
acquisition and yet another performing the control of the actuators. In most modern schemes a
mixture of distributed and hierarchical approaches is used as shown in Figure 2.19. The tasks of
measurement, DDC, operator communications, etc., are distributed among a number of computers
which are linked together via a common serial communications highway and are configured in a
hierarchical command structure. Six broad divisions of function are shown:
Level 1: All computations and plant interfacing associated with measurement and actuation. This level
provides a measurement and actuation database for the whole system.
Level 2: All DDC calculations.
Level 3: All sequence calculations.
Level 4: Operator communications.
Level 5: Supervisory control.
Level 6: Communications with other computer systems.
It is not necessary to preserve rigid boundaries; for example, a DDC unit may perform some
sequencing or may interface directly to plant.
Recommended Questions:
1. List the advantages and disadvantages of using DDC.
2. In the section on human-computer interfacing we made the statement 'the design
of user interfaces is a specialist area'. Can you think of reasons to support this statement and
suggest what sort of background and training a specialist in user interfaces might require?
3. What are the advantages/disadvantages of using a continuous oven? How will the
control of the process change from using a standard oven on a batch basis to
using an oven in which the batch passes through on a conveyor belt?
UNIT- 3
Computer Hardware Requirements for RTS
Introduction, General Purpose Computer, Single Chip Microcontroller, Specialized Processors,
Process –Related Interfaces, Data Transfer Techniques, Communications, Standard Interface.
STORAGE:
The storage used on computer control systems divides into two main categories: fast access
storage and auxiliary storage. The fast access memory is that part of the system which contains data,
programs and results which are currently being operated on. The major restriction with current
computers is commonly the addressing limit of the processor. In addition to RAM (random access
memory - read/write) it is now common to have ROM (read-only memory), PROM (programmable
read-only memory) or EPROM (erasable programmable read-only memory) for the storage of
critical code or predefined functions. The use of ROM has eased the problem of memory protection to
prevent loss of programs through power failure or corruption by the malfunctioning of the software
(this can be a particular problem during testing).
An alternative to using ROM is the use of memory mapping techniques that trap instructions
which attempt to store in a protected area. This technique is usually only used on the larger systems
which use a memory management system to map program addresses onto the physical address space.
An extension of the system allows particular parts of the physical memory to be set as read only, or
even locked out altogether: write access can be gained only by the use of 'privileged' instructions. The
auxiliary storage medium is typically disk or magnetic tape. These devices provide bulk storage for
programs or data which are required infrequently at a much lower cost than fast access memory. The
penalty is a much longer access time and the need for interface boards and software to connect them
to the CPU. In a real-time system use of the CPU to carry out the transfer is not desirable as it is slow
and no other computation can take place during transfer. For efficiency of transfer it is sensible to
transfer large blocks of data rather than a single word or byte and this can result in the CPU not being
available for up to several seconds in some cases. The approach frequently used is direct memory
access (DMA). For this the interface controller for the backing memory must be able to take control
of the address and data buses of the computer.
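The division of labour that DMA provides can be sketched as follows. The register names (src, dst, length) and the memory model are hypothetical, invented for illustration; in a real system the controller seizes the address and data buses and moves the block while the CPU continues with other work.

```python
# A minimal simulation of DMA-style block transfer. The "registers"
# (src, dst, length) and the memory model are invented for illustration;
# a real DMA controller takes control of the address and data buses.

memory = bytearray(64)            # stand-in for main memory
disk_buffer = bytes(range(16))    # stand-in for the backing-store buffer

class DMAController:
    def __init__(self):
        self.src = 0      # register: source offset in the disk buffer
        self.dst = 0      # register: destination address in memory
        self.length = 0   # register: number of bytes to move

    def start(self):
        # The whole block moves without the CPU copying word by word.
        memory[self.dst:self.dst + self.length] = disk_buffer[self.src:self.src + self.length]

dma = DMAController()
dma.src, dma.dst, dma.length = 4, 32, 8   # CPU writes three registers...
dma.start()                               # ...then the transfer proceeds
```

The point of the sketch is the interface: the CPU's involvement is limited to loading three registers and starting the transfer, rather than executing one load/store pair per word.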
BUS STRUCTURE:
Buses are characterized in three ways:
• Mechanical (physical) structure;
• Electrical; and
• Functional.
In mechanical or physical terms a bus is a collection of conductors which carry electrical
signals, for example tracks on a printed circuit board or the wires in a ribbon cable. The physical form
of the bus represents the mechanical characteristic of the bus system. The electrical characteristics
of the bus are the signal levels, loading (that is, how many loads the line can support), and type of
output gates (open-collector, tri-state). The functional characteristics describe the type of information
which the electrical signals flowing along the bus conductors represent. The bus lines can be divided
into three functional groups:
• Address lines;
• Data lines; and
• Control and status lines.
An individual chip can be used as a stand-alone computing device; however, the power of the
transputer is obtained when several transputers are interconnected to form a parallel processing
network. INMOS developed a special programming language, occam, for use with the transputer.
Occam is based on the assumption that the application to be implemented on the transputer can be
modelled as a set of processes (actions) that communicate with each other via channels. A channel is
a unidirectional link between two processes which provides synchronized communication. A process
can be a primitive process, or a collection of processes; hence the system supports a hierarchical
structure. Processes are dynamic in that they can be created, can die and can create other processes.
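The occam process/channel model can be approximated in Python using threads and a queue. This is only a rough sketch: a true occam channel is a synchronized rendezvous between exactly two processes, which a bounded queue merely approximates, and all names here are invented for illustration.

```python
# Sketch of the occam model: two processes (threads) linked by a
# unidirectional channel. occam channels are synchronized rendezvous;
# a Queue with maxsize=1 only approximates that behaviour.
import threading
import queue

channel = queue.Queue(maxsize=1)   # unidirectional link between processes
results = []

def producer():
    """Primitive process: sends three readings, then an end marker."""
    for reading in (10, 20, 30):
        channel.put(reading)       # blocks until the channel has room
    channel.put(None)              # end-of-stream marker

def consumer():
    """Primitive process: receives until the end marker arrives."""
    while True:
        value = channel.get()
        if value is None:
            break
        results.append(value * 2)  # some processing of each reading

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
```

The hierarchical structure the text mentions corresponds to composing such processes: a "process" may itself be a network of smaller processes joined by further channels.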
3.6 DIGITAL SIGNAL PROCESSORS:
In applications such as speech processing, telecommunications, radar and hi-fi systems analog
techniques have been used for modifying the signal characteristics. There are advantages to be gained
if such processing can be done using digital techniques in that the digital devices are inherently more
reliable and not subject to drift. The problem is that the bandwidth of the signals to be processed is
such as to demand very high processing speeds. Special purpose integrated circuits optimized to meet
such as to demand very high processing speeds. Special purpose integrated circuits optimized to meet
the signal processing requirements have been developed. They typically use the so-called Harvard
architecture in which separate paths are provided for data and for instructions. DSPs typically use
fixed point arithmetic and the instruction set contains instructions for manipulating complex numbers.
They are difficult to program as few high-level language compilers are available.
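The fixed-point arithmetic mentioned above can be illustrated with a small sketch using Python integers in the common Q15 format; the choice of Q15 is an assumption for illustration, as actual DSP word formats vary.

```python
# Fixed-point arithmetic of the kind DSPs use, sketched with Python
# integers in Q15 format (15 fractional bits), so values lie in [-1, 1).

Q = 15  # number of fractional bits

def to_q15(x):
    """Convert a real value in [-1, 1) to a Q15 integer."""
    return int(round(x * (1 << Q)))

def from_q15(n):
    """Convert a Q15 integer back to a real value."""
    return n / (1 << Q)

def q15_mul(a, b):
    """Multiply two Q15 numbers: the 30-bit product is rescaled by
    shifting right 15 places, as a DSP multiplier unit does in hardware."""
    return (a * b) >> Q

half = to_q15(0.5)
quarter = q15_mul(half, half)      # 0.5 * 0.5 in fixed point
```

Because every operation is integer arithmetic plus a shift, the hardware can be made fast and deterministic, which is what makes fixed point attractive despite the limited dynamic range.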
4. Telemetry: The increasing use of remote outstations, for example electricity substations and gas
pressure reduction stations, has increased the use of telemetry. The data may be transmitted by
landline, radio or the public telephone network: it is, however, characterized by being sent in serial
form, usually encoded in standard ASCII characters. For small quantities of data the transmission is
usually asynchronous. Telemetry channels may also be used on a plant with a hierarchy of computer
systems instead of connecting the computers by some form of network. An example of this is the
CUTLASS system used by the Central Electricity Generating Board, which uses standard RS232
lines to connect a hierarchy of control computers. The ability to classify the interface requirements
into the above categories means that a limited number of interfaces can be provided for a process
control computer.
In its simplest form a pulse input interface consists of a counter connected to a line from the
plant. The counter is reset under program control and after a fixed length of time the contents are read
by the computer. A typical arrangement is shown in Figure 3.7, which also shows a simple pulse
output interface. The transfer of data from the counter to the computer uses techniques similar to
those for the digital input described above. The measurement of the length of time for which the
count proceeds can be carried out either by a logic circuit in the counter interface or by the computer.
If the timing is done by the computer then the 'enable' signal must inhibit the further counting of
pulses. If the computing system is not heavily loaded, the external interface hardware required can be
reduced by connecting the pulse input to an interrupt and counting the pulses under program control.
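Counting pulses under program control can be sketched as follows. The pulse timestamps and window length are simulated values invented for illustration; on real hardware, pulse_interrupt would be attached to the interrupt line rather than called from a loop.

```python
# Simulation of pulse counting under program control: each pulse raises
# an interrupt whose service routine increments a counter, and at the
# end of each fixed time window the computer reads and resets the count
# to obtain a rate. The timestamps below are simulated, not real pulses.

count = 0

def pulse_interrupt():
    """Interrupt service routine: one pulse has arrived."""
    global count
    count += 1

def read_and_reset():
    """Called by the timing logic at the end of each counting window."""
    global count
    value, count = count, 0
    return value

WINDOW = 1.0                       # seconds per counting window
pulse_times = [0.1, 0.3, 0.5, 0.7, 0.9, 1.2, 1.6]  # simulated pulses

rates = []                         # pulses per second, one per window
window_end = WINDOW
for t in sorted(pulse_times):
    while t >= window_end:         # window elapsed: read the counter
        rates.append(read_and_reset() / WINDOW)
        window_end += WINDOW
    pulse_interrupt()
rates.append(read_and_reset() / WINDOW)   # final partial window
```

This mirrors the text's point: if the processor is lightly loaded, the counter hardware can be dispensed with and the counting done entirely in the interrupt routine.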
3.13 COMMUNICATIONS:
The use of distributed computer systems implies the need for communication: between
instruments on the plant and the low-level computers (see Figure 3.20); between the Level 1 and Level
2 computers; and between the Level 2 and the higher-level computers. At the plant level
communications systems typically involve parallel analog and digital signal transmission techniques
since the distances over which communication is required are small and high-speed communication is
usually required. At the higher levels it is more usual to use serial communication methods since, as
communication distances extend beyond a few hundred yards, the use of parallel cabling rapidly
becomes cumbersome and costly. As the distance between the source and receiver increases it
becomes more difficult, when using analog techniques, to obtain a high signal-to-noise ratio; this is
particularly so in an industrial environment where there may be numerous
sources of interference. Analog systems are therefore generally limited to short distances. The use of
parallel digital transmission provides high data transfer rates but is expensive in terms of cabling and
interface circuitry and again is normally only used over short distances (or when very high rates of
transfer are required).
Most of the companies which supply computers for real-time control have developed their
own 'standard' interfaces, such as the Digital Equipment Corporation's Q-bus for the PDP-11 series,
and, typically, they, and independent suppliers, will be able to offer a large range of interface cards
for such systems. The difficulty with the standards supported by particular manufacturers is that they
are not compatible with each other; hence a change of computer necessitates a redesign of the
interface. An early attempt to produce an independent standard was made by the British Standards
Institution (BS 4421, 1969). Unfortunately the standard is limited to the concept of how the devices
should interconnect and the standard does not define the hardware. It is not widely used and has been
overtaken by more recent developments.
An interface which was originally designed for use in atomic energy research laboratories -
the computer automated measurement and control (CAMAC) system - has been widely adopted in
laboratories, the nuclear industry and some other industries. There are also FORTRAN libraries
which provide software to support a wide range of the interface modules. One of the attractions of the
system is that the CAMAC data highway connects to the computer by a special card; to change to a
different computer only requires that the one card be changed. A general purpose interface bus
(GPIB) was developed by the Hewlett Packard Company in the early 1970s for connecting laboratory
instruments to a computer. The system was adopted by the IEEE and standardized as the IEEE 488
bus system.
Recommended questions:
1. There are a number of different types of analog-to-digital converters. List them and discuss
typical applications for each type (see, for example, Woolvet (1977) or Barney (1985)).
2. The clock on a computer system generates an interrupt every 20 ms. Draw a flowchart for
the interrupt service routine. The routine has to keep a 24 hour clock in hours, minutes and
seconds.
3. Twenty analog signals from a plant have to be processed (sampled and digitized) every 1 s.
The analog-to-digital converter and multiplexer which is available can operate in two modes:
automatic scan and computer-controlled scan. In the automatic scan mode, on receipt of a
'start' signal the converter cycles through each channel in turn.
4. A turbine flow meter generates pulses proportional to the flow rate of a liquid. What
methods can be used to interface the device to a computer?
5. Why is memory protection important in real-time systems?
6. What methods can be used to provide memory protection?
UNIT- 4
Languages For Real –Time Applications
Introduction, Syntax Layout and Readability, Declaration and Initialization of Variables and
Constants, Modularity and Variables, Compilation , Data Type, Control Structure, Exception
Handling, Low –Level Facilities, Co routines, Interrupts and Device Handling, Concurrency, Real –
Time Support, Overview of Real –Time Languages.
A language for real-time use must support the construction of programs that exhibit concurrency and
this requires support for:
• Construction of modules (software components).
• Creation and management of tasks.
• Handling of interrupts and devices.
• Intertask communication.
• Mutual exclusion.
• Exception handling.
4.1.1 SECURITY:
Security of a language is measured in terms of how effective the compiler and the run-time support
system are in detecting programming errors automatically. Obviously there are some errors which
cannot be detected by the compiler regardless of any features provided by the language: for example,
errors in the logical design of the program. The chance of such errors occurring is reduced if the
language encourages the programmer to write clear, well-structured, code. Language features that
assist in the detection of errors by the compiler include:
• good modularity support;
• enforced declaration of variables;
• good range of data types, including sub-range types;
• typing of variables; and
• unambiguous syntax.
It is not possible to test software exhaustively and yet a fundamental requirement of real-time systems
is that they operate reliably. The intrinsic security of a language is therefore of major importance for
the production of reliable programs. In real-time system development the compilation is often
performed on a different computer than the one used in the actual system, whereas run-time testing
has to be done on the actual hardware and, in the later stages, on the hardware connected to plant.
Run-time testing is therefore expensive and can interfere with the hardware development program.
Economically it is important to detect errors at the compilation stage rather than at run-time since the
earlier the error is detected the less it costs to correct it. Also, checks done at compilation time have no
run-time overheads.
4.1.2 READABILITY:
Readability is a measure of the ease with which the operation of a program can be understood without
resort to supplementary documentation such as flowcharts or natural language descriptions. The
emphasis is on ease of reading because a particular segment of code will be written only once but will
be read many times. The benefits of good readability are:
• Reduction in documentation costs: the code itself provides the bulk of the
documentation. This is particularly valuable in projects with a long life
expectancy in which inevitably there will be a series of modifications. Obtaining
up-to-date documentation and keeping documentation up to date can be very
difficult and costly.
• Easy error detection: clear readable code makes errors, for example logical
errors, easier to detect and hence increases reliability.
• Easy maintenance: it is frequently the case that when modifications to a program
are required the person responsible for making the modifications was not
involved in the original design - changes can only be made quickly and safely if
the operation of the program is clear.
4.1.3 FLEXIBILITY:
A language must provide all the features necessary for the expression of all the operations
required by the application without requiring the use of complicated constructions and tricks, or resort
to assembly level code inserts. The flexibility of a language is a measure of this facility. It is
particularly important in real-time systems since frequently non-standard I/O devices will have to be
controlled. The achievement of high flexibility can conflict with achieving high security. The
compromise that is reached in modern languages is to provide high flexibility and, through the
module or package concept, a means by which the low-level (that is, insecure) operations can be
hidden in a limited number of self-contained sections of the program.
4.1.4 SIMPLICITY:
In language design, as in other areas of design, the simple is to be preferred to the complex.
Simplicity contributes to security. It reduces the cost of training, it reduces the probability of
programming errors arising from misinterpretation of the language features, it reduces compiler size
and it leads to more efficient object code. Associated with simplicity is consistency: a good language
should not impose arbitrary restrictions (or relaxations) on the use of any feature of the language.
4.1.5 PORTABILITY:
Portability, while desirable as a means of speeding up development, reducing costs and
increasing security, is difficult to achieve in practice. Surface portability has improved with the
standardization agreements on many languages. It is often possible to transfer a program in source
code form from one computer to another and find that it will compile and run on the computer to
which it has been transferred. There are, however, still problems when the word lengths of the two
machines differ and there may also be problems with the precision with which numbers are
represented even on computers with the same word length.
Portability is more difficult for real-time systems as they often make use of specific features
of the computer hardware and the operating system. A practical solution is to accept that a real-time
system will not be directly portable, and to restrict the areas of non-portability by confining the use of
low-level features to a limited range of modules. Portability can be further
enhanced by writing the application software to run on a virtual machine, rather than for a specific
operating system.
4.1.6 EFFICIENCY:
In real-time systems, which must provide a guaranteed performance and meet specific time
constraints, efficiency is obviously important. In the early computer control systems great emphasis
was placed on the efficiency of the coding - both in terms of the size of the object code and in the
speed of operation - as computers were both expensive and, by today's standards, very slow. As a
consequence programming was carried out using assembly languages and frequently 'tricks' were
used to keep the code small and fast. The requirement for generating efficient object code was carried
over into the designs of the early real-time languages and in these languages the emphasis was on
efficiency rather than security and readability. The falling costs of hardware and the increase in the
computational speed of computers have changed the emphasis. Also in a large number of real-time
applications the concept of an efficient language has changed to include considerations of the security
and the costs of writing and maintaining the program; speed and compactness of the object code have
become, for the majority of applications, of secondary importance.
CONTROLCALCULATION;
NEXTSAMPLETIME:=TIME+SAMPLETIME;
IF KEYPRESSED() THEN EXIT;
END;
END;
END;
The meaning is now a little clearer, although the code is not easy to read because it is entirely
in upper case letters. We find it much easier to read lower case text than upper case and hence
readability is improved if the language permits the use of lower case text. It also helps if we can use a
different case (or some form of distinguishing mark) to identify the reserved words of the language.
Reserved words are those used to identify
particular language constructs, for example repetition statements, variable declarations, etc. In the
next version we use upper case for the reserved words and a mixture of upper and lower case for user-
defined entities.
BEGIN
NextSampleTime := Ticks() + SampleTime;
LOOP
WHILE Ticks() < NextSampleTime DO (* nothing *)
END;
Time := Ticks();
ControlCalculation;
NextSampleTime := Time + SampleTime;
IF KeyPressed() THEN EXIT;
END;
END;
END;
The program is now much easier to read in that we can easily and quickly pick out the reserved
words. It can be made even easier to read if the language allows embedded spaces and tab characters
to be used to improve the layout.
This is not, of course, strictly necessary as a value can always be assigned to a variable. In terms of
the security of a language it is important that the compiler checks that a variable is not used before it
has had a value assigned to it. The security of languages such as Modula-2 is enhanced by the
compiler checking that all variables have been given an initial value. However, a weakness of
Modula-2 is that variables cannot be given an initial value when they are declared but have to be
initialized explicitly using an assignment statement.
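For contrast, a language such as Python allows a value to be given at the point a variable is introduced, but detects use-before-assignment only at run time rather than at compilation — illustrating why compile-time checking is preferred for real-time work. The function names below are invented for illustration.

```python
# Initialization and use-before-assignment, sketched in Python. Unlike
# Modula-2, the initial value can be given where the name is introduced;
# but using a local before any assignment is caught only at run time.

def well_initialized():
    total = 0          # value given at the point of introduction
    for x in (1, 2, 3):
        total += x
    return total

def use_before_assignment():
    print(total)       # 'total' is assigned later, so this is an error
    total = 1

try:
    use_before_assignment()
    caught = False
except UnboundLocalError:   # detected at run time, not by a compiler
    caught = True
```

In a real-time system a run-time trap of this kind is exactly what cannot be afforded on the plant, which is why the text stresses detection at the compilation stage.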
CONSTANTS
Some of the entities referenced in a program will have constant values either because they are
physical or mathematical entities such as the speed of light or because they are a parameter which is
fixed for that particular implementation of the program, for example the number of control loops
being used or the bus address for an input or output device. It is always possible to provide constants
by initializing a variable to the appropriate quantity, but this has the disadvantage that it is insecure
in that the compiler cannot detect if a further assignment is made which changes the value of the
constant. It is also confusing to the reader since there is no indication which entities are constants and
which are variables (unless the initial assignment is carefully documented). Pascal provides a
mechanism for declaring constants, but since the constant declarations must precede the type
declarations, only constants of the predefined types can be declared. This is a severe restriction on the
constant mechanism. For example, it is not possible to do the following:
TYPE
AMotorState = (OFF, LOW, MEDIUM, HIGH);
CONST
MotorStop = AMotorState(OFF);
A further restriction in the constant declaration mechanism in Pascal is that the value of the constant
must be known at compilation time and expressions are not permitted in constant declarations. The
restriction on the use of expressions in constant declarations is removed in Modula-2 (experienced
assembler programmers will know the usefulness of being able to use expressions in constant
declarations).
For example, in Modula-2 the following are valid constant declarations:
CONST
message = 'a string of characters';
length = 1.6;
breadth = 0.5;
area = length * breadth;
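The same declarations transcribed into Python make the text's point about insecurity in the opposite direction: Python constants are constants by convention only, so nothing prevents a later assignment from silently changing them — precisely the weakness a CONST mechanism removes.

```python
# The Modula-2 constant declarations above, transcribed as a sketch in
# Python. Upper-case names signal "constant" by convention only: the
# language provides no mechanism to stop a later reassignment.

MESSAGE = 'a string of characters'
LENGTH = 1.6
BREADTH = 0.5
AREA = LENGTH * BREADTH    # an expression is allowed, as in Modula-2
```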
4.3 MODULARITY AND VARIABLES:
Scope and visibility:
The scope of a variable is defined as the region of a program in which the variable is
potentially accessible or modifiable. The regions in which it may actually be accessed or modified are
the regions in which it is said to be visible. Most languages provide mechanisms for controlling scope
and visibility. There are two general approaches: languages such as FORTRAN provide a single level
locality whereas the block-structured languages such as Modula-2 provide multilevel locality. In the
block-structured languages, entities which are declared within a block can only be referenced inside
that block. Blocks can be nested and the scope extends throughout any nested blocks. This is
illustrated in the example below, which shows the scope rules for a nested PROCEDURE in Modula-2.
MODULE ScopeExample1;
VAR
A, B: INTEGER;
PROCEDURE LevelOne;
VAR
B, C: INTEGER;
BEGIN
(* A, the local B, and C visible here *)
END (* LevelOne *);
BEGIN
(*
A and B visible here but not LevelOne.B and
LevelOne.C
*)
END ScopeExample1.
The scope of variables A and B declared in the main module ScopeExample1
extends throughout the program; that is, they are global variables.
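The scope rules of ScopeExample1 can be mirrored in Python, where a name redeclared inside a function shadows the global of the same name; the numeric values below are invented purely for illustration.

```python
# The Modula-2 scope rules above, sketched in Python: names bound at
# module level are global; a name rebound inside a function shadows the
# global of the same name within that function only.

A = 1          # global, visible everywhere
B = 2          # global, but shadowed inside level_one

def level_one():
    B = 20     # local B hides the global B here
    C = 30     # local only: not visible outside level_one
    return A + B + C   # A is the global; B and C are locals

inside = level_one()   # uses the locals: 1 + 20 + 30
outside = A + B        # the global B is untouched: 1 + 2
```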
4.5 DATA TYPES:
As we have seen above, the allocation of types is closely associated with the declaration of
entities. The allocation of a type defines the set of values that can be taken by an entity of that type
and the set of operations that can be performed on the entity. The richness of types supported by a
language and the degree of rigour with which type compatibility is enforced by the language are
important influences on the security of programs written in the language. Languages which rigorously
enforce type compatibility are said to be strongly typed; languages which do not enforce type
compatibility are said to be weakly typed. FORTRAN and BASIC are weakly typed languages: they
enforce some type checking; for example, the statements A$ = 25 or A = X$ + Y are not allowed in
BASIC, but they allow mixed integer and real arithmetic and provide implicit type changing in
arithmetic statements. Both languages support only a limited number of types.
An example of a language which is strongly typed is Modula-2. In addition to enforcing type
checking on standard types, Modula-2 also supports enumerated types. The enumerated type allows
programmers to define their own types in addition to using the predefined types. Consider a simple
motor speed control system which has four settings OFF, LOW, MEDIUM, HIGH and which is
controlled from a computer system. Using Modula-2 the programmer could make the declarations:
TYPE
AMotorState = (OFF, LOW, MEDIUM, HIGH);
VAR
motorSpeed: AMotorState;
The variable motorSpeed can be assigned only one of the values enumerated in
the TYPE definition statement. An attempt to assign any other value will be trapped
by the compiler; for example, such a statement will be flagged as an error.
If we contrast this with the way in which the system could be programmed using
FORTRAN we can see some of the protection which strong typing provides. In
ANSI FORTRAN integers must be used to represent the four states of the motor
control:
INTEGER OFF, LOW, MEDIUM, HIGH
DATA OFF/0/, LOW/1/, MEDIUM/2/, HIGH/3/
If the programmer is disciplined and only uses the defined integers to set MSPEED then the program
is clear and readable, but there is no mechanism to prevent direct assignment of any value to
MSPEED.
Hence the statements
MSPEED = 24
MSPEED = 150
would be considered as valid and would not be flagged as errors either by the compiler or by the run-
time system. The only way in which they could be detected is if the programmer inserted some code
to check the range of values before sending them to the controller. In FORTRAN a programmer-
inserted check would be necessary since the output of a value outside the range 0 to 3 may have an
unpredictable effect on the motor speed.
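The run-time equivalent of such a check can be sketched with Python's Enum class, which rejects values outside the enumeration much as the Modula-2 compiler does — though only when the program runs, not at compilation.

```python
# What the Modula-2 enumerated type gives at compile time can only be
# approximated at run time here: the Enum class rejects values outside
# the enumeration, much as the programmer-inserted FORTRAN range check
# described in the text would have to.
from enum import Enum

class AMotorState(Enum):
    OFF = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3

motor_speed = AMotorState.LOW      # a legal setting

try:
    motor_speed = AMotorState(24)  # the FORTRAN MSPEED = 24 mistake
    rejected = False
except ValueError:                 # trapped, though only at run time
    rejected = True
```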
4.6 EXCEPTION HANDLING:
One of the most difficult areas of program design and implementation is the handling of
errors, unexpected events (in the sense of not being anticipated and hence catered for at the design
stage) and exceptions which make the processing of data by the subsequent segments superfluous, or
possibly dangerous. The designer has to make decisions on such questions as: What errors are to be
detected? What sort of mechanism is to be used to do the detection? And what should be done when
an error is detected? Most languages provide some sort of automatic error detection mechanisms as
part of their run-time support system. Typically they trap errors such as an attempt to divide by zero,
arithmetic overflow, array bound violations, and sub-range violations; they may also include traps for
input/output errors. For many of the checks the compiler has to add code to the program; hence the
checks increase the size of the code and reduce the speed at which it executes. In most languages the
normal response when an error is detected is to halt the program and display an error message on the
user's terminal. In a development environment it may be acceptable for a program to halt following an
error; in a real-time system halting the program is not acceptable as it may compromise the safety of
the system. Every attempt must be made to keep the system running.
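As a minimal sketch of this keep-running policy, the C fragment below detects a divide-by-zero itself, records the exception and substitutes a caller-supplied safe value instead of halting; the names and the fallback scheme are assumptions for illustration.

```c
/* Detect the error, record it, substitute a safe value and carry on:
   the real-time response described above, rather than halting with an
   error message as a development environment might. */
double safe_divide(double num, double den, double fallback, int *err)
{
    if (den == 0.0) {
        *err = 1;             /* exception noted for later logging */
        return fallback;      /* keep the system running */
    }
    *err = 0;
    return num / den;
}
```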
4.7 LOW LEVEL FACILITIES:
In programming real-time systems we frequently need to manipulate directly the data in specific
registers in the computer system, for example in memory registers, CPU registers and registers in an
input/output device. In the older high-level languages, assembly-coded routines are used to do this.
Some languages provide extensions to avoid the use of assembly routines and these typically are of
the type found in many versions of BASIC. These take the following form:
PEEK (address) - returns as an INTEGER the contents of the location address.
POKE (address, value) - puts the INTEGER value in the location address.
It should be noted that on eight-bit computers the integer values must be in the range 0 to 255 and on
16 bit machines they can be in the range 0 to 65 535. For computer systems in which the input/output
devices are not memory mapped, for example Z80 systems, additional functions are usually provided
such as INP (address) and OUT (address, value). A slightly different approach has been adopted in
BBC BASIC which uses an 'indirection' operator. The indirection operator indicates that the variable
which follows it is to be treated as a pointer which contains the address of the operand rather than the
operand itself (the term indirection is derived from the indirect addressing mode in assembly
languages). Thus in BBC BASIC the following code
100 DACAddress = &FE60
120 ?DACAddress = &34
results in the hexadecimal number 34 being loaded into location FE60H; the indirection operator is
'?'. In some of the so-called Process FORTRAN languages and in CORAL and
RTL/2 additional features which allow manipulation of the bits in an integer variable are provided,
for example
SET BIT J (I),
IF BIT J (I) n1, n2 (where J refers to the bit position in variable I).
Also available are operations such as AND, OR, SLA, SRA, etc., which mimic the operations
available at assembly level. The weakness of implementing low-level facilities in this way is that all
type checking is lost and it is very easy to make mistakes. A much more secure method is to allow the
programmer to declare the address of the register or memory location and to be able to associate a
type with the declaration, for example a Modula-2 style declaration such as (the variable name here
is illustrative)
VAR dacRegister [0FE60H]: CHAR;
which declares a variable of type CHAR located at memory location 0FE60H.
Characters can then be written to this location by simple assignment.
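In C the same typed-location idea is usually written as a volatile pointer to a fixed address, for example #define DAC (*(volatile uint8_t *)0xFE60) followed by DAC = 0x34. Dereferencing a real device address cannot run in a self-contained example, so the sketch below substitutes a simulated 64K register file; the PEEK/POKE names mirror the BASIC functions described above and the address used is illustrative.

```c
#include <stdint.h>

/* Simulated 64K address space standing in for real memory-mapped I/O. */
static volatile uint8_t fake_memory[0x10000];

/* C analogues of the BASIC PEEK and POKE described above. */
uint8_t peek(uint16_t address)                { return fake_memory[address]; }
void    poke(uint16_t address, uint8_t value) { fake_memory[address] = value; }
```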
Modula-2 provides a low-level support mechanism through a simple set of primitives which have to
be encapsulated in a small nucleus coded in the assembly language of the computer on which the
system is to run. Access to the primitives is through a module SYSTEM which is known to the
compiler. SYSTEM can be thought of as the software bus linking the nucleus to the rest of the
software modules. SYSTEM makes available three data types, WORD, ADDRESS and PROCESS, and
six procedures, ADR, SIZE, TSIZE, NEWPROCESS, TRANSFER and IOTRANSFER. WORD is the
data type which specifies a variable which maps onto one unit of the specific computer storage. As
such the number of bits in a WORD will vary from implementation to implementation; for example,
on a PDP-11 implementation a WORD is 16 bits, but on a 68000 it would be 32 bits. ADDRESS
corresponds to the definition TYPE ADDRESS = POINTER TO WORD; that is, objects of type
ADDRESS are pointers to memory units and can be used to compute the addresses of memory
words. Objects of type PROCESS have associated with them storage for the volatile environment of
the particular computer on which Modula-2 is implemented; they make it possible to create
process (task) descriptors easily. Three of the procedures provided by SYSTEM are for address
manipulation:
FROM SYSTEM IMPORT ADR, SIZE, TSIZE;
ADR (v) returns the ADDRESS of variable v
SIZE (v) returns the SIZE of variable v in WORDs
TSIZE (t) returns the SIZE of any variable of type t in WORDs.
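An approximate C mapping of the three procedures (an illustrative analogy, not the Modula-2 semantics): ADR(v) corresponds to &v, SIZE(v) to sizeof v, and TSIZE(t) to sizeof(t); note that C reports sizes in bytes rather than machine WORDs.

```c
#include <stddef.h>
#include <stdint.h>

int32_t sample;                                         /* a variable to inspect */

int32_t *adr_sample(void)  { return &sample; }          /* ADR(sample)    */
size_t   size_sample(void) { return sizeof sample; }    /* SIZE(sample)   */
size_t   tsize_int32(void) { return sizeof(int32_t); }  /* TSIZE(type)    */
```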
In addition variables can be mapped onto specific memory locations. This facility can be used for
writing device driver modules in Modula-2. A combination of the low-level access facilities and the
module concept allows details of the hardware device to be hidden within a module with only the
procedures for accessing the module being made available to the end user.
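The module idea can be sketched in C by hiding the device register at file scope and exporting only the access procedures, so the hardware detail is invisible to the end user. The register is simulated here so the fragment can run, and all names are illustrative.

```c
#include <stdint.h>

/* Hidden hardware detail: not visible outside this 'module' (file). */
static volatile uint8_t dac_register;

/* The only operations made available to the end user. */
void    dac_write(uint8_t value) { dac_register = value; }
uint8_t dac_last_written(void)   { return dac_register; }
```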
4.8 COROUTINES:
In Modula-2 the basic form of concurrency is provided by coroutines. The two procedures
NEWPROCESS and TRANSFER exported by SYSTEM are defined as follows:
PROCEDURE NEWPROCESS (parameterlessProcedure: PROC;
workspaceAddress: ADDRESS;
workspaceSize: CARDINAL;
VAR coroutine: ADDRESS (* PROCESS *));
PROCEDURE TRANSFER (VAR source, destination: ADDRESS
(* PROCESS *));
Any parameterless procedure can be declared as a PROCESS. The procedure NEWPROCESS
associates with the procedure storage for the process parameters. The amount to be allocated depends
on the number and size of the variables local to the procedure forming the coroutine, and to the
procedures which it calls. Failure to allocate sufficient space will usually result in a stack overflow
error at run-time. The variable coroutine is initialized to the address which identifies the newly
created coroutine and is used as a parameter in calls to TRANSFER. The transfer of control
between coroutines is made using the standard procedure TRANSFER, which has two arguments of
type ADDRESS (PROCESS). The first is the calling coroutine and the second is the coroutine to
which control is to be transferred. The mechanism is illustrated in Example 5.13. In this example the
two parameterless procedures form the two coroutines which pass control to each other so that the
message
Coroutine one and Coroutine two
is printed out 25 times. At the end of the loop, Coroutine 2 passes control back
to Main Program.
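The NEWPROCESS/TRANSFER mechanism can be imitated in C with the POSIX ucontext primitives, used here as an assumed stand-in for the Modula-2 procedures: each context is given its own workspace, and swapcontext plays the role of TRANSFER. The 25-exchange structure follows the example described above; all names are illustrative.

```c
#define _XOPEN_SOURCE 600
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, co1, co2;     /* Main Program and two coroutines */
static char stack1[65536], stack2[65536]; /* workspace, cf. NEWPROCESS */
static int exchanges;

static void coroutine_one(void)
{
    for (;;) {
        printf("Coroutine one and ");
        swapcontext(&co1, &co2);          /* TRANSFER(one, two) */
    }
}

static void coroutine_two(void)
{
    for (;;) {
        printf("Coroutine two\n");
        if (++exchanges == 25)
            swapcontext(&co2, &main_ctx); /* pass control back to Main Program */
        else
            swapcontext(&co2, &co1);      /* TRANSFER(two, one) */
    }
}

int run_coroutines(void)                  /* returns the number of exchanges */
{
    exchanges = 0;
    getcontext(&co1);                     /* NEWPROCESS(one, stack1, ...) */
    co1.uc_stack.ss_sp = stack1;
    co1.uc_stack.ss_size = sizeof stack1;
    co1.uc_link = &main_ctx;
    makecontext(&co1, coroutine_one, 0);
    getcontext(&co2);                     /* NEWPROCESS(two, stack2, ...) */
    co2.uc_stack.ss_sp = stack2;
    co2.uc_stack.ss_size = sizeof stack2;
    co2.uc_link = &main_ctx;
    makecontext(&co2, coroutine_two, 0);
    swapcontext(&main_ctx, &co1);         /* start coroutine one */
    return exchanges;
}
```

Note how, as in Modula-2, each coroutine needs its own workspace: too small a stack produces exactly the overflow failure the text warns about.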
CONCURRENCY:
Wirth (1982) defined a standard module Processes which provides a higher-level mechanism than
co routines for concurrent programming. The module makes no assumption as to how the processes
(tasks) will be implemented; in particular it does not assume that the processes will be implemented
on a single processor.
UNIT -5&6
Operating Systems
Introduction, Real-Time Multi-Tasking OS, Scheduling Strategies, Priority Structures, Task
Management, Scheduler and Real-Time Clock Interrupt Handlers, Memory Management, Code
Sharing, Resource Control, Task Co-operation and Communication, Mutual Exclusion, Data Transfer,
Liveness, Minimum OS Kernel, Examples.
The operating system is constructed, in these cases, as a monolithic monitor. In single-job operating
systems access through the operating system is not usually enforced; however, it is good
programming practice and it facilitates portability since the operating system entry points remain
constant across different implementations. In addition to supporting and controlling the basic
activities, operating systems provide various utility programs, for example loaders, linkers,
assemblers and debuggers, as well as run-time support for high-level languages.
A general purpose operating system will provide some facilities that are not required in a
particular application, and to be forced to include them adds unnecessarily to the system overheads.
Usually during the installation of an operating system certain features can be selected or omitted. A
general purpose operating system can thus be 'tailored' to meet a specific application requirement.
Recently operating systems which provide only a minimum kernel or nucleus have become popular;
additional features can be added by the applications programmer writing in a high-level language.
This structure is shown in Figure 6.2. In this type of operating system the distinction between the
operating system and the application software becomes blurred. The approach has many advantages
for applications that involve small, embedded systems.
A real-time multi-tasking operating system has to support the resource sharing and the timing
requirements of the tasks. Its functions can be divided as follows:
Task management: the allocation of memory and processor time (scheduling) to tasks.
Memory management: control of memory allocation.
Resource control: control of all shared resources other than memory and CPU time.
Intertask communication and synchronization: provision of support mechanisms to provide safe
communication between tasks and to enable tasks to synchronize their activities.
5.3 SCHEDULING STRATEGIES:
If we consider the scheduling of time allocation on a single CPU there are two basic
strategies:
1. Cyclic.
2. Pre-emptive.
1. Cyclic
The first of these, cyclic, allocates the CPU to a task in turn. The task uses the CPU for as
long as it wishes. When it no longer requires it the scheduler allocates it to the next task in the list.
This is a very simple strategy which is highly efficient in that it minimizes the time lost in switching
between tasks. It is an effective strategy for small embedded systems for which the execution times
for each task run are carefully calculated (often by counting the number of machine instruction cycles
for the task) and for which the software is carefully divided into appropriate task segments. In
general this approach is too restrictive since it requires that the task units have similar execution
times. It is also difficult to deal with random events using this method.
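A cyclic executive of this kind reduces to a loop that hands the CPU to each task in turn and waits for it to finish. The sketch below, with illustrative task bodies, shows why every task gets the same number of runs per cycle and why one long-running task delays all the others.

```c
enum { CE_TASKS = 3 };

static int run_count[CE_TASKS];     /* how often each task has run */

static void task(int id)            /* stands in for one task segment */
{
    run_count[id]++;                /* the task runs to completion... */
}

void cyclic_executive(int cycles)
{
    for (int c = 0; c < cycles; c++)
        for (int t = 0; t < CE_TASKS; t++)
            task(t);                /* ...then the CPU passes to the next */
}
```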
2. Pre-emptive.
There are many pre-emptive strategies. All involve the possibility that a task will be
interrupted - hence the term pre-emptive - before it has completed a particular invocation. A
consequence of this is that the executive has to make provision to save the volatile environment for
each task, since at some later time it will be allocated CPU time and will want to continue from the
exact point at which it was interrupted. This process is called context switching and a mechanism for
supporting it is described below. The simplest form of pre-emptive scheduling is to use a time slicing
approach (sometimes called a round-robin method). Using this strategy each task is allocated a fixed
amount of CPU time - a specified number of ticks of the clock – and at the end of this time it is
stopped and the next task in the list is run. Thus each task in turn is allocated an equal share of the
CPU time. If a task completes before the end of its time slice the next task in the list is run
immediately.
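Time slicing can be sketched as below: at every clock tick the running task is pre-empted and the next task in the list takes over, so over any whole number of rounds the CPU time divides equally. This is a hedged illustration with assumed names; suspension and early completion are ignored.

```c
enum { RR_TASKS = 3 };

int next_task(int current)                 /* round-robin successor */
{
    return (current + 1) % RR_TASKS;
}

/* Simulate `ticks` clock ticks, counting slices given to each task. */
void round_robin(int ticks, int slices[RR_TASKS])
{
    int running = 0;
    for (int i = 0; i < RR_TASKS; i++)
        slices[i] = 0;
    for (int t = 0; t < ticks; t++) {
        slices[running]++;                 /* task uses its whole slice */
        running = next_task(running);      /* pre-empted at the tick */
    }
}
```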
The majority of existing RTOSs use a priority scheduling mechanism. Tasks are allocated a
priority level and at the end of a predetermined time slice the task with the highest priority of those
ready to run is chosen and is given control of the CPU. Note that this may mean that the task which is
currently running continues to run. Task priorities may be fixed - a static priority system - or may be
changed during system execution - a dynamic priority system. Dynamic priority schemes can increase
the flexibility of the system, for example they can be used to increase the priority of particular tasks
under alarm conditions. Changing priorities is, however, risky as it makes it much harder to predict
the behavior of the system and to test it. There is the risk of locking out certain tasks for long periods
of time. If the software is well designed and there is adequate computing power there should be no
need to change priorities - all the necessary constraints will be met. If it is badly designed and/or there
are inadequate computing resources then dynamic allocation of priorities will not produce a viable,
reliable system.
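The selection rule used by a static-priority scheduler, choose the highest-priority ready task (which may be the task already running), can be sketched as follows; field names and the larger-number-is-higher convention are illustrative.

```c
/* One entry per task; a larger number means higher priority here
   (the convention varies between systems). */
typedef struct {
    int priority;
    int ready;      /* nonzero if the task is ready to run */
} Task;

/* Index of the highest-priority ready task, or -1 if none is ready. */
int pick_task(const Task tasks[], int n)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (tasks[i].ready &&
            (best < 0 || tasks[i].priority > tasks[best].priority))
            best = i;
    return best;
}
```

A dynamic-priority scheme would simply rewrite the priority fields between calls; the selection rule itself is unchanged.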
5.4 PRIORITY STRUCTURES:
In a real-time system the designer has to assign priorities to the tasks in the system. The
priority will depend on how quickly a task will have to respond to a particular event. An event may
be some activity of the process or may be the elapsing of a specified amount of time. The tasks in a
real-time system are commonly arranged in three priority levels:
1. Interrupt level: at this level are the service routines for the tasks and devices which require very
fast response - measured in milliseconds. One of these tasks will be the real-time clock task and clock
level dispatcher.
2. Clock level: at this level are the tasks which require repetitive processing, such as the sampling and
control tasks, and tasks which require accurate timing. The lowest-priority task at this level is the
base level scheduler.
3. Base level: tasks at this level are of low priority and either have no deadlines to meet or are
allowed a wide margin of error in their timing. Tasks at this level may be allocated priorities or may
all run at a single priority level - that of the base level scheduler.
Interrupt level:
As we have already seen an interrupt forces a rescheduling of the work of the CPU and the system
has no control over the timing of the rescheduling. Because an interrupt-generated rescheduling is
outside the control of the system it is necessary to keep the amount of processing to be done by the
interrupt handling routine to a minimum. Usually the interrupt handling routine does sufficient
processing to preserve the necessary information and to pass this information to a further handling
routine which operates at a lower-priority level, either clock level or base level. Interrupt handling
routines have to provide a mechanism for task swapping, that is they have to save the volatile
environment.
Clock level:
One interrupt level task will be the real-time clock handling routine which will be entered at
some interval, usually determined by the required activation rate for the most frequently required
task. Typical values are 1 to 200 ms. Each clock interrupt is known as a tick and represents the
smallest time interval known to the system. The function of the clock interrupt handling routine is to
update the time-of-day clock in the system and to transfer control to the dispatcher. The scheduler
selects which task is to run at a particular clock tick. Clock level tasks divide into two categories:
1. CYCLIC: these are tasks which require accurate synchronization with the outside world.
2. DELAY: these tasks simply wish to have a fixed delay between successive repetitions or to delay
their activities for a given period of time.
Cyclic tasks:
The cyclic tasks are ordered in a priority which reflects the accuracy of timing required for the
task, those which require high accuracy being given the highest priority. Tasks of lower priority
within the clock level will have some jitter since they will have to await completion of the higher-
level tasks.
Delay tasks:
The tasks which wish to delay their activities for a fixed period of time, either to allow some
external event to complete (for example, a relay may take 20 ms to close) or because they only need
to run at certain intervals (for example, to update the operator display), usually run at the base level.
When a task requests a delay its status is changed from runnable to suspended and remains suspended
until the delay period has elapsed.
One method of implementing the delay function is to use a queue of task descriptors, say identified
by the name DELAYED. This queue is an ordered list of task descriptors, the task at the front of the
queue being that whose next running time is nearest to the current time.
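The DELAYED queue can be sketched in C as insertion into a singly linked list ordered by wake-up time, so the head is always the descriptor whose next running time is nearest the current time. Field names are illustrative.

```c
#include <stddef.h>

typedef struct DelayedTask {
    int wake_time;                 /* tick at which the task becomes runnable */
    struct DelayedTask *next;
} DelayedTask;

/* Insert t keeping the queue ordered by wake_time, earliest first. */
DelayedTask *delay_insert(DelayedTask *head, DelayedTask *t)
{
    DelayedTask **p = &head;
    while (*p != NULL && (*p)->wake_time <= t->wake_time)
        p = &(*p)->next;           /* walk past earlier wake-up times */
    t->next = *p;
    *p = t;
    return head;
}
```

At each tick the clock handler need only compare the current time with the head of the queue, which keeps the per-tick overhead small.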
Base level:
The tasks at the base level are initiated on demand rather than at some predetermined time interval.
The demand may be user input from a terminal, some process event or some particular requirement of
the data being processed. The way in which the tasks at the base level are scheduled can vary; one
simple way is to use time slicing on a round-robin basis. In this method each task in the runnable
queue is selected in turn and allowed to run until either it suspends or the base level scheduler is again
entered. For real-time work in which there is usually some element of priority this is not a particularly
satisfactory solution. It would not be sensible to hold up a task, which had been delayed waiting for a
relay to close but was now ready to run, in order to let the logging task run.
Most real-time systems use a priority strategy even for the base level tasks. This may be either
a fixed level of priority or a variable level. The difficulty with a fixed level of priority is in
determining the correct priorities for satisfactory operation; the ability to change priorities
dynamically allows the system to adapt to particular circumstances. Dynamic allocation of priorities
can be carried out using a high-level scheduler or can be done on an ad hoc basis from within
specific tasks. The high-level scheduler is an operating system task which is able to examine the use
of the system resources; it may for example check how long tasks have been waiting and increase the
priority of the tasks which have been waiting a long time. The difficulty with the high-level scheduler
is that the algorithms used can become complicated and hence the overhead in running can become
significant.
5.5 TASK MANAGEMENT:
The basic functions of the task management module or executive are:
1. To keep a record of the state of each task;
2. To schedule an allocation of CPU time to each task; and
3. To perform the context switch, that is to save the status of the task that is currently using
the CPU and restore the status of the task that is being allocated CPU time.
In most real-time operating systems the executive dealing with the task management functions is split
into two parts: a scheduler which determines which task is to run next and which keeps a record of
the state of the tasks, and a dispatcher which performs the context switch.
Task states:
With one processor only one task can be running at any given time and hence the other tasks
must be in some other state. The number of other states, the names given to the states, and the
transition paths between the different states vary from operating system to operating system. A
typical state diagram is given in Figure 6.1 and the various states are as follows (names in parentheses
are common alternatives):
• Active (running): this is the task which has control of the CPU. It will normally be the task with the
highest priority of the tasks which are ready to run.
• Ready (runnable, on): there may be several tasks in this state. The attributes of the task and the
resources required to run the task must be available for the task to be placed in the Ready state.
• Suspended (waiting, locked out, delayed): the execution of tasks placed in this state has been
suspended because the task requires some resource which is not available, or because the task is
waiting for some signal from the plant, for example input from the analog-to-digital converter, or
because the task is waiting for a period of time to elapse.
• Existent (dormant, off): the operating system is aware of the existence of this task, but the task has
not been allocated a priority and has not been made runnable.
• Non-existent (terminated): the operating system has not as yet been made aware of the existence of
this task, although it may be resident in the memory of the computer.
Task descriptor:
Information about the status of each task is held in a block of memory by the RTOS. This
block is referred to by various names: task descriptor (TD), process descriptor (PD), task control
block (TCB) or task data block (TDB). The information held in the TD will vary from system to
system, but will typically consist of the following:
• Task identification (ID);
• Task priority (P);
• Current state of task;
• Area to store volatile environment (or a pointer to an area for storing the volatile
environment); and
• Pointer to next task in a list.
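A TD holding the fields listed above might be declared as follows; the layout and type names are illustrative rather than those of any particular RTOS.

```c
#include <stddef.h>

typedef enum { NONEXISTENT, EXISTENT, READY, ACTIVE, SUSPENDED } TaskState;

typedef struct TaskDescriptor {
    int id;                          /* task identification (ID)          */
    int priority;                    /* task priority (P)                 */
    TaskState state;                 /* current state of the task         */
    void *volatile_env;              /* pointer to the area storing the
                                        volatile environment              */
    struct TaskDescriptor *next;     /* next task descriptor in a list    */
} TaskDescriptor;
```

The next pointer is what allows the scheduler to thread descriptors into lists such as the ready queue or the DELAYED queue.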
5.7 MEMORY MANAGEMENT:
Since the majority of control application software is static - the software is not dynamically
created or eliminated at run-time - the problem of memory management is simpler than for multi-
programming, on-line systems. Indeed, with the cost of computer hardware (both processors and
memory) reducing, many control applications use programs which are permanently resident in fast
access memory. With permanently resident software the memory can be divided as shown in Figure.
The user space is treated as one unit and the software is linked and loaded as a single program into
the user area. The information about the various tasks is conveyed to the operating system by means
of a create task statement; the exact form of the statement will
depend on the interface between the high-level language and the operating system. An alternative
arrangement is shown in Figure. The available memory is divided into predetermined segments and
the tasks are loaded individually into the various segments. The load operation would normally be
carried out using the command processor. With this type of system the entries in the TD (or the
operating system tables) have to be made from the console using a memory examine and change
facility.
Divided (partitioned) memory was widely used in many early real-time operating systems and
it was frequently extended to allow several tasks to share one
partition; the tasks were kept on the backing store and loaded into the appropriate partition when
required. There was of course a need to keep any tasks in which timing was crucial (hard time
constraint tasks) permanently in fast access memory; other tasks could be swapped between fast
memory and backing store. The difficulty with this method is, of course, in choosing the best mix of
partition sizes. The partition size and boundaries have to be determined at system generation.
5.8 CODE SHARING:
In many applications the same actions have to be carried out in several different tasks. In a
conventional program the actions would be coded as a subroutine and one copy of the subroutine
would be included in the program. In a multi-tasking system each task must have its own copy of the
subroutine or some mechanism must be provided to prevent one task interfering with the use of the
code by another task. The problems which can arise are illustrated in Figure 6.20. Two tasks share the
subroutine S. If task A is using the subroutine but before it finishes some event occurs which causes a
rescheduling of the tasks and task B runs and uses the subroutine, then when a return is made to task
A, although it will begin to use subroutine S again at the correct place, the values of locally held data
will have been changed and will reflect the information processed within the subroutine by task B.
Two methods can be used to overcome this problem:
• serially reusable code; and
• re-entrant code.
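The interference problem of Figure 6.20 can be demonstrated in C: the first pair of routines keeps the subroutine's working value in static storage, so when task B uses the routine between task A's calls, A's data is overwritten; the re-entrant form instead keeps the value in a context block owned by each caller. All names are illustrative.

```c
/* Subroutine S with locally held data in static storage (not re-entrant). */
static int s_local;

void s_start(int value) { s_local = value; }  /* a task begins using S   */
int  s_resume(void)     { return s_local; }   /* ...and later resumes it */

/* Re-entrant form: each task supplies its own context block, so the
   working data lives on the caller's side rather than in shared code. */
typedef struct { int value; } SContext;

void sr_start(SContext *c, int value) { c->value = value; }
int  sr_resume(const SContext *c)     { return c->value; }
```

Serially reusable code solves the same problem differently, by making tasks take turns: the shared static data is protected so only one task can be inside S at a time.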