MODULE – 1
Introduction to Real-Time Systems: Historical background, Elements of a Computer Control
System, RTS- Definition, Classification of Real-time Systems, Time Constraints, Classification of
Programs.
Concepts of Computer Control: Introduction, Sequence Control, Loop Control, Supervisory
Control, Centralized Computer Control, Hierarchical Systems.
The first digital computers developed specifically for real-time control were for airborne
operation, and in 1954 a Digitrac digital computer was successfully used to provide an automatic
flight and weapons control system. The application of digital computers to industrial control began
in the late 1950s.
The first industrial installation of a computer system was in September 1958, when the
Louisiana Power and Light Company installed a Daystrom computer system for plant
monitoring at their power station in Sterling, Louisiana.
The first industrial computer control installation was made by the Texaco Company, who
installed an RW-300 (Ramo-Wooldridge Company) system at their Port Arthur refinery in
Texas.
During 1957-8 the Monsanto Chemical Company, in co-operation with the Ramo-
Wooldridge Company, studied the possibility of using computer control and in October
1958 decided to implement a scheme on the ammonia plant at Luling, Louisiana.
The same system was installed by the B.F. Goodrich Company on their acrylonitrile plant at
Calvert City, Kentucky, in 1959-60.
The first direct digital control (DDC) computer system was the Ferranti Argus 200 system
installed in November 1962 at the ICI ammonia-soda plant at Fleetwood, Lancashire.
The computers used in the early 1960s combined magnetic core memories and drum stores, the
drum eventually giving way to hard disk drives. They included the General Electric 4000 series,
IBM 1800, CDC 1700, Foxboro Fox 1 and 1A, the SDS and Xerox SIGMA Series, Ferranti Argus
and Elliot Automation 900 series. The attempt to resolve some of the problems of the early
machines led to an increase in the cost of systems, with the consequence that further problems
were generated, particularly in the development of the software. The increase in the size of the
programs meant that not all the code could be stored in core memory; provision had to be made
for the swapping of code between the drum memory and core.
MITMysore | Dept of Electronics and Communication Engineering
The solution appeared to lie in the development of general-purpose real-time operating systems
and high-level languages.
A common feature of real-time systems and embedded computers is that the computer is connected
to the environment within which it is working by a wide range of interface devices, and receives and
sends a variety of stimuli, for example the plant input, plant output and communication tasks
shown in the figure.
The other category is interactive, in which the relationship between the actions in the computer and the
system is much more loosely defined. The control tasks, although not obviously and directly
connected to the external environment, also need to operate in real -time, since time is usually
involved in determining the parameters of the algorithms.
Clock – based tasks are typically referred to as cyclic or periodic tasks where the terms can imply
either that the task is to run once per time period T (or cycle time T), or is to run at exactly T unit
intervals. The completion of the operations within the specified time is dependent on the number
of operations to be performed and the speed of the computer. Synchronization is usually obtained
by adding a clock to the computer system - referred to as a real-time clock - and using signals from
this clock to interrupt the operations of the computer at some predetermined fixed time interval.
For example in process plant operation, the computer may carry out the plant input, plant output
and control tasks in response to the clock interrupt or, if the clock interrupt has been set at a faster
rate than the sampling rate, it may count each interrupt until it is time to run the tasks. In larger
systems the tasks may be subdivided into groups for controlling different parts of the plant and
these may need to run at different sampling rates. A task or process comprises some code, its
associated data and a control block data structure which the operating system uses to define and
manipulate the task.
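A minimal sketch of such a clock-based task loop follows (in Python, with illustrative task names; the real-time clock interrupt is approximated here by sleeping until the next tick):

```python
import time

def run_periodic(task, period_s, n_cycles):
    """Run `task` once per period T, sleeping until the next tick.

    Sleeping until an absolute deadline, rather than for a fixed
    delay, stops timing error accumulating across cycles.
    """
    next_tick = time.monotonic()
    for _ in range(n_cycles):
        task()  # e.g. plant input, control calculation, plant output
        next_tick += period_s
        time.sleep(max(0.0, next_tick - time.monotonic()))

# Record the times at which the "task" actually ran.
samples = []
run_periodic(lambda: samples.append(time.monotonic()), 0.01, 5)
```

A real system would hang the loop body off a hardware clock interrupt rather than a sleep, but the structure - one task body released once per cycle time T - is the same.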
Events occur at non-deterministic intervals, and event-based tasks are frequently referred to as
aperiodic tasks. Such tasks may have deadlines expressed in terms of start times or finish times
(or even both).
Examples: turning off a pump or closing a valve when the level of a liquid in a tank reaches a
predetermined value; or switching a motor off in response to the closure of a microswitch
indicating that some desired position has been reached. Event based systems are also used
extensively to indicate alarm conditions and initiate alarm actions.
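An event-based task of this kind can be sketched as a handler that fires once when the monitored level first crosses its threshold (the name `make_level_monitor` and the 80.0 threshold are illustrative, not from the text):

```python
def make_level_monitor(limit, close_valve):
    """Return a handler called on each level reading; it performs the
    event action exactly once, when the limit is first crossed."""
    tripped = {"value": False}

    def on_reading(level):
        if level >= limit and not tripped["value"]:
            tripped["value"] = True
            close_valve()
        return tripped["value"]

    return on_reading

actions = []
monitor = make_level_monitor(80.0, lambda: actions.append("valve closed"))
for level in (40.0, 60.0, 81.0, 95.0):
    monitor(level)
```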
Interactive systems probably represent the largest class of real-time systems and cover such
systems as automatic bank tellers; reservation systems for hotels, airlines and car rental companies;
computerized tills, etc. The real-time requirement is usually expressed in terms such as 'the
average response time must not exceed ...'.
For example, an automatic bank teller system might require an average response time not
exceeding 20 seconds. Superficially this type of system seems similar to the event-based system in
that it apparently responds to a signal from the plant (in this case usually a person), but it is
different because it responds at a time determined by the internal state of the computer and without
any reference to the environment. An automatic bank teller does not know that you are about to
miss a train, or that it is raining hard and you are getting wet: its response depends on how busy the
communication lines and central computers are (and of course the state of your account).
Many interactive systems give the impression that they are clock based in that they are capable of
displaying the date and time; they do indeed have a real-time clock which enables them to keep
track of time.
A typical example of a hard real-time control system is the temperature control loop of the hot-air
blower system described above. In control terms, the temperature loop is a sampled data system.
Design of a suitable control algorithm for this system involves the choice of the sampling interval
Ts. If we assume that a suitable sampling interval is 10 ms, then at 10 ms intervals the input value
must be read, the control calculation carried out and the output value calculated, and the output
value sent to the heater drive.
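The 10 ms cycle can be sketched as follows; the read/compute/write callables are placeholders, and the deadline check simply reports whether the work fitted inside the sampling interval:

```python
import time

TS = 0.010  # the 10 ms sampling interval assumed in the text

def control_cycle(read_input, compute, write_output, deadline=TS):
    """One hard real-time cycle: read, calculate, output within Ts."""
    start = time.monotonic()
    output = compute(read_input())
    write_output(output)
    # A hard real-time system must treat an overrun as a failure.
    return (time.monotonic() - start) <= deadline

outputs = []
met = control_cycle(lambda: 42.0,        # read the input value
                    lambda x: 0.5 * x,   # control calculation
                    outputs.append)      # send to the heater drive
```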
As an example of hard time constraints associated with event-based tasks let us assume that the
hot-air blower is being used to dry a component which will be damaged if exposed to temperatures
greater than 50°C for more than 10 seconds. Allowing for the time taken for the air to travel from
the heater to the outlet and the cooling time of the heater element - and for a margin of safety - the
alarm response requirement may be, say, that over-temperature should be detected and the heater
switched off within seven seconds of the over temperature occurring. The general form of this type
of constraint is that the computer must respond to the event within some specified maximum time.
An automatic bank teller provides an example of a system with a soft time constraint. A typical
system is event initiated in that it is started by the customer placing their card in the machine. The
time constraint on the machine responding will be specified in terms of an average response time
of, say, 10 seconds, with the average being measured over a 24 hour period. (Note that if the
system has been carefully specified there will also be a maximum time; say 30 seconds, within
which the system should respond.) The actual response time will vary: if you have used such a
system you will have learned that the response time obtained between 12 and 2 p.m. on a Friday is
very different from that at 10 a.m. on a Sunday.
A hard time constraint obviously represents a much more severe constraint on the performance of
the system than a soft time constraint and such systems present a difficult challenge both to
hardware and to software designers. Most real-time systems contain a mixture of activities that can
be classified as clock based, event based, and interactive with both hard and soft time constraints.
For some systems and tasks the timing constraints may be combined in some form or other, or
relaxed in some way.
The importance of separating the various activities carried out by computer control systems
into real-time and non-real-time tasks, and in subdividing real-time tasks into the two different
types, arises from the different levels of difficulty of designing and implementing the different
types of computer program. Experimental studies have shown clearly that certain types of
program, particularly those involving real time and interface operations, are substantially more
difficult to construct than, for instance, standard data processing programs (Shooman, 1983;
Pressman, 1992). The division of software into small, coherent modules is an important design
technique, and one of the guidelines for module division that we introduce is to put activities with
different types of time constraints into separate modules.
Programs can be classified as:
Sequential;
Multi-tasking; and
Real-time.
The definitions are based on the kind of arguments which would have to be made in order to
verify - that is, to develop a formal proof of correctness for - programs of each type.
1.5.1 SEQUENTIAL:
In classical sequential programming actions are strictly ordered as a time sequence: the behavior
of the program depends only on the effects of the individual actions and their order; the time taken
to perform the action is not of consequence. Verification, therefore, requires two kinds of
argument:
1.5.2 MULTI-TASKING:
A multi-task program differs from the classical sequential program in that the actions it is required
to perform are not necessarily disjoint in time; it may be necessary for several actions to be
performed in parallel. Note that the sequential relationships between the actions may still be
important. Such a program may be built from a number of parts (processes or tasks are the names
used for the parts) which are themselves partly sequential, but which are executed concurrently and
which communicate through shared variables and synchronization signals.
Verification requires the application of arguments for sequential programs with some additions.
The tasks (processes) can be verified separately only if the constituent variables of each task
(process) are distinct. If the variables are shared, the potential concurrency makes the effect of
the program unpredictable (and hence not capable of verification) unless there is some further rule
that governs the sequencing of the several actions of the tasks (processes). The task can proceed at
any speed: the correctness depends on the actions of the synchronizing procedure.
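The 'further rule that governs the sequencing' is in practice a synchronization primitive such as a mutual-exclusion lock. A small sketch, with Python threads standing in for tasks:

```python
import threading

counter = 0                 # the shared variable
lock = threading.Lock()     # the rule governing access to it

def task(n):
    global counter
    for _ in range(n):
        with lock:          # only one task may touch counter at a time
            counter += 1

threads = [threading.Thread(target=task, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the lock the final value is deterministically 40 000 whatever interleaving the scheduler chooses; without it the read-modify-write sequences of the four tasks could overlap and the result would be unpredictable, which is exactly why such programs cannot be verified task by task.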
1.5.3 REAL-TIME:
A real-time program differs from the previous types in that, in addition to its actions not
necessarily being disjoint in time, the sequence of some of its actions is not determined by the
designer but by the environment - that is, by events occurring in the outside world, which occur in
real time.
A real-time program can still be divided into a number of tasks but communication between the
tasks cannot necessarily wait for a synchronization signal: the environment task cannot be
delayed. (Note that in process control applications the main environment task is usually that of
keeping real time, that is a real-time clock task. It is this task which provides the timing for the
scanning tasks which gather information from the outside world about the process.) In real-time
programs, in contrast to the two previous types of program, the actual time taken by an action is an
essential factor in the process of verification. We shall assume that we are concerned with real-
time software and references to sequential and multi-tasking programs should be taken to imply
that the program is real time. Non-real-time programs will be referred to as standard programs.
As a consequence, real-time features had to be built into the operating system which was written in
the assembly language of the machine by teams of specialist programmers. The cost of producing
such operating systems was high and they had therefore to be general purpose so that they could
be used in a wide range of applications in order to reduce the unit cost of producing them. These
operating systems could be tailored, that is they could be reassembled to exclude or include certain
features, for example to change the number of tasks which could be handled, or to change the
number of input/output devices and types of device. Such changes could usually only be made by
the supplier.
Computers are now used in so many different ways that a whole book could be filled simply by
describing the various applications. However, when we examine the applications closely we find that there are
many common features. The basic features of computer control systems are illustrated in the
following sections using examples drawn from industrial process control. In this field applications
are typically classified under the following headings:
Batch;
Continuous; and
Laboratory systems.
The categories are not mutually exclusive: a particular process may involve activities which fall
into more than one of the above categories; they are, however, useful for describing the general
character of a particular process.
1.6.1 BATCH:
The term batch is used to describe processes in which a sequence of operations is carried out to
produce a quantity of a product - the batch - and in which the sequence is then repeated to produce
further batches. The specification of the product or the exact composition may be changed between
the different runs.
A typical example of batch production is rolling of sheet steel. An ingot is passed through the
rolling mill to produce a particular gauge of steel; the next ingot may be either of a different
composition or rolled to a different thickness and hence will require different settings of the rolling
mill.
An important measure in batch production is set-up time (or change-over time), that is, the time
taken to prepare the equipment for the next production batch. This is wasted time in that no output
is being produced; the ratio between operation time (the time during which the product is being
produced) and set-up time is important in determining a suitable batch size.
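The ratio can be made concrete with a small calculation (the figures are invented for illustration):

```python
def utilisation(setup_time, time_per_unit, batch_size):
    """Fraction of total time spent actually producing output."""
    operation_time = time_per_unit * batch_size
    return operation_time / (setup_time + operation_time)

# 2 h set-up, 0.1 h per unit: small batches waste most of the time.
small = utilisation(2.0, 0.1, 10)    # 1 h production in 3 h total
large = utilisation(2.0, 0.1, 100)   # 10 h production in 12 h total
```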
In mechanical production the advent of the NC (Numerically Controlled) machine tool which can
be set up in a much shorter time than the earlier automatic machine tool has led to a reduction in
the size of batch considered to be economic.
1.6.2 CONTINUOUS:
The term continuous is used for systems in which production is maintained for long periods of
time without interruption, typically over several months or even years. An example of a continuous
system is the catalytic cracking of oil in which the crude oil enters at one end and the various
products - fractionates – are removed as the process continues. The ratio of the different fractions
can be changed but this is done without halting the process.
Continuous systems may produce batches, in that the product composition may be changed from
time to time, but they are still classified as continuous since the change in composition is made
without halting the production process.
A problem which occurs in continuous processes is that during change-over from one
specification to the next, the output of the plant is often not within the product tolerance and must
be scrapped. Hence it is financially important that the change be made as quickly and smoothly as
possible.
For example, in the baking industry bread dough is produced in batches but continuous ovens are
frequently used to bake it whereby the loaves are placed on a conveyor which moves slowly
through the oven. An important problem in mixed mode systems, that is systems in which batches
are produced on a continuous basis, is the tracking of material through the process; it is obviously
necessary to be able to identify a particular batch at all times.
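Batch tracking reduces to keeping an up-to-date map from batch identifier to plant position; a minimal sketch (the identifiers and unit names are invented):

```python
positions = {}  # batch identifier -> current plant unit

def move(batch_id, unit):
    """Record that a batch has arrived at a plant unit."""
    positions[batch_id] = unit

def locate(batch_id):
    """Answer 'where is this batch now?' at any time."""
    return positions.get(batch_id)

move("B-101", "mixer")
move("B-102", "mixer")
move("B-101", "oven")    # B-101 moves on; B-102 is still mixing
```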
Laboratory-based systems are frequently of the operator-initiated type in that the computer is used
to control some complex experimental test or some complex equipment used for routine testing. A
typical example is the control and analysis of data from a vapour phase chromatograph.
Another example is the testing of an audiometer, a device used to test hearing. The
audiometer has to produce sound levels at different frequencies; it is complex in that the actual
level produced is a function of frequency since the sensitivity of the human ear varies with
frequency. Each audiometer has to be tested against a sound-level meter and a test certificate
produced. This is done by using a sound-level meter connected to a computer and using the output
from the computer to drive the audiometer through its frequency range. The results printed out
from the test computer provide the test certificate.
As with attempts to classify systems as batch or continuous, it can be difficult at times to
classify a system solely as laboratory-based. The production of steel using the electric arc furnace
involves complex calculations to determine the appropriate mix of scrap, raw materials and
alloying additives. As the melt progresses samples of the steel are taken and analyzed using a
spectrometer. Typically this instrument is connected to a computer which analyses the results and
calculates the necessary adjustment to the additives. The computer used may well be the computer
which is controlling the arc furnace itself.
In whatever way the application is classified the activities being carried out will include:
Data acquisition;
Sequence control;
Loop control (DDC);
Supervisory control;
The objectives of using computer control include:
Efficiency of operation;
Ease of operation;
Safety;
Improved products;
Reduction in waste;
Reduced environmental impact; and
A reduction in direct labour.
1.6.4 GENERAL EMBEDDED SYSTEMS:
In the general range of systems which use embedded computers – from domestic appliances,
through hi-fi systems, automobile management systems, intelligent instruments, active control of
structures, to large flexible manufacturing systems and aircraft control systems - we will find that
the activities that are carried out in the computer and the objectives of using a computer are similar
to those listed above. The major differences will lie in the balance between the different activities,
the time-scales involved, and the emphasis given to the various objectives.
A chemical is produced by the reaction of two other chemicals at a specified temperature. The
chemicals are mixed together in a sealed vessel (the reactor) and the temperature of the reaction is
controlled by feeding hot or cold water through the water jacket which surrounds the vessel. The
water flow is controlled by adjusting valves C and D. The flow of material into and out of the
vessel is regulated by the valves A, Band E. The temperature of the contents of the vessel and the
pressure in the vessel are monitored.
When implemented by computer all of the above actions and timings would be based upon
software. For a large chemical plant such sequences can become very lengthy and intricate and, to
ensure efficient operating, several sequences may take place in parallel.
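Such a sequence is naturally expressed in software as an ordered table of steps. In the sketch below the valve names follow the text, but the ordering (charge, stir, heat, empty) is invented for illustration:

```python
def run_sequence(steps, act):
    """Execute (action, target) steps in strict order, returning a log."""
    log = []
    for action, target in steps:
        act(action, target)          # drive the actual plant hardware
        log.append(f"{action} {target}")
    return log

steps = [("open", "A"), ("open", "B"),   # charge the reactor
         ("start", "stirrer"),
         ("open", "C"),                  # hot water to the jacket
         ("close", "C"),
         ("open", "E")]                  # empty the vessel
log = run_sequence(steps, lambda a, t: None)
```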
The processes carried out in the single reactor vessel shown in Figure 1.3 are often only part of a
larger process as is shown in Figure 1.4. In this plant two reactor vessels (R1 and R2) are used
alternately, so that the processes of preparing for the next batch and cleaning up after a batch can
be carried out in parallel with the actual production. Assuming that R1 has been filled with the
mixture and the catalyst, and the reaction is in progress, there will be for R1: loop control of the
temperature and pressure; operation of the stirrer; and timing of the reaction (and possibly some in-
process measurement to determine the state of the reaction). In parallel with this, vessel R2 will be
cleaned - the wash down sequence - and the next batch of raw material will be measured and
mixed in the mixing tank.
Meanwhile, the previous batch will be thinned down and transferred to the appropriate storage tank
and, if there is to be a change of product or a change in product quality, the thin-down tank will be
cleaned. Once this is done the next batch can be loaded into R2 and then, assuming that the
reaction in R1 is complete, the contents of R1 will be transferred to the thin-down tank and the
wash down procedure for R1 initiated. The various sequences of operations required can become
complex and there may also be complex decisions to be made as to when to begin a sequence. The
sequence initiation may be left to a human operator or the computer may be programmed to
supervise the operations (supervisory control - see below). The decision to use human or computer
supervision is often very difficult to make.
The aim is usually to minimize the time during which the reaction vessels are idle since this is
unproductive time. The calculations needed and the evaluation of options can be complex,
particularly if, for example, reaction times vary with product mix, and therefore it would be
expected that decisions made using computer supervisory control would give the best results.
However, it is difficult using computer control to obtain the same flexibility that can be achieved
by an experienced human operator.
In most batch systems there is also, in addition to the sequence control, some continuous feedback
control: for example, control of temperatures, pressures, flows, speeds or currents. In process
control terminology continuous feedback control is referred to as loop control or modulating
control and in modern systems this would be carried out using DDC.
A similar mixture of sequence, loop and supervisory control can be found in continuous systems.
Consider the float glass process shown in Figure 1.5. The raw material - sand, powdered glass and
fluxes (the frit) - is mixed in batches and fed into the furnace. It melts rapidly to form a molten
mixture which flows through the furnace. As the molten glass moves through the furnace it is
refined and then floated on a bath of molten tin to form a continuous ribbon.
The continuous ribbon passes into the lehr where it is annealed and where temperature control is
again required. It then passes under the cutters which cut it into sheets of the required size;
automatic stackers then lift the sheets from the production line. The whole of this process is
controlled by several computers and involves loop, sequence and supervisory control. Sequence
control systems can vary from the large - the start-up of a large boiler turbine unit in a power
station when some 20000 operations and checks may have to be made - to the small - starting a
domestic washing machine. Most sequence control systems are simple and frequently have no loop
control. They are systems which in the past would have been controlled by relays, discrete logic, or
integrated circuit logic units. Examples are simple presses where the sequence might be: locate
blank, spray lubricant, lower press, raise press, remove article, spray lubricant. Special computer
systems known as programmable logic controllers (PLCs) have been developed for such applications.
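The press example can be sketched in PLC style as a cyclic step table scanned repeatedly; the advance-on-completion rule is the essence of such controllers, though the code form here is ours, not a real PLC language:

```python
PRESS_SEQUENCE = ["locate blank", "spray lubricant", "lower press",
                  "raise press", "remove article", "spray lubricant"]

def plc_scan(sequence, step, step_done):
    """One scan: advance to the next step when the current one has
    completed, wrapping round ready for the next article."""
    return (step + 1) % len(sequence) if step_done(sequence[step]) else step

step, trace = 0, []
for _ in range(len(PRESS_SEQUENCE)):   # every step completes at once here
    trace.append(PRESS_SEQUENCE[step])
    step = plc_scan(PRESS_SEQUENCE, step, lambda s: True)
```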
In direct digital control (DDC) the computer is in the feedback loop, as shown in Figure 1.6.
The system shown in Figure 1.6 is assumed to involve several control loops, all of which are
handled within one computer.
A consequence of the computer being in the feedback loop is that it forms a critical
component in terms of the reliability of the system, and hence great care is needed to ensure that,
in the event of failure or malfunction of the computer, the plant remains in a safe condition. The
advantages claimed for DDC include:
1. Cost - a single digital computer can control a large number of loops. In the early days the
break-even point was between 50 and 100 loops, but now with the introduction of
microprocessors a single-loop DDC unit can be cheaper than an analog unit.
2. Performance - digital control offers simpler implementation of a wide range of control
algorithms, improved controller accuracy and reduced drift.
3. Safety - modern digital hardware is highly reliable with a long mean time between failures
and hence can improve the safety of systems. However, the software used in
programmable digital systems may be much less reliable than the hardware.
The development of integrated circuits and the microprocessor has ensured that in terms of cost
the digital solution is now cheaper than the analog. Single-loop controllers used as stand-alone
controllers are now based on digital techniques and contain one or more microprocessors.
PID CONTROL:
The PID control algorithm has the general form
m(t) = Kp [e(t) + (1/Ti) ∫₀ᵗ e(t)dt + Td de(t)/dt]
where e(t) = r(t) - c(t) and c(t) is the measured variable,
r(t) is the reference value or set point,
e(t) is the error;
Kp is the overall controller gain;
Ti is the integral action time; and
Td is the derivative action time.
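A discrete-time sketch of this algorithm, as it might run at each sampling instant (rectangular-rule integration and a backward-difference derivative are one common choice, not the only one):

```python
def make_pid(kp, ti, td, ts):
    """Return a PID step function sampled every ts seconds."""
    state = {"integral": 0.0, "prev_error": 0.0}

    def step(r, c):
        e = r - c                                  # error e(t)
        state["integral"] += e * ts                # approximates the integral
        derivative = (e - state["prev_error"]) / ts
        state["prev_error"] = e
        return kp * (e + state["integral"] / ti + td * derivative)

    return step

pid = make_pid(kp=2.0, ti=10.0, td=0.0, ts=0.01)
m = pid(r=1.0, c=0.0)   # proportional term 2.0 plus a small integral term
```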
For a wide range of industrial processes it is difficult to improve on the control performance that
can be obtained by using either PI or PID control (except at considerable expense), and it is for this
reason that the algorithms are widely used. For the majority of systems PI control is all that is
necessary. Using a control signal that is made proportional to the error between the desired value
of an output and the actual value of the output is an obvious and (hopefully) a reasonable strategy.
Choosing the value of Kp involves a compromise: a high value of Kp gives a small steady-state
error and a fast response, but the response will be oscillatory and may be unacceptable in
many applications; a low value gives a slow response and a large steady-state error. By adding the
integral action term the steady-state error can be reduced to zero since the integral term, as its
name implies, integrates the error signal with respect to time. For a given error value the rate at
which the integral term increases is determined by the integral action time Ti. The major advantage
of incorporating an integral term arises from the fact that it compensates for changes that occur in
the process being controlled.
A purely proportional controller operates correctly only under one particular set of process
conditions: changes in the load on the process or some environmental condition will result in a
steady-state error; the integral term compensates for these changes and reduces the error to zero.
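The steady-state argument can be checked numerically on a toy first-order plant with a constant load disturbance (the plant model and gains are invented for illustration):

```python
def simulate(controller, n=20_000, ts=0.01, tau=1.0, load=-0.5):
    """Euler simulation of a first-order plant under feedback control;
    returns the final error for a set point of 1.0."""
    c = 0.0
    for _ in range(n):
        u = controller(1.0 - c)                # control from the error
        c += (ts / tau) * (u + load - c)       # plant plus load change
    return 1.0 - c

integral = 0.0
def pi_controller(e):
    global integral
    integral += e * 0.01
    return 2.0 * e + 1.0 * integral

p_error = simulate(lambda e: 2.0 * e)          # proportional only
pi_error = simulate(pi_controller)             # proportional + integral
```

The proportional-only loop settles with a residual error (0.5 with these numbers) because a non-zero error is needed to hold the control signal against the load; the integral term winds up until that error is driven to zero.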
For a few processes which are subjected to sudden disturbances the addition of the derivative term
can give improved performance. Because derivative action produces a control signal that is related
to the rate of change of the error signal, it anticipates the error and hence acts to reduce the error
that would otherwise arise from the disturbance.
Dept. of ECE, GSSSIETW, Mysuru
Real Time Systems (15EC743)
In fact, because the PID controller copes perfectly adequately with 90% of all control problems,
it provides a strong deterrent to the adoption of new control system design techniques. DDC may
be applied either to a single-loop system implemented on a small microprocessor or to a large
system involving several hundred loops. The loops may be cascaded, that is with the output or
actuation signal of one loop acting as the set point for another loop, signals may be added together
(ratio loops) and conditional switches may be used to alter signal connections.
A typical industrial system is shown in Figure 1.7. This is a steam boiler control system. The steam
pressure is controlled by regulating the supply of fuel oil to the burner, but in order to comply with
the pollution regulations a particular mix of air and fuel is required. We are not concerned with
how this is achieved but with the elements which are required to implement the chosen control
system.
The steam pressure control system generates an actuation signal which is fed to an auto/manual
bias station. If the station is switched to auto then the actuation signal is transmitted; if it is in
manual mode a signal which has been entered manually (say, from a keyboard) is transmitted. The
signal from the bias station is connected to two units, a high signal selector and a low signal
selector each of which has two inputs and one output. The signal from the low selector provides
the set point for the DDC loop controlling the oil flow, the signal from the high selector provides
the set point for the air flow controller (two cascade loops). A ratio unit is installed in the air flow
measurement line.
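The selector arrangement can be sketched as follows. We assume, as in the usual cross-limiting scheme, that each selector's second input is the other loop's flow measurement; the text does not spell this pairing out:

```python
def selectors(demand, oil_flow, air_flow):
    """Low selector limits the oil set point, high selector raises the
    air set point, so air always leads oil on a demand change."""
    oil_sp = min(demand, air_flow)   # low signal selector
    air_sp = max(demand, oil_flow)   # high signal selector
    return oil_sp, air_sp

# Demand rises from 0.5 to 0.8: the air set point rises at once, but
# the oil set point is held back until the measured air flow catches up.
oil_sp, air_sp = selectors(0.8, oil_flow=0.5, air_flow=0.5)
```

On a falling demand the roles reverse: the oil set point drops first and the air set point is held up, so the mixture never goes fuel-rich in either direction.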
DDC is not necessarily limited to simple feedback control as shown in Figure 1.8. It is possible to
use techniques such as inferential, feed forward and adaptive or self-tuning control. Inferential
control, illustrated in Figure 1.9, is the term applied to control where the variables on which the
feedback control is to be based cannot be measured directly, but have to be 'inferred' from
measurements of some other quantity.
Adaptive control can take several forms. Three of the most common are:
Another example is the use of measurements of external temperature and wind velocities to adjust
control parameters for a building environment control system. Adaptive control using self-tuning is
illustrated in Figure 1.11 and uses identification techniques to achieve continual determination of
the parameters of the process being controlled; changes in the process parameters are then used to
adjust the actual controller. An alternative form of self-tuning is frequently found in commercial
PID controllers (usually called auto tuning). The comparison may be based on a simple measure
such as percentage overshoot or some more complex comparators. The model reference technique
is illustrated in Figure 1.12; it relies on the ability to construct an accurate model of the process
and to measure the disturbances which affect the process.
The adoption of computers for process control has increased the range of activities that can be
performed, for not only can the computer system directly control the operation of the plant, but
also it can provide managers and engineers with a comprehensive picture of the status of the plant
operations. It is in this supervisory role and in the presentation of information to the plant operator
- large rooms full of dials and switches have been replaced by VDUs and keyboards - that the
major changes have been made: the techniques used in the basic feedback control of the plant have
changed little from the days when pneumatically operated three-term controllers were the norm.
Direct digital control (DDC) is often simply the computer implementation of the techniques used
for the traditional analog controllers.
An example of supervisory control is shown in Figure 1.13. Two evaporators are connected in
parallel and material in solution is fed to each unit. The purpose of the plant is to evaporate as
much water as possible from the solution. Steam is supplied to a heat exchanger linked to the first
evaporator and the steam for the second evaporator is supplied from the vapors boiled off from the
first stage. To achieve maximum evaporation the pressures in the chambers must be as high as
safety permits. However, it is necessary to achieve a balance between the two evaporators; if the
first is driven at its maximum rate it may generate so much steam that the safety thresholds for the
second evaporator are exceeded.
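The balancing logic described above can be sketched as a simple supervisory rule: push the first-stage setpoint up for maximum evaporation, but back it off whenever the second stage approaches its safety limit. The margin and step values are invented for illustration:

```python
# Hedged sketch of the supervisory balancing of the two evaporators:
# drive the first stage as hard as safety in the second stage permits.
# Pressure units, margin and step size are illustrative assumptions.

def supervise(setpoint, p2, p2_limit, margin=0.05, step=0.5):
    """Return a new first-stage pressure setpoint."""
    if p2 > p2_limit * (1 - margin):
        return setpoint - step     # second stage near its limit: back off
    return setpoint + step         # room to spare: drive the first stage harder
```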
MITMysore | Dept of Electronics and Communication Engineering
The techniques used have included optimization based on hill climbing, linear programming and
simulations involving complex non-linear models of plant dynamics and economics.
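The hill-climbing optimization mentioned above can be illustrated by a minimal local search over a single control parameter; the objective function in the test is an invented stand-in for a real plant economic measure:

```python
# Minimal hill-climbing sketch of the kind used in supervisory
# optimization: perturb a parameter in each direction and keep any
# change that improves the objective.

def hill_climb(objective, x, step=0.1, iterations=100):
    best = objective(x)
    for _ in range(iterations):
        for candidate in (x + step, x - step):
            value = objective(candidate)
            if value > best:
                best, x = value, candidate
    return x
```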
Throughout most of the 1960s computer control implied the use of one central computer for the
control of the whole plant. The reason for this was largely financial: computers were expensive.
From the previous sections it should now be obvious that a typical computer-operated process
involves the computer in performing many different types of operations and tasks. Although a
general purpose computer can be programmed to perform all of the required tasks the differing
time-scales and security requirements for the various categories of task make the programming job
difficult, particularly with regard to the testing of software. For example, the feedback loops in a
process may require calculations at intervals measured in seconds while some of the alarm and
switching systems may require a response in less than 1 second; the supervisory control
calculations may have to be repeated at intervals of several minutes or even hours; production
management will want summaries at shift or daily intervals; and works management will
require weekly or monthly analyses. Interrelating all the different time-scales can cause serious
difficulties.
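The differing time-scales above can be pictured as a simple cyclic schedule in which each category of task runs at a multiple of a base tick. The task names and periods below are invented examples standing in for the alarm, control and supervisory intervals just described:

```python
# Toy multi-rate schedule illustrating the differing task periods:
# alarm scanning every tick, control loops every 5 ticks, supervisory
# calculations every 60 ticks (all periods are illustrative).

def due_tasks(tick, periods):
    """Return the names of the tasks due to run at this tick."""
    return [name for name, period in periods.items() if tick % period == 0]

periods = {"alarm_scan": 1, "control_loop": 5, "supervisory": 60}
```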
A consequence of centralized control was considerable resistance to the use of DDC schemes in the form shown in Figure 2.4; with one central computer in the feedback loop, failure of the computer results in the loss of control of the whole plant. In the 1960s computers were not very
reliable: the mean-time-to-failure of the computer hardware was frequently of the order of a few
hours and to obtain a mean-time-to-failure of 3 to 6 months for the whole system required
defensive programming to ensure that the system could continue running in a safe condition while
the computer was repaired. Many of the early schemes were therefore for supervisory control as
shown in Figure 2.13. However, in the mid 1960s the traditional process instrument companies
began to produce digital controllers with analog back-up. These units were based on the standard
analog controllers but allowed a digital control signal from the computer to be passed through the
controller to the actuator: the analog system tracked the signal and, if the computer did not update the controller within a specified (adjustable) interval, the unit reverted to local analog control.
This scheme enabled DDC to be used with the confidence that if the computer failed, the plant
could still be operated. The cost, however, was high in that two complete control systems had to be
installed.
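The back-up behaviour just described amounts to a watchdog check on the computer's output. A sketch, with invented time values and signal names, might look like this:

```python
# Sketch of the digital-controller-with-analog-back-up logic: pass the
# computer's DDC signal through unless it has gone stale, in which case
# drop to local analog control. Timeout and values are illustrative.

def select_output(now, last_update, timeout, computer_out, analog_out):
    """Choose which control signal reaches the actuator."""
    if now - last_update <= timeout:
        return computer_out        # computer healthy: DDC signal passes through
    return analog_out              # computer stale: local analog control takes over
```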
By 1970 the cost of computer hardware had reduced to such an extent that it became feasible to
consider the use of dual computer systems (Figure 1.14). Here, in the event of failure of one of the
computers, the other takes over. In some schemes the change-over is manual, in others automatic
failure detection and change-over is incorporated. Many of these schemes are still in use. They do,
however, have a number of weaknesses: cabling and interface equipment is not usually duplicated, and neither is the software, in the sense of independently designed and constructed programs, so this lack of duplication becomes crucial. Automatic failure-detection and change-over equipment, when used, itself becomes a critical component. Furthermore, the problems of designing, programming, testing and maintaining the software are not reduced: if anything they are further
complicated, in that provision has to be made for monitoring the main computer ready for change-over. The alternative to duplicating the whole system is to divide the tasks between several computers in one of two ways:
1. Hierarchical - Tasks are divided according to function, for example with one computer
performing DDC calculations and being subservient to another which performs supervisory
control.
2. Distributed - Many computers perform essentially similar tasks in parallel.
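The automatic change-over mentioned for dual-computer schemes can be sketched as a heartbeat check: the standby takes over when the primary falls silent. The function name and the permitted silence interval are illustrative assumptions:

```python
# Hypothetical sketch of automatic change-over in a dual-computer
# scheme: the standby assumes control when the primary's heartbeat
# has been silent for longer than an assumed maximum interval.

def active_computer(now, primary_heartbeat, max_silence=2.0):
    """Select which machine should currently be in control."""
    if now - primary_heartbeat <= max_silence:
        return "primary"
    return "standby"              # change-over on detected failure
```

Note that, as the text observes, this monitoring mechanism is itself a critical component: a fault in the change-over logic can be as damaging as a fault in either computer.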
Dept. of ECE, GSSSIETW, Mysuru Page 34