Control System
A control system is a device or set of devices to manage, command, direct or regulate the
behavior of other devices or systems.
There are two common classes of control systems, with many variations and
combinations: logic or sequential controls, and feedback or linear controls. There is also fuzzy
logic, which attempts to combine some of the design simplicity of logic with the utility of linear
control. Some devices or systems are inherently not controllable.
OVERVIEW:
The term "control system" may be applied to the essentially manual controls that allow an
operator, for example, to close and open a hydraulic press, perhaps including logic so that it
cannot be moved unless safety guards are in place.
LOGIC CONTROL:
Logic control systems for industrial and commercial machinery were historically
implemented at mains voltage using interconnected relays, designed using ladder logic. Today,
most such systems are constructed with programmable logic controllers (PLCs)
or microcontrollers. The notation of ladder logic is still in use as a programming idiom for PLCs.
Logic controllers may respond to switches, light sensors, pressure switches, etc., and can
cause the machinery to start and stop various operations. Logic systems are used to sequence
mechanical operations in many applications. Examples include elevators, washing machines and
other systems with interrelated stop-go operations.
Logic systems are quite easy to design, and can handle very complex operations. Some
aspects of logic system design make use of Boolean logic.
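As a small illustration, the interlock logic from the hydraulic-press example above might be sketched in Python as follows; the input names (start_button, guards_closed, and so on) are hypothetical and stand in for real sensor inputs.

def press_may_close(start_button: bool, guards_closed: bool,
                    pressure_ok: bool, emergency_stop: bool) -> bool:
    """Ladder-logic-style interlock: the press output is energized only when
    the start button is pressed, both safety conditions hold, and the
    emergency stop is not active."""
    return start_button and guards_closed and pressure_ok and not emergency_stop

# With the guards open, the press must not close.
print(press_may_close(start_button=True, guards_closed=False,
                      pressure_ok=True, emergency_stop=False))   # False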
ON–OFF CONTROL:
An on–off controller simply switches its output fully on when the process value is on one side of the setpoint and fully off when it is on the other, as a thermostat does with a refrigeration compressor. Simple on–off feedback control systems like these are cheap and effective. In some cases, like the simple compressor example, they may represent a good design choice.
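A minimal Python sketch of such an on–off controller, assuming a hypothetical refrigeration compressor and adding a small deadband (hysteresis) so the output does not chatter around the setpoint:

def on_off_control(measured_temp: float, setpoint: float,
                   deadband: float, currently_on: bool) -> bool:
    """Return the new on/off state of the compressor.

    The compressor switches on when the temperature rises above
    setpoint + deadband, switches off when it falls below
    setpoint - deadband, and otherwise keeps its previous state."""
    if measured_temp > setpoint + deadband:
        return True           # too warm: run the compressor
    if measured_temp < setpoint - deadband:
        return False          # cold enough: stop the compressor
    return currently_on       # inside the deadband: no change (hysteresis)

# Example: a 4° setpoint with a 1° deadband.
state = False
for temp in (3.0, 4.4, 5.2, 4.6, 2.8):
    state = on_off_control(temp, setpoint=4.0, deadband=1.0, currently_on=state)
    print(temp, state)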
LINEAR CONTROL:
The output from a linear control system into the controlled process may be in the form of
a directly variable signal, such as a valve that may be 0 or 100% open or anywhere in between.
Sometimes this is not feasible and so, after calculating the current required corrective signal, a
linear control system may repeatedly switch an actuator, such as a pump, motor or heater, fully
on and then fully off again, regulating the duty cycle using pulse-width modulation.
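The duty-cycle calculation itself is simple. The following sketch is a hypothetical illustration in Python: it converts a 0–100% corrective command into on and off times within one fixed modulation period.

def pwm_times(command_percent: float, period_s: float) -> tuple[float, float]:
    """Convert a 0-100 % linear command into (on_time, off_time) for one
    pulse-width-modulation period of length period_s seconds."""
    duty = min(max(command_percent, 0.0), 100.0) / 100.0   # clamp to 0..1
    on_time = duty * period_s
    return on_time, period_s - on_time

# Example: a 35 % heater command spread over a 10-second PWM period.
print(pwm_times(35.0, 10.0))   # (3.5, 6.5)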
PROPORTIONAL CONTROL:
Proportional negative-feedback systems are based on the difference between the required
set point (SP) and process value (PV). This difference is called the error. Power is applied in
direct proportion to the current measured error, in the correct sense so as to tend to reduce the
error (and so avoid positive feedback). The amount of corrective action that is applied for a given
error is set by the gain or sensitivity of the control system.
At low gains, only a small corrective action is applied when errors are detected: the
system may be safe and stable, but may be sluggish in response to changing conditions; errors
will remain uncorrected for relatively long periods of time: it is over-damped. If the proportional
gain is increased, such systems become more responsive and errors are dealt with more quickly.
There is an optimal value for the gain setting when the overall system is said to be critically
damped. Increases in loop gain beyond this point will lead to oscillations in the PV; such a
system is under-damped.
In the furnace example, suppose the temperature is increasing towards a set point at
which, say, 50% of the available power will be required for steady-state. At low temperatures,
100% of available power is applied. When the PV is within, say, 10° of the SP, the heat input
begins to be reduced by the proportional controller. (Note that this implies a 20° "proportional
band" (PB) from full to no power input, evenly spread around the setpoint value). At the setpoint
the controller will be applying 50% power as required, but stray stored heat within the heater
sub-system and in the walls of the furnace will keep the measured temperature rising beyond
what is required. At 10° above SP, we reach the top of the proportional band (PB) and no power
is applied, but the temperature may continue to rise even further before beginning to fall back.
Eventually as the PV falls back into the PB, heat is applied again, but now the heater and the
furnace walls are too cool and the temperature falls too low before its fall is arrested, so that the
oscillations continue.
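A minimal Python sketch of the proportional law used in this furnace example, assuming a hypothetical 500° setpoint, the 20° proportional band described above, and an output clamped to the 0–100% power range:

def proportional_power(pv: float, sp: float, prop_band: float) -> float:
    """Return heater power (0-100 %) from a proportional controller.

    Power falls linearly from 100 % at (sp - prop_band/2) to 0 % at
    (sp + prop_band/2); exactly at the setpoint the output is 50 %."""
    error = sp - pv                              # positive when too cold
    power = 50.0 + 100.0 * error / prop_band     # gain = 100 % per proportional band
    return min(max(power, 0.0), 100.0)           # clamp to the actuator range

# Example: 20-degree proportional band around a hypothetical 500-degree setpoint.
for pv in (480.0, 490.0, 495.0, 500.0, 505.0, 510.0, 520.0):
    print(pv, proportional_power(pv, sp=500.0, prop_band=20.0))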
OVER-DAMPED FURNACE EXAMPLE:
The temperature oscillations that an under-damped furnace control system produces are
unacceptable for many reasons, including the waste of fuel and time (each oscillation cycle may
take many minutes), as well as the likelihood of seriously overheating both the furnace and its
contents.
Suppose that the gain of the control system is reduced drastically and it is restarted. As
the temperature approaches, say, 30° below SP (a 60° proportional band, or PB, now), the heat input begins to be reduced. The rate of heating of the furnace then has time to slow and, as the heat is reduced still further, the temperature is eventually brought up to the setpoint just as 50% power input is reached, and the furnace operates as required. There was some wasted time while the furnace crept to its final temperature using only 52%, then 51%, of available power, but at least no harm was done.
By carefully increasing the gain (i.e. reducing the width of the PB) this over-damped and
sluggish behavior can be improved until the system is critically damped for this SP temperature.
Doing this is known as 'tuning' the control system. A well-tuned proportional furnace
temperature control system will usually be more effective than on-off control, but will still
respond slower than the furnace could under skillful manual control.
PID CONTROL:
To resolve these two problems (the sluggish response of a well-damped proportional loop and any residual steady-state error), many feedback control schemes include mathematical extensions to improve performance. The most common extensions lead to proportional-integral-derivative control, or PID control (pronounced pee-eye-dee).
DERIVATIVE ACTION:
The derivative part is concerned with the rate-of-change of the error with time: If the
measured variable approaches the setpoint rapidly, then the actuator is backed off early to allow
it to coast to the required level; conversely if the measured value begins to move rapidly away
from the setpoint, extra effort is applied—in proportion to that rapidity—to try to maintain it.
Derivative action makes a control system behave much more intelligently. On systems
like the temperature of a furnace, or perhaps the motion-control of a heavy item like a gun or
camera on a moving vehicle, the derivative action of a well-tuned PID controller can allow it to
reach and maintain a setpoint better than most skilled human operators could.
INTEGRAL ACTION:
The integral term magnifies the effect of long-term steady-state errors, applying ever-
increasing effort until they reduce to zero. In the example of the furnace above working at
various temperatures, if the heat being applied does not bring the furnace up to setpoint, for
whatever reason, integral action increasingly moves the proportional band relative to the setpoint
until the PV error is reduced to zero and the setpoint is achieved.
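Putting the three actions together, a discrete PID controller can be sketched in Python as below. This is a generic textbook form rather than any particular vendor's algorithm; the gains kp, ki, kd and the sample time dt are hypothetical tuning values.

class PIDController:
    """Discrete PID controller: output = kp*error + ki*integral + kd*derivative."""

    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt                   # integral action
        derivative = (error - self.prev_error) / self.dt   # derivative action
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Example: one control step for a furnace 8 degrees below its setpoint.
pid = PIDController(kp=5.0, ki=0.1, kd=2.0, dt=1.0)
print(pid.update(setpoint=500.0, measurement=492.0))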
OTHER TECHNIQUES:
It is possible to filter the PV or error signal. Doing so can reduce the response of the
system to undesirable frequencies, to help reduce instability or oscillations. Some feedback
systems will oscillate at just one frequency. By filtering out that frequency, more "stiff" feedback
can be applied, making the system more responsive without shaking itself apart.
Feedback systems can be combined. In cascade control, one control loop applies control
algorithms to a measured variable against a setpoint, but then provides a varying setpoint to
another control loop rather than affecting process variables directly. If a system has several
different measured variables to be controlled, separate control systems will be present for each of
them.
Control engineering in many applications produces control systems that are more
complex than PID control. Examples of such fields include fly-by-wire aircraft control systems,
chemical plants, and oil refineries. Model predictive control systems are designed using
specialized computer-aided-design software and empirical mathematical models of the system to
be controlled.
FUZZY LOGIC:
Fuzzy logic is an attempt to get the easy design of logic controllers and yet control
continuously-varying systems. Basically, a measurement in a fuzzy logic system can be partly
true; that is, if yes is 1 and no is 0, a fuzzy measurement can be anywhere between 0 and 1.
The rules of the system are written in natural language and translated into fuzzy logic.
For example, the design for a furnace would start with: "If the temperature is too high, reduce the
fuel to the furnace. If the temperature is too low, increase the fuel to the furnace."
Measurements from the real world (such as the temperature of a furnace) are converted to values
between 0 and 1 by seeing where they fall on a triangle. Usually the tip of the triangle is the
maximum possible value which translates to "1."
Fuzzy logic, then, modifies Boolean logic to be arithmetical. Usually the "not" operation is "output = 1 - input", the "and" operation is "output = input1 × input2", and "or" is "output = 1 - ((1 - input1) × (1 - input2))". These rules reduce to ordinary Boolean logic when the values are restricted to 0 and 1, instead of being allowed to range over the unit interval [0, 1].
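These rules translate directly into arithmetic. A minimal Python sketch of the "not", "and", and "or" operations given above, showing that they reduce to ordinary Boolean logic at the endpoints 0 and 1:

def fuzzy_not(a: float) -> float:
    return 1.0 - a

def fuzzy_and(a: float, b: float) -> float:
    return a * b                        # the product form of "and"

def fuzzy_or(a: float, b: float) -> float:
    return 1.0 - (1.0 - a) * (1.0 - b)  # the dual of the product "and"

# Partly-true inputs give partly-true outputs ...
print(fuzzy_and(0.7, 0.4), fuzzy_or(0.7, 0.4), fuzzy_not(0.7))   # roughly 0.28, 0.82, 0.3

# ... and restricting the inputs to 0 and 1 recovers ordinary Boolean logic.
print(fuzzy_and(1.0, 0.0), fuzzy_or(1.0, 0.0), fuzzy_not(0.0))   # 0.0, 1.0, 1.0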
The last step is to "defuzzify" an output. Basically, the fuzzy calculations make a value
between zero and one. That number is used to select a value on a line whose slope and height convert the fuzzy value to a real-world output number. The number then controls real
machinery.
If the triangles are defined correctly and the rules are right, the result can be a good control system.
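To make the triangle and defuzzification steps concrete, here is a minimal, hypothetical Python sketch: a triangular membership function fuzzifies a furnace temperature, and a simple line-based defuzzifier converts the fuzzy result back into a fuel command. The particular temperatures and fuel limits are invented for the example.

def triangle(x: float, left: float, peak: float, right: float) -> float:
    """Triangular membership: 0 outside [left, right], 1 at the peak,
    linear in between."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def defuzzify(value: float, low_output: float, high_output: float) -> float:
    """Map a fuzzy value in [0, 1] onto a line between two real-world outputs."""
    return low_output + value * (high_output - low_output)

# "Temperature is too high" membership, peaking at a hypothetical 550 degrees.
too_high = triangle(530.0, left=500.0, peak=550.0, right=600.0)
# Rule: if too high, reduce the fuel -> defuzzify toward the low-fuel end.
fuel_percent = defuzzify(1.0 - too_high, low_output=0.0, high_output=100.0)
print(too_high, fuel_percent)   # roughly 0.6 and 40.0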
When a robust fuzzy design is reduced into a single, quick calculation, it begins to
resemble a conventional feedback loop solution and it might appear that the fuzzy design was
unnecessary. However, the fuzzy logic paradigm may provide scalability for large control
systems where conventional methods become unwieldy or costly to derive.
Fuzzy electronics is an electronic technology that uses fuzzy logic instead of the two-
value logic more commonly used in digital electronics.
PHYSICAL IMPLEMENTATIONS:
Since modern small microprocessors are so cheap (often less than $1 US), it's very
common to implement control systems, including feedback loops, with computers, often in
an embedded system. The feedback controls are simulated by having the computer make periodic
measurements and then calculating the control output from this stream of measurements (see digital signal
processing, sampled data systems).
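A minimal sketch of such a periodic sampling loop in Python; read_sensor, write_actuator, and control_law are hypothetical placeholders for whatever hardware interface and control algorithm a real embedded system would use.

import time

SAMPLE_PERIOD_S = 0.1   # T = 1/fs, an assumed 10 Hz sampling rate

def read_sensor() -> float:
    """Placeholder for an analog-to-digital conversion."""
    return 21.7

def write_actuator(command: float) -> None:
    """Placeholder for a digital-to-analog or PWM output."""
    print("actuator command:", command)

def control_law(setpoint: float, measurement: float) -> float:
    """Placeholder proportional law; a real loop might use PID instead."""
    return 2.0 * (setpoint - measurement)

def run_loop(setpoint: float, n_samples: int) -> None:
    for _ in range(n_samples):
        measurement = read_sensor()              # periodic measurement
        write_actuator(control_law(setpoint, measurement))
        time.sleep(SAMPLE_PERIOD_S)              # wait for the next sample instant

run_loop(setpoint=22.0, n_samples=3)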
Transfer function: The time-domain model of a linear, time-invariant system is a differential equation whose coefficients are the a's and b's. Using Laplace transforms, you can transform the model to a rational polynomial transfer function,

H(s) = (b_M s^M + ... + b_1 s + b_0) / (a_N s^N + ... + a_1 s + a_0).

Note that whether you work with the model in the time (t) domain, or with the transfer function in the s-domain, knowledge of all of the a's and b's implies a complete description of the linear, time-invariant system. Obtaining these models or transfer functions for systems of technological interest is the subject of entire other courses, such as circuit theory, electromechanics, dynamics, chemical kinetics, etc.
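As a sketch of how compact this description is, the transfer function above can be represented in Python by nothing more than its two coefficient lists; the particular numbers below are hypothetical. numpy's polyval and roots then evaluate the polynomials and find the poles and zeros.

import numpy as np

# H(s) = (2s + 1) / (s^2 + 3s + 2), coefficients in descending powers of s
num = [2.0, 1.0]          # b1, b0
den = [1.0, 3.0, 2.0]     # a2, a1, a0

def H(s: complex) -> complex:
    """Evaluate the rational-polynomial transfer function at a complex s."""
    return np.polyval(num, s) / np.polyval(den, s)

print(H(1j))              # value of H at s = j*1 rad/s
print(np.roots(den))      # poles: roots of the denominator, here -1 and -2
print(np.roots(num))      # zeros: roots of the numerator, here -0.5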
Diagram: A general representation of the use of additional subsystems whose design you can modify (shown in orange) is shown below:
The subsystems, C and F have their own inputs and outputs, and hence their own transfer
functions. The control signal, r(t), represents the output desired by the user. By common
convention, the output of F is subtracted from r(t) before being fed to C as input.
Handout 1-2: What can you learn from the transfer function?
Nearly all modern control systems use digital computers to implement the
controllers. Computers can neither accept continuous functions of time as inputs nor provide
them as outputs to the external (analog) world. All they can do is operate on sequences of
numbers (usually in very rapid succession). In order to interface a computer to the analog world
you must implement a system like the one shown below as a replacement for
subsystems C or F (or sometimes both).
Here the continuous function (solid line in the diagram) r(t), passes through a sample-and-hold
subsystem, S&H. The relationship between the input and the output of the sample-and-hold is
illustrated below:
Here the white curve is r(t), a continuous time function. The output of the S&H block is the blue
line with the stair-step structure. It is continuous everywhere except when it jumps to new
values. Between the jumps, it is constant. If you listed the vertical co-ordinates of the blue dots
you would have a sequence of numbers. The digital representations of these numbers are the
output of the analog-to-digital converter, A/D, and constitute the sequence, r_k, shown as a dashed
line on the block diagram.
Once this sequence is formed, the computer has lost all information on the values
of r(t) between sampling points. From the picture of the input and output of the S&H block, you
can see that the time between successive samples, T, is the reciprocal of the sampling
frequency, f_s.
The discrete subsystem, D, transforms the input sequence, r_k, into another sequence, o_k, at
its output.
Discrete system model: The model for the discrete system (a digital filter) is the difference equation,

o_k = b_0 r_k + b_1 r_{k-1} + ... + b_M r_{k-M} - a_1 o_{k-1} - a_2 o_{k-2} - ... - a_N o_{k-N}.
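A minimal Python sketch of this difference equation acting as a digital filter; the coefficient values in the example are hypothetical and chosen only to make the output easy to follow.

def digital_filter(r: list[float], b: list[float], a: list[float]) -> list[float]:
    """Compute o_k = b[0]*r_k + ... + b[M]*r_{k-M} - a[0]*o_{k-1} - ... - a[N-1]*o_{k-N}.

    Here a[0] plays the role of a_1 in the difference equation above;
    samples with negative index are taken to be zero."""
    o = []
    for k in range(len(r)):
        ok = sum(b[j] * r[k - j] for j in range(len(b)) if k - j >= 0)
        ok -= sum(a[i] * o[k - 1 - i] for i in range(len(a)) if k - 1 - i >= 0)
        o.append(ok)
    return o

# Example: a first-order low-pass-like filter with b0 = 0.5 and a1 = -0.5.
print(digital_filter([1.0, 0.0, 0.0, 0.0], b=[0.5], a=[-0.5]))
# prints [0.5, 0.25, 0.125, 0.0625]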
Relationship of r_k to r(t): From the subsystem diagram and the plot, each element of the sequence is related to a sample taken at the corresponding time as

r_k = r(kT).
Relationship of o_k to o(t): From the subsystem diagram and the plot, each element of the sequence is related to the constant output of the DAC during the corresponding time interval as

o(t) = o_k for kT ≤ t < (k+1)T.
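Both relationships are easy to state in code. The sketch below samples a hypothetical continuous signal r(t) at period T and shows how a hypothetical output sequence would be held constant over each sampling interval:

import math

T = 0.25                     # sampling period in seconds (f_s = 4 Hz, assumed)

def r(t: float) -> float:
    """A hypothetical continuous input r(t)."""
    return math.sin(2.0 * math.pi * t)

# r_k = r(kT): the sequence produced by the sample-and-hold and A/D converter
r_seq = [r(k * T) for k in range(5)]
print(r_seq)

# o(t) = o_k for kT <= t < (k+1)T: the D/A converter holds each value constant
o_seq = [0.0, 0.5, 0.8, 0.6, 0.3]        # a hypothetical output sequence o_k

def o_of_t(t: float) -> float:
    return o_seq[int(t // T)]

print(o_of_t(0.6))   # 0.6 s falls in the interval 0.5 <= t < 0.75, so this is o_2 = 0.8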
Discrete system transfer function: You can use the z-transform [see Text 8.1-8.2] on the difference equation to get a discrete transfer function,

H(z) = O(z)/R(z) = (b_0 + b_1 z^-1 + ... + b_M z^-M) / (1 + a_1 z^-1 + ... + a_N z^-N).

It is a bit of a stretch to call this a rational polynomial, but it is indeed a rational polynomial in z^-1. You can turn it into a rational polynomial in z by multiplying the numerator and denominator by z^M if M > N, or by z^N if M < N. The result is a ratio of two polynomials in z whose degree is the larger of M and N.
A linear transfer function for this system in the s-domain cannot be written, because of the non-linear nature of the sample-and-hold. However, you can write its transfer function in the z-domain as

H(z) = Z{H(s)},

where the operation Z{·} is a mathematical transformation from the s-domain to the z-domain. The input to this transformation is a rational polynomial in s, while the output is a rational polynomial in z. You can write the transfer function of this replacement system as D(z), which says that the digital filter transfer function you need to replicate the continuous subsystem C at a given sampling frequency is

D(z) = Z{C(s)}.
5. Stability from the transfer function: a stable system produces bounded (finite) outputs from
any bounded input.
Continuous system: a continuous system is stable if all the poles of H(s) lie in the left
half of the complex plane.
Discrete system: a discrete system is stable if all the poles of H(z) lie within a unit circle
whose center is at the origin in the complex plane.
Alternative criteria for stability exist, but are not needed as long as the poles of the
transfer function can be found. Pole locations determine stability.
This property lets you use pole locations as a rough design tool to determine stability and
speed of response.
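A minimal Python check of these two pole criteria, using numpy to find the poles from hypothetical denominator coefficients:

import numpy as np

def continuous_stable(den: list[float]) -> bool:
    """Stable if every pole of H(s) has a negative real part."""
    return bool(np.all(np.real(np.roots(den)) < 0))

def discrete_stable(den: list[float]) -> bool:
    """Stable if every pole of H(z) lies inside the unit circle."""
    return bool(np.all(np.abs(np.roots(den)) < 1))

print(continuous_stable([1.0, 3.0, 2.0]))   # poles at -1 and -2         -> True
print(continuous_stable([1.0, -1.0, 4.0]))  # poles at 0.5 +/- 1.94j     -> False
print(discrete_stable([1.0, -0.5]))         # pole at z = 0.5            -> True
print(discrete_stable([1.0, -1.5]))         # pole at z = 1.5            -> False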
For systems whose transfer functions have many poles and zeros, frequency-domain
analysis provides better insight into system performance than does observing pole and zero
locations. You calculate the magnitude, |H|, and phase, ∠H, of system transfer functions vs. angular frequency, ω, as follows:
Continuous systems: evaluate H(s) at s = jω, giving H(jω).
Sampled-data systems: evaluate H(z) at z = e^{jωT}, giving H(e^{jωT}), where T is the sampling period.
In the above equations, we just made the substitution s = jω (continuous) or z = e^{jωT} (sampled-data).
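A short numerical sketch of these substitutions in Python, again with hypothetical transfer functions: it evaluates the magnitude and phase of a continuous H(s) at s = jω and of a discrete H(z) at z = e^{jωT}.

import numpy as np

w = 2.0                    # angular frequency, rad/s
T = 0.1                    # sampling period, s

# Continuous H(s) = 1 / (s + 1): substitute s = j*w
Hc = 1.0 / (1j * w + 1.0)
print(abs(Hc), np.angle(Hc))          # magnitude and phase at omega = 2 rad/s

# Discrete H(z) = 0.5 / (1 - 0.5 z^-1): substitute z = exp(j*w*T)
z = np.exp(1j * w * T)
Hd = 0.5 / (1.0 - 0.5 / z)
print(abs(Hd), np.angle(Hd))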
The state-variable (state-space) approach described below offers several advantages:
1) It lets you deal with systems having more than one feedback loop,
2) It lets you deal with systems having multiple inputs and outputs,
3) It lets you find a set of feedback parameters that place the system poles at any
locations you want, if all the state variables are available for measurement
(observable).
8.1. Continuous systems:
Time-domain: You can describe the system with a set of simultaneous, first-order linear differential equations in the set of state variables (termed the state vector), x, in the form

dx/dt = A x + B u, the state equation, and
y = C x + D u, the output equation.
You design the control of a state-variable system by feeding back a weighted sum of the states to
each system input you can control (there may be inputs you cannot control: these are disturbances).
s-Domain: You can take the Laplace transform of the state equations to get

Y(s) = [C (sI - A)^-1 B + D] U(s),

where I is the square identity matrix. This equation relates all of the inputs to all of the outputs
in the s-domain, and thus plays the role of a transfer function, even though it can’t be written as a
simple ratio of polynomials. [For SISO systems (only one input and one output) you get a
rational polynomial transfer function. To see how state variable models apply to SISO systems,
including some simple examples, read the handout on that subject.]
Assuming that all states are observable, you can feed back a weighted sum of these states to each
input, allowing for the fact that in general, this weighting can be different for each accessible
input. Now you have the system shown below.
The state-variable approach to control system design consists of adjusting the elements of
the state feedback matrix, K, to produce desired pole locations consistent with other design
specifications.
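A small numerical illustration of this idea in Python, under the usual assumption that the control applied to each input is the reference minus K times the state vector, so that the closed-loop poles are the eigenvalues of A - BK. The plant matrices and gains below are hypothetical.

import numpy as np

# Hypothetical double-integrator-like plant: dx/dt = A x + B u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

print(np.linalg.eigvals(A))            # open-loop poles: 0, 0 (not asymptotically stable)

# State feedback u = r - K x with hypothetical gains
K = np.array([[6.0, 5.0]])
print(np.linalg.eigvals(A - B @ K))    # closed-loop poles moved to -2 and -3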
Time-domain: You could also write state equations for any system in the form of a set of difference equations, giving

x_{k+1} = A x_k + B u_k for the state equation, and
y_k = C x_k + D u_k for the output equation.

The dimensions of the matrices and vectors are the same as for the
continuous case. The vectors are now sequences, and the matrices now depend on the
sampling frequency as well as on the parameters of the system.
from which you can get the transfer function-like matrix relationship

Y(z) = [C (zI - A)^-1 B + D] U(z).

The poles of the uncontrolled system are the roots of det(zI - A) = 0 (or det(sI - A) = 0 in the continuous case).
There is a transfer-function relationship between each output and each input which you
can evaluate in the frequency domain. Write the matrix input-output relations for both cases as
Y(s) = [C (sI - A)^-1 B + D] U(s) and substitute s = jω, OR
Y(z) = [C (zI - A)^-1 B + D] U(z) and substitute z = e^{jωT}.
You can use transfer functions you get from these operations to design a control system with
multiple inputs and outputs.
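As a closing sketch, this frequency-domain evaluation can be done in Python directly from the state-space matrices, H(jω) = C (jωI - A)^-1 B + D, with z = e^{jωT} replacing jω for the discrete case. The matrices below are hypothetical, and the same matrices are reused for the discrete evaluation purely for illustration (a real discrete design would use the discretized A and B).

import numpy as np

# Hypothetical continuous state-space model with 2 inputs and 2 outputs
A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])
B = np.eye(2)
C = np.eye(2)
D = np.zeros((2, 2))

def H_continuous(w: float) -> np.ndarray:
    """Matrix transfer function H(jw) = C (jwI - A)^-1 B + D."""
    s = 1j * w
    return C @ np.linalg.inv(s * np.eye(2) - A) @ B + D

def H_discrete(w: float, T: float) -> np.ndarray:
    """Same evaluation with z = exp(jwT); a real discrete model would use
    the discretized A, B, C, D, reused here only for illustration."""
    z = np.exp(1j * w * T)
    return C @ np.linalg.inv(z * np.eye(2) - A) @ B + D

print(np.abs(H_continuous(1.0)))       # 2x2 matrix of gains at omega = 1 rad/s
print(np.abs(H_discrete(1.0, 0.1)))    # 2x2 matrix of gains for the sampled case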