Advanced Control Unleashed
Terrence L. Blevins
Gregory K. McMillan
Willy K. Wojsznis
Michael W. Brown
The information presented in this publication is for the general education of the reader. Because
neither the author nor the publisher have any control over the use of the information by the reader,
both the author and the publisher disclaim any and all liability of any kind arising out of such use.
The reader is expected to exercise sound professional judgment in using any of the information
presented in a particular application.
Additionally, neither the author nor the publisher have investigated or considered the effect of any
patents on the ability of the reader to use any of the information in a particular application. The
reader is responsible for reviewing any possible patents that may affect any particular use of the
information presented.
Any references to commercial products in the work are cited as examples only. Neither the author
nor the publisher endorse any referenced commercial product. Any trademarks or tradenames
referenced belong to the respective owner of the mark or name. Neither the author nor the publisher
make any representation regarding the availability of any referenced commercial product at any
time. The manufacturer’s instructions on use of any commercial product must be followed at all
times, even if in conflict with the information in this publication.
ISBN 1-55617-815-8
No part of this work may be reproduced, stored in a retrieval system, or transmitted in any form or
by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior writ-
ten permission of the publisher.
ISA
67 Alexander Drive
P.O. Box 12277
Research Triangle Park, NC 27709
ACKNOWLEDGEMENT
FOREWORD
Chapter 1 INTRODUCTION
General Procedure
Application Detail
Rules of Thumb
Theory
Process Time Constants and Gains
Process Time Delay
Ultimate Gain and Period
Peak and Integrated Error
Feedforward Control
Dead Time from Valve Dead Band
Nomenclature
References
Chapter 7 FUZZY LOGIC CONTROL
Practice
Overview
Opportunity Assessment
Examples
Application
General Procedure
Rules of Thumb
Guided Tour
Theory
Introduction to Fuzzy Logic Control
Building a Fuzzy Logic Controller
Fuzzy Logic PID Controller
Fuzzy Logic Control Nonlinear PI Relationship
FPID and PID Relations
Automation of Fuzzy Logic Controller Commissioning
References
INDEX
The authors wish to express their appreciation to Mark Nixon and Ron
Eddie from Emerson Process Management, for their enthusiastic support
and commitment of resources for this book, to Jim Hoffmaster, Bud Keyes,
Duncan Schleiss, John Berra, and Gil Pareja from Emerson Process Man-
agement for their inspiration and support in establishing the DeltaV
advanced control program, to Karl Astrom from Lund University, Tom
Edgar from the University of Texas at Austin, Dale Seborg from the
University of California, Santa Barbara and Tom McAvoy from the Univer-
sity of Maryland for their guidance in the pursuit of new technologies,
Mike Gray and Mark Mennen from Solutia Inc. for the initiation and
sustenance of advanced control applications and innovations, Ken
Schibler from Emerson Process Management for his help in setting the
direction of the book, Robert Cameron, Michael Mansy, Glenn Mertz, and
Gina Underwood from Solutia Inc. for their valuable comments, and
finally, Scott Weidemann from Washington University, and Jim Cahill,
Brenda Forsythe, and Cory Walton from Emerson Process Management
for their essential contributions to the videos and demos on the CD. The
Greg is an ISA Fellow and received the ISA “Kermit Fischer Environmen-
tal” Award for pH control in 1991, the Control Magazine “Engineer of the
Year” Award for the Process Industry in 1994, and was one of the first
inductees into the Control Magazine “Process Automation Hall of Fame”
in 2001. He received a B.S. in Engineering Physics from Kansas University in 1969 and an M.S. in Control Theory from the University of Missouri–Rolla in 1976. Presently, Greg is an affiliate Professor at Washington University
in Saint Louis, Missouri and is a consultant through EDP Contract Services
in Austin, Texas.
Cell Phone: (314) 703-9981
E-mail: [email protected]
There has been a dynamic development of control over the past 50 years.
Many new methods have appeared. The methods have traditionally been
presented in highly specialized books written for researchers or engineers
with advanced degrees in control theory. These books have been very useful in advancing the state of the art. They are, however, difficult for the average engineer: it is necessary to read many books to get good coverage of advanced control techniques, and the level of mathematics used requires substantial preparation. This is a dilemma, because several of the advanced control techniques have indeed been very beneficial in industry, and more engineers should be aware of them. Even if many details of the new methods are complicated, the basic underlying
ideas are often quite simple. Many methods have also been packaged so
that they are relatively easy to use. It is thus highly desirable to present the
industrially proven control methods to ordinary engineers working in
industry. This book is a first attempt to do this. The book provides a basis
for assessing the benefits of advanced control. It covers auto-tuning,
model predictive control, optimization, estimators, neural networks, fuzzy
control, simulators, expert systems, diagnostics, and performance assess-
ment. The book is written by four seasoned practitioners of control, hav-
ing jointly more than 100 years of real industrial experience in the
development and use of advanced control. The book is well positioned to
provide the bridge over the infamous Gap between Theory and Practice in
control.
Karl J. Astrom
The advent of powerful and friendly integrated software has moved
advanced process control (APC) from the realm of consultants into the
arena of the average process control engineer. The obstacles of infrastruc-
ture and special skill requirements have started to disappear and we are
poised for an accelerated application of APC.
Until recently, most of this knowledge ended up with consultants, and the
success of the application often deteriorated once they departed. There is
now an opportunity for the engineers closest to the process and daily
operations to take a much more active role in the development and sup-
port of APC applications. It is a win-win situation in that the cost of APC
can be reduced by using consultants primarily in a higher-value-added
role of conceptual design and optimization. Even more importantly,
greater understanding, support, and involvement of onsite engineers can
increase the success rate, the on-stream time, and the longevity of an APC
application. This decrease in the cost and increase in the benefits will in
turn lead to a larger number of successful APC installations and a greater
interest in APC as a method of improving process efficiency and capacity.
However, much of the purpose and use of APC has been clouded in the-
ory. The theory is scattered among many books written for graduate
school programs in advanced process control. Application papers typi-
cally concentrate on the benefits of specific APC projects and serve more
as advertisements for particular consulting or software firms than as
implementation guides. Little if anything has been written for the practic-
ing engineer on how to select, design, configure, commission, and tune
APC systems. The purpose of this book is to demystify APC and make it
more accessible. To that end, the book focuses on practice and applications
backed up by enough theory to insure a deeper understanding.
The THEORY section presents the major facets of selected approaches to the
deployment of each APC technology as part of a state-of-the-art tool set.
For brevity, the section does not survey all the possible methodologies and
techniques, but focuses on those that are innovative and simple enough to
be integrated into a distributed control system.
This book covers a great deal of ground. Each of the technologies dis-
cussed here could easily fill a book in itself. However, users today don’t
have the time or inclination to read a lot of material. Lists, hints, rules of
thumb, and concise explanations are employed to save the reader time and
to provide both a better perspective on the whole picture and an improved
ability to drill down to obtain specific implementation guidance. The book
concentrates on what is most important. Users can quickly get to the heart
of the matter without getting lost in the details associated with a specific
tool or suffering from information overload.
Included with the book is a compact disc that contains a set of examples of
the technologies discussed in the book. They demonstrate, by means of a
step-by-step procedure and a detailed dynamic process model, how to
configure, test, and run each APC application. Configuration and case files
use a virtual plant that has a complete scalable Distributed Control System
(DCS) with a suite of APC tools and a high-fidelity plant simulation.
A companion set of PowerPoint slides that illustrates all of the major figures, equations, tables, lists, and rules included in the book is on the CD.
These slides and the hands-on exercises make the book practical as a text-
book for courses on both basic and advanced process control. Chapters 2
and 6 receive the most extensive treatment because introductory courses
are most common. Also, students and users alike need to first concentrate
on getting the basic regulatory control system designed correctly and
tuned properly before moving on to more advanced topics. Most of the
material has been tested in an introductory course on process control for
junior and senior chemical engineers at Washington University in Saint
Louis. These students have demonstrated the ability to immediately apply
these APC tools to example problems after a brief tutorial, using their
The tutorials and presentations on the CD do not require any special soft-
ware or hardware beyond a PC with a media player, speakers, and a dis-
play with a screen area of at least 1024 by 768 pixels.
This book with its appendices and CDs should enable the average process
engineer to develop a good understanding of the representative principles
and techniques of APC. This knowledge will be helpful in setting objec-
tives, evaluating potential APC opportunities, and applying the most
appropriate APC technologies. Readers should feel free to contact the
authors at their e-mail addresses if they have any questions about the use
of the book, exercises, demos, slides, or APC tools described.
All royalties from this book will be given directly to universities, consortia,
and educational programs to promote and enhance the development and
use of advanced process control. A beneficiary of each year’s royalties will
be chosen by the authors.
Practice
Overview
The advanced control projects with the largest benefits usually have made
significant improvements in the basic regulatory control system. While
advanced process control (APC) techniques can partially compensate for
such limitations as missing measurements, excessive dead time, and poor
signal-to-noise ratios, a solid foundation will provide the lowest total cost,
greatest total benefit, and the longest lifecycle for the advanced control
system. Deficiencies in the measurement and the final element can
increase the time required for process testing and identification by a factor
of 5 or more and can significantly reduce the improvement in process
capacity and efficiency provided by APC.
The core of a solid foundation for advanced process control is good mea-
surements and final elements. The measurement is the window into the
process and must be able to provide an undistorted view of small changes
in the process. The final element is the means of affecting the process and
must be able to make small changes to the process. This overview provides a perspective on how these objectives are best met by reducing the
reproducibility error, noise, and interferences in the measurement and
decreasing the stick-slip and dead band in the control valve.
Measurement
Reproducibility is the closeness of agreement of an output for an input
approaching from either direction at the same operating conditions over a
period of time. Repeatability is the closeness of agreement of an output for
successive inputs approaching from the same direction at the same operat-
ing conditions. Reproducibility includes the repeatability as it deteriorates
over time plus drift, and is the better number for control. Another impor-
tant consideration is the interference from changes in process fluids and
operating conditions. Unfortunately, the specifications given by manufac-
turers for such measurements as accuracy, linearity, or rangeability are
extraneous if not misleading because they are either not as important as
reproducibility, drift, and interference or are generated under fixed labora-
tory conditions.
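The distinction can be made concrete with a small calculation. The sketch below uses hypothetical transmitter check data (the readings, drift rate, and function names are illustrative, not from the book): repeatability is the spread of successive same-direction readings, while reproducibility spans both approach directions plus the drift accumulated over time.

```python
def repeatability(readings):
    """Spread of successive outputs for the same input approached
    from the same direction at the same operating conditions."""
    return max(readings) - min(readings)

def reproducibility(up_readings, down_readings, drift_per_month, months):
    """Worst-case agreement for an input approached from either
    direction over a period of time: the span across both approach
    directions plus the drift accumulated over the period."""
    combined = up_readings + down_readings
    span = max(combined) - min(combined)
    return span + abs(drift_per_month) * months

# Hypothetical transmitter checks at the same 50.0-unit input:
up = [50.02, 50.03, 50.01]    # approaching from below
down = [50.08, 50.07, 50.09]  # approaching from above
print(round(repeatability(up), 3))                   # 0.02
print(round(reproducibility(up, down, 0.02, 6), 3))  # 0.2
```

The example shows why reproducibility is the better number for control: the same-direction spread (0.02) understates the error a controller actually sees across directions and months of drift (0.2).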
Final Element
The most common final element is the control valve. Controller outputs
also manipulate the speed of pumps and power to heaters. With final ele-
ments that are totally electronically set, there are no issues of stick and slip
as there are for control valves, and any dead band that exists is purposely
introduced and adjustable to reduce the response to noise. Also, the
response of the manipulated variable (flow for the pump and heat for the
heater) is linear with controller output. Variable-speed drives have essen-
tially no time delay or time constant and rate limiting is normally adjust-
able and not an issue except for surge control. Heaters are inherently slow,
but most temperature processes are also slow.
Usually, a control valve will not move on its own or when the controller
output is constant unless the actuator is undersized or the positioner is
unstable. Also, if the valve were to drift, the positioner and process con-
troller would correct for it. Thus, long-term reproducibility and noise are
not normally issues for control valves. While noise is not generated in the
valve stroke, noise in the process variable can be passed on as rapid
changes in the valve signal, which, if they exceed the resolution limit or
dead band of the control valve, can cause excessive wear and tear and pre-
mature failure of the packing.
Thus, in the normal scheme of things, slip is worse than stick, stick is worse than dead band, and dead band is worse than stroking time. For
sliding stem valves, stick-slip and dead band go hand in hand since the
common cause is excessive packing friction. In fact, if the slip is equal to
the stick, it is effectively the same thing as the resolution limit. The resolu-
tion of sliding stem valves can be estimated as half of the dead band [2.4].
In other words, where you have excessive dead band, you tend also to
have excessive stick-slip. However, in rotary valves, there are different
sources of stick-slip and dead band. A rotary valve could have a large
dead band but little stick-slip.
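The half-of-dead-band estimate can be written down directly. The sketch below is a rule-of-thumb calculation, not a valve model, and the function names are illustrative:

```python
def sliding_stem_resolution(dead_band_pct):
    """Estimate the resolution limit of a sliding stem valve as half
    of its dead band, per the rule of thumb cited above [2.4]."""
    return dead_band_pct / 2.0

def step_will_move_valve(step_pct, dead_band_pct):
    """A controller-output step smaller than the estimated resolution
    limit is unlikely to produce any stem motion at all."""
    return abs(step_pct) >= sliding_stem_resolution(dead_band_pct)

# A valve with 1% dead band has roughly a 0.5% resolution limit, so a
# 0.25% output step would likely be lost in the packing friction.
print(sliding_stem_resolution(1.0))     # 0.5
print(step_will_move_valve(0.25, 1.0))  # False
```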
[Figure: Valve stroke (%) versus signal (%), showing stick-slip and dead band. A pneumatic positioner requires a negative signal to close the valve, whereas a digital positioner will force the valve shut at 0% signal. The effect of slip is worse than stick, stick is worse than dead band, and dead band is worse than stroking time (except for surge control).]
Until recent years, when you asked a control valve manufacturer to esti-
mate the dynamic response of a control valve, you were given the stroking
time of the actuator. Even now, if you ask for a response time that includes
the valve, it will be for a change of 10% in controller output at 50% position, so that the effects of stick-slip and dead band are largely removed [2.5].
In actual operation, the change in controller output per scan is typically
less than 0.5% and can occur at positions less than 20% where the friction
of the sealing surfaces increases the stick-slip. Tests done at these condi-
tions will unearth the real response problems. In valves, stick and slip go
together and can be identified while the loop is operating for fast measure-
ments, as shown in Figure 2-2. Here, stick is the amount of change in the
controller output where there is no change in the process variable and slip
is the rapid change in the process variable divided by the product of the
valve and process gain.
[Figure 2-2: Trends of controller output and ball travel (stroke %) versus time during loop operation. Stick appears as controller-output change with no change in travel, slip as the rapid jump in travel; backlash + stiction is 3.25 percent of controller output, and the dead band equals the peak-to-peak amplitude of the slip.]
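The identification described above can be sketched for trend data: stick is the controller-output change accumulated while the process variable stays flat, and slip is the rapid jump divided by the product of the valve and process gains. The trends, noise band, and gains below are hypothetical:

```python
def estimate_stick(co_trend, pv_trend, pv_noise_band):
    """Stick: the largest controller-output change accumulated while
    the process variable stays within its noise band (no response)."""
    stick, max_stick = 0.0, 0.0
    for i in range(1, len(co_trend)):
        if abs(pv_trend[i] - pv_trend[i - 1]) <= pv_noise_band:
            stick += abs(co_trend[i] - co_trend[i - 1])
            max_stick = max(max_stick, stick)
        else:
            stick = 0.0  # the valve finally moved; start accumulating again
    return max_stick

def estimate_slip(pv_jump, valve_gain, process_gain):
    """Slip: the rapid process-variable jump divided by the product
    of the valve and process gains, per the discussion of Figure 2-2."""
    return pv_jump / (valve_gain * process_gain)

# Hypothetical ramp test: the output creeps up 0.1% per sample while
# the PV sits still, then the stem breaks free and the PV jumps 0.6.
co = [54.0 + 0.1 * i for i in range(6)]
pv = [60.0] * 5 + [60.6]
print(round(estimate_stick(co, pv, pv_noise_band=0.05), 2))            # 0.4
print(round(estimate_slip(0.6, valve_gain=2.0, process_gain=1.5), 2))  # 0.2
```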
change in flow for a change in rotation and the valve gain approaches
zero.
To summarize, the numbers that traditionally have been cited by the man-
ufacturer for valve performance, such as leakage, stroking time, linearity,
and rangeability, do not provide the information needed to measure con-
trol loop performance. The user needs to know the stick-slip, dead band,
and sensitivity of the installed valve assembly at operating conditions.
[Figure: Installed valve gain (% flow per % input) versus valve travel for three valve assemblies, each plotted as a gain model against the EnTech gain specification. Suggested throttle ranges are 25 to 45 degrees, 10 to 60 degrees, and 5 to 75% of travel, respectively.]
Effect on APC
Advanced control tools such as feedforward control, online estimators,
and model predictive controllers can reduce the effect of measurement deficiencies to a significant degree. Feedforward control can bypass the
irregularities and delay in the controlled variable but still must work
through the manipulated variable. Since the exact size of stick and dead
band is extremely variable, undercorrection is normal and the overall
improvement is minimal. Filters can reduce the effect of noise; and model
predictive control can reduce the adverse effect of noise, resolution, and
reproducibility by minimizing the error between a process vector created
from a model of the process and the set point vector [2.6]. However, its
model is based on the assumption that the control valve actually moved
for the recent past changes in the controller output. Thus, advanced con-
trol algorithms are more vulnerable to deficiencies in the control valve
than in the measurement. While a kicker algorithm can theoretically reduce the
effect of dead band and stiction, overcorrection will cause excessive move-
ment similar to slip [2.4]. There is no computational correction for valve
slip. The effect of slip is amplified by high valve sensitivity (valve gain)
and high process sensitivity (process gain). The only solution for slip, and
the best solution for stiction and dead band, is a change in the valve type,
assembly, and accessories or the use of a variable-speed drive.
control system from doing its job. This scrutiny involves an analysis of the
type, location, and installation of the instrument and final element. Both
the degree to which deficiencies in the measurement can be compensated
for by advanced control techniques, and the permissible amount of stick-
slip and dead band, must be part of a cost-benefit analysis.
Opportunity Assessment
In this section, some questions are offered that could form an opportunity assessment (OA) to find
improvements in a basic control system. Question (1) deals with the ability
to overdrive the manipulated variable on startup or for a major set point
change in a batch operation to reduce the amount of time it takes for the
controlled variable to reach set point. This question is also important to the performance of an advanced control system, since overdrive helps reduce the time lag in the manipulated variable seen by the model predictive controller. There is, of
course, a tradeoff between rise time and degree of permissible overshoot,
but in general, for temperature and composition control of volumes with
mixing and for the start of a continuous or batch process, the output
should initially be saturated high but backed off from this limit before the
controlled variable approaches set point. Various algorithms and tuning
methods are available to provide overdrive. The fraction that the startup
time or a batch cycle can be reduced is proportional to the ratio of the
missing area of overdrive to the total area of the manipulated variable dur-
ing the rise time.
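The rule above can be applied to a trend of the manipulated variable. In this sketch the trend and names are hypothetical; since the rule gives a proportionality, the ratio below is an indication of the achievable reduction rather than an exact cycle-time saving.

```python
def overdrive_time_reduction(mv_trend, dt, mv_high_limit):
    """Ratio of the missing overdrive area to the total area of the
    manipulated variable during the rise time. Per the rule above, the
    fractional reduction in startup or batch cycle time is
    proportional to this ratio."""
    total_area = sum(mv * dt for mv in mv_trend)
    missing_area = sum((mv_high_limit - mv) * dt for mv in mv_trend)
    return missing_area / total_area

# Hypothetical rise: the output climbs 50 -> 100% over six samples
# instead of saturating at the 100% high limit for maximum overdrive.
mv = [50.0, 60.0, 70.0, 80.0, 90.0, 100.0]
print(round(overdrive_time_reduction(mv, dt=1.0, mv_high_limit=100.0), 3))  # 0.333
```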
reduce the time to reach a batch set point and eliminate operator attention
requests.
2. Is the variability less in the controlled variable of the loop when the
controller is in manual?
3. Is the variability less in other loops when the controller is in
manual?
4. Is the variability less in the process variable for an important
constraint if the controller gain or rate setting is decreased or
integral time is increased?
5. Would better reproducibility and less noise in measurement reduce
the variability in a process variable for an important constraint?
6. Have tight shutoff valves, high temperature packing, key lock
shafts, vane actuators, scotch yoke actuators, or valves without
digital positioners been used in control loops that affect important
constraints?
7. Have any of the top 20 mistakes been made in an important loop?
(See Appendix D for a list of the mistakes made every year for the
last forty years.)
8. Are there opportunities to linearize the manipulated variable for a
primary controller by creating a secondary loop that encloses the
nonlinearity?
9. Are there opportunities to attenuate a load upset to a primary loop
by creating a secondary loop that encloses the disturbance?
10. Are there flows that can be ratioed and used as a feedforward
signal to enforce a material balance for startup and to compensate
for changes in flow rate?
11. For batch operations, can phases be eliminated by going from
sequential to parallel actions, such as simultaneous heating, filling,
pressurization, and venting?
12. Can batch cycle time be reduced by a decrease in wait times, hold
periods, operator attention requests, manual actions, or lab sample
analysis time?
13. Can batch end points be automated by the use of a property
estimator, trajectory, or sustained rate of change?
14. Can batch cycle time be reduced by overdrive or an all-out run and
coast?
If the variability in a loop decreases when the loop is in manual, it indi-
cates that the loop was doing more harm than good, due to poor control
valve performance, inappropriate tuning, and/or interaction. If the valve
does not respond to small steps (e.g., 0.25% to 0.5%) in the controller out-
put, the oscillations are probably due to the control valve. If an increase in
the controller gain or a decrease in the integral time increases the variabil-
ity, it is mostly due to incorrect tuning. Lastly, if the variability in other
loops is less when a loop is put in manual, the variability is the result of
interaction.
If the variability in a loop increases when the loop is in manual, there are
load upsets that were being attenuated by the loop and it was doing some
good. If the variability stays the same, the fluctuations are mostly due to
noise or lack of measurement reproducibility.
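The diagnostic logic in the last two paragraphs amounts to a short decision tree. The sketch below encodes it; the argument names, thresholds, and returned messages are illustrative, not from the book.

```python
def diagnose_loop_variability(var_in_auto, var_in_manual, noise_band,
                              valve_responds_to_small_steps=None,
                              aggressive_tuning_increases_var=None,
                              other_loops_improve_in_manual=None):
    """Compare loop variability (e.g., standard deviation of the
    controlled variable) with the controller in auto versus manual and
    return the most likely cause, per the rules stated above."""
    if abs(var_in_auto - var_in_manual) <= noise_band:
        return "noise or poor measurement reproducibility"
    if var_in_manual < var_in_auto:
        # The loop was doing more harm than good; find the culprit.
        if valve_responds_to_small_steps is False:
            return "control valve (stick-slip or dead band)"
        if aggressive_tuning_increases_var:
            return "incorrect controller tuning"
        if other_loops_improve_in_manual:
            return "interaction with other loops"
        return "valve, tuning, or interaction (more tests needed)"
    return "loop is attenuating load upsets (doing some good)"

print(diagnose_loop_variability(2.0, 1.0, 0.1,
                                valve_responds_to_small_steps=False))
```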
There are also some obvious flaws that will stand out from some simple
tests. If there are significant non-uniform fluctuations in the measurement
regardless of the mode of the controller, then the selection and installation
of the transmitter are suspect. These problems are most often associated
with insufficient runs of straight pipe upstream or sensing line problems.
One of the last and most obvious opportunities is the use of cascade con-
trol and ratio control. The most common type of cascade control is a flow
loop that deals with the nonlinearity of the control valve characteristic and compensates for pressure upsets, so that the primary control loop can manipulate flow instead of valve position and not see the effect of pressure
swings. The next most common cascade control system uses a jacket, or
coil inlet or output temperature secondary loop, to insulate a primary
crystallizer or reactor temperature control loop from changes in coolant
temperature and the nonlinearity associated with the manipulation of
coolant makeup flow.
The largest and most frequent opportunities in basic control are summa-
rized in Table 2-1 and discussed in detail throughout the rest of Chapter 2.
Simple equations for the fundamental relationship between either the
standard deviation or the peak or integrated error for upsets can be used
for each type of improvement.
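As one example of such a relationship (a standard PI controller result, not necessarily the equation set used in Chapter 3): after a load upset the error returns to zero and only the integral mode holds the shifted output, so the integrated error equals the output shift times the reset time divided by the controller gain.

```python
def integrated_error_pi(delta_output_pct, reset_time_s, controller_gain):
    """Integrated error (% * s) of a PI loop after a load upset that
    permanently shifts the controller output by delta_output_pct:
    IE = delta_CO * Ti / Kc. Better tuning (larger Kc, smaller Ti)
    shrinks the integrated error for the same upset."""
    return delta_output_pct * reset_time_s / controller_gain

# A 5% output shift with a 60 s reset time and a controller gain of 2:
print(integrated_error_pi(5.0, 60.0, 2.0))  # 150.0
```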
Examples
Neutralization Process
Figure 2-4a shows a two-stage neutralization process. The economic vari-
able is yield. The optimum yield is for pH between 6 and 8. A byproduct is
formed that is 1% of the total product when the pH goes above 8. Of
greater concern is the fact that the reaction time increases from 2 minutes
by a factor of ten for each pH unit below 6 pH. The first stage is a static
mixer with a residence time of 2 seconds and the second stage is a well
mixed vessel with a residence time of 20 minutes. The titration curve has a
particularly steep slope between 6 and 8 pH (1 ∆pH per 0.0001 ∆ratio) and
will greatly amplify a valve stick-slip limit cycle, as shown in Figure 2-4b.
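The amplification can be quantified from the stated slope. In this sketch only the slope (1 ΔpH per 0.0001 Δratio) comes from the example; the stick-slip amplitude and reagent-to-feed ratio are hypothetical values chosen for illustration.

```python
PH_PER_DELTA_RATIO = 1.0 / 0.0001  # steep region of the titration curve

def ph_swing_from_stick_slip(stick_slip_pct, reagent_to_feed_ratio):
    """A stick-slip limit cycle of stick_slip_pct (% of reagent flow)
    shifts the reagent-to-feed ratio by ratio * slip / 100; the steep
    titration curve multiplies that shift into pH. (The real swing is
    bounded where the curve flattens outside 6 to 8 pH.)"""
    delta_ratio = reagent_to_feed_ratio * stick_slip_pct / 100.0
    return delta_ratio * PH_PER_DELTA_RATIO

# Even 0.5% stick-slip at a hypothetical 0.02 reagent-to-feed ratio
# produces about a 1 pH swing -- half of the 6 to 8 pH yield window.
print(round(ph_swing_from_stick_slip(0.5, 0.02), 2))  # 1.0
```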
Distillation Process
Figure 2-5a shows a distillation column, feed tank, and a storage tank for
the distillate product. The series of plots in Figure 2-5b are indicative of the
nonlinear relationship between tray temperature and both the distillate-to-
feed ratio (Fd/Ff) and the impurity in the product. The process gain seen
by the temperature loop is the slope of the plot versus Fd/Ff. The inverse
of the slope of the plot of temperature versus impurity concentration
[Figure 2-4a: Two-stage neutralization process. Stage 1 reagent feeds a static mixer with its pH measurement (AT 1-1) located 2 pipe diameters downstream; stage 2 reagent feeds the neutralizer with a pH controller (AC 2-1) on the discharge.]

[Figure 2-4b: Trends of pH, reagent flow, and influent flow showing the limit cycle amplified by the steep titration curve.]
Thermocouple cards with a 400°C span are used for the temperature measurements. The distillate control valve has Graphoil™ packing and a pneumatic positioner. The storage tank residence time is 4 hours and the time delay in the temperature loop is 1 hour. The reflux-to-feed ratio is 10. If the concentration of impurities in the product in the storage tank exceeds the spec by more than 0.1%, the product must be recycled. For every 0.1% reduction in impurity, the steam flow to the reboiler must be increased by 0.1%.

[Figure 2.4c: Basic Neutralizer Control System — feedforward summer and signal characterizer f(x) on the reagent flow loops, pH measurement 20 pipe diameters downstream, and isolation valves that close when the control valve closes.]
4. Move the location of the sensor down into the tray so it always is
immersed in the liquid rather than in the vapor, or even worse, a
splashing liquid.
5. Replace the thermocouple and its DCS input with a 3- or 4-wire
RTD with a smart temperature transmitter.
6. Tune the overhead distillate receiver level controller with a high
controller gain to insure that the effects of small changes in
distillate flow translate into changes in reflux flow and thus
changes in the column temperature.
7. Tune the feed tank level controller with a low controller gain to
smooth out the changes in feed to the column. Consider the use of
error squared control. For a batch-to-continuous transition in an
undersized feed tank, use an adapted velocity limited feedforward
per Appendix B for optimum smoothing.
8. Add signal characterization to the controlled variable to
compensate for the nonlinearity in the process variable depicted in
Figure 2-5b. Provide a faceplate for the operator that displays the
actual tray temperature rather than the linearized controlled
variable of distillate demand.
9. Add a secondary flow controller to each column loop to
compensate for the nonlinearity associated with the control valve
and to prepare the column loop for feedforward control.
10. Add a flow feedforward signal to the temperature and level
controller outputs and display the actual and desired ratio of
distillate to feed. Make sure the feedforward action is active when
the temperature controller is in manual and the operator can easily
go to flow ratio control and adjust the ratio for the startup of the
column.
The benefits from the reduction in variability afforded by the listed
improvements will be estimated in Chapter 3. The improvements are illus-
trated in Figure 2-5c.
Application
General Procedure
1. Track down and correct the source of sustained oscillations. A
power spectrum analyzer may be required to find the loops with
the common period of oscillation. Beware of a slow scan time of
the I/O and controller that will cause a slower than actual period
and a smaller than actual amplitude from aliasing. For trends or
data obtained from data historians, make sure the data highway reporting and the time intervals between data points for historical data are not too slow and the trigger for exception reporting and compression is not set too high. Also, the data must be saved for at least a month to catch different process conditions and modes of operation.

[Figure 2-5a: Distillation column with feed tank, distillate receiver, and storage tank; pressure and level loops, a reflux flow loop (FC 3-3), a steam flow loop (FC 3-4), and a thermocouple temperature loop (TC 3-2) on tray 10.]

[Figure 2-5b: Nonlinear plots of tray temperature versus distillate flow, feed flow, and % impurity, showing the operating point and the resulting impurity errors.]
[Figure 2-5c: Improved distillation control system — feedforward summers (feed flow added to the level and temperature loop outputs), secondary flow controllers with remote set points, a signal characterizer f(x) on the temperature loop, and an RTD with a smart transmitter in place of the thermocouple.]
reset cycle and how the integral time must be increased [2.3].
The best fix, outlined in the Application Detail section, can be
relatively expensive in that it requires a new control valve
designed to minimize backlash and friction or a variable-speed
drive. An alternative that can mitigate but not eliminate the
limit cycle is a level-to-flow cascade loop.
c. Controllers can create periodic upsets from noise if the gain or
rate setting is large enough that the reaction to noise drives the
controller output beyond the dead band of the control valve.
This most often happens in level controllers, where controller
gains can be quite large.
d. Controllers can amplify periodic upsets whose period is close
to the natural period of the control loop. Resonance occurs
from the feedback action of the controller being in phase with
the disturbance oscillation. It most often occurs for control
loops in series that have similar loop time delays such as liquid
pressure and flow, and inline equipment in series (heat
exchangers, static mixers, and desuperheaters).
e. Interacting controllers can cause sustained oscillations. Here,
the output of a controller affects another controller and vice
versa. A steady state relative gain analysis (RGA) can reveal
the nature of the interaction. However, the dynamics must be
considered as well since the interaction is particularly severe if
the periods of oscillation of the loops are similar. The best
solution is a change in pairing of the control loops per the
RGA. If this is not feasible, model predictive control (MPC)
should be used.
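As a quick illustration of the steady state calculation, the RGA is the element-by-element product of the open loop gain matrix and the transpose of its inverse. The 2x2 gain matrix below is a made-up example, not taken from the column in the figures:

```python
import numpy as np

def relative_gain_array(K):
    """Steady state RGA: elementwise product of the open loop gain
    matrix and the transpose of its inverse. Each row and column
    sums to one; diagonal elements near one favor the diagonal
    pairing, while large or negative elements warn of severe
    interaction."""
    K = np.asarray(K, dtype=float)
    return K * np.linalg.inv(K).T

# Hypothetical 2x2 example (e.g., two composition loops):
K = np.array([[0.8, -0.5],
              [0.6, -0.9]])
RGA = relative_gain_array(K)  # diagonal elements of about 1.7
```

A diagonal element well above one, as here, indicates interaction that the diagonal pairing only partly escapes, which is when the dynamics (similar periods of oscillation) should also be examined.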
control band can cause sustained oscillations. With the
disappearance of mechanical sensors, this primarily occurs for
temperature control loops that use thermocouple input cards
instead of narrow-range smart transmitters.
b. Although less common, sustained oscillations can also occur
from controllers tuned so aggressively that they bang back and
forth between output limits, and from nonlinear loops that
have a very high central gain region surrounded by
exceptionally low gain regions. This can occur for control
valves when an insufficient fraction of the system pressure drop
has been allocated as valve drop, for strong acid and base
titration curves, and for the temperature response of some
monomer and water distillations. For process nonlinearities, the addition of
signal characterization of the controlled variable and rate
action can eliminate or mitigate the limit cycle. For valve drop
problems, the size of the piping and/or the pump impeller
may need to be increased.
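The signal characterization mentioned above is typically a piecewise-linear function of the controlled variable. A minimal sketch, with a purely hypothetical breakpoint table chosen to flatten a steep titration curve:

```python
import numpy as np

# Hypothetical breakpoint table for a steep titration curve:
# pH (controlled variable) -> characterized signal (%)
PH_POINTS = [2.0, 4.0, 6.0, 7.0, 8.0, 10.0, 12.0]
OUT_POINTS = [0.0, 10.0, 35.0, 50.0, 65.0, 90.0, 100.0]

def characterize(pv):
    """Piecewise-linear signal characterizer f(x). By expanding the
    scale in the high-gain region around neutrality, the combined
    gain of process plus characterizer stays more nearly constant,
    which can eliminate or mitigate a limit cycle."""
    return float(np.interp(pv, PH_POINTS, OUT_POINTS))
```

Values outside the table are clamped to the first and last breakpoints, which is usually the desired behavior for a characterizer.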
2. Track down the source of long settling times. Here, the oscillations
eventually die out but take too long or cause too much variability.
The most common cause is inappropriate controller tuning, such as
the use of too much reset action (too small an integral time) in
evaporator, reactor, or column temperature or concentration
controllers, or other loops dominated by a large time constant; too
much gain or rate action in level controllers on surge tanks; and too
much gain or rate action in liquid pressure, flow, inline
concentration (blending), or sheet gauge or moisture controllers or
in loops dominated by a large time delay (dead time dominant).
3. Check the sensor selection, installation, and location for
opportunities, per best practices, to improve the reliability,
reproducibility, rangeability, and resolution, and to reduce noise
and decrease loop time delay. Orifice meters and chromatographs
are some of the least reliable measurements and are the biggest
sources of excessively fast and slow noise. Chromatographs are
also the largest source of measurement time delay from sample
transportation and analysis cycle time. Look for ways to eliminate
sensing lines and sample lines by the use of sensors that mount
directly in the pipeline or on the vessel [2.7].
4. Look for ways to reduce the time delay in control loops by changes
in the design of the equipment, piping, instrumentation, final
elements, and the pairing of controlled variables with manipulated
variables.
5. Tune the controllers for the best compromise between robustness
(stability), performance, and smoothness. It is important to realize
that the tuning rules change with the ratio of time delay to time
constant and that all loops will see both load upsets and set point
changes. Methods that focus on set point changes (servo control)
and noise introduced into the measurement are applicable to
aerospace, web, and parts manufacturing but not to processes for
the chemical, petroleum, food, and drug industries and
environmental control. Make sure the tuning method takes into
account the relative degree of dead time and provides the proper
capability for load rejection. Beware of any control loop analysis
that concentrates solely on set point response, and the introduction
of noise or upsets downstream of the process and directly into the
measurement [2.8]. These methods were developed from control
programs in system science or electrical engineering and tend to
ignore the effect of the process and equipment dynamics,
characteristics, and objectives.
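As one example of a rule set that accounts for the ratio of dead time to time constant and provides load rejection capability, Skogestad's SIMC PI rules can be sketched as follows (offered purely as an illustration of such a method, not as this book's recommended tuning):

```python
def simc_pi(Kp, tau, theta, tau_c=None):
    """Skogestad's SIMC PI rules for a first-order-plus-dead-time
    model (gain Kp, time constant tau, dead time theta). The closed
    loop time constant tau_c defaults to theta, a common choice
    balancing performance and robustness. The integral time is
    capped at 4*(tau_c + theta) so lag dominant loops still get
    enough reset action to reject load upsets rather than just
    track set points."""
    if tau_c is None:
        tau_c = theta
    Kc = tau / (Kp * (tau_c + theta))
    Ti = min(tau, 4.0 * (tau_c + theta))
    return Kc, Ti

# Lag dominant loop: Kp = 2 %/%, tau = 10 min, theta = 1 min
Kc, Ti = simc_pi(2.0, 10.0, 1.0)   # Kc = 2.5, Ti = 8.0 min
```

Note how the integral time cap, not the time constant, sets the reset action for the lag dominant case — the load rejection consideration the text warns is missed by set-point-only methods.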
6. Find opportunities to employ cascade control. Wherever there is a
reliable flow measurement and a primary loop whose time delay
and time constant are more than five times slower than a flow
loop, a secondary flow controller should be created. There are
some cases where the dynamics are not appropriate for cascade
control. Examples of undesirable choices would be inline pH-to-
reagent flow and liquid pressure–to-flow cascade control because
the primary and secondary loops have about the same time delay.
7. Look for opportunities to add feedforward control, especially flow
feedforward where manipulated flows are ratioed to a feed flow.
Make sure the feedforward signal does not arrive too soon and
cause inverse response.
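A flow feedforward of this kind can be sketched as a ratio station with a first-order filter used to hold back the feedforward signal so it does not arrive too soon; the class and parameter names are illustrative assumptions, not from the text:

```python
class FilteredFeedforward:
    """Flow feedforward: the manipulated flow target is a ratio of
    the measured feed flow plus the feedback controller trim. The
    first-order filter delays the feedforward action so it does not
    reach the process ahead of the load and cause inverse response."""

    def __init__(self, ratio, filter_time_s, dt_s):
        self.ratio = ratio
        # Discrete first-order filter coefficient for scan time dt_s.
        self.alpha = dt_s / (filter_time_s + dt_s)
        self.filtered_feed = 0.0

    def update(self, feed_flow, feedback_trim):
        """Call once per scan; returns the manipulated flow target."""
        self.filtered_feed += self.alpha * (feed_flow - self.filtered_feed)
        return self.ratio * self.filtered_feed + feedback_trim
```

With the filter time set to zero the block reduces to a plain ratio-plus-trim computation; the filter time is increased only when the feedforward is observed to act before the disturbance does.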
8. For improvements that cannot be covered by the maintenance
budget, the benefits from the reduction in variability can be
estimated by the calculations in Chapter 3 to justify the project.
Application Detail
This section will take a closer look at the methods to improve the response
of valves and measurements, reduce the total loop time delay, tune con-
trollers, employ cascade control, and add feedforward control.
Valve Selection
Control valves are often selected based on the lowest cost valve that has
the required materials of construction. Often tight shutoff is sought.
Nowhere in the valve specification is there a requirement that the control
valve move or respond to a change in signal. Consequently, rotary valves
are chosen because they are the least expensive and offer models with low
leakage rates. They are also thought to offer the highest rangeability. In
reality, the rotary valve has the least usable rangeability because the
installed characteristic gets too flat for small and large controller outputs.
Figure 2-3a shows how the characteristic is too flat below 5 degrees and
above 45 degrees for a butterfly valve. Figure 2-3b shows how the charac-
teristic is too flat below 10 degrees and above 60 degrees for a ball valve. If
you further take into account that the stick-slip significantly increases
when these valves are less than 15 degrees open, the actual usable range-
ability of these valves is less than 20:1, instead of the 200:1 and 400:1 stated
in the literature.
By contrast, the sliding stem valve installed characteristic doesn’t get too
flat until it gets below 5% open or above 75% open as illustrated in Figure
2-3c. Also, its stick-slip is an order of magnitude or more less and usually
doesn’t increase dramatically until the valve is less than 5% open. Also,
unlike the rotary valve, the trim movement of a sliding stem valve closely
matches the actuator shaft movement so that a digital positioner, whose
feedback is typically actuator-stem position, can, by aggressive tuning,
actually compensate for high packing friction. As a result, the real range-
ability of sliding stem valves with digital positioners is 40:1.
Of course, valve manufacturers who offer only rotary control valves will
develop clever ways of diverting attention from these issues or even pitch
the opposite by the use of labels like “high performance” and “high range-
ability” that ignore flat installed characteristics and stick-slip. The user
must realize that “high performance” indicates the ability of the valve to
provide tight shutoff and to withstand high temperatures. These same fea-
tures translate into excessive friction and low performance in terms of con-
trol. The use of a digital positioner cannot correct for the inherent stick slip
problems of rotary valves and can essentially deceive the user into think-
ing it is doing a great job by fancy plots and statistics of the step response
of the actuator stem position. Unfortunately, the ball or disc position does
not track the actuator shaft position, because of gaps in linkages and tolerances.
New designs of sliding stem (globe) valves, such as that shown in Figure
2-6, reduce the amount of metal used in the body and pockets and crevices
where process material can stagnate and accumulate. This makes the valve
more competitive with the rotary valve for exotic materials, large line
sizes, and fouling or slurry service. Above 6 inches in line size, the cost of
sliding stem (globe) valves can become large enough to warrant further
investigation. If the reduction in stick-slip and loop variability offered by a
sliding stem valve doesn’t provide an acceptable rate of return on the
additional investment, the user should take a closer look at rotary valves,
but avoid any valves originally designed for isolation or interlocks. Sepa-
rate automated “high performance” or “on-off” valves should be used for
isolation or interlocks, and low friction valves for throttling service. Since
many of the rotary valves are flangeless (wafer bodies), the lifecycle cost
should include not only the cost of increased variability, but the increased
difficulty of proper installation and alignment and the increased risk of a
safety incident and reportable release of hazardous materials.
Figure 2-6. Sliding Stem Valve with Streamlined Passages and Less Metal (callouts: integral flange, stem-guided trim, retainer seat ring, streamlined passages)
Rotary valves must also pass the checks on the maximum pressure drop.
The rotary valve must meet the maximum pressure drop rating at shutoff
and the maximum allowable pressure drop to avoid choked flow, flashing,
cavitation, and exceeding the noise limit. In general, sliding stem valves
offer higher pressure drop ratings, higher allowable pressure drops to pre-
vent cavitation, and noise reduction trim, and are thus the first choice for
high pressure, boiler-feed water, steam and condensate systems.
If a rotary valve is still the best choice, make sure the connection of the
actuator shaft to the ball or disc stem is a splined connection, as shown in
Figure 2-7, to minimize the tolerance and associated play in the connection
so that the backlash is less than 0.5%. Key lock connections can cause a
backlash of 8%. Also, the shaft diameter should be large and the shaft
length should be short so that shaft windup does not cause a stick-slip
greater than 0.5% [2.10].
low friction packing, with the greatest deterioration found in designs that
employ keyed connections, long slender shafts, and high friction sealing
of surfaces for tight shutoff. To help avoid the many traps of creative
advertising, the user should keep in mind the popular myths listed in
Table 2-2.
A piston actuator can reduce the stroking time of large valves once a valve
starts to move. However, the design of most pistons exhibits poor resolu-
tion and dead band that will cause an exceptionally slow response to small
changes in valve position that can get worse if the cylinders are not prop-
erly lubricated. For small changes in valve position, a diaphragm actuator
is generally faster and more precise. Also, a diaphragm actuator does not
require lubrication or as much maintenance unless its temperature rating
is exceeded.
Figure 2-8. Crevice-free Sanitary Valve with High Rangeability and Sensitivity
In all valves, there is a valve prestroke time delay: the time it takes for
enough air to move into or out of the actuator to change the air pressure
enough to start to move the actuator stem. The stroking time is the time
required to complete its transition to the new stem position after the actua-
tor starts to move. The tests to document the prestroke time delay and the
stroking time typically consisted of 10% or larger steps, done with the
valve disconnected. The results depended solely on the type and size of
actuator and the type and flow capacity of the actuator connections and
accessories; they did not include the effect of valve dead band or stick slip.
A ramp (a series of small steps held for the loop scan time) would better
duplicate the actual valve response for closed-loop control. The use of a
ramp is particularly important for pneumatic positioners and boosters
because they exhibit a drastic increase in response time as you approach
the resolution limit of the linkages and flapper nozzle assembly. To
quantify dead band, stick, and slip, a series of steps is used, first for
a reversal of position and then in the same direction. Each step is held for the
prestroke dead time and stroking time identified from a 10% step. The
steps are continued until movement occurs. The actual valve movement in
excess of the size of the last step is indicative of the amount of slip.
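The test sequence described above can be generated as a simple table of signal versus time; the step size and hold time below are illustrative assumptions, with the hold time taken as the prestroke dead time plus stroking time found beforehand from a 10% step:

```python
def resolution_test_sequence(start_pct, step_pct, n_steps, hold_s):
    """Build a (time_s, signal_pct) profile: a series of equal small
    steps in one direction, each held long enough for the valve to
    respond (prestroke dead time plus stroking time from a 10% step).
    Steps are continued until movement occurs; movement in excess of
    the last step size indicates slip."""
    return [(i * hold_s, start_pct + i * step_pct)
            for i in range(n_steps + 1)]

# First steps for a reversal of direction, then in the same direction:
reversal = resolution_test_sequence(50.0, -0.25, 8, 10.0)
forward = resolution_test_sequence(48.0, 0.25, 8, 10.0)
```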
Figure 2-9 shows how the response time changes as a function of the type
of control valve, shaft connections, actuator, and positioner. Diaphragm
actuators, sliding stem valves, and digital positioners have the fastest
response by far to a range of small step sizes, which is the goal of 99% of
all control valves. The combination of an electrical or hydraulic actuator
and a sliding stem valve can have an even better resolution. The only dead
band is what is introduced in the setup of the positioner to eliminate
dither. The prestroke dead time is essentially zero but the stroking time
increases proportionally to the step size and can become quite large for the
electrical actuator. Hydraulic actuators provide the fastest response for
small and large step sizes but are complex and expensive.
Valve Installation
The installation and location requirements for a control valve are generally
less than for a sensor. Ideally, control valves should have the same straight
run of pipe upstream and downstream as a differential head flow meter,
since both constitute a variable orifice [2.9]. Adherence to the straight-run
requirements rarely occurs in industry but is common in the flow test labs
used to establish flow characteristics and flow coefficients [2.9]. The repro-
ducibility error from an erratic flow profile is not as important for a final
element as it is for a measurement because the control loop will drive the
manipulated variable as necessary to reach set point. However, if the flow
is going to be computed through the control valve based on valve position
and pressure drop, the reproducibility of the resulting flow measurement,
and hence the straight-run requirements, become more important.
[Figure: log-log plot of response time (0.4 to 400 sec) versus step size (0.1 to 100%); all valves look good for about a 10% step]
1 - variable speed drive with dead band adjustment set equal to zero
2 - sliding stem valve with diaphragm actuator and a digital positioner
3 - sliding stem valve with diaphragm actuator and pneumatic positioner
4 - rotary valve with piston actuator and digital positioner
5 - rotary valve (tight shutoff) with piston actuator and pneumatic positioner
6 - large valve or damper with any type of positioner
7 - small valve with any type of positioner
Figure 2-9. Response Times for Different Types of Final Elements (Valves, Actuators and Positioners)
For partially filled lines, there is an excessive time delay even when the
valve stays open. A change in valve position causes a crest or valley in the
wave to travel down the pipe or channel. The transportation delay is the
distance divided by the velocity of the wave but the velocity is difficult to
estimate. For a very low flow down a vertical pipe, the velocity of a falling
film can be computed [2.14]. Whenever the control valve closes, manipu-
lated fluid in the downstream piping and injectors or dip tubes slowly
migrates into the equipment or destination and the process fluid backfills
the same volume. The result is a long delay between the closure of the
valve and the end of manipulated fluid flow and a similarly long delay
between the opening of the control valve and the start of the manipulated
fluid flow into the equipment or destination.
For a pressurized, completely full pipeline, dip tube, and injector, the time
delay can be estimated as the volume between the valve and the injection
point divided by flow of the manipulated fluid. For pH control systems
where the manipulated fluid is a reagent, the flow can be as low as one
gallon per hour and just one gallon of volume downstream of the valve
can result in one hour of time delay every time the reagent control valve
closes [2.14].
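The delay estimate above is simply the downstream volume divided by the manipulated flow:

```python
def injection_delay_h(downstream_volume_gal, flow_gph):
    """Transportation delay for a pressurized, completely full
    pipeline, dip tube, and injector: the volume between the valve
    and the injection point divided by the manipulated fluid flow."""
    return downstream_volume_gal / flow_gph

# 1 gallon of downstream volume at a 1 gph reagent flow:
delay = injection_delay_h(1.0, 1.0)   # 1 hour of dead time
```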
The control valve should have block and drain valves so that it can be
removed safely and easily. For large continuous processes, it is desirable to
have a manual bypass valve to keep the unit online while the valve is
tested or repaired. Also, the outlet isolation valve can be closed and the
bypass valve, shown in Figure 2-10, opened for inline testing and tuning
of the digital positioner at process pressures and temperatures. For slurry
service, rotary valves can be mounted in a vertical-flow up pipe to pro-
mote self-draining and prevent solids buildup. For sliding stem valves, a
vertical mounting will cause additional wear of the packing from the
weight of the actuator.
[Figure: flow meter (FT 1-1) with flush and drain valves; A and B mark the upstream and downstream straight runs]
Figure 2-10. Installation Requirements for a Flow Meter and Control Valve
The time from minimum to maximum speed is adjustable within the limits
imposed by the impeller inertia and the motor horsepower. The factory
setting is conservative. The speed response is a velocity-limited ramp rate
with no time delay or lag. Consequently, the response for small speed
changes is fast. For example, if it takes a drive 9 seconds to go from 10% to
100% speed, it will take only 0.1 second to change the speed by 1%.
For strong acid and base pH systems, the requirement for precise adjust-
ment would best be met by a VSD, but the flow rates are often too small
for a centrifugal pump and the location of the pump on the ground creates
a huge reagent delivery transportation delay. Instead, an electronically set
metering pump is used with the piping designed to stay full.
Measurement Selection
For flow measurements, inline meters such as coriolis mass flow meters,
magnetic flow meters, vortex shedding meters, and thermal mass flow
meters should be considered because these meters eliminate sensing lines,
external connections, and small holes that are the largest source of errors,
failures, leaks, and maintenance. To reduce the effect of flow profiles and
changes in process fluid, the preferred order of selection is the order listed.
Coriolis mass flow meters require no straight runs, are not affected by Rey-
nolds Number or fluid properties, and have by far the best reliability,
reproducibility, rangeability, and resolution for the measurement of both
mass flow and density. The coriolis meter is the only true mass flow meter.
Magnetic and vortex flow meters are velocity volumetric devices and can
be used to compute mass flow only for a fixed and known composition by
the measurement of temperature and pressure. This is also true for pitot
tubes with differential pressure, pressure, and temperature transmitters
that are advertised as mass flow meters. The thermal mass flow meter is
not a volumetric meter but its reading will depend upon the heat capacity
and thus the composition of the fluid.
Coriolis flow meters above 2 inches get expensive, but still may be justi-
fied where the ability to measure and control a mass balance is important.
Often overlooked are the benefits of an accurate density measurement and
an approximate temperature measurement (the temperature sensor is on
the outside surface of the tube and is not in contact with the fluid) to create
online estimators of fluid composition. Unfortunately, the fluid may be
too corrosive for the materials of construction offered, the fluid tempera-
ture or concentration of particles may be too high, or the pipe size too
large.
Coriolis flow meters are so accurate that they can be used to replace load
cells or weigh tanks for batch charges. Also, for flat titration curves and
constant composition feeds and reagents, simple mass ratio control with
coriolis flow meters has proven to be more accurate and more reliable than
a pH control. The hardware cost of a coriolis flow meter is high compared
to a differential head meter, but the installation cost may be less since there
are no straight-run requirements or additional measurements to compen-
sate for pressure and temperature. The project cost may be still higher, but
the life-cycle cost is often significantly better for the coriolis meter, as
shown in Figure 2-11, because the coriolis meter requires less maintenance
and accumulates benefits from tighter control.
[Figure 2-11: benefits ($) versus time for coriolis and orifice meters, showing the lost opportunity after 10 years. The higher purchase price of the coriolis technology is partially offset by lower installation costs and will still often lead to higher project costs, but can lead to lower lifecycle costs from less maintenance and better yields from more accurate mass balances and control of stoichiometry.]
Next to the coriolis mass flow meter, the magnetic flow meter has the fewest
interferences since it is not affected by either Reynolds Number or viscos-
ity and is relatively insensitive to flow profile. The main limitation is that
the fluid conductivity must be greater than 1 micromho/cm (0.1 for spe-
cial units). For erosive service, ceramic linings are offered.
The flow profile is a big factor for the vortex meter, particularly near the
low end of the meter’s range. If the velocity drops below 1 fps, or Rey-
nolds Number is less than 20,000, or the viscosity is greater than 30 centi-
poises, or the concentration of particles is above 2%, the vortices are not
shed uniformly and the vortex frequency deviates from a proportional
relationship to flow. At low velocities the reading can become very erratic.
The hardware and setup costs of the radar level measurement have decreased
and many of the calibration complexities have been automated to the
point where the lifecycle cost is often less than the differential pressure
(d/p) method of level measurement. It is the most accurate of the common
types of level measurement. The main limitation of radar is that the fluid
must have a dielectric constant greater than 2. For tall narrow vessels, the
minimum 8-degree angle of divergence of the beam may result in the
gauge not being able to discern the bottom of the vessel. The gauge must be pro-
grammed to ignore any obstruction, including dip tubes and agitator
blades, and typically requires an empty tank at some point during the cal-
ibration procedure. Pulsed lasers that are not adversely affected by dust or
vapor can potentially become an even more accurate method of level mea-
surement [2.17]. The angle of divergence is less than a degree and there is
no dielectric requirement. However, a relatively clean sight glass window
is required.
The differential pressure level measurement depends upon density of the fluid
and the condition of the sensing and equalization lines. A second d/p
with both connections below the minimum level can be added to provide
a representative measurement of the density if the vessel is well mixed,
although the accuracy is usually good to only two significant digits. The
sensing and equalization lines can be eliminated by the use of capillary
systems or transmitters mounted directly on vessel flanges for both the
bottom total pressure and top equalization pressure. However, the error
introduced by capillary systems can be significant if there are bubbles in
the fill, or differences in the temperature or length of the capillary. The
communication of signals for the computation of level from dual transmit-
ters is best done digitally to eliminate digital-to-analog (D/A) and analog-
to-digital (A/D) converter errors. Even so, the error increases as the vessel
pressure increases and can become unacceptable because the bottom d/p
measures both liquid head and vessel pressure relative to atmospheric
pressure.
Nuclear level measurements are completely isolated from the process, but are
affected by density unless a second device is added. Strip sources are rec-
ommended to eliminate the need for compensation of the difference in
radiation path length from a point source to the strip detector. The license
procedure is considered a hassle and anything nuclear tends to scare peo-
ple even though the exposure is less than what they receive from the sun.
For analytical measurements, inline meters and probes that do not require
sample systems will pay off by the elimination of sample transportation
delays and the reduction in the life-cycle cost of a sample system. If a den-
sity measurement is sufficient, the coriolis meter offers the fastest and
most accurate and reliable response. For simple water mixture and com-
plex general mixtures, inline meters such as microwave and nuclear mag-
netic resonance (NMR) should be investigated respectively. [Microwave
can be used for simple 2 or 3 component water mixtures, while NMR can
handle aqueous and non-aqueous complex mixtures]. For coal, oil and
mineral slurries, the prompt gamma neutron activation analyzer can
Measurement Installation
Sensing lines should be eliminated wherever possible by direct mounting
a differential pressure (d/p) transmitter for flow measurement or pressure
measurement, as shown in Figure 2-12a and 2-12b, to the pipe connection.
Isolation, flush, drain, and equalization valves are necessary to minimize
the exposure to chemicals during the removal of the transmitter. An equal-
ization valve is used when both the high and low sides are connected to
the process to offer the opportunity for a zero adjustment and to protect
the d/p from “over-range” from just seeing the upstream pressure.
[Figure 2-12a and 2-12b: direct-mounted d/p transmitters for flow (FT 1-1) and pressure (PT 1-1) with large bore connections, equalization valve, and flush and drain valves]
shown in Figure 2-10 to minimize the distortion of the flow profile and to
provide a more constant pressure. For liquid flow, the upstream location
helps prevent a partially filled meter, besides reducing exposure to flash-
ing and cavitation. Bubbles adversely affect the accuracy of all flow meters
and the implosion of bubbles can cause serious damage in addition to
erratic readings. Isolation, flush, and drain valves are again recommended
for safe removal of an inline meter, although some practices for high haz-
ardous materials seek to minimize the number of connections and leak
points. A bypass valve allows the plant to run on manual while the instru-
ment is repaired. The isolation valves upstream of the flow meter shown
must be wide open when the flow meter is in service for the same reasons
that a control valve is undesirable upstream of a flow meter. If there are
solids, the meter can be installed in vertical pipe with flow up to help
drain the piping. For coriolis meters, a single straight tube is desirable to
eliminate erosion at the bends of a U-tube and unequal distribution of sol-
ids in a dual tube. The meter size should be chosen to provide the opti-
mum velocity to minimize the effect of solids concentration on accuracy.
Figure 2-10 and Table 2-3 show the relative straight run requirements for
different types of flow meters (the A and B values are in Table 2-3). An ori-
fice with a large beta ratio (high orifice bore to inside pipe diameter ratio)
has the greatest upstream straight-run requirement, followed by the vor-
tex meter operating at a low fluid velocity. The upstream requirements
also increase for multiple fittings in different planes or valves upstream
that are not completely open or are partially plugged. The upstream
straight-run requirement can be dramatically reduced by the use of
straightening vanes. The manufacturer should be consulted for actual
requirements based on your piping details and process conditions. The
coriolis flow meter has no upstream or downstream straight-run require-
ments.
Sample lines should be eliminated wherever possible for all types of elec-
trodes by the use of insertion or injector assemblies. Injector electrode
holders with manual or automatic retraction are now offered. Even though
these assemblies have built-in isolation, flush, and drain capability, the
user may choose to have the piping set up as shown in Figure 2-13a and 2-
13b to provide additional protection for hazardous materials. For three
electrodes, a series arrangement, as shown in Figure 2-13b, is favored to
help keep the velocity and concentration the same for all three meters. The
electrodes must be pointed down at a 30- to 60-degree angle, as shown in
Figure 2-13c, to prevent a bubble from becoming lodged in the tip or at an
internal electrode. The first electrode should be at least 20 pipe diameters
from the discharge of a pump or static mixer to reduce pressure pulsation
and facilitate some mixing. The electrodes should also be separated by 10
pipe diameters to help establish a more uniform velocity and they should
be inserted far enough into the line or vessel to get a representative read-
ing. The mounting of electrodes in a pipeline with a 5 to 9 fps velocity is
preferred to a vessel because the higher velocity in the pipe makes the
electrodes respond faster and keeps them cleaner. The bulk fluid velocity
in even highly agitated vessels rarely exceeds 1 fps except near the agitator
blade tip. For solids and high caustic or temperature service, there is a
compromise to keep the velocity low to decrease erosion and chemical
attack and yet high enough to reduce coatings. The slot in the protective
shroud of the electrode tip should be oriented to shield the electrode from
abrasion and chemical attack but provide a sweeping action to decrease
fouling. Finally, the transportation delay between the vessel or the point of
reagent addition and the electrodes should not exceed 5 seconds.
[Figure 2-13a: parallel electrode installation (AE 1-1, 1-2, 1-3) with flush connections; the pressure drop for each branch must be equal to keep the velocities equal, and a throttle valve adjusts the velocity. Figure 2-13b: series installation with the probes 10 pipe diameters apart; differences in velocity, concentration, and temperature are less for probes in series. Figure 2-13c: electrode mounted 20 to 80 degrees from horizontal.]
[Figure: temperature element (TE 1-1) mounted in an elbow with flush and drain valves, 20 pipe diameters from a desuperheater or static mixer; from Advanced Temperature Measurement and Control, ISA, 1995, p. 14, Figure 2.3]
the water purge and the nitrogen purge lines must each have a check valve
before being combined. The sensing and equalization lines for low to
moderate vessel pressures can be
eliminated by the use of separate d/ps for the total pressure and equaliza-
tion, direct mounted on the bottom and top nozzles as shown in Figure 2-
16b. A third d/p can be direct mounted at an intermediate nozzle to com-
pute fluid density, and a temperature sensor can be used to compensate
for the changes in the dimensions of the vessel, to provide a more accurate
level measurement. Flush connections or extended diaphragms are used
for the lower nozzles to help keep the diaphragm clean. The signals are
communicated digitally to a computer or a programmable electronic con-
trol system.
[Figure residue: level transmitter LT 1-1 (d/p) with purge, flush, and drain connections on the high (H) and low (L) sides and an equalization line.]
Figure 2-16a. Purged Sensing and Equalization Lines for Level Measurement
[Figure residue (Figure 2-16b): direct-mounted digital d/p transmitters LT 1-1 and LT 1-2 with flush and drain connections, and density/level computation LY 1-3 with LT 1-3.]
Thus, for purposes of this chapter, the open loop response will be a first-
order response that can be characterized by a total time delay, a negative
or positive feedback time constant, and a steady state gain. The time it
takes the process variable to get out of the noise band after a step change
in the controller output (CO) is the observed time delay (τd), or dead time.
It excludes any time delay due to valve dead band or stiction. The Theory
section shows how to estimate the additional time delay from valve dead
band. The time it takes after the time delay for the response of the process
variable (PV) to reach 63% of the final change for a self-regulating
response is the negative feedback open loop time constant (το), or time lag.
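The characterization just described can be sketched numerically. The script below is an illustration, not from the text: it scans a synthetic step-test record (a true first-order-plus-dead-time response with assumed parameters) for the first exit from the noise band and the 63% response time.

```python
# Estimate the three FOPDT parameters (dead time, time constant, gain)
# from a recorded step test, per the definitions above.
import math

def fopdt_from_step(t, pv, dco, noise_band):
    pv0, dpv = pv[0], pv[-1] - pv[0]
    # observed dead time: first time the PV gets out of the noise band
    td = next(ti for ti, y in zip(t, pv) if abs(y - pv0) > noise_band)
    # open loop time constant: time after td to reach 63% of the final change
    t63 = next(ti for ti, y in zip(t, pv) if abs(y - pv0) >= 0.63 * abs(dpv))
    tau = t63 - td
    ko = dpv / dco   # steady state (open loop) gain, %/%
    return td, tau, ko

# synthetic step test: 10% CO step, Ko = 2, dead time = 2 s, time constant = 10 s
t = [0.1 * i for i in range(1500)]
pv = [50.0 if ti < 2.0 else
      50.0 + 2.0 * 10.0 * (1 - math.exp(-(ti - 2.0) / 10.0)) for ti in t]
td, tau, ko = fopdt_from_step(t, pv, dco=10.0, noise_band=0.5)
print(td, tau, ko)   # roughly 2.3 s, 9.7 s, 2 %/% (the noise band adds to td)
```

Note that the noise band itself adds a little apparent dead time, which is consistent with the definition above: the observed dead time is the time to get out of the noise band.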
The open loop response of each major component of the plant (control
valve, process, and measurement) in Figure 2-19 has a first-order-plus-
dead-time approximation. The controller also contributes a time delay
from the scan time and time constants from the signal filters. Material and
energy balances are used in the Theory section to show the origin of the process time constants and gains and how they change with operating conditions. Equations in the Theory section also estimate the dead time from
mixing, transportation delay, and valve dead band.
[Figure residue: open loop responses for curve 0 = self-regulating, curve 1 = integrating, and curve 2 = runaway, marked with the time delay τd, time constants τ and τ’, the ramp ΔPV/Δt after a load upset, acceleration, and 0.72∗Eo; and a block diagram of the loop showing the manipulated variable gain Kmv (ΔMV in gpm per ΔCO in %), process gain Kpv (ΔPV in pH per ΔMV), controlled variable gain Kcv (ΔCV in % per ΔPV), the open loop gain Ko, and the PID controller with settings Kc, Ti, and Td acting on a local set point.]
Equations
Ko = Δ%CV / Δ%CO = Kmv * Kpv * Kcv (2-1)
Ki = (Δ%PV / Δt) / Δ%CO = Ko / τo (2-2)
where:
Ko = open loop gain (Δ%CV per Δ%CO)
Kmv = manipulated variable (valve) gain (e.g., gpm per % CO)
Kpv = process gain (e.g., pH per gpm)
Kcv = controlled variable (measurement) gain (e.g., % span per pH)
Ki = integrating gain (%/sec per % CO)
τo = open loop time constant
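Equations 2-1 and 2-2 can be checked with a few illustrative numbers. The component gains below are assumptions for a pH loop (a 0–14 pH measurement span and a 40 s open loop time constant), not values from the text:

```python
# Numerical check of Equations 2-1 and 2-2 with illustrative component gains.
kmv = 0.5          # valve gain, gpm per % CO (assumed)
kpv = 0.2          # process gain, pH per gpm (assumed)
kcv = 100.0 / 14   # measurement gain, % span per pH for a 0-14 pH span

ko = kmv * kpv * kcv   # Equation 2-1: open loop gain, %/%
tau_o = 40.0           # open loop time constant, s (assumed)
ki = ko / tau_o        # Equation 2-2: integrating gain, %/s per % CO
print(ko, ki)
```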
The total loop time delay (dead time) is the most important of the three
key variables that describe the open loop response of a control loop. It
delays the ability of a controller to see or react to a disturbance. The
minimum peak error is the maximum excursion of the process variable
during this time delay. The oscillation period is also proportional to the
time delay. Thus the integrated error is proportional to the time delay
squared for self-regulating processes with a large time constant that limits
the excursion within the time delay to less than the full change of the
process variable. The equations to approximate these relationships are
developed in the Theory section.
Perfect control is theoretically possible if the total time delay is zero and
there is just a single process time constant. However, in industrial pro-
cesses the total loop time delay is never negligible because even if the pro-
cess time delay is negligible, the addition of a measurement, valve, and
digital controller adds time delay. For flow, pressure, and level control,
most of the time delay in a control loop comes from the automation system
[2.1]. While the time delay cannot be zero, the objective, particularly for
loops with operating point nonlinearities, like the pH and column temper-
ature control examples, is to reduce the total time delay. This is because the
extent of the excursion on the titration curve or tray temperature curve,
and hence the effect of the nonlinearity, is decreased by a reduction in loop
dead time.
Pure dead times come from transportation delays (pipes, sample lines,
static mixers, coils, jackets, conveyors, sheet lines, and textile fiber lines),
valves (prestroke dead time, dead band and stiction), and anything digital
or with a cycle time (microprocessors and analyzers). Equivalent dead
time comes from time constants in series from instrumentation (sensor
time lags, thermowell time lags, and transmitter filter times and dampen-
ing adjustments), analog input cards, (analog filters), and process variable
filter times (digital filters). The exact values are not important, just the rel-
ative sizes. The engineer or technician should work on the largest, most
cost-effective sources of dead time [it is not just the job of the control engi-
neer but also the process engineer who is responsible for equipment
design and the selection of instruments for small automation projects and
the technician who often determines the sensor location and loop scan
times. Some plants don’t have a control engineer].
The largest time constant does not have to be in the process. For processes
not dominated by time delay, an increase in the time constant, no matter
where it appears in the loop, will allow an increase in controller gain, even
though the final effect is not beneficial. For example, a large time constant
in the measurement, such as a large thermowell lag or process variable fil-
ter time setting, will allow a higher controller gain and give the illusion of
better control because the controlled variable is an attenuated version of
the real process variable. Equation 3-2 can be used to estimate the effect.
All time constants much smaller than the largest time constant are con-
verted to equivalent dead time. While the fraction converted to dead time
depends upon the relative size of the small to the large time constant, very
small time constants can be summed up as totally converted to dead time
because it is difficult to find and estimate all the small time constants and
sources of dead time. Thus, the total time delay for a control loop is the
sum of all the pure time delays and the small time constants. Dead time
compensators and model predictive controllers can account for the effect
of time delay on the response to changes in the controller output, but the
minimum peak error for unmeasured load upsets and the initial delay
before the start of the set point response is still fixed by the total time
delay.
The open loop time constant is approximately the largest of the time con-
stants plus the portion of all of the small time constants not converted to
time delay. If each of the small time constants is less than 10% of the larg-
est time constant, so that each is essentially converted to equivalent time
delay, the largest time constant can be considered to be the open loop time
constant. The purpose here is to show the relative sizes and sources of
time delay. There are too many unknowns to calculate an exact value. In
industry, the open loop time constant, total time delay, and open loop gain
cannot be accurately calculated, except possibly for flow and level, and
must be obtained by plant tests.
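The bookkeeping described in the last two paragraphs can be sketched in a few lines. All the delays and time constants below are illustrative values, not from the text:

```python
# Sum the pure delays and the small time constants into a total loop dead
# time, and take the largest time constant as the open loop time constant
# when every other time constant is much smaller (here each < 10% of it).

pure_delays = [1.5, 0.5, 0.25]           # s: transport delay, analyzer cycle, scan
time_constants = [30.0, 2.0, 1.0, 0.5]   # s: process, thermowell, filter, sensor

largest = max(time_constants)
small = [tc for tc in time_constants if tc != largest]

# small time constants are treated as fully converted to equivalent dead time
total_dead_time = sum(pure_delays) + sum(small)
open_loop_tau = largest

print(total_dead_time, open_loop_tau)   # 5.75 s and 30.0 s
```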
The open loop gain (sensitivity) can be too low or too high. If the valve
gain is too low (valve sensitivity is too low), the controller has little effect
on the process and the controlled variable will wander and be at the mercy
of upsets [2.5]. If the process or measurement gain (sensitivity) is too low,
the controlled variable is not representative of the process performance. If
the valve gain (sensitivity) is too high, the effect of stick-slip is excessive
and just the act of putting a controller in automatic can cause unacceptable
oscillations. If the process or measurement gain (sensitivity) is too high,
the nearly full-scale oscillations will scare most people even if the actual
performance of the process is acceptable. The classic example of this prob-
lem is the pH control of a strong acid and base system with a static mixer.
Try explaining to the operator that the actual change in hydrogen ion con-
centration is tiny for a system that oscillates between 2 and 12 pH.
The total loop time delay, open loop time constant, and open loop gain are
rarely constant but rather a function of operating conditions. For example,
the process gain for composition and temperature control is inversely pro-
portional to feed rate, and the process time constant is inversely propor-
tional to feed flow for back-mixed volumes, whereas the time delay is
inversely proportional to feed flow for plug flow. The theory section
shows the source of these process nonlinearities.
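The scaling described above can be written out explicitly. The reference values below are assumptions for illustration; the inverse-with-feed relationships are the ones stated in the text (gain and, for back-mixed volumes, the time constant; the time delay for plug flow):

```python
# Illustrative scaling of the open loop parameters with feed flow:
# process gain ~ 1/feed, time constant ~ 1/feed (back-mixed volume),
# time delay ~ 1/feed (plug flow).

def scaled_dynamics(feed, feed_ref=100.0, kp_ref=1.0, tau_ref=20.0, td_ref=5.0):
    scale = feed_ref / feed
    return kp_ref * scale, tau_ref * scale, td_ref * scale

for feed in (50.0, 100.0, 200.0):
    kp, tau, td = scaled_dynamics(feed)
    print(feed, kp, tau, td)   # halving feed doubles all three
```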
Increasingly, the control loop set point is changed by either a master loop
for cascade control or a model predictive controller for advanced control,
or by a unit operation for an automated startup sequence, product transi-
tion, and batch operation. Even for those loops whose set point is not
changed from a remote source, the local set point is a handle used for star-
tup, sweet spots, and to relieve boredom. Operators are notorious for
moving set points despite claims to the contrary. Plus, the operator has to
start up the unit, which may consist of a series of set point changes as he
walks the unit up to operating conditions.
On the other hand, if there were no process upsets, you wouldn’t need a
controller: You could find and manually set an output to a final element
that would be good indefinitely.
controlled variable. The integrated error (Ei) can be estimated from the
tuning settings and is equivalent to the IAE if the response is not oscilla-
tory. Of increasing importance is the settling time (Ts), the time it takes
for a loop to stay within a specified band around the set point after a set
point change or load upset; it is also useful for detecting sustained
oscillations. Since pro-
cesses and equipment have limits that can trigger interlocks, violate envi-
ronmental constraints, or initiate side reactions, the overshoot (E1) for set
point changes and the peak error for load upsets (Ex) are also important.
The decay ratio is the amplitude of the second peak (E2) divided by the
first peak (E1 or Ex). Finally, the rise time (Tr) (time it takes the controlled
variable to first reach a specified band around the set point) is important
for reducing batch cycle, startup, and transition time and decreasing the
open loop response time (T98) (time to reach 98% of the final value) for
master and model predictive controllers. Figures 2-20a and 2-20b show the
closed-loop performance indices for a set point change and a load upset,
respectively. In the Theory section, equations are developed to estimate Ei
and Ex for load upsets.
[Figure residue: Overshoot = A, Decay = B/A, with the rise time and settling time marked on the set point response.]
Figure 2-20a. Closed Loop Performance Indices for a 10% Setpoint Change
[Figure residue: Peak Error = A, Decay = B/A, with the settling time marked on the load response.]
Figure 2-20b. Closed Loop Performance Indices for a 40% Load Upset
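The indices defined above can be computed from any recorded response. A sketch with synthetic data (an assumed decaying oscillation around the set point, not a trace from the book):

```python
# Compute overshoot (E1), decay ratio (E2/E1), rise time (Tr), and settling
# time (Ts) from a recorded set point response.
import math

def indices(t, pv, sp, band):
    e = [y - sp for y in pv]
    i1 = max(range(len(e)), key=lambda k: e[k])        # first (largest) peak: E1
    e1 = e[i1]
    j = next(k for k in range(i1, len(e)) if e[k] <= 0.0)
    e2 = max(e[j:])                                    # next peak above set point: E2
    decay = e2 / e1                                    # decay ratio = E2 / E1
    tr = next(ti for ti, ek in zip(t, e) if abs(ek) <= band)  # first entry into band
    ts = max(ti for ti, ek in zip(t, e) if abs(ek) > band)    # last exit from band
    return e1, decay, tr, ts

# synthetic set point response: decaying oscillation around sp = 1.0
t = [0.05 * i for i in range(4001)]
pv = [1.0 - math.exp(-0.05 * ti) * math.cos(0.5 * ti) for ti in t]
e1, decay, tr, ts = indices(t, pv, sp=1.0, band=0.05)
print(round(e1, 2), round(decay, 2), round(tr, 2), round(ts, 2))
```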
An increase in the controller gain will greatly reduce the peak error, the
rise time, and the return time, but may increase the overshoot and the set-
tling time. High controller gains will amplify noise, increase interaction,
and pass on more variability from the controlled variable to the manipulated variable. Figures 2-22a and 2-22b show the effect of the controller
gain setting on the response of an effectively proportional-only controller
to a load upset and a set point change, for a process with a time constant
that is 5 times larger than the time delay. An increase in controller gain
dramatically speeds up the initial rate of approach to set point by over-
driving the controller output. It will also start to back off the controller
output from the output limit as soon as the controlled variable comes
within the proportional band, which is the percent change in the control
error (difference between the measurement and set point) necessary to
move the controller output full scale.
[Figure residue: proportional response to a load upset, showing the offset from set point (ΔPV) and the ramping or driving action from reset (seconds/repeat) that removes it. The offset is inversely proportional to gain but is only completely eliminated by integral action.]
A high controller gain alone does not cause a classic overshoot but instead
an oscillatory approach to set point. It is high controller gain combined
with integral action that causes overshoot.
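The inverse relationship between gain and offset can be illustrated with a minimal simulation (an assumed first-order self-regulating process with illustrative gains; the offset settles to -load/(1 + Kc*Ko)):

```python
# Proportional-only control of a first-order process: the offset left after
# a load upset shrinks as the controller gain is raised but never reaches zero.

def offset_after_load(kc, ko=1.0, tau=10.0, load=-10.0, sp=50.0, dt=0.1, steps=5000):
    pv, bias = sp, 50.0                          # start at set point, output at 50%
    for _ in range(steps):
        co = bias + kc * (sp - pv)               # proportional-only controller
        pv_target = sp + ko * (co - bias) + load # steady state PV for this CO
        pv += (pv_target - pv) * dt / tau        # first-order process response
    return sp - pv                               # offset left after the load upset

for kc in (1.0, 2.0, 4.0, 8.0):
    print(kc, round(offset_after_load(kc), 2))   # 5.0, 3.33, 2.0, 1.11
```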
Figure 2-23a. Response of an Integral-only Controller to a 10% Set point
Change
spike because of a built-in filter whose time constant is typically 1/8 of the
derivative time. The contribution will decay to zero since the offset has a
slope of zero. The direction of the change in controller output depends
upon the sign of the change or the acceleration of the error, so the derivative mode has a definite sense of direction and rate of approach to set
point.
The PID algorithm can have the derivative action on either the control
error or on the controlled variable. The latter method was developed to
reduce the bumps to the controller output from rapid set point changes
made by the operator. Unfortunately, for set point changes made by
sequences, cascade, and advanced control systems, the derivative mode
works against the change requested because it only knows that the mea-
surement is starting to move and that any movement is undesirable. For
this reason, derivative action on the control error is preferred, provided the
loop is dominated by a time constant and set point velocity limits are
readily available to soften abrupt set point changes. An increase in derivative time
will decrease the peak error and reduce overshoot and the period of oscil-
lation. Too much derivative action can increase the rise, or return, time and
the settling time. The derivative mode can respectively speed up or slow
down the initial approach for a set point change by acting on the control
error or the controlled variable.
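The difference between the two derivative structures is easy to see at a set point step. A minimal sketch (ideal, unfiltered derivative and illustrative numbers; the derivative filter discussed above is omitted for clarity):

```python
# Contrast derivative on the control error with derivative on the controlled
# variable at a set point step: only the former kicks the output.

def d_contrib(signal, td=5.0, dt=1.0):
    """Derivative-mode contribution td * d(signal)/dt (ideal, unfiltered)."""
    return [td * (signal[i] - signal[i - 1]) / dt for i in range(1, len(signal))]

sp = [50.0] * 3 + [60.0] * 5           # set point is stepped at sample 3
pv = [50.0] * 8                        # measurement has not yet moved

on_error = d_contrib([s - y for s, y in zip(sp, pv)])
on_pv = d_contrib([-y for y in pv])    # derivative acting on -PV instead

print(on_error)   # a one-sample kick of +50 at the step, zeros elsewhere
print(on_pv)      # all zeros: no bump from the set point change
```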
Figures 2-24a and 2-24b show the effect of the derivative time setting on
the response of a proportional-plus-derivative controller to a load upset
and a set point change, for a process with a time constant that is 4 times
larger than the time delay. The derivative mode is even more likely than
the gain mode to amplify noise, increase interaction, and transfer variabil-
ity to the manipulated variable. It can decrease or increase the response
time of master or model predictive controllers depending on how it is
used. It should not be used on dead time–dominant systems or any system
with abrupt or erratic changes. It works best on processes with large time
constants, low noise, and good measurement repeatability and resolution
so that the response of the controlled variable is smooth.
Temperature and composition loops are the prime candidates for PID con-
trollers.
Nearly all temperature controllers should use PID controllers because the
derivative mode provides a phase lead that compensates for the phase lag
of the thermal lags in the thermowell and process. Wherever the process
variable can accelerate, whether due to positive feedback or nonlinearity,
derivative is helpful since it reacts to a change in the rate of change. In the
distillation-column example, the control point is on the knee of a plot of
tray temperature versus distillate-to-feed ratio. An increase in feed can
cause the drop in temperature to accelerate on the steep slope. However, if
thermocouple or RTD input cards are used instead of a smart transmitter,
the steps from hitting the resolution limit seriously reduce the amount of
derivative action that can be used. A temporary fix is to add a filter that
smooths out the steps. If there is an inverse response, where the initial
response is opposite to the final response, the derivative mode cannot be
used. This can occur in furnace temperature control where the controller
output is the firing demand that works within a cross limit to make air
lead fuel on a load increase.
For tight control of processes with a time constant much larger than the
dead time, there is benefit in aggressive preemptive and anticipatory
action. Thus, these processes should maximize the gain and derivative set-
ting and overdrive the output to reduce the rise and return time. The inte-
gral time is increased (reset action decreased) since it has no sense of
direction and increases overshoot. If the controller gain is larger than 5,
there is enough muscle from the proportional mode, and the derivative
setting can be small or zero. Derivative mode is a necessity regardless of
gain setting for a process whose control variable can significantly acceler-
ate, whether due to a nonlinearity (pseudo-runaway) or positive feedback
(real runaway), such as a polymerization reactor or a fermentor in the
exponential growth phase. For these processes, the integral time setting is
increased to about 10 times the ultimate period so that gain and rate action
dominate the response.
Conversely, for tight control of processes with a time delay much larger
than the largest time constant, gain and rate action must be minimized
and integral (reset) action maximized, which means the integral (reset)
time must be minimized. The integral mode adds smoothness not inherent
in the process. The integral time factor is decreased and can be as small as
1/8 of the ultimate period for a pure dead time process.
Figures 2-25a through 2-25f show the effect of the three mode settings on a
PID controller for a process with a time lag 4 times larger than the loop
time delay. Note that a gain setting too large increases overshoot due to
the presence of reset action. An integral (reset) time setting that is too
small causes a greater overshoot and increases the period of oscillation.
Too much derivative action causes an oscillatory response and a shorter
period but no real overshoot. Figures 2-26a through 2-26d show the effect
of two mode settings on a PID controller for a process with a time lag 4
times smaller than the loop time delay. The doubling of the gain setting is
much more disruptive.
Figure 2-25a. Effect of Gain on Set point Response of PID for a Large Time Lag–to–Time Delay Ratio
Figure 2-25b. Effect of Gain on 40% Load Upset to PID for a Large Time Lag–to–Time Delay Ratio
Figure 2-25c. Effect of Reset on Set point Response of PID for a Large Time Lag–to–Time Delay Ratio
Figure 2-25d. Effect of Reset on 40% Load Upset to PID for a Large Time Lag–to–Time Delay Ratio
Figure 2-25e. Effect of Rate on Set point Response of PID for a Large Time Lag–to–Time Delay Ratio
Figure 2-25f. Effect of Rate on 40% Load Upset to PID for a Large Time Lag–to–Time Delay Ratio
Figure 2-26a. Effect of Gain on Set point Response of PID for a Small Time Lag–to–Time Delay Ratio
Figure 2-26b. Effect of Gain on a 20% Load Upset to PID for a Small Time Lag–to–Time Delay Ratio
Figure 2-26c. Effect of Reset on Set point Response of PID for a Small Time Lag–to–Time Delay Ratio
Figure 2-26d. Effect of Reset on a 20% Load Upset to PID for a Small Time Lag–to–Time Delay Ratio
Some loops will fail during an auto tuner pretest because the loop response is too small or too large within the allowable time frame of the pretest, or because the valve has too much stick-slip. Also, fast runaway and integrating loops cannot be safely taken out of the auto mode. For these loops, a closed-loop tuning method is best because it is the fastest method for a large time constant, it keeps the controller in service with maximum gain (safest for processes that can get into trouble quickly or have a non-self-regulating response), and it includes the effect of poor valve response in the tuning [2.29].
Tight Liquid Level  | 5 (1.0-30)   | 5.0 (0.5-25)* | 600 (120-6000) | 0 (0-60) | CLM
Gas Pressure (psig) | 0.2 (0.02-1) | 5.0 (0.5-20)  | 300 (60-600)   | 3 (0-30) | CLM
* An error square algorithm or gain scheduling should be used for gains < 5
Methods: λ - Lambda, CLM - Closed-loop Method, SCM - Shortcut Method
11. Return the output limits to their proper values if narrowed for testing.
Set point changes are used because they are more likely to cause an oscilla-
tion than a change in the controller output: A step change in a set point is a
step change in the error seen by the controller, whereas a step change in
the controller output is smoothed out by the time constants in the loop.
After the new tuning settings are entered, the loop should be checked for a
step change in the controller output and the load rejection capability of the
new settings monitored and compared to historical data for the old set-
tings.
Equations
Tu = 0.7*To (2-3)
Equation 2-4 extends the utility of the closed-loop method by compensating for large and small time delay–to–time constant ratios. For a self-regulating process, it estimates an integral (reset) time of 1/8 of the
oscillation period for a pure time delay and 1/2 of the oscillation period
for a large time constant. For integrating and runaway processes, Equation 2-4
yields an integral (reset) time that approaches 10 times the ultimate period
as the negative-feedback time constant becomes large compared to the
time delay or positive-feedback time constant. For pure time delays, the
Lambda tuning method provides a more accurate calculation of the tuning
settings for a desired closed-loop time constant. For very slow loops, the
shortcut tuning method, whereby the user only needs to see the time delay
and the initial rate of change of the PV, can be used to save time, as
detailed in Reference 2.4.
The Theory section shows that a self-regulating process with a pure time
delay and a large time constant will oscillate with an ultimate period of 2
and 4 times the time delay, respectively. See Reference 2.1 for equations to estimate how the
ultimate period increases from 4 times the time delay for integrating and
runaway processes.
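The relations quoted above can be collected into a small helper. Equation 2-3, the 2x/4x dead-time bounds, and the 1/8-to-1/2 integral time factors come from the text; the function names and the example numbers are illustrative scaffolding:

```python
# Closed-loop tuning bookkeeping from the relations quoted above.

def ultimate_from_observed(to):
    """Equation 2-3: ultimate period Tu from an observed oscillation period To."""
    return 0.7 * to

def ultimate_period_bounds(td):
    """Self-regulating process: Tu between 2*td (pure delay) and 4*td (large tau)."""
    return 2.0 * td, 4.0 * td

def integral_time(tu, delay_dominant):
    """Reset time between Tu/8 (pure dead time) and Tu/2 (large time constant)."""
    return tu / 8.0 if delay_dominant else tu / 2.0

tu = ultimate_from_observed(to=20.0)      # 14 s
print(ultimate_period_bounds(td=5.0))     # (10.0, 20.0)
print(integral_time(tu, delay_dominant=True))
```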
Cascade Control
A cascade control system consists of a primary controller that manipulates
the set point of a secondary controller that in turn manipulates a final ele-
ment. The secondary controller can be operated in the automatic mode
with a local set point or in the cascade mode with a remote set point.
Industrial systems are designed to make sure the remote set point of the
secondary controller (output of the primary controller) is equal to the local
set point when the secondary controller is switched from the automatic to
cascade mode so that there is a bumpless transfer.
For optimum performance, the time delay and time constant for the sec-
ondary controller should be 5 times faster than the respective values for
the open loop response of the primary controller. If this is not possible and
the periods start to approach each other in value, the loops will fight and
may resonate. The interaction can be reduced by artificially slowing down
the primary controller by an increase in its process variable filter and scan
time or by a decrease in its gain. Unfortunately, this also slows down the
ability of the primary controller to react to load upsets that originate in the
primary loop. It may still be worthwhile to go to cascade control, however,
because most of the upsets are in the secondary loop and/or the lineariza-
tion of the primary loop is beneficial.
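The factor-of-5 rule of thumb above can be expressed as a simple screening check. A sketch with illustrative numbers (the helper and its values are assumptions, not from the text):

```python
# Screening check for cascade control: the secondary loop's time delay and
# time constant should both be about 5 times smaller than the primary's,
# or the loops may fight and the primary will need detuning.

def cascade_ok(primary_td, primary_tau, secondary_td, secondary_tau, factor=5.0):
    """True if both secondary dynamics are `factor` times faster than primary."""
    return (primary_td >= factor * secondary_td and
            primary_tau >= factor * secondary_tau)

print(cascade_ok(primary_td=10.0, primary_tau=60.0,
                 secondary_td=1.0, secondary_tau=8.0))    # True
print(cascade_ok(primary_td=10.0, primary_tau=60.0,
                 secondary_td=4.0, secondary_tau=30.0))   # False: may resonate
```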
If the time delay of the secondary (inner) loop is small, its ultimate period
is small and any oscillation is effectively attenuated, per Equation 3-2, by
the primary (outer) loop time constant. If the inner open loop time con-
stant is large compared to its time delay, the secondary controller gain can
be relatively high, which gives this controller muscle to rapidly correct for
inner loop upsets. Also, the maximum excursion (peak error) for the inner
loop is reduced. Furthermore, by going to cascade control, one of the two
largest time constants that created equivalent time delay and was detri-
mental to performance of the single loop is now a beneficial term for the
inner loop [2.30]. In this scenario, the inner loop can correct for inner loop
upsets before they are even seen by the outer loop, so it doesn’t matter that
the primary controller may need to be detuned if the inner loop time con-
stant is not much smaller than the outer loop time constant. The plot of
simulation results in Figure 2-27 shows how the ratio of the peak error for
a cascade loop to the peak error for a single loop decreases as the inner-to-
outer loop ratio of time delays decreases and the ratio of time constants
increases [2.30]. The results shown in the figure are for self-regulating
inner and outer loops. The improvement is greater for integrating and
runaway loops.
Tuning and Control Loop Performance: A Practitioner’s Guide, 3rd Edition, ISA, 1994, pp. 241, Figure 11.2
Figure 2-27. Reduction in Peak Error for Inner Loop Upsets by Cascade Con-
trol
[Figure 2-28 residue: reactor temperature cascade diagram with temperature transmitters TT 2-1 and TT 2-2, cooling tower, cooling tower makeup, reactor return, and product streams.]
changes and the lack of straight runs results in poor repeatability of any
calculation. While it is true that the nonlinearity of an equal-percentage flow
characteristic compensates for the inverse relationship between process
gain and feed flow for a temperature loop, this compensation is far from
exact and can be better done by output signal characterization or a feed-
forward multiplier instead of a summer.
Cascade control can do more harm than good if the signal-to-noise ratio,
rangeability, or reliability of the secondary measurement is poor or the sec-
ondary loop is actually slower than the primary loop. This is the case for a
static mixer pH-to-reagent flow cascade loop, particularly when the
reagent flow measurement is an orifice meter. If the reagent flow loop
used a Coriolis flow meter, and its scan time was small, the stiction and
dead band of the valve was small, and the digital positioner and flow con-
troller were tuned for a fast response, it would be a different story; cascade
control might be beneficial and facilitate reagent-to-feed ratio control.
The secondary controller should be tuned for fast response. If the inner
loop is much faster than the outer loop, it is permissible for the closed-
loop response of the secondary controller to be quite oscillatory because
the amplitude of the oscillations is effectively attenuated by the time con-
stant of the outer loop. Also, an offset in the inner loop is theoretically of
little consequence to the outer loop since the inner loop only exists for the
purpose of the outer loop. This implies that the secondary controller
should use mostly gain and rate action. This is true for valve positioners.
These are secondary controllers whose remote set point (desired valve
position) is the output of the primary process controller. Pneumatic posi-
tioners were high-gain proportional-only controllers. Digital positioners
often use some form of proportional-plus-rate algorithm. Reset is never
used in a positioner because it would cause overshoot and the offset from
proportional-only control is small due to the high gain action. The second-
ary coolant temperature controller in Figure 2-28 should be tuned with
mostly gain and rate action. However, in many situations reset is used in
the secondary controller because operators get concerned about offsets,
secondary set point limits may need to be enforced, and the ratioing of
flows needs to be exact, especially for inline blending. Reset action is used
in flow controllers to not only eliminate offsets but to also help make the
set point response of the flow loops match up for better timing.
Feedforward Control
In feedforward control, a controller output is calculated to compensate for
a measured disturbance. It provides a preemptive action to enforce a mate-
rial or energy balance. The block diagrams in Figures 2-29a and 2-29b
show the use of feedforward, with and without cascade control. The feed-
forward signal must arrive at the same point in the process simulta-
neously with the load upset and must be equal but opposite in sign to the
load upset. Feedforward control is never perfect and should be corrected
by feedback control if a suitable feedback measurement or estimator is
available. The total error can be approximated as the root mean square of
the errors in the feedforward measurement, feedforward gain, and feed-
forward timing. In general, the feedforward measurement goes through a
dead time block and a lead-lag block for proper timing and is finally
biased by a process controller output for feedback correction. The feedfor-
ward gain can be the gain in the lead-lag block or a separate gain. If the
controller output goes directly to a control valve, the feedforward calcula-
tion must also go through a signal characterizer that computes the valve
position for a desired flow from the installed valve characteristic. If the
valve nonlinearity is beneficial for feedback control, the characterization is
done before the bias from the feedback controller is added to the feedfor-
ward signal; otherwise it is done after the feedback correction as shown in
Figure 2-29a. It is vastly preferred that the controller output be the remote
set point of a flow controller, as depicted in Figure 2-29b, so that valve
nonlinearity, dead band, and pressure upsets are not issues.
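The feedforward path just described (dead time block, lead-lag block, gain, and feedback bias) can be sketched as a small discrete block. The class name, parameter values, and backward-Euler discretization below are illustrative assumptions, not the book's implementation:

```python
# A minimal discrete feedforward path: the measured disturbance goes through
# a dead time block and a lead-lag block, is scaled by the feedforward gain,
# and is then biased by the feedback controller output.
from collections import deque

class FeedforwardPath:
    def __init__(self, kff, dead_time, lead, lag, dt):
        self.kff, self.lead, self.lag, self.dt = kff, lead, lag, dt
        self.delay = deque([0.0] * max(1, round(dead_time / dt)))
        self.u_prev = 0.0
        self.y = 0.0

    def step(self, disturbance, feedback_bias):
        self.delay.append(disturbance)
        u = self.delay.popleft()                   # dead time block
        # lead-lag block, (lead*s + 1)/(lag*s + 1), backward Euler
        self.y = (self.lag * self.y + self.lead * (u - self.u_prev)
                  + self.dt * u) / (self.lag + self.dt)
        self.u_prev = u
        return self.kff * self.y + feedback_bias   # gain, then feedback bias

ff = FeedforwardPath(kff=-0.8, dead_time=2.0, lead=4.0, lag=1.0, dt=0.5)
out = [ff.step(10.0, feedback_bias=50.0) for _ in range(12)]
print([round(o, 1) for o in out])   # holds at 50%, then kicks and settles near 42%
```

Because the lead is larger than the lag, the output kicks past its final value when the delayed disturbance arrives, then settles; this is the overdrive that compensates for a lag in the manipulated variable path.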
[Figure residue: feedforward block diagram with dead time τdff, load path delay and lag (τdL, τL) and gain KL acting on the load upset ΔDV, feedforward gain Kff summed with the PID (Kc, Ti, Td) output ΔCO, and the controller and measurement delays and lags (τdc, τc1, τc2, Kcv, τdm1, τdm2, τm1, τm2) in the loop.]
Figure 2-29a. Feedforward Control Block Diagram
[Figure residue (Figure 2-29b): the corresponding block diagram for feedforward with cascade control.]
computed from an energy balance for a heat exchanger. Usually, the ratio
is entered by the operator but there is an opportunity to put the energy or
material balance calculation online if there are reliable temperature or
composition measurements.
A common feedforward control system for sheets, webs, films, and con-
veyors is speed ratio control, where the roll speed is ratioed to another roll
speed or an extruder speed. The ratio is corrected by a controller of aver-
age gauge thickness. The timing requirement is very tight: Speed ratio
control of one roll to another must be done within milliseconds.
Normally, the feedforward time delay and lag are adjusted to make sure
the feedforward doesn’t arrive too early due to a time delay or time lag in
the path of the load upset. The lead is adjusted to compensate for a time
lag in the path of the manipulated variable. The lag is increased as necessary to make sure that noise from the feedforward measurement doesn't
show up as dither in the final element.
[Figure residue: PV vs. time trends comparing an uncorrected load upset, no feedforward, and feedforward with gain and timing just right (perfect).]
The greatest benefits of feedforward control occur in loops with large time
delays and large time constants because the integrated error and the tim-
ing window are large. Distillation columns are the prime candidates, fol-
lowed by reactors, crystallizers, and evaporators. About the toughest
application in which to get the timing right is liquid pressure control,
because the process time constant is so small. Feedforward control is
essential for relatively fast periodic upsets, although the better solution is
to eliminate the root cause, which is typically another control loop. If the
period of the disturbance is less than twice the ultimate period of the loop,
the feedback controller cannot correct for the upset within the settling time
and may amplify the upset and do more harm than good.
When feedforward control is used, there are some important best practices
to consider that are outlined in Table 2-7.
flow or speed ratio control for feedback measurement problems and maintenance.
4. Display any feedforward timing errors, particularly when the feedforward time delay
is too short, and automatically calculate and correct for any known transportation
delays for sheet lines and pipelines.
5. Make sure the transfer from feedforward to feedback control or vice versa is bump-
less.
6. For dead time–dominant processes, such as sheet lines and pipelines, make sure
the feedforward timing is accurate. For flow or speed ratio control, make sure the
manipulated variable response is in unison with the disturbance variable response.
can slowly optimize the feedforward gain to account for drift and
unknown parameters.
If there are unmeasured load upsets, the feedforward can ask for the
manipulated variable to go in the wrong direction. For example, if the col-
umn temperature makes a sharp turn downward below the set point due
to an unmeasured load upset, an increase in feed that would increase
reflux flow by flow feedforward would make the unmeasured upset
worse. Experienced operators will decrease the reflux flow. Just one of
these situations is enough for the operator to lose confidence in the feed-
forward. An adaptive strategy could compensate for this feedforward mis-
take by looking at trajectories of feedback and feedforward measurements.
It would be easier to do this with model predictive control because a better
feedforward trajectory is available, although a trajectory of the bias correc-
tion would need to be generated to indicate the path of the unmeasured
load upset.
Rules of Thumb
Rule 2.1. — The largest opportunity for final elements is to eliminate stick-slip
and dead band. The effect of slip is worse than stick and stick is worse than
dead band.
Rule 2.2. — The control valve with the best response is a sliding stem control
valve with a diaphragm actuator and a digital positioner. The dead band and
rangeability limits in the variable-speed drive must be relaxed. If the slid-
ing stem valve has high temperature or environment packing, the digital
positioner must be aggressively tuned. If a rotary valve must be used,
make sure it has a splined connection between the disc and actuator shaft
and a short, large-diameter shaft. For some extremely fast critical applications, such as polymer pipeline and incinerator pressure control, a variable-speed drive is essential.
Rule 2.3. — The largest opportunity for measurements is the selection and instal-
lation of a sensor for better reproducibility, less noise, and minimal interference.
The reproducibility can be estimated as the repeatability and drift. The
need for reliability and resolution is a given and less of a problem today
than reproducibility and noise.
Rule 2.4. — The best flow measurement is the Coriolis flow meter. The main limitations are that the Coriolis meter may not be offered in suitable materials or sizes, or may not have the temperature rating needed.
to a 3- or 4-wire RTD sensor installed in a piping elbow. The main limitation is that the RTD may not be suitable for very high temperatures.
Rule 2.6. — The best level measurement is a radar gauge. The main limitations
are a fluid with a dielectric constant of less than 2.0 and a tall, narrow ves-
sel.
Rule 2.7. — Check the life-cycle cost, including the cost of variability, before
choosing a less expensive control valve or measurement. The hardware cost is
generally a small part of the life-cycle cost.
Rule 2.8. — Use smart transmitters. The improved accuracy and diagnostics
are well worth the extra cost.
Rule 2.9. — Use Fieldbus for major upgrades and new installations. The
reduced cost of commissioning and wiring, the expanded diagnostics, and
improved accuracy from the elimination of A/D error are significant.
Rule 2.10. — Use a closed-loop method if an auto tuner pretest fails or is not safe.
The closed-loop method keeps the loop in auto and includes the effects of
valve stick-slip and dead band.
Rule 2.11. — For a process with a large time constant, use more gain and rate
action to overdrive the manipulated variable, to decrease rise time, peak error, and
return time. If the measurement is smooth, you can use rate action to
reduce overshoot.
Rule 2.12. — For dead time–dominant loops, significantly decrease the integral
(reset) time setting. It can be as small as 1/4 of the time delay or 1/8 of the
ultimate period for a pure dead time process.
Rule 2.14. — Go for the largest and least expensive ways to reduce loop dead
time. The automation system is the largest source of time delay for flow,
level, and pressure loops.
Rule 2.15. — Use cascade control to correct for secondary loop disturbances
before they affect a primary loop, or to linearize the manipulated variable for feed-
forward or ratio control. If the secondary loop is not 5 times faster than the
primary loop, the scan time or filter time must be increased or the gain
decreased for the primary controller to slow down the primary loop.
Rule 2.16. — Use feedforward control for loops with a large time delay, time lag,
or periodic disturbance, to provide a preemptive correction for load upsets. The
timing and gain must both be right and the feedforward signal must not
arrive too soon.
Theory
If dQr /dTr < Fo∗Cr + U∗A, and dQr /dTr > Fo∗Cr + U∗A, then we have a
negative-feedback time constant and positive-feedback time constant,
respectively. The time constant is not constant but is proportional to
reaction mass and inversely proportional to the outlet flow and the
product of the heat transfer coefficient and area for the jacket.
For a batch reactor (no outlet flow) with a negligible heat release (dQr /dTr
= 0), we have the general form of the equation to approximate the thermal
time lag of a closed volume. It can also be used for a thermowell by substi-
tution of the proper parameter values.
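As a sketch of the relationships described above (the exact equation appears earlier in the chapter; the function and example values here are illustrative assumptions):

```python
def thermal_time_constant(mass, heat_capacity, U, area,
                          outlet_flow=0.0, dQr_dTr=0.0):
    """Thermal time constant (hr): proportional to reaction mass times
    heat capacity, inversely proportional to the outlet sensible-heat
    term plus the jacket U*A, less any reaction heat sensitivity
    dQr/dTr (zero for a batch reactor with negligible heat release)."""
    return mass * heat_capacity / (
        outlet_flow * heat_capacity + U * area - dQr_dTr)

# Batch reactor (no outlet flow, dQr/dTr = 0): tau = Mr * Cr / (U * A)
tau = thermal_time_constant(mass=10000.0, heat_capacity=1.0,
                            U=100.0, area=50.0)
```

The same form approximates a thermowell lag when the thermowell mass, fluid film coefficient, and wetted area are substituted.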
The process gain depends upon the input under consideration. If the
manipulated or disturbance variable is feed flow:
In either case, we see that the process gain is not constant and is inversely
proportional to the outlet flow and the product of the heat transfer
coefficient and area for the jacket.
For a jacket coolant system with recirculation where there is equal volume
displacement of coolant return flow by coolant makeup flow, so that jacket
flow is constant, we have the following equation for the temperature of a
mixture of makeup and recirculation flow:
The process gain (Kp) for the control of the jacket inlet temperature (Ti) by
manipulation of coolant makeup flow (Fc) in a secondary loop is the
partial derivative dTm / dFc:
For a total mass balance, the rate of accumulation of mass is equal to the
mass flow in minus the mass flow out:
ρ∗Ar∗dL/dt = Fi - Fo (2-19)
Here we see that the integrator gain (Ki) for the level response is inversely
proportional to the product of the fluid density (ρ) and the cross sectional
area of the reactor (Ar).
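Equation 2-19 rearranges to a one-line level model; a minimal sketch (names are mine):

```python
def level_rate_of_change(Fi, Fo, density, area):
    """dL/dt (ft/hr) from Eq. 2-19: net mass flow divided by density
    times cross-sectional area. The integrator gain Ki for the level
    response is 1 / (density * area)."""
    return (Fi - Fo) / (density * area)
```

A larger vessel cross section or denser fluid gives a smaller integrator gain and therefore a slower, easier-to-control level response.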
For plug flow, the entire residence time is a transportation delay. The time
delay is the volume divided by the flow. The flow in pipelines, sample
lines, static mixers, coils, and heat exchanger tubes can be considered to be
essentially plug flow.
For perfect mixing, the entire residence time is a process time constant. In
well-mixed volumes of proper geometry and with baffles, the portion of
the residence time that shows up as time delay can be estimated as half the
turnover time, as shown in Equations 2-23 and 2-24, and most of the resi-
dence time becomes a process time constant [2.14].
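The plug-flow and well-mixed cases can be sketched together (the default turnover time here is an assumption for illustration, not the chapter's Equations 2-23 and 2-24):

```python
def residence_time_split(volume, flow, mixed=True, turnover_time=None):
    """Split the residence time into (time_delay, time_constant).
    Plug flow: the entire residence time (volume / flow) is dead time.
    Well-mixed: roughly half the turnover time shows up as dead time,
    and the rest of the residence time becomes a process time constant."""
    residence = volume / flow
    if not mixed:
        return residence, 0.0
    # Assumed fallback: treat the residence time itself as the turnover
    # time when no agitator pumping rate is available.
    delay = 0.5 * (turnover_time if turnover_time is not None else residence)
    return delay, residence - delay
```

For a pipeline or static mixer, `mixed=False` returns the pure transportation delay; for a baffled, agitated vessel, a short turnover time keeps most of the residence time as a friendly process time constant.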
Lastly, time constants in series create time delay. When the flow of
material or energy can reverse direction depending upon the sign of the
driving force, the time constants are interactive. Conductive heat transfer,
gas flow in pipelines, and the tray response in columns all have interactive
time constants. As the number of equal interactive and non-interactive
time constants in series increases, the time delay increases, from 0.02 to
0.16 and 0.14 to 0.88 times the sum of the time constants respectively. The
portion of the sum not converted to time delay is the process time constant
for a first order–plus–dead time approximation. Thus, interactive time
constants do not create much time delay and the time delay–to–time
constant ratio is always rather nice. A large number of non-interactive
time constants creates an extremely difficult to control dead-time-
dominant system.
Ku = 1 / (Ko∗AR-180) (2-25)

AR-180 = 1 / [1 + (τo∗ωn)^2]^(1/2) (2-26)

Ku = [1 + (τo∗ωn)^2]^(1/2) / Ko (2-27)
Since the natural frequency in radians per minute (ωn) is 2π divided by the
ultimate period (Tu), we can express the ultimate gain (Ku) as a function of
the ultimate period.
Ku = [1 + {(τo∗2∗π) / Tu}^2]^(1/2) / Ko (2-28)
For a time constant much larger than the time delay (τo >> τd), the ultimate gain is:
Ku = (2∗π∗τo) / (Ko∗Tu) (2-29)
Since for this case the ultimate period is about 4 times the time delay (Tu ≅
4 ∗ τd), the ultimate gain can be simplified to a ratio of the time constant to
time delay.
Ku = 1.6∗τo / (Ko∗τd) (2-30)
Since the controller gain is a factor of the ultimate gain (Kc = 0.25∗Ku), the
controller gain is proportional to the time constant and inversely
proportional to the time delay and the open loop gain.
Kc = 0.4∗τo / (Ko∗τd) (2-31)
If the time delay is much larger than the time constant (τd >> τo), it can be shown that Equation 2-27 reduces to the ultimate gain being the inverse of the open loop gain. This relationship can also be realized from the amplitude ratio being 1 for a pure time delay.
Kc = 0.25 / Ko (2-32)
Tu = (-360/φ) ∗ τd (2-34)
For a time constant much larger than the time delay there is a -90 phase
shift from the time constant, which leaves only -90 phase shift (φ) needed
from the time delay to reach the -180 total phase shift; the ultimate period
becomes simply 4 times the time delay.
If, on the other hand, the time constant is so much smaller than the time
delay that essentially all -180 phase shift (φ) comes from the time delay, the
ultimate period approaches 2 times the time delay.
For τd >> τo:
Tu = 2 ∗ τd (2-36)
The following curve fit shows how the ultimate period varies from 2 to 4 times the time delay as a function of the relative sizes of the time constant and time delay.
Tu = 2∗[1 + (τo / (τo + τd))^0.65]∗τd (2-37)
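The curve fit is easy to check at its limits in code (names are illustrative):

```python
def ultimate_period(tau_o, tau_d):
    """Eq. 2-37 curve fit: the ultimate period moves from 2x the time
    delay (dead-time dominant) toward 4x (lag dominant)."""
    return 2 * (1 + (tau_o / (tau_o + tau_d)) ** 0.65) * tau_d
```

With a negligible time constant the expression collapses to 2∗τd, and as the time constant dominates it approaches 4∗τd, matching the phase-shift reasoning above.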
If the time delay is less than the time constant, we can simplify the
relationship.
Ex = [τd / (τo + τd)]∗Eo (2-39)
If the time delay is much less than the time constant, we end up with the
ratio of the time delay to the time constant.
Ex = (τd / τo)∗Eo (2-40)
The minimum integrated error (Ei) is approximately the peak error (Ex)
multiplied by the time delay and is thus proportional to the time delay
squared.
Ei = (τd / τo)∗τd∗Eo (2-41)
Ex = (1 / Kc)∗Eo (2-42)
If we further realize that the integral time (Ti) is a factor of the ultimate
period that is a multiple of the time delay, we have a relationship where
the integrated error (Ei) is proportional to the integral time and inversely
proportional to the controller gain.
Ei = (1 / Kc)∗Ti∗Eo (2-43)
For dead time–dominant loops, the peak error approaches the open loop
error (Eo), and the integrated error approaches the product of the open
loop error and the integral time.
For τd >> τo:
Ei = Ti∗Eo (2-44)
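The error estimates above can be sketched as follows (a minimal illustration; names are mine):

```python
def peak_error(Eo, tau_o, tau_d):
    """Eq. 2-39: fraction of the open loop error that appears as
    the peak error, from the time delay and time constant."""
    return tau_d / (tau_o + tau_d) * Eo

def integrated_error(Eo, Kc, Ti):
    """Eq. 2-43: integrated error grows with integral time and
    shrinks with controller gain."""
    return Ti * Eo / Kc
```

For a dead time–dominant loop, `peak_error` approaches the full open loop error Eo, and the integrated error approaches Ti∗Eo, as in Equation 2-44.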
Feedforward Control
The equations for feedforward control are derived from the steady-state
material or energy balance at the point of control. For a heat exchanger
where the bypass is throttled for temperature control, the feedforward
equation for the hot bypass flow (Fhb) is: [2.19]
τdv = DB / (∆CO / ∆t) (2-47)
The rate of change of controller output depends upon the controller tuning
and the error. If we consider the effect of just controller gain (Kc) for a loop
dominated by a large time lag so that the amount of reset action used is
small [2.19]:
If we use Equation 2-31 for the controller gain with a detuning factor (Kx)
and realize that the rate of change of the controlled variable (∆CV / ∆t ) is
simply the pseudo-integrator gain (Ki = Ko / τo) for the large open loop
time lag (τo) multiplied by the change in the actual valve position (∆AVP),
we have: [2.19]
∆CO / ∆t = [Kx / (Ki∗τdo)]∗Ki∗∆AVP (2-49)
If we cancel out the integrator gains and approximate the change in actual valve position on the average as the controller output minus one half of the dead band (∆AVP = ∆CO – DB/2), we end up with an expression to estimate the valve time delay from the valve dead band and the observed loop time delay for a step change in controller output. [2.19]
τdv = DB / [Kx∗(∆CO – DB/2)]∗τdo (2-50)
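Equation 2-50 can be applied directly; a small sketch with assumed example values:

```python
def valve_time_delay(dead_band, Kx, dCO, tau_do):
    """Eq. 2-50: valve time delay estimated from the dead band (%),
    detuning factor Kx, controller output step dCO (%), and the
    observed loop time delay tau_do."""
    return dead_band / (Kx * (dCO - dead_band / 2)) * tau_do
```

The estimate shows why dead band matters: as the output step shrinks toward the dead band, the valve time delay grows without bound.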
Nomenclature
A = heat transfer area (ft2)
Ar = cross sectional area of reactor (ft2)
AVP = actual valve position (%)
AR-180 = amplitude ratio at a phase shift of -180 degrees
CAo = mass fraction of component A in reactor outlet
CAf = mass fraction of component A in reactor feed
Cc = heat capacity of cold fluid (btu/lb*oF)
Cf = heat capacity of feed to reactor (btu/lb*oF)
Ch = heat capacity of hot fluid (btu/lb*oF)
Cr = heat capacity of liquid in reactor (btu/lb*oF)
Cj = heat capacity of liquid in jacket (btu/lb*oF)
CO = controller output (%)
CV = controlled variable (%)
DB = dead band (%)
Ei = integrated error (e.u.)
Eo = open loop error (e.u.)
Ex = peak error (e.u.)
fn = natural frequency (cycles/hr)
Fa = agitator pumping rate (lb/hr)
Fc = coolant makeup flow (lb/hr)
Fci = cold fluid inlet flow to exchanger (lb/hr)
Ff = feed flow to reactor (lb/hr)
Fhi = hot fluid inlet flow to exchanger (lb/hr)
Fho = hot fluid outlet flow from exchanger (lb/hr)
Fhb = hot fluid bypass flow around exchanger (lb/hr)
Fj = jacket coolant flow (lb/hr)
Fo = outlet flow from reactor (lb/hr)
k = reaction rate constant (btu/lb*hr)
Kc = controller gain
Kp = process gain (e.u./e.u.)
Ki = integrator gain (e.u./hr)
Ko = open loop static gain
Ku = ultimate gain
L = liquid level in reactor (ft)
Mr = mass of liquid in reactor (lbs)
References
1. McMillan, Gregory K., Tuning and Control Loop Performance, 3rd Edition, 1994, ISA.
2. Ruel, Michel, “Stiction: The Hidden Menace,” Control, November 2000, pp. 71-76.
3. Ruel, Michel, “Control Valve Health Certificate,” Chemical Engineering, November 2001, pp. 62-65.
Practice
Overview
The objectives for this chapter are to help the user identify advanced pro-
cess control (APC) opportunities, estimate benefits, select the best technol-
ogy, sustain the solution, and track the benefits. Industry is driven by cost
and benefits analysis and is generally not interested in a great technology
looking for an application. This chapter combines the methods that have
been used extensively to identify opportunities for APC in the chemical
and the pulp and paper industries [3.1] [3.2] [3.3] [3.4] [3.5].
The strongest advocates of maintaining the status quo are often the most
experienced people in operations. For batch operations this corresponds to
fixed feed rates, hold times, and valve positions.
Process control deals with change. If there were no variations in raw mate-
rials, utilities, process and ambient conditions, equipment performance,
production rates, or desired operating points, there would be little need
for control loops. In actual plant operations nothing is completely con-
stant. Thus, most of the opportunities for process control deal with
change.
Capital expenditures for most plants have come under increased scrutiny.
In order to make the improvements that provide the biggest bang for the
buck, the engineer needs to know what is important despite an over-
whelming number of choices of control technologies and an explosion of
information and data. This chapter provides a perspective on how to
choose the most appropriate technology.
The time frame, pattern, and sequence of changes are important for track-
ing down the sources and assessing the impact of variability. The faster the
change, the more difficult it is for a controller to correct. However, changes
with a short period are more effectively attenuated by process variable fil-
ters and back-mixed volumes. Table 3-1 summarizes the most common
sources of change, and their relative speeds.
Noise and repeatability errors in a measurement are fast and are passed on to the manipulated variable. Analyzers are a notable exception, as they are a source of low frequency variability from a long sample transportation delay and processing (cycle) time.
Market changes are relatively slow, but the implementation is fast. Once a
rate change is decided upon, operators tend to make large step changes in
feed set points that are not simultaneous or coordinated. Transitions are
accomplished even more quickly to minimize the cross contamination of
products. As inventories decrease, market volatility increases, and the
demand for more flexible manufacturing increases, the frequency of rate
and product changes will increase to the point where steady-state opera-
tion may be a distant memory.
changes are often a major point of disruption. Operators also tend to get
bored and actually love being in charge of inventory control where they have to change flow set points to keep woefully undersized surge or feed
tank volumes from overflowing or running dry. Unfortunately, the set
point changes made by operators are fast, inconsistent, impatient, and do
not take into account the effect of time delays, time lags, and interactions.
Figure 3-1 shows the pyramid of technologies for advanced control. The
base is the solid foundation of basic process control discussed in Chapter
2. The next layers of loop and process performance monitoring provide
the tools to quantify the opportunities, sustain the performance, and track
the benefits of APC. Loop monitors can identify when and where an auto
tuner needs to be run again. Rules can be added on top of the
performance-monitoring layers to provide better explanations and
automatic corrections. Expert systems can be developed to deal with
abnormal situation management, such as equipment and automation
system failures and degradation.
Figure 3-1. The pyramid of technologies for advanced control, from the top down: TS, RTO, LP/QP, Ramper or Pusher, Property Estimators, Fuzzy Logic, and Auto Tuner. TS is tactical scheduler, RTO is real time optimizer, LP is linear program, QP is quadratic program.
The advanced control technology with the best track record to date for
increasing plant efficiency and capacity is the model predictive controller
[3.7]. The built-in knowledge of the process dynamics and constraints
from extensive plant testing, the ability to readily control the amount of
variability transferred to the manipulated variables by move suppression,
plus the ability to readily add some optimization, are the features respon-
sible for its success. The objective of any advanced control program should
be to get at least to the level of model predictive control with a constraint
pusher for a simple optimization of a single variable. It is expected that as
process modeling tools become easier and less expensive to use and main-
tain, real-time optimization will start to deliver more consistent benefits
and applications that vie for the top of the pyramid will be more common
[3.8].
Opportunity Assessment
Figures 3-2a and 3-2b show how reducing the variability in a process vari-
able associated with a constraint translates to the opportunity to move the
variable closer to the constraint. The traditional way of depicting the
improvement is to show the current peak at the optimum location for the
given degree of variability, as illustrated by Figure 3-2a. The benefits are
then based on how much the peak can be moved if the standard deviation
is reduced and still provide the same small number of violations of the
constraint. In practice, the peak is typically set by the operator much more
conservatively so that even if the variability is not reduced, just a better
understanding and automation of the positioning of the peak relative to
the constraint affords an additional opportunity, as shown in Figure 3-2b.
Manual set points are chosen based on opinions and war stories rather
than data. The operator is usually the largest and most active constraint to
optimum operation.
[Figures 3-2a and 3-2b: PV distributions for the original and improved control, each shown with 2-sigma bands around the set point relative to the upper limit.]
tools and procedures for identification of this economic gain are similar to
those used for identification of the process gain for property estimators
(Chapter 8). The process must start with an online measurement or calcu-
lation of capacity, yield, and utilities to provide the economic variables. In
some cases, missing or inaccurate measurements will be found in the pro-
cess of putting the economic variables online. It also provides a head start
for true performance monitoring (Chapter 4) and real-time optimization
(Chapter 10).
control system. Questions (1) through (20) ascertain how important it is to
reduce variability in an operating limit (constraint) and what process vari-
ables are involved. The questions generally apply to both batch and con-
tinuous unit operations. Appendix A presents the more extensive list of questions that were used by Monsanto and Solutia for opportunity assessments (OAs) of all of the major processes in the last decade, OAs that led to process control improvements worth $60 million a year in benefits. ICI has
cited 2% to 6% of operating costs as the benefits achieved through
improved process control. Besides the cost savings, ICI is convinced it has
increased the safety and reduced the environmental impact of its plants
[3.9].
19. Could the amount of scrap be reduced in a sheet line by operation
closer to the cross and machine direction constraints on gauge?
20. Could the roll speed be increased on a sheet line by operation
closer to the sheet moisture and extruder speed constraints?
21. Could the amount of sheet giveaway to the customer be reduced
by tighter control of average thickness and profile near the edges?
22. Can the production rates of unit operations be coordinated to take
maximum advantage of surge volumes to increase plant capacity?
23. Could a reflux flow be reduced to decrease the steam flow to a
reboiler?
24. Could a recycle flow be reduced to decrease utility use and
pressure drops and to increase residence time and surge volume
capacity?
Most reactors operate with an excess of one or more reactants because the
downside of a deficiency due to less than ideal mixing or variability in the
feeds or the composition measurement is low reaction rates and the forma-
tion of byproducts that can lead to waste treatment problems and hazard-
ous operation, besides a loss in production. The amount of excess reactant
is often based on operating practice and how many times an operator has
had actual or perceived problems from operating too close to the con-
straint.
In order to estimate the benefits from reducing feed variability, the reduc-
tion in variability in the excess component of interest must be computed
from an online analyzer, estimator, or attenuation calculation. The online
estimator must include the time constant associated with mixed volume to
show the smoothing effect. It does not need to have the time delay associ-
ated with an analyzer. The cycle time of chromatographs and lab analysis
is too slow and will alias the relatively fast reactor concentration oscilla-
tions from feed variability.
AR = Ao / Ai (3-1)

AR = 1 / [1 + (τ∗ω)^2]^0.5 (3-2)

For τ∗ω >> 1:

AR = 1 / (2∗π∗fo∗τ) (3-3)

Since fo = 1/To:

AR = To / (2∗π∗τ) (3-4)
where:
Ai = amplitude of the oscillations at the inlet to the volume
(e.u.)
Ao = amplitude of the oscillations at the outlet of the volume
(e.u.)
AR = amplitude ratio
fo = frequency of oscillation (cycles/minute)
ω = frequency of oscillation (radians/minute)
To = period of oscillation (minutes)
τ = time constant for back-mixed volume (minutes)
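Equations 3-2 through 3-4 can be combined into one attenuation estimate (a sketch; the function name is mine):

```python
import math

def amplitude_ratio(tau, period):
    """Eq. 3-2 with omega = 2*pi/To: attenuation of an inlet
    oscillation by a back-mixed volume with time constant tau."""
    omega = 2 * math.pi / period
    return 1 / math.sqrt(1 + (tau * omega) ** 2)
```

When τ∗ω is much greater than 1, the result converges to the simplified To / (2∗π∗τ) of Equation 3-4, which is why large back-mixed volumes strongly attenuate fast oscillations but pass slow ones nearly unchanged.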
Unfortunately, the user does not usually know the time delay and it is
rarely if ever constant. The time delay can be measured online as the time
it takes for the controlled variable to get beyond the noise band whenever
there is a change in the set point or manual output. Alternatively, the dead
time can be captured whenever an auto tuner is run on the loop. For flow
and liquid pressure, the time delay can be estimated as one half of the loop
scan time and the valve time delay, which is the time between a change in
the controller output and the actual valve position as measured by a smart
positioner for a sliding stem valve. In practice to date, accurate estimates
and identification of the time delay have not normally been available. The
key point is that the calculation interval should be slowed down for the
process capability calculation to be more representative of the actual capa-
bility of feedback control.
between the limit and the mean value. This shift provides the same distri-
bution of data points beyond the limit. In most cases, the mean value is
more than two standard deviations away from the limit so that the actual
number of data points beyond the limit is quite small.
Equations 3-10 and 3-11 express the mean and shift in terms of an old and
new set point. It is important to realize that these are estimates for the low-
est possible variability in the controlled variable. In some cases, it may be
undesirable to transfer all this variability from the controlled to the manip-
ulated variable. Model predictive control excels at determining how much
variability is transferred and getting the most out of both feedback and
feedforward control. Thus, the full benefit offered by these calculations is
only approached by the application of model predictive control built on a
solid foundation of good valves and measurements.
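As a hedged sketch of the set point shift implied by Figures 3-2a and 3-2b and Equations 3-10 and 3-11 (the two-standard-deviation positioning is an assumption drawn from the text, and all names are mine):

```python
def setpoint_shift(limit, s_old, s_new, n_sigma=2.0):
    """Shift in the mean (set point) allowed by reduced variability,
    assuming the set point is held n_sigma standard deviations inside
    the limit so the same small fraction of data points violates it."""
    sp_old = limit - n_sigma * s_old
    sp_new = limit - n_sigma * s_new
    return sp_new - sp_old
```

Multiplying the shift by the process gain and an economic value per engineering unit turns it into a benefit estimate; in practice the operator's conservative placement of the old set point makes the real opportunity even larger.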
[Figure: $$ savings from moving the set point toward the limit or spec target.]
Equations
Sffc = [(Sffm)^2 + (Sffg)^2 + (Sffd)^2]^(1/2) (3-6)
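Equation 3-6 is a root-sum-of-squares combination, assuming the three feedforward error sources are independent; a minimal sketch:

```python
import math

def feedforward_stddev(s_meas, s_gain, s_dyn):
    """Eq. 3-6: standard deviations of the feedforward measurement,
    gain, and dynamics errors combine in quadrature."""
    return math.sqrt(s_meas ** 2 + s_gain ** 2 + s_dyn ** 2)
```

Because the terms add in quadrature, the largest single error source dominates the result, so improvement effort should go there first.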
[Figure: $$ production increase from moving the operator set point for the limiting process input or constraint closer to the throughput limit, i.e., the maximum possible throughput, over time.]
SP old = CV M (3-10)
where:
Kp = process gain (∆CV/∆MV)
SPnew = new set point
SPold = old set point
Sapc = standard deviation possible for advanced process control
Sfbc = standard deviation possible for feedback control
Sffc = standard deviation for feedforward control
Sffd = standard deviation of feedforward dynamics error
Sffg = standard deviation of feedforward gain error
Sffm = standard deviation of feedforward measurement error
Stot = total standard deviation
∆CVm = shift in mean value of controlled variable
CVM = mean value of controlled variable
CVL = limit (constraint) for controlled variable
MVM = mean value of manipulated variable
MVL = limit (constraint) for manipulated variable
The questions listed do not cover any reduction in down time or penalties
associated with exceeding equipment and environmental limits. For exam-
ple, the life of glass linings and rupture disks depends upon the number of temperature and pressure cycles. Rupture disks can cause reportable emissions, and a brief excursion in an effluent stream below 2 pH or above 12 pH can classify a whole volume as hazardous waste.
Examples
As shown in Figure 3-4, any variation in this temperature from the best operating point will result in less production at a given feedstock input. The slope is the process gain used to calculate the benefits as you approach the optimum operating point.
Figure 3-4. Ammonia production (mol/hr) versus bed #1 inlet temperature (Deg C), with production loss as a function of temperature variation:

Mean (Deg C)   Variation (Deg C)   Production Loss (%)
415            0                   0
415            25                  0.103
415            50                  0.415
460            25                  1.102
460            50                  1.412
Shifting Bottlenecks
Stressed-out plants with old equipment, difficult processes, and recycle
streams often have a shifting and confusing bottleneck. The proper APC
pathway can solve and demystify the problem. Often overlooked are the
significant side benefits of process knowledge gained from the implemen-
tation and day-to-day operation of an APC system.
Figure 3-6 shows the major unit operations for the production of a nylon
intermediate chemical. Since the solids content of the streams is high, the
plant is over 40 years old, and the plant is sold out and running at more
than four times the original nameplate capacity, one or more pipelines,
reactors, evaporators, crystallizers, and centrifuges are always shut down
for washout or maintenance. Centrifuges periodically trip on vibration or
spill over solids (slop) into the recycle.
[Figure 3-6: feed flows through reactors (Rx), evaporators (Ev), crystallizers (Cx), hydroclones (Hc), and centrifuges (Cs), with undersized surge and recycle tanks (Tk) between stages, a recycle stream, a purge, and unmeasured wash water. Reactor, evaporator, and crystallizer heat transfer surfaces get coated with solids and must be periodically shut down and manually defrosted; centrifuges slop solids into the recycle; the operator sets the crystallizer feed rate based on visual inspection of solids in a sample of the hydroclone overflow.]
The equipment and operators are so stressed that the control room is always in crisis mode with incessant
alarms and flipping of screens. Most of the controller outputs are satu-
rated high. War stories rule. Engineer burnout and turnover are high. The
young engineers thrown into the fray defer to operator opinion rather
than applying chemical engineering principles. Management is afraid to
do anything because capacity has actually decreased after each debottle-
necking project.
A large amount of water is introduced into the process from washout con-
nections but is largely not metered. The equipment must be cleaned and
started up manually, so the operator is naturally reluctant to push capac-
ity. The surge volumes that are used as feed and recycle tanks are seriously
undersized for the present production rates, the additional water load,
and centrifuge slopping. The operators continually adjust feed rates or
add water to keep tanks from running dry. Sometimes the recycle tanks
overflow from the additional water load and product is lost to the sewer.
The feed rate to each of the final-stage crystallizers is set by the operator
based on a visual inspection of a sample from the recirculation line for the
amount of solids. The interpretation of the concentration and particle size
is subjective and the main goal of the operator is to reduce how often the
crystallizer must be taken down and the crystal buildup on the walls and
heat-transfer surfaces removed. A supervisory control system is periodi-
cally turned on that finds the lowest production rate and drags the whole
plant down to match it. Since the feed to the last stage of crystallizers is
typically set low from the margin of operator error and lack of confidence,
the tail wags the dog, and the plant capacity decreases each year despite
capital projects to replace and add equipment to reduce bottlenecks.
Some pipeline and pump sizes are increased so control valves are not wide
open, and some on-off valves and magnetic flow meters must be installed
to measure and control water addition. Coriolis mass flow meters are
installed not only to provide more constant solids loading of the feed to
the unit operations but also to provide estimators of solids concentration
and crystal buildup on walls. Nuclear gauges are added to each centrifuge
to measure the cake mass in the basket. The combination of mass feed flow
and basket solids load control improves the production rate and on-stream
time of the centrifuges. The scheduling and the sequencing of valves for
automated cleaning and startup makes the performance of the equipment
more reproducible and predictable. The use of variable speed drives to
throttle the feed to the hydroclones increases the pressure drop available
for separation of solids in the hydroclones and eliminates significant slip-
stick since sliding stem valves are not suitable for the high solids concen-
tration. Model predictive control is used to control the solids concentration
in each reactor, evaporator, and crystallizer and to manage the overall
inventory control and purge rate. Rampers and pushers are used to maxi-
mize feed rates without a projected violation of a level or process con-
straint. Finally, real-time optimization is used to track bottlenecks,
coordinate feeds, optimize recycle and purge flows, reach an understand-
ing of the root causes, separate fact from fiction, and provide data to lead
to successful and viable projects.
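The ramper/pusher logic mentioned above can be illustrated with a minimal sketch. This is not the vendor's or the book's algorithm; the function name, the integrating-tank gain k, and all numbers are hypothetical, chosen only to show the idea of raising a feed setpoint while the projected level stays within its constraint.

```python
def push_feed(feed_sp, outflow, level, level_max, horizon_min, step=0.5, k=0.01):
    """One scan of a minimal 'pusher': bump the feed setpoint by one step
    unless the tank level, projected over the horizon assuming integrating
    dynamics (k = level %/min per unit of flow imbalance), would violate
    its constraint."""
    candidate = feed_sp + step
    projected = level + k * (candidate - outflow) * horizon_min
    return candidate if projected <= level_max else feed_sp

# ramp the feed until the level projection reaches its constraint
sp = 100.0
for _ in range(100):
    sp = push_feed(sp, outflow=100.0, level=60.0, level_max=80.0, horizon_min=60.0)
print(sp)  # -> 133.0, the highest setpoint whose projected level stays within 80%
```

A real MPC-based ramper would use the full dynamic model and all constraints, but the essence is the same: the push stops on a projected violation, not an actual one.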
Application
General Procedure
1. Install control loop and process performance monitoring systems.
2. Ask marketing to identify the key business drivers.
3. Develop management sponsors, onsite advocates, and resources.
4. Determine the percentage of time sold out, scheduled and unscheduled downtime, profit per pound, and variable costs per pound.
Application Detail
Neutralizer
The benefits from the reduction in variability afforded by the improve-
ments to the basic control system can be estimated for the pH example in
Chapter 2. The largest sources of variability in the existing system are the
ball valves, whose excessive stick-slip is caused by oversized capacity, high
friction, and a missing positioner. The limit cycle from the reagent valves in
terms of a reagent-to-feed ratio is translated to a pH limit cycle as illus-
trated in Figure 2-4b. However, the limit cycle from the first-stage valve is
first multiplied by the amplitude ratio based on the period of oscillation of
the first stage and the residence time of the second stage as shown in
Equation 3-12.
Equation 3-13b is a curve fit for the distribution of data points in neutral-
izer pH for a limit cycle. The standard deviation of the limit cycle for the
new precise control valves is estimated and the calculations are repeated.
To see the improvement in yield by other improvements, the decrease in
the fractional integrated error (∆Xp) can be estimated for a typical set of
feed rate changes. For the benefits in terms of less reagent use for a more
optimum pH set point, Equations 3-15 and 3-16 can be used to estimate the
shift in the mean (∆Xm) from reduced variability. Whether it is best to not
move the pH set point and reduce the fraction of product below the low
constraint, or to lower the pH set point to the point where there is no
change in the fraction of product below the low constraint, depends upon
the values of ∆Bp and ∆Bm.
Sni = (Smi/100%) * (Fr/Ff) * Pn * ARn (3-12)
zi = (XL – XM)/Sni (3-13a)
φi = –1.4743 * zi³ + 14.488 * zi² – 46.888 * zi + 50 (3-13b)
∆Xp = (φi – φj)/100% (3-13c)
∆Bp = ∆Xp * ∆Yp * Ct * Ff * Ef (3-14)
∆Xm = ∆Sni (3-15)
∆Bm = (∆Xm/Pn) * Ct * Ff * Er (3-16)
where:
ARn = amplitude ratio for neutralizer
∆Bp = delta benefits from less product below low constraint ($/hr)
∆Bm = delta benefits from shift of mean closer to low constraint ($/hr)
∆Yp = yield loss from distribution below low constraint
Ct = conversion factor for time (24 hr/day)
Ef = economic factor for cost of feed ($/lb)
Er = economic factor for cost of reagent ($/lb)
Ff = feed flow (lb/hr)
Fr = raw material (reagent) flow (lb/hr)
φi = percent of data points below low constraint for limit cycle i (%)
Pn = process gain in neutralizer from titration curve slope (∆pH/∆ratio)
Smi = standard deviation in mixer reagent valve position for limit cycle i (%)
Sni = standard deviation in neutralizer pH for limit cycle i (pH)
∆Xp = delta fractional product below low limit
∆Xm = shift in mean (pH)
XM = mean (pH)
XL = low constraint (pH)
zi = number of standard deviations from the mean to the low constraint (see Eq. 3-6a)
By the use of a sliding stem valve with a digital positioner, the standard
deviation of a limit cycle in the first reagent valve is reduced from 10%
(Sm1) to 0.1% (Sm2). The process gain is 10,000 pH per flow ratio (Pn), the
flow ratio at set point is 0.1 (Fr/Ff), the set point is currently 7 pH (XM), the
feed flow is 100 kpph (Ff), the limit cycle period is 1 minute (To), the
residence time in the well mixed neutralizer is 20 minutes (τ), the cost of
feed and reagent are both $0.1/lb (Ef and Er), and the conversion decreases
linearly from 98% at 6 pH (XL) to 88% at 5 pH.
The benefits in reduced feed costs from a better conversion for a 7 pH set
point can be estimated as follows with Equations 3-17 through 3-25:
ARn = 1/(2 * π * 20) = 0.008 (3-17)
φ1 = –1.4743 * (1.25)³ + 14.488 * (1.25)² – 46.888 * (1.25) + 50 = 11% (3-20)
φ2 = 0% (3-23)
∆Bp = 0.11 * 0.04 * 24 hr/day * 100,000 pph * $0.1/lb = $1,056/day (3-25)
The benefits in reduced reagent costs from a shift of the set point toward
the low constraint can be estimated as follows with Equations 3-26 and
3-27:
From the above analysis, it is clear that it is better to leave the set point at
7 pH and reduce the fraction of product that is below 6 pH.
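The benefit calculation above is easily scripted. The following sketch implements Equations 3-12 through 3-14 with the chapter's numbers; the clamping of φ to 0% for large z is our own assumption (the curve fit is only valid near the constraint), and the result differs from the book's $1,056/day only by rounding of the amplitude ratio.

```python
import math

def phi(z):
    """Eq. 3-13b curve fit: percent of data points below the low constraint
    for a limit cycle. Clamped to 0% outside the fitted range (assumption)."""
    if z > 2.5:
        return 0.0
    return -1.4743*z**3 + 14.488*z**2 - 46.888*z + 50.0

To, tau = 1.0, 20.0            # limit cycle period and residence time (min)
AR_n = To / (2*math.pi*tau)    # Eq. 3-17: amplitude ratio, about 0.008
Pn, ratio = 10000.0, 0.1       # process gain (pH/ratio) and Fr/Ff
XM, XL = 7.0, 6.0              # set point and low constraint (pH)

def S_n(S_m):                  # Eq. 3-12: pH standard deviation of the cycle
    return (S_m/100.0) * ratio * Pn * AR_n

def pct_below(S_m):            # Eqs. 3-13a and 3-13b
    return phi(abs(XL - XM) / S_n(S_m))

dXp = (pct_below(10.0) - pct_below(0.1)) / 100.0   # Eq. 3-13c
dYp, Ct, Ff, Ef = 0.04, 24.0, 100000.0, 0.1
dBp = dXp * dYp * Ct * Ff * Ef                     # Eq. 3-14, in $/day
print(round(dBp))              # close to the book's $1,056/day
```

Putting the equations in code also makes it easy to rerun the benefit estimate as the measured standard deviations change.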
Distillation Column
The benefits from the reduction in variability afforded by the improve-
ments to the basic control system can be estimated for the distillation
example in Chapter 2 with equations similar to those given for the pH
example. The limit cycle before and after the addition of the aggressively
tuned digital positioner is multiplied by an amplitude ratio to determine
the attenuation from the storage tank volume. The period of the limit cycle
is about 6 times the temperature loop time delay, and the time constant of
the storage tank with just an eductor for mixing is about half the residence
time. The attenuated limit cycles are next translated from an oscillation in
the distillate-to-feed ratio to temperature oscillation as illustrated in Fig-
ure 3-5c. The amplitudes of the temperature oscillations are then multi-
plied by the process gain to translate from temperature to impurity
concentration and finally by the economic gain to translate from impurity
concentration to the cost of extra recycle and steam.
The change in total temperature loop dead time affects the loop
period and hence the amplitude ratio.
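The attenuation by the storage tank can be estimated with the standard first-order amplitude ratio. The numbers below (dead time and residence time) are hypothetical, chosen only to illustrate the relationships stated above; for 2πτ/To much greater than 1 this expression reduces to the To/(2πτ) form used for the neutralizer.

```python
import math

def amplitude_ratio(period, tau):
    """Attenuation of an oscillation of the given period by a first-order
    (back-mixed) volume with time constant tau (same time units)."""
    return 1.0 / math.sqrt(1.0 + (2*math.pi*tau/period)**2)

dead_time = 2.0            # temperature loop dead time (min), assumed
period = 6 * dead_time     # limit cycle period ~ 6x the loop dead time
tau = 0.5 * 100.0          # tank time constant ~ half a 100 min residence time
print(round(amplitude_ratio(period, tau), 4))   # -> 0.0382
```

Even a modest surge volume attenuates a fast valve limit cycle by more than an order of magnitude, which is why the cycle must be translated through the tank before costing it.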
The value of flow feedforward and better feed tank level controller tuning
can be estimated by the reduction in fractional integrated error (∆Xp) in
product impurity concentration for a typical set of flow upsets. Case stud-
ies have shown a reduction of 2:1 from better control valves and a 5:1
reduction from an improved control scheme in product variability for a
distillation process [3.10].
The value of reduced valve stick-slip, although real and significant, is one
of the most difficult improvements to quantify. The estimation of benefits
from advanced control is usually much simpler because higher value-
added variables are more directly affected or optimized.
Paper Machine
The benefits from the use of model predictive control for the basis weight
of a paper machine shown in Figure 3-7a can be rather easily estimated by
Equations 3-28 and 3-29. Figure 3-7b illustrates how a decrease in the 2
sigma standard deviation can result in a shift of the mean much closer to
the optimum.
[Figure 3-7a and 3-7b. Basis weight control: distribution of samples versus basis weight relative to the product spec minimum weight, showing the shift in average (mean) value made possible by reduced variability.]
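The idea in Figure 3-7b can be sketched numerically. This is only an illustration of the general principle, not the book's Equations 3-28 and 3-29; the function and all numbers are hypothetical.

```python
def basis_weight_savings(min_spec, sigma_old, sigma_new, fiber_cost_per_day):
    """Fiber savings ($/day) from shifting the mean basis weight down by
    2*(sigma_old - sigma_new), keeping the mean 2 sigma above the minimum
    product spec before and after the variability reduction."""
    mean_old = min_spec + 2*sigma_old
    mean_new = min_spec + 2*sigma_new
    return fiber_cost_per_day * (mean_old - mean_new) / mean_old

# hypothetical numbers: 50 lb minimum spec, 2-sigma cut from 1.5 to 0.5 lb,
# $100,000/day of fiber through the machine
print(round(basis_weight_savings(50.0, 1.5, 0.5, 100000)))  # -> 3774
```

The benefit comes entirely from the shift of the mean: the tighter the control, the closer the operating point can sit to the spec.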
Reactor
A key operation in a plant is the catalytic reaction system. In this example,
the catalyst concentration is critical to maximize the conversion of raw
material to product, to improve yield and to minimize costs. Every pound
of the product is sold so it is also important to avoid operating conditions
that could lead to downtime and lost production [3.12].
XL = low constraint for catalyst concentration (%)
Popular Excuses
The following is a compilation of popular excuses used by people who
want to maintain the status quo and stay in their comfort zones. Most
reflect a process-design and steady-state viewpoint and a lack of under-
standing of dynamics and the advanced control goal of building and
incorporating process knowledge by extensive process monitoring, test-
ing, and modeling. In case these excuses are used as a means to delay an
APC opportunity, the APC counterpoint is given.
Rules of Thumb
Rule 3.1. — Measurements of present variability and estimates of reduced vari-
ability, attenuation factors for back-mixed volumes, conversion factors, and an
economic gain factor can be used to provide a more accurate estimate of the bene-
fits from advanced control. The gains and factors must trace the path from
the controlled variable to the process variable in the product that leaves
the plant. Dynamic property estimators may be useful to find process
gains and put the calculations online.
Rule 3.2. — Find the best practical production rate or yield from the best periods
of operation or batches from cost sheets and the best theoretical rate or yield from
steady-state and dynamic simulation of new operating conditions. The opportu-
nity assessment questions should all be oriented to find how to eliminate
the gap between the actual and practical or theoretical performance.
Rule 3.3. — Loop and process performance monitoring systems and online plant
economic performance calculations are essential to maximize and sustain benefits.
Without these calculations in place, the benefits will get lost in the noise or
attributed to other activities.
Rule 3.4. — If the benefits are not documented and reported, advanced process
control will not get the recognition needed to insure resource availability and com-
mitment. After any initial enthusiasm, the effort will rapidly fade. Due to
downsizing and information explosion, the average user is faced with an
overwhelming number of initiatives and supposedly neat ideas.
Rule 3.6. — Insure that at least a model predictive control (MPC) with some
degree of optimization is implemented. MPC has the best track record for ben-
efits.
Rule 3.7. — APC technologies must be employed in closed loop control to achieve
most of the benefits. Advanced control improvements that stay in an advi-
sory mode achieve a small fraction of the benefits possible, because of
operator inattention and bias.
Rule 3.10. — Plants with changing economic objectives, complex recycle effects,
shifting bottlenecks, and complex nonlinear relationships in several unit opera-
tions need a real-time optimization. High-fidelity steady-state models are
used for the real-time optimization of continuous constant conversion pro-
cesses, whereas dynamic models are needed for batch operations and for
processes where reaction and polymerization rates are important for opti-
mization.
References
1. Bialkowski, William L., “Process Control Audits Have Major Impact on
Uniformity,” American Papermaker, September 1990, pp. 50-57.
2. McMillan, Gregory K., “Benchmarking Studies in Process Control,” InTech,
November 1992, pp. 44-46.
Control, Prentice Hall, Inc., 1995.
5. Bialkowski, William L., “Process Analysis and Diagnostics,” Fisher-
Rosemount Systems Users Group Meeting, November 1996.
6. Luyben, Michael L., and Luyben, William L., Essentials of Process Control,
McGraw Hill Chemical Engineering Series, 1997.
7. Chia, T.L., and Lefkowitz, I., “Add Efficiency with Multivariable Control,”
InTech, September 1997, pp. 85-88.
8. Mansy, Michael M., McMillan, Gregory K., and Sowell, Mark S., “Step Into the
Virtual Plant,” Chemical Engineering Progress (CEP), February 2002, pp. 56-61.
9. Tinham, Brian, “Control in the Chemical Industry,” Control and
Instrumentation, January 1993, pp. 34-35.
10. Beal, James F., “Process Control Analysis, Improvements and Results,” ISA
Expo/Conference, Houston, September 10-13, 2002.
11. Bialkowski, William L., “Process Control Related Variability and the Link to
End Use Performance,” Control Systems Conference, Helsinki, September 17-
20, 1990.
12. Shunta, Joseph P., “Assessing & Implementing Control Improvement
Opportunities,” ISA Short Course SC05(Du), 1996, Instructor’s Notes.
Practice
Overview
Target product quality and production levels can be maintained by contin-
uously evaluating the performance of the process and the control system.
The maximum throughput and operating efficiency of a plant are ulti-
mately determined by the process design and equipment selection. How-
ever, in many cases a plant's operation is far from achieving the ultimate
capability inherent in the plant design and equipment. For example,
numerous studies done in the pulp and paper industry show that loop uti-
lization ranged from 55% to 76%, depending on production area.
The reasons for process variation and poor control utilization can be attrib-
uted to a number of causes.
One of the main reasons such conditions exist is that the downsizing of
support services in many plants has resulted in plants operating with a
minimal staff for process control and instrumentation maintenance. Often
there is only enough manpower available to fix the critical problems found
today that are limiting production and affecting product quality. Under
these circumstances, there is little time to study the process operation to
determine if an abnormal condition exists that could soon be a major
source of process disturbance if not addressed. A problem may not
become visible until the situation has deteriorated to the point where it
affects product quality or production. Operating in this “firefighting”
mode may lead to variations in operation that result in less than maximum
production in a sold-out market, or to operating at less than best efficiency,
or to wider variation in product quality, regardless of production rate.
One of the key factors is that the performance of control loops decays with
time as a result of the wearing of control valves, loss of calibration of trans-
mitters, or changes in the operation of the process. Some plants have real-
ized the impact that control and field devices are having on their
operation.
Efforts to achieve best plant performance must address both the areas of
analysis and diagnostics.
In some cases, additional staff has been added and a performance moni-
toring tool has been purchased to periodically evaluate control loop and
field device performance. However, this solution is costly and often hard
to justify in the short term. Tools layered on top of a traditional Distributed
Control System (DCS) to detect abnormal operation have had limited suc-
cess in the detection of problems in instrumentation and fast processes.
[Figure 4-1. Analysis and diagnostics work process: measure process variation and control utilization, identify areas for improvement, determine the root cause of variation, and eliminate the source of variation, with work orders linking operation problems to diagnostics.]
Opportunity Assessment
The justification for investment in tools and people for process and control
analysis and diagnostics is the reduction in process variability and the
associated improvement in plant profitability. A process parameter’s effect
on a particular plant’s production depends greatly on that plant’s limita-
tions and operating conditions. If the answer is yes to some of the following
questions, there is sufficient economic incentive to implement a perfor-
mance monitoring system in a plant.
The built-in diagnostic and analysis tools of modern scalable control sys-
tems provide the unprecedented capability to automatically identify pro-
cess variation caused by under-performing loops. This is done by
continuously calculating the improvement in control that is possible and
comparing this to the baseline loop performance. In some cases, such an
indication may only require that the PID control block be re-tuned to
correct for a change in operating conditions. An uncertain indication on a
transmitter may sometimes be quickly resolved using the online diag-
nostics provided by the manufacturer of the device.
Unfortunately, in some cases the source of the problem may not be appar-
ent and further analysis is required. In particular, resolving the source of
high variability may not be as easy as just re-tuning the loop. Even though
the loop may be correctly tuned, valve stick slip may cause the loop to
continually oscillate, as discussed in Chapter 2 and illustrated in Figure
4-2.
[Figure 4-2. Limit cycle caused by valve stick-slip: setpoint (SP) versus valve stem position.]
In today’s world, data drives the decision-making process and the ability to
make use of the vast amounts of process data is essential. In many plants,
the rate at which we are accumulating this data surpasses our ability to
extract knowledge from data and use it for better decision making. Multi-
variate statistical analysis is a technique that allows us to analyze plant
data in order to extract underlying themes in the behavior of the data, and
to then use these themes to monitor the state of the process.
Traditionally, the task of monitoring plant data has fallen into the statisti-
cal process control (SPC) or the univariate statistical analysis world. The
manufacturing industries have made great progress by focusing on key
quality variables and monitoring these variables with univariate analysis
techniques such as SPC. But are these techniques still relevant now that
the volumes of data being extracted by our automation systems have
increased by orders of magnitude? The answer is, Not always! In previous
years we had 10 engineers per quality parameter being monitored; today a
single engineer is asked to monitor 10 quality parameters. The drive to
understand quality coupled with the vast amounts of data can present a
formidable task, one well suited to the application of multivariate tech-
niques.
The multivariate techniques that are showing success today are called
Principal Component Analysis (PCA) and Partial Least Squares or Projec-
tion to Latent Structures (PLS). Multivariate statistical analysis is not a
new technology, but the application of PCA and PLS has gained attention
due mainly to the data explosion. The manufacturing and process indus-
tries have invested heavily in real-time data acquisition systems or Process
Information Management Systems (PIMS) and there is now a strong desire
by manufacturing directors to see this accumulation of data put to use to
improve plant operation; hence the strong interest in multivariate moni-
toring techniques. Other multivariate monitoring techniques include Fac-
tor Analysis, Eigen-vector Analysis and Singular Value Decomposition.
The field that uses multivariate statistical analysis tools for real-time anal-
ysis of process data is called Multivariate Statistical Process Control
(MSPC). The attraction of MSPC is in the ability to rapidly develop behav-
ioral models of your process, akin to fingerprinting normal process behav-
ior, and then to continuously compare current plant behavior to the
normal fingerprint. If a deviation from normal plant behavior is identified,
MSPC will allow you to identify which plant variables are the major con-
tributors to the cause of the deviation.
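The fingerprinting idea can be made concrete with a minimal numeric sketch using only NumPy (not any particular MSPC product): fit principal components on "normal" data, then score a new sample with Hotelling's T² against that normal model. The data, the retained-component count, and the alarm threshold are all made up for illustration.

```python
import numpy as np

# "normal" operation: two process variables that move together (x2 ~ 2*x1)
X = np.array([[1.0, 2.1], [2.0, 3.9], [3.0, 6.2], [4.0, 7.8],
              [5.0, 10.1], [6.0, 11.9]])
mu = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 1                                  # retain one principal component
P = Vt[:k].T                           # loadings: the normal "fingerprint"
lam = s[:k]**2 / (len(X) - 1)          # variance of the scores

def hotelling_t2(x):
    """Distance of a sample from normal behavior in the PCA score space."""
    t = (x - mu) @ P
    return float(np.sum(t**2 / lam))

print(hotelling_t2(np.array([3.5, 7.0])))    # middle of normal data: small
print(hotelling_t2(np.array([12.0, 24.0])))  # far outside normal range: large
```

A production MSPC system adds a prediction-error (residual) statistic and per-variable contribution plots on top of this, but the T² comparison against the normal-operation model is the core of the method.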
The ability to monitor, identify and diagnose the cause of process variabil-
ity is a task every engineer will recognize as important and one that will
improve the performance of our manufacturing facilities.
Examples
Control Variability
The response of a control loop in its designed mode of operation may not
be adequate to compensate for process disturbance and changes in set
point. The root cause of poor control performance may be traced to the
control design, tuning, measurement or actuator performance. Many mod-
ern control systems provide a means to automatically quantify the varia-
tion seen while on automatic control. Using this embedded feature, it is
possible to quickly identify control loops that require maintenance. If such
capability is not available in the control system, it may be possible to uti-
lize the statistical calculations included in a plant historian or to add loop
performance-monitoring packages that connect directly to the DCS or to
the plant historian through OPC connectivity. Both methods will monitor
control loop variability and provide insight into changes in control perfor-
mance.
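One common form of such a calculation, sketched here purely as an illustration (the exact index in any given control system or monitoring package differs), compares the total standard deviation of the PV with an estimate of the best achievable (capability) standard deviation:

```python
import numpy as np

def variability_index(pv, capability_std):
    """Percent improvement available: near 0% means the loop is already
    close to the best achievable standard deviation; larger values mean
    more is to be gained from maintenance or re-tuning."""
    s_tot = np.std(pv, ddof=1)
    return 100.0 * (1.0 - capability_std / s_tot)

# synthetic PV: irreducible noise plus a valve-induced limit cycle
t = np.linspace(0.0, 40*np.pi, 2000)
noise = np.sin(7.3*t)                  # stand-in for measurement noise
cycle = 2.0*np.sin(t)                  # stick-slip oscillation
print(round(variability_index(noise + cycle, 0.7)))   # large: worth fixing
print(round(variability_index(noise, 0.7)))           # near zero: little to gain
```

Trended over time, such an index separates loops worth a work order from loops already performing near their capability.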
By comparing the variability seen for the same operating condition and
timeframe, it is possible to identify loops whose performance is degraded.
Through the investigation of
problems that cause increased variability, it is often possible to find the
root cause and significantly improve plant operation. As discussed in
Chapter 3, this reduction in variation often makes it possible to shift oper-
ating points closer to operating constraints or to maintain the best operat-
ing set point, and thus provide greater throughput or improved quality.
[Figure 4-3. Scatter of two correlated process variables (X1 versus X2) showing an outlier.]
Caster Monitoring
The next sample industrial process we will consider for MSPC is a caster
monitoring application in a steel mill. The process is a continuous, straight
mold, Demag slab caster, shown in Figure 4-4. Liquid steel enters the
caster process where it is partially solidified in a water-cooled copper
mould. The steel exits the mould, where the steel is contained within a
thin solidified shell or skin, and cools as it moves along the process line
towards the run-out. Upon leaving the mould, there is potential for the
loss of containment of the molten steel. This loss of containment and
release of molten liquid steel is called a breakout. A casting breakout is
an extremely hazardous occurrence that causes equipment damage and
loss of production. An MSPC monitoring system was installed to provide
real-time monitoring of the casting process variables in order to predict
potential breakout conditions and provide operators with information to
help diagnose casting process problems.
[Figure 4-4. Continuous straight-mold Demag slab caster: 60-tonne tundish, mould, bending, unbending, and runout.]
The MSPC casting application has a few multivariate SPC control charts
representing the stable mould operation and alarms on detection of any
significant anomalies in the behavior of the process variables. At any time,
the operator can view how the measured plant values are contributing to
these alarms, to help diagnose process problems. The final PCA model of
the process contained over 240 inputs, collected selectively over several
months of operation in order to cover many different operating scenarios.
The PCA models were further summarized into control charts, using
Hotelling’s T2 statistic, to be monitored along with the prediction-error
control chart. T2 statistics provide an indication of variability. The T2 is
named for Harold Hotelling, a pioneer in multivariate statistical analysis.
These three charts form the basis of the real-time MSPC system, providing
continuous information to the operator on the stability of the mould oper-
ation. This example of an MSPC system, implemented in a steel plant,
delivered benefits in the areas of increased production, improved safety
and much improved process understanding—the ultimate goal of any
monitoring system [4.12].
Figure 4-5. Display Interface for Slab Caster Example (Courtesy of Dofasco,
Inc.) [4.12]
Application
General Procedure
1. Decide on the objectives of performance monitoring in terms of the
relative importance of diagnosing and predicting the following:
a. Measurement problems
b. Rotating equipment problems
c. Hydraulic resonance and shock waves
d. Pressure regulator and steam trap problems
e. Interlock and alarm system sequence of events
f. Control valve problems
g. Improper controller tuning
h. Improper controller modes
i. Interaction between control loops
j. Feedforward and disturbance analysis
k. Weeping and flooding of column trays
l. Distribution and mixing of feeds, components, phases,
polymers, and particles in fluidized beds, columns, fermentors,
reactors, and crystallizers
m. Analyzer and sample system problems
a. Provide trending and statistics of the percent of time that each
constraint has been active, and the percent of time that each
constraint has been violated.
b. Provide trending and statistics of the bias correction to each
trajectory.
c. Provide trending and statistics of predicted values for each
trajectory.
d. Provide trending and statistics of optimization set points (set
points from rampers and pushers).
Application Details
With a modern control system, all control loops are automatically moni-
tored on a continuous basis and any degradation in loop performance or
the detection of an abnormal condition in a measurement, actuator, or con-
trol block is automatically flagged. These systems may thus identify prob-
lem areas sooner than can audits done with portable PC-based tools. By
using automated system performance monitoring and built-in diagnostic
tools, the typically limited resources of plant maintenance can be used to
advantage to resolve measurement, actuator and control problems. As a
result, maintenance costs may be reduced or a higher overall level of sys-
tem performance may be maintained and process variability reduced. Any
reduction in variability can lead directly to greater plant throughput,
greater operating efficiency and/or improved product quality.
Continuous Control
In evaluating the performance of a continuous process, control systems
may calculate indices that quantify loop utilization, measurements with a
“bad,” “uncertain,” or “limited” status, limitations in control action, and
process variability. In addition, for control loops, the systems show the
potential improvement possible in control loop performance. The follow-
ing are a few practical scenarios that show how such information might be
used to determine an operating problem.
Limited Output
An instrument technician for the power plant notices that the system
shows the oil-flow control loop is limited in operation. Having been
alerted to this problem, the technician determines that the set point for the
oil header pressure control has been lowered below the design pressure,
forcing the oil valve to go fully open under heavy load conditions. When
the pressure control is readjusted to its designed target, the oil valve can
meet its set point without going fully open.
Bad I/O
A key temperature measurement is flagged by the system as having been
“bad” more than 1% of the time over the last day. Having been alerted to
the fact, the instrument technician re-examines the transmitter calibration
and finds that the device has been calibrated for a temperature range that
is too low. Re-calibrating the transmitter restores the accuracy of the mea-
surement and improves the operation of the process.
High Variability
During normal operation of the plant, the plant engineer sets all variabil-
ity limits to the current value plus 5 percent. After a few weeks, he notices
that the system has flagged a critical flow loop as having excessive vari-
ability. Upon further investigation, he discovers that the valve positioner
connection to the valve stem is loose, causing the control loop to cycle
severely. Fixing the valve positioner returns the variability index to its nor-
mal value.
All these cases are quite typical. Individually they may seem to have mini-
mal impact on plant operation. However, the net effect of these problems
could be significant if they were not addressed in a timely manner. An
automated analysis system to evaluate performance can detect abnormal
situations as they happen and thus play a key role in preventing produc-
tion losses and product quality problems.
1. Trends
A high-speed trend provides the ability to trace any parameter
associated with measurement or control actions at the same rate at
which these values change. A trend resolution of 100 msec is usu-
ally sufficient to analyze most measurement and control problems.
2. Histograms
A histogram allows the distribution of the variation in a measure-
ment value or actuator position to be analyzed. A bell shaped dis-
tribution indicates that the source of variation is random in nature.
3. Power Spectrums
A power spectrum is a frequency distribution of the components
that make up a measurement or actuator signal over a selected
period of time. Such information may be helpful in determining
the magnitude and frequency of process noise or the frequency of
disruptions caused by loop interaction or upstream processes.
4. Cross Correlations
The influence of other control loops and upstream conditions on
the variability of a control loop may be determined through cross
correlation.
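The last two analyses can be sketched with NumPy. The signals below are synthetic: a 0.5 Hz upstream disturbance and a loop PV that follows it with a 0.6 s delay (both assumptions for illustration, sampled at the 100 msec trend resolution mentioned above).

```python
import numpy as np

dt = 0.1                                   # 100 ms trend resolution
t = np.arange(0.0, 60.0, dt)               # one minute of data
upstream = np.sin(2*np.pi*0.5*t)           # 0.5 Hz upstream oscillation
pv = 0.8*np.sin(2*np.pi*0.5*(t - 0.6))     # PV follows with a 0.6 s delay

# power spectrum: find the dominant frequency in the PV
freqs = np.fft.rfftfreq(len(pv), dt)
power = np.abs(np.fft.rfft(pv - pv.mean()))**2
print(freqs[np.argmax(power)])             # 0.5 Hz: the upstream disturbance

# cross correlation: find the lag at which upstream best explains the PV
lags = np.arange(-len(t) + 1, len(t))
xc = np.correlate(pv - pv.mean(), upstream - upstream.mean(), mode="full")
print(lags[np.argmax(xc)] * dt)            # delay of about 0.6 s
```

The spectrum identifies the oscillation frequency; the cross-correlation lag points to the upstream source and gives an estimate of the transport delay between the two signals.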
Diagnostic tools for process and control analysis are available as embed-
ded features in some control systems. These tools may be used to analyze
both fast and slow processes. If an integrated tool set is not available, diag-
nostic applications can be layered on the control system for diagnostics,
although they are often limited to analysis of slow processes. Figure 4-6
illustrates the diagnostic information that is presented to the user by a typ-
ical performance monitoring system.
[Figure 4-6. Analysis selections for FT101/PID1: trend, histogram, power spectrum, and autocorrelation, plus cross correlations with PI103/PID1/OUT and AI321/AI/PV]
Batch Control
The application of online performance monitoring is most often associated
with continuous processes. However, the percentage improvement from
its application to batch control could be larger because the state of the con-
trol loops typically receives less attention than the sequence of the batch
operation. The emphasis to date in batch control has been on event sched-
uling rather than on process control. The benefits of monitoring can be
especially significant when you consider the higher profit margin and
value-added opportunities for specialty chemical and pharmaceutical
products, which are almost always batch processes. In such cases, the
answers to Opportunity Assessment questions (4) or (5) are frequently
Yes.
Figure 4-7 shows a batch reactor in a sold-out, high-profit-margin specialty chemical plant. One of the feeds and a byproduct can be recovered
from the vent system if there is sufficient condensing to reflux unreacted
feed back to the reactor and there is no liquid carryover from high level or
foaming. Periodically there are interruptions due to high reactor pressure,
level, and exchanger temperature. The cooling water pressure is some-
times too low and its temperature too high. The feed profile is scheduled
to minimize the activation of interlocks and excessive riding of the con-
denser, vent system, and exchanger limits. The efficiency and safety of the
operation depend upon a uniform quality, flow rate, and distribution of
the feeds within the batch. There is no online composition measurement of
the product. The yield varies significantly with each batch.
[Figure 4-7. Batch Reactor Control System: feed A and feed B flow loops, antifoam flow loop, reactor level and pressure control, condenser and exchanger temperature control, and the eductor-based vent system]
The standard deviation and variability indices are high and the control
output blocks are limited at times during the batch for the reactant B and
antifoam feed loops, the condenser and exchanger temperature loops, and
the reactor pressure loop.
The low-limit flag for the output function block of the reactant B flow con-
troller is periodically activated throughout the batch. After some investi-
gation, it is discovered that the original sliding stem control valve was
replaced with a ball valve that has too much capacity and too much fric-
tion near the closed position. The reactant B flow controller ramps up its
output but there is no appreciable increase in flow above the leakage flow
until its output reaches 6 percent. Then there is a burst of flow so high
above the set point that the controller must close the valve. This square
wave of flow from the stick-slip of the oversized control valve continues
through most of the batch.
The antifoam controller output during some batches ramps up to its high-
output limit. A correlation analysis shows that this coincides with the
simultaneous demand for antifoam by other batch operations and a dip in
supply pressure. An undersized antifoam pump is pinpointed as the cul-
prit. A further look at other variables shows there is a significant cross cor-
relation between high antifoam flow controller output and high reactor
pressure controller output and high condenser temperature controller out-
put. This indicates the possibility of carryover into the vent system that
can result in contamination of the byproduct and a corresponding loss in
yield of the main product.
A power spectrum of the eductor pressure measurement shows a low frequency with significant power. This is a clue that there is a reset cycle
from excessive integral action from the eductor pressure loop. During hot
summer days the process gain and time constant are less, making the reset
cycle more severe.
The above analysis reveals the importance of extending the scope of per-
formance monitoring to other unit operations, utility systems, vent sys-
tems, and ambient conditions. It also shows the value of being able to
trend performance indices, status flags, events, and variables to look for
coincident events. It is important that the trend be fast enough to deter-
mine what happened first and that the oscillation period or waveform not
be distorted by aliasing. Finally, there is obviously a need for some basic
understanding of the system to determine the actual cause-and-effect rela-
tionships.
Once the basic system has been improved, the use of performance moni-
toring can be extended to include online multivariate principal component
analysis (PCA). A PCA worm plot can provide rapid recognition of
batches that start to trend away from normal operation to warn of abnor-
mal conditions before they cause significant problems of reduced yield or
capacity. It can also lead to Partial Least Squares (PLS), Neural Network,
and dynamic linear estimators of batch end time and product concentra-
tion based on various peaks during the batch cycle. For the above reactor,
it turned out that the amount of fresh feed added to reach the first peak in
temperature was an indication of the composition of the recycled feed and
a sustained low condenser and vent valve position could be used to pre-
dict the batch end point.
While the above analysis was for a specialty chemical, the opportunities
for fermentors could be even greater due to the need for tight dissolved-
oxygen and pH control despite interactions and interferences and the
advantages of having online estimators to detect deviations of batch con-
ditions and cell and product concentration. Similarly, the analysis should
be extended to substrate and nutrient feed systems, antifoam systems,
vent systems, and utility systems. Figure 4-8 shows the main control loops
for a batch fermentor.
Figure 4-8. Batch Fermentor Control System
Multivariate Analysis
Although the methods for monitoring plant data have traditionally been
limited to statistical process control (SPC) or univariate statistical analysis,
manufacturing industries have made good use of these methods to iden-
tify and monitor key quality variables. Today, the volume of data being
extracted by automation systems has increased by orders of magnitude.
The drive to understand quality, coupled with the vast amounts of data, can make analysis a formidable task.
What are the factors that make process data analysis a challenge?
And what are the elements that make PCA and PLS good multivariate
techniques for modeling plant behavior from real-time operating data?
4. Identify which and how the original plant variables contribute to the
deviation
One of the principal strengths of the MSPC approach is the ability
to drill down and identify how the original plant variables contrib-
ute to the abnormal behavior in plant performance. Knowing how
the original plant variables contribute allows action to be taken to
bring the plant back within control.
[Figure: plant observations in three dimensions — Temperature (X1), Pressure (X2), and Flow (X3)]
The first step in MSPC is to draw a line through the data in the direction of
maximum variability. This line is called the first principal component, and projecting the data points onto this component defines our first latent variable. A second line can now be drawn through the data, orthogonal to the
first (linearly independent), and in the direction defining the second great-
est variability as shown in Figure 4-10. This is known as the second princi-
pal component, and again the data points can be projected onto this
principal component to generate latent variable 2.
These two new principal components now form a new plane in our data
space, which is commonly referred to as a scores plot. The scores plot for
our temperature, pressure and flow data is shown in Figure 4-11; it cap-
tures over 92% of the variability in the dataset. The scores plot has reduced
the order of our data from 3 dimensions to 2 dimensions. Monitoring how
the data points move on this plot is one of the graphical tools used in iden-
tifying abnormal behavior.
[Figure 4-10. First and second principal components (PC1, PC2) drawn through the X1, X2, X3 data]

[Figure 4-11. Scores plot of the data in the plane of the first two principal components]
For example, the temperature, pressure and flow data show significant
deviation from the model norm, behavior not easily detected from the time
series trends. Although we have limited our analysis to reducing 3 vari-
ables down to 2 principal components, we can often reduce hundreds of
variables into only several principal components and still capture a signif-
icant amount of the variability in the dataset.
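The projection described above can be sketched with a singular value decomposition on autoscaled data (the variable roles below are illustrative only):

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project observations onto the first principal components.
    Columns of X are plant variables (e.g., temperature, pressure, flow)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)       # autoscale each variable
    _, s, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = Z @ Vt[:n_components].T               # latent variables
    explained = (s**2)[:n_components].sum() / (s**2).sum()
    return scores, explained

rng = np.random.default_rng(1)
t = rng.normal(size=500)                           # a common underlying driver
X = np.column_stack([t + 0.1 * rng.normal(size=500),       # "temperature"
                     2 * t + 0.1 * rng.normal(size=500),   # "pressure"
                     -t + 0.1 * rng.normal(size=500)])     # "flow"
scores, explained = pca_scores(X)
```

Because the three variables here share one underlying driver, two components capture well over 92% of the variability, mirroring the reduction from three dimensions to two described in the text.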
Scores Plot
The scores plot allows the observations to be viewed in the new co-ordi-
nate system of principal components. The new principal components will
form a sub-space, and projecting the observations down onto this sub-
space will allow us to visualize how our observations relate to our PCA
model of the process. A common technique for real-time monitoring is to
connect the observations in the scores plot, forming a worm of a preset
length. Every new observation adds a value to the head of the worm,
while an older observation is dropped from the tail. Following the direc-
tion of the worm can indicate when the process is trending towards abnor-
mal behavior.
Hotelling T2
The Hotelling T2 is a statistical parameter indicating variability of the mul-
tivariable process. It can be used to classify when an observation in the
data is a strong outlier. A strong outlier in the data indicates abnormal
behavior. The Hotelling T2 statistic is most often represented as an ellipse
on the scores plot with 95% or 99% confidence intervals. Thus, any obser-
vation outside of these limits would indicate a strong outlier that does not
conform to the normal correlation model of the process data. Figure 4-11
illustrates the result of the Hotelling T2 classification for a process with
two principal components.

Prediction Error or Model Residual Plot
Contribution Plot
One of the many advantages of the PCA approach is that the information
in the model is open for inspection. If a situation is detected the system can
present how the original variables are contributing to the detected fault.
These are most often presented as a contribution difference from model
center or contribution difference between two observations.
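The T2 classification and a simple contribution breakdown can be sketched as follows (a sketch only: the fixed 5.99 cutoff is the chi-square 95% value for two components, whereas a commercial package would use an F-distribution-based limit, and the contribution form shown is just one of several in use):

```python
import numpy as np

def hotelling_t2(scores, score_std):
    """Hotelling T2 of each observation from its principal-component scores."""
    return np.sum((scores / score_std) ** 2, axis=1)

rng = np.random.default_rng(2)
Z = rng.normal(size=(2000, 3))                 # normal operating data, autoscaled
Zc = Z - Z.mean(axis=0)
_, s, Vt = np.linalg.svd(Zc, full_matrices=False)
scores = Zc @ Vt[:2].T                         # first two latent variables
t2 = hotelling_t2(scores, scores.std(axis=0))
inside_95 = float(np.mean(t2 < 5.99))          # ~95% of normal data falls inside

# Drill-down: contribution of each original variable to one observation's scores
contribution = (scores[0] / scores.std(axis=0) ** 2) @ Vt[:2] * Zc[0]
```

Observations with T2 above the limit are the strong outliers described above, and the per-variable contribution vector indicates where to look first.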
Although this treatment of the MSPC topic has been limited to the monitoring …
Rules of Thumb
Rule 4.1. — Make sure the scan time of the I/O, module, and performance moni-
toring system is faster than 1/5 of the dead time for fault diagnostics. This will
prevent misalignment of events and aliasing of loop oscillations because it
will ensure at least 10 samples per oscillation for dead time–dominant
loops and 20 samples per oscillation for self-regulating processes with a
large process time constant.
Rule 4.2. — To analyze items 1a through 1e in the General Procedure, the scan
time should be as fast as the input card or field device to avoid improper identifica-
tion of periods of oscillation (aliasing) and the sequence of events. The speed and
richness of diagnostics from Fieldbus devices can be an important advan-
tage.
Rule 4.3. — To track down the cause of oscillating control loops, locate the loops
with significant power at the same frequency on a process flow diagram (PFD)
and trace the direction of manipulated flows between the loops. The loop from
which the manipulated flow originates is the source of the oscillations. It is
generally, but not necessarily, the furthest loop upstream.
Rule 4.4. — Watch out for periodic disturbances. Aggressive tuning of level
loops, valve stick-slip, too small a reset time on loops dominated by a time
lag (reactors, fermentors, evaporators, crystallizers, and columns), and too
high a controller gain for loops dominated by a time delay (webs, sheets,
and pipelines) are the most common causes of a periodic disturbance. If
the loop manipulates an inlet or outlet flow to the volume, the oscillating
loops will be upstream or downstream, respectively.
Level loops on surge and feed tanks tend to be tuned too tightly. Level
measurement noise is a frequent problem, particularly if a high gain or
any rate action is mistakenly used. Valve stick-slip often appears as a
square wave limit cycle. If the reset time is less than the oscillation period
for a loop dominated by a large time constant, the reset time is probably
too small. These loops need to overdrive the manipulated variable and
require more gain than reset action (see Chapter 2). Conversely, the pulp
and paper industry tends to have loops that are dominated by large time
delay, and consequently the use of a gain or reset time that is set too high
is more of an issue.
Rule 4.5. — If the oscillation period is less than 4 seconds, it probably originates
from rotating equipment, pressure regulators, actuator instability, burner instability, resonance, or measurement noise. Incipient surge and buzzing due to undersized actuators and springs can cause large rapid fluctuations in flow.
Burner instability can cause rapid oscillations in fuel flow and furnace
pressure.
Rule 4.6. — Items 1a through 1i in the General Procedure can usually be handled
by performance monitoring systems that provide relative statistical measures of
variability, capability, utilization, sustained cycling, oscillation frequency and
power (Power Spectrums), saturated controller outputs, and limited process vari-
ables.
Rule 4.7. — The diagnosis and prediction of process equipment, operation,
sequence, allocation, and raw material problems (items 1j through 1s in the Gen-
eral Procedure) generally require cross correlation analysis and multivariate prin-
cipal component analysis (PCA).
Rule 4.8. — Online property estimators should be created to enhance the repeat-
ability and reliability of online and lab analysis systems by the addition of Partial
Least Squares (PLS) analysis, dynamic linear estimators, or neural networks.
Time delays and time constants must be used to align the inputs with out-
puts. Some of the need for data alignment can be reduced by the use of
long scan times but this slows down the calculation for estimation and
fault detection to a point where it may be too late or unable to resolve
what occurred first. This is a more important issue for continuous opera-
tions than batch operations since batch outputs can be tied to a batch, step,
and phase identification number and time (see Chapter 8).
Rule 4.9. — Control loops that are intentionally not in service due to batch oper-
ations, product or grade produced, or trains of equipment that are shut down to be
maintained or cleaned, must be flagged as such and other diagnostics automati-
cally suppressed. Otherwise, true problems are camouflaged by false alerts
and alarms. Also, the “Normal Mode” of loops used in such batch opera-
tions should be automatically updated to reflect the needs of the batch
sequence.
Guided Tour
This tour illustrates the potential ease of use and convenience of an inte-
grated interface that is possible for a performance monitor embedded in
an industrial control system. The following areas are addressed:
From the explorer view at the left of the interface, the user may select the
entire plant or an individual process area to examine. Based on this selection, a summary is provided that shows the number of modules that are
being utilized for control, monitoring, and calculation within the selected
area. Also, a summary is provided of the modules that have an I/O or con-
trol block with abnormal conditions. The individual modules that have an
abnormal condition are listed in the right portion of the view in the Sum-
mary tab. The types of problems that have been detected are shown using
a frown face.
When a module is selected in the summary list, the individual I/O and
control blocks in that module are listed in the bottom right portion of the
interface. The percent time that an abnormal condition existed is shown
for each block. If this time exceeds the defined limit, then this condition is
flagged as an abnormal condition in the interface; that is, a frown face is
shown for the module. Through the Filter selections, it is possible to view
information for the previous or current hour, shift or day. Also, the user
may select the type of blocks and whether all modules or just those with
abnormal conditions are displayed.
By selecting the Print icon, the user may choose to print a module sum-
mary report or a detail report. Also, to allow the operator to view the per-
formance and utilization information from his displays, a standard
function block and dynamo are used to include this information in the
operator interface. Through this dynamo an operator may also enable and
disable the monitoring in the associated area. This dynamo appears in an
operator display as shown in Figure 4-15.
Theory
The ability to quickly inspect control and measurement loops is of primary importance in industrial applications. Both poorly tuned loops and
malfunctioning field devices jeopardize product quality and production.
In the last decade this problem has received significant attention from both
academia and practitioners. Much of this work has focused on an assess-
ment of control loop performance using minimum variance controller as a
reference; see Harris [4.1] and Desborough and Harris [4.2] and [4.3]. Bea-
verstock et al. used heuristic definitions of unit production performance
tailored to specific applications, rather than loop performance [4.4].
Numerous other researchers have advanced performance monitoring. In
particular, Rhinehart [4.5] explored simple ways of computing standard
deviation from measurement-to-measurement deviations and filtering.
Harris et al. [4.6] and Huang et al. [4.7] provided guidelines for perfor-
mance assessment of multivariable controllers. Qin reviewed recent works
on performance monitoring in [4.8]. Shunta gave an excellent practical summary of performance monitoring in the monograph [4.9].
Figure 4-16. Block Parameter Status and Mode are Utilized in Performance
Monitoring
The status associated with each function-block output gives a direct indication of the quality of the measurement or control signal. Quality is
defined by describing the measurement as suitable for control (Good),
questionable for use in control (Uncertain) or not suitable for use in con-
trol (Bad). In addition, the status provides an indication of whether a mea-
surement or control signal is high or low limited. For example, if a
measurement is operating above its calibration range, the quality of the
measurement may be shown as “Uncertain.” Downstream blocks may then respond to this abnormal status in a predetermined manner. For each block, the following conditions are monitored:
• Bad I/O — the status of the block process variable (PV parameter) is
“bad,” “uncertain” or “limited.” A sensor failure, inaccurate
calibration, or measurement diagnostics have detected a condition
that requires attention by maintenance.
• Limited (control action) — a downstream condition exists that
limits the control action taken by the block. Such limitations may
prevent the loop from achieving or maintaining set point.
• Mode not Normal — The actual mode of the block does not match
the normal mode configured for the block. An operator may change
the target mode from normal because of equipment malfunction.
The percent time that these conditions exist over an hour, a shift, and a day
is computed for every block and compared to a configured global limit for
each condition. When one of these limits is exceeded, the associated
module is displayed in the main summary display.
It is possible that the status of all inputs to a control block is normal and
the mode of the block is correct and yet the control provided is poor. No
indication is included in the standard block defined by Fieldbus that
directly indicates this problem. However, statistical techniques exist that
allow the quality of control to be determined in a reliable manner. Leading
companies in the process industry use such techniques to evaluate the per-
formance of control loops [4.9]. Based on knowledge of the total and capability standard deviation of the control measure, illustrated in Figure 4-17, it
is possible to compute a variability index for control measurements that
compares current control performance to the best achievable for the pro-
cess dynamics.
Figure 4-17. Capability and Total Standard Deviation for Process Variable
X̄ = ( Σ(i = 1 to n) Xi ) / n    (4-2)
When loop performance is of concern, then standard deviation alone may
not provide sufficient information to evaluate loop tuning. To gain an
objective judgment, a reference value for process control performance is required. Theoretically, the best-performing feedback control is minimum variance control, with standard deviation Sfbc. This value may be calculated [4.9] directly from knowledge of the total (Stot) and capability (Scap) standard deviations of the process measurement, as shown in Formulas 4-3 and 4-4.
Sfbc = sqrt( 2·Scap² − Scap⁴ / Stot² )    (4-3)
Scap = sqrt( Σ(i = 2 to n) (Xi − Xi−1)² / (2 · (n − 1)) )    (4-4)
Based on the value of the total standard deviation and the standard deviation for minimum variance control, it is possible to define a variability index for the control loop that reflects how close control performance comes to minimum variance.
Variability Index VI = 100 · ( 1 − (Sfbc + s) / (Stot + s) )    (4-5)

where s is the sensitivity factor.
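Formulas 4-3 through 4-5 can be exercised on a simulated data series with a short calculation (a sketch only: the set point is taken as zero and the sensitivity factor s as a small constant):

```python
import numpy as np

def variability_index(x, s=0.1):
    """Variability index per Formulas 4-3 to 4-5: compares the total
    variation to an estimate of the best achievable (minimum variance)
    feedback control. s is the sensitivity factor."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s_tot = x.std(ddof=1)                                          # total std deviation
    s_cap = np.sqrt(np.sum(np.diff(x) ** 2) / (2 * (n - 1)))       # capability, Formula 4-4
    s_fbc = np.sqrt(max(2 * s_cap**2 - s_cap**4 / s_tot**2, 0.0))  # Formula 4-3
    return 100.0 * (1.0 - (s_fbc + s) / (s_tot + s))               # Formula 4-5

rng = np.random.default_rng(3)
white = rng.normal(size=5000)                       # unpredictable noise
t = np.arange(5000)
cycle = np.sin(2 * np.pi * t / 200) + 0.05 * rng.normal(size=5000)
vi_white = variability_index(white)                 # near 0: little to gain
vi_cycle = variability_index(cycle)                 # large: the cycle is removable
```

White noise yields an index near zero (the variation is unpredictable, so control cannot remove it), while a slow cycle yields a large index, flagging room for improvement.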
Finally, for control blocks, overall control performance is as follows:
The sums used in these statistics are accumulated on each execution of the function blocks. The parameter values are then updated once every n executions of the function block. For a typical implementation, the update is done after 120 executions of the function block.
where:

MAE = (1/N) · Σ(t = 1 to N) | y(t) − ȳ |    (4-8)
The measurement value is used in I/O blocks to calculate the mean value.
In control blocks, either the working set point or the measurement value is
used depending on block mode.
The relation between standard deviation and a mean absolute error can be
verified by computing a mean absolute value for the normalized Gaussian
distribution, p(x):
p(x) = (1/√(2π)) · e^(−x²/2)    (4-9)

E|x| = ∫(−∞ to ∞) |x| · p(x) dx = √(2/π) = 0.7978845    (4-10)
Scap = MR / 1.128    (4-12)

where:

MR = (1/(N − 1)) · Σ(t = 2 to N) | y(t) − y(t − 1) |    (average moving range)    (4-13)
Only the summing component associated with the MAE and MR is done
each execution. The division of the sum by N or N-1 is done as part of the
Stot, and Scap calculations only once every N executions (default N=120).
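This execution/update split can be sketched as a small class (the names are illustrative, not from any function block specification, and the moving-range chain is simply restarted at each update):

```python
import random

class BlockStatistics:
    """Accumulate absolute-error and moving-range sums on each function
    block execution; compute Stot and Scap only once every n executions."""

    def __init__(self, n=120):
        self.n = n
        self._reset()

    def _reset(self):
        self.count = 0
        self.sum_abs_err = 0.0       # for MAE (Formula 4-8)
        self.sum_moving_range = 0.0  # for MR (Formula 4-13)
        self.prev = None

    def execute(self, y, reference):
        """Cheap per-execution summing; reference is the working set point
        (control blocks) or the mean value (I/O blocks)."""
        self.sum_abs_err += abs(y - reference)
        if self.prev is not None:
            self.sum_moving_range += abs(y - self.prev)
        self.prev = y
        self.count += 1
        if self.count < self.n:
            return None                      # no update yet
        mae = self.sum_abs_err / self.n
        mr = self.sum_moving_range / (self.n - 1)
        s_tot = mae / 0.7979                 # Gaussian relation from Formula 4-10
        s_cap = mr / 1.128                   # Formula 4-12
        self._reset()
        return s_tot, s_cap

random.seed(4)
block = BlockStatistics(n=120)
result = None
for _ in range(120):
    result = block.execute(random.gauss(0.0, 1.0), 0.0)
s_tot, s_cap = result
```

For unit-variance Gaussian noise about the set point, both estimates come out near 1, and only the division steps run once per update window.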
Capability standard deviation calculation requires that the sampling rate
be fast enough. The requirements for the sampling rate are similar to the
scan rate of a control loop. A practical estimate for selecting scan rate for
control loops is five or more samples per time constant [4.9].
[Figure: advanced control applications covered by performance monitoring — adaptive tuning, neural networks, model predictive control, multivariable fuzzy logic control, and real-time blending optimization, each with its associated transmitters]
Abnormal Inputs
For a single input control block such as the PID or FLC control block, the
status of the primary input (PV) of the block may be used to determine if
the input is abnormal. For example, if the PV status is uncertain or “bad,” the input is counted as abnormal,
where:
∨ = a logical sum (OR) of the i = 1, …, N conditions
Control Limited
Where advanced control is providing multiple process inputs, the control
interface to the process will be done through I/O or control blocks. Both
provide an output that will be used as a back-calculation input to the
advanced control application. If the control action taken is limited down-
stream, then this is reflected in the status of the associated back-calculation
input. Since the control objective will not, in general, be met if any of the
control outputs becomes limited, then the indication that control is limited
must consider the back-calculation input status associated with all control
outputs. This may be calculated as follows:
Incorrect Mode
For single input/output blocks, the status of the primary input, cascade
input, back-calculation input, and target mode must be used in determin-
ing achievable mode of operation. This mode of operation is reflected in
the mode parameter as the actual mode attribute. When a block is config-
ured, the customer may indicate the normal mode that the block is
designed to operate in. By comparing the actual mode attribute to the nor-
mal mode attribute for the block, it is possible to determine whether the
block is operating in its designed mode. Advanced control applications
may be engineered to use the standard status and mode definition. To cal-
culate mode, the status of all inputs used in the control and the status of all
back-calculation inputs will be utilized. Each output provided by the
advanced control application will be designed to support handshaking,
bumpless transfer, and windup prevention based on the back-calculation
inputs. Thus, mode will be treated exactly the same as other control blocks
in the control system. Incorrect Mode may be determined as follows:
Control Index
For single loop PID and FLC control, the variability index is calculated
based on the total and capability standard deviations calculated in the
control block. In an advanced control application involving multiple
inputs and targets, the measure of control performance must consider
each controlled input compared to its target value. For advanced control
applications, such as MPC or multi-variable fuzzy logic, this concept may
be extended to calculate an average index or minimum value of the index:
CIA = (1/L) · Σ(i = 1 to L) CI(i)
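The average and worst-case roll-up can be sketched in a few lines (hypothetical function name):

```python
def overall_control_index(indices):
    """Average (CIA) and minimum of per-controlled-variable indices CI(i),
    for advanced control applications with L controlled inputs."""
    cia = sum(indices) / len(indices)
    return cia, min(indices)

cia, ci_min = overall_control_index([82.0, 64.0, 90.0])
```

Reporting both values is useful because an acceptable average can hide one badly performing controlled variable.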
Diagnostic Tools
Insight into the source of variation may be obtained using tools that sup-
port the calculation of power spectrum and cross correlation:
The power spectrum may be calculated from the Fourier series coefficients
as illustrated in Figure 4-19.
The Fourier series representation of the signal is:

X(t) = Σ(i = 1 to n) [ Ai · cos(wi · t) + Bi · sin(wi · t) ]

where wi = 2π · i / (N · ΔT), N = number of points collected, and n = N/2. The power amplitude Pi at frequency wi is:

Pi = Ai² + Bi²

[Figure 4-19. Power Spectrum Calculated from the Fourier Series Coefficients]
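The Fourier coefficient calculation maps directly onto an FFT (a sketch; a production tool would typically also window and average data segments):

```python
import numpy as np

def power_spectrum(x, dt):
    """Power Pi = Ai^2 + Bi^2 at each frequency, computed from the FFT
    coefficients of the mean-removed signal sampled every dt seconds."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    coeffs = np.fft.rfft(x - x.mean())
    a = 2.0 * coeffs.real / n          # cosine coefficients Ai
    b = -2.0 * coeffs.imag / n         # sine coefficients Bi
    power = a**2 + b**2
    freq_hz = np.fft.rfftfreq(n, dt)
    return freq_hz, power

dt = 0.1
t = np.arange(0, 100, dt)              # 1000 samples over 100 s
x = np.sin(2 * np.pi * 0.5 * t)        # a 0.5 Hz loop oscillation
freq, power = power_spectrum(x, dt)
peak_hz = freq[np.argmax(power)]
```

The location of the spectral peak identifies the oscillation frequency, which is exactly the diagnostic use described in the text.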
The cross correlation between signals X(t) and Y(t) at time shift k is:

Cxy(k) = [ (1/N) · Σ(i = 1 to N−k) (Xi − X̄) · (Yi+k − Ȳ) ] / (σx · σy)

where N = number of samples and k = 0, 1, 2, …, N − 1.

[Figure: cross correlation Cxy plotted against time shift k for signals X(t) and Y(t)]
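The cross-correlation formula above can be implemented directly; the lag of the peak then indicates how far one signal leads the other (an illustrative sketch):

```python
import numpy as np

def cross_correlation(x, y, max_lag):
    """Cxy(k): correlation between X and Y shifted by k samples,
    normalized by the two standard deviations."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    xd, yd = x - x.mean(), y - y.mean()
    sx, sy = x.std(), y.std()
    return np.array([np.sum(xd[: n - k] * yd[k:]) / (n * sx * sy)
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(5)
upstream = rng.normal(size=2000)
downstream = np.roll(upstream, 7)      # the same disturbance, 7 samples later
downstream[:7] = 0.0
c = cross_correlation(upstream, downstream, max_lag=20)
lag = int(np.argmax(c))
```

The peak at a positive lag shows the upstream signal leading the downstream one, the kind of evidence used above to trace a disturbance back to its source loop.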
The status and actual mode attributes used by the performance monitor-
ing system to calculate loop utilization, limited condition, and bad mea-
surement normally do not change in value. Thus, communication
requirements may be minimized by reporting parameters only on a
change in these attribute values [4.10], [4.11]. If this approach is taken in
the system design, then the communication load for reporting these
parameters will normally be close to zero. Performance statistics may be
calculated over a specified period of time, e.g., 120 executions of the con-
trol or measurement block, and then reported to the performance monitor-
ing application. Thus, these parameters are reported every 60 seconds for
a block with an execution rate of 0.5 seconds.
[Figure: scalable system — process/performance monitoring function block (PM-FB) parameters in the scalable controllers are reported by exception to the main workstation server application and to client workstation applications]
When the server is first placed online, the current state of the required
attributes is reported once and subsequent updates are sent by exception
reporting.
The support of high-speed trends for diagnostics places a special requirement on the control system design. Inaccuracies may be introduced by jitter and aliasing as a result of measurement sampling: to reconstruct a signal correctly, the sample rate must be at least twice as fast as the highest frequency in the sampled signal. If this is not possible, then aliasing occurs, as illustrated in Figure 4-23.
[Figure 4-23. Variation in the time of sampling (jitter) and periodic sampling of a too-fast signal (aliasing) distort the measurement values plotted under the assumption of uniform sampling]
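The aliasing effect illustrated in Figure 4-23 can be reproduced numerically (a sketch): a 0.9 Hz oscillation sampled once per second, below the required 1.8 Hz Nyquist rate, is indistinguishable from a 0.1 Hz cycle.

```python
import numpy as np

def apparent_frequency(f_signal_hz, f_sample_hz):
    """Frequency observed after uniform sampling: content above the
    Nyquist limit folds back (aliases) into the 0..f_sample/2 band."""
    f = f_signal_hz % f_sample_hz
    return min(f, f_sample_hz - f)

# A 0.9 Hz loop oscillation sampled once per second looks like 0.1 Hz:
alias_hz = apparent_frequency(0.9, 1.0)

# Confirm with actual samples: the two sampled waves are indistinguishable.
t = np.arange(0, 40, 1.0)                  # 1 Hz sampling
fast = np.sin(2 * np.pi * 0.9 * t)
slow = -np.sin(2 * np.pi * 0.1 * t)        # the aliased 0.1 Hz appearance
```

This is why a monitoring system sampling too slowly can report a plausible-looking but wrong oscillation period.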
Historically, the only way to collect high-speed data for diagnostics was with dedicated tools attached at the terminal strip of the control system, as illustrated in Figure 4-24.
[Figure 4-24. PC-based diagnostic tool with its own I/O, connected through temporary wiring to the I/O file terminations for flow transmitter FT9-1]
However, many Fieldbus devices introduced in the last few years support
the collection of high-speed data for diagnostics. Also, some modern con-
trollers allow measurement and control parameters to be collected in the
controller for diagnostic support. This trend information collected in the
Fieldbus device and controllers may be accessed without aliasing or jitter
and thus used within the control system for diagnostic support, as illus-
trated in Figure 4-25.
[Figure 4-25. Trend blocks in the scalable controller and trend objects in Fieldbus devices (e.g., TT1-1) and smart or traditional 4-20 mA transmitters (e.g., FT9-1), accessed by a diagnostic application in the workstation over the communication network]
References
1. Harris, T., “Assessment of Control Loop Performance,” Can. J. Chem. Eng.,
1989, 67(10):856-861.
2. Desborough, L., and Harris, T.J., “Performance Assessment Measures for
Univariate Feedback Control,” Can. J. Chem. Eng., 1992, 70:1186.
3. Desborough, L., and Harris, T.J., “Performance Assessment Measures for
Univariate Feedforward/Feedback Control,” Can. J. Chem. Eng., 1993, 71:605.
4. Beaverstock, C. Malcolm, and Martin, Peter G., “Performance Control
Apparatus and Method in a Processing Plant,” US Patent 5,134,574,
July 28, 1992.
5. Rhinehart, R. Russell, “A Cusum type on-line filter,” Process Control and
Quality, 2 (1992) 169-179.
6. Harris, T., Boudreau, F., and Macgregor, J. F., “Performance Assessment of
Multivariable Feedback Controllers,” Automatica, 1996, 32(11):1505-1518.
7. Huang, B., Shah, S.L., and Kwok, K.Y., “Good, Bad or Optimal? Performance
Assessment of MIMO Processes,” Automatica, 1997, 33(6): 1175-1183.
8. Qin, S.J., “Control Performance Monitoring – A Review and Assessment,”
NSF/NIST Workshop, New Orleans, March 6-8, 1998.
Practice
Overview
In modern control systems, expert system technology is playing an ever-
increasing role in assisting the operator in the detection and management
of abnormal situations in a plant. With the introduction of distributed control systems in the late 1970s, the basic control systems of many process plants went through major changes in organization and operation. In
many cases, the introduction of distributed control allowed control func-
tions to be concentrated into a few control rooms. The traditional control
panels for operator interface to the process were replaced with keyboards
and monitors. There were few limits on the amount of information that
could be accessed and displayed at these operator stations. These systems
allowed an operator to make changes and see the process alarms associ-
ated with his area of responsibility. In some cases, the system was
designed to allow all information about the plant to be accessed from any
terminal within the system [5.1]. As a result of this technology change, and
the increasing pressure on companies to increase productivity, the scope of
control that an operator was responsible for changed dramatically.
In one pulp and paper mill the introduction of a distributed control system
allowed three control rooms to be consolidated into one and for one oper-
ator to do the job formerly done by three [5.2]. In some process areas, an operator is responsible for thousands of measurements and hundreds of motors and control loops, in addition to various subsystems in
a process area. Thus, it has become increasingly difficult for an operator to
be aware of all conditions in the plant. During normal operations, there is
insufficient time for an operator to examine all measurements in his area
Opportunity Assessment
Expert systems have been applied successfully in a wide variety of areas.
Within the process industry, there is significant benefit in using this tech-
nology for abnormal situation management. In assessing the potential
benefits of this technology, the following questions should be asked:
Examples
Alarm Screening
Expert systems are used in areas of the refining industry for abnormal sit-
uation management. One such use is in the prevention of alarm flooding.
For example, the regeneration unit of the hydrocracking process is vital to
plant operation and production. The interactive nature of the large num-
ber of measurements and control loops associated with this unit means
that a failure of one piece of equipment may result in the operator being
presented with many alarms: the original failure plus the alarms associ-
ated with measurements that the equipment affects. Under these condi-
tions, it is of great help to the operator if the alarm associated with the
failure is clearly presented and other alarms resulting from this event are
only logged, not presented to the operator. An expert system is used to
look for specific equipment failures and to automatically suppress other
alarms triggered by the failure. Under normal operating conditions, all
alarms would be active; the expert system only suppresses alarms when
specific operating conditions are detected.
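This screening logic can be sketched as follows; the tag names and the mapping of root failures to consequential alarms are illustrative, not from an actual refinery application:

```python
# Sketch of alarm screening: when a specific root equipment failure is
# detected, the alarms it is known to cause are logged but not presented.
# Under normal conditions every alarm passes through untouched.

# Hypothetical map from each root failure to its consequential alarms.
CONSEQUENTIAL = {
    "REGEN_BLOWER_FAIL": {"REGEN_PRESS_LO", "REGEN_TEMP_HI", "REGEN_FLOW_LO"},
}

def screen_alarms(active_alarms):
    """Return (presented, logged_only) given the set of active alarm tags."""
    suppressed = set()
    for root, consequences in CONSEQUENTIAL.items():
        if root in active_alarms:             # root failure detected
            suppressed |= consequences & active_alarms
    presented = active_alarms - suppressed
    return presented, suppressed
```

With the blower failure active, only the root-cause alarm is presented; the pressure and temperature alarms it caused are suppressed to the log.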
trips, a number of other upstream and downstream alarms are also gener-
ated that would make it difficult for the operator to quickly identify the
problem areas, as illustrated in Figure 5-1.
Fault Detection
An oil field may contain hundreds of wells. The early detection of an
abnormal condition such as blocked flow in the wellhead may avoid dam-
age to the associated pump. Because such conditions are often indicated
by a combination of measurement values, traditional value or deviation alarming cannot be used to alert the operator to them. However, by
using an expert system to monitor the wells, the conditions that indicate
abnormal operation are detected and brought to the operator’s attention.
To implement the expert system, facts would be defined for the measure-
ments that are included in a production control system. To detect blocked
flow, one rule would be written that examines the conditions that indicate
blocked flow; e.g., oil flow and the pump pressure. By using the variable
definitions in both the left and right portions of the expert rule, it is possi-
ble to monitor all the wells with one rule. As wells are added to or
removed from the system, the only change required would be to update
the facts; the rule would not have to be modified. When a blocked-flow
condition is detected, the rule is designed to write to parameters in the
control system that cause the detected condition to be alarmed and dis-
played at the operator interface. An example of how this would be dis-
played to the operator is shown in Figure 5-3.
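The economy of "one rule for all wells" can be sketched as follows; the slot names and thresholds are hypothetical:

```python
# Sketch of monitoring many wells with one rule: the rule is written once
# against fact slots, and pattern matching applies it to every well fact.
# Slot names and limits are illustrative, not from an actual oil field.

wells = [  # fact list: one fact per well
    {"well": "W-101", "oil_flow": 0.0, "pump_press": 310.0},
    {"well": "W-102", "oil_flow": 42.0, "pump_press": 180.0},
]

def blocked_flow_rule(fact, flow_lo=1.0, press_hi=250.0):
    """Blocked flow is indicated by low oil flow with high pump pressure."""
    return fact["oil_flow"] < flow_lo and fact["pump_press"] > press_hi

def scan_wells(fact_list):
    # Adding or removing a well changes only the fact list, not the rule.
    return [f["well"] for f in fact_list if blocked_flow_rule(f)]
```

Note that adding a third well is purely a data change: the rule itself is never touched, which mirrors the maintenance benefit described above.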
Application
General Procedure
1. Identify the areas to be addressed by the expert system:
a. Select applications that have significant impact on plant operations.
Application Details
The following details were developed for the application of expert sys-
tems for abnormal situation management. To achieve the best results, the
user should adhere to the guidelines specified in this section.
• To develop an expert system for abnormal situation monitoring, it is
necessary to have access to a person who is very familiar with the
process; i.e., an expert on the process.
• Define I/O for the process areas and units of equipment that are
addressed by the project.
• Analyze the best grouping of this information into facts to support
the pattern-matching requirements.
• Test the fact design by addressing portions of the application in the
first iteration of the implementation. Modify the fact slot definitions
as required based on testing with this initial implementation.
Rules of Thumb
Rule 5.1. — The granularity of rules has an impact. Too coarse a rule definition leads to firings that are hard to understand, while too fine a granularity makes an expert system difficult to modify.
Rule 5.3. — The domain must be well bounded for an expert system to work.
Effort should be spent in the initial portion of the project to clearly define
the objectives that must be met. The project should be tightly managed to
achieve these objectives.
Rule 5.5. — When new functionality is added, don’t test just the new features.
As an expert system is built up in increments of new functionality, it is
important to always test the previous knowledge features of the system
along with the new ones.
Rule 5.6. — Make the rules as modular and generic as possible to improve the
expandability and maintainability of the expertise. Make sure the fundamental
concept is captured rather than a specific representation of a more general
rule [5.9].
Rule 5.7. — False diagnostics and alerts will quickly destroy operator confidence.
Debug the system and bring it online out of view of the operator, in a com-
puter or configuration room [5.9].
Rule 5.8. — Base the validation of level and other integrating responses on a rate
of change rather than an absolute value to eliminate model drift. The diagnostics
for level measurements and the detection of plugged nozzles should be
Rule 5.9. — Rate of change, sustained deviation, and dead signal calculations
must use filters, velocity limits, and calculation intervals that maximize the sig-
nal-to-noise ratio. The signal should be filtered and sent through a dead
time block to create a calculation time interval long enough to see a valid
change. Velocity limits should screen out changes that are faster than
physically possible in the process. Note that a dead time block, rather than a large scan time, should be used to prevent both low-frequency noise and long failure-detection delays.
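A minimal sketch of this rate-of-change calculation, assuming illustrative filter, dead time, and velocity limit settings:

```python
from collections import deque

# Sketch of Rule 5.9: the rate of change is computed against a sample
# delayed by a dead time block (not the previous scan), after first-order
# filtering and velocity limiting. All parameter values are illustrative.

class RateOfChange:
    def __init__(self, dead_time_scans=10, filter_alpha=0.2, max_rate=5.0):
        self.buf = deque(maxlen=dead_time_scans + 1)  # dead time block
        self.alpha = filter_alpha                     # first-order filter
        self.max_rate = max_rate                      # physical velocity limit
        self.filtered = None

    def update(self, raw):
        # Filter to raise the signal-to-noise ratio.
        if self.filtered is None:
            self.filtered = raw
        else:
            self.filtered += self.alpha * (raw - self.filtered)
        self.buf.append(self.filtered)
        if len(self.buf) <= 1:
            return 0.0
        # Change over the whole dead-time interval, expressed per scan.
        rate = (self.buf[-1] - self.buf[0]) / (len(self.buf) - 1)
        # Velocity limit: changes faster than physically possible are noise.
        return max(-self.max_rate, min(self.max_rate, rate))
```

Using the dead-time buffer gives a calculation interval long enough to see a valid change without slowing the scan itself, which is the point of the rule.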
Rule 5.10. — Scan time and rule execution time must be fast enough to resolve
the proper sequence to avoid picking up on an effect rather than a cause. Often
the sequence of events is important to determining the root cause. For fast
processes or slow measurements, it can be difficult to decipher what hap-
pened first.
Rule 5.11. — The interface must be integrated into the displays for the operators.
Separate computers with special screens decrease the utility and acceptance of the system.
Rule 5.12. — The alerts must not be repetitive or obnoxious. Loud or abusive
alerts will lead to loud and abusive replies from the operators. The setting
and clearing of alerts should require multiple executions with the same
conclusions before the operator is notified. The engineer should be
advised of flickers so that the system can be adjusted. Use a manual reset
for alerts that cannot be reliably cleared [5.9].
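The set-and-clear logic of this rule might be sketched as follows; the confirmation count and flicker counter are illustrative:

```python
# Sketch of Rule 5.12: an alert is set (or cleared) only after the same
# conclusion is reached on several consecutive rule executions. Reversals
# before confirmation are counted as "flickers" for the engineer to review.

class DebouncedAlert:
    def __init__(self, confirm_count=3):
        self.n = confirm_count
        self.streak = 0
        self.last = False
        self.active = False
        self.flickers = 0   # conclusion reversals before confirmation

    def evaluate(self, conclusion):
        if conclusion == self.last:
            self.streak += 1
        else:
            if 0 < self.streak < self.n:
                self.flickers += 1          # flicker: advise the engineer
            self.last = conclusion
            self.streak = 1
        if self.streak >= self.n and conclusion != self.active:
            self.active = conclusion        # notify the operator only now
        return self.active
```

A single noisy scan never reaches the operator; it only increments the flicker count so the detection can be adjusted.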
Rule 5.13. — Use a sustained deviation from an updated baseline for the detec-
tion of batch phase endpoints and a rate of change for the detection of batch trajec-
tory abnormalities to eliminate drift and changes in starting points. A large filter
is used to create the baseline. The last good batch trajectory is used as the
reference trajectory. Process knowledge must be used to make sure conclu-
sions are reasonable.
Rule 5.14. — Upgrade the field instruments before installing an expert system.
Otherwise it will be giving you information you already know. If you use
valves without positioners or orifice meters, the expert system will tell you
over and over again that these are stupid.
Rule 5.15. — Make sure you have access to the original process expertise to tune
and maintain the system. The idea that these systems allow you to capture
for posterity expertise that is disappearing is a fallacy. These systems
need to be adjusted and updated for unforeseen or changing conditions
and mistakes. If the expert leaves, the expert system is not far behind.
Guided Tour
This tour illustrates the potential ease of use and convenience of an expert-
system interface for abnormal situation monitoring embedded in an
industrial control system [5.3, 5.4]. The following areas are addressed:
Embedding the expert system within the control system makes it possible
to use predefined fact templates to access measurements, calculations,
events, and historical information in the control system. As part of the I/O
processing done outside of the expert engine, dynamic analog values
included as slots in the template are automatically compared to limit val-
ues to obtain a discrete value. Thus, when the discrete value of a slot
changes state, the old fact is automatically retracted and the fact re-
inserted with the new slot value.
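This limit comparison and retract/re-assert behavior can be sketched as follows; the tag name and limit values are illustrative:

```python
# Sketch of the I/O processing described above: a dynamic analog value is
# compared to limits outside the expert engine to obtain a discrete state.
# When the state changes, the old fact is retracted and re-asserted with
# the new slot value, re-triggering pattern matching.

fact_list = []          # the expert system's working memory

def classify(value, lo, hi):
    return "LO" if value < lo else "HI" if value > hi else "NORMAL"

def update_fact(name, value, lo, hi):
    state = classify(value, lo, hi)
    old = next((f for f in fact_list if f["name"] == name), None)
    if old is not None and old["state"] == state:
        return False                      # no state change; fact untouched
    if old is not None:
        fact_list.remove(old)             # retract the old fact
    fact_list.append({"name": name, "state": state})   # re-assert
    return True                           # pattern matching re-triggered
```

Because the analog value is reduced to a discrete state first, the expert engine only sees facts when something meaningful changes.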
[Figure: expert system embedded in the control system — inference engine, working memory (facts), knowledge base (rules), and agenda, connected to real-time data, diagnostic data/events, the historian, and the configuration]
Through a special function added to the expert engine, a rule can write to
any function-block parameter within the control system. A special block is
provided in the control system to allow alarms and messages to be acti-
vated, as shown in Figure 5-5.
[Figure 5-5: an expert system rule writing through a parameter-write interface to an alarm block, with the resulting message and alarm banner shown on predefined dynamos in the operator interface]
The fact templates and rule templates provided in the expert system are
made visible within the system in a hierarchical tree similar to Microsoft
Windows® Explorer file presentation. When a fact template is selected, a
dialog is automatically presented to the user that can be used to define a
fact. An example of the template provided to access parameters of a func-
tion block within the control system is shown in Figure 5-6.
that allows the features of the expert engine to be fully utilized. The fact
templates can reference control system data in the left side of the rule. In
addition, a special function is supported in the action, or right-hand, side
of the rule that allows block parameters within the control system to be
written. This capability is used to trigger alarms and to write messages to
be shown in the operator interface. Also, this capability is used to directly
interact with the control system by changing set point, mode of control
blocks, etc. An example of the dialog used to define rules is shown in Fig-
ure 5-7.
Once rules and facts have been assigned to execute on a PC node within
the control system, the results of rule evaluation are examined. The expert
system includes an option to examine the current state of facts and rules
that have fired and are on the agenda, and to access statistics concerning
the evaluation of rules. Based on this selection, the information about the
rule evaluation is displayed in the right side of the expert system explorer
view as shown in Figure 5-8:
Figure 5-8. Examining Rule Evaluation
To support the testing of an expert system, the expert engine has several
online debugging features. Through the toolbar selection shown in Figure
5-9, the user may exert the following control over the expert engine’s exe-
cution:
Figure 5-9. Online Debugging Support
When simulation is enabled on a fact, the user may change the values of fact slots. The icon associated with the fact changes to indicate that the real value from the system is not being used.
Theory
The knowledge of an expert can be represented as rules and facts. If facts
exist that match the pattern defined by a rule, then the rule is satisfied and
executes its configured action. Unlike procedural program languages,
such as C++, in which there is a tight interweaving of data and knowl-
edge, expert systems allow for two levels of abstraction: data and knowl-
edge. An inference engine applies the knowledge to the data and relies on
pattern matching to guide execution. The inference engine determines
which rules are satisfied by facts and executes these in the order of their
assigned priority. There is no specified control flow as with procedural
programming. The general structure of an expert system is shown in Fig-
ure 5-11.
[Figure 5-11: general structure of an expert system — knowledge base (rules), inference engine, facts, agenda, and user interface]
The user interface will typically allow various aspects of the expert system
execution to be monitored. The user interface may also be used to enter
facts.
Rules
The reasoning capability of the human mind is complex and difficult to fully explain. However, as early as the 1970s, it was shown that much of human problem solving can be expressed by IF…THEN–type rules [5.6]. This model of human problem solving and inference is the basis of rule-based expert systems. A rule has the form IF (antecedent) THEN (consequence): the antecedent is commonly referred to as the “conditional part,” the “pattern,” or the “left-hand” side of the rule, and the consequence is commonly referred to as the “list of actions” or the “right-hand” side of the rule.
The expert system attempts to match the patterns of rules to the facts in
the fact list. If all the patterns of a rule match facts, the rule is activated and
put on the agenda. A rule will fire only once for a specific set of facts. This
property, known as refraction, prevents an expert system from being caught
in a loop where the same rule continues to fire.
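A toy forward-chaining cycle with refraction might look like the following sketch; the rule and facts are illustrative, and priority ordering is omitted for brevity:

```python
# Toy forward-chaining cycle with refraction: a rule fires at most once for
# a given fact, so the same rule does not loop forever on the same data.

def run(rules, facts, max_cycles=100):
    fired = set()   # (rule_name, fact) pairs already fired: refraction
    for _ in range(max_cycles):
        # Agenda: activations whose pattern matches a fact, not yet fired.
        agenda = [(r, f) for r in rules for f in list(facts)
                  if r["pattern"](f) and (r["name"], f) not in fired]
        if not agenda:
            break
        rule, fact = agenda[0]            # priority ordering omitted
        fired.add((rule["name"], fact))
        new = rule["action"](fact)
        if new is not None:
            facts.add(new)                # asserted facts re-enter matching
    return facts

# Illustrative rule: a high header temperature asserts an alarm fact.
rules = [{"name": "hot-header",
          "pattern": lambda f: f == ("Steam", "temp-high"),
          "action": lambda f: ("Steam", "alarm")}]
```

Without the `fired` set, the rule would stay satisfied by the same fact and fire forever; refraction is what lets the cycle terminate.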
Inference Engine
The inference engine is the part of an expert system that matches facts to
rules to determine which rule to execute. Expert systems may use several
different inference methods.
The two most common methods used by commercial expert systems for
the inference engine are forward and backward chaining.
Facts
In order for an expert system to solve a problem, it must have data or
information for pattern matching. Groupings of this information are
known as facts. In general, a fact will consist of a relation name and named
slots with their associated values. An example of a fact, as defined in the
CLIPS expert system software, is shown in Figure 5-14.
In this example, the symbol “Header” is the fact’s relation name and the
fact contains three slots: name, pressure, and temperature. The value of the
name slot is “Steam,” the value of the pressure slot is “1200” and the value
of the temperature slot is “900.” The order in which slots are specified in
the fact is irrelevant. To facilitate implementation, groups of facts that
share the same relation name and contain common information are
described using a Fact template. Also, multi-slot capability is provided
that allows multiple values to be defined for one slot.
Groups of facts are stored in the expert system in the fact list. New facts
may be added to the fact list using an Assert command. Similarly, the
Retract command is used to remove facts.
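The Header fact of Figure 5-14 and the Assert/Retract commands can be mimicked in Python terms as a sketch (this is not CLIPS syntax):

```python
# Sketch of a fact as a relation name plus named slots. Because slots are
# named, their order is irrelevant; a multislot would simply hold a tuple
# of values. The fact list is a set, so a duplicate assert has no effect.

def make_fact(relation, **slots):
    """Build a fact as (relation, frozenset of slot/value pairs)."""
    return (relation, frozenset(slots.items()))

fact_list = set()

def assert_fact(fact):
    fact_list.add(fact)        # duplicate asserts have no effect

def retract(fact):
    fact_list.discard(fact)

# Slot order does not matter: both calls build the same fact.
f1 = make_fact("Header", name="Steam", pressure=1200, temperature=900)
f2 = make_fact("Header", temperature=900, name="Steam", pressure=1200)
```

Asserting both `f1` and `f2` leaves a single fact in the list, which mirrors the behavior described above.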
Other Structures
In some cases, other technologies are combined with an expert system to
achieve a specific goal. For example, the relationship between process
inputs and a process output may in some cases be highly nonlinear. For
this case, it is most effective to use a neural network to determine this relationship and then to use the expert system. The neural network output is used as a fact input to the expert system (rather than using the process
inputs directly as facts). Such an implementation is illustrated in Figure
5-15.
[Figure 5-15: the output of a neural network modeling the process is used as a fact input to the expert system]
References
1. Deaton, B. and Blevins, T., “An Approach to Mill Wide Control,” ISA Conference, Houston, October 1986.
2. Flynn, J., “G-P Completes First Phase of Upgrade at 1,300-tpd Crossett Pulp & Paper Mill,” Paper Trade Journal, August 15, 1982, pp. 17-22.
3. Tzovla, V. and Zhang, Y., “Abnormal Condition Management Using Expert Systems,” ISA Conference, Houston, 2001.
4. Giarratano, J. and Riley, G., Expert Systems—Principles and Programming, PWS Publishing Company, 1998.
5. Buchanan, B.G. and Mitchell, T., “Model-directed Learning of Production Rules,” Pattern Directed Inference Systems, Academic Press, 1978, pp. 297-312.
6. Newell, Allen and Simon, Herbert A., Human Problem Solving, Prentice-Hall, 1972.
7. Forgy, Charles, “Rete: A Fast Algorithm for the Many Pattern/Many Object Pattern Match Problem,” Artificial Intelligence, 19, 1982, pp. 17-37.
8. Fink, Gavin A., “Rules of Thumb for Implementing Expert Systems in Engineering,” InTech, April 1987, pp. 33-37.
Practice
Overview
Proper controller tuning is the largest, quickest, and least expensive
improvement one can make in the basic control system to decrease process
variability. The detrimental effects of disturbances, interactions, and con-
trol valve dead band are minimized by an appropriate selection of tuning
settings [6.30]. However, the expertise, patience, and time required to compute these settings are significant, and most of the methods don’t take into
account real-world problems like process and measurement noise, peri-
odic upsets, load shifts, and valve stick-slip. In addition, many processes
are nonlinear: the tuning settings change with the operating point, and
with time. Also, the tuning methods and coefficients change with the ratio
of time delay to time lag and the performance objective [6.3]. For these and
other reasons, the accuracy of tuning settings usually doesn’t exceed 20%
for industrial processes and a reproducibility of better than 10% should
not be expected for settings computed by manual or automated methods.
Auto tuners can conduct an automated test of the process, compute the
tuning settings that are best for a particular process and performance
objective, provide estimates of performance and robustness, and show
predicted response plots for load upsets and set point changes. Since the
time to start the test is selected by the user, it is called an “on demand”
auto tuner. Auto tuners also eliminate human error and impatience and
can implement a more exact and complex methodology.
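The final computation step of a relay ("on demand") auto tuner can be sketched as follows, assuming the standard describing-function estimate of ultimate gain, Ku ≈ 4d/(πa), and Ziegler-Nichols PI coefficients; the numbers in the example are illustrative:

```python
import math

# Sketch of a relay auto-tune computation: from the relay amplitude d and
# the measured process oscillation amplitude a and period Pu, estimate the
# ultimate gain and compute Ziegler-Nichols PI settings.

def relay_tuning(relay_amp, osc_amp, osc_period):
    ku = 4.0 * relay_amp / (math.pi * osc_amp)   # ultimate gain estimate
    kc = 0.45 * ku                               # Z-N PI controller gain
    ti = osc_period / 1.2                        # Z-N PI integral time
    return kc, ti

# e.g., a 2% relay step producing a 0.5% oscillation with a 30 s period:
kc, ti = relay_tuning(2.0, 0.5, 30.0)
```

The relay test itself, the pretest, and the robustness and performance estimates of a commercial auto tuner are far richer than this; the sketch only shows how the oscillation data reduces to settings.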
As mentioned, the best tuning settings are a moving target. Often over-
looked are some fundamental process relationships that are used to pre-
dict changes in tuning settings before they affect control. The task of
predicting the actual value of a tuning setting is usually too difficult, but
often a simple first principle equation predicts how the setting will change
with a process condition. The computation is for the change from a refer-
ence point and is done in conjunction with an auto tuner. If the change in
tuning settings with an operating point or time is known, the controller
settings are scheduled. For example, if a plot of the controlled variable ver-
There are many possible opportunities for the scheduling of tuning set-
tings. If there is a known change in the process gain caused by changes in
heat-transfer surface area, reaction rates, and catalyst activity, the control-
ler gain is scheduled as a function of batch or run time. Since the biggest
contribution to a time delay for a continuous process is often a transporta-
tion delay that is inversely proportional to throughput, a performance
improvement is realized by scheduling integral time and derivative time
settings that are inversely proportional to feed or speed. In addition, surge
tank level control is made smoother by scheduling the gain to be propor-
tional to the error to achieve error-squared control. The power of modern
control systems combined with an auto tuner makes this custom schedul-
ing of tuning settings relatively easy and a potential improvement ripe for
the picking.
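These scheduling ideas can be sketched as simple computations; the reference values, limits, and scaling are illustrative:

```python
# Sketch of tuning-setting scheduling: integral and derivative times scaled
# inversely with throughput (transportation delay varies as 1/feed), and a
# surge level gain proportional to the error ("error-squared" control).

def schedule_times(ti_ref, td_ref, feed_ref, feed):
    """Scale reference integral/derivative times inversely with feed rate."""
    factor = feed_ref / max(feed, 1e-6)   # guard against zero feed
    return ti_ref * factor, td_ref * factor

def surge_level_gain(kc_max, error, error_max):
    """Gain proportional to |error| gives error-squared action on the output,
    letting the level float near set point and act firmly near the limits."""
    return kc_max * min(abs(error) / error_max, 1.0)
```

Halving the feed doubles both times, and a small level error produces a proportionally small controller gain, which is the smoothing effect described above.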
Adaptive controllers can tune themselves. The tuning settings are cor-
rected whenever there is an error sufficient to provide enough knowledge
to find better controller settings. Many of the adaptive algorithms presume an oscillation and are confused by noise and quiet periods of operation.
Opportunity Assessment
The greatest improvement in plant performance from the application of
this technology occurs where oscillations are evident. Most oscillation is
eliminated, or at least attenuated, by an adjustment of the controller tun-
ing settings. The questions to ask are:
If the oscillation period gets larger when you increase the integral time, the oscillation is probably caused by excessive reset action or valve stick-slip
from excessive friction. While the best solution is to repair or replace con-
trol valves with stick-slip, tuning can decrease the severity of the oscilla-
tion and provide a temporary fix.
Finally, if the standard deviation is high and the loop performance is low
compared to the control capability, then either the robustness or the per-
formance of the controller needs to be improved. If the standard deviation
is associated with a large error from process upsets, then the performance
is too low. If the standard deviation is high due to an oscillation, then the
robustness is insufficient.
Even if the loops are not oscillating, there can be an excessive control error
and lost or downgraded production. This is most evident in a response to
set point changes and load upsets that is sluggish due to overly conserva-
tive or missing tuning settings. The following questions should be asked
in this situation.
If the answer is yes to any of these questions, the tuning of these loops
should be checked.
The ability of loops with large mixed volumes to minimize the overshoot
from load upsets and set point changes has been severely compromised if
the derivative or rate time setting is zero.
Supervisory and sequence controls need more gain than reset action to
provide a more immediate response. However, this is not how loops are
usually tuned. To minimize the time required for batch operations or startup, the controller should actually drive its output beyond its resting value.
Level loops on vessels that feed other unit operations are a major cause of
variability. The level controllers on feed tanks and surge volumes must be
tuned to allow the level to float between the high and low alarm limits by
decreasing the controller gain or scheduling the gain proportional to the
control error.
Examples
The Lambda tuning method with a large Lambda factor provides the desired
decrease in gain action and increase in reset action and is the preferred
choice for liquid flow loops.
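A sketch of the Lambda (internal model control) PI settings for a self-regulating loop, assuming the common formulation in which the integral time is set to the process time constant:

```python
# Sketch of Lambda PI tuning: the closed-loop time constant is a multiple
# (the Lambda factor) of the open-loop time constant, which lowers the
# controller gain and emphasizes reset action, as described above.

def lambda_pi(kp, tau, dead_time, lambda_factor=3.0):
    """kp: process gain, tau: time constant, dead_time: process time delay."""
    lam = lambda_factor * tau            # desired closed-loop time constant
    kc = tau / (kp * (lam + dead_time))  # controller gain
    ti = tau                             # integral time = process time constant
    return kc, ti
```

A larger Lambda factor enlarges the denominator of the gain calculation, so matched flow loops end up with the same, deliberately slow closed-loop response, which is what keeps ratioed feeds moving in unison.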
The step size must be large enough to get out of the noise band of the mea-
surement and the dead band of the control valve. Step sizes of 2% to 5%
are normal. The hysteresis setting of a relay auto tuner is set large enough
to compensate for measurement noise.
An important feature shown in Figure 6-1a is the ratio of one flow-loop set
point to the other flow-loop set point. This is a common requirement for
reactor feeds to enforce a stoichiometry that is critical for a reaction. It is
important that both feeds change in unison. Ratioing to the set point
rather than the process variable of the other loop helps maintain proper
timing if both control loops have the same closed loop response. Other-
wise, the follower is always behind the leader. Lambda tuning helps
insure that the differences in the response are negligible by enforcing a
closed loop time constant. This is particularly important if the feeds are
frequently changed by a sequence, recipe, or a control loop for startup,
batch, changeover, or production rate control.
clean new electrode. However, a coating of just 5 millimeters on the sur-
face of the glass electrode can cause the sensor time constant to increase
from 0.1 minute to 2 minutes. When this happens the controller gain must
be increased but the reset action must be greatly decreased since the time
delay–to–time constant ratio has decreased by an order of magnitude. In
addition, the oscillations at the mixer outlet from more gain action do not
appear in the reactor discharge because of the attenuation by the highly
mixed reactor volume. A responsive Ziegler-Nichols tuning is best since
the loop can switch to being dominated by a large time constant and the
high frequency oscillations at the static mixer do not increase the variabil-
ity of the final product.
There is also a valve controller XC1-1 on the static mixer whose set point is
the optimum position of the small control valve for throttling and whose
PV is the pH controller output or implied position of the small control
valve. The output of XC1-1 manipulates the large reagent valve. This valve
optimization control system is better than split ranging the small and big
valve because the pH controller can manipulate smaller and more precise
increments of reagent by always throttling the small reagent control valve
even for high reagent demands. It also eliminates the stiction and discontinuity associated with a split range point. However, the response of XC1-1
must be slow and gradual, so that it doesn’t interact with AC1-1. Integral
only action is normally used. This can be approached by the use of
Lambda tuning with a very large closed loop time constant (small control-
ler gain). Also, AC1-1 must be in automatic with aggressive settings when
XC1-1 is tuned. There are many other examples where valve controllers
are used for optimization where a limit in valve capacity is important.
Some of the more common applications involve a valve controller whose
PV is a coolant, compressor, or pressure control valve position, whose set
point is the maximum position of this valve with sufficient sensitivity
(slope) on the installed characteristic, and whose output manipulates a
feed flow controller set point.
A large step size of 10% or more, and the use of the integrator specification
that is available in some auto tuners for the pretest, are suggested for reac-
tors with small vent valves since the time to reach steady state is long due
to the large process time constant. The specification of an integrating
response insures the pretest doesn’t wait for the process to line out (reach
a steady state).
[Figure 6-1: reactor control system — makeup and recycle flow loops (FC1-1/FT1-1, FC1-3/FT1-3) from the recycle tank, feed tank level LC1-1, ratioed reactor feed flows FC1-2 and FC1-4 (with RC1-4), pressure PC1-1, static mixer pH loop AC1-1 with valve position controller XC1-1 throttling the small and big reagent valves, reactor temperature cascade TC1-1/TC1-2, Feed A, Feed B, coolant, reactor level LC1-2, and discharge]
sor comes directly into the control system through a thermocouple or RTD
card, the wide range of these cards will cause A/D chatter that will
severely restrict the use of gain and rate action. A temporary fix until a
transmitter is installed is to add a velocity limit to the process variable to
restrict its rate of change to one that is physically realizable to partially
screen out the A/D noise. The controller scan time should be 5 seconds or
more to improve the signal-to-noise ratio.
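The temporary velocity-limit fix described above can be sketched as follows; the limit value in the example is illustrative:

```python
# Sketch of a velocity (rate-of-change) limit on the PV to screen A/D
# chatter from a wide-range thermocouple/RTD card. The limit should be
# the fastest change that is physically realizable in the process.

def velocity_limit(prev_pv, raw_pv, max_change_per_scan):
    delta = raw_pv - prev_pv
    if delta > max_change_per_scan:
        return prev_pv + max_change_per_scan   # clamp rising chatter
    if delta < -max_change_per_scan:
        return prev_pv - max_change_per_scan   # clamp falling chatter
    return raw_pv                              # physically plausible change
```

Changes within the limit pass through untouched, so the fix only screens the spikes that no real temperature could produce between scans.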
Large step sizes of 5 to 10% are used for the endothermic portion of the
batch to make the pretest faster. During the exothermic reaction, smaller
step sizes are generally used for a safer test. Any change in settings must
be extensively tested for set point changes and closely monitored for sev-
eral batches before left overnight or over a weekend. The controller scan
time should be 5 seconds or more to improve the signal-to-noise ratio.
The integrator option must be chosen and the step size should be at least
10% to make the pretest faster. The controller scan time should be at least 5
seconds to improve the signal-to-noise ratio.
[Figure 6-2. Batch Reactor and Surge Tank: reactor temperature TC2-1 cascaded (RSP) to TC2-2 manipulating steam and coolant through tempered water, and surge tank level LC2-1 cascaded to product flow FC2-2 (FT2-1, FT2-2)]
temperature will probably look like a ramp and the integrator option
should be chosen. PID Ziegler-Nichols tuning is preferred, but with the
controller gain cut in half to provide more robustness. Large step sizes of 5
to 10% are used to help make the pretest faster. The scan time should be at
least 10 seconds to improve the signal-to-noise ratio unless the RTD transmitter span is less than 50°C.
since the temperature controller only affects the column when the reflux
changes as a result of the level controller’s response to a change in distil-
late flow. Also, the changes in internal reflux caused by the changes in
temperature of the column walls from rainstorms are partially compen-
sated for by the changes in reflux flow made by the level controller. For
these reasons, tight level control is important for column operation and
high gain action is recommended. PI Ziegler-Nichols tuning should be
used. However, the controller gain that is permitted is way above the prac-
tical limit. The user should select the maximum gain that doesn’t wear out
The integrator option must be chosen and the step size should be at least
10% to make the pretest faster. The controller scan time should be at least 5
seconds and a process variable filter should be added to improve the sig-
nal-to-noise ratio.
[Figure: distillation column — vent pressure PC3-1, distillate receiver level LC3-1 cascaded to distillate (FC3-1) and reflux (FC3-2) flows, column feed FC3-3, temperature TC3-2 cascaded to steam flow FC3-4, and base level LC3-2 cascaded to bottoms flow FC3-5]
controller scan time is essentially the total loop dead time plus any lags
from pressure transmitter damping or controller PV filtering. These signals
tend to be very noisy. Installation practices to muffle the sensing lines
from the impact of a varying velocity head are helpful but often the PV fil-
ter time must be judiciously increased. Some furnace pressure loops can-
not stay in manual long enough for a pretest. The process gain is so high
that the pressure will ramp off toward a trip setting before the pretest is
complete. The open loop response looks like an integrator in the allowable
operating region. For these “pseudo” integrators, the integrator option
must be chosen. Since the peak error is important to prevent furnace trips
and the time delay–to–time constant ratio is usually extremely small, PI
Ziegler-Nichols tuning is the preferred choice, but with the controller gain
cut in half to provide more robustness and less amplification of noise.
method of tuning should be used. Since the controller gain is way below
its maximum, the integral time must be checked to make sure it will not
cause sustained oscillations.
The feedforward signal, where the feed flow is set by steam flow, should
be turned off during the tuning tests. The controller scan time should be 1
to 2 seconds and a process-variable filter should be added to improve the
signal-to-noise ratio. The integrator option should be selected for the pre-
test and the step size should be 5% to 10%. The hysteresis setting of a relay
auto tuner may need to be relatively large to compensate for noise.
FY LC
4-1 4-1
RSP
FC LT FT
4-1 4-1 4-2 Steam
FT Boiler Drum
4-1
BFW
PT PC
4-1 4-1
FC Furnace FC Stack
4-4 4-3
VSD
FT FT
4-4 4-3 VSD
Fuel
FD Fan
ID Fan
Application
General Procedure
1. Get the controller to its normal operating point by a combination of
changes in its set point and controller output. The controller
should also be tuned at expected loads (flows). The tuning of
control loops during water batching is a waste of time. Nearly all
control loops have some operating-point or load nonlinearity.
a. For operation on the knees and flat portions of an installed
valve characteristic or on a plot of a process variable such as
column temperature or neutralizer pH versus the manipulated
flow-to-feed ratio, add signal characterization per Chapter 2 to
reduce the severity of the gain nonlinearity.
b. For split range valves, adjust the split range point or add signal
characterization per Chapter 2 to minimize the changes in gain.
2. For cascade control systems, tune the secondary (slave) controller
first. After the secondary is tuned it should be put in the remote set
point (RSP) or CAS mode and then the primary (master) controller
tuned.
chatter.
8. Decide if the control loop behaves like an integrator or a runaway.
If it does, choose the integrator option on the auto tuner. Loops
with very large time constants or process gains will look like an
integrator in the control region of interest. If the integrator option
is not selected, a test to find the steady-state response may take too
long, time out, or activate an interlock.
9. Pick a step size at least 5 times larger than the resolution of the
final element that will cause a PV excursion at least 5 times larger
than the noise band. Most importantly, make sure the step size is
not so large that it will drive the process into an undesirable
operating region, hit an output limit, exceed a set point limit,
activate an alarm, trip an interlock, cross the signal selection point
of an override controller, or cross back and forth across a split
range point. See the Rules Of Thumb for more details on the step
size selection.
10. Pick a tuning method that is consistent with the process and
performance objective. Some auto tuners will pick the appropriate
method after you specify the type of loop (temperature, level, flow,
pressure). If you need to minimize peak errors for temperature,
composition, and pressure control of vessels, the Ziegler-Nichols
settings with about half the gain are generally the best
option. For pipelines, plug flow, and web (sheet) processes,
Lambda tuning is the right approach. See the Rules Of Thumb for
more details on method selection.
11. During the test initiated by the auto tuner make sure the
magnitude and rate of change of the process variable are not too
large or too small.
12. Check the set point and load response for the new settings first
with any simulation options offered by the auto tuner and then by
making actual set point and controller output changes. Observe
the performance of the controller for several days under different
operating conditions, noise, and interactions before leaving it on
overnight or over the weekend.
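Step 9's sizing checks can be sketched in Python; the function and the example numbers below are illustrative, not from the text:

```python
def choose_step_size(resolution_pct, noise_band_pct, process_gain, max_safe_step_pct):
    """Pick a test step per step 9: at least 5x the final element resolution
    and large enough that the PV excursion is at least 5x the noise band."""
    # The PV excursion for a self-regulating loop is roughly
    # process_gain * step, so the step needed to clear the noise band
    # is 5 * noise / |gain|.
    step = max(5.0 * resolution_pct,
               5.0 * noise_band_pct / abs(process_gain))
    if step > max_safe_step_pct:
        # Larger steps risk alarms, interlocks, or split range crossings.
        raise ValueError("required step exceeds the safe limit; "
                         "reduce noise or repair the final element first")
    return step

# Example: 0.5% valve resolution, 0.2% noise band, process gain of 0.5 %/%,
# and a 10% safe limit agreed with the operator (see Rule 6.2a).
print(choose_step_size(0.5, 0.2, 0.5, 10.0))   # → 2.5
```

The safe-limit check mirrors Rule 6.2: the largest workable step wins, but never one that drives the process into an undesirable operating region.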
Application Detail
The following details, developed for the relay-based auto tuner, are also
generally true for other auto tuners. To achieve the best results, the user
should adhere to the guidelines specified in this section.
• Periodic upsets
• Stick-slip
• The process input must be able to make the step change required
for tuning.
• The step change should not cause a relief valve to open or any
interlocks to be triggered, either of which could affect the tuning
results.
• When tuning, the controller set point, process variable, and
controller output should not be near their limits.
• After changing tuning values, observe the loop for a period of time
to see its reaction to noise, disturbances, and small SP step changes.
• If the loop performance is not satisfactory, change the selection of
desired loop response (Normal, Slow, or Fast) or the use of the
Expert option for advanced tuning rules.
• If the loop performance is still not satisfactory, try an alternative
tuning method (without re-testing the loop).
• The PI design is the most common choice for flow, liquid level, and
gas pressure control.
• PID control, along with feedforward and cascade strategies, is often
used for temperature control, pH control, and composition control.
(Composition loops with sample and hold signals might require
some limits on derivative action.)
• If a loop was tuned by trial and error and it works well, it is still
worthwhile to consider using an auto tuner to verify loop settings.
If the values calculated by the auto tuner are drastically different
from the trial-and-error values, the new values should be considered
closely before changing the controller parameters.
• The PID designs are limited for some applications. For example, if
the time delay of a loop exceeds the time constant (control loop is
dead time dominant), the PID performance deteriorates as the
degree of dead time dominance increases (ratio of time delay to
time constant increases). For such applications, a dead time
compensator or Model Predictive Control provides better results. If
the dead time changes more than 10% for the compensator or more
than the scan time for the model predictive controller, the dead time
used in the model must be updated.
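The dead time compensator mentioned above can be illustrated with a minimal Smith predictor sketch. The process model, PI tuning values, and the perfect-model assumption below are all illustrative, not from the text:

```python
from collections import deque

def smith_pi_step_response(sp=1.0, dt=0.1, t_end=200.0):
    """PI control of a dead time dominant process with a Smith predictor.
    The PI is tuned on the dead-time-free model, and the predictor feeds
    back y + ym - ymd so the controller never 'sees' the dead time."""
    Kp, tau, theta = 1.0, 5.0, 8.0      # "true" process (illustrative)
    Km, taum, thetam = Kp, tau, theta   # perfect model assumed here
    Kc, Ti = 1.0 / Km, taum             # IMC PI on the model, lambda = taum
    ubuf = deque([0.0] * int(round(theta / dt)))    # process dead time
    mbuf = deque([0.0] * int(round(thetam / dt)))   # model dead time
    y = ym = ymd = integ = 0.0
    for _ in range(int(t_end / dt)):
        fb = y + ym - ymd               # predictor feedback signal
        e = sp - fb
        integ += e * dt
        u = Kc * (e + integ / Ti)
        ubuf.append(u); u_del = ubuf.popleft()
        mbuf.append(u); um_del = mbuf.popleft()
        y += dt / tau * (Kp * u_del - y)        # true process (Euler step)
        ym += dt / taum * (Km * u - ym)         # model without dead time
        ymd += dt / taum * (Km * um_del - ymd)  # model with dead time
    return y

print(smith_pi_step_response())
```

As the text warns, if the model dead time drifts more than about 10% from the true value, the compensation degrades and the dead time used in the model must be updated.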
Rules of Thumb
Rule 6.1a. — The control valve resolution limit without excessive stick-slip is
about ½ the valve dead band and varies from 0.1% to 5% for valves with properly
calibrated and tuned positioners. It is as high as 25% for valves without posi-
tioners or with excessive packing or seal friction. As you approach the
dead band, the valve dead time increases dramatically, especially for
Rule 6.1c. — The noise band is the largest for flow, pressure, composition, and
level measurements and can vary from 0.05% to 5% of scale span. For tempera-
ture measurements, the noise is principally from A/D dither (0.05% of
input card span). The signal-to-noise ratio is improved by selecting a scan
time large enough that the true change in the process variable within the
scan time is larger than the measurement resolution. The noise band is
reduced by a proper adjustment of the process variable filter. Some tuners
also provide a noise band or hysteresis setting to discern true changes in
the process variable.
Rule 6.2. — The step size should be the largest possible that does not cause a pro-
cess or operational problem or cause a controller to hit its set point or output limit.
An increase in the step size will reduce the test time and minimize the vul-
nerability to noise, load upsets, and resolution limits. This is particularly
important for slow loops. However, the step must not be so large as to acti-
vate an alarm or exceed a limit or cause a failure.
Rule 6.2a. — Ask the operator and the process engineer what is the largest possi-
ble safe step size in each direction. You need to avoid getting within the accu-
racy limit of constraint, interlock, and relief device settings. Rupture discs
that see excursions within 10% of their setting may experience premature
failure.
Rule 6.2b. — Make sure the step size does not cause a secondary (slave) control-
ler to hit its set point or its output limit or the primary (master) controller output
to hit its output limit for more than the dead time. Some tuners will stop the
tuning procedure as soon as any of a chain of controllers in a cascade con-
trol system hit an output limit. This is particularly a problem with reactor
temperature–to–coolant temperature cascade control systems.
Rule 6.2c. — For split ranged valves, pick a step size and a starting point and
choose a set point to stay away from the split range point to insure each valve is
individually throttled. Avoid crossing back and forth across the split range
point because this is the point of greatest valve nonlinearity and stick-slip.
The lowest gain and derivative time setting and the largest integral time
setting should be used from the separate tests unless the tuning settings
are going to be scheduled based on the valve throttled.
Rule 6.2d. — For override controllers, pick a step size that does not cause the
output signal selector to choose another controller output. Each override con-
troller must be tuned individually with freedom to move the manipulated
variable. There must be no crossing of the signal selection point during the
test. It is necessary to put the other override controllers in manual with an
output far away from the signal selection point so that they are out of the
picture.
Rule 6.2e. — The step size is proportional to the controller gain. For loops with
high controller gains, such as level and temperature, the step size should
be large enough (10% or more) to reduce the time of the test and the effect
of noise. For high process gains that are often encountered in pH and fur-
nace, kiln, or dryer pressure loops, the step size needs to be quite small
(less than 1%) to avoid an excessive upset to the process. Even though
flow loops have a low controller gain, step sizes of 2% or more are needed
to sufficiently get out of the noise band. If a positioner is not used or the
control valve has excessive stiction or backlash, the step size might need to
be an order of magnitude larger.
Rule 6.2f. — For severe operating point nonlinearities, the step must not drive
the process to either an excessively flat or steep slope on a plot of the controlled
variable versus the manipulated variable. A movement from a flat to a steep
portion of the curve will look like a runaway. A transition to the extremely
flat tails of the plot will cause an exceptionally slow response and test.
Rule 6.2g. — For high process gain integrators and runaways, the step size must
not cause an excessive rate of change of the controlled variable. The step size for
furnace pressure and exothermic reactor temperature controllers must be
carefully chosen to prevent the activation of interlocks or relief devices.
Rule 6.3. — For fast interacting loops and dead time–dominant loops, use the
Lambda tuning method settings. Flow, liquid pressure, liquid pipeline, plug
flow reactor, and solid product control loops tend to interact and have a
dead time larger than the largest time constant (lag). Sheet, web, conveyor,
and fiber spin line are examples of processes with solid products that are
dead time–dominant. Variability that enters almost anywhere in the pro-
cess is trapped in a solid product and cannot be blended out as it can with
liquid products. The Lambda tuning method provides a smooth response
that minimizes variability and the disruption to other loops. It also results
in reset action that increases (reset time that decreases) with the degree of
dead time dominance. For a pure dead time–dominant system the reset
action is as much as 8 times greater (reset time 8 times smaller) for Lambda
tuning compared to Ziegler-Nichols tuning.
Rule 6.3a. — If rapid changes in the controller output disturb other loops, a
higher gain margin (larger Lambda and lower controller gain) should be used than
what is obtained from default Lambda Tuning settings. If changes in the con-
trolled rather than the manipulated variable are important, then a smaller
Lambda (larger controller gain) should be used to provide tighter control.
Rule 6.4. — For lag-dominant loops, use the Ziegler-Nichols tuning method set-
tings but reduce the gain for more robustness. Crystallizer, reactor, evaporator,
and neutralizer temperature, concentration, and pressure control loops
have back-mixed volumes with a large lag. Columns have a series of
mixed interacting volumes that create a large lag.
Rule 6.4a. — For self-regulating loops where the process variable will line out to
steady state in manual, the gain margin should be increased (controller gain
decreased) from the classic Ziegler-Nichols tuning for quarter amplitude to pro-
vide more robustness. While the quarter amplitude response minimizes the
peak error for a load upset, an increase of 25% in the dead time or process
gain can cause instability. The controller gain should be reduced at least by
50% to insure sufficient robustness.
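Rule 6.4a can be sketched with the classic Ziegler-Nichols ultimate-sensitivity settings and the recommended 50% gain cut; the function names and example values are illustrative:

```python
def zn_pi_half_gain(Ku, Tu):
    # Classic Ziegler-Nichols PI (Kc = 0.45*Ku, Ti = Tu/1.2),
    # with the controller gain cut in half per Rule 6.4a
    return 0.5 * 0.45 * Ku, Tu / 1.2

def zn_pid_half_gain(Ku, Tu):
    # Classic Ziegler-Nichols PID (Kc = 0.6*Ku, Ti = Tu/2, Td = Tu/8),
    # with the controller gain cut in half per Rule 6.4a
    return 0.5 * 0.6 * Ku, Tu / 2.0, Tu / 8.0

# Example: a relay test found Ku = 4.0 and Tu = 10.0 minutes
print(zn_pid_half_gain(4.0, 10.0))   # → (1.2, 5.0, 1.25)
```

The integral and derivative times are left at their classic values; only the gain is halved, which roughly doubles the gain margin.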
Rule 6.4b. — For fast integrating and runaway loops where the process variable
can rapidly ramp or accelerate in manual, the minimum gain margin (maximum
controller gain) for sustained robustness should be used. The tuning must be
aggressive. The user must pay particular attention to the window of
allowable controller gains for integrators and runaways since too low a
controller gain is as much a problem as too high a controller gain. Vessel
level, boiler drum blow down, and batch vessel temperature and concen-
tration control loops have an integrating response, but the integrating pro-
cess gain (ramp rate) of these loops is usually very slow, except where
there are volumes with extremely small diameters. Of greater general con-
cern are exothermic reactors, fermentors in the growth phase, and com-
pressors in surge, because they can develop an accelerating response for
certain combinations of conditions.
Rule 6.4d. — Concentration and temperature control loops with a recycle effect
should be treated as integrators. If the recycle effect is delayed and slow, then
it should be ignored for tuning. Closed loop tuning methods and the relay-
oscillation method of auto tuning will identify tuning settings to match the
fast dynamics. Open loop tuning methods and pretests may try to identify
erroneous tuning settings for the slow ramp from the recycle effect that
should be ignored.
Rule 6.5. — For a controller with an objective of optimization rather than distur-
bance rejection or servo response, use Lambda tuning with a large closed loop time
constant to insure a gradual and smooth approach to the optimum. The most
common example of this type of controller is a valve controller that will
increase a feed rate to ride the maximum capacity constraint of a coolant,
compressor, or pressure control valve used for disturbance rejection. The
constraint is the largest position with sufficient sensitivity on the installed
valve characteristic and enough capacity for fast disturbances.
Rule 6.6. — For control loops where there are time constants greater than 0.2
minutes in the process or measurement besides the largest time constant, consider
the use of a derivative time equal to the contribution of these additional lags. The
temperature control loop with its many thermal lags in the process and
thermowell is a common example of a loop with multiple appreciable time
constants.
Rule 6.7. — For control loops where there is a true or pseudo runaway response,
use derivative action and minimize reset action (maximize the integral time).
Rate action senses the change in slope and acceleration of the process vari-
able (PV) that is important for processes whose open loop response shows
an acceleration within the control band (see Chapter 2 for more details).
For polymerization reactors that are highly exothermic, proportional plus
derivative control is used and reset action is omitted since it has no sense
of direction [6.3]. For pH and column temperature control systems where
the PV can move onto a steep portion of the process curve, rate helps to
catch and prevent large excursions. For these processes, the controller may
have to always stay in auto, which eliminates the ability to do a pretest. If
the runaway is slow enough, the integrator option may enable a pretest
but it must be closely monitored for an accelerating PV.
Guided Tour
This tour illustrates the potential ease of use and convenience of an inte-
grated interface that is possible for an auto tuner embedded in an indus-
trial control system. The following areas are addressed:
The interface provided for tuning PID control blocks is accessed from any
station within the control system. By selecting a control loop and choosing
Tune, this interface is activated for the selected block. The controlled
parameter, manipulated parameter and set point of the PID are automati-
cally trended and the current tuning displayed. Tuning is initiated from
any mode of operation by selecting Test. In response, the actual mode of
the PID block changes to LO to indicate that tuning is active. While testing
is active, the process is under two-state control. The manipulated parame-
ter is changed from its initial value by the specified step size for a total of
three cycles, as shown in Figure 6-5.
Figure 6-5. Loop Tuning Interface
Based on the process response during testing, the process dynamics are
automatically determined. Process gain, lag and dead time values are dis-
played in the Test Process portion of the interface. Based on the process
gain and dynamics, a recommended setting is provided for the PID tun-
ing. If the user chooses Simulate, the process response using the recom-
mended tuning is examined before transferring this to the PID block.
When this selection is made, a simulation of the closed loop response is
shown for set point and load disturbances as illustrated in Figure 6-6.
The robustness of the control is also shown in terms of phase and gain
margin. By selecting a different point on the robustness map, the tuning
and response are automatically displayed for the new robustness selec-
tion. Thus, the tuning is easily selected based on the desired response and
the degree of robustness that is needed in the process. The recommended
tuning is transferred to the PID block by selecting Update.
If the user wants to apply his knowledge of tuning, then by choosing the
Expert option he may choose the tuning rule that is used to determine the
recommended setting based on the identified process dynamics, as shown
in Figure 6-7.
When the Expert option is chosen, other selections are provided for special
conditions such as extremely high or low process gain. These expert fea-
tures are not required to tune most processes. Normally, tuning is a simple
two-step process—select Test, select Update—and is quickly and automat-
ically accomplished without additional user input.
Theory
1. Process testing
2. Evaluation of process characteristics from test data
exact process model.
Process Testing
A two-state relay replaces the PID/PI controller for the period of process
testing as shown in Figure 6-8. During testing the relay keeps a loop under
two-state control. The two-state controlled loop experiences oscillations
with limited amplitude. The frequency of loop oscillations is the critical
frequency, thus the period of these oscillations is the ultimate period. The
amplitude of the oscillations and the relay step size are used for the calcu-
lation of the ultimate gain. The ultimate gain is defined as the proportional
controller gain that brings the system to its stability limit. After these
parameters are defined they are used to calculate controller parameters.
[Figure 6-8. Relay auto tuner structure — a relay and a process dynamics
calculation block are switched in for the PID (or fuzzy PID) control block
between the set point/measurement comparison and the process]
Figure 6-9 illustrates a typical time plot of the relay output and the process
variable (PV) during tuning. Note that after initialization the relay is
triggered at the point when the PV passes through the set point. The relay
amplitude (d) is typically 3 to 10 % of the controller output range. The PV
change (a) is largest during initialization (the first half-period sine wave).
Typically, the PV change ranges from 1 to 3 percent of the PV range.
Relay Output
Process Output - PV
a SP
t1 t2
....
Initialization Tuning
Tu
Time
Oscillations must continue for at least one cycle after initialization. If more
cycles are used for tuning, the average amplitude and period of the oscilla-
tions is used to determine the ultimate gain and ultimate period. Usually
two cycles are adequate for defining the amplitude and period of the
oscillations.
N(a, \varepsilon, d) = \frac{4d}{\pi\sqrt{a^2 - \varepsilon^2} + i\pi\varepsilon}
where:
N ( a, ε, d ) = relay-describing function at the critical frequency
ε = relay hysteresis
d = relay amplitude
a = amplitude of the oscillation of the process variable (PV).
i = imaginary unit
Figure 6-6 explains the relay parameters: relay output amplitude d and
hysteresis ε.
G(w_c)\,N(a, \varepsilon, d) = -1 \quad \text{or} \quad G(w_c) = \frac{-1}{N(a, \varepsilon, d)} \qquad (6\text{-}1)
where:
G( wc ) = process gain at the critical frequency
The Nyquist plot in Figure 6-10 shows the process transfer function G ( w)
and negative inverse of relay describing function with ε = 0 and with ε > 0.
The intersection of these two curves defines the loop oscillation
parameters.
K_u = \frac{4d}{\pi a} \qquad (6\text{-}2)
However, with hysteresis ε>0, the gain and frequency determined by the
oscillation are shifted from the original ultimate gain and ultimate period.
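The ultimate gain calculation of Equation 6-2, and the describing function with hysteresis, can be sketched as follows; the example numbers are illustrative:

```python
import math

def ultimate_gain(d, a):
    # Eq. 6-2: Ku = 4d/(pi*a), valid for negligible hysteresis
    return 4.0 * d / (math.pi * a)

def describing_function(a, eps, d):
    # N(a, eps, d) = 4d / (pi*sqrt(a^2 - eps^2) + i*pi*eps)
    return 4.0 * d / (math.pi * math.sqrt(a ** 2 - eps ** 2) + 1j * math.pi * eps)

# Example: 5% relay steps produce 2% PV oscillations
Ku = ultimate_gain(5.0, 2.0)   # about 3.18
# With eps = 0 the describing function reduces to the real number Ku
assert abs(describing_function(2.0, 0.0, 5.0) - Ku) < 1e-12
```

Note that hysteresis changes the phase of N but not its magnitude, which is why the identified critical point shifts off the negative real axis as described below.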
[Figure 6-10. Nyquist plot of the process transfer function G(ω) and the
negative inverse describing function −1/N(a) — with ε = 0 the critical point
lies on the negative real axis; with ε > 0 it is shifted off the axis by πε/(4d)]
As illustrated in Figure 6-10, the hysteresis used for noise protection intro-
duces an error in defining the critical point on the Nyquist curve. To pre-
vent the introduction of excessive errors, complementary noise protection
techniques should be applied. A PV filter time constant equal to a properly
selected loop scan period does not degrade loop performance and is bene-
ficial for process test results. An algorithmic noise protection disables
relay switching at the start of each half period for about half of the initially
defined dead time (see Figure 6-9). The algorithmic noise protection has
proven to be an effective method of noise protection without adversely
affecting identification of the ultimate gain and ultimate period.
K. Astrom proposed more flexible rules with a phase and gain margin
specification, as follows:
T_i = \frac{T_u}{4\pi\alpha}\left(\tan\varphi + \sqrt{\tan^2\varphi + 4\alpha}\right)

T_d = \alpha T_i

K = K_u G_m \cos\varphi \qquad (6\text{-}3)
where:
φ = desired phase margin
α = design selection of the ratio Td : Ti
Gm = desired inverse of the gain margin (default value 0.5)
K, Ti, Td = controller gain, integral time, and derivative time
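Equation 6-3 can be sketched as follows; the example numbers are illustrative:

```python
import math

def astrom_pid(Ku, Tu, phase_margin_deg=45.0, alpha=0.25, Gm=0.5):
    # Eq. 6-3: place the ultimate point at the desired phase margin.
    # Ti solves alpha*wu^2*Ti^2 - tan(phi)*wu*Ti - 1 = 0 with wu = 2*pi/Tu.
    phi = math.radians(phase_margin_deg)
    Ti = Tu / (4.0 * math.pi * alpha) * (
        math.tan(phi) + math.sqrt(math.tan(phi) ** 2 + 4.0 * alpha))
    Td = alpha * Ti
    K = Ku * Gm * math.cos(phi)
    return K, Ti, Td

# Example: relay test gives Ku = 2.0, Tu = 10.0 s
K, Ti, Td = astrom_pid(2.0, 10.0)
```

Raising the phase margin or lowering Gm relaxes the tuning; the defaults here (45 degrees, α = 0.25, Gm = 0.5) are illustrative choices, not values from the text.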
Equation 6-3 gives better tuning results than the original Ziegler-Nichols
rules for processes with small dead time–to–time constant ratios. If during
the process test dead time τ d is defined, better results are achieved over a
wide range of process dynamics by using non-linear tuning rule
estimators developed by the authors of this chapter. Non-linear estimators
are intended to correct the following major deficiencies of Ziegler-Nichols
tuning:
• Too short integral time for processes with small dead time–to–time
constant ratios
• Excessive integral time for processes with large dead time–to–time
constant ratios
• Excessive controller gain for processes with small dead time–to–
time constant ratios
Sigmoid functions are used for creating f1(Tu, τd) and f2(Tu, τd). The
sigmoid function is also used for modeling neural networks (see Chapter 8).
The following formulas were developed and tested to satisfy the above
requirements:
f_1(T_u, \tau_d) = a_1 + \frac{b_1}{1 + \exp\left[-\left(\frac{T_u}{\tau_d} - c_1\right)\right]} \qquad (6\text{-}5)

f_2(T_u, \tau_d) = a_2 + \frac{b_2 - f_1(T_u, \tau_d)}{c_2} \qquad (6\text{-}6)
[Plot: the Ti coefficient (upper curve) and the K coefficient (lower curve)
versus Tu/τd over the range 0 to 8]
Figure 6-11. Non-linear Functions for Computing Controller Integral Time and
Gain
Equation 6-5 gives the value of the coefficient used for Ti computation,
which smoothly varies from a minimal value of a1 to a maximum value of
a1 + b1 (upper plot in Figure 6-11). Formula 6-6 provides the coefficient for
K computation in the range [a2 + (b2 − a1 − b1)/c2, a2 + (b2 − a1)/c2] (lower
plot in Figure 6-11). Let's take a closer look at the plot to better see the tuning
dependencies. When ratio Tu/τd is close to 2, the process is dead time-dom-
inant. An increased Tu/τd ratio signifies a decreased τd/τ ratio. The upper
plot shows that for the dead time–dominant process Ti ~0.35 Tu, while for a
process with a small dead time Ti ~ Tu. The controller gain for a dead time–
dominant process is K ~ 0.4 Ku, while for a process with a small dead time
it is K ~0.27 Ku (lower plot).
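A sketch of the non-linear estimators of Equations 6-5 and 6-6 follows. The coefficients below are not published in the text; they are illustrative values chosen so the limits match those quoted for Figure 6-11 (Ti ≈ 0.35 Tu and K ≈ 0.4 Ku for a dead time–dominant process, Ti ≈ Tu and K ≈ 0.27 Ku for a small dead time):

```python
import math

# Illustrative coefficients (assumed, not from the text)
A1, B1, C1 = 0.35, 0.65, 4.0
A2, B2, C2 = 0.20, 1.35, 5.0

def f1(Tu, tau_d):
    # Eq. 6-5: Ti coefficient, a sigmoid in Tu/tau_d rising from A1 to A1 + B1
    return A1 + B1 / (1.0 + math.exp(-(Tu / tau_d - C1)))

def f2(Tu, tau_d):
    # Eq. 6-6: K coefficient, decreasing as f1 increases
    return A2 + (B2 - f1(Tu, tau_d)) / C2

def tune(Ku, Tu, tau_d):
    # Per the text, the coefficients scale the ultimate parameters:
    # Ti = f1 * Tu and K = f2 * Ku
    return f2(Tu, tau_d) * Ku, f1(Tu, tau_d) * Tu
```

For a dead time–dominant loop (Tu/τd near 2) this sketch gives Ti ≈ 0.43 Tu and K ≈ 0.38 Ku, moving toward Ti ≈ Tu and K ≈ 0.27 Ku as the dead time shrinks.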
[Plot: set point, controlled variable, and valve step responses, PV scale
roughly 48 to 60]
Figure 6-12. Tuning Plots and Loop Step Responses for the Process with τd/τ ~
0.2, loop scan = 0.1 sec.
The auto tuner gave the following controller settings: K = 1.25, Ti = 12.36
sec., and Td = 1.97 sec., comparable to the IMC calculations with τfilter= 13
sec: K ~0.87, Ti = 13 sec., and Td = 2.3 sec.
\tau = \frac{T_u}{2\pi}\tan\left(\pi - \frac{2\pi\tau_d}{T_u}\right), \qquad \pi - \frac{2\pi\tau_d}{T_u} < \frac{\pi}{2}

K_p = \frac{1}{K_u}\sqrt{1 + \frac{4\pi^2\tau^2}{T_u^2}} \qquad (6\text{-}7)
where:
τ = process time constant
Tu = ultimate period
τd = process apparent dead time
Kp = process static gain
Ku = ultimate gain
The process static gain is estimated from a closed loop set point change test:

K_s = \frac{\Delta SP}{\Delta u} \qquad (6\text{-}8)
where:
∆SP = set point change (%)
∆u = controller output change from one steady state to a new
steady state (%)
To detect valve hysteresis, the test should be repeated with a set point
change in the opposite direction. Knowing the process static gain, ultimate
gain, and ultimate period, the process dead time and time constant are
calculated from the following equations:
\tau = \frac{T_u}{2\pi}\sqrt{(K_p K_u)^2 - 1}

\tau_d = \frac{T_u}{2\pi}\left(\pi - \tan^{-1}\frac{2\pi\tau}{T_u}\right) \qquad (6\text{-}9)
Table 6-2 demonstrates the accuracy of using these equations. The ultimate
gain and ultimate period were determined for several first-order plus
dead time process models using relay oscillation. The time constant and
dead time were calculated using Equations 6-9 and the actual static gain.
The difference between actual and calculated time constant and dead time
averaged less than five percent.
Table 6-2. Comparison of Actual and Calculated Process Model from Static Gain,
Ultimate Gain and Ultimate Period

Kp actual   τ actual   τd actual   Ku     Tu     τ calc   τd calc
1.0         100.0      12.0        13.7   68.0   106.2    12.5
1.0         25.0       3.0         13.7   12.0   26.0     3.1
1.0         50.0       12.0        7.5    66.0   52.1     11.0
1.0         12.5       3.0         7.4    11.2   13.0     3.0
1.0         66.7       12.0        9.5    66.0   69.2     12.3
1.0         16.7       3.0         9.8    11.2   17.3     3.0
1.0         200.0      12.0        26.0   50.0   207.0    12.8
1.0         50.0       3.0         27.1   12.0   51.7     3.1
1.0         15.0       2.0         12.3   8.0    15.7     2.1
2.0         15.0       2.0         6.2    8.0    15.6     2.1
0.5         15.0       2.0         26.7   8.0    15.7     2.1
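Equations 6-9 can be checked directly against a table row (here the second row); rounding aside, the calculated values match:

```python
import math

def model_from_relay(Kp, Ku, Tu):
    # Eq. 6-9: first order plus dead time model from the static gain,
    # ultimate gain, and ultimate period
    tau = Tu / (2.0 * math.pi) * math.sqrt((Kp * Ku) ** 2 - 1.0)
    tau_d = Tu / (2.0 * math.pi) * (math.pi - math.atan(2.0 * math.pi * tau / Tu))
    return tau, tau_d

# Second row of Table 6-2: Kp = 1.0, Ku = 13.7, Tu = 12.0
tau, tau_d = model_from_relay(1.0, 13.7, 12.0)
print(round(tau, 1), round(tau_d, 1))   # → 26.1 3.1
```

The calculated time constant and dead time land within a few percent of the actual model values (25.0 and 3.0), consistent with the accuracy claimed for the table.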
loop response. IMC and Lambda tuning have become popular because
oscillation and overshoot are avoided and control performance is specified
in an intuitive way through the closed loop time constant. When there is
mild to moderate interaction between loops, choosing the appropriate
closed loop time constants with Lambda tuning can minimize the impact
of the interactions.
IMC tuning rules for self-regulating processes use the three model param-
eters for a first order–plus–dead time model and a user-defined filter time
constant τ filter to determine PID controller settings. The filter time constant
should be set to the desired closed loop time constant. A larger value of fil-
ter time constant gives more damped tuning. Lambda tuning rules use a
parameter λ , which is also used to set the desired closed loop time con-
stant for a self-regulating process. The closed loop time constant is usually
chosen to be longer than the open loop time constant, perhaps as much as
two or three times longer. This is to ensure robustness in the event of inac-
curacy in model identification and changing process conditions. Process
analysis tools have been used to assist in choosing the closed loop time
constant to achieve minimum variance control [6.12]. A proper closed loop
time constant will attenuate slow disturbances without amplifying noise
in the process measurement. The following are formulas for IMC and
Lambda calculations:
Self-regulating Process
IMC Tuning Rules
PI Control:
K = \frac{\tau}{K_p(\tau_f + \tau_d)} \qquad T_i = \tau \qquad (6\text{-}10)

where τf is the filter time constant.
PID Control:

K = \frac{\tau + \tau_d/2}{K_p(\tau_f + \tau_d/2)} \qquad T_i = \tau + \frac{\tau_d}{2} \qquad T_d = \frac{\tau\,\tau_d}{2\tau + \tau_d} \qquad (6\text{-}11)
Lambda Tuning Rules

PI Control:

K = \frac{\tau + \tau_d/4}{K_p\,\lambda} \qquad T_i = \tau + \frac{\tau_d}{4} \qquad (6\text{-}12)
where λ defines the desired closed loop time constant.
PID Control:
There is no direct form of Lambda Tuning for the PID controller using a
first order–plus–dead time process model.
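Equations 6-10 through 6-12 can be sketched as follows; the example numbers are illustrative:

```python
def imc_pi(Kp, tau, tau_d, tau_f):
    # Eq. 6-10: IMC PI for a first order plus dead time process
    return tau / (Kp * (tau_f + tau_d)), tau

def imc_pid(Kp, tau, tau_d, tau_f):
    # Eq. 6-11: IMC PID
    Ti = tau + tau_d / 2.0
    K = Ti / (Kp * (tau_f + tau_d / 2.0))
    Td = tau * tau_d / (2.0 * tau + tau_d)
    return K, Ti, Td

def lambda_pi(Kp, tau, tau_d, lam):
    # Eq. 6-12: Lambda PI
    Ti = tau + tau_d / 4.0
    return Ti / (Kp * lam), Ti

# Example: Kp = 1.0, tau = 10 s, tau_d = 2 s, desired closed loop
# time constant of 10 s (equal to the open loop time constant)
K, Ti = imc_pi(1.0, 10.0, 2.0, 10.0)   # K ≈ 0.833, Ti = 10.0
```

Doubling or tripling τf (or λ) relative to τ, as the text suggests for robustness, simply scales the controller gain down while leaving the reset time unchanged.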
Integrating process
Model-based control of an integrating process makes it possible to achieve
averaging control. Rather than attempting to maintain tight control by
aggressively moving the manipulated variable, averaging control uses the
tank, in the case of level control, to absorb disturbances. By allowing level
to swing within limits, variability in the outlet flow is reduced which min-
imizes disturbances to downstream processes. The IMC rule given below
is intended for integrating processes with a significant dead time. The IMC
and Lambda results are equivalent when there is no dead time. For inte-
grating processes the tuning parameters, τf and λ , are the desired time
for the disturbance effect to be arrested, i.e., the time at which the process
output begins to recover following a disturbance. Increasing the value of
τf or λ relaxes control.
PI control:

K = \frac{2\tau_f + \tau_d}{K_i(\tau_f + \tau_d/2)^2} \qquad T_i = 2\tau_f + \tau_d \qquad (6\text{-}13)

where:
K_i = \frac{\Delta y}{\Delta u\,\Delta t} = integrating process gain
Δy = % change in the process output
Δu = % step change in the process input
Δt = time interval over which Δy is calculated
PI control:

K = \frac{4}{K_i T_i} \qquad T_i = 2\lambda \qquad (6\text{-}14)
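The integrating-process rules can be sketched as follows; note that with zero dead time and τf = λ the IMC and Lambda results coincide, as the text states. The example numbers are illustrative:

```python
def imc_integrating_pi(Ki, tau_f, tau_d=0.0):
    # Eq. 6-13: IMC PI for an integrating process with dead time
    Ti = 2.0 * tau_f + tau_d
    K = Ti / (Ki * (tau_f + tau_d / 2.0) ** 2)
    return K, Ti

def lambda_integrating_pi(Ki, lam):
    # Eq. 6-14: Lambda PI for an integrating process
    Ti = 2.0 * lam
    return 4.0 / (Ki * Ti), Ti

# Example: level loop with integrating gain Ki = 0.1 %/%/s and a
# desired 20 s arrest time; both rules give the same PI settings
print(imc_integrating_pi(0.1, 20.0))      # ≈ (1.0, 40.0)
print(lambda_integrating_pi(0.1, 20.0))   # ≈ (1.0, 40.0)
```

A longer arrest time lowers the gain quadratically in the IMC rule while only doubling the reset time, which is what produces the surge-absorbing averaging behavior described above.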
Model based tuning using the IMC or Lambda rules has some advantages
over other tuning rules. Controllers are less sensitive to noise, valve life is
prolonged, and process variability is minimized. However, the pole-zero
cancellation approach results in poor load disturbance performance when
the process time constant is long. A technique suggested to overcome this
problem is the use of a dead time compensating PID controller. The PI
controller settings are determined using
IMC or Lambda rules, applying the gain and time constant from the pro-
cess model, with little or no dead time.
The discussed formulas provide tuning for the ISA standard form control-
ler. If a series PID controller is in use the parameters should be appropri-
ately converted:
$$K^s = \frac{K^n}{2}\left[1 + \sqrt{1 - \frac{4T_d^n}{T_i^n}}\right]; \quad T_i^s = \frac{T_i^n}{2}\left[1 + \sqrt{1 - \frac{4T_d^n}{T_i^n}}\right]; \quad T_d^s = \frac{T_i^n}{2}\left[1 - \sqrt{1 - \frac{4T_d^n}{T_i^n}}\right] \tag{6-15}$$

$$K^n = K^s\,\frac{T_i^s + T_d^s}{T_i^s}; \quad T_i^n = T_i^s + T_d^s; \quad T_d^n = \frac{T_i^s\,T_d^s}{T_i^s + T_d^s} \tag{6-16}$$

where superscript s denotes the series form and superscript n the standard (noninteracting) form.
in dead time and gain, which move control toward the stability limit, and
for a decrease in dead time and gain where control is too sluggish. An
increase or decrease by a factor of two in the gain and dead time is
usually assumed. For a properly tuned loop (loop 2) the area of assumed
process-parameter change should be within the loop's stable area;
otherwise (loop 1), the controller tuning parameters should be adjusted to
expand the stable area. The robustness plot uses a logarithmic scale [6.14].
[Figure: robustness plot, dead time ratio (0.75 to 2.25, logarithmic) versus gain ratio, showing the boundary between the stable and unstable loop regions and the operation areas of Loop 1 and Loop 2]
Figure 6-13. Robustness Plot – Loop Stable Area and Controller Operation
Area
It is useful at this point to define gain margin and phase margin and illus-
trate the definitions on the Bode plot in Figure 6-14.
Gain margin is the inverse of the loop gain at the phase crossover
frequency, i.e., the frequency at which the loop phase shift is 180 degrees.
It indicates how many times the loop gain can be increased before the
stability limit is reached at the crossover frequency. Since the loop gain at
the stability limit at the crossover frequency is equal to 1, we can write:

$$K \cdot GM = 1 \qquad GM = 1/K$$
[Figure 6-14: Bode plot of loop gain and phase shift (degrees) versus frequency, marking the gain margin at the 180 degree phase crossover and the phase margin at the unity-gain crossover]
Phase margin is the phase shift that must be added to the loop phase shift
at the gain crossover frequency to make a total phase shift of 180 degrees.
The gain crossover frequency is the frequency at which the loop gain is
equal to 1.
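These definitions can be computed directly. The sketch below, a simplified illustration rather than any standard library routine, finds both crossover frequencies by bisection for a proportional controller of gain K on a first order plus dead time process:

```python
import math

def loop_mag_phase(w, K, Kp, tau, dead_time):
    """Open loop magnitude and unwrapped phase (degrees) for a proportional
    controller of gain K on a first order plus dead time process."""
    mag = K * Kp / math.sqrt(1.0 + (w * tau) ** 2)
    phase = -math.degrees(w * dead_time + math.atan(w * tau))
    return mag, phase

def _bisect(f, lo, hi):
    """Simple bisection root finder; f(lo) and f(hi) must bracket a root."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def gain_margin(K, Kp, tau, dead_time):
    """GM = 1/|L(jw)| at the phase crossover frequency (phase = -180 deg)."""
    w180 = _bisect(lambda w: loop_mag_phase(w, K, Kp, tau, dead_time)[1] + 180.0,
                   1e-6, 1e3)
    return 1.0 / loop_mag_phase(w180, K, Kp, tau, dead_time)[0]

def phase_margin(K, Kp, tau, dead_time):
    """PM = 180 deg + phase at the gain crossover frequency (|L(jw)| = 1);
    assumes K*Kp > 1 so a gain crossover exists."""
    wc = _bisect(lambda w: loop_mag_phase(w, K, Kp, tau, dead_time)[0] - 1.0,
                 1e-6, 1e3)
    return 180.0 + loop_mag_phase(wc, K, Kp, tau, dead_time)[1]
```

For K = 1 on a process with gain 2, a 5 second time constant, and a 1 second dead time, this gives a gain margin of about 4.25 and a phase margin of about 100 degrees.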
select. Tuning objectives bound the tuning parameter range. The bounds for
Figure 6-15a are defined by:
• Tuning for near zero overshoot on a step change in set point (IMC
tuning)
• Near minimum integral absolute error (IAE) for a step disturbance
The robustness coordinates are in terms of gain margin and phase margin.
Within the delineated tuning area there are a variety of responses
available, the intention being to simply click at a point to get a set of
tuning parameters and view the responses. A set point performance range
at a gain margin of three and six is shown in Figure 6-15a (the vertical
number sequence on the robustness map correlates phase margin to
response curve). Significant variation in the set point overshoot in Figure
6-15b for different tuning points on the robustness map in Figure 6-15a
illustrates that not only are there performance and robustness tradeoffs to
be made, but there are performance tradeoffs between the speed of the
approach to set point and overshoot, as well.
[Figure 6-15a: Robustness map, phase margin (40 to 90 degrees) versus gain margin (ratio, 2 to 8), showing the zero overshoot bound, the minimum IAE bound, and numbered tuning points 1 through 6]
[Figure 6-15b: Set point (SP) and process variable (PV) responses, in % of scale versus time in seconds, for tuning points 1 through 6 of the robustness map]
Adaptive Control
A perfectly tuned PID controller may degrade over time and perform
poorly or become oscillatory. There are two main reasons for these
changes.
1. The controlled process is non-linear and the process operation
entered a region with significantly different process parameters
than during the tuning.
2. The process parameters have changed since the auto-tuning was
performed.
The application of a gain scheduler, which adjusts the PID controller gain
as a function of the operating point, is often sufficient for solving the first
problem. In more demanding applications the PID controller is auto-tuned
in several ranges, and the PID gain, reset, and rate are set appropriately to
match the current working range. Between ranges, parameters are
approximated by weighted averages, taking into account the parameters
in adjacent ranges and the distance from the range boundaries [6-20].
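The between-range interpolation can be sketched as below; the linear distance weighting and the data layout are our assumptions for illustration, since [6-20] is not reproduced here:

```python
def scheduled_pid(pv, ranges):
    """Interpolate PID settings between auto-tuned operating ranges.
    ranges: list of (range_center_pv, (gain, reset, rate)) tuples sorted by
    center. Between two centers the parameters are weighted by distance;
    outside the tuned span the nearest range's settings are held."""
    if pv <= ranges[0][0]:
        return ranges[0][1]
    if pv >= ranges[-1][0]:
        return ranges[-1][1]
    for (x0, p0), (x1, p1) in zip(ranges, ranges[1:]):
        if x0 <= pv <= x1:
            w = (pv - x0) / (x1 - x0)  # distance-based weight
            return tuple((1 - w) * a + w * b for a, b in zip(p0, p1))
```

Halfway between two range centers the returned settings are the plain average of the two tuned parameter sets.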
To achieve the desired performance in the second case, the tuning should
be repeated periodically or upon changes in the controller performance.
Adaptive tuning or the more general area of adaptive control has been a
significant research focus for years. Many sophisticated techniques have
been developed for adjusting controller parameters to process changes.
Practical progress is, however, less impressive, because the safety, reliabil-
ity and robustness requirements of practical applications often exceed
those achieved in research.
We can classify simple and reliable adaptive tuners for PID controllers as
shown in Figure 6-16.
[Figure 6-16: Classification of adaptive tuning techniques into time domain approaches (process variable evaluation, PID terms relationship, controller switching with recursive identification or model switching) and frequency domain approaches (recursive identification, discrete Fourier transform)]
There has been significant effort over the years to implement adaptive tun-
ers. This section presents the more promising candidates for implementa-
tion as industrial products.
This controller adapts from the loop response to a set point change or a
load disturbance, when the actual damping and overshoot of the error sig-
nal are recognized, and the period of oscillation is measured. If there are
no oscillations in the error signal, proportional control gain is increased,
and integral and derivative times are decreased. If an oscillation occurs,
damping and overshoot of the oscillation are measured and controller
parameters are adjusted accordingly [6.4].
[Figure: adaptive PID loop, with a supervisor and an excitation generator around the PID controller and process]
Both the controller input (PV) and output (OUT) should be within their
operating ranges and there should be no oscillations. If the supervisor
detects oscillations, it activates the safety-net component, which cuts the
controller gain every time the PV crosses the SP line, that is, when:

$$e(k) \cdot \Delta e(k) < 0 \tag{6-18}$$
Wref is a value from the interval -1 to +1. In practice it was found that
Wref ≈ -0.3 delivers an adaptive gain close to the model-based tuning.
W(k) is an oscillation index, which is negative for an oscillatory PV and
positive for a normally damped or over-damped PV.
Reset adaptation uses the property of a PID controller that a firm
relationship between the increments of the proportional and integral
terms holds:

$$\Delta P = \sum_k \Delta P_k; \qquad \Delta I = \sum_k \Delta I_k; \qquad \beta = \frac{\Delta P}{\Delta I\,\alpha} \tag{6-22}$$
The summation is performed over a predefined number of scans or over
the whole period when the adaptation conditions (6-17) and (6-18) are sat-
isfied.
$$\Delta T_i(k) = \gamma\,T_i(k)\left(\frac{1}{\beta} - 1\right) \tag{6-23}$$

γ ≈ 0.05 defines the speed of reset adaptation.
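One adaptation step following Equations 6-22 and 6-23 can be sketched as below; collecting the term increments and checking the adaptation conditions (6-17), (6-18) is assumed to happen elsewhere:

```python
def adapt_reset(dP, dI, Ti, alpha=1.6, gamma=0.05):
    """One reset-adaptation step: dP and dI are the lists of proportional and
    integral term increments accumulated while the adaptation conditions
    held; gamma sets the speed of reset adaptation."""
    beta = sum(dP) / (sum(dI) * alpha)      # Eq. 6-22
    dTi = gamma * Ti * (1.0 / beta - 1.0)   # Eq. 6-23
    return Ti + dTi
```

When beta equals 1 the reset time is left unchanged; beta above 1 (proportional action dominating) shortens the reset, and beta below 1 lengthens it.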
The tracking of the changes in the process lag and the appropriate adjust-
ment of the controller’s reset time are demonstrated in Figure 6-18a. The
adaptive performance of this type of controller to set point changes is
shown in Figure 6-18b.
[Figure 6-18a: Process and PID reset time constants (10 to 30 seconds) versus time (1 to 3000 seconds), showing the adapted reset tracking the varying process lag]
In this figure, Process Gain=1.5, PID Gain = 1.0, Dead Time = 2 sec, Lag
Time = 15+10sin(k/14400), SP changes every 240 sec, trend sample time =
20 sec, α=1.6.
[Figure 6-18b: Set point and controlled parameter (%) versus time (seconds), showing the adaptive controller response to repeated set point changes]
M.G. Safonov and T.C. Tsao introduced the concept of a fictitious set
point for the PID controller, which assumes a simultaneous evaluation of
the actual controller in the loop and all other controllers in an assumed
controller set [6.27]. The set of candidate controllers is fitted to the
process input and output data measured over the period of adaptation. To
make the fit of the same data set possible for various controllers, a set
point value different from the true set point is calculated. The fictitious set
point for an ideal PID controller (6-24) is calculated by inverting the PID
controller as in Figure 6-19 [6.28].
$$out = \left(k_p + \frac{k_i}{s}\right)(SP - PV) - \frac{s\,k_D}{sT_d + 1}\,PV \tag{6-24}$$
After the fictitious set point is generated, the performance index is
calculated as a squared control error relative to the fictitious set point.
The performance index accounts as well for the controller output move
and the difference between the real and fictitious set points.

[Figure 6-19: Fictitious Set Point Diagram; the inverted PID controller computes the fictitious set point SPi(t) from PVi(t) and Outi(t)]
[Figure: adaptive PID loop with a supervisor block over the PID controller and process]
An adaptive PID loop contains supervisor and safety-net blocks, two typi-
cal blocks for any adaptive design. As soon as adaptation conditions,
defined primarily by control error, exist, the supervisor activates a recur-
sive model update mechanism. The recursive mechanism updates the cor-
rective term first and then recursively updates the process model
parameters:
$$\theta_k = \theta_{k-1} + P_k x_{k-1} e_k \tag{6-25}$$
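A minimal sketch in the spirit of Equation 6-25; note that the true recursive identifier propagates the gain matrix Pk, which we replace here with a normalized scalar step for brevity:

```python
def recursive_update(theta, x, y, mu=0.5):
    """Update model parameters theta from regressor x and measured output y.
    The prediction error e corrects theta along x; the step mu/(x.x) is a
    simplified stand-in for the gain matrix Pk of Eq. 6-25."""
    e = y - sum(t * xi for t, xi in zip(theta, x))     # prediction error
    gain = mu / (1e-9 + sum(xi * xi for xi in x))      # normalized step
    return [t + gain * xi * e for t, xi in zip(theta, x)]
```

Repeated over sufficiently exciting data, the parameter vector converges toward the values that make the model prediction match the measured output.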
process. Models are fixed or adaptive. The rationale for using fixed
models is to ensure that there is at least one model with parameters
sufficiently close to those of the unknown process.
speed of adaptation, but requires the use of a significant number of mod-
els. If instead of switching models, the adaptation is performed through
model-parameters interpolation, the number of models used is drastically
reduced. Figure 6-21 shows a PID adaptive controller with model switch-
ing and interpolation of parameters.
[Figure 6-21 diagram: an excitation generator, PID controller, and process form the loop; a set of models with a model interpolator produces model outputs Y and errors E, and a feedforward controller acts on the disturbance d]
Figure 6-21. Adaptive PID Controller with Model Switching and Interpolation
$$E_i(t) = \big(y(t) - Y_i(t)\big)^2 \tag{6-26}$$

where:
y(t) = process output at time t
Yi(t) = output of the i-th model at time t
Ei(t) = squared error of model i at time t
E(t) = [E1(t), …, Ei(t), …, EN(t)] = squared error vector for models 1, …, N at time t
The norm (6-26) is assigned to every parameter value of the model i, if this
parameter value is used in the evaluated model. If a specific parameter
value is not a part of the evaluated model, the parameter value is assigned
zero. Next the model i+1 is evaluated, and again a norm (6-26) is
computed for this model and assigned to every parameter value of the
model i+1 and added to the previously assigned norm for every parameter
value. The process continues until all models are evaluated. As a result of
this evaluation every parameter value has been assigned a sum of squared
errors from all models in which this specific parameter value has been
used. In one scan t, every parameter value pkl, where k = 1, 2, …, m
(number of parameters) and l = 1, 2, …, n (number of values assigned for
every parameter), has an assigned norm:
$$Ep_{kl}(t) = \sum_{i=1}^{N} \gamma_{kl} E_i(t) \tag{6-27}$$
where:
Epkl(t) = norm for parameter pkl evaluation at scan t
N = number of models
γ kl = 1 if parameter pkl is used in the model i and γ kl = 0 otherwise
The process is repeated in the next scan and the sum of the squared errors
for every parameter value is added to the sum of the appropriate
parameter values accumulated in the previous scan. The process
(adaptation cycle) continues the declared number of scans (1 to M), or
until there is enough excitation on the inputs, whichever condition is
satisfied first. As a result of this procedure, every parameter value pkl has
an assigned accumulated value of the squared errors over a period of
evaluation:
$$sumEp_{kl} = \sum_{t=1}^{M} Ep_{kl}(t) \tag{6-28}$$
At the end of the adaptation cycle the inverse of the sum is calculated for
every parameter value pkl:

$$F_{kl} = \frac{1}{sumEp_{kl}} \tag{6-29}$$

The adapted parameter value is then the weighted average of its
candidate values:

$$p_k(a) = p_{k1} f_{k1} + \ldots + p_{kl} f_{kl} + \ldots + p_{kn} f_{kn} \tag{6-30}$$

where:

$$f_{kl} = F_{kl} / sumF_k \tag{6-31}$$

$$sumF_k = F_{k1} + \ldots + F_{kl} + \ldots + F_{kn} \tag{6-32}$$
Calculated parameters define a new model set, with center parameter val-
ues pk(a), k = 1,…m, and a range of parameter changes as assumed in the
design. The range of changes is defined as ±Δ%, and should be
represented by two or more extra parameter values. In other words, if we
define the new model parameter pk(a), then we have to define at least two
extra values, pk(a) + Δ%·pk(a)/100 and pk(a) - Δ%·pk(a)/100, for the new
model evaluation. Every parameter has defined lower and upper bounds for
adaptation, and if pk(a) exceeds the bound it is taken as a bound value. As
soon as the model has been updated, controller redesign takes place based
on the updated pk(a), k = 1,…m model parameters. Adaptation can work
on the whole model or on only its feedback or feedforward part, that is,
the part which relates the output to inputs where the required minimum
excitation level exists. Application of the first order–plus–dead time
process model for both the feedback and feedforward loops gives the
adaptive model structure for PID loop shown in Figure 6-22.
[Figure 6-22: Adaptive model structure; the feedback input u passes through Dead time 1, Filter 1, and Gain 1, the disturbance input d through Dead time 2, Filter 2, and Gain 2, and both contribute to the output y]
Assuming three values for every parameter, for every input (u and d) the
number of model switching combinations is 3 x 3 x 3 = 27. If both inputs
in Figure 6-22 are used for adaptation, the number of switching
combinations increases to 27 x 27 = 729.
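For a single parameter, the inverse-error weighting of Equations 6-29 through 6-32 reduces to a few lines; a sketch:

```python
def adapted_value(values, sum_sq_errors):
    """Adapted parameter value pk(a): each candidate value is weighted by
    the inverse of its accumulated squared model error (sumEp), so values
    used by better-fitting models dominate the weighted average."""
    F = [1.0 / e for e in sum_sq_errors]                   # Eq. 6-29
    sumF = sum(F)                                          # Eq. 6-32
    return sum(v * Fi / sumF for v, Fi in zip(values, F))  # Eqs. 6-30, 6-31
```

With equal accumulated errors the result is the plain average of the candidates; a candidate whose models fit ten times better pulls the adapted value strongly toward itself.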
There are several self-tuning systems for industrial controllers that apply
frequency domain tuning, though they don’t use the DFT. Hagglund and
Astrom designed a controller that performs tuning from set point changes
and from natural disturbances [6.26]. The specific tuning frequency is
selected by using band-pass filters on the process input and output. The
frequency of the filters is set from an on-demand tuner, which defines the
ultimate period using the relay-oscillation technique. The tuner defines
the process gain for the tuning frequency, using a simplified recursive least
squares estimator. The tuner can work from set point changes and from
natural disturbances, and may include external excitation injected in the
controller output or at the set point. A simplification of this design is
achieved by using a discrete Fourier transform identifier (estimator),
instead of a recursive least squares estimator [6.31]. The DFT tuner design
is shown in Figure 6-23.
[Figure 6-23: DFT tuner design; band-pass filters and DFT blocks on the controller output and the PV feed a DFT estimator, and verification, interpolator, tuning rules, frequency adjustment, and excitation generator units complete the loop around the PID controller and process]
DFT computations are applied to the input and output, and the transfer
function estimate for every filtered frequency is then defined. The
Verification unit tests that the process gain and phase decrease
monotonically as frequency increases. The Interpolator defines the
frequency and process gain at the point where the phase shift is about -π.
From these parameters the ultimate period and ultimate gain are
calculated directly and used for PID controller tuning and for setting up
the tuning frequency for the next cycle. The required excitation level
depends on the process gain around the critical frequency and on the
valve performance.
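The core of the DFT estimation can be sketched as a single-frequency correlation; this sketch assumes the band-pass filtered records of the controller output u and process variable y contain a dominant sinusoid at the tuning frequency:

```python
import cmath, math

def dft_bin(samples, freq, fs):
    """Single-frequency DFT: correlate the sampled signal with
    e^(-j*2*pi*f*t) and average over the record."""
    n = len(samples)
    return sum(s * cmath.exp(-2j * math.pi * freq * k / fs)
               for k, s in enumerate(samples)) / n

def process_gain_phase(u, y, freq, fs):
    """Estimate process gain and phase shift (degrees) at one frequency from
    input and output records sampled at fs Hz; a sketch of the DFT
    estimator role in Figure 6-23."""
    g = dft_bin(y, freq, fs) / dft_bin(u, freq, fs)
    return abs(g), math.degrees(cmath.phase(g))
```

Sweeping this estimate over the filtered frequencies yields the gain and phase points from which the Interpolator locates the -180 degree crossing.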
References
1. Ziegler, J.G. and Nichols, N.B., “Optimum Settings for Automatic
Controllers,” Transactions of the ASME, Vol. 64, 1942, pp. 759-768.
2. Bialkowski, W.L., “Plant Analysis, Design, and Tuning for Uniform
Manufacturing,” Process/Industrial Instruments and Control Handbook,
McMillan, G.K., editor, McGraw-Hill, 1999, pp 10.50-10.55.
3. McMillan, Gregory K., Pocket Guide to Good Tuning, ISA, 2000.
4. Astrom, K. and Hagglund T., “PID Controllers: Theory, Design, and Tuning,”
ISA, 1995.
5. Ziegler, J.G. and Nichols, N. B., “Optimum Settings for Automatic
Controllers,” Transactions of the ASME, Vol. 115, June 1993, pp. 220-222.
6. Bialkowski, W.L. and Haggman, Brian, “Quarter Amplitude Damping
Method Is No Longer the Industry Standard,” American Papermaker, March
1992.
7. Astrom, K. J. and Hagglund, T., “Automatic Tuning of PID controllers,” ISA,
1988.
8. Astrom, K. J., and Hagglund, T.,“A Frequency Domain Method for Automatic
Tuning of Simple Feedback Loops,” IEEE 23rd Conference on Decision and
Control, Las Vegas, Dec. 1986.
9. Astrom, K. J., and Hagglund, T., “Automatic Tuning of Simple Regulators
with Specifications on Phase and Amplitude Margins,” Automatica, Vol. 20,
1984, pp. 645-651.
10. McMillan, Gregory K., Wojsznis, Willy K., and Meyer, Ken, “Easy Tuner for
DCS,” ISA Conference, Chicago 1993.
11. Chien, I-Lung and Fruehauf, P. S. ,“Consider IMC Tuning to Improve
Controller Performance,” Chemical Engineering Progress, October 1990.
12. Bialkowski, W. L., “Control Engineering Course Material”, Entech Seminar
1991, Toronto, Canada.
13. Wojsznis, W. K. and Blevins, T. L., “System and Method for Automatically
Tuning a Process Controller,” US patent No. 08/070 090.
14. Shinskey, F. G., “Putting Controllers to the Test,” Chemical Engineering,
December 1990.
15. Ott, Mike and Wojsznis, Willy, “Auto-Tuning: From Ziegler-Nichols to Model
Based Rules,” Instrumentation and Control, ISA/95 Conference, New
Orleans, 1995.
16. Yu, Cheng-Ching, Autotuning of PID Controllers, Springer, 1998.
17. Wojsznis, W.K., Blevins, T. L., and Thiele, D., “Neural Network Assisted
Control Loop Tuner,” IEEE Conference on Control Applications, Hawaii, July
1999.
18. Gudaz, John and Zhang, Yan, “Robustness Based Loop Tuning,” ISA
Conference, Houston, September 2001.
19. Kiong, Tan Kok, Quing-Guo, Wang , Chieh, Hang Chang and Hagglund, Tore,
“Advances in PID Control,” Springer, 1999.
20. Wojsznis, Willy K., Borders, Guy T. Jr., and McMillan, Gregory K., “Flexible
Gain Scheduler,” ISA Transactions 33 (1994) 35-41, Elsevier.
21. Maršik, J. and Strejc V., “Application of Identification-free Algorithms for
Adaptive Control,” Automatica, Vol. 25, No. 2, 1989, pp.273–277.
22. Wojsznis, Willy K., Gudaz, John, and Blevins, Terrence L., “Adaptive Model
Free PID Controller,” Fifth SIAM Conference on Control Applications, San
Diego, Ca, July 2001.
23. A. S. Morse, F. M. Pait, and S. R. Weller, “Logic-Based Switching Strategies for
Self-Adjusting Control,” 33rd IEEE Conference on Decision and Control,
Workshop Number 5. Lake Buena Vista, Florida, USA, December 1994.
24. Narendra, Kumpati S and Balakrishnan, Jeyendran, “Adaptive Control Using
Multiple Models,” IEEE Transactions on Automatic Control,. Vol. 42, No. 2,
February 1997, pp.177–187.
25. Lamaire, Richard O., Valavani, Lena, Athans, Michael, and Stein, Gunter, “A
Frequency-domain Estimator for Use in Adaptive Control Systems,”
Automatica, Vol. 27, No.1, 1991, pp.23–38.
26. Hagglund, Tore and Astrom, Karl Johan, “Industrial Adaptive Controllers
Based on Frequency Response Techniques,” Automatica, Vol. 27, No. 4, 1991,
pp. 599–609.
27. Safonov, M.G., and T.C. Tsao, “The Unfalsified Control Concept and
Learning,” IEEE Transactions on Automatic Control, vol 42, no. 6., Jun. 1997, pp.
843–847.
28. Jun, M., and Safonov, M.G., “Automatic PID Tuning: An Application of
Unfalsified Control,” Proceedings of the 1999 IEEE International Symposium on
Computer Aided Control System Design, Hawaii, USA, August 1999.
29. “Emerson Process Management auto tuning,” www.easydeltav.com.
30. McMillan, Gregory K., Tuning and Control Loop Performance — A Practitioner’s
Guide, 3rd Edition, ISA, 1994.
31. Wojsznis, Willy K., “Discrete Fourier Transform Based Self-Tuning,” Advances
in Instrumentation and Control, ISA/96 Conference, Chicago, October, 1996.
Practice
Overview
In considering control applications for specific processes, Fuzzy Logic
Control (FLC) gains advantage when developing a process model is diffi-
cult or impossible. Another advantage of FLC is its easy and natural use of
the existing experience of human operators in process control and optimi-
zation. Such terms as “smooth operation,” “good performance,” and
“good quality” are commonly heard from people involved in building and
tuning FLC.
Fuzzy logic was developed in the 1960’s and was first applied in the early
1970’s for the control of a cement kiln [7.3]. Recent applications of FLC
spread over various areas of automatic control, mostly as a complement to
existing classical techniques. In some areas, FLC has become the dominant
control technology, primarily as embedded controllers. Some examples of
extremely successful fuzzy logic applications [7.4] are:
Opportunity Assessment
The greatest advantage of fuzzy logic control is evident when it is applied
to processes with insignificant dead time in order to accelerate the speed
of control while retaining a high-quality level of control. In the applica-
tions where set point changes and severe disturbances are common, and
the PID controller does not meet expectations, fuzzy logic control may be
considered as an alternative.
Examples
Temperature Control
A fuzzy logic controller was applied to a temperature control loop on a
chilled water refrigeration machine [7.14]. The temperature controller was
frequently exposed to large unmeasured load disturbances and large set
point changes, particularly on startup. On startup, the controller was
required to rapidly bring the water temperature down to set point without
significant undershoot. Significant undershoot could lead to chiller freez-
ing or compressor shutdown due to discharge pressure interlocks. The
conventional PID controller was replaced with a fuzzy logic–based con-
troller and tuned as per the vendor recommendations. The PID controller
response was relatively rapid but undershoot was significant. The fuzzy
logic–based controller was able to provide the rapid response to set point
changes with very minimal overshoot. Use of the fuzzy logic–based con-
troller greatly improved the chiller operation.
Moisture Control
Another example of the application of fuzzy logic is in the automating of a
drying oven in a continuous dyeing range for solid-color tufted carpeting.
The installed fuzzy logic controller has eliminated carpet overheating due
to overshoot during temperature set point changes. Tighter control and
other oven improvements have helped boost average line speed by 15%.
The reprocessing and scrap costs have been reduced, providing an invest-
ment payback of only four months. And finally, the consistency of color
and density across the carpet width has been improved, eliminating cus-
tomer complaints of poor matching at abutted sections. In addition, the
operation could be shifted to a higher average temperature without over-
heating the product. This is possible because the automation is more pre-
cise and more high-quality information is available to the operator,
producing better decision making.
yard for an average product. The greater oven capacity also permitted the
mill to perform the in-house dyeing of some heavy solid-color products, a
task previously not possible because the line was sized for light carpet and
was too slow for the heavier grades.
Autotuned PID smoothed the loops running the last six zones but failed to
bring Zone 1 totally under control. Auto-tuned fuzzy logic loops for the
Zone 1 half-zones delivered fast and smooth step responses. In effect, scrap
was substantially reduced at transition points and line throughput was
boosted.
Application
General Procedure
Despite the many potential advantages of fuzzy logic, developing and
tuning a fuzzy logic controller for an industrial application is always a
challenge. It may be desirable to use fuzzy logic technology to build a
generic FLC that is designed to handle classes of control problems, as
opposed to building an FLC for a specific application.
One such generic class of control problem would be to mirror the PID con-
troller for applications where the FLC can provide better control perfor-
mance. By comparing the Integrated Absolute Error (IAE) of the FLC
controller with the IAE of the standard PID controller, it was found that
the IAE for the fuzzy controller could, in some cases, deliver up to a 20%
improvement in performance [7.14].
--`,```,,,```,`,````,``,`,,`,`-`-`,,`,,`,`,,`---
Rules of Thumb
Rule 7.1 — Fuzzy logic control excels at reducing the overshoot for processes
with a large lag time. For these types of processes, the non-linear feature of
fuzzy logic allows faster response without overshoot for both set point
and load-disturbance changes. Many of the successful applications have
been on temperature loops.
Rule 7.3 — Fuzzy logic can be used to compensate for a decreasing process gain.
For pH control systems with set points on the steepest portion of the titra-
tion curve, the higher control action for larger errors helps compensate for
the decrease in process gain as you move away from the set point.
Guided Tour
This tour illustrates the potential ease of use and convenience of an inte-
grated interface that is possible with fuzzy logic control embedded in an
industrial control system. The following areas are addressed:
systems. Even some stand-alone digital controllers for temperature control
are based on fuzzy logic control. Often the implementation of fuzzy logic
control in these systems is limited in scope to make it easier to engineer
and commission the control. The rules and membership functions are
predefined in some implementations to meet a specific application
requirement. Processes dominated by lag and where the control must
respond quickly to set point changes and load disturbances with minimal
overshoot are the best candidates for fuzzy logic.
In this implementation the rules and membership functions are
predefined except for the scaling of the membership functions for error,
change in error, and change in output. Thus the fuzzy block is used as a
replacement for the PID block, as illustrated in Figure 7-1.
Similar to the auto tuning for PID (see Chapter 6), the process dynamics
are automatically established when Test is selected.
If the user wants to see the performance that this tuning will provide
before using it in the process, he or she can select Simulate. The closed
loop response to set point and load disturbances is then shown. By
changing the values of the recommended scaling factors, the new
response is displayed, as illustrated in Figure 7-3.
Since fuzzy logic control is non-linear, it is not possible to show the robust-
ness of the control in terms of phase and gain margin.
Theory
The difference comes from calculations based on fuzzy logic. Applying
fuzzy logic calculations, it is possible to design a non-linear controller
without detailed knowledge of the operating-point nonlinearity, as would
be required for a classical control design.
Fuzzy logic has its roots in a classical two-valued “True” and “False” logic.
Adding “fuzziness” makes it possible to better address real-life situations.
To illustrate this connection and analogy, let’s review a few basics of classi-
cal logic. Classical logic operates on crisp sets. A simple example, not actu-
ally used in industrial process control but illustrating the concept, is where
a set of temperature values above 33°C is “hot,” thus below or equal to
33°C is “cold”; or a person 6 feet tall belongs to the set “tall,” but someone
below that value by even a fraction of an inch is “short.” Is it reasonable to
call a person of 5 feet 11.99 inches “short”? Applying common sense,
apparently it is not. This type of doubt stimulated the search for more flex-
ible expressions of such situations and led to the development of fuzzy sets
and multivalued logic.
In fuzzy sets, bounds are defined in a more flexible way. The set “hot,”
for example, includes not only values above 33°C but also any value
below 33°C that is “hot” to some degree, becoming fully “hot” at 33°C
and above. The “degree of hotness” at 27°C would of course be greater
than at 26°C, for example. To define the degree of “hotness,” fuzzy logic
applies a membership function. The membership function defines the
degree of membership in a fuzzy set. The above discussion is
summarized in Table 7-1 and Figure 7-4.
Table 7-1. Classical Logic vs. Fuzzy Logic

                  Sets        Membership                 Logic Values               Operations
Classical logic   Crisp sets  Unambiguous                “True,” “False” or 1, 0    Two-valued logic
Fuzzy logic       Fuzzy sets  Various degrees of         Any value in the range     Multi-valued logic
                              membership, defined by     [0, 1], defined by the
                              the membership function    membership function
In summary, we can say that in fuzzy logic the truth of any statement is a
matter of degree. We can observe as well that classical logic is a special
case of fuzzy logic with a step membership function.
After defining fuzzy logic values, the next step would be to define fuzzy
logic operations. Fuzzy logic operates on many values and operations in
fuzzy logic are based on multi-valued logic principles. Using again the
comparison with classical logic, we introduce the basics of multi-valued
logic operations in Table 7-2. Multi-valued logic uses the same operators
as classical logic, with the difference that the operations are performed on
real values in the range [0, 1] instead of on 0 or 1 only.

[Figure 7-4: Membership function for the fuzzy set Hot; logic value versus temperature, 24 to 33°C]
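Although Table 7-2 is not reproduced here, the standard multi-valued operators (Zadeh's minimum, maximum, and complement, consistent with the min-AND inference used later in this chapter) can be stated directly:

```python
def fuzzy_and(a, b):
    """Multi-valued AND: the minimum of the two membership values."""
    return min(a, b)

def fuzzy_or(a, b):
    """Multi-valued OR: the maximum of the two membership values."""
    return max(a, b)

def fuzzy_not(a):
    """Multi-valued NOT: the complement of the membership value."""
    return 1.0 - a
```

On the crisp values 0 and 1 these reduce exactly to the classical logic operators, illustrating that classical logic is a special case of fuzzy logic.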
In the next sections we learn more about fuzzy logic control by
developing several fuzzy logic controllers. As a basic fact about fuzzy
logic, it is worthwhile to honor Lotfi Zadeh, who developed fuzzy logic in
the early 1960s, and Jan Lukasiewicz, who pioneered multi-valued logic
in the 1930s.
A fuzzy logic controller operates in three steps: fuzzification, rule
inference, and defuzzification. (See Figure 7-5.)
Fuzzification
Fuzzy sets and membership functions are used to translate controller input
values into fuzzy values.
The next step is to define membership functions for every fuzzy set, to
establish boundaries for the set. Membership functions are defined analyt-
ically. Examples of simple membership functions for fuzzy sets Low and
High are given by Formulas (7-1) and (7-2) and presented graphically in
Figure 7-6. Level is used as the control input. Prior to applying
fuzzification, the control input is normalized to the range 0–1:

$$M(High) = Level \tag{7-1}$$

$$M(Low) = 1 - Level \tag{7-2}$$

Formula 7-1 defines the membership function for fuzzy set High, and
Formula 7-2 for fuzzy set Low. A level equal to 70% or 0.7 gives a degree
of membership of 0.7 for the High membership function and 0.3 for the
Low membership function. This can be expressed as

M(Low) = 0.3; M(High) = 0.7
Figure 7-6. Membership Functions for Fuzzy Sets Low and High
Rather than requiring full membership in one state or the other, fuzzy
logic allows for a tank's level, for example, to be somewhat high (degree of
membership=.7) and somewhat low (degree of membership = .3) at the
same time. In other words, a statement is typically true only to a certain
extent or degree.
In this example, the membership functions describing the tank's level, the
tank’s inlet flow and the outlet valve might be defined as shown in Figure
7-8.
[Figure: tank level process, with the inlet flow measured by transmitter FT 1-1 and the level by transmitter LT 1-2]
The inference rules for controlling the tank level are defined as in Table
7-3.
After applying the degree of membership for both level and inlet flow to
the fuzzy rules for tank outlet valve, as shown in Figure 7-9, we get fuzzi-
fied output values, that is, values for the membership functions Closed,
Normal, Open. Normal flow in this example is a flow that is not too low.
As you may notice from Figure 7-9, when the fuzzy logic AND function is
applied to the input membership values, the output membership function
value is the minimum of the two input membership values. The Valve
Closed membership function value, for example, is equal to the Level Low
membership function value, which is smaller than the Flow Normal value.
This follows from inference Rule 1.
Defuzzification
Defuzzification translates fuzzy logic values into real control output val-
ues. The most common method of defuzzification is to calculate a
weighted average of all the activated output membership functions. Refer-
ring to Figure 7-9, the weighted average of membership functions Closed,
Normal, and Open determines the outlet valve position.
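A minimal Python sketch of weighted-average defuzzification follows; the singleton valve positions assigned here to Closed, Normal, and Open (0, 50, and 100% open) and the activated membership values are illustrative assumptions, not values from the text:

```python
def defuzzify(memberships, positions):
    """Weighted average of the activated output membership functions."""
    weight = sum(memberships)
    if weight == 0.0:
        return 0.0  # no rule fired; fall back to a default output
    return sum(m * p for m, p in zip(memberships, positions)) / weight

# Hypothetical activated memberships for Closed, Normal, Open
memberships = [0.25, 0.5, 0.25]
positions = [0.0, 50.0, 100.0]  # assumed valve positions, percent open
print(defuzzify(memberships, positions))  # 50.0
```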
The nonlinearity built into the FLC function block reduces overshoot and
settling time, achieving tighter control of the process loop. Specifically, the
FLC function block treats small control errors differently from large con-
trol errors and penalizes large overshoots more severely. It also severely
penalizes large changes in the error, helping to reduce oscillations.
Membership Functions
The FLC function block defines membership functions on three signals: the
input signals are the error and the change in error, and the output signal
is the change in control action. The relations among these three variables
represent a nonlinear controller. The nonlinearity results from the
translation of process variables to fuzzy sets (fuzzification), rule
inference, and retranslation of the fuzzy set to a continuous signal
(defuzzification).
The two membership functions for the error and the change in error are
Negative (N) and Positive (P). The membership function scalings (Se and
S∆e) and the error and change in error values, respectively, determine the
degrees of membership. First, the error in percent is calculated relative
to the set point and then scaled to the range [-1, 1]:
Es = (PV − SP) / Se,   −1 ≤ Es ≤ 1   (7-3)
[Figure: membership functions N (negative large) and P (positive large) for the error e, spanning −Se to +Se]
Degrees of membership for positive error Me(P) and negative error Me(N)
are calculated by using the following expressions:
[Figure: membership functions N (negative error) and P (positive error) for the change in error ∆e, spanning −S∆e to +S∆e]
for an absolute value of scaled error and change in error equal to 1 with
the same sign.
[Figure: membership functions N (negative), ZO (zero), and P (positive) for the change in output ∆u, spanning −S∆u to +S∆u]
Table 7-5. Fuzzy Logic Rules for Fuzzy PID Controller in a Matrix Form

Error\Change in Error   Negative   Positive
Negative                Negative   Zero
Positive                Zero       Positive

The entries of the matrix are the membership functions for the change in
output.
Defuzzification
Defuzzification uses the center-of-gravity method to determine the fuzzy
scaled change in the output ∆us:

∆us = (M∆u(P) − M∆u(N)) / (M∆u(P) + M∆u(ZO) + M∆u(N))   (7-8)

The change in output is then calculated by rescaling with the change in
output scaling factor: ∆u = ∆us S∆u.
For regions where the absolute error is greater than the error scaling
factor, or the absolute change in error is greater than the change in error
scaling factor, the error and change in error values are clipped at the
respective scaling factors. Figure 7-13 shows an example of an FLC gain
curve that illustrates the change in controller gain for the FPID
controller when e = ∆e.
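Pulling the pieces together, one FPID evaluation can be sketched in Python. The complementary ramp memberships, the four-rule matrix, AND as minimum, and max as the operator combining the two rules that fire ZO are assumptions consistent with the description above; Equation 7-8 supplies the defuzzification, and the final rescaling by the change-in-output scaling factor is assumed:

```python
def fpid_delta_u(e, de, s_e, s_de, s_du):
    """One evaluation of the fuzzy PID nonlinearity (a sketch)."""
    # Scale, then clip, the error and change in error to [-1, 1]
    es = max(-1.0, min(1.0, e / s_e))
    des = max(-1.0, min(1.0, de / s_de))
    # Degrees of membership: P ramps 0 -> 1 over [-1, 1]; N is its complement
    me_p = (es + 1.0) / 2.0
    me_n = 1.0 - me_p
    mde_p = (des + 1.0) / 2.0
    mde_n = 1.0 - mde_p
    # Rule inference with AND = min; the two mixed rules both fire ZO
    # and are combined here with max (an assumed OR operator)
    m_n = min(me_n, mde_n)
    m_p = min(me_p, mde_p)
    m_zo = max(min(me_n, mde_p), min(me_p, mde_n))
    # Defuzzification per Equation 7-8, then rescale to engineering units
    dus = (m_p - m_n) / (m_p + m_zo + m_n)
    return dus * s_du
```

With zero error and zero change in error the output change is zero; at the clipping limits with both inputs positive, the full scaled change S∆u is applied.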
The straight line through the origin shows the linear relationship of a con-
ventional PI controller. As the error and change in error increase, the
change in output of a conventional PI controller increases linearly. Note
that the gain of the FLC function block is similar to the gain of the PI con-
troller when the error and change in error are small. The gain of the FLC
function block increases gradually as the error and change in error
increase.
[Figure 7-13: FLC gain curve, ∆u versus e and ∆e, each scaled to the range −1 to 1]
The nonlinearity built into the FLC function block reduces overshoot and
settling time, achieving tighter control of the process loop. To help antici-
pate a rapid change in the process, derivative action is provided in the
feedback path of the loop, as shown in Figure 7-14.
[Figure 7-14: fuzzy logic controller loop. The setpoint and PV form the error e and change in error ∆e; translation, rule inference, and retranslation produce the change in output ∆u at OUT; derivative action with a filter acts on PV in the feedback path]
The FLC function block treats small control errors differently from large
control errors and penalizes large overshoots and large changes in the
error, helping to reduce the oscillation.
Figure 7-15 shows how an FPID reacts to overshoot and oscillation. The
FPID applies stronger control action prior to points B, D, and F where
overshoot is significant and increasing, as compared to control action at
points B, D, and F where change in error is small, or after points B, D, and
F where change in error is negative. Similarly at points A, C, and E, control
action is defined jointly by minimum value of error and change in error.
[Figure 7-15: FPID response to overshoot and oscillation, with overshoot peaks at points B, D, and F and setpoint crossings at A, C, and E]
This type of nonlinearity allows the FPID to provide better control
performance than standard or nonlinear PID control. Following S. Joe Qin
[7.7], we compare the FPID with an error-squared controller whose gain
Kesq is defined in Shinskey [7.13]:

Kesq = Kp (L + (1 − L) |e|)   (7-10)
This representation clearly shows the analogy between the FLC and a
nonlinear controller. It is possible in principle to design any nonlinear
controller using an FLC control surface: when the error and change in
error are known, the change in controller output is simply defined by
reading the values off the 3-D plot.

Figure 7-16. Control Surface of Fuzzy Logic Controller
S∆e = β ∆SP   (7-11)

S∆u = 2 S∆e Kc   (7-12)

Se = Ti S∆e / ∆t   (7-13)

where:
S∆e = change in error scaling factor
Se = error scaling factor
S∆u = change in controller output scaling factor
∆SP = SP change relative to a nominal change (1%)
β = a function of process dead time (τd) and time constant (τ), limited to
the range .2 to .5, as defined by Equation 7-14:

β = .2 + τd/τ,   .2 ≤ β ≤ .5   (7-14)

The fuzzy PID tuning technique was developed by Qin and enhanced by Qin,
Ott, and Wojsznis [7.6]. When the ultimate period Tu is available from
auto-tuning, the error scaling factor can instead be set as:

Se = S∆e Tu / 2   (7-15)
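The tuning relations above, as reconstructed here, can be sketched in Python; the loop values used for illustration are hypothetical, and Kc and Ti are assumed to come from an existing PI tuning:

```python
def fpid_scaling(kc, ti, dt, dead_time, time_constant, d_sp=1.0):
    """Scaling factors from an existing PI tuning, per Equations 7-11 to
    7-14 as reconstructed above (a sketch, not a vendor algorithm)."""
    beta = min(0.5, max(0.2, 0.2 + dead_time / time_constant))  # Eq. 7-14, clamped
    s_de = beta * d_sp                                          # Eq. 7-11
    s_du = 2.0 * s_de * kc                                      # Eq. 7-12
    s_e = ti * s_de / dt                                        # Eq. 7-13
    return s_e, s_de, s_du

# Hypothetical loop: Kc = 1.5, Ti = 60 s, scan time 1 s,
# dead time 6 s, time constant 30 s
s_e, s_de, s_du = fpid_scaling(kc=1.5, ti=60.0, dt=1.0,
                               dead_time=6.0, time_constant=30.0)
```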
References
1. Pedrycz, Witold, “Fuzzy Control and Fuzzy Systems,” Second edition, John
Wiley and Sons, 1993.
2. Chen, Guanrong, and Trung Tat Pham, “Introduction to Fuzzy Sets, Fuzzy
Logic, and Fuzzy Control Systems,” CRC Press, 2001.
3. Zadeh, L. A., “Fuzzy Sets and Systems,” Information & Control, vol.8, 1965,
pp.338–353.
4. Sugeno, M., “An Introductory Survey of Fuzzy Control,” Information Sciences,
vol. 38, 1985, pp. 59-83.
5. Mamdani, E. H., “Application of Fuzzy Algorithms for Control of Simple
Dynamic Plant,” Proceedings of the Institution of Electrical Engineers
(London), vol. 121, no. 12, Dec. 1974, pp. 1585-1588.
6. Shaw, I. S., Fuzzy Control of Industrial Systems, Kluwer Academic Publishers,
Boston, 1998.
7. Qin, S. Joe, “Auto-Tuned Fuzzy Logic Control,” Proceedings of American
Control Conference, Baltimore, 1994, vol. 3, pp. 2465-2469.
8. Qin, S. J., Ott, M., and Wojsznis, W., “Method of Adapting and Applying
Control Parameters in Non-Linear Process Controllers,” US Patent No.
5,748,467, 1998.
9. Ying, Hao, Siler, William, and Buckley, James, “Fuzzy Control Theory: A
Nonlinear Case,” Automatica, vol.26, no. 3, 1990.
10. Ling, Cheng, and Edgar, Thomas F., “The Tuning of Fuzzy Heuristic
Controllers,” Proceedings of American Control Conference, Chicago, 1992, pp.
2284-2290.
11. Astrom, K.J., and Hagglund, T., “PID Controllers: Theory, Design, and
Tuning,” ISA, 1995.
12. Wojsznis, Willy K., “An Alternative Fuzzy Logic Controller Design: A
Simulation Study,” Proceedings of the 1994 International Control Engineering
Conference, Chicago, March 1994, pp. 159-167.
13. Shinsky, F.G., “Process Control Systems: Application, Design, and
Adjustment,” 3rd edition, McGraw-Hill Book, New York 1988.
14. Colclazier, Jay D., “Fuzzy Logic – An Effective Alternative to PID Control,”
Advances in Instrumentation and Control, ISA Conference, 1994.
15. DeltaVTM Home Page: http://www.easydeltav.com.
Practice
Overview
When critical measurements are slow to reflect process changes, or only lab
analysis is available, parameter estimation can often be used to improve
the performance of control and monitoring applications.
The old saying, “You can’t control what you can’t measure,” has haunted
many Advanced Process Control projects. Being able to either accurately
measure or predict an important process parameter is an integral part of
closing the loop, but one that is often afforded little attention. There are
many approaches available in building properties estimators for closed-
loop control. Clearly understanding the final control objective of the prop-
erty estimator and matching this objective with the best estimation tech-
nique is often the key to a successful control application.
• Continuous estimation of a laboratory quality measurement
Along with the many possible uses of the estimator, there are just as many
techniques available to build the relationships and implement the
estimator online. The proper selection of the estimation technique is an
important step. It is often possible to evaluate the quality of generated
estimators using different estimation techniques. The estimation
techniques most common in today’s advanced process control
applications are:
Neural Networks
• Estimation technique applied in cases where the inferred property
has a significant nonlinear relationship with independent inputs.
• Capable of easily modeling dynamic and transient behavior even
where first-principles modeling techniques have failed.
The selection of the final form and technique is largely a function of the
final use of the estimator. It is important to remember that many of the
techniques available today have slowly evolved into industrial practice
from the world of academia or statistics, and have only recently been
adapted to the real-time world for process control. This is a learning
process and many of the available techniques are still evolving to properly
handle the peculiarities associated with the online control world. This
section will explore the various uses of the properties estimator and
discuss the attributes of the various techniques. The objective is to
understand how these techniques work and where these estimators fit into
the APC project. This knowledge will then allow us to determine which
estimation technique best fits the task at hand.
• The process drifting outside the data region used for modeling
• Unmeasured disturbances.
HINT: Consistency logic blocks on the analyzer signal can often detect analyzer
malfunctions and report these malfunctions for maintenance. During this period,
the feedback correction is set to zero, and the process control system can continue
operating on the inferential model. Predictive maintenance and increased control-
ler robustness are two benefits.
Opportunity Assessment
Although processes have many measurements, only the ones that corre-
late well with the property in question are used in the properties estima-
tor. This correlation is expressed in terms of a model directly relating the
inferential inputs to the estimated property. Moreover, if the estimator is
based upon inputs with known causal effects, this will only increase the
robustness of the estimator, since correlation relationships with no
physical basis can break down for numerous reasons.
The linear estimator is based upon simple time series–based models. For a
common distillation process, a simple linear dynamic model is sufficient
to relate the tower temperature and pressure to the overhead composition.
In contrast, estimating the properties of a high purity distillation tower,
operating across a wide range of purities, may require a more complex
nonlinear model, such as a neural network, to achieve the level of accu-
racy required for closed-loop control.
The use of the simple linear estimator can provide many benefits within a
larger APC application. The benefits range from improved performance,
to better operator understanding, to predictive maintenance of analyzers.
The implementation is simple and the modeling techniques are the same
as the ones being used by the MPC system.
The general form of the dynamic linear estimator is shown in Figure 8-8.
Neural Network
Neural network technology exploded into the world of process control in
the early 1990s. The technology showed great promise and for a while
was seen as a technology that could provide models for most of our sys-
tems without the need to understand the fundamental behavior or rela-
tionships of our process. Unfortunately, the free lunch did not have much
meat. People soon realized that blindly applying black-box modeling tech-
nology may provide a satisfactory fit for historical data but often leads to
poor performance for use online in a closed-loop control application, or on
time-variant processes.
Over time, people focused more on the true capabilities and power of neu-
ral networks: the ability to model nonlinear relationships in data without
having to define the form of the nonlinearity. This, coupled with some fun-
damental knowledge and understanding of the process relationships and
the careful selection of the variables and data required to generate the
model, has proven to be a powerful estimation technique. Today, neural
networks are being successfully applied in a number of online
applications.
Using the function-block structure that is provided in most control sys-
tems, it is possible to easily create an effective Dynamic Estimator. The fol-
lowing are examples of typical applications.
Distillation Process
Consider the typical industrial distillation process shown in Figure 8-1.
This simple distillation tower exhibits the components of the process esti-
mator. The tower separates two components, having different boiling
points, into an overhead distillate product and a bottoms product. The
composition of the feedstream is measured and denoted by the instrument
(AT 1-1). The overhead pressure (PT 1-2) is measured along with the tem-
perature (TT 1-2) at the top of the tower, this temperature being a good
indication of overhead product quality at a fixed pressure and feed com-
position. An overhead analyzer (AT 1-2), located downstream of the tower
overhead accumulator, provides a measurement of the distillate stream
composition at an update frequency of 15 minutes. A lab sample is also
taken near the analyzer sample point twice a day. An APC application on
[Figure 8-1: distillation column with feed analyzer AT 1-1, feed temperature TT 1-1, and feed flow FT 1-1; overhead temperature TT 1-2 and pressure PT 1-2; distillate receiver with level control LC 1-2, flow control FC 1-2, analyzer AT 1-2, and the lab sample point; bottoms level control LC 1-3 and steam flow control FC 1-3]
Analyzer Validator
Online analyzers are very important elements in understanding what the
process is doing. These are all common things you can hear in any control
room:
“I don’t believe the analyzer”… “The controller would run fine if only the
analyzer worked”… “The lab says high and the analyzer says low; take
your pick.”
[Figure 8-8: dynamic linear estimator. The TT 1-2 input passes through delay, lag, and delta blocks and a gain KT1-2; the result updates Y(t) from Y(t−1), with feedback correction applied through the gain Kfbk against the analyzer output]
• Spiking behavior
• Frozen response
• Drifting
HINT: Integrating the analyzer feedback error term over time is a good indicator
of analyzer drift.
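This hint can be sketched as a simple running integral of the feedback error; the threshold at which the accumulated error indicates drift is application-specific, and the values below are illustrative:

```python
def integrated_error(errors, dt=1.0):
    """Running integral of the analyzer feedback error. Zero-mean noise
    integrates toward zero; a persistent one-sided error accumulates,
    pointing to analyzer (or model) drift."""
    total = 0.0
    for e in errors:
        total += e * dt
    return total

print(integrated_error([0.1, -0.1, 0.05, -0.05]))  # cancels out: no drift indicated
print(integrated_error([0.2, 0.25, 0.18, 0.22]))   # accumulates: probable drift
```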
Online Prediction
Many MPC applications are designed to control product qualities, with
these qualities being measured by online analyzers. Online analyzers
present many challenges within a closed-loop control application includ-
ing the following:
• Spiking
• Drift
• Freeze

[Figure 8-2: analyzer AT 1-2 signal exhibiting these malfunctions, alongside the predicted Y(t) from TT 1-2 and the resulting analyzer error]
There are two basic configurations of the online predictor. The first is to
use the measured process inputs to generate a continuous signal that will
track the analyzer. The second configuration is designed to take advantage
of the fact that the estimator inputs often contain information on how the
analyzer will respond long before the analyzer changes. Thus we can build
the predictor to remove the analyzer delay and lag. For example, the
overhead temperature and pressure can predict the composition in the tower
overheads now, with the analyzer simply providing feedback correction once
that material is finally sampled by the downstream analyzer.
The response of the Online Predictor with removal of the analyzer delay
and lag is shown in Figure 8-2c.
[Figure 8-2c: online predictor with the analyzer delay and lag removed; Y(t) is predicted from TT 1-2 and PT 1-2 and corrected by AT 1-2]
HINT: The feedback of these online analyzers into our properties estimators is
important. However, we can make use of the online estimator technology to some-
times reduce the number of online analyzers required to properly control the pro-
cess. For example, consider a distillation train with several towers in series.
Analyzers are located on the overheads and bottoms of each tower to provide com-
positional values for control. With an online predictor, the light key in the bottoms
of the first column is estimated with tower bottoms temperature and pressure with
feedback from the analyzer in the overheads of the second column. The linear esti-
mator can remove the delay time associated with only measuring the light key
impurity in the first tower bottoms with an analyzer located in the overheads of
the second tower. Reducing the number of analyzers will not only lower capital
expenditure but will also reduce maintenance costs.
The normal challenge with this application of the neural network is in rig-
orously handling the dynamics. In many cases the settling time between
the process inputs and the analyzer reading is significantly longer than the
cycle time of the analyzer. If the goal is to provide a continuous signal of
an analyzer that may update only every 15-30 minutes, then the dynamics
between inputs and outputs must be handled properly.
Another role for this type of estimator is to handle the case when an ana-
lyzer must be taken offline for maintenance or calibration. Having an
online soft sensor can provide a level of redundancy for periods when an
analyzer is offline for maintenance. If the neural soft sensor is biased with
feedback from an online analyzer, tracking the integral of the bias term is a
good indication of total drift in the soft sensor, which can often point to an
analyzer that requires calibration.
A common application is in emissions monitoring with soft sensors for SOx
and NOx emissions.
In these applications, routine calibration checks with portable measuring
devices must be performed to insure the soft-sensor application is not
drifting. The most common causes for this drifting are process design
changes or changes in the operating range.
Continuous Digester
Within the pulp and paper industry, a continuous digester is used to con-
vert wood chips to pulp by chemically removing the lignin that holds the
cellulose fibers together. A typical continuous digester is shown in Figure
8-3. The purpose of using a neural network for this thermo-chemical pro-
cess is in the calculation of the degree of delignification as indicated by
Kappa number. Normally, Kappa number is determined by lab analysis of
a grab sample of the pulp downstream of the digester. Online measure-
[Figure 8-3: continuous digester. Wood chips pass from the chip bin through the heating zone (TT 1-3, TT 1-7) and wash zone (FT 1-5); the high-pressure feeder, flash tank, white liquor charge, and main and cold blow flows (FT 1-6, FT 1-2) are shown, with Kappa analysis (AY 1-2) at the outlet device]

Figure 8-4. Kappa Lab Analysis and Virtual Sensor Estimate of Kappa
Fermentation Process
Within the food and pharmaceutical industries, numerous products are
manufactured using a batch fermentation process. One of the challenges in
operating such a process is to know when a batch is complete and the
batch processing terminated. In many cases, it is possible to determine the
state of the fermentation by doing lab analysis of the batch after it is com-
plete. However, by using a history of lab analysis and information avail-
able during the batch, such as off-gas measurements, batch temperature,
and charge rate, it is possible to create a soft sensor to predict when a batch
is complete. A typical fermentation process is shown in Figure 8-5.
Figure 8-5. Fermentation Process for Which End of Batch Was Predicted
Using a Neural Network
[Figure: predicted versus actual batch completion time in hours, showing the soft sensor's predictions from the start of the batch]
Application
General Procedure
Often within the process industry you can find examples where a critical
control measurement is not available in real time. This may be because an
analyzer for online measurement is not available or because the infrequent
values and delays associated with a sampling system limit the control
response. In other cases, the analyzer is not reliable enough to use in
closed-loop control. As a result, it may take longer for correction to be
made by an operator to compensate for changes in the feedstock or in the
process. Such delay will increase the amount of process variation and thus
may directly affect product quality and production rate.
Some typical examples of soft sensor usage in the process control industry
are:
properly characterize the dynamic behavior, then plant test data may
improve the modeling effort. The modeling techniques used will most
often be the time series or regression modeling techniques used for
developing dynamic models for model-based controllers.
Once the dynamic models relating the inputs to the output have been
developed, then the feedback element is introduced. A feedback update
The amount of feedback taken is a function of the model quality and the
Monitor the feedback error to determine the estimator quality. The vari-
ability of the error can identify degradation in the model; integrating the
error can isolate model or analyzer drift; and the auto-correlation of the
error can identify opportunities for improvement. For example, strongly
auto-correlated model error over several time lags is a good indication
that the feedback correction factor should be increased.
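These three checks can be sketched directly; the function below is illustrative monitoring code, not part of any estimator product:

```python
import statistics

def error_diagnostics(errors):
    """Feedback-error health checks for an online estimator (a sketch)."""
    mean = statistics.fmean(errors)
    variability = statistics.pstdev(errors)   # growing spread: model degradation
    integral = sum(errors)                    # one-sided accumulation: drift
    centered = [e - mean for e in errors]
    den = sum(c * c for c in centered)
    num = sum(a * b for a, b in zip(centered, centered[1:]))
    lag1_autocorr = num / den if den else 0.0  # strong value: increase feedback
    return variability, integral, lag1_autocorr
```

A strongly positive lag-1 autocorrelation over a window of feedback errors is the signature described above, suggesting the feedback correction factor should be increased.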
The open structure of the linear estimator makes it very simple to apply
online gain scheduling. The individual gain term, for each input, can eas-
ily be updated. This will often allow the linear estimator to cover a nonlin-
ear operating range without the need to go to a more complex estimation
technique such as neural networks. This enhancement does, however,
require knowledge of the form of the nonlinearity.
Many times the linear dynamic estimator is developed and used within a
model-based predictive control application. In order to develop the
dynamic input/output relationship model for the constrained MPC con-
troller, the response of the estimator needs to be modeled against a step
change in the manipulated variable. If the dynamic linear estimator is
built and online before any process step testing is performed, then the task
is straightforward and we need only collect the estimator value and model
it directly with the manipulated variable.
Neural Network
Classical approaches to the problem of designing and training a neural
network for actual process conditions involved a sequence of complex
steps, which was often a grueling experience for the normal user. More
often than not it required a highly trained professional to create models.
These models also had the drawback of not being able to constantly adapt
to drifts in process inputs and other process conditions.
automatic adaptation of its prediction in response to changes in the process, as is
illustrated in Figure 8-7.
Figure 8-7. Soft Sensor Uses Upstream Measurements to Predict Process
Output Values
Application Detail
Consider the distillation process shown in Figure 8-1. The flow begins
with the inferential inputs, denoted as the variables AT 1-1, PT 1-2 and TT
1-2. The inputs must first be conditioned. The conditioning step either
applies a filter to remove measurement noise, applies a nonlinear transfor-
mation, or places a hold on the input to insure that inputs measured at dif-
ferent points in the process are all synchronized in time to some selected
physical point in the process at which we are estimating the property. In
our process shown in Figure 8-1, variable TT 1-2 may be conditioned with
a filter to remove measurement noise, variable PT 1-2 may require a loga-
rithmic transformation to linearize its relationship to the estimated prop-
erty, and the variable AT 1-1 may require a hold parameter to insure that a
change in AT 1-1 does not immediately reflect a change in the estimator at
the top of the tower. Conditioning the inputs for noise or dynamic posi-
tioning will insure that the data is properly time aligned and the modeling
effort can extract the true input/output relationship.
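The three conditioning operations can be sketched as small helpers; the filter constant and the logarithmic form for the pressure transform are illustrative assumptions rather than values from the text:

```python
import math

def noise_filter(raw, filtered_prev, alpha=0.2):
    """First-order (exponential) filter for a noisy input such as TT 1-2;
    alpha is an illustrative filter constant."""
    return alpha * raw + (1.0 - alpha) * filtered_prev

def pressure_transform(pt):
    """Assumed logarithmic transformation to linearize PT 1-2's relationship
    to the estimated property."""
    return math.log(pt)

def input_hold(new_value, held_value, hold_expired):
    """Hold an upstream input such as AT 1-1 until its effect has traveled
    to the point in the process where the property is estimated."""
    return new_value if hold_expired else held_value
```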
Y(t) = Y(t−1) + Σ (i = 1 to N) Ki ∆Ii + Kfbk (Y − Yfbk)

where:
Y(t) = estimator output at time t
Y(t−1) = estimator output at time t−1
N = number of inputs
Ki = gain for input i
∆Ii = change in input i
Kfbk = feedback gain
Yfbk = feedback element

Thus, for this example:

Y(t) = Y(t−1) + [(KT1-2 ∆T1-2) + (KP1-2 ∆P1-2) + (KA1-1 ∆A1-1)] + Kfbk (Y − Yfbk)
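One update of this estimator is easy to express in code. The gains and measurement changes below are hypothetical illustration values, and the feedback term is written as measured-minus-predicted so that a high analyzer reading pulls the estimate up:

```python
def estimator_update(y_prev, gains, input_deltas, k_fbk, y_fbk, y_fbk_model):
    """Dynamic linear estimator: feedforward on input changes plus
    feedback correction against the analyzer/lab value (a sketch)."""
    # Feedforward: sum of gain-weighted input changes
    y = y_prev + sum(k * d for k, d in zip(gains, input_deltas))
    # Feedback: bleed in a fraction of the model error
    y += k_fbk * (y_fbk - y_fbk_model)
    return y

# Hypothetical gains for TT 1-2, PT 1-2, AT 1-1 and one scan's input changes
y = estimator_update(y_prev=95.0,
                     gains=[0.8, -0.3, 0.5],
                     input_deltas=[0.2, 0.1, 0.0],
                     k_fbk=0.3,
                     y_fbk=95.5,        # measured feedback element
                     y_fbk_model=95.1)  # feedback model prediction
```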
Following the conditioning step, the measurements are input to the esti-
mator model block in order to predict the property. The inputs must also
be input to the feedback model block in order to generate a property that
is compared directly to the feedback element for feedback correction pur-
poses. In most cases, the steady-state behavior of the estimator model and
feedback model are identical. However, the dynamic components are
often different since the feedback element has both a delay and a lag that
must be captured in the feedback model, but is often not desired in the
estimator model. Removing the delay and lag from a predicted quality
parameter has many advantages, ranging from the ability to simplify the
control algorithm to being able to present to operators the value of key
properties at the point at which they are being controlled. Better under-
standing by the operator is often the difference between an APC applica-
tion that runs and one that is turned off.
[Figure: estimator structure. The TT 1-2, PT 1-2, and AT 1-2 inputs feed both the estimator model, which produces Y (estimated property), and the feedback model, which produces Y (estimated analyzer measurement) for comparison against the AT 1-2 feedback element]
The final element in the estimator structure is the feedback correction. The
output of the feedback model block generates a signal that is compared to
the feedback element. The difference is the modeling error due to noise,
unmeasured disturbance or model inaccuracy. This model error can then
be used to update both the outputs from the estimator model and the feed-
--`,```,,,```,`,````,``,`,,`,`-`-`,,`,,`,`,,`---
back model blocks in order to insure that the estimator tracks the feedback
element and does not drift. It is every engineer’s dream to create the per-
fect model, not requiring feedback, but every operating manager’s objec-
tive to quickly move forward with an estimator that is used to improve
performance. Remember that a little feedback can go a long way!
HINT: One of the big benefits of applying the linear estimator to MPC projects is
that the dynamics of the estimation process relative to the MPC controller are
handled rigorously, which is an important aspect of estimators for MPC control.
Neural Network
There are many industrial applications of neural networks, but the appli-
cations that have dominated the process industries are centered on infer-
ential properties estimators or soft sensors. The steps in building neural-
network soft sensors are well defined and the methodology proposed
below should assist in insuring that robust, high-quality estimators are
generated.
be introduced into our soft sensor and to better select the data
required for training. Process understanding must be applied at
each stage of the process to validate the results. Ultimately, it is
process understanding that will insure the robustness of the final
application.
Highly correlated inputs are a problem if they are all included in the
model, especially if there is no causal relationship. This is often the
case with historical data, as variables are moved manually by a plant
operator in an often systematic fashion. Consider a recently investigated
case in which a neural network was configured to predict a quality
parameter on a draw stream of the fractionator shown in Figure 8-12.
[Figure 8-12: fractionator with a top draw (FT 1-1, quality sample point AT 1-1, top end point composition) and a side draw (FT 2-1, quality sample point AT 2-1, side end point composition), with temperature measurements along the column and on the feed]
• Extrapolating to operating regions outside of data
The predictions of the online neural model are considered valid
only in the operating range for which the data was modeled.
As discussed above, it is therefore important to select the train-
ing data to cover the true operating range of the process.
Although the model may perform adequately outside this
range, there is no indication of prediction quality and these
results should be treated as suspect.
There are two special applications of online neural networks in the realm
of an APC application that warrant special attention. They are the applica-
tions of high-frequency neural predictions where the dynamics of the pre-
diction response are important and the development of a neural prediction
for subsequent closed-loop control.
inputs. The number of past values will depend upon the settling time of
the process. A second method involves delaying and filtering the input
values before introducing them into the input layer. This method requires
specific knowledge of the dead time and settling time of the inputs relative
to outputs. A third technique is to introduce a first-order filter at the out-
put of each hidden node. The filter factors are equivalent to time constants
or lags and are fit during network training. This architecture benefits from
the fact that no a priori knowledge of the dynamics is required and that it
can model the majority of nonlinear dynamic systems.
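The third technique can be sketched as a tiny network with a first-order filter on each hidden-node output; the weights, filter factors, and tanh activation here are illustrative assumptions, not trained values:

```python
import math

class FilteredHiddenNet:
    """Feedforward net with a first-order filter on each hidden node output,
    so the network carries its own (trainable) lags: a sketch of the third
    technique described above."""

    def __init__(self, w_in, w_out, filt):
        self.w_in = w_in             # one weight vector per hidden node
        self.w_out = w_out           # output weight per hidden node
        self.filt = filt             # filter factor per hidden node, in (0, 1]
        self.state = [0.0] * len(w_out)

    def step(self, x):
        """Advance one scan: activate each hidden node, filter it, sum outputs."""
        y = 0.0
        for j, (wj, a) in enumerate(zip(self.w_in, self.filt)):
            h = math.tanh(sum(w * xi for w, xi in zip(wj, x)))
            # First-order filter: equivalent to a lag, fit during training
            self.state[j] = a * h + (1.0 - a) * self.state[j]
            y += self.w_out[j] * self.state[j]
        return y
```

A node with a filter factor of 1 responds instantly, while smaller factors give the slower, lagged responses needed to match process settling times.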
There are two issues in developing neural networks for closed-loop con-
trol that must be understood. The first is changing behavior in the system
inputs due to a closed-loop controller being applied to a neural network
and the second is modeling correlation versus causality. In the first case,
we are dealing with the fact that neural-network estimators are built by
modeling the correlation of the input variables to the output variables,
with the sole objective to give the best fit. If closing the loop on the neural
network significantly changes the correlation structure of input informa-
tion going into the neural network, then the neural network may break
down, with little more notification than a poorly performing estimator
and even poorer control.
currently runs in open loop and the temperature and reflux flow are
selected as inputs to our neural network. It is important to note that if a
good estimator is generated, there is incentive to control this estimator to a
target value. A good neural model is developed with the temperature and
reflux as inputs. The loop is now closed on the neural-network estimator
by automatically adjusting the reflux. This change in controller configura-
tion will map the variability in the temperature onto the reflux variable,
significantly altering the behavior of the estimator input signals. This
change in the input signals will affect the performance of our neural-net-
work estimator and may render it useless as an estimator for closed-loop
control. Be aware that taking control action on the variables acting as
inputs in a neural network or on the neural estimator itself may signifi-
cantly affect the quality of the estimator generated.
Significant improvements have been made over the last ten years in the
tools used to develop neural networks for soft-sensor applications. These
tools are specifically designed to take into account the dynamic nature of
the processes for which a soft sensor is to be developed. The difference between the traditional approach and that available today is illustrated in Figure 8-13.
Figure 8-13. Traditional versus Modern Approach to Soft-Sensor Development. Traditional approach: obtain data from the lab data management system and format it into input-output files; graph input and output data; select a strategy for missing data; view outliers; run pre-processing; guess input inclusion/exclusion and delays; calculate sensitivities; design the number of hidden nodes; train the network; export the model and integrate the application into the control system. Modern approach: configure the NN block and download; automatic data collection; automatic pre-processing; automatic design of inputs and delays; automatic training; download the trained NN for on-line operation.
Rules of Thumb
Rule 8.1. — Neural networks excel as estimators for nonlinear heuristic
relationships, a large number of inputs, batch end points and cycle times,
and a wide variety of process dynamics.
Rule 8.2. — Dynamic linear estimators are best for a linear heuristic rela-
tionship with a small number of inputs, batch trajectories, complex tran-
sient behavior, and properties with a long settling time compared to the
analyzer cycle time or lab analysis time.
Rule 8.4. — Alignment of data is more critical for continuous and semi-
continuous (fed-batch) operation property estimation and fault detection
than for end-of-batch property estimation and long time intervals.
Rule 8.6. — Noise in inputs that appears in the analyzer should be left in
the delayed and filtered estimator output synchronized with the analyzer.
Rule 8.7. — Noise in inputs that is not in the analyzer reading should be
filtered out before being used in the estimator.
Guided Tour
This tour illustrates the potential ease of use and convenience of an inte-
grated interface that is possible for property estimation embedded in an
industrial control system. The following areas are addressed:
• Screening Data
• Examining Input Sensitivity
• Determining Input Delay
• Training the Neural Network
• Options in Training
Figure 8-15. Identifying Upstream Measurement
The user must provide an estimate of the time it takes for the process
inputs to affect the measurement. Once this information is provided and
the data selected, the neural network is automatically trained by the user
selecting “Autogenerate.” In response, a sensitivity analysis is automati-
cally done for the configured inputs to identify the contribution of each
input to the estimated parameter. Inputs that do not affect the measure-
ment are automatically eliminated from the neural network. The results of
the sensitivity analysis are presented to the user as illustrated in Figure
8-17.
An important part of the sensitivity calculation and the training of the net-
work is to accurately account for the delay between the input changing
and it having an impact on the estimated measurement. This delay is auto-
matically calculated by doing a cross-correlation between each input and
the sampled output values. The calculated cross-correlation and input delay
are shown in the detail view for the sensitivity of each input, as shown in
Figure 8-18:
As part of the training process, the number of hidden neurons that should
be used for best results is automatically determined. Part of the data is
used for training the neural network and part for testing. By comparing
the results for the two sets of data, it is possible to automatically determine
when to stop the training process, as shown in Figure 8-19.
Expert features allow the user to influence which inputs are used in the
creation of the neural network. For example, the user may choose
to calculate input sensitivity without training the neural network. When
this is done, the user may change the input delay or manually remove
inputs that were automatically selected, as shown in Figure 8-20.
After generating and examining the input sensitivity, the user may train
the neural network based on a selected input sensitivity. In response, the
expert is allowed to modify some of the parameters used in training, as
shown in Figure 8-21.
The default training parameter values will typically provide the best results.
However, options such as the training-test data split are helpful in addressing
special requirements associated with the test data.
Theory
Neural Networks
Neural networks (NN) are essentially nonlinear function approximators
that utilize process inputs to predict a process output. Much of the initial
neural network research and development done some fifty years ago
focused on emulating the network of neurons that make up the human
brain. Significant progress came with the development of the feedforward
NN with backward propagation training in 1986 [8.12]. Since that time,
much work has been done to adapt this technology for use in the process
industry [8.1, 8.2, 8.3]. The technical promise of neural-network technology
comes from the fact that a multi-layer network with a single hidden layer is
a universal approximator that can approximate any continuous function to
any desired degree of accuracy [8.4, 8.5].
Soft sensors that utilize neural networks must be adapted to the special
requirements of the process industry. In particular, it is necessary to
compensate for the delay between changes in upstream conditions and their
effect on the process output. Thus, a neural network typically has one output (the predicted
variable) and any number of upstream measurements as inputs with com-
pensation of process delay. Figure 8-22 shows a three-layer feedforward
NN.
Figure 8-22. Three-Layer Feedforward Neural Network, with inputs X1 ... XN, input delays Td1 ... TdN, hidden nodes hj with summed inputs Sj and weights wij, and output y.
For a sigmoidal hidden neuron, the summed node input Sj, and the output
hj, are given by the following formula:
hj = (1 − e^(−Sj)) / (1 + e^(−Sj)),  where Sj = Σi wij xi     (8-2)
Typically the output layer has a linear activation function, that is, a sum-
mation of the inputs to the output layer.
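A minimal sketch of this forward pass, using the hidden-node activation of Equation 8-2 and a linear output layer, might look as follows (the weights are arbitrary illustrative values):

```python
import math

def hidden_activation(s):
    # Sigmoidal hidden neuron of Equation 8-2: (1 - e^-S) / (1 + e^-S),
    # which is mathematically equivalent to tanh(S / 2)
    return (1.0 - math.exp(-s)) / (1.0 + math.exp(-s))

def forward(x, w_hidden, w_output):
    # w_hidden[j][i] is the weight wij from input i to hidden node j
    hidden = []
    for weights in w_hidden:
        s_j = sum(w * xi for w, xi in zip(weights, x))  # S_j = sum_i wij xi
        hidden.append(hidden_activation(s_j))
    # Linear output layer: a weighted summation of the hidden node outputs
    return sum(w * h for w, h in zip(w_output, hidden))

# Illustrative 2-input, 3-hidden-node, 1-output network
w_hidden = [[0.5, -0.2], [1.0, 0.3], [-0.7, 0.8]]
w_output = [0.4, -0.1, 0.6]
y = forward([1.0, 2.0], w_hidden, w_output)
```

In practice the input values would already have been delayed as described above before reaching the input layer.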
Developing a neural network involves the following steps:
• Collecting historic data on the process inputs and the process output
measurement for screening of the potential inputs to the neural network;
this data is also needed to determine the neural network structure and the
values of parameters used in the neural network
• Identifying the delay between each input and its impact on the
process output predicted by the neural network
• Determining which of the process inputs significantly affects the
process output through a calculation of input sensitivity
• Determining the weighting factors and number of neurons
included in the hidden layer for best results
• Validating the network.
Data Collection
Data collection is one of the most critical steps in the development
of a neural network. When collecting data to be used to train the
neural network, it is important that the inputs and outputs vary over their
normal operating range. If the process output is available only as a lab
analysis, then this data must be merged with historical data on the input
measurements to allow further analysis. Some simple rules that should be
observed in the data collection are:
• Only if the inputs change during the time that data is collected will
their effect on the process output be identifiable.
• Outliers should be excluded from the data set. A rule of
thumb for valid data limits is Mean +/- 3.5 * Standard deviation,
which includes approx. 99.9% of the data in the given region.
Also, it is important that the user be able to flag regions of the historic data
to indicate periods of abnormal operation so this data, for all inputs and
outputs, will not be used in the development of the network.
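The 3.5-sigma screening rule above can be sketched in a few lines (the data values are invented for illustration):

```python
# Sketch: flag samples outside mean +/- 3.5 standard deviations as outliers,
# the rule of thumb for valid data limits noted above.
import statistics

def screen_outliers(values, n_sigma=3.5):
    mean = statistics.fmean(values)
    sigma = statistics.pstdev(values)
    lo, hi = mean - n_sigma * sigma, mean + n_sigma * sigma
    return [v for v in values if lo <= v <= hi]

# Twenty samples near 10.0 plus one gross outlier at 55.0
data = [9.8, 10.2, 10.0, 9.9, 10.1] * 4 + [55.0]
clean = screen_outliers(data)
```

A production tool would apply this per variable and, as noted above, also honor user-flagged periods of abnormal operation.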
Figure 8-23. Cross-Correlation between an Input X(t) and the Output Y(t)

Cxy(K) = [ (1/N) Σ i=1..N−K (Xi − X̄)(Yi+K − Ȳ) ] / (σx σy)

where N = number of samples and K = 0, 1, 2, ..., N − 1.
The time shift, K, between the input and the output that produces the max-
imum cross-correlation coefficient is used as the input delay that should
be introduced into the input processing of the neural network, as shown in
Figure 8-24.
The cross-correlation value indicates the magnitude (and sign) of the effect
of the input on the output. For example, for a simple first-order process, an
input delayed by approximately (dead time + time constant/2) has the most
relevance, as the maximum correlation value occurs at that delay. Once the
most significant delay is known, the input data is shifted by that delay.
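A sketch of this delay identification, computing the cross-correlation coefficient at each candidate shift and taking the shift with the maximum value (the signals below are synthetic):

```python
# Sketch: compute the cross-correlation coefficient Cxy(K) for each shift K
# and use the K with the maximum value as the input delay.
import statistics

def cross_correlation(x, y, k):
    n = len(x)
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sx, sy = statistics.pstdev(x), statistics.pstdev(y)
    # (1/N) * sum over i of (Xi - mean)(Yi+k - mean), normalized by sigmas
    num = sum((x[i] - mx) * (y[i + k] - my) for i in range(n - k)) / n
    return num / (sx * sy)

def find_delay(x, y, max_shift):
    scores = [cross_correlation(x, y, k) for k in range(max_shift + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

# Synthetic example: the output y is the input x delayed by 4 samples
x = [0, 0, 1, 3, 2, 0, -1, 0, 2, 4, 1, 0, -2, 0, 1, 2]
delay = 4
y = [0] * delay + x[:-delay]
k_best = find_delay(x, y, max_shift=8)
```

For the synthetic signals, the correlation peaks at the true shift of four samples.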
Figure 8-24. Cross-Correlation Coefficient Cxy as a Function of Time Shift K
Input Sensitivity
In the initial definition of the neural network, it may not be possible to
know which of the upstream measurements influence the process output
to be predicted by the soft sensor. Only those that have a significant
impact should be included in the network. The sensitivity is defined as the
change in dependent variable (output) y for a unit change in independent
variable (input) x, or, mathematically:
Sxy = (∆y / y) / (∆x / x)     (8-3)
The initial sensitivity estimate is calculated from a simplified linear model,
prior to developing the NN model. A linear model for computing the sen-
sitivities may be obtained using the standard PLS algorithm, for example.
The principle of PLS is outlined later in this section; for more detail there
are several good references [8.7, 8.8]. The delayed input values and process
output are used in the development of this model. Using this model, the
input sensitivity is calculated by changing each input by a unit while all
other inputs are kept constant, and determining the output change.
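Assuming a linear model has already been fitted (the coefficients below are illustrative, not a real PLS fit), the sensitivity calculation reduces to perturbing one input at a time:

```python
# Sketch: input sensitivity from a fitted linear model. Each input is changed
# by one unit while the others are held constant, and the output change is
# observed. The coefficients are invented for illustration, not a PLS fit.

def linear_model(x, coef, intercept=0.0):
    return intercept + sum(c * xi for c, xi in zip(coef, x))

def sensitivities(model, x0):
    y0 = model(x0)
    result = []
    for i in range(len(x0)):
        x = list(x0)
        x[i] += 1.0                      # unit change in input i
        result.append(model(x) - y0)     # resulting change in the output
    return result

coef = [2.0, -0.5, 0.0]                  # third input has no effect
model = lambda x: linear_model(x, coef, intercept=1.0)
s = sensitivities(model, x0=[1.0, 1.0, 1.0])   # -> [2.0, -0.5, 0.0]
```

An input whose sensitivity is near zero, like the third one here, is a candidate for automatic elimination from the network.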
The network is trained to minimize the prediction error. For each data point p, the squared prediction error is

Ep = (yp_pred − yp)^2     (8-4)

and the cumulative error over all data points is

E = Σ Ep, summed for p = 1 to Ndata     (8-5)
By incrementing the weighting factor Wij associated with input i and node j
by the value ∆Wij and observing the change in cumulative error E [see Figure
8-22], we can determine the gradient ∂E/∂Wij, or in finite differences, ∆E/∆Wij.
The gradient defines the change in the cumulative error for a unit change of the weight Wij.
The back propagation algorithm [8.6] is used to calculate new weights that
minimize the cumulative error in the direction of negative gradient (steep-
est descents) for each pass through the data set (an epoch):
Wij(new) = Wij(old) − α ∂E/∂Wij     (8-6)
where α defines the step size of change in the gradient direction, more
popularly known as the learning rate. The speed of convergence is
improved by modifying the back propagation algorithm to incorporate the
conjugate gradient technique as described by Brent [8.15]. Rather than use
a fixed step size, the new search direction is based on a component of the
previous direction.
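The basic steepest-descent update of Equation 8-6 can be sketched with a finite-difference gradient on a toy one-weight model (illustrative only; production training uses back propagation and the conjugate gradient refinement described above):

```python
# Sketch of Equation 8-6: estimate the gradient of the cumulative error E by
# finite differences and step the weight in the negative gradient direction.
# The one-weight "network" here is purely illustrative.

def cumulative_error(w, data):
    # E = sum over samples of (prediction - target)^2, with prediction = w * x
    return sum((w * x - y) ** 2 for x, y in data)

def train(w, data, alpha=0.05, epochs=200, dw=1e-6):
    for _ in range(epochs):
        # Finite-difference gradient: dE/dW ~ (E(W + dW) - E(W)) / dW
        grad = (cumulative_error(w + dw, data) - cumulative_error(w, data)) / dw
        w = w - alpha * grad             # W_new = W_old - alpha * dE/dW
    return w

data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]   # roughly y = 2x
w = train(0.0, data)                           # converges near w = 2
```

Each pass over the data set corresponds to one epoch; the weight settles at the least-squares value.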
During the training of the neural network, the cumulative error for the
training set of data will decrease monotonically, approaching a constant
value in an asymptotic manner, as shown in Figure 8-25. However, if the
cumulative error is calculated using a validation data set not used in train-
ing, then at some point the error will begin to increase. Past this point
where the training and validation error begin to diverge, the neural nets
start learning features specific to the training data set rather than the gen-
eral process. The goal of training, though, is to learn to predict the output
given real process inputs, and not just to memorize the training set. This is
known as generalization.
Figure 8-25. Generalization Requirements: Training-Set and Validation-Set Error versus Number of Epochs, with the Best Training Epoch Marked
To detect over-training, a certain portion of the data set is kept aside for
validation during the training phase. This is called the test set. At each
epoch, while the weights are modified based on the error on the training
set, the test set is used to detect when overfitting of the training set has
occurred. At each epoch, the error on the test set (test error) for the new set
of weights is calculated and compared with the best test error. In order to
prevent training from stopping at some random choice of weights for
which the test error turns out to be small, the algorithm runs for at least a
fixed number of epochs before establishing any minima. Also, the training
error is added to the test error to define a stringent minimum total error
condition. In this way it is made sure that both the data sets have accept-
able errors when the algorithm converges, and the weights at the best
epoch are picked at the end of training.
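The stopping logic described above might be sketched as follows; the error curves are stand-ins chosen to mimic the typical shapes, not output of a real training run:

```python
import math

def train_with_early_stopping(epochs=100, min_epochs=10):
    # Stand-in error curves for illustration: training error decreases
    # monotonically, while test error reaches a minimum and then rises
    # as the network starts to memorize the training set.
    def train_error(e):
        return 2.0 * math.exp(-e / 20.0)

    def test_error(e):
        return 1.0 + (e - 30) ** 2 / 100.0

    best_total, best_epoch = math.inf, None
    for epoch in range(epochs):
        # Stringent criterion: training error added to the test error
        total = train_error(epoch) + test_error(epoch)
        # Run at least min_epochs before establishing any minimum
        if epoch >= min_epochs and total < best_total:
            best_total, best_epoch = total, epoch
        # (A real implementation would also save the weights at best_epoch.)
    return best_epoch

best = train_with_early_stopping()
```

With these curves the best epoch lands just past the test-error minimum, because the still-falling training error is included in the total.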
Two approaches are used to calculate the correction factor that must be
applied to the NN prediction. Both are based on calculation of the predic-
tion error using the time-coincident difference between the uncorrected
predicted value and the corresponding measurement value. Depending on
the source of the error, a bias or a gain change in the predicted value is
appropriate. To avoid making corrections based on noise or short-term
variations in the process, the calculated correction factor should be limited
and heavily filtered; for example, equal to twice the response-time horizon
for a change in a process input. During those times when a new process
output measurement is not available, the last filtered correction factor is
maintained.
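A minimal sketch of the bias form of this correction, with an assumed filter factor and correction limit:

```python
# Sketch of bias correction: when a new lab/analyzer value arrives, compare
# it with the time-coincident uncorrected prediction, limit the raw error,
# and heavily filter it. The filter factor and limit are assumed values.

class BiasCorrector:
    def __init__(self, filter_factor=0.1, max_correction=5.0):
        self.alpha = filter_factor       # heavy filtering -> small factor
        self.limit = max_correction      # clamp against noise / short upsets
        self.bias = 0.0                  # last filtered correction is held
                                         # when no new sample is available

    def new_sample(self, lab_value, coincident_prediction):
        raw = lab_value - coincident_prediction
        raw = max(-self.limit, min(self.limit, raw))   # limit the correction
        self.bias += self.alpha * (raw - self.bias)    # first-order filter

    def correct(self, prediction):
        return prediction + self.bias

bc = BiasCorrector()
bc.new_sample(lab_value=51.0, coincident_prediction=50.0)   # error of 1.0
y = bc.correct(52.0)   # current prediction plus the filtered bias
```

A gain-based correction would scale the prediction instead of shifting it; the choice depends on the source of the error, as noted above.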
Figure. Compensation of the Neural Network Prediction — upstream measurements drive the feedforward neural network model to produce an uncorrected prediction; the sample value from the lab or analyzer, compared with the sample-delayed prediction and filtered, supplies the correction used to form the compensated prediction.
State-of-the-Art Implementation
The following features should be provided in a dynamic linear estimator.
References
1. Mehta, A., Ganesamoorthi, S., and Wojsznis, W., “Feedforward Neural
Networks for Process Identification and Prediction,” in Proceedings of ISA
Expo/2001, September, 2001, Houston, TX.
2. Tzovla, V. and Mehta, A., “Automated Approach to Development and Online
Operation of Intelligent Sensors,” in Proceedings of ISA Expo/2001, September,
2001, Houston, TX.
3. Ganesamoorthi, S., Colclazier, J., and Tzovla, V., “Automatic Creation of
Neural Nets for Use in Process Control Applications,” in Proceedings of ISA
Expo/2000, 21-24 August 2000, New Orleans, LA.
4. Qin, S.J., “Neural Networks for Intelligent Sensors and Control – Practical
Issues and Some Solutions,” in Progress in Neural Networks: Neural Networks for
Control, edited by D. Elliott, New York: Academic Press, 1995.
5. Morrison, S. and Qin, J., “Neural Networks for Process Prediction,”
Proceedings of the 50th Annual ISA conference, New Orleans, 1994, pp. 443-450.
6. Fisher Rosemount Systems, Installing and Using the Intelligent Sensor Toolkit,
User Manual for the ISTK on PROVOX Instrumentation, 1997.
7. Masters, T., Practical Neural Network Recipes in C++, Academic Press, London,
UK, 1993.
8. Hornik, K., Stinchcombe, M., and White, H., “Multilayer Feedforward
Networks Are Universal Approximators,” Neural Networks, 2:5, 1989, pp. 359-
366.
9. Cybenko, G., “Approximation by Superpositions of a Sigmoidal Function,”
Mathematics of Control, Signals, and Systems, vol. 2, 1989, pp. 303-314.
10. Press, W. H., Flannery, B. P., Teukolsky, S. A., and Vetterling, W. T., Numerical
Recipes in C, Cambridge University Press, New York, 1988.
11. Wold, S., Hellberg, S., Lundstedt, T., Sjöström, M., and Wold, H., “PLS
Modeling with Latent Variables in Two or More Dimensions,” Proceedings,
Frankfurt PLS Meeting, September, 1987.
12. Rumelhart, D. and McClelland, J., Parallel Distributed Processing, MIT Press,
Cambridge, MA, 1986.
13. Geladi, P. and Kowalski, B., “Partial Least Squares Regression: A Tutorial,”
Analytica Chimica Acta, 185, 1986, pp. 1-17.
14. Polak, E., Computational Methods in Optimization, Academic Press, New York,
1971.
15. Brent, R., Algorithms for Minimization without Derivatives, Prentice-Hall,
Englewood Cliffs, NJ, 1973.
16. Blevins, T., Wojsznis, W., Tzovla, V., and Thiele, D., “Integrated Advanced
Control Blocks in Process Control Systems,” US Patent Application, Fisher
File No: 59-11211.
17. Blevins, T., Tzovla, V., Wojsznis, W., Ganesamoorthi, S. and Mehta, A.,
“Adaptive Process Prediction,” US Patent Application, Fisher File No: 59-
11243.
Practice
Overview
Model predictive control has been proven by a number of companies to
provide benefits, in the right applications, that are greater than those achieved
from the improvement of basic control systems [9.17]. The greatest bene-
fits are realized in applications with dead-time dominance, interactions,
constraints, and the need for some optimization. The constraints define
the operating limits of processes or equipment. Optimization is often as
simple as the maximization or minimization of a flow. The advantage of
model predictive control lies in its knowledge of the effects of past actions
of manipulated and disturbance variables on the future profile of con-
trolled and constraint variables.
For systems with large dead times, interactions, and multiple constraints,
the ability to provide the necessary patience, anticipation, and forecast of
the future approach to targets and limits is essential for moving an opera-
tion closer to its optimum. “A good control engineer can draw straight
lines; a great one can move the lines.” Model predictive control is a tool
that can make a good control engineer a great one.
Many good review papers [9.1], [9.2], [9.3] and reports [9.4], [9.5] have
been published on model predictive control (MPC). In contrast, there are
only a few books dedicated to the subject [9.11], [9.12]. The literature pub-
lished to date focuses on the theory or on an overview of the application
and benefits of MPC. Information on how to design, build, commission,
and tune an MPC controller has been lacking.
This chapter is structured in such a way as to fill the gap and to benefit an
average control engineer who wants to refresh his knowledge on MPC or
has not yet had a chance to learn about MPC and wants to understand
MPC basics. It also opens the door for potential users of new, small and
easy-to-use MPC products, designed to be applied by a broad spectrum of
process and process control engineers [9.8], [9.10], [9.18].
In MPC, the control error is not a single value, as in a PID loop, but a vector
of error values from target for a time period from the present to some set
time in the future, usually defined to cover the settling time of the process.
Figure 9-1. MPC Controller: the setpoint (SP) prediction vector and the controlled variable (CV) prediction form a future error vector; the control algorithm computes the manipulated variable (MV) applied to the process, which is also subject to a disturbance variable (DV); the model output, corrected by the measured CV, forms the CV prediction.
Figure 9-1 shows an MPC controller for a process with two inputs and one
output, in a form that allows one to see the analogy with a typical feed-
back control loop. The process has a manipulated variable (MV) and a dis-
turbance variable (DV) on the input and a controlled variable (CV) on the
output. A simple MPC controller used in this configuration has three basic
components:
1. A process model that predicts the process output for 120 or more
scans ahead. (For consistency, we use a prediction horizon of 120
scans throughout this discussion.)
2. A future trajectory of the set point for the same number of scans as
the trajectory of the predicted process output.
Figure 9-2. Predicted Trajectory Errors between the Setpoint and the CV Prediction, and the Future Calculated MV Moves
The MPC matrix is developed from the dynamic model of the future
trajectory of the process, formed from step responses. A specific gain is
used for every element of the error vector. For a simple process with just
one MV and one CV, the MPC matrix has 120 gain coefficients. In the MPC
algorithm, the predicted errors are multiplied by the appropriate gains and
all the terms are summed together to compute an increment of the
controller output. The prediction is adjusted every scan by an actual
measurement.
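For the simplest one-MV, one-CV case with a control horizon of one move, this computation can be sketched as follows (a toy illustration with invented step-response coefficients, not any vendor's algorithm):

```python
# Sketch: compute the MPC controller output increment as a weighted sum of
# the predicted error vector. With a control horizon of one move, the
# least-squares gains are k_i = a_i / sum(a_j^2), where a_i are the unit
# step-response coefficients of the process model. Illustrative values only.

def mpc_gains(step_response):
    denom = sum(a * a for a in step_response)
    return [a / denom for a in step_response]

def mpc_move(gains, predicted_errors):
    # Predicted errors multiplied by the gains and summed into one increment
    return sum(k * e for k, e in zip(gains, predicted_errors))

# First-order-like unit step response over a short prediction horizon
step_response = [0.0, 0.2, 0.5, 0.8, 0.9, 1.0, 1.0, 1.0]
gains = mpc_gains(step_response)

# If the CV is predicted to sit 2.0 below target over the whole horizon,
# the computed increment drives the output toward closing that error.
errors = [2.0] * len(step_response)
du = mpc_move(gains, errors)
```

A real controller spreads the correction over several future moves and repeats the calculation every scan, implementing only the first move.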
• The error is not simply the measurement subtracted from the set
point value for the current and previous scans, as in a classical PID
feedback controller, but a vector of predicted future errors.
• The increments in the controller output required to bring the
process output trajectory as close as possible to the set point
trajectory are limited and spread over several moves into the future.
Only the first move is implemented, however, and the computation
procedure is corrected and repeated for the next scan.
• Disturbances with the proper dynamics based on identified models
from process step responses are included in the prediction of the
process output. In comparison, the feedforward signal in a basic
control system provides a single current value. Some simple
dynamic compensation is applied to this value but the lead and lag
times and time delays are often adjusted online by trial and error
rather than being computed. Since a simple feedforward signal
added to a PID controller output has no knowledge of where the
process is headed in the future, it may be taking the wrong action.
Some special logic is added to anticipate this problem by looking at
the slope of the controlled variable, but the correction is crude and
incomplete.
Opportunity Assessment
An MPC controller can simultaneously adjust multiple manipulated
variables to push capacity or maximize efficiency but will also back off the production
rate or sacrifice efficiency when absolutely necessary. Constraint control
can correct for the projected effect of costly excursions caused by unusual
upsets or failures that cannot be handled by conservative set points alone.
The questions to ask about constraints (process or equipment limits) are:
1. Are there limits to operating values that are important for product
quality, process efficiency, or for environmental, personnel, and
property protection?
2. Can these limits be measured online, analyzed in the lab, or
calculated?
3. Has there been down time attributable to violations of these limits?
This can show up as an increase in the maintenance cost or number
of failures of equipment, a decrease in the run time between
catalyst replacement or regeneration, a decrease in the run time
between clean-outs or defrosts from a faster rate of fouling or
coating, and trips from interlocks for personnel and property
protection.
4. Has product been downgraded, recycled, returned, or discarded as
the result of excursions beyond these limits?
5. Would operation closer to a limit significantly decrease utility use?
6. Would operation closer to a limit significantly decrease raw
material use?
7. Would operation closer to a limit significantly increase production
rate?
8. Have there been any environmental violations or near misses?
9. Does the operator pick set points to keep operating points away
from limits?
10. Is there a batch operation with a feed rate that depends upon a
process variable, where the batch time could be reduced by an
increase in a feed rate by operating closer to process or equipment
limits?
If the answers to questions (1) and (2) are Yes, there is definitely a potential
application for model predictive control to optimize a process and assist,
but not replace, a safety interlock system. MPC can improve the efficiency
and capacity of a plant besides reducing the number of interruptions of a
batch and trips by an interlock system that is the last line of defense. If the
limit (constraint) is analyzed in the lab or calculated, a dynamic linear
estimator or virtual sensor may be developed to put the constraint online
as part of MPC. Standard operating procedures, equipment design
If the answers to any of questions (3) to (8) are Yes, the benefits are signifi-
cant. If the answer to (9) is Yes, the benefits are usually larger than esti-
mated.
The next-largest opportunity for MPC exists when there are interactions.
A good operator and engineer can reason through the relationships for
two manipulated and controlled variables. However, even the most atten-
tive person cannot do this for every scan and cannot consistently or accu-
rately forecast future effects.
Feedback correction from the process measurements accounts for model
mismatch and adds robustness. The questions to ask about interactions are:
11. Are there more than two controlled or constraint variables affected
by more than one manipulated variable?
12. Are these controlled or constraint variables important?
13. Do the controlled variables have the same order of magnitude lag
and delay?
14. Can the PID controllers effectively use rate action?
If the answer to question (14) is No, it is a sure sign that there is excessive
interaction, noise, or time delay, which is the next major reason you might
consider MPC.
Table 9-1. How Effectively Can Operators and Control Systems Hold Coincident
Constraints?

Number of Coincident   % Time Operator   % Time Override   % Time MPC
Constraints            Can Hold          Can Hold          Can Hold
None                   70%               10%               2%
One +                  30%               90%               98%
Two +                  20%               45%               90%
Three +                0%                30%               80%
If a control loop has a time delay larger than the time lag, it is called dead
time–dominant and is a candidate for MPC. The abrupt delayed response
is difficult for PID controllers to handle effectively. Gain and rate action
has to be minimized and mostly reset action is used in the tuning of the
controller. The questions to ask about time delay are:
15. Is the time delay more than ¼ of the total time to steady state or
time to reach 98% of the change (T98)?
16. Is a chromatograph used for a controlled or constraint variable?
If the answer to question (15) or (16) is Yes, the total time delay is large
and the loop is a strong candidate for MPC.
For loops with a time lag (τ) much larger than the time delay (τd), theoret-
ically the PID controller could be tuned for superior performance by over-
driving its controller output through high gain and rate action. However,
analog-to-digital converter noise from large temperature scale ranges and
the irregular or erratic response of analyzers preclude the use of large PID
controller gain and rate settings [9.18] [9.19]. Also, the time required to
tune these controllers tests the patience of most mortals to the point where
mistakes are made. Consequently, in practice an MPC controller can do as
well as or better than a PID controller for these loops dominated by a large
time lag.
For fast unmeasured upsets, control action depends mostly on the
current direction of the process variable, and knowledge of the past moves
of the manipulated variable is less important. The best solution might still
end up as an MPC controller with signal characterization for linearization
and some further investigation and mitigation of the sources of the upsets.
PID is also effective when tight level control is important and there is no
inverse response or excessive time delay. A PID controller might provide
the best control of a reactor’s residence time and a distillation column
overhead receiver’s level. However, an MPC controller would provide the
most effective level control of a surge volume because the predicted
approach to level limits is employed to smooth the manipulated flow. A
proportional-only controller reacts only to current deviations and noise
and does not know whether the current situation is temporary. The MPC
sees the effect of the net imbalance in flows over the time horizon and
schedules several gradual moves.
Often, not enough is known about the parameters in the equations to
provide an accurate estimate, and the feedforward gain ends up being
tuned online. Even less is
normally known about the feedforward dynamics. Consequently, no
dynamic compensation is used, or a simple lead-lag function is adjusted to
slow down or speed up the feedforward signal. The MPC controller pro-
vides the proper dynamics by identifying and using a model of the effect
of the disturbance on each controlled and constraint variable. It is also able
to predict the future effect of the upsets.
If the answer is Yes to question (17) and Yes to either question (18) or (19),
the predictive capability of MPC should be used to provide a feedforward
correction of the size and timing to minimize present and future errors
and violations.
A less common but still important situation that can benefit from MPC is
the occurrence of inverse response, where the initial response is the
opposite of the final response.
20. Are there any loops where the initial response is opposite to the
final response?
Examples
In the mid-1990s it was reported that there were more than 2,200 MPC
applications in industry, mostly in refining and petrochemicals. A current
estimate could be triple that number, with the number of new MPC appli-
cations growing rapidly. MPC applications are expanding in such areas as
chemicals, pulp and paper, gas processing, food processing, and metal-
lurgy. The applications range in size from simple controllers with only a
few manipulated variables, controlled variables and maybe 1 or 2 con-
straints, to very large-scale applications encompassing up to 100 manipu-
lated variables and sometimes up to 200 output variables. These larger-
scale controllers are designed to meet the control objectives of an entire
processing unit as well as to provide simple optimization.
As the cost of the commercial MPC tool sets has come down and the
implementation knowledge has broadened, the application of MPC tech-
nology is more easily justified in smaller industrial applications. Although
the large-scale refining applications still represent the lion’s share of
today’s applications, it is the smaller applications and the untapped indus-
trial sectors of mining, metals, food and pulp and paper that represent the
areas of new growth for MPC technology.
Figure 9-3a. Distillation Tower A
Figure 9-3b. Distillation Tower B
The main manipulated variables for composition control are the steam rate
to the reboiler and the reflux flow. Currently the reflux rate is in closed-
loop control with the overhead temperature. The tower pressure is con-
trolled by manipulation of the overhead vapor rate of the non-condens-
ables. Plant economics dictate that the feed to tower A should be
maximized. The primary operating constraints are tower A flooding, indi-
cated by a maximum tower differential pressure, and tower A overhead
cooling, indicated by a maximum reflux flow or maximum reflux tempera-
ture. Also, there are benefits associated with minimizing tower A pressure
in order to optimize energy efficiency by reducing steam utilization in the
reboiler.
• Feed maximization
• Energy minimization.
Before we proceed with the design of an MPC system for our distillation
problem, it is important to reflect on exactly what benefits MPC offers to
insure that the MPC technology is being properly utilized. The primary
benefits of MPC technology are:
It is now possible to build the first phase of our MPC application as a sin-
gle input–single output MPC controller with predictive capabilities, as
shown in Figure 9-4. It is interesting to note that with MPC control, the
reboiler rate will come to its final resting value long before the change in
the product composition is ever seen by the analyzer, which clearly shows
the advantage of the predictive capability. Any error in the predictive
model is captured by the analyzer and corrected by feedback.
Figure 9-4. Distillation Tower MPC, Version 1 (manipulated: FC 1-3 reboil rate; controlled: AT 2-2 A btms comp)
In the past, the regulatory loops were most often broken, with the MPC
being designed to manipulate the slave elements. However, there are often
benefits in retaining the existing regulatory controls within the MPC struc-
ture. For example, the regulatory controls, such as the overhead tempera-
ture controller, can provide fast disturbance rejection in between MPC
controller executions, adding to the stability of the application. Careful
thought should always be given to the question of removing existing regu-
latory controls from the MPC. In our illustrative example, we will retain
the overhead temperature control loop (TC 1-2) and select the top temper-
ature set point as the second manipulated variable in our MPC controller.
Version 2 of our MPC controller is shown in Figure 9-5.
Figure 9-5. Distillation Tower MPC, Version 2 (manipulated: FC 1-3 reboil rate, TC 1-2 top temp; controlled: AT 2-2 A btms comp, AT 1-2 A ovhd comp)
The MPC controller now has 2 manipulated variables, reboiler flow and
top temperature, and 2 controlled variables, tower A overhead heavy key
and tower A bottoms light key (as measured in tower B overheads product
stream). This 2 input–2 output system will require 4 dynamic models.
These models, as shown in Figure 9-5, define the dynamic response over
time of each controlled variable subject to a unit change in each
manipulated variable. Embedding these models in the controller insures
that responses in time are decoupled and that the controller can use both
manipulated variables to better achieve its control objectives.
The success of the application will usually be dictated by how well our MPC controller can operate near the process constraints.
Figure 9-6. Distillation Tower MPC, Version 3 (manipulated: FC 1-3 reboil rate, TC 1-2 top temp; controlled: AT 2-2 A btms comp, AT 1-2 A ovhd comp; constraints: FC 1-2 reflux rate, TT 1-4 reflux temp, TT 1-5 tower A temp)
The MPC application for our distillation tower process is now a true con-
strained multivariable predictive controller. The application is further
improved by incorporating feedforward action into the MPC controller.
Fluctuations in the reboiler steam header pressure (PT 1-3) will affect the
reboiler capacity and introduce an unmeasured disturbance into the
tower. This disturbance will affect tower A bottoms composition which
can only be handled through feedback control once the disturbance is
detected in the analyzer located on tower B overhead product stream. The
steam header pressure is introduced into the MPC controller as a
feedforward disturbance variable, as shown in Figure 9-7.
Figure 9-7. Distillation Tower MPC, Version 4
One of the objectives of our MPC controller is to maximize the feed rate.
Past experience with our distillation tower has shown that maximum feed
rate is most often limited by either tower A overhead cooling capacity or
tower A flooding, as measured by the differential pressure (DP 1-1). Thus,
the feed rate and tower delta P are added to our MPC in an effort to maxi-
mize feed rate. The proposed MPC matrix now has 3 manipulated vari-
ables, 2 control targets and 4 constraints. Assuming that none of the
constraints are active, we now have an underspecified system with more
control knobs (3) than objectives (2), as shown in Figure 9-8.
Figure 9-8. Distillation Tower MPC, Version 5 (manipulated: FC 1-3 reboil rate, TC 1-2 top temp, FC 1-1 feed rate; disturbance: PT 1-3 steam pressure; controlled: AT 2-2 A btms comp, AT 1-2 A ovhd comp; constraints: FC 1-2 reflux rate, TT 1-4 reflux temp, TT 1-5 tower A temp, DP 1-1 delta P)
To meet the objective of maximizing feed rate, a ramper is implemented on
the feed rate such that this variable is maximized until one of the con-
straints becomes active. The active constraint then becomes the third con-
trol objective and the system is now fully specified with no degrees of
freedom available. This is achieved by inserting a model with a unity gain
between the feed rate and feed-rate target, allowing the feed-rate target to
be ramped up from its current value. A ramper optimizer structure within
an MPC controller is very simple to understand and implement, and is
shown in Figure 9-9.
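The ramper logic described above can be sketched in a few lines of Python. The tag names, ramp step, and constraint values here are hypothetical illustration values, not from the text.

```python
# Minimal sketch of the ramper: each MPC execution the feed-rate target is
# incremented until a constraint variable reaches its limit, at which point
# the active constraint becomes the third control objective and the target
# holds. All names, values, and limits are hypothetical.

def ramp_feed_target(feed_target, constraints, limits, ramp_step=0.5):
    """Return the new feed-rate target for this execution."""
    for name, value in constraints.items():
        if value >= limits[name]:
            return feed_target          # constraint active: hold the target
    return feed_target + ramp_step      # no active constraint: keep pushing

# Ramp the feed target until the tower delta-P constraint becomes active.
target, dp = 100.0, 0.80                # delta-P in hypothetical units
while dp < 1.0:
    target = ramp_feed_target(target, {"delta_p": dp}, {"delta_p": 1.0})
    dp += 0.05                          # assumed process response to feed
print(round(target, 1))  # -> 102.0
```

In a real MPC the unity-gain model between feed rate and feed-rate target does this implicitly; the loop above only mimics the ramp-until-constrained behavior.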
Figure 9-9. Distillation Tower MPC with Ramper Optimizer (as Figure 9-8, with the feed-rate target FEED TGT added as a controlled variable)
There are cases where the simple ramper or pusher optimizer strategy is
not sufficient to properly deal with the optimization demands of the MPC
controller. In these cases, a linear program (LP) and/or a quadratic pro-
gram (QP) sits above the MPC controller to provide an optimization layer.
This optimization structure is best suited to MPC control problems with
many degrees of freedom or with many interacting constraints. The LP/
QP formulation allows each variable to be assigned an economic cost such
that the optimal steady-state position of the process is determined and the
MPC controller is used to drive the process to this economic optimum,
while observing defined targets and constraints.
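The LP idea can be illustrated for a single degree of freedom, where the optimal steady-state move is found analytically by pushing the profitable manipulated variable until the first CV high limit binds. This is a deliberately simplified sketch (one MV, high limits only); the gains, limits, and cost are hypothetical, and a real LP/QP layer solves for many MVs and both limits at once.

```python
# Sketch of the LP layer for a single free MV: assign the MV an economic
# cost, push it in the profitable direction, and let the nearest predicted
# CV high limit set the optimal steady-state move.

def lp_steady_state_move(gains, cv_now, cv_hi, mv_cost):
    """Largest favorable MV move keeping every CV within its high limit."""
    direction = 1.0 if mv_cost < 0 else -1.0   # negative cost: push MV up
    headroom = []
    for g, now, hi in zip(gains, cv_now, cv_hi):
        if g * direction > 0:                  # this CV rises with the push
            headroom.append((hi - now) / (g * direction))
    # The closest (binding) constraint determines the optimal move.
    return direction * min(headroom) if headroom else 0.0

# Feed rate carries a negative LP cost (profit); delta-P binds first:
# delta-P allows (1.0 - 0.8)/0.02 = 10, temperature allows 60.
move = lp_steady_state_move(gains=[0.02, 0.5], cv_now=[0.8, 60.0],
                            cv_hi=[1.0, 90.0], mv_cost=-1.0)
print(round(move, 1))  # -> 10.0
```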
This small example shows how MPC technology truly handles these control
challenges. These concepts can then be easily extended to the much larger
MPC controllers often seen in industry today.
Evaporator Control
Evaporators are used in the food, chemical and pharmaceutical industries
for efficient concentration of liquid. Most frequently used are falling film
evaporators, which are particularly suitable for heat-sensitive products.
The heating of the evaporator is typically done by live steam or by thermal
vapor recompression. The operation of a single-effect evaporator is
depicted in Figure 9-10.
Figure 9-10. Operation of a Single-Effect Evaporator (feed flow FC 1-1, heating steam FT 1-2, evaporator body pressure PT 1-3 and temperature TT 1-4, separator, concentrate analyzer AT 1-5)
Dryer Control
Another simple MPC application is spray dryer control. Spray drying is
the most widely used industrial process involving particle formation and
drying. It is well suited for the continuous production of dry solids in
either powder or granulated form from liquid feed stocks, such as solu-
tions, emulsions and pumpable suspensions. Spray drying is an ideal pro-
cess, where the end product must conform to precise quality standards
regarding particle-size distribution, residual moisture content, bulk den-
sity, and particle shape. It is used in the chemical, food, pharmaceuticals
and ceramics industries. A typical spray dryer consists of feed pumps,
atomizer, air heater, air dispenser, drying chamber, and systems for
exhaust air cleaning, as shown in Figure 9-11.
Final product moisture is determined by the air flow, heater outlet temper-
ature, feed rate and pressure, viscosity of feed, and nozzles in service. The
MPC controller configuration for the spray dryer is shown in Figure 9-12.
Figure 9-11. Typical Spray Dryer (air fan, air heater, pressure to spray nozzles PT 1-3, drying tower, cyclone separator, feed pump, product moisture analyzer AT 1-4)
Figure 9-12. MPC Configuration for the Spray Dryer (controlled: product moisture AT 1-4, air heater outlet temperature TC 1-2; manipulated: air flow FC 1-1; disturbance: slurry feed flow FT 1-3)
metal ores. They are essentially refractory-lined steel tubes, rotated slowly,
mostly at 0.5 to 2 revolutions per minute. The kiln is fired by fuel and air,
usually injected at the lower end. The flame travels up the kiln, counter to
the solids, heating them directly by radiation and indirectly through the
refractory lining.
The feed consists of a source rich in calcium such as limestone. For the
cement process, the feed also contains silica. The processing occurs in
three zones:
[Figure: lime kiln with fuel flow FC 1-4, lime mud feed FC 1-1, exit gas O2 AT 1-2 and temperature TT 1-2, draft PT 1-3, hot-end temperature TT 1-3, and ID fan]
• Throughput/capacity increase
• Decrease in energy consumption
For example, the time required for pulp stock to move through a tower in
the bleaching process is strictly a function of the feed rate, assuming plug
flow through the tower. Thus, the time required for the brightness sensor
after the tower to indicate changes in chemical addition before the tower
will vary with the stock flow rate, as illustrated in Figure 9-16.
For this example, the brightness control using MPC might be generated
using the step response model identified at a stock flow rate of 500 GPM.
The control would be degraded at flow rates other than 500 GPM because
the model used to generate the control matches the process only when the
flow rate is 500 GPM.
Figure 9-16. Variation of the Brightness Response with Stock Flow Rate (60-minute response at 500 GPM versus 30-minute response at 1000 GPM)
[Figure: separate control models for feed rates 1, 2, and 3]
When the changes in time delay as a function of throughput are the pri-
mary variation in the process response, it is possible to avoid the complex-
ity of control-model switching. By comparing the step responses of such a
process at different throughputs, some insight is gained into how this is
accomplished, as shown in Figure 9-18.
Examining the two step responses, the only difference is that each
response is spread over a different time to steady state; that is, the coeffi-
cients of the step response are the same but the time difference between
the time samples is inversely proportional to the feed rate. Thus, the con-
trol matrix generated for models created at different throughputs would
be the same except for the control execution rate (since each model’s time
to steady state is unique). Thus, for this type of process, adjusting the con-
trol execution rate compensates for the variation in dead time with
throughput.
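This execution-rate compensation amounts to a one-line scaling. The base interval and reference flow below are hypothetical values chosen to match the 500/1000 GPM example.

```python
# Sketch of execution-rate compensation: the step-response coefficients are
# fixed, but the time between samples scales inversely with feed rate, so
# the controller execution interval is rescaled the same way.

def scaled_exec_interval(base_interval_s, reference_flow, current_flow):
    """Execution interval that keeps the model aligned with the process.

    At the reference flow the identified model is exact; at half that flow
    the transport delay doubles, so the interval doubles too.
    """
    return base_interval_s * reference_flow / current_flow

# Model identified at 500 GPM with a 60 s execution interval.
print(scaled_exec_interval(60.0, 500.0, 1000.0))  # faster flow -> 30.0 s
print(scaled_exec_interval(60.0, 500.0, 250.0))   # slower flow -> 120.0 s
```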
Figure 9-18. Step Responses at Two Throughputs (60-minute versus 30-minute dead time)
If the MPC technology adopted does not allow model swapping, online
dead-time adjustment, or adjustment of the controller execution rate, and
the model is known to cover a range of dead times, then a more robust
MPC controller can be designed by overestimating the dead time. An
MPC controller in which the modeled dead time is longer than the actual
process dead time will be sluggish in performance, while a controller in
which the modeled dead time is underestimated can produce cycling and
stability problems. Thus, if uncertain of the actual dead time, “go long!”
It is possible to break the control into two MPC blocks, one addressing
the fast-responding outputs and the other addressing the slow-responding
outputs. This can be done when the process consists of two processes in
series where the intermediate measurements are accessible, as illustrated
in Figure 9-20.
Figure 9-20. Structuring the MPC to Meet Both Fast and Slow Responses Without Compromise
the draft pressure do. Since the O2 and hot-end temperature are highly
correlated, it is possible to use the O2 measurement as a controlled param-
eter of an inner-loop MPC that is manipulated by an outer-loop MPC, as
illustrated in Figure 9-21.
Figure 9-21. Cascaded Fast and Slow MPCs with the O2 Measurement as the Inner-Loop Controlled Parameter
[Figure: batch reactor with split-range heating/cooling (TC 1-1, PC 1-2, FC 1-2, FY 1-1/1-2 low select) and trends of reactor temperature, reactor pressure limit, and heating/cooling over the batch]
[Figure: MPC for the batch reactor (controlled: reactor temperature, feed flow (optional); constraint: reactor pressure; manipulated: feed flow FC 1-1, split-range heating/cooling FY 1-1)]
Application
General Procedure
The general procedure to be followed to develop and install a model pre-
dictive controller is as follows:
Application Detail
A typical application procedure for small and medium-size applications
consists of several steps as outlined in Figure 9-25. The details of the proce-
dure are discussed below.
Process Analysis
The Process Analysis phase of the project is the starting point. Its purpose
is to insure that the control problem is well defined and fits within the
capabilities of the MPC technology.
Figure 9-25. Typical Application Procedure (process analysis; I/O assignment to the data historian; process testing; controller download; evaluation of operation)
Process inputs and outputs can also be defined as optimized if the process
has degrees of freedom—a greater number of control knobs than control
objectives. An optimized variable is a process input or output for which
there is an economic or performance incentive to push the variable in a
certain direction until some constraint becomes active and therefore limits
any further pushing of this variable. There are two modes of optimization
found in many APC controllers, an LP term and a QP term. (See the dis-
cussion following Figure 9-9.) The LP term is the generic, multi-input
implementation of a ramper that simply pushes the selected variable in a
direction defined by the LP cost toward the active constraint set. The QP
term is often selected to push a degree of freedom to some pre-defined
steady-state target.
Any problems with regulatory loop design or tuning must be addressed before the testing stage. All regulatory
loops, whether they are included as manipulated variables or not, should
be checked and tuned. The selected tunings will affect the response of the
process and are therefore considered part of the process from the MPC
perspective.
Process Testing
The best path is to perform a structured test on the process, whereby
every input variable is perturbed in a manner that generates data best
suited to the identification and modeling of the input/output
relationships. Advances have been made in the use of off-line dynamic
simulators to generate the models required for MPC controllers. Although the true
capability of the simulation option is still at the research level, the use of
simulation tools can greatly enhance process-testing efforts. As of today,
process testing is still the primary path to the generation of the MPC
model.
Many techniques are being used and promoted for testing a process in
order to obtain optimal data for analysis. Although it is important to
understand the strengths and weaknesses of the different plant-testing
techniques, we must remember that the goal of the Process Testing phase
is to obtain the best model of the process behavior for the MPC controller.
The most common testing procedures used today are
If the PRBS moves are implemented in a random manner, the input signal
is considered uncorrelated (white); theoretically, a white input signal will
produce better parameter estimates for the generated model. The fre-
quency of the PRBS moves generates fast and slow flips in order to excite
the full frequency range of the process response. For example, the fast
flips provide good information about the dead time, while the long flips
are necessary for identifying the steady-state gain.
Once all the dust settles, the PRBS test makes more changes to the process
and thus produces data with more information content and therefore the
potential for better models. In designing a PRBS test, there are a number of
factors that must be set. These include the amplitude of the move in the
input variable, the hold time of the flip sequence, and the total length of
the test. How these parameters are set will affect the quality of the test
data, and although there are no hard-and-fast rules, there are some basic
practices that are followed.
Move amplitude:
• Select the amplitude big enough that you can see the effect of the
move in the raw data but not so big that the process is upset or the
operator needs to make compensating moves in other variables.
• The amplitude can always be adjusted during the test if required.
• If the process is very noisy and the move is small or within the
noise bandwidth, the test period (length) will need to be longer to
achieve the same quality model.
Flip time:
Test length:
• The test length is purely a function of the quality of the data. The
greater the noise in the data, the longer the test length in order to
generate a model with an equivalent confidence level.
• The test length is reduced by increasing the amplitude of the
moves and increasing the signal-to-noise ratio of the data.
• For most chemical processes a test sequence that generates 16 to 32
flips is usually sufficient to generate a good model.
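A PRBS-style test schedule reflecting these practices might be generated as follows. The amplitude, hold time, and test length are hypothetical values, and the flip logic is only a sketch of the idea, not any vendor's test program.

```python
import random

# Sketch of a PRBS-style test schedule: at each hold interval the input
# randomly flips or stays, which keeps the move sequence close to
# uncorrelated ("white"). All design values are hypothetical.

def prbs_schedule(amplitude, hold_time_s, test_length_s, seed=1):
    """Return a list of (time_s, level) pairs for the input test signal."""
    rng = random.Random(seed)       # fixed seed for a repeatable schedule
    level = amplitude
    schedule = []
    for t in range(0, test_length_s, hold_time_s):
        if rng.random() < 0.5:
            level = -level          # flip the input
        schedule.append((t, level))
    return schedule

# A 2% amplitude move, 5-minute hold time, 4-hour test: 48 hold intervals.
moves = prbs_schedule(amplitude=2.0, hold_time_s=300, test_length_s=14400)
flips = sum(1 for a, b in zip(moves, moves[1:]) if a[1] != b[1])
print(len(moves))  # -> 48
```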
The Process Testing segment of the MPC project consumes a large portion
of the project cost. Some effort has been applied to developing automated
test sequences that will automatically calculate and implement the moves.
These automated programs can reduce testing effort but must still be care-
fully monitored to insure that the process remains within desired operat-
ing limits. An automated testing program [9.10] that generates a pseudo-
random testing sequence with only one parameter (process time to steady
state) set by the user is shown in Figure 9-26.
Figure 9-26. Automated Test Sequence (input moves versus time in seconds)
The first phase of the modeling effort is to analyze the time series trends of
the variables collected over the test period. It is important to review the
values of the data and remove any periods of time that do not reflect nor-
mal operating conditions. Most MPC packages allow this to be done via a
graphical interface to the data.
Once the appropriate data has been analyzed and selected, the next step is
to select the inputs and outputs and to identify the time series relation-
ships that exist between them. Before the advent of automated testing and
modeling, a cross-correlation model was used to examine the relationship
between the inputs and outputs and to manually identify the time delay.
The cross-correlation model is analogous to the finite impulse response
(FIR) model used in many MPC packages, and is obtained by simply com-
puting the cross-correlation of the input series against the output series
across a number of lags equal to the estimated settling time of the process.
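As a sketch of this dead-time check, with synthetic data in which the output is simply the input delayed by three samples with a gain of two:

```python
# Correlate the input series with the output series over a range of lags
# and take the lag of the correlation peak as the dead-time estimate.

def cross_correlation(u, y, max_lag):
    """Scaled correlation of u[k] with y[k + lag] for lag = 0..max_lag."""
    n = len(u)
    u_mean, y_mean = sum(u) / n, sum(y) / n
    scale = (sum((a - u_mean) ** 2 for a in u)
             * sum((b - y_mean) ** 2 for b in y)) ** 0.5
    return [sum((u[k] - u_mean) * (y[k + lag] - y_mean)
                for k in range(n - lag)) / scale
            for lag in range(max_lag + 1)]

u = [0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1]
y = [0.0] * 3 + [2.0 * v for v in u[:-3]]   # 3-sample delay, gain of 2
cc = cross_correlation(u, y, max_lag=6)
print(cc.index(max(cc)))  # estimated dead time in samples -> 3
```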
Since there are many advantages to the use of parametric models, cross-
correlation models are mostly employed to manually identify the time
delay and make sure the direction and size of the change in the process
output is reasonable. It is a good practice to always examine the process
model that was identified and compare it with the knowledge of the pro-
cess.
There are many forms of the parametric model, including the ARX model,
the output error model, and the Box-Jenkins model. These parametric
model forms differ mostly in the way they handle process noise and
dynamics. The more complex the process dynamics and process noise, the
more sophisticated the model form required, Box-Jenkins being the most
generic.
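The simplest of these forms, a first-order ARX model y[k] = a·y[k-1] + b·u[k-1], can be fit by ordinary least squares; the 2x2 normal equations even have a closed-form solution. The data below are synthetic, generated from a known (a, b) pair so the fit can be checked; real identification packages of course handle higher orders and noise models.

```python
# Sketch of fitting a first-order ARX model by least squares
# (2x2 normal equations solved with Cramer's rule).

def fit_first_order_arx(u, y):
    """Least-squares estimate of (a, b) for y[k] = a*y[k-1] + b*u[k-1]."""
    s_yy = s_uu = s_yu = r_y = r_u = 0.0
    for k in range(1, len(y)):
        y1, u1 = y[k - 1], u[k - 1]
        s_yy += y1 * y1            # sum of y[k-1]^2
        s_uu += u1 * u1            # sum of u[k-1]^2
        s_yu += y1 * u1            # cross term
        r_y += y1 * y[k]
        r_u += u1 * y[k]
    det = s_yy * s_uu - s_yu * s_yu
    a = (r_y * s_uu - r_u * s_yu) / det
    b = (s_yy * r_u - s_yu * r_y) / det
    return a, b

# Noise-free data from a = 0.8, b = 0.4; least squares recovers both.
u = [1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0]
y = [0.0]
for k in range(1, len(u)):
    y.append(0.8 * y[k - 1] + 0.4 * u[k - 1])
a, b = fit_first_order_arx(u, y)
print(round(a, 3), round(b, 3))  # -> 0.8 0.4
```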
Once the form of the model has been selected and the model is fit to the
data, the diagnostic stage of the model fit is next. This is an important step
in the modeling phase. A simple and very intuitive diagnostic is to com-
pare the measured process output values against those calculated by the
model based on the process inputs. This comparison allows one to deter-
mine how closely the model matches the actual plant operation. An exam-
ple of this type of output validation is shown in Figure 9-33.
Statistical tests on both the model residuals and the difference between the
modeled output and the actual output can provide insight into the model
deficiencies. For example, if the model residual terms demonstrate strong
autocorrelation, there is a well defined pattern in the data that we are not
accounting for within our process model but that should be accounted for
with the addition of a noise model. Further, the cross-correlation of the
input series with the residual series will identify whether the selected
form of the parametric model is sufficient to accurately model the data.
For a more detailed review of these statistical model diagnostics, refer to
Box-Jenkins [9.22].
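The autocorrelation part of this diagnostic can be sketched with a lag-1 coefficient; the residual series below is synthetic, chosen only to show what a strongly patterned residual looks like.

```python
# Strong lag-1 autocorrelation in the residual series indicates structure
# the process model is not capturing, suggesting a noise model is needed.

def lag1_autocorrelation(residuals):
    """Lag-1 autocorrelation coefficient of a residual series."""
    n = len(residuals)
    mean = sum(residuals) / n
    var = sum((r - mean) ** 2 for r in residuals)
    cov = sum((residuals[k] - mean) * (residuals[k + 1] - mean)
              for k in range(n - 1))
    return cov / var

# A strictly alternating residual pattern is strongly autocorrelated;
# residuals from a well-fitted model should be close to white (near 0).
patterned = [1.0, -1.0] * 10
print(round(lag1_autocorrelation(patterned), 2))  # -> -0.95
```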
In the end, we want to extract the best possible model from a good set of
test data. In some cases, the model may need to be manually set or
adjusted to accommodate process knowledge and/or the unavailability of
good test data. Therefore, editing the associated response models based on
your knowledge of the process would be a practical solution.
Controller Design
The MPC controller design stage consists of rationalizing the preliminary
controller configuration with the models obtained from the test data in
order to build an MPC controller configuration that will meet the control
objectives of the process. The first step is to lay out the preliminary matrix
defined in the Process Analysis stage. This preliminary matrix can then be
populated with the precise models obtained from the testing stage. Know-
ing the actual models, the preliminary MPC matrix design can now be re-
evaluated to insure the identified relationships generate a controller capa-
ble of achieving the defined control strategy and objectives. For example,
if a preliminary design anticipated a model between MV1 and AV2, but no
model was identified for this relationship, we must assess the impact this
will have on the original control strategy.
Once the final matrix structure is defined and the models entered into the
MPC matrix, it is a good practice to evaluate the stability of the proposed
MPC controller. The purpose here is to verify that the proposed controller
matrix is not singular, a condition that arises when there is linear depen-
dency in the output variables. A singular matrix will make it impossible to
independently control selected CVs to their targets.
This singular behavior occurs when the models with respect to the manip-
ulated variables for a given output are a linear combination of the models
for another output. This is easily identified in the 3x3 controller gain
matrix, shown in Figure 9-27, by noting that the rows for output variables
CV1 and CV2 are linearly dependent; that is, the steady-state gains for
CV2 are 4 times the gains of CV1. Thus any attempt to move CV1 will
cause a corresponding move in CV2, making it impossible to move CV1
and CV2 targets independently of each other. Checking the controller for
linear dependencies can avoid many problems often detected only later at
the tuning, simulation, or commissioning stages.
Figure 9-27. Steady-state gain matrix with linearly dependent rows (CV2 = 4 × CV1):

        MV1    MV2    MV3
CV1     0.5   -0.2    0
CV2     2.0   -0.8    0
CV3    -0.1    0      3
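This dependence check reduces to testing whether the gain matrix determinant is (numerically) zero. The CV2 row below is taken as 4 × the CV1 row, per the example in the text.

```python
# Sketch of the linear-dependence check on the steady-state gain matrix.

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

gains = [[0.5, -0.2, 0.0],    # CV1
         [2.0, -0.8, 0.0],    # CV2 = 4 x CV1: linearly dependent
         [-0.1, 0.0, 3.0]]    # CV3
print(abs(det3(gains)) < 1e-9)  # -> True: the controller matrix is singular
```

In practice a rank or condition-number check is preferred, since nearly dependent rows are almost as troublesome as exactly dependent ones.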
However, if the model is only in the prediction matrix, the effect of one
MV on another MV is delayed by one scan. When tight timing is impor-
tant for ratio control, such as when a speed that is manipulated for
machine-direction control of plastic sheets (webs) is also ratioed to another
speed used for production-rate maximization, it might be better to include
the model in the control matrix and adjust the MV and CV tuning weights
to provide the desired ratio effect and suppression of manipulation for
control.
In addition to defining the models for the MPC matrices, the controller
execution frequency must also be set. The MPC controller runs at a much
slower frequency than the PID regulatory layer. Ultimately the MPC con-
troller’s execution interval is defined by the dynamic of the process: the
faster the process responds, the faster the MPC must execute. A good rule
of thumb is to execute the controller at least as fast as a rate of 5 moves per
time constant. Also, since the integrated error for unmeasured load upsets
is proportional to the square of the dead time, it is beneficial to make sure
the MPC does not increase the total loop dead time by more than 10%.
This is achieved by choosing an MPC execution interval that is less than
1/5 of the loop dead time since the additional time delay is on average
1/2 the control interval. MPC execution intervals of less than 5 seconds are
now possible except for very large matrices.
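Both rules of thumb can be combined into a single interval calculation; the loop dynamics below are hypothetical.

```python
# At least 5 moves per process time constant, and no more than 1/5 of the
# loop dead time (the added delay, about half the interval on average,
# then keeps the dead-time increase under 10%).

def mpc_exec_interval(time_constant_s, dead_time_s):
    """Largest execution interval satisfying both rules of thumb."""
    return min(time_constant_s / 5.0, dead_time_s / 5.0)

# Hypothetical loop: 300 s time constant, 100 s dead time.
print(mpc_exec_interval(300.0, 100.0))  # -> 20.0 (seconds)
```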
Before the new control is tried on the real process, the user must observe
how well the control responds to set point changes and load disturbances
in simulation. The process response is simulated based on a process
model. The model used in simulation is the one identified for the process.
However, by using a modified model, the user can test the control
response for changes in the process.
The ability to introduce disturbances and test the control for changes in
process gain or dynamics is extremely valuable when you want to test the
robustness of the control. If the process response is too aggressive, it is
easy to improve results by modifying the default values for the penalty on
error, the penalty on move, and the set point filter.
[Figure: CV range control, showing set point zones and the reference trajectory toward the CV range]
MPC Commissioning/Operation
Putting the MPC controller on-line is where the rubber meets the road. If
you took the time and effort to properly conduct the process tests, collect
high-quality test data, build significant models and generate a control
matrix that is well conditioned, the commissioning phase of the controller
will most often proceed with few surprises. If shortcuts were taken in the
previous stages, poor on-line controller performance will be obvious, but
the root problem may not be so easily identified.
The first step is to let the MPC controller read the process and compute
moves in the manipulated variables. These moves are displayed but not
actually sent to the local controllers. In this shadow mode,
the application developer or operator can easily see what moves the MPC
controller would make if the MPC were active. This step gives the opera-
tor confidence and serves as a good learning tool to explain controller
action without actually being in closed-loop mode.
Once you are assured that the controller is making moves in the correct
direction, the MPC controller can be put on line. For small applications, up
to a 3x3 matrix, the application can be fully activated on the first pass. For
larger MPC controllers it is recommended that a limited number of vari-
ables or subsets of the controller be activated at one time in order to more
easily identify any performance issues. Once the controller is fully opera-
tional and operating in a stable manner, the performance elements that
must be checked out during the commissioning phase include the follow-
ing:
One common problem is a constraint variable that cycles about its limit. This occurs when the controller action is too aggressive and then
rapidly pulls back from the constraint. This is followed by the controller’s
desire to meet the target as it once again pushes back towards the con-
straint only to cause another rapid pull-back. This behavior over time will
cause a sawtooth pattern in the constraint variable and can cause excessive
variability in the other MPC controller variables. The solution is to reduce
the aggressiveness of the constraint-tuning action or to modify the con-
straint model by increasing the dead time or increasing the gain. A well
tuned constraint controller will be able to approach the limit in a smooth
dynamic fashion and ride the limit over time.
Manipulated variables can hit hard limits defined in the controller, can indi-
vidually be taken out of the MPC mode, or can be driven into a wind-up
condition. All these occurrences will cause an over-specification of the
controller and a loss of degrees of freedom in the MPC, which can cause
controller stability problems. These conditions should be verified during
the commissioning stage.
Once the control performance is satisfactory for the MPC, the optimization,
or ramper, functionality can be evaluated. The different optimization modes
can be tested and the optimization tuning speed can be modified as
required. A good rule of thumb is to optimize the process at a rate 1/5 that
of the control action. This usually insures good control action with a slow
push towards the unit optimum.
Selectively testing and verifying the MPC controller objectives and func-
tionality will insure that the MPC application is robust, and will provide
plant operators with the confidence that the application will deliver the
identified benefits.
Rules of Thumb
The process model that is identified for use in MPC is influenced by pro-
cess noise, unmeasured load disturbances, nonlinearities, and inverse
response. The following rules of thumb apply when these conditions are
present during process testing.
Rule 9.1. — Use the simplest model with identified parameters that are consis-
tent with more complex models despite poor statistical indices for the simpler
model. Model parameter changes of less than 10% are negligible due to
nonlinearities. The actual process gain, time lag, and time delay normally
change more than 10% as a result of changes in operating point and oper-
ating conditions. The simplest model is a first-order ARX model.
Rule 9.2. — For fast noise, use either a signal filter to attenuate the peak-to-peak
amplitude of the noise to less than 1/5 of the step size for a bump test, or use a
Box-Jenkins noise model to avoid a much shorter-than-actual identified lag time.
For example, fast noise of just 5% peak to peak for a step size of 10% can
cause the identified lag time to be 80% less than the actual time lag and the
identified process gain to be slightly low.
Rule 9.3. — For a shift in the steady state (nonstationary behavior), either
remove the affected area of the test or calculate the difference between the consecu-
tive data points to avoid the identification of a larger-than-actual time lag. For
example, a positive and negative shift that approaches the step size can
cause a time lag to be identified that is more than 75% larger than the
actual time lag. A single differencing of adjacent data points can account
for a change in the magnitude of the mean, whereas a second differencing
of adjacent data points can account for a change in the slope of the mean.
Rule 9.4. — For slow noise, either remove the affected area of the test or use an
autoregressive Box-Jenkins noise model to avoid the identification of a larger-
than-actual time lag. For example, slow sinusoidal noise of just 5% peak to
peak with a period about 1/2 of the response time can cause a 30% larger
than actual time lag for a step size of 10%.
Rule 9.5. — For inverse response, either use an increase in the time delay or add
a partial delay to the ARX model to avoid a severe overestimate of the time lag.
For example, the identified time lag can be more than 100% larger than actual
due to an inverse response that approaches half of the step response. The
most accurate results are obtained with a partial delay (a second ARX B
coefficient). However, if it is not important to have the model prediction
match the process during the inverse response, a first-order ARX model is
sufficient.
Rule 9.6. — For bump or PRBS tests where none of the steps were held long
enough to reach a steady state, the time lag identified is too small and the process
gain identified is slightly low. In PRBS or bump tests, some of the steps must
be held long enough for the process to approach a steady state.
Rule 9.7. — Use a control interval that is large enough to reduce A/D noise and
minimize modeling error but that does not add excessive dead time. The true rate
Rule 9.8. — For an integrating response, calculate the difference for the consecu-
tive output data points to develop a first-order model. The rate of change of the
process variable is now modeled.
Rule 9.9. — For a level response, compute the rate of change of level based on the
geometry of the vessel and location of the sensor taps. These responses are typi-
cally slow and PRBS testing would be time-consuming. The actual
response for a given density is computed for a change in difference
between the flow going into and out of the vessel. If the MPC controller
output does not manipulate a flow controller set point but directly throt-
tles a control valve, the calculation is nonlinear and depends upon the
installed characteristic of the control valve.
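For a vertical cylindrical vessel — an assumed geometry, with illustrative names and units — the level rate of change described in Rule 9.9 reduces to the flow imbalance divided by the cross-sectional area:

```python
import math

def level_rate(f_in_m3_per_min, f_out_m3_per_min, diameter_m):
    """Rate of change of level (m/min) for a constant-density liquid
    in a vertical cylindrical vessel: flow imbalance / cross-sectional area."""
    area = math.pi * (diameter_m / 2.0) ** 2  # cross-sectional area, m^2
    return (f_in_m3_per_min - f_out_m3_per_min) / area

# A 0.5 m3/min flow imbalance in a 2 m diameter vessel.
rate = level_rate(1.5, 1.0, 2.0)
```

If the MPC output throttles a valve directly rather than a flow set point, the inlet or outlet flow would itself be a nonlinear function of valve position, per the installed characteristic.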
Guided Tour
This tour illustrates the potential ease of use and convenience of an inte-
grated interface that is possible for model predictive control embedded in
an industrial control system. The following areas are addressed:
• MPC Configuration
• Process Testing
• Process Model Identification
• Verification of Step-Response Model
• Testing Control Off-line
• Operator Interface
• Expert Options in Model and Control Generation
Model predictive control may often be the best control solution for
processes characterized as dead time–dominant. Also, MPC is often the
preferred solution if a process has measurable disturbances, constraints, or
interaction between control parameters. The application of this
technology, however, has traditionally been limited to a few high-
throughput processes because of the cost and complexity of the tools used
A few parameters associated with the MPC block, such as set point limits
and constraint limits, are defined as part of module configuration. All the
block inputs and outputs are automatically assigned for historical collec-
tion. Once the block has been configured, the module containing the MPC
block is downloaded to the controller.
The time over which testing was done is shown by a different color in the
trend. After testing completes, selecting Autogenerate will cause the pro-
cess model to be identified and the controller generated. On completion of
control generation, the identified step response is automatically displayed,
as shown in Figure 9-31.
Figure 9-32. Comparison of FIR (short horizon) and ARX Step-Response Models
As a means of further checking the accuracy of the model, the user may
select Verify Against Original Data. In response, the calculated process
output based on the actual process inputs vs. the actual process output is
plotted, as shown in Figure 9-33.
To test the control that was generated based on the model, the user may
select Simulate. In response, a simulation environment is automatically
created based on the block definition, the identified process model and the
generated control. The control is automatically executed in this off-line
environment with a simulated process based on the identified process
model. The control response is automatically trended along with a trend of
the future response calculated by the MPC block, as shown in Figure 9-34.
In the manual mode of operation, the user may examine the process
response for changes in the manipulated inputs to the processes. The con-
trol response to a set point change is observed in the automatic mode. To
allow the control to be tested in the presence of noise and changing distur-
bance inputs, the user may select to add noise to process inputs and out-
puts. Also, the ability is provided to execute control and process
simulation up to 200 times faster than real time. This feature significantly
reduces the time required to check out the control associated with slower
processes.
Once the control has been generated, the module is downloaded to the
controller to put the control on-line. Predefined dynamos are provided to
allow each control, constraint, manipulated or disturbance input to be
included on the operator display. Also, a dynamo is provided that allows
all the MPC inputs and outputs to be viewed together. An example of the
MPC operator view is shown in Figure 9-35.
Figure 9-36. Editing Step Response
The tuning parameters associated with the generation of the MPC control-
ler are set automatically based on the process dynamics. Thus in most
cases there is no need for the user to adjust the tuning parameters. How-
ever, to include a special control response if needed, the user can choose to
generate the control from the model. When this option is selected, the user
can modify the tuning parameters used in controller generation. The dia-
log for this is shown in Figure 9-38.
The ability to generate the control from a selected model is also useful
when a model has been modified to include user knowledge.
Theory
The advantages of MPC are most evident when it is used as a multivari-
able controller. Therefore, the acronym MPC is sometimes read as multi-
variable predictive control. In applications to date, MPC has been used
predominantly as a multivariable controller. For the sake of consistency,
however, this book stays with the original use of the acronym MPC to
mean model predictive control.
In MPC, the process inputs are manipulated variables (MV) and measured
disturbances (DV). The process outputs are controlled variables (CV) and
auxiliary, or constraint, variables (AV).
[Figure: MPC process block with manipulated (MV) and disturbance (DV) variables as inputs IN1–IN18, and controlled (CV) and constraint (AV) variables as outputs OUT1–OUT20]
Measured disturbances are inputs to the process that are not managed by
MPC.
Controlled variables are process outputs kept at specific set points (tar-
gets) or within specified ranges. Process models predict the future values
of controlled variables for a number of scans ahead. The prediction horizon
is the range in scans of process output prediction.
Some use the term controlled variables for both controlled and constraint
variables, differentiating between them by using terms like controlled vari-
ables with set points for controlled variables and controlled variables with lim-
its for constraint variables.
[Figure 9-40. Step response: predicted process output over time in number of scans, with coefficients marked at 30, 60, 90, and 120 scans]
In MPC terms we see the step response as a prediction of the process out-
put up to 120 scans ahead, for a unit step input that was applied at scan
zero. The value of the prediction at a particular point in the future is repre-
sented by a specific coefficient. Figure 9-40 marks coefficients representing
the process output at 30, 60, 90, and 120 scans ahead after applying a unit
step on the input.
[Figure: multivariable process block with inputs IN1–IN3 and outputs OUT1–OUT3]
The superposition principle is illustrated graphically in Figure 9-42.
[Figure 9-42. Superposition: individual step responses Y1 and Y2 in one panel, and their sum Y1 + Y2 in the other, plotted over 21 scans]
Prediction
The output prediction vector is updated every scan as x_k = A * x_(k-1) + B * ∆u_k, where:
A is a shift operator, A x_k = [y_2, y_3, ..., y_p, y_p]^T, which shifts the prediction one scan and repeats the last element
B = [b_1, b_2, ..., b_p]^T is the vector of p step response coefficients
For a process with n outputs and m inputs, vector xk has dimension n*p
and vector B becomes a matrix with dimension n*p rows and m columns.
The graphical illustration of the equations in Figure 9-43 explains the pre-
diction principles.
[Figure 9-43. Predicted process output for CV at times k, k+1, ...: the prior prediction is shifted one scan and step responses scaled by the input changes (steps 1, 2, 3) are added]
1. Prediction made at the time k-1 (the bottom dotted curve) is shifted
one scan to the left.
2. A step response, scaled by the current change on the process input,
is added to the output prediction.
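Steps 1 and 2 can be sketched as follows (a first-order step response and a 0.5 input change are assumed for illustration; NumPy assumed):

```python
import numpy as np

def update_prediction(x_prev, B, du):
    """One prediction update: shift the previous prediction one scan
    (repeating the last, steady-state element), then add the step
    response scaled by the current change on the process input."""
    x = np.empty_like(x_prev)
    x[:-1] = x_prev[1:]      # shift one scan
    x[-1] = x_prev[-1]       # repeat the final element
    return x + B * du

p = 120
t = np.arange(1, p + 1)
B = 1.0 - np.exp(-t / 20.0)          # assumed first-order step response
x = np.zeros(p)                      # prediction before any input change
x = update_prediction(x, B, 0.5)     # apply a 0.5 input change
```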
Using test data, we can build for every output as many equations as the
number of collected samples. The number of collected samples in test data
normally is significantly higher than the number of unknown model coef-
ficients. Such equations are solved using the least squares technique. This
technique finds the coefficients that may not fit any single equation
perfectly, but fit all the equations optimally, in such a way that the total
squared error for all equations is minimized. The form of the equations used for process
modeling is defined by the modeling technique, a number of which are
used. The most common identification (modeling) techniques are Finite
Impulse Response (FIR) and Auto Regressive with eXternal inputs (ARX).
∆y_k = ∑(i = 1 to p) h_i ∆u_(k−i)   for a SISO process   (9-2)

where:
p = prediction horizon, with a typical default value for MPC models of 120
∆y_k = change in the process output at time k
∆u_(k−i) = change in the process input at time k−i
h_i = pulse response coefficients of the model

The step response coefficients are the running sums of the pulse response coefficients:

b_k = ∑(i = 1 to k) h_i   k = 1, 2, ..., p   (9-3)
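Equation 9-3 is a running sum, so step response coefficients follow from pulse response coefficients in one line (hypothetical coefficients; NumPy assumed):

```python
import numpy as np

# Hypothetical identified pulse response coefficients h_i.
h = np.array([0.0, 0.2, 0.3, 0.2, 0.15, 0.1, 0.05])

# Equation 9-3: each step response coefficient b_k is the sum of the
# pulse response coefficients h_1 ... h_k.
b = np.cumsum(h)
```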
On the other hand, the ARX model has many fewer coefficients, which are
defined with higher confidence, provided the process dead times are
known.
y_k = ∑(i = 1 to V) a_i y_(k−i) + ∑(i = 1 to A) b_i u_(k−d−i)   for a SISO process   (9-4)

where:
V, A = autoregressive and moving average orders of the ARX model; A = 4, V = 4 satisfy most applications
a_i, b_i = autoregressive and moving average coefficients of the ARX model
d = dead time in scans
Applying FIR first to define the dead times, and then applying those dead
times in ARX, provides the best identification results.
Finally, having an ARX model, the step responses for any prediction horizon
can be calculated directly from Equation 9-4 by applying a unit step on the
input.
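A step response can be computed from Equation 9-4 by direct recursion. The sketch below assumes a simple stable model with V = 1, A = 1, and illustrative coefficients (NumPy assumed):

```python
import numpy as np

def arx_step_response(a, b, d, p):
    """Step response of a SISO ARX model (Equation 9-4) for a unit
    step applied at scan 0, evaluated over a p-scan horizon."""
    y = np.zeros(p)
    u = np.ones(p)  # unit step input
    for k in range(p):
        acc = 0.0
        for i in range(1, len(a) + 1):        # autoregressive terms
            if k - i >= 0:
                acc += a[i - 1] * y[k - i]
        for i in range(1, len(b) + 1):        # moving-average terms
            if k - d - i >= 0:
                acc += b[i - 1] * u[k - d - i]
        y[k] = acc
    return y

# Assumed model: y_k = 0.9*y_(k-1) + 0.1*u_(k-3-1), dead time of 3 scans.
resp = arx_step_response(a=[0.9], b=[0.1], d=3, p=120)
```

For the assumed coefficients the response stays at zero through the dead time and settles toward the model gain b_1/(1 − a_1) = 1.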
For MIMO processes, superposition is applied from all inputs to every
output both in FIR and ARX models.
The validation procedure may also include statistical techniques for calcu-
lating confidence intervals. A confidence interval forms an area around a
nominally defined step response, as shown in Figure 9-44.
e( k ) = r ( k ) − y ( k ) k = 1, 2,..., p (9-5)
To compensate for the errors, the change in the MPC controller output
should cause a change in the process output with predicted errors that are
the mirror image of the existing errors relative to the set point trajectory:
[Figure: errors e(k) between the set point trajectory SP, r(k), and the prediction over scans 1 to p, with their mirror-image predicted errors e’(k)]
min ‖B * ∆u − E‖

where:
E = error vector of dimension p.
B = step response vector of dimension p.

From linear algebra we know the solution of the least squares problem as:

∆u = (B^T * B)^(−1) * B^T * E   (9-8)
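For a single control move, Equation 9-8 reduces to a scalar; with assumed step-response and error vectors (NumPy assumed):

```python
import numpy as np

# Assumed step response coefficients B and error vector E over a short horizon.
B = np.array([0.1, 0.3, 0.6, 0.8, 0.9, 1.0])
E = np.array([0.5, 0.5, 0.5, 0.5, 0.5, 0.5])

# Equation 9-8 with a single move: du = (B^T B)^-1 B^T E, here a scalar.
du = float(B @ E) / float(B @ B)

# The move minimizes the total squared error, so the residual shrinks.
residual = np.linalg.norm(E - du * B)
```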
coefficients of the same step response shifted to the right with the first
coefficient equal to zero. The next columns are built in a similar way from
the preceding columns. The number of columns in the dynamic matrix is
equal to the control horizon. Equation 9-9 is an example of a dynamic matrix
for a control horizon equal to 2 with a five-coefficient step response.
Multiplying the matrix by the vector of the two sequential controller-output
moves gives a prediction of the process-output change resulting from the two
moves.
∆out_1     b_1    0
∆out_2     b_2   b_1      ∆u_1
∆out_3  =  b_3   b_2  *   ∆u_2      (9-9)
∆out_4     b_4   b_3
∆out_5     b_5   b_4

where ∆u_1 and ∆u_2 are the two future control moves.
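The structure of the dynamic matrix in Equation 9-9 can be built and exercised as follows (illustrative step-response coefficients; NumPy assumed):

```python
import numpy as np

def dynamic_matrix(b, c):
    """Build the dynamic matrix from step response coefficients b for a
    control horizon of c moves: each column is the step response shifted
    down by one scan, zero-padded at the top (Equation 9-9)."""
    p = len(b)
    Su = np.zeros((p, c))
    for j in range(c):
        Su[j:, j] = b[: p - j]
    return Su

b = np.array([0.2, 0.5, 0.7, 0.9, 1.0])   # five-coefficient step response
Su = dynamic_matrix(b, 2)
dout = Su @ np.array([1.0, -0.5])          # prediction from two future moves
```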
To get the desired MPC controller performance, the two tuning parame-
ters, penalty on moves and penalty on error, are used in the formulation.
The MPC controller objective for minimizing the squared error of the con-
trolled variable includes the penalty on error over the prediction horizon;
and the objective for minimizing the squared changes in the controller out-
put over the control horizon includes the penalty on moves. Both of these
are expressed in the following way:
∆U(k) = (Su^T Γy^T Γy Su + Γu^T Γu)^(−1) Su^T Γy^T Γy Ep(k)   (9-10)
where:
Su is a p x m process dynamic matrix
Ep(k) is error vector over prediction horizon
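Equation 9-10 can be evaluated directly; the sketch below uses scalar penalty-on-error and penalty-on-moves weights applied to a small assumed dynamic matrix (NumPy assumed):

```python
import numpy as np

def mpc_gain(Su, gy, gu):
    """Unconstrained MPC controller gain per Equation 9-10, with diagonal
    penalty-on-error (gy) and penalty-on-moves (gu) weighting matrices."""
    p, m = Su.shape
    Gy = np.diag(np.full(p, gy))   # penalty on error over prediction horizon
    Gu = np.diag(np.full(m, gu))   # penalty on moves over control horizon
    H = Su.T @ Gy.T @ Gy @ Su + Gu.T @ Gu
    return np.linalg.solve(H, Su.T @ Gy.T @ Gy)

# Assumed dynamic matrix (p = 5, control horizon m = 2) and weights.
Su = np.array([[0.2, 0.0],
               [0.5, 0.2],
               [0.7, 0.5],
               [0.9, 0.7],
               [1.0, 0.9]])
K = mpc_gain(Su, gy=1.0, gu=0.1)
dU = K @ np.full(5, 0.5)          # control moves for a constant error vector
```

A larger penalty on moves (gu) shrinks the computed moves, trading responsiveness for robustness to model error.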
It follows from experience that the dead time should be accounted for as a
major factor in setting the penalty on moves. Equation 9-11 defines a pen-
alty on moves that provides stable and responsive MPC operation for a
model error of up to about 50%:
where:
DTi is the dead time in MPC scans for an MVi - CVi pair
Gi is gain (no units) for the MVi - CVi pair
p denotes the prediction horizon
MV and CV denote manipulated variable and controlled variable,
respectively
∆U k = K mpc E p (9-12)
where:
K_mpc is the MPC controller gain matrix given by Equation 9-10
∆AV is the magnitude of the predicted steady-state constraint
violation
GCV - AV is the gain relationship between AV and CV
r is the relaxation factor; r <1
∆SPcv is the requested working set point change for the controlled
variable
GCV−AV = GCV−MV / GAV−MV   (9-15)

The GCV−AV gain tells how much the CV set point must move to compensate for a
unit constraint violation.
The working set point is adjusted every scan according to constraint viola-
tions. If a constraint violation ends, the working set point returns gradu-
ally to the original set point value. Figure 9-46 illustrates the concept.
Referring to Figure 9-46, it is seen that both CV1 and AV1 depend on the
same manipulated variables MV1 and MV2. If AV1 violates constraint lim-
its, changing the set point of CV1 in the proper direction will cause MV1
and MV2 moves that result in the return of AV1 to within its desired lim-
its.
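With assumed gains, Equation 9-15 and the resulting working set point adjustment can be sketched as:

```python
# Assumed steady-state gains for a manipulated variable shared by CV and AV.
g_cv_mv = 2.0    # gain from MV to the controlled variable CV
g_av_mv = 0.5    # gain from MV to the constraint variable AV

# Equation 9-15: CV set point move needed per unit of AV constraint violation.
g_cv_av = g_cv_mv / g_av_mv

# A predicted 0.3 steady-state AV violation calls for a working set point
# change of this magnitude, applied in the direction that relieves it.
d_av = 0.3
d_sp_cv = g_cv_av * d_av
```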
Simple Optimization
The objectives of optimization are usually to maximize a product value
and minimize a raw-material cost. A simple optimization is applied for the
cases where a priori knowledge defines optimization targets as minimiz-
ing or maximizing one of the process parameters.
where:
∆MV is the magnitude of the predicted MV limit violation.
GOPTMV − MV is the gain relationship between optimized MV and MV
with limits being violated.
GoptMV−MV = ∆MVopt / ∆MV, with MPC in closed-loop control   (9-17)

The GoptMV−MV gain tells how much the optimized CV set point must move to
compensate for a unit MV limit violation.
[Figure 9-47. MV1, MV2, and MV3 driving CV1, CV2, and CV3 (the optimized CV)]
The CV3 set point is set as a maximum limit of MV3 if it represents pro-
duction rate or as a minimum limit of MV3 if it represents production cost.
When the predicted MV stays within limits, CV3 set points are achieved
and kept as normal MPC set points. However, when the predicted value of
one or more MVs is out of limits, the optimized working set point value is
changed according to equation 9-16 to keep MVs within limits, effectively
retaining CV control.
∆CV(t + p) = A * ∆MV(t + c)   (9-18)

where:
∆CV(t + p) = [∆cv_1, ..., ∆cv_n]^T = changes in controlled and constraint variables
The objective function for maximizing product value and minimizing raw
material cost is defined jointly in the following way:
where:
UCV = cost vector for a unit change in CV
UMV = cost vector for a unit change in MV
The LP solution is always located at one of the vertices of the region of fea-
sible solutions. An illustration for a two-dimensional problem explains the
point; see Figure 9-48.
[Figure 9-48. Region of feasible solutions bounded by the MV1min, MV1max, MV2min, and MV2max lines and the CV1 and CV2 limits; the optimal solution lies at one of the vertices]
The region of feasible solutions for two controlled variables and two
manipulated variables is an area contained within MV1 and MV2 limits,
represented by the vertical and horizontal lines, and CV1 and CV2 limits,
represented by the parallelogram, per Equations 9-22 and 9-23.
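The vertex property can be checked numerically: for a linear objective over a polygonal feasible region, it is enough to evaluate the objective at the vertices (the region and cost coefficients below are assumed for illustration; NumPy assumed):

```python
import numpy as np

# Assumed 2-D feasible region (a polygon given by its vertices) and a
# linear objective c^T x to be maximized.
vertices = np.array([[0.0, 0.0],
                     [4.0, 0.0],
                     [4.0, 2.0],
                     [1.0, 3.0],
                     [0.0, 2.0]])
c = np.array([3.0, 2.0])

# For an LP, the maximum over the region is attained at a vertex,
# so evaluating the objective at each vertex is sufficient.
values = vertices @ c
best = vertices[np.argmax(values)]
```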
[Figure: optimizer–MPC data flow. MPC supplies MV current values and CV predictions (steps 1 and 2); the optimizer computes MV and CV optimal targets subject to MV limits, CV limits, and CV manual targets (step 3); MPC then executes the next MV moves (steps 4 and 5)]
References
1. Garcia, C.E., Prett, D.M. and Morari, M., “Model Predictive Control: Theory
and Practice – A Survey,” Automatica, 25(3), 1989, pp. 335–348.
2. Muske, K.R. and Rawlings, J.B., “Model Predictive Control with Linear
Models,” AIChE Journal, 39(2), 1993, pp. 262-287.
3. Froisy, J.B., “Model Predictive Control: Past, Present and Future,” ISA
Transactions, 33, 1994, pp. 235-243.
4. Qin, S. J. and Badgwell, T. A., “An Overview of Industrial Model Predictive
Control Technology,” Fifth International Conference on Chemical Process Control,
AIChE and CACHE, 1997, pp. 232-256.
5. Badgwell, T.A., and S.J. Qin, “Review of Nonlinear Model Predictive Control
Applications,” in Non-linear Predictive Control: Theory and Practice, B.
Kouvaritakis and M. Cannon eds., IEE Publishing, 2001, pp. 3-32.
6. McMillan, G.K., Process/Industrial Instruments and Control Handbook, McGraw-
Hill, 1999, ISBN 0-07-012582-1, pp.10.190-220.
7. Lee, J.H., Morari, M., and Garcia, C.E., “State-space Interpretation of Model
Predictive Control,” Automatica, vol. 30, no. 4, 1994, pp. 707-717.
8. Wojsznis, K.W. and Wojsznis, P.W., “Robust Predictive Controller in Object-
Oriented Implementation,” Advances in Instrumentation and Control, vol. 48,
part 1, 1993, p. 521.
9. Morari, M., Ricker, N.L., and Zafiriou, E., “Model Predictive Control,”
Workshop No. 4, American Control Conference, Chicago, June, 1992.
10. Wojsznis, K.W., Blevins, L.T., and Nixon, M., “Easy Robust Optimal Predictive
Controller,” Advances in Instrumentation and Control, ISA Tech, New Orleans,
2000.
11. Prett, David M. and Garcia, Carlos, Fundamental Process Control, Butterworths,
1988, ISBN 0-409-90082-6.
12. Camacho, E.F., and Bordons, C., Model Predictive Control in the Process Industry,
Springer-Verlag, London, 1995, ISBN 3-540-19924-1.
13. Shinskey, F.G., Process Control Systems: Application, Design, and Adjustment, 3rd
edition, McGraw-Hill, New York, 1988.
14. Wojsznis, W., Gudaz, J., Mehta, A., and Blevins, D., “Practical Approach to
Tuning MPC,” in Proceedings of ISA 2001 Conference, Houston, September,
2001.
15. Cutler, C. R. and Ramaker, D. L., “Dynamic Matrix Control – A Computer
Control Algorithm,” Proceedings, JACC, San Francisco, 1980.
16. DeltaVTM Home Page: http://www.easydeltav.com.
17. McMillan, Gregory, “Opportunities in Process Control,” InTech, November,
1992.
18. McMillan, Gregory, “A New Era for Model Predictive Control,” Control, vol.
XIII, no. 05, 2000, pp. 45-50.
19. McMillan, Gregory, Good Tuning: A Pocket Guide, ISA, 2000, pp. 78-82.
20. Butler, Douglas L., Cameron, Robert A., Brown, Michael W., and McMillan,
Gregory K., “Constrained Multivariable Predictive Control of Plastic Sheets,”
ISA Expo 2001, Houston, Paper 1022.
22. Box, George E. P., Jenkins, Gwilym, M., and Reinsel, Gregory C., Time Series
Analysis, Forecasting and Control, 3rd Edition, Prentice Hall, New Jersey, 1994.
Practice
Overview
A virtual plant environment is essential for the testing of control systems
and processes and the training of operators and engineers before and after
startup. The capital cost associated with a new plant or a major plant
expansion in the process industry often involves hundreds of millions of
dollars. Any improvement in the design, engineering, construction, or
startup that allows full operation of the plant to be achieved in a shorter
period of time can result in significant savings to the investors in such a
project. High-fidelity process simulations that can employ most of the
equations and physical properties in chemical engineering are commonly
used by many industries in the design of a process to examine and verify
process performance under a variety of operating conditions. The technol-
ogy is now available from companies to convert these steady-state models
into high-fidelity dynamic models with true pressure and flow dynamics
by the simple addition of volumes, pump curves and efficiencies, and
pressure drops. Details on process equipment, control valves, and mea-
surements are added so that the model provides a dynamic simulation of
everything in the field and becomes a warehouse of plant knowledge and
tools for training and testing with widespread access [10.1] [10.2].
High-fidelity dynamic simulations are used to test how the process and
control system reacts during equipment failures, upsets, and startups
[10.3] [10.4]. Such simulations provide an important learning tool for process and control
engineers as well as operators [10.5]. The virtual plant can go where you don’t
want the real plant to go or don’t think the real plant can go. Thus, it can show
how to recover from undesirable situations and reveal unforeseen or doubted capabilities.
For the virtual plant to show the true performance of an operating plant,
the exact control system must be used together with a high-fidelity
dynamic simulation of everything in the field. The ability of a dynamic
simulation package to duplicate the behavior of a control system is lim-
ited, in most cases, to simple control loops with a generic operator inter-
face. It is unreasonable to expect a dynamic modeling package to match
the investment made by control system manufacturers in the development
of an extensive function-block control language. Even if the model had
more control capability, the effort of duplication of the control system may
approach the cost of the process model. Even more important, it is funda-
mentally wrong to test anything other than the actual configuration.
Therefore, reconstructing or emulating the control system in a computer is
both expensive and unreliable. In some cases, actual control-system equip-
ment has been connected to process-simulation software by special inter-
faces. However, the initial and upgrade cost to plants is too high to
provide this setup in training and testing centers, particularly since hard-
ware today rapidly becomes obsolete.
These setups are still used today to provide training and testing before the
I/O is available in the field. The virtual plant by definition has no field
I/O, so it can be used even for pre-project studies. The virtual plant provides
a lower-cost and more flexible, portable, and accessible alternative to the
traditional hardware setup in a training room. The use of the actual hard-
ware for studies, testing, and training is not only a dinosaur but is also
problematic because access is restricted and delayed. By the time the con-
trol system is purchased, it is often too late to make changes in the process
design based on what is learned from the simulation. Problems in control
schemes and equipment sizing and operator errors can result in damaged
equipment, spills, and an extended time spent in commissioning the plant
or in achieving production or quality targets. Skimping on simulation util-
ity and fidelity is risky business.
[Figure 10-1 timeline: plant design (process and control), system requirements, control system purchase, configuration, installation, checkout, training, startup, and operation]
Figure 10-1. Virtual Plant Capability for Checkout and Training Before Startup
it is possible to set the process and control simulation execution faster and
slower than real time.
Opportunity Assessment
The concept is simple, but it has far-ranging benefits. The integration of a
scalable virtual DCS with a dynamic plant model via Object Linking and
Embedding (OLE) for Process Control (OPC) on a laptop or multi-node PC
network places a virtual plant at your fingertips. The result is an accessible
graphical tool in a familiar Windows environment that can develop, store,
and share a knowledge base of both the process and the control system,
including the operator interface. The scalable virtual DCS is the ware-
house of control knowledge and expands the access to testing and improv-
ing the control-system strategy from the configuration specialist to the
process control and the process engineer. Similarly, the dynamic plant
model is the warehouse of process knowledge and extends the access to
testing and improving the plant dynamics from the simulation specialist
to the process and process control engineer.
The following are some of the areas of expected benefits:
Most of the costs of advanced controls are associated with the time in the
field needed to generate test data and develop experimental models. As
more dynamic process models are built, it is possible to generate some of
the data and, in particular, to be able to identify some of the process gains
and time constants to provide a more efficient implementation of
advanced controls. This virtual plant capability is used to test the effect of
changing raw material, operations, and equipment conditions offline
ahead of time so that the performance and on-stream time of the control
system are more sustainable after the specialists leave the plant site.
The estimated savings from one company’s use of the virtual plant environ-
ment in the areas of design, checkout, operator training, and continued
improvements in the process operation are shown in Figure 10-2.
Examples
Examples of how a virtual plant is used in operator training and for plant
and control system design follow.
Failure Handling
An engineer is responsible for the control design of a new plant. The control
system will not be shipped for another month, but the engineer has installed
a control system as part of a virtual plant on an office PC. One loop that is
critical to plant operations uses a feedforward signal from
a feed stream that is not used for some products. Under the condition of
no flow, the input signal may have a BAD status. It is important in the
design of the control system to understand firsthand how the PID behaves
under this condition.
To quickly learn how the PID controller operates under these conditions, the
engineer assigns the associated control module to a PC, downloads it, and
then, through the online engineering tools provided with the control system,
examines the PID and the feedforward signal delivered by an AI block. Using
the SIMULATE parameter of the Fieldbus input blocks to the PID controller, the
engineer establishes a normal operating condition. By changing the SIMULATE
parameter status of the feedforward input to BAD and then back to GOOD, the
engineer observes that the operation of the PID is not disrupted by the
status going BAD. Knowing that the control system PID addresses this
situation, the engineer is able to finalize the design of this critical
control loop.
Control Response
Plant production is limited by feedstock processing. To determine if a pro-
posed process design change will allow greater throughput, a high-fidelity
dynamic simulation of the process is constructed. The engineer responsi-
ble for resolving the production limitation must determine whether the
product specifications can still be met using the original control strategy.
To examine the control-system response to changes in feedstock, the engineer
installs the control system on his development PC with a high-fidelity
process-simulation package to create a virtual plant.
By assigning the current control strategy used in the plant to his PC and
configuring the process simulation to access the control I/O blocks, the
engineer is able to fully simulate the dynamic response of his plant with
the current control strategy. By introducing the expected feedstock
changes into the process simulation, he determines that the current control
strategy does a good job but that the tuning of some loops will need to be
changed. Using the control system autotuning capability in this simulation
environment, he retunes the simulated control loops and establishes what
the best tuning is for the new process design.
Operator Training
An operating area of an existing plant is being expanded. The design work
associated with the process changes has been verified using a steady-state
process simulation. As part of this modernization project, a new control
system is being installed in this area. This new control system is designed
to allow all of the control system to be simulated on a single PC or
distributed among multiple PCs. The plant operators are not familiar with the
new control system or the process changes and their impact on the plant
operation.
Each operator is provided with a PC that is a virtual plant with the actual
displays imported from the real control system. The process simulation is
connected to the imported control system configuration using the OPC
interface. The effort to create the training system is minimized by the fact
that the actual control system configuration is directly and easily imported
and exported to facilitate rapid revision of the displays and configuration.
By incorporating the process simulation used in the design process, the
training system accurately reflects the dynamic process response so the
operators are trained on the process changes as well as the controls and
new operator interface.
Application
General Procedure
All of the OPC-compliant software used in the actual control system
should be loaded into the virtual plant. Dynamic simulations and virtual
control systems require a lot of memory and are processor hogs. PCs more
than a couple of years old are undesirable.
The flow is specified for some steady-state models that are used for flow
sheet simulators, whereas for a dynamic model the flow depends upon
flow resistances and the curves of prime movers just as it would in a real
plant. These steady-state models often have negative pressure drops.
Valve positions and sizes, pump curves, and static heads have no effect on
tem studies and testing are missing. Drastically wrong pressure profiles
and missing flow resistances, prime movers, jackets, coils, and utility
streams are the major obstacles in converting a steady-state model to a
dynamic model. The degree of detail needed to make the model dynamic
is also the degree needed to make the model sufficiently representative of
actual plant operation to enable reconciliation and real-time optimization.
Steady-state models use techniques convenient for converging to a new
process design. Almost any flow, temperature, or composition flagged as
adjustable can be specified and variables optimized. A dynamic model,
like the real plant, relies upon control loops and final elements.
The major steps and check points for the creation of a high-fidelity
dynamic virtual plant are:
1. Set up one or more PCs, each with at least 512 MB of memory and a 1
GHz or faster processor running a Windows NT or XP Professional
operating system.
2. Connect multiple PCs to a dedicated control network and the local
area network (LAN). The LAN connection must be set up so that
PC-to-PC communication is direct and circumvents the rest of the
LAN, which is reserved for transmission of files and actual plant
data for adaptation of the process model.
3. Install the software for the process simulator, the virtual
representation of the scalable DCS, the data historian, and tools for
advanced data analysis (e.g., multivariate principal component
analysis) and for advanced control (e.g., real-time optimization).
4. Create dynamic-ready high-fidelity steady-state simulations:
a. Add a valve with the correct inherent flow characteristic (e.g.,
linear or equal percentage) to each stream to provide a flow
resistance and a method of stopping and starting the flow to
each stream.
b. Use heat exchangers with actual utility streams instead of
simple heat sources or sinks and duty streams.
c. Add all pumps and compressors that actually exist.
d. Add the correct pressure drops for each valve, heat exchanger,
and column tray and the correct pressure rise for each prime
mover biased to include the static head (a pump or compressor
curve biased to include static head is preferable to a power
specification).
e. Make sure the minimum pressure drop for any flow resistance
on a stream entering or exiting the flow sheet is at least 2 psi
(other resistances should have a pressure drop of at least 0.2
psi).
f. Check the pressure profile throughout the system to make sure
there is no reverse flow (make sure the feed pressure is much
greater than the tray pressure).
g. Activate the automatic sizing routine to set the flow coefficients
for each flow resistance (valve, exchanger, or tray).
h. Add the correct holdup volumes for each piece of equipment.
5. Add the transfer functions to simulate time delay, lag, and noise
for each measurement.
6. Add the stick-slip, dead band, and stroking time for each control
valve.
7. Import the actual configuration of the basic control system and
faceplates. Import the higher-level controls and displays as they
become available.
8. Complete the interface table between the process model and
control system.
9. Verify that all of the basic PID controllers are available and have
the correct control actions and set points and conservative tuning
settings.
10. Put all level and pressure controllers in automatic to prevent
volumes and headers from going empty and to negative pressures.
11. Make sure all of the controller outputs are initialized correctly to
match the control valve positions; this may require turning on the
integrator.
12. With the integrator on and the dynamic model running,
commission the rest of the controllers, line out the dynamic plant
to match the steady-state case, and save out the case.
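As a rough sketch of the measurement dynamics in step 5 and the valve behavior in step 6, the following Python fragment shows one way to model a transmitter (first-order lag plus dead time plus noise) and a control valve (dead band, stick-slip resolution, and a stroking-time rate limit). All class names, parameters, and values here are illustrative assumptions, not the API of any particular simulation package.

```python
import random
from collections import deque

class Measurement:
    """First-order lag + dead time + noise for a simulated transmitter."""
    def __init__(self, lag_s, delay_s, noise_pct, dt_s, pv0=0.0):
        self.alpha = dt_s / (lag_s + dt_s)                     # lag filter coefficient
        self.buf = deque([pv0] * max(1, int(delay_s / dt_s)))  # dead-time buffer
        self.noise = noise_pct
        self.pv = pv0

    def update(self, true_pv):
        self.buf.append(true_pv)
        delayed = self.buf.popleft()                    # apply dead time
        self.pv += self.alpha * (delayed - self.pv)     # apply first-order lag
        return self.pv + random.gauss(0.0, self.noise)  # add measurement noise

class Valve:
    """Dead band, stick-slip (resolution), and stroking-time rate limit."""
    def __init__(self, deadband_pct, resolution_pct, stroke_s, dt_s, pos0=0.0):
        self.db, self.res = deadband_pct, resolution_pct
        self.rate = 100.0 * dt_s / stroke_s   # max % of travel per step
        self.pos, self.last_dir = pos0, 0

    def update(self, signal_pct):
        err = signal_pct - self.pos
        direction = 1 if err > 0 else -1
        # dead band only applies on a reversal of direction
        threshold = self.res + (self.db if direction != self.last_dir else 0.0)
        if abs(err) < threshold:
            return self.pos               # stuck: signal change too small
        self.last_dir = direction
        step = max(-self.rate, min(self.rate, err))
        self.pos += step                  # slip, limited by stroking time
        return self.pos
```

With this kind of wrapper around each model output, the controllers in the virtual plant see realistically imperfect signals and final elements rather than ideal ones.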
The value of being able to use an identical configuration for both the actual
control system installed in the plant and the operator training system cannot
be overstated. Maximum benefit is achieved by allowing either physical
I/O or simulated I/O to be selected. With this feature, only one control
system configuration needs to be developed and maintained.
Figure 10-3. Hardware-Based Simulation System for Checkout and Training
The major benefit of this approach is the dramatic reduction in the hard-
ware required for configuration development, control and operator inter-
face checkout, and operator training. The modules developed for the
control system are assigned to the PC and executed as though they were in
the controller. Using the simulation capability of input-output function
blocks, modules may be created to simulate the process for control system
checkout and operator training. Also, since support is provided for access
block parameters through OPC, it is possible to execute and check out
applications that normally run in a workstation and that access informa-
tion in the simulated controller through OPC.
In the real world, both the plant and the controllers run in real time. In
simulated time, depending on the modeling rigor and computing
resources, the simulation can potentially run faster than real time, which
is beneficial for slow processes because it shortens training and
control-system evaluation times.
Model characterization studies for multi-variable predictive control appli-
cations are other situations where faster-than-real-time operation is essen-
tial. If the process models are running faster than real time, it is important
for the controls to run at the same speed as the simulation. Otherwise, the
controllers tuned for real time will not respond properly in simulation
time.
The reverse problem occurs when the process simulation cannot meet the
real-time constraint because of inadequate processing power or an overly
rigorous model. In either situation, the controller should be able to run
slower than real time to match the process-model execution speed. The
computational load of the process modeling can vary over time, depend-
ing on the state of the model. The complete system is most flexible when
the controller time base is dynamically changed as the complete simula-
tion progresses. To address these requirements, the capability is provided
in this modern control system to run function blocks faster or slower than
real time in a simulation environment. In addition, a trainer or application
may coordinate the execution of the control system and process simula-
tion.
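The speed-factor idea described above can be sketched in a few lines: the blocks' time-dependent calculations always use the configured execution rate, and only the wall-clock pacing between executions is scaled. The loop and block below are a minimal illustration, not the scheduling mechanism of any particular control system.

```python
import time

class Totalizer:
    """Toy function block: accumulates simulated elapsed time."""
    def __init__(self):
        self.sim_time = 0.0
    def execute(self, dt):
        self.sim_time += dt  # time-dependent math uses the configured rate

def run_blocks(blocks, exec_rate_s, speed_factor, steps):
    """Execute function blocks with a speed factor.

    A factor of 30 runs 30x faster than real time; 1/30 runs 30x slower.
    The dt passed to each block is always the configured execution rate,
    so tuning and dynamics stay consistent at any speed.
    """
    wall_interval = exec_rate_s / speed_factor
    for _ in range(steps):
        t0 = time.monotonic()
        for block in blocks:
            block.execute(dt=exec_rate_s)
        # sleep only for whatever remains of the scaled interval
        elapsed = time.monotonic() - t0
        if wall_interval > elapsed:
            time.sleep(wall_interval - elapsed)
```

Because the sleep interval (not the block dt) is what the factor scales, a controller tuned for real time behaves identically at any simulation speed, which is exactly the coordination requirement stated above.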
Online Adaptation
Figure 10-4 shows how a high-fidelity dynamic process model is adapted
online to match the process by a straightforward but innovative use of
model predictive control (MPC). It does not require any special features or
algorithms and is well within the ability of the average control engineer to
implement. It is implemented without any intrusion or risk to actual plant
operations since it reads but does not write to the actual plant control sys-
tem. Users are surprised at how much they can learn about the model and
the plant. The basic procedure is to tackle one unit operation at a time as
follows:
1. An MPC is set up with key process variables from the virtual plant
as controlled variables (CV) and key model parameters from the
virtual plant as manipulated variables (MV). The targets are the
actual measurements from the real plant.
2. Analog output (AO) blocks are added to the virtual control system
to receive the key model parameters. The AO blocks are
automatically initialized by the model when the integrator is
started.
3. Analog input (AI) blocks are added to the virtual control system to
receive the actual measurements and set points from the real plant.
These measurements become the MPC targets and the actual set
points become the SUP set points of the virtual loops.
4. The virtual plant is connected via a data historian to the real plant
DCS system by a LAN so that it has access to actual measurements
and set points.
identification, tuning, and commissioning of a MPC system are
followed but the MPC execution time and move suppression are
set to provide a very slow and gradual response to optimize key
model parameters without disrupting the virtual plant’s response
to upsets.
8. Once the model parameters settle in on average values, implement
these as the new starting point for the parameters and move on to
another unit operation. If a model parameter can change (e.g.,
column tray pressure drop due to fouling), the MPC is left on but is
slowed down even further, since most process changes, such as
fouling and catalyst activity, are usually very slow.
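A deliberately simplified stand-in for the MPC-based adaptation — a slow integral-only update of one key model parameter so the virtual plant's CV tracks the actual plant measurement — can be sketched as below. The tiny gain plays the role of heavy move suppression; the callables `plant_pv` and `virtual_pv` are hypothetical placeholders for the historian read and the virtual-plant prediction.

```python
def adapt_model_parameter(param0, gain, plant_pv, virtual_pv, steps):
    """Slowly drive a virtual-plant model parameter so the virtual CV
    tracks the real plant measurement.

    This is an integral-only caricature of the MPC adaptation described
    above: a small gain gives the very slow, gradual response needed to
    avoid disrupting the virtual plant's response to upsets.
    """
    param = param0
    for _ in range(steps):
        error = plant_pv() - virtual_pv(param)  # plant minus model CV
        param += gain * error                   # slow, gradual correction
    return param
```

For example, if the virtual CV is ten times the parameter and the plant reads 50, the parameter drifts toward 5 over many iterations rather than jumping there in one step.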
(Figure 10-4 note: the virtual loops run in the RCAS mode with the set points
used in the actual loops, except that the virtual loops controlling the MPC
CVs must be in manual; secondary virtual loops go to RCAS.)
Application Detail
Virtual versions of scalable distributed control systems (DCS) are now
available that support the development of control-system configuration,
logic checkout, and operator training using a single PC or multi-node PC
environment. These packages are designed to be used on a laptop or office
PC. Using this capability, it is possible to configure all of the features that
the control system supports (for example, continuous control, batch con-
trol, advanced control, and their associated workstation displays, alarms,
and historian data collection) without purchasing system hardware.
Using this capability, the execution is simulated for the operator interface
and selected control modules defined for the plant control system. This
capability is used for control logic checkout. And, using the control and
I/O block and Fieldbus block simulation capability, field measurement
values and status are manually supplied to the simulation—or provided
by blocks used to simulate the process.
The full range of OPC features in the control system is available and may
be used for the development and testing of OPC interfaces. Plus, before
the plant is even constructed, the OPC-compliant process simulation
package can be used for process and control design checkout. Thus, most of
the features of a complete control system are made available on the PC.
The Fieldbus Foundation architecture has been adopted by the control sys-
tem. The control functionality that is assigned to a controller or Fieldbus
device in the control system is executed in the PC and the results dis-
played in the configured operator interface, without any change in the
control or operator interface configuration. Using the simulate capability
of the control system and Fieldbus input-output function blocks, field
measurement values are available for control-system and operator-inter-
face checkout as shown in Figure 10-6.
Figure 10-7. Process Models Constructed from Function Blocks
A user interface is provided to run the control simulation faster or slower
than real time and to pause the execution. Common requirements such as the
initialization of the simulation, block mode, and block dynamics are
addressed. An OPC interface is provided to allow process-simulation
packages to also invoke the same requests that are made through this user
interface to the control simulation.
Rules of Thumb
Rule 10.1.— The actual control system configuration and displays must be
imported and exported from the virtual plant. The use of a reconstructed or
emulated control system and operator interface is fundamentally
unsound. The actual configuration and displays and all the features of the
actual system must be used.
Rule 10.2.— The use of actual hardware instead of a virtual plant is costly, delays
and restricts the access, and decreases the utility of a simulation for training and
testing. If an engineer needs something he can touch, give him last year’s
obsolete controller.
Rule 10.3.— The process simulation must use an OPC interface to the control-
system configuration and software tools. This will facilitate the connection of
the system to other advanced diagnostic, data-analysis, and control tools.
Rule 10.4.— The definition of the information flow between the control system
and the process simulation should be configurable and easily done without requir-
ing programming knowledge. The connection should be as simple as brows-
ing and dragging and dropping tags in an interface table.
Rule 10.5.— The process simulation and control system should support execu-
tion faster or slower than real time. Also, the ability should be provided to
freeze both.
Rule 10.6.— The virtual plant should support single commands to initiate setup
of the process simulation and control system to a known state. There should be
an ability to easily capture and restore the state of the process and control
system.
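The capture-and-restore capability in rule 10.6 can be sketched as a small snapshot manager. Here the process-model and control-system states are represented as plain dicts standing in for the real state objects; a real system would serialize its own internal state.

```python
import copy

class SnapshotManager:
    """Capture and restore named states of the virtual plant (rule 10.6)."""
    def __init__(self):
        self._snapshots = {}

    def capture(self, name, model_state, control_state):
        # Deep copies so later simulation progress cannot alter a snapshot.
        self._snapshots[name] = copy.deepcopy({"model": model_state,
                                               "control": control_state})

    def restore(self, name):
        # Return fresh copies so restoring twice gives independent states.
        return copy.deepcopy(self._snapshots[name])
```

A single command maps to one `capture` or `restore` call, which is what lets an instructor reset a training scenario or an engineer re-run a test from a known steady-state case.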
Rule 10.8.— The process-simulation software must be graphical and high-level,
with a wide variety of physical property packages, state equations, and config-
urable function blocks and unit operations. Custom simulations that require
the setup and numerical integration of differential equations are too com-
plex to construct and maintain and do not capture all the physical properties
and their dynamic interaction with process conditions. If you spent
millions of dollars to create a custom simulation of an entire plant, only
the programmers who created the custom model could debug and modify
it.
Rule 10.9.— To verify that the configuration matches the control definition and
to teach operators how to use the interface, an enhanced tieback simulation is suf-
ficient. Configuration engineers prefer to use a tieback simulation to test
their work so they don’t have to get involved in the process or learn how
to start and stop a process simulation. Also, these tieback models have an
input and output file card channel (FCC) screen that lets the configuration
engineer conveniently set and monitor input and output signals.
Rule 10.10.— To test whether the control definition is correct, to learn how the
process and control system behave for upsets, startups, and failures, and to proto-
type advanced control solutions, a high-fidelity dynamic process simulation is
needed. While the configuration engineer is too busy and will move on to
the next project too quickly to take advantage of a high-fidelity virtual
plant, it is a valuable ongoing resource for operators and engineers at the
plant and can lead to real-time optimization. While this method does not
check the I/O file card channels, these assignments are verified anyway as
a part of the normal course of the checkout of the field wiring.
Guided Tour
This tour illustrates the potential ease of use and convenience of an inte-
grated interface that is possible for control simulation embedded in an
industrial control system. The following areas are addressed below.
Modules that have been engineered for the plant control and normally
execute in hardware controllers are re-assigned to run in the PCs that have
the control system software installed. When these modules are loaded into
the module folder on the PC, as shown in Figure 10-9, the assigned modules
automatically begin to execute at their configured rate of execution. Late
binding of module information in the communications between the PC
stations allows the operator and engineering interface to access the inputs,
alarms, and calculated values associated with these modules without any
change in system configuration.
Within the PC nodes that make up the virtual plant environment, an
application may be added to provide process simulation. The
process-simulation application may use OPC to read the actuator positions
from the I/O blocks within the modules. Based on these values, the
process-simulation package may calculate the process inputs and write
these values to the simulate parameter of the input blocks. Thus, within
the accuracy of the process model, the control system responds as it
would to the actual plant.
When a module is selected from this view, details concerning the I/O and
control blocks in the module are automatically displayed. Also, the user
may enable or disable simulation of an individual block from this
interface. Using this capability, the user can simulate disturbances in the
process during control-system checkout and operator training.
Theory
The software and the associated configuration of some modern control
systems are designed to be independent of the hardware, and thus may
execute in multiple operating environments without change. When this
approach is taken, it is possible to distribute control-system functionality
in a variety of ways without needing to reengineer the software or recon-
figure the associated applications. All features of the control system are
combined on a single platform or distributed between multiple PCs to
This capability is used to check out control logic and operator displays.
Also, using simple tieback from output blocks to the simulation input val-
ues of input blocks, it is possible to provide simple process simulation for
operator training.
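The simple tieback just described can be sketched in a few lines. Here a plain dict stands in for the OPC-accessible block parameters, and the tag names are made up for illustration; a real system would perform the reads and writes through an OPC client.

```python
def tieback_scan(block_db, tie_map):
    """One scan of a simple tieback simulation.

    Each output block's OUT value is copied to the simulate value of the
    mapped input block, so the operator sees the loop respond during
    checkout and training. block_db stands in for the OPC server's
    tag/parameter space; tie_map pairs output tags with input tags.
    """
    for out_tag, in_tag in tie_map.items():
        value = block_db[out_tag]["OUT"]         # read the output block
        block_db[in_tag]["SIMULATE_IN"] = value  # write the input block
```

Run once per scan interval, this closes the loop around the configured control modules without any process model at all, which is why a tieback is sufficient for logic and display checkout but not for studying process behavior.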
The major benefits realized from structuring the interface to the control
system around OPC and Fieldbus function blocks are described below.
The OPC server of this control system abstracts the deployment of the
controller and Fieldbus-based control items from the OPC client. For
example, consider a system that employs a large number of Fieldbus
devices. In order to simulate such a system easily, the measurement and
control function blocks need to be reassigned to run on a workstation. If
the OPC server is set up correctly, the OPC client does not need to know
that control execution has been reassigned. The same client configuration
connects to the actual scalable DCS or the control assigned to a PC with no
change in configuration.
Custom functionality can be developed with standard tools and plugged
seamlessly into the simulation engine process. Customers do not have to
rely on the simulator manufacturer to provide the required functionality.
The OLE automation interface, OPC browser interfaces, and tools based
on these technologies help customers flexibly configure large, integrated
simulation systems. Maps must be created to associate elements in the
control system with elements in the process-simulation system. This map-
ping is generated during the configuration phase of a project. Both visual
drag-and-drop and more automated programmatic generation features
are desirable for flexible map configuration.
[Figure: process-simulation applications connected by OPC to emulated
controllers and operator station(s) over the normal control system
communications.]
in setting up the simulation. Some common types of requests that are
supported by the virtual plant are described below.
The instructor interface to the virtual plant may allow all modules
assigned to a node to be examined. Modules that are not correctly set up
for simulation are automatically identified. The block modes and simulate
parameters can be changed independently. The module summary indicates
whether modules are correctly set up for simulation, and the module detail
view indicates abnormal conditions for simulation. Block Mode, Simulate,
and OUT values can be changed, and simulation enabled or disabled, with a
single request.
When a speed factor is applied, the modules in the node run faster or
slower than their configured execution rate based on the factor. However,
time-dependent calculations in the blocks still use the configured execution
rate. Thus, a true faster- or slower-than-real-time response is provided.
Typically, the virtual plant may support speed factors from 1/30 to 30
times real time. How fast the virtual plant can run depends upon the dis-
tribution of the models among multiple PCs, the speed of the PC proces-
sors, and the complexity of the process models.
References
1. Mansy, M. M., McMillan, G. K., and Sowell, M. S., “Step Into the Virtual
Plant,” Chemical Engineering Progress, February 2002, pp. 56-61.
2. Butler, D. L., Cameron, R. A., Eckelman, L. D., and McMillan, G. K.,
“Virtual Plants for Hands-On Advanced Control,” ISA, Houston,
September 10-13, 2001, Paper 1053.
3. Lo, P., et al., “Distillation Tower Pump Failure,” Control, October 2000,
pp. 71-82.
4. Stanley, P., “Dynamic Simulation for Insight,” Chemical Processing,
December 1999, pp. 47-50.
5. Chin, K., “Learning in a Virtual World,” Chemical Engineering, December
2000, pp. 107-110.
Assessment Questions
b. Can you reschedule your operations so you can send people to
other departments or use operators to help on maintenance
during shutdown periods?
c. Are critical quality and economic variables calculated,
analyzed, and indicated online? Give the operator information
on yields, energy consumption, production rates, rework
amounts, etc., to make the operator aware of how to keep
operating costs under control.
6. Reduce maintenance cost
a. Do you have maintenance costs due to poor operation (e.g.,
pumps cavitating, overloading, overpressure, etc.)? Automate
startup procedures and best operating practices for equipment.
b. Do you blow rupture discs and activate relief valves due to
manual misoperation or poor control systems? Use better control
and automation to reduce safety risk and possible downtime.
c. Could smoother control (reduced thermal and pressure shocks)
or tighter control (less byproduct and contamination) increase
time between repairing, replacing or cleaning equipment?
and find ideal settings for each product.
9. Reduce rework
a. Do you produce rework because the operator sometimes
makes mistakes? Look for automation in the areas where most
mistakes are being made.
b. Are all your operating instructions clear, up-to-date and easily
accessible? Use unique process parameter descriptions and
units to avoid confusing the operator at any time.
10. Reduce shutdowns and upsets
a. Do you have too many or too few alarms? Is the operator
alerted in advance without being confused by too many alarms?
Make alarms smart (e.g., suppress a “Low Flow” alarm when the
pump is stopped). Use advanced alarm handling and logging.
b. Does the operator have easy access to information when
something goes wrong? Can he do his troubleshooting when
things fail? Provide help displays with dynamic information
and instructions on corrective actions.
conditions. Color display access numbers to reflect mode
changes and interlock condition status.
e. Do you have shutdowns, upsets, and incidents due to
bypassing of interlocks? Use interlock-bypass procedures and
indications (forms). Display interlock and bypass status on
control system.
f. Do you have buffer tanks in between batch and continuous
processes? Smart level control can be used to prevent
shutdowns in continuous process by looking at batch cycle
times and smoothing continuous feed on batch problems (see
Appendix B).
g. Do you have a system set up to track every problem and fault
that generates upsets, rework or shutdowns? Learn from
mistakes and set actions to prevent their happening again. Use
problem reports.
h. Can product changeover (transition) time be reduced?
Automate changeover operations (emptying, cleaning, and
washing). Look for methods to reduce transition times and set
operating parameters for every product.
11. Reduce cycle time
a. Can you reduce batch cycle times by eliminating unnecessary
waits and introducing parallel sequences (e.g., starting heating
before loading, using head starts, combining hold periods with
cooling, etc.)?
b. Can you eliminate concurrent actions that slow down your
process (e.g., blowing cold air in a tank for mixing purposes
while heating it up)?
c. Can you speed up heating times and cooling times by using
energy more efficiently (colder water, better insulation, better
agitation, improved burner performances etc.)?
d. Can you prepare premixes in separate tanks in parallel with
other sequences, and use bigger pumps, feed lines, and valves?
The directional velocity limit spreads the flow upset over one half of the
surge working volume (e.g., set point = 50%). The up- and down-scale
velocity limits are set to keep the level near set point by means of a small
sawtooth around the average flow for the batch cycle with minimal feedback
correction. A head start of the feedforward flow for the first batch is
essential to establish a feedforward signal close to the average flow and to
reduce the sawtooth amplitude.
Without a head start and with a constant velocity limit set to achieve the
average flow after one batch cycle, the tank level will rise and then contin-
ually drop below set point until the full feedback correction is reached.
There are several important points to consider. First, a zero gain may cause
some controllers to think they have no proportional mode and are “integral
only.” Second, a zero gain or dead band region provides no surge capacity,
because the integrating response of the level will cause the measurement to
go to one end or the other of the region. Third, any reset action will cause
nearly sustained oscillations for low gain settings. Fourth, the control
valve must have a minimal amount of dead band and no stick-slip (that is, a
good actuator, packing, linkages, I/P, and positioner) or the limit cycle
will be large enough to upset the continuous flow loop.
Figure B-1 shows the use of a feedforward with a velocity limit adapted to
maximize the smoothing of the discharge flow based on current computed
time to violation of a level constraint in the surge vessel. The inlet flow to
the surge tank is from a batch operation that can be in a “hold” state
within and between batches. Consequently, the batch start time and cycle
time are variable. The discharge from the surge tank goes to continuous
operations and should be as smooth as possible.
[Figure B-1 plot: flow and average flow versus time, 0 to 10 hr.]
Equation B-1 computes the time to violation of the min or max level
constraint as the difference between the present level measurement and
the set point converted to gallons divided by the average difference
between the batch and continuous flow. Thus, gallons of operating room
divided by the net flow imbalance in gpm gives minutes to constraint
violation.
In Equation B-2 this same flow difference is divided by the time to viola-
tion to get the velocity limit that spreads out a batch cycle disturbance
over the free room in the surge tank. It includes an element of proportional
feedback control in that the velocity limit approaches an instantaneous
response as the level approaches its constraint. It smooths out small fluctu-
ations around the average batch flow, yet greatly accelerates the response
to large upsets and startup and shut-down conditions. It enforces the
material balance by gradually seeking the average continuous flow that
balances out the average batch flow. The implementation of this strategy
on surge tanks progressively smooths out batch operations, especially
where recycle streams are involved, by increasing the attenuation and
decreasing the propagation of upsets.
The batch flow is filtered by a time constant about equal to the off time to
create an average batch flow for the adaptation of the velocity limit oper-
and, but the instantaneous batch flow is the input to the velocity limit
function.
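Equations B-1 and B-2 as described above can be expressed in a few lines of Python. The units and the percent-to-gallons conversion here are illustrative assumptions; the function follows the text's formulation directly.

```python
def adapted_velocity_limit(level_pct, setpoint_pct, tank_gal,
                           avg_batch_gpm, continuous_gpm):
    """Equations B-1 and B-2 for the surge-tank velocity limit.

    B-1: time to constraint violation = operating room (gallons)
         divided by the net flow imbalance (gpm).
    B-2: velocity limit = the same flow imbalance divided by the time
         to violation, spreading the batch upset over the free room.
    """
    imbalance_gpm = avg_batch_gpm - continuous_gpm            # net imbalance
    room_gal = (level_pct - setpoint_pct) / 100.0 * tank_gal  # operating room
    time_to_violation_min = room_gal / imbalance_gpm          # Equation B-1
    return imbalance_gpm / time_to_violation_min              # Equation B-2
```

For instance, with a 1000-gal tank 10% above set point and a 20-gpm imbalance, the time to violation is 5 minutes and the velocity limit works out to 4 gpm per minute.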
automatic mode – PID controller mode to enable its set point to be locally
adjusted by the operator (also known as LOCAL and AUTO mode)
cascade mode – secondary PID controller mode to enable its set point to be
remotely adjusted by the output of a primary controller (also known as
REMOTE and CAS mode)
DDC mode – PID controller direct digital control mode to enable its out-
put to be adjusted by a sequence, interlock, or a computer (also known
as ROUT mode)
dead band – minimum change in the input for a reversal of direction that
will cause a change in the output (important for a control valve and a
manipulated variable)
dead time – time it takes for the process variable to get out of the noise
band after a change in the manipulated variable (same as time delay)
feedback control – a control algorithm to reduce the error between the set
point and the controlled variable (most often a PID or model predic-
tive controller algorithm is used)
filter time – time constant of a signal filter that is usually applied to the
PV but can be inserted anywhere in the configuration as a function
block (seconds or minutes)
manual mode – operator adjusts the controller output directly (PID algo-
rithm is suspended but will provide a bumpless transfer to automatic,
cascade, or supervisory control)
nonlinear system – gain, time constant, or time delay and hence controller
tuning settings are not constant but a function of time, direction, oper-
ating point, or load
open loop gain – final % change in the controller input divided by the %
change in the controller output with the controller in manual (dimen-
sionless steady state gain)
process gain – final change in the process variable divided by the change
in the manipulated variable (commonly used as the open loop gain)
remote set point – preferred value for the controlled variable that is not
locally set by the operator but comes from a control system (usually
reserved for cascade control)
repeatability – short term maximum scatter in the output for the same
input approached from the same direction and at the same conditions
(see reproducibility)
reproducibility – long term maximum scatter in the output for the same
input approached from both directions and at the same conditions
(includes repeatability and drift)
resolution – minimum change in the input in the same direction that will
cause a change in the output (important for a control valve and a
manipulated variable)
saturation – controller output is at either low or high limits that are nor-
mally set to match the valve being shut and wide open, respectively
(controller is effectively disabled)
scan time – time interval between successive scans for digital devices
(same as update time or control interval and the inverse of scan rate or
frequency)
self-regulating process – a process that will reach an open loop steady state
for a change in the manipulated variable if there are no disturbances
(no integrating or runaway response)
sensitivity – ratio of the steady state change in output to the change in the
input (poor sensitivity is caused by poor resolution and a low steady
state gain or flat curve)
steady state gain – final change in the output of loop components (control-
ler, valve, process, measurement) divided by the change in input
(static or zero frequency gain)
stick – lack of valve trim movement for a change in input signal caused by
friction or an undersized actuator (major source of valve resolution
limit)
time delay – time it takes for the process variable to get out of the noise
band for a change in the manipulated variable (same as dead time)
time constant – time it takes for the process variable to reach 63% of its
change after its time delay for a step in the manipulated variable (same
as time lag)
time lag – time it takes for the process variable to reach 63% of its change
after its time delay for a step in the manipulated variable (same as time
constant)
update time – time interval between successive updates for digital devices
(same as scan time or control interval and the inverse of scan rate or
frequency)
valve action – relative direction of change in flow for a change in the input
signal (“direct” or “inc-open” for fail closed and “reverse” or “inc-close”
for fail open)
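As a quick illustration of three of the definitions above (steady state gain, time delay, and time constant), the sketch below evaluates the open loop step response of a hypothetical first-order plus dead time process. The model parameters and the function name are illustrative assumptions, not from the text; the point is that the process variable reaches about 63% of its final change one time constant after the time delay.

```python
import math

def fopdt_step(t, gain, dead_time, time_constant, du=1.0):
    """Open loop response of a first-order plus dead time process
    to a step of size du in the manipulated variable at t = 0."""
    if t < dead_time:
        return 0.0  # nothing happens until the time delay has elapsed
    return gain * du * (1.0 - math.exp(-(t - dead_time) / time_constant))

# Hypothetical process: steady state gain K = 2.0, time delay 5 s, time constant 10 s
K, theta, tau = 2.0, 5.0, 10.0

final_change = K * 1.0  # steady state gain = final change in output / change in input
y_at_tau = fopdt_step(theta + tau, K, theta, tau)

print(y_at_tau / final_change)  # ≈ 0.632, the 63% point one time constant past the delay
```

The 63% figure comes from 1 - e⁻¹ ≈ 0.632, which is why the glossary defines the time constant (time lag) as the time to reach 63% of the change after the time delay.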
1. Control valve pressure drops that are not engineered. If you consistently
see 5 or 10 psi pressure drops or line size control valves, the
process or instrument engineers are playing the lottery with your
control valves. They will hit it big more often than not, because
stick-slip causes a limit cycle. With slip you can rest assured that
any match of the process variable (PV) with the set point (SP) is a
transitory event. It is like the PV was saying to the SP, “Just passing
through.” While these valves have generous amounts of all of
these largely undocumented features, stick-slip is simply
breathtaking.
3. The placement of differential head or insertion type (e.g., vortex, annubar,
thermal mass) flow meters downstream of the control valve. It screws up
the velocity profile and the next performance review of you and
your meter. Yet websites, newsletters, and books have this
arrangement proudly displayed as a logo for process control.
4. Recalibration as the solution to instrument problems. This gives the
appearance of doing something. The real problem is usually
associated with the operating conditions, application, or the
installation, but this requires some thinking. For example, the
orifice could be worn, the steam tracing could have boiled out the
liquid in the sensing lines, connections could be plugged,
condensate could have collected in sensing lines, and the
composition or any of the physical properties (density and
viscosity) could have changed.
5. Tight control of level in a surge tank. The purpose of the volume was
to absorb changes in flow and for the level to roll with the punches.
Keeping the level within a few percent of set point passes them on
and defeats the purpose of the tank. Many oscillations in a process
can be traced back to an overzealous person tuning a level
controller and an excessively tight set of low and high level alarm
limits.
6. Hydro test and flush of control valves and instruments. This is a good
idea if you want to see how tight a person with a wrench can make
the valve packing, whether the vendor really meant the pressure
over-range limit, or whether your valve trim, insertion meter, or
electrode can withstand welding rods and pipe wrenches traveling
at 9 fps.
7. Equalization lines without a purge or tracing. It is not a case of
whether you view the line as half full or half empty. You need to
choose full or empty and then install a purge or heat tracing to
guarantee it. Trusting technicians to periodically empty or fill lines
is an exciting exercise in wishful thinking. Vapors will condense
and fill up an empty line, or changes in pressure and temperature
will empty a full line at the least opportune times.
8. The overuse of orifice meters to measure flow. Orifices stink. To use
them for mass flow borders on the absurd. Apparently, engineers
are oblivious to measurement noise, poor rangeability from the
busted from all the gauges and electrodes that get busted during
construction and water batching.
13. Steam tracing left on during the summer. This is great for conducting
chemistry experiments, such as how hot the process fluid needs
to get before it gets sticky, solidifies, or polymerizes. Also, it
provides a good check on sensor temperature ratings and safety
programs for technicians to wear gloves.
14. The use of field-mounted switches. This is lots of fun if you like to
guess the trip settings, especially for field pressure and
temperature switches. Just don’t plan on figuring it out after a
failure, because the evidence will be blown up with the rest of the
plant. Limit switches on valves may be unavoidable but make sure
they are encapsulated and have no mechanical components. The
higher amp rating of other types doesn’t do you much good if they
get hung up or corroded. Just remember that limit switches, which
are designed to tell you if a valve has failed, fail more often than
the valve.
15. The use of homemade instruments, analyzers, and algorithms. The use of
special fabricated or designed stuff instead of off-the-shelf
standard solutions is exciting for the innovator but not for
everyone else who has to support or use it. The testing is
prejudiced and the documentation is nonexistent, but don’t let this
stop you; after all, that next promotion or graduate degree may just
be one special instrument or algorithm away.
16. The placement of all your eggs in one basket. Not a good idea unless
you are big or stupid enough to absorb the loss. The placement of
interlocks or control systems that can disable an entire plant in one
computer or controller or on one power supply or circuit is risky
business.
17. The use of too much reset action (too small an integral time) in reactor
and column temperature controllers. Reset has no sense of direction
and will only make a correction after it crosses the line, much like a
90-year-old driver.
18. The use of not enough derivative action in temperature and pH
controllers. Turn off the rate in these applications and experience
firsthand the exhilaration from acceleration. Just make sure the
trend recording shows the new highs reached by exothermic
reactors or steep titration curves.
19. The omission of positioners on fast loops. This was the fad in the days
of analog controllers and pneumatic positioners and Nyquist plots
of idealized boosters and valves. The real-life problems introduced
by bench settings, high packing friction, shaft windup, high
controlled variable 6, 50, 184, 308, 363, 419
controller commissioning 258
coriolis flow meter 36, 42, 70, 75
cross correlation 134, 136, 144, 156–157, 276
cumulative error 300–302
DCS 3, 20, 39, 120–121, 126, 150, 286, 386, 390, 393, 395, 397, 405, 427
DDC 189, 420
dead band 6–8, 23, 133, 416, 420
defuzzification 247, 250–251, 254
derivative time 57, 184, 203, 205, 214
diagnostic tool 132
direct digital control 189
discrete Fourier transform adaptation 236
distillate receiver level control 194
distillation
    column 17, 110, 193, 195, 315, 331
    tower 103, 264–265, 287, 316–318, 320–325, 408
    tower control 316
distributed control system 3, 120, 163
DMC 363–364, 369
dryer control 327
dynamic linear estimator 262–264, 276, 278–279, 294–295, 303, 311
dynamic matrix 363, 370–372, 381
Dynamic Matrix Control 363, 371
embedded expert system 174
energy balance 70, 72, 76, 78, 84, 99
epoch 300–301
evaporator control 326
execution period 333–334
execution rate 158, 332, 334, 407
expert system 163–174, 176–182
factor analysis 124
failure handling 387
feedforward control 11, 15, 17, 20, 25–26, 70–74, 76, 84, 100–102, 193, 251, 315, 319, 421
feedforward neural network 296, 304
fictitious set point 230–231
fieldbus 9, 75, 121–122, 143, 147–151, 153, 160, 385, 388, 395–397, 403–405
filter 11, 145, 173, 279, 366, 384
filter time constant 219, 373
final element 5–6, 12, 15, 22, 31, 34, 51, 67, 73, 199, 282, 421
finite impulse response 345, 367
FIR 345–346, 357–358, 367–368
forward chaining 180
fractionator 284
frequency domain 212, 216, 231–232, 236
frequency estimator 236
function block 147–148, 251, 278, 395
funnel control 350–351, 373
furnace pressure control 34, 195
future trajectory 308–309, 421
fuzzification 247, 251
fuzzy logic 153, 156, 182, 221, 239–251, 253–255, 257–258
    control 153, 239–244, 246, 250, 254–255
    control block 243
    inference rules 248, 253
fuzzy set 245, 247, 251
gain adaptation 227–228
gain margin 204–205, 207, 214, 222–224, 244
gain scheduling 65, 276–277, 409, 415
Gaussian distribution 152
gradient 300
hard constraints 225, 363
hidden layer 296, 302
hidden nodes 302, 305
histogram 133
Hotelling T2 142
hysteresis 187, 196, 200, 202, 210–213, 217, 237
IAE 51–52, 223, 225, 241
ill conditioned 338, 377
IMC 216, 218–221, 223–224
IMC tuning rule 220
incorrect mode 154–156
input layer 287
input sensitivity 290, 293, 297, 299
input weights 300
integral absolute error 223
integral time 55, 193, 214
Integrated Absolute Error 51, 241
integrated error 7, 14–15, 49, 52, 73, 82–83, 85, 108, 111, 349
integrating process 205, 216, 220–221, 225
integrating response 47, 190, 205, 355, 416
Internal Model Control 218
interpolation 233
inverse response 25, 59, 72, 194, 196, 209, 276, 315, 353–354
ISA standard form 219, 221
jitter 158–160
kappa analysis 271
knowledge database 171
Lambda factor 187, 198
Lambda tuning method 67, 187, 204
Lambda tuning rule 220
learning rate 300, 304
least squares solution 370
level control 19, 49, 57, 73, 184, 187, 190, 193–194, 196, 220, 249, 315–316, 413, 415
linear programming 377
linear regression 213, 262
liquid flow control 187–188
liquid pressure control 73, 188
load response 200
LP 116, 325–326, 335, 341, 377–379
magnetic flow meter 34, 37
manipulated variables 192, 310, 318, 353, 362
manual mode 339, 360, 421
material balance 13, 72, 77, 79, 187, 194, 417
measured disturbances 314, 360, 362–363
measurement noise 37, 57, 143, 183, 185, 187, 194, 196, 199, 279, 426
membership function 245–248, 250, 253–254, 258
minimum integrated error 83
minimum variance control 151, 219
mixed dynamics 334
mode 52, 55–56, 120, 122, 131, 144, 149, 155, 175, 279, 351, 403, 419–420, 423
model predictive control 11, 72, 101, 106, 130, 307–308, 312, 355, 362, 393
model residual plot 141–142
moisture control 240
MPC
    commissioning 351–352
    controller design 347, 370
multi-node environment 404
multi-valued logic 245–247, 254
multivariate analysis 138
negative feedback 46–47, 76, 80
neural network 181, 264–265, 271, 275, 278–279, 282, 287–288, 293, 296–297, 301, 304
    training 301
neutralizer 18–19, 59, 97, 108–110, 197, 204
NN 285, 290, 296, 299, 302–303, 305
noise protection 211, 213
objective function 377–379
OLE 386, 405–406
OPC 126, 148, 286, 386, 389, 392, 395, 397–399, 401, 404–406
open loop response 45–47, 49, 52, 68, 196, 206
operating constraints 126, 318, 336
operator interface 147, 163, 167–168, 175, 355, 360, 384–386, 389, 392, 395–396, 399
optimization 92–93, 116, 307, 323, 341, 353, 375, 377, 379–380
output layer 297
override control 313
over-training 301
paper machine 111–112, 331
parallel installation 43
partial least squares 121, 124, 137, 144, 265
peak error 7, 46, 49–50, 52–53, 55, 57, 68–69, 75, 82–83, 85, 183, 196, 204
penalty on error 350–351, 371
penalty on move 349–351
performance monitoring interface 145
periodic upsets 23, 73, 183, 200
pH control 33, 36, 51, 59, 103, 137, 201, 242
phase margin 214, 222–223
PID 51, 60–65, 192, 201, 213–214, 224, 226, 228–229, 231, 251, 257–258, 309, 315, 388, 420
PID controller tuning 51
pneumatic positioner 18
positioner 29, 70, 187, 416, 422, 429
positive feedback 29, 46–47, 59, 76
power spectrum 20, 133, 136, 156–157, 185
PRBS 303, 338, 343–345, 354–355
prediction error 142, 265, 285, 303
prediction horizon 308, 333, 363, 367–368, 371–373, 377–378
primary controller 13, 46, 67–68, 76, 192, 198, 420, 422–423
principal component analysis 95, 121, 124, 137, 144, 275, 285
process
    analysis 117, 120, 219, 339–341, 347
    gain 11, 46, 70, 77–78, 187, 189, 314, 422
    performance 51, 91, 106, 115, 383
    static gain 216–217
    test 213–214, 218, 278, 367
    time constant 15, 49–51, 73, 78–80, 86, 143, 183, 189–190, 205, 216–217, 220, 333
    variability 121, 124, 131, 148, 150, 183, 206, 220, 244
    variable 7, 76, 94, 140, 150, 206, 226, 256, 304, 426
process gain 46, 76
process model 216, 218, 226, 231–232, 308, 331, 344, 346, 353, 357, 363–365, 367, 394, 398
process variable 47
projection to latent structures 124
PV 46, 149, 154, 188, 206, 426
pyramid of technologies 91
QP 325–326, 335, 341
Raman scattering emission spectrophotometers 40
ratio control 12, 14, 20, 36, 68, 70, 72, 74, 76, 195, 349, 409–410
reactor 59, 98, 113, 188–190, 192–193
recursive adaptation 231
relay
    amplitude 211–212
    hysteresis 210–212
    step size 210
reproducibility 5–7, 11, 13–15, 19, 24, 31, 35, 37–38, 75, 92, 111, 183, 422
reset 55, 59, 61–62, 64–65, 70, 133, 218, 225, 227, 229, 257
rise time 12, 52–53, 66, 75
robustness
    based tuning 221
    map 207, 222–225
    plot 221–222
rotary kiln control 327
rotary valves 8–9, 23, 26–29, 33, 425
ROUT 66, 189, 420
scalable DCS 390, 397, 405
scan rate 34, 153, 198, 419, 423–424
scores plot 140–142
screening data 290
secondary controller 46, 67–68, 70, 192, 422–423
self-regulating process 67, 76, 80, 217, 219, 221, 354, 423
series installation 43
set point response 7, 25, 50, 60–64, 70, 183, 192, 224–225
set point trajectory 310, 350–351, 369
settling time 52–53, 55, 57, 66, 73, 251, 255, 270, 287, 289, 308, 319, 345
sigmoid function 215
simulation state 400, 402
singleton 253
singular value decomposition 124
slab caster process 128
sliding stem valves 7–8, 26–29, 31, 33, 106, 133, 202
slip 7–8, 74, 200, 423, 426
Smith Predictor 221
soft sensor 269–275, 279, 283–284, 288, 297, 299–300, 302
split range 92
standard deviation 14–15, 94–96, 100–102, 108–109, 111–114, 135, 146–153, 185–186, 298
static mixer concentration control 188
step response 26, 213, 216, 330, 332, 345–346, 354, 357, 360–361, 364, 366, 368–371
step size 30–31, 187–188, 190, 193, 195–196, 199–200, 202–204, 207, 210, 244, 300, 338, 354
stick-slip 8, 10, 15, 22, 26, 29, 51, 90, 123, 200, 409, 416, 426
stiction 7, 9, 11, 46, 49, 70, 189, 203, 271
stroking time 7–8, 10, 29–31, 391
supervisory control 105, 421
surge tank level control 73, 184, 193
temperature control 44, 77–78, 190, 192–193, 240
temperature loop 17, 19, 66, 70, 111, 192
temperature measurement 36, 39, 75, 132, 266
temperature sensor 36, 45
thermocouple 6, 18, 20, 24, 39, 59, 192, 202, 427
thermowells 44
time delay 11, 30, 32–33, 49–50, 65, 79, 82, 84, 100, 111, 420–421, 423
time series 141, 264, 276, 345
titration curve 15–16, 49, 109, 242
total standard deviation 102, 113, 150–152
training error 301
transportation delay 17, 32, 35, 43, 47, 79, 91, 184, 188, 427
tuning interface 207, 243
Tuning Rule selection 206
tuning settings 52, 60, 65–67, 183–186, 203–205, 208, 391, 421
typical tuning settings 65, 204
ultimate gain 80–81, 85, 210–213, 216–218, 221, 236, 258, 424
ultimate period 67–68, 76, 210–213, 236, 258, 424
ultrasonic level measurement 39
valve characteristic 9, 16, 71, 187, 197, 205
valve dead band 30–31, 46–47, 84, 111, 183, 202
valve gain 8–11, 15–16, 49, 51, 90, 187–188, 314
variable dead time 333–334
variable speed drive 195–196, 421
varying delay 331
virtual plant 3, 117, 289, 383–390, 392–396, 399–401, 406–408
windup 14, 27–28, 55, 155, 425, 428
Ziegler-Nichols rules 187, 213–214