The Integration of Process Design and Control
THE INTEGRATION OF
PROCESS DESIGN AND
CONTROL
Edited by
Panos Seferlis
Centre for Research and Technology - Hellas
Chemical Process Engineering Research Institute
P.O. Box 361, 57001 Thermi - Thessaloniki, Greece
Michael C. Georgiadis
Centre for Process Systems Engineering
Department of Chemical Engineering
Imperial College London
London SW7 2AZ, UK
ELSEVIER
2004
Amsterdam Boston Heidelberg London New York Oxford
Paris San Diego San Francisco Singapore Sydney Tokyo
ELSEVIER B.V.
Sara Burgerhartstraat 25
P.O. Box 211, 1000 AE Amsterdam, The Netherlands

ELSEVIER Inc.
525 B Street, Suite 1900
San Diego, CA 92101-4495, USA

ELSEVIER Ltd
The Boulevard, Langford Lane
Kidlington, Oxford OX5 1GB, UK

ELSEVIER Ltd
84 Theobalds Road
London WC1X 8RR, UK
This work is protected under copyright by Elsevier B.V., and the following terms and conditions apply to its use:
Photocopying
Single photocopies of single chapters may be made for personal use as allowed by national copyright laws. Permission of the Pub-
lisher and payment of a fee is required for all other photocopying, including multiple or systematic copying, copying for advertising
or promotional purposes, resale, and all forms of document delivery. Special rates are available for educational institutions that wish
to make photocopies for non-profit educational classroom use.
Permissions may be sought directly from Elsevier's Rights Department in Oxford, UK: phone (+44) 1865 843830, fax (+44) 1865
853333, e-mail: [email protected]. Requests may also be completed on-line via the Elsevier homepage (http://www.elsevier.com/locate/permissions).
In the USA, users may clear permissions and make payments through the Copyright Clearance Center, Inc., 222 Rosewood Drive,
Danvers, MA 01923, USA; phone: (+1) (978) 7508400, fax: (+1) (978) 7504744, and in the UK through the Copyright Licensing
Agency Rapid Clearance Service (CLARCS), 90 Tottenham Court Road, London W1P 0LP, UK; phone: (+44) 20 7631 5555; fax:
(+44) 20 7631 5500. Other countries may have a local reprographic rights agency for payments.
Derivative Works
Tables of contents may be reproduced for internal circulation, but permission of the Publisher is required for external resale or
distribution of such material. Permission of the Publisher is required for all other derivative works, including compilations and
translations.
Electronic Storage or Usage
Permission of the Publisher is required to store or use electronically any material contained in this work, including any chapter or
part of a chapter.
Except as outlined above, no part of this work may be reproduced, stored in a retrieval system or transmitted in any form or by any
means, electronic, mechanical, photocopying, recording or otherwise, without prior written permission of the Publisher.
Address permissions requests to: Elsevier's Rights Department, at the fax and e-mail addresses noted above.
Notice
No responsibility is assumed by the Publisher for any injury and/or damage to persons or property as a matter of products liability,
negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein.
Because of rapid advances in the medical sciences, in particular, independent verification of diagnoses and drug dosages should
be made.
ISBN: 0-444-51557-7
ISSN: 1570-7946 (Series)
The paper used in this publication meets the requirements of ANSI/NISO Z39.48-1992 (Permanence of Paper).
Printed in The Netherlands.
Preface
Extensive research on issues such as the interactions of design and control, the analysis and
design of plantwide control systems, and integrated methods for design and control has resulted
in impressive advances and significant new technologies that have enriched the variety of
instruments available to the design engineer in her endeavour to design and operate new
processes. The field of integrated process design and control has reached a maturity level that
mingles the best of process knowledge, understanding and control theory on one side
with the best of numerical analysis and optimisation on the other. Direct implementation of
integrated methods may soon become the mainstream design procedure.
Within this context we believe that a book bringing together the developments in a variety
of topics related to integrated design and control will be a real asset for design
engineers, practitioners and researchers. Although the individual chapters reach a depth of
analysis close to the frontier of current research, the structure of the book and the
autonomous nature of the chapters also make it suitable for a newcomer to the area.
The book comprises four distinct parts:
outline the basics of the back-off approach as an evaluation instrument for alternative designs.
Swartz in Chapter B3 considers a special class of controllers that mark the boundary of
achievable dynamic performance. Alhammadi and Romagnoli (Chapter B4) present a
comprehensive design procedure that incorporates the environmental impact and energy
integration in the overall design objectives. Decomposition techniques essential for the
identification of the sources of interaction between plant units and sections due to recycle are
the subject of the contribution by Carlemalm and Jacobsen (Chapter B5). Seferlis and
Grievink (Chapter B6) investigate the efficient evaluation and screening of alternative process
flowsheet and control structure configurations using nonlinear sensitivity analysis.
We hope that by the end of the book, the reader will have developed a commanding
comprehension of the main aspects of integrated design and control, the ability to critically
assess the key characteristics and elements related to the interactions between design and
control, and the capacity to implement the new technology in practice.
At this point we would like to take the opportunity to extend our appreciation to the
authors who accepted our invitation to contribute to this book with chapters of the highest
possible quality. Finally, special thanks go to the CACE series editor, Prof. Rafiqul Gani, for
offering useful ideas on this project.
Panos Seferlis
Chemical Process Engineering Research Institute (CPERI),
Thessaloniki, Greece
Michael Georgiadis
Imperial College London,
London, U. K.
List of Contributors
Alhammadi, H. Y. Chemical Engineering Department, University of Bahrain, Isa Town
32038, Bahrain
Allgower, F. Institute for Systems Theory in Engineering, University of Stuttgart,
Pfaffenwaldring 9, 70569 Stuttgart, Germany
[email protected]
Alonso, A. A. Process Engineering Group, Instituto Investigaciones Marinas (CSIC),
Eduardo Cabello, 6, 36208 Vigo, Spain
Alvarez, J. Universidad Autonoma Metropolitana-Iztapalapa, Depto. de
Ingenieria de Procesos e Hidraulica, Apdo. 55534, 09340 Mexico D.F,
Mexico
[email protected]
Banga, J. R. Process Engineering Group, Instituto Investigaciones Marinas (CSIC),
Eduardo Cabello, 6, 36208 Vigo, Spain
[email protected]
Bildea, C. S. Department of Chemical Technology, Faculty of Applied Sciences,
Delft University of Technology, Julianalaan 136, 2628 BL Delft, The
Netherlands
[email protected]
Bogle, I. D. L. Centre for Process Systems Engineering, Department of Chemical
Engineering, University College London, Torrington Place, London,
WC1E 7JE, U. K.
[email protected]
Cameron, I. T. School of Engineering, The University of Queensland, 4072, Australia
[email protected]
Carlemalm, H. C. S3-Process Control, Royal Institute of Technology, SE-100 44
Stockholm, Sweden
Chen, Y. H. Department of Chemical Engineering, National Taiwan University of
Science and Technology, Taipei 106-07, Taiwan
Dimian, A. C. Department of Chemical Engineering, Faculty of Science, University
of Amsterdam, Nieuwe Achtergracht 166, 1018 WV Amsterdam, The
Netherlands
[email protected]
Doyle III, F. J. Department of Chemical Engineering, University of California Santa
Barbara, CA 93106, U.S.A.
[email protected]
Engell, S. Process Control Laboratory (LS AST), Department of Biochemical
and Chemical Engineering, Universitat Dortmund, D-44221
Dortmund, Germany
[email protected]
Espuña, A. Chemical Engineering Department, Universitat Politecnica de
Catalunya, ETSEIB, Diagonal 647, E08028 Barcelona, Spain
Fraga, E. S. Centre for Process Systems Engineering, Dept of Chemical
Engineering, University College London, Torrington Place, London,
WC1E 7JE, U. K.
Georgakis, C. Polytechnic University, Brooklyn, NY, U.S.A.
[email protected]
Georgiadis, M. C. Centre for Process Systems Engineering, Department of Chemical
Engineering, Imperial College London, London SW7 2AZ, U.K.
[email protected]
Goyal, V. Department of Chemical and Biochemical Engineering,
Rutgers - The State University of New Jersey, NJ, U.S.A.
Grievink, J. Department of Chemical Technology, Faculty of Applied Sciences,
Delft University of Technology, Julianalaan 136, 2628 BL, Delft, The
Netherlands
[email protected]
Hagemann, J. Centre for Process Systems Engineering, Department of Chemical
Engineering, University College London, Torrington Place, London,
WC1E 7JE, U. K.
Hauksdottir, A. S. Electrical and Computer Engineering Department, University of
Iceland, Iceland
Hernjak, N. Department of Chemical Engineering, University of Delaware,
Newark DE 19716, U.S.A.
Hoo, K. A. Department of Chemical Engineering, Texas Tech University,
Lubbock, TX 79409-3121, U.S.A.
[email protected]
Ierapetritou, M. Department of Chemical and Biochemical Engineering, Rutgers - The
State University of New Jersey, NJ, U.S.A.
[email protected]
Jacobsen, E. W. S3-Process Control, Royal Institute of Technology, SE-100 44
Stockholm, Sweden
[email protected]
Kookos, I. M. Department of Chemical Engineering, University of Manchester,
Institute of Science and Technology, UMIST, M60 1QD, Manchester,
U.K.
[email protected]
Lewin, D. R. PSE Research Group, Wolfson Department of Chemical Engineering,
Technion, I.I.T., Haifa 32000, Israel
[email protected]
Luyben, M. L. E. I. du Pont de Nemours and Company, Engineering Technology,
1007 Market St. - Brandywine 7434, Wilmington, DE 19898, U.S.A.
[email protected]
Luyben, W. L. Process Modeling and Control Center, Department of Chemical
Engineering, Lehigh University, Bethlehem, PA 18015, U.S.A.
[email protected]
Ma, K. Centre for Process Systems Engineering, Department of Chemical
Engineering, University College London, Torrington Place, London,
WC1E 7JE, U. K.
Mann, U. Texas Tech University, U.S.A.
Meeuse, F. M. Unilever Research and Development Vlaardingen, Olivier van
Noortlaan 120, 3133 AT, Vlaardingen, The Netherlands
[email protected]
Moles, C. G. Process Engineering Group, Instituto Investigaciones Marinas (CSIC),
Eduardo Cabello, 6, 36208 Vigo, Spain
Nougues, J. M. Chemical Engineering Department, Universitat Politecnica de
Catalunya, ETSEIB, Diagonal 647, E08028 Barcelona, Spain
Oaxaca, G. Universidad Autonoma Metropolitana-Iztapalapa, Depto. de
Matematicas, Apdo. 55534, 09340 Mexico D.F, Mexico
Ogunnaike, B. A. Department of Chemical Engineering, University of Delaware,
Newark DE 19716, U.S.A.
Pearson, R. K. Thomas Jefferson University, Philadelphia PA 19107, U.S.A.
Pegel, S. Bayer Technology Services, Advanced Process Control, Leverkusen,
Germany
Perkins, J. D. Centre for Process Systems Engineering, Department of Chemical
Engineering, Imperial College of Science Technology and Medicine,
London, U. K.
Pistikopoulos, E. N. Centre for Process Systems Engineering, Department of Chemical
Engineering, Imperial College of Science Technology and Medicine,
London, U. K.
[email protected]
Puigjaner, L. Chemical Engineering Department, Universitat Politecnica de
Catalunya, ETSEIB, Diagonal 647, E08028 Barcelona, Spain
[email protected]
Romagnoli, J. A. Laboratory for Process Systems Engineering, Department of Chemical
Engineering, The University of Sydney, Sydney, NSW 2006 Australia
[email protected]
Sakizlis, V. Centre for Process Systems Engineering, Department of Chemical
Engineering, Imperial College of Science Technology and Medicine,
London, U. K.
Schweickhardt, T. Institute for Systems Theory in Engineering, University of Stuttgart,
Pfaffenwaldring 9, 70569 Stuttgart, Germany
Seader, J. D. Department of Fuels and Chemical Engineering, University of Utah,
Table of Contents
Preface v
List of Contributors ix
The integration of process design and control - Summary and future directions 1
Panos Seferlis, Michael C. Georgiadis
Introduction
a) CERTH - Chemical Process Engineering Research Institute (CPERI),
P.O. Box 361, 57001 Thermi - Thessaloniki, Greece
b) Centre for Process Systems Engineering, Department of Chemical Engineering,
Imperial College London, South Kensington Campus, London, SW7 2AZ, U. K.
1. INTRODUCTION
The integration of process design and control aims at identifying design decisions that
could generate or inherit trouble for the dynamic performance of the control system.
Furthermore, it aims at exploiting the synergies of a simultaneous approach to ensure the
economical and smooth operation of the plant despite the influence of disturbances and the
existence of uncertainty.
An integrated design methodology requires that a good qualitative and quantitative
description of those process characteristics that have a dominant effect on the dynamic
behaviour of the process is obtained and their relationship to design decisions is understood.
Section 2 summarizes the book chapters in these two directions. The success of the integrated
design and control procedure relies on the accurate definition of the problem within a
mathematical framework that will assist in the selection of the best possible option from a
pool of alternative designs. Section 3 discusses recent advances in methods towards a holistic
approach to process design and control. Undoubtedly, the consideration of the unit-to-unit
interactions in a flowsheet through recycle streams and feedback control is important for the
design of a well-functioning and effective mechanism that ensures the alleviation of
exogenous variations from the final product quality. Section 4 summarises the chapters
referring to plantwide control systems design. The frontiers of integrated process design
and control technology are expanding as the need for further integration in an industrial
environment of perpetual change and uncertainty grows constantly. Section 5 covers a sample
of issues that embrace the ideas of integration in the fields of operations, quality assurance,
numerical optimisation, scheduling, and batch-wise applications. Finally, an attempt to
speculate on future research directions in the field of integrated design and control is made in
Section 6.
Bill Luyben in Chapter A1 provides an excellent introduction to the real need for education
in simultaneous process and control system design. The chapter offers a number of
illustrative and motivating examples that show in the most vivid and convincing way the
advantages of considering steady state economics together with the dynamic performance of
the control system when designing new processes. Assuring good and acceptable operation of
the plant, which is undisputedly essential for the overall economic performance, should be the
main objective in the mind of the design engineer. However, the basic recipe for success relies
on an in-depth understanding of the process system and its inherent implications for maintaining
quality and specifications within acceptable limits in a constantly varying environment.
The research groups of Doyle and Ogunnaike in Chapter A2 provide a comprehensive
overview of the key process characteristics that determine to a great extent the selection of the
most suitable control system for a given process. The classification of process systems is
made based on the degree of process nonlinearity (i.e. deviation from linearity), the dynamic
character (i.e. complexity of dynamic behaviour) and degree of interaction (i.e. degree of
coupling among controlled and manipulated variables). All possible combinations of
the key characteristics, each divided into three levels of intensity, are represented in the
"process characterization cube". The search for the "joint metric" that fully defines the
process character originates in the investigation of the equivalences between the metrics for
the individual characteristics. Each combination of properties is indicative of the difficulty of
controlling and operating the process under real operating conditions and is associated with an
appropriate set of controller types. Model-order reduction issues pertaining to model-based
control design, always in conjunction with the dynamic character of the process, are further
investigated.
Schweickhardt and Allgower in Chapter A3 mainly concentrate on the nonlinearity
assessment of processes. A comprehensive overview of general nonlinearity measures and a
thorough investigation of the predictive and computational dimension of open loop measures
are presented. As the main objective becomes the development of a tool to judge whether a
nonlinear controller would be beneficial or necessary for a particular process with specific
nonlinear characteristics, the controller-relevant nonlinearity is quantified. The selected
measure is based on the relative difference between the output of a nonlinear state feedback
law and that of an equivalent linear state feedback law. The controller-relevant nonlinearity
measure depends not only on the plant dynamics and region of operation but also on the
performance criterion used in the derivation of the control law.
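A measure of this kind can be written compactly. As a sketch in our own notation (not necessarily the chapter's), an open-loop nonlinearity measure compares the nonlinear plant operator N against its best linear approximation over an operating region:

```latex
\phi_N \;=\; \inf_{G \in \mathcal{G}} \; \sup_{u \in \mathcal{U}}
\frac{\lVert N[u] - G[u] \rVert}{\lVert N[u] \rVert}
```

Here \(\mathcal{G}\) is a set of linear operators and \(\mathcal{U}\) the region of operation; \(\phi_N = 0\) for a linear system, and values approaching one indicate strong nonlinearity. The controller-relevant variant applies the same construction to the outputs of the nonlinear feedback law and of a comparable linear feedback law rather than to the plant itself.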
Georgakis and co-workers in Chapter A4 cover the issue of process operability analysis.
Operability measures quantify the ability of the process to maintain the operating
specifications despite the influence of disturbances in an acceptable dynamic fashion. The
analysis is carried out using static and dynamic process models, irrespective of the
selected feedback control structure. Steady state operability defines the percentage of the
desired output space that can be achieved by the available input space. Dynamic operability
investigates the ability of the design to alleviate the effect of disturbances or reach a new set
point level in a timely manner. Thus, the comparison of alternative design decisions based on
the static and dynamic operability performance becomes substantially more effective and
reliable.
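To make the "percentage of the desired output space" idea concrete, the sketch below estimates a steady-state operability index by Monte Carlo for an assumed linear gain model y = Ku. The gain matrix and the input/output bounds are invented for illustration; this is not the chapter's own algorithm, only a minimal numerical reading of the definition.

```python
import numpy as np

# Hypothetical 2x2 steady-state gain matrix, y = K @ u (illustrative numbers)
K = np.array([[2.0, 0.5],
              [0.4, 1.5]])

def steady_state_operability(K, u_lo, u_hi, y_lo, y_hi, n=20000, seed=0):
    """Estimate the fraction of the desired output box [y_lo, y_hi] that is
    reachable from the available input box [u_lo, u_hi] under y = K u.

    Monte Carlo: sample desired outputs uniformly, invert the gain, and
    check whether the required input lies within its bounds."""
    rng = np.random.default_rng(seed)
    y = rng.uniform(y_lo, y_hi, size=(n, len(y_lo)))
    u = np.linalg.solve(K, y.T).T          # input needed to hit each target
    feasible = np.all((u >= u_lo) & (u <= u_hi), axis=1)
    return feasible.mean()

oi = steady_state_operability(K,
                              u_lo=np.array([0.0, 0.0]),
                              u_hi=np.array([1.0, 1.0]),
                              y_lo=np.array([0.0, 0.0]),
                              y_hi=np.array([2.0, 2.0]))
print(f"estimated steady-state operability index: {oi:.2f}")
```

An index near one indicates that almost the entire desired output box is reachable with the available inputs; enlarging the input space, or redesigning the process so the gain structure changes, raises the index.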
Knowledge of the structure of dynamic modes of a system is undoubtedly useful in process
design because it can act as the instrument to manipulate the dynamic properties of a new
system. Cameron and Walsh in Chapter A5 explore the spectral association properties of
process systems through the association of a group of eigenvalues to a group of process states
and a specific dynamic mode. Such behaviour arises due to strong coupling among the states
of the system. Different spectral resolution techniques are compared on the basis of
computational efficiency and power of analysis in terms of eigenvalue sensitivity, interaction
between fast and slow modes and strength of coupling.
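One simple way to associate groups of eigenvalues with groups of states is through participation factors computed from the right and left eigenvectors of the state matrix. The sketch below uses an invented 4-state matrix for two weakly coupled units; it illustrates the idea only and is not one of the spectral resolution techniques compared in the chapter.

```python
import numpy as np

# Illustrative state matrix: two weakly coupled 2-state units
A = np.array([[-1.0,  0.2,  0.05, 0.0],
              [ 0.1, -2.0,  0.0,  0.05],
              [ 0.05, 0.0, -5.0,  0.3],
              [ 0.0,  0.05, 0.2, -8.0]])

evals, V = np.linalg.eig(A)     # columns of V are right eigenvectors
W = np.linalg.inv(V).T          # columns of W are left eigenvectors

# Participation factors p[k, i] = V[k, i] * W[k, i]: how strongly state k
# participates in mode i (rows ~ states, columns ~ modes)
P = np.abs(V * W)
P = P / P.sum(axis=0)           # normalise per mode

for i, lam in enumerate(evals):
    dominant = int(np.argmax(P[:, i]))
    print(f"mode at {lam.real:.2f}: dominated by state {dominant}")
```

The column of P with the largest entry for a given state shows which dynamic mode that state chiefly belongs to, which is the kind of eigenvalue-to-state association the chapter formalises.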
An alternative way to investigate the controllability properties of a system is through non-
equilibrium thermodynamics. Meeuse and Grievink in Chapter A6 combine process synthesis,
non-equilibrium thermodynamics and systems theory to perform the thermodynamic
controllability assessment (TCA) of alternative designs. Non-equilibrium thermodynamics
describe entropy production as a function of the transferred flux and the respective driving
force. The link between process design and thermodynamic description of the process lies
within the notion of passivity. The assessment focuses on the design's influence on the
entropy production as it is closely related to the control performance. The prediction of
disturbance rejection properties for the TCA has been demonstrated through applications in
heat transfer and separation processes.
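The entropy production referred to here has a standard compact form in irreversible thermodynamics (symbols ours): it is the sum over transport processes of each flux \(J_i\) multiplied by its conjugate driving force \(X_i\),

```latex
\sigma \;=\; \sum_i J_i\, X_i \;\ge\; 0 .
```

The non-negativity of \(\sigma\) is what connects the design to passivity arguments: design decisions shape where and how strongly entropy is produced, and that in turn bounds the disturbance rejection achievable by feedback, which is the property the TCA assessment builds on.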
Bogle and co-workers in Chapter A7 provide a critical assessment of the ability of existing
and commonly used controllability measures to describe the interactions between design and
control for nonlinear problems. A limiting factor is the difficulty of developing generic methods
for all types of nonlinear problems. Dynamic simulations and performance metrics are used
for the evaluation of alternative designs in an attempt to remove non-minimum phase
characteristics. An algorithm for the elimination of input multiplicity, a common source of
significant problems in nonlinear systems, is presented. Design modifications based on the
best utilization of exergy, the useful energy in a process, result in significant dynamic and
control improvements.
A unified framework for the integrated process and control system design involves the
determination of a large set of decisions that are linked to the process topology, equipment
design specifications, operating conditions, control structure configuration and controller
tuning. The design decisions represented as continuous and discrete variables are determined
through the optimisation of a set of objective functions that capture the goals and desired
properties, subject to the static and dynamic behaviour of the system, in the presence of
both time-varying disturbances and time-invariant uncertainty. The complexity of the design
complex dynamics, cause instability, induce non-minimum phase behaviour and affect
disturbance sensitivity (e.g., the "snowball" effect). Carlemalm and Jacobsen in Chapter B5
partition the dynamic modes into those resulting from unit interaction through recycle
streams and those associated with the individual units, and use this partition to refine the
process design. Ingenious design modifications using frequency domain analysis tools exploit
the decomposition of the plant dynamics to eliminate the limiting factors that recycle
feedback imposes on control performance, and therefore relax the constraints imposed on the
controller.
The evaluation and screening of alternative process flowsheet and control structure
configurations in a rigorous, effective and systematic way is essential in forming meaningful,
efficient and manageable integrated design problems. The key element is the identification
and elimination from any further consideration of those designs that are eventually
responsible for undesirable behaviour. Seferlis and Grievink in Chapter B6 investigate the
disturbance rejection sensitivity for candidate process flowsheets in association with the
importance of the control objectives, the available resources for control purposes, the input-
output control structure and the dynamic characteristics of the system as represented by the
system eigenvalues. Design sensitivity acts as an additional mechanism that guides the
engineer to design modifications that result in enhanced static and dynamic properties.
Given the high complexity of the simultaneous design and control problem, its
decomposition into a series of hierarchically aligned levels of increasing detail
often offers the most efficient route to a satisfactory result. Each hierarchical methodology
may result in more than one plantwide control system; therefore, further investigation is
required to reveal all aspects of the anticipated dynamic performance. In addition, another
degree of freedom is the process design itself, as it can be creatively used to offer a sufficiently
rich input space (number and quality of manipulated variables) and simultaneously a large
achievable output space (range for controlled variables), since the control objectives usually
outnumber the independent handles for control.
Mike Luyben in Chapter C1 offers an industrial viewpoint on incorporating controllability
and developing plantwide control strategies at the design stage of new processes. Accurate
prediction of the effects of design decisions on dynamic operability safeguards the process
and control systems from critical limitations that will diminish their flexibility to operate
smoothly in an environment of increased technical uncertainty. Even though process
integration leads to concrete capital and operating cost savings the creative incorporation of
potential degrees of freedom in the design increases the operating window and the ability of
the control system to alleviate disturbances and allow smooth dynamic operation.
Hoo and co-workers in Chapter C2 propose a modular decomposition of the plant
flowsheet using a decision-based methodology for the synthesis of plantwide control
structures. Design, operational and economic objectives are associated with those individual
units that have the greatest influence on them. The main advantages of the procedure are the
reduction of dimensionality, and hence the improved tractability of the system, and the
consistency of a decomposition procedure that evaluates the steady state sensitivity,
operational, and dynamic control objectives.
Dimian and Bildea in Chapter C3 explore the issues related to the plantwide control of the
material balance. The control of the reactant and impurity inventories in complex reactive
systems with recycle is interrelated with the design of the reactor and the separation units.
Nonlinear analysis of the reactor model and the recycle structure reveals the conditions for
good dynamic performance and guides the selection of the most appropriate
plantwide control strategy. The interactions induced by the recycle streams in the plant can be
favourably exploited to build effective control structures that are impossible with stand-alone
units.
Engell and co-workers in Chapter C4 deal with the control structure selection based on
input/output controllability measures. The limitations imposed by non-minimum phase
characteristics on the attainable closed-loop performance are considered in the evaluation of
the candidate set of control structure configurations. The optimisation of the attainable
performance over the set of all linear stabilizing controllers can refine the controller structure
with input constraints and coupling properties directly accounted for.
Chen and Yu in Chapter C5 investigate the interactions between design and control and the
control challenges associated with a gas-phase adiabatic tubular reactor with liquid recycle.
Careful inventory balance of the reactants in the system and tight temperature control in the
reactor are essential for good operability. Total annual costs evaluate the design economics,
steady state operability analysis assesses the impact of disturbances on operating conditions
and dynamic simulations judge the performance of the selected control structures.
After a new design is brought into operation, the options for improving the static and
dynamic behaviour of the process are very limited. However, there are degrees of freedom in
plant operations that can minimise the damage to the economic performance of the plant from
the influence of disturbances. The solution to such a task is, as cleverly pointed out by
Skogestad in Chapter D1, the "integration of design people and control people". The idea of
self-optimising control, defined as the selection of those controlled variables that, despite the
influence of uncertainty and disturbances, maintain the economic loss during operation within
an acceptable level, can lead to significant savings during the operation of the plant.
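The economic loss underlying this idea has a standard compact definition (notation ours): for inputs u, disturbances d and a scalar economic cost J,

```latex
L(u, d) \;=\; J(u, d) \;-\; J\!\left(u^{\mathrm{opt}}(d),\, d\right) .
```

A set of controlled variables is then self-optimising if holding them at constant setpoints, while disturbances move the true optimum \(u^{\mathrm{opt}}(d)\) around, keeps the loss \(L\) within an acceptable level without re-optimisation.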
Puigjaner and co-workers in Chapter D2 explore the interactions between the various
decision levels linked to the batch control system. The work is motivated by the increasing
shift in the chemical industry towards higher added-value products that are usually produced batch-wise
(Ref. 1). Optimal design, analysis, and scheduling of batch processes lead to hierarchical and
interconnected decision levels that require a holistic approach. A comprehensive overview of
the requirements and standards for automatic batch control systems provides the basis for the
The calculation of the desired operating trajectory for a batch or semi-batch reactor is by
itself a design decision for the system. The integration of the trajectory design with the design
of the regulatory control system that would implement the trajectory on the reactor is a very
challenging problem, especially given the highly nonlinear behaviour that characterises
batch systems. The group of Alvarez in Chapter D6 explores such issues and considers an
application in a polymerisation process. A constructive control method is used that exploits
the nonlinear characteristics of the system and considers stability properties in the calculation
of the batch trajectory. The control system relies on a nonlinear state-feedback controller with
an open-loop estimator. The structure of the feedback control system that implements the
optimal trajectory is therefore closely related to the batch design.
The synergistic combination of creative and ingenious process and control system
development with the aid of advanced and state-of-the-art numerical and analytical tools
generate processes that are able to satisfy the long and demanding list of operating
requirements and constraints. Undoubtedly, the deep knowledge and understanding of the
physical and chemical interacting phenomena occurring within the process environment are
the keys for the development of a successful new design. The growth of available computing
power and the recent advances in numerical optimisation tools allow the quick and accurate
synthesis, analysis and evaluation of alternative process designs. The solution of holistic
approaches with multiple objectives is therefore becoming a much more tractable problem.
The rigorous specification of the complex process design problem within a mathematical
framework consequently allows the derivation of optimal and meaningful designs. It is
probably a matter of time before controllability analysis, process characterisation, and
ultimately the controller design components become integrated within the framework of
currently existing simulation and optimisation software tools.
As the field of integrated design and control is reaching a point of maturity judging from
the great number of research contributions (Ref. 4), it is quite obvious that the main research
trends will be towards a higher degree of integration dictated by the need for increased
competitiveness in a fast-changing business environment. Integration of energy, safety, and
environmental issues will be necessary to satisfy tighter quality assurance specifications on a
plantwide basis. Shortening of the manufacturing time and tight control of product quality
variability through the numerous successive stages of production are key objectives.
Opportunities for further process integration and intensification in existing plants will be
persistently sought. Greater interaction with the planning and scheduling levels of the company is
also expected, leading to issues related to the supervisory control of expandable plants and the
ability to manage large manufacturing systems efficiently.
A definite shift from process-oriented design to product-oriented design is occurring, as
high-value structured chemical products become the main focus of industry. The continuous
evolution of product quality leads to shorter life cycles and a need for constant adaptation to
varying product specifications (Ref. 5). The integration of design and control becomes a much
more complicated problem as the product quality specifications are rigorously defined in a
space of much higher dimensionality (e.g., molecular structure, molecular weight distributions
and so forth). Interactions with the growing field of molecular simulation would provide
the links between desired properties and achieved product structure.
Changes in products, markets and societal trends make it imperative that the plant respond
rapidly and efficiently to product upgrades and component or unit substitutions (e.g., due to
new environmental concerns and regulations). Responsiveness to global environmental and
technological changes will give the leading edge to future chemical plants and
manufacturing sites. The design of flexible plants and units that can quickly and efficiently
absorb and utilise technological innovations, and adapt to varying product specifications
reflecting customer demand, sets the new frontier in the integration of design and control.
REFERENCES
Chapter A1
1. INTRODUCTION
enormous economic benefits later in the project in terms of rapid, trouble-free startups,
reduced product-quality variability, less-frequent emergency shutdowns, reduced
environmental contamination and safer operation.
Despite the widespread recognition in industry that the dynamic control of chemical plants
is a vital issue, very few university design courses incorporate this component. Undergraduate
education in plantwide control at most universities is almost completely lacking. The typical
one-semester process control course only covers the theory of conventional single-loop
systems. The logical place to incorporate plantwide control is in the senior design course. But
this is not being done in most schools.
The goal of this chapter is to point out the importance of teaching simultaneous design in
the chemical engineering design course. The history and current status will be reviewed. Then
the basic concept of the inherent conflict between steady-state economics and dynamic
controllability will be illustrated using several mechanical engineering examples. Next, three
chemical engineering examples will be explored in detail. Finally, a methodology for
quantitatively incorporating dynamics into design will be reviewed. This material is based on
an AIChE Webcast that was presented on October 17, 2002.
I hope the examples will clearly demonstrate that the development of a steady-state
economically optimum process is only half the job and answers only half the vital questions.
The design is not complete and intelligent management decisions about what process to build
cannot be made until dynamic performance is evaluated.
Simultaneous design concepts are not new. One of the earliest references to the importance
of the process design is found in the pioneering and much referenced controller-tuning paper
of Ziegler and Nichols [2] in 1942. These authors point out that the performance of a feedback
controller depends not only on the tuning parameters but very strongly on the structure of the
loop and the inherent dynamics of the process being controlled.
Page Buckley's book [1], written in 1964, was the first to bring the concepts of
simultaneous design to the attention of the chemical engineering community. Page's plant
experience and his later work in servo-mechanism research convinced him that the really
significant improvements in process control could be achieved by having control engineers
involved in all stages of process development, particularly at the conceptual and detailed
design stages. He achieved this integration by transferring to the Design Division of DuPont's
Engineering Department. Here he coordinated the efforts of process engineers and
instrumentation engineers to get them talking to each other as the design project evolved
through its many stages.
In those days the entire chemical industry was actively developing new processes,
expanding existing facilities and building new grass-roots plants around the world. The rapid
startup and successful operation of dozens of DuPont plants during that period bear witness to
Page's successful application of simultaneous design.
He proposed the first plantwide control strategy. The various steps in the procedure are
summarized below:
1. The first step is to set up "material-balance" loops (level and pressure) so that flow of
material through the process is controlled in a consistent and logical way. Decide what
levels and pressures should be controlled and what manipulated variables are used for
each (make loop pairing decisions). The conventional structure fixes the flowrates of
process streams entering a unit and controls liquid levels by manipulating the
flowrates of liquid streams leaving the unit. In vapor-phase systems, pressures are
controlled by manipulating gas flowrates leaving the unit. This is called material
balance in the direction of flow. An alternative is an "on-demand" control structure in
which the flowrates of process product streams leaving a unit are fixed by a
downstream consumer. Liquid levels are controlled by manipulating the flowrates of
feed streams entering the unit. In vapor-phase systems, pressures are controlled by
manipulating the gaseous feed streams entering the unit. This is called material balance
opposite the direction of flow.
2. Establish product-quality loops and tune for as tight control as possible. Decide what
temperatures, pressures and compositions should be controlled and what manipulated
variables should be used in each loop to achieve the smallest closedloop time constants
as limited by closedloop robustness (reasonable closedloop damping coefficients).
3. Make liquid inventories in surge vessels large enough (by specifying vessel sizes) so
that the closedloop time constants of the material-balance loops are at least ten times
larger than the closedloop time constants of the faster product-quality loops. This
permits the tuning of the two types of loops to be done with negligible interaction. The
use of proportional-only level control is recommended for maximum flow smoothing.
4. Use "override control" to achieve variable control structures to handle constraints and
"valve position control" to achieve self-optimizing control in a simple and inexpensive
way. Use ratio (feedforward) control to improve load rejection.
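Buckley's flow-smoothing idea (Steps 1 and 3 above) can be sketched in a few lines of simulation. This is an illustrative model, not from the chapter; the holdup, controller gain and step size below are all hypothetical:

```python
# Sketch of Buckley's flow smoothing: a surge tank with a proportional-only
# level controller passes a feed-flow step downstream gradually.
def simulate_surge_tank(kc=0.5, f_in_step=20.0, t_end=20.0, dt=0.01):
    """Euler integration of dV/dt = F_in - F_out with a P-only level loop.

    F_out = F_bar + kc*(V - V_set): the outlet flow moves only as the level
    moves, so a step in feed flow reaches the downstream unit slowly, with
    closed-loop time constant 1/kc.
    """
    v, v_set, f_bar = 100.0, 100.0, 100.0   # holdup and design flow (hypothetical units)
    f_in = f_bar + f_in_step                # step disturbance in the feed
    t, traj = 0.0, []
    while t < t_end:
        f_out = f_bar + kc * (v - v_set)    # proportional-only level controller
        v += (f_in - f_out) * dt            # material balance on the vessel
        t += dt
        traj.append((t, f_out))
    return traj

traj = simulate_surge_tank()
print(f"F_out shortly after the step: {traj[10][1]:.1f}")
print(f"F_out at the end:            {traj[-1][1]:.1f}")
```

With kc = 0.5 the closed-loop time constant is 2 time units, so a +20% feed step leaves the vessel as a slow exponential rather than a jump; a smaller gain (equivalently, a larger vessel) smooths more, which is exactly the trade-off Step 3 quantifies.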
The analysis and design methods Buckley employed ranged from back-of-the-envelope
block-diagram calculations and concepts to rigorous dynamic simulations of complex
chemical processes. That period was the heyday of the analog computer. DuPont and all
major chemical and petroleum companies invested millions of dollars in large corporate
computing facilities and large engineering staffs to test control structures on dynamic models
of complex unit operations, both new and old.
Over the last four decades since Buckley's pioneering work, there have been many
improvements in techniques and tools for dynamic analysis. The dynamics of reasonably
complex chemical processes can be fairly easily studied using commercial software such as
Matlab, AspenDynamics and HYSYS.
There have been many notable developments in the area of process control during this
period.
1. Tools: Digital simulation has replaced analog computers. Software has been developed
that is more powerful and more user-friendly.
2. Methods: Dynamic identification techniques have been developed that are simple and
provide accurate information (relay-feedback test). Controller tuning methods have
improved so that a variety of different types of processes can be effectively controlled.
Singular value decomposition has provided a useful tool for the problem of selecting
controlled and manipulated variables.
3. Control Hardware: Control valves and sensors have improved. Most processes use
DCS (distributed control systems), which make data acquisition, loop reconfiguration
and on-line calculations much easier.
4. Dynamic Models: Realistic dynamic models have been developed for many unit
operations.
5. Textbooks: In 1960 there was one chemical engineering textbook (Ceaglske, N. H.,
"Automatic Process Control for Chemical Engineers", Wiley (1956)), which contained
228 pages of material. Now there are dozens, some of which run to over 1200 pages.
There have also been many "not-so-notable" developments that have surfaced briefly and
then faded away over this period of time. My own personal list of these "fads" in the process
control field is given below:
1. Relative gain array
2. Neural nets
3. Wavelets
4. Artificial intelligence
5. Kalman filters
6. Statistical quality control
7. Fuzzy control
8. Adaptive control
9. Nonlinear control (with the exception of gain scheduling)
10. Robust control
11. Performance monitoring
12. Supply chain management
13. Six sigma
14. Model predictive control
The last item in this list is unquestionably the most controversial since many industries
have widely accepted the notion that MPC is the way to achieve improved control
performance. These MPC projects are expensive and time consuming. Typical reported costs
range from $300,000 to over a million dollars, and weeks of plant testing are required. The
marketing success of MPC is undeniable. The interest in the subject by the academic control
community, to the exclusion of almost any other topic, is demonstrated by the appearance of
literally hundreds of papers on the subject.
However, in my opinion, the real technical and economic advantages of this complex and
expensive approach to plantwide control are not clearly and solidly proven. The skeptical
reader may find the paper by Ricker [3] to be informative. It presents an unbiased technical
comparison of conventional SISO control versus MPC as applied to the Eastman process.
Most of the academic MPC papers, if they give any comparisons at all, typically present
unfair comparisons of their proposed complex MPC system with a very poorly designed PI
control system.
Time will be the final judge as to whether the MPC fad is enduring or not. The concepts
that have been enduring over the last half century are:
1. Process understanding: The First Law of Process Control is inviolate! "Understand
the process."
2. PI control: Simple proportional-integral SISO loops provide effective control of the
vast majority of all chemical plants. These systems require process understanding to
set up, rational tuning methods, the use of overrides to handle constraints and split-
ranged valves to handle the case where several manipulated variables can be used to
control a single controlled variable.
3. Dynamic fundamentals: The importance of designing processes and establishing loop
pairing to minimize undesirable dynamics in a feedback loop is obvious to an engineer
who understands the effects of deadtime, multiple lags and inverse response on the
stability and performance of a closedloop system.
4. Simultaneous design: Considering dynamics as the process is designed produces more
easily controlled processes that make more money for the company, have better
product quality, are safer to operate and reduce environmental pollution problems.
The current status in industry is that many process designs are developed with a consideration
of dynamic controllability. This can range from a detailed dynamic simulation of the
flowsheet to at least a dynamic review by the in-house or external control expert.
However the major obstacle to a wider application of simultaneous design is the lack of
engineers trained in the subject. This is a direct result of the subject not being taught in the
vast majority of chemical engineering departments around the world.
I hope this chapter is successful in convincing university teachers and administrators that
simultaneous design is just as vital a part of the chemical engineering curriculum as
thermodynamics or transport phenomena or reactor design.
The engineer who is responsible for the steady-state economic design of the process is
called a process engineer or a project engineer. The engineer who is responsible for specifying
the control hardware (valves, sensors and DCS) and the control structure for the process is
called a control engineer. Historically these two engineers have almost always had many
arguments about the design of the process. For example, the process engineer wants small
vessels (minimize capital investment) and small control valve pressure drops (minimize
pumping and compression energy costs). But the control engineer wants large vessels (smooth
out disturbances) and large control valve pressure drops (permit larger changes in flowrates
and avoid control valve saturation).
This section presents a discussion of the fundamental reason for this difference of
objectives.
In contrast, let us consider an F16 fighter. It weighs 12 tons and can fly 500 miles on 1000
gallons of fuel, so its economic efficiency parameter is only
Thus the F16 is 25 times less efficient than the 747. However, its jet engines produce
35,000 lb of thrust at a take-off speed of 115 mph. Therefore its dynamic performance
parameter is 45 times greater than the 747.
(35,000 lb)(115 miles/hr)/(12 tons) x (1 hr/3600 sec)(5280 ft/mile)(1 hp/550 ft-lb/sec) = 890 hp/ton
So the F16 is a very dynamically agile aircraft, as its outstanding combat record clearly
proves.
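The F16 figures can be checked with a short unit-conversion script. The conversion factors (5280 ft/mile, 3600 sec/hr, 550 ft-lb/sec per hp) are standard; the aircraft numbers are those quoted in the text:

```python
# Checking the two performance parameters used in this section.
FT_PER_MILE, SEC_PER_HR, FTLB_S_PER_HP = 5280.0, 3600.0, 550.0

def ton_mpg(tons, miles, gallons):
    """Economic efficiency: ton-miles hauled per gallon of fuel."""
    return tons * miles / gallons

def hp_per_ton(thrust_lb, speed_mph, tons):
    """Dynamic performance: power (thrust x speed) per unit weight."""
    speed_fps = speed_mph * FT_PER_MILE / SEC_PER_HR        # mph -> ft/sec
    return thrust_lb * speed_fps / FTLB_S_PER_HP / tons     # ft-lb/sec -> hp

print(ton_mpg(12, 500, 1000))             # -> 6.0 ton-mpg for the F16
print(round(hp_per_ton(35000, 115, 12)))  # -> 894 hp/ton
```

The same two functions reproduce the hay-truck and Indy-car figures below.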
Hay truck:
(13 tons)(13 miles)/(1 gallon) = 170 ton-mpg
200 hp/(13 tons) = 15 hp/ton
Indy car:
(0.65 tons)(200 miles)/(80 gallons) = 1.6 ton-mpg
This comparison shows how economical the hay truck is to operate and how the dynamic
performance of the Indy car is vastly superior. Note that less energy is used to haul hay in a
truck than to deliver it in a 747 (170 ton-mpg versus 150 ton-mpg).
I hope these two examples illustrate that a device designed for economy is not going to
have fast dynamic responses. There is a common statement that you can't make a garbage
truck handle like a Ferrari. You could if you put a big enough engine in the garbage truck!
counter-intuitive effect is due to the relative size of the disturbance compared to the
vapor boilup in the column. When the feed tray is not optimum, the vapor rate in the
column is larger. The disturbance has less of an effect and smaller relative changes in
vapor rates are required to reject the disturbance.
4. Distillation design: Designing for a low ratio of the actual reflux ratio to the minimum
reflux ratio produces a column with more trays and lower vapor rates. Intuition and
conventional wisdom suggest that dynamic controllability is better. However, for the
same reason as cited in Item 3 (larger vapor rates relative to the disturbance), tighter
control in the face of load disturbances is achieved when the column is designed for
higher reflux ratios.
5. Reactor design with two reactants: This example is discussed in detail later in this
chapter. If the reaction involves two reactants (for example, A + B —» C) steady-state
design favors having the reactant concentrations in the reactor more or less equal
because the reaction rate depends on the product of the concentrations zAzB. Equimolar
reactant concentrations result in smaller reactor volumes and lower recycle flowrates
for a given production rate. However, if the reaction is exothermic and the reaction
rate is very temperature sensitive (large activation energy), temperature runaways can
easily occur if there are large quantities of both reactants available. One solution to the
problem is to design the process for a "limiting reactant" concentration, i.e. size the
reactor and recycle flowrate for a low zA concentration. This provides some dynamic
self-regulation to the rate of reaction. If an increase in temperature increases the
specific reaction rate k and the rate of reaction increases, reactant A will be consumed
and its concentration will decrease. Thus the overall rate of reaction k zAzB will not
increase as rapidly and will go to zero as A is completely consumed. This design has
better dynamics but poorer steady-state economics (larger reactor and recycle
flowrate).
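The self-regulation argument can be made concrete with a short calculation. The kinetics below (k = 0.5 hr-1 at 140 °F, E = 30,000 Btu/lb-mole) are the values used for the reactor example later in this chapter; the concentrations are illustrative:

```python
# Sketch of limiting-reactant self-regulation: a temperature excursion raises k
# the same way in either design, but depleting A hurts the rate k*zA*zB far
# more when zA is small.
import math

E = 30_000        # activation energy, Btu/lb-mole
R = 1.987         # gas constant, Btu/lb-mole-degR
T_DESIGN = 600.0  # 140 degF expressed in degrees Rankine
K_DESIGN = 0.5    # specific reaction rate at 140 degF, 1/hr

def k_of_T(T_rankine):
    """Arrhenius rate constant anchored at the 140 degF design point."""
    return K_DESIGN * math.exp(-(E / R) * (1.0 / T_rankine - 1.0 / T_DESIGN))

# A 10 degF excursion raises k by about 50% regardless of the design...
print(f"k(150 F)/k(140 F) = {k_of_T(610.0) / K_DESIGN:.2f}")

# ...but consuming the same amount of A cuts the rate much harder when zA is
# small, which is the built-in self-regulation of the limiting-reactant design.
for zA in (0.25, 0.05):            # equimolar vs limiting-reactant designs
    drop = 1.0 - (zA - 0.02) / zA  # fractional rate loss after consuming 0.02 m.f. A
    print(f"zA = {zA}: rate falls by {drop:.0%}")
```

With zA = 0.05 the same consumption of A removes 40% of the rate, versus only 8% in the equimolar design, so the runaway starves itself of fuel much sooner.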
There are many other examples, but these should give you a good idea of the conflict
between steady-state economic design (reversibility) and dynamic controllability. Detailed
discussions of three illustrative examples are presented in the following sections.
In Case 1 the design has only one reactor. The feed has a flowrate F (lb-mol/min), a
reactant A concentration z0 (mole fraction A) and a temperature To (°F). Reactor holdup is VR
(lb-moles), and reactor temperature is TR (°F). The reactant concentration in the reactor and in
the product stream leaving the process is z (mole fraction A). The reactor vessel has an aspect
ratio (L/D) of 2, where L is the length (ft) and D is the diameter (ft).
The cooling jacket surrounding the vessel provides a heat-transfer area equal to piDL (ft2).
Cooling water is introduced into the jacket at a rate Fj (gallons/minute) and with a
temperature TJO (°F). A circulating cooling water system is assumed, so the water in the jacket
is perfectly mixed with a temperature Tj (°F). An overall heat-transfer coefficient of 150
Btu/hr-ft2-°F is used. The horizontal distance between the reactor wall and the jacket wall is 4
inches, giving a jacket volume Vj = piDL/3 (ft3).
In Case 2 the design has two equal-size reactors in series, each with its own jacket and
cooling water supply. The fresh feed is the same as in Case 1, and the product stream leaving
the process is the same in both cases (z in Case 1 and z2 in Case 2).
The fresh feed is pure A (z0 = 1). The specified conversion is 95%, so the product stream has
a concentration z = z2 = 0.05 mole fraction A. Table 1 gives parameter values for kinetics and
physical properties. Table 2 gives operating conditions and equipment sizes for the two
designs. Reactor temperatures are 140 °F in all cases.
Table 1
Kinetic and physical property parameter values
Kinetics:
  Specific reaction rate at 140 °F = 0.5 hr-1
  Activation energy = 30,000 Btu/lb-mole
  Heat of reaction = -15,000 Btu/lb-mole
Physical properties:
  Density = 50 lb/ft3
  Molecular weight = 50 lb/lb-mole
  Heat capacity = 0.75 Btu/lb-°F
Table 2
Operating conditions and sizes for two cases
                              Case 1 - One Reactor   Case 2 - Two Reactors
Reactor volume (gallons)      28,400                 5200/5200
Diameter (ft)                 13.4                   7.62/7.62
Heat-transfer area (ft2)      1131                   364/364
Heat transfer (K Btu/hr)      1421                   1159/261
Reactor temperature (°F)      140                    140/140
Jacket temperature (°F)       131.6                  118.8/135.6
CW flowrate (gal/min)         46.12                  47.48/7.97
Reactor comp. (m.f. A)        0.05                   0.2235/0.05
Capital cost ($)              427,300                296,600
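The geometry in Table 2 follows from the aspect ratio L/D = 2: the volume is V = piD^2L/4 = piD^3/2 and the jacket area is A = piDL = 2piD^2. A quick consistency check (a sketch; the gallons-to-cubic-feet factor is standard):

```python
# Reproducing the diameters and jacket areas of Table 2 from the vessel volumes.
import math

GAL_PER_FT3 = 7.48   # US gallons per cubic foot

def vessel(volume_gal):
    """Diameter (ft) and jacket heat-transfer area (ft2) of an L/D = 2 vessel."""
    v_ft3 = volume_gal / GAL_PER_FT3
    d = (2.0 * v_ft3 / math.pi) ** (1.0 / 3.0)   # from V = pi*D^3/2
    return d, 2.0 * math.pi * d * d              # A = pi*D*L = 2*pi*D^2

print(vessel(28_400))   # Case 1: about (13.4 ft, 1131 ft2)
print(vessel(5_200))    # Case 2: about (7.62 ft, 365 ft2)
```

Note the unfavorable scaling for control: volume grows as D^3 but jacket area only as D^2, which is why the single large reactor needs a colder jacket per unit of heat generated.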
minimum jacket temperature would be 70 °F. The first reactor in Case 2 is using (21.2/70)th of
this maximum differential temperature, while in Case 1 only (8.4/70)th of it is being used
under design conditions.
This means that the temperature difference (and therefore the heat removal rate) can be
changed more readily in Case 1 than in Case 2. The dynamic results presented below
demonstrate that the more "muscular" design of Case 1 gives superior dynamic performance.
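The "muscle" comparison can be put in numbers directly from Table 2 (a quick sketch):

```python
# Heat-removal headroom: what fraction of the maximum jacket driving force
# each design uses, and how much the driving force can still grow.
T_REACTOR, T_CW_MIN = 140.0, 70.0   # degF; 70 degF is the coldest possible jacket
MAX_DT = T_REACTOR - T_CW_MIN       # largest achievable driving force, 70 degF

def headroom(t_jacket):
    """Fraction of the maximum dT used at design, and the factor by which
    the heat-removal driving force can still be increased."""
    dt = T_REACTOR - t_jacket
    return dt / MAX_DT, MAX_DT / dt

for case, t_jacket in (("Case 1", 131.6), ("Case 2, first reactor", 118.8)):
    frac, growth = headroom(t_jacket)
    print(f"{case}: uses {frac:.0%} of the max dT; heat removal can grow {growth:.1f}x")
```

Case 1 can increase its heat-removal driving force more than eightfold before the jacket bottoms out at 70 °F, while the first reactor of Case 2 has only about a threefold reserve.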
where L = vessel length (ft) and D = vessel diameter (ft). The utility used in each system is
cooling water. Somewhat more flow is needed in Case 2, but since cooling water is relatively
inexpensive, the difference in the cost of cooling water is assumed negligible. Therefore we
only look at capital investment. The cost of one large reactor in Case 1 is $427,300. The cost
of the two smaller reactors in Case 2 is $296,600.
Thus steady-state economics indicate that the two-CSTR process of Case 2 is the best
process. However, this is not necessarily true. We need to look at the dynamics of the two
alternatives before we make a decision.
Table 3
Temperature controller tuning parameters
                      1-CSTR Process   2-CSTR Process   2-CSTR Process
                                       First Reactor    Second Reactor
Ultimate gain         91               23               94
Ultimate period (hr)  0.29             0.27             0.28
Kc                    41               10               42
Integral time (hr)    0.48             0.46             0.46
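The gains in Table 3 are consistent with the common Kc = Ku/2.2 rule, and the reset times are roughly 1.7 times the ultimate period, a more conservative reset than the classical Ziegler-Nichols Ti = Pu/1.2. This reading of the table is my observation, not stated in the text:

```python
# Back-calculating Table 3 from the relay-feedback results (Ku, Pu).
cases = {                       # name: (ultimate gain Ku, ultimate period Pu in hr)
    "1-CSTR": (91, 0.29),
    "2-CSTR first reactor": (23, 0.27),
    "2-CSTR second reactor": (94, 0.28),
}

def kc_from_ku(ku):
    """Controller gain from the ultimate gain, using the Kc = Ku/2.2 rule."""
    return ku / 2.2

for name, (ku, pu) in cases.items():
    print(f"{name}: Kc = Ku/2.2 = {kc_from_ku(ku):.0f}, 1.7*Pu = {1.7 * pu:.2f} hr")
```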
Figure 3A shows how temperature, cooling water flow, reactor composition and jacket
temperature respond for Case 1 to the heat of reaction increase. The peak in the temperature
curve is only about 0.6 °F.
Figures 3B and 3C give responses of the first and second reactors for Case 2. Now the peak
in the temperature in the first reactor is greater than 3 °F, almost five times larger than in
Case 1 with the single large CSTR. The dynamic performance of Case 1 is strikingly better
than that of Case 2, as Figure 3D shows in detail.
The impact of more temperature variability on product quality and safety can be
tremendously important in many reactor systems. The improvement in control in building and
operating the single CSTR process is dramatic compared to the performance of the two-CSTR
process. We are not talking about a 10% improvement. In this example, the improvement is a
factor of almost 5!
This simple process provides a convincing example of the need for simultaneous design.
Had the decision of which alternative to build been made on just the steady-state economics
of capital investment, the process would have been much more difficult to control, show
larger swings in temperature and produce product of poorer quality.
Fig. 3. (A) 1-CSTR process, (B) first reactor 2-CSTR process, (C) second reactor 2-CSTR
process, (D) comparison of 1-CSTR and 2-CSTR processes.
The second process considered in detail is a reactor that is cooled by evaporative cooling:
the boiling liquid in the CSTR uses the latent heat of vaporization to remove the exothermic
heat of reaction. The irreversible exothermic reaction A + B —> C occurs in the liquid phase in
the reactor. Figure 4 shows the process configuration.
In this section we will explore the effect of conversion and condenser size on the dynamic
controllability of this autorefrigerated reactor process. We will demonstrate that
controllability becomes more difficult as reaction conversion decreases because more "fuel" is
available to permit reaction runaways. We will also demonstrate that the design of the
condenser must consider dynamics, i.e. condensers designed using traditional steady-state
heuristics are grossly undersized and can produce safety and environmental problems.
Condensers that are too small to handle dynamic disturbances can lead to reaction runaways,
disk ruptures and environmental pollution.
A detailed description of the dynamic model used in the simulations and all the kinetic
parameters, physical properties and vapor-liquid equilibrium relationships used in the
simulation of this process are given in Luyben [6]. Some important parameters are the overall
heat-transfer coefficient U = 150 Btu/hr-ft2-°F, heat of reaction lambda = -30,000 Btu/lb-mole
of C generated and heat of vaporization dHv = 10,000 Btu/lb-mole for all components.
The inlet cooling water temperature is Tcw0 = 70 °F. With this inlet cooling water
temperature, the design engineer might assume that it is reasonable to select a condenser
cooling water temperature of 110 to 130 °F. We demonstrate below that this apparently
reasonable selection would lead to an uncontrollable process.
The steady-state reactor temperature TR is held at 175 °F for all cases by adjusting the
operating pressure. The process temperature in the condenser Tc varies with the level of
conversion for which the system is designed. For example, for a 90% conversion design the
condenser temperature is 136 °F. For a 60% conversion design the condenser temperature is
150 °F because the operating pressure is higher (85.6 psia versus 61.5 psia) since reactor
liquid has more of the light A and B components. Table 4 gives operating conditions and
equipment sizes for these two cases. Fresh feed has a flowrate of 100 lb-mol/hr and
composition 55 mol% A and 45 mol% B in both cases (an excess of A is fed).
Temperature differentials of 20 to 30 °F have been chosen for the condenser designs shown
in Table 4. These AT's are typical of what might be selected for steady-state design when no
consideration of dynamics is incorporated in the design procedure.
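Behind these choices is the steady-state sizing relation A = Q/(U dT). A sketch of the trade-off (the condenser duty Q below is hypothetical; U = 150 Btu/hr-ft2-°F and the 136.3 °F process temperature are from the text):

```python
# Condenser sizing: a smaller design temperature differential forces a
# proportionally larger heat-transfer area.
U = 150.0   # overall heat-transfer coefficient, Btu/hr-ft2-degF (from the text)

def condenser_area(q_btu_hr, t_process, t_cw):
    """Steady-state condenser area from A = Q/(U*dT)."""
    return q_btu_hr / (U * (t_process - t_cw))

Q = 800_000.0                      # hypothetical condenser duty, Btu/hr
T_PROCESS = 136.3                  # condenser process temperature, degF (Table 4)
for t_cw in (110.0, 125.0, 130.0): # warmer cooling water -> smaller design dT
    a = condenser_area(Q, T_PROCESS, t_cw)
    print(f"Tcw = {t_cw} degF: dT = {T_PROCESS - t_cw:.1f} degF, A = {a:.0f} ft2")
```

Cutting the design dT by a factor of four buys a condenser four times larger; that over-surface is exactly the dynamic headroom needed when a disturbance demands extra heat removal.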
Table 4
Design at different levels of conversion
Conversion (%)                              90       60
Reactor temperature (°F)                    175      175
Reactor volume (gallons)                    553      58.5
Reactor compositions (m.f.)
  zA                                        0.2437   0.3836
  zB                                        0.0756   0.2426
  zC                                        0.6907   0.3698
Reactor pressure (psia)                     61.5     85.6
Condenser process temperature (°F)          136.3    150.6
Condenser compositions (m.f.)
  xA                                        0.5835   0.6604
  xB                                        0.0905   0.2123
  xC                                        0.3260   0.1273
Condenser cooling water temperature (°F)    110      130
Condenser area (ft2)                        208      134
area) and those on the right correspond to Tcw = 130 °F (large area). An openloop unstable
process is more difficult to control than an openloop stable process. The smaller-area process
becomes openloop unstable when conversion drops below 80%.
Figures 8A and 8B give results for conversions from 80 down to 50% when different
temperature differentials are used for the design. It is clear that very small design AT's must
be used to achieve a controllable system.
5.4. Conclusions
This process provides a dramatic example of the need for simultaneous design.
Conventional steady-state design procedures would select temperature differentials for this
process of 30 to 40 °F. Dynamic considerations show that much smaller AT's (an order of
magnitude for low conversion reactors) must be used to provide good temperature control.
The last example is a gas-phase process with a tubular reactor, gas recycle compressor,
feed-effluent heat exchanger, condenser and separator. The steady-state design of this process
leads to an uncontrollable system if the reactions are highly temperature sensitive. We
demonstrate that changing the design produces a much more easily controlled process. We
consider a complete plant, not just the reactor in isolation.
temperature rise occurs through the reactor. This type of system typically has a maximum
temperature limitation, and this maximum temperature occurs at the reactor exit under steady-
state conditions. A maximum design reactor outlet temperature Tout of 500 K is assumed.
A simple condenser/separator is assumed. All of the product C produced in the reactor is
condensed and leaves in the liquid product stream. It is assumed that there is no loss of the
reactant components A or B in this liquid stream and that the gas recycle stream from the
separator drum contains no C.
Several papers [7,8,9] studying this type of system give all the details of the parameters,
economics, steady-state optimization and dynamic models.
Another design variable is the reactor inlet temperature. However it is not independent of
the recycle flowrate because a fixed production rate generates a fixed amount of energy in the
reactor. If the reactor outlet temperature and the recycle flowrate are fixed, the resulting
temperature rise fixes the reactor inlet temperature.
However, in the "hot" high-activation-energy case, reactor runaways occur for very small
changes in inlet temperature, as shown in Figure 11. These results are for the design with 50
mol% A in the recycle gas.
A very small 2 K increase in the inlet temperature causes a reactor temperature runaway.
This occurs because there is plenty of fuel around to react and the specific reaction rate k
changes drastically with temperature if the activation energy is large.
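The sensitivity argument can be quantified: the fractional change in k per kelvin is E/(RT^2), which grows in direct proportion to the activation energy. A sketch (the two activation energies below are illustrative values, not taken from the chapter):

```python
# Arrhenius temperature sensitivity near the 500 K reactor outlet temperature.
import math

R = 8.314   # gas constant, J/mol-K
T = 500.0   # reactor outlet temperature, K (from the text)

def k_growth(E_j_per_mol, dT):
    """Factor by which the Arrhenius k grows for a small temperature rise dT,
    using d(ln k)/dT = E/(R*T^2) evaluated at T."""
    return math.exp(E_j_per_mol / (R * T * T) * dT)

for E_kj in (70, 140):   # "cold" vs "hot" activation energies - illustrative
    print(f"E = {E_kj} kJ/mol: k grows {k_growth(E_kj * 1000.0, 2.0):.3f}x after a 2 K rise")
```

Doubling E doubles the logarithmic sensitivity, and in a recycle loop with plenty of both reactants even a few percent of extra rate compounds into the runaway shown in Figure 11.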
The overall rate of reaction R (kmol/sec/kg catalyst) depends on k and on the partial
pressures of the two reactants: R = k(yAP)(yBP). If the concentration of one of the reactants is
designed to be small and is permitted to decrease as the reactant is consumed, some built-in
"self regulation" is achieved. This makes the process more controllable.
This "limiting-reactant" design requires a bigger reactor and more recycle, so its steady-
state economics are not as good as the equimolar case. It also requires a change in the control
structure to that shown in Figure 12.
The fresh feed of component A is flow controlled. The composition in the recycle gas is
permitted to float.
The effectiveness of this design and this control structure is shown in Figure 13. The
response of reactor outlet temperature for several large disturbances is shown. Reactor inlet
temperature changes of 12 °F and fresh feed flowrate changes of +50% can be handled.
This process provides an excellent example of the critical need for simultaneous design.
I hope you are now convinced that a consideration of dynamics needs to be included in the
design of a chemical process. But how can this be done quantitatively and effectively so that
intelligent decisions can be made?
Several approaches have been proposed, and some of the more practical are briefly
reviewed in this section. The "capacity-based" approach discussed in Section 7.3 appears to
offer a fairly simple, logical and effective methodology for achieving the goals of
simultaneous design.
Note that Step 5 requires specifying the control structure and tuning, specifying
disturbances and developing a rigorous dynamic model for each design considered.
The main difficulty with this method is the determination of the weighting factors. It is not
clear what values of $/ISE to assign to each controlled variable and how to balance these with
TAC. Similar algorithmic approaches to the problem have been proposed [12], but their
complexity and computational intensity limit their application to relatively simple flowsheets.
7.3. Capacity-Based Approach
The basic idea of this method is to determine what periods of time the process is making
on-spec products and what periods of time the products are outside the specification band.
Then the capacity of the process equipment is adjusted to produce the required production of
on-spec product. The cost of handling the off-spec material is included in the economic
calculations. The result is a $/year profit (or any other economic measure) for each alternative
flowsheet. This permits a quantitative comparison that incorporates both steady-state and
dynamic factors. Several papers [13, 14, 15] develop this method and illustrate its application
to several processes, which vary from simple flowsheets to complex, multi-unit processes
with recycles and multiple reaction and separation sections.
The first four steps in this procedure are the same as those given in Section 7.1. Steps 5
through 7 are different.
1. Design several alternative steady-state processes: flowsheet, equipment sizes,
operating conditions and utilities.
2. Apply a control structure to each and determine controller tuning. The controller may
be whatever type is desired: decentralized PID, MPC, nonlinear, etc.
3. Develop rigorous dynamic models of each process.
4. Specify a scenario of typical time-domain disturbances. These must include
magnitudes and frequencies (steps, ramps, etc.).
5. Subject the models to these disturbances.
6. Determine the fraction of time that on-spec products are produced.
7. Increase the size of the equipment so that the required production rate of on-spec
product is achieved.
8. Calculate an economic performance measure for this enlarged plant (total annual cost,
return on investment, discounted cash flow or net present value).
9. Include in these economics the cost of handling the off-spec material. This may
require reworking, incineration, waste disposal or selling at a reduced price.
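The procedure above can be caricatured in a few lines of code. Everything here is hypothetical except the on-spec fractions, which match the example discussed next; the product value, rework cost, 30% annual capital charge, six-tenths capital scaling, and the use of the quoted TACs as base capital are all made-up illustration numbers:

```python
# Toy capacity-based comparison (Steps 6-9 above, in caricature).
def capacity_based_profit(on_spec_frac, base_capital, product_value,
                          rework_cost, required_rate=100.0):
    """Annual profit after oversizing the plant so that on-spec production
    alone meets the required rate."""
    scale = 1.0 / on_spec_frac                  # Step 7: enlarge the equipment
    capital = base_capital * scale ** 0.6       # six-tenths capital scaling rule
    off_spec = required_rate * (1.0 - on_spec_frac) / on_spec_frac
    revenue = required_rate * product_value     # all on-spec product is sold
    penalty = off_spec * rework_cost            # Step 9: cost of off-spec material
    return revenue - penalty - 0.3 * capital    # 30%/yr capital charge (assumed)

small = capacity_based_profit(0.71, 693_000, 10_000, 2_000)  # small-reactor flowsheet
large = capacity_based_profit(0.93, 725_800, 10_000, 2_000)  # large-reactor flowsheet
print(small < large)   # -> True: the better-controlled design wins
```

Even with crude numbers, the design with less product variability wins once oversizing and rework costs are charged against the nominally cheaper flowsheet, which is the whole point of the method.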
A simple example helps to illustrate the method. Suppose we want to quantitatively
compare the two alternative flowsheets shown in Figure 15.
Both processes consist of a CSTR and a stripping column. Fresh feed of reactant A enters
the reactor in which B is produced via the reaction A—>B. Reactor effluent is a mixture of A
and B and is fed to the column. The lighter A is taken overhead and recycled back to the
reactor. Product B is the bottoms stream from the stripper.
The flowsheet on the left features a smaller reactor than the one on the right (3000 versus
5000 gallons), but it has a stripping column with more trays (19 versus 12). Since the smaller
reactor has less per-pass conversion, the concentration of reactant A in the feed to the stripper
is larger. This means more recycle, which requires more energy in the stripper and a larger
diameter column.
Running through the steady-state economics of these two systems (annual capital cost plus
energy cost) gives total annual costs of $693,000/yr for the flowsheet on the left and
$725,800/yr for the flowsheet on the right (with the larger reactor).
Which of these two processes should be built? The correct answer is "We do not know!"
Until the dynamic controllability of the two is explored, the design selection cannot be
intelligently made.
The control structure shown in Figure 15 is selected, controllers are tuned and a series of
typical disturbances is introduced into the process. Figure 16 gives the results. The small-reactor
process (top of Figure 16) shows much more variability than does the process with the larger
reactor (lower graph). Product quality is outside the high and low specification limits about
29% of the time, giving a capacity factor of 0.71. The size of the equipment must be increased
by a factor of 1/0.71.
The larger-volume reactor process produces on-spec product 93% of the time, so its
capacity factor is 0.93 and its equipment must be enlarged by a factor of 1/0.93. Calculating
the annual profit for each process, including the cost of reworking the off-spec material, gives
a profit of $737,000/year for the small-reactor process and $1,534,000/year for the large-reactor
process.
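Steps 6-9 can be sketched as simple arithmetic. In the sketch below, the on-spec fractions and total annual costs come from the text, while the revenue and rework-cost figures are illustrative assumptions only, so the printed profits are not the chapter's numbers:

```python
# Hedged sketch of steps 6-9 of the capacity-based approach.  The on-spec
# fractions (0.71 and 0.93) and total annual costs come from the text; the
# revenue and rework-cost figures below are illustrative assumptions.

def annual_profit(gross_revenue, frac_on_spec, total_annual_cost, rework_cost):
    """Profit after oversizing equipment by 1/frac_on_spec (step 7) and
    charging for reworking the off-spec fraction (step 9)."""
    oversize = 1.0 / frac_on_spec                            # step 7
    capital_penalty = total_annual_cost * (oversize - 1.0)   # crude scaling assumption
    rework_penalty = rework_cost * (1.0 - frac_on_spec)      # step 9
    return gross_revenue - total_annual_cost - capital_penalty - rework_penalty

REVENUE = 2.5e6   # $/yr of on-spec product, assumed
REWORK = 1.0e6    # $/yr if all product had to be reworked, assumed

small = annual_profit(REVENUE, 0.71, 693_000, REWORK)   # small-reactor flowsheet
large = annual_profit(REVENUE, 0.93, 725_800, REWORK)   # large-reactor flowsheet
print(f"small reactor: ${small:,.0f}/yr   large reactor: ${large:,.0f}/yr")
```

Even with these assumed figures, the ranking reverses relative to the steady-state comparison: the higher capacity factor of the large-reactor design more than pays for its higher steady-state cost.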
Thus the plant that looks more attractive from simply a steady-state point of view is not the
best plant.
The capacity-based approach is a practical and effective method for incorporating dynamic
controllability into the design of a chemical plant. All the tools needed for the job are
available in commercial flowsheeting simulation software.
Note that this method has the significant advantage of explicitly taking into account
variability in product quality. According to Downs [16], "The importance of product quality
has put low process variability in a place of prominence among the process design criteria,
along with the traditional goals of capital cost and utility consumption minimization." The
capacity-based approach provides a convenient way to simultaneously combine all these
steady-state and dynamic factors. Quantitative comparisons can be made on the basis of
economics ($ profit) that incorporate dynamic effects.
Simultaneous design has been taught in the senior design course at Lehigh University for
almost a decade. The course covers two semesters, with traditional steady-state synthesis
covered in the fall: steady-state computer flowsheet simulation, engineering economics,
equipment sizing, reactor selection, energy systems, distillation separation sequences,
azeotropic distillation and heuristic optimization. The spring semester deals with dynamics
and control.
9. CONCLUSION
I hope the material in this chapter is successful in making the message loud and clear that
simultaneous design must be an integral part of the senior chemical engineering design
course. Chemical engineering education is behind other disciplines in stressing dynamics. The
next time you take an airplane flight, just imagine how safe you would feel if the mechanical
engineer who designed the airplane knew nothing about dynamics. You should get that same
feeling when you enter a chemical plant or refinery.
REFERENCES
Chapter A2
1. INTRODUCTION
In recent years, the control design problems faced in the chemical industry have become more
challenging owing to a number of factors: (i) the universal drive for more consistent attainment
of high product quality, (ii) more efficient use of energy, and (iii) more stringent environmental
and safety regulations. Both in academia and in industry, therefore, "Chemical Process Con-
trol" research (and development) has been concerned, on the one hand, with the development
of control strategies specifically aimed at solving the control problems that are peculiar to the
contemporary chemical process and, on the other, with the adaptation of relevant control strategies de-
veloped in other disciplines (such as aerospace, electrical engineering, etc.).
There are many mature controller design strategies for solving the range of control problems
encountered in chemical processes, some of which have attained a certain degree of industrial
success. Because some of these strategies are more appropriate than others for solving certain
problems, it is important for the practicing engineer, charged with the responsibility of designing
and implementing industrial control systems, to have a means of matching controller design
strategies to the specific problems posed by the chemical process of interest. This chapter is
concerned with providing some preliminary thoughts and concepts on how chemical processes
can be characterized and classified, and how such classification might be used for the rational
selection of controller design strategies.
In section 2, the three main attributes for characterizing and classifying chemical processes
are introduced and the "Process Characterization Cube" [1] is described as a means of summa-
rizing this classification. In section 3, as a starting point, a set of possible metrics that can be
applied along the axes of the "characterization cube" are presented. In section 4, the relation-
ship between the results of the characterization analysis and the resulting "best" control strategy
is discussed. In section 5, two case studies, including a benchmark chemical reactor and a wood
chip digester (used in the pulp and paper industry), are used to demonstrate the characterization
techniques. Finally, in section 6, conclusions and a summary of future directions are presented.
2. Dynamic Character. The extent of complexity associated with the dynamic responses (for
linear systems: step response, frequency response, or transfer function poles and zeros):
From a classification of Simple for systems exhibiting first-order and other relatively low-
order behavior, through a classification of Moderate for systems exhibiting higher-order,
but still relatively benign, behavior, to a classification of Difficult for systems exhibiting
problematic dynamics such as inverse response, time delays, etc. that impose severe
limitations on the best possible control system performance.
3. Degree of Interaction. The extent to which all of the input (or manipulated) variables
interact with all of the output (or controlled) variables: From a classification of Low for
those systems whose variables are uncoupled or only weakly coupled, through a classi-
fication of Medium for those systems whose variables are mildly coupled but enough to
warrant some attention, to a classification of High for those systems whose variables are
very strongly coupled and ill-conditioned.
where N : U → Y is the system operator describing the process in question and G :
U → Y is a linear approximation to N. U is the space of considered input signals, Y
is the space of admissible output signals, and L is the space of linear operators. μ_N is a
number between zero and one, where a value of zero indicates the existence of a linear
approximation to the system whose output matches the output of the original system over
the set of inputs being considered. A value close to one indicates a highly nonlinear
system.
As Eq. (1) represents an infinite-dimensional optimization problem, approximate com-
putational techniques must be utilized to compute the measure. A general computational
technique involves selecting a representative set of inputs and then building a linear ap-
proximation composed of a weighted sum of linear basis functions, e.g.,

G(u) = Σ_{i=1}^{N_b} w_i G_i(u)

where w_i are the weights on the basis functions, τ_i are the time constants of the first-order
basis functions G_i, and N_b is the number of basis functions chosen. An optimization
routine is then employed to find the set of w_i that completes the infimum operation across
the considered input set. It has been shown [3] that the search for the optimal set of w_i
is convex.
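As an illustration of the convexity of this weight search, the following sketch (with an assumed nonlinear system and assumed basis time constants) fits the weights by ordinary linear least squares, which is a convex problem:

```python
import numpy as np

# Sketch: approximate a nonlinear system's response by a weighted sum of
# first-order linear step responses and find the weights by linear least
# squares.  The "true" system and the basis time constants are illustrative.

dt, T = 0.01, 20.0
t = np.arange(0.0, T, dt)

def first_order_step(tau, t):
    """Step response of a unit-gain first-order lag with time constant tau."""
    return 1.0 - np.exp(-t / tau)

# Illustrative nonlinear response: a saturating version of a first-order step.
y_true = np.tanh(1.5 * first_order_step(1.0, t))

taus = [0.25, 0.5, 1.0, 2.0, 4.0]          # assumed basis time constants
Phi = np.column_stack([first_order_step(tau, t) for tau in taus])

# Convex problem: ordinary least squares over the weights w.
w, *_ = np.linalg.lstsq(Phi, y_true, rcond=None)
resid = float(np.sqrt(np.mean((Phi @ w - y_true) ** 2)))
print("weights:", np.round(w, 3), "rms residual:", round(resid, 4))
```

Because the approximation is linear in the weights, any standard least-squares solver finds the global optimum for the chosen basis.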
A less rigorous but more computationally efficient lower bound on Eq. (1) can be ob-
tained by limiting the space of admissible inputs to sinusoids of varying amplitude and
frequency. Provided that the nonlinear system in question preserves periodicity, the out-
put after any transients have decayed can be represented by a Fourier series, and the
lower bound can be written as

μ_LB = sup_{a ∈ A, ω ∈ Ω} [ 1 − A_1²(ω, a) / Σ_{n=1}^∞ A_n²(ω, a) ]^(1/2)   (5)

where A, Ω are the sets of input signal amplitudes and frequencies being considered and
A_n(ω, a) is the amplitude of the nth output harmonic. μ_LB
is thus defined as the lower bound (LB) on Eq. (1) and usually lies within 10-15% of the
best value obtained by using the optimization method discussed above.
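The harmonic lower-bound idea can be sketched numerically: drive a periodicity-preserving nonlinear system with a sinusoid, estimate the output harmonic amplitudes by FFT, and compare harmonic energy to total energy. The system, amplitudes and frequency below are assumptions chosen purely for illustration:

```python
import numpy as np

# Sketch of the sinusoidal lower bound: the best linear fit to a sinusoidal
# response is the fundamental, so the residual is the harmonic content.
# Example system (first-order lag followed by a cubic) is illustrative.

def simulate(amp, omega, dt=1e-3, periods=30):
    """Forward-Euler simulation of dx/dt = -x + u, y = x + 0.3*x**3."""
    n = int(2 * np.pi / omega / dt) * periods
    t = np.arange(n) * dt
    x = 0.0
    y = np.empty(n)
    for k in range(n):
        u = amp * np.sin(omega * t[k])
        x += dt * (-x + u)
        y[k] = x + 0.3 * x ** 3
    return y[n // 2:]                      # keep the (steady) second half

def harmonic_lower_bound(amp, omega):
    """sqrt(1 - A_1^2 / sum_n A_n^2) over the retained harmonics."""
    y = simulate(amp, omega)
    Y = np.abs(np.fft.rfft(y - np.mean(y)))
    k1 = int(np.argmax(Y))                 # FFT bin of the fundamental
    A = Y[k1::k1][:10]                     # fundamental plus harmonics
    return float(np.sqrt(1.0 - A[0] ** 2 / np.sum(A ** 2)))

mu_small = harmonic_lower_bound(amp=0.1, omega=1.0)  # nearly linear regime
mu_large = harmonic_lower_bound(amp=3.0, omega=1.0)  # strongly nonlinear
print(round(mu_small, 3), round(mu_large, 3))
```

As expected, the estimate is near zero for small input amplitudes (where the cubic term is negligible) and grows as the amplitude pushes the system into its nonlinear regime.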
Eq. (1) and its various approximations are suited to characterize the open-loop degree
of nonlinearity of a process. Research and practical experience have shown that, for
purposes of controller design, characterization of purely open-loop nonlinearity is insuf-
ficient. Closed-loop effects, such as controller performance objectives, play a role in
determining the optimal degree of nonlinearity of a controller to be designed for a given
process. As will be discussed in section 4, such closed-loop concerns will affect the
mapping between a process' location in the characterization cube to its location in the
corresponding controller design cube.
2. Dynamic Character
The SISO linear first-order system is the easiest to control, so complexity in dynamic
character is indicated by deviations from this simple form. We therefore define a measure
of the dynamic character difficulty as:

μ_D = max [ ||L − L_1||_D / ||L − L_1^(0)||_D ]   (6)

where L_1 is the operator for the "closest" first-order system in the sense implied by the
dynamic norm ||·||_D:

||L − L_1||_D = min_τ ∫_0^∞ (y(t) − y_1(t, τ))² dt   (7)
(i.e., the Integral Square Error (ISE) norm), where y(t) is the step response of the cor-
responding system linearization (normalized by the input magnitude), and y_1(t, τ) is
the corresponding step response of the first-order system with a time constant τ. It is
also possible to utilize other alternatives such as the Integral Absolute Error (IAE), Time-
Weighted Integral Absolute Error (ITAE), or Time-Weighted Integral Square Error (ITSE)
in (7).
L_1^(0) represents the steady-state response of the system as characterized by the steady-state
gain K. The denominator of the argument of (6) thus scales the measure by the value of
(7) as τ → 0. In other words, the best such approximation of the system is an infinitely fast
system: a poor approximation for most realizable systems, but one that sets an upper
bound on the measure.
Observe that for first-order systems, μ_D = 0, with the value increasing as the actual
process dynamics (as represented by the step response) exhibit more complex character-
istics. Due to the scaling, the maximum meaningful value that can be obtained is μ_D = 1,
since L_1 with τ → 0 (essentially becoming L_1^(0)) can always be chosen as the comparison
operator.
It is, in fact, possible to generalize μ_D to a family of measures μ_D^(n), n = 1, 2, ..., defined
as:

μ_D^(n) = max [ ||L − L_1^(n)||_D / ||L − L_1^(0)||_D ]   (8)

where L_1^(n) is the operator for the "closest" nth-order positive real system, so that μ_D
given in Eq. (6) is simply the first in the family. For an nth-order system, the successively
increasing quality of approximation provided by proceeding through the range of model
orders yields the following relationship:

μ_D^(1) ≥ μ_D^(2) ≥ ... ≥ μ_D^(n)   (9)

with μ_D^(n) = μ_D^(n+1) = .... As a result of the fundamental properties of the class of positive
real systems, those systems with RHP poles and zeros will return higher values for this
family of measures.
3. Interaction
A very well known indicator of interaction is Bristol's relative gain array (RGA) [8],
typically denoted as a matrix Λ and calculated as:

Λ = K × (K⁻¹)ᵀ   (10)

where K is the process steady-state gain matrix and "×" denotes element-by-element
multiplication.
The RGA may be converted to a single metric for extent of interaction (after rearrang-
ing the input/output configuration such that the "best" RGA elements are on the main
diagonal) as follows:
An alternative way of utilizing the RGA is by considering a scaled version of the RGA
number proposed by Skogestad and Postlethwaite [9]:
μ_I = ||Λ(G) − I||_sum / ||Λ(G)||_sum   (12)

where G represents the linear process model matrix, I is the identity matrix, and ||·||_sum
is the sum matrix norm. For example, given a matrix A with elements a_ij:

||A||_sum = Σ_{i,j} |a_ij|
A μ_I value of zero indicates a completely diagonal system, while values tending toward
one indicate a lack of diagonal dominance in the system and, therefore, a strongly coupled
system.
While both the RGA and the RGA number are computable as functions of frequency, the
commonly-used procedure involves calculating Λ based on the steady-state gain matrix
of the process, thus taking into consideration only steady-state interactions.
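Both Eq. (10) and the scaled RGA number of Eq. (12) are straightforward to compute from a steady-state gain matrix; a minimal sketch with illustrative gain matrices:

```python
import numpy as np

# Sketch of Eqs. (10) and (12): the RGA as the element-by-element product
# of K with the transpose of its inverse, and the scaled RGA-number
# interaction metric.  The gain matrices below are illustrative.

def rga(K):
    """Bristol's relative gain array: K x (K^-1)^T, elementwise (Eq. (10))."""
    return K * np.linalg.inv(K).T

def mu_I(K):
    """Scaled RGA number: ||RGA - I||_sum / ||RGA||_sum (Eq. (12))."""
    Lam = rga(K)
    I = np.eye(K.shape[0])
    return float(np.abs(Lam - I).sum() / np.abs(Lam).sum())

K_diag = np.array([[2.0, 0.0], [0.0, -1.0]])   # uncoupled system
K_coup = np.array([[1.0, 0.9], [0.9, 1.0]])    # strongly coupled system
print(mu_I(K_diag), round(mu_I(K_coup), 3))
```

The uncoupled gain matrix returns μ_I = 0, while the strongly coupled (nearly singular) one returns a value approaching one, matching the interpretation given above.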
metric (12) can be computed. In order to compute the value as a function of frequency, thorough
identification of linear process models must be performed.
1. Dynamics-Nonlinearity Plane A close relationship exists between μ_N and μ_D. While
μ_N is a measure of the normed "distance" from a nonlinear operator to a linear operator,
μ_D is a normed "distance" from one linear operator to a (perhaps) reduced-order linear
operator. While μ_N is explicitly a function of the inputs used to calculate its value, due to
the effects of nonlinearity, μ_D is not, since it is based on a linear model of the process.
As defined in general, the two measures are independent: μ_N is not a function of dy-
namic difficulty, given that no restriction is made on linear model order or structure in the
search for the optimal linear approximation, and μ_D is only a function of nonlinearity when
more than one operating point is considered, since the linearizations are operating-point
dependent. In the case where process data are used to compute μ_D, nonlinear effects could
play a role, since there may be no guarantee that the data come from a region sufficiently
close to a steady-state condition where a linear assumption is valid.
3. Multivariate-Nonlinearity Plane While μ_N is a general definition and allows for con-
sideration of multivariable systems, the available literature does not include a description
of rigorous techniques for computing a proper value for multivariable systems. There-
fore, a general description of the relationship between μ_I and μ_N is not possible. It can
be said that μ_I is not a function of nonlinearity, given a fixed operating point, since it is
based on linearized models. As with μ_D, for a nonlinear process, μ_I may be a function of
the operating point considered, since the standard procedure for use of the RGA for nonlinear
systems involves linearizing the model at a relevant operating point in order to obtain the
steady-state gain matrix.
Having discussed various candidate metrics that one may use to determine the "location" of
a process in the cube, this section is concerned with the second half of the cube utility issue,
namely: once the "location" of a process in the process characterization cube has been de-
termined, what is an appropriate controller design to consider? The premise here is that the
various classes of problems posed by the characteristics that define each process category are
adequately handled by controller design strategies specifically tailored to these problems. The
objective is therefore to match process categories to the appropriate controller design strategy.
In this regard, we begin with some general recommendations based on the categorization in-
troduced in section 2.2. This is followed with a discussion of additional considerations for
model-based control.
The discussion may be used in one of two ways:
1. to select a particular candidate controller design strategy for a given process (when one
has a choice), or
2. to assess a specific controller's appropriateness for the process at hand (when one is com-
pelled, either because of hardware limitations or other reasons, to use the given con-
troller).
I Single-loop PID control, with appropriate loop pairing for multivariable processes (since
such processes are mostly linear, with no difficult dynamics and little or no interaction
among the process variables);
II Single-loop PID control with compensation for difficult dynamics (e.g., Smith predictors
for time-delays), again with appropriate loop pairing for multivariable processes. Alter-
natively, the use of explicitly model-based control strategies like direct synthesis control,
Internal Model Control (IMC), or Model Predictive Control (MPC) may be appropriate;
III Single-loop PID control with compensation for the loop interactions, such as the use of
linear decouplers or SVD-based control, or the use of multivariable model-based control
strategies like MPC;
IV Single-loop PID control, with additional compensation for both interactions and difficult
dynamics (e.g., linear decouplers and Smith predictors). Alternatively, extensions of mul-
tivariable control strategies like MPC to handle difficult dynamics (e.g., time-delays) are
available [12];
VI Single-loop nonlinear extensions of PID controllers with added compensation for difficult
dynamics (e.g., Smith predictors with gain scheduling). Alternatively, nonlinear model-
based control strategies like GMC or Nonlinear Model Predictive Control (NMPC) [15];
VII To deal with both nonlinearities and interactions, nonlinear model-based multivariable
control strategies like GMC or NMPC are recommended;
VIII These are the most difficult control problems, requiring full-scale multivariable nonlinear
model-based control strategies like NMPC, with compensation for difficult dynamics like
significant time-delays.
As a practical matter, it is important to note that in cases where loop pairing may be appropriate,
but where the best pairings are not a priori obvious, the number of possible loop pairings may
grow rapidly, being equal to M! for an M x M square system and possibly much larger for
non-square systems.
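The factorial growth of the pairing count is easy to illustrate; the 3 × 3 enumeration below uses hypothetical input/output names:

```python
import math
from itertools import permutations

# Illustration of the loop-pairing count: for an M x M square system each
# pairing assigns every manipulated input to a distinct controlled output,
# i.e. a permutation, so there are M! candidate pairings.

for M in (2, 3, 5, 8):
    print(M, math.factorial(M))

# For a 3x3 system the six candidate pairings can be enumerated explicitly
# (the u/y names here are hypothetical placeholders).
outputs = ["y1", "y2", "y3"]
pairings = [list(zip(["u1", "u2", "u3"], p)) for p in permutations(outputs)]
print(len(pairings))  # 6
```

Already at M = 8 there are 40,320 candidates, which is why interaction measures such as the RGA are used to prune the search rather than evaluating every pairing.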
1. approximation accuracy,
2. physical interpretability,
3. suitability for use in controller design,
4. ease of development.
Generally speaking, models that excel with respect to the first two of these criteria suffer sig-
nificantly with respect to the third and vice versa. In particular, note that highly accurate, easily
interpretable models are almost always developed in continuous-time, whereas the models re-
quired for model-based control strategies like MPC are generally discrete-time, describing the
approximate evolution of the process between sampling times. Performance with respect to the
last criterion is a strong function of both model type and technological advances. Consequently,
management of the tradeoff between these criteria lies at the heart of the practical model devel-
opment task.
A classical approach to this problem involves developing a sequence of models, as shown in
Fig. 2. The first step in this chain, labelled 1 in the figure, is the development of a fundamental
model M_F describing the dynamic interplay between dominant chemical and physical process
phenomena. This model is optimized with respect to the first two model validity criteria
(ability to predict process behavior accurately and physical interpretability), and advances in
model development tools are improving the quality of fundamental models with respect to the
fourth criterion (ease of development). Because their complexity is determined by process
details, however, fundamental models typically suffer badly with respect to the third criterion:
they are not directly compatible with most model-based control strategies.
To overcome this limitation, a sequence of model reduction steps is commonly employed.
Perhaps the best-known of these is Step 2 in Fig. 2, the linearization of the nonlinear funda-
mental model M_F to obtain a linear approximation M_L. Note that the process characterization
cube may be applied to both the process P and all of the approximating models considered here.
Since the fundamental model M_F is intended as a detailed description of the process P, we can
expect M_F and P to occupy about the same position in the characterization cube. In contrast,
the linearized model M_L represents a projection of M_F onto the linear multivariable face of
the cube. Since the dynamic complexity of the linearized model M_L is determined by the com-
plexity of the fundamental model M_F, model reduction procedures (e.g., procedures based on
singular perturbation approaches) are commonly applied to the linearized model M_L to obtain
a reduced-order linear model M_R in Step 3 of the sequence shown in Fig. 2. Note that this
process may be viewed as a projection of M_L along the dynamic complexity axis of the cube
toward the origin. Step 4 in the model development process shown in Fig. 2 is discretization of
the continuous-time model M_R to obtain the discrete-time model M_D. This step is necessary
for computer-based control strategies that take control actions at discrete time instants, based
on measurements made at discrete time instants. It is important to note that this process effec-
tively maps the model M_R from a continuous-time process and model characterization cube
to a closely related but not fully equivalent discrete-time model characterization cube. As a
specific example, non-minimum phase behavior need not be preserved under discretization [1,
p. 909].
Finally, Step 5 in Fig. 2 corresponds to the controller design task: given the discrete-time,
reduced-order, linearized model M_D, design a controller C. Note that many controller design
strategies are closely related to the idea of approximate model inversion, leading us to write,
with some abuse of notation, C ≈ M_D⁻¹. More specifically, many linear controllers may be
represented as special cases of the IMC structure in Fig. 3 [17]. Here, it can be shown that if
the process model M is exactly equal to the true process dynamics P, then perfect control is
achievable by making C = M⁻¹ = P⁻¹, assuming that this inverse exists. In the face of such
factors as time-delays that prevent exact inversion of M, and unavoidable differences between
M and P, practical IMC designs may be viewed as suitably constrained approximate inverses
of the process model M.
It is useful to generalize the model development procedure just described, adopting the
idea of homotopy methods that have become important in the design of algorithms to solve
various types of optimization problems. The basic idea is the construction of a continuous path
connecting a difficult problem that we wish to solve with a simpler problem that we can solve
easily. By following the path in sufficiently small steps from the simple problem, we ultimately
obtain an approximate solution to the more complicated problem of interest. More specifically,
two continuous functions f : X → Y and g : X → Y are said to be homotopic if there exists
a continuous function H : X × [0,1] → Y such that H(x, 0) = f(x) and H(x, 1) = g(x)
[18, ch. 11]. The idea behind homotopy methods in minimization, for example, is to find a
homotopy function H(x, λ) such that H(x, 0) = f(x) defines an easy minimization problem
and H(x, 1) = g(x) defines the minimization problem we would like to solve. This approach is
useful in cases where we can construct a sequence of intermediate values satisfying

0 = λ_0 < λ_1 < ... < λ_N = 1

such that the minimization of H(x, λ_i) is computationally feasible and provides a good starting
guess for the minimization of H(x, λ_{i+1}).
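A minimal continuation sketch of this idea, using assumed one-dimensional problems f and g, a convex blend H(x, λ) = (1 − λ)f(x) + λg(x), and plain gradient descent for each stage:

```python
import numpy as np

# Sketch of a homotopy/continuation minimization: H(x, lam) blends an easy
# problem f with the target problem g, and each intermediate minimizer
# warm-starts the next stage.  The functions, the lambda schedule and the
# step size are illustrative assumptions.

def f_grad(x):
    """Gradient of the easy problem f(x) = (x - 0.8)^2."""
    return 2.0 * (x - 0.8)

def g_grad(x):
    """Gradient of the target problem g(x) = (x^2 - 1)^2 (minima at +/-1)."""
    return 4.0 * x * (x ** 2 - 1.0)

def H_grad(x, lam):
    """Gradient of H(x, lam) = (1 - lam)*f(x) + lam*g(x)."""
    return (1.0 - lam) * f_grad(x) + lam * g_grad(x)

x = 0.8                                    # minimizer of the easy problem f
for lam in np.linspace(0.0, 1.0, 11):      # 0 = lam_0 < ... < lam_N = 1
    for _ in range(200):                   # a few gradient steps per stage
        x -= 0.05 * H_grad(x, lam)

print(round(x, 4))                         # tracks the x = +1 minimum of g
```

Because each stage starts in the basin of attraction established by the previous one, the continuation reliably tracks the x = +1 minimum of g, even though g itself has two symmetric minima.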
This concept extends nicely to the model reduction problem, as illustrated in Fig. 4. There,
M_0 represents a fundamental process model, to be reduced ultimately to a simple model M_N
that is directly usable for controller design. Although the model simplification steps described
in Fig. 2 do not constitute a continuous path, viewing the problem from the perspective of
homotopy methods does serve to emphasize at least two important ideas. First, the objective of
the model reduction process is to obtain a simplified model M_N that is a good approximation to
M_0. If we adopt the view of controller design as approximate model inversion, this requirement
means that if C ≈ M_0⁻¹ is a good approximation, then C ≈ M_N⁻¹ should also be a good
approximation. The second important idea emphasized by the homotopy view is that the steps
taken along the path from M_0 to M_N should not be "too large."
It is particularly instructive to view this second requirement in terms of the process charac-
terization cube. It was noted in the discussion of Fig. 2 that the linearization step represents a
projection from the interior of the cube, where the initial model M_0 lies, onto the multivariable
linear face. Since this projection eliminates all nonlinear behavior from the model,
this first step in the model reduction chain is often (and increasingly, as control requirements
become tighter) "too large." This observation suggests that a better first approximation strategy
would be to project M_0 onto some surface (e.g., the surface of a sphere centered at the origin)
that lies closer to the origin, but which is not contained in a face of the cube. As a specific
example, if it is possible to apply singular perturbation-based reductions to the original model
M_0, the reduced model M_1 will still be nonlinear and multivariable in character, but of lower
dynamic order, moving it closer to the origin of the process characterization cube. This idea is
closely related to model reduction strategies based on compartmentalization [19, 20] or hybrid
modeling [21], which attempt to reduce model complexity while preserving important forms of
qualitative behavior.
An inherent difficulty with projection-based model simplification strategies like traditional
linearization is that, since projections are non-invertible, information is lost. An alternative
approach is that taken in methods like feedback linearizing control [14], in which the original
nonlinear problem is mapped invertibly into a linear control problem. The applicability of such
strategies is, of course, highly dependent on the structure of the original nonlinear model, but in
cases where it is applicable, it does represent a way of reducing the "step size" in the homotopy-
based model reduction strategy considered here.
Overall, these results suggest the following general observations. First, the necessity for
model reduction comes from the fact that detailed fundamental models typically lie far from the
origin of the process characterization cube, making them incompatible with most control sys-
tem design methods, which require simpler models that lie closer to the origin. Since approx-
imation accuracy and physical interpretability—the first two of the four model quality criteria
discussed at the beginning of this section—generally degrade as we move closer to the origin,
the question of how far we can reduce the process model towards the origin is typically dictated
by minimum requirements with respect to these criteria. In particular, classical linearization
strategies—which project the model onto the linear multivariable face of the cube—are increas-
ingly inadequate, motivating the need for moderate complexity nonlinear models that lie in the
interior of the cube, but much closer to the origin than detailed fundamental models do. One
of the key practical challenges in process control system design is how to develop these mod-
els, especially for the practically important case of computer-based control, where the resulting
models occupy the corresponding—but not strictly equivalent—discrete-time model character-
ization cube [16].
5. CASE STUDIES
To demonstrate the proposed techniques, two case studies are presented that follow the proce-
dure from characterization of the processes through to performance assessment of a pertinent
set of controllers.
A → B → C,   2A → D

where C represents cyclopentanediol and D represents dicyclopentadiene. The objective is to
make the desired product, B, from a pure feed of A, maintaining a constant reactor temperature
and a constant concentration of the desired product in the reactor. The feedrate and the reactor
jacket temperature are available as manipulated variables.
The modeling equations for this process obtained by material and energy balances are given
as follows:
where

k_i = k_i0 exp(−E_i / x_3)   (15)

and the measured outputs are

y_1 = x_2
y_2 = x_3

Here, x_10 is the feed concentration of A, x_1 is the concentration of A in the reactor, x_2 is the
concentration of B, x_3 is the reactor temperature, V_R is the reactor volume, u_1 is the feedrate
(scaled by V_R), and u_2 is the reactor jacket temperature.
Using the process parameters reported by Engell and Klatt [22], the steady-state behavior of
the reactor may be investigated as a function of the feedrate u_1. The reactor jacket temperature,
u_2, is fixed at 130°C and x_10 is fixed at 5.1 mol/L.
Figure 5 shows the steady-state behavior of y_1 as a function of feedrate and Figure 6 shows
the steady-state behavior of y_2. Six specific operating points, as indicated in Figure 5, will
be characterized using the techniques mentioned above. Nonlinearity characterization of the
u_1-y_1 relationship is performed using the lower bound (5) in operating regions of ±5 h⁻¹ and
frequencies of approximately 0.3 to 100 rad/h. The dynamic and interaction measures in Eqs.
(6) and (12) are computed using the system linearization.
Figure 7 is a plot of the nonlinearity measure (5) as a function of reactor feedrate. The
results show severe nonlinearity in the region of steady-state gain change seen in Figure 5. The
results further suggest essentially linear behavior at high flowrates and increasingly nonlinear
behavior at low flowrates.
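The steady-state gain change underlying this nonlinearity can be reproduced qualitatively with an isothermal sketch of the van de Vusse-type scheme; the rate constants below are illustrative assumptions, not the Engell-Klatt parameter values:

```python
import numpy as np

# Hedged sketch of the steady-state gain change: for an isothermal scheme
# A -> B -> C, 2A -> D, the steady-state c_B passes through a maximum as the
# feedrate u increases, so the gain dc_B/du changes sign, which is where the
# nonlinearity measure peaks.  Rate constants are illustrative only.

k1, k2, k3 = 50.0, 100.0, 10.0    # 1/h, 1/h, L/(mol*h)  (assumed)
cA0 = 5.1                         # mol/L, feed concentration of A

def steady_state_cB(u):
    """Solve 0 = u*(cA0 - cA) - k1*cA - k3*cA**2, then cB = k1*cA/(u + k2)."""
    a, b, c = k3, u + k1, -u * cA0
    cA = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)  # positive root
    return k1 * cA / (u + k2)

u = np.linspace(1.0, 300.0, 600)
cB = np.array([steady_state_cB(ui) for ui in u])
i_max = int(np.argmax(cB))
print(f"c_B is maximized near u = {u[i_max]:.1f} 1/h; "
      f"the steady-state gain changes sign there")
```

At low feedrates B is consumed by the consecutive reaction, while at high feedrates A is washed out before reacting, so an interior maximum in c_B (and hence a gain-sign change) is structural to this reaction scheme.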
Figure 8 is a plot of the λ_11 RGA element for the system, to be used as a comparison to the
μ_I plot in Figure 9. The μ_I results show mild interactions for the highest flowrates and strong
interactions near the steady-state gain change regions. The lowest flowrates show significant
interactions as well. A useful finding from the RGA results is the recommended pairings of
u_1-y_1 and u_2-y_2 for the design of two single-loop controllers across most of the operating
space. The opposite pairing is recommended at the lowest flowrates, with the switch occurring
near the apparent discontinuity in μ_I near u_1 = 18 h⁻¹.
Finally, the dynamic measure (6) was computed at the six operating points labelled in Figure
5. These results, along with the system poles and zeros, can be found in Table 1. Also included
in the table is the time constant (τ_fo) of the best first-order approximation obtained in computing
(6).
The results of the dynamic measure found in Table 1 show a trend of decreasing dynamic
severity as flowrate is increased up to 60 h⁻¹. By examination of the system zeros, it can be
seen that at low flowrates the linearization has two RHP-zeros. For moderate flowrates, the
Figure 8: Degree of interaction as a function of reactor feedrate as characterized using the RGA.
linearization has one RHP-zero, and for the highest flowrates no RHP-zeros appear. As it is
known that RHP-zeros lead to inverse response, a known difficult dynamic behavior, the μ_D
results correlate with the number of RHP-zeros and the magnitude of the system time constants.
The strong LHP-zeros at 60 and 80 h⁻¹ skew the μ_D results, as this is not behavior that can be
modeled by a first-order system. The results show that μ_D is most useful in the presence of
significant non-minimum phase elements.
To confirm the characterization analysis, the following is an assessment of several con-
trollers designed for the process at three of the operating points: 80 h" 1 , 45 h" 1 and 60 h"1.
The characterization shows that an operating point of 80 h" 1 is only mildly nonlinear, shows
slight interactions and is dynamically far from first-order but without any difficult dynamic ele-
ments (i.e., time delay or inverse response). Based on these results, the process at this operating
point can be considered a Category I process and should be effectively controlled using simple
techniques. To verify this conclusion, two single-loop PI controllers were designed using IMC
64
tuning rules based on first-order approximations of the diagonal transfer functions. The IMC tuning parameter was chosen as 0.2τ, where τ is the time constant of the first-order approximation. No detuning was performed to compensate for interactions. Figure 10 is a plot of the response of the two outputs to a set-point change of 0.01 mol/L in y₁. The results show a first-order response in y₁ and insignificant interaction effects in y₂. As expected, the system at this operating point is well-controlled by this strategy.
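The IMC-based PI tuning used here can be sketched as follows; the model gain and time constant below are illustrative values, not the reactor's actual first-order approximations:

```python
# Sketch of the IMC tuning rule applied above: for a first-order model
# G(s) = K/(tau*s + 1), the IMC rules give a PI controller with
# Kc = tau/(K*lambda) and tau_I = tau; the filter parameter lambda is
# taken as 0.2*tau, as in the text.  K and tau below are illustrative.

def imc_pi(K, tau, lam_frac=0.2):
    """PI gains (Kc, tau_I) from the IMC rules for G(s) = K/(tau*s + 1)."""
    lam = lam_frac * tau      # IMC filter time constant (closed-loop speed)
    Kc = tau / (K * lam)      # proportional gain
    tau_I = tau               # integral time cancels the open-loop pole
    return Kc, tau_I

Kc, tau_I = imc_pi(K=2.0, tau=5.0)   # illustrative first-order approximation
print(Kc, tau_I)                     # -> 2.5 5.0
```

With this choice the nominal closed loop is first-order with time constant λ, which is why a clean first-order set-point response is expected for this operating point.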
At an operating point of 45 h⁻¹, the characterization analysis indicates significantly difficult dynamics (due to the presence of inverse response), a low degree of interaction, and nonlinearity that is more significant than at 80 h⁻¹ but still mild. Thus, this point is characterized as a Category II process. First, two single-loop PI controllers are evaluated under the IMC tuning rules. In this case, the u₁-y₁ transfer function demonstrates inverse response; therefore, the IMC tuning rules for first-order plus time delay models were used, in which the time delay was taken to be the time over which the inverse response occurs. The controller performance for a -0.01 mol/L set-point change in y₁ is shown in the solid lines of Figure 11. The results indicate sluggish behavior and the presence of inverse response in y₁. Also note the undershoot of the set-point.
The results can be improved by replacing the u₁-y₁ controller with an IMC design that uses the complete (1,1) transfer function. The u₂-y₂ controller remains the same. The results for this configuration are shown in the dashed lines of Figure 11. As can be seen, it is possible to achieve responses in this configuration that possess less undershoot and, with increasingly aggressive tunings, are less sluggish.
The key finding that emerges from the analysis of the 45 h⁻¹ operating point is the effect that the dynamic difficulty has on the necessary controller design. As is implied by the IMC tunings, PI control is equivalent to IMC for a first-order process model. When full IMC is used with the complete u₁-y₁ transfer function, the order of the controller's process model increases to third-order with one RHP and one LHP-zero, as is necessitated by the more difficult process dynamics.
The final operating point to consider is 60 h⁻¹. Characterization has shown this point to be highly nonlinear and non-minimum phase, and it therefore belongs to Category VI. Figure 12 is a plot of the response to a -0.01 mol/L set-point change in y₁ given two single-loop PI controllers tuned using the IMC rules. The process is unable to attain this new steady-state value, leading to a condition of reactor washout (the initial concentration was 1.1 mol/L). While replacement of the u₁-y₁ controller with a full IMC allows the closed-loop system to handle the -0.01 set-point change, washout is seen again for a unit change in y₂, as shown in Figure 13.
In the case of the 60 h⁻¹ operating point, it is seen that even by using higher-order linear models in the controller to address the difficult dynamics, the compounding nonlinearity of the gain-change effect limits the attainable level of performance. Only nonlinear control could address this issue appropriately, thus demonstrating the importance of characterizing each of the process attributes.
Figure 10: Reactor response to a [0.01 0]^T set-point change under two single-loop PI controllers.
Figure 11: Reactor response to a [-0.01 0]^T set-point change under two single-loop PI controllers (solid) and one IMC and one PI (dashed) at u₁ = 45 h⁻¹.
Figure 12: Reactor response to a [-0.01 0]^T set-point change under two single-loop PI controllers at u₁ = 60 h⁻¹.
Figure 13: Reactor response to a [0 -1]^T set-point change under one IMC and one PI control at u₁ = 60 h⁻¹.
dm_stm/dt = q_stm − ω_stm q_vent − q_cond
dm_air/dt = q_air − ω_air q_vent        (16)

where q_i represents a mass flowrate, m_i is the mass of component i in the vapor phase and ω_i is the mass fraction of component i. Note that non-condensable gases are lumped into the air term in these equations; therefore ω_air + ω_stm = 1.
q_cond is the rate of condensation of vapors on the chips. It is assumed that air will not condense or otherwise become entrained on the chips and, therefore, is only able to leave through the vent. The rate of condensation on the chips is modeled as follows:
where K_cond is a constant, q_chips is the mass flow rate of chips, T is the system temperature, P_stm is the partial pressure of steam in the vapor phase and P_stm,chp is the partial pressure of steam at the chip temperature (assumed constant).
The above equations are used to compute the system temperature and pressure as follows: knowing the current mass of steam in the system and the volume of the vapor-phase section, the temperature of the system, T, can be obtained by correlation based on the resulting steam density and by assuming the steam to be saturated. By the phase rule, knowledge of steam density and temperature defines the partial pressure of steam in the system, P_stm. Because air is present in very low fractions in the system, the air partial pressure, P_air, is computed from the ideal gas law. The total system pressure is then P = P_stm + P_air.
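A minimal sketch of this computation follows. The saturated-steam correlation used here is a hypothetical stand-in (the chapter does not reproduce its correlation); only the ideal-gas step for the air partial pressure follows the description directly, and all numbers are illustrative:

```python
# Sketch of the pressure computation described above.  The saturated-steam
# correlation below is a hypothetical placeholder, not the chapter's model;
# the ideal-gas step for air is the part taken from the description.

R = 8.314        # J/(mol K)
M_AIR = 0.029    # kg/mol, molar mass of air

def sat_steam_T_P(rho_stm):
    """Placeholder correlation: saturated-steam density -> (T [K], P_stm [kPa])."""
    T = 373.0 + 40.0 * (rho_stm - 0.6)        # assumed linear fit
    P_stm = 101.325 + 150.0 * (rho_stm - 0.6)  # assumed linear fit
    return T, P_stm

def system_pressure(m_stm, m_air, V):
    """Total pressure P = P_stm + P_air for a vapor-phase volume V [m^3]."""
    T, P_stm = sat_steam_T_P(m_stm / V)              # steam assumed saturated
    P_air = (m_air / M_AIR) * R * T / V / 1000.0     # ideal gas, Pa -> kPa
    return T, P_stm + P_air

T, P = system_pressure(m_stm=6.0, m_air=0.01, V=10.0)
print(T, P)
```

Because the air fraction is tiny, its ideal-gas contribution is a small correction on top of the steam partial pressure, which matches the text's low-air-fraction assumption.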
The three entering flowrates in Eqs. (16) are related to valve opening percentages, the actual
manipulated variables, by equations of the following form:
where B_i and C_v,i are constants related to the valves, x_i is the valve opening and P_hdr,i is the appropriate header pressure (constant). For the vent flow, the term under the radical in Eq. (18) is reversed, with the digester pressure, P, appearing first and atmospheric pressure second, since the vent opens to atmospheric conditions.
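Since Eq. (18) itself is not reproduced here, the following sketch assumes a standard square-root valve relation consistent with the description; B and Cv are the valve constants, x the opening, and the numbers are illustrative:

```python
import math

# Assumed square-root valve relations consistent with the description of
# Eq. (18): flow is driven by the square root of the pressure drop across
# the valve.  The exact published form may differ.

def inlet_flow(x, P, B, Cv, P_hdr):
    """Entering flow (steam or air): header pressure upstream of digester pressure P."""
    return B + Cv * x * math.sqrt(max(P_hdr - P, 0.0))

def vent_flow(x, P, B, Cv, P_atm=101.325):
    """Vent flow: digester pressure appears first, since the vent opens to atmosphere."""
    return B + Cv * x * math.sqrt(max(P - P_atm, 0.0))

print(inlet_flow(x=0.5, P=200.0, B=0.0, Cv=1.0, P_hdr=300.0))  # -> 5.0
```

The square root is one of the nonlinearities the text lists for this model, and it is also why flow sensitivity drops as the digester pressure approaches the header pressure.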
In total, the above model represents a 3 x 2 system where the inputs are the three valve
positions and the outputs are temperature and pressure. Nonlinearities appear in several places:
square roots in the valve flowrate expressions, the nominally bilinear expressions in Eqs. (16),
and the nonlinear correlations for steam temperature and the partial pressures. Model parame-
ters were determined from process data obtained from a digester running at normal operating
conditions.
The process model will be characterized given conditions for three different types of wood:
hardwood, purchased softwood and woodroom softwood. Woodroom softwood refers to lower
quality softwood composed mainly of scrap pieces remaining from cutting and other processes.
Softwood and hardwood require different operating temperatures in the digester. All three wood types were found to give off varying amounts of non-condensable gases in process identification studies. Therefore, not only are the process operating points different, but the model parameters also differ for each of the three species. Control during operating transitions from one species to the next is also considered.
To begin, the degree of process nonlinearity will be evaluated. The lower bound (5) is
again used with inputs spanning ±5% in valve position. The nonlinear effect of each input is
considered separately on each output. For example, the air valve is set to provide input sinusoids
while the vent and steam valves are held constant to analyze nonlinear effects of air flowrate on
pressure and temperature. The steady-state sinusoids in pressure and temperature are collected
and analyzed independently using (5).
The results of the nonlinearity characterization are found in Table 2. The first item that should be noted is the significantly higher nonlinearity associated with the vent valve compared to the other possible inputs. Figure 15 is a plot of step responses in temperature given vent as an input to demonstrate the nonlinearity. As can be seen, the primary nonlinear effect is an asymmetric steady-state response. Given this simplified model, it appears that, in terms of avoiding unnecessary process nonlinearity, one may want to avoid using the vent valve for control. Therefore, for a 2 x 2 control structure, air and steam flows should be the preferential choices based on the nonlinearity analysis. It should be noted that, given the selected operating range, the nonlinearity is low across all of the wood types.

Table 2
Characterization of digester nonlinearity as a function of wood species (inputs listed in bold, outputs in italics).
To analyze the process dynamics, consider the following transfer function model for the
linearized vapor phase model given the woodroom softwood operating conditions and model
parameters:
where the first output is pressure, the second output is temperature and the inputs are steam,
air, and vent, respectively. Note that the transfer function matrices for the two other wood types are similar in structure but with different parameter values and no sign changes. As learned in the previous case study, μ_D is only meaningful for systems with difficult dynamic elements. In this case, only the (2,3) transfer function shows inverse response behavior, and it returns a μ_D value of 4.14 × 10⁻⁴. This value is quite low, as should be expected from comparison of the magnitude of the RHP-zero to the system time constant.
Three of the transfer functions in Eq. (19) have zeros that need to be considered, primarily the vent-temperature (2,3) transfer function. This transfer function has a RHP-zero, indicating the presence of inverse response in that channel. This feature further discourages use of the vent as a manipulated variable, as the RHP-zero will place performance limitations on that control loop. The steam transfer functions (column 1) both contain strong LHP-zeros, giving the corresponding responses strong lead behavior, i.e., quicker responses. This is expected, as the steam line entering a typical digester is quite large and small changes in valve opening will result in large, fast changes in steam flow.
The final characterization step is degree of interaction. As the system is non-square and the
results so far suggest that simple control (i.e., PI) may be quite effective, the RGA will be used
on the three possible subsystems to determine the degree of interaction and possible pairings
for decoupled PI control. The RGA results are as follows (S = steam, A = air, V = vent) for
woodroom softwood:
The RGA results show that significantly decoupled control can be achieved with the steam-air or steam-vent pairings, with steam controlling temperature and either air or vent controlling pressure. As expected from the system physics, using air and vent as the manipulated variables results in a highly coupled system. Similar RGA results are obtained for the other wood types. The corresponding μ_I values are: μ_I,SA = μ_I,SV = 0.40 and μ_I,AV = 0.99.

Figure 15: Step responses in temperature given vent opening as an input variable for the digester.
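The RGA used for this pairing analysis can be computed from a steady-state gain matrix as the elementwise product of G and the transpose of its inverse; the 2 x 2 gain matrix below is illustrative, not the digester model of Eq. (19):

```python
import numpy as np

# Minimal sketch of the RGA computation: Lambda = G .* inv(G).T
# (elementwise product).  The gain matrix below is an illustrative,
# nearly decoupled example, not the digester model.

def rga(G):
    """Relative gain array of a square, nonsingular gain matrix."""
    G = np.asarray(G, dtype=float)
    return G * np.linalg.inv(G).T

G = np.array([[2.0, 0.1],
              [0.2, 1.0]])
print(rga(G))        # rows and columns each sum to one
```

Diagonal RGA entries near one (as printed here) indicate that diagonal pairing gives little steady-state interaction, which is the property exploited in choosing the steam-air and steam-vent pairings.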
To summarize the characterization results, the nonlinearity assessment indicates a mildly nonlinear system with the largest degree of nonlinearity associated with the vent. The dynamic character analysis showed strong lead behavior when steam is used as an input and inverse response in temperature given vent as an input. Steady-state interaction analysis indicates the possibility of a decoupled system when steam and air or steam and vent are used as manipulated variables. Based on the characterization results, the process can be considered a Category I process. In general, the results suggest that steam and air as manipulated variables in a 2 x 2 arrangement may be the best control option.
To assess the results of the characterization, two decoupled PI designs using the steam and air flows as the manipulated variables are considered first. Figure 16 shows the results for the two competing controller pairings, steam controlling pressure and steam controlling temperature with air controlling the other output, given a −1°C set-point change in temperature. The controllers were tuned with the IMC rules based on the transfer functions in Eq. (19), with the filter parameter chosen as 10% of the open-loop time constant.
Recall first that the RGA results suggest pairing steam with temperature to minimize steady-state interactions. As the results show, the opposite pairing (steam-pressure, air-temperature) is actually preferred in terms of minimizing dynamic pressure deviations.
In order to more precisely ascertain the degree of difficulty of the multivariable interactions, a 2 x 2 IMC controller [17] is considered next. The controller in this case is the inverse of the full 2 x 2 model matrix augmented with a diagonal filter block with first-order elements. The filter time constants are chosen to be the same as those used in the equivalent PI designs. The responses for this closed-loop system given the −1°C temperature set-point change are shown as the dotted lines in Figure 16. As can be seen, the pressure regulation of this controller is much tighter, but with more sluggish temperature control. Given that the simpler decoupled PI design with steam controlling pressure provides tight control with only an acceptable performance loss, there appears to be no need for full multivariable control, as was predicted by the multivariable interaction analysis.

Figure 16: Digester pressure and temperature responses to a [0 -1]^T set-point change under two decoupled PI controllers with steam controlling pressure (solid) or steam controlling temperature (dashed), with air controlling the remaining output, compared to a full-block IMC design (dotted).
To further verify the applicability of the decentralized controller design, the steam-
pressure/air-temperature decentralized PI design is assessed during a transition between wood
species. In this case, the transition from woodroom softwood to hardwood and vice versa is
investigated. During the transition from softwood to hardwood, temperature is lowered by 4°C
and the chip feedrate is increased by roughly 10%. Within the model, the bias on the vent valve
is increased to simulate an overall decrease in the level of non-condensable gases in going from
softwood to hardwood.
Figure 17 shows the controller performance during the transitions. The results show that pressure remains within the acceptable range of ±5 kPa of the target value. Temperature reacts smoothly but with evidence of hysteresis when comparing the downward to the upward changes. The reason for the hysteresis is evident if the manipulated variable trends in Figure 18 are considered. The air valve saturates for both transitions, but for a much longer period of time during the +4°C temperature set-point change, leading to a slower dynamic response. To correct for this, a control scheme with explicit constraint-handling capabilities should be investigated.
Figure 17: Controller performance for two independent PI loops paired as steam-pressure, air-temperature during wood grade changes. At t = 1.4 h, the change from woodroom softwood to hardwood occurs, and at t = 14 h the reverse change occurs.

Figure 18: Digester input trends during the grade change in Figure 17.

To summarize, the digester case study demonstrates how the concepts of process characterization can be used to simplify a control problem. In this case, a potentially troublesome
manipulated variable (vent opening) was eliminated from consideration due to its associated difficult dynamics and relatively higher level of nonlinearity. As mentioned in section 2.3, the grade change example demonstrated the role of additional considerations in the design procedure. In this case, while the model was not significantly nonlinear, the presence of constraints created another source of nonlinearity not explicitly characterized in the analysis of the process model. This particular source of nonlinearity is one that could be eliminated if these types of control-relevant process characterization schemes were employed during the process design procedure.
6. CONCLUSIONS
The objective of this chapter was to lay the groundwork for a systematic procedure for characterizing chemical processes, with the goal of determining appropriate controller design strategies. In the chemical reactor case study, the techniques were applied to determine operating conditions that reduce the complexity of the resulting control problem. In the digester case study, the techniques were used to simplify the design of control structures. In the ideal situation, these types of characterization procedures would be carried out during the process design in order to see what effect design decisions have on the necessary controller designs.
In terms of the design of the controllers themselves, the information presented on controller design gives guidelines regarding which general control algorithms may be best suited to the various categories. For model-based control, it was shown how model development and reduction strategies should encompass many of the same ideas that emerge from characterization of the process itself.
The metrics proposed in this chapter should be considered merely as starting points for characterizing the three process attributes of extent of interaction, dynamic character and nonlinearity, and not as the final solution. Further theoretical development of a joint metric of the three quantities should be pursued, keeping in mind the need for the metric to be control-relevant, as described in the discussion of the nonlinearity measure. Only once a clear definition of the mappings between the process characterization and controller design cube exists will these techniques be usable to their fullest potential. Work on clarifying these mappings is on-going (e.g., nonlinearity: Ref. [25]).
REFERENCES
[1] B. A. Ogunnaike and W. H. Ray, Process Dynamics, Modeling and Control, Oxford Uni-
versity Press, New York, 1994.
[2] B. A. Ogunnaike, R. K. Pearson, and F. J. Doyle III, Proc. European Control Conf.,
Groningen, The Netherlands, 1993, pp. 1067-1071.
[3] F. Allgöwer, 3rd IFAC Nonlinear Control Systems Design Symposium, Lake Tahoe, CA, 1995, pp. 279-284.
Chapter A3
For the last 50 years, automatic control has been a field of intense study. Driven by the need for automation of advanced systems on the one hand and the possibilities of information technology and electronics on the other, great progress has been made in the theoretical understanding of the fundamental properties of dynamic processes and especially of feedback control loop operation. Important concepts that allow the analysis of dynamic processes include, for instance, state space controllability and observability analysis as well as zero dynamics analysis, which investigate how a system's dynamical behaviour is connected to its environment and reveal operational limitations. Techniques to analyze the effect of feedback with respect to stability have been developed (e.g. the Nyquist criterion, the root locus method, the small gain theorem), and even robustness with respect to model uncertainties can now be addressed. Moreover, a vast variety of powerful controller design methods has been established, such as PID control, pole assignment and optimal regulation, internal model control, model predictive control, and H2- and H∞-optimal synthesis, to mention only some of the most important techniques.
Control engineers are thus in the situation of having mature tools at hand for a wide variety of engineering problems - with one restriction. This restriction is connected to the fact that nearly all of the methods for system analysis, identification and control apply only to linear systems. For some approaches, this is because no similar techniques exist for nonlinear systems. For other problems, the methods that can be derived are simply not practicable for nonlinear systems. In both cases the deeper reason is the broad diversity in behaviour that nonlinear systems can exhibit as opposed to linear systems. General statements can hardly be
transfer operator

N : U → Y,    u ↦ y = N[u]

is compared to a linear model G described by the linear transfer operator

that approximates the dynamic behaviour of N. The signals u, y and ŷ represent input and output trajectories of the systems N and G, respectively. Without loss of generality it is assumed that

N[0] = 0.
The error signal e is the difference between the output y of the nonlinear system and the output ŷ of the linear system. This signal contains the information on how well the nonlinear system N is approximated by the linear model G. In order to quantify this error, a norm on the signal spaces has to be defined, describing the "absolute value" of a signal. Throughout this presentation the L2-norm is used. δ gives the norm ("absolute value") of the error signal when the worst-case input signal u ∈ U is considered. The best linear approximation G is chosen among the set of all causal stable linear systems such that the resulting worst-case error is minimized; 𝒢 denotes the set of all linear transfer operators.
As can be seen from its definition, the nonlinearity measure δ depends on the system N and on the set of considered inputs U. The set U usually describes the region of operation in which the nonlinearity of the system N is to be assessed. In this case U contains, e.g., only signals not exceeding a certain maximal amplitude. If only sinusoidal inputs are included in U, the describing function is the best linear approximation in Eq. (2) [2]. There are recent approaches that build on definitions similar to Eq. (2) [3]. There is also an alternative definition of the nonlinearity measure that has a strong relation to the nonlinear gain of a nonlinear transfer operator [2,4-9].
In the formulation of the discussed nonlinearity measures, scaling plays an important role. If the output of the nonlinear system is multiplied by a constant factor, then the nonlinearity measure is magnified by the same factor. The degree of nonlinearity thus seems to get worse although there is no qualitative change in the behaviour of the nonlinear system. This scaling dependence clearly represents an undesirable property for a nonlinearity measure. In Ref. 2, Desoer and Wang address the scaling problem by choosing the range of considered inputs such that the magnitudes of the output signals are equal for the different systems under consideration. Thus they are able to compare the nonlinearity of different systems that exhibit different gains. The drawback of this method is that one has to be very careful when applying the nonlinearity measure.
An approach that overcomes the scaling problem without this drawback is given in Refs. 4, 10, 11. The error term in the definition of the nonlinearity measure δ of Eq. (2) is normalized by the output of the nonlinear system N, yielding the new nonlinearity measure

φ_N = inf_{G ∈ 𝒢} sup_{u ∈ U} ‖G[u] − N[u]‖ / ‖N[u]‖        (3)
Another very important property of the measure φ_N is that its value is bounded by one [4]. The linear system that gives a zero output for any input is always a possible linear approximation. The deviation of the output of this zero operator from the output of the nonlinear system N equals 100%, independently of the input. As the best linear approximation cannot be worse than the zero operator, the value of φ_N cannot surpass one. A value of φ_N close to one thus corresponds to a highly nonlinear system. The boundedness of φ_N and the fact that the error expression is normalized are the reasons that scalar input and output scaling does not affect the degree of nonlinearity and that this measure can be used to compare the severity of nonlinearity of systems of different types and gains.
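As a toy illustration of the normalized measure, the following sketch uses an assumed static nonlinearity and input set, and restricts the best linear approximation to a static gain rather than the full set of causal stable linear systems:

```python
import numpy as np

# Toy illustration of the normalized nonlinearity measure: the (static)
# nonlinear system N[u] = u + 0.3*u**3 is approximated by a static gain g,
# and the worst-case relative L2 error over a small assumed input set is
# minimized by grid search.  The chapter's measure optimizes over all
# causal stable linear systems; this is only a simplified sketch.

def N(u):
    return u + 0.3 * u**3

t = np.linspace(0.0, 10.0, 500)
inputs = [a * np.sin(t) for a in (0.2, 0.5, 1.0)]   # assumed test signals

def worst_case_error(g):
    return max(np.linalg.norm(g * u - N(u)) / np.linalg.norm(N(u))
               for u in inputs)

gains = np.linspace(0.5, 2.0, 301)
phi_approx = min(worst_case_error(g) for g in gains)
print(phi_approx)    # strictly between 0 and 1
```

The result lies strictly below one, consistent with the boundedness argument: the zero gain g = 0 already achieves exactly one, and any better approximation improves on it.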
It has already been said that U can characterize the region of operation. Note that the nonlinearity measure cannot decrease when additional inputs are considered (when U is made bigger). Because φ_N corresponds to the worst case with respect to the considered inputs, "harmless" additional inputs will have no impact on the measure, whereas "severe" additional inputs can. This fact is expressed mathematically by

U₁ ⊆ U₂  ⇒  φ_N^{U₁} ≤ φ_N^{U₂}.

The practical meaning is intuitively clear: if a larger operating regime is considered, the nonlinearity measure will increase or stay constant, but will not decrease. The set U can have other significances as well. In Sec. 3 it is discussed how U can reflect the effect of feedback for control-relevant nonlinearity characterization.
This section gave an overview of definitions of nonlinearity measures based on signal norms. In the literature, nonlinearity measures using approaches other than the definition via signal norms can also be found. In Ref. 12, the curvature of the steady-state map is introduced as a measure of nonlinearity. In Ref. 13, an extension of this approach to dynamic systems is discussed. A different approach is presented in Ref. 14, where controllability and observability Gramians are used to quantify the degrees of input-to-state and state-to-output nonlinearity. With this method, systems can be classified into Hammerstein-like and Wiener-like models. In order to draw a coherent picture, we concentrate on norm-based definitions in this chapter.
The nonlinearity measure φ_N has been identified as being of special interest because of its advantageous properties. These properties are, firstly, the intuitive interpretation of the value of φ_N as a kind of relative error when the nonlinear system N is approximated by a linear model G. Secondly, the independence from the gain of the nonlinear system N allows the values of φ_N to be compared across different systems. Moreover, the following sections will show that the value of the nonlinearity measure φ_N can be computed efficiently. The remainder of this exposition will therefore focus on the nonlinearity measure φ_N and methods derived therefrom.
Exact evaluation of these definitions leads to very difficult problems. It can hardly be expected that an analytical solution is possible, especially for practical problems. However, in this section we want to demonstrate a method to approximately compute the nonlinearity measure φ_N defined by Eq. (3). This approach is based on solving an appropriate nonlinear programming problem numerically. In order to apply numerical methods we will make two approximations:
1. We restrict the set of inputs considered to a finite set, called Uc in the sequel. We make no further restriction on Uc, neither with respect to the type of inputs considered (sine functions, steps, random inputs, etc.) nor with respect to the number of input signals. The approximated problem can be rewritten as the following minimization problem: find G* ∈ 𝒢 such that
2. We predefine a fixed structure for the unknown best linear approximation G* in Eq. (5) that contains only a finite number of variable real parameters and that retains convexity. The convex infinite-dimensional optimization problem of Eq. (5) is thus approximated by a convex finite-dimensional problem through a suitable convex parametrization of the space of linear operators. A possible parametrization for discrete-time SISO systems is, for example, given by assuming a moving average structure (step response model) for the linear system G:

where the scalars d_i are the step response coefficients to be determined. This choice preserves the convexity between the variable vector of parameters d = [d₁, ..., d_n]^T and the optimizing function. For continuous-time SISO systems, for instance, a convex parametrization

with G_i[u] fixed stable linear systems is possible. The extension to multivariable systems is straightforward. It is clear that the quality of the approximate solution depends on the
choice of approximation functions. It can be shown that any linear transfer operator can be approximated up to an arbitrarily small error by a series of transfer functions

G_i(s) = 1 / (Ts + 1)^i        (8)

for any fixed T ∈ ℝ₊ [15]. Our own experience shows, however, that the approximating terms

G_i(s) = 1 / (T_i s + 1)        (9)

lead to better results and much faster convergence. The time constants T_i ∈ ℝ are fixed a priori. Knowledge about the dominating time constants of the nonlinear system N should be considered when choosing the constants T_i.
Taking both approximations into account, the definition of the approximated nonlinearity measure becomes

φ_N^{Uc} = min_{d ∈ ℝⁿ} max_{u ∈ Uc} ‖ Σ_i d_i G_i[u] − N[u] ‖ / ‖N[u]‖        (10)
Note that only a finite number of inputs u is considered, and the min-max problem of Eq. (10) can be rewritten as a minimization problem with inequality constraints. The resulting formulation

can be used directly for computation with common optimization software by replacing the constraint "for all u" with separate constraints, one for each element u_j ∈ Uc.
Calculation of the approximation φ_N^{Uc} of the nonlinearity measure φ_N via numerical optimization as described has some advantageous properties due to the convexity of the problem [16]. Firstly, the cost of determining the global optimum increases at most polynomially with the number of variable parameters. Secondly, the cost grows at most linearly with the number of input signals u contained in Uc. Last, very powerful algorithms and tools exist for solving convex optimization problems [17,18]. Therefore φ_N^{Uc} can be calculated numerically in an effective and reliable way.
An algorithm to compute φ_N^{Uc} then consists of the following three steps:
1. Simulate and store the responses of the nonlinear system N to the inputs u_j ∈ Uc.
2. Simulate and store the responses of the linear systems G_i to the inputs u_j ∈ Uc.
3. Solve the resulting convex optimization problem using the stored responses.
The numerical computation is based only on the simulated output signals. Thus no restrictions have to be imposed on the type of nonlinear systems that can be considered. Therefore, for any system that can be simulated, including systems described by differential-algebraic equations, systems with non-smooth nonlinearities, etc., the nonlinearity measure φ_N^{Uc} can be calculated. As the set of inputs Uc contains only a finite number of elements, φ_N^{Uc} will only be a lower bound on φ_N (provided that sufficiently many linear terms for G are considered). Note that by including more and more input functions, φ_N^{Uc} will get arbitrarily close to φ_N. Experience has shown that consideration of step and sine functions of different frequencies and amplitudes suffices to obtain satisfying results.
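The three steps can be sketched as follows, with a toy nonlinear system, assumed test signals, and first-order-lag basis systems with a-priori time constants; the min-max problem is solved in epigraph form with a general-purpose solver:

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the three-step computation with a toy nonlinear system:
# (1) simulate N, (2) simulate fixed first-order-lag basis systems with
# a-priori time constants T_i, (3) solve the convex min-max problem in
# epigraph form:
#   min t  s.t.  ||sum_i d_i G_i[u_j] - N[u_j]|| <= t ||N[u_j]||  for all j.

dt, n = 0.05, 400
t_grid = np.arange(n) * dt

def simulate_lag(u, T):
    """Euler discretization of the first-order lag y' = (u - y)/T."""
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = y[k - 1] + dt * (u[k - 1] - y[k - 1]) / T
    return y

def simulate_N(u):
    """Toy nonlinear system: input saturation followed by a first-order lag."""
    return simulate_lag(np.tanh(u), T=1.0)

# Step (1): responses of the nonlinear system to the chosen input set Uc.
inputs = [np.sin(w * t_grid) for w in (0.5, 1.0, 2.0)] + [np.ones(n)]
N_out = [simulate_N(u) for u in inputs]

# Step (2): responses of the fixed linear basis systems.
T_list = [0.5, 1.0, 2.0]
basis = [[simulate_lag(u, T) for T in T_list] for u in inputs]

# Step (3): convex min-max problem, decision vector x = [d_1..d_m, t].
def residual(d, j):
    return sum(di * g for di, g in zip(d, basis[j])) - N_out[j]

cons = [{"type": "ineq",
         "fun": (lambda x, j=j: x[-1] * np.linalg.norm(N_out[j])
                 - np.linalg.norm(residual(x[:-1], j)))}
        for j in range(len(inputs))]

x0 = np.zeros(len(T_list) + 1)
x0[-1] = 1.0                      # the zero model achieves exactly t = 1
res = minimize(lambda x: x[-1], x0, constraints=cons, method="SLSQP")
print(res.x[-1])                  # approximate measure, below one
```

The step and sine inputs used here mirror the text's recommendation, and the bound of one is visible in the initialization: the zero model is always feasible at t = 1, so the optimizer can only improve on it.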
In this section, a scheme to derive the numerical value of the nonlinearity measure by convex optimization has been presented. The method is computationally efficient, and standard tools for technical computing can be used. By an appropriate choice of the parameters and of the considered input signals, the error of the numerical approximation can be made arbitrarily small.
A →(k₁) B →(k₂) C,    2A →(k₃) D

produce the unwanted byproducts dicyclopentadiene (D) and cyclopentanediol (C). The reactions are governed by the nonlinear van der Vusse kinetics

dc_A/dt = (q/V_R)(c_A0 − c_A) − k₁c_A − k₃c_A²        (12)

dc_B/dt = −(q/V_R)c_B + k₁c_A − k₂c_B        (13)

dT/dt = (q/V_R)(T₀ − T) + (k_w A_R/(ρ C_p V_R))(T_c − T)
        − (1/(ρ C_p))(k₁c_A ΔH_R,AB + k₂c_B ΔH_R,BC + k₃c_A² ΔH_R,AD)        (14)

dT_c/dt = (1/(m_c C_p,c))(Q̇ + k_w A_R(T − T_c))        (15)
The states c_A and c_B are the concentrations of A and B, respectively, T is the reactor temperature and T_c is the coolant temperature. The heat flow Q̇ that influences the temperature of the coolant is assumed to be constant. The variable to be regulated is the product concentration c_B in the outflow, and the manipulated variable is the inlet flow rate q. A schematic of the process is depicted in Fig. 2. For further details on the process and parameter values we refer to Ref. 19.
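An isothermal sketch of the concentration dynamics (12)-(13) can be simulated as follows; the rate coefficients and operating data are illustrative assumptions, not the values of Ref. 19 (which also include the Arrhenius temperature dependence and the two energy balances):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Isothermal sketch of the concentration dynamics, Eqs. (12)-(13).
# All rate coefficients and operating data below are illustrative
# assumptions, not the parameter values of Ref. 19.

k1, k2, k3 = 1.0, 0.5, 0.2   # assumed rate coefficients
VR, cA0 = 10.0, 5.0          # reactor volume [l], feed concentration [mol/l]

def rhs(t, c, q):
    cA, cB = c
    dcA = (q / VR) * (cA0 - cA) - k1 * cA - k3 * cA**2
    dcB = -(q / VR) * cB + k1 * cA - k2 * cB
    return [dcA, dcB]

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.5], args=(5.0,), rtol=1e-8)
cA_ss, cB_ss = sol.y[:, -1]
print(cA_ss, cB_ss)          # steady state approached for q = 5 l/h
```

Varying q in such a simulation traces out the steady-state yield curve that underlies the choice of the optimal operating point OP discussed next.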
Usually, the design goal of such a chemical reactor is to maximize the yield, i.e. to obtain the most product B possible for the amount of reactant A fed into the CSTR. In the case of only one manipulated variable q, the operating point of the input variable can be chosen such that this yield is maximized; this is denoted the optimal operating point (OP).
Now we want to find out whether the process shows highly nonlinear behaviour or whether
the process is only mildly nonlinear, indicating that no severe problems are expected to occur
during process operation using linear control techniques. The value of the nonlinearity measure
of the plant without controller even for a very small operating regime around the operating point
OP is computed to be
nUc -in
U l u
CSTR,OP ~ --
For this situation, it can be shown that this is the exact value even for an arbitrarily small
operating regime [20]. This highest possible value for the nonlinearity measure indicates that
problems are likely to occur in the case that this process is operated by a linear controller. And
Fig. 3. Values of the nonlinearity measure for the example CSTR at different operating points.
The operating points are identified by the achieved yield, given as percentage of the optimal
yield.
indeed, it can be shown that the process can not be robustly stabilized by a PI controller when
operated at the point of optimal yield (operating point OP). This is due to the fact that the
stationary gain changes sign as a function of the control input.
However note that, even if the governing differential equations stay the same, a different op-
erating point may lead to a different, maybe more linear behaviour of the process. Assessing
the nonlinearity for different operating points, specified by the corresponding steady state input,
the nonlinearity measure is not constant but decreases. For example, the process given above is typically driven at an operating point that achieves about 70% of the maximally possible yield, denoted the suboptimal operating point (SP). The method of convex optimization returns a value of \phi_{CSTR,SP} = 0.37 for the approximated nonlinearity measure of the CSTR at the suboptimal operating point SP and an operating regime of q = q_s ± 1 l/h. Compared to the previous result, this demonstrates that the degree of nonlinearity of the process behaviour can be reduced considerably by the choice of the operating point. In this way, preferable process behaviour is gained in exchange for 30% of the yield, see Fig. 3.
Similar analysis can be performed for other operation parameters like the heat exchange flow
Q or geometric parameters of the reactor. In other contexts, the placement of actuators and
sensors may be investigated or completely different process schemes may be assessed with
respect to the operability. Thus, already at the design stage of the process, the nonlinearity measure can give hints on whether the actual design is favorable in view of the future process control.
1. plant dynamics,
2. region of operation,
3. performance criterion.
The first two points are valid for open-loop process nonlinearity measures as well. The third
point is new in control-relevant nonlinearity quantification. In a more general context, one has not only to consider the performance criterion but additionally the controller design method. Following the idea of Ref. 24, optimal control theory with an integral performance
criterion will be used here as it represents a benchmark for any achievable performance. Con-
sidering nonlinear internal model control with different filter time constants is also possible, see
for example Ref. 23.
For better readability, the main aspects of optimal control theory will be repeated here. A
model of the plant is assumed to be given in state-space form
\dot{x} = f(x, u) \qquad (18)

where x(t) \in \mathbb{R}^n is the state vector and u(t) \in \mathbb{R}^p is the control input. We seek the controller that is optimal with respect to the integral cost criterion
J(x_0)[u] = \int_{t_0}^{T} F(x, u)\, dt \qquad (19)

where the trajectory of x(t) has to satisfy the plant dynamics Eq. (18) and the initial condition x(t_0) = x_0 is restricted to lie in some region x_0 \in B.
It is well known that for time-invariant systems and cost criteria and an infinite horizon T \to \infty, the resulting optimal control can be formulated as a static state feedback control law k(x) [26], i.e. the optimal control depends only on the current state vector of the plant. In
accordance with what has been said above, the controller nonlinearity is influenced by (1) the
plant dynamics Eq. (18), (2) the region of operation characterized by the set of initial conditions
B and (3) the performance criterion Eq. (19).
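For a linear plant, this benchmark controller is available in closed form: the infinite-horizon optimal feedback is the LQR gain obtained from the algebraic Riccati equation. A minimal scalar sketch, with plant and weights chosen arbitrarily for illustration:

```python
import math

def lqr_gain_1d(a, b, alpha):
    """Optimal static feedback u = K*x for xdot = a*x + b*u with cost
    J = integral of (x^2 + alpha*u^2) dt, from the scalar Riccati equation
    2*a*P - (b**2/alpha)*P**2 + 1 = 0 (positive root)."""
    p = alpha * (a + math.sqrt(a * a + b * b / alpha)) / (b * b)
    return -b * p / alpha

# Closed loop xdot = (a + b*K)*x is stable: a + b*K = -sqrt(a^2 + b^2/alpha).
K = lqr_gain_1d(a=-1.0, b=1.0, alpha=1.0)
```

This closed-form case is what the optimal-control benchmark reduces to when the plant happens to be linear; for nonlinear plants the optimal law k(x) is in general nonlinear.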
In Ref. 24, an approximate approach to quantifying the nonlinearity of the controller u = k(x)
is used, the so-called Optimal Control Structure (OCS). In this presentation, we are going to use
the more rigorous approach introduced in Ref. 25, that is based on the following definition: The
optimal control law (OCL) nonlinearity measure for a certain control problem is defined as the
quantity
\phi_{OCL}^{B} := \inf_{K \in \mathbb{R}^{p \times n}} \sup_{x_0 \in B} \frac{\| N_{OCL}[x_0] - K x_{x_0}^{*} \|}{\| N_{OCL}[x_0] \|} \qquad (20)

with N_{OCL}[x_0] := u_{x_0} = k(x_{x_0}^{*}) and x_{x_0}^{*} defined as the solution to the infinite-horizon optimal control problem given by Eqns. (18, 19) and the initial condition x(0) = x_0.
The nonlinearity measure \phi_{OCL}^{B} evaluates the nonlinearity of the optimal static state feedback
control law k(x) in closed-loop operation as depicted in Fig. 4. The setup and the definition of
Fig. 4. Definition of the operator NOCL and setup for the optimal control law (OCL) nonlinear-
ity measure.
the nonlinearity measure \phi_{OCL}^{B} represents the application of the general nonlinearity measure from Sec. 2 to the static state feedback law k(x) with some modifications. The nonlinear operator N_{OCL}[x_0], representing the optimal controller k(x), is compared to the static linear
system u = Kx. In contrast to the general nonlinearity measure defined in Sec. 2, the set of
considered linear approximations is restricted to linear static relations. This is adequate, as it is
known that the optimal control law is a static state feedback. The measure gives the normalized prediction error of the linear static state feedback that best approximates the optimal (nonlinear)
static state feedback for the worst case trajectory.
From Fig. 4 it can be seen that the set of considered input signals only consists of optimal
trajectories of the closed loop, which is equivalent to considering initial conditions of the optimally
controlled closed loop. The required control task is to optimally regulate the system for a
given initial condition. In the case of other disturbances or tracking problems, the described
controller loses its optimality property. Thus, considering optimal trajectories amounts to having
a closed-loop measure that respects the conditions, under which the optimal control is derived.
The region B C Rn of initial conditions replaces the set of considered input signals. To be
consistent, the set B C Rn must be positive invariant for the closed-loop system, i.e. any
trajectory that starts from a point in B must remain in B for all times.
The computational scheme to determine \phi_{OCL}^{B} is again based on convex optimization. The
optimal control problem given by Eqns. (18, 19) is solved for a finite number of points in the
set B and for a finite but large horizon T. This can be done in a numerically efficient way using
results from optimal open-loop control theory. Given the optimal trajectories, the problem is
then reformulated as a constrained minimum search over the coefficients of the static controller
gain matrix K. A sufficiently large horizon can be found in a heuristic way by increasing T
until the nonlinearity value does not change any more.
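The scheme can be sanity-checked on a case where the optimal law is known in closed form. For a scalar linear plant the optimal controller is exactly a static gain, so fitting the best static gain to a grid of sampled optimal trajectories should return a measure of (numerically) zero. The plant, set of initial conditions, and candidate-gain grid below are illustrative assumptions, not the nonlinear computation described in the text:

```python
import math

# Known optimal law for the scalar linear test plant xdot = -x + u with
# cost integral of (x^2 + u^2) dt: u = KSTAR*x (scalar LQR).
KSTAR = 1.0 - math.sqrt(2.0)
LAM = -math.sqrt(2.0)             # closed-loop pole a + b*KSTAR

def trajectory(x0, n=200, dt=0.05):
    """Sampled optimal state/input trajectory from initial condition x0."""
    xs = [x0 * math.exp(LAM * k * dt) for k in range(n)]
    us = [KSTAR * x for x in xs]
    return xs, us

def ocl_measure(region, gains):
    """Grid approximation of inf_K sup_{x0} ||u* - K x*|| / ||u*||."""
    best = float("inf")
    for K in gains:
        worst = 0.0
        for x0 in region:
            xs, us = trajectory(x0)
            err = math.sqrt(sum((u - K * x) ** 2 for x, u in zip(xs, us)))
            nrm = math.sqrt(sum(u * u for u in us))
            worst = max(worst, err / nrm)
        best = min(best, worst)
    return best

gains = [KSTAR + 0.02 * i for i in range(-50, 51)]
phi = ocl_measure(region=[-2.0, -1.0, 1.0, 2.0], gains=gains)
# For a linear plant the optimal law is exactly linear, so phi is ~0.
```

For a genuinely nonlinear plant, the trajectories would instead come from a numerical open-loop optimal control solver, as described above.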
Even though the computations are not trivial, the problem considered is still much simpler
than the problem to explicitly compute the nonlinear optimal feedback law, as only optimal
open loop trajectories for individual initial conditions need to be computed here.
An important point that must be mentioned here is that the nonlinearity that is captured by the
described method concerns the input-to-state nonlinearity, as the input to the optimal controller
is the full process state x and its output the process input u. This means that output nonlineari-
ties of the plant can not be captured except if the number of states equals the number of outputs
and the output nonlinearity y = h(x) is invertible. In that case, the plant dynamics Eq. (18) can
be reformulated into a differential equation for the output y instead of the state x.
In this section, the optimal control law (OCL) nonlinearity measure has been introduced.
The two important ideas of control-relevant nonlinearity (or control problem nonlinearity or
controller nonlinearity) that have been adapted from Ref. 24 and further expanded are (1) the
three-fold problem structure considering plant dynamics, region of operation and cost criterion
and (2) the concept of using optimal control theory to obtain a benchmark controller for a wide class
of nonlinear systems.
Important advantages of the presented approach are, firstly, that exact solutions for the opti-
mal control problem are considered. Secondly, the optimal static state feedback control law is
compared to linear static state feedback, respecting the nature of the optimal control law. By
this means, an evaluation of the nonlinearity of the optimal control law in closed-loop operation
is possible without the necessity to compute the feedback law. A big advantage of the pre-
sented approach is that the OCL nonlinearity can be computed for stable as well as for unstable
systems, due to the fact that the optimal control law is a static relation in any case.
3.3. Example
In this section, the application of the OCL nonlinearity measures to a simple scalar example
system with saturating input nonlinearity will be illustrated. This system belongs to the class
of Hammerstein systems that are commonly met in the modeling of process systems. The state
equation is given by
\dot{x} = -x + \arctan(u). \qquad (21)
The trend of nonlinearity with the variation of an operating condition parameter is often more
interesting than a single value. A possible parameter of the open-loop nonlinearity measure is
the region of operation, i.e. the maximal admissible amplitude A of the input signals. The vari-
ation of the degree of nonlinearity with varying maximal input amplitude is shown in Fig. 5.
The set-point for the example system is chosen to be zero. As expected, the value of the nonlin-
earity measure increases with a growing maximal amplitude of the input signals. As the set of
considered input signals gets bigger if a larger amplitude is allowed, the nonlinearity measure
can only get larger or stay constant. In the specific case of the considered Hammerstein sys-
tem, the saturation in the input becomes more and more important, resulting in an increasing
nonlinearity value. This fact coincides with the general intuition and confirms that the proposed
definition is in accordance with qualitative insight.
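A crude steady-state proxy makes this trend tangible: a constant input u settles the state at x* = arctan(u), so the best relative fit of a linear gain to arctan over |u| <= A already grows with A. The grid resolutions below are arbitrary; this is only a sketch of the trend in Fig. 5, not the operator-norm measure of Sec. 2:

```python
import math

def openloop_nl_proxy(A, npts=200, ngain=400):
    """Crude steady-state proxy for the open-loop nonlinearity of
    xdot = -x + arctan(u): best relative fit of a gain k*u to arctan(u)
    over constant inputs 0 < u <= A (odd symmetry covers u < 0)."""
    us = [A * (i / npts) for i in range(1, npts + 1)]
    nrm = max(math.atan(u) for u in us)
    best = float("inf")
    for j in range(ngain + 1):
        k = j / ngain                       # candidate gains in [0, 1]
        worst = max(abs(math.atan(u) - k * u) for u in us)
        best = min(best, worst)
    return best / nrm

small, large = openloop_nl_proxy(0.5), openloop_nl_proxy(5.0)
# The proxy grows with the admissible input amplitude A, as in Fig. 5.
```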
In control-relevant analysis, additional parameters are introduced by the cost criterion. In the
following, the dependence of the OCL nonlinearity measure on the controller aggressiveness
will be considered. The cost criterion is taken to be

J = \int_{0}^{\infty} \left( x^2(t) + a\, u^2(t) \right) dt \qquad (22)
where x and u are scalars and the weight a on the control action is the parameter used to choose the controller aggressiveness. Smaller values of a correspond to a small penalty on
control action and will lead to an aggressive controller tuning. High values of a correspond to
high penalty on control action and will lead to a non-aggressive controller.
Figure 6 shows the dependence of the OCL nonlinearity on the region of initial values for
three different values of a. First of all, nonlinearity increases with a bigger region of operation
as expected and as in the case of the open-loop nonlinearity measure. But the dependence on a
is very interesting. It can be seen that the nonlinearity of the controller gets more important with
increasing aggressiveness of the controller. The interpretation of the results for the Hammerstein
example is obvious: for large values of a (corresponding to a less aggressive controller), the
controller output stays small. Thus, the plant input signal stays in the almost linear part of
the static nonlinearity that precedes the linear dynamics. Only if aggressive control action is
allowed, the input nonlinearity begins to play a role and has to be taken into account in the
controller structure.
An important remark has to be made regarding Figures 5 and 6. The abscissa in both graphs
shows the region of operation. In the case of an open-loop measure this corresponds to plant
input signal amplitudes, whereas in the case of the OCL measure this corresponds to the plant
output (i.e. controller input) signal amplitudes. The two cannot be matched in a simple way.
However, the figures show that the performance objective plays an important role in control-
relevant nonlinearity quantification and that the results obtained by open-loop considerations
can be misleading. Note that this is not due to a failure of the open-loop nonlinearity measure,
but this is due to the fact that depending on the desired performance a nonlinear system does
not necessarily require a nonlinear controller!
Figure 7 shows the explicit dependence of the OCL nonlinearity on a and an operating region
characterized by a maximal amplitude of A = 20. Again, the strong dependence on a can be
seen. For values of a above 10, the performance of a linear controller will be very close to the
Fig. 7. Variation of the OCL nonlinearity measure as a function of the penalty weight a on
control action for the Hammerstein-type example system. Initial conditions lie within a radius
of |x_0| < 20 around the origin.
performance of an optimal nonlinear controller. For values of a below 1, the linear controller
and the optimal nonlinear controller will differ significantly and a nonlinear controller design is
strongly recommended.
In order to qualitatively verify the results obtained by the OCL nonlinearity measure, the exact
optimal feedback law is computed for this example. In the simple case of a scalar nonlinear
system (nonlinear system of order one) that is considered in this example, the infinite-horizon
optimal controller can be numerically calculated via the Hamilton-Jacobi-Bellman equation.
This equation is in general a partial differential equation involving time. For a time-invariant
control problem in one dimension, it collapses into an ordinary differential equation. Figure 8
shows the optimal feedback laws u = k(x) for different values of the parameter a. The
results strongly support the OCL nonlinearity calculations from Fig. 7. For smaller values of
a, corresponding to an aggressive controller tuning, the optimal control law becomes more and
more "curved" and thus more and more nonlinear. For large values of a, corresponding to little
control action, the feedback law is almost a straight line and thus very linear.
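A sketch of this reduction: substituting the stationarity condition of the Hamiltonian into the stationary HJB equation leaves, for each x, a single algebraic equation in u that can be solved by bisection. The bracket width, iteration count, and weights below are illustrative assumptions:

```python
import math

def hjb_residual(u, x, alpha):
    """Stationary HJB for xdot = -x + arctan(u), cost x^2 + alpha*u^2,
    with the stationarity condition V'(x) = -2*alpha*u*(1 + u^2)
    substituted: x^2 + alpha*u^2 + V'(x)*(-x + arctan(u)) = 0."""
    vprime = -2.0 * alpha * u * (1.0 + u * u)
    return x * x + alpha * u * u + vprime * (-x + math.atan(u))

def optimal_u(x, alpha, umax=50.0, iters=80):
    """Optimal feedback k(x) by bisection; odd symmetry handles x < 0."""
    if x == 0.0:
        return 0.0
    if x < 0.0:
        return -optimal_u(-x, alpha, umax, iters)
    lo, hi = -umax, 0.0      # residual < 0 at lo and equals x^2 > 0 at hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if hjb_residual(mid, x, alpha) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Smaller alpha (aggressive tuning) gives a larger, more "curved" feedback.
u_aggr = optimal_u(1.0, alpha=0.1)
u_mild = optimal_u(1.0, alpha=10.0)
```

For large alpha the feedback stays small and nearly linear, while for small alpha the control action grows and the law curves, in line with the discussion of Fig. 8.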
This section illustrated the results of the open-loop nonlinearity measure and the optimal
control law (OCL) nonlinearity measure with the help of a simple one-dimensional example
Fig. 8. Optimal feedback control law for the Hammerstein example system for different values
of the penalty weight on control action a.
system with saturating input nonlinearity. The results show that performance requirements
typically play an important role in the quantification of a system's degree of control-relevant
nonlinearity, and that in this case an open-loop measure can not give the desired information
to determine whether a linear or a nonlinear controller should be chosen. For the presented
simple example, the derivation of the optimal feedback law was possible. It was thus illustrated
that the OCL nonlinearity properly reflects the desired structural information about the optimal
feedback law.
4. CONCLUSIONS
Linear behaviour of processes is desirable because linear system theory is highly developed
and there exist many powerful and mature tools for the purpose of process control. When
dealing with nonlinear systems, it is crucial to know whether linear modeling, analysis and
design tools can be applied. Nonlinearity measures do deliver important insight about the degree
of nonlinearity of a system.
One very important question is in which cases nonlinear controller design is necessary or
beneficial. To answer this question it does not suffice to analyze the plant alone as not ev-
ery nonlinear plant requires a nonlinear controller. The approach presented here quantifies the
nonlinearity of the optimal static state feedback controller without the necessity to explicitly
compute the feedback law. Thus, already early in the design stage, a statement can be made as to whether linear controllers will lead to satisfactory performance. The presented control-relevant
nonlinearity measure is practical even for complex systems as the numerical value can be effi-
ciently computed and the measure is applicable to stable as well as unstable systems.
In the context of design and control interaction, nonlinearity measures are powerful tools to assess different process designs, as they indicate which of a set of candidate process designs is favorable with respect to operation with linear controllers.
REFERENCES
[1] A.J. van der Schaft, L2-Gain and Passivity Techniques in Nonlinear Control, Springer, London, 2000.
[2] C.A. Desoer and Y.-T. Wang, IEEE Trans. on Circuits and Systems, CAS-27 (1980) 104.
[3] D. Sun and K.A. Hoo, Int. J. Contr., 73 (2000) 29.
[4] F. Allgöwer, In 3rd IFAC Nonlinear Control Systems Design Symposium, Lake Tahoe, CA, 1995, pp. 279-284.
[5] D. Sourlas and V. Manousiouthakis, In Proc. 37th IEEE Conf. Decision Contr., 1998, pp. 1434-39.
[6] K.R. Harris, M.C. Colantonio, and A. Palazoglu, Chem. Eng. Sci., 55 (2000) 2393.
[7] M. Nikolaou and V. Manousiouthakis, AIChE J., 35 (1989) 559.
[8] S.A. Eker and M. Nikolaou, AIChE J., 9 (2002) 1957.
[9] P.M. Mäkilä and J.R. Partington, Automatica, 39 (2003) 1.
[10] F. Allgöwer, Näherungsweise Ein-/Ausgangs-Linearisierung nichtlinearer Systeme, Fortschr.-Ber. VDI Reihe 8 Nr. 582, VDI Verlag, Düsseldorf, 1996.
[11] A. Helbig, W. Marquardt, and F. Allgöwer, J. Proc. Contr., 10 (2000) 113.
[12] M. Guay, P.J. McLellan, and D.W. Bacon, Can. J. Chem. Eng., 73 (1995) 868.
[13] M. Guay, P.J. McLellan, and D.W. Bacon, AIChE J., 43 (1997) 2261.
[14] J. Hahn and T.F. Edgar, Ind. Eng. Chem. Res., 40 (2001) 5724.
[15] C.W. Scherer, IEEE Trans. Automat. Contr., AC-40 (1995) 1054.
[16] A. Nemirovsky and D. Yudin, Problem Complexity and Method Efficiency in Optimization, Wiley-Interscience, New York, NY, 1983.
[17] Y. Nesterov and A. Nemirovsky, Interior Point Polynomial Methods in Convex Programming: Theory and Applications, SIAM, Philadelphia, 1994.
[18] M. Grötschel, L. Lovász, and A. Schrijver (eds.), Geometric Algorithms and Combinatorial Optimization, Springer-Verlag, New York, NY, 1988.
[19] H. Chen, A. Kremling, and F. Allgöwer, In Proc. 3rd European Control Conference ECC'95, Rome, 1995, pp. 3247-3252.
[20] F. Allgöwer, Näherungsweise Ein-/Ausgangs-Linearisierung nichtlinearer Systeme, Fortschr.-Ber. VDI Reihe 8 Nr. 582, VDI Verlag, Düsseldorf, 1996.
[21] R.K. Pearson, F. Allgöwer, and P.H. Menold, In Proc. of the 4th European Control Conference, ECC'97, Brüssel, 1997, Paper FR-A F4.
[22] P.H. Menold, F. Allgöwer, and R.K. Pearson, In Proc. of the First European Congress on
Chapter A4
a Polytechnic University, Brooklyn, NY, USA
b Air Products and Chemicals, Allentown, PA, USA
c CPMC Research Center, Department of Chemical Engineering, Lehigh University, USA
d ExxonMobil Research and Engineering, Fairfax, VA, USA
1. INTRODUCTION
A chemical process must be able to operate cost-effectively at all desired production rates and product splits; achieve product purities; and not violate other process, equipment, and machinery constraints. A well-designed process must also be able to change production rates and splits within the specified time frame. Historically, these objectives have been achieved by a two-step
process: first, the process design was completed and the plant built, and second, controls were
implemented with the goal of meeting all of the performance objectives. Often, performance
difficulties introduced by the specific process and equipment designs had to be overcome by
overly complicated control strategies. Control research initially tackled this issue beginning
with tools to aid the pairing of individual control loops [1,2], rules for tuning control loops [3]
and dealing with interaction among the loops [4]. As helpful as these aids have been, the more effective approach is to design operability into the process. Research has more recently been
focused on developing tools and methodologies to ensure that a process is designed to meet op-
erability objectives [5-8]. The following sections in this chapter will review the prior research
in this field, then present what the authors believe is a very useful approach to the design of an
operable process.
An operability measure needs to quantify the inherent ability of the process to move from one
steady state to another and to reject any of the expected disturbances in a timely fashion with the
limited control action available. This inherent ability is designed into the process by economic
tradeoffs made in flowsheet design, equipment sizing and specification, and provisions for dy-
* Present Address: Invensys India Private Limited, Chennai, India.
Figure 1. (a) Achievable output space at steady-state, (b) Dynamic changes in achievable output
space over time
F = q_1 + q_2 \qquad (1)

T = \frac{q_1 T_1 + q_2 T_2}{q_1 + q_2} \qquad (2)
The achievable temperature range at steady-state is shown in Figure 1a. Note that it is not
possible to maintain the temperature of the exit stream over a wide temperature range when the
total flow is quite high. This defines the operability problem: if it is required to cover more of
the operating space (defined later as the Desired Output Space) than is achievable, (defined later
as the Achievable Output Space) then changes in the process design would have to be made. No
control method will be able to overcome the inherent limitations on operability imposed by the
limitations in design.
If hot water were not instantaneously available (T2 not constant at 50°C) but gradually in-
creased in temperature, then the achievable range would be much smaller at the beginning and
gradually increase to the steady-state range. This can be seen in Figure 1b.
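The steady-state achievable output space of this mixing example can be sketched by sampling the input space. The flow bounds and the cold-stream temperature below are assumed for illustration; only the 50 °C hot stream comes from the text:

```python
# Achievable Output Space (AOS) of the mixing example by input sampling.
T1, T2 = 10.0, 50.0          # stream temperatures, C (T1 is assumed)
QMAX = 1.0                   # per-stream flow bound (assumed units)

N = 100
aos = {}                     # total flow F (rounded) -> (Tmin, Tmax)
for i in range(N + 1):
    for j in range(N + 1):
        q1, q2 = QMAX * i / N, QMAX * j / N
        F = q1 + q2
        if F == 0.0:
            continue
        T = (q1 * T1 + q2 * T2) / F
        key = round(F, 2)
        lo, hi = aos.get(key, (T, T))
        aos[key] = (min(lo, T), max(hi, T))

# Near the maximum total flow only T ~ (T1 + T2)/2 is achievable, so the
# achievable temperature range shrinks as F grows, as in Figure 1a.
mid_range = aos[1.0][1] - aos[1.0][0]
high_range = aos[1.9][1] - aos[1.9][0]
```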
The process design would be different depending on the desired range of steady-state opera-
tion and speed of dynamic response required. If the full operating range would be required at
startup, then a system to maintain the hot water at full temperature would need to be installed.
This is the issue addressed by operability analysis and specifically by the concept of operability
analysis espoused by the authors.
A reliable, accurate, and straightforward methodology for the examination of the operability
characteristics of a process that would permit ranking competing designs would be of great use.
In order to effectively accomplish this task, the approach must provide a precise measure for
operability. It should not miss operability difficulties nor should it provide false indications
of operability problems. The approach contained in subsequent portions of this chapter provides the groundwork for measures that accurately quantify the trade-offs between design and control.
2. OPERABILITY REVIEW
There are multiple approaches that one could take to examine the literature on operability
analysis, but the method taken in this chapter is to divide the body of research into steady-
state and dynamic classifications. Each of these classifications further consists of different
approaches that utilize linear and non-linear analysis, statistical models versus deterministic
models, open loop versus closed loop analysis, and optimization methods versus less numer-
ically intense methods. These subcategories are incorporated into the two main divisions of
steady-state and dynamic analysis. The most complete picture of the existing research is best
gained by examining the body of literature in this manner.
pairing of a manipulated variable when all other loops are open to the open-loop gain for that manipulated variable when all other loops are closed. Hence, the RGA is an indicator of the interaction between control loops and is useful for avoiding bad control pairings [13, 14], but it can also provide misleading results [8, 15, 16]. Extensions of the RGA approach have been introduced
to account for additional information. In an attempt to incorporate the impact of disturbances,
the relative disturbance gain [17] was introduced. Improved analysis of the multivariable im-
pacts of subsystems were addressed by the block relative gain [18] and relative sensitivity [19].
Other steady state tools apply the powerful capabilities of singular value decomposition to the
calculation of useful measures such as the largest singular value, condition number, and the intersivity index [20-23]. Singular values identify the impact that inputs have on outputs, so these
tools attempt to define the magnitude of the impact that manipulated and disturbance variables
have on selected controlled variables. Some of the applications addressed with these tools in-
clude control variable pairing, optimal sensor location, robust controller design, and resiliency
analysis.
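For a small system, both the RGA and the singular-value quantities mentioned above can be computed in closed form from a 2x2 steady-state gain matrix. The matrix below is a hypothetical illustration, not a gain matrix from any cited study:

```python
import math

# Steady-state gain matrix of a hypothetical 2x2 plant (illustrative numbers).
G = [[2.0, 1.5],
     [1.0, 2.5]]

def rga_2x2(g):
    """Relative Gain Array: Lambda = G elementwise-times inv(G)-transpose.
    For 2x2 matrices the rows and columns each sum to one."""
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    l11 = g[0][0] * g[1][1] / det
    return [[l11, 1.0 - l11], [1.0 - l11, l11]]

def singular_values_2x2(g):
    """Singular values from the eigenvalues of G^T G (closed form)."""
    a = g[0][0] ** 2 + g[1][0] ** 2
    b = g[0][0] * g[0][1] + g[1][0] * g[1][1]
    c = g[0][1] ** 2 + g[1][1] ** 2
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(tr * tr / 4.0 - det)
    return math.sqrt(tr / 2.0 + disc), math.sqrt(tr / 2.0 - disc)

lam = rga_2x2(G)
smax, smin = singular_values_2x2(G)
cond_number = smax / smin
```

A large condition number or a small minimum singular value would flag directions in the output space that are hard to reach, which is precisely the use made of these quantities in the operability literature cited above.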
Morari [2] identified the relationship between the invertibility of the transfer function matrix of a system and its ability to move quickly and smoothly from one operating condition to another and to
deal effectively with disturbances. Therefore, plants that are easier to invert should be easier to
control and possess greater resilience. Four fundamental factors were identified preventing the
inversion of the process: right-half-plane zeros, time delays, constraints on the variables, and
model uncertainty. The limitations imposed by these factors on the operability of multivariable
systems were explored by Morari, coworkers, and others in a series of papers [24-28].
Non-linear approaches to operability analysis most often utilize mechanistic models. These
models are based on the underlying mathematical models for mass transfer, energy transfer,
and chemical reaction. Analysis of process operability by utilizing simulations of specific pro-
cess designs [29, 30] is an approach that is useful for a specific process design. This approach
requires conducting a significant number of case studies, and the conclusions remain specific to the simulated configuration. An approach that addresses the shortcoming of
the previous case study approaches was introduced [31, 32] to quantify the steady-state oper-
ability of nonlinear processes. The concept is to calculate a flexibility index for the process.
An optimization problem is solved to define the maximum permissible normalized parameter
uncertainty that a process can tolerate without violating any constraints. This concept was sub-
sequently extended to eliminate initial restrictions requiring that the limiting points lie in the
uncertain parameter vertex directions [5]. A similar approach was utilized by Bahri, Bandoni, and Romagnoli [33], except that the manipulated variables are fixed in the analysis and a back-off operating point is defined where constraints should not be violated even when predefined disturbances enter the plant.
Another approach to the problem is the development of a methodology that permits a logical
selection of a control structure for a given process design. One of the earliest methods was
proposed by Buckley [34] in which his primary requirements were to maintain product quality
and the plant material balance. This procedure provides an excellent framework for control
structure selection but requires engineering experience for individual controller pairing. Build-
ing on the Buckley methodology, Shunta [35] added steps to develop secondary and constraint
controls. Fisher, Dougherty, and Douglas [36-38] present a hierarchical approach to control
system synthesis which provides for simultaneous evaluation of controls and process design.
This is a nice framework but leaves the actual control structure selection in the experiential
realm. They suggested that adding more manipulated variables, overdesigning selected pieces
of equipment, and ignoring the least important variables could improve controllability.
Vinson and Georgakis [7, 8] developed a direct measure, called the Operability Index, which
effectively captures the inherent steady-state operability of linear and non-linear continuous
processes. Its geometrical interpretation makes it easy to understand and it also addresses multi-
variable interactions. The index was demonstrated to more accurately reflect the true operability
than other indices, such as minimum singular value, CN, and RGA, on representative linear
examples. In addition it has been proven to be independent of inventory control structure [39]
space relative to the desired output space for open loop and closed loop cases.
3. STEADY-STATE OPERABILITY
In this section, we shall first review the steady-state operability framework introduced by
Vinson and Georgakis [7, 8] in a generic nonlinear setting. We illustrate the concepts with a
simple mixing tank problem as well as a highly nonlinear tubular reactor for the production
of vinyl acetate. Then, we proceed to discuss some conceptual extensions and algorithmic
developments to the basic framework. It might be worth recalling the definition of operability
given by Vinson [16]:
One of the basic requirements in performing the operability analysis is that we have a model
of the process relating the inputs and the disturbances to the outputs. Many of the process
models can be described by the following state-space representation:
M: \quad \dot{x} = f(x, u, d) \qquad (3)
y = g(x, u, d) \qquad (4)
h_1(\dot{x}, x, y, \dot{u}, u, d) = 0 \qquad (5)
h_2(\dot{x}, x, y, \dot{u}, u, d) \le 0 \qquad (6)

where x \in \mathbb{R}^{n_x} is the state vector, u \in \mathbb{R}^{n_u} is the input/control vector, d \in \mathbb{R}^{n_d} is the disturbance vector, and y \in \mathbb{R}^{n_y} is the output vector of the process. Here, \dot{x} and \dot{u} represent the time derivatives of the corresponding variables. It is implied in the above equations that all the variables are functions of time. The two nonlinear maps f and g are of the following dimensionality: f : \mathbb{R}^{n_x+n_u+n_d} \to \mathbb{R}^{n_x} and g : \mathbb{R}^{n_x+n_u+n_d} \to \mathbb{R}^{n_y}. Constraints in Equations 5 and
6 represent the process, product, and safety specifications, including the bounds on the magni-
tudes and the rate-of-change of the inputs. These constraints may be applicable to the complete
time history of the process (path constraints) or only at certain times (point constraints). These
time-dependent constraints will be relevant when we discuss the dynamic operability in the next
section.
In general, based on operational requirements, process outputs can be classified into two
broad categories: (1) set-point controlled, outputs to be controlled at a desired value, and (2)
set-interval controlled, outputs to be controlled within a desired range. For instance, production
rate and product quality may fall into the first category, whereas process variables, such as
level, pressure, and temperature in different units/streams may fall into the second category. In
the following subsection, we shall focus on systems without extra degrees of freedom, that is
systems having the same number of inputs and the set-point controlled outputs, nu = ny. Later,
we shall present methodologies for dealing with systems with extra degree(s) of freedom, that
is, where there are more input variables than set-point controlled outputs. There we shall present
an approach for plantwide operability in this framework.
Suppose we also specify a Desired Output Space (DOS), which is the desired operating win-
dow for the process outputs. The set of input values required to reach the entire DOS can then
be calculated from the inverse of the model. The collection of all such input values is denoted
as the Desired Input Space (DIS). Mathematically,
The dependence of DIS on y and d is emphasized with the subscripts and arguments as for
the AOS. We do not use the subscripts and arguments if the context makes it evident. It might
be worth pointing out here that the AOS and DIS discussed so far are obtained for fixed values
of the disturbance and, in accordance with the control literature, are referred to as servo spaces.
Having outlined the required spaces, the servo operability index in the output space can be
defined as:
_ /x[AOS u (d")nDOS]
S Uiy i j
~ M [DOS]
Here \mu is a measure function calculating the size of the corresponding space. For instance, in
two dimensions, it represents the area, and in three dimensions, it represents the volume. This
index would be particularly useful in analyzing the operability of existing plant designs as it
indicates how much of the desired process output region is achievable with the available inputs.
If the index is less than one, it implies that our expectations are higher than what the process can deliver.²

² Vinson and Georgakis in their original work called this index the Output Controllability Index. It was subsequently renamed the Operability Index [16, 39] to avoid confusion with the system-theory concept of state controllability. Strictly speaking, the operability indices and the operating spaces to be discussed in this section should additionally be qualified as steady-state; for the sake of brevity, we do not add this prefix.
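As an illustration of this ratio, the following sketch (not from the original text; all numbers are invented for illustration) computes s-OI_y for the simple case where both the AOS and the DOS are axis-aligned boxes, so the measure μ reduces to a product of interval overlaps:

```python
def interval_overlap(a, b):
    """Length of the intersection of two 1-D intervals a=(lo, hi), b=(lo, hi)."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def box_measure(box):
    """Hypervolume of an axis-aligned box given as a list of (lo, hi) intervals."""
    m = 1.0
    for lo, hi in box:
        m *= (hi - lo)
    return m

def servo_oi_output(aos, dos):
    """s-OI_y = mu[AOS n DOS] / mu[DOS] for axis-aligned boxes."""
    inter = 1.0
    for a, d in zip(aos, dos):
        inter *= interval_overlap(a, d)
    return inter / box_measure(dos)

# Illustrative 2-D output space (e.g., normalized flow and temperature).
aos = [(0.0, 0.8), (0.0, 1.0)]   # achievable ranges
dos = [(0.2, 1.0), (0.1, 0.9)]   # desired operating window
print(servo_oi_output(aos, dos))  # 0.75
```

For general polytopic spaces the intersection and its measure would instead require computational-geometry tools; the box case shown here is the simplest instance of the index.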
For new plant designs, or while considering design changes to an existing plant, it will be
more useful to analyze the problem in the input space. Similar to the previous equation, a servo
operability index in the input space can be defined as:
s-OI_u = μ[AIS ∩ DIS_y(d^N)] / μ[DIS_y(d^N)]     (10)
This index quantifies how much of the servo-DIS is covered by the AIS. If its value is less
than one, it indicates a need to increase the available ranges of some of the inputs. For linear
systems, both these operability index definitions give the same value. However, there will be
some differences between the calculated values for nonlinear systems.
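The equality of the two index definitions for linear systems can be checked numerically. A minimal sketch, assuming a diagonal steady-state gain so that boxes map to boxes (the gains and ranges here are illustrative assumptions, not the chapter's example):

```python
# Diagonal gain: y_i = k_i * u_i, so axis-aligned boxes map to boxes.
K = [2.0, 0.5]
ais = [(0.0, 1.0), (0.0, 1.0)]
dos = [(0.5, 2.5), (0.1, 0.6)]

def scale_box(box, gains, inverse=False):
    """Image of an axis-aligned box under a diagonal (or inverse diagonal) gain."""
    out = []
    for (lo, hi), k in zip(box, gains):
        k = 1.0 / k if inverse else k
        a, b = lo * k, hi * k
        out.append((min(a, b), max(a, b)))
    return out

def oi(space, target):
    """mu[space n target] / mu[target] for axis-aligned boxes."""
    inter, meas = 1.0, 1.0
    for (alo, ahi), (tlo, thi) in zip(space, target):
        inter *= max(0.0, min(ahi, thi) - max(alo, tlo))
        meas *= (thi - tlo)
    return inter / meas

aos = scale_box(ais, K)                 # AOS = K * AIS
dis = scale_box(dos, K, inverse=True)   # DIS = K^-1 * DOS
print(oi(aos, dos), oi(ais, dis))       # identical for this linear map
```

Because a linear map rescales every volume by the same factor, the ratio is preserved whether it is evaluated in the input or the output space; for nonlinear maps this factor varies over the space, which is why the two indices then differ.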
In order to investigate the regulatory operability of the process, the anticipated ranges of the disturbances must additionally be specified; these define the Expected Disturbance Space
(EDS). For the steady-state case, the EDS may also reflect the uncertainties in some of the im-
portant model parameters employed in the design, such as kinetic constants, heats of reaction,
heat-transfer coefficients, etc. The regulatory operability index is defined from the inputs re-
quired to compensate for the effect of disturbances while maintaining the plant at its nominal
set point, y^N, as:

r-OI = μ[DIS_{y^N} ∩ AIS] / μ[DIS_{y^N}]     (11)

with

DIS_{y^N} = {u : y^N = M(u, d) for some d ∈ EDS}
Alternatively, we could calculate the region of disturbances that can be tolerated with the available inputs, keeping the plant at the nominal operating point. This is denoted as the Tolerable Disturbance Space (TDS), defined as:

TDS = {d : y^N = M(u, d) for some u ∈ AIS}
A regulatory operability index can now be formulated in the disturbance space as:

r-OI_d = μ[TDS ∩ EDS] / μ[EDS]
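For a scalar linear model, the TDS and the resulting index can be written in closed form. A sketch under that assumption (the model y = k_u·u + k_d·d and all numbers are illustrative):

```python
def tolerable_disturbance_space(y_n, k_u, k_d, u_lo, u_hi):
    """TDS for the scalar model y = k_u*u + k_d*d: the disturbances d for which
    the input u = (y_n - k_d*d)/k_u needed to hold y at y_n stays inside the AIS."""
    # Invert the two input bounds into disturbance bounds.
    d1 = (y_n - k_u * u_lo) / k_d
    d2 = (y_n - k_u * u_hi) / k_d
    return min(d1, d2), max(d1, d2)

def regulatory_oi_d(tds, eds):
    """r-OI_d = mu[TDS n EDS] / mu[EDS] for 1-D intervals."""
    lo, hi = max(tds[0], eds[0]), min(tds[1], eds[1])
    return max(0.0, hi - lo) / (eds[1] - eds[0])

tds = tolerable_disturbance_space(y_n=1.0, k_u=2.0, k_d=1.0, u_lo=0.0, u_hi=1.0)
print(tds)                                     # (-1.0, 1.0)
print(regulatory_oi_d(tds, eds=(-2.0, 2.0)))   # 0.5
```

An index of 0.5 here says that only half of the expected disturbance range can be rejected with the given input limits, signalling that the input ranges (or the equipment behind them) should be enlarged.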
Our overall objective is to reject the expected disturbances, and, at the same time, be able to
reach all the points in the DOS. With the understanding that DIS_d is a function of y and DIS_y is a function of d, the total desired input space can be defined as the set-theoretic union of DIS_d(y) for all y in the DOS, or the similar union of DIS_y(d) for all d in the EDS. Both of these unions define the same overall DIS.
Figure 2. Achievable and desired output spaces for the mixing problem. The AIS is shown in the inset.
Notice that all these indices have their best and worst values of 1 and 0, respectively. As they
are ratios of similar quantities, these measures are dimensionless and do not require the inputs
and outputs to be scaled to yield consistent results.
In the following, we will illustrate the operability calculations with some systems. Let us
refer back to the shower problem discussed in the introduction. The achievable and desired
output spaces for this problem are shown in Figure 2. The AIS is shown in the inset of the
figure. Comparing the AOS with the DOS, the servo operability index can be computed as s-OI_y = 0.67. This indicates that the AIS is not large enough to deliver all the outputs in the DOS. The DIS calculated for the given DOS is shown in Figure 3. In the input space, we calculate s-OI_u as 0.55, which again points to the insufficiency of the available input ranges.
The requirement for the inputs to accommodate the DOS can be observed from the figure.
When dealing with nonlinear systems, the boundaries of a given input space do not necessarily map to the boundaries of the output region. The existence of either input or output multiplicities at steady state is a cause for parts of the boundary of an input region not to map
Figure 3. Desired and available input spaces for the mixing problem. The DOS is shown in the inset.
onto the boundary of the output region. Such behavior was exhibited by the vinyl acetate reactor studied by Subramanian and Georgakis [53]. The reactor is part of the flowsheet problem presented by Luyben and Tyreus [55]. The reactor has a shell-and-tube configuration, with the reactions taking place in the catalyst-packed tubes. The heat released by the exothermic reactions is
removed by boiling water on the shell side. In this isolated reactor study, the inputs are the feed
flow rate and the shell side temperature. The specified AIS and the calculated AOS are shown
in Figures 4 and 5, respectively. Mapping the input boundary ABCD to the output space yields an unusual curve in the shape of the number 8, giving rise to input multiplicity. On further analysis,
it was found that the line characterized by det(J) = 0, where J is the Jacobian matrix, lying inside the given input space, maps to the missing boundary of the output space (shown as a dashed line in both figures). This leads to an interesting property for the output points in the region bounded by the dashed (det(J) = 0) line and the continuous lines. As each of these points can be reached from two different points in the input space, this area can be
referred to as the input multiplicity region. The nominal operating point shown as a star in the
figures lies in this region. It can be noted that the input-multiplicity equivalent point, shown to the left of the dashed line in the AIS, achieves the same production and selectivity levels with only one third of the nominal feed flow rate. Further discussion of this problem can be found in [53].
The discussion so far has centered on set-point controlled outputs. Systems where the number of input variables equals the number of set-point controlled output variables are candidates for this point operability analysis. On the other hand, some of the output variables may be relaxed and need only be controlled within an interval. This makes the available input variables more numerous than the set-point controlled outputs. For such problems, the calculation of the DIS should be
modified to account for the extra degrees of freedom. It is suggested here that the steady-state design problem of finding the desired inputs, u*, for given y_sp and d, be solved as a constrained optimization problem minimizing a cost function dependent on the input variables, subject to process and performance constraints. Mathematically stated,

u*(y_sp, d) = arg min_u J(u)   subject to the constraints in M(u, y_sp, d)     (P1)
The objective function, J, would typically involve terms associated with the product made, but also the cost associated with input use, such as penalties to discourage wide ranges of certain inputs. The constraints in M include equality constraints
for the set-point controlled outputs, y sp , and the inequality constraints for the set-interval con-
trolled outputs. Note that u*, the solution of the optimization problem, is a function of y sp and
d. Further, it is worth pointing out that the DOS is defined only in terms of the set-point controlled variables.
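For the common quadratic cost J = ||u||², this constrained problem has a closed-form (minimum-norm) solution in the simplest case of one set-point output and two inputs. A sketch under that assumption (the linear model and all numbers are illustrative, not the chapter's example):

```python
def desired_input(a, k_d, y_sp, d):
    """Least-cost input for one set-point output and two inputs:
    minimize ||u||^2 subject to a[0]*u0 + a[1]*u1 + k_d*d = y_sp.
    The minimum-norm solution is u* = a^T * r / (a . a), with r = y_sp - k_d*d."""
    r = y_sp - k_d * d
    den = a[0] * a[0] + a[1] * a[1]
    return (a[0] * r / den, a[1] * r / den)

u = desired_input(a=(1.0, 2.0), k_d=0.5, y_sp=2.0, d=1.0)
print(u)  # (0.3, 0.6): satisfies 1*0.3 + 2*0.6 + 0.5 = 2.0
```

Repeating this solve over a grid of (y_sp, d) values in the DOS and EDS traces out the corresponding DIS; with nonlinear models or inequality constraints, a numerical NLP solver would replace the closed form.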
Figure 5. AOS for the vinyl acetate reactor for the AIS shown in Figure 4.

Subramanian, Uztürk, and Georgakis [12] studied the operability characteristics of CSTR systems using the interval operability framework. Figure 6 shows an example from this work, where a CSTR's temperature is set-interval controlled and the reactor concentration is set-point controlled over a desired range. Note that each servo-DIS in this figure is a one-dimensional space,
but their union for all d ∈ EDS forms a two-dimensional input region. The AIS is defined as 0.3 to 1 in the normalized reactor volume and 0 to 4 in the normalized coolant flow rate. This overall DIS is then intersected with the AIS to calculate the OI as 0.68.
Here, u*_j(y_sp, d) represents the jth element of u*(y_sp, d), which is obtained by solving P1. This calculation has to be repeated for each input to obtain its individual upper limit. If the maximization operator in P2 is replaced by the minimization operator, we obtain the corresponding lower bound on each input. In addition to the input limits, the solutions of these optimization problems also identify the limiting combinations of outputs and disturbances that require these extreme values of the inputs. Obviously, this formulation can also be applied to systems with ny = nu, in which case the inner optimization problem (P1) is eliminated and u*(y_sp, d) is directly obtained as the solution of M.
The DIS calculation may sometimes yield impractical input values, such as negative flow rates. The difficulty may even go unnoticed in the bounding-box calculations, depending on the process constraints utilized. If one encounters such a problem in the DIS calculation, the defined DOS and EDS should be respecified to be more reasonable, without compromising the real demands. If the infeasibility persists, alternative design options have to be explored, declaring the current design inoperable. Clearly, an adaptation for dealing with such infeasibilities directly would be preferable.
It can be argued that an approach based on the output space could overcome this deficiency.
A practical problem in working with the output space is its potentially large dimensionality,
especially for plantwide problems. If one were to include internal process variables, such as temperature, pressure, level, etc., the dimensionality of the output space could become prohibitive even for problems of moderate complexity. Though it is essential to maintain many of these
outputs within their acceptable intervals for reasons of safety, corrosion, optimality or other
considerations, the focus of an initial operability study should be on a few critical process
variables.
If we consider the entire chemical process plant as a single process operation, process vari-
ables associated with the feed and product streams entering or leaving the process can be called
exogenous (external) input and output variables, respectively (refer to Figure 7). These exogenous input and output variables are connected by the main process path, in the terminology of Price and Georgakis [14]. In order to maintain the process at the desired operating point (or region), several internal auxiliary input variables, such as steam flow, cooling water flow, etc., should be manipulated. Such variables can be referred to as endogenous (internal) inputs. The
state of the plant is generally measured using several variables, such as pressure, temperature,
and level; these variables can be grouped as endogenous output variables. Though these en-
dogenous output variables have to be held in a range for acceptable operation of a chemical
plant using both the exogenous and endogenous inputs, the overall objective of the process is
generally expressed in terms of the exogenous output variables that are related to the production
rates and product qualities. Keeping these points in mind, in the operability analysis of high-dimensional plantwide problems we limit our focus to these exogenous output variables. Many of the set-point controlled outputs, discussed in Section 3.2, will be exogenous outputs.
Based on this discussion, we introduce a special case of the previously defined AOS, called the Achievable Production Output Space (APOS). It is defined as the entire feasible operating region in the space of production variables (rates and product qualities), a subset of the exogenous outputs referred to previously, that is achievable with the given AIS and without violating the process constraints. Clearly, the APOS is a subset of the previously defined AOS. Note that we are not only limiting our consideration to a selected set of coordinates but also to a feasible section of
it. From another perspective, the APOS can also be viewed as a method for quantifying the
steady-state capacity of a given process design and control structure. The dimensionality of
APOS as defined here, will depend mainly on the number of products a plant makes and the
number of independent quality variables associated with each of the products. A schematic of
a two-dimensional APOS is shown in Figure 8. Though the process might be able to deliver a broad range of quality, especially in the low conversion/quality direction, it is sufficient to pay attention to the range that is of practical interest. A quantitative operability index similar to those defined before can be used. Here the Desired Production Output Space (DPOS), that is, the DOS in the production-related output variables, is compared to the APOS to obtain a quantitative Operability Index as:

OI = μ[APOS ∩ DPOS] / μ[DPOS]
In the case of a two-dimensional APOS, for a given product quality, the upper bound of the APOS can be obtained by solving a min-max problem, where the min operator searches over the disturbance space and the max operator maximizes the production, as in P3. Similarly, the lower bound of this APOS can be found by solving a set of max-min problems, where the max operator looks for the worst disturbance case and the min operator minimizes the production. A less conservative approach might be possible if the disturbances/uncertain parameters are expressed stochastically; then robust optimization techniques [see, e.g., 57, and references therein] can be used to establish probabilistic maximum and minimum production limits.
Figure 9. Comparison of the APOS of the TE process with two different control structures, with and without the loss of feed A. The base-case operating point is plotted as a star.
would not be an issue in our steady-state analysis. These factors will, however, influence the dynamic operability of a process quite significantly. Sensitivity of the process capacity to the set points of the SISO controllers can be studied with this method. It should also be kept in mind that a steady-state point calculated to be inside the APOS does not necessarily imply that it meets the dynamic operability requirements.
In order to demonstrate the APOS approach, we present some results for the well-known
Tennessee Eastman process of Downs and Vogel [60]. The process has two main reactions
leading to the formation of the products G and H, and side reactions that lead to a less desirable
byproduct F. As per the operational requirements given in the problem statement, the plant
operates over a broad range of the G to H ratio of production rates.
The APOS for the process is presented in Figure 9 (as continuous line). It was observed that
the production is limited by the availability of the feeds D and E which control the formation of
products G and H, respectively. Downs and Vogel also presented several disturbance scenarios for the problem. Among these, the sudden loss of feed stream A was found to be a drastic change with which many of the control structures proposed for this process had difficulty dealing. Here we present the effect of this disturbance on the APOS, shown as a dotted line in Figure 9. It is found that the maximum possible production suffers the most in the lower G-to-H ratio range of 0.1 to 2. This seems attributable to the fact that the slower H-forming reaction is set back more by the decreased availability of component A. It can be seen from the figure that the nominal production rate is still possible even after feed stream A has become unavailable. Many of the control structures, however, suggested a reduction in the production demand under this disturbance to make the problem more tractable dynamically.
Ricker [61] presented a comprehensive control structure that was successful at regulating the process and maximizing the production. Here we compare its steady-state performance with the best possible APOS discussed before. The result is shown in Figure 9 as a dashed line. It can be seen that Ricker's control structure performs well over the entire range of the G-to-H ratio. In the figure we have also shown its performance after feed stream A is lost, as a dash-dotted line. The performance falls short of the best possible APOS in the lower G-to-H range. From the above discussion, it is clear that the APOS methodology is helpful in studying the operability of plantwide problems. A more detailed account of this problem can be found in [62].
4. DYNAMIC OPERABILITY
Motivated by the steady-state concepts discussed above, Uztürk and Georgakis [9-11] formulated a dynamic operability framework that aims to quantify the inherent dynamic characteristics of the process. We shall briefly review their approach in this section.
In order to effectively perform dynamic operability analysis of a process, one needs a quanti-
tative measure of dynamic performance. Rise time, settling time, overshoot, and integral square
error are well-known and commonly used examples of such measures. For the purpose of this
framework, a dynamic operability measure is defined as the shortest time it would take a system
to settle to the desired set point after a set-point change and/or a disturbance occurrence.
The operability measure is based on the idea that the time spent away from the desired set
point is linked to potential losses due to off-specification products and economic penalties for
non-optimal performance. Different types of feedback controllers can be utilized to evaluate this
operability measure. However, a performance measure independent of the feedback controller
to be used and capable of assessing the inherent limitations of the process is desirable. The
minimum-time optimal controller suits these demands very well. The approach is based on an
implied assumption that a feedback controller exists that will deliver a closed-loop dynamic
operability close to the one calculated here by use of the optimal open-loop controller. For
similar reasons, Carvallo [63] employed the minimum time optimal controller to calculate the
time it would take the process to respond to the worst disturbance and/or set-point change.
The minimum-time optimal control problem for continuous-time systems can be formulated as follows:

t*_f(y_sp, d) = min_u ∫_0^{t_f} dt     (22)
    s.t.  M (x_0, u_0, y_sp, d given)

where t*_f(y_sp, d) is the minimum time necessary to respond to a change in the set point y_sp and to a disturbance d. Here, M represents the dynamic model of the process with the input
constraints and includes the final-time constraints which are set to ensure that once the system
reaches or returns to the set point, it stays there afterwards. For the optimal control problem in
Equation 22 to have a solution, a necessary condition is that the constraints in M have at least one feasible solution at steady state.
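For a scalar first-order discrete-time system, the minimum settling time can be computed directly from the reachable set, avoiding a full optimal control solve. A sketch under that simplifying assumption (the system x⁺ = a·x + b·u and all numbers are illustrative):

```python
def min_settle_steps(x0, x_sp, a, b, u_max, n_max=1000):
    """Smallest number of steps for x_{k+1} = a*x_k + b*u_k, |u| <= u_max,
    0 < a < 1, to reach x_sp and then hold it (the holding input
    u_ss = (1-a)*x_sp/b must itself be feasible).
    Uses the reachable-interval radius r_N = b*u_max*(1 - a^N)/(1 - a)."""
    if abs((1 - a) * x_sp / b) > u_max:
        return None  # set point not sustainable with the available input
    for n in range(n_max + 1):
        center = (a ** n) * x0            # drift term a^N * x0
        radius = b * u_max * (1 - a ** n) / (1 - a)
        if abs(x_sp - center) <= radius:  # x_sp inside the reachable interval
            return n
    return None

print(min_settle_steps(x0=0.0, x_sp=1.0, a=0.5, b=1.0, u_max=0.6))  # 3
```

For multivariable or nonlinear models, the same quantity requires solving the optimal control problem in Equation 22 numerically (e.g., by direct transcription), but the scalar case shows how input saturation alone bounds the achievable response time.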
The Dynamic Desired Operating Space (dDOpS) is specified through t̄_f(y_sp, d), which represents the desired dynamic performance, or the maximum allowable response time, in tracking a set-point change y_sp in the DOS and/or recovering from a disturbance d in the EDS.
The Dynamic Achievable Operating Space (dAOpS) is defined as the operating space representing the dynamic performance that can be achieved by the system for a given choice of the dAIS, DOS, and EDS. Mathematically, the dAOpS can be defined as:

dAOpS = {(y_sp, d, t_f) : y_sp ∈ DOS, d ∈ EDS, t_f ≥ t*_f(y_sp, d)}

The lower bound, t*_f(y_sp, d), for the response time is obtained from the minimum-time optimal control calculations; this establishes an upper bound on the dynamic performance. Since the response of a stable system to any (y_sp, d) can take infinite time to reach the desired steady state when u is changed to the corresponding steady-state value, the dAOpS is unbounded at one end.
The next operating space, referred to as S2, represents the ranges of set points and disturbances that can be achieved within t̄_f(y_sp, d). S2 is obtained by projecting the intersection of the dDOpS and the dAOpS onto S1. The dynamic operability index is then defined as:

dOI = μ[S2] / μ[S1]     (27)
where μ represents a function calculating the size of the corresponding space. dOI can take
values varying between 0 and 1, representing the worst and the best performances, respectively.
Since the response times are calculated using an idealized controller, dOI represents an upper
bound for the achievable control performance of the process. That is, if the minimum-time
controller fails to satisfy the performance requirements in dDOpS (i.e., dOI < 1), no feedback
controller can satisfy them in practice. However, if the performance requirements can be met
by the optimal controller (i.e., dOI = 1), it is not guaranteed that a feedback controller exists
that will give the same performance. In this context, Uztürk and Georgakis [64] compared the performance of different types of feedback controllers with the performance bounds predicted using the minimum-time optimal controller. Their study showed that one can expect that an
advanced model-based controller, like the Model Predictive Controller, would approach the
performance of the optimal controller.
Using this dynamic operability framework, a systematic study of the dynamic operability of
CSTR systems was done [12]. Some selected results of this work are presented here. In Figure
10, we compare three single CSTR designs nominally operating at different temperatures. These
designs were first overdesigned using the bounding box approach discussed in the previous
section to ensure steady-state operability (i.e., 01 = 1). In this study, the reactor concentration
was controlled at set point and the reactor temperature and level were controlled within an
interval. These minimum transition time plots reveal that the reactor operating at the higher
temperature responds faster than the ones operating at a lower temperature and with a larger
residence time. This confirms that the results are consistent with the expected behavior. Similar
trends were also observed in the minimum disturbance rejection times.
In Figure 11, two CSTRs-in-series systems are compared with a single reactor (D3) design.
All the reactors compared here are nominally operating at the same temperature. One of the
reactors-in-series systems is designed with equal reactor volumes (VR10D9), while the other
system is designed so that the heat load is evenly distributed between them (QR10D9). The fig-
ure shows that (i) two reactors systems are faster than the single reactor design, and (ii) QR10D9
is significantly faster than the other designs. This suggests that the criterion of distributing the heat load evenly is a good option in designing two CSTRs in series when considering inherent
Figure 10. Minimum transition time plots for three single-reactor designs. Transitions are from the nominal concentration of 0.05 lbmol/ft³.
dynamic operability.
Extending this approach to multivariable processes, next we study the dynamic operability of
a 2 x 2 MIMO system whose model is shown in Figure 12. We define the DOS (equivalently, S1) for this system to be a square, as shown in the figure. The dynamic performance bounds for this system are presented in the form of parallelograms enclosing the output points that can be reached within a specified time. The outermost of these parallelograms corresponds to t̄_f → ∞, which is equivalent to the AOS of the steady-state calculations. The intermediate parallelograms were obtained by fixing t̄_f = 20 and 50, respectively. One can see that if t̄_f ≥ 50, the system would have a dOI of unity, thus being dynamically operable.
Figure 11. Comparison of minimum transition times of selected single and two-reactors-in-series systems. The two-CSTRs-in-series design sharing the heat duty equally shows the quickest response.
The enhancement problem seeks the smallest ranges of inputs, referred to as the bounding box, with which the process can achieve the desired dynamic performance requirements, as defined in the dDOpS, for all y_sp in the DOS and all d in the EDS. This enhancement problem is closely related to the concept of the bounding box of inputs for steady-state operability analysis (problems P1 and P2). The calculation of the bounding box
in the dynamic case is formulated in two steps. In the first step, the smallest bounding box of
inputs is calculated for a given (y sp , d). In the second, the set of (ysp, d) that would require
the largest input bounds is identified for each input variable separately. These formulations are
presented and discussed below.
The first step, from now on referred to as P4, can be formulated as the following optimization problem:

u_*(y_sp, d) = arg min_{u(t)} J(u)   subject to the constraints in M     (P4)
where the objective function, J, typically represents the cost associated with demanding
larger ranges on the inputs and is selected as an appropriate norm of u. Here, M includes
the nonlinear dynamic model of the process, the constraints on the outputs and inputs, and the
final-time constraints. In this formulation, the min operator iterates over the possible choices
of the input trajectory, u(t), searching for the smallest input ranges that will satisfy the final-time constraints for a given (y_sp, d) and a fixed final time of t̄_f. The resulting bounding box is represented by the vector u_*.
As discussed above, the idea in the enhancement problem is to identify the ranges of inputs
for which the process is dynamically operable over the entire operating range. To achieve this
objective, a search in the DOS and EDS has to be made to identify the set of (ysp, d) that would
give the largest bounding box in P4. However, the most demanding (worst) set of (y_sp, d) might not be the same for each input. Therefore, the bounding value of each input should be calculated separately. This problem, referred to as P5, is formulated as the following optimization problem:

u*_j = max_{y_sp ∈ DOS, d ∈ EDS} u_{*j}(y_sp, d)     (29)
where u_{*j} is the jth element of the nu-dimensional bounding-box vector u_* calculated using P4. Here, the max operator searches over y_sp and d for the largest bound of input u_j among the input bounds calculated using P4 (notice that problem P4 is nested in P5). This bound is referred to as u*_j. One would need to solve one P5 problem for each input, u_j, in order to construct the bounding box of all inputs.
Note that in P4 and P5 we assume that the input bounds are of the following symmetrical form:

-u_* ≤ u(t) ≤ u_*
However, if the operating ranges are not symmetrical, one can easily modify the objective func-
tion in P4 to include both the upper and lower bounds of the input variables. In that case, one
Figure 13. Graphical representation of problems P4 and P5 defined in the enhancement formulation.
would need to replace the maximization operator in P5 with minimization in order to calculate
the bounding value of the lower bound for a particular input variable.
The enhancement problem is demonstrated graphically using a process with two inputs. Figure 13 depicts three bounding boxes, u_*(y_sp^j, d^j), for such a process. Each of these boxes corresponds to the solution of a P4 problem for a fixed value of (y_sp, d). For instance, u_*(y_sp^1, d^1) represents the bounding box of inputs required for the process to overcome (y_sp^1, d^1) within the response time t̄_f, as defined in the dDOpS. Of interest is the union of all bounding boxes for all set points, y_sp, in the DOS and disturbances, d, in the EDS. This is depicted in Figure 13 by the boldface rectangle and is the solution of problem P5 defined in Equation 29. Note that the calculation of the bounding box also reveals the set points and disturbances that require the largest input bounds.
The solution of this optimization problem can be computationally intensive, especially in the
case of multivariable nonlinear systems. However, for linear systems, the most demanding sets of (y_sp, d) lie at the vertices of the DOS and EDS. Therefore, one can enumerate these sets, solve as many P4 problems as there are vertices, and determine the bounding box by inspecting the results. This eliminates the need to solve the second problem, P5, in an
explicit manner, which simplifies the enhancement problem significantly.
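The vertex-enumeration shortcut can be sketched for a static linear model, where the input required at each vertex has the closed form u = K⁻¹(y_sp − K_d·d); the gain values below are illustrative assumptions standing in for the nested P4/P5 solves:

```python
import itertools

def required_input(y_sp, d, K_inv, k_d):
    """u = K^-1 * (y_sp - K_d*d) for a 2x2 linear steady-state model
    (K_inv is the precomputed inverse gain matrix)."""
    r = [y_sp[0] - k_d[0] * d, y_sp[1] - k_d[1] * d]
    return (K_inv[0][0] * r[0] + K_inv[0][1] * r[1],
            K_inv[1][0] * r[0] + K_inv[1][1] * r[1])

def bounding_box(dos, eds, K_inv, k_d):
    """Per-input min/max of the required input over all DOS x EDS vertices
    (valid for linear models, where the extremes occur at vertices)."""
    verts = [required_input(y, d, K_inv, k_d)
             for y in itertools.product(*dos) for d in eds]
    return [(min(v[j] for v in verts), max(v[j] for v in verts))
            for j in range(2)]

K_inv = [[1.0, 0.0], [0.0, 2.0]]   # inverse of the diagonal gain diag(1, 0.5)
box = bounding_box(dos=[(-1.0, 1.0), (-1.0, 1.0)], eds=(-0.5, 0.5),
                   K_inv=K_inv, k_d=(1.0, 0.5))
print(box)  # [(-1.5, 1.5), (-2.5, 2.5)]
```

Each vertex plays the role of one P4 solve, and the elementwise min/max over vertices replaces the explicit P5 search, exactly as described above for the linear case.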
The bounding box of inputs obtained using the above formulation represents idealistic values
of the required bounds of the inputs. In other words, these bounds would be sufficient only for
the optimal controller to cover the entire operating space. A feedback controller would require
larger input ranges in order to accomplish the same task within the desired response time, tj.
Table 1
Bounding values of the inputs calculated for different values of t̄_f. These calculations are performed without the assumption of symmetrical bounds on the inputs. (ū_j and u̲_j denote the upper and lower bounds of input u_j, respectively.)

t̄_f    y1    y2    ū1       ū2        u̲1        u̲2
10      1     1    1.7369   0.8488    -0.6834   -0.2091
       -1     1    0.3462   0.0544    -0.1034   -0.8952
        1    -1    0.1034   0.8952    -0.3462   -0.0544
       -1    -1    0.6834   0.2091    -1.7369   -0.8488
15      1     1    0.6371   0.2996    -0.0180   -0.0001
       -1     1    0.1589   0.0002    -0.0045   -0.5993
        1    -1    0.0045   0.5993    -0.1589   -0.0002
       -1    -1    0.0180   0.0001    -0.6371   -0.2996
Therefore, certain caution should be exercised when interpreting the results of this enhancement
study.
We illustrate the enhancement formulation here using a two-input, two-output linear system [64]. In this example, the enhancement calculations are carried out using the DOS shown in Figure 14. An objective function of the form J = ||ū||² + ||u̲||² is utilized in P4, which allows us to calculate the upper and lower bounds of the inputs independently of each other. In other words, we did not assume symmetrical bounds on the inputs. However, since the DOS is symmetrical with respect to the nominal operating point, we expect these calculations to result in symmetrical input bounds. The results are presented in Table 1 and graphically
displayed in Figure 14 for the case of t̄_f = 10. Note that, since we did not assume symmetry,
we had to solve P4 for all four vertices in the DOS in order to obtain both the upper and lower
bounds of the inputs. The table reveals that the most demanding set-point change is different
for each input bound (shown in boldface) and that the bounding box of inputs is symmetrical as
expected.
Figure 14. Graphical representation of the bounding boxes presented in Table 1 for t̄_f = 10.
In this chapter, we presented a brief review of an operability approach based on simple geometric concepts used to examine the steady-state and dynamic characteristics of a given plant at the design stage. The major emphasis of the technical exposition presented
in the foregoing sections focused on the operability studies that our research group has made
during the last few years. Foundational to these concepts are three operating spaces known as
the available, desired, and expected spaces. These spaces are defined for the input, output, and
disturbance variables of a process. The methodology presented addresses first the steady state
and then the dynamic operability issues.
The steady-state operability framework uses the steady-state model of the process and aims
to calculate whether the input ranges are sufficient to achieve the desired output ranges in the
presence of the expected disturbances. A steady-state operability index is thus defined. The
steady-state operability characteristics are quantified by comparing two spaces related to the
inputs or the outputs of the process. With respect to the input variables, one can compare the
available input space versus the desired input space. The latter can be calculated to be large
enough to compensate for all the expected disturbances and desired output values. A similar
comparison can be made with respect to the output variables. The calculation of operability in
this manner represents the inherent operability of the process and is independent of inventory
control structure.
Steady-state operability is a necessary but not sufficient requirement for a well-designed plant, as the dynamic characteristics should also be considered. The dynamic operability is
examined by the use of a dynamic model of the process and considers the issue of whether a
given disturbance will be rejected quickly or whether a set-point change can be implemented
within a given time interval, or both. This is addressed by solving an optimal control problem
to find the minimum time, within which the process can respond to a disturbance or move to a
new operating point with the available ranges of inputs. Such performance represents the best
possible performance of any feedback controller, and similar to the steady state case, identifies
the inherent operability characteristics of the process.
If either the steady-state or the dynamic operability is unsatisfactory, then the process design must be altered, since no control structure can improve the inherent operability. On the other hand, even if both are satisfactory, some overdesign is still needed to achieve the desired operability with a given feedback controller and to guard against unmeasured disturbances. This approach addresses both servo and regulatory issues over the entire operating space of interest. The steady-state and dynamic operability frameworks presented rely extensively on optimization tools and are therefore computationally intensive, especially when examining the dynamic characteristics of multivariable and nonlinear processes. To reduce the computational burden for high-dimensional problems, approximate yet practical solutions, such as the "bounding box" approach, have been introduced. Plant-wide issues with respect to the Tennessee Eastman process have also been considered by separating
The Integration of Process Design and Control
P. Seferlis and M.C. Georgiadis (Editors)
126 © 2004 Elsevier B.V. All rights reserved.
Chapter A5
1. INTRODUCTION
The dynamic performance of a processing plant is determined by the designs of both the
process units and the control system. The need to integrate process design and control system
design to optimise dynamic performance is well understood. While there are many tools avail-
able for control system design, the tools available for designing the dynamics of the process are
more limited.
The simplest approach to the integrated process-control design problem is to simulate the closed-loop performance. However, this approach requires a complete closed-loop design, the many design simulations can be time-consuming, and each simulation result provides only an indirect indication of the design changes needed to improve performance. Measures which identify the dynamic structure of a system provide a more sophisticated option for optimising the closed-loop design.
The controllability of the process can be used as a measure of the potential closed-loop per-
formance. Measures such as singular values (SV) [1,2], the relative gain array (RGA) [3], and
the dynamic relative gain array (DRGA) and closed-loop disturbance gain (CLDG) [4] have all
been used to optimise the controllability of a process design. The economic impact of process
controllability can be measured as a back-off from the optimal steady-state design [5], and this
can be included in a design optimisation problem.
Controllability measures, such as the RGA and CLDG, can sometimes be interpreted on a physical basis [6], but the insight they give into the dynamics of a process is limited because they consider only input-output behaviour. For process design it is useful to have a measure of dynamic structure that retains information about the internal structure of the model states. The process model states are often closely related to the physical structure of a design, and hence indicate the constraints on any particular design. A measure of dynamic structure that retains the model states presents information to the designer with physical significance, and this form is more suitable for developing insight into the structure and constraints of a process design.
This chapter develops and applies a new spectral resolution method that permits insight into
the dynamic structure of the process. It can be used for the analysis and retrofit of existing
designs and also as a means to help develop new process designs.
2. SPECTRAL ASSOCIATION
Spectral association is a method of identifying the source of system dynamics through asso-
ciation of eigenvalues with states.
Previous work using eigenvalue tracking (ET) as a method of spectral association has been
successfully applied for the purposes of dynamic analysis and model reduction. ET uses homo-
topy methods that transform a system with known eigenvalue-to-state association into the final
system and track the eigenvalue associations as the system is transformed.
The behaviour of the eigenvalues through the system transformation can be used to judge the
validity of final eigenvalue-to-state associations, but only as an indicator of failure. No method
of correction is available once incorrect associations are indicated. In addition, eigenvalue
tracking assumes one-to-one associations between states and eigenvalues.
A quantitative measure of the levels of association between all states and eigenvalues of
a system would overcome the problems of incorrect associations and multiple associations.
The development of a measure of levels of association, called the Unit Perturbation Spectral
Resolution, follows.
A process plant is composed of units connected by mass, energy, and information streams. The units are usually sparsely connected, and this results in
weak dynamic coupling between units, and often strong coupling within a unit. If the coupling
between units is weak then eigenvalues can be associated with a particular unit, and units will
form the basis of group-based associations. If the number of states within a unit is small there
will be a near one-to-one correspondence between states and eigenvalues.
Sometimes, however, coupling between units may be strong enough to require group associ-
ations that cross unit boundaries. It is especially important to determine when strong dynamic
coupling crosses unit boundaries, because when this occurs a new range of dynamic behaviour
is introduced. The connection of different units can result in dynamics which are more complex
and difficult to control than might be obvious from an analysis of the unconnected units.
Eigenvalue tracking (ET) can offer a crude graphical measure of the applicability of a particular eigenvalue-to-state association result. It has been stated that "eigenvalue traces which display complicated eigenvalue bifurcation behaviour are associated with highly coupled states" [7]. Furthermore, strong coupling between states is indicated by any complicated eigenvalue trace behaviour during the ET procedure [8].
This principle provides a method of rejecting eigenvalue-to-state associations determined using ET when strong coupling between states invalidates the assumed one-to-one correspondence. What it does not provide is a method of establishing what the multiple eigenvalue-to-state associations are when sufficiently strong coupling is present. The next section considers aspects of the general spectral resolution of nonlinear or linear systems and the information contained in that representation.
Any dynamic response, linear or nonlinear, can be analysed by taking an instantaneous model linearisation and calculating its spectral resolution. This provides a measure of the contribution of each eigenvalue to the dynamic response at that time. For a linear system the model linearisation will not change, and any changes in the instantaneous spectral resolution are a result of changes in the states. For a nonlinear system, both the model
Consider the linear system:

$$\dot{x}(t) = A\,x(t), \qquad x(0) = x^0 \qquad (1)$$

If the eigenvectors of the matrix A are linearly independent, the solution has the form:

$$x(t) = V \exp(\Lambda t)\, V^{-1} x^0, \qquad \Lambda = V^{-1} A V = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n) \qquad (2)$$

$$\exp(\Lambda t) = \mathrm{diag}(e^{\lambda_1 t}, e^{\lambda_2 t}, \ldots, e^{\lambda_n t}) \qquad (3)$$

The structure within the solution described by Eq. (2) and Eq. (3) can be expressed as:

$$x_i(t) = \sum_{j=1}^{n} \sum_{k=1}^{n} S_{ij}^{k}\, e^{\lambda_k t}\, x_j^0 \qquad (4)$$

in which the hyper-matrix S, the general spectral resolution (GSR), is given by:

$$S_{ij}^{k} = V_{ik}\,(V^{-1})_{kj} \qquad (5)$$
This structure of the GSR is illustrated in Fig. 2. The dynamic response of a system is described in terms of a source perturbation $x_j^0$, a dynamic pathway $\lambda_k$, and a response $x_i(t)$:

$$x_j^0 \ \rightarrow\ \lambda_k \ \rightarrow\ x_i(t) \qquad (6)$$

The standard spectral resolution discards the information describing the source of a dynamic response:

$$x_i(t) = \sum_{k=1}^{n} Z_i^{k}\, e^{\lambda_k t} \qquad (7)$$
Having established the general spectral resolution we can examine specific pathways which
can provide complementary insights into the internal structure of the system. In particular we
develop the Unit Perturbation Spectral Resolution (UPSR) as a new method of eigenvalue-to-
state association.
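As a numerical sanity check on the GSR expansion, the sketch below (plain Python; the 2x2 matrix and all values are hypothetical, chosen only for illustration) builds $S_{ij}^{k} = V_{ik}(V^{-1})_{kj}$ and verifies that the double-sum reconstruction of $x_i(t)$ agrees with a direct Euler integration of $\dot{x} = Ax$:

```python
import math

# Hypothetical 2x2 system (illustrative values, not from the chapter):
# a slow state weakly coupled to a fast state.
A = [[-1.0, 1.0],
     [1.0, -10.0]]

# Eigendecomposition of a 2x2 matrix by hand (quadratic formula).
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr * tr - 4.0 * det)          # real for this example
lam = [(tr + disc) / 2.0, (tr - disc) / 2.0]

# Columns of V are eigenvectors: (A - lam I)v = 0 => v2 = (lam - A11)/A12.
V = [[1.0, 1.0],
     [(lam[0] - A[0][0]) / A[0][1], (lam[1] - A[0][0]) / A[0][1]]]
dV = V[0][0] * V[1][1] - V[0][1] * V[1][0]
Vinv = [[V[1][1] / dV, -V[0][1] / dV],
        [-V[1][0] / dV, V[0][0] / dV]]

# General spectral resolution: S[i][j][k] = V_ik (V^-1)_kj.
S = [[[V[i][k] * Vinv[k][j] for k in range(2)] for j in range(2)]
     for i in range(2)]

def x_spectral(x0, t):
    """x_i(t) = sum_j sum_k S_ij^k exp(lam_k t) x_j^0 (the GSR expansion)."""
    return [sum(S[i][j][k] * math.exp(lam[k] * t) * x0[j]
                for j in range(2) for k in range(2)) for i in range(2)]

def x_euler(x0, t, n=200000):
    """Reference solution: explicit Euler integration of xdot = A x."""
    h, x = t / n, list(x0)
    for _ in range(n):
        x = [x[0] + h * (A[0][0] * x[0] + A[0][1] * x[1]),
             x[1] + h * (A[1][0] * x[0] + A[1][1] * x[1])]
    return x

xs = x_spectral([0.0, 1.0], 0.5)
xe = x_euler([0.0, 1.0], 0.5)
print(xs, xe)   # the two solutions agree to integration accuracy
```

The spectral reconstruction and the step-by-step integration coincide, which is exactly the content of the source-pathway-response decomposition.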
Consider a unit perturbation in a single state, say the first:

$$x^0 = \begin{bmatrix} 1 & 0 & 0 & \cdots \end{bmatrix}^T \ \Rightarrow\ x_1(t) \qquad (8)$$

The response of each state to such a unit perturbation in itself is described by a diagonal slice through the general spectral resolution matrix S:

$$x_i(t) = \sum_{k=1}^{n} S_{ii}^{k}\, e^{\lambda_k t} \qquad (9)$$

Calculation of the UPSR matrix P follows readily from the general spectral resolution S:

$$P_{ik} = S_{ii}^{k} = V_{ik}\,(V^{-1})_{ki} \qquad (10)$$

Or in matrix notation,

$$P = V \otimes (V^{-1})^{T} \qquad (11)$$

where $\otimes$ denotes the element-by-element product.
The perturbation in the jth state, $x_j^0$, passes through the dynamic pathway of the kth eigenvalue, $\lambda_k$, to contribute to the dynamic response of the state $x_i(t)$. This combination of perturbed state, eigenvalue pathway, and dynamically responding state can be arranged in a three-dimensional hyper-matrix, as was seen in Fig. 2. Each element of the hyper-matrix $S(i,j,k)$ represents a combination of source ($x_j^0$), pathway ($\lambda_k$), and response ($x_i(t)$), and its value can be calculated from Eq. (5).
The conventional spectral resolution Z and the UPSR are both readily calculated from S. The spectral resolution Z ignores the sources of the different dynamic responses, and provides only a measure of the effect of each dynamic pathway (eigenvalue) on the system response. This measure can be represented by:

$$\lambda_k \ \rightarrow\ x_i(t) \qquad (12)$$
The lumping together of all the different perturbations removes any discrimination between the different sources of dynamic response, as can be seen in the formula for the spectral resolution matrix Z:

$$Z_i^{k} = \sum_{j=1}^{n} S_{ij}^{k}\, x_j^0 \qquad (13)$$

The spectral resolution matrix removes one of the dimensions of the hyper-matrix S by summing along the j-axis.
The UPSR considers only the response of a state to a unit perturbation in itself. This can be represented by:

$$P_i^{k} = S_{ii}^{k} \qquad (14)$$

There is no need to include any $x_j^0$ terms in the calculation of the UPSR because the UPSR uses unit perturbations.
The UPSR has been developed as a tool for spectral association. The following sections set
out some of the important properties of the UPSR.
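The UPSR calculation itself is only a few lines. A minimal sketch (plain Python; the 2x2 fast/slow system is hypothetical, its values are illustrative and not taken from the chapter):

```python
import math

# Hypothetical fast/slow system (illustrative, not from the chapter).
A = [[-1.0, 1.0],
     [1.0, -10.0]]

# 2x2 eigendecomposition by hand.
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr * tr - 4.0 * det)
lam = [(tr + disc) / 2.0, (tr - disc) / 2.0]   # [slow, fast]

V = [[1.0, 1.0],
     [(lam[0] - A[0][0]) / A[0][1], (lam[1] - A[0][0]) / A[0][1]]]
dV = V[0][0] * V[1][1] - V[0][1] * V[1][0]
Vinv = [[V[1][1] / dV, -V[0][1] / dV],
        [-V[1][0] / dV, V[0][0] / dV]]

# UPSR: element-by-element product of V and (V^-1)^T,
# i.e. P_ik = V_ik (V^-1)_ki.
P = [[V[i][k] * Vinv[k][i] for k in range(2)] for i in range(2)]

row_sums = [sum(P[i]) for i in range(2)]
col_sums = [P[0][k] + P[1][k] for k in range(2)]
print(P, row_sums, col_sums)
```

For this weakly coupled system each state associates almost one-to-one with one eigenvalue (diagonal entries near one), the small off-diagonal entries quantify the weak cross-association, and the row and column sums are exactly one.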
feedback connection between those two states. If the UPSR is rearranged in a block-diagonal structure then it is possible to read state-to-state interactions directly. That is, because the block-diagonal rearrangement makes it possible to assume an association between state $x_j$ and eigenvalue $\lambda_j$, a sufficiently large $P_{ij}$ indicates that the dynamics of state $x_j$ are present in the natural response of state $x_i$. This indicates an interaction from state $x_j$ to state $x_i$. The structure of the non-zeroes in the UPSR matrix indicates a maximum possible level of feedback interaction within a process. However, once the UPSR matrix is actually calculated, many of the structural feedback loops prove to be very weak and can be regarded as zero. This type of structure is very common in process systems.
The row and column sums of the UPSR are each equal to one:

$$r_i = \sum_{k=1}^{n} P_{ik} = (V V^{-1})_{ii} = 1, \qquad c_j = \sum_{i=1}^{n} P_{ij} = (V^{-1} V)_{jj} = 1$$

where $r_i$ is the sum of the elements of row i, and $c_j$ is the sum of the elements of column j. The row sums are equal to the diagonals of $V V^{-1} = I$, while the column sums are equal to the diagonals of $V^{-1} V = I$.
This property of the row and column sums establishes a key property for the interpretation
of the UPSR. If a state is associated with one and only one eigenvalue then the element of the
UPSR measuring the strength of association between that state and that eigenvalue will be equal
to one, while all other elements in the row and column of the UPSR corresponding to the state
and eigenvalue of interest will be equal to zero. Conversely, any states and eigenvalues which
have no association will have UPSR elements equal to zero.
The row and column sums show the UPSR can be viewed as a measure of the distribution of
the dynamic modes, or eigenvalues, between system states. Alternatively, the reverse view can
be taken, and the UPSR can be used to infer the interaction between states that give rise to the
system eigenvalues.
The UPSR elements can have real parts less than zero or greater than one, and can also be
complex. However, the fact that the row and column sums are always equal to one provides
a basis for comparison. In this work a comparison of the magnitude of the real part of the
UPSR elements with one (1.0) has been found to be a successful method for eigenvalue-to-state
association. Elements near zero indicate a low strength of association, while elements with a
magnitude significantly larger than zero indicate a high strength of association.
An element's magnitude is judged significant by comparison with one. In this work a value of 0.10 is typically regarded as significant, and is interpreted as indicating a 10% strength of association.
Which, when compared with the formula for the UPSR, gives:

$$\frac{\partial \lambda_k}{\partial A_{ii}} = V_{ik}\,(V^{-1})_{ki} = P_{ik} \qquad (18)$$

That is, the UPSR measures the sensitivity of the eigenvalues of the A-matrix to changes in the diagonal elements of that matrix.
This relationship shows that there is a connection between the UPSR and the perturbation method originally used by Robertson [7] to determine eigenvalue-to-state associations following a complex-to-real bifurcation in the eigenvalue tracking (ET) algorithm. Eigenvalue associations following a complex-to-real bifurcation were determined by maximising the sensitivity of the associated eigenvalues to changes in the diagonal elements of the A-matrix corresponding to the states involved.
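The eigenvalue-sensitivity interpretation of the UPSR can be checked numerically: perturbing a diagonal element $A_{ii}$ by a small $\varepsilon$ shifts eigenvalue $\lambda_k$ by approximately $\varepsilon P_{ik}$. A sketch (plain Python; the 2x2 matrix is hypothetical and its values illustrative):

```python
import math

def eig2(A):
    """Eigenvalues of a 2x2 matrix with a real spectrum (quadratic formula)."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = math.sqrt(tr * tr - 4.0 * det)
    return [(tr + disc) / 2.0, (tr - disc) / 2.0]

A = [[-1.0, 1.0],
     [1.0, -10.0]]          # hypothetical fast/slow system
lam = eig2(A)

# UPSR element P_11: association of state 1 with eigenvalue 1.
# Eigenvectors: v2 = (lam - A11)/A12, with A11 = -1 and A12 = 1 here.
V1 = [[1.0, 1.0],
      [lam[0] + 1.0, lam[1] + 1.0]]
dV = V1[0][0] * V1[1][1] - V1[0][1] * V1[1][0]
P11 = V1[0][0] * (V1[1][1] / dV)        # V_11 * (V^-1)_11

# Finite-difference sensitivity of lambda_1 to the diagonal element A_11.
eps = 1e-6
Ap = [[A[0][0] + eps, A[0][1]], [A[1][0], A[1][1]]]
sens = (eig2(Ap)[0] - lam[0]) / eps
print(P11, sens)    # the two numbers agree closely
```

The finite-difference sensitivity reproduces the UPSR element, which is the content of the relation between participation in an eigenvalue and sensitivity to the corresponding diagonal element.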
For two bifurcating eigenvalues $\lambda_1$ and $\lambda_2$, the original routine numerically approximated the two sensitivities

$$\frac{\partial \lambda_1}{\partial A_{11}} \quad \text{and} \quad \frac{\partial \lambda_2}{\partial A_{11}}$$

State $x_1$ would then be associated with the eigenvalue showing the greater sensitivity to the element $A_{11}$. By the sensitivity relation above, this is equivalent to comparing the two UPSR elements:

$$P_{11} \quad \text{and} \quad P_{12} \qquad (19)$$
On the other side of an eigenvalue bifurcation, where the eigenvalues are still real, a high
level of interaction between states results in large negative and positive entries in the UPSR.
There is a qualitative change in the nature of a system's dynamics when eigenvalues bifurcate.
Oscillation starts and there is a discontinuity in the UPSR elements because the eigenvector
matrix V is singular at eigenvalue bifurcation points.
$$P \approx \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad \Lambda = \begin{bmatrix} -10.11 \\ -0.89 \end{bmatrix} \qquad (22)$$
The absolute deviations of the two eigenvalues from their uncoupled values (-10 and -1) are equal in magnitude, with a value of 0.11. This arises because the trace of the A-matrix is equal to the sum of the eigenvalues. The relative deviations of the two eigenvalues, however, are quite different: the fast eigenvalue changes by about 1%, while the slow eigenvalue changes by about 10%. This is an important result, because it shows how a fast state can determine the magnitude of the eigenvalue associated with a slow state. A slow state, however, has a much smaller effect on the eigenvalue of a fast state, because of the difference between absolute and relative changes.
We now consider the effect that a slow state can have on the dynamic response of a fast state. The slow state is given a unit initial perturbation:

$$x^0 = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \qquad (23)$$

The spectral resolution of the system is:

$$Z = \begin{bmatrix} -0.11 & 0.11 \\ 0.00 & 1.00 \end{bmatrix}, \qquad \Lambda = \begin{bmatrix} -10.11 \\ -0.89 \end{bmatrix} \qquad (24)$$

The uncoupled system has a spectral resolution of:

$$Z = \begin{bmatrix} 0.00 & 0.00 \\ 0.00 & 1.00 \end{bmatrix}, \qquad \Lambda = \begin{bmatrix} -10 \\ -1 \end{bmatrix} \qquad (25)$$
• The eigenvalues associated with slow states can be significantly changed by coupling with fast states. The effect of the slow states on fast eigenvalues is much smaller. This can be represented as a fast-eigenvalue-to-slow-eigenvalue effect: $\lambda^{fast} \rightarrow \lambda^{slow}$.
• The dynamic response of a fast state can be dramatically affected by coupling with a slow state. The responses of slow states are not affected by fast states. This result arises from fast dynamic modes decaying rapidly to steady state, leaving slow modes as the only significant response curves.
• Entries much larger than one indicate that a system is near a real-to-complex eigenvalue bifurcation, and dynamic interaction between the two associated eigenvalues will exist (see section 4.3).
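The fast/slow observations above can be reproduced numerically. The sketch below (plain Python; a hypothetical 2x2 system with uncoupled eigenvalues of -10 and -1, chosen in the spirit of the example rather than taken from the chapter's data) computes the spectral resolution $Z_i^k = \sum_j S_{ij}^k x_j^0$ for a unit perturbation of the slow state:

```python
import math

# Hypothetical system: fast state x1 (uncoupled eigenvalue -10) coupled
# to slow state x2 (uncoupled eigenvalue -1); values are illustrative.
A = [[-10.0, 1.0],
     [1.0, -1.0]]

tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr * tr - 4.0 * det)
lam = [(tr + disc) / 2.0, (tr - disc) / 2.0]   # [slow ~ -0.89, fast ~ -10.11]

# Eigenvectors: v2 = (lam - A11)/A12.
V = [[1.0, 1.0],
     [(lam[0] - A[0][0]) / A[0][1], (lam[1] - A[0][0]) / A[0][1]]]
dV = V[0][0] * V[1][1] - V[0][1] * V[1][0]
Vinv = [[V[1][1] / dV, -V[0][1] / dV],
        [-V[1][0] / dV, V[0][0] / dV]]

x0 = [0.0, 1.0]          # unit perturbation of the slow state only

# Spectral resolution: Z_i^k = V_ik * (V^-1 x0)_k.
Z = [[V[i][k] * sum(Vinv[k][j] * x0[j] for j in range(2))
      for k in range(2)] for i in range(2)]
print(Z)
# The fast state's response carries a substantial slow-mode component,
# while the slow state's response is essentially unaffected by the fast mode.
```

Each row of Z sums to the corresponding initial condition, and the slow mode dominates the long-term response of both states, illustrating the asymmetry listed above.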
The key question that has not been addressed is how to compare different values within the UPSR matrix, and how then to determine a suitable cut-off value $P_{ij}^{cut}$ for determining the UPSR structure, as discussed in section 4.1.
A systematic method comparing UPSR matrix elements has not been developed, and this
remains as an important future direction for theoretical development of the UPSR. Instead some
simple, empirical rules have been used for analysing the UPSR. The following two ideas are
necessary for the UPSR analysis:
• An entry in the UPSR that is of comparable magnitude to the other entries in the associ-
ated rows and columns must be regarded as significant.
• The row and column sums of one allow entries to be considered as measuring a fraction of the contribution of each state to each eigenvalue, and of each eigenvalue to each state. For example, an entry of 0.5 can be read as a 50% contribution.
The first point is straightforward, but the second has some important qualifications. Although row and column sums are one, entries are not limited to the range zero to one. This interpretation of the UPSR as a measure of the distribution of eigenvalue-to-state associations, while complicated by entries outside the range zero to one, nevertheless remains the single most important interpretation of the UPSR.
As an example of how these ideas are used to determine the structure of the UPSR, consider the rules developed for determining the UPSR block structure in design problems, given by the following equation,

$$P_{ij}^{block} = \begin{cases} 0 & \text{if } |\mathrm{Re}(P_{ij})| < P_{ij}^{cut} \\ 1 & \text{if } |\mathrm{Re}(P_{ij})| \ge P_{ij}^{cut} \end{cases} \qquad (26)$$

where

$$P_{ij}^{cut} = 0.20\, \min\!\big(\max\big(\mathrm{Re}(P_{i\cdot}),\, \mathrm{Re}(P_{\cdot j})\big),\, 1\big) \qquad (\Rightarrow\ 0 \le P_{ij}^{cut} \le 0.20)$$

An entry of one (1) in the matrix $P^{block}$ indicates a significant eigenvalue-to-state association within the UPSR matrix P.
An arbitrary level of 0.20 has been chosen, corresponding to a 20% contribution to eigenvalue-to-state associations. A cut-off $P_{ij}^{cut}$ is then calculated for each element of the UPSR. The cut-off will lie between 0 and 0.20. Any cut-off less than 0.20 results from the maximum element in a row or column being less than one. This adjustment is made to satisfy the first condition of comparing similar-magnitude elements. The upper limit placed on the cut-off allows for the presence of any very large UPSR elements.
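The cut-off rule above can be sketched in a few lines (plain Python; the 3x3 matrix of real parts of UPSR elements is hypothetical, chosen only to exercise the rule, and absolute values are used when scanning rows and columns):

```python
# Hypothetical real parts of a 3x3 UPSR matrix (illustrative values only;
# each row sums to one, as a real UPSR row would).
ReP = [[0.95, 0.04, 0.01],
       [0.30, 0.60, 0.10],
       [0.02, 0.08, 0.90]]
n = len(ReP)

def cutoff(i, j):
    """P_ij^cut = 0.20 * min(largest entry in row i or column j, 1)."""
    row_max = max(abs(v) for v in ReP[i])
    col_max = max(abs(ReP[r][j]) for r in range(n))
    return 0.20 * min(max(row_max, col_max), 1.0)

# P^block: an entry of 1 marks a significant eigenvalue-to-state association.
P_block = [[1 if abs(ReP[i][j]) >= cutoff(i, j) else 0
            for j in range(n)] for i in range(n)]
for row in P_block:
    print(row)
```

Note that the result need not be symmetric: here state 2 carries a significant component of the first eigenvalue's dynamics, but not the other way around, mirroring the one-way interactions discussed earlier.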
This example is used for a specific UPSR analysis for the design problem, and takes a par-
ticular view of the UPSR structure for this purpose [8]. However, the fundamental method of
determining significant eigenvalue-to-state associations within the UPSR can be applied to any
system.
5. UPSR LIMITATIONS
The UPSR can reveal no more dynamic structure than is contained in the state coefficient matrix A. An additional, inescapable consequence of using the eigenvalue decomposition is that only feedback interactions between states are measured. This can be represented as:

$$x \leftrightarrow x$$
The structure of the inputs and outputs of a system, u and y, is not included, because the matrices B, C, and D of the full input-output model are excluded from the calculation of the UPSR. A more complete dynamic measure would be based on the entire dynamic structure of a system, which can be represented by:

$$u \ \rightarrow\ x \leftrightarrow x \ \rightarrow\ y$$
In this respect the UPSR measures the dynamics of a system's states, rather than a system's
input-output behaviour. This focuses attention on the dynamics within a system, especially if
the states have a physical significance, but neglects important characteristics of the input-output
behaviour.
Some input-output behaviour can be included in the UPSR by applying it to closed-loop
systems. The coefficient matrix A of closed-loop systems will include some of the input-output
structure from the open-loop B, C, and D matrices.
However, even a closed-loop system does not incorporate knowledge of the structure of dis-
turbances into the UPSR.
A supercritical fluid extraction (SFE) process studied by Samyudia [9] was used as a case
study. The SFE process has a high degree of multivariable coupling and a range of different
response times within the different units of the process. The coupling and variable response
times are factors that complicate the control system design for the SFE process, and hence
would be targets of any redesign with the goal of improving the process' dynamics. The process
recovers isopropyl alcohol (iPA) from an aqueous stream using carbon dioxide (CO2) as a
supercritical solvent, as shown in Fig. 5.
$$\frac{dx^p}{dt} = A x^p + B u^p, \qquad y^p = C x^p \qquad (29)$$
Table 1
SFE operating point, steady-state values and variable descriptions.
Variable | Description | Value
process. The methodology measures the interactions between multiple units at a specific operating point, allowing the best multi-unit decomposition to be achieved for decentralised control design [11,12]. The gap $\beta$ measures the difference between a shaped design (decentralised) model $G_d W(s)$ and the shaped full (centralised) model $G W(s)$, and can be compared with the maximum stability margin $b_{max}$ for the design model $G_d$. This comparison was made for various control structures (each of which implies a specific design model $G_d(s)$), and the best control structure was selected based on the stability requirement $\beta < b_{max}$ and on minimising the gap $\beta$. It is necessary to specify the required control performance in order to construct the shaped plant models. These shaped models approximate the closed-loop performance of the plant. Thus the gap is a measure of the difference between the performance of centralised and decentralised control structures. The maximum stability margin $b_{max}$ can be computed from knowledge of the design model $G_d$ and the decentralised controller matrix $K_d$ [11].
Hence, the "best" plant decomposition can be determined by examining $\beta$ and $b_{max}$ for every design model in the set of alternative plant decompositions.
Four of the control structures considered satisfied the stability criterion, and the results are shown in Table 2. The sensitivity of these results to the performance specifications W(s) was also tested.
The important results of this analysis of the SFE dynamics are:
• The coupling between $x_1$ and $x_5$ is very significant, and these variables must be controlled together. (The four other control structures that were considered did not implement this control strategy, and all failed the stability criterion $\beta < b_{max}$.)
• The coupling between the pair $\{x_1, x_5\}$ and $x_8$ is significant, although not as strong as that between $x_1$ and $x_5$.
• The state $x_{10}$ is not coupled strongly with $x_1$, $x_5$ or $x_8$ for the required performance specifications.
• A tight specification on the control of $x_8$ provided very bad, and possibly unstable, control performance for structures DAU3, DAU5 and DAU6. Tight control of $x_8$ is only possible if the interaction $x_8 \rightarrow x_5$ is considered. DAU1 allows design of a controller which can compensate for this interaction.

Table 2
Analysis results using the gap $\beta$ and the maximum stability margin $b_{max}$.
Control structure | Controlled and manipulated variable pairings | $\beta$ | $b_{max}$
The UPSR columns have been ordered so that the eigenvalue-to-state group associations can be read off as block-diagonal. For the UPSR matrix in Eq. (30) two groupings are possible:
• $\{x_1, x_2, x_3, x_4\}$, $\{x_5, x_6\}$, $\{x_7, x_8\}$, $\{x_9\}$, $\{x_{10}\}$
• $\{x_1, x_2, x_3, x_4, x_5, x_6\}$, $\{x_7, x_8\}$, $\{x_9\}$, $\{x_{10}\}$
These two groupings differ in the significance they place on the elements of P(5:6, 1:4). The diagonally opposite elements P(1:4, 5:6) are much smaller. This asymmetry in the UPSR matrix shows the dynamics of states 5 and 6 are affected by the dynamics of states 1 to 4, but not the reverse. This is because the slower states 1 to 4 affect the fast states 5 and 6, while any effect of the fast states on states 1 to 4 quickly decays and becomes insignificant, as discussed in section 4.
The second grouping of states shows that coupling between states can occur across process unit boundaries. In this case the extractor and the top two trays of the stripper contain a group of six closely coupled states, while the bottom two trays of the stripper contain a group of two closely coupled states. This grouping across process units also agrees partially with the results of Samyudia, since it includes an interaction between $x_1$ and $x_5$. However, the UPSR results show very little interaction between the states $\{x_1, x_5\}$ and $x_8$, which is not in agreement with Samyudia's results. The source of this difference is input coupling in the state-input matrix B, which the UPSR method does not consider.
State $x_9$ forms no feedback loops with any other states, while state $x_{10}$ forms only very weak feedback loops with other states. This is reflected in the very strong one-to-one associations both these states show with their eigenvalues, and also by the negligible difference between the eigenvalues $\lambda_9$ and $\lambda_{10}$ and the diagonal elements of A, $A_{9,9}$ and $A_{10,10}$.
State $x_8$ is associated with slower eigenvalues ($\lambda_7$ and $\lambda_8$), and this agrees with Samyudia's result that a tighter, or faster, control specification for $x_8$ leads to poor control performance, since the speed of a variable imposes limits on how tightly it can be controlled.
7. SUMMARY
With the addition of the UPSR, there are now two different types of spectral resolution which
can be used to analyse system dynamics.
These two types of spectral resolution can be used to analyse the dynamic response of a system. The UPSR locates the sources of the different dynamic modes, and eigenvalue-to-state association identifies each system eigenvalue with a particular part of the system. The conventional SR can then be used to determine which eigenvalues dominate the dynamic responses of the system states, which in turn, through the eigenvalue-to-state associations, indicates which states' dynamics dominate the system response. If the UPSR and SR do not supply sufficient insight, the GSR can be used to examine the elements of a dynamic response more closely.
8. CONCLUSIONS
The UPSR provides a tool for measuring the strength of association between each eigenvalue and each state. Determining eigenvalue-to-state associations with the UPSR is significantly simpler, and the computational requirements are much lower, than with the eigenvalue tracking method. In addition, the UPSR provides quantitative information on the interaction between states and on the structure of group-based eigenvalue-to-state associations.
The UPSR is a measure of the interaction between states only, although it can be applied to
systems with different degrees of connection between components — for example open-loop
and closed-loop systems. It is calculated using only the state coefficient matrix A, and makes
no use of the input-output matrices B, C, and D.
The UPSR can be used to locate the sources of a system's dynamic modes. For design, this
allows the localisation of the effects of design parameters, which can reduce and better define
the goals and possibilities for the dynamic design problem.
Further analysis of the spectral resolution revealed a source-pathway-response structure. The
UPSR was shown to be a particular subset of this structure.
Used in combination, all the spectral resolution measures provide a valuable tool for the
analysis of process dynamics.
REFERENCES
[1] A. Palazoglu, B. Manousiouthakis and Y. Arkun, Ind. Eng. Chem. Process Des. Dev. 24 (1985) 802.
[2] A. Palazoglu and Y. Arkun, Comput. Chem. Eng. 10 (1986) 567.
[3] M. Luyben and C. Floudas, Comput. Chem. Eng. 10 (1994) 971.
[4] S. Skogestad, M. Hovd and P. Lundstrom, in: PSE '91 4th International Symposium on Process Systems Engineering, Montebello, Quebec, Canada, 1991, pp. III.3.1-III.3.15.
[5] L. Narraway, J. Perkins and G. Barton, J. Proc. Cont. 1 (1991) 243.
[6] M. Morari and J. Perkins, in: FOCAPD-1994, Snowmass, Colorado, 1994, pp. 105-114.
[7] G. Robertson, Mathematical modelling of startup and shutdown operations of process plants, Ph.D. thesis, The University of Queensland, 1992.
[8] A. M. Walsh, Analysis and design of process dynamics using spectral methods, Ph.D. thesis, The University of Queensland, 1999.
[9] Y. Samyudia, Control of multi unit processing plants, Ph.D. thesis, The University of Queensland, 1995.
[10] T. Georgiou and M. Smith, IEEE Transactions on Automatic Control 35 (1990) 673.
[11] Y. Samyudia, P. Lee and I. Cameron, Chem. Eng. Sci. 50 (1995) 1695.
[12] Y. Samyudia, P. Lee, I. Cameron and M. Green, Chem. Eng. Sci. 51 (1996) 769.
Chapter A6
1. INTRODUCTION
Traditionally, process design and control system design are carried out sequentially. The premise underlying this sequential approach is that the decisions made in the process design phase do not limit the control design. However, it is generally known that incongruent designs can occur quite easily. Two different classes of approaches that consider the interaction between design and control can be discriminated [1]: (i) anticipating sequential approaches, where process design and control system design are still carried out sequentially, but the controllability properties of the process are taken into account during the process design phase in anticipation of the control design; and (ii) simultaneous approaches, where the process design and the control system design are carried out at the same time.
A drawback of almost all approaches, both anticipating sequential and simultaneous, is that controllability is analyzed only once an initial design has been made; controllability issues are not incorporated in the so-called synthesis phase. The objective of this work is to show how one specific controllability aspect, disturbance sensitivity, can be included in the synthesis phase. To do so, one needs insight into how the internal structure of a non-equilibrium process affects the controllability. The inner working of a process is determined by a set of interacting rate processes associated with ongoing physico-chemical phenomena. The framework of irreversible thermodynamics is the fundamental theory describing these rate processes in terms of rates, driving forces and geometric parameters such as volumes and surfaces. It is therefore an appropriate fundamental starting point for dealing with controllability.
Recently some work has been done on the connection between process control and irre-
versible thermodynamics. Ydstie and coworkers have focused on the stability of process sys-
tems [2,3]. They use a storage function, related to the exergy function for their analysis. An
alternative method uses thermodynamics to select control variables for partial control strategies
[4]. However we are not aware of any work in the field of process design for controllability
using irreversible thermodynamics.
In this work aspects of three different scientific fields will be combined: process design,
non-equilibrium thermodynamics and systems theory. The resulting theory is called Thermo-
dynamic Controllability Assessment, TCA. Figure 1 shows how these are combined. The pre-
sentation of the TCA in this Chapter follows this structure. In section 2 the modelling frame-
work based on irreversible thermodynamics will be presented. Section 3 will show how this
framework can be used for process design, enabling a formal link between process design and
non-equilibrium thermodynamics. Section 4 elaborates on the work of Ydstie and coworkers on
passivity of process systems. It will be shown how the passivity is related to non-equilibrium
thermodynamics. Their stability results will be extended to disturbance sensitivity. It will then
be shown how these results can be used during the synthesis phase of the process design pro-
cedure to end up with processes that have improved controllability characteristics. Throughout
the chapter two examples will be used to illustrate the ideas: a heat exchanger and a distillation
column.
The description will first outline the fundamental rate phenomena and driving forces at the
microscopic, continuum scale. Then a spatial integration to a macroscopic scale is carried out
to explicitly introduce geometric design variables such as volumes and surfaces, enabling the
behaviour of the process to be defined. The description focuses on equations of change for inventories and
on the entropy production rate.
Microscopic description
A fundamental description of process systems will be derived based on thermodynamics,
using inventories [5]. An inventory, V, for a system S, is a nonnegative function of the state, such
that if s_1 is the state of subsystem S_1 and s_2 is the state of subsystem S_2, the inventory of the
combined system S is given by:

V(s) = V(s_1) + V(s_2). (1)
Eq. (1) defines inventories to be extensive variables. Typical inventories are energy, exergy, total
mass and molar amounts. For the densities of the inventories, v, the general (micro)balance for
a (homogeneous) element with a fixed position in space is given by:

∂v/∂t = p − ∇·J_cv − ∇·J_cd, (3)

where p is the volumetric production rate, J_cv the convective flux and J_cd the conductive flux.
The conductive fluxes are linearly related to the driving forces, X, through the matrix of
phenomenological coefficients, L:

J = L X, (4)

which, by the Onsager reciprocal relations, is symmetric:

L = L^T. (5)
Table 1
Physico-chemical processes and their flows and forces.

Process                 Flow                     Force
Heat conduction         heat flux, J_q           ∇(1/T)
Mass diffusion          mass flux, J_i           −∇(μ_i/T)
Chemical reaction       reaction rate, r         A/T
Electrical conduction   current density, I       −(1/T)∇φ

with T temperature, μ chemical potential, A chemical affinity and φ electrical potential.
The cross-effects, described by the off-diagonal elements, are especially important for systems
in which multi-component mass transfer takes place. Cross effects between scalar and vectorial
effects are impossible. The entropy production in the system is given as a function of the fluxes
and driving forces:

P_s = ∫∫∫_V Σ_i J_i X_i dV. (6)
Macroscopic description
Up till now only a microscopic system description was considered. It will now be shown how
this can be extended to a macroscopic description in order to be directly applicable to typical
process systems. For clarity, macroscopic driving forces will be indicated by X.
Eq. 3 is integrated over the system volume, V, which is time invariant:
d/dt ∫∫∫_V v dV = ∫∫∫_V p dV + ∫∫_Acv J_cv dA + ∫∫_Acd J_cd dA, (8)
where A is the area of the surface surrounding the system. The parts of this area through which
convective and conductive fluxes flow are called A_cv and A_cd respectively. The total
hold-up, V, is defined as:

V = ∫∫∫_V v dV, (9)

and the total production, P, as:

P = ∫∫∫_V p dV. (10)
When it is assumed that the fluxes are uniform over the respective surfaces, equation (8) can be
reduced to:

dV/dt = P + Σ_cv J_cv A_cv + Σ_cd J_cd A_cd. (11)

The summation is carried out over all conductive and convective surfaces.
A net conductive (vectorial) flow to the system is now given by:

F_cd = J_cd A_cd. (12)
Besides these vectorial flows, scalar flows also contribute to the entropy production. For scalar
flows A_cd should be interpreted as a volume. The most relevant scalar flow is the production of
a chemical species. The flow rate is the reaction rate, driven by the chemical affinity.
At this point it is pertinent to note that with the introduction of geometric variables, the
necessary degrees of freedom in process design are appearing. For the remainder it is convenient
to introduce a new variable, K, which is the product of the phenomenological constant L and
the relevant geometric variable. So for vectorial processes K is defined as:

K = L A_cd, (13)

and for scalar processes as:

K = L V. (14)

This variable is called the thermodynamic design factor. The reason for this will become clear
in section 3. This leads to the following linear relation between the driving force and the flow:

F_cd = K X. (15)
The local entropy production can be split into vectorial and scalar contributions:

p_s = Σ_vectorial J_cd,i X_i + Σ_scalar J_r,i X_i. (16)
So Eq. 6 can be rewritten as:
P_s = ∫∫∫_V Σ_vectorial J_cd,i X_i dV + ∫∫∫_V Σ_scalar J_r,i X_i dV. (17)
For the vectorial processes a force X_i can be rewritten as the gradient of a potential Y_i, ∇Y_i,
leading to:

Σ_vectorial J_cd,i · ∇Y_i = Σ_vectorial ∇·(J_cd,i Y_i) − Y_i ∇·J_cd,i. (18)
The second term on the right hand side of Eq. 18 can be neglected as a second order effect.
Application of Gauss' theorem leads to:

∫∫_A Σ_vectorial J_cd,i Y_i dA = Σ_vectorial J_cd,i ΔY_i A_cd. (20)

For the scalar processes, integration over the volume gives:

∫∫∫_V Σ_scalar J_r,i X_i dV = Σ_scalar V J_r,i X_i. (21)
So the entropy production in the system is directly related to the flows and forces according to:
P_s = Σ_vectorial, scalar F_i X_i. (22)
The driving forces defined in Table 1 are based on a microscopic description. This should
be extended to a macroscopic description in order to be directly applicable to typical process
systems. This will be done for the two examples: a heat exchanger and a binary distillation
column.
Heat exchanger
Consider the system shown in Figure 2. The system is a heat-exchanger with two streams.
The system consists of three phases, two fluid phases and a solid wall where the heat transfer
occurs. The fluid phases are assumed to be ideally mixed. Moreover it is assumed that only the
heat conduction contributes to the entropy production and that the phenomenological coefficient
for heat transfer is independent of the temperature in the temperature range considered. The
temperature of the solid wall is distributed in the m-direction only, as shown in Figure 2. Now
an expression for the heat transfer between the two fluid phases, through the solid phase will
be developed. Through the surfaces A1.1, A1.2, A2.1 and A2.2, convection dominates; through
the surfaces A1.3 and A2.3, conduction dominates. So the heat between the two fluid phases is
transferred through conduction only. The (steady-state) energy balances for the fluid phases are
given by:
For the solid phase, a steady-state energy balance is constructed for a slice in the m-direction,
parallel to the areas A1.3 and A2.3:
dJ_q/dm = 0, (26)

and

T|m=m1 = T_2,out. (27)
X_thermal = Δ(1/T), (29)

where Δ(1/T) indicates the difference of inverse temperatures between the two phases. For this
system the thermodynamic design factor, K, the flow and the total entropy production are given
by:
K = L A_cd / m1, (30)

F = K Δ(1/T), (31)

and

P_s = K (Δ(1/T))^2. (32)
We have now shown how a heat exchanger can be fully represented in a macroscopic non-equilibrium
thermodynamic framework. This example will be used later in this Chapter to demonstrate the
application of non-equilibrium thermodynamics based disturbance rejection in process synthe-
sis.
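As a concrete illustration, Eqs. (30)-(32) can be evaluated numerically. In the sketch below the phenomenological coefficient, the conductive area and the wall thickness are illustrative assumptions, not values taken from the chapter.

```python
# Evaluate the thermodynamic design factor K, the heat flow F and the
# entropy production P_s for a conductive wall (Eqs. 30-32).
L_coeff = 5.0e6                 # phenomenological coefficient (assumed)
A_cd = 10.0                     # conductive transfer area (assumed)
m1 = 0.01                       # wall thickness (assumed)
T_hot, T_cold = 350.0, 340.0    # temperatures of the two fluid phases, K

K = L_coeff * A_cd / m1         # Eq. (30): thermodynamic design factor
X = 1.0 / T_cold - 1.0 / T_hot  # thermal driving force, Delta(1/T)
F = K * X                       # Eq. (31): heat flow
P_s = K * X ** 2                # Eq. (32): entropy production
print(K, X, F, P_s)
```

Note that the entropy production can equivalently be computed as F·X, the macroscopic flow times the macroscopic force, in line with Eq. (22).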
Distillation
where F_L and F_V are the liquid and vapour flows on the trays, and x and y are the liquid and
vapour mole fractions. The microscopic driving force for the conductive flow is given by:
X_diffusion = −∇(μ/T). (39)
In order to determine the conductive flow between the two phases, detailed information is re-
quired about the flow pattern on the trays. De Koeijer et al. proposed to determine the molar
driving force on a tray as the average between the forces at the inlets and the outlets of the tray
[8]. This would imply that the molar driving force on tray n is given by:
A = A_min R/R_min, (40)
with A_min the surface for a column operating at minimum reflux, R the reflux rate and R_min the
minimum reflux rate. So the total convective flow is:
K = A_min (R/R_min) L, (42)

and

P_s = K (1/T_top − 1/T_bottom)^2. (43)
We have now shown how a distillation system can be fully represented in a macroscopic non-equilibrium
thermodynamic framework.
Distributed systems
Up till now only ideal mixed systems were considered. However a large number of process
systems are distributed systems, e.g. tubular reactors, heat exchangers and packed distillation
and absorption beds. It will be shown how the local driving force can be integrated to obtain
an average driving force. These average driving forces can then be used in the analyses. Let us
consider again a heat transfer process. Figure 4 shows again a heat exchanger, but now with a
temperature gradient in both fluid phases in the n-direction. The flux at position n is given by:
The flow increment at this point is given by the product of the flux and the area:

dF = J a dn, (45)

where a is the exchange area per unit length. The total flow can be found by integrating the
local heat flow over the n-direction:
X = (1/(n1 − n0)) ∫_{n0}^{n1} x dn, (47)

F_total = K X. (48)
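The averaging of Eqs. (47)-(48) can be sketched numerically. The linear temperature profiles and the value of K below are illustrative assumptions, not data from the chapter.

```python
import numpy as np

# Average the local driving force x(n) over the exchanger length (Eq. 47)
# and combine it with the design factor K (Eq. 48).
n = np.linspace(0.0, 1.0, 201)   # dimensionless position, n0 = 0, n1 = 1
T_hot = 350.0 - 20.0 * n         # assumed hot-side temperature profile, K
T_cold = 300.0 + 30.0 * n        # assumed cold-side temperature profile, K

X_local = 1.0 / T_cold - 1.0 / T_hot  # local thermal driving force
X_avg = float(X_local.mean())         # Eq. (47) on a uniform grid

K = 1.0e9                             # assumed thermodynamic design factor
F_total = K * X_avg                   # Eq. (48)
print(X_avg, F_total)
```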
The difference between the synthesis step and the analysis step has already been briefly discussed.
In the synthesis step all input parameters and the desired performance at the output side are
given. The unit for the desired transformation needs to be determined. In the analysis step all
input parameters and the unit model are given and the behaviour of the system can be analysed.
A combination of the physico-chemical conditions and the geometric properties of the system
determine this behaviour. The difference between synthesis and analysis is also illustrated by
Figure 5 [9].
In this section it will be shown how the non-equilibrium thermodynamic framework can be used
in the synthesis phase of conceptual process design.
A central concept in the synthesis phase is the concept of degrees of freedom [10]. The
degrees of freedom, DoF, are given by the number of variables minus the number of behaviour
equations. Besides the behaviour equations, some variables are given as fixed input variables.
These fixed inputs describe inlet conditions of streams entering the system. Another class of
variables are the design targets. Examples are the outlet temperature of a stream leaving a heat
exchanger and the composition leaving a reactor system. The design degrees of freedom, DDof,
are now defined as the DoF minus the number of fixed inputs and the number of design targets.
(Figure 5: in the synthesis problem the inputs and the products are known, while the process itself is unknown.)
• DDof = 0: there is usually one unique solution to the problem (more if the model allows for
multiplicity). Hence the problem is not a true design problem.
• DDof < 0: the problem is over-defined. There is either one trivial solution, or the
problem is inconsistent.
• DDof > 0: there is an infinite number of alternative solutions. This represents real
design problems.
So the essence of design problems is that there are design degrees of freedom left such that
design choices can be made.
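The bookkeeping above can be sketched as a small helper. The split of the counts used in the call below (5 behaviour equations, 3 fixed inputs, 1 design target) is an assumed reading of the heat exchanger tally in Eq. (56).

```python
# Design-degree-of-freedom bookkeeping:
# DDof = (variables - behaviour equations) - fixed inputs - design targets.
def design_degrees_of_freedom(n_vars, n_equations, n_fixed_inputs, n_targets):
    dof = n_vars - n_equations          # model degrees of freedom
    return dof - n_fixed_inputs - n_targets

ddof = design_degrees_of_freedom(10, 5, 3, 1)
print(ddof)  # 1: one free design choice remains, i.e. a true design problem
```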
3.1. Heat exchanger
The mass balances for the two fluid phases are given by:
A degree of freedom analysis will now be performed. For simplicity the density, the heat capac-
ities and the phenomenological coefficients are assumed to be independent of the temperature.
The system contains the following ten variables:
• 4 temperatures,
• 1 conductive flow,
• 4 convective flows,
• 1 area.
DDof = 10 − 5 − 3 − 1 = 1. (56)
This remaining degree of freedom can be removed by either fixing T_2,out or the area. This
choice defines the thermal driving force. Alternative designs of a heat exchange system will
have different values of T_2,out and will hence have different thermal driving forces in the
system. However, the total heat flow in the system (= heat duty) is the same for all designs. This
is an example of a "flow-specified" process unit.
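The link between this design choice and the driving force can be sketched numerically: with the duty fixed, choosing the cold outlet temperature fixes the driving force and hence the required design factor K = F / Δ(1/T). The duty and temperatures below are assumed values.

```python
# Required design factor for a flow-specified heat exchanger:
# K = F / Delta(1/T), with F the fixed heat duty.
def required_design_factor(F, T_hot, T_cold_out):
    X = 1.0 / T_cold_out - 1.0 / T_hot   # thermal driving force, 1/K
    return F / X

F = 1.0e5                                # fixed heat duty (assumed)
for T_cold_out in (340.0, 300.0):
    print(T_cold_out, required_design_factor(F, 350.0, T_cold_out))
```

The design with the smaller driving force (cold outlet at 340 K) needs a much larger K, i.e. more transfer area, to deliver the same duty.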
3.2. Distillation
We now go back to the distillation column, shown in Figure 7. Also for this system a degree
of freedom analysis will be performed. The assumptions for this degree of freedom analysis
are that there are only two components, and that there is constant molar overflow (so the energy
balance is neglected). The system then contains the following 7n+1 variables, where n is the
number of trays, including:
• n molar hold-ups
• 1 reflux rate
• 1 number of trays
and the following relations:
• 2n component balances
• n−1 VLE relationships
• 2 product specifications
It was shown that process design can be described in the non-equilibrium thermodynamic
framework. In Chapter 3 the need for considering controllability issues from the earliest phases
of conceptual process design was emphasized. Now it will be shown how this can be done in
this framework. First the results obtained by Ydstie and coworkers on non-equilibrium thermo-
dynamics and stability will be presented. Then these results will be extended to controllability
and the relation with conceptual process design will be established.
4.1. Irreversible thermodynamics and stability
First the concept of Lyapunov functions will be introduced. A Lyapunov function, Y(x(t)),
is a positive scalar that depends on the system's state. By definition, the time derivative of a
Lyapunov function is non-positive. Mathematically these conditions can be described by:
• Y(x(t)) > 0
• dY(x(t))/dt ≤ 0
If it is possible to find a Lyapunov function for a dynamic system operating around a state x*,
x* is a stable state that is approached asymptotically.
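A minimal check of the two conditions, using the standard textbook system dx/dt = −x with candidate Y(x) = x² (an assumed example, not taken from this chapter):

```python
# Verify the Lyapunov conditions for dx/dt = -x with Y(x) = x^2.
def f(x):
    return -x                # system dynamics

def Y(x):
    return x * x             # candidate Lyapunov function

def Y_dot(x):
    return 2.0 * x * f(x)    # chain rule: dY/dt = (dY/dx)(dx/dt) = -2 x^2

for x in (-2.0, -0.5, 0.7, 3.0):
    assert Y(x) > 0          # positive away from the equilibrium x* = 0
    assert Y_dot(x) <= 0     # non-positive time derivative
print("Y(x) = x^2 certifies asymptotic stability of x* = 0")
```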
The second concept is the supply rate. The supply rate, w, is a real-valued function of the
system inputs and outputs. It is assumed that w(u, y) satisfies:
∫_t^{t+T} |w(u,y)| dt < ∞. (58)

The system S is now said to be dissipative if there exists a nonnegative function S of the states,
called the storage function, such that for all u ∈ U, x ∈ X, t ≥ 0 [11]:

S(x(t+T)) − S(x(t)) ≤ ∫_t^{t+T} w(u,y) dt. (59)
where V_A(x) is the Ydstie function, V_E(x) is the energy and V_A0 a constant. An appropriate
selection of V_A0 will guarantee that V_A(x) > 0. In the notation it is indicated that V_A(x), V_E(x)
and V_S(x) are functions of the state, x, of the system. Note that the Ydstie function does not
equal the exergy function, since the exergy function is defined as:
where φ_A is the net flow of the quantity V_A and T_0 is the reference temperature. Since the
energy is conserved, its production is always zero, leading to:
0 = φ_A − T_0 P_s(x). (64)

The effectiveness of using this storage function in distillation control has been demonstrated
[15].
Now the system is exposed to a step change perturbation in φ_A, Δφ_A. So the system will move
away from its original steady state. The entropy production, P_s, is a function of the state of
the system. Hence P_s will change, since the perturbation in φ_A leads to a change of the state
of the system. The stability results presented above teach us that eventually the system will go
to a new steady state. Obviously Eq. 64 also holds in this state. When these two steady states
are compared, we can define the difference in V_A(x) as ΔV_A(x) and the difference in P_s(x) as
ΔP_s(x). Analogous to [13] a response time, τ, is defined:

τ = ΔV_A(x) / (T_0 ΔP_s(x)). (65)
This response time is a time constant indicating how fast the new thermodynamic state is
reached. Note that the magnitude of τ depends on the disturbance directionality, since different
disturbances will lead to different values of ΔV_A(x) and ΔP_s(x). From a controllability point
of view it is now desirable that the response time is small. The only factor in Eq. 65 that can
be influenced by the design of the system is ΔP_s(x). Since the response time decreases with
increasing ΔP_s(x), the design objective is to have a large influence on the entropy production
rate. In order to see how the design decisions influence the entropy production we go back to
the entropy production as described by Eq. 22. Let us consider a system operating in steady
state. A departure from the steady state will lead to a change of entropy production. For small
departures from steady state the following approximation holds:

ΔP_s = Σ_i (F_i ΔX_i + X_i ΔF_i). (66)

Since the flux and force are related as described by Eq. (4), this can be rewritten to:

ΔP_s = Σ_i 2 K_i X_i ΔX_i. (67)
Now one has to discriminate between the two types of systems introduced earlier: flow-specified
systems and force-specified systems.
Let us first consider the flow-specified system. For these systems all alternative designs
will have the same value of KX. Hence the difference in ΔP_s between the alternatives is
determined completely by the difference in ΔX. The selection of either a large X or a small
X depends on the relation between φ_A and X. For a design with a small nominal driving force,
the disturbance, Δφ_A, will have a much larger effect on the driving force than for a system with
a bigger nominal driving force. This can be illustrated with the heat transfer example. Two
alternatives will be compared. Both designs have a hot side temperature of 350 K. One design
has a cold side outlet temperature of 340 K, leading to a thermal gradient of 8.4·10^-5 K^-1;
the other design has a cold side outlet temperature of 300 K, leading to a thermal gradient of
4.8·10^-4 K^-1. Now the temperature of the hot side increases due to the disturbance Δφ_A.
When this temperature increase is 1 K, the thermal gradient of the first design will increase by
9.7%. The thermal gradient of the second design will increase by only 1.7%. So the design
with the small driving force has a larger difference in the driving force, and hence a larger effect
on the entropy production. Hence for flow-specified processes, a small driving force is desirable
from a controllability point of view.
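The comparison above can be reproduced directly, using only the temperatures stated in the text:

```python
# Relative change of the thermal driving force Delta(1/T) for a +1 K
# hot-side disturbance, for a small and a large nominal driving force
# (hot side 350 K, cold outlet 340 K resp. 300 K).
def thermal_force(T_hot, T_cold):
    return 1.0 / T_cold - 1.0 / T_hot

results = {}
for T_cold in (340.0, 300.0):
    X0 = thermal_force(350.0, T_cold)   # nominal driving force
    X1 = thermal_force(351.0, T_cold)   # after the +1 K disturbance
    results[T_cold] = 100.0 * (X1 - X0) / X0
    print(f"T_cold = {T_cold:.0f} K: relative change = {results[T_cold]:.1f}%")
```

This prints the 9.7% and 1.7% figures quoted above: the small-driving-force design reacts far more strongly to the same disturbance.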
For a force-specified process all alternative designs will have the same value for the driving
force, X. The alternatives differ in K and ΔX. However, analogous to the flow-specified
situation, we can conclude that the change in X for the alternative designs is relatively small
since they all have the same nominal value of X. Hence the difference in ΔP_s between the
alternatives is determined mainly by the difference in K. Alternatives with a large value of K will
have a large effect on the entropy production since K acts like a multiplier in Eq. 67. Hence from
a controllability point of view large flows are desirable for force-specified processes. For the
previously introduced distillation example this has the following implications. Alternatives with
a large reflux have a large value of K. Hence these design alternatives will be more controllable
than alternatives with a small K. So for force-specified processes a large flow is desirable from
a controllability point of view.
The analysis was based on a linear relation between the flow and the force. Far from
equilibrium this linear relation will no longer be accurate. However, extension to this range will
be possible using a first-order Taylor series expansion for the relation between the flow and the
force.
5. HEAT TRANSFER
Now two examples will be presented to demonstrate the application of the developed
approaches. The first example is a flow-specified system: a heat exchanger. Alternatives with
different heat transfer areas will be compared. The second example is a force-specified process:
distillation. Alternatives with a different number of trays will be compared.
In order to test the conclusions of the previous section, the results should be compared with
a traditional approach to analyse the controllability of given designs. A large set of alternative
controllability indicators are available, including the Relative Gain Array and singular value
decomposition based methods. When selecting the appropriate index, one should keep in mind that
one wants to analyse the disturbance sensitivity. Ideally the closed-loop regulating performance
of alternative designs is compared. Obviously the RGA and singular value decomposition based
indices are not suited for this. A disturbance sensitivity approach [16] is proposed instead.
This approach analyses the steady-state disturbance sensitivity within an optimization
framework. For a specified disturbance δ the following optimization problem is formulated, given
the process model, the nominal steady-state values of the inputs (u_0) and the controlled variables
(y_0):
θ(δ) = min_u (y − y_0)^T W_y (y − y_0) + (u − u_0)^T W_u (u − u_0), (68)

subject to the steady-state process model, where θ is the objective function and W_u and W_y are
weighting matrices. Note that the objective function of Eq. (68) is closely related to the LQG
objective function.
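For a linear steady-state model this quadratic problem has a closed-form solution, which the sketch below uses. The 2x2 gain matrices, weights and disturbance are illustrative assumptions, not data from the chapter.

```python
import numpy as np

# Steady-state disturbance-sensitivity index for y = G u + Gd d:
# theta = min_du  dy' Wy dy + du' Wu du,  with  dy = G du + Gd d.
G = np.array([[0.9, 0.4],
              [0.5, 0.8]])            # steady-state input gains (assumed)
Gd = np.array([[0.6], [0.3]])         # disturbance gains (assumed)
Wy, Wu = np.eye(2), 0.1 * np.eye(2)   # weighting matrices
d = np.array([[1.0]])                 # unit disturbance

# Optimal input move from the normal equations of the quadratic problem.
du = -np.linalg.solve(G.T @ Wy @ G + Wu, G.T @ Wy @ Gd @ d)
dy = G @ du + Gd @ d
theta = (dy.T @ Wy @ dy + du.T @ Wu @ du).item()
print(theta)
```

Ranking alternative designs then amounts to comparing their θ values for the same disturbance, as done for the heat exchanger designs in Table 3.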
Four different designs of a heat exchanger are considered, two co-current and two counter-
current designs. For all designs a hot stream needs to be cooled from 50°C to 40°C with cooling
water entering at 20°C. So all designs have the same duty. Table 2 shows the flow mode, relative
surface area, relative flowrate of the cooling water, the cooling water outlet temperature and
the logarithmic mean temperature difference. The driving force profiles are shown in Figure 6.

Table 2
Alternative heat exchanger designs.

Design   Flow mode         Area   Flow   T_c,out (°C)   ΔT_ln (K)
A        counter-current   10.0   1.00   27             21.5
B        counter-current   11.3   0.58   32             19.0
C        co-current        10.6   1.00   27             20.3
D        co-current        13.0   0.58   32             16.6

Based on the theory presented above, a smaller driving force should have a positive effect
on the controllability. This would imply that the 'best controllable' design is D, followed by B,
C and finally A. For the disturbance sensitivity index, θ, the output and manipulated variables
were scaled such that a difference of 1°C in the hot stream output temperature is penalized
equally with a 20% difference in manipulated variable action. A disturbance of +1°C in the cold
stream inlet temperature was considered. Table 3 shows for the various designs the disturbance
sensitivity index and the average driving force. This table shows that the controllability is closely
correlated with the average driving force. An interesting observation is that this is independent
of the flow configuration.
6. DISTILLATION
The distillation column example is based on the well known column A [17]. All alterna-
tives are designed to have a bottom composition of 0.01 and a top composition of 0.99. The
feed stream is a saturated liquid stream with a composition of 0.50. The phase equilibrium is
modelled with a constant relative volatility of 1.5. Alternative designs vary in the number of
stages but have the same sum of driving forces. For all alternatives the feed stage is located
in the middle of the column. Table 4 shows some typical values for the alternative designs.
Now the effect of a disturbance in feed composition is calculated.
Table 3
Controllability results for the distributed heat exchanger.
Design   Average driving force   θ
A 1.00 0.109
B 0.88 0.091
C 0.94 0.106
D 0.77 0.079
Table 4
Alternative designs for Column A.
design 1 2 3 4 5
number of stages 29 35 41 47 53
feed stage 15 18 21 24 27
reflux ratio 12.26 6.96 5.41 4.72 4.37
sum of molar flows 13.13 7.88 6.35 5.67 5.31
Figure 7 shows θ divided by the θ of the base case design (41 stages), for a feed composition
of 0.45. The input and output variable deviations are scaled with their nominal values. The
three different graphs represent three different sets of weighting matrices: respectively W_u = I,
W_y = I; W_u = I, W_y = 0.2I; and W_u = I, W_y = 5I. The controlled variables are the top
and bottom composition. The manipulated variables are the reflux rate and the vapour boil-up
rate. Figure 7 clearly shows that a small number of trays is beneficial from a controllability
point of view. This is in line with the prediction made in Section 4 that for a force-specified
process large flows are desirable from a control point of view. It also shows that the result is
insensitive to the weighting matrices.
7. CONCLUSIONS
All existing (anticipating) sequential approaches for integration of process design and control
analyse the controllability for a given set of design alternatives, based on input-output behaviour
of the alternatives. The rational link between design decisions and the emerging controllability
is not explicit as controllability is derived in a black-box manner from input-output data, without
clear conceptual connections with the design decisions inside the box. This work presents a first
attempt to include controllability in the synthesis phase by linking it to the fundamental forces
and fluxes which bring about the transformations of matter and energy inside the process.
One of the attractive aspects is that it allows a controllability study in the earliest phase of the
conceptual design.
The Thermodynamic Controllability Assessment method presented in this chapter is based
on the concept of passive systems with a dissipative storage function. The dissipative effect
arises through the entropy production in the process system. Having a large influence (e.g. by the
manipulated variables) on the entropy production rate in a system is beneficial for having a
fast response time to counteract disturbances. Using the response time as a measure of control-
lability, one can make statements about the desired magnitude of driving force or a flow in a
process unit to affect the entropy production rate.
For process units with a single (dominating) force and flux, it was shown that the thermo-
dynamic design factors should be large. For flow-specified process units (e.g. heat exchanger,
reactors) this implies a small driving force, while for force-specified process units (e.g. distilla-
tion) it implies a large internal flow.
The alternative designs generated for the two test cases (heat exchanger and binary distilla-
tion) on basis of this TCA principle prove to be remarkably well aligned with the results of
a black-box, input-output controllability analysis using the steady-state disturbance sensitivity
approach.
Obviously, the TCA method needs further work and testing to make it more robust and widely
applicable. Firstly, the process units tested so far (among which a Fischer-Tropsch reactor
design) can be characterised by a single dominating force and flux. In multi-functional process
units (e.g. reaction-separation) multiple forces and fluxes are active. Multiple design factors
come into play and these might affect the response time differently. Furthermore, the path of
causal physical events from inputs to outputs might be different from the path of events from
the incoming disturbances to the outputs. In that case the additional question arises on which
events the entropy production rate must be based. Secondly, the issue of the integration of pro-
cess units into an overall process flow sheet must be addressed. At this level trade-offs between
units must be considered for which an optimisation-based framework will be required.
REFERENCES
[1] D.R. Lewin, 7th IEEE Mediterranean conference on control and automation, Haifa (1999).
[2] B.E. Ydstie, and K.P. Viswanath, Proceedings of PSE'94 (1994) pp. 781 - 787.
[3] C.A. Farschman, K.P. Viswanath, and B.E. Ydstie, AIChE Journal, 44 (1998) 1841.
[4] B.D. Tyreus, Ind. & Eng. Chem. Res., 38 (1999) 1432.
[5] C.A. Farschman, On the stabilization of process systems described by the laws of
thermodynamics, PhD thesis, Carnegie Mellon University, 1998.
[8] G.M. de Koeijer, S. Kjelstrup, H.J. van der Kooi, B. Groß, K.F. Knoche, and T.R. Andresen,
presented at ECOS'99, Tokyo, Japan, 1999.
[9] M.F. Doherty and M.F. Malone, Conceptual design of distillation systems, McGraw-Hill,
Boston, 2001.
[10] R.K. Sinnott, Coulson & Richardson's Chemical Engineering volume 6, Pergamon Press,
Oxford, 1993.
[12] R. Sepulchre, M. Jankovic and P.V. Kokotovic, Constructive nonlinear control, Springer,
Berlin, 1997.
[14] B.E. Ydstie, and A.A. Alonso, Systems & control letters 30 (1997) 253.
[15] D.P. Coffey, B.E. Ydstie, and C.A. Farschman, Computers & Chemical Engineering, 24
(2000) 317.
[16] P. Seferlis and J. Grievink, Computers & Chemical Engineering, 25 (2001) 177.
Chapter A7
1. INTRODUCTION
We have seen greater pressures on the process industries in recent years to improve
margins and to respond more quickly to market trends. To achieve this we need plants
that are more flexible and controllable than previously. Of course many plants have turned
out to be much more responsive than might have been envisaged on commissioning.
However, to meet stricter demands on responsiveness in the future, while satisfying ever
more stringent safety and environmental criteria, we need to design and retrofit operating
plants accordingly.
A key issue that must be confronted is that flexibility means both that the plant must be able
to reject disturbances effectively over its range of steady-state operating conditions, the
traditional role of the control system, and that it must be able to move smoothly around the
operating region to achieve new desired conditions safely and cleanly. This window of operation
could be defined by economic, safety or operating considerations. Continuous plants are now
expected to undergo regular changeovers in operating conditions, even on a daily basis. We must
build into our design procedures the analysis of the dynamic behaviour which governs these
aspects. Van
Schijndel and Pistikopoulos [1] produced an extensive review on systematic approaches to the
integration of process design and process control. To develop designs which take account of
dynamic behaviour many assumptions must be made relating to disturbance scenarios
(relating for example to feedstock, utilities, or ambient changes). A quantifiable performance
objective must be defined which may be economic or more directly related to control
objectives (discussed in the next section). Design decisions may be operational (flowrates,
temperatures and controller parameters for example) which are mostly continuous variables,
or structural, such as choice of units in the flowsheet, connectivity between them, or control
system structure pairings, which are discrete variables. For these methods dynamic models
are required. Model development can be a complex task in itself and this has restricted the
industrial take up of some methods. However dynamic modelling is becoming more
commonplace.
Since the operating region can now be quite large, we must reconsider the assumption
which permeates much of the work on dynamics: that because systems only vary a little
around one operating state, linearity in the behaviour can be assumed. We have attempted
over recent years to make progress towards incorporating nonlinear aspects of dynamic
behaviour. It is unlikely to be possible to tackle this problem in a purely general framework
as for linear systems. However by looking at various classes of problems it is possible to
make some progress. This chapter brings together some results for a range of different types
of process systems attempting to draw some general conclusions. There are also some results
for more general approaches to the problem, which will also help to gather experience
towards a general methodology.
2. CONTROLLABILITY MEASURES
Commonly used controllability measures for linear systems include the relative gain array
(RGA) [2] and the minimised condition number (CN) [3], both of which rely on a linear
model describing the effect of control variables on the process outputs (structural
controllability). Typical resilience measures are the disturbance condition number [4], the
disturbance cost [5] and the relative disturbance gain array [6], which require an input-output
control structure as well as an additional disturbance model that describes the effect of the
disturbance on the process outputs. A review and a procedure for controllability analysis for
linear systems is given in Skogestad and Postlethwaite [7]. The principle is to consider the
effect of the different limitations separately, and then to conclude whether or not
controllability is sufficient for a given task and also to rank order different design alternatives.
This raises one of the major problems in controllability analysis to date - that many different
design aspects result from such analysis. For this reason many researchers now develop
optimisation based integration methods for controllability where a single value connected to
the economics of the process is generated that allows for realistic ranking of alternative
designs [8,9,10,11,12].
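For reference, the RGA mentioned above, Λ = G ⊗ (G⁻¹)ᵀ with ⊗ the elementwise product, can be computed in a few lines. The 2x2 steady-state gain matrix below is an illustrative distillation-type example, not taken from this chapter.

```python
import numpy as np

# Relative gain array: elementwise product of G and the transpose of G^-1.
def rga(G):
    return G * np.linalg.inv(G).T

G = np.array([[0.878, -0.864],
              [1.082, -1.096]])   # illustrative steady-state gains

Lam = rga(G)
print(Lam)  # each row and column of an RGA sums to 1
```

The large diagonal elements (about 35 here) flag severe two-way interaction, which is exactly the kind of screening information such linear indices provide.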
Most tools rely on the use of steady state or linear dynamic models, and the use of such
models may be adequate in some cases. But, in general, it is quite unpredictable as to whether
the conclusions drawn are correct, particularly in the face of process nonlinearities. Often, the
final evaluation of the controllability of a system has to be through simulations, in particular
when nonlinear characteristics are important. Moreover, when a dynamic simulation is used,
several limitations can be identified:
• It is inefficient and potentially not conclusive, especially when the process possesses
fast and slow modes;
• It is incomplete since only a limited number of simulation tests can be performed, and
important and complex dynamic behaviour may not be observed for the specified
conditions.
In many cases, it appears that a controllability evaluation of a nonlinear system based on a
linearised model and linear controllability criteria is adequate [13,14]. Often, it is quite easy to
design simple static nonlinear compensators, which remove most of the process nonlinearity.
The compensated system can then be analysed with linear techniques. This holds for regulatory performance around a specified steady state point, but fails for problems with a high degree of nonlinearity and where operation is expected over wide operating regions.
The reliability of the controllability analysis conclusions from these linearised methods is
only valid around the specified conditions. Some important processes can exhibit nonlinear
behaviour which is not easily correctable with simple nonlinear transformations. These
nonlinear characteristics may have adverse effects on the dynamic performance of the
systems. Therefore, it is important to understand the complex nonlinear behaviour of a process and to analyse the effects of the parameters and operating conditions on it.
The complex behaviour of a nonlinear process and its dependence on the parameters and conditions can be analysed by utilising nonlinear techniques such as bifurcation analysis.
Continuation methods can be used to locate bifurcations and conditions for input multiplicity.
If this is done early enough at the design stage, the potential control problems associated with
these characteristics could be eliminated or avoided by modifying the process design itself
[14]. It has been argued that, as long as the dynamics are known to the control engineers, such a nonlinear analysis is not needed at the design stage, since any complex nonlinear behaviour can be dealt with later by the control algorithm [15]. This may be true for controller design, but it is not satisfactory for analysing the controllability of a system: when design alternatives are considered at the design stage, it is necessary to identify fully all potential problems associated with the complex behaviour and to assess how easy each design is to control.
A number of methodologies and tools have been reported for taking account of the
interactions between process design and process control [1]. A class of approaches that
include controllability into the design problem formulation are the optimisation based
methods for synthesis and design. Swaney and Grossmann [16] measured the flexibility of a process by maximising a scalar value called the flexibility index. This measure evaluates the flexibility of a process at steady state. Dimitriadis and Pistikopoulos [17] extended this
approach for dynamic systems. Mohideen et al. [18] further extended this work by employing
an economic objective. They formulated the process and control system design within an
integrated optimisation framework, where process characteristics and control system
parameters were determined simultaneously. Rigorous dynamic models, pre-specified
disturbances and PID controllers were used while significant economic benefits were
reported. Further work following this measure has been presented by Bansal et al. [19].
Luyben and Floudas [20] approached the design problem taking into account dynamic
control performance characteristics in the form of matrix metrics within a multiobjective
optimisation framework. White et al. [21] proposed an approach to evaluate switchability of a
process design, i.e. its ability to move between operating points. Their approach was based on
determining the optimal switching trajectory for the plant by setting up and solving an optimal
control problem. One feature of this approach is the ability to include parameters
characterising the design of the plant as decision variables.
Bahri et al. [22] presented a backoff optimisation formulation to examine the disturbance
rejection capability of the given design and find a backoff optimal design in order to reject the
specified disturbance at steady state. One feature of the optimisation formulation of Bahri et
al. [22] is the ability to include parameters characterising the design of the plant as decision
variables without control design. In their later work, Bahri et al. [23] extended their work to
dynamic systems. In this work dynamic performance was evaluated dependent on detailed
control system design.
In general, the integrated design methods in the literature can be classified as having two
different perspectives. The first set of approaches considers steady state operation to be most
desirable. They then seek to develop the steady state designs that are economically optimal
but are also dynamically operable in a region around specified steady states. This is usually
implemented by trading off an economic performance measure against a
controllability performance index at steady state. The final decision as to what constitutes the
"best" design is often somewhat arbitrary in the sense that it depends on the relative weights
used for the conflicting objectives. Furthermore, these approaches suffer from the inherent
weaknesses of the performance indices used, namely that the controllability indicators may
not directly and unambiguously relate to real performance requirements. The main drawback
is that the solutions are only reliable around the specified steady states. In order to check the
validity of the conclusion drawn, closed loop dynamic simulations are usually required.
The second set of approaches are dynamic approaches [18, 21, 23, 24] that take the view
that all processes are inherently dynamic, and that dynamic operation is inevitable or in some
cases preferable to steady state operation. They therefore explicitly consider the dynamic
performance at the design stage through the use of the dynamic models. The ambiguity
associated with controllability performance is thus avoided. These methods are not restricted
to a small operating envelope around steady states, thus the final decisions drawn are reliable
over a large region of the operation in the face of the disturbances. However, the optimal
controller parameters strongly depend on the detailed dynamic process models and expected
disturbances of the process systems.
Therefore, a desirable method is one that uses only open loop steady state data while still capturing the dynamic characteristics of a process design, i.e. information that is independent of a detailed controller design, and that can eliminate design candidates for which no controller, whatever design method is used, achieves the control objectives in the face of disturbances.
Controllability has been defined as 'the ability to achieve acceptable control performance in spite of unknown but bounded variations such as disturbances and plant changes, using available inputs and available measurements' [7]. Qualitative features can cause specific problems, which should be
avoided. Input multiplicity, where more than one steady state may exist when the output is specified, can in many cases cause nonminimum phase behaviour. Nonminimum phase behaviour can be controlled if its presence is known, but it causes particular problems and is best avoided. It is possible to obtain an expression for the zero dynamics, and the stability of this nonlinear equation determines the nonminimum phase behaviour. Kuhlmann and Bogle [25] showed that for the well known van de Vusse reaction, which is known to exhibit input multiplicity, the optimal conditions lie in the minimum phase region. This applies to all systems with intermediate reaction schemes without side reactions.
Some progress has been made towards general methodologies to obtain controllability
conclusions by using ideas developed for model based control. By determining the integral
square of the closed loop error following a disturbance to a nonminimum phase system it is
possible to compare alternative designs. The model is factorised into minimum and
nonminimum phase parts and the integral determined for each part. However the integral of
the nonminimum phase factor cannot in general be directly determined. Chen and Allgower
[26] proposed a method which converts the infinite integral into a finite one. The objective
function which has an infinite limit is modified such that the terminal value bounds the
infinite horizon value. This formulation now allows us to determine the ISE of a potential
switch of steady state and compare alternative designs on that basis. A simple example was
studied by Kuhlmann and Bogle [27] involving a reactor and a separator with recycle. The switch between steady states was considered between 98% and 97.5% product purity. The design alternatives (Table 1) differ only by small amounts in the flowrates, and yet in one case nonminimum phase behaviour is present: Alternative 1 has a weak right half plane zero. One design is superior for switching between steady states, as simulations have demonstrated. However, while this helps with making design decisions on the basis of switching criteria, much remains to be done towards a full controllability analysis methodology.
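The comparison can be illustrated, in a much simplified form that sidesteps the finite-horizon construction of Chen and Allgower, by computing the ISE of the settling error for two hypothetical transfer functions, one of which has a right half plane zero (all numbers below are invented for illustration):

```python
def step_ise(z, dt=1e-3, t_end=20.0):
    """ISE of e = 1 - y for G(s) = (1 - z s)/((s + 1)(0.5 s + 1)), unit step."""
    x1 = x2 = 0.0
    ise = 0.0
    for _ in range(int(t_end / dt)):
        dx1 = 1.0 - x1           # first lag, tau = 1, with u = 1
        dx2 = 2.0 * (x1 - x2)    # second lag, tau = 0.5
        y = x2 - z * dx2         # numerator (1 - z s): RHP zero when z > 0
        ise += (1.0 - y) ** 2 * dt
        x1 += dx1 * dt
        x2 += dx2 * dt
    return ise

ise_a = step_ise(z=0.0)   # candidate A: minimum phase
ise_b = step_ise(z=0.5)   # candidate B: right half plane zero at s = 2
print(ise_a, ise_b)       # B accumulates markedly more squared error
```

The inverse response forced by the right half plane zero inflates the accumulated squared error, which is the mechanism by which such a metric separates otherwise similar design alternatives.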
Optimising polymerisation reactors can cause major controllability problems. Lewin and
Bogle [28] showed how optimising a methylmethacrylate polymerisation reactor brought the
operating conditions much closer to the bifurcation point which causes major problems with
controlling to this set point. The use of the disturbance condition number indicated this
problem. The effects are nonlinear which is reflected in different linearised measures at
different operating points. The problem also exhibits input multiplicity conditions.
Table 1
Reactor separator design alternatives

                              Base case   Alternative 1   Alternative 2
Reactor holdup                65.33 l     65.36 l         65.36 l
Separator tops flowrate       1.59 l/s    1.62 l/s        1.52 l/s
Separator bottoms flowrate    1.2 l/s     1.256 l/s       1.146 l/s
Reactor outlet concentration  0.5 g/l     0.49 g/l        0.51 g/l
x' = f(x,u)
y = h(x)                                        (1)

where x is the state variable vector, u is the manipulated variable, and f and h are smooth functions. Mathematically, a necessary condition for the existence of steady state input multiplicity [29] is:

G(0) = -CA^(-1)B = 0                            (2)

where A, B and C represent the gradients of f(x,u) with respect to x, f(x,u) with respect to u, and h(x) with respect to x at the steady state operating point, respectively, and G(0) is the steady state process gain. It should be noted that only the SISO case is being considered here.
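As a sketch of how the gain test in equation (2) might be evaluated numerically for a small SISO linearisation (the matrices below are hypothetical, not the model of any process in this chapter):

```python
def steady_state_gain(A, B, C):
    """G(0) = -C A^{-1} B for a 2-state SISO linearisation."""
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    # solve A x = B using the explicit 2x2 inverse
    x0 = (d * B[0] - b * B[1]) / det
    x1 = (-c * B[0] + a * B[1]) / det
    return -(C[0] * x0 + C[1] * x1)

A = [[-2.0, 1.0],   # hypothetical stable Jacobian df/dx
     [0.5, -1.0]]
B = [1.0, 0.0]      # df/du
C = [0.0, 1.0]      # dh/dx
g = steady_state_gain(A, B, C)
print(g)   # a zero crossing of this value flags possible input multiplicity
```

In practice the gradients A, B and C would come from linearising the full process model at each candidate steady state.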
Whenever equation (2) has a solution, the possibility of input multiplicity behaviour must be considered, and one cannot be confident that the inverse of the system exists as a one-to-one mapping from output values to input values. In this case a large input move may result under an inverse-based control scheme. Equation (2) can be used to determine the locus of the input multiplicity conditions, assuming pseudo steady state conditions, using continuation methods.
An artificial dynamic equation can be introduced using the process gain, G(0), which characterises the position of the input multiplicity, as follows:

v' = G(0)v                                      (3)

where v is an artificial state and the initial condition is 0. The value of G(0) is determined by system (1) at steady state for each chosen value of u and is independent of the state v. The eigenvalue of equation (3) crosses zero exactly when

G(0) = 0.                                       (4)

Therefore, the necessary condition for the existence of input multiplicity can be determined from this particular dynamic equation. For our purpose, it is sufficient to seek changes in the sign of G(0) to detect the occurrence of the input multiplicity at a variety of steady states [30]. If the original dynamic system (1) is augmented with the dynamic equation (3), a new dynamic system is established, of the form:

x' = f(x,u)
v' = G(0)v                                      (5)

or, writing X = [x v]^T, X' = F(X,u), with steady states satisfying

0 = F(X,u)                                      (8)

The Jacobian of the augmented system is block diagonal:

J_a = [ J     0    ]
      [ 0    G(0)  ]

where J is the Jacobian of f with respect to x. The eigenvalues λ of J_a satisfy

det(λI - J)(λ - G(0)) = 0

which represents the characteristic equation of the system (1) together with equation (3).
Input multiplicity behaviour of the system can be detected and located using a continuation
method. Branches of this point can be traced out in the parameter space to determine the input
multiplicity regions and the parameter effects. The continuation techniques within the
software package AUTO [31] can handle these problems with the additional small calculation
expense of obtaining the process gain by employing the subroutine to calculate the
determinant of the Jacobian of the original system (1).
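A minimal sketch of this sign-change detection, using a toy scalar model rather than AUTO (the model x' = u - x with y = x(2 - x) is invented for illustration; its gain G(0) = 2 - 2u vanishes at u = 1):

```python
def gain_at(u):
    """G(0) = -C A^{-1} B for the toy model at the steady state x = u."""
    x = u                 # steady state of x' = u - x
    A = -1.0              # df/dx at the steady state
    B = 1.0               # df/du
    C = 2.0 - 2.0 * x     # dh/dx for y = h(x) = x(2 - x)
    return -C * B / A     # scalar case of -C A^{-1} B

def find_sign_change(u_lo, u_hi, n=200):
    """Bracket the first u interval on which G(0) changes sign (or hits 0)."""
    du = (u_hi - u_lo) / n
    g_prev = gain_at(u_lo)
    for i in range(1, n + 1):
        u = u_lo + i * du
        g = gain_at(u)
        if g_prev != 0.0 and g_prev * g <= 0.0:
            return (u - du, u)
        g_prev = g
    return None

lo, hi = find_sign_change(0.0, 2.0)
print(lo, hi)   # brackets the input multiplicity at u = 1
```

A genuine continuation code would also solve the steady state equations numerically at each step; here the steady state is available in closed form, which keeps the sketch short.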
In order to eliminate input multiplicity by means of process modification, the effects of the
design and operation parameters on the input multiplicity behaviour of the polymerisation
reaction described in the previous section have been explored. The input multiplicity
conditions were identified.
The primary quality constraint is on the number average molecular weight of the polymer, MWav. In this case, only the reactor volume V is an adjustable design parameter, the flow rate of the cooling water Fcw is a manipulated variable, and the temperature of the inlet feed Tin is considered as the main disturbance. Many other variables, such as the monomer feed concentration, initiator feed concentration, feed flow rate, reactor volume, overall heat transfer coefficient, cooling water inlet temperature, and reactor feed temperature, may also affect the dynamics of the process; it has been assumed that these do not vary significantly. The initiator feed concentration is considered to be fixed (Fi = 0.00354). The first input multiplicity condition (MWav = 15,765 and Fcw = 0.1977 m3/h under the nominal conditions) is considered, which is close to the selected initial operating point (MWav,s = 25,000 and Fcw,s = 0.1673 m3/h).
An increase in the cooling water flow rate can move the initial operating condition (Tin = 350K and Fcw = 0.1673 m3/h) to breach the first input multiplicity condition, where Fcw = 0.1977 m3/h. Similarly, a slight decrease in the reactor volume will significantly move the input multiplicity condition away from the operating point. A decrease in the reactor volume may therefore improve the controllability of the process, since the modified operating point will be further away from the input multiplicity condition, allowing a larger margin to reject the disturbance.
The effects of the reactor volume and the inlet feed temperature on the steady state characteristics of the process have been studied separately with respect to the cooling water flow rate Fcw. The relationship can be expressed functionally as Fcw,IM = fIM(V, Tin). A locally linearised expression of fIM(V, Tin) was obtained, which will be used as a constraint on the input in the process optimisation problem to avoid the input multiplicity condition.
The base case (optimal) design will face control problems subject to changes in the disturbance Tin when it is operated at the selected optimal point (MWav = 25,000 and Fcw = 0.1673 m3/h). In order to have the desired disturbance rejection ability, the base case design needs modifying. The control problem is associated with the input multiplicity behaviour. The aim of the modification is to eliminate the potential control problem associated with the input multiplicity for a specified disturbance of the inlet feed temperature Tin in the operating region by adjusting the design parameter, while keeping the modification cost at a minimum.
We have chosen the reactor volume V as an adjustable design parameter for process modification. The modified design optimisation problem minimises the cost of the changes in the reactor volume and cooling water flowrate, subject to

f(x) = 0,
MWav = 25000,
Fi = 0.00354,
Fcw <= Fcw,IM,

where f(x) represents the six state equations of the process at steady state; Fcw,IM is the value of Fcw at the input multiplicity condition; Vopt and Fcw,opt are the nominal design parameter value of 0.1 m3 and the optimal operation value of 0.1673 m3/h, respectively; ΔTin is the disturbance value to be defined; and R and Q represent the cost factors in the changes of the reactor volume and cooling water flowrate, respectively (R = 60 and Q = 1 have been used). The degrees of freedom in the optimisation problem are the reactor volume V and the cooling water flow rate Fcw.
The solutions for the reactor volume obtained by solving the design optimisation problem above, corresponding to the specified disturbance values ΔTin, are given in Table 2. The optimisation problems were solved using GAMS/MINOS.
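The structure of this calculation can be sketched as a simple grid search; note that the input multiplicity boundary and the disturbance sensitivity coefficients below are hypothetical placeholders, not the chapter's fitted expressions, so the numerical results will not reproduce Table 2:

```python
R, Q = 60.0, 1.0                      # cost factors from the formulation
V_OPT, FCW_OPT = 0.1, 0.1673          # nominal volume [m3] and flow [m3/h]

def f_im(V, dT_in):
    """Hypothetical locally linear input multiplicity boundary Fcw,IM."""
    return 0.1977 - 12.0 * (V - V_OPT) - 0.004 * dT_in

def required_fcw(dT_in):
    """Hypothetical cooling water flow needed to hold MWav after dT_in."""
    return FCW_OPT + 0.008 * dT_in

def modify_design(dT_in, margin=0.005):
    """Cheapest V (grid search) whose required Fcw stays below the
    input multiplicity boundary with a safety margin."""
    fcw = required_fcw(dT_in)
    best_cost, best_v = None, None
    for i in range(201):                      # candidate volumes 0.09..0.11
        V = 0.09 + i * 1e-4
        if fcw <= f_im(V, dT_in) - margin:    # stay clear of the IM condition
            cost = R * (V - V_OPT) ** 2 + Q * (fcw - FCW_OPT) ** 2
            if best_cost is None or cost < best_cost:
                best_cost, best_v = cost, V
    return best_v

print(modify_design(0.0), modify_design(4.0))  # larger disturbance -> smaller V
```

Even with invented coefficients the qualitative trend of Table 2 emerges: as the expected disturbance grows, the cheapest feasible design moves to a smaller reactor volume.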
The results from the feedback optimal process design modifications indicate that a
decrease in the reactor volume will improve the disturbance rejection ability of the process.
Closed loop simulations are given to demonstrate the improved controllability of the
modifications resulting from the feedback optimal design methodology. The process is initially at the selected operating point from the base case design, i.e. MWav,s = 25,000 and Fcw,s = 0.1673 m3/h.
Figure 1 and Figure 2 illustrate the closed loop responses of MWav for the base case (optimal) process to a positive step in the inlet feed temperature Tin from 350K to 353K (ΔTin = +3K) and from 350K to 354K (ΔTin = +4K), respectively. A conventional PI controller was used, with parameters tuned by the Ziegler-Nichols rule and kept constant in all simulation runs for the purpose of comparison. The base case design has the ability to reject a +3K step change in Tin but fails for a +4K step change, although the output MWav eventually comes to rest at its set point under feedback control.
Table 2
Polymerisation process design modification results

ΔTin (K)   -5      -3      -1     0      +1     +2      +3      +4      +5
V (m3)     0.102   0.101   0.1    0.1    0.1    0.099   0.098   0.097   0.095
Fig. 1. Simulation of polymerisation process for the base case design: V = 0.1 m3 and disturbance ΔTin = +3K

Fig. 2. Simulation of polymerisation process for the base case design: V = 0.1 m3 and disturbance ΔTin = +4K

Fig. 3. Simulation of polymerisation process for the modified case design: V = 0.097 m3 and disturbance ΔTin = +4K

Fig. 4. Simulation of polymerisation process for the modified case design: V = 0.097 m3 and disturbance ΔTin = +5K

Fig. 5. Simulation of polymerisation process for the modified case design: V = 0.095 m3 and disturbance ΔTin = +5K
Fig. 3 shows that the modified design for the case of V = 0.097 m3 has the ability to reject a +4K step change in Tin as required. The modified process fails to reject a step change in Tin larger than the specified value of ΔTin = +4K, for example ΔTin = +5K, as shown in Figure 4. However, as shown in Figure 5, the modified process design for the case of V = 0.095 m3 can reject the +5K step change in Tin that was specified in the process design modification.
The simulations indicate that the modifications have the ability to reject larger disturbances in the inlet temperature Tin than the base case design does. It is worth noting that both the base case and the modifications can eventually be stabilised by feedback control, since there exists only output multiplicity in the MWav-Fcw loop; a sudden change in the output could, however, occur for a large disturbance during operation.
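The kind of closed loop disturbance rejection test used in these simulations can be sketched generically; the first-order process and PI gains below are hypothetical stand-ins, not the reactor model:

```python
def simulate(dist, kp=2.0, ki=1.0, dt=1e-3, t_end=20.0):
    """Largest |error| over the final 10% of the run (residual offset)."""
    y = 0.0          # process output, deviation variable
    integ = 0.0      # integral of the error
    worst_tail = 0.0
    steps = int(t_end / dt)
    for k in range(steps):
        e = -y                      # setpoint 0 in deviation variables
        integ += e * dt
        u = kp * e + ki * integ     # PI control law
        y += (-y + u + dist) * dt   # first-order process, tau = 1, Euler step
        if k >= int(0.9 * steps):
            worst_tail = max(worst_tail, abs(e))
    return worst_tail

print(simulate(dist=1.0))   # integral action removes the steady offset
```

The point of such a test, as in Figures 1 to 5, is not the transient shape but whether the loop recovers the setpoint at all for a given disturbance size.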
While the process improvements can be achieved by changing operating conditions, major
improvements are often achieved by considering alternative flowsheet or equipment structure.
This is required for retrofit design projects or for the design of new plant. The scope for
structural decision making greatly increases the potential search and solution space. There is
little generic experience but some results for particular types of problems. An example is in
the use of design modifications to improve environmental performance which also affects
control performance.
Environmental issues are increasingly coming to the fore. One approach to considering them has been to examine exergy utilisation, a concept also used by the tools of process integration. Exergy is the useful work that can be extracted from energy in a process, and it is easily wasted by inefficient use of heat and mass transfer fluxes. It is a widely held belief that greater integration contributes to greater control difficulties, particularly because energy recycles cause greater interactions. However, a recent study [32] examined the effect of minimising exergy utilisation on the closed loop performance of a vinyl chloride monomer (VCM) distillation column. In this study the controllability was improved simultaneously with the exergy utilisation of the process.
It was shown that major reductions in exergy loss could be achieved by optimising the heat
transfer conditions. This was achieved by a mixture of optimising the operating conditions
and of structural decisions in the design. Temperature conditions were determined so as to
minimise exergy loss in the heat transfer processes. However the position of the feed tray was
also chosen which is a structural (discrete) decision, albeit a traditional one. A more obvious
structural design decision was to introduce a side heater to the seventh plate in the column,
again to improve the exergy utilisation. The exergy loss was calculated and directly used as
the objective function to be optimised in the design. This had the effect also of improving the
dynamic performance of the column.
For the 25 plate column in Table 3, the thermally optimal (diabatic) design requires side-heating utilities on plates 14 to 20 and side-cooling on plates 12 and 13. For industrial application an additional seven side-heating units and two cooling units are clearly not practical. However, the intermediate cooling utilities can be neglected, as they represent only 5.7% of the total required cooling utilities, and the seven side-heating units can be replaced by one side-reboiler with very little effect on the results. This single side-reboiler, located on plate 15 in an area of low driving forces, provides 74% of the side-reboiler utilities of the diabatic thermally optimal design, with an increase in exergy losses of only 11% relative to the seven side-reboiler case.
Figure 6 shows the profiles of exergy losses for the side-reboiler arrangement in
comparison with the adiabatic and diabatic reversible column configurations (25 plates). The
main benefits are achieved by using only one side reboiler which is a realistic option. The one
side-reboiler arrangement results in a peak in the exergy losses curve mainly due to the fact
that the side-reboiler is an additional plate and the exergy losses due to mass and heat transfer
in the side-reboiler are added to the losses on plate 15.
Figure 7 shows the temperature profiles for the adiabatic, diabatic thermally optimal, and
diabatic one side-reboiler VCM column design. For the diabatic thermally optimal column
design, the temperature or concentration (not shown) profiles are almost linear between the
pinch points at both ends of the column. For the diabatic column with one side-reboiler the
profiles are also closer to being linear than for the adiabatic design. The critical factors for
obtaining such "near linear" profiles are the location and size of the side-reboiler as well as
the condition and location of the feed stream. A shifting of the side-reboiler into a region of
already high driving forces, or using the side-reboiler to dump available heating utilities, will
increase the exergy losses but, perhaps more importantly for the control system design, will
alter the column profiles away from near linear profiles.
This optimised 25 plate design plus condenser, main reboiler, and one side-reboiler column
configuration has been used to study a possible control system design and compare it with an
adiabatic 25 plate VCM column configuration. It will be shown that the column profiles are of
significant importance for the controllability of the column.
Table 3
Exergy Comparison of the Diabatic with the Adiabatic Column [GJ hr^-1]

Adiabatic - Diabatic                 22 Plates    25 Plates    35 Plates
Difference in Total Cooling Duty     -0.285       -0.182       -0.123
                                     (+31.6%)     (+20.7%)     (+14.9%)
Difference in Total Heating Duty     -0.341       -0.385       -0.34
                                     (-9.8%)      (-11.5%)     (-10.8%)
Difference in Total Exergy Losses,
including losses due to
intermediate heating or cooling      -0.552       -0.684       -0.592
                                     (-25.1%)     (-37.3%)     (-35.4%)
Fig. 6. Exergy losses for three VCM column designs - adiabatic, with one side-reboiler (diabatic side-reboiler) and with seven side-reboilers (diabatic reversible)
The new process design has a different dynamic response to feed disturbances which in
turn can provide equivalent process stability and/or less capital investment. The design for
both columns is based on the widely used LV column control configuration [33,34]. The
operating objectives for the VCM column are high product recovery and product purity. The
selection of the LV configuration has the advantage that it has a direct effect on the product
compositions and is weakly dependent on the level control scheme [35]. The controllers were
tuned using the Ziegler-Nichols tuning method.
Maintaining the product compositions is the control system objective of this study. Figures 8 and 9 show the response of the adiabatic (dotted lines) and diabatic (full lines) VCM column product compositions after imposing the following disturbances: a 3% feed flowrate disturbance and a 3% feed composition disturbance. The distillate products are shown in the top half of the diagrams and refer to the right y-axis (Δy_VCM). The bottom products are shown in the bottom half of the diagrams and refer to the left y-axis (Δx_VCM). Note that the scaling of the right y-axis (distillate) is smaller by a factor of 10^2. Most of the adiabatic composition curves have not yet returned to steady state one hour after the feed disturbance; however, for clarity of the figures and to maintain reasonable computational times, a settling window of one hour after each feed disturbance is shown in Figures 8 and 9.
Results for these disturbances and for a 1°C feed temperature disturbance are tabulated in Table 4. The response time, defined as the time taken for the output variable to return to within a fixed distance of the new or old stationary value, has been used as a measure of the quality and stability of the process and control system design. Table 4 summarises the response times, in hours, after feed disturbances applied at time 0.05 hours. The product compositions are assumed to have reached a new stationary operating condition once the deviations are within (and remain within) ±10% of their final values. The results show that
especially for the bottom composition the response times of the diabatic column are much
shorter. In all of the feed disturbances reported here, except one, the diabatic column shows
better disturbance rejection. The exception is the VCM composition of the distillate after a
3.0% VCM feed composition increase (figure 9). The reason for this result is the fixed side-
reboiler duty. Most of the additional VCM entering the column is vapour. Hence, after the
disturbance the side-reboiler evaporates a higher fraction of EDC until the main reboiler is
adjusted to the new operating condition.
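The response time measure defined above (return to within ±10% of the stationary value, and stay there) is straightforward to compute from a simulated trace; the exponential trace below is synthetic, for illustration only:

```python
import math

def response_time(t, y, band=0.10):
    """Last time the trace lies outside +/- band*|y_final| of its final value."""
    y_final = y[-1]
    tol = band * abs(y_final) if y_final != 0 else band
    last_violation = t[0]
    for ti, yi in zip(t, y):
        if abs(yi - y_final) > tol:
            last_violation = ti
    return last_violation

# synthetic first-order settling trace, time in hours
dt = 0.001
t = [i * dt for i in range(10000)]
y = [1.0 - math.exp(-ti / 0.5) for ti in t]
rt = response_time(t, y)
print(rt)   # about 0.5 * ln(10), roughly 1.15 h for this trace
```

Scanning for the last violation, rather than the first entry into the band, enforces the "and remain within" part of the definition.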
These results are perhaps surprising because the side-reboiler for the diabatic design
introduces strong cross-coupling effects causing considerable oscillations within the column.
However the improved controlled response is due to the removal of the sharp nonlinearity in
the temperature profile (Fig 7). Shifts in the internal temperature, used as a measured variable
in the control scheme, are less drastic producing a more even closed loop response.
A second case where the effect of structural changes can be seen is in results for four designs of a single-cell-producing continuous fermenter. The study compared the designs on the basis of the magnitude of the control input required for perfect disturbance rejection; in this case the controllability problems are related to input saturation. Design 1 is a traditional fermenter with input flowrate and substrate concentration as possible manipulated variables. Design 2 allows the overall feed concentration to be manipulated while the overall flowrate remains constant. Designs 3 and 4 introduce a recycle purified by a filter, with the product withdrawn at different places; the inlet and product flowrates are considered as manipulated inputs. Two operating conditions were chosen: one at an intermediate rate and the other at the economically optimal conditions, which are close to washout. The full results are reported in Kuhlmann et al. [36].
Table 4
Summary of Response Times after the Feed Step Disturbances

Distillate Temperature (Composition) Control
Disturbance at time 0.05 [hr]     Response Time [hours]
                                  Adiabatic   Diabatic
Flowrate: +3.0%                   0.68        0.53
VCM Composition: +3.0%            0.72        0.81
Temperature: +1.0°C               0.69        0.54

Bottom Temperature (Composition) Control
Disturbance at time 0.05 [hr]     Response Time [hours]
                                  Adiabatic   Diabatic
Flowrate: +3.0%                   >0.95       0.37
VCM Composition: +3.0%            >0.95       0.58
Temperature: +1.0°C               >0.95       0.56
Design 1 is not controllable at the two chosen operating points if the dilution rate is chosen to be the control input. The effect of the dilution rate on the cell concentration is small, which means that even small disturbances cannot be rejected because of saturation. This confirms the results of Zhao and Skogestad [37]. With Design 2, disturbance rejection is again not possible using the feed flowrate, although it is possible for some sets of conditions if the feed concentration is chosen. By including the recycle and using the outlet flowrate as the control input in Design 3, the system is controllable at all frequencies; if the inlet flowrate is used instead, it is only controllable at high recycle rates.
The results of Kuhlmann et al. [36] show that Designs 3 and 4, with cell recycles, have good controllability properties. A cell recycle leads to a lower fermenter volume, whereas a substrate recycle makes the fermenter controllable at high recycle rates only. These results were confirmed by simulation; the controllability results themselves were obtained using linear frequency dependent indicators. Adding a second feed which does not contain any growth-limiting substrate leads to superior control performance.
6. CONCLUSIONS
The chapter has tried to bring together a set of results aimed at designing nonlinear
processes to have good controllability properties. Comprehensive generic methods have yet
to be developed because of the difficulty in developing approaches to nonlinear problems of
any type. However there are some general points. Optimising polymerisation systems does
run a strong risk of causing severe controllability problems. Optimising for environmental
performance does not necessarily mean poorer control performance particularly if some
structural changes can be made in the equipment design. This would seem to provide scope
for further development. Linear indicators provide the bulk of experience and give reliable results in many cases; however, they are often crude and cannot distinguish between alternatives.
Methodologies which generate alternative structures are beginning to make an impact. The
ability to solve large combinatorial problems gives scope for swifter progress. However
problems of any size become massive very quickly and so simplifications are necessary. A
particular issue then is what sort of complexity is necessary to ensure that useful answers are
produced in which we can have confidence.
References
[1] J. Van Schijndel and E. N. Pistikopoulos, AIChE Symp. Series No. 323, 96 (2000) 99.
[2] E. H. Bristol, IEEE Trans. Auto. Control, AC-11 (1966) 133.
[3] T. C. Nguyen, G. W. Barton, J. D. Perkins and R. D. Johnson, AIChE J., 34 (1988)
1200.
[4] S. Skogestad and M. Morari, Chem. Eng. Sci., 42 (1987) 1765.
[5] O. Weitz and D. R. Lewin, Comput. Chem. Eng., 20 (1996) 325.
[6] J. W. Chang and C. C. Yu, AIChE J., 38 (1992) 521.
[7] S. Skogestad and I. Postlethwaite, Multivariable Feedback Control: Analysis and Design.
Wiley, Chichester, 1996.
[8] M. L. Luyben and C. A. Floudas, 1st IFAC Workshop on Interactions between process
design and process control, London, (1992) 101.
[9] J. D. Perkins and S. P. K. Walsh, Comput. Chem. Eng., 20 (1996) 315.
[10] S. D. Chenery and S. P. K. Walsh, J Proc. Contr., 8 (1998) 165.
[11] T. J. McAvoy, Ind. Eng. Chem. Res., 38 (1999) 2984.
[12] I. K. Kookos and J. D. Perkins, J. Proc. Contr., 12 (2002) 85
[13] J. D. Perkins, Proc. DYCORD+ 89, Maastricht, (1989) 349.
[14] M. Morari, 1st IFAC Workshop on Interactions between process design and process
control, London (1992) 3.
[15] W. D. Seider, D. D. Brengel, A. M. Provost and S. Widagdo, Ind. Eng. Chem. Res., 29
(1990) 805.
[16] R. E. Swaney and I. E. Grossmann, AIChE J., 31 (1985) 621.
[17] V. Dimitriadis and E. N. Pistikopoulos, Ind. Eng. Chem. Res., 34 (1995) 4451.
[18] M. J. Mohideen, J. D. Perkins and E. N. Pistikopoulos. AIChE J., 42 (1996) 2251.
[19] V. Bansal, J. D. Perkins, E. N. Pistikopoulos, R. Ross and J. M. G. van Schijndel,
Comput. Chem. Eng., 24 (2000) 261.
[20] M. L. Luyben and C. A. Floudas. Comput. Chem. Eng., 18 (1994) 933.
[21] V. White, J. D. Perkins and D. M. Espie, Comput. Chem. Eng., 20 (1996) 469.
[22] P. A. Bahri, J. A. Bandoni and J. A. Romagnoli, AIChE J., 42 (1996) 983.
[23] P. A. Bahri, J. A. Bandoni and J. A. Romagnoli, AIChE J., 43 (1997) 997.
[24] V. Bansal, R. Ross, J. D. Perkins and E. N. Pistikopoulos, J. Proc. Contr., 10 (2000) 219.
[25] A. Kuhlmann and I. D. L. Bogle, Comput. Chem. Eng., 21S (1997) S397.
[26] H. Chen and F. Allgower, Automatica, 34 (1998) 1205.
[27] A. Kuhlmann and I. D. L. Bogle, AIChE J., 47 (2000) 2627.
[28] D. R. Lewin and I. D. L. Bogle, Comput. Chem. Eng., 20 (1996) S871.
[29] L. B. Koppel, AIChE J., 28 (1982) 935.
[30] P. B. Sistu and W. B. Bequette, Chem. Eng. Sci., 50 (1995) 921.
[31] E. J. Doedel, A. R. Champneys, T. F. Fairgrieve, Y. A. Kuznetsov, B. Sandstede and X.
Wang, AUTO 97: Continuation and bifurcation software for ordinary differential
equations. Concordia University, Montreal, Canada, 1998.
[32] J. Hagemann, E. S. Fraga and I. D. L. Bogle, In Proc 7th World Congress of Chemical
Engineering, Melbourne (2001).
[33] P. S. Buckley, W. L. Luyben and J. P. Shunta, Design of Distillation Control Systems.
Edward Arnold, New York, 1985.
[34] S. Skogestad, Trans IChemE Part A, (1997) 539.
[35] S. Skogestad, P. Lundstrom and E. W. Jacobsen, AIChE J., 36 (1990) 1777.
[36] C. Kuhlmann, I. D. L. Bogle and Z. Chalabi, Bioprocess Eng., 19 (1998) 53.
[37] Y. Zhao and S. Skogestad, Proc. IFAC Conference on Advanced Control of Chemical
Processes, Kyoto, Japan (1994) 309.
The Integration of Process Design and Control
P. Seferlis and M.C. Georgiadis (Editors)
© 2004 Elsevier B.V. All rights reserved.
Chapter B1
1. INTRODUCTION
During the past decades there has been growing awareness in both academia and industry that
operability issues need to be considered explicitly at the early stages of process design. As a
result, a number of methodologies have been developed for addressing the interactions between
process design and process control (for a review see Van Schijndel and Pistikopoulos [1]). A
large proportion of the work in this field has concentrated on the application of controllability
metrics based, for instance, on open- and closed-loop stability analysis [2], open-loop
output/input achievable performance (Output Controllability Index, OCI) [3] or LQG criteria [4].
Despite these developments, however, it can be observed that a large proportion of the work in
this field:
• has concentrated on the application of metrics (e.g. condition number) that provide some
measure of a system's controllability, but may not relate directly and unambiguously to
real performance requirements;
• does not account for the presence of both time-varying disturbances and time-invariant
(or relatively slowly varying) uncertainties; and
• does not involve selection of the best process design and the best control scheme, taking
into account both discrete and continuous decisions.
Two of the key challenges that lie ahead in the area of process design for operability are (i) the
need for a rigorous and efficient solution of the underlying optimization problem and (ii) the
incorporation of advanced control techniques in a process design framework with the prospect
of improving the process performance.
The aim of this chapter is to give an overview of some recent advances of our group towards
this endeavor, and their application to a typical separation problem. The rest of the chapter
is organized as follows: In the next section an overview of the methods for (i) simultaneous
process and control design under uncertainty and (ii) solution of mixed integer dynamic opti-
mization problems is presented. The application of these two methods to the optimal design
of a distillation column follows. The following section presents the derivation of the explicit
model based optimizing control law and its incorporation in a simultaneous process and control
design framework. The distillation process example is then used to demonstrate the features of
this design methodology. In the last section conclusions are discussed.
\[
\begin{aligned}
\min_{d,\,\delta,\,v_v}\ \ & \mathbb{E}_{\theta}\,\phi\big(\dot{x}(t_f),x(t_f),x_a(t_f),v(t_f),v_v,\theta(t_f),d,\delta,t_f\big) \\
\text{s.t.}\quad & f_d\big(\dot{x}(t),x(t),x_a(t),v(t),v_v,\theta(t),d,\delta\big)=0, \\
& f_a\big(x(t),x_a(t),v(t),v_v,\theta(t),d,\delta\big)=0, \\
& g\big(\dot{x}(t),x(t),x_a(t),v(t),v_v,\theta(t),d,\delta\big)\le 0, \\
& f_0\big(\dot{x}(t_0),x(t_0),x_a(t_0),v(t_0),v_v,\theta(t_0),d,\delta\big)=0, \\
& \delta\in Y=\{0,1\}^{N_\delta},\qquad \theta(t)\in\Theta,\qquad t_0\le t\le t_f
\end{aligned}
\tag{1}
\]
where x denotes the differential state variables of the system, pertaining e.g. to molar hold-ups and
internal energies; x_a includes the algebraic variables, e.g. geometric characteristics, thermodynamic
variables and internal flowrates; δ ∈ Y = {0,1}^{N_δ} comprises the binary variables for the
process and the control structure (corresponding e.g. to the number of trays in a distillation column,
or to whether a manipulated variable is paired with a particular controlled variable or not); v
denotes the vector of time-varying manipulated variables, such as the utility flowrates; v_v is the
set of time-invariant operating variables (e.g. controller set-points); d are the design variables
that cannot be readjusted during operation (e.g. equipment sizes); θ(t) are the bounded uncertain
parameters, which can be time-varying or time-invariant; φ is usually an economic index of
performance that may include weights on the operability or the environmental impact of the plant; f_d
represents the differential equations corresponding e.g. to mass and energy balances; f_a denotes
the algebraic equations pertaining e.g. to thermodynamic and hydraulic relations; f_0 are the
initial conditions of the dynamic system, e.g. the system is initially at steady state: ẋ = 0; g ≤ 0
represents the set of constraints (end-point and path) that must be satisfied for feasible operation
(e.g. purity specifications, environmental restrictions). Note (i) that the objective function
in problem (1) is the expectation over the uncertainty space of the performance index of the
system, and (ii) that the operability constraints at the solution of the optimization problem have
to be satisfied for every possible uncertainty realization. These two features, in addition to the
presence of the binary optimization variables, make the design problem (1) a stochastic infinite-
dimensional mixed-integer dynamic optimization problem. The solution of such a problem is a
challenging task. A general algorithmic framework for solving (1) is shown schematically in
Figure 1. The only approximation that this framework makes is to reduce the expectation over
the uncertainty space to a set of discrete probability scenarios, each allocated a different
weight of occurrence. The steps of this algorithm can be summarized as follows [5-7]:
Step 1. Choose an initial set of scenarios for the uncertain parameters.
Step 2. For the current set of scenarios, determine the optimal process and control design by
solving the (multi-period) mixed-integer dynamic optimization (MIDO) problem:
\[
\min_{d,\,\delta,\,v_v^i}\ \sum_{i=1}^{n_s} w_i\,\phi\big(\dot{x}^i(t_f),x^i(t_f),x_a^i(t_f),v^i(t_f),v_v^i,\theta^i(t_f),d,\delta,t_f\big)
\tag{2}
\]
\[
\text{s.t.}\quad f_d\big(\dot{x}^i(t),x^i(t),x_a^i(t),v^i(t),v_v^i,\theta^i(t),d,\delta\big)=0,\qquad i=1,\dots,n_s
\]
where i is the index over the scenarios of the uncertain parameters θ(t), which can be time-varying or
time-invariant; n_s is the number of scenarios; and w_i, i = 1,...,n_s, are the discrete probabilities of the
selected scenarios (Σ_{i=1}^{n_s} w_i = 1);
Step 3. Test the process and control design from Step 2 for feasibility over the whole range
of the uncertain parameters by solving the dynamic feasibility test problem:
Figure 1. Decomposition algorithm for Process and Control design under uncertainty
If χ(d, δ) ≤ 0, feasible operation can be ensured dynamically for all values of θ within the
given ranges. In this case, the algorithm terminates; otherwise, the solution of (3) defines a
critical scenario that is added to the current set of scenarios before returning to Step 2.
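Steps 1-3 above can be sketched on a deliberately simple stand-in problem: choose an equipment size d that must cover every uncertain load θ in [6.0, 6.6] at minimum cost. The functions `solve_design` and `feasibility_test` below are hypothetical one-line stand-ins for the multi-period MIDO and dynamic feasibility solvers, not the actual algorithms of this chapter:

```python
# Toy sketch of the scenario-decomposition algorithm (Steps 1-3).
# Stand-in "design problem": size d must cover every load theta in [6.0, 6.6].

def solve_design(scenarios, weights):
    """Multi-period 'design': cheapest d feasible for the current scenarios.
    Feasibility here simply requires d >= theta for every scenario."""
    d = max(scenarios)                      # smallest size covering all scenarios
    cost = sum(w * d for w in weights)      # expected cost (here just d)
    return d, cost

def feasibility_test(d, theta_lo=6.0, theta_hi=6.6):
    """chi(d) = max over theta of the worst constraint violation g = theta - d."""
    theta_crit = theta_hi                   # g is increasing in theta
    chi = theta_crit - d
    return chi, theta_crit

scenarios, weights = [6.0], [1.0]           # Step 1: initial scenario set
while True:
    d, cost = solve_design(scenarios, weights)      # Step 2: multi-period design
    chi, theta_crit = feasibility_test(d)           # Step 3: feasibility test
    if chi <= 0:
        break                               # feasible for all theta: terminate
    scenarios.append(theta_crit)            # add critical scenario, re-design
    weights = [1.0 / len(scenarios)] * len(scenarios)

print(d)   # 6.6: the design sized for the critical (upper-bound) scenario
```

One pass of the loop suffices here: the initial nominal-scenario design d = 6.0 fails the feasibility test, the critical scenario θ = 6.6 is appended, and the re-designed d = 6.6 passes.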
Remarks
1. The formulation (1) is an exact closed-loop, dynamic analogue of the steady-state prob-
lem of optimal design with a fixed degree of flexibility [8]. The solution strategy shown in
Figure 1 and described above is a closed-loop dynamic analogue of the flexible design
algorithm of Grossmann et al. [9].
2. The integrated design and control problem requires the solution of MIDO problems in
steps 2 and 3. Until recently, there were no reliable methods for dealing with such prob-
lems. In the next paragraph a newly developed MIDO algorithm [10,11] is outlined.
\[
\min_{d,\,\delta,\,v_v}\ \phi\big(\dot{x}(t_f),x(t_f),x_a(t_f),v_v,d,\delta,t_f\big)
\tag{4}
\]
\[
\begin{aligned}
\text{s.t.}\quad & 0=f_d\big(\dot{x}(t),x(t),x_a(t),v_v,d,\delta,t\big), \\
& 0=f_a\big(x(t),x_a(t),v_v,d,\delta,t\big), \\
& 0=f_0\big(\dot{x}(t_0),x(t_0),x_a(t_0),v_v,d,\delta,t_0\big), \\
& 0\ge g\big(\dot{x}(t_f),x(t_f),x_a(t_f),v_v,d,\delta,t_f\big), \\
& t_0\le t\le t_f
\end{aligned}
\]
The binary variables δ participate only linearly in the objective function, the differential
system and the constraints. Here, the path constraints (e.g. a minimum allowable
temperature difference in the heat exchangers over the complete horizon) are equivalently trans-
formed to end-point constraints [12]. The approach reviewed here is based on Generalized Ben-
ders Decomposition (GBD), where the original MIDO problem is decomposed into a primal and a
master subproblem. The primal problem is constructed by fixing the binaries to a specific value
δ = δ^K. Problem (4) then becomes an optimal control problem. In GBD-based approaches the
master problem is constructed using the dual information of the primal at the optimum solution.
The dual information is embedded in the Lagrange multipliers μ of the constraints g and the
adjoint time-dependent variables λ(t), ρ(t) that are associated with the differential system of
equations, i.e. f_d, f_a. While the values of the Lagrange multipliers are available at the primal
problem solution, the evaluation of the adjoint variables requires an extra integration of the
so-called adjoint DAE system. This differential system has the form:
so-called adjoint DAE system. This differential system has the form:
"--^•m~[ltf-^ (5
>
<i« •*/>-<>?
Equation 5 involves a backwards integration and can be computationally expensive. After the
adjoint functions are calculated the master problem is constructed and has the following form:
\[
\min_{\delta,\,\eta}\ \eta
\tag{6}
\]
subject to cuts constructed from the primal solutions and the associated dual information. The
need for the demanding adjoint DAE system solution renders the algorithm difficult to implement.
It can be eliminated by introducing an extra set of continuous optimization variables δ_d in the
primal problem, fixed through the equality constraint δ_d − δ^K = 0. This gives rise to the
following primal optimal control problem:
\[
\min\ \phi\qquad \text{s.t. the constraints of (4) with }\delta\text{ replaced by }\delta_d,\quad \delta_d-\delta^K=0
\tag{7}
\]
• Fix the values of the binary variables, δ = δ^K, and solve a standard dynamic optimization
problem (eqn. (4), Kth primal problem). An upper bound, UB, is thus obtained.
• Re-solve the primal problem at the optimal solution (eqn. (7)) with additional constraints
of the form δ_d − δ^K = 0, where δ_d is a set of continuous search variables and δ^K is the set
of (complicating) binary variables. Convergence is achieved in one iteration. Obtain the
Lagrange multipliers, Ω^K, corresponding to the new constraints.
• Construct the Kth relaxed master problem from the Kth primal solution, φ^K, and the La-
grange multipliers, Ω^K (eqn. (8)). This corresponds to a mixed-integer linear program
(MILP). The solution of the master, η, gives a lower bound, LB, on the MIDO solution.
If UB − LB is less than a specified tolerance ε, or the master problem is infeasible, the
algorithm terminates and the solution of the MIDO problem is given by UB. Otherwise,
set K = K + 1, set δ^{K+1} equal to the integer solution of the master, and return to step 1.
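The primal-master bound iteration can be illustrated on a tiny toy problem (one binary δ, one continuous v) whose primal is solved analytically. This is a sketch of the cut-based decomposition idea under simplifying assumptions, not the MIDO implementation of the chapter; the multiplier Ω of the constraint δ_d − δ^K = 0 equals the sensitivity of the optimal primal objective with respect to δ_d:

```python
# Minimal sketch of a GBD-style primal/master iteration, on a toy problem:
#   min over delta in {0,1}, v in R of  (v-1)^2 + 3*delta + (v-2*delta)^2

def primal(delta):
    v = (1 + 2 * delta) / 2.0                    # stationarity in v
    phi = (v - 1) ** 2 + 3 * delta + (v - 2 * delta) ** 2
    omega = 2 * (2 * delta - 1) + 3              # d(phi*)/d(delta_d) at delta
    return phi, omega

cuts, UB, best_delta = [], float("inf"), None
delta = 1                                        # initial integer guess
for it in range(10):
    phi, omega = primal(delta)                   # primal gives an upper bound
    if phi < UB:
        UB, best_delta = phi, delta
    cuts.append((phi, omega, delta))
    # master: min eta s.t. eta >= phi_K + Omega_K*(delta - delta_K),
    # solved here by enumerating the two binary values
    eta = {d: max(p + o * (d - dk) for (p, o, dk) in cuts) for d in (0, 1)}
    delta_new = min(eta, key=eta.get)
    LB = eta[delta_new]                          # master gives a lower bound
    if UB - LB < 1e-9:                           # bounds have converged
        break
    delta = delta_new

print(best_delta, UB)   # 0 0.5
```

Starting from δ = 1 (primal value 3.5), the first cut drives the master to δ = 0; the second primal gives 0.5, the updated master confirms LB = UB = 0.5, and the iteration stops.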
Here, an example is presented to demonstrate the features of the simultaneous process and
control design framework and the use of mixed-integer dynamic optimization within it.
This example was solved by Bansal et al. [7]. The system under consideration,
adapted from one presented by Viswanathan and Grossmann [14], is shown in Figure 3. A
mixture of benzene and toluene is to be separated into a top product with at least 98 mol%
benzene and a bottom product with no more than 5 mol% toluene. The problem objectives and
characteristics are shown in Table 1.
A rigorous model has been developed to represent the process dynamic behaviour, the
continuous and discrete decisions, and the process constraints. Binary variables δ_k^f and δ_k^r are
incorporated to account for the locations of the feed and reflux trays, respectively, where
δ_k^f = 1 if all the feed enters tray k, and is zero otherwise, and δ_k^r = 1 if the reflux enters tray
k, and is zero otherwise. Additionally, a set of control binary variables δ^c is introduced; these
are associated with each MV-CV pairing and are unity when the pairing exists and zero other-
wise. The modelling of the control structure selection is carried out similarly to Narraway and
Perkins [15]. These features lead to a mixed-integer dynamic distillation model. The principal
differential-algebraic equations (DAEs) for the trays are given below. A full list of nomencla-
ture, values of the parameters, details of the DAEs for the reboiler, condenser, reflux drum and
control scheme, cost correlations for the objective function, and inequality path constraints can
be found in Bansal et al. [7].
For k = 1,..., Ntray, where Ntray is an upper bound on the number of trays required, and
i = 1,..., NC, where NC is the number of components:
\[
\Big(\sum_{k'=k}^{N_{tray}}\delta^r_{k'}\Big)\frac{dM_{i,k}}{dt} = L_{k+1}\,x_{i,k+1} + V_{k-1}\,y_{i,k-1} + F_k\,z_{i,f} + R_k\,x_{i,d} - L_k\,x_{i,k} - V_k\,y_{i,k}
\tag{9}
\]
Volume constraints:
\[
\frac{M^L_k}{\rho^L_k} + \frac{M^V_k}{\rho^V_k} = Vol_{tray}
\tag{13}
\]
Definition of Murphree tray efficiencies:
\[
y_{i,k} = y_{i,k-1} + E_{i,k}\,\big(y^{*}_{i,k} - y_{i,k-1}\big)
\tag{14}
\]
Liquid levels:
\[
Level_k = \frac{M^L_k}{\rho^L_k\,A_{tray}}
\tag{18}
\]
Table 1
Formulation of the process and control design optimization for the binary distillation example

minimize    Expected Total Annualized Cost
s.t.        Differential-algebraic process model

Inequality constraints:
  Product purities:                        x_benz,D(t) >= 0.98, x_benz,B(t) <= 0.05
  Flooding restriction/minimum diameter:   D_c - D_c^min(t) >= 0
  Minimum temperature difference:          T_out,R(t) - T_B(t) >= 0 (reboiler)
  Ensure atmospheric operation:            P_c >= 1.1013 bar
  Allowable levels in reflux drum/reboiler:
      0.1 D_drum <= level_d <= 0.9 D_drum, 0.1 D_reb <= level_r <= 0.9 D_reb
  Limit on outlet cooling water temperature:  T_w,out <= 323.15 K
  Fractional entrainment limit, limit on the reboiler heat flux, and limits on the
  utility flowrates

Degrees of freedom:
  Continuous process design variables d:  tray diameter D_c; heat transfer areas
      A_R (reboiler), A_C (condenser); set-points (controlled variables, CVs):
      x^set_benz,D, x^set_benz,B, level_d, level_b, P_cond
  Manipulated variables (MVs):  reflux flow R, distillate flow D, cooling water
      flow F_w, steam flow F_st, bottoms flow B
  Continuous control tunings:  gains K_c, reset times τ
  Control scheme:  multiloop proportional-integral (PI) controllers
  Discrete decisions δ:  number of trays; reflux tray location δ_k^r; feed tray
      location δ_k^f; pairings of manipulated and controlled variables δ^c

Disturbances/uncertainties:  feed composition z_benz,f(t), feed flowrate, cooling
    water temperature T_w,in
Time horizon:  t_0 <= t <= t_f, with t_0 = 0, t_f = 720 min
Tray pressure drop:
\[
P_{k-1} - P_k = 10^{-5}\Big(\sum_{k'=k}^{N_{tray}}\delta^r_{k'}\Big)\big(\alpha\,vel^2_{k-1}\,\rho^V_{k-1} + \beta\,\rho^L_k\,g\,Level_k\big)
\tag{20}
\]
Vapor velocities:
\[
vel_{k-1} = \frac{1}{A_{tray}}\cdot\frac{V_{k-1}}{\rho^V_{k-1}}
\tag{21}
\]
Flooding velocity:
/ / \ 0.2 / _, x O.5
Empirical coefficient:
K\k = 0.0105 + 0.1496 • Space 0 7 5 5 • exp (-1.463FLV fc 0(s42 ) . (25)
D^ = (^_pj . (26)
Klt,k = °-9-A?JCk- (27)
p^ • r looajac • veli
Only one tray each receives feed and reflux; feed must enter below reflux:
\[
\sum_{k=1}^{N_{tray}}\delta^f_k = \sum_{k=1}^{N_{tray}}\delta^r_k = 1
\tag{31}
\]
\[
\delta^f_k - \sum_{k'=k}^{N_{tray}}\delta^r_{k'} \le 0,\qquad k=1,\dots,N_{tray}
\tag{32}
\]
The complete distillation model constitutes a system of [Ntray(7NC + 27) + 15NC + 56]
DAEs in [Ntray(7NC + 27) + 15NC + 64] variables (after specification of the feed and util-
ities' inlet conditions), of which [Ntray(NC + 1) + 3NC + 5] are differential state variables.
For the case study in this work with Ntray = 30 and NC = 2, there are 1316 DAEs in 1324
variables (101 states). The remaining eight variables consist of the three continuous design vari-
ables for optimization (column diameter, surface areas of the reboiler and the condenser), and
the five manipulated variables (reflux, distillate, cooling water, steam and bottoms flow rates),
whose values are determined by the tuning parameters and the set-points of the control scheme
used.
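The quoted model sizes follow directly from the formulas above; a quick arithmetic check (not part of the original work) confirms them:

```python
# Sanity check of the model-size formulas: with Ntray = 30 and NC = 2 the
# counts should come to 1316 DAEs, 1324 variables and 101 differential states.
def model_size(Ntray, NC):
    n_dae   = Ntray * (7 * NC + 27) + 15 * NC + 56
    n_var   = Ntray * (7 * NC + 27) + 15 * NC + 64
    n_state = Ntray * (NC + 1) + 3 * NC + 5
    return n_dae, n_var, n_state

print(model_size(30, 2))   # (1316, 1324, 101)
```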
Step 1
An initial set of two scenarios, [6,6.6], is chosen with weights [0.75,0.25]. These correspond to
the nominal and upper values, respectively, of the feed flow rate.
Step 2
The MIDO problem (2) for this example consists of approximately 2700 DAEs and 216
inequality path constraints, with 85 binary search variables (thirty for the feed location,
thirty for the reflux location and twenty-five for the control structure selection) and 18 continuous search
variables (column diameter, surface areas of the reboiler and condenser, and gains, reset times
and set-points for each of the five control loops). The problem was solved using the algorithm
outlined in the section Mixed Integer Dynamic Optimization, with gPROMS/gOPT (Process
Systems Enterprise Ltd., 1999) used for solving the dynamic optimization primal problems and
GAMS/CPLEX [16] for the MILP master problems.
Step 3
In this example there are no time-invariant operating variables, and so the dynamic feasibility
test (3) reduces to a conventional dynamic optimization problem with a single maximization
operator in the objective. Testing the design and control system resulting from Step 2 gives
χ = 0, indicating that there are no more critical scenarios, so the algorithm terminates.
Table 2 shows the iterations carried out between the primal and the master problems. The
economically optimal process and control design that gives feasible operation for all feed flow
rates in the range 6-6.6 kmol min^-1 is summarized in Table 3. Table 3 also compares the
process design with the optimal steady-state, but dynamically inoperable, nominal and flexible
designs. The latter was obtained through application of the steady-state analogue of the
algorithm described in §2.1 [17]. It can be seen that accommodating feed flow rates above
the nominal value of 6 kmol min^-1 requires more over-design when the dynamic behavior of
the system is accounted for than when only steady-state effects are considered. This illustrates
a weakness of considering design and control in a sequential manner. Figures 4 and 5 show
the dynamic simulations of the controlled compositions that are given as part of the solution
of the MIDO problem. Notice how one of the compositions, in this case the distillate, is tightly
controlled relative to the other. This effect of controlling both compositions with one tight loop
and one loose loop is due to the negative interaction of the two control loops, and is a common
Figure 4. Controlled distillate composition at a feed flow rate of 6.6 kmol min^-1.
Table 2
Progress of iterations for the multi-period MIDO design and control problem

Iteration number          1        2        3        4        5
Primal solutions:
 Discrete decisions
  No. of trays            25       24       23       22       24
  Feed tray               15       14       13       13       13
  Control scheme*         2        1        1        1        1
 Process design
  D_col (m)               2.03     1.99     1.99     2.00     2.00
  S_reb (m^2)             127.6    134.2    140.0    138.9    138.0
  S_cond (m^2)            91.45    85.03    84.13    84.02    85.78
 Controllers' gains
  K_u,1 (x_1,d)           6.70     33.74    48.85    70.00    32.10
  K_u,2 (Level_d)         -105.0   -41.29   -18.64   -24.55   -25.39
  K_u,3 (P_c)‡            -28.00   -31.44   -29.24   -26.57   -36.16
  K_u,4 (x_1,b)           9.71     -2.22    -2.38    -3.37    -0.93
  K_u,5 (Level_b)         -1042    -600.0   -560.1   -580.5   -550.0
 Reset times
  τ_u,1                   160.0    87.3     100.0    143.2    77.2
  τ_u,2                   530.0    568.2    568.9    568.9    684.5
  τ_u,3                   9935     3483     3615     5032     2809
  τ_u,4                   2325     59.8     66.3     61.7     150.6
  τ_u,5                   663.6    693.7    662.1    664.2    695.2
 Set-points
  set_1                   0.9883   0.9849   0.9843   0.9835   0.9853
  set_2                   0.5368   0.0668   0.0773   0.0746   0.0703
  set_3                   1.1944   1.2800   1.3022   1.3164   1.2694
  set_4                   0.0182   0.0223   0.0250   0.0293   0.0179
  set_5                   0.6002   0.8995   0.8994   0.8980   0.8995
 Costs ($100k yr^-1)
  Capital                 1.941    1.883    1.858    1.823    1.894
  Operating(1)†           6.367    6.268    6.287    6.334    6.269
  Operating(2)†           7.220    7.122    7.136    7.194    7.097
  Expected                8.521    8.364    8.357    8.372    8.370
  UB                      8.521    8.364    8.357    8.357    8.357
Master solutions:
  No. of trays            24       23       22       24       22
  Feed tray               14       13       13       13       11
  Control scheme          1        1        1        1        1
  LB                      8.242    8.282    8.341    8.355    8.357
  UB - LB < 1e-4?         No       No       No       No       Yes: STOP

* Control scheme 1: R - x_1,d, D - Level_d, F_w - P_c, F_st - x_1,b, B - Level_b.
* Control scheme 2: R - x_1,d, D - Level_d, F_w - P_c, B - x_1,b, F_st - Level_b.
‡ For K_3,3, the cooling water flow rate is scaled (0.01 F_w).
† Period 1: nominal feed flow rate, F = 6 kmol min^-1. Period 2: feed flow rate at
  upper bound, F = 6.6 kmol min^-1.
Figure 5. Controlled bottoms composition at a feed flow rate of 6.6 kmol min^-1.
Table 3
Steady-state vs. dynamically operable design

Variable           SS nominal   SS flexible   Dynamic
No. of trays       23           23            26
Feed location      12           12            14
D_col (m)          1.82         1.91          1.99
S_reb (m^2)        113          116           134
S_cond (m^2)       83           83            88
Capital cost       169          175           195
Operating cost     591          607           641
Total ($k yr^-1)   760          782           836
actions in the space of the state measurements. Thus, a simple explicit state-feedback controller
is derived that moves the embedded on-line control optimization off-line and preserves all the
beneficial features of MPC. This control technique is outlined here, while the next
paragraph shows how it is incorporated in a process and control design framework.
Consider a linear discrete-time state-space description of a dynamic plant:
\[
x_{t+1} = A_1 x_t + A_2 v_t + W_1\theta_t,\qquad
y_t = B_1 x_t + B_2 v_t + W_2\theta_t
\tag{33}
\]
where y_t ∈ \mathbb{R}^m is the vector of the output variables that we aim to drive to their set-points. The
following receding-horizon open-loop optimal control problem is formulated in order to derive
the explicit model-based control law for such a system [25]:
\[
\min_{v_t,\dots,v_{t+N-1}}\ \sum_{k=0}^{N-1}\Big[(y_{t+k|t}-y^{set})^T Q\,(y_{t+k|t}-y^{set}) + v_{t+k}^T R\,v_{t+k}\Big] + x_{t+N|t}^T P\,x_{t+N|t}
\tag{34}
\]
subject to the model equations and the imposed constraints. Substituting
x_{t+k|t} = A_1^k x^* + \sum_{j=0}^{k-1} A_1^j A_2\,v_{t+k-1-j} for the states and treating the current states
x^* ∈ \mathbb{R}^n as parameters, problem (34) is recast as a multiparametric quadratic program (mp-QP)
of the form:
\[
\phi(x^*) = \min_{v}\Big\{L_1 + L_2^T v + L_3^T x^* + \tfrac{1}{2}\,v^T L_4\,v + (x^*)^T L_5\,v + \tfrac{1}{2}\,(x^*)^T L_6\,x^*\Big\}
\tag{35}
\]
\[
\text{s.t.}\quad G_1 v \le G_2 + G_3 x^*
\]
The solution of (35) [33] consists of a set of affine control functions in terms of the states and
a set of regions where these functions are valid. This mapping of the manipulated inputs in
the state space constitutes a feedback parametric control law for the system. The mathematical
form of the parametric controller is as follows:
\[
v_t = A^c x^* + B^c \quad \text{if}\quad CR_1^c\,x^* \le CR_2^c,\qquad c = 1,\dots,N^c
\tag{36}
\]
where A^c, CR_1^c and B^c, CR_2^c are constant vectors and matrices, respectively; the index c signifies
that different control expressions apply in different regions of the state space, and N^c denotes
the number of such regions. The vector v_t is the first element of the optimal control sequence,
whereas similar expressions are derived for the rest of the control elements. Note that only the
first element of the open-loop control trajectory is implemented on the plant. The control action
at the next time interval corresponds to the first element of the control sequence pertaining to
the newly updated state realization. In realistic process applications the state of the system is
estimated from a reduced set of measurements. Some of the choices for a state observer include
a Kalman Filter, an extended Kalman Filter [34] and a Moving Horizon Estimator [35]. In this
work we assume that we have a perfect estimation of the values of the current states.
The performance of this model based parametric controller is optimal in terms of the given
performance criteria, the plant model and the imposed constraints. The implementation of
the parametric controller is based merely on simple function evaluations, rather than solving an
optimization problem on-line which makes it attractive for a wide range of systems. A summary
of the principle of parametric controllers is shown in Figure 6.
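The on-line evaluation described above, point location followed by an affine function evaluation, can be sketched as follows. The regions and gains below are invented for illustration; a real explicit controller would come from an mp-QP solver, not be written by hand:

```python
import numpy as np

# Sketch of how an explicit (parametric) controller is evaluated on-line:
# locate the critical region containing the current state, then apply that
# region's affine law v = Ac @ x + bc.
regions = [  # each region: {x : H @ x <= h}
    (np.array([[ 1.0]]), np.array([0.0])),   # CR1: x <= 0
    (np.array([[-1.0]]), np.array([0.0])),   # CR2: x >= 0
]
laws = [
    (np.array([[-0.5]]), np.array([0.1])),   # v = -0.5 x + 0.1 in CR1
    (np.array([[-2.0]]), np.array([0.1])),   # v = -2.0 x + 0.1 in CR2
]

def parametric_control(x):
    """Point location + affine evaluation (no on-line optimization)."""
    for (H, h), (Ac, bc) in zip(regions, laws):
        if np.all(H @ x <= h + 1e-12):
            return Ac @ x + bc
    raise ValueError("state outside the explored state space")

print(parametric_control(np.array([0.2])))
```

This is exactly why the implementation reduces to "simple function evaluations": the expensive geometry (computing the regions) was done off-line.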
4.2. Extension of the Design and Control framework using Parametric Controllers
To illustrate the simultaneous process and control design paradigm, consider the binary dis-
tillation process described in section 3. The differences between the model used in this section
and the one in section 3 are summarized here: (i) ideal thermodynamics with constant relative
volatility are considered; (ii) fast temperature/energy dynamics are assumed; (iii) perfect inventory
and pressure control is assumed, thus leaving as free manipulated variables only the reflux
flowrate and the steam or vapor boil-up flowrate; (iv) a subset of the inequality constraints de-
scribed in Table 1 is enforced, i.e. purity specifications, flooding and thermodynamic driving-
force constraints; (v) the system is subject to a high-frequency sinusoidal disturbance
in the feed composition with an uncertain mean value. The goal is to obtain the economically
optimum process and control design for this system that ensures feasibility, while accounting
directly for both continuous and discrete design decisions. The design optimization problem in
this study features a closed-loop dynamic system that incorporates a receding horizon model
based controller, rather than a conventional PI controller used in previous works [7,36,6]. The
mathematical representation of this problem is:
\[
\min_{d,\,\delta,\,q}\ \mathbb{E}_{\theta}\,\phi\big(\dot{x}(t_f),x(t_f),x_a(t_f),v(t_f),\theta(t_f),d,\delta,t_f\big)
\]
\[
\begin{aligned}
\text{s.t.}\quad & 0 = f_d\big(\dot{x}(t),x(t),x_a(t),v(t),\theta(t),d,\delta,t\big) \\
& 0 = f_a\big(x(t),x_a(t),v(t),\theta(t),d,\delta,t\big) \\
& v(t) = f_v\big(x(t),x_a(t),v(t),\theta(t),d,\delta,t\big) \\
& 0 \ge g\big(\dot{x}(t_j),x(t_j),x_a(t_j),v(t_j),\theta(t_j),d,\delta,t_j\big),\quad j=1,\dots,N_f,\quad t_0\le t\le t_f
\end{aligned}
\tag{37}
\]
where the controller equations f_v are given by the receding-horizon problem:
\[
\begin{aligned}
\min_{v}\ & \sum_{k=0}^{N-1}\Big[(y_{t+k|t}-y^{set})^T Q(q)\,(y_{t+k|t}-y^{set}) + v_{t+k}^T R(q)\,v_{t+k}\Big] + x_{t+N|t}^T P\,x_{t+N|t} \\
\text{s.t.}\quad & x_{t+k+1|t} = A_1(d,\delta)\,x_{t+k|t} + A_2(d,\delta)\,v_{t+k} + W_1(d,\delta)\,\theta_t \\
& y_{t+k|t} = B_1(d,\delta)\,x_{t+k|t} + B_2(d,\delta)\,v_{t+k} + W_2(d,\delta)\,\theta_t \\
& 0 \ge C_0(d,\delta)\,y_{t+k|t} + C_1(d,\delta)\,x_{t+k|t} + C_2(d,\delta)\,v_{t+k} + C_3(d,\delta),\quad k=0,1,\dots,N-1 \\
& 0 \ge D_1(d,\delta)\,x_{t+N|t} + D_2(d,\delta)
\end{aligned}
\tag{38}
\]
\[
A_1 = e^{A_1^c\,\Delta t},\qquad A_2 = \Big(\int_0^{\Delta t} e^{A_1^c\,\tau}\,d\tau\Big) A_2^c
\]
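Assuming the standard zero-order-hold conversion, both discrete matrices can be obtained from a single matrix exponential of an augmented matrix whose top blocks are exactly A_1 and A_2. This is a generic numerical sketch, not code from the chapter:

```python
import numpy as np
from scipy.linalg import expm

# Zero-order-hold discretization via the augmented matrix [[A1c, A2c], [0, 0]]:
# expm of it, scaled by dt, has top blocks A1 = e^{A1c*dt} and
# A2 = (int_0^dt e^{A1c*tau} dtau) @ A2c.
def zoh_discretize(A1c, A2c, dt):
    n, m = A1c.shape[0], A2c.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = A1c, A2c
    Md = expm(M * dt)
    return Md[:n, :n], Md[:n, n:]      # discrete A1, A2

# example: scalar system xdot = -x + v, with dt = 0.3
A1, A2 = zoh_discretize(np.array([[-1.0]]), np.array([[1.0]]), 0.3)
```

For the scalar example, A1 = e^{-0.3} and A2 = 1 - e^{-0.3}, which matches the closed-form integral.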
where Δt is the sampling time used to convert the linear continuous-time dynamic
system to a discrete-time representation. W_1, B_1, C_i and W_1^c, B_1^c, C_i^c are defined accordingly;
the superscript c denotes the matrices of the continuous-time linear dynamic system, while the
matrices A, B, C, W without this superscript correspond to its discrete-time counterpart
[37]. q is the vector of control tuning parameters embedded in the quadratic cost function of
(38). For our example x = \{x_B, x_D, M_1,\dots,M_{N_{trays}}, x_{benz,1},\dots,x_{benz,N_{trays}}\}, where M denotes
the tray hold-up, x_B, x_D the benzene mole fractions in the bottom and top products, and x_{benz,k}
the benzene mole fraction on each tray; y is the vector of output controlled variables,
i.e. the top and bottom mole fractions; v represents here the vector of the manipulated variables,
i.e. the reflux flowrate Refl and the boil-up flowrate V, which are fully determined from the controller
equations (38). Note also that here we assume that the controller has complete information
about the values of the current system states and the values of the disturbances. This information
can be obtained from direct measurements or implicitly, via inferential measurements and the
use of a Kalman Filter estimator. x_l, v_l, θ_l is the linearization point of the system, and y^set is the
vector of output set-points. The input disturbance in the feed composition is modelled as a
sinusoid about an uncertain mean value:
\[
z_{benz,f}(t) = \bar{z}_{benz,f} + a\,\sin(\omega t)
\]
Methods for the treatment of uncertainty and for solving the underlying mixed integer dynamic
optimization problem have been shown in sections 2.1 and 2.2. Here we present an approach
for integrating the receding horizon controller into the simultaneous process and control design
framework.
Table 4
Initial linearization points for the structure δ^r_26 = 1, δ^f_12 = 1

         z_f    Refl_l   V_l     x_benz,D,l   x_benz,B,l   A_c,l   –      D_col
Lin. α   0.45   3.218    5.459   0.98         0.191        110     280    1.65
Lin. β   0.5    3.199    5.701   0.98         0.191        110     280    1.65
error introduced by truncating the full (2·N_trays + 2)-state model with transfer function H_f(jω) to an
n = 4 state representation H_r(jω) is computed from:
\[
\epsilon = \frac{\|H_f - H_r\|_{H_\infty}}{\|H_f\|_{H_\infty}} \le \frac{2\sum_{i=n+1}^{2N_{trays}+2}\sigma^H_i}{\|H_f\|_{H_\infty}}
\tag{42}
\]
where σ^H_i is the ith Hankel singular value of the original model, while the H-infinity norm of a
transfer function is defined as \|H\|_{H_\infty} = \max_\omega \sigma_{max}(H(j\omega)), where σ_max is the maximum
singular value of the matrix H. Compute the discrete model matrices, keep the control designs fixed
and formulate an open-loop receding horizon problem (38). For this particular process design
the appropriate values of sampling time and time horizon according to heuristics (Seborg et al.
[29], ch. 27) are Δt = 0.3 min and N = 6, respectively. Note that these values are allowed to
change during the solution procedure according to the current design and linearization point.
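The Hankel-singular-value bookkeeping behind the truncation-error bound can be sketched as follows; the 4-state model is an arbitrary stable stand-in, not the column model:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hankel singular values of a stable model (A, B, C) from the controllability
# and observability Gramians, then the balanced-truncation error bound
# ||Hf - Hr||_Hinf <= 2 * (sum of the discarded sigma_i).
def hankel_singular_values(A, B, C):
    Wc = solve_continuous_lyapunov(A, -B @ B.T)    # A Wc + Wc A^T + B B^T = 0
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # A^T Wo + Wo A + C^T C = 0
    eigs = np.linalg.eigvals(Wc @ Wo)
    return np.sort(np.sqrt(np.abs(eigs.real)))[::-1]

A = np.diag([-1.0, -2.0, -5.0, -20.0])             # illustrative stable system
B = np.ones((4, 1)); C = np.ones((1, 4))
sv = hankel_singular_values(A, B, C)
bound_n2 = 2 * sv[2:].sum()                        # error bound for a 2-state truncation
```

Fast, weakly coupled modes contribute small Hankel singular values, which is exactly why a 4-state reduction of the full column model can carry a small relative error.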
The terminal cost P is the solution of the Lyapunov discrete algebraic equation, while only the
purity constraints are considered in the controller design. Instead of solving this problem on-
line in the traditional MPC fashion, parametric programming [24] is used to derive the explicit
state feedback control law for the plant. The current states x_{t|t}, the set-points y^set and the dis-
turbances θ_t are treated as parameters and the control inputs as optimization variables; thereby,
problem (38) is recast as a multiparametric quadratic program (mp-QP). The solution of this
program results in a parametric controller (ParCo) for the distillation process that comprises a
set of piecewise affine control functions and the critical regions in the state space where these
functions hold:
\[
[V_t,\ Refl_t]^T =
\begin{cases}
A^c\,x^r(t) + C^c\,z_{benz,f} + B^c, & \text{if } CR^c_\alpha\big(x^r,x^{set}_{benz,D},x^{set}_{benz,B}\big)\le 0,\quad c=1,\dots,N^c_\alpha,\quad z_{benz,f}\in[0.37,0.475] \\
\bar{A}^c\,x^r(t) + \bar{C}^c\,z_{benz,f} + \bar{B}^c, & \text{if } CR^c_\beta\big(x^r,x^{set}_{benz,D},x^{set}_{benz,B}\big)\le 0,\quad c=1,\dots,N^c_\beta,\quad z_{benz,f}\in[0.475,0.6]
\end{cases}
\tag{43}
\]
where x^r are the reduced states; each critical region CR^c is described by a set of linear
inequalities in the reduced states, the set-points and the disturbance.
Note that (43) replaces (38) exactly. The next step is to substitute (43) into (37), (39) and treat
only d as optimization variables in the solution of the resulting multiperiod primal problem:
\[
\phi^K = \min_{d}\ \underbrace{\Big\{(C_{column}+C_{reb}+C_{cond}) + \sum_{i=1}^{2} OpCost^i\Big\}}_{TotalCost}
\tag{44}
\]
\[
\begin{aligned}
\text{s.t.}\quad & 0 = f_d\big(\dot{x}^i(t),x^i(t),x_a^i(t),[V(t),Refl(t)]^i,d,\delta^K\big) \\
& 0 = f_a(\cdot),\qquad 0 = f_0(\cdot),\qquad 0 \ge g(\cdot),\qquad i = 1,2
\end{aligned}
\]
where [V(t), Refl(t)]^i is given by (43) and
\[
x^r = (TLT)^T\,[x_B,\ x_D,\ M_1,\dots,x_{benz,N_{trays}}]^T
\]
where TLT is a matrix derived from the model reduction that represents the mapping of the real
states to the reduced states. Note that the parametric controller (43) has an explicit form but in-
volves "if-then" statements. In the optimization problem (44) these Boolean operations
are replaced by steep hyperbolic tangent functions, as shown in subsection 4.4.
The elements of the diagonal matrices Q, R and the linearization point of every primal are de-
termined from perturbations and engineering judgment that indicate a set of values resulting in
satisfactory economic performance. Alternatively, a more rigorous outer-approximation-based
approach for computing the values of these parameters can be used [40]. The solution of the
dynamic optimization (DO) problem (44) provides an upper bound UB, inasmuch as the values
of the fixed discrete decisions are not necessarily optimal.
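The replacement of the "if-then" region tests by steep hyperbolic tangents can be sketched as follows; the smoothing form a = 0.5(1 - tanh(γh)), the steepness γ and the sample inequalities are illustrative assumptions rather than the chapter's exact functions:

```python
import numpy as np

# Smooth region indicator: each a_ci = 0.5*(1 - tanh(gamma*h_ci)) tends to 1
# when the inequality h_ci(x) <= 0 holds and to 0 otherwise; their product
# g_c switches the c-th affine control law on or off without Boolean logic.
GAMMA = 1e3   # steepness; larger values approach the exact if-then behaviour

def smooth_region_weight(h):
    """h: region inequalities h_ci(x); all <= 0 means x lies in CR_c."""
    a = 0.5 * (1.0 - np.tanh(GAMMA * np.asarray(h)))
    return float(np.prod(a))               # g_c ~ 1 inside CR_c, ~ 0 outside

g_in = smooth_region_weight([-0.2, -0.5])  # all inequalities hold: weight ~ 1
g_out = smooth_region_weight([-0.2, 0.4])  # one violated: weight ~ 0
print(round(g_in), round(g_out))   # 1 0
```

Because the weights are smooth, the closed-loop model stays differentiable, which is what allows (44) to be solved with gradient-based dynamic optimization.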
\[
\begin{aligned}
\min\ & \eta \\
\text{s.t.}\quad & \eta \ge TotalCost^{\kappa} + \sum_{k=1}^{N_{trays}}\Big(\Omega^{f,\kappa}_k\big(\delta^f_k-\delta^{f,\kappa}_k\big) + \Omega^{r,\kappa}_k\big(\delta^r_k-\delta^{r,\kappa}_k\big)\Big),\qquad \kappa\in K_{feas} \\
& 1 = \sum_{k=1}^{N_{trays}}\delta^f_k,\qquad 1 = \sum_{k=1}^{N_{trays}}\delta^r_k \\
& 0 \ge \delta^f_k - \sum_{k'=k}^{N_{trays}}\delta^r_{k'},\qquad k=1,\dots,N_{trays}
\end{aligned}
\tag{45}
\]
where K_feas is the set of feasible primal solutions. The solution of the MILP master problem (45) is a lower bound LO for the design MIDO problem, and provides a new integer realization. If LO ≥ UP then stop: the solution of the MIDO problem corresponds to the upper bound; else set K = K + 1, update the integer values and go to Step I.
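The primal-master bounding loop described above can be sketched as follows. `solve_primal` and `solve_master` are toy stand-ins (a hypothetical cost curve over the integer design and a crude relaxation), not the chapter's DO and MILP solvers; termination uses the standard LO ≥ UP test.

```python
# Schematic sketch of the bounding loop (toy stand-ins for the primal dynamic
# optimization and the MILP master; hypothetical numbers throughout).

def solve_primal(y):
    """Toy stand-in: dynamic-optimization cost of integer design y (No. of trays)."""
    return (y - 25) ** 2 + 10.0  # hypothetical TotalCost

def solve_master(cuts, candidates):
    """Toy stand-in for the MILP master: pick the unexplored design with the
    smallest relaxed cost and return it with a valid lower bound."""
    unexplored = [y for y in candidates if y not in cuts]
    if not unexplored:
        return None, float("inf")
    y_next = min(unexplored, key=lambda y: solve_primal(y) - 1.0)  # relaxation
    return y_next, solve_primal(y_next) - 1.0

candidates = [23, 24, 25, 26, 27]
y, UP, best = candidates[0], float("inf"), None
cuts = {}
while True:
    cost = solve_primal(y)                    # Step I: primal gives an upper bound
    cuts[y] = cost
    if cost < UP:
        UP, best = cost, y
    y, LO = solve_master(cuts, candidates)    # Step II: master gives a lower bound
    if LO >= UP or y is None:                 # converged: best primal is optimal
        break
print(best, UP)
```

In the toy run the loop converges after evaluating only two primal problems, mirroring the rapid convergence reported for the distillation example.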
\[
\phi = \min_{v,\,d}\ \phi(\cdot, v, \cdot)
\]
\[
\text{s.t.}\quad 0 = f_d(\cdot, v, \cdot),\quad 0 = f_a(\cdot, v, \cdot),\quad 0 = f_0(\cdot, v, \cdot)
\]
where y^{set} = [x^{SP}_{benz,D}, x^{SP}_{benz,B}]. Note that when x_r(t) ∈ CR_c̄, all h_i, i = 1,...,N_ineq are negative; thus, via (50), all α_i are unity and hence g_c̄ is also unity. Otherwise, if x_r(t) ∉ CR_c, then at least one h_i > 0, so at least one α_i = 0, which yields g_c = 0. Hence, once all g_c, c = 1,...,N_c are substituted back into (47), only the control function pertaining to the region where x_r(t) resides contributes with a non-zero coefficient to the value of the control variable. V^B is the control bias, while equalities (51) are used to establish that the closed-loop system initializes at steady state.
4.5. Remarks
• The key feature in the solution of the structural primal problem is the derivation of the
parametric controller that enables the elimination of the on-line control optimization, thus
enabling the incorporation of the advanced predictive control algorithm to the design
framework.
• The design obtained with our approach is optimal with respect to the economic perfor-
mance index and also guarantees operability in the presence of time varying uncertainties.
• Here, note that the parametric controller is derived with a sampling time of Δt = 0.3 min. However, its implementation will still be optimal if we choose to apply its control action with a sampling time of Δt < 0.3 min. In the motivating distillation example the controller is implemented in a continuous manner (Δt → 0) for a fair comparison with the PI controller.
The results of the distillation process example that served as an illustration for section 4 of
this chapter are presented here.
5.1. Results
Two uncertainty periods were selected, as discussed in section 4.3, and the multiperiod mixed integer dynamic optimization design problem was solved first. The algorithm converged in three iterations between master and primal problems, as shown in Table 5.
Table 5
Progress of the iterations for the multiperiod MIDO design problem in the Distillation Example

Iteration Number               1                2                3
Primal Solutions:
  No. of Trays                 26               25               24
  Feed Location                12               12               12
  Control Scheme               Full Structure   Full Structure   Full Structure
Process design
Controller tunings
  q1                           16               15               11
Costs
  Capital Cost ($100k yr^-1)   1.9953           1.9720           1.9543
Master Solutions:
  No. of Trays                 25               24               25
  Feed Location                12               12               11
  Control Scheme               Full Structure   Full Structure   Full Structure
In all three MIDO iterations the primal problem resulted in a positive definite matrix Q, indicating that both outputs participate in the optimal control structure. In the process structure there are 30 possible feed tray locations, and for the feed located on tray k there are (31 - k) alternatives for the reflux tray location. Hence the total number of discrete alternatives is Σ_{k=1}^{30} (31 - k) = 465. Despite this large number of alternative discrete decisions the algorithm converged in 3 iterations between the structural master and the primal problems. A reduced model of 4 states was used in all three process structures for control design purposes. The
error from the reduction was 0.17% for the optimal structure, while the frequency response of the singular values of the reduced vs. the full model is shown in Figure 7. Note that for frequencies above 100 rad/min the responses of the two models deviate. However, in this study we are interested in disturbance frequencies below ω = 1-10 rad/min, where the reduced model portrays the dynamic behaviour satisfactorily. The profiles of the control inputs
and the outputs are depicted in Figure 8, whereas the time trajectory for the minimum allowable
column diameter is shown in Figure 9.
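As a quick check of the combinatorics quoted above (30 possible feed locations, with (31 - k) admissible reflux tray locations for a feed on tray k):

```python
# Verify the count of discrete alternatives for the feed/reflux tray pairing.
total = sum(31 - k for k in range(1, 31))
assert total == 465
```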
Figure 7. Singular values of full 52-state model (Ntrays = 25, Feed location =12) vs. reduced
4-state model for the Distillation Example
5.2. Remarks
- Table 6 compares different designs to clearly demonstrate the benefits of pursuing a simultaneous approach to process and control design rather than the traditional sequential approach. The steady state design (column 1) is feasible at steady state but inoperable under transient conditions, since it cannot satisfy the specifications. In order to make the steady state design operable the equipment size was increased by 10%, resulting in a still inoperable process due to violations of the composition and thermodynamic driving force constraints. The equipment size was then increased by 20%, leading to an operable but expensive design, represented in the 2nd column of Table 6. As opposed to this ad-hoc sequential overdesign procedure, the systematic optimization method, using either PI or Parco controllers, leads to a selective increase in the equipment size of the reboiler and the column diameter, whereas the condenser size remains almost the same. Hence, economic advantages of the order of 2-3% in the case of PI (column 3, Table 6) and 5-6% in the case of Parco (column 4, Table 6) are obtained with feasible dynamic performance, satisfied production specifications and operational restrictions, despite the presence of rapidly varying disturbances.
Figure 8. Profiles of the manipulated inputs and the controlled outputs for the Distillation Example, for θ = 0.45
Figure 9. Profiles of the minimum column diameters for θ = 0.45 and θ = 0.5 for the Distillation Example
- The economically optimum design point usually lies on the constraint intersections [15]. The parametric controller is particularly effective in dealing with constraints, while it contains a feedforward element to compensate for the disturbance effect. These features allow the plant to operate closer to the constraint limit, as opposed to the operating point derived when using PI control. This property, in combination with the reduced equipment size, explains why the parametric controller leads to total economic benefits of 2-3%, as shown in Table 6, column 4 vs. column 3. Another significant benefit of the parametric controller is shown in Figure 10, where we examine the scenario of an increased disturbance amplitude and impulse (θ_a = 0.095, θ_i = 0.07, θ_w = 228 min in (41)). The design with the parametric controller
Table 6
Comparison of different designs in the Distillation Example

Design Variables        Steady State    Sequential     Simultaneous   Simultaneous
                        Inoperable      approach       approach       approach
                                        SISO-PI        SISO-PI        Parco
No. of Trays            25              25             26             25
Feed Location           12              12             13             12
F (kg/min)              83              85             84             83
Set points
  x_benz,D              0.98            0.985          0.985          0.9814
  x_benz,B              0.02            0.01009        0.0099         0.0177
Controller tunings
  Kc (xD-Refl loop)     -               50             50             -
  tau_c (xD-Refl loop)  -               80.03          80             -
  q1                    -               -              -              15
Costs
exhibits half the overshoot compared to the case of the PI controller, while it avoids an under-damped oscillatory behaviour. This clearly shows that this novel control law respects the process constraints, thus enhancing the operational performance of the plant.
Figure 10. Mole fraction vs. time for an aggressive disturbance realization (Distillation Example)
6. CONCLUSIONS
This chapter demonstrates the progress that has been made in simultaneous process and con-
trol design under uncertainty. In the first part of the chapter a well-established decomposition
framework for that purpose is reviewed. This framework in its general form requires the repet-
itive solution of mixed integer dynamic optimization problems. An algorithm for MIDO that
has been recently developed by our group for that purpose is outlined. In the second part of
the chapter a new optimization strategy is presented for simultaneous process and control de-
sign using advanced controllers. The approach employs parametric programming to derive the
explicit control law for the process. Thus, the closed-form of the controller allows its direct
incorporation in the design framework.
The advantage of our methods is that they introduce, formally for the first time, discrete decisions and advanced optimizing model-based controllers in a complete simultaneous process and control design framework. The benefits of this approach include: (i) improved economic performance, (ii) enhanced dynamic performance of the system, (iii) guaranteed operability in the face of uncertainties, and (iv) improved system stability characteristics.
ACKNOWLEDGMENTS
The financial support from the Department of the Environment, Transport and the Regions (DETR-ETSU), Shell Chemicals and Air Products and Chemicals Inc. (APCI) is gratefully acknowledged. The authors would like to thank Dr. Vik Bansal, Dr. Roderic Ross, Dr. Vivek Dua,
Dr. Myrian Schenk and Dr. Nikos Bozinis for their collaboration in this work and their help in
software implementation issues.
REFERENCES
1. J.M.G. Van Schijndel and E. N. Pistikopoulos, In M.F. Malone, J. A. Trainham, and B. Car-
nahan, editors, Foundations of Computer-Aided Process Design, volume 96, 99.
2. M.L. Luyben, B.D. Tyreus, and W.L. Luyben, Ind. Eng. Chem. Res., 35 (1996) 758.
3. D.R. Vinson and C. Georgakis, J. of Proc. Control, 10 (2000) 185.
4. P. Seferlis and J. Grievink, Comput. Chem. Eng., 25 (2001) 177.
5. E. N. Pistikopoulos and V. Sakizlis, In J. Rawlings, T. Ogunnaike, and J. Eaton, editors,
CPC-VI Proceedings, volume 98 of AIChE Symposium Series, 223.
6. M.J. Mohideen, J. D. Perkins, and E. N. Pistikopoulos, AIChE J., 42(1996a) 2251.
7. V. Bansal, J. D. Perkins, and E. N. Pistikopoulos, Ind. Eng. Chem. Res., 41(2002) 760.
8. E. N. Pistikopoulos and I.E. Grossmann, Comput. Chem. Eng., 12(1988) 719.
9. I.E. Grossmann, K.P. Halemane, and R.E. Swaney, Comput. Chem. Eng., 7(1983) 439.
10. V. Bansal, PhD dissertation, Imperial College of Science, Technology and Medicine, Lon-
don, 2000.
11. V. Bansal, V. Sakizlis, R. Ross, J. D. Perkins, and E. N. Pistikopoulos, Comput. Chem.
Eng., 27 (2003) 647.
12. V.S. Vassiliadis, R.W.H. Sargent, and C.C. Pantelides, Ind. Eng. Chem. Res., 33(1994a)
2123.
13. V. Vassiliadis, PhD dissertation, Imperial College of Science, Technology and Medicine,
London, 1993.
14. J. Viswanathan and I.E. Grossmann, Comp. Chem. Eng., 14 (1990) 769.
15. L. Narraway and J. D. Perkins, Ind. Eng. Chem. Res., 32(1993) 2681.
16. A. Brooke, D. Kendrick, and A. Meeraus. GAMS Release 2.25: A User's Guide. The
Scientific Press, San Francisco, 1992.
17. L. T. Biegler, I. E. Grossmann, and A. W. Westerberg. Systematic Methods of Chemical
Process Design. Prentice Hall PTR, New Jersey, 1997.
18. H.Z. Kister. Distillation Operation. McGraw-Hill, New York, 1990.
19. D.D. Brengel and W.D. Seider, Comput. Chem. Eng., 16(1992) 861.
20. C. Loeblein and J. D. Perkins, AIChE J., 45(1999) 1018.
21. C.L.E. Swartz, J. D. Perkins, and E. N. Pistikopoulos. In Process Control and Instrumenta-
tion 2000, University of Strathclyde, July 2000.
22. G.Y. Zhu and M.A. Henson, Ind. Eng. Chem. Res., 41(2002) 801.
23. N.L. Ricker, J. of Proc. Control, 6(1996) 205.
24. E. N. Pistikopoulos, N. A. Bozinis, and V. Dua, Technical report, Centre for Process Sys-
tems Engineering. Imperial College, London, UK, 1999-2002.
25. D.Q. Mayne, J.B. Rawlings, C.V. Rao, and P.O.M. Scokaert, Automatica, 36 (2000) 789.
26. P.O.M. Scokaert and J.B. Rawlings, IEEE Trans. Automatic Contr., 43(1998) 1163.
27. J.B. Rawlings and K.R. Muske, IEEE Trans. Automatic Contr., 38(1993) 1512.
28. D. Chmielewski and V. Manousiouthakis, Systems & Control Letters, 29 (1996) 121.
29. D.E. Seborg, T.F. Edgar, and D.A. Mellichamp, Process Dynamics and Control, Wiley and Sons, 1989.
30. D.Q. Mayne, Nonlinear model predictive control: An assessment. In J.C. Kantor, C.E.
Garcia, and B. Carnahan, (eds.), AIChE, 1997.
31. E.C. Kerrigan and J.M. Maciejowski, In 39th IEEE Conference on Decision and Control, 2000.
32. H. Chen and F. Allgower, Automatica, 34(1998) 1205.
33. V. Dua, N. A. Bozinis, and E. N. Pistikopoulos, Comp. Chem. Eng., 26(2002) 715.
34. K.R. Muske and T.F. Edgar, Nonlinear Process Control, chapter Nonlinear State Estima-
tion, Prentice Hall PTR, 1997.
35. C.V. Rao and J.B. Rawlings, AIChE J, 48(2001) 97.
36. C.A. Schweiger and C.A. Floudas, Ind. Eng. Chem. Res., 38 (1999) 744.
37. H. Kwakernaak and R. Sivan, Linear Optimal Control Systems, John Wiley and Sons Inc.,
1972.
38. B.C. Moore, IEEE Trans. Aut. Control, 26(1981) 17.
39. I.M. Jaimoukha, E.M. Kasenally, and D.J.N. Limebeer, In 31st IEEE Conference on Deci-
sion and Control, 1992.
40. V. Sakizlis, J. D. Perkins, and E. N. Pistikopoulos, To appear in Ind. Eng. Chem. Res., 2003.
The Integration of Process Design and Control
P. Seferlis and M.C. Georgiadis (Editors)
216 © 2004 Elsevier B.V. All rights reserved.
Chapter B2
a Department of Chemical Engineering, University of Manchester Institute of Science and Technology (UMIST), M60 1QD, Manchester, UK
b Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College of Science, Technology and Medicine, London, UK
1. INTRODUCTION
Process design is a complex problem that involves the optimal selection of process
equipment and operating conditions. Unavoidable uncertainties in the operating environment and in the knowledge of the process itself introduce stochastic elements that further complicate this synthesis problem. The uncertainties that can affect the performance of a chemical plant
during its lifetime have an extremely wide range and variety of sources. The challenge of
process design is to develop economically optimal systems that may be operated safely and
reliably with acceptable environmental impact under a wide range of operating conditions and
uncertain parameters.
Control systems are responsible for the identification and implementation of the optimal
operating policy of a plant. The control structure of a continuous plant can be broadly divided
into the optimising structure and the regulatory structure. At the (steady state) optimising
level decisions are taken with respect to the identification of the optimal operating point while
the actual implementation is achieved by the regulatory level. Although the introduction of
model predictive control loosened, at least conceptually, the boundaries between the two
levels, several key features that are inherently related to the time scales of their operation are
still present to justify their distinction. The time scale of operation of the optimising level is
several orders of magnitude larger than that of the regulatory level and is responsible for
responding to persistent and slow changes in the disturbances that cause the optimum
operating point to move to a different region within the feasible space. The regulatory level is
responsible for keeping the process as close as possible to the optimum point so as to
minimise the loss due to the existence of fast acting disturbances. Switching between different
steady state operating points can be achieved more effectively by the co-operation of the
optimising and regulatory levels. This is due to the fact that switching belongs as much to the
responsibilities of the optimising level as to the responsibilities of the regulatory level. In
addition, there is no evidence to suggest that a regulatory structure that is optimal at the initial
operating point is also optimal (or even feasible) at other steady state operating points. A
direct consequence of the last statement is that there is no globally optimal regulatory control
structure that can be used to operate a plant successfully at a series of (widely) different
operating conditions. The restructuring of the regulatory level is a key responsibility of the
optimising level.
The previous paragraph suggests that the structure of the control system is, to a certain
extent, determined by the structure and the time scales of the disturbances acting on a plant.
Disturbances may be decomposed into slowly varying and rapidly varying components. The
slow varying parts cause the optimum operating point to change and need the corrective
action of the optimising level. The regulatory level attenuates the rapidly varying part of the
disturbances. An additional and important observation is that, if the regulatory level is
successfully implemented, the combined system of the plant and the regulatory level, as seen
from the optimising level, is steady state dominated. In other words, the combined process
and regulatory control system when considered from the time scales that are important for the
optimising level can be assumed to be at steady state. A direct consequence is that a steady
state optimising level can be justified with negligible loss of performance when compared to a
more promising but computationally more challenging dynamic optimising level.
In structuring the optimising and regulatory control systems the process systems designer
is confronted with a number of important structural decisions. Taking the bi-level structure of
the overall control system into consideration arbitrarily complex mathematical formulations
might be proposed to assist the process engineer in this highly complex synthesis activity.
However, such formulations will be of limited practical use since the overwhelming
complexity will limit their applicability.
In order to make progress in the field a number of interesting decomposition approaches
appeared in the 1970's including the works by Takamatsu et al. [1] and Dittmar and Hartmann
[2] that were based on an overall, steady state perception of the combined process design and
control system design problem. The methodology presented by Grossmann and Sargent [3]
and extended later by Grossmann and co-workers [4] has dominated research in this field.
This methodology identifies the need to guarantee the existence of a feasible region of
operation for the specified range of uncertain parameters and in addition the distinction
between static degrees of freedom (design variables) and dynamic degrees of freedom
(control variables) is for the first time mathematically formulated. The main implication that
stems from this distinction is that during the operation of a plant the process control system
plays a key role in guaranteeing feasible and optimal operation of the plant. Thus, design
decisions should be directed towards assisting the control system in achieving this task. It is
not surprising that the process control community identified at the same time the need to
consider control aspects during process design in order to improve controllability of the
resulting designs [5, 6].
Narraway and Perkins [7] were among the first to consider general mathematical
programming techniques for the simultaneous design and control problem using dynamic
process models while Dimitriadis and Pistikopoulos [8] applied the ideas of Halemane and
Grossmann [9] to systems described by sets of differential and algebraic equations. Mohideen
et al. [10] based on the aforementioned methodologies proposed a unified process design
framework for obtaining integrated process and control system designs under uncertainty.
Bahri et al. [11, 12] and Schweiger and Floudas [13] proposed alternative formulations and
solution procedures.
This work presents an overview of the back-off based approach to the solution of the
combined process and control design problem that was first presented by Perkins et al. [14]
and Perkins [15]. The main idea in the back off analysis is that in order for the regulatory
control system to be able to ensure feasible operation of the plant in the presence of fast acting
disturbances the optimum operating point has to be moved away (back-off) from the active
(and possibly the near binding) constraints. The size of the back-off necessary to mitigate the
effects of the fast disturbances is a function of the process design as well as of the structure
and parameters of the optimising and regulatory control systems. This necessitates the
structure and the parameters of the control system to be fixed prior to the determination of the
size of back-off. Different regulatory structures result in different back-offs and different economic performance. As a result, back-off analysis can be used to rank alternative
regulatory control structures and also suggest design modifications in order to take the
synergistic action of design and control into consideration.
Section 2 introduces the idea of back-off and also presents a general linear framework for
the back-off methodology while in section 3 the nonlinear back-off synthesis methodology is
summarised. Section 4 presents a case study where the regulatory control structure of a fluid
catalytic cracking model is investigated. The results of the linear and nonlinear back-off analysis are compared for the first time, and important conclusions are drawn in section 5.
The mathematical formulation of the process synthesis problem for continuous plants can
be stated as follows [7, 10]
\[
\min_{d,\,X,\,d_c,\,X_c}\ E\big[J(x(t), z(t), u(t), d, X, X_c, w(t))\big]
\]
s.t.
\[
\begin{aligned}
&f(\dot{x}(t), x(t), z(t), u(t), d, X, w(t)) = 0\\
&h(x(t), z(t), u(t), d, X, w(t)) = 0\\
&g(x(t), z(t), u(t), d, X, w(t)) \le 0 \qquad (1)\\
&\varphi(\dot{\chi}(t), \chi(t), \zeta(t), y(t), u(t), d_c, X_c) = 0\\
&\eta(\chi(t), \zeta(t), y(t), u(t), d_c, X_c) = 0\\
&\mu(x(t), z(t), y(t)) = 0\\
&x(t) \in X,\ z(t) \in Z\\
&u(t) \in U,\ w(t) \in W\\
&d \in D,\ d_c \in D_c\\
&X_c \in \{0,1\}^{n_{X_c}},\ X \in \{0,1\}^{n_X}\\
&t \in [0, T_f]
\end{aligned}
\]
where
x(t) is the vector of differential variables
z(t) is the vector of algebraic variables
u(t) is the vector of control variables (dynamic degrees of freedom)
d is the vector of process design variables (static degrees of freedom)
X is the vector of integer variables associated with the topology of the process
w(t) is the vector of uncertain parameters
d_c is the vector of controller design variables
X_c is the vector of integer variables associated with the topology of the control system
χ(t) is the vector of differential variables of the controller
ζ(t) is the vector of algebraic variables of the controller
y(t) is the vector of potential measured variables
f is the vector function of the differential equations of the process
h is the vector function of the algebraic equations of the process
g is the vector function of the inequality constraints that define the feasible space
φ is the vector function of the differential equations of the controller
η is the vector function of the algebraic equations of the controller
μ is the vector function relating the measurements to the differential and algebraic variables
J is the objective function whose expected value E[J(·)] is to be minimised over the time period of interest.
The above problem aims to choose
• the optimal process topology (X)
• the optimal process design (d)
• the optimal controller parameters (dc) and
• the optimal topology of the controller (Xc)
that minimise the expected value of the goal function and at the same time satisfy the
feasibility constraints (over the given time horizon). This is an infinite dimensional,
stochastic, mixed integer, dynamic optimisation problem.
Not only the slow but also the fast varying disturbances can then be considered as time invariant for the
time scales considered at the optimising level. Then, the uncertain parameters can be replaced
in the formulation of the process synthesis problem by their nominal values (wN). As a direct
result, the expectation term and the equations of the controller are dropped but more
importantly the time horizon of interest is restricted to the steady state and problem (1) is
simplified to
\[
\min_{d,\,X}\ J(x, z, u, d, X, w^N)
\]
s.t.
\[
\begin{aligned}
&f(x, z, u, d, X, w^N) = 0\\
&h(x, z, u, d, X, w^N) = 0 \qquad (2)\\
&g(x, z, u, d, X, w^N) \le 0\\
&x \in X,\ z \in Z,\ u \in U\\
&d \in D,\ X \in \{0,1\}^{n_X}
\end{aligned}
\]
Solution of problem (2) gives the steady state operating point, the value of the design
variables and the topology of the plant. However, the regulatory control system can only
alleviate but never completely eliminate the effects of the disturbances. Since in most cases
the optimum operating point is defined by the intersection of active constraints, these
constraints (and possibly the near-binding constraints) will be violated under the effect of
disturbances. In order to ensure that the system will operate in the feasible region, a safety
margin or back-off from the active constraints can be used. If the values of the maximum
violations of the constraints were known
\[
\xi = \max_{w(t) \in W}\ \max_{t \in [0,T_f]}\ g(x(t), z(t), u(t), d, X, w(t)) \tag{3}
\]
then feasibility is guaranteed if the steady state optimum point is obtained by the solution of
the following problem
\[
\min_{d,\,X}\ J(x, z, u, d, X, w^N)
\]
s.t.
\[
\begin{aligned}
&f(x, z, u, d, X, w^N) = 0\\
&h(x, z, u, d, X, w^N) = 0 \qquad (4)\\
&g(x, z, u, d, X, w^N) \le -\xi\\
&x \in X,\ z \in Z,\ u \in U\\
&d \in D,\ X \in \{0,1\}^{n_X}
\end{aligned}
\]
This idea is also shown in Fig. 1 for the case of two degrees of freedom. The solution of
problem (2) gives the steady-state economics while the solution of problem (4) gives the
dynamic economics. Since the back-off depends on the regulatory control structure and its
parameters the individual elements of the back-off vector cannot be calculated prior to the
design of the controller. However, since different regulatory control structures result in
different back-off vectors and different dynamic economics the most promising structures can
be obtained by solving an optimisation problem where the structure is varied in order to
minimise the economic penalty associated with the back-off.
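The back-off idea can be illustrated numerically with a deliberately simple sketch (all numbers hypothetical, not from the chapter's case study): the steady-state optimum sits on the constraint, the worst-case residual deviation left by an assumed regulatory loop defines ξ, and the operating point retreats by that amount.

```python
import math

# Toy illustration of back-off: maximise profit = x subject to g(x) = x - 10 <= 0.
# Under a fast sinusoidal disturbance the regulatory loop leaves a residual
# deviation sigma(t) on the constraint variable; its worst case defines the
# back-off xi used in g <= -xi. All parameters below are made up.

def residual_deviation(t, amplitude=0.8, attenuation=0.5, omega=2.0):
    """Constraint-variable deviation left by the (assumed) regulatory loop."""
    return attenuation * amplitude * math.sin(omega * t)

# Eq. (3): xi = worst-case deviation over time (one disturbance realisation here)
times = [i * 0.01 for i in range(1000)]
xi = max(residual_deviation(t) for t in times)

# Problem (2): steady-state optimum sits on the constraint, x* = 10.
x_steady = 10.0
# Problem (4): backed-off operating point, g(x) <= -xi  =>  x = 10 - xi.
x_backed_off = x_steady - xi

assert x_backed_off < x_steady      # the economic penalty of the back-off
assert abs(xi - 0.4) < 1e-3         # worst case = attenuation * amplitude
```

The gap between `x_steady` and `x_backed_off` is exactly the economic penalty that the structure-selection problem tries to minimise.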
where δ denotes deviation from the steady state values and the slack variable σ(t) is defined as
\[
\sigma(t) = g(x(t), z(t), u(t), w(t)) - g^N(x^N, z^N, u^N, w^N) \tag{6}
\]
We also define
In order to estimate the back-off and at the same time determine the most promising
regulatory control structures we need to introduce a description of all potential controllers.
This is by no means a trivial task. Heath et al. [16] have proposed a methodology for
estimating the back-off for decentralised PI controllers and Kookos and Perkins [17] have
presented a formulation that can be used to design centralized PID controllers. Furthermore,
Kookos and Perkins [18] have proposed a general framework for the determination of the
back-off for any linear time invariant output feedback controller (LTIOF). However, several
classes of controllers of practical importance (such as model predictive controllers) are not
part of this general class. Furthermore, the controller synthesis methodologies that can be
handled by this approach strongly depend on the degree of automation that can be achieved at
the controller design step. In all these methodologies controllers are designed and evaluated in
the frequency domain. Recently, Kookos and Perkins [19] have presented a methodology
where all calculations are performed in the time domain.
In order to present briefly the frequency domain approach it should be observed that any
linear time invariant control law can be written as
\[
0 = E_y\, y(t) + E_u\, u(t) \tag{8}
\]
where E_y and E_u are polynomial matrices in the differentiation operator. Eqs. (5)-(8) form a square system of differential and algebraic equations which, after taking the Laplace transform, can be written in the following form
\[
\Sigma(s) = G(s)\, \delta W(s) \tag{9}
\]
where Σ(s) is the Laplace transform of σ(t), δW(s) is the Laplace transform of δw(t) and G(s) is the corresponding transfer function matrix. Finally, using the property that relates the maximum deviations in the time domain to the frequency domain (see also Heath et al. [16]),
an estimation of the back-off vector can be obtained by using the frequency response of the
closed loop system. Then, the performance of the regulatory control structure can be evaluated from the linearised counterpart of problem (4):
\[
\begin{aligned}
\text{s.t.}\quad & A\,\delta x + B\,\delta u = 0\\
& C\,\delta x + D\,\delta u = 0 \qquad (11)\\
& g^N + H\,\delta x + P\,\delta u \le -\xi
\end{aligned}
\]
It is important to note at this point that the solution of problem (4) (or its linearised form
given by Eq. (11)) is performed at the optimizing level where a realistic estimation of the
actual performance of the regulatory level is taken into consideration. This explains why a
steady state optimization problem is solved at this point in order to evaluate the overall
economics of the plant. Furthermore, formulation (11) (or (4)) takes into consideration not
only the manipulated variables selected in the regulatory level but all available manipulated
variables. This is an important observation that clearly shows the importance of the
optimizing level in mitigating the effect not only of the slow but also of the fast disturbances.
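A minimal numeric sketch of the frequency-domain back-off estimate follows, using an assumed first-order closed-loop disturbance transfer function (not taken from the chapter): for a sinusoidal disturbance w(t) = Δw·sin(ωt), the steady oscillation amplitude of the constraint variable is |G(jω)|·Δw, and that amplitude serves as the back-off estimate for that disturbance.

```python
# Toy closed-loop disturbance-to-constraint transfer function G(s) = K/(tau*s + 1)
# with hypothetical parameters.
K, tau = 0.6, 3.0

def gain(omega):
    """|G(j*omega)| for the first-order lag above."""
    return K / (1.0 + (tau * omega) ** 2) ** 0.5

dw, omega = 0.5, 2.0        # assumed disturbance amplitude and frequency
xi = gain(omega) * dw       # back-off estimate for this sinusoidal disturbance

assert gain(0.0) == K       # steady-state (omega -> 0) gain
assert xi < K * dw          # the loop attenuates the fast disturbance
```

A structure with a smaller closed-loop gain at the disturbance frequencies needs a smaller back-off, and hence carries a smaller economic penalty.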
The linearised process model in the Laplace domain is written as
\[
\begin{aligned}
s\,X &= A\,X + B\,U + F\,W\\
Y &= C\,X + D\,U + E\,W \qquad (12)\\
\Sigma &= H\,X + P\,U + S\,W
\end{aligned}
\]
If perfect steady state control is employed (integral action is included in the controller) then for each controlled variable selected in the regulatory structure the deviation from its steady state (set point) value should be zero, i.e. Y_j = 0. The ones not selected, however, should have acceptable variability, i.e. Y_j^L ≤ Y_j ≤ Y_j^U should hold, where the superscripts L and U denote lower and upper bound values, respectively. Taking the definition of the binary variables λ into consideration, these conditions can be expressed in the form of the following linear constraints
\[
Y_j^L\,(1 - \lambda_j) \le Y_j \le Y_j^U\,(1 - \lambda_j) \tag{15}
\]
Constraints on the manipulated variables are expressed in a similar way. Finally, in order to have a square control structure the following must hold true
\[
\sum_i \nu_i + \sum_j \lambda_j = \dim(y) \tag{17}
\]
where dim(y) denotes the dimension of vector y. Eqs. (11)-(17) form, at steady state (s—>0),
the following mixed integer linear programming problem (MILP)
(18)
where superscript R denotes the real part of a complex number (i.e. X = X^R + jX^I, etc.). It is
important to note at this point that formulation (18) aims to find the optimal regulatory control
structure that minimizes the economic penalty associated with the corresponding back-off
vector. The optimization is performed at the optimizing control level where all manipulated
variables are available subject to the constraint imposed by the regulatory control level. This
is a unique feature of the proposed approach as the economic performance of the optimal
regulatory structure is determined at the optimizing control level where the economics can be
considered to be steady state dominated. In addition problem (18) is a MILP problem that can
be solved effectively to global optimality even for large-scale problems.
2.5. Regulatory control structure selection using linear economics and linear output
feedback controllers
Based on formulation (18), Eqs. (9) to (11), and the fact that, if Ω₁ ⊆ Ω, then
\[
\max_{\omega \in \Omega_1} |\Sigma(\omega)| \le \max_{\omega \in \Omega} |\Sigma(\omega)| \tag{19}
\]
the following algorithm for regulatory control structure selection has been proposed [18].
The algorithm terminates when no structure exists with steady state economics better than
the current entry cost. Thus, by using tight initial lower bounds to the economic penalty we
can expedite the solution procedure significantly. In addition, including more discrete
frequencies (apart from <y=0) in formulation (18) can also produce tighter lower bounds to the
objective function (see also equation (19)). It is particularly noteworthy that the proposed
formulation is general enough to allow consideration of any linear multivariable controller.
Kookos and Perkins [17] extended the theoretical framework developed for the linear
back-off synthesis to the case of nonlinear back-off synthesis. The starting point of this
systematic methodology is formulation (1). In the nonlinear case, we should also ensure
feasibility of the solution for the whole set of the uncertain parameters under dynamic conditions, i.e.
\[
g(x(t), z(t), u(t), d, X, w(t)) \le 0 \qquad \forall\, w(t) \in W,\ \forall\, t \in [0, T_f] \tag{20}
\]
In order to impose this feasibility constraint a multiperiod formulation is used in this study
as proposed by Grossmann and Sargent [3]. In this approach a finite set of uncertain parameters
is selected and a multiperiod problem is formulated where each period corresponds to a
realisation of the uncertainty vector. Then, after the solution to the design problem is
obtained, a constraint maximisation problem is solved with fixed design and the set of
scenarios is updated to include the realisation(s) of the uncertain vector that cause the most
severe violation(s) of the constraints. The algorithm terminates when feasibility is ensured for
all possible realisations of the uncertain parameters.
Before presenting the algorithm a note on the disturbance description is necessary. It is
assumed that the disturbance vector can be expressed as a function of a finite number of time
invariant but uncertain parameters. One suitable description is the following [20]
w_m(t) = w_m^N + Δw_m sin(ω_m t),  ω_m^L ≤ ω_m ≤ ω_m^U,  ∀ m    (21)

where w_m^N is the nominal value of the m-th uncertain parameter, Δw_m is its maximum expected deviation from the nominal value and ω_m is its frequency of variation. It should be
noted that Eq. (21) can be used to describe fast (noise-like) and slow (step-like) disturbances.
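A realisation of the description in Eq. (21) is straightforward to evaluate once nominal values, deviations and frequency bounds are chosen; the numbers below are assumed purely for illustration and are not taken from the chapter:

```python
import numpy as np

# Sketch of the disturbance description of Eq. (21) (assumed numbers):
# each disturbance is a sinusoid about its nominal value w_m^N with
# amplitude Dw_m and an uncertain frequency bounded in [w_lo, w_hi].
w_nom = np.array([1.0, 0.5])    # nominal values w_m^N
dw = np.array([0.2, 0.1])       # maximum expected deviations Dw_m
w_lo = np.array([0.01, 0.5])    # frequency lower bounds (rad/time)
w_hi = np.array([0.05, 2.0])    # frequency upper bounds

def disturbance(t, omega):
    """Evaluate w_m(t) = w_m^N + Dw_m * sin(omega_m * t) for one realisation."""
    return w_nom + dw * np.sin(omega * t)

# a slow (step-like) and a fast (noise-like) realisation from the bounds
slow = disturbance(10.0, w_lo)
fast = disturbance(10.0, w_hi)
```

Low frequencies give slowly drifting, step-like behaviour over the horizon, while frequencies near the upper bound give the fast, noise-like behaviour mentioned in the text.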
The main idea in the development of the algorithm for the nonlinear back-off synthesis
method is to use the steady state formulation in order to generate promising control structures
that are then evaluated under dynamic conditions. Since the dynamics of the plant can only
further restrict the feasible space, when compared to the steady state, dynamic economics
cannot be better than the steady state economics (see also Fig. 1). Any structure that is feasible under dynamic conditions (t ∈ [0, T_f]) is also feasible at steady state (t = 0) since the latter is a subset of the former. However, the converse is not true. That is, a structure that is
feasible at steady state can be dynamically infeasible. Those structures that correspond to
steady state feasible but dynamically infeasible solutions have to be excluded from the
feasible space of the steady state problem in order to obtain the feasible space of the dynamic
problem. Based on these arguments we conclude that the feasible space of the dynamic
problem is a subset of the feasible space of the steady state problem and as a result the latter is
a relaxation of the former.
As discussed in the introduction, the steady state optimization is performed by the
optimizing control level. This level has available all manipulated variables (dynamic degrees
of freedom) in order to minimize the economic effect of disturbances. A subset of the
manipulated variables is assigned to regulatory control; these are free to vary within their
bounds (u_i^L ≤ u_i ≤ u_i^U). The remaining manipulated variables are restricted to optimal, constant values for the entire time horizon of optimisation (u_i = u_i^OPT). In addition, when a square regulatory control structure is used, for each manipulated variable assigned to the regulatory level a controlled variable can be assumed to be perfectly controlled (y_j = y_j^SP), i.e.

y_j^L ≤ y_j^p ≤ y_j^U,  ∀ j, p    (22)

δ_j y_j^SP + (1 − δ_j) y_j^L ≤ y_j^p ≤ δ_j y_j^SP + (1 − δ_j) y_j^U,  ∀ j, p    (23)
where p denotes the p-th period, i.e. the p-th realisation of the uncertainty vector. Eqs. (22) and (23), together with Eq. (2), form the following steady state optimisation problem:
J_SS = min j(x^N, z^N, u^N, d, δ, w^N)

s.t.

f(x^p, z^p, u^p, d, w^p) = 0
h(x^p, z^p, u^p, d, δ, w^p) = 0
g(x^p, z^p, u^p, d, δ, w^p) ≤ 0          ∀ p    (24)
δ_j y_j^SP + (1 − δ_j) y_j^L ≤ y_j^p ≤ δ_j y_j^SP + (1 − δ_j) y_j^U,  ∀ j, p
x ∈ X, z ∈ Z, u ∈ U
The last formulation aims to find the optimum steady state (nominal) operating point, the
optimum topology of the plant and the optimum topology of the control system for a given set
of scenarios. For fixed topology, Eq. (1) is simplified to the following
J_DYN = min j(x^N, z^N, u^N, d, w^N)

s.t.

f(ẋ^p(t), x^p(t), z^p(t), u^p(t), d, w^p(t)) = 0
h(x^p(t), z^p(t), u^p(t), d, w^p(t)) = 0
g(x^p(t), z^p(t), u^p(t), d, w^p(t)) ≤ 0          ∀ p    (25)
φ(ẋ_c^p(t), x_c^p(t), z_c^p(t), y^p(t), u^p(t), d_c) = 0
κ(x_c^p(t), z_c^p(t), y^p(t), u^p(t), d_c) = 0
η(x^p(0), z^p(0), y^p(0)) = 0
x(t) ∈ X, z(t) ∈ Z, u(t) ∈ U, w(t) ∈ W
d ∈ D, d_c ∈ D_c, t ∈ [0, T_f]
i.e., for a fixed process and controller topology, the parameters of the process and the controller are optimised so as to achieve the best possible economics.
Based on formulations (24) and (25), Kookos and Perkins [17] have presented a systematic
methodology for the simultaneous design and control of process systems that is summarised
as follows
i) Set iter = 0, choose a set of periods and corresponding values for the uncertain parameters, and set J*DYN = +∞ (where * denotes the optimum objective function).
ii) Set iter = iter + 1.
iii) Solve problem (24) with the constraint JSS < J*DYN. If no feasible solution can be found then the algorithm terminates and the optimum is given by J*DYN. If a feasible solution is found, proceed to step iv.
iv) Solve problem (25). If the problem is infeasible, go to step vi. If a feasible solution is found, proceed to step v.
v) If JDYN < J*DYN then set J*DYN = JDYN. Proceed to step vi.
vi) Add an integer cut to exclude the current structure and go to step ii.
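The logic of steps i)-vi) can be sketched as follows. The two solver calls are assumed placeholder interfaces, standing in for the steady-state screening problem (24) (with the integer cuts and the bound of step iii) and the dynamic problem (25); they are not an actual MINLP or dynamic optimisation implementation:

```python
import math
from types import SimpleNamespace

def simultaneous_design(solve_ss, solve_dyn, max_iter=50):
    """Iterate between steady-state screening (24) and dynamic evaluation (25)."""
    J_dyn_best, best = math.inf, None
    cuts = []                                   # integer cuts on structures
    for _ in range(max_iter):
        ss = solve_ss(cuts, bound=J_dyn_best)   # step iii: screening problem
        if ss is None:                          # no structure beats the bound
            break                               # terminate (step iii)
        dyn = solve_dyn(ss.structure)           # step iv: dynamic problem
        if dyn is not None and dyn.objective < J_dyn_best:
            J_dyn_best, best = dyn.objective, ss.structure   # step v
        cuts.append(ss.structure)               # step vi: exclude structure
    return J_dyn_best, best

# toy mock data: (structure, steady-state obj, dynamic obj), dynamic >= steady state
CANDIDATES = [("A", 100.0, 120.0), ("B", 105.0, 110.0), ("C", 130.0, 135.0)]

def solve_ss(cuts, bound):
    feas = [c for c in CANDIDATES if c[0] not in cuts and c[1] < bound]
    return SimpleNamespace(structure=feas[0][0]) if feas else None

def solve_dyn(structure):
    return SimpleNamespace(objective={n: jd for n, js, jd in CANDIDATES}[structure])

J_best, structure = simultaneous_design(solve_ss, solve_dyn)
```

On the toy data the loop screens structure A (dynamic cost 120), finds B better (110), and then terminates because no remaining structure has a steady-state bound below 110, mirroring the termination condition of step iii.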
It should be observed that as in the case of linear back-off synthesis, a steady state
optimization problem is solved to determine the topology of the process and the controller. A
dynamic optimization problem is then solved to determine the optimal value of the continuous
optimization variables. The algorithm iterates between the steady state mixed integer
nonlinear optimization problem and the dynamic optimization problem until no steady state
solution can be found with economics better than the current best dynamic economics.
The equations of the controller dynamics as presented in formulation (25) are general
enough to encompass most of the control laws used in practical applications. Kookos and
Perkins [17] have presented the case of centralized PI controllers as a special case of this
general methodology. In this case the controller is described by the following equations
u_i(t) = u_i^N + Σ_j K_ij [ e_j(t) + (1/τ_ij) ξ_ij(t) ]
dξ_ij(t)/dt = e_j(t),  ξ_ij(0) = 0    (26)

where e_j(t) = y_j^SP − y_j(t), and K_ij (τ_ij) is the controller gain (integral time). If either u_i or y_j has not been selected in the regulatory structure then the corresponding controller gain is equal to zero.
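In discrete time, a control law of this PI form amounts to a gain matrix acting on the errors plus integrator states. The sketch below uses assumed 2x2 gains, integral times and sampling period (not the chapter's tuning); zero gain entries encode pairings excluded from the regulatory structure:

```python
import numpy as np

# Minimal discrete-time sketch of a centralized PI law of the form of Eq. (26).
# All numbers are assumed for illustration.
K = np.array([[1.2, 0.0],
              [0.0, -0.5]])        # controller gains K_ij (0 => pairing not selected)
tau = np.array([[10.0, 1.0],
                [1.0, 5.0]])       # integral times tau_ij
u_ss = np.array([1.0, 2.0])        # nominal (steady-state) inputs u_i^N
dt = 0.1                           # sampling period

xi = np.zeros((2, 2))              # integrator states, xi_ij(0) = 0

def pi_step(e):
    """One step of u_i = u_i^N + sum_j K_ij (e_j + xi_ij / tau_ij)."""
    global xi
    xi = xi + dt * e               # d(xi_ij)/dt = e_j, broadcast over rows
    return u_ss + (K * (e + xi / tau)).sum(axis=1)

u = pi_step(np.array([0.5, -0.2]))
```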
It is important to note at this point that the scenarios are updated in each iteration in order
to include the values of the disturbances that result in the most severe violation of the
constraints. As a result, at the end of each iteration feasibility is ensured for the complete set
of uncertain parameters. Finally, it should be noted that both formulations (24) and (25) use a
steady state economic objective function for reasons explained in the introduction.
In this section a Fluid Catalytic Cracking (FCC) process case study is examined. The aim
is to compare the alternative methodologies for regulatory control structure selection
presented in sections 2 and 3. The FCC process is particularly suited for this purpose: the process dynamics are described by a low-order but highly nonlinear set of DAEs; the actual operation of the process is dominated by economics, and a small number of disturbances that significantly affect its economics have been identified. Furthermore, the most appropriate control structure for this process is a matter of some controversy, with the conventional structure being criticized in a number of recent publications.
The reacted hydrocarbon vapours flow up the riser and are separated in the reactor cyclone.
Burning off of the coke deposit is achieved in a fluidized bed inside the regenerator. A steam
turbine driven air blower supplies the oxygen needed to burn the coke deposit. The flue gas
from the regenerator is sent to a waste heat boiler before being discharged to the atmosphere.
Vapour products from the reactor are sent to the bottom of the main fractionator where
various boiling point fractions are withdrawn such as distillate, light cycle oil (LCO) and
heavy cycle oil (HCO), etc.
The model used in this study is based on a model first presented by Lee and Groves [21] and then modified by Balchen et al. [22]. The catalytic cracking of gas oil can be represented schematically by a three-lump scheme, in which gas oil cracks to gasoline and to light gases plus coke, and gasoline cracks further to light gases plus coke.
Based on this simplified, three-lump kinetic model, the following equations can be used to describe the riser (the nomenclature used can be found in Balchen et al. [22], while the values of the parameters of the model are given in Table 1):

(d/dz) yf = −K0 yf^2 ρ φ    (27)

(d/dz) yg = (a1 K0 yf^2 − Kr yg) ρ φ    (28)

(Frc Cprc + Foil Cpoil + λ Foil Cpd) (d/dz) Tri = −ΔHf Foil K0 yf^2 ρ φ    (29)
where ρ is the catalyst-to-oil ratio (COR). Balchen et al. [22] presented a simplified solution of the above simultaneous differential equations in the axial domain of the riser, summarised in the following:

yf(1) = yf(0) / [1 + K0 tc ρ yf(0)]    (30)

yg(1) = a1 K0 yf(0) exp(−α tc ρ) / { α + K0 yf(0) [1 − exp(−α tc ρ)] }    (32)

Tri(0) = [(Cpoil Foil + λ Cpd Foil) Toil + Cps Frc Trg] / (Cpoil Foil + λ Cpd Foil + Cps Frc)    (33)

K0 = k0 exp[−E / (R Tri(0))]    (35)

Kr = kr exp[−E1 / (R Tri(0))]    (37)

φ0 = 1 − m Crc    (38)

Table 1
Parameters of the FCC model.
The model of the regenerator consists of the coke balance, oxygen balance and enthalpy balance:

W d(Crc)/dt = Fsc (Csc − Crc) − k Od Crc W    (39)

W Cps d(Trg)/dt = Tri(1) Fsc Cps + Fa Cpa Ta − Trg (Fsc Cps + Fa Cpa) − ΔH k Od Crc W    (41)

Csc = Crc + Ccat    (43)

k = kc exp[−(Ec/R) (1/Trg − 1/960)]    (44)

Tcy = Trg + c1 Od    (47)
The economic objective function of the FCC process has the form given in [22] and [23]. The operating constraints are:

g1 = Tri(0) − 1000 ≤ 0    (49)
g2 = Tri(1) − 1000 ≤ 0    (50)
g3 = 760 − Tri(0) ≤ 0    (51)
g4 = 760 − Tri(1) ≤ 0    (52)
g5 = Tcy − 1000 ≤ 0    (53)
g6 = 895 − Trg ≤ 0    (54)
g7 = 400 − Toil ≤ 0    (55)
g8 = Toil − 640 ≤ 0    (56)
g9 = Fa − 3600 ≤ 0    (57)
The optimum operating point is defined by the upper bounds on cyclone and feed temperatures (the corresponding Lagrange multipliers are λ5 = 367.45 $/K and λ8 = 59.17 $/K). The value of the objective function is 73623.6 $/day.
The following vectors of state, disturbance, manipulated and measured variables are defined:

x = [Crc  Od  Trg]^T,  w = [kc],  u = [Fsc  Fa  Toil]^T,  y = [Tri(1)  Trg  Tcy  ΔT]^T    (59)

where ΔT = Tcy − Trg.
The disturbance kc (rate constant for coke formation) is selected to represent changes in the feed oil composition. This is, together with changes in the feed flowrate, probably the most significant disturbance affecting the economics of the process.
Table 2
Summary of the nonlinear control structure selection algorithm

Iteration   Manipulated variables   Controlled variables   Obj. function, formulation (24)   Obj. function, formulation (25)
1           1,2                     1,2                    73623.6                           72456
2           1,2                     1,3                    73623.6                           73557
3           1,2                     2,3                    73623.6                           73520
4           1,2                     2,4                    73623.6                           73035
5           1,2                     3,4                    73623.6                           73495
Table 3
Summary of results of linear control structure selection

Manipulated variables   Controlled variables   Economic penalty
                                               Perf. contr.   Multi-loop PI   OSOF
1,2                     2,3                    0.0000         94.6220         8.6195
1,2                     3,4                    0.0000         189.0802       11.8395
1,2                     2,4                    0.0000         110.2209       34.8567
1,2                     1,3                    0.0015         143.6590        5.5686
1                       3                      0.0022          20.9437        9.5944
2                       3                      0.0023          21.1115        0.7278
1,3                     1,3                    377.862        +∞             +∞
Comparison of Table 2 with Table 3 shows that there is agreement between the linear and nonlinear methodologies. The structures selected in the five iterations of the nonlinear methodology, apart from the (Fsc, Fa)-(Tri(1), Trg) structure, are selected as promising control structures in all linear methodologies. However, there are two main discrepancies.
First, the two 1x1 structures, Fa-Tcy and Fsc-Tcy, although suggested as promising structures by all linear methodologies, are not part of the promising nonlinear structures. In fact, using the full nonlinear model of the FCC process it was found that these 1x1 structures are particularly vulnerable to disturbances and become unstable even for a 10% disturbance in the rate constant for coke formation considered in this study. The local, linear analysis fails to identify this shortcoming, and these structures are suggested as promising structures since they control directly the active constraint on the cyclone temperature.
Second, the perfect control assumption results in an unrealistic estimation of the expected economic penalty. The multiloop-PI based methodology offers the most realistic estimation of the economic penalty. This is due to the fact that the estimated economic penalty is related to the "realisability" of the controller used. Both the perfect controller and the OSOF controller are hardly realisable and are equivalent, to some extent, to high gain output feedback control. The multiloop-PI controller, on the other hand, is based on realistic estimations of the control action, since these are based on the nonlinear response of the system and well-established tuning techniques.
Concluding, it could be stated that the methods based on the linearised system dynamics
are, to a great extent, successful in identifying a number of promising structures. The
estimation of the resulting economic penalty is not reliable. However, the exact value of this
economic penalty is immaterial as long as the relative ranking of the promising structures is
"correct". The main objective is to identify a set of promising structures for further analysis
where non-linear simulation or optimisation are employed. The use of linear analysis can
expedite the process of identifying the characteristics common to promising structures and
thus guide the search and further analysis based on more accurate models and methodologies.
REFERENCES
[1] T. Takamatsu, I. Hashimoto and S. Shioya, J. Chem. Eng. Japan, 6 (1973) 453.
[2] R. Dittmar and K. Hartmann, Chem. Eng. Sci., 31 (1976) 563.
[3] I. E. Grossmann and R. W. Sargent, AIChE J., 24 (1978) 1021.
[4] I. E. Grossmann, K. P. Halemane and R. E. Swaney, Comput. Chem. Eng., 7 (1983) 439.
[5] M. Morari, Comput. Chem. Eng., 7 (1983) 423.
[6] J. D. Perkins and M. P. F. Wong, Chem. Eng. Res. Des., 63 (1985) 358.
[7] L. T. Narraway and J. D. Perkins, Comput. Chem. Eng., 18 (1994) S511.
[8] V. D. Dimitriadis and E. N. Pistikopoulos, Ind. Eng. Chem. Res., 34 (1995) 4451.
[9] K. P. Halemane and I. E. Grossmann, AIChE J., 29 (1983) 425.
[10] M. J. Mohideen, J. D. Perkins and E. N. Pistikopoulos, AIChE J., 42 (1996) 2251.
[11] P. Bahri, J. Bandoni and J. Romagnoli, AIChE J., 42 (1996) 983.
[12] P. Bahri, J. Bandoni and J. Romagnoli, AIChE J., 43 (1997) 997.
[13] C. A. Schweiger and C. A. Floudas, in: W. Hager and P. Pardalos (Eds.), Optimal Control: Theory, Algorithms and Applications, Kluwer Academic Publishers, New York, 1997.
[14] J. D. Perkins, C. Gannavarapu and G. W. Barton, Control for Profit, Newcastle, November 1989.
[15] J. D. Perkins, IFAC DYCORD+ 1989 Symposium Proceedings, Maastricht, The Netherlands, J. E. Rijnsdorp, J. F. MacGregor, B. D. Tyreus and T. Takamatsu (Eds.), Pergamon Press, 1989.
[16] J. A. Heath, I. K. Kookos and J. D. Perkins, AIChE J., 46 (2000) 1998.
[17] I. K. Kookos and J. D. Perkins, Ind. Eng. Chem. Res., 40 (2001) 4079.
[18] I. K. Kookos and J. D. Perkins, J. Proc. Contr., 12 (2002) 85.
[19] I. K. Kookos and J. D. Perkins, Comput. Chem. Eng., 26 (2002) 875.
[20] L. T. Narraway and J. D. Perkins, Ind. Eng. Chem. Res., 32 (1993) 2681.
[21] E. Lee and F. R. Groves, Trans. Soc. Comp. Sim., 27 (1985) 219.
[22] J. G. Balchen, D. Ljungquist and S. Strand, Chem. Eng. Sci., 47 (1992) 787.
[23] C. Loeblein and J. D. Perkins, AIChE J., 45 (1999) 1030.
Chapter B3
Department of Chemical Engineering, McMaster University, 1280 Main Street West, Hamilton, Ontario L8S 4L7, Canada
1. INTRODUCTION
While the interaction between design and control has long been recognized, it is only over the past two decades or so that significant attention has been paid to the development of systematic techniques for analyzing these effects and incorporating dynamic performance requirements into plant design calculations. These developments have been motivated in part by trends toward tighter design margins, demands for low product variability, operation at constraints to maximize economic performance and more stringent environmental constraints, all of which have placed increasing demands on plants' control systems. It has therefore become increasingly important for control issues to be considered during the design phase so that design choices that are likely to cause control difficulties may be avoided.
Two broad approaches to dynamic operability analysis are the use of so-called open-loop indicators, and the solution of a suitably formulated optimization problem. Characteristics of the former are that they are based on steady-state or linear dynamic models, are relatively easy to compute and seek to provide indications of potential plant-inherent control problems independent of the choice of control system. Examples are the minimum singular value and the plant condition number, which reflect sensitivity to input constraints and model uncertainty respectively. More detail may be found in [1], with a good overview of these methods given in [2].
The optimization-based approaches optimize a performance criterion and include in the constraint set a dynamic model of the plant and an associated control system. This framework includes the general integrated plant and control system design problem in which the plant configuration, equipment sizing, control structure selection and controller tuning are all included within the decision space. Dynamic operability is accounted for through constraints on the dynamic response to disturbances or set point changes. Unlike the open-loop indicators, the optimization framework permits all plant-inherent performance-limiting factors to be simultaneously accounted for. It also allows for considerable flexibility in the problem formulation, as will be discussed later.
While the main thrust of these analyses is to provide a plant that exhibits satisfactory closed-loop performance, the assumptions regarding the control system vary considerably across the various methods proposed. The open-loop indicators are largely based on factors that limit achievable closed-loop performance independent of controller type, whereas most of the optimization based integrated design formulations assume a specific controller type such as multi-loop PI, LQG and so forth. While this is not considered to be a problem per se, it is important that the implications of these assumptions are clear so that appropriate deductions may be drawn. This chapter attempts to at least in part address this issue.
The main focus of this chapter is on controller parametrization, considered both within an operability analysis and plant design context. This technique seeks to provide a limit of achievable closed-loop performance, and as such is closely related to dynamic resilience analysis based on the Internal Model Control (IMC) paradigm [3]. However, it is formulated here within an optimization framework which permits all plant-inherent performance-limiting characteristics to be simultaneously accounted for and provides flexibility in the problem formulation. The remainder of the chapter is organized as follows. The principles underpinning the IMC approach to dynamic operability analysis are first reviewed. This is followed by a presentation and discussion of a general optimization-based framework within which the controller parametrization will be incorporated. An overview of Q-parametrization is given next, after which its formulation within an optimization framework is presented. Its application is thereafter illustrated through two case studies, with the final sections comprising a discussion of future directions and conclusions.
2. PERFORMANCE-LIMITING FACTORS
The Internal Model Control (IMC) framework was used in [3] to identify and analyze factors that limit achievable closed-loop performance. The top diagram in Fig. 1 shows a standard feedback structure, which with the addition and subtraction of the controller output passed through a plant model, Gm, gives the equivalent IMC structure shown in the accompanying diagram. The classical controller, C, and IMC controller, Gc, may be readily shown to be related as follows:

Gc = C(I + Gm C)^-1
C = Gc(I − Gm Gc)^-1
For Gm = G, the closed-loop relationship between the plant output and external inputs is

y = G Gc (ys − d) + d.

From this we observe that (a) the system is closed-loop stable if both G and Gc are stable, and (b) perfect control (y = ys) is achieved, regardless of the disturbances, if Gc = G^-1.
Perfect control is thus limited by factors that prohibit the use of the plant model inverse as the IMC controller, Gc. These are time delays, which result in prediction in G^-1; right-half-plane transmission (RHPT) zeros, which result in unstable G^-1; and input constraints, since for strictly proper G, Gc = G^-1 would require infinite controller power. A fourth limitation to perfect control is model uncertainty, which requires the controller to be de-tuned in order to avoid instability in the face of plant-model mismatch.
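The delay limitation is easy to see in a small simulation: for a plant consisting of a dead time plus an invertible part, the IMC controller built from the inverse of the delay-free part achieves the best possible servo response, namely the set point delayed by the dead time. The discrete first-order plant and its parameters below are assumed purely for illustration:

```python
import numpy as np

# Sketch (assumed example): discrete plant G(z) = z^-3 * b/(1 - a z^-1), i.e.
# a 3-sample transport delay in series with a first-order lag. The IMC
# controller inverts only the delay-free part, so the best achievable
# response to a unit step set point is the step delayed by 3 samples.
a, b, delay = 0.8, 0.2, 3

def simulate(n_steps=20):
    ys = np.ones(n_steps)              # unit step set point
    y = np.zeros(n_steps)              # plant output
    x = 0.0                            # delay-free plant state
    buf = [0.0] * delay                # transport delay buffer
    for k in range(n_steps):
        # invert the delay-free part: u[k] = (ys[k] - a*ys[k-1]) / b
        prev = ys[k - 1] if k > 0 else 0.0
        u = (ys[k] - a * prev) / b
        x = a * x + b * u              # delay-free dynamics
        buf.append(x)                  # push through the transport delay
        y[k] = buf.pop(0)
    return y

y = simulate()
```

The output is zero for the first three samples and exactly at the set point thereafter: no causal linear controller can do better than this dead-time-limited response.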
This framework formed the basis for quantitative measures of dynamic resilience proposed in [3-5]. A key feature of these methods is that the dynamic resilience metrics derived are plant-inherent and independent of specific controller type and tuning (within the class of linear, constant parameter controllers). A limitation is that the performance-limiting factors are considered individually, making it difficult to rank plants that exhibit combinations of these characteristics to varying degrees.
Time delays in MIMO systems could result in an infinite number of transmission zeros. This
effect is explored in [6, 7] where a test for the presence of infinitely many RHPT zeros is
developed and asymptotic formulas for their computation derived. A synthesis framework is
also proposed whereby the additional amount of time delay required to eliminate the presence
of infinite RHPT zeros may be determined.
The use of controller parametrization discussed in Sections 4 and 5 is based on a similar philosophy in that it seeks to provide performance limits independent of controller type. However, being computational in nature and formulated within an optimization framework, it is able to handle the simultaneous presence of all performance-limiting factors. In addition, being set in a somewhat more general theoretical framework, it is also able to accommodate open-loop unstable systems.
3. OPTIMIZATION FRAMEWORK
We shall use as our starting point the following optimization framework for design under uncertainty:

(P1)  min Φ[d, x(t), u(t), θ(t)]

subject to

h[d, ẋ(t), x(t), u(t), θ(t), t] = 0
g[d, x(t), u(t), θ(t), t] ≤ 0
x(0) = x0
t ∈ [0, tf]
θ(t) ∈ Γ
where d denotes design decisions, x(t) states, u(t) inputs, and θ(t) uncertain parameters taking values in the set Γ. The control system embedded in the constraint set may be represented by:

(a) open-loop optimal control, in which the inputs to the plant are treated directly as decision variables.

(b) a controller of fixed type such as multi-loop PI [9] and Linear Quadratic Gaussian control [10].

(c) Q-parametrization, where the search space is an approximation of all linear stabilizing controllers. This approach is described in more detail in the next sections.
The upper bound provided by the solution of the open loop optimal control problem may be viewed as the ultimate performance limit, since the inputs to the plant are manipulated directly. However, there is no guarantee that it is achievable via feedback control. Use of a fixed controller type, on the other hand, does not guarantee similar performance (or indeed feasible operation) with the use of a different controller type. Q-parametrization provides an achievable performance bound, but for linear control. These approaches therefore provide different information; the key is for users to be aware of this so that appropriate deductions may be drawn from results they generate. The following sections focus on the use of controller parametrization within an optimization framework, both for analysis and design.
Problem (P1) is generally solved as a mixed-integer dynamic optimization (MIDO) problem, which in turn may be decomposed into dynamic optimization sub-problems. Recent advances in the solution of MIDO problems are described in [11], while solution techniques for dynamic optimization problems are reviewed in [12].
4. CONTROLLER PARAMETRIZATION
In order to ascertain limits of closed-loop performance, we would like to search over all stabilizing linear controllers. It is evident from (1) that the use of the classical control structure results in a closed-loop response that is nonlinear and in general nonconvex in the controller parameters. Furthermore, there is no direct relationship between the structure of C and stability of the closed-loop system. Indeed, even a constant C (proportional control) could result in closed-loop instability.
Q-parametrization (also referred to as Youla or Youla-Kucera parametrization) provides a convenient alternative that does not exhibit these undesirable characteristics. The next sub-sections provide a summary of the underlying theory, of which comprehensive treatments are available in [13-15]. A relatively recent review of the history, developments and applications of Q-parametrization is given in [16]. The treatment below is applicable to both continuous and discrete-time systems.
The development that follows will make extensive use of the generalized feedback structure
shown in Figure 3.
In Figure 3, w represents exogenous inputs, typically set points and disturbances; u represents manipulated inputs; y represents controller inputs; and z represents regulated outputs, which are those variables on which we would like to place some performance specification.
Fig. 3. Generalized feedback structure.
The classical feedback structure in Figure 2 may be readily expressed in terms of the structure of Figure 3. By associating K with −C, and choosing

z = [yp; u],  w = [r; d],

P takes the form

    [  0   Gd   Gp ]
P = [  0   0    I  ]    (2)
    [ −I   Gd   Gp ]
Definition 1 Two transfer function matrices N, D ∈ RH∞ with the same number of columns are right coprime if there exist matrices X, Y ∈ RH∞ such that

X N + Y D = I    (3)

where RH∞ denotes the space of proper and real-rational stable transfer function matrices. Equations of form (3) are known as Bezout identities.

Definition 2 If N and D are right coprime and D is nonsingular then N D^-1 is called a right coprime factorization.
Left coprimeness and left coprime factorizations are analogously defined.
Lemma 1 [14]. For each proper real-rational matrix G there exist eight RH∞ matrices satisfying the equations

G = N M^-1 = M̃^-1 Ñ    (4)

[ X̃  −Ỹ ] [ M  Y ]   [ M  Y ] [ X̃  −Ỹ ]
[ −Ñ   M̃ ] [ N  X ] = [ N  X ] [ −Ñ   M̃ ] = I    (5)
Then

1. The set of all (proper real-rational) K's internally stabilizing P is parametrized by the formulas

K = (Y − M Q)(X − N Q)^-1
  = (X̃ − Q Ñ)^-1 (Ỹ − Q M̃),  Q ∈ RH∞    (6)

2. With K given as above, the closed-loop transfer matrix from w to z is given by

Hzw = T1 − T2 Q T3    (7)

where

T1 = P11 + P12 M Ỹ P21
T2 = P12 M
T3 = M̃ P21
T1, T2, T3 ∈ RH∞
Remarks

1. The closed-loop map, Hzw, is affine in the 'parameter' Q.

2. For an open-loop stable plant, P22 ∈ RH∞, a doubly coprime factorization is obtained trivially by setting

N = Ñ = P22
X = M = I,  X̃ = M̃ = I
Y = 0,  Ỹ = 0

in which case (6) becomes

K = −Q(I − P22 Q)^-1
  = −(I − Q P22)^-1 Q,  Q ∈ RH∞    (8)

and (7) becomes

Hzw = P11 − P12 Q P21

from which the equivalence with the IMC structure may be readily established.
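Remark 1 is easy to verify numerically for the stable-plant case: working with truncated pulse responses, the map Hzw = P11 − P12 Q P21 satisfies the affinity identity H(Q1 + Q2) = H(Q1) + H(Q2) − H(0). The pulse-response data below are randomly generated for illustration only:

```python
import numpy as np

# Numerical check of Remark 1 for the stable-plant case (assumed FIR data):
# the closed-loop pulse response of Hzw = P11 - P12*Q*P21 is affine in Q.
rng = np.random.default_rng(0)
p11, p12, p21 = rng.standard_normal((3, 8))   # pulse responses of P11, P12, P21

def closed_loop(q, n=30):
    """Pulse response of Hzw = P11 - P12*Q*P21, truncated to n samples."""
    h = np.zeros(n)
    h[:len(p11)] += p11
    t = np.convolve(np.convolve(p12, q), p21)[:n]   # P12*Q*P21 by convolution
    h[:len(t)] -= t
    return h

q1, q2 = rng.standard_normal((2, 5))
lhs = closed_loop(q1 + q2)
rhs = closed_loop(q1) + closed_loop(q2) - closed_loop(np.zeros(5))
assert np.allclose(lhs, rhs)   # affinity in Q
```

It is this affinity (convexity) in Q, in contrast to the nonconvex dependence on the classical controller C, that makes the parametrization attractive for optimization-based formulations.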
This section discusses strategies for including Q-parametrization within an optimization framework of the type described earlier. Since Q is infinite dimensional, a useful first step is to approximate Q using a finite number of parameters. Both continuous and discrete-time approximations will be shown. The integration of the approximation into the optimization framework can be done in different ways. Two different approaches for doing this will be described: direct inclusion of the closed-loop transfer function, and inclusion of the individual components of the feedback system with appropriate interconnections between them. The latter approach admits nonlinear plant models.
The use of a finite impulse response representation was proposed in [17] and used in [18] in an optimization framework for dynamic operability assessment. In this formulation, the (i,j) element of the matrix Q is represented as

Qij(z) = Σ_{l=0}^{L} qij(l) z^-l    (9)
For linear systems, the closed-loop mapping from the external inputs to outputs of interest may be formulated directly. We recognize first that for a truncated transfer function approximation of the form

G(z) = Σ_{i=0}^{L} g(i) z^-i,

the corresponding step response coefficients are given by

s(k) = Σ_{i=0}^{k} g(i).    (10)
Substitution of (10) into the expression for the closed-loop map, (7), then gives the following expression for the step response coefficients from the sth component of w to the rth component of z:

s_rs(k) = Σ_{i=0}^{k} t1,rs(i) − Σ_{m=1}^{Ny} Σ_{n=1}^{Nu} Σ_{j=0}^{k} qnm(j) [ Σ_{i=j}^{k} Σ_{l=j}^{i} t2,rn(i−l) t3,ms(l−j) ]    (11)

where t1, t2 and t3 are pulse response coefficient matrices corresponding to the transfer function matrices T1, T2 and T3; and Ny and Nu are the respective dimensions of y and u. From Eq. (11) it can be seen that time-domain bounds on the step response coefficients are simply linear constraints in the search variables, qnm(j).
The following objective function might be used:

Φ(q) = Σ_{i=1}^{Nz} Σ_{j=1}^{Nw} wij Σ_k [ s_ij(k) − rij ]^2    (12)

where Nz and Nw represent the dimension of z and w respectively, rij represents the desired value of the ith output for a step in the jth input, and the wij are weights. A measure of dynamic operability for a given design might thus be posed as

min_q Φ(q)
2. Many other closed-loop performance constraints of interest such as slew rate and rms
disturbance response are also convex in Q, which when used together with a convex
objective function, guarantees global optimality of any solutions found.
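The convexity observation can be demonstrated with a small least-squares instance of this formulation: for the stable-plant case the closed-loop step response is affine in the FIR coefficients of Q, so fitting a target servo response is a linear least-squares (hence convex) problem. The plant pulse response, FIR order and horizon below are assumed for illustration and are not the chapter's example:

```python
import numpy as np

# Sketch of the Section 5.1 idea (assumed data): the closed-loop step
# response is affine in the FIR coefficients q(j) of Q, so fitting a target
# response is a convex linear least-squares problem.
g = np.array([0.0, 0.5, 0.3, 0.1])   # assumed stable, minimum-phase plant pulse response
n_q, n_h = 8, 25                     # FIR order of Q, response horizon

# Build the linear map from q to the closed-loop step response, column by column.
A = np.zeros((n_h, n_q))
for j in range(n_q):
    e = np.zeros(n_q); e[j] = 1.0
    pulse = np.convolve(g, e)        # pulse response of G*Q for unit tap j
    padded = np.zeros(n_h)
    padded[:min(len(pulse), n_h)] = pulse[:n_h]
    A[:, j] = np.cumsum(padded)      # pulse -> step response coefficients

target = np.ones(n_h)                # desired unit-step servo response
q, *_ = np.linalg.lstsq(A, target, rcond=None)
residual = np.linalg.norm(A @ q - target)
```

Because the plant has one sample of delay, the response at k = 0 cannot be moved; the least-squares fit therefore matches the target almost exactly from k = 1 onward, the finite-dimensional analogue of the dead-time performance limit discussed in Section 2.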
The formulation was also extended in [21] to include conditions for robust stability. An ℓ1 robust control framework was used [22], which has the advantages of being formulated within a discrete-time setting, accommodating nonlinear and/or time-varying perturbations, and having the perturbation norm bound correspond to bounded peak-to-peak behavior. For unstructured uncertainty the problem remains convex, but is nonconvex for structured uncertainty. Details of the formulation with an application example are given in [21]. A linear programming approach to controllability analysis using Q-parametrization is described in [23].
As an alternative to formulating the closed-loop mapping directly in terms of Q, the various components of the feedback system may be included within the optimization problem as equality constraints, together with appropriate interconnections. This is most easily done with the aid of a feedback representation such as the IMC structure, which applies to stable systems. By considering at different points in the feedback system signal vectors whose elements correspond to values at discrete points in time, the input-output mappings within the feedback system are readily constructed. Moreover, nonlinear models may be handled through use of a suitable discretization strategy such as orthogonal collocation on finite elements [24]. A strategy of this type was applied in [25]. This approach is described in more detail for continuous-time systems in the next subsection; for these systems, direct formulation of the closed-loop mapping is significantly more cumbersome than for the discrete-time counterpart.
5.2.1. Approximation of Q
A Ritz approximation has been proposed for Q within a continuous-time setting [26]. For a single-input single-output controller this takes the form

Q(s) = Σ_{i=0}^{n} a_i ( β/(s + β) )^i    (13)

where β ∈ ℝ is fixed and the a_i are coefficients to be determined. Inversion of (13) gives the corresponding impulse response of Q.
5.2.2. Realization
The input signal to Q and its output will be designated as e and u respectively, as illustrated in Fig. 4.
In order to derive a realization for (13), we consider first the SISO case, for which a realization is given by

ẋ_i = −β x_i + β x_{i+1},  i = 1, ..., n−1
ẋ_n = −β x_n + β e

u = [ a_n  a_{n−1}  ⋯  a_2  a_1 ] x + a_0 e    (15)
This realization may be obtained by writing a state equation for each basis function (β/(s+β))^i, combining these as a composite system for i = 1, ..., n, and then recognizing that the realizations of all the basis functions for i < n are embedded within the realization of the basis function of highest order.
Realizations for controllers with multiple inputs and outputs may be generated by suitably concatenating realizations corresponding to the single input-output combinations. Let the realization corresponding to controller output i and input j be given by

ẋ_ij = A_ij x_ij + B_ij e_j
u_ij = C_ij x_ij + D_ij e_j

The composite system then takes the form

ẋ = A x + B e
u = C x + D e

where x = [x_11^T ⋯ x_1N^T, ⋯ , x_N1^T ⋯ x_NN^T]^T, u = [u_1 u_2 ⋯ u_N]^T, e = [e_1 e_2 ⋯ e_N]^T, and A, B, C and D are composites of the corresponding system submatrices. If β and the order of approximation are the same for each output realization corresponding to the same input, then the total number of states may be reduced to Σ_{j=1}^{N} n_j, where n_j is the order of approximation for the input-output combinations involving controller input j.
We consider here the stable plant case. For asymptotic tracking of step disturbances we require

Gm(0) Q(0) = I

where, from (13), the (j,k) element of Q(0) is

Q_jk(0) = Σ_{i=0}^{n_jk} a_i^jk
Gm(0) may be obtained from a linear or linearized plant model. Given the state space description of Gm,

ẋ = A x + B u
y = C x,

the steady-state gain is Gm(0) = −C A^-1 B.
If evaluation of a matrix inverse cannot be included directly within the modeling environment
being used, the following construct may be employed:
AX = B
Gm(0) = -CX.
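The construct amounts to a single linear solve in place of an explicit inverse. A small numerical sketch, with assumed random system matrices, confirming that it reproduces Gm(0) = −C A^-1 B:

```python
import numpy as np

# Sketch of the AX = B construct above (assumed random matrices for
# illustration): solve A X = B and set Gm(0) = -C X, avoiding an explicit
# matrix inverse, which many modeling environments cannot express directly.
rng = np.random.default_rng(1)
n, m, p = 4, 2, 2
A = rng.standard_normal((n, n)) - 3.0 * np.eye(n)   # shifted diagonal keeps A nonsingular here
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

X = np.linalg.solve(A, B)    # the construct A X = B
G0 = -C @ X                  # steady-state gain Gm(0)

assert np.allclose(G0, -C @ np.linalg.inv(A) @ B)
```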
The inclusion of the state space realization of Q within an optimization formulation will be outlined here. In order to illustrate some of the details of the formulation, a specific problem will be considered; variations to address other scenarios, such as determination of a closed-loop performance limit for a fixed design, follow in a straightforward manner. In so doing, the variables and constraints within the general problem formulation (P1) in Section 3 will be separated into groupings that represent the various components of the overall system.

The goal will be to determine optimal equipment parameters and an optimal steady-state operating point such that feasible operation is maintained for all realizations of uncertain parameters within a specified uncertainty region, with a set of outputs controlled at their nominal values. The use of controller parametrization provides a performance limit for linear control.
Feasibility with respect to uncertain parameter variation is handled by posing the problem di-
rectly within a multi-period framework. The plant will be assumed to be open-loop stable at
the nominal operating point, permitting use of the control structure of Fig. 5. Note that while
the search is restricted to linear controllers, path constraints are enforced for the nonlinear plant
model. The design will thus be optimal for linear control that guarantees stability of the nom-
inal operating point and which ensures that path constraints are respected when applied to the
nonlinear plant.
The following description for the plant will be assumed:
x' = f(d, x(t), u(t), θ(t))
y(t) = g(x(t), u(t))
x(0) = x_0

where the variable definitions are consistent with those in Problem (P1), and y represents the process outputs to be regulated.
The optimization problem is posed as follows:
min_{d, u^N, a}  Φ(d, x^N, u^N)

subject to

(a) Steady-state model equations and constraints at the nominal point:

0 = f(d, x^N, u^N, θ^N)
y_set = g(x^N, u^N)
h(d, x^N, u^N, θ^N) ≤ 0
(b) Dynamic model equations and constraints at vertices of uncertain parameter region:
(c) Controller:
(x_c^i)'(t) = f_c(x_c^i(t), y_set, y^i(t), y_m^i(t))
u^i(t) = g_c(x_c^i(t), a, u^N)
(d) Plant input:

ū^i(t) = u^i(t) + u^N
(e) Linearization :
A = ∂f/∂x evaluated at (d, x^N, u^N, θ^N),   B = ∂f/∂u evaluated at (d, x^N, u^N, θ^N).
Observe that the linearization in (e) is a function of the decision variables and upon convergence
of the optimization routine, will correspond to the optimal design and operating point. An
application of this formulation is provided in one of the case studies discussed in the next
section.
6. CASE STUDIES
6.1. Problem 1
The first problem is taken from [28], and illustrates the use of controller parametrization within
an optimization-based framework to assess the dynamic operability of processes exhibiting
combinations of performance-limiting characteristics. The process considered is a binary dis-
tillation system analyzed in [7] represented by the transfer function model
            [   0.66 e^{-6s}                -0.005        ]
            [  --------------            -------------    ]
            [    6.7s + 1                  9.06s + 1      ]
    P(s) =  [                                             ]
            [  -34.7 e^{-4s}     0.87(11.6s + 1) e^{-2s}  ]
            [  --------------    ------------------------ ]
            [    8.15s + 1        (3.9s + 1)(18.8s + 1)   ]
The outputs correspond to an overhead mole fraction and bottom tray temperature, while the
manipulated inputs are the reflux flow rate and reboiler steam pressure.
The nominal plant model was shown to exhibit an infinite number of RHPT zeros [7], which
as discussed in Section 2, can result from the distribution of time delays in a MIMO system. It
was also determined that there would be no RHPT zeros if the time delays were increased such
that the sum of the delays in the off-diagonal elements exceeds 8. However, lengthening the
amount of time delay to eliminate RHPT zeros has to be traded off against potential performance
degradation due to increased response times. In addition, it is not clear to what extent input
constraints would impact the closed-loop performance under these scenarios. These effects
were explored in [28] by considering the following time delay structures:
B1 = [ 6  0 ]
     [ 4  2 ]

with B2, ..., B6 obtained by modifying individual delays (the remaining delay matrices are listed in [28]).
Note that it is not being suggested that arbitrary delay structures could be realized in practice
for this process; it is merely a vehicle to explore the effects of time delays on achievable control
performance. The first structure, B1, corresponds to the original plant and introduces an infinite
number of RHPT zeros. Structures B2 to B5 have no RHPT zeros since the sum of the delays
in the off-diagonal elements exceeds 8. Structures B5 and B6 are such that an upper bound on
performance, as measured by the fastest possible output response times, is achievable with a
decoupled response (with time delays assumed to be the only performance-limiting factors) [4].
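The delay condition above can be checked mechanically. The sketch below encodes the assumption, taken from the preceding discussion, that a 2x2 delay structure is free of delay-induced RHPT zeros exactly when its off-diagonal delay sum strictly exceeds the diagonal delay sum (6 + 2 = 8 for this plant); the treatment of the boundary case is an assumption rather than a statement of general theory.

```python
def delay_induced_rhpt_zeros(theta):
    """theta = [[t11, t12], [t21, t22]]. Returns True when the structure is
    assumed to introduce RHPT zeros, i.e. when the off-diagonal delay sum
    does not strictly exceed the diagonal delay sum."""
    off_diag = theta[0][1] + theta[1][0]
    diag = theta[0][0] + theta[1][1]
    return off_diag <= diag

original = [[6, 0], [4, 2]]   # B1: the nominal plant delays (off-diag sum 4 < 8)
```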
The dynamic operability of these structures was assessed using the discrete-time formulation described in Section 5.1. The performance measure used was a weighted sum-of-square errors (SSE) of the outputs to steps of magnitude -0.035 and +3 applied individually to the set points of y1 and y2 respectively, and is of the form:
W = [ 2500  2500 ]
    [    1     1 ]
to account for the difference in scale of the output responses. The inputs were constrained as
follows:
-0.112 ≤ u1 ≤ 0.065 gpm
-4.4 ≤ u2 ≤ 14.0 psig
A sampling interval of Δt = 0.5 was used and the time horizon set to L = 100.
The results are shown in Table 1. This table also includes the quickest possible response time
for each output (performance upper bound) and the quickest achievable response times for a
decoupled response (lower bound) - performance metrics introduced in [4]. Noting that closed-
loop performance increases with decreasing objective function value, we make the following
observations:
1. The plants with RHPT zeros exhibit poorer performance than those in which there are
none. While this is consistent with theory [5], the existence and location of RHPT ze-
ros should not be relied upon as a metric for ranking plants in terms of their dynamic
operability in the presence of other performance-limiting factors.
2. Delay structure B5 exhibits the best performance despite the fact that its time delays are
larger than the minimum necessary to eliminate the RHPT zeros. This counter-intuitive
result confirms the necessity for considering the effects of all performance-limiting char-
acteristics simultaneously.
3. The lower and upper bound metrics are limited in their reliability as indicators of achiev-
able performance. We observe that plants B2 to B5 have the same lower bound, thus it
might seem reasonable to rank their expected performance on the basis of their upper
bounds. However, the ranking so obtained is exactly the reverse of that determined by the
SSE objective. Key points are that the upper bound is not necessarily achievable, and the
response times are compatible with the SSE performance metric only if perfect tracking
is achieved after the indicated amount of delay.
Figures 6 and 7 show the resulting optimal output trajectories for structures B1, B4 and B5. We observe that although y1 is able to respond more quickly for structures B1 and B4 than for B5, the quality of the response is inferior for the former two structures, resulting in a larger SSE.
In summary, this example demonstrates the importance of considering all performance-limiting
characteristics simultaneously in order for competing effects to be taken into account.
6.2. Problem 2
In contrast to the previous case study, this one (a) includes design variables within the search
space, and (b) utilizes a continuous-time formulation. The problem formulation follows closely
that described in Section 5.2.4. The study was presented originally in [29] and is based on a
nonisothermal stirred tank reactor system analyzed in [30].
Figure 6: Optimal output trajectories for a set point change of -0.035 in y1 for the delay structures B1, B4 and B5.
The objective is to find the economically optimal reactor volume and nominal steady-state op-
erating conditions (reactant and cooling rate) such that feasible operation is maintained for inlet
concentration and temperature disturbances. The cooling rate is manipulated directly in order
to control the reactor temperature and acts on the system through a first-order lag.
The reactor is described by the following equations, with parameter values as given in Table 2
Figure 7: Optimal output trajectories for a set point change of +3 in y2 for the delay structures B1, B4 and B5.
The design variables for this problem are V, F, the nominal cooling rate, and the coefficients a_i defining Q.
Combinations of step disturbances from their nominal values to upper and lower limits were considered within a multiperiod formulation, with the disturbance values given as [low, nominal, high].
The problem was coded and solved using gPROMS/gOPT [31]. Table 3 shows the design
variables and profit for the nominal steady-state design, the optimal design under uncertainty
for different orders of approximation of Q, and the optimal design under PI control. Initial
settings for the PI controller were obtained using the direct synthesis method, after which the
gain was included as an optimization parameter.
We observe the following:
1. The optimal (nominal) profit is appreciably reduced when uncertainty is considered.
3. The optimal profit evaluated using Q control is greater than that with PI control (as we
would expect), but not appreciably so.
The relatively small difference between PI control and the linear control performance limit
would indicate that a more sophisticated control strategy than PI control is not warranted in
this case. A more significant difference might be expected for plants that exhibit multivariable
interaction and/or nonminimum phase characteristics.
7. FUTURE DIRECTIONS
Significant advances have been made in recent years toward the development of systematic tech-
niques for incorporating dynamic operability criteria in plant design calculations. This includes
problem formulation paradigms as well as computational strategies for solving the large-scale
(mixed-integer) dynamic optimization problems that arise in a broad class of optimization-based
approaches. Modelling environments are also now available that either solve directly the types
of optimization problems that arise in integrated design and control formulations, or provide
reliable NLP and MIP (and other) building blocks that may be utilized by user-designed al-
gorithms. Good reviews of developments in integrated design and control may be found in
[11,32].
Most of the integrated plant and control system design studies have used linear control sys-
tems such as multi-loop PI control, and do not accommodate actuator saturation discontinuities.
However, promising recent approaches include strategies for incorporating actuator saturation
into a simultaneous optimization framework [33], and controller parametrization that accom-
modates saturation behavior [34]. Application of these techniques to more complex problems
within an integrated design and control framework, as well as the consideration of other more
complex control systems, would be useful.
An issue that has not received much attention is the impact of the assumed control system in the
formulation of the operable design problem. A number of alternatives were discussed in Section
3, but the choice that should be made for design calculations is not clear. For example, should
one design a plant that is operable under multi-loop PI control, or multivariable model-based
control? If the former, how much do we lose by not adopting a leaner design; and if the latter,
would the plant still be operable if process conditions were to change or a less sophisticated
control system were adopted? A possible approach might be less ambitious: rather than expecting the 'correct' design to emerge from a single optimization calculation, the various control assumption paradigms could be used as a suite of tools applied in concert. For
example, as pointed out in the second case study, the difference in optimal objective function
values generated using PI control and Q—parametrization might be used as an indicator of the
potential benefit of using a model-based control strategy. The gap between Q—parametrization
and open-loop optimal control might on the other hand indicate the potential benefit of using
a nonlinear control strategy. These control system paradigms thus form a hierarchy of per-
formance bounds, and the gaps between them might be used to indicate potential benefits of
different classes of control system. The development of systematic techniques for addressing
these issues will likely be challenging, but would be of considerable importance in practical
applications.
8. CONCLUSION
The impact that the design of a plant has on its ability to be satisfactorily controlled has moti-
vated the development of systematic approaches to account for dynamic performance during the
design process. Optimization-based approaches permit the plant design and operability criteria
to be handled within a single framework, and they offer considerable flexibility in the problem
definition. A caveat is that the type of control system needs to be specified, which needs to be
borne in mind when results are interpreted. A key argument advanced in the IMC approach to
ACKNOWLEDGEMENTS
The author wishes to acknowledge the contributions of several graduate students who have
worked with him in this area: Rod Ross, Julian Young, Rhoda Baker, David Seaman and Kevin
Dunn; and the stimulating and fruitful interaction with John Perkins and Stratos Pistikopoulos
during the author's sabbatical leave at Imperial College.
REFERENCES
[2] M. Morari and J. D. Perkins, In: L. T. Biegler, M. F. Doherty (Eds.), AIChE Symposium
Series No. 304 Volume 91, CACHE and American Institute of Chemical Engineers, 1995,
pp. 105-114.
[5] B. R. Holt, M. Morari, Design of resilient processing plants - VI. The effect of right-half-
plane zeros on dynamic resilience, Chem. Eng. Sci. 40 (1) (1985) 59-74.
[8] Y. Cao, D. Biss and J. D. Perkins, Computers Chem. Engng. 20 (1996) 337.
[10] F. M. Meeuse and R. L. Tousain, Comp. & Chem. Engng. 26 (2002) 641.
[12] A. Cervantes and L. T. Biegler, In: C. Floudas, P. Pardalos (Eds.), Encyclopedia of Opti-
mization, Vol. 4, Kluwer Academic Publishers, 2001, pp. 216-227.
[13] M. Green and D. J. N. Limebeer, Linear Robust Control, Prentice-Hall, Englewood Cliffs, NJ,
1995.
[14] B. A. Francis, A Course in H∞ Control Theory. Lecture Notes in Control and Information
Sciences 88, Springer-Verlag, Berlin, 1987.
[15] M. Vidyasagar, Control System Synthesis. A Factorization Approach, MIT Press, Cam-
bridge, MA, 1985.
[19] R. Ross and C. L. E. Swartz, In: I. J. Barker (Ed.), 8th IFAC International Symposium on
Automation in Mining, Mineral and Metal Processing, Sun City, South Africa, 1995.
[20] J. C. C. Young, R. Ross and C. L. E. Swartz, Comp. & Chem. Engng., Suppl. 20 (1996)
S677.
[21] R. Ross and C. L. E. Swartz, Comp. & Chem. Engng., Suppl. 21 (1997) S415.
[22] M. H. Khammash and J. B. Pearson, IEEE Trans. Aut. Control 36 (1991) 398.
[26] S. P. Boyd and C. H. Barratt, Linear Controller Design. Limits of Performance, Prentice-
Hall, Englewood Cliffs, NJ, 1991.
[27] M. Morari and E. Zafiriou, Robust Process Control, Prentice-Hall, Englewood Cliffs, NJ, 1989.
[28] R. Ross and C. L. E. Swartz, Paper 203g, AIChE Annual Meeting, Los Angeles, 1997.
[29] C. L. E. Swartz, J. D. Perkins and E. N. Pistikopoulos, In: Process Control and Instrumen-
tation 2000, University of Strathclyde, 2000, pp. 49-54.
[30] C. Loeblein and J. D. Perkins, Comput. & Chem. Engng. 22 (1998) 1257.
[31] Process Systems Enterprise Ltd., gPROMS Advanced User's Guide, 1999.
[34] D. Sourlas, J. Choi and V. Manousiouthakis, Int. J. Robust Nonlinear Control 7 (1997) 449.
The Integration of Process Design and Control
P. Seferlis and M.C. Georgiadis (Editors)
© 2004 Elsevier B.V. All rights reserved.
Chapter B4
a
Chemical Engineering Department
University of Bahrain, Isa Town 32038, Bahrain
b
Laboratory for Process Systems Engineering
Department of Chemical Engineering, The University of Sydney
Sydney, NSW 2006 Australia
1. INTRODUCTION
The continual emphasis on energy saving and environmental protection has driven process systems engineers, including design and operation engineers, to incorporate a number of crucial steps into the development of a chemical process design. Process design teams are required to integrate their designed processes to satisfy economical, environmental and social objectives, while at the same time maintaining satisfactory operational performance.
The traditional chemical engineering design problem is no longer a problem with a single economic objective. The majority of the literature on process design supporting tools deals with
the combination of two objectives of the four represented in Fig. 1. In recent years, there has
been an exponential increase in the number of published papers on process design dealing
with the integration of control and process operation. A key focus has been to combine
control and economical considerations [1-3] where control and operation decision tools to be
incorporated in process design were proposed. Due to the increasing environmental
awareness, stringent pollution limits and the emergence of environmentally friendly
processes, some workers have incorporated environmental considerations in the early stages
of design [4-6].
Other researchers focused on the applications of process integration in design and showed
its effectiveness in improving the economical and environmental objectives [7-10]. The
mainstream of these publications is concerned with the synthesis and design of heat integrated
processes. Process integration creates unforeseen operational problems, which force the engineers to examine the operational performance of the designed processes. These problems subsequently present new dilemmas under plant-wide control considerations [11-14].
There is a lack of work that addresses the four objectives, economical, environmental, process integration and operational, simultaneously. This area of integrated design remains an open and challenging research field.
Over the past decade, improving the environmental performance of chemical processes has become a growing concern in industry. At the same time, process integration design tools have been well developed as a means of improving both economical and environmental performance. However, this trend toward process integration changes the characteristics of the overall designed plant and may have noteworthy effects on the operational performance and on the plant-wide control structure. Therefore, as shown in Fig. 1, the optimal design of a chemical process is required to satisfy all the above objectives, i.e. economical, environmental, process integration as well as operational and control.
Generally, the major challenge at both the design and operational stages lies in formulating the decision makers' and process designers' goals and then resolving the conflicts between these goals (sometimes called objectives or targets). The priority list of these objectives usually changes based on the stakeholders' pressure on the decision makers.
The need for well-developed tools that assist in exploring the trade-off scenarios arising in
such multi-objective situations has been one of the major tasks for process systems engineers
in the last few years. The goal of this research is to develop an overall integrated approach
allowing all relevant objectives to be formulated and accounted for during the design/retrofit
stages of a processing plant. This paper presents a general framework for such a methodology
that incorporates economical, environmental and operational performances for assessing
various levels of process integration for a given process.
Life Cycle Assessment (LCA) is a methodology for estimating the environmental impacts
associated with a given product, process or activity [15]. Being an accepted and widely used
tool in this area, it was employed in this study to map the environmental impact potential of
any given alternative for the selected process in the optimisation framework.
Energy integration techniques are today an accepted means for improving process
economics and reducing environmental impacts [7, 10]. In this paper, thermal pinch analysis
[16] is included to demonstrate the incorporation of energy integration within a multi-
objective optimisation framework. Then the framework is extended to examine the trade-off
over a number of devised and assessed Heat Exchanger Network (HEN) designs. It is
regularly noted that the application of process integration technologies leads to tighter process designs and therefore forces the process designers to consider potential control problems and to assess process controllability and operability in the early stages. Accordingly, the plant-wide control structure is required to be adjusted to meet such challenges.
In general, the success of an integrated design is measured by its ability to be controlled and operated safely and profitably. Therefore, a systematic procedure is required to evaluate the controllability of the integrated designed processes. In general terms, controllability deals with the ease of controlling a continuous plant at a specified condition. In the proposed framework, a controllability analysis sub-framework is developed to help the design engineer in assessing the HEN designs based on their ease of control.
Plant-wide control strategies are playing an important role in the process design procedure,
as plants are designed in sophisticated and complex ways where processes are required to be
coupled and integrated through recycled streams and heat integration. Moreover, due to the
increased concerns for safety, environmental protection and production qualities, a
satisfactory plant-wide control structure is required to meet such objectives. In the proposed
framework, a plant-wide process control strategy is incorporated to build up the overall
control structure of the entire designed plant. Finally, process modelling/simulation is
included as part of the overall strategy as a means of validating and testing the steady-state and
dynamic performance of the designed plant and its control structure. To demonstrate the
effectiveness and the components of the proposed methodology, an industrial case study on
the production of vinyl chloride monomer (VCM) is thoroughly investigated.
2. MULTI-OBJECTIVE OPTIMISATION
Many processes and design problems are multi-objective in nature, where several non-
commensurable objectives are required to be satisfied. Design engineers are required to
optimise, maximise or minimise, not only a single objective function but several functions
simultaneously within a specified range of constraints. These problems with multiple
conflicting objectives and criteria are generally known as multiple criteria optimisation or
multiple criteria decision making (MCDM) problems.
The challenge in multi-objective optimisation problems is that most of these objectives are
potentially conflicting process and/or design goals forced by different performances, e.g.
economical, environmental and operational, of a given system. While in single-objective
optimisation problems the optimal solution is usually clearly defined, this does not hold for
multi-objective optimisation problems. Instead of a single optimum solution, there is rather a set of alternative trade-off solutions between the clashing objectives.
The theory of the MCDM problems is concerned with simultaneous optimisation of a
number of objectives subject to equality and inequality constraints and the problem can be
presented in a general mathematical form as follows:

min {f_1(x), f_2(x), ..., f_k(x)}   subject to x ∈ S

where k ≥ 2 and S is the feasible region of the selected decision variables.
These objective functions, usually, conflict or compete with each other. In the case of no
conflict between the objectives, any traditional single objective function technique can be
used easily to solve the problem since optimising one objective will ensure that all the other
objectives are optimised within the same direction of minimisation or maximisation.
In contrast, problems with conflicting goals usually end with a set of alternatives, known as the Pareto optimal set, where no single best solution is available. The Pareto optimality set is produced to visualise the trade-off between the objectives through the set of selected design alternatives; consequently, human decision making is required to express preferences between the alternative solutions and to carry on from the point where mathematical tools end.
During the last twenty years, the literature on MCDM problems has grown at a high rate, and a number of techniques for generating the Pareto optimal set have been well developed and evaluated. The idea of Pareto optimality and the generation of the Pareto set are briefly reviewed here. These concepts and techniques are dealt with in more detail in Refs. [17-22].
The concepts of multi-objective optimisation problems and the Pareto optimal solution can be clarified by the following general example. For a system with a number of design parameters under the control of the decision maker, as in Fig. 2, particular values of the parameters x1, x2, ..., xn result in particular values of the criterion functions J1, J2, ..., Jn, which are functions of the inputs and measure the system performance.
As shown in Fig. 3, each choice of parameter values yields a feasible solution in the
objective (criteria) space. The full set of allowable solutions obtained by mapping all
allowable values of the parameters X into the objective space yields some volume in that
space. In general, the majority of these feasible solutions will be inferior, meaning that another feasible solution exists that is better in at least one objective while at least as good in all other objectives. A solution which is not inferior, i.e. for which the value of a given objective function can only be improved at the expense of at least one other objective, is called Pareto optimal, as shown in Fig. 4.
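The inferior/non-inferior classification can be expressed directly as a pairwise dominance test. A minimal sketch, with all objectives taken as minimised and purely illustrative points:

```python
def pareto_filter(points):
    """Return the non-inferior (Pareto optimal) subset of a list of
    objective vectors, assuming every objective is minimised."""
    def dominates(a, b):
        # a dominates b: no worse in every objective, strictly better in one
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = pareto_filter(pts)   # (3.0, 4.0) is inferior: (2.0, 3.0) dominates it
```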
The entire Pareto set can be generated by repeatedly solving the above problem for different sets of constraint levels ε_i. The main disadvantage of this technique is the computational effort, since the number of constraints increases and the simplified optimisation problem must be solved several times.
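The repeated-solve procedure can be illustrated on a toy bi-objective problem (the objective functions and constraint levels below are invented for illustration, not taken from the chapter):

```python
from scipy.optimize import minimize

f1 = lambda x: x[0] ** 2              # objective kept as the cost
f2 = lambda x: (x[0] - 2.0) ** 2      # objective moved into a constraint

pareto = []
for eps in [0.25, 1.0, 2.25, 4.0]:    # constraint levels on f2
    res = minimize(f1, x0=[0.0],
                   constraints=[{"type": "ineq",
                                 "fun": lambda x, e=eps: e - f2(x)}])
    pareto.append((f1(res.x), f2(res.x)))
```

Each solve yields one point on the Pareto curve; sweeping the constraint level traces out the whole trade-off.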
Fig. 5. Structure of the Proposed Integrated Framework for Design and Operation Considerations
Process design engineers should be concerned not only about the environmental impacts
that are directly generated in the designed process, but also consider the environmental
impacts that are associated with the provision of the raw materials and services they specify as
inputs to their processes. In recent years, Life Cycle Assessment (LCA) has been given a lot
of attention as an environmental indicator of chemical processes [24]. LCA is a
comprehensive technique that covers both "upstream" and "downstream" effects of the
activity or product under examination, thus often being referred to as "cradle-to-grave"
analysis [25].
Being recognised and internationally accepted, the LCA methodology is employed in this
developed framework to map the environmental impact potential of any given alternative.
Moreover, LCA provides decision-makers with an accurate and clear picture of the interactions
of the examined activity with the environment and identifies opportunities for environmental
improvements.
The International Standard Organization (ISO 14040) [26] breaks the LCA framework into
four main stages: (1) Goal and scope definition. This stage clarifies the purposes of carrying out the study, while the assumptions and system boundaries are described clearly. (2)
Life Cycle Inventory (LCI) analysis. LCI involves data collection and calculation procedures
to quantify relevant inputs and outputs of the entire system defined within the system
boundaries. (3) Life cycle impact assessment involves qualifying the potential environmental
impacts of the inventory analysis results. (4) The interpretation of the results from the
previous phases of the study in relation to the objective of the study. This interpretation can be in the form of conclusions and recommendations to decision-makers for process changes to deliver improvement in the environmental performance.
The environmental performance of the modelled process is evaluated via impact potentials based on the LCA analysis, which formulates the environmental impacts of the process as a stand-alone objective in the multi-objective optimisation problem. The LCA analysis of the modelled process is performed in Excel using the transferred process data, with a commercial LCA database, SimaPro, used to determine the environmental burdens of the upstream activities and the utilities used.
The LCA analysis first performs an inventory analysis that involves data collection and
calculation procedures to quantify relevant inputs and outputs of the entire system defined
within the system boundaries. This inventory is followed by an environmental impact
assessment, which quantifies and categorises the inventory analysis results into environmental
impacts. For demonstration purposes, Table 1 shows a list of the inventory emissions to air
and the impact analysis of these emissions. The impact analysis step converts the inventory results into equivalents of a selected reference substance for each impact category; for example, emitting 1 kg of methane is equivalent to 11 kg of CO2 for global warming potential, and 1 kg of HF is equivalent to 1.6 kg of SO2 for acidification.
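The conversion is a weighted sum over the inventory using characterisation factors. The sketch below hard-codes only the two factors quoted above and applies them to a hypothetical inventory:

```python
# Characterisation factors quoted in the text:
# 1 kg CH4 = 11 kg CO2-eq (global warming); 1 kg HF = 1.6 kg SO2-eq (acidification)
GWP = {"CO2": 1.0, "CH4": 11.0}
ACIDIFICATION = {"SO2": 1.0, "HF": 1.6}

def impact(inventory, factors):
    """Convert an emissions inventory (kg) into reference-substance equivalents."""
    return sum(factors.get(substance, 0.0) * mass
               for substance, mass in inventory.items())

inventory = {"CO2": 100.0, "CH4": 2.0, "HF": 0.5}  # hypothetical emissions, kg
gwp_total = impact(inventory, GWP)                 # 100 + 2*11 = 122 kg CO2-eq
acid_total = impact(inventory, ACIDIFICATION)      # 0.5*1.6 = 0.8 kg SO2-eq
```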
Table 1
LCA Inventory and Impact Data Sheet Example
Minimise f_i(x)  ⇒  Minimise f̄_i(x) = (f_i(x) - f_i,min) / (f_i,max - f_i,min)   (3)

Maximise f_i(x)  ⇒  Minimise f̄_i(x) = (f_i,max - f_i(x)) / (f_i,max - f_i,min)   (4)
As shown in Eqs. (3) and (4), each objective function is normalised over its range, i.e. its minimum and maximum achievable values, obtained by performing a single objective optimisation in each direction, minimisation and maximisation.
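The two normalisations can be written as a single helper (a sketch; the function name is an assumption):

```python
def normalise(f, f_min, f_max, maximise=False):
    """Scale an objective value onto [0, 1] over its achievable range, per
    Eqs. (3) and (4); a maximised objective is converted to an equivalent
    minimisation."""
    if maximise:
        return (f_max - f) / (f_max - f_min)   # Eq. (4)
    return (f - f_min) / (f_max - f_min)       # Eq. (3)
```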
The ε-constraint technique is formulated by assigning one of the objectives as the objective function while the others are constrained within specified upper limits. The selected process parameters are assigned as the decision variables of the optimisation problem. The optimiser searches over the process variables, within the feasibility and constraint regions, and feeds the selected variables to the model in HYSYS. It then waits for the process in HYSYS to converge, recalculates the objectives and evaluates the optimisation results. This search loop between the optimiser in Excel and the model in
HYSYS continues until a global optimum point is found which represents a point on the Pareto
curve. The above optimisation process is repeated for different bounds of the constrained
objectives to develop the entire Pareto curve.
Previously, most systematic design tools were directed towards the improvement of process economics, whilst little attention was given to process controllability and operability issues during the first stages of the process design procedure. Furthermore, if a plant is designed only on the basis of steady-state economic considerations, unfavourable process dynamics might make controlling the designed process impossible. For these reasons, and because of the increasing demands for process integration and the consequent tightening of plant process conditions, good control performance of the designed process, reflecting its operability characteristics, is demanded now more than at any time in the past. As a result, there have been several calls in the literature for incorporating process control analysis within the early design stages rather than in the traditional sequential order.
There are different aspects of the process operability that are required to be included within
the process design objectives. They include flexibility, controllability, reliability and safety.
Process controllability is concerned with the quality and stability of the dynamic response of
the designed process where it represents an assessment of the ease with which the process can
be controlled to be held at a specified condition or moved to another.
Over the last decade, the applications of process integration, and consequently methods for
heat exchanger network (HEN) design and synthesis, have been incorporated as objectives of
the design procedure and have been well developed and widely studied. However, the control
of a designed HEN is still under investigation. The aim of this section is to develop a
systematic procedure that helps design and process engineers evaluate and select the most
controllable HEN design.
Fig. 6. Temperature control of process-to-process exchangers using the bypass method [14]:
(1) controlling and bypassing the hot stream, (2) controlling the cold and bypassing the hot
stream, (3) controlling and bypassing the cold stream and (4) controlling the hot and
bypassing the cold stream
1) Process streams H1, H2, C1 and C2; contains exchangers 2, 3, 4, 5 and H1.
2) Process streams H3 and C3; contains exchangers 1, H3 and C1.
3) Process stream C4, heated by exchanger H2.
NDOF = R + Nu - NT (5)
where R is the rank of the matrix of the inner HEN, which implicitly accounts for the
available loops, Nu is the number of utility units (process-to-utility heat exchangers) and NT
is the number of target temperatures. In terms of process control, the degree-of-freedom
number of each sub-network falls into one of the following three cases:
1. NDOF < 0: the available manipulated variables are fewer than the controlled variables
and the process cannot be controlled. Redesigning the HEN is required.
2. NDOF = 0: the available manipulated variables exactly equal the controlled variables
and the process can be controlled. Proceed with the next steps.
3. NDOF > 0: the available manipulated variables exceed the controlled variables and
the process can be controlled and optimised. Proceed with the next steps.
As mentioned earlier, for a designed HEN the available manipulated variables usually take
one of two forms: manipulating the bypass flowrate over one side of a process-to-process
heat exchanger, or manipulating the utility flowrate of a process-to-utility heat exchanger.
For the HEN example in Fig. 8, each sub-network must be examined through the degree-of-
freedom analysis. For sub-network 1, the rank of the matrix of this sub-network is 3 (R = 3),
there is only one utility exchanger in this sub-network (Nu = 1) and the number of target
temperatures equals the number of process streams that are required to be controlled
(NT = 4). Therefore, from Eq. (5):
NDOF = 3 + 1 - 4 = 0
meaning that sub-network 1 has just enough manipulated variables to control the required
outlet temperatures. For sub-network 2, R = 1, Nu = 2 and NT = 2, thus NDOF = 1, while for
sub-network 3, NDOF = 0. For this example, the degree-of-freedom analysis shows that each
sub-network has at least the minimum requirement of manipulated variables, and the overall
HEN design proceeds to the next steps of the controllability analysis framework.
The process gain matrix of a designed HEN clearly shows the characteristics of the network
and the interactions between the controlled and manipulated variables; it is also an essential
stage in determining the most effective arrangement of control loops. A non-square HEN
gain matrix results if NDOF is greater than zero, while a square matrix represents a network
with NDOF equal to zero. For the case of a non-square gain matrix, all possible square
matrices of size (i x i) are developed to represent the possible pairing selections between the
controlled and manipulated variables within a multi-loop control scheme. Each combination
of the two types of variables is analysed and evaluated, through the relative gain array and
condition number analyses, as an incorrect pairing can result in poor control system
performance and can degrade or even destroy process stability.
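As a sketch of this enumeration step, assuming the gain matrix is stored as a NumPy array (the names here are hypothetical), every n x n sub-matrix of an n x m gain matrix (m >= n) can be generated from the column combinations:

```python
# Enumerate all square sub-matrices of a non-square HEN gain matrix.
# Each column subset corresponds to one candidate set of manipulated
# variables for a multi-loop pairing; illustrative sketch only.
from itertools import combinations

import numpy as np

def square_candidates(K):
    """Yield (column_indices, square_submatrix) for an n x m matrix, m >= n."""
    n, m = K.shape
    for cols in combinations(range(m), n):
        yield cols, K[:, cols]

K = np.arange(15, dtype=float).reshape(3, 5)   # dummy 3 x 5 gain matrix
print(sum(1 for _ in square_candidates(K)))    # C(5, 3) = 10 candidates
```

Each candidate sub-matrix is then screened with the RGA and condition number analyses described next.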
4.2.5. Relative Gain Array (RGA) and Condition Number (CN) analyses
The relative gain array (RGA) is used to measure process interaction and provides a tool
for the design of multi-loop control systems. The RGA is a matrix composed of elements
λij. The element in the ith row and jth column, λij, is the ratio of the steady-state gain
between the ith controlled variable and the jth manipulated variable when all other
manipulated variables are held constant, divided by the steady-state gain between the same
two variables when all other controlled variables are held constant:
λij = (∂yi/∂uj) with all other u constant / (∂yi/∂uj) with all other y constant
The RGA analysis is used to assess the pairing between the manipulated and controlled
variables for each square matrix developed from the HEN gain matrix. The RGA can be
computed directly from the square gain matrix K as
Λ = K ⊗ (K⁻¹)ᵀ
where ⊗ denotes element-by-element multiplication. Each element of the resulting array is
analysed based on its distance from unity; highly interacting structures are redesigned or
restructured, based on the RGA analysis, to produce a more controllable system.
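With a numerical gain matrix at hand, this element-by-element form gives a one-line computation; the following is a generic sketch with a dummy matrix, not code from the framework:

```python
# RGA from a square steady-state gain matrix K:
# Lambda = K (elementwise product) transpose(inverse(K)).
import numpy as np

def rga(K):
    K = np.asarray(K, dtype=float)
    return K * np.linalg.inv(K).T   # Hadamard product, not a matrix product

K = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # dummy 2 x 2 gain matrix
print(rga(K))                       # rows and columns each sum to 1
```

Diagonal elements near 1 favour diagonal pairing; large or negative elements flag strong interaction.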
The condition number analysis is used after the RGA analysis to check the ease of control
of the selected structure. The condition number is commonly used as a controllability index;
it is the ratio of the largest singular value to the smallest nonzero singular value of a matrix,
here the HEN gain matrix. Mathematically, the condition number is a positive number that
provides useful information on the sensitivity of matrix properties to variations in the
elements of the matrix. In terms of process control, a large condition number indicates that
it will be impractical, if not impossible, to satisfy the entire set of control objectives, and
vice versa for small values. It is therefore used to measure the performance of the control
system structured by the RGA for the designed HEN. In this framework, the condition number
analysis is used mainly to compare different HEN designs and their control structures whose
RGA matrices are close or equal to the identity.
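The condition number itself reduces to a singular-value computation; the sketch below follows the ratio definition given above, with an assumed tolerance for discarding numerically zero singular values:

```python
# Condition number as the ratio of the largest to the smallest nonzero
# singular value of a (here: dummy) gain matrix.
import numpy as np

def condition_number(K):
    s = np.linalg.svd(np.asarray(K, dtype=float), compute_uv=False)
    s = s[s > 1e-12]           # assumed tolerance for "nonzero"
    return s.max() / s.min()

K_well = np.eye(3)                    # ideally conditioned: CN = 1
K_ill = np.array([[1.0, 1.0],
                  [1.0, 1.001]])      # nearly singular: CN in the thousands
print(condition_number(K_well), condition_number(K_ill))
```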
This section briefly summarises the ideas behind plant-wide process control and the strategy
followed in the proposed framework. The set of advanced tools used to design and test the
control systems of the individual units of the entire plant is also briefly introduced, as is
the dynamic modelling environment.
In our approach, the development of the plant-wide control system is performed in two
stages. First, the plant-wide control system is developed for the base design, in which no
heat integration is utilised. This stage is carried out and evaluated, according to its dynamic
performance, as a first step to ensure that the basic designed process is controllable. Then
the selected HEN design, based on the decision maker's preferences, is integrated within
the entire plant and the plant-wide control structure is adjusted accordingly.
Further information and a detailed step by step application of the strategy is provided later
in the study of a Vinyl Chloride Monomer (VCM) plant.
These tools are fully menu driven for ease of navigation through the whole design process,
and they run on Matlab and the associated toolboxes. The set of developed MATLAB files
is used to perform open- and closed-loop analysis for different control schemes and uses the
transfer function matrix as the unit model. APC-Tool is used, for example, to tune each
single control loop within a multi-loop system; it can furthermore be used to investigate the
interactions between the control loops and allows systematic analysis and comparison of
alternative advanced control structures for a given process. Once the proper control strategy
is finalised for a given unit, it is evaluated and tested on the real-time dynamic performance
model developed in HYSYS.
simulation is used to optimise the designed process in terms of different objectives, such as
economic, environmental, operational and social objectives.
However, chemical plants are never truly at steady state, and a dynamic simulation of the
designed process is required to help in understanding the overall plant performance through
its non-linear dynamic behaviour. Dynamic simulation enables operation and control
engineers to improve the control systems and investigate the operability and controllability
of the plant. Using a dynamic model, individual and plant-wide control strategies can be
designed and tested, and the control loops can even be tuned, before choosing one that may
be suitable for implementation. The dynamic analysis is therefore an essential stage, as it
provides feedback and improves the steady-state model by identifying specific areas of the
plant that may have difficulty achieving the steady-state objectives. Having said that,
dynamic simulation models still have restricted applications, mainly due to the computational
cost and programming effort of this more complex simulation scenario.
In the proposed framework, the HYSYS.PLANT simulation package is used to validate both
the steady-state and dynamic models, even though switching from steady-state to dynamic
mode is not a trivial procedure, as will be shown in the case study section.
To demonstrate the step-wise procedure of the proposed framework, a case study has been
fully developed for the production of vinyl chloride monomer (VCM). VCM production has
developed rapidly, as it is closely interrelated with the polyvinyl chloride (PVC) industry,
which is the largest consumer of VCM.
• Direct-chlorination reaction:
C2H4 + Cl2 → C2H4Cl2 (EDC) (9)
• Oxy-chlorination reaction:
C2H4 + 2HCl + ½O2 → C2H4Cl2 (EDC) + H2O (10)
• Pyrolysis cracking:
C2H4Cl2 (EDC) → HCl + C2H3Cl (VCM) (11)
The direct chlorination reaction of ethylene to EDC is carried out in a liquid-phase reactor
by mixing ethylene and chlorine in liquid EDC. Cooling water is used to remove the heat
produced by this exothermic reaction. Direct chlorination reactions may be run rich in
either reactant, ethylene or chlorine; usually the conversion of the lean component is 100%,
with selectivity for EDC greater than 99%.
The oxy-chlorination section aims to make use of the available process materials, HCl and
ethylene, to optimise the VCM production process; it operates at 220-330°C and 1-15 atm.
The oxy-chlorination reaction is highly exothermic and requires good temperature control
for successful production of EDC. Typical results for the oxy-chlorination unit are 94-97%
ethylene conversion, 95-97% HCl conversion and 94-96% EDC selectivity.
The EDC produced by direct chlorination and oxy-chlorination, together with that recovered
from the cracking step, must be treated to reach more than 99.5% purity before entering the
pyrolysis unit. The by-products are removed in a sequence of two distillation columns. The
first column removes the light wastes, while the heavy wastes, mainly C2H3Cl3
(1,1,2-trichloroethane), are removed in the second column.
The EDC pyrolysis unit operates at temperatures in the range of 500-550°C and pressures of
25-30 atm. The reaction is endothermic and is normally carried out as a homogeneous,
non-catalytic gas-phase reaction in a direct-fired furnace. The pyrolysis is usually operated
at 50-60% conversion of EDC.
The stream leaving the pyrolysis unit contains the co-product HCl, uncracked EDC and
VCM. This stream is treated in a sequence of two distillation columns. In the first column,
HCl is distilled off at the top and sent to the oxy-chlorination unit. The bottom product is
fed to the second column to separate the VCM product from the unconverted EDC. The
unconverted EDC leaves the bottom of the column and is recycled back to the EDC
purification section.
Generally, the VCM plant is subject to some undesirable reactions that must be accounted
for in the process design. The main by-products (wastes) of the VCM plant are HCl, CO2,
C2H4, EDC and 1,1,2-trichloroethane.
Table 2
VCM-plant model conditions and specifications

Reactors               Direct   Oxy     Pyrolysis
Temperature (°C)       65       90      500
Pressure (kPa)         200      717     2645
Structure              CSTR     CSTR    PFR

Columns                Light    Heavy   HCl     VCM
No. of trays           7        26      11      9
Feed tray              5        3       6       4
Reflux ratio           10.3     0.9     1.26    0.5
Pressure top (kPa)     165      145     1200    471
Pressure bottom (kPa)  173      170     1210    477
mode, the plant-wide control strategy must be developed and implemented within the
designed VCM production processes in HYSYS.PLANT. The stability and controllability of
the entire integrated and controlled plant can then be tested within the developed closed-loop
non-linear dynamic model. As mentioned earlier, the dynamic simulation of the highly
integrated plant enables process engineers to understand the interactions within the entire
plant and provides great opportunities for testing and studying the developed control
strategies and many other operational aspects.
utilities. This formulation is performed based on the calculation shown in Eq. (14) while
Table 3 shows the price list of raw materials, products and utilities used in the economic
objective.
Fig. 10 shows the utility streams for the VCM process that are required to heat the cold
streams and cool the hot streams. This inventory of streams is used to perform the pinch
analysis, which provides the guideline strategies for the HEN designs. The integrated
proposed framework, including the multi-objective optimisation problem, was applied to
evaluate the different HEN designs developed for the modelled process. These designs vary
in the level of process integration. They include a no-heat-integration option (Design 1),
where the utilities are at their maximum load (no heat recovery). In the second case
(Design 2), the process was examined for optimal heat integration, with the minimum
(target) heating and cooling requirements determined by the pinch analysis technique. In
three intermediate cases (Designs 3, 4 and 5), different HEN designs for the process were
examined, in which small heat exchangers are removed or combined. Table 4 shows the
supply and target temperatures and the heat capacity data for the individual streams in the
VCM process. For demonstration purposes, Fig. 11 shows the HEN design for the process
streams of Design 5. This design represents a modified network of Design 4, in which small
exchangers are removed or combined and sub-networks are introduced.
The process design variable selected for the modelled process in the multi-objective
optimisation search engine is the portion of HCl recycled to the oxy-chlorination unit, while
the produced HCl is treated as a by-product that has a specific value. It would be a
straightforward extension of the framework to include multiple design variables; however,
in this paper only a single variable was considered for ease of demonstration. Also, as the
environmental potentials in this case all trend in the same direction, the impact potential
most sensitive to this design variable, i.e. GWP, was chosen to represent the environmental
performance of the process.
In the multi-objective optimisation environment, each objective function, i.e. the economic
and environmental objectives, is normalised so that their values are of approximately the
same magnitude and have the same direction of optimisation. The range of each newly
formulated objective function is [0, 1], where 0 represents the best achievable value and 1
the worst. Eq. (13) shows the normalisation of the economic objective based on the
calculated operating profit, while Eq. (14) shows the normalised environmental objective
based on the GWP performance; the maximum and minimum values of each objective
function are calculated by single-objective optimisation (minimisation and maximisation)
over the selected decision variable.
Table 3
Values used in the economic model

Unit             Value
C2H4             0.36 $/kg
Cl2              0.22 $/kg
O2               0.02 $/kg
VCM              0.44 $/kg
HCl              0.36 $/kg
Cooling water    0.19 $/GJ
Heating system   3.10 $/GJ
Electricity      8.33 $/GJ
Economic = [(Profit)max - (Profit)] / [(Profit)max - (Profit)min] (13)
Environmental = [(GWP) - (GWP)min] / [(GWP)max - (GWP)min] (14)
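Assuming the single-objective bounds are already available, Eqs. (13) and (14) translate directly into two small functions (a sketch; the names and numbers are illustrative, not from the original model):

```python
# Normalisation of Eqs. (13)-(14): both objectives are mapped to [0, 1]
# with 0 the best value. Profit is maximised, so it is flipped;
# GWP is minimised, so it is only shifted and scaled.

def normalise_economic(profit, profit_min, profit_max):
    return (profit_max - profit) / (profit_max - profit_min)

def normalise_environmental(gwp, gwp_min, gwp_max):
    return (gwp - gwp_min) / (gwp_max - gwp_min)

# Dummy bounds for illustration only
print(normalise_economic(80.0, 50.0, 100.0))       # -> 0.4
print(normalise_environmental(30.0, 20.0, 60.0))   # -> 0.25
```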
Table 4
Stream data for the HEN design of the VCM production

Stream   Tin (°C)   Tout (°C)   mCp (kW/°C)
1H       260        9.2         7842
2H       500        50          17730
3C       -25.9      90          4.870
4C       91         500         17690
5H       31.2       30.7        8060
6C       103.9      104.4       7372
7H       -21.8      -25.9       754.7
8C       132.9      133.4       21910
9H       89.3       -25.7       25.84
10C      141.5      144         1559
11H      98.4       97.9        22270
12C      81.7       93.6        399.8
13H      75         65          1628
14H      260        259         14980
15C      500        501         10630
H: hot streams to be cooled, C: cold streams to be heated
The ε-constraint method was used to solve the multi-objective optimisation problem and
obtain the Pareto curve. Here, the economic objective was optimised while the environmental
objective was converted into a constraint with a specified upper bound, as shown in Eq. (15):
minimise Economic(x) subject to Environmental(x) ≤ ε (15)
This optimisation problem was solved for each designed HEN.
Here x is the selected design variable, the recycled amount of HCl, and each Pareto curve is
generated by parametrically varying the upper bound (ε) on the environmental objective over
its entire range and solving the optimisation problem for each case.
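The sweep can be sketched as follows; the two objective functions are toy stand-ins (assumptions for illustration), not the HYSYS/Excel models used in the framework, and a grid search replaces the actual optimiser:

```python
# Epsilon-constraint sweep: minimise the normalised economic objective
# subject to Environmental(x) <= eps, for a grid of eps values, tracing
# one point of the Pareto curve per eps. Toy objectives, grid-search solver.
import numpy as np

def economic(x):        # toy stand-in: best (0) at x = 1
    return 1.0 - x

def environmental(x):   # toy stand-in: worst (1) at x = 1
    return x ** 2

def pareto_curve(eps_grid, x_grid):
    points = []
    for eps in eps_grid:
        feasible = [x for x in x_grid if environmental(x) <= eps]
        if feasible:
            x_best = min(feasible, key=economic)
            points.append((environmental(x_best), economic(x_best)))
    return points

x_grid = np.linspace(0.0, 1.0, 201)
curve = pareto_curve(np.linspace(0.1, 1.0, 10), x_grid)
print(len(curve))   # one Pareto point per epsilon bound
```

Loosening the bound ε can only improve the achievable economic objective, which is why the resulting points trace out the Pareto trade-off.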
the RGA analysis within the controllability framework points towards the best available
pairing between the controlled and manipulated variables within the designed HEN.
The controllability analysis framework was applied to each developed HEN design; its
application to Design 5 is presented here. Fig. 11 shows nine independent sub-networks, as
follows: 1) streams 1H, 2H, 3C, 4C and 12C; 2) streams 14H, 10C and 8C; 3) 11H; 4) 9H;
5) 7H; 6) 5H; 7) 13H; 8) 6C; 9) 15C. Further analysis within the controllability framework
is applied only to sub-networks one and two, as the others are heated or cooled using utility
streams and are therefore controllable by adjusting the corresponding utility flowrate. The
loop identification process shows that no loop exists within either sub-network. The degree
of freedom for sub-network 1 is two (NDOF = 2), since the rank of the sub-network's matrix
is 4 (R = 4), Nu = 3 and NT = 5, while the degree of freedom for sub-network 2 is zero
(NDOF = 0). For sub-network 1 a process gain matrix of size (5 x 7) is developed, while for
sub-network 2 its size is (3 x 3). For sub-network 2, the square matrix was used in the RGA
analysis to pair the variables, and the condition number, 22, indicates the ease of control.
For sub-network 1, twenty-one different square matrices from the (5 x 7) gain matrix were
developed and evaluated based on the RGA analysis, with condition numbers varying
between 28 and 95. Therefore, the control system corresponding to the minimum condition
number, 28, was selected.
To demonstrate the application of the proposed framework to the VCM plant, Fig. 12 shows
a selected number of cases that present the optimisation results to the decision makers in a
transparent way, through the Pareto curves for certain designs over the whole range of the
process variable. The curves for Designs 1 and 2 provide the lower and upper bounds for all
possible levels of heat integration at all operating points. The HEN designs shift the Pareto
curve of the no-heat-integration condition towards the optimal-heat-integration curve in
terms of economic and environmental preferences. The 'optimal heat integration' curve
(Design 2) shows the maximum possible improvement achievable in the economic and
environmental objectives based on the pinch analysis results. However, from the
controllability and operability points of view, moving from the 'no heat integration' level
towards the 'optimal heat integration' level results in more process interactions, through the
HEN, which lead to difficult operation and control. Therefore, plant controllability and
operability must be formulated as one of the trade-offs to be considered in addition to the
economic and environmental objectives.
Of the presented HEN designs, Designs 3 and 4 show the best improvement towards the
optimal design based on the pinch analysis. Their RGA matrices are very close to the
identity matrix; however, the minimum condition numbers for Designs 3 and 4 are 15300
and 11000 respectively, which are large and indicate poor controllability of the networks.
This poor controllability is due to the high interaction within the networks, and Design 5
shows that having more independent sub-networks is better in terms of control and operation
than having a highly integrated network. However, this improvement in controllability and
operability comes at the cost of a reduction in the other objectives.
Fig. 11. HEN for Design 5. Hot streams run from left to right at the top; cold streams run
counter-current at the bottom
entire plant is considered as a second stage, after designing the control system of the plant
that is already integrated due to the recycled streams.
The chemical species in a processing plant may be broadly classified as reactants, products
or inerts. A material balance for each of these species must be satisfied. This is typically not
a problem for products and inerts. However, a problem usually arises when we consider
reactants, because of recycle, and account for their inventories within the entire process.
Every molecule of reactant fed into the plant must either be consumed via reaction or leave
as an impurity or through a purge. For the VCM plant, the side-reaction products and
by-products are removed as light and heavy impurities in the purification sections. For the
reactants, the design goal is to consume as much as possible via the reactions, with the
second option being to remove them from the plant as impurities or purge. For the direct
chlorination reactants, unreacted chlorine and most of the ethylene are removed as
impurities, as they leave the reactor in small amounts due to the high conversion, which
exceeds 99%. The conversion of the oxy-chlorination reaction exceeds 96%, and the
unreacted oxygen is removed with the light impurities due to its small amount and low cost.
The unreacted HCl is recycled back to the reaction with some of the recovered ethylene.
As discussed previously, the general heuristic plant-wide control design procedure developed
by Luyben et al. [14] is used within the proposed framework. Its step-by-step application to
the VCM plant is discussed and presented next. This procedure essentially decomposes the
plant-wide control problem into various levels and tries to satisfy the two fundamental
chemical engineering principles, i.e. the overall conservation of mass and energy.
For the designed plant with no heat integration, there are 38 control degrees of freedom in
the process. These degrees of freedom represent the available manipulated variables and can
be characterised as follows: four feed valves; the direct-reaction and oxy-reaction cooler
valves; the direct-reaction and oxy-reaction product valves; the oxy quench cooler valve;
three decanter product valves; the pyrolysis preheater and heater valves; the pyrolysis
product valve; the pyrolysis quench cooler valve; the HCl heater valve; eight valves for the
heating and cooling systems of the four distillation columns; and thirteen valves for the
base, top and reflux streams of the four distillation columns.
The direct chlorination and oxy-chlorination reactions are exothermic, and good temperature
controllers are required to keep the reactions at the optimum conditions. For the case of no
heat integration within the plant processes, the temperatures of the reactors are controlled by
the flowrates of the cooling water streams. The temperature of the endothermic cracking
reaction is kept at the optimum value by controlling the fuel gas (heating utility). For the
quench processes, cooling water is used first to cool down the hot reactor streams before
they proceed to the refrigeration sections, if required, so that the refrigeration load is
reduced.
Ethylene, chlorine and oxygen feeds are supplied from headers and/or supply tanks.
Therefore, no design constraint needs to be set for the production rate control. In terms of
the relationship between the reactor conditions and the production rate, the pyrolysis has the
most influence on the production rate, through the reaction conversion, by manipulation of
the reaction temperature. However, this manipulation needs great attention due to the
trade-off between the reaction conversion and coke formation and by-product production.
Step 5: Control product quality and handle safety, operational and environmental
constraints
The principal role of the two distillation columns that follow the pyrolysis section is to
recover all of the produced vinyl chloride, recycle the by-product HCl and recover the
uncracked EDC. HCl and EDC are recovered in the first and second columns, respectively.
Therefore, the presence of HCl and EDC in the vinyl chloride product stream must be
reduced as much as possible to prevent yield loss, as must the presence of vinyl chloride in
the recycled streams of HCl and EDC. Three control objectives therefore need to be
considered: (1) the vinyl chloride composition in the HCl recovery stream, (2) the vinyl
chloride composition in the EDC recovery stream and (3) the compositions of the impurities,
HCl and EDC, in the vinyl chloride product stream. The recommended manipulated
variables for these control objectives are the reflux flow or the distillate flow to control the
top stream composition, and the reboiler duty or the bottom flow to control the bottom
stream composition.
For operational and safety considerations, the oxygen concentration in the gas loop should
be kept below the ethylene explosive region, i.e. below 8 mole% anywhere in the gas loop.
The oxygen concentration can be controlled through the oxygen feed flow, or through the
conversion of the oxy-chlorination reaction by manipulating the reactor temperature.
Step 6: Fix a flow in every recycle loop and control inventories (pressures and levels)
In the VCM plant, there are eight pressures that must be controlled, as follows:
• Four distillation column pressures: the direct way to control a column pressure is by
manipulating the vent stream from the condensation section. However, for the HCl
separation column, a flow controller already controls the vent stream that is recycled
to the oxy-chlorination section. Therefore the pressure of this column can be
controlled by manipulating the condenser duty or the reflux flowrate.
• The pressure of the gas loop can be controlled by the ethylene flow, the oxygen flow
or the pressure of the recycled HCl stream. Since the oxygen flow has been selected
previously, and the compositions of both oxygen and ethylene are very small in the
recycled HCl stream, the pressure of the gas loop is consequently controlled by
controlling the top pressure of the HCl recovery column.
• The pressure of the decanter can be controlled in a straightforward manner by
manipulating the vent flowrate.
• The pressures of the oxy-chlorination and pyrolysis units can be controlled by
manipulating the flowrates of the gaseous product streams.
For level control, there are eleven liquid levels that must be controlled, as follows:
• There are four distillation columns, and in each one two liquid levels are to be
controlled, at the column base and the condenser. The most direct way to control
these levels would be to manipulate the valves of the distillate and bottom streams,
respectively. However, the problem of the fixed-flow recycled stream arises again in
the last distillation column, where the bottom stream is recycled back to the upstream
section of the plant and is the only effective manipulated variable for controlling the
liquid level of the column base. Therefore, a cascade control system is designed over
the flow of the recycled bottom stream to control both the recycled stream flow and
the column liquid level.
• The liquid level of the direct chlorination reactor is controlled by manipulating the
bottom stream flowrate.
• The decanter levels are controlled by a standard control structure: the EDC product
flow controls the organic phase level, while the aqueous flow controls the aqueous
phase level.
Due to the high conversion of the direct and oxy-chlorination reactions, a small amount of
the reactants passes over to the purification section, where they are removed as impurities
with the unwanted by-products. Carbon dioxide is an unwanted by-product and is removed
from the process at the top of the lights removal column. Similarly, 1,1,2-trichloroethane is
removed as a heavy by-product at the bottom of the heavies removal column. The first
column in the purification section removes 99% of the water as top product. The temperature
control in this column achieves EDC-water separation control. The bottom product stream
of this column is fed into the heavies column, where the top product stream is purified to
99% EDC through the column temperature control. The VCM produced in the pyrolysis
section is separated in the VCM purification section. In the first column, called the HCl
column, temperature control is used to distil HCl off the top of the mixed feed, containing
mainly EDC, VCM and HCl. The bottom product is fed to the VCM column, where the
temperature is controlled to purify VCM as the overhead product, and the recovered EDC is
recycled back to the EDC purification section.
After accomplishing the above steps, a number of control valves remain unallocated. The
cooling water flows to the direct and oxy-chlorination reactors are used to control the
reaction temperatures and maintain the optimum conversion. The flow of the heating fuel to
the furnace is used to control the pyrolysis conversion through the temperature controller.
For the quench systems after the oxy-chlorination and pyrolysis reactors, the cooling water
flow and the refrigeration duties are used to control the temperatures of the product streams.
Each controller in the plant flowsheet is tuned using APC-Tool to reach the optimum
performance and to investigate the need for a decoupling system, or any other advanced
control strategy, between interacting controllers within each individual unit, especially
around the distillation columns, where the temperature and pressure controllers are most
likely to interact with each other. To demonstrate the effectiveness of APC-Tool, Fig. 13
shows the interactions between the individually tuned temperature and pressure controllers
in the heavies column within the APC-Tool environment. The two controllers are then tuned
accordingly, based on their interactions, as shown in Fig. 14, which illustrates the importance
of tuning the overall interacting control system rather than tuning the control loops
individually. Moreover, Fig. 15 shows the benefits of installing a decoupling system between
the two controllers in order to reduce the interactions between them.
Up to this point, the basic regulatory plant-wide control approach has been established for
the VCM production processes. The recycled HCl flow is to be maximised to increase the
production rate. However, this target may conflict with other environmental and operational
objectives, as covered in the optimisation section of the overall framework. Economic,
environmental and operational objectives may play an important role in the optimisation of
several controller setpoints, as shown earlier. Moreover, the production rate can be
maximised through the pyrolysis temperature controller, where a trade-off arises due to coke
formation and the increase in by-product rates. Therefore, a number of specified objectives,
constraints and external factors, such as raw material and energy costs and product prices,
must be considered during the optimisation of the entire plant, and these can be formulated
and incorporated in a straightforward way within the proposed framework.
Fig. 16 shows the final regulatory plant-wide control strategy for the VCM plant, obtained
by pairing the available manipulated variables with the variables to be controlled.
Fig. 13. Step change response of the individual tuned controllers in heavy column
Fig. 14. Step change response of the integrated tuned controllers in heavy column
Fig. 15. Step change response of control loops with a decoupling system in heavy column
Figure 17 shows the dynamic responses of some process variables for a step change in the set
point of the pyrolysis temperature controller from 500 to 502°C. As the pyrolysis
temperature increases, more EDC is cracked to VCM and HCl as the reaction conversion
increases. The production rate of VCM increases, however, at the price of more coke
formation and by-product generation. This temperature change causes disturbances in the
HCl and VCM columns, and accordingly the temperature and pressure controllers adjust the
manipulated variables to bring the controlled variables back to their set point values.
Figure 18 shows the responses for a 5% step change in the feed flowrates to the direct
chlorination reactor. In response to this disturbance, the temperature and pressure
controllers of the direct chlorination reactor bring the reactor conditions back to their set
point values. This increase in the feed flow results in an increase in EDC formation and
consequently increases the feed flow to the purification columns. Fig. 18 shows that the
response of the light column is faster than that of the heavy column, as the disturbance
moves sequentially through the units.
These non-linear dynamic simulations show that, under the proposed plant-wide control
structure, the process is operable and controllable: the control system holds the plant at the
desired optimal operating conditions (set points) and exhibits good disturbance rejection
capabilities.
Fig. 16. Plant-wide control strategy of the VCM plant
Fig. 17. Dynamic responses of process variables to a step change in pyrolysis temperature by
2°C
7. CONCLUSIONS
In this paper, a general systematic methodology has been proposed that incorporates
economic, environmental, heat integration and operational considerations within a multi-
objective optimisation framework. The methodology as it stands enables design and
process engineers to draw 'boundary' Pareto curves corresponding to the maximum and
minimum levels of heat integration for all operating points achievable by the process. It is
also possible to use the proposed approach to draw the Pareto curve for any designed HEN
between the calculated limits, and thus to quantify the trade-offs between economic and
environmental objectives. Improving energy efficiency generally increases plant complexity,
which in turn has significant impacts on plant operability and/or controllability. The
controllability of the designed HEN is analysed explicitly through a number of techniques. The
case study shows that a HEN design with independent sub-networks is more controllable and
operable than an integrated HEN design that forms a single sub-network. Plant-wide
control and dynamic evaluation are integrated within the framework as a means of including
control considerations in the early stages of design. The plant-wide control methodology
employed shows that a reliable control structure should combine engineering judgment
and experience with the available systematic analyses. Complex plants are highly
integrated, mainly through recycle streams, even without heat integration. Therefore, the
plant-wide control structure was developed and validated on the design without heat
integration before being extended to include the corresponding HEN design. The rigorous
dynamic model was used to implement and validate the developed plant-wide control system
and to test the overall dynamic performance of the plant. The trade-offs between the formulated
performance measures (economic, environmental, heat integration and operational) of the
designed processes are presented and evaluated in a transparent way. The approach was
illustrated with an industrial case study of a balanced VCM process.
Fig. 18. Dynamic responses of process variables to a step change in direct chlorination feeds
by 5%
In the optimisation of chemical processes, many of the key parameters are only
partially known, and there is significant uncertainty regarding their future values.
Furthermore, there are inherent uncertainties associated with both the plant model and
the environmental model. Designing chemical processes under uncertainty is a
common class of problems in synthesis and design and has received considerable attention in
recent years. A natural extension of the formulation proposed in this work is the
incorporation of uncertainty into the formulation of the optimisation problem. This,
however, would naturally increase the computational complexity, as the presence of
uncertainty leads to semi-infinite optimisation problems.
Acknowledgements
The authors would like to thank Dr. Denis Westphalen, Aspentech - Canada, for his
contribution in the controllability analysis section.
REFERENCES
[1] B. Russel, J. Henriksen, S. B. Jorgensen and R. Gani, Comput. Chem. Eng., 24 (2000)
967.
[2] F. Bernardo, E. N. Pistikopoulos and P. Saraiva, Comput. Chem. Eng., 25 (2001) 27.
[3] O. Chacon-Mondragon and D. Himmelblau, Comput. Chem. Eng., 20 (1996) 447.
[4] S. Stefanis, A. Livingston and E. N. Pistikopoulos, Comput. Chem. Eng., 19(1995), S39.
[5] B. Alexander, G. Barton, J. Petrie and J. Romagnoli, Comput. Chem. Eng., 24 (2000)
1195.
[6] A. Azapagic and R. Clift, Comput Chem Eng, 23 (1999) 1509.
[7] A. Rossiter, AIChE Symposium Series, 90 (1994) 12.
[8] H. Spriggs, Waste Manage, 14 (1994) 215.
[9] R. Dunn and G. J. Bush, J. Cleaner Prod, 9 (2001) 1.
[10] B. Linnhoff, Chem. Eng. Progress, (1994) 32.
[11] B. Glemmestad, S. Skogestad and T. Gundersen, Comput. Chem. Eng., 23 (1999) 509.
[12] A. Koggersbol, B. Andersen, J. Nielsen and S. Jorgensen, Comput. Chem. Eng., 20
(2000) S853
[13] K. Papalexandri and E. N. Pistikopoulos, Chem. Eng. Res. Des., 72 (1994) 350.
[14] W. L. Luyben, B. Tyreus and M. L. Luyben, Plantwide Process Control, McGraw-Hill,
1998
[15] Consoli, F., (eds.), Guidelines for Life-Cycle Assessment: A "Code of Practice".
SETAC, USA, 1993.
[16] R. Smith, Chemical Process Design, McGraw-Hill, 1995.
[17] K. Miettinen, Nonlinear Multi-objective Optimisation. Kluwer Int. Series, 1999.
[18] V. Chankong and Y. Haimes, Multiobjective Decision Making, Elsevier Science
Publishing Co., New York, 1983.
[19] C. Coello, D. Veldhuizen and G. Lamont, Evolutionary algorithms for solving multi-
objective problems, Kluwer Academic, 2002.
[20] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley &
Sons; 2001.
[21] C. Hwang, S. Paidy, K. Yoon and A. Masud, Comput. & Ops Res, 7 (1980) 5.
[22] P. Clark and A. Westerberg, Comput. Chem. Eng, 7 (1983) 259.
[23] HYSYS.PLANT, Manuals, Hyprotech Ltd, 1998.
[24] A. Burgess and D. Brennan, Chem. Eng. Sci., 56 (2001) 2589.
[25] R. Heijungs, (eds.), Environmental Life Cycle Assessment of Products, Leiden, 1994.
[26] ISO 14040 (AS/NZS), Environmental Management, 1998.
[27] B. Linnhoff, et al., A User Guide on Process Integration for the Efficient Use of Energy,
IChemE, Rugby, 1994.
[28] T. Gundersen and L. Naess, Comput. Chem. Eng., 12 (1988) 503.
[29] W. Driedger, Hydrocarbon Process., 75 (1996) 111.
[30] B. Glemmestad and T. Gundersen, AIChE Symposium Series, 94 (1998) 451.
[31] D. Westphalen, B. Young, W. Svrcek and H. Shethna, 3rd Inter Symp Process Integ,
Canada, 2002.
[32] L. Biegler, I. Grossmann and A. Westerberg, Systematic Methods of Chemical Process
Design, Prentice Hall, 1997.
[33] S. Pethe, R. Singh and F. Knopf, Comput. Chem. Eng., 13 (1989) 859.
[34] Z. Han, J. Zhu, M. Rao and K. Chuang, Chem. Eng. Commun, 164 (1998) 191.
[35] M. Ravagnani, A. Silva and A. Andrade, Appl. Therm. Eng., 23 (2003) 141.
[36] J. Zhu, Z. Han., M. Rao and K. Chuang, Can. J. Chem. Eng, 74 (1996) 876.
[37] T. McAvoy, Interaction Analysis-Principles and Applications, ISA, 1983.
[38] J. D. Perkins and S. Walsh, Comput. Chem. Eng, 20 (1996) 315.
[39] L. Narraway and J. D. Perkins, Ind. Eng. Chem. Res, 32 (1993) 2681.
[40] P. Bahri, A. Bandoni and J. Romagnoli, AIChE J, 43 (1997) 997.
[41] T. Vu, P. Bahri and J. Romagnoli, Comput. Chem. Eng, 21 (1997) S143.
[42] T. Larsson and S. Skogestad, Model Ident Control, 21 (2000) 209.
[43] R. McPherson, C. Starks and G. Fryar, Hydrocarb Process, (1979) 75.
[44] J. Cowfer and M. Gorensek, Encyclopaedia of Chemical Technology, 24, 1997.
[45] J. Orejas, Chem. Eng. Sci, 56 (2001) 513.
[46] S. Karra and S. Senkan, Ind. Eng. Chem. Res, 27 (1988) 1163.
[47] A. Lakshmanan, W. Rooney and L. Biegler, Comput. Chem. Eng, 23 (1999) 479.
[48] E. Gel'perin, Y. Bakshi, A. Avetisov and A. Gel'bshtein, Kinet Catal, 25 (1984) 716.
The Integration of Process Design and Control
P. Seferlis and M.C. Georgiadis (Editors)
© 2004 Elsevier B.V. All rights reserved.
Chapter B5
1. INTRODUCTION
highly complex dynamics, including chaotic behavior, in processes with otherwise simple dy-
namics [5]. During the last decade a large number of papers, e.g., [6], have also been devoted
to the so-called "snow-ball" effect, i.e., excessively large control moves for small disturbances,
that may occur in processes with recycle. However, this latter effect is a consequence of control
design only, and is not a fundamental property of the process, i.e., it is not related to controlla-
bility.
The introduction of more complex plant structures, involving recycle flows, does not only af-
fect the fundamental dynamic behavior of a process. Another important consequence is that
the strong interactions introduced between the various process units make it more difficult to
understand the source of various behaviors. Hence it becomes more difficult to know how the
process should be modified in order to improve controllability. One way to deal with this
problem is to employ optimization-based methods, e.g., [7, 8], which require little or no pro-
cess insight. However, while this may be a feasible approach in some cases, we believe that
process knowledge is essential for designing optimal and controllable plants in most cases. In
this chapter we therefore consider methods and tools which make the relationship between the
overall plant controllability and the properties of the individual units transparent. In this way,
existing knowledge on how to design traditional process units for controllability can be applied
also in an integrated plant environment.
We start the chapter by introducing a formal definition of input-output controllability, based
on linear dynamic models. We then present a reactor-separator problem which will be used
for illustration throughout the chapter. The analogy between plants with recycle and feedback
control systems is then utilized to decompose the plant model, and furthermore, to separate the
dynamics resulting from interactions from those that can be attributed to single process units
only. Based on this separation, a methodology is presented for relaxing, respectively, the
control limitations and the control requirements caused by process unit interactions. The derived results are
used to ensure acceptable controllability through redesign of the introductory reactor-separator
problem.
2. INPUT-OUTPUT CONTROLLABILITY
By input-output controllability of a process we here mean the inherent ability to achieve
a desired control performance using the available measurements and control inputs; see, e.g.,
Skogestad and Postlethwaite [9]. The performance may be related to attenuation of disturbances
and/or tracking of setpoint changes.
It is important to stress that controllability is independent of the control system, and is a prop-
erty of the process only. Consider a linearized dynamic model of the process

y(s) = g(s)u(s) + gd(s)d(s)    (1)

where y is the output to be controlled, u is the control (manipulated) input and d is a disturbance.
Note that setpoint changes may also be described as disturbances. We here assume all signals
to be scalar; extensions to multivariable systems are relatively straightforward by considering
singular values or eigenvalues of the resulting transfer-matrices.
In order to more easily assess whether the required control performance can be obtained, the
variables y, u, d, and correspondingly the models g(s) and gd(s), are scaled in such a way that
|d| = 1 corresponds to the maximum expected disturbance, |u| = 1 to the maximum allowed
control input, while acceptable performance corresponds to keeping |y| < 1. Scaling also en-
sures that all variables have comparable magnitude. This simplifies the controllability analysis,
in particular for multivariable systems.
Consider the frequency response of the scaled system (1) and define ωd as the disturbance
cross-over frequency, i.e., the frequency where the scaled disturbance gain is unity

|gd(iωd)| = 1    (2)

and assume that this frequency is unique. We then have

y(s) = S(s)gd(s)d(s)    (5)

where

S(s) = 1/(1 + g(s)k(s))    (6)

is the closed-loop sensitivity function, with k(s) the feedback controller. Acceptable disturbance
attenuation is obtained when

|S(iω)gd(iω)| < 1,  ∀ω    (7)

Then, for acceptable disturbance rejection, we get the bandwidth requirement ωB > ωd, i.e., the
control must be effective at least up to the frequency ωd.
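The cross-over condition (2) is easily evaluated numerically. The sketch below assumes a hypothetical first-order scaled disturbance model; the gain and time constant are chosen only to be of the same order as the reactor-separator example later in the chapter (scaled gain of about 40, time constant of about 170 min).

```python
import math

# Hypothetical scaled first-order disturbance model gd(s) = kd/(tau*s + 1)
# (illustrative numbers of the same order as the reactor-separator example).
kd, tau = 40.0, 170.0

def gd_mag(w):
    # |gd(iw)| for the first-order model
    return kd / math.sqrt(1.0 + (tau * w) ** 2)

# Disturbance cross-over frequency, Eq. (2): |gd(i*wd)| = 1.
# For a first-order model it has a closed form.
wd = math.sqrt(kd ** 2 - 1.0) / tau
print(round(wd, 3))  # control must be effective up to this frequency
```

With these numbers wd comes out near 0.2 rad/min, i.e., the required bandwidth is set by the disturbance model, not by the (much slower) process time constant.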
Unfortunately, a plant always has fundamental limitations which restrict the highest band-
width ωB that a feedback control system can achieve, even with the best possible control law.
Fundamental limitations stem from the process itself, e.g., in the form of time delays θ and right
half plane (RHP) zeros z > 0. In addition, the phase lag of a plant imposes a limitation when
low order controllers, such as PID-controllers, are employed. Assume the plant model can be
written on the form

g(s) = k e^(-θs) (s - z) / [(τ1 s + 1)(τ2 s + 1)···(τn s + 1)]

where τ1 ≥ τ2 ≥ ... ≥ τn. Then the upper bound on the bandwidth ωB is approximately deter-
mined as

ωB < ωB* = 1/θe    (10)

where θe is the effective delay of the plant, combining the time delay θ, the RHP zero and the
higher-order lags.
In addition, constraints on the control input u impose a limitation. In particular, with |d| = 1
and |y| = 1 we require |u| = |g^-1|(|gd| - 1) < 1. Thus, effective feedback control can only be
achieved at frequencies for which

|g| > |gd| - 1    (12)

For frequencies where this is not satisfied, acceptable disturbance attenuation can not be achieved
using feedback control with the considered input as the manipulated variable. The smallest fre-
quency for which (12) is not satisfied is denoted ωBu.
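Condition (12) can likewise be checked numerically. The sketch below uses hypothetical first-order models for g and gd (illustrative numbers, not taken from the chapter's example) and scans upward for the first frequency at which (12) fails, i.e., ωBu.

```python
import math

# Hypothetical scaled models: a slow high-gain plant g and a disturbance
# path gd (illustrative numbers only).
def g_mag(w):
    return 20.0 / math.sqrt(1.0 + (100.0 * w) ** 2)

def gd_mag(w):
    return 10.0 / math.sqrt(1.0 + (10.0 * w) ** 2)

def input_ok(w):
    # Eq. (12): effective feedback control requires |g| > |gd| - 1
    return g_mag(w) > gd_mag(w) - 1.0

# Scan upward for the smallest frequency where (12) first fails: wBu.
w, dw = 1e-4, 1e-4
while input_ok(w):
    w += dw
wBu = w
print(round(wBu, 4))  # first frequency where (12) fails
```

Above this frequency the allowed input magnitude is too small to attenuate the disturbance, regardless of the control law.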
If the maximum attainable bandwidth ωB = min(ωB*, ωBu) is smaller than ωd, then acceptable
disturbance sensitivity can not be achieved using feedback control alone, and some modification
of the process design is required in order to either increase the attainable ωB and/or reduce ωd,
i.e., the disturbance sensitivity of the process, in the frequency range ω ∈ [ωB, ωd].
In the next section we apply the controllability analysis, as outlined above, to a simple
integrated process system.
where xR is the mole fraction of R in the reactor. The aim of the process is to produce a product
with 99.5% of component R, corresponding to the distillate composition yD = 0.995. We design
the system according to the recommendations in [11], that is, we choose a design in which
the reactor is operated at the maximum reaction rate, i.e., minimum volume. The data for
the process are given in Table 1. The dynamic model we employ assumes a perfectly mixed
isothermal reactor (CSTR) and an ideal distillation column with constant plate holdups and
constant molar flows.
The worst-case disturbance considered here is up to 20% changes in the reaction rate con-
stant k0, e.g., reflecting disturbances in the fresh feed conditions (a 20% disturbance in feed
composition gives essentially identical results). The performance requirement is that the prod-
uct composition should satisfy yD(t) > 0.99, ∀t. We assume that all nominal flows correspond
to 50% valve openings. All liquid levels are assumed to be perfectly controlled, but we stress
that this is not important for the results we present since we consider rejection of composition
related disturbances only. For the case of flow disturbances, not considered here, the choice of
the level control structure will influence the results.
Figure 2 shows the frequency response of the linearized model from the scaled disturbance
k0 to the scaled output yD. As seen from the figure, the cross-over frequency ωd ≈ 0.2 rad/min,
implying that we need to attenuate disturbances up to this frequency. With feedback control
this would correspond to a closed-loop time-constant of around 5 min. This can be compared
to the process time-constant, which is about 170 min. To check whether there exist any fundamental
properties of the process that will limit the achievable bandwidth we compute the zeros of
the transfer-function from the reflux L, considered the manipulated variable, to the distillate
composition yD, and find that the transfer-function has a real zero at z = 0, i.e., at the boundary
of the RHP. This is reflected in the frequency response from the control input L to the output
yD in Fig. 3, in which it is seen that the amplitude decreases to zero with a slope of 1 at low
frequencies. Thus, we have a severe control limitation both in the RHP zero as well as in the
small effect of the control input at low frequencies. As pointed out in [4], these results are
inherent for reactor-separator systems with the reactor operated at the maximum reaction rate
and can not be avoided by using any combination of available control inputs.
From the above results it is clear that we need to modify the process design in order to re-
move the non-minimum phase behavior and simultaneously increase the effect of the control
input on the product composition. However, since the disturbance sensitivity requires a rela-
tively high bandwidth of the control system, i.e., ωB > 0.2, it may be relevant to also modify
the design with the aim of reducing the disturbance sensitivity at higher frequencies. In order
to achieve these goals it is necessary to understand the source of the relevant behaviors, and
for this purpose we shall in the next section consider decomposition of models for integrated
process systems by means of tools from linear systems theory.
4. MODEL DECOMPOSITION
Fig. 3. Scaled frequency response from reflux L to distillate composition yD of the reactor-separator
system.
above, the rate constant k0 will affect the distillate composition yD only through the reactor
composition, which is part of the recycle loop. However, the reflux L has a direct effect on yD
which is independent of variables in the recycle loop. Thus, k0 belongs to the input set w2, while
L belongs to the input set w1. For inputs belonging to w1, there will be parallel effects from the
input to the output; one direct and one indirect through the recycle loop.
From the block-diagram in Fig. 4 we can now derive the closed-loop dynamics and compare
them to the open-loop dynamics represented by G(s). In this way we can directly compute the
effect of the feedback, provided by the recycle flow, on the overall dynamics. To simplify
the exposition, we here limit ourselves to the case in which all variables are scalar.
However, similar results apply when considering the more general multivariable case. With all
variables scalar, G(s) is a 2 x 2 matrix and can be written

G(s) = ( g11(s)  g12(s) )
       ( g21(s)  g22(s) )

From the block-diagram above we then derive, for the closed-loop system,

y1 = [g11(s) - gR(s) det G(s)] w1 / [1 - g22(s)gR(s)] + g12(s) w2 / [1 - g22(s)gR(s)]    (15)

y2 = g21(s) w1 / [1 - g22(s)gR(s)] + g22(s) w2 / [1 - g22(s)gR(s)]    (16)

From this we see that all transfer-functions involving at least one loop-variable, w2 or y2, are
simply the open-loop dynamics multiplied by the factor

Sp(s) = 1 / [1 - g22(s)gR(s)]    (17)
Fig. 4. Block-diagram for a general plant with recycle. G(s) represents the dynamics of the forward
path while gR(s) represents the dynamics of the recycle path.
This is a well known result from linear systems theory. The function Sp is called the sensitiv-
ity function as it gives the relative change in the input-output sensitivity due to the presence of
feedback. The index p is here used to denote that it is the sensitivity function of the process
itself. For the case with multivariable recycle, we get identical results, except that the sensitivity
function in this case is a matrix Sp = (I - G22(s)GR(s))^-1.
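As a consistency check, Eqs. (15)-(16) can be verified numerically at a single frequency by closing the recycle loop directly. The complex frequency-response samples below are arbitrary illustrative values, not taken from any model in the chapter.

```python
# Arbitrary complex frequency-response samples at one frequency.
g11, g12, g21, g22 = 0.5 + 0.2j, 1.2 - 0.4j, -0.8 + 0.1j, 0.9 + 0.5j
gR = 0.3 - 0.7j
w1, w2 = 1.0 + 0.3j, -0.7 + 0.2j

# Close the recycle loop directly: y2 = g21*w1 + g22*(w2 + gR*y2).
y2 = (g21 * w1 + g22 * w2) / (1.0 - g22 * gR)
y1 = g11 * w1 + g12 * (w2 + gR * y2)

# Same responses via Eqs. (15)-(16), with Sp = 1/(1 - g22*gR).
Sp = 1.0 / (1.0 - g22 * gR)
detG = g11 * g22 - g12 * g21
y1_eq = (g11 - gR * detG) * Sp * w1 + g12 * Sp * w2
y2_eq = g21 * Sp * w1 + g22 * Sp * w2

assert abs(y1 - y1_eq) < 1e-12 and abs(y2 - y2_eq) < 1e-12
print("Eqs. (15)-(16) verified")
```

The check is exact up to rounding because (15)-(16) are an algebraic rearrangement of the loop equations.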
With the above decomposition we are in a position to determine the part of the dynamics
that can be uniquely attributed to individual process units, and the part which is caused by
interactions involving several units. Furthermore, the effect of the interactions can, by the use of
linear systems theory, easily be related to the properties of the individual units, i.e., to G(s) and gR(s).
The fundamental control limitations that can exist in a plant include time delays, RHP poles
and RHP zeros [9]. Time-delays are usually caused by measurements and material transport,
and are in general not affected by feedback effects. Feedback will, however, move the poles of
a system, and can hence affect stability by moving poles across the imaginary axis.
From Eqs. (15)-(16) it is easily seen that the feedback will induce instability if the charac-
teristic equation 1 - g22(s)gR(s) = 0 has any roots in the RHP. Provided the open loop system is
stable, the Bode criterion applies, i.e., the feedback will induce instability provided there exists
some frequency ω for which the loop-gain exceeds unity at zero phase,

|g22(iω)gR(iω)| ≥ 1  where  ∠[g22(iω)gR(iω)] = 0° (mod 360°)    (18)
Thus, if a recycle flow is found to induce instability, stability can be regained by modifying any
of the units in the loop such that the loop-gain is reduced at the frequencies where the phase-
condition of (18) is satisfied. Examples of design modifications to avoid instability caused by
recycling, as well as to stabilize an open-loop unstable process using recycle feedback can be
found in [12]. Since RHP poles usually do not represent any severe control limitation as such,
however, we do not pursue this problem any further here.
It is usually assumed that, while feedback moves the poles of a system, the system zeros are
unaffected by feedback. This is also correct if full multivariable feedback is considered, e.g.,
[9]. However, as shown in [4], the feedback imposed by recycling can move transfer-function
zeros across the imaginary axis. The reason for this apparent discrepancy is that recycling in
a plant corresponds to partial, or decentralized, feedback and, as shown in [13] and [14], such
feedback will move zeros of transfer-functions involving subsets of the plant variables. For
the transfer-functions in (15)-(16) it is only the zeros of the transfer-function from w\ to y\
that will be affected by the feedback imposed by the recycle flow. This is true in general, i.e.,
only transfer-functions between variables that have a connection external to the recycle loop
will have their zeros moved by the recycle feedback effect. The following theorem gives a
necessary and sufficient condition for the zeros of such transfer-functions to be moved across
the imaginary axis.
Theorem 1. Consider a feedback system as shown in Fig. 4. Assume G(s) and gR(s) stable.
Then the image of

δ(s) = gR(s) det G(s) / g11(s)

as s follows the Nyquist D-contour (enclosing the RHP) clockwise, will encircle the
point (1,0) N = Z' - Z times, where Z and Z' are the numbers of RHP zeros of the transfer-
function from w1 to y1 before and after closing the feedback loop, respectively. That is, N zeros
are moved from the LHP to the RHP as the feedback loop is closed (a negative number
implies movement from the RHP to the LHP).
Proof. The proof is based on the Argument Variation Principle (Nyquist criterion) and can
be found in [14].
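The encirclement count of Theorem 1 can be illustrated numerically. The transfer-functions below are hypothetical, chosen so that closing the recycle loop moves exactly one zero into the RHP, which can be confirmed by hand from the closed-loop numerator.

```python
import numpy as np

# Hypothetical scalar example:
#   g11 = 1/(s+1), g12 = g21 = 2/(s+1), g22 = 1/(s+1), gR = -0.5,
# giving the theorem's test function delta(s) = gR*detG/g11 = 1.5/(s+1).
# The closed-loop numerator is g11 - gR*detG = (s - 0.5)/(s+1)^2, so
# exactly one new RHP zero is expected (Z' - Z = 1).
def delta(s):
    return 1.5 / (s + 1.0)

# Straight part of the Nyquist D-contour; on the large semicircle
# delta -> 0, which contributes no extra encirclements of (1, 0).
w = np.linspace(-1e4, 1e4, 400_001)
f = delta(1j * w) - 1.0              # image relative to the point (1, 0)
phase = np.unwrap(np.angle(f))
N = round(-(phase[-1] - phase[0]) / (2.0 * np.pi))  # clockwise encirclements
print(N)  # number of zeros moved into the RHP
```

The accumulated-phase count is just a discrete version of the argument principle on which the theorem's proof rests.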
The theorem, which easily can be extended to multivariable systems [14], covers all possible
zero crossings due to the recycle feedback. To make the relationship to individual unit properties
more clear, one can note that det G(s) is a measure of the couplings between the variables in the
forward loop. This becomes even more clear by the fact that we can write [14]

δ(s) = g22(s)gR(s) / λ11(s)

where λ11 is the 1,1-element of the Relative Gain Array of G(s) [15], which is a well known
measure of interactions in multivariable systems. Thus, in order to avoid zeros crossing from
the LHP to the RHP as a consequence of recycle feedback one can either reduce the gain of the
feedback loop g22(s)gR(s), or modify the interactions between the variables of the forward path G(s).
For a real zero crossing from the LHP to the RHP we have the simplified steady-state condition

δ(0) = gR(0) det G(0) / g11(0) > 1

and in this case it is hence sufficient to consider the steady-state properties of the individual
process units only.
Reactor-Separator problem revisited: For the reactor-separator problem considered above,
we found that there was one real RHP zero (at 0) from the input L to the output yD. We note
that both these variables are external to the recycle loop, and hence the zero can in principle
be caused by the recycle feedback effect. To investigate whether this is the case we derive the
steady-state models for the reactor and distillation column separately, as outlined in the model
decomposition above,

xR = 0.524 xB    (22)

where the coefficient is the steady-state recycle gain gR(0), and

( yD )   (  0.0208   0.135 ) ( L  )
( xB ) = ( -0.0189   1.156 ) ( xR )

which defines G(0).
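Whatever the exact digits (those above are recovered from an imperfect extraction and should be treated as illustrative), the identity δ(0) = gR(0) det G(0)/g11(0) = g22(0)gR(0)/λ11(0) can be checked numerically:

```python
# Steady-state gains as recovered from the text above (treat the digits
# as illustrative; the identity below holds for any numbers).
gR0 = 0.524
g11, g12 = 0.0208, 0.135
g21, g22 = -0.0189, 1.156

detG = g11 * g22 - g12 * g21
lam11 = g11 * g22 / detG          # 1,1-element of the RGA of G(0)

delta_a = gR0 * detG / g11        # theorem's test function at s = 0
delta_b = g22 * gR0 / lam11       # RGA form of the same quantity
assert abs(delta_a - delta_b) < 1e-12
print(round(delta_a, 2))
```

The two forms agree identically because λ11 = g11 g22/det G by definition, so g22 gR/λ11 reduces algebraically to gR det G/g11.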
Fig. 5. Scaled frequency response from reflux L (upper) and reaction rate constant k0 (lower) to
distillate composition yD after the design modification involving an increase of the reactor volume.
Dotted lines show the corresponding responses prior to the design modification.
since the process can be modified so as to increase the "built-in" damping of high-frequency
disturbances, that is, by decreasing the disturbance cross-over frequency ωd. This will of course
carry a cost which must be traded off against the cost of reducing bandwidth limitations, like
measurement delays, and developing a high-bandwidth feedback control system.
From the expressions for the process transfer-functions in (15)-(16) we see that all input-
output transfer-functions are proportional to the process sensitivity function

Sp(s) = 1 / [1 - g22(s)gR(s)]

where g22(s)gR(s) is the transfer-function of the recycle loop. For transfer-functions between
variables connected through the recycle loop, which in general will include most transfer-
functions spanning across more than one process unit, we have from Eqs. (15)-(16) that

gcl(s) = gol(s)Sp(s)    (26)

where gol is the transfer-function of the forward path, i.e., with the recycle feedback removed,
and gcl is the transfer-function with the feedback effect included. Thus, using frequency re-
sponse analysis, we can conclude that the recycle feedback serves to increase the disturbance
sensitivity at frequencies where |Sp(iω)| > 1 while it decreases the sensitivity where |Sp(iω)| < 1.
The process sensitivity function for the modified reactor-separator problem is shown in Fig.
6. As seen from the figure, the recycle feedback increases the disturbance sensitivity by a factor
of 4.2 at low frequencies, while it in fact serves to slightly dampen the disturbance sensitivity,
i.e., |Sp| < 1, at high frequencies.
The fact that recycling tends to increase the disturbance sensitivity of a plant is well known,
see e.g., [2]. However, as seen from the reactor-separator example above, the increase applies
only at low frequencies, while the recycling in fact may have the opposite effect at higher
frequencies. This can be explained using the well known Bode Sensitivity Integral [16]

∫0^∞ ln|Sp(iω)| dω = 0    (27)

which applies if the open loop, i.e., the individual process units, is stable and the loop transfer-
function g22(s)gR(s) has a pole excess of at least two. Equation (27) then states that the area
for which |Sp| > 1 must be balanced by a frequency region with |Sp| < 1. Thus, any sensitivity
increase due to recycling feedback in some frequency region will be compensated by a similar
reduction in some other frequency region.
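Eq. (27) can be verified numerically for an illustrative loop. The transfer-function below is hypothetical, chosen to satisfy the theorem's conditions: stable, pole excess of two, and |g0| < 1 everywhere so the recycle loop itself is stable.

```python
import numpy as np

# Hypothetical stable loop transfer-function with pole excess two and
# |g0(iw)| < 1 for all frequencies (so the recycle does not destabilize).
def g0(s):
    return 0.5 / ((s + 1.0) * (0.1 * s + 1.0))

# Dense grid at low frequency, coarser grid for the slowly-decaying tail.
w = np.concatenate([np.linspace(0.0, 50.0, 500_001),
                    np.linspace(50.0, 5000.0, 500_001)[1:]])
lnSp = np.log(np.abs(1.0 / (1.0 - g0(1j * w))))   # ln|Sp(iw)|, Eq. (17)

# Trapezoidal approximation of the Bode integral, Eq. (27).
integral = float(np.sum(0.5 * (lnSp[1:] + lnSp[:-1]) * np.diff(w)))
print(abs(integral) < 0.01)
```

The positive area at low frequency (where the recycle amplifies disturbances) is cancelled by the negative area at higher frequency, up to the small error from truncating the frequency range.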
The process sensitivity function Sp provides a direct measure of how the process unit inter-
actions contribute to the overall disturbance sensitivity of the plant. From (17) it is clear that
the effect of the interactions will depend strongly on the dynamic properties of the individual
process units that are part of the recycle loop, i.e., on the loop transfer-function

g0(s) = g22(s)gR(s)    (28)

For a given frequency ω*, the frequency response of the loop transfer-function is a complex
number which can be written

g0(iω*) = R + iI    (29)

If we assume that the loop-gain |g0(iω)| < 1 for all frequencies, which corresponds to assuming
that the recycling does not destabilize the process, we get a necessary and sufficient condition for
when the process interactions will serve to increase the disturbance sensitivity, i.e., |Sp(iω*)| > 1:

R > (R² + I²)/2    (30)

Since the phase-lag of the process units typically is zero at steady-state, i.e., positive steady-
state gain, we get that the recycling feedback essentially always will serve to increase the
steady-state disturbance sensitivity of a plant. However, when the phase-lag of the loop transfer-
function exceeds -60°, for some frequency, the recycle feedback will provide disturbance
damping. The phase-lag for which we get the maximum sensitivity reduction from recycle
feedback, for a given loop-gain |g0|, is ∠g0 = -180°, corresponding to a negative feedback
effect from the recycling at that frequency.
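The amplification condition can be sketched numerically. Writing g0(iω*) = R + iI with loop gain r, the boundary between amplification and damping lies at a phase lag of acos(r/2), which approaches 60° as r approaches one (the loop gain r = 0.9 below is an illustrative number):

```python
import cmath
import math

# Condition (30): with g0(iw*) = R + iI, interactions amplify disturbances
# exactly when R > (R^2 + I^2)/2, i.e. |1 - g0| < 1.
r = 0.9                                           # illustrative loop gain
phi_boundary = math.degrees(math.acos(r / 2.0))   # boundary phase lag

def Sp_mag(phase_lag_deg):
    g0 = r * cmath.exp(-1j * math.radians(phase_lag_deg))  # phase LAG
    return 1.0 / abs(1.0 - g0)                    # |Sp|, Eq. (17)

# Small lag: amplification (|Sp| > 1); large lag: damping (|Sp| < 1).
print(round(phi_boundary, 1), Sp_mag(10.0) > 1.0, Sp_mag(170.0) < 1.0)
```

For this gain the boundary is at about 63°, consistent with the 60° rule of thumb quoted in the text for loop gains near unity.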
In summary, the phase-lag properties of the individual process units play a critical role
for the disturbance sensitivity of a plant with recycle flows, and this fact should be utilized when
designing integrated plants for controllability.
For some process units it may be relatively straightforward to modify the design so as to
change the phase-lag properties. If not, a simple alternative is to add capacities in the loop,
since these can easily be tailored to provide a desired phase lag, which furthermore can be
changed during operation. Two alternative simple capacities that can be added to the loop are
mixed tanks and plug-flow tanks. A perfectly mixed tank has the transfer-function

GB(s) = 1/(τB s + 1)    (32)

where τB is the residence time of the tank. Similarly, a plug-flow tank with residence time τD
has the transfer-function

GD(s) = e^(-τD s)    (33)

The corresponding phase-lags are ∠GB = -atan(τB ω) and ∠GD = -ωτD, and hence for a given
residence time, the maximum phase-lag is achieved with a plug-flow tank. To maximize the
residence time, and hence the phase-lag, for a given tank size, the plug-flow tank should be placed
at the recycle flow where the flow rate is the smallest. In principle, one seemingly attractive
option would therefore be to design a delay tank with a size which yields a total loop phase-
lag of -180° at the frequency ωd for which the process should have maximum self-damping
properties. However, this may require a relatively large delay tank and is in most cases not
optimal either, i.e., does not give the minimum required tank volume to provide a controllable
plant [17].
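The phase-lag comparison between the two capacities is elementary; a sketch with illustrative numbers (residence time and frequency are assumptions, not values from the chapter):

```python
import math

tau = 10.0     # residence time [min] (illustrative)
w = 0.2        # frequency of interest [rad/min] (illustrative)

# Mixed tank, Eq. (32): phase lag bounded below by -90 degrees.
phase_mixed = -math.degrees(math.atan(tau * w))
# Plug-flow tank, Eq. (33): phase lag grows without bound in frequency.
phase_plug = -math.degrees(tau * w)

print(round(phase_mixed, 1), round(phase_plug, 1))  # -63.4 -114.6
```

For the same residence time the plug-flow tank already gives almost twice the lag at this frequency, which is why it is the preferred element for shaping the recycle-loop phase.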
A systematic design for controllability should, as discussed earlier, aim at achieving ωd <
ωB, i.e., the attainable bandwidth ωB should exceed the frequency region where active distur-
bance damping by a control system is needed. Thus, given a target bandwidth ωB for the control
system, the aim should be to reduce the disturbance sensitivity of the process itself such that
ωd < ωB. As seen from (26), the disturbance sensitivity is partly a result of the individual
process unit sensitivities, partly a result of the interactions between these units as reflected in
the process sensitivity function. The former can be reduced by reducing the gains of the units,
while the latter can be reduced by modifying the gains and/or the phase lag of the units involved
in the recycle loop. If we consider adding capacities, as discussed above, it can be shown that
the optimal solution, in terms of minimizing the tank volumes for a given ωd, is to combine
a plug-flow (delay) tank in the recycle path with a mixed tank placed outside the recycle loop
[17]. Thus, the plug-flow tank is used to modify the phase-lag of the recycle feedback loop,
with the aim of reducing the sensitivity due to process unit interactions as discussed above,
while the mixed tank is used as a traditional buffer tank aimed at damping the magnitude of the
disturbances themselves.
Reactor-Separator problem revisited. In the previous section we modified the process de-
sign with the aim of removing the non-minimum phase behavior. This was achieved by a
modification of the reactor design which reduced the individual disturbance sensitivity gR(s)
of the reactor. Since this disturbance sensitivity also affects the recycle loop gain g21(s)gR(s),
this also resulted in a significant reduction in the disturbance sensitivity of the overall plant,
i.e., from k0 to y0. This can be seen from the lower plot in Fig. 5. At steady-state, a 20%
reduction in the disturbance sensitivity gR(0) of the individual reactor resulted in a reduction
in the sensitivity from k0 to y0 for the overall plant from 40 to 10, i.e., a 75% reduction. That
this large reduction is due to a reduced sensitivity caused by unit interactions can be seen from
Fig. 6, which shows the loop sensitivity function |SP| as a function of frequency before and after
the design modification. As can be seen from the figure, the sensitivity is significantly reduced
at low frequencies. However, as seen from Figs. 5 and 6, the disturbance sensitivity is almost
unaffected at high frequencies. In particular, the disturbance cross-over frequency ωd is not
affected by the design modification, and ωd ≈ 0.2 rad/min with both designs. Thus, we require
a bandwidth ωB > 0.2 rad/min in both cases, corresponding to a closed-loop time-constant of approx-
imately 5 min. This is a relatively high bandwidth for a process control system, and requires
small measurement delays as well as relatively accurate process models.
We here assume that a reasonable target bandwidth for the control system is ωB = 0.05
rad/min, corresponding to a closed-loop time-constant of approximately 20 min. At this fre-
quency we find from Fig. 5 that the disturbance sensitivity is 4.2, and hence it needs to be reduced
accordingly to achieve ωd = 0.05. We restrict ourselves here to modifications by
capacity addition only. Using the buffer design rules in [18] we find that we can reduce the distur-
bance sensitivity to 1 at ωB = 0.05 by placing a perfectly mixed buffer-tank with residence time
τB = 81 min outside the recycle loop, i.e., at the incoming feed. However, this will not affect the
disturbance sensitivity caused by unit interactions, which is quite significant at the considered
frequency 0.05. From Fig. 6 we see that the sensitivity |SP(iωB)| ≈ 2, and hence the interac-
tions amplify the disturbance sensitivity by a factor 2 at the critical frequency ωB. As discussed
above, this can be reduced to a factor less than 1 by adjusting the phase-lag of the loop units at
ωB, e.g., by placing a plug-flow tank at the recycle flow.
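The 81 min figure can be reproduced from the standard first-order buffer model GB(s) = 1/(τB s + 1): requiring |GB(iωB)| = 1/4.2 and solving for τB. This back-calculation is only a sketch and not the full design rule of [18]:

```python
import math

def buffer_residence_time(attenuation, w):
    """Residence time of a first-order mixed buffer GB(s) = 1/(tau*s + 1)
    whose magnitude at frequency w equals 1/attenuation, i.e.
    1/sqrt(1 + (tau*w)**2) = 1/attenuation."""
    return math.sqrt(attenuation**2 - 1.0) / w

# Reduce the disturbance sensitivity of 4.2 down to 1 at wB = 0.05 rad/min
tau_B = buffer_residence_time(4.2, 0.05)
print(round(tau_B, 1))   # ~81.6 min, close to the 81 min quoted in the text
```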
Using the results presented in [17] we find that the optimal solution, i.e., the minimal ca-
pacity volume that gives ωd = 0.05, is a delay-tank with residence-time τD = 23 min combined
with a cascaded mixing tank with residence time τB = 27 min. See also Fig. 7. In terms of
volumes, accounting for flow rate sizes, this means that we reduce the required capacity volume
by 40% compared to a traditional cascaded buffer. For lower ωd the reduction is even more
significant, e.g., for ωd = 0.02 the optimal solution gives a required capacity volume which is
only one third of the required cascaded buffer volume.
Fig. 8 shows the scaled disturbance sensitivity from k0 to y0 after the design modification,
and as can be seen we have achieved ωd = 0.05. Fig. 9 shows the effect of the delay tank on the
process sensitivity function SP. As can be seen, the delay reduces the sensitivity SP at ωd from
2 to 0.8, implying that the interactions effectively serve to dampen disturbances at ωd after the
design modification. The resonances that appear at higher frequencies are dampened out by the
low-pass properties of the process units, and therefore do not pose a problem, as can be seen
from Fig. 8.
Effectively, the delay in the feedback path serves as a disturbance filter. This is shown in
Fig. 7. Reactor-separator plant with plug-flow (delay) tank and mixed buffer tank.
Fig. 8. Scaled disturbance sensitivity of reactor-separator process before (dashed) and after
(solid) addition of delay tank and mixed buffer. See also Figure 7.
Fig. 9. Reactor-separator system. Process sensitivity function \SP | before (dashed) and after
(solid) addition of delay tank in recycle path.
Fig. 10, which shows the effective disturbance filtering of a delay of τD = 23 min in the recycle
path of the reactor-separator system. The same figure shows the corresponding filtering
effect of a mixing tank of the same size cascaded with the process. As seen from the figure, the
delay provides a filtering effect which is significantly better up to a frequency ω ≈ 0.1 rad/min, while
the mixing tank is better at higher frequencies (where disturbance damping is not required).
Fig. 10. Filter effect of delay tank in recycle path (solid) and cascaded mixing tank of same size
(dashed) for reactor-separator system.
We finally check that the addition of buffer tanks has not reduced the attainable bandwidth
by reducing the effect of the control input L, and find that ωBu > 0.05. That is, the proposed
design fulfills the controllability requirements.
7. CONCLUSIONS
The dynamic properties of an integrated plant, with recycle of material and energy, are to a
large extent determined by interactions between the various process units. It is important to
understand how these interactions affect the dynamics in general, and controllability in partic-
ular, in order to know where and how to modify the process design with the aim of improving
controllability. We have in this chapter utilized the close relationship between an integrated
plant and feedback control systems to decompose the overall plant model such that the effects
of interactions may be separated from the overall dynamics. A strength of this approach is that
the results provide a direct relationship between properties of individual units and the effects
of interactions on properties such as stability, non-minimum phase behavior and disturbance
sensitivity of the overall plant.
We stress that design for controllability can either aim at reducing control bandwidth lim-
itations, imposed by fundamental process properties, or at reducing the control requirements
imposed by disturbance sensitivities. Based on results from linear systems theory we have pre-
sented simple model based tools, based on the decomposed models above, which can be used
to improve stability, non-minimum phase behavior and disturbance sensitivities in plants with
recycle. One important conclusion of the presented results is that the phase-lag properties of the
individual process units play a crucial role for the disturbance sensitivity of an integrated plant.
In particular, by a careful design of the recycle loop phase lag, it is possible to tailor the effect
of process interactions such that they serve to effectively dampen the effect of disturbances in
the most critical frequency region, that is, around the bandwidth of the control system.
REFERENCES
[8] V. Bansal, J.D. Perkins, and E.N. Pistikopoulos, Ind. Eng. Chem. Res., 41 (2002) 760.
[9] S. Skogestad and I. Postlethwaite, Multivariable Feedback Control, John Wiley & Sons,
1996.
[11] O. Levenspiel. Chemical Reaction Engineering, John Wiley & Sons, New York, 2nd
edition, 1972.
[12] H. Cui. On the dynamics and controllability of processes with recycle. Licentiate Thesis,
Royal Inst. of Technology, http://control.s3.kth.se, 2000.
[13] E. W. Jacobsen and H. Cui. Zero crossings due to loop closure in decentralized control
systems, In 1998 AIChE annual meeting, Miami, USA, 1998, Paper 233f.
[16] H. W. Bode. Network Analysis and Feedback Amplifier Design, D. Van Nostrand Co,
New York, 1945.
[17] H. Cui and E. W. Jacobsen, Buffer Design for Disturbance Attenuation in Integrated
Plants. Submitted, http://control.s3.kth.se, 2003.
Chapter B6

a CERTH - Chemical Process Engineering Research Institute (CPERI),
P.O. Box 361, 57001 Thermi - Thessaloniki, Greece
b Department of Chemical Technology, Faculty of Applied Sciences,
Delft University of Technology, Julianalaan 136, 2628 BL, Delft, The Netherlands
1. INTRODUCTION
An optimisation-based method for the integrated process and control system design (Ref.
1-5) aims at the simultaneous determination of the flowsheet configuration, the equipment
design parameters, the plantwide control structure, and the controller tuning parameters. Such
an approach relies on the definition of a set of candidate flowsheet and control structure
configurations in the form of a plant and control system superstructure, which involves
numerous discrete and continuous variables representing the entire variety of design
decisions. Furthermore, appropriate nonlinear process models should enable the accurate
prediction of the steady state and dynamic behaviour of the candidate flowsheets to a set of
representative and meaningful disturbance scenarios. Hence, the solution of the complete
optimisation problem for a plantwide application accounting for all feasible design
alternatives and possible combinations between potential manipulated and controlled
variables may prove extremely challenging in terms of both problem complexity and
required computational effort. In this chapter, a number of techniques based on nonlinear
sensitivity analysis of the static and dynamic plant controllability properties are introduced
that facilitate the process design and control in a fully optimised and integrated way.
The chapter presents a procedure for the evaluation and screening of alternative process
flowsheet and control structure configurations in a rigorous, effective and systematic way. It
utilises an effective decomposition of the process and control system design task, which in
turn allows the enhancement of the process controllability properties. The quality of the
steady state behaviour in response to the deleterious effects of multiple simultaneous process
disturbances and model parameter variations is an essential prerequisite for good economic
3.4. Section 4 analyses and discusses a number of stimulating and representative examples
while concluding remarks appear in section 5.
2. DESIGN CRITERIA
step is necessary with the assistance of accurate dynamic simulations. Furthermore, criteria
for the control structure selection include structural output controllability (Ref. 25), and the
economic performance for linear and nonlinear process models with PID control loops (Ref.
26-27). Multivariable control strategies in a centralised and decentralised fashion have been
developed and employed in Ref. 28. The thermodynamic properties of the system and
specifically the mechanisms of energy transfer to and from the system were elaborated to
identify the dominant variables for the process and the inferential variables for the control
objectives (Ref. 29).
min over d, X of C(X, d, εref)
subject to h(X, d, εref) = 0,  g(X, d, εref) ≤ 0    (1)

C denotes the economic objective function, while h and g denote the equality and inequality
constraints of the process model. Vectors X and d denote the process and design variables in
the model, respectively. Vector εref denotes the model parameters and externally specified
disturbances at a nominal (reference) value level. At this stage, vector X = [x y u]T contains all
process variables without discriminating between state, x, controlled, y, and manipulated, u,
variables. The degrees of freedom in Eq. (1) are the design variables d, and a subset of x (i.e.,
the independent set of process variables) that specifies the plant's operating point. Upper and
lower bounds on the process and design variables define the available operating space.
The solution of the mathematical problem of Eq. (1), (dopt, Xopt), depends on the values of
the model parameters and exogenous inputs (disturbances) to the process. If the optimal
operating point lies at the intersection of process constraints, a very common situation,
model uncertainty may cause the actual steady state operating point to violate process
constraints. Therefore, the implemented optimal operating point should be adjusted to
avoid violations of the feasible space (Ref. 31). However, if the optimal operating point has
available degrees of freedom the influence of disturbances up to a maximum allowable
magnitude can be accommodated by the system.
and the postulated relationship between the input actions with respect to the error in the
output variables. The main target is the evaluation of the candidate control structures such that
any implemented control algorithm would result in adequate static and dynamic performance.
The formulation of the screening method for flowsheet configurations and control
structures proceeds with the construction of a number of disturbance scenarios that are
expected to influence the plant's operating conditions. The greater the knowledge of the
nature, magnitude and directionality of the anticipated process disturbances, the greater
the reliability of the observed results and the merits of the screening method. The
present method considers only disturbances of a deterministic nature. Since the analysis is
based on steady state behaviour, the effects of time variations of the disturbances are not
examined.
The static performance evaluation should be able to accommodate the large variety of
control objectives. A quadratic performance objective function is introduced that penalizes
deviations of the controlled variables from a target value in a least squares sense. Hence, the
controlled variables are forced to either remain at a constant steady state value (set point) or
vary within a specified region around a target value. Subsequently, it is desirable to identify a
set of manipulated variables that require the least effort to compensate for the detrimental
effects of process disturbances on the plant's control objectives. Therefore, the sum of squares
of changes from steady state optimal values for the manipulated variables is weighted in the
objective function, as well. Large changes in the manipulated variables may imply large
errors for the controlled variables during dynamic transition from one steady state to another.
Such a case would require that the disturbance or set point variation is slow enough,
compared to the plant's dynamic speed of response, to allow sufficient time for the plant to
reach a new steady state. However, the plant's steady state effort to cope with the effects of
disturbances is indicative of potential trouble in handling the situation even for variations of
higher frequency content. Obviously, the full description of the compensation of high
frequency disturbances would require the evaluation of the complete dynamic behaviour.
The steady state disturbance rejection problem is formulated within an optimisation
framework. Given a set of structural and equipment design variables at their optimal values as
calculated from the solution of Eq. (1) (e.g. number of stages in a distillation column, total
volume of a reactor), dopt, a set of set points for the controlled variables, ysp, a set of optimal
steady state values for the manipulated variables, uss, a set of model parameters and
disturbances, ε, and symmetric weighting matrices for the deviations from target values for
the controlled and manipulated variables, Wy and Wu, respectively, the following
optimisation problem is constructed:

min over X of (y − ysp)T Wy (y − ysp) + (u − uss)T Wu (u − uss)
subject to h(X, dopt, ε) = 0,  g(X, dopt, ε) ≤ 0,
           xL ≤ x ≤ xU,  yL ≤ y ≤ yU,  uL ≤ u ≤ uU    (2)
The optimal steady state values, uopt = uss, for the manipulated variables, and the set points,
yopt = ysp, for the controlled variables, are retrieved from the optimal Xopt = [xopt ysp uss]T vector
calculated from Eq. (1) and remain unchanged during the solution of Eq. (2) for different
parameter values, ε. However, a set point change can be simulated if ysp is considered a
varying parameter itself. Upper and lower hard bounds for the controlled, y, manipulated, u,
and the remaining process, x, variables that define the allowable ranges of variation are
present.
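The role of the weighting matrices can be illustrated on a hypothetical linear steady-state gain model y = G u + Gd d, for which the unconstrained minimiser of the quadratic objective has a closed form. All numerical values below are illustrative, not taken from any plant in the text:

```python
import numpy as np

# Hypothetical 2x2 steady-state gain model: y = G u + Gd d
G  = np.array([[2.0, 0.5],
               [0.4, 1.5]])
Gd = np.array([[1.0],
               [0.8]])
ysp = np.zeros(2)          # set points (deviation variables)
uss = np.zeros(2)          # nominal manipulated-variable values
Wy  = np.diag([10.0, 1.0]) # tight control of y1, loose control of y2
Wu  = np.diag([0.1, 0.1])  # small penalty on input moves

d = np.array([1.0])        # unit disturbance

# Unconstrained minimiser of (y-ysp)' Wy (y-ysp) + (u-uss)' Wu (u-uss):
# setting the gradient to zero gives the normal equations below
lhs = G.T @ Wy @ G + Wu
rhs = G.T @ Wy @ (ysp - Gd @ d) + Wu @ uss
u = np.linalg.solve(lhs, rhs)
y = G @ u + Gd @ d
print(u, y)   # y1 is driven much closer to its set point than y2
```

The larger weight on y1 shifts the residual variability onto y2, exactly the "shift variability to the cheaper variables" behaviour described above; the full problem in Eq. (2) adds the nonlinear model and hard bounds, which generally require a numerical optimiser.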
The behaviour of the system can be conceived as the response to an unmeasured process
disturbance governed by the respective objective function. Eq. (2) evaluates only the steady
state effects of the disturbances on the flowsheet. In other words, it predicts the behaviour of
an ideal multi-variable controller that simultaneously inspects deviations on controlled and
manipulated variables and takes into consideration the nonlinear relations among the process
variables. Eq. (2) explicitly handles manipulated variable saturation and allows the
investigation of the integrity of a given control structure. Once a manipulated variable
saturates (e.g., input reaches an upper or lower level) or fails to respond (e.g., control system
failure), a degree of freedom for the control system is consumed. The minimisation of the
objective function will then determine how to distribute the effort among the remaining
manipulated variables to compensate for the effects of the disturbances according to the
entries in Wu. The relative importance of the control objectives and the preference in the
usage of each manipulated variable as reflected by the individual entries in the weighting
matrices of the objective function will dictate the reaction of the system to the perturbation.
For instance, a large weight on a diagonal entry of Wy forces the system to react decisively in
order to eliminate the deviation in the corresponding controlled variables from the set point
level at the expense of controlled variables with a smaller weight. Furthermore, the entries in
the weighting matrices can impose different control objectives such as tight control of the
controlled variables with relatively large weights or loose control with relatively smaller in
magnitude weights. In general, the selection of individual entries in matrices Wy and Wu will
be based on the aim to shift variability from the key performance and profit related variables
(e.g., product quality variables, variables associated with safety requirements, use of
expensive raw material) to the cheaper variables for which a greater degree of variability is
tolerated (e.g., the utility system) (Ref. 33). In all cases, hard variable bounds define inviolable
control limits.
The above formulation enables the study of control structures with unequal numbers of
controlled and manipulated variables. As already mentioned, a lack of sufficient input capacity
may hinder the flowsheet from satisfying the underlying control objectives completely. The
imposed ranking through the selected objective function would prioritise the importance of
each control objective and guide the control system through the partial satisfaction of the
control targets. The use of the input resources is closely associated with an economic factor
(e.g., cost of steam and fresh material). Therefore, the most competitive way to compensate
for the effects of disturbances relies on the efficient use of the available input capacity. In
cases, where an excess of manipulated variables is available, it is possible to drive the process
to the most profitable, from an economic point of view, operating point (e.g., reduction of the
usage of expensive resources).
The disturbance sensitivity control (DiSC) problem will enrich the design engineer's
perception about the interactions between process design and control system performance in
various ways: (i) Identify inadequate process designs and control structures that require large
changes in the manipulated variables for small disturbance magnitudes, (ii) Calculate the
capacity requirements for the process equipment or the range for manipulated variables in
order to compensate for the effects of disturbances, (iii) Identify the inequality constraints and
variable bounds that bottleneck the control system response and hinder its performance, (iv)
Determine rigorously the feasibility region (i.e., the magnitude of the combined disturbance
variation beyond which no feasible solution exists) for the imposed disturbance scenario, (v)
Investigate the behaviour of the system under special circumstances such as input saturation,
control failure, lack of input handles and non-square systems.
ẋ = A x + B u + E ε
y = C x + D u + F ε    (3)

where vectors x, u, and y denote the state, input, and output variables, respectively. Vector ε
includes the model parameters and externally specified inputs (e.g., disturbances).
The solution of the linear system of differential equations in Eq. (3) depends on the
eigenstructure of matrix A governing both the stability and transient response characteristics
of the system. More specifically, system eigenvalues with large negative real parts give rise to
fast dynamics that respond quickly to exogenous variations. Complex conjugate eigenvalues
result in underdamped responses with oscillations. In conjunction with negative real parts
close to the origin, the dynamic response becomes challenging from a control point of view.
Eigenvalues with positive real parts lead to unstable open-loop dynamic behaviour, a usually
undesirable situation because the control system must be designed with extreme caution to
ensure stability and attain good performance. Eigenvalues with small negative real parts are
responsible for sluggish dynamic modes. Usually feedback control systems aim among other
objectives (e.g., stability, zero offset) to relocate the open-loop poles so that a faster closed-
loop response is achieved. The system eigenvalues may be associated with specific states, but
in more complex systems such a one-to-one association is not possible. In such cases, groups of
eigenvalues are associated with respective groups of process states (Ref. 35).
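This classification of dynamic modes can be sketched numerically; the state matrix below is an illustrative example with one fast mode and one lightly damped oscillatory pair, and the "sluggish" threshold is an arbitrary choice:

```python
import numpy as np

def classify_modes(A, slow_tol=0.05):
    """Classify the eigenvalues of the state matrix A of dx/dt = A x.
    Returns the eigenvalues and index lists of unstable (Re > 0),
    sluggish (small negative Re) and oscillatory (Im != 0) modes."""
    eig = np.linalg.eigvals(A)
    unstable = [i for i, l in enumerate(eig) if l.real > 0]
    sluggish = [i for i, l in enumerate(eig) if -slow_tol < l.real <= 0]
    oscillatory = [i for i, l in enumerate(eig) if abs(l.imag) > 1e-9]
    return eig, unstable, sluggish, oscillatory

# Hypothetical 3-state system: one fast mode plus a lightly damped pair
A = np.array([[-5.0,  0.0,  0.0],
              [ 0.0, -0.02, 1.0],
              [ 0.0, -1.0, -0.02]])
eig, unstable, sluggish, oscillatory = classify_modes(A)
print(eig)   # -5 (fast), and -0.02 +/- i (sluggish, oscillatory)
```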
For nonlinear systems the dynamic behaviour is retrieved from the solution of a set of
nonlinear differential and algebraic equations.
h(ẋ, x, y, u, d, ε) = 0    (4)
Differentiating Eq. (4) with respect to the state time derivatives, states, inputs, outputs and
disturbances using the chain rule, the following relation is derived:

(∂h/∂ẋ) δẋ + (∂h/∂x) δx + (∂h/∂y) δy + (∂h/∂u) δu + (∂h/∂ε) δε = 0    (5)

Rearranging the terms in Eq. (5), the system can be brought to the form of Eq. (3):

ẋ = A x + B u + E ε,  y = C x + D u + F ε    (6)

with the matrices A, B, C, D, E and F formed from the partial derivatives of h in Eq. (5).
The form of Eq. (6) depends on the characterisation of the input and output variables as
determined by the specifications of the selected control structure. A difficulty that arises at
this point is related to the index of the differential-algebraic set of equations in Eq. (4) due to
the selection of the input-output structure.
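The rearrangement step can be sketched numerically for the simplest case of an implicit model in the states only (inputs, outputs and disturbances omitted for brevity), where A = −(∂h/∂ẋ)⁻¹(∂h/∂x); the toy model below is illustrative:

```python
import numpy as np

def linearise(h, xdot0, x0, eps=1e-7):
    """Finite-difference linearisation of the implicit model h(xdot, x) = 0
    around a consistent point (xdot0, x0), giving dx/dt ~ A (x - x0)
    with A = -(dh/dxdot)^-1 (dh/dx)."""
    n = len(x0)
    H_xdot = np.zeros((n, n))
    H_x = np.zeros((n, n))
    h0 = h(xdot0, x0)
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        H_xdot[:, j] = (h(xdot0 + e, x0) - h0) / eps
        H_x[:, j] = (h(xdot0, x0 + e) - h0) / eps
    return -np.linalg.solve(H_xdot, H_x)

# Toy nonlinear model written implicitly: xdot1 = -x1^2, xdot2 = x1 - x2
h = lambda xd, x: np.array([xd[0] + x[0]**2, xd[1] - x[0] + x[1]])
x0 = np.array([1.0, 1.0])
xdot0 = np.array([-1.0, 0.0])   # consistent point: h(xdot0, x0) = 0
A = linearise(h, xdot0, x0)
print(A)   # approximately [[-2, 0], [1, -1]]
```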
Model parameter variations and perturbations of the externally specified inputs will
influence the position of the system eigenvalues through the linearised system in Eq. (6) (Ref.
36). Such a situation may lead to the appearance of dynamic modes that are responsible for
the deterioration of the achievable control performance. The dynamic controllability criterion
is the required tool for the investigation of the process dynamics transformation under the
influence of multiple disturbances that accounts also for nonlinear interactions. Therefore, the
static controllability optimisation problem in Eq. (2) is enriched as follows (Ref. 37):
min over X of (y − ysp)T Wy (y − ysp) + (u − uss)T Wu (u − uss)
subject to h(X, dopt, ε) = 0,  g(X, dopt, ε) ≤ 0,
           (A − ξi I) zi = 0,  ziT zi = 1,  i = 1, …, m    (7)

where z denotes the eigenvectors of matrix A and ξ the corresponding eigenvalues. The
eigenvectors are scaled to unit magnitude to ensure their uniqueness. Matrix A is a function
of all process and design variables, exogenous input variables and model parameters as
derived in Eq. (6). Under certain conditions, variation of any of these variables would perturb
the eigenvalues of the system in a continuous way. It should be noted here that Eq. (7) is valid
for eigenvalues of algebraic multiplicity greater than one but is limited to eigenvalues of
geometric multiplicity equal to one.
Solution of Eq. (7) for different values of the parameter set ε traces the optimal steady state response of
the system and, in addition, the variation of the system eigenvalues, ξ. The benefits to the
design engineer can be summarised as follows: (i) Identify situations where the location of the system
eigenvalues gives rise to undesired dynamic characteristics and, by extension,
difficulties in control, (ii) Calculate the impact of the disturbance scenarios on the dynamics
(e.g., favourable or unfavourable influence), (iii) Determine eigenvalue sensitivity for large
disturbance variations and active set changes, (iv) Calculate the margin of the system
eigenvalues from a region that is considered to give acceptable dynamic response, (v) Identify
design factors that have the greatest impact in changing the position of the eigenvalues (Ref.
37).
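Benefit (iii) rests on classical first-order eigenvalue perturbation theory: writing A = Z diag(ξ) Z⁻¹, the sensitivity of each (distinct) eigenvalue to a parameter p is the corresponding diagonal entry of Z⁻¹(∂A/∂p)Z. A sketch with an illustrative parameter-dependent matrix, not one from the text:

```python
import numpy as np

def eigenvalue_sensitivities(A, dA_dp):
    """First-order sensitivities d(xi_k)/dp for all distinct eigenvalues
    of A: with A = Z diag(xi) Z^-1, the rows of Z^-1 are (unnormalised)
    left eigenvectors, and d(xi_k)/dp = (Z^-1 (dA/dp) Z)[k, k]."""
    xi, Z = np.linalg.eig(A)
    S = np.linalg.inv(Z) @ dA_dp @ Z
    return xi, np.diag(S)

# Hypothetical parameter-dependent A(p) = [[0, 1], [-p, -0.4]] at p = 1,
# i.e. eigenvalues solve xi^2 + 0.4 xi + p = 0
A = np.array([[0.0, 1.0], [-1.0, -0.4]])
dA_dp = np.array([[0.0, 0.0], [-1.0, 0.0]])   # elementwise dA/dp
xi, sens = eigenvalue_sensitivities(A, dA_dp)
print(xi, sens)
```

A finite-difference perturbation of p reproduces the same sensitivities, which is essentially what the continuation procedure tracks along the disturbance path.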
variability of the process variables and eigenvalues with respect to model parameters and
disturbances.
Singular value decomposition of Pε reveals the direction in the parameter and/or disturbance
space that causes the largest changes in the process variables and system eigenvalues (Ref.
38). This is equivalent to a worst-case disturbance scenario on which the main effort of the
analysis is concentrated. The magnitude of variation is then adjusted along the specified worst
direction of perturbation, θ, while a co-ordinate ζ along this direction represents this
magnitude.
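Extracting the worst-case direction θ from a sensitivity matrix is a one-line SVD operation; the matrix Pε below is illustrative (rows: process variables and eigenvalue coordinates, columns: disturbances):

```python
import numpy as np

# Hypothetical sensitivity matrix P_eps (3 variables x 2 disturbances)
P_eps = np.array([[3.0, 0.5],
                  [2.0, 0.2],
                  [1.0, 4.0]])

U, s, Vt = np.linalg.svd(P_eps)
theta = Vt[0]    # worst-case disturbance direction (unit norm)
print(s[0], theta)

# Perturbing along theta produces the largest possible change:
# ||P_eps @ theta|| equals the largest singular value s[0]
```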
The DiSC problem is equivalent to the solution of the parameterised set of the first-order
Karush-Kuhn-Tucker (KKT) optimality conditions for the control problem of Eq. (7) for
variable disturbance magnitude. The set is further augmented with the relations that govern
the variations of multiple parameters or disturbances (Ref. 38), Δε, as follows:

F(x, y, u, d, ε, z, ξ, ζ) = [ ∇f + λT∇h + μT∇gA ;  h ;  gA ;  (A − ξI)z ;  (zTz − 1)/2 ;  Δε − θζ ] = 0    (9)
The first entry in Eq. (9) represents the gradient of the Lagrangian function of the nonlinear
programme of Eq. (7) with respect to vectors X = [x y u]T, d, and ε. Vectors λ and μ denote
the Lagrange multipliers associated with the equality, h, and active inequality constraints, gA,
respectively. The eigenvalue defining relations result in zero Lagrange multipliers because
they do not affect the optimal solution of Eq. (7). Vector Δε denotes the relative changes of
the model parameters or the externally defined disturbances from the nominal reference point
(εref). The trajectory of the optimal solution is calculated for load and model parameter
changes along predefined directions, θ, in the multidimensional disturbance space. Scalar ζ
denotes the free continuation variable.
The size of matrix A depends on the number of state variables in the system. The main
objective is to track the changes in a subset of all the system eigenvalues. The greatest interest
is focused on the subset of eigenvalues that are responsible for unstable dynamics (positive
real parts), sluggish responses (negative real parts close to the origin) and strongly oscillatory
behaviour (conjugate eigenvalues with real parts close to the origin). It is assumed that the
selected subset that comprises the slowest system eigenvalues remains the same during the
disturbance scenario. The occurrence of pairs of conjugate complex eigenvalues requires the
use of separate defining equations for the real and imaginary parts in Eq. (9). This would
double the eigenvalue-eigenvector defining equations for a complex eigenvalue pair
compared to a real eigenvalue.
A unique solution of Eq. (9) at a point s = (x, y, u, d, ε, z, ξ, ζ) exists if the Jacobian of F at point
s is non-singular. In general, the Jacobian of F is non-singular in the entire domain except at a
finite number of points where it becomes singular. These singularities are related to either the
optimality conditions or the eigenvalue problem. More specifically, violation of any of the
linear independence constraint qualification, the strict complementarity condition or the
second-order optimality conditions for the parameterised KKT set results in fold points,
boundary points and active set changes in the optimal solution path (Ref. 38). On the other
hand, the eigenvalue-tracking problem may result in fold points when the paths of two real
eigenvalues intersect. At the point of intersection a real double eigenvalue is present; an
eigenvalue with algebraic multiplicity equal to two and geometric multiplicity equal to one.
The double eigenvalue may split again into either two real eigenvalues or a pair of conjugate
eigenvalues. A complete overview of the types of singularities that arise in the asymmetric
eigenvalue problem can be found in (Ref. 39). The tracking of the singular points of the
optimality conditions and their impact on the eigenvalues is the main reason that the
otherwise decoupled steady state controllability and eigenvalue problems are considered and
solved simultaneously in Eq. (7) and (9).
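The fold behaviour of intersecting real eigenvalue paths can be seen on the smallest possible example, a 2x2 matrix with an illustrative parameter p:

```python
import numpy as np

# Toy state matrix whose eigenvalue paths collide: A(p) = [[0, 1], [p, 0]]
# has eigenvalues +/- sqrt(p): two real branches for p > 0, a double
# (defective) eigenvalue at p = 0, and a complex-conjugate pair for p < 0.
def eigs(p):
    return np.linalg.eigvals(np.array([[0.0, 1.0], [p, 0.0]]))

real_pair = eigs(0.25)       # +/- 0.5: two real eigenvalues before the fold
complex_pair = eigs(-0.25)   # +/- 0.5i: conjugate pair after the fold
print(real_pair, complex_pair)
```

At p = 0 the eigenvalue has algebraic multiplicity two but geometric multiplicity one, which is exactly the kind of singular point the continuation procedure must detect and step through.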
Linearisation of the nonlinear dynamic equations is performed at each trajectory point so
the nonlinear interactions are taken into consideration. The computational technique can
explicitly handle active set changes and hard bounds on all variables efficiently. Active set
changes require the modification of the equation through the addition (if a bound or inequality
constraint become active) or the removal (if a bound or inequality ceases to be binding) of the
respective constraints. Optimality is ensured by inspection of the sign of the Lagrange
multipliers associated with the active inequalities at every continuation point. The solution
technique is quite efficient because an approximation of the optimal solution path of Eq. (9) is
sufficient for the purposes of the problem.
A predictor-corrector type of continuation method, as implemented in PITCON (Ref. 40), is
used with ζ acting as the independent continuation parameter. Homotopy continuation
methods specialised for the nonsymmetric eigenproblem, as described in Refs. 41 and 42, can
also be adapted.
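A toy predictor-corrector scheme (Euler predictor, Newton corrector) on a scalar parameterised equation illustrates the idea; PITCON itself is considerably more sophisticated (adaptive steps, pseudo-arclength parameterisation), and the equation below is purely illustrative:

```python
def continuation(F, dFdx, dFdz, x0, z_end, steps=50):
    """Natural-parameter predictor-corrector continuation for a scalar
    equation F(x, z) = 0: Euler predictor along dx/dz = -F_z/F_x,
    then a Newton corrector at each fixed z."""
    x, z = x0, 0.0
    dz = z_end / steps
    path = [(z, x)]
    for _ in range(steps):
        x += -dFdz(x, z) / dFdx(x, z) * dz   # predictor step
        z += dz
        for _ in range(10):                  # corrector (Newton at fixed z)
            x -= F(x, z) / dFdx(x, z)
        path.append((z, x))
    return path

# Toy parameterised "optimality condition": F(x, z) = x^3 + x - z = 0
F = lambda x, z: x**3 + x - z
dFdx = lambda x, z: 3 * x**2 + 1
dFdz = lambda x, z: -1.0
path = continuation(F, dFdx, dFdz, x0=0.0, z_end=2.0)
print(path[-1])   # at z = 2 the traced root is x = 1
```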
The controllability index accounts for the absolute relative changes of input and output
variables, augmented with a set of implicit control objectives represented by fp,j. The elements
of fp,j are not directly controlled but rather indirectly, through the proper selection of controlled
variables, y, and are evaluated using the nonlinear model at every point in the optimal
solution path. Deviations of the implicit control objectives from a desired level are an
indicator of the ability of the selected measured controlled variables to accurately represent
them. Hence, the performance index can attain broader meaning if the implicit control
objectives are enriched with economy-related terms. Qsc is generally considered a superset
of the objective function in Eq. (7) when p = 2 (i.e., the Euclidean norm). The weighting terms,
w(ζ), determine the significance of each calculated segment of the optimal solution path along
the perturbation direction. For instance, a larger weight may be used for small perturbation
magnitudes that are more likely to occur during plant operation.
As pointed out in section 2.3, a large value for Qsc would imply large errors in the
controlled variables during the dynamic transition from one steady state operating point to
another. This is not a sufficient condition and should not be the only criterion for selecting a
proper control structure but rather a necessary condition for good achievable performance by
the control system. The key objective remains the identification and screening of designs that
possess undesirable characteristics that are however difficult to observe without thorough
investigation.
Regarding the dynamic aspects of the candidate designs a dynamic controllability
performance index QDC measures the margins from the region considered as undesired
dynamic behaviour (e.g., stability margins, margin from the region that causes large oscillations)
and is defined as follows:
(n)
flflc(^)=z^)Kf!"H>
^oanrf is a point in the boundary of the region for undesired dynamics (e.g., distance from the
origin). Large relative variations of the process eigenvalues imply that the dynamic
behaviour is very sensitive to changes in the model parameters and exogenous inputs. Such
a case requires extreme caution if the margins at the nominal point from an undesired
dynamic situation (e.g., unstable response or large oscillations) are small. Disturbance
variations may also have a favourable impact on the system dynamics (i.e., move eigenvalues
towards positions with improved dynamic features).
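A minimal sketch of such a margin computation follows, assuming the boundary of undesired dynamics is the imaginary axis (i.e., the stability margin) and uniform weighting; the function name, the 2x2 example matrix and the worst-case aggregation are illustrative assumptions, not the chapter's exact QDC definition.

```python
import numpy as np

def dynamic_controllability_margin(A, boundary=0.0, weight=1.0):
    """Hypothetical sketch of a dynamic controllability margin.

    Measures how far the eigenvalues of the linearised state matrix A sit
    from a boundary of undesired dynamics, here taken as the vertical line
    Re(lambda) = boundary (the imaginary axis for boundary = 0).  Smaller
    values mean the operating point is closer to undesired behaviour."""
    eigvals = np.linalg.eigvals(A)
    margins = boundary - eigvals.real   # distance of each pole from the boundary
    return weight * margins.min()       # worst-case (smallest) margin

# Stable 2x2 state matrix: both eigenvalues in the left half plane.
A = np.array([[-0.5, 0.1],
              [0.0, -2.0]])
q_dc = dynamic_controllability_margin(A)   # smallest stability margin
```

If parameter variations move the eigenvalues, recomputing the margin along the perturbation path shows how quickly the design approaches the undesired region.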
and the production of low-value product (e.g., off-spec product) during dynamic
transitions between steady states or at the preferred steady-state operating point. The structural
design characteristics of the plant, d, calculated from Eq. (1), have been held constant during
the solution of the parameterised control problem of Eq. (9). At this stage, the design
characteristics may act as an additional instrument in order to enhance the static and dynamic
controllability performance properties of the plant. The goals can be summarised as follows:
(i) Improve the static controllability performance index, Qsc, and (ii) improve the dynamic
behaviour with the proper placement of open-loop eigenvalues through the adjustment of the
structural design characteristics of the design for a given disturbance scenario.
Along the optimal trajectory for the disturbance sensitivity control problem a second
sensitivity problem is solved, namely the design sensitivity control (DeSC) problem. In this
case, the local sensitivity of the process variables and system eigenvalues to infinitesimal
changes of the design parameters (e.g., structural flowsheet characteristics, design equipment
parameters), d, is calculated. More specifically, the sensitivity matrix for the operating
conditions with respect to the design parameters, d, for the given disturbance scenario is
derived. The sensitivity matrix, Pj, shown in Eq. (12), provides a measure of the relative
change of the process variables, Lagrange multipliers and system's eigenstructure for
infinitesimal changes in the design parameters.
Focusing on the X and λ variables, which are the more significant from an operations point of
view (even though Lagrange multipliers associated with binding inequality constraints are
also important, as they trigger active set changes), the detailed sensitivity matrix has the
following form:
The asterisk indicates optimal values (i.e. at a solution point of Eq. (9)). The sensitivity matrix
is usually scaled to facilitate the analysis. A commonly used scaling involves the expression
of the matrix entries in terms of logarithmic sensitivities.
The sensitivity information is calculated directly from the solution of Eq. (9) at a small
additional computational cost, utilising the Newton step performed at every continuation stage
(Ref. 43).
L is the Lagrangian function as in Eq. (9) and X = [x y u]^T. The matrix on the left-hand side
of Eq. (15) is derived by differentiating Eq. (9) with respect to d. The computational cost of
calculating the design sensitivity is associated with the solution of the system in Eq. (15).
The sensitivity information can be decomposed into a set of dominant modes of variation for
the process using the singular value decomposition of Pd. A small perturbation in d along the
eigenvector direction that corresponds to the largest-in-magnitude singular value of matrix Pd
(also an eigenvector of Pd^T Pd), v1, reveals the dominant direction of variability in the system
that causes the largest change in the process variables and open-loop eigenvalues in a least-
squares sense (Ref. 13). Similarly, the eigenvector direction that corresponds to the second
largest-in-magnitude singular value, v2, denotes the second most important direction of
variability, and so forth. The orthogonality property that holds for the eigenvectors of Pd^T Pd,
v1^T v2 = 0, ensures the independence of the modes of variation. Therefore, the sensitivity factors
affecting the static and dynamic behaviour of the process with respect to the design
parameters can be projected onto a low-dimensional space defined by the dominant directions.
The entries of the normalised eigenvectors v denote the contribution of each parameter to the
given direction.
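The decomposition above can be sketched in a few lines. The matrix entries are invented for illustration and do not come from the flowsheet model; only the use of the right singular vectors of Pd as dominant design-parameter directions reflects the procedure described in the text.

```python
import numpy as np

# Hypothetical sketch: decompose a design-sensitivity matrix P_d into
# dominant modes of variation with the SVD.  Rows correspond to process
# variables / eigenvalues, columns to design parameters; the numbers are
# illustrative placeholders.
P_d = np.array([[3.0, 0.5, 0.1],
                [2.8, 0.4, 0.0],
                [0.1, 1.5, 0.2]])

U, s, Vt = np.linalg.svd(P_d)    # singular values in s are sorted descending
v1, v2 = Vt[0], Vt[1]            # right singular vectors = eigenvectors of P_d^T P_d

# A small change of d along v1 produces the largest change in the process
# variables and open-loop eigenvalues in a least-squares sense; the entry of
# v1 largest in magnitude identifies the most influential design parameter.
dominant_parameter = np.argmax(np.abs(v1))

# Orthogonality of the singular vectors guarantees independent modes.
assert abs(v1 @ v2) < 1e-12
```

Inspecting the entries of v1 (and v2, and so on) then points directly at the design parameters, such as reactor volumes or column stage holdups, that dominate each mode.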
The local DeSC calculations and analysis are performed for various values of ζ along the
worst disturbance direction, δ. The DeSC sensitivity results calculated at sequential points
along the optimal trajectory are homotopically equivalent provided that the active set remains
unchanged. In such a case the dominant directions of perturbation are not expected to change
substantially during the disturbance scenario for a mildly nonlinear system. However, changes
in the active set, gA, may result in dramatic changes in the DeSC information and
subsequently in the dominant directions and their relationship to the design parameters.
As soon as the most important design variables are identified (e.g., extra reactor capacity,
additional stages in a distillation column), the additional investment cost is devoted to the
associated process units. Naturally, additional capacity translates into increased investment
costs; however, the objective function of Eq. (7) of the DiSC problem is then expected to
improve for the same disturbance scenario. The compromise between the increment in the
investment costs and the improvement in the controllability properties will determine the
extent of additional capacity in the system.
The sensitivity matrix also involves the relative changes of the dynamic characteristics of
the process, as these are represented by the eigenvalues of the system. The design parameters
can then be used to adjust the open-loop eigenvalues. This can be considered a pole
placement procedure where, instead of the controller, the design itself provides the mechanism
to alter the system dynamics.
4. DESIGN APPLICATIONS
The steady state optimal operating points and the open-loop eigenvalues for several designs
are shown in Table 2. The single reactor configuration is operated at three different reactor
temperature levels (D1-D3). Alternative designs for the two-reactor configuration differ in the
relationship between the volumes of the two tanks. More specifically, in D4 two tanks of
equal volume are used, in D5 two tanks with a volume ratio of 2:1 are used, and in D6 the
two reactor volumes are decided by the minimisation of the investment and operating costs.
Table 1
Model parameters for reactor system.
Parameters                 Values             Parameters                   Values
Reactant density           800 kg/m³          Feed temperature             320 K
Reactant heat capacity     3,000 J/(kg K)     Coolant inlet temperature    290 K
Coolant density            1,000 kg/m³        Heat of reaction             25,000 J/mol
Coolant heat capacity      4,200 J/(kg K)     Heat transfer coefficient    1,250 J/(hr m² K)
Feed concentration         4,000 mol/m³       Activation energy            10,000 J/mol
Feed flowrate              0.1 m³/hr          Pre-exponential term         0.94 hr⁻¹
Table 2
Operating points for alternative designs.
            D1       D2       D3       D4       D5       D6       D7
TRX (K)     345      355      335      345      345      345      345
VRX (m³)    26.41    23.94    29.31    4.83     6.73     4.41     26.41
                                       4.83     3.37     5.27
Tj (K)      318.89   344.80   295.20   296.88   301.47   303.33   318.89
                                       301.91   311.03   305.56
Qj (m³/h)   0.0125   0.0024   0.1136   0.0276   0.0211   0.0131   0.0125
                                       0.0143   0.0057   0.0120
Cost ($/y)  2163.3   1991.7   2772.0   1695.4   1606.59  1578.6   2163.3
Eigenv.  -2.96×10⁻²  -3.27×10⁻²  -2.67×10⁻²  -3.41×10⁻²  -3.25×10⁻²  -3.46×10⁻²  -2.96×10⁻²
         -1.09×10⁻¹  -3.07×10⁻¹  -2.12×10⁻¹  -3.64×10⁻²  -4.00×10⁻²  -3.57×10⁻²  -1.45×10⁻³
         -3.07×10⁻¹  -6.31×10⁻⁴  -7.89×10⁻²  -1.03×10⁻²  -1.39×10⁻²  -1.12×10⁻²  -3.84×10⁻¹
                                             -1.01×10⁻²  -7.90×10⁻¹  -9.42×10⁻¹
                                             -3.77×10⁻¹  -3.90×10⁻¹  -2.93×10⁻¹
                                             -2.73×10⁻¹  -2.19×10⁻¹  -3.46×10⁻¹
Inspection of the steady-state economic data reveals that a two-reactor system has
significantly lower investment and operating costs. However, the single reactor's disturbance
rejection performance is superior to the behaviour of the two reactors in series. Fig.
2 shows the behaviour of the static controllability index for variation of factors that
simultaneously affect the heat transfer capacity of the system (e.g., heat transfer coefficient)
and the total amount of heat that must be exchanged (e.g., inlet stream temperature, feed
stream flowrate, heat of reaction). More specifically, a positive sign for the parameter ζ
indicates an increase in the inlet stream temperature, the feed flowrate and the heat of reaction,
and a decrease in the heat transfer coefficient. The single reactor not only exhibits a lower
index than the two-tank configuration but also allows acceptable operation for a wider range
of variation magnitudes. The finding is mainly attributed to the increased heat transfer
capacity of the larger single reactor. The study is based on the assumption that a five-degree
variation in the reactor temperature is acceptable and that the maximum jacket coolant flowrate
can be increased up to three times its normal operating value.
As expected, the system eigenvalues for the single reactor are closer to the origin, leading to
an inherently slower response than the two-tank system. The difference in the dynamic
characteristics is evident from the dynamic simulations in Fig. 3 for ζ values equal to 5.0. PID
control loops have been placed for the maintenance of the reactor temperature at the desired
level. The single-tank response, even though more sluggish, exhibits a lower overshoot than the
two-tank system. It should be noted here that the operating points (e.g., reactor set points)
have been moved away from the nominal values due to the influence of the disturbance, in
accordance with the objective function of Eq. (7), with the single reactor operating much
closer to the nominal point. The improvement of the speed of response for the single reactor
can be achieved from inspection of the sensitivity of the system eigenvalues to the design
parameters. Reduction of the jacket volume (D7) would place the reactor and jacket energy
balance eigenvalues further away from the origin, thus increasing the system's speed of
response (Table 2). Furthermore, the jacket volume does not influence the steady state
operating point and therefore is an excellent candidate for the fine-tuning of the system's
dynamic properties. The improved dynamic performance is evident in the dynamic simulation
shown in Fig. 3b. The variation of the eigenvalues associated with the reactor and jacket
energy balances for D1 and D7 is shown in Fig. 4. The faster dynamics are clearly depicted.
Fig. 3. Temperature control for: (a) Δ, ○ - first and second reactors in the 2-CSTR system and ● - single
CSTR; (b) ● - base design, ○ - reduced jacket volume for single CSTR.
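To see why shrinking the jacket speeds up the response, one can linearise a two-state reactor/jacket energy balance and watch the eigenvalues as the jacket volume decreases. The sketch below uses invented parameter values and a simplified linearisation, not the chapter's reactor model; only the 1/Vj scaling of the jacket terms reflects the physics discussed above.

```python
import numpy as np

# Illustrative sketch (not the chapter's model): a linearised reactor/jacket
# energy balance in deviation variables, showing that shrinking the jacket
# volume V_j pushes the slow jacket mode further from the origin (faster
# response) while the steady state itself is unaffected.  Parameter values
# are invented placeholders.
def energy_balance_eigs(V_j, UA=1250.0, rho_c=1000.0, cp_c=4200.0, Q_c=0.0125,
                        a11=-0.1, a12=0.05):
    a21 = UA / (rho_c * cp_c * V_j)                       # jacket heating by reactor
    a22 = -(UA + rho_c * cp_c * Q_c) / (rho_c * cp_c * V_j)   # jacket energy drain
    A = np.array([[a11, a12],
                  [a21, a22]])
    return np.linalg.eigvals(A)

eig_base = energy_balance_eigs(V_j=1.0)
eig_small = energy_balance_eigs(V_j=0.5)   # reduced jacket volume (cf. D7)
```

Halving Vj doubles the magnitude of the jacket-related terms, so the eigenvalue closest to the origin moves left and the transient decays faster, which is the qualitative effect exploited in design D7.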
4.2.1. DiSC problem
The control objectives for the flowsheet are summarized as follows:
• Maintain the desired product quality for final product streams D2 and B3.
• Achieve high conversion in the reactors.
• Maintain plant operation close to economic optimum.
A number of disturbance scenarios have been constructed for the specific flowsheet that
aim to reflect realistic cases for the plant. Scenario 1 involves the simultaneous decrease of
the activity of the chemical reactions (i.e., decrease of the pre-exponential kinetic coefficients)
with a reduction of the purity level in the feed stream for component "A" (e.g., "B"
concentration increases in the feed stream F0A). Scenario 2 involves the decrease of the
activity levels for both reactions. Scenario 3 examines the ability of the system to balance the
difference in the imposed activity for the two reacting steps. The final scenario, scenario 4,
involves the loss of the vapour stream in the C3 reboiler from the set of manipulated variables.
Table 3
Design parameters for candidate flowsheet configurations at nominal operating point (DeSC
design modifications in italics).
                        FC I             FC II     FC III    FC IV
Reactor volume (m³)
  RX1                   604.5/650.0      645.8     581.9     581.9/520.0
  RX2                   263.2/280.0      2118.9    11.0      37.0
  RX3                   407.2            0.0       19.9      0.0
Column stages
  C1                    35               35        35        35
  C2                    40               40        39        39
  C3                    39               36        41        41
Stage holdup (m³)
  C3                    -                -         2.4       2.5/2.8
Column reboil ratio
  C1                    1.66             1.49      1.77      1.77
  C2                    2.50             2.04      2.85      2.85
  C3                    1.56             1.87      1.60      1.64
D1 recycle (kmol/hr)    359.04           387.96    342.78    343.04
D3 recycle (kmol/hr)    134.60           179.23    110.65    110.95
Costs (×10³ $/yr)       1,305.9/1,306.7  1,495.3   1,131.1   1,132.8/7,755.7
The candidate input-output control structures proposed in Ref. 45 are briefly described
in Table 4. Sets of controlled and manipulated variables that lead to "snowball" effects due to
positive recycle feedback can be eliminated quickly through the disturbance rejection
sensitivity. For example, control structure "d" described in Table 4 maintains all recycle
streams on level control, which results in extremely large changes in the flowrate of the
recycle streams for relatively small changes in the flowrate of feed stream F0A. Similarly, small
changes in F0A cause the recycle stream from column C1 to reactor RX1 to increase rapidly
for control structures "c" and "e". Generally, it is assumed that perfect control is achieved for
all flow and level controllers in the plant. Controlled variables that correspond to composition
variables are allowed to vary around a target value and within a range specified by upper and
lower bounds.
Table 5 shows the cumulative performance index with uniform weighting for FC I-IV for
control structures "a" and "b", which differ in whether a feed/purge stream system for
component "D" is installed (see dashed envelope at the bottom of column C2 in Fig. 1),
calculated at various disturbance magnitudes. The cumulative index, Qsc, uses all the scaled
controlled and manipulated variables specified for each control structure. The results reveal
that FC III-IV generally exhibit superior disturbance rejection performance compared with FC I-II for
most disturbance scenarios (except scenario 3) because of the higher average
concentration of reactants achieved in the reactive distillation column. Consequently, the effects of
the kinetic parameter variations on the product purity are absorbed more efficiently in the
reactive distillation column. The analysis also suggests that the use of a feed/purge system for
component "D" in control structure "a" may improve the disturbance rejection capabilities of
the flowsheet, especially in the cases of a single reactor in the second reacting step and for
larger disturbance magnitudes (ζ>20.0). In scenario 3, FC I and II perform better than FC III
and IV, because the reduction of the kinetic parameter in the second reactive step affects not
only the conversion in the reactors but also, indirectly, the separation through the equilibrium
relations in the reactive column C3. The severity of the disturbance scenarios increases as
fewer manipulated variables are available. Saturation of key manipulated variables becomes
evident as the magnitude of the parametric variation increases (scenario 4).
4.2.2. DeSC problem
Decomposition of the design sensitivity information for the disturbance scenarios reveals
the key design parameters that most affect the system's static controllability properties.
The volumes of RX1 and RX2 have the greatest impact on the dominant direction vector for FC
I. On the other hand, the stage holdup and the number of stages in the reactive column C3 are
the most significant design parameters for FC IV, even though the total holdups of RX1 and RX3
contribute as well. Both results pinpoint the design parameters that affect the extent of the
reacting steps. Table 3 shows the modifications that are implemented in the flowsheets, and
the new total costs (in italics). Table 5 clearly shows the improvement in the cumulative
performance index achieved by the modified flowsheets under simultaneous kinetic parameter
fluctuation, especially at larger perturbation magnitudes (flowsheets with asterisks). The results
suggest that significant improvement is possible with only a slight increase in the total costs for
the plant. An increase in the total costs of less than 1% results in a respective 25% and 18%
improvement in the controllability performance at ζ=25.0 for FC Ia* and IVa*.
Table 4
Control structures proposed in Ref. 45.
   MV           CV                 Fixed variables
a  F0A-D3-F0D   MRX1-xR1,B-MB2     R1-B2
b  F0A-D3-B2    MRX1-xR1,B-MB2     R1-F0D
c  D3-F0D       MRX1-MB2           F0A-R1-B2
d  R1-D3-B2     MRX1-xR1,B-MB2     F0A-F0D
e  R1-B2        MRX1-MB2           F0A-D3-F0D
Common in all control structures:
CV: MB1-MB3-MD1-MD2-MD3-xB1,B-xD2,D-xB3,D-xR2,E-MRX2-MRX3
MV: B1-B3-D1-D2-F0B-V1-V2-V3-F0E-R2-R3
Note: M denotes material inventory; symbols in capitals denote stream flowrates (Fig. 5); x
denotes component molar fractions for distillation and reactor outlet streams.
Table 5
Cumulative performance index calculated for different designs, control structures and
disturbance scenarios at various disturbance magnitudes expressed in terms of ζ (modified
designs with asterisks).
        Scenario 1     Scenario 2           Scenario 3           Scenario 4
        15.0   20.0    15.0   20.0   25.0   15.0   20.0   25.0   15.0   20.0
I a     1.63   3.16    0.15   0.46   1.74   0.13   0.41   0.99   0.22   0.77
I a*    -      -       0.12   0.37   1.31   -      -      -      -      -
I b     1.12   3.17    0.18   0.51   1.85   0.13   0.42   1.13   0.23   0.77
II a    2.46   5.66    0.26   0.67   2.01   0.07   0.28   0.79   0.67   1.50
II b    2.65   8.07    0.29   0.73   2.45   0.08   0.33   1.06   0.20   0.61
III a   0.72   2.22    0.20   0.50   1.56   0.39   0.82   1.35   0.22   0.56
III b   0.74   2.19    0.18   0.49   1.43   0.30   0.84   1.86   0.21   0.52
IV a    0.75   2.35    0.19   0.52   1.59   0.38   0.85   1.52   0.23   0.65
IV a*   -      -       0.19   0.48   1.30   -      -      -      -      -
IV b    0.74   2.23    0.17   0.51   1.78   0.30   0.89   2.11   0.37   0.79
CONCLUSIONS - SUMMARY
This chapter presents the tools for the evaluation, rank ordering and screening of alternative
process flowsheet and control structure configurations in a systematic, rigorous and efficient
way. Flowsheet and control structure configurations are analysed on the basis of: (a) static
disturbance rejection characteristics utilizing nonlinear sensitivity techniques, and (b)
sensitivity of process dynamics with respect to process disturbances and model parameter
variations. A static controllability performance index calculates the impact of disturbances
and model parameter variations on the steady state operation of the flowsheet. A dynamic
controllability index evaluates the margins from an undesired dynamic behaviour for the
process system. These two indices form the basic indicators for the rank ordering of the
alternative design options. Designs may be rejected from any further consideration if poor
controllability properties are identified. Designs may be modified efficiently using design
sensitivity information in order to enhance the controllability properties of the plant. Overall,
the outlined procedure exploits the predictive accuracy of nonlinear models for both steady-
state and dynamic behaviour, the prioritisation of multiple control objectives, the preferences
in the use of available resources, and the powerful properties of nonlinear sensitivity analysis.
Furthermore, disturbance directionality and sensitivity information decomposition increase
the efficiency of the analysis tools for the benefit of the design engineer.
Acknowledgement
The work is financially supported by the European Commission (GROWTH programme,
G1RD-CT-2001-00649).
REFERENCES
Chapter C1
1. INTRODUCTION
The development and design of new industrial chemical processes occur by applying
fundamental scientific principles and a large engineering arsenal of tools. They also require
touches of imagination to create a new workable and coordinated whole out of an almost
infinite number of concepts, options, and possibilities. A successful new development project
ultimately combines technical, economic, market, and timing factors to satisfy the necessary
pieces of a commercial puzzle that is often only well-defined after the fact.
An engineer who is responsible for ensuring that the new process can be controlled and can
operate well dynamically, in a way that achieves the design objectives, faces a challenging
task. The development stage occurs before it is clear what the flowsheet is or what equipment
will be used, and before all scale-up issues are resolved or new technology is fully developed.
Analysis must be done and input must be provided to the researchers and design engineers
about dynamics and controllability for many decisions that ultimately have strategic
implications. These include key measurement and control valve locations, special
measurement or analyzer needs, process control strategies for unit operations and the entire
integrated flowsheet, locations and sizes of vessel inventories, flowsheet alternatives and
trade-offs, relative advantages of different types of process equipment and their layouts,
etc.
The themes that typically run throughout the design stage focus on ensuring process safety
and environmental compliance, minimizing capital cost, meeting project timelines, meeting
product quality and variability targets, achieving goals for process uptime and yields, and
maximizing process flexibility and robustness. Some of these themes require an evaluation of
economic and technical trade-offs. Reasonable justifications or bases must be provided for
any modifications whose major objective is to improve operability or control of the new
process.
The ultimate objectives for the control engineer are to ensure that the new process will be
capable of operating to satisfy specific requirements that include:
To illustrate the effect a seemingly innocent design decision can have on control, consider
a partial condenser at the top of a distillation column. A simple and cheap design alternative is
to mount the condenser directly on top of the column, in what is called a dephlegmator (Fig. 1).
The vapor from the column passes up through the tubes, where some of it condenses from
contact with coolant on the shell side. The condensed liquid falls back down the tubes into the
column as reflux while the remaining uncondensed vapor leaves via the overhead product
line. An alternative design is to mount the condenser external to the column (Fig. 2). The
overhead column vapor passes into the tubes of the condenser, where part is condensed with
coolant on the shell side. The condensed liquid collects in the reflux drum, from which it is
pumped or flows by gravity back to the column as reflux. The uncondensed vapor leaves as
product.
What are the advantages and drawbacks of dephlegmator and external condensers? The
dephlegmator condenser potentially requires less infrastructure, since no additional space is
required for the condenser, reflux drum, pump, piping, etc. It also minimizes the inventory of
material at the top of the column, reduces the potential for leak points from the system, and
eliminates the need to pump or convey liquid reflux back to the column. In processes with
hazardous, corrosive, or toxic substances, these characteristics provide definite advantages.
On the other hand, dephlegmator condensers must be mounted and supported directly on top
of the column, any maintenance requires physical removal, cleaning is not straightforward,
the tubes must be designed for countercurrent liquid and vapor flow, coolant piping must be
run up to the top of the column, and coolant must be supplied to that elevation.
External condensers in principle can be located anywhere (even on ground level),
providing potentially easier opportunities for maintenance, making cleaning easier, and
reducing the need for coolant piping. They also allow easier sampling of liquid reflux and can
be designed for co-current liquid and vapor flow. On the other hand, they may physically take
up more space in the plant layout, they require more instrumentation and control valves, they
often need a reflux pump, they require more piping for overhead vapor and reflux lines, they
increase the liquid inventory of material at the top of the column, and they provide greater
opportunities for leak points.
Given the relative advantages of dephlegmator condensers in terms of their cost, what is
wrong with them? From the viewpoint of controllability, they can be disastrously sensitive to
column disturbances and they introduce sensitivity in controlling product quality. An external
condenser allows a direct measurement and control of the column reflux flow. Any
disturbance that affects the condensing rate (inerts, vapor flow, coolant flow or temperature)
will not immediately affect column reflux because of the buffer provided by inventory in the
reflux drum. A dephlegmator condenser provides only an indirect measurement and control of
column reflux flow and no liquid holdup. Column reflux can be inferred only by an energy
balance around the coolant, measuring coolant flowrate and inlet and outlet temperatures. Any
disturbance that affects the condensing rate (inerts, vapor flow, coolant flow or temperature)
will immediately change column reflux. Any change in feed flow or composition also changes
column reflux. Once the condenser is affected, liquid flow changes down the column, which
changes the temperature and composition profiles, leading to changes in boilup rate and vapor
flow up the column that immediately are felt again in the condenser. This leads to heightened
interaction among the pressure, reflux, temperature, and base level controllers.
Another way to look at the difference in condensers is to count the number of manipulators
or control valves. With the dephlegmator condenser, we have two (overhead vapor product
and coolant flow). With the external condenser, we have three (overhead vapor product,
coolant flow, and reflux flow). From the viewpoint of improved controllability and more
robust column operation, an external condenser with the ability to control reflux flow is much
preferable to a dephlegmator condenser. At the design stage, we have to provide such input
and justification. We also must be prepared to compromise by assessing the important trade-
offs, since the risk of pumping a highly toxic or corrosive substance at potentially extreme
conditions may not be warranted.
A third alternative also exists that has the condenser on top of the column but also collects
the liquid reflux on a trap-out tray (Fig. 3). Vapor goes up through a riser in the trap-out tray
to the condenser. Liquid level is measured on the trap-out tray and is controlled with the
coolant flow. Then the reflux comes off the trap-out tray through a flow control valve and
is sent to the trays below. We need no reflux pump or drum, but we will need to add height to
the column and also to have an additional control valve. Yet the advantages of this trap-out
tray arrangement are clear in terms of having a direct measurement and control of column
reflux flow.
No control algorithm, however complex, will be able to suggest such process changes.
This requires engineering insight and understanding by the control engineer of the process
requirements and the effect on dynamic operability of even simple unit operations.
This section discusses the conceptual steps involved when designing or analyzing a new
industrial process to achieve good dynamic operability, with a specific objective for on-aim
product quality control. These steps are not meant to be one-directional but rather are more a
circular chain of thinking.
relationships. The allowable variation in product quality variables must also be quantified. A
tool called Quality Function Deployment (QFD) is used to define and organize the
relationships between product quality measurements and customer requirements (Ref. 2).
If the key process variables are not identified at the design stage, then they must be identified
after the process has been built and is operating. To some extent this task becomes easier because
measurements and behavior of an actual operating process can be observed and analyzed.
However, if certain process measurements were overlooked and not installed, then the task is
as difficult as it is at the design stage. Plant tests and modeling are still the important tools.
Models can often be used to guide the designed experiments that need to be run on the
commercial process as part of the qualification effort. Out of this activity come the aims and
limits of the process variables, which either directly or indirectly are under closed-loop
feedback control and some of which are then monitored by statistical process control
techniques (Ref. 2).
• where liquid and vapor holdups should be located (and how much)
• where off-specification product can be re-worked
• how the units can be started up
• how the units can be shut down
• how transitions between product grades can be done
• how to scale up units with different surface-to-volume ratios
• where and how to measure or infer compositions
• what control structure is inherently self-optimizing
4. DESIGN EXAMPLE
An example of the steps outlined above is a two distillation column system designed to
separate desired product B from component A (Fig. 4). The first is an extractive column (C1)
where the A/B mixture is fed at a location below the feed point for component S. The feed
rate of S to Cl is to be maintained at a fixed ratio to the feed rate of A/B. Component A goes
overhead from C1 as the vapor product from a partial condenser. The bottoms product from
Cl is a B/S mixture, which is then fed to the second column (C2). C2 separates component B
as the vapor overhead product from a partial condenser. The bottoms from C2, component S,
recycles as feed back to C1.
In the absence of other information, the control strategy design for this system would be
relatively straightforward and the effect of the process design would not in itself be unusual.
A typical control strategy is shown in Fig. 5. Any change in production rate is accomplished
by changing the flow of the A/B feed stream. The flow then works its way through the bottom
of C1, into C2, and out the top of C2. There is no level control in the base of the second
column since we need to maintain a constant ratio of feed flows to the first column. Thus the
holdup in the base of C2 must be set by the dynamics of the two columns.
However, the customer needs for component B change this picture completely. Component
B is a reactant fed to multiple batch reactors that operate virtually independently of each
other. It is also required to be very high purity. This means that the overhead vapor flow from
C2 changes instantaneously at the will of the batch reactors since no storage of B can exist.
The key process variables with the most significant effect on product quality are then
identified to be the reflux to feed ratio in C2, the control temperature in C2 (for composition
control of S), and the control temperature in C1 (for composition control of A). The control
strategy must then be designed to keep these process variables on-aim and also to satisfy the
on-demand requirement for production rate.
Dynamic analysis can clearly show that product B flowrate cannot be physically changed
quickly enough by manipulating the feed flows to column 1. The overhead flow from C2 is
essentially controlled by the batch reactors, removing it as the obvious manipulator to control
C2 pressure. This means that the on-demand control strategy must start with the satisfaction
of product rate and work backwards (contrary to standard thinking, which usually moves
forwards through a process). Since we cannot use the overhead vapor product to control
pressure in C2, the alternatives are to use coolant flow, feed flow, or reboiler duty. At the
same time we need tight composition (temperature) control, for which reboiler duty is the
ideal candidate. We can compare alternative pairings via dynamic simulation and conclude
that C2 temperature should be controlled by reboiler duty. Since C2 pressure indicates the
inventory of component B, it is controlled using the feed flow to C2, which means we must be
able to change the feed flow to the columns independently. C2 reflux flow is ratioed to feed
flow. The final process with control structure is shown in Fig. 6.
The original system design shown in Figs. 4 and 5 is perfectly reasonable from a steady-
state viewpoint. However, it would have been a failure in its ability to satisfy the dynamic
requirements, including the manipulation of feed flow to C2 independent of C1. It is possible
to envision the development and deployment of many "advanced" control algorithms on the
original design to try bandaging the perceived "control" problem. Modifying the process
design by adding a buffer tank to provide surge capacity between the bottom of column 1 and
column 2, however, will solve the problem. The actual cost of a separate tank can be saved if
the tank volume is simply built into the bottom of C1.
Once this is done, the A/B feed flow to C1 can be used to control the base level in C1.
Feed flow of S is ratioed to the A/B feed rate, which means that sufficient liquid inventory in
the base of C2 must also be provided (same considerations as in standard control structure).
C1 temperature is controlled by C1 steam flow. Since level in the base of C2 is not controlled,
rational design criteria must be used to size the volumes of the two column bases to cover the
spectrum of possible operating conditions (e.g. a change in rate over a specified time period, a
dump of the entire column contents, etc.).
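Such sizing criteria reduce to a simple material balance: the base must hold the flow imbalance accumulated while downstream flows catch up, plus any volume that could arrive in a dump scenario. The sketch below is hypothetical; the flows, times, and margin are illustrative, not values from this process.

```python
# Hypothetical surge-volume sizing for a column base whose level is not
# controlled. The holdup must absorb the worst-case imbalance between
# inlet and outlet flow over a specified time period; numbers illustrative.

def surge_volume(flow_step_m3h, ramp_time_h, dump_volume_m3=0.0, margin=0.2):
    """Required volume: flow imbalance integrated over the ramp, plus any
    volume accepted if an upstream column dumps its contents, plus a
    design margin."""
    imbalance = flow_step_m3h * ramp_time_h   # m^3 accumulated during the ramp
    return (imbalance + dump_volume_m3) * (1.0 + margin)

# e.g. a 5 m^3/h rate change that downstream flows take 2 h to follow,
# plus a possible 8 m^3 dump of the upstream column contents:
print(surge_volume(5.0, 2.0, dump_volume_m3=8.0))   # -> 21.6 (m^3)
```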
With this process design and control strategy in place, the key process variables can be
held on-aim to satisfy the customer needs for product quality and on-demand production rate
control. It is worth emphasizing here that more holdup is inherently neither good nor bad.
What is good is the judicious use of the right holdup at the right place. An example of bad
holdup occurs when the process material is thermally sensitive. Large amounts kept at high
temperature can undergo chemical reactions that might possibly affect product quality or
other parts of the process. Another example of bad holdup is the introduction of delay when
liquid holdups are placed in series between controlled (on one end) and manipulated (on the
other end) variables.
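The delay introduced by holdups in series can be made concrete with a small simulation: a unit step passing through n identical first-order lags reaches 50% of its final value later and later as n grows, so the chain behaves much like added dead time between the manipulated and controlled variables. The time constant and integration settings below are illustrative.

```python
import numpy as np

# Unit-step response of n identical first-order lags (time constant tau)
# in series: the time for the last holdup to reach 50% of its final value
# grows roughly linearly with n, i.e. the chain acts like added dead time.
def t_half(n, tau=1.0, dt=0.001, t_end=30.0):
    x = np.zeros(n)
    for k in range(int(t_end / dt)):
        dx = np.empty(n)
        dx[0] = (1.0 - x[0]) / tau          # unit step enters the first lag
        dx[1:] = (x[:-1] - x[1:]) / tau     # each lag sees the previous one
        x += dt * dx
        if x[-1] >= 0.5:
            return (k + 1) * dt
    return t_end

for n in (1, 2, 4, 8):
    print(f"{n} holdup(s) in series: 50% response at t = {t_half(n):.2f} tau")
```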
5. DESIGN APPROACHES
How do we come to these process designs in the first place, and how do design decisions
and methodologies ultimately affect dynamic operability? Three general approaches (or some
combination thereof) have been identified for tackling the synthesis of a complex, integrated
process flowsheet: (1) methods that use heuristics, evolutionary techniques, and hierarchical
decompositions, (2) methods that use superstructures, mathematical programming, and
optimization, and (3) methods that use thermodynamic targets, process integration, and pinch
analysis. All three approaches typically focus on synthesizing flowsheets that optimize
economics.
Within each of the three general approaches toward process synthesis, key decisions are
made about the flowsheet design that have a bearing on the operability characteristics of the
plant. For example, in a hierarchical procedure (Ref. 6) we will make decisions about whether
the plant is batch or continuous, what types of reactors are used, how material is recycled,
what methods and sequences of separation are employed, how much energy integration is
involved, etc. In a thermodynamic pinch analysis, we typically start with some flowsheet
information, but we must then decide what streams or units to include in the analysis, what
level of utilities are involved, what thermodynamic targets are used, etc. In an optimization
approach, we must decide the scope of the superstructure to use, what physical data to
include, what constraints to apply, what disturbances or uncertainties to consider, what
objective function to employ, etc (Ref. 7).
These key decisions are characteristics of the particular flowsheet design method chosen.
Yet these decisions, both implicitly and explicitly, affect the process behavior well beyond the
steady-state economics. Heuristic and hierarchical methods certainly are the predominant
tools in industrial practice today, and for good reason: they are useful for generating initial
flowsheet alternatives. While these methods may not be perfect, because they cannot
consider the interaction among design variables at various levels, they are at least feasible.
However, many of the design decisions can have a negative impact on dynamic controllability
unless the design engineer is aware of their implications. Looming on the horizon of industrial
practice are the mathematical programming approaches. From the start, these methods may be
perfect because they consider all of the design variables simultaneously, but their solution
may not yet be feasible. They can simultaneously assess the effect of design decisions on
dynamic controllability.
To illustrate the effect of design decisions on dynamic controllability, we can look at two
examples where such a decision introduces feedback, since positive feedback can lead to
process instability. One classic example is the choice of reactor type for a process. A perfect,
adiabatic, plug-flow reactor (PFR) has no feedback at all whereas a continuous stirred tank
reactor (CSTR) has built-in feedback through the mixing process. However, this does not
mean that a CSTR is always more difficult to control. The mixing process itself provides
negative feedback with negligible dynamics for the reactants participating in all non-
autocatalytic reactions. The feedback in an isothermal CSTR thus provides a stabilizing effect
on the operation. However, the situation is different for the feedback of heat on the rate of
reaction. Since heat raises the reactor temperature and higher temperature raises the rate of
reaction and hence the rate of heat evolution, we are now dealing with a positive feedback
loop. For highly exothermic reactions with reasonably large activation energies, the CSTR
may have unstable operating points if the overall heat transfer coefficient and area are too
small.
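The classic multiplicity argument can be sketched numerically: evaluate the heat-generation and heat-removal curves from the steady-state balances over a temperature range and count their intersections. All parameter values below are hypothetical, chosen only to show that a small UA admits three steady states (the middle one unstable) while a larger UA leaves a single one.

```python
import numpy as np

# Steady-state heat balance for a first-order exothermic A -> B in a CSTR.
# Parameter values are illustrative, not from the chapter.
F, V = 0.01, 1.0            # feed rate m^3/s, reactor volume m^3
k0, E_R = 1.0e8, 8000.0     # pre-exponential 1/s, activation energy / R in K
Ca0, dH = 2000.0, -2.0e5    # feed concentration mol/m^3, heat of reaction J/mol
rho_cp = 4.0e6              # volumetric heat capacity J/(m^3 K)
Tf = Tc = 300.0             # feed and coolant temperature, K

def steady_states(UA, T=np.linspace(300.0, 500.0, 4001)):
    k = k0 * np.exp(-E_R / T)
    Ca = Ca0 / (1.0 + k * V / F)                    # steady mass balance
    q_gen = (-dH) * V * k * Ca                      # heat generated, W
    q_rem = F * rho_cp * (T - Tf) + UA * (T - Tc)   # heat removed, W
    d = q_gen - q_rem
    return int(np.sum(np.sign(d[:-1]) != np.sign(d[1:])))  # curve crossings

for UA in (0.0, 5.0e4):
    print(f"UA = {UA:.0e} W/K -> {steady_states(UA)} steady state(s)")
```

With these numbers the adiabatic case (UA = 0) shows three intersections, while the larger heat-transfer term leaves only the low-temperature state.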
Another classic example involves the use of a feed-effluent heat exchanger (FEHE) with
an adiabatic exothermic plug flow reactor (Fig. 7). Cold feed enters the FEHE, where it is
heated by the hot reactor effluent stream as a way to achieve energy integration. The feed
enters the reactor and leaves at higher temperature because of the heat of reaction. The
positive feedback loop creates the situation where the higher the reactor inlet temperature, the
faster the reaction rate, and then the higher the reactor outlet temperature which in turn goes
back to the FEHE. The only mechanism to break this positive feedback loop is to introduce
design changes that add degrees of freedom (cold bypass line around the FEHE, furnace
between the FEHE and reactor to control inlet temperature, etc.).
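Under strongly simplifying assumptions (full conversion, a balanced counter-current exchanger described by a single effectiveness E), the steady-state balance around the loop gives an internal temperature rise of E/(1-E) times the adiabatic rise, which makes both the amplification by the positive feedback and the stabilizing effect of a cold bypass explicit. The numbers below are illustrative only.

```python
# Steady-state temperature rise inside the FEHE/reactor loop under strong
# simplifications: full conversion (X = 1) and a balanced counter-current
# exchanger with effectiveness E, so that
#   T_in - T_feed = E_eff * (T_out - T_feed),  T_out = T_in + dT_ad
# =>  T_in - T_feed = E_eff / (1 - E_eff) * dT_ad.
# All numbers are hypothetical.
dT_ad = 120.0   # K, adiabatic temperature rise of the reactor

def internal_rise(effectiveness, bypass_fraction=0.0):
    """A cold bypass lowers the effective effectiveness to (1 - b) * E
    and thereby weakens the positive feedback loop."""
    e_eff = (1.0 - bypass_fraction) * effectiveness
    return e_eff / (1.0 - e_eff) * dT_ad

print(f"no bypass:  {internal_rise(0.9):.0f} K internal rise")
print(f"30% bypass: {internal_rise(0.9, 0.3):.0f} K internal rise")
```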
A third example involves the decision to introduce a material recycle structure into the
design (Fig. 8). The simple input-output structure of an arbitrary process does not, from a
design viewpoint, determine the recycle structure. Actually, the process could consist of a single
reactor with no recycle at all, although this would probably not be the most economical
design. We actually use design degrees of freedom to create a recycle structure. When the
process is actually operated, specifying the flowrates of the feed streams into the process does
not uniquely determine the recycle flowrates. Instead, we must use independent control
degrees of freedom to achieve the desired objectives provided for when the recycle structure
was incorporated into the process in the first place. This could amount to fixing the rate of the
recycle flow or adjusting the flow to attain a desired recycle to feed ratio or to satisfy a
process constraint.
A fourth example of the effect of design decisions comes when we consider the use of
complex unit operations where we try to combine two or more functions into a single device.
A reactive distillation column serves to illustrate this kind of integration (Ref. 8). While the
use of such integrated units can lead to significant cost savings and simplification, care must
be taken in their design to ensure that they can be operated and controlled since we remove
control degrees of freedom.
One area of new technology development within the chemical industry focuses on the use
of renewable feedstocks (as opposed to those that use raw materials from petroleum) in
biologically-based processes that produce chemicals and other materials. Such new processes
will take some carbohydrate source (e.g. corn syrup) and convert it with microorganisms via
fermentation into a desired product. The Cargill Dow process to make lactic acid and then
polylactide polymer is one example. DuPont is currently developing a process to make 1,3-
propanediol that is a key ingredient for a new polymer, polytrimethylene terephthalate or
3GT. Cargill is developing a process to produce 3-hydroxypropionic acid in a bio-process
(Ref. 9).
One consequence of using microorganisms in a bio-process is that the desired product may
leave the fermenter in a large amount of water. The desired product must then be separated
from all of this water (e.g. by liquid-liquid extraction, in evaporators, in distillation columns,
etc.) and must also potentially be purified from other by-products or species (e.g. in
crystallizers, filtration units, ion exchange, chromatography columns, carbon beds, distillation
columns, etc.). Such separation steps can conceivably require large quantities of energy. So
any of the process design or synthesis approaches might naturally consider the incorporation
of energy integration to improve the steady-state economics. Here the implications of such
design decisions are discussed and alternatives to ensure operability are described. These
types of issues should be addressed at the design stage, and they are not significantly different
whether they arise in a bio-process or in a more traditional petrochemical process.
We consider a generic but fictitious bio-process that contains steps for the removal of
water and other impurities (Fig. 9). A heuristic or superstructure design methodology would
have to consider the kinds of evaporation and separation steps to remove water (assumed to be
the lightest component) from a feed stream that also contains the species (in order of decreasing
volatility) A, B, and C, and then to produce desired product B from A and C. Suppose we
somehow arrive at the initial design shown in Fig. 9. Here we use a three-stage multieffect
evaporator for water removal (we would need to decide on the kind of evaporator, but here we
assume they are falling film). The vapor generated by steam in the first stage is used as the
heating medium in the second evaporator stage (operating at lower pressure). Liquid from the
first evaporator is the feed to the second. Liquid and vapor flow similarly from the second to
the third evaporator. The flow of liquid and vapor is co-current so that we operate at the
lowest pressure and temperature in the final stage. The liquid from the third evaporator feeds
the first distillation column that separates A in the distillate from B and C in the bottoms. The
second column separates B in the distillate from C in the bottoms.
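The steady-state incentive for the multieffect arrangement follows from a back-of-the-envelope steam economy: if latent heats are taken as equal and sensible heat is neglected, each kilogram of live steam evaporates roughly one kilogram of water per effect. The flows below are hypothetical.

```python
# Rough steam economy of a forward-feed multiple-effect evaporator:
# assuming equal latent heats and neglecting sensible heat, each kg of
# live steam evaporates about one kg of water per effect. Numbers are
# illustrative, not from the chapter.
def steam_needed(water_to_remove_kg_h, n_effects):
    economy = float(n_effects)   # kg water evaporated per kg steam (idealized)
    return water_to_remove_kg_h / economy

feed_water = 9000.0   # kg/h of water to be removed from the crude feed
for n in (1, 2, 3):
    print(f"{n} effect(s): {steam_needed(feed_water, n):.0f} kg/h steam")
```

The three-effect arrangement thus cuts the idealized steam demand to a third of a single-effect design, which is the saving the integrated flowsheet tries to capture.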
In this design we must use steam in the evaporators to drive the multi-effect evaporator
system and also use steam in both column reboilers and cooling water in the condensers.
Without specifying what pressure levels might be run in the columns, we could adjust the top
temperatures (by changing column operating pressures) in both columns to be hot enough to
use for water evaporation, rather than rejecting that heat to cooling water. Using a heuristic
method, a pinch analysis, or an optimization superstructure, we might then produce a much
more complicated and integrated design (Fig. 10). Here the vapor overheads from both
columns are used as the heating medium in the first evaporator. The liquid condensate must of
course be kept separate, so distinct chests are required so that the liquid can go to separate
reflux drums for each column. With the column overheads as the energy source in the first
stage, the evaporation proceeds in the other stages to remove the water. The columns are fed
from the third evaporator stage. This design has advantages in reducing steam and cooling
water consumption and also in reducing capital costs by eliminating the need for individual
column condensers.
From a steady-state design viewpoint, this process looks wonderful, with the material and
energy flows in balance and able to achieve desired product B. What are the operability
concerns with this energy-integrated system? During normal operation, what is the effect of
increasing water content in the crude feed? This will change the liquid and vapor material
flows from the evaporators. Since we do not have enough energy to remove the extra water,
some of it will reach the columns and change their overhead flows, which will change the
amount of water evaporated. This will again change the feed flow to the distillation columns,
resulting in further changes in the vapor overhead flows, which cause additional changes in the evaporators.
Without a manipulator to interrupt this feedback loop and balance out the material and energy
flows, the system would slowly spiral out of control to the point where we could not produce
any product.
What is the effect of changing a reflux flow to one of the columns? This too will affect
column boilup, which changes the overhead flow, which changes the amount of evaporation
in the first stage, which leads to cascading changes to the other evaporators that ultimately
affect the columns again. So any change of a normal process condition initiates a feedback
loop between evaporators and columns that cannot easily be broken.
During abnormal operation, what is the effect of losing the crude feed flow? Since we rely
on the evaporators to condense the column overhead streams, we would immediately lose
condensing capability, column pressures would increase, and we would quickly have to shut
off the steam flow to the columns, causing the column contents to dump. This means we
would not even be able to operate the columns under total reflux until we could restore the
crude feed flow.
During start-up conditions, how would we be able to bring up this system to normal
operation? Since the systems are completely interdependent, we would need to start crude
feed when we start feeding steam to the column reboilers. This means we have material
already in the bases of the columns ready to be boiled up. Once the crude feed reaches the
third evaporator, it must go on into the first distillation column and keep flowing through. In
such a start up, it would not be possible to operate the columns under total reflux until we
reached the desired composition and temperature profiles. Reaching steady-state operation
only occurs by consuming crude feed and generating off-specification product, which must
ultimately be re-worked (if possible).
If such operation is economically viable, then the integrated design could be built.
However, it could lead to unstable operation when the crude feed flow or composition
changes, it makes the two systems completely dependent so that any brief loss of flow from
one will cause the other to shut down, it may lengthen the amount of time required to start up
the units and reach the desired operating conditions, and it will generate more non-standard
product that must be re-worked or scrapped.
How do we go about designing this process so that it can be controlled and operated
without running into all of the potential nightmare scenarios outlined above? One approach is
to accept the design in Fig. 10 and begin developing some novel mathematical control
algorithms to apply, even though these may not be able to achieve all of the potential control
objectives. Perhaps a simpler and somewhat obvious answer is that we must deliberately alter
the process design in Fig. 10 and incorporate equipment that will separate to some extent the
degree of interconnectedness between the evaporators and distillation columns and will break
the feedback loop of material and energy. Ideally we would like to formulate and solve the
design and control problem simultaneously using a general superstructure and optimization
approach (Ref. 7). Alternative approaches might come from Refs. 10 through 16.
Certainly the bio-process with heat integration considered here would be a challenging test
for many of these approaches in terms of the size and scope of what must be considered, the
number of potential design alternatives, and the type of control objectives and disturbances
that must be considered. A typical industrial approach would be to work through
systematically all of the control objectives using a nonlinear dynamic simulation of the
process to assess alternatives and to analyze performance (Fig. 11).
One key issue involves disturbances that affect the crude feed flow or composition to the
evaporators. We need a manipulator to de-couple the evaporators from the columns. This we
could do by having some means to control the water content of the liquid leaving the third
evaporator stage. There are several ways to do this. One is to bypass some of the crude feed to
the third stage, assuming there will always be an excess amount of energy in the evaporators
(i.e. more energy comes from the column overheads than is needed to remove all of the
water). A measurement of temperature or composition in the third stage determines the water
content. If the amount of water increases, we decrease the amount of the bypass flow. A
second alternative is to manipulate the crude feed and other liquid flows (going backwards) to
control water content in the third stage. If the amount of water increases, we decrease the
liquid flow from stage 2 to 3, which increases the level in stage 2, so we decrease the liquid
flow from stage 1 to 2 and then the crude feed flow to stage 1. A third alternative (shown in
Fig. 11) is to underdesign the evaporators so that we always must add some extra steam flow
to the chest of the third evaporator stage. If the water content increases, we simply increase
this makeup steam flow to maintain a constant water concentration in the feed to the columns.
This manipulator would basically serve as the key break point to avoid interaction between
disturbances that affect the evaporators and columns.
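This third alternative can be sketched as a simple PI loop on a linearized, hypothetical model of the stage-3 water content; the gains, time constants, and tuning below are all illustrative, but the simulation shows the integral action returning the composition to aim after a feed-water disturbance.

```python
# Minimal sketch of the third alternative: a PI loop that adds makeup steam
# to the stage-3 chest to hold the water content of the column feed on-aim.
# The model is a linearized, hypothetical first-order response in deviation
# variables (x = water-content deviation, d = feed-water disturbance,
# u = makeup steam deviation); all gains and tunings are illustrative.
tau, Kd, Ku = 5.0, 1.0, 1.0   # process time constant (min) and gains
Kc, tau_i = 2.0, 2.0          # PI tuning (direct acting)
dt, t_end = 0.01, 60.0        # integration step and horizon (min)

x, integral = 0.0, 0.0
d = 1.0                        # step increase in feed water at t = 0
for _ in range(int(t_end / dt)):
    error = x                  # setpoint is zero deviation
    integral += error * dt
    u = Kc * error + (Kc / tau_i) * integral   # more water -> more steam
    x += dt * (-x + Kd * d - Ku * u) / tau
print(f"water-content deviation after {t_end:.0f} min: {x:.4f}")
```

The deviation settles back near zero: the makeup steam absorbs the disturbance, which is exactly the decoupling role the manipulator plays between the evaporators and the columns.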
A second key issue involves being able to run the evaporators and columns independently.
We want to avoid shutting down the columns if we briefly lose the crude feed flow or other
parts of the evaporator system. One alternative is to include auxiliary column condensers in
parallel to the first evaporator stage, so that we always have a way to operate the columns on
total reflux. This requires additional capital expense that may be unattractive. A second
alternative is to install the ability to feed makeup water to the first evaporator stage (Fig. 11).
This water stream provides the energy sink required to condense the overhead vapors from the
columns and allows total reflux operation without needing crude feed. The water vapor
generated in this way still must be removed from the first stage and condensed, which means
we must feed enough liquid water to go through the three effects.
Another design option for such a process might not tie the columns and evaporators
together directly. To avoid some of the design and control complexity created by the direct
integration, we can also consider an indirect integration. Here we would generate some
pressure steam using the two overhead streams from the columns. This steam would go into a
header that would feed the multi-effect evaporator system with the desired amount to maintain
a constant water content in the third evaporator stage. Any excess steam could be vented or
used elsewhere in the process and any deficiency in steam could be supplied from the same
steam source as what feeds the column reboilers. Such an option would allow the two systems
to start up and operate separately but would still recover much of the energy savings from the
integration. The trade-off is the additional capital costs of the condensers/boilers to generate
the steam and all of the instrumentation and piping to control the system and connect units
together. More importantly, we would have twice the heat transfer area since we would need
two temperature differences in series (one between condenser and steam, the other between
steam and evaporator). At the design stage, the control engineer may be able to show
substantial benefits from the increased flexibility such a system would provide to offset the
additional cost. It is exactly this type of analysis and these assessments of trade-offs that could
make the difference between a new process with good dynamic operability and one that is a
horror to operate.
Finally, we also have some choices to make about the amount and location of liquid
inventories for this system, which might also provide some design alternatives for reducing
the interaction between evaporators and columns. One approach is to install a large buffer
tank between the third evaporator stage and first column. This tank would filter out
disturbances in both composition and flow and allow much steadier feed flow to the columns.
It would also cost more money and introduce more inventory that does not add economic
value. We also have to determine the amount of liquid holdup in the evaporators and columns.
In principle, we could increase the liquid holdup in the three evaporator stages as a way to
filter disturbances and potentially avoid the need for a separate buffer tank. A drawback of
this is that the larger the liquid holdup in each stage, the slower the response of stage 3
temperature (if measured in the liquid) or composition will be to changes in crude feed flow
and to changes in steam flow to stage 1. Of course, if makeup steam is added to the chest of
stage 3 (Fig. 11), then this is preferable since it avoids much of the dynamic lag. Here though
might be a case where we choose to minimize the liquid holdups in stages 1 and 2 to speed up
the dynamics and include sufficient holdup in stage 3 for good disturbance rejection.
Although this bio-process example is not described quantitatively and is to some extent
unrealistic, it is discussed in some detail here to provide a perspective on the kinds of
industrial problems we encounter (different from a typical problem where we are given a
process with known linear transfer functions and asked to design a control algorithm). It
should help industrial and academic researchers understand what realistic challenges are
faced when looking at the interaction between design and control of industrial chemical
processes.
7. CONCLUSIONS
This chapter has presented an industrial view on incorporating operability and designing
plantwide control strategies as part of developing and designing a new chemical process. The
steps span from the investigation of customer needs, to the definition of quality variables, to
the determination of key process variables, to the design of the process control strategies, and
finally to the actual design of the process itself. Because of the technical uncertainty inherent
when developing and designing a new process or deploying new process technology, we
should try to avoid limiting process and control flexibility at the design stage and to be
creative in incorporating potential degrees of freedom into the design. Once a new design is
built and in operation, the flexibility to add degrees of freedom vanishes for the most part.
The frustration and expense associated with starting up a new process that cannot operate
dynamically are enormous. Months or years can be wasted trying to fix problems that may
have been completely avoided by answering some straightforward questions at the design
stage concerning dynamics. Until process, development, and project engineers recognize and
understand the importance of examining both steady-state economics and dynamic
controllability at all stages of the new design, companies will remain vulnerable to designing,
building, and trying to start up potentially disastrous new processes. Those companies that
have the capability to assess dynamic operability as part of the process design will possess a
competitive advantage.
REFERENCES
[1] J. J. Downs and J.E. Doss, Proceedings of CPC IV, (1991) 53.
[2] M. J. Kiemele, S.R. Schmidt, and R.J. Berdine, Basic Statistics - Tools for Continuous
Improvement, 4th ed., Air Academy Press, 1999.
[3] W. L. Luyben, B.D. Tyreus, and M.L. Luyben, Plantwide Process Control, McGraw-Hill,
New York, 1999.
[4] B. D. Tyreus and M.L. Luyben, Proceedings of FOCAPD V, (1999) 113.
[5] R. Shinnar, B. Dainson, and I.H. Rinard, Ind. Eng. Chem. Res., 39 (2000) 103.
[6] J. M. Douglas, Conceptual Design of Chemical Processes, McGraw-Hill, New York,
1988.
[7] J. Van Schijndel and E.N. Pistikopoulos, Proceedings of FOCAPD V (1999) 99.
[8] M. A. Al-Arfaj and W. L. Luyben, Ind. Eng. Chem. Res., 41 (2002) 3784.
[9] Engineering 1,3-Carbon Molecules, Chem. Engr. Progress, February (2003) 14.
[10] A.J. Groenendijk, A.C. Dimian, and P.D. Iedema, AIChE J., 46 (2000) 133.
[11] I. K. Kookos and J.D. Perkins, Ind. Eng. Chem. Res., 40 (2001) 4079.
[12] T. Larsson, K. Hestetun, E. Hovland, and S. Skogestad, Ind. Eng. Chem. Res., 40
(2001) 4889.
[13] T. J. McAvoy, Ind. Eng. Chem. Res., 38 (1999) 2984.
[14] P. Seferlis and J. Grievink, Comp. Chem. Eng., 25 (2001) 177.
[15] K. L. Wu and C.C. Yu, Ind. Eng. Chem. Res., 36 (1997) 2239.
[16] A. Zheng, R.V. Mahajannam, and J.M. Douglas, AIChE J., 45 (1999) 1255.
The Integration of Process Design and Control
P. Seferlis and M.C. Georgiadis (Editors)
© 2004 Elsevier B.V. All rights reserved.
Chapter C2
Synthesis of plantwide control structures using a
decision-based methodology
E. M. Vasbinder, K. A. Hoo and U. Mann
1. INTRODUCTION
Increased competitiveness in the chemical process industry has led to the design of highly
complex processes. The increased complexity has been justified on the basis of (i) improving
the recovery of energy and unused raw materials, and (ii) reducing the environmental impact of the
process. These tasks may be accomplished by additional and new equipment, or the
incorporation of novel techniques. Inclusion of novel techniques, although very beneficial
with regard to environmental, safety, and economic perspectives, may result in undesirable
effects on the overall operation of the chemical process due to increased interactions among
process variables. Consequently, the motivation to develop a control structure for the entire
plant, rather than on a unit by unit basis, is justified.
Another driving force for a plantwide approach to control structure synthesis is the
possibility of operating at states different from the original design conditions, for instance,
when there are economic incentives. Thus, a unit-by-unit control structure developed for only
the design (nominal) operating state may deter other operations and ultimately reduce profit.
Research conducted over the last two decades has not resulted in a consensus on how best to
address the plantwide control structure synthesis problem. This is in contrast to the design of a
chemical plant where the acceptable methodologies are variations of the hierarchical structure
developed by Douglas [1]. There are numerous contributions on the design of different
control structures for a variety of unit operations. Thus, there is much to be garnered from
these single unit operation control studies to address the larger issue of plantwide control, but
they must be coordinated with the goals of the entire plant, and not just the control objectives
of the individual units.
The chapter is organized as follows. Section 2 presents brief reviews of the hierarchical
steady-state design structure and a particular plantwide control synthesis structure. Both have
relevance to this work: the former because some of the plantwide control methods rely on
this hierarchy, and the latter because it complements the proposed approach. Section 3 begins with
preliminaries that define the state of the process design flowsheet, the modified Analytical
Hierarchical Process (mAHP) method used for assessment, and the process flowsheet
2. BACKGROUND
A traditional process design approach follows the hierarchical structure of Douglas [1],
shown in column 2 of Table 1.
The development of the process flowsheet begins with the determination of the type of
process to be designed. Successive layers are then added to provide more details, leading to
the final flowsheet that consists of all of the necessary process units and their connections to
meet the steady-state design objectives. It is important to note that (i) design is a steady-state
concept while control issues are dynamic in nature, and (ii) steps three and five are based
solely on economics (maximize profits, minimize waste of raw material and energy, etc.) and
not on operational or design considerations.
Also worth noting is that for chemical processes involving chemical reactions, the
chemical reactor is the heart of the process and its operation affects the operation of other
units (separation units, utility requirements, etc.) [3]. Therefore, the design of the chemical
reactor should be a distinct step in the hierarchical design structure of chemical processes with
chemical reactions. Recently, Mann and Hoo [2] have proposed a modified hierarchical
structure, shown in column 3 of Table 1.
Table 1
A steady-state chemical process design hierarchy.

Layer  Task (Douglas [1])                     Task (Mann and Hoo [2])
1      Batch/Continuous operation             Batch/Continuous operation
2      Definition of input/output structure   Definition of input/output structure
3      Design of recycle subsystem            Design of the chemical reactor subsystem
4      Design of separator subsystem          Design of separator subsystem
5      Energy Integration                     Unit Integration:
                                              (a) recycle; (b) heat integration
The goal of plantwide control structure synthesis is to develop feasible control structures
that address the objectives of the entire chemical plant and account for the interactions
associated with complex recycle and heat integration schemes, and the expected multivariate
nature of the plant. Many strategies have been proposed for accomplishing this task, and the
majority of them have been demonstrated using dynamic process simulations. However, none
have been accepted as the universal approach, in a manner similar to the steady-state process
design synthesis hierarchy of Douglas [1].
The study of plantwide control dates back to 1964 when Buckley proposed that the control
structure should be addressed on a plantwide basis, considering inventory first and then
product quality [4]. Foss [5,6] later followed by proposing that
"Perhaps the central issue to be resolved by the new theories of chemical process control
is the determination of the control system structure ... and it is the burden of new theories
to invent ways both of asking and answering the questions in an efficient and organized
manner."
Since these initial contributions, many works followed, which may be classified into three
broad categories: (1) Control structure development based mainly on process experience and
engineering judgment [7], (2) Control structure development that follows the guidelines of the
steady-state process design synthesis hierarchy [1], and (3) Control structure development that
relies on a mathematical formulation based on dynamic system theory, optimization, and
control-theoretic systems analysis. Some contributions can be classified in more than one
category.
A large number of the contributions can be classified into the first category. Worthy of
mention are the contributions by Luyben and coworkers [7-9]. They present a systematic
approach that addresses the major issues facing the plantwide control problem, such as the
effect of recycle and energy integration. The final outcome of the approach is a plantwide
control structure that is decentralized, and whose control configuration is predominantly a
single-loop feedback control structure. The main limitation of this approach is the heavy
reliance on available process experience.
Price and Georgakis [10,11] present a five-tiered approach that closely follows the
philosophy of Buckley. Production rate is prioritized ahead of other factors, contrary to
Luyben and coworkers, who address energy management first and then production rate. The
difficulty of defining consistent priority measures among competing steady-state design
objectives makes the process-based experience approach highly controversial.
Shinnar and coworkers [12,13] employ the concept of partial control - the identification of
a dominant subset of variables to be controlled, such that by controlling only these variables a
stabilizing effect on the entire system results. Skogestad and coworkers [14,15] introduce a
similar concept - finding the self-optimizing control variables. Groenendijk et al. [16] and
Bouwens et al. [17] propose developing the final plantwide control structure by applying
linear control-theoretic analyses (e.g., relative gain array, singular value decomposition,
closed-loop disturbance gain, etc.).
The second category involves the use of a hierarchical design approach (see Table 1) as the
basis for control structure development. A large contribution in this area is provided by
Douglas and coworkers [18,19,1] and Ponton and Laing [20,21]. This approach is attractive,
since by the nature of the design hierarchy, the complexity (dimensionality and details)
increases progressively as the design progresses. Design changes are recommended whenever
the numbers of controlled and manipulated variables are not balanced. Zheng et al. [22] propose
the idea of a controllability index; the minimum amount of additional storage necessary to
obtain optimal control performance for a control structure developed using the hierarchical
approach. The main limitation appears to be the amount of analysis required, since both the
number of alternatives generated and their complexity increase.
The work of Fonyo and coworkers [23,24] follows the eigenstructure concept suggested by
Luyben [25], that is, to identify the control variables of interest. Dynamic simulation is
employed in order to evaluate the control structure and allow for the development of multi-
variable controllers.
The third category is a rigorous mathematical framework of dynamic modeling,
optimization theory, and systems analysis. The motivation is to bring some formalism that is
effective in spite of a lack of process experience. Notable contributions include Morari and
coworkers [26-28], Skogestad and coworkers [15,29-31], and Stephanopoulos and coworkers
[6,32-35].
Morari and coworkers choose to address the issue of plantwide control using optimization
theory. They first introduced a hierarchical control structure, shown in Table 2, that
emphasized steady-state and dynamic events natural to the tasks of control. At the top, steady-
state optimization is done at a low frequency (week or day) while at the bottom, dynamic
optimization is done more frequently (minute or second). Skogestad and coworkers refined
this approach for the selection of self-optimizing control variables that give the smallest loss
in profit [14,15,30]. A self-optimizing control variable is one whose set point does not change
regardless of the disturbance type. This concept is similar to the partial control concept of
Shinnar and coworkers.
Table 2
Plantwide control synthesis hierarchy
Layer Task Mode Timescale
1. Scheduling steady state weeks
2. Site-wide optimization steady state days
3. Local optimization steady state hours
4. Advanced process control dynamic minutes
5. Regulatory control (linear) dynamic seconds
Stephanopoulos and coworkers prioritize among the control objectives using a goal-
oriented approach, engineering preferences, and design trade-offs [35]. This is achieved by a
vertical decomposition of the plant into a set of process representations of varying degrees of
abstraction. Other related approaches (the concept of a superstructure) can be found in
Grossmann [36], Floudas [37], Pistikopoulos and coworkers [38-41], and the optimization
framework of [42]. Usually, rigorous mathematical approaches are not the choice of the
practitioner because the dynamic models are difficult to develop, there are numerous
parameter and model uncertainties, and optimization and control theories are not readily
available for use and interpretation. This work presents a different approach to the design of
plantwide control structures that makes use of a variant of a decision-based methodology that
has been employed in environmental life cycle assessment known as the Analytic Hierarchical
Process (AHP). The advantages of the approach include: (i) reduction in the dimensionality of
the resulting control structure synthesis problem; (ii) a systematic procedure that yields a
consistent plantwide decomposition, dependent on steady-state design, operational, and
dynamic control objectives; and (iii) a formal method by which competing design alternatives
(flowsheet or modules with or without control structures) can be examined.
Assumption 1 A converged steady-state process flowsheet is available and includes all of the
process equipment. The flowsheet provides the basis for the modular decomposition and the
control structure synthesis.
Assumption 2 The design objectives are well defined for the process and are satisfied by the
process flowsheet.
Assumption 4 The process flowsheet may be re-designed if the operability analysis indicates
insufficient control degrees of freedom (A control degree of freedom is defined as a variable
that can be manipulated.) or the existence of an operational bottleneck. Therefore, concurrent
design is an option in the modular development.
Assumption 5 The economic objectives of the process, both implicit and explicit, can be
translated to control objectives. Therefore, each objective can be associated with a
measurable process variable.
Assumption 6 The process flowsheet was designed to operate about a nominal operating
state. When the issue of switchability (Switchability is the ability of a process to operate at
different operating states in a stable fashion.) is addressed, this assumption is relaxed.
3.1. Method
Definition 1 A state is a mode that is identified as important or relevant for the comparison
between the alternatives being examined. An input is a variable that can affect the state or
that permits differentiation among the states by way of the response of the state.
Step 1. Select the states and inputs for the module under examination (see Section 3.2).
Create an m×n matrix, called the Level 0 matrix, such that the inputs form the columns (n)
and the states form the rows (m) of the matrix. Choose a scale to represent quantitatively the
effect that an input has on a given state. A convenient choice is the set of numbers
x ∈ [1, 9] ⊂ ℝ, where a valuation of 1 suggests that there is little effect of the input on the
state, and a valuation of 9 indicates that the input has a very pronounced effect on the state.
This scale will reflect a transformation of the quantitative results obtained from sensitivity
tests (simulation) of the process flowsheet. This scale will be used to find the entries for this
and subsequent matrices.
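The chapter does not prescribe the transformation from the sensitivity results to the 1-9 scale, so the sketch below is hypothetical: a linear rescaling of the absolute relative changes of one state across the inputs (the function name and example values are illustrative only, not from the chapter).

```python
import numpy as np

def to_scale(sensitivities):
    """Map absolute relative changes (sensitivity of one state to each
    input) onto the 1-9 valuation scale of the Level 0 matrix.
    A linear rescaling is only one plausible choice; the chapter leaves
    the exact transformation to the practitioner."""
    s = np.abs(np.asarray(sensitivities, dtype=float))
    lo, hi = s.min(), s.max()
    if hi == lo:                      # every input equally (in)effective
        return np.ones_like(s)
    return 1.0 + 8.0 * (s - lo) / (hi - lo)

# Hypothetical relative changes of one state under four input perturbations
row = to_scale([0.07, 0.002, 0.015, 0.01])   # one row of a Level 0 matrix
```

Repeating this for every state yields the m×n Level 0 matrix row by row.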
Definition 2 The base state, for the generation of the Level 1 matrices, is the state that is the
least sensitive to the set of input perturbations.
Step 3. Determine the Level 2 matrix. Singular value decomposition (SVD) is applied on each
Level 1 matrix. The SVD gives the dominant directions and modes that account for the
dependencies in an ordered (largest to smallest) fashion. The product of the absolute value of
the first column of the left singular vector with the largest singular value represents the
dominant direction among the states for the input associated with the given Level 1 matrix.
Combine the results from the SVD evaluations of all the Level 1 matrices, maintaining the
same order of inputs and states as in the Level 0 matrix; this yields the Level 2 matrix. For
example, if there were three (2×2) Level 1 matrices, three applications of SVD are performed.
The resulting Level 2 matrix is a (2×3) matrix.
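Step 3 can be sketched as follows. The helper name and the small (2×2) Level 1 matrices are hypothetical, but the computation follows the step as stated: for each Level 1 matrix, take the largest singular value times the absolute value of the first left singular vector, then stack the resulting columns in the input order of the Level 0 matrix.

```python
import numpy as np

def level2_from_level1(level1_mats):
    """Combine the Level 1 matrices (one per input) into the Level 2
    matrix.  Each column is sigma_1 * |u_1|: the largest singular value
    times the absolute first left singular vector of that Level 1 matrix."""
    cols = []
    for A in level1_mats:
        U, s, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        cols.append(s[0] * np.abs(U[:, 0]))
    return np.column_stack(cols)          # columns keep the input ordering

# Three hypothetical (2x2) Level 1 matrices yield a (2x3) Level 2 matrix,
# matching the dimensional example given in the text.
L2 = level2_from_level1([[[1.0, 2.0], [0.5, 1.0]],
                         [[1.0, 3.0], [1.0 / 3.0, 1.0]],
                         [[1.0, 1.0], [1.0, 1.0]]])
```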
Step 4. Create the final weighted mAHP matrix, or the Level 3 matrix.
This matrix is the result of an element-by-element multiplication of the Level 2 and Level 0
matrices. Note that the dimension of the Level 3 matrix is identical to that of the Level 0 and
Level 2 matrices.
Step 5. Generate quantitative rankings among the states and among the inputs.
Completing a row sum gives the prioritization of the states. The state with the largest row
sum of the Level 3 matrix is identified as the dominant state for the module. Analogously, the
dominant input and the ranking of the inputs for the module can be identified using the
respective column sums.
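Steps 4 and 5 together can be sketched as follows; the function name and the small numerical example are illustrative, not the chapter's data. The Level 3 matrix is the element-by-element (Hadamard) product of Level 2 and Level 0, and the row and column sums rank the states and inputs.

```python
import numpy as np

def rank_states_and_inputs(level0, level2):
    """Step 4: Level 3 = Level 2 (element-by-element) * Level 0.
    Step 5: row sums prioritize the states, column sums the inputs;
    the largest sum identifies the dominant state (or input)."""
    L3 = np.asarray(level2, float) * np.asarray(level0, float)
    state_scores = L3.sum(axis=1)            # one score per state (row)
    input_scores = L3.sum(axis=0)            # one score per input (column)
    return (L3,
            np.argsort(state_scores)[::-1],  # state indices, dominant first
            np.argsort(input_scores)[::-1])  # input indices, dominant first

# Hypothetical 2-state, 3-input Level 0 and Level 2 matrices
L3, state_rank, input_rank = rank_states_and_inputs(
    [[5.0, 1.0, 3.0], [2.0, 4.0, 1.0]],
    [[1.2, 0.8, 1.0], [0.5, 2.0, 0.7]])
```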
For example, an explicit design objective such as a minimum reactor inlet temperature can be
associated with the reactor, or a purity requirement with a separation unit, etc. Alternatively, a
design objective such as production rate may be assigned to several unit operations. The units
associated with design and operational objectives and constraints represent the dominant units
of the potential candidate modules.
The remaining unit operations (secondary units), which are not associated with any
identified objectives or constraints, are examined next to determine if they can be attached to
a dominant unit of a candidate module on the basis that the secondary unit assists the primary
unit to achieve its objectives. For example, if the explicit objective of a minimum light off
temperature is associated with a reactor unit and a furnace precedes the reactor, the furnace
can be incorporated into the candidate module with the reactor, since the furnace helps the
reactor meet this objective. Once all of the secondary units have been assigned to the
modules, the resulting set of modules represents the first iteration of the flowsheet
decomposition.
It is possible that the initial modules themselves may be combined into a larger module.
The pruning of this set is accomplished in a similar manner as the original decomposition.
Several alternative modules may satisfy a given set of design and operational objectives and
constraints.
A list of known disturbances that can be associated with each module constitutes the set of
inputs to the module. What is not known is which state in the module is most affected by
changes in the inputs to each module. A quantitative answer to this question will permit a
comparison among competing alternative modules. To this end, the modified analytic
hierarchical process is used.
The implicit and explicit objectives represent the states of the module. Once all of the
states have been identified for a module, sensitivity tests are performed as a function of the
size and type of the expected disturbances of the inputs to the module. Then, the effects of the
inputs on the modular states are determined on a relative error basis. The approach uses the
results of the steady-state sensitivity tests to help develop values for the mAHP, to determine
the quality of a given module and prioritize among the module's competing states and inputs.
The results from the mAHP are also used to decide among competing modular alternatives.
Rule 1 If the mAHP is applied to two alternative modules with the same states the resulting
column and row sums can be compared to select between them. The module with the lowest
overall state sum is identified as the best alternative of this pair.
For the modules that were selected as the best modular alternatives, an economic objective
is defined to reflect the design and operational objectives of each module. Following the work
of Skogestad and coworkers [14,29], the objective function is selected to reflect deviation
from the nominal economic valuation, a loss function, for the module. Note that the loss
function only accounts for the given module.
C7H8 + H2 → C6H6 + CH4 (1)
2 C6H6 ↔ C12H10 + H2 (2)
The process flowsheet has two feed streams: a pure liquid toluene stream at ambient
conditions, and a gaseous hydrogen stream consisting of 95 mole percent hydrogen and 5
mole percent methane, at 100°F and 560 psia. The objective of the design is to produce
benzene at a rate of 265 lbmole/hr at a purity of 0.9997. The reactor operating pressure is 500
psia, and to realize a satisfactory reaction rate, the inlet reactor temperature should be above
1150°F; but the reactor temperature should not exceed 1300°F (to prevent hydrocracking).
Further, the reactant ratio of hydrogen to toluene at the reactor inlet should be five to one or
larger to reduce coking and to reduce product losses by reaction (2). The reactor effluent
stream must also be rapidly quenched to 1150°F or below to prevent coking in the heat
exchanger downstream of the reactor.
The flowsheet is shown in Figure 1. The fresh toluene and hydrogen streams are combined
with the recycle hydrogen and toluene streams and fed to a heat exchanger where the
combined stream is heated. The stream is then passed through a furnace where the
temperature is raised to the desired reactor inlet temperature. The effluent of the reactor is
quenched with a small amount of the condensed benzene stream leaving the separator. The
reactor effluent contains hydrogen, methane, benzene, toluene and diphenyl. Most of the non-
condensible gases are removed in the liquid-vapor separator and the stabilizer column. The
bottoms from the stabilizer column is the feed to the product column, where the purified
benzene is taken overhead. The bottoms stream from the product column is sent to the recycle
column where toluene is taken overhead and the diphenyl is removed in the bottoms stream.
There are other process flowsheets for the HDA process. Some of the alternatives are:
recycling the diphenyl to eliminate losses of benzene to the waste diphenyl by way of the
reverse reaction in reaction (2); combining the stabilizer and product columns into a single
column with the benzene product as a side draw stream; and using a membrane system to
reduce hydrogen losses in the purge. These alternative designs are not examined in this work;
rather, this work considers the control of the benchmark design discussed in the literature
[1,7,19].
In this section, the modular-based control structure synthesis that uses the mAHP is
demonstrated on the HDA process. Information regarding the modular development can be
found in [45].
The first step of the flowsheet decomposition is to associate the design, operational and
economic objectives or constraints with individual (dominant) unit operations. Table 3 lists
the objectives and operational constraints of the HDA process. Note that an objective can be
assigned to more than one unit operation. For instance, the benzene production rate can be
associated with the unit operation that first produced it and also with the unit that purified it.
Table 4 shows an association of the objectives of the HDA process with the dominant unit
operations. This assignment is not unique. These unit operations represent the initial set of
potential modules for the flowsheet decomposition.
Fig. 1. Schematic of the HDA process [9]. The solid line represents the first potential reactor module. The solid
and dotted lines represent the second potential reactor module. The solid and dashed lines represent the third
potential reactor module. The fourth potential reactor module is represented by the entire enclosed area.
Table 3
Design, operational, and economic objectives and constraints of the HDA process.
Design                               Operational                          Economic
Benzene production rate of           Reactor outlet temp. < 1300°F        Acceptable rate of
265 kg-mol/hr                        Reactor inlet temp. > 1150°F         return on investment
80% single pass conversion           Reactor effluent quench < 1150°F
Benzene purity of 99.97%             Reactor pressure 500 psia
5:1 H2 to toluene molar feed ratio   Minimum hydrogen loss in the purge
                                     Flexibility^a
^a Flexibility is the ability of the process to operate over a range of various steady-state disturbances.
Table 4
Dominant unit operations with associated HDA process objectives.
Module Objectives
Reactor 5:1 H2:TL ratio; production rate of 265 lbmole/hr; 80% conversion of TL;
reactor effluent temperature of 1150°F; reactor inlet temperature of
1150°F; reactor pressure of 500 psia; reactor outlet temperature < 1300°F;
flexibility; adiabatic operation
Product Column Production rate of 265 lbmole/hr; product purity; flexibility
Purge/Separator Minimize H2 losses; flexibility; economics
Recycle Column Minimize TL and benzene losses; flexibility; economics
Stabilizer Column Flexibility; economics
Compressor Flexibility; economics
After all of the objectives and constraints have been addressed, the uncommitted
(secondary) units in the flowsheet are assigned to one of the initial modules on the basis of
indirect support of the objectives identified for the module. With this refinement, the
candidate modules are now comprised of the dominant and secondary unit operations. Recall
that the modular decomposition approach is well defined (and finite) in the sense that the
decomposition is bounded by two extremes, either an individual unit operation or the entire
process flowsheet.
The process flowsheet decomposition procedure, presented in Section 3.2, is now applied
to the HDA process. Some of the potential modules that contain the reactor are illustrated in
Figure 1. Table 5 lists all the potential modules that result from the decomposition procedure.
The following rules are used in the systematic evaluation of the modules (see [45]).
Rule 2 Modules whose dominant unit operation receives the majority of the feeds that enter
the process should be evaluated first.
Table 5
Potential candidate modules for the HDA process.
Dominant Unit Operation            Assigned Secondary Unit Operation
Rule 3 To avoid identifying a final set of candidate modules with overlapping secondary unit
operations, once the first candidate module is identified as being the best module for the
selected dominant module set, all other modules that share any of the secondary unit
operations should be eliminated from further evaluation.
The first step in evaluating the flowsheet decomposition is to evaluate each potential
module. Rule 2 indicates that the modules with the reactor unit should be evaluated first for
the HDA process. Potential reactor modules are shown in Figure 1.
Steady-state sensitivity studies with respect to the expected disturbances are performed on
each of the potential modules. In the HDA process, the disturbance set consists of ±20%
flowrate changes to all the feeds to each module, ±10% composition change of the largest
component in the stream, ±15% change in the duty of utility streams, and ±10°F in
temperatures on all of the streams entering the reactor module. The results from these studies
(the effect of the inputs on the module states) are then used to develop the Level 0 and Level 1 matrices
that are used in the mAHP modular evaluation.
From these results and using the relative scale that was presented in Section 3.1, the values
for the Level 0 and Level 1 matrices are determined. Some results for the steady-state
sensitivity study for the reactor module enclosed by the solid and dashed lines, shown in
Figure 1, are given in Table 6. Table 7 gives the Level 0 matrix for the same module
evaluated for the inputs shown in the table and their effect on the states of the module. It is
remarked that the same states must be used when comparing different modules. However, the
expected disturbances or inputs can be different for each module that is evaluated.
The Level 1 matrices are developed as prescribed in Section 3.1 for each of the modules.
Examples of Level 1 matrices for the candidate module described above and for the steady
state sensitivity results given in Table 6, are presented in Table 8.
Singular value decomposition is now applied to all the Level 1 matrices. The product of the
absolute value of the first column of the left singular vector of each of the SVD evaluations
and the largest singular value gives the resulting weighted vector for each of the Level 1
matrices. The weighted vectors are combined (retaining order) to yield the Level 2 matrix for
the module. Table 9 shows the Level 2 matrix obtained from the SVD calculations of all the
Level 1 matrices, including those shown in Table 8. The element-by-element product of the
Level 2 matrix with its corresponding Level 0 matrix gives the Level 3 matrix, or the
weighted matrix, for the module. Table 10 provides this result.
Table 6
Steady-state sensitivity results for the reactor module shown by the area enclosed by the solid
and dashed line in Figure 1.
±20% Toluene Recycle Flow
Nominal Increase Decrease %Change
H/TL ratio 4.897 4.579 5.236 -7%
T reac. out (°F) 605.7 604.6 606.9 0.2%
TL conv. (%) 79.43 78.21 80.55 1.5%
±10% Toluene Recycle Composition
Nominal Increase Decrease %Change
H/TL ratio 4.897 4.894 4.978 -1.5%
T reac. out (°F) 605.7 605.7 606.1 -0.1%
TL conv. (%) 79.43 79.31 80.48 -1.0%
Table 7
Level 0 matrix developed for the third reactor module shown in Figure 1.
Inputs (ten columns; the labels, rotated and garbled in the original, include the fresh and
recycle feed flows and compositions, the furnace duty, cooler duty, compressor, and
stabilizer inputs)
H/TL ratio 5 7 2 3 2 9 1 5 1 1
T reac. out 1 1 1 1 1 1 4 4 1 1
TL conv. 3 1 2 2 2 3 1 5 2 1
Production 2 2 2 2 2 6 1 5 2 1
Profit 5 3 4 5 4 8 1 7 4 1
Flexibility 3 5 1 2 1 5 1 1 5 1
Table 8
Level 1 matrix for the third reactor module shown in Figure 1
States Inputs (same order as Table 7)
H/TL ratio 43.1061 70.0294 17.4863 18.5409 9.1096 87.9199 2.2388 32.9562 2.4203 2.4495
T reac. out 1.0777 1.2505 1.4572 1.2361 1.5183 1.1137 80.5978 13.1825 1.2101 2.4495
TL conv 13.1937 2.501 14.5719 4.9442 9.1096 13.0252 2.2388 32.9562 7.2608 2.4495
Production 10.5766 5.0021 14.5719 7.4616 12.1461 26.0504 2.2388 32.9562 9.6811 2.4495
Profit 43.1061 22.5094 46.6301 49.4424 42.5114 60.7842 2.2388 69.2080 33.8838 2.4485
Flexibility 19.3977 43.7683 1.4572 7.4164 1.5183 37.9901 2.2388 1.0985 42.3547 2.4495
Table 9
Level 2 matrix for the third reactor module shown in Figure 1
TL recycle flow TL recycle composition
States/Objectives (H/TL ratio, T reac. out, TL conv., Production, Profit, Flexibility; same
order in both column groups)
H/TL ratio 1 1/5 2/5 3/5 8/5 3/5 1 1/3 1 4/3 7/3 1/3
T reac. out 5 1 2 3 8 3 3 1 3 4 7 1
TL conv. 5/2 1/2 1 3/2 4 3/2 1 1/3 1 4/3 7/3 1/3
Production 5/3 1/3 2/3 1 8/3 1 3/4 1/4 3/4 1 7/4 1/4
Profit 5/8 1/8 1/4 3/8 1 3/8 3/7 1/7 3/7 4/7 1 1/7
Flexibility 5/3 1/3 2/3 1 8/3 1 3 1 3 4 7 1
Table 10
Level 3 matrix for the third reactor module shown in Figure 1
States Inputs (same order as Table 7)
H/TL ratio 8.6212 10.0042 8.7431 6.1803 4.5548 9.7689 2.2388 6.5912 2.4203 2.4495
T reac. out 1.0777 1.2505 1.4572 1.2361 1.5183 1.1137 20.1494 3.2956 1.2101 2.4495
TL conv 4.3979 2.5010 7.2860 2.4721 4.5548 4.3417 2.2388 6.5912 3.6304 2.4495
Production 5.2883 2.5010 7.2860 3.7082 6.0731 4.3417 2.2388 6.5912 4.8405 2.4495
Profit 8.6212 7.5031 11.6575 9.8885 10.6279 7.5980 2.2388 9.8869 8.4709 2.4495
Flexibility 6.4659 8.7537 1.4572 3.7082 1.5183 7.5980 2.2388 1.0985 8.4709 2.4495
Utilizing the Level 3 matrices, generated for each of the potential reactor modules, the best
candidate module is now selected. This is done by comparing the states of the modules. A row
sum on each of the Level 3 matrices, one for each module, gives the impact of each of the
states for the module. The results for each module are tabulated such that each of the
candidate modules appears as a column and each of the states appear as a row. The lowest
column (or state) sum of this combined tabulation gives the best candidate module from that
group. An example of the listing of the reactor modules considered and the states is given in
Table 11. From the values in the table, the reactor module enclosed in the solid and dashed
lines in Figure 1 is selected. It is worth noting that the state sums of the reactor modules
enclosed by the solid and dotted lines in Figure 1 are close and therefore either may be
selected.
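The selection step can be sketched in a few lines using the state sums of Table 11 (the helper name is illustrative; the numbers are the tabulated per-state row sums for the four candidate reactor modules).

```python
def best_module(state_sum_table):
    """Rule 1 in code: for candidate modules sharing the same states, the
    module with the lowest overall state sum is the best alternative."""
    totals = {name: sum(col) for name, col in state_sum_table.items()}
    return min(totals, key=totals.get), totals

# State sums from Table 11 (order: H/TL ratio, T reac out, TL conv,
# Production, Profit, Flexibility) for the four candidate reactor modules.
candidates = {
    "Reactor Mod 1": [239.1072, 72.3555, 193.0555, 243.8874, 234.4525, 70.1448],
    "Reactor Mod 2": [286.2570, 105.0933, 102.2510, 123.0890, 327.7638, 159.6896],
    "Reactor Mod 3": [294.5340, 30.5450, 104.1299, 133.4970, 319.0022, 162.6488],
    "Reactor Mod 4": [318.1055, 72.9516, 116.5736, 165.8125, 423.3298, 272.0337],
}
choice, totals = best_module(candidates)
```

The lowest total falls to the third reactor module, consistent with the selection discussed above, with one other candidate close behind.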
To complete the decomposition of the flowsheet, all remaining candidate modules that
contain the other dominant unit operations (except the reactor) are examined using Rule 3.
That is, if any of the remaining candidate modules are found to contain any secondary unit
operations that are also in the identified best reactor module, then these candidate modules are
eliminated. The identified best modules for the HDA flowsheet are shown in Figure 2. A
control structure is now developed individually for each of the modules. Note that the control
structure developed for each module is not necessarily unique. To decide among alternative
control structures, the mAHP is applied with the following modifications. First, the states for
the evaluation are altered to focus on control issues, and second, steady-state sensitivity tests
are replaced by dynamic open loop sensitivity tests. Additionally, all integrating loops such as
level loops should be closed before imposing bounded disturbances.
The approach used here to develop the control structure for the modules is a hybrid of the
formal mathematical approach [14,26] and the heuristic-based approach [7,19,48]. The
portion of the mathematical approach used involves the application of optimization (steady-
state) theory to identify active control constraints for each module. This optimization uses an
objective function, O, that attempts to minimize the loss in profit (see Equation (3)) for a
given module.
The steady state optimization is carried out over the expected disturbances, listed in Table
7. The optimization is formulated as:
min_u O(y, u, d)
subject to          f(x, u, d) = 0          (process model)
                    g(x, u, d) ≤ 0          (constraints)
                    y = h(x, u, d)          (measurements)          (5)
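As a hedged sketch of how formulation (5) might be exercised for a module, the toy example below minimizes a quadratic loss over a feasible grid of the manipulated input u; the model f, constraint g, measurement map h, and nominal value are placeholders, not the HDA module models.

```python
import numpy as np

# Toy stand-ins for formulation (5); all functions here are hypothetical.
def state_from_model(u, d):
    """Closed-form solution of the toy model f(x, u, d) = 0."""
    return 2.0 * d - 0.5 * u

def measurements(x, u, d):
    """Toy measurement map y = h(x, u, d)."""
    return x + u

def loss(u, d):
    """Objective O(y, u, d): squared deviation of y from its nominal
    economic valuation (taken here as y = 1)."""
    y = measurements(state_from_model(u, d), u, d)
    return (y - 1.0) ** 2

def solve(d, u_grid=np.linspace(-10.0, 10.0, 20001)):
    """min_u O(y, u, d) subject to the toy constraint g = u - 5 <= 0,
    enforced by restricting the search grid to the feasible region."""
    feasible = u_grid[u_grid <= 5.0]
    return feasible[np.argmin([loss(u, d) for u in feasible])]

u_star = solve(0.0)
```

In the chapter's procedure, this optimization is solved for each module over the expected disturbance set, and the active constraints at the optimum are identified; in this toy case the constraint is inactive.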
Table 11
Comparison of the different candidate reactor modules shown in Figure 1.
States Reactor Mod 1 Reactor Mod 2 Reactor Mod 3 Reactor Mod 4
H/TL ratio 239.1072 286.257 294.534 318.1055
T reac out 72.3555 105.0933 30.545 72.9516
TL conv 193.0555 102.251 104.1299 116.5736
Production 243.8874 123.089 133.497 165.8125
Profit 234.4525 327.7638 319.0022 423.3298
Flexibility 70.1448 159.6896 162.6488 272.0337
Based on the steady-state optimization for each module, only one active control constraint is
identified for the HDA process: the reactor inlet temperature. The nine-step procedure to
generate a plantwide control structure developed by Luyben et al. [7] is now applied to each
module. These steps are: (i) establish the control objectives, (ii) determine the control degrees
of freedom, (iii) establish energy management, (iv) set the production rate, (v) control the
product quality, (vi) fix a flow in every recycle loop and control inventories, (vii) check
component balances, (viii) control individual unit operations, and (ix) optimize the
economics or improve the dynamic controllability. The number of control degrees of freedom
identified for each module (referred to by their respective dominant unit operation) are as
follows: reactor: 10, product column: 10, and recycle column: 5.
One of the limitations of the heuristic-based approach is that not all nine steps are relevant
for each module. For instance, energy management (energy generated from the reaction or
introduced to the system is removed to either an external process stream or exchanged with a
cooler stream) must be established for each module. In the case of step (vi), however, fixing a
flowrate in a recycle loop is only relevant to the reactor module, where the quench is
internal to the module. This is because recycle streams appear as inlet streams to a module,
and the proposed methodology treats these streams as disturbances from one module to
another. All such streams have been addressed in the development of the modules for the
HDA process.
Material balances on all recycle streams and component balances are validated by dynamic
simulation of the re-assembled flowsheet. In particular, special care is taken to ensure that
material that enters a module also exits the module. The control structure of each module is
defined using the engineering approach of Luyben [9].
The control structure developed on the modular basis is shown in Figure 5. The
performance of each of the control structures is now assessed by examining how well
disturbances are attenuated or rejected. The first column in Table 12 lists the controlled and
manipulated variables for each module, which is denoted by the superscript following the
controlled variable/manipulated variable pairing.
Table 12
Comparisons of the controlled and manipulated variables for the modular and Luyben
approaches.

Modular Approach
Controlled                 Manipulated
FEHE cold side Tout        FEHE bypass valve
Reactor Tin                Furnace fuel valve¹
Cooler Tout                Cooling water flow¹
Reactor Tout               Quench flow valve¹
Sep. lev.                  Stab. feed valve¹
Reactor pdt rate           TL feed valve¹
Purge gas flow             Purge valve¹
Compr speed                Compr power¹
5:1 H/TL ratio             H2 feed valve¹
BZ pdt                     Pdt.Col. dist valve²
Stab. cond press.          Stab. dist valve
BZ molfrac pdt top         Pdt.Col. ref. valve²
Pdt.Col. cond press        Pdt.Col. cond valve²
BZ molfrac pdt bot.        Pdt.Col. reb stm valve²
BZ molfrac stab. top       Stab. ref. valve²
Stab. cond lev.            Stab. cooling water valve²
Stab. reb lev.             Stab. reb stm valve
Pdt.Col. cond lev.         Pdt.Col. feed valve²
Pdt.Col. reb lev.          Pdt.Col. bot. valve²
Recy.Col. cond press       Recy.Col. cond valve
Recy.Col. TL molfrac       Recy.Col. ref. pump speed³
Recy.Col. reb lev.         Recy.Col. bot valve³
Recy.Col. cond lev.        Recy.Col. dist pump speed³
Recy.Col. Diph. molfrac    Recy. reb stm valve³

Luyben & coworkers [7]
Controlled                 Manipulated
FEHE cold side Tout        FEHE bypass valve
Reactor Tin                Furnace fuel valve
Cooler Tout                Cooling water flow
Recy. gas press            H2 feed valve
Sep. lev.                  Stab. feed valve
Stab. cond press.          Stab. dist valve
Purge CH4 comp.            Purge valve
Compr. speed               Compr. power
TL recy flow               Recy.Col. dist pump speed
Pdt.Col. ref. flow         Pdt.Col. ref. valve
BZ molfrac pdt top         Case w/Pdt.Col. stm valve
Pdt.Col. cond press        Pdt.Col. cond valve
BZ molfrac Stab. bot       Stab. reb stm valve
BZ molfrac Stab. top       Stab. ref. pump speed
Stab. cond lev.            Stab. cooling water valve
Stab. reb lev.             Pdt.Col. feed valve
Pdt.Col. cond lev.         Pdt.Col. dist valve
Pdt.Col. reb lev.          Pdt.Col. bot valve
Recy.Col. cond press       Recy.Col. cond valve
Recy.Col. ref. flow        Recy.Col. ref. pump speed
Recy.Col. reb lev.         Recy.Col. reb stm valve
Recy.Col. cond lev.        TL feed valve
Recy.Col. Diph. molfrac    Recy.Col. bot. valve
Reactor Tout               Quench flow valve
Figures 3 and 4 show some dynamic responses to a disturbance in the benzene composition
and in the toluene flowrate for the entire process flowsheet. The results show that the
proposed control structures provide satisfactory disturbance rejection (overshoot, settling
time, stability, etc.). If a control structure does not attenuate or re-direct a disturbance, several
steps can be taken. First, if the control structure development provides alternative structures,
the next best alternative is selected. Second, the control design for the module can be
modified or the base process design itself can be modified to obtain a control degree of
freedom (see assumption 4).
Third, if none of these is a viable alternative, then return to the original modular
decomposition and select the second best module and redesign the control structure for it. The
complete plant/control design flowsheet of the HDA process is shown in Figure 5.
The plantwide control structure that was developed here for the HDA process is compared
to three plantwide control structures that have been published in the literature. The first
structure was developed by Luyben et al. [9], the second is by Stephanopoulos [47], and the
third by Fisher et al. [19]. The latter structure was inferred from the discussion provided in that work, whereas the first two works provided portions of their completed plantwide control flowsheets. All four plantwide control structures were simulated dynamically in HYSYS.Plant (NetVers v2.2), and compared with respect to their responses to the expected range of disturbances. Of
interest was the satisfaction of the initial process design objectives after being subjected to
typical disturbances.
The results show that the plantwide control structures developed by the methodology
presented here are comparable with the other structures. The analysis shows that both the
Luyben and the modular method give satisfactory closed-loop performance. The responses
also show that the modular control structure did better (faster settling) than the Luyben
structure with a generic non-optimal tuning. Of significance, however, is that the modular
method yielded a control structure that performed well when compared to the traditional
heuristic-based plantwide approach. This is in spite of the fact that the control structure was
designed for modules rather than the entire plant. Thus, for more complex flowsheets with
numerous unit operations and non-conventional products, the development of a plantwide
control structure using the modular method may be a more attractive approach, because of the
reduction in size of the control structure synthesis problem.
The dynamic simulation results are not definitive because optimal tuning of the control
loops has not been completed, nor was it the objective of the present work. The main
objective is to evaluate the potential of the modular decomposition approach to synthesize an
effective plantwide control structure and to demonstrate the rigor of the mAHP procedure to
select an acceptable control structure from among competing alternatives in the presence of
competing objectives.
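The mAHP selection rests on Saaty's analytic hierarchy process, in which the ranking of alternatives follows from the principal eigenvector of a pairwise-comparison matrix. The sketch below illustrates only the standard AHP step; the comparison matrix is hypothetical, and the modifications specific to the mAHP are not reproduced here.

```python
import numpy as np

# Hypothetical 3x3 pairwise-comparison matrix (Saaty 1-9 scale) for three
# competing control structures; a_ij > 1 means structure i is preferred to j.
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])

# Priority weights: normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency check: CI = (lambda_max - n)/(n - 1), CR = CI/RI with
# RI = 0.58 for n = 3; CR < 0.1 is the usual acceptability threshold.
n = A.shape[0]
CR = (eigvals[k].real - n) / (n - 1) / 0.58
print(w, CR)
```

In the mAHP the same computation would be repeated for each decision criterion and the resulting rankings aggregated.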
Fig. 3. The behavior of the outlet flowrate of the product column for a benzene composition
disturbance.
Fig. 4. The behavior of the outlet temperature of the product column to a disturbance in the
toluene feed flowrate.
Fig. 5. HDA flowsheet with control structure as developed using the modular-based approach. The controllers are denoted by
the circles with a horizontal line through them. The connections are depicted in the direction of the arrows.
The plantwide control structures of Stephanopoulos and Fisher and coworkers have been
simulated successfully at the nominal design conditions. However, when the simulation (or
the process) is perturbed with the disturbances listed in Table 7 the simulations fail to reach a
stable (convergent) solution. For both cases, the non-convergence may be due to the control
structure of the stabilization column. This is not surprising because neither control structure
controlled the pressure in this column. Table 12 lists the control configurations found from
the modular decomposition and Luyben and coworkers.
7. SUMMARY
The true challenge to the process design and process control communities is to devise a design
methodology that combines both steady-state and dynamic considerations in all stages of the
design exercise. Hopefully, the mAHP methodology used here will find applications in the
design exercise itself.
ACKNOWLEDGEMENTS
The first author is grateful for the financial support provided by Texas Tech University's
Chancellors Fellowship and the Texas Tech University Process Control and Optimization
Consortium.
ABBREVIATIONS
Bot bottoms
BZ benzene
Casc cascade
Col. column
Compr compressor
Cond condenser
Diph diphenyl
Dist distillate
FEHE feed heat exchanger
H2 hydrogen
Lev level
Molfrac mole fraction
Pdt production or product
Pdt.Col product column
Press pressure
Reb reboiler
Recy recycle
Recy.Col. recycle column
Ref reflux
Sep separator
Stab stabilizer column
Stm steam
Temp temperature
TL toluene
NOMENCLATURE
d vector of disturbances
f vector valued state functions
g inequality functions
u vector of inputs
h vector valued output functions
x vector of states
y vector of measured outputs
O objective function
REFERENCES
[1] J. M. Douglas, Conceptual Design of Chemical Processes, McGraw-Hill, St. Louis, MO,
1988.
[2] U. Mann, K. A. Hoo, and S. Emets, to be submitted to Ind. Eng. Chem. Res, 2003.
[3] R. Smith, Chemical Process Design, McGraw-Hill, New York, NY, 1995.
[4] P. Buckley, Techniques of Process Control, John Wiley & Sons, New York, NY, 1964.
[5] A. S. Foss, AIChE J, 19 (1973) 209.
[6] G. Stephanopoulos and C. Ng, J. Process Contr, 10 (2000) 97.
[7] M. Luyben, B. Tyreus, and W. Luyben, AIChE J, 43 (1997) 3161.
[8] W. L. Luyben, In IFAC Workshop on Interactions Between Process Design and Process
Control, Pergamon Press, Oxford, UK, 1992.
[9] W. Luyben, B. Tyreus, and M. Luyben, Plantwide Process Control, McGraw-Hill, New
York, NY, 1998.
[10] R. Price and C. Georgakis, Ind. Eng. Chem. Res, 32 (1993) 2693.
[11] R. Price, P. Lyman, and C. Georgakis, Ind. Eng. Chem. Res., 33 (1994) 1197.
[12] R. Shinnar, B. Dainson, and I. H. Rinard, Ind. Eng. Chem. Res, 39 (2000) 103.
[13] R. Shinnar, Chem. Eng. Commun, 9 (1981) 73.
[14] S. Skogestad, J. Process Control, 10 (2000) 487.
[15] T. Larsson and S. Skogestad, Model. Ident. Control, 21 (2000) 209.
[16] A. Groenendijk, A. Dimian, and P. Iedema, AIChE J., 46 (2000) 133.
[17] S. M. A. M. Bouwens and P. Kosters, In IFAC Workshop on Interactions Between
Process Design and Process Control, Pergamon Press, Oxford, UK, 1992.
[18] W. Fisher, M. Doherty, and J. Douglas, Chem. Eng. Res. Des, 63 (1985) 353.
[19] W. Fisher, M. Doherty, and J. Douglas, Ind. Eng. Chem. Res, 27 (1988) 597.
[20] J. Ponton and D. M. Laing, Trans IChemE, 71 (1993) 181.
[21] D. M. Laing and J. Ponton, In IFAC Workshop on Interactions Between Process Design
and Process Control, Pergamon Press, Oxford, UK, 1992.
[22] A. Zheng, R. V. Mahajanam, and J. M. Douglas, AIChE J, 45 (1999) 1255.
[23] Z. Fonyo, In IFAC Workshop on Interactions Between Process Design and Process
Control, Pergamon Press, Oxford, UK, 1992.
[24] P. Mizsey and Z. Fonyo, Compt. Orient. Proc. Eng, (1991) 411.
[25] W. L. Luyben, Ind. Eng. Chem. Res, 27 (1988) 206.
[26] M. Morari, Y. Arkun, and G. Stephanopoulos, AIChE J, 26 (1980) 220.
[27] S. Skogestad and M. Morari, Ind. Eng. Chem. Res., 26 (1987) 2029.
[28] M. Morari, In IFAC Workshop on Interactions Between Process Design and Process
Control, Pergamon Press, Oxford, UK, 1992.
[29] T. Larsson, K. Hestetun, E. Hovland, and S. Skogestad, Ind. Eng. Chem. Res., 40 (2001)
4889.
[30] E. Wolff, S. Skogestad, M. Hovd, and K. Mathisen, In IFAC Workshop on Interactions
Between Process Design and Process Control, Pergamon Press, Oxford, UK, 1992.
[31] M. Hovd and S. Skogestad, In IFAC Workshop on Interactions Between Process Design
and Process Control, Pergamon Press, Oxford, UK, 1992.
[32] Y. Arkun and G. Stephanopoulos, AIChE J., 26 (1980) 975.
[33] T. A. Meadowcroft and G. Stephanopoulos, AIChE J., 38 (1992) 1254.
[34] J.E. Johnston, Synthesis of Control Structures for Complete Chemical Plants. PhD thesis,
MIT, Cambridge, MA, 1991.
[35] C. Ng, A Systematic Approach to the Design of Plant-Wide Control Strategies for
Chemical Processes, PhD thesis, MIT, Cambridge, MA, 1997.
[36] I. E. Grossmann and Z. Kravanja, Comput. Chem. Eng., 19 (1995) 189.
[37] C. A. Floudas, Nonlinear and Mixed Integer Optimization, Oxford University Press, New
York, NY, 1995.
[38] E. N. Pistikopoulos, Comput. Chem. Eng., 19 (1995) S553.
[39] E. N. Pistikopoulos and M. G. Ierapetritou, Comput. Chem. Eng., 19 (1996) 1089.
[40] M. J. Mohideen, J. D. Perkins, and E. N. Pistikopoulos, Comput. Chem. Eng., (1997)
S457.
[41] V. Bansal, J. D. Perkins, E. N. Pistikopoulos, R. Ross, and J. M. G. van Schijndel,
Comput. Chem. Eng., 24 (2000) 261.
[42] M. Türkay, T. Gürkan, and C. Özgen, Comput. Chem. Eng., 17 (1993) 601.
[43] L. T. Biegler, I. E. Grossmann, and A. W. Westerberg, Systematic Methods of Chemical
Process Design. Phys. & Chem. Eng. Sci. Prentice Hall, Upper Saddle River, NJ, 1997.
[44] T. L. Saaty, Decision Making for Leaders. RWS Publications, Pittsburgh, PA, 1995.
[45] E. M. Vasbinder and K. A. Hoo, to appear in Ind. & Eng. Chem. Res., October 2003.
[46] N. L. Ricker, Comput. Chem. Eng., 19 (1995) 949.
[47] G. Stephanopoulos, Chemical Process Control: An Introduction to Theory and Practice,
Prentice-Hall, Englewood Cliffs, New Jersey, 1984.
[48] N. L. Ricker, J. Proc. Contr., 6 (1996) 205.
Chapter C3
a Department of Chemical Engineering, Faculty of Science, University of Amsterdam, Nieuwe Achtergracht 166, 1018 WV Amsterdam, The Netherlands.
b Department of Chemical Technology, Faculty of Applied Sciences, Delft University of Technology, Julianalaan 136, 2628 BL Delft, The Netherlands.
1. INTRODUCTION
Plantwide control problems arise in the context of intricate recycles of mass and energy
that characterise modern process plants. Positive feedback effects complicate the dynamics
and control because of interactions and non-linear phenomena. Managing the production rate
and the formation of waste are important plantwide control problems that originate primarily
from the control of component inventory. In this paper we will make the distinction between
the inventory of main components and that of impurities. Main components designate reactants, intermediates and products that ensure the targeted production rate and economic efficiency. Although much smaller in amount, the inventory of impurities is equally important, since impurities may be harmful to the environment, affect product quality and lower its price, plug equipment, and cause operating troubles.
In this chapter we will handle two fundamental subjects regarding the plantwide control of
the material balance. The first issue deals with the control of the reactant inventory, which in turn is related to the flexibility of the design. We will demonstrate that in recycle systems the
design of the chemical reactor and the control of the reactants' make-up are interrelated. We
will distinguish between two basic types of reactant inventory control. The first one, based on
the concept of self-regulation, consists of setting the fresh reactant feeds on flow control.
However, this strategy, which could be designated as "classical", is possible only if the reactor
is large enough. We will present a quantitative analysis that makes use of a dimensionless
parameter, the plant Damkohler number. Non-linear phenomena could lead to additional constraints, such as state multiplicity and closed-loop instability, as well as high sensitivity to
production rate changes, process disturbances or parameter uncertainty in design.
The second strategy based on controlled-regulation consists of measuring the component
inventory and implementing a feedback control system. A suitable implementation consists of
fixing the reactor-inlet (recycle plus fresh feed) flow rate, measuring the reactant inventory
somewhere in the recycle loop, and adjusting the fresh feed accordingly. Component
inventory can be evaluated by composition or by level measurements. The important
advantage of this strategy is that the reactor behaves as "decoupled" from the rest of the plant.
Therefore, control structures developed for stand-alone reactors can be applied.
It may be observed that the fundamental difference between the two strategies consists of
the way in which the reactants are brought into the process. The approach based on self-
regulation has the advantage of setting directly the production rate. A supplementary benefit
is that product distribution is fixed in the case of complex reactions. In the second strategy,
the production rate is manipulated indirectly, by changing the setpoints of recycles, which could be seen as a disadvantage. However, this strategy handles better some non-linear phenomena, for example the excessive steady-state sensitivity known as the "snowball effect". Additionally,
this strategy guarantees the stability of the whole recycle system if the stand-alone reactor is
stable or stabilized by local control.
The second issue is the handling of the inventory of impurities in a complex plant with
recycles. This is again a plantwide control problem combined with the design of units
involved in recycles. A dynamic evaluation of the inventory of components present only in
small amounts is necessary, this time around the separation section. A powerful method to
solve this problem is the assessment of interactions by controllability analysis. Exploiting the
interaction through recycles can lead to effective control structures impossible to achieve with
stand-alone units. If necessary, chemical conversion of impurities can be used to counteract
undesired positive feedback effects. A case study, the handling of impurities in a VCM plant, illustrates the approach. The main result is that the control of three key impurities can be achieved by implementing only two controllers, taking advantage of the self-regulation
effects induced by recycles.
The paper starts with a literature review. Then, we discuss the issue of controlling the
make-up of reactants in a plant with recycles, with two sub-sections referring to self-
regulation and controlled-regulation. The chemical reactor is an isothermal CSTR. Typical
reaction stoichiometry is examined. Non-linear analysis with dimensionless models is used as the investigation tool. This approach allows the generalisation of results over a wide range of
design or operating variables. Each case emphasises the interaction between reactor design
and plantwide control. A distinct section discusses the extension to more complex
stoichiometry, as well as to PFR and non-isothermal reactors. The next subchapter then deals with the plantwide control of impurities in a complex plant with several reactors but a central separation system. The chapter ends with conclusions and recommendations.
2. PREVIOUS WORKS
The strategy of controlling plants with recycles is known today as "plantwide control".
The first monograph dedicated to this subject has been published only recently by Luyben,
Tyreus and Luyben [1]. As they stressed "How a process is designed fundamentally
determines its inherent controllability... In an ideal project dynamic and control strategies
would be considered during the process synthesis and design activities".
In this short review, we address only the control of the material balance. The first
plantwide control methodology came from Buckley [2]. He distinguished between inventory
control, which should be designed first (slow dynamics), and quality control that could be
implemented afterwards (fast dynamics). For recycle systems, the direction of flow should be
included in the analysis [3], while the make-up flow rate should be adjusted based on the total inventory in the recycle loop. Over the years, Buckley's procedure served to develop workable plantwide control structures. A practical way of handling the effect of interactions due to recycles was to provide sufficient surge inventories. As a design rule, the time constants of liquid surge inventories should be a factor of 10 larger than the product-quality time constants. Today this practice is no longer acceptable as a conceptual basis. Just-in-time
supply chain demands fast dynamics and low inventories. Excessive inventories result in
uneconomical capital costs, as well as in higher safety and environmental risks.
Price, Lyman and Georgakis [4] proposed a modern extension of Buckley's procedure. The key concept is "self-consistency". An inventory control structure is self-consistent if it is able to propagate a production rate change throughout the process, so that the change produces significant variations in the flow rates of all major feed and product streams. Guidelines were
developed for production rate and inventory control. When applied to the Eastman benchmark
problem, it was found that the major difference between alternatives comes from the selection
of the throughput manipulator. The authors recommend the use of process internal flows.
However, the systematic investigation of design and control of plants with recycles started
only recently with the works of Luyben and co-workers at Lehigh University in USA. In a
first series of papers [5], they investigated elementary dynamic effects in the system
reactor/distillation column/recycle for simple reactions by means of case studies. The design
was optimum with respect to the Total Annual Cost. Among interesting results we may
mention that 1) the recycle has by far larger effect on the overall plant gain than the gain of
the stand-alone units, and 2) the optimum steady-state design might not be the best from a process dynamics viewpoint. For example, for the consecutive reactions A → B → C with B as the desired product, the optimal design is a small reactor. However, the system becomes highly sensitive to throughput changes, so that a 5% increase in fresh feed can lead to a 100% amplification in the flow rate to separation. This behaviour, known as the "snowball effect", has
motivated the search for better control structures. Luyben found the snowball effect could be
prevented by two measures: i) use a variable-volume strategy in controlling the reactor
holdup, and ii) fix the recycle flow rate somewhere on the path between separator and reactor.
The second observation was at the origin of a generic rule for controlling recycle systems
formulated by Luyben as follows [8]: "Use a control structure that fixes the flow rate of one
stream in a liquid recycle loop. In process with two or more recycle streams the flow rate of
each recycle stream can be set". Wu and Yu [6] presented an improved variable-volume
control strategy. Skogestad [7] brought a more critical viewpoint on Luyben's recycle rule.
In the above series, an important paper of Tyreus and Luyben [5] deals with second-order
reactions in recycle systems. Two cases are considered: complete one-pass conversion of a
component (one recycle), and incomplete conversion of both reactants (two recycles). As
general heuristic, they found that fixing the flow in the recycle might prevent snowballing. In
the first case, the completely converted component can be fed on flow control, while the recycled component is added somewhere in the recycle loop. In the second case, the situation is more complicated. Four reactant feed control alternatives are proposed, but only two are workable. This is the case when both reactants are added on level control in recycles (CS1), or
when the reactant is added on composition control combined with fixed reactor outlet (CS4).
As disadvantage, the production rate can be manipulated only indirectly. Other control
structures - with one reactant on flow control the other being on composition (CS2) or level
control (CS3) - do not work. The last structure can be made workable if the recycle flow rates
are used to infer reactant composition in the reactor. This study reinforces the rule that the
flow rate of one stream in a liquid recycle must be fixed in order to prevent snowballing.
It is also interesting to note the observation of Lyman and Luyben [9] that designs with
small reactor and large recycle flow rates shut down when the production rate is increased, the
case studied being a ternary two-recycle process. Furthermore, tight control of the liquid level
in a CSTR/stripper system is not desirable from plantwide control viewpoint (Belanger and
Luyben, [10]).
From the above presentation, we may conclude that the make-up policy of reactants is a subtle plantwide control problem that also concerns the design of the chemical reactor. In the
following section we will demonstrate how non-linear analysis can be used to investigate
these issues over a wide range of production rate or specification of reaction selectivity.
3.1 Self-regulation
In this section, we will show the following characteristics of recycle systems with control
structures based on self-regulation:
1) A feasible steady state exists only if the reactor volume exceeds a critical value.
2) If the reactor is small, the steady state exhibits high sensitivity to production rate
changes (snowball effect).
3) For complex reactions, multiple steady states exist, some of them possibly unstable.
1 + f_3 z_{A,3} - f_2 z_{A,2} - Da z_{A,2} = 0    (1)

f_1 - (1 + f_3) = 0    (2)

f_1 - f_2 = 0    (3)

f_2 z_{A,2} - f_3 z_{A,3} - f_4 z_{A,4} = 0    (4)

f_2 - (f_3 + f_4) = 0    (5)
In the above equations, the dimensionless variables are defined using the fresh feed flow rate F_0 and concentration c_0 as reference values, while the star superscript signifies nominal values. The control structure fixes the values of Da, z_{A,3} and z_{A,4}. Therefore, Eqs (1) to (5) can be solved for the five unknowns z_{A,2}, f_1, f_2, f_3 and f_4. The input/output mass balance, obtained by combining Eqs (2), (3) and (5), requires f_4 = 1. Then, Eqs (4) and (5) give:
f_2 = (z_{A,3} - z_{A,4}) / (z_{A,3} - z_{A,2})    (6)

f_3 = (z_{A,2} - z_{A,4}) / (z_{A,3} - z_{A,2})    (7)

Da z_{A,2} = (1 - z_{A,4})(z_{A,3} - z_{A,2}) / (z_{A,3} - z_{A,2}) = 1 - z_{A,4}    (8)
Evidently, the factor (ZA,3 - ZA,2) can be simplified if, and only if it is non-zero. Then, the
flow rate and composition of the reactor-outlet and the recycle flow rate are given by:
(z_{A,2}, f_2, f_3) = ( (1 - z_{A,4}) / Da ,
                        Da (z_{A,3} - z_{A,4}) / (z_{A,3} Da - (1 - z_{A,4})) ,
                        (1 - z_{A,4} - Da z_{A,4}) / (z_{A,3} Da - (1 - z_{A,4})) )    (9)

In Eq. (9) the flow rates are positive if, and only if, the following condition is fulfilled:

(1 - z_{A,4}) / z_{A,3} < Da < (1 - z_{A,4}) / z_{A,4}    (10)
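As a numerical check, the closed-form steady state of Eqs. (9) and (10), as reconstructed here from the dimensionless balances, can be evaluated directly; the parameter values are illustrative.

```python
# Steady state of the A -> P recycle loop under self-regulation, per the
# reconstructed Eqs. (9)-(10):
#   zA2 = (1 - zA4)/Da
#   f2  = Da*(zA3 - zA4)/(zA3*Da - (1 - zA4))
#   f3  = (1 - zA4 - Da*zA4)/(zA3*Da - (1 - zA4))
# Parameter values below are illustrative only.

def steady_state(Da, zA3, zA4):
    lo = (1 - zA4) / zA3            # lower feasibility bound of Eq. (10)
    hi = (1 - zA4) / zA4            # upper feasibility bound of Eq. (10)
    if not lo < Da < hi:
        raise ValueError("Da outside the feasible window of Eq. (10)")
    denom = zA3 * Da - (1 - zA4)
    zA2 = (1 - zA4) / Da
    f2 = Da * (zA3 - zA4) / denom
    f3 = (1 - zA4 - Da * zA4) / denom
    return zA2, f2, f3

zA2, f2, f3 = steady_state(Da=2.0, zA3=0.9, zA4=0.05)
print(zA2, f2, f3)
```

The result satisfies the balances: f_2 = 1 + f_3, and the separator closure f_2 z_{A,2} = f_3 z_{A,3} + f_4 z_{A,4} with f_4 = 1.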
The first inequality characterizes recycle systems with reactant inventory control based on
self-regulation. It occurs because the separation section does not allow the reactant to leave
the process. Consequently, for a given reactant feed flow rate F_0, a large reactor volume V or fast kinetics k is necessary to consume the whole amount of reactant fed to the process, thus
avoiding reactant accumulation. The above variables are conveniently grouped in the
dimensionless plant Damkohler number introduced by Bildea et al. [14] for analysing the
behaviour of recycle systems. Note that the factor z_{A,3} accounts for the degradation of the reactor's performance due to the impure reactant recycle, while the factor (1 - z_{A,4}) accounts for the reactant leaving the plant with the product stream.
A second, trivial solution is obtained when z_{A,3} - z_{A,2} = 0, in which case Eq. (8) cannot be simplified:

(z_{A,2}, f_2, f_3) = (z_{A,3}, ∞, ∞)    (11)
Although this solution is unfeasible, since it would imply infinite flow rates, it helps to understand the mechanism by which the feasible states are born. The trivial and non-trivial solutions are presented in Fig. 2a. For clarity, the per-pass conversion X is used as state variable. Note that the two solution branches cross each other at the point given by:

Da_T = (1 - z_{A,4}) / z_{A,3}    (12)
Moreover, for Da = Da_T, the determinant of the Jacobian of Eqs (1) to (5) vanishes. Thus, Da_T defines what is called a transcritical bifurcation [15]. Because this type of bifurcation occurs only in special cases, it is expected that it will disappear under model perturbation, for example for complex kinetics. This aspect is discussed in more detail in Kiss et al. [18].
where ν_A and ν_P are matrices of stoichiometric coefficients, not necessarily positive.
Let us assume that the separation section does not allow reactants Aj to leave the process.
Then, the overall mass balance can be written as:
ν_A ξ = F_0    (14)
where ξ is the vector of reaction extents and F_0 is the vector of fresh reactant flow rates. Obviously, the linear system (14) has at least one solution ξ for any vector F_0 if the following condition is fulfilled:

rank(ν_A) = N    (15)
The flow rates of fresh reactants can be set at arbitrary values, but within stoichiometric
constraints. Then, the internal flow rates and concentrations adjust themselves in such a way
that, for each reactant, the net consumption rate equals the feed flow rate. Therefore, the
reactant inventory becomes self-regulating. When N = R, Eq. (14) has a unique solution, and consequently neither the reaction rate constants nor the reactor volume influences the selectivity or the production rate.
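The solvability argument behind Eq. (14) can be verified numerically. For the parallel-consecutive example used in this section (A + B → P, A + P → R) there are N = 2 reactants and R = 2 reactions, the consumption matrix has full rank, and the extents are uniquely fixed by the fresh feeds; the feed values below are illustrative.

```python
import numpy as np

# Eq. (14): nuA @ xi = F0. Rows = reactants (A, B), columns = reactions
# (A + B -> P, A + P -> R); entries are moles of reactant consumed per
# unit extent. With N = R = 2 and full rank, the extents are unique, so
# the net consumption of each reactant can match any stoichiometrically
# sensible pair of fresh-feed flows: the inventory is self-regulating.
nuA = np.array([[1.0, 1.0],    # A is consumed by both reactions
                [1.0, 0.0]])   # B is consumed only by the first

F0 = np.array([1.0, 0.7])      # fresh feeds of A and B (illustrative)

assert np.linalg.matrix_rank(nuA) == nuA.shape[0]  # solvable for any F0
xi = np.linalg.solve(nuA, F0)  # reaction extents
print(xi)   # extent of A+B->P equals the B feed; the rest of A goes to R
```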
As an example, let us consider the parallel-consecutive reactions:

A + B → P
A + P → R
Such chemistry is common, for example in butane/butene alkylation for isooctane production,
or for the reaction between ethylene oxide and alcohols. We assume that the volatilities are
such that A and B are recycled together. The control structure setting both the A and B fresh feeds on flow control is feasible, as displayed in Fig. 3a.
The steady state behaviour is described by the following dimensionless model, where the feed flow rate of reactant A was used as reference:
1 + f_3 z_{A,3} - f_2 z_{A,2} - Da z_{A,2} (z_{B,2} + α z_{P,2}) = 0    (16)

f_B + f_3 z_{B,3} - f_2 z_{B,2} - Da z_{A,2} z_{B,2} = 0    (17)

f_3 z_{P,3} - f_2 z_{P,2} + Da z_{A,2} (z_{B,2} - α z_{P,2}) = 0    (18)

f_2 z_{A,2} - f_3 z_{A,3} = 0    (19)

f_2 z_{B,2} - f_3 z_{B,3} = 0    (20)

z_{A,3} + z_{B,3} + z_{P,3} - 1 = 0    (21)
Equations (16)-(21) have a closed-form parametric solution. Multiple steady states occur. The relation between the reactor volume Da and the reactor-outlet concentration z_{A,2}, obtained for a recycle free of P (z_{P,3} = 0), is given by Eq. (22) and presented in Fig. 3b.
Da = f_B [α (2 f_B - 1) + (1 - f_B)] / [α (2 f_B - 1) z_{A,2} (1 - z_{A,2})],   0 < z_{A,2} < 1,  1/2 < f_B < 1    (22)
Furthermore, the product distribution is fixed by the fresh feed flow rates, being
independent of the reactor volume and the ratio of reaction rate constants:
f_{P,4} / f_{R,4} = (2 f_B - 1) / (1 - f_B)    (23)

f_{P,4} = 2 f_B - 1    (24)
A process designed near the turning point of the Da - z_{A,2} map can suffer from operability problems. If the reaction kinetics are over-estimated, or the feed flow rate deviates from the nominal design value, the operating point falls to the left of the turning point, in the region where no steady state exists. As a result, infinite reactant accumulation occurs and the plant has to be shut down.
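The turning point and the two steady-state branches can be located numerically. The closed form used below is our reconstruction of Eq. (22), so it should be read as a sketch; alpha and fB are illustrative values.

```python
import numpy as np

# Da - zA2 map for A + B -> P, A + P -> R with A and B recycled together
# (recycle free of P), using the reconstructed form of Eq. (22);
# alpha = k2/k1 and fB is the fresh B feed relative to fresh A.
alpha, fB = 1.0, 0.8
num = fB * (alpha * (2 * fB - 1) + (1 - fB))

def Da_of_z(zA2):
    return num / (alpha * (2 * fB - 1) * zA2 * (1 - zA2))

# Da(zA2) is minimal where zA2*(1 - zA2) is maximal, i.e. at zA2 = 1/2:
# this is the turning point; below Da_min no steady state exists and the
# reactant accumulates indefinitely.
Da_min = Da_of_z(0.5)

# For any operating Da > Da_min there are two steady states, obtained by
# solving the quadratic zA2*(1 - zA2) = c.
Da_op = 2.0 * Da_min
c = num / (alpha * (2 * fB - 1) * Da_op)
z_low = (1 - np.sqrt(1 - 4 * c)) / 2
z_high = (1 + np.sqrt(1 - 4 * c)) / 2
print(Da_min, z_low, z_high)
```

Designing well to the right of Da_min keeps the operating point away from the region where kinetic over-estimation or feed drift eliminates the steady state.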
3.2. Controlled-regulation
In this section, we will show that strategies based on controlled-regulation of reactant
inventory have two important advantages:
1) Provide two ways for changing the production rate: reactor-inlet flow rates and
reaction conditions. The first one is appropriate for large reactors, while the second
one is recommended for small reactors.
2) Avoid undesired phenomena, such as unfeasibility, state multiplicity and instability.
f_3 = f_1 - Da z_{A,2}    (25)
Note that other strategies for changing the production rate can be used with the same
control structure, for example changing simultaneously the reactor-inlet flow rate and the
reactor holdup or temperature.
f_{0A} + f_{0B} + f_3 + f_5 - f_2 = 0    (28)

f_2 z_{A,2} - f_3 z_{A,3} = 0    (29)

f_2 z_{B,2} - f_5 z_{B,5} = 0    (30)

f_{0A} + f_3 = f_{Rf,A}    (31)

f_{0B} + f_5 = f_{Rf,B}    (32)
The plantwide control structure presented in Fig. 5a does not rely on self-regulation. The reactor-inlet flow rates of both reactants are fixed at f_{Rf,A} and f_{Rf,B} (Eqs. (31) and (32)), respectively. The fresh feed rates f_{0A} and f_{0B} are used to control the inventories at some locations in the plant. Note that an arbitrary flow rate can be used as reference in the definition of the dimensionless quantities.
f_{0A} + f_3 z_{A,3} - f_2 z_{A,2} - Da z_{A,2} z_{B,2} = 0    (33)

f_{0B} + f_3 z_{B,3} - f_2 z_{B,2} - Da z_{A,2} z_{B,2} = 0    (34)

f_{0A} + f_{0B} + f_3 - f_2 = 0    (35)

f_2 z_{A,2} - f_3 z_{A,3} = 0    (36)

f_2 z_{B,2} - f_3 z_{B,3} = 0    (37)

z_{A,3} + z_{B,3} + z_{P,3} = 1    (38)

f_0 + f_3 = f_{Rec}    (39)
The steady state model, Eqs (33) to (39), has a closed-form solution. Here we only present the Da - z_{A,2} dependence (Eq. (40) and Fig. 6b):
Da = f_{Rec} / ( 2 z_{A,2} [ (f_{Rec} - 1)(1 - z_{P,3}) - f_{Rec} z_{A,2} ] )    (40)
For fixed Da, either zero or two solutions exist (Fig. 6b). The behaviour is similar to the one presented in Section 3.1.2, and the same considerations are valid. In particular, the
feasibility and stability boundary, representing a fold bifurcation, is given by:
(Da, z_{A,2})* = ( 2 f_{Rec}² / [ (1 - z_{P,3})² (f_{Rec} - 1)² ] ,  (f_{Rec} - 1)(1 - z_{P,3}) / (2 f_{Rec}) )    (41)
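The fold locus can be checked numerically. Eq. (40) is used below in the reconstructed form stated above, so the sketch should be read as illustrative rather than definitive; the parameter values are made up.

```python
import numpy as np

# Fold (feasibility/stability) point of the Da - zA2 map for the
# controlled-regulation structure, using the reconstructed form
#   Da(zA2) = fRec / (2*zA2*((fRec - 1)*(1 - zP3) - fRec*zA2)).
fRec, zP3 = 3.0, 0.1
B = (fRec - 1) * (1 - zP3)

def Da_of_z(zA2):
    return fRec / (2 * zA2 * (B - fRec * zA2))

z_fold = B / (2 * fRec)            # zA2 coordinate of the fold
Da_fold = 2 * fRec**2 / B**2       # Da coordinate of the fold

# Numerical check: the fold is the minimum of Da(zA2) on the physical range.
z = np.linspace(1e-3, B / fRec - 1e-3, 2000)
assert np.all(Da_of_z(z) >= Da_fold - 1e-9)
print(z_fold, Da_fold)
```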
loop, measuring the inventory by means of concentration or level measurements, but feeding
the fresh reactants somewhere in the recycle loop.
Eq. (15) gives the condition for feasibility of control structures based on self-regulation.
The physical explanation is that sufficient reactions should be available in order to balance,
for each component, the fresh feed with the overall consumption rate. A second feasibility
condition requires that the reactor should be large enough. The critical value of reactor
volume corresponds to a bifurcation point of the mass balance equations. The high sensitivity of a recycle flow with respect to the fresh feed, known as the "snowball effect", can be avoided by designing the system for high per-pass conversion. For multi-reactant / multi-reaction systems, state
multiplicity occurs. The instability of the low-conversion branch restricts the selection of the
operating point.
When controlled-regulation is applied, the appropriate location for fixing flow rates is the
reactor inlet. This strategy decouples the reactor from the rest of the plant. Changing the
production rate can be achieved by modifying i) the setpoint of reactor temperature or level
controller; ii) the setpoint of reactor-inlet flow controller. The first method is appropriate for
small reactors, while the second one is recommended for large reactors.
When several reactants are involved, the two strategies may be combined. The
recommended approach may be summarised as follows: design the plant for high conversion
of the reference reactant, set its feed on flow control, fix the recycle flows, and control the
make-up of other reactants somewhere in their respective recycle loops.
The previous sections considered recycle systems involving isothermal CSTRs and rather simple reactions. We emphasize that the results are also qualitatively valid for other systems. Further, we will briefly comment on other interesting works in this area. Pushpavanam and Kienle [16] studied the reaction A → P in a non-isothermal CSTR / Separation / Recycle
process. Assuming infinite activation energy and equal coolant and reactor-inlet temperatures,
they reported state multiplicity, isolated solution branches and instability, for both
conventional and fixed-recycle control structures. In addition, the conventional structure
showed regions of unfeasibility. The authors claimed the superiority of the fixed-recycle
control structure over the fixed-fresh flow rate control.
Bildea et al. [17] studied the behaviour of a first-order reaction in a non-isothermal PFR /
Separation / Recycle process with a control structure based on self-regulation. They reported
four different types of bifurcation diagrams, including a maximum of two steady states, and
parameter regions for which the unique state is unstable. The instability, likely to occur for
low conversion operating points, can be avoided by large heat-transfer capacity, low coolant
temperature, high reactor inlet temperature, or fixed-recycle plantwide control structure. The
HDA plant case study was used to demonstrate that state multiplicity and instability can easily
occur in real plants and to illustrate the pitfalls of a design based only on steady state
considerations.
Kiss et al. [18] studied the multiplicity behaviour of isothermal CSTR / Separation / Recycle systems involving six reaction systems of increasing complexity, including chain-growth polymerisation. Below a critical value of the plant Damkohler number, the only steady state involves infinite flow rates and is obviously infeasible. Therefore, feasible steady states are possible only if this critical Da value is exceeded. For a single-reactant, one-reaction system the critical value corresponds to a transcritical bifurcation, after which a single stable steady state exists.
For consecutive reactions, including polymerisations, a fold bifurcation appears that leads to
two feasible steady states. Moreover, fold bifurcations are typical for multi-reaction systems.
The low-conversion branch is usually unstable. This result may have practical importance, as
for example in the case of polymerisation reactions when the quasi-steady state assumption is
not valid or the gel-effect is significant. Note that qualitatively similar results were obtained
when the CSTR was replaced by a PFR [19]. This similarity suggests that the non-linear
behaviour of the recycle systems is dictated primarily by the chemical reaction and flowsheet
structure rather than by the type of chemical reactor.
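The feasibility threshold can be illustrated with a toy material balance, a sketch under stated assumptions rather than the model of [17, 18]: a first-order reaction A → P in a CSTR with perfect separation and recycle of unconverted A, assumed returned at the feed concentration. The dimensionless steady-state balances then reduce to f1 = 1 + f1²/(f1 + Da), whose finite root f1 = Da/(Da − 1) exists only for Da > 1; below the critical value the only "solution" is an infinite recycle flow.

```python
import numpy as np
from scipy.optimize import brentq

def reactor_inlet_flow(Da):
    """Dimensionless reactor-inlet flow f1 = F1/F0 at steady state for a
    first-order reaction A -> P in a CSTR / perfect-separation / recycle
    plant (illustrative model; recycle assumed pure A at feed concentration).
    The plant balance is f1 = 1 + f1**2 / (f1 + Da)."""
    if Da <= 1.0:
        return np.inf                      # no finite-flow steady state
    residual = lambda f1: f1 - 1.0 - f1**2 / (f1 + Da)
    return brentq(residual, 1.0 + 1e-12, 1e12)

for Da in (0.9, 1.01, 1.1, 2.0, 5.0):
    print(f"Da = {Da:5.2f}  f1 = {reactor_inlet_flow(Da):10.4g}")
```

As Da approaches 1 from above, f1 diverges, which is the snowball behaviour behind the infeasibility described in the text.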
From all these results, we conclude that plantwide control strategies should take advantage
of self-regulation. However, undesired non-linear phenomena may occur. Therefore, we
recommend thorough steady state sensitivity and dynamic stability studies, before attempting
a practical implementation.
The inventory of impurities is a plantwide control problem, because it involves both the
reaction and the separation subsystems through recycles. This important issue has been
acknowledged in industry, among others by Downs [11]. Dimian et al. [20] recently proposed
a quantitative approach. As will be demonstrated, the inventory of the main components and
of impurities cannot be managed separately, because they are coupled through recycles. The
interactions can hinder or help the solution of the problem, depending on the balance between
positive and negative feedback effects. The implementation of control structures based on the
viewpoint of stand-alone units can lead to severe conflicts. Hence, a systemic approach based
on the quantitative evaluation of the recycle effects is needed.
This section starts by presenting an outline of the methodology, followed by the description
of a case study and the plantwide control problem. Possible control structures, as well as the
effect of recycles are evaluated by linear MIMO controllability analysis, both at steady state
and in the frequency domain. The comparison of design alternatives is performed by closed
loop simulation. This procedure allows the designer to choose the final flowsheet, as well as
the appropriate control strategy.
4.1. Methodology
The methodology presented below is suitable for handling complicated dynamic material
balance problems, and can be summarised in the following steps (Fig. 7):
1. Problem definition. The key impurities are traced by means of tables containing sources,
sinks, exit streams, and transit units. Their formation and depletion must be supported by
consistent stoichiometry. This step may lead to flowsheet alternatives by changing the
connectivity of streams or inserting new units. Design modifications of units may equally
arise during the application of the procedure.
2. Calibration of a steady state Plant Simulation Model. For an existing plant, this activity
can be combined with data reconciliation. Tuning of the stoichiometry, as well as calibration
of the thermodynamic models, can be done in a systematic manner [23].
3. Plantwide control problem.
a. Control objectives. The strategic plantwide control objective is the minimisation of the total
process waste. Key impurities are those subject to analytical control procedures, in
reactants and products as well as in internal process streams.
b. Process constraints. These are defined by environmental regulations, product quality and
equipment protection, as maximum tolerable amounts or/and concentrations of impurities in
effluent streams, or in internal streams or inventory of selected units.
c. Controlled variables (outputs) are typically quality requirements of intermediate reactants,
and concentrations of key impurities in selected internal process streams.
d. Manipulated variables (inputs). These are degrees of freedom left after considering the
inventory control of main components. They may include manipulated variables left for
quality control of selected separation units, as well as input/output plant streams.
e. Disturbances. These may be flow rates and concentrations of key impurities produced by
the process, or introduced with reactants. Disturbances must be defined in terms of amplitude
and frequency range. Setpoint changes, due to optimisation or to design modifications such as
re-routing of some streams, must also be considered.
f. Scaling of variables and disturbances. Proper scaling is necessary for meaningful
computation of controllability indices.
4. Steady state controllability analysis. Simple and efficient plantwide control structures can
be built by means of multi-SISO PI controllers, also known as decentralised (integral)
feedback control [24]. The main actions are:
- Compute steady-state gains for plant and disturbances.
- Evaluate the feasibility of input/output combinations by SVD analysis.
- Estimate feasible pairing by RGA and Niederlinski index.
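The three actions of step 4 amount to a few lines of linear algebra. The following sketch uses an invented scaled gain matrix (not taken from the case study) to compute the singular values, the RGA, and the Niederlinski index:

```python
import numpy as np

def rga(G):
    """Relative gain array: Lambda = G o (G^-1)^T, element by element."""
    return G * np.linalg.inv(G).T

def niederlinski(G):
    """Niederlinski index det(G)/prod(diag(G)); a negative value rules out
    the diagonal pairing for decentralised integral control."""
    return np.linalg.det(G) / np.prod(np.diag(G))

# Hypothetical scaled steady-state gain matrix (3 inputs x 3 outputs)
G0 = np.array([[2.0, 0.5, 0.3],
               [0.4, 1.5, 0.2],
               [0.1, 0.3, 1.0]])

sigma = np.linalg.svd(G0, compute_uv=False)   # SVD: check the smallest singular value
print("singular values:", sigma)
print("RGA diagonal  :", np.diag(rga(G0)))    # close to 1 -> weak interaction
print("Niederlinski  :", niederlinski(G0))    # must be positive for the diagonal pairing
```

A diagonal pairing is retained when the smallest singular value of the scaled plant is not too small, the diagonal RGA elements are positive and close to one, and the Niederlinski index is positive.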
5. Dynamic flowsheeting. The practical approach is to build a simplified dynamic model that is
still capable of describing the relevant dynamics of the actual problem. Detailed dynamic
models are necessary for the key units, where impurities are generated and eliminated, as
kinetic models for reactors and dynamic models for some distillation columns. For units with
low inventory, steady-state models should be sufficient.
6. Dynamic controllability analysis. A tractable linear dynamic model can be built either
by means of transfer functions or by state-space description. Then a standard controllability
analysis versus frequency can be performed. Here the main steps are:
- Compute RGA and RGA-number. Check the pairing suggested by steady-state analysis.
Evaluate the effect of interactions.
- Check input constraints by means of the closed-loop disturbance gain (CLDG). Modify the
design if necessary.
- Estimate the controllability performance of the selected structures by the performance
relative gain array (PRGA) and the relative disturbance gain (RDG).
- Repeat the procedure for each alternative. Details about controllability measures can be
found in the books of Skogestad & Postlethwaite [24] or Dimian [25].
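For the frequency-domain measures, the same matrix computations are evaluated on G(jω). The sketch below uses an invented 2x2 transfer matrix and disturbance model (placeholders, not the plant of the case study); PRGA = diag(G)·G⁻¹ and CLDG = PRGA·Gd follow the definitions in [24]:

```python
import numpy as np

def G(w):
    """Illustrative 2x2 plant evaluated at s = jw (made-up dynamics)."""
    s = 1j * w
    return np.array([[2.0/(10*s + 1), 0.6/(15*s + 1)],
                     [0.5/(12*s + 1), 1.5/(8*s + 1)]])

def Gd(w):
    """Illustrative disturbance model (one disturbance, two outputs)."""
    s = 1j * w
    return np.array([[1.0/(20*s + 1)],
                     [0.8/(20*s + 1)]])

for w in (0.1, 1.0, 10.0):
    Gw = G(w)
    prga = np.diag(np.diag(Gw)) @ np.linalg.inv(Gw)   # performance RGA
    cldg = prga @ Gd(w)                               # closed-loop disturbance gain
    # |CLDG| below 1 at a given frequency indicates that the (scaled) inputs
    # are not saturated by the disturbance there.
    print(f"w = {w:5.1f} rad/h  |CLDG| = {np.abs(cldg).ravel()}")
```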
7. Closed loop simulation. The plantwide control structures identified by the controllability
analysis are tested by full non-linear simulation. Evidently, this time-consuming step should
be applied only to the most promising design alternatives.
8. Design alternatives. For each alternative identified in step 1 the procedure should be
repeated and the alternatives ranked. The generation of a "base-case" is recommended, against
which the other alternatives can be evaluated. Design modifications may concern the
(re)sizing of unit operations, and alternative flowsheet (recycle) structures.
Some alternatives can be rejected during steps 4-6, when controllability analysis indicates
clearly inferior dynamic behaviour. Conversely, design improvements can be suggested
by the controllability measures, as described in steps 4 and 6. Essentially, they should
ensure:
1) that the effect of interactions does not prevent the implementation of a decentralised
control system (RGA and RGA-number);
2) that the magnitude of the inputs is effective in controlling the outputs at steady state
(SVD and CLDG analysis).
9. Design actions. The results of such a study are:
- Identify the best flowsheet alternative and suggest a plantwide control strategy.
- Suggest revamp actions, such as replacing the catalyst with a more selective one,
replacing obsolete internals of distillation columns, inserting new separation units, etc.
After pre-treatment, the crude DCE is sent to purification in the distillation column S2, the
key unit of the separation system. This column receives DCE from three reactors. It is also the
place where three large recycle loops cross. The top distillate of S2 should remove the light
impurities mentioned above, while the purification of DCE from heavies is continued in the
distillation columns S3 and S5.
The separation of impurities in S2 is affected by volatility constraints. At 350 K, the top
temperature, the volatilities of I1, I2, and I3 relative to DCE are about 1.9, 0.94 and 1.6,
respectively. Therefore, the top distillate of S2 can easily remove I1 and I3, but not I2. Note
also that the top distillate of S2 cannot contain more than 8% I1.
To prevent the accumulation of I2, a side stream drawn from S2 is sent to the reactor R1,
where chlorination to heavies takes place. Because of the constraint on I1, the top distillate of
S2 carries a significant amount of DCE, which has to be recovered and recycled by the
column S4. By recycling the bottom of S4 to the reactor R1, some amounts of the impurities I1
and I2 are converted into heavies. This operation helps to reduce the accumulation of undesired
impurities, particularly of I2. It is therefore rational to introduce a specialised reactor for the
conversion of non-saturated impurities into heavies by liquid-phase chlorination. This new
reactor, designated R4 and placed between S2 and S4, gives the opportunity for re-routing
some streams. New flowsheet alternatives can be imagined, as depicted in Fig. 9.
All heavy impurities, produced by the process or by converting lights to heavies, are
removed by the distillation columns S3 and S5. These columns work in tandem in order to
limit the losses of DCE. Unlike the distillation of light impurities, there is no heavy impurity
that constrains the purification of DCE in the tandem S3-S5.
Note that S2 is a large distillation column, of about 50 theoretical stages, operating at high
reflux. The separation in S3 requires only a few stages. In contrast, S4 and S5 are small
units of particular importance: S4 is the only exit of the light impurities (Lights), while S5 is
the only exit of the heavy impurities (Heavies). After thermal cracking, the reaction mixture is
quenched and cooled (not shown). The recovery of HCl and the separation of VCM from
un-reacted DCE take place in the units S6 and S7, respectively.
A steady-state Plant Simulation Model of an existing plant helped to calibrate the base-
case model at a representative operating point. Some details of the industrial process were
omitted, but their omission affects neither the plantwide material balance nor the process
dynamics. The units S0, S6 and S7 may be considered black boxes. In contrast, S1 to S5 are
rigorous distillation columns, modelled with sieve trays. In steady state, all the reactors are
described by a stoichiometric approach, but kinetic models are used for R1 and R4 in dynamic
simulation.
Note that the reaction network has been formulated so as to use a minimum of
representative chemical species while respecting the atomic balance. This approach is necessary
because yield reactors can misrepresent the process. Details enabling a full simulation are
given in Dimian et al. [22].
Because the key impurities are involved in all three reaction systems, through recycles that
cross in the separation system, their inventory is a plantwide control problem. The problem is
complemented by the technological and environmental constraints mentioned above.
The advanced removal of I1 and I2 must be reconciled with an optimal concentration of I3
in the bottom product of column S2. It is worth mentioning that these contradictory requirements
cannot be fulfilled by any stand-alone design of S2. Thus, the effective control of impurities
becomes possible only by exploiting the positive feedback effects of the recycle loops, which are
balanced by the negative feedback effects of chemical conversion and exit streams.
Hence, the plantwide control objective is the quality of DCE sent to the cracking section,
for which three specifications regarding key impurities are required. These are the outputs of
the plantwide control problem, available from direct concentration measurements such as IR
spectroscopy or on-line chromatography.
An analysis of the degrees of freedom indicates as first-choice manipulated variables
those belonging to the column S2, used for quality control: D2 - distillate flow rate, SS2 - side
stream flow rate, and Q2 - reboiler duty. We may also consider manipulated variables
belonging to the column S4, which is adjacent to and connected with S2 by a recycle, but dynamically
much faster. Thus, supplementary inputs are: D4 - distillate flow rate, and Q4 - reboiler duty.
Hence, the inputs are the variables D2, SS2, Q2, D4 and Q4.
A major disturbance of the material balance is simulated here by a step variation in the
external feed (FDCE). A second significant disturbance is XI3, the fraction of impurity I3
introduced by the external DCE feed. The most probable range of frequencies for disturbance
rejection is 0.1-1 rad/h for throughput, and 0.1-10 rad/h for impurities.
Table 1
RGA elements for the pairings with I1, I2, I3

Pairing        Base-case         Alt. A            Alt. B            Alt. C
Q2, SS2, D2    1.47, 2.01, 1.34  1.58, 1.27, 1.10  1.37, 1.27, 0.89  1.63, 0.96, 0.83
Q2, SS2, D4    1.43, 1.35, 0.96  1.51, 1.38, 1.13  1.43, 1.29, 0.92  1.61, 1.43, 1.06
Q2, SS2, Q4    1.43, 1.36, 0.97  1.65, 1.48, 1.25  1.43, 1.30, 0.93  1.74, 1.57, 1.19
In turn, the effect of interactions on the control of I3 depends both on the recycle structure
and the manipulated variable. Distillate rate D2 gives more interactions compared with the
case where the manipulated variable belongs only to S4 (either D4 or Q4). The use of
manipulated variables from different units should not be a surprise when these are,
dynamically speaking, close enough, as is the case with S2 and S4. In the base-case and
alternative B the effect of the S4 variables on I3 is enhanced by closing the other loops, while
in alternatives A and C this effect is hindered. However, at this point there is no clear
distinction between the base-case and the alternatives. A dynamic analysis is needed.
RGA-number
The RGA-number, defined as ||RGA - I||sum, gives a quantitative measure of the interactions in a
diagonal decentralised control structure. The lower the RGA-number, the more preferred the control
structure. A value close to zero means quasi-independent SISO controllers.
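As a concrete sketch of this computation, with an invented 2x2 transfer matrix rather than the S2/S4 model, the RGA-number can be evaluated frequency by frequency:

```python
import numpy as np

def rga_number(Gw):
    """||RGA(G) - I||_sum at one frequency; values near zero indicate
    quasi-independent SISO loops for the diagonal pairing."""
    rga = Gw * np.linalg.inv(Gw).T
    return np.abs(rga - np.eye(Gw.shape[0])).sum()

def G(w):
    """Illustrative 2x2 plant at s = jw (made-up gains and lags)."""
    s = 1j * w
    return np.array([[1.8/(10*s + 1), 0.4/(5*s + 1)],
                     [0.3/(8*s + 1),  1.2/(6*s + 1)]])

for w in (0.1, 0.5, 1.0, 10.0):
    print(f"w = {w:5.1f} rad/h  RGA-number = {rga_number(G(w)):.3f}")
```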
The evaluation of three control structures in the base-case, (1) Q2-I1, SS2-I2, D2-I3,
(2) Q2-I1, SS2-I2, D4-I3, and (3) Q2-I1, SS2-I2, Q4-I3, shows quasi-constant low values at lower
frequencies, up to 0.5 rad/h, in agreement with the steady-state analysis. Note that the
structures using manipulated variables from the column S4, namely D4 and Q4, are somewhat less
affected. Consequently, the control of all three impurities might be difficult for disturbances
above 0.5 rad/h.
If the loop SS2-I2 is removed, the situation improves considerably. The best pairing is
Q2-I1 and D4-I3. The results indicate that both the base-case and alternative B behave
much better than alternatives A and C. This pairing is feasible up to a
frequency of 10 rad/h, which is realistic from the plantwide viewpoint.
Fig. 10 Closed Loop Performance Gain: a) controllers I1-Q2, I2-SS2 and I3-D2; b) I1-Q2,
I2-SS2 and I3-D4
Closed-loop simulations with only the controller Q2-I1 show that the effect of the disturbance
FDCE on I2 is reduced by 80%. I2 remains below its maximum, so further reduction is not needed.
A controller on I2 would be required only if this impurity had to be kept on setpoint, but such an
operation would create more problems than it solves. Hence, leaving I2 free, with the
guarantee of a bounded variation, is a rational compromise that preserves the robustness of the
control system. The plantwide control objective can thus be achieved with only two control
loops, Q2-I1 and D4-I3, which are almost decoupled over a practical range of frequencies.
Full dynamic simulation confirmed the analysis by implementing the controllers Q2-I1
and D4-I3 as P-type only. Thus, the effect of interactions through recycles makes it possible to
keep the impurity I2 between bounds, such that its control is not needed. Note that using
manipulated variables from different units is not common control practice. However, the
principle of proximity is preserved, because the columns S2 and S4 are dynamically adjacent.
Moreover, the dynamics of the impurities is dominated by the effect of recycles.
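The behaviour described here, bounded offset with proportional-only loops, can be reproduced on a toy two-by-two system. All numbers below are invented for illustration; this is not a model of the S2/S4 columns:

```python
import numpy as np

# First-order 2x2 plant in state-space form, dx/dt = A x + B u + E d, y = x,
# integrated with explicit Euler. Gains and time constants are illustrative only.
A  = np.diag([-1/10.0, -1/8.0])               # 10 h and 8 h time constants
B  = np.array([[2.0, 0.4],
               [0.3, 1.5]]) * np.array([[1/10.0], [1/8.0]])
E  = np.array([[1.0], [0.8]]) * np.array([[1/10.0], [1/8.0]])
Kc = np.diag([2.0, 2.0])                      # two P-only loops, diagonal pairing

dt, T = 0.01, 100.0
x = np.zeros(2)
d = np.array([1.0])                           # unit step disturbance at t = 0
for _ in range(int(T / dt)):
    u = -Kc @ x                               # proportional control, setpoints at zero
    x = x + dt * (A @ x + B @ u + E @ d)

open_loop = np.linalg.solve(A, -(E @ d))      # offset without any control
print("open-loop offset  :", open_loop)
print("closed-loop offset:", x)               # bounded, non-zero (no integral action)
```

The P-only loops strongly attenuate the disturbance but leave a non-zero steady-state offset, which is acceptable whenever the controlled impurity only has to stay within bounds rather than on setpoint.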
The effects of impurity generation (mainly in chemical reactors) and of depletion (exit streams
and chemical conversion), as well as of accumulation (liquid-phase reactors, distillation
columns and reservoirs), can be balanced by the effect of recycles in order to achieve an
acceptable equilibrium state.
Interactions through recycles can be exploited to create plantwide control structures that
are not possible from a stand-alone unit viewpoint. In this case, acceptable control of three
key impurities on a distillation column can be achieved with only two control loops.
Chemical conversion of impurities is an effective way to counteract the positive feedback
effect of recycles. Re-routing the connection of separation units can generate flowsheet
alternatives with different controllability properties.
In this case study, the steady-state controllability analysis was not able to make a clear
distinction, but the frequency analysis showed that two alternatives offered better rejection of
disturbances. The analysis, based on steady state and dynamic controllability measures,
requires only limited closed-loop simulation. This investigation is best applied in
revamping projects, where the information about the material balance can be exploited by
tuning a rigorous plant model. The approach is also recommended for designing new plants,
but an accurate knowledge of the chemical reactions producing impurities is necessary.
5. CONCLUSIONS
- For multi-reactant / multi-reaction systems, state multiplicity occurs. The instability of the
low-conversion branch restricts the selection of the operating point.
b. Controlled-regulation consists of fixing one flow rate in each recycle loop, measuring
the inventory by means of concentration or level measurements, and feeding the fresh
reactants somewhere in the recycle loop. This approach is known as Luyben's fixed-recycle
rule. This chapter demonstrates that the reactor inlet is an appropriate location for fixing the
recycle flow rates. This strategy offers several advantages:
- Changing the production rate can be realised by adjusting i) the setpoint of the reactor
temperature or level controller, or ii) the setpoint of the reactor-inlet flow controller. The first
is appropriate for small reactors, while the second is recommended for large reactors.
- The design and control of the chemical reactor can be treated as for a stand-alone unit. This
approach is convenient when dealing with non-isothermal reactors, where undesired non-linear
phenomena, such as state multiplicity and instability, can cause trouble in operation.
c. When several reactants are involved, the two strategies may be combined. The
recommended approach is: design the plant for high conversion of the reference reactant, set
the reference reactant on flow control, fix the recycle flows, and make up the other reactants
somewhere in their recycle loops.
3. The control of the inventory of main components and impurities is interrelated with the
design of reactors and separators. For the main components, the reactor inventory and the
make-up policy play the most important role. In handling the inventory of impurities, the
design of separators and the interactions through recycles are determinant. Snowball effects
are possible if the positive feedback is not compensated. Chemical conversion of impurities into
benign material is an efficient method to avoid their accumulation in recycles.
4. Handling of impurities is a combined process design and plantwide control problem.
Exploiting the interaction effects through recycles can lead to effective control structures that
are impossible to achieve with stand-alone units. Controllability analysis, both steady state
and dynamic, can be used to evaluate quantitatively their effect. Improvements can be
suggested, which may regard both the flowsheet structure and the design of units. A case
study of handling the impurities in a complex VCM plant illustrates the approach. Keeping
three key impurities within acceptable bounds can be achieved with only two P-type
controllers, taking advantage of the self-regulation induced by recycle interactions.
NOTATION
c = concentration, mol/m3
Da = Damköhler number, dimensionless, Da = kVc0^(n-1)/F0
fk = flow rate, dimensionless, fk = Fk/F0
F = flow rate, m3/s
k = pre-exponential factor, (mol/m3)^(1-n) s^-1
SP = setpoint
V = reactor volume, m3
X = conversion, dimensionless, X = 1 - z2/z1
z = concentration, dimensionless, z = c/c0
α = ratio of reaction rate constants, dimensionless
Subscripts
0 = fresh feed
1 = reactor inlet
2 = reactor outlet, separation inlet
3,5 = recycle
4 = product
A, B = reactant
P, R = product
REFERENCES
[18] A. A. Kiss, C. S. Bildea, A. C. Dimian and P. D. Iedema, Chem. Eng. Sci., 57 (2002)
535.
[19] A. A. Kiss, C. S. Bildea, A. C. Dimian and P. D. Iedema, Chem. Eng. Sci., 58 (2003),
2973.
[20] A. C. Dimian, A. J. Groenendijk, and P. Iedema, Comput. Chem. Eng., 20S (1996)
S805.
[21] A. J. Groenendijk, A. C. Dimian and P. Iedema, AIChE J., 41 (2000) 133.
[22] A. C. Dimian, A. J. Groenendijk and P. Iedema, Ind. Eng. Chem. Res., 40 (2001) 5784.
[23] A. C. Dimian, Chem. Eng. Progress, September (1994) 58.
[24] S. Skogestad and I. Postlethwaite, Multivariable Feedback Control: Analysis and Design, Wiley, 1996.
[25] A. C. Dimian, Integrated Design and Simulation of Chemical Processes, Computer
Aided Chemical Engineering vol. 13, Elsevier, 2003.
The Integration of Process Design and Control
P. Seferlis and M.C. Georgiadis (Editors)
© 2004 Elsevier B.V. All rights reserved.
Chapter C4
b
Laboratory of Process Control and Integration (LACIP),
Department of Chemical Engineering, Federal University of Rio Grande do Sul (UFRGS),
Rua Marechal Floriano, 501, CEP: 90020-061 - Porto Alegre - RS - Brazil
c
Bayer Technology Services, Advanced Process Control, Leverkusen, Germany
1. INTRODUCTION
Due to heat integration and the integration of unit operations in multifunctional reactors,
e.g. reactive distillation processes, or in integrated separation processes, e.g. the combination of
distillation and membrane separation, industrial processes are becoming more and more
complex. Both measures improve energy utilization and reduce production cost, but they also
have the effect of reducing the number of manipulated variables and increasing process
interactions. This makes the control of such processes, e.g. air separation [46] or reactive
distillation [8], potentially difficult or even impossible [30]. In the conceptual design of such
processes, attention must be paid not only to the nominal performance of the process
(selectivity, yield, production cost) but also to the potential for the compensation of variations
in the feed and the throughput of the process, adaptation to dynamically changing
specifications or degradation of the equipment (e.g. aging of catalysts), and generally to
disturbances that cause a non-optimal operation of the process.
In the detailed engineering phase, the instrumentation of the process must be chosen such
that the process can be controlled to operate at the chosen set-point and to react fast to setpoint
changes, but no more equipment than necessary should be installed, to avoid unnecessary
investments and, in particular, maintenance costs during operation. On the other hand, if it
turns out that the installed measurements are insufficient to meet the control specifications,
large costs are incurred as well. So the decision on the instrumentation is critical and requires
special attention, in particular for new processes where experience from operations is often
not available.
Tools and indices for input-output (I/O)-controllability analysis provide support for the
above mentioned decisions in the design phase and also for the specification of the process
control system. In the plant design phase, the manipulated and the regulated variables which
are used in (automatic or manual) feedback control have to be determined. We call this
control structure selection. A large system will typically have a large number of inputs and
outputs. By a top-down approach, a system can be divided into smaller subsystems with
associated inputs and outputs. If possible, the grouping should be done such that (i) there are
only weak couplings between the subsystems; and (ii) each subsystem has the same dynamic
scale, i.e., time constants, time delays, and RHP zeros are of the same order of magnitude.
There are no general rules for the grouping but some useful and intuitive guidelines can be
given. For example, Seborg et al. [33] present such guidelines for the selection of controlled,
manipulated, and measured variables. Some of them are: (a) Select variables that are not self-
regulating; e.g., the column pressure of an atmospheric column is self-regulating and
therefore need not be controlled. (b) Choose output variables that may exceed equipment
and operating constraints (e.g., temperatures, pressures). (c) Select output variables that are a
direct measure of product quality and have favourable dynamic and static characteristics. (d)
Select inputs that have large effects on the controlled variables and rapidly affect the
controlled variables. The reader will find many illustrative examples and further heuristics in
[23, 24, 36]. The advantage of these heuristic rules is that from steady state and qualitative
dynamic models of the process and engineering insight, one can suggest a possible control
structure that is likely to work. But for a final decision on the control structure, it is
recommended to use quantitative criteria like those presented in this chapter.
In the control design phase, the controller structure must be determined, e.g. the decision
must be made whether a simple control structure with independent PI controllers is specified
and realized at fairly low cost, whether switching and gain scheduling of controllers has to be
considered, or whether a demanding (and costly) project on nonlinear model-based control of
the process should be set up. We call this second type of question controller structure
selection. Using the tools presented in this chapter, much can be said about the necessary type
of the controller before designing it.
The decisions on the control structure (the inputs and outputs used for control) and on the
controller structure usually have a much stronger influence on the resulting performance of the
process than the final design of the control algorithm. In 1943, Ziegler and Nichols [48]
already pointed out: "In the application of automatic controllers, it is important to realize that
controller and process form a unit; credit or discredit for results obtained are attributable to
one as much as the other. A poor controller is often able to perform acceptably on a process
which is easily controlled. The finest controller made, when applied to a miserably designed
process, may not deliver the desired performance."
The key problem in I/O-controllability analysis and control structure selection is that due to
the inherent complexity of feedback control, in particular the nonlinear dependence of the
closed-loop dynamics on the dynamics of the plant and of the controller, and due to the large
number of restrictions and specifications that have to be met, no exact prediction of the
controller performance is possible until a controller has been designed and tested on a
reasonably exact dynamic plant model. In the conceptual design phase, the effort to design and
test a controller usually cannot be invested, and in detailed engineering, the number of
possible structures also prohibits an exploration of all possibilities by comparison of controller
performance. In addition, in any controller design procedure, parameters must be chosen by
the designer (from simple time-domain specifications to weighting matrices in more advanced
approaches), and the result depends on the choices made by the designer, so it is not always
clear whether failure to meet the specification must be assigned to the control structure or to
the design method chosen or to the designer's choices.
Known approaches to this problem either use indicators of I/O-controllability (e.g. [22,
25]) or include the controller in the overall optimization. This has been done either by
parameterizing fixed controller structures (e.g. [2]) or by optimizing over the inputs [32]. The
assumption of a specific control structure has two disadvantages: firstly, it restricts the control
performance and, secondly, it renders the optimization problem non-convex and thus much
more difficult. The optimization of the inputs in an open-loop fashion, on the other hand,
does not reflect the issues of closed-loop stability and of robustness correctly.
Thus, there is clearly a need for a fast, but reliable assessment of I/O-controllability and of
potential control structures. This contribution focuses on the dynamic aspects of
controllability. It is tacitly assumed that a (possibly crude) performance specification in
standard control terms is available, i.e. a set of trajectories, disturbances, parameter
variations, and control objectives is specified. This requires knowledge of the process and the
factors that determine its optimal operation that has to be acquired in the design process or
from the operation of similar plants.
The chapter is organized as follows: In section 2, we first review basic results on I/O-
controllability of linear systems. In section 3, a new type of I/O-controllability index is
introduced, the Robust Performance Number. In section 4, I/O-controllability analysis by
optimization is presented. Sections 5 and 6 contain two case studies: an air separation plant
and a reactive distillation column where these tools are applied to select the best control
structure and quantify the process I/O-controllability. The evaluations of the control structures
are validated by simulations with low order controllers which can easily be obtained from the
analysis, in particular from the computed or estimated attainable performance of the chosen
structure, using the procedure described in [9, 42, 29]. So the construction of practically
relevant controllers of minimal complexity is seamlessly integrated with the analysis.
The tools proposed in this chapter are all based upon linear control theory and thus are only
suitable to analyze process controllability within a range of operation where the linearization
of the plant dynamics is justified.
In contrast to the concept of state controllability in control theory which refers to the
structural property of a system that the state vector can be driven to the origin in any desired
period of time by suitable inputs, we are here concerned with I/O-controllability of a plant
which characterizes the potential to control the outputs of the system by the available inputs.
Λ = G(0) ∘ (G(0))^-T,    (1)
where "∘" denotes element-by-element multiplication and M^-T denotes the transpose of the
inverse of a matrix M; thus
λij = gij · [G(0)^-1]ji.    (2)
λij is the ratio of the open-loop gain in channel j → i to the gain in this channel when all other
loops are perfectly controlled. The elements in each column and in each row of the RGA sum
up to 1. If one RGA value per column is close to 1 and the remaining ones are small, the
interaction in the plant is low and a decentralized control structure in which the inputs and
outputs corresponding to the single large entry in each column are paired will work
satisfactorily at least for small closed-loop bandwidths. Conversely, a decentralized control
structure where one of the corresponding RGA values is negative will suffer from the problem
that the controller settings that achieve a stable closed-loop system for decentralized
controllers with integral parts will give an unstable system if loops are opened [7, 18]. The
property that all control loops can be detuned by reducing the gain from the design value to
zero without causing instability is called decentralized integral controllability (DIC). The DIC
property is very desirable, since it allows each controller of a decentralized control
structure to be put into and taken out of operation separately. As DIC considers only the
existence of a controller with integral action, it depends only on the plant and the chosen
pairings. Pairings with negative RGA are not DIC.
Although the RGA is widely used, the loop pairing proposed by the RGA-analysis is not
always the best one. One problem is that the RGA may change considerably after one loop is
closed, so that other pairings are more favourable with this loop closed than when feedback is
not taken into account (see e.g. [19]). Even for 2x2 problems, the criterion that pairings with
negative RGA must be avoided is not always correct. Arbel et al. [1] show that for the
fluidized catalytic cracker, the opposite pairing is preferable to the one proposed by the RGA
because it gives faster and physically more sensible control and the disadvantage of loss of
stability when one loop fails does not matter too much because of the presence of skilled
operators and the slow reaction of the loop which becomes unstable.
Condition Number
The Euclidean condition number γ of a matrix is defined as the ratio of its maximal and
its minimal singular values, i.e.,

γ(G) = σ_max(G) / σ_min(G).   (3)
In numerical analysis, the condition number measures the sensitivity of the inverse of a
matrix (provided it is finite, otherwise no inverse exists). As in decoupling control the
controller implicitly or explicitly inverts the plant dynamics, a large condition number may
lead to robustness problems of the closed-loop system. The value of y however gives no
conclusive information about the I/O-controllability of the system since all systems can be
scaled to get a very large condition number. On the other hand, the minimal attainable
condition number depends on system characteristics only. Thus, for control purposes the
minimized condition number over all diagonal scaling matrices is more useful to analyze the
I/O-controllability of the system. The minimized condition number is defined by
γ*(G(0)) = min_{L,R} γ(L G(0) R),   (4)

where L and R range over all diagonal scaling matrices.
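The minimization over diagonal scalings in Eq. (4) has no closed form in general. A numerical sketch (the example matrix is assumed, and Nelder-Mead over log-scalings is one simple choice, not necessarily the method used in the literature cited here):

```python
import numpy as np
from scipy.optimize import minimize

def cond(G):
    """Euclidean condition number, Eq. (3)."""
    s = np.linalg.svd(G, compute_uv=False)
    return s[0] / s[-1]

def minimized_cond(G):
    """Numerical sketch of Eq. (4): minimize cond(L G R) over positive
    diagonal scalings, parametrized by log-entries (first entry of L is
    fixed to 1 to remove the redundant overall scaling)."""
    n, m = G.shape
    def obj(x):
        L = np.diag(np.exp(np.concatenate(([0.0], x[:n-1]))))
        R = np.diag(np.exp(x[n-1:]))
        return cond(L @ G @ R)
    res = minimize(obj, np.zeros(n - 1 + m), method="Nelder-Mead",
                   options={"xatol": 1e-12, "fatol": 1e-12, "maxiter": 5000})
    return res.fun

G0 = np.array([[2.0, 1.5],
               [1.0, 2.0]])     # assumed example gain matrix
c0 = cond(G0)
gs = minimized_cond(G0)
print(c0, gs)                   # ~4.27 vs ~4.16 (scaling-independent value)
```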
Dynamic analysis
By inserting the frequency response matrix G(jω) instead of the gain matrix, the minimized
condition number and the RGA can be generalized to dynamic quantities γ*(ω) and Λ(ω).
Using the dynamic minimized condition number and the dynamic RGA, the quantitative
analysis of the I/O-controllability of a particular plant configuration can be refined
considerably. It has been shown (cf. [21]) (and it corresponds to the standard wisdom in
controller design) that the condition number and the RGA are much more important at the
gain crossover frequency ω_g of the open-loop system than at ω = 0 or at frequencies far above
the bandwidth of the closed-loop system.
If the condition number is high at ω = 0 but then decreases to small values, dynamic control
with a sufficiently large bandwidth can be possible without encountering problems with
directionality and sensitivity [17]. However, a large condition number at low frequencies
indicates that large inputs are needed to shift the steady state of the output vector in some
direction. Also, the fact that the controlled plant becomes unstable if control loops are opened
when the corresponding RGA-values are negative remains true.
The RGA is a scaling-independent measure of coupling and of plant directionality. It is
related to the minimized condition number γ* by [26]

Σ_{i,j} |λ_ij| ≤ γ*(G) + 1/γ*(G),   (5)

where λ_ij is the element ij of the RGA. For 2 × 2 systems Eq. (5) reduces to an identity. Thus if
the RGA has large elements, γ* will be large and the gain matrix is ill-conditioned. The RGA
thus provides not only a measure of the couplings in the plant but also of the directionality of
the process.
Δy(t) = M ∫_0^t h(τ) dτ   (6)

This equation is obtained using the convolution theorem for step inputs with the maximal
input intensity M, where h denotes the impulse response. From Eq. (6), the minimal rise time and thus the bandwidth of the closed-
loop system can be determined if an assumption on the size of the disturbance for which the
control loop should behave linearly is made.
In the multivariable case with differently effective inputs, the estimation of the appropriate
bandwidth is difficult in general. One may resort to a simulation of a time-optimal controller
or a constrained LQ-MPC controller to estimate the attainable rise times.
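For a single channel the idea can be illustrated analytically. Assuming a first-order channel with gain K and time constant τ (all numbers here are hypothetical), the fastest output change under the input bound M follows from the step response, and the minimal rise time gives a rough bandwidth estimate:

```python
import numpy as np

# Sketch: minimal rise time of a single channel y(s) = K/(tau*s + 1) * u(s)
# driven at the maximal admissible input magnitude M. All values assumed.
K, tau = 2.0, 5.0     # channel gain and time constant [min]
M = 1.0               # input bound
dy_req = 1.5          # required output change (disturbance-size assumption)

# Full-input step response: y(t) = M*K*(1 - exp(-t/tau)).
# Feasible only if dy_req < M*K; solving y(t_r) = dy_req gives t_r.
t_r = -tau * np.log(1.0 - dy_req / (M * K))
print(t_r)        # ~6.93 min: fastest possible rise time
print(1.0 / t_r)  # crude estimate of the attainable closed-loop bandwidth
```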
The Robust Performance Number (RPN) was introduced by Trierweiler [41] and
Trierweiler and Engell [39] as a measure to characterize the I/O-controllability of a system.
The RPN indicates how potentially difficult it is for a given system to achieve a decoupled
closed-loop system with a pre-specified performance in the individual loops robustly. The
RPN is influenced by both the desired performance of the closed-loop system and its degree of
directionality.
T_d(s) = (1 − ε) / ( (s/ω_n)^2 + 2ζ(s/ω_n) + 1 )   (7)

where ε is the tolerated offset (steady-state error). The parameters of Eq. (7), ω_n (undamped
natural frequency) and ζ (damping ratio), can be easily calculated from the step response of a
second order system. A slightly more general approach is to include a zero in the numerator
which provides an additional degree of freedom as it is needed e.g. if the plant is open-loop
unstable (see e.g. [11,41]).
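Given a desired overshoot and rise time, ζ and ω_n for Eq. (7) can be backed out from standard second-order relations. A sketch (the rise-time polynomial is a common textbook approximation, an assumption rather than the authors' exact procedure):

```python
import numpy as np

def second_order_params(overshoot, rise_time):
    """Back out zeta and omega_n of Eq. (7) from a desired overshoot and
    10-90% rise time. The rise-time fit is a standard approximation,
    assumed here rather than taken from the chapter."""
    lo = np.log(overshoot)
    zeta = -lo / np.sqrt(np.pi**2 + lo**2)          # exact overshoot relation
    wn = (1.76*zeta**3 - 0.417*zeta**2 + 1.039*zeta + 1.0) / rise_time
    return zeta, wn

# 10 % overshoot and 3 min rise time (the specification used later in Table 3).
zeta, wn = second_order_params(0.10, 3.0)
print(zeta, wn)   # ~0.591, ~0.611 rad/min
```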
For the MIMO case, a straightforward extension of this specification is to prescribe a
decoupled response with possibly different parameters for each output, i.e.,
T_d = diag(T_{d,1}, …, T_{d,n_o}), where each T_{d,i} corresponds to a SISO time-domain specification.
For internal stability in the presence of an RHP zero z of G(s) with output direction y_z, the interpolation constraint

y_z^H T(z) = 0   (8)

must be satisfied.
When the plant G(s) is asymptotically stable and has at least as many inputs as outputs,
G(s) can be factored as G(s) = B_{O,z}(s) G_m(s), where B_{O,z}(s) is the output Blaschke factorization for the
RHP zeros of G(s) (for the definition of the Blaschke factorization and an algorithm to calculate
it, see e.g. [20, 41]). One possibility to specify a closed-loop transfer function T that satisfies
the interpolation constraint Eq. (8) for an arbitrary specified performance T_d(s) is

T(s) = B_{O,z}(s) B_{O,z}^†(0) T_d(s)   (9)

where B_{O,z}^† denotes the pseudo-inverse of B_{O,z}, and B_{O,z}(0) B_{O,z}^†(0) = I. It is easy to verify that
Eq. (9) implies Eq. (8). T(s) then is different from the original desired transfer function matrix
T_d(s), but it has the same singular values. The factor B_{O,z}^†(0) ensures that T(0) = T_d(0) so that
the steady-state characteristics (usually T_d(0) = I) are preserved. Similar expressions to
Eq. (9) can be developed for systems with RHP poles, pure time delays, and several different
combinations of the three types of non-minimum phase elements [39, 41].
S = 1/(1 + k(s)) · I,   T = k(s)/(1 + k(s)) · I.   (10)
The structured singular value for the RP problem at frequency ω can then be computed as
[47]

μ_RP = [ |w_s s|^2 + |w_t t|^2 + |w_s s| |w_t t| (γ(G) + 1/γ(G)) ]^{1/2}   (11)

where γ(G) is the condition number of the plant. When the nominal performance and robust
stability conditions are satisfied, |w_s s| and |w_t t| will be smaller than 1, so (11) essentially
is determined by

μ_RP ≈ [ |w_s s| |w_t t| (γ(G) + 1/γ(G)) ]^{1/2}.   (12)
Now, if we assume w_s and w_t equal to 1, Eq. (12) essentially states that the condition
number of the plant is only critical in the frequency region where neither S nor T is small,
i.e., in the crossover frequency range.
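This statement can be checked numerically. Assuming a scalar loop transfer k(s) = ω_c/s, unit weights, and a fixed condition number (all values illustrative), the quantity in Eq. (12) indeed peaks at crossover and rolls off on both sides:

```python
import numpy as np

# Numerical check: with w_s = w_t = 1 the plant condition number matters
# only near crossover. Assumed scalar loop k(s) = wc/s and fixed gamma = 10.
wc, gamma = 1.0, 10.0
w = np.logspace(-3, 3, 601)
s_fun = 1j*w / (1j*w + wc)   # sensitivity of the scalar loop
t_fun = wc / (1j*w + wc)     # complementary sensitivity
mu = np.sqrt(np.abs(s_fun) * np.abs(t_fun) * (gamma + 1.0/gamma))
print(w[np.argmax(mu)])      # peak sits at the crossover frequency wc = 1
print(mu.max())              # sqrt(0.5*(gamma + 1/gamma)) ~ 2.25
```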
Using Eq. (11) or (12) as a controllability measure has three drawbacks. First, Eq. (11)
results from assuming a full-block input uncertainty with a scalar weighting function w_t. This
description can be very conservative. To reduce this conservativeness, the system should be
scaled before the determination of the weighting function w_t. This is to some extent
equivalent to substituting γ(G) by γ*(G), the minimized condition number, in (11). Secondly,
the desired performance is represented only indirectly in Eq. (11) by the weighting function w_s.
Finally, it can be applied only to minimum phase models.
Instead of using weighting functions, it is more intuitive and useful to relate the
controllability measure to the desired closed-loop performance directly. These ideas led to the
definition of the Robust Performance Number (RPN) as described in the next section.
The RPN (Γ) is defined as

Γ(G, T, ω) = σ_max([I − T]T) [ γ*(G(jω)) + 1/γ*(G(jω)) ]^{1/2},   (13a)

RPN = sup_ω Γ(G, T, ω),   (13b)

where γ*(G(jω)) is the minimized (with respect to scaling) condition number of G(jω) and
σ_max([I − T]T) is the maximal singular value of the product of the attainable desired sensitivity
[I − T] and the corresponding attainable (output) complementary sensitivity transfer matrix
T, which for systems with RHP zeros is given by Eq. (9).
The RPN consists of two factors:
1. σ_max([I − T]T) - It is well-known that the system uncertainties close to the crossover
frequency are more important for robust stability and robust performance than the
uncertainties at low and high frequencies (see e.g. [34]). The low and high frequency
regions are less important for the closed-loop feedback properties because at low
frequencies a high loop gain guarantees good properties of the loop in the presence of
moderate model uncertainties and at high frequencies very small loop gains rule out
stability problems in this frequency band. The function σ_max([I − T]T) emphasizes the
crossover frequency range, since it has its peak value in this region and rolls off for high
and for low frequencies. The choice of T depends on the desired closed-loop bandwidth,
which is limited by sensor noise, input constraints, and in particular by the non-minimum-
phase part of G, i. e., RHP zeros, RHP poles, and pure time delays.
2. γ*(G) + 1/γ*(G) - The origin of this term is the result on the robust performance (RP) of
inverse-based decoupling controllers stated in section 3.3. Eq. (11) shows that μ for RP is
a direct function of √(γ(G) + 1/γ(G)) multiplied by weighting functions that represent the
performance and uncertainty terms. To make it insensitive to scalings, the condition number
is substituted by the minimized condition number.
The RPN is a measure of how potentially difficult it is for a given system to achieve a
decoupled closed-loop behavior with the specified bandwidths in the individual loops
robustly. The corresponding controller will exhibit good performance robustness only when
the RPN is small. A good control structure is one with a small (< 5) RPN [39, 41].
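A frequency-gridded sketch of the RPN computation for a hypothetical 2×2 plant. For 2×2 systems the sum of the RGA magnitudes equals γ* + 1/γ* (cf. Eq. (5)), which avoids a scaling optimization at every frequency; the desired T_d is used here in place of the attainable T of Eq. (9), i.e. the Blaschke correction is omitted:

```python
import numpy as np

def G(s):
    """Hypothetical 2x2 plant (assumed model, for illustration only)."""
    return np.array([[2.0/(5*s + 1), 1.5/(5*s + 1)],
                     [1.0/(8*s + 1), 2.0/(8*s + 1)]])

def Td(s, wn=0.6, zeta=0.59):
    """Desired decoupled second-order response, cf. Eq. (7) with zero offset."""
    td = wn**2 / (s**2 + 2*zeta*wn*s + wn**2)
    return np.eye(2) * td

w = np.logspace(-3, 2, 400)
gamma_curve = []
for wk in w:
    s = 1j * wk
    Gk, Tk = G(s), Td(s)
    # For 2x2 plants: sum_ij |lambda_ij| = gamma* + 1/gamma*, cf. Eq. (5).
    rga_sum = np.abs(Gk * np.linalg.inv(Gk).T).sum()
    sv = np.linalg.svd((np.eye(2) - Tk) @ Tk, compute_uv=False)
    gamma_curve.append(sv[0] * np.sqrt(rga_sum))
rpn = max(gamma_curve)   # sup over frequency
print(rpn)               # ~2.4 for this example
```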
RPN_MIN = √2 · sup_ω σ_max([I − T_d]T_d)   (14)

RPN_MIN and Γ_MIN are only functions of the desired performance, i.e. of T_d. The minimum
possible condition number is γ*(G(jω)) = 1; thus the minimum possible value for
γ*(G) + 1/γ*(G) is 2, leading to Eq. (14). Note that in the computation of the RPN according to
Eq. (13a, b) the attainable specification Eq. (9) enters, while in Eq. (14) the desired
specification T_d is used. The multiplication with the Blaschke product does not affect the
singular values of T_d, but strongly affects the term [I − T_d]. The difference of RPN and RPN_MIN
results from this modification of the sensitivity transfer matrix. In the ideal situation, the RPN
shows a peak slightly above 1 around the gain crossover frequency, whereas the modification
due to the existence of RHP zeros causes a broad range of large values of the RPN between
the attainable and the specified bandwidth, thus alerting the designer to the fact that the
desired performance cannot be attained. Fig. 2 shows an example of RPN and RPN_MIN plots.
The difference of the attainable and the specified performance can be measured by the
relative RPN (rRPN) as introduced by Trierweiler in [38, 43]. The definition of rRPN is

rRPN = (A − A_MIN) / A_MIN,   (15a)

where

A = ∫_{ω_min}^{ω_max} Γ(G, T, ω) d log ω,   A_MIN = ∫_{ω_min}^{ω_max} Γ_MIN(T_d, ω) d log ω.   (15b)
Fig. 3 gives a graphical interpretation of the areas A_MIN and A. Note that the areas are
calculated for a given frequency range, [ω_min, ω_max], on a logarithmic scale. The frequency
range must be large enough to capture the important areas.
Typically, rRPN should not be larger than 1, while small values of rRPN indicate an
attainable specification with respect to the desired bandwidth.
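The areas A and A_MIN of Eq. (15) are plain trapezoidal integrals on a log-frequency grid. A sketch with placeholder RPN curves (the curve shapes are purely illustrative, not data from the chapter):

```python
import numpy as np

# Sketch of Eq. (15): trapezoidal areas under RPN curves on a log(w) axis.
w = np.logspace(-3, 2, 400)
u = np.log10(w)
rpn_curve = 2.0 + np.exp(-(u + 0.5)**2)        # "attainable" curve (assumed)
rpn_min_curve = 2.0 + 0.5*np.exp(-u**2)        # "minimal" curve (assumed)

def area(y, u):
    """Trapezoidal integral of y d(log10 w)."""
    return float(np.sum(0.5*(y[1:] + y[:-1]) * np.diff(u)))

A, A_min = area(rpn_curve, u), area(rpn_min_curve, u)
rRPN = (A - A_min) / A_min
print(A, A_min, rRPN)   # small rRPN: specification easily attainable
```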
Fig. 2. An example of an RPN plot (solid line) and an RPN_MIN plot (dashdot line). Note that
the frequency axis is logarithmic; -4 means 10^-4.
Fig. 3. Schematic representation of A_MIN and A − A_MIN. The frequency axis is logarithmic.
The closed-loop transfer matrix from the external inputs w to the external outputs z can be written as

H_zw(s) = T_11(s) + T_12(s) Q(s) T_21(s),   (16)

where the stable transfer matrices T_11(s), T_12(s) and T_21(s) are determined by the plant and the Youla
parameter Q(s) is a free stable transfer matrix. In the optimization, the Youla parameter Q(s)
is expressed as a finite series expansion in terms of suitable fixed transfer matrices q_i(s) and
variable coefficients x_i [4]. In the evaluation of the cost function, the time-domain equivalent
of (16) is needed, which for better numerical efficiency can be reformulated via Laplace
transforms to avoid time-intensive computations during the optimizations (see e.g. [44, 45]).
The advantage of this formulation is that it is linear in the unknown x_i, and all other quantities
can be computed before the iterations performed in the optimization process. This leads to

H_zw(x, s) = T_A(s) + Σ_i x_i T_{B,i}(s).   (17)
The transfer matrices T_A and T_{B,i} are computed from the transfer matrices of the plant and
the base functions in the series expansion of the Youla parameter. Eq. (17) is a complete and
convex description of all possible input-output behaviours of stable linear closed loops with
the given plant. H_zw(x, s) depends on the choice of the external inputs w(t) and the external
outputs z(t) as well as on the signals used for feedback and the available control inputs.
The optimization problem described above is very general and can be formulated to include
constraints on time and frequency responses and actuator limitations to ensure robustness to
modelling errors. Such a general formulation however requires the use of general purpose
algorithms, e.g. cutting plane algorithms. This leads to large computation times which are not
desirable in a combinatorial problem like the one at hand. For the sake of numerical efficiency,
we restrict the formulation of the optimization problem for the purpose of control structure
selection to linear constraints and a quadratic objective function. Robustness is checked in a
subsequent step in the selection procedure by using the RPN.
Using the integral squared error of the external outputs z(t) for step changes of the
individual set-points r(t) of the inputs w(t) as the objective function and the weighted sum of
all individual criteria, the optimization problem can be written as:

min_x ∫_0^∞ (H_zw(x, t) − r(t))^T (H_zw(x, t) − r(t)) dt.   (18)
Additional requirements like input saturation or steady-state accuracy can be formulated as
additional linear constraints.
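Because H_zw is affine in the coefficients x (Eq. (17)), minimizing the ISE objective (18) discretized on a time grid is a linear least-squares problem. A sketch with placeholder basis responses (all response data below are hypothetical stand-ins for sampled step responses of T_A and the basis terms T_B,i of one channel):

```python
import numpy as np

# Sketch of Eq. (18): with H_zw affine in x, the ISE on a time grid is
# standard linear least squares. All response data here are placeholders.
t = np.linspace(0.0, 20.0, 401)
dt = t[1] - t[0]
tA = 1.0 - np.exp(-t)                               # assumed nominal response
tB = np.stack([np.exp(-t) - np.exp(-2.0*t),         # two assumed basis
               t * np.exp(-t)], axis=1)             # functions q_i(s)
r = np.ones_like(t)                                 # unit set-point step

x, *_ = np.linalg.lstsq(tB, r - tA, rcond=None)     # min_x ||tA + tB x - r||^2
ise0 = np.sum((tA - r)**2) * dt                     # ISE with x = 0
ise = np.sum((tA + tB @ x - r)**2) * dt             # optimized ISE
print(x, ise0, ise)                                 # ise <= ise0 by construction
```

Linear constraints on time or frequency samples (input saturation, steady-state accuracy) turn this into a quadratic program with the same quadratic objective.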
of one of the structures with RHP zeros between 0.5 and 0.05 implies a bandwidth limitation
which has to be compared to the bandwidth that results from other factors, e.g. actuator
limitations. Table 1 summarizes these results.
Table 1
RHP zeros

Inputs                                                Smallest RHP zero [1/min]
(1,2,3), (1,3,6), (1,4,6), (2,3,4), (2,3,5), (2,3,6)  real part < 0.05, complex conjugate
(1,2,5), (1,3,5), (1,4,5), (1,5,6), (2,4,5), (2,5,6),
(3,4,5), (3,5,6), (4,5,6)                             0.05 < real part < 0.5
(1,2,6), (2,4,6), (3,4,6)                             real part > 0.5
(1,2,4), (1,3,4)                                      no RHP zero
Table 2
Magnitudes of the RGAs at the frequencies ω = 0 and ω = 1

Combination (1,2,4):
|RGA|(ω=0) = [0.8218 0.0452 0.2234; 1.6588 3.0902 2.4313; 1.4806 4.1353 1.6547]
|RGA|(ω=1) = [0.9865 0.0053 0.0087; 0.0512 0.1475 1.0959; 0.0374 1.1400 0.1097]
→ pairing G_142

Combination (3,5,6):
|RGA|(ω=0) = [0.0566 0.0123 1.0443; 1.0881 0.0228 0.1107; 0.0317 0.9651 0.0664]
|RGA|(ω=1) = [0.0002 0.0167 1.0161; 0.4348 0.9503 0.0162; 0.9359 0.4388 0.0045]
→ pairing G_635 for ω = 0 and G_653 for ω = 1
At this stage, the RGA can therefore only be used to eliminate structures with uniformly
unfavourable RGA-values. The RGA at the frequency ω = 0 was also used to eliminate
structures with negative elements on the diagonal, which are not listed here for lack of
space. From the minimized condition numbers and the RGA-analysis, the most promising
combination is (3,5,6). Note that this combination has RHP zeros at 0.11 and 0.66. Depending
on the desired closed-loop performance, the RHP-zero at 0.11 can considerably constrain the
performance of the control scheme. This structure should be selected only if a moderate
bandwidth is specified.
performance. Depending on the desired closed-loop bandwidth, the decision can be different.
This can easily be done using the RPN and rRPN analyses. Table 3 summarizes the RPN
analysis for a representative subset of possible control structures.
In Table 3, the RPN was calculated considering a desired performance of 3 min rise time
and 10% overshoot in all channels. For this desired performance, the control structures
CS_142, CS_642, and CS_635 have the best RPN values (i. e. the smallest RPN). Structures
such as CS_632 and CS_132 with large RPN values can already be disregarded. But based
only on the RPN, we cannot decide which one of the remaining control structures is best
suited. As shown in Figure 5, the directionality (i.e. γ*(G(jω))) of CS_142 and CS_642
becomes similar to CS_635 in the middle and high frequency ranges. In the low frequency
range, CS_142 and CS_642 have a strong directionality. It is preferable to use feedback in a
region with low directionality, since the input uncertainty then can be ignored and an inverse-
based decoupling controller can be applied successfully. For the air separation plant, the
possible closed loop bandwidth is mainly limited by the RHP-zeros. To decide which is the
best of the remaining control structures, it is necessary to analyze the rRPN. To illustrate the
use of the relative RPN, the structures 142 and 635 are analyzed for three different desired
closed-loop bandwidths below.
Table 4 shows the values of RPN and rRPN calculated for CS_142 using several values of
the rise time and 10% overshoot. The corresponding RPN and RPN_MIN plots are shown in
Figure 6. Based on these results, it can be concluded that for CS_142 the faster the closed
loop, the better the system performance. The closed-loop response is only limited by
saturation of manipulated variables.
Table 3
RPN and RHP zero(s) of different control structures
Structure RPN RHP zeros
(A) (1,4,2) 1.7 -
(B) (6,4,2) 2.2 0.66
(C) (6,3,5) 2.4 0.1081,0.66
(D) (6,3,2) 6.3 0.025, 0.027±0.13j, 0.66
(E) (1,3,2) 6.7 0.0244, 0.025±0.12j
Fig. 6. RPN plot (solid lines) and RPN_MIN plot (dashed lines) for structure 142 calculated
using several values of the rise time and 10% overshoot in each channel.
Table 4
RPN and rRPN indices for structure 142
Rise
Time [min] RPN rRPN
3 1.74 0.36
20 1.89 0.75
60 2.86 1.19
Similarly to Table 4, Table 5 shows the values of RPN and rRPN calculated for the
structure 635 for several values of the rise time and 10% overshoot in each loop. The
corresponding RPN and RPN_MIN plots are shown in Fig. 7. Based on these results, it can be
concluded that for the structure 635 the faster the closed loop, the more unrealizable the
desired performance. Here, the closed-loop performance is limited by the RHP zero at 0.11.
Note that all peaks of the RPN curves are in the frequency range around the RHP zero, i.e., at
ω ≈ 0.1. If the peak of the desired performance (i.e., the peak of σ_max([I − T_d]T_d)) is above this
frequency, the RPN curve shows a flat region up to the peak of the desired performance.
Based on the rRPN analysis, we can conclude that the best control structure for fast closed-
loop responses is CS_142 (rRPN = 0.36 for 3 min rise time), whereas for slow closed-loop
responses, the control structure CS_635 will produce a better performance, since for 60 min
rise time the corresponding rRPN is smaller (rRPN = 0.43 for CS_635 and rRPN = 1.19 for
CS_142, cf. Tables 4 and 5).
Fig. 7. RPN plot (solid lines) and RPN_MIN plot (dashdot lines) for structure 635 calculated
using several values of the rise time and 10% overshoot in each channel.
Table 5
RPN and rRPN indices for structure 635
Table 6
Attainable performance properties (rise time T_R and overshoot O) of the selected control structures
Fig. 8. Attainable performances of control structure 142 (upper left), 143 (upper right), 642
(lower left) and 643 (lower right). Dotted: off-diagonal elements.
Fig. 9. Simulation of CS_142 with full PI controller (left) and decentralized PI controller
(right). Dotted: off-diagonal elements.
Fig. 10. Simulation of G_642 (left) and G_643 (right) with full PI controller. Dotted: off-diagonal
elements.
low to achieve the desired product concentration. The superiority of this process over a
conventional one results from the breaking of the azeotrope of methyl acetate and methanol by
the presence of the other components which makes it possible to achieve product
concentrations above the azeotrope.
The goal of the control structure analysis is to determine which measurable variables
should be controlled in order to keep the semi-batch process on its designed nominal
trajectory. An optimal trajectory for the operation of this column was determined by
optimization studies [14]. Although the process is operated in batch mode, a quasi-stationary
point in the middle of the batch can be taken as the nominal operating point. This is justified
because the drift due to the batch operation is small for most of the batch run and can be
treated as a small model uncertainty, except at the beginning and at the end of the batch.
During these periods, a linear controller cannot control the process as the gain changes its
sign.
The linearized plant model used in this analysis was derived from a rigorous dynamic
nonlinear model by linearization around a stationary point. The linear model has 85 state
variables and there are two manipulated variables: the reflux ratio (nominal value R_s ≈ 0.597)
and the reboiler heat duty (nominal value Qs = 3460 W). Under these conditions a product
concentration of about 80% results. The dynamic behavior of the heating system is
approximated by a first order model. Possible controlled variables are the compositions in the
reflux and in the reboiler, the condenser molar flow and several temperatures along the
column and in the reboiler. The measurements and their numbering are summarized in Table
7.
Table 7
Available measurements
Number Measurement Part of column
1 mole frac. HAc condenser/reflux
2 mole frac. MeOH condenser/reflux
3 mole frac. MeAc condenser/reflux
4 mole frac. H2O condenser/reflux
5 mole frac. HAc reboiler
6 mole frac. MeOH reboiler
7 mole frac. MeAc reboiler
8 mole frac. H2O reboiler
9,...,12 temperature separation packing
13,...,16 temperature catalytic packing 1
17,...,20 temperature catalytic packing 2
21 temperature reboiler
22 molar flow rate condenser
Table 8
Input constraints
Variable              Constraint
Reflux ratio          0.4 ≤ R ≤ 0.8
Reboiler heat duty    2 kW ≤ Q ≤ 5 kW
Table 9
Attainable performance properties (rise time T_R and overshoot O)

Structure   Output   T_R [h]   O [%]   Comments
{3,22}      y_1      0.17      9
            y_2      0.02
{3,10}      y_1      0.23      2
            y_2      0.08      4
{3,11}      y_1      0.23      3
            y_2      0.05      2       Coupled
Table 10
Maximum RPN values
Structure   RPN_max
{3,10} 1.51
{3,11} 1.60
Fig. 12. Simulation of the controller for a set-point step of the methyl acetate mole fraction (a)
and of the temperature in the upper separation tray (b), compared to the optimal performance.
The plots are scaled (methyl acetate: 1 = 5% relative change; (absolute) temperature: 1 = 1%
relative change).
Fig. 13. Simulation of the manipulated variables for (a) a set-point step of the methyl acetate
concentration and (b) of the temperature in the upper separation tray compared to the optimal
inputs
From these results it can be concluded that the combinations of controlled variables
{3,22}, {3,10} and {3,11} yield the best closed-loop performance. These combinations allow
very fast rise times, and approximate decoupling of the closed-loop system can be achieved.
All other structures give slower responses and stronger couplings and are therefore rejected.
The structure {3,22} shows a good dynamic behavior, but this is due to the fact that the vapor
stream is determined by the heat duty in a nearly proportional manner. Controlling the vapor
stream by the heat input merely replaces one degree of freedom by another and does not provide
closed-loop control of the process itself. This structure is therefore discarded.
In the next step, the RPN values were calculated for the two remaining candidate structures.
The attainable performance from the previous step was used as the attainable desired behavior
in the RPN calculation. As shown in Table 10, the RPN values of the remaining structures are
small and thus no robustness problems due to the directionality of the plant are expected.
To verify the predictions of the control structure selection procedure, controllers were
designed for the favourable combinations and the closed-loop behaviours were simulated. The
controllers were obtained by applying the frequency response approximation method [9, 42].
As the desired performance in the frequency response approximation, the results of the
attainable performance calculation listed in Table 9 were used. Full multivariable controllers
with simple elements (first order numerators and second order denominators) were computed.
The closed-loop simulations are shown in Figs. 12 and 13 for the structure {3, 11}. The
simulations confirm the predictions made before. The control structure nearly achieves the
optimal performance with a relatively simple controller. The constraints are not violated.
Table 11
Parameter changes and disturbances
Disturbance                1st value   2nd value   Time
Factor reaction rate [-]   1           0.5         3 h
Heat loss per stage [W]    0           10          4.5 h
Feed [mmol/s]              0           -10         6 h
Heat duty [kW]             0           -1          7.5 h
Reflux [-]                 0           -0.1        9 h
Fig. 14 shows the disturbance rejection of the developed linear controller when applied to
the rigorous nonlinear model for the regulation of the methyl acetate concentration in the
reflux and the temperature at the upper separation packing. The disturbance scenario
considered in the simulation is listed in Table 11.
The linear controller is able to deal with the model uncertainties well. For brevity, set-point
tracking with the controller is not shown here but it can be stated that the controller is able to
deal with significant changes in the set-points over the complete batch.
7. CONCLUSIONS
Fig. 14. Performance of the developed controller when tested at the nonlinear model. For the
disturbances considered, see Table 11.
The predictions made by the RPN indices for the two examples studied were confirmed by
closed-loop simulations with linear and nonlinear plant models. The RPN methodology is also
suitable to tune Model Predictive Controllers [37] and multivariable controllers in general. All
these methods and the computation of the RPN indices are implemented in the RPN Toolbox
([13], http://www.enq.ufrgs.br/rpn/).
The optimization of the attainable performance over all stabilizing linear controllers can be
used to refine the results of the RPN analysis by taking input constraints into account and by
exploring the coupling structure in more detail. In contrast, an optimization over fixed
controller structures is restrictive and creates non-convexities of the optimization problem. It
should therefore only be used as the final step to quantify the benefits of different controller
structures for the chosen control structure. The examples also showed that the controller order
and structure can have a strong impact on the achievable closed-loop performance.
From the computation of the attainable performance or the estimation of the attainable
bandwidth based upon the rRPN, a structured low order controller can easily be computed
using frequency response approximation [9, 29, 42]. The selection of the control structure is
thus seamlessly integrated with the construction of a practical linear controller that achieves
the specified performance. The RPN analysis ensures that the controller is robust against plant
uncertainties even though it decouples the closed-loop system and partly inverts the plant
dynamics.
RHP transmission zeros and dead times are important characteristics of a process which
put fundamental limitations on the performance of the controlled process. Process design
procedures should explore this aspect in the design phase, and where possible a design should be
preferred in which the RHP zeros are far away from the origin or disappear entirely.
Another aspect that should be considered in the design phase is that the process may be ill-
conditioned, which usually occurs if two or more manipulated variables have almost the same
effect on all controlled variables or when the controlled variables are strongly correlated with
each other. Process designs should avoid these situations.
Output constraints always exist for a given process. Most of them are just a consequence of
input constraints that limit the achievable output values and in the design phase they can be
removed by a change in the input ranges (e.g. a new pump or valve) or a process modification
(e.g. by an increase in the number of column trays, an increase of the heat transfer area in a
heat exchanger, etc.). Another type of output constraints is due to the second law of
thermodynamics, which determines the direction that some processes will take. For example,
the energy flow in a heat exchanger is from the hot to the cold stream, and it will not be
possible to transfer energy from one medium to the other if the temperature difference is zero.
Therefore, the second law of thermodynamics is responsible for an irremovable output
constraint. Moreover, the difficulty to move in the direction of the thermodynamic limit
increases with the closeness to this point, whereas it becomes increasingly easy to shift the
operating point in the opposite direction. An illustrative example is a separation process where
the necessary energy to improve the purity of a product (i.e. to improve the degree of
organization of the system) increases with the purity of the product.
This difference in the ease of moving in one direction compared to another is a further factor
that is responsible for a large condition number of a system, which occurs when the inputs are
aligned with the "second law of thermodynamics" direction. For example, the temperature of the
hot stream will always be above the temperature of the cold stream at the same point in a
countercurrent heat exchanger. When the heat transfer is very effective such that the two
outlet temperatures are almost the same, it will be difficult to make one outlet stream hotter
and the other colder (this is the weak or difficult or output low-gain direction of the plant),
whereas we may easily make them both hotter or colder (this is the strong or easy or output
high-gain direction of the plant). Another example is a high-purity distillation column
operating with reflux L and boil-up V as independent variables (LV-structure). In this case the
low-gain direction is related to manipulations of L and V in the same direction (i.e., ΔL ≈ ΔV)
corresponding to a simultaneous variation of both product purities. The high-gain direction
consists of manipulations of L and V in opposite directions (ΔL ≈ −ΔV) which increases only
one product purity by decreasing the other one. Designing the plant "at the limit" thus
inherently causes I/O-controllability problems and this tradeoff should be kept in mind in the
design process.
The results provided by the tools for input-output-controllability analysis presented here
therefore give important hints for possible changes of the plant design, beyond the selection of
control structures and controller structures.
REFERENCES
[1] A. Arbel, I.H. Rinard and R. Shinnar, Ind. Eng. Chem. Res., 36 (1997) 747.
[2] V. Bansal, R. Ross, J.D. Perkins and E.N. Pistikopoulos, Proc. IFAC Symposium DYCOPS 5 (1998) 716.
[3] S. Boyd and C.A. Desoer, IMA J. Math. Contr. Info., 2 (1985) 153.
[4] S.P. Boyd and C. Barratt, Linear Controller Design: Limits of Performance, Prentice Hall, Englewood Cliffs, 1991.
[5] E.H. Bristol, IEEE Trans. Automat. Contr., 11 (1966) 133.
[6] J. Chen, IEEE Trans. Automat. Contr., 42 (1997) 1037.
[7] M.S. Chiu and Y. Arkun, Ind. Eng. Chem. Res., 29 (1997) 269.
[8] S. Engell and G. Fernholz, Chem. Eng. Process., 42 (2003) 201.
[9] S. Engell and R. Muller, Proc. 2nd European Control Conference, (1993) 1715.
[10] S. Engell, IFAC Symposium on New Trends in Design of Control Systems, (1997) 61.
[11] S. Engell, Optimal Linear Control, Springer, Heidelberg, Berlin, 1988.
[12] S. Engell, Proc. IEE International Conference Control, (1988) 253.
[13] L.A. Farina, RPN-Toolbox: Uma Ferramenta para a Seleção de Estruturas de Controle, Master Thesis, Universidade Federal do Rio Grande do Sul, Porto Alegre, 2000 (http://www.enq.ufrgs.br/rpn).
[14] G. Fernholz, S. Engell, L.-U. Kreul and A. Gorak, Comput. Chem. Eng., 24 (2000) 1569.
[15] G. Fernholz, W. Wang, S. Engell, K. Fougner and J. Bredehoft, Proc. IEEE Int. Conference on Control Applications, (1999) 397.
[16] J.S. Freudenberg and D.P. Looze, Frequency Domain Properties of Scalar and Multivariable Feedback Systems, Springer, New York, 1988.
[17] O.B. Gjoesaeter and B.A. Foss, Automatica, 33 (1997) 427.
[18] P. Grosdidier, M. Morari and B.R. Holt, Ind. Eng. Chem. Fund., 24 (1985) 221.
[19] K.E. Haggblom, Proc. European Control Conference, (1997) WE-M-H7.
[20] K. Havre and S. Skogestad, Proc. European Control Conference, (1997) TU-A-H1.
[21] M. Hovd and S. Skogestad, Automatica, 28 (1992) 989.
463
Chapter C5

a Department of Chemical Engineering, National Taiwan University of Science and Technology, Taipei 106-07, Taiwan
b Department of Chemical Engineering, National Taiwan University, Taipei 106-17, Taiwan
1. INTRODUCTION
Research on the dynamics and control of processes with recycles has increased rapidly over
the past decade [1-11]. Practitioners as well as academics have recognized the industrial
importance of this subject. Environmentally benign process flowsheets have many features that
improve yield and reduce environmental impact while reducing capital investment and
maintaining agility. This is accomplished by making extensive use of material recycle,
energy integration, and minimum intermediate storage. Agility, i.e., a smooth response to
fast-switching operating conditions, cannot be achieved for such flowsheets without renewed
insights into recycle processes.
Most of the above-mentioned literature addresses control issues. Much less work has
been done on the interaction between design and control. Elliott and Luyben [12,13] evaluate
the steady-state design of a ternary system based on the total annual cost (TAC) [14], and
controllability is assessed quantitatively using a capacity-based approach. Groenendijk et al.
[15] evaluate different designs using several control measures, e.g., the relative gain array and the
relative disturbance gain. These approaches are akin to a sequential control-design approach; that
is, controllability analysis is an add-on feature to a design problem. Luyben et al. [5] analyze
the pole locations of a ternary system using a simple dynamic reactor model, and this provides
insight into potential control problems with any given design. Chen and Yu [16,17] extend
this approach to the design of feed-effluent heat exchangers and heat-integrated reactors.
The simple recycle process of Papadourakis et al. [18] is probably one of the most studied
systems [1-3, 19-24]. It consists of a CSTR and a distillation column in a
recycle structure. The reaction, A→B, is irreversible with first-order kinetics; the light
reactant is separated from the product in the distillation column and recycled back to the
reactor. Important recycle-plant characteristics, including slowed-down process
dynamics, increased sensitivity of the recycle flow (also known as the snowball effect),
the difference between internal and external flow dynamics, and nonlinear dynamics, can be
deduced from this simple process. Subsequently, issues such as the placement of the throughput
manipulator, on-demand and on-supply control structures, regulatory control structure,
optimizing control structure, nonlinear behaviour for different designs, and the interaction
between design and control have been explored. This provides the basic principles for plantwide
control. From the reaction kinetics perspective, the bimolecular reaction (A+B→C) exhibits a
feature not seen in the isomerization reaction. The reason is clearly stated by Tyreus
and Luyben [4]: we need to "balance the reactants down to the last molecule". This
stoichiometric balance has significant implications for control structure design. That is, a simple
ratio of the reactants (open-loop control) will not work, and only a feedback mechanism
can overcome the stoichiometric imbalance. A potential stability problem of the bimolecular
reaction was pointed out by Luyben et al. [5], and the tradeoff between design and control was
explored by Cheng and Yu [25]. Certainly, the same principle, stoichiometric balance, applies
to all reactions with more than one reactant. Up to this point, we have focused on recycle
processes with an isothermal CSTR. If a tubular reactor is used in a recycle process, this adds
another degree of complexity. Reyes and Luyben [9] pointed out the difference: "Unlike
CSTR systems in which the feed temperature is usually unimportant, both the design and the
control of tubular reactors are strong functions of the temperature of the inlet stream to the
reactor." In this work, the design and control problem of recycle processes with a bimolecular
reaction taking place in an adiabatic tubular reactor is explored. Although hypothetical
components are used in this paper, the chemistry and the flowsheets are similar to those of a very
large number of real industrial processes, e.g., the production of isooctane, amines, methanol,
ammonia, ethyl-benzene, etc. The objective of this work is to extend the approach of Cheng
and Yu [25] to recycle plants with an adiabatic tubular reactor.
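The stoichiometric-balance argument above can be illustrated with a back-of-the-envelope component balance. The sketch below is ours, not the chapter's: the 0.5% ratio bias is a hypothetical measurement error, while the 0.12 kmol/s production rate matches the design specification used later. With an open-loop feed ratio, the excess reactant simply accumulates in the recycle loop.

```python
# Illustrative sketch: why a fixed open-loop feed ratio cannot hold the
# stoichiometric balance of A + B -> C in a recycle plant. A small bias in the
# measured A/B ratio makes the excess reactant accumulate without bound; only
# feedback on a composition measurement can trim the imbalance away.

F0B = 0.12            # fresh feed of B, kmol/s (production-rate spec)
ratio_error = 0.005   # hypothetical 0.5% flow-measurement bias in the ratio
F0A = F0B * (1.0 + ratio_error)   # intended 1:1 ratio, slightly off

dt, t_end = 60.0, 3600.0 * 24.0   # one day, in 1-minute steps
inventory_A = 0.0                 # excess A held up in the recycle loop, kmol
t = 0.0
while t < t_end:
    # reaction consumes A and B 1:1, so only the imbalance accumulates
    inventory_A += (F0A - F0B) * dt
    t += dt

print(f"excess A after one day: {inventory_A:.1f} kmol")  # ~52 kmol, growing linearly
```

With feedback (e.g., trimming F0A from a reactor-composition measurement), this integrating imbalance is detected and removed; with a pure ratio it is invisible to the control system.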
2. STEADY-STATE DESIGN
2.1. Process
Consider a recycle process in which an irreversible, exothermic reaction A+B→C occurs in a
gas-phase, adiabatic tubular reactor. The process flowsheet consists of one tubular reactor, one
distillation column, one vaporizer, and one furnace with two heat exchangers; it was first
studied by Reyes and Luyben [10] (Fig. 1). Two fresh feed streams, F0A and F0B, are mixed
with the liquid recycle stream D and sent to a steam-heated vaporizer. To meet the
required reaction temperature, the vapor from the vaporizer outlet is preheated
first in a feed-effluent heat exchanger and then in a furnace, which brings the stream to the proper
reactor inlet temperature and also serves start-up purposes. The exothermic reaction takes place in the tubular reactor,
and the reactor temperature increases monotonically along the axial direction, with
inlet and outlet temperatures Tin and Tout. The hot gas from the reactor preheats the
reactor feed in a feed-effluent heat exchanger, HX1, and the liquid recycle stream in a second
heat exchanger, HX2, as shown in Fig. 1.
After heat recovery via HX1 and HX2, the reactor effluent is fed to a distillation
column. The two reactants, A and B, are the light key (LK) and intermediate boiler (IK),
respectively, while the product, C, is the heavy component (HK). The Antoine constants of
the vapor-pressure equation are chosen such that the relative volatilities of the components are
αA = 4, αB = 2, and αC = 1 for this equimolal-overflow system (Table 1). A single
distillation column is sufficient to separate the product (C) from the unreacted reactants (A and
B). Ideal vapor-liquid equilibrium is assumed. Physical property data and kinetic data are
given in Table 1.
Following Reyes and Luyben [10], the following process specifications are used.
1. The product flow rate B from the base of the column is fixed at 0.12 kmol s⁻¹.
2. The product purity xB,C is fixed at 0.98 mole fraction.
3. The reactor exit temperature (Tout) is limited to 500 K at design.
4. The pressure in the reactor is assumed to be 35 bar, and the pressure drop is neglected.
At design, the following assumptions are made.
1. The minimum approach temperature differences for the heat exchangers are fixed at 10
K in HX1 and 25 K in HX2.
2. The reflux-drum temperature in the distillation column is fixed at 316 K (to back-calculate
the column pressure).
3. Distillation columns are designed by setting the total number of trays (NT) equal to twice
the minimum number of trays (Nmin); the optimum feed tray is estimated from the
Kirkbride equation.
4. The vapor leaving the vaporizer is at its dew-point temperature, given P = 35 bar.
Table 1.
Physical properties and kinetic data for steady-state design

                                                      A        B        C
Molecular weight, kg kmol⁻¹                           17.5     17.5     35
Heat of vaporization at 273 K, kJ kmol⁻¹              16,629   16,629   16,629
Liquid heat capacity, kJ kmol⁻¹ K⁻¹                   56       56       112
Vapor heat capacity, kJ kmol⁻¹ K⁻¹                    35       35       70
Antoine vapour-pressure constants ᵃ:  Aj              9.2463   8.5532   7.8600
                                      Bj              -2000    -2000    -2000
Heat of reaction, kJ kmol⁻¹                           -23,237
Specific reaction rate, kmol s⁻¹ bar⁻² kgcat⁻¹        0.3882 e^(-69710/8.314T)

ᵃ Pjs in bar and T in Kelvin: ln Pjs = Aj + Bj/T
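The Table 1 Antoine constants can be checked directly: because the Bj are equal, the relative volatilities are temperature-independent, and the 316 K reflux-drum assumption back-calculates a column pressure. The sketch below is illustrative; the distillate composition is taken from Table 2 (case 1).

```python
import math

# Sketch: verify the relative volatilities implied by the Table 1 Antoine
# constants (ln Ps = Aj + Bj/T, Ps in bar, T in K) and back-calculate the
# column pressure from the 316 K reflux-drum specification.

Aj = {"A": 9.2463, "B": 8.5532, "C": 7.8600}
Bj = {"A": -2000.0, "B": -2000.0, "C": -2000.0}

def psat(comp, T):
    """Vapour pressure (bar) from the chapter's Antoine form."""
    return math.exp(Aj[comp] + Bj[comp] / T)

T_drum = 316.0
x_D = {"A": 0.52, "B": 0.47, "C": 0.01}   # distillate, Table 2 case 1

# Equal Bj make the relative volatilities temperature-independent:
alpha_A = psat("A", T_drum) / psat("C", T_drum)   # 4.0, as stated
alpha_B = psat("B", T_drum) / psat("C", T_drum)   # 2.0, as stated

# Dew-point pressure of the overhead vapour at the 316 K drum:
P_dew = 1.0 / sum(x / psat(i, T_drum) for i, x in x_D.items())
print(f"alpha_A = {alpha_A:.2f}, alpha_B = {alpha_B:.2f}, P ~ {P_dew:.1f} bar")
```

This lands near the 12.6 bar column pressure listed in Table 2; the small residual presumably reflects rounding of the tabulated compositions.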
Fig. 1. Process flowsheet for the recycle process with optimal design.
5. The ratio of furnace duty to total preheat duty is fixed at 20% (QF/QTot = 0.2).
6. The product impurity in the distillate D is fixed at 1% (xD,C = 0.01).
The fresh-feed flow rates of component A (F0A) and component B (F0B) are calculated from:
F0A = B(xB,C + xB,A) and F0B = B(xB,C + xB,B).
Next, shortcut methods are applied to find the minimum number of trays (Fenske equation)
for distillation columns, locate the feed tray location (Kirkbride equation), and size the
column diameter. The heat transfer areas for the reboiler and condenser are also computed
from the vapor flow rates [13].
The capital and operating costs of the entire plant are estimated using the correlations
given in Douglas [14] and Reyes and Luyben [8]. The total annual cost (TAC)
model can thus be expressed as:
where Wcat is the catalyst weight, Vs denotes the vapor flow rate in the distillation column, NT
represents the total number of trays in the distillation column, Av is the heat-exchanger area of the
vaporizer, Vv denotes the volume of the vaporizer, Qv is the energy supply to the vaporizer, and QF is the energy
supply to the furnace. Eq. (1) gives a rigorous expression for the TAC. In Eq. (1), the TAC
model consists of the following terms: the first two terms represent the reactor cost and the
catalyst cost, the third and fourth terms correspond to the capital and operating costs of the
distillation column and trays, the fifth through seventh terms cover the vaporizer capital and
operating costs, and the last three terms are the furnace and heat-exchanger costs.
For purposes of comparison, it is useful to express a simplified TAC model in terms of
process variables, e.g., conversion, reactant distribution, relative volatilities, and the reaction rate
constant. This can be done by substituting the relevant process variables for the equipment sizes,
tray numbers, and vapor rates in Eq. (2). From the mass and energy balance equations, the
total amount of catalyst WCAT (implying the reactor size, VR) can be expressed as:
W_CAT = Σ_{i=1..N_R} F_i (y_C/N_R) e^{E/R T_i} / (P² k y_{A,i} y_{B,i}),  with T_i = T_0 + (-ΔH) y_{C,i} / [C_p (1 + y_C)]    (3)
where B is the production rate, N_R is the number of lumps in the reactor (N_R = 50), F_i denotes the
molar flow rate in each lump, P is the pressure in the reactor, y_A, y_B, y_C are the mole fractions in the
reactor effluent stream, and -ΔH denotes the heat of reaction. Next, we use the Fenske equation for the
minimum number of trays [14] and set the total number of theoretical trays as N_T = 2N_min. It
can be expressed as:
N_T = 2 log[(x_D,LK / x_D,HK)(x_B,HK / x_B,LK)] / log(α_LK / α_HK)    (4)

where LK and HK stand for the light and heavy keys in the column design (B and C),
respectively. The vapor rate can then be found from the minimum reflux ratio:

V_s = (1.2 R_min + 1) D    (5)

The minimum-reflux-ratio expression of Glinos and Malone [26] for the AB/C split is adopted here:

[Eq. (6), illegible in the source: R_min^{AB/C} in terms of the feed composition and the relative volatilities, with f = 1 + 1/(10 q y_B)]
It should be emphasized that, for recycle plants, the distillate flow rate is a function of
the reactor composition, because the feed flow to the separation system (F) varies as the
conversion changes. Assuming 100% fractional recovery of the product C, we have:

F = B / y_C    (7)

These equations clearly indicate the difference between standalone separation systems and
recycle plants.
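The shortcut sequence of Eqs. (4), (5) and (7) can be sketched as follows. R_min, which the chapter obtains from the Glinos-Malone expression (Eq. (6)), is passed in as a plain parameter here, and its value below is a placeholder; the compositions are the case-1 values from Table 2.

```python
import math

# Shortcut column design following Eqs (4), (5) and (7), a sketch using the
# case-1 numbers of Table 2. Keys for the Fenske equation are B (LK) and
# C (HK), so alpha_LK/alpha_HK = 2.

def column_shortcut(B, y_C, x_D, x_B, alpha_LK, alpha_HK, R_min):
    F = B / y_C                   # Eq (7): column feed at 100% recovery of C
    D = F - B                     # Eq (10) rearranged: distillate = recycle
    # Eq (4): N_T = 2 * N_min from the Fenske equation
    N_T = 2.0 * math.log((x_D["B"] / x_D["C"]) * (x_B["C"] / x_B["B"])) \
              / math.log(alpha_LK / alpha_HK)
    V_s = (1.2 * R_min + 1.0) * D   # Eq (5): column vapour rate
    return F, D, N_T, V_s

F, D, N_T, V_s = column_shortcut(
    B=0.12, y_C=0.13,
    x_D={"B": 0.47, "C": 0.01}, x_B={"B": 0.02, "C": 0.98},
    alpha_LK=2.0, alpha_HK=1.0, R_min=0.9)   # R_min value is a placeholder
print(f"F = {F:.2f} kmol/s, D = {D:.2f} kmol/s, N_T ~ {N_T:.0f} trays")
```

With y_C = 0.13 this gives F ≈ 0.92 kmol/s and N_T ≈ 22, of the same order as the F = 0.97 kmol/s and N_T = 25 listed in Table 2 once rounding of y_C and the feed-tray adjustment are accounted for.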
From Eqs. (3) to (7), the simplified TAC model can be expressed in terms of the system
conversion, reactant ratio, relative volatilities, and reaction rate constant. The following
equation gives the simplified TAC model: when the conversion (y_C) and reactant distribution
(y_A/y_B) are given, the TAC follows immediately.
[Eq. (8), illegible in the source: the derivative of the simplified TAC with respect to the reactant distribution y_A/y_B at fixed y_C, set to zero to locate the optimum.]
For any given y_C, we can find the optimal y_A/y_B by solving Eq. (8), and subsequently the
optimal reactant distribution along the trajectory, as shown in Fig. 2. Next, the TACs along the
trajectory are compared and the true optimum is obtained. Fig. 3 reveals the change of
TAC as y_C varies; the minimum TAC corresponds to y_A = 0.45, y_B = 0.42, y_C = 0.13, with a
TAC of 4.21×10⁷ $/year. Table 2 gives the steady-state operating conditions for the optimal
design. Note that, unlike the isothermal operation, the optimal trajectory does not reach the
pure-product corner, as indicated by the dashed line in Fig. 2. The reason is that the lower end
of the reactor inlet temperature is limited by the vaporizer temperature, a constraint imposed by
the physical properties of reactants A and B. In Fig. 2, components A, B and C are the light,
intermediate, and heavy keys, respectively. Each composition point in the triangular diagram is
the reactor outlet composition. The optimal trajectory (Fig. 2) also reveals that, at low
conversion, the separation cost dominates and a biased reactant distribution with the LK in excess
(y_A/y_B > 1) is preferred; as the conversion increases, the reactor cost becomes more
important and an equal reactant distribution (y_A/y_B = 1) is favored. The tradeoffs between
reactor and separation costs are clearly illustrated in Fig. 3 for different values of y_C along the
optimal trajectory.
discussed in the next section), and this in turn will lead to a higher reactor cost. This is exactly
what Fig. 2C reveals, but the true optimum remains at almost the same reactant distribution.
Fig. 2. Optimal TAC trajectory and design for different specifications on (A) reactor outlet
temperature, (B) relative volatilities, (C) heat of reaction, (D) activation energy.
Fig. 3. Costs for designs on the optimal trajectory with different values of yc (conversion).
It is well known that chemical reactions with large activation energies present difficult
control problems because of the rapid increase of the reaction rate as the temperature
increases. They also present a difficult reactor-temperature control problem when a feed-effluent
heat exchanger is installed, as a result of the large reactor gain (dT_out/dT_in). Fig. 2D shows that,
from a steady-state economic perspective, the relative reactor cost is higher for reactions
with a large activation energy. Therefore, the optimal trajectory converges to the center line at a
much lower y_C value and, more importantly, the true optimum is also located closer to the
center line, which implies an equal reactant distribution.
The optimal trajectory can be computed directly from Eq. (8), and this facilitates the
investigation of chemical reactions with different kinetic parameters and vapour-liquid
equilibria. More importantly, the trajectories obtained provide insight into possible tradeoffs
between design and control for different bimolecular reactions.
3. OPERABILITY
The material and energy balances provide the basis for a steady-state operability analysis
[19,24]. For a simple isomerization reaction, the production rate can be expressed in terms of the
recycle ratio, and a control structure can subsequently be devised. A similar approach is taken
here for the case of an adiabatic tubular reactor.
Assuming perfect separation, the total production rate can be expressed as:

B = R = W_CAT k_0 e^{-E/RT} P_A P_B    (9)
Table 2.
Steady-state operating conditions for the optimal design (case 1) and the design with an
unequal reactant distribution (case 2)

                        Case 1          Case 2
W_CAT (kg)              106,890         120,660
y_A,in / y_A            0.51 / 0.45     0.65 / 0.61
y_B,in / y_B            0.48 / 0.42     0.34 / 0.26
y_C,in / y_C            0.01 / 0.13     0.01 / 0.13
T_in / T_out (K)        424 / 500       424 / 500
T_mix / T_V (K)         415 / 380       415 / 380
T_H,out / T_F (K)       465 / 341       465 / 341
F (kmol/s)              0.97            0.97
F_B (kmol/s)            0.71            0.71
F_V (kmol/s)            1.09            1.09
x_D,A / x_B,A           0.52 / 0.00     0.70 / 0.00
x_D,B / x_B,B           0.47 / 0.02     0.30 / 0.02
x_D,C / x_B,C           0.01 / 0.98     0.01 / 0.98
D / B (kmol/s)          0.85 / 0.12     0.85 / 0.12
R (kmol/s)              0.78            1.02
A_HX1 (m²)              255.79          255.79
A_HX2 (m²)              638.13          638.13
A_C (m²)                1126.90         1036.20
A_R (m²)                536.42          493.24
Q_HX1 (MW)              1.27            1.27
Q_HX2 (MW)              4.53            4.53
Q_F (MW)                0.32            0.32
Q_V (MW)                15.37           15.37
Q_R (MW)                19.20           17.65
V_V (m³)                1.04            1.04
N_T                     25              24
N_F                     15              15
P_column (bar)          12.6            12.6
P (bar)                 35              35
TAC (10⁷ $/yr)          4.21            4.30
where B is the production rate, R is the generation rate of the product C, W_CAT is the catalyst
weight (which can also be interpreted as the reactor length), and P_i is the partial pressure of
component i. The material balances on the distillation column give:
F = D + B    (10)

F y_A = D x_D,A + B x_B,A    (11)

y_A = (D x_D,A + B x_B,A) / (D + B) = (RR x_D,A + x_B,A) / (1 + RR)    (12)

y_B = (D x_D,B + B x_B,B) / (D + B) = (RR x_D,B + x_B,B) / (1 + RR)    (13)
where RR denotes the recycle ratio (i.e., RR = D/B). Assuming perfect separation between the
reactants and the product (i.e., x_B,A = x_B,B = 0), we have:

y_A = RR x_D,A / (1 + RR)    (15)

y_B = RR x_D,B / (1 + RR)    (16)
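Eqs. (15) and (16) can be checked directly against the case-1 numbers of Table 2: with RR = D/B and the tabulated distillate composition, the reactor-effluent composition follows from the recycle ratio alone.

```python
# Quick check of Eqs (15)-(16) with the case-1 numbers of Table 2: under
# perfect separation, the reactor-effluent composition is set by the recycle
# ratio and the distillate composition.

D, B = 0.85, 0.12
RR = D / B                      # recycle ratio
x_D_A, x_D_B = 0.52, 0.47      # distillate composition (Table 2, case 1)

y_A = RR * x_D_A / (1.0 + RR)  # Eq (15)
y_B = RR * x_D_B / (1.0 + RR)  # Eq (16)
print(f"y_A = {y_A:.3f}, y_B = {y_B:.3f}")  # ~0.456 and ~0.412
```

These reproduce, within rounding and the neglected bottoms impurities x_B,A and x_B,B, the y_A = 0.45 and y_B = 0.42 listed in Table 2.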
For the case of an isothermal CSTR, the production rate becomes [25]:

B = W_CAT k_0 e^{-E/RT} P² x_D,A x_D,B RR² / (1 + RR)²    (17)
As pointed out earlier, for an adiabatic reactor the temperatures (T_in and T_out) play a significant
role in operability, and the energy balance has to be taken into consideration. Without loss of
generality, let us use a one-lump adiabatic tubular reactor to illustrate the derivation. The
relationship between heat generation and the production rate can be found from the energy
balance around the reactor [10]:

T_out = T_in + (-ΔH) B / (F C_p) = T_in + (-ΔH) / [C_p (1 + RR)]    (19)
Substituting Eq.(19) into Eq.(9), the production rate for the one-lump adiabatic tubular reactor
becomes:
B = W_CAT k_0 e^{-E/R T_out} P² x_D,A x_D,B RR² / (1 + RR)²    (20)
Comparing Eq. (17) with Eq. (20), one immediately observes a significant difference in the
reaction rate constant: for the adiabatic tubular reactor it is a function of the recycle ratio
(RR). Eq. (20) also shows that the reactor inlet temperature (T_in), the reactor pressure (P),
and the distribution of the reactants (x_D,A/x_D,B) play visible roles in the production-rate
expression. Insight can be gained by examining Eq. (20). Let us explore the effects of
the different design/operating variables on production-rate changes.
(dB/B) / (dRR/RR) |_{P, W_CAT, x_D} = 2/(1 + RR) - [E (T_out - T_in) / (R T_in²)] · RR/(1 + RR)    (21)
Eq. (21) clearly indicates the competing effects of concentration and temperature.
Note that, for isothermal operation, i.e., T_reactor = T_in, only the concentration effect remains.
That is:
(dB/B) / (dRR/RR) |_isothermal = 2/(1 + RR)    (22)
Fig. 4 also shows the production-rate variation for isothermal operation; the "snowball
effect" is evident in the high-RR region, where a large change in RR leads to only a very small
increase in the normalized production rate (B/W_cat)/(B/W_cat)_max.
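The isothermal-versus-adiabatic contrast behind Fig. 4 can be reproduced numerically from Eq. (20). The sketch below assumes the one-lump energy balance T_out = T_in + ΔT_ad/(1 + RR), which is our reading of Eq. (19); parameters come from Tables 1 and 2, and the log-log sensitivities of Eqs. (21)-(22) are evaluated by finite differences.

```python
import math

# Sketch of the isothermal vs adiabatic contrast of Fig. 4, using Eq (20) and
# its isothermal limit. ΔT_ad is fixed so that T_out = 500 K at the design
# recycle ratio RR = D/B. All proportionality constants cancel in the
# elasticities, so B is computed only up to a constant.

E_over_R = 69710.0 / 8.314          # activation energy / gas constant, K
T_in = 424.0
RR0 = 0.85 / 0.12                   # design recycle ratio (Table 2, case 1)
dT_ad = 76.0 * (1.0 + RR0)          # gives T_out - T_in = 76 K at RR = RR0

def B_rel(RR, adiabatic):
    """Production rate up to a constant, Eq (20): k(T) * RR^2/(1+RR)^2."""
    T = T_in + dT_ad / (1.0 + RR) if adiabatic else 500.0
    return math.exp(-E_over_R / T) * RR**2 / (1.0 + RR) ** 2

def elasticity(RR, adiabatic, h=1e-4):
    """Log-log sensitivity of B to RR, cf. Eqs (21)-(22), by finite difference."""
    return (math.log(B_rel(RR * (1 + h), adiabatic)) -
            math.log(B_rel(RR, adiabatic))) / math.log(1 + h)

print(f"isothermal: {elasticity(RR0, False):+.2f}")   # ~ +2/(1+RR): the snowball
print(f"adiabatic : {elasticity(RR0, True):+.2f}")    # negative: the reversal
```

At the design point the isothermal elasticity is the small positive 2/(1 + RR) of Eq. (22), while the adiabatic one is negative: the temperature term of Eq. (21) dominates, which is the production-rate reversal the chapter emphasizes.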
(dB/B) / (dT_in/T_in) |_{P, W_CAT, x_D, RR} = [E T_in / (R T_out²)] K_R    (23)
Eq. (23) clearly shows that, from a steady-state viewpoint, T_in is a good TPM for systems
with a large E. Compared with the isothermal CSTR case, the sensitivity is amplified by the
reactor gain K_R, which is the sensitivity between the inlet and outlet temperatures (i.e.,
K_R = dT_out/dT_in). As pointed out by Chen and Yu [16,17], a heat-integrated reactor with a
feed-effluent heat exchanger can easily become open-loop unstable for a system with a high
reactor gain (K_R). Therefore, controllability problems may arise when we try to recover more
heat from the hot gas of the reactor effluent. Nevertheless, Eq. (23) shows that T_in is indeed a
good candidate for the TPM.
Steady-state analysis also shows that the reactor pressure is a good choice of TPM.
However, for a gas-phase reactor, the interaction between pressure and temperature may lead to
dynamic problems: the thermal inertia may cause significant variation in the reactor inlet
temperature unless a large gas-phase holdup is employed.
Fig. 4. Normalized production rate as a function of recycle ratio (RR) for adiabatic and
isothermal operations.
Fig. 5 clearly shows that a small change in the limiting reactant B can lead to a significant
change in the production rate, especially when A is in large excess.
Sensitivity plots for possible TPMs (e.g., Figs. 4 and 5) help us select an appropriate TPM
at a given design condition, at least qualitatively. For example, RR is a poor choice of TPM if
the plant is designed at RR = 10 (Fig. 4), and composition variation is a poor handle for
production-rate changes if the plant is designed at y_A/(y_A + y_B) = 0.5 (Fig. 5).
Fig. 5. Sensitivity of reactor exit composition on production rate for different reactant
distributions.
4. CONTROL
Fig. 6. Control structures fixing (A) the recycle ratio (CS1) and (B) the reactor exit composition
(CS2).
Fig. 7. Closed-loop performance using CS1 and CS2 for ΔT_in = +5 K (case 1).
Fig. 8. Closed-loop performance using CS1 and CS2 for ΔP = +1 bar (case 1).
Fig. 10. Closed-loop performance using CS1 and CS2 for ΔT_in = +5 K (case 2).
5. CONCLUSION
The interaction between design and control for a gas-phase adiabatic tubular reactor with liquid
recycle is studied. The generic bimolecular reaction, A+B→C, has two important features:
(1) the stoichiometric balance has to be maintained, and (2) the reactor temperature plays an
important role in design and operability. More importantly, it represents a large class of
important industrial processes. This problem presents fascinating design tradeoffs, the
most important being that between reactor size (reactor cost) and recycle flow rate
(separation cost). The total annual cost (TAC) is used to evaluate the steady-state economics.
The optimal TAC trajectory starts from the corner of the light reactant (A) and converges to the
center line toward the pure heavy product (C) in the triangular composition space. Unlike
isothermal operation, the reactor inlet temperature limits the attainable region to the
low-conversion range. The optimal reactant distribution can be obtained directly from the
simplified TAC equation, and the effects of kinetic parameters and relative volatilities on this
optimum are also explored. The results show that an increased reactor exit temperature leads to
a more controllable optimal design, while a high activation energy results in a less controllable one.
Next, the connection between reactor temperature and operability is established analytically
for the simplified process. It reveals the important difference between isothermal and
adiabatic operation, especially the reversal of the production-rate variation as the recycle flow
changes. Moreover, it provides insight for evaluating possible throughput manipulators (TPM).
Based on the operability analysis, two control structures are proposed with three different
combinations of TPM. For the case of equally distributed reactants, the control structure using
the reactor inlet temperature as TPM gives good control performance when the reactant
distribution is held constant. However, a potential problem may arise as a result of a high
reactor exit temperature (T_out). For the case of a biased reactant distribution, reactant
redistribution provides an extra degree of freedom, and this alleviates the high-T_out problem.
The results presented in this work clearly indicate that simple material and energy balances
provide useful insights into the design and control of recycle processes.
REFERENCES
Chapter D1
1. INTRODUCTION
The focus of this book is on the integration of design and control. The objective is to design
a process which, in addition to being economically attractive from a steady-state point of view,
is "easy" to control and operate.
The focus in this chapter is different. The issue here is the operation of a given plant where
the design decisions have already been made. Here it is too late for "integration of design
and control", but on the other hand "integration of design people and control people" may give
large benefits. When it comes to operation, the "design people" usually focus their attention on
optimal economic steady-state operation. The "control people" on the other hand are focused on
dynamic operation, and on keeping selected variables at constant setpoints. The "missing link"
where the interaction between the two groups is most needed, is the issue of selecting which
variables to control. For most plants, as illustrated in this chapter, this choice can be made based
on steady-state economics, so here the design people are in charge. One needs information
about expected disturbances and implementation/measurement errors ("uncertainty"), and both
the control and design people can here contribute with their process insight. However, the
dynamic behavior (controllability) of the proposed choice must also be considered, and this is
the domain of the control people.
It should also be noted that many plants are not operated at the conditions they were designed
for. The reason is that the economic conditions are often such that it is optimal to operate the
plant at higher capacity than what it was designed for. This usually involves operating one or
more units at capacity constraints, and the active constraints may change on a daily basis, or as
various units are "debottlenecked". In any case, this means that one needs to rethink the control
strategy, so in most plants there will be an ongoing need for interactions between the design and
control people.
As mentioned, the focus of this chapter is on selecting controlled variables. More generally,
the issue of selecting controlled variables is the first subtask in the plantwide control or control
structure design problem (Foss 1973); (Morari 1982); (Skogestad and Postlethwaite 1996):
1. Selection of controlled variables c (variables with setpoints).
2. Selection of manipulated variables m.
3. Selection of measurements v (variables used in the controllers).
4. Selection of the control configuration (the structure interconnecting the variables above).
5. Selection of controller type (control law specification, e.g., PID, decoupler, LQG, etc.).
Even though control engineering is well developed in terms of providing optimal control
algorithms, it is clear that most of the existing theories provide little help when it comes to
making the above structural decisions.
The method presented in this chapter for selecting controlled variables (task 1) follows the
ideas of Morari et al. (1980) and Skogestad and Postlethwaite (1996) and is very simple. The
basis is to define mathematically the quality of operation in terms of a scalar cost function J to
be minimized. To achieve truly optimal operation we would need a perfect model, we would
need to measure all disturbances, and we would need to solve the resulting dynamic
optimization problem on-line. This is unrealistic, and the question is whether it is possible to
find a simpler implementation which still operates satisfactorily (with an acceptable loss). The
simplest operation would result if we could select controlled variables such that we obtained
acceptable operation with constant setpoints, thus effectively turning the complex optimization
problem into a simple feedback problem and achieving what we call "self-optimizing control".
In this chapter we first give an introduction to self-optimizing control (Skogestad 2000),
including a distillation column case study. In Skogestad (2000) the focus is on selecting single
measurements as controlled variables, but more generally variable combinations may be used,
and we present briefly the method of Alstad and Skogestad (2002) for finding the optimal
variable combination for the case where implementation errors are not important. The final part
of this chapter is the application of this method to optimal operation of a gasoline blending process.
In this chapter, we focus on optimal steady-state operation, because the plant economics
are primarily determined by the steady-state operation. Although not widely acknowledged,
controlling the right variable is a key element in overcoming uncertainty in operation.
In order to select controlled variables in a systematic way, the first step is to identify the
degrees of freedom.
The second step is to quantify what we mean by "desired operation". We do this by defining a
scalar cost function J0 which is to be minimized with respect to the available degrees of freedom
u0.
Here d represents the exogenous disturbances that affect the system, including changes in the
model (typically represented by changes in the function g1), changes in the specifications
(constraints), and changes in the parameters (prices) that enter the cost function (and possibly
the constraints); x represents the internal states. The cost function J0 is in many cases a simple
linear function of extensive variables multiplied by their respective prices.
The third step is the definition of the uncertainty, including the expected disturbances (d) and
implementation errors (n). The latter are, at steady state, mainly due to measurement error.
The fourth step is to find the optimal operating point for the various disturbances by
minimizing J0 with respect to the available degrees of freedom u0. In most cases some of the
inequality constraints are active (g2' = 0) at the optimal solution.
The final step, the most important in our view (but not considered an important issue
by many people), is the actual implementation of the optimal policy in the control system. We
assume that we have available measurements y = f0(x, u0, d) that give information about the
actual system behavior during operation (y also includes the cost function parameters (prices),
measured values of other disturbances d, and measured values of the independent variables u0).
Obviously, from a purely mathematical point of view, it would be optimal to use a centralized
on-line optimizing controller with continuous update of its model parameters and continuous
reoptimization of all variables. However, for a number of reasons, we almost always decompose
the control system into several layers, which in a chemical plant typically include scheduling
(weeks), site-wide optimization (days), local optimization (hours), supervisory/predictive control
(minutes) and regulatory control (seconds). Therefore, we instead consider the implementation
shown in Figure 1 with separate optimization and control layers. The two layers interact through
the controlled variables c, whereby the optimizer computes their optimal setpoints cs (typically
updating them about every hour), and the control layer attempts to implement them in practice,
i.e. to get c ≈ cs. The main issue considered in this chapter is then: what variables c should we
control?
As mentioned, in most cases some of the inequality constraints are active (i.e. g2' = 0) at the
optimal solution. Implementation to achieve this is usually simple: we adjust the corresponding
number of degrees of freedom u0 such that these active constraints are satisfied (the possible
errors in enforcing the constraints should be included as disturbances). In some cases this
Fig. 1. Implementation with separate optimization and control layers. Self-optimizing control
is when near-optimal operation is achieved with cs constant.
consumes all the available degrees of freedom. For example, if the original problem is linear
(a linear cost function with linear constraints g1 and g2), then it is well known from linear
programming theory that there will be no remaining unconstrained variables.
However, for nonlinear problems (e.g. when g1 is a nonlinear function), the optimal solution may
be unconstrained in the remaining variables, and such problems are the focus of this chapter. The
reason is that it is for the remaining unconstrained degrees of freedom (which we henceforth
call u) that the selection of controlled variables is an issue. For simplicity, let us write the
remaining unconstrained problem in reduced space in the form
remaining unconstrained problem in reduced space in the form
where u represents the remaining unconstrained degrees of freedom, and where we have
eliminated the states x = x(u, d) by making use of the model equations. These remaining degrees
of freedom u need to be specified during operation, and we use the feedback policy shown
in Figure 1, where the u's are adjusted dynamically to keep the controlled variables c at their
setpoints cs. However, this constant-setpoint policy will, for example due to disturbances d
(which change the optimal value of cs) and implementation errors n (which mean that we do
not actually achieve c = cs), result in a loss, L = J - Jopt, when compared to truly optimal
operation. If this loss is acceptable, then we have a "self-optimizing" control system:
1. A subset of the degrees of freedom u0 are adjusted in order to satisfy the active constraints
(as given by the optimization).
2. The remaining unconstrained degrees of freedom (u) are adjusted in order to keep selected
controlled variables c at constant desired values (setpoints) cs. These variables should
be selected to minimize the loss.
Ideally, this results in "self-optimizing control" where no further optimization is required, but
in practice some infrequent update of the setpoints cs may be required. If the set of active con-
straints changes, then one may have to change the set of controlled variables c, or at least change
their setpoints, since the optimal values are expected to change in a discontinuous manner when
the set of active constraints changes.
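As a toy illustration of this loss evaluation (a made-up example for this sketch, not from the chapter), consider a scalar cost J(u, d) = (u − d)² + d² with one unconstrained input u and one disturbance d. Keeping u itself constant incurs a loss that grows with d, while keeping the combination c = u − d constant gives zero loss, since its optimal value is disturbance-independent:

```python
# Toy self-optimizing control example (illustrative cost, not from the chapter).

def J(u, d):
    """Operating cost: optimum is at u_opt(d) = d, with J_opt(d) = d**2."""
    return (u - d) ** 2 + d ** 2

def u_opt(d):
    # dJ/du = 2*(u - d) = 0  ->  u = d
    return d

def loss_constant_u(d, u_s=0.0):
    """Loss when the input itself is held at a fixed setpoint u_s."""
    return J(u_s, d) - J(u_opt(d), d)

def loss_constant_c(d, c_s=0.0):
    """Loss when c = u - d is controlled to c_s (feedback sets u = c_s + d)."""
    return J(c_s + d, d) - J(u_opt(d), d)

for d in (0.0, 0.5, 1.0):
    print(d, loss_constant_u(d), loss_constant_c(d))
```

Here c = u − d plays the role of a self-optimizing controlled variable: the constant setpoint policy remains optimal for any value of the disturbance.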
Example. Cake baking. Let us consider the final process in cake baking, which is to bake the cake in an oven. Here there are two independent variables, the heat input (u1 = Q) and the baking time (u2 = t). It is a bit more difficult to define exactly what J is, but it could be quantified as the average rating of a test panel (where 1 is the best and 10 the worst). One disturbance will be the room temperature. A more important disturbance is probably uncertainty with respect to the actual heat input, for example, due to varying gas pressure for a gas stove, or difficulty in maintaining a constant firing rate for a wood stove. In practice, this seemingly complex optimization problem is solved by using a thermostat to keep a constant oven temperature (e.g., keep c1 = Toven at 200°C) and keeping the cake in the oven for a given time (e.g., choose c2 = u2 = 20 min). The feedback strategy, based on measuring the oven temperature c1, gives a self-optimizing solution where the heat input (u1) is adjusted to correct for disturbances and uncertainty. The optimal values for the controlled variables (c1 and c2) are obtained from a cook book, or from experience. An improved strategy may be to also measure the temperature inside the cake, and take out the cake when a given temperature is reached (i.e., u2 is adjusted to get a given value of c2 = Tcake).
We next consider a distillation case study where we follow the stepwise procedure of Skogestad (2000) for selecting controlled variables. The example also illustrates how to include the implementation error n in the analysis.
reactor for reprocessing. We assume the feed rate is given and that there is no capacity limit in
the column.
u— [
\DJ
Step 2: Cost function and constraints
Ideally, the optimal operation of the column should follow from considering the overall plant economics. However, to be able to analyze the column separately, we introduce prices for all streams entering and exiting the column and consider the following profit function P, which should be maximized (i.e. J = −P):

P = pD D + pB B − pF F − pV V     (4)
The price pV = 0.1 [$/kmol] on boilup includes the costs for heating and cooling, which both increase proportionally with the boilup V. The price for the feed is pF = 10 [$/kmol], but its value has no effect on the optimal operation for the case with a given feed rate. The price for the distillate product is pD = 20 [$/kmol], and its purity specification is

xD ≥ 0.995

There is no purity specification on the bottoms product, but we note that its price is reduced in proportion to the amount of light component (because the unnecessary reprocessing of light component reduces the overall capacity of the plant; this dependency is not really important, but it is realistic).
With a nominal feed rate F = 1 kmol/min the profit P of the column is of the order of 4 [$/min], and we would like to find a controlled variable which results in a loss L of less than 0.04 [$/min] for each disturbance (corresponding to a yearly loss of less than about $20000).
Step 3: Disturbances
We consider five disturbances:
d4: A decrease in the feed liquid fraction qF from 1.0 (pure liquid) to 0.5 (50% vaporized)
d5: An increase of the purity of the distillate product xD from 0.995 (its desired value) to 0.996
The latter is a possible safety margin for xD which may take into account its implementation error. In addition, we include the implementation error n for the other selected controlled variable (see below).
Step 4: Optimization
In Table 1 we give the optimal operating point for the five disturbances: a larger feed rate (F = 1.3), less and more light component in the feed (zF = 0.5 and zF = 0.75), a partly vaporized feed (qF = 0.5), and a purer distillate product (xD = 0.996).
As expected, the optimal values of all the variables listed in the table (xD, xB, D/F, L/F, V/F, P/F) are completely insensitive to the feed rate, since the column has no capacity constraints, and the efficiency is assumed independent of the column load.
We consider implementation errors of about 20% in all variables, including xD (the other controlled variable). From Table 1 we see that the optimal value of D/F varies considerably, so we expect this to be a poor choice for the controlled variable (as it violates requirement 1). For the other alternatives, it is not easy to say from our requirements or from physical insight which variable to prefer. We will therefore evaluate the loss.
Table 1
Optimal operating point (with maximum profit P/F) for distillation case study
we would like the loss to be less than 0.04 [$/min] for each disturbance. We have the following comments to the results given in Table 2:

Table 2
Loss [$/min] for distillation case study.

                  xB      D/F     L       L/F     V/F     L/D
Nominal           0       0       0       0       0       0
F = 1.3           0       0       0.514   0       0       0
zF = 0.5          0.023   inf.    0.000   0.000   0.001   1.096
zF = 0.75         0.019   2.530   0.006   0.006   0.004   0.129
qF = 0.5          0.000   0.000   0.001   0.001   0.003   0.000
xD = 0.996        0.086   0.089   0.091   0.091   0.091   0.093
20% impl. error   0.012   inf.    0.119   0.119   0.127   0.130

inf. denotes infeasible operation
Nominal values: xD = 0.995, zF = 0.65, qF = 1.0
20% impl. error corresponds to: xB = 0.048, D/F = 0.766, L = 18.08, L/F = 18.08, V/F = 18.85, L/D = 28.28
Unacceptable loss (larger than 0.04) shown in bold face
1. As expected, we find that the losses are small when we keep xB constant.
3. Not surprisingly, keeping D/F (or D) constant is not an acceptable policy; e.g., operation is infeasible when zF is reduced from 0.65 to 0.5.
5. L/D is not a good controlled variable, primarily because its optimal value is rather sensitive to feed composition changes.
7. For reflux L and boilup V one needs to include "feedforward" action from F (i.e. keep L/F and V/F constant).
8. Use of L/F or V/F as controlled variables is very attractive when it comes to disturbances, but these variables are rather sensitive to implementation errors.
9. Other controlled variables have also been considered (not shown in the table). For example, a constant composition (temperature) on stage 19 (towards the bottom), x19 = 0.20, gives a loss of 0.064 when zF is reduced to 0.5, but otherwise the losses are similar to those with xB constant.
Remark. If it turns out to be difficult to keep L/F (or V/F) constant, then we may consider using L (or V) to keep a temperature towards the bottom of the column constant.
Above we selected the controlled variables c simply as a subset of the measurements y. However, more generally we may allow for variable combinations and write c = h(y), where the function h(y) is free to choose. Here the number of controlled variables (c's) is equal to the number of degrees of freedom.

1 There are other possible choices for controlling xD, e.g. we could use the distillate flow D. However, V has a more direct effect.

If we only allow for linear variable combinations then we have
Δc = H Δy     (5)

where the constant matrix H is free to choose. Does there exist a variable combination with zero loss for all disturbances, that is, for which copt(d) is independent of d? As proved by Alstad and Skogestad (2002), the answer is "yes" for small disturbance changes, provided we have at least as many independent measurements (y's) as there are independent variables (u's and d's). The derivation of Alstad and Skogestad (2002) is surprisingly simple: In general, the optimal values of the y's depend on the disturbances, and we may write this dependency as yopt(d). Locally, that is, for small deviations from the optimal operating point, the value of yopt(d) depends linearly on d,

Δyopt = F Δd     (6)

where the sensitivity F = dyopt(d)/dd is a constant matrix. We would like to find a variable combination Δc = H Δy such that Δcopt = 0. We get Δcopt = H Δyopt = H F Δd = 0. This should be satisfied for any value of Δd, so we must require that H is selected such that

H F = 0     (7)
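The condition HF = 0 says that the rows of H must lie in the left null space of F, i.e. they form a basis for the null space of Fᵀ. As a small self-contained sketch (not code from the chapter), this can be computed exactly over rationals with the standard library; the sensitivity matrix below is illustrative, not taken from the case studies:

```python
from fractions import Fraction

def nullspace(A):
    """Return a basis for {x : A x = 0} via Gauss-Jordan elimination (exact)."""
    A = [[Fraction(v) for v in row] for row in A]
    m, n = len(A), len(A[0])
    pivots, r = [], 0
    for col in range(n):
        piv = next((i for i in range(r, m) if A[i][col] != 0), None)
        if piv is None:
            continue                                  # free column
        A[r], A[piv] = A[piv], A[r]
        A[r] = [v / A[r][col] for v in A[r]]          # normalise pivot row
        for i in range(m):
            if i != r and A[i][col] != 0:
                f = A[i][col]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        pivots.append(col)
        r += 1
    basis = []
    for free in (c for c in range(n) if c not in pivots):
        x = [Fraction(0)] * n
        x[free] = Fraction(1)
        for row, col in enumerate(pivots):
            x[col] = -A[row][free]
        basis.append(x)
    return basis

# Three measurements, one disturbance: F is 3x1, so F^T is 1x3 and the left
# null space of F is two-dimensional -> two independent zero-loss combinations.
F_T = [[1, 2, -3]]          # illustrative sensitivities (Delta y_opt per Delta d)
for h in nullspace(F_T):    # each h is a candidate row of H with h . F = 0
    print([str(v) for v in h])
```

Each basis vector h defines a controlled variable c = h·y whose optimal value is locally insensitive to the disturbance.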
Implementation error
One issue which we have not discussed so far is the implementation error n, which is the
difference between the actual controlled variable c and its desired value (n = c — cs). In some
cases there may be no implementation error, but this is relatively rare.
Figure 1 is a bit misleading as it (i) only includes the contribution to n from the measurement error, and (ii) gives the impression that we directly measure c, whereas in reality we measure y; i.e. n in Figure 1 represents the combined effect on c of the measurement errors for y.
In any case, the implementation error n generally needs to be taken into account, and it will
affect the optimal choice for the controlled variables. Specifically, when we have implementa-
tion errors, it will no longer be possible to find a set of controlled variables that give zero loss.
One way of seeing this is to consider the implementation error n as a special case of a distur-
bance d. Recall that to achieve zero loss, we need to add one extra measurement y for each
disturbance. However, no measurement is perfect, so this measurement will have an associated
error ("noise"), which may again be considered as an additional disturbance, and so on.
Unfortunately, the implementation error makes it much more difficult to find the optimal measurement combination, c = h(y), to use as controlled variables. Numerical approaches may be used, at least locally (Halvorsen et al. 2003), but these are quite complicated.
The following example clearly illustrates the importance of selecting the right controlled variables, and nicely illustrates the method of Alstad and Skogestad (2002) for selecting optimal measurement combinations, for the case when implementation error is not an important issue.
Problem statement. We want to make 1 kg/s of gasoline with at least 98 octane and not more than 1 weight-% benzene, by mixing the following four streams:
• Stream 1: 99 octane, 0% benzene, price p1 = (0.1 + 0.1 m1) $/kg.
• Stream 2: 105 octane, 0% benzene, price p2 = 0.200 $/kg.
• Stream 3: 95 octane, 0% benzene, price p3 = 0.12 $/kg.
• Stream 4: 99 octane, 2% benzene, price p4 = 0.185 $/kg.
The maximum amount of stream 1 is 0.4 kg/s. The disturbance is the octane content of stream 3 (d = O3), which may vary from 95 (its nominal value) up to 97. We want to obtain a self-optimizing strategy that "automatically" corrects for this disturbance.
Solution. For this problem we have

u0 = ( m1  m2  m3  m4 )T

where mj [kg/s] represents the mass flows of the individual streams. The optimization problem is to minimize the cost of the raw materials

J = p1 m1 + p2 m2 + p3 m3 + p4 m4

subject to the 1 equality constraint (given product rate) and 7 inequality constraints:
m1 + m2 + m3 + m4 = 1
m1 ≥ 0
m2 ≥ 0
m3 ≥ 0
m4 ≥ 0
m1 ≤ 0.4
99 m1 + 105 m2 + O3 m3 + 99 m4 ≥ 98
2 m4 ≤ 1
At the nominal operating point (where O3 = d* = 95) the optimal solution is to have

u0,opt(d*) = ( 0.26  0.196  0.544  0 )T

which gives Jopt(d*) = 0.13724 $. We find that three constraints are active (the product rate equality constraint, the non-negative flow rate for m4, and the octane constraint). The same three constraints remain active when we change O3 to 97, where the optimal solution is to have

u0,opt = ( 0.20  0.075  0.725  0 )T
This leaves one unconstrained degree of freedom (which we may select, for example, as u = m1, but which variable we select to represent u is not important, as any of the three variables m1, m2 or m3 will do). We now want to evaluate the loss imposed by keeping alternative controlled variables c constant at their nominal optimal values, cs = copt(d*). The available measurements are a subset of u0, namely

y = ( m1  m2  m3 )T

Here we have excluded m4 since it is kept constant at 0, and thus is independent of d and u.
Let us first consider keeping each individual flow constant (with the two others adjusted to satisfy the active product rate and octane number constraints). We find when d = O3 is changed from 95 to 97:
• c = m1 constant at 0.26: J = 0.12636, corresponding to a loss L = 0.12636 − 0.126 = 0.00036
• c = m2 constant at 0.196: Infeasible (requires a negative m3 to satisfy the constraints)
• c = m3 constant at 0.544: J = 0.13182, corresponding to a loss L = 0.13182 − 0.126 = 0.00582
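These losses can be reproduced in a few lines. The sketch below assumes (consistently with the quadratic term H(1,1) = 0.2 in the Matlab file at the end of the section) a raw-material cost J = 0.1 m1² + 0.1 m1 + 0.2 m2 + 0.12 m3 with m4 = 0, and solves the two active constraints for the flows that are not held constant:

```python
def cost(m1, m2, m3):
    # raw-material cost with m4 = 0 (the stream-1 price gives a quadratic term)
    return 0.1 * m1**2 + 0.1 * m1 + 0.2 * m2 + 0.12 * m3

def flows_m1_fixed(m1, O3):
    # solve m2 + m3 = 1 - m1 and 105*m2 + O3*m3 = 98 - 99*m1
    m3 = (105 * (1 - m1) - (98 - 99 * m1)) / (105 - O3)
    return m1, 1 - m1 - m3, m3

def flows_m3_fixed(m3, O3):
    # solve m1 + m2 = 1 - m3 and 99*m1 + 105*m2 = 98 - O3*m3
    m2 = ((98 - O3 * m3) - 99 * (1 - m3)) / (105 - 99)
    return 1 - m3 - m2, m2, m3

J_opt = 0.126                 # re-optimised cost at O3 = 97 (value from the text)
for flows in (flows_m1_fixed(0.26, 97), flows_m3_fixed(0.544, 97)):
    print(round(cost(*flows), 5), "loss:", round(cost(*flows) - J_opt, 5))
```

Minimising cost(*flows_m1_fixed(m1, 95)) over m1 likewise recovers the nominal optimum m1 = 0.26 with J = 0.13724, matching the text.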
Let us now obtain the optimal variable combination that gives zero loss. We use a linear variable combination

c = Hy = h1 m1 + h2 m2 + h3 m3
The relationship between the optimal value of y and the disturbance is indeed linear in this case, and we have (using the optimal points for O3 = 95 and O3 = 97, i.e. Δd = 2)

Δyopt = F Δd,  F = (1/2) ( 0.20 − 0.26,  0.075 − 0.196,  0.725 − 0.544 )T = ( −0.03,  −0.06,  0.09 )T
In this case we have 1 unconstrained degree of freedom (u) and 1 disturbance (d), so we need to combine at least 2 measurements to get a variable combination with zero loss. This is confirmed by the above equation, which may always be satisfied by setting one element in H equal to zero. We then find that the following three combinations of two variables give zero loss:
1. c = m1 − 0.5 m2: Zero loss (derived by setting h3 = 0 and choosing h1 = 1)
2. c = 3 m1 + m3: Zero loss (derived by setting h2 = 0 and choosing h3 = 1)
3. c = 1.5 m2 + m3: Zero loss (derived by setting h1 = 0 and choosing h3 = 1)
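As a quick numerical check (a sketch, not from the paper), each of these combinations can be verified to lie in the left null space of the sensitivity F = (−0.03, −0.06, 0.09)T, i.e. h·F = 0:

```python
# Verify HF = 0 for the three two-measurement combinations above.
F = [-0.03, -0.06, 0.09]
combinations = {
    "m1 - 0.5*m2": [1.0, -0.5, 0.0],
    "3*m1 + m3":   [3.0,  0.0, 1.0],
    "1.5*m2 + m3": [0.0,  1.5, 1.0],
}
for name, h in combinations.items():
    residual = sum(hi * fi for hi, fi in zip(h, F))
    print(name, "h.F =", round(residual, 12))
```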
1. Make the setpoint cs a function of prices (this is probably the simplest and most obvious
approach).
2. Keep constant setpoints, and instead include the prices as extra "measured disturbances".
The latter approach is probably less obvious, so let us illustrate how it can be applied to our blending example. We consider the case where the price of stream 2 may vary. Specifically, changing the price p2 from 0.2 to 0.21 gives the new optimum

( m1  m2  m3 ) = ( 0.28  0.188  0.532 )

and defining

y = ( m1  m2  m3  p2 )T
d = ( O3  p2 )T
gives

Δyopt = F Δd,  where now

F = ( −0.03    2.0
      −0.06   −0.8
       0.09   −1.2
       0       1.0 )
We then have

c = Hy = h1 m1 + h2 m2 + h3 m3 + h4 p2

and the requirement HF = 0 gives, from the second column of F,

h4 = −2 h1 + 0.8 h2 + 1.2 h3
This gives the following optimal variable combinations with price correction:
1. c = m1 − 0.5 m2 − 2.4 p2 (since h4 = −2·1 + 0.8·(−0.5) + 1.2·0 = −2.4)
2. c = 3 m1 + m3 − 4.8 p2
3. c = 1.5 m2 + m3 + 2.4 p2
It may seem that the sum of the first and third variable combinations gives a "magic" controlled variable which is independent of the price p2. However, it turns out that this variable is m1 + m2 + m3, which indeed is independent of the price, but is also identical to one of the equality constraints (the total mass flow is always 1), so this variable is degenerate and fixing its value does not provide any additional information.
Matlab file

H = [0.2 0 0 0; 0 0 0 0; 0 0 0 0; 0 0 0 0]
f = [0.1 0.2 0.12 0.185] % prices
A = [-99 -105 -95 -99; 0 0 0 2; 1 0 0 0;
     -1 0 0 0; 0 -1 0 0; 0 0 -1 0; 0 0 0 -1]
b = [-98 1 0.4 0 0 0 0]'
Aeq = [1 1 1 1]
beq = 1
[X,FVAL] = quadprog(H,f,A,b,Aeq,beq)
6. CONCLUSIONS
The selection of controlled variables for different systems may be unified by making use of the idea of self-optimizing control. The idea is to first define quantitatively the operational objectives through a scalar cost function J to be minimized. The system then needs to be optimized with respect to its degrees of freedom u0. From this we identify the "active constraints", which are implemented as such. The remaining unconstrained degrees of freedom u are used to control selected controlled variables c at constant setpoints. In the paper it is discussed how these variables should be selected. We have in this paper only briefly discussed the implementation error n = c − cs, which may be critical in some applications (Skogestad 2000).
REFERENCES
V. Alstad and S. Skogestad, presented at the 2002 AIChE Annual Meeting, Indianapolis, USA.
C.S. Foss, AIChE J., 19 (1973) 209.
I.J. Halvorsen, S. Skogestad, J.C. Morud and V. Alstad, Ind. Eng. Chem. Res., 42 (2003) 3273.
M. Morari, in: Proc. of the Second International Conference on Chemical Process Control (CPC-2), Sea Island, Georgia, Jan. 1981, pp. 467-495.
M. Morari, G. Stephanopoulos and Y. Arkun, AIChE J., 26 (1980) 220.
F.G. Shinskey, Distillation Control, McGraw-Hill, 1984.
S. Skogestad, J. Proc. Control, 10 (2000) 487.
S. Skogestad and I. Postlethwaite, Multivariable Feedback Control, John Wiley & Sons, 1996.
S. Skogestad and M. Morari, AIChE J., 33 (1987) 1620.
Chapter D2
1. INTRODUCTION
solutions is that they do not adequately reflect the distributed nature of the problem in terms
of organisation and production units (plants, production departments, lines, batch units). As a
consequence, internal disturbances occurring at any level of this organisational context or
external perturbations occurring in the market environment may create frequent and
irrecoverable readjustments in real-life industrial operations. Although "rolling-schedule" [6],
and more recently, 'reactive scheduling' [7,8] strategies try to alleviate some of the
synchronisation problems associated with the necessary schedule updating, there is no global
co-ordination strategy linked in real time to the batch control system.
A realistic answer to this situation inevitably entails appropriate consideration of the
interaction between the various decision levels linked to the batch control system:
• Plant management and scheduling control, including planning, scheduling and plant
wide optimisation;
• Sub-plant co-ordination between major production areas, including local schedule
adjustments and recipe modifications;
• Switching and supervisory control of process units, including appropriate handling of emergencies;
• Industrial equipment regulatory control and fault diagnosis actions.
This paper focuses on this gap between automatic batch control, process management systems and other logistics and operational systems. A framework contemplating the integration of the different control levels in batch and hybrid scenarios has been created. First, batch
control is characterised following current standards. Then, the control model is described in
detail. Next, the elements and architecture for integration of control and production
management are introduced. Finally, the results obtained in a prototype environment
(PROCEL) are presented. Also, the application to industrial case scenarios is indicated.
Recently, the trend in systems integration has been towards the use of automatic control in its broadest sense (including dynamic control, scheduling and the closure of information loops) to integrate all aspects of plant operations.
On the other hand, the process industries increasingly use batch processing in order to augment the flexibility of the manufacturing process, aiming at high profitability, high productivity and high competitiveness in the global market. By using batch processing and process automation, it is now possible to produce a new product in the customer-required quantity [9]. However, this higher flexibility in the manufacturing process leads to constantly changing recipes, production requirements, packaging options and product specifications, and therefore requires more complex automation and process control. The Instrument Society for Measurement and Control (ISA), in the S88 standard [10], aims at optimising and facilitating the control process by providing a standard terminology for batch processes and a systematic, hierarchical structure for batch control.
On the one hand, S88.01 defines standards and recommended practices for the design and
specification of batch control systems as used in the process control industries. Unlike other
ISA standards, it is not a compliance standard, but a guideline, and the preferred term for
systems and software based on S88.01 is 'S88 Aware'. On the other hand, S88.02 is more definitive in its requirements, and therefore compliance is appropriate to a greater extent [10, 20].
The application of S88.01 to the development of a batch control system can facilitate the
implementation of a project, as it gives engineers a clear set of terms to describe flow sheets
and control schemes and can considerably reduce the time required to implement the system,
'(it) has been very successful in its fundamental task of explaining what batch control is all
about, and in cutting the time needed to develop and configure software' [10].
The main advantages of S88.01 are:
• Defines terminology specific to batch control systems to encourage understanding between manufacturers and users.
• Provides a consistent batch control language to simplify programming, configuration, and communication between the various system components.
• Provides a common data structure for batch systems to simplify data communications within the system architecture.
• Determines a consistent batch control architecture that defines both a physical and a functional model.
Furthermore, a number of reasons are found for the necessity of the standard [11], including: to promote an adequate methodology for the design and operation of batch processes, to improve the level of control implemented, and to achieve a common standard independent of the degree of automation of the plant. The standard also aims at reducing the time users need to reach acceptable production levels for new products, permits the engineer to design adequate tools to implement this type of control, and permits a reduction of the global cost of automating these processes [12].
For the success of this approach, one key concept is openness, to be defined in the next section.
containing the hardware and system software, like the operating system, and a set of modules of application software, which contain the control-specific functionality.
Users need to integrate software themselves into control systems and want to profit from cost-saving effects like the reuse of software and the use of standard hardware, reduced machine downtime, easy integration of custom technologies, and improved response to changing customer demands. To achieve this, modular control systems have to be transformed into open control systems.
3. BATCH CONTROL
Batch control refers to the automated process control of batch manufacturing. Batch
automation is the discipline of industrial manufacturing that processes a given quantity of
material by subjecting it to an ordered set of processing instructions over a finite period of
time.
Table 1
Five capabilities of modules in open control systems

Capability      Meaning
Portable        a module can run in different control systems
Extendable      the functionality of a module can be extended
Exchangeable    a module can be replaced by one with comparable functionality
Scalable        multiple instances of a module are possible, to increase performance
Interoperable   modules cooperate (exchange data)
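The "exchangeable" and "interoperable" capabilities in Table 1 can be sketched as a module interface (the class and method names below are hypothetical, not from any standard): any module implementing the interface can replace another with comparable functionality, and modules cooperate by exchanging data through it.

```python
# Illustrative sketch of an open-control-system module interface (names invented).
from abc import ABC, abstractmethod

class ControlModule(ABC):
    """Smallest equipment grouping capable of basic control (cf. the S88 notion)."""

    @abstractmethod
    def read(self) -> dict:
        """Expose current values to other modules (interoperability)."""

    @abstractmethod
    def write(self, setpoint: float) -> None:
        """Accept a command from a higher-level procedural element."""

class OnOffValve(ControlModule):
    """Example: solenoid + limit switch combined into an on/off valve module."""
    def __init__(self):
        self.open = False

    def read(self) -> dict:
        return {"open": self.open}

    def write(self, setpoint: float) -> None:
        self.open = setpoint > 0.5   # treat the setpoint as a discrete command

valve = OnOffValve()   # exchangeable: any ControlModule implementation fits here
valve.write(1.0)
print(valve.read())
```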
In batch production, the product's processing is determined by a recipe that contains all the requisite information for manufacturing the product. This includes the ingredients or raw materials needed, the order of the process steps through which the ingredients must pass, the conditions of each step in the process, and the equipment to be used in the process. Because an individual batch can represent several million dollars' worth of research and materials, batch control is critical for batch manufacturers to ensure the repeatability, consistency, and long-term maintainability of the process.
A batch process is neither continuous nor discrete; however, it has characteristics of both, which makes the automation of batch processes considerably more complicated than that of continuous or discrete processes. In a continuous process, process control is required only to monitor that the system is working within the optimum boundaries and, if not, to act on the local control in order to compensate, so that the system returns to within the optimal boundaries. In a batch process, the control system must also detect when a phase has been completed, and then change from one dynamic configuration to another, with changes in the local controllers for each phase [13].
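The batch-specific task just described, detecting that a phase has completed and switching to the next dynamic configuration, can be sketched as a simple phase sequencer (the phase names and end-conditions below are purely illustrative, not from any standard recipe):

```python
# Hedged sketch of phase-completion detection and configuration switching.
phases = [
    ("fill", lambda s: s["level"] >= 10.0),     # end-condition for each phase
    ("heat", lambda s: s["temp"] >= 80.0),
    ("hold", lambda s: s["elapsed"] >= 30.0),
]

def advance(state, phase_index):
    """Return the index of the phase that should now be active."""
    _name, done = phases[phase_index]
    if done(state) and phase_index + 1 < len(phases):
        return phase_index + 1    # switch to the next controller configuration
    return phase_index

i = 0
i = advance({"level": 10.5, "temp": 20.0, "elapsed": 0.0}, i)  # fill done
print(phases[i][0])
```

In a real system each switch would also reconfigure the local controllers (setpoints, modes) for the new phase.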
In batch processes there are three different types of process control: basic control,
procedural control and co-ordination control. A combination of control activities and control
functions of these control types provides batch control, defined as 'a means to process finite
quantities of input materials by subjecting them to an ordered set of processing activities over
a finite period of time using one or more pieces of equipment' [14].
been some effort to standardise how batch control procedures must be implemented, such as NAMUR NE33, ASTRID, ISA S88.01, ISA S88.02, and the GAMP guidelines. Among them, ISA S88.01 introduced an object-oriented methodology for modelling batch processes and a common set of definitions. In this sense, S88.01 offers opportunities to define process control applications in a modular fashion.
S88 takes into account two types of information, equipment-dependent and product-dependent, to structure the model description of batch processes.
Three kinds of models are combined to represent these two types of information. The first model is the physical hierarchy model. Another is the procedural model, used to describe the procedural control elements. From the combination of the equipment control functions and the physical equipment, the equipment entities are obtained, which have the same name as the physical model level they represent. The third model is the recipe structure. The characteristics of these models are described in the following sections.
modules must contain all the necessary processing equipment to carry out the minor processing activities (phases).
Control modules are the lowest-level grouping of equipment capable of carrying out basic control. For example, solenoids and limit switches can be combined into on/off valve control modules, and transmitters and valves can be combined into PID control modules. The main characteristics of control modules are a) the direct connection to the process, and b) that these modules cannot execute any procedural elements.
The physical model is thus used to describe the plant in terms of physical objects, i.e. the equipment-dependent information.
§ General Recipes, which are maintained at the corporate level (typically within the ERP
system) and which permit companies to make the same product in plants around the
globe on a variety of equipment, but based on the same source recipe.
§ Site Recipes, which typically reside on manufacturing execution systems (a layer that lies
between plant floor control and ERP systems). Site recipes define local site control of
recipes across different hardware platforms, such as those supplied by control systems
manufacturers, like Honeywell, Fisher-Rosemount, Rockwell Automation, ABB, etc.
§ Master Recipes, which are the specific procedures that actually execute the recipe in a
particular manufacturing area, known as a process cell.
§ Control Recipes, which are the running recipes in the process cell control systems.
General and site recipes are not equipment-dependent and describe the technique of the process, i.e. how to do it in principle; they may, however, specify, when known, data that may be required for the equipment, for example pressure requirements. The general recipe is defined at the level of the enterprise, whilst the site recipe is specific to a particular site.
Master and control recipes describe the task, i.e. how to do it with actual resources. The master recipe is the required recipe, as without it no control recipe can be created and therefore no batches can be produced.
The master recipe is more general than the control recipe. In the master recipe, equipment clauses, for example, are stated, and quantities are usually specified in normalised form. A master recipe can also be used for manufacturing a large number of batches.
A control recipe is used for manufacturing a single batch only. The control recipe is made from a master recipe by adding batch-specific information. The control recipe is individual to every batch and includes scheduling and operational information specific to the batch, for example the equipment and the exact quantities of ingredients to be used.
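The derivation of a control recipe from a master recipe can be sketched as follows (the field names and quantities are illustrative assumptions, not from S88): the master recipe carries a normalised formula, and the control recipe adds batch-specific information such as the batch number, the allocated equipment and the exact quantities.

```python
# Sketch: control recipe = master recipe + batch-specific information.
from dataclasses import dataclass, field

@dataclass
class MasterRecipe:
    name: str
    formula: dict            # normalised process inputs, e.g. kg per kg of batch

@dataclass
class ControlRecipe:
    master: MasterRecipe
    batch_id: str
    batch_size: float        # kg
    equipment: dict = field(default_factory=dict)

    def exact_quantities(self) -> dict:
        """De-normalise the master formula for this specific batch."""
        return {k: v * self.batch_size for k, v in self.master.formula.items()}

master = MasterRecipe("Recipe 1", {"water": 0.8, "concentrate": 0.2})
batch = ControlRecipe(master, batch_id="B-001", batch_size=500.0,
                      equipment={"unit": "Reactor R1"})
print(batch.exact_quantities())
```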
This hierarchical structure makes the system easy to understand; moreover, the layered structure allows the details that are not relevant at one level to be encapsulated from the others, so that each level contains only the information relevant to it.
The four recipe types have a common structure, containing the following sections: a Header including the administrative information; Formula information containing process inputs, process parameters and process outputs; Equipment Requirements including the constraints of the equipment needed; a Recipe Procedure that defines how the recipe should be produced; and Other Information. The information within these headings changes depending upon the detail of the information required, ensuring that the required information is available to the necessary people at the right time.
The recipe procedure section is made up of procedural elements, as will be discussed further in the next sections. In the general and site recipes it contains information from the process model, detailing only the sequential steps of the process, as these recipes are not equipment-dependent. The master and control recipes use information from the procedural control model, as they are equipment-dependent, and the procedure is divided up into unit procedures so that the recipe can provide the process cell with the processing requirements on a unit-by-unit basis. The control recipe procedure will be discussed in more detail in section 5.5.
The general structure of a master recipe is illustrated in Table 2, using Recipe 1 as an example. It was written omitting the general and site recipes, as is permitted by S88.01 when 'the recipe creator has the necessary process and product knowledge' [10].
The master recipe is less specific than the control recipe. Although it is equipment-dependent, with the resultant restrictions on the unit procedures, operations and phases, it does not specify the exact vessels these will be executed in, but describes the required equipment in sufficient detail for the control recipe to allocate the resources. The control recipe starts as an exact copy of the master recipe, adding the vessels allocated to the unit procedures, operations and phases, which is required for the running of the batch. Batch-specific information is also included, such as the batch number and operating information.
4. CO-ORDINATION CONTROL
Many of the operations in the process industries are examples of discrete-event dynamic
systems. Discrete process states and the transitions between states characterize these systems.
Table 2
The structure of a master recipe.

Header             Master recipe for Recipe 1
Formula            Process inputs - 10 L water at ambient temperature
Recipe Procedure
  Unit Procedure   Operation Procedure   Phases
  UP1              Fill                  Check initial conditions
  UP2              Hold                  ...
Other Information
Batch processes are examples of discrete-event dynamic systems; they include start-up and shutdown processes, valving operations, and recipe execution. Different methodologies have been proposed to model, analyse and control this type of system.
The models that describe batch production systems differ from those of continuous processes precisely because of the considerable number of simple switching actions or operations. Despite the fact that these are simple operations, the complexity of the system increases with the number of these operations and their logical interactions.
For precise modelling of batch operations, a formalism that describes the system behaviour at this level of detail is needed. Such a formalism must allow the interactions of the different process units and steps to be analysed, which in turn allows the best co-ordination control system to be developed (Fig. 4). Petri Nets and Sequential Function Charts (SFC) have been used to adequately represent and develop the co-ordination system of batch processes.
Since the early 1990s, Petri Nets (Condition and Event Petri Nets) have been used to
model and control batch systems. Yamalidou and Kantor [15] have studied
different techniques and the use of high-level Petri Nets for the modelling and optimal control of
discrete events in chemical processes. Kitajima et al. [16] proposed a systematic procedure to create
a coloured Petri Net model from an SFC and various information sources: the recipe, equipment
information and the batch schedule.
The use of Grafcet/SFC for recipe representation has been studied by Johnsson and Arzen
[17]. They have used an extended version, called Grafchart, in which Grafcet is augmented with
high-level Petri Net ideas and object-oriented programming concepts, for application in
batch control, recipe execution and resource allocation [18].
Fig.4. Physical and procedure models relationship and interaction with the process model.
C = D⁺ − D⁻
where C is the incidence matrix of the net and D⁺ and D⁻ are its post- and pre-incidence (output and input) matrices.
When a Petri Net is used to model a physical system, each condition is modelled by a place
and each event by a transition. An example is given below (Section 4.2).
The elements of the SFC are a) the steps, b) the transitions, c) the jumps, d) the alternative
branches and e) the parallel branches (Fig. 6).
There are two types of step: the initial step and the ordinary steps. A step becomes active when
the preceding transition has been satisfied and becomes inactive when the succeeding transition
has been satisfied.
In the diagram the initial step is marked with double lines along its margins, and it starts
the sequence (Fig. 6). Only one initial step can exist for each sequential section.
The transition is the condition that transfers control from one step to another. When a
transition is True, on the next cycle:
• The preceding step(s) is deactivated
• The following step(s) is activated
• The True transition between the steps is no longer solved
• The transition following the new active step is solved
The jump allows the program to continue from a different location. It can be used in two
ways: as a sequence jump or a sequence loop. One important restriction on the jump block is that
no jumps are allowed into or out of a parallel sequence area.
The alternative branch allows conditional programming of branches in the control flow of
the SFC structure; only one branch can be active at a time, and all alternative branches must
be joined back into a single branch using alternative joints or jumps.
Parallel branches allow the processing to be split into two or more sequences. One
common transition directly above the parallel branch is allowed.
The sequences are processed in parallel and independently of each other. The
parallel joint combines two or more parallel branches to form one branch.
One common transition lies directly below the parallel joint, and this transition is evaluated
only when all of the steps directly preceding it have been set.
The S88.01 standard does not specify any language for configuring and describing the recipe
sequence; such a language was not defined until S88.02. However, many suppliers
adopt the IEC 1131-3 standard for sequential control configuration. The reason is
that SFC is easy to configure and to understand, because it represents the states
and transitions graphically. SFC is the basis of the Procedural Function Charts, the language defined in
S88.02 for recipe description [20].
These two formalisms for discrete-event system modelling can be converted into one another. This
allows the theory and tools developed for Petri Net analysis to be used (Fig. 7).
The procedure proposed by Kitajima et al. [16], which can be used to convert an SFC into the
PN formalism, is as follows:
1) Convert the SFC into PN without taking into account the sharing of common resources.
2) With the information of the batch schedule and the assigned equipment, the PN is
converted into a high-level PN.
3) Modify the net to satisfy the physical constraints on capacity.
4) Check the conflicts by structural analysis and simulation of the PN.
To obtain the incidence matrix (see Section 4.1) of the SFC for a given unit procedure
and so generate a standard PN, the (i, j) element of the incidence matrix, a_ij, indicates the relation
between step i and transition j:

a_ij = 1 if an arc from step i to transition j exists
a_ij = -1 if an arc from transition j to step i exists
a_ij = 0 otherwise
The incidence matrix for the Petri Net in Fig. 7 is:

    [ -1   0   0   0   1 ]
    [  1  -1   0   0   0 ]
    [  0   1  -1   0   0 ]
    [  0   1  -1   0   0 ]
    [  0   0   1  -1   0 ]
    [  0   0   0   1  -1 ]
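The stated convention can be illustrated with a short script that assembles the incidence matrix from arc lists. The arc lists below are read off the matrix for Fig. 7 and are illustrative only; they are not taken from the original figure.

```python
# Build a Petri-net incidence matrix under the convention stated in the
# text: a[i][j] = 1 for an arc from step i to transition j, and
# a[i][j] = -1 for an arc from transition j to step i; 0 otherwise.

def incidence_matrix(n_steps, n_transitions, step_to_trans, trans_to_step):
    a = [[0] * n_transitions for _ in range(n_steps)]
    for (i, j) in step_to_trans:   # arc: step i -> transition j
        a[i][j] = 1
    for (j, i) in trans_to_step:   # arc: transition j -> step i
        a[i][j] = -1
    return a

# Arcs consistent with the 6-step, 5-transition net of Fig. 7
# (0-based indices; read off the printed matrix, for illustration).
step_to_trans = [(0, 4), (1, 0), (2, 1), (3, 1), (4, 2), (5, 3)]
trans_to_step = [(0, 0), (1, 1), (2, 2), (2, 3), (3, 4), (4, 5)]

C = incidence_matrix(6, 5, step_to_trans, trans_to_step)
```

Rows 3 and 4 coincide because steps 3 and 4 form a parallel branch fired by the same transition, as in the matrix above.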
In the S88.01 guidelines, a large part of the control program is centred at the process cell
level. From here the sequence block module governs the procedural control, which directs
equipment-oriented operations to take place in the sequence specified in the procedural
control model. Co-ordination control, performed in the operation block module using the
control module information regarding the phases, controls the initialisation, running and
termination of the operations in the units.
At the unit level, process control governs the processing of the batch that is currently
associated with the unit. The basic control in each unit receives the information as to which
valves are to open or close from the operation block module, and detects whether any of the valves
have malfunctioned or are blocked. Co-ordination control communicates between the units
regarding the reactant volume and temperature, and the state of the valves and the flow through them.
At the control module level, the process control is simplified, as procedural control is not
performed. For an operation, the co-ordination control block module sends signals for the
valves that are to be opened. The unit level checks that these are open; if not, the
operation does not proceed, but no alternative route is proposed. Basic control at the control
module level maintains a variable at its desired value, which may involve a PID controller.
5. INTEGRATION
The integration of the plant floor with the planning and scheduling system involves
different levels of the plant information hierarchy. It is therefore necessary to determine
how to convert the information at each level into the information required by the other levels to
execute their corresponding tasks. Fig. 9 shows how the information involved in each
level of the recipe is transformed into control actions (solid arrows) by the physical model
represented at the right-hand side. Both the recipe and the physical model follow the description
given in Section 3, as set by the S88 standard. In the background are indicated the
corresponding levels of the Computer Integrated Manufacturing (CIM) model. As can be
seen, a key component of successful integration is the co-ordination level, where the control
recipe is decomposed into its elementary actions (phases and equipment control). Before describing
the developed integration architecture, some concepts related to software integration must be
presented.
In the integration of these heterogeneous systems to build a complex application, we can
distinguish three types of elements [21]: a) Basic Technologies; b) Integration Technologies
and c) Integration Architecture.
Fig. 9. Information flow linking the recipe model and the physical model (solid arrow)
Fig. 10. Architecture implemented in the pilot plant (PROCEL) following CIM Standards.
On top of these elements the system integration has to be built. Each specific-functionality
software module has a common application program interface (API) through which it is made accessible to
the others.
A pilot plant has been built, and a batch and continuous control system implemented. The
pilot plant is used as a test platform to investigate several key issues:
• Co-ordination control problems and recipe handling and execution,
• Information system integration between co-ordination control and scheduling and
planning systems, and
• Real-time, online operation and control in closed loop including reactive scheduling.
The pilot plant is fully connected by remote control hardware and allows multiple batch
and continuous configuration possibilities. All the connections can be configured by software
through electrovalves.
Configured as a batch process, it is possible to have several batches in the plant at the
same time, and the batches may have the same recipe or different recipes. The pilot plant
constitutes an appropriate scenario for validating operation strategies, for developing and
testing consistent software for batch and continuous control, for energy integration studies
and, mainly, for testing integration architectures that close the information loops.
Fig. 13. Information flow across the process originated by the Master Recipe (Phase level).
Fig. 14. Bi-directional information flow across all layers (plant data, supervisory and co-
ordination system, scheduling and planning).
heating step, without a stabilising stage. Depending upon the transfer conditions, more or less
complex systems can be represented, and slight changes made to the production process.
The corresponding Sequential Function Chart (SFC) for these two recipes appears in
Fig.17.
Table 3
Recipe description of Configurations 1 and 2.

Configuration 1    Configuration 2
Charge R1          Charge T1
Heat R1            Discharge T1
Discharge R1       Charge R1
Clean R1           Heat R1
Charge T1          Discharge R1
Hold T1            Clean R1
Discharge T1       Charge R2
Charge R2          Heat R2
Heat R2            Discharge R2
Discharge R2       Clean R2
Clean R2
Fig. 17. Sequential Function Chart for the recipes described in Table 3 (R1 and T1 only).
7. FINAL CONSIDERATIONS
Batch processing remains the preferred mode of operation for the manufacture of high
value-added products. Furthermore, hybrid systems are gaining widespread interest precisely
because of the flexibility introduced by the batch procedure. However, the major obstacle to
improving the performance of a batch process is the complexity of its automation.
The recent development of standards for the detailed operation of batch processes has set the
road map for further research in batch process control. In this chapter, a framework has been
presented that integrates the different manufacturing control levels and closes the
information loops among them. Its successful implementation in a pilot plant demonstrates
the feasibility of the solution approach. Moreover, the platform is being further tested in
industrial scenarios. Specifically, it is being implemented in sugar manufacturing from sugar
cane, where the batch boiling and crystallisation steps involve complex batch recipes. Initial
results have also been very satisfactory.
The open and modular characteristics of the software created, as well as its strict adherence
to present and emerging standards, should ease application to other scenarios. It can also be
concluded that the pathway to real-time optimization in batch processes is becoming a
reality.
ACKNOWLEDGEMENTS
Financial support for this research from the "Generalitat de Catalunya" (FI programs and
project GICASA-D) is gratefully acknowledged. Support was also received in part from the European
REFERENCES
Chapter D3

a PSE Research Group, Wolfson Department of Chemical Engineering, Technion I.I.T.,
Haifa 32000, Israel
b Department of Chemical and Biomolecular Engineering, University of Pennsylvania,
Philadelphia, PA 19104-6393, U.S.A.
c Department of Fuels and Chemical Engineering, University of Utah, Salt Lake City,
UT 84112, U.S.A.
1. INTRODUCTION
Espresso coffee is prepared in a machine that injects high-pressure steam through a cake of
ground coffee. In a conventional machine, the user manually loads ground coffee into a metal
filter cup, locks the cup under the steam head, and then activates the steam heater. The
manufacturer of the espresso machine would like to guarantee that each cup of coffee
processed by the machine has a consistent quality. It is noted that the quality of each cup of
espresso depends on a large number of variables, among them, the grade and freshness of the
coffee beans, the extent to which the beans have been ground, the steam pressure, the degree
to which the ground coffee is packed in the metal filter holder, and the total amount of water
used. Since many of the sources of product variability are out of the manufacturer's control,
the development of an improved espresso machine would be driven by a desire to either
reduce the level of influence of these sources or eliminate as many of them as necessary to
ensure a satisfactory product.
This chapter describes the role of integrated design and control, together with six-sigma
methodology [1, 2], in the manufacture of products, such as espresso machines, integrated
circuits, and commodity chemicals, which are either defect-free, in the case of manufactured
items, or delivered on-specification, as in commodities. As will be shown, these aims can be
achieved by utilizing six-sigma methodology and its statistical tools to quantify quality and
more importantly, loss of quality and its cost. These tools assist in identifying the main
sources of product variance, which are then attenuated or eliminated by appropriate integrated
design of the manufacturing process and its control system.
Let x be a vector of process states, y be a vector of measured process outputs, and z be a
vector of process quality variables, or attributes of the manufactured product, that need to
meet specifications. In six-sigma methodology, described in Section 2, all of the components
of z are referred to as critical-to-quality (CTQ) variables. In the framework of integrated
product/process design and control, the degrees of freedom that are available to meet the CTQ
targets are u, a vector of manipulated variables, d, a vector of uncontrolled, but possibly
measurable, disturbances, and θ, a vector of design variables. The relationship between these
variables is commonly expressed as follows:

ẋ = f{x, u, d, θ} (1)
y = g{x, u, d, θ} (2)
z = h{x, u, d, θ} (3)
Note that Eq. (3) implies that the CTQ variables can be expressed either in terms of the
process states, inputs and process parameters, or, more commonly, in terms of the process
outputs. More generally, the process/product design involves not only the selection of
continuous parameters, such as the steam pressure and total surface area of the coffee filter in
the novel espresso machine, but also the structure of the product or process itself, commonly
expressed mathematically in terms of binary operators. For the design of the espresso
machine, these parameters could represent alternative configurations of the steam generation
device or whether to permit one or two coffee filters to be positioned in the machine.
This chapter begins by describing the mathematical basis for six-sigma methodology, and
its role in quantifying the cost of manufacturing defects or abnormal operation in processing,
and in guiding manufacturing in the reduction of product variance. Next, its role in product
design is described, showing how six-sigma methodology is enhanced by incorporating
integrated design and control into the product design process. The chapter concludes with
examples of how the combined approach assists in improving product manufacturing and
processing.
2.1. Definitions
Six-sigma (6σ) is a structured methodology for eliminating defects, and hence, improving
product quality in manufacturing and services. The methodology aims at identifying and
reducing the variance in product quality, and involves a combination of statistical quality
control, data-analysis methods, and the training of personnel. The term six-sigma defines a
desired level of quality: 3.4 defects per million opportunities (DPMO). The symbol σ (sigma)
is the standard deviation of the value of a quality variable, a measure of its variance, which is
assumed to have a normal distribution. Figure 1a shows such a distribution with σ = 2. Note
that the distribution is normalized such that the total area under the curve is unity, with a
probability density function given by:

f(x) = (1/(σ√(2π))) exp[−((x − μ)/σ)²/2], (4)

where f(x) is the probability density of the quality variable at the value x, and μ is the average
value of x. Assuming that operation at 3σ on either side of μ is considered normal, this defines
the upper control limit (UCL) at μ + 3σ, and the lower control limit (LCL) at μ − 3σ. As shown in
Figure 1a, the number of defects per million opportunities (DPMO) above the UCL is:

DPMO = 10⁶ × Pr{x > μ + 3σ} = 1,350. (5)
This means that 1,350 DPMO can be expected in a normal sample above the UCL and the
same number below the LCL. It is important, however, that the manufacturing process be
insensitive to process drifts. In accepted six-sigma methodology, a worst-case shift of 1.5σ in
the distribution of quality is assumed, to a new average value of μ + 1.5σ, as shown in Figure
1b. For operation at 3σ, the expected DPMO above the UCL is 66,807, and below the LCL, 3.
This gives a total expected DPMO of 66,810, a significant deterioration in quality. In contrast,
suppose that the variance can be reduced, to σ = 1. Assuming operation at 6σ on either side of
the average value of the distribution, μ = 0, this defines the UCL at μ + 6σ and the LCL at μ −
6σ, as shown in Figure 2a, with 1 defect per billion opportunities on either side of the
acceptance limits, which are insignificant defect levels. The improvement in performance is
apparent when considering a shift of 1.5σ as before; for 6σ operation, the DPMO (above the
UCL) increases to only 3.4, as shown in Figure 2b.
Fig. 2. Distribution of product quality at 6σ, with σ = 1: (a) Normal operation at μ = 0;
(b) Abnormal operation shifted to μ + 1.5σ.
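The DPMO figures quoted above follow directly from the tail probability of the normal distribution, and can be reproduced with a few lines of Python (a sketch for verification, not part of the chapter's methodology):

```python
import math

def tail_dpmo(z):
    """Defects per million opportunities beyond z standard deviations
    (one-sided upper tail of a standard normal distribution)."""
    return 1e6 * 0.5 * math.erfc(z / math.sqrt(2.0))

# Centred operation with 3-sigma limits: ~1,350 DPMO beyond each limit.
centred_3s = tail_dpmo(3.0)

# 3-sigma limits with the worst-case 1.5-sigma mean shift: the upper
# limit is now only 1.5 sigma from the mean -> ~66,807 DPMO.
shifted_3s = tail_dpmo(3.0 - 1.5)

# 6-sigma limits with the same 1.5-sigma shift: the upper limit is
# 4.5 sigma from the mean -> ~3.4 DPMO.
shifted_6s = tail_dpmo(6.0 - 1.5)
```

These reproduce the 1,350, 66,807 and 3.4 DPMO values used in the text.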
DPMO = 10⁶ × 5/(30 × 24) = 6,944.
Figure 3 gives the sigma level for this DPMO as 3.8. If improved operations were to
reduce the specification violations to 0.5 hour per month, the DPMO would be reduced by a
factor of 10, leading to an increase in the sigma level to 4.7.
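The arithmetic of this example (5 off-specification hours in a 30-day month of 30 × 24 opportunity-hours, and the tenfold improvement) can be checked directly:

```python
# Distillation-column example: DPMO from hours of off-specification
# operation per month (30 days x 24 hours of opportunities).
hours_per_month = 30 * 24

dpmo = 1e6 * 5.0 / hours_per_month        # 5 h/month off-spec
improved = 1e6 * 0.5 / hours_per_month    # 0.5 h/month off-spec
```

The first value rounds to the 6,944 DPMO quoted above, and the second is smaller by exactly the stated factor of 10.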
The increased sigma level is a consequence of the reduction in the variance in the CTQ
variable, brought about by improved operation of the column, possibly achieved by
enhancements to the process design and/or its control system. Evidently, one expects lower
sigma levels for processes in which abnormal operation is prevalent than for those in which
it seldom occurs. Thus, for example, a crude-oil distillation unit with frequent feedstock
changes is expected to have a lower sigma level than one relying on a single feedstock, since
feedstock changes cause process upsets that propagate through the entire unit, leading to off-
specification product until corrected by the control system.
The expected number of defects presented in Figure 3 applies to a single manufacturing
step. Usually, the manufacture of devices involves a number of steps. For n steps, and
assuming that all defective components of the device are removed from the production
sequence at the step where they occur, the overall defect-free throughput yield, TY, is:

TY = Π_{i=1..n} (1 − DPMO_i/10⁶), (6)

where DPMO_i is the expected number of defects per million opportunities in step i. If the
DPMO is identical in each step, Eq. (6) reduces to:

TY = (1 − DPMO/10⁶)ⁿ. (7)
The fraction of the production capacity lost due to defects is 1 − TY. For example,
consider the manufacture of a device involving 40 steps, each of which operates at 4σ. From
Figure 3, the expected DPMO is 6,210 per step, so TY = (1 − 0.00621)⁴⁰ = 0.779. Thus, 22%
of production capacity is lost due to defects, rendering the overall manufacturing operation a
2.3σ process. In contrast, if each of the 40 steps operates at 6σ, TY = (1 − 3.4/10⁶)⁴⁰ = 0.99986,
corresponding to about one faulty device for every 10,000 produced, and in this case, the
overall operation is a 5.2σ process.
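The throughput-yield calculation of Eq. (7) and the two numerical cases above can be sketched as:

```python
def throughput_yield(dpmo, n_steps):
    """Overall defect-free yield for n identical steps, Eq. (7):
    TY = (1 - DPMO/10^6)^n."""
    return (1.0 - dpmo / 1e6) ** n_steps

# 40 steps at 4-sigma: 6,210 DPMO per step (from the sigma chart).
ty_4s = throughput_yield(6210, 40)   # ~0.779 -> ~22% of capacity lost

# 40 steps at 6-sigma: 3.4 DPMO per step.
ty_6s = throughput_yield(3.4, 40)    # ~0.99986
```

Both values agree with those quoted in the text, confirming how sharply the lost capacity 1 − TY grows with per-step DPMO when many steps are chained.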
In the preceding discussion, it has been assumed that defective devices are eliminated in
production, leaving only the impact on reduced throughput yield. In the likely event that a
fraction of the defects are undiscovered and lead to shipped devices that are faulty, the impact
on sales resulting from customer dissatisfaction could be much greater. Noting that many
manufacturing operations involve hundreds of steps (e.g., integrated-circuit chip
manufacturing), it is clear that high levels of reliability, as expressed by low DPMO values,
are generally required to ensure profitable manufacture. This is the driving force behind the
extensive proliferation of six-sigma methodology [4].
b) Measure: The CTQ variables are monitored to check their compliance with the LCLs
and UCLs. Most commonly, univariate statistical process control (SPC) techniques,
such as the Shewhart chart, are utilized (see Chapter 28 in [5]). The data for the critical
quality variables are analyzed and used to compute the DPMO. This enables the
sigma level of the process to be assessed using Figure 3. It is noted that the DPMO is
relatively easy to compute for device manufacture, although it is also readily applied to
improving continuous processes [3, 4]. Continuing the PVC extrusion example,
suppose this analysis indicates operation at 3σ, with a target to attain 5σ performance.
c) Analyze: When the sigma level is below its target, steps are taken to increase it,
starting by defining the most significant causes for the excessive variability. This is
assisted by a systematic analysis of the sequence of steps in the manufacturing process,
and the interactions between them. Using this analysis, the common root cause of the
variance is identified. Continuing the PVC extrusion example, note that several factors
contribute to an excessively high variance in product quality, among them, the
variance in the purity of the PVC pellets, the variance in the fraction of volatiles in the
pellets, and the variance in the operating temperature of the steam heater. Clearly, all
of these factors interact, but suppose that after analysis, it is determined that the
variance in the operating temperature has the greatest impact on quality.
d) Improve: Having identified the common root cause of variance, it is eliminated or
attenuated by redesign of the manufacturing process or by employing process control.
Continuing the PVC tubing example, one possible solution would be to redesign the
steam heater. As will be demonstrated, systematic process redesign can improve the
controllability and resiliency of a process, and hence, reduce the variance in controlled
output variables. Alternatively, a feedback controller could be installed, which
manipulates the steam valve to enable tighter control of the operating temperature
(through control of the steam pressure). In this way, the variance in the temperature is
transferred to that of the mass flow rate of the utility stream (steam).
e) Control: After implementing steps to reduce the variance in the CTQ variable, the
results are evaluated, and possible further improvements are considered. Thus, steps
(b) to (e) in the DMAIC procedure are repeated to improve process quality in a
stepwise fashion. Note that achieving 6a performance is rarely the goal, and seldom
achieved.
Traditionally, integrated process design and control has focused on the optimization of the
parameters of the synthesized process and its desired operating point to minimize an objective
function, formally defined as the nonlinear programming (NLP) problem:
max min J{x, u, d, θ}

subject to:

ẋ = f{x, u, d, θ} (8)
y = g{x, u, d, θ}
y ∈ Y
In Eq. (8), the constraints are the state and output equations. This NLP permits the
simultaneous design of the process and its control system, subject to worst-case disturbance
scenarios (e.g. see [7]). Note that this formulation supports the definition of a partial control
strategy [8, 9], where the output variables are required to lie inside a hypercube, Y, rather than
meet specific setpoints. Eq. (8) can also be formulated in the steady-state mode, by expressing
the first constraint in its stationary form. Of particular note is the use of the nonlinear
disturbance cost (DC, [9]) to guide the designer to a process that can maintain output targets
inside the partial control hypercube.
In practice, however, the ability to hold the output variables within a prescribed hypercube
is of limited interest, and, in general, is required only to ensure the stability of the process. Of
greater interest is the need to meet CTQ requirements, as formalized by an extended NLP:
max min J{x, u, d, θ}

subject to:

ẋ = f{x, u, d, θ}
y = g{x, u, d, θ} (9)
y ∈ Y
z = h{x, u, d, θ}
z ∈ Z
Note that in contrast with the conditions of Eq. (8), the conditions of Eq. (9) include the
definition of the desired operating range for the CTQ variables, Z. This implies that the
appropriate hypercube in y-space needs to be defined, for which the z-space meets the desired
sigma level of the process.
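The max-min structure of Eqs. (8)-(9) can be illustrated with a deliberately small brute-force sketch: choose a design parameter that maximizes the worst-case (over a finite disturbance set) objective, while keeping the quality variable inside its acceptance interval. The model, grids and cost below are invented for illustration and are not the chapter's formulation, which would be solved with a proper NLP algorithm.

```python
TOL = 1e-9  # numerical tolerance on the acceptance interval

def z_model(theta, u, d):
    # Hypothetical quality response: a larger design parameter theta
    # attenuates the disturbance's effect on the quality variable z.
    return 1.0 + u - d / (1.0 + theta)

def worst_case_score(theta, u_grid, d_grid, z_lo, z_hi):
    """min over disturbances d of the best feasible objective value
    (negative design cost + control effort); None if some disturbance
    cannot be rejected at all."""
    worst = None
    for d in d_grid:
        feasible = [-(0.1 * theta + abs(u))          # cost of design + control move
                    for u in u_grid
                    if z_lo - TOL <= z_model(theta, u, d) <= z_hi + TOL]
        if not feasible:
            return None                              # z cannot be kept inside Z
        best = max(feasible)                         # best control action for this d
        worst = best if worst is None else min(worst, best)
    return worst

u_grid = [i / 10.0 for i in range(-10, 11)]          # candidate control moves
d_grid = [-0.5, 0.0, 0.5]                            # disturbance scenarios
candidates = [0.0, 1.0, 2.0]                         # candidate designs theta
scores = {th: worst_case_score(th, u_grid, d_grid, 0.9, 1.1) for th in candidates}
feasible_designs = [th for th in candidates if scores[th] is not None]
best_theta = max(feasible_designs, key=lambda th: scores[th])
```

In this toy setting the undamped design (theta = 0) needs large control moves to keep z in [0.9, 1.1], so a moderately damped design achieves a better worst-case score, which is the trade-off the integrated formulation is meant to capture.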
4. EXAMPLE APPLICATIONS
This section presents two examples that show how product quality is improved by reducing
the variance in CTQ variables. In the first example, the effluent temperatures in a heat
exchanger network are required to lie inside control limits, with improvements made in the
HEN design and the control configuration to achieve the desired sigma level. The second
example, which is more qualitative, shows how similar ideas are applied in the design of a
new product.
Fig. 5. Heat-exchanger network, showing heat capacity flow rates in millions of (MM) Btu/hr.
f1{x} = Q1 − F1Cp1(T0 − T1) = 0 (10)
f2{x} = Q1 − F3Cp3(θ4 − θ3) = 0 (11)
f4{x} = Q2 − F1Cp1(T1 − T2) = 0 (13)
f6{x} = Q2 − U2A2[(T1 − θ2) − (T2 − θ1)]/ln[(T1 − θ2)/(T2 − θ1)] = 0 (15)
f7{x} = Q3 − F1Cp1(T2 − T3) = 0 (16)
f8{x} = Q3 − F3Cp3(θ1 − θ0) = 0 (17)

where U_i and A_i are the heat transfer coefficient and heat transfer area for exchanger i,
respectively, such that: U1A1 = 0.0811 MMBtu/h °F, U2A2 = 0.3162 MMBtu/h °F, and U3A3
= 0.1386 MMBtu/h °F. The number of independent manipulated variables is N_Manipulated =
N_Variables − N_Externally Defined − N_Equations = 15 − 4 − 9 = 2, and the pairings for control purposes
can be selected using the relative gain array (RGA). To accomplish this, a linearized model is
generated using the following procedure:
1. The nonlinear state equations, f{x} = 0, in Eqs. (10)-(18) are solved for the nominal
values of the manipulated variables, u = [F2, F3]ᵀ, disturbances, d = [F1, T0]ᵀ, and
constants θ0 and θ1, to determine the 9 state variables: x = [T1, T2, T3, θ2, θ3, θ4, Q1, Q2,
Q3]ᵀ. This is accomplished using an appropriate numerical method (e.g., the Newton-
Raphson method).
2. The output vector, y = [θ2, θ4]ᵀ, is recomputed for small positive and negative
perturbations of magnitude Δu_i to each manipulated variable, u_i, one at a time, with the
results stored in the vectors y_p,i and y_n,i, respectively. Then, column i of the steady-state
gain matrix, P(0), is computed: P_ji(0) = Δu_i,max (y_p,i,j − y_n,i,j)/(2Δu_i), j = 1,...,3. Note that the
factor Δu_i,max scales the input variables such that |u_i| ≤ 1.
3. The output vector is recomputed for small positive and negative perturbations of
magnitude Δd_i to each disturbance variable, d_i, one at a time, with the results stored in
the vectors y_p,i and y_n,i, respectively. Then, column i of the steady-state disturbance gain
matrix, P_d(0), is computed: P_d,ji(0) = Δd_i,max (y_p,i,j − y_n,i,j)/(2Δd_i), j = 1,...,3. The disturbance gain
matrix is scaled arbitrarily relative to the inputs using the scaling Δd_max = [5%, 5°F]ᵀ.
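The perturbation step of this procedure amounts to a central-difference estimate of the steady-state gain matrix. A minimal sketch, using an invented linear two-input/two-output model in place of the solved HEN equations, is:

```python
# Sketch of step 2 above: estimate a steady-state gain matrix by
# perturbing each input in turn (+du and -du) and differencing the
# outputs. The model g() below is invented for illustration; in the
# chapter the outputs come from re-solving the nonlinear balances.

def g(u):
    u1, u2 = u
    return [2.0 * u1 + 0.5 * u2, -0.3 * u1 + 1.5 * u2]

def gain_matrix(g, u0, du, du_max):
    n_out = len(g(u0))
    P = [[0.0] * len(u0) for _ in range(n_out)]
    for i in range(len(u0)):
        up = list(u0); up[i] += du
        un = list(u0); un[i] -= du
        yp, yn = g(up), g(un)
        for j in range(n_out):
            # central difference, scaled so that |u_i| <= 1 corresponds
            # to a full-scale move of du_max[i]
            P[j][i] = du_max[i] * (yp[j] - yn[j]) / (2.0 * du)
    return P

P = gain_matrix(g, [1.0, 1.0], 1e-4, [1.0, 1.0])
```

For the linear model above the central differences recover the true gains exactly (to rounding), which provides a convenient check of the implementation before applying it to the nonlinear balances.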
Since the nominal values of the manipulated variables are u = [F2, F3]ᵀ = [1.00, 1.00]ᵀ, the
maximum perturbations are Δu_max = [1.00, 1.00]ᵀ. The resulting linearized model is:
Note that the gains in Eq. (19) are presented as the change in °F in response to a full-scale
change of each input. Thus, for example, the linear model predicts a 4.92 °F increase in T3 in
response to a 5% increase in F1. The steady-state RGA [11] is computed using P1(0):

Λ = P1(0) ⊗ (P1(0)⁻¹)ᵀ, (20)

where ⊗ is the Schur product. The RGA indicates that a control system paired diagonally, i.e.
θ2 − F2 and θ4 − F3, shown in Figure 6, provides responses that are almost perfectly decoupled.
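The Schur-product RGA for a 2×2 system can be computed directly. The gain values below are placeholders chosen to be near-diagonal; they are not the gains of Eq. (19), which did not survive reproduction here.

```python
# RGA for a 2x2 steady-state gain matrix: Lambda = P (Schur-product)
# (P^-1)^T. Placeholder near-diagonal gains, for illustration only.

def rga_2x2(P):
    (a, b), (c, d) = P
    det = a * d - b * c
    inv_T = [[d / det, -c / det],    # (P^-1)^T for a 2x2 matrix
             [-b / det, a / det]]
    return [[P[i][j] * inv_T[i][j] for j in range(2)] for i in range(2)]

P = [[2.0, 0.1], [-0.2, 1.5]]        # hypothetical near-diagonal gains
Lam = rga_2x2(P)
```

Each row (and column) of the RGA sums to one; diagonal elements close to unity, as here, indicate that the diagonal pairing is nearly decoupled, which is the conclusion drawn for the HEN.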
Next, the resiliency of the HEN is examined by computing the linear disturbance cost (DC,
[12]) at steady state for disturbances of ±5% in F1 and ±5 °F in T0:

[ΔF2(0), ΔF3(0)]ᵀ = −P1⁻¹(0)·P_d(0)·[ΔF1, ΔT0]ᵀ, DC = ||[ΔF2(0), ΔF3(0)]ᵀ||₂ (21)
The values of the two manipulated variables, computed to completely reject the effect of the
disturbances on θ2 and θ4, lead to changes in T3, computed by substituting Eq. (21) into Eq.
(19):

ΔT3(0) = (P_d,2(0) − P2(0)·P1⁻¹(0)·P_d,1(0))·[ΔF1, ΔT0]ᵀ (22)

where P2(0) and P_d,2(0) denote the rows of the input and disturbance gain matrices
corresponding to T3, and P1(0) and P_d,1(0) those corresponding to θ2 and θ4.
Table 1 shows the changes in the control variables, ΔF2 and ΔF3 (assuming perfect
control), the disturbance cost, and the resulting change in T3, computed using Eq. (22) for four
disturbance vectors. The results indicate that perfect disturbance rejection is achieved for θ2
and θ4 with negligible control effort. However, the uncontrolled temperature, T3, is
significantly perturbed, with the worst-case disturbance arising when ΔF1 and ΔT0 are in opposite
directions. Variations of ±5% in F1 and ±5 °F in T0 lead to variations of approximately ±4°F
in T3.
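The disturbance-cost computation of Eq. (21) is a small linear-algebra exercise and can be sketched as follows. The gain matrices below are placeholders, not the chapter's actual HEN gains.

```python
import math

# Linear disturbance cost in the spirit of Eq. (21): the (scaled)
# control moves that exactly cancel a disturbance's effect on the
# controlled outputs are du = -P1^{-1} Pd d, and DC = ||du||_2.

def solve_2x2(A, b):
    """Solve A x = b for a 2x2 system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (-A[1][0] * b[0] + A[0][0] * b[1]) / det]

def disturbance_cost(P1, Pd, d):
    rhs = [-(Pd[0][0] * d[0] + Pd[0][1] * d[1]),
           -(Pd[1][0] * d[0] + Pd[1][1] * d[1])]
    du = solve_2x2(P1, rhs)
    return du, math.hypot(du[0], du[1])

P1 = [[2.0, 0.1], [-0.2, 1.5]]      # input gains (placeholder values)
Pd = [[0.05, 0.02], [0.01, 0.04]]   # disturbance gains (placeholder values)
du, dc = disturbance_cost(P1, Pd, [1.0, 1.0])
```

A DC well below unity, as produced by these placeholder gains, corresponds to the "negligible control effort" reported in Table 1: full disturbance rejection requires only a small fraction of the available input range.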
To check these findings, dynamic simulations of the process, using PI controllers, are
performed with HYSYS.Plant. At steady state, the hot stream of n-octane at 2,350 lbmol/h is
cooled from 500 to 300°F using n-decane as the coolant, with F2 = 3,070 lbmol/h and F3 =
1,200 lbmol/h. Note that these species and flow rates are chosen to match the heat-capacity
flow rates defined by [10], with F2 slightly increased to avoid temperature crossovers in the
heat exchangers due to temperature variations in the heat capacities. Additional details of the
HYSYS.Plant simulation are:
Table 1
Input changes and Disturbance Cost for the original HEN

ΔF1    ΔT0     ΔF2       ΔF3      DC = ||u||₂    ΔT3
+5%    0       0.0253    0.0184   0.0313         3.79
+5%    +5°F    0.0246    0.0447   0.0511         3.59
0      +5°F    -0.0007   0.0264   0.0264         -0.20
-5%    +5°F    -0.0261   0.0080   0.0273         -4.00
(a) The tubes and shells for the heat exchangers provide 2 min residence times.
(b) The feed pressures of all three streams are at 250 psia, with nominal pressure drops of 5
psia defined for the tubes and shells. Subsequently, these pressure drops are computed
based on the equipment sizes and the pressure-flow relationships.
(c) Controllers are tuned using the IMC-PI rules [13].
The regulatory response shown in Figure 7 indicates that, as predicted by the DC analysis,
even the worst-case disturbance has little effect on the two manipulated variables, whose
control loops are decoupled, as indicated by the RGA analysis. Moreover, the uncontrolled
output, T3, exhibits offsets of about ±4.5°F, which compare well with the value of ±4°F
predicted by the linear DC analysis. Although both θ2 and θ4 are maintained within the
desired operating window, the large variability in T3 violates the control limits on this
variable, with a DPMO value of 633,330, equivalent to a sigma level of 1.17 (see Figure 3).
Clearly, the process needs to be improved significantly.
As discussed in [6], it is often necessary to augment the process degrees-of-freedom to
meet control objectives, either by addition of trim utility exchangers, or by adding bypasses,
as illustrated in Figure 8. This study now focuses on the use of resiliency analysis to select
between these design configurations and to adjust the nominal operating conditions.
Fig. 7. Response of HEN without bypass to the worst-case disturbances: (a) Normalized
changes in F1 (solid) and T0 (dashed); (b) Tracking errors (θ2 - solid; θ4 - dashed; T3 - dotted;
UCL and LCL - dot-dashed); (c) Manipulated variables (F2 - solid; F3 - dashed).
The PFD for the modified HEN, including a bypass around E-102 to eliminate the offsets
in the third target temperature, is shown in Figure 8. Resiliency analysis is used to determine
the required bypass fraction. The energy balances involve 17 variables: Fi, Fi, F3, To, T\, T%
T3, 80, 0i, 82, 63, 83,84, Q\, Qi, Qi and <j>, two of which, 0o and 81, are assumed to be fixed,
and two, F\ and 7b, are considered to be external disturbances. The first six equations, (10)-
(15), for the HEN without bypasses apply. For heat exchanger E-102 and its bypass, the
material and energy balances are:
In Eq. (25), the product C/3^3 is identical to that for the network without bypasses (i.e.
0.1386 MM Btu/h °F). As the bypass fraction, <|), increases, K3 increases beyond unity,
corresponding to an increase in the heat-transfer area. The number of independent
m a n i p u l a t e d v a r i a b l e s i s N M a n i p u i a t e d = N V a n a b k s - N E x t e m a i i y Defined- NEquations = 1 7 - 4 - 10 = 3.
This leaves F2, FT, and <j) as the manipulated variables, which are paired with the controlled
variables, 82, 84 and T3.
A linearized model is generated and used to assist in the selection of an appropriate bypass
fraction, φ. The procedure followed for the HEN without bypasses is used, parametrized by
values of φ. Since the nominal values of the manipulated variables are u = [F2, F3, φ]T = [1, 1,
φ]T, the maximum perturbations are Δumax = [1, 1, φ]T. For example, for φ = 0.1, the
linearized model is:
Table 2
Input Changes and Disturbance Cost for the HEN with φ = 0.1
With the nominal bypass fractional flow increased to φ = 0.25, the linearized model is
recomputed:
This RGA is similar to that obtained with φ = 0.1, again indicating a diagonal pairing, as
shown in Figure 9. Next, the resiliency is tested, with the results reported in Table 3. Note that
when φ = 0.25, the disturbance rejection is nearly acceptable, with DCmax = 1.1, only slightly
above unity.
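The RGA behind these pairing decisions is the element-wise product of the steady-state gain matrix with the transpose of its inverse. For a 2x2 system it has a closed form, which the following sketch illustrates with a hypothetical gain matrix (not the HEN model, whose numerical gains come from the linearization above; the function name is ours):

```python
def rga_2x2(g11, g12, g21, g22):
    """Relative Gain Array of a 2x2 steady-state gain matrix.
    lambda_11 = 1 / (1 - (g12*g21)/(g11*g22)); rows and columns sum to 1."""
    lam = 1.0 / (1.0 - (g12 * g21) / (g11 * g22))
    return [[lam, 1.0 - lam], [1.0 - lam, lam]]

# A nearly diagonal gain matrix gives lambda_11 close to 1,
# supporting a diagonal pairing, as in the HEN example.
rga = rga_2x2(1.0, 0.1, 0.1, 1.0)
```

Values of the diagonal relative gains close to unity indicate weak loop interaction and support the diagonal input-output pairing.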
Clearly, the resiliency of the HEN increases with the nominal bypass fraction, but at the
cost of increased heat-transfer area. Table 4 shows the trade-off between resiliency and heat-
transfer area. Note that while only 12% additional heat-exchange area is required for φ = 0.1,
the resiliency is inadequate. In contrast, when φ = 0.30, the resiliency is satisfactory (with DC
significantly lower than unity), but the heat-transfer area is doubled. A good compromise is to
select φ = 0.25, which approximates the desired resiliency, while requiring only 55% more
heat-exchange area.
Table 3
Input changes and Disturbance Cost for the HEN with φ = 0.25
ΔF1    ΔT0     ΔF2       ΔF3      Δφ      DC = ||u||2
+5%    0       -0.0010   0.051    -0.93   0.93
+5%    +5°F    -0.0003   0.075    0.75    0.75
0      +5°F    0.0007    0.025    0.18    0.18
-5%    +5°F    0.0017    -0.026   1.11    1.11
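The disturbance cost in the last column of Table 3 is simply the Euclidean norm of the scaled manipulated-variable moves needed to reject each disturbance, which can be checked directly (the function name is ours):

```python
from math import sqrt

def disturbance_cost(du):
    """DC = ||u||_2, the 2-norm of the scaled manipulated-variable moves."""
    return sqrt(sum(v * v for v in du))

# First and last rows of Table 3, as (dF2, dF3, dphi):
print(round(disturbance_cost([-0.0010, 0.051, -0.93]), 2))   # about 0.93
print(round(disturbance_cost([0.0017, -0.026, 1.11]), 2))    # about 1.11
```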
Table 4
Trade-off between the heat-exchanger area and bypass fraction
φ      DC = ||u||2    Relative heat-transfer area
0.10   12.3           1.12
0.15   4.63           1.21
0.20   2.16           1.33
0.25   1.11           1.55
0.30   0.58           2.05
The C&R analysis in the steady state predicts the superior performance of the modified
HEN, which allows all three target temperatures to be controlled at their setpoints in the face
of disturbances in the feed flow rate and temperature of the hot stream. More specifically, the
steady-state RGA indicates that a decentralized control system can be configured for the
modified HEN in which θ2 - F2, θ4 - F3, and T3 - φ are paired, and in which the first loop is
almost perfectly decoupled, with moderate coupling between the other two loops. Finally,
aided by DC analysis, the nominal bypass fraction is selected to be 0.25, providing the best
trade-off between increased plant costs and adequate resiliency.
Given the design decision to use φ = 0.25, based upon the steady-state C&R analysis,
verification is performed by dynamic simulations with HYSYS.Plant, as before. The bypass
valve V-3 is sized carefully, ensuring that the nominal bypass fraction is 0.25, with the
nominal valve position being 50% open (selecting a linear characteristic curve). The
regulatory response of the new configuration is shown in Figure 10. Note that the design with
φ = 0.25 rejects the worst-case disturbance with no saturation, indicating that the DC analysis
is slightly conservative. In addition, the first control loop (θ2 - F2) is perfectly decoupled,
with slight interactions seen in the other two loops, again as predicted by the static RGA
analysis.
Fig. 10. Response of HEN with bypass to the worst-case disturbance: (a) Normalized changes
in F1 (solid) and T0 (dashed); (b) Tracking errors (θ2 - solid; θ4 - dashed; T3 - dotted; UCL
and LCL - dot-dashed); (c) Manipulated variables (F2 - solid; F3 - dashed; V-3 - dotted).
As with the original configuration, only T3 violates its specified operating range, but here,
just the LCL is violated, and for a small fraction of the operating cycle (two samples in
1,000). Thus, DPMO = 2,000, equivalent to a sigma level of 4.38, which meets the target.
The sigma level of the process can be further increased by reducing the frequency of
disturbances that affect the HEN, possibly by making improvements in process operations.
Note that increasing the nominal bypass fraction only increases the capital investment, with
little or no expected reduction in process variance.
Today, the coffee industry is globally situated, employing more than 20 million people. As
a commodity, coffee ranks second only to petroleum in dollars traded worldwide.
Furthermore, coffee is the most popular beverage in the world, with over 400 billion cups
consumed annually. Espresso, a comparatively recent innovation in the preparation of coffee,
originated in 1822, with the invention of the first crude espresso machine in France, later
perfected and first manufactured in Italy. Espresso has become an integral part of Italian life
and culture, with currently over 200,000 espresso bars in Italy.
Espresso coffee is prepared in a machine that pumps cold water at high pressure
(commonly 10-20 bar) into a hot water boiler, which displaces near-boiling hot water. This
water is then forced through a cake of ground coffee, as illustrated in Figure 11. In a
conventional machine, the user manually loads ground coffee into a metal filter housing,
referred to as a portafilter, ensuring that the ground coffee is adequately packed, locks the
portafilter under the hot water exit head, and activates the heater. A coffee cup, placed under
the portafilter, is filled with the freshly extracted espresso coffee, which is produced by the
leaching action of the high-pressure hot water as it passes through the ground coffee.
The following are the possible sources of variance in the quality of the coffee produced
using the above machine:
a) Freshness of the ground coffee. If the coffee is too stale, the taste of the coffee is
affected.
b) Grade of the ground coffee. If too coarse, the leaching is insufficient, affecting the
taste of the coffee. If the coffee is ground too fine, the pressure drop across the packed
grinds is too high, affecting the flow and leaching detrimentally. Also, if the coffee is
too fine, it produces harsh bitter flavors. Serious espresso drinkers prepare their own
roasted coffee beans and personally grind them fresh. However, this extreme behavior
is atypical.
c) Ground coffee packed evenly in the portafilter, to the correct degree. Since the brew
water is under high pressure, it finds the path of least resistance through the coffee.
Uneven packing leads to channeling, in which the coffee in and near the
channels is over-extracted, while the coffee elsewhere is under-extracted. The resulting beverage is
bitter and astringent, with many of the potentially good flavors remaining in the
portafilter basket. In contrast, when the ground coffee is evenly and tightly packed,
the water flows through all of the coffee uniformly.
d) Correct amount of coffee loaded. Insufficient coffee leads to over-extraction and to flat
and watery espresso.
e) Sufficiently high water pressure. This controls the temperature at which the leaching
takes place.
f) Proper amount of water passed through the ground coffee.
The possible variance in item (e) can be reduced by installing a pressure-control loop, to
ensure that the pressure is maintained between its required UCL and LCL. The controller
activates an indicator light to signal when the pressure has stabilized, indicating normal
operation. Assuming that the first four items are handled correctly, the size of the portafilter
should be determined to produce a perfect espresso cup of coffee, thus taking care of item (f)
above. In some machines, a solenoid valve is installed to dispense a precise amount of near-
boiling hot water. Note, however, that items (a) to (d) above constitute sources of variance
that are not under the control of the manufacturer of the espresso machine, as described
above.
Fig. 11. A typical espresso machine: (A) pressure vessel, (B) portafilter holding ground
coffee, (C) on/off switch, with built-in pressure indicator, (D) solenoid valve for espresso
coffee, (E) cup holding leached espresso coffee.
To eliminate these four sources of variance, the manufacturer of a novel espresso machine
provides its users with vacuum-sealed containers of ground coffee having a built-in filter. On
insertion into the machine, the container is perforated and used to prepare a single cup of
coffee. Since the containers are vacuum-sealed, this ensures that the ground coffee is fresh,
reducing the variance of source (a). Furthermore, since the containers are automatically pre-
filled with coffee, the quantity, grade and degree of packing are now controlled by the
manufacturer, taking care of items (b), (c) and (d) above. Moreover, the manufacturer
controls the coffee supply, and consequently, the annual sales of coffee containers are likely to
far exceed those of the machines.
5. CLOSING REMARKS
This chapter has introduced the potential advantages of combining six-sigma methodology,
to quantify and assure product and process quality, with integrated design and control. As
shown in the first example, this design methodology benefits from the analysis techniques of
process systems engineering, through integrated design and control procedures that reduce the
variance in the critical-to-quality variables by exploiting, and when necessary, increasing, the
process degrees of freedom. While traditionally, product manufacturing has relied solely on
statistical process control, the trend to improve profitability through increasing yields is
driving many industries to embrace six-sigma methodology and advanced control strategies.
For example, in integrated circuit manufacturing, the increased reliance on advanced process
control (APC), and in particular, multivariable control [14], reflects the need to utilize the
potential degrees of freedom in processes to assist in the reduction of CTQ variance.
REFERENCES
[1] Rath and Strong, Six Sigma Pocket Guide, Rath and Strong Management
Consultants/AON Management Consulting, Lexington, Massachusetts (2000).
[2] Rath and Strong, Design for Six Sigma Pocket Guide, Rath and Strong Management
Consultants/AON Management Consulting, Lexington, Massachusetts (2002).
[3] Y. B. Trivedi, Chem. Eng. Progress, July (2002) 76.
[4] J. M. Wheeler, Chem. Eng. Progress, June (2002) 76.
[5] B. A. Ogunnaike and W. H. Ray, Process Dynamics, Modeling and Control, Oxford
University Press, New York, pp. 1033-1061 (1994).
[6] W. D. Seider, J. D. Seader and D. R. Lewin, Product and Process Design Principles,
John Wiley and Sons, New York (2004).
[7] V. Sakizlis and E. N. Pistikopoulos, "Advanced Controllers in Simultaneous Process
and Control Design," Chapter 19, this book.
[8] A. Arbel, I. H. Rinard, and R. Shinnar, Ind. Eng. Chem. Res., 36 (1997) 747.
[9] B. E. Solovyev and D. R. Lewin, Ind. Eng. Chem. Res., in press (2003).
[10] T. J. McAvoy, Interaction Analysis, Instrument Society of America, Research Triangle
Park, NC (1983).
[11] E. H. Bristol, IEEE Trans. Auto. Control, AC-11 (1966) 133.
[12] D. R. Lewin, Comput. Chem. Eng., 20 (1996) 13.
[13] D. E. Rivera, S. Skogestad, and M. Morari, Ind. Eng. Chem. Res., 25 (1986) 252.
[14] S. Lachman-Shalem, B. Grosman, and D. R. Lewin, IEEE Trans. Semiconductor
Manufacturing, 15 (2002) 310.
Chapter D4
1. INTRODUCTION
Many processes present highly nonlinear dynamics which can lead to challenging
controllability problems. This is particularly frequent in certain areas like bioprocess
engineering or polymer processing. The optimal design of processes operated at steady state
has been usually carried out in two-steps. First, the optimal steady state with respect to some
economic measure is found using standard mathematical optimization techniques. Second,
controllers are placed and tuned in order to obtain a satisfactory response. However, this
common practice of considering process control issues as a sequential second step after
process design ignores the interaction of design and control. In other words, it does not take
into account the economic consequences of the process dynamics arising from such a design.
In the particular area of bioprocessing, wastewater treatment plants are good examples of how
the traditional two-step method can lead to designs with very poor controllability.
In order to surmount these difficulties, a simultaneous approach, considering operability
together with the economic issues, has been suggested by many researchers in process systems
engineering (see reviews provided by Morari and Perkins [1], Schweiger and Floudas [2] and
Pistikopoulos and van Schijndel [3]). However, the solution of the associated optimization
problems is not trivial. To begin with, these problems are frequently multimodal (non-
convex), as described by e.g. Schweiger and Floudas [2]. Further, as also noted by these
authors, in these problems one must find the best alternative for several (often conflicting)
objectives, so a multiobjective optimization framework should be considered. Recently,
Moles et al. [4] considered the global optimization of a composite function including both
economic and controllability objectives. In fact, this is a common approach for solving
multiobjective optimization problems since it provides a compromise solution between all the
objectives.
In this chapter, we consider a multiobjective formulation for the global optimization
problems arising from integrated design and control, and we present two alternative methods
for its solution. These methods are ultimately based on extensions of a recent Evolution
Strategy for constrained non-linear programming problems. The usefulness and efficiency of
these novel approaches are illustrated considering the integrated design of a wastewater
treatment plant model. A comparison with another existing stochastic multiobjective approach
is also provided, highlighting the advantages of the approaches presented herein.
2. PROBLEM STATEMENT
The general multiobjective optimization problem is stated as:
min F(x) = [F1(x), F2(x), ..., Fm(x)]T (1)
subject to:
h(x) = 0 (2)
g(x) ≤ 0 (3)
xL ≤ x ≤ xU (4)
where F is the vector of objective functions, x is the vector of decision variables, h and g are
the possible sets of equality and inequality constraints, respectively, and xL and xU are the
lower and upper bounds for the decision variables. This set of constraints represents the
feasible space while the set of all possible values of the objective functions constitutes the
objective space.
In general, the solution which is simultaneously optimal for all objectives (utopia point) is
not feasible, and the real purpose of multiobjective optimization is to generate the set of the
so-called Pareto-optimal solutions, i.e. the set of solutions which represents the relatively best
alternatives. For two objectives, this set is known as the Pareto front. Mathematically, a
feasible solution x* is a Pareto-optimal (or non-dominated, or non-inferior, or efficient)
solution if there exists no x such that Fi(x) ≤ Fi(x*) for all i = 1,...,m with Fj(x) < Fj(x*) for at
least one j.
The definition above means that all non-dominated solutions are optimal in the sense that it
is not possible to improve one objective without degrading one or more of the others. After
obtaining the set of Pareto-optimal solutions, the designer will be able to choose a suitable
compromise between all objectives. In order to help the decision-making process, it is
important to find a set of solutions as diverse as possible and uniformly distributed along the
Pareto front.
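The dominance test defined above translates directly into a filter that keeps only the non-dominated points of a finite sample of the objective space. A minimal sketch for minimization (the function names are ours):

```python
def dominates(fa, fb):
    """fa dominates fb if it is no worse in every objective and
    strictly better in at least one (minimization)."""
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
# (3.0, 4.0) is dominated by (2.0, 3.0); the rest form the Pareto front.
```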
min F(ż, z, p, x) = [F1(ż, z, p, x), F2(ż, z, p, x)]T (5)
subject to:
f(ż, z, p, x) = 0 (6)
z(t0) = z0 (7)
h(z, p, x) = 0 (8)
g(z, p, x) ≤ 0 (9)
xL ≤ x ≤ xU (10)
where x is the vector of decision variables, z is the vector of dynamic state variables, F is the
vector of objective functions (F1 is a combination of capital and operation costs, and F2 is the
controllability measure), f is the set of differential and algebraic equality constraints
describing the system dynamics (mass, energy and momentum balances, i.e. the non-linear
process model), and h and g are possible equality and inequality path and/or point constraints
which express additional requirements for the process performance.
More general statements are possible, as has been done by Schweiger and Floudas [2].
Different alternatives for design are considered by means of process and control
superstructures. The resulting problem is a multi-objective mixed-integer optimal control
problem (MIOCP). Due to the non-linear and constrained nature of the system dynamics,
these problems are very often multimodal (non-convex). Further, it is known that using
standard controllability measures, such as the Integral Square Error (ISE), in the objective
function often causes non-convexity [2, 4].
3. OPTIMIZATION METHODS
Many algorithms have been developed for the generation of non-dominated solutions. A full
review of theory and methods can be seen in [5-6] and the references cited therein. A common
approach is to transform the original non-linear multi-objective optimization problem into a
single objective optimization problem. Usually this is done by means of a characteristic
parameter. The solution to this non-linear programming (NLP) problem is expected to be a
Pareto-optimal solution. Multiple solutions can be obtained by changing the value of the
parameter and solving the resulting NLP. Traditional techniques include the weighted-sum
approach, the ε-constraint method and goal programming.
It is important to note that choosing an appropriate value of the parameter requires some
prior knowledge about the problem, and a uniformly spaced set of parameter values may not
produce a uniformly spaced set of Pareto-optimal solutions. In this regard, the recent Normal
Boundary Intersection (NBI) method of Das and Dennis [7] has been designed to generate
Pareto fronts with an even spread of points. However, although all these approaches are rather
easy to apply, they ultimately rely on local solvers for the NLPs (e.g. SQP), so they can fail if
the objective space is non-convex.
In principle, deterministic global optimisation methods could be used to surmount the
difficulties caused by non-convexity. In fact, the deterministic global optimization of non-
linear dynamic systems is receiving increased attention due to a number of recent advances
[8-10]. Although these approaches are very promising and powerful, the problem structure
must comply with certain conditions, e.g. typically the objective function and the dynamics of
the system must be twice continuously differentiable, and restrictions may also apply for the
type of path constraints which can be handled. In any case, research along these lines
continues and it might result in breakthrough results in the short term.
On the other hand, stochastic global optimisation methods, like genetic and evolutionary
algorithms, have no requirements about the problem structure, and they can find multiple
Pareto-optimal solutions in one single optimization run, instead of solving a set of NLPs.
Evolutionary algorithms (EAs) mimic the mechanism of natural selection by using a family of
possible solutions in each iteration (the so-called population). Additionally, their ability to
handle problems involving non-convex Pareto sets and/or discontinuities makes them
attractive to solve highly non-linear multiobjective problems [11-12]. Their main drawback is
the excessive computational effort, because they require a large population in order to
maintain diversity of solutions.
A large number of evolutionary techniques have been proposed during the last few
decades. They can be classified in two main groups: non-Pareto techniques, such as VEGA
(Vector Evaluated Genetic Algorithm) and the ε-constraint method hybridized with EAs, or
Pareto-based techniques, such as MOGA (Multi-Objective Genetic Algorithm) and NSGA
(Non-dominated Sorting Genetic Algorithm) [5,6].
In the following sections, we describe briefly the solution strategies used in this work.
First, we describe two well-established methods which represent two of the main groups of
methods: NBI [7] and MOEA [13-14]. Then, in order to surmount the limitations of these
methods, we propose the use of the SRES (Stochastic Ranking Evolutionary Strategy)
algorithm [15] for solving the different sets of NLPs generated with two strategies: the
ε-constraint method and the recent NBI method. It should be noted that SRES only solves single
objective optimization problems, so basically we are presenting two types of SRES extensions
for multiobjective problems.
Our main objective is to show how NBI can be greatly improved by using suitable
stochastic algorithms like SRES. NBI has the advantage of generating a well distributed set of
Pareto-optimal solutions, but it essentially works by solving a set of NLPs by means of
SQP (Sequential Quadratic Programming), which is a gradient-based method. Thus, it can fail
with non-convex problems. In order to improve the robustness of the technique, we have
replaced the SQP solver by SRES.
max tN (11)
x, tN
subject to:
Φ·β + tN·n̂ = F(x) − F* (12)
and the same set of constraints given by Eqs. (2) to (4). Here n̂ is the unit normal to the CHIM
pointing towards the origin (objectives are redefined with the shadow minimum shifted to the
origin), and tN is a scalar such that Φ·β + tN·n̂ represents a point on that normal. This
subproblem has to be solved for various β. The global solution to this problem gives the
intersection point between the normal and the boundary of the objective space closest to the
origin. In practice, the algorithm uses a quasi-normal direction given by an equally-weighted
linear combination of the columns of Φ, multiplied by −1.
The NBI technique has the advantage that an equally distributed set of β produces an
equally distributed set of points on the Pareto surface. The characteristic parameter of the
method is an integer number (spac) from which a uniformly spaced set of β is generated. The
points obtained by solving the NBI subproblems are Pareto-optimal solutions if the
components of the shadow minimum are the global minima and the Pareto surface is convex.
Nevertheless, points on concave parts can be found, but it is not assured that these solutions
are non-dominated.
The ε-constraint method minimizes one objective subject to a bound on the other:
min F1(x) (13)
subject to:
F2(x) ≤ ε (14)
and the same set of constraints given by Eqs. (2) to (4). The characteristic parameter of the
method is ε, which represents an upper bound for F2. The solution to the ε-constraint problem
is a Pareto-optimal solution. This technique has been widely used because points in concave
parts of the Pareto front can be obtained.
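The ε-constraint idea can be illustrated on a toy biobjective problem: minimize F1 subject to F2 ≤ ε, sweeping ε to trace the front. A minimal sketch that uses a dense one-dimensional grid in place of an NLP solver (the problem and all names here are invented for illustration):

```python
def f1(x):
    return x * x

def f2(x):
    return (x - 1.0) ** 2

def eps_constraint(eps, n=10001):
    """Minimize f1 over a grid on [0, 1], subject to f2(x) <= eps."""
    xs = [i / (n - 1) for i in range(n)]
    feasible = [x for x in xs if f2(x) <= eps]
    return min(feasible, key=f1)

# Sweeping eps traces the Pareto front of (f1, f2); e.g. eps = 0.04
# admits only x >= 0.8, so the constrained minimizer is about x = 0.8.
front = [(f1(x), f2(x)) for x in (eps_constraint(e) for e in (0.04, 0.16, 0.36))]
```

Tightening ε forces the solution along the trade-off curve, at the cost of one constrained optimization per ε value.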
In this work, the set of NLPs obtained with different values of ε is solved with the
Stochastic Ranking Evolutionary Strategy (SRES) of Runarsson and Yao [15]. Whereas penalty
functions are commonly used in constrained optimization, these authors propose a novel
constraint-handling technique, Stochastic Ranking, in which the objective and penalty
functions are balanced stochastically, i.e., there is a balance between preserving feasible
individuals in a population and rejecting infeasible ones. This is achieved through a ranking
procedure. The algorithm uses a quadratic penalty function where the setting of a penalty
coefficient is not required, but it also introduces a single probability Pf of using only the
objective function for comparisons between infeasible individuals (it is clear that feasible
individuals are always compared according to the objective function).
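The stochastic-ranking procedure can be sketched as a bubble-sort-like sweep in which adjacent individuals are compared by objective value when both are feasible (or, with probability Pf, even when they are not), and by penalty value otherwise. A minimal sketch under these assumptions, not the authors' reference implementation (the function name is ours; Pf = 0.45 follows the value suggested by Runarsson and Yao):

```python
import random

def stochastic_rank(pop, f, phi, pf=0.45, sweeps=None):
    """Rank individuals by stochastic ranking.
    f: objective function; phi: penalty (0 means feasible);
    pf: probability of comparing two infeasible individuals by objective only."""
    idx = list(range(len(pop)))
    sweeps = sweeps if sweeps is not None else len(pop)
    for _ in range(sweeps):
        swapped = False
        for i in range(len(pop) - 1):
            a, b = idx[i], idx[i + 1]
            both_feasible = phi(pop[a]) == 0 and phi(pop[b]) == 0
            if both_feasible or random.random() < pf:
                if f(pop[a]) > f(pop[b]):          # compare by objective
                    idx[i], idx[i + 1] = b, a
                    swapped = True
            elif phi(pop[a]) > phi(pop[b]):        # compare by penalty
                idx[i], idx[i + 1] = b, a
                swapped = True
        if not swapped:
            break
    return idx
```

When the whole population is feasible, the procedure reduces to an ordinary sort on the objective; the stochastic element matters only while infeasible individuals survive.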
This constraint-handling technique is based on a (μ,λ) evolution strategy. Each
where Rp is the penalty coefficient and hNBI is the vector of equality constraints given
by Eq. (12). This technique works quite well for our purposes, because we only need
an approximate solution close to the normal.
• Regarding the bounds for the variable tN, they are set directly taking into account the quasi-
normal direction used by NBI.
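The penalty treatment of the NBI equality constraint can be illustrated on a toy two-objective problem: maximize tN while penalizing the residual of Φ·β + tN·n̂ = F(x) − F*. The sketch below uses an invented one-dimensional design variable and a deterministic grid search in place of SQP or SRES; the problem, parameter values and names are ours:

```python
from math import sqrt

# Toy biobjective problem: F1(x) = x^2, F2(x) = (1 - x)^2, x in [0, 1];
# shadow minimum F* = (0, 0); for beta = (0.5, 0.5) the CHIM point is
# (0.5, 0.5) and the quasi-normal direction is (-1, -1)/sqrt(2).
def F(x):
    return (x * x, (1.0 - x) ** 2)

def nbi_penalized(x, t, rp=100.0, chim_point=(0.5, 0.5)):
    n = -1.0 / sqrt(2.0)
    fx = F(x)
    r1 = chim_point[0] + t * n - fx[0]
    r2 = chim_point[1] + t * n - fx[1]
    # Maximize t  <=>  minimize -t plus a quadratic penalty on the residual.
    return -t + rp * (r1 * r1 + r2 * r2)

grid = [i / 500.0 for i in range(501)]
bx, bt = min(((x, t) for x in grid for t in grid),
             key=lambda p: nbi_penalized(*p))
# The exact intersection with the Pareto front is x = 0.5,
# t = 0.25 * sqrt(2), about 0.354.
```

Because the penalty is finite, the located point overshoots the boundary slightly; as noted in the text, only an approximate solution close to the normal is required.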
For the case of two objectives, Fig. 1 shows an objective space with a concave part, the
CHIM (which is the discontinuous line joining the individual minima of the objectives) and
several important points. We have to define the Nadir point, FNadir, which is the vector of
upper bounds of each objective in the entire Pareto-optimal set (in practice, its coordinates
can be estimated from the coordinates of the shadow minimum [5-6]). The line that joins the
shadow minimum F* and the Nadir point represents the quasi-normal direction used by NBI.
This line intersects the CHIM at a point given by the vector βmax = [0.5, 0.5]. Thus, the upper
bound tNU is defined by the distance from this point to the shadow minimum. Similarly, the
lower bound tNL is given by the distance to the Nadir point. It is clear that tNL = −tNU (tN is
positive if the normal is pointing towards the shadow minimum, and negative in the opposite
sense). It should be noted that these bounds are set for a two-objective problem.
This is a conservative setting, because tNU can only be reached for βmax = [0.5, 0.5] if the
shadow minimum is a feasible solution, which rarely occurs. In this case, that point would be
the unique solution to the problem. It is clear that this situation also occurs for other vectors β,
such as β1 in Fig. 1. The important thing is that although the upper bound is always larger
than the optimal tN, this setting will allow us to find non-dominated solutions if the
components of the shadow minimum are not the global minima of the objectives. On the other
hand, tN cannot be inferior to tNL, because FNadir is the upper bound of the entire Pareto set,
i.e., a solution with any objective value greater than that of the Nadir point will be
dominated. The variable tN can only be negative for certain concave parts of the Pareto front,
as occurs for the vector β2. In this case, the optimal value of tN is clearly inferior to the lower
bound, but the important thing is that such a solution is not a Pareto-optimal point, since it is at
least dominated by F(x2*).
This case study represents an alternative configuration of a real wastewater treatment plant
located in Manresa (Spain), as described by Gutierrez and Vega [16]. The plant consists of
two aeration tanks, acting as bioreactors, and two settlers, as shown in Fig. 2. A flocculating
microbial population (biomass) is kept inside each bioreactor, transforming the biodegradable
pollutants (substrate), with the aeration turbines providing the necessary level of dissolved
oxygen. The effluents from the aeration tanks are separated in their associated settlers into a
clean water stream and an activated sludge, which is recycled to the corresponding aeration
tank. Since the activated sludge is constantly growing, more is produced than can be recycled
to the tanks, so the excess is eliminated via a purge stream (qp).
The objective of the control system is to keep the substrate concentration at the output (s2)
under a given admissible value. The main disturbances come from large variations in both the
flowrate and substrate concentration (qi and si) of the input stream. Although there are several
possibilities for the manipulated variable, here we have considered the flowrate of the sludge
recycle to the first aeration tank [16].
The dynamic model consists of a set of 33 DAEs (14 of them are ODEs) and 44 variables.
The values of three flowrates (qr2, qr3 and qp) are fixed at their steady-state values
corresponding to certain nominal operating conditions. Therefore, this leaves 8 design
variables for the integrated design problem, namely the volumes of the aeration tanks (v1 and
v2), the areas of the settlers (ad1 and ad2), the aeration factors (fk1 and fk2), and the gain and
the integral time of a PI controller.
Here, the integrated design problem is formulated as a multiobjective optimization
problem, where the objective functions to be minimized are a weighted sum of economic
terms (F1) and the Integral Square Error (ISE):
F1 = (w1·v1²) + (w2·v2²) + (w3·ad1²) + (w4·ad2²) + (w5·fk1²) + (w6·fk2²) (16)
F2 = ∫ e²(t) dt (17)
A weighting vector w = [2·10⁻⁵, 2·10⁻⁵, 1·10⁻⁵, 1·10⁻⁵, 12, 12] was considered for the
minimization of the economic term, which implies a similar contribution for each term in the
objective function [4, 16]. The ISE is evaluated considering a step disturbance to the input
substrate concentration, si, whose behaviour is taken from the real plant.
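The ISE of Eq. (17) is obtained by integrating the squared tracking error along the closed-loop response. A minimal sketch on a stand-in first-order process under PI control (not the 33-DAE plant model; the function name and the parameter values tau and gain are invented), using explicit Euler integration:

```python
def ise_pi(kp, ti, setpoint=1.0, t_end=20.0, dt=0.001, tau=2.0, gain=1.0):
    """Simulate a first-order process y' = (gain*u - y)/tau under PI
    control and return the Integral Square Error of the tracking error."""
    y, integral, ise = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = setpoint - y
        integral += e * dt
        u = kp * (e + integral / ti)       # ideal PI control law
        y += dt * (gain * u - y) / tau     # explicit Euler step
        ise += e * e * dt                  # accumulate the ISE integral
    return ise
```

A well-tuned controller drives the error to zero quickly and yields a small ISE; a sluggish tuning leaves a persistent error and a much larger value, which is what makes the ISE usable as the controllability objective F2.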
The differential equations (18)-(29) comprise the biomass, substrate and dissolved-oxygen
balances in the two aeration tanks and the component balances in the layers of the two
settlers. The last two states are the integral term of the PI controller and the ISE itself:
dI/dt = (kp/ti)·(s2s − s2) (30)
dISE/dt = (s2s − s2)² (31)
Regarding the algebraic equations, Eq. (32) describes the control law (qr1s corresponds to
the value in the steady state), equations (35)-(38) state the rate of settling of the
sludge, and equations (39)-(45) correspond to the different balances among the flowrates
(m³/h). Eq. (50) describes the disturbance in the input substrate considered for the
computation of the ISE. This disturbance is introduced at t = 25 h.
qr1 = qr1s + kp·(s2s − s2) + I (32)
xr = (q2·xr1 + qr3·xr2)/(q2 + qr3) (33)
sr = (q2·s1 + qr3·s2)/(q2 + qr3) (34)
vsd1 = nnr·xd1·e^(nar·xd1) (35)
vsb1 = nnr·xb1·e^(nar·xb1) (36)
vsd2 = nnr·xd2·e^(nar·xd2) (37)
vsb2 = nnr·xb2·e^(nar·xb2) (38)
q2 = qr1 + qp − qr3 (39)
q12 = qi + qr1 (41)
q22 = q1 + qr2 (42)
qsa1 = qi − qp (43)
q1 = q12 − q2 (44)
qr = q2 + qr3 (45)
Equations (46)-(49) define the mixed inlet concentrations (xir1, sir1, xir2, sir2) to the
aeration tanks, and Eq. (50) specifies the step in the input substrate concentration si at t = 25 h.
• 32 inequality constraints which impose limits on the residence times and biomass loads
in the aeration tanks, the hydraulic capacity in the settlers, the sludge ages in the
decanters, and the recycle and purge flow rates, respectively. For example:
0.001 ≤ (q22·s2)/(v2·x2) ≤ 0.06 (53)
q12/ad1 ≤ 1.5 (54)
q22/ad2 ≤ 1.5 (55)
30 ≤ (v1·x1 + ad1·lr1·xr1)/(24·qp·xr1) ≤ 100 (56)
30 ≤ (v2·x2 + ad2·lr2·xr2)/(24·qp·xr2) ≤ 100 (57)
Altogether, the problem contains a total of 152 inequality constraints. Table 2 shows the upper and lower bounds on the decision
variables. The values of the model parameters are given in Table 3, where μ (h⁻¹) is the
microbial specific growth rate, yy is the metabolized substrate fraction converted into biomass,
kd (h⁻¹), kc (h⁻¹) and ks (mg/l) are rate constants for any operating conditions, fkd is the fraction
of dead biomass converted into substrate, nnr is the mass rate constant in the settlers, while kla
(h⁻¹), ko1 (h⁻¹) and cs (mg/l) are the kinetic parameters in the oxygen equations. Besides, lr1, lr2,
ld1, ld2, lb1 and lb2 (m) are the heights of each layer in the settlers, xi (mg/l) and qi (m³/h)
correspond to the input concentration and flow rate, respectively, and si,s (mg/l) establishes the
steady state before the disturbance considered.
Table 1
Inequality constraints for the state variables
Variable Lower Bound Upper Bound
x1 500 3000
x2 200 3000
s1 25 300
s2 20 125
c1 1 8
c2 1 8
xd1 10 300
xb1 50 3000
xr1 3000 10000
xd2 3 300
xb2 30 3000
xr2 1000 10000
sr 20 1000
xr 2000 8750
vsd1 100 2000
vsb1 300 3000
vsd2 10 2000
vsb2 100 3000
xir1 400 2500
sir1 50 500
xir2 200 2000
sir2 30 500
q2 200 3000
q3 200 3000
q12 50 3500
q22 50 3500
qsa1 100 3000
q1 50 3000
qr 50 2000
qr1 50 3000
Table 2
Upper and lower bounds on design variables
Variable Lower Bound Upper Bound
v1      1500    10000
v2      1500    10000
ad1     1000    4000
ad2     1000    4000
fk1     0.0     1.0
fk2     0.0     1.0
kp      -100    -0.005
ti      0.5     100
Table 3
Parameters of the model
Parameter Value (units)
μ       0.1824 (h⁻¹)
y       0.5948
kd      5·10⁻⁵ (h⁻¹)
kc      1.3333·10⁻⁴ (h⁻¹)
ks      300.0 (h⁻¹)
fkd     0.2
nnr     3.1563
nar     -0.00078567
kla     0.7 (h⁻¹)
koi     0.0001 (h⁻¹)
cs      8.0 (h⁻¹)
lr1     0.5 (m)
lr2     0.5 (m)
ld1     2.0 (m)
ld2     2.0 (m)
lb1     3.5 (m)
lb2     3.5 (m)
xi      80.0 (mg/l)
qi      1300.0 (m³/h)
si,s    366.7 (mg/l)
so the SQP method converged to local optima. As alternatives, we have followed two
strategies for finding the Pareto-optimal set:
• Case A: the minimum Fcost and the minimum ISE found with SRES applied to the
ε-constraint method (see below) were used as the components of the shadow minimum.
• Case B: the individual minima of the objectives were obtained by solving two NLPs (one
for each objective) with the SRES algorithm, and the optimal solutions found were
refined with SQP.
The components of the shadow minimum for both cases are presented in Table 4.
Obviously, they define different pay-off matrices, from which the CHIM and the
quasi-normal direction are determined.
The multiobjective problem was then solved for different values of the integer parameter
spac, which determines the number of NBI subproblems generated (for two objectives, the
number of NLPs is spac - 1). This set of NLPs is solved sequentially. The starting point for
solving the first NLP is x2*, i.e., the solution which minimizes F2 (the ISE). The optimal
solution of this subproblem is taken as the initial point for solving the next one, and so on.
Thus, as the number of NBI subproblems increases, the starting points are expected to be
closer to the solutions, and the algorithm should converge in fewer iterations.
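The sequential, warm-started NBI loop just described can be sketched on a small illustrative bi-objective problem (our own toy example, not the plant model; the payoff matrix Φ, the CHIM and the quasi-normal direction are as defined in the text, and SciPy's SLSQP stands in for the NLP solver):

```python
import numpy as np
from scipy.optimize import minimize

# Toy bi-objective problem (illustration only, not the plant model).
f1 = lambda x: (x[0] - 1.0)**2 + x[1]**2
f2 = lambda x: x[0]**2 + (x[1] - 1.0)**2

# Individual minima: x = (1, 0) gives F = (0, 2); x = (0, 1) gives F = (2, 0).
# Utopia point F* = (0, 0); payoff matrix Phi; quasi-normal direction n_hat.
Phi = np.array([[0.0, 2.0],
                [2.0, 0.0]])
n_hat = -np.ones(2) / np.sqrt(2.0)

spac = 10
front = []
z = np.array([0.0, 1.0, 0.0])               # start from x2*, the minimizer of f2
for beta1 in np.linspace(1.0 / spac, 1.0 - 1.0 / spac, spac - 1):
    beta = np.array([beta1, 1.0 - beta1])
    chim = Phi @ beta                       # point on the CHIM
    # NBI subproblem: maximize t subject to F(x) = chim + t * n_hat
    cons = {'type': 'eq',
            'fun': lambda v, c=chim: np.array([f1(v[:2]), f2(v[:2])]) - (c + v[2] * n_hat)}
    res = minimize(lambda v: -v[2], z, constraints=[cons], method='SLSQP')
    z = res.x                               # warm start the next subproblem
    front.append((f1(z[:2]), f2(z[:2])))
```

Each subproblem starts from the previous solution, so later subproblems typically need fewer iterations, exactly as argued above.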
Table 4.
Components of the shadow minimum for both cases A and B solved with NBI.
                Case A                          Case B
        Min. Fcost     Min. ISE         Min. Fcost     Min. ISE
v1      4748.1058      6647.8853        4721.3451      4665.3968
v2      3986.9155      4141.7570        4008.6656      9999.9784
ad1     2492.7683      2103.6352        2494.2872      2854.5703
ad2     3998.1991      3944.0140        3990.5424      2212.6446
fk1     0.0808         0.0550           0.0812         0.7868
fk2     0.0112         0.0128           0.0112         0.7848
kp      -4.3466        -99.9999         -0.1178        -100.0000
ti      21.5541        1.2064           0.5027         1.0810
Fcost   990.8748       1426.8141        988.7496       2580.5725
ISE     84.2991        0.3188           101.9250       0.2822
Payoff matrix Φ:   [0  435.9393; 83.9803  0]    [0  1591.8229; 101.6428  0]
Quasi-normal n̂:   [-0.9819; -0.1891]           [-0.9980; -0.0637]
Somewhat surprisingly, the best results are obtained in case A, where the individual
minima of the objectives are not the global ones (Fig. 3). These results can be explained if
we take into account that the components of the shadow minimum used in case A belong to the
Pareto-optimal set, and the distance from the CHIM to the Utopia point (big solid square) is
smaller than in case B. In any case, it is important to mention that most of the NLPs
generated could not be solved by the SQP algorithm. The majority of them (more than 50%)
did not converge to a solution (exceeding the maximum number of iterations) or crashed the
solver. Furthermore, a large number of solutions were dominated (in fact, as we will see
below, all of them are dominated by the fronts generated by the alternative techniques), a
consequence of the non-convexity of the NLPs. All these difficulties, which follow from the
highly constrained and nonlinear nature of bioprocess models, illustrate the need for more
robust methods, which has been our main motivation for the development of the novel
approaches presented below. Fig. 4 shows the performance of the method for the two cases in
terms of computational effort (on a Pentium IV 1.7 GHz PC).
Fig. 3. Pareto-optimal solutions sets for the wastewater treatment plant obtained with NBI and
different parameterisations of the Pareto front.
Fig. 4. Performance (CPU time and number of NLPs solved) of NBI for both cases A and B.
5.4. NBI-SRES
5.4.1. Preliminary Runs
As mentioned previously, there are two issues we have to deal with in this novel strategy:
the handling of the additional equality constraints and the bounds on the additional
variable tN. Thus, we have carried out a series of preliminary runs in order to determine
which settings ensure the best performance of the NBI-SRES algorithm. For these runs we
have used the shadow minimum obtained with the ε-constraint method in order to make a
better comparison between the two strategies.
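For reference, the ε-constraint strategy itself can be sketched on a toy bi-objective problem (our own example; SciPy's SLSQP stands in here for SRES): each NLP minimizes one objective subject to a bound ε on the other, and sweeping ε traces the front one point per solve.

```python
import numpy as np
from scipy.optimize import minimize

# epsilon-constraint sketch on a toy bi-objective problem (illustration only).
f1 = lambda x: (x[0] - 1.0)**2 + x[1]**2
f2 = lambda x: x[0]**2 + (x[1] - 1.0)**2

pareto = []
x0 = np.array([0.5, 0.5])
for eps in np.linspace(0.1, 1.9, 10):
    # minimize f1(x) subject to f2(x) <= eps
    con = {'type': 'ineq', 'fun': lambda x, e=eps: e - f2(x)}
    res = minimize(f1, x0, constraints=[con], method='SLSQP')
    x0 = res.x                       # warm start the next solve
    pareto.append((f1(res.x), f2(res.x)))
```

With a stochastic global solver such as SRES in place of SLSQP, each of these solves becomes robust to the multimodality discussed above, at the price of more function evaluations.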
Fig. 5. Pareto-optimal solutions set obtained with the ε-constraint SRES technique.
Fig. 6. CPU times (seconds) for the ε-constraint NLPs solved by SRES.
If the equality constraints are added to the objective function in the form of a penalty
function, as given by Eq. (15), we have to determine the 'optimal' value of the penalty
coefficient Rp. We allowed a wide range of variation for the additional decision variable
tN (-1.0e4 <= tN <= 1.0e4). Results for several values of Rp and different population sizes
are shown in Fig. 7. The integer parameter spac was fixed at 10, and the population was
evolved for 200 generations.
These curves are somewhat similar to those obtained with NBI, but we can see that better
results were obtained with a low value of the penalty coefficient and a population size of
200. In contrast, the worst points were found with Rp = 1.0e+5. A low Rp means that the
search is directed towards the unconstrained optimum. It should be noted that the penalty
function only includes the equality constraints which ensure that the solution lies on the
quasi-normal. Thus, even if an infeasible point is not penalized heavily, the solution is
expected to be feasible with respect to the process constraints and, consequently, it can
be a global Pareto-optimal solution.
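The effect of the penalty coefficient can be illustrated with a self-contained sketch (our own toy problem, and a bare-bones seeded (1+1) evolution strategy standing in for SRES; Rp plays the same role as above):

```python
import numpy as np

def penalized_search(Rp, iters=6000, seed=0):
    """Minimize f(x) = x0^2 + x1^2 subject to h(x) = x0 + x1 - 1 = 0 by
    moving the equality constraint into the objective as a quadratic
    penalty (cf. Eq. 15) and running a seeded (1+1)-ES; both the toy
    problem and the simple ES are stand-ins, not the chapter's model."""
    rng = np.random.default_rng(seed)
    f = lambda x: x @ x
    h = lambda x: x[0] + x[1] - 1.0
    J = lambda x: f(x) + Rp * h(x) ** 2
    x, sigma = np.zeros(2), 0.5
    for _ in range(iters):
        trial = x + sigma * rng.standard_normal(2)
        if J(trial) < J(x):
            x, sigma = trial, sigma * 1.1    # success: accept, widen the step
        else:
            sigma *= 0.98                    # failure: narrow the step
    return x, h(x)

x_lo, r_lo = penalized_search(Rp=0.01)   # weak penalty: heads to the unconstrained optimum
x_hi, r_hi = penalized_search(Rp=100.0)  # strong penalty: near-feasible solution
```

A weak penalty leaves the constraint residual r_lo large, while a strong one drives r_hi toward zero, mirroring the behaviour of Rp discussed above.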
Nevertheless, the Pareto-optimal sets obtained in these preliminary runs were not as good
as expected. In order to improve the performance of the method, we defined a tighter set of
bounds for the variable tN, as explained before (for this shadow minimum,
-220 <= tN <= 220). In Fig. 8 we present the results found for Rp = 1.0 and different
combinations of population size and number of generations.
Fig. 7. Pareto-optimal solutions set obtained with NBI-SRES depending on the penalty
coefficient and the population size (spac =10).
Fig. 8. Pareto-optimal solutions set obtained with NBI-SRES depending on population size
and number of generations, after applying tighter bounds on the variable tN (spac = 10).
Table 5.
Computational effort of NBI-SRES depending on the population size and
the number of generations (spac =10).
CPU Time (seconds)
(Pop/Gen) Total Mean Minimum Maximum
(100/100) 4981 553 435 699
(100/200) 8810 979 753 1587
(200/100) 8753 973 731 1740
(200/200) 23214 2579 1400 3972
At first sight, we note that the solutions for the four runs are well distributed along the
Pareto front. As a result, we have obtained a good approximation of the curve by
solving very few NLPs. Note also that the population size does not seem to have a great
influence in the region close to the minimum ISE, but it is clear that the best solutions
are obtained with a population size of 200 evolved for 200 generations. We carried out
several additional runs and the results confirmed this conclusion.
Regarding computational effort, CPU times for each run are presented in Table 5.
Computation times for the best results are very similar to those of the ε-constraint SRES
method, but it is possible to reduce the computational effort if we take into account that
for several subproblems a large population size is not required.
Table 6.
Components of the local shadow minimum for
the special case solved with NBI-SRES.
        Min. Fcost     Min. ISE
v1      4736.4969      6647.8853
v2      4067.9049      4141.7570
ad1     2493.9865      2103.6352
ad2     3969.7449      3944.0140
fk1     0.0809         0.0550
fk2     0.0110         0.0128
kp      -98.0936       -99.9999
ti      39.6092        1.2064
Fcost   999.5135       1426.8141
ISE     9.9998         0.3188
Payoff matrix Φ = [0  427.3007; 9.6810  0]
Quasi-normal n̂ = [-0.9997; -0.0227]
few points. This can be explained by the potential non-convexity and/or discontinuities of
the Pareto front in the region close to the Utopia point. The total computation time was 15
hours (about 45 minutes for each subproblem). In this sense, the computational effort of the
ε-constraint and NBI-SRES approaches is very similar.
Fig. 9. Overall comparison of Pareto-optimal solutions for the wastewater treatment plant.
Our second novel approach, NBI-SRES, produces a quite similar curve (Fig. 9a), and the
computational effort required is slightly lower, although of the same order of magnitude. In
contrast, the solutions found with the original NBI (with SQP) are very far from the true
Pareto front, although it has the advantage of solving the NBI subproblems at a low
computational cost (Fig. 4). Somewhat surprisingly, and although the points obtained belong
to the Pareto front, the MOEA toolbox is only able to locate solutions close to the shadow
minimum, hiding other possible designs. In terms of computational effort, its cost was
intermediate between those of NBI-SRES and ε-constraint SRES.
It could be argued that the apparent lack of trade-off between the two objectives is a
consequence of the ISE not being a good criterion. However, this impression may also be a
question of scaling (cf. Fig. 9d). Moreover, replacing the ISE metric by others, such as
the ITSE, led to similar results for the Pareto front. In any case, we would like to stress
that these are a posteriori conclusions which can only be drawn if the multi-objective
problems are properly solved with robust methods, the main objective of this chapter
(otherwise, the results can be artifacts due to the non-convexity of the NLPs, as discussed).
It is worth inspecting the dynamic behaviour of selected designs from the Pareto
set. In Figs. 10-12, we show the dynamic response of the controlled variable, s2, for four
designs (marked in Fig. 9c as A, B, C and D). Fig. 10 represents the dynamics of the shadow
minimum used in the last special case solved with NBI-SRES. In Fig. 11 we compare the best
solution found by minimizing a single composite objective function [4] with an intermediate
design C, which is slightly more economical (Fcost = 1073 and ISE = 0.5) and has similar
controllability. The dynamics of the Pareto-optimal solution corresponding to ISE = 0.4 are
exactly the same as the best obtained for a single objective (results not shown). Finally,
in Fig. 12 we show an alternative design D (Fcost = 1000.8 and ISE = 2.5), which corresponds
to point e5 in Fig. 5.
It is important to note that design D is a solution in which the ε-constraint is not active.
From this point, a further improvement in the cost function implies a large increase in the
ISE. In fact, there are no substantial differences between the design variables of systems B
and D, except in the controller parameters. The dynamic response of design D seems to be
very close to instability, which may explain why the ε-constraint was not active.
In this chapter, the integrated design and control of bioprocesses was considered as a
multi-objective optimization problem subject to nonlinear differential-algebraic
constraints. This formulation has a number of advantages over the traditional sequential
approach, not only because it takes into account the process dynamics associated with a
particular design, but also because it provides a set of possible solutions from which the
engineer can choose the most appropriate for his/her requirements. However, these problems
are usually challenging to solve due to their non-convexity, which causes the failure of
procedures based on local NLP solvers (e.g. SQP).
Fig. 11. Dynamic response of substrate concentration for design C and the best design
obtained with the single-objective global optimization.
We have presented two novel solution strategies, ε-constraint SRES and NBI-SRES, which
have been developed with robustness in mind. In particular, the use of SRES, a population-
based stochastic algorithm for global optimization, makes it possible to avoid convergence
to local solutions on most occasions. In order to evaluate their performance, we have
considered a challenging case study regarding the integrated design of a wastewater
treatment plant model. The results indicate that these techniques are more reliable than
two recently proposed multiobjective strategies.
The results for the wastewater plant case indicate that several designs with rather low cost
are possible while maintaining very good controllability. Although more economical systems
are possible, such as those obtained with the traditional sequential approach [4], such
systems exhibit very poor controllability, very similar to that shown in Fig. 10 for
design B. An additional advantage of the multiobjective approach presented here is that it
allows the identification of regions where designs have a reasonable ISE but are close to
instability, as shown in Fig. 12. In the near future, our research efforts will be directed
towards increasing the efficiency of the NBI-SRES approach while maintaining its
robustness. We also plan to compare it with other techniques over a wider set of case
studies.
ACKNOWLEDGMENTS
Author O.H. Sendin acknowledges a pre-doctoral grant from the DP programme of the
Spanish Council for Scientific Research (CSIC).
REFERENCES
Chapter D5
a Electrical and Computer Engineering Department, University of Iceland
b Department of Chemical and Biochemical Engineering, Rutgers - The State University of New Jersey
1. INTRODUCTION
The decoupling problem has been of interest for many years as one of the important
methods in the control of multiple-input multiple-output (MIMO) systems. As such,
decoupling has many practical applications, many of which lie in the chemical engineering
field. The first methods essentially resulted in integrator decoupling, i.e., the resulting
diagonal elements were integrators [1], [2]. Those methods were subsequently adapted to
include pole-placement decoupling, wherein the diagonal elements contained poles not
necessarily at the origin, thus allowing a wider range of dynamical responses to be
designed for; see, e.g., [3].
Essentially, the classical decoupling methods used a feedforward gain matrix and state
feedback in a state-space representation to achieve the desired result. In general,
state feedback can be used to place poles as well as to affect the element zeros [4]-[6] of
transfer function matrices in MIMO systems. The invariant zeros [4]-[6] of MIMO systems
are, however, not affected by state feedback or feedforward gain. In the classical
decoupling methods, the invariant zeros are typically cancelled by a number of the new
system poles, thus effectively leading to an overall reduced-order system.
In many cases, such a decoupled overall reduced-order system results in a first-order
differential equation relating the decoupled inputs to the individual outputs, thus, somewhat
limiting the dynamical response achievable by the pole-placement. Often, this does not pose a
major problem, as the first-order response can be shaped by an outer-loop controller, e.g., a
PID controller, once the system is decoupled. The cancellation of invariant zeros can be a
much more serious drawback, as in the case of unstable invariant zeros, those are cancelled by
unstable controller poles, thus rendering such a controller useless in practice.
It is therefore of interest to explore the design of a decoupling pole-placement controller
that leaves the invariant zeros intact and allows full pole placement. Naturally, the system
must be fully controllable, which can be ensured at the process design stage. It is known
that the general problem of decoupling and pole placement without cancelling the invariant
zeros can be solved in some cases, while in other cases no solution exists [7]. The Faddeev
algorithm introduced in [8] imposes the decoupling and pole-placement conditions iteratively
and is easily applicable to low-order systems. Further, the direct computation of the
simultaneous decoupling and pole placement problem without the cancellation of invariant
zeros is considered in [9], [10] and [11]. A related problem, stable simultaneous
disturbance rejection and decoupling, is considered in [12] and [13].
In this paper (see [14] for an earlier abridged version), the problem of simultaneous
decoupling and pole placement without cancelling invariant zeros is considered; it gives
rise to a system of nonlinear equations to be solved.
There exists a large body of literature on methods for solving systems of equations, such
as the homotopy continuation methods and the interval-Newton methods. The homotopy
continuation, or incremental loading, class of methods is based on the pioneering works of
[15] and [16]. The basic idea of homotopy continuation methods is to create a
single-parameter family of functions such that the solution for t = 0 is known, and then to
solve a sequence of problems with t increasing steadily from t = 0 to t = 1, using the
solution of one problem as an estimate for the next. A problem common to all homotopy
variants is that variable bounds and inequality constraints cannot be handled directly,
though an effective technique based on proper active-set changes was proposed in [17]. The
interval-Newton methods are based on finding, with mathematical certainty, the rectangles
containing all solutions of nonlinear systems of equations within certain variable bounds.
They do so by applying the classical Newton-like iterative methods to interval variables
rather than real variables, coupled with a generalized bisection strategy [18], [19]. The
main attractive feature of interval-Newton methods is that they provide mathematical
guarantees of convergence to all solutions of fairly arbitrary nonlinear systems of
equations within certain variable bounds. A comprehensive review of the large number of
algorithms can be found in [20].
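The interval-Newton idea is easy to sketch in one dimension (our own illustration, not taken from the references): for f(x) = x² - 2 on X = [a, b], the operator N(X) = m - f(m)/F'(X), with m the midpoint and F'(X) = [2a, 2b] an enclosure of the derivative, either contracts X around the root or proves there is none.

```python
import math

# 1-D interval-Newton for f(x) = x^2 - 2 (illustration only).
# N(X) = m - f(m) / F'(X), with m the midpoint of X = [a, b] and
# F'(X) = [2a, 2b] an interval enclosure of f'(x) = 2x (positive for a >= 1).
def interval_newton(a, b, tol=1e-12):
    f = lambda x: x * x - 2.0
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        da, db = 2.0 * a, 2.0 * b
        lo, hi = sorted((m - fm / da, m - fm / db))   # endpoints of N(X)
        a, b = max(a, lo), min(b, hi)                 # X <- X intersect N(X)
        if a > b:
            return None                               # empty: no root in X
    return 0.5 * (a + b)

print(interval_newton(1.0, 2.0))    # contracts quadratically to sqrt(2)
print(interval_newton(1.5, 2.0))    # None: the box provably contains no root
```

The empty-intersection test is what gives interval-Newton its "all solutions with certainty" property: a discarded box is mathematically guaranteed to be root-free.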
An alternative approach based on convex lower bounding and partitioning was proposed by
Maranas et al. [20]; it is used in this paper and is presented in detail in Section 2. The
simultaneous decoupling and pole placement conditions without the cancellation of invariant
zeros are derived in Section 3. Three examples are then presented in Section 4, and finally
the work is summarized in Section 5.
The approach proposed by Maranas et al. [20] is based on creating a convex lower
bounding function coupled with a partitioning strategy and, like interval-Newton methods,
it can provide guarantees of convergence to all ε-solutions. The fundamental difference,
however, between this approach and interval-Newton methods is that while the latter
utilize a single value to lower bound functions within rectangular domains, the former
creates a lower bounding convex function for the nonconvex function. By exploiting the
mathematical structure of the problem, this typically results in much tighter bounds. The
basic steps of the method are:
basic steps of the method are:
Step 1: Introduce slack variables in the constraints to transform the problem into an
optimization problem of minimizing the slacks, problem (P0). A zero objective
value then implies that all the constraints are satisfied.

min_{x, s >= 0}  s    (P0)
subject to
h_j(x) - s <= 0,   j in N_E
-h_j(x) - s <= 0,  j in N_E
g_k(x) - s <= 0,   k in N_I
x^L <= x <= x^U
Step 2: Replace the nonconvex functions by convex lower bounding functions, which
results in the following problem (R).

min_{x, s >= 0}  s    (R)
subject to
h_j^conv(x) - s <= 0,     j in N_noncE
(-h_j)^conv(x) - s <= 0,  j in N_noncE
g_k^conv(x) <= 0,         k in N_noncI
h_j^lin(x) = 0,           j in N_linE
g_k(x) <= 0,              k in N_convI
x^L <= x <= x^U

where N_noncE and N_linE are the sets of nonconvex and linear equality constraints,
respectively; N_noncI and N_convI are the sets of nonconvex and convex inequality
constraints; and h_j^conv(x), (-h_j)^conv(x) and g_k^conv(x) are the tight convex lower
bounding functions.
This property allows the convergence to the optimal solutions through the
successive refinement of variable bounds.
Step 3: Solve the convex lower bounding problem using a local optimization algorithm
(e.g. MINOS [21], NPSOL [22]), which provides a lower bound on the solution
of the original problem.
Step 4: If this lower bound is positive, then the objective of the original problem cannot
be driven to zero in this region, and consequently the region is fathomed from
further consideration. Otherwise, further partitioning of the region is required.
One simple way is to bisect at the middle point of the longest side of the current
rectangle, which is the procedure used in this paper.
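Steps 1 to 4 can be caricatured in one dimension (a sketch of the fathom/partition loop, not the αBB implementation used in the paper): find every solution of f(x) = x² + x - 2 = 0 on [-3, 3], fathoming boxes whose bound enclosure excludes zero and halving the rest.

```python
# All-solutions search for f(x) = x^2 + x - 2 on [-3, 3] by bounding and
# bisection; simple interval bounds stand in for the convex lower bounding
# problem of Steps 2 and 3.

def f_bounds(a, b):
    """Enclosure of f(x) = x^2 + x - 2 on [a, b] (interval evaluation)."""
    sq = (a * a, b * b)
    lo2 = 0.0 if a <= 0.0 <= b else min(sq)
    return lo2 + a - 2.0, max(sq) + b - 2.0

def all_roots(a, b, tol=1e-8):
    stack, roots = [(a, b)], []
    while stack:
        a, b = stack.pop()
        lo, hi = f_bounds(a, b)
        if lo > 0.0 or hi < 0.0:          # bound excludes zero: fathom the box
            continue
        if b - a < tol:                   # box small enough: record a solution
            m = 0.5 * (a + b)
            if not any(abs(m - r) < 10 * tol for r in roots):
                roots.append(m)
            continue
        m = 0.5 * (a + b)                 # otherwise halve the box (Step 4)
        stack += [(a, m), (m, b)]
    return sorted(roots)

print(all_roots(-3.0, 3.0))   # approximately [-2.0, 1.0]
```

Every fathomed box is provably root-free, so the surviving boxes enclose all solutions, which is exactly the guarantee exploited in this chapter.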
Note that although the general convex lower bound L(x) can be used for problems that
involve specific nonconvex forms, including bilinear, trilinear or multilinear terms,
tight convex lower bounds can be determined analytically by evaluating the exact value of
the α parameter or by using a special convex lower bounding function. For example, for the
case of bilinear terms x·y, which appear in the first example considered in this paper, on
the region [x^L, x^U] × [y^L, y^U] the bilinear term can be underestimated by the following
linear relaxation, where ω is a new variable replacing x·y:

ω >= x^L y + y^L x - x^L y^L
ω >= x^U y + y^U x - x^U y^U    (1)
ω <= x^U y + y^L x - x^U y^L
ω <= x^L y + y^U x - x^L y^U

Also, note that for the case of linear systems considered in this work, only multilinear
terms appear as the nonconvex terms in the problem formulation. A detailed description of
the convex underestimators for trilinear and general multilinear functions can be found
in [20].
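The four linear cuts above form the classical McCormick envelope of a bilinear term; the under/over-estimation property is easy to verify numerically (an illustration, with arbitrarily chosen box bounds):

```python
import numpy as np

# Check on a grid that the two '>=' planes stay below x*y and the two '<='
# planes stay above it on the box [xL, xU] x [yL, yU].
xL, xU, yL, yU = -1.0, 2.0, 0.5, 3.0

for x in np.linspace(xL, xU, 41):
    for y in np.linspace(yL, yU, 41):
        under = max(xL * y + yL * x - xL * yL,
                    xU * y + yU * x - xU * yU)
        over = min(xU * y + yL * x - xU * yL,
                   xL * y + yU * x - xL * yU)
        assert under <= x * y + 1e-12
        assert x * y <= over + 1e-12
print("McCormick envelope verified on the box")
```

The planes are exact at the four corners of the box, which is why the relaxation tightens rapidly as branch-and-bound shrinks the variable bounds.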
The simultaneous decoupling and pole placement problem is one application where the use
of global numerical optimization is needed. The problem as such requires the determination of
all solutions of a system of nonlinear equations and may not always have a solution for a
decouplable and controllable system without cancelling invariant zeros. It is of particular
interest to develop an approach for the problem without the cancellation of unstable invariant
zeros.
Consider a square system in minimal form

ẋ = A x + B u
y = C x + D u    (2)

where A ∈ R^(n×n), B ∈ R^(n×m), C ∈ R^(m×n) and D ∈ R^(m×m). Feedforward and feedback of
the form

u = F x + E ū    (3)

will be applied to decouple the system and to place its poles. The resulting system is then,
where

a(s) = det(sI - A_cl)
     = s^n + a_1 s^(n-1) + ... + a_(n-1) s + a_n    (6)
     = (s + λ_1)(s + λ_2) ... (s + λ_n)

where a(s) is the system's characteristic equation. Such invariant zeros are affected
neither by feedback nor by feedforward gains, i.e.,

det(C Adj(sI - A) B + D a(s)) = det [ sI - A       B
                                      -C           D ]
                              = det [ sI - A - BF  B     (8)
                                      -C - DF      D ]
                              = det((C + DF) Adj(sI - A_cl) B + D a(s))
                              = 0
Further,
Expanding the adjoint in the numerator part of the closed-loop TFM results in
DE = diag{ylrl...ym} (11)
and
The simultaneous decoupling and pole-placement problem can now be stated as follows:
Simultaneously solve
DiEj=0 (14)
and
As invariant zeros are not affected by feedforward or feedback, they will appear in the
numerator of the diagonal elements of a decoupled system. Furthermore, it is obvious that if
some invariant zeros are to be retained, they must not be a factor of a(s), as the factors of
a(s) appear in the denominator of the different diagonal elements of a decoupled system.
4. CASE STUDIES
A mass balance around the tanks gives

A1 ẋ1 = u1 - 0.2 u2 - x1/R1 - (x1 - x2)/R12    (18)
A2 ẋ2 = (x1(t - β) - x2)/R12 - x2/R2 - (x2 - x3)/R23    (19)

and, in state-space form,

ẋ = A x + B u
y = C x

with

B = [ 1/A1            -0.2/A1
      -β/(A1 A2 R12)   0
      0                1/A3 ]    (23)

and

C = [ 0  1  0
      0  0  1 ].    (24)

Assuming A1 = 1/2, A2 = A3 = 1, R1 = R2 = R3 = 1, R12 = R23 = 2 and β = 1 sec gives

A = [ -3    1    0
       2  -2.5  0.5
       0   0.5 -1.5 ],    (25)

B = [  2  -0.4
      -1   0
       0   1 ].    (26)
G(s) = C (sI - A)^(-1) B

     = [ (-s^2 - 1/2 s + 3/2)/(s^3 + 7s^2 + 27/2 s + 15/2)    (-3/10 s + 3/10)/(s^3 + 7s^2 + 27/2 s + 15/2)
         (-1/2 s + 1/2)/(s^3 + 7s^2 + 27/2 s + 15/2)          (s^2 + 11/2 s + 51/10)/(s^3 + 7s^2 + 27/2 s + 15/2) ]    (27)

or

G(s) = [ -(s-1)(s+1.5)/((s+1)(s+1.78)(s+4.23))    -3(s-1)/(10(s+1)(s+1.78)(s+4.23))
         -(s-1)/(2(s+1)(s+1.78)(s+4.23))          (s+1.18)(s+4.32)/((s+1)(s+1.78)(s+4.23)) ]    (28)

This system has a single invariant zero at s = +1.
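The invariant-zero claim can be cross-checked numerically (a sketch; the Rosenbrock-pencil formulation used here is standard linear systems theory, not taken from this chapter): the invariant zeros of (A, B, C, D) are the finite generalized eigenvalues of det([[A - sI, B], [C, D]]) = 0.

```python
import numpy as np
from scipy.linalg import eig

# Three-tank data from Eqs. (24)-(26).
A = np.array([[-3.0, 1.0, 0.0],
              [2.0, -2.5, 0.5],
              [0.0, 0.5, -1.5]])
B = np.array([[2.0, -0.4],
              [-1.0, 0.0],
              [0.0, 1.0]])
C = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
D = np.zeros((2, 2))

# Rosenbrock pencil: eig(M, N) with M = [[A, B], [C, D]], N = blkdiag(I, 0);
# finite generalized eigenvalues are the invariant zeros.
n = A.shape[0]
M = np.block([[A, B], [C, D]])
N = np.zeros_like(M)
N[:n, :n] = np.eye(n)
w = eig(M, N, right=False)
zeros = w[np.isfinite(w) & (np.abs(w) < 1e6)]

poles = np.linalg.eigvals(A)            # open-loop poles

# The pair (A, B) is controllable, so the poles can be placed freely.
ctrb = np.hstack([B, A @ B, A @ A @ B])
assert np.linalg.matrix_rank(ctrb) == n
print(np.round(zeros.real, 4), np.round(np.sort(poles.real), 4))
```

The computation confirms the single invariant zero at s = +1 and the open-loop poles at -1, -1.78 and -4.23, matching Eq. (28).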
Solving for E gives
Assuming the new eigenvalues are arbitrarily selected as -2, -3 and -4, gives the new
characteristic equation
Similarly, the off-diagonal elements of the Markov parameters of the decoupled system must
be zero. Deriving the characteristic equation coefficients as well as the off-diagonal
elements of the Markov parameters in terms of the elements of the F matrix gives seven
nonlinear equations to be solved simultaneously (Problem (32)).
In order to find all solutions of the above system of nonlinear equations, the algorithm
outlined in the previous section was applied. Since only bilinear terms are involved in
Problem (32), the analytical value of the α parameter can be found to be 0.5. The problem
was solved utilizing a GAMS [23] implementation of the algorithm. A total of 834 iterations
and 32.9 CPU sec were required on a 933 MHz Linux PC to identify all the solutions within a
tolerance of 10E-5. The variables are considered to lie between [-3.0, 3.0], bounds found
by a fast preprocessing step of the algorithm using a larger tolerance of 0.001. The
solutions obtained follow:
Gch(s)= ( - % + 3) J ^ ; (34)
(774)_
'- 1 o '
S+ + 4
Gcl2(s)= ( ^ ) 1 ; (36)
0
M_
and
Thus, three solutions are found, as can be expected: since the first diagonal element will
have two poles and the second one will have one pole, there are only three possible
different combinations, based on the physics of this particular problem. The corresponding step
responses are shown in Figure 2, where the system is clearly decoupled and the diagonal
transient responses are as expected based on the closed-loop poles.
As proposed in Section 2, when the problem involves specific nonconvex terms, tight lower
bounding functions can be derived. Due to the bilinear nature of the nonconvex terms in
Problem (32), the linear cuts defined by (1) are used to convexify them. After replacing
the nonconvex terms, the problem is modelled and solved in a branch-and-bound framework
using GAMS/MINOS [23]. The modified model takes 455 iterations and 5.73 CPU sec to obtain
the same set of optimal solutions within the same tolerance of 10E-5.
Note that, as reported in the literature ([20], [24]), the use of the linear cuts results in
faster convergence, since the maximum separation is always greater for the general convex
function L(x). Decreasing the tolerance to 10E-6 yields the same solutions, although it
requires more iterations (923 compared to 834) and additional computational time (a total
of 35.1 CPU sec).
Fig. 2. Step response of the decoupled three-tank system: the response of G_cl1(s) is shown
by the solid line, the response of G_cl2(s) by the dotted line and the response of G_cl3(s)
by the dashed line.
C = [  0   1   0
      -0.8  0  -0.8 ],   D = [  1    -1
                                -0.2  -0.3 ]    (40)
G(s) = C (sI - A)^(-1) B + D    (41)

or

G(s) = [ (s+2.605)(s^2-5.605s+9.598)/((s+2.105)(s-1.223)(s-3.882))    -(s+1.956)(s-1.488)(s-4.468)/((s+2.105)(s-1.223)(s-3.882))
         -0.2(s-3)(s^2+4s+10)/((s+2.105)(s-1.223)(s-3.882))           -0.3(s+1.826)(s-1.826)(s-3)/((s+2.105)(s-1.223)(s-3.882)) ]    (42)
This system has three invariant zeros: two stable invariant zeros at s = -0.2 and s = -1,
and an unstable invariant zero at s = +3. Solving for E gives

E = D^(-1) = [  0.6  -2
               -0.4  -2 ].    (43)
Assuming the new eigenvalue is arbitrarily selected as -2 gives the new characteristic
equation

det(sI - A_cl) = s^3 + a_1 s^2 + a_2 s + a_3
               = s^3 + 3.2 s^2 + 2.6 s + 0.4 = 0    (44)
The off-diagonal elements of the Markov parameters of the decoupled system must be
zero, i.e. (with C_i and D_i denoting the i-th rows of C and D),

(C_1 + D_1 F) B E_2 = 0
(C_2 + D_2 F) B E_1 = 0
(C_1 + D_1 F)(A + BF) B E_2 = 0    (45)
(C_2 + D_2 F)(A + BF) B E_1 = 0
(C_1 + D_1 F)(A + BF)^2 B E_2 = 0
(C_2 + D_2 F)(A + BF)^2 B E_1 = 0

Deriving the characteristic equation coefficients as well as the off-diagonal elements of
the Markov parameters in terms of the elements of the F matrix gives nine nonlinear
equations to be solved simultaneously, i.e.,
(C, + D, F)BE2 = - 2 / n + 2/ 21 - 2 - 2/ 12 + 2/ 22 = 0
(C2. + D2F)BE^ = -0.48-0.12/,, -0.18/ 2 1 + 0.08/12 + 0.12/22 = 0
(C, + A F X A + BF)B£ 2 = - 2 / u - 2 / , 2 + 2fnf21 - 6
- 6/12 + 4/ 22 + 2/ 22 / 21 - 8/13 + 8/23 - 2/ u / 1 2 - 2/ 12 / 22 + 2/222 = 0
(C2. + D2F)(A + BF)BEl = - 1 . 6 - 0 . 6 / u -0.12/^ -0.18/ 2 1
-0.18/ 2 1 / u +0.36/ 12 +0.06/ 22 -0.18/ 2 2 / 2 1 -0.28/ 1 3 -0.42/ 2 3
+ O.08/12/u + 0.08/12/22 + 0.12/222 = 0
(C, + DhF\A + BF)2BE.2 = -2fnfl2f22 + 2f22f2lfn
-12 fn + 4/ 21 + 44/ 22 + 4/ 23 - 12/, - 56/12 - 4 / 2 + 2/ 2 1 / n
+ 4/ 22 / 21 - 10/ 12 / n - 8/12/22 + 8/222 - 2/ u / 1 2 / 2 1 + 2/ 12 / 22 / 2 , (46)
2 2
-14/ 2 2 / 2 ] - 2f> + 2fl2 - 54 + 2 / 2 1 / + 2/ 12 / 2 + 6/21/13
- 2/ 12 / 23 + 2/ 2 2 / u + 2/ 2 2 / 21 +10/ 22 / 23 + 6/ 2 3 / n + 2/ 23 / 21
- 2/ 12 /, 2 - 2f\ - 2/, 2 / 21 - 2/ 12 / 2 2 - 6/13/12 - 2/ 12 / 22 = 0
(C2. + D2F\A + BF)2BEX = -2.88 + 0.08/ n / 12 / 22 -0.18/ 2 2 / 2 1 / u
+ 0.08/,2/12 + f,22 + 0.08/12/21 + 0.08/12/222 + 0.08/13/22
+ 0.12/232 - 0.18/ 23 / 21 + 0.12/12/22/21 - 0.36/ 21 / n - 0.18/21/;2
- 0.18/,2/22 - 0.54/21/13 - 0.54/22/21 + 0.64/12/22 + 0.08/12/23
+ 0.24/13/12 + 0.3/22 -0.18/ 2 2 / 2 1 -0.3/ 2 2 / 2 3 -0.54/ 2 3 / u
- 0 . 6 4 / n / , 3 -1.44/ I 3 -1.08/ 21 -1.84/ 22 -0.48/ 2 3 - 2 . 8 / u -0.12/^
-0.18/ 2 2 / u -0.72/, 2 -0.66/, 2 / 2 1 +0.32/ n / 1 2 -0.12/ u / l 2 / 2 , = 0
«i = - / 2 2 - / n - 3 = 3.2
<h = /22/n - 6 - 3/ 13 - f12f2i + f22 - fn + 2fn- / 23 = 2.6
a3 = 3/ 2 2 +10 + 5/13 - / 21 + 3/ l 3 / 2 2 -15/ 1 2 + 5 / n + / 23
+ /23/11-/21/13-3/ 12 /23=0.4
In order to find all solutions of the above system of nonlinear equations, the algorithm
outlined in the previous section was applied. The variables' lower and upper bounds are -10
and 1, respectively, found by a fast prerun of the algorithm using a larger tolerance of
0.001. The problem was solved utilizing a GAMS [23] implementation of the algorithm. A
total of 31 iterations and 1.1 CPU sec were required to identify the solutions within a
tolerance of 10E-5. The optimal solution obtained was:
"I o i ri o
Here only one solution was found, as can be expected: since the first diagonal element will
have no poles and the second one will have all three poles, there is only one possible
combination.
The corresponding step responses are shown in Figure 3, where the system is clearly
decoupled and the diagonal transient responses are as expected based on the closed-loop
poles.
terms. The system has an unstable invariant zero at s=3. It is completely decouplable, without
canceling the invariant zero and is defined as,
A = [ -1   1   3   2          B = [ 1  0
       1  -1   3   2                0  1
       1   1   0   1                0  0
       0   2   2  -1 ],             0  0 ],    (49)

C = [  0    1    0    1       D = [  2    -2
      -0.1  0   -0.4 -0.5 ],        -0.1  -0.2 ].    (50)
G(s) = C (sI - A)^(-1) B + D    (51)

or

G(s) = [ 2(s-3.943)(s+3.631)(s+2.601)(s+0.7117)/((s-4.081)(s+3.693)(s+2.389)(s+1))    -2(s-4.334)(s+3.574)(s+2.46)(s+0.8004)/((s-4.081)(s+3.693)(s+2.389)(s+1))
         -0.1(s-3)(s+4.414)(s+1.586)(s+1)/((s-4.081)(s+3.693)(s+2.389)(s+1))          -0.2(s-3)(s+1)(s^2+5s+6.5)/((s-4.081)(s+3.693)(s+2.389)(s+1)) ]    (52)
This system has four invariant zeros, three stable invariant zeros at s=-3.33, s=-0.74 and
s=-2.01, and an unstable invariant zero at s=+3.
Solving for E gives
Assuming the new eigenvalue is arbitrarily selected as -2 gives the new characteristic
equation

det(sI - A_cl) = s^4 + a_1 s^3 + a_2 s^2 + a_3 s + a_4
               = s^4 + 8.1666 s^3 + 23.333 s^2 + 37.1666 s + 10.3333 = 0    (54)
The off-diagonal elements of the Markov parameters of the decoupled system must be
zero, i.e.,

(C_1 + D_1 F) B E_2 = 0
(C_2 + D_2 F) B E_1 = 0
(C_1 + D_1 F)(A + BF) B E_2 = 0
(C_2 + D_2 F)(A + BF) B E_1 = 0    (55)
(C_1 + D_1 F)(A + BF)^2 B E_2 = 0
(C_2 + D_2 F)(A + BF)^2 B E_1 = 0
(C_1 + D_1 F)(A + BF)^3 B E_2 = 0
(C_2 + D_2 F)(A + BF)^3 B E_1 = 0.
Deriving the characteristic equation coefficients as well as the off-diagonal elements of
the Markov parameters in terms of the elements of the F matrix gives twelve nonlinear
equations to be solved simultaneously. In order to find all solutions of this system of
nonlinear equations, the proposed algorithm was applied within the GAMS [23] modelling
environment. Assuming that the variables' lower and upper bounds are -8.4 and 1.1,
respectively (again found by a fast prerun of the algorithm using a larger tolerance of
0.001), a total of 121 iterations and 8.51 CPU sec were required to identify the solutions
within a tolerance of 10E-4. The optimal solution obtained was:
"1 0 1 fl 0
G»= Q (s + 3.328X* + 2.lX* +0.7395X^-3) - Q (s-3) . (58)
(.j + 2.102X5 + 1.998Xi + 3.328X^ + 0.7395)J [ (s + 2)_
Also here only one solution was found, since the first diagonal element will have no poles and
the second one will have all four poles; thus, there is only one possible combination. The
simulation results are the same as in the example with the trilinear terms, as the closed-loop
transfer matrix is the same.
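The search above uses a deterministic global (GAMS-based) algorithm; as a toy illustration of the weaker multistart idea behind collecting all roots of a nonlinear system inside given variable bounds, the following sketch applies Newton's method from a grid of starting points to a hypothetical 2×2 system with two known roots. The system, bounds and tolerances are illustrative, not the chapter's twelve-equation problem:

```python
import numpy as np

# Toy 2x2 nonlinear system with exactly two roots, (+-1/sqrt(2), +-1/sqrt(2)).
def f(v):
    x, y = v
    return np.array([x**2 + y**2 - 1.0, x - y])

def jac(v):
    x, y = v
    return np.array([[2*x, 2*y], [1.0, -1.0]])

def all_roots(lo=-2.0, hi=2.0, n=8, tol=1e-10):
    roots = []
    for x0 in np.linspace(lo, hi, n):
        for y0 in np.linspace(lo, hi, n):
            v = np.array([x0, y0])
            for _ in range(50):                   # Newton iterations
                try:
                    step = np.linalg.solve(jac(v), f(v))
                except np.linalg.LinAlgError:     # singular Jacobian: skip start
                    break
                v = v - step
                if np.linalg.norm(step) < tol:
                    break
            # keep converged roots inside the bounds, deduplicated
            if (np.linalg.norm(f(v)) < 1e-8 and lo <= v.min() and v.max() <= hi
                    and all(np.linalg.norm(v - r) > 1e-6 for r in roots)):
                roots.append(v)
    return roots

roots = all_roots()
print(len(roots))  # -> 2
```

Unlike the branch-and-bound approach of the paper, such a multistart offers no guarantee of completeness; it only shows how the variable bounds delimit which solutions are collected.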
In this paper the problem of simultaneous decoupling and pole placement without
canceling invariant zeros was considered as a system of nonlinear equations. A general
solution procedure was developed based on a global optimization methodology that allows the
determination of all feasible solutions of such a system of nonlinear equations.
A three-tank example, including a pipe delay and circulation, with an unstable invariant zero
and bilinear terms was solved by placing all system poles without canceling the invariant
zero, using the global optimization approach. Due to the bilinear nature of the nonconvexities,
the problem was solved both with the general convex underestimator function and with linear
relaxations, which proved to be computationally more efficient. Likewise, two higher-order
examples, containing nonconvexities in the form of trilinear and multilinear terms and an
unstable invariant zero, were solved by placing all system poles without canceling the
invariant zero.
The global optimization approach utilized in this paper is guaranteed to find all solutions
within the bounds considered for the optimization variables. Note that, as mentioned in the
first realistic example, these bounds can be determined based on the physics of the system.
Moreover, the number of expected solutions can be determined when the number of poles at
each diagonal element is known; all possible combinations of the poles on the diagonal
elements then effectively determine the number of solutions. For the general case, however,
where speculating on the number of solutions becomes more complicated, the variable bounds are
very important since they control which solutions are determined.
Acknowledgments
The first author would like to thank the Department of Industrial Engineering and the
Department of Electrical and Computer Engineering at Rutgers University for their friendly
and stimulating research environment during her sabbatical year 1998-1999. This work was
supported by the University of Iceland and NSF grant no. INT-0071505.
REFERENCES
[8] A. Gestsson and A.S. Hauksdottir, Proceedings of the American Control Conference,
Seattle, Washington, 1995, pp. 4418-4421.
[9] A.S. Hauksdottir, and M. Ierapetritou, submitted to the IASTED International
Conference on Modelling, Simulation, and Optimization MSO 2003, to be held July 2-4,
2003, in Banff, Canada.
[10] U. Zuhlke and A.S. Hauksdottir, The Twenty-Second IASTED International
Conference, Modelling, Identification, and Control, MIC 2003, Innsbruck, Austria,
2003, pp. 578-583.
[11] U. Zuhlke, and A.S. Hauksdottir, Implementation of decoupling controllers for
multivariable systems with a time delay, submitted to The 42nd IEEE Conference on
Decision and Control, to be held in Maui, Hawaii, 2003.
[12] J.F. Camart, M. Malabre, and J.C. Martínez-García, Automatica, 37 (2001) 297.
[13] J.C. Martínez-García, M. Malabre, and J.M. Dion, Int. J. Control, 72 (1999) 1392.
[14] A.S. Hauksdottir, and M. Ierapetritou, Simultaneous decoupling and pole placement
without canceling invariant zeros, Proceedings of the American Control Conference,
Arlington, 2001, pp. 1675-1680.
[15] E. Lahaye, C.R. Acad. Sci. 198 (1934) 1840.
[16] R.W. Klopfenstein, J. Assoc. Comput. Mach., 8 (1961) 366.
[17] P. Seferlis, and A.N. Hrymak, Comput. Chem. Eng., 20 (1996) 1177.
[18] E.R. Hansen, Global Optimization Using Interval Analysis, Marcel Dekker, New York,
NY, 1992.
[19] A. Neumaier, Interval Methods for Systems of Equations, Cambridge University Press,
Cambridge, UK, 1990.
[20] C.D. Maranas and C.A. Floudas, Journal of Global Optimization, 7 (1995) 143.
[21] B.A. Murtagh and M.A. Saunders, MINOS 5.5 User's Guide, Stanford University
Department of Operations Research, July 1998.
[22] P. Gill, W. Murray, M. Saunders, and M. Wright, User's Guide for NPSOL (Version
4.0): A Fortran Package for Nonlinear Programming, Stanford University Department of
Operations Research, 1986.
[23] A. Brooke, D. Kendrick, A. Meeraus, and R. Raman, GAMS User's Guide, GAMS
Development Corporation, 1998.
[24] I.P. Androulakis, C.D. Maranas and C.A. Floudas, Journal of Global Optimization, 7
(1995) 337.
The Integration of Process Design and Control
P. Seferlis and M.C. Georgiadis (Editors)
604 © 2004 Elsevier B.V. All rights reserved.
Chapter D6
a Universidad Autonoma Metropolitana-Iztapalapa, Depto. de Ingenieria de Procesos e
Hidraulica, Apdo. 55534, 09340 Mexico D.F., MEXICO.
b Universidad Nacional Autonoma de Mexico, Depto. de Ingenieria Quimica, Facultad de
Quimica, Ciudad Universitaria, 04510 Mexico D.F., MEXICO.
c Comercial Mexicana de Pinturas, Centro de Investigacion en Polimeros (CIP),
Marcos Achar Lobaton # 2, 55855 Tepexpan, Edo. de Mexico, MEXICO.
d Universidad Autonoma Metropolitana-Iztapalapa, Depto. de Matematicas, Apdo. 55534,
09340 Mexico D.F., MEXICO.
1. INTRODUCTION
Batch and semibatch processes play an important role in the production of high-valued
products. A wide variety of speciality chemicals such as polymers and pharmaceuticals are
produced in batch reactors. The transient nonlinear nature of batch processes gives rise to
complex process and control design problems. In industrial practice, it is well known that the
process design (i.e., equipment and operation policy) affects and is affected by the control
design, and the interplay between them is handled with some dosage of experience in
conjunction with process, control, and laboratory-to-plant testing and scaling tools [1, 2, 3].
Motivated by the need for systematic design and redesign techniques, the field of batch
process systems engineering is currently an active area of research. Even though there have
been important advances in the process and control design parts, their integration is lagging
behind. At present, the integration problem is regarded as an important subject of research.
The industrial design of a batch process amounts to finding a suitable compromise between
safety, productivity, and quality attributes in the light of investment-operation costs. The
consideration of the process and control designs involves many constraints and decisions,
among them are: (i) the process equipment (i.e., vessel size and shape, kind of mixing and
heat exchange equipment, and so on), (ii) the batch motion and its duration, (iii) the control
structure selection (inputs, outputs, and their interconnection), and (iv) the tracking control
algorithm. In the process systems engineering field, the emphasis has been placed on the
motion design via optimization methods. The related state of the art can be seen elsewhere [1-
5], and here it suffices to say that the applicability of the approach has been demonstrated,
valuable insight has been gained, but the integration of the motion and control designs still
remains as an open research problem. In principle, the integrated process and control design
problem can be addressed via mixed-integer optimization [6], provided adequate definitions
and measures of nonlinear stability, detectability and stabilizability for batch motions are
available. Should this be the case, the solution of the resulting optimization problem could
easily become a cumbersome or intractable task, due to the large dimension of the space over
which the optimal solution must be searched.
The constructive control method [7-10] questions the pursuit of the above mentioned
general-purpose direct optimality method to encompass the large diversity of nonlinear
systems of practical interest, and instead proposes to employ an inverse optimality approach
with design procedures that identify and exploit the characteristics of a particular system. The
geometric method reveals the structural characteristics, the analysis method looks at the error
propagation via robust stability, and recursive procedures are employed to attain optimality
properties. Optimal feedback stabilizing controllers are inherently robust, and are underlain
by a structural property denominated passivity. In a linear system, passivity means a
minimum phase (stability) property, an infinite gain margin, and less than ninety degrees of
phase lag. In a nonlinear system, passivity means relative degrees smaller than or equal to one, and stable
zero dynamics (ZD). In a direct optimality framework, the feedback control problem is solved
as follows: an objective function is set, a detectability condition is verified, and the solution of
the corresponding Hamiltonian equations yields the robust stabilizing controller, and
determines an input-output pair with respect to which the system is passive. To make tractable
the search of a feedback control in analytic form, by circumventing the solution of the related
Hamiltonian equations, the inverse optimality approach is executed as follows: an input-
output pair is set so that the system is passive, the associated state-feedback (SF) controller is
constructed, the corresponding objective function is drawn, and the optimality property is
assessed a posteriori. In this way, the consideration of poorly robust and wasteful control
candidates is avoided, at the cost of verifying or correcting the objective function.
The constructive method, which is considered as a major breakthrough in control theory,
was developed in the last decade. As it stands, the method is intended for feedback control
design, and its application to the batch motion case requires the nominal output to be tracked
and a suitable definition of finite-time batch motion stability. In a more applied context, the
inverse optimality idea has been applied to design the nominal motion of homo [11] and
copolymer [12] reactor, obtaining results that are similar to the ones drawn from direct
optimization [4]. The motion was obtained from the recursive application of the process
dynamical inverse [13], and the inverse yielded a nonlinear SF controller [9, 10] that was in
turn used to specify a conventional feedforward-feedback industrial control scheme.
However, the issues of motion stability and systematized search were not formally addressed.
Basically, the same approach should apply to free-radical (solution, emulsion, suspension)
multipolymer reactors in particular, and to exothermic reactors in general. This is why the
homopolymer case [11] was chosen as the case example of this chapter. Thus, the general-purpose
material of this chapter can be regarded as a generalization of the motion design
procedures employed in the above discussed polymer reactor cases [11, 12]. For the
integration of the motion and control design aspects, the following concepts and procedures
are required: (i) the definitions of batch motion stability [14, 15], (ii) the notion of passive
estimation structure [19], and (iii) the design of a nonlinear geometric estimator-based
controller [20], with elimination of output mismatch [14, 21] via quick estimation [17] and
compensation [22-24] of the modeling errors in the input-output path.
In chemical process systems engineering, it has been widely acknowledged that the choice
of control structure has a profound effect on performance, is much more important than the
choice of control algorithm, and it is second only to plant design in importance for effective
control [25-28]. In a way that is analogous to the choice of control structure in the
constructive approach, the choice of estimation structure has been considered to attain
robustness via passivation [16-19]. The idea is that the standard Kalman filter (EKF) and
Luenberger (L) nonlinear estimators have structures that are fixed by the (possibly ill-
conditioned) detectability property of the estimation model, and that robustness-oriented
passive estimation structures can be designed for the purpose at hand [19].
In batch motion design studies via direct optimization, the focus has been kept on the
search for an optimal solution motion, and its stability has not been an issue. Once the nominal
output trajectories have been determined, an advanced or (possibly gain scheduled)
conventional controller is designed to track the outputs, and the resulting closed-loop
behavior is assessed. In other words, the output stability has been accounted for, and the state
stability has been disregarded. On the other hand, the above discussed theoretical control
arguments say that state stability plays a major role in the formulation and solution of the
optimal control problem. What is clear is that the batch process is a nonautonomous control
system whose unique solution is a state motion defined over a finite-time interval, and
therefore, the definitions of asymptotic stability, employed in the study of continuous
processes, cannot be applied to the batch case, and the same is true for the related
detectability, passivity and stabilizability properties. The batch motion deviations, which are
caused by initial state and exogenous input disturbances, exhibit accumulative or irreversible
deviations. From the perspective of an industrial practitioner, the motion design is acceptable
if its deviations stay within prescribed limits, and the manifestation of the motion deviations
on the variables of interest is referred to as variability. In the nonlinear dynamics and control
literature this kind of admissible motion variability is addressed with practical [29],
semiglobal [7-10] and input-to-state (IS) [30] stability concepts. In particular, the definition of
IS stability has played an important role in the development of the constructive control
method. Along this line of thought, a definition of finite-time batch
motion stability was introduced in [15] to prove the convergence of a nonlinear calorimetric
estimator, and the corresponding stability features were illustrated with experiments.
With a few exceptions [11, 31], the relationship between motion design and geometric
control has not been discussed in batch process studies. In [11], the optimal motion and its
geometric controller were simultaneously designed. However, the approach is rather
restrictive because a complete controllability condition must be met, and the majority of
chemical processes are only stabilizable. In polymer reactor studies [11, 12], the geometric
method was applied to recursively design the nominal motion, and to design a conventional
control scheme with pre-programmed dosage and a cascade temperature control. The need for
a dosage feedback loop was identified, and the solution to this problem is given
in the case study part of this chapter.
In industrial practice [32], it is a well-known fact that the combination of an inventory
control with a measurement feedback control is the most effective way to control processes
subjected to drastic load and setpoint disturbances, as is the case in a batch process. The
inventory controller is a feedforward component that acts as a process inverse that continually
balances the material and energy contents against the demands of the load and of the setpoint
changes. If a process has a stable inventory controller, and the process inputs and outputs are
linked via single capacities, the process is analogous to a passive electric network. This
explains the robustness of the inventory control in the light of an inverse optimality
framework. Moreover, this notion of process inversion is precisely the one that connects the
passive control and recursive motion designs employed in the above discussed polymerization
reactor studies [11, 12]. A prototypical example of this robustness feature is given by the
calorimetric control for exothermic reactors [1, 33]. Early ideas on the relationship between
inventory balance, thermodynamics, and control can be found in the use of intensive variables
for process control [34]. Connections between inventory balances and geometric feedback
control can be found in [35], and the interesting interplay between inventory control, passivity
and thermodynamics is discussed in [36].
In the field of polymer reactor engineering, the calorimetric estimation and control
problems have been extensively studied with simulations and experiments [1, 33, 37, 39]. EKF
[33,37] and L [39] observers have been employed to estimate the heat generation rate, on the
basis of an off-line fitted heat transfer model [38, 39]. Various control techniques have been
employed; among them are adaptive, inferential, model predictive, and geometric control [1,
38, 39]. The robustness of the controller is shown by its successful implementations,
regardless of the particular estimation and control techniques employed. Recently [15], it has
been formally established, and experimentally demonstrated, the feasibility of jointly
estimating the heat generation rate and the heat transfer coefficient in an exothermic reactor.
The preceding observations suggest that the batch motion and control problems could be
jointly addressed, combining notions and tools from the constructive control, the batch
process motion design, and the inventory control fields. The constructive control should
provide analysis-oriented tools to draw the robust control and the recursive procedure to
design the motion. The existing optimization methods should take care of the formulation and
assessment of the objective function, the specification of constraints, and the systematization
of motion search. The inventory control, together with chemical process engineering, should
offer physical insight and engineering judgment. The development of this unified approach,
up to the stage of applicability, represents an endeavor that has to be tackled with the
participation of many researchers and various disciplines. Therefore, the present chapter must
be regarded as an inductive step towards that aim. Emphasis will be placed on the basic
concepts that connect the motion design and the feedback control design, while the above-
mentioned complementary role of the direct optimization tools will be outlined only, given
that the tools exist and have been put to use in batch process studies.
The content of the chapter is divided into two sections, the first of which addresses batch
processes in general, and the second, a class of polymerization reactors with the prototypical
calorimetric control scheme. The two sections are structured in such a way that each step
discussed in the general purpose section has a correspondence in the case study offered in the
second section. A reader more interested in the polymer reactor problem will be able to move
to the second section with just a brief scanning of the first one.
Specifically, in this chapter, the problem of designing jointly the motion of a batch process
and its feedback control is addressed within a constructive framework. The first step consists
in drawing a passive control structure on the basis of a suitable definition of finite-time
motion stability. In a second step, the following results are obtained: (i) the underlying
solvability conditions, (ii) a fundamental connection between process and control design, (iii)
a recursive procedure to design the nominal motion, and (iv) the construction of the
controller. Finally, the suggested approach is applied to a semibatch polymer reactor with
industrial scale and numerical simulations. The related conditions of solvability are identified
and interpreted with physical meaning. This procedure yields both a nominal trajectory, which
resembles the ones obtained with direct optimization, and a calorimetric controller, which
performs better than others that have been developed. The resulting calorimetric control
methodology simplifies, unifies and systematizes the diversity of previous techniques.
2. PROCESS-CONTROL DESIGN
In this section the joint process and control design problem of batch processes is addressed.
The problem is formulated within an optimization framework, including the search of the
equipment, the motion, and the controller. As stated in the introduction, the emphasis will be
placed on the motion and control design problem via inverse optimality, while the
complementary role of the direct optimization framework will be outlined only, in the
understanding that the corresponding tools are known and have been employed in batch
drawn, and the result is applied to construct the output-feedback controller, and to set the
algorithm to design the nominal batch motion.
dx/dt = f(x, d, u, p), x(0) = x_0, 0 ≤ t ≤ t_F, dim x = n, dim d = n_d      (1a)
z = g(x, p), y = h(x, p), dim z = dim u = m, dim p = n_p, dim y = m_m      (1b)
over the finite-time interval [0, t_F], with state (x), exogenous input (d), control input (u),
measured output (y), tracked output (z), and model parameter (p). The nonlinear maps f, g and
h are smooth. The parameter p contains physicochemical (p_M), mass- and heat-conservation
(p_c), and equipment (p_e) parameters. The entries of p_e may contain logic variables related to
candidate measurements and actuators.
For robustness purposes, let us regard a passive cascade control configuration: (i) the
tracked output (z) consists of measured (z_m) and unmeasured (z_f) components (Eq. 2a), (ii) the
output z_m must have relative degrees equal to one (RD's = 1) [9] with respect to the primary
input (u_p, x_s), which consists of entries (u_p) of the input u and of measured entries (x_s) of the
state x (Eqs. 2a-b), (iii) the measured state x_s is regarded as the control input u_v (or the tracked
output y_s) of the primary (or secondary) control subsystem (Eq. 2c), and the secondary pair
(u_s, y_s) has RD's = 1, and (iv) the state of the corresponding zero dynamics (ZD) [9] is referred
to as x_I (Eq. 2d). These RD requirements determine the following control structure:
ε(z) = ε(z_m) ∪ ε(z_f), ε(z_m) ⊂ ε(y), dim (z_m, z_f) = (m_p, m_f), m_p + m_f = m      (2a)
ε(u) = ε(u_p) ∪ ε(u_s), ε(u_s) ⊂ ε(y), dim (u_p, u_s) = (m_p, m_s), m_p + m_s = m      (2b)
ε(x) = ε(x_p) ∪ ε(x_s) ∪ ε(x_I), dim (x_p, x_s, x_I) = (m_p, m_s, n_I)      (2d)
u = (u_p, u_s)', x = (x_p, x_s, x_I)', z = (z_m, z_f)', y = (y_m, y_s)', z_m = y_m, p = (p_M, p_c, p_e)      (2e)
where ε(·) denotes the set of entries in the vector (·). If there is no secondary control [i.e., ε(u_s)
= ∅], the pair (u, z) has RD's = 1 and dim x_I = n − m; otherwise [i.e., ε(u_s) ≠ ∅] some of the
RD's will be equal to two. In the latter case, the control scheme must be passivated in
order to construct a robust cascade controller [7, 8].
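The dimensional bookkeeping implied by Eqs. (2a)-(2e) can be sketched as a simple consistency check; the function and its arguments are illustrative, not part of the chapter:

```python
# A bookkeeping sketch of the control-structure partition in Eqs. (2a)-(2e):
# entries of z, u and x are split into primary/secondary/inverse-dynamics
# groups, and the group sizes must be mutually consistent.
def check_structure(m, n, m_p, m_f, m_s, n_i):
    """Return True iff the dimension constraints of Eq. (2) hold."""
    ok_outputs = (m_p + m_f == m)       # z = (z_m, z_f), Eq. (2a)
    ok_inputs = (m_p + m_s == m)        # u = (u_p, u_s), Eq. (2b)
    ok_states = (m_p + m_s + n_i == n)  # x = (x_p, x_s, x_I), Eq. (2d)
    return ok_outputs and ok_inputs and ok_states

# Example: m = 2 controls, n = 5 states, one secondary loop (m_s = 1),
# so the zero-dynamics state has n_i = n - m = 3 entries.
print(check_structure(m=2, n=5, m_p=1, m_f=1, m_s=1, n_i=3))  # -> True
```

Note that the constraints together imply n_I = n − m whenever they hold, in agreement with the text.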
For a given data set D, the system (1) has a unique (possibly open-loop unstable, in a sense to
be defined) solution motion x(t), with unique output trajectories z(t) and y(t):

x(t) = x[t, t_0, x_0, d(·), u(·), p], 0 ≤ t ≤ t_F, D = [x_0, d(t), u(t), p]      (3a)
(i) The nominal operation O = [x̄(t), d(t), ū(t), z̄(t), ȳ(t), p], 0 ≤ t ≤ t_F      (4)
(i.e., the equipment and the operation policy) so that the closed-loop process takes place as
fast as possible with an adequate compromise between safety, operability, product quality,
and cost-benefit measures, according to heuristic criteria commonly employed in practice, or
to their formal representation in terms of a constrained optimization performance index,
where p_e and σ represent (possibly logical) equipment and control structure decisions,
respectively. Preferably, the design should be performed with a model validated with
laboratory and/or pilot plant experimental data. To adjust the nominal operation over the
batch-to-batch horizon, the model should be occasionally updated on the basis of the data
generated by the process.
(ii) The tracking controller: dx_c/dt = f_c[x_c, d(t), y(t), z̄(t), p_c], u(t) = h_c(x_c, p_c)      (6)
driven by the prescribed nominal output z̄(t), the measured output y(t), and the measured
exogenous input d(t), so that the closed-loop reactor robustly tracks the nominal motion x̄(t).
The controller must have a control structure selection criterion, a systematic construction, and
a closed-loop stability criterion coupled to a simple tuning scheme.
2.2.1. Stability
Since a batch process is a nonautonomous system, with a solution motion, which evolves
over a finite-time period, the standard definitions of asymptotic stability of critical points,
which are appropriate for continuous processes, cannot be applied. The same is true for the
definitions of nonlinear detectability [15, 17, 27, 30] and stabilizability. Moreover, while in a
continuous process those definitions apply to a critical point, in a semibatch process the
definitions apply to one particular motion or operation policy [20], depending on the kind of
load as well as on the material dosage and heat exchange policies. The batch motion
deviations, which are caused by initial state and exogenous input disturbances, exhibit
accumulative or irreversible deviations. If the deviations are acceptably small, the motion is
regarded as stable, regardless of whether the deviations grow or get smaller with time. The
related definition of batch motion stability, which was introduced in [15], is given next.
Let x̃ denote the perturbed value of x, and let x̃ − x be the corresponding (additive)
perturbation error. The state motion x(t) (Eq. 3a), over the batch period [0, t_F], is said to be
exponentially bounded with increasing or decreasing deviations (Eb-stable) if, for given
disturbance sizes ε_0, ε_u, ε_d, ε_p, ε_x > 0 and a time t_F, there are constants a, λ, b_u, b_d, and b_p
such that the perturbed motions x̃(t) are bounded in the following integral input-to-state form
[30]:
If the perturbed motions have decreasing deviations (i.e., λ < 0), the motion is said to be
Ed-stable. These definitions [15] stem from the definitions of practical [29], semiglobal [7-10]
and input-to-state (IS) [30] stability.
Eb-stability means that the perturbed motions may diverge to an acceptable extent, and
Ed-stability means that the perturbed motions converge with admissible deviations. These batch
motion stability features are in agreement with the experimental functioning of a polymer
reactor [15]: bounded disturbances in the semibatch load and monomer dosage produce
(small) Eb-stable concentration and conversion deviations (because mass can only be added),
and Ed-stable temperature deviations (because heat can be added or removed).
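The Ed-stability bound can be illustrated numerically with a hypothetical scalar cooling dynamics dT/dt = −k(T − T_j), whose initial-temperature perturbation decays exponentially (a bound of the form a·|x̃_0 − x_0|·exp(λt) with a = 1 and λ = −k < 0); all numbers are illustrative:

```python
import numpy as np

# Hypothetical scalar example of Ed-stability: for dT/dt = -k*(T - T_j),
# an initial-temperature perturbation decays as |T~(t) - T(t)| =
# |T~0 - T0| * exp(-k*t), i.e. decreasing deviations (lambda = -k < 0).
k, T_j, dt, t_f = 0.5, 300.0, 1e-3, 10.0
t = np.arange(0.0, t_f, dt)

def simulate(T0):
    T = np.empty_like(t)
    T[0] = T0
    for i in range(1, len(t)):        # explicit Euler integration
        T[i] = T[i-1] + dt * (-k * (T[i-1] - T_j))
    return T

nominal, perturbed = simulate(350.0), simulate(360.0)
deviation = np.abs(perturbed - nominal)
bound = 10.0 * np.exp(-k * t)         # a = 1, |T~0 - T0| = 10, lambda = -k
print(np.all(deviation <= bound * 1.001))  # -> True
```

An Eb-stable (but not Ed-stable) quantity, such as a concentration deviation that can only accumulate, would instead satisfy the same form of bound with λ ≥ 0.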
Let us recall the system partition (2), and write the corresponding control system
Assume this equation can be solved for the (m + m_p)-entry vector x_a,
The differentiation of Eq. (10), followed by the substitution of dx_I/dt (Eq. 8c) and x_a (Eq. 10), yields

u_s = η_s(x_I, d, ḋ, z, ż, z̈, p)      (13)
Finally, the substitution of this equation and of Eq. (12) into Eq. (8) yields the
(n − m)-dimensional dynamical inverse system, with respect to the input-output pair (u, z), of the
batch process (1):

dx_I/dt = η_I(x_I, d, ḋ, z, ż, z̈, p)      (14a)
To have robustness with respect to the disturbances [x_I0, d(t), z(t)], the solution motion

x_I(t) = θ_I[t, t_0, x_I0, d(t), ḋ(·), z(·), ż(·), z̈(·), p]      (15)

must be stable.
The dependency of x_I(t) on z̈(t) is due to the presence of z̈ in the dynamical inverse (Eq.
14). This signifies that the control system has RD's = 2, with an m_p-dimensional dynamical
extension [9], in the sense that z̈ depends on (du_p/dt, u_p, u_s). Therefore, u_p must be regarded as the
state of the dynamic extension du_p/dt = ν, with new input ν [9]. In the context of a nonlinear
SF control problem, the inverse dynamics system (Eq. 14a) is referred to as the zero dynamics
(ZD) [9]. The dimension n_I = n − m (Eq. 2d) of the ZD is not affected by this dynamic
extension, or equivalently, the new state augments the "external" state (x_p, x_s)' in (Eq. 2e). To
remove this RD obstacle for robustness, let us apply the backstepping-passivation procedure
[7, 8], according to the following reasoning. To avoid the presence of z̈(t) in the dynamical
inverse (Eq. 14), let us recall the solution (Eq. 10b) for the secondary state x_s, and regard it as
the "measurement" that drives a battery of standard single-output (second-order) filters (Eq.
16b) to obtain an estimate v̂_s of v_s = dx_s/dt. The combination of the filter (Eq. 16b) with the
dynamical inverse (Eq. 14) yields the observer-based dynamical inverse (Eq. 16), whose drift term is

φ_I = f_I[ι_x(x_I, d, z, ż, p), ι_v(x_I, d, z, ż, p), x_I, d, x_s, ι_p(x_I, d, z, ż, p), ι_s(x_I, d, z, ż, x_s, v̂_s, p), p]

where ι_s is the solution of Eq. (11) for u_s, φ_I is η_I (Eq. 14) expressed in terms of x_s, I_ms is the
m_s × m_s identity matrix, and ω (or ζ) is the characteristic frequency (or damping factor) of the filter.
The corresponding augmented motion is denoted by
xIa(t) = (£;, x's)\i) = x,[t, to, xl0, d(t), d(-), z(-), z(-), p] (17)
Provided the observer (Eq. 16b) is tuned sufficiently (usually 5 to 15 times) faster than the
(time-varying) characteristic frequency of the motion over [0, t_F], the preceding motion
approximation converges to the actual one x_I (Eq. 15), with Eb-convergence (Eq. 7) and an error
size that depends on the (usually small) estimation error.
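One channel of such a second-order filter bank can be sketched as follows; the driving signal, the tuning values (ω = 20, ζ = 0.71) and the Euler integration are illustrative assumptions, not the chapter's:

```python
import numpy as np

# Sketch of one channel of the second-order filter bank (Eq. 16b): driven by
# the "measurement" x_s(t), it returns estimates of x_s and of v_s = dx_s/dt.
# Filter: xh'' + 2*zeta*w*xh' + w^2*xh = w^2*x_s, with vh = xh' serving as
# the derivative estimate.
w, zeta = 20.0, 0.71          # filter frequency and damping (illustrative)
dt, t_f = 1e-4, 10.0
t = np.arange(0.0, t_f, dt)
x_s = np.sin(t)               # measured secondary state (true v_s = cos t)

xh, vh = 0.0, 0.0
vh_log = np.empty_like(t)
for i in range(len(t)):       # explicit Euler integration of the filter
    vh_log[i] = vh
    xh_dot = vh
    vh_dot = -2.0 * zeta * w * vh + w * w * (x_s[i] - xh)
    xh, vh = xh + dt * xh_dot, vh + dt * vh_dot

# After the transient, vh tracks cos(t) with a small phase-lag error, the
# faster the filter relative to the signal, the smaller the error.
err = np.max(np.abs(vh_log[len(t)//2:] - np.cos(t[len(t)//2:])))
print(err < 0.1)  # -> True
```

Here the filter frequency is 20 times the signal frequency, in the spirit of the 5-to-15-times tuning rule quoted above.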
The (time-varying) solvability conditions for the existence of a stable (Eq. 7) dynamical
inverse (Eq. 14) are given by:

iii) The inverse (ZD) motion x_I(t) (Eq. 14a) is stable.      (18c)
Condition (18a) [or (18b)] says that the algebraic equation (9) [or (12)] can be solved for x_a
[or x_s], or equivalently, that the input-output pair [(u_p, u_v = x_s), z] [or (u_s, y_s)] has RD's = 1.
The dynamical inverse (Eq. 14 or 16) corresponds to a feedforward controller along the
idea of the inventory balance control employed in industrial practice [32]. The controller
continually balances the material or energy delivered against the demand of the load, and is
the ideal way to compensate for considerable load or setpoint changes, as is the case in a batch
process. The inventory control is a feedforward component that performs most of the load
disturbance rejection and setpoint tracking tasks, while the feedback controller is dedicated to
compensating modeling errors. Otherwise, a feedback control alone may not function
satisfactorily when the load disturbances and the setpoint changes occur over periods close to
the process natural (time-varying) period. Should the (stable) inventory feedforward control
be applied alone, the open-loop stability of the batch motion would be required.
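The feedforward-plus-feedback idea can be sketched on a hypothetical single-capacity holdup dx/dt = u − L(t): the feedforward term balances the estimated load, and a proportional feedback term trims the residual modeling error. All names and numbers (a load estimate that is 10% low, gain k_c = 2) are illustrative assumptions:

```python
import numpy as np

# Inventory feedforward plus feedback on a hypothetical holdup x' = u - L(t).
dt, t_f, k_c = 1e-3, 20.0, 2.0
t = np.arange(0.0, t_f, dt)
L = 1.0 + 0.5 * np.sin(0.5 * t)        # true load
L_hat = 0.9 * L                        # imperfect load model (assumed 10% low)
x_sp = 5.0 + 0.0 * t                   # constant setpoint, so dx_sp/dt = 0

def run(feedback):
    x, dev = 5.0, []
    for i in range(len(t)):
        u = L_hat[i]                   # inventory feedforward (load balance)
        if feedback:
            u -= k_c * (x - x_sp[i])   # feedback trim of the modeling error
        x += dt * (u - L[i])           # Euler step of the holdup
        dev.append(abs(x - x_sp[i]))
    return max(dev)

ff_only, ff_fb = run(False), run(True)
print(ff_fb < 0.1 < ff_only)  # -> True
```

Feedforward alone lets the 10% load-model bias accumulate into a large holdup deviation, while the small feedback term keeps the deviation bounded, which is the division of labor described above.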
Strictly speaking, the dynamical inverse (Eq. 14) of the batch process has RD's = 2, and its
observer-based approximation has primary and secondary components with RD's = 1, or
equivalently, the approximated inverse has been passivated with respect to the input-output
cascade structure (Eq. 2).
where diag(·) denotes a diagonal matrix, x_s* is the setpoint for the secondary control, and ω_i^p
(or ω_i^s) is the control gain of the ith tracked measured output z_i^m = y_i^m (or z_i^s = x_i^s = y_i^s) in the
primary (or secondary) loop [y_m, z_m, z_f and x_s were defined in Eq. (2)]. In other words, we are
considering feedforward plus feedback action for the measured tracked outputs (z_m), and only
feedforward for the unmeasured tracked outputs (z_f).
Recall the observer-based dynamical inverse (or feedforward controller) (Eq. 16), drop its
dynamic component (Eq. 16a), and replace the state x_s by the setpoint x_s*, to obtain the nonlinear
SF controller (Eq. 16c)
where

γ(x, d, p, t) = ι[x_I, d, g(x, p), v(t), p], v(t) = dz̄(t)/dt − K_c[g(x, p) − z̄(t)], K_c = bd[0, K_p]      (21a)
γ_p(x, d, p, t) = ι_p[x_I, d, g(x, p), v(t), p], γ_s(x, d, v̂_s, p, t) = ι_s[x_I, d, g(x, p), x_s*, v̂_s, v(t), p]      (21b)
From the stability of the inverse (or zero) dynamics motion (Eq. 15) and of the linear filter
(Eq. 16b) (tuned sufficiently fast), the stability of the closed-loop system motion follows. The
regulated-measured outputs (zm) have quasi(q)LNPA tracking dynamics, and the unmeasured
regulated outputs (zf) have the Eb-stability property (7) of the ZD motion (Eq. 15). As the
controller gain is tuned faster, the controller approaches the behavior of its feedforward
counterpart (Eq. 14), and this feature in turn constitutes the behavior recovery target of the
measurement-driven controller that will be developed next.
entries: (i) the 2m_m-entry innovated state x_ι, which includes the m_m augmented states as well as m_m states (preferably linked to the storage state x_p of the primary path), and (ii) the n_v-entry noninnovated state x_v. That is,
Finally, the control (x_p, x_s, x_I) and estimation (x_ι, x_v) state partitions should be chosen so that the maps [y_p, y_s and y* in Eq. (21)] of the nonlinear SF controller (Eq. 20) can be written as follows:
If possible, the control parameter p_c should consist of mass and energy capacity parameters, heats of reaction, stoichiometric coefficients, and parameters related to the less uncertain model functions. Thus, the resulting estimation model has the following form:
In general, only some of the inversion and estimation model states coincide. On one hand,
the consideration of augmented observable states brings in new states, and on the other hand,
the estimation of some model functions takes away some states. From [19] we have that the motion x_e(t) is estimable (i.e., a form of time-varying robust detectability), with estimator index κ_0 = 2m_m < n and with each per-measurement estimator index κ_i = 2, if the following conditions are met along the batch motion:
i) Estimability:  rank[M(x_e, d, u, p)] = 2m_m (25)
ii) Stability of the noninnovated motion:  x_v(t) = τ_v[t, t_0, x_v0, d(·), u(·), y(·), p] (26)
where M(x_e, d, u, p) = ∂_{x_ι}φ_0(x_e, d, u, p),  φ_0 = {h′, [(∂_{x_e}h)f_e]′}′,  f_e = (f_ι′, f_v′)′,
x_v(t) is the solution of the noninnovated dynamics
ẋ_v = f_v{α[x_v, y(t), p], x_v, d(t), u(t), p} := φ_v[x_v, d(t), u(t), y(t), p],  x_v(0) = x_v0 (27)
and α(x_v, y, p) denotes the solution for x_ι of the measurement map (y′, ẏ′)′ = φ_0(x_e, d, u, p).
Condition (i) says that the model motion, restricted by [u(t), y(t)], robustly meets a partial observability property [18, 19] in the sense of the definition of instantaneous observability [40], and Condition (ii) signifies that the related indistinguishable motion (i.e., a form of robust unobservability) is stable. In this case, we say that the estimator and controller structures are compatible. The corresponding nonlinear estimator is given by Eq. (28a,b) [17, 19].
u_p = μ_p(x̂_e, d, p_c, t),  x̂*_s = μ*(x̂_e, d, p_c, t),  u_s = μ_s(x̂_e, d, x̂*_s, p_c, t) (28d)
where (ˆ) denotes the estimate of (·), and the nonlinear gain G is given by
G(x_ι, x_v, d, u, p) = [M⁻¹(x_ι, x_v, d, u, p)] K_0(ζ, ω_0),  K_0(ζ, ω_0) = bd[k_1, ..., k_{m_m}],  k_i = (2ζω_0, ω_0^2)′
ω_0 (or ζ) is the characteristic frequency (or damping) tuning parameter associated with the second-order qLNPA output estimation error dynamics
The closed-loop stability of the batch motion can be established with the application of the standard singular perturbation [25] or small gain theorems [8, 10] available in the nonlinear dynamical systems literature, in conjunction with the definition (7) of finite-time motion stability. In a chemical process context, such closed-loop stability assessments can be seen in the cascade control of a continuous reactor [22], the cascade control of a continuous distillation [21, 24], and the calorimetric estimation [15] of a batch polymer reactor. The closed-loop motion stability is ensured if the observer gain (ω_0) is tuned slower than the characteristic frequency (ω_u) of the fastest unmodeled dynamics, and the observer (ω_0), secondary (ω_s), and primary (ω_p) gains are sufficiently separated. That is,
ω_p = max(ω^p_1, ..., ω^p_{m_p}) < min(ω^s_1, ..., ω^s_{m_s}) ≤ ω_s = max(ω^s_1, ..., ω^s_{m_s}) < ω_0 < ω_u (30)
The gains can be tuned with the standard techniques and notions employed in the conventional design of second-order single-input controllers and single-measurement filters [17, 41], with a clear relationship between the choice of (ω, ζ) and the shape of the output error response. Typically, the observer gain is about three times slower than the fast unmodeled dynamics, the secondary gain is about ten times slower than the observer gain, and the control gain is tuned sufficiently slow to ensure the dynamic separation requirement, depending on the particular process. Since the observer is tuned fast to quickly compensate and reject modeling errors, the estimator should be run with an overdamped factor ζ > 1 [15]; otherwise, oscillatory estimation error behavior can degrade the controller behavior.
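These rules of thumb can be expressed as a small backward-tuning helper. A minimal sketch, assuming pure division-based separation (the 3x and 10x factors are the ones quoted above; the extra primary-loop factor is a hypothetical choice):

```python
def tune_cascade_gains(omega_u, obs_sep=3.0, sec_sep=10.0, pri_sep=3.0):
    """Backward tuning from the characteristic frequency omega_u of the
    fastest unmodeled dynamics: observer gain omega_0 about obs_sep times
    slower, secondary gain omega_s about sec_sep times slower than the
    observer, and primary gain omega_p slower still (pri_sep is an
    assumed extra margin).  The returned gains satisfy the dynamic
    separation requirement omega_p < omega_s < omega_0 < omega_u (Eq. 30)."""
    omega_0 = omega_u / obs_sep
    omega_s = omega_0 / sec_sep
    omega_p = omega_s / pri_sep
    assert omega_p < omega_s < omega_0 < omega_u
    return omega_p, omega_s, omega_0

# e.g. fastest unmodeled dynamics at 1 rad/min (illustrative value)
w_p, w_s, w_0 = tune_cascade_gains(omega_u=1.0)
```

The separation factors are starting points; the text emphasizes that the final control gain must be slowed down as needed for the particular process.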
The resulting closed-loop error dynamics are as follows: (i) the measured outputs exhibit a decreasing tracking error with adjustable convergence rate, and (ii) the unmeasured outputs and the state motion exhibit an acceptably bounded, growing tracking error.
nominal output trajectory and the equipment design are determined by minimizing the
objective function in the light of constraints and specifications.
While the standard optimization approach for batch processes has as constraint the (possibly ill-conditioned) open-loop full process dynamics, the proposed approach has as constraint the reduced-order (well-conditioned) inverse dynamics in conjunction with a fast linear filter. In other words, the proposed constructive procedure constitutes a means to simplify and robustify the search for the optimal solution of the joint process and control problem.
Regarding the constructive step, which is the main subject of the present chapter, the
solvability of the motion-control design problem requires the fulfillment of passivity,
detectability, compatibility, and stability conditions. Specifically, the existence of a stable
dynamical inverse requires tracking output controllability (Eq. 18a) with internal stability (Eq.
18b), or equivalently, cascade passivation with respect to the input-tracking output pair (Eq.
18), and this implies a detectability property for the same input-output pair. The building of
the estimator for the measurement-driven controller requires estimability (i.e., a form of
robust detectability) (Eq. 25) with respect to the estimation model and the measured output
pair, as well as compatibility between the control and estimator structures (Eq. 23).
Summarizing, the proposed approach suggests the combination of the constructive and
optimization methods to develop a tractable approach to tackle the complex process and
control design problem. The constructive part provides the building of the controller, the
means to assess the corresponding closed-loop dynamics, fundamental connections between
process and control design, and a procedure to simplify the search of the optimal solution for
the process and control design problem.
In this section, the process and control design approach presented in the last section is applied
to a representative case example in polymer reactors. First, the example is addressed
analytically, drawing results valid for a class of reactors. Then, the case of an industrial
reactor is studied through simulations. The corresponding motion [4] and control [1, 38, 39]
designs have been studied separately, meaning that the results can be compared.
product variability. The process is described by the following nonlinear equations over the
finite-time interval [0, tF]:
Ṫ_J = [U(T, T_J, P, M)(T − T_J) − U_s(T_J − T_s) + Q_J]/C_J := f_J,  T_J(0) = T_J0,  T(0) = T_0 (31a)
z_T = y_T = T,  y_J = T_J (31g,h)
where M_L is the monomer mass fed over the semibatch period. The states of the reactor are the emulsion (T) and jacket (T_J) temperatures, the emulsion (M) and polymer (P) masses, the water-soluble initiator concentration (I), and the number (N) of latex particles per unit volume of water phase. The initial condition M_0 consists of the masses of loaded water (W) and surfactant (M_s). The measured outputs are the emulsion (T) and jacket fluid (T_J) temperatures. The control inputs are the monomer (mass) feedrate w and the heat rate Q_J exchanged through the jacket. The measured exogenous inputs (d) are the monomer feed (T_e), cold water feed (T_je), and surroundings (T_s) temperatures. The tracked outputs (z) are the measured emulsion temperature (T) and the unmeasured (z_f) mass fraction (m) of free (i.e., unreacted) monomer. The unmeasured monomer conversion (x) must be estimated on-line.
The input Q_J is realized via a heating-cooling system with a recirculation loop that admits either the cold-water flow w_j supplied by a chiller or the steam heat rate w_sλ_s:

Q_J = { w_sλ_s               if T < T_J
      { 0                    if T = T_J    := θ(w_J, T, T_J, T_je),  w_J = (w_j, w_s)′ (32)
      { w_j[c_J(T_je − T_J)]  if T > T_J

w_j, w_s ≥ 0,  w_j = 0 if T < T_J,  w_s = 0 if T > T_J

where w_s is the steam flow and λ_s is its latent heat of vaporization. Thus, the manipulation of the scalar input Q_J is equivalent to the coordinated manipulation of the vector w_J, and the inverse map υ of θ is given by:
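The switching law (Eq. 32) and its inverse map υ (whose displayed equation is not reproduced here) amount to a split-range computation: a requested duty Q_J is realized with steam when heating and with chilled water when cooling, never both. A minimal sketch, with hypothetical numerical values:

```python
def split_range(qj_req, tj, tje, cj, lam_s):
    """Inverse map of the heating-cooling actuator: realize a requested
    jacket heat duty qj_req (> 0 heating, < 0 cooling) with either the
    steam flow ws or the chilled-water flow wj, never both (Eq. 32).
    cj: jacket-fluid specific heat, lam_s: steam latent heat of
    vaporization; tje < tj is assumed on the cooling branch.
    Returns the flow pair (wj, ws)."""
    if qj_req > 0.0:                    # heating regime: steam only
        return 0.0, qj_req / lam_s
    if qj_req < 0.0:                    # cooling regime: cold water only
        return qj_req / (cj * (tje - tj)), 0.0
    return 0.0, 0.0                     # no duty: both valves closed

# heating request of 50 (illustrative units, cj and lam_s illustrative too)
wj, ws = split_range(50.0, tj=60.0, tje=10.0, cj=4.18, lam_s=2200.0)
```

Reapplying θ to the returned pair reproduces the requested duty, which is the defining property of the inverse map.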
R, R_I, and R_N are respectively the polymerization, initiator decomposition, and latex particle generation rate functions, and U is the heat transfer coefficient between the emulsion and the jacket fluid, according to standard expressions [2, 42]. U_s is the heat transfer coefficient associated with the heat lost to the surroundings. Q is the heat generation rate by chemical reaction, and H is the emulsion-jacket fluid heat exchange rate. C is the heat capacity of the emulsion, and C_J is the heat capacity of the jacket system, made up of the reactor (R), jacket (J) and insulator (I) walls as well as the jacket fluid (F). Thus,

C_J = C_R + C_F + C_J + C_I,  C_S = M_S c_S, S = R, F, J, I,  Q = ΛR,  H = U(T − T_J) (34c)

where Λ is the heat of polymerization per unit monomer mass, M_S is the mass of component S, and c_S is its specific heat capacity; c_m, c_p, c_w, c_J are respectively the specific heat capacities of the monomer, polymer, water, and jacket fluid.
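The lumped capacity C_J in Eq. (34c) is just a mass-weighted sum; a one-line sketch with purely illustrative masses and specific heats:

```python
def lumped_heat_capacity(masses, specific_heats):
    """Jacket-system heat capacity C_J = sum over S of M_S * c_S,
    S = R (reactor wall), F (jacket fluid), J (jacket wall), I (insulator),
    as in Eq. (34c).  All numerical values below are illustrative only."""
    return sum(m * c for m, c in zip(masses, specific_heats))

# component order: R, F, J, I (masses and specific heats are made-up values)
c_jacket = lumped_heat_capacity([800.0, 300.0, 400.0, 150.0],
                                [0.49, 4.18, 0.49, 0.90])
```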
In compact vector notation, the reactor model (Eq. 31) is given by the control system (1)
with the following vectors and maps (Eq. 1) and input-output control structure (Eq. 2):
x = (T_J, T, M, P, I, N)′,  u = (w, w_J′)′,  d = (T_e, T_je, T_s)′,  p = (p_e′, p_m′, p_c′)′ (35a)
f = (f_J, f_T, f_M, f_P, f_I, f_N)′,  F(x) = (f_R, f_I, f_N, f_U)′(x),  p_c = (c_m, c_p, c_w, c_J, Λ, W, M_s, U_s)′ (35c)
where F(x) is the kinetics-heat transfer model function, p_m is its parameter vector, p_e contains the equipment parameters, and p_c contains the (calorimetric) parameters for the controller.
given initiator and stabilizing agent. The initial values of the reactor (T_0), jacket (T_J0), and surroundings (T_s) temperatures are given. The temperature of the loaded mixture must be taken, without overshoot, to a prescribed temperature T̄ and maintained constant until the conversion reaches the pre-specified value x_F. The polymerization must take place in starved regime, meaning that the free monomer content must be kept about a prescribed value m̄ < m⁺, bounded by the maximum value m⁺ set by thermodynamic restrictions. While monomer is being fed, the free monomer in the reactor must stay about a value m̄ chosen so that the heat generation rate (proportional to m) stays below c_H H̄, where H̄ is the maximum heat removal rate allowed by the heat exchange system design and c_H < 1 is a safety coefficient. Thus, the trajectory z̄_M(t) = m̄ must be chosen in such a way that a constant amount of unreacted monomer is kept during most of the monomer addition period. The maximum (T_J⁺) and minimum (T_J⁻) jacket fluid temperatures are fixed by the utility services, considering a safety margin. The reactor operation must finish with a conversion of x = x_F.
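Since the heat generation rate is proportional to the free monomer content, the nominal starved level m̄ follows directly from the heat removal capacity. A minimal sketch, where the lumped proportionality constant k_q is a hypothetical stand-in for the actual kinetics:

```python
def starved_monomer_level(h_max, c_h, k_q):
    """Nominal free-monomer level m_bar such that the heat generation
    rate Q = k_q * m_bar (proportionality asserted in the text; k_q is a
    hypothetical lumped kinetic constant) equals the safe fraction c_h
    (< 1) of the maximum heat removal rate h_max of the exchange system."""
    assert 0.0 < c_h < 1.0, "c_h is a safety coefficient below one"
    return c_h * h_max / k_q

# illustrative numbers: removal capacity 120, 80 % safety factor
m_bar = starved_monomer_level(h_max=120.0, c_h=0.8, k_q=1500.0)
```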
Without restricting the approach, let us consider that the calorimetric (p_c) and kinetics-heat transfer (p_m) parameters are given, and that the equipment design consists in specifying the heating-cooling system, including the stirrer and emulsion mixing pattern, the volume, geometry and recirculation of the jacket system, and the jacket fluid-saturated steam heat exchanger. Thus, the adjustable equipment design parameter vector is given by [11]
where w_rJ is the recirculation flow in the heat exchange system, p_h is the vector of three constants set by the choice of stirrer and emulsion mixing pattern, and p_A is the area of the jacket fluid-saturated steam heat exchanger.
The process-control problem of the polymer reactor consists in designing:
(i) The nominal operation O (Eq. 4) so that the batch takes place as fast as possible with an
adequate compromise between safety, operability, product quality, and cost-benefit measures.
(ii) The tracking calorimetric controller (Eq. 8), so that the closed-loop reactor robustly tracks the nominal motion with adjustable linear temperature error dynamics and bounded free monomer error. For robustness purposes, the controller construction must be based on the material and heat balances in conjunction with the calorimetric parameter vector p_c, and not on the uncertain kinetics-heat transfer model function F(x) (Eq. 33).
Since the reactor motion determines the product properties, the nominal operation and its
control must be such that the closed-loop motion variability over the batch-to-batch operation
is kept within specifications.
It must be pointed out that, in the majority of the laboratory calorimetric control studies, the temperature has been controlled with a standard or commercial controller, the heat generation rate is controlled by manipulating the monomer feedrate, and, recently, an override supervisory controller has been incorporated to stop the monomer addition when its concentration surpasses a certain value [38]. For industrial applicability, these calorimetric controllers have several drawbacks: (i) an a priori [38] or occasionally calibrated [37] heat transfer-solids content correlation is required, and (ii) an override controller is employed to resolve the conflicting tasks of the heat generation and free monomer control objectives. This last scheme is questionable because it requires on-line free monomer measurements. In fact, the rather uncertain free-monomer estimate drawn from a calorimetric observer should not be used for feedback control [15].
These equations do not depend on the control Q_J (jacket-surroundings heat exchange rate), meaning that, as expected, the pair Q_J-z_T does not have RD = 1. To remove this high-RD obstacle to robustness, let us recall the backstepping passivation procedure for the input-output pair u-z, regard the measured jacket temperature T_J (= x_s) as a virtual control (instead of Q_J) [22], solve the last equation set for (M, w, T_J), and obtain the primary inverse (Eq. 10)
w = [f_R(x_I, z_M, z_T) + (M_0 + P)ż_M/(1 − z_M)]/(1 − z_M) := i_w(x_I, ż_M, z_M, z_T) (38b)
where
Recall the jacket heat balance (Eq. 31a), solve it for Q_J (Eq. 13a) with Ṫ_J replaced by its estimate ν_J drawn from a standard T_J-driven fast second-order filter (Eq. 16b), substitute into Eq. (33), and obtain the observer-based secondary inverse (Eqs. 16b and 16c)
where
i_J(x_I, z_M, z_T, ν_J, T_J, T_je) = υ{C_J ν_J − U(x_I, z_M, z_T)(T − T_J) + U_s(T_J − T_s), T_J, T_je} (40)
The substitution of M = i_M(P, z_M) (Eq. 38a) and T = z_T into Eq. (31d,e,f) of the state x_I yields the 3-dimensional inverse dynamics (41a), and the combination of these dynamics with the primary (Eq. 38) and secondary (Eq. 39) inverses yields the observer-based passivated dynamical inverse (Eq. 16):
where
φ_P(x_I, z_M, z_T) = f_R[z_T, P, i_M(P, z_M), I, N],  φ_I(x_I, z_M, z_T) = −f_I(z_T, I),
Thus, the corresponding solvability conditions (Eq. 18) are given by:
The meaning and fulfillment of these conditions will be discussed later, in subsection 3.6.
T*_J is the jacket temperature setpoint determined by the primary controller. Recall the observer-based inverse dynamics (Eq. 41), drop its dynamic component (Eq. 41a), replace T_J by T*_J, assume that x_I is known, and obtain the observer (43a) plus the nonlinear state feedback controller (43b):
where
γ_w(x_I, T, M, t) = i_w[x_I, g_M(M, P), ż̄_M(t), T],  γ_J(x_I, T, M, ν_J, T_J, T_je) = i_J[x_I, g_M(M, P), T, ν_J, T_J, T_je]
γ*_J(x_I, T, M, T_e, t) = i*_J[x_I, g_M(M, P), T, ν_T(t), T_e],  ν_T(t) = dz̄_T/dt − ω_T[T − z̄_T(t)]
This controller, with a high-gain setpoint observer, represents the limiting behavior attainable with any exact model-based cascade nonlinear SF controller.
In terms of x_e, the x_I-dependent controller (Eq. 43b) can be written as follows (Eq. 21)
The corresponding noninnovated dynamics motion (27) x_v(t) is stable [15], and the estimability condition (Eq. 25) is given by:
T ≠ T_J (45)
The meaning and fulfillment of this condition will be discussed later, in subsection 3.6.
The corresponding estimator (Eq. 28) is given by Eq. (46a) [15].
The combination of the calorimetric estimator (Eq. 46a) with the controller (Eq. 44),
expressed in terms of the estimator state xe, yields the calorimetric cascade controller (Eq. 28)
Calorimetric estimator (46a)
I^ffiJCjty-fj), H(0) = Ho
where ε_H is a small number specified to circumvent the short period of lack of calorimetric observability when there is no heat exchange (i.e., T ≈ T_J). Following the guidelines given in subsection 2.5, the second-order qLNPA output error dynamics are given by Eq. (29), the damping factor ζ should be greater than one, and the estimator frequency ω_0 should be close to or slower than the characteristic frequency ω_J of the fastest unmodeled jacket dynamics [26]:
ω_J ≈ [(w_j + w_s + w_rJ) c_J]/C_J
The closed-loop form of this controller shows that its dynamic part is nearly linear; the unknown strong nonlinearities are estimated and compensated by the calorimetric controller. The IMC form is better suited for on-line implementation and for the handling of control saturation [22].
The formal analysis of the closed-loop dynamics can be done with an extension of the nonlocal closed-loop stability analysis of a continuous reactor with temperature cascade controller presented before [22], in conjunction with the stability definitions (Eq. 7) given in section 2. Here it suffices to mention that the closed-loop motion is stable if the filter and estimator gains are chosen no faster than the characteristic frequency ω_J of the jacket hydraulic dynamics [26], and the secondary and primary control gains are chosen so that there is adequate dynamic separation between ω_J, ω_0 and ω_T. That is (Eq. 30),
If these tuning conditions are met, the closed-loop batch motion exhibits the following features:
(i) The temperature is tracked asymptotically and the free monomer is tracked with bounded
error,
(ii) The nominal motion is tracked with bounded error.
In closed-loop form, the dynamical equations of the controller are nearly linear.
Industrially speaking, the preceding calorimetric controller (Eq. 46) is made by the interconnection of two well-known controllers: a cascade temperature controller that manipulates the heat exchange rate Q_J for a given monomer feedrate w, and a ratio-type free-monomer controller that sets w proportionally to the heat generation rate Q. In fact, the ratio and secondary controllers can be regarded as variations of inventory-based feedforward control schemes [32]. Moreover, the robust tracking of the nominal motion signifies a product with reduced variability of its molecular weight architecture, or equivalently of its product attributes.
From a safety viewpoint, this controller has the capability of handling well the runaway potential due to the accumulation of unreacted monomer after a sudden inhibition or cooling: when the estimator detects a sudden decrease in the heat generation rate estimate Q, the cascade controller immediately reduces the monomer feedrate accordingly. Compared with the preceding calorimetric controllers [38, 39], the proposed controller (i) does not require a heat transfer versus solids content model, and (ii) effectively resolves the conflicting tasks of the heat generation and free monomer control objectives. This is a better solution than that of an override supervisory controller, which stops the monomer addition when its concentration surpasses a certain bound [38].
On physical grounds, the noninnovated motion x_v(t) is always stable [15], and the stability of the dynamic inverse (or ZD) can be ensured by designing the nominal motion sufficiently slowly [11]. Conditions (47a) and (47b) are required to ensure the dynamical invertibility of the batch process, and condition (47c) is required to fulfill the partial observability requirement of the estimator design. The second condition for invertibility (Eq. 47b) is trivially met because the reactor wall is diathermic (i.e., U > 0). Except during a rather short initial period (when the fast reaction is starting and P ≈ 0), the first invertibility condition (Eq. 47a) is easily met, because P ≫ M in any emulsion polymerization. The estimability condition (47c) signifies that, when the emulsion and jacket temperatures coincide, the heat transfer coefficient cannot be estimated, and therefore the jacket temperature setpoint cannot be determined. This event can only occur instantaneously (i.e., with zero-time duration), and only at the beginning or the end of the operation, when there are abrupt changes from heating to cooling regimes or the other way around. Moreover, in a temporal neighborhood of this event, the computation of a jacket temperature setpoint is not needed because the reactor is nearly adiabatic. However, this feature must be carefully taken into account when implementing the controller, in order to prevent its divergence. In our case, this possibility is ruled out because the controller (Eq. 46) keeps the jacket temperature setpoint fixed while the estimate of the heat exchange rate (H) is below the given bound ε_H.
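The safeguard just described can be written as a two-line guard; a minimal sketch with hypothetical names and values:

```python
def jacket_setpoint(h_hat, eps_h, tj_sp_new, tj_sp_last):
    """Guard against the instantaneous loss of calorimetric estimability
    when T is close to T_J (Eq. 45/47c): if the estimated heat exchange
    rate h_hat falls below the bound eps_h, hold the previous
    jacket-temperature setpoint instead of computing a new,
    ill-conditioned one (the reactor is nearly adiabatic there anyway)."""
    if abs(h_hat) < eps_h:
        return tj_sp_last      # freeze the setpoint
    return tj_sp_new           # normal operation

held = jacket_setpoint(h_hat=0.01, eps_h=0.1, tj_sp_new=55.0, tj_sp_last=52.0)
```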
features are in agreement with the industrial operation of semibatch starved emulsion
polymerizations, where the polymerization rate, or equivalently the semibatch duration, is
determined by the capacity of the heating-cooling system [2].
A slow 11-hour semibatch (shown in [11]) exhibits an open-loop motion with a large degree of stability and a small product variability. As the semibatch time is decreased by increasing the level of the nominal free monomer trajectory z̄_M(t), the degree of open- and closed-loop stability decreases, and when the semibatch is carried out in 3.5 hours, the open-loop motion becomes unstable due to a sudden initiator depletion caused by an excessive temperature excursion (shown in [11]). In this case, the reaction stops at about 50% conversion, signifying that the motion is unstable because its perturbed motion undergoes an unacceptably large deviation. In the nominal 4-hour semibatch operation presented in Figure 1, the batch motion is Eb-stable (i.e., with overall growing state deviation) because the nominal temperature trajectory is asymptotically attractive, while the free monomer and conversion trajectories exhibit acceptably small increasing errors for the typical initial state, exogenous input, measurement, and calorimetric parameter errors expected in an industrial reactor. A detailed analysis of this error propagation and of the finite-time stability features can be found in a previous laboratory-scale experimental study on the geometric calorimetric estimation technique [15].
As mentioned in section 2, the nonlinear SF controller recovers the behavior of the "ideal" inventory-based feedforward controller [32], and consequently the same is true for the proposed calorimetric controller. The corresponding robustness feature of the closed-loop motion was verified numerically: (i) the temperature is tracked asymptotically with negligible offset, and (ii) the free monomer and conversion are tracked with acceptable bounded errors that grow along the course of the semibatch.
The batch evolution presented in Figure 1 has a shape similar to the ones drawn from optimization [4]. The results show that the temperature and free monomer concentration can be reliably tracked with a nearly linear multivariable calorimetric controller that manipulates the monomer addition and heat exchange rates, and with a control scheme that can be seen as the adequate coordination of two controllers that are well known and accepted in industrial practice: a cascade temperature controller that manipulates the heat exchange rate Q_J for a given monomer feedrate w, and a ratio-type free-monomer controller that sets w proportionally to the heat generation rate Q.
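The ratio-type component has a particularly simple form: the monomer feedrate is slaved to the estimated heat generation rate. A minimal sketch (the ratio and pump limit are illustrative values, not from the chapter):

```python
def ratio_feed(q_hat, ratio, w_max):
    """Ratio-type free-monomer controller: set the monomer feedrate w
    proportionally to the estimated heat generation rate q_hat, clipped
    to the physical pump range [0, w_max].  A sudden drop in q_hat
    (e.g. inhibition) automatically cuts the feed, which is the safety
    feature discussed in the text."""
    return min(max(ratio * q_hat, 0.0), w_max)
```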
The robust tracking of the nominal motion signifies a product with reduced variability of
its molecular weight architecture, or equivalently of its product attributes. From a safety
viewpoint, this controller has the capability of handling well the runaway potential caused by
the accumulation of free monomer: when the estimator detects a sudden decrease in the heat
generation rate estimate Q, the cascade controller immediately reduces the monomer feedrate
accordingly. The proposed controller simultaneously estimates the heat generation and
transfer rates, and effectively resolves the conflicting tasks of the heat generation and free
monomer control objectives with a better solution than an override supervisory controller that
stops the monomer addition when its concentration surpasses a certain bound [38].
4. CONCLUSIONS
The problem of jointly designing the equipment, the batch motion, and the tracking
controller was addressed within a nonlinear constructive framework. The proposed approach
combines the inverse optimality method, employed in nonlinear constructive control, with the
direct optimality approach, utilized in previous batch process design studies. The inverse
optimality method yields the robust controller construction and the algorithm to recursively
design the robust nominal batch operation. The direct optimality method performs a
systematic search of the optimal solution for the process and control problem in the light of
specifications, constraints, and an objective function. The incorporation of the constructive
method enabled the identification of key connections between the process and control design
problems, in terms of fundamental properties such as finite-time batch motion stability,
controllability, detectability and passivity. Via the notion of passivity, the nonlinear control
design problem was related to the inventory and cascade control schemes that are commonly
employed in industrial practice.
Fig. 1. Nominal operation design, and closed-loop reactor behavior with (exact model-based) SF control (dashed plots) and calorimetric control (solid plots).
The point of departure for the constructive method was the consideration of a passive control structure in the light of a suitable definition of finite-time motion stability. Then, the related dynamical inverse yielded the controller and a recursive algorithm to design the nominal batch motion. The underlying solvability conditions were identified. The combination of the controller with an observer of compatible structure yielded the design of the output-feedback tracking controller.
The proposed approach was applied to a representative case in polymer reactors in
particular and in batch processes in general. The resulting motion and calorimetric controller
resembled the ones drawn, separately, in previous studies. The calorimetric control performs better than the previous ones, and its design methodology simplifies, unifies and systematizes
the diversity of techniques employed before.
With regard to future work, several issues remain to be studied, among them: the development of ways to combine the inverse and direct optimality methods, the design of control structures with compatible passivity and detectability structures, the development of measures of stability, passivity, and detectability, the consideration of other control schemes such as model predictive control, and the design of dedicated observers for control or monitoring purposes.
Acknowledgments
For their support in the realization of this work, the authors gratefully acknowledge Comercial Mexicana de Pinturas (Comex) and its Centro de Investigación en Polímeros (CIP). For their
assistance and valuable comments, the authors are indebted to P. Gonzalez, E. Castellanos-
Sahagun, and M. Hernandez.
REFERENCES
[1] M. R. Juba and J. W. Hamer, Chemical Process Control - CPC III (1986) 139.
[2] F. J. Schork, P. B. Deshpande and K. W. Leffew, Control of Polymerization Reactors, Dekker, 1993.
[3] D. Bonvin, Proc. IFAC-ADCHEM Symp. (1997) 155.
[4] J. N. Farber and R. L. Laurence, Chem. Eng. Commun., 46 (1986) 347.
[5] A. Krishnan and K. A. Kosanovich, Can. J. Chem. Eng., 76 (1998) 806.
[6] L. T. Biegler, I. E. Grossmann and A. W. Westerberg, Systematic Methods of Chemical Process Design, Prentice Hall PTR, 1997.
[7] M. Krstic, I. Kanellakopoulos and P. Kokotovic, Nonlinear and Adaptive Control Design, Wiley, New York, 1995.
[8] R. Sepulchre, M. Jankovic and P. V. Kokotovic, Constructive Nonlinear Control, Springer-Verlag, New York, 1997.
[9] A. Isidori, Nonlinear Control Systems, 3rd Ed., Springer-Verlag, 1995.
[10] A. Isidori, Nonlinear Control Systems II, Springer-Verlag, 1999.
[11] J. Alvarez, F. Zaldo and S. Padilla, Proc. DYCORD+'95 (1995) 363.
Author Index
A I
Alhammadi, Hasan Y. 264 Ierapetritou, Marianthi G. D5
Allgöwer, F. 76
Alonso, Antonio A. 555 J
Alvarez, J. 604 Jacobsen, Elling W. B5
B K
Banga, Julio R. 555 Kookos, I. M. B2
Bildea, C. S. 375
Bogle, I. David L. 168 L
Lewin, D. R. 533
C Luyben, Michael L. 352
Cameron, Ian T. 126 Luyben, William L. 10
Chen, Yih-Hang 467
Carlemalm, Hong Cui 306 M
Ma, Keming 168
D Mann, U. 375
Dimian, A. C. 375 Meeuse, F. Michiel 146
Doyle III, F. J. 42 Moles, Carmen G. 555
E N
Engell, S. 430 Nougues, J. M. 501
Espuña, A. 501
O
F Oaxaca, G. 604
Fraga, Erik S. 168 Ogunnaike, B. A. 42
G P
Georgakis, Christos 96 Pearson, R. K. 42
Georgiadis, Michael 1 Pegel, S. 430
Goyal, Vishal 582 Perkins, John D. 187,216
Grievink, Johan 146, 326 Pistikopoulos, Efstratios N. 187
Puigjaner, L. 501
H
Hagemann, Johannes 168 R
Hauksdottir, Anna Soffia 582 Romagnoli, Jose A. 264
Hernjak, N. 42
Hoo, K. A. 375
S V
Sakizlis, Vassilis 187 Vasbinder, E. M. 375
Schweickhardt, T. 76
Seader, J. D. 533
Seferlis, Panos 1, 326
Seider, W. D. 533 W
Sendin, Oscar H. 555 Walsh, Ashley M. 126
Skogestad, Sigurd 485
Subramanian, Sivakumar 96 Y
Swartz, Christopher L. E. 239 Yu, Cheng-Ching 464
T Z
Trierweiler, J. O. 430 Zaldo, F. 604
Zühlke, Ursula 582
U
Uztürk, Derya 96
Subject Index
Analytical hierarchical process 379 co-ordination, 511
Asymptotic Tracking 251 design, 244
feedback, 316
Back-off 35,218,220 loop-pairing, 488
Batch process objectives, 486
control, 504,516 performance, 511
design, 604 self-optimizing, 443
integration, 519 structure, 42,86,184
Bifurcation 172,409,414 Controllability:
Bioprocess assessment 146,182,307
control, 371 dynamic, 333,418,423
heat integration, 367,369 input-output 307,380
input-output, 307,433
Capacity-based method 36 measures, 169
Chemical Process: static (steady state), 330,418,422
characterisation 43 thermodynamic-based 160
Chemical reactor, stirred tank: Controller:
control, 59,83,247 calorimetric, 623
controllability, 341 controlled variable, 392
design, 18,341 decoupling, 438,587
jacketed, 18 feedback, 225,605
nonisothermal, 256 manipulated variable, 392,485
Chemical reactor, tubular: nonlinear, 615
adiabatic, 465 parameterization, 243
control, 478,480 parametric, 202
design, 30 regulatory, 223
Chemical reactor: stability 245
batch, 501 stability, 610
design, 25,345,407 state feedback, 626
polymerisation, 172,621 structure, 112,219,228
Closed-loop
performance 424,479 Degree of:
simulation, 452 interactions 43
Condition number 169, 281, 435
Continuation method 173,337 Degrees of freedom: 158
Control system: Dephlegmator condenser
plantwide, 375 design, 354
Control: 54,112, 330,384,485 Design:
back-off, 35 controllability, 275,291
capacity-based, 36 design, 276,545
course, 38 pairing, 280
criterion, 265 six-sigma, 542
economic, 31,197 Heat exchanger:
optimal, 192 control, 151, 157
process, 156,314 networks, 275
sensitivity 339,348 Hydro-dealkylation process 383
weighing factor 35
Distillation: 169,223,489 Ill-Conditioning 48
control, 362,456 Inventory:
controllability, 180 control, 297,416
design, 180,345 impurity, 402
reactive, 345,454
Disturbance: Lagrange multipliers 336
cost, 545 Life cycle:
sensitivity, 338,346 assessment, 266,272
Dynamic:
analysis 100 Model predictive control, 13,198
operability index 106 Model: 312
operating spaces 115 environmental, 272
procedural, 507,517
Economics: recipe, 508
dynamic, 221
steady state, 21,221 Nonlinear:
Eigenvalues: control, 615
sensitivity, 336 process 47,48,49,76
spectral association, 127 Non-linearity:
Eigenvectors, 335 assessment, 76,86
Energy integration 266 computation, 80
Evaporator: degree of, 43
design, 371 measure, 48,77
Exergy 180 Non-minimum phase 111,172