Processes

Real-Time Optimization

Edited by Dominique Bonvin

Printed Edition of the Special Issue Published in Processes

www.mdpi.com/journal/processes
Real-Time Optimization

Special Issue Editor


Dominique Bonvin

MDPI • Basel • Beijing • Wuhan • Barcelona • Belgrade


Special Issue Editor
Dominique Bonvin
Ecole Polytechnique Fédérale de Lausanne
Switzerland

Editorial Office
MDPI AG
St. Alban-Anlage 66
Basel, Switzerland

This edition is a reprint of the Special Issue published online in the open access
journal Processes (ISSN 2227-9717) from 2016–2017 (available at:
http://www.mdpi.com/journal/processes/special_issues/real_time_optimization).

For citation purposes, cite each article independently as indicated on the article
page online and as indicated below:

Author 1; Author 2. Article title. Journal Name Year, Article number, page range.

First Edition 2017

ISBN 978-3-03842-448-2 (Pbk)


ISBN 978-3-03842-449-9 (PDF)

Photo courtesy of Prof. Dr. Dominique Bonvin

Articles in this volume are Open Access and distributed under the Creative Commons Attribution
license (CC BY), which allows users to download, copy and build upon published articles even for
commercial purposes, as long as the author and publisher are properly credited, which ensures
maximum dissemination and a wider impact of our publications. The book taken as a whole is
© 2017 MDPI, Basel, Switzerland, distributed under the terms and conditions of the
Creative Commons license CC BY-NC-ND (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Table of Contents
About the Special Issue Editor.................................................................................................................. v

Preface to “Real-Time Optimization” ...................................................................................................... vii

Alejandro G. Marchetti, Grégory François, Timm Faulwasser and Dominique Bonvin


Modifier Adaptation for Real-Time Optimization—Methods and Applications
Reprinted from: Processes 2016, 4(4), 55; doi: 10.3390/pr4040055 .......................................................... 1

Weihua Gao, Reinaldo Hernández and Sebastian Engell


A Study of Explorative Moves during Modifier Adaptation with Quadratic Approximation
Reprinted from: Processes 2016, 4(4), 45; doi: 10.3390/pr4040045 .......................................................... 36

Sébastien Gros
An Analysis of the Directional-Modifier Adaptation Algorithm Based on Optimal
Experimental Design
Reprinted from: Processes 2017, 5(1), 1; doi: 10.3390/pr5010001 ............................................................ 53

Marco Vaccari and Gabriele Pannocchia


A Modifier-Adaptation Strategy towards Offset-Free Economic MPC
Reprinted from: Processes 2017, 5(1), 2; doi: 10.3390/pr5010002 ............................................................ 71

Jean-Christophe Binette and Bala Srinivasan


On the Use of Nonlinear Model Predictive Control without Parameter Adaptation for
Batch Processes
Reprinted from: Processes 2016, 4(3), 27; doi: 10.3390/pr4030027 .......................................................... 92

Eka Suwartadi, Vyacheslav Kungurtsev and Johannes Jäschke


Sensitivity-Based Economic NMPC with a Path-Following Approach
Reprinted from: Processes 2017, 5(1), 8; doi: 10.3390/pr5010008 ............................................................ 102

Felix Jost, Sebastian Sager and Thuy Thi-Thien Le


A Feedback Optimal Control Algorithm with Optimal Measurement Time Points
Reprinted from: Processes 2017, 5(1), 10; doi: 10.3390/pr5010010 .......................................................... 120

Maurício M. Câmara, André D. Quelhas and José Carlos Pinto


Performance Evaluation of Real Industrial RTO Systems
Reprinted from: Processes 2016, 4(4), 44; doi: 10.3390/pr4040044 .......................................................... 139

Jan-Simon Schäpel, Thoralf G. Reichel, Rupert Klein, Christian Oliver Paschereit and
Rudibert King
Online Optimization Applied to a Shockless Explosion Combustor
Reprinted from: Processes 2016, 4(4), 48; doi: 10.3390/pr4040048 .......................................................... 159

Cesar de Prada, Daniel Sarabia, Gloria Gutierrez, Elena Gomez, Sergio Marmol, Mikel Sola,
Carlos Pascual and Rafael Gonzalez
Integration of RTO and MPC in the Hydrogen Network of a Petrol Refinery
Reprinted from: Processes 2017, 5(1), 3; doi: 10.3390/pr5010003 ............................................................ 172

Dinesh Krishnamoorthy, Bjarne Foss and Sigurd Skogestad
Real-Time Optimization under Uncertainty Applied to a Gas Lifted Well Network
Reprinted from: Processes 2016, 4(4), 52; doi: 10.3390/pr4040052 .......................................................... 192

Martin Jelemenský, Daniela Pakšiová, Radoslav Paulen, Abderrazak Latifi and Miroslav Fikar
Combined Estimation and Optimal Control of Batch Membrane Processes
Reprinted from: Processes 2016, 4(4), 43; doi: 10.3390/pr4040043 .......................................................... 209

Hari S. Ganesh, Thomas F. Edgar and Michael Baldea


Model Predictive Control of the Exit Part Temperature for an Austenitization Furnace
Reprinted from: Processes 2016, 4(4), 53; doi: 10.3390/pr4040053 .......................................................... 230

About the Special Issue Editor
Dominique Bonvin, Ph.D., is Director of the Automatic Control Laboratory of
EPFL in Lausanne, Switzerland. He received his Diploma in Chemical Engineering
from ETH Zürich and his Ph.D. degree from the University of California, Santa
Barbara. He served as Dean of Bachelor and Master studies at EPFL between 2004
and 2011. His research interests include modeling, identification and optimization
of dynamical systems.

Preface to “Real-Time Optimization”
Process optimization is the method of choice for improving the performance of industrial
processes, while enforcing the satisfaction of safety and quality constraints. Long considered an
appealing tool applicable only to academic problems, optimization has now become a viable
technology. Still, one of the strengths of optimization, namely, its inherent mathematical rigor, can also
be perceived as a weakness, since engineers might sometimes find it difficult to obtain an appropriate
mathematical formulation to solve their practical problems. Furthermore, even when process models
are available, the presence of plant-model mismatch and process disturbances makes the direct use of
model-based optimal inputs hazardous.
In the last 30 years, the field of real-time optimization (RTO) has emerged to help overcome the
aforementioned modeling difficulties. RTO integrates process measurements into the optimization
framework. This way, process optimization does not rely exclusively on a (possibly inaccurate) process
model but also on process information stemming from measurements. Various RTO techniques are
available in the literature and can be classified into two broad families depending on whether a process
model is used (explicit optimization) or not (implicit optimization or self-optimizing control).
This Special Issue on Real-Time Optimization includes both methodological and practical
contributions. All seven methodological contributions deal with explicit RTO schemes that repeat the
optimization when new measurements become available. The methods covered include modifier
adaptation, economic MPC and the two-step approach of parameter identification and numerical
optimization. The six contributions that deal with applications cover various fields including refineries,
well networks, combustion and membrane filtration.
This Special Issue has shown that RTO is a very active area of research with excellent opportunities
for applications. The Guest Editor would like to thank all authors for their timely collaboration with
this project and excellent scientific contributions.

Dominique Bonvin
Special Issue Editor

Review
Modifier Adaptation for Real-Time
Optimization—Methods and Applications
Alejandro G. Marchetti 1,4 , Grégory François 2 , Timm Faulwasser 1,3 and Dominique Bonvin 1, *
1 Laboratoire d’Automatique, Ecole Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland;
alejandro.marchetti@epfl.ch (A.G.M.); timm.faulwasser@epfl.ch (T.F.)
2 Institute for Materials and Processes, School of Engineering, The University of Edinburgh,
Edinburgh EH9 3BE, UK; [email protected]
3 Institute for Applied Computer Science, Karlsruhe Institute of Technology,
76344 Eggenstein-Leopoldshafen, Germany; [email protected]
4 French-Argentine International Center for Information and Systems Sciences (CIFASIS),
CONICET-Universidad Nacional de Rosario (UNR), S2000EZP Rosario, Argentina;
[email protected]
* Correspondence: dominique.bonvin@epfl.ch; Tel.: +41-21-693-3843

Academic Editor: Michael Henson


Received: 22 November 2016; Accepted: 12 December 2016; Published: 20 December 2016

Abstract: This paper presents an overview of the recent developments of modifier-adaptation
schemes for real-time optimization of uncertain processes. These schemes have the ability to
reach plant optimality upon convergence despite the presence of structural plant-model mismatch.
Modifier Adaptation has its origins in the technique of Integrated System Optimization and Parameter
Estimation, but differs in the definition of the modifiers and in the fact that no parameter estimation
is required. This paper reviews the fundamentals of Modifier Adaptation and provides an overview
of several variants and extensions. Furthermore, the paper discusses different methods for estimating
the required gradients (or modifiers) from noisy measurements. We also give an overview of the
application studies available in the literature. Finally, the paper briefly discusses open issues so as to
promote future research in this area.

Keywords: real-time optimization; modifier adaptation; plant-model mismatch

1. Introduction
This article presents a comprehensive overview of the modifier-adaptation strategy for real-time
optimization. Real-time optimization (RTO) encompasses a family of optimization methods that
incorporate process measurements in the optimization framework to drive a real process (or plant)
to optimal performance, while guaranteeing constraint satisfaction. The typical sequence of steps
for process optimization includes (i) process modeling; (ii) numerical optimization using the process
model; and (iii) application of the model-based optimal inputs to the plant. In practice, this last step
is quite hazardous—in the absence of additional safeguards—as the model-based inputs are indeed
optimal for the model, but not for the plant unless the model is a perfect representation of the plant.
This often results in suboptimal plant operation and in constraint violation, for instance when optimal
operation implies operating close to a constraint and the model under- or overestimates the value of
that particular constraint.
RTO has emerged over the past forty years to overcome the difficulties associated with
plant-model mismatch. Uncertainty can have three main sources, namely, (i) parametric uncertainty
when the values of the model parameters do not correspond to the reality of the process at hand;
(ii) structural plant-model mismatch when the structure of the model is not perfect, for example in the


case of unknown phenomena or neglected dynamics; and (iii) process disturbances. Of course these
three sources are not mutually exclusive.
RTO incorporates process measurements in the optimization framework to combat the detrimental
effect of uncertainty. RTO methods can be classified depending on how the available measurements
are used. There are basically three possibilities, namely, at the level of the process model, at the level of
the cost and constraint functions, and at the level of the inputs [1].

1. The most intuitive strategy is to use process measurements to improve the model. This is the main
idea behind the “two-step” approach [2–5]. Here, deviations between predicted and measured
outputs are used to update the model parameters, and new inputs are computed on the basis of the
updated model. The whole procedure is repeated until convergence is reached, whereby it is hoped
that the computed model-based optimal inputs will be optimal for the plant. The requirements for
this to happen are referred to as the model-adequacy conditions [6]. Unfortunately, the model-adequacy
conditions are difficult to both achieve and verify.
2. This difficulty of converging to the plant optimum motivated the development of a modified
two-step approach, referred to as Integrated System Optimization and Parameter Estimation
(ISOPE) [7–10]. ISOPE requires both output measurements and estimates of the gradients of the
plant outputs with respect to the inputs. These gradients allow computing the plant cost gradient
that is used to modify the cost function of the optimization problem. The use of gradients is
justified by the nature of the necessary conditions of optimality (NCO) that include both constraints
and sensitivity conditions [11]. By incorporating estimates of the plant gradients in the model,
the goal is to enforce NCO matching between the model and the plant, thereby making the
modified model a likely candidate to solve the plant optimization problem. With ISOPE, process
measurements are incorporated at two levels, namely, the model parameters are updated on the
basis of output measurements, and the cost function is modified by the addition of an input-affine
term that is based on estimated plant gradients.
Note that RTO can rely on a fixed process model if measurement-based adaptation of the cost and
constraint functions is implemented. For instance, this is the philosophy of Constraint Adaptation
(CA), wherein the measured plant constraints are used to shift the predicted constraints in the
model-based optimization problem, without any modification of the model parameters [12,13].
This is also the main idea in Modifier Adaptation (MA) that uses measurements of the plant
constraints and estimates of plant gradients to modify the cost and constraint functions in the
model-based optimization problem without updating the model parameters [14,15]. Input-affine
corrections allow matching the first-order NCO upon convergence. The advantage of MA, which
is the focus of this article, lies in its proven ability to converge to the plant optimum despite
structural plant-model mismatch.
3. Finally, the third way of incorporating process measurements in the optimization framework
consists in directly updating the inputs in a control-inspired manner. There are various ways of
doing this. With Extremum-Seeking Control (ESC), dither signals are added to the inputs such
that an estimate of the plant cost gradient is obtained online using output measurements [16].
In the unconstrained case, gradient control is directly applied to drive the plant cost gradient
to zero. Similarly, NCO tracking uses output measurements to estimate the plant NCO, which
are then enforced via dedicated control algorithms [17,18]. Furthermore, Neighboring-Extremal
Control (NEC) combines a variational analysis of the model at hand with output measurements to
enforce the plant NCO [19]. Finally, Self-Optimizing Control (SOC) uses the sensitivity between
the uncertain model parameters and the measured outputs to generate linear combinations of the
outputs that are locally insensitive to the model parameters, and which can thus be kept constant
at their nominal values to reject the effect of uncertainty [20].


The choice of a specific RTO method will depend on the situation at hand. However, it is highly
desirable for RTO approaches to have certain properties such as (i) guaranteed plant optimality upon
convergence; (ii) fast convergence; and (iii) feasible-side convergence. MA satisfies the first requirement
since the model-adequacy conditions for MA are much easier to satisfy than those of the two-step
approach. These conditions are enforced quite easily if convex model approximations are used instead
of the model at hand as shown in [21]. The rate of convergence and feasible-side convergence are also
critical requirements, which however are highly case dependent. Note that these two requirements
often oppose each other since fast convergence calls for large steps, while feasible-side convergence
often requires small and cautious steps. It is the intrinsic capability of MA to converge to the plant
optimum despite structural plant-model mismatch that makes it a very valuable tool for optimizing
the operation of chemical processes in the absence of accurate models.
This overview article is structured as follows. Section 2 formulates the static real-time optimization
problem. Section 3 briefly revisits ISOPE, while Section 4 discusses MA, its properties and several MA
variants. Implementation aspects are investigated in Section 5, while Section 6 provides an overview
of MA case studies. Finally, Section 7 concludes the paper with a discussion of open issues.

2. Problem Formulation

2.1. Steady-State Optimization Problem


The optimization of process operation consists in minimizing operating costs, or maximizing
economic profit, in the presence of constraints. Mathematically, this problem can be formulated
as follows:

$$u_p^\star = \arg\min_u \; \Phi_p(u) := \phi(u, y_p(u)) \tag{1}$$
$$\text{s.t.} \quad G_{p,i}(u) := g_i(u, y_p(u)) \le 0, \quad i = 1, \ldots, n_g,$$

where u ∈ IRnu denotes the decision (or input) variables; y p ∈ IRny are the measured output variables;
φ : IRnu × IRny → IR is the cost function to be minimized; and gi : IRnu × IRny → IR, i = 1, . . . , n g , is the
set of inequality constraints on the input and output variables.
This formulation assumes that φ and gi are known functions of u and y p , i.e., they can be
directly evaluated from the knowledge of u and the measurement of y p . However, in any practical
application, the steady-state input-output mapping of the plant y p (u) is typically unknown, and only
an approximate nonlinear steady-state model is available:

F(x, u) = 0, (2a)
y = H(x, u), (2b)

where x ∈ IRnx are the state variables and y ∈ IRny the output variables predicted by the model.
For given u, the solution to (2a) can be written as

x = ξ (u), (3)

where ξ is an operator expressing the steady-state mapping between u and x. The input-output
mapping predicted by the model can be expressed as

y(u) : = H( ξ (u), u). (4)

Using this notation, the model-based optimization problem becomes


$$u^\star = \arg\min_u \; \Phi(u) := \phi(u, y(u)) \tag{5}$$
$$\text{s.t.} \quad G_i(u) := g_i(u, y(u)) \le 0, \quad i = 1, \ldots, n_g.$$

However, in the presence of plant-model mismatch, the model solution $u^\star$ does not generally
coincide with the plant optimum $u_p^\star$.
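The following Python sketch illustrates this effect on a small hypothetical example (the plant and model functions below are illustrative and are not taken from this paper): the inputs that are optimal for the mismatched model violate the plant constraint when applied to the plant.

```python
# Illustrative sketch (hypothetical plant/model functions): the solution of the
# model-based problem (5) is not optimal (here, not even feasible) for the plant (1).
import numpy as np
from scipy.optimize import minimize

def phi(u, y):             # known cost function of inputs and outputs
    return (u[0] - 2.0)**2 + (u[1] - 1.0)**2 + y

def g(u, y):               # known constraint function, g(u, y) <= 0
    return y - 1.0

def y_plant(u):            # steady-state plant mapping y_p(u), unknown in practice
    return 0.8 * u[0]**2 + 0.3 * u[1]

def y_model(u):            # available (structurally mismatched) model y(u)
    return 0.5 * u[0]**2

def solve(y_map, u0=(0.0, 0.0)):
    """Solve min_u phi(u, y(u)) s.t. g(u, y(u)) <= 0 for a given output mapping."""
    cons = {'type': 'ineq', 'fun': lambda u: -g(u, y_map(u))}
    return minimize(lambda u: phi(u, y_map(u)), u0, constraints=[cons]).x

u_model = solve(y_model)   # model-based "optimal" inputs, Problem (5)
u_plant = solve(y_plant)   # true plant optimum, Problem (1) (unknown in practice)
print("model-based inputs:", u_model, "-> plant constraint:", g(u_model, y_plant(u_model)))
print("plant optimum     :", u_plant, "-> plant constraint:", g(u_plant, y_plant(u_plant)))
```

In this toy case the model-based inputs yield a positive plant constraint value, i.e., a constraint violation, which is precisely the hazard mentioned above.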

2.2. Necessary Conditions of Optimality


Local minima of Problem (5) can be characterized via the NCO [11]. To this end, let us denote the
set of active constraints at some point u by
$$\mathcal{A}(u) = \left\{\, i \in \{1, \ldots, n_g\} \mid G_i(u) = 0 \,\right\}. \tag{6}$$

The Linear Independence Constraint Qualification (LICQ) requires that the gradients of the active constraints, $\frac{\partial G_i}{\partial u}(u)$ for $i \in \mathcal{A}(u)$, be linearly independent. Provided that a constraint qualification such as LICQ holds at the solution $u^\star$ and the functions $\Phi$ and $G_i$ are differentiable at $u^\star$, there exist unique Lagrange multipliers $\mu \in \mathbb{R}^{n_g}$ such that the following Karush-Kuhn-Tucker (KKT) conditions hold at $u^\star$ [11]:
$$G \le 0, \quad \mu^T G = 0, \quad \mu \ge 0, \tag{7}$$
$$\frac{\partial L}{\partial u} = \frac{\partial \Phi}{\partial u} + \mu^T \frac{\partial G}{\partial u} = 0,$$

where $G \in \mathbb{R}^{n_g}$ is the vector of constraint functions $G_i$, and $L(u, \mu) := \Phi(u) + \mu^T G(u)$ is the Lagrangian function. A solution $u^\star$ satisfying these conditions is called a KKT point.

The vector of active constraints at $u^\star$ is denoted by $G^a(u^\star) \in \mathbb{R}^{n_g^a}$, where $n_g^a$ is the cardinality of $\mathcal{A}(u^\star)$. Assuming that LICQ holds at $u^\star$, one can write:
$$\frac{\partial G^a}{\partial u}(u^\star)\, Z = 0,$$
where $Z \in \mathbb{R}^{n_u \times (n_u - n_g^a)}$ is a null-space matrix. The reduced Hessian of the Lagrangian on this null space, $\nabla_r^2 L(u^\star) \in \mathbb{R}^{(n_u - n_g^a) \times (n_u - n_g^a)}$, is given by [22]
$$\nabla_r^2 L(u^\star) := Z^T\, \frac{\partial^2 L}{\partial u^2}(u^\star, \mu^\star)\, Z.$$
In addition to the first-order KKT conditions, a second-order necessary condition for a local minimum is the requirement that $\nabla_r^2 L(u^\star)$ be positive semi-definite at $u^\star$. On the other hand, $\nabla_r^2 L(u^\star)$ being positive definite is sufficient for a strict local minimum [22].
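For illustration, the sketch below (hypothetical cost and constraint functions, finite-difference derivatives) checks the first-order KKT conditions (7) and the eigenvalues of the reduced Hessian numerically at a candidate point; it is a minimal numerical aid, not a procedure prescribed in this paper.

```python
# Minimal numerical KKT check at a candidate point (illustrative functions).
import numpy as np

def Phi(u):                      # model cost (illustrative)
    return (u[0] - 1.0)**2 + u[1]**2

def G(u):                        # inequality constraints, G_i(u) <= 0 (illustrative)
    return np.array([u[0] + u[1] - 1.0])

def grad(f, u, h=1e-5):
    """Central finite-difference gradient of a scalar function."""
    return np.array([(f(u + h*e) - f(u - h*e)) / (2.0*h) for e in np.eye(len(u))])

def kkt_check(u_s, mu, tol=1e-5):
    gvals = G(u_s)
    L = lambda v: Phi(v) + mu @ G(v)                  # Lagrangian
    dL = grad(L, u_s)                                 # stationarity residual, Eq. (7)
    first_order = (np.all(gvals <= tol) and np.all(mu >= -tol)
                   and abs(mu @ gvals) <= tol and np.linalg.norm(dL) <= 1e-3)
    # reduced Hessian of the Lagrangian on the null space of the active constraints
    act = [i for i in range(len(gvals)) if abs(gvals[i]) <= tol]
    dGa = np.array([grad(lambda v, i=i: G(v)[i], u_s) for i in act])
    Z = np.linalg.svd(dGa)[2][len(act):].T if act else np.eye(len(u_s))
    H = np.array([grad(lambda v, j=j: grad(L, v)[j], u_s) for j in range(len(u_s))])
    eig = np.linalg.eigvalsh(Z.T @ H @ Z) if Z.shape[1] else np.array([])
    return first_order, eig

# check the candidate u* = [1, 0] with multiplier mu = [0]
print(kkt_check(np.array([1.0, 0.0]), mu=np.array([0.0])))
```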

3. ISOPE: Two Decades of New Ideas


In response to the inability of the classical two-step approach to enforce plant optimality,
a modified two-step approach was proposed by Roberts [8] in 1979. The approach became known under
the acronym ISOPE, which stands for Integrated System Optimization and Parameter Estimation [9,10].
Since then, several extensions and variants of ISOPE have been proposed, with the bulk of the
research taking place between 1980 and 2002. ISOPE algorithms combine the use of a parameter
estimation problem and the definition of a modified optimization problem in such a way that, upon
convergence, the KKT conditions of the plant are enforced. The key idea in ISOPE is to incorporate plant
gradient information into a gradient correction term that is added to the cost function. Throughout


the ISOPE literature, an important distinction is made between optimization problems that include
process-dependent constraints of the form g(u, y) ≤ 0 and problems that do not include them [7,9].
Process-dependent constraints depend on the outputs y, and not only on the inputs u. In this section,
we briefly describe the ISOPE formulations that we consider to be most relevant for contextualizing
the MA schemes that will be presented in Section 4. Since ISOPE includes a parameter estimation
problem, the steady-state outputs predicted by the model will be written in this section as y(u, θ) in
order to emphasize their dependency on the (adjustable) model parameters θ ∈ IRnθ .

3.1. ISOPE Algorithm


The original ISOPE algorithm does not consider process-dependent constraints in the optimization
problem, but only input bounds. At the kth RTO iteration, with the inputs uk and the plant
outputs y p (uk ), a parameter estimation problem is solved, yielding the updated parameter values θk .
This problem is solved under the output-matching condition

y(uk , θk ) = y p (uk ). (8)

Then, assuming that the output plant gradient $\frac{\partial y_p}{\partial u}(u_k)$ is available, the ISOPE modifier $\lambda_k \in \mathbb{R}^{n_u}$ for the gradient of the cost function is calculated as
$$\lambda_k^T = \frac{\partial \phi}{\partial y}\big(u_k, y(u_k, \theta_k)\big) \left[ \frac{\partial y_p}{\partial u}(u_k) - \frac{\partial y}{\partial u}(u_k, \theta_k) \right]. \tag{9}$$

Based on the parameter estimates θk and the updated modifier λk , the next optimal RTO inputs
are computed by solving the following modified optimization problem:

$$u_{k+1}^\star = \arg\min_u \; \phi(u, y(u, \theta_k)) + \lambda_k^T u \tag{10}$$
$$\text{s.t.} \quad u^L \le u \le u^U.$$

The new operating point is determined by filtering the inputs using a first-order exponential filter:

$$u_{k+1} = u_k + K(u_{k+1}^\star - u_k). \tag{11}$$

The output-matching condition (8) is required in order for the gradient of the modified cost
function to match the plant gradient at uk . This condition represents a model-qualification condition
that is present throughout the ISOPE literature [7,10,23,24].
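A minimal Python sketch of the ISOPE iteration for a hypothetical scalar example is given below; it assumes the plant output gradient is available, enforces the output-matching condition (8) by re-estimating a single model parameter, and then applies the modifier (9), the modified problem (10) and the input filter (11).

```python
# Minimal ISOPE sketch for a scalar example (hypothetical plant/model):
# output matching (8), cost-gradient modifier (9), modified problem (10),
# and first-order input filter (11).
import numpy as np
from scipy.optimize import minimize_scalar

def phi(u, y):                   # known cost phi(u, y)
    return (u - 3.0)**2 + 2.0 * y

def y_plant(u):                  # unknown plant mapping y_p(u)
    return 0.5 * u**2 + u

def dy_plant(u):                 # plant output gradient (assumed measurable/estimated)
    return u + 1.0

def y_model(u, theta):           # adjustable model y(u, theta)
    return theta * u

uL, uU = 0.1, 5.0
u_k, K = 2.0, 0.6                # current input and filter gain
for k in range(15):
    theta_k = y_plant(u_k) / u_k                  # enforces y(u_k, theta_k) = y_p(u_k), Eq. (8)
    dphi_dy = 2.0                                 # d phi / d y (constant in this example)
    lam_k = dphi_dy * (dy_plant(u_k) - theta_k)   # ISOPE modifier, Eq. (9)
    obj = lambda u: phi(u, y_model(u, theta_k)) + lam_k * u
    u_opt = minimize_scalar(obj, bounds=(uL, uU), method='bounded').x   # Eq. (10)
    u_k = u_k + K * (u_opt - u_k)                 # first-order filter, Eq. (11)
print("converged input:", round(u_k, 4))          # the plant optimum of this toy problem is u = 1
```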

3.2. Dealing with Process-Dependent Constraints


In order to deal with process-dependent constraints, Brdyś et al. [25] proposed to use a modifier
for the gradient of the Lagrangian function. The parameter estimation problem is solved under
the output-matching condition (8) and the updated parameters are used in the following modified
optimization problem:

$$u_{k+1}^\star = \arg\min_u \; \phi(u, y(u, \theta_k)) + \lambda_k^T u \tag{12}$$
$$\text{s.t.} \quad g_i(u, y(u, \theta_k)) \le 0, \quad i = 1, \ldots, n_g,$$

where the gradient modifier is computed as follows:
$$\lambda_k^T = \left[ \frac{\partial \phi}{\partial y}\big(u_k, y(u_k, \theta_k)\big) + \mu_k^T \frac{\partial g}{\partial y}\big(u_k, y(u_k, \theta_k)\big) \right] \left[ \frac{\partial y_p}{\partial u}(u_k) - \frac{\partial y}{\partial u}(u_k, \theta_k) \right]. \tag{13}$$

The next inputs applied to the plant are obtained by applying the first-order filter (11), and the next values of the Lagrange multipliers to be used in (13) are adjusted as
$$\mu_{i,k+1} = \max\{0,\; \mu_{i,k} + b_i(\mu_{i,k+1}^\star - \mu_{i,k})\}, \quad i = 1, \ldots, n_g, \tag{14}$$
where $\mu_{k+1}^\star$ are the optimal values of the Lagrange multipliers of Problem (12) [7]. This particular ISOPE
scheme is guaranteed to reach a KKT point of the plant upon convergence, and the process-dependent
constraints are guaranteed to be respected upon convergence. However, the constraints might be
violated during the RTO iterations leading to convergence, which calls for the inclusion of conservative
constraint backoffs [7].

3.3. ISOPE with Model Shift


Later on, Tatjewski [26] argued that the output-matching condition (8) can be satisfied without
the need to adjust the model parameters θ. This can be done by adding the bias correction term ak to
the outputs predicted by the model,

ak := y p (uk ) − y(uk , θ). (15)

This way, the ISOPE Problem (10) becomes:

$$u_{k+1}^\star \in \arg\min_u \; \phi(u, y(u, \theta) + a_k) + \lambda_k^T u \tag{16}$$
$$\text{s.t.} \quad u^L \le u \le u^U,$$

with

$$\lambda_k^T := \frac{\partial \phi}{\partial y}\big(u_k, y(u_k, \theta) + a_k\big) \left[ \frac{\partial y_p}{\partial u}(u_k) - \frac{\partial y}{\partial u}(u_k, \theta) \right]. \tag{17}$$

This approach can also be applied to the ISOPE scheme (12) and (13) and to all ISOPE algorithms
that require meeting Condition (8). As noted in [26], the name ISOPE is no longer adequate since,
in this variant, there is no need for estimating the model parameters. The name Modifier Adaptation
becomes more appropriate. As will be seen in the next section, MA schemes re-interpret the role of the
modifiers and the way they are defined.

4. Modifier Adaptation: Enforcing Plant Optimality


The idea behind MA is to introduce correction terms for the cost and constraint functions such
that, upon convergence, the modified model-based optimization problem matches the plant NCO.
In contrast to two-step RTO schemes such as the classical two-step approach and ISOPE, MA schemes
do not rely on estimating the parameters of a first-principles model by solving a parameter estimation
problem. Instead, the correction terms introduce a new parameterization that is specially tailored
to matching the plant NCO. This parameterization consists of modifiers that are updated based on
measurements collected at the successive RTO iterates.


4.1. Basic MA Scheme

4.1.1. Modification of Cost and Constraint Functions


In basic MA, first-order correction terms are added to the cost and constraint functions of the
optimization problem [14,15]. At the kth iteration with the inputs uk , the modified cost and constraint
functions are constructed as follows:

$$\Phi_{m,k}(u) := \Phi(u) + \varepsilon_k^\Phi + (\lambda_k^\Phi)^T (u - u_k), \tag{18}$$
$$G_{m,i,k}(u) := G_i(u) + \varepsilon_k^{G_i} + (\lambda_k^{G_i})^T (u - u_k) \le 0, \quad i = 1, \ldots, n_g, \tag{19}$$

with the modifiers $\varepsilon_k^\Phi \in \mathbb{R}$, $\varepsilon_k^{G_i} \in \mathbb{R}$, $\lambda_k^\Phi \in \mathbb{R}^{n_u}$, and $\lambda_k^{G_i} \in \mathbb{R}^{n_u}$ given by
$$\varepsilon_k^\Phi = \Phi_p(u_k) - \Phi(u_k), \tag{20a}$$
$$\varepsilon_k^{G_i} = G_{p,i}(u_k) - G_i(u_k), \quad i = 1, \ldots, n_g, \tag{20b}$$
$$(\lambda_k^\Phi)^T = \frac{\partial \Phi_p}{\partial u}(u_k) - \frac{\partial \Phi}{\partial u}(u_k), \tag{20c}$$
$$(\lambda_k^{G_i})^T = \frac{\partial G_{p,i}}{\partial u}(u_k) - \frac{\partial G_i}{\partial u}(u_k), \quad i = 1, \ldots, n_g. \tag{20d}$$

The zeroth-order modifiers $\varepsilon_k^\Phi$ and $\varepsilon_k^{G_i}$ correspond to bias terms representing the differences between the plant values and the predicted values at $u_k$, whereas the first-order modifiers $\lambda_k^\Phi$ and $\lambda_k^{G_i}$ represent the differences between the plant gradients and the gradients predicted by the model at $u_k$. The plant gradients $\frac{\partial \Phi_p}{\partial u}(u_k)$ and $\frac{\partial G_{p,i}}{\partial u}(u_k)$ are assumed to be available at $u_k$. A graphical interpretation of the first-order correction for the constraint $G_i$ is depicted in Figure 1. Note that, if the cost and/or constraints are perfectly known functions of the inputs $u$, then the corresponding modifiers are equal to zero, and no model correction is necessary. For example, the upper and lower bounds on the input variables are constraints that are perfectly known, and thus do not require modification.
[Figure 1. First-order modification of the constraint $G_i$ at $u_k$. The plot shows the plant constraint $G_{p,i}(u)$, the model constraint $G_i(u)$, and the modified constraint $G_{m,i,k}(u)$ obtained from the offset $\varepsilon_k^{G_i}$ and the slope correction $(\lambda_k^{G_i})^T (u - u_k)$.]

At the kth RTO iteration, the next optimal inputs $u_{k+1}^\star$ are computed by solving the following modified optimization problem:
$$u_{k+1}^\star = \arg\min_u \; \Phi_{m,k}(u) := \Phi(u) + (\lambda_k^\Phi)^T u \tag{21a}$$
$$\text{s.t.} \quad G_{m,i,k}(u) := G_i(u) + \varepsilon_k^{G_i} + (\lambda_k^{G_i})^T (u - u_k) \le 0, \quad i = 1, \ldots, n_g. \tag{21b}$$


Note that the addition of the constant term $\varepsilon_k^\Phi - (\lambda_k^\Phi)^T u_k$ to the cost function does not affect the solution $u_{k+1}^\star$. Hence, the cost modification is often defined by including only the linear term in $u$, that is, $\Phi_{m,k}(u) := \Phi(u) + (\lambda_k^\Phi)^T u$.
The optimal inputs can then be applied directly to the plant:
$$u_{k+1} = u_{k+1}^\star. \tag{22}$$

However, such an adaptation strategy may result in excessive correction and, in addition,
be sensitive to process noise. Both phenomena can compromise the convergence of the algorithm.
Hence, one usually relies on first-order filters that are applied to either the modifiers or the inputs.
In the former case, one updates the modifiers using the following first-order filter equations [15]:

$$\varepsilon_k^G = (I_{n_g} - K^\varepsilon)\, \varepsilon_{k-1}^G + K^\varepsilon \big( G_p(u_k) - G(u_k) \big), \tag{23a}$$
$$\lambda_k^\Phi = (I_{n_u} - K^\Phi)\, \lambda_{k-1}^\Phi + K^\Phi \left[ \frac{\partial \Phi_p}{\partial u}(u_k) - \frac{\partial \Phi}{\partial u}(u_k) \right]^T, \tag{23b}$$
$$\lambda_k^{G_i} = (I_{n_u} - K^{G_i})\, \lambda_{k-1}^{G_i} + K^{G_i} \left[ \frac{\partial G_{p,i}}{\partial u}(u_k) - \frac{\partial G_i}{\partial u}(u_k) \right]^T, \quad i = 1, \ldots, n_g, \tag{23c}$$
where the filter matrices $K^\varepsilon$, $K^\Phi$, and $K^{G_i}$ are typically selected as diagonal matrices with eigenvalues in the interval $(0, 1]$. In the latter case, one filters the optimal RTO inputs $u_{k+1}^\star$ with $K = \mathrm{diag}(k_1, \ldots, k_{n_u})$, $k_i \in (0, 1]$:
$$u_{k+1} = u_k + K(u_{k+1}^\star - u_k). \tag{24}$$
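The basic MA iteration can be summarized in a few lines of Python. The sketch below uses hypothetical plant and model functions, assumes that the plant gradients are available (in practice they must be estimated, see Section 5), and implements the modifiers (20), the modified problem (21) and the input filter (24).

```python
# Basic modifier-adaptation sketch (hypothetical plant/model; plant gradients
# assumed available): modifiers (20), modified problem (21), input filter (24).
import numpy as np
from scipy.optimize import minimize

Phi   = lambda u: (u[0] - 2.0)**2 + (u[1] - 2.0)**2       # model cost
G     = lambda u: np.array([u[0] + u[1] - 3.0])            # model constraints, G <= 0
Phi_p = lambda u: (u[0] - 2.5)**2 + 1.5*(u[1] - 2.0)**2    # plant cost (unknown in practice)
G_p   = lambda u: np.array([1.2*u[0] + u[1] - 3.0])        # plant constraints

def jac(f, u, h=1e-6):
    """Finite-difference Jacobian (rows: outputs, columns: inputs)."""
    return np.array([(np.atleast_1d(f(u + h*e)) - np.atleast_1d(f(u - h*e))) / (2.0*h)
                     for e in np.eye(len(u))]).T

u_k, K = np.array([1.0, 1.0]), 0.5
for k in range(30):
    eps_G = G_p(u_k) - G(u_k)                              # zeroth-order modifiers (20b)
    lam_F = (jac(Phi_p, u_k) - jac(Phi, u_k)).ravel()      # cost-gradient modifier (20c)
    lam_G = jac(G_p, u_k) - jac(G, u_k)                    # constraint-gradient modifiers (20d)
    Phi_m = lambda u: Phi(u) + lam_F @ (u - u_k)           # modified cost (21a)
    G_m   = lambda u: G(u) + eps_G + lam_G @ (u - u_k)     # modified constraints (21b)
    u_opt = minimize(Phi_m, u_k,
                     constraints=[{'type': 'ineq', 'fun': lambda u: -G_m(u)}]).x
    u_k   = u_k + K * (u_opt - u_k)                        # input filter (24)
print("converged inputs:", np.round(u_k, 3), " plant constraint:", np.round(G_p(u_k), 4))
```

Upon convergence, the iterate satisfies the plant KKT conditions of this toy problem (active plant constraint, matched gradients), consistent with Theorem 1 below.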

4.1.2. KKT Matching Upon Convergence


The appeal of MA lies in its ability to reach a KKT point of the plant upon convergence, as made
explicit in the following theorem.

Theorem 1 (MA convergence ⇒ KKT matching [15]). Consider MA with filters on either the modifiers or the inputs. Let $u_\infty = \lim_{k \to \infty} u_k$ be a fixed point of the iterative scheme and a KKT point of the modified optimization Problem (21). Then, $u_\infty$ is also a KKT point of the plant Problem (1).

4.1.3. Model Adequacy


The question of whether a model is adequate for use in an RTO scheme was addressed by Forbes
and Marlin [27], who proposed the following model-adequacy criterion.

Definition 1 (Model-adequacy criterion [27]). A process model is said to be adequate for use in an RTO
scheme if it is capable of producing a fixed point that is a local minimum for the RTO problem at the plant
optimum up .

In other words, up must be a local minimum when the RTO algorithm is applied at up .
The plant optimum up satisfies the first- and second-order NCO of the plant optimization Problem (1).
The adequacy criterion requires that up must also satisfy the first- and second-order NCO for the
modified optimization Problem (21), with the modifiers (20) evaluated at up . As MA matches the
first-order KKT elements of the plant, only the second-order NCO remain to be satisfied. That is, the
reduced Hessian of the Lagrangian must be positive semi-definite at up . The following proposition
characterizes model adequacy based on second-order conditions. Again, it applies to MA with filters
on either the modifiers or the inputs.


Proposition 1 (Model-adequacy conditions for MA [15]). Let up be a regular point for the constraints and
the unique plant optimum. Let ∇2r L(up ) denote the reduced Hessian of the Lagrangian of Problem (21) at up .
Then, the following statements hold:

i If ∇2r L(up ) is positive definite, then the process model is adequate for use in the MA scheme.
ii If ∇2r L(up ) is not positive semi-definite, then the process model is inadequate for use in the MA scheme.
iii If ∇2r L(up ) is positive semi-definite and singular, then the second-order conditions are not conclusive with
respect to model adequacy.

Example 1 (Model adequacy). Consider the problem $\min_u \Phi_p(u) = u_1^2 + u_2^2$, for which $u_p^\star = [0, 0]^T$. The models $\Phi_1(u) = u_1^2 + u_2^4$ and $\Phi_2(u) = u_1^2 - u_2^4$ both have their gradients equal to zero at $u_p^\star$, and their Hessian matrices both have eigenvalues $\{2, 0\}$ at $u_p^\star$, that is, they are both positive semi-definite and singular. However, $\Phi_1$ is adequate since $u_p^\star$ is a minimizer of $\Phi_1$, while $\Phi_2$ is inadequate since $u_p^\star$ is a saddle point of $\Phi_2$.
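The following sketch reproduces this check numerically (illustrative only): both models exhibit the same singular Hessian at $u_p^\star$, and a probe along the $u_2$ axis reveals that $\Phi_2$ decreases away from $u_p^\star$.

```python
# Numerical illustration of Example 1: identical (singular) Hessians at u_p,
# but only Phi1 has a minimum there.
import numpy as np

Phi1 = lambda u: u[0]**2 + u[1]**4
Phi2 = lambda u: u[0]**2 - u[1]**4
u_p = np.zeros(2)

for name, f in [("Phi1", Phi1), ("Phi2", Phi2)]:
    h = 1e-3
    # Hessian at u_p by central finite differences
    H = np.array([[(f(u_p + h*ei + h*ej) - f(u_p + h*ei - h*ej)
                  - f(u_p - h*ei + h*ej) + f(u_p - h*ei - h*ej)) / (4*h*h)
                   for ej in np.eye(2)] for ei in np.eye(2)])
    probe = f(u_p + np.array([0.0, 0.5])) - f(u_p)   # move along u2 only
    print(name, "Hessian eigenvalues ~", np.round(np.linalg.eigvalsh(H), 3),
          " change along u2:", probe)
```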

4.1.4. Similarity with ISOPE


The key feature of MA schemes is that updating the parameters of a first-principles model is not
required to match the plant NCO upon convergence. In addition, compared to ISOPE, the gradient
modifiers have been redefined. The cost gradient modifier (20c) can be expressed in terms of the
gradients of the output variables as follows:

$$(\lambda_k^\Phi)^T = \frac{\partial \Phi_p}{\partial u}(u_k) - \frac{\partial \Phi}{\partial u}(u_k) \tag{25}$$
$$\qquad = \frac{\partial \phi}{\partial u}(u_k, y_p(u_k)) + \frac{\partial \phi}{\partial y}(u_k, y_p(u_k)) \frac{\partial y_p}{\partial u}(u_k) - \frac{\partial \phi}{\partial u}(u_k, y(u_k, \theta)) - \frac{\partial \phi}{\partial y}(u_k, y(u_k, \theta)) \frac{\partial y}{\partial u}(u_k, \theta).$$

Notice that, if Condition (8) is satisfied, the gradient modifier $\lambda_k^\Phi$ in (25) reduces to the ISOPE
modifier (9). In fact, Condition (8) is required in ISOPE in order for the gradient modifier (9) to
represent the difference between the plant and model gradients. Put differently, output matching is a
prerequisite for the gradient of the modified cost function to match the plant gradient. This requirement
can be removed by directly defining the gradient modifiers as the differences between the plant and
model gradients, as given in (25).

4.2. Alternative Modifications

4.2.1. Modification of Output Variables


Instead of modifying the cost and constraint functions as in (18) and (19) , it is also possible to
place the first-order correction terms directly on the output variables [15]. At the operating point uk ,
the modified outputs read:

$$y_{m,k}(u) := y(u) + \varepsilon_k^y + (\lambda_k^y)^T (u - u_k), \tag{26}$$
with the modifiers $\varepsilon_k^y \in \mathbb{R}^{n_y}$ and $\lambda_k^y \in \mathbb{R}^{n_u \times n_y}$ given by:
$$\varepsilon_k^y = y_p(u_k) - y(u_k), \tag{27a}$$
$$(\lambda_k^y)^T = \frac{\partial y_p}{\partial u}(u_k) - \frac{\partial y}{\partial u}(u_k). \tag{27b}$$


In this MA variant, the next RTO inputs are computed by solving
$$u_{k+1}^\star = \arg\min_u \; \phi(u, y_{m,k}(u)) \tag{28}$$
$$\text{s.t.} \quad y_{m,k}(u) = y(u) + \varepsilon_k^y + (\lambda_k^y)^T (u - u_k),$$
$$\qquad\quad g_i(u, y_{m,k}(u)) \le 0, \quad i = 1, \ldots, n_g.$$
Interestingly, the output bias $\varepsilon_k^y$ is the same as the model shift term (15) introduced by Tatjewski [26] in the context of ISOPE. The MA scheme (28) also reaches a KKT point of the plant upon convergence and, again, one can choose to place a filter on either the modifiers or the inputs [15].
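A compact sketch of this variant is given below (hypothetical scalar-output example; the plant output gradient is assumed available): the correction (26)–(27) is applied to the model output, and the cost and constraint are evaluated on the corrected output as in (28).

```python
# Output-modification MA sketch (hypothetical example): correct the model
# output with (26)-(27), then solve (28) on the corrected output.
import numpy as np
from scipy.optimize import minimize

phi  = lambda u, y: (u[0] - 1.5)**2 + (u[1] - 1.0)**2 + y    # known cost
g    = lambda u, y: y - 2.0                                   # known constraint, g <= 0
y_p  = lambda u: u[0]**2 + 0.5*u[1]                           # plant output (unknown in practice)
y_md = lambda u: 0.6*u[0]**2                                  # model output

def fd_grad(f, u, h=1e-6):
    return np.array([(f(u + h*e) - f(u - h*e)) / (2*h) for e in np.eye(len(u))])

u_k, K = np.array([0.5, 0.5]), 0.5
for _ in range(20):
    eps_y = y_p(u_k) - y_md(u_k)                              # (27a)
    lam_y = fd_grad(y_p, u_k) - fd_grad(y_md, u_k)            # (27b), plant gradient assumed known
    y_m   = lambda u: y_md(u) + eps_y + lam_y @ (u - u_k)     # corrected output (26)
    cons  = {'type': 'ineq', 'fun': lambda u: -g(u, y_m(u))}
    u_opt = minimize(lambda u: phi(u, y_m(u)), u_k, constraints=[cons]).x   # (28)
    u_k   = u_k + K * (u_opt - u_k)                           # input filter
print("iterate:", np.round(u_k, 3))
```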

4.2.2. Modification of Lagrangian Gradients


Section 3.2 introduced the algorithmic approach used in ISOPE for dealing with process-dependent
constraints, which consists in correcting the gradient of the Lagrangian function. An equivalent approach
can be implemented in the context of MA by defining the modified optimization problem as follows:

$$u_{k+1}^\star = \arg\min_u \; \Phi_{m,k}(u) := \Phi(u) + (\lambda_k^L)^T u \tag{29a}$$
$$\text{s.t.} \quad G_{m,i,k}(u) := G_i(u) + \varepsilon_k^{G_i} \le 0, \quad i = 1, \ldots, n_g, \tag{29b}$$
where $\varepsilon_k^{G_i}$ are the zeroth-order constraint modifiers, and the Lagrangian gradient modifier $\lambda_k^L$ represents the difference between the Lagrangian gradients of the plant and the model,
$$(\lambda_k^L)^T = \frac{\partial L_p}{\partial u}(u_k, \mu_k) - \frac{\partial L}{\partial u}(u_k, \mu_k). \tag{30}$$

This approach has the advantage of requiring a single gradient modifier $\lambda_k^L$, but the disadvantage that the modified cost and constraint functions do not provide first-order approximations to the plant cost and constraint functions at each RTO iteration. This increased plant-model mismatch may result in slower convergence to the plant optimum and larger constraint violations prior to convergence.

4.2.3. Directional MA
MA schemes require the plant gradients to be estimated at each RTO iteration. Gradient estimation
is experimentally expensive and represents the main bottleneck for MA implementation (see Section 5
for an overview of gradient estimation methods). The number of experiments required to estimate
the plant gradients increases linearly with the number of inputs, which tends to make MA
intractable for processes with many inputs. Directional Modifier Adaptation (D-MA) overcomes
this limitation by estimating the gradients only in nr < nu privileged input directions [28,29].
This way, convergence can be accelerated since fewer experiments are required for gradient estimation
at each RTO iteration. D-MA defines a (nu × nr )-dimensional matrix of privileged directions,
Ur = [δu1 . . . δur ], the columns of which contain the nr privileged directions in the input space.
Note that these directions are typically selected as orthonormal vectors that span a linear subspace of
dimension nr .
At the operating point uk , the directional derivatives of the plant cost and constraints that need to
be estimated are defined as follows:

$$\nabla_{U_r} j_p := \left. \frac{\partial j_p(u_k + U_r r)}{\partial r} \right|_{r=0}, \quad j_p \in \{\Phi_p, G_{p,1}, G_{p,2}, \ldots, G_{p,n_g}\}, \tag{31}$$
where $r \in \mathbb{R}^{n_r}$. Approximations of the full plant gradients are given by


$$\nabla \Phi_k = \frac{\partial \Phi}{\partial u}(u_k)\,(I_{n_u} - U_r U_r^+) + \nabla_{U_r} \Phi_p\, U_r^+, \tag{32}$$
$$\nabla G_{i,k} = \frac{\partial G_i}{\partial u}(u_k)\,(I_{n_u} - U_r U_r^+) + \nabla_{U_r} G_{p,i}\, U_r^+, \quad i = 1, \ldots, n_g, \tag{33}$$

where the superscript $(\cdot)^+$ denotes the Moore-Penrose pseudo-inverse, and $I_{n_u}$ is the $n_u$-dimensional identity matrix. In D-MA, the gradients of the plant cost and constraints used in (20c) and (20d) are replaced by the estimates (32) and (33). Hence, the gradients of the modified cost and constraint functions match the estimated gradients at $u_k$, that is, $\frac{\partial \Phi_m}{\partial u}(u_k) = \nabla \Phi_k$ and $\frac{\partial G_{m,i}}{\partial u}(u_k) = \nabla G_{i,k}$.

Figure 2 illustrates the fact that the gradient of the modified cost function $\frac{\partial \Phi_m}{\partial u}(u_k)$ and the plant cost gradient $\frac{\partial \Phi_p}{\partial u}(u_k)$ share the same projected gradient in the privileged direction $\delta u$, while $\frac{\partial \Phi_m}{\partial u}(u_k)$ matches the projection of the model gradient $\frac{\partial \Phi}{\partial u}(u_k)$ in the direction orthogonal to $\delta u$.
[Figure 2. Matching the projected gradient of the plant using D-MA.]
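The gradient combination (32) can be written in a few lines, as sketched below with hypothetical numerical values; in practice, the directional derivative $\nabla_{U_r}\Phi_p$ would be estimated from plant experiments along the privileged directions.

```python
# D-MA gradient combination (32): model gradient in the directions orthogonal
# to U_r, plant directional derivatives along U_r (hypothetical values).
import numpy as np

n_u = 4
U_r = np.array([[1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0]]).T        # n_u x n_r matrix of privileged directions
dPhi_model = np.array([0.8, -1.0, 0.3, 0.2])    # model gradient at u_k (row vector)
dPhi_p_dir = np.array([1.1, -0.4])              # estimated plant directional derivatives (31)

Ur_pinv = np.linalg.pinv(U_r)                   # Moore-Penrose pseudo-inverse, n_r x n_u
grad_est = dPhi_model @ (np.eye(n_u) - U_r @ Ur_pinv) + dPhi_p_dir @ Ur_pinv   # Eq. (32)
print(grad_est)   # plant information in the two privileged directions, model information elsewhere
```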

In general, D-MA does not converge to a KKT point of the plant. However, upon convergence,
D-MA reaches a point for which the cost function cannot be improved in any of the privileged
directions. This is formally stated in the following theorem.

Theorem 2 (Plant optimality in privileged directions [29]). Consider D-MA with the gradient estimates (32) and (33) in the absence of measurement noise and with perfect estimates of the directional derivatives (31). Let $u_\infty = \lim_{k \to \infty} u_k$ be a fixed point of that scheme and a KKT point of the modified optimization Problem (21). Then, $u_\infty$ is optimal for the plant in the $n_r$ privileged directions.

The major advantage of D-MA is that, if the selected number of privileged directions is much
lower than the number of inputs, the task of gradient estimation is greatly simplified. An important
issue is the selection of the privileged directions.

Remark 1 (Choice of privileged directions). Costello et al. [29] addressed the selection of privileged input
directions for the case of parametric plant-model mismatch. They proposed to perform a sensitivity analysis of the
gradient of the Lagrangian function with respect to the uncertain model parameters θ. The underlying idea is that,
if the likely parameter variations affect the Lagrangian gradient significantly only in a few input directions, it will
be sufficient to estimate the plant gradients in these directions. The matrix of privileged directions Ur is obtained
by performing a singular value decomposition of the sensitivity matrix $\frac{\delta^2 L}{\delta u\, \delta \theta}$, normalized by means of the expected largest variations of the uncertain model parameters $\theta$ and evaluated at the optimal inputs corresponding to the nominal parameter values. Only the directions in which the gradient of the Lagrangian is significantly affected by parameter variations are retained. Other choices of $U_r$ are currently under research. For example, it is proposed in [30] to adapt $U_r$ at each RTO iteration by considering large parametric perturbations.

D-MA is particularly well suited for the run-to-run optimization of repetitive dynamical systems,
for which a piecewise-polynomial parameterization of the input profiles typically results in a large
number of RTO inputs, thus making the estimation of full gradients prohibitive. For instance,
Costello et al. [29] applied D-MA very successfully to a flying power-generating kite.

4.2.4. Second-Order MA
Faulwasser and Bonvin [31] proposed the use of second-order modifiers in the context of MA.
The use of second-order correction terms allows assessing whether the scheme has converged to a
point satisfying the plant second-order optimality conditions. Note that, already in 1989, Golden and
Ydstie [32] investigated second-order modification terms for single-input problems.
Consider the second-order modifiers
$$\Theta_k^j := \frac{\partial^2 j_p}{\partial u^2}(u_k) - \frac{\partial^2 j}{\partial u^2}(u_k), \quad j \in \{\Phi, G_1, G_2, \ldots, G_{n_g}\}, \tag{34}$$
with $\Theta_k^j \in \mathbb{R}^{n_u \times n_u}$. These modifiers describe the difference in the Hessians of the plant and model costs ($j = \Phi$) and constraints ($j = G_i$), respectively. Second-order MA reads:

$$u_{k+1}^\star = \arg\min_u \; \underbrace{\Phi(u) + \varepsilon_k^\Phi + (\lambda_k^\Phi)^T (u - u_k) + \tfrac{1}{2}(u - u_k)^T \Theta_k^\Phi (u - u_k)}_{=:\ \Phi_{m,k}(u)} \tag{35a}$$
$$\text{s.t.} \quad \underbrace{G_i(u) + \varepsilon_k^{G_i} + (\lambda_k^{G_i})^T (u - u_k) + \tfrac{1}{2}(u - u_k)^T \Theta_k^{G_i} (u - u_k)}_{=:\ G_{m,i,k}(u)} \le 0, \quad i = 1, \ldots, n_g, \tag{35b}$$
$$u \in \mathcal{C}, \tag{35c}$$
with
$$u_{k+1} = u_k + K(u_{k+1}^\star - u_k). \tag{36}$$

Note that, in contrast to the first-order formulation (21), we explicitly add the additional constraint
u ∈ C in (35c), where C denotes a known nonempty convex subset of IRnu . This additional constraint,
which is not subject to plant-model mismatch, will simplify the convergence analysis.
Next, we present an extension to Theorem 1 that shows the potential advantages of second-order
MA. To this end, we make the following assumptions:

A1 Numerical feasibility: For all k ∈ N, Problem (35) is feasible and has a unique minimizer.
A2 Plant and model functions: The plant and model cost and constraint functions are all twice
continuously differentiable on C ⊂ IRnu .

Proposition 2 (Properties of second-order MA [31]). Assume that the second-order MA scheme ((35) and (36)) has converged with $u_\infty = \lim_{k \to \infty} u_k$. Let Assumptions A1 and A2 hold, and let a linear independence constraint qualification hold at $u_\infty$. Then, the following properties hold:

i $u_\infty$ satisfies the KKT conditions of the plant, and
ii the cost and constraint gradients and Hessians of the modified Problem (35) match those of the plant at $u_\infty$.

In addition, if (35) has a strict local minimum at $u_\infty$ such that, for all $d \in \mathbb{R}^{n_d}$,


$$d^T \nabla_r^2 L(u_\infty)\, d > 0, \tag{37}$$
then

iii $\Phi_p(u_\infty)$ is a strict local minimum of $\Phi_p(u)$.

Proposition 2 shows that, if second-order information can be reconstructed from measurements,


then the RTO scheme ((35) and (36)) allows assessing, upon convergence, that a local minimum of the
modified Problem (35) is also a local minimum of the plant.

Remark 2 (Hessian approximation). So far, we have tacitly assumed that the plant gradients and Hessians are
known. However, these quantities are difficult to estimate accurately in practice. Various approaches to compute
plant gradients from measurements will be described in Section 5.1. To obtain Hessian estimates, one can rely
on well-known approximation formulas such as BFGS or SR1 update rules [33]. While BFGS-approximated
Hessians can be enforced to be positive definite, the convergence of the SR1 Hessian estimates to the true
Hessian can be guaranteed under certain conditions ([33] Chap. 6). However, the issue of computing Hessian
approximations that can work in an RTO context (with a low number of data points and a fair amount of noise) is
not solved yet!
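As an illustration of Remark 2, the sketch below updates a Hessian estimate with the SR1 formula from successive gradient differences for a hypothetical quadratic plant; this is one possible way of building the second-order modifiers (34), not a method prescribed in this paper.

```python
# SR1 Hessian update from successive gradient differences (illustrative sketch;
# in an RTO context the gradients would be noisy plant-gradient estimates).
import numpy as np

def sr1_update(B, s, y, r=1e-8):
    """One SR1 update of the Hessian estimate B with step s and gradient change y."""
    v = y - B @ s
    denom = v @ s
    if abs(denom) > r * np.linalg.norm(s) * np.linalg.norm(v):   # standard skip rule
        B = B + np.outer(v, v) / denom
    return B

# hypothetical plant gradient (quadratic plant with Hessian diag(4, 2) plus small noise)
rng = np.random.default_rng(0)
grad_p = lambda u: np.array([4.0*u[0], 2.0*u[1]]) + 0.01*rng.standard_normal(2)

B = np.eye(2)                     # initial Hessian estimate
u_prev, g_prev = np.array([1.0, 1.0]), None
for u in [np.array(x) for x in [[0.8, 0.9], [0.7, 0.5], [0.4, 0.6], [0.2, 0.1]]]:
    g = grad_p(u)
    if g_prev is not None:
        B = sr1_update(B, u - u_prev, g - g_prev)
    u_prev, g_prev = u, g
print(np.round(B, 2))             # rough estimate of the (hypothetical) plant Hessian diag(4, 2)
```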

Remark 3 (From MA to RTO based on surrogate models). It is fair to ask whether second-order
corrections allow implementing model-free RTO schemes. Upon considering the trivial models Φ(u) = 0
and Gi (u) = 0, i = 1, . . . , n g , that is, in the case of no model, the modifiers are

$$\varepsilon_k^j = j_p(u_k), \quad (\lambda_k^j)^T = \frac{\partial j_p}{\partial u}(u_k), \quad \Theta_k^j = \frac{\partial^2 j_p}{\partial u^2}(u_k), \quad j \in \{\Phi, G_1, G_2, \ldots, G_{n_g}\},$$
and the second-order MA Problem (35) reduces to a Quadratically Constrained Quadratic Program:

$$u_{k+1}^\star = \arg\min_u \; \Phi_p(u_k) + \frac{\partial \Phi_p}{\partial u}(u_k)(u - u_k) + \tfrac{1}{2}(u - u_k)^T \frac{\partial^2 \Phi_p}{\partial u^2}(u_k)(u - u_k)$$
$$\text{s.t.} \quad G_{p,i}(u_k) + \frac{\partial G_{p,i}}{\partial u}(u_k)(u - u_k) + \tfrac{1}{2}(u - u_k)^T \frac{\partial^2 G_{p,i}}{\partial u^2}(u_k)(u - u_k) \le 0, \quad i = 1, \ldots, n_g, \tag{38}$$
$$u \in \mathcal{C},$$

where C is determined by lower and upper bounds on the input variables, C = {u ∈ IRnu : uL ≤ u ≤ uU }.
Note that the results of Proposition 2 also hold for RTO Problem (38). Alternatively, the same information can be used to construct the QP approximations used in Successive Quadratic Programming (SQP) approaches for solving NLP problems [33]. The SQP approximation at the kth RTO iteration is given by

$$u_{k+1}^\star = \arg\min_u \; \Phi_p(u_k) + \frac{\partial \Phi_p}{\partial u}(u_k)(u - u_k) + \tfrac{1}{2}(u - u_k)^T \frac{\partial^2 L_p}{\partial u^2}(u_k)(u - u_k)$$
$$\text{s.t.} \quad G_{p,i}(u_k) + \frac{\partial G_{p,i}}{\partial u}(u_k)(u - u_k) \le 0, \quad i = 1, \ldots, n_g, \tag{39}$$
$$u \in \mathcal{C},$$

where the constraints are linearized at uk , and the Hessian of the Lagrangian function is used in the quadratic
term of the cost function. Properties (i) and (iii) of Proposition 2 also hold for RTO Problem (39), and the
Hessian of the Lagrangian function of Problem (39) matches the Hessian of the plant upon convergence.
As the approximations of the cost and constraints are of local nature, a trust-region constraint may be
added [33,34]. Obviously, such an approach leads to a SQP-like RTO scheme based on a surrogate model.


A thorough investigation of RTO based on surrogate models is beyond the scope of this paper. Instead, we refer
the reader to the recent progress made in this direction [35–38].
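For illustration, a single iteration of the model-free scheme (38) can be set up directly with a general NLP solver, as sketched below with hypothetical local plant information at $u_k$ (in practice the values, gradients and Hessians would be estimated from measurements).

```python
# One iteration of the model-free QCQP (38), built from (hypothetical) local
# plant information at u_k and solved as a small NLP.
import numpy as np
from scipy.optimize import minimize

u_k = np.array([1.0, 1.0])
# local plant information at u_k (assumed estimated from measurements)
Phi_k, dPhi_k, HPhi_k = 2.0, np.array([1.0, -0.5]), np.array([[2.0, 0.0], [0.0, 1.0]])
G_k,   dG_k,   HG_k   = -0.3, np.array([0.5, 0.5]), np.array([[0.2, 0.0], [0.0, 0.2]])
uL, uU = np.array([0.0, 0.0]), np.array([2.0, 2.0])   # box constraint set C

quad = lambda c, g, H, u: c + g @ (u - u_k) + 0.5 * (u - u_k) @ H @ (u - u_k)
res = minimize(lambda u: quad(Phi_k, dPhi_k, HPhi_k, u), u_k,
               constraints=[{'type': 'ineq',
                             'fun': lambda u: -quad(G_k, dG_k, HG_k, u)}],
               bounds=list(zip(uL, uU)))
print("next inputs u_{k+1}:", np.round(res.x, 3))
```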

4.3. Convergence Conditions


Arguably, the biggest advantage of the MA schemes presented so far lies in the fact that any
fixed point turns out to be a KKT point of the plant according to Theorem 1. Yet, Theorem 1 is
somewhat limited in value, as it indicates properties upon convergence rather than stating sufficient
conditions for convergence. Note that properties-upon-convergence results appear frequently in
numerical optimization and nonlinear programming, see for example methods employing augmented
Lagrangians with quadratic penalty on constraint violation ([39] Prop. 4.2.1). Hence, we now turn
toward sufficient convergence conditions.

4.3.1. RTO Considered as Fixed-Point Iterations


In principle, one may regard any RTO scheme as a discrete-time dynamical system. In the case
of MA, it is evident that the values of the modifiers at the kth RTO iteration implicitly determine the
values of the inputs at iteration k + 1.
We consider here the second-order MA scheme with input filtering from the previous section. Let $\mathrm{vec}(A) \in \mathbb{R}^{n(n+1)/2}$ be the vectorization of the symmetric matrix $A \in \mathbb{R}^{n \times n}$. Using this shorthand notation, we collect all modifiers in the vector $\Lambda \in \mathbb{R}^{n_\Lambda}$, $n_\Lambda = (n_g + 1)\left(n_u + \frac{n_u(n_u + 1)}{2}\right) + n_g + 1$,
$$\Lambda_k := \left[ \varepsilon_k^\Phi, (\lambda_k^\Phi)^T, \mathrm{vec}(\Theta_k^\Phi), \varepsilon_k^{G_1}, (\lambda_k^{G_1})^T, \mathrm{vec}(\Theta_k^{G_1}), \ldots, \varepsilon_k^{G_{n_g}}, (\lambda_k^{G_{n_g}})^T, \mathrm{vec}(\Theta_k^{G_{n_g}}) \right]^T. \tag{40}$$

As the minimizer in the optimization problem (35) depends on $\Lambda_k$, we can formally state Algorithm (35) and (36) as
$$u_{k+1} = (1 - \alpha) u_k + \alpha\, u^\star(u_k, \Lambda_k), \tag{41}$$
whereby $u^\star(u_k, \Lambda_k)$ is the minimizer of (35), and, for the sake of simplicity, the filter is chosen as the scalar $\alpha \in (0, 1)$. Clearly, the above shorthand notation can be applied to any MA scheme with input filtering.
Before stating the result, we recall the notion of a nonexpansive map.

Definition 2 (Nonexpansive map). The map $\Gamma : \mathcal{C} \to \mathcal{C}$ is called nonexpansive, if
$$\forall\, x, y \in \mathcal{C}: \quad \| \Gamma(x) - \Gamma(y) \| \le \| x - y \|.$$

Theorem 3 (Convergence of MA [31]). Consider the RTO scheme (41). Let Assumptions A1 and A2 hold and let $\alpha \in (0, 1)$. If the map $u^\star : u \mapsto u^\star(u, \Lambda(u))$ is nonexpansive in the sense of Definition 2 and has at least one fixed point on $\mathcal{C}$, then the sequence $(u_k)_{k \in \mathbb{N}}$ of RTO iterates defined by (41) converges to a fixed point, that is,
$$\lim_{k \to \infty} \| u^\star(u_k, \Lambda(u_k)) - u_k \| = 0.$$

Remark 4 (Reasons for filtering). Filtering can be understood as a way to increase the domain of attraction of
MA schemes. This comes in addition to dealing with noisy measurements and the fact that large correction steps
based on local information should be avoided.

4.3.2. Similarity with Trust-Region Methods


The previous section has investigated global convergence of MA schemes. However, the analysis
requires the characterization of properties of the argmin operator of the modified optimization problem,
which is in general challenging. Next, we recall a result given in [40] showing that one can exploit the
similarity between MA and trust-region methods. This similarity has also been observed in [41].


To this end, we consider the following variant of (21):
$$u_{k+1}^\star = \arg\min_u \; \Phi_{m,k}(u) := \Phi(u) + (\lambda_k^\Phi)^T u \tag{42a}$$
$$\text{s.t.} \quad G_{m,i,k}(u) := G_i(u) + \varepsilon_k^{G_i} + (\lambda_k^{G_i})^T (u - u_k) \le 0, \quad i = 1, \ldots, n_g, \tag{42b}$$
$$\qquad\quad u \in \mathcal{B}(u_k, \rho_k). \tag{42c}$$

The only difference between this optimization problem and (21) is the trust-region constraint (42c), where $\mathcal{B}(u_k, \rho_k)$ denotes a closed ball in $\mathbb{R}^{n_u}$ with radius $\rho_k$ centered at $u_k$.
Consider
$$\omega_k := \frac{\Phi_p(u_k) - \Phi_p(u_{k+1}^\star)}{\Phi_{m,k}(u_k) - \Phi_{m,k}(u_{k+1}^\star)}.$$
If $\omega_k \gg 1$, then, at $u_{k+1}^\star$, the plant performs significantly better than predicted by the modified model. Likewise, if $\omega_k \ll 1$, the plant performs significantly worse than predicted by the modified model. In other words, $\omega_k$ is a local criterion for the quality of the modified model. In trust-region methods, one replaces (input) filtering with acceptance rules for candidate points. In [40], it is suggested to apply the following rule:
$$u_{k+1} := \begin{cases} u_{k+1}^\star & \text{if } \omega_k \ge \eta_1 \\ u_k & \text{otherwise} \end{cases} \tag{43}$$
Note that this acceptance rule requires application of $u_{k+1}^\star$ to the plant.
Another typical ingredient of trust-region methods is an update rule for the radius $\rho_k$. Consider the constant scalars $0 < \eta_1 \le \eta_2 < 1$, $0 < \gamma_1 \le \gamma_2 < 1$, and assume that the update satisfies the following conditions:
$$\rho_{k+1} \in \begin{cases} [\rho_k, \infty) & \text{if } \omega_k \ge \eta_2 \\ [\gamma_2 \rho_k, \rho_k] & \text{if } \omega_k \in [\eta_1, \eta_2) \\ [\gamma_1 \rho_k, \gamma_2 \rho_k] & \text{if } \omega_k \le \eta_1 \end{cases} \tag{44}$$
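The acceptance test (43) and the radius update (44) translate directly into code; the sketch below uses one admissible (hypothetical) choice of the constants $\eta_1$, $\eta_2$, $\gamma_1$, $\gamma_2$ and of the growth/shrink factors within the admissible intervals.

```python
# Trust-region acceptance (43) and radius update (44) for MA (illustrative constants;
# the Phi_p values come from plant evaluations at u_k and at the candidate point).
def trust_region_step(u_k, u_cand, rho_k, Phi_p_k, Phi_p_cand, Phi_m_k, Phi_m_cand,
                      eta1=0.1, eta2=0.9, gamma1=0.5, gamma2=0.8):
    omega = (Phi_p_k - Phi_p_cand) / (Phi_m_k - Phi_m_cand)   # quality of the modified model
    u_next = u_cand if omega >= eta1 else u_k                  # acceptance rule (43)
    if omega >= eta2:                                          # radius update (44)
        rho_next = 2.0 * rho_k          # any value in [rho_k, inf) is admissible
    elif omega >= eta1:
        rho_next = gamma2 * rho_k       # any value in [gamma2*rho_k, rho_k] is admissible
    else:
        rho_next = gamma1 * rho_k       # any value in [gamma1*rho_k, gamma2*rho_k] is admissible
    return u_next, rho_next

# example call with hypothetical plant/model cost values
print(trust_region_step(u_k=0.5, u_cand=0.8, rho_k=0.4,
                        Phi_p_k=3.0, Phi_p_cand=2.2, Phi_m_k=3.1, Phi_m_cand=2.3))
```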

As in the previous section, we assume that Assumptions A1 and A2 hold. In addition, we require
the following:

A3 Plant boundedness: The plant objective Φ p is lower-bounded on IRnu . Furthermore, its Hessian is
bounded from above on IRnu .
A4 Model decrease: For all $k \in \mathbb{N}$, there exists a constant $\kappa \in (0, 1]$ and a sequence $(\beta_k)_{k \in \mathbb{N}} > 1$ such that
$$\Phi_{m,k}(u_k) - \Phi_{m,k}(u_{k+1}^\star) \ge \kappa\, \| \nabla \Phi_{m,k}(u_k) \| \cdot \min\left\{ \rho_k,\; \frac{\| \nabla \Phi_{m,k}(u_k) \|}{\beta_k} \right\}.$$

Now, we are ready to state convergence conditions for the trust-region-inspired MA scheme given
by (42)–(44).

Theorem 4 (Convergence with trust-region constraints [40]). Consider the RTO scheme (42)–(44) and let Assumptions A1–A4 hold, then
$$\lim_{k \to \infty} \| \nabla \Phi_p(u_k) \| = 0.$$

The formal proof of this result is based on a result on convergence of trust-region methods given
in [34]. For details, we refer to [40].


Remark 5 (Comparison of Theorems 3 and 4). A few remarks on the convergence results given by
Theorems 3 and 4 are in order. While Theorem 3 is applicable to schemes with first- and second-order correction
terms, the convergence result is based on the nonexpansiveness of the argmin operator, which is difficult to verify
in practice. However, Theorem 3 provides a good motivation for input filtering as the convergence result is based
on averaged iterations of a nonexpansive operator. In contrast, Theorem 4 relies on Assumption A4, which
ensures sufficient model decrease ([34] Thm. 6.3.4, p. 131). However, this assumption is in general not easy
to verify.
There is another crucial difference between MA with and without trust-region constraints. The input update (43) is based on $\omega_k$, which requires application of $u_{k+1}^\star$ to the plant. Note that, if at the kth RTO iteration the trust region is chosen too large, then $u_{k+1}^\star$ is first applied to the plant, resulting in $\omega_k < \eta_1$ and thus in the immediate (re-)application of $u_k$. In other words, a trust region that is chosen too large can result in
successive experiments that do not guarantee plant cost improvement. In a nominal setting with perfect gradient
information, this is clearly a severe limitation. However, in any real world application, where plant gradients
need to be estimated, the plant information obtained from rejected steps may be utilized for gradient estimation.

4.3.3. Use of Convex Models and Convex Upper Bounds


Next, we turn to the issue of convexity in MA. As already mentioned in Proposition 1, for a
model to be adequate in MA, it must be able to admit, after the usual first-order correction, a strict
local minimum at the generally unknown plant optimum u_p. At the same time, it is worth noting
that the model is simply a tool in the design of MA schemes. In Remark 3, for instance, we pointed
toward second-order MA with no model functions. Yet, this is not the only possible choice. It has
been observed in [21] that the adequacy issue is eliminated if one relies on strictly convex models and
first-order correction terms. The next proposition summarizes these results.

Proposition 3 (Use of convex models in MA [21]). Consider the MA Problem (21). Let the model cost and
constraint functions Φ and Gi , i = 1, . . . , n g , be strictly convex functions. Then, (i) Problem (21) is a strictly
convex program; and (ii) the model satisfies the adequacy condition of Definition 1.

Remark 6 (Advantages of convex models). The most important advantage of convex models in MA is that
model adequacy is guaranteed without prior knowledge of the plant optimum. Furthermore, it is well known
that convex programs are, in general, easier to solve than non-convex ones. Note that one can relax the strict
convexity requirement to either the cost being strictly convex or at least one of the active constraints being
strictly convex at the plant optimum [21].
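As a small numerical illustration of Proposition 3 (an assumed example, not code from [21]), the sketch below checks that a strictly convex quadratic model retains a positive-definite Hessian after addition of a first-order correction term, so that the corrected model can admit a strict local minimum at the plant optimum u_p no matter where that optimum happens to lie.

```python
# Minimal check: for the strictly convex model Phi(u) = 0.5*u^T Q u (Q positive
# definite), the corrected model Phi_m(u) = Phi(u) + lambda^T u has Hessian Q,
# independently of the modifier lambda and of u, hence it stays strictly convex.
import numpy as np

Q = np.array([[2.0, 0.3],
              [0.3, 1.0]])          # model Hessian, chosen positive definite
lam = np.array([0.7, -1.2])         # some first-order modifier (its value is irrelevant for convexity)

hessian_modified = Q                # Hessian of Phi_m does not depend on lam or on u
print(np.all(np.linalg.eigvalsh(hessian_modified) > 0))   # True: a strict minimum can be located anywhere
```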

4.4. Extensions
Several extensions and variants of MA have recently been developed to account for specific
problem configurations and needs.

4.4.1. MA Applied to Controlled Plants


MA guarantees plant feasibility upon convergence, but the RTO iterates prior to convergence might
violate the plant constraints. For continuous processes, it is possible to generate feasible steady-state
operating points by implementing the RTO results via a feedback control layer that tracks the constrained
variables that are active at the RTO solution [42]. This requires the constrained quantities in the
optimization problem to be measured online at sufficiently high frequency. Navia et al. [43] recently
proposed an approach to prevent infeasibilities in MA implementation by including PI controllers
that become activated only when the measurements show violation of the constraints. In industry,
model predictive control (MPC) is used widely due to its ability to handle large multivariable systems
with constraints [44]. Recently, Marchetti et al. [45] proposed an approach for integrating MA with
MPC, wherein MPC is used to enforce the equality and active inequality constraints of the modified
optimization problem. The remaining degrees of freedom are controlled to their optimal values along


selected input directions. In order to implement MA on a controlled plant, the gradients are corrected
on the tangent space of the equality constraints.
The approach used in industry to combine RTO with MPC consists in including a target
optimization stage at the MPC layer [44]. Since the nonlinear steady-state model used at the RTO
layer is not in general consistent with the linear dynamic model used by the MPC regulator, the
optimal setpoints given by the RTO solution are often not reachable by the MPC regulator. The target
optimization problem uses a (linear) steady-state model that is consistent with the MPC dynamic
model. Its purpose is to correct the RTO setpoints by computing steady-state targets that are reachable
by the MPC regulator [46,47]. The target optimization problem executes at the same frequency as the
MPC regulator and uses the same type of feedback. Three different designs of the target optimization
problem have been analyzed in [48], each of which guarantees attaining only feasible points for the
plant at steady state, and reaching the RTO optimal inputs upon convergence.
Another difficulty arises when the inputs of the model-based optimization problem are not the
same as the plant inputs. This happens, for instance, when the plant is operated in closed loop, but
only a model of the open-loop system is available [49]. In this case, the plant inputs are the controller
setpoints r, while the model inputs are the manipulated variables u. Three alternative MA extensions
have recently been proposed to optimize a controlled plant using a model of the open-loop plant [50].
The three extensions use the cost and constraint gradients of the plant with respect to the setpoints r:

1. The first approach, labeled “Method UR”, suggests solving the optimization problem for u, but
computes the modifiers in the space of the setpoints r.
2. The second approach, labelled “Method UU”, solves the optimization problem for u, and
computes the modifiers in the space of u.
3. The third approach, labelled “Method RR”, solves the optimization problem for r, and computes
the modifiers in the space of r. It relies on the construction of model approximations for the
controlled plant that are obtained from the model of the open-loop plant.

As shown in [50], the three extensions preserve the MA property of reaching a KKT point of the
plant upon convergence.

4.4.2. MA Applied to Dynamic Optimization Problems


There have been some attempts to extend the applicability of MA to the dynamic run-to-run
optimization of batch processes [51,52]. The idea therein is to build on the repetitive nature of batch
processes and perform run-to-run iterations to progressively improve the performance of the batches.
The approach used in [51] takes advantage of the fact that dynamic optimization problems can
be reformulated as static optimization problems upon discretization of the inputs, constraints and
the dynamic model [17]. This allows the direct use of MA, the price to pay being that the number
of decision variables increases linearly with the number of discretization points, as shown in [51].
Note that, if the active path constraints are known in the various intervals of the solution, a much more
parsimonious input parameterization can be implemented, as illustrated in ([17] Appendix).
The approach proposed in [52] uses CA (that is, MA with only zeroth-order modifiers) for the
run-to-run optimization of batch processes. Dynamic optimization problems are characterized by the
presence of both path and terminal constraints. Because of uncertainty and plant-model mismatch, the
measured values of both path and terminal constraints will differ from their model predictions. Hence,
for each run, one can offset the values of the terminal constraints in the dynamic optimization problem
with biases corresponding to the differences between the predicted and measured terminal constraints
of the previous batch. Path constraints are modified similarly, by adding to the path constraints
a time-dependent function corresponding to the differences between the measured and predicted
path constraints during the previous batch. An additional difficulty arises when the final time of the
batch is also a decision variable. Upon convergence, the CA approach [52] only guarantees constraint
satisfaction, while the full MA approach [51] preserves the KKT matching property of standard MA.


4.4.3. Use of Transient Measurements for MA


MA is by nature a steady-state to steady-state RTO methodology for the optimization of uncertain
processes. This means that several iterations to steady state are generally needed before convergence.
However, there are cases where transient measurements can be used as well. Furthermore, it would be
advantageous to be able to use transient measurements in a systematic way to speed up the steady-state
optimization of dynamic processes.
The concept of fast RTO via CA was introduced and applied to an experimental solid oxide fuel-cell
stack in the presence of operating constraints and plant-model mismatch [53]. Solid oxide fuel-cell stacks
are, roughly speaking, electrochemical reactors embedded in a furnace. The electrochemical reaction
between hydrogen and oxygen is almost instantaneous and results in the production of electrical
power and water. On the other hand, thermal equilibration is much slower. The fast RTO approach
in [53] is very simple and uses CA. The RTO period is set somewhere between the time scale of the
electrochemical reaction and the time scale of the thermal process. This way, the chemical reaction has
time to settle, and the thermal effects are treated as slow process drifts that are accounted for like any
other source of plant-model mismatch. This shows that it is possible to use RTO before steady state has
been reached, at least when a time-scale separation exists between fast optimization-relevant dynamics
and slow dynamics that do not affect much the cost and constraints of the optimization problem.
In [54], a framework has been proposed to apply MA during transient operation to steady state
for the case of parametric plant-model mismatch, thereby allowing the plant to converge to optimal
operating conditions in a single transient operation to steady state. The basic idea is simply to implement
standard MA during the transient “as if the process were at steady state”. Optimal inputs are computed
and applied until the next RTO execution during transient. Hence, the time between two consecutive
RTO executions becomes a tuning parameter just as the filter gains. Transient measurements obtained
at the RTO sampling period are treated as if they were steady-state measurements and are therefore
directly used for computing the zeroth-order modifiers and estimating the plant gradients at “steady
state”. There are two main advantages of this approach: (i) standard MA can be applied; and (ii) the
assumption that transient measurements can play the role of steady-state measurements becomes more
and more valid as the system approaches steady state. Simulation results in [54] are very encouraging,
but they also highlight some of the difficulties, in particular when the dynamics exhibit non-standard
behaviors such as inverse response. A way to circumvent these difficulties consists in reducing the
RTO frequency. Ultimately, this frequency could be reduced to the point that MA is only solved at
steady state, when the process dynamics have disappeared. Research is ongoing to improve the use
of transient measurements and characterize the types of dynamic systems for which this approach is
likely to reduce the time needed for convergence [55].

4.4.4. MA when Part of the Plant is Perfectly Modeled


As mentioned above, MA is capable of driving a plant toward a KKT point even though the model
is structurally incorrect. The only requirement for the model is that it satisfies the model-adequacy
conditions, a property that can be enforced if convex model approximations are used [54]. In addition,
plant measurements that allow good estimation of the plant constraints and gradients are required.
The fact that MA can be efficient without an accurate model does not mean that it cannot benefit from
the availability of a good model. For instance, in [56] the authors acknowledge that, for most energy
systems, the model incorporates basic mass and energy balances that can often be very rigorously
modeled and, thus, there is no need to include any structural or parametric plant-model mismatch.
The authors suggest separating the process model equations into two sets of equations. The set of
rigorous model equations is denoted the “process model”, while the second set of equations is referred
to as the “approximate model” and describes performance and efficiency factors, which are much
harder to model and therefore susceptible to carry plant-model mismatch. Hence, modifiers are used
only for this second set of equations by directly modifying the corresponding model equations. Key to
the approach in [56] is data reconciliation, which makes explicit use of the knowledge of the set of


“perfect model equations”. One of the advantages of not modifying the well-known subparts of the
model is that it may reduce the number of plant gradients that need to be estimated, without much
loss in performance.

5. Implementation Aspects
The need to estimate plant gradients represents the main implementation difficulty. This is a
challenging problem since the gradients cannot be measured directly and, in addition, measurement
noise is almost invariably present. This section discusses different ways of estimating gradients, of
computing modifiers, and of combining gradient estimation and optimization.

5.1. Gradient Estimation


Several methods are available for estimating plant gradients [57–60]. These methods can be
classified as steady-state perturbation methods that use only steady-state data, and dynamic perturbation
methods that use transient data.

5.1.1. Steady-State Perturbation Methods


Steady-state perturbation methods rely on steady-state data for gradient estimation. For each
change in the input variables, one must wait until the plant has reached steady state before taking
measurements, which can make these methods particularly slow. Furthermore, to obtain reliable
gradient estimates, it is important to avoid (i) amplifying the noise present in experimental data [61,62];
and (ii) using past data that correspond to different conditions (for example, different qualities of raw
materials, or different disturbance values).

Finite-difference approximation (FDA). The most common approach is to use FDA techniques that
require at least nu + 1 steady-state operating points to estimate the gradients. Several alternatives can
be envisioned for choosing these points:

• FDA by perturbing the current RTO point: A straightforward approach consists in perturbing each
input individually around the current operating point to get an estimate of the corresponding
gradient element. For example, in the forward-finite-differencing (FFD) approach, an estimator of
the partial derivative ∂Φ_p/∂u_j(u_k), j = 1, . . . , n_u, at the kth RTO iteration is obtained as

(∇Φ_p,k)_j = [ Φ̃_p(u_k + h e_j) − Φ̃_p(u_k) ] / h,   h > 0,   (45)

where h is the step size, e_j is the jth unit vector, and a tilde denotes a noisy
measurement. This approach requires nu perturbations to be carried out at each RTO
iteration, and for each perturbation a new steady state must be attained. Alternatively, the
central-finite-differencing (CFD) approach can be used, which is more accurate but requires 2nu
perturbations at each RTO iteration [61]. Since perturbing each input individually may lead to
constraint violations when the current operating point is close to a constraint, an approach has
been proposed for generating nu perturbed points that take into account the constraints and avoid
ill-conditioned points for gradient estimation [45].
• FDA using past RTO points: The gradients can be estimated by FDA based on the measurements
obtained at the current and past RTO points {uk , uk−1 , . . . , uk−nu }. This approach is used in dual
ISOPE and dual MA methods [7,63–65]—the latter methods being discussed in Section 5.3. At the
kth RTO iteration, the following matrix can be constructed:


Uk : = [ uk − uk −1 , uk − uk −2 , ..., uk − uk−nu ] ∈ IRnu ×nu . (46)

Assuming that measurements of the cost Φ p and constraints G p,i are available at each iteration,
we construct the following vectors:

δΦ̃_p,k := [ Φ̃_p,k − Φ̃_p,k−1, Φ̃_p,k − Φ̃_p,k−2, ..., Φ̃_p,k − Φ̃_p,k−n_u ]^T ∈ IR^{n_u},   (47)
δG̃_p,i,k := [ G̃_p,i,k − G̃_p,i,k−1, G̃_p,i,k − G̃_p,i,k−2, ..., G̃_p,i,k − G̃_p,i,k−n_u ]^T ∈ IR^{n_u},   i = 1, . . . , n_g.   (48)

The measured cost has measurement noise vk :

Φ̃ p,k = Φ p (uk ) + vk . (49)

If U_k is nonsingular, then the set of n_u + 1 points {u_{k−j}}_{j=0}^{n_u} is said to be poised for linear
interpolation in IR^{n_u}, and U_k is called a matrix of simplex directions [34]. The cost gradient
at uk can then be estimated by FDA as follows:

∇Φ_p,k = (δΦ̃_p,k)^T (U_k)^{−1},   (50)

which is known as the simplex gradient [34]. The constraint gradients can be computed in a
similar way. Both the FFD estimator (45) and the simplex gradient (50) are illustrated in the short sketch below.
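The sketch below illustrates, on a hypothetical plant cost, the two estimators just described: the FFD estimator (45), which perturbs each input around the current point, and the simplex gradient (50) built from the current and past operating points via the matrix of simplex directions (46). The step size, points and cost function are arbitrary and used only for illustration.

```python
import numpy as np

def plant_cost(u):
    """Hypothetical plant cost used only for illustration."""
    return (u[0] - 1.0)**2 + 2.0 * (u[1] + 0.5)**2

def ffd_gradient(u_k, h=1e-2):
    """Forward-finite-difference estimate (45): one extra plant evaluation per input."""
    f0 = plant_cost(u_k)
    g = np.zeros_like(u_k)
    for j in range(len(u_k)):
        e_j = np.zeros_like(u_k)
        e_j[j] = 1.0
        g[j] = (plant_cost(u_k + h * e_j) - f0) / h
    return g

def simplex_gradient(points, costs):
    """Simplex-gradient estimate (50) from the current point points[0] and nu past points."""
    u_k, f_k = points[0], costs[0]
    U = np.column_stack([u_k - p for p in points[1:]])   # matrix of simplex directions (46)
    dphi = np.array([f_k - f for f in costs[1:]])        # cost differences, cf. (47)
    return np.linalg.solve(U.T, dphi)                    # solves U^T g = dphi, i.e., g^T = dphi^T U^{-1} as in (50)

u_k = np.array([0.2, 0.1])
print(ffd_gradient(u_k))
past_points = [u_k, np.array([0.0, 0.0]), np.array([0.3, -0.2])]
print(simplex_gradient(past_points, [plant_cost(p) for p in past_points]))
```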

Broyden’s method. The gradients are estimated from the past RTO points using the following recursive
updating scheme:

∇Φ_p,k = ∇Φ_p,k−1 + [ (Φ̃_p,k − Φ̃_p,k−1) − ∇Φ_p,k−1 (u_k − u_{k−1}) ] / [ (u_k − u_{k−1})^T (u_k − u_{k−1}) ] · (u_k − u_{k−1})^T.   (51)

The use of Broyden’s method was investigated for ISOPE in [66] and for MA in [67]. Comparative
studies including this gradient estimation method can be found in [58,68].
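A minimal sketch of the recursive Broyden update (51), with hypothetical data; it illustrates the formula itself rather than the complete schemes studied in [66,67].

```python
import numpy as np

def broyden_gradient_update(grad_prev, u_k, u_prev, phi_k, phi_prev):
    """One rank-one update (51); gradient estimates are handled as 1-D arrays (row vectors)."""
    du = u_k - u_prev
    correction = (phi_k - phi_prev - grad_prev @ du) / (du @ du)
    return grad_prev + correction * du

grad0 = np.array([0.0, 0.0])                    # previous gradient estimate
u_prev, phi_prev = np.array([0.0, 0.0]), 1.50   # previous input and measured cost (hypothetical)
u_k, phi_k = np.array([0.2, 0.1]), 1.36         # current input and measured cost (hypothetical)
print(broyden_gradient_update(grad0, u_k, u_prev, phi_k, phi_prev))
```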

Gradients from fitted surfaces. A widely used strategy for extracting gradient information from
(noisy) experimental data consists in fitting polynomial or spline curves to the data and evaluating
the gradients analytically by differentiating the fitted curves [69]. In the context of MA, Gao et al. [36]
recently proposed to use least-square regression to obtain local quadratic approximations of the cost
and constraint functions using selected data, and to evaluate the gradients by differentiating these
quadratic approximations.

5.1.2. Dynamic Perturbation Methods


In dynamic perturbation methods, the steady-state gradients are estimated based on the transient
response of the plant. Three classes of methods are described next.

Dynamic model identification. These methods rely on the online identification of simple dynamic
input-output models based on the plant transient response. Once a dynamic model is identified, the
steady-state gradients can be obtained by application of the final-value theorem. Indeed, the static gain
of a transfer function represents the sensitivity (or gradient) of the output with respect to the input.
McFarlane and Bacon [70] proposed to identify a linear ARX model and used the estimated static
gradient for online optimizing control. A pseudo-random binary sequence (PRBS) was superimposed
on each of the inputs to identify the ARX model. In the context of ISOPE, Becerra et al. [71] considered
the identification of a linear ARMAX model using PRBS signals. Bamberger and Isermann [72]
identified online a parametric second-order Hammerstein model by adding a pseudo-random
ternary sequence to each input. The gradient estimates were used for online optimizing control.
Garcia and Morari [73] used a similar approach, wherein the dynamic identification was performed in
a decentralized fashion. The same approach was also used by Golden and Ydstie [32] for estimating the
first- and second-order derivatives of a SISO plant. Zhang and Forbes [60] compared the optimizing
controllers proposed in [70] and [32] with ISOPE and the two-step approach.
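As an illustration of the final-value-theorem argument, the short sketch below (with arbitrary, assumed coefficients) computes the static gain of an identified ARX model y_t = Σ_i a_i y_{t−i} + Σ_j b_j u_{t−j}; this gain is the quantity such methods use as the steady-state gradient estimate of the output with respect to the input.

```python
import numpy as np

def arx_static_gain(a, b):
    """Static gain of y_t = sum_i a[i]*y_{t-i} + sum_j b[j]*u_{t-j}, via the final-value theorem."""
    return np.sum(b) / (1.0 - np.sum(a))

# Hypothetical identified coefficients: gain = (0.05 + 0.03) / (1 - 0.6 - 0.2) = 0.4
print(arx_static_gain(a=[0.6, 0.2], b=[0.05, 0.03]))
```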

Extremum-seeking control. The plant gradients can also be obtained using data-driven methods
as discussed in [74]. Among the most established techniques, ESC [16] suggests adding a dither
signal (e.g., a sine wave) to each of the inputs during transient operation. High-pass filtering of the
outputs removes the biases, while low-pass filtering combined with correlation allows one to compute the
gradients of the outputs with respect to the inputs. The main limitation of this approach is the speed of
convergence as it requires two time-scale separations, the first one between the filters and the periodic
excitation, and the second one between the periodic excitation and the controlled plant. Since all
inputs have to be perturbed independently, convergence to the plant gradients can be prohibitively
slow in the MIMO case. Recent efforts in the extremum-seeking community have led to a more
efficient framework, referred to as “estimation-based ESC” (by opposition to the previously described
perturbation-based ESC), which seems to be more efficient in terms of convergence speed [75].

Multiple units. Another dynamic perturbation approach relies on the availability of several identical
units operated in parallel [76]. The minimal number of required units is nu + 1, since one unit operates
with the inputs computed by the RTO algorithm, while a single input is perturbed in each of the
remaining nu units in parallel. The gradients can be computed online by finite differences between
units. Convergence time does not increase with the number of inputs. Obviously, this approach relies
heavily on the availability of several identical units, which occurs for instance when several units, such
as fuel-cell stacks, are arranged in parallel. Note that these units must be identical, although some
progress has been made toward handling non-identical units [77].

5.1.3. Bounds on Gradient Uncertainty


As discussed in [78], obtaining bounds on gradient estimates is often more challenging than
obtaining the estimates themselves. The bounds on gradient estimates should be linked with the
specific approach used to estimate the gradients. For the case of gradient estimates obtained by
FFD, CFD, and two design-of-experiment schemes, Brekelmans et al. [61] proposed a deterministic
quantification of the gradient error due to the finite-difference approximation (truncation error) and a
stochastic characterization due to measurement noise. The expressions obtained for the total gradient
error are convex functions of the step size, for which it is easy to compute for each scheme the step size
that minimizes the total gradient error. Following a similar approach, the gradient error associated
with the simplex gradient (50) was analyzed by Marchetti et al. [64].
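To give a flavor of this trade-off, consider a textbook-style calculation (a simplified illustration, not the exact expressions derived in [61]): if the measurement noise is bounded in magnitude by δ and the second derivative of Φ_p along e_j is bounded by M, then the error of the FFD estimate (45) satisfies

|(∇Φ_p,k)_j − ∂Φ_p/∂u_j(u_k)| ≤ (M/2) h + 2δ/h.

The right-hand side is convex in the step size h and is minimized at h* = 2 √(δ/M): too small a step amplifies the noise, whereas too large a step increases the truncation error.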
The gradient estimation error is defined as the difference between the estimated gradient and the
true plant gradient:

(ε_k)^T = ∇Φ_p,k − ∂Φ_p/∂u(u_k).   (52)

From (49) and (50), this error can be split into the truncation error ε_k^t and the measurement-noise
error ε_k^n,

ε_k = ε_k^t + ε_k^n,   (53)


with
(ε_k^t)^T = [ Φ_p(u_k) − Φ_p(u_{k−1}), . . . , Φ_p(u_k) − Φ_p(u_{k−n_u}) ](U_k)^{−1} − ∂Φ_p/∂u(u_k),   (54a)
(ε_k^n)^T = [ v_k − v_{k−1}, . . . , v_k − v_{k−n_u} ](U_k)^{−1}.   (54b)

Assuming that Φ p is twice continuously differentiable with respect to u, the norm of the gradient
error due to truncation can be bounded from above by

‖ε_k^t‖ ≤ d_Φ r_k,   (55)

where dΦ is an upper bound on the spectral radius of the Hessian of Φ p for u ∈ C , and rk is the radius
of the unique n-sphere that can be generated from the points uk , uk−1 , . . . , uk−nu :

r_k = r(u_k, u_{k−1}, . . . , u_{k−n_u}) = (1/2) ‖ [ (u_k − u_{k−1})^T(u_k − u_{k−1}), . . . , (u_k − u_{k−n_u})^T(u_k − u_{k−n_u}) ] (U_k)^{−1} ‖.   (56)

In turn, assuming that the noisy measurements Φ̃ p remain within the interval δΦ at steady state,
the norm of the gradient error due to measurement noise can be bounded from above:

‖ε_k^n‖ ≤ δ_Φ / l_min,k,   with   l_min,k = l_min(u_k, u_{k−1}, . . . , u_{k−n_u}),   (57)

where lmin,k is the minimal distance between all possible pairs of complement affine subspaces that can
be generated from the set of points Sk = {uk , uk−1 , . . . , uk−nu }. Using (55) and (57), the gradient-error
norm can be bounded from above by

‖ε_k‖ ≤ ‖ε_k^t‖ + ‖ε_k^n‖ ≤ E_k^Φ := d_Φ r_k + δ_Φ / l_min,k.   (58)

5.2. Computation of Gradient Modifiers

5.2.1. Modifiers from Estimated Gradients


The most straightforward way of computing the gradient modifiers is to evaluate them directly
from the estimated gradients, according to their definition (20):

(λ_k^Φ)^T = ∇Φ_p,k − ∂Φ/∂u(u_k),   (59a)
(λ_k^{G_i})^T = ∇G_p,i,k − ∂G_i/∂u(u_k),   i = 1, . . . , n_g,   (59b)
where, in principle, any of the methods described in Section 5.1 can be used to obtain the gradient
estimates ∇Φ p,k and ∇ G p,i,k .

5.2.2. Modifiers from Linear Interpolation or Linear Regression


Instead of using a sample set of steady-state operating points to estimate the gradients, it is
possible to use the same set to directly compute the gradient modifiers by linear interpolation or
linear regression. For instance, Marchetti [65] proposed to estimate the gradient modifiers by linear


interpolation using the set of n_u + 1 RTO points {u_{k−j}}_{j=0}^{n_u}. In addition to the plant vectors δΦ̃_p,k and
δ G̃ p,i,k given in (47) and (48), their model counterparts can be constructed at the kth RTO iteration:

δΦ_k := [ Φ(u_k) − Φ(u_{k−1}), ..., Φ(u_k) − Φ(u_{k−n_u}) ]^T ∈ IR^{n_u},   (60)
δG_{i,k} := [ G_i(u_k) − G_i(u_{k−1}), ..., G_i(u_k) − G_i(u_{k−n_u}) ]^T ∈ IR^{n_u},   i = 1, . . . , n_g.   (61)

The interpolation conditions for the modified cost function read:

Φ_m,k(u_{k−j}) = Φ(u_{k−j}) + ε_k^Φ + (λ_k^Φ)^T (u_{k−j} − u_k) = Φ̃_{p,k−j},   j = 1, . . . , n_u,   (62)

with ε_k^Φ = Φ̃_p,k − Φ(u_k). Equation (62) forms a linear system in terms of the gradient modifier and can
be written in matrix form as

(U_k)^T λ_k^Φ = δΦ̃_p,k − δΦ_k,   (63)

where Uk and δΦ̃ p,k are the quantities defined in (46) and (47), respectively. This system of equations
has a unique solution if the matrix Uk is nonsingular. The constraint gradient modifiers can be
computed in a similar way, which leads to the following expressions for the gradient modifiers [65]:

(λ_k^Φ)^T = (δΦ̃_p,k − δΦ_k)^T (U_k)^{−1},   (64a)
(λ_k^{G_i})^T = (δG̃_p,i,k − δG_{i,k})^T (U_k)^{−1},   i = 1, . . . , n_g.   (64b)

Here, the sample points consist of the current and nu most recent RTO points. However, it is also
possible to include designed perturbations in the sample set.
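The following sketch (hypothetical model, plant and operating points) computes the cost-gradient modifier from (64a); the constraint-gradient modifiers (64b) would be obtained in exactly the same way.

```python
import numpy as np

def interpolation_modifier(points, plant_costs, model_cost):
    """Gradient modifier lambda_k^Phi from (64a), given the current point points[0] and nu past points."""
    u_k = points[0]
    U = np.column_stack([u_k - p for p in points[1:]])                         # matrix (46)
    d_plant = np.array([plant_costs[0] - c for c in plant_costs[1:]])          # plant differences (47)
    d_model = np.array([model_cost(u_k) - model_cost(p) for p in points[1:]])  # model differences (60)
    return np.linalg.solve(U.T, d_plant - d_model)                             # solves (63), cf. (64a)

model_cost = lambda u: (u[0] - 0.5)**2 + (u[1] - 0.5)**2        # hypothetical model cost
plant_cost = lambda u: (u[0] - 1.0)**2 + 2.0 * (u[1] + 0.5)**2  # hypothetical plant cost
points = [np.array([0.2, 0.1]), np.array([0.0, 0.0]), np.array([0.3, -0.2])]
print(interpolation_modifier(points, [plant_cost(p) for p in points], model_cost))
```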
Figure 3 shows how the modified cost function approximates the plant cost function using MA
when (i) the points {uk , uk−1 } are used to obtain the simplex gradient estimate (50), which is then
used in (59a) to compute the gradient modifier, and (ii) the same points are used to compute the linear
interpolation gradient modifier (64a). It can be seen that the linear interpolation approach gives a
better approximation of the plant cost function, especially if the points are distant from each other.

Remark 7 (Linear regression). If there are more than nu + 1 sample points, it might not be possible to interpolate
all the points. In this case, it is possible to evaluate the gradient modifiers by linear least-square regression.

Remark 8 (Quadratic interpolation). In case of second-order MA, it is possible to compute the gradient and
Hessian modifiers by quadratic interpolation or quadratic least-squares regression. In this case, the number of
well-poised points required for complete quadratic interpolation is (nu + 1)(nu + 2)/2 (see [34] for different
measures of well poisedness that can be used to select or design the points included in the sample set).

5.2.3. Nested MA
A radically different approach for determining the gradient modifiers has been proposed
recently [79]. Rather than trying to estimate the plant gradients, it has been proposed to identify
the gradient modifiers directly via derivative-free optimization. More specifically, the RTO problem is
reformulated as two nested optimization problems, with the outer optimization computing the gradient
modifiers at low frequency and the inner optimization computing the inputs more frequently. We shall
use the indices j and k to denote the iterations of the outer and inner optimizations, respectively.


Figure 3. Approximation of the plant cost function. Top plot: Modified cost using gradient from FDA;
Bottom plot: Modified cost using linear interpolation.

The inner optimization problem (for j fixed) is formulated as follows:

u_{k+1} = arg min_u  Φ_m,k(u) := Φ(u) + (λ_j^Φ)^T u   (65a)
s.t.  G_m,i,k(u) := G_i(u) + ε_k^{G_i} + (λ_j^{G_i})^T (u − u_k) ≤ 0,   i = 1, . . . , n_g.   (65b)

Note the difference with (21a): for the inner optimization, the gradient modifiers are considered
as constant parameters that are updated by the outer optimization. The values of the converged inputs,
u_∞, and of the converged Lagrange multipliers associated with the constraints, μ_∞, depend on the
choice of the modifiers λ_j^Φ and λ_j^{G_i}. For the sake of notation, let us group these modifiers in the matrix
Λ_j := [ λ_j^Φ  λ_j^{G_1}  . . .  λ_j^{G_{n_g}} ].
Once the inner optimization has converged, the following unconstrained outer optimization
problem is solved:
Λ_{j+1} = arg min_Λ  Φ_p(u_∞(Λ)) + (μ_∞(Λ))^T G_p(u_∞(Λ)),   (66)

and the inner optimization problem is repeated for the modifiers Λ j+1 . Note that, since the functions
Φ p and G p are unknown, Problem (66) is conveniently solved using derivative-free optimization
techniques such as the Nelder-Mead simplex method [79]. Furthermore, it has been shown that
separating the MA problem into two nested optimization problems preserves the ability to reach a
KKT point of the plant [79]. However, since Nested MA often requires many iterations (for both the j
and k indices), it is characterized by potentially slow convergence.
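The sketch below conveys the structure of nested MA in the simplest possible setting: no process constraints (n_g = 0), so the outer objective reduces to the plant cost evaluated at the converged inner solution, and hypothetical quadratic model and plant costs. It is a conceptual illustration, not the implementation of [79].

```python
import numpy as np
from scipy.optimize import minimize

model_cost = lambda u: (u[0] - 0.5)**2 + (u[1] - 0.5)**2          # inaccurate model
plant_cost = lambda u: (u[0] - 1.0)**2 + 2.0 * (u[1] + 0.5)**2    # "true" plant (normally unknown)

def inner_solution(lam):
    """Inner problem (65) without constraints: converged inputs u_inf for fixed modifiers lam."""
    res = minimize(lambda u: model_cost(u) + lam @ u, x0=np.zeros(2), method="BFGS")
    return res.x

def outer_objective(lam):
    """Outer problem (66) without constraints: plant cost at the converged inner inputs."""
    return plant_cost(inner_solution(lam))

# Derivative-free outer search over the gradient modifiers (Nelder-Mead simplex)
res = minimize(outer_objective, x0=np.zeros(2), method="Nelder-Mead")
print("modifiers:", res.x, "inputs:", inner_solution(res.x))   # inputs approach the plant optimum (1, -0.5)
```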


5.3. Dual MA Schemes


Following the idea of the dual ISOPE algorithm [7,63], dual MA schemes estimate the gradients
based on the measurements obtained at the current and past operating points by adding a duality
constraint in the modified optimization problem. This constraint is used to ensure sufficient variability
in the data for estimating the gradients reliably. Several dual MA schemes have been proposed that
differ in the model modification introduced, the approach used for estimating the gradients, and the
choice of the duality constraint.
The following duality constraint is used to position the next RTO point with respect to the nu
most recent points {uk , uk−1 , . . . , uk−nu +1 }:

Dk (u) := D(u, uk , uk−1 , . . . , uk−nu +1 ) ≤ 0 . (67)

To compute the simplex gradient (50), or the interpolation modifiers (64) at the next RTO
point uk+1 , we require the matrix Uk+1 defined in (46) to be nonsingular. Assuming that the last
nu − 1 columns of Uk+1 are linearly independent, they constitute a basis for the unique hyperplane
H_k = {u ∈ IR^{n_u} : n_k^T u = b_k}, with b_k = n_k^T u_k, that contains the points {u_k, u_{k−1}, . . . , u_{k−n_u+1}}. Here,
nk is a vector that is orthogonal to the hyperplane. Hence, the matrix Uk+1 will be nonsingular if uk+1
does not belong to Hk [65]. For this reason, duality constraints produce two disjoint feasible regions,
one on each side of the hyperplane Hk .
Dual MA schemes typically solve two modified optimization problems that include the duality
constraint, one for each side of the hyperplane H_k. For the half space n_k^T u > b_k, we solve:

u_{k+1}^+ = arg min_u  Φ_m,k(u) = Φ(u) + ε_k^Φ + (λ_k^Φ)^T (u − u_k)   (68)
s.t.  G_m,i,k(u) = G_i(u) + ε_k^{G_i} + (λ_k^{G_i})^T (u − u_k) ≤ 0,   i = 1, . . . , n_g,
      D_k(u) ≤ 0,   n_k^T u ≥ b_k,

while for the half space n_k^T u < b_k, we solve:

u_{k+1}^− = arg min_u  Φ_m,k(u) = Φ(u) + ε_k^Φ + (λ_k^Φ)^T (u − u_k)   (69)
s.t.  G_m,i,k(u) = G_i(u) + ε_k^{G_i} + (λ_k^{G_i})^T (u − u_k) ≤ 0,   i = 1, . . . , n_g,
      D_k(u) ≤ 0,   n_k^T u ≤ b_k.

The next operating point uk+1 is chosen as the solution that minimizes Φm,k (u):

u_{k+1} = arg min_u  Φ_m,k(u),   s.t.  u ∈ {u_{k+1}^+, u_{k+1}^−}.
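To fix ideas, the sketch below implements one generic dual MA step for a two-input example: the modified cost is minimized on each side of the hyperplane H_k, and the better candidate is retained. The duality constraint used here, a minimum distance Δ from H_k, is only a simplified stand-in for the constraints (72) and (75) discussed below; the model, modifiers and data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

u_k, u_km1 = np.array([0.2, 0.1]), np.array([0.0, 0.0])     # current and previous points (nu = 2)
lam, eps = np.array([-0.8, 1.5]), 0.0                       # first- and zeroth-order modifiers (assumed given)
model_cost = lambda u: (u[0] - 0.5)**2 + (u[1] - 0.5)**2
mod_cost = lambda u: model_cost(u) + eps + lam @ (u - u_k)  # modified cost (constraints G omitted)

d = u_k - u_km1                                   # direction spanned by the past point
n = np.array([-d[1], d[0]]) / np.linalg.norm(d)   # normal n_k of the hyperplane H_k (2-D case)
b, delta = n @ u_k, 0.05                          # offset b_k and minimum distance from H_k

candidates = []
for sign in (+1.0, -1.0):                         # one problem per half space, cf. (68) and (69)
    res = minimize(mod_cost, x0=u_k + sign * delta * n, method="SLSQP",
                   constraints=[{"type": "ineq",
                                 "fun": lambda u, s=sign: s * (n @ u - b) - delta}])
    candidates.append(res.x)
u_next = min(candidates, key=mod_cost)            # keep the candidate with the lower modified cost
print(u_next)
```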

Several alternative dual MA algorithms have been proposed in the literature, which we briefly
describe next:
(a) The original dual ISOPE algorithm [7,63] estimates the gradients by FDA according to (50) and
introduces a constraint that prevents ill-conditioning in the gradient estimation. At the kth RTO
iteration, the matrix

Ūk (u) := [ u − uk , u − uk−1 , ..., u − uk−nu +1 ] ∈ IRnu ×nu (70)

is constructed. Good conditioning is achieved by adding the lower bound ϕ on the inverse of the
condition number of Ūk (u):

1/κ_k(u) = σ_min(Ū_k(u)) / σ_max(Ū_k(u)) ≥ ϕ,   (71)


where σmin and σmax denote the smallest and largest singular values, respectively. This bound is
enforced by defining the duality constraint

Dk (u) = ϕκk (u) − 1 ≤ 0, (72)

which is used in (68) and (69).


(b) Gao and Engell [14] proposed a MA scheme that (i) estimates the gradients from the current
and past operating points according to (50), and (ii) enforces the ill-conditioning duality
constraint (72). However, instead of including the duality constraint in the optimization problem,
it is used to decide whether an additional input perturbation is needed. This perturbation
is obtained by minimizing the condition number κk (u) subject to the modified constraints.
The approach was labeled Iterative Gradient-Modification Optimization (IGMO) [80].
(c) Marchetti et al. [64] considered the dual MA scheme that estimates the gradients from the current
and past operating points according to (50). The authors showed that the ill-conditioning bound
(71) has no direct relationship with the accuracy of the gradient estimates, and proposed to upper
bound the gradient-error norm of the Lagrangian function:

 L (u) ≤ EU , (73)

where  L is the Lagrangian gradient error. In order to compute the upper bound as a function
of u, we proceed as in (56)–(58) and define the radius rk (u) = r (u, uk , . . . , uk−nu +1 ) and the
minimal distance lmin,k (u) = lmin (u, uk , . . . , uk−nu +1 ). This allows enforcing (73) by selecting u
such that,

E_k^L(u) := d_L r_k(u) + δ_L / l_min,k(u) ≤ E^U,   (74)

where d L is an upper bound on the spectral radius of the Hessian of the Lagrangian function, and
δ L is the range of measurement error in the Lagrangian function resulting from measurement
noise in the cost and constraints [64]. This bound is enforced by defining the duality constraint
used in (68) and (69) as

Dk (u) = EkL (u) − EU ≤ 0. (75)

(d) Rodger and Chachuat [67] proposed a dual MA scheme based on modifying the output variables
as in Section 4.2.1. The gradients of the output variables are estimated using Broyden’s formula
(51). The authors show that, with Broyden’s approach, the MA scheme (28) may fail to reach a
plant KKT point upon convergence due to inaccurate gradient estimates and measurement noise.
This may happen if the gradient estimates are not updated repeatedly in all input directions and
if the step uk+1 − uk is too small. A duality constraint is proposed for improving the gradient
estimates obtained by Broyden’s approach.
(e) Marchetti [65] proposed another dual MA scheme, wherein the gradient modifiers are obtained
by linear interpolation according to (64). Using this approach, the modified cost and constraint
functions approximate the plant in a larger region. In order to limit the approximation error
in the presence of noisy measurements, a duality constraint was introduced that limits the
Lagrangian gradient error for at least one point belonging to the simplex with the extreme points
{u, uk , . . . , uk−nu +1 }. This duality constraint produces larger feasible regions than (75) for the
same upper bound EU , and therefore allows larger input moves and faster convergence.


6. Applications
As MA has many useful features, it is of interest to investigate its potential for application. Table 1
lists several case studies available in the literature and compares them in terms of the MA variant that
is used, the way the gradients are estimated and the number of input variables. Although this list is
not exhaustive, it provides an overview of the current situation and gives a glimpse of the potential
ahead. From this table, several interesting conclusions can be drawn:

• About half of the available studies deal with chemical reactors, both continuous and discontinuous.
In the case of discontinuous reactors, the decision variables are input profiles that can be
parameterized to generate a larger set of constant input parameters. The optimization is then
performed on a run-to-run basis, with each iteration hopefully resulting in improved operation.
• One sees from the list of problems in Table 1 that the Williams-Otto reactor seems to be the
benchmark problem for testing MA schemes. The problem is quite challenging due to the presence of
significant structural plant-model mismatch. Indeed, the plant is simulated as a 3-reaction system,
while the model includes only two reactions with adjustable kinetic parameters. Despite very
good model fit (prediction of the simulated concentrations), the RTO techniques that cannot
handle structural uncertainty, such as the two-step approach, fail to reach the plant optimum. In
contrast, all 6 MA variants converge to the plant optimum. The differentiation factor is then the
convergence rate, which is often related to the estimation of plant gradients.
• Most MA schemes use FDA to estimate gradients. In the case of FFD, one needs to perturb each
input successively; this is a time-consuming operation since, at the current operating point, the
system must be perturbed nu times, each time waiting for the plant to reach steady state. Hence,
FDA based on past and current operating points is clearly the preferred option. However, to
ensure that sufficient excitation is present to compute accurate gradients, the addition of a duality
constraint is often necessary. Furthermore, both linear and nonlinear function approximations
have been proposed with promising results. An alternative is to use the concept of neighboring
extremals (NE), which works well when the uncertainty is of parametric nature (because the
NE-based gradient law assumes that the variations are due to parametric uncertainties). Note that
two approaches do not use an estimate of the plant gradient: CA uses only zeroth-order modifiers
to drive the plant to the active constraints, while nested MA circumvents the computation of
plant gradients by solving an additional optimization problem. Note also that IGMO can be
classified as dual MA, with the peculiarity that the gradients are estimated via FDA with added
perturbations when necessary.
• The typical number of input variables is small (nu < 5), which seems to be related to the difficulties
of gradient estimation. When the number of inputs is much larger, it might be useful to investigate
whether it is important to correct the model in all input directions, a question that is nicely
addressed by D-MA.
• Two applications, namely, the path-following robot and the power kite, deal with the optimization
of dynamic periodic processes. Each period (or multiple periods) is considered as a run, the input
profiles are parameterized, and the operation is optimized on a run-to-run basis.
• Five of these case studies have dealt with experimental implementation, four on lab-scale setups
and one at the industrial level. There is clearly room for more experimental implementations and,
hopefully, also significant potential for improvements ahead!

Table 1. Overview of MA case studies (FFD: forward finite difference with input perturbation; FDA: finite-difference approximation based on past and current
operating points; NE: neighboring extremals; CA: Constraint Adaptation; MAWQA: MA with quadratic approximation; IGMO: iterative gradient-modification
optimization, where gradients are estimated via FDA with added perturbation when necessary; in bold: experimental implementation).

Problems | MA Variant | Ref. | Gradient Est. | # of Inputs | Remarks
Williams-Otto CSTR (benchmark for MA) | MA | [64] | FFD | 2 | basic MA algorithm
  | dual MA | [64,65] | FDA, lin. approx. | 2 | addition of a constraint on gradient accuracy
  | 2nd-order MA | [31] | FFD | 2 | addition of second-order correction terms
  | MAWQA | [36,81] | quad. approx. | 2 | gradient computed via quadratic approximation
  | nested MA | [82] | — | 2 | additional optimization to by-pass gradient calculation
  | various MA | [80] | various | 2 | IGMO, dual MA, nested MA and MAWQA
  | convex MA | [21] | perfect | 2 | use of convex model
CSTR (with pyrrole reaction) | transient MA | [54] | NE-based | 2 | use of transient measurements for static optimization
  | MAWQA | [83] | quad. approx. | 2 | gradient computed via quadratic approximation
Semi-batch reactors | dual, nested MA | [79] | FDA, — | 3 | 4 reactions, run-to-run scheme
  | MA | [84] | FDA | 11 | 1 reaction, run-to-run scheme
Distillation column | various MA | [85,86] | various | 2 | controlled column, dual, nested, transient measurements
Batch chromatography | various MA | [14,87] | various | 2 | IGMO, dual MA and MAWQA, run-to-run schemes
Electro-chromatography | dual MA | [88] | IGMO | 2 | Continuous process
Chromatographic separation | dual MA | [89] | IGMO | 3 | Continuous multi-column solvent gradient purification
Sugar and ethanol plant | MA | [56] | FDA | 6 | heat and power system
Leaching process | MA | [90] | FDA | 4 | 4 CSTR in series, industrial implementation
Hydroformylation process | MAWQA | [91] | quad. approx. | 4 | reactor and decanter with recycle
Flotation column | various MA | [92] | FFD, FDA, — | 3 | implemented on lab-scale column
Three-tank system | MA | [15]; [79] | FFP; FDA, — | 2 | implemented on lab-scale setup
Solid-oxide fuel cell | CA | [93]; [53] | — | 3; 2 | implemented on lab fuel-cell stack
Parallel compressors | MA | [94] | FFP | 2 & 6 | parallel structure exploited for gradient estimation
Path-following robot | CA | [95] | — | 700 | periodic operation, enforcing minimum-time motion
Power kite | D-MA | [28], [96] | FDA | 40 → 2 | periodic operation, implemented on small-scale kite

7. Conclusions
This section concludes the paper with a brief discussion of open issues and a few final words.

7.1. Open Issues


As illustrated in this paper, significant progress has been made recently on various aspects of
MA. Yet, there are still unresolved issues with respect to both the methodology and applications.
In particular, it is desirable for RTO schemes to exhibit the following features [38]:

i plant optimality and feasibility upon convergence,


ii acceptable number of RTO iterations, and
iii plant feasibility throughout the optimization process.

These features and related properties are briefly discussed next.

Feasibility of all RTO iterates. By construction, MA satisfies Feature (i), cf. Theorem 1. However,
Theorem 1 does not guarantee feasibility of the successive RTO iterates, nor does it imply anything
regarding convergence speed. The Sufficient Conditions for Feasibility and Optimality (SCFO)
presented in [35,97] can, in principle, be combined with any RTO scheme and enforce Feature (iii).
However, SCFO often fails to enforce sufficiently fast convergence (cf. the examples provided in [35],
Section 4.4) because of the necessity to upper bound uncertain plant constraints using Lipschitz
constants. Hence, it is fair to search for other approaches that can ensure plant feasibility of the
successive RTO iterates. One intuitive way is to replace the first-order (Lipschitz) upper bounds in [35]
by second-order upper-bounding functions. For purely data-driven RTO, it has been shown numerically
that this outperforms Lipschitz bounds [38]. Furthermore, the handling of plant infeasibility in dual
MA has been discussed in [43]. New results given in [98] demonstrate that the combination of convex
upper-bounding functions with the usual first-order MA correction terms implies optimality upon
convergence (Feature (i)) and feasibility for all iterates (Feature (iii)). However, a conclusive analysis of
the trade-off between convergence speed and the issue of plant feasibility has not been conducted yet.
Using the numerical optimization terminology, one could say that it remains open how one chooses
the step length in RTO, when plant feasibility and fast convergence are both important.

Robustness to gradient uncertainty. The implementation of MA calls for the estimation of plant
gradients. At the same time, as estimated gradients are prone to errors, it is not clear to which extent
MA is robust to this kind of uncertainty. Simulation studies such as [15,64] indicate that MA is
reasonably robust with respect to gradient uncertainty. For data-driven RTO schemes inspired by
SCFO [35], robustness to multiplicative cost gradient uncertainty has been shown [37]. However, the
assumption of purely multiplicative gradient uncertainty is hard to justify in practice, as this would
imply exact gradient information at any unconstrained local optimum of the plant.

Realistically, one has to assume that gradient uncertainty is additive and bounded. First steps
towards a strictly feasible MA scheme can be found in [99], wherein convex upper-bounding functions
similar to [98] are combined with dual constraints from Section 5.3 [64].

Exploitation of plant structure for gradient estimation. In the presentation of the different MA
variants, it is apparent that the physical structure of the plant (parallel or serial connections, recycles,
weak and strong couplings) has not been the focus of investigation. At the same time, since many
real-world RTO applications possess a specific structure, it is fair to ask whether one can exploit the
physical interconnection structure to facilitate and possibly improve gradient estimation.

Parallel structures with identical units may be well suited for gradient estimation [76]. Recently, it
has been observed that, under certain assumptions, parallel structures of heterogeneous units can also


be exploited for gradient estimation [94]. A formal and general investigation of the interplay between
plant structure and gradient estimation remains open.

RTO of interconnected processes. There is an evident similarity between many RTO schemes and
numerical optimization algorithms. For example, one might regard modifier adaptation in its simplest
form of Section 4.1 as an experimental gradient-descent method; likewise, the scheme of Section 4.3.2 is
linked to trust-region algorithms, and Remark 3 has pointed toward a SQP-like scheme. Hence, one
might wonder whether the exploitation of structures in the spirit of distributed NLP algorithms will
yield benefits to RTO and MA schemes. The early works on distributed ISOPE methods [7,100,101]
point into such a direction. Furthermore, a recent paper by Wenzel et al. [102] argues for distributed
plant optimization for the sake of confidentiality. In the context of MA, it is interesting to note that
different distributed MA algorithms have recently been proposed [103,104]. Yet, there is no common
consensus on the pros and cons of distributed RTO schemes.

Integration with advanced process control. Section 4.4.1 discussed the application of MA to
controlled plants, and highlighted that the implementation of RTO results by means of MPC can be
used to prevent constraint violations. Section 4.4.3 reported on the use of transient data for the purpose
of gradient estimation for static MA. In general, it seems that the use of transient measurements
in RTO either requires specific dynamic properties of the underlying closed-loop system or a tight
integration of RTO and advanced process control. As many industrial multivariable process control
tasks are nowadays solved via MPC, this can be narrowed down to the integration of MA and MPC.
Yet, there remain important questions: (i) How to exploit the properties of MPC for RTO or MA?
(ii) Can one use the gradient information obtained for MA in the MPC layer? While there are answers
to Question (i) [45,105]; Question (ii) remains largely unexplored. This also raises the research question
of how to design (static) MA and (dynamic) process control in a combined fashion or, expressed
differently, how to extend the MA framework toward dynamic RTO problems. The D-MA approach
sketched in Section 4.2.3 represents a first promising step for periodic and batch dynamic processes.
Yet, the close coupling between MA and economic MPC schemes might bring about interesting new
research and implementation directions [106–108].

Model-based or data-driven RTO? The fact that all MA properties also hold for the trivial case of
no model gives rise to the fundamental question regarding the role of models in RTO. As shown
in Section 4, models are not needed in order to enforce plant optimality upon convergence in MA.
Furthermore, models bring about the model-adequacy issue discussed in Section 4.1.3. At the same
time, industrial practitioners often spend a considerable amount of time on model building, parameter
estimation and model validation. Hence, from the industrial perspective, there is an evident expectation
that the use of models should pay off in RTO. From the research perspective, this gives rise to the
following question: How much should we rely on an uncertain model and how much on available plant
data? In other words, what is a good tuning knob between model-based and data-driven RTO?

7.2. Final Words


This overview paper has discussed real-time optimization of uncertain plants using Modifier
Adaptation. It has attempted to present the main developments in a comprehensive and unified way
that highlights the main differences between the schemes. Yet, as in any review, the present one is also
a mere snapshot taken at a given time. As we tried to sketch it, there remain several open issues, some
of which will be crucial for the success of modifier-adaptation schemes in industrial practice.
Author Contributions: All authors have worked on all parts of the paper.
Conflicts of Interest: The authors declare no conflict of interest.


References
1. Chachuat, B.; Srinivasan, B.; Bonvin, D. Adaptation strategies for real-time optimization. Comput. Chem. Eng.
2009, 33, 1557–1567.
2. Chen, C.Y.; Joseph, B. On-line optimization using a two-phase approach: An application study. Ind. Eng.
Chem. Res. 1987, 26, 1924–1930.
3. Darby, M.L.; Nikolaou, M.; Jones, J.; Nicholson, D. RTO: An overview and assessment of current practice.
J. Process Control 2011, 21, 874–884.
4. Jang, S.-S.; Joseph, B.; Mukai, H. On-line optimization of constrained multivariable chemical processes.
AIChE J. 1987, 33, 26–35.
5. Marlin, T.E.; Hrymak, A.N. Real-Time Operations Optimization of Continuous Processes. AIChE Symp.
Ser.—CPC-V 1997, 93, 156–164.
6. Forbes, J.F.; Marlin, T.E.; MacGregor, J.F. Model adequacy requirements for optimizing plant operations.
Comput. Chem. Eng. 1994, 18, 497–510.
7. Brdyś, M.; Tatjewski, P. Iterative Algorithms for Multilayer Optimizing Control; Imperial College Press: London
UK, 2005.
8. Roberts, P.D. An algorithm for steady-state system optimization and parameter estimation. J. Syst. Sci. 1979,
10, 719–734.
9. Roberts, P.D. Coping with model-reality differences in industrial process optimisation—A review of
integrated system optimisation and parameter estimation (ISOPE). Comput. Ind. 1995, 26, 281–290.
10. Roberts, P.D.; Williams, T.W. On an algorithm for combined system optimisation and parameter estimation.
Automatica 1981, 17, 199–209.
11. Bazaraa, M.S.; Sherali, H.D.; Shetty, C.M. Nonlinear Programming: Theory and Algorithms, 3rd ed.;
John Wiley and Sons: Hoboken, NJ, USA, 2006.
12. Chachuat, B.; Marchetti, A.; Bonvin, D. Process optimization via constraints adaptation. J. Process Control
2008, 18, 244–257.
13. Forbes, J.F.; Marlin, T.E. Model accuracy for economic optimizing controllers: The bias update case.
Ind. Eng. Chem. Res. 1994, 33, 1919–1929.
14. Gao, W.; Engell, S. Iterative set-point optimization of batch chromatography. Comput. Chem. Eng. 2005, 29,
1401–1409.
15. Marchetti, A.; Chachuat, B.; Bonvin, D. Modifier-adaptation methodology for real-time optimization.
Ind. Eng. Chem. Res. 2009, 48, 6022–6033.
16. Krstic, M.; Wang, H.-H. Stability of extremum seeking feedback for general nonlinear dynamic systems.
Automatica 2000, 36, 595–601.
17. François, G.; Srinivasan, B.; Bonvin, D. Use of measurements for enforcing the necessary conditions of
optimality in the presence of constraints and uncertainty. J. Process Control 2005, 15, 701–712.
18. Srinivasan, B.; Biegler, L.T.; Bonvin, D. Tracking the necessary conditions of optimality with changing set of
active constraints using a barrier-penalty function. Comput. Chem. Eng. 2008, 32, 572–579.
19. Gros, S.; Srinivasan, B.; Bonvin, D. Optimizing control based on output feedback. Comput. Chem. Eng. 2009,
33, 191–198.
20. Skogestad, S. Self-optimizing control: The missing link between steady-state optimization and control.
Comput. Chem. Eng. 2000, 24, 569–575.
21. François, G.; Bonvin, D. Use of convex model approximations for real-time optimization via modifier
adaptation. Ind. Eng. Chem. Res. 2013, 52, 11614–11625.
22. Gill, P.E.; Murray, W.; Wright, M.H. Practical Optimization; Academic Press: London, UK, 2003.
23. Brdyś, M.; Roberts, P.D. Convergence and optimality of modified two-step algorithm for integrated system
optimisation and parameter estimation. Int. J. Syst. Sci. 1987, 18, 1305–1322.
24. Zhang, H.; Roberts, P.D. Integrated system optimization and parameter estimation using a general form of
steady-state model. Int. J. Syst. Sci. 1991, 22, 1679–1693.
25. Brdyś, M.; Chen, S.; Roberts, P.D. An extension to the modified two-step algorithm for steady-state system
optimization and parameter estimation. Int. J. Syst. Sci. 1986, 17, 1229–1243.
26. Tatjewski, P. Iterative Optimizing Set-Point Control—The Basic Principle Redesigned. In Proceedings of the
15th IFAC World Congress, Barcelona, Spain, 21–26 July 2002.


27. Forbes, J.F.; Marlin, T.E. Design cost: A systematic approach to technology selection for model-based
real-time optimization systems. Comput. Chem. Eng. 1996, 20, 717–734.
28. Costello, S.; François, G.; Bonvin, D. Directional Real-Time Optimization Applied to a Kite-Control
Simulation Benchmark. In Proceedings of the European Control Conference, Linz, Austria, 15–17 July 2015;
pp. 1594–1601.
29. Costello, S.; François, G.; Bonvin, D. A directional modifier-adaptation algorithm for real-time optimization.
J. Process Control 2016, 39, 64–76.
30. Singhal, M.; Marchetti, A.; Faulwasser, T.; Bonvin, D. Improved Directional Derivatives for
Modifier-Adaptation Schemes. In Proceedings of the 20th IFAC World Congress, Toulouse, France,
2017, submitted.
31. Faulwasser, T.; Bonvin, D. On the Use of Second-Order Modifiers for Real-Time Optimization. In Proceedings
of the 19th IFAC World Congress, Cape Town, South Africa, 24–29 August 2014.
32. Golden, M.P.; Ydstie, B.E. Adaptive extremum control using approximate process models. AIChE J. 1989, 35,
1157–1169.
33. Nocedal, J.; Wright, S.J. Numerical Optimization; Springer: New York, NY, USA, 1999.
34. Conn, A.R.; Scheinberg, K.; Vicente, L.N. Introduction to Derivative-Free Optimization; Cambridge University
Press: Cambridge, UK, 2009.
35. Bunin, G.A.; François, G.; Bonvin, D. Sufficient Conditions for Feasibility and Optimality of Real-Time Optimization
Schemes—I. Theoretical Foundations, 2013; ArXiv:1308.2620 [math.oc].
36. Gao, W.; Wenzel, S.; Engell, S. A reliable modifier-adaptation strategy for real-time optimization.
Comput. Chem. Eng. 2016, 91, 318–328.
37. Singhal, M.; Faulwasser, T.; Bonvin, D. On handling cost gradient uncertainty in real-time optimization.
IFAC-PapersOnLine. IFAC Symp. Adchem. 2015, 48, 176–181.
38. Singhal, M.; Marchetti, A.G.; Faulwasser, T.; Bonvin, D. Real-Time Optimization Based on Adaptation of
Surrogate Models. In Proceedings of the IFAC Symposium on DYCOPS, Trondheim, Norway, 6–8 June 2016;
pp. 412–417.
39. Bertsekas, D. Nonlinear Programming, 2nd ed.; Athena Scientific: Belmont, MA, USA, 1999.
40. Bunin, G.A. On the equivalence between the modifier-adaptation and trust-region frameworks.
Comput. Chem. Eng. 2014, 71, 154–157.
41. Biegler, L.T.; Lang, Y.; Lin, W. Multi-scale optimization for process systems engineering. Comput. Chem. Eng.
2014, 60, 17–30.
42. Tatjewski, P.; Brdyś, M.A.; Duda, J. Optimizing control of uncertain plants with constrained feedback
controlled outputs. Int. J. Control 2001, 74, 1510–1526.
43. Navia, D.; Martí, R.; Sarabia, D.; Gutiérrez, G.; de Prada, C. Handling Infeasibilities in Dual
Modifier-Adaptation Methodology for Real-Time Optimization. In Proceedings of the IFAC Symposium
ADCHEM, Singapore, Singapore, 10–13 July 2012; pp. 537–542.
44. Qin, S.J.; Badgwell, T.A. A survey of industrial model predictive control technology. Control Eng. Pract. 2003,
11, 733–764.
45. Marchetti, A.; Luppi, P.; Basualdo, M. Real-Time Optimization via Modifier Adaptation Integrated
with Model Predictive Control. In Proceedings of the 18th IFAC World Congress, Milan, Italy,
28 August–2 September 2011.
46. Muske, K.R.; Rawlings, J.B. Model predictive control with linear models. AIChE J. 1993, 39, 262–287.
47. Ying, C.-M.; Joseph, B. Performance and stability analysis of LP-MPC and QP-MPC cascade control systems.
AIChE J. 1999, 45, 1521–1534.
48. Marchetti, A.G.; Ferramosca, A.; González, A.H. Steady-state target optimization designs for integrating
real-time optimization and model predictive control. J. Process Control 2014, 24, 129–145.
49. Costello, S.; François, G.; Bonvin, D.; Marchetti, A. Modifier Adaptation for Constrained Closed-Loop
Systems. In Proceedings of the 19th IFAC World Congress, Cape Town, South Africa, 24–29 August 2014;
pp. 11080–11086.
50. François, G.; Costello, S.; Marchetti, A.G.; Bonvin, D. Extension of modifier adaptation for controlled plants
using static open-loop models. Comput. Chem. Eng. 2016, 93, 361–371.


51. Costello, S.; François, G.; Srinivasan, B.; Bonvin, D. Modifier Adaptation for Run-to-Run
Optimization of Transient Processes. In Proceedings of the 18th IFAC World Congress, Milan, Italy,
28 August–2 September 2011.
52. Marchetti, A.; Chachuat, B.; Bonvin, D. Batch Process Optimization via Run-to-Run Constraints Adaptation.
In Proceedings of the European Control Conference, Kos, Greece, 2–5 July 2007.
53. Bunin, G.A.; Vuillemin, Z.; François, G.; Nakato, A.; Tsikonis, L.; Bonvin, D. Experimental real-time
optimization of a solid fuel cell stack via constraint adaptation. Energy 2012, 39, 54–62.
54. François, G.; Bonvin, D. Use of transient measurements for the optimization of steady-state performance via
modifier adaptation. Ind. Eng. Chem. Res. 2014, 53, 5148–5159.
55. de Avila Ferreira, T.; François, G.; Marchetti, A.G.; Bonvin, D. Use of Transient Measurements for Static
Real-Time Optimization via Modifier Adaptation. In Proceedings of the 20th IFAC World Congress, Toulouse,
France, 2017, submitted.
56. Serralunga, F.J.; Mussati, M.C.; Aguirre, P.A. Model adaptation for real-time optimization in energy systems.
Ind. Eng. Chem. Res. 2013, 52, 16795–16810.
57. Bunin, G.A.; François, G.; Bonvin, D. Exploiting Local Quasiconvexity for Gradient Estimation in
Modifier-Adaptation Schemes. In Proceedings of the American Control Conference, Montréal, QC, Canada,
27–29 June 2012; pp. 2806–2811.
58. Mansour, M.; Ellis, J.E. Comparison of methods for estimating real process derivatives in on-line optimization.
App. Math. Model. 2003, 27, 275–291.
59. Srinivasan, B.; François, G.; Bonvin, D. Comparison of gradient estimation methods for real-time
optimization. Comput. Aided Chem. Eng. 2011, 29, 607–611.
60. Zhang, Y.; Forbes, J.F. Performance analysis of perturbation-based methods for real-time optimization. Can. J.
Chem. Eng. 2006, 84, 209–218.
61. Brekelmans, R.C.M.; Driessen, L.T.; Hamers, H.L.M.; den Hertog, D. Gradient estimation schemes for noisy
functions. J. Optim. Theory Appl. 2005, 126, 529–551.
62. Engl, H.W.; Hanke, M.; Neubauer, A. Regularization of Inverse Problems; Kluwer Academic Publishers:
Dordrecht, The Netherlands, 2000.
63. Brdyś, M.; Tatjewski, P. An Algorithm for Steady-State Optimizing Dual Control of Uncertain Plants.
In Proceedings of the 1st IFAC Workshop on New Trends in Design of Control Systems, Smolenice, Slovakia,
7–10 September 1994; pp. 249–254.
64. Marchetti, A.; Chachuat, B.; Bonvin, D. A dual modifier-adaptation approach for real-time optimization.
J. Process Control 2010, 20, 1027–1037.
65. Marchetti, A.G. A new dual modifier-adaptation approach for iterative process optimization with inaccurate
models. Comput. Chem. Eng. 2013, 59, 89–100.
66. Roberts, P.D. Broyden Derivative Approximation in ISOPE Optimising and Optimal Control Algorithms.
In Proceedings of the 11th IFAC Workshop on Control Applications of Optimisation, St Petersburg, Russia,
3–6 July 2000; pp. 283–288.
67. Rodger, E.A.; Chachuat, B. Design Methodology of Modifier Adaptation for On-Line Optimization of
Uncertain Processes. In Proceedings of the IFAC World Congress, Milano, Italy, 28 August–2 September 2011;
pp. 4113–4118.
68. Mendoza, D.F.; Alves Graciano, J.E.; dos Santos Liporace, F.; Carrillo Le Roux, G.A. Assessing the reliability
of different real-time optimization methodologies. Can. J. Chem. Eng. 2016, 94, 485–497.
69. Savitzky, A.; Golay, M.J.E. Smoothing and differentiation of data by simplified least squares procedures.
Anal. Chem. 1964, 36, 1627–1639.
70. McFarlane, R.C.; Bacon, D.W. Empirical strategies for open-loop on-line optimization. Can. J. Chem. Eng.
1989, 84, 209–218.
71. Becerra, V.M.; Roberts, P.D.; Griffiths, G.W. Novel developments in process optimisation using predictive
control. J. Process Control 1998, 8, 117–138.
72. Bamberger, W.; Isermann, R. Adaptive on-line steady state optimization of slow dynamic processes.
Automatica 1978, 14, 223–230.
73. Garcia, C.E.; Morari, M. Optimal operation of integrated processing systems. Part I: Open-loop on-line
optimizing control. AIChE J. 1981, 27, 960–968.

33
Processes 2016, 4, 55

74. François, G.; Srinivasan, B.; Bonvin, D. Comparison of six implicit real-time optimization schemes. J. Eur.
Syst. Autom. 2012, 46, 291–305.
75. Guay, M.; Burns, D.J. A Comparison of Extremum Seeking Algorithms Applied to Vapor Compression
System Optimization. In Proceedings of the American Control Conference, Portland, OR, USA, 4–6 June 2014;
pp. 1076–1081.
76. Srinivasan, B. Real-time optimization of dynamic systems using multiple units. Int. J. Robust Nonlinear Control
2007, 17, 1183–1193.
77. Woodward, L.; Perrier, M.; Srinivasan, B. Improved performance in the multi-unit optimization method
with non-identical units. J. Process Control 2009, 19, 205–215.
78. Bunin, G.A.; François, G.; Bonvin, D. From discrete measurements to bounded gradient estimates: A look at
some regularizing structures. Ind. Eng. Chem. Res. 2013, 52, 12500–12513.
79. Navia, D.; Briceño, L.; Gutiérrez, G.; de Prada, C. Modifier-adaptation methodology for real-time
optimization reformulated as a nested optimization problem. Ind. Eng. Chem. Res. 2015, 54, 12054–12071.
80. Gao, W.; Wenzel, S.; Engell, S. Comparison of Modifier Adaptation Schemes in Real-Time Optimization.
In Proceedings of the IFAC Symposium on ADCHEM, Whistler, BC, Canada, 7–10 June 2015; pp. 182–187.
81. Wenzel, S.; Gao, W.; Engell, S. Handling Disturbances in Modifier Adaptation with Quadratic Approximation.
In Proceedings of the 16th IFAC Workshop on Control Applications of Optimization, Garmisch-Partenkirchen,
Germany, 6–9 October 2015.
82. Navia, D.; Gutiérrez, G.; de Prada, C. Nested Modifier-Adaptation for RTO in the Otto-Williams Reactor.
In Proceedings of the IFAC Symposium DYCOPS, Mumbai, India, 18–20 December 2013.
83. Gao, W.; Engell, S. Using Transient Measurements in Iterative Steady-State Optimizing Control.
In Proceedings of the ESCAPE-26, Portorož, Slovenia, 12–15 June 2016.
84. Jia, R.; Mao, Z.; Wang, F. Self-correcting modifier-adaptation strategy for batch-to-batch optimization based
on batch-wise unfolded PLS model. Can. J. Chem. Eng. 2016, 94, 1770–1782.
85. Rodriguez-Blanco, T.; Sarabia, D.; Navia, D.; de Prada, C. Modifier-Adaptation Methodology for RTO
Applied to Distillation Columns. In Proceedings of the IFAC Symposium on ADCHEM, Whistler, BC,
Canada, 7–10 June 2015; pp. 223–228.
86. Rodriguez-Blanco, T.; Sarabia, D.; de Prada, C. Modifier-Adaptation Approach to Deal with Structural
and Parametric Uncertainty. In Proceedings of the IFAC Symposium on DYCOPS, Trondheim, Norway,
6–8 June 2016; pp. 851–856.
87. Gao, W.; Wenzel, S.; Engell, S. Integration of Gradient Adaptation and Quadratic Approximation in Real-Time
Optimization. In Proceedings of the 34th Chinese Control Conference, Hangzhou, China, 28–30 July 2015;
pp. 2780–2785.
88. Behrens, M.; Engell, S. Iterative Set-Point Optimization of Continuous Annular Electro-Chromatography.
In Proceedings of the 18th IFAC World Congress, Milan, Italy, 28 August–2 September 2011; pp. 3665–3671.
89. Behrens, M.; Khobkhun, P.; Potschka, A.; Engell, S. Optimizing Set Point Control of the MCSGP Process.
In Proceedings of the European Control Conference, Strasbourg, France, 24–27 June 2014; pp. 1139–1144.
90. Zhang, J.; Mao, Z.; Jia, R.; He, D. Real-time optimization based on a serial hybrid model for gold cyanidation
leaching process. Miner. Eng. 2015, 70, 250–263.
91. Hernandez, R.; Engell, S. Iterative Real-Time Optimization of a Homogeneously Catalyzed Hydroformylation
Process. In Proceedings of the ESCAPE-26, Portorož, Slovenia, 12–15 June 2016.
92. Navia, D.; Villegas, D.; Cornejo, I.; de Prada, C. Real-time optimization for a laboratory-scale flotation
column. Comput. Chem. Eng. 2016, 86, 62–74.
93. Marchetti, A.; Gopalakrishnan, A.; Tsikonis, L.; Nakajo, A.; Wuillemin, Z.; Chachuat, B.; Van herle, J.;
Bonvin, D. Robust real-time optimization of a solid oxide fuel cell stack. J. Fuel Cell Sci. Technol. 2011,
8, 051001.
94. Milosavljevic, P.; Cortinovis, A.; Marchetti, A.G.; Faulwasser, T.; Mercangöz, M.; Bonvin, D. Optimal Load
Sharing of Parallel Compressors via Modifier Adaptation. In Proceedings of the IEEE Multi-Conference on
Systems and Control, Buenos Aires, Argentina, 19–22 September 2016.
95. Milosavljevic, P.; Faulwasser, T.; Marchetti, A.; Bonvin, D. Time-Optimal Path-Following Operation in
the Presence of Uncertainty. In Proceedings of the European Control Conference, Aalborg, Denmark,
29 June–1 July 2016.

34
Processes 2016, 4, 55

96. Costello, S.; François, G.; Bonvin, D. Real-time optimizing control of an experimental crosswind power kite.
IEEE Trans. Control Syst. Technol. 2016, submitted.
97. Bunin, G.A.; François, G.; Bonvin, D. Sufficient Conditions for Feasibility and Optimality of Real-Time Optimization
Schemes—II. Implementation Issues, 2013; ArXiv:1308.2625 [math.oc].
98. Marchetti, A.G.; Faulwasser, T.; Bonvin, D. A feasible-side globally convergent modifier-adaptation scheme.
J. Process Control, 2016, submitted.
99. Marchetti, A.G.; Singhal, M.; Faulwasser, T.; Bonvin, D. Modifier adaptation with guaranteed feasibility in
the presence of gradient uncertainty. Comput. Chem. Eng. 2016, doi:10.1016/j.compchemeng.2016.11.027.
100. Brdyś, M.; Roberts, P.D.; Badi, M.M.; Kokkinos, I.C.; Abdullah, N. Double loop iterative strategies for
hierarchical control of industrial processes. Automatica 1989, 25, 743–751.
101. Brdyś, M.; Abdullah, N.; Roberts, P.D. Hierarchical adaptive techniques for optimizing control of large-scale
steady-state systems: optimality, iterative strategies, and their convergence. IMA J. Math. Control Inf. 1990, 7,
199–233.
102. Wenzel, S.; Paulen, R.; Stojanovski, G.; Krämer, S.; Beisheim, B.; Engell, S. Optimal resource allocation in
industrial complexes by distributed optimization and dynamic pricing. at-Automatisierungstechnik 2016, 64,
428–442.
103. Milosavljevic, P.; Schneider, R.; Faulwasser, T.; Bonvin, D. Distributed Modifier Adaptation Using a
Coordinator and Input-Output Data. In Proceedings of the 20th IFAC World Congress, Toulouse, France,
2017, submitted.
104. Schneider, R.; Milosavljevic, P.; Bonvin, D. Distributed modifier-adaptation schemes for real-time
optimization of uncertain interconnected systems. SIAM J. Control Optim. 2016, submitted.
105. Alvarez, L.A.; Odloak, D. Optimization and control of a continuous polymerization reactor. Braz. J.
Chem. Eng. 2012, 29, 807–820.
106. Diehl, M.; Amrit, R.; Rawlings, J.B. A Lyapunov function for economic optimizing model predictive control.
IEEE Trans. Automat. Control 2011, 56, 703–707.
107. Ellis, M.; Durand, H.; Christofides, P.D. A tutorial review of economic model predictive control methods.
J. Process Control 2014, 24, 1156–1178.
108. Faulwasser, T.; Bonvin, D. On the Design of Economic NMPC Based on Approximate Turnpike Properties.
In Proceedings of the 54th IEEE Conference on Decision and Control, Osaka, Japan, 15–18 December 2015;
pp. 4964–4970.

c 2016 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).

35
Processes 2016, 4, 45; www.mdpi.com/journal/processes
Article
A Study of Explorative Moves during Modifier
Adaptation with Quadratic Approximation
Weihua Gao *, Reinaldo Hernández and Sebastian Engell
Biochemical and Chemical Engineering Department, TU Dortmund, Emil-Figge-Str. 70,
44221 Dortmund, Germany; [email protected] (R.H.);
[email protected] (S.E.)
* Correspondence: [email protected]; Tel.: +49-231-755-5131

Academic Editor: Dominique Bonvin


Received: 31 October 2016; Accepted: 22 November 2016; Published: 26 November 2016

Abstract: Modifier adaptation with quadratic approximation (in short MAWQA) can drive the
operating condition of a process to its economic optimum by combining the use of a theoretical
process model with the data collected during process operation. The efficiency of the MAWQA
algorithm can be attributed to a well-designed mechanism that improves the economic
performance by taking the necessary explorative moves. This paper gives a detailed study
of the mechanism of performing explorative moves during modifier adaptation with quadratic
approximation. The necessity of the explorative moves is analyzed theoretically. Simulation results
for the optimization of a hydroformylation process are used to illustrate the efficiency of the MAWQA
algorithm over the finite-difference-based modifier adaptation algorithm.

Keywords: real-time optimization; modifier adaptation; quadratic approximation

1. Introduction
In the process industries, performing model-based optimization to achieve economically optimal
operation usually implies the need to handle plant-model mismatch. An optimum that is
calculated using a theoretical model seldom represents the plant optimum. As a result, real-time
optimization (RTO) is attracting considerable industrial interest. RTO is a model-based upper-level
optimization system that is operated iteratively in closed loop and provides set-points to the lower-level
regulatory control system in order to maintain the process operation as close as possible to the economic
optimum. RTO schemes usually estimate the process states and some model parameters or disturbances
from the measured data but employ a fixed process model, which leads to problems if the model does
not represent the plant accurately.
Several schemes have been proposed for combining the use of theoretical models with the data
collected during process operation, in particular the model-adaptation or two-step
scheme [1]. It handles plant-model mismatch in a sequential manner via an identification step followed
by an optimization step. Measurements are used to estimate the uncertain model parameters, and the
updated model is used to compute the decision variables via model-based optimization. The model-adaptation
approach is expected to work well when the plant-model mismatch is only of a parametric
nature and the operating conditions provide sufficient excitation for estimating the uncertain parameters
from the plant outputs. In practice, however, both parametric and structural mismatch are typically
present and, furthermore, the excitation provided by the previously visited operating points is often
not sufficient to accurately identify the model parameters.
For an RTO scheme to converge to the plant optimum, it is necessary that the gradients of the
objective as well as the values and gradients of the constraints of the optimization problem match
those of the plant. Schemes that directly adapt the model-based optimization problem by using
update terms (called modifiers) that are computed from the collected data have been proposed [2–4].
The modifier-adaptation schemes can handle considerable plant-model mismatch by applying bias-
and gradient-corrections to the objective and to the constraint functions. One of the major challenges in
practice, as shown in [5], is the estimation of the plant gradients with respect to the decision variables
from noisy measurement data.
Gao et al. [6] combined the idea of modifier adaptation with the quadratic approximation approach
that is used in derivative-free optimization and proposed the modifier adaptation with quadratic
approximation (in short MAWQA) algorithm. Quadratic approximations of the objective function
and of the constraint functions are constructed based on a screened subset of the data collected during
process operation. The plant gradients are computed from the quadratic approximations and are
used to adapt the objective and the constraint functions of the model-based optimization problem.
Simulation studies for the optimization of a reactor benchmark problem with noisy data showed
that, by performing some explorative moves, the true optimum can be obtained reliably. However,
neither the generation of the explorative moves nor their necessity for the convergence of the set-points
to the optimum was studied theoretically. Because the estimation of the gradients via quadratic
approximation requires more data than a finite-difference approach, the efficiency of the MAWQA
algorithm, in terms of the number of plant evaluations needed to reach the optimum, has been
questioned, in particular for the case of several decision variables. In addition, in practice, it is crucial
that plant operators be confident of the necessity of taking the explorative moves, which may lead to
a deterioration of plant performance.
This paper reports a detailed study of the explorative moves during modifier adaptation with
quadratic approximation. It first describes how the explorative moves are generated and then presents
the factors that influence the generation of these moves. The causality between the factors and the
explorative moves is depicted in Figure 1, where the blocks with a yellow background represent the
factors. The use of a screening algorithm to optimize the regression set for the quadratic approximations
is shown to ensure that an explorative move is only performed when the previously collected data cannot
provide accurate gradient estimates. Simulation results for the optimization of a hydroformylation
process with four optimization variables are used to illustrate the efficiency of the MAWQA algorithm,
which takes the necessary explorative moves, over the finite-difference-based modifier adaptation algorithm.

Figure 1. Causality between the explorative moves and the influencing factors.

2. Modifier Adaptation with Quadratic Approximation


Let $J_m(u)$ and $C_m(u)$ represent the objective and the vector of constraint functions of a static
model-based optimization problem, assumed to be twice differentiable with respect to the vector of
decision variables $u \in \mathbb{R}^{n_u}$:
$$\min_{u} \; J_m(u) \quad \text{s.t.} \quad C_m(u) \leq 0. \qquad (1)$$

37
Processes 2016, 4, 45

At each iteration of the modifier adaptation algorithm, bias- and gradient-corrections of the
optimization problem are applied as
$$\begin{aligned}
\min_{u} \quad & J_m(u) + \left(\nabla J_p^{(k)} - \nabla J_m^{(k)}\right)^{T} \left(u - u^{(k)}\right) \\
\text{s.t.} \quad & C_m(u) + \left(C_p^{(k)} - C_m^{(k)}\right) + \left(\nabla C_p^{(k)} - \nabla C_m^{(k)}\right)^{T} \left(u - u^{(k)}\right) \leq 0.
\end{aligned} \qquad (2)$$
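To make the correction concrete, the following sketch (hypothetical Python/NumPy code; the helper name, argument list and the choice of SciPy's SLSQP solver are assumptions, not taken from the paper) shows how the bias and gradient modifiers of (2) can be wrapped around a given model:

```python
import numpy as np
from scipy.optimize import minimize

def solve_adapted_problem(J_m, C_m, u_k, grad_Jp, grad_Jm, C_p_k, C_m_k,
                          grad_Cp, grad_Cm, u_start, bounds=None):
    """Solve the modifier-adapted problem (2) around the current set-point u_k.

    J_m, C_m         : model objective and (vector-valued) constraint functions
    grad_Jp, grad_Jm : plant and model objective gradients at u_k (1-D arrays)
    C_p_k, C_m_k     : plant and model constraint values at u_k (1-D arrays)
    grad_Cp, grad_Cm : plant and model constraint gradients at u_k (n_c x n_u arrays)
    """
    lam_J = grad_Jp - grad_Jm            # objective gradient modifier
    eps_C = C_p_k - C_m_k                # constraint bias modifiers
    lam_C = grad_Cp - grad_Cm            # constraint gradient modifiers

    J_ad = lambda u: J_m(u) + lam_J @ (u - u_k)
    C_ad = lambda u: C_m(u) + eps_C + lam_C @ (u - u_k)

    # SLSQP treats "ineq" constraints as fun(u) >= 0, hence the sign flip
    res = minimize(J_ad, u_start, method="SLSQP", bounds=bounds,
                   constraints=[{"type": "ineq", "fun": lambda u: -C_ad(u)}])
    return res.x
```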

The symbols are explained in Table 1. $\nabla J_p^{(k)}$ and $\nabla C_p^{(k)}$ are usually approximated by the finite
difference approach
$$\nabla J_p^{(k)} \approx
\begin{bmatrix}
u_1^{(k)} - u_1^{(k-1)} & \cdots & u_{n_u}^{(k)} - u_{n_u}^{(k-1)} \\
\vdots & \ddots & \vdots \\
u_1^{(k)} - u_1^{(k-n_u)} & \cdots & u_{n_u}^{(k)} - u_{n_u}^{(k-n_u)}
\end{bmatrix}^{-1}
\begin{bmatrix}
J_p^{(k)} - J_p^{(k-1)} \\
\vdots \\
J_p^{(k)} - J_p^{(k-n_u)}
\end{bmatrix}, \qquad (3)$$
where $n_u$ is the number of dimensions of $u$, $J_p^{(k-i)}$, $i = 0, \ldots, n_u$, are the plant objectives at
set-points $u^{(k-i)}$, $i = 0, \ldots, n_u$, and $\nabla C_p^{(k)}$ is approximated similarly. The accuracy of the finite
difference approximations is influenced by both the step-sizes between the set-points and the presence
of measurement noise. In order to acquire accurate gradient estimations, small step-sizes are
preferred. However, the use of small step-sizes leads to a high sensitivity of the gradient estimates to
measurement noise.
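As an illustration of (3), a minimal sketch (hypothetical NumPy code, not from the paper) that estimates the plant gradient from the current and the $n_u$ previous set-points:

```python
import numpy as np

def fd_plant_gradient(U_hist, J_hist):
    """Finite-difference gradient estimate following Equation (3).

    U_hist : array of shape (n_u + 1, n_u), rows u^(k), u^(k-1), ..., u^(k-n_u)
    J_hist : array of shape (n_u + 1,), plant objective values at these set-points
    """
    dU = U_hist[0] - U_hist[1:]   # n_u x n_u matrix of set-point differences
    dJ = J_hist[0] - J_hist[1:]   # corresponding objective differences
    # Solve dU @ grad = dJ; the system becomes ill-conditioned (and noise-sensitive)
    # when the past moves are nearly collinear or the step-sizes are very small
    return np.linalg.solve(dU, dJ)
```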

Table 1. Symbols used in the modifier adaptation formulation.

Symbol                 Description
$k$                    Index of iteration
$u^{(k)}$              Current set-point
$\nabla J_p^{(k)}$     Gradient vector of the plant objective function at $u^{(k)}$
$\nabla J_m^{(k)}$     Gradient vector of the model-predicted objective function at $u^{(k)}$
$C_p^{(k)}$            Vector of the plant constraint values at $u^{(k)}$
$C_m^{(k)}$            Vector of the model-predicted constraint values at $u^{(k)}$
$\nabla C_p^{(k)}$     Gradient matrix of the plant constraint functions at $u^{(k)}$
$\nabla C_m^{(k)}$     Gradient matrix of the model-predicted constraint functions at $u^{(k)}$

In the MAWQA algorithm, the gradients are computed analytically from quadratic approximations
of the objective function and of the constraint functions that are regressed based on a screened set
(represented by $U^{(k)}$ at the $k$th iteration) of all the collected data (represented by $U$). The screened set
consists of near and distant points: $U^{(k)} = U_n \cup U_d$, where $U_n = \{u : \|u - u^{(k)}\| < \Delta u,\ u \in U\}$,
and $U_d$ is determined by
$$\begin{aligned}
\min_{U_d} \quad & \frac{\sum_{u \in U_d} \|u - u^{(k)}\|}{\theta(U_d)} \\
\text{s.t.} \quad & \mathrm{size}(U_d) \geq C_{n_u+2}^{2} - 1, \\
& U_d \subset U \setminus U_n,
\end{aligned} \qquad (4)$$
where $\Delta u$ is sufficiently large so that $U_d$ guarantees robust quadratic approximations with
noisy data, $\theta(U_d)$ is the minimal angle between all possible vectors that are defined by
$u - u^{(k)}$, and $C_{n_u+2}^{2} = (n_u + 2)(n_u + 1)/2$ is the number of data required to uniquely determine
the quadratic approximations.
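The selection in (4) is a combinatorial problem. The following sketch (hypothetical code with an exhaustive search, only feasible for small data sets, and not the paper's actual screening algorithm) illustrates the idea of combining near points with distant points that are close to $u^{(k)}$ yet well spread in angle:

```python
import numpy as np
from itertools import combinations

def screen_regression_set(U, u_k, delta_u, n_u):
    """Illustrative screening of U^(k) = U_n ∪ U_d (cf. Equation (4)).

    U : array (N, n_u) of all previously evaluated set-points; assumes the
    candidate pool holds at least C^2_{n_u+2} - 1 distant points.
    """
    dist = np.linalg.norm(U - u_k, axis=1)
    U_n = U[dist < delta_u]                    # near points
    candidates = U[dist >= delta_u]            # pool for the distant set
    n_d = (n_u + 2) * (n_u + 1) // 2 - 1       # required size C^2_{n_u+2} - 1

    def criterion(subset):
        vecs = subset - u_k
        theta = np.pi                          # minimal pairwise angle of the vectors u - u^(k)
        for i, j in combinations(range(len(subset)), 2):
            c = vecs[i] @ vecs[j] / (np.linalg.norm(vecs[i]) * np.linalg.norm(vecs[j]))
            theta = min(theta, np.arccos(np.clip(c, -1.0, 1.0)))
        return np.sum(np.linalg.norm(vecs, axis=1)) / max(theta, 1e-12)

    # brute-force search over all candidate subsets of size n_d
    best = min(combinations(range(len(candidates)), n_d),
               key=lambda idx: criterion(candidates[list(idx)]))
    U_d = candidates[list(best)]
    return np.vstack([U_n, U_d])
```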


In the MAWQA algorithm, the regression set $U^{(k)}$ is also used to define a constrained search space
$\mathcal{B}^{(k)}$ for the next set-point move
$$\mathcal{B}^{(k)}: \left(u - u^{(k)}\right)^{T} M^{-1} \left(u - u^{(k)}\right) \leq \gamma^2, \qquad (5)$$
where $M = \mathrm{cov}(U^{(k)})$ is the covariance matrix of the selected points (inputs) and $\gamma$ is a scaling
parameter. $\mathcal{B}^{(k)}$ is an $n_u$-axial ellipsoid centered at $u^{(k)}$. The axes of the ellipsoid are thus aligned with
the eigenvectors of the covariance matrix. The semi-axis lengths of the ellipsoid are related to the
eigenvalues of the covariance matrix by the scaling parameter $\gamma$. The adapted optimization (2) is
augmented by the search space constraint as
$$\begin{aligned}
\min_{u} \quad & J_{ad}^{(k)}(u) \\
\text{s.t.} \quad & C_{ad}^{(k)}(u) \leq 0, \\
& u \in \mathcal{B}^{(k)},
\end{aligned} \qquad (6)$$
where $J_{ad}^{(k)}(u)$ and $C_{ad}^{(k)}(u)$ represent the adapted objective and constraint functions in (2).
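A minimal sketch (hypothetical NumPy code, with names chosen for illustration) of how the ellipsoidal search space (5) can be built from the regression inputs and exposed as an inequality constraint; the returned function could, for instance, be appended to the constraint list of the solver sketch shown after (2):

```python
import numpy as np

def search_space_constraint(U_reg, u_k, gamma):
    """Return g(u) >= 0 form of the ellipsoidal constraint B^(k) in Equation (5)."""
    M = np.cov(np.asarray(U_reg).T)       # covariance of the regression inputs (n_u x n_u)
    M_inv = np.linalg.inv(M)              # assumes the points span all input directions

    def g(u):
        d = u - u_k
        return gamma**2 - d @ M_inv @ d   # >= 0 inside the ellipsoid, < 0 outside
    return g
```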
In the application of the modifier adaptation with quadratic approximation, it can happen that
the nominal model is inadequate for the modifier-adaptation approach and that it is better to only use
the quadratic approximations to compute the next plant move. In order to ensure the convergence,
it is necessary to monitor the performance of the adapted optimization and possibly to switch between
model-based and data-based optimizations. In each iteration of the MAWQA algorithm, a quality index
of the adapted optimization $\rho_m^{(k)}$ is calculated and compared with the quality index of the quadratic
approximation $\rho_\phi^{(k)}$, where
$$\rho_m^{(k)} = \max\left\{ \left|1 - \frac{J_{ad}^{(k)} - J_{ad}^{(k-1)}}{J_p^{(k)} - J_p^{(k-1)}}\right|,\ \left|1 - \frac{C_{ad,1}^{(k)} - C_{ad,1}^{(k-1)}}{C_{p,1}^{(k)} - C_{p,1}^{(k-1)}}\right|,\ \ldots,\ \left|1 - \frac{C_{ad,n_c}^{(k)} - C_{ad,n_c}^{(k-1)}}{C_{p,n_c}^{(k)} - C_{p,n_c}^{(k-1)}}\right| \right\} \qquad (7)$$
and
$$\rho_\phi^{(k)} = \max\left\{ \left|1 - \frac{J_{\phi}^{(k)} - J_{\phi}^{(k-1)}}{J_p^{(k)} - J_p^{(k-1)}}\right|,\ \left|1 - \frac{C_{\phi,1}^{(k)} - C_{\phi,1}^{(k-1)}}{C_{p,1}^{(k)} - C_{p,1}^{(k-1)}}\right|,\ \ldots,\ \left|1 - \frac{C_{\phi,n_c}^{(k)} - C_{\phi,n_c}^{(k-1)}}{C_{p,n_c}^{(k)} - C_{p,n_c}^{(k-1)}}\right| \right\} \qquad (8)$$
with $J_\phi^{(k)}$ and $C_\phi^{(k)}$ the quadratic approximations of the objective and the constraint functions.
If $\rho_m^{(k)} \leq \rho_\phi^{(k)}$, the predictions of the adapted model-based optimization are more accurate than those
of the quadratic approximations and (6) is performed to determine the next set-point. Otherwise,
an optimization based on the quadratic approximations is done:
$$\begin{aligned}
\min_{u} \quad & J_\phi^{(k)}(u) \\
\text{s.t.} \quad & C_\phi^{(k)}(u) \leq 0, \\
& u \in \mathcal{B}^{(k)}.
\end{aligned} \qquad (9)$$
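The switching logic can be sketched as follows (hypothetical code; the stacking of objective and constraint values into one array is an implementation choice, not prescribed by the paper):

```python
import numpy as np

def quality_index(pred_k, pred_km1, plant_k, plant_km1):
    """Quality index in the spirit of Equations (7)-(8).

    Each argument stacks the objective and the constraint values, e.g.
    pred_k = [J_ad^(k), C_ad,1^(k), ..., C_ad,nc^(k)] for the adapted problem.
    """
    ratios = (np.asarray(pred_k) - np.asarray(pred_km1)) / \
             (np.asarray(plant_k) - np.asarray(plant_km1))
    return np.max(np.abs(1.0 - ratios))

# Switching rule of one MAWQA iteration (sketch):
# rho_m   = quality_index(adapted_k,   adapted_km1,   plant_k, plant_km1)
# rho_phi = quality_index(quadratic_k, quadratic_km1, plant_k, plant_km1)
# solve the adapted problem (6) if rho_m <= rho_phi, else the data-based problem (9).
```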
The MAWQA algorithm is given as follows:

Step 1. Choose an initial set-point $u^{(0)}$ and probe the plant at $u^{(0)}$ and $u^{(0)} + h e_i$, where $h$ is
a suitable step size and $e_i \in \mathbb{R}^{n_u}$ ($i = 1, \ldots, n_u$) are mutually orthogonal unit vectors. Use the
finite difference approach to calculate the gradients at $u^{(0)}$ and run the IGMO approach [3]
until $k \geq C_{n_u+2}^{2}$ set-points have been generated. Run the screening algorithm to define the
regression set $U^{(k)}$. Initialize $\rho_m^{(k)} = 0$ and $\rho_\phi^{(k)} = 0$.
Step 2. Calculate the quadratic functions $J_\phi^{(k)}$ and $C_\phi^{(k)}$ based on $U^{(k)}$. Determine the search space $\mathcal{B}^{(k)}$ by (5).
Step 3. Compute the gradients from the quadratic functions. Adapt the model-based optimization
problem and determine the optimal set-point $\hat{u}^{(k)}$ as follows:
(a) If $\rho_m^{(k)} \leq \rho_\phi^{(k)}$, run the adapted model-based optimization (6).
(b) Else perform the data-based optimization (9).
Step 4. If $\|\hat{u}^{(k)} - u^{(k)}\| < \Delta u$ and there exists one point $u^{(j)} \in U^{(k)}$ such that $\|u^{(j)} - u^{(k)}\| > 2\Delta u$,
set $\hat{u}^{(k)} = \left(u^{(j)} + u^{(k)}\right)/2$.
Step 5. Evaluate the plant at $\hat{u}^{(k)}$ to acquire $J_p(\hat{u}^{(k)})$ and $C_p(\hat{u}^{(k)})$. Prepare the next step as follows:
(a) If $\hat{J}_p^{(k)} < J_p^{(k)}$, where $\hat{J}_p^{(k)} = J_p(\hat{u}^{(k)})$, this is a performance-improvement move.
Define $u^{(k+1)} = \hat{u}^{(k)}$ and run the screening algorithm to define the next regression set
$U^{(k+1)}$. Update the quality indices $\rho_m^{(k+1)}$ and $\rho_\phi^{(k+1)}$. Increase $k$ by one and go to Step 2.
(b) If $\hat{J}_p^{(k)} \geq J_p^{(k)}$, this is an explorative move. Run the screening algorithm to update the
regression set for $u^{(k)}$. Go to Step 2.

Note that the index of iteration of the MAWQA algorithm is increased by one only when
a performance-improvement move is performed. Several explorative moves may be required at
each iteration. The number of plant evaluations is the sum of the numbers of both kinds of moves.
The next section studies why the explorative moves are required and how they contribute to the
improvement of the performance on a longer horizon.
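The bookkeeping of Step 5, which distinguishes performance-improvement moves from explorative moves, can be sketched as follows (hypothetical helper names; the plant evaluation and the screening routine are assumed to be supplied by the user):

```python
def mawqa_step5(u_k, J_k, u_hat, evaluate_plant, screen_regression_set):
    """Sketch of Step 5 of the MAWQA algorithm (names are illustrative).

    u_k, J_k : current set-point and its measured plant objective
    u_hat    : set-point proposed by the adapted or the data-based optimization
    """
    J_hat, C_hat = evaluate_plant(u_hat)      # one plant evaluation per move
    if J_hat < J_k:
        # performance-improvement move: accept u_hat and advance the iteration index
        u_next, J_next, advance_k = u_hat, J_hat, True
    else:
        # explorative move: keep u_k, but the new data point enters the regression set
        # and will sharpen the gradient estimate in the poorly explored direction
        u_next, J_next, advance_k = u_k, J_k, False
    U_reg = screen_regression_set(u_next)     # re-screen around the retained set-point
    return u_next, J_next, U_reg, advance_k
```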

3. Analysis of the Explorative Moves


In the MAWQA algorithm, the quadratic approximations of the objective and the constraint
functions are started once $C_{n_u+2}^{2}$ data have been collected. It can happen that the distribution of
the set-points is not “well-poised” [7] to ensure that the gradients are accurately estimated via the
quadratic approximations, especially when the initial set-point is far away from the optimum and the
following set-point moves are all along some search direction. Interpolation-based derivative-free
optimization algorithms rely on a model-improvement step that generates additional set-point moves
to ensure the well-poisedness of the interpolation set. Although the MAWQA algorithm was designed
without an explicit model-improvement step, the generation of explorative moves can be considered
as an implicit step to improve the poisedness of the regression set for the quadratic approximations.
This section gives a theoretical analysis of the explorative moves. We start with some observations from
the simulation results in [6] and relate the explorative moves to the estimation error of the gradients.
The factors that influence the accuracy of the estimated gradients are analyzed. It is shown that the
screening of the regression set leads to very pertinent explorative moves which, on the one hand,
are sufficient to improve the accuracy of the gradient estimations, and, on the other hand, are less
expensive than the model-improvement step in the derivative-free optimization algorithms.
The generation of the explorative moves is presented in Figure 2, where one MAWQA iteration for
the optimization of the steady-state profit of the Williams-Otto reactor with respect to the flow rate and
the reaction temperature [6] is illustrated. Here the blue surface represents the real profit mapping,
and the mesh represents the quadratic approximation which was computed based on the regression set
(the markers indicate the set-point moves and the measured profit values). The bottom part shows the
contours of the profit as predicted by the uncorrected model (blue lines), the constrained search space
(dash-dot line), and the contours of the modifier-adapted profit (inside, magenta lines). Comparing the
surface plot and the mesh plot, we can see that the gradient along the direction of the last set-point move
is estimated well. However, a large error can be observed in the perpendicular direction. The gradient
error propagates to the modifier-adapted contours and, therefore, the next set-point move (marked in
the figure) points in the direction where the gradient is badly estimated. Although the move may not
improve the objective function, the data collected in that direction can later help to improve the
gradient estimation.


Figure 2. Illustration of one MAWQA iteration with noisy data. Surface plot: real profit mapping;
mesh plot: quadratic approximation; markers: regression set-points, not-chosen set-points, measured
profit values and the next set-point move; blue contours: model-predicted profit; magenta contours:
modifier-adapted profit; dash-dot line: constrained search space.

The example illustrates how the gradient error in a specific direction may lead to an explorative
move along the same direction. In the MAWQA algorithm, the gradients are determined by evaluating
$\nabla J_\phi$ and $\nabla C_\phi$ at $u^{(k)}$. In order to be able to quantify the gradient error, we assume that the screened
set $U^{(k)}$ is of size $C_{n_u+2}^{2}$ and that the quadratic approximations are interpolated based on $U^{(k)}$. In the
application of the MAWQA algorithm, this assumption is valid when the current set-point is far away
from the plant optimum. Recall that the screened set $U^{(k)}$ consists of the near set $U_n$ and the distant
set $U_d$. From (4), we can conclude that the distant set $U_d$ is always of size $C_{n_u+2}^{2} - 1$. Step 4 of the
MAWQA algorithm ensures that the near set $U_n$ only consists of $u^{(k)}$ until the optimized next move is
such that $\|\hat{u}^{(k)} - u^{(k)}\| < \Delta u$ and there are no points $u^{(j)} \in U^{(k)}$ such that $\|u^{(j)} - u^{(k)}\| > 2\Delta u$, that is,
all the points in $U_d$ keep suitable distances away from $u^{(k)}$ for good local approximations. The above
two conditions imply that $\|u^{(k)} - u^{*}\| \leq \Delta u$, where $u^{*}$ represents the plant optimum. As a result
of Step 4, when $\|u^{(k)} - u^{*}\| > \Delta u$, $U_n$ is always of size 1. For simplicity, a shift of coordinates
that moves $u^{(k)}$ to the origin is performed and the points in $U^{(k)}$ are reordered as $\{0, u_{d1}, \ldots, u_{dn}\}$,
where $n = C_{n_u+2}^{2} - 1$.
Let $\phi = \{1, u_1, \ldots, u_{n_u}, u_1^2, \ldots, u_{n_u}^2, \sqrt{2}\, u_1 u_2, \sqrt{2}\, u_1 u_3, \ldots, \sqrt{2}\, u_{n_u-1} u_{n_u}\}$ represent a natural basis
of the quadratic approximation. Let $\alpha$ represent a column vector of the coefficients of the quadratic
approximation. The quadratic approximation of the objective function is formulated as
$$J_\phi(u) = \alpha_0 + \sum_{i=1}^{n_u} \alpha_i u_i + \sum_{i=1}^{n_u} \alpha_{n_u+i}\, u_i^2 + \sqrt{2} \sum_{i=1}^{n_u-1} \sum_{j=i+1}^{n_u} \alpha_{2n_u + I(n_u,i,j)}\, u_i u_j, \qquad (10)$$
where $I(n_u, i, j) = n_u(i-1) - (i+1)i/2 + j$. The coefficients $\alpha_i$, $i = 0, \ldots, n$, are calculated via the
interpolation of the $n + 1$ data sets $\{u, J_p(u)\}$.
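A minimal sketch (hypothetical NumPy code, not from the paper) of how the natural basis of (10) can be evaluated and how the coefficients $\alpha$ can be obtained from the screened data; with $u^{(k)}$ shifted to the origin, the gradient estimate is simply the vector of linear coefficients:

```python
import numpy as np

def quad_basis(u):
    """Natural basis φ(u) of Equation (10): [1, u_i, u_i^2, sqrt(2) u_i u_j (i < j)]."""
    u = np.asarray(u, dtype=float)
    cross = [np.sqrt(2.0) * u[i] * u[j]
             for i in range(len(u) - 1) for j in range(i + 1, len(u))]
    return np.concatenate(([1.0], u, u**2, cross))

def fit_quadratic(U_reg, J_reg):
    """Determine the coefficients α from the screened data (sketch).

    With exactly C^2_{n_u+2} points the linear system is square (interpolation);
    with more points, least squares yields a regression fit instead.
    """
    Phi = np.vstack([quad_basis(u) for u in U_reg])
    alpha, *_ = np.linalg.lstsq(Phi, np.asarray(J_reg), rcond=None)
    return alpha

def gradient_at_origin(alpha, n_u):
    """After shifting u^(k) to the origin, ∇J_φ(0) is just the linear coefficients."""
    return alpha[1:1 + n_u]
```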

41
Random documents with unrelated
content Scribd suggests to you:
Petruccio.
Schelm, klop me eens aan deez’ deur, en dat het klinkt,
Of ik klop u, dat morgen ’t oor nog zingt.

Grumio.
Mijn meester zoekt ruzie;—en als ik u klop,
Dan breekt het, dit weet ik, mij later toch op.

Petruccio.
Kom, doet ge ’t of niet? 15
Want als ge niet klopt, dan trek ik aan deez’ schel hier;
Kom, zing mi fa sol, dan hooren ze ’t wel hier.

(Hij trekt Grumio bij ’t oor.)

Grumio.
Helpt, vrienden, helpt, mijn heer is dol!

Petruccio.
Wel klop, als ik ’t beveel, gij lompe vlegel!

(Hortensio komt op.)

Hortensio.
Wat is hier aan de hand?—Wel zoo, mijn oude kennis Grumio! En gij,
mijn waarde vriend Petruccio!—Hoe maakt gij allen het te Verona?

Petruccio.
Signor Hortensio, zijt gij het die dit stuit?
Con tutto il cuore ben trovato, roep ik uit.

Hortensio.
Alla nostra casa ben venuto; molt’ onorato Signor mio Petruccio.
Sta, Grumio, op; ik leg den twist wel bij.

Grumio.
Ach, heer, dat doet er niets toe, wat hij daar in ’t Latijn vertelt. Als dit
nu voor mij geen wettige reden is, om uit zijn dienst te gaan! Denk
eens, heer, hij beveelt mij hem te kloppen en van klinkem te raken,
heer; nu, komt dat te pas, dat een bediende zijn heer zoo zou
behandelen, die al wel,—zoo veel ik weet—twee-en-dertig heeft, en
niet meer meespeelt?

Maar had ik ’t gedaan, toen hij zeide: „Klop, klop!”


Dan had hìj het, en erger brak ’t mij toch niet op.

Petruccio.
Aartsdomme schelm!—Verbeeld u, vriend Hortensio,
Ik zeg den guit te kloppen aan uw deur;
En wat ik zeide of niet, hij woû ’t niet doen.

Grumio.
O hemel! kloppen aan de deur!
Wat! hebt gij niet gezegd: „Knaap, klop mij hier!
Sla toe maar, klop me hier, klop dat het klinkt!”
En komt ge nu met „kloppen aan de deur?”

Petruccio.
Knaap, pak je weg of zwijg, dat raad ik je.

Hortensio.
Petruccio, stil! ik sta voor Grumio borg;
Wat dolle ruzie tusschen u en hem,
Uw ouwen, trouwen, snaakschen dienaar Grumio!—
Zeg liever, beste vriend, wat goede wind
Van ’t oud Verona u naar Padua blies.

Petruccio.
De wind, die ’t jonge volk alom verspreidt,
En verder af dan thuis hun heil doet zoeken.
Ginds blijft men groen als gras. Maar hoor in ’t kort,
Mijn vriend Hortensio, hoe het met mij staat.
Antonio, mijn vader, overleed,
En ik dwaal nu deez’ doolhof in en zoek
Er mijn fortuin,—God weet, misschien een vrouw;
’k Heb in mijn buidel goud, veel goed’ren thuis,
En trek de wereld rond; ik wil die zien. 58

Hortensio.
Petruccio, mag ik zonder omhaal u
Eens werven voor een fel en leelijk wijf?
Doch neen, voor zulk een raad kreeg ik geen dank;
En toch, ’k beloof u, dat zij rijk zou zijn,
Echt rijk;—maar neen, ge zijt te zeer mijn vriend,
Zoo’n koopje mag ik u niet leev’ren.

Petruccio.
Hortensio, tusschen vrienden zooals wij
Zijn weinig woorden noodig. Kent ge er eene,
Die rijk genoeg is voor Petruccio’s vrouw,—
Rijk is ’t refrein voor mijnen huwlijksdans,—
Waar ze ook zoo leelijk als Florentius’ bruid,
Oud als Sibylle, en even fel en vinnig
Als Socrates’ Xanthippe, erger nog,
’t Verschrikt mij niet, ik meen, het schrikt bij mij
Den lust niet weg tot de’ echt; waar’ ze ook zoo wild
Als de opgezweepte zee van Adria,—
Ik zoek een rijken trouw in Padua:
Trouw ’k rijk, dan trouw ik goed in Padua.

Grumio.
Kijk eens, heer, hij vertelt zoo maar platweg, hoe hij er over denkt;
geef hem maar gouds genoeg, en ge kunt hem laten trouwen met een
pop, of met het beeldje van een doekspeld, of met een oude slons, die
geen enk’len tand meer in haar mond heeft, zelfs al had zij ook al de
ziekten van twee-en-vijftig paarden; o niets komt hem te onpas, als er
maar geld bij is.

Hortensio.
Petruccio, ’t ging al verder dan ik dacht;
Nu zet ik voort, wat ik in scherts begon.
Ik kan, Petruccio, stellig aan een vrouw
U helpen, rijk genoeg en jong en schoon,
Wel opgevoed, zooals haar stand dit eischt.
Haar een’ge feil,—en dit is feils genoeg,—
Is, dat zij onverdraag’lijk korzel is
En bits, onhandelbaar, in zulk een mate,
Dat ik, al ware ik ook in bitt’ren nood,
Haar voor een goudmijn zelfs niet trouwen zou.

Petruccio.
O zwijg, gij kent de kracht niet van het goud;—
Zeg mij haars vaders naam, dit is genoeg;
Ik enter haar, al keef ze ook even luid
Als in den herfst de zwartste donderwolk.

Hortensio.
Haar vader heet Battista Minola,
Een hoff’lijk en recht vriend’lijk edelman;
Hààr naam is Katharina Minola,
Befaamd in Padua door haar schamp’re tong.

Petruccio.
Haar vader ken ik, schoon ik haar niet ken,
En met mijn vader was hij ook bevriend.
Ik slaap niet, vriend, eer ik haar heb gezien;
Vergeef mij dus, dat ik na de’ eersten groet
U daad’lijk weer verlaat, tenzij ge mij
Verzellen wilt op mijnen tocht naar ginds. 106

Grumio.
Ik bid u, heer, laat hem gaan, nu hij er lust in heeft. Op mijn woord, als
zij hem zoo goed kende als ik, zou zij begrijpen, dat kijven bij hem
bijzonder weinig uitricht. Zij zal hem misschien tien keeren achter
elkander schelm noemen, het doet hem niets; als hij eens begint, raast
hij er op los met zijn galgescheldwoorden. Ik zal u eens wat zeggen;
heer,—als zij hem durft staan, al is het ook nog zoo weinig, dan zal hij
haar figuren op haar gezicht teekenen, dat haar gezicht geen gezicht
meer is en zij haar oogen zoo dicht moet knijpen als een kat. Gij kent
hem niet, heer.

Hortensio.
Wacht nog, Petruccio, ik moet met u gaan;
Mijn schat is bij Battista in bewaring,
Hij houdt mijns levens kleinood achter slot,
Zijn jongste dochter, schoonheids puik, Bianca;
Hij sluit van haar mij af en and’ren meer,
Die met me, om strijd, aanhouden om haar hand,
Daar hij zich wel niet anders denken kan,
Om al het moois, dat ik u heb verteld,
Dan dat hij met Kath’rina zitten blijft;
Zoo nam Battista dan ’t besluit, dat hij
Aan niemand zijn Bianca gunt, als niet
De helleveeg Katrijn eerst aan den man is.

Grumio.
„De helleveeg Katrijn!”
Geen bijnaam van een maagd kan erger zijn.

Hortensio.
Nu doe mijn vriend Petruccio mij den dienst,
En stell’ mij, stemmigjes gekleed, aan de’ ouden
Battista voor als deeg’lijk onderwijzer,
Om in muziek Bianca les te geven;
Door die vermomming krijg ik op zijn minst
Gelegenheid om met haar saam te zijn
En onverdacht mijn liefde te verklaren.

Grumio.
Neen maar, dat is me daar een guitenstuk! Kijk eens, hoe de jongelui,
om de oudelui te bedotten, de koppen bij elkander steken!

(Gremio komt op met Lucentio, die verkleed is en boeken onder den arm draagt.)

Meester, meester, kijk eens om! Wie komt daar? Ha!

Hortensio.
Stil, Grumio, stil, het is mijn medevrijer;
Petruccio, kom nu hier, op zij!

Grumio.
Een knap jong mensch, juist om verliefd te zijn! 144

(Zij gaan ter zijde.)

Gremio.
In orde; ik heb de lijst goed nagezien;
Maar vriend, laat alles fraai gebonden zijn
En louter liefdeboeken, dit vooral;
En zorg, dat gij niets anders met haar leest.
Verstaan?—Hoor nog: wat u signor Battista
In mildheid schenkt, zal ik door ruime gift,
U nog vermeerd’ren.—Maar al wat gij schrijft,
Schrijf dat toch op geparfumeerd papier,
Want lief’lijker dan ’t geurigst reukwerk is
Zij, die ’t ontvangt.—Wat leest gij ’t eerst met haar?

Lucentio.
Wat het ook zij, ik werk alleen voor u,
Als mijn patroon; vertrouw hierop gerust,
Zoo vast, als waart gijzelf er altijd bij;
Licht vindt zelfs mìjn woord beter ingang, heer,
Dan ’t uwe, of gij moest een geleerde zijn.

Gremio.
O die geleerdheid, welk een schoone zaak!

Grumio.
O die onnooz’le, welk een rare snaak!

Petruccio.
Stil, vrindje!

Hortensio.
Stil, Grumio! (Hij treedt voor den dag.) Wees gegroet, signore
Gremio!

Gremio.
Wees welkom, vriend Hortensio! Raadt gij niet,
Waar ik naar toe ga?—Naar Battista Minola.
’k Had hem beloofd, dat ik zou rondzien naar
Een onderwijzer voor de schoone Bianca;
En ’k heb ’t geluk gehad, deez’ jongen man
Te ontmoeten, die door kennis en manieren
Juist voor haar past, in poëzie belezen
En and’re boeken,—goede boeken, ja.

Hortensio.
Zeer goed; en ik heb juist een heer ontmoet,
Die heeft beloofd, me een fijnen musicus
Te zullen zenden voor onze uitverkoor’ne;
Zoo blijf ik dus niets achter in den dienst
Der schoone Bianca, die ik zoo bemin.
Gremio.
Die ik bemin; mijn doen zal dit bewijzen.

Grumio.
Zijn geldzak zal ’t bewijzen.

Hortensio.
’t Is nu geen tijd voor hart-uitstorting, Gremio.
Maar luister: wilt gij vriend’lijk zijn, dan meld
Ik u iets nieuws, ons beiden even welkom.
Deez’ heer, dien ik toevallig heb ontmoet,
Wil, daar zijn wensch met ons verlangen strookt,
Gaan vrijen naar de kreeg’le Katharina,
Ja, krijgt ze goed wat mee, haar trouwen ook.

Gremio.
Gezegd, gedaan, is mooi.—Hortensio, spreek,
Hebt gij hem haar gebreken opgesomd?

Petruccio.
Ik weet, zij is een twistziek, kijvend wijf;
Is ’t anders niet, mijn heeren, dat ’s geen kwaad.

Gremio.
Geen kwaad, mijn vriend? Nu!—Waar zijt gij vandaan? 190

Petruccio.
’k Ben van Verona, en Antonio’s zoon;
Die is me ontvallen; maar zijn geldkist bleef,
En ’k hoop, dat die mij goede dagen geev’.

Gremio.
Heer, goede dagen en zoo’n wijf, zijn twee;
Maar hebt gij lust, ga dan uw gang gerust;
Ik zal in alles u behulpzaam zijn.
Maar wilt gij zulk een boschkat?

Petruccio.
Maar wilt gij zulk een boschkat? Wil ik leven?
Grumio.
Hij wil haar? Nu, hij doe het, of ik hang haar!

Petruccio.
Waarvoor kwam ik dan hier, dan met dit doel?
Denkt gij mijn oor vervaard voor wat geruchts?
Hoorde ik dan nooit het brullen van den leeuw?
Hoorde ik de zee, door storm gezweept, niet woeden,
Gelijk een toornige ever, wit beschuimd?
Hoorde ik kanongebulder niet, in ’t veld,
Noch ’s hemels zwaar geschut daar in de lucht?
Hoorde ik nooit, in een slag van groote legers,
Gehinnik, krijgsgeschreeuw, trompetgeschal?
En reutelt gij me van een vrouwetong,
Die half zoo luid niet klapt als een kastanje
In ’t haardvuur van een pachter? Maak een kind
Met bietebauwen bang!

Grumio.
Met bietebauwen bang! Neen, hij ducht niets.

Gremio.
Hortensio, hoor.
Deez’ heer komt wel ter rechter tijd; ik heb
Een voorgevoel, ’t is ons geluk en ’t zijne.

Hortensio.
’k Heb hem gezegd, wij staan hem gaarne bij,
En houden bij dit vrijen graag hem vrij.

Gremio.
Volgaarne ja, neemt zij hem als gemaal aan.

Grumio.
O, bood men even wis me een goed onthaal aan.

(Tranio komt op, deftig uitgedost, met Biondello.)

Tranio.
God zegen’ u, mijn heeren! ’k Ben zoo vrij
Te vragen, wat de naaste weg wel is
Naar ’t huis van heer Battista Minola.

Gremio.
Waar die twee mooie dochters zijn, bedoelt gij dien?

Tranio.
Denzelfden.—Biondello!

Gremio.
’t Is u toch om de dochter niet te doen?

Tranio.
Om hem en haar misschien; dit is mìjn zaak. 226

Petruccio.
In geen geval om haar, die kijft, niet waar?

Tranio.
Een kijfster? dank u, heer!—Kom, Biondello!

Lucentio
(ter zijde). Goed, Tranio, goed!

Hortensio.
Goed, Tranio, goed! Heer, eer gij gaat, een woord!
Zeg ja of neen; heeft de and’re u soms bekoord?

Tranio.
Waar’ ’t zoo, beleedigde ik dan u daarmede?

Gremio.
Neen, heer, maar ik verbied u elke verd’re schrede.

Tranio.
De straat, heer, is, zoo ’k denk, wel even vrij
Voor mij en u.

Gremio.
Voor mij en u. De straat, heer, wel, niet zij.

Tranio.
Waarom dan, mag ik vragen?

Gremio.
Waarom dan, mag ik vragen? Vraagt ge zoo?
Welnu, ze is de uitverkoor’ne van signore Gremio.

Hortensio.
Verneem, dat haar verkoor signor Hortensio.

Tranio.
Al zacht, mijn heeren; gunt als edellieden
Ook mij mijn recht, en hoort mij rustig aan.
Battista is een waardig edelman,
En met mijn vader is hij goed bekend;
En waar’ zijn dochter schooner nog dan ze is,
Meer vrijers mocht zij hebben, mij er bij.
Een duizendtal had Leda’s schoone dochter,
De schoone Bianca hebbe één meer dan nu;
Lucentio zij die een, is mijn besluit,
Al dong ook Paris zelf mee naar de bruid.

Gremio.
Let op, deze een praat allen van de baan.

Lucentio.
Laat hem maar gaan; de klepper blijkt een knol.

Petruccio.
Hortensio, zeg, waartoe al dit gepraat?

Hortensio.
Vergun mij deze vraag nog, heer. Hebt gij
De dochters van Battista ooit gezien?

Tranio.
Neen, heer, maar van zijn tweetal wel gehoord;
De eene om haar kijfsche tong niet min befaamd,
Dan de and’re door haar zedigheid en schoon.

Petruccio.
Laat af van de eerste, heer, die is van mij.

Gremio.
Ja, laat aan Hercules dat werk maar over;
Het twaalftal van Alcides tell’ niet meer. 258

Petruccio.
Verneem van mij nu, heer, hoe ’t voor u staat:
De jongste dochter, zij, die gij verlangt,
Blijft, wie ook vrijen wil, nog achter slot;
Aan niemand wil haar vader haar verloven,
Aleer haar oud’re zuster is getrouwd,
Dan wordt de jong’re vrij, maar eerder niet.

Tranio.
Heer, staat het zoo, en zijt gij dus de man,
Die zoo ons allen voorthelpt, mij er bij,
Breekt gij het ijs, volbrengt gij ’t heldenstuk,
Neemt gij die oudste, en wordt de jong’re vrij
Voor onzen wedstrijd,—zeker, die haar krijgt,
Wie ’t zij, zal, zooals ’t past, zich dankbaar toonen.

Hortensio.
Zeer juist gesproken, heer, en met verstand;
En daar gij medevrijer u verklaart,
Zult gij, als wij, dien heer erkent’lijk zijn;
Wij allen saam zijn veel aan hem verplicht.

Tranio.
Ik blijf niet achter, heer, en tot bewijs
Vraag ik: brengt den namiddag met mij door,
En drinken we op het welzijn onzer liefsten,
En doen we als advocaten, die, hoe fel
Ze elkaar bestrijden, vrienden zijn aan tafel.

Grumio en Biondello.
Een prachtig voorstel! jongens, gaan wij mee!
Hortensio.
Dat voorstel is aanneemlijk, ja, ’t is goed;
U wacht, Petruccio, straks mijn welkomstgroet.

(Allen af.)
Tweede Bedrijf.

Eerste Tooneel.
A l d a a r . Een hamer in Battista’s huis.

Bianca en Katharina komen op.

Bianca.
Gij krenkt mij, lieve zuster, krenkt uzelf,
Zoo ge als een dienstmeid en slavin mij sleurt;
Dit duld ik niet. Kunt gij deez’ tooi niet lijden,
Laat dan mijn handen los, dan doe ikzelf
Heel mijn gewaad, tot op mijn onderkleed,
Ja, uit en weg; en wat ge me ook beveelt,
Dat wil ik doen; zoo goed is mij bewust,
Wat ik mijn oud’re zuster schuldig ben.

Katharina.
Beken hier daad’lijk, wie van al uw vrijers
U ’t best bevalt; maar geen gehuichel, hoor!

Bianca.
Geloof mij, zuster, welken man ’k ook zag,
Nooit zag ik nog een aangezicht, dat meer
Mij aantrok dan van eenig ander man.

Katharina.
Fleemtong, gij liegt. Is ’t niet Hortensio?
Bianca.
Is hij uw keuze, zuster? ’k zweer, dan wil ik
Zelf voor u pleiten, totdat hij u neemt. 15

Katharina.
O! dan is rijkdom uw begeerte, en kiest
Gij Gremio, om een fraaien staat te voeren.

Bianca.
Is hij het, die uw nijd zoo wekt? O dan
Zijt gij aan ’t schertsen, en bespeur ik klaar,
Dat gij daar al den tijd aan ’t schertsen waart.
Ik bid u, Kaatje, laat mijn handen los.

Katharina.
Was alles scherts, houd dan ook dit er voor.

(Zij slaat haar.)

(Battista komt op.)

Battista.
Wat is dat hier? Mamsel, wat schand’lijk doen!—
Bianca, ga ter zijde!—arm kind! zij schreit!—
Bemoei u niet met haar; ga aan uw naaiwerk.—
Foei, helleveeg, zoo duivelsch van gemoed,
Wat krenkt gij haar, die u nooit heeft gekrenkt?
Heeft ze ooit een woord u in den weg gelegd?

Katharina.
Haar zwijgen jouwt mij uit; ik wil mij wreken.

(Zij vliegt naar Bianca toe.)

Battista.
Wat, voor mijn oogen?—Ga maar heen, mijn kind!

(Bianca af.)

Katharina.
Gij duldt het niet van mij? Ja ’t is te zien, 31
Zij is uw schat, aan haar bezorgt ge een man;
En ik moet op haar bruiloft barvoets dansen,
Om haar wis apen brengen naar de hel.
Zeg mij niets meer, ik wil gaan zitten weenen,
Totdat ik kans om mij te wreken zie.

(Katharina af.)

Battista.
Had ooit een vader zooveel uit te staan
Als ik?—maar wie komt daar?

(Gremio komt op met Lucentio, eenvoudig gekleed; Petruccio met Hortensio als
muziekonderwijzer, en Tranio, gevolgd door Biondello, die een luit en boeken draagt.)

Gremio.
Goeden morgen, buurman Battista.

Battista.
Goeden morgen, buurman Gremio; gegroet, mijne heeren!

Petruccio.
Gegroet, heer! ’k Bid u, hebt gij niet een dochter
Met name Katharina, schoon en zedig?

Battista.
Ik heb een dochter, heer, met name Katharina.

Gremio.
Bedaard toch; val niet met de deur in ’t huis. 45

Petruccio.
Gij krenkt mij, Signor Gremio; laat mij maar.—
’k Ben uit Verona, heer, een edelman;
De roep van hare schoonheid, haar verstand,
Haar minzaamheid, beschroomde zedigheid,
Haar wond’re gaven en haar zachten aard,
Heeft mij, schoon ongenood, verlokt, als gast
Ten uwent te verschijnen, om mijn oog
Te doen aanschouwen, wat mijn oor vernam.
En om de’ ontvangst, die ’k hoop, mij te verwerven,
Sta ik u hier een van mijn dienaars af,

(Hij stelt Hortensio aan Battista voor.)

Die, met muziek en wiskunst wel vertrouwd,


Haar in die vakken deeg’lijk les kan geven,
Waarin zij, naar ik weet, geen vreemd’ling is;
Versmaad hem niet, want anders krenkt gij mij;
Zijn naam is Licio, van Mantua.

Battista.
Wees welkom, heer, en hij om uwentwil;
Maar ’k weet toch, dat mijn dochter Katharina
Niet van uw gading is, zeer tot mijn spijt.

Petruccio.
Gij wilt, naar ’k zie, van haar geen afstand doen;
Of moog’lijk staat u mijn persoon niet aan?

Battista.
Versta mij wèl; ik zeg slechts wat ik meen.
Maar zeg mij, heer, wat is uw naam en stam?

Petruccio.
Ik ben Petruccio, en Antonio’s zoon,
Een man, door heel Itaalje wel bekend.

Battista.
Ik ken hem wel, wees zijnentwege welkom.

Gremio.
Vergeef de stoornis, maar, Petruccio, gun
Ons, armen smeekelingen, ook het woord!
Houd in! gij draaft door dik en dun maar voort.

Petruccio.
Vergeef mij, Signor Gremio, maar ’k wensch gauw klaar te komen.

Gremio.
’k Geloof ’t, maar vrees, dat gauw, heer, de pret u wordt benomen.—
Buurman Battista, deze aanbieding is u ongetwijfeld bijzonder
aangenaam. Om u van mijn zijde dezelfde beleefdheid te bewijzen,—
en ik acht mij tot meer beleefdheid verplicht jegens u dan jegens
iemand anders,—veroorloof ik mij, u dezen jeugdigen geleerde aan te
bieden (Hij stelt hem Lucentio voor.), die lang in Reims gestudeerd
heeft en even zoo bedreven is in het Latijn, Grieksch en voor andere
talen, als die ander in muziek en wiskunde; zijn naam is Cambio; ik
bid u, neem zijn diensten aan. 84

Battista.
Duizendmaal dank; Signore Gremio;—wees van harte welkom,
Cambio.—Maar gij (Tot Tranio.), geachte heer, gij schijnt een
vreemdeling: mag ik zoo vrij zijn te vragen, waaraan ik uw bezoek
verschuldigd ben.

Tranio.
Vergeef, heer, mijn vrijmoedigheid is groot,
Dat ik, schoon vreemd’ling in deez’ stad, het waag
Te dingen naar de hand uwe dochter,
Bianca, rijk in schoonheid en in deugden.
Ook is mij welbekend, dat gij besloot
Het eerst uw oud’re dochter uit te huwen;
Wat ik u vraag, is daarom slechts de gunst,
Dat ik, als ge eens mijn afkomst weet, met de and’ren,
Die naar haar hand staan, toegang hebben moog’,
Mijn wenschen moge ontvouwen, als ik die and’ren.
Voor ’t onderwijs van uwe dochters kan ik
Slechts kleinigheden bieden: deze luit,
Dit stapeltje Latijnsche en Grieksche boeken;
Aanvaardt gij die, dan schenkt gij hun waardij.

Battista.
Lucentio is uw naam? Van waar afkomstig?

Tranio.
Van Pisa, heer; Vincentio is mijn vader.

Battista.
Een man van aanzien ginds, mij welbekend.
Van hooren zeggen: hartelijk welkom, heer.—
Neem gij (Tot Hortensio.) de luit, en gij (Tot Lucentio.) dien stapel
boeken;
Zoo daad’lijk zult ge uw kweekelingen zien.
Heidaar! (Een Bediende komt.)—Hier, knaap, geleid deez’ heeren naar
Mijn dochters heen; zij zijn haar onderwijzers;
Verzoek voor hen alzoo een heusche ontvangst.

(De Bediende vertrekt, met Hortensio en Lucentio; Biondello volgt.)

Komt, gaan wij thans den tuin eens rond, en dan


Aan tafel; allen zijt gij hartlijk welkom;
Houdt u, dit bid ik, hiervan overtuigd.

Petruccio.
Signor Battista, hoor, mijn zaak eischt spoed;
Ik kan niet elken dag hier aanzoek doen.
Daar gij mijn vader kendet, kent ge mij,
Die al zijn land en goed’ren heb geërfd,
En sedert eer vermeêrd heb dan verminderd;
Zeg dus,—als ik het jawoord van haar krijg,—
Wat mij uw dochter wel ten huw’lijk brengt.

Battista.
De helft van al mijn goed’ren bij mijn dood,
En twintigduizend kronen zoo terstond. 123

Petruccio.
En ik, van mijnen kant, verzeker haar
Een weduwgift,—als zij mij overleeft,—
Van al mijn have en goed, hoe ook genaamd;
Nauwkeurig zij dit wett’lijk dus omschreven,
Opdat aan weêrszij het verdrag ons bind’.

Battista.
Ja, als maar eens de hoofdzaak zeker is:
Haar jawoord;—dit is nu het eerst en ’t laatst.

Petruccio.
O, dat is niets; want ik verklaar u, vader,
’k Ben even kort van stof als zij hooghartig;
En als één heftig vuur een ander vindt,
Dan wordt, wat hunne woede voedt, verteerd;
Een kleine wind blaast een klein vuur wel aan,
Doch een orkaan blaast vuur en alles uit;
Zoo ben ik haar, zoo geeft zij ’t mij gewonnen,
Want ik ben ruw en vrij niet als een melkmuil.

Battista.
Vrij hoe ge wilt; heb er maar zegen op;
Doch wapen u op enk’le booze woorden.

Petruccio.
Ik ben verstaald, onwrikbaar als een rots,
Die pal blijft staan, hoe fel de storm haar trots’.

(Hortensio komt op, met een wond aan ’t hoofd.)

Battista.
Wat is er, vriend? waarom ziet gij zoo bleek?

Hortensio.
Zie ik zoo bleek, dan is ’t van schrik, geloof me.
D e g e t e m d e f e e k s , Tweede Bedrijf, Eerste
Tooneel.

Battista.
En heeft mijn dochter aanleg voor muziek?

Hortensio.
Eer om soldaat te zijn; misschien houdt staal
Het in haar handen uit; een luit kan ’t niet.

Battista.
Dus denkt ge niet, dat zij de luit leert slaan?

Hortensio.
Neen, want zij sloeg de luit al op mij stuk.
Ik zeide alleen, haar vingergreep was valsch,
En boog haar zacht de hand tot beet’ren greep;
Daar werd zij ongeduldig, duivelsch; „noemt ge
„Dat grepen?” riep ze, „grijpen kan ik wel!”
En greep de luit en sloeg me er mee op ’t hoofd,
Zoodat mijn kop de luit geheel doorboorde;
Ik was een wijl verbluft en stond te kijken,
Als had ik ’t halsblok aan en stond te pronk;
En tevens riep ze: „Schelmsche vedelaar;”
En „Brekebeen!” en twintig zulke naampjes,
Als had ze voor mijn smaad die uitgezocht.

Petruccio.
Nu, bij mijn ziel, een aardig meisje! ik houd
Al tienmaal meer van haar dan vroeger; ’k wou,
Dat ik met haar al aan het babb’len was. 163

Battista.
Kom mee en wees niet zoo ontsteld; hervat
Uw onderwijs maar met mijn jongste dochter,
Die leerzaam en erkent’lijk zich betoont.—
Signor Petruccio, wilt gij met ons gaan?
Of zend ik hier mijn Kaatje naar u toe?

Petruccio.
Ja, doe dat, wees zoo goed; ik wacht haar hier,

(Battista, Gremio, Tranio en Hortensio af.)

En maak haar kluchtig ’t hof, zoodra zij komt.


Valt ze uit, dan zeg ik haar eenvoudig weg,
Dat zelfs de nachtegaal zoo mooi niet slaat;
En kijkt ze zwart, ik roem haar blikken, helder
Als morgenrozen, frisch met dauw gedrenkt;
En is ze stom en spreekt ze zelfs geen woord,
Dan roem ik luid de radheid van haar tong
En zeg, dat zulk een taal de ziel beweegt;
En roept ze: „Pak u weg!” dan dank ik haar,
Als had ze mij een week bij zich genood;
Verwerpt zij de’ echt, dan vraag ik haar, wanneer
Zij de geboden en het huw’lijk wil;—
Daar komt ze;—nu, Petruccio, doe uw woord!

(Katharina komt op.)

Goê morgen, Kaatje, want zoo heet ge, hoor ik.

Katharina.
Gij hoordet wel, maar toch niet naar behooren;
Wie van mij spreekt, die noemt mij Katharina.

Petruccio.
Onwaar, onwaar; men noemt u kortweg Kaatje,
En mooie Kaat, en soms ook korz’le Kaat;
Maar Kaatje, liefste Kaatje in ’t christendom,
Kaatje van Kaatjesstein, mijn poez’lig Kaatje,—
Wat Kaatje heet is poez’lig—daarom Kaatje,
Verneem van mij nu, Kaatje, gij mijn troost,
Ik hoorde alom uw lieve zachtheid prijzen,
Uw deugden noemen, en uw schoonheid roemen,—
Schoon niet zoo luide als gij verdient,—en dit
Heeft me aangezet om naar uw hand te staan.

Katharina.
Zoo? aangezet? Die u heeft aangezet,
Zette u weer weg! ’k Zag daad’lijk, dat gij plooibaar
En wel verzetbaar waart.

Petruccio.
En wel verzetbaar waart. Plooi- en verzetbaar?

Katharina.
Zooals een vouwstoel, ja.

Petruccio.
Zooals een vouwstoel, ja. Goed, zet u hier.

Katharina.
Juist, ezels moeten dragen, waarom gij niet?

Petruccio.
Juist, vrouwen moeten dragen, waarom gij niet? 100

Katharina.
Dacht gij me een knol, dat ik u dragen zou?

Petruccio.
’k Zal u geen last, misschien wel lastig zijn;
Want Kaatje, ik weet, ge zijt zoo jong, zoo lucht,—

Katharina.
Te lucht, dan dat een boer mij vangen zou;
Toch niet te licht; ik wil geen last er bij.

Petruccio.
’k Geloof het wel; de laster zwermt als bij
Vaak om u heen; gij kent zijn steek te wel.

Katharina.
Ik ducht dien niet; veeleer moog hij mij duchten.

Petruccio.
Dan zijt ge een wesp, en waarlijk al te fel.

Katharina.
Ben ik zoo wespig, ducht mijn angel dan.

Petruccio.
Die doet mij niets; ik ruk hem daad’lijk uit.

Katharina.
Ja, als een stumperd wist, waar die wel zit.

Petruccio.
Wie weet niet, waar een wesp haar angel draagt?
Ik vang de wesp, en moog ze ook tegenspart’len,
Ze raakt haar angel kwijt.
Katharina.
Ze raakt haar angel kwijt. Haar tong?

Petruccio.
Haar nagels eer, die knipt de man, die u,
Wild Kaatje, vangt, wel af.

(Katharina wendt zich om tot heengaan. Petruccio houdt haar vast.)

Wild Kaatje, vangt, wel af. Lief Kaatje, blijf;


Ik ben een edelman.

Katharina.
Ik ben een edelman. Dat wil ik zien.

(Zij slaat zijn handen weg.)

Petruccio
(grijpt haar handen vast). Bij God, ik klop u, waagt gij ’t weer, te
slaan.

Katharina.
Dan raakt ge uw wapen kwijt.
Want die een vrouw slaat, is geen edelman;
Geen edelman, geen wapen.

Petruccio.
Geen edelman, geen wapen. Wat, lief Kaatje!
Gij wapenkoning? Zet mij in uw stamboek!

Katharina.
Wat is ’t blazoen? Een jonge haan, die koning
Wil kraaien, maar ’t niet kan? Of is ’t een zotskap?

Petruccio.
Een haan, die kraait, als Kaat mijn hen wil zijn. 227

Katharina.
Geen haan voor mij; gij kraait nog als een kuiken.
Petruccio.
Neen, Kaatje, kom, zet niet zoo’n zuur gezicht.

Katharina.
Zoo doe ik steeds, bij ’t zien van onrijp ooft.

Petruccio.
Hier is geen onrijp ooft; zie dus niet zuur.

Katharina.
Het is er wel.

Petruccio.
Het is er wel. Vertoon ’t mij dan!

Katharina.
Het is er wel. Vertoon ’t mij dan! Had ik
Een spiegel, ’k deed het.

Petruccio.
Een spiegel, ’k deed het. Wat? Bedoelt gij mij?

Katharina.
Wel knap bedacht voor een, die pas komt kijken.

Petruccio.
Ja, bij Sint Joris, ’k ben te jong voor u.

Katharina.
En toch verwelkt!

Petruccio.
En toch verwelkt! Van kwelling.

Katharina.
En toch verwelkt! Van kwelling. ’t Kwelt mij niet.

(Zij wil heengaan.)

Petruccio.
Neen, Kaatje, hoor mij; zoo ontsnapt gij niet.

Katharina.
Mijn blijven zou u erg’ren; laat mij gaan.

Petruccio.
Volstrekt niet; ’k vind u allerliefst. Men had
U mij geschetst als schuw en ruw en geem’lijk;
En nu vind ik ’t Gerucht een lastertong,
Want gij zijt vroolijk, geestig, allerhoflijkst;
Wat stil, maar lieflijk als een lentebloem;
Gij fronst het voorhoofd niet, ge kijkt niet donker,
Bijt niet, zooals een feeks doet, op de lip;
Gij hebt geen lust in vinnig tegenspreken,
Maar uw aanbidders boeit gij allerliefst
Met vriend’lijk, zacht, vertrouwelijk gesprek.
Hoe komt men aan ’t verhaal, dat Kaatje hinkt?
De wereld liegt, want Kaatje is slank en recht
Gelijk een hazeltak, ze is bruin van haar,
Gelijk een hazelnoot, en zoeter dan haar kern;—
O loop eens op;—neen, hinken doet ge niet.

Katharina.
Loop, dwaas, en geef uw orders aan uw knechts.

Petruccio.
Verheerlijkte ooit Diana zoo het woud,
Als Kaatje’s vorstelijke gang deez’ zaal?
Wees gij Diaan, en laat haar Kaatje zijn;
En dan zij Kaatje koud, Diana dartel.

Katharina.
Waar hebt gij al dien schoonen praat geleerd?

Petruccio.
Het is voor ’t vuistje, geest van moederswege. 265

Katharina.
Een geestrijk moeder en zoo’n geestloos zoon!
Petruccio.
Heb ik geen geest?

Katharina.
Nu, houd dien geest maar warm.

Petruccio.
Ja, lieve Katharina, in uw arm;
En ’k zet daarom deez’ praatjes aan een kant,
En zeg u kort en goed: uw vader stond
Mij ’t aanzoek toe; de bruidsschat is bepaald;
En, of gij ’t wilt of niet, gij wordt mijn vrouw.
Hoor, Kaatje, ik ben de rechte man voor u;
En bij dit licht, dat op uw schoonheid straalt,—
Uw schoonheid, die voorwaar me in liefde ontgloeit,—
Verlang niet naar een and’ren man dan mij;
Want, Kaatje, ik ben de man om u te temmen,
En uit een wilde kat een lief tam Kaatje
Te maken, als een lief en huis’lijk katje.
Daar komt uw vader aan; neen, weiger niet;
Ik moet en zal Kath’rina tot mijn vrouw.

(Battista, Gremio en Tranio komen weder op.)

Battista.
Hoe is ’t, signor Petruccio, u vergaan
Bij mijne dochter?

Petruccio.
Bij mijne dochter? Wel, zeer goed, zeer goed;
Hoe kon het anders zijn? Dacht gij van neen?

Battista.
Hoe, dochter Katharina, nog steeds knorrig?

Katharina.
Noemt gij mij dochter? Nu, ’k verzeker u,
Gij hebt mij teed’re vaderzorg getoond,
Door aan een halfgek mensch mij toe te zeggen,
Een dollen zotskap, een hansworst, die vloekt,
Welcome to our website – the perfect destination for book lovers and
knowledge seekers. We believe that every book holds a new world,
offering opportunities for learning, discovery, and personal growth.
That’s why we are dedicated to bringing you a diverse collection of
books, ranging from classic literature and specialized publications to
self-development guides and children's books.

More than just a book-buying platform, we strive to be a bridge


connecting you with timeless cultural and intellectual values. With an
elegant, user-friendly interface and a smart search system, you can
quickly find the books that best suit your interests. Additionally,
our special promotions and home delivery services help you save time
and fully enjoy the joy of reading.

Join us on a journey of knowledge exploration, passion nurturing, and


personal growth every day!

ebookbell.com

You might also like