(Ebook) Practical Methods for Optimal Control and Estimation Using Nonlinear Programming, Second Edition (Advances in Design and Control) by John T. Betts, ISBN 9780898716887, 0898716888
Practical Methods
for Optimal Control
and Estimation Using
Nonlinear Programming
Advances in Design and Control
SIAM’s Advances in Design and Control series consists of texts and monographs dealing with all areas of
design and control and their applications. Topics of interest include shape optimization, multidisciplinary
design, trajectory optimization, feedback, and optimal control. The series focuses on the mathematical and
computational aspects of engineering design and control that are usable in a wide variety of scientific and
engineering disciplines.
Editor-in-Chief
Ralph C. Smith, North Carolina State University
Editorial Board
Athanasios C. Antoulas, Rice University
Siva Banda, Air Force Research Laboratory
Belinda A. Batten, Oregon State University
John Betts, The Boeing Company (retired)
Stephen L. Campbell, North Carolina State University
Eugene M. Cliff, Virginia Polytechnic Institute and State University
Michel C. Delfour, University of Montreal
Max D. Gunzburger, Florida State University
J. William Helton, University of California, San Diego
Arthur J. Krener, University of California, Davis
Kirsten Morris, University of Waterloo
Richard Murray, California Institute of Technology
Ekkehard Sachs, University of Trier
Series Volumes
Betts, John T., Practical Methods for Optimal Control and Estimation Using Nonlinear Programming, Second
Edition
Shima, Tal and Rasmussen, Steven, eds., UAV Cooperative Decision and Control: Challenges and Practical
Approaches
Speyer, Jason L. and Chung, Walter H., Stochastic Processes, Estimation, and Control
Krstic, Miroslav and Smyshlyaev, Andrey, Boundary Control of PDEs: A Course on Backstepping Designs
Ito, Kazufumi and Kunisch, Karl, Lagrange Multiplier Approach to Variational Problems and Applications
Xue, Dingyü, Chen, YangQuan, and Atherton, Derek P., Linear Feedback Control: Analysis and Design
with MATLAB
Hanson, Floyd B., Applied Stochastic Processes and Control for Jump-Diffusions: Modeling, Analysis,
and Computation
Michiels, Wim and Niculescu, Silviu-Iulian, Stability and Stabilization of Time-Delay Systems: An Eigenvalue-Based
Approach
Ioannou, Petros and Fidan, Barış, Adaptive Control Tutorial
Bhaya, Amit and Kaszkurewicz, Eugenius, Control Perspectives on Numerical Algorithms and Matrix Problems
Robinett III, Rush D., Wilson, David G., Eisler, G. Richard, and Hurtado, John E., Applied Dynamic Programming
for Optimization of Dynamical Systems
Huang, J., Nonlinear Output Regulation: Theory and Applications
Haslinger, J. and Mäkinen, R. A. E., Introduction to Shape Optimization: Theory, Approximation, and
Computation
Antoulas, Athanasios C., Approximation of Large-Scale Dynamical Systems
Gunzburger, Max D., Perspectives in Flow Control and Optimization
Delfour, M. C. and Zolésio, J.-P., Shapes and Geometries: Analysis, Differential Calculus, and Optimization
Betts, John T., Practical Methods for Optimal Control Using Nonlinear Programming
El Ghaoui, Laurent and Niculescu, Silviu-Iulian, eds., Advances in Linear Matrix Inequality Methods in Control
Helton, J. William and James, Matthew R., Extending H∞ Control to Nonlinear Systems: Control of Nonlinear
Systems to Achieve Performance Objectives
Practical Methods
for Optimal Control
and Estimation Using
Nonlinear Programming
SECOND EDITION
John T. Betts
10 9 8 7 6 5 4 3 2 1
All rights reserved. Printed in the United States of America. No part of this book may be
reproduced, stored, or transmitted in any manner without the written permission of the
publisher. For information, write to the Society for Industrial and Applied Mathematics,
3600 Market Street, 6th Floor, Philadelphia, PA 19104-2688 USA.
Trademarked names may be used in this book without the inclusion of a trademark
symbol. These names are used in an editorial context only; no infringement of trademark
is intended.
SIAM is a registered trademark.
For Theon and Dorothy
He Inspired Creativity
She Cherished Education
Contents
Preface
8 Epilogue
Bibliography
Index
Preface
Solving an optimal control or estimation problem is not easy. Pieces of the puzzle
are found scattered throughout many different disciplines. Furthermore, the focus of this
book is on practical methods, that is, methods that I have found actually work! In fact
everything described in this book has been implemented in production software and used to
solve real optimal control problems. Although the reader should be proficient in advanced
mathematics, no theorems are presented.
Traditionally, there are two major parts of a successful optimal control or optimal
estimation solution technique. The first part is the “optimization” method. The second part
is the “differential equation” method. When faced with an optimal control or estimation
problem it is tempting to simply “paste” together packages for optimization and numerical
integration. While naive approaches such as this may be moderately successful, the goal of
this book is to suggest that there is a better way! The methods used to solve the differential
equations and optimize the functions are intimately related.
The first two chapters of this book focus on the optimization part of the problem. In
Chapter 1 the important concepts of nonlinear programming for small dense applications
are introduced. Chapter 2 extends the presentation to problems which are both large and
sparse. Chapters 3 and 4 address the differential equation part of the problem. Chapter
3 introduces relevant material in the numerical solution of differential (and differential-
algebraic) equations. Methods for solving the optimal control problem are treated in some
detail in Chapter 4. Throughout the book the interaction between optimization and integra-
tion is emphasized. Chapter 5 describes how to solve optimal estimation problems. Chapter
6 presents a collection of examples that illustrate the various concepts and techniques. Real
world problems often require solving a sequence of optimal control and/or optimization
problems, and Chapter 7 describes a collection of these “advanced applications.”
While the book incorporates a great deal of new material not covered in Practical
Methods for Optimal Control Using Nonlinear Programming [21], it does not cover every-
thing. Many important topics are simply not discussed in order to keep the overall presen-
tation concise and focused. The discussion is general and presents a unified approach to
solving optimal estimation and control problems. Most of the examples are drawn from
my experience in the aerospace industry. Examples have been solved using a particular
implementation called SOCS. I have tried to adhere to notational conventions from both
optimization and control theory whenever possible. Also, I have attempted to use consistent
notation throughout the book.
The material presented here represents the collective contributions of many peo-
ple. The nonlinear programming material draws heavily on the work of John Dennis,
Roger Fletcher, Phillip Gill, Sven Leyffer, Walter Murray, Michael Saunders, and Margaret
Wright. The material on differential-algebraic equations (DAEs) is drawn from the
work of Uri Ascher, Kathy Brenan, and Linda Petzold. Ray Spiteri graciously shared his
classroom notes on DAEs. I was introduced to optimal control by Stephen Citron, and I
routinely refer to the text by Bryson and Ho [54]. Over the past 20 years I have been for-
tunate to participate in workshops at Oberwolfach, Munich, Minneapolis, Victoria, Banff,
Lausanne, Greifswald, Stockholm, and Fraser Island. I’ve benefited immensely simply
by talking with Larry Biegler, Hans Georg Bock, Roland Bulirsch, Rainer Callies, Kurt
Chudej, Tim Kelley, Bernd Kugelmann, Helmut Maurer, Rainer Mehlhorn, Angelo Miele,
Hans Josef Pesch, Ekkehard Sachs, Gottfried Sachs, Roger Sargent, Volker Schulz, Mark
Steinbach, Oskar von Stryk, and Klaus Well.
Three colleagues deserve special thanks. Interaction with Steve Campbell and his
students has inspired many new results and interesting topics. Paul Frank has played a
major role in the implementation and testing of the large, sparse nonlinear programming
methods described. Bill Huffman, my coauthor for many publications and the SOCS soft-
ware, has been an invaluable sounding board over the last two decades. Finally, I thank
Jennifer for her patience and understanding during the preparation of this book.
John T. Betts
Chapter 1
Introduction to Nonlinear
Programming
1.1 Preliminaries
This book concentrates on numerical methods for solving the optimal control problem.
The fundamental principle of all effective numerical optimization methods is to solve a
difficult problem by solving a sequence of simpler subproblems. In particular, the solution
of an optimal control problem will require the solution of one or more finite-dimensional
subproblems. As a prelude to our discussions on optimal control, this chapter will focus
on the nonlinear programming (NLP) problem. The NLP problem requires finding a finite
number of variables such that an objective function or performance index is optimized
without violating a set of constraints. The NLP problem is often referred to as parameter
optimization. Important special cases of the NLP problem include linear programming
(LP), quadratic programming (QP), and least squares problems.
Before proceeding further, it is worthwhile to establish the notational conventions
used throughout the book. This is especially important since the subject matter covers a
number of different disciplines, each with its own notational conventions. Our goal is to
present a unified treatment of all these fields. As a rule, scalar quantities will be denoted by
lowercase letters (e.g., α). Vectors will be denoted by boldface lowercase letters and will
usually be considered column vectors, as in
$$\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \tag{1.1}$$
where the individual components of the vector are x_k for k = 1, . . ., n. To save space, it will
often be convenient to define the transpose, as in
$$\mathbf{x}^T = (x_1, x_2, \ldots, x_n). \tag{1.2}$$
where c′(x) = dc/dx is the slope of the constraint at x. Using this linear approximation, it
is reasonable to compute x̄, a new estimate for the root, by solving (1.5) such that c(x̄) = 0,
i.e.,
$$\bar{x} = x - [c'(x)]^{-1} c(x). \tag{1.6}$$
Typically, we denote p ≡ x̄ − x and rewrite (1.6) as
$$\bar{x} = x + p, \tag{1.7}$$
where
$$p = -[c'(x)]^{-1} c(x). \tag{1.8}$$
Of course, in general, c(x) is not a linear function of x, and consequently we cannot
expect that c(x̄) = 0. However, we might hope that x̄ is a better estimate for the root x∗
than the original guess x; in other words we might expect that
$$|\bar{x} - x^*| \le |x - x^*| \tag{1.9}$$
and
$$|c(\bar{x})| \le |c(x)|. \tag{1.10}$$
If the new point is an improvement, then it makes sense to repeat the process, thereby
defining a sequence of points x^{(0)}, x^{(1)}, x^{(2)}, . . . with point (k + 1) in the sequence given by
$$x^{(k+1)} = x^{(k)} - [c'(x^{(k)})]^{-1} c(x^{(k)}). \tag{1.11}$$
For notational convenience, it usually suffices to present a single step of the algorithm, as in
(1.6), instead of explicitly labeling the information at step k using the superscript notation
x^{(k)}. Nevertheless, it should be understood that the algorithm defines a sequence of points
x^{(0)}, x^{(1)}, x^{(2)}, . . . . The sequence is said to converge to x∗ if
$$\lim_{k \to \infty} x^{(k)} = x^*. \tag{1.12}$$
In practice, of course, we are not interested in letting k → ∞. Instead we are satisfied with
terminating the sequence when the computed solution is “close” to the answer. Further-
more, the rate of convergence is of paramount importance when measuring the computa-
tional efficiency of an algorithm. For Newton’s method, the rate of convergence is said to
be quadratic or, more precisely, q-quadratic (cf. [71]). The impact of quadratic conver-
gence can be dramatic. Loosely speaking, it implies that each successive estimate of the
solution will double the number of significant digits!
Example 1.1 NEWTON’S METHOD—ROOT FINDING. To demonstrate, let us suppose
we want to solve the constraint
$$c(x) = a_1 + a_2 x + a_3 x^2 = 0, \tag{1.13}$$
where the coefficients a_1, a_2, a_3 are chosen such that c(0.1) = −0.05, c(0.25) = 0, and
c(0.9) = 0.9. Table 1.1 presents the Newton iteration sequence beginning from the initial
guess x = 0.85 and proceeding to the solution at x∗ = 0.25. Figure 1.1 illustrates the
first three iterations. Notice in Table 1.1 that the error between the computed solution and
the true value, which is tabulated in the third column, exhibits the expected doubling in
significant figures from the fourth iteration to convergence.
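The iteration of Example 1.1 is easy to reproduce in a few lines of code. The following is a minimal Python sketch for illustration only (it is not the SOCS implementation mentioned in the preface); the coefficients are recovered from the three interpolation conditions stated above by Gaussian elimination, and the iteration is the Newton step (1.6):

```python
# Newton's method for the scalar root-finding problem of Example 1.1.
# The coefficients a1, a2, a3 are recovered from the interpolation
# conditions c(0.1) = -0.05, c(0.25) = 0, c(0.9) = 0.9 by Gaussian
# elimination on the 3x3 linear system they define.

def solve_coefficients():
    pts = [(0.1, -0.05), (0.25, 0.0), (0.9, 0.9)]
    rows = [[1.0, x, x * x, cx] for x, cx in pts]   # augmented rows [1, x, x^2 | c(x)]
    for i in range(3):                              # forward elimination
        piv = rows[i][i]
        rows[i] = [v / piv for v in rows[i]]
        for j in range(i + 1, 3):
            f = rows[j][i]
            rows[j] = [vj - f * vi for vj, vi in zip(rows[j], rows[i])]
    a3 = rows[2][3]                                 # back substitution
    a2 = rows[1][3] - rows[1][2] * a3
    a1 = rows[0][3] - rows[0][1] * a2 - rows[0][2] * a3
    return a1, a2, a3

def newton(c, cprime, x, tol=1e-12, max_iter=50):
    """Newton iteration for a scalar root: x <- x - c(x)/c'(x), cf. (1.6)."""
    for _ in range(max_iter):
        p = -c(x) / cprime(x)   # Newton correction, cf. (1.8)
        x = x + p
        if abs(p) < tol:
            break
    return x

a1, a2, a3 = solve_coefficients()
c = lambda x: a1 + a2 * x + a3 * x ** 2
cprime = lambda x: a2 + 2.0 * a3 * x
root = newton(c, cprime, 0.85)   # initial guess from Table 1.1
```

Starting from x = 0.85, the iterates approach x∗ = 0.25 monotonically, consistent with the behavior tabulated in Table 1.1.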
So what is wrong with Newton’s method? Clearly, quadratic convergence is a very
desirable property for an algorithm to possess. Unfortunately, if the initial guess is not
sufficiently close to the solution, i.e., within the region of convergence, Newton’s method
may diverge. As a simple example, Dennis and Schnabel [71] suggest applying Newton’s
method to solve c(x) = arctan(x) = 0. This will diverge when the initial guess |x^{(0)}| > a,
converge when |x^{(0)}| < a, and cycle indefinitely if |x^{(0)}| = a, where a = 1.3917452002707.
In essence, Newton’s method behaves well near the solution (locally) but lacks something
permitting it to converge globally. So-called globalization techniques, aimed at correcting
this deficiency, will be discussed in subsequent sections. A second difficulty occurs when
the slope c′(x) = 0. Clearly, the correction defined by (1.6) is not well defined in this case.
In fact, Newton’s method loses its quadratic convergence property if the slope is zero at
the solution, i.e., c′(x∗) = 0. Finally, Newton’s method requires that the slope c′(x) can
be computed at every iteration. This may be difficult and/or costly, especially when the
function c(x) is complicated.
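The sensitivity to the initial guess is easy to observe numerically. The short Python sketch below is an illustration (the starting points and iteration counts are chosen here, not taken from [71]); it applies the Newton step to c(x) = arctan(x), for which c′(x) = 1/(1 + x²):

```python
import math

def newton_arctan(x, n):
    """n Newton steps for c(x) = arctan(x); returns the final iterate.
    Since c'(x) = 1/(1 + x^2), the step (1.6) is x <- x - arctan(x)*(1 + x^2)."""
    for _ in range(n):
        x = x - math.atan(x) * (1.0 + x * x)
    return x

a = 1.3917452002707               # critical initial guess cited in the text
x_good = newton_arctan(1.0, 10)   # |x0| < a: iterates collapse onto the root 0
x_bad = newton_arctan(1.5, 5)     # |x0| > a: iterates grow in magnitude
```

With |x⁽⁰⁾| = 1.0 the iterates reach the root essentially to machine precision within a handful of steps, while from |x⁽⁰⁾| = 1.5 each step overshoots farther than the last.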
$$\bar{x} = x - B^{-1} c(x) = x + p, \tag{1.16}$$
$$x_{k+1} = x_k - \frac{x_k - x_{k-1}}{c(x_k) - c(x_{k-1})}\, c(x_k). \tag{1.17}$$
Figure 1.2 illustrates a secant iteration applied to Example 1.1 described in the pre-
vious section.
Clearly, the virtue of the secant method is that it does not require calculation of the
slope c′(x_k). While this may be advantageous when derivatives are difficult to compute,
there is a downside! The secant method is superlinearly convergent, which, in general, is
not as fast as the quadratically convergent Newton algorithm. Thus, we can expect conver-
gence will require more iterations, even though the cost per iteration is less. A distinguish-
ing feature of the secant method is that the slope is approximated using information from
previous iterates in lieu of a direct evaluation. This is the simplest example of a so-called
quasi-Newton method.
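A minimal Python sketch of the secant update (1.17), applied to the same quadratic constraint as Example 1.1, follows; the starting pair (0.85, 0.80) is chosen here for illustration, and the quadratic is written in Lagrange form so that the interpolation conditions of the example hold exactly:

```python
def c(x):
    """Quadratic through (0.1, -0.05), (0.25, 0.0), (0.9, 0.9), written in
    Lagrange form so the interpolation conditions of Example 1.1 hold exactly."""
    return (-0.05 * (x - 0.25) * (x - 0.9) / ((0.1 - 0.25) * (0.1 - 0.9))
            + 0.9 * (x - 0.1) * (x - 0.25) / ((0.9 - 0.1) * (0.9 - 0.25)))

def secant(c, x0, x1, tol=1e-12, max_iter=50):
    """Secant iteration (1.17): the slope is approximated from two previous
    iterates instead of being evaluated directly."""
    for _ in range(max_iter):
        denom = c(x1) - c(x0)
        if denom == 0.0:          # iterates have effectively converged
            break
        x2 = x1 - (x1 - x0) / denom * c(x1)
        x0, x1 = x1, x2
        if abs(x1 - x0) < tol:
            break
    return x1

root = secant(c, 0.85, 0.80)   # starting pair chosen near the guess of Example 1.1
```

Counting iterations in this sketch against the Newton sketch of Example 1.1 exhibits the expected trade-off: more iterations, but no slope evaluations.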
development of (1.5), let us approximate F(x) by the first three terms in a Taylor series
expansion about the current point x:
$$F(\bar{x}) = F(x) + F'(x)(\bar{x} - x) + \frac{1}{2}(\bar{x} - x) F''(x)(\bar{x} - x). \tag{1.18}$$
Notice that we cannot use a linear model for the objective because a linear function does
not have a finite minimum point. In contrast, a quadratic approximation to F(x) is the
simplest approximation that does have a minimum. Now for x̄ to be a minimum of the
quadratic (1.18), we must have
$$\frac{dF}{d\bar{x}} \equiv F'(\bar{x}) = 0 = F'(x) + F''(x)(\bar{x} - x). \tag{1.19}$$
Solving for the new point yields
$$\bar{x} = x - [F''(x)]^{-1} F'(x). \tag{1.20}$$
The derivation has been motivated by minimizing F(x). Is this equivalent to solving the
slope condition F′(x) = 0? It would appear that the iterative optimization sequence defined
by (1.20) is the same as the iterative root-finding sequence defined by (1.6), provided we
replace c(x) by F′(x). Clearly, a quadratic model for the objective function (1.18) produces
a linear model for the slope F′(x). However, the condition F′(x) = 0 defines only a
stationary point, which can be a minimum, a maximum, or a point of inflection. Apparently
what is missing is information about the curvature of the function, which would determine
whether it is concave up, concave down, or neither.
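To make the iteration (1.20) concrete, the Python sketch below applies it to F(x) = x·arctan(x) − ½ ln(1 + x²), a convex function chosen here purely for illustration (it does not appear in the text). Its slope is F′(x) = arctan(x), exactly the constraint from the Dennis and Schnabel example, so the minimization iteration inherits the same region of convergence:

```python
import math

def F(x):
    # F(x) = x*arctan(x) - (1/2)ln(1 + x^2); chosen so that
    # F'(x) = arctan(x) and F''(x) = 1/(1 + x^2) > 0 (convex, minimum at 0).
    return x * math.atan(x) - 0.5 * math.log(1.0 + x * x)

def newton_min(x, tol=1e-12, max_iter=50):
    """Newton iteration for a minimum: x <- x - F'(x)/F''(x), cf. (1.20)."""
    for _ in range(max_iter):
        slope = math.atan(x)             # F'(x)
        curvature = 1.0 / (1.0 + x * x)  # F''(x)
        p = -slope / curvature
        x = x + p
        if abs(p) < tol:
            break
    return x

xmin = newton_min(1.0)   # |x0| < 1.3917... keeps us inside the region of convergence
```

Since F″(x) > 0 everywhere, the stationary point found here is a genuine minimum; had F″ changed sign, the same iteration could just as well have terminated at a maximum or an inflection point.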
Figure 1.3 illustrates a typical situation. In the illustration, there are two points
with zero slopes; however, there is only one minimum point. The minimum point is dis-