
Zhongjing Ma · Suli Zou

Optimal Control Theory
The Variational Method

Zhongjing Ma
School of Automation
Beijing Institute of Technology
Beijing, China

Suli Zou
School of Automation
Beijing Institute of Technology
Beijing, China

ISBN 978-981-33-6291-8
ISBN 978-981-33-6292-5 (eBook)
https://doi.org/10.1007/978-981-33-6292-5

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature
Singapore Pte Ltd. 2021
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, expressed or implied, with respect to the material contained
herein or for any errors or omissions that may have been made. The publisher remains neutral with regard
to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721,
Singapore
Preface

Many systems, such as electrical, mechanical, chemical, aerospace, and economic systems, can be mathematically modeled by linear or nonlinear, deterministic or stochastic, differential or difference state equations. These state systems evolve with time, and possibly with other variables, under certain specified dynamical relations.
The underlying system may be driven from a specific state to another by applying external controls. When there exist many different ways to accomplish the same task, one of them may be best in some sense. For instance, one may wish to drive a vehicle from an initial place to a destination in minimum time or with minimum fuel consumption. The control corresponding to the best solution is called an optimal control, and the measure of performance is called the cost function.
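In generic notation (a sketch of the standard formulation, not the book's exact statement; the book's own formulation appears in Chap. 1), such a problem can be written as

```latex
\min_{u(\cdot)} \; J \;=\; h\bigl(x(t_f), t_f\bigr)
  \;+\; \int_{t_0}^{t_f} g\bigl(x(t), u(t), t\bigr)\,\mathrm{d}t,
\qquad \text{subject to} \quad
\dot{x}(t) = a\bigl(x(t), u(t), t\bigr), \quad x(t_0) = x_0,
```

where x is the state, u is the applied control, g is the running cost, and h is the terminal cost.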
Putting the above together, we have briefly introduced an optimal control problem. This book mainly focuses on how to solve optimal control problems via the variational method. More specifically,
• We study how to find the extrema of functionals by applying the variational method. The extrema of functionals with different boundary conditions, with multiple independent functions, with certain constraints, etc., are covered.
• We give the necessary and sufficient conditions for the (continuous-time) optimal control solution via the variational method, solve optimal control problems with different boundary conditions, and analyze the linear-quadratic regulator and tracking problems, respectively, in detail.
• We give the solution of optimal control problems with state constraints by applying Pontryagin's minimum principle, which is developed based on the calculus of variations. The developed results are applied to several classes of popular optimal control problems, such as minimum-time, minimum-fuel, and minimum-energy problems.
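As a concrete, minimal taste of the linear-quadratic regulator mentioned above, the following numerical sketch (not taken from the book; it assumes NumPy and SciPy are available, and the double-integrator plant is an illustrative choice) computes the infinite-horizon LQR gain:

```python
# A minimal sketch of the infinite-horizon LQR (assumes NumPy/SciPy;
# the double-integrator plant below is an illustrative choice, not the book's).
import numpy as np
from scipy.linalg import solve_continuous_are

# Double-integrator plant: x1' = x2, x2' = u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state weighting
R = np.array([[1.0]])  # control weighting

# Solve the algebraic Riccati equation  A'P + PA - P B R^{-1} B' P + Q = 0
P = solve_continuous_are(A, B, Q, R)

# Optimal state feedback u = -K x, with K = R^{-1} B' P
K = np.linalg.solve(R, B.T @ P)

# The closed-loop matrix A - BK has all eigenvalues in the left half-plane
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
```

For this plant and these weights the gain works out analytically to K = [1, √3], and the closed-loop system is stable.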
This book is aimed at senior undergraduate or graduate students in electrical, mechanical, chemical, and aerospace engineering, operations research, applied mathematics, etc. It contains material that can be covered in a one-semester course and requires the students to have a background in control systems or linear systems theory. The book can also be used by professional researchers and engineers working in a variety of fields.

Beijing, China
October 2020

Zhongjing Ma
Suli Zou
Acknowledgements

Parts of this book are mainly based on the lecture materials that the first author has
organized during the past 10 years for the graduate course, Optimal and Robust
Control, at the Beijing Institute of Technology (BIT).
The authors would like to express their sincere gratitude to many of their colleagues and students for their support and encouragement.
First of all, the authors thank Prof. Junzheng Wang, Prof. Yuanqing Xia, and Prof. Zhihong Deng for their advice to the first author of this book to take on the graduate course Optimal and Robust Control in English at the School of Automation, BIT, from the winter term of 2010–2011, when he had just joined BIT. Without taking on this course, it might not have been possible for the authors to organize this book.
The first author of the book also expresses his deepest gratitude to his Ph.D. advisors, Prof. Peter Caines and Prof. Roland Malhame, and his postdoctoral advisors, Prof. Duncan Callaway and Prof. Ian Hiskens. Their enthusiasm, patience, and inspiring guidance have driven his research career.
Moreover, the authors also thank their graduate students, Peng Wang, Xu Zhou, Dongyi Song, Fei Yang, Tao Yang, Yajing Wang, Yuanming Sun, Jing Fan, and other students, for their efforts on the simulations of parts of the numerical examples given in this book. Besides, they would like to thank their colleagues, Prof. Zhigang Gao, Dr. Liang Wang, and Dr. Hongwei Ma, who have provided valuable suggestions for this book.
In addition, they would like to thank the editors, the reviewers, and the staff at Springer Nature for their assistance.
The authors also acknowledge the financial support from the National Natural Science Foundation of China (NNSFC) and the Xuteli Grant, BIT.
Last but not least, they would like to express their deepest thanks to their family members, who have always provided them with endless encouragement and support.

Contents

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Backgrounds and Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Optimal Control Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Examples of Optimal Control Problems . . . . . . . . . . . . . . . . . . . 12
1.4 Formulation of Continuous-Time Optimal Control Problems . . . . 25
1.5 Formulation of Discrete-Time Optimal Control Problems . . . . . . 30
1.6 Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2 Extrema of a Functional via the Variational Method . . . . . . . . . . . . 39
2.1 Fundamental Notations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.1.1 Linearity of Function and Functional . . . . . . . . . . . . . . . 40
2.1.2 Norm in Euclidean Space and Functional . . . . . . . . . . . . 41
2.1.3 Increment of Function and Functional . . . . . . . . . . . . . . . 43
2.1.4 Differential of Function and Variation of Functional . . . . 44
2.2 Extrema of Functional . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.2.1 Extrema with Fixed Final Time and Fixed Final State . . . 50
2.2.2 Specific Forms of Euler Equation in Different Cases . . . . 53
2.2.3 Sufficient Condition for Extrema . . . . . . . . . . . . . . . . . . . 60
2.2.4 Extrema with Fixed Final Time and Free Final State . . . . 63
2.2.5 Extrema with Free Final Time and Fixed Final State . . . . 66
2.2.6 Extrema with Free Final Time and Free Final State . . . . . 70
2.3 Extrema of Functional with Multiple Independent Functions . . . . 76
2.4 Extrema of Function with Constraints . . . . . . . . . . . . . . . . . . . . 83
2.4.1 Elimination/Direct Method . . . . . . . . . . . . . . . . . . . . . . . 84
2.4.2 Lagrange Multiplier Method . . . . . . . . . . . . . . . . . . . . . . 85
2.5 Extrema of Functional with Constraints . . . . . . . . . . . . . . . . . . . 87
2.5.1 Extrema of Functional with Differential Constraints . . . . . 87
2.5.2 Extrema of Functional with Isoperimetric Constraints . . . . 92


2.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
2.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
3 Optimal Control via Variational Method . . . . . . . . . . . . . . . . . . . . 99
3.1 Necessary and Sufficient Condition for Optimal Control . . . . . . 99
3.2 Optimal Control Problems with Different Boundary
Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
3.2.1 Optimal Control with Fixed Final Time and Fixed
Final State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
3.2.2 Optimal Control with Fixed Final Time and Free
Final State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
3.2.3 Optimal Control with Free Final Time and Fixed
Final State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
3.2.4 Optimal Control with Free Final Time and Free
Final State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
3.3 Linear-Quadratic Regulation Problems . . . . . . . . . . . . . . . . . . . . 125
3.3.1 Infinite-Interval Time-Invariant LQR Problems . . . . . . . . 131
3.4 Linear-Quadratic Tracking Problems . . . . . . . . . . . . . . . . . . . . . 134
3.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
3.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
4 Pontryagin’s Minimum Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
4.1 Pontryagin’s Minimum Principle with Constrained Control . . . . . 147
4.2 Pontryagin’s Minimum Principle with Constrained State
Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
4.3 Minimum Time Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
4.3.1 Optimal Control Solution for Minimum Time
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
4.3.2 Minimum Time Problems for Linear Time-Invariant
Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
4.4 Minimum Fuel Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
4.5 Performance Cost Composed of Elapsed Time and Consumed
Fuel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
4.6 Minimum Energy Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
4.7 Performance Cost Composed of Elapsed Time and Consumed
Energy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
4.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
4.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
5 Dynamic Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
5.1 The Hamilton–Jacobi–Bellman Equation . . . . . . . . . . . . . . . . . . 219
5.2 Analysis on Optimal Control . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
5.3 Linear-Quadratic Regulation Problems . . . . . . . . . . . . . . . . . . . . 229
5.4 Affine-Quadratic Regulation Problems . . . . . . . . . . . . . . . . . . . . 235
5.5 Affine-Quadratic Tracking Problems . . . . . . . . . . . . . . . . . . . . . 238

5.6 Development of Pontryagin’s Minimum Principle


via Dynamic Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
5.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
5.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
6 Differential Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
6.1 Noncooperative Differential Games . . . . . . . . . . . . . . . . . . . . . . 249
6.1.1 Formulation of Noncooperative Differential Games . . . . . 249
6.1.2 Nash Equilibrium of Noncooperative Differential
Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
6.1.3 Affine-Quadratic Noncooperative Differential Games . . . . 254
6.2 Two-Person Zero-Sum Differential Games . . . . . . . . . . . . . . . . . 258
6.2.1 Formulation of Two-Person Zero-Sum Differential
Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
6.2.2 Saddle Point of Two-Person Zero-Sum Differential
Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
6.2.3 Implementation of Saddle Point of Two-Person
Zero-Sum Differential Games via Dynamic
Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
6.2.4 Linear-Quadratic Two-Person Zero-Sum Differential
Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
6.3 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
6.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
7 Discrete-Time Optimal Control Problems . . . . . . . . . . . . . . . . . . . . . 277
7.1 Variational Calculus for Discrete-Time Systems . . . . . . . . . . . . . 277
7.1.1 Optimum of Performance Functions with Fixed Final
Time and Fixed Final Value . . . . . . . . . . . . . . . . . . . . . . 278
7.1.2 Optimum with Fixed Final Time and Free Final
Value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
7.2 Discrete-Time Optimal Control via Variational Method . . . . . . . . 283
7.2.1 Optimal Control with Fixed Final Time and Fixed
Final State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
7.2.2 Optimal Control with Fixed Final Time and Free
Final State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
7.3 Discrete-Time Linear-Quadratic Regulation Problems . . . . . . . . . 287
7.3.1 Linear-Quadratic Regulation Problems with Fixed
Final Time and Fixed Final State . . . . . . . . . . . . . . . . . . 290
7.3.2 Linear-Quadratic Regulation Problems with Fixed
Final Time and Free Final State . . . . . . . . . . . . . . . . . . . 292
7.3.3 Optimal Control with Respect to State . . . . . . . . . . . . . . 293
7.3.4 Optimal Cost Function . . . . . . . . . . . . . . . . . . . . . . . . . . 297
7.3.5 Infinite-Interval Time-Invariant Linear-Quadratic
Regulation Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . 298

7.4 Discrete-Time Linear-Quadratic Tracking Problems . . . . . . . . . . 300


7.5 Discrete-Time Pontryagin’s Minimum Principle . . . . . . . . . . . . . 304
7.6 Discrete-Time Dynamic Programming . . . . . . . . . . . . . . . . . . . . 310
7.6.1 Optimal Control Problems with Discrete State
Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
7.6.2 Optimal Control Problems with Continuous State
Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
7.6.3 Discrete-Time Linear-Quadratic Problems . . . . . . . . . . . . 321
7.7 Discrete-Time Noncooperative Dynamic Games . . . . . . . . . . . . . 325
7.7.1 Formulation of Discrete-Time Noncooperative
Dynamic Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
7.7.2 NE of Discrete-Time Noncooperative Dynamic
Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
7.7.3 Discrete-Time Linear-Quadratic Noncooperative
Dynamic Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
7.8 Discrete-Time Two-Person Zero-Sum Dynamic Games . . . . . . . . 332
7.8.1 Formulation of Discrete-Time Two-Person Zero-Sum
Dynamic Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
7.8.2 Saddle Point of Discrete-Time Two-Person Zero-Sum
Dynamic Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
7.8.3 Discrete-Time Linear-Quadratic Two-Person Zero-Sum
Dynamic Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
7.9 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
7.10 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
8 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
List of Figures

Fig. 1.1 The block diagram form of classical control . . . . . . . . . . . . . . . . 2


Fig. 1.2 An illustration of the Brachistochrone problem . . . . . . . . . . . . . . 5
Fig. 1.3 An illustration of a simplified vehicle driving control . . . . . . . . . 13
Fig. 1.4 An illustration of a simple charging circuit . . . . . . . . . . . . . . . . . 13
Fig. 1.5 An illustration of a simplified traffic control at the junction . . . . 15
Fig. 1.6 An illustration of a soft landing problem of a spacecraft . . . . . . 17
Fig. 1.7 An illustration of a mechanical system composed of two masses and two springs . . . 20
Fig. 1.8 An illustration of a simplified model of a vehicle suspension system . . . 21
Fig. 1.9 An illustration of a simplified model of a chemical processing system . . . 22
Fig. 1.10 An illustration of a single inverted pendulum system . . . 23
Fig. 1.11 Shortest path problems . . . 24
Fig. 2.1 An illustration of the increment Δf, the differential df, and the derivative ḟ . . . 45
Fig. 2.2 A function f with several extrema . . . 47
Fig. 2.3 An illustration of a function x and its variation . . . 48
Fig. 2.4 Some functions with fixed final time and fixed final state . . . 50
Fig. 2.5 A nonzero function h and a specific variation δx . . . 51
Fig. 2.6 An illustration of the length of a curve . . . 55
Fig. 2.7 An illustration of the Brachistochrone problem . . . 56
Fig. 2.8 An illustration of an EV charging coordination problem . . . 58
Fig. 2.9 Some functions with fixed final time and free final states . . . 63
Fig. 2.10 The shortest curve between a fixed point and a free final state . . . 65
Fig. 2.11 Some functions with free final time and fixed final state . . . 66
Fig. 2.12 An illustration of the extremal solution x∗ and a variation x . . . 67
Fig. 2.13 Some functions with free final time and free final states . . . 70


Fig. 2.14 The extreme function and another admissible function in case final time and state are free . . . 71
Fig. 2.15 An illustration of the relationship of δxf, δx(tf), φ(t), and δtf . . . 73
Fig. 2.16 The shortest curve from a fixed point to a point on a straight line . . . 74
Fig. 2.17 The shortest curve from a fixed point to a point on a nonlinear curve φ(t) . . . 75
Fig. 2.18 The shortest curve from a fixed point to a point . . . 79
Fig. 3.1 The optimal control u∗ and corresponding state x∗ and costate λ∗ trajectories for optimal control problems with fixed final time and fixed final state . . . 108
Fig. 3.2 The optimal state trajectory x∗ and a variation x∗ + δx . . . 110
Fig. 3.3 A display of state trajectory terminating on a linear line at tf . . . 111
Fig. 3.4 The optimal control u∗ and corresponding state x∗ and costate λ∗ trajectories for optimal control problems with fixed final time and unspecified final state on a linear curve . . . 113
Fig. 3.5 A display of state trajectory which terminates on a circle at tf . . . 113
Fig. 3.6 A display of optimal control u∗ and its associated state and costate trajectories which terminates at the final time of 1.1671 . . . 115
Fig. 3.7 A display of optimal control u∗ and its associated state and costate trajectories which terminates at the final time of 1.1643 . . . 117
Fig. 3.8 A display of optimal control u∗, and its associated state and costate trajectories, which terminates at the final time of 2.5773 in case the final state depends upon the final time tf . . . 120
Fig. 3.9 A display of state trajectory which terminates on a surface with tf unspecified . . . 121
Fig. 3.10 A display of state trajectory which terminates on an oval with tf unspecified . . . 122
Fig. 3.11 A display of state trajectory which terminates on a surface with tf unspecified . . . 123
Fig. 3.12 A display of time-variant surfaces in which the final state lies on a circle specified in (3.96) . . . 124
Fig. 3.13 A diagram of the optimal control for linear-quadratic regulation problems . . . 127
Fig. 3.14 A display of x∗, λ∗ and u∗ with a = 0.1 and H = 1 . . . 131
Fig. 3.15 A display of x∗ and u∗ with a = 0.1, and H = 0.1, 0.5, 1, and 5 . . . 131
Fig. 3.16 A display of x∗ and u∗ with a = 0.01, 0.1, 0.5, 1, and 5, respectively . . . 132

Fig. 3.17 A display of x∗ and u∗ with a negative-valued a = −0.1 . . . 132
Fig. 3.18 A diagram of the optimal control for LQT problems . . . 137
Fig. 3.19 The optimal tracking solution x∗, λ∗, and u∗ with a = 0.1 and H = 1 . . . 139
Fig. 3.20 Display of x∗ and u∗ with a = 0.1, 0.01, 0.5, 1, and 5, respectively . . . 139
Fig. 4.1 An illustration of an optimal control inside a constrained control set . . . 148
Fig. 4.2 An illustration of variations of an optimal control located inside a constrained control set . . . 149
Fig. 4.3 A constrained control set and an illustration of admissible optimal control . . . 149
Fig. 4.4 A constrained control set and an illustration of inadmissible control . . . 149
Fig. 4.5 Constrained and unconstrained optimal controls for Example 4.1 . . . 155
Fig. 4.6 The value of optimal control with respect to its coefficient in the Hamiltonian . . . 162
Fig. 4.7 An illustration of evolution of x1 with respect to x2 subject to u = 1 . . . 166
Fig. 4.8 An illustration of evolution of x1 with respect to x2 subject to u = −1 . . . 166
Fig. 4.9 An illustration of switching curve . . . 167
Fig. 4.10 An illustration of evolution of optimal state with initial state not lying on A-0-B . . . 167
Fig. 4.11 The switching curves with respect to different parameters a for minimum time problems with two-dimensional state . . . 170
Fig. 4.12 The switching curves with respect to a specific parameter a = 0.5 for minimum time problems with two-dimensional state . . . 170
Fig. 4.13 An illustration of evolution of the optimal control u∗ with respect to [λ∗(t)]⊤bi(x∗(t), t) . . . 173
Fig. 4.14 The value of optimal control u∗(t) with respect to λ2∗(t) . . . 176
Fig. 4.15 The evolution of |u∗(t)| + λ2∗(t)u∗(t) with respect to λ2∗(t) . . . 176
Fig. 4.16 A switching curve for two-state minimum fuel problems with u∗ = −1 . . . 176
Fig. 4.17 A switching curve for two-state minimum fuel problems with u∗ = 0 . . . 176
Fig. 4.18 An illustration of subspaces for the state systems . . . 177
Fig. 4.19 An illustration of optimal control for two-state minimum fuel problems . . . 178
Fig. 4.20 An illustration of an ε-optimal control for two-state minimum fuel problems . . . 178

Fig. 4.21 An illustration of evolution of optimal control with respect to the costate . . . 180
Fig. 4.22 The evolution of consumed fuel with respect to the switching time t1 with a = 1, x0 = 1 and t0 = 1 . . . 183
Fig. 4.23 The state trajectory subject to a control in the form of {0, −1} with x0 > 0 . . . 186
Fig. 4.24 The state trajectory subject to a control in the form of {0, 1} with x0 < 0 . . . 187
Fig. 4.25 An implementation of a fuel-optimal control . . . 188
Fig. 4.26 The dependence of consumed fuel on specified final time tf . . . 188
Fig. 4.27 Several optimal trajectories for a time-fuel performance cost . . . 192
Fig. 4.28 The optimal control for Example 4.8 . . . 193
Fig. 4.29 An implementation of the weighted-time-fuel-optimal control of Example 4.8 . . . 193
Fig. 4.30 Trajectories for u = 0 . . . 195
Fig. 4.31 Some typical candidates for the optimal state trajectory with a given initial state x0 . . . 196
Fig. 4.32 Typical optimal state trajectories for time-fuel-optimal problems with different initial states . . . 198
Fig. 4.33 Switching curves for minimum time-fuel problems . . . 199
Fig. 4.34 The evolutions of the elapsed time and the consumed fuel on the weighting parameter b . . . 199
Fig. 4.35 An illustration of optimal control for minimum energy problems . . . 202
Fig. 4.36 An implementation of optimal control for minimum energy problems . . . 202
Fig. 4.37 The evolution of optimal control with respect to λ∗(t) for minimum energy problems . . . 204
Fig. 4.38 Trajectories of λ∗(t) with different initial values . . . 205
Fig. 4.39 The evolution of optimal control on costate λ∗(t) with different initial values . . . 205
Fig. 4.40 The optimal control u∗(t) with respect to state x∗(t) . . . 207
Fig. 4.41 The relationship between an extremal control and costate . . . 209
Fig. 4.42 Possible forms for an extremal costate trajectory . . . 211
Fig. 4.43 The evolution of the optimal control u∗ with respect to dynamics of λ∗ given in the curve (I) in Fig. 4.42 . . . 211
Fig. 4.44 The evolution of the optimal control u∗ with respect to dynamics of λ∗ given in the curve (II) in Fig. 4.42 . . . 212
Fig. 4.45 The evolution of the optimal control u∗ with respect to dynamics of λ∗ given in the curve (III) in Fig. 4.42 . . . 212
Fig. 4.46 The evolution of the optimal control u∗ with respect
to dynamics of k given in the curve (IV) in Fig. 4.42 . . . . . . . . 212
Fig. 4.47 The time-energy optimal control for Example 4.11 with a ¼ 3
and b ¼ 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
List of Figures xvii

Fig. 4.48 An implementation of the time-energy optimal control


for Example 4.11 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Fig. 4.49 Weighted-time-fuel and weighted-time-energy optimal
trajectories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Fig. 5.1 The cost function subject to optimal control u . . . . . . . . . . . . . 224
Fig. 6.1 An illustration of evolutions of u1 ðÞ and u2 ðÞ with respect
to s1 ðÞ and s2 ðÞ, respectively . . . . . . . . . . . . . . . . . . . . . . . . . . 269
Fig. 6.2 An illustration of u1 ðÞ and u2 ðÞ with respect to s1 ðÞ and s2 ðÞ,
respectively . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Fig. 7.1 A directed graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
Fig. 7.2 The shortest path for Example 7.11 . . . . . . . . . . . . . . . . . . . . . . 312
Fig. 7.3 The state transition subject to control u . . . . . . . . . . . . . . . . . . . 314
Fig. 7.4 The optimal control and associated state trajectories . . . . . . . . . . 317
List of Tables

Table 1.1 Profit with respect to investment amount on each item . . . . . . . 25


Table 7.1 The minimum cost function at time k ¼ 1 and the associated
optimal control at this time . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
Table 7.2 The minimum cost function at time k ¼ 1 and the associated
optimal control at this time . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317

xix
Chapter 1
Introduction

1.1 Backgrounds and Motivation

This book originates from parts of the lecture notes of the graduate course "Optimal
and Robust Control" given at Beijing Institute of Technology since 2011. The purpose
of the course is to provide an extensive treatment of optimal and robust control in
modern control theory for complex, multiple-input multiple-output systems, whose
performance criteria differ radically from those of classical control theory.
This book contains classical material on optimal control theory, including the
variational method, optimal control based upon the variational method, and
Pontryagin's minimum principle, illustrated with many numerical examples. The authors
appreciate being informed of errors or receiving other comments about this book.
This first chapter introduces the motivation for moving from classical control
theory to optimal control theory and provides an explicit formulation of optimal
control problems.
In classical control theory, the analysis and design of control systems mainly
depend on the concept of transfer function or the theory of Laplace transforms. Due
to the convolution property of Laplace transforms, a convenient representation of a
control system is the block diagram configuration that is illustrated in Fig. 1.1.
In such a block diagram representation, each block contains the Laplace transform
of the differential equation of a component of the control system, relating the
block's input to its output. The overall transfer function, giving the ratio of the
output to the input, is then obtained through simple algebra. That is, classical
control theory takes the input-output characteristics as the mathematical
model of the system.
The classical control theory has been covered in many textbooks for senior under-
graduate or graduate students, e.g., [1–12]. The commonly used analysis methods
include frequency response analysis, root locus, describing function, phase plane

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021
Z. Ma and S. Zou, Optimal Control Theory,
https://doi.org/10.1007/978-981-33-6292-5_1

Fig. 1.1 The block diagram form of classical control

and Popov method, etc., and the control is limited to feedback control, PID control,
and so on. By applying these techniques, classical control theory mainly studies
• The characteristics of the system in the time domain and frequency domain, such
as rise time, peak overshoot, gain and phase margin, and bandwidth;
• The stability of the system;
• The design and the correction methods of control systems.
The control plants concerned in the classical control are usually single-input and
single-output (SISO) systems, especially linear time-invariant (LTI) systems. These
methods of analysis are difficult to apply to multi-input and multi-output (MIMO)
systems.
In contrast to the classical control materials, generally speaking, the modern con-
trol theory is a time-domain control approach that is amenable to MIMO systems,
and is based upon the state-space method to characterize the control plant in terms
of a set of first-order differential equations [13–19].
For example, an LTI system could be expressed by

ẋ(t) = Ax(t) + Bu(t),
y(t) = Cx(t) + Du(t),

where x(t) is the system state, u(t) and y(t) are the control input and output vectors,
respectively, and the matrices A, B, C, and D are the corresponding state, input,
output, and transfer matrices, respectively. A nonlinear system is characterized by

ẋ(t) = f(x(t), u(t), t),
y(t) = g(x(t), u(t), t).

The state-variable representation uniquely specifies the transfer function, while
the converse does not hold. Therefore, modern control theory can address a much
wider range of control problems than classical control theory, including linear and
nonlinear systems, time-invariant and time-varying systems, and single-variable and
multivariable systems. Moreover, it makes it possible to design and construct an
optimal control system with a specified performance cost function.
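As a small illustration of the state-space model above (a hypothetical sketch, not an example from the book; the matrices are arbitrary values chosen for a double integrator), the equations can be integrated with a simple forward-Euler step:

```python
# Forward-Euler simulation of an LTI state-space model
#   x'(t) = A x(t) + B u(t),  y(t) = C x(t) + D u(t)
# All matrices below are illustrative values, not taken from the text.

def mat_vec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def vec_add(a, b):
    return [x + y for x, y in zip(a, b)]

def simulate_lti(A, B, C, D, x0, u_of_t, t0, tf, dt):
    """Return the output trajectory [(t, y)] on [t0, tf] via Euler integration."""
    x, t, ys = list(x0), t0, []
    while t < tf:
        u = u_of_t(t)
        y = vec_add(mat_vec(C, x), mat_vec(D, u))
        ys.append((t, y))
        # Euler step: x <- x + dt * (A x + B u)
        dx = vec_add(mat_vec(A, x), mat_vec(B, u))
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
        t += dt
    return ys

# Example: double integrator x1' = x2, x2' = u, with output y = x1.
A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0], [1.0]]
C = [[1.0, 0.0]]
D = [[0.0]]
traj = simulate_lti(A, B, C, D, x0=[0.0, 0.0],
                    u_of_t=lambda t: [1.0], t0=0.0, tf=1.0, dt=1e-3)
# With constant unit input, y(t) = t^2 / 2, so y near t = 1 is close to 0.5.
print(traj[-1][1][0])
```

A production implementation would use a proper ODE integrator, but the fixed-step loop keeps the state-update structure visible.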

1.2 Optimal Control Theory

When facing a task, the objective of control is to find, among all possible control
schemes, the one that causes a process/plant to satisfy some physical constraints
and at the same time maximize or minimize a chosen performance criterion.
As one of the major branches of modern control theory, the optimal control theory
covered in this book focuses on finding optimal ways to control dynamic systems
that evolve over time, primarily continuous-time systems.
It also studies the control and its synthesis when the controlled system optimizes
a performance cost function, which could be the consumed time, a monetary cost, or
the error between actual and expected behavior. The problems studied by optimal
control theory can be summarized as follows: for a controlled dynamic system or
motion process, find an optimal control scheme from a class of admissible control
schemes, such that the performance value of the system is optimal when the motion
of the system is transferred from an initial state to a specified target state.
Such problems arise widely in technological and social settings. For example:
determining an optimal attitude control that minimizes fuel expenditure while a
spacecraft transfers from one orbit to another; selecting a temperature regulation
law and the corresponding raw-material ratio that maximize the output of a chemical
reaction process; and formulating the most reasonable population policy that
optimizes the aging, dependency, and labor structure in the course of population
development. All of these are typical optimal control problems.
Consider an explicit example of designing a control for an unmanned vehicle.
The control objective is to reach the expected target position along some certain
trajectory. To complete this control task, the first thing is to get the current state of
the vehicle and how it changes under the input control signal. In this problem, the
states that are concerned most are the current position and speed of the vehicle.
The control behaviors could be controlling the throttle to accelerate the vehicle,
or controlling the brake to decelerate. The running speed of the vehicle will further
affect the position in the next period of time. At the same time, the vehicle speed
must not become so high as to violate traffic rules. When the fuel on board is
limited, the fuel consumed to finish the journey must also not exceed the available
amount.
The above example is a typical control task. An experienced driver may have a
variety of control methods to get there. However, if the objective is to reach the goal
in the shortest period of time, the control is not intuitive: increasing the throttle
could reduce the travel time, but may cause over-speeding; the vehicle may even fail
to reach the destination because the fuel is exhausted prematurely. Besides,
unexpected disturbances may occur during the journey.
From the above example, the fundamental elements related to optimal control
problems can be given as follows:
• State-space equations describing the dynamic system. That is, the control input
u(·), depending on time, influences the state variable x(·); this is usually
represented by differential equations in continuous time or difference equations
in discrete time.
• An admissible control set describing the constraints satisfied by the input and
state variables.
• Specific conditions on the final state at a final time, which may or may not be
fixed.
• A performance cost which is used to measure the performance of the control task
when the objective is achieved.
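These four elements can be collected into a generic continuous-time formulation (a standard Bolza-form sketch for orientation only; the symbols here are generic, and the book's precise formulations appear in Sects. 1.4 and 1.5):

```latex
\min_{u(\cdot)\,\in\,\mathcal{U}} \; J(u)
  = h\bigl(x(t_f), t_f\bigr)
  + \int_{t_0}^{t_f} g\bigl(x(t), u(t), t\bigr)\,\mathrm{d}t
\quad \text{subject to} \quad
\dot{x}(t) = f\bigl(x(t), u(t), t\bigr), \qquad x(t_0) = x_0,
```

where $h$ is the terminal cost, $g$ the running cost, $\mathcal{U}$ the admissible control set, and conditions on $x(t_f)$ encode the (possibly free) final state.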
The detailed formulations of the optimal control problems in cases of continuous
time and discrete time are given in Sects. 1.4 and 1.5, respectively. Therefore, from
the mathematical point of view, the optimal control problem can be expressed as
follows: under the constraints of the dynamic equation and the admissible control
range, find the extreme value of the performance cost function related to the
control and state variables.
The realization of optimal control is inseparable from optimization technology, the
subject that studies and solves optimization problems: how to express an
optimization problem as a mathematical model, and how to effectively find, among
all feasible solutions, the one that optimizes the objective via that model.
Generally speaking, solving practical engineering problems with optimization
methods proceeds as follows:
• Establish the mathematical model of the optimization problem, determine the vari-
ables, and list the constraints and objective function for the proposed optimization
problem;
• Analyze the mathematical model in detail, and select the appropriate optimization
method;
• Implement the optimal solution by executing the algorithm of the chosen
optimization method, and evaluate the convergence, optimality, generality,
simplicity, and computational complexity of the proposed algorithm.
After the mathematical model of the optimization problem is established, the main
question is how to solve it by different methods. The following briefly introduces
the methods for solving optimal control problems found in the literature.
For optimal control problems with simple and explicit mathematical expressions of
the objective function and constraints, analytical methods can be applied. Generally,
the way to find analytical solutions is to first derive the necessary conditions of
optimal control by derivative or variational methods, which have been covered in
many classical textbooks published in the past decades [20–28].
The creation of the calculus of variations occurred almost immediately after the
invention or formalization of calculus by Newton and Leibniz. An important problem
in calculus is to find an argument of a function at which the function takes on its
extrema, say maxima or minima.
The extension of the problem posed in the calculus of variations is to find a function
that maximizes or minimizes the value of an integral or functional of that function.
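For the simplest such problem, with fixed endpoints and a functional of the integral form below, the classical first-order necessary condition is the Euler–Lagrange equation (standard background, stated here for reference; notation is generic):

```latex
J(y) = \int_{x_0}^{x_1} F\bigl(x, y(x), y'(x)\bigr)\,\mathrm{d}x,
\qquad
\frac{\partial F}{\partial y}
  - \frac{\mathrm{d}}{\mathrm{d}x}\,\frac{\partial F}{\partial y'} = 0 .
```

Every smooth extremal of $J$ must satisfy this differential equation, which is the infinite-dimensional analogue of setting a derivative to zero in ordinary calculus.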

Fig. 1.2 An illustration of the Brachistochrone problem

Since the unknown here is a function, an infinite-dimensional object, it is well
expected that the extremum problem of the calculus of variations is much more
challenging than the extremum problem of the calculus.
It has been widely known that the calculus of variations was considered a key
mathematical branch after Leonhard Euler published the famous monograph Elementa
Calculi Variationum in 1733 and, in 1744, A Method for Finding Curved Lines Enjoying
Properties of Maximum or Minimum, or Solution of Isoperimetric Problems in the
Broadest Accepted Sense. The variational method is a powerful mathematical tool for
implementing the extrema (maxima or minima) of a functional. In his book on the
calculus of variations, Euler extended the known methods to form and solve
differential equations for the general problem of optimizing single-integral
variational quantities.
Nevertheless, it is worth stating that, before Euler studied the variational method
in a systematic way, quite a few specific optimization problems had essentially been
solved by using variational principles. Queen Dido faced the problem of finding the
closed curve with a fixed perimeter that encloses the maximum area. The extremal
solution is a circle, which can be obtained by applying the variational method.
Another problem came from Isaac Newton, who designed the shape of a body moving
through the air with the least resistance. However, the first problem solved by the
method of the calculus of variations was the path of least time, or the
Brachistochrone problem, proposed by Johann Bernoulli at the end of the seventeenth
century and illustrated in Fig. 1.2; its solution, a cycloid, was found by Jacob
Bernoulli, Isaac Newton, L'Hôpital, and Johann Bernoulli himself.
It involves finding the curve connecting two points (x0, y0) and (x1, y1) in a
vertical plane such that a bead sliding along the curve under the force of gravity
moves from (x0, y0) to (x1, y1) in the shortest period of time.
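The cycloid's optimality can be checked numerically. The sketch below (an illustrative toy, not from the book) compares the descent time along a straight chute joining (0, 0) to (1, 1), with y measured downward, against the cycloid through the same endpoints, using the travel-time functional T = ∫ sqrt((1 + y'(x)²) / (2 g y(x))) dx:

```python
import math

g = 9.81  # gravitational acceleration (m/s^2)

def descent_time(y, dy, x1, n=200_000):
    """Midpoint-rule estimate of T = integral_0^x1 sqrt((1 + y'^2)/(2 g y)) dx."""
    h = x1 / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        total += math.sqrt((1.0 + dy(x) ** 2) / (2.0 * g * y(x))) * h
    return total

# Straight chute from (0, 0) to (1, 1): y(x) = x, y'(x) = 1.
t_line = descent_time(lambda x: x, lambda x: 1.0, 1.0)

# Cycloid through (0, 0) and (1, 1): x = r(th - sin th), y = r(1 - cos th).
# Solve (th - sin th)/(1 - cos th) = 1 for the final parameter th1 by bisection.
f = lambda th: (th - math.sin(th)) / (1.0 - math.cos(th)) - 1.0
lo, hi = 1e-6, math.pi
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if f(mid) > 0.0:
        hi = mid
    else:
        lo = mid
theta1 = 0.5 * (lo + hi)
r = 1.0 / (1.0 - math.cos(theta1))
t_cycloid = theta1 * math.sqrt(r / g)   # closed-form travel time on a cycloid

print(t_line, t_cycloid)   # the cycloid beats the straight line
```

The straight chute takes roughly 0.64 s while the cycloid takes roughly 0.58 s, consistent with the cycloid being the curve of least time.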
Besides Euler, Lagrange, Newton, Leibniz, and the Bernoulli brothers also made great
contributions to the early development of the field. In the nineteenth and early
twentieth centuries, many mathematicians such as Hamilton, Jacobi, Bolza,
Weierstrass, Carathéodory, and Bliss also contributed much to the theory of the
solution of variational problems; see [29, 30] for a historical view on the topics
of the method of calculus of variations.
The initial stage of modern control theory was the publication of the well-known
minimum principle, in Russian in the late 1950s, e.g., [31–37], and in English in
1962 in the book The Mathematical Theory of Optimal Processes [38], by the Russian
mathematician Pontryagin and his collaborators Boltyanskii, Gamkrelidze, and
Mishchenko. Besides, American researchers who made many contributions to these
topics include Valentine, McShane, Hestenes, Berkovitz, and Neustadt.
The key contributions of the work by Pontryagin and his collaborators include
not only a rigorous formulation of calculus of variations problem with constrained
control variables, but also a mathematical proof of the minimum principle for optimal
control problems.
This book mainly studies how to apply the minimum principle in its various forms to
implement the solutions of various optimal control problems in fields such as
electrical engineering and chemical engineering.
Basically, it determines the optimal solution according to a set of equations or
inequalities. This kind of method fits problems with explicit analytical expressions
of the performance cost function and constraints.
More specifically, when the control vector is not constrained to a set, the
Hamiltonian (function) is introduced to solve the optimal control problem, and the
necessary conditions of optimal control, i.e., the canonical equations, the control
equation, the boundary conditions, and the transversality conditions, can be derived
by using the variational method.
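For reference, when the control is unconstrained these necessary conditions take the following standard form (a generic sketch with Hamiltonian $H$, costate $\lambda$, running cost $g$, and dynamics $f$; the book derives them in detail in later chapters):

```latex
H(x, u, \lambda, t) = g(x, u, t) + \lambda^{\mathsf{T}} f(x, u, t),
\qquad
\dot{x} = \frac{\partial H}{\partial \lambda}, \quad
\dot{\lambda} = -\frac{\partial H}{\partial x}, \quad
\frac{\partial H}{\partial u} = 0,
```

together with the boundary and transversality conditions determined by the endpoint specification.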
The variational method works on the premise that the control vector is not subject
to any restrictions; that is, the admissible control set can be regarded as an open
set of the whole control space, and the control variable is arbitrary. At the same
time, the Hamiltonian is assumed to be continuously differentiable in the control
variable.
However, in practical problems the control variable is often limited to a certain
range, which motivates the minimum principle. In fact, this method is extended from
the variational method [39], but it can be applied to the case where the control
variable is bounded, and it does not require the Hamiltonian to be continuously
differentiable with respect to the control input. Such constrained problems can be
solved by using Pontryagin's minimum principle [40, 41]. Many textbooks have been
dedicated to this kind of optimal control method, e.g., [39, 42–56] and references
therein. Many of them focus on the applications of Pontryagin's minimum principle
in various practical fields such as aerospace engineering [57–61], mechanical
engineering [62–66], electrical engineering [67–72], chemical engineering [73–76],
management science and economics [77–83], social science [84, 85], etc.
This book mainly covers the material introduced above, namely the extrema of
functionals via the variational method, optimal control by the variational method,
and optimal control with constraints via Pontryagin's minimum principle.
Besides, the dynamic programming method, initiated by Richard Bellman and his
collaborators [86–88], is another key analytical branch of optimal control methods,
e.g., [89–94]. Like the minimum principle, it is an effective way to deal with
optimal control problems in which the control vector is constrained to a certain
closed set. It transforms the complex optimal control problem into a recursive
functional relation over a multi-stage decision-making process.
By Bellman's principle of optimality, whatever the initial state and initial
decision are, the remaining decisions must be optimal for the stage and state
resulting from the first decision. Therefore, by using this principle, a multi-stage
decision-making problem can be transformed into a sequence of optimal single-stage
decision-making problems: the decision at each stage is independent of the previous
ones, and depends only on the initial state and initial decision of that stage.
When dynamic programming is used to solve the optimal control problem of a
continuous system, the continuous system can be discretized first, and the
continuous equations approximately replaced by finite difference equations.
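As a toy illustration of this recursive structure (a hypothetical sketch with arbitrary costs, not an example from the book), the backward Bellman recursion below solves a three-stage shortest-path problem on a small layered graph:

```python
# Backward dynamic programming on a layered graph.
# cost[k][i][j] = cost of moving from node i at stage k to node j at stage k+1.
# All numbers are arbitrary illustrative values.

cost = [
    [[2, 5], [3, 1]],   # arcs from stage 0 to stage 1
    [[4, 2], [1, 6]],   # arcs from stage 1 to stage 2
]
terminal = [0, 3]        # terminal cost at each stage-2 node

def solve(cost, terminal):
    """Return (V0, policy): optimal cost-to-go at stage 0 and per-stage decisions."""
    V = list(terminal)
    policy = []
    for stage in reversed(cost):
        newV, decisions = [], []
        for row in stage:                       # one row per current node
            # Principle of optimality: arc cost plus optimal cost-to-go.
            best_j = min(range(len(row)), key=lambda j: row[j] + V[j])
            decisions.append(best_j)
            newV.append(row[best_j] + V[best_j])
        V, policy = newV, [decisions] + policy
    return V, policy

V0, policy = solve(cost, terminal)
print(V0)   # optimal cost-to-go from each stage-0 node -> [6, 2]
```

Because each stage reuses the cost-to-go of the next one, the search over all control sequences collapses into one backward sweep, which is exactly the computational advantage dynamic programming exploits.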
Besides the above methods, which implement the optimal solution in a centralized
way, differential games have been employed since 1925 by Charles F. Roos [95].
Nevertheless, Rufus Isaacs was the first to study the formal theory of differential
games, in 1965 [96]; he formulated the so-called two-person (zero-sum)
pursuit-evasion games. Richard Bellman did similar work via the dynamic programming
method in 1957 [86]. See [97] for a survey of pursuit-evasion differential games.
Such differential games are a group of problems related to the modeling and
analysis of conflict in the context of a dynamic system [98, 99]. More specifically,
state variables evolve over time according to certain differential equations. Early
analyses of differential games reflected military interests by considering two
players, say the pursuer and the evader, with opposed goals; nowadays, more and more
analyses mainly reflect engineering, economic, or social considerations [100, 101].
Some research work considers adding randomness to differential games and the
derivation of the stochastic feedback Nash equilibrium (SFNE), e.g., the stochastic
differential game of capitalism by Leong and Huang [102]. In 2016, Yuliy Sannikov
received the Clark Medal from the American Economic Association for his
contributions to the analysis of differential games using stochastic calculus
methods [97, 103].
Differential games are related to optimal control problems. As stated earlier, in an
optimal control problem there is a single control u and a single performance
criterion to be optimized, while a differential game generalizes this to several
controls u1, u2, . . . , ui and several performance criteria. Each player attempts
to control the state of the system so as to achieve his own goal, and the system
responds to the inputs of all players.
More recently, a so-called mean-field game theory has been developed by Minyi Huang,
Peter E. Caines, and Roland Malhamé [104–109], who solved optimal control problems
in the field of engineering, say telecommunication problems, involving a large
population of mutually interacting agents, each of which has an effect on the
overall system that tends to vanish as the population size goes to infinity; and,
independently and around the same time, by Jean-Michel Lasry and Pierre-Louis Lions
[110–112], who studied strategic decision-making in a large population of individual
interacting players. Mean-field game theory has since been extended and applied to
many fields.
The term “mean-field” is inspired by mean-field theory in physics, which considers
the behavior of systems of large numbers of particles where each of the individual
particles has a negligible impact on the whole system.
In continuous time, a mean-field game is typically composed of a Hamilton–
Jacobi–Bellman equation that describes the optimal control problem of an individual
and a Fokker–Planck equation that describes the dynamics of the aggregate
distribution of agents. Under certain general assumptions, it can be shown that a
class of mean-field games is the limit of an N-player Nash equilibrium as the
population size N goes to infinity [107, 108, 113].
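In symbols, one common formulation of this coupled system has the following generic shape (a sketch only; the value function $u(x,t)$, density $m(x,t)$, diffusion $\nu$, Hamiltonian $H$, and coupling $F$ are written here in a standard form that may differ in detail from the cited works):

```latex
-\partial_t u - \nu\,\Delta u + H(x, \nabla u) = F(x, m),
\qquad
\partial_t m - \nu\,\Delta m
  - \operatorname{div}\!\bigl(m\,\nabla_p H(x, \nabla u)\bigr) = 0,
```

with a terminal condition on $u$ (the backward HJB equation) and an initial condition on $m$ (the forward Fokker–Planck equation), the forward-backward coupling being what makes the system nontrivial to solve.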
A related concept to that of mean-field games is “mean-field-type control”. In this
case, a social planner controls distribution of states and chooses a control strategy.
The solution to a mean-field-type control problem can typically be expressed as
a dual adjoint Hamilton–Jacobi–Bellman equation coupled with the Kolmogorov
equation. Mean-field-type game theory [114–117] is the multi-agent generalization
of the single-agent mean-field-type control [118, 119].
Basically, all the methods described so far rely upon an explicit formulation of the
problems to be solved. Nevertheless, some methods, like the direct methods
[120–123], can be applied to optimization problems with complex objective functions
or without explicit mathematical expressions. The basic idea of a direct method is
to use direct search to generate a sequence of points through a series of
iterations, such that it gradually approaches the best solution. Direct methods are
often based on experience or experiments. The numerical calculation methods can be
divided into the categories below:
• Interval elimination methods [124], also known as one-dimensional search methods,
are mainly used for solving single-variable problems; examples include the
golden-section method and polynomial interpolation.
• Hill climbing methods [125], also known as multidimensional search methods, are
mainly used for solving multivariable problems; examples include the coordinate
rotation method and the step acceleration method.
• Gradient-type methods [126] include unconstrained gradient methods, such as the
gradient descent method and quasi-Newton methods, and constrained gradient methods,
such as the feasible direction method and the gradient projection method.
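A minimal sketch of the first category, golden-section search for a single-variable minimum (illustrative code assuming the function is unimodal on the bracket; not an implementation from the book):

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Locate the minimizer of a unimodal f on [a, b] by interval elimination."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0          # 1/phi, about 0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):          # the minimum must lie in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                    # the minimum must lie in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return 0.5 * (a + b)

# Example: quadratic with known minimizer x = 2.
x_star = golden_section_min(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
print(x_star)   # close to 2.0
```

Each iteration discards a fixed fraction of the bracket, so the interval shrinks geometrically regardless of the function's values, which is what "interval elimination" refers to.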
In practice, in certain scenarios, it might be infeasible to apply the offline
optimization methods described briefly above, since they usually rely upon a
mathematical model of the problem. More especially, many factors, such as changes in
the environment and aging of the catalyst and equipment, may introduce disturbances
into the process even though the process is designed to operate continuously under
certain normal working conditions. Consequently, the proposed optimal control
solutions may differ from the actual optimal ones for the actual problems. There are
quite a few online optimization methods in the literature to overcome these
challenges.
The so-called local parameter optimization method [127, 128] adjusts the adjustable
parameters of the controller according to the difference between the reference model
and the output of the controlled process, so as to minimize the integral of the
squared output error. In this way, the controlled process can track the reference
model as accurately and quickly as possible.
Predictive control, also known as model-based control, is a type of optimal control
algorithm that arose in the late 1970s. Different from the usual discrete optimal
control algorithms, it does not use a constant global optimization objective, but a
receding finite-time-domain optimization strategy: the optimization is not carried
out offline, but repeatedly online.
Due to the localization of the finite objectives, only a solution with acceptable
performance in ideal situations can be obtained, while the receding implementation
can take into account uncertainties caused by model mismatch, time variation,
disturbances, and so on, by compensating for them in real time. The optimization is
always based on the actual environment, in order to keep the control input optimal
in practice. This heuristic receding optimization strategy thus accounts for both
the ideal optimization and the actual uncertainty in the future.
In a complex industrial environment, it is more practical and effective than optimal
control based on ideal conditions. We can establish an optimization mode by applying
predictive control and thereby deal with problems involving complex constraints,
multiple objectives, and nonlinear components. Introducing ideas of hierarchical
decision-making or artificial intelligence techniques into predictive control is a
promising way to overcome the shortcomings of single-model predictive control
algorithms, and has attracted attention.
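The receding-horizon idea can be sketched in a few lines (a hypothetical toy, not an algorithm from the book): at each step a short-horizon cost is minimized over a small candidate input set, only the first input is applied, and the optimization is repeated from the newly measured state.

```python
from itertools import product

def receding_horizon_step(x, horizon=3, candidates=(-1.0, 0.0, 1.0)):
    """Pick the first input of the best short-horizon sequence for x+ = x + u.

    Stage cost x^2 + 0.1 u^2 penalizes distance from the origin and control
    effort. Exhaustive search is feasible only because the candidate set is tiny.
    """
    best_u, best_cost = None, float("inf")
    for seq in product(candidates, repeat=horizon):
        xk, cost = x, 0.0
        for u in seq:
            xk = xk + u                    # simple scalar dynamics
            cost += xk ** 2 + 0.1 * u ** 2
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u

# Closed loop: re-optimize at every step, apply only the first input.
x = 4.0
for _ in range(8):
    x = x + receding_horizon_step(x)
print(x)   # driven to the origin
```

Only the first move of each optimized sequence is ever executed; re-solving from the measured state at every step is what lets the scheme absorb disturbances and model mismatch online.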
Decentralized control is commonly used in the control of large-scale systems. In
this case, computer online steady-state optimization often applies a hierarchical
control structure. This structure has both a control layer and an optimization
layer, wherein the optimization layer is a two-level structure composed of local
decision-makers and a coordinator. In the optimization process, each decision-maker
solves its subprocess optimization in parallel, and the coordinator coordinates
these optimization processes; the optimal solution is then obtained through mutual
iteration.
Due to the difficulty of having an accurate mathematical model of industrial
processes, which tend to be nonlinear and time variant, the Polish scientist
Findeisen pointed out that the solution obtained by using the model in the
optimization algorithm is open loop [129–131]. In the design stage of online
steady-state control of large-scale industrial processes, the open-loop solution can
be used to determine the optimal working point. However, in practice, this solution
may not put the industrial process in the optimal condition; on the contrary, it may
even violate the constraints. Their new idea was to extract the steady-state
information of the related variables from the actual process, and feed it back to
the coordinator or local decision-makers.
The difficulty of steady-state hierarchical control is that the input-output
characteristics of the actual process are unknown. The feedback correction mechanism
proposed by Findeisen can only obtain a suboptimal solution, and its main
disadvantage is that it is difficult to accurately estimate how far the suboptimal
solution deviates from the optimal one; this degree of suboptimality often depends
on the selection of the initial point. A natural idea is to separate optimization
and parameter estimation and carry them out alternately until the iteration
converges to a solution. In this way, the online optimization control of the
computer includes two tasks: optimization based on the rough model, which is usually
available, and modification based on the set point. This method is called the
integrated research method of system optimization and parameter estimation.
For more and more complex control plants, on the one hand, the required control
performance is no longer limited to one or two indices; on the other hand, all the
above optimization methods are based on an accurate mathematical model of the
optimization problem, and for many practical engineering problems it is very
difficult or impossible to obtain such a model. This limits the practical
application of classical optimization methods. With the development of fuzzy theory,
neural networks and other intelligent techniques, and computer technology, smart
optimization methods have been developed.
The research of artificial neural network originated from the work of Landahl,
Mcculloch, and Pitts in 1943 [132]. In the aspect of optimization, in 1982, Hop-
field first introduced the Lyapunov energy function to judge the stability of the net-
work [133], and proposed Hopfield single-layer discrete model. This work has been
extended by Hopfield and Tank in [134]. In 1986, Hopfield and Tank directly mapped
the Hopfield model onto an electronic circuit and realized the hardware
simulation [135]. Kennedy and Chua in [136] proposed an analog circuit model
based on the nonlinear circuit theory and studied the stability of the electronic circuit
using the Lyapunov function of the system differential equation.
All these works promote the research of neural network optimization. According
to the theory of neural network, the minimum point of the energy function of the
neural network corresponds to the stable equilibrium point of the system, so the
problem is transformed into seeking the stable equilibrium point of the system.
With the evolution of time, the orbit of the network always moves in the direction
of decreasing the energy function, and finally reaches the equilibrium point of the
system.
Therefore, if the stable attractor of the neural network system is considered as the
minimum point of the appropriate energy function or augmented energy function, the
optimal calculation will reach a minimum point along with the system flow from an
initial point. If the concept of global optimization is applied to the control system, the
objective function of the control system will eventually reach the desired minimum
point. This is the basic principle of neural optimization [137]. Since the Hopfield
model can be applied to both discrete and continuous problems, it is expected to
effectively solve the nonlinear optimization problem of mixed discrete variables in
control engineering.
1.2 Optimal Control Theory 11
Like general mathematical programming, the neural network method also has
the weakness of high computational cost. How to combine approximate reanalysis and other
structural optimization techniques to reduce the number of iterations is one of the
directions for further research.
Genetic algorithm and genetic programming are new search and optimization
techniques [138, 139]. They imitate the evolution and heredity of organisms and,
according to the principle of “survival of the fittest”, make the problem to be solved
gradually approach the optimal solution from an initial solution. In many cases, the
genetic algorithm is superior to the traditional optimization method. It allows the
problem to be nonlinear and discontinuous, and can find the globally optimal solution
and the suboptimal solutions from the whole feasible solution space, avoiding only
getting the local optimal solution. In this way, we can provide more useful reference
information for better system control. At the same time, the process of searching for
the optimal solution is instructive, and may avoid the curse of dimensionality encountered
by a general optimization algorithm. With the development of computer technology,
these advantages of the genetic algorithm will play an increasingly important role
in the field of control. The results show that the genetic algorithm is a potential
structural optimization method.
Optimal control is one of the fields in which fuzzy theory is most widely applied.
Since Bellman and Zadeh did pioneering work on this topic in the early
1970s [140], the main research has focused on theoretical work in the general sense,
fuzzy linear programming, multi-objective fuzzy programming, and the application
of fuzzy programming theory to stochastic programming and many practical problems.
The main research method is to use the membership function of the fuzzy set to
transform the fuzzy programming problem into the classical one. The requirements
of the fuzzy optimization method are the same as those of the ordinary optimization
method. It is still to seek a control scheme (i.e., a set of design variables) to meet
the given constraints and optimize the objective function. The fuzzy optimization
method can be summarized as solving a fuzzy mathematical programming problem
involving control variables, objective functions, and constraints. However, those con-
trol variables, objective functions, and constraints may be fuzzy, or some parts are
fuzzy and the other parts are clear. For example, the fuzzy factors could be included
in the constraints such as geometric constraints, performance constraints, and human
constraints.
The basic idea of solving a fuzzy programming problem is to transform fuzzy
optimization into an ordinary optimization problem. One way for solving fuzzy
problems is to give a fuzzy solution; the other is to give a specific crisp solution. It
must be pointed out that the above solutions are all for fuzzy linear programming.
Nevertheless, lots of practical engineering problems are described by nonlinear
fuzzy programming. Therefore, some people put forward the level cut set method,
the limit search method, and the maximum level method, and achieved some grati-
fying results. In the field of control, fuzzy control has been integrated with self-learning
algorithms and genetic algorithms. By improving the learning algorithm
and the genetic algorithm, and according to the given optimization performance func-
tion, the controlled object is gradually optimized through learning, such that the structure
and parameters of the fuzzy controller can be effectively determined.
There also exist many other smart optimization methods in the literature, e.g.,
ant colony optimization [141], particle swarm optimization [142], and simulated
annealing algorithm [143].
Sections 1.4 and 1.5 will then give the general formulation of optimal control
problems. Before that, Sect. 1.3 first introduces some optimal control problems
from different fields.

1.3 Examples of Optimal Control Problems

Example 1.1 (Minimum Time for An Unmanned Vehicle Driving)
We first consider the example mentioned in the previous section for introducing
the optimal control theory. Here consider a simple case, say the vehicle drives in
a straight line from the parking point O to the destination point e, as illustrated in
Fig. 1.3. A similar example is also specified in [144]. The objective is to make the
vehicle reach the destination as quickly as possible.
Let d(t) denote the distance of the vehicle from the starting point O at time t. As
stated in the earlier part, the vehicle could be accelerated by using the throttle and
decelerated by using the brake. Let u(t) represent the throttle acceleration when it
is positive valued and the brake deceleration when it is negative valued. Then the
following equation holds:

d̈(t) = u(t).

Selecting the position and velocity of the vehicle as the state variables, i.e.,
x(t) = \begin{bmatrix} d(t) \\ \dot{d}(t) \end{bmatrix}, and the throttle acceleration/brake deceleration as the control variable.
Hence we obtain the state dynamics differential equation as follows:
   
\dot{x}(t) = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t). \qquad (1.1)

Let t0 and tf denote the departure time and arrival time of the vehicle, respectively.
Since the vehicle parks at O and stops at e, the boundary conditions of the state are

x(t_0) = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \qquad
x(t_f) = \begin{bmatrix} e \\ 0 \end{bmatrix}.
Fig. 1.3 An illustration of a simplified vehicle driving control

Fig. 1.4 An illustration of a simple charging circuit

Practically, the acceleration of a vehicle is bounded by some upper limit which
depends on the capability of the engine, and the maximum deceleration is also limited
by the braking system parameters. Denote the maximum acceleration and maximum
deceleration by M1 and M2 , respectively, with M1 , M2 > 0, which gives the con-
straint for the control variable:

−M2 ≤ u(t) ≤ M1 .

In addition, the vehicle has limited fuel, the amount of which is denoted by G,
and there are no gas stations on the way, so another constraint is posed:

\int_{t_0}^{t_f} \left[k_1 u(t) + k_2 \dot{d}(t)\right] dt \le G.

Now we can formulate the optimal control problem: for the system specified
in (1.1), given t0 , x(t0 ), and x(t f ), find the u(t), t ∈ [t0 , t f ] under the underlying
constraints to minimize the time used to reach the destination, i.e.,

J(u) \triangleq t_f - t_0.
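As a quick numerical illustration (not from the book), the sketch below Euler-simulates system (1.1) under a bang-bang control, which is the known time-optimal policy for the double integrator when the bounds are symmetric; the values e = 1 and M1 = M2 = 1 are assumptions made for the demo.

```python
# Sketch: bang-bang driving of the double integrator (1.1).
# Assumed values: e = 1, M1 = M2 = M = 1; analytically the minimum
# time is 2 * sqrt(e / M) = 2 in this symmetric case.

def simulate_bang_bang(e=1.0, M=1.0, dt=1e-4):
    """Accelerate at +M until the midpoint, then brake at -M."""
    d, v, t = 0.0, 0.0, 0.0
    while d < e / 2:          # full throttle for the first half
        v += M * dt
        d += v * dt
        t += dt
    while v > 0:              # full brake for the second half
        v -= M * dt
        d += v * dt
        t += dt
    return d, v, t

d_f, v_f, t_f = simulate_bang_bang()
```

The simulated arrival time approaches the analytic minimum time 2 as the step size shrinks, and the vehicle stops essentially at the target.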

Example 1.2 (Minimum Energy Consumption in An Electric Circuit)
Consider the charging circuit shown in Fig. 1.4. Assume that a control voltage is
applied to charge the capacitor to a given voltage within a given time period, and at
the same time, minimize the electric energy consumed on the resistor.
Denote by u i (t) and u c (t) the control voltage and the voltage of the capacitor,
respectively, i(t) the charging current, R the resistance of the resistor and C the
capacitance of the capacitor. Hence, the following equation holds for the control
variable u i (t):
14 1 Introduction

C \frac{du_c(t)}{dt} = \frac{1}{R}\left[u_i(t) - u_c(t)\right] = i(t).
That is, we get the state dynamics equation

\frac{du_c(t)}{dt} = -\frac{1}{RC} u_c(t) + \frac{1}{RC} u_i(t). \qquad (1.2)
The power consumed on the resistor is

w_R(t) = \frac{1}{R}\left[u_i(t) - u_c(t)\right]^2.
Let t0 and t f denote the starting time and ending time of the charging process,
respectively, and V0 and V f denote the starting voltage and ending voltage of the
capacitor, respectively.
Similarly, the problem can be formulated as follows: for the system specified in (1.2),
given t0, tf, uc(t0) = V0, and Vf, find the ui(t), t ∈ [t0, tf], such that uc(tf) = Vf,
and minimize the energy consumed on the resistor

J(u) \triangleq \int_{t_0}^{t_f} \frac{1}{R}\left[u_i(t) - u_c(t)\right]^2 dt.
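A small numerical sketch (illustrative values of R, C, tf, and Vf assumed) makes the tradeoff concrete: any two charging currents delivering the same total charge C·Vf reach the same final voltage, but the constant-current profile dissipates less energy in the resistor, in line with the classical result.

```python
# Sketch: comparing two charging controls for the RC circuit (1.2).
# The control voltage is taken as u_i(t) = u_c(t) + R*i(t) for a chosen
# current profile i(t), so the resistor loss is simply R*i(t)^2.

def charge(R, C, tf, current, steps=20000):
    """Euler-simulate u_c and accumulate the resistor loss."""
    dt = tf / steps
    uc, energy = 0.0, 0.0
    for k in range(steps):
        i = current(k * dt)
        uc += (i / C) * dt         # C du_c/dt = i(t)
        energy += R * i * i * dt   # w_R = R i^2
    return uc, energy

R, C, tf, Vf = 1.0, 1.0, 1.0, 1.0      # assumed demo values
I0 = C * Vf / tf                        # constant current reaching Vf
uc1, e1 = charge(R, C, tf, lambda t: I0)
uc2, e2 = charge(R, C, tf, lambda t: 2 * I0 * t / tf)  # ramp, same charge
```

Both profiles end near Vf, but the ramp dissipates about 4/3 of the constant-current loss, illustrating why the optimal control here spreads the current evenly.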

Next, a specific optimal control problem is introduced on how social insects, e.g., a
population of bees, determine the makeup of their society. This kind of problem
originally comes from Chap. 2 of the book “Caste and Ecology in Social Insects” by G.
Oster and E. O. Wilson [145, 146].

Example 1.3 (Reproductive Strategies in Social Insects) [146]
Denote by t f the length of the season starting from t0 = 0 to t f . Introduce w(t)
to represent the number of workers at time t, q(t) the number of queens and α(t) the
fraction of colony effort devoted to increasing workforce.
The control variable α(t) is constrained to

0 ≤ α(t) ≤ 1.

We continue this model by introducing the state dynamics for the number of
workers and the number of queens. The worker population evolves according to

ẇ(t) = −μw(t) + bs(t)α(t)w(t),

where μ is a given constant death rate, b is another constant, and s(t) is the known
rate at which each worker contributes to the bee economy, and the initial state is
w(0) = w0 .
Suppose also that the population of queens changes according to

q̇(t) = −νq(t) + c[1 − α(t)]s(t)w(t),

with constants ν and c and initial q(0) = q0 .
The number of queens at the final time t f is q(t f ). Thus the problem is formulated
as an optimization problem such that the bees attempt to maximize q(t f ).
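The problem can be explored numerically. The sketch below (with hypothetical constants μ, ν, b, c, s and an illustrative grid of switch times) Euler-integrates the two population equations and evaluates q(tf) for a few candidate effort schedules α(t):

```python
# Sketch: evaluating q(t_f) for candidate schedules alpha(t) in the
# worker/queen model. All parameter values here are assumptions for
# illustration, not taken from [145, 146].

def final_queens(alpha, tf=1.0, w0=10.0, q0=1.0,
                 mu=0.1, nu=0.1, b=1.0, c=1.0, s=1.0, steps=2000):
    dt = tf / steps
    w, q = w0, q0
    for k in range(steps):
        a = alpha(k * dt)
        w, q = (w + (-mu * w + b * s * a * w) * dt,        # workers
                q + (-nu * q + c * (1 - a) * s * w) * dt)  # queens
    return q

# A constant split versus "grow workers first, then only queens".
candidates = [lambda t: 0.5] + [
    (lambda ts: (lambda t: 1.0 if t < ts else 0.0))(ts)
    for ts in (0.2, 0.4, 0.6, 0.8)
]
values = [final_queens(a) for a in candidates]
best = max(values)
```

Scanning switch times like this hints at the bang-bang structure of the optimal α(t) that later chapters derive analytically.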


Example 1.4 (Control of the Traffic Lights)
Consider the road junction of two single-direction lanes which are called lane 1 and
lane 2, respectively. As illustrated in Fig. 1.5, the lengths or the amounts of the waiting
vehicles at the junction in the two lanes are denoted by x1 (t) and x2 (t), respectively,
and the traffic flows to the junction are denoted by v1 (t) and v2 (t), respectively.
Suppose that the maximum traffic flows for the two lanes are represented by a1 and
a2 , respectively.
Denote by g1(t) and g2(t) the lengths of time that the green lights are “on” in the two
lanes, respectively. Suppose that the switching period of the traffic lights, denoted
by tf, is fixed. Suppose also that the time required for the vehicles to accelerate plus
the duration of the yellow lights is fixed, denoted in aggregate by y.
Then the following holds:

g1 (t) + g2 (t) + y = t f .

Let u(t) represent the average traffic flow in lane 1 within one switching period of
the traffic lights, i.e.,

u(t) = a_1 \frac{g_1(t)}{t_f},

Fig. 1.5 An illustration of a simplified traffic control at the junction

then the average traffic flow in lane 2 can be obtained:

a_2 \frac{g_2(t)}{t_f} = -\frac{a_2}{a_1} u(t) + a_2 \left(1 - \frac{y}{t_f}\right).

Based on this analysis, the state dynamics equations are expressed as

\dot{x}_1(t) = v_1(t) - u(t),

\dot{x}_2(t) = v_2(t) + \frac{a_2}{a_1} u(t) - a_2 \left(1 - \frac{y}{t_f}\right).

There would be a constraint on u(t) if the time length of green light in lane 1 has
time limits:

u min ≤ u(t) ≤ u max .

Hence the optimal control problem is formulated: given the initial states x1(t0) and x2(t0),
determine the control u(t), t ∈ [0, tf], such that x1(tf) = x2(tf) = 0, and at the same
time minimize the waiting time of the vehicles

J(x, u) \triangleq \int_{0}^{t_f} \left[x_1(t) + x_2(t)\right] dt.
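A minimal evaluation sketch (with assumed arrival rates, flow limits, and initial queue lengths) integrates the two queue equations for a constant control u and accumulates the waiting-time cost J:

```python
# Sketch: evaluating the waiting-time cost for the traffic-light model.
# All numbers (arrival rates v1, v2, limits a1, a2, yellow time y, initial
# queues) are illustrative assumptions; queues are clipped at zero.

def waiting_cost(u, tf=60.0, v1=0.3, v2=0.2, a1=1.0, a2=1.0, y=5.0,
                 x10=10.0, x20=8.0, steps=6000):
    dt = tf / steps
    x1, x2, J = x10, x20, 0.0
    out2 = a2 * (1 - y / tf) - (a2 / a1) * u  # average outflow of lane 2
    for _ in range(steps):
        J += (x1 + x2) * dt                   # accumulate waiting time
        x1 = max(0.0, x1 + (v1 - u) * dt)
        x2 = max(0.0, x2 + (v2 - out2) * dt)
    return J

costs = {u: waiting_cost(u) for u in (0.3, 0.5, 0.7)}
```

Sweeping u like this shows the tension the optimal control must resolve: green time given to lane 1 is taken from lane 2.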

Example 1.5 (Soft Landing of A Spacecraft)
Consider a spacecraft required to land softly on the Earth's surface; that is, the
landing speed of the spacecraft should be as small as possible. In addition
to the parachute deceleration during the return stage, the engine needs to be started
in the final stage to reduce the landing speed into the allowable range. At the same
time, in order to reduce the cost, the deceleration engine is required to consume as
little fuel as possible during the landing process.
In order to simplify the description of the problem, the spacecraft may be considered
as a particle, which is assumed to move along the vertical line of the
surface at the last stage of the landing process. Then the state dynamics equations are
given as

m(t) \frac{dv(t)}{dt} = p(t) + f(h, v) - m(t) g,

\frac{dh(t)}{dt} = v(t),

\frac{dm(t)}{dt} = -\alpha p(t),

Fig. 1.6 An illustration of a soft landing problem of a spacecraft

where m(t) is the mass of the spacecraft, including the self-weight of the spacecraft
and the mass of the fuel carried, h(t) is the distance from the spacecraft to the surface,
v(t) is the velocity of spacecraft with the direction of vertical upward being positive,
f (h, v) is the air resistance, g is the acceleration of the gravity, which is set as a
constant, α is the combustion coefficient of the engine, which is also a constant, and
p(t) is the engine thrust, which is the control variable to be determined.
Suppose that the engine in the return capsule ignites at t = 0, and give the initials

v(0) = v0 ,
h(0) = h 0 ,
m(0) = Ms + Me ,

where Ms and Me represent the mass of the spacecraft itself and the total mass of
the fuel carried, respectively (Fig. 1.6).
If the soft landing time of the return capsule is t f , then it has the requirement

v(t f ) = 0,
h(t f ) = 0.

The thrust p(t) of the engine is always positive and its maximum value is set
to be pM, that is, 0 ≤ p(t) ≤ pM. Hence, the problem of soft landing is formulated
as an optimization problem: design the engine thrust function p(t) to
transfer the spacecraft from the initial state \begin{bmatrix} h(0) \\ v(0) \end{bmatrix} to the end state \begin{bmatrix} h(t_f) \\ v(t_f) \end{bmatrix} under
the above constraints, and minimize the fuel consumption at the same time, that is,
maximize the mass at the final time

J \triangleq m(t_f).
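The dynamics can be integrated numerically to see how a thrust profile trades fuel for braking. The sketch below neglects air resistance (f(h, v) = 0) and uses hypothetical parameter values and an arbitrary, non-optimized thrust profile:

```python
# Sketch: forward-Euler integration of the soft-landing dynamics for a
# given thrust profile p(t). Parameters (h0, v0, m0, alpha, tf) and the
# constant-thrust profile are illustrative only.

def descend(p, tf=10.0, h0=100.0, v0=-20.0, m0=50.0,
            g=9.8, alpha=0.001, steps=10000):
    dt = tf / steps
    h, v, m = h0, v0, m0
    for k in range(steps):
        thrust = p(k * dt)
        a = thrust / m - g          # m dv/dt = p - m g, drag neglected
        h += v * dt
        v += a * dt
        m -= alpha * thrust * dt    # dm/dt = -alpha p
    return h, v, m

h_f, v_f, m_f = descend(lambda t: 0.0)    # free fall: no fuel is burned
h_b, v_b, m_b = descend(lambda t: 500.0)  # constant braking thrust
```

The braking run ends with less mass but a much smaller descent speed, which is exactly the fuel-versus-softness tradeoff the performance index J = m(tf) formalizes.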

In the following, a problem of target interception in space is given.

Example 1.6 (Interception of A Target in Space)
Suppose that a space target moves at a certain speed, the thrust of the interceptor is
fixed, and the space target and the interceptor move in the same plane. The direction
of the thrust of the interceptor is the control variable. The goal is to control the thrust
direction of the interceptor during the time interval [t0 , t f ] in order to intercept the
space target.
Then the simplified state dynamics equations of the relative motion of the space
target and interceptor are given as

ẍ(t) = η cos(α(t))
ÿ(t) = η sin(α(t)),

where x(t) and y(t) are the coordinates of the relative positions of the space target
and the interceptor, respectively, η is the thrust amplitude of the interceptor under
the assumption that the mass of the interceptor is equal to 1, and α(t) is the thrust
direction angle of the interceptor.
We hope to design the control for the thrust direction angle α(t) of the interceptor
to achieve the goal of target interception, that is,

x(t f ) = 0,
y(t f ) = 0,

and meanwhile minimize the time cost to accomplish the interception

J \triangleq t_f - t_0.

Example 1.7 (Optimal Consumption Strategy)
Suppose a person holds a fund of amount x0 at the initial time t0. He plans to
consume the fund within the time interval [t0, tf]. Taking the interest on the
bank deposit into account, how should this person consume to obtain the maximum benefit?
Denote by x(t) the fund he holds at time t, including the interest obtained
from the bank deposit, and by u(t) the consumption he makes. The interest rate of the
bank deposit is fixed and denoted by α. During the process of consumption u(t), the
satisfaction or the utility of this person is

L(t) = u(t) exp(−βt),

where β is the discount rate, which represents the interest rate that the future con-
sumption funds are discounted into the present value.
The optimal consumption problem can be described as follows. Consider
the dynamic equation of the fund

ẋ(t) = αx(t) − u(t),

and the initial and the end value of the fund

x(t0 ) = x0 ,
x(t f ) = 0.

The goal is to find the optimal consumption strategy u(t), t ∈ [t0, tf], such that the
total utility is maximized:

J(u) \triangleq \int_{t_0}^{t_f} u(t) \exp(-\beta t) \, dt.
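As a feasibility check (not the optimal solution), the constant consumption rate that exactly exhausts the fund can be obtained from the closed-form solution of the linear state equation and verified by simulation; the numbers x0, α, and T below are illustrative:

```python
# Sketch: for dx/dt = alpha*x - u with x(0) = x0, a constant rate
# u* = alpha * x0 * e^{alpha T} / (e^{alpha T} - 1) gives x(T) = 0.
# Values of x0, alpha, T are assumed for the demo.

import math

def simulate_fund(u_rate, x0=100.0, alpha=0.05, T=10.0, steps=100000):
    dt = T / steps
    x = x0
    for _ in range(steps):
        x += (alpha * x - u_rate) * dt   # dx/dt = alpha x - u
    return x

T, alpha, x0 = 10.0, 0.05, 100.0
u_star = alpha * x0 * math.exp(alpha * T) / (math.exp(alpha * T) - 1)
x_final = simulate_fund(u_star)
```

The simulated fund ends essentially at zero, confirming the budget constraint x(tf) = 0; choosing *when* to consume, which the discounted utility rewards, is the actual optimal control question.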

Example 1.8 (Advertising Expense in A Market)
Advertising expense is one of the important expenses of an enterprise, and its pay-
ment strategy is an important factor that determines the total income of an enterprise.
When enterprises make the strategy of advertising expenses, on the one hand, they
should increase the sales of their products through the role of advertising to avoid
the market forgetting of their products; on the other hand, they should pay attention
to the market saturation, that is, the demand for a product from customers is finite.
When the actual sales approach the maximum demand, the role of advertising will
be reduced.
Based on the above considerations, the relationship between the product sales and
advertising expenses can be described by the following dynamic model:
 
\dot{x}(t) = -\alpha x(t) + \beta u(t) \left[1 - \frac{x(t)}{x_M}\right],

which is the so-called Vidale–Wolfe model [147], and where x(t) and u(t) are the
sales volume of products and the payment of advertising expenses, respectively, α and
β are constants, which, respectively, represent the role of market forgetfulness and
advertising utility in increasing the sales volume, and x M is the maximum demand
of the product in the market.
Assuming the sales revenue per unit amount of the product is q, the cumulative
revenue within the time interval [t0, tf] is

J(x, u) \triangleq \int_{t_0}^{t_f} \exp(-\beta t)\left[q x(t) - u(t)\right] dt,

where β represents a discount rate.
The optimal market advertising expense problem is to find the advertising
payment strategy u(t), t ∈ [t0, tf], that makes the sales volume increase from x(t0) = x0
to x(tf) = xf and maximizes the cumulative revenue J.
When considering the practical situation, it should be noted that the following
constraints should be satisfied in the above optimal market advertising payment
problem:

0 ≤ x(t) ≤ x M ,
0 ≤ u(t) ≤ u M ,

where u M is a given upper limit of the advertising payment.
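The Vidale–Wolfe dynamics and the discounted revenue can be evaluated numerically for candidate constant payment levels. The sketch below uses hypothetical constants and deliberately splits the symbol β, which the text uses both for the advertising-effect coefficient and for the discount rate:

```python
# Sketch: simulating the Vidale-Wolfe model and the discounted revenue
# for constant advertising payments. beta_ad is the advertising-effect
# coefficient, beta_disc the discount rate; all values are assumed.

import math

def revenue(u, x0=10.0, xM=100.0, alpha=0.2, beta_ad=0.5,
            beta_disc=0.05, q=1.0, tf=20.0, steps=20000):
    dt = tf / steps
    x, J = x0, 0.0
    for k in range(steps):
        t = k * dt
        J += math.exp(-beta_disc * t) * (q * x - u) * dt
        x += (-alpha * x + beta_ad * u * (1 - x / xM)) * dt  # Vidale-Wolfe
    return x, J

results = {u: revenue(u) for u in (0.0, 5.0, 20.0)}
```

With these numbers, a moderate constant payment already beats no advertising at all, and the sales volume never leaves the band [0, xM], consistent with the saturation term.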
Example 1.9 (Mass and Spring) [148]
Consider a mechanical system composed of two masses, m 1 and m 2 , and two
springs, such that one spring, with a spring constant k1 , connects the mass m 1 to
a fixed place and the other, with a spring constant k2 , connects the two masses. A
control input u(t) is applied to the mass m 2 . See an illustration displayed in Fig. 1.7.
Denote by x1(t) and x2(t) the displacements of the masses m1 and m2, respectively,
and denote by x3(t) and x4(t) the velocities of these two masses, respectively.
Thus the state dynamics of the underlying system is specified as

\dot{x}_1(t) = x_3(t),

\dot{x}_2(t) = x_4(t),

\dot{x}_3(t) = -\frac{k_1}{m_1} x_1(t) + \frac{k_2}{m_1}\left[x_2(t) - x_1(t)\right],

\dot{x}_4(t) = -\frac{k_2}{m_2}\left[x_2(t) - x_1(t)\right] + \frac{1}{m_2} u(t).

The performance for the underlying system could be to minimize the deviation
from the desired displacement and velocity, and to minimize the control effort.
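One quick consistency check on these state equations: with u(t) = 0 the system is a conservative mass-spring chain, so the total mechanical energy should be constant along trajectories. The RK4 sketch below (masses and spring constants set to 1 as an assumption) verifies this numerically:

```python
# Sketch: RK4 integration of the two-mass, two-spring dynamics with u = 0.
# Energy 0.5*m1*x3^2 + 0.5*m2*x4^2 + 0.5*k1*x1^2 + 0.5*k2*(x2-x1)^2
# should stay (nearly) constant; parameter values are assumed.

m1 = m2 = 1.0
k1 = k2 = 1.0

def f(s):
    x1, x2, x3, x4 = s
    return (x3,
            x4,
            -(k1 / m1) * x1 + (k2 / m1) * (x2 - x1),
            -(k2 / m2) * (x2 - x1))

def rk4_step(s, dt):
    def add(a, b, h):
        return tuple(ai + h * bi for ai, bi in zip(a, b))
    k_1 = f(s)
    k_2 = f(add(s, k_1, dt / 2))
    k_3 = f(add(s, k_2, dt / 2))
    k_4 = f(add(s, k_3, dt))
    return tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k_1, k_2, k_3, k_4))

def energy(s):
    x1, x2, x3, x4 = s
    return (0.5 * m1 * x3 ** 2 + 0.5 * m2 * x4 ** 2
            + 0.5 * k1 * x1 ** 2 + 0.5 * k2 * (x2 - x1) ** 2)

s = (1.0, 0.0, 0.0, 0.0)   # pull mass 1 out by one unit, release
e0 = energy(s)
for _ in range(10000):
    s = rk4_step(s, 0.01)
drift = abs(energy(s) - e0)
```

The negligible drift over a long horizon is evidence that the signs and couplings in the four state equations are consistent.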
Example 1.10 (Vehicle Suspension Systems)
Consider a simplified model of a vehicle suspension system. Denote by x(t) the
position, and by u(t) the control input for the suspension system.

Fig. 1.7 An illustration of a mechanical system composed of two masses and two springs

Fig. 1.8 An illustration of a simplified model of a vehicle suspension system

Thus the state equation is specified as

m ẍ(t) = −kx(t) − mg + u(t),

where m and k represent the mass and the spring constant of the suspension system,
respectively, and g is the acceleration of the gravity (Fig. 1.8).
Formulate the optimal control problem to consider a tradeoff between the mini-
mization of the control energy and passengers’ comfort.
Example 1.11 (Chemical Processing) [144, 149]
Consider a chemical processing system as displayed in Fig. 1.9. Suppose that
water flows into tank I and tank II at rates of w1(t) and w2(t), respectively,
and the chemical liquid flows into tank I at a rate of z(t). The cross-sectional areas of
tank I and tank II are denoted by α1 and α2, respectively. Denote by y1(t) and y2(t)
the liquid levels of tank I and tank II, respectively.
It is also considered that there is a tunnel between the tanks such that the flow rate
between the tanks, denoted by v(t), is proportional to the difference between y1 (t)
and y2 (t).
Assume that the mixtures in the tanks are homogeneous, and denote by θ1(t) and
θ2(t) the volumes of the chemical component in tank I and tank II, respectively.
Specify the state dynamics equations for the underlying system in the variables
y1(t), y2(t), θ1(t), and θ2(t).
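As a hedged sketch of what these equations could look like, assume the tunnel flow is v(t) = k[y1(t) − y2(t)] with a hypothetical proportionality constant k, that the flow goes from tank I to tank II (y1 ≥ y2), and that neither tank has a further outlet; then mass balances on volume and on the chemical give:

```latex
% Volume balances: tank volumes are V_i = \alpha_i y_i.
\alpha_1 \dot{y}_1(t) = w_1(t) + z(t) - k\left[y_1(t) - y_2(t)\right],
\qquad
\alpha_2 \dot{y}_2(t) = w_2(t) + k\left[y_1(t) - y_2(t)\right].
% Chemical balances: the mixture leaving tank I carries the
% concentration \theta_1/(\alpha_1 y_1) of tank I.
\dot{\theta}_1(t) = z(t)
  - \frac{\theta_1(t)}{\alpha_1 y_1(t)}\, k\left[y_1(t) - y_2(t)\right],
\qquad
\dot{\theta}_2(t) = \frac{\theta_1(t)}{\alpha_1 y_1(t)}\,
  k\left[y_1(t) - y_2(t)\right].
```

If the flow could reverse (y2 > y1), the concentration factor would switch to that of tank II; the full problem setup in [144, 149] should be consulted for the exact outflow arrangement.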
Next, a complex and famous optimal control problem is introduced: the inverted
pendulum problem. The inverted pendulum system is a typical piece of teaching
experimental equipment in the field of automatic control. The research on the inverted
pendulum system can be summed up as research on a multivariable, strongly coupled,
absolutely unstable, and nonlinear system. Its control methods have a wide range of uses in
the fields of military industry, aerospace, robotics, and general industrial processes.
For example, the balance control of a robot's walking process, the verticality control
of rocket launching, and the attitude control of satellite flight are all related to
the inverted pendulum, which has become a research hotspot in the field of control. On one hand,

Fig. 1.9 An illustration of a simplified model of a chemical processing system

many control theories and methods have been put into practice here; on the other
hand, in making efforts to study its stability control, people have been discovering
new control methods and exploring new control theories.

Example 1.12 (Inverted Pendulum Systems)
Considering a single inverted pendulum system, it is required to keep the pendulum
with a length l on a cart with mass M in a vertical position. A horizontal force u(t)
is applied to the cart. See an illustration displayed in Fig. 1.10.
For analytical simplicity, it is supposed that there is a ball with mass m at the top
of the pendulum, and that the radius of the ball and the mass of the pendulum are
negligible.
Denote by x1(t) and x3(t) the horizontal displacement of the cart and the angular
position of the pendulum from the vertical line, respectively. And denote by x2(t) and x4(t)
the velocity of the cart and the angular velocity of the pendulum, respectively.
Thus the linearized state dynamics of the underlying system can be specified as

\dot{x}_1(t) = x_2(t),

\dot{x}_2(t) = -\frac{mg}{M} x_3(t) + \frac{1}{M} u(t),

\dot{x}_3(t) = x_4(t),

\dot{x}_4(t) = \frac{[M + m]g}{Ml} x_3(t) - \frac{1}{Ml} u(t),
where g represents the gravitational constant.
The performance for the underlying system could be to keep the pendulum in the
vertical position with as little control effort or consumed energy as possible.
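The instability that makes this control problem interesting can be seen directly from the linearized model: with u = 0 the angle subsystem has the unstable eigenvalue √((M + m)g/(Ml)), so a tiny initial tilt grows quickly. A short Euler sketch (illustrative values of M, m, l):

```python
# Sketch: the uncontrolled angle subsystem of the linearized cart-pole,
# x3' = x4, x4' = ((M+m)g/(M l)) x3. Parameter values are assumptions.

import math

M, m, l, g = 1.0, 0.1, 0.5, 9.8

def angle_after(t_end, x3=1e-3, x4=0.0, steps=100000):
    dt = t_end / steps
    for _ in range(steps):
        x3, x4 = x3 + x4 * dt, x4 + ((M + m) * g / (M * l)) * x3 * dt
    return x3

lam = math.sqrt((M + m) * g / (M * l))   # unstable pole, ~4.6 per second
theta1 = angle_after(1.0)                # angle after one uncontrolled second
```

A one-milliradian tilt grows by more than an order of magnitude within a second, which is why active feedback with small control effort is the natural performance goal.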
Next, consider the differential game problem given in Example 1.13, which
was proposed by Rufus Isaacs in [96].

Fig. 1.10 An illustration of a single inverted pendulum system

Example 1.13 (Homicidal Chauffeur Problems)
In the homicidal chauffeur problem, the pursuer is the driver of a vehicle, which is
fast but less maneuverable, and the evader is a runner who can only move slowly but
is highly maneuverable.
The pursuer wins in case the pursuer catches the evader; otherwise, the evader
wins.
Notice that the homicidal chauffeur problem introduced above is a classic differ-
ential game in continuous time and in a continuous state space.
The following example specifies an optimal charging problem of electric
vehicles (EVs) in power grid systems.

Example 1.14 (Charging of Electric Vehicles in Power Systems) [150, 151]
Consider the state of charge (SoC) of each of the electric vehicles (EVs)
n = 1, . . . , N, satisfying the following dynamics:

xn (k + 1) = xn (k) + βn u n (k)

for all k = k0, . . . , kf − 1, where xn(k) and un(k) represent the SoC and the charging
rate of EV n during time interval k, respectively, such that the performance cost
function, defined as

J(u) \triangleq \sum_{k=k_0}^{k_f-1} \left[\sum_{n=1}^{N} u_n(k) + D(k)\right]^2,

where D(k) represents the total demand in the grid system at time k, is minimized.
It may also be considered that the system satisfies the constraint

x_n(k_f) = \Gamma_n,

Fig. 1.11 Shortest path problems

for all n, with Γn representing the capacity of the battery of EV n; that is, at the end of
the charging horizon kf, all of the electric vehicles are fully charged.
In [152], an updated performance function is considered such that

J(u) \triangleq \sum_{k=k_0}^{k_f-1} \left[\sum_{n=1}^{N} \gamma_n \left[u_n(k)\right]^2 + \left(\sum_{n=1}^{N} u_n(k) + D(k)\right)^2\right],

where \gamma_n [u_n(k)]^2 represents the battery degradation cost subject to the charging rate
u_n(k).
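A small numerical sketch (made-up demand profile and fleet) shows why this quadratic cost favors "valley filling": two schedules that deliver the same energy per EV can have very different costs, and the flatter one wins:

```python
# Sketch: evaluating J(u) = sum_k (sum_n u_n(k) + D(k))^2 for two charging
# schedules of the same total energy. Demand values and fleet size are
# invented for illustration.

def cost(schedules, demand):
    """schedules[n][k] = u_n(k); demand[k] = D(k)."""
    K = len(demand)
    return sum((sum(s[k] for s in schedules) + demand[k]) ** 2
               for k in range(K))

demand = [10.0, 8.0, 2.0, 1.0, 3.0, 9.0]   # overnight valley in the middle
need = 6.0                                  # energy per EV over the horizon
naive = [[need, 0, 0, 0, 0, 0] for _ in range(3)]          # charge at once
flat = [[0.0, 0.5, 2.5, 2.5, 0.5, 0.0] for _ in range(3)]  # fill the valley
c_naive, c_flat = cost(naive, demand), cost(flat, demand)
```

Both fleets receive 6 units each, yet the valley-filling schedule roughly halves the cost, which is the intuition behind the decentralized charging schemes in [150, 151].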
Next, consider the shortest path problem given in Example 1.15.

Example 1.15 (Shortest Path Problems between Two Places)
Figure 1.11 displays a collection of places, denoted by vi, with i = 1, . . . , 9, each
of which is directly connected with some other places via certain edges.
It is also considered that each of the edges has a weighting factor. For instance,
the edge between v2 and v5 , denoted by (v2 , v5 ), is valued as 7.
A path between two places is composed of a sequence of consecutive places,
each connected to the next by an edge, and the cost of this path is the sum of
the weighting factors of all the edges in this path. For instance, (v1, v3, v5, v9) is a
path between v1 and v9 with cost equal to 2 + 5 + 4 = 11.
For the system specified above, the problem is to find the shortest path(s), i.e.,
those with minimum cost, for each pair of places.
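A standard way to solve this is Dijkstra's algorithm. The graph below is only a partial, hypothetical reconstruction of Fig. 1.11, using the four edge weights the text actually states ((v1, v3) = 2, (v3, v5) = 5, (v5, v9) = 4 from the sample path, and (v2, v5) = 7):

```python
# Sketch: Dijkstra's shortest-path algorithm on a partial reconstruction
# of the graph in Fig. 1.11 (only the edges named in the text).

import heapq

def dijkstra(graph, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for nbr, w in graph.get(node, ()):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

edges = [("v1", "v3", 2), ("v3", "v5", 5), ("v5", "v9", 4), ("v2", "v5", 7)]
graph = {}
for a, b, w in edges:                     # build an undirected graph
    graph.setdefault(a, []).append((b, w))
    graph.setdefault(b, []).append((a, w))

dist = dijkstra(graph, "v1")
```

On this partial graph the distance from v1 to v9 is 11, matching the sample path; with the full edge set of Fig. 1.11 a cheaper route might exist.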
Example 1.16 (An Investment Problem)
Given a certain amount of money to invest in items I, II, and III, suppose that
the investments must be made in integer amounts, and further that there are 8
units in total to invest. Table 1.1 lists the profits with respect to the allocations on each of the

Table 1.1 Profit with respect to investment amount on each item
Investment amount Profit on I Profit on II Profit on III
0 0 0 0
1 3.0 2.5 1.8
2 4.6 4.1 2.5
3 6.7 5.5 4.0
4 8.0 6.5 5.0
5 9.2 7.5 6.5
6 10.2 8.0 7.3
7 11.3 8.5 8.2
8 13.2 8.8 9.0

items. For instance, for the allocation with 3 units in item I, 1 unit in item II, and 4
units in item III, the total profit is 6.7 + 2.5 + 5.0, which is equal to 14.2.
The problem is to find the optimal investment allocation.
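This is a textbook dynamic programming problem: process the items one by one, keeping the best profit for every intermediate budget. A sketch using the table above:

```python
# Sketch: dynamic programming over the three items of Table 1.1.
# best[b] = maximum profit attainable with b units spread over the
# items processed so far.

profit = {
    "I":   [0, 3.0, 4.6, 6.7, 8.0, 9.2, 10.2, 11.3, 13.2],
    "II":  [0, 2.5, 4.1, 5.5, 6.5, 7.5, 8.0, 8.5, 8.8],
    "III": [0, 1.8, 2.5, 4.0, 5.0, 6.5, 7.3, 8.2, 9.0],
}
BUDGET = 8

best = [0.0] * (BUDGET + 1)          # no items processed yet
for item in ("I", "II", "III"):
    best = [max(best[b - a] + profit[item][a] for a in range(b + 1))
            for b in range(BUDGET + 1)]

max_profit = best[BUDGET]
```

Enumerating all splits confirms the maximum profit is 15.3, achieved by investing 4 units in item I, 3 in item II, and 1 in item III (8.0 + 5.5 + 1.8).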
In Sects. 1.4 and 1.5, we will give a general description of the optimal control
problem. The description of the optimal control problem generally includes the state
dynamic equation of the control system, the state constraints, the target set, the
admissible set of the control input, the performance function, and so on.

1.4 Formulation of Continuous-Time Optimal Control Problems

In this part, a class of optimal control problems over a continuous period of time is
first formulated.
• State dynamics equation of the control system
The dynamics equations of a continuous system can be expressed as a set of
first-order differential equations, called the state equations, and a set of algebraic
equations, called the output equations, that is,

ẋ(t) = f(x, u, t),

y(t) = g(x, u, t),

where x ∈ Rn and u ∈ Rr are the state vector and the control input of the control
system, respectively, and y ∈ Rm is the output vector of the system; the vector-
valued function f(·) ∈ Rn satisfies certain conditions, for example, the Lipschitz
condition.

Under the above conditions, there exists a unique solution to the dynamics
equation subject to a piecewise continuous control input u, and the vector-
valued function g(·) ∈ Rm is the output function. In some cases, the state x is
measurable and can be used to design the associated control, while in other cases,
only the output y can be measured to construct the control.
• State constraints and target sets
According to the specific situation, the initial state and final state of the dynamics
equation of the control system may be required to satisfy various constraints,
including equality constraints and inequality constraints. The general forms of
equality constraints are

h 1 (x(t0 ), t0 ) = 0,
h 2 (x(t f ), t f ) = 0,

where h 1 (·) and h 2 (·) are function vectors. A typical equality constraint is given
as

x(t0 ) = x0 ,
x(t f ) = x f ,

where x0 and x f are given constant vectors, respectively.
Notice that sometimes the initial state x(t0 ) or final state x(t f ) cannot be uniquely
determined by the equality constraints.
The general forms of inequality constraints are

h 1 (x(t0 ), t0 ) ≤ 0,
h 2 (x(t f ), t f ) ≤ 0.

The meaning of the above inequalities is that each element of the function vectors
h 1 and h 2 is less than or equal to 0. For example, when the final state is required
to be located in a circular domain with the origin as the center and 1 as the radius
in the state space, it can be described as an inequality constraint on the final state,
i.e.,

x_1^2(t_f) + x_2^2(t_f) + x_3^2(t_f) + \cdots + x_n^2(t_f) \le 1.

In most cases, the initial state x(t0 ) is given, and the final state x(t f ) is required
to meet certain constraints. The set of all final states satisfying the constraints is
called the target set, whose general definition is given as
 
M \triangleq \left\{ x(t_f) \in \mathbb{R}^n : h_1(x(t_f), t_f) = 0, \; h_2(x(t_f), t_f) \le 0 \right\}.

There are also certain requirements or constraints put on the characteristics
of the system state and the control input over the whole control process. This kind
of constraint is usually described by an integral of a function of the system
state and control input, and it takes two forms, the integral-type equality constraint
and the integral-type inequality constraint, i.e.,

\int_{t_0}^{t_f} L_e(x, u, t) \, dt = 0,

\int_{t_0}^{t_f} L_i(x, u, t) \, dt \le 0,

where L_e(·) and L_i(·) are integrable functions.
• Admissible set of the control input
Generally, the control input u is restricted by some constraints. The set composed
of all the input vectors satisfying the control constraints is called the admissible
control set, which is denoted by U .
A typical control constraint is the boundary limit, such as

αi ≤ u i ≤ βi ,

for all i = 1, 2, . . . , r .
The constant total amplitude constraint is

u_1^2 + u_2^2 + \cdots + u_r^2 = \alpha^2,

where \alpha_i, \beta_i, i = 1, 2, \ldots, r, and \alpha are all constant valued.


• Performance cost function
The control performance is a quantitative evaluation of the system state characteristics and control input characteristics in the control process, and a quantitative description of the main design objectives. The performance function can generally be described as

J(x, u) ≜ h(x(tf), tf) + ∫_{t0}^{tf} g(x(t), u(t), t) dt,    (1.3)

where h(·) and g(·) are both scalar-valued functions, and g(·) is integrable. h is called the terminal performance function, which evaluates the characteristics of the final state, while ∫_{t0}^{tf} g(x, u, t) dt is called the integral performance function, which evaluates the characteristics of the system state and control input within the time interval [t0, tf]. For different control problems, the terminal performance function and the integral performance function are chosen accordingly.
The performance function given in (1.3) is actually a mixed one. In the following, a few specific quadratic forms of performance functions are given.
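Before turning to the specific forms, the mixed cost (1.3) can be evaluated numerically on a sampled trajectory: the terminal term h plus a trapezoidal approximation of the running cost. The particular h and g used in the example below are hypothetical.

```python
# Sketch: evaluating the mixed (Bolza-type) performance function (1.3),
#   J = h(x(tf), tf) + integral of g(x, u, t) dt,
# on a uniformly sampled trajectory via trapezoidal quadrature.

def bolza_cost(xs, us, ts, h, g):
    dt = ts[1] - ts[0]
    vals = [g(x, u, t) for x, u, t in zip(xs, us, ts)]
    # composite trapezoidal rule for the integral performance term
    integral = dt * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return h(xs[-1], ts[-1]) + integral
```

With h(x, t) = x² and g(x, u, t) = u², the constant control u ≡ 1 driving x from 0 to 1 over [0, 1] gives J = 1 + 1 = 2.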

– Finite-time linear state regulation problem.

J(x, u) ≜ (1/2) x⊤(tf) F x(tf) + (1/2) ∫_{t0}^{tf} [ x⊤(t) Q(t) x(t) + u⊤(t) R(t) u(t) ] dt,

where F = F⊤ ≥ 0, Q(t) = Q⊤(t) ≥ 0, and R(t) = R⊤(t) > 0 are all coefficient matrices.
– Finite-time linear tracking problem.

J(e, u) ≜ (1/2) e⊤(tf) F e(tf) + (1/2) ∫_{t0}^{tf} [ e⊤(t) Q(t) e(t) + u⊤(t) R(t) u(t) ] dt,

where F = F⊤ ≥ 0, Q(t) = Q⊤(t) ≥ 0, R(t) = R⊤(t) > 0, and

e(t) = z(t) − y(t)

is the output tracking error, where z(t) is the expected output and y(t) is the output vector.
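For a scalar system (so F, Q, R are 1×1 and written as numbers), the regulation cost above can be evaluated as in the sketch below; the tracking cost is identical with the error e(t) = z(t) − y(t) in place of x(t). The weights and trajectory in the example are hypothetical.

```python
# Sketch: the finite-time quadratic regulation cost for a scalar state,
#   J = (1/2) F x(tf)^2 + (1/2) * integral of [Q x(t)^2 + R u(t)^2] dt,
# with trapezoidal quadrature on a uniform time grid.

def quadratic_cost(xs, us, ts, F, Q, R):
    dt = ts[1] - ts[0]
    vals = [Q * x * x + R * u * u for x, u in zip(xs, us)]
    # trapezoidal approximation of the integral term
    integral = dt * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return 0.5 * F * xs[-1] ** 2 + 0.5 * integral
```

Larger F penalizes the terminal miss more heavily, while the ratio Q/R trades state deviation against control effort.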
In the variational method, the optimal control problem of a functional with a mixed performance function is called the Bolza problem.
When the characteristics of the final state are not of concern, say h = 0, the above performance function reduces to the following:

J(x, u) ≜ ∫_{t0}^{tf} g(x(t), u(t), t) dt.    (1.4)

For this integral performance function, there are some specific forms as given below.
– Minimum time problem.

J(u) ≜ ∫_{t0}^{tf} 1 dt = tf − t0,

with g(x(t), u(t), t) = 1.


– Minimum fuel problem.

J(u) ≜ ∫_{t0}^{tf} [ Σ_{j=1}^{m} |uj(t)| ] dt.

– Minimum energy problem.


J(u) ≜ ∫_{t0}^{tf} u⊤(t) u(t) dt.

– Minimum time-fuel problem.

J(u) ≜ ∫_{t0}^{tf} [ β + Σ_{j=1}^{m} |uj(t)| ] dt,

where β, with β > 0, represents a weighting factor.


– Minimum time-energy problem.

J(u) ≜ ∫_{t0}^{tf} [ β + u⊤(t) u(t) ] dt.
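The fuel-type integrand |u| and the energy-type integrand u² weight the same control signal differently, which is why the two criteria favor different control shapes. A sketch comparing them on a sampled scalar control (the signal is hypothetical; a coarse left Riemann sum suffices here):

```python
# Sketch: minimum-fuel vs minimum-energy integrands evaluated on the
# same sampled scalar control via a left Riemann sum.

def fuel_cost(us, dt):
    # approximates the integral of |u(t)| dt
    return dt * sum(abs(u) for u in us)

def energy_cost(us, dt):
    # approximates the integral of u(t)^2 dt
    return dt * sum(u * u for u in us)
```

For a bang-bang-like signal u ∈ {+2, −2}, the energy cost grows quadratically in the amplitude while the fuel cost grows only linearly.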

In the variational method, the optimal control problem of a functional with an integral performance function is called the Lagrange problem.
Sometimes the main concern is the terminal characteristics of the system state. In this case, the following performance function is used:

J(x, u) ≜ h(x(tf), tf).    (1.5)

For example, a missile is required to have a minimum miss distance. Here tf can be fixed or free, which is determined by the nature of the optimal control problem.
With the above discussions, a class of optimal control problems is formulated in the following.

Problem 1.1 For a given control system,

ẋ(t) = f(x, u, t),
y(t) = f̃(x, u, t),

it is required to design an admissible control u ∈ U such that the system state reaches the target set

x(tf) ∈ M.

During the whole control process, the constraints on the state and control input are satisfied,

∫_{t0}^{tf} ge(x, u, t) dt = 0,
∫_{t0}^{tf} gi(x, u, t) dt ≤ 0,

and, at the same time, the performance function

J(x, u) ≜ h(x(tf), tf) + ∫_{t0}^{tf} g(x(t), u(t), t) dt

is maximized or minimized.
Notice that in this book, without loss of generality, we consider performance cost functions that are to be minimized by the optimal control.
If an admissible control u ∈ U is the solution of the above optimal control problem, it is called the optimal control, and the state trajectory of the corresponding control system is called the optimal trajectory.
In the design of an optimal control system, the choice of the performance function is very important. An improper selection will cause the performance of the control system to fail to meet the expected requirements, or may even leave the optimal control problem without a solution.
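To make the notions of optimal control and optimal trajectory concrete, the sketch below forward-Euler-integrates a hypothetical scalar system ẋ = −x + u from x(0) = 1 and compares the running cost g = x² + u² of two candidate constant controls; the control with the smaller J is the better (not necessarily optimal) choice. The system, horizon, and controls are illustrative only.

```python
# Sketch: simulating x' = f(x, u, t) by forward Euler and comparing the
# running cost J = integral of (x^2 + u^2) dt (left Riemann sum) of two
# candidate controls; all numbers are illustrative.

def simulate(f, x0, u_fn, ts):
    xs = [x0]
    for k in range(len(ts) - 1):
        dt = ts[k + 1] - ts[k]
        # forward-Euler step of the state equation
        xs.append(xs[-1] + dt * f(xs[-1], u_fn(ts[k]), ts[k]))
    return xs

def running_cost(xs, u_fn, ts):
    J = 0.0
    for k in range(len(ts) - 1):
        dt = ts[k + 1] - ts[k]
        # left Riemann sum of the stage cost x^2 + u^2
        J += dt * (xs[k] ** 2 + u_fn(ts[k]) ** 2)
    return J
```

On ts = [0, 0.5, 1] with f(x, u, t) = −x + u, the zero control yields the trajectory [1, 0.5, 0.25] and a smaller cost than the constant control u ≡ 1.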

1.5 Formulation of Discrete-Time Optimal Control Problems

By following discussions similar to those given in Sect. 1.4 for continuous-time state systems, we can formulate a class of optimal control problems over a discrete period of time below.

Problem 1.2 For a given control system,

x(k + 1) = f(x(k), u(k), k),
y(k) = f̃(x(k), u(k), k),

with k ∈ {k0, . . . , kf}.
It is required to design an admissible control u ∈ U such that the system state reaches the following target set at the final time kf:

x(kf) ∈ M.

During the whole control process, we may consider the following constraints on the state and control input to be satisfied:

Σ_{k=k0}^{kf−1} ge(x(k), u(k), k) = 0,
Σ_{k=k0}^{kf−1} gi(x(k), u(k), k) ≤ 0,

and the following performance cost function

J(x, u) ≜ h(x(kf), kf) + Σ_{k=k0}^{kf} g(x(k), u(k), k)

is minimized as well.
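Evaluating this discrete-time cost is a direct sum: the terminal term plus the stage costs over the supplied stages. The sketch below sums over all indices k0, …, kf to match the formula above (many treatments stop the sum at kf − 1); the h and g in the example are hypothetical.

```python
# Sketch: the discrete-time performance function of Problem 1.2,
#   J = h(x(kf), kf) + sum over k of g(x(k), u(k), k),
# where the sum runs over all supplied stage indices.

def discrete_cost(xs, us, ks, h, g):
    # xs and us are indexed alongside the stage indices ks = [k0, ..., kf]
    total = h(xs[-1], ks[-1])
    for x, u, k in zip(xs, us, ks):
        total += g(x, u, k)
    return total
```

With h(x, k) = x² and a constant stage cost g = u, three stages of unit control from x(kf) = 3 give J = 9 + 3 = 12.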


1.6 Organization

• In this chapter, the overall background and motivation of this textbook are given, as well as a brief introduction to the main works in this book.
• In Chap. 2, the conditions for the extrema of functionals are specified via the variational method, and it is studied how to solve extrema problems of functionals with constraints by the elimination/direct method and the Lagrange method.
• In Chap. 3, it is studied how to solve optimal control problems by applying the developed results on the extrema of functionals via the variational method. The necessary and sufficient conditions for the optimal solution to optimal control problems with unbounded controls are given, and the optimal solutions to optimal control problems with different boundary conditions are studied. Moreover, specific linear-quadratic regulation and linear-quadratic tracking problems are analyzed.
• In Chap. 4, it is studied how to solve optimal control problems with constrained controls and states by applying the variational method. More specifically, the minimum time, minimum fuel, and minimum energy problems are analyzed, respectively.
• Besides Pontryagin's minimum principle described in the preceding chapters, Chap. 5 introduces another key branch of optimal control methods, namely dynamic programming. For the purpose of comparison, the relationship between these two optimal control methods is also studied.
• Based on the results developed in the previous chapters of this book, Chap. 6 introduces some games, such as noncooperative differential games and two-person zero-sum differential games, where the system is driven by individual players, each of whom would like to minimize his own performance cost function. The Nash equilibrium (NE) strategies are solved and analyzed by applying the variational method.
• In Chap. 7, the different classes of optimal control problems are studied in the discrete-time case.
• In Chap. 8, a brief conclusion of this book is given.

to further carry out his plan acted accordingly. To the left a right-
angled bend led to a communicating trench that could be held by
half a dozen men; a little to the right of this another cut led to an
elaborate shelter, a guard to which had been standing in the
entrance-way. To a dozen men Herbert ordered:
"In there, quick, and hold them up till you hear the signals, and
don't come out until then!"
The guard had alarmed those in the dugout, who were the
remaining men of the trench contingent off duty and sleeping, and
the Americans had a lively time of it, but of that nothing was known
until later.
"Here at the bend line your men up!" Herbert said to Sergeant West,
"and fire when I signal! Carey and I will watch them."
Finding nothing but stuffed figures, the German officer must have
suspected a trap in the American trench and he signaled his men to
return quickly. This they did, retreating across No Man's Land exactly
as they had come. Hidden behind sand bags a little to one side of
the wire breach, Herbert saw them come and he waited until twenty-
five, or more, in a bunch had leaped into the trench.
At Herbert's signal a volley rang out at the trench bend, followed by
groans and curses from the Germans. By this time others, thinking
only of getting back into shelter, and not comprehending that their
enemies were within the German trench, leaped in also and met
much the same fate.
Those not yet in the trench began a retreat along the inner line of
wire entanglement and over the sand bags away from the shooting
and going into the trench at a point farther along. Here they must
have encountered more of their fellows and at once formed a plan of
reprisal. Anticipating this and also an attack from the other side over
the more easily sloping rear of the trench, Herbert leaped back, gave
the signal as agreed upon for the retreat with prisoners, and the
men got busy. There were a dozen or more of the enemy unhurt in
the trench.
Meanwhile, the Germans in the dugout had put up a fight, and had
thrown some hand grenades at the entrance among the Americans,
with the result that some of the attacking party of a dozen must
have been put out of the business of active participation. The others
had begun to shoot, rather at random, but largely accounting for
those who had attempted to resist; and then, as the Americans were
about to round up their prisoners, some brave, foolhardy or fanatic
German managed to set off a box of bombs or grenades, enough
explosives to upset an average house.
But one man, Private Seeley, came out of that volcano able to tell
what happened; two rushed out into the trench to fall on their faces,
blinded and dying. Within was a holocaust of flame, smoke and
poisonous gases presiding over the dead and dying, Americans and
Germans alike.
Sergeant West and Corporal Whitcomb reached the crumbling
entrance and tried to gaze within.
"We must get our boys out!" began Herbert.
"Impossible!" protested West.
"Let's try! There may be some alive——"
"Not one! Let's get out of this!"
"You detail squads at the ends of the trench to fight to the last man
and give me a rescuing party——"
"No use, Corporal. You can see that. We shall be outnumbered and
hemmed in soon. We've got to go!"
"Gardner and Watson are in there!"
"Dead as mackerels! They'll stay there forever. Come, now; we must
go back!" With that Sergeant West blew the signal again, and the
men, with no wounded, but rushing a number of prisoners, turned
once more to retreat.
And then the thing happened which Herbert had expected, in part,
and had planned to circumvent: a rally of reprisal had been started.
But not being sure of their ground, the Huns had meant, in turn, to
cut off the Americans by another detour.
Carey had been left on guard outside of the wire. Paying little
attention to what might be going on in the trench, he had followed
the German survivors and he had seen and heard them return to No
Man's Land and reach a place of ambuscade. This was along the line
of some tall Lombardy poplar trees, that had probably once been a
farm lane, and the spot was easily noted. Directly past it the Yanks
must go to regain their trench.
Carey's speedy progress toward his comrades was hardly marked by
caution. His information was received by West and Whitcomb with as
much elation as they could show in the face of the loss of their
companions in the dugout. This was no time for sentiment; only for
action.
"Follow me, men; double file as much as you can and pussy-foot it
for keeps!" Herbert ordered, caring no more for technical terms than
do many other officers when bent upon such urgent duty.
West ordered three men to conduct the prisoners straight across to
the gun pit. Carey indicated the line of trees. Herbert led his men to
a point fifty yards behind the trees; then he went to West.
"You order the charge, will you? You inspire the men more than I. I
will give you the signal again, this time the soft whistle of a
migrating bird."
The Germans heard a low, plaintive call come from somewhere near;
some might have suspicioned it; others hardly noticed it. But almost
immediately afterward it was followed by such a yell that the enemy
must have believed Satan and all his imps were on the job. Perhaps
they were.
What followed was another mêlée; the Huns, being unable to swing
their several machine-guns around, turned with rifles, bayonets and
grenades to find their foes upon them, the revolvers of the
Americans spitting fire quite as usual. The Huns were being mowed
down most disastrously and in less than half a minute they were
separated, beaten back, thrown into confusion, overpowered in
numbers, disarmed and completely at the mercy of their superior
and more dashing adversaries. Again the ready and effective
revolvers had won.
"Back to our trench! March! Double quick!" shouted Sergeant West.

"A success, men; a success! I cannot give this too high praise in my
report. It is worthy of being imitated. The men in the dugout were
unfortunate; you couldn't help that. It is terribly hard to foresee
anything, and no one would have been to blame if the whole
scheme had failed. You only did your duty magnificently! And,
Whitcomb, the credit for the idea belongs to you. We will have to
term you our Lord High Executioner."
"Please don't, sir!" the boy protested. "We may have to do this sort
of thing in the business of fighting, but I wouldn't care to have it
rubbed in."
The lieutenant laughed. "Well, at any rate, your scheme, though it
practically wiped out your squad, and you are the only one left, must
have accounted for at least ninety of the Huns, in dead and
wounded, and you took fifty prisoners. Not bad out of perhaps two
hundred men in that section of their trench!"
CHAPTER XVIII
The Big Push
Susan Nipper was talking very loud, very fast, and she had need.
The Germans had started something toward the American lines and
gun pits—a cloud of something bluish, greenish, whitish and
altogether very ominous. It was a gas attack.
On the other side of the hill Susan's sister, and still farther beyond
another one of the same capable family, were also talking loud and
fast and very much to the purpose, so that wherever their well-timed
shells reached the gas-emitting guns and machinery the terrible
clouds, after a moment, ceased to flow out and the atmosphere and
the sloping ground became clearer and clearer.
Then, all that the American boys had to do was to put on their gas
masks for several hours and burn anti-gas fumes, the Boches having
been put to a lot of trouble and much expense for very little gain;
one or two careless fellows were for a time overcome. After that
there was a wholesome contempt for the gas on the part of the boys
from over the ocean.
But Susan kept right on speaking her mind. As the gas men
retreated from the field in a terrible hurry they got all that was
coming to them and many had come on that did not go off at all,
unless upon litters.
Then, Susan paid her respects to aircraft of several kinds that had
come over, not on scouting duty, but to drop their bombs here and
there. There was a regular fleet of aircraft planes, or it might seem
better to call a bunch of them a flotilla, or perhaps a flytilla. Anyway,
they made an impressive sight, though not all coming near enough
for Susan to reach.
Most of the enemy airplanes went on, despite the guns aimed at
them from the earth, until, sighting a number of French machines
coming out to do battle, they strategically fell back over the German
lines, thus to gain an advantage if they or their enemies were forced
to come to the ground.
The Americans had not before witnessed such a battle in the air as
that. The birdmen turned, twisted, dived, mounted, maneuvered to
gain advantage, French and German being much mixed up and now
and then spitting red tongues of flame, singly or in rapid succession,
at each other.
Two machines were injured and came to earth, one German, that
descended slowly; the other French, that tumbled over and over,
straight down. Then two other German planes were forced to
descend, and, finally, others coming from far behind the lines, the
French retreated, being much outnumbered; they had to be
outnumbered to retreat from the hated Boches. And the Boches did
not follow them up.
This had all happened soon after daylight, the different incidents
following each other rapidly. It was hardly eight o'clock when Susan
Nipper let fly her last shell at the airplane. Before noon a messenger
arrived at the pit, and Corporal Whitcomb was sent for.
"My boy, they must be aware of you back there at headquarters. You
know you have been mentioned in dispatches a number of times as
resourceful, altogether fearless, capable in leadership and——"
"I don't know how to thank you sufficiently—" Herbert began, but
the lieutenant shut him off.
"Don't try it, then! Merely justice, fair dealing, appreciation,
recognition of worth. We aim toward that in the army; military
standards, you know. Well, as I was going to say, there is a general
advance ordered, in conjunction with our Allies. We want to push the
Huns out of their trenches and make them dig in farther on,
somewhere. If the attempt is successful, the engineers will place
Susan in a new pit somewhere ahead. But the main thing you want
to know is what your duty will be."
The lieutenant settled back with a half smile; half an expression of
deep concern.
"They expect us fighting men in the army, and in the navy, too, I
suppose, to have or to show not one whit of sentiment. We are
expected to be no more subject to such things than the cog-wheels
of a machine. But they can no more teach us that than they can
teach us not to be hungry, or to want sleep. I have begun to think,
of late, that they don't expect us to sleep, either.
"Well, my boy, if you would like to see an example of military brevity
I will show it to you. Ahem! Corporal, report to-night to regimental
headquarters, with your company; Captain Leighton, Advanced
Barracks. By order of Colonel Walling.
"But hold on! Here's a little of the absence of military brevity. It
appears that they so admire your record back there at headquarters
that they have picked you out for almost—no doubt you think me
pessimistic, or a calamity howler—for almost certain injury or death.
My boy, I wanted you to stay here with me until we are relieved,
which will be soon, but now they are going to take you away from
me. An old man like me—I am getting on toward fifty—gets to have
a lot of feeling in such matters. He likes to think of his military
family, of his boys, and becomes more than usually attached to
some of them. But let that pass.
"They're going, I am told, to put you on special scouting duty before
the drive. Of course, you'll go and glory in it, but, my boy—Well,
good luck to you; good luck! If you get out all right, look me up
when we are all relieved. Look us all up; the men will all wish it."
Herbert's leave taking of the pit platoon and the squads in the
adjoining trench, that night, was one that was more fitting for a lot
of school cronies than hardened soldiers bent upon the business of
killing. But human nature is human all the world over and under
pretty much all conditions.