Robust and Optimal Control
A Two-port Framework Approach

Mi-Ching Tsai
Department of Mechanical Engineering
National Cheng Kung University
Tainan, Taiwan

Da-Wei Gu
Department of Engineering
University of Leicester
Leicester, UK

Advances in Industrial Control
Series Editors' Foreword

The series Advances in Industrial Control aims to report and encourage technology
transfer in control engineering. The rapid development of control technology has
an impact on all areas of the control discipline: new theory, new controllers,
actuators, sensors, new industrial processes, computer methods, new applications,
new philosophies, …, new challenges. Much of this development work resides in
industrial reports, feasibility study papers, and the reports of advanced collaborative
projects. The series offers an opportunity for researchers to present an extended
exposition of such new work in all aspects of industrial control for wider and rapid
dissemination.
The Advances in Industrial Control monograph series started in 1992, and in
many ways the sequence of volumes in the series provides an insight into what
industries and what control system techniques were the focus of attention over the
years. A look at the series titles on robust control yields the following list:
• Robust Multivariable Flight Control by Richard J. Adams, James M. Buffington,
Andrew G. Sparks, and Siva S. Banda (ISBN 978-3-540-19906-9, 1994)
• H∞ Aerospace Control Design by Richard A. Hyde (ISBN 978-3-540-19960-1,
1995)
• Robust Estimation and Failure Detection by Rami S. Mangoubi (ISBN 978-3-
540-76251-5, 1998)
• Robust Aeroservoelastic Stability Analysis by Rick Lind and Marty Brenner
(ISBN 978-1-85233-096-5, 1999)
• Robust Control of Diesel Ship Propulsion by Nikolaos Xiros (ISBN 978-1-
85233-543-4, 2002)
• Robust Autonomous Guidance by Alberto Isidori, Lorenzo Marconi, and Andrea
Serrani (ISBN 978-1-85233-695-0, 2003)
• Nonlinear H2/H∞ Constrained Feedback Control by Murad Abu-Khalaf, Jie
Huang, and Frank L. Lewis (ISBN 978-1-84628-349-9, 2006)
• Structured Controllers for Uncertain Systems by Rosario Toscano (ISBN 978-1-
4471-5187-6, 2013)
And from the sister series, Advanced Textbooks in Control and Signal Processing
come:
• Robust Control Design with MATLAB® by Da-Wei Gu, Petko Hr. Petkov, and
Mihail M. Konstantinov (2nd edition ISBN 978-1-4471-4681-0, 2013)
• Robust and Adaptive Control by Eugene Lavretsky and Kevin Wise (ISBN 978-
1-4471-4395-6, 2013)
Clearly, robust control has seen a steady stream of monographs and books in
both series. There is no doubt that the work of George Zames, Bruce Francis, John
Doyle, Keith Glover, and many others created a paradigm change in control systems
theory. Also note the number of aerospace-industry applications in the above list of
texts. This emphasis can be ascribed to the availability within the industry of
accurate high-dimensional multivariable models of aerospace systems, to the wide
range of operating envelopes, and therefore models, that aerospace vehicles traverse
during a flight, and to the facility of optimization-based robust-control techniques
in dealing with multivariable systems and their operational constraints.
From time to time, the Advances in Industrial Control series publishes a
monograph that is theoretical and tutorial in content. This contrasts with most
entries to the series that contain a mix of the theoretical, the practical, and the
industrial. This monograph Robust and Optimal Control by Mi-Ching Tsai and
Da-Wei Gu is one of those exceptions. The authors themselves actually raise the
question “Why another book on the topic of Robust Control?” and their answer is
that they have devised a new route to understanding the derivations and computation
of robust and optimal controllers that they believe is a valuable addition to the
literature of the subject. Their two-port approach is claimed to be more accessible
to an engineering readership and to resonate in particular with an electrical- and
electronic-engineering readership. The theoretical developments reported in the
monograph are fully supported by detailed chapters covering all the background
material and MATLAB® code and illustrated in a simple but persuasive servo-motor-
control problem in the final chapter of the monograph. The list of monographs and
textbooks on robust control shows that there is a continuing industrial interest in
this field and for this reason this monograph is a valuable entry to the Advances in
Industrial Control monograph series.
Preface

It seems satisfactory that practicing control engineers can use available solution
formulae and software routines to work out robust and optimal controllers for given
design problems, when they know well the underlying control systems and design
specifications. However, are we happy with such designed controllers without
knowing exactly how the formulae are derived and on what grounds the solution
procedures are based? As control engineers, are we confident enough to implement
such designed controllers? Answers to the above queries might be obvious, and there are
sources for us to know the theories of design approaches, as pointed out earlier. The
problem is that the theory behind the state-space approaches presented in [10] and
other books is very mathematically oriented and difficult for engineers, and students
as well, to understand. Hence, is it possible to present the robust and optimal control
theory for LTI systems in a way such that engineers and students can follow and
grasp the essence of the solution approach? This motivated the research and writing
of the present book.
This book presents an alternative approach to find a robust controller via
optimization. This approach is based on the chain scattering decomposition (CSD),
initiated by Professor Hidenori Kimura [4] (and the references therein), who also
named it the chain scattering description. CSD uses the configuration of two-port
circuits, which is a fundamental ingredient of electrical engineering and is familiar
to all electrical engineers and students with basic electrical engineering knowledge.
It is shown in the book that (sub)optimal H∞ and H2 controllers, as well as
stabilizing controllers, can be synthesized following the CSD approach. The book
starts from the well-known
linear fractional transformation (LFT), in which a control design problem can easily
be formulated, and then converts the LFT into the CSD format. From the CSD formulation,
the desired controller can be directly derived by using the framework proposed in
the book in an intuitive and convenient way. The results are complete and valid for
general system settings. The derivation of solution formulae is straightforward and
uses no mathematics beyond linear algebra. It is hoped that readers may obtain
insight from this robust and optimal controller synthesis approach, rather than
being bewildered in a maze of mathematics.
The prerequisites for reading this book are classical control and state variable
control courses at undergraduate level as well as elementary knowledge of linear
algebra and electrical circuits. This book is intended to be used as a textbook for
an introductory graduate course or a senior undergraduate course. It is also our
intention that this book serve control engineers' training courses on robust and
optimal control systems design for linear time-invariant systems. With the above
consideration in mind, we use plenty of simple yet illustrative worked examples
throughout the book to help readers to understand the concepts and to see how
the theory develops. Where appropriate, MATLAB codes for the examples are also
included for readers to verify the results and to try on their own problems. Most
chapters are followed by exercises to help readers digest the contents covered in
the chapter. To further demonstrate the proposed approaches, in the last chapter, an
application case study is presented which shows wider usage of the framework.
References
1. Green M, Limebeer DJN (1995) Linear robust control. Prentice Hall, Englewood Cliffs
2. Gu DW, Petkov PH, Konstantinov MM (2005) Robust control design with MATLAB. Springer,
London
3. Helton JW, Merino O (1998) Classical control using H∞ methods: theory, optimization, and
design. Society for Industrial and Applied Mathematics, Philadelphia
4. Kimura H (1997) Chain-scattering approach to H∞ control. Birkhäuser, Boston
5. Maciejowski JM (1989) Multivariable feedback design. Addison-Wesley, Wokingham/
Berkshire
6. Skogestad S, Postlethwaite I (2005) Multivariable feedback control: analysis and design. Wiley,
New York
7. Stoorvogel AA (1992) The H∞ control problem: a state space approach. Prentice Hall,
Englewood Cliffs
8. Zames G (1981) Feedback and optimal sensitivity: model reference transformations,
multiplicative seminorms, and approximate inverses. IEEE Trans Autom Control 26:301–320
9. Zames G, Francis BA (1983) Feedback, minimax sensitivity, and optimal robustness. IEEE
Trans Autom Control 28:585–601
10. Zhou K, Doyle JC, Glover K (1996) Robust and optimal control. Prentice Hall, Upper Saddle
River
Acknowledgements

The authors started research on robust and optimal control using the CSD approach
about two decades ago. Many of the interesting results were, however, discovered
while the first author was taking his sabbatical leave (2003–2004) at the University
of Cambridge. The main parts of the book were written during the second author's
recent visits to the National Cheng Kung University (NCKU), Taiwan. The authors
thank NCKU and the University of Leicester for the support they received. The
authors are greatly indebted to many individuals; the book would not have been
completed without their help.
The authors gratefully thank all the contributors to the development of the CSD
approach and the related robust and optimal control theory: Professor Keith Glover,
Professor Hidenori Kimura, Professor Ian Postlethwaite, Professor Malcolm Smith,
Professor Jan Maciejowski, Professor Fang Bo Yeh, and Professor Shinji Hara, to
name just a few. The authors are also indebted to their colleagues and students,
past and present, for finding time in their busy schedules to help in editing,
reviewing, and proofreading the book. The long list includes Dr. Chin-Shiong
Tsai, Dr. Bin-Hong Sheng, Dr. Jia-Sen Hu, Dr. Fu-Yen Yang, Dr. Wu-Sung Yao,
Chun-Lin Chen, Ting-Jun Chen, Chia-Ling Chen, and many others.
Finally and most importantly, the authors owe their deepest gratitude and love to
their families for their understanding, patience, and encouragement throughout the
writing of this book.
Contents

1 Introduction
   References
2 Preliminaries
   2.1 Linear Algebra and Matrix Theory
      2.1.1 Vectors and Matrices
      2.1.2 Linear Spaces
      2.1.3 Eigenvalues and Eigenvectors
      2.1.4 Matrix Inversion and Pseudoinverse
      2.1.5 Vector Norms and Matrix Norms
      2.1.6 Singular Value Decomposition
   2.2 Function Spaces and Signals
      2.2.1 Function Spaces
      2.2.2 Norms for Signals and Systems
   2.3 Linear System Theory
      2.3.1 Linear Systems
      2.3.2 State Similarity Transformation
      2.3.3 Stability, Controllability, and Observability
      2.3.4 Minimal State-Space Realization
      2.3.5 State-Space Algebra
      2.3.6 State-Space Formula for Parallel Systems
      2.3.7 State-Space Formula for Cascaded Systems
      2.3.8 State-Space Formula for Similarity Transformation
   2.4 Linear Fractional Transformations and Chain Scattering-Matrix Description
   Exercises
   References
3 Two-Port Networks
   3.1 One-Port and Two-Port Networks
   3.2 Impedance and Admittance Parameters (Z and Y Parameters)
   3.3 Hybrid Parameters (H Parameters)
Index
Chapter 1
Introduction
This book presents a fresh approach to optimal controller synthesis for linear
time-invariant (LTI) control systems. The readers are assumed to have taken
taught modules on automatic control systems, including classical control in the
frequency domain and state variable control, in a first-degree course (BEng or
BSc). Knowledge of electrical and electronic engineering will be beneficial to
understanding the approach.
Consider the negative unity feedback control system configuration in Fig. 1.1
with the plant G and controller K. The performance specification of tracking control
requires the output y to follow the reference input r closely. This closeness can be
found in the error signal e. It is obvious that

$$E = (I + GK)^{-1} R \tag{1.1}$$

where E(s) and R(s) are the Laplace transforms of the time signals e(t) and
r(t). Therefore, it is required that the error signal e be "small." An adequate
measure of the "size" of a signal is the 2-norm, i.e., the square root of
the "energy" of the signal. For a given, fixed input signal r, good tracking requires
that the 2-norm of e be as small as possible. That is, one wants to find a
stabilizing controller K which makes $\|(I + GK)^{-1} R\|_2$ small. If one wants "best"
tracking, the controller K should be sought by solving the following optimization
problem:

$$\min_{\text{stabilizing } K} \left\| (I + GK)^{-1} R \right\|_2. \tag{1.2}$$
In the case that the reference r is not explicitly known, rather it belongs to an
energy-bounded set, the objective of good tracking then requires the infinity norm of
the transfer function (matrix) from r to e to be small. That is, a stabilizing controller
K is sought to make $\|(I + GK)^{-1}\|_\infty$ small, or to solve

$$\min_{\text{stabilizing } K} \left\| (I + GK)^{-1} \right\|_\infty. \tag{1.3}$$

M.-C. Tsai and D.-W. Gu, Robust and Optimal Control: A Two-port Framework
Approach, Advances in Industrial Control, DOI 10.1007/978-1-4471-6257-5_1,
© Springer-Verlag London 2014
How to find such controllers K is the theme of this book. (Definitions and details of
the solution derivations can be found in Chaps. 2, 8, and 9.)
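The infinity-norm objective in (1.3) can be estimated numerically. The following sketch (the first-order plant $G(s) = 1/(s+1)$ and the constant gain $K = 4$ are our own illustrative assumptions, not an example from the book) sweeps the sensitivity magnitude over a frequency grid:

```python
import numpy as np

# Illustrative sketch: estimate the H-infinity norm of the sensitivity
# S = (1 + GK)^-1 for the assumed plant G(s) = 1/(s+1) with constant gain K = 4,
# by evaluating |S(jw)| on a log-spaced frequency grid.
k = 4.0
w = np.logspace(-3, 6, 4000)      # frequency grid in rad/s
s = 1j * w
G = 1.0 / (s + 1.0)
S = 1.0 / (1.0 + G * k)           # sensitivity evaluated on the grid
hinf_est = np.max(np.abs(S))      # grid estimate of ||S||_inf

print(hinf_est)
```

For this plant–gain pair $|S(j\omega)|$ increases monotonically toward 1, so the grid estimate sits just below the analytic supremum of 1; dedicated norm computations are of course more reliable than a grid sweep.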
In addition to the performance requirement, robustness is another central issue
in control systems design. Most effective and efficient control design methods are
model based. That is, the controller is designed based on a model which represents
the dynamics of the system (the plant) to be controlled. It should be noted that in
almost all cases in reality, a model will never completely and truly describe the
dynamic behavior of the actual system because of unintentionally or intentionally
excluded dynamics (so-called unmodeled plant dynamics) and/or because of wear-
and-tear effects upon the plant (plant parameter variation). On the other hand,
industrial control systems operating in the real world are vulnerable to external
disturbances and measurement noises. Hence, a designed controller should be able
to keep the control system stable as well as maintain certain performance level even
in the presence of unmodeled dynamics, parameter variations, disturbances, and
noises. That is the commonly acceptable definition for the robustness of a control
system.
Consideration of robustness in control systems design is not a new topic.
Up to the middle of the last century, the 1950s, control systems analysis and
design had been dominated by frequency domain methods. Single-input-single-
output (SISO) cases were the main subject. Certain stability robustness can be
achieved by ensuring good gain and phase margins in the design. Good stability
margins usually lead to good, well-damped time responses of SISO systems, i.e.,
good performance. Rapid developments and needs in aerospace engineering in
the 1960s greatly propelled the development of control system theory and
design methodology, particularly the state-space approach, which is powerful for
multi-input-multi-output (MIMO) systems. Because the system models of aerospace
systems are relatively accurate, the multivariable techniques developed at that time
placed emphasis on achieving good system performance rather than on how to deal
with system dynamics perturbations. These techniques are based on linear quadratic
performance criteria with Gaussian disturbances (noise), considered adequate in such
systems, and proved to be successful in many aerospace applications. However,
applications of these techniques, commonly referred to as linear-quadratic-Gaussian
(LQG) methods, to other industrial problems made apparent the poor robustness
properties exhibited by LQG controllers. This led to a substantial research effort to
develop theory and methodology that could explicitly address the robustness issue
in general feedback systems design. The pioneering work in the development now
known as the H∞ optimal control theory was published in the early 1980s by Zames
[6] and Zames and Francis [7].
In the H∞ framework, the designer from the outset specifies a model of system
uncertainty, such as additive perturbation together with output disturbance, which is
most suitable to the problem at hand. A constrained optimization is then formulated
to maximize the robust stability of the closed-loop system to the type of uncertainty
chosen, the constraint being the internal stability of the feedback system. In most
cases, it would be sufficient to seek a feasible (suboptimal) controller such that
the closed-loop system achieves certain robust stability. Performance objectives can
also be included in the cost function of the optimization problem. Mature theory and
elegant solution formulae in the state-space approach [1, 2, 8] have been developed,
which are based on the solutions of certain algebraic Riccati equations, and are
available in software packages such as MATLAB and SLICOT [5].
However, many people, including students and practicing engineers, have experienced
difficulties in understanding the underlying theory and solution procedures due to
the complex mathematics involved in the advanced approaches, which very much
hinders the wide application of this advanced methodology in real-world and
industrial control systems design. In this book an alternative approach is presented
for the synthesis of H∞ and H2 (sub)optimal controllers. This approach
uses the so-called chain scattering decomposition or chain scattering description
(CSD) and provides a unified synthesis framework. CSD is based on the two-port
circuit formulation which is familiar to electrical and electronic engineering students
and engineers. With this engineering background, readers are guided through a
general yet rigorous synthesis procedure. Complemented by illustrative examples
and exercise problems, readers can not only learn the detailed steps in synthesizing
a robust controller but also gain an insight into the methodology.
The chain scattering description approach was first proposed by Kimura [4] in
an attempt to provide a unified, systematic, and self-contained exposition of H∞
control. In Kimura’s approach, to deal with general systems, signal augmentation is
required to formulate a single CSD framework which leads to extra calculations
in the synthesis of controllers. This book defines right and left CSD matrices
instead and proposes to use coupled CSD and coprime factorizations with specific
requirements in controller synthesis. This approach is capable of dealing with all
general systems, avoids unnecessary computational load, and is transparent and
intuitive in nature. Such a framework covers the synthesis of all stabilizing
controllers as well as the H2 optimal and H∞ (sub)optimal control synthesis problems.
The CSD approach is straightforward in the derivation of the resultant controller
synthesis procedures. It starts with a standard configuration (SCC, standard control
configuration) of robust control scheme, converts it into a chain scattering decom-
position format, and then uses coprime factorization and spectral factorization to
characterize the required stabilizing controllers. The mathematical tools required
are limited to linear algebra (matrix and vector manipulations) and algebraic
Riccati equations.
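As a small illustration of the last point (the double-integrator data are our own, and this shows generic ARE solving rather than the book's synthesis procedure), an algebraic Riccati equation of the form $A^T X + XA - XBR^{-1}B^T X + Q = 0$ can be solved numerically:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Sketch: solve a continuous-time algebraic Riccati equation for an assumed
# double-integrator example, then check the residual of the equation.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])     # double integrator
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

X = solve_continuous_are(A, B, Q, R)
residual = A.T @ X + X @ A - X @ B @ np.linalg.inv(R) @ B.T @ X + Q
print(np.allclose(residual, 0))   # X satisfies the ARE to machine precision
```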
With all the ingredients prepared in previous chapters, Chaps. 8 and 9 readily
present solution procedures and formulae to compute all the stabilizing controllers,
the H2 optimal controller, and H∞ suboptimal controllers. Synthesis of all these
controllers follows a unified framework. Again, all the derivations are accompanied
with graphical illustrations and simple numerical examples.
Following the introduction of the controller synthesis problems in previous
chapters, the last chapter focuses on presenting a more practical case study. It shows
how the CSD approach can be applied in many industrial applications. Again, the
solution procedure is transparent, logical, and explicit.
For readers’ convenience, MATLAB codes for most worked examples are
included in this book.
The material in this book can be used for postgraduate as well as senior
undergraduate teaching.
References
1. Doyle JC, Glover K, Khargonekar PP, Francis BA (1989) State-space solutions to standard H2
and H∞ control problems. IEEE Trans Autom Control 34:831–847
2. Francis BA (1987) A course in H∞ control theory. Springer, Berlin
3. Green M (1992) H∞ controller synthesis by J-lossless coprime factorization. SIAM J Control
Optim 30:522–547
4. Kimura H (1997) Chain-scattering approach to H∞ control. Birkhäuser, Boston
5. Niconet e.V. (2012) SLICOT – Subroutine Library in Systems and Control Theory. http://slicot.org/
6. Zames G (1981) Feedback and optimal sensitivity: model reference transformations,
multiplicative seminorms, and approximate inverses. IEEE Trans Autom Control 26:301–320
7. Zames G, Francis BA (1983) Feedback, minimax sensitivity, and optimal robustness. IEEE
Trans Autom Control 28:585–601
8. Zhou K, Doyle JC, Glover K (1996) Robust and optimal control. Prentice Hall, Upper Saddle
River
Chapter 2
Preliminaries
Classical control design and analysis utilizes frequency domain tools to specify
system performance, and requires background in operator theory and in single-input,
single-output linear systems. In modern control, the time domain approach can be
used to deal with multi-input and multi-output cases. Moreover, concepts of linear
algebra and matrix-vector operations are used in system analysis and synthesis.
Some useful fundamentals will therefore be reviewed in this chapter.
2.1 Linear Algebra and Matrix Theory

This section presents useful and well-known fundamentals of linear algebra and
matrix theory, which facilitate the understanding of the subsequent control system
concepts and methodology introduced. The stated results can be considered to be
purely preliminary in nature, and hence, their proofs are omitted.
2.1.1 Vectors and Matrices

Control systems are, in general, multivariable. That means one deals with more than
one variable in input, output, and state. Hence, vectors and matrices are frequently
used to represent systems and system interconnections. In engineering and science,
one usually has a situation where more than one quantity is closely linked to another.
For instance, in specifying the location of a robot on a flat floor, one may use the
numbers 2 and 3 to indicate the robot is at 2 units east and 3 units north from
where one stands, and following the same logic, one may use −1 and −2 to indicate
that the robot is at 1 unit west and 2 units south. Here, (2, 3) and (−1, −2) represent
two different locations, and the numbers 2 and 3 are in a fixed order to show that
particular location, while (3, 2) would represent the position 3 units east and 2 units
north. Such a group of numbers in a certain order forms a vector, and the dimension
of a vector corresponds to how many numbers there are in the vector. Hence, (2, 3)
is a 2-dimensional vector. Conventionally, a vector is defined as a column vector.
In the above example, the position vector is thus written as $\begin{bmatrix} 2 \\ 3 \end{bmatrix}$ or $\begin{bmatrix} -1 \\ -2 \end{bmatrix}$. For any
positive integer n, an n-dimensional (usually shortened as n-dim or n-D) vector x is
denoted by $x = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}$.
The transpose of a vector x is denoted by $x^T$ and is defined by
$x^T = [\,x_1\ x_2\ \cdots\ x_n\,]$, a row vector. A group of vectors of the same dimension in a
certain order forms a matrix. For example, for $M_i = \begin{bmatrix} m_{i1} \\ \vdots \\ m_{in} \end{bmatrix}$, $1 \le i \le p$, $M_i$ is
an n-dim vector and $M = [\,M_1\ M_2\ \cdots\ M_p\,]$ is an $n \times p$ matrix. Obviously, a vector
is a special case of a matrix: $x^T = [\,x_1\ x_2\ \cdots\ x_n\,]$ is simply a matrix of $1 \times n$
dimensions. The elements or entries in a matrix can be real numbers or complex
numbers. One uses $M \in \mathbb{R}^{n \times p}$ to show that the matrix M is of $n \times p$ dimensions and all
the elements of M are real numbers; $M \in \mathbb{C}^{n \times p}$ shows an $n \times p$ dimensional matrix
M with complex numbers. It is clear that $\mathbb{R}^{n \times p} \subset \mathbb{C}^{n \times p}$. It is also a convention to
use capital English letters to show a matrix such as M, whereas lower case letters
(sometimes in bold face) are employed to show a vector such as x, and lower case
letters to show a scalar number. A matrix $M = \begin{bmatrix} m_{11} & \cdots & m_{1p} \\ \vdots & & \vdots \\ m_{n1} & \cdots & m_{np} \end{bmatrix}$ of $n \times p$
dimension can be abbreviated as $M = \{m_{ij}\}_{n \times p}$. Similar to vectors, the transpose of
a matrix M is $M^T = \{m_{ji}\}_{p \times n}$. When M is in $\mathbb{C}^{n \times p}$, the complex conjugate transpose
of M is defined by $M^* = \{\bar{m}_{ji}\}_{p \times n}$, where $\bar{m}_{ji}$ is the complex conjugate of $m_{ji}$.
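The transpose and conjugate transpose definitions above can be checked on a small example (the matrix itself is our own illustration):

```python
import numpy as np

# Sketch of the definitions above: the transpose swaps row/column indices,
# and the complex conjugate transpose M* additionally conjugates each entry.
M = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 2 + 0j],
              [4 - 3j, 5 + 5j]])   # a 3 x 2 complex matrix

MT = M.T                           # transpose {m_ji}, shape 2 x 3
Mstar = M.conj().T                 # conjugate transpose M*

# Entry (1,3) of M* is the conjugate of entry (3,1) of M.
print(Mstar.shape, Mstar[0, 2])
```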
A few manipulations can be defined for vectors and matrices. Two matrices of
the same dimensions, e.g., $M = \begin{bmatrix} m_{11} & \cdots & m_{1p} \\ \vdots & & \vdots \\ m_{n1} & \cdots & m_{np} \end{bmatrix}$ and $N = \begin{bmatrix} n_{11} & \cdots & n_{1p} \\ \vdots & & \vdots \\ n_{n1} & \cdots & n_{np} \end{bmatrix}$,
can be added together, i.e., $P = M + N$, where
$P = \begin{bmatrix} p_{11} & \cdots & p_{1p} \\ \vdots & & \vdots \\ p_{n1} & \cdots & p_{np} \end{bmatrix} = \begin{bmatrix} m_{11} + n_{11} & \cdots & m_{1p} + n_{1p} \\ \vdots & & \vdots \\ m_{n1} + n_{n1} & \cdots & m_{np} + n_{np} \end{bmatrix}$.
A multiplication is defined for two matrices only when their dimensions are
compatible. That is, for $M = \{m_{ij}\}_{n \times p}$ and $N = \{n_{kl}\}_{k \times l}$, only when $p = k$ may one
form the product $P = MN$, where $P = \{p_{ij}\}_{n \times l}$, with
$p_{ij} = \sum_{r=1}^{p(=k)} m_{ir} n_{rj}$.
The following paragraph summarizes a few more aspects of vector/matrix
manipulations [4].
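The compatibility rule and the entry formula can be sketched as follows (the matrices are our own example):

```python
import numpy as np

# Sketch: the product P = MN is defined because the inner dimensions agree
# (p == k), and each entry satisfies p_ij = sum_r m_ir * n_rj.
M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # n x p with n = 2, p = 3
N = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0, 2.0]])        # k x l with k = 3 = p, l = 2
P = M @ N                         # P is n x l, i.e., 2 x 2

# Recompute one entry by the summation formula and compare.
i, j = 1, 0
p_ij = sum(M[i, r] * N[r, j] for r in range(M.shape[1]))
print(p_ij == P[i, j])
```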
1. A square matrix M is called nonsingular if a matrix B exists such that
$MB = BM = I$. Define $B = M^{-1}$. The inverse matrix $M^{-1}$ exists if $\det(M) \ne 0$,
where det(M) is the determinant of M. If $M^{-1}$ does not exist, M is said to be
singular. If the inverses of M, B, and MB all exist, then $(MB)^{-1} = B^{-1} M^{-1}$.
2. A complex square matrix is called unitary if its inverse is equal to its complex
conjugate transpose, $M^* M = M M^* = I$, where I denotes the identity matrix of the
appropriate dimensions. A square matrix M is called orthogonal if it is real and
satisfies $M^T M = M M^T = I$. For an orthogonal matrix, the inverse is its transpose.
3. An $n \times p$ matrix M is of rank m if the maximum number of linearly independent
rows (or columns) is m. This equals the dimension of $\mathrm{img}(M) := \{Mx \mid x \in \mathbb{R}^p\}$.
4. An $n \times p$ matrix M is said to have full row rank if $n \le p$ and $\mathrm{rank}(M) = n$. It has
full column rank if $n \ge p$ and $\mathrm{rank}(M) = p$.
5. A symmetric matrix M of $n \times n$ dimension is positive definite if $x^T M x \ge 0$, where
x is any n-dimensional (real) vector, with $x^T M x = 0$ only if $x = 0$. If for any
n-dimensional vector x, $x^T M x \ge 0$ always holds, then M is positive semi-definite.
A positive (semi-)definite matrix M may be denoted as $M > 0$ ($M \ge 0$). Similarly,
negative definite and negative semi-definite matrices may be defined.
6. For a positive definite matrix M, its inverse $M^{-1}$ exists and is also positive
definite.
7. All eigenvalues of a positive definite matrix are positive.
8. For two positive definite matrices $M_1$ and $M_2$, one has $\alpha M_1 + \beta M_2 > 0$ when $\alpha, \beta$
are nonnegative and not both zero.
9. A square matrix M is called normal if $M M^* = M^* M$. A normal matrix has the
decomposition $M = U \Lambda U^*$, where $U U^* = I$ and $\Lambda$ is a diagonal matrix.
Table 2.1 summarizes the classification of normal matrices.
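Two of these facts can be checked numerically on example matrices of our own choosing: fact 7 (a positive definite matrix has only positive eigenvalues) and fact 9 (a normal matrix admits $M = U\Lambda U^*$):

```python
import numpy as np

# Sketch of facts 7 and 9 above on hand-picked example matrices.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])       # symmetric, positive definite
eigs = np.linalg.eigvalsh(A)
print(np.all(eigs > 0))          # all eigenvalues positive

N = np.array([[0.0, -1.0],
              [1.0,  0.0]])      # skew-symmetric, hence normal
lam, U = np.linalg.eig(N)        # distinct eigenvalues of a normal matrix
                                 # give orthonormal eigenvectors, so U is unitary
recon = U @ np.diag(lam) @ U.conj().T
print(np.allclose(recon, N))     # M = U Lambda U* holds
```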
2.1.2 Linear Spaces
Let R and C be the real and complex scalar fields, respectively. A linear space V over
a field F consists of a set on which two operations are defined. The first one is
denoted by "addition (+)": for each pair of elements x and y in V, there exists a
unique element $x + y$ in V. The second one is a scalar "multiplication (·)": for
each element $\alpha$ in F and each element x in V, there is a unique element $\alpha x$ in V. The
following conditions hold with respect to the above two operations.
1. For each element x in V, $1 \cdot x = x$.
2. For all x, y, z in V, $(x + y) + z = x + (y + z)$.
3. For all x, y in V, $x + y = y + x$.
4. For each element x in V, there exists an element y in V such that $x + y = 0$.
5. There exists an element in V denoted by 0 such that $x + 0 = x$ for each x in V.
6. For each element $\alpha$ in F and each pair of elements x and y in V,
$\alpha(x + y) = \alpha x + \alpha y$.
7. For each $\alpha, \beta$ in F and each element x in V, $(\alpha\beta)x = \alpha(\beta x)$.
8. For each $\alpha, \beta$ in F and each element x in V, $(\alpha + \beta)x = \alpha x + \beta x$.
Note that the same symbol "0" is used to denote the zero element in V and the scalar
zero in F. In the following, some basic concepts are reviewed first. These definitions
can easily be found in standard linear algebra textbooks; for example, see [8].
1. As mentioned in the earlier paragraph, the elements $x + y$ and $\alpha x$ are called the
sum of x and y and the product of $\alpha$ and x, respectively, where $x, y \in V$, $\alpha \in F$.
2. A subset W of a vector space V over a field F is called a subspace of V if
W itself is a vector space over F under the operations of addition and scalar
multiplication defined on V.
3. Let $x_1, x_2, \ldots, x_k$ be vectors in V; then an element of the form
$\alpha_1 x_1 + \alpha_2 x_2 + \cdots + \alpha_k x_k$ with $\alpha_i \in F$ is a linear combination over F of
$x_1, x_2, \ldots, x_k$.
4. The set of all linear combinations of $x_1, x_2, \ldots, x_k \in V$ is a subspace called the
span of $x_1, x_2, \ldots, x_k$, denoted by

$$\mathrm{span}\{x_1, x_2, \ldots, x_k\} = \{x \mid x = \alpha_1 x_1 + \alpha_2 x_2 + \cdots + \alpha_k x_k;\ \alpha_i \in F\}. \tag{2.1}$$
12. The orthogonal complement of a subspace W of V is defined as

W⊥ = {y ∈ V : ⟨y, x⟩ = 0, ∀x ∈ W}.   (2.2)

13. Let M be an n × p real, full-rank matrix with n > p. The orthogonal complement
of M is a matrix M⊥ of dimension n × (n − p), such that [M  M⊥] is a square,
nonsingular matrix with the following property: M^T M⊥ = 0.
14. The following properties hold:
A matrix can be interpreted as a mapping between two linear spaces. For example,
a 2 × 2 matrix M = {mij} defines the mapping y = Mx, where x = [x1; x2] and
y = [y1; y2] are both in R^{2×1} (the two spaces in this case are the same). For most x,
the image y is a rotation of x plus an expansion or contraction in length, determined
by the matrix M. However, there are some vectors in the space whose images under
the mapping M remain in the same direction as the original vectors. These vectors
are the eigenvectors of M, showing the essence (eigen) of the mapping M. The
factors of the length change are the eigenvalues of M. Rigorous definitions are
given below.
For an n × n square matrix M, the determinant det(λI − M) is called the charac-
teristic polynomial of M. The characteristic equation is given by

det(λI − M) = 0.   (2.7)

The n roots of the characteristic equation are the eigenvalues of M. For an eigenvalue
λ of matrix M, there is a nonzero vector ξ such that

Mξ = λξ.   (2.8)
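For a 2 × 2 matrix, (2.7) reduces to a quadratic in λ; a minimal pure-Python sketch (illustrative values only, not from the text) that also checks Mξ = λξ:

```python
# Eigenvalues of a 2x2 matrix M from det(lambda*I - M) = 0,
# i.e. lambda^2 - trace(M)*lambda + det(M) = 0.
import cmath

def eig2(M):
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

M = [[2.0, 1.0], [1.0, 2.0]]          # symmetric illustrative example
l1, l2 = eig2(M)                      # roots of the characteristic equation

# For eigenvalue l1, an eigenvector is (b, l1 - a) when b != 0.
v = (M[0][1], l1 - M[0][0])
# Check M v = l1 v componentwise: the image keeps the direction of v.
Mv = (M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1])
assert abs(Mv[0] - l1 * v[0]) < 1e-12 and abs(Mv[1] - l1 * v[1]) < 1e-12
```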
M = UΛU*.   (2.10)
where S = M22 − M21M11^{−1}M12 is the Schur complement of M11 in M. Then, if M
is nonsingular, it can be derived that

M^{−1} = [M11^{−1} + M11^{−1}M12 S^{−1}M21M11^{−1}, −M11^{−1}M12 S^{−1}; −S^{−1}M21M11^{−1}, S^{−1}]   (2.13)

and

M^{−1} = [Ŝ^{−1}, −Ŝ^{−1}M12M22^{−1}; −M22^{−1}M21Ŝ^{−1}, M22^{−1} + M22^{−1}M21Ŝ^{−1}M12M22^{−1}],   (2.15)

where Ŝ = M11 − M12M22^{−1}M21 is called the Schur complement of M22 in M. The
matrix inversion formulae can be further simplified if M is block triangular:

[M11, 0; M21, M22]^{−1} = [M11^{−1}, 0; −M22^{−1}M21M11^{−1}, M22^{−1}],   (2.16)

[M11, M12; 0, M22]^{−1} = [M11^{−1}, −M11^{−1}M12M22^{−1}; 0, M22^{−1}].   (2.17)
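Formula (2.13) can be verified directly for a 2 × 2 matrix with scalar blocks; a minimal pure-Python sketch (illustrative values only):

```python
# Verify the Schur-complement inversion formula (2.13) for a 2x2 matrix
# partitioned into scalar blocks M = [[m11, m12], [m21, m22]].
m11, m12, m21, m22 = 4.0, 1.0, 2.0, 3.0
S = m22 - m21 * (1 / m11) * m12          # Schur complement of m11 in M

# Blocks of M^{-1} per (2.13)
i11 = 1 / m11 + (1 / m11) * m12 * (1 / S) * m21 * (1 / m11)
i12 = -(1 / m11) * m12 * (1 / S)
i21 = -(1 / S) * m21 * (1 / m11)
i22 = 1 / S

# Direct 2x2 inverse for comparison
det = m11 * m22 - m12 * m21
d11, d12, d21, d22 = m22 / det, -m12 / det, -m21 / det, m11 / det
assert all(abs(a - b) < 1e-12 for a, b in
           [(i11, d11), (i12, d12), (i21, d21), (i22, d22)])
```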
MM⁺M = M,   (2.19)
M⁺MM⁺ = M⁺,   (2.20)
(MM⁺)* = MM⁺,   (2.21)
(M⁺M)* = M⁺M.   (2.22)
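For a full column rank M, the pseudoinverse is M⁺ = (M^T M)^{−1}M^T, and the four conditions (2.19)–(2.22) can be checked directly; a pure-Python sketch on a 2 × 1 example (illustrative values only):

```python
# Check the four Moore-Penrose conditions (2.19)-(2.22) for a full
# column rank matrix M (2 x 1), where M+ = (M^T M)^{-1} M^T.
M = [[1.0], [2.0]]                 # 2x1
Mp = [[1.0 / 5.0, 2.0 / 5.0]]      # 1x2 pseudoinverse: M^T / (M^T M)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def close(A, B):
    return all(abs(a - b) < 1e-12
               for ra, rb in zip(A, B) for a, b in zip(ra, rb))

MMp = matmul(M, Mp)       # 2x2
MpM = matmul(Mp, M)       # 1x1
assert close(matmul(MMp, M), M)                   # (2.19) M M+ M = M
assert close(matmul(MpM, Mp), Mp)                 # (2.20) M+ M M+ = M+
assert close(MMp, [list(r) for r in zip(*MMp)])   # (2.21) (M M+)* = M M+
assert close(MpM, [list(r) for r in zip(*MpM)])   # (2.22) (M+ M)* = M+ M
```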
Definition 2.4 Let M = {mij} be a matrix in C^{m×n}. The following gives a list of
different matrix norms, which will be useful for the rest of this book.

1. Matrix 1-norm (column sum): ||M||₁ := max_j Σ_{i=1}^{m} |mij|.
2. Matrix 2-norm: ||M||₂ := √(λmax(M*M)).
3. Matrix ∞-norm (row sum): ||M||∞ := max_i Σ_{j=1}^{n} |mij|.
4. Frobenius norm: ||M||F := √(trace(M*M)) = √(Σ_{i=1}^{m} Σ_{j=1}^{n} m̄ij mij).
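The norms of Definition 2.4 can be computed by hand for a small real matrix; a pure-Python sketch (illustrative values only):

```python
# Compute the matrix norms of Definition 2.4 for a real 2x2 example.
import math

M = [[1.0, -2.0], [3.0, 4.0]]

norm1   = max(sum(abs(M[i][j]) for i in range(2)) for j in range(2))  # max column sum
norminf = max(sum(abs(M[i][j]) for j in range(2)) for i in range(2))  # max row sum
normF   = math.sqrt(sum(M[i][j] ** 2 for i in range(2) for j in range(2)))

# 2-norm: sqrt of the largest eigenvalue of M^T M (2x2, so solve the quadratic).
G = [[M[0][0]**2 + M[1][0]**2, M[0][0]*M[0][1] + M[1][0]*M[1][1]],
     [M[0][0]*M[0][1] + M[1][0]*M[1][1], M[0][1]**2 + M[1][1]**2]]
tr, det = G[0][0] + G[1][1], G[0][0]*G[1][1] - G[0][1]*G[1][0]
lmax = (tr + math.sqrt(tr*tr - 4*det)) / 2
norm2 = math.sqrt(lmax)

assert norm1 == 6.0 and norminf == 7.0
assert abs(normF - math.sqrt(30.0)) < 1e-12
assert norm2 <= normF <= norm2 * math.sqrt(2)   # standard norm inequalities
```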
It is straightforward from the above definition that the matrix M and its complex
conjugate transpose M* have the same singular values, i.e., {σi(M)} = {σi(M*)}.
Let M ∈ C^{m×n}; there exist unitary matrices U = [u1 u2 ... um] ∈ C^{m×m} and
V = [v1 v2 ... vn] ∈ C^{n×n} such that

M = UΣV*,   (2.24)

where

Σ = [Σ1, 0; 0, 0],   (2.25)

Σ1 = diag(σ1, σ2, ..., σr).   (2.26)
Controllers or control schemes are as a matter of fact functions in the time domain
or the frequency domain. Hence, the synthesis of the required controller, an optimal
controller in particular, is a procedure in functional analysis. However, considering
that the underlying systems in this book are mainly the linear time-invariant systems
and that this book is primarily for practicing control engineers and engineering
students, many mathematical definitions and deductions will not be included in
order to make it more accessible to the targeted readers. Interested readers are
recommended to consult relevant books, for instance [5, 6, 7, 10], for rigorous and
in-depth treatment of those mathematical concepts.
Function spaces useful for the themes introduced in this book are L2, H2, L∞, and
H∞, and their orthogonal complement spaces.
The space Lp (for 1 ≤ p < ∞) consists of all Lebesgue measurable functions w(t)
defined on the interval (−∞, ∞) such that

||w||p := ( ∫_{−∞}^{∞} |w(t)|^p dt )^{1/p} < ∞.   (2.28)

The space L∞ consists of all Lebesgue measurable functions w(t) such that
||w||∞ := ess sup_t |w(t)| < ∞.
(Figure: the Laplace transform maps the time-domain space L2(−∞, ∞) into the
frequency domain, with L2(−∞, 0] mapped onto H2⊥; the inverse transform maps
back.)
Hence, the norm for H2 can be computed just as it is done for L2 . The real
rational subspace of H2 , which consists of all strictly proper and real, rational,
stable transfer function matrices, is denoted by RH2 .
3. L∞-function space: G(s) ∈ L∞ if G(s) is essentially bounded on the imaginary
axis, i.e., sup_ω σ̄(G(jω)) < ∞.
(Figure: example scalar functions located in the sets RL∞, RH∞ (stable), RH2
(strictly proper and stable), and RH2⊥, e.g., A: (s − 3)(s − 4)/((s + 1)(s + 2));
B: (s − 1)/(s + 4); C: (s + 7)/(s + 5); D: (s − 20)/((s + 3)(s + 5));
E: (s + 4)(s + 5)/((s + 6)(s + 7)); F: a function with denominator (s + 2)(s + 4).)
All proper and real, rational, transfer function matrices with no poles on the
imaginary axis form a subspace which is denoted by RL∞.
4. H∞ norm, the ∞-norm of Hardy space functions: G ∈ H∞ if G(s) is stable and
sup_{Re(s)>0} σ̄(G(s)) < ∞.

1. It is clear that G1(s) = s/(s + 1) is stable and sup_ω |G1(jω)| = sup_ω |jω/(jω + 1)|
= sup_ω ω/√(1 + ω²) = 1 < ∞. Hence, G1(s) ∈ RH∞. By decomposition of G1(s),
one has G1(s) = s/(s + 1) = 1 − 1/(s + 1). Thus,
||G1||₂ = ( (1/2π) ∫_{−∞}^{∞} |G1(jω)|² dω )^{1/2}
       = ( (1/2π) ∫_{−∞}^{∞} (1 − 1/(jω + 1))(1 − 1/(−jω + 1)) dω )^{1/2}
       = ( (1/2π) ∫_{−∞}^{∞} [1 − 1/(jω + 1) − 1/(−jω + 1) + 1/((jω + 1)(−jω + 1))] dω )^{1/2}
       = ∞.   (2.34)
This implies G1(s) ∉ RH2, which agrees with the fact that G1(s) is bi-proper.

2. By definition, G2(s) ∉ RL∞ because sup_ω |G2(jω)| = sup_ω |(jω)²/(jω + 1)| = ∞;
G2(s) ∉ RH2 because ∫_{−∞}^{∞} |G2(jω)|² dω = ∞; and G2(s) ∉ RH∞ because
sup_{Re(s)≥0} |s²/(s + 1)| = ∞.

3. Apparently, G3(s) ∉ RH∞ because G3(s) is not analytic at s = 1; G3(s) ∈ RL∞
because G3(s) is analytic on the jω-axis and satisfies sup_ω |G3(jω)| < ∞; and
G3(s) ∉ RH2 because G3(s) is not analytic at s = 1.
A norm quantifies the size of a system or a function. For control system analysis
and synthesis, norms offer direct criteria corresponding to design specifications.
The detailed treatment of this topic can be found in books such as [2, 3]. In this
book, the following definitions are listed for easy reference. Note that the signals
mentioned below are scalar and measurable, and the systems are scalar, linear
time-invariant, and causal. The vector (matrix) versions of these norms can be found
in, e.g., the books mentioned above.

Definition 2.6 The 1-norm of a signal y(t) on (−∞, ∞) is defined as

||y||₁ := ∫_{−∞}^{∞} |y(t)| dt.   (2.35)
The value of ||G||∞ equals the distance in the complex plane from the origin to the
furthest point on the Nyquist plot of G(s). It also appears as the peak value on the
Bode magnitude plot of G(s). The Hankel norm is another measure of function size
[3], used especially in the design framework of H∞ loop shaping, where it can be
exploited to determine the stability margin. Its definition is given below.
2.2 Function Spaces and Signals 21
Definition 2.12 The Hankel norm measures the residual energy that a system
delivers after t = 0 in response to inputs applied before t = 0. For a stable system
described as y(t) = Gu(t), the Hankel norm is defined as

||G||H = sup_{u ∈ L2(−∞,0)} ( ∫_0^∞ y^T(t)y(t) dt / ∫_{−∞}^0 u^T(t)u(t) dt )^{1/2}.   (2.42)
Equivalently, ||G||H = √(λmax(PcPo)), where Pc and Po are the controllability
gramian and observability gramian matrices, respectively, which will be discussed
in Chap. 7.
Example 2.2 Given the linear system G(s) below, determine its H2 norm and
Hankel norm.

ẋ = [−1, 0; 0, −2] x + [1; 1] u,
y = [1  2] x.   (2.44)
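Example 2.2 can be checked numerically. Below is a minimal pure-Python sketch; it assumes the stable reading A = diag(−1, −2) of (2.44), and uses the fact that for diagonal A the gramian Lyapunov equations solve entrywise:

```python
# H2 and Hankel norms for A = diag(-1, -2), B = [1, 1]^T, C = [1, 2].
# For diagonal A, A P + P A^T = -B B^T and A^T Q + Q A = -C^T C solve
# entrywise: P_ij = b_i b_j / (-(l_i + l_j)), and similarly for Q.
import math

lam = [-1.0, -2.0]
B = [1.0, 1.0]
C = [1.0, 2.0]

Pc = [[B[i] * B[j] / -(lam[i] + lam[j]) for j in range(2)] for i in range(2)]
Po = [[C[i] * C[j] / -(lam[i] + lam[j]) for j in range(2)] for i in range(2)]

# ||G||_2^2 = trace(C Pc C^T)  (equivalently trace(B^T Po B))
h2 = math.sqrt(sum(C[i] * Pc[i][j] * C[j] for i in range(2) for j in range(2)))

# ||G||_H = sqrt(lambda_max(Pc Po)); 2x2 product, eigenvalues via quadratic.
M = [[sum(Pc[i][k] * Po[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
tr, det = M[0][0] + M[1][1], M[0][0] * M[1][1] - M[0][1] * M[1][0]
hankel = math.sqrt((tr + math.sqrt(tr * tr - 4 * det)) / 2)

assert abs(h2 - math.sqrt(17.0 / 6.0)) < 1e-12
assert hankel < h2            # holds numerically for this example
```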
The aim of this section is to introduce some basic results in linear system theory
[1] that are particularly applicable to the work in the following chapters of this
book. The descriptions, properties, and algebra of linear systems facilitate the
development of optimal and robust control theory. These concepts offer fundamental
tools for system analysis and synthesis and form the main scope of modern control
theory and control engineering.
ẋ = Ax + Bu,  x(0) = x0,
y = Cx + Du,   (2.49)

where, ∀t ≥ 0, x(t) ∈ R^n is the state vector, u(t) ∈ R^m is the input vector, and y(t) ∈ R^p
is the output vector. The transfer function from u to y is defined as G(s) = Y(s)/U(s),
where Y(s) and U(s) are the Laplace transforms of y(t) and u(t), respectively. It can
be shown that

G(s) = C(sI − A)^{−1}B + D.   (2.52)
(Fig. 2.3: block diagram of the state similarity transformation, relating the
realization (A, B, C) with state x to the realization (Â, B̂, Ĉ) with state x̂ = Tx.)
Different state variables can be defined for the linear time-invariant system given
in (2.49) via an n × n nonsingular matrix T. Let x̂ = Tx; then the system can be
described by

x̂˙ = TAT^{−1} x̂ + TBu,  x̂(0) = x̂0 = Tx0,
y = CT^{−1} x̂ + Du.   (2.55)

The transformed system is derived via the state similarity transformation (T, T^{−1}).
It has the same transfer function matrix from the input to the output, though with a
different state-space model:

G(s) = Ĉ(sI − Â)^{−1}B̂ + D̂,   (2.56)

where  = TAT^{−1}, B̂ = TB, Ĉ = CT^{−1}, and D̂ = D. The relationship of
this transformation is illustrated in Fig. 2.3. The conjugate system G~(s) of G(s) is
given by

G~(s) := G^T(−s) = B^T(−sI − A^T)^{−1}C^T + D^T,   (2.57)

which has a state-space realization (−A^T, C^T, −B^T, D^T).   (2.58)
2.3.3.1 Stability
2.3.3.2 Controllability
Taking the system given in (2.49) as an example, controllability refers to the ability
of the input signal u to transfer the state x from any initial state to any final state in
finite time. A system is called completely controllable if, for any given initial state
x0 and any final state xf, there exist a finite time Tf and an input u(t), 0 ≤ t ≤ Tf,
which takes x(0) = x0 to x(Tf) = xf. Note that controllability of a system concerns
only the matrix pair (A, B), and a state similarity transformation does not affect
controllability. To verify controllability and, below, observability, the rank test and
the gramian test are the well-known methods [1]. The following summarizes these
schemes.
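The rank test can be sketched for n = 2, where the controllability matrix [B  AB] is 2 × 2 and its rank is checked via the determinant (illustrative values only):

```python
# Rank test for controllability: the pair (A, B) with n = 2 is completely
# controllable iff [B  AB] is nonsingular.
def controllable_2x2(A, B):
    AB = [A[0][0] * B[0] + A[0][1] * B[1], A[1][0] * B[0] + A[1][1] * B[1]]
    det = B[0] * AB[1] - B[1] * AB[0]      # det [B  AB]
    return abs(det) > 1e-12

A = [[-1.0, 0.0], [0.0, -2.0]]
assert controllable_2x2(A, [1.0, 1.0])      # input reaches both modes
assert not controllable_2x2(A, [1.0, 0.0])  # second mode is never excited
```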
2.3 Linear System Theory 25
2.3.3.3 Observability
Controllability describes the ability of the input to drive the states; its dual concept
is the observability of a system. Taking the system given in (2.49) as an example,
observability means the extent to which the system state variables are "visible" at
the output. A system is called completely observable if, by setting the input identical
to zero, any initial state x(0) can be uniquely determined from the output y(t),
0 ≤ t ≤ T, for some finite T. For example, if no input (voltage source) u is applied
to the circuit of Fig. 2.4, the initial state (the voltage across the capacitor) cannot be
deduced from the output y. Note that observability concerns only the matrix pair
(A, C), and a state similarity transformation does not change observability.
The complete observability of a system can be verified by using the rank test or the
gramian test, which are summarized as follows:

1. The observability matrix [C; CA; ⋮; CA^{n−1}] is of full rank.
2. The matrix [λI − A; C] has full column rank at every eigenvalue λ of A.
3. The observability gramian matrix

Wo(t) = ∫_0^t e^{A^T τ} C^T C e^{Aτ} dτ   (2.60)

is nonsingular for some t > 0.
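The observability rank test is the dual computation; for n = 2 the matrix [C; CA] is 2 × 2 (illustrative values only):

```python
# Rank test for observability: (A, C) with n = 2 is completely observable
# iff [C; CA] is nonsingular.  Dual of the controllability test.
def observable_2x2(A, C):
    CA = [C[0] * A[0][0] + C[1] * A[1][0], C[0] * A[0][1] + C[1] * A[1][1]]
    det = C[0] * CA[1] - C[1] * CA[0]
    return abs(det) > 1e-12

A = [[-1.0, 0.0], [0.0, -2.0]]
assert observable_2x2(A, [1.0, 2.0])       # both modes appear in the output
assert not observable_2x2(A, [0.0, 1.0])   # the first state is invisible
```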
For any given LTI system in a state-space model (2.49), an adequately chosen state
similarity transformation matrix T can be applied to transform (2.52) into the
Kalman canonical form, whose controllable and observable part (Aco, Bco, Cco)
satisfies

G(s) = Cco(sI − Aco)^{−1}Bco + D,   (2.61)

which shows that the transfer function only describes the controllable and observ-
able part of the system. Figure 2.5 shows the relation of (2.61) in a block diagram.
The dynamics of the uncontrollable or unobservable parts, if they exist in the
system, will not be seen in the input/output relationship (the transfer function). That
explains how a system can be BIBO stable but not asymptotically stable.
(Fig. 2.5: block diagram of the Kalman decomposition; only the controllable and
observable subsystem (Aco, Bco, Cco) connects the input u to the output y, while
the remaining subsystems do not appear in the transfer function.)
Let state-space realizations of the systems G1(s) and G2(s) be given respectively by

[ẋ1; y1] = [A1, B1; C1, D1][x1; u1]   (2.63)

and

[ẋ2; y2] = [A2, B2; C2, D2][x2; u2].   (2.64)

Obviously, system models formed from G1(s) and G2(s) could involve the state
variables from both systems. By augmenting (2.63) and (2.64), one obtains

[ẋ1; x2; y1] = [A1, 0, B1; 0, I, 0; C1, 0, D1][x1; x2; u1]  ⇒  [x2; ẋ1; y1] = [I, 0, 0; 0, A1, B1; 0, C1, D1][x2; x1; u1]   (2.65)
(Figure: block diagrams of the individual realizations (A1, B1, C1, D1) of G1(s)
and (A2, B2, C2, D2) of G2(s).)
and

[ẋ1; ẋ2; y2] = [I, 0, 0; 0, A2, B2; 0, C2, D2][ẋ1; x2; u2]  ⇒  [ẋ2; ẋ1; y2] = [A2, 0, B2; 0, I, 0; C2, 0, D2][x2; ẋ1; u2].   (2.66)
It can be seen in the following that manipulations between two control system
models can be realized via the algebra of usual constant matrix operations. For the
cascade connection y = G2(s)G1(s)u shown in Fig. 2.7, the interconnection satisfies

u1 = u,  u2 = y1,  y = y2,   (2.69)

(Fig. 2.7: cascade connection of G1(s) followed by G2(s).)

or equivalently

[ẋ2; ẋ1; y] = [A2, 0, B2; 0, I, 0; C2, 0, D2][I, 0, 0; 0, A1, B1; 0, C1, D1][x2; x1; u] = [A2, B2C1, B2D1; 0, A1, B1; C2, D2C1, D2D1][x2; x1; u].   (2.71)

Hence,

G2(s)G1(s) = [A2, B2C1, B2D1; 0, A1, B1; C2, D2C1, D2D1].   (2.72)
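The cascade algebra of (2.71)–(2.72) can be checked by evaluating transfer functions at a test point; a pure-Python sketch with illustrative first-order systems (values not from the text):

```python
# Cascade: for scalar systems G1 = (A1,B1,C1,D1) and G2 = (A2,B2,C2,D2),
# the product G2(s)G1(s) has realization
#   A = [A2, B2*C1; 0, A1],  B = [B2*D1; B1],  C = [C2, D2*C1],  D = D2*D1.
# Verify by comparing transfer function values at a test point s.
A1, B1, C1, D1 = -1.0, 1.0, 2.0, 0.0      # G1(s) = 2/(s+1)
A2, B2, C2, D2 = -3.0, 1.0, 1.0, 1.0      # G2(s) = 1/(s+3) + 1

def g1(s): return C1 * B1 / (s - A1) + D1
def g2(s): return C2 * B2 / (s - A2) + D2

def cascade(s):
    # G(s) = C (sI - A)^{-1} B + D for the 2-state block realization;
    # A is upper triangular, so (sI - A)^{-1} is known in closed form.
    a11, a12, a22 = A2, B2 * C1, A1
    b1, b2 = B2 * D1, B1
    c1, c2 = C2, D2 * C1
    d = D2 * D1
    i11 = 1 / (s - a11); i22 = 1 / (s - a22)
    i12 = i11 * a12 * i22
    return c1 * (i11 * b1 + i12 * b2) + c2 * (i22 * b2) + d

s = 2.0
assert abs(cascade(s) - g2(s) * g1(s)) < 1e-12
```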
x̂˙ = Tẋ = TAx + TBu = TAT^{−1}x̂ + (TB)u   (2.75)

and

y = Cx + Du = CT^{−1}x̂ + Du.   (2.76)

This implies

[TAT^{−1}, TB; CT^{−1}, D] = [T, 0; 0, I][A, B; C, D][T^{−1}, 0; 0, I].   (2.77)

Consider the specific case that A = [A11, A12; A21, A22], B = [B1; B2], C = [C1  C2],
and T = [I, X; 0, I] (i.e., T^{−1} = [I, −X; 0, I]), which is helpful for characterizing the
minimum realization of the state-space solutions later. Then,

T[A  B] = [I, X; 0, I][A11, A12, B1; A21, A22, B2] = [A11 + XA21, A12 + XA22, B1 + XB2; A21, A22, B2]   (2.78)
and

[TA; C]T^{−1} = [A11 + XA21, A12 + XA22; A21, A22; C1, C2][I, −X; 0, I]
= [A11 + XA21, −(A11 + XA21)X + A12 + XA22; A21, −A21X + A22; C1, −C1X + C2].   (2.79)

Combining (2.78) and (2.79), the transformed realization is

[TAT^{−1}, TB; CT^{−1}, D] = [A11 + XA21, −(A11 + XA21)X + A12 + XA22, B1 + XB2; A21, −A21X + A22, B2; C1, −C1X + C2, D].   (2.80)
2.4 Linear Fractional Transformations and Chain Scattering-Matrix Description 31
Consider the general feedback control framework shown in Fig. 2.8, where P denotes
the interconnection system of the controlled plant, namely, the standard control (or
compensation) configuration (SCC) [10]. The closed-loop transfer function from w
to z in Fig. 2.8 is given by

LFTl(P, K) = LFTl([P11, P12; P21, P22], K) := P11 + P12K(I − P22K)^{−1}P21,   (2.81)
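The lower LFT (2.81) is easy to compute for scalar blocks; a minimal sketch (illustrative values, with a unity-feedback choice of P of the kind used in Example 2.3 below):

```python
# Lower LFT (2.81) for scalar blocks:
#   LFT_l(P, K) = P11 + P12 K (1 - P22 K)^{-1} P21.
def lft_l(P11, P12, P21, P22, K):
    return P11 + P12 * K * P21 / (1.0 - P22 * K)

# Unity-feedback sanity check: with P11 = 1, P12 = -Pp, P21 = 1, P22 = -Pp,
# the transfer from r to the tracking error is the sensitivity 1/(1 + Pp*K).
Pp, K = 2.0, 3.0
S = lft_l(1.0, -Pp, 1.0, -Pp, K)
assert abs(S - 1.0 / (1.0 + Pp * K)) < 1e-12
```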
where LFT stands for the linear fractional transformation and the subscript "l"
stands for "lower." Different from the LFT, the chain scattering-matrix description
(CSD), developed in network circuit theory, provides a straightforward interconnection
in a cascaded way. The CSD transforms an LFT into a two-port network connection.
Thus, many known theories which have been developed for a two-port network can
then be used. The definition of CSD is briefly introduced below, while the details
on background, properties, and use of CSD will be described in Chaps. 3, 4, and 5.
Figure 2.9 shows the right and left CSD representations.
Define the right and left CSD transformations with G and K, denoted by CSDr(G, K)
and CSDl(G̃, K), respectively [9], as
CSDr(G, K) = CSDr([G11, G12; G21, G22], K) := (G12 + G11K)(G22 + G21K)^{−1}   (2.82)
(Fig. 2.8: the standard control configuration P, with exogenous input w, control
input u, controlled output z, and measurement y.)
Fig. 2.9 Right and left CSD: (a) Right CSD, with G cascaded with K; (b) Left CSD,
with K cascaded with G̃
and

CSDl(G̃, K) = CSDl([G̃11, G̃12; G̃21, G̃22], K) := (G̃11 − KG̃21)^{−1}(KG̃22 − G̃12),   (2.83)

where G22 and G̃11 are square and invertible. Note that, if P21 is invertible, the SCC
matrix P can be transformed to a right CSD as

G = [P12 − P11P21^{−1}P22, P11P21^{−1}; −P21^{−1}P22, P21^{−1}].   (2.84)

Also, if P12 is invertible, the SCC matrix P can be transformed to a left CSD as

G̃ = [P12^{−1}, −P12^{−1}P11; P22P12^{−1}, P21 − P22P12^{−1}P11].   (2.85)
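Equation (2.84) can be verified numerically for scalar entries: the right CSD of the converted matrix G reproduces the lower LFT of P. A minimal sketch with illustrative values:

```python
# Check (scalar case) that CSDr of the transformed matrix G in (2.84)
# reproduces the lower LFT (2.81) of the SCC matrix P.
P11, P12, P21, P22 = 0.5, 2.0, 1.5, -0.7
K = 0.9

# SCC -> right CSD per (2.84), assuming P21 invertible
G11 = P12 - P11 * P22 / P21
G12 = P11 / P21
G21 = -P22 / P21
G22 = 1.0 / P21

csd = (G12 + G11 * K) / (G22 + G21 * K)          # (2.82)
lft = P11 + P12 * K * P21 / (1.0 - P22 * K)      # (2.81)
assert abs(csd - lft) < 1e-12
```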
Example 2.3 Consider the unity feedback control system in Fig. 2.10, where Pp is
a SISO controlled plant. Find its corresponding LFTl and CSD representations.

Let z = [ye; u], w = r, and y = ye. From the unity feedback control system, by
definition, setting u = 0 gives ym = 0; hence, r = ye from r − Ppu = ye, so that

P11 = [ye; u]/r |_{u=0} = [1; 0]  and  P21 = ye/r |_{u=0} = 1.

Similarly, setting r = 0, one can also obtain

P12 = [ye; u]/u |_{r=0} = [−Pp; 1]  and  P22 = ye/u |_{r=0} = −Pp.

The closed-loop transfer function from r to z = [ye; u] is presented below
(Fig. 2.11).
(Fig. 2.11: the closed loop formed by terminating P with the controller K from ye
to u.)

From

z = LFTl(P, K) w = [P11 + P12(I − KP22)^{−1}KP21] w,

one has

[ye; u] = [1; K](1 + PpK)^{−1} r.
Exercises
1. Prove that all the eigenvalues λ(H) of a Hamiltonian matrix H are symmetric
with respect to the jω-axis.
2. Determine the rank of A = [1, 2, 5, 1; 2, 4, 1, 2; 1, 2, 1, 9].
3. Let Q = [1/√2, −1/√2; 1/√2, 1/√2; 0, 0], R = [1, 1; 0, 1], and b = [1; 1; 1].
Utilize the least-squares approach to solve Ax = b, where A = QR.
4. Consider the following system:

[ẋ1; ẋ2] = [1, 0; 2, 4][x1; x2],  x0 = [x10; x20].

5. Consider the transfer function

Y(s)/U(s) = (s + a)/(s³ + 7s² + 14s + 8).

(a) Determine the values of a for which the system is not completely controllable
or not completely observable.
(b) Define the state variables and derive a state-space model in which one of
the states is unobservable.
(c) Define the state variables and derive a state-space model in which one of
the states is uncontrollable.
References
1. Chen CT (2009) Linear system theory and design. Oxford University Press, New York
2. Doyle JC, Francis B, Tannenbaum A (1992) Feedback control theory. Macmillan Publishing
Company, New York
3. Francis BA (1987) A course in H∞ control theory, vol 88, Lecture notes in control and
information sciences. Springer, Berlin
4. Golub GH, Van Loan CF (1989) Matrix computations. The Johns Hopkins University Press,
London
5. Green M, Limebeer DJN (1995) Linear robust control. Prentice Hall, Englewood Cliffs
6. Helton JW, Merino O (1998) Classical control using H∞ methods. SIAM, Philadelphia
7. Rudin W (1973) Functional analysis. McGraw-Hill, New York
8. Strang G (2004) Linear algebra and its applications, 4th edn. Academic, New York
9. Tsai MC, Tsai CS (1993) A chain scattering matrix description approach to H∞ control. IEEE
Trans Autom Control 38:1416–1421
10. Zhou K, Doyle JC, Glover K (1996) Robust and optimal control. Prentice Hall, Upper Saddle
River
Chapter 3
Two-Port Networks
This chapter will briefly introduce two-port network descriptions which are closely
related to that of the general control framework involving descriptions using LFT
and CSD. The two-port network was developed as a common methodology to
describe the relationship between inputs and outputs of an electrical circuit. For
example, the impedance matrix of a two-port network can be determined by each
port's voltage and current according to Ohm's law. The exposition in this book will
focus on both scattering (i.e., LFT) and chain scattering (i.e., CSD) parameters as
well as their applications to modern control theory.
V2/V1 = Z2/(Z1 + Z2).   (3.1)
M.-C. Tsai and D.-W. Gu, Robust and Optimal Control: A Two-port Framework 37
Approach, Advances in Industrial Control, DOI 10.1007/978-1-4471-6257-5__3,
© Springer-Verlag London 2014
(Figs. 3.1–3.2: one-port and two-port network conventions with port voltages V1,
V2 and currents I1, I2, and the voltage divider circuit with series impedance Z1,
shunt impedance Z2, and load ZL.)
When load ZL is included, due to the load effect, the transfer function from V1 to V2
can be determined as

V2/V1 = (Z2 ∥ ZL)/(Z1 + Z2 ∥ ZL) = Z2ZL/(ZL(Z1 + Z2) + Z1Z2).   (3.2)
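The loaded divider gain (3.2) can be checked against the elementary series/parallel reduction of the circuit; a pure-Python sketch with illustrative impedance values:

```python
# Check (3.2): with load ZL, the divider gain is
#   V2/V1 = (Z2 || ZL) / (Z1 + Z2 || ZL).
Z1, Z2, ZL = 10.0, 20.0, 30.0

par = Z2 * ZL / (Z2 + ZL)                     # Z2 in parallel with ZL
gain_circuit = par / (Z1 + par)
gain_formula = Z2 * ZL / (ZL * (Z1 + Z2) + Z1 * Z2)
assert abs(gain_circuit - gain_formula) < 1e-12
```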
A system block diagram describing this circuit is illustrated in Fig. 3.4. It should
be noted that the signal flowing into the two-port system is −I2, so that the
relationship between the terminal voltage and current is ZL = −V2/I2. One can then
apply Mason's gain formula to determine the transfer function from V1 to V2 for the
cases without and with load ZL, respectively, as

V2/V1 = (Z2/Z1)/(1 + Z2/Z1) = Z2/(Z1 + Z2),   (3.3)
3.1 One-Port and Two-Port Networks 39
and

V2/V1 = (Z2/Z1)/(1 + Z2/Z1 + Z2/ZL) = Z2ZL/(ZL(Z1 + Z2) + Z1Z2).   (3.4)
Furthermore, with the load effect, Fig. 3.4 can be formulated in a systematic
framework using a two-port description as shown in Fig. 3.5. It is determined
from Fig. 3.4 by cutting the loop around the load term −1/ZL; there are two loops
in the loaded case. Then, by the LFT approach, one has

[V2; V2] = [P11, P12; P21, P22][V1; I2]  and  I2 = −(1/ZL)V2,   (3.5)

where

P = [Z2/(Z1 + Z2), Z1Z2/(Z1 + Z2); Z2/(Z1 + Z2), Z1Z2/(Z1 + Z2)].

It is reminded that the negative sign of "−1/ZL" means that the current I2 direction
is opposite to that of −I2 as defined in the circuit of Fig. 3.2. By the definition of
LFT, it can be verified that in the no-load case,

V2/V1 = LFTl(P, 0) = P11 = Z2/(Z1 + Z2),   (3.6)

and in the loaded case,

V2/V1 = LFTl(P, −1/ZL) = Z2ZL/(ZL(Z1 + Z2) + Z1Z2).   (3.7)
Clearly, the results above are the same as (3.1)–(3.2) and (3.3)–(3.4). However, the
two-port description approach is more systematic and conveniently characterizes the
load effect. The system performance can be tuned easily through the load impedance
as an external part. For example, engineers often use different terminating impedances
to eliminate the echo problem in communication circuits. The same idea arises in
control engineering: the feedback terminator of an LFT system can be chosen to
achieve the desired response. Furthermore, an open-loop unstable system can
be stabilized by a properly defined terminator in the two-port network description.
Resistors (R), inductors (L), and capacitors (C) are basic passive impedance
elements of an electrical circuit. Electronic circuits are frequently needed to process
electrical signals. The problem appearing in the two-port network theory is how to
discover the relationship between input and output at each terminal. Based on these
physical variables V1 , V2 , I1 , and I2 , there are six types of parameters which are
often used for the two-port network description:
1. Impedance parameter (Z parameter)
2. Admittance parameter (Y parameter)
3. Hybrid parameter (H parameter)
4. Transmission parameter (ABCD parameter)
5. Scattering parameter (S parameter)
6. Chain scattering parameter (T parameter)
Although these parameters are common in circuit synthesis and analysis,
it should be noted that some circuits may not have impedance, admittance, or
transmission matrix descriptions, due to certain physical constraints.
For example, a circuit with transformers does not have an impedance parameter,
and a simple circuit with shunt (or series) impedance does not have the two-port
admittance (or impedance) matrix. The scattering-matrix description, which has its
roots in microwave theory and has connections to operator theory, is then proposed
to overcome problems such as the absence of physical parameters. This situation
will be further discussed in the following sections.
Figure 3.6 depicts a linear two-port network along with the port voltages, currents,
and terminals (Fig. 3.6: two-port network terminated by the load ZL). Let the matrix
relationship of the impedance parameters be defined by

[V1; V2] = [Z11, Z12; Z21, Z22][I1; I2].   (3.8)
3.2 Impedance and Admittance Parameters (Z and Y Parameters) 41
The impedance parameter, derived from Ohm's law, is useful for series-
connected circuits. Similarly, the admittance matrix Y is defined as

[I1; I2] = [Y11, Y12; Y21, Y22][V1; V2],   (3.10)

where (each in Ω^{−1})

Y11 = I1/V1 |_{V2=0},  Y12 = I1/V2 |_{V1=0},  Y21 = I2/V1 |_{V2=0},  Y22 = I2/V2 |_{V1=0}.   (3.11)
In addition, the entries of the matrices Z and Y carry physical units (Ω and Ω^{−1},
respectively). One can easily examine the load (ZL) effect for a given two-port
impedance (or admittance) matrix Z (or Y) by the LFT description as illustrated
in Fig. 3.7.
Now recall the circuit presented in Fig. 3.2. According to (3.8) and (3.10),
the two-port impedance matrix Z and admittance matrix Y can be determined,
respectively, as

Z = [Z11, Z12; Z21, Z22] = [Z1 + Z2, Z2; Z2, Z2],   (3.13)
Fig. 3.7 LFT forms of Z and Y parameters: (a) LFT form of Z and (b) LFT form of Y
and

Y = [Y11, Y12; Y21, Y22] = [1/Z1, −1/Z1; −1/Z1, (Z1 + Z2)/(Z1Z2)].   (3.14)
It can be seen from (3.14) that for the series short circuit, where Z1 = 0, the circuit
of Fig. 3.2 does not have a two-port Y parameter description. Likewise, for the
shunt open circuit, where the shunt admittance 1/Z2 equals zero (Z2 → ∞), the
circuit of Fig. 3.2 does not have a two-port Z parameter description. Equations
(3.8) and (3.10) can be used to determine the relationships (transfer functions)
between the currents and voltages by exploiting the LFT. For instance, the overall
input impedance, i.e., the transfer function from I1 to V1, is given by (3.13) and by
I2 = −(1/ZL)V2 when closing the loop:

V1/I1 = LFTl(Z, −1/ZL) = (Z1 + Z2) − Z2(1/ZL)(1 + Z2/ZL)^{−1}Z2
= Z1 + (Z2 ∥ ZL) = (Z1(Z2 + ZL) + Z2ZL)/(Z2 + ZL).   (3.16)
Moreover, the overall input admittance, i.e., the transfer function from V1 to I1,
is given by (3.14) and by V2 = −ZL I2 when closing the loop:

I1/V1 = LFTl(Y, −ZL) = (Z2 + ZL)/(Z1(Z2 + ZL) + Z2ZL).   (3.17)
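The input-impedance calculation (3.16) can be reproduced by terminating the Z matrix (3.13) with −1/ZL in a lower LFT; a sketch with illustrative values:

```python
# Input impedance (3.16) via the lower LFT of the Z matrix terminated by
# I2 = -(1/ZL) V2:  V1/I1 = LFT_l(Z, -1/ZL) = Z1 + Z2 || ZL.
Z1, Z2, ZL = 10.0, 20.0, 30.0
Z11, Z12, Z21, Z22 = Z1 + Z2, Z2, Z2, Z2      # (3.13)

K = -1.0 / ZL
Zin_lft = Z11 + Z12 * K * Z21 / (1.0 - Z22 * K)
Zin_circuit = Z1 + Z2 * ZL / (Z2 + ZL)        # series Z1 plus Z2 || ZL
assert abs(Zin_lft - Zin_circuit) < 1e-12
```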
The hybrid parameters are defined by

[V1; I2] = [H11, H12; H21, H22][I1; V2],   (3.18)

where H11 = V1/I1 |_{V2=0} is the short-circuit input impedance, H12 = V1/V2 |_{I1=0}
the open-circuit reverse voltage gain, H21 = I2/I1 |_{V2=0} the short-circuit forward
current gain, and H22 = I2/V2 |_{I1=0} the open-circuit output admittance. The
hybrid parameter H is commonly seen in the analysis of transistor circuits. For the
circuit of Fig. 3.2, one has, by (3.18),

H = [Z1, 1; −1, 1/Z2].   (3.19)
Here, the overall input impedance can be determined as in Fig. 3.8 and is given by

V1/I1 = LFTl(H, −ZL) = Z1 + ZL(1 + ZL/Z2)^{−1} = (Z1(Z2 + ZL) + Z2ZL)/(Z2 + ZL).   (3.20)

(Fig. 3.8: LFT form of the H parameters terminated by the load ZL.)
The above cases have shown how to find input/output relations by using the LFT
structure. This section shows how the transmission parameters can be used to derive
those relations by directly considering two-port network chains. It will be seen
that the two-port network chain description is an alternative to that of LFT. It is,
however, more appealing to electrical engineers and communication engineers, due
to its direct connection to the two-port network structure.
The transmission parameter matrix description can connect several two-port
network circuits in series as illustrated in Fig. 3.9. The transmission parameter
matrix is defined by

[V1; I1] = [A, B; C, D][V2; −I2],   (3.21)

where A = V1/V2 |_{I2=0} denotes the open-circuit reverse voltage gain,
B = −V1/I2 |_{V2=0} (Ω) the short-circuit transfer impedance, C = I1/V2 |_{I2=0} (Ω^{−1})
the open-circuit transfer admittance, and D = −I1/I2 |_{V2=0} the short-circuit reverse
current gain. The transmission parameters are often called the ABCD parameters
in the electrical engineering community. Figure 3.10 shows the two-port transmission
parameter description with load ZL.
For the circuit in Fig. 3.2, the transmission parameters in (3.21) can be found as

[A, B; C, D] = [1 + Z1/Z2, Z1; 1/Z2, 1].   (3.22)
(Fig. 3.9: cascade of two-port networks 1 through N terminated by ZL.
Fig. 3.10: transmission parameter chain description terminated by ZL.)
3.4 Transmission Parameters (ABCD Parameters) 45
As expected, the result is the same as (3.16) and (3.20). The transmission parameter
description is useful for chaining several two-port networks in series, which
amounts to matrix multiplication. Hence, transmission parameters are also called
chain matrices. Consider the circuit in Fig. 3.2 again and redraw it in Fig. 3.11; it is
further decomposed into two sub-circuits as shown in Fig. 3.12.

The sub-circuit of Fig. 3.12a gives

[Va1; Ia1] = [Aa, Ba; Ca, Da][Va2; −Ia2]   (3.24)

with

[Aa, Ba; Ca, Da] = [1, Z1; 0, 1].   (3.25)

Similarly, the sub-circuit of Fig. 3.12b gives the analogous chain relation with

[Ab, Bb; Cb, Db] = [1, 0; 1/Z2, 1].   (3.27)

Chaining the two sub-circuits then yields

[A, B; C, D] = [1, Z1; 0, 1][1, 0; 1/Z2, 1] = [1 + Z1/Z2, Z1; 1/Z2, 1].   (3.29)
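Chaining (3.25) and (3.27) by matrix multiplication reproduces (3.22); a minimal sketch with illustrative values:

```python
# Chain rule for ABCD parameters: the cascade of a series element Z1 and a
# shunt element Z2 has the product of the sub-circuit ABCD matrices.
Z1, Z2 = 10.0, 20.0

def mul2(X, Y):
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

series = [[1.0, Z1], [0.0, 1.0]]        # series impedance chain matrix
shunt  = [[1.0, 0.0], [1.0 / Z2, 1.0]]  # shunt impedance chain matrix
abcd = mul2(series, shunt)

expected = [[1.0 + Z1 / Z2, Z1], [1.0 / Z2, 1.0]]
assert all(abs(abcd[i][j] - expected[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```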
In the same manner, the Z parameter matrix can be converted into the chain
parameters. Rearranging

[V1; V2; I1; I2] = [Z11, Z12; Z21, Z22; I, 0; 0, I][I1; I2]  ⇒  [V1; I1; V2; I2] = [Z11, Z12; I, 0; Z21, Z22; 0, I][I1; I2],   (3.38)

and eliminating [I1; I2] gives [V1; I1] = [A, B; C, D][V2; −I2], where

[A, B; C, D] = [Z11Z21^{−1}, Z11Z21^{−1}Z22 − Z12; Z21^{−1}, Z21^{−1}Z22].   (3.40)
Note that the matrix conversion between any two parameter descriptions can be
carried out by the same methodology.
a1 = (V1 + Z0I1)/(2√Z0) (√watt),  a2 = (V2 + Z0I2)/(2√Z0) (√watt),
b1 = (V1 − Z0I1)/(2√Z0) (√watt),  b2 = (V2 − Z0I2)/(2√Z0) (√watt),   (3.42)

where ai denotes the incident wave (signal) at port i and bi represents the reflected
wave (signal) at port i. Let

[b1; b2] = [S11, S12; S21, S22][a1; a2],   (3.43)

where

S11 = b1/a1 |_{a2=0},  S12 = b1/a2 |_{a1=0},  S21 = b2/a1 |_{a2=0},  S22 = b2/a2 |_{a1=0}.   (3.44)
The two-port S parameter description is illustrated, in LFT and CSD, in Fig. 3.14.
Fig. 3.13 Transmission line circuit with characteristic impedance Z0 terminations at
both ports
3.5 Scattering Parameters (S Parameters) 49
Fig. 3.14 LFT form of S parameters and its block description in CSD: (a) LFT form
of S parameters and (b) block description of S in CSD
where

Π = (1/(2√Z0))[1, Z0; 1, −Z0].   (3.47)
With the source Vs applied at port 1 and port 2 terminated in Z0 (so that V1 = Vs,
I1 = Vs/(Z + Z0), V2 = Z0Vs/(Z + Z0), and I2 = −V2/Z0), one has

a1 = (V1 + Z0I1)/(2√Z0) = (1 + Z0/(Z + Z0)) Vs/(2√Z0) = ((Z + 2Z0)/(Z + Z0)) · Vs/(2√Z0),   (3.49)

b1 = (V1 − Z0I1)/(2√Z0) = (1 − Z0/(Z + Z0)) Vs/(2√Z0) = (Z/(Z + Z0)) · Vs/(2√Z0),   (3.50)

b2 = (V2 − Z0I2)/(2√Z0) = (Z0/(Z + Z0) + Z0/(Z + Z0)) Vs/(2√Z0) = (2Z0/(Z + Z0)) · Vs/(2√Z0),   (3.51)

a2 = (V2 + Z0I2)/(2√Z0) = (Z0/(Z + Z0) − Z0/(Z + Z0)) Vs/(2√Z0) = 0.   (3.52)
Similarly, with the source applied at port 2 and port 1 terminated in Z0,

a2 = (V2 + Z0I2)/(2√Z0) = (1 + Z0/(Z + Z0)) Vs/(2√Z0) = ((Z + 2Z0)/(Z + Z0)) · Vs/(2√Z0),   (3.56)

b2 = (V2 − Z0I2)/(2√Z0) = (1 − Z0/(Z + Z0)) Vs/(2√Z0) = (Z/(Z + Z0)) · Vs/(2√Z0),   (3.57)
3.6 Chain Scattering Parameters (T Parameters) 51
b1 = (V1 − Z0I1)/(2√Z0) = (Z0/(Z + Z0) + Z0/(Z + Z0)) Vs/(2√Z0) = (2Z0/(Z + Z0)) · Vs/(2√Z0),   (3.58)

a1 = (V1 + Z0I1)/(2√Z0) = (Z0/(Z + Z0) − Z0/(Z + Z0)) Vs/(2√Z0) = 0.   (3.59)
This concludes that

[b1; b2] = [S11, S12; S21, S22][a1; a2],   (3.62)

where

[S11, S12; S21, S22] = [Z/(Z + 2Z0), 2Z0/(Z + 2Z0); 2Z0/(Z + 2Z0), Z/(Z + 2Z0)].   (3.63)
(Fig. 3.16: chain scattering (T parameter) description [T11, T12; T21, T22]
terminated by Γ_L.)
where

[S11, S12; S21, S22] = [T11, T12; 0, I][T21, T22; I, 0]^{−1}.   (3.68)
(Fig. 3.18: converting the chained description [T11, T12; 0, I][T21, T22; I, 0]^{−1}
terminated by Γ_L into the S parameter LFT form.)
Note that here, T22 should be invertible. Then, the S parameter as illustrated in
Fig. 3.14 can be obtained. Figure 3.18 shows the corresponding manipulations.
Similarly, one can also derive the parameter T from S:

[b1; b2; a1; a2] = [S11, S12; S21, S22; I, 0; 0, I][a1; a2]  ⇒  [b1; a1; a2; b2] = [S11, S12; I, 0; 0, I; S21, S22][a1; a2],   (3.69)

where

[T11, T12; T21, T22] = [S11, S12; I, 0][0, I; S21, S22]^{−1}.   (3.71)
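The conversions (3.68) and (3.71) are mutually inverse. For scalar ports they reduce to closed forms, which the following sketch round-trips on illustrative values:

```python
# Round-trip between S and T parameters (scalar ports), using the closed
# forms of (3.71): T = [S11, S12; I, 0][0, I; S21, S22]^{-1} and
# (3.68): S = [T11, T12; 0, I][T21, T22; I, 0]^{-1}.
def s_to_t(S11, S12, S21, S22):
    # requires S21 invertible
    return (S12 - S11 * S22 / S21, S11 / S21, -S22 / S21, 1.0 / S21)

def t_to_s(T11, T12, T21, T22):
    # requires T22 invertible
    return (T12 / T22, T11 - T12 * T21 / T22, 1.0 / T22, -T21 / T22)

S = (0.2, 0.8, 0.8, -0.1)
assert all(abs(a - b) < 1e-12 for a, b in zip(t_to_s(*s_to_t(*S)), S))
```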
In this section, the conversions from impedance, admittance, chain, and hybrid
matrices to the scattering and chain scattering matrices will be discussed. Firstly,
as shown in Fig. 3.19, the conversion from transmission parameters ABCD to T
parameters will be taken as an example.
Recall (3.42), (3.45), and (3.46). Then, by (3.21), one has

[V1; I1] = [A, B; C, D][V2; −I2]  ⇒  Π[V1; I1] = (Π[A, B; C, D]Π^{−1}) Π[V2; −I2].   (3.72)

Further, by (3.47),

[T11, T12; T21, T22] = [1, Z0; 1, −Z0][A, B; C, D][1, Z0; 1, −Z0]^{−1}.   (3.74)
Fig. 3.19 Two-port transmission circuits: (a) with left voltage source and (b) with
right voltage source
3.8 Lossless Networks 55
(Figs. 3.20–3.21: coordinate transformation of the ABCD description into the T
description via Π and Π^{−1}, with source and load terminations Γ_S and Γ_L and
equivalent impedances Zeq1 and Zeq2.)
As shown in Fig. 3.21, define the input and output reflection coefficients (Γ) as

Γ1 = b1/a1 = CSDr(Π, Zeq1) = (Zeq1 − Z0)/(Zeq1 + Z0),   (3.78)

Γ2 = b2/a2 = CSDl(Π^{−1}, Zeq2) = (Zeq2 − Z0)/(Zeq2 + Z0),   (3.79)

where Zeq1 = CSDr([A, B; C, D], ZL) is the equivalent impedance looking in from
port 1, Zeq2 = CSDl([A, B; C, D], ZS) is the equivalent impedance looking in
from port 2, and Z0 is the characteristic impedance. Clearly, if Zeq1 = Z0 (i.e., the
equivalent impedance matches the characteristic impedance), then Γ1 = 0 by (3.78).
This indicates that the incident wave from port 1 will fully come out of port 2 and
will not cause any reflection at port 1. In other words, Γ1 = 0 means that the incident
power wave at port 1 does not produce a reflected power wave at port 1 that would
cause an echo. Similarly, if Zeq2 = Z0, then Γ2 = 0: the incident power wave at
port 2 does not produce a reflected power wave at port 2. Alternatively, all the
output power waves come from the other port of the two-port network. Such a
condition is called all-pass in microwave theory [2]. If the delivered power is not
attenuated through the propagation of the two-port network, the circuit is considered
lossless. In addition, a reflection-free (no-echo) two-port network is also called a
matched network.
Define the average delivered power at each port as

Pav1 = (1/2)(a1*a1 − b1*b1),   (3.80)

Pav2 = (1/2)(a2*a2 − b2*b2).   (3.81)

Consequently, the total delivered power is

Pav = Pav1 + Pav2,   (3.82)

and a lossless network balances the power output at the other port with the power
reflected at the incident port.
" # " #" #1
S11 S12 T11 T12 T21 T22
From D and S*S D I, one can obtain,
S21 S22 0 I I 0
" #
I 0
for J WD ,
0 I
" # " #" # " #
T11 T12 I 0 T11 T12 I 0
T J T D D D J: (3.84)
T21 T22 0 I T21 T22 0 I
Consider, for example, a series inductance, for which $sL$ ($= j\omega L$) $= jX$. Then
$$S = \begin{bmatrix} \dfrac{jX}{jX + R_o} & \dfrac{R_o}{jX + R_o} \\[2mm] \dfrac{R_o}{jX + R_o} & \dfrac{jX}{jX + R_o} \end{bmatrix}, \tag{3.86}$$
and hence,
$$S^* S = \begin{bmatrix} \dfrac{X^2}{R_o^2 + X^2} + \dfrac{R_o^2}{R_o^2 + X^2} & \dfrac{jR_oX}{R_o^2 + X^2} - \dfrac{jR_oX}{R_o^2 + X^2} \\[2mm] \dfrac{jR_oX}{R_o^2 + X^2} - \dfrac{jR_oX}{R_o^2 + X^2} & \dfrac{R_o^2}{R_o^2 + X^2} + \dfrac{X^2}{R_o^2 + X^2} \end{bmatrix} = I. \tag{3.87}$$
This confirms that the S matrix is unitary. Consequently, this is a lossless system.
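The unitarity computation in (3.87) can be spot-checked numerically; the following sketch (with an arbitrary illustrative reactance, not from the text) builds the S matrix of (3.86) and verifies $S^*S = I$.

```python
# Numeric spot check that the series-reactance S matrix of (3.86) is
# unitary, i.e., S*S = I, which is the lossless condition (3.87).
def s_matrix(X, Ro):
    d = complex(Ro, X)           # jX + Ro
    jX = complex(0.0, X)
    return [[jX / d, Ro / d],
            [Ro / d, jX / d]]

def conj_transpose_times(S):
    """Return S*S for a 2x2 complex matrix."""
    Sh = [[S[j][i].conjugate() for j in range(2)] for i in range(2)]
    return [[sum(Sh[i][k] * S[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P = conj_transpose_times(s_matrix(X=3.7, Ro=50.0))
identity_error = max(abs(P[i][j] - (1 if i == j else 0))
                     for i in range(2) for j in range(2))
```

The residual `identity_error` is zero up to floating-point rounding, regardless of the chosen reactance.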
$$\Gamma_S = \frac{a_1}{b_1} = \mathrm{CSD}_l(\Pi^{-1}, Z_S) = \frac{Z_S - Z_0}{Z_S + Z_0} \;\left(= \mathrm{CSD}_r(\Pi, Z_S)\right). \tag{3.89}$$
In fact, these are bilinear transformations. For $\Pi$ with real $Z_0$, positive real (PR) impedances $Z_L$ and $Z_S$, such as those built from (R, L, C) elements, are transformed to bounded real (BR) functions via $\Pi$. For example, if $Z_0 = 1\,\Omega$ and $Z_L = sL$, then $\Gamma_L = \mathrm{CSD}_r(\Pi, Z_L) = \frac{sL - 1}{sL + 1}$; or if $Z_L = \frac{1}{Cs}$, then $\Gamma_L = \mathrm{CSD}_r(\Pi, Z_L) = \frac{\frac{1}{Cs} - 1}{\frac{1}{Cs} + 1} = \frac{1 - Cs}{1 + Cs}$. Figure 3.23 reveals this transformation.
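As a quick numerical illustration of this PR-to-BR property (a sketch with hypothetical element values), the bilinear map $\Gamma = (Z-1)/(Z+1)$ with $Z_0 = 1$ keeps both example impedances inside the closed unit disc along the imaginary axis:

```python
# PR -> BR sketch: with Z0 = 1, Gamma = (Z - 1)/(Z + 1) maps the
# positive real impedances Z = sL and Z = 1/(Cs) into |Gamma| <= 1
# on the jw-axis.  L and C values here are arbitrary examples.
def gamma(Z):
    return (Z - 1) / (Z + 1)

L, C = 2.0, 0.5
mags = []
for w in (0.1, 1.0, 10.0):
    s = complex(0.0, w)
    mags.append(abs(gamma(s * L)))          # Gamma_L for Z_L = sL
    mags.append(abs(gamma(1.0 / (C * s))))  # Gamma_L for Z_L = 1/(Cs)

worst = max(mags)   # never exceeds 1 (equals 1 here: both maps are all-pass)
```

Both terminations are lossless reactances, so every magnitude equals one exactly, the boundary of the BR set.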
The T parameter description is convenient for network cascading. However, the T matrix representation may not always be a proper function. Recall the series circuit in Fig. 3.24, whose ABCD matrix is $\begin{bmatrix} 1 & Z \\ 0 & 1 \end{bmatrix}$. From Fig. 3.21, one has, for $Z = sL$,
$$T = \Pi \begin{bmatrix} 1 & Z \\ 0 & 1 \end{bmatrix} \Pi^{-1} = \begin{bmatrix} 1 - \dfrac{Z}{2Z_o} & \dfrac{Z}{2Z_o} \\[2mm] -\dfrac{Z}{2Z_o} & 1 + \dfrac{Z}{2Z_o} \end{bmatrix}. \tag{3.90}$$
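The similarity form of (3.90) can be sanity-checked numerically. The sketch below assumes $Z_o = 1$ and uses the wave-coordinate map $\Pi = \tfrac{1}{2}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}$ (the convention inferred from Example 3.4, an assumption of this sketch); it verifies that $\Pi\begin{bmatrix} 1 & Z \\ 0 & 1 \end{bmatrix}\Pi^{-1}$ reproduces the closed-form entries.

```python
# Check of (3.90) for Zo = 1: Pi * [[1, Z], [0, 1]] * Pi^-1 should equal
# [[1 - Z/2, Z/2], [-Z/2, 1 + Z/2]].  Pi is the assumed wave-coordinate
# map (1/2)[[1, -1], [1, 1]]; scalar factors cancel in the similarity.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Z = complex(0.0, 2.0)                    # an arbitrary series impedance sL
Pi = [[0.5, -0.5], [0.5, 0.5]]
Pi_inv = [[1.0, 1.0], [-1.0, 1.0]]
M = [[1.0, Z], [0.0, 1.0]]

T = matmul(matmul(Pi, M), Pi_inv)
closed = [[1 - Z / 2, Z / 2], [-Z / 2, 1 + Z / 2]]
err = max(abs(T[i][j] - closed[i][j]) for i in range(2) for j in range(2))
```

The residual `err` vanishes to rounding, confirming the closed form entry by entry.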
[Fig. 3.23 The PR-to-BR domain transformation: the two-port with coordinate maps $\Pi$ and $\Pi^{-1}$ converts the PR terminations $Z_S$ and $Z_L$ into the BR reflection coefficients $\Gamma_S$ and $\Gamma_L$ of $T$]
" #
1s s
For the case that Zo D 1( ) and L D 2, T D becomes an
s 1 C s
improper function matrix. However, by terminating the load ZL D 3( ), one has
+ +
Vs + v1 R v2 RL
− −
and
$$T = \Pi \begin{bmatrix} 1 & 0 \\ \dfrac{1}{Z_P} & 1 \end{bmatrix} \Pi^{-1} = \begin{bmatrix} 1 - \dfrac{Z_o}{2Z_P} & -\dfrac{Z_o}{2Z_P} \\[2mm] \dfrac{Z_o}{2Z_P} & 1 + \dfrac{Z_o}{2Z_P} \end{bmatrix}. \tag{3.93}$$
Converting the system description to an LFT as shown in Fig. 3.18, one has
$$S_P = \begin{bmatrix} -\dfrac{Z_0}{2Z_P + Z_0} & \dfrac{2Z_P}{2Z_P + Z_0} \\[2mm] \dfrac{2Z_P}{2Z_P + Z_0} & -\dfrac{Z_0}{2Z_P + Z_0} \end{bmatrix} = \begin{bmatrix} S_{P11} & S_{P12} \\ S_{P21} & S_{P22} \end{bmatrix} \tag{3.95}$$
and
$$\frac{b_1}{a_1} = \mathrm{LFT}_l(S_P, \Gamma_L) = \frac{2\Gamma_L Z_P - \Gamma_L Z_0 - Z_0}{\Gamma_L Z_0 + 2Z_P + Z_0}. \tag{3.96}$$
Example 3.4 Consider the RLC circuit illustrated in Fig. 3.26. Derive its ABCD and T parameters when $R = 10\,\Omega$ and $L = 1\,\mathrm{H}$.
The ABCD parameters of Fig. 3.26 can be derived from a simple matrix manipulation:
$$G = \begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} 1 & Ls \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ \dfrac{1}{R} & 1 \end{bmatrix} = \begin{bmatrix} 1 + \dfrac{Ls}{R} & Ls \\[1mm] \dfrac{1}{R} & 1 \end{bmatrix} = \begin{bmatrix} 1 + \dfrac{s}{10} & s \\ 0.1 & 1 \end{bmatrix},$$
which is positive real for any positive real $R_L$. However, it is an improper function. Let $\Pi = \frac{1}{2}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}$, and one gets
$$T = \Pi G \Pi^{-1} = \frac{1}{2}\begin{bmatrix} \dfrac{19 - 9s}{10} & \dfrac{11s - 1}{10} \\[2mm] \dfrac{1 - 9s}{10} & \dfrac{11s + 21}{10} \end{bmatrix}$$
and $\Gamma_L = \mathrm{CSD}_r(\Pi, R_L) = \dfrac{R_L - 1}{R_L + 1}$. From Fig. 3.23, one can obtain
$$\frac{b_1}{a_1} = \mathrm{CSD}_r(T, \Gamma_L) = \frac{(11 - 9\Gamma_L)s + (19\Gamma_L - 1)}{(11 - 9\Gamma_L)s + (21 + \Gamma_L)}.$$
Evidently, since $\Gamma_L = \mathrm{CSD}_r(\Pi, R_L) = \frac{R_L - 1}{R_L + 1} < 1$ for any positive real $R_L > 0$, the norm of $b_1$ is always less than or equal to the norm of $a_1$. For example, when assuming
Exercises
1. Determine the transfer function from $V_1$ to $V_2$ of the following circuit, with and without the load $R_L$. Convert this circuit into a control block diagram and verify $V_2/V_1$ via Mason's gain formula and the LFT, respectively.
[Circuit: input $V_1$ with current $I_1$, series resistances $R_1$ and $R_2$, shunt inductance $Ls$, and output $V_2$ across the load $R_L$]
[Circuit for Exercise 2: $V_1$, two 20 Ω elements, $V_2$]
3. Determine the ABCD and H parameters for the following two-port circuit.
[Circuit: series 30 Ω element between the ports, with a shunt 20 Ω element at each port]
4. Determine the S parameters of the following two-port circuit with the characteristic impedance $Z_o = 64\,\Omega$.
[Circuit: two series 4 Ω elements with a shunt 4 Ω element between them]
5. Determine the S parameters of the following two-port circuit with the characteristic impedance $Z_o = 10\,\Omega$.
[Circuit: series 30 Ω element with shunt 30 Ω elements, terminated in $Z_0$ at both ports]
6. Determine the ABCD parameter of the given circuit and then derive the T
parameter using the ABCD parameter.
[Circuit: terminations $Z_0$ at both ports, series resistances R, and a shunt inductance $Ls$, with port variables $V_1$, $I_1$ and $V_2$, $I_2$]
[Circuit: terminations $Z_{01}$ and $Z_{02}$ with port voltages $V_1$ and $V_2$]
References
1. Anderson BDO, Vongpanitlerd S (1973) Network analysis and synthesis: a modern systems
theory approach. Prentice-Hall Inc, Englewood Cliffs
2. Cheng DK (1992) Fundamentals of engineering electromagnetics. Addison-Wesley, New York
3. Franco S (1995) Electric circuits fundamentals. Saunders College Publishing, Orlando
4. Knopp K (1952) Elements of the theory of functions. Dover, New York
5. Misra DK (2001) Radio-frequency and microwave communication circuits. Wiley, New York
6. Needham T (2000) Möbius transformations and inversion. Clarendon, New York
Chapter 4
Linear Fractional Transformations
Figure 4.1 illustrates the framework of an LFT, which includes two parts, i.e., a two-port matrix P and a one-port feedback terminator K. Without the feedback termination $y \mapsto u$, the two-port matrix P is an open-loop system mapping $\begin{bmatrix} r \\ u \end{bmatrix} \mapsto \begin{bmatrix} w \\ y \end{bmatrix}$. Thus, the matrix representation can be characterized as
$$\begin{cases} w = P_{11} r + P_{12} u \\ y = P_{21} r + P_{22} u, \end{cases} \tag{4.1}$$
where
$$P_{11} = \left.\frac{w}{r}\right|_{u=0}, \quad P_{12} = \left.\frac{w}{u}\right|_{r=0}, \quad P_{21} = \left.\frac{y}{r}\right|_{u=0}, \quad P_{22} = \left.\frac{y}{u}\right|_{r=0}. \tag{4.2}$$
M.-C. Tsai and D.-W. Gu, Robust and Optimal Control: A Two-port Framework 65
Approach, Advances in Industrial Control, DOI 10.1007/978-1-4471-6257-5__4,
© Springer-Verlag London 2014
66 4 Linear Fractional Transformations
" # " #
w r
Note that each entrant in and can be vector-valued signals. Here,
y u
(4.2) symbolizes its input/output relations in the MIMO cases, not the actual
mathematical “division.” It can be said as, e.g., P11 represents how w is dependent
on r when u D 0. In the feedback control, the terminator K (to be designed) encloses
the open-loop system (4.1) via the feedback part
u D Ky: (4.3)
therefore,
where
In general, the LFT exploits this matrix form to describe linear systems, and it allows the terminator to be placed either below or above the interconnected plant P. Herein, the subscript l in (4.7) indicates that this is a lower LFT, i.e., terminator K is below plant P.
Similarly, Fig. 4.2 shows the upper LFT form with
$$\begin{bmatrix} y \\ w \end{bmatrix} = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}\begin{bmatrix} u \\ r \end{bmatrix} \tag{4.8}$$
4.1 Linear Fractional Transformations 67
[Fig. 4.2 Upper LFT form: terminator above the two-port matrix P]
[Fig. 4.3 LFT forms: (a) (3×3) LFT form with upper terminator Δ and lower terminator K and (b) reduced (2×2) LFT form with matrix $\begin{bmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{bmatrix}$ terminated by Δ]
and
$$u = Hy. \tag{4.9}$$
By the same manipulations as in (4.4), (4.5), and (4.6), one can then obtain $w = \mathrm{LFT}_u(P, H)\,r$, where
$$\mathrm{LFT}_u(P, H) := P_{22} + P_{21} H (I - P_{11} H)^{-1} P_{12}. \tag{4.10}$$
Note that in (4.7) and (4.10), it has implicitly been assumed that the terminators are such that $(I - P_{22} K)$ and $(I - P_{11} H)$ are invertible. For $\mathrm{LFT}_l(P, K)$ (or $\mathrm{LFT}_u(P, H)$), if $(I - P_{22} K)$ (or $(I - P_{11} H)$) is invertible, the system is called well-defined (or well-posed). In practice, this invertibility condition should be satisfied in almost all feedback control system designs for a controller to exist.
Additionally, Fig. 4.3a shows the full LFT form with terminators appearing in both the upper and lower positions. For example, Fig. 4.3a can be employed to describe a feedback control system which suffers from perturbed dynamics Δ. In this case, P consists, accordingly, of nine sub-matrices in a (3×3) block structure, i.e.,
(4.11)
where
(4.13)
This shows that the full (3×3) LFT form of Fig. 4.3a can be reduced by the closed loop $u = Ky$ into the (2×2) LFT form depicted in Fig. 4.3b, which includes an uncertain dynamics Δ, of appeal for robustness considerations [2].
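A minimal scalar sketch of the lower LFT (4.7), including the well-posedness guard on $(I - P_{22}K)$ (the numeric entries are hypothetical, for illustration only):

```python
# Scalar sketch of LFT_l(P, K) = P11 + P12*K*(1 - P22*K)^(-1)*P21 with
# the well-posedness check of this section; values are arbitrary examples.
def lft_lower(P11, P12, P21, P22, K):
    d = 1.0 - P22 * K
    if d == 0:
        raise ValueError("not well-posed: (I - P22*K) is singular")
    return P11 + P12 * K * P21 / d

w_over_r = lft_lower(1.0, 2.0, 3.0, 0.5, 1.0)   # = 1 + 2*3/(1 - 0.5) = 13
```

Choosing $P_{22}K = 1$ instead would trip the guard, mirroring the non-well-posed case above.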
Example 4.1 Consider the upper LFT form of (4.8) and use MATLAB to process
the LFT manipulations.
The LFT determination can be carried out in many ways and be formed as an
S-function. The following MATLAB code is an example included here for readers’
reference.
clc;
clear;
disp('LFTl method')
syms P11 P12 P21 P22 K S;
syms p11 p12 p21 p22 k s;
P11 = input('z => u (P11) ?');
P12 = input('z => y (P12) ?');
P21 = input('w => u (P21) ?');
P22 = input('w => y (P22) ?');
K = input('Please input K:');
if (eye(size(P22)) - P22*K == 0)
    disp('The transfer function is singular !');
    disp('Plz enter Ctrl c !');
    pause;
else
    Transfer_function_w_to_z = simplify(P11 + P12*K*inv(eye(size(P22)) - P22*K)*P21)
end
4.2 Application of LFT in State-Space Realizations 69
$$G(s) = \frac{Y(s)}{U(s)}, \quad \text{i.e.,} \quad Y(s) = G(s)U(s), \tag{4.15}$$
where G(s) is the transfer function (matrix) of the system, and Y(s) and U(s) are the Laplace transforms of the output and input, respectively. If the state and output equations of (4.14) are known, G(s) can be uniquely computed as
$$G(s) = C(sI - A)^{-1}B + D, \tag{4.16}$$
and the state-space equations can be arranged in the matrix form
$$\begin{bmatrix} \dot{x} \\ y \end{bmatrix} = \begin{bmatrix} A & B \\ C & D \end{bmatrix}\begin{bmatrix} x \\ u \end{bmatrix}. \tag{4.17}$$
Here, the manipulation on the right-hand side of (4.17) is the usual multiplication between a matrix and a vector. Figure 4.4 below shows the corresponding block diagram of (4.17), where the integrator (i.e., $\frac{1}{s}$) is an operator characterizing the relationship between the state variables $x$ and $\dot{x}$. As an example, it is shown next that the LFT manipulation can be used to determine the state-space realization. Briefly speaking, through the LFT method, one can obtain the state-space model of a system by cutting off the connections around the integrator.
To determine the matrix P in the LFT formulation, the first step is to properly
cut off all the internal loops such that no feedback loop is left and the isolated part
will be the terminator. It can be seen from the block diagram in Fig. 4.4 that if
the integrator is truncated, there are no internal loops. Thus, a two-port matrix for
[Figs. 4.4 and 4.5 Block diagram of (4.17) with the integrator $\frac{I}{s}$ isolated: a lower LFT with matrix $\begin{bmatrix} D & C \\ B & A \end{bmatrix}$ and an upper LFT with matrix $\begin{bmatrix} A & B \\ C & D \end{bmatrix}$]
As can be seen in (4.19), the transfer function G(s) is purposely formed as the summation of a strictly proper function and a constant $d_0$ ($= G(\infty)$). Here, (4.19) can be rewritten as
$$Y(s) = \left(\frac{\frac{1}{s}b_2 + \frac{1}{s^2}b_1 + \frac{1}{s^3}b_0}{1 + \frac{1}{s}a_2 + \frac{1}{s^2}a_1 + \frac{1}{s^3}a_0} + d_0\right)U(s) = \left(\frac{n\!\left(\frac{1}{s}\right)}{d\!\left(\frac{1}{s}\right)} + d_0\right)U(s) \tag{4.20}$$
or
$$Y(s) = \left[\frac{1}{d\!\left(\frac{1}{s}\right)}\left(\frac{1}{s}b_2 + \frac{1}{s^2}b_1 + \frac{1}{s^3}b_0\right) + d_0\right]U(s). \tag{4.21}$$
Furthermore, defining
$$X(s) = \frac{1}{d\!\left(\frac{1}{s}\right)}U(s) = \frac{1}{1 + \frac{1}{s}a_2 + \frac{1}{s^2}a_1 + \frac{1}{s^3}a_0}U(s) \tag{4.22}$$
yields
$$\left(1 + \frac{1}{s}a_2 + \frac{1}{s^2}a_1 + \frac{1}{s^3}a_0\right)X(s) = U(s) \tag{4.23}$$
and
$$X(s) = U(s) - \frac{1}{s}a_2 X(s) - \frac{1}{s^2}a_1 X(s) - \frac{1}{s^3}a_0 X(s). \tag{4.24}$$
Then,
$$Y(s) = n\!\left(\tfrac{1}{s}\right)X(s) + d_0 U(s) = \left(\frac{1}{s}b_2 + \frac{1}{s^2}b_1 + \frac{1}{s^3}b_0\right)X(s) + d_0 U(s). \tag{4.25}$$
The corresponding system block diagram is depicted in Fig. 4.6, where the negative sign, if it exists, is noted as "−" at the summing points.
Now, the LFT approach is adopted to find the state-space representation, and the integrators can be cut off as illustrated in Fig. 4.7. If the state variables are defined as shown in Fig. 4.7, then the realization matrix $P = \begin{bmatrix} A & B \\ C & D \end{bmatrix}$ can be found from $\begin{bmatrix} x \\ u \end{bmatrix} \mapsto \begin{bmatrix} \dot{x} \\ y \end{bmatrix}$ directly as
(4.26)
[Figs. 4.6, 4.7, and 4.8 Block diagram of (4.24)–(4.25) with forward gains $d_0$, $b_2$, $b_1$, $b_0$, three integrators $\frac{1}{s}$, and feedback gains $-a_2$, $-a_1$, $-a_0$; in Fig. 4.7 the integrators are cut off with states ordered $x_3, x_2, x_1$ and in Fig. 4.8 with states ordered $x_1, x_2, x_3$]
i.e.,
$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -a_0 & -a_1 & -a_2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}u, \qquad y = \begin{bmatrix} b_0 & b_1 & b_2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + d_0 u. \tag{4.27}$$
(4.28)
Apparently, (4.26) and (4.28) indicate that these two different state-space realiza-
tions result from the same third-order transfer function. In fact, for two different
expressions with the same dimension, there always exists a similarity transformation
that links them together. For this example,
$$T = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}, \tag{4.29}$$
which gives
$$\begin{bmatrix} \hat{A} & \hat{B} \\ \hat{C} & \hat{D} \end{bmatrix} = \begin{bmatrix} T^{-1} & 0 \\ 0 & I \end{bmatrix}\begin{bmatrix} A & B \\ C & D \end{bmatrix}\begin{bmatrix} T & 0 \\ 0 & I \end{bmatrix}. \tag{4.30}$$
In general, for
$$G(s) = \frac{b_{n-1}s^{n-1} + \cdots + b_1 s + b_0}{s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0} + d_0, \tag{4.31}$$
(4.32)
is called the controllable canonical form [1]. The observable canonical realization
is dual in the form of (4.28).
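To illustrate the controllable canonical form concretely, the sketch below (plain Python, hypothetical coefficients) builds (A, B, C, D) as in (4.27) for a third-order example and checks $C(sI - A)^{-1}B + D$ against the transfer function at one frequency point.

```python
# Sketch: controllable canonical realization of
# G(s) = (b2 s^2 + b1 s + b0)/(s^3 + a2 s^2 + a1 s + a0) + d0,
# checked by evaluating C (sI - A)^(-1) B + D at a sample point.
def canonical(a, b, d0):
    # a = [a0, a1, a2], b = [b0, b1, b2]
    A = [[0.0, 1.0, 0.0],
         [0.0, 0.0, 1.0],
         [-a[0], -a[1], -a[2]]]
    B = [0.0, 0.0, 1.0]
    C = [b[0], b[1], b[2]]
    return A, B, C, d0

def tf_eval(A, B, C, D, s):
    """Evaluate C (sI - A)^(-1) B + D via Gauss-Jordan elimination."""
    n = len(B)
    M = [[(s if i == j else 0.0) - A[i][j] for j in range(n)] + [B[i]]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    x = [M[i][n] / M[i][i] for i in range(n)]
    return sum(c * xi for c, xi in zip(C, x)) + D

A, B, C, D = canonical(a=[2.0, 5.0, 4.0], b=[7.0, 3.0, 1.0], d0=0.5)
s = complex(0.0, 1.3)
direct = (s**2 + 3*s + 7) / (s**3 + 4*s**2 + 5*s + 2) + 0.5
# tf_eval(A, B, C, D, s) agrees with `direct`
```

Any similarity transformation of (A, B, C) as in (4.30) leaves this evaluation unchanged, which is the coordinate-freedom discussed above.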
[Block diagram: two integrators $\frac{1}{s}$ with states $x_1$, $x_2$, forward gains, and feedback gains $-3$ and $-2$, realizing (4.33)]
$$G(s) = \frac{3s + 4}{s^2 + 3s + 2} = \frac{Y(s)}{U(s)} \tag{4.33}$$
or equivalently,
$$Y(s) = \frac{3\frac{1}{s} + 4\frac{1}{s^2}}{1 + 3\frac{1}{s} + 2\frac{1}{s^2}}\,U(s). \tag{4.34}$$
Applying the procedures developed above to obtain (4.24) and (4.25) will yield the
block diagram of Fig. 4.9. Then, the integrators can be separated to obtain a two-port
matrix, of which the state-space realization is
(4.35)
(4.36)
[Fig. 4.10 Realization in a cascade form: two integrators $\frac{1}{s}$ with states $x_1$, $x_2$ and feedback gains $-1$ and $-2$]
Let
$$G(s) = \frac{3s + 4}{s^2 + 3s + 2} = \frac{1}{s+1}\cdot\frac{3s+4}{s+2} = \frac{\frac{1}{s}}{1 + \frac{1}{s}}\cdot\frac{3 + 4\frac{1}{s}}{1 + 2\frac{1}{s}}, \tag{4.37}$$
which will result in the block diagram of Fig. 4.10. The corresponding two-port matrix becomes
(4.38)
where the A-matrix of this realization is lower triangular. It can be seen that
(4.39)
From
$$G(s) = \frac{3s + 4}{s^2 + 3s + 2} = \frac{1}{s+1} + \frac{2}{s+2} = \frac{\frac{1}{s}}{1 + \frac{1}{s}} + \frac{2\frac{1}{s}}{1 + \frac{2}{s}}, \tag{4.40}$$
4.3 Examples of Determining LFT Matrices 77
[Fig. 4.11 Realization in a parallel form: two parallel integrator branches with states $x_1$, $x_2$, feedback gains $-1$ and $-2$, and output gains 1 and 2]
one has the block diagram of Fig. 4.11. Then the corresponding two-port matrix becomes
(4.41)
(4.42)
From this example, it can be seen that the LFT state-space representation of a given transfer function is not unique. However, from (4.35), (4.38), and (4.41),
The subscripts 1, 2, and 3 denote the (A, B, C) matrices from (4.35), (4.38), and (4.41), respectively. This implies that their input/output mappings are the same. Different state-space realizations are theoretically equivalent in that they lead to the same transfer function. In practice, however, choosing an appropriate realization is still important due to numerical stability and sensitivity, as well as convenience of manipulation, all of which arise in physical implementations.
The following summarizes the use of the LFT.
1. The LFT structure represents the interconnection of a two-port, open-loop matrix
and a feedback terminator.
[Fig. 4.13 Unity feedback system with controller state-space $(A_k, B_k, C_k, D_k)$, plant $(A_g, B_g, C_g, D_g)$, input r, disturbance d, error e, control u, and output y; the integrators carry the states $x_k$ and $x_g$]
2. In determining the LFT form of a feedback connection system, one should break
all the inner feedback loops to obtain the two-port, open-loop matrix.
3. Different cutoffs will lead to a different terminator and different two-port
matrices, but the input/output mappings are the same.
Recall the unity feedback system, depicted again in Fig. 4.12 below, where G(s) is the controlled plant and K(s) is the controller. The closed-loop (2×2) transfer matrix of $\begin{bmatrix} d \\ r \end{bmatrix} \mapsto \begin{bmatrix} e \\ y \end{bmatrix}$ can be found as
$$T = \begin{bmatrix} KG(I + KG)^{-1} & (I + GK)^{-1} \\ G(I + KG)^{-1} & GK(I + GK)^{-1} \end{bmatrix}. \tag{4.44}$$
[Fig. 4.14 LFT forms: (a) (3×3) LFT form with terminators $\frac{I}{s}$ and $D_k$, mapping $\begin{bmatrix} d \\ r \end{bmatrix}$ to $\begin{bmatrix} e \\ y \end{bmatrix}$, and (b) reduced (2×2) LFT form with matrix $\begin{bmatrix} A_T & B_T \\ C_T & D_T \end{bmatrix}$ terminated by $\frac{I}{s}$]
(4.45)
(4.46)
where
(4.47)
Thus, one has $T(s) = \mathrm{LFT}_l(P, D_k) = \mathrm{LFT}_u\!\left(\begin{bmatrix} A_T & B_T \\ C_T & D_T \end{bmatrix}, \frac{I}{s}\right)$. Obviously, the feedback system is always well-posed in the case of either $D_k = 0$ or $D_g = 0$, i.e., the strictly proper case. Then, for $D_k = 0$, one can find from (4.45) that the state-space realization of the feedback system of Fig. 4.14b is given by
(4.48)
Mason’s approach [7, 8] has been a useful tool to determine the transfer function for
a given system for many years. A substantial number of practicing control engineers
are familiar with it. However, this approach involves a set of rules that can be
difficult for a new learner to remember.
This section reveals a relationship between the classic Mason's gain formula and the LFT approach for characterizing the transfer function from input to output. It is well known that Mason's gain formula is a set of rules for determining the transfer function of a single-input single-output (SISO) system. From the discussions in the previous sections, one knows that the LFT formulation generalizes the determination of transfer functions to MIMO cases. In this section, several examples are given to show the differences and similarities between these two approaches. Through the following examples, one can observe that each approach has its own benefits in determining the transfer function, depending on the structure of the control system.
Mason's gain formula [6] is as follows:
$$1.\; M = \frac{\displaystyle\sum_j M_j \Delta_j}{\Delta}$$
2. M = transfer function or gain of the system
3. $M_j$ = gain of the jth forward path
4. j = an integer representing the forward paths in the system
5. $\Delta = 1 - \sum(\text{all different loop gains}) + \sum(\text{gain products of all combinations of 2 nontouching loop gains}) - \sum(\text{gain products of all combinations of 3 nontouching loop gains}) + \cdots$
6. $\Delta_j$ = value of Δ for that part of the block diagram that does not touch the jth forward path. (4.49)
4.4 Relationship Between Mason’s Gain Formulae and LFT 81
[Fig. 4.15 Block diagram from r to w: forward blocks E, B, A with negative feedback C (loop $L_1$, around E and B) and negative feedback D (loop $L_2$, around B and A)]
Example 4.3 Determine the transfer function of the system block diagram depicted in Fig. 4.15 from r to w using Mason's gain formula and the LFT formulation, respectively.
Firstly, Mason's gain formula (i.e., the following steps (1) to (5)) is applied.
Mason's Approach
Step 1: Find all of the feedback loops and their gains.
In this example, there are two feedback loops: $L_1 = -CBE$ and $L_2 = -DAB$.
Step 2: Find the forward paths and their gains.
A forward path is a path from r to w that does not cross the same point more than once. In this example, there is only one forward path, i.e., $j = 1$ and $M_1 = ABE$.
Step 3: Find Δ.
In this example, there are no nontouching loop pairs, so that
$$\Delta = 1 - \sum \text{loop gains} = 1 + CBE + DAB. \tag{4.50}$$
Step 4: Find $\Delta_j$.
From the block diagram of Fig. 4.15, one can see that $L_1 = -CBE$ and $L_2 = -DAB$ touch the forward path $M_1 = ABE$, so that $\Delta_1 = 1$.
Step 5: Final solution.
Using Mason's gain formula (4.49), the transfer function from r to w is given by
$$\frac{w}{r} = M = \frac{ABE}{1 + ECB + DAB}. \tag{4.51}$$
LFT Approach
To find the transfer function of a feedback control system by the LFT approach, the truncated termination can always be chosen to be an identity. Note that the cutting places should break all of the internal closed loops. In this example, the minimum number of cutting points is 1. Figure 4.16 shows an alternative cutting selection, in which the number of cutting points is 2. Figure 4.17 shows the corresponding LFT form with terminator $K = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$, which gives
[Fig. 4.16 Alternative cutting with break points $u_1$, $y_1$ (through C) and $u_2$, $y_2$ (through D); Fig. 4.17 The corresponding LFT form]
(4.52)
Now, consider the truncation as depicted in Fig. 4.18, in which there is only one break point. Clearly, one has the terminator K = 1, and the two-port matrix is reduced to
(4.53)
[Fig. 4.18 Truncation with a single break point (y, u) and terminator K = [1]]
(4.54)
Apparently, the results shown in Eqs. (4.51), (4.52), and (4.54) are the same. Here, one can observe that $(I - P_{22}K) = 1 + (DA + EC)B$ in the above example is equal to $\Delta = 1 + (DA + EC)B$ in (4.50). The well-posedness condition of the LFT formulation is thus related to the requirement of a nonzero Δ in Mason's gain formula.
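The agreement between the two approaches can also be checked numerically. The sketch below assigns arbitrary gains to the blocks of Fig. 4.15 (a hypothesized signal labelling consistent with the loops $-CBE$ and $-DAB$), solves the loop equations directly, and compares the result with Mason's formula (4.51):

```python
# Cross-check of Example 4.3 with arbitrary block gains: solve the loop
# equations of Fig. 4.15 as a 2x2 linear system and compare with Mason's
# formula w/r = ABE/(1 + ECB + DAB) from (4.51).
A, B, C, D, E = 2.0, 3.0, 0.5, 0.25, 1.5
r = 1.0

mason = (A * B * E * r) / (1.0 + E * C * B + D * A * B)

# Loop equations (m = output of B, w = output of A):
#   m = B*(E*(r - C*m) - D*w)  and  w = A*m
a11, a12, rhs1 = 1.0 + B * E * C, B * D, B * E * r
a21, a22, rhs2 = -A, 1.0, 0.0
det = a11 * a22 - a12 * a21
w = (a11 * rhs2 - a21 * rhs1) / det   # Cramer's rule for w
# w agrees with `mason`
```

The determinant `det` equals $1 + ECB + DAB$, i.e., the Mason Δ and the LFT well-posedness factor coincide, as observed above.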
Example 4.4 Consider the control block diagram shown in Fig. 4.20 and determine the transfer function from r to w by using Mason's gain formula and the LFT, respectively.
Mason's Approach
Step 1: Find all the loops and their gains.
Individual loops:
[Fig. 4.20 Block diagram from r to w: forward blocks $G_1$, $G_2$, $G_3$, $G_5$ with a parallel branch $G_4$, negative feedbacks $H_1$ (loop $L_1$) and $H_2$ (loop $L_2$), and outer unity feedback (loops $L_3$ and $L_4$)]
Step 2: Find the forward paths and their gains.
$$M_1 = G_5 G_3 G_2 G_1 \quad\text{and}\quad M_2 = G_5 G_4 G_2 G_1. \tag{4.57}$$
Step 3: Find Δ.
$$\Delta = 1 - \sum \text{loop gains} + \sum \text{nontouching loop gains taken two at a time}$$
Step 4: Find $\Delta_j$.
$$\Delta_1 = 1 \quad\text{and}\quad \Delta_2 = 1. \tag{4.59}$$
Step 5: Final solution.
By (4.49),
$$M = \frac{G_5(G_3 + G_4)G_2 G_1}{1 + H_1 G_2 + H_2 G_5 + G_5(G_3 + G_4)G_2 G_1 + H_1 G_2 H_2 G_5}. \tag{4.60}$$
[Fig. 4.21 Cutting points $y_1$, $u_1$ and $y_2$, $u_2$ in the block diagram of Fig. 4.20; Fig. 4.22 The corresponding LFT form mapping r to w with the two-port matrix of (4.61) and terminator $K = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$]
LFT Approach
Breaking the system as depicted in Fig. 4.21, its corresponding LFT form is shown in Fig. 4.22, where the two-port matrix is given by
(4.61)
By definition,
$$\mathrm{LFT}_l(P, K) = \begin{bmatrix} 0 & 1 \end{bmatrix}\begin{bmatrix} 1 + G_2 H_1 & G_2 G_1 \\ -G_5(G_3 + G_4) & 1 + G_5 H_2 \end{bmatrix}^{-1}\begin{bmatrix} G_2 G_1 \\ 0 \end{bmatrix} = \frac{G_1 G_2 (G_3 + G_4) G_5}{1 + H_1 G_2 + H_2 G_5 + G_5(G_3 + G_4)G_2 G_1 + H_1 G_2 H_2 G_5}. \tag{4.62}$$
[Signal flow graph of the circuit example: $V_1 \to \frac{1}{R_1} \to I_1 \to R_2 \to V_2 \to \frac{1}{R_3} \to I_3 \to \frac{1}{Cs} \to V_3$ with negative feedback branches]
Consequently, the LFT yields the same result as Mason's gain formula. It can be seen from (4.61) that $P_{22}$ is 2×2 with the nontouching loops in its main diagonal elements; additionally, $\det(I - P_{22}) = \Delta$, where
$$\Delta = 1 + H_1 G_2 + H_2 G_5 + G_5(G_3 + G_4)G_2 G_1 + H_1 G_2 H_2 G_5. \tag{4.63}$$
$$\frac{V_3}{V_1} = T(s) = \frac{\dfrac{1}{Cs}}{R_3 + \dfrac{1}{Cs}} \cdot \frac{R_2 \,\Big\|\, \left(R_3 + \dfrac{1}{Cs}\right)}{R_1 + R_2 \,\Big\|\, \left(R_3 + \dfrac{1}{Cs}\right)}, \tag{4.64}$$
[Figs. 4.25 and 4.26 Cutting points $y_1$, $u_1$ and $u_2$, $y_2$ in the signal flow graph of the circuit, and the corresponding LFT form mapping $V_1$ to $V_3$ with terminator K]
(4.65)
" #
1 0
For the case R1 D R2 D R3 D R, the implementing termination K D yields
0 1
V3
D LFTl .P; K/ D P11 C P12 K .I P22 K/ 1 P21
V1
" #!1 " #
1 1 2 R1 1
1
D0C I 1 R
D : (4.66)
C s RC s Cs
1
RC s
0 3RC sC2
Apparently, the LFT approach of Fig. 4.26 yields the same result as that by using
Kirchhoff’s law.
It should be noted that the cutting places which break the internal feedback loops are not unique. In this example, one can also choose other cutting points, e.g.,
[Fig. 4.27 An alternative cutting with break points $u_1$, $y_1$ and $u_2$, $y_2$; Fig. 4.28 Signal flow graph with branch gains $\frac{1}{R_1}$, $R_2$, $\frac{1}{R_3}$, $\frac{1}{Cs}$ and loops $L_1$, $L_2$, $L_3$]
as shown in Fig. 4.27, in which one has the termination $K = \begin{bmatrix} 1 & 0 \\ 0 & \frac{1}{R_3} \end{bmatrix}$ and the corresponding two-port matrix P given by
(4.67)
$$T(s) = \frac{V_3}{V_1} = \frac{\dfrac{R_2}{R_1 R_3 Cs}}{1 - (L_1 + L_2 + L_3) + L_1 L_3}, \tag{4.69}$$
where
$$L_1 = -\frac{R_2}{R_1}, \quad L_2 = -\frac{R_2}{R_3}, \quad L_3 = -\frac{1}{R_3 Cs}. \tag{4.70}$$
For $R_1 = R_2 = R_3 = R$, this again gives
$$T(s) = \frac{1}{3RCs + 2}. \tag{4.71}$$
Evidently, the results of (4.64), (4.66), and (4.71) are the same. An extended discussion of the LFT and Mason's gain formula will be given in the next section.
From the examples above, using the LFT approach to determine the transfer function is much more straightforward than using Mason's gain formula. The only key point in using the LFT approach is how to choose the places that break all of the feedback loops. To summarize, for determining the closed-loop transfer function of a feedback system, the more loops the control system has, the more complicated Mason's approach becomes, while the LFT approach only requires properly chosen breaking points (even for MIMO systems), after which the rest of the procedure is systematically manipulated. Moreover, some topological behaviors, such as nontouching loop gains, can be observed in the two-port matrix of the LFT formulation.
Most feedback systems can be described in the standard control configuration (SCC) of Fig. 4.1, in which the LFT approach is used to synthesize and analyze the control problem. In this section, feedback control design using the LFT approach is first introduced. The LFT formulation is now mature, and subroutines are available in MATLAB toolboxes. In the SCC of Fig. 4.1, the LFT matrix P denotes a generalized plant that represents an open-loop system with two sets of inputs stacked as $\begin{bmatrix} r \\ u \end{bmatrix}$ and two sets of outputs $\begin{bmatrix} w \\ y \end{bmatrix}$, where the signals can be vector-valued functions of time. Controller K (to be designed) forms the closed loop $y \mapsto u$ as a terminator in the SCC. If a given system is "stabilizable" via the feedback controller K, all internally stabilizing controllers can be found by the Youla parameterization [10, 11], in which a generator for the stabilizing controller K is also given by an
[Fig. 4.29 The star product of two LFT matrices: terminating $\begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}$ with $K = \mathrm{LFT}_l(\Pi, \Phi)$ yields the interconnected matrix $\begin{bmatrix} L_{11} & L_{12} \\ L_{21} & L_{22} \end{bmatrix}$ terminated by Φ]
LFT in $K = \mathrm{LFT}_l(\Pi, \Phi)$. The interconnection of two LFTs then involves a product connection, namely, the star product [9], which has a very complicated representation. Figure 4.29 shows the star product of two LFT matrices.
Model uncertainties are almost inevitable in real-world control systems; therefore, robustness considerations play an important role in control system theory and control system design. Model uncertainties usually arise from unavailable information on the plant dynamics or represent part of the dynamics that is purposely left out for the sake of simplicity and ease in system analysis and controller design. As a consequence, the controller, which is designed based on the mathematical model, has to be "robust" in the sense that it must achieve the design specifications for the real plant dynamics containing the unmodeled, uncertain dynamic component. In fact, almost all control systems in practice suffer from disturbances/noises and model uncertainties. Basically, a disturbance is an external signal which is neither controllable nor dependent on internal variables of the plant, such as signal d in Fig. 4.12, whereas model uncertainty describes inconsistencies in dynamics between the model and the real plant. As shown in Fig. 4.30 below, it is assumed that the actual plant dynamics G is the sum of the nominal model $G_o$ and the model uncertainty $\Delta_G$. The so-called additive uncertainty, i.e., $G = G_o + \Delta_G$, therefore includes the uncertain dynamics [2]. In fact, uncertainties also arise in model reduction, which simplifies calculations or avoids the difficulties of a highly complex complete model. It is only known that some discrepancies exist between the simulated model and the real plant. For instance, the neglect of some high-order terms or some nonlinear phenomena will be summed up in a global model error, and the model error block $\Delta_G$ can be a full transfer function matrix.
4.5 LFT Description and Feedback Controllers 91
[Figs. 4.30 and 4.31 Feedback system with additive uncertainty Δ and weights $W_1$, $W_2$, $W_3$: controller K, nominal plant $G_0$, exogenous inputs d, r and weighted outputs $z_1$, $z_2$, together with the generalized plant matrix mapping $[\beta;\, d;\, r;\, u]$ to $[\alpha;\, z_1;\, z_2;\, y]$]
(4.72)
Finally, one can employ the LFT matrix P to synthesize controller K for a specific design goal. For example, in $H_\infty$ control, one can determine a satisfactory controller K by minimizing $\|T_{zw}\|_\infty$, where $w = \begin{bmatrix} d \\ r \end{bmatrix}$ and $z = \begin{bmatrix} z_1 \\ z_2 \end{bmatrix}$. More will be discussed in the rest of this book.
To sum up, this section introduced the motivations of feedback controller design.
Examples were mainly designed for evaluating the presented LFT approach. Under
the LFT scheme, readers can take the synthesis target as the terminator, e.g.,
controller K. Then, via applying the LFT manipulation, one can determine the
solution of K systematically.
This section describes inner and co-inner systems. For synthesizing a (sub)optimal control system under the LFT framework, to be elaborated in the later parts of this book, the properties of these systems reveal the significance of such concepts.
Definition 4.1 Let $P^\sim(s) := P^T(-s)$ and let $P^*(s)$ denote the conjugate transpose of P(s). The transfer function matrix P(s) is called all-pass if $P^\sim P = I$ for all s. An all-pass P(s) is then called inner if P(s) is stable and $P^*P \le I$, $\forall\, \mathrm{Re}(s) \ge 0$. Dually, a transfer function matrix P(s) is called co-all-pass if $PP^\sim = I$ for all s. A co-all-pass P(s) is then called co-inner if P(s) is stable and $PP^* \le I$, $\forall\, \mathrm{Re}(s) \ge 0$.
Consider the $(q_1 + q_2) \times (m_1 + m_2)$ two-port system of Fig. 4.1, where $\begin{bmatrix} z \\ y \end{bmatrix} = P\begin{bmatrix} w \\ u \end{bmatrix}$ with $q_2 = m_1$ and $q_1 \ge m_2$. An inner system has the property that the 2-norm of the output signal is equal to that of the input [3, 5]. That is,
$$\left\|y(j\omega)\right\|_2^2 + \left\|z(j\omega)\right\|_2^2 = \left\|u(j\omega)\right\|_2^2 + \left\|w(j\omega)\right\|_2^2 \tag{4.73}$$
for any ω.
Example 4.6 Verify that the one-port transfer function $T(s) = \frac{s-1}{s+1}$ is inner.
From $T^\sim(s) = T^T(-s)$, one has
$$T^\sim(s) = \frac{-s-1}{-s+1} = \frac{s+1}{s-1},$$
and then
$$T^\sim(s)T(s) = \frac{s+1}{s-1}\cdot\frac{s-1}{s+1} = 1.$$
Firstly, this shows that T(s) is all-pass. One can then evaluate $T^*(s)T(s)$ for $s = \sigma + j\omega$.
[Pole-zero plot of T(s): pole at $-1$, zero at $+1$ in the complex plane]
Evidently, $T^*(s)T(s) \le 1$, $\forall\, \mathrm{Re}(s) \ge 0$. By this and T(s) being stable, T(s) is an inner function. Furthermore, this can also be examined by the ratio of the distances from the location of s in the complex plane to the system pole and zero. For the unstable mirror image $T(s) = \frac{s+1}{s-1}$, in contrast, one finds $T^*(s)T(s) \ge 1$, $\forall\, \mathrm{Re}(s) \ge 0$; this means T(s) is not an inner function, although $T^\sim(s)T(s) = 1$.
" #
s1
0
Example 4.7 Verify that the two-port system P .s/ D sC1 s2 is inner.
0 sC2
From
2 32 3
.s C 1/ s1
6 .s 1/ 0 76 0 7
P .s/P .s/ D 6 7 sC1
4 .s C 2/ 5 4 s 2 5 D I;
0 0
.s 2/ sC2
94 4 Linear Fractional Transformations
Since P(s) is all-pass and P*(s)P(s) I, 8 Re(s) 0 (i. e., 0), and due to that all
of its poles are in the open left-half plane, this two-port P(s) is an inner system by
the definition.
" #
s1
0 sC1
Example 4.8 Verify that P .s/ D s2 is co-inner.
sC2
0
From
2 s 1 32 sC2 3
0 0
6 s C 1 76 s 2 7 D I;
P .s/P .s/ D 4 54 5
s2 sC1
0 0
sC2 s1
we can gather that P(s) is co-all-pass. Since one can also verify that P(s)P*(s) I,
8 Re(s) 0 for s D C j!, and it is stable, P(s) is an co-inner system.
Theorem 4.1 Given an LFT system P(s), if P(s) is inner, then $\|\mathrm{LFT}_l(P, K)\|_\infty < 1$ for any $K \in \mathcal{BH}_\infty$.
Proof Since $\begin{bmatrix} z \\ y \end{bmatrix} = P\begin{bmatrix} w \\ u \end{bmatrix}$ and $P = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}$ is inner, one has $P^*(j\omega)P(j\omega) = I$ and $\|P_{22}\|_\infty \le 1$. The latter is true because $P_{22}$ is part of P and $\|P\|_\infty = 1$. Then, by the small gain theorem, one has $\mathrm{LFT}_l(P, K) \in \mathcal{RH}_\infty$ for any $K \in \mathcal{BH}_\infty$. Since P is inner, one also has
which gives
$$\|\mathrm{LFT}_l(P, K)\|_\infty = \max_{w \ne 0} \frac{\|z\|_2}{\|w\|_2}. \tag{4.82}$$
Example 4.9 Consider the LFT system illustrated in Fig. 4.1, where $P(s) = \begin{bmatrix} 0 & \dfrac{s-1}{s+1} \\ \dfrac{s-2}{s+2} & 0 \end{bmatrix} \in \mathcal{RH}_\infty$ and $K = \dfrac{1}{s+3} \in \mathcal{BH}_\infty$. Verify that $\|\mathrm{LFT}_l(P, K)\|_\infty < 1$.
From (4.7), one has $\left\|\dfrac{1}{s+3}\right\|_\infty = \dfrac{1}{3}$ and $\mathrm{LFT}_l(P, K) = \dfrac{(s-1)(s-2)}{(s+1)(s+2)(s+3)}$. Hence, one can find that $\|\mathrm{LFT}_l(P, K)\|_\infty = \dfrac{1}{3} < 1$.
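A frequency-sweep spot check of this example (a numeric sketch): since the all-pass factors have unit magnitude on the imaginary axis, $|\mathrm{LFT}_l(P, K)(j\omega)| = 1/|j\omega + 3|$, which stays below 1 everywhere and peaks at $1/3$ at $\omega = 0$.

```python
# Spot check of Example 4.9: the closed loop (s-1)(s-2)/((s+1)(s+2)(s+3))
# has magnitude 1/|jw + 3| on the imaginary axis, peaking at 1/3 at w = 0.
def T(s):
    return ((s - 1) * (s - 2)) / ((s + 1) * (s + 2) * (s + 3))

mags = [abs(T(complex(0.0, w))) for w in (0.0, 0.5, 1.0, 2.0, 5.0, 10.0)]
peak = max(mags)    # = 1/3, attained at w = 0
```

Every sampled magnitude is strictly below 1, consistent with Theorem 4.1.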
As mentioned in Chap. 2, an LFT system can be transformed into a CSD one. Hence, the properties of all-pass (or co-all-pass) and inner (or co-inner) have their counterparts in J-unitary (or dual J-unitary) and J-lossless (or dual J-lossless) systems [5]. These properties and relations will be introduced in the next chapter.
Exercises
" #
r
1. Let w D LFTl .P; K/ in the block diagram below. Determine P and
d
LFTl (P,K) with the given cutting point.
C d
r − − w w ⎡ r⎤
A K P ⎢d ⎥
⎣ ⎦
− −
D
K
F
[Block diagram for Exercise 2: DC motor with input voltage V, back-EMF gain $K_e$, electrical dynamics $\frac{1}{Ls+R}$, torque constant $K_t$, disturbance torque $T_d$, and mechanical dynamics $\frac{1}{Js+B}$ with output speed ω]
3. Determine the transfer function of the given system below from r to w. (Hint:
Use the cutoffs indicated in the block diagram.)
[Block diagram: forward blocks E, B, A from r to w with negative feedbacks and the indicated cutoffs $y_1$, $u_1$ and $y_2$, $u_2$]
4. Find the transfer function of the following system by Mason’s gain formulae
as well as by LFT approach, respectively. Compare the results and discuss their
relationships.
[Block diagram as in Fig. 4.20: blocks $G_1$, $G_2$, $G_3$, $G_5$ with parallel branch $G_4$, feedbacks $H_1$ and $H_2$, and cutting points $y_1$, $u_1$ and $y_2$, $u_2$]
5. Consider the following block diagram. Determine the transfer function from r to
w using Mason’s gain formulae and LFT approach, respectively.
[Block diagram: blocks $G_1$ and $G_2$ with parallel branch $G_3$, feedbacks $H_1$, $H_2$, $H_3$, and cutting points $y_1$, $u_1$ and $y_2$, $u_2$]
6. Consider the system block diagram below and determine its transfer function
from r to w using Mason’s gain formulae and LFT, respectively.
[Block diagram: blocks $G_1$, $G_2$, $G_3$ from r to w with feedbacks $H_1$ and $H_2$, loops $L_1$, $L_2$, $L_3$, and cutting points $y_1$, $u_1$ and $y_2$, $u_2$]
References
100 5 Chain Scattering Descriptions
[Fig. 5.1 Chain scattering descriptions: (a) right CSD matrix G mapping $\begin{bmatrix} u \\ y \end{bmatrix}$ to $\begin{bmatrix} z \\ w \end{bmatrix}$ with terminator K and (b) left CSD matrix $\tilde{G}$ mapping $\begin{bmatrix} z \\ w \end{bmatrix}$ to $\begin{bmatrix} u \\ y \end{bmatrix}$ with terminator K]
Consider the right CSD matrix $G = \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix}$ of Fig. 5.1a, where the feedback connection is given by
$$\begin{bmatrix} z \\ w \end{bmatrix} = \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix}\begin{bmatrix} u \\ y \end{bmatrix}, \quad u = Ky. \tag{5.1}$$
Note that $G_{11}$ denotes the transfer function $u \mapsto z$ when $y = 0$, and similarly for the rest:
$$G_{11} = \left.\frac{z}{u}\right|_{y=0}, \quad G_{12} = \left.\frac{z}{y}\right|_{u=0}, \quad G_{21} = \left.\frac{w}{u}\right|_{y=0}, \quad G_{22} = \left.\frac{w}{y}\right|_{u=0}. \tag{5.2}$$
From
$$\begin{bmatrix} z \\ w \end{bmatrix} = \begin{bmatrix} G_{11}K + G_{12} \\ G_{21}K + G_{22} \end{bmatrix}y, \tag{5.3}$$
then define the right CSD transformation of G and K, denoted by $\mathrm{CSD}_r(G, K)$, as
$$\mathrm{CSD}_r\!\left(\begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix}, K\right) := (G_{11}K + G_{12})(G_{21}K + G_{22})^{-1}. \tag{5.5}$$
The right CSD transformation is said to be well posed if $(G_{21}K + G_{22})$ is invertible. The whole loop transfer function $w \mapsto z$ is given by $\mathrm{CSD}_r(G, K)$. This definition implies that $G_{22}$ must be square.
Example 5.1 Consider the SCC plant shown in Fig. 4.1, and use MATLAB to determine its corresponding right CSD manipulations.
5.1 CSD Definitions and Manipulations 101
clc;
clear;
disp('SCC2CSDr method')
syms P11 P12 P21 P22 K S;
syms p11 p12 p21 p22 k s;
P11 = input('w => u (P11) ?');
P12 = input('w => y (P12) ?');
P21 = input('r => u (P21) ?');
P22 = input('r => y (P22) ?');
K = input('Please input K:');
[row, col] = size(P21);
if (row ~= col)
    disp('A right CSD matrix G does not exist !')
    disp('Plz enter Ctrl c !')
    pause
else
    G11 = simplify(P12 - P11*inv(P21)*P22);
    G12 = simplify(P11*inv(P21));
    G21 = simplify(-inv(P21)*P22);
    G22 = inv(P21);
    G = [G11 G12; G21 G22]
end
if (G21*K + G22 == 0)
    disp('The transfer function is singular !')
    disp('Plz enter Ctrl c !')
    pause
else
    Transfer_function_r_to_w = simplify((G11*K + G12)*inv(G21*K + G22))
end
Similarly, the diagram shown in Fig. 5.1b represents the following two sets of equations:

$$\begin{bmatrix} u \\ y \end{bmatrix} = \begin{bmatrix} \tilde G_{11} & \tilde G_{12} \\ \tilde G_{21} & \tilde G_{22} \end{bmatrix}\begin{bmatrix} z \\ w \end{bmatrix}, \qquad u = Ky, \tag{5.6}$$
where

$$\tilde G_{11} = \left.\frac{u}{z}\right|_{w=0},\quad \tilde G_{12} = \left.\frac{u}{w}\right|_{z=0},\quad \tilde G_{21} = \left.\frac{y}{z}\right|_{w=0},\quad \tilde G_{22} = \left.\frac{y}{w}\right|_{z=0}. \tag{5.7}$$
From

$$\begin{bmatrix} u \\ y\end{bmatrix} = \begin{bmatrix} \tilde G_{11} & \tilde G_{12} \\ \tilde G_{21} & \tilde G_{22}\end{bmatrix}\begin{bmatrix} z \\ w\end{bmatrix} \tag{5.8}$$

and

$$\begin{bmatrix} I & -K\end{bmatrix}\begin{bmatrix} u \\ y\end{bmatrix} = \begin{bmatrix} I & -K\end{bmatrix}\begin{bmatrix} K \\ I\end{bmatrix} y = 0, \tag{5.9}$$

these yield

$$0 = \begin{bmatrix} I & -K\end{bmatrix}\begin{bmatrix} \tilde G_{11} & \tilde G_{12} \\ \tilde G_{21} & \tilde G_{22}\end{bmatrix}\begin{bmatrix} z \\ w\end{bmatrix}; \tag{5.10}$$

hence, when $\left(\tilde G_{11} - K\tilde G_{21}\right)^{-1}$ exists,

$$z = -\left(\tilde G_{11} - K\tilde G_{21}\right)^{-1}\left(\tilde G_{12} - K\tilde G_{22}\right) w. \tag{5.11}$$

Then, define the left CSD transformation of G̃ and K, denoted by CSDl(G̃, K), as

$$\mathrm{CSD}_l\!\left(\begin{bmatrix} \tilde G_{11} & \tilde G_{12} \\ \tilde G_{21} & \tilde G_{22}\end{bmatrix}, K\right) := -\left(\tilde G_{11} - K\tilde G_{21}\right)^{-1}\left(\tilde G_{12} - K\tilde G_{22}\right). \tag{5.12}$$
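The left transformation (5.12) can be sketched the same way; as a consistency check, terminating G̃ = G⁻¹ with K reproduces CSDr(G, K), since both read the same loop from opposite ports (the helper name `csd_l` and the constant-matrix data are ours):

```python
import numpy as np

def csd_l(Gt11, Gt12, Gt21, Gt22, K):
    # (5.12): CSD_l(G~, K) = -(G~11 - K G~21)^{-1} (G~12 - K G~22)
    return -np.linalg.inv(Gt11 - K @ Gt21) @ (Gt12 - K @ Gt22)

# if G maps [u; y] to [z; w], then G~ = G^{-1} maps [z; w] to [u; y],
# and CSD_l(G^{-1}, K) equals CSD_r(G, K) = (G11 K + G12)(G21 K + G22)^{-1}
G = np.array([[2.0, 1.0],
              [1.0, 3.0]])
K = np.array([[4.0]])
Gi = np.linalg.inv(G)
left = csd_l(Gi[:1, :1], Gi[:1, 1:], Gi[1:, :1], Gi[1:, 1:], K)
right = (G[:1, :1] @ K + G[:1, 1:]) @ np.linalg.inv(G[1:, :1] @ K + G[1:, 1:])
```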
clc;
clear;
disp('SCC2CSDl method')
syms P11 P12 P21 P22 K S;
syms p11 p12 p21 p22 k s;
P11 = input('w => u (P11) ?');
5.2 Cascaded Connection of Two CSD Matrices
[Fig. 5.2 Cascaded connections of two right CSD matrices (w1 → G1 → y1 = w2 → G2 → y2, terminated by K) and of two left CSD matrices, each equivalent to a single CSD matrix G.]
where the product of the two right CSD matrices follows the usual (block) matrix multiplication rule, i.e., G = G1G2.
Property 5.1 For the cascaded connection of two right CSD matrices:
1. CSDr(G1, CSDr(G2, K)) = CSDr(G1G2, K).
2. CSDr(G1G2, K) = CSDr(I, K) = K if $G_1 = G_2^{-1}$.
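Property 5.1(1) is easy to confirm numerically with random constant CSD matrices (a sketch with our own helper; invertibility of the relevant blocks holds almost surely for random data):

```python
import numpy as np

def csd_r(G, K, m):
    # right CSD action of a (2m x 2m) matrix G on an (m x m) termination K
    G11, G12, G21, G22 = G[:m, :m], G[:m, m:], G[m:, :m], G[m:, m:]
    return (G11 @ K + G12) @ np.linalg.inv(G21 @ K + G22)

rng = np.random.default_rng(0)
m = 2
G1 = rng.standard_normal((2 * m, 2 * m))
G2 = rng.standard_normal((2 * m, 2 * m))
K = rng.standard_normal((m, m))
lhs = csd_r(G1, csd_r(G2, K, m), m)   # nested transformations
rhs = csd_r(G1 @ G2, K, m)            # single cascaded matrix G = G1 G2
```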
Dually, consider the cascaded connection of two left CSD matrices G̃1 and G̃2 with the termination u1 = Ky1; the overall transfer function satisfies

$$z_2 = \mathrm{CSD}_l\left(\tilde G, K\right) w_2. \tag{5.16}$$

[Figure: cascaded connection of two left CSD matrices terminated by K.]

Since $z_1 = \mathrm{CSD}_l\left(\tilde G_1, K\right) w_1$, one then has

$$0 = \begin{bmatrix} I & -\mathrm{CSD}_l\left(\tilde G_1, K\right)\end{bmatrix}\begin{bmatrix} z_1 \\ w_1\end{bmatrix} = \begin{bmatrix} I & -\mathrm{CSD}_l\left(\tilde G_1, K\right)\end{bmatrix}\tilde G_2\begin{bmatrix} z_2 \\ w_2\end{bmatrix}, \tag{5.18}$$

and hence,
Similarly, the product of the two left CSD matrices also follows the usual (block)
matrix multiplication rule.
Property 5.2 The cascaded connection of two left CSD matrices gives the
following:
Consider next the connection of a right CSD matrix G with a left CSD matrix G̃ and the feedback termination u = Ky. From $a = \mathrm{CSD}_l\left(\tilde G, K\right) b$, one has

$$\begin{bmatrix} z \\ w\end{bmatrix} = G\begin{bmatrix} a \\ b\end{bmatrix} = G\begin{bmatrix} \mathrm{CSD}_l\left(\tilde G, K\right) \\ I\end{bmatrix} b, \tag{5.21}$$

and hence,

$$z = \mathrm{CSD}_r\!\left(G,\ \mathrm{CSD}_l\left(\tilde G, K\right)\right) w. \tag{5.22}$$
[Figs. 5.5–5.7 Equivalent connections of the CSD matrices G and M̃ with the feedback termination K.]

Also, from (5.20), it is clear that $\tilde G^{-1}$ maps $\begin{bmatrix} u \\ y\end{bmatrix}$ to $\begin{bmatrix} a \\ b\end{bmatrix}$; hence $\begin{bmatrix} z \\ w\end{bmatrix} = G\begin{bmatrix} a \\ b\end{bmatrix}$.
with the feedback termination u = Ky. From a = CSDr(G, K) b, one has

$$\begin{bmatrix} \mathrm{CSD}_r(G, K) \\ I\end{bmatrix} b = \tilde G\begin{bmatrix} z \\ w\end{bmatrix}, \tag{5.24}$$

and therefore,

$$z = \mathrm{CSD}_l\!\left(\tilde G,\ \mathrm{CSD}_r(G, K)\right) w. \tag{5.25}$$

For an easier description of the connection of Fig. 5.6, the notation CSDl ∘ CSDr is employed to sketch this connection. It is not difficult to verify that CSDl(I, CSDr(I, K)) = K. Additionally, one can multiply any nonsingular M̃ at the left terminal of both the top and bottom paths, as illustrated in Fig. 5.7, which is inconsequential to the closed-loop transfer function w ↦ z.
Property 5.4 For the connection of a left CSD associated with a right CSD, one has the following:

5.3 Transformation from LFT to CSD Matrix

As illustrated in Fig. 5.8, for the case of dim(w) = dim(y) and with an invertible P21, the particular SCC plant P has an "equivalent" right CSD matrix G of $\begin{bmatrix} u \\ y\end{bmatrix} \mapsto \begin{bmatrix} z \\ w\end{bmatrix}$ such that CSDr(G, K) = LFTl(P, K). This can be derived as follows:
[Fig. 5.8 Transformation of the SCC plant P into a right CSD matrix G terminated by K.]

Let

$$\begin{bmatrix} z \\ w\end{bmatrix} = \begin{bmatrix} P_{12} & P_{11} \\ 0 & I\end{bmatrix}\begin{bmatrix} u \\ w\end{bmatrix}, \qquad \begin{bmatrix} u \\ y\end{bmatrix} = \begin{bmatrix} I & 0 \\ P_{22} & P_{21}\end{bmatrix}\begin{bmatrix} u \\ w\end{bmatrix}, \tag{5.26}$$

where $G_1 = \begin{bmatrix} P_{12} & P_{11} \\ 0 & I\end{bmatrix}$ and $\tilde G_2 = \begin{bmatrix} I & 0 \\ P_{22} & P_{21}\end{bmatrix}$. It can be found that $\begin{bmatrix} z \\ w\end{bmatrix} = G_1 M\begin{bmatrix} u \\ y\end{bmatrix} = G\begin{bmatrix} u \\ y\end{bmatrix}$, where

$$G = \begin{bmatrix} P_{12} - P_{11}P_{21}^{-1}P_{22} & P_{11}P_{21}^{-1} \\ -P_{21}^{-1}P_{22} & P_{21}^{-1}\end{bmatrix} \tag{5.28}$$

and $M = \begin{bmatrix} I & 0 \\ P_{22} & P_{21}\end{bmatrix}^{-1} = \tilde G_2^{-1}$.
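A numerical spot check of (5.28): building G from a random P with invertible P21 and terminating it with K reproduces the LFT (a sketch with constant matrices; the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 2
P11, P12 = rng.standard_normal((m, m)), rng.standard_normal((m, m))
P21, P22 = rng.standard_normal((m, m)), rng.standard_normal((m, m))
K = rng.standard_normal((m, m))
I = np.eye(m)

T_lft = P11 + P12 @ K @ np.linalg.inv(I - P22 @ K) @ P21  # LFT_l(P, K)

P21i = np.linalg.inv(P21)                                 # (5.28)
G11, G12 = P12 - P11 @ P21i @ P22, P11 @ P21i
G21, G22 = -P21i @ P22, P21i
T_csd = (G11 @ K + G12) @ np.linalg.inv(G21 @ K + G22)    # CSD_r(G, K)
```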
Similarly, for the case of dim(z) = dim(u) and an invertible P12, there exists a left CSD matrix G̃ of $\begin{bmatrix} z \\ w\end{bmatrix} \mapsto \begin{bmatrix} u \\ y\end{bmatrix}$ such that $\mathrm{CSD}_l\left(\tilde G, K\right) = \mathrm{LFT}_l(P, K)$. One has, from (5.26), as illustrated in Fig. 5.9, the following:
[Fig. 5.9 Transformation of the SCC plant P into a left CSD matrix G̃ terminated by K.]

$$\begin{bmatrix} z' \\ y'\end{bmatrix} = \begin{bmatrix} P_{12} & 0 \\ P_{22} & -I\end{bmatrix}\begin{bmatrix} u \\ y\end{bmatrix} = \begin{bmatrix} I & -P_{11} \\ 0 & -P_{21}\end{bmatrix}\begin{bmatrix} z \\ w\end{bmatrix} \;\Rightarrow\; \begin{bmatrix} u \\ y\end{bmatrix} = \tilde G\begin{bmatrix} z \\ w\end{bmatrix}, \tag{5.30}$$

where

$$\tilde G = \begin{bmatrix} P_{12}^{-1} & -P_{12}^{-1}P_{11} \\ P_{22}P_{12}^{-1} & P_{21} - P_{22}P_{12}^{-1}P_{11}\end{bmatrix}. \tag{5.32}$$

Then

$$\mathrm{LFT}_l(P, K) = \mathrm{CSD}_l\left(\tilde M\tilde G_1, K\right) = \mathrm{CSD}_l\left(\tilde G, K\right), \tag{5.33}$$

where $\tilde M = \begin{bmatrix} P_{12} & 0 \\ P_{22} & -I\end{bmatrix}^{-1}$ and $\tilde G_1 = \begin{bmatrix} I & -P_{11} \\ 0 & -P_{21}\end{bmatrix}$. The above are summarized in the following lemma.
Lemma 5.1 Given an LFT matrix P, there exists a right CSD matrix G such that CSDr(G, K) = LFTl(P, K) if P21 is invertible, or there exists a left CSD matrix G̃ such that $\mathrm{CSD}_l\left(\tilde G, K\right) = \mathrm{LFT}_l(P, K)$ if P12 is invertible.
Furthermore, if both P21 and P12 are invertible, then it can be gathered that

$$G\tilde G = \begin{bmatrix} P_{12} & P_{11} \\ 0 & I\end{bmatrix}\begin{bmatrix} I & 0 \\ P_{22} & P_{21}\end{bmatrix}^{-1}\begin{bmatrix} P_{12} & 0 \\ P_{22} & -I\end{bmatrix}^{-1}\begin{bmatrix} I & -P_{11} \\ 0 & -P_{21}\end{bmatrix} = I. \tag{5.34}$$
Consequently,

$$T = \mathrm{LFT}_l(P, K) = \mathrm{CSD}_r(G, K) = \mathrm{CSD}_l\left(\tilde G, K\right), \tag{5.35}$$

and conversely,

$$K = \mathrm{CSD}_r\left(\tilde G, T\right) = \mathrm{CSD}_l(G, T). \tag{5.36}$$
5.4 Transformation from LFT to Cascaded CSDs

For the transformation from LFT to CSD as described in the previous section, there is a condition that P12 or P21 should be invertible. This condition can, however, be relaxed, as explained in this section. For any standard control configuration (SCC) plant P of $\begin{bmatrix} w \\ u\end{bmatrix} \mapsto \begin{bmatrix} z \\ y\end{bmatrix}$ with u = Ky, as depicted in the left-side diagram of Fig. 5.8, the plant can be represented by two coupled CSDs, as stated in the following. From

$$\begin{bmatrix} z \\ y\end{bmatrix} = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22}\end{bmatrix}\begin{bmatrix} w \\ u\end{bmatrix}, \tag{5.37}$$
we derive

$$\begin{bmatrix} z \\ w\end{bmatrix} = \begin{bmatrix} P_{12} & P_{11} \\ 0 & I\end{bmatrix}\begin{bmatrix} u \\ w\end{bmatrix} \tag{5.38}$$

and

$$\begin{bmatrix} u \\ y\end{bmatrix} = \begin{bmatrix} I & 0 \\ P_{22} & P_{21}\end{bmatrix}\begin{bmatrix} u \\ w\end{bmatrix}, \tag{5.40}$$

which in turn can be represented by a right CSD associated with a left CSD, as illustrated in the right-side diagram of Fig. 5.10. Since u = Ky, one has, from the lower part of (5.40),
[Fig. 5.10 Transformation of the SCC plant P into the right CSD G1 = [[P12, P11],[0, I]] cascaded with the left CSD G̃2 = [[I, 0],[P22, P21]], terminated by K.]

$$\begin{bmatrix} K \\ I\end{bmatrix} y = \begin{bmatrix} I & 0 \\ P_{22} & P_{21}\end{bmatrix}\begin{bmatrix} u \\ w\end{bmatrix}, \tag{5.41}$$
and therefore,

$$0 = \begin{bmatrix} I & -K\end{bmatrix}\begin{bmatrix} K \\ I\end{bmatrix} y = \begin{bmatrix} I & -K\end{bmatrix}\begin{bmatrix} I & 0 \\ P_{22} & P_{21}\end{bmatrix}\begin{bmatrix} u \\ w\end{bmatrix} = \begin{bmatrix} I - KP_{22} & -KP_{21}\end{bmatrix}\begin{bmatrix} u \\ w\end{bmatrix}. \tag{5.42}$$
This reveals that the transfer function w ↦ z can be represented by a right CSD matrix associated with a left CSD matrix; below, LFTl stands for the lower linear fractional transformation as previously defined in Chap. 4:

$$z = \mathrm{LFT}_l\!\left(\begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22}\end{bmatrix}, K\right) w = \mathrm{CSD}_r\!\left(\begin{bmatrix} P_{12} & P_{11} \\ 0 & I\end{bmatrix},\ \mathrm{CSD}_l\!\left(\begin{bmatrix} I & 0 \\ P_{22} & P_{21}\end{bmatrix}, K\right)\right) w. \tag{5.45}$$
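Equation (5.45) needs no invertibility of P21; the sketch below checks the cascaded CSDr ∘ CSDl form against the LFT for random constant data:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 2
P11, P12, P21, P22 = (rng.standard_normal((m, m)) for _ in range(4))
K = rng.standard_normal((m, m))
I = np.eye(m)

T_lft = P11 + P12 @ K @ np.linalg.inv(I - P22 @ K) @ P21

# inner: CSD_l([[I, 0], [P22, P21]], K) = (I - K P22)^{-1} K P21 by (5.12)
S = np.linalg.inv(I - K @ P22) @ (K @ P21)
# outer: CSD_r([[P12, P11], [0, I]], S) = P12 S + P11 by (5.5)
T_csd = P12 @ S + P11
```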
Example 5.3 Consider the SCC plant given in (5.37); use MATLAB to determine its corresponding representation as a right CSD associated with a left CSD. The following MATLAB code is an example for readers' reference.
clc;
clear;
disp('SCC_2_(CSDr associated with CSDl) method')
syms P11 P12 P21 P22 K S;
syms p11 p12 p21 p22 k s;
P11 = input('z => u (P11) ?');
P12 = input('z => y (P12) ?');
P21 = input('w => u (P21) ?');
P22 = input('w => y (P22) ?');
K = input('Please input K:');
G1_11 = P12;
G1_12 = P11;
G1_21 = zeros(size(P12));
G1_22 = eye(size(P11));
G1 = [G1_11 G1_12; G1_21 G1_22]  % upper triangular matrix
G2_11 = eye(size(P22));
G2_12 = zeros(size(P21));
G2_21 = P22;
G2_22 = P21;
G2 = [G2_11 G2_12; G2_21 G2_22]  % lower triangular matrix
if (G2_11 - K*G2_21 == 0)
    disp('The transfer function is singular!')
    disp('Please press Ctrl-C!')
    pause
else
    Transfer_function_w_to_u = simplify(-inv(G2_11 - K*G2_21)*(G2_12 - K*G2_22))
end
[Fig. 5.11 Transformation of the SCC plant P into the left CSD G̃1 = [[I, −P11],[0, −P21]] cascaded with the right CSD G2 = [[P12, 0],[P22, −I]], terminated by K.]
(5.46)

Let

$$\begin{bmatrix} z' \\ y'\end{bmatrix} = \begin{bmatrix} I & -P_{11} \\ 0 & -P_{21}\end{bmatrix}\begin{bmatrix} z \\ w\end{bmatrix} = \begin{bmatrix} P_{12} & 0 \\ P_{22} & -I\end{bmatrix}\begin{bmatrix} u \\ y\end{bmatrix}. \tag{5.47}$$
Then, one can obtain the transfer function w ↦ z, represented by a left CSD matrix associated with a right CSD matrix, with the latter terminated by K at the right end, as illustrated in Fig. 5.11, where z′ and y′ are the intermediate signals. From their definition in (5.47), it is clear that the dimensions of z′ and y′ are the same as those of z and y, respectively. It is common usage in this book that a signal with the prime symbol has the same dimension as the original signal, although they are, of course, different signals.
From (5.47) and u = Ky, one has

$$\begin{bmatrix} z' \\ y'\end{bmatrix} = \begin{bmatrix} P_{12} & 0 \\ P_{22} & -I\end{bmatrix}\begin{bmatrix} K \\ I\end{bmatrix} y = \begin{bmatrix} P_{12}K \\ P_{22}K - I\end{bmatrix} y, \tag{5.48}$$

and hence,

$$z' = \mathrm{CSD}_r\!\left(\begin{bmatrix} P_{12} & 0 \\ P_{22} & -I\end{bmatrix}, K\right) y' = -P_{12}K(I - P_{22}K)^{-1} y'. \tag{5.49}$$
Let

$$S = \mathrm{CSD}_r\!\left(\begin{bmatrix} P_{12} & 0 \\ P_{22} & -I\end{bmatrix}, K\right). \tag{5.50}$$
Then (5.47) offers

$$\begin{bmatrix} z' \\ y'\end{bmatrix} = \begin{bmatrix} S \\ I\end{bmatrix} y' = \begin{bmatrix} I & -P_{11} \\ 0 & -P_{21}\end{bmatrix}\begin{bmatrix} z \\ w\end{bmatrix}, \tag{5.51}$$

and thereby,

$$0 = \begin{bmatrix} I & -S\end{bmatrix}\begin{bmatrix} S \\ I\end{bmatrix} y' = \begin{bmatrix} I & -S\end{bmatrix}\begin{bmatrix} I & -P_{11} \\ 0 & -P_{21}\end{bmatrix}\begin{bmatrix} z \\ w\end{bmatrix}, \tag{5.52}$$
and hence

$$z = \left[P_{11} + P_{12}K(I - P_{22}K)^{-1}P_{21}\right] w = \mathrm{LFT}_l(P, K)\, w. \tag{5.54}$$
One concludes that the transfer function w ↦ z can be represented by a left CSD matrix associated with a right CSD matrix as

$$z = \mathrm{LFT}_l\!\left(\begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22}\end{bmatrix}, K\right) w = \mathrm{CSD}_l\!\left(\begin{bmatrix} I & -P_{11} \\ 0 & -P_{21}\end{bmatrix},\ \mathrm{CSD}_r\!\left(\begin{bmatrix} P_{12} & 0 \\ P_{22} & -I\end{bmatrix}, K\right)\right) w. \tag{5.55}$$
Example 5.4 Consider the SCC plant given in (5.37); use MATLAB to determine its corresponding representation as a left CSD associated with a right CSD. The following MATLAB code is an example for readers' reference.
clc; clear;
disp('SCC_2_(CSDl associated with CSDr) method')
syms P11 P12 P21 P22 K S;
5.5 Transformation from CSD to LFT Matrix
[Fig. 5.12 Transformation of a right CSD matrix $G = \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22}\end{bmatrix}$ into its LFT matrix $\begin{bmatrix} G_{12}G_{22}^{-1} & G_{11} - G_{12}G_{22}^{-1}G_{21} \\ G_{22}^{-1} & -G_{22}^{-1}G_{21}\end{bmatrix}$, terminated by K.]
From

$$\begin{bmatrix} z \\ w\end{bmatrix} = \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22}\end{bmatrix}\begin{bmatrix} u \\ y\end{bmatrix}, \tag{5.56}$$

one has

$$\begin{bmatrix} w \\ u\end{bmatrix} = \begin{bmatrix} G_{21} & G_{22} \\ I & 0\end{bmatrix}\begin{bmatrix} u \\ y\end{bmatrix}; \tag{5.57}$$

hence,

$$\begin{bmatrix} z \\ y\end{bmatrix} = \begin{bmatrix} G_{11} & G_{12} \\ 0 & I\end{bmatrix}\begin{bmatrix} u \\ y\end{bmatrix} = \begin{bmatrix} G_{11} & G_{12} \\ 0 & I\end{bmatrix}\begin{bmatrix} G_{21} & G_{22} \\ I & 0\end{bmatrix}^{-1}\begin{bmatrix} w \\ u\end{bmatrix} = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22}\end{bmatrix}\begin{bmatrix} w \\ u\end{bmatrix}, \tag{5.58}$$

where

$$P = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22}\end{bmatrix} = \begin{bmatrix} G_{12}G_{22}^{-1} & G_{11} - G_{12}G_{22}^{-1}G_{21} \\ G_{22}^{-1} & -G_{22}^{-1}G_{21}\end{bmatrix}. \tag{5.59}$$
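The reverse direction (5.59) can be spot-checked the same way: starting from a random right CSD matrix G with invertible G22, the recovered P satisfies LFTl(P, K) = CSDr(G, K):

```python
import numpy as np

rng = np.random.default_rng(3)
m = 2
G11, G12, G21, G22 = (rng.standard_normal((m, m)) for _ in range(4))
K = rng.standard_normal((m, m))
I = np.eye(m)

T_csd = (G11 @ K + G12) @ np.linalg.inv(G21 @ K + G22)

G22i = np.linalg.inv(G22)                                  # (5.59)
P11, P12 = G12 @ G22i, G11 - G12 @ G22i @ G21
P21, P22 = G22i, -G22i @ G21
T_lft = P11 + P12 @ K @ np.linalg.inv(I - P22 @ K) @ P21
```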
[Fig. 5.13 Transformation of a left CSD matrix G̃ into its LFT matrix $\begin{bmatrix} -\tilde G_{11}^{-1}\tilde G_{12} & \tilde G_{11}^{-1} \\ \tilde G_{22} - \tilde G_{21}\tilde G_{11}^{-1}\tilde G_{12} & \tilde G_{21}\tilde G_{11}^{-1}\end{bmatrix}$, terminated by K.]

Similarly, one can also derive the transformation from a left CSD matrix G̃ of $\begin{bmatrix} z \\ w\end{bmatrix} \mapsto \begin{bmatrix} u \\ y\end{bmatrix}$ to its LFT matrix P of $\begin{bmatrix} w \\ u\end{bmatrix} \mapsto \begin{bmatrix} z \\ y\end{bmatrix}$, as shown in Fig. 5.13, where G̃11 is invertible.
From

$$\begin{bmatrix} u \\ y\end{bmatrix} = \begin{bmatrix} \tilde G_{11} & \tilde G_{12} \\ \tilde G_{21} & \tilde G_{22}\end{bmatrix}\begin{bmatrix} z \\ w\end{bmatrix}, \tag{5.61}$$

one has

$$\begin{bmatrix} z \\ y\end{bmatrix} = \begin{bmatrix} I & 0 \\ \tilde G_{21} & \tilde G_{22}\end{bmatrix}\begin{bmatrix} z \\ w\end{bmatrix} = \begin{bmatrix} I & 0 \\ \tilde G_{21} & \tilde G_{22}\end{bmatrix}\begin{bmatrix} 0 & I \\ \tilde G_{11} & \tilde G_{12}\end{bmatrix}^{-1}\begin{bmatrix} w \\ u\end{bmatrix} = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22}\end{bmatrix}\begin{bmatrix} w \\ u\end{bmatrix}, \tag{5.62}$$

where

$$\begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22}\end{bmatrix} = \begin{bmatrix} -\tilde G_{11}^{-1}\tilde G_{12} & \tilde G_{11}^{-1} \\ \tilde G_{22} - \tilde G_{21}\tilde G_{11}^{-1}\tilde G_{12} & \tilde G_{21}\tilde G_{11}^{-1}\end{bmatrix}. \tag{5.63}$$

One can then verify that

$$\mathrm{LFT}_l(P, K) = -\tilde G_{11}^{-1}\tilde G_{12} + \tilde G_{11}^{-1}K\left(I - \tilde G_{21}\tilde G_{11}^{-1}K\right)^{-1}\left(\tilde G_{22} - \tilde G_{21}\tilde G_{11}^{-1}\tilde G_{12}\right) = -\left(\tilde G_{11} - K\tilde G_{21}\right)^{-1}\left(\tilde G_{12} - K\tilde G_{22}\right) = \mathrm{CSD}_l\left(\tilde G, K\right). \tag{5.64}$$
[Fig. 5.15 Right CSD to LFT: (a) right CSD representation $G = \begin{bmatrix} \frac{RCs+1}{RCs} & \frac{1}{Cs} \\ \frac{1}{R} & 1\end{bmatrix}$ terminated by RL; (b) LFT representation $P = \begin{bmatrix} \frac{1}{Cs} & 1 \\ 1 & -\frac{1}{R}\end{bmatrix}$ terminated by RL.]
Both LFT and CSD representations are depicted in Fig. 5.15. Alternatively, the right CSD matrix G can also be found directly from the transmission parameter matrices described in Chap. 3.
[Circuit diagram: source with series resistance Rs, currents I1 and I2, elements R and C, and the load, together with the CSD matrix G.]
Furthermore, one can find the LFT matrix P using (5.59) with data from (5.66) as

$$P = \begin{bmatrix} G_{12}G_{22}^{-1} & G_{11} - G_{12}G_{22}^{-1}G_{21} \\ G_{22}^{-1} & -G_{22}^{-1}G_{21}\end{bmatrix} = \begin{bmatrix} \dfrac{1}{Cs} & 1 \\ 1 & -\dfrac{1}{R}\end{bmatrix}. \tag{5.68}$$
$$Z = \frac{V_1}{I_1} = \frac{1}{Cs} + \frac{RR_L}{R + R_L}. \tag{5.69}$$

One can obtain the same result via the CSD or LFT approach, respectively, as

$$Z = \frac{V_1}{I_1} = \mathrm{CSD}_r(G, R_L) = \frac{1}{Cs} + \frac{RR_L}{R + R_L}, \tag{5.70}$$

$$Z = \frac{V_1}{I_1} = \mathrm{LFT}_l(P, R_L) = \frac{1}{Cs} + R_L\left(1 + \frac{R_L}{R}\right)^{-1} = \frac{1}{Cs} + \frac{RR_L}{R + R_L}. \tag{5.71}$$

Clearly, (5.69), (5.70), and (5.71) are the same. These have shown that

$$Z = \frac{V_1}{I_1} = \mathrm{LFT}_l(P, R_L) = \mathrm{CSD}_r(G, R_L). \tag{5.72}$$
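The agreement of (5.69)–(5.71) is easy to reproduce symbolically (a sketch using SymPy in place of the book's MATLAB symbolic code):

```python
import sympy as sp

s, R, RL, C = sp.symbols('s R R_L C', positive=True)

# right CSD data of Example 5.5 and the LFT data of (5.68)
G11, G12, G21, G22 = (R*C*s + 1)/(R*C*s), 1/(C*s), 1/R, sp.Integer(1)
P11, P12, P21, P22 = 1/(C*s), sp.Integer(1), sp.Integer(1), -1/R

Z_csd = sp.simplify((G11*RL + G12) / (G21*RL + G22))   # (5.70)
Z_lft = sp.simplify(P11 + P12*RL/(1 - P22*RL)*P21)     # (5.71)
Z_ref = 1/(C*s) + R*RL/(R + RL)                        # (5.69)
```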
Example 5.6 For the RC circuit depicted in Fig. 5.16, determine a left CSD matrix G̃ of $\begin{bmatrix} V_2 \\ I_2\end{bmatrix} \mapsto \begin{bmatrix} V_1 \\ I_1\end{bmatrix}$ and its corresponding LFT matrix P of $\begin{bmatrix} I_2 \\ V_1\end{bmatrix} \mapsto \begin{bmatrix} V_2 \\ I_1\end{bmatrix}$.
As in Example 5.5, the left CSD matrix G̃ can also be found from the transmission parameter matrices as

$$\begin{bmatrix} V_1 \\ I_1\end{bmatrix} = \begin{bmatrix} 1 & 0 \\ -\frac{1}{R} & 1\end{bmatrix}\begin{bmatrix} 1 & -\frac{1}{Cs} \\ 0 & 1\end{bmatrix}\begin{bmatrix} V_2 \\ I_2\end{bmatrix} = \begin{bmatrix} 1 & -\frac{1}{Cs} \\ -\frac{1}{R} & \frac{RCs+1}{RCs}\end{bmatrix}\begin{bmatrix} V_2 \\ I_2\end{bmatrix}. \tag{5.75}$$
Similarly, the LFT matrix P of $\begin{bmatrix} I_2 \\ V_1\end{bmatrix} \mapsto \begin{bmatrix} V_2 \\ I_1\end{bmatrix}$ can be obtained from (5.75) such that

$$P = \begin{bmatrix} -\tilde G_{11}^{-1}\tilde G_{12} & \tilde G_{11}^{-1} \\ \tilde G_{22} - \tilde G_{21}\tilde G_{11}^{-1}\tilde G_{12} & \tilde G_{21}\tilde G_{11}^{-1}\end{bmatrix} = \begin{bmatrix} \dfrac{1}{Cs} & 1 \\ 1 & -\dfrac{1}{R}\end{bmatrix}. \tag{5.76}$$

$$Z = R \parallel R_s + \frac{1}{Cs} = \frac{RR_s}{R + R_s} + \frac{1}{Cs}. \tag{5.77}$$

One can also obtain the same result via the CSD approach or the LFT formula as

$$Z = \frac{V_2}{I_2} = \mathrm{CSD}_l\left(\tilde G, R_s\right) = \frac{1}{Cs} + \frac{RR_s}{R + R_s}, \tag{5.78}$$

and
[Fig. 5.17 Left CSD to LFT: (a) left CSD representation $\tilde G = \begin{bmatrix} 1 & -\frac{1}{Cs} \\ -\frac{1}{R} & \frac{RCs+1}{RCs}\end{bmatrix}$ terminated by Rs; (b) LFT representation $P = \begin{bmatrix} \frac{1}{Cs} & 1 \\ 1 & -\frac{1}{R}\end{bmatrix}$ terminated by Rs.]

$$\mathrm{LFT}_l(P, R_s) = \frac{1}{Cs} + R_s\left(I + \frac{R_s}{R}\right)^{-1} = \frac{1}{Cs} + \frac{RR_s}{R + R_s}. \tag{5.79}$$

Hence, $\mathrm{LFT}_l(P, R_s) = \mathrm{CSD}_l\left(\tilde G, R_s\right)$.
5.6 Applications of CSDs in State-Space Realizations

Consider a system with the state-space equations $\dot x = Ax + Bu$ and $y = Cx + Du$; that is, $y = P(s)u$ with $P(s) = D + C(sI - A)^{-1}B$, and

$$P(s) = \mathrm{LFT}_u\!\left(\begin{bmatrix} A & B \\ C & D\end{bmatrix}, \frac{I}{s}\right). \tag{5.81}$$
Note that it has been assumed that all initial states are zero. The state-space
realization of P(s) can be represented by using a right CSD matrix associated with
a left CSD matrix. From (5.80), one has
(5.82)
[Fig. 5.18 State-space realization of P(s) via CSDs: the right CSD [[C, D],[0, I]] cascaded with the left CSD [[I, 0],[A, B]] terminated by the integrator I/s; Φ(s) denotes the transfer function u ↦ x.]
This shows that, using a right CSD associated with a left CSD,

$$\mathrm{CSD}_r\!\left(\begin{bmatrix} C & D \\ 0 & I\end{bmatrix},\ \mathrm{CSD}_l\!\left(\begin{bmatrix} I & 0 \\ A & B\end{bmatrix}, \frac{I}{s}\right)\right) = \mathrm{LFT}_u\!\left(\begin{bmatrix} A & B \\ C & D\end{bmatrix}, \frac{I}{s}\right).$$
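The identity above can be verified numerically at a sample frequency: the inner left CSD terminated by I/s produces Φ(s) = (sI − A)⁻¹B, and the outer right CSD then returns D + CΦ(s) (a sketch; the names are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, p = 3, 2, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
D = rng.standard_normal((p, m))
s = 2.0 + 1.5j
I = np.eye(n)

# CSD_l([[I, 0], [A, B]], I/s) = (I - A/s)^{-1} (B/s) = (sI - A)^{-1} B
Phi = np.linalg.inv(I - A / s) @ (B / s)
# CSD_r([[C, D], [0, I]], Phi) = C Phi + D
P_s = C @ Phi + D
P_direct = D + C @ np.linalg.inv(s * I - A) @ B
```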
One can now consider the state feedback case. Let u = Fx + Wu′, where W is nonsingular. Then,
(5.86)
[Fig. 5.19 State feedback u = Fx + Wu′ via CSD connections: [[C, D],[0, I]]·[[I, 0],[F, W]] on top and [[I, 0],[A, B]]·[[I, 0],[F, W]] terminated by I/s, which reduces to the left CSD [[I, 0],[A + BF, BW]] terminated by I/s.]

and
(5.87)

[Fig. 5.20 Block diagram of the closed-loop realization: u′ → W → u → B → integrator I/s → x → C, with direct feedthrough D to y.]
(5.89)
or
(5.90)
From Fig. 5.23, one can obtain Fig. 5.24. Similarly, Property 5.4 gives

$$\Omega(s) = \mathrm{CSD}_l\!\left(\begin{bmatrix} \tilde W & 0 \\ H & I\end{bmatrix},\ \mathrm{CSD}_r\!\left(\begin{bmatrix} \tilde W & 0 \\ H & I\end{bmatrix}, \Omega(s)\right)\right). \tag{5.92}$$
[Fig. 5.21 Output injection via CSDs: the left CSD [[−I, D],[0, B]] paired with Ω(s) given by [[−C, 0],[−A, I]] terminated by I/s.]

[Figs. 5.22–5.23 Premultiplying by [[W, 0],[H, I]] gives [[W, 0],[H, I]]·[[−I, D],[0, B]] and [[W, 0],[H, I]]·[[−C, 0],[−A, I]], which reduces to [[−WC, 0],[−(A + HC), I]] terminated by I/s.]
[Fig. 5.24 Block diagram: u → B → integrator 1/s → x → C, with D and W̃ forming y.]

[Figs. 5.25–5.26 Inverse realization via CSDs: the top cascade [[0, I],[I, 0]]·[[C, D],[0, I]]·[[I, 0],[−D⁻¹C, D⁻¹]] and the bottom cascade [[I, 0],[A, B]]·[[I, 0],[−D⁻¹C, D⁻¹]] terminated by I/s.]
[Fig. 5.27 panels: (a) the cascade reduces to [[−D⁻¹C, D⁻¹],[0, I]] on top and [[I, 0],[A − BD⁻¹C, BD⁻¹]] terminated by I/s; (b) the LFT realization [[A − BD⁻¹C, BD⁻¹],[−D⁻¹C, D⁻¹]].]
Fig. 5.27 CSD representations: (a) Left CSD representation and (b) LFT representation
Herein, $\begin{bmatrix} I & 0 \\ F & W\end{bmatrix}$ in Fig. 5.19 and $\begin{bmatrix} \tilde W & 0 \\ H & I\end{bmatrix}$ in Fig. 5.23 play important roles in the state-space representations of coprime factorizations, for which F and H can be chosen such that A + BF and A + HC are Hurwitz. Coprime factorization will be discussed in the next chapter. As shown in Figs. 5.19 and 5.23, the coprime factorization in state-space form can be easily generated by CSD connections.
[Fig. 5.28 Block diagram of the inverse system: y → D⁻¹ → u, with B, the integrator I/s, x, C, and the feedback −D⁻¹C.]
(5.94)
One can verify that the state-space representation from u to y is given by Fig. 5.28.
$$x = T x' \quad\text{and}\quad \dot x = T\dot x'. \tag{5.95}$$

In further explication, one can use "s," the Laplace transform symbol, to represent the differentiation operation, i.e., $x = \frac{I}{s}\,T\dot x'$, with a slight abuse of notation. Figure 5.18 shows the LFT and CSD representations, respectively, in terms of state-space realization matrices. By inserting some CSDs, Fig. 5.18 is equivalent to Fig. 5.29. It can be found from Fig. 5.30 that the transfer function u ↦ x′ is given by

$$\Psi(s) = \mathrm{CSD}_l\!\left(\begin{bmatrix} I & 0 \\ T^{-1}AT & T^{-1}B\end{bmatrix}, \frac{I}{s}\right) = \left(sI - T^{-1}AT\right)^{-1}T^{-1}B. \tag{5.96}$$
[Figs. 5.29–5.30 Coordinate transformation x = Tx′ inserted via the identity [[T, 0],[0, T]]·[[T⁻¹, 0],[0, T⁻¹]]: the top CSD becomes [[CT, D],[0, I]] and the bottom [[I, 0],[T⁻¹AT, T⁻¹B]] terminated by I/s, i.e., the LFT realization [[T⁻¹AT, T⁻¹B],[CT, D]]; Ψ(s) denotes u ↦ x′.]

$$y = \mathrm{CSD}_r\!\left(\begin{bmatrix} CT & D \\ 0 & I\end{bmatrix}, \Psi(s)\right) u = \left(D + CT\,\Psi(s)\right) u = \left(D + CT\left(sI - T^{-1}AT\right)^{-1}T^{-1}B\right) u = \left(D + C(sI - A)^{-1}B\right) u. \tag{5.97}$$
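A numerical check of (5.96)–(5.97): the realization transformed by any invertible T has the same transfer function (a sketch at one sample point s):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, p = 3, 2, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
D = rng.standard_normal((p, m))
T = rng.standard_normal((n, n))        # invertible almost surely
Ti = np.linalg.inv(T)
s = 1.0 + 2.0j
I = np.eye(n)

orig = D + C @ np.linalg.inv(s * I - A) @ B
Psi = np.linalg.inv(s * I - Ti @ A @ T) @ (Ti @ B)   # (5.96)
trans = D + (C @ T) @ Psi                            # (5.97)
```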
(5.98)
Here the solid lines in a matrix indicate that the matrix is a compact expression for a transfer function matrix, while the dotted lines indicate the usual matrix partition, for the sake of clarity. From (5.98), one can derive
5.8 State-Space Formulae of CSD Matrix Transformed from LFT Matrix
(5.99)
(5.102)
Note that if P21 is strictly proper, then the state-space representation of G does not
exist.
Dually, from
(5.103)
[Fig. 5.31 CSDr ∘ CSDl and CSDl ∘ CSDr: (a) right CSD G1 = [[P12, P11],[0, I]] associated with left CSD G̃2 = [[I, 0],[P22, P21]], terminated by K; (b) left CSD G̃1 = [[I, −P11],[0, −P21]] associated with right CSD G2 = [[P12, 0],[P22, −I]], terminated by K.]
one has

$$\begin{bmatrix} \dot x \\ u \\ y\end{bmatrix} = \begin{bmatrix} A & B_1 & B_2 \\ 0 & 0 & I \\ C_2 & D_{21} & D_{22}\end{bmatrix}\begin{bmatrix} I & 0 & 0 \\ C_1 & D_{11} & D_{12} \\ 0 & I & 0\end{bmatrix}^{-1}\begin{bmatrix} x \\ z \\ w\end{bmatrix}. \tag{5.104}$$
(5.106)
Note that if P12 is strictly proper, then the state-space representation of G̃ does not exist.
In Sect. 5.4, it was shown that a general LFT system can be transformed into a right CSD G1(s) associated with a left CSD G̃2(s), or a left CSD G̃1(s) associated with a right CSD G2(s). For the case of Fig. 5.31a, where a right CSD G1(s) is associated with a left CSD G̃2(s), the corresponding state-space realization can be obtained, assuming P(s) is in the state-space form of (5.98), as
5.9 State-Space Formulae of LFT Matrix Transformed from CSD Matrix
(5.107)
For the case of Fig. 5.31b, where a left CSD G̃1(s) is associated with a right CSD G2(s), the corresponding state-space realization can be obtained as
(5.108)
This chapter introduced the definitions of the CSD and its manipulations. These fundamentals are essential for the determination of a robust controller, which will be presented in the following chapters. The structures of CSDr ∘ CSDl and CSDl ∘ CSDr, which circumvent the difficulty of inversion, were first investigated by the authors of this book and their coresearchers. These unique structures offer a unified approach to the robust controller synthesis problem, which will be discussed in detail later in the book.
In the following, the state-space formulae for the transformation between CSD and
LFT will be discussed. Let

$$\begin{bmatrix} z \\ w\end{bmatrix} = \begin{bmatrix} G_{11}(s) & G_{12}(s) \\ G_{21}(s) & G_{22}(s)\end{bmatrix}\begin{bmatrix} u \\ y\end{bmatrix} \quad\text{and}\quad u = Ky, \tag{5.109}$$
where
(5.110)
Assume that D22 is invertible. The problem here is to find a state-space realization
of P(s) such that
(5.112)
(5.113)
Therefore,
(5.115)
Note that if G22 is strictly proper, then the state-space representation of P does not
exist.
5.9 State-Space Formulae of LFT Matrix Transformed from CSD Matrix 133
where
(5.117)
If D11 is invertible, the problem is to find a state-space realization of P(s) such that
$$z = \mathrm{CSD}_l\left(\tilde G, K\right) w = \mathrm{LFT}_l(P, K)\, w. \tag{5.118}$$
(5.119)
and hence,
(5.120)
This gives
$$\begin{bmatrix} \dot x \\ z \\ y\end{bmatrix} = \begin{bmatrix} A & B_1 & B_2 \\ 0 & I & 0 \\ C_2 & D_{21} & D_{22}\end{bmatrix}\begin{bmatrix} I & 0 & 0 \\ 0 & 0 & I \\ C_1 & D_{11} & D_{12}\end{bmatrix}^{-1}\begin{bmatrix} x \\ w \\ u\end{bmatrix}. \tag{5.121}$$
and therefore,
(5.123)
Note that if G̃11 is strictly proper, then the state-space representation of P does not exist.
5.10 Star Connection

In this section, the relations between LFT (SCC) and CSD will be further investigated. The CSD originates from two-port networks and is advantageous for cascading multiple systems. Unlike the CSDs, the interconnection of two LFT matrices, namely the star product [3], looks much more complicated in its representation. Figure 4.29 in Chap. 4 shows the star product of two LFT matrices, where

$$z = \mathrm{LFT}_l\!\left(\begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22}\end{bmatrix},\ \mathrm{LFT}_l\!\left(\begin{bmatrix} L_{11} & L_{12} \\ L_{21} & L_{22}\end{bmatrix}, \Phi\right)\right) w. \tag{5.124}$$
Next, how to transform the star product into its equivalent CSDs will be expounded. Figure 4.29 can be converted into CSDs with the termination Φ connected at the right port, as shown in Fig. 5.32. When L12 is invertible, one can insert $\begin{bmatrix} L_{12} & 0 \\ L_{22} & -I\end{bmatrix}^{-1}\begin{bmatrix} L_{12} & 0 \\ L_{22} & -I\end{bmatrix}$ in the configuration, as in Fig. 5.33, which does not change the input/output relation. In Fig. 5.33, the LFT matrix P is first expressed as a right CSD followed by a left CSD, which in turn is connected to two left CSDs and another two right CSDs. By rearranging the middle part of the CSDs, one can obtain
[Fig. 5.32 Star product converted to CSDs with termination Φ at the right port: right CSD [[P12, P11],[0, I]], left CSDs [[I, −L11],[0, −L21]] and [[I, 0],[P22, P21]], terminated by [[L12, 0],[L22, −I]] and Φ.]

[Fig. 5.33 Insertion of [[L12, 0],[L22, −I]]⁻¹[[L12, 0],[L22, −I]] into the configuration of Fig. 5.32, with the middle CSDs collected into Π̃.]

By collecting the CSDs terminated by Φ, the controller K can be expressed as

$$K = \mathrm{CSD}_l\left(\tilde\Pi, \Phi\right), \tag{5.125}$$
[Fig. 5.34 Star product of the LFT matrices P and L with termination Φ, reformulated as the right CSD [[P12, P11],[0, I]] associated with the left CSD [[I, 0],[P22, P21]], terminated by K = CSDl(Π̃, Φ) with Π̃ = [[Π̃11, Π̃12],[Π̃21, Π̃22]].]
where

$$\tilde\Pi = \begin{bmatrix} \tilde\Pi_{11} & \tilde\Pi_{12} \\ \tilde\Pi_{21} & \tilde\Pi_{22}\end{bmatrix} = \begin{bmatrix} L_{12}^{-1} & -L_{12}^{-1}L_{11} \\ L_{22}L_{12}^{-1} & L_{21} - L_{22}L_{12}^{-1}L_{11}\end{bmatrix}. \tag{5.126}$$
As can be seen in Fig. 5.34, the star product of LFTs can be formulated into CSDr ∘ CSDl.
For the dual case, if L21 is invertible, one can convert Fig. 4.29 into CSDs with the termination Φ connected at the left port, as shown in Fig. 5.35. Then, by rearranging the middle CSDs, one can obtain Fig. 5.36. Similarly, the controller K can be rewritten as
(5.127)
where

$$\Pi = \begin{bmatrix} \Pi_{11} & \Pi_{12} \\ \Pi_{21} & \Pi_{22}\end{bmatrix} = \begin{bmatrix} L_{12} - L_{11}L_{21}^{-1}L_{22} & L_{11}L_{21}^{-1} \\ -L_{21}^{-1}L_{22} & L_{21}^{-1}\end{bmatrix}. \tag{5.128}$$

As can be seen in Fig. 5.36, the star product of LFTs can be formulated into CSDl ∘ CSDr. Evidently, Figs. 5.34 and 5.36 are equivalent.
[Fig. 5.35 Star product converted to CSDs with termination Φ at the left port: left CSD [[I, −P11],[0, −P21]], right CSDs [[P12, 0],[P22, −I]] and [[L12, L11],[0, I]]⁻¹, with [[I, 0],[L22, L21]]⁻¹[[I, 0],[L22, L21]] inserted before Φ.]

[Fig. 5.36 Star product reformulated as the left CSD [[I, −P11],[0, −P21]] associated with the right CSD [[P12, 0],[P22, −I]], terminated by K built from Π = [[Π11, Π12],[Π21, Π22]] and Φ.]
5.11 J-Lossless and Dual J-Lossless Systems

represent the energy balance of the two-port network between the left and right ports. Consider the system as illustrated in Fig. 5.1; this means:

Definition 5.1 Let $J_{n,k} = \begin{bmatrix} I_n & 0 \\ 0 & -I_k\end{bmatrix}$. An (n1 + n2) × (k1 + k2) right CSD matrix G(s) is called J-unitary if $G^\sim J_{n_1,n_2}G = J_{k_1,k_2}$, where n2 = k2. A J-unitary G is then called J-lossless if $G^*(s)\,J_{n_1,n_2}\,G(s) \le J_{k_1,k_2}$ for all Re(s) ≥ 0. An (n1 + n2) × (k1 + k2) left CSD matrix G̃(s) is called dual J-unitary if $\tilde G J_{k_1,k_2}\tilde G^\sim = J_{n_1,n_2}$, where n1 = k1. A dual J-unitary G̃ is then called dual J-lossless if $\tilde G(s)\,J_{k_1,k_2}\,\tilde G^*(s) \le J_{n_1,n_2}$ for all Re(s) ≥ 0.
Example 5.7 Verify that $G(s) = \begin{bmatrix} \dfrac{s-1}{s+1} & 0 \\ 0 & \dfrac{s+1}{s-1}\end{bmatrix}$ is J-unitary and J-lossless.
From

$$G^\sim(s)\,J\,G(s) = \begin{bmatrix} \dfrac{s+1}{s-1} & 0 \\ 0 & \dfrac{s-1}{s+1}\end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & -1\end{bmatrix}\begin{bmatrix} \dfrac{s-1}{s+1} & 0 \\ 0 & \dfrac{s+1}{s-1}\end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & -1\end{bmatrix} = J, \tag{5.130}$$

it follows that G(s) is J-unitary.
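On the imaginary axis, G~(jω) = G(jω)^H, so the J-unitarity of Example 5.7 can also be confirmed numerically at sample frequencies:

```python
import numpy as np

def G_of(s):
    # the diagonal CSD matrix of Example 5.7 evaluated at a complex point s
    return np.array([[(s - 1) / (s + 1), 0.0],
                     [0.0, (s + 1) / (s - 1)]])

J = np.diag([1.0, -1.0])
errs = []
for w in (0.3, 1.0, 7.5):
    Gs = G_of(1j * w)
    # G~(jw) = G(jw)^H, so J-unitarity reads G^H J G = J on the jw-axis
    errs.append(np.abs(Gs.conj().T @ J @ Gs - J).max())
```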
As mentioned in Sect. 4.6, the J-lossless and dual J-lossless properties are
counterparts of inner and co-inner. Hence, the relationship is maintained during
the transformation between LFT and CSD. These properties are discussed in the
following.
Lemma 5.2 A right CSD matrix G(s) is J-lossless if and only if its corresponding LFT matrix P(s), which makes CSDr(G, K) = LFTl(P, K), is inner.

Proof Let $G(s) = \begin{bmatrix} G_{11}(s) & G_{12}(s) \\ G_{21}(s) & G_{22}(s)\end{bmatrix}$ be a right CSD matrix of $\begin{bmatrix} u \\ y\end{bmatrix} \mapsto \begin{bmatrix} z \\ w\end{bmatrix}$ with dim(y) = dim(w). By (5.58), one can obtain

$$P = \begin{bmatrix} G_{11} & G_{12} \\ 0 & I\end{bmatrix}\begin{bmatrix} G_{21} & G_{22} \\ I & 0\end{bmatrix}^{-1} =: G_1 G_2^{-1}, \tag{5.131}$$

which makes CSDr(G, K) = LFTl(P, K). Then, one has

$$G^\sim J_{z,w} G = \begin{bmatrix} G_{11}^\sim & G_{21}^\sim \\ G_{12}^\sim & G_{22}^\sim\end{bmatrix}\begin{bmatrix} I & 0 \\ 0 & -I\end{bmatrix}\begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22}\end{bmatrix} = \begin{bmatrix} G_{11}^\sim G_{11} - G_{21}^\sim G_{21} & G_{11}^\sim G_{12} - G_{21}^\sim G_{22} \\ G_{12}^\sim G_{11} - G_{22}^\sim G_{21} & G_{12}^\sim G_{12} - G_{22}^\sim G_{22}\end{bmatrix}.$$

Since $G_1^\sim G_1 - G_2^\sim G_2 = G^\sim J_{z,w}G - J_{u,y}$, J-unitarity gives $G_1^\sim G_1 = G_2^\sim G_2$, and hence

$$P^\sim P = \left(G_1 G_2^{-1}\right)^\sim G_1 G_2^{-1} = G_2^{-\sim}G_1^\sim G_1 G_2^{-1} = G_2^{-\sim}G_2^\sim G_2 G_2^{-1} = I. \tag{5.132}$$

This concludes that P~P = I if and only if G~Jz,wG = Ju,y; that is, G(s) is J-unitary if and only if P(s) is all-pass. Furthermore, when G(s) is J-lossless, one has $G_2^*(s)G_2(s) \ge G_1^*(s)G_1(s)$ for all Re(s) ≥ 0, which implies that

$$P^*(s)P(s) = \left(G_1(s)G_2^{-1}(s)\right)^* G_1(s)G_2^{-1}(s) = G_2^{-*}(s)G_1^*(s)G_1(s)G_2^{-1}(s) \le I, \quad \forall\,\mathrm{Re}(s)\ge 0. \tag{5.133}$$

Hence, it concludes that G(s) is J-lossless if and only if P(s) is inner (or lossless).
Now, with $P = \tilde G_1^{-1}\tilde G_2$, $\tilde G_1 = \begin{bmatrix} \tilde G_{11} & 0 \\ \tilde G_{21} & I\end{bmatrix}$, and $\tilde G_2 = \begin{bmatrix} \tilde G_{12} & I \\ \tilde G_{22} & 0\end{bmatrix}$, from

$$\tilde G_2\tilde G_2^\sim - \tilde G_1\tilde G_1^\sim = \begin{bmatrix} \tilde G_{12} & I \\ \tilde G_{22} & 0\end{bmatrix}\begin{bmatrix} \tilde G_{12}^\sim & \tilde G_{22}^\sim \\ I & 0\end{bmatrix} - \begin{bmatrix} \tilde G_{11} & 0 \\ \tilde G_{21} & I\end{bmatrix}\begin{bmatrix} \tilde G_{11}^\sim & \tilde G_{21}^\sim \\ 0 & I\end{bmatrix}$$

$$= \begin{bmatrix} \tilde G_{12}\tilde G_{12}^\sim - \tilde G_{11}\tilde G_{11}^\sim + I & \tilde G_{12}\tilde G_{22}^\sim - \tilde G_{11}\tilde G_{21}^\sim \\ \tilde G_{22}\tilde G_{12}^\sim - \tilde G_{21}\tilde G_{11}^\sim & \tilde G_{22}\tilde G_{22}^\sim - \tilde G_{21}\tilde G_{21}^\sim - I\end{bmatrix} = J_{u,y} - \tilde G J_{z,w}\tilde G^\sim, \tag{5.135}$$

one can obtain that $J_{u,y} - \tilde G J_{z,w}\tilde G^\sim = 0$ if and only if $PP^\sim = I$. Furthermore, $\tilde G(s)J_{z,w}\tilde G^*(s) \le J_{u,y}$ for all Re(s) ≥ 0 implies that

$$P(s)P^*(s) = \tilde G_1^{-1}(s)\tilde G_2(s)\left(\tilde G_1^{-1}(s)\tilde G_2(s)\right)^* = \tilde G_1^{-1}(s)\tilde G_2(s)\tilde G_2^*(s)\tilde G_1^{-*}(s) \le I, \quad \forall\,\mathrm{Re}(s)\ge 0. \tag{5.136}$$

Hence, it concludes that G̃(s) is dual J-lossless if and only if P(s) is co-inner.
As illustrated in Fig. 5.1, the CSD matrix (G or G̃) has relations to its terminator K, especially when the CSD matrix is J-lossless (or dual J-lossless). These properties are introduced in the following lemmas.
Lemma 5.4 If a right CSD matrix G(s) is J-lossless, then CSDr(G, Φ) ∈ BH∞ for all Φ ∈ BH∞.

Proof Let u = Φy; then

$$\|u(j\omega)\|_2 = \|\Phi y(j\omega)\|_2 \le \|\Phi\|_\infty\,\|y(j\omega)\|_2 < \|y(j\omega)\|_2, \tag{5.137}$$

and because G is J-lossless, one has $\|z(j\omega)\|_2^2 - \|w(j\omega)\|_2^2 < 0$. Hence, CSDr(G, Φ) ∈ BH∞ for all Φ ∈ BH∞.

Example 5.8 Given the J-lossless $G(s) = \begin{bmatrix} \dfrac{s-1}{s+1} & 0 \\ 0 & \dfrac{s+1}{s-1}\end{bmatrix}$ and $\Phi = \dfrac{1}{s+2} \in BH_\infty$, verify that CSDr(G, Φ) ∈ BH∞.
From (5.5), one has

$$\mathrm{CSD}_r(G, \Phi) = \frac{(s-1)^2}{(s+1)^2(s+2)} \in BH_\infty.$$
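The claim is easy to confirm on a frequency grid: |CSDr(G, Φ)(jω)| = 1/√(ω² + 4), so its H∞ norm is 1/2 < 1 (a numerical sketch):

```python
import numpy as np

w = np.logspace(-3, 3, 2001)
s = 1j * w
T = (s - 1)**2 / ((s + 1)**2 * (s + 2))   # CSD_r(G, Phi) from Example 5.8
gain = np.abs(T)                          # equals 1/sqrt(w**2 + 4)
```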
Lemma 5.5 Dually, if a left CSD system G̃(s) is dual J-lossless, then $\mathrm{CSD}_l\left(\tilde G, \Phi\right) \in BH_\infty$ for all Φ ∈ BH∞.
Until now, we have introduced inner and co-inner LFT systems in Chap. 4 and J-lossless and dual J-lossless CSD systems in this chapter. In the next chapter, the coprime factorization of LFT and CSD systems will be discussed. These factorization approaches offer a straightforward way to describe a control system using RH∞ functions.
Exercises

1. Let P be the SCC plant shown below, where $P(s) = \begin{bmatrix} \dfrac{1}{s+1} & \dfrac{1}{s(s+1)} \\ 1 & \dfrac{1}{s}\end{bmatrix}$ and K = 3.

[Diagrams: the SCC plant P = [[P11, P12],[P21, P22]] terminated by K, together with its CSD factors P1* = [[P12, P11],[0, I]] and P2* = [[I, 0],[P22, P21]] terminated by K.]
2. [Block diagram: controller K and plant G1 in a feedback loop with weights W1 and W2, plant G2, disturbance d, and signals y, u, v.]
3. Find the transfer function V2/V1 of the network circuit given below, using the LFT and CSD approaches, respectively, where R1 = 9, R2 = 1, and C = 1/9.

[Circuit: source V1, series branch R1 in parallel with C, shunt R2, output V2.]
4. Use the LFT method to obtain the transfer function w ↦ z for the block diagram shown below, and then derive its left CSD form.

[Block diagram: forward path G1, G2, G3 from w to z with negative feedback paths H1 and H2.]
5. Use the techniques presented in this chapter to find the transfer functions of the following:
(a) LFTl(P, K)
(b) CSDr(G, K)
(c) CSDl(G̃, K)

[Diagram: the SCC plant P = [[P11, P12],[P21, P22]] with inputs w, u and outputs z, y.]
6 Coprime Factorizations

One can start with the simplest case of real numbers. Consider a real rational number r = n/d, where d and n are two integers. If the greatest common divisor (g.c.d.) of the pair of integers (d, n) is 1, then d and n are called coprime, and r = n/d is called a coprime factorization of r over the integers. It is well known that if a pair of integers (d, n) is coprime, there exists a row vector of two integers $\begin{bmatrix} \tilde x & \tilde y\end{bmatrix}$ such that

$$\begin{bmatrix} \tilde x & \tilde y\end{bmatrix}\begin{bmatrix} d \\ n\end{bmatrix} = 1, \tag{6.1}$$

i.e., $\tilde x d + \tilde y n = 1$, where $\begin{bmatrix} \tilde x & \tilde y\end{bmatrix}$ is called the left inverse of $\begin{bmatrix} d \\ n\end{bmatrix}$. For the example r = 3/2, it can be found that

$$\begin{bmatrix} -1 & 1\end{bmatrix}\begin{bmatrix} 2 \\ 3\end{bmatrix} = 1.$$
Clearly, 3 and 2 are coprime, and 1.5 = 3/2 is a coprime factorization of the real rational number 1.5. In fact, the factorization 6/4 is also equal to 1.5. However, 6 and 4 are not coprime, since the g.c.d. of (4, 6) is evidently 2, and the vector $\begin{bmatrix} 4 \\ 6\end{bmatrix}$ does not have a left inverse with only integer elements. This reveals that 1.5 = 6/4 is not a coprime factorization.
Proceeding forward, consider coprimeness over the ring of polynomials with real coefficients. Two polynomials are called coprime if they do not share common zeros; equivalently, d(s) and n(s) are coprime if there exist polynomials x(s) and y(s) such that

$$x d + y n = 1. \tag{6.3}$$

Consider a simple, illustrative example F(s) = n(s)/d(s) with d(s) = s + 1 and n(s) = s + 2. Trivially, this is a coprime factorization over the polynomial ring, since these two polynomials do not share a common zero and $\begin{bmatrix} -1 & 1\end{bmatrix}\begin{bmatrix} s+1 \\ s+2\end{bmatrix} = 1$. For an alternative factorization F(s) = n(s)/d(s) with d(s) = s² + 4s + 3 and n(s) = s² + 5s + 6, apparently the pair (n(s), d(s)) is not coprime, since s = −3 is a common zero. It can also be seen that this factorization is reducible as
(6.4)
If two polynomials d(s) and n(s) are coprime over the polynomial ring, the rational function n(s)/d(s) is irreducible over the polynomial ring. Note that the coprime factorization is unique only up to a unit (i.e., a nonzero real number) in the polynomial ring. For instance, one can check that

$$F(s) = \frac{s+2}{s+1} = \frac{2s+4}{2s+2}, \tag{6.5}$$

$$\begin{bmatrix} s + \dfrac{3}{2} & -\left(s + \dfrac{1}{2}\right)\end{bmatrix}\begin{bmatrix} 2s+2 \\ 2s+4\end{bmatrix} = 1. \tag{6.6}$$
they are coprime over the stable transfer functions, since there exists a left inverse as follows:

$$\begin{bmatrix} \dfrac{-4s-1}{s+1} & \dfrac{5s+1}{s+1}\end{bmatrix}\begin{bmatrix} \dfrac{s-3}{s+1} \\[6pt] \dfrac{s-2}{s+1}\end{bmatrix} = 1. \tag{6.8}$$
Note that the coprime factorization is unique only up to a unit in RH∞ (i.e., an outer (bistable) rational function). In the above example, one can easily find another coprime factorization such as $M(s) = \frac{(s-3)(s+6)}{(s+1)(s+5)}$ and $N(s) = \frac{(s-2)(s+6)}{(s+1)(s+5)}$, which are coprime factors of T(s), since $T(s) = \frac{N(s)}{M(s)} = \frac{s-2}{s-3}$. It can be verified that M(s) and N(s) are coprime to each other, because there is a left inverse:

$$\begin{bmatrix} \dfrac{-(4s+1)(s+5)}{(s+1)(s+6)} & \dfrac{(5s+1)(s+5)}{(s+1)(s+6)}\end{bmatrix}\begin{bmatrix} \dfrac{(s-3)(s+6)}{(s+1)(s+5)} \\[8pt] \dfrac{(s-2)(s+6)}{(s+1)(s+5)}\end{bmatrix} = 1. \tag{6.9}$$
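Both left-inverse identities, with the signs as reconstructed in (6.8) and (6.9), can be verified symbolically:

```python
import sympy as sp

s = sp.symbols('s')
M, N = (s - 3)/(s + 1), (s - 2)/(s + 1)           # T = N/M = (s-2)/(s-3)
X, Y = (-4*s - 1)/(s + 1), (5*s + 1)/(s + 1)      # left-inverse row of (6.8)
e1 = sp.simplify(X*M + Y*N - 1)

u = (s + 6)/(s + 5)                               # an RH_inf unit
e2 = sp.simplify((X/u)*(M*u) + (Y/u)*(N*u) - 1)   # the scaled pair of (6.9)
```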
Until now, we have discussed the coprimeness of SISO systems over integers, polynomials, and stable rational functions, respectively. In the following, coprimeness will be extended to stable rational transfer function matrices, as needed for general MIMO cases in the development of control system analysis and synthesis.
6.2 Coprime Factorization over RH∞

Given a transfer function matrix T(s), a basic problem is to find four transfer function matrices N(s), M(s), M̃(s), and Ñ(s) in RH∞ such that $T(s) = N(s)M^{-1}(s) = \tilde M^{-1}(s)\tilde N(s)$.

$$X(s) = \begin{bmatrix} 1 & 0 \\ \dfrac{(s+3)^2}{(s+1)(s+2)} & \dfrac{s+3}{s+2}\end{bmatrix} \in RH_\infty \quad\text{and}\quad Y(s) = \begin{bmatrix} \dfrac{s+3}{s+2} & 0 \\ 0 & \dfrac{s+1}{s+2}\end{bmatrix} \in RH_\infty. \tag{6.14}$$
Definition 6.1 Two matrices M(s) and N(s) in RH∞ are right coprime over RH∞ if they have the same number of columns and if there exist matrices X̃(s) and Ỹ(s) in RH∞ such that

$$\tilde X(s)M(s) - \tilde Y(s)N(s) = I. \tag{6.15}$$

Similarly, two matrices M̃(s) and Ñ(s) in RH∞ are left coprime over RH∞ if they have the same number of rows and if there exist matrices X(s) and Y(s) in RH∞ such that

$$\tilde M(s)X(s) - \tilde N(s)Y(s) = I. \tag{6.16}$$

One has

$$\begin{bmatrix} u \\ y\end{bmatrix} = \begin{bmatrix} M \\ N\end{bmatrix} u'. \tag{6.18}$$
[Figures: block-diagram representations of the coprime factorization: the right factorization u = Mu′, y = Nu′ with T = NM⁻¹, and the dual representations built from N, M⁻¹ and the row [N −M] annihilating [u; y], terminated by u′.]
and , respectively, state-space realizations are derived, similar to Figs. 5.21
0 WQ
and 5.25 in Sect. 5.6.
matrix with (A, B) stabilizable and (C, A) detectable. A double coprime factorization of $T(s) = N(s)M^{-1}(s) = \tilde M^{-1}(s)\tilde N(s)$ in state-space form is given by
(6.22)
(6.23)
where W and W̃ are any square nonsingular constant matrices, and F and H are chosen such that both A + BF and A + HC are Hurwitz.
Proof Let

$$\begin{bmatrix} \dot x \\ y\end{bmatrix} = \begin{bmatrix} A & B \\ C & D\end{bmatrix}\begin{bmatrix} x \\ u\end{bmatrix}. \tag{6.24}$$

Then, with u = Fx + Wu′,

$$\begin{bmatrix} \dot x \\ y \\ u\end{bmatrix} = \begin{bmatrix} A + BF & BW \\ C + DF & DW \\ F & W\end{bmatrix}\begin{bmatrix} x \\ u'\end{bmatrix} \;\Rightarrow\; \begin{bmatrix} \dot x \\ u \\ y\end{bmatrix} = \begin{bmatrix} A + BF & BW \\ F & W \\ C + DF & DW\end{bmatrix}\begin{bmatrix} x \\ u'\end{bmatrix}. \tag{6.26}$$
(6.27)
where W̃ is any nonsingular matrix. Since (C, A) is detectable, the injection gain matrix H is chosen such that A + HC is Hurwitz. This structure is in fact a closed-loop state observer where u and y are the inputs. Then, by comparing (6.20) and (6.29), one can obtain
(6.30)
(6.31)
(6.32)
This gives
(6.33)
(6.34)
This concludes that $\begin{bmatrix} M \\ N\end{bmatrix}$ given by (6.22) is left invertible in RH∞, and then T = NM⁻¹ is a right coprime factorization.
Likewise, to ensure that the state-space realization (6.23) consists of coprime factors, one now needs to construct a right inverse $\begin{bmatrix} Y \\ X\end{bmatrix} \in RH_\infty$ in state-space form such that $\tilde M X - \tilde N Y = I$. From

$$\begin{bmatrix} \dot x \\ y_e\end{bmatrix} = \begin{bmatrix} I & H \\ 0 & \tilde W\end{bmatrix}^{-1}\begin{bmatrix} \dot{\hat x} \\ y'\end{bmatrix}$$

and based on (6.26), one has, from Fig. 6.5,
(6.35)
(6.36)
where

$$\begin{bmatrix} A+BF & 0 & BW & H\tilde W^{-1} \\ BF - HC & A+HC & BW & H\tilde W^{-1} \\ -\tilde W C & \tilde W C & 0 & I\end{bmatrix} = \begin{bmatrix} I & 0 & 0 & 0 \\ 0 & A+HC & B+HD & H \\ 0 & \tilde W C & \tilde W D & \tilde W\end{bmatrix}\begin{bmatrix} A+BF & 0 & BW & H\tilde W^{-1} \\ 0 & I & 0 & 0 \\ F & 0 & W & 0 \\ -(C+DF) & 0 & -DW & \tilde W^{-1}\end{bmatrix}. \tag{6.37}$$
[Fig. 6.6 Two-port coprime factorizations terminated by K: the right factorization with $M_p^{-1} = \begin{bmatrix} M_{11} & M_{12} \\ M_{21} & M_{22}\end{bmatrix}^{-1}$ followed by $N_p = \begin{bmatrix} N_{11} & N_{12} \\ N_{21} & N_{22}\end{bmatrix}$, and the left factorization with $\tilde N_p$ followed by $\tilde M_p^{-1}$.]
(6.38)
This concludes that $\begin{bmatrix} \tilde N(s) & -\tilde M(s)\end{bmatrix}$ given by (6.23) is right invertible in RH∞, and then $T = \tilde M^{-1}\tilde N$ is a left coprime factorization.
Furthermore, it can be verified from (6.31) and (6.35) that
(6.39)
Hence, (6.33) and (6.36) will form the Bezout identity as summarized in the
following lemma.
Lemma 6.2 For any proper real-rational matrix T(s), there always exists a double (left and right) coprime factorization given by (6.10), where N(s), M(s), Ñ(s), and M̃(s) are in RH∞, respectively. For the double coprime factorization, there exist RH∞ transfer matrices X(s), Y(s), X̃(s), and Ỹ(s) satisfying the Bezout identity

$$\begin{bmatrix} \tilde X(s) & -\tilde Y(s) \\ -\tilde N(s) & \tilde M(s)\end{bmatrix}\begin{bmatrix} M(s) & Y(s) \\ N(s) & X(s)\end{bmatrix} = \begin{bmatrix} I & 0 \\ 0 & I\end{bmatrix}. \tag{6.40}$$
As stated in Chap. 5, the coprime factorization can also arise in the two-port representation. Referring to the chain scattering approach proposed by Tsai [5], the coprime factorization of $P = N_p M_p^{-1}$ (or $P = \tilde M_p^{-1}\tilde N_p$), as illustrated in Fig. 6.6, is utilized.
and

$$\tilde N_p = \begin{bmatrix} \tilde N_{11} & \tilde N_{12} \\ \tilde N_{21} & \tilde N_{22}\end{bmatrix} \in RH_\infty, \qquad \tilde M_p = \begin{bmatrix} \tilde M_{11} & \tilde M_{12} \\ \tilde M_{21} & \tilde M_{22}\end{bmatrix} \in RH_\infty. \tag{6.42}$$
where

$$\begin{bmatrix} z \\ y\end{bmatrix} = N_p\begin{bmatrix} w' \\ u'\end{bmatrix} = \begin{bmatrix} N_{11} & N_{12} \\ N_{21} & N_{22}\end{bmatrix}\begin{bmatrix} w' \\ u'\end{bmatrix}, \tag{6.44}$$

and

$$\begin{bmatrix} w \\ u\end{bmatrix} = M_p\begin{bmatrix} w' \\ u'\end{bmatrix} = \begin{bmatrix} M_{11} & M_{12} \\ M_{21} & M_{22}\end{bmatrix}\begin{bmatrix} w' \\ u'\end{bmatrix}. \tag{6.45}$$
Similarly, the left coprime factorization, as shown in Fig. 6.6, can be found in the dual way. From the SCC of Fig. 5.8, one has

$$\begin{bmatrix} z \\ y\end{bmatrix} = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22}\end{bmatrix}\begin{bmatrix} w \\ u\end{bmatrix} = \tilde M_p^{-1}\tilde N_p\begin{bmatrix} w \\ u\end{bmatrix}, \tag{6.46}$$

where

$$\begin{bmatrix} z' \\ y'\end{bmatrix} = \tilde N_p\begin{bmatrix} w \\ u\end{bmatrix} = \begin{bmatrix} \tilde N_{11} & \tilde N_{12} \\ \tilde N_{21} & \tilde N_{22}\end{bmatrix}\begin{bmatrix} w \\ u\end{bmatrix}, \tag{6.47}$$

and

$$\begin{bmatrix} z' \\ y'\end{bmatrix} = \tilde M_p\begin{bmatrix} z \\ y\end{bmatrix} = \begin{bmatrix} \tilde M_{11} & \tilde M_{12} \\ \tilde M_{21} & \tilde M_{22}\end{bmatrix}\begin{bmatrix} z \\ y\end{bmatrix}. \tag{6.48}$$
(A, B2) stabilizable and (C2, A) detectable. A double coprime factorization of $P(s) = N_p(s)M_p^{-1}(s) = \tilde M_p^{-1}(s)\tilde N_p(s)$ in state-space form is given by
(6.49)
and
(6.50)
where Wuu, Www, W̃zz, and W̃yy are nonsingular, and $F = \begin{bmatrix} F_u \\ F_w\end{bmatrix}$ and H = [Hz Hy] are chosen such that both A + B1Fw + B2Fu and A + HzC1 + HyC2 are Hurwitz.
Proof Let
$$\begin{bmatrix}\dot x\\ z\\ y\\ w\\ u\end{bmatrix} = \begin{bmatrix} A & B_1 & B_2\\ C_1 & D_{11} & D_{12}\\ C_2 & D_{21} & 0\\ 0 & I & 0\\ 0 & 0 & I\end{bmatrix}\begin{bmatrix} x\\ w\\ u\end{bmatrix}. \quad (6.51)$$
Additionally, let
$$\begin{bmatrix} x\\ w\\ u\end{bmatrix} = \begin{bmatrix} I & 0 & 0\\ F_w & W_{ww} & W_{wu}\\ F_u & 0 & W_{uu}\end{bmatrix}\begin{bmatrix} x\\ w'\\ u'\end{bmatrix},$$
where $F = \begin{bmatrix}F_u\\ F_w\end{bmatrix}$ is chosen such that $A + B_1F_w + B_2F_u$ is Hurwitz and $W_{uu}$ and $W_{ww}$ are nonsingular.
This is equivalent to multiplying $\begin{bmatrix} I & 0 & 0\\ F_w & W_{ww} & W_{wu}\\ F_u & 0 & W_{uu}\end{bmatrix}$ on the right-hand side of the above formulation, yielding
$$\begin{bmatrix}\dot x\\ z\\ y\\ w\\ u\end{bmatrix} = \begin{bmatrix} A & B_1 & B_2\\ C_1 & D_{11} & D_{12}\\ C_2 & D_{21} & 0\\ 0 & I & 0\\ 0 & 0 & I\end{bmatrix}\begin{bmatrix} I & 0 & 0\\ F_w & W_{ww} & W_{wu}\\ F_u & 0 & W_{uu}\end{bmatrix}\begin{bmatrix} x\\ w'\\ u'\end{bmatrix}$$
$$= \begin{bmatrix} A + B_1F_w + B_2F_u & B_1W_{ww} & B_1W_{wu} + B_2W_{uu}\\ C_1 + D_{11}F_w + D_{12}F_u & D_{11}W_{ww} & D_{11}W_{wu} + D_{12}W_{uu}\\ C_2 + D_{21}F_w & D_{21}W_{ww} & D_{21}W_{wu}\\ F_w & W_{ww} & W_{wu}\\ F_u & 0 & W_{uu}\end{bmatrix}\begin{bmatrix} x\\ w'\\ u'\end{bmatrix}. \quad (6.52)$$
(6.53)
Furthermore, from
$$\begin{bmatrix}\dot x\\ z_e\\ y_e\end{bmatrix} = \begin{bmatrix} A & B_1 & B_2 & 0 & 0\\ C_1 & D_{11} & D_{12} & -I & 0\\ C_2 & D_{21} & 0 & 0 & -I\end{bmatrix}\begin{bmatrix} x\\ w\\ u\\ z\\ y\end{bmatrix}, \quad (6.54)$$
let
$$\begin{bmatrix}\dot{\hat x}\\ z'\\ y'\end{bmatrix} = \begin{bmatrix} I & H_z & H_y\\ 0 & \tilde W_{zz} & 0\\ 0 & \tilde W_{yz} & \tilde W_{yy}\end{bmatrix}\begin{bmatrix}\dot x\\ z_e\\ y_e\end{bmatrix},$$
where $H = \begin{bmatrix}H_z & H_y\end{bmatrix}$ is chosen such that $A + H_zC_1 + H_yC_2$ is Hurwitz and $\tilde W_{zz}$ and $\tilde W_{yy}$ are nonsingular. This is equivalent to multiplying $\begin{bmatrix} I & H_z & H_y\\ 0 & \tilde W_{zz} & 0\\ 0 & \tilde W_{yz} & \tilde W_{yy}\end{bmatrix}$ on the left-hand side of the above formulation, and then one has
$$\begin{bmatrix}\dot{\hat x}\\ z'\\ y'\end{bmatrix} = \begin{bmatrix} I & H_z & H_y\\ 0 & \tilde W_{zz} & 0\\ 0 & \tilde W_{yz} & \tilde W_{yy}\end{bmatrix}\begin{bmatrix} A & B_1 & B_2 & 0 & 0\\ C_1 & D_{11} & D_{12} & -I & 0\\ C_2 & D_{21} & 0 & 0 & -I\end{bmatrix}\begin{bmatrix} x\\ w\\ u\\ z\\ y\end{bmatrix}$$
$$= \begin{bmatrix} A + H_zC_1 + H_yC_2 & B_1 + H_zD_{11} + H_yD_{21} & B_2 + H_zD_{12} & -H_z & -H_y\\ \tilde W_{zz}C_1 & \tilde W_{zz}D_{11} & \tilde W_{zz}D_{12} & -\tilde W_{zz} & 0\\ \tilde W_{yz}C_1 + \tilde W_{yy}C_2 & \tilde W_{yz}D_{11} + \tilde W_{yy}D_{21} & \tilde W_{yz}D_{12} & -\tilde W_{yz} & -\tilde W_{yy}\end{bmatrix}\begin{bmatrix} x\\ w\\ u\\ z\\ y\end{bmatrix}. \quad (6.55)$$
(6.56)
To ensure that (6.53) is a pair of right coprime factors, one needs Definition 6.1 to construct a left inverse based on (6.56) in $RH_\infty$, i.e., $\tilde XM_p - \tilde YN_p = I$. From
$$\begin{bmatrix} x\\ w'\\ u'\end{bmatrix} = \begin{bmatrix} I & 0 & 0\\ F_w & W_{ww} & W_{wu}\\ F_u & 0 & W_{uu}\end{bmatrix}^{-1}\begin{bmatrix} x\\ w\\ u\end{bmatrix} = \begin{bmatrix} I & 0 & 0\\ -W_{ww}^{-1}F_w + W_{ww}^{-1}W_{wu}W_{uu}^{-1}F_u & W_{ww}^{-1} & -W_{ww}^{-1}W_{wu}W_{uu}^{-1}\\ -W_{uu}^{-1}F_u & 0 & W_{uu}^{-1}\end{bmatrix}\begin{bmatrix} x\\ w\\ u\end{bmatrix}, \quad (6.57)$$
one has
(6.58)
(6.59)
The similarity transformation $T = \begin{bmatrix} I & 0\\ I & I\end{bmatrix}$ will yield
(6.60)
This concludes that (6.49) is left invertible in $RH_\infty$ such that $P(s) = N_p(s)M_p^{-1}(s)$ is a coprime factorization.
Analogously, to ensure that (6.50) consists of coprime factors, one needs to construct, based on (6.52), a right inverse in $RH_\infty$. From
$$\begin{bmatrix}\dot x\\ z_e\\ y_e\end{bmatrix} = \begin{bmatrix} I & H_z & H_y\\ 0 & \tilde W_{zz} & 0\\ 0 & \tilde W_{yz} & \tilde W_{yy}\end{bmatrix}^{-1}\begin{bmatrix}\dot{\hat x}\\ z'\\ y'\end{bmatrix} = \begin{bmatrix} I & -H_z\tilde W_{zz}^{-1} + H_y\tilde W_{yy}^{-1}\tilde W_{yz}\tilde W_{zz}^{-1} & -H_y\tilde W_{yy}^{-1}\\ 0 & \tilde W_{zz}^{-1} & 0\\ 0 & -\tilde W_{yy}^{-1}\tilde W_{yz}\tilde W_{zz}^{-1} & \tilde W_{yy}^{-1}\end{bmatrix}\begin{bmatrix}\dot{\hat x}\\ z'\\ y'\end{bmatrix}, \quad (6.61)$$
let
(6.62)
(6.63)
By the state similarity transformation $T = \begin{bmatrix} I & 0\\ I & I\end{bmatrix}$, it yields $\tilde N_pY + \tilde M_pX = I$, since
(6.64)
This concludes that (6.50) is right invertible in $RH_\infty$ such that $P(s) = \tilde M_p^{-1}(s)\tilde N_p(s)$ is a coprime factorization.
Equivalently, the Bezout identity as given in Lemma 6.2 can also be checked from
$$\begin{bmatrix}\tilde X(s) & -\tilde Y(s)\\ -\tilde N_p(s) & \tilde M_p(s)\end{bmatrix}\begin{bmatrix} M_p(s) & Y(s)\\ N_p(s) & X(s)\end{bmatrix} = \begin{bmatrix} I & 0\\ 0 & I\end{bmatrix}, \quad (6.65)$$
where
and
Consequently, the Bezout identity is also held in the two-port transfer matrix.
Further on, the coprime factorizations for the configurations of CSDr–CSDl and CSDl–CSDr are discussed in the following. The two-port SCC plant can be transformed into the CSDr–CSDl description, as illustrated in Fig. 6.7. Note that multiplication by M* does not change the overall transfer function from w to z.
Fig. 6.7 Multiplying by M* at the right terminal, where $P_1^* = \begin{bmatrix} P_{12} & P_{11}\\ 0 & I\end{bmatrix}$ and $P_2^* = \begin{bmatrix} I & 0\\ P_{22} & P_{21}\end{bmatrix}$
(6.66)
(6.67)
(6.68)
and
(6.69)
where $W_{uu}$ and $W_{ww}$ are nonsingular and $F = \begin{bmatrix}F_u\\ F_w\end{bmatrix}$ is chosen such that $A + B_1F_w + B_2F_u$ is Hurwitz. The stable left inverse that satisfies
is given by
(6.70)
It should be emphasized that from (6.49) and (6.66), one can assume $\begin{bmatrix} G_1\\ \tilde G_2\end{bmatrix}$ in $RH_\infty$ such that $M_1$ is a coprime factorization. Additionally, by Property 5.3, one has
On the other hand, the two-port SCC plant can be transformed into the CSDl–CSDr description, as illustrated in Fig. 6.8. As mentioned before, multiplication by $\tilde M$ does not change the overall transfer function from w to z.
Fig. 6.8 Multiplying by $\tilde M$ at the left terminal, where $P_1^* = \begin{bmatrix} I & -P_{11}\\ 0 & -P_{21}\end{bmatrix}$ and $P_2^* = \begin{bmatrix} P_{12} & 0\\ P_{22} & -I\end{bmatrix}$
(6.72)
(6.73)
(6.74)
(6.75)
where $\tilde W_{zz}$ and $\tilde W_{yy}$ are nonsingular and $H = \begin{bmatrix}H_z & H_y\end{bmatrix}$ is chosen such that $A + H_zC_1 + H_yC_2$ is Hurwitz. The stable right inverse that satisfies the above is given by
(6.76)
(6.80)
(6.81)
By the similarity transformation $T = \begin{bmatrix} I & X\\ 0 & I\end{bmatrix}$, it yields
(6.82)
Therefore,
$$\begin{cases} W^TW = I,\\ C^TC + F^TF + (A + BF)^TX + X(A + BF) = 0,\\ F = -B^TX,\end{cases} \quad (6.83)$$
which leads to
$$A^TX + XA - XBB^TX + C^TC = 0. \quad (6.84)$$
In Example 6.1, one can find that the normalized right coprime factorization requires a state-feedback gain F such that $A + BF$ is Hurwitz and $\begin{bmatrix} M(s)\\ N(s)\end{bmatrix}$ is inner.
The algebraic Riccati equation (6.84) needs to be solved to determine F. In the next chapter, properties of algebraic Riccati equations, the solutions,
and applications will be further discussed. The state-space properties of normalized
coprime factorization will then be introduced therein.
Exercises
2. Let $G(s) = \frac{(s-1)(s+2)}{(s+3)(s-4)}$. Find a stable coprime factorization $G(s) = N(s)M^{-1}(s)$ and $X(s), Y(s) \in RH_\infty$ such that $X(s)N(s) + Y(s)M(s) = 1$.
References
In the last chapter, it was discussed that the algebraic Riccati equation (ARE) needs to be solved in order to obtain the state-space solutions of the normalized coprime factorizations. In Chap. 2, the Lyapunov equation was employed to determine the
controllability and observability gramians of a system. Both the algebraic Riccati
and Lyapunov equations play prominent roles in the synthesis of robust and optimal
control as well as in the stability analysis of control systems. In fact, the Lyapunov
equation is a special case of the ARE. The ARE indeed has wide applications in
control system analysis and synthesis. For example, the state-space formulation for
particular coprime factorizations with a J-lossless (or dual J-lossless) numerator
requires solving an ARE; in turn, the J-lossless and dual J-lossless systems are
essential in the synthesis of robust controllers using the CSD approach. In this
chapter, the ARE will be formally introduced. Solution procedures to AREs and
their various properties will be discussed. Towards the end of this chapter, the
coprime factorization approach to solve several spectral factorization problems is
to be considered.
The algebraic Riccati equation is useful for solving control synthesis problems such as the $H_2$/$H_\infty$ (sub)optimal control problems [6, 10]. Let A, R, and Q be $n \times n$ real matrices with R and Q being symmetric. The following matrix equation is called an algebraic Riccati equation (ARE):
$$A^TX + XA + XRX + Q = 0. \quad (7.1)$$
M.-C. Tsai and D.-W. Gu, Robust and Optimal Control: A Two-port Framework 171
Approach, Advances in Industrial Control, DOI 10.1007/978-1-4471-6257-5__7,
© Springer-Verlag London 2014
172 7 Algebraic Riccati Equations and Spectral Factorizations
$$J^{-1}HJ = -H^T, \quad (7.3)$$
where $J = \begin{bmatrix} 0 & I_n\\ -I_n & 0\end{bmatrix}$. Note that
where $V_{11}$ contains all stable eigenvalues of H, and $V_{22}$ contains all anti-stable eigenvalues. Multiplying $U_1$ from the right on both sides of (7.5) yields
$$HU_1 = \begin{bmatrix} U_1 & U_2\end{bmatrix}\begin{bmatrix} V_{11} & V_{12}\\ 0 & V_{22}\end{bmatrix}\begin{bmatrix} I\\ 0\end{bmatrix} = U_1V_{11}. \quad (7.6)$$
Let
$$U_1 = \begin{bmatrix} X_1\\ X_2\end{bmatrix} = \begin{bmatrix} I\\ X_2X_1^{-1}\end{bmatrix}X_1 = \begin{bmatrix} I\\ X\end{bmatrix}X_1, \quad (7.7)$$
so that
$$HU_1 = \begin{bmatrix} A & R\\ -Q & -A^T\end{bmatrix}\begin{bmatrix} I\\ X\end{bmatrix}X_1 = \begin{bmatrix} I\\ X\end{bmatrix}X_1V_{11}, \quad (7.8)$$
and therefore,
$$\begin{bmatrix} A & R\\ -Q & -A^T\end{bmatrix}\begin{bmatrix} I\\ X\end{bmatrix} = \begin{bmatrix} I\\ X\end{bmatrix}X_1V_{11}X_1^{-1}. \quad (7.9)$$
This gives that, by multiplying $\begin{bmatrix} X & -I\end{bmatrix}$ from the left,
$$\begin{bmatrix} X & -I\end{bmatrix}\begin{bmatrix} A & R\\ -Q & -A^T\end{bmatrix}\begin{bmatrix} I\\ X\end{bmatrix} = \begin{bmatrix} X & -I\end{bmatrix}\begin{bmatrix} I\\ X\end{bmatrix}X_1V_{11}X_1^{-1} = 0. \quad (7.10)$$
Expanding the left-hand side,
$$\begin{bmatrix} X & -I\end{bmatrix}\begin{bmatrix} A & R\\ -Q & -A^T\end{bmatrix}\begin{bmatrix} I\\ X\end{bmatrix} = A^TX + XA + XRX + Q, \quad (7.11)$$
which shows that such defined X solves the ARE (7.1). Furthermore, from (7.9),
$$\begin{bmatrix} A + RX\\ -Q - A^TX\end{bmatrix} = \begin{bmatrix} X_1V_{11}X_1^{-1}\\ X_2V_{11}X_1^{-1}\end{bmatrix}. \quad (7.12)$$
Thus, $A + RX = X_1V_{11}X_1^{-1}$ and $V_{11}$ have the same eigenvalues. Note that in most control synthesis problems, matrix $A + RX$ leads to the state matrix of the closed-loop system, which explains why $A + RX$ must be Hurwitz.
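The construction in (7.6)–(7.12) can be sketched numerically. The following is an illustrative sketch only (the matrices are hypothetical, and numpy's eigendecomposition stands in for the real Schur form used in the text):

```python
import numpy as np

def ric(A, R, Q):
    """Stabilizing solution X of A^T X + X A + X R X + Q = 0, built from a
    basis [X1; X2] of the stable invariant subspace of H = [[A, R], [-Q, -A^T]]."""
    n = A.shape[0]
    H = np.block([[A, R], [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    U1 = V[:, w.real < 0]                    # n stable eigenvectors, U1 = [X1; X2]
    X1, X2 = U1[:n, :], U1[n:, :]
    return np.real(X2 @ np.linalg.inv(X1))   # X = X2 X1^{-1}, as in (7.7)

# Hypothetical LQR-type data: R = -B B^T, Q = C^T C
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
X = ric(A, -B @ B.T, C.T @ C)

print(np.allclose(A.T @ X + X @ A - X @ B @ B.T @ X + C.T @ C, 0))  # True
print(np.linalg.eigvals(A - B @ B.T @ X).real.max() < 0)            # True: A + RX is Hurwitz
```

For this data, X equals [[√2, 1], [1, √2]], and A + RX carries the stable eigenvalues of H, consistent with (7.12).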
In general, if there exists a Hermitian matrix X and a square matrix W such that
$$\begin{bmatrix} A & R\\ -Q & -A^T\end{bmatrix}\begin{bmatrix} I\\ X\end{bmatrix} = \begin{bmatrix} I\\ X\end{bmatrix}W, \quad (7.16)$$
$$rx^2 + 2ax + q = 0. \quad (7.17)$$
The corresponding Hamiltonian matrix is $H = \begin{bmatrix} a & r\\ -q & -a\end{bmatrix}$. Define the discriminant of (7.17) as
$$\Delta = 4\left(a^2 - qr\right). \quad (7.18)$$
If $\Delta > 0$, (7.17) has two distinct real roots,
$$x_{1,2} = \frac{-a \pm \sqrt{a^2 - qr}}{r}, \quad (7.19)$$
which means that its graph, i.e., $y = rx^2 + 2ax + q$, will cross the x-axis twice. If $\Delta = 0$, (7.17) has two coincident real roots,
$$x_1 = x_2 = -\frac{a}{r}. \quad (7.20)$$
Note that
$$\Delta = 4\left(a^2 - qr\right) = -4\det(H). \quad (7.21)$$
Fig. 7.1 Plot of quadratic curve and ARE solution (a) for r > 0 (b) for r < 0
of the parabola. More precisely, it is the point where the parabola crosses the y-axis.
Similar discussions can be conducted for the case r < 0, while the function graph
would be a straight line when r D 0.
To compare the quadratic equation with the Hamiltonian matrix, one can find from (7.18), for a solution x, that $\det(H) = -(a + rx)^2 \le 0$ in the scalar case. If $\Delta > 0$, the eigenvalues of H are given by
$$\lambda = \pm(a + rx) = \pm\sqrt{-\det(H)}, \quad (7.22)$$
and the distance d between the two intersection points on the x-axis is equal to
$$d = \frac{2\sqrt{-\det(H)}}{|r|}. \quad (7.23)$$
When $\Delta > 0$, H does not have any eigenvalues on the imaginary axis, i.e., $H \in \mathrm{dom(Ric)}$. Hence, there exists a required solution to the ARE. From (7.19), there are two solutions of the quadratic equation (7.17). For the purpose of a Hurwitz (stable) $a + rx$ (a negative number in the scalar case), the solution
$$x_2 = \frac{-a - \sqrt{a^2 - qr}}{r}$$
in (7.19) should be chosen, because it gives $a + rx = -\sqrt{a^2 - qr} < 0$. For the case r > 0, this required solution $x_{Ric}$ is the negative (left) root in Fig. 7.1, and for r < 0, the positive (right) root. In fact, observation of Fig. 7.1 reveals that the ARE solution $x_{Ric}$ is located on the branch of the parabolic curve which has a negative slope. This can be simply proven as follows. From (7.12), $a + rx = x_1V_{11}x_1^{-1} = V_{11}$ for scalar $x_1$; therefore, $x = \frac{V_{11} - a}{r}$. Now, the derivative of $y = rx^2 + 2ax + q$ is given by $\frac{dy}{dx} = 2(a + rx)$, which is negative at the ARE solution.
If R and Q are sign definite and both of the same sign, the required ARE solution X, if it exists, will be nonpositive definite, i.e., $X \le 0$. On the other hand, if R and Q have opposite signs, then X is nonnegative definite, i.e., $X \ge 0$.
Example 7.1 Find the required solution of the quadratic equation (ARE) $4x^2 - 4 = 0$.
It can be calculated that $\det(H) = -16 < 0$, where the corresponding Hamiltonian matrix is $H = \begin{bmatrix} a & r\\ -q & -a\end{bmatrix} = \begin{bmatrix} 0 & 4\\ 4 & 0\end{bmatrix}$. From the discussion above, one can conclude that $H \in \mathrm{dom(Ric)}$ with $\mathrm{eig}(H) = \pm 4$. Figure 7.2 shows the corresponding upward parabolic curve $y = 4x^2 - 4$ (r = 4 > 0), and clearly, two solutions satisfying $4x^2 - 4 = 0$ can be found at $x = \pm 1$. Since the curve has a negative slope at the point $x = -1$, as depicted in Fig. 7.2, $x = -1 < 0$ is the ARE solution. The following shows how to find the ARE solution step by step.
By the real Schur decomposition such as (7.5), one has
$$H = \begin{bmatrix} 0 & 4\\ 4 & 0\end{bmatrix} = \begin{bmatrix}\frac{1}{\sqrt 2} & \frac{1}{\sqrt 2}\\ -\frac{1}{\sqrt 2} & \frac{1}{\sqrt 2}\end{bmatrix}\begin{bmatrix} -4 & 0\\ 0 & 4\end{bmatrix}\begin{bmatrix}\frac{1}{\sqrt 2} & \frac{1}{\sqrt 2}\\ -\frac{1}{\sqrt 2} & \frac{1}{\sqrt 2}\end{bmatrix}^T = \begin{bmatrix} U_1 & U_2\end{bmatrix}\begin{bmatrix} V_{11} & V_{12}\\ 0 & V_{22}\end{bmatrix}\begin{bmatrix} U_1 & U_2\end{bmatrix}^T,$$
where
$$U_1 = \begin{bmatrix} x_1\\ x_2\end{bmatrix} = \begin{bmatrix} 1\\ x_2x_1^{-1}\end{bmatrix}x_1 = \begin{bmatrix} 1\\ x\end{bmatrix}x_1 = \begin{bmatrix}\frac{1}{\sqrt 2}\\ -\frac{1}{\sqrt 2}\end{bmatrix},$$
and then
$$\begin{bmatrix} x & -1\end{bmatrix}\begin{bmatrix} 0 & 4\\ 4 & 0\end{bmatrix}\begin{bmatrix} 1\\ x\end{bmatrix} = \begin{bmatrix} -1 & -1\end{bmatrix}\begin{bmatrix}\frac{1}{\sqrt 2}\\ -\frac{1}{\sqrt 2}\end{bmatrix}(-4)\sqrt 2 = 0.$$
This shows that $x = -1 < 0$ is the ARE solution, as depicted in Fig. 7.2, such that $V_{11} = -4$ and $a + rx = -4$. Note that $x = 1$ is a solution of $4x^2 - 4 = 0$, but it is not the ARE solution since $a + rx = 4$.
Example 7.2 Find the solution of the quadratic equation (ARE) $-x^2 + 4x - 3 = 0$.
It can be calculated that $\det(H) = -1 < 0$ and $\mathrm{eig}(H) = \pm 1$, where the corresponding Hamiltonian matrix is $H = \begin{bmatrix} a & r\\ -q & -a\end{bmatrix} = \begin{bmatrix} 2 & -1\\ 3 & -2\end{bmatrix}$. One can deduce that $H \in \mathrm{dom(Ric)}$. Figure 7.3 shows the corresponding downward parabolic curve $y = -x^2 + 4x - 3$ (r = −1 < 0), and evidently, two solutions satisfying $-x^2 + 4x - 3 = 0$ can be found at $x = 1$ and $x = 3$. Since the curve has a negative slope at the point $x = 3$, as depicted in Fig. 7.3, $x = 3$ is the ARE solution. In fact, one can follow the steps above to obtain the ARE solution $x = 3$ such that $V_{11} = -1$ and $a + rx = -1$. Note that $x = 1 > 0$ is a solution of $-x^2 + 4x - 3 = 0$, but it is not the ARE solution since $a + rx = 2 + (-1)\cdot 1 = 1$.
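Both scalar examples can be verified in a few lines, using the stable eigenvector of H as in (7.7) (an illustrative sketch, not the book's code):

```python
import numpy as np

def scalar_are(a, r, q):
    """Stabilizing root of r x^2 + 2 a x + q = 0 from the stable
    eigenvector [x1; x2] of H = [[a, r], [-q, -a]]; x = x2 / x1."""
    H = np.array([[a, r], [-q, -a]])
    w, V = np.linalg.eig(H)
    v = V[:, np.argmin(w.real)]   # eigenvector of the stable eigenvalue
    return v[1] / v[0]

x1 = scalar_are(0.0, 4.0, -4.0)   # Example 7.1: 4x^2 - 4 = 0
x2 = scalar_are(2.0, -1.0, -3.0)  # Example 7.2: -x^2 + 4x - 3 = 0
print(x1, x2)                              # -> approximately -1 and 3
print(0.0 + 4.0 * x1, 2.0 + (-1.0) * x2)  # a + r x: both negative (Hurwitz)
```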
The concept of the discriminant is useful in the scalar case to find the ARE
solution of a quadratic equation. However, this property, unfortunately, cannot be
directly extended to the general matrix ARE cases.
$$HJ + JH^T = 0. \quad (7.25)$$
Let $T = \begin{bmatrix} T_{11} & T_{12}\\ T_{21} & T_{22}\end{bmatrix}$ be a $2n \times 2n$ nonsingular matrix, with inverse
$$T^{-1} = \begin{bmatrix}\tilde T_{11} & \tilde T_{12}\\ \tilde T_{21} & \tilde T_{22}\end{bmatrix}. \quad (7.26)$$
$$THT^{-1}J + JT^{-T}H^TT^T = 0,$$
or
$$H\left(T^{-1}JT^{-T}\right) + \left(T^{-1}JT^{-T}\right)H^T = 0. \quad (7.27)$$
One can easily see that $H_T$ is Hamiltonian if and only if (7.27) holds. For the material presented in this book, however, just a sufficient condition is needed, which appears in the following lemma.
Lemma 7.1 A Hamiltonian matrix H remains Hamiltonian under a similarity transformation if the transformation matrix T satisfies the following conditions:
(1) $\tilde T_{11}\tilde T_{12}^T$ and $\tilde T_{22}\tilde T_{21}^T$ are both symmetric, where $\tilde T_{ij}$ (i, j = 1, 2) are defined in (7.26).
(2) $\tilde T_{22}\tilde T_{11}^T - \tilde T_{21}\tilde T_{12}^T = \alpha I$, where α is a scalar constant.
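Lemma 7.1 can be spot-checked numerically. In the sketch below (hypothetical matrices), T = [[I, 0], [X, I]] with symmetric X satisfies conditions (1) and (2) with α = 1, and the Hamiltonian structure (JH symmetric) survives the transformation:

```python
import numpy as np

n = 2
I = np.eye(n)
Z = np.zeros((n, n))
J = np.block([[Z, I], [-I, Z]])

# Hypothetical Hamiltonian H = [[A, R], [-Q, -A^T]] with R, Q symmetric
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
R = np.array([[1.0, 0.0], [0.0, 2.0]])
Q = np.array([[2.0, 1.0], [1.0, 2.0]])
H = np.block([[A, R], [-Q, -A.T]])
assert np.allclose(J @ H, (J @ H).T)    # H is Hamiltonian iff J H is symmetric

# T = [[I, 0], [X, I]] with symmetric X meets the conditions of Lemma 7.1
X = np.array([[1.0, 0.5], [0.5, 3.0]])
T = np.block([[I, Z], [X, I]])
HT = T @ H @ np.linalg.inv(T)
assert np.allclose(J @ HT, (J @ HT).T)  # H_T is still Hamiltonian
print("Hamiltonian structure preserved")
```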
7.2 Similarity Transformation of Hamiltonian Matrices 179
In this case, $T^{-1} = \begin{bmatrix} I & 0\\ 0 & \lambda I\end{bmatrix}$ for a nonzero scalar λ. Again, the conditions in Lemma 7.1 hold, and $H_T$ is Hamiltonian. Similarly, since $H\begin{bmatrix} I\\ X\end{bmatrix} = \begin{bmatrix} I\\ X\end{bmatrix}(A + RX)$,
$$H_T\,T\begin{bmatrix} I\\ X\end{bmatrix} = THT^{-1}T\begin{bmatrix} I\\ X\end{bmatrix} = T\begin{bmatrix} I\\ X\end{bmatrix}(A + RX),$$
i.e.,
$$H_T\begin{bmatrix} I\\ \lambda^{-1}X\end{bmatrix} = \begin{bmatrix} I\\ \lambda^{-1}X\end{bmatrix}(A + RX).$$
Hence, $\mathrm{Ric}(H_T) = \lambda^{-1}X$.
Case (III): $T = \begin{bmatrix} U^T & 0\\ 0 & U^T\end{bmatrix}$ and $U^T = U^{-1}$, i.e., U is orthonormal. Here, $T^{-1} = \begin{bmatrix} U & 0\\ 0 & U\end{bmatrix}$. Straightforward manipulations show that the conditions in Lemma 7.1 are satisfied, and thus, $H_T = THT^{-1}$ is a Hamiltonian matrix. One obtains accordingly that
$$H_T\begin{bmatrix} U^T\\ U^TX\end{bmatrix} = \begin{bmatrix} U^T\\ U^TX\end{bmatrix}(A + RX). \quad (7.29)$$
$$H_T\begin{bmatrix} I\\ X(I + LX)^{-1}\end{bmatrix} = \begin{bmatrix} I\\ X(I + LX)^{-1}\end{bmatrix}(I + LX)(A + RX)(I + LX)^{-1}. \quad (7.31)$$
$$A^TX + XA - XRX + Q = 0, \quad (7.32)$$
and, for the dual ARE, the Hamiltonian $H_Y$ satisfies
$$H_Y\begin{bmatrix} I & 0\\ Y & I\end{bmatrix} = \begin{bmatrix} I & 0\\ Y & I\end{bmatrix}\begin{bmatrix} (A - YQ)^T & -Q\\ 0 & -(A - YQ)\end{bmatrix}
\;\Rightarrow\; H_Y = \begin{bmatrix} I & 0\\ Y & I\end{bmatrix}\begin{bmatrix} (A - YQ)^T & -Q\\ 0 & -(A - YQ)\end{bmatrix}\begin{bmatrix} I & 0\\ Y & I\end{bmatrix}^{-1}.$$
Hence,
$$\begin{bmatrix} A - YQ & 0\\ -Q & -(A - YQ)^T\end{bmatrix}\begin{bmatrix} I & Y\\ 0 & I\end{bmatrix}\begin{bmatrix} I & 0\\ X & I\end{bmatrix} = \begin{bmatrix} I & Y\\ 0 & I\end{bmatrix}\begin{bmatrix} I & 0\\ X & I\end{bmatrix}\begin{bmatrix} A - RX & R\\ 0 & -(A - RX)^T\end{bmatrix}.$$
That is,
$$\begin{bmatrix} A - YQ & 0\\ -Q & -(A - YQ)^T\end{bmatrix}\begin{bmatrix} I + YX & Y\\ X & I\end{bmatrix} = \begin{bmatrix} I + YX & Y\\ X & I\end{bmatrix}\begin{bmatrix} A - RX & R\\ 0 & -(A - RX)^T\end{bmatrix}.$$
Considering the first block column of the product matrices on both sides yields
$$\begin{bmatrix} A - YQ & 0\\ -Q & -(A - YQ)^T\end{bmatrix}\begin{bmatrix} I + YX\\ X\end{bmatrix} = \begin{bmatrix} I + YX\\ X\end{bmatrix}(A - RX),$$
so that
$$(A - YQ)(I + YX) = (I + YX)(A - RX).$$
Therefore, the Lyapunov equation
$$PA + A^TP = -Q \quad (7.37)$$
has the solution $P = \int_0^\infty e^{A^Tt}Qe^{At}\,dt$, since
$$PA + A^TP = \int_0^\infty \frac{d}{dt}\left(e^{A^Tt}Qe^{At}\right)dt = \left.e^{A^Tt}Qe^{At}\right|_0^\infty = -Q. \quad (7.39)$$
The last equation comes from $e^{At} \to 0$ as $t \to \infty$, due to A being Hurwitz. Note that a Lyapunov equation is a special case of the ARE (i.e., R = 0), $A^TX + XA + Q = 0$, and
$$X = \mathrm{Ric}\begin{bmatrix} A & 0\\ -Q & -A^T\end{bmatrix}.$$
$$A^TP_o + P_oA = -C^TC. \quad (7.41)$$
Observability gramian $P_o$ determines the total energy in the system output, which is driven by a given initial state in the case of identically zero input.
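As a quick numerical illustration (with a hypothetical A and C; scipy's continuous Lyapunov solver is one way to compute the gramian):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable, observable system
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
C = np.array([[1.0, 0.0]])

# Observability gramian: A^T Po + Po A = -C^T C
Po = solve_continuous_lyapunov(A.T, -C.T @ C)
print(np.linalg.eigvalsh(Po))  # all positive: Po > 0 since A is Hurwitz

# Output energy for initial state x0 under zero input is x0^T Po x0
x0 = np.array([1.0, 0.0])
print(x0 @ Po @ x0)            # here y(t) = e^{-t}, so the energy is 1/2
```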
In control system analysis and synthesis, the constant term on the right-hand side
of a Lyapunov equation is usually not negative definite; rather it is nonpositive. For
such cases, one has the following result.
Lemma 7.2 Let (C, A) be observable. Then $A^TP_o + P_oA = -C^TC$ has a positive definite solution $P_o$ if and only if A is Hurwitz.
Proof (Sufficiency) Construct matrix $P_o$ as in (7.40). If A is Hurwitz, it can be shown as in the deduction of (7.39) that such a $P_o$ is indeed a solution to the Lyapunov equation. $P_o$ is obviously nonnegative. Suppose that $P_o$ is rank deficient. Let N be the null space of $P_o$ and $N_p$ be its matrix representation, i.e., $N_p = [\nu_1, \ldots, \nu_l]$, $1 \le l < n$, where n is the order of the system (the dimension of A and thus of $P_o$), and $P_oN_p = O_{n\times l}$. Multiplying $N_p^T$ and $N_p$ from the left and right, respectively, on both sides of (7.41) concludes that $CN_p = O$. Then, multiplying $N_p$ from the right on (7.41) leads to $P_oAN_p = O$. Hence, $AN_p$ falls into N, and there exists a matrix L of $l \times l$ dimension such that
$$AN_p = N_pL. \quad (7.42)$$
(Necessity) Let λ and ν be an eigenvalue and a corresponding eigenvector of A, i.e., $A\nu = \lambda\nu$, with $\mathrm{Re}\,\lambda \ge 0$. Multiplying $\nu^*$ and ν from the left and right, respectively, on both sides of $A^TP_o + P_oA = -C^TC$ yields
$$\left(\lambda + \bar\lambda\right)\nu^*P_o\nu = -(C\nu)^*(C\nu). \quad (7.43)$$
For a positive definite $P_o$, the left-hand side of (7.43) is nonnegative while its right-hand side is nonpositive. Hence, $C\nu = 0$. Considering that ν is also such that $A\nu = \lambda\nu$, this contradicts the assumption of (C, A) being completely observable. Hence, A is Hurwitz.
7.4 State-Space Formulae for Spectral Factorizations. . . 185
This property can be easily seen in the scalar case when (7.41) becomes a linear equation $2ap = -c^2$. Then, one can obtain $p = -\frac{c^2}{2a}$. It shows that p > 0 (i.e., a positive definite solution) if and only if a < 0 (i.e., A Hurwitz).
Similarly, define the controllability gramian as
$$P_c := \int_0^\infty e^{At}BB^Te^{A^Tt}\,dt, \quad (7.44)$$
which satisfies
In a physical engineering system with an “impulse” input and zero initial states,
controllability gramian Pc determines the total energy in the states generated.
A result dual to Lemma 7.2 is also available under the condition of complete
controllability of (A,B).
In this section, three cases of spectral factorizations in the state-space form are to be
introduced. They are obtained via a unified procedure, i.e., by employing coprime
factorizations. State-space formulae are derived to find certain coprime factors for
each spectral factorization. The factorization procedure is characterized by using the
so-called weighted all-pass function, which is a generalization of all-pass functions
introduced earlier in Chap. 6 and will be formally defined next.
Definition 7.1 Let $\Sigma = \Sigma^T$ and $\hat\Sigma = \hat\Sigma^T$ be constant matrices with compatible dimensions. Then, $P(s) \in RL_\infty$ satisfying
$$P^\sim(s)\Sigma P(s) = \hat\Sigma \quad (7.46)$$
The following lemma gives conditions for a transfer function matrix P(s) to be weighted all-pass.
Lemma 7.3 Let $P(s) = D + C(sI - A)^{-1}B$. Given $\Sigma = \Sigma^T$ and $\hat\Sigma = \hat\Sigma^T$, if
$$D^T\Sigma D = \hat\Sigma, \quad (7.48)$$
$$XB + C^T\Sigma D = 0, \quad (7.49)$$
$$A^TX + XA + C^T\Sigma C = 0, \quad (7.50)$$
then $P^\sim(s)\Sigma P(s) = \hat\Sigma$. Dually, if
$$D\Sigma D^T = \hat\Sigma, \quad (7.51)$$
$$YC^T + B\Sigma D^T = 0, \quad (7.52)$$
$$AY + YA^T + B\Sigma B^T = 0, \quad (7.53)$$
then $P(s)\Sigma P^\sim(s) = \hat\Sigma$.
Proof The weighted all-pass (and co-all-pass) proof follows the proof procedure of a corresponding result on standard inner (and co-inner) functions in [1, 3] and, therefore, is omitted here.
Note that for the case $P(s) \in RH_\infty^{n\times k}$, $\Sigma = I_n$, and $\hat\Sigma = I_k$, the weighted all-pass (or weighted co-all-pass) system P(s) becomes an inner (or co-inner) function. Additionally, for the case that $P(s) \in RH_\infty^{(n_1+n_2)\times(k_1+k_2)}$, $\Sigma = \begin{bmatrix} I_{n_1} & 0\\ 0 & -I_{n_2}\end{bmatrix}$, and $\hat\Sigma = \begin{bmatrix} I_{k_1} & 0\\ 0 & -I_{k_2}\end{bmatrix}$, the weighted all-pass P(s) becomes a J-lossless function. The properties of lossless two-port networks from the viewpoint of power wave propagation were discussed in Chap. 3. Recall that Chap. 5 demonstrated that the J-lossless and dual J-lossless properties both play an important role in CSD control systems.
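To make Lemma 7.3 concrete, here is a small check for the inner case (Σ = Σ̂ = 1) with the all-pass function P(s) = (s − 1)/(s + 1), realized as A = −1, B = 1, C = −2, D = 1 (an illustrative sketch):

```python
# Realization of P(s) = (s - 1)/(s + 1) = 1 - 2/(s + 1)
A, B, C, D = -1.0, 1.0, -2.0, 1.0
Sigma = Sigma_hat = 1.0

# (7.48): D^T Sigma D = Sigma_hat
assert D * Sigma * D == Sigma_hat
# (7.49): X B + C^T Sigma D = 0  gives X = 2
X = -C * Sigma * D / B
assert X * B + C * Sigma * D == 0
# (7.50): A^T X + X A + C^T Sigma C = 0
assert A * X + X * A + C * Sigma * C == 0

# Consequently P~(s) Sigma P(s) = Sigma_hat, i.e. |P(jw)| = 1 for all w
for w in [0.0, 0.5, 1.0, 10.0]:
    P = C / (1j * w - A) * B + D
    assert abs(abs(P) - 1.0) < 1e-12
print("P(s) is inner (all-pass)")
```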
For certain engineering systems, spectral factorization is a useful tool. Spectral
factorization separates the causal and minimum-phase component from the rest, and
this in turn reveals energy transformation involved with this system. In some sense,
a spectral factor shows the magnitude of the system. Next, the formal definition of
standard spectral factorization will be given first, and subsequently, several spectral
factorizations that are often used in control system synthesis and analysis will be
introduced. State-space formulae of these spectral factorizations will be described;
they are all obtained via weighted all-pass functions, which are constructed in a
unified framework of coprime factorizations.
Definition 7.2 [2] Consider a square matrix $\Lambda(s)$ having the properties (7.54). Then,
$$\Lambda(s) = \Phi^\sim(s)\Phi(s) \quad (7.55)$$
is called a spectral factorization of $\Lambda(s)$, where
$$\Phi(s) \in GH_\infty. \quad (7.56)$$
(7.58)
Since matrices F, W, H, and $\tilde W$ are free parameters, one can choose them to suit various requirements on the coprime factors in addition to stability. In the following, it is shown how to find these matrices for the required spectral factorizations, where
The first type of spectral factorization is to find an outer matrix Φ(s) such that $R + P^\sim(s)QP(s) = \Phi^\sim(s)\Phi(s)$; then $M^{-1}(s)$ would be the required spectral factor Φ(s) provided that M(s) is also outer. Next, it is shown how to choose F and W in the coprime factorization of P(s) to make $\begin{bmatrix} M(s)\\ N(s)\end{bmatrix}$ weighted all-pass, i.e., to satisfy (7.60) and to ensure M(s) outer.
Herein, substituting the state-space realization of (7.57) and $\Sigma = \begin{bmatrix} R & 0\\ 0 & Q\end{bmatrix}$, $\hat\Sigma = I$ into (7.48), (7.49), and (7.50) will obtain that
$$W^T\left(R + D^TQD\right)W = I, \quad (7.61)$$
$$XB + C^TQD + F^T\left(R + D^TQD\right) = 0, \quad (7.62)$$
$$(A + BF)^TX + X(A + BF) + F^TRF + (C + DF)^TQ(C + DF) = 0. \quad (7.63)$$
If one has $R + P^\sim(s)QP(s) > 0$ for all $s = j\omega$, then $R_x = R + D^TQD > 0$. Hence, define
$$W = R_x^{-1/2}, \quad (7.64)$$
$$F = -R_x^{-1}\left(B^TX + D^TQC\right), \quad (7.65)$$
where X solves
$$\left(A - BR_x^{-1}D^TQC\right)^TX + X\left(A - BR_x^{-1}D^TQC\right) - XBR_x^{-1}B^TX + C^T\left(Q - QDR_x^{-1}D^TQ\right)C = 0. \quad (7.66)$$
That is,
$$X = \mathrm{Ric}\begin{bmatrix} A - BR_x^{-1}D^TQC & -BR_x^{-1}B^T\\ -C^T\left(Q - QDR_x^{-1}D^TQ\right)C & -\left(A - BR_x^{-1}D^TQC\right)^T\end{bmatrix} \ge 0. \quad (7.67)$$
(7.68)
Dually, the spectral factorization of Case (I) is to find an outer matrix Φ(s) such that $R + P(s)QP^\sim(s) = \Phi(s)\Phi^\sim(s)$. One obtains
$$\tilde W = R_y^{-1/2}, \quad (7.70)$$
$$H = -\left(YC^T + BQD^T\right)R_y^{-1}, \quad (7.71)$$
where Y solves
$$Y\left(A - BD^TR_y^{-1}QC\right)^T + \left(A - BD^TR_y^{-1}QC\right)Y - YC^T\left(Q - QDR_y^{-1}D^TQ\right)CY + BR_y^{-1}B^T = 0. \quad (7.72)$$
That is,
$$Y = \mathrm{Ric}\begin{bmatrix}\left(A - BD^TR_y^{-1}QC\right)^T & -C^T\left(Q - QDR_y^{-1}D^TQ\right)C\\ -BR_y^{-1}B^T & -\left(A - BD^TR_y^{-1}QC\right)\end{bmatrix} \ge 0. \quad (7.73)$$
(7.74)
$$W = R_x^{-1/2}, \quad (7.75)$$
$$R_x = R + D^TQD, \quad (7.76)$$
$$F = -R_x^{-1}\left(B^TX + D^TQC\right), \quad (7.77)$$
$$X = \mathrm{Ric}\begin{bmatrix} A - BR_x^{-1}D^TQC & -BR_x^{-1}B^T\\ -C^T\left(Q - QDR_x^{-1}D^TQ\right)C & -\left(A - BR_x^{-1}D^TQC\right)^T\end{bmatrix}. \quad (7.78)$$
(7.79)
Moreover, there exists a left coprime factorization of $P(s) = \tilde M^{-1}(s)\tilde N(s)$ given by , where
$$\tilde W = R_y^{-1/2}, \quad (7.80)$$
$$R_y = R + DQD^T, \quad (7.81)$$
$$H = -\left(YC^T + BQD^T\right)R_y^{-1}, \quad (7.82)$$
$$Y = \mathrm{Ric}\begin{bmatrix}\left(A - BD^TR_y^{-1}QC\right)^T & -C^T\left(Q - QDR_y^{-1}D^TQ\right)C\\ -BR_y^{-1}B^T & -\left(A - BD^TR_y^{-1}QC\right)\end{bmatrix}. \quad (7.83)$$
Furthermore, an outer function such that $R + P(s)QP^\sim(s) = \Phi(s)\Phi^\sim(s)$ is given by
(7.84)
In Case (I), different R and Q correspond to different applications. Four
applications of Case (I) are noted in the following.
Let $P(s) = N(s)M^{-1}(s) = \tilde M^{-1}(s)\tilde N(s)$ be a right (left) coprime factorization.
or
given by , where
$$F = -R_x^{-1}\left(B^TX + D^TC\right), \quad (7.87)$$
$$R_x = I + D^TD, \quad (7.88)$$
$$W^T\left(I + D^TD\right)W = I, \quad (7.89)$$
$$\left(A - BR_x^{-1}D^TC\right)^TX + X\left(A - BR_x^{-1}D^TC\right) - XBR_x^{-1}B^TX + C^T\left(I - DR_x^{-1}D^T\right)C = 0. \quad (7.90)$$
In this case, the inverse of M(s) is not necessarily stable for an unstable P(s), which is not required in this application.
In addition, a normalized left coprime factorization $P(s) = \tilde M^{-1}(s)\tilde N(s)$ can be found by , where
$$H = -\left(YC^T + BD^T\right)R_y^{-1}, \quad (7.91)$$
$$R_y = I + DD^T, \quad (7.92)$$
$$\tilde W^T\left(I + DD^T\right)\tilde W = I, \quad (7.93)$$
$$A^TX + XA - XBB^TX + C^TC = 0. \quad (7.95)$$
$$YA^T + AY - YC^TCY + BB^T = 0. \quad (7.96)$$
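For D = 0 these two AREs can be solved directly; a brief numerical sketch with a hypothetical plant follows (scipy's CARE solver matches (7.95) with Q = C^TC, R = I, and (7.96) follows by duality):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical strictly proper plant
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# (7.95): A^T X + X A - X B B^T X + C^T C = 0
X = solve_continuous_are(A, B, C.T @ C, np.eye(1))
# (7.96): Y A^T + A Y - Y C^T C Y + B B^T = 0  (swap A -> A^T, B <-> C^T)
Y = solve_continuous_are(A.T, C.T, B @ B.T, np.eye(1))

print(np.allclose(A.T @ X + X @ A - X @ B @ B.T @ X + C.T @ C, 0))  # True
print(np.allclose(Y @ A.T + A @ Y - Y @ C.T @ C @ Y + B @ B.T, 0))  # True
print(np.linalg.eigvals(A - B @ B.T @ X).real.max() < 0)            # True: A + BF Hurwitz with F = -B^T X
```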
Applying the method discussed in Case (I) to solve it, one has
Hence,
$$R_x = R + D^TQD = 2, \qquad W = R_x^{-1/2} = \frac{1}{\sqrt 2}.$$
From
$$\left(A - BR_x^{-1}D^TQC\right)^TX + X\left(A - BR_x^{-1}D^TQC\right) - XBR_x^{-1}B^TX + C^T\left(Q - QDR_x^{-1}D^TQ\right)C = 0,$$
one obtains
$$A + BF = -2.2361 < 0.$$
clear all; clc;
disp('Normalized Coprime Factorization')
A = input('A: ');
B = input('B: ');
C = input('C: ');
D = input('D: ');
R = 1;
Q = 1;
sys = ss(A,B,C,D);
% Right coprime factorization
Rx = R + D'*Q*D;
W = Rx^(-1/2);
Hx_A = A - B*Rx^(-1)*D'*Q*C;
Hx_B = B;
Hx_Q = C'*(Q - Q*D*Rx^(-1)*D'*Q)*C;
[x,l,g] = care(Hx_A,Hx_B,Hx_Q,Rx);
F = -Rx^(-1)*(B'*x + D'*Q*C);
disp('eigenvalues of A+B*F')
eig(A+B*F)
disp('Value of X')
x
M_inv = ss(A,B,Rx^(-1/2)*(B'*x + D'*Q*C),Rx^(1/2));
disp('State-space of M')
M = inv(M_inv)
disp('State-space of N')
N = ss(A+B*F,B*W,C+D*F,D*W)
% Left coprime factorization
Ry = R + D*Q*D';
W_w = Ry^(-1/2);
Hy_A = A - B*D'*Ry^(-1)*Q*C;
Hy_B = C';
Hy_Q = B*Ry^(-1)*B';
[y,ll,gg] = care(Hy_A',Hy_B,Hy_Q,inv(Q - Q*D*Ry^(-1)*D'*Q));
H = -(y*C' + B*Q*D')*Ry^(-1);
M_w_inv = ss(A,(y*C' + B*Q*D')*Ry^(-1/2),C,Ry^(1/2));
disp('State-space of Mw')
Mw = inv(M_w_inv)
disp('State-space of Nw')
Nw = ss(A+H*C,B+H*D,W_w*C,W_w*D)
Fig. 7.4 Block diagram of the LQR problem with weights $R^{1/2}$ and $Q^{1/2}$
As shown in Fig. 7.4, the linear quadratic regulation (LQR) problem is to find a stabilizing state feedback gain F to minimize the deterministic cost function
$$J = \int_0^\infty\left(u^TRu + y^TQy\right)dt, \quad (7.97)$$
where R > 0 and Q ≥ 0 are the weights. From Fig. 7.4, the right coprime factorization gives $\begin{bmatrix} u\\ y\end{bmatrix} = \begin{bmatrix} M\\ N\end{bmatrix}u'$. Let
$$\begin{bmatrix} u_{lqr}\\ y_{lqr}\end{bmatrix} = \begin{bmatrix} R^{1/2}M\\ Q^{1/2}N\end{bmatrix}u'. \quad (7.98)$$
It can be seen in Chap. 8 that the optimal LQR control problem is equivalent to solving for particular coprime factors such that (7.99) holds with $\Sigma = \begin{bmatrix} R & 0\\ 0 & Q\end{bmatrix}$ and $\hat\Sigma = I$. This optimal LQR problem for any initial state is
$$N^\sim(s)N(s) = I. \quad (7.100)$$
So, N(s) is the inner part and $M^{-1}(s) = \Phi(s) \in GH_\infty$, where M(s) is the outer part of $P(s) \in RH_\infty$. That is, M(s) is an outer function.
Example 7.5 Given $P(s) = \frac{s-2}{s+5}$, compute the inner-outer factorization such that $P^\sim(s)P(s) = \Phi^\sim(s)\Phi(s)$, using spectral factorizations. Implement the solution to construct the right coprime factors of G(s).
(1) Compute $\Phi(s) \in GH_\infty$ directly. It is evident that
$$P^\sim(s)P(s) = \frac{-s-2}{-s+5}\cdot\frac{s-2}{s+5} = \frac{-s+2}{-s+5}\cdot\frac{s+2}{s+5} = \Phi^\sim(s)\Phi(s),$$
with $\Phi(s) = \frac{s+2}{s+5}$.
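The direct computation in part (1) can be verified on the imaginary axis (a quick illustrative check):

```python
# Example 7.5 check: P(s) = (s - 2)/(s + 5) and its outer factor Phi(s) = (s + 2)/(s + 5)
P = lambda s: (s - 2) / (s + 5)
Phi = lambda s: (s + 2) / (s + 5)

# P~(jw)P(jw) = |P(jw)|^2 must equal |Phi(jw)|^2 for all w
for w in [0.0, 1.0, 3.0, 100.0]:
    s = 1j * w
    assert abs(abs(P(s)) ** 2 - abs(Phi(s)) ** 2) < 1e-12
print("spectral densities of P and Phi agree on the jw-axis")
```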
Hence,
$$R_x = R + D^TQD = 1, \qquad W = R_x^{-1/2} = 1.$$
From
$$\left(A - BR_x^{-1}D^TQC\right)^TX + X\left(A - BR_x^{-1}D^TQC\right) - XBR_x^{-1}B^TX + C^T\left(Q - QDR_x^{-1}D^TQ\right)C = 0,$$
one obtains F = 3. Then,
$$A + BF = -5 + 1\cdot 3 = -2 < 0$$
is Hurwitz. Then, by (7.57), an outer function such that $\left(M^{-1}(s)\right)^\sim M^{-1}(s) = \Phi^\sim(s)\Phi(s)$ is given by
factorization
(7.103)
where
$$R_x = \gamma^2I - D^TD, \quad (7.104)$$
and
$$X = \mathrm{Ric}\begin{bmatrix} A + BR_x^{-1}D^TC & BR_x^{-1}B^T\\ -C^T\left(I + DR_x^{-1}D^T\right)C & -\left(A + BR_x^{-1}D^TC\right)^T\end{bmatrix} \ge 0. \quad (7.105)$$
The spectral factorization of (7.101) is actually the well-known and widely applied bounded real lemma (BRL) [4]. The above deduction shows that the BRL can be proved via a spectral factorization, which is solved by a unified approach using the weighted all-pass concept and coprime factorization.
The second case of spectral factorization problems is to find an outer matrix Φ(s) such that, for $R = R^T > 0$,
$$R + P^\sim(s) + P(s) = \Phi^\sim(s)\Phi(s). \quad (7.106)$$
Then, because
$$R + P^\sim(s) + P(s) = R + \left(M^{-1}(s)\right)^\sim N^\sim(s) + N(s)M^{-1}(s)$$
$$= \left(M^\sim(s)\right)^{-1}\left(M^\sim(s)RM(s) + N^\sim(s)M(s) + M^\sim(s)N(s)\right)M^{-1}(s) = \left(M^{-1}(s)\right)^\sim M^{-1}(s),$$
it suffices to make the middle factor equal to the identity. One has
$$R_x = R + D^T + D > 0, \quad (7.108)$$
and
$$W = R_x^{-1/2}, \quad (7.109)$$
$$F = -R_x^{-1}\left(B^TX + C\right), \quad (7.110)$$
where
$$X = \mathrm{Ric}\begin{bmatrix} A - BR_x^{-1}C & -BR_x^{-1}B^T\\ C^TR_x^{-1}C & -\left(A - BR_x^{-1}C\right)^T\end{bmatrix} \le 0. \quad (7.111)$$
(7.112)
and
(7.113)
Because both M(s) and $M^{-1}(s)$ are stable, $\Phi(s) = M^{-1}(s)$ is a solution to (7.106). Note that for the case R = 0, $P^\sim(s) + P(s) = \Phi^\sim(s)\Phi(s)$ is the spectral factorization of a strictly positive real matrix. Readers can refer to the definitions of positive real functions in Chap. 3.
Also, from $P(s) = \tilde M^{-1}(s)\tilde N(s)$, $\Sigma = \begin{bmatrix} R & I\\ I & 0\end{bmatrix}$, and $\hat\Sigma = I$, one finds
$$\begin{bmatrix}\tilde M & \tilde N\end{bmatrix}\begin{bmatrix} R & I\\ I & 0\end{bmatrix}\begin{bmatrix}\tilde M & \tilde N\end{bmatrix}^\sim = I. \quad (7.114)$$
Hence,
$$\tilde M^{-1}(s)\left(\tilde M(s)R\tilde M^\sim(s) + \tilde N(s)\tilde M^\sim(s) + \tilde M(s)\tilde N^\sim(s)\right)\left(\tilde M^{-1}(s)\right)^\sim = \tilde M^{-1}(s)\left(\tilde M^{-1}(s)\right)^\sim$$
$$\Rightarrow R + G(s) + G^\sim(s) = \tilde M^{-1}(s)\left(\tilde M^{-1}(s)\right)^\sim = \Phi(s)\Phi^\sim(s). \quad (7.116)$$
Suppose that $R + P(s) + P^\sim(s) > 0$ (positive real) for all $s = j\omega$. With $\Sigma = \begin{bmatrix} R & I\\ I & 0\end{bmatrix}$ and the same manipulation, one gathers
$$\tilde W = R_y^{-1/2}, \quad (7.117)$$
$$H = -\left(YC^T + B\right)R_y^{-1}, \quad (7.118)$$
$$R_y = R + D + D^T > 0, \quad (7.119)$$
and
$$Y = \mathrm{Ric}\begin{bmatrix}\left(A - BR_y^{-1}C\right)^T & -C^TR_y^{-1}C\\ BR_y^{-1}B^T & -\left(A - BR_y^{-1}C\right)\end{bmatrix} \le 0. \quad (7.120)$$
is given by
(7.122)
The above is summarized in the next lemma for the convenience of future
reference.
Lemma 7.5 For R > 0, there exists a right coprime factorization of $P(s) = N(s)M^{-1}(s)$ given by , where
$$W = R_x^{-1/2}, \quad (7.123)$$
$$F = -R_x^{-1}\left(B^TX + C\right), \quad (7.124)$$
$$R_x = R + D^T + D > 0, \quad (7.125)$$
and
$$X = \mathrm{Ric}\begin{bmatrix} A - BR_x^{-1}C & -BR_x^{-1}B^T\\ C^TR_x^{-1}C & -\left(A - BR_x^{-1}C\right)^T\end{bmatrix} \le 0. \quad (7.126)$$
(7.127)
given by , where
$$\tilde W = R_y^{-1/2}, \quad (7.128)$$
$$H = -\left(YC^T + B\right)R_y^{-1}, \quad (7.129)$$
$$R_y = R + D + D^T > 0, \quad (7.130)$$
and
$$Y = \mathrm{Ric}\begin{bmatrix}\left(A - BR_y^{-1}C\right)^T & -C^TR_y^{-1}C\\ BR_y^{-1}B^T & -\left(A - BR_y^{-1}C\right)\end{bmatrix} \le 0. \quad (7.131)$$
(7.132)
$$R + P^\sim(s) + P(s) = 2 + \frac{-s-4}{-s+8} + \frac{s-4}{s+8} = \frac{4s^2 - 64}{s^2 - 64} = \frac{2(-s+4)}{-s+8}\cdot\frac{2(s+4)}{s+8} = \Phi^\sim(s)\Phi(s),$$
with $\Phi(s) = \frac{2(s+4)}{s+8}$.
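This identity can also be confirmed numerically on the imaginary axis (an illustrative check):

```python
# Case (II) check: P(s) = (s - 4)/(s + 8), R = 2, Phi(s) = 2(s + 4)/(s + 8)
P = lambda s: (s - 4) / (s + 8)
Phi = lambda s: 2 * (s + 4) / (s + 8)

# On s = jw: R + P~(jw) + P(jw) = 2 + 2 Re P(jw) must equal |Phi(jw)|^2
for w in [0.0, 1.0, 8.0, 50.0]:
    s = 1j * w
    assert abs((2 + 2 * P(s).real) - abs(Phi(s)) ** 2) < 1e-12
print("R + P~ + P = Phi~ Phi on the jw-axis")
```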
Let $R = 2 = R^T > 0$; then
$$R_x = R + D^T + D = 2 + 1 + 1 = 4 > 0,$$
$$W = R_x^{-1/2} = 4^{-1/2} = \frac{1}{2}.$$
From the corresponding ARE, one obtains X = −4 and
$$F = -R_x^{-1}\left(B^TX + C\right) = -\frac{1}{4}\left(1\cdot(-4) + (-12)\right) = 4.$$
Then,
$$A + BF = -8 + 1\cdot 4 = -4 < 0$$
is Hurwitz. An outer function such that $2 + \frac{s-4}{s+8} + \frac{-s-4}{-s+8} = \Phi^\sim(s)\Phi(s)$ is given by
Let $P(s) = \begin{bmatrix} P_{11}(s) & P_{12}(s)\\ P_{21}(s) & P_{22}(s)\end{bmatrix} \in RH_\infty^{(n_1+n_2)\times(k_1+k_2)}$ be partitioned with $n_1 \ge k_1$ and $n_2 = k_2$. The final case, the so-called J-spectral factorization, is to find a matrix function $\Phi(s) \in GH_\infty^{(k_1+k_2)\times(k_1+k_2)}$ such that
It is clear that if the coprime factorization of P(s) is found such that M(s) is outer, then $\Phi = M^{-1}$ would be a solution to this spectral factorization problem. Using the state-space formula in (7.57), $\begin{bmatrix} M(s)\\ N(s)\end{bmatrix}$ is weighted all-pass with a specified Σ and $\hat\Sigma$ if and only if there exists a nonsingular matrix W satisfying $W^TR_xW = J_2$, where $R_x = D^TJ_1D$, and the corresponding ARE is
$$\left(A - BR_x^{-1}D^TJ_1C\right)^TX + X\left(A - BR_x^{-1}D^TJ_1C\right) - XBR_x^{-1}B^TX + C^T\left(J_1 - J_1DR_x^{-1}D^TJ_1\right)C = 0. \quad (7.136)$$
is given by
(7.139)
where
$$F = -R_x^{-1}\left(B^TX + D^TJ_1C\right). \quad (7.140)$$
Note that a solution W of $W^TR_xW = J_2$ is given by $W = \begin{bmatrix} W_{11} & 0\\ W_{21} & W_{22}\end{bmatrix}$, where
$$W_{22} = \left(D_{22}^TD_{22} - D_{12}^TD_{12}\right)^{-1/2}, \quad (7.141)$$
$$W_{11} = \left[D_{11}^TD_{11} - D_{21}^TD_{21} + \left(D_{12}^TD_{11} - D_{22}^TD_{21}\right)^TW_{22}^2\left(D_{12}^TD_{11} - D_{22}^TD_{21}\right)\right]^{-1/2}. \quad (7.142)$$
Clearly, if $D_{21} = 0$ and $D_{22}^TD_{22} > D_{12}^TD_{12}$, then W is nonsingular.
Dually, let $P(s) = \begin{bmatrix} P_{11}(s) & P_{12}(s)\\ P_{21}(s) & P_{22}(s)\end{bmatrix} \in RH_\infty^{(n_1+n_2)\times(k_1+k_2)}$ be partitioned with $n_1 = k_1$ and $n_2 \le k_2$. One needs to find a matrix function $\tilde\Phi(s) \in GH_\infty^{(n_1+n_2)\times(n_1+n_2)}$ for the dual J-spectral factorization, where Y solves
$$Y\left(A - BJ_2D^TR_y^{-1}C\right)^T + \left(A - BJ_2D^TR_y^{-1}C\right)Y - YC^TR_y^{-1}CY + B\left(J_2 - J_2D^TR_y^{-1}DJ_2\right)B^T = 0. \quad (7.147)$$
Hence,
$$Y = \mathrm{Ric}\begin{bmatrix}\left(A - BJ_2D^TR_y^{-1}C\right)^T & -C^TR_y^{-1}C\\ -B\left(J_2 - J_2D^TR_y^{-1}DJ_2\right)B^T & -\left(A - BJ_2D^TR_y^{-1}C\right)\end{bmatrix}. \quad (7.148)$$
is given by
(7.150)
where
$$\tilde W_{11} = \left[D_{12}^T\left(I - D_{11}D_{11}^T\right)^{-1}D_{12}\right]^{-1/2}, \quad (7.153)$$
$$\tilde W_{21} = \left[D_{21}\left(I - D_{11}^TD_{11}\right)^{-1}D_{21}^T\right]^{-1/2}D_{21}\left(I - D_{11}^TD_{11}\right)^{-1}D_{11}^TD_{12}. \quad (7.154)$$
Furthermore, the proposed methods can be directly applied to the discrete-time spectral factorization. Equations (7.134) and (7.145) show that $N^\sim J_1N = J_2$ ($\tilde NJ_2\tilde N^\sim = J_1$). Hence, the coprime factor N ($\tilde N$) is J-lossless (dual J-lossless). The factorization of P(s) into J-lossless and outer parts has important applications. Therefore, the results are summarized in the following theorem.
1. There exists an rcf $P(s) = \Theta(s)\Pi^{-1}(s)$ such that Θ(s) is J-lossless and Π(s) is outer if there exists a nonsingular matrix $W = \begin{bmatrix} W_{11} & 0\\ W_{21} & W_{22}\end{bmatrix} \in R^{(k_1+k_2)\times(k_1+k_2)}$ satisfying $W^TD^TJ_1DW = J_2$, and $H_x \in \mathrm{dom(Ric)}$.
" #
s1
0 .1C1/.1C1/
Example 7.7 Let P .s/ D sC2 2 RH1 . Use the J-spectral
0 sC3sC4
factorization, which is defined earlier in this section, to find an outer matrix function
.1 C 1/ .1 C 1/
ˆ(s) 2 GH 1 such that P (s)J1 P(s) D ˆ (s)J2 ˆ(s).
Here, one can determine the state-space realization of P(s) as
By Case (III), $R_x = D^TJ_1D = \begin{bmatrix} 1 & 0\\ 0 & -1\end{bmatrix}$ leads to
$$\left(A - BR_x^{-1}D^TJ_1C\right)^TX + X\left(A - BR_x^{-1}D^TJ_1C\right) - XBR_x^{-1}B^TX + C^T\left(J_1 - J_1DR_x^{-1}D^TJ_1\right)C = 0$$
and
$$X = \begin{bmatrix} 2 & 0\\ 0 & 0\end{bmatrix}, \qquad F = -R_x^{-1}\left(B^TX + D^TJ_1C\right) = \begin{bmatrix} 1 & 0\\ 0 & 1\end{bmatrix}.$$
Thus,
$$A + BF = \begin{bmatrix} -2 & 0\\ 0 & -4\end{bmatrix} + \begin{bmatrix} 1 & 0\\ 0 & 1\end{bmatrix}\begin{bmatrix} 1 & 0\\ 0 & 1\end{bmatrix} = \begin{bmatrix} -1 & 0\\ 0 & -3\end{bmatrix} < 0$$
is Hurwitz. From
$$W_{22} = \left(D_{22}^TD_{22} - D_{12}^TD_{12}\right)^{-1/2},$$
$$W_{11} = \left[D_{11}^TD_{11} - D_{21}^TD_{21} + \left(D_{12}^TD_{11} - D_{22}^TD_{21}\right)^TW_{22}^2\left(D_{12}^TD_{11} - D_{22}^TD_{21}\right)\right]^{-1/2},$$
it yields
$$W = \begin{bmatrix} W_{11} & 0\\ W_{21} & W_{22}\end{bmatrix} = \begin{bmatrix} 1 & 0\\ 0 & 1\end{bmatrix}.$$
Then,
Exercises
" #
s1
0
1. Find a J-lossless spectral coprime factorization of G.s/ D sC4
s1 .
0 sC2
2. Given G.s/ D s1s4
, compute spectral factor ˆ(s) using spectral factorizations
Case (I) and R D
2 Q D 1. 3
0,
3.s3/.s4/
3. Given G.s/ D 4 4.s3/.s4/ 5, compute spectral factor ˆ(s) using the spectral
.sC1/.sC2/
.sC1/.sC2/
factorization Case (I) and R D 0, Q D 1.
4. Given G.s/ D sC4s3
D N.s/M 1 .s/, compute the normalized coprime factors
N(s) and M(s) using spectral factorizations.
5. Given G.s/ D 2.s1/.s3/
.sC1/.sC2/ , compute inner-outer factorization using coprime
factorizations.
6. Compute the J-lossless factorization of the following systems:
(a)
(b)
References
212 8 CSD Approach to Stabilization Control and H2 Optimal Control
8.1 Introduction
The issue of controller design has always played a major role in dealing with
dynamic systems. Abundant techniques have been developed based on numerous
mathematical tools and various requirements. The first controller is often taken to be the governor for steam engines investigated by J. Watt in 1787, which initiated the systematic study of control systems. The development of the field can roughly be divided into four periods. In 1932, Nyquist [14] used the theory of complex variables to build a stability criterion for single-input, single-output systems based on the frequency response. By the late 1930s, Bode and Nichols had substantially advanced frequency-domain analysis; this was followed by the root-locus method developed by Evans in 1948 [4], which characterized the importance of the poles and zeros of a system. These methods form the core of classical control. During the second period, Kalman and others developed optimal control theory based on the state-space description in the 1960s. Under the assumption that the plant model is accurate, the so-called linear quadratic regulator (LQR) was derived, which inherently provides an infinite gain margin and at least 60° of phase margin, as proved by Safonov and Athans [16].
The third period, led by Rosenbrock and MacFarlane in the 1970s, turned back to frequency-domain analysis because of its good robustness properties, with much of the focus on multivariable systems. Although Rosenbrock [15] and MacFarlane [13] promoted integrated designs to overcome the stability problems caused by variations of the loop gain, there was no complete framework for minimizing the influence of uncertainties. In summary, model uncertainty and disturbances were not considered formally in controller design during these three periods.
Since the end of the 1980s, robust control theory has become increasingly well known as the issue of model uncertainty gained recognition over the years, marking the fourth period. In robust control theory, H2 and H∞ control provide powerful tools to deal with model uncertainty and have been developed extensively for linear time-invariant systems since the work of Zames [5, 19, 20]. In 1992, Smith and Doyle [17] optimized a perturbed coprime factor plant and connected robust control issues to coprime factorizations. The contribution is significant in control synthesis and has received considerable attention in later developments [7]. In Smith and Doyle's method, noise is utilized as an exogenous input of a generalized plant, with the uncertainty block connected around it in feedback. Furthermore, the general model consistency problem was solved in the frequency domain for uncertain discrete-time plants represented by linear fractional transformations (LFTs) in [2]. Chen showed that these general consistency problems can be transformed into a set of convex optimization problems, which can be evaluated efficiently by mathematical techniques. Moreover, Benoit and Bruce [1] simplified the consistency examination, making the method more easily applicable.
8.2 Characterization of All Stabilizing Controllers

Recall the general feedback control framework discussed in Chap. 4, as shown in Fig. 8.1, where the two-port transfer function matrix P = [ P11 P12 ; P21 P22 ] denotes the standard control configuration (SCC) and K is the stabilizing controller to be designed. The closed-loop transfer function from w to z is given as

LFTl([ P11 P12 ; P21 P22 ], K) = P11 + P12 K(I − P22 K)⁻¹ P21.    (8.1)
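Evaluated at any fixed complex frequency, (8.1) is plain matrix arithmetic. A minimal numerical sketch (NumPy; the matrices below are hypothetical test data, not from the text):

```python
import numpy as np

def lft_lower(P11, P12, P21, P22, K):
    # Lower LFT of (8.1): P11 + P12 K (I - P22 K)^{-1} P21
    I = np.eye(P22.shape[0])
    return P11 + P12 @ K @ np.linalg.solve(I - P22 @ K, P21)

# Hypothetical 1x1 blocks: P11 = 0, P12 = P21 = 1, P22 = 0.5, K = 1
T = lft_lower(np.zeros((1, 1)), np.eye(1), np.eye(1),
              0.5 * np.eye(1), np.eye(1))
print(T)  # [[2.]] since 0 + 1*1*(1 - 0.5)^{-1}*1 = 2
```

Note that this book closes the loop with (I − P22K)⁻¹; some other texts use (I + P22K)⁻¹.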
Fig. 8.1 The standard control configuration

Fig. 8.2 Two coupled CSDs for an LFT matrix: (a) CSDr–CSDl, (b) CSDl–CSDr
Note that if both P21 and P12 are not square (i.e., the most general four-block problem), then a single equivalent CSD does not exist for the purpose. Therefore, utilizing pseudo signals is a possible approach for satisfying the square augmentation, as in [9, 11]. However, as mentioned earlier in Chap. 2, the P matrix in the SCC can always be represented by two coupled CSDs, i.e., right + left or left + right, as follows:

LFT([ P11 P12 ; P21 P22 ], K) = CSDr([ P12 P11 ; 0 I ], CSDl([ I 0 ; P22 P21 ], K))    (8.2)

or

LFT([ P11 P12 ; P21 P22 ], K) = CSDl([ I −P11 ; 0 −P21 ], CSDr([ P12 0 ; P22 −I ], K)).    (8.3)
This shows the equivalence of an LFT and a pair of CSD representations. In other words, the P matrix in Fig. 8.1 can always be represented either by a right CSD matrix associated with a left CSD matrix or by a left CSD matrix associated with a right CSD matrix, as illustrated in Fig. 8.2. Note that CSDr–CSDl will be used in the following to denote the framework of a right CSD associated with a left one, and CSDl–CSDr denotes its dual. In the following, a new approach to directly characterize stabilizing solutions for a given SCC plant is presented. The approach follows the framework of constructing two coupled CSDs and solving two coprime factorizations. Two methods are considered, corresponding to the two CSD representations in Fig. 8.2, respectively.
Fig. 8.3 Multiplying M* at the right terminal
(8.4)

where Π̃ ∈ GH∞ is a unit in RH∞. By letting K = CSDl(Π̃, Φ), ∀Φ ∈ RH∞, the stabilization problem of Fig. 8.1 is represented in Fig. 8.4, where the closed-loop transfer function of w ↦ z is given as

z = CSDr([ G11 G12 ; G21 G22 ], CSDl([ Θ̃11 Θ̃12 ; Θ̃21 Θ̃22 ], Φ)) w.    (8.6)
Fig. 8.4 G̃2 = Π̃⁻¹Θ̃ and K = CSDl(Π̃, Φ)
Fig. 8.5 The coupled CSD representation with triangular G1 and Θ̃
As can be seen, the overall stability of the feedback system in Fig. 8.4 is not obvious, and it would be difficult to verify the stability with a general form of G1 and Θ̃. However, if G1 in the rcf of (8.4) can be found to be in the triangular form G1 = [ G11 G12 ; 0 I ], then one concludes that w′ = w, as shown in Fig. 8.5. Furthermore, in the lcf G̃2 = Π̃⁻¹Θ̃, Θ̃ can be found to be in the upper triangular form Θ̃ = [ I Θ̃12 ; 0 Θ̃22 ]. (Detailed formulae of such factorizations in the state-space form will be given in Sect. 8.3.) With such G1 and Θ̃, the input/output relation between w and z can be determined by direct manipulation as

z = CSDr([ G11 G12 ; 0 I ], CSDl([ I Θ̃12 ; 0 Θ̃22 ], Φ)) w = (G12 − G11(Θ̃12 − ΦΘ̃22)) w.    (8.7)

Therefore, because the coprime factors G1, Θ̃ and the free parameter Φ are all in RH∞, the closed-loop transfer function of w ↦ z is naturally stable.
K = CSDl(Π̃, Φ) = (Π̃11 + ΦΠ̃21)⁻¹(Π̃12 + ΦΠ̃22)    (8.8)
Dually, the two-port SCC plant is described by a left CSD associated with a right one, as shown in Fig. 8.2b. To solve this stabilization problem, let

(8.9)

be an lcf over RH∞. Also let G2 = ΘΠ⁻¹ be an rcf over RH∞, where Π ∈ GH∞ is a unit in RH∞. Dually, given an invertible M̃* ∈ RH∞, by Property 5.4, one has
LFTl(P, K) = CSDl(M̃*P*1, CSDr(M̃*P*2, K))
z = CSDl([ I G̃12 ; 0 G̃22 ], CSDr([ Θ11 Θ12 ; 0 I ], Φ)) w = ((Θ11Φ + Θ12)G̃22 − G̃12) w.    (8.12)
8.3 State-Space Formulae of Stabilizing Controllers

In the following, the set of stabilizing controllers in state-space form is constructed via three steps using a right CSD coupled with a left CSD. In the first step, the SCC plant is rearranged into a column-stacked matrix form, which directly characterizes the transfer function in terms of a coupled CSD representation, and then a right coprime factorization is found whose "numerator" matrix in the top half is upper triangular. In the second step, a left coprime factorization of the left CSD matrix (which is stable) is found such that the "denominator" matrix is outer and the "numerator" matrix is also upper triangular. Finally, all admissible controllers can be generated by the denominator, which is a square bistable two-port matrix, i.e., in GH∞. The details of the construction in state-space form for each step are described below.
Consider the LFT interconnected system of Fig. 8.1, where P(s) is a (q1+q2) × (m1+m2) proper, real-rational transfer function matrix given by

(8.14)

with (A, B2) stabilizable and (C2, A) detectable. One can assume D22 = 0 without loss of generality concerning the stabilization problem [6]. Recall that the solid lines in a matrix as in (8.14) show that the matrix is a compact expression for a transfer function matrix, while a dotted line in a matrix indicates the usual matrix partition for the sake of clarity. In fact, (8.14) has the state-space matrix form

[ ẋ ; z ; y ] = [ A B1 B2 ; C1 D11 D12 ; C2 D21 0 ][ x ; w ; u ]    (8.15)
with initial condition x(0) D 0. As presented in Chap. 6, the following introduces the
state-space procedures for the required coprime factorizations to parameterize a set
of proper, real-rational controllers which stabilizes the SCC plant P.
An advantage of using the expression of a control system state-space model as in (8.15) is that usual (constant) matrix manipulations can be applied when there are changes of system variables, input and output variables in particular. For example, for the state feedback control u = Fx + Wu′, the state-space form of the closed-loop system can be readily obtained from (8.15), with [ x ; w ; u ] = [ I 0 0 ; 0 I 0 ; F 0 W ][ x ; w ; u′ ], as

[ ẋ ; z ; y ] = [ A B1 B2 ; C1 D11 D12 ; C2 D21 0 ][ I 0 0 ; 0 I 0 ; F 0 W ][ x ; w ; u′ ] = [ A+B2F  B1  B2W ; C1+D12F  D11  D12W ; C2  D21  0 ][ x ; w ; u′ ].    (8.16)
This simple and intuitive approach should appeal to engineers and will be used extensively in the manipulations in the rest of this book.
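As a sketch of this constant-matrix manipulation (with hypothetical A, B, C, D data and gains F, W), the closed-loop realization of (8.16) is obtained by a single block-matrix product:

```python
import numpy as np

# Hypothetical realization (8.15): 2 states, scalar w, u, z, y
A  = np.array([[0., 1.], [-2., -3.]])
B1 = np.array([[0.], [1.]]); B2 = np.array([[0.], [1.]])
C1 = np.array([[1., 0.]]);   D11 = np.zeros((1, 1)); D12 = np.array([[1.]])
C2 = np.array([[0., 1.]]);   D21 = np.array([[1.]])

F = np.array([[-1., -1.]])   # state feedback gain (assumed)
W = np.array([[2.]])          # input transformation (assumed)

S = np.block([[A,  B1,  B2 ],
              [C1, D11, D12],
              [C2, D21, np.zeros((1, 1))]])
T = np.block([[np.eye(2),        np.zeros((2, 1)), np.zeros((2, 1))],
              [np.zeros((1, 2)), np.eye(1),        np.zeros((1, 1))],
              [F,                np.zeros((1, 1)), W]])
CL = S @ T  # realization of the closed loop under u = Fx + Wu'

# The blocks match (8.16): A + B2 F, C1 + D12 F, etc.
assert np.allclose(CL[:2, :2], A + B2 @ F)
assert np.allclose(CL[2:3, :2], C1 + D12 @ F)
```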
Consider the coupled CSD representation depicted in Fig. 8.2a. Then (8.15) can be rewritten by exchanging the order of rows and columns to give the state-space realization of the coupled CSD representation as

(8.17)
(8.19)

where F = [ Fu ; Fw ] is chosen such that A + B1Fw + B2Fu is Hurwitz and both Wuu and Www are nonsingular. The order of the vectors on the left and right sides of the first equation above is arranged according to [ P1* ; P2* ; I ], which is the transfer function matrix from the open-loop system input [ u ; w ] to the output [ z ; w ; u ; y ] stacked above the input itself. To find the right coprime factors with the required triangular form, let Fw = 0, Wwu = 0, and Www = I. Then the coprime factors are readily derived from the above equation as
(8.20)
where
(8.21)
(8.22)
(8.23)
Note that A + B2Fu is Hurwitz by the selection requirement set earlier, given the present choice of Fw = 0. As can be seen above, multiplying the denominator M* of (8.23) on the right side of the CSD representations, as shown in Fig. 8.3, does not change the signal flow of the overall system but leads to a stable block-triangular right CSD matrix G1 and a stable left CSD matrix G̃2.
Step 2: Find an lcf of GQ 2 with an upper block triangular numerator.
(8.25)

where Θ̃(s) ∈ RH∞ is given by
(8.26)
(8.27)
It can be verified that the A-matrix of Π̃⁻¹(s) is A + B2Fu and, therefore, the denominator of (8.27) belongs to GH∞.
As illustrated in Fig. 8.9, which was presented in Chap. 5, the left CSD characterization of (8.27) can be equivalently transformed into its LFT form by (5.123), denoted by Π̃P, such that K = CSDl(Π̃, Φ) = LFTl(Π̃P, Φ), where

(8.28)

(8.29)

(8.30)

Without loss of generality, one may assume D22 = 0. In Step 3 above, it is obvious by letting Wuu = I and W̃yy = I that Π̃P(s) of (8.28) is identical to J(s), which shows that the set of controllers generated by (8.8) (i.e., the state-space form of (8.27)) indeed includes all stabilizing controllers.
A dual scheme of Method I can also be developed, in which a left CSD is coupled with a right CSD. The procedure is similar to that of Method I. The details are summarized as follows.

Step 1: Find an lcf of an SCC plant P with an upper block triangular numerator G̃1.
In the dual approach of stabilization, the state-space formulae can be derived as
follows. A state-space realization of the row stacked matrix for (8.9) is given by
(8.31)
(8.32)
(8.33)
where H = [ Hz Hy ] is chosen such that A + HzC1 + HyC2 is Hurwitz, and W̃zz and W̃yy are nonsingular. Letting Hz = 0, W̃yz = 0, and W̃zz = I in the above equation will yield the required coprime factors as
(8.34)
(8.35)
(8.36)
(8.37)
where
(8.38)
(8.39)
As illustrated in Fig. 8.10, the right CSD characterization can also be transformed into its LFT form, denoted by ΠP, such that K = CSDr(Π, Φ) = LFTl(ΠP, Φ), where
(8.40)
(8.41)
Note that the central solution is the same as that obtained by (8.29).
So far, two alternative methods for finding all stabilizing controllers have been characterized. It should be emphasized that the right CSD associated with a left one is indeed the dual topology of the left CSD associated with a right one. It can be verified from the inverse of (8.27) that

(8.42)

Then from (8.39), one concludes that ΠΠ̃ = I, i.e., CSDr(Π, Φ) = CSDl(Π̃, Φ). This implies that Method I and Method II are virtually dual approaches to each other in the general case. Note that the notation introduced in Method I for coprime factorizations has been employed purposely in Method II so that a general Bezout identity (i.e., Π(s)Π̃(s) = I) can be constructed from these two methods. The parameters Fu, Hy, Wuu, and W̃yy in the controller generator formulae of Π (or Π̃) will be characterized by the H2 control optimization later in Sect. 8.5.
8.4 Example of Finding Stabilizing Controllers

The following example shows how to characterize all internally stabilizing solutions for a typical control problem using the proposed CSD approach. Consider the feedback stabilization problem in Fig. 8.11, where G is the controlled plant and K is the controller to be designed. Then the SCC plant of [ w1 ; w2 ] ↦ [ z1 ; z2 ] from Fig. 8.11 can be found as

P = [ P11 P12 ; P21 P22 ] = [ 0 0 I ; 0 G G ; I G G ].    (8.43)
Fig. 8.11 Feedback control system

where

LFTl(P, K) = [ K(I − GK)⁻¹   K(I − GK)⁻¹G ; GK(I − GK)⁻¹   G(I + K(I − GK)⁻¹G) ].    (8.45)
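The four entries of (8.45) can be spot-checked numerically at a single frequency; the scalar values below are hypothetical:

```python
import numpy as np

s = 1.0 + 2.0j
G = 1.0 / (s + 1.0)           # plant evaluated at s
K = -3.0                      # a constant controller
Km = np.array([[K]])

P11 = np.array([[0, 0], [0, G]])
P12 = np.array([[1], [G]])
P21 = np.array([[1, G]])
P22 = np.array([[G]])

# (8.1): P11 + P12 K (I - P22 K)^{-1} P21
T = P11 + P12 @ Km @ np.linalg.inv(np.eye(1) - P22 * K) @ P21

d = 1.0 / (1.0 - G * K)       # scalar (I - GK)^{-1}
expected = np.array([[K * d,     K * d * G],
                     [G * K * d, G * (1 + K * d * G)]])
assert np.allclose(T, expected)
```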
Then the state-space realization of the augmented SCC plant (8.43) is given by

(8.46)

(8.47)

(8.48)

(8.49)

(8.50)
To derive the CSD representations at the transfer function level, let G = NM⁻¹ = M̃⁻¹Ñ denote a doubly coprime factorization of the controlled plant. By Lemma 6.2, there exists the Bezout identity of (6.40), depicted as

[ X̃(s) −Ỹ(s) ; Ñ(s) −M̃(s) ][ M(s) −Y(s) ; N(s) −X(s) ] = [ I 0 ; 0 I ].    (8.51)
From (6.22), (6.23), (6.31), and (6.35) with W = W̃ = I, the state-space form of (8.51) can be represented by

(8.52)
On the other hand, one can represent (8.48), (8.49), and (8.50) at the transfer function level as (Fig. 8.13)

M* = [ M 0 M−I ; 0 I 0 ; 0 0 I ],    (8.53)

(8.54)

(8.55)

Step 3: Given (8.49), the required coprime factors Θ̃ and Π̃ can be found from (8.25) with Hy = H, W̃yy = I, and Wuu = I as

(8.56)

(8.57)
Fig. 8.14 Insertion of Π̃⁻¹Π̃
Therefore, Π̃(s) above can be written at the transfer function level as

(8.58)

(8.59)

LFTl(P, K) = [ (Y + MΦ)M̃   (Y + MΦ)Ñ ; N(Ỹ + ΦM̃)   N(X̃ + ΦÑ) ].    (8.60)
It can be concluded that the closed-loop transfer function of w ↦ z is stable for any Φ ∈ RH∞, and the stabilizing controllers determined by (8.57) are then given by

K = CSDl(Π̃, Φ) = (X̃ + ΦÑ)⁻¹(Ỹ + ΦM̃), ∀Φ ∈ RH∞.    (8.61)
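The parameterization can be illustrated on a scalar plant. The factors below are hypothetical choices for G(s) = 1/(s − 1) under the positive-feedback convention of (8.45); the Bezout identity (8.51) reads X̃M − ỸN = 1 in this SISO case. (Sign conventions for the Youla parameter vary between texts; the combination used here is the one that makes every closed-loop factor stable.)

```python
import numpy as np

M = lambda s: (s - 1) / (s + 1)     # G = N M^{-1} = 1/(s - 1)
N = lambda s: 1 / (s + 1)
Xt, Yt = (lambda s: 1.0), (lambda s: -2.0)   # X~, Y~ (stable, constant)
Phi = lambda s: 1 / (s + 3)                  # free parameter in RH-infinity

for s in (0.7 + 0.2j, 2.0, 5.0j):
    # Bezout identity: X~ M - Y~ N = 1
    assert abs(Xt(s) * M(s) - Yt(s) * N(s) - 1) < 1e-12
    K = (Yt(s) + Phi(s) * M(s)) / (Xt(s) + Phi(s) * N(s))  # SISO (8.61)
    G = N(s) / M(s)
    # (I - GK)^{-1} = (X~ + Phi N) M, a product of stable factors
    assert abs(1 / (1 - G * K) - (Xt(s) + Phi(s) * N(s)) * M(s)) < 1e-9
```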
(8.62)
(8.63)
(8.64)
Dually, it can also be found at the transfer function level as (Fig. 8.15)

M̃* = [ I 0 0 ; 0 I M̃−I ; 0 0 M̃ ],    (8.65)
(8.66)
Fig. 8.15 Multiplying M̃* at the left terminal
(8.67)

Furthermore, the right coprime factors Θ and Π, given by (8.38) and (8.39) with Fu = F, Wuu = I, and W̃yy = I, can be found as
(8.68)
(8.69)
(8.70)
(8.71)

z = LFTl(P, K) w, where

LFTl(P, K) = (Θ11Φ + Θ12)G̃22 − G̃12 = [ (Y + MΦ)M̃   (Y + MΦ)Ñ ; N(Ỹ + ΦM̃)   N(X̃ + ΦÑ) ].    (8.73)
As expected, the result of (8.73) is identical to the one in (8.60). Then the stabilizing controller determined by (8.70) is given by

(8.75)

whose minimum realization gives the Bezout identity ΠΠ̃ = I, which is the same as that defined in (8.51). Therefore, the set of all internally stabilizing controllers is given by K = CSDl(Π̃, Φ) = CSDr(Π, Φ), for any Φ ∈ RH∞.
8.5 Stabilization of Special SCC Formulations

The procedure of finding two successive coprime factorizations is utilized for stabilizing controller synthesis and, later, for H2 problems in the proposed CSD framework. The above solution procedure shows that the feedback control problem is reduced to two less complicated problems, which involve the determination of a feedback regulator gain matrix, an observer gain matrix, and two accompanying nonsingular matrices. Both Methods I and II proceed via two coprime factorizations, although their operational sequences differ. In fact, as far as the result is concerned, using Method I is identical to using Method II for a general P(s) of (8.14). However, for specific control problems [3], it is sometimes beneficial to choose one of the two methods to make the solution process more efficient. In this section, the six popular synthesis problems listed in Table 8.1 will be discussed in the CSD stabilization framework. The formulae for Π (or Π̃) are given, and the sets of all stabilizing controllers are characterized. Appropriate selections of methods and the corresponding results are summarized at the end of this section.

In the following, the above specific problems are worked as examples to show which method, I or II, is more appropriate and how only one coprime factorization is needed in these cases.
(8.76)
(8.77)
According to the procedure of Step 1, one has P*,DF = [ P1*,DF ; P2*,DF ] = [ G1,DF ; G̃2,DF ] M*,DF⁻¹ from (8.20) with D21 = I, where

(8.78)

and

(8.79)
Furthermore, for the DF problem listed in Table 8.1, with the assumption that A − B1C2 is Hurwitz, the state-space formula of G̃2,DF = Π̃DF⁻¹Θ̃DF is given by (8.25), with the replacements D21 = I, Hy = −B1, and W̃yy = I,

(8.80)

where Θ̃DF = [ I 0 ; 0 I ] and Π̃DF = G̃2,DF⁻¹. Note that Θ̃DF derived for this DF problem is actually an identity matrix. Then the stabilizing controllers can be generated by K = CSDl(G̃2,DF⁻¹, Φ) = CSDr(G̃2,DF, Φ), ∀Φ ∈ RH∞. With such G1 and Θ̃, the input/output relation between w and z can be easily determined from (8.7) as

z = CSDr([ G11 G12 ; 0 I ], CSDl([ I 0 ; 0 I ], Φ)) w = (G12 + G11Φ) w.    (8.81)
(8.82)
Similarly, the stacked matrix is derived by the procedure of Step 1 from (8.17) as

(8.83)

and the right coprime factorization P*,FI = [ P1*,FI ; P2*,FI ] = [ G1,FI ; G̃2,FI ] M*,FI⁻¹ is given from (8.20) as

(8.84)

and

(8.85)

Finally, by applying Step 2, the state-space formula of G̃2,FI = Π̃FI⁻¹Θ̃FI in (8.25) can be realized by

(8.86)

(8.87)

where the central solution is given by a state feedback gain CSDl(Π̃FI(s), 0) = [ Fu 0 ], i.e., u(t) = Kx(t) = Fu x(t), and therefore z = G12 w.
(8.88)
The state-space formulae of the stacked matrix and the right coprime factorization can be calculated from (8.17) and (8.20) with C2 = I and D21 = 0; the lcf G̃2,SF = Π̃SF⁻¹Θ̃SF is realized by

(8.89)

(8.90)

With such G1 and Θ̃, the input/output relation between w and z can be easily determined from (8.7). The central solution is given by a state feedback gain CSDl(Π̃SF(s), 0) = Fu, i.e., u(t) = Kx(t) = Fu x(t), and therefore z = G12 w.
On the other hand, similar to the DF problem, for the OE problem, in which D12 = I, the CSDl–CSDr framework of Method II is better utilized, for the same reason. Consider the OE problem of

(8.92)

where A − B2C1 is Hurwitz and D12 = I. Note that P12 is square and its inverse is stable (i.e., a 2-block problem). Then one has, from (8.31) with D12 = I,
(8.93)
(8.94)
and
(8.95)
(8.96)
such that ΘOE = [ I 0 ; 0 I ] and ΠOE = G2,OE⁻¹. Then all stabilizing controllers can be generated by K = CSDr(G2,OE⁻¹, Φ) = CSDl(G2,OE, Φ), ∀Φ ∈ RH∞. With such G̃1 and Θ,

(8.98)

(8.99)

and

(8.100)

(8.101)

(8.102)

where the central solution is given by an output injection gain CSDr(ΠFC(s), 0) = [ Hy ; 0 ], and therefore z = −G̃12 w.
(8.103)

The state-space formulae of the stacked matrix and the left coprime factorization can be calculated from (8.33) with B2 = I and D12 = 0; the rcf G2,OI = ΘOI ΠOI⁻¹ is realized by

(8.104)

(8.105)

With such G̃1 and Θ, the input/output relation between w and z can be easily determined from (8.12) as

z = CSDl([ I G̃12 ; 0 G̃22 ], CSDr([ Θ11 0 ; 0 I ], Φ)) w = (Θ11ΦG̃22 − G̃12) w,    (8.106)

and the central solution is given by an output injection gain CSDr(ΠOI(s), 0) = Hy, and therefore z = −G̃12 w.
In summary, for the FI and FC problems one adopts CSDr–CSDl and CSDl–CSDr, respectively, and consequently Θ̃(s) (or Θ(s)) in the second coprime factorization is always of diagonal form. Note that the central solutions are given by CSDl(Π̃FI(s), 0) = [ Fu 0 ] and CSDr(ΠFC(s), 0) = [ Hy ; 0 ], respectively. For the state feedback (SF) and output injection (OI) problems, the CSDr–CSDl and CSDl–CSDr frameworks are utilized, respectively, and the central solutions are given by CSDl(Π̃SF(s), 0) = Fu and CSDr(ΠOI(s), 0) = Hy. Explicit state-space solutions for these special problems are summarized in Table 8.2.
The structure of Tables 8.1 and 8.2 clearly shows that FC, OE, and OI are the duals of FI, DF, and SF; furthermore, FI and FC are equivalent to DF and OE, while SF and OI are simplified cases of FI and FC. These relationships are depicted in Fig. 8.17. To clarify the meaning of "equivalent" in Fig. 8.17, one examines the connection between the DF and FI problems as an example. From (8.78) and (8.84), one concludes that G1,DF = G1,FI; furthermore, from (8.79) and (8.85), it can be verified that
(8.107)
FI: CSDr–CSDl
SF: CSDr–CSDl
OE: CSDl–CSDr
FC: CSDl–CSDr
OI: CSDl–CSDr

Fig. 8.17 Relationships among the special problems: FI and FC (duals) are equivalent to DF and OE, respectively, and SF and OI (duals) are their simplified cases

Fig. 8.18 Topologies of the FI and DF problems

Figure 8.18 illustrates the topologies of the FI and DF problems. Finally, the I/O relationship can be realized by

Similarly, problem OE is equivalent to FC. From (8.94) and (8.99), one has G̃1,OE(s) = G̃1,FC(s). From (8.95) and (8.100), one can find that
(8.109)
Fig. 8.19 Topologies of the OE and FC problems

Figure 8.19 illustrates the topologies of the OE and FC problems. Finally, the I/O relationship can be realized by
8.6 Optimal H2 Controller

Considering the H2 norm of the closed-loop transfer function from w to z in Fig. 8.5, one has, from (8.1) and (8.7),

‖LFTl(P, K)‖₂² = ‖CSDr([ G11 G12 ; 0 I ], CSDl([ I Θ̃12 ; 0 Θ̃22 ], Φ))‖₂² = ‖G12 + G11Ψ‖₂²,    (8.112)

where

Ψ = CSDl([ I Θ̃12 ; 0 Θ̃22 ], Φ) = ΦΘ̃22 − Θ̃12.    (8.113)
(8.114)

where

Hy = −(YC2ᵀ + B1D21ᵀ)(D21D21ᵀ)⁻¹,    (8.116)

and

(A − B1D21ᵀ(D21D21ᵀ)⁻¹C2)Y + Y(A − B1D21ᵀ(D21D21ᵀ)⁻¹C2)ᵀ − YC2ᵀ(D21D21ᵀ)⁻¹C2Y + B1(I − D21ᵀ(D21D21ᵀ)⁻¹D21)B1ᵀ = 0.

(8.120)
Applying the state similarity transformation [ I 0 ; X I ]⁻¹ on the left and [ I 0 ; X I ] on the right will yield
(8.121)

Let Wuu = (D12ᵀD12)^(−1/2) and Fu = −(D12ᵀD12)⁻¹(B2ᵀX + D12ᵀC1), where X satisfies

(A + B2Fu)ᵀX + X(A + B2Fu) + (C1 + D12Fu)ᵀ(C1 + D12Fu) = 0.

Then the ARE of (8.117) is derived by substituting the Fu so defined into the above equation. This gives G11∼G11 = I. Note that X is the observability gramian of G12, and the realization of G11∼G12 is given by, with the similarity transformation used in the state-space model,

(8.122)

Thus minimizing the 2-norm of the overall transfer function is equivalent to minimizing the 2-norm of the stable transfer function Ψ.
Similarly, for a co-inner Θ̃22, one has

min_{Φ∈RH∞} ‖Ψ‖₂² = min_{Φ∈RH∞} ‖Θ̃12 − ΦΘ̃22‖₂² = min_{Φ∈RH∞} ‖Θ̃12Θ̃22∼ − Φ‖₂² = min_{Φ∈RH∞} ( ‖Θ̃12Θ̃22∼‖₂² + ‖Φ‖₂² ),    (8.124)

where W̃yy = (D21D21ᵀ)^(−1/2), Hy = −(YC2ᵀ + B1D21ᵀ)(D21D21ᵀ)⁻¹, and Y satisfies

(B1 + HyD21)(B1 + HyD21)ᵀ + (A + HyC2)Y + Y(A + HyC2)ᵀ = 0,    (8.125)
and the ARE of (8.118) is given by substituting Hy into the above equation. Obviously, the optimal solution is given by Φ = 0, and then

min_{stabilizing K} ‖LFTl(P, K)‖₂² = ‖G12‖₂² + ‖Θ̃12‖₂² = trace(B1ᵀXB1) + trace((D12ᵀD12)^(1/2) Fu Y Fuᵀ (D12ᵀD12)^(1/2)).    (8.126)
The dual approach can be derived as follows. Dually, for a left CSD associated with a right one, one has, from (8.1) and (8.12),

(8.127)

where

Ψ̃ = CSDr([ Θ11 Θ12 ; 0 I ], Φ) = Θ11Φ + Θ12.    (8.128)

The optimal H2 controller is given by Lemma 8.3; the proof of the dual part is skipped. It is interesting, but not surprising, that the two resultant controllers are identical: the optimal controller is actually unique.

Lemma 8.3 The H2 norm of (8.127) is minimized with a co-inner G̃22 and an inner Θ11, where Φ = 0; the minimized H2 norm is given by

min_{stabilizing K} ‖LFTl(P, K)‖₂² = ‖G̃12‖₂² + ‖Θ12‖₂² = trace(C1YC1ᵀ) + trace((D21D21ᵀ)^(1/2) Hyᵀ X Hy (D21D21ᵀ)^(1/2)).    (8.129)

(8.130)
where Π(s) is obtained from (8.39) and the other elements are identical to those in (8.114). It should be noted that, for H∞ synthesis, the numerators of the coprime factorizations in (8.6) serve to make G1(s) J-lossless and Θ̃(s) dual J-lossless, and the dual task in (8.11) is to obtain a dual J-lossless matrix G̃2(s) and a J-lossless matrix Θ(s), as will be discussed in Chap. 9.
Consider the standard control problem shown in Fig. 8.20, where the realization of the plant is minimal with dim(A) = n and the weighting matrices U, V, Q, and R are symmetric with U ≥ 0, Q ≥ 0, V > 0, and R > 0. The H2 control problem in Fig. 8.20, which is essentially the same as the one shown in Fig. 8.11 but inclusive of the weighting functions, is to find a stabilizing feedback controller K which minimizes the 2-norm of the closed-loop transfer function matrix from [ w1 ; w2 ] to [ z1 ; z2 ].

Fig. 8.20 The standard H2 control problem with weighting functions
It can be found in this example that D12ᵀC1 = 0 and B1D21ᵀ = 0. Suppose that P12(s) and P21(s) are of full column rank and of full row rank on the imaginary axis, respectively.
By applying a right CSD associated with a left one as in Sect. 8.3, one has, under the assumptions of Sect. 8.5 and from Lemma 8.2,

W̃yy = (D21D21ᵀ)^(−1/2) = V^(−1/2),   Wuu = (D12ᵀD12)^(−1/2) = R^(−1/2),
Fu = −R⁻¹BᵀX,   Hy = −YCᵀV⁻¹,

where

X = Ric(H_X) = Ric([ A  −BR⁻¹Bᵀ ; −CᵀQC  −Aᵀ ]),    (8.131)

Y = Ric(H_Y) = Ric([ Aᵀ  −CᵀV⁻¹C ; −BUBᵀ  −A ]),    (8.132)

such that G11 is inner and Θ̃22 is co-inner. Note that all eigenvalues of A + BFu = A − BR⁻¹BᵀX and A + HyC = A − YCᵀV⁻¹C are in the open left-half plane. Thus one has
(8.133)
and
(8.134)
(8.135)
(8.136)

(8.137)

(8.138)

This shows that the optimal controller is of observer-based type, where Fu and Hy are solved such that G11 is inner and Θ̃22 is co-inner.
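A minimal sketch of the dom(Ric) computation and the resulting observer-based gains (the second-order data and weights below are hypothetical; `ric` extracts the stabilizing ARE solution from the stable invariant subspace of the Hamiltonian):

```python
import numpy as np

def ric(H):
    # X = Ric(H): stack a basis of the stable invariant subspace as
    # [X1; X2]; then X = X2 X1^{-1}
    n = H.shape[0] // 2
    w, V = np.linalg.eig(H)
    Vs = V[:, w.real < 0]
    return (Vs[n:, :] @ np.linalg.inv(Vs[:n, :])).real

# Hypothetical data with D12' C1 = 0 and B1 D21' = 0
A = np.array([[0., 1.], [-1., -1.]])
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
Q, R, U, V_ = 1.0, 0.5, 1.0, 0.1   # assumed weights

X = ric(np.block([[A, -B @ B.T / R], [-Q * C.T @ C, -A.T]]))   # (8.131)
Y = ric(np.block([[A.T, -C.T @ C / V_], [-U * B @ B.T, -A]]))  # (8.132)
F = (B.T @ X) / R         # so that A - B F = A + B Fu is Hurwitz
H_ = (Y @ C.T) / V_       # so that A - H_ C = A + Hy C is Hurwitz

# X solves the control ARE; both closed-loop matrices are stable
assert np.allclose(A.T @ X + X @ A - X @ B @ B.T @ X / R + Q * C.T @ C,
                   0, atol=1e-8)
assert max(np.linalg.eigvals(A - B @ F).real) < 0
assert max(np.linalg.eigvals(A - H_ @ C).real) < 0
```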
8.7 Example of the Output Feedback H2 Optimal Control Problem

Consider the motion of an antenna discussed in the book [12], which can be described, with properly defined state variables, by the state differential equation

ẋ(t) = [ 0 1 ; 0 −α ] x(t) + [ 0 ; κ ] u(t) + [ 0 ; γ ] d(t) = Ax(t) + Bu(t) + Γd(t),

where d denotes the disturbing torque. Furthermore, one assumes that the observed variable is given by

η(t) = [ 1 0 ] x(t) + v_m(t) = Cx(t) + v_m(t),
in which v_m(t) denotes white noise with constant scalar intensity V_m. The simplified block diagram of the control system is depicted in Fig. 8.21. Then an optimal H2 observer-based control synthesis problem is posed in Fig. 8.22. In this example, the purpose of the control scheme is to minimize the criterion

∫₀^∞ ( R u²(t) + Q y²(t) ) dt = ∫₀^∞ ( z1²(t) + z2²(t) ) dt.
Fig. 8.21 Simplified block diagram of the control system
With the specified criterion, one can redraw Fig. 8.22 as Fig. 8.23, in which U, Q, R, and V are the weighting functions. The corresponding LFT and CSD representations are as illustrated in Fig. 8.1, where
One now considers the following numerical values [12]: κ = 0.787 rad/(V s²), α = 4.6 s⁻¹, γ = 0.1 kg⁻¹m⁻², V_d = 10 N²m²s, and V_m = 10⁻⁷ rad²s. Furthermore, one has U = 0.4018, Q = 1, R = 0.00002, and V = 10⁻⁷; then, by computing the CSD form derived before, the optimal controller and observer gains can be obtained by Lemma 8.2 as

W̃yy = (10⁻⁷)^(−1/2),   Wuu = R^(−1/2) = (0.00002)^(−1/2),

X = Ric([ A  −BR⁻¹Bᵀ ; −CᵀQC  −Aᵀ ]) = [ 0.1098  0.0059 ; 0.0059  0.0005 ],
Fig. 8.23 The weighted control configuration with U, Q, R, and V
Y = Ric([ Aᵀ  −CᵀV⁻¹C ; −BUBᵀ  −A ]) = [ 4.0357×10⁻⁶  8.1436×10⁻⁵ ; 8.1436×10⁻⁵  3.6611×10⁻³ ];

then

F = R⁻¹BᵀX = [ 223.6068  18.6992 ],
H = YCᵀV⁻¹ = [ 40.3573 ; 814.3574 ].
The optimal 2-norm of the closed-loop system can be obtained by Lemma 8.2 in (8.119) as

min_{stabilizing K} ‖LFTl(P, K)‖₂² = trace(B1ᵀXB1) + trace(Fu Y Fuᵀ) = 9.0779×10⁻⁵,
8.8 Example of LQR Controller 257
or, from the dual part, one can also verify the same result by (8.129). These two calculations reveal again that the two topologies are in fact the same in essence. Note that all numerical solutions are identical to the ones in the reference [12].
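As a numerical cross-check of the two Riccati solutions above, the gains can be reproduced with SciPy. This is a sketch under stated assumptions (it is not the book's CSD code): the disturbance is taken to enter through B_d = [0; γ] with intensity V_d, and the sign conventions F = −R⁻¹BᵀX, H = −YCᵀV⁻¹ are used.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Antenna data from the example; Bd and the gain sign conventions are
# assumptions of this sketch.
alpha, kappa, gamma = 4.6, 0.787, 0.1
A = np.array([[0.0, 1.0], [0.0, -alpha]])
B = np.array([[0.0], [kappa]])
Bd = np.array([[0.0], [gamma]])          # assumed disturbance input matrix
C = np.array([[1.0, 0.0]])
Q, R = 1.0, 0.00002                      # output and control weights
Vd, Vm = 10.0, 1e-7                      # disturbance and measurement intensities

# Control Riccati equation: A'X + XA - X B R^{-1} B' X + C'QC = 0
X = solve_continuous_are(A, B, C.T * Q @ C, np.array([[R]]))
F = -B.T @ X / R                         # state feedback gain, F = -R^{-1}B'X

# Filter Riccati equation (dual): AY + YA' - Y C' Vm^{-1} C Y + Bd Vd Bd' = 0
Y = solve_continuous_are(A.T, C.T, Bd * Vd @ Bd.T, np.array([[Vm]]))
H = -Y @ C.T / Vm                        # observer gain, H = -YC'Vm^{-1}

print(np.round(F, 1))                    # ~ [[-223.6  -18.7]]
print(np.round(H, 1))                    # ~ [[ -40.4] [-814.4]]
```

The printed gain magnitudes agree with the values quoted in the text to the rounding shown.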
In Sect. 7.4, one first discussed the LQR problem via the coprime factorization.
Consider this problem again as depicted in Fig. 8.24. This problem involves finding a stabilizing state feedback u = F_u x + W_uu u′ such that

min_{u=Fx} ( ‖Q^(1/2) y‖₂² + ‖R^(1/2) u‖₂² ),   (8.139)

subject to

ẋ(t) = Ax(t) + Bu(t),   x(0) = x₀,
y(t) = Cx(t).   (8.140)
Now consider the problem of Fig. 8.24 in the form of the H₂ optimal control problem, that is, to find a stabilizing feedback gain which minimizes the 2-norm of the closed-loop transfer function from w to [y_w; u_w].

[Fig. 8.25: state-space block diagram with weights R^(1/2) and Q^(1/2), input shaping W, and state feedback F]
where
It can be found here that D₁₁ = 0, D₁₂ᵀC₁ = 0, D₁₂ᵀD₁₂ = R > 0, and B₁D₂₁ᵀ = 0. Since R > 0, P₁₂(s) has full column rank on the imaginary axis. Suppose that Q ≥ 0 is given.
For a right CSD associated with a left CSD, one has, from (8.21) and (8.22),
and
(8.141)
Since P₁₂(s) is of full column rank on the imaginary axis, one has by Lemma 8.2 that W_uu = (D₁₂ᵀD₁₂)^(−1/2) = R^(−1/2) and F_u = −R⁻¹BᵀX, where

X = Ric [ A  −BR⁻¹Bᵀ ; −CᵀQC  −Aᵀ ].   (8.142)
8.9 More Numerical Examples 259
(8.143)
where
(8.144)
and
(8.145)
The optimal H₂ controller is then determined by K₂ = CSD_l(Π̃, 0) = F_u for B₁ = I.
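To make the LQR relations (8.139)–(8.142) concrete, here is a minimal sketch for a double integrator (an illustrative plant, not the book's example), solving the Riccati equation of the form (8.142) and forming F_u = −R⁻¹BᵀX; for this data the exact solution X = [[√2, 1], [1, √2]] is known in closed form.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator with y = x1 and Q = R = 1 (illustrative data only)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q, R = 1.0, 1.0

# (8.142)-style equation: A'X + XA - X B R^{-1} B' X + C'QC = 0
X = solve_continuous_are(A, B, C.T * Q @ C, np.array([[R]]))
Fu = -B.T @ X / R                 # Fu = -R^{-1} B' X

print(np.round(X, 4))             # ~ [[1.4142 1.    ] [1.     1.4142]]
print(np.round(Fu, 4))            # ~ [[-1.     -1.4142]]
```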
The target of this problem is to minimize the weighted control effort u_w and the model tracking error e through minimizing the H₂ norm of the transfer function matrix from r to [e; u_w] over all stabilizing controllers, i.e.,

min ( ∫₀^∞ ‖u_w‖² dt + ∫₀^∞ ‖e‖² dt ).
Figure 8.26 describes explicitly the input/output signals of the SCC with regard to this particular design problem, of which a state-space block diagram is depicted in Fig. 8.27. By taking out the integrators and the controller, the interconnection matrix P(s) in the SCC is actually the "open-loop" system from all "input" signals [x_T; x_P; x_w; w; u]
[Fig. 8.27: state-space block diagram with weight dynamics (A_w, B_w, C_w, D_w), plant dynamics (A_P, B_P, C_P, D_P), model dynamics (A_T, B_T, C_T, D_T), controller K, error e, and weighted control u_w]

to the "outputs" [ẋ_T; ẋ_P; ẋ_w; e; u_w; y]. The state-space form of P can then be directly obtained from Fig. 8.27 as
As an illustrative example, the following textbook-like data are assumed for this design. Let
It can be verified that (A, B₂) is stabilizable and (C₂, A) detectable, with D₁₁ = 0 and D₂₂ = 0, and that the system satisfies the following assumptions:
1. rank [ A − jωI  B₂ ; C₁  D₁₂ ] = n + 1 for all ω and rank D₁₂ = 1.
2. rank [ A − jωI  B₁ ; C₂  D₂₁ ] = n + 1 for all ω and rank D₂₁ = 1.
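Stabilizability and detectability assumptions of this kind can be checked numerically. The sketch below implements a PBH-style test on small illustrative matrices (assumed data, not the book's six-state example): only modes with non-negative real part need to pass the rank test.

```python
import numpy as np

# PBH-style tests: (A, B) is stabilizable iff [lam*I - A, B] has full row
# rank for every eigenvalue lam of A with Re(lam) >= 0; detectability of
# (C, A) is stabilizability of the dual pair (A', C').
def is_stabilizable(A, B):
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        if lam.real >= 0:
            M = np.hstack([lam * np.eye(n) - A, B])
            if np.linalg.matrix_rank(M) < n:
                return False
    return True

def is_detectable(C, A):
    return is_stabilizable(A.T, C.T)

# Illustrative 2-state data (assumed, not from the book)
A = np.array([[0.0, 1.0], [0.0, -1.0]])
B2 = np.array([[0.0], [1.0]])
C2 = np.array([[1.0, 0.0]])
print(is_stabilizable(A, B2), is_detectable(C2, A))   # True True
```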
To determine the H₂ optimal solution, the four parameters H_y, F_u, W_uu, and W̃_yy in (8.27) with (8.39) can be selected, according to Lemma 8.2, as

W_uu = (D₁₂ᵀD₁₂)^(−1/2) = 10,   W̃_yy = (D₂₁D₂₁ᵀ)^(−1/2) = 1,
F_u = [ 38.6975  4.5182  199.1704  123.494  12.1515  1.7038 ],
H_y = [ 0  1  0  0  0  0 ]ᵀ,
where
Note that it can be verified that Π̃Π = I, which in turn shows CSD_r(Π, Φ) = CSD_l(Π̃, Φ).
Consequently, the optimal H2 controller K(s) is given from (8.114) or (8.130) as
This gives
The optimal 2-norm of the closed-loop system can be obtained from (8.119), or, from the dual part, one can also verify the same result by (8.129). The two calculation results reconfirm that these two approaches are in fact the same in essence. All the calculations and results can be verified using the function h2syn in the MATLAB® Robust Control Toolbox.
8.10 Summary
This chapter has proposed a unified approach to describe and synthesize the
stabilizing controllers and the H2 optimal controller by finding two coupled CSD
matrices. Note that the selection of weighting function is not the major issue
concerned in this book. The obtained results reveal an interesting feature in that
the original output feedback problem can be simplified to the solutions of two
less complicated subproblems. It is found that using the proposed approach admits
separate computations of estimator and regulator gains. In fact, this result is similar
to the “separation principle” in linear control systems theory. The feedback control
gain and the observer gain are found, respectively, in the subproblems to satisfy
a specific cost function. Notice that, on the basis of the proposed CSD method, specific control problems can be solved easily in an explicit form.
The explicit formulae obtained from the coupled CSD method are beneficial for
analyzing the closed-loop characteristics in various control problems.
References
M.-C. Tsai and D.-W. Gu, Robust and Optimal Control: A Two-port Framework 267
Approach, Advances in Industrial Control, DOI 10.1007/978-1-4471-6257-5__9,
© Springer-Verlag London 2014
268 9 A CSD Approach to H-Infinity Controller Synthesis
control problem [9], and the simultaneous stabilization problem [11]. This chapter aims at developing a CSD framework to solve the H∞ suboptimal control synthesis problem. As in Chap. 8, the standard control configuration (SCC) description is first formulated into a coupled chain scattering-matrix description, where graphic representations are utilized to interpret the matrix description. Specific right and left CSDs are to be constructed, in state-space form and at the transfer function level, to characterize the H∞ solutions. This approach provides a comprehensive understanding for control engineers. Illustrative examples are given to show the solution procedures, where the general H∞ solutions are derived from finding two coupled CSDs (one right and one left) as well as two successive coprime factorizations with J-lossless numerators.
or, equivalently,

‖LFT_l(P_γ, K∞)‖∞ < 1,   (9.2)

where

P_γ = [ γ⁻¹P₁₁  γ⁻¹P₁₂ ; P₂₁  P₂₂ ]   or   P_γ = [ γ⁻¹P₁₁  P₁₂ ; γ⁻¹P₂₁  P₂₂ ].   (9.3)
Similar to the deduction from Figs. 8.1 to 8.4, one has, as depicted in Fig. 9.1,

z = LFT_l(P_γ, K∞) w = CSD_r( [G₁₁ G₁₂; G₂₁ G₂₂], CSD_l( [Θ̃₁₁ Θ̃₁₂; Θ̃₂₁ Θ̃₂₂], Φ ) ) w.   (9.4)
[Fig. 9.1: the SCC plant P_γ redrawn as a right CSD G₁ coupled with a left CSD Θ̃, with the free parameter Φ generating K∞]
Dually, for a system formulated in terms of a left CSD matrix G̃₁ = [G̃₁₁ G̃₁₂; G̃₂₁ G̃₂₂] ∈ RH∞ associated with a right CSD matrix Θ = [Θ₁₁ Θ₁₂; Θ₂₁ Θ₂₂] ∈ RH∞, as illustrated in Fig. 8.7, the closed-loop transfer function from w to z is given by

z = LFT_l(P_γ, K∞) w = CSD_l( [G̃₁₁ G̃₁₂; G̃₂₁ G̃₂₂], CSD_r( [Θ₁₁ Θ₁₂; Θ₂₁ Θ₂₂], Φ ) ) w.   (9.5)
Apparently, one can also ensure the stability of the interconnected system for G̃₁ dual J-lossless, Θ J-lossless, and Φ ∈ BH∞ by the small gain theorem. The rest of this chapter is devoted to showing how to construct the required right and left CSD matrices and then the H∞ solutions of (9.2) from a given SCC plant P_γ. The approach is similar to that of the H₂ optimal control case in the previous chapter. It will be shown that the H∞ control problem of (9.2) reduces to two solutions which are linked to J-lossless coprime factorizations. State-space formulae of the solution procedure are provided in the next section, which contains the determination of a feedback gain F, an observer gain H, and two accompanying nonsingular matrices (W and W̃).
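Since everything in this chapter is phrased through LFT_l and the CSDs, a small numeric helper for the standard lower LFT, LFT_l(P, K) = P₁₁ + P₁₂K(I − P₂₂K)⁻¹P₂₁, may help fix ideas. The constant matrices below are illustrative only, not the book's plant.

```python
import numpy as np

def lft_lower(P11, P12, P21, P22, K):
    """Lower LFT: P11 + P12 K (I - P22 K)^{-1} P21 (well posed when
    I - P22 K is invertible)."""
    I = np.eye(P22.shape[0])
    return P11 + P12 @ K @ np.linalg.solve(I - P22 @ K, P21)

# Scalar illustration: all four partitions are 1x1 (assumed toy data)
P11, P12, P21, P22 = (np.array([[v]]) for v in (0.5, 1.0, 1.0, 0.2))
K = np.array([[0.3]])
Tzw = lft_lower(P11, P12, P21, P22, K)
print(Tzw[0, 0])        # 0.5 + 0.3/0.94 ≈ 0.8191
```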
[Fig. 9.2: solution flow of Method I — given an SCC plant P_γ, perform an rcf of [P₁*; P₂*] to obtain G₁ and G̃₂, solving the coprime factorization such that G₁ is J-lossless; then perform an lcf of G̃₂ = Π̃⁻¹Θ̃ such that Θ̃ is dual J-lossless and Π̃ is outer; the controller generator is K∞ = CSD_l(Π̃, Φ), ∀Φ ∈ BH∞]
[Fig. 9.3: K∞ = CSD_l(Π̃, Φ) — the coupled CSD representation, with the left CSD factored as G̃₂ = Π̃⁻¹Θ̃]

K∞ = CSD_l(Π̃, Φ),   ∀Φ ∈ BH∞.   (9.6)

Step 1: Find a right coprime factorization [P₁; P₂] = [G₁; G̃₂] M₁⁻¹ over RH∞ such that G₁ is J-lossless.
9.1 H1 Control Problem 271
z = LFT_l(P_γ, K∞) w
  = CSD_r(P_γ1, CSD_l(P_γ2, K∞)) w
  = CSD_r(G₁, CSD_l(G̃₂, K∞)) w
  = CSD_r(G₁, CSD_l(Π̃⁻¹Θ̃, K∞)) w
  = CSD_r(G₁, CSD_l(Θ̃, Φ)) w.   (9.7)
Theorem 9.1 For a given SCC plant P_γ, there exists an internally stabilizing controller K∞ such that ‖LFT_l(P_γ, K∞)‖∞ < 1 if there exists an rcf of [P₁; P₂] = [G₁; G̃₂] M₁⁻¹ such that G₁ is J-lossless and then an lcf of G̃₂ = Π̃⁻¹Θ̃ such that Θ̃ is dual J-lossless and Π̃ outer. Then, all proper real rational stabilizing controllers are given by K∞ = CSD_l(Π̃, Φ), ∀Φ ∈ BH∞.
A dual scheme of Method I in which a left CSD is coupled with a right CSD can
also be developed. The procedure of Method II is similar to that of Method I. The details are summarized as follows.
Recall from Chap. 5 that a row-stacked transfer matrix is formed from the SCC plant P_γ.
[Fig. 9.4: solution flow of Method II — given an SCC plant P_γ, perform an lcf of [P₁* P₂*] to obtain G̃₁ and G₂, solving the coprime factorization such that G̃₁ is dual J-lossless; then perform an rcf of G₂ = ΘΠ⁻¹ such that Θ is J-lossless and Π is outer; the controller generator is K∞ = CSD_r(Π, Φ), ∀Φ ∈ BH∞]
[Fig. 9.5: the SCC plant P_γ redrawn as a left CSD G̃₁ coupled with a right CSD]
[Fig. 9.6: K∞ = CSD_r(Π, Φ) — the coupled CSD representation, with the right CSD factored as G₂ = ΘΠ⁻¹]
Step 2: Find a right coprime factorization of G₂ = ΘΠ⁻¹ over RH∞ such that Π ∈ GH∞ and Θ is J-lossless.
Step 3: The H∞ controller set is generated by the denominator Π as (Fig. 9.6)
z = LFT_l(P_γ, K∞) w
  = CSD_l(P_γ1, CSD_r(P_γ2, K∞)) w
  = CSD_l(G̃₁, CSD_r(G₂, K∞)) w
Theorem 9.2 For a given SCC plant P_γ, there exists an internally stabilizing controller K∞ such that ‖LFT_l(P_γ, K∞)‖∞ < 1 if there exists an lcf such that G̃₁ is dual J-lossless and then an rcf of G₂ = ΘΠ⁻¹ such that Θ is J-lossless and Π outer. Then, all proper real rational stabilizing controllers K∞ satisfying ‖LFT_l(P_γ, K∞)‖∞ < 1 (or ‖CSD_l(G̃₁, CSD_r(Θ, Φ))‖∞ < 1) are given by K∞ = CSD_r(Π, Φ), ∀Φ ∈ BH∞.
In the CSD framework for controller synthesis, the procedure to find two
successive coprime factorizations is unified for solving stabilization, H2 , and H1
(sub)optimal problems. The difference is that, as mentioned above, the numerators
of the coprime factorizations in the problems of finding all stabilization solutions
have to be in triangular forms while in the H2 or H1 problems, the numerators (or
part of them) are related to inner/J-lossless matrices.
(9.10)
(9.11)
(9.12)
(9.13)
(9.14)
where F_I = [F_u1; F_w] and W_I = [W_uu 0; W_wu W_ww] are found such that G₁ ∈ RH∞ is J-lossless, i.e., G₁~JG₁ = J and G₁(s)*JG₁(s) ≤ J for all Re(s) ≥ 0. It can be verified that G₁ defined by (9.12) is J-lossless if
W_I = [ [D₁₂ᵀ(I − D₁₁D₁₁ᵀ)⁻¹D₁₂]^(−1/2)   0 ;
        −(I − D₁₁ᵀD₁₁)⁻¹D₁₁ᵀD₁₂ [D₁₂ᵀ(I − D₁₁D₁₁ᵀ)⁻¹D₁₂]^(−1/2)   (I − D₁₁ᵀD₁₁)^(−1/2) ],   (9.15)
F_I = −R_I⁻¹(BᵀX + D₁ᵀC₁),
X = Ric(H_X) ≥ 0,   (9.16)

H_X = [ A − BR_I⁻¹D₁ᵀC₁   −BR_I⁻¹Bᵀ ; −C₁ᵀ(I − D₁R_I⁻¹D₁ᵀ)C₁   −(A − BR_I⁻¹D₁ᵀC₁)ᵀ ] ∈ dom(Ric),   (9.17)
R_I = [ D₁₂ᵀD₁₂   D₁₂ᵀD₁₁ ; D₁₁ᵀD₁₂   −(I − D₁₁ᵀD₁₁) ],   (9.18)

D₁ = [ D₁₂  D₁₁ ],   B = [ B₂  B₁ ].   (9.19)
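The role of γ in H∞-type Riccati equations like (9.16)–(9.17) can be seen on a scalar toy problem. The sketch below uses the standard γ-dependent state-feedback Riccati equation 2aX − (1 − γ⁻²)X² + q = 0 (a textbook simplification with b₁ = b₂ = 1 and D₁₁ = 0 — assumed toy data, not the book's CSD formulas) and shows that as γ → ∞ the H₂/LQR solution X = 1 + √2 is recovered.

```python
import numpy as np

# Scalar plant x' = a x + b2 u + b1 w with a = 1, b1 = b2 = 1, q = 1.
# The stabilizing solution of  2 a X - (1 - 1/gamma^2) X^2 + q = 0  is the
# root with closed-loop a - (1 - 1/gamma^2) X < 0, i.e., the larger root.
def hinf_riccati_scalar(a=1.0, q=1.0, gamma=2.0):
    c2 = 1.0 - 1.0 / gamma**2            # coefficient of X^2
    roots = np.roots([c2, -2.0 * a, -q]) # c2 X^2 - 2a X - q = 0
    return max(roots.real)               # stabilizing (larger) root

X2 = hinf_riccati_scalar(gamma=2.0)
Xinf = hinf_riccati_scalar(gamma=1e6)    # ~ the LQR limit 1 + sqrt(2)
print(round(X2, 4), round(Xinf, 4))      # 3.0972 2.4142
```

Note that X grows as γ shrinks toward the optimal level: the tighter the norm bound, the larger the required Riccati solution.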
Step 2: Find the particular lcf such that Θ̃ is dual J-lossless.
Since G̃₂ ∈ RH∞ is a stable transfer matrix defined by (9.13), there exists an lcf of G̃₂ = Π̃⁻¹Θ̃ such that Π̃ ∈ GH∞ and Θ̃ ∈ RH∞ is dual J-lossless by Assumption (a). Similar to the derivation of the state-space formulae in Step 1, the coprime factors can be constructed by
(9.20)
while Θ̃(s) is given by
(9.21)
For Θ̃ to be dual J-lossless, the nonsingular matrix W̃_I = [W̃_uu 0; W̃_yu W̃_yy] ∈ R^((m₂+q₂)×(m₂+q₂)) should satisfy
Let

H_Z = [ (A_G̃₂ − B_G̃₂ J_(m₂,m₁) D_G̃₂ᵀ R̃_I⁻¹ C_G̃₂)ᵀ   −C_G̃₂ᵀ R̃_I⁻¹ C_G̃₂ ;
        −B_G̃₂ (J_(m₂,m₁) − J_(m₂,m₁) D_G̃₂ᵀ R̃_I⁻¹ D_G̃₂ J_(m₂,m₁)) B_G̃₂ᵀ   −(A_G̃₂ − B_G̃₂ J_(m₂,m₁) D_G̃₂ᵀ R̃_I⁻¹ C_G̃₂) ],   (9.23)
where
H_I = [ H_u ; H_y1 ] = −(ZC_G̃₂ᵀ + B_G̃₂ J_(m₂,m₁) D_G̃₂ᵀ) R̃⁻¹   (9.25)
and
[ W̃_uu  0 ; W̃_yu  W̃_yy ] = [ [D₁₂ᵀ(I − D₁₁D₁₁ᵀ)⁻¹D₁₂]^(−1/2)   0 ;
    −[D₂₁(I − D₁₁ᵀD₁₁)⁻¹D₂₁ᵀ]^(−1/2) D₂₁(I − D₁₁ᵀD₁₁)⁻¹D₁₁ᵀD₁₂   [D₂₁(I − D₁₁ᵀD₁₁)⁻¹D₂₁ᵀ]^(−1/2) ].   (9.26)
It can be verified that the H_I and W̃_I so defined lead to a dual J-lossless Θ̃ and Π̃ ∈ GH∞. As can be seen above, the first coprime factorization with J-lossless G₁ determines the free parameters F_I and W_I, and the second coprime factorization with dual J-lossless Θ̃ resolves the other two free parameters H_I and W̃_I. The above solution process shows an interesting feature in that the general output feedback H∞ control problem is reduced to the solutions of two less complicated problems, which contain the determination of a feedback control gain matrix F_I, an observer gain matrix H_I, and two accompanying nonsingular matrices (W_I and W̃_I).
Step 3: Find the H∞ (suboptimal) controllers.
For the H∞ control problem, if there exists a particular rcf of P_γ1 such that G₁ is J-lossless and there also exists an lcf of G̃₂ such that Θ̃ is dual J-lossless, then one has, by the small gain theorem,

‖LFT_l(P_γ, K∞)‖∞ = ‖CSD_r(G₁, CSD_l(Θ̃, Φ))‖∞ < 1   (9.27)

for

K∞ = CSD_l(Π̃, Φ) = LFT_l(Π̃_P, Φ),   ∀Φ ∈ BH∞,   (9.28)
(9.29)
and, by (5.123),
(9.30)
(9.31)
(9.32)
This gives
(9.33)
(9.34)
where H_II = [H_z  H_y2] and W̃_II = [W̃_zz 0; W̃_yz W̃_yy] are two free parameters to be found such that G̃₁ = [G̃₁₁ G̃₁₂; G̃₂₁ G̃₂₂] is dual J-lossless. Then a dual J-lossless G̃₁ can be constructed with
9.2 State-Space Formulae of H1 Controllers 279
H_II = −(YCᵀ + B₁D₁ᵀ) R̃_II⁻¹,   (9.36)

R̃_II = [ −(I − D₁₁D₁₁ᵀ)   D₁₁D₂₁ᵀ ; D₂₁D₁₁ᵀ   D₂₁D₂₁ᵀ ],   (9.39)

D₁ = [ D₁₁ ; D₂₁ ],   C = [ C₁ ; C₂ ].   (9.40)
(9.41)
(9.42)
Furthermore, there exists a particular rcf such that Θ = [Θ₁₁ Θ₁₂; Θ₂₁ Θ₂₂] ∈ RH∞ is J-lossless if:
1. There exists a nonsingular matrix W_II = [W_uu 0; W_yu W_yy] ∈ R^((m₂+q₂)×(m₂+q₂)) such that
W_II = [ [D₁₂ᵀ(I − D₁₁D₁₁ᵀ)⁻¹D₁₂]^(−1/2)   0 ;
         −[D₂₁(I − D₁₁ᵀD₁₁)⁻¹D₂₁ᵀ]^(−1/2) D₂₁D₁₁ᵀ(I − D₁₁D₁₁ᵀ)⁻¹D₁₂   [D₂₁(I − D₁₁ᵀD₁₁)⁻¹D₂₁ᵀ]^(−1/2) ].   (9.46)
(9.48)
9.3 H1 Solution of Special SCC Formulations 281
and
(9.49)
(9.50)
As described in Sect. 9.1 (or 9.2), two successive coprime factorizations are
sought for H1 controller synthesis in the proposed CSD framework. Instead of
characterizing upper triangular matrices in the stabilizing controller or H2 optimal
synthesis, the factorization in the H1 synthesis proceeds to find a J-lossless (or dual
J-lossless) matrix factor. Similar to Sect. 8.5, for specific control problems [2], it is beneficial to choose the more suitable method to make the solution process more efficient. In this section, the six popular synthesis problems listed in Table 8.1 will be discussed in the CSD framework. The formulae for Π (or Π̃) are presented, and the H∞ controllers are characterized accordingly.
(9.51)
where
(9.52)
(9.53)
For H∞ controllers, F = [F_u1; F_w] and W = [W_uu 0; W_wu W_ww] should be found such that G_(1,DF)(s) is J-lossless. Furthermore, the particular lcf of G̃_(2,DF) = Π̃_DF⁻¹Θ̃_DF can be found as
(9.54)
where
(9.55)
(9.57)
where
(9.58)
(9.59)
It can be seen that P_(γ1,DF) = P_(γ1,FI); one then concludes G_(1,DF) = G_(1,FI). The particular lcf of G̃_(2,FI) = Π̃_FI⁻¹Θ̃_FI can be constructed as
(9.60)
where
(9.61)
(9.62)
z = LFT_l(P_(γ,FI), K_FI) w
Thirdly, one considers the state feedback (SF) problem. The state-space formulae of the stacked matrix and its right coprime factorization can be calculated from (9.12) and (9.13) with C₂ = I and D₂₁ = 0; the lcf of G̃_(2,SF) = Π̃_SF⁻¹Θ̃_SF is realized by
(9.66)
where
(9.67)
and
(9.68)
The central solutions are given by the state feedback gain CSD_l(Π̃_SF(s), 0) = F_u1, i.e., u(t) = Kx(t) = F_u1 x(t). Note that, with ‖Θ̃₂₂‖∞ = α, the closed loop CSD_l(Θ̃, Φ) belongs to BH∞ if and only if α‖Φ‖∞ < 1, since ‖CSD_l(Θ̃, Φ)‖∞ = ‖ΦΘ̃₂₂‖∞.
As depicted in Chap. 8, the output estimation (OE) problem is dual to the disturbance feedforward (DF) problem. In this case, the framework of a left CSD coupled with a right CSD is utilized, and the left coprime factors can be constructed as
(9.69)
(9.70)
(9.71)
(9.72)
where
(9.73)
(9.74)
z = LFT_l(P_(γ,OE), K_OE) w
  = CSD_l(G̃_(1,OE), CSD_r(Θ_OE Π_OE⁻¹, CSD_r(Π_OE, Φ_OE))) w
  = CSD_l(G̃_(1,OE), Φ_OE) w.   (9.75)
and therefore z = CSD_l(G̃_(1,OE), 0) w. This concludes that the H∞ solution of this OE problem is given by CSD_r(Π_OE, Φ_OE) for any Φ_OE ∈ BH∞, if both H = [H_z  H_y2] and W̃ = [W̃_zz 0; W̃_yz W̃_yy] are found such that G̃_(1,OE)(s) is dual J-lossless.
(9.77)
(9.78)
where
(9.79)
(9.80)
It can be seen from (9.79) and (9.70) that G̃_(1,OE)(s) = G̃_(1,FC)(s). Furthermore, the right coprime factors of G_(2,FC) = Θ_FC Π_FC⁻¹ can be found as
(9.81)
where
(9.82)
(9.83)
z = LFT_l(P_(γ,FC), K_FC) w
  = CSD_l(G̃_(1,FC), CSD_r(G_(2,OE), CSD_r(Π_OE, Φ_OE))) w
  = CSD_l(G̃_(1,FC), CSD_r(I, Φ_OE)) w
  = CSD_l(G̃_(1,OE), Φ_OE) w,   (9.87)

where u_FC = [B₂; I] u_OE. This shows that the closed-loop transfer function of the OE problem is in fact equivalent to that of the FC problem.
Finally, one considers the output injection (OI) problem. The state-space formulae of the stacked matrix and the left coprime factorization can be calculated from (9.33) and (9.34) with B₂ = I and D₁₂ = 0; the rcf of G_(2,OI) = Θ_OI Π_OI⁻¹ is realized by
(9.88)
where
(9.89)
The central solutions are given by an output injection gain CSD_r(Π_OI, 0) = H_y2, and therefore z = G̃₁₂ w. Similarly, with ‖Θ₁₁‖∞ = β, the closed loop CSD_r(Θ, Φ) belongs to BH∞ if and only if β‖Φ‖∞ < 1, since ‖CSD_r(Θ, Φ)‖∞ = ‖Θ₁₁Φ‖∞.
In this section, one uses the H1 control theory to solve the robust stabilization
problem of a perturbed plant. The perturbation considered here includes the
discrepancy between the dynamics of real plant and the mathematical model of it,
i.e., the nominal model, such as unmodeled dynamics (high-frequency dynamics)
and neglected nonlinearities. Such perturbations are usually called "lumped" uncertainties or "unstructured" uncertainties in the literature. There are various ways to describe perturbations, including additive perturbation (the absolute error between the actual dynamics and the nominal model), input and output multiplicative perturbations (relative errors), and their inverse forms [7]. Theoretically speaking, most of these perturbation expressions are "interchangeable," though a successful design would depend, to a certain extent, on an appropriate choice of the perturbation (uncertainty) model. This section introduces a design technique which incorporates the so-called loop shaping design procedure (LSDP) [12] to obtain performance/robustness trade-offs and a particular H∞ optimization problem to guarantee closed-loop stability and a level of robust stability based on coprime factorization perturbation models.
9.4 H1 Controller Synthesis with Coprime Factor Perturbations 291
[Fig. 9.7: feedback loop of the controller K with the lcf-perturbed plant (M̃ + Δ_M̃)⁻¹(Ñ + Δ_Ñ)]
This subsection will show the robust stabilization of a plant which is formulated by a left coprime factorization with perturbations on each factor. Let G = M̃⁻¹Ñ be an lcf. Figure 9.7 gives an H∞ stabilization problem with an lcf-perturbed plant model, where Δ_M̃, Δ_Ñ ∈ RH∞ are stable transfer functions representing the uncertainties on the nominal plant.
The objective of this problem is to stabilize not only the nominal plant but also the family of perturbed plants defined as

G_Δ = { (M̃ + Δ_M̃)⁻¹(Ñ + Δ_Ñ) : ‖[Δ_Ñ  Δ_M̃]‖∞ < ε },   ε > 0.   (9.90)

This is achieved by finding a controller K such that the H∞ norm from w to [u; y] (or to [e₁; e₂]) is less than a specified value, as shown in Fig. 9.8.
[Fig. 9.8: state-space block diagram of the lcf robust stabilization problem, with scalings γ⁻¹ and W̃⁻¹ and observer gain −H]
(9.92)
To apply the proposed approach, the first step is to find the stacked matrix
(9.93)
(9.94)
(9.95)
where
(9.96)
(9.97)
(9.98)
The second step is to find a J-lossless G1 . With the given H and WQ , the suboptimal
H1 control is to obtain Fu , Fw , Www , and Wuu such that G1 is J-lossless, i.e., to
establish the following equations:
Solving (9.100) and (9.101) gives the following results, as described in (9.16):

[ F_u ; F_w ] = [ −BᵀX ; (I_w − γ⁻²W̃ᵀW̃)⁻¹ γ⁻²W̃ᵀHᵀX ],   (9.103)

X = Ric(H_X) ≥ 0,   (9.104)

H_X = [ A − H(W̃ᵀW̃ − γ⁻²I)⁻¹C   −H(W̃ᵀW̃ − γ⁻²I)⁻¹Hᵀ − BBᵀ ;
        −Cᵀ(I + γ⁻²(W̃ᵀW̃ − γ⁻²I)⁻¹)C   −(A − H(W̃ᵀW̃ − γ⁻²I)⁻¹C)ᵀ ].   (9.105)
(9.106)
is stable since H is an observer gain matrix for which the eigenvalues of (A + HC) are all in the left half plane. Hence, let Θ̃(s) = [I 0; 0 I] (dual J-lossless) and Π̃ = G̃₂⁻¹, such that G̃₂ = Π̃⁻¹Θ̃ is a J-lossless coprime factorization. Then the suboptimal solutions can be found by (9.28):

K = CSD_l(Π̃, Φ) = CSD_r(G̃₂, Φ),   ∀Φ ∈ BH∞,   (9.107)
where
(9.108)
Equivalently, by (5.123), Π̃ can be converted to an LFT matrix, Π̃_P, such that CSD_l(Π̃, Φ) = LFT_l(Π̃_P, Φ), where
(9.109)
(9.110)
[Figure: feedback loop of K(s) with the rcf-perturbed plant (N + Δ_N)(M + Δ_M)⁻¹, where G = NM⁻¹, with disturbances d_i and d_o]
It shows that the eigenvalues of A + HC and A + BF_u are the closed-loop poles (which is similar to the separation principle) resulting from the central solution. Here (A + HC) can be preassigned in the design problem, and (A + BF_u) is solved by (9.102), (9.103), and (9.104). Apparently, the choice of (A + HC) is a design freedom, for which there exist several ways to determine H and W̃.
McFarlane and Glover [12] proposed a normalized coprime factorization design such that G = M̃⁻¹Ñ is a normalized coprime factorization (ncf), which gives a guideline for finding H and W̃ to make [Ñ M̃] co-inner. By the definition of ncf, H = −YCᵀ and W̃ = I yield M̃M̃~ + ÑÑ~ = I, where

Y = Ric [ Aᵀ  −CᵀC ; −BBᵀ  −A ].

In this case, the central controller is expressed as

(9.112)
where

X = Ric [ A + γ²(γ²−1)⁻¹YCᵀC   −γ²(γ²−1)⁻¹YCᵀCY − BBᵀ ;
          −γ²(γ²−1)⁻¹CᵀC   −(A + γ²(γ²−1)⁻¹YCᵀC)ᵀ ].   (9.113)
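For the normalized coprime factorization design, the best achievable robustness margin can be computed from two AREs via the classical McFarlane–Glover formula ε_max = (1 + λ_max(ZX))^(−1/2); the sketch below states it for a strictly proper plant and is a numerical aside, not the book's CSD derivation. For the illustrative unstable plant G(s) = 1/(s − 1), both AREs give 1 + √2, so ε_max ≈ 0.3827.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# G(s) = 1/(s-1): A = 1, B = 1, C = 1, D = 0 (illustrative data)
A = np.array([[1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])

# Control ARE:  A'X + XA - X BB' X + C'C = 0
X = solve_continuous_are(A, B, C.T @ C, np.eye(1))
# Filter ARE:   AZ + ZA' - Z C'C Z + BB' = 0
Z = solve_continuous_are(A.T, C.T, B @ B.T, np.eye(1))

# McFarlane-Glover maximal ncf robustness margin (D = 0 case)
eps_max = 1.0 / np.sqrt(1.0 + np.max(np.linalg.eigvals(Z @ X).real))
print(round(float(eps_max), 4))          # 0.3827
```

A margin this small signals a plant that is intrinsically hard to stabilize robustly, which guides the loop-shaping step of the LSDP.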
[Figure: state-space block diagram of the rcf robust stabilization problem, with disturbances d_i and d_o, scalings γ⁻¹ and W⁻¹, and state feedback gain −F]
with
(9.115)
Different from the problem of lcf plant description, the solution process for the rcf
problem will be facilitated by Method II. The first step of Method II is to give a
coupled CSD representation as below
(9.116)
(9.117)
(9.118)
(9.119)
Here let W = I for simplicity. It is easy to work out via (9.35) that
W̃ = [ W̃_zz  0 ; 0  W̃_yy ] = [ (1 − γ⁻²)^(1/2) I   0 ; 0   I ].   (9.123)
It is interesting, though logical, to notice that the solution of the rcf problem is a dual case of the lcf problem. Furthermore, since both G₂ and its inverse are stable (i.e., G₂ ∈ GH∞), one has Θ(s) = [I 0; 0 I] (J-lossless) and Π = G₂⁻¹ such that G₂ = ΘΠ⁻¹, where
(9.127)
(9.129)
Table 9.2 Comparison between the rcf and lcf plant description problems

                        The lcf plant description        The rcf plant description
  SCC plant
  2-block condition     P₂₁ = M̃⁻¹(s)                    P₁₂ = M(s)
  Solution method       A right CSD coupled with a       A left CSD coupled with a
                        left CSD (Method I)              right CSD (Method II)
  Central controller
  form
(9.130)
For the central solution, the poles of the overall closed-loop system are determined
by the A-matrix of LFTl (P, K0 ), which can be shown by a similarity transformation,
[ I 0 ; −I I ] [ A  BF ; −H_yC  A + BF + H_yC ] [ I 0 ; I I ] = [ A + BF  BF ; 0  A + H_yC ].   (9.131)
Exercises
method (formulation) shown in Table 8.1 for the H-infinity suboptimal control
problem.
4. Consider the model matching (or reference) control problem shown in the
following figure. Formulate an H1 control problem that minimizes control
energy u and output error e.
[Figure: model matching configuration with controller K, plant P, and reference model M; the error e compares the plant output with the model output]
References
1. Bombois X, Anderson BDO (2002) On the influence of weight modification in H∞ control design. IEEE Conference on Decision and Control, Nevada, USA
2. Doyle JC, Glover K, Khargonekar PP, Francis BA (1989) State-space solutions to standard H2
and H1 control problems. IEEE Trans Autom Control 34:831–847
3. Glover K, Doyle JC (1988) State-space formulae for all stabilizing controllers that satisfy an H∞-norm bound and relations to risk sensitivity. Syst Control Lett 11:167–172
4. Glover K, Limebeer DJN, Doyle JC, Kasenally EM, Safonov MG (1991) A characterization of
all the solutions to the four block general distance problem. SIAM J Control Optim 29:283–324
5. Green M (1992) H1 controller synthesis by J-lossless coprime factorization. SIAM J Control
Optim 30:522–547
In this chapter, several design examples are illustrated to demonstrate the validity of
the CSD two-port framework. Two different design methodologies with respect to
speed control of DC servomotors are presented. These examples will show how
industrial controllers, such as pseudo derivative feedback (PDF) controllers and
pseudo derivative feedback with feedforward (PDFF) controllers, can be formulated
into the standard control design framework and then solved by the state-space
solution procedures presented in previous chapters. By defining the transfer function
from the load torque disturbance to the controlled output, the dynamic stiffness
of a servo control system is characterized, and a scalar index value is defined by
the inverse of the maximum magnitude of the transfer function with respect to
frequency, i.e., the worst case in the frequency response. Thus, for performance
measurement of robust design, maximizing the dynamic stiffness measurement
implies minimizing the H1 -norm in controller design. This chapter will also show
how the dynamic stiffness of a servo system can be achieved by H1 design.
Mathematical models describing the control system under investigation typically contain some inaccuracies when compared with the real plants. This is mostly caused by simplifications of the model, exclusion of dynamics that are either too complicated or unknown, or uncertain dynamics. These inaccuracies induce a significant problem in control system design. A possible, and proven
useful, approach to dealing with this problem is based on modeling the real system
dynamics as a set of linear time-invariant models built around a nominal one, i.e., the
model is considered as uncertain but within known boundaries. The benefit of such
a representation of the model is the possibility of designing a robust controller that
stabilizes the closed-loop system with the uncertainties under consideration. The ideal goal would be to design a controller capable of stabilizing even the "worst-case scenario" representing the most degraded model. This section investigates, as
M.-C. Tsai and D.-W. Gu, Robust and Optimal Control: A Two-port Framework 303
Approach, Advances in Industrial Control, DOI 10.1007/978-1-4471-6257-5__10,
© Springer-Verlag London 2014
304 10 Design Examples
a real-world design example, the robust design of servo control systems, which are
widely used in industries.
Consider a DC permanent magnet (PM) servomotor. Let the rotor be characterized by the motor winding inductance L (unit: H) and the armature resistance R (unit: Ω). Then, the equation associated with such an electrical circuit is given by

v(t) = L di/dt + Ri + e,   (10.1)

where the back EMF, e, of the motor has been taken into account. The torque generated at the motor shaft is proportional to the armature current, where the ratio is defined as the torque constant, K_t (unit: N·m/A), as

T = K_t i.   (10.2)

Moreover, the proportionality between the angular velocity of the motor and the back EMF is defined by the electromotive force constant, K_e (unit: V·s/rad), as

e = K_e ω.   (10.3)
One can now deal with the mechanical representation of the motor. The motor exerts a torque while supplied by voltage. This torque acts on the mechanical structure, which is characterized by the rotor inertia J (kg·m²) and the viscous friction coefficient B (N·m·s/rad) as

T − T_L = J dω/dt + Bω,   (10.4)

where T_L denotes the load torque.
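Equations (10.1)–(10.4) can be integrated directly to sanity-check the model. The sketch below applies a constant 1 V input with T_L = 0 (forward Euler; parameter values as read from Table 10.1/Fig. 10.5) and compares the settled speed with the steady-state value ω_ss = K_t V/(RB + K_e K_t) obtained by setting both derivatives to zero.

```python
# Forward-Euler simulation of the DC motor equations (10.1) and (10.4);
# parameter values as read from Table 10.1 / Fig. 10.5.
L, R, Kt, Ke = 0.0038, 7.155, 0.21, 0.21
J, B = 5.77e-5, 0.00055
V, TL = 1.0, 0.0

i = w = 0.0
dt = 1e-5
for _ in range(int(1.5 / dt)):           # 1.5 s, well past the slowest pole
    di = (V - R * i - Ke * w) / L        # (10.1)
    dw = (Kt * i - B * w - TL) / J       # (10.4)
    i, w = i + dt * di, w + dt * dw

w_ss = Kt * V / (R * B + Ke * Kt)        # analytic steady state
print(round(w, 3), round(w_ss, 3))       # ≈ 4.372 4.372
```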
Based on the electrical and mechanical equations above, the system block
diagram of a DC servomotor can be depicted as shown in Fig. 10.1 below.
[Fig. 10.1: block diagram of a DC servomotor — the driver circuit actuates the electrical side (R, L, K_e), which produces torque through K_t on the mechanical side (J, B) with load torque T_L; Fig. 10.2: the motor viewed as a two-port linking the electrical impedance to the mechanical impedance Z_m]
Then the chain scattering description (CSD) as discussed earlier in this book can
be adopted to characterize the relationship between the electrical impedance and
the mechanical impedance for further analysis. This also implies that the motor
not only can be employed to actuate the mechanical loading but can also monitor
the operating condition, in which the mechanical loading can be found from the
measured electrical impedance as depicted in Fig. 10.2.
Consider the block diagram of a DC motor shown in Fig. 10.3, where Z_m denotes the mechanical loading. Let V (voltage reference) and T_L (load) be the input variables and I (motor current) and ω (motor angular velocity) be the outputs. A (2×2) LFT representation of Fig. 10.3 is depicted in Fig. 10.4, where

M(s) = [ (Js+B)/Δ(s)   K_e/Δ(s) ; K_t/Δ(s)   −(R+Ls)/Δ(s) ],   Δ(s) = LJs² + (LB+JR)s + (RB+K_eK_t).   (10.5)
[Fig. 10.5: Simulink model of the DC motor with K_t = K_e = 0.21, electrical dynamics 1/(0.0038s + 7.155), and mechanical dynamics 1/(5.77×10⁻⁵s + 0.00055)]
As addressed in Chap. 5, the chain description of Fig. 10.5 with the "input" variables T_L, ω and the "outputs" I, V can be derived from (10.5), by (5.1), as

[ I ; V ] = [ 1/K_t   (Js+B)/K_t ; (R+Ls)/K_t   (LJs² + (LB+JR)s + (RB+K_eK_t))/K_t ] [ T_L ; ω ]
          = [ G₁₁  G₁₂ ; G₂₁  G₂₂ ] [ T_L ; ω ].   (10.6)
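The chain matrix (10.6) can be checked against the LFT description (10.5) numerically: solving I = M₁₁V + M₁₂T_L, ω = M₂₁V + M₂₂T_L for [I; V] in terms of [T_L; ω] must reproduce G₁₁ = 1/K_t, G₁₂ = (Js+B)/K_t, G₂₁ = (R+Ls)/K_t, and G₂₂ = Δ(s)/K_t. A sketch at one test frequency (the sign convention that the load torque enters negatively, giving M₂₂ = −(R+Ls)/Δ, is this sketch's assumption):

```python
import numpy as np

L, R, Kt, Ke = 0.0038, 7.155, 0.21, 0.21
J, B = 5.77e-5, 0.00055
s = 1j * 2 * np.pi * 60                      # test frequency, 60 Hz

delta = L * J * s**2 + (L * B + J * R) * s + (R * B + Ke * Kt)
M11, M12 = (J * s + B) / delta, Ke / delta   # (10.5)
M21, M22 = Kt / delta, -(R + L * s) / delta  # M22 sign: load opposes motion

# Rearrange the map [V; TL] -> [I; omega] into the chain form [TL; omega] -> [I; V]
G11 = M12 - M11 * M22 / M21
G12 = M11 / M21
G21 = -M22 / M21
G22 = 1 / M21

assert np.isclose(G11, 1 / Kt)               # 1/Kt = 4.7619
assert np.isclose(G12, (J * s + B) / Kt)
assert np.isclose(G21, (R + L * s) / Kt)
assert np.isclose(G22, delta / Kt)
print(np.round(G12, 4))                      # ≈ (0.0026+0.1036j), as in (10.9)
```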
Reversely, if the electrical impedance Z_e is measured via V and I at the input port of Fig. 10.6, then the impedance of the mechanical loading at the output port can be found as

Z_m = (−G₁₂Z_e + G₂₂)/(G₁₁Z_e − G₂₁).   (10.8)
In the following, the parameter values of a DC motor are listed in Table 10.1,
which will be used for computer simulations. Let the mechanical loading shown
in Fig. 10.3 be a spring-damper.
10.2 Two-Port Chain Description Approach to Estimation of Mechanical Loading 307
Fig. 10.7 60 Hz sinusoidal input voltage (solid) and output current (dash)
Consider the simple case of Z_m = B (i.e., a damping load) with B = 0.00055 (N·m·s/rad). For a 60 Hz sinusoidal input voltage with a magnitude of 1 injected into the DC motor, the response current can be investigated using Simulink, as shown in Fig. 10.6 and Table 10.1.
At 60 Hz, (10.6) can be rewritten as

[ I ; V ] = [ 4.7619   0.0026 + 0.1036i ; 34.0714 + 6.8217i   0.0804 + 0.7449i ] [ T_L ; ω ]
          = [ G₁₁  G₁₂ ; G₂₁  G₂₂ ] [ T_L ; ω ].   (10.9)

By comparing the amplitude and the phase between the input voltage and the output current in Fig. 10.7, the equivalent electrical impedance Z_e can be found as in (10.10), and then, from (10.8),

Z_m = 0.00055.   (10.11)
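The loading-estimation formula (10.8) can be exercised in simulation form: pick a known Z_m, generate the electrical impedance Z_e = (G₂₁Z_m + G₂₂)/(G₁₁Z_m + G₁₂) that would be measured at the input port (this forward map is the sketch's assumption, from T_L = Z_m ω), and recover Z_m by (10.8). A sketch at 60 Hz with the damping load Z_m = B = 0.00055:

```python
import numpy as np

L, R, Kt, Ke = 0.0038, 7.155, 0.21, 0.21
J, B = 5.77e-5, 0.00055
s = 1j * 2 * np.pi * 60

delta = L * J * s**2 + (L * B + J * R) * s + (R * B + Ke * Kt)
G11, G12 = 1 / Kt, (J * s + B) / Kt          # chain entries from (10.6)
G21, G22 = (R + L * s) / Kt, delta / Kt

Zm_true = 0.00055                            # damping load, as in (10.11)
Ze = (G21 * Zm_true + G22) / (G11 * Zm_true + G12)   # "measured" V/I
Zm = (-G12 * Ze + G22) / (G11 * Ze - G21)    # recovery by (10.8)
print(round(abs(Zm), 5))                     # 0.00055
```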
and the response current (and the angular velocity) as depicted in Fig. 10.9 (and Fig. 10.10), M₁₁ (and M₂₁) can be calculated. On the other hand, M₁₂ and M₂₂ can be found from Fig. 10.11 under the condition V = 0, where another motor is adopted to generate the input torque. From the results shown in Figs. 10.12 and 10.13, M₁₂ and M₂₂ can be computed.
Fig. 10.10 Input voltage (solid) and response angular velocity (dash)
[Fig. 10.11: torque-input measurement setup with a second motor generating the input torque and V = 0]
Fig. 10.13 Input torque (solid) and response angular velocity (dash)
The innermost layer of a servo drive system is the current control loop. For the convenience of controller design in the velocity loop, the bandwidth of the current control loop must be made much higher than that of the velocity loop, e.g., ten times. When high-gain, closed-loop current control is implemented as a minor-loop control, the transfer function from the current command to the current output in the current control loop can be simplified to 1 [10]. Then, the motor model of Fig. 10.1 for speed control design can be simplified to the first-order model G_m = K_t/(Js + B), i.e., a simple motor torque constant, which converts current to torque, a single rotational inertia J, and a damping factor B. Obviously, the assumptions made here imply that the control of current in this servo loop equivalently generates the desired torque in that the magnitude of the driving current is approximately proportional to the torque. Notice that the current controller of industrial servo drives, in practice, cannot be tuned by the user. It is necessary to identify the controlled system for speed control design. The approach outlined in this section employs a plant representation in terms of a coprime factorization, which can estimate the plant dynamics under feedback control. Identification methods for dealing with closed-loop experimental data have been developed; see [5] for an overview.
Consider the feedback configuration depicted in Fig. 10.14, where G and K are the controlled plant to be identified and a stabilizing controller, respectively. Assume that the input signal r is available from measurements and the controller K is given such that the feedback system is stable. Then, the transfer functions from r to y and u, respectively, can be found as

y = H_{yr}\, r = \frac{GK}{1 + GK}\, r \tag{10.17}

[Fig. 10.14 Unity feedback configuration: reference r, controller K, plant G, controller output u, plant output y]

and

u = H_{ur}\, r = \frac{K}{1 + GK}\, r. \tag{10.18}
Thus, by measuring the black-box transfer functions Ĥ_yr and Ĥ_ur, an estimate of G can be obtained as Ĝ = Ĥ_yr Ĥ_ur^{-1}. This shows that, in fact, the identification method based on closed-loop data is derived from the concept of a coprime factorization of the plant model, provided that the controller K has no unstable zeros. In practice, only measurements of the frequency responses of the signals u and y are required to obtain an estimate of G within a certain bandwidth by a dynamic spectrum analyzer. Notice that the measured frequency responses of the coprime factors of a possibly unstable plant come from closed-loop experimental data.
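The identification idea can be sketched numerically: form H_yr and H_ur for a known plant and controller and confirm that their ratio returns the plant exactly. The plant below is the identified model of (10.19); the controller form K(s) = 2 + 4/s is read from the partly garbled text and should be treated as an assumption:

```python
import math

# Plant (identified model (10.19)) and a stabilizing PI-type controller
# (the controller gains here are assumptions taken from the example text)
G = lambda s: 0.2 / (0.000058 * s + 0.00056)
K = lambda s: 2 + 4 / s

s = 1j * 2 * math.pi * 100   # test frequency: 100 Hz

# Closed-loop responses measured from r, as in (10.17) and (10.18)
Hyr = G(s) * K(s) / (1 + G(s) * K(s))
Hur = K(s) / (1 + G(s) * K(s))

# The plant estimate is the ratio of the two measured responses
G_hat = Hyr / Hur
print(abs(G_hat - G(s)))     # essentially zero: G is recovered exactly
```

The cancellation of K and 1 + GK in the ratio is exactly the coprime-factor argument made in the text: each measured response is a stable factor, and their quotient recovers G even when G itself is unstable.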
For example, consider the experimental setup with a servomotor and a current-controlled power amplifier as shown in Fig. 10.15, where the transfer function from the current input to the velocity, Gm(s), is to be identified. Let K(s) = 2 + 4/s be chosen as a stabilizing controller and let the excitation signal r applied to the closed-loop control system be a swept sine. The dynamic signal analyzer, which measures the frequency responses of the motor velocity y = ω(t) and the motor current u = i(t) simultaneously, can calculate the Bode plot of Gm from the experimental data. Figure 10.16 shows the measured Bode diagram; subsequently, curve fitting gives the identified model

\hat{G}_m(s) = \frac{K_t}{Js + B} = \frac{0.2}{0.000058s + 0.00056}, \tag{10.19}
[Fig. 10.16 Measured (experiment) and identified Bode diagrams of Gm]
Consider a control law for the velocity loop, namely a pseudo derivative feedback (PDF) controller [2], which will be designed to generate the torque command. Figure 10.17 shows a PDF controller for the speed control of the DC motor, where Kp denotes the proportional gain and Ki the integral gain constant. The closed-loop transfer function from ω* to ω is given by

T(s) = \frac{\omega(s)}{\omega^*(s)} = \frac{K_i K_t / J}{s^2 + \left((B + K_p K_t)/J\right)s + K_i K_t / J}. \tag{10.20}

Let

\zeta = \frac{B + K_p K_t}{2\left(K_i K_t J\right)^{0.5}}, \tag{10.21}

\omega_n = \left(\frac{K_i K_t}{J}\right)^{0.5}. \tag{10.22}
10.4 H∞ Robust Controller Design for Speed Control
Then, T(s) of (10.20) can be written in the standard second-order form

T(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}, \tag{10.23}

where ζ is the damping ratio and ωn the natural frequency. It is known [6] that the bandwidth of the standard second-order system is

BW = \omega_n \left(1 - 2\zeta^2 + \sqrt{4\zeta^4 - 4\zeta^2 + 2}\right)^{0.5}. \tag{10.24}
K_i = \frac{J\omega_n^2}{K_t} = 194.74, \qquad K_p = \frac{2\zeta\omega_n J - B}{K_t} = 0.41. \tag{10.25}
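With the nominal parameters and the design targets ζ = 0.9 and ωn = 842.18 rad/s quoted later for the root locus, the gain formulas of (10.25) and the bandwidth expression (10.24) can be checked numerically. The closed-form gain expressions below are reconstructions from the garbled equation and should be treated as assumptions:

```python
import math

Kt, J, B = 0.21, 5.77e-5, 0.00055   # nominal motor parameters
zeta, wn = 0.9, 842.18              # design targets quoted in the text

# Gains as in (10.25) (reconstructed formulas, assumed)
Ki = J * wn**2 / Kt
Kp = (2 * zeta * wn * J - B) / Kt
print(round(Ki, 2), round(Kp, 2))   # → 194.88 0.41

# Back-substitute into (10.21)-(10.22): zeta and wn must be recovered
zeta_chk = (B + Kp * Kt) / (2 * math.sqrt(Ki * Kt * J))
wn_chk = math.sqrt(Ki * Kt / J)

# Bandwidth of the standard second-order system, (10.24)
BW = wn * math.sqrt(1 - 2 * zeta**2 + math.sqrt(4 * zeta**4 - 4 * zeta**2 + 2))
print(round(BW, 1))                 # roughly 0.75 * wn for zeta = 0.9
```

The small difference between the computed Ki ≈ 194.88 and the quoted 194.74 is consistent with rounding of ωn in the printed text.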
However, due to external load and/or parameter inaccuracy, the moment of inertia of a servo drive system often varies in practice during operation. Variation of the parameter J away from its nominal value Jo = 5.77×10⁻⁵ (kg·m²) leads to a significant alteration of the speed-control output response. To illustrate the effect of model uncertainty, three cases, J = 0.1Jo, J = Jo, and J = 10Jo, are investigated by computer simulations. Let the step speed command be 100 rad/s. Figure 10.18 shows the step responses based on the classical PDF design obtained from (10.25) above. Note that the integral term of the PDF controller assures a zero steady-state error for the step input.
In Fig. 10.17, according to the closed-loop transfer function (10.20) from ω* to ω, the characteristic equation of the speed control system is Js² + (B + KpKt)s + KtKi = 0. To characterize the variations of the closed-loop poles with respect to J, let 1 + kL(s) = 0, where k = 1/J and

L(s) = \frac{(B + K_p K_t)s + K_t K_i}{s^2}. \tag{10.26}
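As a numerical companion to the root-locus discussion, the roots of the characteristic equation Js² + (B + KpKt)s + KtKi = 0 can be computed directly for the three inertia cases; a sketch, using the classical gains from (10.25) and the nominal parameters as assumptions:

```python
import cmath

Kt, B, Jo = 0.21, 0.00055, 5.77e-5
Kp, Ki = 0.41, 194.74            # classical PDF gains from (10.25)

def poles(J):
    # Roots of J s^2 + (B + Kp*Kt) s + Kt*Ki = 0 via the quadratic formula
    b, c = B + Kp * Kt, Kt * Ki
    disc = cmath.sqrt(b * b - 4 * J * c)
    return ((-b + disc) / (2 * J), (-b - disc) / (2 * J))

for J in (0.1 * Jo, Jo, 10 * Jo):
    print(J, poles(J))

# Effective damping ratio recovered from the nominal closed-loop poles
p = poles(Jo)[0]
zeta = -p.real / abs(p)
```

For J = 0.1Jo the roots are real (no overshoot), for J = Jo they are complex with damping near 0.9, and for J = 10Jo the damping drops sharply, matching the oscillatory response described below.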
[Fig. 10.18 Step speed responses of the classical PDF design for J = Jo, 0.1Jo, and 10Jo]
Figure 10.19 presents the root locus, with a zero located at −KtKi/(B + KpKt) and two open-loop poles at the origin. Then, for J = Jo = 5.77×10⁻⁵ (kg·m²), the closed-loop control system has the complex conjugate poles −757.96 ± 367.1i (i.e., ζ = 0.9 and ωn = 842.18 rad/s). As can be expected from the root locus of Fig. 10.19, Fig. 10.18 shows that the speed response resulting from the conventional design with J = 10Jo becomes slightly slower than that of the nominal case J = Jo, and its controlled output oscillates significantly, with an overshoot of around 40 % (i.e., 40 rad/s). However, the speed response resulting from J = 0.1Jo has no overshoot,
[Fig. 10.20 H∞ PDF design scheme with weighting functions we1, we2, and wu]
and it is almost the same as that of J = Jo, which can also be expected from Fig. 10.19. This implies that when the controlled plant has certain parameter variations, how to ensure robustness of the control performance becomes an important design issue. Many control design approaches for improving the dynamic stiffness have been proposed. Undoubtedly, H∞ control is one of the most appropriate techniques for dealing with robust stability with respect to parameter variations, which appear commonly in industrial drives [1]. In practice, high dynamic stiffness often results from large control efforts. Moreover, robust H∞ design often leads to high-order dynamic controllers. Hence, a trade-off between controller order and system performance should be considered in the formulation of design problems. In the following, H∞ control design is adopted to find the PDF (or PDFF) controller in the velocity loop, to enhance the dynamic stiffness and to reduce the effect of system uncertainty.
Let Gm(s) = Kt/(Js + B) be the transfer function of the DC servomotor from the current (torque) command to the velocity, with a state-space realization.
Consider the H∞ PDF design scheme of Fig. 10.20, where the weighting functions we1, we2, and wu should be chosen properly to satisfy the desired specifications. As a trade-off between system performance and computational complexity, let the weighting functions all be positive constants, a practical consideration. Let P denote an SCC plant in the control framework shown in Fig. 10.21, where the closed-loop transfer function from ω* to z = [ze1, ze2, zu]^T is denoted by LFTl(P, K∞). Then, the PDF control design of Fig. 10.20 is formulated into the
u = \begin{bmatrix} K_p & K_i \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix},

where

P_\gamma = \begin{bmatrix} \gamma^{-1}P_{11} & \gamma^{-1}P_{12} \\ P_{21} & P_{22} \end{bmatrix} \tag{10.28}

and

(10.29)
Controller | wu = 0.1 | wu = 1 | wu = 10
Kp         | 4.36     | 0.43   | 0.04
Ki         | 9.28     | 0.93   | 0.09
In this case, the ratio between Kp and Ki (i.e., Ki/Kp) remains almost the same, but both gains are inversely proportional to wu. Recall that, in the root locus of Fig. 10.19 with varying 1/J, the poles and zero of the loop transfer function L(s) are p1 = 0, p2 = 0, and z = −KtKi/(B + KpKt) ≈ −Ki/Kp, respectively. Therefore, Ki/Kp, determined by the selections of we1 and we2 as discussed above, will naturally affect the tendency of the closed-loop poles. Thus, the weighting function wu can be chosen properly to achieve the desired closed-loop poles, which characterize the system response.
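The two trends claimed here can be checked directly against the tabulated gains: Ki/Kp stays near 2.2 across all three designs, while Kp (and Ki) scale like 1/wu, so Kp·wu is roughly constant. A quick sketch:

```python
# Gains reported in the table for three choices of wu
gains = {0.1: (4.36, 9.28), 1: (0.43, 0.93), 10: (0.04, 0.09)}

for wu, (Kp, Ki) in gains.items():
    # Ki/Kp is nearly constant; Kp*wu is nearly constant (inverse scaling)
    print(wu, round(Ki / Kp, 3), round(Kp * wu, 3))
```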
Consequently, for we1 = 19.51, we2 = 0.015, and wu = 0.05, solving the SF design problem of Fig. 10.21 with γ = 1, the controller parameters are found as
Analogous to the classical PDF design, the step speed responses resulting from the H∞ SF design are shown in Fig. 10.22. Note that the weighting functions are chosen purposely such that the step response in the nominal case J = Jo is close to that of the classical PDF design, as shown in Fig. 10.23. Compared with the classical PDF design, Fig. 10.24 shows the step responses with J = 10Jo, in which the maximum overshoot and settling time have been improved by using the H∞ design approach when the actual rotational inertia is ten times its nominal value. Of course, in addition to better performance
[Fig. 10.22 Step speed responses of the H∞ SF design for J = Jo, 0.1Jo, and 10Jo]
[Fig. 10.23 Step responses of the classical design and the H∞ SF solution]
[Fig. 10.24 Step responses of the classical design and the H∞ SF solution with J = 10Jo]
[Fig. 10.25 H∞ PDFF design scheme with feedforward path and the extra input n]
achieved, the other prominent feature of the H∞ approach is its systematic solution procedure, which guarantees closed-loop stability and is applicable to multivariable systems. No comparison is presented for the case J = 0.1Jo because the simulations are very similar to those with the nominal inertia.
(10.31)
Since D21 is nonsingular when the feedforward path gain α > 0, this formulation is no longer a special SF design problem but a general case. It can be verified that

(a) (A, B2) is stabilizable and (C2, A) is detectable.
(b) rank \begin{bmatrix} A - j\omega I & B_2 \\ C_1 & D_{12} \end{bmatrix} = 2 + 1 for all ω and rank D12 = 1.
(c) rank \begin{bmatrix} A - j\omega I & B_1 \\ C_2 & D_{21} \end{bmatrix} = 2 + 2 for all ω and rank D21 = 2.
Note that the extra input n has been introduced in the PDFF design problem of Fig. 10.25 such that the above assumptions are all satisfied. Thus, an explicit solution of this PDFF control scheme can be obtained by utilizing the CSD method presented in Chap. 9. For we1 = 32.75, we2 = 0.05, wu = 1, and α = 0.04, solving the H∞ suboptimal control problem ‖LFTl(Pγ, K∞)‖∞ < 1 with γ = 1.75 will yield
(10.32)
The system step responses resulting from the PDFF scheme controller in the H∞ design are shown in Fig. 10.26, where the closed-loop poles with J = Jo are −9.5392, −25, −53.8201, and −151,833.9, and the closed-loop zeros are −9.5392 and −25. As can be seen, for the case J = 10Jo, the step response becomes slower than that of the previous PDF design, although there is a feedforward gain α = 0.04 in this PDFF controller. This is due to the H∞ design formulation exhibiting pole-zero cancellations at p1 = −1/α = −25 and p2 = −B/Jo = −9.539, which affect the control performance significantly when uncertainties occur. This inherent pole-zero cancellation property of the closed-loop system resulting from the weighted mixed-sensitivity design was investigated in [12]. In the following, the concept of H∞ loop-shaping design addressed in [4, 8, 11] is employed to overcome this problem.
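The two cancellation locations can be recomputed directly from α, B, and Jo; the small mismatch with the quoted −9.539 appears to be rounding in the printed parameter values:

```python
alpha, B, Jo = 0.04, 0.00055, 5.77e-5   # parameter values from the text

p1 = -1 / alpha        # cancellation introduced by the feedforward gain
p2 = -B / Jo           # cancellation at the (stable) plant pole

print(p1, round(p2, 3))   # p1 = -25, p2 close to the quoted -9.539
```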
[Fig. 10.26 Step speed responses of the robust PDFF design for J = Jo, 0.1Jo, and 10Jo]
The design example here employs coprime factorization descriptions to formulate the advanced PDFF design as an H∞ weighted mixed-sensitivity problem [13]. The advanced PDFF controller is found by a partial pole placement technique [9] and the loop-shaping design of the normalized coprime factorization [4]. The proposed design method provides a useful property in that performance measures, such as bandwidth and stability, can be designed simultaneously.
Recall that Gm(s) = Kt/(Js + B) denotes the transfer function of the servomotor from the current (torque) command to the velocity. To retain the structure of the feedforward gain and the integrator in the PDFF controller scheme, consider the augmented one-input-two-output plant given by
(10.33)
The problem formulation here employs the concept of H∞ loop-shaping design [8], where the original plant Gm is shaped by Csf to obtain an augmented plant Gs. To yield satisfactory stability and performance simultaneously, the coprime factorization (CF) description of the controlled plant discussed in Sect. 9.4 is employed to formulate the H∞ weighted mixed-sensitivity problem.
(10.34)
(10.35)
(10.36)
where

H_s = \begin{bmatrix} 0 & H_m \\ k_f^{-1} & 1 \end{bmatrix} \quad \text{and} \quad \tilde{W}_s = \begin{bmatrix} k_f^{-1} & 0 \\ 0 & \tilde{w} \end{bmatrix}. \tag{10.37}
[Fig. 10.28 CF PDFF design configuration with internal weights (signals wd, ω*, ω; blocks kf, Cf, Ki, Kp, K∞; weights we1, we2, wu)]
[Fig. 10.29 Equivalent representation with the augmented plant Gs]
To explore the loop-shaping approach with the internal weights, let the output weighting functions be wu = 1.35, we1 = 1, and we2 = 1 to satisfy the design specification. By redrawing Fig. 10.28, the proposed design problem can be reconstructed as depicted in Fig. 10.29, which gives a clearer representation. An important feature of this formulation is that both the feedforward gain kf and the integrator of the PDFF controller structure with internal gain Cf have been taken into account as part of the augmented plant. The existence of the feedforward term kf makes the system more responsive to commands and provides extra freedom in the tuning procedure. Note that the overall controller to be implemented should comprise an integrator with a gain of Cf, a feedforward gain kf, and the computed dynamic H∞ controller, as

(10.40)

Now kf = 0.5 and Cf = 500 are selected as an illustrative design example. The corresponding computation gives Y = 3,632.7024. Then, for the prespecified γ = 1.75, solving the H∞ suboptimal control problem ‖LFTl(Pγ, K∞)‖∞ < 1 with Pγ given by (10.40) will yield the central controller as
[Figure: step speed responses of the CF PDFF design for J = Jo, 0.1Jo, and 10Jo]
(10.42)
(10.43)
where the closed-loop poles are −1,356,300, −842.2846, −3,642.2, and −1,000, and the closed-loop zeros are −3,642.2 and −1,000, which come from Am + HmCm = −3,642.2 and kf⁻¹Cf = 1,000. It was addressed in [9] that all of the eigenvalues of

A_s + H_s C_s = \begin{bmatrix} A_m + H_m C_m & 0 \\ 0 & -k_f^{-1}C_f \end{bmatrix}

are the closed-loop poles in the design problem of Fig. 10.28. This implies that the advanced PDFF design provides an alternative for partial pole placement by assigning (As + HsCs). Obviously, this CF design approach does not suffer the pole-zero cancellation at −B/Jo = −9.539, which would affect control performance. The system step responses resulting from the H∞ design are shown in Fig. 10.31.
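Because As + HsCs is block diagonal, its eigenvalues are simply the union of the blocks' eigenvalues, so the assigned locations can be read off directly; a sketch, writing the second block as −kf⁻¹Cf, consistent with the quoted value kf⁻¹Cf = 1,000:

```python
kf, Cf = 0.5, 500          # design choices from the example
Am_HmCm = -3642.2          # first diagonal block, assigned via Hm (quoted)

# Eigenvalues of a block-diagonal matrix are just the diagonal blocks,
# which is what makes the partial pole placement explicit here
assigned = (Am_HmCm, -Cf / kf)
print(assigned)            # → (-3642.2, -1000.0)
```

Both assigned locations reappear among the closed-loop poles and zeros listed above, which is exactly the partial-pole-placement property claimed for the CF design.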
[Fig. 10.32 Bode plots of Gm, (Cf Gm)/s, and Kp Gm + (Cf Ki Gm)/s]
In the loop-shaping design, the structure and gains of the internal weighting function Csf can be selected to obtain the desired magnitude of the frequency response of the loop transfer function Csf Gm, and then the controller K∞ = [Kp Ki] is found with guaranteed stability properties [8]. Figure 10.32 shows the Bode plots of the original plant Gm (the solid line) and the shaped plant Csf Gm (the dashed line).
Fig. 10.33 Comparisons of step responses and control efforts with J D Jo from four controllers
Fig. 10.34 Comparisons of step responses and control efforts with J D 0.1Jo from four controllers
the maximum overshoot and settling time, especially with the CF PDFF design. For the step response of the robust PDFF design in Fig. 10.35, a slower transient response with overshoot can be observed. This is because the H∞ design formulation involves pole-zero cancellations, which affect the control performance significantly when uncertainties occur. Disturbance rejection ability, also known as dynamic stiffness, is another important index with which to evaluate the above servo controller designs. Enhancing the
Fig. 10.35 Comparisons of step responses and control efforts with J D 10Jo from four controllers
Fig. 10.36 Dynamic stiffness plots resulting from different controller designs
10.5 Summary
References

1. Alter DM, Tsao TC (1996) Control of linear motors for machine tool feed drives: design and implementation of H∞ optimal feedback control. ASME J Dyn Syst Meas Control 118:649–656
2. Ellis G (2012) Control system design guide. Elsevier Science, Oxford
3. Fu L, Ling SF, Tseng CH (2007) On-line breakage monitoring of small drills with input impedance of driving motor. Mech Syst Signal Process 21(1):457–465
Index

A
ABCD parameter, 40, 44, 60
Additive uncertainty, 90
Admittance parameter, 40, 41
Advanced PDFF, 321
  design, 323
Algebraic Riccati equations (ARE), 3, 4, 171, 181
All pass, 56, 92
ARE. See Algebraic Riccati equations (ARE)
Armature resistance, 304
Asymptotical stability, 24

B
Back EMF, 304
Basis, 11
Bezout identity, 156
Bilinear transformations, 58
Bounded-input-bounded-output (BIBO) stability, 24
Bounded real, 58, 59
Bounded real lemma (BRL), 198

C
Canonical decomposition form, 26
Cascaded CSD subsystems, 103
Chain matrices, 45
Chain scattering decomposition, 3
Chain scattering description/chain scattering matrix description (CSD), 3, 31, 99, 213
Chain scattering parameter, 40, 51, 52
Characteristic equation, 12
Characteristic polynomial, 12
Classical PDF, 319
Co-all-pass, 92
Co-inner, 92, 186
Completely controllable, 24
Completely observable, 25
Conjugate system, 23
Controllability, 24
Controllability gramian, 21, 185
Coprime factorization, 3, 4, 145, 147
Coupled (right and left) CSD, 267
CSDl(G̃, K), 31
CSDr(G, K), 31

D
DC permanent magnetic (PM) servomotor, 304
Detectable, 26
Disturbance feedforward (DF), 281
  case, 237–238
dom(Ric), 172
Drill breakages, 310
Dual J-lossless, 137, 206
Dual J-unitary, 137
Dynamic stiffness, 303

E
Eigenvalue, 12
Eigenvector, 12
Electromechanical transducer, 308
Electromotive force constant, 304
Equivalent electrical impedance, 306

F
Finite-dimensional linear time-invariant (LTI) dynamical system, 22
Four-block distance problem, 267

M.-C. Tsai and D.-W. Gu, Robust and Optimal Control: A Two-port Framework Approach, Advances in Industrial Control, DOI 10.1007/978-1-4471-6257-5, © Springer-Verlag London 2014
O
Observability, 25
Observability gramian, 21, 183, 250
One-port and two-port networks, 4
One-port network, 37
Optimal control problem, 267
Optimal H2 controller synthesis, 248
Orthogonal, 9
Orthogonal complement, 11
Orthonormal, 10
Outer, 187, 206
Output estimation (OE)
  case, 240–241
  problem, 285
Output injection (OI)
  case, 243–247
  problem, 289
Output response, 22

P
Parameter H, 43, 46
Pole-zero cancellation property, 322
Positive definite, 9, 25, 26
Positive real, 58, 61
Positive semi-definite, 9
Pseudo derivative feedback (PDF), 303, 314
Pseudo derivative feedback with feed-forward (PDFF), 303, 321
Pseudo-inverse, 13

Q
Quadratic equation, 175

R
Rank, 9
Rank test, 24, 26

S
Scattering parameter, 40, 48
Schur complement, 13
Schur decomposition, 172
Separation principle, 264
S-function, 68
Similarity transformation, 127
Single chain-scattering description (CSD), 267
Single-input-single-output (SISO), 2
Singular value decomposition (SVD), 16
Singular values, 15
Slicot, 3
Solutions of special SCC formulations, 245
Space L∞, 16
Space Lp, 16
S parameter, 40, 48, 49, 52, 54, 55, 57
Special SCC formulations, 236
Special state feedback (SF), 318
Spectral factorization, 3, 4, 185, 267
Spectral radius, 12
Speed control of DC servomotors, 303
Stability, 24
Stabilizable, 25
Stabilizing controller, 1, 3, 5, 217, 224
Standard control (or compensation) configuration (SCC), 4
Standard control configuration (SCC), 3
Star product, 90, 134
State feedback (SF)
  case, 239–240
  problem, 285
State response, 22
State similarity transformation, 23
State-space formulae of stabilizing controllers, 220–227
State-space realization, 22, 27–29
Sub-optimal, 3
Subspace, 10
T
The worst case scenario, 303
Torque constant, 304
T parameter, 40, 52–54, 58, 60, 118, 120
Transduction matrix, 309
Transmission parameter, 40, 45
Transmission parameters, 44, 54
Transpose of a matrix, 8
Transpose of a vector, 8
Two-port network, 37, 38, 40, 43–46, 49, 51, 52, 56, 57

U
Unitary, 9
Unitary S matrix, 57
Upper block triangular numerator, 221

V
Vector, 7
Vector/matrix manipulations, 9
Vector norm, 14
Vector 1-norm, 14
Vector ∞-norm, 14
Vector 2-norm, 14
Vector p-norm, 14

W
Weighted all-pass, 185
Weighted all-pass function, 185
Weighted co-all-pass, 185
Weighted mixed sensitivity design, 322
Well-defined, 67
Well-posed, 67

Y
Youla parameterization, 89
Y parameter, 40, 42

Z
Z parameter, 40, 42, 47