Appendix D

A Review of Complex Variables


This appendix is a brief summary of some results on complex variables theory, with emphasis on
the facts needed in control theory. For a comprehensive study of basic complex variables theory, see
standard textbooks such as Brown and Churchill (1996) or Marsden and Hoffman (1998).

D.1 Definition of a Complex Number

The complex numbers are distinguished from purely real numbers in that they also contain the
imaginary operator, which we shall denote j. By definition,

    j^2 = -1 \quad \text{or} \quad j = \sqrt{-1}.                                   (D.1)
A complex number A may be defined as

    A = \sigma + j\omega,                                                           (D.2)

where \sigma is the real part and \omega is the imaginary part, denoted respectively as

    \sigma = \mathrm{Re}(A), \qquad \omega = \mathrm{Im}(A).                        (D.3)

Note that the imaginary part of A is itself a real number.
Graphically, we may represent the complex number A in two ways. In the Cartesian coordinate
system (Fig. D.1a), A is represented by a single point in the complex plane. In the polar coordinate
system, A is represented by a vector with length r and an angle \theta; the angle is measured in radians
counterclockwise from the positive real axis (Fig. D.1b). In polar form the complex number A is
denoted by

    A = |A| \angle \arg A = r \angle \theta = r e^{j\theta}, \qquad 0 \le \theta \le 2\pi,   (D.4)

where r, called the magnitude, modulus, or absolute value of A, is the length of the vector
representing A, namely,

    r = |A| = \sqrt{\sigma^2 + \omega^2},                                           (D.5)

and where \theta is given by

    \tan\theta = \frac{\omega}{\sigma}                                              (D.6)

or

    \theta = \arg(A) = \tan^{-1}\!\left(\frac{\omega}{\sigma}\right).               (D.7)

Care must be taken to compute the correct value of the angle, depending on the signs of the real
and imaginary parts (i.e., one must find the quadrant in which the complex number lies).
The conjugate of A is defined as

    A^* = \sigma - j\omega.                                                         (D.8)
Figure D.1: The complex number represented in (a) Cartesian and (b) polar coordinates
Figure D.2: Arithmetic of complex numbers: (a) addition; (b) multiplication; (c) division
Therefore,

    (A^*)^* = A,                                                                    (D.9)
    (A_1 \pm A_2)^* = A_1^* \pm A_2^*,                                              (D.10)
    \left(\frac{A_1}{A_2}\right)^{\!*} = \frac{A_1^*}{A_2^*},                       (D.11)
    (A_1 A_2)^* = A_1^* A_2^*,                                                      (D.12)
    \mathrm{Re}(A) = \frac{A + A^*}{2}, \qquad \mathrm{Im}(A) = \frac{A - A^*}{2j},  (D.13)
    A A^* = (|A|)^2.                                                                (D.14)
D.2 Algebraic Manipulations

D.2.1 Complex Addition

If we let

    A_1 = \sigma_1 + j\omega_1 \quad \text{and} \quad A_2 = \sigma_2 + j\omega_2,   (D.15)

then

    A_1 + A_2 = (\sigma_1 + j\omega_1) + (\sigma_2 + j\omega_2) = (\sigma_1 + \sigma_2) + j(\omega_1 + \omega_2).   (D.16)

Because each complex number is represented by a vector extending from the origin, we can add or
subtract complex numbers graphically. The sum is obtained by adding the two vectors. This we do
by constructing a parallelogram and finding its diagonal, as shown in Fig. D.2(a). Alternatively, we
could start at the tail of one vector, draw a vector parallel to the other vector, and then connect the
origin to the new arrowhead.
Complex subtraction is very similar to complex addition.
D.2.2 Complex Multiplication

For two complex numbers defined according to Eq. (D.15),

    A_1 A_2 = (\sigma_1 + j\omega_1)(\sigma_2 + j\omega_2)
            = (\sigma_1\sigma_2 - \omega_1\omega_2) + j(\omega_1\sigma_2 + \sigma_1\omega_2).   (D.17)

The product of two complex numbers may be obtained graphically using polar representations, as
shown in Fig. D.2(b).
D.2.3 Complex Division

The division of two complex numbers is carried out by rationalization. This means that both the
numerator and denominator in the ratio are multiplied by the conjugate of the denominator:

    \frac{A_1}{A_2} = \frac{A_1}{A_2}\,\frac{A_2^*}{A_2^*}
                    = \frac{(\sigma_1\sigma_2 + \omega_1\omega_2) + j(\omega_1\sigma_2 - \sigma_1\omega_2)}{\sigma_2^2 + \omega_2^2}.   (D.18)

From Eq. (D.4) it follows that

    A^{-1} = \frac{1}{r}\,e^{-j\theta}, \qquad r \ne 0.                             (D.19)

Also, if A_1 = r_1 e^{j\theta_1} and A_2 = r_2 e^{j\theta_2}, then

    A_1 A_2 = r_1 r_2\, e^{j(\theta_1 + \theta_2)},                                 (D.20)

where |A_1 A_2| = r_1 r_2 and \arg(A_1 A_2) = \theta_1 + \theta_2, and

    \frac{A_1}{A_2} = \frac{r_1}{r_2}\, e^{j(\theta_1 - \theta_2)}, \qquad r_2 \ne 0,   (D.21)

where |A_1/A_2| = r_1/r_2 and \arg(A_1/A_2) = \theta_1 - \theta_2. The division of complex numbers may be carried out
graphically in polar coordinates as shown in Fig. D.2(c).
Example D.1 Frequency Response of a First-Order System
Find the magnitude and phase of the transfer function G(s) = \frac{1}{s + 1}, where s = \sigma + j\omega.

SOLUTION Substituting s = \sigma + j\omega and rationalizing, we obtain

    G(s) = \frac{1}{\sigma + 1 + j\omega}\,\frac{\sigma + 1 - j\omega}{\sigma + 1 - j\omega}
         = \frac{\sigma + 1 - j\omega}{(\sigma + 1)^2 + \omega^2}.

Therefore, the magnitude and phase are

    |G(s)| = \frac{\sqrt{(\sigma + 1)^2 + \omega^2}}{(\sigma + 1)^2 + \omega^2} = \frac{1}{\sqrt{(\sigma + 1)^2 + \omega^2}},

    \arg[G(s)] = \tan^{-1}\!\left[\frac{\mathrm{Im}(G)}{\mathrm{Re}(G)}\right] = \tan^{-1}\!\left(\frac{-\omega}{\sigma + 1}\right).
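A quick numerical check of Example D.1 is sketched below (assuming MATLAB or Octave; the test values sigma = 0.5 and omega = 2 are arbitrary and not from the text).

% Sketch (MATLAB/Octave): numerical check of Example D.1 at an arbitrary point.
sigma = 0.5; omega = 2;
s = sigma + 1j*omega;
G = 1/(s + 1);
fprintf('|G|   = %.6f, formula = %.6f\n', abs(G), 1/sqrt((sigma + 1)^2 + omega^2));
fprintf('arg G = %.6f, formula = %.6f\n', angle(G), atan2(-omega, sigma + 1));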


Figure D.3: Graphical determination of magnitude and phase
D.3 Graphical Evaluation of Magnitude and Phase

Consider the transfer function

    G(s) = \frac{\prod_{i=1}^{m}(s + z_i)}{\prod_{i=1}^{n}(s + p_i)}.               (D.22)
The value of the transfer function for sinusoidal inputs is found by replacing s with j\omega. The gain and
phase are given by G(j\omega) and may be determined analytically or by a graphical procedure. Consider
the pole-zero configuration for such a G(s) and a point s_0 = j\omega_0 on the imaginary axis, as shown
in Fig. D.3. Also consider the vectors drawn from the poles and the zero to s_0. The magnitude of
the transfer function evaluated at s_0 = j\omega_0 is simply the ratio of the distance from the zero to the
product of all the distances from the poles:

    |G(j\omega_0)| = \frac{r_1}{r_2 r_3 r_4}.                                       (D.23)

The phase is given by the sum of the angles from the zero minus the sum of the angles from the
poles:

    \arg G(j\omega_0) = \angle G(j\omega_0) = \theta_1 - (\theta_2 + \theta_3 + \theta_4).   (D.24)

This may be explained as follows. The term s + z_1 is a vector addition of its two components. We
may determine this equivalently as s - (-z_1), which amounts to translating the vector s + z_1
so that it starts at -z_1, as shown in Fig. D.4. This means that a vector drawn from the zero location to
s_0 is equivalent to s + z_1. The same reasoning applies to the poles. We reflect p_1, p_2, and p_3 about
the origin to obtain the pole locations. Then the vectors drawn from -p_1, -p_2, and -p_3 to s_0 are
the same as the vectors in the denominator represented in polar coordinates. Note that this method
may also be used to evaluate s_0 at places in the complex plane besides the imaginary axis.
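The graphical rules of Eqs. (D.23) and (D.24) can be checked numerically. The sketch below (assuming MATLAB or Octave) uses an illustrative zero/pole set for G(s) = (s + z_1)/[(s + p_1)(s + p_2)(s + p_3)]; the values are ours, not taken from Fig. D.3.

% Sketch (MATLAB/Octave): compare direct evaluation with the graphical formulas.
z1 = 1; p1 = 2; p2 = 3 - 4j; p3 = 3 + 4j;            % illustrative values
s0 = 2j;                                             % test point on the imaginary axis
Gdir = (s0 + z1)/((s0 + p1)*(s0 + p2)*(s0 + p3));    % direct evaluation
mag  = abs(s0 + z1)/(abs(s0 + p1)*abs(s0 + p2)*abs(s0 + p3));                 % Eq. (D.23)
ph   = angle(s0 + z1) - (angle(s0 + p1) + angle(s0 + p2) + angle(s0 + p3));   % Eq. (D.24)
fprintf('direct:    %.6f at %.6f rad\n', abs(Gdir), angle(Gdir));
fprintf('graphical: %.6f at %.6f rad\n', mag, ph);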
D.4 Differentiation and Integration

The usual rules apply to complex differentiation. Let G(s) be differentiable with respect to s. Then
the derivative at s_0 is defined as

    G'(s_0) = \lim_{s \to s_0} \frac{G(s) - G(s_0)}{s - s_0},                       (D.25)

provided that the limit exists. For conditions on the existence of the derivative, see Brown and
Churchill (1996).

Figure D.4: Illustration of graphical computation of s + z_1

The standard rules also apply to integration, except that the constant of integration c is a complex
constant:

    \int G(s)\,ds = \int \mathrm{Re}[G(s)]\,ds + j \int \mathrm{Im}[G(s)]\,ds + c.   (D.26)
D.5 Euler's Relations

Let us now derive an important relationship involving the complex exponential. If we define

    A = \cos\theta + j\sin\theta,                                                   (D.27)

where \theta is in radians, then

    \frac{dA}{d\theta} = -\sin\theta + j\cos\theta = j^2\sin\theta + j\cos\theta
                       = j(\cos\theta + j\sin\theta) = jA.                          (D.28)

We collect the terms involving A to obtain

    \frac{dA}{A} = j\,d\theta.                                                      (D.29)

Integrating both sides of Eq. (D.29) yields

    \ln A = j\theta + c,                                                            (D.30)

where c is a constant of integration. If we let \theta = 0 in Eq. (D.30), we find that c = 0, so

    A = e^{j\theta} = \cos\theta + j\sin\theta.                                     (D.31)

Similarly,

    A^* = e^{-j\theta} = \cos\theta - j\sin\theta.                                  (D.32)

From Eqs. (D.31) and (D.32) it follows that Euler's relations are

    \cos\theta = \frac{e^{j\theta} + e^{-j\theta}}{2},                              (D.33)

    \sin\theta = \frac{e^{j\theta} - e^{-j\theta}}{2j}.                             (D.34)
Figure D.5: Contours in the s-plane: (a) a closed contour; (b) two different paths between A_1 and A_2
D.6 Analytic Functions

Let us assume that G is a complex-valued function defined in the complex plane. Let s_0 be in the
domain of G, which is assumed to be finite within some disk centered at s_0. Thus, G(s) is defined
not only at s_0 but also at all points in the disk centered at s_0. The function G is said to be analytic
if its derivative exists at s_0 and at each point in the neighborhood of s_0.

D.7 Cauchy's Theorem

A contour is a piecewise-smooth arc that consists of a number of smooth arcs joined together. A
simple closed contour is a contour that does not intersect itself and ends on itself. Let C be a
closed contour as shown in Fig. D.5(a), and let G be analytic inside and on C. Cauchy's theorem
states that

    \oint_C G(s)\,ds = 0.                                                           (D.35)

There is a corollary to this theorem: Let C_1 and C_2 be two paths connecting the points A_1 and A_2,
as in Fig. D.5(b). Then

    \int_{C_1} G(s)\,ds = \int_{C_2} G(s)\,ds.                                      (D.36)
D.8 Singularities and Residues

If a function G(s) is not analytic at s_0 but is analytic at some point in every neighborhood of s_0,
then s_0 is said to be a singularity of G. A singular point s_0 is said to be an isolated singularity if G(s) is
analytic everywhere else in the neighborhood of s_0 except at s_0 itself. Let G(s) be a rational function (that
is, a ratio of polynomials). If the numerator and denominator are both analytic, then G(s) will be
analytic except at the locations of the poles (that is, at the roots of the denominator). All singularities
of rational algebraic functions are the pole locations.
Let G(s) be analytic except at s_0. Then we may write G(s) in its Laurent series expansion form:

    G(s) = \frac{A_{-n}}{(s - s_0)^n} + \ldots + \frac{A_{-1}}{(s - s_0)} + B_0 + B_1(s - s_0) + \ldots .   (D.37)

The coefficient A_{-1} is called the residue of G(s) at s_0 and may be evaluated as

    A_{-1} = \mathrm{Res}[G(s); s_0] = \frac{1}{2\pi j} \oint_C G(s)\,ds,            (D.38)

Figure D.6: Contour around an isolated singularity

where C denotes a closed arc within an analytic region centered at s_0 that contains no other
singularity, as shown in Fig. D.6. When s_0 is not repeated (that is, when n = 1), we have

    A_{-1} = \mathrm{Res}[G(s); s_0] = (s - s_0)G(s)\big|_{s = s_0}.                 (D.39)

This is the familiar cover-up method of computing residues.
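The sketch below (assuming MATLAB or Octave) checks the cover-up rule of Eq. (D.39) against the contour-integral definition of Eq. (D.38) for an illustrative G(s) of our own choosing.

% Sketch (MATLAB/Octave): residue of G(s) = 1/((s+1)(s+3)) at the simple pole s0 = -1.
G  = @(s) 1./((s + 1).*(s + 3));
s0 = -1;
res_coverup = 1/(s0 + 3);                      % (s - s0)*G(s) evaluated at s = s0
theta = linspace(0, 2*pi, 2000);
s     = s0 + 0.1*exp(1j*theta);                % small circle of radius 0.1 about s0
dsdth = 0.1*1j*exp(1j*theta);                  % ds/dtheta along the circle
res_integral = trapz(theta, G(s).*dsdth)/(2*pi*1j);   % (1/(2*pi*j)) * contour integral
fprintf('cover-up: %.4f   contour integral: %.4f\n', res_coverup, real(res_integral));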
D.9 Residue Theorem

If the contour C contains l singularities, then Eq. (D.38) may be generalized to yield Cauchy's
residue theorem:

    \frac{1}{2\pi j} \oint G(s)\,ds = \sum_{i=1}^{l} \mathrm{Res}[G(s); s_i].        (D.40)
D.10 The Argument Principle

Before stating the argument principle, we need a preliminary result from which the principle follows
readily.

Number of Poles and Zeros

Let G(s) be an analytic function inside and on a closed contour C, except for a finite number of
poles inside C. Then, for C described in the positive sense (clockwise direction),

    \frac{1}{2\pi j} \oint \frac{G'(s)}{G(s)}\,ds = N - P                            (D.41)

or

    \frac{1}{2\pi j} \oint d(\ln G) = N - P,                                         (D.42)

where N and P are the total number of zeros and poles of G inside C, respectively. A pole or zero
of multiplicity k is counted k times.
Proof Let s_0 be a zero of G with multiplicity k. Then, in some neighborhood of that point, we
may write G(s) as

    G(s) = (s - s_0)^k f(s),                                                         (D.43)

where f(s) is analytic and f(s_0) \ne 0. If we differentiate Eq. (D.43), we obtain

    G'(s) = k(s - s_0)^{k-1} f(s) + (s - s_0)^k f'(s).                               (D.44)

Equation (D.44) may be rewritten as

    \frac{G'(s)}{G(s)} = \frac{k}{s - s_0} + \frac{f'(s)}{f(s)}.                     (D.45)
Therefore, G'(s)/G(s) has a pole at s = s_0 with residue k. This analysis may be repeated for every
zero. Hence, the sum of the residues of G'(s)/G(s) at the zeros is the number of zeros, N, of G(s) inside C. If s_0 is
a pole with multiplicity l, we may write

    h(s) = (s - s_0)^l G(s),                                                         (D.46)

where h(s) is analytic and h(s_0) \ne 0. Then Eq. (D.46) may be rewritten as

    G(s) = \frac{h(s)}{(s - s_0)^l}.                                                 (D.47)

Differentiating Eq. (D.47), we obtain

    G'(s) = \frac{h'(s)}{(s - s_0)^l} - \frac{l\,h(s)}{(s - s_0)^{l+1}},             (D.48)

so that

    \frac{G'(s)}{G(s)} = \frac{-l}{s - s_0} + \frac{h'(s)}{h(s)}.                    (D.49)

This analysis may be repeated for every pole. The result is that the sum of the residues of G'(s)/G(s)
at all the poles of G(s) is -P.
The Argument Principle

Using Eq. (D.41), we get

    \frac{1}{2\pi j} \oint_C d[\ln G(s)] = N - P,                                    (D.50)

where d[\ln G(s)] was substituted for [G'(s)/G(s)]\,ds. If we write G(s) in polar form, then

    \oint_{\Gamma} d[\ln G(s)] = \oint_{\Gamma} d\{\ln|G(s)| + j\arg[G(s)]\}
                               = \ln|G(s)|\,\big|_{s=s_1}^{s=s_2} + j\arg G(s)\,\big|_{s=s_1}^{s=s_2}.   (D.51)

Because \Gamma is a closed contour, the first term is zero, but the second term is 2\pi times the net
encirclements of the origin:

    \frac{1}{2\pi j} \oint_{\Gamma} d[\ln G(s)] = N - P.                             (D.52)

Intuitively, the argument principle may be stated as follows: We let G(s) be a rational function
that is analytic except at possibly a finite number of points. We select an arbitrary contour in the
s-plane so that G(s) is analytic at every point on the contour (the contour does not pass through
any of the singularities). The corresponding mapping into the G(s)-plane may encircle the origin.
The number of times it does so is determined by the difference between the number of zeros and
the number of poles of G(s) encircled by the s-plane contour. The direction of this encirclement is
determined by which is greater, N (clockwise) or P (counterclockwise). For example, if the contour
encircles a single zero, the mapping will encircle the origin once in the clockwise direction. Similarly,
if the contour encloses only a single pole, the mapping will encircle the origin, this time in the
counterclockwise direction. If the contour encircles no singularities, or if the contour encloses an
equal number of poles and zeros, there will be no encirclement of the origin. A contour evaluation of
G(s) will encircle the origin if there is a nonzero net difference between the encircled singularities.
The mapping is conformal as well, which means that the magnitude and sense of the angles between
smooth arcs is preserved. Chapter 6 provides a more detailed intuitive treatment of the argument
principle and its application to feedback control in the form of the Nyquist stability theorem.
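A small numerical illustration of the encirclement count is sketched below (assuming MATLAB or Octave). The rational function and the contour radius are our own illustrative choices: the circle of radius 2.5 encloses the zero at -1 and the pole at -2, so N - P = 0 and the mapped contour should not encircle the origin.

% Sketch (MATLAB/Octave): counting encirclements of the origin by the image of a contour.
G = @(s) (s + 1)./((s + 2).*(s + 3));
theta = linspace(0, 2*pi, 5000);
C = 2.5*exp(1j*theta);                  % contour in the s-plane
w = G(C);                               % its image in the G(s)-plane
phi = unwrap(angle(w));                 % continuous argument along the image
fprintf('net encirclements of the origin: %d\n', round((phi(end) - phi(1))/(2*pi)));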
D.11 Bilinear Transformation

A bilinear transformation is of the form

    w = \frac{as + b}{cs + d},                                                       (D.53)

where a, b, c, d are complex constants, and it is assumed that ad - bc \ne 0. The bilinear transformation
always transforms circles in the w-plane into circles in the s-plane. This can be shown in several
ways. If we solve for s, we obtain

    s = \frac{-dw + b}{cw - a}.                                                      (D.54)

The equation for a circle in the w-plane is of the form

    \frac{|w - \sigma|}{|w - p|} = R.                                                (D.55)

If we substitute for w in terms of s from Eq. (D.53), we get

    \frac{|s - \sigma'|}{|s - p'|} = R',                                             (D.56)

where

    \sigma' = \frac{\sigma d - b}{a - \sigma c}, \qquad p' = \frac{p d - b}{a - p c}, \qquad
    R' = \left|\frac{a - pc}{a - \sigma c}\right| R,                                 (D.57)

which is the equation for a circle in the s-plane. For alternative proofs the reader is referred to Brown
and Churchill (1996) and Marsden and Hoffman (1998).
Appendix E
Summary of Matrix Theory
In the text, we assume you are already somewhat familiar with matrix theory and with the solution
of linear systems of equations. However, for the purposes of review we present here a brief summary
of matrix theory with an emphasis on the results needed in control theory. For further study, see
Strang (1988) and Gantmacher (1959).
E.1 Matrix Definitions

An array of numbers arranged in rows and columns is referred to as a matrix. If A is a matrix with
m rows and n columns, an m × n (read "m by n") matrix, it is denoted by

    A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\
                        a_{21} & a_{22} & \cdots & a_{2n} \\
                        \vdots & \vdots & \ddots & \vdots \\
                        a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix},             (E.1)

where the entries a_{ij} are its elements. If m = n, then the matrix is square; otherwise it is
rectangular. Sometimes a matrix is simply denoted by A = [a_{ij}]. If m = 1 or n = 1, then the matrix
reduces to a row vector or a column vector, respectively. A submatrix of A is the matrix with
certain rows and columns removed.
E.2 Elementary Operations on Matrices

If A and B are matrices of the same dimension, then their sum is defined by

    C = A + B,                                                                       (E.2)

where

    c_{ij} = a_{ij} + b_{ij}.                                                        (E.3)

That is, the addition is done element by element. It is easy to verify the following properties of
matrices (the commutative and associative laws for addition):

    A + B = B + A,                                                                   (E.4)
    (A + B) + C = A + (B + C).                                                       (E.5)

Two matrices can be multiplied if they are compatible. Let A be m × n and B be n × p. Then the
m × p matrix

    C = AB                                                                           (E.6)

is the product of the two matrices, where

    c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}.                                           (E.7)

Matrix multiplication satisfies the associative law

    A(BC) = (AB)C,                                                                   (E.8)

but not the commutative law; that is, in general,

    AB \ne BA.                                                                       (E.9)
E.3 Trace

The trace of a square matrix is the sum of its diagonal elements:

    \mathrm{trace}\,A = \sum_{i=1}^{n} a_{ii}.                                       (E.10)
E.4 Transpose

The n × m matrix obtained by interchanging the rows and columns of A is called the transpose of
matrix A:

    A^T = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\
                          a_{12} & a_{22} & \cdots & a_{m2} \\
                          \vdots & \vdots & \ddots & \vdots \\
                          a_{1n} & a_{2n} & \cdots & a_{mn} \end{bmatrix}.

A matrix is said to be symmetric if

    A^T = A.                                                                         (E.11)

It is easy to show that

    (AB)^T = B^T A^T,                                                                (E.12)
    (ABC)^T = C^T B^T A^T,                                                           (E.13)
    (A + B)^T = A^T + B^T.                                                           (E.14)
E.5 Determinant and Matrix Inverse

The determinant of a square matrix is defined by Laplace's expansion

    \det A = \sum_{j=1}^{n} a_{ij}\gamma_{ij} \quad \text{for any } i = 1, 2, \ldots, n,   (E.15)

where \gamma_{ij} is called the cofactor and

    \gamma_{ij} = (-1)^{i+j} \det M_{ij},                                            (E.16)

where the scalar \det M_{ij} is called a minor. M_{ij} is the same as the matrix A except that its ith row
and jth column have been removed. Note that M_{ij} is always an (n-1) × (n-1) matrix, and that
the minors and cofactors are identical except possibly for a sign.
The adjugate of a matrix is the transpose of the matrix of its cofactors:

    \mathrm{adj}\,A = [\gamma_{ij}]^T.                                               (E.17)

It can be shown that

    A\,\mathrm{adj}\,A = (\det A)\,I,                                                (E.18)

where I is the identity matrix,

    I = \begin{bmatrix} 1 & 0 & \cdots & 0 \\
                        0 & 1 & \cdots & 0 \\
                        \vdots & & \ddots & \vdots \\
                        0 & \cdots & 0 & 1 \end{bmatrix};

that is, I has ones along the diagonal and zeros elsewhere. If \det A \ne 0, then the inverse of a matrix
A is defined by

    A^{-1} = \frac{\mathrm{adj}\,A}{\det A}                                          (E.19)

and has the property

    A A^{-1} = A^{-1} A = I.                                                         (E.20)

Note that a matrix has an inverse (that is, it is nonsingular) if its determinant is nonzero.
The inverse of the product of two matrices is the product of the inverses of the matrices in reverse
order:

    (AB)^{-1} = B^{-1} A^{-1}                                                        (E.21)

and

    (ABC)^{-1} = C^{-1} B^{-1} A^{-1}.                                               (E.22)
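The cofactor, adjugate, and inverse relations above can be checked numerically, as in the sketch below (assuming MATLAB or Octave; the matrix A is an arbitrary example of ours).

% Sketch (MATLAB/Octave): Eqs. (E.15)-(E.20) checked by building the adjugate from cofactors.
A = [2 1 0; 1 3 1; 0 1 4];
n = size(A, 1);
gam = zeros(n);                              % matrix of cofactors, Eq. (E.16)
for i = 1:n
    for j = 1:n
        M = A; M(i,:) = []; M(:,j) = [];     % minor: remove row i and column j
        gam(i,j) = (-1)^(i+j)*det(M);
    end
end
adjA = gam';                                 % adjugate, Eq. (E.17)
disp(norm(A*adjA - det(A)*eye(n)));          % Eq. (E.18): should be (numerically) zero
disp(norm(adjA/det(A) - inv(A)));            % Eq. (E.19): should be (numerically) zero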
E.6 Properties of the Determinant

When dealing with determinants of matrices, the following elementary (row or column) operations
are useful:

1. If any row (or column) of A is multiplied by a scalar c, the resulting matrix \bar{A} has the
   determinant

       \det \bar{A} = c \det A.                                                      (E.23)

   Hence

       \det(cA) = c^n \det A.                                                        (E.24)

2. If any two rows (or columns) of A are interchanged to obtain \bar{A}, then

       \det \bar{A} = -\det A.                                                       (E.25)

3. If a multiple of a row (or column) of A is added to another to obtain \bar{A}, then

       \det \bar{A} = \det A.                                                        (E.26)

4. It is also easy to show that

       \det A = \det A^T                                                             (E.27)

   and

       \det AB = \det A \det B.                                                      (E.28)
Applying Eq. (E.28) to Eq. (E.20), we have

    \det A \det A^{-1} = 1.                                                          (E.29)

If A and B are square matrices, then the determinant of a block triangular matrix is the
product of the determinants of the diagonal blocks:

    \det \begin{bmatrix} A & C \\ 0 & B \end{bmatrix} = \det A \det B.               (E.30)

If A is nonsingular, then

    \det \begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det A \,\det(D - C A^{-1} B).   (E.31)

Using this identity, we can write the transfer function of a scalar system in a compact form:

    G(s) = H(sI - F)^{-1}G + J = \frac{\det \begin{bmatrix} sI - F & G \\ -H & J \end{bmatrix}}{\det(sI - F)}.   (E.32)
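Equation (E.32) is easy to verify numerically, as sketched below (assuming MATLAB or Octave; the second-order state-space data and the value of s are arbitrary choices of ours).

% Sketch (MATLAB/Octave): check Eq. (E.32) at one value of s.
F = [0 1; -2 -3]; G = [0; 1]; H = [1 0]; J = 0;
s = 0.7j;
lhs = H/(s*eye(2) - F)*G + J;                           % H(sI - F)^(-1)G + J
rhs = det([s*eye(2) - F, G; -H, J])/det(s*eye(2) - F);  % block-determinant form
fprintf('difference: %.2e\n', abs(lhs - rhs));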
E.7 Inverse of Block Triangular Matrices

If A and B are square invertible matrices, then

    \begin{bmatrix} A & C \\ 0 & B \end{bmatrix}^{-1} =
    \begin{bmatrix} A^{-1} & -A^{-1} C B^{-1} \\ 0 & B^{-1} \end{bmatrix}.           (E.33)
E.8 Special Matrices

Some matrices have special structures and are given names. We have already defined the identity
matrix, which has a special form. A diagonal matrix has (possibly) nonzero elements along the
main diagonal and zeros elsewhere:

    A = \begin{bmatrix} a_{11} & & & & 0 \\
                        & a_{22} & & & \\
                        & & a_{33} & & \\
                        & & & \ddots & \\
                        0 & & & & a_{nn} \end{bmatrix}.                              (E.34)

A matrix is said to be (upper) triangular if all the elements below the main diagonal are zeros:

    A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\
                        0 & a_{22} & & \vdots \\
                        \vdots & 0 & \ddots & \vdots \\
                        0 & 0 & \cdots & a_{nn} \end{bmatrix}.                       (E.35)

The determinant of a diagonal or triangular matrix is simply the product of its diagonal elements.
A matrix is said to be in (upper) companion form if it has the structure

    A_c = \begin{bmatrix} -a_1 & -a_2 & \cdots & -a_n \\
                          1 & 0 & \cdots & 0 \\
                          0 & 1 & \cdots & 0 \\
                          \vdots & & \ddots & \vdots \\
                          0 & \cdots & 1 & 0 \end{bmatrix}.                          (E.36)

Note that all the information is contained in the first row. Variants of this form are the lower, left,
and right companion matrices. A Vandermonde matrix has the following structure:

    A = \begin{bmatrix} 1 & a_1 & a_1^2 & \cdots & a_1^{n-1} \\
                        1 & a_2 & a_2^2 & \cdots & a_2^{n-1} \\
                        \vdots & \vdots & \vdots & & \vdots \\
                        1 & a_n & a_n^2 & \cdots & a_n^{n-1} \end{bmatrix}.          (E.37)
E.9 Rank
The rank of a matrix is the number of its linearly independent rows or columns. If the rank of A is
r, then all (r + 1) × (r + 1) submatrices of A are singular, and there is at least one r × r submatrix
that is nonsingular. It is also true that
row rank of A = column rank of A. (E.38)
E.10 Characteristic Polynomial

The characteristic polynomial of a matrix A is defined by

    a(s) \triangleq \det(sI - A)
         = s^n + a_1 s^{n-1} + \cdots + a_{n-1} s + a_n,                             (E.39)

where the roots of the polynomial are referred to as the eigenvalues of A. We can write

    a(s) = (s - \lambda_1)(s - \lambda_2) \cdots (s - \lambda_n),                    (E.40)

where the \lambda_i are the eigenvalues of A. The characteristic polynomial of a companion matrix (e.g.,
Eq. (E.36)) is

    a(s) = \det(sI - A_c)
         = s^n + a_1 s^{n-1} + \cdots + a_{n-1} s + a_n.                             (E.41)
E.11 Cayley-Hamilton Theorem

The Cayley-Hamilton theorem states that every square matrix A satisfies its characteristic polynomial.
This means that if A is an n × n matrix with characteristic polynomial a(s), then

    a(A) \triangleq A^n + a_1 A^{n-1} + \cdots + a_{n-1} A + a_n I = 0.              (E.42)
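The theorem is easy to check numerically, as in the sketch below (assuming MATLAB or Octave; the matrix A is an arbitrary example of ours).

% Sketch (MATLAB/Octave): Cayley-Hamilton, Eq. (E.42), via poly and polyvalm.
A = [0 1 0; 0 0 1; -6 -11 -6];
a = poly(A);                      % coefficients [1 a1 a2 a3] of det(sI - A)
disp(norm(polyvalm(a, A)));       % a(A): should be zero to machine precision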
E.12 Eigenvalues and Eigenvectors

Any scalar \lambda and nonzero vector v that satisfy

    Av = \lambda v                                                                   (E.43)

are referred to as an eigenvalue and an associated (right) eigenvector of the matrix A [because
v appears to the right of A in Eq. (E.43)]. By rearranging terms in Eq. (E.43), we get

    (\lambda I - A)v = 0.                                                            (E.44)

Because v is nonzero,

    \det(\lambda I - A) = 0,                                                         (E.45)

so \lambda is an eigenvalue of the matrix A as defined in Eq. (E.43). The normalization of the eigenvectors
is arbitrary; that is, if v is an eigenvector, so is any nonzero scalar multiple of v. The eigenvectors
are usually normalized to have unit length; that is, \|v\|^2 = v^T v = 1.
If w^T is a nonzero row vector such that

    w^T A = \lambda w^T,                                                             (E.46)

then w is called a left eigenvector of A [because w^T appears to the left of A in Eq. (E.46)]. Note
that we can write

    A^T w = \lambda w,                                                               (E.47)

so that w is simply a right eigenvector of A^T.
E.13 Similarity Transformations

Consider the arbitrary nonsingular matrix T such that

    \bar{A} = T^{-1} A T.                                                            (E.48)

The matrix operation shown in Eq. (E.48) is referred to as a similarity transformation. If A has
a full set of eigenvectors, then we can choose T to be the set of eigenvectors and \bar{A} will be diagonal.
Consider the set of equations in state-variable form:

    \dot{x} = F x + G u.                                                             (E.49)

If we let

    x = T\xi,                                                                        (E.50)

then Eq. (E.49) becomes

    T\dot{\xi} = F T\xi + G u,                                                       (E.51)

and premultiplying both sides by T^{-1}, we get

    \dot{\xi} = T^{-1} F T\xi + T^{-1} G u
              = \bar{F}\xi + \bar{G} u,                                              (E.52)

where

    \bar{F} = T^{-1} F T, \qquad \bar{G} = T^{-1} G.                                 (E.53)

The characteristic polynomial of \bar{F} is

    \det(sI - \bar{F}) = \det(sI - T^{-1} F T)
                       = \det(s T^{-1} T - T^{-1} F T)
                       = \det[T^{-1}(sI - F)T]
                       = \det T^{-1} \det(sI - F) \det T.                            (E.54)

Using Eq. (E.29), Eq. (E.54) becomes

    \det(sI - \bar{F}) = \det(sI - F).                                               (E.55)

From Eq. (E.55) we can see that \bar{F} and F both have the same characteristic polynomial, giving us
the important result that a similarity transformation does not change the eigenvalues of a matrix.
From Eq. (E.50), a new state made up of a linear combination of the old state variables has the same
eigenvalues as the old set.
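A short numerical illustration is sketched below (assuming MATLAB or Octave; F and T are arbitrary example matrices of ours).

% Sketch (MATLAB/Octave): a similarity transformation leaves the eigenvalues unchanged, Eq. (E.55).
F = [0 1; -2 -3];
T = [1 1; 0 2];                   % any nonsingular transformation
Fbar = T\F*T;                     % T^(-1)*F*T, Eq. (E.48)
disp(sort(eig(F)).');
disp(sort(eig(Fbar)).');          % same eigenvalues as F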
E.14 Matrix Exponential

Let A be a square matrix. The matrix exponential of A is defined as the series

    e^{At} = I + At + \frac{1}{2!}A^2 t^2 + \frac{1}{3!}A^3 t^3 + \cdots.            (E.56)

It can be shown that the series converges. If A is an n × n matrix, then e^{At} is also an n × n matrix
and can be differentiated:

    \frac{d}{dt} e^{At} = A e^{At}.                                                  (E.57)

Other properties of the matrix exponential are

    e^{At_1} e^{At_2} = e^{A(t_1 + t_2)}                                             (E.58)

and, in general,

    e^{A} e^{B} \ne e^{B} e^{A}.                                                     (E.59)

(In the exceptional case where A and B commute, that is, AB = BA, we have e^A e^B = e^B e^A.)
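The series definition can be compared with a built-in computation, as in the sketch below (assuming MATLAB or Octave; A and t are arbitrary choices of ours).

% Sketch (MATLAB/Octave): truncated series of Eq. (E.56) versus the built-in expm.
A = [0 1; -2 -3]; t = 0.5;
S = eye(2); term = eye(2);
for k = 1:20
    term = term*(A*t)/k;          % k-th term, A^k t^k / k!
    S = S + term;
end
disp(norm(S - expm(A*t)));        % should be (numerically) zero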
E.15 Fundamental Subspaces

The range space of A, denoted by R(A) and also called the column space of A, is defined by
the set of all vectors

    x = Ay                                                                           (E.60)

for some vector y. The null space of A, denoted by N(A), is defined by the set of all vectors x such
that

    Ax = 0.                                                                          (E.61)

If x \in N(A) and y \in R(A^T), then y^T x = 0; that is, every vector in the null space of A is
orthogonal to every vector in the range space of A^T.
E.16 Singular-Value Decomposition

The singular-value decomposition (SVD) is one of the most useful tools in linear algebra and
has been widely used in control theory during the last two decades. Let A be an m × n matrix. Then
there always exist matrices U, S, and V such that

    A = U S V^T.                                                                     (E.62)

Here U and V are orthogonal matrices; that is,

    U U^T = I, \qquad V V^T = I.                                                     (E.63)

S is a quasidiagonal matrix with the singular values as its diagonal elements; that is,

    S = \begin{bmatrix} \Sigma & 0 \\ 0 & 0 \end{bmatrix},                           (E.64)

where \Sigma is a diagonal matrix of nonzero singular values in descending order:

    \sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0.                               (E.65)

The unique diagonal elements of S are called the singular values. The maximum singular value is
denoted by \bar{\sigma}(A), and the minimum singular value is denoted by \underline{\sigma}(A). The rank of the matrix is
the same as the number of nonzero singular values. The columns of U and V,

    U = [\,u_1 \;\; u_2 \;\; \ldots \;\; u_m\,],
    V = [\,v_1 \;\; v_2 \;\; \ldots \;\; v_n\,],                                     (E.66)

are called the left and right singular vectors, respectively. The SVD provides complete information
about the fundamental subspaces associated with a matrix:

    N(A)   = \mathrm{span}[\,v_{r+1} \;\; v_{r+2} \;\; \ldots \;\; v_n\,],
    R(A)   = \mathrm{span}[\,u_1 \;\; u_2 \;\; \ldots \;\; u_r\,],
    R(A^T) = \mathrm{span}[\,v_1 \;\; v_2 \;\; \ldots \;\; v_r\,],
    N(A^T) = \mathrm{span}[\,u_{r+1} \;\; u_{r+2} \;\; \ldots \;\; u_m\,].           (E.67)

Here N denotes the null space and R denotes the range space, respectively.
The norm of the matrix A, denoted by \|A\|_2, is given by

    \|A\|_2 = \bar{\sigma}(A).                                                       (E.68)

If A is a function of \omega, then the infinity norm of A, \|A\|_\infty, is given by

    \|A(j\omega)\|_\infty = \max_{\omega} \bar{\sigma}(A).                           (E.69)
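The subspace information of Eq. (E.67) can be read off directly from the SVD, as sketched below (assuming MATLAB or Octave; the rank-deficient matrix A is an arbitrary example of ours).

% Sketch (MATLAB/Octave): SVD of a rank-deficient matrix and its fundamental subspaces.
A = [1 2 3; 2 4 6];               % rank 1
[U, S, V] = svd(A);
disp(diag(S).');                  % singular values, Eq. (E.65)
r = rank(A);                      % number of nonzero singular values
nullA  = V(:, r+1:end);           % columns spanning N(A)
rangeA = U(:, 1:r);               % columns spanning R(A)
disp(norm(A*nullA));              % A*x = 0 for every x in N(A): should be ~0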
E.17 Positive Definite Matrices

A matrix A is said to be positive semidefinite if

    x^T A x \ge 0 \quad \text{for all } x.                                           (E.70)

The matrix is said to be positive definite if equality holds in Eq. (E.70) only for x = 0. A symmetric
matrix is positive definite if and only if all of its eigenvalues are positive. It is positive semidefinite
if and only if all of its eigenvalues are nonnegative.
An alternate method for determining positive definiteness is to test the minors of the matrix. A
matrix is positive definite if all the leading principal minors are positive, and positive semidefinite
if they are all nonnegative.
E.18 Matrix Identity

If A is an n × m matrix and B is an m × n matrix, then

    \det[I_n - AB] = \det[I_m - BA],

where I_n and I_m are identity matrices of sizes n and m, respectively.
Appendix F
Controllability and Observability
Controllability and observability are important structural properties of dynamic systems. First
identified and studied by Kalman (1960) and later by Kalman et al. (1961), these properties have
continued to be examined during the last four decades. We will discuss only a few of the known results
for linear constant systems with one input and one output. In the text we discuss these concepts in
connection with control law and estimator designs. For example, in Section 7.4 we suggest that if
the square matrix given by

    \mathcal{C} = [\,G \;\; FG \;\; F^2 G \;\; \ldots \;\; F^{n-1} G\,]              (F.1)

is nonsingular, then by transformation of the state we can convert the given description into control
canonical form. We can then construct a control law that will give the closed-loop system an arbitrary
characteristic equation.
F.1 Controllability

We begin our formal discussion of controllability with the first of four definitions:

Definition I: The system (F, G) is controllable if, for any given nth-order polynomial \alpha_c(s),
there exists a (unique) control law u = -Kx such that the characteristic polynomial of F - GK
is \alpha_c(s).

From the results of Ackermann's formula (see Appendix G), we have the following mathematical
test for controllability: (F, G) is a controllable pair if and only if the rank of \mathcal{C} is n. Definition I,
based on pole placement, is a frequency-domain concept. Controllability can be equivalently defined
in the time domain.

Definition II: The system (F, G) is controllable if there exists a (piecewise continuous)
control signal u(t) that will take the state of the system from any initial state x_0 to any desired
final state x_f in a finite time interval.
We will now show that the system is controllable by this definition if and only if \mathcal{C} is full rank. We
first assume that the system is controllable but that

    \mathrm{rank}[\,G \;\; FG \;\; F^2 G \;\; \ldots \;\; F^{n-1} G\,] < n.          (F.2)

We can then find a nonzero row vector v such that

    v[\,G \;\; FG \;\; F^2 G \;\; \ldots \;\; F^{n-1} G\,] = 0,                      (F.3)

or

    vG = vFG = vF^2 G = \ldots = vF^{n-1} G = 0.                                     (F.4)

The Cayley-Hamilton theorem states that F satisfies its own characteristic equation, namely,

    F^n = -a_1 F^{n-1} - a_2 F^{n-2} - \ldots - a_n I.                               (F.5)

Therefore,

    vF^n G = -a_1 vF^{n-1} G - a_2 vF^{n-2} G - \ldots - a_n vG = 0.                 (F.6)

By induction, vF^{n+k} G = 0 for k = 0, 1, 2, \ldots, or vF^m G = 0 for m = 0, 1, 2, \ldots, and thus

    v e^{Ft} G = v\left[\,I + Ft + \frac{1}{2!}F^2 t^2 + \ldots\,\right] G = 0       (F.7)

for all t. However, the zero initial-condition response (x_0 = 0) is

    x(t) = \int_0^t e^{F(t-\tau)} G u(\tau)\,d\tau
         = e^{Ft} \int_0^t e^{-F\tau} G u(\tau)\,d\tau.                              (F.8)

Using Eq. (F.7), Eq. (F.8) becomes

    v\,x(t) = \int_0^t v\,e^{F(t-\tau)} G u(\tau)\,d\tau = 0                         (F.9)

for all u(t) and t > 0. This implies that all points reachable from the origin are orthogonal to v.
This restricts the reachable space and therefore contradicts the second definition of controllability.
Thus if \mathcal{C} is singular, (F, G) is not controllable by Definition II.
Next we assume that \mathcal{C} is full rank but (F, G) is uncontrollable by Definition II. This means
that there exists a nonzero vector v such that

    v \int_0^{t_f} e^{F(t_f - \tau)} G u(\tau)\,d\tau = 0,                           (F.10)

because the whole state space is not reachable. But Eq. (F.10) implies that

    v\,e^{F(t_f - \tau)} G = 0, \qquad 0 \le \tau \le t_f.                           (F.11)

If we set \tau = t_f, we see that vG = 0. Also, differentiating Eq. (F.11) with respect to \tau and letting
\tau = t_f gives vFG = 0. Continuing this process, we find that

    vG = vFG = vF^2 G = \ldots = vF^{n-1} G = 0,                                     (F.12)

which contradicts the assumption that \mathcal{C} is full rank.
We have now shown that the system is controllable by Definition II if and only if the rank of \mathcal{C}
is n, exactly the same condition we found for pole assignment.
Our third definition comes closest to the structural character of controllability:

Definition III: The system (F, G) is controllable if every mode of F is connected to
the control input.

Because of the generality of the modal structure of systems, we will treat only the case of systems
for which F can be transformed to diagonal form. (The double-integration plant does not qualify.)
Suppose we have a diagonal matrix F_d and its corresponding input matrix G_d with elements g_i. The
structure of such a system is shown in Fig. F.1. By definition, for a controllable system the input
must be connected to each mode so that the g_i are all nonzero. However, this is not enough if the
poles (\lambda_i) are not distinct. Suppose, for instance, that \lambda_1 = \lambda_2. The first two state equations
are then

    \dot{x}_{1d} = \lambda_1 x_{1d} + g_1 u,
    \dot{x}_{2d} = \lambda_1 x_{2d} + g_2 u.                                         (F.13)
Figure F.1: Block diagram of a system with a diagonal matrix

Figure F.2: Examples of uncontrollable systems

If we define a new state \xi = g_2 x_{1d} - g_1 x_{2d}, the equation for \xi is

    \dot{\xi} = g_2 \dot{x}_{1d} - g_1 \dot{x}_{2d}
              = g_2\lambda_1 x_{1d} + g_2 g_1 u - g_1\lambda_1 x_{2d} - g_1 g_2 u = \lambda_1 \xi,   (F.14)

which does not include the control u; hence, \xi is not controllable. The point is that if any two poles
are equal in a diagonal F_d system with only one input, we effectively have a hidden mode that is
not connected to the control, and the system is not controllable (Fig. F.2a). This is because the two
state variables move together exactly, so we cannot independently control x_{1d} and x_{2d}. Therefore,
even in such a simple case, we have two conditions for controllability:

1. All eigenvalues of F_d are distinct.

2. No element of G_d is zero.
Now let us consider the controllability matrix of this diagonal system. By direct computation,

    \mathcal{C} = \begin{bmatrix} g_1 & g_1\lambda_1 & \cdots & g_1\lambda_1^{n-1} \\
                                  g_2 & g_2\lambda_2 & \cdots & g_2\lambda_2^{n-1} \\
                                  \vdots & \vdots & & \vdots \\
                                  g_n & g_n\lambda_n & \cdots & g_n\lambda_n^{n-1} \end{bmatrix}
                = \begin{bmatrix} g_1 & & & 0 \\
                                  & g_2 & & \\
                                  & & \ddots & \\
                                  0 & & & g_n \end{bmatrix}
                  \begin{bmatrix} 1 & \lambda_1 & \lambda_1^2 & \cdots & \lambda_1^{n-1} \\
                                  1 & \lambda_2 & \lambda_2^2 & \cdots & \lambda_2^{n-1} \\
                                  \vdots & \vdots & \vdots & & \vdots \\
                                  1 & \lambda_n & \lambda_n^2 & \cdots & \lambda_n^{n-1} \end{bmatrix}.   (F.15)

Note that the controllability matrix \mathcal{C} is the product of two matrices and is nonsingular if and only
if both of these matrices are invertible. The first matrix has a determinant that is the product of
the g_i, and the second matrix (called a Vandermonde matrix) is nonsingular if and only if the \lambda_i are
distinct. Thus Definition III is equivalent to having a nonsingular \mathcal{C} also.
Important to the subject of controllability is the Popov-Hautus-Rosenbrock (PHR) test
(see Rosenbrock, 1970, and Kailath, 1980), which is an alternate way to test the rank (or determinant)
of \mathcal{C}. The system (F, G) is controllable if the system of equations

    v^T [\,sI - F \;\; G\,] = 0^T                                                    (F.16)

has only the trivial solution v^T = 0^T, that is, if the matrix pencil

    \mathrm{rank}[\,sI - F \;\; G\,] = n                                             (F.17)

is full rank for all s, or if there is no nonzero v^T such that^1

    v^T F = s v^T,                                                                   (F.18)
    v^T G = 0.                                                                       (F.19)

This test is equivalent to the rank-of-\mathcal{C} test. It is easy to show that if such a vector v exists, then \mathcal{C}
is singular. For, if a nonzero v exists such that v^T G = 0, then by Eqs. (F.18) and (F.19),

    v^T F G = s v^T G = 0.                                                           (F.20)

Then, multiplying Eq. (F.18) on the right by FG, we find that

    v^T F^2 G = s v^T F G = 0,                                                       (F.21)

and so on. Thus we determine that v^T \mathcal{C} = 0^T has a nontrivial solution, that \mathcal{C} is singular, and that
the system is not controllable. To show that a nontrivial v^T exists if \mathcal{C} is singular requires more
development, which we will not give here (see Kailath, 1980).
We have given two pictures of uncontrollability. Either a mode is physically disconnected from
the input (Fig. F.2b), or else two parallel subsystems have identical characteristic roots (Fig. F.2a).
The control engineer should be aware of the existence of a third simple situation, illustrated in
Fig. F.2c, namely, a pole-zero cancellation. Here the problem is that the mode at s = 1 appears
to be connected to the input but is masked by the zero at s = 1 in the preceding subsystem; the
result is an uncontrollable system. This can be confirmed in several ways. First let us look at the
controllability matrix. The system matrices are

    F = \begin{bmatrix} -1 & 0 \\ 1 & 1 \end{bmatrix}, \qquad
    G = \begin{bmatrix} -2 \\ 1 \end{bmatrix},

so the controllability matrix is

    \mathcal{C} = [\,G \;\; FG\,] = \begin{bmatrix} -2 & 2 \\ 1 & -1 \end{bmatrix},  (F.22)

which is clearly singular. The controllability matrix may be computed using the ctrb command in
MATLAB: [cc]=ctrb(F,G). If we compute the transfer function from u to x_2, we find that

    H(s) = \frac{s - 1}{s + 1}\left(\frac{1}{s - 1}\right) = \frac{1}{s + 1}.        (F.23)

Because the natural mode at s = 1 disappears from the input-output description, it is not connected
to the input. Finally, if we consider the PHR test,

    [\,sI - F \;\; G\,] = \begin{bmatrix} s + 1 & 0 & -2 \\ -1 & s - 1 & 1 \end{bmatrix},   (F.24)

and let s = 1, then we must test the rank of

    \begin{bmatrix} 2 & 0 & -2 \\ -1 & 0 & 1 \end{bmatrix},

which is clearly less than 2. This result means, again, that the system is uncontrollable.
^1 v^T is a left eigenvector of F.
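The checks above are easy to reproduce numerically, as in the sketch below (assuming MATLAB or Octave). The matrices F and G are the ones given for Fig. F.2(c); the signs follow our reconstruction of that example.

% Sketch (MATLAB/Octave): rank tests for the pole-zero cancellation example.
F = [-1 0; 1 1]; G = [-2; 1];
CC = [G, F*G];                                                % Eq. (F.22)
fprintf('rank of [G FG]: %d (n = 2)\n', rank(CC));            % 1, so uncontrollable
fprintf('rank of [sI-F G] at s = 1: %d\n', rank([eye(2) - F, G]));   % also 1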
Definition IV: The asymptotically stable system (F, G) is controllable if the controllability
Gramian, the square symmetric matrix \mathcal{C}_g given by the solution to the Lyapunov equation

    F\mathcal{C}_g + \mathcal{C}_g F^T + G G^T = 0,                                  (F.25)

is nonsingular. The controllability Gramian is also the solution to the following integral equation:

    \mathcal{C}_g = \int_0^{\infty} e^{F\tau} G G^T e^{F^T \tau}\,d\tau.             (F.26)

One physical interpretation of the controllability Gramian is that if the input to the system is white
Gaussian noise, then \mathcal{C}_g is the covariance of the state. The controllability Gramian (for an
asymptotically stable system) can be computed with the following command in MATLAB: [cg]=gram(F,G).
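Equivalently, the Lyapunov equation (F.25) can be solved directly, as sketched below (assuming MATLAB or Octave with the Control System Toolbox for lyap; the stable pair (F, G) is an arbitrary example of ours).

% Sketch (MATLAB/Octave + Control System Toolbox): controllability Gramian via Eq. (F.25).
F = [0 1; -2 -3]; G = [0; 1];
Cg = lyap(F, G*G');                           % solves F*Cg + Cg*F' + G*G' = 0
fprintf('det of controllability Gramian: %.4f\n', det(Cg));   % nonzero, so controllable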
In conclusion, the four definitions of controllability, namely pole assignment (Definition I), state
reachability (Definition II), mode coupling to the input (Definition III), and controllability Gramian
(Definition IV), are equivalent. The tests for any of these four properties are found in terms of the
rank of the controllability matrix or the controllability Gramian, or the rank of the matrix pencil
[sI - F  G]. If \mathcal{C} is nonsingular, then we can assign the closed-loop poles arbitrarily by state feedback,
we can move the state to any point in the state space in a finite time, and every mode is connected
to the control input.^2

^2 We have shown the latter for diagonal F only, but the result is true in general.
F.2 Observability

So far we have discussed only controllability. The concept of observability is parallel to that of
controllability, and all of the results we have discussed thus far may be transformed to statements
about observability by invoking the property of duality, as discussed in Section 7.7.2. The observability
definitions analogous to those for controllability are as follows:

1. Definition I: The system (F, H) is observable if, for any nth-order polynomial \alpha_e(s), there
   exists an estimator gain L such that the characteristic equation of the state estimator error is
   \alpha_e(s).

2. Definition II: The system (F, H) is observable if, for any x(0), there is a finite time \tau such
   that x(0) can be determined (uniquely) from u(t) and y(t) for 0 \le t \le \tau.

3. Definition III: The system (F, H) is observable if every dynamic mode in F is connected to
   the output through H.

4. Definition IV: The asymptotically stable system (F, H) is observable if the observability
   Gramian is nonsingular.

As we saw in the discussion of controllability, mathematical tests can be developed for observability.
The system is observable if the observability matrix

    \mathcal{O} = \begin{bmatrix} H \\ HF \\ \vdots \\ HF^{n-1} \end{bmatrix}        (F.27)

is nonsingular. If we take the transpose of \mathcal{O} and let H^T = G and F^T = F, then we find the
controllability matrix of (F, G), another manifestation of duality. The observability matrix \mathcal{O} may
be computed using the obsv command in MATLAB: [oo]=obsv(F,H). The system (F, H) is observable
if the following matrix pencil is full rank for all s:

    \mathrm{rank}\begin{bmatrix} sI - F \\ H \end{bmatrix} = n.                      (F.28)

The observability Gramian \mathcal{O}_g, which is a symmetric matrix and the solution to the integral equation

    \mathcal{O}_g = \int_0^{\infty} e^{F^T \tau} H^T H e^{F\tau}\,d\tau,             (F.29)

as well as to the Lyapunov equation

    F^T \mathcal{O}_g + \mathcal{O}_g F + H^T H = 0,                                 (F.30)

can also be computed (for an asymptotically stable system) using the gram command in MATLAB:
[og]=gram(F,H). The observability Gramian has an interpretation as the "information matrix" in
the context of estimation.
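The observability test and its dual form can be checked numerically, as sketched below (assuming MATLAB or Octave; the pair (F, H) is an arbitrary example of ours).

% Sketch (MATLAB/Octave): observability via Eq. (F.27) and via duality.
F = [0 1; -2 -3]; H = [1 0];
OO = [H; H*F];                                       % observability matrix
fprintf('rank of observability matrix: %d\n', rank(OO));        % 2, so observable
fprintf('rank via duality:             %d\n', rank([H', F'*H'])); % controllability of (F', H')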
Appendix G
Ackermann's Formula for Pole Placement
Given the plant and state-variable equation

    \dot{x} = F x + G u,                                                             (G.1)

our objective is to find a state-feedback control law

    u = -K x                                                                         (G.2)

such that the closed-loop characteristic polynomial is

    \alpha_c(s) = \det(sI - F + GK).                                                 (G.3)

First we have to select \alpha_c(s), which determines where the poles are to be shifted; then we have
to find K such that Eq. (G.3) will be satisfied. Our technique is based on transforming the plant
equation into control canonical form.
We begin by considering the effect of an arbitrary nonsingular transformation of the state,

    x = T\bar{x},                                                                    (G.4)

where \bar{x} is the new transformed state. The equations of motion in the new coordinates, from
Eq. (G.4), are

    \dot{x} = T\dot{\bar{x}} = F x + G u = F T\bar{x} + G u,                         (G.5)

    \dot{\bar{x}} = T^{-1} F T\bar{x} + T^{-1} G u = \bar{F}\bar{x} + \bar{G} u.     (G.6)

Now the controllability matrix for the original state,

    \mathcal{C}_x = [\,G \;\; FG \;\; F^2 G \;\; \cdots \;\; F^{n-1} G\,],           (G.7)

provides a useful transformation matrix. We can also define the controllability matrix for the
transformed state:

    \mathcal{C}_{\bar{x}} = [\,\bar{G} \;\; \bar{F}\bar{G} \;\; \bar{F}^2\bar{G} \;\; \cdots \;\; \bar{F}^{n-1}\bar{G}\,].   (G.8)

The two controllability matrices are related by

    \mathcal{C}_{\bar{x}} = [\,T^{-1}G \;\; T^{-1}FTT^{-1}G \;\; \cdots\,] = T^{-1}\mathcal{C}_x,   (G.9)

and the transformation matrix is

    T = \mathcal{C}_x \mathcal{C}_{\bar{x}}^{-1}.                                    (G.10)
From Eqs. (G.9) and (G.10) we can draw some important conclusions. From Eq. (G.9), we see
that if \mathcal{C}_x is nonsingular, then for any nonsingular T, \mathcal{C}_{\bar{x}} is also nonsingular. This means that a
similarity transformation on the state does not change the controllability properties of a system. We
can look at this in another way. Suppose we would like to find a transformation to take the system
(F, G) into control canonical form. As we shall shortly see, \mathcal{C}_{\bar{x}} in that case is always nonsingular.
From Eq. (G.9) we see that a nonsingular T will always exist if and only if \mathcal{C}_x is nonsingular. We
conclude that

Theorem G.1 We can always transform (F, G) into control canonical form if and only if the system
is controllable.

Let us take a closer look at control canonical form and treat the third-order case, although the
results are true for any nth-order case:

    \bar{F} = F_c = \begin{bmatrix} -a_1 & -a_2 & -a_3 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \qquad
    \bar{G} = G_c = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}.                       (G.11)

The controllability matrix, by direct computation, is

    \mathcal{C}_{\bar{x}} = \mathcal{C}_c = \begin{bmatrix} 1 & -a_1 & a_1^2 - a_2 \\ 0 & 1 & -a_1 \\ 0 & 0 & 1 \end{bmatrix}.   (G.12)

Because this matrix is upper triangular with ones along the diagonal, it is always invertible. Also
note that the last row of \mathcal{C}_{\bar{x}} is the unit vector with all zeros, except for the last element, which is
unity. We shall use this fact in what we do next.
As we pointed out in Section 7.5, the design of a control law for the state \bar{x} is trivial if the
equations of motion happen to be in control canonical form. The characteristic equation is

    s^3 + a_1 s^2 + a_2 s + a_3 = 0,                                                 (G.13)

and the characteristic equation for the closed-loop system comes from

    F_{cl} = F_c - G_c K_c                                                           (G.14)

and has the coefficients shown:

    s^3 + (a_1 + K_{c1})s^2 + (a_2 + K_{c2})s + (a_3 + K_{c3}) = 0.                  (G.15)
To obtain the desired closed-loop pole locations, we must make the coefficients of s in Eq. (G.15)
match those in

    \alpha_c(s) = s^3 + \alpha_1 s^2 + \alpha_2 s + \alpha_3,                        (G.16)

so

    a_1 + K_{c1} = \alpha_1, \qquad a_2 + K_{c2} = \alpha_2, \qquad a_3 + K_{c3} = \alpha_3,   (G.17)

or, in vector form,

    a + K_c = \alpha,                                                                (G.18)

where a and \alpha are row vectors containing the coefficients of the characteristic polynomials of the
open-loop and closed-loop systems, respectively.
We now need to find a relationship between these polynomial coefficients and the matrix F. The
requirement is achieved by the Cayley-Hamilton theorem, which states that a matrix satisfies its
own characteristic polynomial. For F_c this means that

    F_c^n + a_1 F_c^{n-1} + a_2 F_c^{n-2} + \cdots + a_n I = 0.                      (G.19)
Now suppose we form the polynomial \alpha_c(F), which is the closed-loop characteristic polynomial with
the matrix F substituted for the complex variable s:

    \alpha_c(F_c) = F_c^n + \alpha_1 F_c^{n-1} + \alpha_2 F_c^{n-2} + \cdots + \alpha_n I.   (G.20)

If we solve Eq. (G.19) for F_c^n and substitute into Eq. (G.20), we find that

    \alpha_c(F_c) = (-a_1 + \alpha_1)F_c^{n-1} + (-a_2 + \alpha_2)F_c^{n-2} + \cdots + (-a_n + \alpha_n)I.   (G.21)
But, because F_c has such a special structure, we observe that if we multiply it by the transpose of
the nth unit vector, e_n^T = [\,0 \;\; \cdots \;\; 0 \;\; 1\,], we get

    e_n^T F_c = [\,0 \;\; \cdots \;\; 0 \;\; 1 \;\; 0\,] = e_{n-1}^T,                (G.22)

as we can see from Eq. (G.11). If we multiply this vector again by F_c, getting

    (e_n^T F_c)F_c = [\,0 \;\; \cdots \;\; 0 \;\; 1 \;\; 0\,]F_c = [\,0 \;\; \cdots \;\; 0 \;\; 1 \;\; 0 \;\; 0\,] = e_{n-2}^T,   (G.23)

and continue the process, successive unit vectors are generated until

    e_n^T F_c^{n-1} = [\,1 \;\; 0 \;\; \cdots \;\; 0\,] = e_1^T.                     (G.24)
Therefore, if we multiply Eq. (G.21) by e_n^T, we find that

    e_n^T \alpha_c(F_c) = (-a_1 + \alpha_1)e_1^T + (-a_2 + \alpha_2)e_2^T + \cdots + (-a_n + \alpha_n)e_n^T
                        = [\,K_{c1} \;\; K_{c2} \;\; \cdots \;\; K_{cn}\,] = K_c,    (G.25)

where we use Eq. (G.18), which relates K_c to a and \alpha.
We now have a compact expression for the gains of the system in control canonical form, as
represented in Eq. (G.25). However, we still need the expression for K, the gain on the original
state. If u = -K_c\bar{x}, then u = -K_c T^{-1} x, so that

    K = K_c T^{-1} = e_n^T \alpha_c(F_c)T^{-1} = e_n^T \alpha_c(T^{-1}FT)T^{-1} = e_n^T T^{-1}\alpha_c(F).   (G.26)

In the last step of Eq. (G.26) we used the fact that (T^{-1}FT)^k = T^{-1}F^k T and that \alpha_c is a
polynomial, that is, a sum of the powers of F_c. From Eq. (G.9) we see that

    T^{-1} = \mathcal{C}_c \mathcal{C}_x^{-1}.                                       (G.27)
With this substitution, Eq. (G.26) becomes

    K = e_n^T \mathcal{C}_c \mathcal{C}_x^{-1}\alpha_c(F).                           (G.28)

Now, we use the observation made earlier for Eq. (G.12) that the last row of \mathcal{C}_c, which is e_n^T \mathcal{C}_c, is
again e_n^T. We finally obtain Ackermann's formula:

    K = e_n^T \mathcal{C}_x^{-1}\alpha_c(F).                                         (G.29)
We note again that forming the explicit inverse of \mathcal{C}_x is not advisable for numerical accuracy.
Thus we need to solve for b^T such that

    e_n^T \mathcal{C}_x^{-1} = b^T.                                                  (G.30)

We solve the linear set of equations

    b^T \mathcal{C}_x = e_n^T                                                        (G.31)

and then compute

    K = b^T \alpha_c(F).                                                             (G.32)

Ackermann's formula, Eq. (G.29), even though elegant, is not recommended for systems with a large
number of state variables. Even if it is used, Eqs. (G.31) and (G.32) are recommended for better
numerical accuracy.
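The sketch below (assuming MATLAB or Octave) carries out Eqs. (G.31) and (G.32) for an arbitrary second-order example of ours, with desired closed-loop poles at -2 +/- 2j.

% Sketch (MATLAB/Octave): Ackermann's formula via Eqs. (G.31)-(G.32).
F = [0 1; 0 -1]; G = [0; 1];
Cx = [G, F*G];                                 % controllability matrix, Eq. (G.7)
alpha_c = real(poly([-2+2j, -2-2j]));          % desired characteristic polynomial coefficients
bT = [0 1]/Cx;                                 % solves b'*Cx = en', Eq. (G.31)
K  = bT*polyvalm(alpha_c, F);                  % Eq. (G.32); here K = [8 3]
disp(eig(F - G*K).');                          % closed-loop poles: -2 +/- 2j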
