Chapter 6. Solutions to State Equations

6.1 Solutions of LTI Systems
Consider the LTI state equations

$$\dot{x} = Ax + Bu, \qquad x(t_0) = x_0$$
$$y = Cx + Du \tag{6.1}$$

Moving the term $Ax$ to the left-hand side,

$$\dot{x}(t) - Ax(t) = Bu(t) \tag{6.2}$$

and multiplying both sides by the integrating factor $e^{-At}$ gives

$$e^{-At}\bigl[\dot{x}(t) - Ax(t)\bigr] = e^{-At}\dot{x}(t) - e^{-At}Ax(t) = e^{-At}Bu(t) \tag{6.3}$$

The left-hand side is recognized as an exact derivative:

$$\frac{d}{dt}\Bigl[e^{-At}x(t)\Bigr] = e^{-At}Bu(t)$$

Integrating both sides from $t_0$ to $t$,

$$e^{-At}x(t) - e^{-At_0}x(t_0) = \int_{t_0}^{t} e^{-A\tau}Bu(\tau)\,d\tau \tag{6.4}$$
Finally, moving the initial condition term to the right-hand side and multiplying both sides of the result by $e^{At}$ gives:
$$\begin{aligned}
x(t) &= e^{At}e^{-At_0}x(t_0) + e^{At}\int_{t_0}^{t} e^{-A\tau}Bu(\tau)\,d\tau \\
&= e^{A(t-t_0)}x(t_0) + \int_{t_0}^{t} e^{A(t-\tau)}Bu(\tau)\,d\tau
\end{aligned} \tag{6.5}$$
Actually, although we have noted that this result is based upon an assumption of time-invariance, we require only that the matrix A be time-invariant; otherwise the integrating factor $e^{-At}$ would not have produced an exact derivative. If B were time-varying, we would simply write:
$$x(t) = e^{A(t-t_0)}x(t_0) + \int_{t_0}^{t} e^{A(t-\tau)}B(\tau)u(\tau)\,d\tau \tag{6.6}$$
This is the familiar convolution integral solution from basic linear systems. Its
derivation here for vectors emphasizes one of the primary motivations for state
space analysis: many of the procedures and results for vectors are direct
extensions of first-order (scalar) cases.
Completing the problem by computing $y(t)$, we obtain (MATLAB: lsim(sys,u,T,X0)):
$$y(t) = C(t)e^{A(t-t_0)}x(t_0) + C(t)\int_{t_0}^{t} e^{A(t-\tau)}B(\tau)u(\tau)\,d\tau + Du(t) \tag{6.7}$$
Here again, we have allowed C and D to be functions of time because they do not
interfere with the actual solution of the differential equation.
The solution (6.7) makes apparent the importance of the matrix exponential $e^{At}$ studied in the last chapter. For LTI systems, this will become known as the state-transition matrix for reasons that will become apparent in Section 6.4.2.
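For example, the total response (6.7) can be computed numerically. The following is a minimal sketch, assuming MATLAB with the Control System Toolbox; the system matrices, input, and initial condition are arbitrary placeholder values:

% Total response y(t) of an LTI system via Equation (6.7), computed by lsim.
A = [0 1; -2 -3];  B = [0; 1];          % example system (values chosen arbitrarily)
C = [1 0];         D = 0;
sys = ss(A, B, C, D);                    % state space model
t  = 0:0.01:10;                          % time grid
u  = ones(size(t));                      % unit-step input u(t)
x0 = [1; 0];                             % initial condition x(t0)
[y, t, x] = lsim(sys, u, t, x0);         % zero-input plus zero-state response
plot(t, y); xlabel('time'); ylabel('y(t)');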
Example 6.1: Find the position $y(t) = x(t)$ of a mass $m$ that is pushed by an applied force $F(t) = e^{-at}$, given that it starts at rest at initial position $x(0) = x_0$ (i.e., $\dot{x}(0) = 0$).

Solution:
Using phase variables, we can define $x_1(t) = x(t)$ and $x_2(t) = \dot{x}(t)$, giving the state equations

$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\end{bmatrix} = \begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} + \begin{bmatrix}0\\ 1/m\end{bmatrix}F(t)$$

$$y = \begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}$$

Because this $A$-matrix is nilpotent ($A^2 = 0$), the matrix exponential series terminates:

$$e^{At} = I + At = \begin{bmatrix}1 & t\\ 0 & 1\end{bmatrix}$$
Using (6.7), we compute
$$\begin{aligned}
y(t) &= \begin{bmatrix}1 & 0\end{bmatrix}e^{At}x(0) + \begin{bmatrix}1 & 0\end{bmatrix}\int_0^t e^{A(t-\tau)}\begin{bmatrix}0\\ 1/m\end{bmatrix}e^{-a\tau}\,d\tau \\
&= \begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}1 & t\\ 0 & 1\end{bmatrix}\begin{bmatrix}x_0\\ 0\end{bmatrix} + \int_0^t \begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}1 & t-\tau\\ 0 & 1\end{bmatrix}\begin{bmatrix}0\\ 1/m\end{bmatrix}e^{-a\tau}\,d\tau \\
&= x_0 + \frac{1}{m}\int_0^t (t-\tau)e^{-a\tau}\,d\tau \\
&= x_0 + \frac{1}{m}\left[\frac{1}{a^2}e^{-a\tau} - \frac{1}{a}(t-\tau)e^{-a\tau}\right]_0^t \\
&= \left(x_0 - \frac{1}{a^2 m}\right) + \frac{t}{am} + \frac{1}{a^2 m}e^{-at}
\end{aligned}$$
One may verify that this is the correct solution via the usual procedures for solving
simple differential equations.
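Alternatively, the result can be checked numerically. The following sketch (again assuming the Control System Toolbox) compares the closed-form expression against a simulated solution; the values m = 1, a = 2, and x0 = 1 are arbitrary test choices:

% Numerical check of the Example 6.1 result, with F(t) = e^{-at}.
m = 1;  a = 2;  x0 = 1;                            % arbitrary test values
A = [0 1; 0 0];  B = [0; 1/m];  C = [1 0];  D = 0;
t = (0:0.01:5)';
F = exp(-a*t);                                     % input force
yNum = lsim(ss(A,B,C,D), F, t, [x0; 0]);           % numerical solution
yAna = (x0 - 1/(a^2*m)) + t/(a*m) + exp(-a*t)/(a^2*m);  % closed form
max(abs(yNum - yAna))                              % should be near zero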
6.2 Homogeneous Systems and Phase Portraits

For the homogeneous case, in which $u(t) \equiv 0$, the solution (6.5) reduces to the zero-input response

$$x(t) = e^{A(t-t_0)}x(t_0) \tag{6.8}$$

or

$$y(t) = Ce^{A(t-t_0)}x(t_0)$$
Obviously, the matrix exponential $e^{At}$ plays a role in both the zero-state and the zero-input parts of the total response. To see the effect of this matrix exponential, it is useful to draw sketches of the solutions in the state space, beginning with various initial conditions. This is most easily done in two dimensions, for which such plots are common and are called phase portraits.
Consider, for example, the homogeneous system

$$\dot{x} = A_1 x = \begin{bmatrix}-1 & 0\\ 0 & -4\end{bmatrix}x \tag{6.9}$$
A phase portrait for this system is given in Figure 6.1. In the figure, sufficient
initial conditions are chosen around the edge of the graph to accurately interpolate
the solution for any initial condition in between. The solution drawn from each
initial condition is sketched as a directed curve, as indicated by the arrow showing
the progression of positive time. These curves are known as phase trajectories.
With experience, the general shape of the trajectories can be imagined
without a detailed plot. Generally, it is the qualitative shape of the trajectories that
is important in a phase portrait. (This is particularly true with nonlinear systems,
whose phase portraits can sometimes be constructed entirely qualitatively or with
piecewise analysis of their dynamics.) For example, certain qualitative features of
Figure 6.1 might be predicted from our knowledge of the state space.
Consider the $A$-matrix given in (6.9) as a linear operator, taking vectors $x$ into vectors $\dot{x}$. Thus, given any position $x(t)$ on the plot, the equation $\dot{x} = Ax$ gives us the tangent vector to the phase trajectory at that point. This is the direction in which the trajectory is evolving. In the figure, we have illustrated this point by indicating a vector, $x(t) = [-3\;\;1]^{\top}$, and the tangent to the curve at that point, $\dot{x}(t) = A_1 x = [3\;\;-4]^{\top}$. Of course, this velocity will change as the chosen point $x$ changes.
We can compute that this system has eigenvalues $\lambda_1 = -1$ and $\lambda_2 = -4$ (a set known as the spectrum of $A$), and corresponding eigenvectors of

$$e_1 = \begin{bmatrix}1\\ 0\end{bmatrix}, \qquad e_2 = \begin{bmatrix}0\\ 1\end{bmatrix} \tag{6.10}$$
Figure 6.1 Phase portrait for the homogeneous system given in Equation (6.9). For
this type of portrait, wherein all trajectories asymptotically approach the
origin without encircling it, the origin is known as a stable node.
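A portrait such as Figure 6.1 can be generated numerically by integrating the homogeneous equation from a ring of initial conditions. The sketch below uses ode45; the number of initial conditions and the time span are arbitrary choices:

% Phase portrait for xdot = A*x from initial conditions on a circle.
A = [-1 0; 0 -4];                         % system matrix from (6.9)
figure; hold on;
for th = 0 : pi/8 : 2*pi                  % initial conditions around the edge
    [~, x] = ode45(@(t,x) A*x, [0 3], 4*[cos(th); sin(th)]);
    plot(x(:,1), x(:,2));                 % one trajectory per initial condition
end
axis([-4 4 -4 4]); xlabel('x_1'); ylabel('x_2');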
More will be said about the phase trajectories on these portraits in the next
section. Until then we will present further examples of phase portraits for the
homogeneous equation x Ax .
In order to relate such phase trajectories to the more familiar step responses, Figure 6.2 shows the step response (MATLAB: step(sys)) for the system
$$\begin{aligned}
\dot{x} &= A_1 x + b_1 u = \begin{bmatrix}-1 & 0\\ 0 & -4\end{bmatrix}x + \begin{bmatrix}1\\ 4\end{bmatrix}u \\
y &= \begin{bmatrix}1 & 1\end{bmatrix}x
\end{aligned} \tag{6.11}$$
where the output equation was selected arbitrarily and the matrix $b_1$ in (6.11) was chosen to give unity DC gain to each state variable, i.e., so that they both asymptotically approach 1. This figure further illustrates the relative speed of the two state variables. Note how the “faster” of the two variables converges to its final value sooner than the “slower” one, and of course the output is simply the sum of the two state variables.
Figure 6.2 Step response for the system described by Equation (6.11). The state
variables and output are plotted as functions of time.
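A plot like Figure 6.2 is easily reproduced with the step command. In the sketch below (assuming the Control System Toolbox), the third output argument of step returns the state trajectories along with the output:

% Step response of (6.11), plotting both states and the output.
A = [-1 0; 0 -4];  b1 = [1; 4];
C = [1 1];         D = 0;
sys = ss(A, b1, C, D);
t = 0:0.01:5;
[y, t, x] = step(sys, t);        % y: output, x: state trajectories
plot(t, y, t, x(:,1), t, x(:,2));
legend('y(t)', 'x_1(t)', 'x_2(t)'); xlabel('time');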
Figure 6.3 shows a portrait that is similar to Figure 6.1 except that it appears
rotated and distorted. This is the portrait for the homogeneous (no input) system
with A-matrix
A2
LM2 1OP (6.12)
N2 3Q
with the following spectrum and eigenvectors (expressed as columns in a modal
matrix):
$$\sigma(A_2) = \{-1,\,-4\}, \qquad M_2 = \begin{bmatrix}1 & 1\\ 1 & -2\end{bmatrix}$$
Figure 6.3 Phase portrait for the homogeneous system given by the matrix in (6.12).
Note that this origin is also a stable node.
This system is related to that of (6.9) by the similarity transformation

$$A_2 = M_2 A_1 M_2^{-1} \tag{6.13}$$
Transforming the entire system of (6.11) in this way gives

$$\begin{aligned}
\dot{x} &= M_2 A_1 M_2^{-1}x + M_2 b_1 u = \begin{bmatrix}-2 & 1\\ 2 & -3\end{bmatrix}x + \begin{bmatrix}5\\ -7\end{bmatrix}u \\
y &= CM_2^{-1}x = \begin{bmatrix}1 & 0\end{bmatrix}x
\end{aligned} \tag{6.14}$$
which has the step response shown below in Figure 6.4. Note that the state
variables no longer reach unity asymptotically. This is because the similarity
transformation affects the DC gain of the state variables. However, as we would
expect, the output signal y ( t ) remains exactly the same as in Figure 6.2, because
similarity transformations do not affect the input/output performance, only its
internal representation.
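This invariance is easy to check numerically. The following sketch constructs (6.14) from (6.11) by similarity transformation and compares the two step responses; the time grid is an arbitrary choice:

% Similarity transformation preserves input/output behavior.
A1 = [-1 0; 0 -4];   b1 = [1; 4];   C1 = [1 1];
M2 = [1 1; 1 -2];                   % modal matrix of (6.12)/(6.13)
A2 = M2*A1/M2;   b2 = M2*b1;   C2 = C1/M2;    % transformed system (6.14)
t  = 0:0.01:5;
y1 = step(ss(A1, b1, C1, 0), t);
y2 = step(ss(A2, b2, C2, 0), t);
max(abs(y1 - y2))                   % identical up to numerical error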
Figure 6.4 Step response for the system described by Equation (6.14). The state
variables and output are plotted as a function of time.
Figure 6.5 is somewhat different in the sense that the trajectories do not all
tend toward the origin. This portrait is based upon the following system:
$$A_3 = \begin{bmatrix}2 & 0\\ 0 & -1\end{bmatrix}, \qquad \sigma(A_3) = \{2,\,-1\}, \qquad M_3 = \begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix} \tag{6.15}$$
It is again a diagonal system with the eigenvectors constituting the standard basis.
However, in this system, one of the corresponding eigenvalues is positive. Being
diagonal, it is easy to decompose the system into decoupled parts:
which of course has solutions that can be found independently of one another:

$$\begin{bmatrix}x_1(t)\\ x_2(t)\end{bmatrix} = \begin{bmatrix}e^{2(t-t_0)} & 0\\ 0 & e^{-(t-t_0)}\end{bmatrix}\begin{bmatrix}x_1(t_0)\\ x_2(t_0)\end{bmatrix} = \begin{bmatrix}e^{2(t-t_0)}x_1(t_0)\\ e^{-(t-t_0)}x_2(t_0)\end{bmatrix} \tag{6.16}$$
Figure 6.5 Phase portrait for the homogeneous system given by (6.15). In cases such
as this wherein the trajectories approach the origin from one direction but
diverge from it in the other, the origin is known as a saddle point.
Transforming this system with the modal matrix $M_4$ below yields

$$A_4 = \begin{bmatrix}3 & -2\\ 2 & -2\end{bmatrix}, \qquad \sigma(A_4) = \{2,\,-1\}, \qquad M_4 = \begin{bmatrix}2 & 1\\ 1 & 2\end{bmatrix} \tag{6.17}$$
The phase portrait of this transformed system is shown in Figure 6.6. Again,
motion along the two invariant subspaces tends toward different directions, and
the remaining trajectories interpolate smoothly.
Figure 6.6 Phase portrait for the homogeneous system given by (6.17). The origin
in this figure is a saddle point.
It should be pointed out that, except at the singular points (for LTI systems, the origin only), two trajectories cannot cross or meet at a point. Such a meeting would imply that at that point there are two different solutions to the differential equation, which is prohibited by the uniqueness theorem. In our linear case, we also expect the trajectories to be smooth, as is seen in all the plots. Smoothness is guaranteed by the existence of a unique tangent vector given by the equation $\dot{x} = Ax$.
The nature of the phase portrait when there is only a single eigenvector, as
we might expect in a system having a generalized eigenvector, is investigated
next. An example of such a system is:
$$A_5 = \begin{bmatrix}-1 & 1\\ -1 & -3\end{bmatrix}, \qquad \sigma(A_5) = \{-2,\,-2\}, \qquad e_1 = \begin{bmatrix}1\\ -1\end{bmatrix} \tag{6.18}$$
This system, represented by the phase portrait in Figure 6.7, indeed shows only a single invariant subspace. Motion along this eigenvector, which corresponds to the negative eigenvalue $-2$, tends toward the origin as expected.
Figure 6.7 Phase portrait for the homogeneous system given by (6.18). The origin
in this system is called a stable node because again the trajectories
approach the origin without encircling it.
The other trajectories also tend toward the origin, but do so by partially spiraling
inward. Because trajectories cannot cross, none of the spiraling trajectories may
rotate more than 180° before asymptotically reaching the origin.
Figure 6.8 shows the initial condition response (zero inputs) for the same
system, i.e., from (6.18). Note that qualitatively, the plot appears to be similar to
previous time responses, e.g., Figure 6.4, except that now both curves show one
inflection point.
Figure 6.8 Initial condition response for the system described by Equation (6.18).
The state variables are plotted as a function of time.
Transforming the system described by (6.18) into its Jordan form, we get
$$A_6 = \begin{bmatrix}-2 & 1\\ 0 & -2\end{bmatrix}, \qquad \sigma(A_6) = \{-2,\,-2\}, \qquad e_1 = \begin{bmatrix}1\\ 0\end{bmatrix} \tag{6.19}$$
The phase portrait is given in Figure 6.9. As we might guess, the single regular
eigenvector in this portrait lies along a coordinate axis because the Jordan form
produces a system of differential equations, one of which is decoupled from the
other.
Figure 6.9 Phase portrait for the homogeneous system given by (6.19). The origin is
a stable node.
Moving on to a system with complex eigenvalues,

$$A_7 = \begin{bmatrix}2 & 3\\ -6 & -4\end{bmatrix}, \qquad \sigma(A_7) = \{-1+j3,\; -1-j3\}, \qquad M_7 = \begin{bmatrix}1-j & 1+j\\ j2 & -j2\end{bmatrix} \tag{6.20}$$
we see in the portrait in Figure 6.10 that no invariant subspaces appear. This being
the case, the spiraling trajectories rotate around the origin forever, asymptotically
approaching it. (The trajectories are spiraling inward because of the negative real
part of the eigenvalues; a system with eigenvalues containing positive real parts
would spiral outward.)
Chapter 6. Solutions to State Equations 243
Figure 6.10 Phase portrait for the homogeneous system given by (6.20). This type of
portrait shows an origin that is known as a stable focus.
The time-domain response of the system described by (6.20) is shown in the initial
condition response of Figure 6.11. The oscillations shown in the phase portrait of
Figure 6.10 are clearly seen.
As a final example, consider the system given by
$$A_8 = \begin{bmatrix}1 & 2\\ -5 & -1\end{bmatrix}, \qquad \sigma(A_8) = \{j3,\; -j3\}, \qquad M_8 = \begin{bmatrix}3-j & 3+j\\ j5 & -j5\end{bmatrix} \tag{6.21}$$
Relative to the previous example, we predict that this system will have no real
invariant subspaces, and the trajectories will therefore be free to encircle the
origin (without crossing each other). However, because there are no real parts of
the eigenvalues, we cannot have solutions that decay to zero or tend to infinity.
The resulting solutions are depicted in Figure 6.12. We know from our conventional solutions to differential equations, and from our explicit computation of the matrix exponential for purely imaginary eigenvalues, that such solutions are oscillatory, neither decaying nor growing.
Figure 6.11 Initial condition response for the system described by Equation (6.20).
The state variables are plotted as a function of time.
Without a decay or exponential growth in the solution, one might ask how it
is possible to determine the directions of the arrows in Figure 6.12. The simplest
way is to choose a single sample point, such as $x(t_0) = [1\;\;0]^{\top}$, shown in the figure, and determine the direction of the tangent

$$\dot{x}(t_0) = A_8 x(t_0) = \begin{bmatrix}1\\ -5\end{bmatrix}$$
also shown in the figure. Given the direction of rotation of this single trajectory,
the direction of all the others must be the same.
Figure 6.12 Phase portrait for the homogeneous system given by (6.21). The vector $\dot{x}(t_0)$ is shortened to fit on the plot. The origin in this portrait is called a center.
6.3 System Modes and Modal Decompositions

In this section, we will use the eigenvectors of the $A$-matrix as a basis for the state space when computing $x(t)$. The decomposition of $x(t)$ in this way is widely used in engineering analysis, especially for large-scale problems such as the deflection of flexible structures, vibrations, and the reduction of large state spaces to smaller approximations.
Let $\{e_i\}$ be the set of $n$ linearly independent eigenvectors for a system, including, if necessary, generalized eigenvectors. Because this set may be used as a basis for the state space, we can uniquely decompose $x(t)$ as

$$x(t) = \sum_{i=1}^{n}\xi_i(t)e_i \tag{6.22}$$

where the $\xi_i(t)$ are time-varying scalar coefficients.
Similarly decomposing the input term as

$$B(t)u(t) = \sum_{i=1}^{n}\beta_i(t)e_i \tag{6.23}$$

and substituting (6.22) and (6.23) into the state equation $\dot{x} = Ax + Bu$ gives

$$\sum_{i=1}^{n}\dot{\xi}_i(t)e_i = \sum_{i=1}^{n}\xi_i(t)Ae_i + \sum_{i=1}^{n}\beta_i(t)e_i$$
or, by rearranging and supposing for now that $\{e_i\}$ consists of a complete set of $n$ regular eigenvectors (so that $Ae_i = \lambda_i e_i$), we can obtain

$$\sum_{i=1}^{n}\bigl(\dot{\xi}_i(t) - \lambda_i\xi_i(t) - \beta_i(t)\bigr)e_i = 0 \tag{6.24}$$

Because the $e_i$ are linearly independent, each coefficient must vanish, leaving the $n$ decoupled scalar equations

$$\dot{\xi}_i(t) = \lambda_i\xi_i(t) + \beta_i(t) \qquad i = 1,\ldots,n \tag{6.25}$$

If instead some of the $e_i$ are generalized eigenvectors, the corresponding equations couple to the next coefficient in the Jordan chain:

$$\dot{\xi}_i(t) = \lambda_i\xi_i(t) + \xi_{i+1}(t) + \beta_i(t) \qquad i = 1,\ldots,n \tag{6.26}$$

where the coupling term $\xi_{i+1}(t)$ is absent for the last index of each chain.
In matrix form, these equations are equivalent to defining the transformed state $\xi = M^{-1}x$, where $M$ is the modal matrix whose columns are the $e_i$:

$$\begin{aligned}
\dot{\xi} &= M^{-1}AM\,\xi + M^{-1}Bu \\
y &= CM\xi + Du
\end{aligned} \tag{6.27}$$

The solution in the original basis is then reassembled from the modes:

$$x(t) = \sum_{i=1}^{n}\xi_i(t)e_i = M\begin{bmatrix}\xi_1(t)\\ \vdots\\ \xi_n(t)\end{bmatrix} \tag{6.28}$$
This is the source of the name “modal” in the term modal matrix. It is the matrix that decomposes a system so that decoupled equations are solved to produce system modes. The matrix $M^{-1}AM = J$ is simply the Jordan form discussed in Chapter 4.
As we did in Chapter 4, if we first convert the system to its basis of eigenvectors, then we can exploit the simpler form of (6.27) to generate the state-vector solution:

$$\xi(t) = e^{J(t-t_0)}\xi(t_0) + \int_{t_0}^{t} e^{J(t-\tau)}M^{-1}B(\tau)u(\tau)\,d\tau, \qquad \xi(t_0) = M^{-1}x(t_0) \tag{6.29}$$

where $\xi \triangleq [\xi_1\;\cdots\;\xi_n]^{\top}$. Equation (6.29) will be easier to solve because of the simple structure of the Jordan form (see Section 5.4.2). After obtaining such a solution, the solution in the original basis may be computed via

$$x(t) = M\xi(t)$$
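The following MATLAB sketch illustrates this modal procedure for a diagonalizable homogeneous system, using the matrix of (6.12); the initial condition and evaluation time are arbitrary:

% Modal solution: transform to the eigenvector basis, propagate each
% mode independently, then transform back.
A  = [-2 1; 2 -3];                 % matrix (6.12); distinct eigenvalues
x0 = [3; -1];                      % arbitrary initial condition
[M, L] = eig(A);                   % columns of M are eigenvectors
xi0 = M \ x0;                      % xi(t0) = M^{-1} x(t0), as in (6.29)
t  = 1.5;                          % evaluation time
xi = exp(diag(L)*t) .* xi0;        % decoupled modes: xi_i = e^{lambda_i t} xi_i(0)
x  = M * xi;                       % back to original coordinates, x = M*xi
norm(x - expm(A*t)*x0)             % agrees with the matrix exponential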
For example, the displacement of a flexible beam can be expressed as an infinite sum of products of spatial mode shapes $X_i(x)$ and time-dependent coefficients $T_i(t)$:

$$u(x,t) = \sum_{i=1}^{\infty} X_i(x)T_i(t) \tag{6.30}$$
The infinite series arises because the describing equation for beam displacement is a partial differential equation, rather than an ordinary differential equation such as the ones with which we have been working. When the terms are ordered in decreasing magnitude of $X_i(x)$, it is common practice to truncate the series after the few most significant terms, thereby approximating an infinite-dimensional system with a finite-dimensional one that can be analyzed via matrix arithmetic.
For example, the equations that describe the displacement $u(x,t)$ of a stretched string are

$$\frac{\partial^2 u}{\partial x^2} - \frac{\rho}{T}\frac{\partial^2 u}{\partial t^2} = 0 \tag{6.31}$$

where $\rho$ is the density of the string and $T$ is its tension. If the string is of length $\ell$, is initially displaced at a location $x = c$ to a height $u(c,0) = h$, and is subsequently released, it can be shown [6] that

$$u(x,t) = k\sum_{n=1}^{\infty}\frac{1}{n^2}\sin\!\left(\frac{n\pi c}{\ell}\right)\sin\!\left(\frac{n\pi x}{\ell}\right)\cos\!\left(\frac{n\pi}{\ell}\sqrt{\frac{T}{\rho}}\,t\right) \tag{6.32}$$

where $k$ is a constant determined by $h$, $c$, and $\ell$.
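As an illustration of truncating such a series, the following sketch evaluates the first N terms of (6.32); the constant k and the physical parameters are arbitrary placeholder values:

% Truncated modal sum for the plucked string, Equation (6.32).
k = 1;  c = 0.3;  ell = 1;  T = 1;  rho = 1;   % hypothetical parameters
N = 10;                                        % retain the first N modes
x = linspace(0, ell, 200);  t = 0.25;
u = zeros(size(x));
for n = 1:N
    u = u + (k/n^2) * sin(n*pi*c/ell) .* sin(n*pi*x/ell) ...
          .* cos(n*pi*sqrt(T/rho)*t/ell);
end
plot(x, u); xlabel('x'); ylabel('u(x,t)');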
The concept of modes also explains the qualitative shapes of trajectories in phase portraits. Reconsider the system of (6.9),

$$\dot{x} = A_1 x = \begin{bmatrix}-1 & 0\\ 0 & -4\end{bmatrix}x \tag{6.33}$$
with eigenvalues and eigenvectors computed in (6.10). An enlarged view of the
upper-right quadrant of the phase portrait for that system is given again in Figure
6.13, where the trajectories on the eigenvectors are drawn with arrows pointing
in the direction of positive time. The natural question arises when using phase
portraits as a qualitative solution tool, “How do we know which direction the
trajectories will take, i.e., how do we know they appear concave up as in Figure
6.13, rather than facing toward the sides? Could not trajectories approach the
origin from the vertical direction instead of seeming to flatten out and approach
horizontally, as the graphs show?”
To resolve these questions and provide a tool for guessing the shapes of the trajectories, we show a state, $x(t)$, and the tangent to the trajectory at that point, $\dot{x}(t) = Ax(t)$. This tangent vector is decomposed along the two eigenvectors, $e_1$ and $e_2$. Analytically, we know from the preceding section that
$$\dot{x}(t) = Ax(t) = A\sum_{i=1}^{2}\xi_i(t)e_i = \sum_{i=1}^{2}\xi_i(t)Ae_i = \sum_{i=1}^{2}\xi_i(t)\lambda_i e_i = -e^{-t}e_1 - 4e^{-4t}e_2 \tag{6.34}$$
where the last line easily results from our $A$-matrix being already diagonal (taking, for the trajectory illustrated, the initial condition $x(0) = e_1 + e_2$). From this expansion, the lengths of the components along each of the eigenvector directions are clear. For the time selected, the component along (negative) $e_2$ is larger than the component along (negative) $e_1$. The trajectory at this time, then, is evolving more along $e_2$ than $e_1$.
Taking a macroscopic view of the plot, we can see that the eigenvalue $\lambda_2 = -4$ is larger in magnitude (“faster” in time-constant terms) than the eigenvalue $\lambda_1 = -1$. The result is that, until it decays to negligible proportions relative to the first mode (the component along $e_1$), the second mode (the component of motion along $e_2$) is larger. When sketching the plot, then, we naturally expect the vertical component of motion to dominate for small time, and the reverse for large time. Hence, we expect the curvature of the graph to be the concave up (and down) shape as observed.
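One simple numerical aid for such sketches is a direction field, in which the tangent vector $\dot{x} = Ax$ is drawn on a grid of points; a MATLAB sketch (the grid density is an arbitrary choice) is:

% Direction field: draw the tangent vector xdot = A*x at each grid point.
A = [-1 0; 0 -4];
[X1, X2] = meshgrid(-4:0.5:4, -4:0.5:4);
DX1 = A(1,1)*X1 + A(1,2)*X2;      % first component of A*x
DX2 = A(2,1)*X1 + A(2,2)*X2;      % second component of A*x
quiver(X1, X2, DX1, DX2);
xlabel('x_1'); ylabel('x_2');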
Figure 6.13 Detail of phase portrait for the homogeneous system given by (6.9).
Example: Sketch the phase portrait of a homogeneous system whose eigenvalues are real, with $\lambda_1 < \lambda_2 < 0$, and whose eigenvectors are

$$\{e_1, e_2\} = \left\{\begin{bmatrix}3\\ 1\end{bmatrix},\; \begin{bmatrix}1\\ 3\end{bmatrix}\right\} \tag{6.35}$$
Solution:
The first step in determining the nature of the trajectories is to sketch the invariant
subspaces, i.e., the straight lines that lie along the given eigenvectors. Noticing
that both eigenvalues are negative real numbers, we expect all trajectories to
approach the origin from any location. Because the eigenvalue $\lambda_1$ is faster than $\lambda_2$, we expect the trajectories for small time to experience more change in the direction of $e_1$ than in the direction of $e_2$. After a long time, the first mode will have decayed, and the trajectories’ direction will be dominated by the second mode, i.e., along $e_2$. We then have sufficient information to sketch the phase portrait shown in Figure 6.14.
Figure 6.14 Phase portrait for the example system given by the eigenvectors in (6.35).
The figure also shows the decomposition of a trajectory into its modes. For the
sample time selected, it can be seen that there is more motion along the first mode
than along the second. At a later time (farther along that trajectory), there will be
more motion along the second mode.
6.4 Solutions of Time-Varying Systems

If the system is time-varying, with state equations

$$\dot{x}(t) = A(t)x(t) + B(t)u(t) \tag{6.36}$$

then we must either solve the system using a different technique or else find a different
integrating factor. In the next two sections, special matrices are derived, called
the fundamental solution matrix and the state transition matrix. The fundamental
solution matrix performs the function of the integrating factor, while the overall
solution is in terms of the state transition matrix. As we will see, though, the state
transition matrix is difficult to compute.
6.4.1 The Fundamental Solution Matrix

Consider first the homogeneous, time-varying state equation

$$\dot{x}(t) = A(t)x(t) \tag{6.37}$$
We know from Chapter 2 that the set of solutions to (6.37) constitutes a linear vector space. It can be observed that this space is $n$-dimensional by considering a basis set of $n$ linearly independent initial condition vectors $\{x_{i0}\}$, $i = 1,\ldots,n$. If we restrict our attention to matrices $A(t)$ that are smooth, then it is possible to guarantee that the solutions to (6.37), given an arbitrary initial condition $x(t_0)$, are unique. Then the set $\{x_i(t)\}$, where $\dot{x}_i(t) = A(t)x_i(t)$ and $x_i(t_0) = x_{i0}$, defines $n$ solutions for (6.37) that are linearly independent on $[t_0, t]$. Any additional solution $\phi(t)$ must be a linear combination of the $x_i(t)$, because $\phi(t_0) = \sum_{i=1}^{n}\alpha_i x_{i0}$ implies that $\phi(t) = \sum_{i=1}^{n}\alpha_i x_i(t)$ for some set of scalars $\alpha_i$, $i = 1,\ldots,n$.
Organizing these linearly independent solutions, we construct a matrix $X(t)$, known as the fundamental solution matrix, as follows:

$$X(t) = \bigl[x_1(t)\;\;x_2(t)\;\cdots\;x_n(t)\bigr]$$
Because its columns are linearly independent, $X(t)$ is invertible, and differentiating the identity $X^{-1}(t)X(t) = I$ gives the derivative of its inverse:

$$\frac{dX^{-1}(t)}{dt} = -X^{-1}(t)\frac{dX(t)}{dt}X^{-1}(t) = -X^{-1}(t)A(t)\underbrace{X(t)X^{-1}(t)}_{I} = -X^{-1}(t)A(t)$$

where we have used $\dot{X}(t) = A(t)X(t)$, since each column of $X(t)$ solves (6.37).
From this result, we can now show that the matrix X 1 ( t ) qualifies as a valid
integrating factor for the state equations in (6.37).
We may use this integrating factor in the nonhomogeneous equations in
(6.36):
$$X^{-1}(t)\bigl[\dot{x}(t) - A(t)x(t)\bigr] = X^{-1}(t)B(t)u(t)$$

$$X^{-1}(t)\dot{x}(t) - X^{-1}(t)A(t)x(t) = X^{-1}(t)\dot{x}(t) + \frac{dX^{-1}(t)}{dt}x(t) = X^{-1}(t)B(t)u(t)$$

$$\frac{d}{dt}\bigl[X^{-1}(t)x(t)\bigr] = X^{-1}(t)B(t)u(t)$$

Integrating from $t_0$ to $t$,

$$X^{-1}(t)x(t) - X^{-1}(t_0)x(t_0) = \int_{t_0}^{t}X^{-1}(\tau)B(\tau)u(\tau)\,d\tau$$
or
$$x(t) = X(t)X^{-1}(t_0)x(t_0) + \int_{t_0}^{t}X(t)X^{-1}(\tau)B(\tau)u(\tau)\,d\tau \tag{6.38}$$
6.4.2 The State-Transition Matrix

The matrix

$$\Phi(t,\tau) \triangleq X(t)X^{-1}(\tau) \tag{6.39}$$

is known as the state-transition matrix, and it lends insight into the solutions of time-varying systems and time-invariant systems alike. With this notation, (6.38) becomes
$$x(t) = \Phi(t,t_0)x(t_0) + \int_{t_0}^{t}\Phi(t,\tau)B(\tau)u(\tau)\,d\tau \tag{6.40}$$
In general, the state-transition matrix of a time-varying system is difficult to find. One formal expression for it is the Peano-Baker series

$$\Phi(t,\tau) = I + \int_{\tau}^{t}A(\sigma_1)\,d\sigma_1 + \int_{\tau}^{t}A(\sigma_1)\int_{\tau}^{\sigma_1}A(\sigma_2)\,d\sigma_2\,d\sigma_1 + \cdots \tag{6.41}$$

In the special case that $A(t)$ commutes with its own integral, i.e., $A(t)\int_{\tau}^{t}A(\sigma)\,d\sigma = \bigl(\int_{\tau}^{t}A(\sigma)\,d\sigma\bigr)A(t)$, this series can be summed in closed form as

$$\Phi(t,\tau) = \exp\!\left[\int_{\tau}^{t}A(\sigma)\,d\sigma\right] \tag{6.42}$$
Although it is rare for this condition to be satisfied for general $A(t)$, it does hold when $A$ is constant or when $A(t)$ is diagonal, in which cases there are more direct methods for finding the solution of the system. One should be warned that the expression in (6.42) is not, in general, equivalent to $e^{At}$, and it must often be computed with a Taylor series expansion of the matrix exponential.
Example: Find the state-transition matrix for the system of Example 6.1, whose $A$-matrix is

$$A = \begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}$$

Solution:

It should first be noted that the state-transition matrix for Example 6.1 is apparent from the solution already obtained. Computing it explicitly from (6.42), which applies here because the constant $A$ commutes with its own integral,

$$\Phi(t,\tau) = \exp\!\left[\int_{\tau}^{t}\begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}d\sigma\right] = \exp\begin{bmatrix}0 & t-\tau\\ 0 & 0\end{bmatrix} = I + \begin{bmatrix}0 & t-\tau\\ 0 & 0\end{bmatrix} = \begin{bmatrix}1 & t-\tau\\ 0 & 1\end{bmatrix}$$
Note the agreement between this result and the matrix seen in the integrand of the
solution of Example 6.1.
Several properties of the state-transition matrix follow directly from definition (6.39). First, for the homogeneous system, the solution may be restarted from any intermediate time $\tau$:

$$x(t) = X(t)X^{-1}(\tau)X(\tau)X^{-1}(t_0)x(t_0) = X(t)X^{-1}(\tau)x(\tau) = \Phi(t,\tau)x(\tau) \tag{6.43}$$

Second, state transitions compose:

$$\Phi(t_2,t_1)\Phi(t_1,t_0) = X(t_2)X^{-1}(t_1)X(t_1)X^{-1}(t_0) = X(t_2)X^{-1}(t_0) = \Phi(t_2,t_0) \tag{6.44}$$

from which it also follows that $\Phi(t,\tau)$ is always invertible, with $\Phi^{-1}(t,\tau) = \Phi(\tau,t)$. Third, differentiating (6.39),

$$\frac{d\Phi(t,\tau)}{dt} = \frac{d\bigl[X(t)X^{-1}(\tau)\bigr]}{dt} = \frac{dX(t)}{dt}X^{-1}(\tau) = A(t)X(t)X^{-1}(\tau) = A(t)\Phi(t,\tau) \tag{6.45}$$
The state-transition matrix and the fundamental solution matrix $X(t)$ are therefore solutions of the same homogeneous differential equation, (6.37). This suggests a numerical method for estimating $\Phi(t,\tau)$: integrate (6.45) numerically over time $t$, subject to the initial condition $\Phi(\tau,\tau) = I$. This is not often done in practice, however, mostly because the system input $u(t)$ is itself not known a priori, so such an open-loop solution of (6.40) is of limited use.
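Nevertheless, when it is needed, the numerical integration of (6.45) is straightforward. The sketch below integrates Φ column-wise with ode45; the time-varying A(t) shown is a hypothetical choice for illustration:

% Integrate dPhi/dt = A(t)*Phi from Phi(tau,tau) = I, columns stacked
% into one vector for the ODE solver.
Afun = @(t) [-1 0; sin(t) -2];               % hypothetical A(t)
n = 2;  tau = 0;  t_end = 3;
odefun = @(t, p) reshape(Afun(t) * reshape(p, n, n), n*n, 1);
[~, P] = ode45(odefun, [tau t_end], reshape(eye(n), n*n, 1));
Phi = reshape(P(end, :), n, n)               % Phi(t_end, tau)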
For LTI systems, the fundamental solution matrix may be taken to be $X(t) = e^{At}$, so that

$$\Phi(t,t_0) = e^{At}e^{-At_0} = e^{A(t-t_0)}$$
Remember from the definition that for a time-invariant system, the solution depends only on the difference $t - t_0$. Fortunately, in this situation we have already demonstrated analytical methods for generating the state-transition matrix $e^{At}$ (see Chapter 5). Note that this situation is entirely consistent with the properties of state-transition matrices derived for the more general time-varying case above. In particular, we have already shown that in a homogeneous system,

$$x(t) = e^{A(t-t_0)}x(t_0)$$
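In MATLAB, $e^{A(t-t_0)}$ is computed with expm (not the element-wise exp). The sketch below also verifies the composition property (6.44); the matrix and sample times are arbitrary:

% LTI state-transition matrix via the matrix exponential.
A  = [-1 0; 0 -4];
t0 = 0;  t1 = 0.7;  t2 = 1.5;
Phi10 = expm(A*(t1 - t0));
Phi21 = expm(A*(t2 - t1));
Phi20 = expm(A*(t2 - t0));
norm(Phi21*Phi10 - Phi20)        % near zero: Phi(t2,t1)Phi(t1,t0) = Phi(t2,t0)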
6.5 Solutions of Discrete-Time Systems

Discrete-time state equations take the form

$$\begin{aligned}
x(k+1) &= A_d(k)x(k) + B_d(k)u(k)\\
y(k) &= C_d(k)x(k) + D_d(k)u(k)
\end{aligned} \tag{6.46}$$
Here we have implicitly assumed by our notation that the discrete-time system matrices $A_d$, $B_d$, $C_d$, and $D_d$ may be functions of the time index $k$. In a time-invariant system, they will not be.
However, because the universe is modeled (at least by most engineers) as evolving in continuous time, the equations in (6.46) more often result from the discretization of a continuous-time system such as

$$\begin{aligned}
\dot{x}(t) &= A(t)x(t) + B(t)u(t)\\
y(t) &= C(t)x(t) + D(t)u(t)
\end{aligned} \tag{6.47}$$
6.5.1 Discretization
Consider the $k$th time instant, wherein $t = kT$. At $T$ units of time later, $t = (k+1)T$. In order to get an accurate discrete-time equivalent, we will assume that the period $T$ is much smaller than the Shannon period for the input signal $u(t)$ (see [4], pp. 79-82). If this is the case, we can approximate the input as $u(t) \approx u(kT)$, or simply $u(k)$, over the entire interval $kT \le t < (k+1)T$. Making the input constant over the sampling interval allows us to remove the input term from the integrand in the solution for (6.40):
$$x(k+1) = \Phi(k+1,k)x(k) + \left[\int_{kT}^{(k+1)T}\Phi(k+1,\tau)B(\tau)\,d\tau\right]u(k) \tag{6.48}$$

where, for brevity, the sample time $kT$ is denoted simply by $k$ inside $\Phi(\cdot,\cdot)$.
In this form, if $\Phi(t,\tau)$ and $B(\tau)$ are known, the solution for the state vector could be computed each time an input is applied. The matrices

$$A_d(k) = \Phi(k+1,k), \qquad B_d(k) = \int_{kT}^{(k+1)T}\Phi(k+1,\tau)B(\tau)\,d\tau$$
can be computed from knowledge of the system, and (6.46) is the result. The
output equation, being an algebraic equation, follows from a knowledge of C( t )
and D( t ) . Also note that (6.48) provides the values of the state vector at the
chosen sample instants. At intermediate times, the state variables may take on
other values, resulting in a “ripple” effect if the variables are plotted as functions
of continuous time. For discrete-time analysis, though, it is only the values at the
sample instants that we are interested in.
Note that (6.48) is not the solution to the state equations in (6.46). Rather, (6.48) is a discretization (MATLAB: c2d(sysc,Ts,method)) of the state equations in (6.47). As we saw in Examples 3.10 and 3.11, the solutions of discrete-time systems are often inductively obtained by iterating over a few time intervals on (6.46).
$$\begin{aligned}
x(j+1) &= A_d(j)x(j) + B_d(j)u(j)\\
x(j+2) &= A_d(j+1)x(j+1) + B_d(j+1)u(j+1)\\
&= A_d(j+1)\bigl[A_d(j)x(j) + B_d(j)u(j)\bigr] + B_d(j+1)u(j+1)\\
&= A_d(j+1)A_d(j)x(j) + A_d(j+1)B_d(j)u(j) + B_d(j+1)u(j+1)\\
x(j+3) &= A_d(j+2)A_d(j+1)A_d(j)x(j) + A_d(j+2)A_d(j+1)B_d(j)u(j)\\
&\qquad + A_d(j+2)B_d(j+1)u(j+1) + B_d(j+2)u(j+2)
\end{aligned}$$

until, by induction,

$$x(k) = \left[\prod_{q=j}^{k-1}A_d(q)\right]x(j) + \sum_{i=j+1}^{k}\left[\prod_{q=i}^{k-1}A_d(q)\right]B_d(i-1)u(i-1) \tag{6.49}$$

where the matrix products are ordered with increasing index from right to left, and we adopt the convention that

$$\prod_{q=k}^{k-1}A_d(q) \triangleq I$$
Consider the situation in which the system is homogeneous, i.e., $u(k) = 0$ for all $k$. Then we would have

$$x(k) = \left[\prod_{i=j}^{k-1}A_d(i)\right]x(j) \tag{6.50}$$

By analogy with the continuous-time case, we therefore define the discrete-time state-transition matrix as

$$\Phi(k,j) \triangleq \prod_{i=j}^{k-1}A_d(i) \tag{6.51}$$
As in the continuous-time case, Equation (6.50) makes it apparent that the state-transition matrix $\Phi(k,j)$ may be interpreted as the linear operator that takes a state vector at time $j$ and returns the state vector at time $k$. This matrix is defined only for $k \ge j$ and shares most of the properties of the continuous-time state-transition matrix, except for invertibility (the factors $A_d(i)$ need not be invertible). From the structure of (6.51), it is clear that the complete solution (6.49) may be written as

$$x(k) = \Phi(k,j)x(j) + \sum_{i=j+1}^{k}\Phi(k,i)B_d(i-1)u(i-1) \tag{6.52}$$
If the original continuous-time system (6.47) is time-invariant, then $\Phi(t,\tau) = e^{A(t-\tau)}$, and the discretization (6.48) becomes

$$x\bigl((k+1)T\bigr) = e^{A\left((k+1)T-kT\right)}x(kT) + \left[\int_{kT}^{(k+1)T}e^{A\left((k+1)T-\tau\right)}B(\tau)\,d\tau\right]u(kT)$$

or

$$x(k+1) = e^{AT}x(k) + \left[\int_{kT}^{(k+1)T}e^{A\left((k+1)T-\tau\right)}B(\tau)\,d\tau\right]u(k) \tag{6.53}$$

so that

$$A_d = e^{AT}, \qquad B_d = \int_{kT}^{(k+1)T}e^{A\left((k+1)T-\tau\right)}B(\tau)\,d\tau$$
again giving a formula for obtaining (6.46). Once again, matrices Ad and Bd can
be computed “off-line,” i.e., without knowledge of the input. Note that matrix Ad
is independent of k.
Furthermore, for a time-invariant system, (6.49) will become (MATLAB: lsim(sys,u,t))

$$x(k) = A_d^{\,k-j}x(j) + \sum_{i=j+1}^{k}A_d^{\,k-i}B_d\,u(i-1) \tag{6.54}$$

so that the state-transition matrix depends only on the difference between its arguments:

$$\Phi(k,j) = \Phi(k-j) = A_d^{\,k-j}$$
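Because (6.54) is just a recursion, the discrete-time solution is also easy to compute by direct iteration. The sketch below uses the A_d and B_d matrices computed in the discretization example later in this section, with an arbitrary initial state and a unit-step input:

% Direct iteration of x(k+1) = Ad*x(k) + Bd*u(k).
Ad = [0.741 0.0779; 0 0.819];
Bd = [0.0906; 0.0906];
x  = [1; -1];                     % arbitrary x(0)
N  = 50;  X = zeros(2, N+1);  X(:,1) = x;
for k = 1:N
    x = Ad*x + Bd*1;              % unit-step input u(k) = 1
    X(:, k+1) = x;
end
stairs(0:N, X.'); xlabel('k'); legend('x_1(k)', 'x_2(k)');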
We should point out here that the discussion of modes and modal
decompositions as presented in Section 6.3 applies here as well. Because the
eigenvalues and eigenvectors of a matrix A or Ad are calculated in the same
manner regardless of whether the system is discrete-time or continuous-time, the
modal matrix M functions in the same way for both systems. If we use it to define
a new state vector via $x(k) = M\xi(k)$, then

$$\begin{aligned}
\xi(k+1) &= M^{-1}A_d M\,\xi(k) + M^{-1}B_d(k)u(k) \triangleq \bar{A}_d\,\xi(k) + \bar{B}_d(k)u(k)\\
y(k) &= C_d M\,\xi(k) + D_d u(k) \triangleq \bar{C}_d\,\xi(k) + \bar{D}_d u(k)
\end{aligned} \tag{6.55}$$
Example: Find the discrete-time equivalent, assuming a piecewise-constant input and a sampling period of $T = 0.1$ s, of the LTI system defined by

$$A = \begin{bmatrix}-3 & 1\\ 0 & -2\end{bmatrix}, \qquad B = \begin{bmatrix}1\\ 1\end{bmatrix}$$
Solution
The $A$-matrix was also used in Example 5.4, so we know from that example that

$$e^{At} = \begin{bmatrix}e^{-3t} & e^{-2t}-e^{-3t}\\ 0 & e^{-2t}\end{bmatrix}$$

Therefore,

$$A_d = e^{AT} = \begin{bmatrix}e^{-3T} & e^{-2T}-e^{-3T}\\ 0 & e^{-2T}\end{bmatrix} = \begin{bmatrix}0.741 & 0.0779\\ 0 & 0.819\end{bmatrix}$$
As for $B_d$, we have

$$\begin{aligned}
B_d &= \int_{kT}^{(k+1)T}e^{A\left((k+1)T-\tau\right)}B\,d\tau
= \int_{kT}^{(k+1)T}\begin{bmatrix}e^{-3\left((k+1)T-\tau\right)} & e^{-2\left((k+1)T-\tau\right)}-e^{-3\left((k+1)T-\tau\right)}\\ 0 & e^{-2\left((k+1)T-\tau\right)}\end{bmatrix}\begin{bmatrix}1\\ 1\end{bmatrix}d\tau\\
&= \int_{kT}^{(k+1)T}\begin{bmatrix}e^{-2\left((k+1)T-\tau\right)}\\ e^{-2\left((k+1)T-\tau\right)}\end{bmatrix}d\tau
= \frac{1}{2}\begin{bmatrix}1-e^{-2T}\\ 1-e^{-2T}\end{bmatrix} = \begin{bmatrix}0.0906\\ 0.0906\end{bmatrix}
\end{aligned}$$
Therefore, the discrete-time approximation of the system, sampled at 10 Hz, is

$$x(k+1) = \begin{bmatrix}0.741 & 0.0779\\ 0 & 0.819\end{bmatrix}x(k) + \begin{bmatrix}0.0906\\ 0.0906\end{bmatrix}u(k)$$
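The same result can be obtained with the c2d command (assuming the Control System Toolbox); the 'zoh' method corresponds to the piecewise-constant input assumption used above:

% Verify the hand computation of Ad and Bd with c2d.
A = [-3 1; 0 -2];  B = [1; 1];
sysc = ss(A, B, eye(2), 0);          % outputs not needed for Ad, Bd
sysd = c2d(sysc, 0.1, 'zoh');        % T = 0.1 s, i.e., 10 Hz sampling
sysd.A, sysd.B                       % matches Ad and Bd above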
6.6 Summary
In this chapter, we have investigated the analytical solution of state space
equations. In doing so, we have demonstrated one of the important features of
state variable analysis, namely, that the first-order analysis methods of scalar
differential equations can help us solve (first-order) vector differential equations.
This is first apparent in the integrating factor technique used to solve the LTI
systems. Because any linear system can be represented in state space, these
solution methods are presented to give the reader a set of tools for solving state
space equations that are similar to those used for solving scalar differential
equations.
However, as we have stated, the solution methods developed here are not
necessarily the most useful tools for control system design and analysis. This is
because, as we have shown, we must know the applied input to find the solution
of a system. In control systems design, the input signal is the goal and is not
generally available a priori. Nevertheless, the solution technique has provided
insight into the evolution of state variables, and provided a technique by which
discrete-time systems can be generated as approximations of sampled continuous-
time systems.
Other highlights of the chapter are:
Chapter 6. Solutions to State Equations 263
• The solution of an LTI system has the same form, i.e., a convolution integral, as the solution of scalar systems. Only the computation of the matrix exponential $e^{At}$ complicates the matter.
• For homogeneous systems, phase portraits can be useful visualization
tools. It is recommended that the reader practice generating phase
portraits for systems with different dynamic characteristics. As we have
mentioned, the evolution of a system’s solution trajectories is often
understood using this visual imagery, even in higher dimensions. In
nonlinear problems as well, phase portraits are indispensable tools and
can be used to predict the existence of limit cycles (nonlinear
oscillations), switching times, final values, and stability characteristics
that are very often difficult to determine analytically.
• We have introduced the notion of a system mode. This is another state
space tool that is often generalized into higher dimensions. Heat
conduction problems, bending plates and beams, and many
electromagnetic phenomena are described by partial differential
equations that result in infinite-dimensional state spaces. By describing
the solution of such systems as sums of modes, we can retain only the
most dominant and significant components of the overall solutions.
Others, because they are less significant, might be simply ignored. The
modal decomposition of a system is also a convenient method for
generating a qualitative picture of a phase portrait.
• For time-varying systems, state variable solutions are quite difficult to
obtain. The state-transition matrix, while symbolically providing a
simple integral solution, can be as elusive as the solution of a nonlinear
equation. Only limited techniques are available for its construction, such
as the Peano-Baker series, which itself can be difficult to compute,
especially in closed form.
• In discrete-time, we have shown that the state variable solution and the
state transition matrix are similar in appearance to their continuous-time
counterparts.
Part of the discussion of the solutions generated in this chapter has hinted at
the stability properties of systems. For example, in the generation of the phase
portraits, we mentioned the tendency of a system mode to approach or diverge
from the origin. Such modes are accordingly described as stable or unstable, for obvious reasons. However, such a notion of stability ignores the fact that phase portraits,
by their definition, have not accounted for the presence of the input signal. Indeed,
there are a number of different perspectives on the stability of a system, and these
will be discussed in the next chapter.
6.7 Problems
6.1 Solve the state variable system given in Problem 1.9 for $x(t)$, $t \ge 0$, given that $u(t) = e^{-3t}$.
6.2 Draw phase portraits for the systems with the following A-matrices:
a) $\begin{bmatrix}-8 & 6\\ 0 & -2\end{bmatrix}$  b) $\begin{bmatrix}8 & 6\\ 0 & 2\end{bmatrix}$  c) $\begin{bmatrix}0 & 4\\ -4 & 0\end{bmatrix}$

d) $\begin{bmatrix}-4 & -4\\ 4 & -4\end{bmatrix}$  e) $\begin{bmatrix}1 & 0\\ 0 & 0\end{bmatrix}$  f) $\begin{bmatrix}1 & 0\\ 1 & 0\end{bmatrix}$

g) $\begin{bmatrix}0 & 0\\ 0 & 0\end{bmatrix}$  h) $\begin{bmatrix}-1 & 1 & 0\\ 0 & -1 & 0\\ 0 & 0 & -1\end{bmatrix}$
6.3 Find the state-transition matrix $\Phi(t,\tau)$ for the systems with the following time-varying $A$-matrices:

a) $\begin{bmatrix}0 & 1 & t^2\\ 0 & 0 & t\\ 0 & 0 & 0\end{bmatrix}$  b) $\begin{bmatrix}0 & 0\\ 2t & 2t\end{bmatrix}$  c) $\begin{bmatrix}e^{2t} & 0\\ 0 & 2t\end{bmatrix}$
Show that the system

$$\dot{x}(t) = \begin{bmatrix}-1 & 0\\ 1 & f(t)\end{bmatrix}x(t)$$

can be considered as decoupled scalar equations, the second of which has the solution of the first as an input term. Use this fact to determine an expression for $\Phi(t,\tau)$ for the second-order system.
6.9 Use Equation (6.7) to find an expression for the impulse response matrix, i.e., the solution of an LTI system whose input is the Dirac delta function $\delta(t)$.
6.10 Find an expression for the state-transition matrix $\Phi(k,j)$ of the discrete-time system

$$x(k+1) = \begin{bmatrix}3.5 & 2\\ -6 & -3.5\end{bmatrix}x(k)$$
6.11 For the following system, find an expression for $y(n)$ if the input is $u(k) = 1$, $k = 0, 1, \ldots$:

$$\begin{aligned}
x(k+1) &= \begin{bmatrix}0 & 1\\ -0.5 & 1\end{bmatrix}x(k) + \begin{bmatrix}1\\ 1\end{bmatrix}u(k)\\
y(k) &= \begin{bmatrix}1 & 0\end{bmatrix}x(k)
\end{aligned}$$
6.12 For the system

$$\begin{aligned}
\dot{x}(t) &= \begin{bmatrix}-2 & 0\\ 1 & 0\end{bmatrix}x(t) + \begin{bmatrix}1\\ 0\end{bmatrix}u(t)\\
y(t) &= \begin{bmatrix}0 & 1\end{bmatrix}x(t) + 2u(t)
\end{aligned}$$

find the output $y(t)$ resulting from a unit-step input with zero initial conditions.
References
[1] Atherton, Derek P., Nonlinear Control Engineering, Van Nostrand Reinhold,
1982.
[2] Brockett, Roger W., Finite Dimensional Linear Systems, John Wiley & Sons,
1970.
[3] Coddington, Earl A., and Norman Levinson, Theory of Ordinary Differential
Equations, McGraw-Hill, 1955.
[4] Franklin, Gene, and J. David Powell, Digital Control of Dynamic Systems,
Addison-Wesley, 1981.
[5] Kailath, Thomas, Linear Systems, Prentice-Hall, 1980.
[6] Lebedev, N. N., I. P. Skalskaya, and Y. S. Uflyand, Worked Problems in Applied
Mathematics, Dover, 1965.
[7] Rugh, Wilson, Linear System Theory, 2nd edition, Prentice-Hall, 1996.