
6

Solutions to State Equations


It was stated in Chapter 1 that the state of a system at a given time is sufficient
information to determine the state at all future times, assuming that the system
dynamics (the system matrices A, B, C, and D) and input are all known. Often,
the reason for studying a linear system is that one or more of these quantities is
not known. Sometimes the system matrices are unknown or only approximated,
as with adaptive and robust control. Random signals are present in the input and
output equation, requiring stochastic control. In the most common case, only a
desired state or output is specified, and it is the task of the engineer to “design”
the input by introducing compensators.
In this chapter, we consider only the simplest situations, wherein all terms of
the state equations are known, and we seek an analytical solution to these state
equations. This process is useful for our understanding of the behavior and
properties of state equations, but often we cannot write such solutions in practice,
because we usually have insufficient information to do so.

6.1 Linear, Time-Invariant (LTI) Systems


Recall the state equations for a linear, time-invariant (LTI) state space system [MATLAB: ss(A,B,C,D)]:

$$\dot{x} = Ax + Bu, \qquad x(t_0) = x_0$$
$$y = Cx + Du \qquad (6.1)$$

The difficulty in solving this system is the first equation, $\dot{x} = Ax + Bu$, because it is a differential equation. When we determine $x(t)$, it becomes a straightforward matter to substitute it into the second equation to determine $y(t)$.
Note that by denoting our input, output, and feed-forward matrices as capital
letters (B, C, and D), we are implying that these manipulations hold for MIMO as
well as SISO situations.

We will use the method of introducing an integrating factor in the solution of first-order differential equations of the form

$$\dot{x}(t) = Ax(t) + Bu(t) \qquad (6.2)$$

Multiplying (6.2) by the factor $e^{-At}$ will result in a "perfect" differential on the left side:

$$e^{-At}\dot{x}(t) = e^{-At}Ax(t) + e^{-At}Bu(t)$$
$$e^{-At}\dot{x}(t) - e^{-At}Ax(t) = e^{-At}Bu(t) \qquad (6.3)$$
$$\frac{d}{dt}\left[e^{-At}x(t)\right] = e^{-At}Bu(t)$$

(Note that $A$ commutes with $e^{At}$.)


Integrating both sides of this equation over the dummy variable $\tau$ from $t_0$ to $t$,

$$e^{-At}x(t) - e^{-At_0}x(t_0) = \int_{t_0}^{t} e^{-A\tau}Bu(\tau)\,d\tau \qquad (6.4)$$

Finally, moving the initial condition term to the right-hand side and multiplying both sides of the result by $e^{At}$ gives:

$$x(t) = e^{At}e^{-At_0}x(t_0) + e^{At}\int_{t_0}^{t} e^{-A\tau}Bu(\tau)\,d\tau = e^{A(t-t_0)}x(t_0) + \int_{t_0}^{t} e^{A(t-\tau)}Bu(\tau)\,d\tau \qquad (6.5)$$

Actually, although we have noted that this result is based upon an assumption of
time-invariance, we require only that the matrix A be time-invariant, because the
integrating factor would not have worked properly otherwise. If B were time-
varying, we would simply write:

$$x(t) = e^{A(t-t_0)}x(t_0) + \int_{t_0}^{t} e^{A(t-\tau)}B(\tau)u(\tau)\,d\tau \qquad (6.6)$$

This is the familiar convolution integral solution from basic linear systems. Its
derivation here for vectors emphasizes one of the primary motivations for state

space analysis: many of the procedures and results for vectors are direct
extensions of first-order (scalar) cases.
Completing the problem by computing $y(t)$, we obtain [MATLAB: lsim(sys,u,T,X0)]

$$y(t) = C(t)e^{A(t-t_0)}x(t_0) + C(t)\int_{t_0}^{t} e^{A(t-\tau)}B(\tau)u(\tau)\,d\tau + D(t)u(t) \qquad (6.7)$$

Here again, we have allowed C and D to be functions of time because they do not interfere with the actual solution of the differential equation.
The solution (6.7) makes apparent the importance of the matrix exponential $e^{At}$ studied in the last chapter. For LTI systems, this will become known as the state-transition matrix, for reasons that will become apparent in Section 6.4.2.
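As a brief computational aside (not part of the original derivation), the solution (6.5) can be evaluated directly with a matrix-exponential routine and numerical quadrature. The sketch below assumes NumPy/SciPy are available; the example matrices, the step input, and the trapezoidal quadrature are illustrative choices only.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import trapezoid

def lti_solution(A, B, x0, u, t, t0=0.0, n_quad=400):
    """Evaluate Equation (6.5) at a single time t by numerical quadrature."""
    x = expm(A * (t - t0)) @ x0                      # zero-input part
    taus = np.linspace(t0, t, n_quad)
    vals = np.array([expm(A * (t - tau)) @ B @ np.atleast_1d(u(tau)) for tau in taus])
    return x + trapezoid(vals, taus, axis=0)         # zero-state part

# Illustrative stable system with a unit step input
A = np.array([[-1.0, 0.0], [0.0, -4.0]])
B = np.array([[1.0], [4.0]])
print(lti_solution(A, B, np.zeros(2), lambda tau: [1.0], t=3.0))  # approx [0.95, 1.00]
```

For this step input, the states approach the DC gains $-A^{-1}B = [1\;\;1]^T$, as the printed values suggest.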

Example 6.1: Simple LTI System


A simple free-floating object, moving in a straight line, might be described by Newton's law with the equation $F = m\ddot{x}$. Express the system as a pair of state equations and solve using $F(t) = e^{-at}$, $x(0) = x_0$, and $\dot{x}(0) = 0$.

Solution:
Using phase variables, we can define $x_1(t) = x(t)$ and $x_2(t) = \dot{x}(t)$, giving the state equations

$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\end{bmatrix} = \begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} + \begin{bmatrix}0\\ 1/m\end{bmatrix}F(t)$$
$$y = \begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}$$

This system is known as a double integrator. The A-matrix is observed to be in Jordan form, where the two eigenvalues are obviously both zero. We can use the methods of the previous chapter to compute $e^{At}$, but for this case, a Taylor series expansion has only two nonzero terms, since $A^k = 0$ for $k \geq 2$. Therefore,

$$e^{At} = I + At = \begin{bmatrix}1 & t\\ 0 & 1\end{bmatrix}$$
Using (6.7), we compute

$$y(t) = \begin{bmatrix}1 & 0\end{bmatrix}e^{A(t-0)}x(0) + \begin{bmatrix}1 & 0\end{bmatrix}\int_0^t e^{A(t-\tau)}\begin{bmatrix}0\\ 1/m\end{bmatrix}e^{-a\tau}\,d\tau$$
$$= \begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}1 & t\\ 0 & 1\end{bmatrix}\begin{bmatrix}x_0\\ 0\end{bmatrix} + \begin{bmatrix}1 & 0\end{bmatrix}\int_0^t \begin{bmatrix}1 & t-\tau\\ 0 & 1\end{bmatrix}\begin{bmatrix}0\\ 1/m\end{bmatrix}e^{-a\tau}\,d\tau$$
$$= x_0 + \begin{bmatrix}1 & 0\end{bmatrix}\int_0^t \begin{bmatrix}(t-\tau)/m\\ 1/m\end{bmatrix}e^{-a\tau}\,d\tau$$
$$= x_0 + \frac{1}{m}\int_0^t (t-\tau)e^{-a\tau}\,d\tau$$
$$= x_0 + \frac{1}{m}\left[\left(\frac{\tau}{a} - \frac{t}{a} + \frac{1}{a^2}\right)e^{-a\tau}\right]_0^t$$
$$= \left(x_0 - \frac{1}{a^2 m}\right) + \frac{t}{am} + \frac{1}{a^2 m}\,e^{-at}$$

One may verify that this is the correct solution via the usual procedures for solving
simple differential equations.
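One quick way to perform that verification is numerical: integrate the double-integrator equations directly and compare with the closed-form expression for $y(t) = x_1(t)$. The following sketch assumes SciPy's solve_ivp and uses arbitrary illustrative values for $m$, $a$, and $x_0$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (m, a, x0 are free parameters in the example)
m, a, x0 = 2.0, 3.0, 0.5

def f(t, x):
    # Double integrator driven by F(t) = exp(-a t)
    return [x[1], np.exp(-a * t) / m]

sol = solve_ivp(f, (0.0, 4.0), [x0, 0.0], rtol=1e-9, atol=1e-12)

# Closed-form result derived above
t = sol.t
y_closed = (x0 - 1.0 / (a**2 * m)) + t / (a * m) + np.exp(-a * t) / (a**2 * m)
print(np.max(np.abs(sol.y[0] - y_closed)))   # ~1e-9: the two expressions agree
```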

6.2 Homogeneous Systems


In a homogeneous system, of course, the only component of a system’s response
is the zero-input, or initial condition response. This part of the total response is
not easily seen in frequency-domain analysis. Without an input, the state equations are

$$\dot{x} = Ax, \qquad y = Cx$$

and the state vector solution for an LTI system is simply

$$x(t) = e^{A(t-t_0)}x(t_0) \qquad (6.8)$$

or

$$y(t) = Ce^{A(t-t_0)}x(t_0)$$

Obviously, the matrix exponential $e^{At}$ plays a role in both the zero-state and the zero-input parts of the total response. To see the effect of this matrix exponential, it is useful to sketch solutions in the state space, beginning from various initial conditions. This is most easily done in two dimensions, for which such plots are common and are called phase portraits.

6.2.1 Phase Portraits


A phase portrait is strictly defined as a graph of several zero-input responses on a plot of the phase plane, $x(t)$ versus $\dot{x}(t)$, these being known as phase variables. However, the term has become commonly used to denote any sketch of zero-input solutions on the plane of the state variables, regardless of whether they are phase variables or not.
To create a phase portrait, one simply chooses initial conditions to represent wide areas of interest in the $x_1$–$x_2$ plane, solves the system according to (6.8) [MATLAB: initial(sys,X0)], and sketches the result as a function of time, starting from $t_0$. For example, consider the system

$$\dot{x} = A_1x = \begin{bmatrix}-1 & 0\\ 0 & -4\end{bmatrix}x \qquad (6.9)$$
A phase portrait for this system is given in Figure 6.1. In the figure, sufficient
initial conditions are chosen around the edge of the graph to accurately interpolate
the solution for any initial condition in between. The solution drawn from each
initial condition is sketched as a directed curve, as indicated by the arrow showing
the progression of positive time. These curves are known as phase trajectories.
With experience, the general shape of the trajectories can be imagined
without a detailed plot. Generally, it is the qualitative shape of the trajectories that
is important in a phase portrait. (This is particularly true with nonlinear systems,
whose phase portraits can sometimes be constructed entirely qualitatively or with
piecewise analysis of their dynamics.) For example, certain qualitative features of
Figure 6.1 might be predicted from our knowledge of the state space.
Consider the A-matrix given in (6.9) as a linear operator, taking vectors $x$ into vectors $\dot{x}$. Thus, given any position $x(t)$ on the plot, the equation $\dot{x} = Ax$ gives us the tangent vector to the phase trajectory at that point. This is the direction in which the trajectory is evolving. In the figure, we have illustrated this point by indicating a vector, $x(t) = [-3\;\;{-1}]^T$, and the tangent to the curve at that point, $\dot{x}(t) = A_1x = [3\;\;4]^T$. Of course, this velocity will change as the chosen point $x$ changes.
We can compute that this system has eigenvalues $\Lambda_1 = \{-1, -4\}$ (a set known as the spectrum of $A$), and corresponding eigenvectors of

$$\{e_1, e_2\} = \left\{\begin{bmatrix}1\\0\end{bmatrix}, \begin{bmatrix}0\\1\end{bmatrix}\right\} \qquad (6.10)$$
Knowing that these two vectors span invariant subspaces, we conclude that if an initial condition $x(t_0)$ lies on either of these lines, then so will the vector $\dot{x}(t_0) = Ax(t_0)$ at any $t_0$. Therefore, if a chosen point lies on one of the eigenvectors, identified in the graph, the trajectory emanating from that point will
lie on that same line forever. Thus, we can always get a start at constructing a
phase portrait by drawing in the invariant subspaces, i.e., the straight lines, on the
plot.

x  Ax
3

e2
1

x2 0
e1
-1
x

-2

-3

-4
-4 -2 0 2 4
x1

Figure 6.1 Phase portrait for the homogeneous system given in Equation (6.9). For
this type of portrait, wherein all trajectories asymptotically approach the
origin without encircling it, the origin is known as a stable node.

More will be said about the phase trajectories on these portraits in the next section. Until then, we will present further examples of phase portraits for the homogeneous equation $\dot{x} = Ax$.
In order to relate such phase trajectories to the more familiar step responses, Figure 6.2 shows the step response [MATLAB: step(sys)] for the system

x  A1 x  b1u 
LM1 0OP x  LM1OPu
N 0 4Q N4Q (6.11)
y 1 1x

where the output equation was selected arbitrarily and the vector $b_1$ in (6.11) was chosen to give unity DC gain to each state variable, i.e., so that they both asymptotically approach 1. This figure further illustrates the relative speed of the two state variables. Note how the "faster" of the two variables converges to its final value sooner than the "slower" one, and of course the output is simply the sum of the two state variables.

[Figure 6.2 appears here: the curves $x_1(t)$, $x_2(t)$, and $y(t)$ plotted versus time over $0 \leq t \leq 5$.]
Figure 6.2 Step response for the system described by Equation (6.11). The state
variables and output are plotted as functions of time.
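The unity DC gain and the relative speeds of the two states seen in Figure 6.2 can be checked numerically; since $A_1$ is invertible, the unit-step response from zero initial state reduces to $x(t) = A_1^{-1}\left(e^{A_1t} - I\right)b_1$. A minimal sketch, assuming NumPy/SciPy:

```python
import numpy as np
from scipy.linalg import expm

# System (6.11): A1 = diag(-1, -4) with b1 chosen for unity DC gain in each state
A1 = np.array([[-1.0, 0.0], [0.0, -4.0]])
b1 = np.array([[1.0], [4.0]])
c = np.array([1.0, 1.0])

for t in np.linspace(0.0, 5.0, 6):
    # Unit-step response from zero initial state: x(t) = A1^{-1} (e^{A1 t} - I) b1
    x = np.linalg.solve(A1, (expm(A1 * t) - np.eye(2)) @ b1).ravel()
    print(f"t={t:.0f}  x1={x[0]:.3f}  x2={x[1]:.3f}  y={c @ x:.3f}")
# x2 reaches its final value of 1 much sooner than x1; y approaches 2, the sum of the states
```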

Figure 6.3 shows a portrait that is similar to Figure 6.1 except that it appears
rotated and distorted. This is the portrait for the homogeneous (no input) system
with A-matrix

$$A_2 = \begin{bmatrix}-2 & 1\\ 2 & -3\end{bmatrix} \qquad (6.12)$$
with the following spectrum and eigenvectors (expressed as columns in a modal
matrix):

$$\Lambda_2 = \{-1, -4\} \qquad M_2 = \begin{bmatrix}1 & 1\\ 1 & -2\end{bmatrix}$$
[Figure 6.3 appears here: phase-plane trajectories in the $x_1$–$x_2$ plane with the eigenvector directions $e_1$ and $e_2$ marked.]

Figure 6.3 Phase portrait for the homogeneous system given by the matrix in (6.12).
Note that this origin is also a stable node.

As before, we see that the eigenvectors represent invariant subspaces on which


trajectories stay forever. This portrait was produced by performing the similarity
transformation

A2  M 2 A1 M 2 1 (6.13)

From our knowledge of similarity transformations, we recognize this as nothing


but a change of basis of the state space. We can therefore regard Figure 6.3 as
containing the same information as Figure 6.1, but in different coordinates. When
this same similarity matrix is applied to Equation (6.11), the result is

x  M 2 A1 M 21 x  M 2 b1u 
LM2 1OP x  LM5OPu
N2 3Q N7Q (6.14)
y  CM 21 x  1 0 x

which has the step response shown below in Figure 6.4. Note that the state
variables no longer reach unity asymptotically. This is because the similarity
transformation affects the DC gain of the state variables. However, as we would
expect, the output signal y ( t ) remains exactly the same as in Figure 6.2, because
similarity transformations do not affect the input/output performance, only its
internal representation.
[Figure 6.4 appears here: the curves $x_1(t)$, $x_2(t)$, and $y(t)$ for system (6.14) plotted versus time over $0 \leq t \leq 5$.]
Figure 6.4 Step response for the system described by Equation (6.14). The state
variables and output are plotted as a function of time.

Figure 6.5 is somewhat different in the sense that the trajectories do not all
tend toward the origin. This portrait is based upon the following system:

$$A_3 = \begin{bmatrix}-2 & 0\\ 0 & 1\end{bmatrix} \qquad \Lambda_3 = \{-2,\; 1\} \qquad M_3 = \begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix} \qquad (6.15)$$
It is again a diagonal system with the eigenvectors constituting the standard basis.
However, in this system, one of the corresponding eigenvalues is positive. Being
diagonal, it is easy to decompose the system into decoupled parts:

LM x OP  LM2 0OP LM x OP  LM2x OP


1 1 1
(6.16)
Nx Q N 0 1Q Nx Q N x Q
2 2 2

which of course has solutions that can be found independently of one another:

LM x (t )OP  LMe b g
2 t  t0 OP L x (t )O  LMe b 2 t  t0 g x (t OP
g P MN x (t )PQ M eb
1 0 1 0 1 0)
Nx (t )Q MN eb 0 Q N gx ( t ) PQ
t t t  t0
2 0 2 0 2 0

[Figure 6.5 appears here: phase-plane trajectories in the $x_1$–$x_2$ plane with the eigenvector directions $e_1$ and $e_2$ marked; trajectories converge toward the origin along $e_1$ and diverge from it along $e_2$.]
Figure 6.5 Phase portrait for the homogeneous system given by (6.15). In cases such
as this wherein the trajectories approach the origin from one direction but
diverge from it in the other, the origin is known as a saddle point.

The positive exponent in $x_2(t)$ indicates that a solution starting on the invariant subspace $e_2$, while it will stay there, will nevertheless diverge from the origin. On the other hand, the negative exponent makes $x_1(t)$ tend toward zero.

The trajectories in between, while not lying on invariant subspaces, must


interpolate the invariant trajectories, giving the shapes seen in the figure.
Performing a similarity transformation on the system described by (6.15),
which is similar to (6.13) but with a different modal matrix, we can arrive at a
form of (6.15) with a new basis:

$$A_4 = \begin{bmatrix}-3 & 2\\ -2 & 2\end{bmatrix} \qquad \Lambda_4 = \{-2,\; 1\} \qquad M_4 = \begin{bmatrix}2 & 1\\ 1 & 2\end{bmatrix} \qquad (6.17)$$
The phase portrait of this transformed system is shown in Figure 6.6. Again,
motion along the two invariant subspaces tends toward different directions, and
the remaining trajectories interpolate smoothly.

[Figure 6.6 appears here: phase-plane trajectories in the $x_1$–$x_2$ plane with the eigenvector directions $e_1$ and $e_2$ marked.]

Figure 6.6 Phase portrait for the homogeneous system given by (6.17). The origin
in this figure is a saddle point.

It should be pointed out that except at the singular points (for LTI systems,
the origin only), two trajectories cannot cross or meet at a point. This would imply

that at that point, there are two independent solutions to the differential equation,
which is prohibited by the uniqueness theorem. In our linear case, we also expect
the trajectories to be smooth, which is seen in all the plots. Smoothness is
guaranteed by the existence of a unique tangent vector given by the equation
x  Ax .
The nature of the phase portrait when there is only a single eigenvector, as
we might expect in a system having a generalized eigenvector, is investigated
next. An example of such a system is:

$$A_5 = \begin{bmatrix}-1 & 1\\ -1 & -3\end{bmatrix} \qquad \Lambda_5 = \{-2,\; -2\} \qquad e_1 = \begin{bmatrix}1\\ -1\end{bmatrix} \qquad (6.18)$$
This system, represented by the phase portrait in Figure 6.7, indeed shows only a single invariant subspace. This eigenvector, which corresponds to the negative eigenvalue $-2$, tends toward the origin as expected.
[Figure 6.7 appears here: phase-plane trajectories in the $x_1$–$x_2$ plane with the single eigenvector direction $e_1$ marked.]
Figure 6.7 Phase portrait for the homogeneous system given by (6.18). The origin
in this system is called a stable node because again the trajectories
approach the origin without encircling it.

The other trajectories also tend toward the origin, but do so by partially spiraling
inward. Because trajectories cannot cross, none of the spiraling trajectories may
rotate more than 180° before asymptotically reaching the origin.
Figure 6.8 shows the initial condition response (zero inputs) for the same
system, i.e., from (6.18). Note that qualitatively, the plot appears to be similar to
previous time responses, e.g., Figure 6.4, except that now both curves show one
inflection point.

[Figure 6.8 appears here: the curves $x_1(t)$ and $x_2(t)$ plotted versus time over $0 \leq t \leq 4$.]
Figure 6.8 Initial condition response for the system described by Equation (6.18).
The state variables are plotted as a function of time.

Transforming the system described by (6.18) into its Jordan form, we get

$$A_6 = \begin{bmatrix}-2 & 1\\ 0 & -2\end{bmatrix} \qquad \Lambda_6 = \{-2,\; -2\} \qquad e_1 = \begin{bmatrix}1\\ 0\end{bmatrix} \qquad (6.19)$$
The phase portrait is given in Figure 6.9. As we might guess, the single regular
eigenvector in this portrait lies along a coordinate axis because the Jordan form
produces a system of differential equations, one of which is decoupled from the
other.

[Figure 6.9 appears here: phase-plane trajectories in the $x_1$–$x_2$ plane with the single eigenvector direction $e_1$ lying along a coordinate axis.]

Figure 6.9 Phase portrait for the homogeneous system given by (6.19). The origin is
a stable node.

In the case of complex eigenvectors, we mentioned in Section 4.3 that the


geometric interpretation of invariant subspaces does not directly apply because
we are interested in only the real solutions to the system. Therefore, for a system
such as

$$A_7 = \begin{bmatrix}2 & 3\\ -6 & -4\end{bmatrix} \qquad \Lambda_7 = \{-1+j3,\; -1-j3\} \qquad M_7 = \begin{bmatrix}1-j1 & 1+j1\\ j2 & -j2\end{bmatrix} \qquad (6.20)$$
we see in the portrait in Figure 6.10 that no invariant subspaces appear. This being
the case, the spiraling trajectories rotate around the origin forever, asymptotically
approaching it. (The trajectories are spiraling inward because of the negative real
part of the eigenvalues; a system with eigenvalues containing positive real parts
would spiral outward.)

[Figure 6.10 appears here: spiraling trajectories in the $x_1$–$x_2$ plane converging toward the origin; no invariant subspaces appear.]

Figure 6.10 Phase portrait for the homogeneous system given by (6.20). This type of
portrait shows an origin that is known as a stable focus.

The time-domain response of the system described by (6.20) is shown in the initial
condition response of Figure 6.11. The oscillations shown in the phase portrait of
Figure 6.10 are clearly seen.
As a final example, consider the system given by

$$A_8 = \begin{bmatrix}1 & 2\\ -5 & -1\end{bmatrix} \qquad \Lambda_8 = \{j3,\; -j3\} \qquad M_8 = \begin{bmatrix}3-j & 3+j\\ j5 & -j5\end{bmatrix} \qquad (6.21)$$
Relative to the previous example, we predict that this system will have no real
invariant subspaces, and the trajectories will therefore be free to encircle the
origin (without crossing each other). However, because there are no real parts of
the eigenvalues, we cannot have solutions that decay to zero or tend to infinity.
The resulting solutions are depicted in Figure 6.12. We know from our
conventional solutions to differential equations or from our explicit computation
of $e^{At}$ in Chapter 5 that the solutions given by (6.8) for (6.21) will be


nondecaying sinusoids, just as those of (6.20) were decaying sinusoids. These
periodic solutions appear as ellipses in phase portraits, as in Figure 6.12. A time
response of such a system would show non-decaying oscillations, as opposed to
the decaying oscillations of Figure 6.11. It is perhaps interesting to note that the
principal axes of these ellipses, the directions of extremal displacement, occur
along the singular vectors of matrix A. Refer to Examples 4.13 and 5.1 for a
discussion of singular vectors and ellipse geometry.

[Figure 6.11 appears here: the decaying oscillations of $x_1(t)$ and $x_2(t)$ plotted versus time over $0 \leq t \leq 4$.]

Figure 6.11 Initial condition response for the system described by Equation (6.20).
The state variables are plotted as a function of time.

Without a decay or exponential growth in the solution, one might ask how it
is possible to determine the directions of the arrows in Figure 6.12. The simplest
way is to choose a single sample point, such as $x(t_0) = [1\;\;0]^T$, shown in the figure, and determine the direction of the tangent

$$\dot{x}(t_0) = A_8x(t_0) = \begin{bmatrix}1\\ -5\end{bmatrix}$$
also shown in the figure. Given the direction of rotation of this single trajectory,
the direction of all the others must be the same.

[Figure 6.12 appears here: closed elliptical trajectories encircling the origin in the $x_1$–$x_2$ plane, annotated with the sample point $x(t_0) = [1\;\;0]^T$ and its tangent $\dot{x}(t_0) = [1\;\;-5]^T$.]

Figure 6.12 Phase portrait for the homogeneous system given by (6.21). The vector $\dot{x}(t_0)$ is shortened to fit on the plot. The origin in this portrait is called a center.

Although we have been presenting two-dimensional phase portraits here, the


phase trajectory can be drawn in three dimensions, and the concept extends to
arbitrary dimensions. Although our ability to draw in these higher dimensions
diminishes, the intuitive geometric picture they provide does not. Our discussion
of solutions using phase trajectories will continue in the next section and we will
refer to them again in the next chapter on stability.

6.3 System Modes and Decompositions


In this section we will again consider LTI systems that are solved using Equation
(6.6). Of course, for any t , vector x(t ) is an element in the (linear) state space,
a vector space with all the attendant rules and properties. What we demonstrate
in this section is the utility of considering the basis of eigenvectors for the state

space when computing x(t ) . The decomposition of x(t ) in this way is widely
used in engineering analysis, especially for large-scale problems such as the
deflection of flexible structures, vibrations, and the reduction of large state space
to smaller approximations.
Let $\{e_i\}$ be the set of $n$ linearly independent eigenvectors for a system, including, if necessary, generalized eigenvectors. Because this set may be used as a basis for the state space, we can uniquely decompose $x(t)$ as

$$x(t) = \sum_{i=1}^{n}\xi_i(t)e_i \qquad (6.22)$$

where the coefficients in this expansion, $\xi_i(t)$, $i = 1, \ldots, n$, are functions of time. We can perform the same decomposition on the input terms in (6.1), allowing $B$ to vary with time:

$$B(t)u(t) = \sum_{i=1}^{n}\beta_i(t)e_i \qquad (6.23)$$

Substituting these expansions into the original state equation:

$$\sum_{i=1}^{n}\dot{\xi}_i(t)e_i = \sum_{i=1}^{n}\xi_i(t)Ae_i + \sum_{i=1}^{n}\beta_i(t)e_i$$

or, by rearranging and supposing for now that $\{e_i\}$ consists of a complete set of $n$ regular eigenvectors, we can obtain

$$0 = \sum_{i=1}^{n}\left[\dot{\xi}_i(t)e_i - \xi_i(t)Ae_i - \beta_i(t)e_i\right] = \sum_{i=1}^{n}\left[\dot{\xi}_i(t) - \lambda_i\xi_i(t) - \beta_i(t)\right]e_i \qquad (6.24)$$

where  i is the eigenvalue corresponding to regular eigenvector e i . Because the


set {e i } is linearly independent, the terms in parentheses in (6.24) must also be
identically equal to zero, or

 i ( t )   i ( t ) i   i ( t ) i  1,, n (6.25)

Equation (6.25) constitutes a set of $n$ independent, first-order LTI differential equations. In the case that $\{e_i\}$ contains some generalized eigenvectors, (6.25) would take the same form for some $i$ (those $i$ to which no generalized eigenvectors are chained). However, when the generalized eigenvector $e_{i+1}$ is chained to (regular or generalized) eigenvector $e_i$, (6.25) would change to

$$\dot{\xi}_i(t) = \lambda_i\xi_i(t) + \xi_{i+1}(t) + \beta_i(t) \qquad (6.26)$$

There will be a total of $n - \sum_j g_j$ such coupled equations, where the summation of geometric multiplicities $g_j$ is taken over all distinct eigenvalues.
The terms  i ( t )e i in (6.22) and (6.24) are known as system modes, and
(6.22) is the modal decomposition of the solution x(t ) . The modes are equivalent
to “new” state variables. They are, in fact, the same state variables we obtained
when we make the change of basis x  M , using the modal matrix M that results
in our familiar similarity transformation, i.e.,

  M 1 AM  M 1 Bu
(6.27)
y  CM  Du

This fact can be seen by expressing Equation (6.22) as

$$x(t) = \sum_{i=1}^{n}\xi_i(t)e_i = M\begin{bmatrix}\xi_1(t)\\ \vdots\\ \xi_n(t)\end{bmatrix} \qquad (6.28)$$

This is the source of the name “modal” in the term modal matrix. It is the matrix
that decomposes a system so that decoupled equations are solved to produce
system modes. Matrix $M^{-1}AM = J$ is simply the Jordan form discussed in Chapter 4.
As we did in Chapter 4, if we first convert the system to its basis of
eigenvectors, then we can exploit the simpler form of (6.27) to generate the state-
vector solution:

( t )  e J ( t  t 0 ) ( t 0 )  z
t0
t
e J ( t   ) M 1 B(  )u(  ) d,
(6.29)
 ( t 0 )  M 1 x ( t 0 )


where   [1   n ] . Equation (6.29) will be easier to solve because of the

simple structure of the Jordan form (see Section 5.4.2). After obtaining such a
solution, the solution in the original basis may be computed via

x ( t )  M ( t )

Modal Decompositions in Infinite-Dimensional Spaces


Modal decompositions are often useful in infinite-dimensional spaces, such as
those produced by models of distributed-parameter systems. For example, the
time- and space-varying displacement of flexible beams, strings, and plates is often expressed as an infinite summation of shape functions multiplied by time functions. These are sometimes called eigenfunctions, which are simply the modes as described above. For example, the displacement $u$ of a point on a beam might then be expressed as a summation of products of time- and space-dependent functions:

$$u(x,t) = \sum_{i=1}^{\infty}X_i(x)T_i(t) \qquad (6.30)$$

The infinite series arises because the describing equation for beam displacement
is a partial differential equation, rather than an ordinary differential equation such
as the ones with which we have been working. When ordered in decreasing
magnitude of X i ( x ) , it is common practice to truncate the series after the few
most significant terms, thereby approximating an infinite dimensional system
with a finite dimensional one that can be analyzed via matrix arithmetic.
For example, the equation that describes the displacement $u(x,t)$ of a stretched string is

$$\frac{\partial^2 u}{\partial x^2} - \frac{\rho}{T}\,\frac{\partial^2 u}{\partial t^2} = 0 \qquad (6.31)$$

where $\rho$ is the density of the string and $T$ is its tension. If the string is of length $\ell$, is initially displaced at a location $x = c$ to a height $u(c,0) = h$, and is subsequently released, it can be shown [6] that

$$u(x,t) = k\sum_{n=1}^{\infty}\frac{1}{n^2}\sin\!\left(\frac{n\pi c}{\ell}\right)\sin\!\left(\frac{n\pi x}{\ell}\right)\cos\!\left(\frac{n\pi}{\ell}\sqrt{\frac{T}{\rho}}\,t\right) \qquad (6.32)$$

where $k$ is a constant that depends on the system parameters $\ell$, $c$, and $h$. Although there are infinitely many modes [or "shapes," i.e., $\sin(n\pi x/\ell)$] that contribute to this solution, their magnitudes decrease as $1/n^2$, so after some finite number, a suitable approximation can be obtained by truncating the series.
Because we are not able to delve into the solution techniques for (6.31), we
cannot further discuss infinite-dimensional systems. However even in finite-
dimensional linear systems, this kind of modal expansion helps us understand
solutions by considering them one component at a time.
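To illustrate the truncation idea numerically, the sketch below sums a finite number of terms of (6.32). It assumes NumPy, uses illustrative parameter values, and takes the standard closed form $k = 2h\ell^2/\left[\pi^2c(\ell - c)\right]$ for the plucked-string constant, which is stated here as an assumption rather than derived in the text.

```python
import numpy as np

def string_displacement(x, t, n_modes, ell=1.0, c=0.3, h=0.05, T=1.0, rho=1.0):
    """Truncated modal sum (6.32) for the plucked string (illustrative parameters)."""
    k = 2.0 * h * ell**2 / (np.pi**2 * c * (ell - c))   # assumed closed form for k
    n = np.arange(1, n_modes + 1)
    terms = (1.0 / n**2) * np.sin(n * np.pi * c / ell) \
            * np.sin(n * np.pi * x / ell) \
            * np.cos(n * np.pi * np.sqrt(T / rho) * t / ell)
    return k * np.sum(terms)

# The 1/n^2 decay means a handful of modes already gives a good approximation;
# at t = 0 and x = c the exact displacement is h = 0.05.
for n_modes in (1, 3, 10, 100):
    print(n_modes, string_displacement(x=0.3, t=0.0, n_modes=n_modes))
```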

6.3.1 A Phase Portrait Revisited


In light of our knowledge of system modes, we now reconsider the information
provided by the phase portraits and provide further clues to their qualitative
construction. Consider again the first example system, given by

x  A1 x 
LM1 0OP x (6.33)
N 0 4Q
with eigenvalues and eigenvectors computed in (6.10). An enlarged view of the
upper-right quadrant of the phase portrait for that system is given again in Figure
6.13, where the trajectories on the eigenvectors are drawn with arrows pointing
in the direction of positive time. When using phase portraits as a qualitative solution tool, a natural question arises: "How do we know which direction the trajectories will take, i.e., how do we know they appear concave up as in Figure 6.13, rather than facing toward the sides? Could not trajectories approach the origin from the vertical direction, instead of seeming to flatten out and approach horizontally, as the graphs show?"
To resolve these questions and provide a tool for guessing the trajectories' shapes, we show an initial condition, $x(t)$, and the tangent to the trajectory at that point, $\dot{x}(t) = Ax(t)$. This tangent vector is decomposed along the two eigenvectors, $e_1$ and $e_2$. Analytically, we know from the preceding section that

$$\dot{x}(t) = Ax(t) = A\sum_{i=1}^{2}\xi_i(t)e_i = \sum_{i=1}^{2}\xi_i(t)Ae_i = \sum_{i=1}^{2}\lambda_i\xi_i(t)e_i = -e^{-t}e_1 - 4e^{-4t}e_2 \qquad (6.34)$$

where the last line easily results from our A-matrix being already diagonal. From this expansion, the lengths of the components along each of the eigenvector directions are clear. For the time selected, the component along (negative) $e_2$ is larger than the component along (negative) $e_1$. The trajectory at this time, then, is evolving more along $e_2$ than $e_1$.
Taking a macroscopic view of the plot, we can see that the eigenvalue $\lambda_2 = -4$ is larger in magnitude ("faster" in time-constant terms) than the eigenvalue $\lambda_1 = -1$. The result is that, until it decays to negligible proportions relative to the first mode (the component along $e_1$), the second mode (the component of motion along $e_2$) is larger. When sketching the plot, then, we naturally expect the vertical component of motion to dominate for small time, and the reverse for large time. Hence, we expect the curvature of the graph to have the concave up (and down) shape observed.

x (t )  2  2 (t )e 2
x2
 4 e  4 t e 2

e2

x (t )  Ax (t )
11 (t )e1  1  e  t e1

e1 x1

Figure 6.13 Detail of phase portrait for the homogeneous system given by (6.9).

Example 6.2: Sketching a Phase Portrait Using Qualitative Analysis


A two-dimensional LTI homogeneous system is known to have eigenvalues $\Lambda = \{-10, -2\}$ and the corresponding eigenvectors

$$\{e_1, e_2\} = \left\{\begin{bmatrix}3\\1\end{bmatrix}, \begin{bmatrix}1\\3\end{bmatrix}\right\} \qquad (6.35)$$

Sketch the phase portrait for the system.

Solution:
The first step in determining the nature of the trajectories is to sketch the invariant
subspaces, i.e., the straight lines that lie along the given eigenvectors. Noticing
that both eigenvalues are negative real numbers, we expect all trajectories to
approach the origin from any location. Because the eigenvalue $\lambda_1$ is faster than $\lambda_2$, we expect the trajectories for small time to experience more change in the direction of $e_1$ than in the direction of $e_2$. After a long time, the first mode will have decayed, and the trajectories' direction will be dominated by the second mode, i.e., along $e_2$. We then have sufficient information to sketch the phase portrait shown in Figure 6.14.
portrait shown in Figure 6.14.

[Figure 6.14 appears here: sketched trajectories in the $x_1$–$x_2$ plane with the eigenvector directions $e_1$ and $e_2$ marked, and one trajectory decomposed into its components along $e_1$ and $e_2$.]
Figure 6.14 Phase portrait for the example system given by the eigenvectors in (6.35).

The figure also shows the decomposition of a trajectory into its modes. For the
sample time selected, it can be seen that there is more motion along the first mode
than along the second. At a later time (farther along that trajectory), there will be
more motion along the second mode.
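Such qualitative sketches can also be checked by brute force: build $A = M\Lambda M^{-1}$ from the given spectrum and eigenvectors and integrate $\dot{x} = Ax$ from a ring of initial conditions. A minimal sketch, assuming NumPy, SciPy, and matplotlib:

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# Reconstruct a system with the spectrum and eigenvectors of Example 6.2
M = np.array([[3.0, 1.0], [1.0, 3.0]])              # columns e1, e2 from (6.35)
A = M @ np.diag([-10.0, -2.0]) @ np.linalg.inv(M)   # A = M Lambda M^{-1}

# Integrate xdot = A x from initial conditions spread around the edge of the plot
for th in np.linspace(0, 2 * np.pi, 16, endpoint=False):
    x0 = 4.0 * np.array([np.cos(th), np.sin(th)])
    sol = solve_ivp(lambda t, x: A @ x, (0.0, 3.0), x0, max_step=0.01)
    plt.plot(sol.y[0], sol.y[1], lw=0.8)

# Overlay the invariant subspaces (the eigenvector directions)
for e in M.T:
    plt.plot([-e[0], e[0]], [-e[1], e[1]], 'k--')
plt.xlim(-4, 4); plt.ylim(-4, 4); plt.xlabel('$x_1$'); plt.ylabel('$x_2$')
plt.show()
```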

6.4 The Time-Varying Case


The matrix exponential integrating factor method, used in Equation (6.3), will not
work when the A-matrix is a function of time. When we consider the linear time-
varying system

x ( t )  A( t ) x ( t )  B( t )u( t ) (6.36)

we must either solve the system using a different technique or else find a different
integrating factor. In the next two sections, special matrices are derived, called
the fundamental solution matrix and the state transition matrix. The fundamental
solution matrix performs the function of the integrating factor, while the overall
solution is in terms of the state transition matrix. As we will see, though, the state
transition matrix is difficult to compute.

6.4.1 State Fundamental Solution Matrix


Consider a homogeneous version of (6.36):

x ( t )  A( t ) x ( t ) (6.37)

We know from Chapter 2 that the set of solutions to (6.37) constitutes a linear vector space. It can be observed that this space is $n$-dimensional by considering a basis set of $n$ linearly independent initial condition vectors $\{x_{i0}\}$, $i = 1, \ldots, n$. If we restrict our attention to matrices $A(t)$ that are smooth, then it is possible to guarantee that the solutions to (6.37) given an arbitrary initial condition, $x(t_0)$, are unique. Then the set $\{x_i(t)\}$, where $\dot{x}_i(t) = A(t)x_i(t)$ and $x_i(t_0) = x_{i0}$, defines $n$ solutions for (6.37) that are linearly independent on $[t_0, t]$. Any additional solution $\phi(t)$ must be a linear combination of the $x_i(t)$, because $\phi(t_0) = \sum_{i=1}^{n}\alpha_ix_{i0}$ implies that $\phi(t) = \sum_{i=1}^{n}\alpha_ix_i(t)$ for some set of scalars $\alpha_i$, $i = 1, \ldots, n$.
Organizing these linearly independent solutions, we construct a matrix $X(t)$ as follows:

$$X(t) = \begin{bmatrix}x_1(t) & x_2(t) & \cdots & x_n(t)\end{bmatrix}$$

Clearly, X ( t )  A( t ) X ( t ) . Such a matrix X ( t ) is known as a fundamental


solution matrix.
Using the matrix identity revealed in Problem 3.2:

dX 1 ( t ) dX ( t ) 1
  X 1 ( t ) X (t )
dt dt

 
  X 1 ( t ) A( t ) X ( t ) X 1 ( t )

I
  X 1 ( t ) A( t )

From this result, we can now show that the matrix $X^{-1}(t)$ qualifies as a valid integrating factor for the state equations in (6.37). We may use this integrating factor in the nonhomogeneous equations in (6.36):

$$X^{-1}(t)\dot{x}(t) = X^{-1}(t)A(t)x(t) + X^{-1}(t)B(t)u(t)$$
$$X^{-1}(t)\dot{x}(t) - X^{-1}(t)A(t)x(t) = X^{-1}(t)B(t)u(t)$$
$$X^{-1}(t)\dot{x}(t) + \frac{dX^{-1}(t)}{dt}x(t) = X^{-1}(t)B(t)u(t)$$
$$\frac{d}{dt}\left[X^{-1}(t)x(t)\right] = X^{-1}(t)B(t)u(t)$$

Now integrating both sides of the bottom line above yields

$$X^{-1}(t)x(t) - X^{-1}(t_0)x(t_0) = \int_{t_0}^{t}X^{-1}(\tau)B(\tau)u(\tau)\,d\tau$$

or

$$x(t) = X(t)X^{-1}(t_0)x(t_0) + \int_{t_0}^{t}X(t)X^{-1}(\tau)B(\tau)u(\tau)\,d\tau \qquad (6.38)$$

This appears to be a general solution to the time-varying state equations, but,


unfortunately, computing this solution is not so easy. After all, if we could easily find the $n$ linearly independent solutions necessary for the construction of $X(t)$, we would not have needed to derive (6.38). In most cases, we have no general method for computing the fundamental solution matrix $X(t)$. The ease of computing the components $x_i(t)$ depends a great deal on the exact form of the time functions within $A(t)$. Like nonlinear systems, such time-varying systems tend to be solved individually, as circumstances allow. One can see from any ordinary differential equations text that certain classes of time-varying equations can be solved, e.g., Bessel, Legendre, or Hermite equations, but certainly not all of them can be.

6.4.2 The State-Transition Matrix


If $X(t)$ is not easily calculated, then of what useful purpose is Equation (6.38)? First, we will use its existence to prove the existence of a different matrix. Note that in (6.38), the fundamental solution matrix only appears as a product of the form $X(t)X^{-1}(\tau)$. This product

$$\Phi(t, \tau) = X(t)X^{-1}(\tau) \qquad (6.39)$$

is known as the state-transition matrix, and it lends insight into the solutions of time-varying systems and time-invariant systems. With this notation, (6.38) becomes

$$x(t) = \Phi(t, t_0)x(t_0) + \int_{t_0}^{t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau \qquad (6.40)$$

Computing the State-Transition Matrix


While $X(t)$ can indeed be difficult to find, there does exist an iterative method for the computation of $\Phi(t, \tau)$, known as the Peano-Baker integral series, which we provide here without proof [7]:

$$\Phi(t, \tau) = I + \int_{\tau}^{t}A(\sigma_1)\,d\sigma_1 + \int_{\tau}^{t}A(\sigma_1)\int_{\tau}^{\sigma_1}A(\sigma_2)\,d\sigma_2\,d\sigma_1 + \cdots \qquad (6.41)$$

A difficulty with this technique is the necessary repeated integrations. In addition,


it is unlikely that the series will converge to a closed form. Often, this method is

convenient when the matrix $A(t)$ is inherently nilpotent. A nilpotent matrix is one such that $A^p = 0$ for all $p \geq q$, for some finite $q$. An example of a nilpotent matrix is a triangular matrix with zeros on the diagonal. As we saw in Section 5.4.2, repeated multiplication of such matrices by themselves eventually results in the zero matrix, not because of the numbers in the matrix, but because of its inherent physical structure.
A second method applies only in the special circumstance that $A(t)A(\tau) = A(\tau)A(t)$. If this is the case, then

$$\Phi(t, \tau) = \exp\!\left[\int_{\tau}^{t}A(\sigma)\,d\sigma\right] \qquad (6.42)$$

Although it is rare for this condition to be satisfied for general $A(t)$, it does hold when $A$ is constant or when $A(t)$ is diagonal, in which cases there are more direct methods for finding the solution of the system. One should be warned that the expression in (6.42) is not equivalent to $e^{At}$, and must often be computed with a Taylor series expansion of the matrix exponential.

Example 6.3: State-Transition Matrix Using Series Expansion


For a system with the A-matrix given in Example 6.1, find the state-transition matrix $\Phi(t, \tau)$ using the Peano-Baker series.

Solution:
It should first be noted that the state-transition matrix for Example 6.1 is apparent from the solution. Computing it explicitly,

$$\Phi(t, \tau) = I + \int_{\tau}^{t}\begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}d\sigma_1 + \int_{\tau}^{t}\begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}\int_{\tau}^{\sigma_1}\begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}d\sigma_2\,d\sigma_1 + \cdots$$
$$= \begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix} + \begin{bmatrix}0 & t-\tau\\ 0 & 0\end{bmatrix} + \int_{\tau}^{t}\begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}\begin{bmatrix}0 & \sigma_1-\tau\\ 0 & 0\end{bmatrix}d\sigma_1 + \cdots$$
$$= \begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix} + \begin{bmatrix}0 & t-\tau\\ 0 & 0\end{bmatrix} + 0 + \cdots = \begin{bmatrix}1 & t-\tau\\ 0 & 1\end{bmatrix}$$
Here, it is apparent that the second- and higher-order integrations will yield zero.

Note the agreement between this result and the matrix seen in the integrand of the
solution of Example 6.1.

Properties of the State-Transition Matrix


By expressing an arbitrary solution as a linear combination of fundamental solutions, $x(t) = X(t)x(t_0)$, we can perform some algebraic manipulations to find that

$$x(t) = X(t)X^{-1}(\tau)X(\tau)x(t_0) = X(t)X^{-1}(\tau)x(\tau) = \Phi(t, \tau)x(\tau) \qquad (6.43)$$

Therefore, the state-transition matrix $\Phi(t, \tau)$ is a linear operator that takes a vector $x(\tau)$ and produces a vector $x(t)$ (i.e., a solution at time $t$). Hence the name for the matrix; it performs the transition from a state at one time to a state at another. This property can be extended to successive time instants by realizing that

$$\Phi(t_2, t_1)\Phi(t_1, t_0) = X(t_2)X^{-1}(t_1)X(t_1)X^{-1}(t_0) = X(t_2)X^{-1}(t_0) = \Phi(t_2, t_0) \qquad (6.44)$$

Similarly, it is easily shown that $\Phi^{-1}(t, \tau) = \Phi(\tau, t)$.
Furthermore, by differentiating,

$$\frac{d\Phi(t, \tau)}{dt} = \frac{d\left[X(t)X^{-1}(\tau)\right]}{dt} = \frac{dX(t)}{dt}X^{-1}(\tau) = A(t)X(t)X^{-1}(\tau) = A(t)\Phi(t, \tau) \qquad (6.45)$$

The state-transition matrix and the fundamental solution matrix $X(t)$ are solutions of the same homogeneous differential equation, (6.37). This suggests a numerical method of estimating $\Phi(t, \tau)$: numerically integrate (6.45) over time $t$, subject to the initial condition $\Phi(\tau, \tau) = I$. This is not done in practice very often, mostly because the system input $u(t)$ is itself not known a priori. Therefore, such an open-loop solution of (6.40) is not useful.
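Even so, integrating (6.45) is a handy numerical check when $A(t)$ is known. The sketch below (assuming SciPy) integrates $d\Phi/dt = A(t)\Phi$ from $\Phi(\tau, \tau) = I$ and reproduces the $\Phi(t, \tau)$ found analytically in Example 6.3.

```python
import numpy as np
from scipy.integrate import solve_ivp

def state_transition(A_of_t, t, tau, n):
    """Numerically integrate d(Phi)/dt = A(t) Phi, Phi(tau, tau) = I, as in (6.45)."""
    def rhs(s, phi_flat):
        return (A_of_t(s) @ phi_flat.reshape(n, n)).ravel()
    sol = solve_ivp(rhs, (tau, t), np.eye(n).ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(n, n)

# Sanity check with the nilpotent A from Example 6.3: Phi(t, tau) = [[1, t - tau], [0, 1]]
A = lambda s: np.array([[0.0, 1.0], [0.0, 0.0]])
print(state_transition(A, t=2.0, tau=0.5, n=2))   # approx [[1, 1.5], [0, 1]]
```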

When the system is time-invariant, we know from a comparison of (6.6) and (6.38) that $X(t) = e^{At}$, yielding

$$\Phi(t, t_0) = e^{At}e^{-At_0} = e^{A(t-t_0)}$$

Remember from the definition that for a time-invariant system, the solution depends only on the difference $t - t_0$. Fortunately, in this situation we have already demonstrated analytical methods for generating the state-transition matrix $e^{At}$ (see Chapter 5). Note that this situation is entirely consistent with the properties of state-transition matrices derived for the more general time-varying case above. In particular, we have already shown that in a homogeneous system,

$$x(t) = e^{A(t-t_0)}x(t_0)$$

which is the time-invariant counterpart to (6.43). If we returned to the phase-plane depictions, we could verify that for an initial condition $x(t_0)$, multiplication by the state-transition matrix $e^{A(t-t_0)}$ would produce a different vector $x(t)$ for whatever time $t$ we choose.
We should remark at this point that phase portraits for time-varying systems
are of little use. Recall that the phase portrait for a system depends on the
eigenvalues and eigenvectors of the A-matrix. When the A-matrix is time-
varying, the eigenvalues and eigenvectors will therefore also depend on time. In
phase portraits, time is a parameter on the trajectories. The plots, therefore, cannot
be constructed for a time-varying A-matrix. In fact, we will observe in the next
chapter that the eigenvalues of time-varying systems are themselves of limited
use. One cannot always even predict the stability of a system by calculating the
eigenvalues at a particular instant in time.

6.5 Solving Discrete-Time Systems


As we have mentioned, there are natural systems that are modeled in discrete-
time. In a linear systems context, this means that the inputs are applied, and the
states change, at discrete intervals of period T. Such a system may be modeled
with the equations

x( k  1)  Ad ( k ) x( k )  Bd ( k )u( k )
(6.46)
y( k )  Cd ( k ) x( k )  Dd ( k )u( k )

Here we have implicitly assumed by our notation that the discrete-time system

matrices Ad, Bd, Cd, and Dd may be functions of time k. In a time-invariant system,
they will not.
More often, however, because the universe is modeled (at least by most engineers) as evolving in continuous time, the equations in (6.46) result from the discretization of a continuous-time system such as

$$\dot{x}(t) = A(t)x(t) + B(t)u(t)$$
$$y(t) = C(t)x(t) + D(t)u(t) \qquad (6.47)$$

We can arrive at the equivalent (6.46) from (6.47) by considering a discrete-time


approximation to the state-equation solution (6.40).

6.5.1 Discretization
Consider the $k$th time instant, wherein $t = kT$. At $T$ units of time later, $t = (k+1)T$. In order to get an accurate discrete-time equivalent, we will assume that the period $T$ is much smaller than the Shannon period for the input signal $u(t)$ (see [4], pp. 79-82). If this is the case, we can approximate the input as $u(t) \approx u(kT)$, or simply $u(k)$, over the entire interval $kT \leq t < (k+1)T$. Making the input constant over the sampling interval allows us to remove the input term from the integrand in the solution for (6.40):

$$x(k+1) = \Phi(k+1, k)x(k) + \left[\int_{k}^{k+1}\Phi(k+1, \tau)B(\tau)\,d\tau\right]u(k) \qquad (6.48)$$

In this form, if $\Phi(t, \tau)$ and $B(\tau)$ are known, the solution for the state vector could be computed each time an input is applied. The matrices

$$A_d(k) \triangleq \Phi(k+1, k) \qquad\qquad B_d(k) \triangleq \int_{k}^{k+1}\Phi(k+1, \tau)B(\tau)\,d\tau$$

can be computed from knowledge of the system, and (6.46) is the result. The
output equation, being an algebraic equation, follows from a knowledge of C( t )
and D( t ) . Also note that (6.48) provides the values of the state vector at the
chosen sample instants. At intermediate times, the state variables may take on
other values, resulting in a “ripple” effect if the variables are plotted as functions
of continuous time. For discrete-time analysis, though, it is only the values at the
sample instants that we are interested in.
Note that (6.48) is not the solution to the state equations in (6.46). Rather, (6.48) is a discretization [MATLAB: c2d(sysc,Ts,method)] of the state equations in (6.47). As we saw in Examples 3.10 and 3.11, the solutions of discrete-time systems are often obtained inductively, by iterating (6.46) over a few time intervals.

6.5.2 Discrete-Time State-Transition Matrix


Consider the first equation of (6.46). If we write a few terms in the computation of $x(k)$, given an initial point $x(j)$, we get:

$$x(j+1) = A_d(j)x(j) + B_d(j)u(j)$$
$$x(j+2) = A_d(j+1)x(j+1) + B_d(j+1)u(j+1) = A_d(j+1)\left[A_d(j)x(j) + B_d(j)u(j)\right] + B_d(j+1)u(j+1)$$
$$= A_d(j+1)A_d(j)x(j) + A_d(j+1)B_d(j)u(j) + B_d(j+1)u(j+1)$$
$$x(j+3) = A_d(j+2)A_d(j+1)A_d(j)x(j) + A_d(j+2)A_d(j+1)B_d(j)u(j) + A_d(j+2)B_d(j+1)u(j+1) + B_d(j+2)u(j+2)$$

until, by induction,

$$x(k) = \left(\prod_{i=j}^{k-1}A_d(i)\right)x(j) + \sum_{i=j+1}^{k}\left(\prod_{q=i}^{k-1}A_d(q)\right)B_d(i-1)u(i-1) \qquad (6.49)$$

where it is necessary to define

$$\prod_{q=k}^{k-1}A_d(q) \triangleq I$$
Consider the situation in which the system is homogeneous, i.e., $u(k) = 0$ for all $k$. Then we would have

$$x(k) = \left(\prod_{i=j}^{k-1}A_d(i)\right)x(j) \qquad (6.50)$$

This formula implicitly defines the state-transition matrix for discrete-time systems:

$$\Phi(k, j) = \prod_{i=j}^{k-1}A_d(i) \qquad (6.51)$$

As in the continuous-time case, Equation (6.50) makes it apparent that the state-transition matrix $\Phi(k, j)$ may be interpreted as the linear operator that takes a state vector at time $j$ and returns the state vector at time $k$. This matrix is defined only for $k \geq j$ and shares most of the properties of the continuous-time state-transition matrix, except for invertibility. From the structure of (6.51), it is clear that if any $A_d(i)$ is not invertible, which is entirely possible, then $\Phi(k, j)$ itself will not be invertible.
Using the notation of the discrete-time state-transition matrix for time-varying systems, i.e., Equation (6.51) above, the expression (6.49) for the general solution of discrete-time systems becomes

$$x(k) = \Phi(k, j)x(j) + \sum_{i=j+1}^{k}\Phi(k, i)B_d(i-1)u(i-1) \qquad (6.52)$$

where the initial condition is taken at step $j$.
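In practice, (6.52) is rarely evaluated by forming the matrix products explicitly; one simply iterates (6.46) forward, which is what the following sketch does (assuming NumPy; the time-varying $A_d(k)$ used here is purely hypothetical).

```python
import numpy as np

def discrete_solution(Ad, Bd, x_j, u, j, k):
    """Iterate x(i+1) = Ad(i) x(i) + Bd(i) u(i) from step j to step k, as in (6.46)/(6.52).

    Ad, Bd : callables returning the matrices at step i (time-varying allowed)
    u      : callable returning the input vector at step i
    """
    x = np.asarray(x_j, dtype=float)
    for i in range(j, k):
        x = Ad(i) @ x + Bd(i) @ np.atleast_1d(u(i))
    return x

# Example with a hypothetical time-varying Ad(k); Bd and u held constant
Ad = lambda i: np.array([[0.9, 0.1 / (i + 1)], [0.0, 0.8]])
Bd = lambda i: np.array([[0.1], [0.1]])
print(discrete_solution(Ad, Bd, x_j=[1.0, 0.0], u=lambda i: [1.0], j=0, k=20))
```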

6.5.3 Time-Invariant Discrete-Time Systems


Suppose now that in the discrete-time system of (6.46), the matrices $A_d$ and $B_d$ were independent of time $k$. Then (6.48) would become

$$x\big((k+1)T\big) = e^{A\left[(k+1)T - kT\right]}x(kT) + \left[\int_{kT}^{(k+1)T}e^{A\left[(k+1)T - \tau\right]}B(\tau)\,d\tau\right]u(kT)$$

or

$$x(k+1) = e^{AT}x(k) + \left[\int_{kT}^{(k+1)T}e^{A\left[(k+1)T - \tau\right]}B(\tau)\,d\tau\right]u(k) \qquad (6.53)$$

From this, we have the definitions

$$A_d \triangleq e^{AT} \qquad\qquad B_d \triangleq \int_{kT}^{(k+1)T}e^{A\left[(k+1)T - \tau\right]}B(\tau)\,d\tau$$

again giving a formula for obtaining (6.46). Once again, matrices Ad and Bd can
be computed “off-line,” i.e., without knowledge of the input. Note that matrix Ad
is independent of k.
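Numerically, both matrices can be obtained from a single matrix exponential through the well-known augmented-matrix identity $\exp\!\left(\begin{bmatrix}A & B\\ 0 & 0\end{bmatrix}T\right) = \begin{bmatrix}A_d & B_d\\ 0 & I\end{bmatrix}$, which packages $e^{AT}$ and the integral together. The sketch below assumes NumPy/SciPy and an arbitrary illustrative $(A, B)$ pair; scipy.signal.cont2discrete (the SciPy counterpart of MATLAB's c2d) should give the same matrices.

```python
import numpy as np
from scipy.linalg import expm
from scipy.signal import cont2discrete

# Illustrative continuous-time pair (A, B) and sample period
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
T = 0.1

# Zero-order-hold discretization via the augmented matrix [[A, B], [0, 0]]
n, m = A.shape[0], B.shape[1]
aug = np.block([[A, B], [np.zeros((m, n)), np.zeros((m, m))]])
E = expm(aug * T)
Ad, Bd = E[:n, :n], E[:n, n:]

# scipy's ZOH conversion gives the same result
Ad2, Bd2, *_ = cont2discrete((A, B, np.eye(n), np.zeros((n, m))), T, method='zoh')
print(np.allclose(Ad, Ad2), np.allclose(Bd, Bd2))   # True True
```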
Furthermore, for a time-invariant system, (6.49) will become [MATLAB: lsim(sys,u,t)]

$$x(k) = A_d^{\,k-j}x(j) + \sum_{i=j+1}^{k}A_d^{\,k-i}B_d\,u(i-1) \qquad (6.54)$$

yielding the time-invariant, discrete-time state-transition matrix

$$\Phi(k, j) = \Phi(k-j) = A_d^{\,k-j}$$

We should point out here that the discussion of modes and modal decompositions as presented in Section 6.3 applies here as well. Because the eigenvalues and eigenvectors of a matrix $A$ or $A_d$ are calculated in the same manner regardless of whether the system is discrete-time or continuous-time, the modal matrix $M$ functions in the same way for both systems. If we use it to define a new state vector $x(k) = M\xi(k)$, then

$$\xi(k+1) = M^{-1}A_dM\xi(k) + M^{-1}B_du(k) = \bar{A}_d\xi(k) + \bar{B}_du(k)$$
$$y(k) = C_dM\xi(k) + D_du(k) = \bar{C}_d\xi(k) + \bar{D}_du(k) \qquad (6.55)$$

The transformed matrix $\bar{A}_d$ will again be in its Jordan form.

Example 6.4: Discretization of a System


For the $A$ and $B$ matrices given below, discretize the system to get a discrete-time state variable description using the formula (6.53). Assume the system is sampled at $T = 0.1$ s.

$$A = \begin{bmatrix}-3 & 1\\ 0 & -2\end{bmatrix} \qquad B = \begin{bmatrix}1\\ 1\end{bmatrix}$$
Solution
The A-matrix was also used in Example 5.4, so we know from that example that

$$e^{At} = \begin{bmatrix}e^{-3t} & -e^{-3t} + e^{-2t}\\ 0 & e^{-2t}\end{bmatrix}$$
Therefore,

$$A_d = e^{AT} = \begin{bmatrix}e^{-3T} & -e^{-3T} + e^{-2T}\\ 0 & e^{-2T}\end{bmatrix} = \begin{bmatrix}0.741 & 0.0779\\ 0 & 0.819\end{bmatrix}$$

As for $B_d$, we have

$$B_d = \int_{kT}^{(k+1)T}e^{A\left[(k+1)T - \tau\right]}B\,d\tau = \int_{kT}^{(k+1)T}\begin{bmatrix}e^{-3\left[(k+1)T - \tau\right]} & -e^{-3\left[(k+1)T - \tau\right]} + e^{-2\left[(k+1)T - \tau\right]}\\ 0 & e^{-2\left[(k+1)T - \tau\right]}\end{bmatrix}\begin{bmatrix}1\\ 1\end{bmatrix}d\tau$$
$$= \int_{kT}^{(k+1)T}\begin{bmatrix}e^{-2\left[(k+1)T - \tau\right]}\\ e^{-2\left[(k+1)T - \tau\right]}\end{bmatrix}d\tau = \frac{1}{2}\begin{bmatrix}e^{-2\left[(k+1)T - \tau\right]}\\ e^{-2\left[(k+1)T - \tau\right]}\end{bmatrix}\Bigg|_{kT}^{(k+1)T} = \frac{1}{2}\begin{bmatrix}1 - e^{-2T}\\ 1 - e^{-2T}\end{bmatrix} = \begin{bmatrix}0.0906\\ 0.0906\end{bmatrix}$$
Therefore, the discrete-time approximation of the system, sampled at 10 Hz, is

$$x(k+1) = \begin{bmatrix}0.741 & 0.0779\\ 0 & 0.819\end{bmatrix}x(k) + \begin{bmatrix}0.0906\\ 0.0906\end{bmatrix}u(k)$$
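These numbers are easy to double-check with a matrix-exponential routine; since $A$ is invertible, the integral defining $B_d$ collapses to $A^{-1}\left(e^{AT} - I\right)B$. A short verification sketch, assuming NumPy/SciPy:

```python
import numpy as np
from scipy.linalg import expm

# Matrices and sample period from Example 6.4
A = np.array([[-3.0, 1.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
T = 0.1

Ad = expm(A * T)
# Since A is invertible, the integral in (6.53) has the closed form A^{-1}(e^{AT} - I)B
Bd = np.linalg.solve(A, Ad - np.eye(2)) @ B

print(np.round(Ad, 4))   # [[0.7408, 0.0779], [0., 0.8187]]
print(np.round(Bd, 4))   # [[0.0906], [0.0906]]
```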

6.6 Summary
In this chapter, we have investigated the analytical solution of state space
equations. In doing so, we have demonstrated one of the important features of
state variable analysis, namely, that the first-order analysis methods of scalar
differential equations can help us solve (first-order) vector differential equations.
This is first apparent in the integrating factor technique used to solve the LTI
systems. Because any linear system can be represented in state space, these
solution methods are presented to give the reader a set of tools for solving state
space equations that are similar to those used for solving scalar differential
equations.
However, as we have stated, the solution methods developed here are not
necessarily the most useful tools for control system design and analysis. This is
because, as we have shown, we must know the applied input to find the solution
of a system. In control systems design, the input signal is the goal and is not
generally available a priori. Nevertheless, the solution technique has provided
insight into the evolution of state variables, and provided a technique by which
discrete-time systems can be generated as approximations of sampled continuous-
time systems.
Other highlights of the chapter are:

• The solution of an LTI system has the same form, i.e., a convolution
integral, as the solution of scalar systems. Only the computation of the
matrix exponential e At complicates the matter.
• For homogeneous systems, phase portraits can be useful visualization
tools. It is recommended that the reader practice generating phase
portraits for systems with different dynamic characteristics. As we have
mentioned, the evolution of a system’s solution trajectories is often
understood using this visual imagery, even in higher dimensions. In
nonlinear problems as well, phase portraits are indispensable tools and
can be used to predict the existence of limit cycles (nonlinear
oscillations), switching times, final values, and stability characteristics
that are very often difficult to determine analytically.
• We have introduced the notion of a system mode. This is another state
space tool that is often generalized into higher dimensions. Heat
conduction problems, bending plates and beams, and many
electromagnetic phenomena are described by partial differential
equations that result in infinite-dimensional state spaces. By describing
the solution of such systems as sums of modes, we can retain only the
most dominant and significant components of the overall solutions.
Others, because they are less significant, might be simply ignored. The
modal decomposition of a system is also a convenient method for
generating a qualitative picture of a phase portrait.
• For time-varying systems, state variable solutions are quite difficult to
obtain. The state-transition matrix, while symbolically providing a
simple integral solution, can be as elusive as the solution of a nonlinear
equation. Only limited techniques are available for its construction, such
as the Peano-Baker series, which itself can be difficult to compute, especially in closed form.
• In discrete-time, we have shown that the state variable solution and the
state transition matrix are similar in appearance to their continuous-time
counterparts.

Part of the discussion of the solutions generated in this chapter has hinted at
the stability properties of systems. For example, in the generation of the phase
portraits, we mentioned the tendency of a system mode to approach or diverge
from the origin. We sometimes find that such modes are unstable, for obvious
reasons. However, such a notion of stability ignores the fact that phase portraits,
by their definition, have not accounted for the presence of the input signal. Indeed,
there are a number of different perspectives on the stability of a system, and these
will be discussed in the next chapter.

6.7 Problems
6.1 Solve the state variable system given in Problem 1.9 for $x(t)$, $t \geq 0$, given that $u(t) = e^{-3t}$.

6.2 Draw phase portraits for the systems with the following A-matrices:

a)
LM8 6OP b)
LM8 6OP c) LM 0 4OP
N 0 2Q N 0 2Q N4 0Q
d)
LM4 4OP e) M
L1 0OP f) LM1 0OP
N 4 4Q N 0 0Q N1 0Q
LM0 0OP LM1 1 0 OP
g) h) M 0 1 0 P
N0 0Q MN 0 0 1PQ

6.3 Determine the state-transition matrix $\Phi(t, \tau)$ for the following A-matrices:

LM0 1 t2 OP LM 0 0 OP LMe2 t OP
a) MM0 0 tP b)
N 2t 2 t Q
c)
MN 0
0
2t PQ
N0 0 0 PQ

6.4 Show how systems of the form

$$\dot{x}(t) = \begin{bmatrix}1 & 0\\ 1 & f(t)\end{bmatrix}x(t)$$

can be considered as decoupled scalar equations, the second of which has the solution of the first as an input term. Use this fact to determine an expression for $\Phi(t, \tau)$ for the second-order system.

6.5 Show that if $\Phi(t, t_0)$ is the state-transition matrix for $A$, then $A(t) = \left.\dfrac{\partial\Phi(t, t_0)}{\partial t}\right|_{t_0 = t}$. Derive an analogous result for the discrete-time state-transition matrix, i.e., given $\Phi(k, j)$, determine $A_d(k)$.

6.6 Show for a time-invariant system that $e^{At} = \mathcal{L}^{-1}\left\{(sI - A)^{-1}\right\}$, where $\mathcal{L}$ is the Laplace transform operator.

6.7 Suppose $\Phi_A(t, \tau)$ is the state-transition matrix for $A(t)$. Define a (nonsingular) change of variables by $x(t) = M\xi(t)$ such that $\dot{\xi}(t) = M^{-1}AM\xi(t) = \bar{A}\xi(t)$. Determine an expression for the new state-transition matrix $\Phi_{\bar{A}}(t, \tau)$ in terms of $\Phi_A(t, \tau)$. Does the result depend on whether $M$ is time-varying?

6.8 For every system $\dot{x}(t) = Ax(t)$ there is a defined system $\dot{p}(t) = -A^{T}p(t)$, which is called the adjoint system. Show that if $\Phi_A(t, \tau)$ is the state-transition matrix for the original system, then the state-transition matrix for the adjoint system is $\Phi_A^{T}(\tau, t)$.

6.9 Use Equation (6.7) to find an expression for the impulse response matrix, i.e., the solution to an LTI system whose input is the Dirac delta function $\delta(t)$.

6.10 Find the state-transition matrix $\Phi(k, 0)$ for the following system:

x( k  1) 
LM35. 2 OP x( k )
N6 35
. Q
6.11 For the following system, find an expression for $y(n)$ if the input is $u(k) = 1$, $k = 0, 1, \ldots, \infty$:

x ( k  1) 
LM 0 1OP x( k )  LM1OPu( k )
N0.5 1Q N1Q
y( k )  1 0 x( k )

6.12 Discretize the system below using a sample time of $T = 1$ s.

x ( t ) 
LM2 0OP x(t )  LM1OPu(t )
N 1 0Q N0Q
y ( t )  0 1 x ( t )  2u( t )

6.8 References and Further Reading


Finding solutions to the time-invariant linear state space system is fairly easy and
can be found in any linear systems or control systems text. For time-varying
systems, which require the state-transition matrix, finding solutions is more
difficult. Good discussions of such systems, including the computation of the
state-transition matrix, may be found in [2], [5], and [7]. References [2] and [7]
offer some particularly interesting problems. For the discrete-time case, see [4]
and [7].
The stretched-string problem and many other modal system problems, such
as deformable beams and plates, electromagnetic, and thermal gradient systems
can be found in [6], which contains a large number of worked problems. Other
infinite dimensional systems are discussed in [3].
More details on the construction and interpretation of phase portraits,
particularly for nonlinear systems, can be found in [1].

[1] Atherton, Derek P., Nonlinear Control Engineering, Van Nostrand Reinhold,
1982.
[2] Brockett, Roger W., Finite Dimensional Linear Systems, John Wiley & Sons,
1970.
[3] Coddington, Earl A., and Norman Levinson, Theory of Ordinary Differential
Equations, McGraw-Hill, 1955.
[4] Franklin, Gene, and J. David Powell, Digital Control of Dynamic Systems,
Addison-Wesley, 1981.
[5] Kailath, Thomas, Linear Systems, Prentice-Hall, 1980.
[6] Lebedev, N. N., I. P. Skalskaya, and Y. S. Uflyand, Worked Problems in Applied
Mathematics, Dover, 1965.
[7] Rugh, Wilson, Linear System Theory, 2nd edition, Prentice-Hall, 1996.
