
ELEC-E8116 Model-based control systems

Exercises with solutions 2

Problem 1

Prove that the solution of the state equation $\dot{x}(t) = A x(t) + B u(t)$, $x(t_0) = x_0$, is

$$x(t) = e^{A(t-t_0)} x_0 + \int_{t_0}^{t} e^{A(t-\tau)} B u(\tau)\, d\tau .$$

Solution 1

First some background information, which is needed later. The matrix exponential is defined as the series

$$e^{At} = I + At + \frac{1}{2!}(At)^2 + \frac{1}{3!}(At)^3 + \cdots$$

which always converges. There exist at least nineteen ways to determine this function analytically (see Moler, C., Van Loan, C.: "Nineteen dubious ways to compute the exponential of a matrix", SIAM Review, Vol. 20, pp. 801-836, 1978). One well-known way is

$$e^{At} = \mathcal{L}^{-1}\left[ (sI - A)^{-1} \right]$$

in which the inverse Laplace transformation is sometimes difficult. In practice, if one method turns out to be difficult in some example case, the other methods are hardly easier.
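
As a quick numerical illustration (not part of the original solution), the truncated series can be compared against a library routine; a minimal Python sketch, assuming NumPy and SciPy are available and using an arbitrary example matrix:

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary example matrix (an assumption for illustration only)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
t = 0.5

def expm_series(A, t, terms=25):
    """Truncated series I + At + (At)^2/2! + ... up to the given number of terms."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ (A * t) / k        # accumulates (At)^k / k!
        result = result + term
    return result

print(np.allclose(expm_series(A, t), expm(A * t)))   # True
```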

The differentiation rule follows easily:

$$\frac{d}{dt} e^{At} = A + A^2 t + \frac{1}{2!} A^3 t^2 + \cdots = A\left( I + At + \frac{1}{2!}(At)^2 + \cdots \right) = A e^{At}$$

which makes sense, of course (but in matrix calculus you can never take for granted that the familiar rules for scalar systems inevitably hold).

Recall the rule for differentiating an integral with variable limits (the Leibniz rule):

$$\frac{d}{dt} \int_{u(t)}^{v(t)} f(x,t)\, dx = -f(u(t),t)\,\dot{u}(t) + f(v(t),t)\,\dot{v}(t) + \int_{u(t)}^{v(t)} \frac{\partial f(x,t)}{\partial t}\, dx$$
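
The rule is easy to sanity-check numerically; a small sketch (not part of the original solution), with an arbitrary example $f$, $u$, $v$ chosen purely for illustration and assuming SciPy is available:

```python
import numpy as np
from scipy.integrate import quad

# Arbitrary example (assumption): f(x,t) = sin(x*t), u(t) = t, v(t) = t^2
f  = lambda x, t: np.sin(x * t)
ft = lambda x, t: x * np.cos(x * t)          # partial derivative of f with respect to t
u, du = (lambda t: t),      (lambda t: 1.0)
v, dv = (lambda t: t ** 2), (lambda t: 2.0 * t)

def F(t):
    """Integral of f(x,t) over x from u(t) to v(t)."""
    return quad(lambda x: f(x, t), u(t), v(t))[0]

t, h = 1.3, 1e-6
lhs = (F(t + h) - F(t - h)) / (2 * h)        # central-difference estimate of dF/dt
rhs = (-f(u(t), t) * du(t) + f(v(t), t) * dv(t)
       + quad(lambda x: ft(x, t), u(t), v(t))[0])

print(np.isclose(lhs, rhs, atol=1e-5))       # True
```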

With this rule in hand, the problem becomes tractable. First, check the initial condition by substituting $t_0$ into the solution formula given in the problem:

$$x(t_0) = I x_0 + 0 = x_0$$

so the initial condition holds. Then let us check whether the candidate solution fulfills the differential equation:

$$\dot{x}(t) = A e^{A(t-t_0)} x_0 - 0 + e^{A(t-t)} B u(t) \cdot 1 + \int_{t_0}^{t} A e^{A(t-\tau)} B u(\tau)\, d\tau$$
$$= A e^{A(t-t_0)} x_0 + B u(t) + A \int_{t_0}^{t} e^{A(t-\tau)} B u(\tau)\, d\tau$$
$$= A x(t) + B u(t)$$

Yes, it does.
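
The same conclusion can be checked numerically for a particular case; a minimal sketch (not part of the original solution), with an arbitrary example system and input assumed only for illustration:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, quad_vec

# Arbitrary example system and input (assumptions for illustration only)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])
u = lambda t: np.array([np.sin(t)])
t0, t1 = 0.0, 2.0

def x_closed(t):
    """Closed-form solution e^{A(t-t0)} x0 + int_{t0}^{t} e^{A(t-tau)} B u(tau) dtau."""
    homogeneous = expm(A * (t - t0)) @ x0
    forced, _ = quad_vec(lambda tau: expm(A * (t - tau)) @ B @ u(tau), t0, t)
    return homogeneous + forced

# Direct numerical integration of xdot = A x + B u
sol = solve_ivp(lambda t, x: A @ x + B @ u(t), (t0, t1), x0, rtol=1e-9, atol=1e-12)

print(np.allclose(sol.y[:, -1], x_closed(t1), atol=1e-6))   # True
```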
Problem 2

Derive the general solution of the discrete-time difference equation

$$x(t+1) = A x(t) + B u(t), \qquad x(t_0) = x_0 .$$

Solution 2

Discrete-time equations are solution algorithms as such. Calculate directly:

$$x(t_0+1) = A x_0 + B u(t_0)$$
$$x(t_0+2) = A x(t_0+1) + B u(t_0+1) = A^2 x_0 + A B u(t_0) + B u(t_0+1)$$
$$x(t_0+3) = A x(t_0+2) + B u(t_0+2) = A^3 x_0 + A^2 B u(t_0) + A B u(t_0+1) + B u(t_0+2)$$
$$\vdots$$
$$x(t_0+N) = A^N x_0 + A^{N-1} B u(t_0) + \cdots + A B u(t_0+N-2) + B u(t_0+N-1)$$

which gives

$$x(t) = A^{t-t_0} x_0 + A^{t-t_0-1} B u(t_0) + \cdots + A B u(t-2) + B u(t-1) = A^{t-t_0} x_0 + \sum_{i=t_0}^{t-1} A^{t-i-1} B u(i) .$$
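
For a concrete check (not part of the original solution), the closed-form expression can be compared against the recursion itself; a small Python sketch with arbitrary example data:

```python
import numpy as np

# Arbitrary example data (assumption for illustration only)
rng = np.random.default_rng(0)
A = 0.5 * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 2))
x0 = rng.standard_normal(3)
u = [rng.standard_normal(2) for _ in range(10)]      # u(t0), u(t0+1), ...

# Recursion x(t+1) = A x(t) + B u(t)
x = x0.copy()
for uk in u:
    x = A @ x + B @ uk

# Closed form x(t0+N) = A^N x0 + sum_i A^{N-i-1} B u(t0+i)
N = len(u)
x_closed = np.linalg.matrix_power(A, N) @ x0 + sum(
    np.linalg.matrix_power(A, N - i - 1) @ B @ u[i] for i in range(N)
)

print(np.allclose(x, x_closed))                      # True
```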

Problem 3

The trace of a square matrix is defined as the sum of the elements on the main diagonal. Let A and B be square matrices of equal dimensions, and let C and D be matrices such that both CD and DC are properly defined. Prove

a. tr(A+B) = tr(A)+tr(B)
b. tr(CD) = tr(DC)

Solution 3

Let the dimensions of A, B, C, and D be $n \times n$, $n \times n$, $n \times m$, and $m \times n$, respectively. Use the notation $(a_{ij})$ for the elements of the matrices. It follows that

$$\operatorname{tr}(A+B) = \sum_{i=1}^{n} (a_{ii} + b_{ii}) = \sum_{i=1}^{n} a_{ii} + \sum_{i=1}^{n} b_{ii} = \operatorname{tr}(A) + \operatorname{tr}(B)$$

which proves part a. For part b, by the definition of matrix multiplication the component $(i,j)$ of $CD$ is

$$(CD)_{ij} = \sum_{k=1}^{m} c_{ik} d_{kj} .$$

By setting $i = j$ and summing the diagonal components (the matrix trace),

$$\operatorname{tr}(CD) = \sum_{i=1}^{n} \sum_{k=1}^{m} c_{ik} d_{ki} .$$

Correspondingly,

$$(DC)_{ji} = \sum_{k=1}^{n} d_{jk} c_{ki}$$

$$\operatorname{tr}(DC) = \sum_{i=1}^{m} \sum_{k=1}^{n} d_{ik} c_{ki} = \sum_{i=1}^{n} \sum_{k=1}^{m} c_{ik} d_{ki} .$$

The two sums are the same, which proves part b.

Note: Sometimes the same result is presented in the following form. Let A and B be $n \times m$ matrices. Then it holds that

$$\operatorname{tr}(A B^{T}) = \operatorname{tr}(A^{T} B) .$$

This is easily proved from the previous result by noting that taking the transpose of a square matrix leaves the main diagonal (and hence the trace) unchanged. Therefore

$$\operatorname{tr}(A B^{T}) = \operatorname{tr}\left[ (A B^{T})^{T} \right] = \operatorname{tr}(B A^{T}) = \operatorname{tr}(A^{T} B) .$$
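
Both trace identities are easy to confirm numerically; a small sketch (not part of the original solution) with random example matrices, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 3

A = rng.standard_normal((n, n))          # square, n x n
B = rng.standard_normal((n, n))          # square, n x n
C = rng.standard_normal((n, m))          # n x m
D = rng.standard_normal((m, n))          # m x n, so CD and DC are both defined

print(np.isclose(np.trace(A + B), np.trace(A) + np.trace(B)))   # True, part a
print(np.isclose(np.trace(C @ D), np.trace(D @ C)))             # True, part b

# The transposed form of the same result for n x m matrices
E = rng.standard_normal((n, m))
F = rng.standard_normal((n, m))
print(np.isclose(np.trace(E @ F.T), np.trace(E.T @ F)))         # True
```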

Problem 4

Square matrices A and B of equal dimensions are called similar if there exists an invertible square matrix T such that

$$B = T A T^{-1} .$$

Prove that similar matrices have the same eigenvalues, the same trace, and the same determinant.

Solution 4
Let us first show an auxiliary result: for any invertible matrix T it holds that

$$\det(T^{-1}) = \frac{1}{\det(T)} .$$

It is known that for square matrices A and B of equal dimensions

$$\det(AB) = \det(A)\det(B) .$$

Applying this to the identity $T T^{-1} = I$ gives

$$\det(T T^{-1}) = \det(T)\det(T^{-1}) = 1$$

as claimed.

Then to the problem: form the characteristic polynomial of B,

$$\det(\lambda I - B) = \det(\lambda I - T A T^{-1}) = \det(\lambda T T^{-1} - T A T^{-1})$$
$$= \det\big( T(\lambda I - A)T^{-1} \big) = \det(T)\det(\lambda I - A)\det(T^{-1})$$
$$= \underbrace{\det(T)\det(T^{-1})}_{1}\,\det(\lambda I - A) = \det(\lambda I - A) .$$

The characteristic polynomials are the same, and therefore the eigenvalues are also the same.

What about the trace? Apply directly the result of the previous problem with $C = TA$ and $D = T^{-1}$. We obtain

$$\operatorname{tr}(B) = \operatorname{tr}(TA \cdot T^{-1}) = \operatorname{tr}(T^{-1} T A) = \operatorname{tr}(I A) = \operatorname{tr}(A) .$$

For the determinant,

$$\det(B) = \det(T A T^{-1}) = \det(T)\det(A)\det(T^{-1}) = \det(T)\det(A)\,\frac{1}{\det(T)} = \det(A) .$$
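
All three properties can be verified numerically for a random example; a minimal sketch (not part of the original solution), assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
T = rng.standard_normal((n, n))              # a random matrix is generically invertible
B = T @ A @ np.linalg.inv(T)                 # B is similar to A

# Same eigenvalues (compare the sorted spectra)
print(np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                  np.sort_complex(np.linalg.eigvals(B))))        # True

# Same trace and same determinant
print(np.isclose(np.trace(A), np.trace(B)))                      # True
print(np.isclose(np.linalg.det(A), np.linalg.det(B)))            # True
```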

Problem 5

Consider the MISO system

$$y(t) = \frac{p+2}{p^2 + 2p + 1}\, u_1(t) + \frac{1}{p^2 + 3p + 2}\, u_2(t) .$$

Form a realization (state-space representation).

Solution 5

Apply the "systematic" method:

$$y(t) = \frac{p+2}{(p+1)^2}\, u_1(t) + \frac{1}{(p+1)(p+2)}\, u_2(t) = \frac{(p+2)^2 u_1(t) + (p+1) u_2(t)}{(p+1)^2 (p+2)}$$

$$\left( p^3 + 4p^2 + 5p + 2 \right) y(t) = \left( p^2 + 4p + 4 \right) u_1(t) + (p+1) u_2(t)$$

$$p^3 y + 4 p^2 y + 5 p y - p^2 u_1 - 4 p u_1 - p u_2 = -2 y + 4 u_1 + u_2$$

$$p\bigg( \underbrace{p\Big( \underbrace{p\,\underbrace{y}_{x_1} - u_1 + 4 y}_{x_2} \Big) + 5 y - 4 u_1 - u_2}_{x_3} \bigg) = -2 y + 4 u_1 + u_2$$

It follows that

$$x_1 = y, \qquad x_2 = \dot{x}_1 - u_1 + 4 x_1, \qquad x_3 = \dot{x}_2 + 5 x_1 - 4 u_1 - u_2$$

and easily

$$\dot{x}_1 = -4 x_1 + x_2 + u_1$$
$$\dot{x}_2 = -5 x_1 + x_3 + 4 u_1 + u_2$$
$$\dot{x}_3 = -2 x_1 + 4 u_1 + u_2$$
$$y = x_1$$

which is in the observable canonical form:

$$\dot{x} = \begin{bmatrix} -4 & 1 & 0 \\ -5 & 0 & 1 \\ -2 & 0 & 0 \end{bmatrix} x + \begin{bmatrix} 1 & 0 \\ 4 & 1 \\ 4 & 1 \end{bmatrix} u$$

$$y = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} x$$

(see the textbook pp. 35-37 and also the lecture slides, where the same result has been obtained by utilizing the observable canonical form directly).
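
As a cross-check (not part of the original solution), the realization can be converted back to its two transfer functions numerically; a sketch assuming SciPy's signal module is available:

```python
import numpy as np
from scipy.signal import ss2tf

A = np.array([[-4.0, 1.0, 0.0],
              [-5.0, 0.0, 1.0],
              [-2.0, 0.0, 0.0]])
B = np.array([[1.0, 0.0],
              [4.0, 1.0],
              [4.0, 1.0]])
C = np.array([[1.0, 0.0, 0.0]])
D = np.zeros((1, 2))

# Channel u1 -> y: expect (s^2 + 4s + 4) / (s^3 + 4s^2 + 5s + 2)
num1, den1 = ss2tf(A, B, C, D, input=0)
# Channel u2 -> y: expect (s + 1) / (s^3 + 4s^2 + 5s + 2)
num2, den2 = ss2tf(A, B, C, D, input=1)

print(np.round(num1, 6), np.round(den1, 6))
print(np.round(num2, 6), np.round(den2, 6))
```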

Problem 6

Consider the following optimization problem. Let

$$\dot{x}(t) = u(t), \qquad x(0) = 1$$

and find an optimal control u which minimizes the criterion

$$J = \int_{0}^{\infty} \left[ x^2(t) + u^2(t) \right] dt .$$

Prove that the solution can be presented in the state feedback form $u^*(t) = -x(t)$. What is the optimal control as a function of time? What is the optimal cost?

Hint: Prove first the identity

$$\int_{0}^{T} \left[ \dot{x}^2(t) + x^2(t) \right] dt = x^2(0) - x^2(T) + \int_{0}^{T} \left[ x(t) + \dot{x}(t) \right]^2 dt .$$

Solution 6

The identity follows easily by expanding the expression in the integral on the right-hand side and by noticing that

$$\frac{d}{dt}\left[ x(t) \right]^2 = 2 x(t) \dot{x}(t) .$$

Because $\dot{x}(t) = u(t)$ (the system dynamics), the criterion can be written as

$$J = \int_{0}^{T} \left( x^2 + u^2 \right) dt = \int_{0}^{T} \left( x^2 + \dot{x}^2 \right) dt = x^2(0) - x^2(T) + \int_{0}^{T} \left( x + u \right)^2 dt$$

which attains its minimum when $u^*(t) = -x(t)$ (the final state is free, so there is no need to bother about boundary conditions).

The solution is valid for all T, and also for an infinite optimization horizon.

Substituting the optimal control into the system equation leads to

$$\dot{x}(t) = -x(t), \qquad x(0) = 1$$

which is solved easily, e.g. by using the Laplace transformation:

$$s X(s) - x(0) = -X(s)$$
$$(s+1) X(s) = 1$$
$$X(s) = \frac{1}{s+1}$$

The optimal trajectory is

$$x^*(t) = e^{-t}$$

and the optimal cost is easily determined by direct calculation:

$$J^* = \int_{0}^{\infty} \left( x^2(t) + u^2(t) \right) dt = \int_{0}^{\infty} \left( e^{-2t} + e^{-2t} \right) dt = 1 .$$
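
Since this is the scalar infinite-horizon LQR problem with A = 0, B = 1, Q = R = 1, the same answer can be cross-checked via the algebraic Riccati equation; a minimal sketch (not part of the original solution), assuming SciPy is available:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Scalar LQR data: xdot = 0*x + 1*u, J = int (1*x^2 + 1*u^2) dt
A = np.array([[0.0]])
B = np.array([[1.0]])
Q = np.array([[1.0]])
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)      # Riccati solution, expect [[1.0]]
K = np.linalg.solve(R, B.T @ P)           # feedback gain, u* = -K x, expect [[1.0]]
x0 = np.array([[1.0]])
J_opt = float(x0.T @ P @ x0)              # optimal cost, expect 1.0

print(P, K, J_opt)
```

The Riccati solution P = 1 gives the feedback u*(t) = -x(t) and the optimal cost x(0)^2 P = 1, matching the calculation above.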
