
Controllability examples

Dr. Matthew Turner

January 24, 2018

[Block diagram: linear system G with input u(t) (U(s) in the Laplace domain) and output y(t) (Y(s)).]

Consider the following linear time invariant (LTI) system



ẋ(t) = Ax(t) + Bu(t)
x(0) = x0
y(t) = Cx(t) + Du(t)
where x(t) is the state vector, u(t) the input vector and y(t) the output vector.

A system is completely controllable, roughly speaking, if the input vector u(t) influences all of the states.
The study of a system’s controllability only involves the state equation and, therefore, the matrices A
and B. Formal definitions of controllability have been given previously. The aim of this worksheet is to
illustrate controllability and the related concept of stabilisability through some examples.

Example 1: Consider a state-space system in which

    A = [ 0  1 ]        B = [ 0 ]
        [ 0  3 ]            [ 1 ]

Is this system completely controllable?


In this case the state equation becomes:

    [ ẋ1 ]   [ 0  1 ] [ x1 ]   [ 0 ]
    [ ẋ2 ] = [ 0  3 ] [ x2 ] + [ 1 ] u

This can be written long-hand as

ẋ1 = x2
ẋ2 = 3x2 + u

Because of the particularly simple form of these equations, they can be solved explicitly to obtain
    x1(t) = x1(0) + ∫₀ᵗ x2(τ) dτ

    x2(t) = e^(3t) x2(0) + ∫₀ᵗ e^(3(t−τ)) u(τ) dτ

From the above equations we can see that

• u directly influences the evolution of x2

• u does not directly influence the evolution of x1, but because x1 depends on x2, x1 therefore depends
on u. Thus the system is completely controllable.

EXERCISE: Verify the above conclusion by checking the rank of the controllability matrix C = [B AB]
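One way to carry out this check numerically is with a few lines of Python using numpy (a minimal sketch; the variable names are illustrative only):

    import numpy as np

    # Example 1 matrices
    A = np.array([[0.0, 1.0],
                  [0.0, 3.0]])
    B = np.array([[0.0],
                  [1.0]])

    # Controllability matrix C = [B  AB]; two blocks suffice since n = 2
    C = np.hstack([B, A @ B])

    print(C)                         # expect [[0, 1], [1, 3]]
    print(np.linalg.matrix_rank(C))  # expect 2: full rank, completely controllable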

Example 2: Consider a state-space system in which

    A = [ −1  0 ]        B = [ 0 ]
        [  0  3 ]            [ 1 ]

This is similar to the first example except that the first row of the A-matrix has changed a little. These
matrices lead to the state equation:

    [ ẋ1 ]   [ −1  0 ] [ x1 ]   [ 0 ]
    [ ẋ2 ] = [  0  3 ] [ x2 ] + [ 1 ] u

This can be written long-hand as

ẋ1 = −x1
ẋ2 = 3x2 + u

Because of the particularly simple form of these equations, they can be solved explicitly to obtain

x1(t) = e^(−t) x1(0)

x2(t) = e^(3t) x2(0) + ∫₀ᵗ e^(3(t−τ)) u(τ) dτ

From the above equations we can see that

• u directly influences the evolution of x2 (as with Example 1)

• The evolution of x1 is not affected by either the input u or the other state x2. Whatever the
input does, it has no influence over x1, so this state is not controllable. Hence the system is not
completely controllable.

However, note that

• The state x2 is controllable

• The state x1, though uncontrollable, is stable because, whatever the initial condition, the state
always converges to zero.

• Hence in this case, although the system is not completely controllable it is stabilisable because all
of its uncontrollable modes are stable.

EXERCISE: Verify the above conclusion by checking the rank of the controllability matrix C = [B AB].
In this case, we would expect C to lose rank because one state is not controllable.
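The same sort of numpy check can be reused here by swapping in the Example 2 matrices (again just an illustrative sketch):

    import numpy as np

    # Example 2 matrices: only the A-matrix differs from Example 1
    A = np.array([[-1.0, 0.0],
                  [ 0.0, 3.0]])
    B = np.array([[0.0],
                  [1.0]])

    C = np.hstack([B, A @ B])
    print(C)                         # expect [[0, 0], [1, 3]]
    print(np.linalg.matrix_rank(C))  # expect 1: rank deficient, not completely controllable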

Example 3: Consider a state-space system in which

    A = [ 1  0 ]        B = [ 0 ]
        [ 0  3 ]            [ 1 ]

This is similar to the second example except that the sign of the first row of the A-matrix has changed.
These matrices lead to the state equation:

    [ ẋ1 ]   [ 1  0 ] [ x1 ]   [ 0 ]
    [ ẋ2 ] = [ 0  3 ] [ x2 ] + [ 1 ] u

This can be written long-hand as

ẋ1 = x1
ẋ2 = 3x2 + u

Because of the particularly simple form of these equations, they can be solved explicitly to obtain

x1(t) = e^t x1(0)

x2(t) = e^(3t) x2(0) + ∫₀ᵗ e^(3(t−τ)) u(τ) dτ

From the above equations we can see that

• u directly influences the evolution of x2 (as with Examples 1 and 2)

• The evolution of x1 is not affected by either the input u or the other state x2. Whatever the
input does, it has no influence over x1, so this state is not controllable. Hence the system is not
completely controllable (as with Example 2).

However, note that in this case, unlike Example 2:

• The state x1 is both uncontrollable and unstable: it does not converge to zero and in fact
diverges.

• Hence in this case, the system is not completely controllable, nor is it stabilisable because its
uncontrollable modes are not all stable.
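Stabilisability can also be checked numerically with the PBH (Hautus) test: the pair (A, B) is stabilisable if and only if rank [A − λI  B] = n for every eigenvalue λ of A with non-negative real part. Below is a minimal numpy sketch of that test applied to Examples 2 and 3 (the function name and layout are illustrative only):

    import numpy as np

    def is_stabilisable(A, B):
        """PBH test: rank [A - lam*I, B] must equal n for every eigenvalue
        lam of A with non-negative real part."""
        n = A.shape[0]
        for lam in np.linalg.eigvals(A):
            if lam.real >= 0:
                M = np.hstack([A - lam * np.eye(n), B])
                if np.linalg.matrix_rank(M) < n:
                    return False
        return True

    B  = np.array([[0.0], [1.0]])
    A2 = np.array([[-1.0, 0.0], [0.0, 3.0]])   # Example 2: uncontrollable mode at -1 (stable)
    A3 = np.array([[ 1.0, 0.0], [0.0, 3.0]])   # Example 3: uncontrollable mode at +1 (unstable)

    print(is_stabilisable(A2, B))   # expect True  (stabilisable)
    print(is_stabilisable(A3, B))   # expect False (not stabilisable)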
