ENEE 660 HW Sol #2
With the states $x_1(t) = Li(t)$ (inductor flux) and $x_2(t) = Cv(t)$ (capacitor charge),
$$\frac{dx_1(t)}{dt} = L\frac{di(t)}{dt} = -u(t)Ri(t) - v(t) + Eu(t) = -u(t)\frac{R}{L}x_1(t) - \frac{1}{C}x_2(t) + Eu(t)$$
$$\frac{dx_2(t)}{dt} = C\frac{dv(t)}{dt} = i(t) = \frac{1}{L}x_1(t)$$
So, the state evolution is described by
$$\frac{d}{dt}\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix}
= \begin{bmatrix} 0 & -\frac{1}{C} \\ \frac{1}{L} & 0 \end{bmatrix}
\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix}
+ u(t)\begin{bmatrix} -\frac{R}{L} & 0 \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix}
+ \begin{bmatrix} E \\ 0 \end{bmatrix} u(t),
\qquad
x(t_o) = \begin{bmatrix} Li(t_o) \\ Cv(t_o) \end{bmatrix}$$
For $u(t) = 0$:
$$\frac{d}{dt}\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix}
= \begin{bmatrix} 0 & -\frac{1}{C} \\ \frac{1}{L} & 0 \end{bmatrix}
\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix}$$
The eigenvalues are $\pm j/\sqrt{LC}$, i.e., the system oscillates with frequency $1/\sqrt{LC}$.
In the phase plane the trajectories are counterclockwise closed orbits (circles in the scaled coordinates $x_1/\sqrt{L}$, $x_2/\sqrt{C}$), since
$$\frac{d}{dt}\left(\frac{x_1^2(t)}{L} + \frac{x_2^2(t)}{C}\right)
= \frac{2x_1}{L}\left(-\frac{x_2}{C}\right) + \frac{2x_2}{C}\cdot\frac{x_1}{L} = 0$$
[Figure: closed counterclockwise orbits around the origin in the $(x_1, x_2)$ phase plane.]
For $u(t) = 1$:
$$\begin{bmatrix} \frac{dx_1(t)}{dt} \\[2pt] \frac{dx_2(t)}{dt} \end{bmatrix}
= \begin{bmatrix} -\frac{R}{L} & -\frac{1}{C} \\ \frac{1}{L} & 0 \end{bmatrix}
\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix}
+ \begin{bmatrix} E \\ 0 \end{bmatrix}$$
Let
$$\begin{bmatrix} y_1(t) \\ y_2(t) \end{bmatrix}
= \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix}
- \begin{bmatrix} z_1 \\ z_2 \end{bmatrix},
\quad \text{where} \quad
\begin{bmatrix} z_1 \\ z_2 \end{bmatrix}
= -\begin{bmatrix} -\frac{R}{L} & -\frac{1}{C} \\ \frac{1}{L} & 0 \end{bmatrix}^{-1}
\begin{bmatrix} E \\ 0 \end{bmatrix}
= \begin{bmatrix} 0 \\ EC \end{bmatrix}$$
Then we see that
$$\begin{bmatrix} \frac{dy_1(t)}{dt} \\[2pt] \frac{dy_2(t)}{dt} \end{bmatrix}
= \begin{bmatrix} -\frac{R}{L} & -\frac{1}{C} \\ \frac{1}{L} & 0 \end{bmatrix}
\begin{bmatrix} y_1(t) \\ y_2(t) \end{bmatrix}$$
Since the eigenvalues are $\dfrac{-R \pm \sqrt{R^2 - 4L/C}}{2L}$, no matter where we start, the vector $y(t)$ goes to $0$ as $t \to \infty$: in a smooth way (no spirals) if $R^2 > 4L/C$, and in a spiral way if $R^2 < 4L/C$. So, in
$x_1, x_2$ coordinates the $x(t)$ vector goes towards the vector $\begin{bmatrix} 0 \\ EC \end{bmatrix}$ in a similar fashion. To get some idea of what the reachable set looks like, you have to fix an initial condition (the set depends heavily on it). Take, for convenience, $\begin{bmatrix} 0 \\ 0 \end{bmatrix}$ as your initial condition. Then, you may have trajectories such as:
[Figure: three phase-plane sketches of trajectories starting at the origin and approaching $(0, EC)$, smooth or spiraling depending on the sign of $R^2 - 4L/C$.]
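The $u(t) = 1$ behavior can also be checked numerically; the sketch below uses assumed values of $R, L, C, E$ (chosen so that $R^2 > 4L/C$, the non-spiraling case) and confirms both the equilibrium $(0, EC)$ and the real, negative eigenvalues:

```python
import numpy as np

# u(t) = 1 dynamics of the RLC loop. R, L, C, E below are illustrative
# placeholder values, not from the problem; they satisfy R^2 > 4L/C.
R_, L_, C_, E_ = 3.0, 1.0, 1.0, 2.0
A1 = np.array([[-R_ / L_, -1.0 / C_],
               [1.0 / L_, 0.0]])
b = np.array([E_, 0.0])

# Equilibrium x* = -A1^{-1} b, expected to be (0, E*C).
x_star = -np.linalg.solve(A1, b)

# Eigenvalues (-R +/- sqrt(R^2 - 4L/C)) / (2L); here R^2 = 9 > 4 = 4L/C,
# so both are real and negative (smooth, non-spiraling decay).
eigs = np.linalg.eigvals(A1)

# Crude forward-Euler run from the origin to confirm convergence to x*.
x = np.zeros(2)
dt = 0.001
for _ in range(20000):
    x = x + dt * (A1 @ x + b)
```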
Problem 3
The state variable equations, from the voltage-controlled resistors, are:
$$\frac{dx_1(t)}{dt} = M(v_1(t) - V_p)\,x_2(t)$$
$$\frac{dx_2(t)}{dt} = M(v_2(t) - V_p)\,x_1(t)$$
or
$$\frac{d}{dt}\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix}
= \begin{bmatrix} 0 & -MV_p \\ -MV_p & 0 \end{bmatrix}
\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix}
+ v_1(t)\begin{bmatrix} 0 & M \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix}
+ v_2(t)\begin{bmatrix} 0 & 0 \\ M & 0 \end{bmatrix}
\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix}$$
The system is nonlinear, since the right-hand side is not simultaneously linear in $x(t)$ and $u(t) = \begin{bmatrix} v_1(t) \\ v_2(t) \end{bmatrix}$. The system is time invariant, since the right-hand side does not depend explicitly on time, and this implies time invariance by our definition of time invariance.

For the bilinear state-space model to be linear it is necessary that $B_1 = B_2 = 0$ (as matrices).
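A quick numerical illustration of the bilinearity claim, with assumed values for $M$ and $V_p$: the right-hand side is linear in $x$ for fixed $u$, but scaling $x$ and $u$ together does not scale it linearly, so the map is not jointly linear:

```python
import numpy as np

# Right-hand side of the bilinear model from Problem 3. M and Vp are
# illustrative placeholder values, not from the problem statement.
M_, Vp = 1.5, 0.7
A  = np.array([[0.0, -M_ * Vp], [-M_ * Vp, 0.0]])
B1 = np.array([[0.0, M_], [0.0, 0.0]])
B2 = np.array([[0.0, 0.0], [M_, 0.0]])

def f(x, u):
    # dx/dt = A x + u1 B1 x + u2 B2 x: linear in x alone and in u alone,
    # but not jointly linear in (x, u) because of the u_i * x products.
    return A @ x + u[0] * (B1 @ x) + u[1] * (B2 @ x)

x0 = np.array([1.0, -2.0])
u0 = np.array([0.5, 1.0])

# Joint linearity would require f(2x, 2u) == 2 f(x, u); the bilinear
# terms scale by 4 instead, so the identity fails.
lhs = f(2 * x0, 2 * u0)
rhs = 2 * f(x0, u0)
```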
Problem 4
(a) $\dot{x}(t) = Ax(t)$ with $A = \begin{bmatrix} -4 & 0 \\ 0 & 1 \end{bmatrix}$. Then $e^{At} = \begin{bmatrix} e^{-4t} & 0 \\ 0 & e^{t} \end{bmatrix}$ and the solution is
$$\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = \begin{bmatrix} e^{-4t}x_1^o \\ e^{t}x_2^o \end{bmatrix}.$$
Clearly, for $x_2^o \neq 0$ the second component will grow without bound as $t$ increases.
With $B = \begin{bmatrix} 0 & 3 \\ -3 & 0 \end{bmatrix}$,
$$P(t) = e^{Bt} = \begin{bmatrix} \cos 3t & \sin 3t \\ -\sin 3t & \cos 3t \end{bmatrix}$$
This is continuous with a continuous derivative on $(-\infty, \infty)$, and
$$\det P(t) = \cos^2 3t + \sin^2 3t = 1$$
So, $P(t)$ is a Liapunov transformation.
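The Liapunov-transformation properties of $P(t)$ can be verified numerically; the sketch below uses $B = \begin{bmatrix} 0 & 3 \\ -3 & 0 \end{bmatrix}$, the matrix implied by the $\cos/\sin$ entries of $e^{Bt}$:

```python
import numpy as np

# Check the Liapunov-transformation properties of P(t) = e^{Bt}.
B = np.array([[0.0, 3.0], [-3.0, 0.0]])

def P(t):
    return np.array([[np.cos(3 * t),  np.sin(3 * t)],
                     [-np.sin(3 * t), np.cos(3 * t)]])

ts = np.linspace(-5.0, 5.0, 101)
dets = [np.linalg.det(P(t)) for t in ts]

# P should satisfy dP/dt = B P; check with a central finite difference.
t0, h = 0.37, 1e-6
dP = (P(t0 + h) - P(t0 - h)) / (2 * h)
resid = np.abs(dP - B @ P(t0)).max()

# Bounded with bounded inverse: P(t) is orthogonal, so its norm and the
# norm of its inverse are both 1 for every t.
orth_err = max(np.abs(P(t) @ P(t).T - np.eye(2)).max() for t in ts)
```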
(b) With $z(t) = e^{Bt}x(t)$,
$$\dot{z}(t) = Be^{Bt}x(t) + e^{Bt}Ax(t)
= Be^{Bt}e^{-Bt}z(t) + e^{Bt}Ae^{-Bt}z(t)
= \left(B + e^{Bt}Ae^{-Bt}\right)z(t)
= \bar{A}(t)\,z(t)$$
Then
$$e^{Bt}Ae^{-Bt}
= \begin{bmatrix} \cos 3t & \sin 3t \\ -\sin 3t & \cos 3t \end{bmatrix}
\begin{bmatrix} -4 & 0 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} \cos 3t & -\sin 3t \\ \sin 3t & \cos 3t \end{bmatrix}
= \begin{bmatrix} \cos 3t & \sin 3t \\ -\sin 3t & \cos 3t \end{bmatrix}
\begin{bmatrix} -4\cos 3t & 4\sin 3t \\ \sin 3t & \cos 3t \end{bmatrix}$$
$$= \begin{bmatrix} -4\cos^2 3t + \sin^2 3t & 4\cos 3t \sin 3t + \sin 3t \cos 3t \\ 4\sin 3t \cos 3t + \cos 3t \sin 3t & -4\sin^2 3t + \cos^2 3t \end{bmatrix}
= \begin{bmatrix} 1 - 5\cos^2 3t & 5\sin 3t \cos 3t \\ 5\sin 3t \cos 3t & 1 - 5\sin^2 3t \end{bmatrix}$$
So,
$$\bar{A}(t) = \begin{bmatrix} 1 - 5\cos^2 3t & 5\sin 3t \cos 3t + 3 \\ 5\sin 3t \cos 3t - 3 & 1 - 5\sin^2 3t \end{bmatrix}$$
The eigenvalues of $\bar{A}(t)$ satisfy
$$\lambda^2 + 3\lambda + 5 = 0 \quad\Longrightarrow\quad \lambda_{1,2} = \frac{-3 \pm j\sqrt{11}}{2}$$
So the eigenvalues of the time-varying matrix $\bar{A}(t)$ are constant and have negative real part for all time!
Despite this,
$$z(t) = e^{Bt}x(t)
= \begin{bmatrix} \cos 3t & \sin 3t \\ -\sin 3t & \cos 3t \end{bmatrix}
\begin{bmatrix} e^{-4t}x_1^o \\ e^{t}x_2^o \end{bmatrix}
= \begin{bmatrix} e^{-4t}\cos 3t\, x_1^o + e^{t}\sin 3t\, x_2^o \\ -e^{-4t}\sin 3t\, x_1^o + e^{t}\cos 3t\, x_2^o \end{bmatrix}$$
can grow without bound as $t$ grows to infinity. For instance, take $x_1^o = 0$, $x_2^o = 1$. So for time-varying linear systems the sign of the (real parts of the) eigenvalues of $A(t)$ has no implications on stability!
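A sketch confirming this punchline numerically: the eigenvalues of $\bar{A}(t)$ stay fixed at $(-3 \pm j\sqrt{11})/2$ for every $t$, yet $z(t)$ with $x_1^o = 0$, $x_2^o = 1$ blows up:

```python
import numpy as np

# Problem 4(b): Abar(t) has constant eigenvalues with negative real part,
# yet z(t) = e^{Bt} x(t) grows without bound (here x1o = 0, x2o = 1).
def Abar(t):
    c, s = np.cos(3 * t), np.sin(3 * t)
    return np.array([[1 - 5 * c * c, 5 * s * c + 3],
                     [5 * s * c - 3, 1 - 5 * s * s]])

# Roots of lambda^2 + 3 lambda + 5 = 0, i.e. (-3 +/- j sqrt(11)) / 2.
expected = sorted(np.roots([1, 3, 5]), key=lambda z: z.imag)
eig_err = 0.0
for t in np.linspace(0.0, 4.0, 17):
    eigs = sorted(np.linalg.eigvals(Abar(t)), key=lambda z: z.imag)
    eig_err = max(eig_err, max(abs(a - b) for a, b in zip(eigs, expected)))

def z(t, x1o=0.0, x2o=1.0):
    c, s = np.cos(3 * t), np.sin(3 * t)
    return np.array([np.exp(-4 * t) * c * x1o + np.exp(t) * s * x2o,
                     -np.exp(-4 * t) * s * x1o + np.exp(t) * c * x2o])

growth = np.linalg.norm(z(10.0)) / np.linalg.norm(z(0.0))
```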
Problem 5
A state $x_1$ is reachable from the zero state at time $t = 1$ if
$$x_1 = \int_0^1 e^{A(1-\tau)}\,b\,u(\tau)\,d\tau \quad \text{for some } u \in \mathcal{U}.$$
Or, splitting the integral,
$$x_1 = \int_0^{1/2} e^{A(1-\tau)}\,b\,u(\tau)\,d\tau + \int_{1/2}^{1} e^{A(1-\tau)}\,b\,u(\tau)\,d\tau
= e^{A/2}\int_0^{1/2} e^{A(\frac{1}{2}-\tau)}\,b\,u(\tau)\,d\tau + \int_0^{1/2} e^{A(\frac{1}{2}-\tau)}\,b\,u\!\left(\tfrac{1}{2}+\tau\right)d\tau$$
(where we let $\tau = \frac{1}{2} + \sigma$ in the second integral and then renamed $\sigma$ back to $\tau$).
Since the admissible controls satisfy $u(\frac{1}{2}+\tau) = u(\tau)$,
$$x_1 = \left(I + e^{A/2}\right)\int_0^{1/2} e^{A(\frac{1}{2}-\tau)}\,b\,u(\tau)\,d\tau.$$
By Cayley--Hamilton, $e^{A(\frac{1}{2}-\tau)}b = \sum_{r=1}^{n} \alpha_r\!\left(\tfrac{1}{2}-\tau\right)A^{r-1}b$, so
$$x_1 = \left(e^{A/2} + I\right)\sum_{r=1}^{n} A^{r-1}b \int_0^{1/2} \alpha_r\!\left(\tfrac{1}{2}-\tau\right)u(\tau)\,d\tau
= \sum_{r=1}^{n} c_r \left(e^{A/2} + I\right)A^{r-1}b,$$
where $c_r = \int_0^{1/2} \alpha_r(\frac{1}{2}-\tau)\,u(\tau)\,d\tau$.
Therefore
$$\mathcal{R}(1) = \mathrm{Range}\left\{\left(e^{A/2} + I\right)\left[b,\; Ab,\; \cdots,\; A^{n-1}b\right]\right\}$$
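To see the formula in action, the sketch below picks an assumed pair $(A, b)$ (not from the problem) and computes the rank of $(e^{A/2} + I)\,[b,\ Ab]$; a plain Taylor series stands in for the small matrix exponential:

```python
import numpy as np

# Reachable set at t = 1 per Problem 5's conclusion:
# R(1) = Range{(e^{A/2} + I)[b, Ab, ..., A^{n-1} b]}.
# A and b below are illustrative placeholders, not from the problem.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
b = np.array([[0.0], [1.0]])
n = A.shape[0]

def expm_taylor(M, terms=60):
    # Small dense matrix exponential via a plain Taylor series
    # (adequate for this 2x2, small-norm example).
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Controllability-style matrix [b, Ab] and the reachability factor.
ctrb = np.hstack([np.linalg.matrix_power(A, r) @ b for r in range(n)])
R1 = (expm_taylor(A / 2) + np.eye(n)) @ ctrb
rank = np.linalg.matrix_rank(R1)
```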
Problem 6
(a) For $A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$, $e^{At} = e^{t}\begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix}$, so $e^{-At} = e^{-t}\begin{bmatrix} 1 & -t \\ 0 & 1 \end{bmatrix}$. With the (time-varying) input vector $b(t) = \begin{bmatrix} t \\ 0 \end{bmatrix}$, the controllability Gramian is
$$W(0,1) = \int_0^1 e^{-t}\begin{bmatrix} 1 & -t \\ 0 & 1 \end{bmatrix}\begin{bmatrix} t \\ 0 \end{bmatrix}\begin{bmatrix} t & 0 \end{bmatrix}e^{-t}\begin{bmatrix} 1 & 0 \\ -t & 1 \end{bmatrix}dt
= \int_0^1 e^{-2t}\begin{bmatrix} t^2 & 0 \\ 0 & 0 \end{bmatrix}dt
= \begin{bmatrix} \frac{1}{4}\left(1 - 5e^{-2}\right) & 0 \\ 0 & 0 \end{bmatrix}$$
Then $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ is reachable at time $1$ starting from $\begin{bmatrix} 0 \\ 0 \end{bmatrix}$ at time $0$ iff
$$e^{-1}\begin{bmatrix} 1 & -1 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} e^{-1} \\ 0 \end{bmatrix}$$
is in the range of $W(0,1)$, which is clearly the case. So the answer is yes.
To find the control we solve
$$\begin{bmatrix} \frac{1}{4}\left(1 - 5e^{-2}\right) & 0 \\ 0 & 0 \end{bmatrix}\eta = \begin{bmatrix} e^{-1} \\ 0 \end{bmatrix}$$
So,
$$\eta = \begin{bmatrix} \frac{4}{e - 5e^{-1}} \\ 0 \end{bmatrix}$$
and
$$u(t) = \begin{bmatrix} t & 0 \end{bmatrix}e^{-t}\begin{bmatrix} 1 & 0 \\ -t & 1 \end{bmatrix}\begin{bmatrix} \frac{4}{e - 5e^{-1}} \\ 0 \end{bmatrix} = te^{-t}\,\frac{4}{e - 5e^{-1}}.$$
(b) With the output row vector $c(t) = \begin{bmatrix} 1 & t \end{bmatrix}$, the observability Gramian is
$$M(0,T) = \int_0^T e^{t}\begin{bmatrix} 1 & 0 \\ t & 1 \end{bmatrix}\begin{bmatrix} 1 \\ t \end{bmatrix}\begin{bmatrix} 1 & t \end{bmatrix}e^{t}\begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix}dt
= \int_0^T e^{2t}\begin{bmatrix} 1 & 2t \\ 2t & 4t^2 \end{bmatrix}dt$$
$$= \begin{bmatrix} \frac{1}{2}\left(e^{2T}-1\right) & Te^{2T} - \frac{1}{2}\left(e^{2T}-1\right) \\ Te^{2T} - \frac{1}{2}\left(e^{2T}-1\right) & 2T^2e^{2T} - 2Te^{2T} + e^{2T} - 1 \end{bmatrix}$$
The question is: is $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ in the null space of $M(0,T)$? That is, does
$$M(0,T)\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} \frac{1}{2}\left(e^{2T}-1\right) \\ Te^{2T} - \frac{1}{2}\left(e^{2T}-1\right) \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$
hold? This is not true unless $T = 0$. Therefore the answer is: YES, $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ is observable; any nonzero $T$, however small, allows observability.
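A quadrature check of the Gramian entries and of the observability conclusion, using the $A$ and $c(t) = [1,\ t]$ read off the integrand above:

```python
import numpy as np

# Problem 6(b): observability Gramian M(0,T) for xdot = A x, y = c(t) x,
# with A = [[1, 1], [0, 1]] and time-varying output row c(t) = [1, t].
def M(T, steps=5000):
    # Trapezoid-rule quadrature of e^{A' t} c(t)' c(t) e^{A t}.
    ts = np.linspace(0.0, T, steps + 1)
    acc = np.zeros((2, 2))
    prev = None
    for t in ts:
        eAt = np.exp(t) * np.array([[1.0, t], [0.0, 1.0]])
        ct = np.array([[1.0, t]])
        integrand = eAt.T @ ct.T @ ct @ eAt
        if prev is not None:
            acc += 0.5 * (prev + integrand) * (ts[1] - ts[0])
        prev = integrand
    return acc

T = 0.5
M_num = M(T)

# Closed-form entries derived in the text.
m11 = 0.5 * (np.exp(2 * T) - 1)
m12 = T * np.exp(2 * T) - 0.5 * (np.exp(2 * T) - 1)
m22 = 2 * T**2 * np.exp(2 * T) - 2 * T * np.exp(2 * T) + np.exp(2 * T) - 1

# [1, 0] is NOT in the null space of M(0,T) for T > 0, hence observable.
v = M_num @ np.array([1.0, 0.0])
```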
Problem 7
I have followed Arbib, from Kalman, Falb and Arbib, in the proofs below.

Proof of Theorem A

All we have to show is how to exchange the map $h_x(\cdot)$ of the first commutative diagram for the map $h'_x(\cdot)$ in the second diagram.

If we start from the first commutative diagram: since $M$ is in reduced form, $h_x$ is one-to-one. So take $X' = h_x(X)$ and $h'_x = h_x^{-1}$ on $X'$.
Now if we start from the second commutative diagram: for $x'$, select $h_x(x')$ to be any $x$ such that $h'_x(x) = x'$.

[Commutative diagram: $f: U^* \to Y$ realized through $M_{S_f, i_f}$, with the identity map on $Y$,] where $i_f: U^* \to S_f$ via $i_f(u) = [u]_f$.
$M_f$ simulates $M_{S_f, i_f}$: we need to find maps $h_u$ and $h_x$ such that the diagram below commutes.

[Commutative diagram relating $f_{M_f,\,h_x(s)}$ and $f_{M_{S_f,i_f},\,s}$ through $h_u$ and the identity on $Y$.]

For every $s \in S_f$ we select $h_u(s) = u \in U^*$ s.t. $s = [u]_f$. Then set also $h_x(s) = f(u)$. Now
$$f_{M_{S_f,i_f},\,s}(s') = i_f(ss') = f(h_u(ss')) = f(h_u(s)\,h_u(s')) = f_{M_f,\,[h_u(s)]_f}(h_u(s')) = f_{M_f,\,h_x(s)}(h_u(s')).$$
Proof of Theorem B

If $M_f$ simulates $M_g$, the diagram below commutes.

[Commutative diagram: $g: U^* \to Y$ and $f: U^* \to Y$ connected by $h_u: U^* \to U^*$ and $h_y: Y \to Y$.]

Suppose $g = h_y \circ f \circ h_u$.
Let $S_g = U^*/\!\equiv_g$ and $S_f = U^*/\!\equiv_f$.
Let $S' = \{[h_u(u)]_f : u \in U^*\}$. Clearly $S'$ is a subsemigroup of $S_f$.
Let $Z([h_u(u)]_f) = [u]_g$. $Z$ is well defined; indeed $[h_u(u)]_f = [h_u(u')]_f \Rightarrow [u]_g = [u']_g$, because
$$g(wuz) = h_y f h_u(wuz) = h_y f\big(h_u(w)\,h_u(u)\,h_u(z)\big)
= h_y f\big(h_u(w)\,h_u(u')\,h_u(z)\big) \quad (\text{since } h_u(u) \equiv_f h_u(u'))
= h_y f h_u(wu'z) = g(wu'z).$$
Conversely, if $(S_g, i_g) \mid (S_f, i_f)$, the diagram below commutes.

[Commutative diagram: $Z: S' \subseteq S_f \to S_g$, together with $i_f$, $i_g$, and $h_y$.]

Given $u \in U^*$, choose $h_u(u)$ to be some $t \in U^*$ s.t. $Z([t]_f) = [u]_g$. One exists since $Z$ is onto.
Then
$$h_y f h_u(u) = h_y\big(i_f([h_u(u)]_f)\big) = i_g\big(Z([h_u(u)]_f)\big) = i_g([u]_g) = g(u).$$
And this completes the proof.
Problem 8 (Optional, Extra Credit)
Look at
$$C\frac{d}{dt}\big(v_1(t) + Kv_1(t-h)\big) + \left(\frac{1}{z} - g\right)v_1(t) - K\left(\frac{1}{z} + g\right)v_1(t-h) = v_1^3(t) - Kv_1^3(t-h)$$
and suppose that you start at $t = 0$ and you want to integrate this differential equation for $0 \leq t \leq h$. Because of the delayed arguments, you must know $v_1(t)$ for $-h \leq t \leq 0$ and $\frac{d}{dt}v_1(t)$ for $-h \leq t \leq 0$ to be able to integrate it. These two functions are therefore the state at time $t = 0$; similarly for other times. Therefore this system has an infinite-dimensional state space, because two functions on the interval $[-h, 0]$ are needed as the state.
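The point that the state is a function on $[-h, 0]$ can be made concrete with the method of steps. The sketch below integrates a simplified retarded delay equation $\dot{v}(t) = av(t) + cv(t-h)$ (not the neutral-type circuit equation above, which also delays the derivative; the values of $a$, $c$, $h$ and the history function are illustrative assumptions); advancing the solution over one interval of length $h$ consumes the entire stored history segment:

```python
import numpy as np

# Method-of-steps integration of v'(t) = a v(t) + c v(t - h).
# The "state" carried from one interval to the next is a whole sampled
# function on an interval of length h, not a finite vector.
a, c, h = -1.0, 0.5, 1.0
N = 1000                       # grid points per interval of length h
dt = h / N

def history(t):
    # Prescribed initial function on [-h, 0]; an arbitrary choice.
    return np.cos(t)

# v sampled on [-h, 0]: this function IS the initial state.
state = np.array([history(-h + k * dt) for k in range(N + 1)])

segments = [state.copy()]
for _ in range(3):             # integrate over [0, h], [h, 2h], [2h, 3h]
    prev = segments[-1]
    new = np.empty(N + 1)
    new[0] = prev[-1]          # continuity at the interval boundary
    for k in range(N):
        # Explicit Euler; v(t - h) is read from the previous segment.
        new[k + 1] = new[k] + dt * (a * new[k] + c * prev[k])
    segments.append(new)
```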