
Advanced Nonlinear Control (4/CY/O8)

Proposed Problems
1. (Sastry, 1999) Consider the second order nonlinear system:

   \dot{x}_1 = x_2 + \epsilon x_1 (x_1^2 + x_2^2) \sin(x_1^2 + x_2^2)
   \dot{x}_2 = -x_1 + \epsilon x_2 (x_1^2 + x_2^2) \sin(x_1^2 + x_2^2)     (1)

Show that Jacobian linearisation around the origin is inconclusive in determining the stability of the origin. Use Lyapunov's stability theory and your own creativity to pick a V(x) to study the stability of the origin for -1 \le \epsilon \le 1.
2. (Sastry, 1999) Consider the second order nonlinear system:

   \dot{x}_1 = x_2
   \dot{x}_2 = -x_1 + (1 - x_1^2 - x_2^2) x_2     (2)

Discuss the stability of the origin. Find the only limit cycle of the system. Prove, using a suitable function V(x) and La Salle's invariance principle, that all trajectories not starting from the origin converge to the limit cycle.
3. (Khalil, 2002) Euler's equations for a rotating rigid spacecraft are given by:

   J_1 \dot{\omega}_1 = (J_2 - J_3) \omega_2 \omega_3 + u_1
   J_2 \dot{\omega}_2 = (J_3 - J_1) \omega_3 \omega_1 + u_2
   J_3 \dot{\omega}_3 = (J_1 - J_2) \omega_1 \omega_2 + u_3     (3)

where \omega_1 to \omega_3 are the components of the angular velocity vector along the principal axes, u_1 to u_3 are the torque inputs applied around the principal axes, and J_1 to J_3 are the principal moments of inertia.

Show that with u_1 = u_2 = u_3 = 0 the origin \omega = 0 is stable. Is it asymptotically stable?

Suppose that the torque inputs apply the feedback control u_i = -k_i \omega_i, i = 1, ..., 3, where k_1 to k_3 are positive constants. Show that the origin of the closed loop system is globally asymptotically stable.
4. Consider the system

   \dot{x}_1 = x_2
   \dot{x}_2 = -a \sin(x_1) - b \cos(x_1) u
   y = x_1     (4)

Find by inspection the relative degree of this system. Find a feedback law:

   u = p(x) + q(x) v     (5)

such that the system is linear from the external input v to the output y and obeys the following input-output dynamics:

   \frac{y(s)}{v(s)} = \frac{1}{\alpha_2 s^2 + \alpha_1 s + \alpha_0}     (6)
5. (Garces et al., 2003) Consider the SISO dynamic neural network model:

   \dot{x}_1 = -\beta_1 x_1 + w_{11}\sigma(x_1) + w_{12}\sigma(x_2) + \gamma_1 u
   \dot{x}_2 = -\beta_2 x_2 + w_{21}\sigma(x_1) + w_{22}\sigma(x_2) + \gamma_2 u
   y = x_1     (7)

where all the parameters \beta_i, w_{ij}, \gamma_i are different from zero. Find the relative degree of this system. Find an input-output linearising law of the form:

   u = p(x) + q(x) v     (8)

so that the dynamics from the external input v to the output y are linear and have the form:

   \frac{y(s)}{v(s)} = \frac{1}{\alpha_1 s + \alpha_0}     (9)

Find an expression for the zero dynamics of this dynamic neural network. Find the conditions for the stability of the zero dynamics in terms of the parameters \beta_i, w_{ij}, \gamma_i of the dynamic neural network.
6. Consider a process whose input-output behaviour can be described by the first order transfer function:

   G(p) = \frac{y(t)}{u(t)} = \frac{b}{p + a}     (10)

where y(t) is the output of the process, u(t) is the input to the process, a > 0, b > 0, and p = d/dt is the differentiation operator. The process is controlled by a Proportional-Integral-Derivative (PID) controller that obeys the following control law:

   u(t) = -K_p y(t) + \frac{K_i}{p}\left(u_c(t) - y(t)\right) - K_d p\, y(t)     (11)

where u_c(t) is a command signal and K_p, K_i, and K_d are controller parameters. Also consider the reference model:

   G_m(p) = \frac{y_m(p)}{u_c(p)} = \frac{\omega_n^2}{p^2 + 2\zeta\omega_n p + \omega_n^2}     (12)

which has a unit steady state gain, natural frequency \omega_n, and damping ratio \zeta.

Find an expression for the closed loop transfer function between the command signal u_c(t) and the process output y(t) in terms of the p operator. Also find an expression for the error between the process output y(t) and the reference model output y_m(t) (e(t) = y(t) - y_m(t)) in terms of the p operator.

Design, using the MIT rule:

   \frac{d\theta}{dt} = -\gamma e \frac{\partial e}{\partial \theta}     (13)

adaptation laws for the parameters K_p, K_i, and K_d such that the closed loop system output y(t) follows the reference model output y_m(t).

Hint: Given two functions f(x_1, x_2, x_3) and g(x_1, x_2, x_3), the partial derivatives of the ratio between the two functions are given by:

   \frac{\partial}{\partial x_i}\left(\frac{f}{g}\right) = \frac{1}{g}\frac{\partial f}{\partial x_i} - \frac{f}{g^2}\frac{\partial g}{\partial x_i}     (14)

where i = 1, 2, 3.
Outline solutions to proposed problems
Problem 1

The vector field describing the system is given by:

   f(x) = \begin{bmatrix} x_2 + \epsilon x_1 (x_1^2 + x_2^2)\sin(x_1^2 + x_2^2) \\ -x_1 + \epsilon x_2 (x_1^2 + x_2^2)\sin(x_1^2 + x_2^2) \end{bmatrix}     (15)

The entries of the Jacobian are given by:

   \frac{\partial f_1}{\partial x_1} = \epsilon\left( 2x_1^2 (x_1^2 + x_2^2)\cos(x_1^2 + x_2^2) + (3x_1^2 + x_2^2)\sin(x_1^2 + x_2^2) \right)
   \frac{\partial f_1}{\partial x_2} = 1 + \epsilon\left( 2x_1 x_2 (x_1^2 + x_2^2)\cos(x_1^2 + x_2^2) + 2x_1 x_2 \sin(x_1^2 + x_2^2) \right)
   \frac{\partial f_2}{\partial x_1} = -1 + \epsilon\left( 2x_1 x_2 (x_1^2 + x_2^2)\cos(x_1^2 + x_2^2) + 2x_1 x_2 \sin(x_1^2 + x_2^2) \right)
   \frac{\partial f_2}{\partial x_2} = \epsilon\left( 2x_2^2 (x_1^2 + x_2^2)\cos(x_1^2 + x_2^2) + (x_1^2 + 3x_2^2)\sin(x_1^2 + x_2^2) \right)     (16)

Evaluating the Jacobian at x = 0:

   \left.\frac{\partial f}{\partial x}\right|_{x=0} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}     (17)

The eigenvalues of the Jacobian are \lambda_1 = j, \lambda_2 = -j, so that their real part is zero. The conditions of Lyapunov's indirect method (Theorem 4, Lecture 2) are not satisfied.

To check further the stability of the origin, let us consider the following candidate Lyapunov function:

   V(x) = x_1^2 + x_2^2     (18)

and calculate the derivative of V(x) along the trajectories of the system:

   \dot{V}(x) = \frac{\partial V(x)}{\partial x} f(x)
             = 2x_1 x_2 + 2\epsilon x_1^2 (x_1^2 + x_2^2)\sin(x_1^2 + x_2^2) - 2x_1 x_2 + 2\epsilon x_2^2 (x_1^2 + x_2^2)\sin(x_1^2 + x_2^2)
             = 2\epsilon (x_1^2 + x_2^2)^2 \sin(x_1^2 + x_2^2)     (19)

Suppose that x_1(0)^2 + x_2(0)^2 < \pi. Then \dot{V}(x) is negative definite on this disk if \epsilon < 0. So, using Theorem 1 (Lecture 2), the origin x = 0 is asymptotically stable if \epsilon < 0 for trajectories starting within the disk x_1(0)^2 + x_2(0)^2 < \pi.
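As a quick numerical illustration of this conclusion (not part of the outline solution; the value of \epsilon and the initial condition below are arbitrary choices), one can integrate system (1) for a negative and a positive \epsilon and evaluate V(x) = x_1^2 + x_2^2 along the trajectories; V should decay towards zero for \epsilon < 0 and grow (towards the invariant circle x_1^2 + x_2^2 = \pi) for \epsilon > 0:

   import numpy as np
   from scipy.integrate import solve_ivp

   def f(t, x, eps):
       # System (1): dx1/dt = x2 + eps*x1*r2*sin(r2), dx2/dt = -x1 + eps*x2*r2*sin(r2), r2 = x1^2 + x2^2
       r2 = x[0]**2 + x[1]**2
       s = r2 * np.sin(r2)
       return [x[1] + eps * x[0] * s, -x[0] + eps * x[1] * s]

   x0 = [0.5, 0.5]                          # starts inside the disk x1^2 + x2^2 < pi
   for eps in (-0.5, 0.5):                  # a negative and a positive value of eps (arbitrary)
       sol = solve_ivp(f, (0.0, 50.0), x0, args=(eps,), rtol=1e-8, atol=1e-10)
       V = sol.y[0]**2 + sol.y[1]**2        # V(x) = x1^2 + x2^2 along the trajectory
       print(f"eps = {eps:+.1f}:  V(0) = {V[0]:.3f},  V(50) = {V[-1]:.3f}")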
Problem 2

The origin itself is an unstable equilibrium point: the Jacobian of the vector field at x = 0 is \begin{bmatrix} 0 & 1 \\ -1 & 1 \end{bmatrix}, whose eigenvalues (1 \pm j\sqrt{3})/2 have positive real part.

Check the time derivative of (x_1^2 + x_2^2 - 1):

   \frac{d}{dt}(x_1^2 + x_2^2 - 1) = 2x_1\dot{x}_1 + 2x_2\dot{x}_2
                                   = 2x_1 x_2 - 2x_2 x_1 + 2x_2^2 (1 - x_1^2 - x_2^2)
                                   = 2x_2^2 (1 - x_1^2 - x_2^2)     (20)

This time derivative is zero on the set defined by x_1^2 + x_2^2 = 1. Hence the set defined by x_1^2 + x_2^2 = 1 is invariant for this system. The motion on this invariant set is described by the equations:

   \dot{x}_1 = x_2
   \dot{x}_2 = -x_1     (21)

so we can infer that the invariant set is a limit cycle. To check whether the limit cycle is attractive, define the Lyapunov function candidate:

   V(x) = (x_1^2 + x_2^2 - 1)^2     (22)

This function measures the distance to the limit cycle. For an arbitrary positive number l < 1, the region \Omega_l (defined by V(x) < l), which surrounds the limit cycle, is bounded and does not contain the origin. The time derivative of V along trajectories of the system is given by:

   \dot{V}(x) = \left[ 4x_1(x_1^2 + x_2^2 - 1), \; 4x_2(x_1^2 + x_2^2 - 1) \right] \begin{bmatrix} x_2 \\ -x_1 + (1 - x_1^2 - x_2^2)x_2 \end{bmatrix}
             = 4x_1 x_2 (x_1^2 + x_2^2 - 1) - 4x_1 x_2 (x_1^2 + x_2^2 - 1) - 4x_2^2 (x_1^2 + x_2^2 - 1)^2
             = -4x_2^2 (x_1^2 + x_2^2 - 1)^2     (23)

Then \dot{V}(x) \le 0 in \Omega_l, with \dot{V}(x) = 0 only where x_2 = 0 or x_1^2 + x_2^2 = 1, so \dot{V}(x) is negative semi-definite. Referring to Theorem 1, Lecture 8, R is the set of all points in \Omega_l where \dot{V}(x) = 0 and M is the union of all invariant sets contained in R. A point of \Omega_l with x_2 = 0 that does not lie on the circle has x_1 \ne 0, and therefore \dot{x}_2 = -x_1 \ne 0 there, so the trajectory through it leaves the set x_2 = 0 immediately; such points belong to no invariant set contained in R. Hence the limit cycle is the only invariant set contained in R, and the set M is the circle defined by x_1^2 + x_2^2 = 1. Then, using the local invariant set theorem (Theorem 1, Lecture 8), all system trajectories starting in \Omega_l converge to M (the limit cycle). The limit cycle is asymptotically stable.
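A simple numerical check of this argument (illustrative only; the initial conditions and time horizon are arbitrary) is to integrate system (2) from one point inside and one point outside the unit circle and verify that x_1^2 + x_2^2 approaches 1 in both cases:

   import numpy as np
   from scipy.integrate import solve_ivp

   def f(t, x):
       # System (2): dx1/dt = x2, dx2/dt = -x1 + (1 - x1^2 - x2^2)*x2
       return [x[1], -x[0] + (1.0 - x[0]**2 - x[1]**2) * x[1]]

   for x0 in ([0.1, 0.0], [2.0, 1.0]):          # one point inside, one point outside the unit circle
       sol = solve_ivp(f, (0.0, 40.0), x0, rtol=1e-9, atol=1e-12)
       r2 = sol.y[0, -1]**2 + sol.y[1, -1]**2   # x1^2 + x2^2 at t = 40
       print(f"x(0) = {x0}:  x1^2 + x2^2 at t = 40 is {r2:.4f}")   # expected close to 1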
Problem 3

By defining x_1 = \omega_1, x_2 = \omega_2, x_3 = \omega_3, a_1 = (J_2 - J_3)/J_1, a_2 = (J_3 - J_1)/J_2, a_3 = (J_1 - J_2)/J_3, the system equations can be written as follows:

   \dot{x}_1 = a_1 x_2 x_3 + u_1 / J_1
   \dot{x}_2 = a_2 x_3 x_1 + u_2 / J_2
   \dot{x}_3 = a_3 x_1 x_2 + u_3 / J_3     (24)

If u_1 = u_2 = u_3 = 0, then the origin x = 0 is an equilibrium point of the system:

   \dot{x}_1 = a_1 x_2 x_3
   \dot{x}_2 = a_2 x_3 x_1
   \dot{x}_3 = a_3 x_1 x_2     (25)

Consider the Lyapunov function candidate:

   V(x) = \frac{1}{2}\left(J_1 x_1^2 + J_2 x_2^2 + J_3 x_3^2\right)     (26)

The derivative of V(x) along trajectories of the autonomous system (25) is:

   \dot{V}(x) = (J_1 a_1 + J_2 a_2 + J_3 a_3) x_1 x_2 x_3
             = (J_2 - J_3 + J_3 - J_1 + J_1 - J_2) x_1 x_2 x_3 = 0     (27)

With this Lyapunov function and based on Theorem 1, Lecture 2, we can say that the origin x = 0 is a stable equilibrium point. It is not asymptotically stable, however: since \dot{V}(x) = 0, V is constant along trajectories, so a trajectory starting on a level set of V remains on it and cannot converge to the origin.

Using the feedback law u_i = -k_i x_i, k_i > 0, i = 1, ..., 3, we have the following dynamics:

   \dot{x}_1 = a_1 x_2 x_3 - (k_1/J_1) x_1
   \dot{x}_2 = a_2 x_3 x_1 - (k_2/J_2) x_2
   \dot{x}_3 = a_3 x_1 x_2 - (k_3/J_3) x_3     (28)

Consider again the candidate Lyapunov function:

   V(x) = \frac{1}{2}\left(J_1 x_1^2 + J_2 x_2^2 + J_3 x_3^2\right)     (29)

The derivative of V(x) along trajectories of the system (28) is:

   \dot{V}(x) = (J_1 a_1 + J_2 a_2 + J_3 a_3) x_1 x_2 x_3 - k_1 x_1^2 - k_2 x_2^2 - k_3 x_3^2
             = -k_1 x_1^2 - k_2 x_2^2 - k_3 x_3^2     (30)

Notice that \dot{V}(x) is negative definite. With this Lyapunov function, which is also radially unbounded, and based on Theorem 1, Lecture 2, we can say that the origin x = 0 is a globally asymptotically stable equilibrium point of the feedback system (28).
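The Lyapunov argument can be illustrated numerically as follows (a sketch only; the inertias, gains and initial condition are arbitrary): integrate the closed loop (28) and confirm that V(x) in (29) is non-increasing along the computed trajectory.

   import numpy as np
   from scipy.integrate import solve_ivp

   J = np.array([1.0, 2.0, 3.0])            # principal moments of inertia (arbitrary)
   k = np.array([1.0, 1.0, 1.0])            # feedback gains k_i > 0 (arbitrary)
   a = np.array([(J[1]-J[2])/J[0], (J[2]-J[0])/J[1], (J[0]-J[1])/J[2]])

   def f(t, x):
       # Closed loop (28): dx_i/dt = a_i x_j x_k - (k_i / J_i) x_i
       return [a[0]*x[1]*x[2] - k[0]/J[0]*x[0],
               a[1]*x[2]*x[0] - k[1]/J[1]*x[1],
               a[2]*x[0]*x[1] - k[2]/J[2]*x[2]]

   sol = solve_ivp(f, (0.0, 20.0), [1.0, -2.0, 1.5], max_step=0.01)
   V = 0.5 * (J[:, None] * sol.y**2).sum(axis=0)     # V(x) = (1/2) sum_i J_i x_i^2
   print("V non-increasing:", bool(np.all(np.diff(V) <= 1e-9)))   # should typically print True
   print("V(0) =", V[0], " V(20) =", V[-1])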
Problem 4

The system is given by:

   \dot{x}_1 = x_2
   \dot{x}_2 = -a \sin(x_1) - b \cos(x_1) u
   y = x_1     (31)

Then, from the control affine description \dot{x} = f(x) + g(x)u, y = h(x), the functions f(x), g(x) and h(x) are given by:

   f(x) = \begin{bmatrix} x_2 \\ -a \sin(x_1) \end{bmatrix}, \qquad
   g(x) = \begin{bmatrix} 0 \\ -b \cos(x_1) \end{bmatrix}, \qquad
   h(x) = x_1     (32)

The relative degree of the system is r = 2 (see Definition 2, Lecture 4), provided \cos(x_1) \ne 0. The input-output linearisation law is given by (see Equation (38), Lecture 4):

   u = \frac{v - \sum_{k=0}^{r} \alpha_k L_f^k h(x)}{\alpha_r L_g L_f^{r-1} h(x)}
     = \frac{v - \alpha_0 L_f^0[x_1] - \alpha_1 L_f^1[x_1] - \alpha_2 L_f^2[x_1]}{\alpha_2 L_g L_f^1[x_1]}     (33)

The Lie derivatives are given by:

   L_f^0[x_1] = x_1     (34)
   L_f^1[x_1] = [1 \ 0] f(x) = x_2     (35)
   L_f^2[x_1] = \frac{\partial x_2}{\partial x} f(x) = [0 \ 1] f(x) = -a \sin(x_1)     (36)
   L_g L_f^1[x_1] = L_g[x_2] = [0 \ 1] g(x) = -b \cos(x_1)     (37)

Then, substituting the Lie derivatives in (33), the feedback law is given by:

   u = \frac{v - \alpha_0 x_1 - \alpha_1 x_2 - \alpha_2 (-a \sin(x_1))}{\alpha_2 (-b) \cos(x_1)}     (38)
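The Lie derivative computations (34)-(37) and the resulting closed-loop relation \alpha_2 \ddot{y} + \alpha_1 \dot{y} + \alpha_0 y = v are easy to verify symbolically. The sketch below is illustrative only, with a, b and the design parameters \alpha_k kept as symbols:

   import sympy as sp

   x1, x2, v, a, b = sp.symbols('x1 x2 v a b')
   a0, a1, a2 = sp.symbols('alpha0 alpha1 alpha2')

   x = sp.Matrix([x1, x2])
   f = sp.Matrix([x2, -a*sp.sin(x1)])
   g = sp.Matrix([0, -b*sp.cos(x1)])
   h = x1

   Lf = lambda lam: (sp.Matrix([lam]).jacobian(x) * f)[0]   # Lie derivative along f
   Lg = lambda lam: (sp.Matrix([lam]).jacobian(x) * g)[0]   # Lie derivative along g

   Lfh, Lf2h = Lf(h), Lf(Lf(h))
   LgLfh = Lg(Lf(h))
   print(Lfh, Lf2h, LgLfh)            # x2, -a*sin(x1), -b*cos(x1): Eqs (35)-(37)

   u_lin = (v - a0*h - a1*Lfh - a2*Lf2h) / (a2*LgLfh)       # feedback law (38)
   yddot = Lf2h + LgLfh*u_lin                               # y'' = Lf^2 h + Lg Lf h * u
   print(sp.simplify(a2*yddot + a1*Lfh + a0*h - v))

The last line should print 0, confirming that the feedback law (38) imposes the target dynamics (6).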
Problem 5

The dynamic neural network is described by:

   \dot{x}_1 = -\beta_1 x_1 + w_{11}\sigma(x_1) + w_{12}\sigma(x_2) + \gamma_1 u
   \dot{x}_2 = -\beta_2 x_2 + w_{21}\sigma(x_1) + w_{22}\sigma(x_2) + \gamma_2 u
   y = x_1     (39)

This is an example of a SISO control affine system with f(x), g(x) and h(x) given by:

   f(x) = \begin{bmatrix} -\beta_1 x_1 + w_{11}\sigma(x_1) + w_{12}\sigma(x_2) \\ -\beta_2 x_2 + w_{21}\sigma(x_1) + w_{22}\sigma(x_2) \end{bmatrix}
        = \underbrace{\begin{bmatrix} -\beta_1 & 0 \\ 0 & -\beta_2 \end{bmatrix}}_{-B}
          \underbrace{\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}}_{x}
        + \underbrace{\begin{bmatrix} w_{11} & w_{12} \\ w_{21} & w_{22} \end{bmatrix}}_{W}
          \underbrace{\begin{bmatrix} \sigma(x_1) \\ \sigma(x_2) \end{bmatrix}}_{\sigma(x)}
        = -Bx + W\sigma(x)

   g(x) = \begin{bmatrix} \gamma_1 \\ \gamma_2 \end{bmatrix} = \Gamma, \qquad h(x) = x_1     (40)

We have that:

   L_f^0 h(x) = h(x) = x_1     (41)

so that:

   C(x) = L_g L_f^0 h(x) = \frac{\partial [x_1]}{\partial x} g(x) = [1 \ 0]\begin{bmatrix} \gamma_1 \\ \gamma_2 \end{bmatrix} = \gamma_1     (42)

Therefore, using the definition of relative degree (Definition 2, Lecture 4), it is possible to conclude that the relative degree of this system is r = 1, provided \gamma_1 \ne 0.

Suppose that \alpha_0 and \alpha_1 are the design parameters for the linearised system. Then, from Equation (38), Lecture 4, the linearising input is given by:

   u = \frac{v - \sum_{k=0}^{r} \alpha_k L_f^k h(x)}{\alpha_r L_g L_f^{r-1} h(x)}
     = \frac{v - \alpha_0 L_f^0[x_1] - \alpha_1 L_f^1[x_1]}{\alpha_1 L_g L_f^0[x_1]}
     = \frac{v - \alpha_0 x_1 - \alpha_1\left(-\beta_1 x_1 + w_{11}\sigma(x_1) + w_{12}\sigma(x_2)\right)}{\alpha_1 \gamma_1}     (43)

Applying the linearising law in Equation (43) to the first differential equation of the dynamic neural network gives:

   \dot{x}_1 = \frac{v - \alpha_0 x_1}{\alpha_1}     (44)

Replacing x_1 by y above and re-arranging the terms gives:

   \alpha_1 \dot{y} + \alpha_0 y = v     (45)

so that the input-output behaviour of the system from the external input v to the output y is given in the Laplace domain by:

   y(s) = \frac{1}{\alpha_1 s + \alpha_0} v(s)     (46)

Notice that the linearised input-output dynamics are of first order, while the original system is of second order.

In order to obtain the normal form, set:

   \xi = \phi_1(x) = h(x) = x_1     (47)

We know from the derivation given in p. 16, Lecture 4, that \phi_2(x) satisfies the following partial differential equation:

   L_g \phi_2(x) = 0 \quad \Longleftrightarrow \quad \gamma_1 \frac{\partial \phi_2}{\partial x_1} + \gamma_2 \frac{\partial \phi_2}{\partial x_2} = 0     (49)

If we take

   \eta = \phi_2(x) = \frac{\gamma_1 x_2 - \gamma_2 x_1}{\gamma_1} = x_2 - \frac{\gamma_2}{\gamma_1} x_1     (50)

it is easy to see that Equation (49) is satisfied. Therefore, the coordinate transformation is linear and given by:

   \phi(x) = \begin{bmatrix} x_1 \\ x_2 - \frac{\gamma_2}{\gamma_1} x_1 \end{bmatrix}
           = \begin{bmatrix} 1 & 0 \\ -\frac{\gamma_2}{\gamma_1} & 1 \end{bmatrix}
             \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}     (51)

The Jacobian of this transformation is:

   \frac{\partial \phi(x)}{\partial x} = \begin{bmatrix} 1 & 0 \\ -\frac{\gamma_2}{\gamma_1} & 1 \end{bmatrix}     (52)

which is constant and nonsingular. The inverse transformation is given by:

   \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
   = \begin{bmatrix} 1 & 0 \\ \frac{\gamma_2}{\gamma_1} & 1 \end{bmatrix}
     \begin{bmatrix} \xi \\ \eta \end{bmatrix}
   = \begin{bmatrix} \xi \\ \eta + \frac{\gamma_2}{\gamma_1}\xi \end{bmatrix}     (53)

In the new coordinates (\xi, \eta), the system in Equation (39) can be expressed as follows:

   \dot{\xi} = -\beta_1 \xi + w_{11}\sigma(\xi) + w_{12}\sigma\!\left(\eta + \frac{\gamma_2}{\gamma_1}\xi\right) + \gamma_1 u
   \dot{\eta} = -\beta_2 \eta + \frac{\gamma_2}{\gamma_1}(\beta_1 - \beta_2)\xi
              + \left(w_{21} - \frac{\gamma_2}{\gamma_1} w_{11}\right)\sigma(\xi)
              + \left(w_{22} - \frac{\gamma_2}{\gamma_1} w_{12}\right)\sigma\!\left(\eta + \frac{\gamma_2}{\gamma_1}\xi\right)
   y = \xi     (54)

The input that makes the output identically zero (so that \xi \equiv 0, and assuming \sigma(0) = 0, as is the case for activation functions such as \sigma = \tanh) is thus given by:

   u(t) = -\frac{w_{12}\,\sigma(\eta(t))}{\gamma_1}     (55)

And finally, the zero dynamics are described by:

   \dot{\eta} = -\beta_2 \eta + \left(w_{22} - \frac{\gamma_2}{\gamma_1} w_{12}\right)\sigma(\eta)     (56)

This expression for the zero dynamics represents an autonomous system with (at least) an equilibrium point at \eta = 0. Consider the Lyapunov function candidate V(\eta) = \eta^2. Then we have:

   \dot{V}(\eta) = \frac{\partial V}{\partial \eta}\dot{\eta}
                = 2\eta\left[-\beta_2 \eta + \left(w_{22} - \frac{\gamma_2}{\gamma_1} w_{12}\right)\sigma(\eta)\right]     (57)
                = -2\beta_2 \eta^2 + 2\left(w_{22} - \frac{\gamma_2}{\gamma_1} w_{12}\right)\eta\,\sigma(\eta)     (58)

Since \eta\,\sigma(\eta) \ge 0 for a monotonically increasing activation function with \sigma(0) = 0, we infer that \dot{V}(\eta) is negative definite provided that \beta_2 > 0 and (w_{22} - \frac{\gamma_2}{\gamma_1} w_{12}) \le 0. So, by Theorem 1, Lecture 2, we can say that if the following conditions are satisfied:

   1. \beta_2 > 0
   2. w_{22} \le \frac{\gamma_2}{\gamma_1} w_{12}

then \eta = 0 is an asymptotically stable equilibrium point of the zero dynamics.
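As an illustrative check (not part of the outline solution; the network parameters below are arbitrary nonzero values and \sigma = \tanh is assumed), the following sketch applies the linearising law (43) to the network (39) and verifies that the output coincides with the response of the first order target dynamics (45):

   import numpy as np
   from scipy.integrate import solve_ivp

   # Arbitrary parameters beta_i, w_ij, gamma_i (all nonzero), design values alpha_0, alpha_1
   b1, b2 = 1.0, 2.0
   W = np.array([[0.5, -0.3], [0.8, 0.2]])
   g1, g2 = 1.5, -0.7
   a0, a1 = 2.0, 1.0
   v = 1.0                                   # constant external input
   sig = np.tanh

   def u_lin(x):
       # Linearising law (43)
       return (v - a0*x[0] - a1*(-b1*x[0] + W[0, 0]*sig(x[0]) + W[0, 1]*sig(x[1]))) / (a1*g1)

   def dnn(t, x):
       u = u_lin(x)
       return [-b1*x[0] + W[0, 0]*sig(x[0]) + W[0, 1]*sig(x[1]) + g1*u,
               -b2*x[1] + W[1, 0]*sig(x[0]) + W[1, 1]*sig(x[1]) + g2*u]

   def target(t, y):
       # Target input-output dynamics (45): alpha_1*dy/dt + alpha_0*y = v
       return [(v - a0*y[0]) / a1]

   x0 = [0.3, -0.2]
   sol = solve_ivp(dnn, (0.0, 5.0), x0, max_step=0.01, rtol=1e-8, atol=1e-10)
   ref = solve_ivp(target, (0.0, 5.0), [x0[0]], t_eval=sol.t, rtol=1e-8, atol=1e-10)
   print("max |y - y_target| =", np.max(np.abs(sol.y[0] - ref.y[0])))   # should be near zero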
Problem 6

The output may be written as follows:

   y(t) = G(p) u(t) = G(p)\left[ -K_p y(t) + \frac{K_i}{p}\left(u_c(t) - y(t)\right) - K_d p\, y(t) \right]     (59)

   \left[ 1 + G(p)\left( K_p + \frac{K_i}{p} + K_d p \right) \right] y(t) = G(p)\frac{K_i}{p} u_c(t)     (60)

   y(t) = \frac{G(p)\frac{K_i}{p}}{1 + G(p)\left( K_p + \frac{K_i}{p} + K_d p \right)} u_c(t)
        = \frac{\frac{b}{p+a} K_i}{p + \frac{b}{p+a}\left( K_p p + K_i + K_d p^2 \right)} u_c(t)     (61)

   y(t) = \frac{b K_i}{p^2 + a p + b K_p p + b K_i + b K_d p^2} u_c(t)
        = \frac{b K_i}{(b K_d + 1)p^2 + (a + b K_p)p + b K_i} u_c(t)     (62)

The closed loop transfer function is given by:

   G_c(p) = \frac{y(t)}{u_c(t)} = \frac{b K_i}{(b K_d + 1)p^2 + (a + b K_p)p + b K_i}
          = \frac{b K_i / (b K_d + 1)}{p^2 + \frac{a + b K_p}{b K_d + 1} p + \frac{b}{b K_d + 1} K_i}     (63)

The reference model is:

   G_m(p) = \frac{y_m(p)}{u_c(p)} = \frac{\omega_n^2}{p^2 + 2\zeta\omega_n p + \omega_n^2}     (64)

In terms of the p = d/dt operator, the error is given by:

   e(t) = y(t) - y_m(t) = G_c(p) u_c(t) - G_m(p) u_c(t)

   e(t) = \frac{b K_i / (b K_d + 1)}{p^2 + \frac{a + b K_p}{b K_d + 1} p + \frac{b}{b K_d + 1} K_i} u_c(t)
        - \frac{\omega_n^2}{p^2 + 2\zeta\omega_n p + \omega_n^2} u_c(t)     (65)

The MIT rule says (see Lecture 3, page 1):

   \frac{d\theta}{dt} = -\gamma e \frac{\partial e}{\partial \theta}     (66)

The sensitivity derivatives are given by:

   \frac{\partial e}{\partial K_p} = \frac{\partial}{\partial K_p}\left[ \frac{b K_i}{(b K_d + 1)p^2 + (a + b K_p)p + b K_i}\, u_c(t) \right]
   = -\frac{(b K_i)\, b p}{\left[(b K_d + 1)p^2 + (a + b K_p)p + b K_i\right]^2} u_c
   = -\frac{b p}{(b K_d + 1)p^2 + (a + b K_p)p + b K_i}\, y
   = -\frac{b p / (b K_d + 1)}{p^2 + \frac{a + b K_p}{b K_d + 1} p + \frac{b}{b K_d + 1} K_i}\, y     (67)

   \frac{\partial e}{\partial K_i} = \frac{\partial}{\partial K_i}\left[ \frac{b K_i}{(b K_d + 1)p^2 + (a + b K_p)p + b K_i}\, u_c(t) \right]
   = \frac{b}{(b K_d + 1)p^2 + (a + b K_p)p + b K_i} u_c
   - \frac{(b K_i)\, b}{\left[(b K_d + 1)p^2 + (a + b K_p)p + b K_i\right]^2} u_c
   = \frac{b}{(b K_d + 1)p^2 + (a + b K_p)p + b K_i}\,(u_c - y)
   = \frac{b / (b K_d + 1)}{p^2 + \frac{a + b K_p}{b K_d + 1} p + \frac{b}{b K_d + 1} K_i}\,(u_c - y)     (68)

   \frac{\partial e}{\partial K_d} = \frac{\partial}{\partial K_d}\left[ \frac{b K_i}{(b K_d + 1)p^2 + (a + b K_p)p + b K_i}\, u_c(t) \right]
   = -\frac{(b K_i)\, b p^2}{\left[(b K_d + 1)p^2 + (a + b K_p)p + b K_i\right]^2} u_c
   = -\frac{b p^2}{(b K_d + 1)p^2 + (a + b K_p)p + b K_i}\, y
   = -\frac{b p^2 / (b K_d + 1)}{p^2 + \frac{a + b K_p}{b K_d + 1} p + \frac{b}{b K_d + 1} K_i}\, y     (69)

Applying the MIT rule:

   \frac{dK_p}{dt} = -\gamma e \frac{\partial e}{\partial K_p}
   = \gamma e\, \frac{b p / (b K_d + 1)}{p^2 + \frac{a + b K_p}{b K_d + 1} p + \frac{b}{b K_d + 1} K_i}\, y
   \approx \gamma' e\, \frac{p}{p^2 + 2\zeta\omega_n p + \omega_n^2}\, y     (70)

   \frac{dK_i}{dt} = -\gamma e \frac{\partial e}{\partial K_i}
   = -\gamma e\, \frac{b / (b K_d + 1)}{p^2 + \frac{a + b K_p}{b K_d + 1} p + \frac{b}{b K_d + 1} K_i}\,(u_c - y)
   \approx -\gamma' e\, \frac{1}{p^2 + 2\zeta\omega_n p + \omega_n^2}\,(u_c - y)     (71)

   \frac{dK_d}{dt} = -\gamma e \frac{\partial e}{\partial K_d}
   = \gamma e\, \frac{b p^2 / (b K_d + 1)}{p^2 + \frac{a + b K_p}{b K_d + 1} p + \frac{b}{b K_d + 1} K_i}\, y
   \approx \gamma' e\, \frac{p^2}{p^2 + 2\zeta\omega_n p + \omega_n^2}\, y     (72)

where

   \gamma' = \gamma \frac{b}{b K_d + 1}     (73)

and the approximation

   p^2 + \frac{a + b K_p}{b K_d + 1} p + \frac{b}{b K_d + 1} K_i \approx p^2 + 2\zeta\omega_n p + \omega_n^2     (74)

is reasonable when the parameters are close to their correct values. This is justified by comparing the denominators of the closed loop transfer function G_c(p) and the reference model transfer function G_m(p).
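The closed loop transfer function (63) can be checked numerically. The sketch below is illustrative only (the plant parameters and controller gains are arbitrary): it simulates the plant under the PID law (11) for a unit step command and compares the response with the step response of G_c(p). A similar simulation, with the gains updated according to (70)-(72), can be used to explore the behaviour of the adaptive loop.

   import numpy as np
   from scipy.integrate import solve_ivp
   from scipy.signal import lti, step

   # Arbitrary plant and controller parameters (a, b > 0)
   a, b = 1.0, 0.5
   Kp, Ki, Kd = 3.6, 8.0, 0.2

   # Time-domain simulation of the loop: plant y' = -a*y + b*u with PID law (11), u_c = 1 (unit step).
   # The derivative term creates an algebraic loop, which is solved here for y' explicitly.
   def loop(t, s):
       y, I = s                                   # plant output and integral of (u_c - y)
       uc = 1.0
       yd = (-(a + b*Kp) * y + b*Ki * I) / (1.0 + b*Kd)
       return [yd, uc - y]

   t = np.linspace(0.0, 10.0, 2001)
   sol = solve_ivp(loop, (0.0, 10.0), [0.0, 0.0], t_eval=t, max_step=1e-3, rtol=1e-8, atol=1e-10)

   # Closed loop transfer function (63): Gc(p) = b*Ki / ((b*Kd + 1)p^2 + (a + b*Kp)p + b*Ki)
   Gc = lti([b*Ki], [b*Kd + 1.0, a + b*Kp, b*Ki])
   _, y_tf = step(Gc, T=t)

   print("max |y_sim - y_Gc| =", np.max(np.abs(sol.y[0] - y_tf)))   # should be very small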
V. M. Becerra, 2003