
Nonlinear Control

Lecture # 25
State Feedback Stabilization



Backstepping
η̇ = fa(η) + ga(η)ξ
ξ̇ = fb(η, ξ) + gb(η, ξ)u,  gb ≠ 0,  η ∈ Rⁿ,  ξ, u ∈ R

Stabilize the origin using state feedback


View ξ as “virtual” control input to the system

η̇ = fa (η) + ga (η)ξ

Suppose there is ξ = φ(η), with φ(0) = 0, that stabilizes the origin of

η̇ = fa(η) + ga(η)φ(η)

with a Lyapunov function Va(η) satisfying

(∂Va/∂η)[fa(η) + ga(η)φ(η)] ≤ −W(η)

for some positive definite function W(η)



z = ξ − φ(η)

η̇ = [fa (η) + ga (η)φ(η)] + ga (η)z


ż = ξ̇ − (∂φ/∂η)η̇ = F(η, ξ) + gb(η, ξ)u,  where F(η, ξ) = fb(η, ξ) − (∂φ/∂η)[fa(η) + ga(η)ξ]

V(η, ξ) = Va(η) + ½z² = Va(η) + ½[ξ − φ(η)]²

V̇ = (∂Va/∂η)[fa(η) + ga(η)φ(η)] + (∂Va/∂η)ga(η)z + zF(η, ξ) + zgb(η, ξ)u

  ≤ −W(η) + z[(∂Va/∂η)ga(η) + F(η, ξ) + gb(η, ξ)u]



 
V̇ ≤ −W(η) + z[(∂Va/∂η)ga(η) + F(η, ξ) + gb(η, ξ)u]

u = −(1/gb(η, ξ))[(∂Va/∂η)ga(η) + F(η, ξ) + kz],  k > 0

V̇ ≤ −W(η) − kz²
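The cancellation behind this inequality can be checked symbolically. A minimal sketch with sympy for scalar η; the abstract functions fa, ga, fb, gb, phi, Va are placeholders for the problem data, and F is the drift of the z-dynamics:

```python
import sympy as sp

# Scalar version of the backstepping construction
eta, xi, k = sp.symbols('eta xi k')
fa, ga, phi, Va = [sp.Function(n)(eta) for n in ('fa', 'ga', 'phi', 'Va')]
fb, gb = [sp.Function(n)(eta, xi) for n in ('fb', 'gb')]

z = xi - phi
# Drift of the z-dynamics: F = fb - (dphi/deta)(fa + ga*xi)
F = fb - sp.diff(phi, eta)*(fa + ga*xi)
# Backstepping control law from the slide
u = -(sp.diff(Va, eta)*ga + F + k*z)/gb

eta_dot = fa + ga*xi
z_dot = fb + gb*u - sp.diff(phi, eta)*eta_dot
Vdot = sp.diff(Va, eta)*eta_dot + z*z_dot

# Vdot should collapse to Va'(eta)*(fa + ga*phi) - k*z^2, i.e. ≤ -W(eta) - k*z^2
residual = sp.simplify(sp.expand(Vdot - (sp.diff(Va, eta)*(fa + ga*phi) - k*z**2)))
assert residual == 0
```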



Example 9.9

ẋ1 = x1² − x1³ + x2,  ẋ2 = u

ẋ1 = x1² − x1³ + x2

x2 = φ(x1) = −x1² − x1  ⇒  ẋ1 = −x1 − x1³

Va(x1) = ½x1²  ⇒  V̇a = −x1² − x1⁴,  ∀ x1 ∈ R

z2 = x2 − φ(x1) = x2 + x1 + x1²

ẋ1 = −x1 − x1³ + z2

ż2 = u + (1 + 2x1)(−x1 − x1³ + z2)



V(x) = ½x1² + ½z2²

V̇ = x1(−x1 − x1³ + z2) + z2[u + (1 + 2x1)(−x1 − x1³ + z2)]

V̇ = −x1² − x1⁴ + z2[x1 + (1 + 2x1)(−x1 − x1³ + z2) + u]

u = −x1 − (1 + 2x1)(−x1 − x1³ + z2) − z2

V̇ = −x1² − x1⁴ − z2²

The origin is globally asymptotically stable
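As a sanity check, the closed loop of Example 9.9 can be simulated; a minimal forward-Euler sketch (step size, horizon, and initial condition are arbitrary choices):

```python
def backstepping_u(x1, x2):
    # Control law of Example 9.9
    z2 = x2 + x1 + x1**2
    return -x1 - (1 + 2*x1)*(-x1 - x1**3 + z2) - z2

def simulate(x1=1.0, x2=1.0, dt=1e-3, T=20.0):
    # Forward Euler on x1' = x1^2 - x1^3 + x2, x2' = u
    for _ in range(int(T/dt)):
        u = backstepping_u(x1, x2)
        x1, x2 = x1 + dt*(x1**2 - x1**3 + x2), x2 + dt*u
    return x1, x2

x1f, x2f = simulate()  # both states approach 0
```

In the (x1, z2) coordinates the closed loop is ẋ1 = −x1 − x1³ + z2, ż2 = −x1 − z2, so the decay is roughly exponential with rate 1.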



Example 9.10

ẋ1 = x1² − x1³ + x2,  ẋ2 = x3,  ẋ3 = u

ẋ1 = x1² − x1³ + x2,  ẋ2 = x3

x3 = −x1 − (1 + 2x1)(−x1 − x1³ + z2) − z2 =: φ(x1, x2)

Va(x) = ½x1² + ½z2²,  V̇a = −x1² − x1⁴ − z2²

z3 = x3 − φ(x1, x2)

ẋ1 = x1² − x1³ + x2,  ẋ2 = φ(x1, x2) + z3

ż3 = u − (∂φ/∂x1)(x1² − x1³ + x2) − (∂φ/∂x2)(φ + z3)



V = Va + ½z3²

V̇ = (∂Va/∂x1)(x1² − x1³ + x2) + (∂Va/∂x2)(z3 + φ)
  + z3[u − (∂φ/∂x1)(x1² − x1³ + x2) − (∂φ/∂x2)(z3 + φ)]

V̇ = −x1² − x1⁴ − (x2 + x1 + x1²)²
  + z3[(∂Va/∂x2) − (∂φ/∂x1)(x1² − x1³ + x2) − (∂φ/∂x2)(z3 + φ) + u]

u = −(∂Va/∂x2) + (∂φ/∂x1)(x1² − x1³ + x2) + (∂φ/∂x2)(z3 + φ) − z3

⇒  V̇ = −x1² − x1⁴ − (x2 + x1 + x1²)² − z3²

The origin is globally asymptotically stable
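The same loop can be checked numerically; in this sketch the partial derivatives of φ are approximated by central differences instead of being computed by hand (the difference step, integration step, and initial condition are arbitrary choices):

```python
def phi(x1, x2):
    # Virtual control for x3 from Example 9.10 (the Example 9.9 law)
    z2 = x2 + x1 + x1**2
    return -x1 - (1 + 2*x1)*(-x1 - x1**3 + z2) - z2

def control(x1, x2, x3, eps=1e-6):
    z2 = x2 + x1 + x1**2          # dVa/dx2 = z2
    z3 = x3 - phi(x1, x2)
    dphi_dx1 = (phi(x1 + eps, x2) - phi(x1 - eps, x2))/(2*eps)
    dphi_dx2 = (phi(x1, x2 + eps) - phi(x1, x2 - eps))/(2*eps)
    # u = -dVa/dx2 + (dphi/dx1)*x1dot + (dphi/dx2)*x2dot - z3, with x2dot = x3
    return -z2 + dphi_dx1*(x1**2 - x1**3 + x2) + dphi_dx2*x3 - z3

def simulate(x1=0.5, x2=0.5, x3=0.5, dt=1e-3, T=30.0):
    # Forward Euler on the three-state chain with u from above
    for _ in range(int(T/dt)):
        u = control(x1, x2, x3)
        x1, x2, x3 = (x1 + dt*(x1**2 - x1**3 + x2),
                      x2 + dt*x3,
                      x3 + dt*u)
    return x1, x2, x3

state = simulate()  # all three states approach 0
```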



Strict-Feedback Form

ẋ = f0(x) + g0(x)z1
ż1 = f1(x, z1) + g1(x, z1)z2
ż2 = f2(x, z1, z2) + g2(x, z1, z2)z3
⋮
żk−1 = fk−1(x, z1, …, zk−1) + gk−1(x, z1, …, zk−1)zk
żk = fk(x, z1, …, zk) + gk(x, z1, …, zk)u

gi(x, z1, …, zi) ≠ 0 for 1 ≤ i ≤ k



Example 9.12

ẋ = −x + x²z,  ż = u

ẋ = −x + x²z

z = 0 ⇒ ẋ = −x,  Va = ½x² ⇒ V̇a = −x²

V = ½(x² + z²)

V̇ = x(−x + x²z) + zu = −x² + z(x³ + u)

u = −x³ − kz,  k > 0  ⇒  V̇ = −x² − kz²

Global stabilization

Compare with semiglobal stabilization in Example 9.7
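A quick numerical check of the global result (forward Euler; gain, step size, and initial condition are arbitrary choices):

```python
def simulate(x=2.0, z=1.0, k=1.0, dt=1e-3, T=30.0):
    # Closed loop of Example 9.12 under u = -x^3 - k*z
    for _ in range(int(T/dt)):
        u = -x**3 - k*z
        x, z = x + dt*(-x + x**2*z), z + dt*u
    return x, z

xf, zf = simulate()  # both states approach 0
```

With k = 1, V̇ = −x² − z² = −2V, so V decays like e^(−2t) from any initial condition.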



Example 9.13

ẋ = x² − xz,  ż = u

ẋ = x² − xz

z = x + x² ⇒ ẋ = −x³,  V0(x) = ½x² ⇒ V̇0 = −x⁴

V = V0 + ½(z − x − x²)²

V̇ = −x⁴ + (z − x − x²)[−x² + u − (1 + 2x)(x² − xz)]

u = (1 + 2x)(x² − xz) + x² − k(z − x − x²),  k > 0

V̇ = −x⁴ − k(z − x − x²)²

Global stabilization
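Numerically (forward Euler; gain, step size, and initial condition are arbitrary choices). Note that near the origin ẋ ≈ −x³, so x converges only polynomially and the horizon must be long:

```python
def simulate(x=1.0, z=0.0, k=1.0, dt=1e-3, T=50.0):
    # Closed loop of Example 9.13 under the backstepping law
    for _ in range(int(T/dt)):
        u = (1 + 2*x)*(x**2 - x*z) + x**2 - k*(z - x - x**2)
        x, z = x + dt*(x**2 - x*z), z + dt*u
    return x, z

xf, zf = simulate()  # slow (polynomial) convergence toward 0
```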



Passivity-Based Control
ẋ = f (x, u), y = h(x), f (0, 0) = 0

uᵀy ≥ V̇ = (∂V/∂x)f(x, u)

Theorem 9.1
If the system is
(1) passive with a radially unbounded positive definite
storage function and
(2) zero-state observable,
then the origin can be globally stabilized by

u = −φ(y),  φ(0) = 0,  yᵀφ(y) > 0 ∀ y ≠ 0



Proof

V̇ = (∂V/∂x)f(x, −φ(y)) ≤ −yᵀφ(y) ≤ 0

V̇ (x(t)) ≡ 0 ⇒ y(t) ≡ 0 ⇒ u(t) ≡ 0 ⇒ x(t) ≡ 0


Apply the invariance principle

A given system may be made passive by
(1) Choice of output,
(2) Feedback,
or both



Choice of Output
ẋ = f(x) + G(x)u,  (∂V/∂x)f(x) ≤ 0 ∀ x

No output is defined. Choose the output as

y = h(x) := [(∂V/∂x)G(x)]ᵀ

V̇ = (∂V/∂x)f(x) + (∂V/∂x)G(x)u ≤ yᵀu
Check zero-state observability



Example 9.14

ẋ1 = x2,  ẋ2 = −x1³ + u

V(x) = ¼x1⁴ + ½x2²

With u = 0:  V̇ = x1³x2 − x2x1³ = 0

Take y = (∂V/∂x)G = ∂V/∂x2 = x2

Is it zero-state observable?

With u = 0,  y(t) ≡ 0 ⇒ x2(t) ≡ 0 ⇒ x1³(t) ≡ 0 ⇒ x(t) ≡ 0

u = −kx2  or  u = −(2k/π) tan⁻¹(x2),  k > 0
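The closed loop under u = −kx2 can be simulated; since the passivity-based law injects no linear stiffness, x1 converges only slowly (near the origin x2 ≈ −x1³/k, so ẋ1 ≈ −x1³/k), and the horizon is correspondingly long. Gain, step size, and initial condition are arbitrary choices:

```python
def simulate(x1=1.0, x2=0.0, k=1.0, dt=5e-3, T=200.0):
    # Closed loop of Example 9.14 under u = -k*x2
    for _ in range(int(T/dt)):
        u = -k*x2
        x1, x2 = x1 + dt*x2, x2 + dt*(-x1**3 + u)
    return x1, x2

x1f, x2f = simulate()  # slow convergence toward the origin
```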



Feedback Passivation
Definition
The system

ẋ = f(x) + G(x)u,  y = h(x)  (∗)

is equivalent to a passive system if there exists u = α(x) + β(x)v such that

ẋ = f(x) + G(x)α(x) + G(x)β(x)v,  y = h(x)

is passive

Theorem [20]
The system (*) is locally equivalent to a passive system (with
a positive definite storage function) if it has relative degree
one at x = 0 and the zero dynamics have a stable equilibrium
point at the origin with a positive definite Lyapunov function



Example 9.15 (m-link Robot Manipulator)

M(q)q̈ + C(q, q̇)q̇ + Dq̇ + g(q) = u

M = Mᵀ > 0,  (Ṁ − 2C)ᵀ = −(Ṁ − 2C),  D = Dᵀ ≥ 0

Stabilize the system at q = qr:

e = q − qr,  ė = q̇

M(q)ë + C(q, q̇)ė + Dė + g(q) = u

(e = 0, ė = 0) is not an open-loop equilibrium point

u = g(q) − Kp e + v,  Kp = Kpᵀ > 0

M(q)ë + C(q, q̇)ė + Dė + Kp e = v



M(q)ë + C(q, q̇)ė + D ė + Kp e = v

V = ½ėᵀM(q)ė + ½eᵀKp e

V̇ = ½ėᵀ(Ṁ − 2C)ė − ėᵀDė − ėᵀKp e + ėᵀv + eᵀKp ė ≤ ėᵀv

y = ė

Is it zero-state observable? Set v = 0:

ė(t) ≡ 0 ⇒ ë(t) ≡ 0 ⇒ Kp e(t) ≡ 0 ⇒ e(t) ≡ 0

v = −φ(ė),  φ(0) = 0,  ėᵀφ(ė) > 0 ∀ ė ≠ 0

u = g(q) − Kp e − φ(ė)

Special case:  u = g(q) − Kp e − Kd ė,  Kd = Kdᵀ > 0
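A minimal numerical check of the PD-plus-gravity-compensation special case for a single-link arm, where M = ml², C = 0, D = 0, and g(q) = mgl·sin q (all parameter values, gains, and the setpoint are arbitrary choices):

```python
import math

def simulate(q=2.0, qd=0.0, qr=0.5, kp=4.0, kd=2.0,
             m=1.0, l=1.0, grav=9.81, dt=1e-3, T=20.0):
    # Single-link arm: m*l^2*q'' + m*grav*l*sin(q) = u, forward Euler
    for _ in range(int(T/dt)):
        e = q - qr
        # Gravity compensation + PD (the special case above)
        u = m*grav*l*math.sin(q) - kp*e - kd*qd
        qdd = (u - m*grav*l*math.sin(q))/(m*l**2)
        q, qd = q + dt*qd, qd + dt*qdd
    return q, qd

qf, qdf = simulate()  # q -> qr, qd -> 0
```

After the gravity term cancels, the error dynamics are the linear damped oscillator ë = (−Kp e − Kd ė)/(ml²), so e → 0 exponentially.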

