
Nonlinear Systems and Control

Lecture # 38

Observers

Exact Observers

Observer with Linear Error Dynamics

Observer Form:

\dot{x} = Ax + \phi(y, u), \quad y = Cx

where (A, C) is observable, x \in \mathbb{R}^n, \; u \in \mathbb{R}^m, \; y \in \mathbb{R}^p

From Lecture # 24: An n-dimensional SO system

\dot{x} = f(x) + g(x)u, \quad y = h(x)

is transformable into the observer form if and only if

\Phi = \left[ h, \; L_f h, \; \dots, \; L_f^{n-1} h \right]^T, \quad \operatorname{rank} \frac{\partial \Phi}{\partial x}(x) = n

b = \left[ 0, \; \dots, \; 0, \; 1 \right]^T, \quad \frac{\partial \Phi}{\partial x}\, \tau = b
\left[ \operatorname{ad}_f^i \tau, \; \operatorname{ad}_f^j \tau \right] = 0, \quad 0 \le i, j \le n-1

\left[ g, \; \operatorname{ad}_f^j \tau \right] = 0, \quad 0 \le j \le n-2

Change of variables:

\tau_i = (-1)^{i-1} \operatorname{ad}_f^{i-1} \tau, \quad 1 \le i \le n

\frac{\partial T}{\partial x} \left[ \tau_1, \; \tau_2, \; \dots, \; \tau_n \right] = I, \quad z = T(x)
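A rough symbolic sketch of the first steps of this test, assuming SymPy is available; the system is the single-output example used later in this lecture, and only the rank condition and the equation for \tau are checked here (not the bracket conditions):

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -x1**3 - x2**3])   # drift vector field of the example system
h = sp.Matrix([x1])                   # output map y = h(x) = x1

Lfh = h.jacobian(x) * f               # Lie derivative L_f h (1x1 matrix)
Phi = sp.Matrix([h[0], Lfh[0]])       # Phi = [h, L_f h]^T, here n = 2
J = Phi.jacobian(x)

print(J.rank())                       # rank condition: must equal n = 2
tau = J.solve(sp.Matrix([0, 1]))      # solve (dPhi/dx) tau = b with b = [0, 1]^T
print(tau.T)                          # here tau = [0, 1]^T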

\dot{x} = Ax + \phi(y, u), \quad y = Cx

\dot{\hat{x}} = A\hat{x} + \phi(y, u) + H(y - C\hat{x})

\tilde{x} = x - \hat{x}

\dot{\tilde{x}} = (A - HC)\tilde{x}

Design H such that (A - HC) is Hurwitz

What about feedback control?

Let u = \gamma(x) be a globally stabilizing state feedback control

u = \gamma(\hat{x})

\dot{\hat{x}} = A\hat{x} + \phi(y, u) + H(y - C\hat{x})
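A minimal numerical sketch of this gain design, assuming NumPy/SciPy; the pair (A, C) is the one from the example later in this lecture, and the pole locations are an arbitrary stable choice. H is obtained by pole placement on the dual pair (A^T, C^T):

import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [0.0, 0.0]])       # (A, C) from the example later in the lecture
C = np.array([[1.0, 0.0]])

poles = [-1.0, -2.0]                          # assumed stable pole locations for A - HC
H = place_poles(A.T, C.T, poles).gain_matrix.T
print(H.ravel())                              # observer gain
print(np.linalg.eigvals(A - H @ C))           # eigenvalues of A - HC match `poles`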

How would you analyze the closed-loop system?

\dot{x} = Ax + \phi(Cx, \gamma(x - \tilde{x}))

\dot{\tilde{x}} = (A - HC)\tilde{x}

We know that

the origin of \dot{x} = Ax + \phi(Cx, \gamma(x)) is globally asymptotically stable

the origin of \dot{\tilde{x}} = (A - HC)\tilde{x} is globally exponentially stable

What additional assumptions do we need to show that the origin of the closed-loop system is globally asymptotically stable?

Circle Criterion Design

\dot{x} = Ax + \phi(y, u) - L\psi(Mx), \quad y = Cx

where (A, C) is observable, x \in \mathbb{R}^n, \; u \in \mathbb{R}^m, \; y \in \mathbb{R}^p,

Mx \in \mathbb{R}^\rho, \quad \psi(v) = \left[ \psi_1(v_1), \; \dots, \; \psi_\rho(v_\rho) \right]^T

Observer:

\dot{\hat{x}} = A\hat{x} + \phi(y, u) - L\psi(M\hat{x} - N(y - C\hat{x})) + H(y - C\hat{x})

\tilde{x} = x - \hat{x}

\dot{\tilde{x}} = (A - HC)\tilde{x} - L\left[ \psi(Mx) - \psi(M\hat{x} - N(y - C\hat{x})) \right]

\dot{\tilde{x}} = (A - HC)\tilde{x} - L\left[ \psi(Mx) - \psi(Mx - (M + NC)\tilde{x}) \right]

Define

z = (M + NC)\tilde{x}, \quad \tilde{\psi}(t, z) = \psi(Mx(t)) - \psi(Mx(t) - z)
\dot{\tilde{x}} = (A - HC)\tilde{x} - L\tilde{\psi}(t, z), \quad z = (M + NC)\tilde{x}

G(s) \stackrel{\mathrm{def}}{=} (M + NC)\left[ sI - (A - HC) \right]^{-1} L

[Block diagram: negative feedback connection of the linear block G(s) (output z) with the time-varying nonlinearity \tilde{\psi}(t, \cdot) in the feedback path]

\tilde{\psi}(t, z) = \left[ \tilde{\psi}_1(t, z_1), \; \dots, \; \tilde{\psi}_\rho(t, z_\rho) \right]^T

Main Assumption: \psi_i(\cdot) is a nondecreasing function:

(a - b)\left[ \psi_i(a) - \psi_i(b) \right] \ge 0, \quad \forall\, a, b \in \mathbb{R}

If \psi_i(v_i) is continuously differentiable,

\frac{d\psi_i}{dv_i} \ge 0, \quad \forall\, v_i \in \mathbb{R}

z_i \tilde{\psi}_i(t, z_i) = z_i \left[ \psi_i((Mx)_i) - \psi_i((Mx)_i - z_i) \right] \ge 0

\Rightarrow \; z^T \tilde{\psi}(t, z) \ge 0
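A quick numerical sanity check of this sector property, in plain NumPy; the cubic nonlinearity is the one used in the example on the next slides, and the random samples stand in for arbitrary values of (Mx)_i and z_i:

import numpy as np

psi = lambda v: v**3                          # nondecreasing nonlinearity from the example
rng = np.random.default_rng(0)
mx = rng.normal(size=100_000)                 # samples of (M x)_i
z = rng.normal(size=100_000)                  # samples of z_i
products = z * (psi(mx) - psi(mx - z))        # z_i * psi_tilde_i(t, z_i)
print(products.min() >= 0)                    # True: the product is never negative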

By the circle criterion (Theorem 7.1), the origin of

\dot{\tilde{x}} = (A - HC)\tilde{x} - L\tilde{\psi}(t, z), \quad z = (M + NC)\tilde{x}

is globally exponentially stable if

G(s) \stackrel{\mathrm{def}}{=} (M + NC)\left[ sI - (A - HC) \right]^{-1} L

is strictly positive real

Design Problem: Design H and N such that G(s) is strictly positive real

Feasibility can be investigated using an LMI (Arcak & Kokotovic, Automatica, 2001)
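This feasibility test can be prototyped numerically. Below is a minimal sketch, assuming NumPy, CVXPY, and the SCS solver are installed, of a KYP-type sufficient condition for SPR: find P = P^T > 0, Y = PH and N such that P(A - HC) + (A - HC)^T P < 0 and PL = (M + NC)^T. The matrices are taken from the example on the next slide, the variable names are my own, and this is an illustration in the spirit of the Arcak & Kokotovic LMI rather than their exact formulation:

import numpy as np
import cvxpy as cp

# Data from the example: double integrator with y = x1, L = [0, 1]^T, M = [0, 1]
A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.0], [1.0]])
M = np.array([[0.0, 1.0]])
n, p, rho = 2, 1, 1

P = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((n, p))                       # Y = P H
N = cp.Variable((rho, p))
eps = 1e-3
lyap = A.T @ P + P @ A - Y @ C - C.T @ Y.T    # = (A - HC)^T P + P (A - HC)
constraints = [P >> eps * np.eye(n),
               lyap << -eps * np.eye(n),
               P @ L == M.T + C.T @ N.T]      # P L = (M + N C)^T
cp.Problem(cp.Minimize(0), constraints).solve(solver=cp.SCS)

H = np.linalg.solve(P.value, Y.value)         # recover the observer gain from Y = P H
print("H =", H.ravel(), " N =", N.value.item())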
Example:

\dot{x}_1 = x_2, \quad \dot{x}_2 = -x_1^3 - x_2^3 + u, \quad y = x_1

A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 \end{bmatrix}, \quad \phi = \begin{bmatrix} 0 \\ -y^3 + u \end{bmatrix}

L = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad M = \begin{bmatrix} 0 & 1 \end{bmatrix}, \quad \psi(v) = v^3, \quad \frac{d\psi}{dv} = 3v^2 \ge 0

H = \begin{bmatrix} h_1 \\ h_2 \end{bmatrix}, \quad N \text{ scalar}

G(s) = (M + NC)\left[ sI - (A - HC) \right]^{-1} L = \frac{s + N + h_1}{s^2 + h_1 s + h_2}
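This transfer function can be checked symbolically; a short sketch assuming SymPy:

import sympy as sp

s, h1, h2, N = sp.symbols('s h1 h2 N')
A = sp.Matrix([[0, 1], [0, 0]]); C = sp.Matrix([[1, 0]])
L = sp.Matrix([0, 1]);           M = sp.Matrix([[0, 1]])
H = sp.Matrix([h1, h2])

G = (M + N * C) * (s * sp.eye(2) - (A - H * C)).inv() * L
print(sp.cancel(G[0]))   # equals (s + N + h1)/(s**2 + h1*s + h2)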

From Exercise 6.7, G(s) is SPR if and only if

h_1 > 0, \quad h_2 > 0, \quad 0 < N + h_1 < h_1

Take h_1 = 2, \; h_2 = 1, \; N = -\tfrac{1}{2}:

G(s) = \frac{s + \tfrac{3}{2}}{(s + 1)^2}

\dot{\hat{x}}_1 = \hat{x}_2 + 2(y - \hat{x}_1)

\dot{\hat{x}}_2 = -y^3 + u - \left[ \hat{x}_2 + \tfrac{1}{2}(y - \hat{x}_1) \right]^3 + (y - \hat{x}_1)
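A short simulation sketch of this observer, assuming NumPy/SciPy; the input signal and the initial conditions are arbitrary choices for illustration. The plant and the observer are integrated together and the estimation error should decay:

import numpy as np
from scipy.integrate import solve_ivp

def u(t):
    return np.sin(t)                       # assumed test input (any bounded signal)

def dynamics(t, w):
    x1, x2, xh1, xh2 = w                   # plant state and observer state
    y = x1
    e = y - xh1
    dx1 = x2                               # plant: x1' = x2, x2' = -x1^3 - x2^3 + u
    dx2 = -x1**3 - x2**3 + u(t)
    dxh1 = xh2 + 2.0 * e                   # observer with H = [2, 1]^T, N = -1/2
    dxh2 = -y**3 + u(t) - (xh2 + 0.5 * e)**3 + e
    return [dx1, dx2, dxh1, dxh2]

sol = solve_ivp(dynamics, (0.0, 10.0), [1.0, -1.0, 0.0, 0.0], max_step=0.01)
err = np.hypot(sol.y[0] - sol.y[2], sol.y[1] - sol.y[3])
print(err[0], err[-1])                     # estimation error norm at t = 0 and t = 10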

What about feedback control?
Let u = \gamma(x) be a globally stabilizing state feedback control

Closed-loop system under output feedback:

\dot{x} = Ax + \phi(y, \gamma(x - \tilde{x})) - L\psi(Mx)

\dot{\tilde{x}} = (A - HC)\tilde{x} - L\tilde{\psi}(t, z), \quad z = (M + NC)\tilde{x}

How would you analyze the closed-loop system?

\tilde{\psi}(t, z) depends on x(t). How would you show that \tilde{\psi} is well defined?

What about the effect of uncertainty?

