L4 MRASeight

This document discusses model reference adaptive control (MRAC). It begins with a brief history of MRAC, including its origins in flight control problems in the 1950s and early work using the MIT rule and Lyapunov stability theory. The document then outlines several key aspects of MRAC, including adaptation laws based on Lyapunov theory and passivity, dealing with nonlinear systems, and gradient algorithms that minimize tracking error. It provides context on early applications of adaptive control to systems like the X-15 aircraft.


Model Reference Adaptive Control

K. J. Åström
Department of Automatic Control, LTH
Lund University
February 9, 2021

1. Introduction
2. The MIT Rule
3. Lyapunov Theory
4. Adaptation Laws based on Lyapunov Theory
5. Passivity
6. Adaptation Laws based on Passivity
7. Nonlinear Systems
8. Summary

Introduction

◮ Driven by flight control - a servo problem
◮ MRAS and the MIT rule, Whitaker 1959
◮ Empirical evidence of instability - modified adaptation laws
◮ Lyapunov theory - time domain
  Butchart and Shackloth, Synthesis of model reference systems by Lyapunov's second method, 1965
◮ Passivity theory - frequency domain
  A. I. Lur'e problem - linear system with one nonlinearity, 1944
  V. M. Popov, Absolute stability of nonlinear systems of automatic control, Automat. Remote Control, 22, 837–875, 1961
  Landau 1969
  Kalman-Yakubovich-Popov Lemma
◮ The augmented error, Monopoli 1974
◮ Stability of MRAS
  Counterexamples: Feuer and Morse 1978
  Egardt 1979
  Goodwin, Ramadge, Caines 1980
  Narendra 1980
  Morse 1980

Direct Adaptive Control

Controller parameters are adjusted directly.

[Block diagram: the command signal uc drives both the model, with output ym, and the controller, with output u; the plant output is y. The adjustment mechanism updates the controller parameters from the error e = y − ym.]

Indirect Adaptive Control

Controller parameters are adjusted indirectly by first estimating parameters of a process model and then designing a controller.

[Block diagram: the parameter adjustment block estimates the process parameters and computes the controller parameters; the controller maps setpoint and output to the control signal, which drives the plant.]

Model Reference Adaptive Control

1. Introduction
2. The MIT Rule
   Background
   Adaptation of feedforward gain
   Normalized feedback law
   Indirect MRAS, L1 Adaptive Control
   No stability guarantee
3. Lyapunov Theory
4. Adaptation Laws based on Lyapunov Theory
5. Passivity
6. Adaptation Laws based on Passivity
7. Nonlinear Systems
8. Summary

Flight Control – Servo Problem

P. C. Gregory, March 1959. Proceedings of the Self Adaptive Flight Control Systems Symposium. Wright Air Development Center, Wright-Patterson Air Force Base, Ohio:

Most of you know that with the advent a few years ago of hypersonic and supersonic aircraft, the Air Force was faced with a control problem. This problem was two-fold; one, it was taking a great deal of time to develop a flight control system; and two, the systems in existence were not capable of fulfilling future Air Force requirements. These systems lacked the ability to control the aircraft satisfactorily under all operating conditions.

Test flights start summer 1961
◮ Honeywell self-oscillating adaptive system on the X-15
◮ MIT model reference adaptive system on the F-101A

Model Reference Adaptive Control – P. Whitaker MIT 1959

We have further suggested the name model-reference adaptive system for the type of system under consideration. A model-reference system is characterized by the fact that the dynamic specifications for a desired system output are embodied in a unit which is called the model-reference for the system, and which forms part of the equipment installation. The command signal input to the control system is also fed to the model. The difference between the output signal of the model and the corresponding output quantity of the system is then the response error. The design objective of the adaptive portion of this type of system is to minimize this response error under all operational conditions of the system.

Specifically the adjustment is done by the MIT Rule.

Mishkin, E. and Brown, L., Adaptive Control Systems. McGraw-Hill, New York, 1961.
Model Reference Adaptive Control – MRAS

[Block diagram: the model produces ym from uc; the controller produces u from uc; the plant produces y; the adjustment mechanism updates the controller parameters from e = y − ym.]

◮ Servo problem
◮ Desired response ym to command signal uc is specified by the model
◮ How to find the parameter adjustment algorithm?

Gradient Algorithms

Tracking error
    e = y − ym
Introduce the loss function
    J(θ) = ½ e²
Change the parameters in the direction of the negative gradient
    dθ/dt = −γ ∂J/∂θ = −γ e ∂e/∂θ
where ∂e/∂θ is the sensitivity derivative. Then
    dJ/dt = e de/dt = e (∂e/∂θ)(dθ/dt) = −γ e² (∂e/∂θ)²
Many other alternatives, for example
    J(e) = |e|
gives
    dθ/dt = −γ (∂e/∂θ) sign(e)
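As a minimal illustration (not from the slides), the MIT rule can be stepped forward with Euler integration. The error model e = (θ − θ0)uc below is a hypothetical choice whose sensitivity derivative is simply uc:

```python
def mit_rule_step(theta, e, de_dtheta, gamma, dt):
    # One Euler step of the MIT rule: dtheta/dt = -gamma * e * de/dtheta
    return theta - gamma * e * de_dtheta * dt

# Toy error model (assumed): e = (theta - theta0) * uc, so de/dtheta = uc
theta0, uc, gamma, dt = 2.0, 1.0, 0.5, 0.01
theta = 0.0
for _ in range(5000):
    e = (theta - theta0) * uc
    theta = mit_rule_step(theta, e, uc, gamma, dt)
# theta is driven toward theta0 = 2
```

With a constant regressor the discrete iteration contracts by the factor (1 − γ uc² dt) per step, so convergence here is exponential.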

Adaptation of Feedforward Gain

Process
    y = kG(p)u
Desired response
    ym = k0 G(p)uc
Controller
    u = θ uc
Error
    e = y − ym = kG(p)θ uc − k0 G(p)uc
Sensitivity derivative
    ∂e/∂θ = kG(p)uc = (k/k0) ym
MIT rule
    dθ/dt = −γ′ (k/k0) ym e = −γ ym e

Block Diagram

[The command uc is fed through k0 G(s) to give ym, and through the gain θ and kG(s) to give y; the error e = y − ym and ym are multiplied and integrated with gain −γ/s to give θ.]
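A simulation sketch of this scheme; the choices G(s) = 1/(s + 1), k = 2, k0 = 1, a constant command and the Euler step size are assumptions for illustration:

```python
# MIT rule for the feedforward gain with G(s) = 1/(s+1) and constant uc = 1.
k, k0, gamma, dt, T = 2.0, 1.0, 1.0, 1e-3, 50.0
uc = 1.0
y = ym = theta = 0.0
for _ in range(int(T / dt)):
    e = y - ym
    y += dt * (-y + k * theta * uc)   # dy/dt  = -y + k*theta*uc
    ym += dt * (-ym + k0 * uc)        # dym/dt = -ym + k0*uc
    theta += dt * (-gamma * ym * e)   # MIT rule: dtheta/dt = -gamma*ym*e
# theta approaches k0/k = 0.5 and the tracking error vanishes
```

For this first-order case the linearized loop has characteristic polynomial s² + s + kγ, so the adaptation is stable for the assumed gains.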

Simulation

[Figure: output y and model output ym over 0–20 s, and the parameter θ, for adaptation gains γ = 0.5, 1 and 2.]

A First Order System

Process
    dy/dt = −ay + bu
Model
    dym/dt = −am ym + bm uc
Controller
    u(t) = θ1 uc(t) − θ2 y(t)
Ideal controller parameters
    θ1 = θ1⁰ = bm/b
    θ2 = θ2⁰ = (am − a)/b
Find a feedback that changes the controller parameters so that the closed-loop response is equal to the desired model.

MIT Rule - First Order System

The error (p = d/dt)
    e = y − ym
    y = (bθ1 / (p + a + bθ2)) uc
Sensitivity derivatives
    ∂e/∂θ1 = (b / (p + a + bθ2)) uc
    ∂e/∂θ2 = −(b²θ1 / (p + a + bθ2)²) uc = −(b / (p + a + bθ2)) y
Approximate
    p + a + bθ2 ≈ p + am
Hence
    dθ1/dt = −γ (am / (p + am)) uc · e
    dθ2/dt = γ (am / (p + am)) y · e
Example: a = 1, b = 0.5, am = bm = 2.

Block Diagram

[Gm(s) gives ym from uc; the controller u = θ1 uc − θ2 y drives G(s); uc and y are filtered through am/(s + am), multiplied by e = y − ym, and integrated with gains −γ/s and γ/s to give θ1 and θ2.]

Simnon Code

CONTINUOUS SYSTEM mras
"MRAS for first-order system with Gm=bm/(s+am)
INPUT y uc
OUTPUT u
STATE ym th1 th2 x1 x2
DER dym dth1 dth2 dx1 dx2
u=th1*uc-th2*y
dym=-am*ym+bm*uc
dx1=-am*x1+am*uc
dx2=-am*x2-am*y
e=y-ym
dth1=-gamma*e*x1
dth2=-gamma*e*x2
am:2 "model parameter
bm:2 "model parameter
gamma:2 "adaptation gain
END

Note that x2 is the filtered signal −(am/(p + am))y, so dth2 = −gamma*e*x2 implements dθ2/dt = γ (am/(p + am)) y · e.

Simulation a = 1, b = 0.5, am = bm = 2

[Figure: output y and model output ym, and control signal u, over 0–100 s; parameters θ1 and θ2 for γ = 0.2, 1 and 5.]
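For readers without Simnon, a rough Python equivalent of the scheme, with the process simulated explicitly; the square-wave command (amplitude 1, period 20 s) and the Euler step are assumptions:

```python
import numpy as np

# First-order MRAS with the MIT rule: a=1, b=0.5, am=bm=2, gamma=1.
a, b, am, bm, gamma, dt, T = 1.0, 0.5, 2.0, 2.0, 1.0, 1e-3, 100.0
n = int(T / dt)
y = ym = th1 = th2 = x1 = x2 = 0.0
err = np.zeros(n)
for i in range(n):
    uc = 1.0 if (i * dt) % 20.0 < 10.0 else -1.0  # square-wave command
    u = th1 * uc - th2 * y
    e = y - ym
    err[i] = e
    y += dt * (-a * y + b * u)          # process
    ym += dt * (-am * ym + bm * uc)     # reference model
    x1 += dt * (-am * x1 + am * uc)     # filtered command, (am/(p+am))uc
    x2 += dt * (-am * x2 - am * y)      # filtered output, -(am/(p+am))y
    th1 += dt * (-gamma * e * x1)       # MIT rule
    th2 += dt * (-gamma * e * x2)
```

The tracking error should shrink over the 100 s horizon as θ1 and θ2 drift toward the ideal values 4 and 2.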

Good Control but Bad Parameters?

The closed-loop transfer function is
    Gcl(s) = θ1 G(s) / (1 + θ2 G(s)) = θ1 b / (s + a + θ2 b)

[Figure: trajectories of the parameters in the (θ1, θ2) plane for γ = 1.]

Error and Parameter Convergence

Consider adaptation of the feedforward gain
    e = (kθ − k0)uc = k(θ − θ⁰)uc
with θ⁰ = k0/k. Then
    dθ/dt = −γ k² uc² (θ − θ⁰)
Solution
    θ(t) = θ⁰ + (θ(0) − θ⁰) e^(−γ k² It)
where
    It = ∫₀ᵗ uc²(τ) dτ
Convergence rate depends on the input!
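The closed-form solution can be checked numerically; the sinusoidal input and the parameter values below are assumptions for illustration:

```python
import numpy as np

# Compare Euler integration of dtheta/dt = -gamma*k^2*uc^2*(theta - theta0)
# with the closed form theta(t) = theta0 + (theta(0)-theta0)*exp(-gamma*k^2*I_t).
gamma, k, theta0, dt, T = 0.5, 2.0, 1.5, 1e-4, 5.0
t = np.arange(0.0, T, dt)
uc = np.sin(t)
theta = np.empty_like(t)
theta[0] = 0.0
for i in range(1, len(t)):
    theta[i] = theta[i-1] - dt * gamma * k**2 * uc[i-1]**2 * (theta[i-1] - theta0)
It = np.cumsum(uc**2) * dt                       # running integral of uc^2
closed_form = theta0 + (theta[0] - theta0) * np.exp(-gamma * k**2 * It)
```

The two curves agree up to the Euler discretization error, and θ(t) approaches θ⁰ at a rate set entirely by the energy It of the input.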

MIT Rule Does Not Guarantee Stability with Unmodeled Dynamics

Adaptation of feedforward gain with
    G(s) = 1 / (s² + a1 s + a2)
    y = kG(p)u,  ym = k0 G(p)uc
    u = θ uc,  e = y − ym
    dθ/dt = −γ ym e
Parameter equation
    dθ/dt + γ ym kG(p)(θ uc) = γ ym²
Approximate, with uc = uc⁰ and ym = ym⁰ constant:
    dθ/dt + γ ym⁰ uc⁰ kG(p)θ = γ (ym⁰)²
Characteristic equation
    s³ + a1 s² + a2 s + γ ym⁰ uc⁰ k = 0
Stable if μ = γ ym⁰ uc⁰ k < a1 a2; γ < 1 for k = a1 = a2 = uc⁰ = ym⁰ = 1.

MIT Rule Does Not Guarantee Stability

Process: G(s) = 1/(s² + a1 s + a2).
Approximate characteristic equation: s³ + a1 s² + a2 s + γ ym⁰ uc⁰ k = 0.
Stability condition: γ ym⁰ uc⁰ k < a1 a2.
Square-wave command amplitude (a) 0.1, (b) 1 and (c) 3.5.

[Figure: y and ym over 0–100 s for the three amplitudes.]
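The stability boundary μ = a1 a2 from the Routh criterion can be checked by computing the roots of the approximate characteristic polynomial; a1 = a2 = 1 are assumed values:

```python
import numpy as np

def stable(a1, a2, mu):
    # Roots of s^3 + a1 s^2 + a2 s + mu; stable iff all real parts are negative
    roots = np.roots([1.0, a1, a2, mu])
    return bool(np.all(roots.real < 0))

just_below = stable(1.0, 1.0, 0.9)   # mu < a1*a2 = 1 -> stable
just_above = stable(1.0, 1.0, 1.1)   # mu > a1*a2 -> unstable
```

At μ = a1 a2 exactly, a pair of roots sits on the imaginary axis (for a1 = a2 = 1 the polynomial factors as (s + 1)(s² + 1)), which is the boundary of instability.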

Normalized Adaptation Law

Replace the MIT rule
    dθ/dt = −γ φ e
by the normalized rule
    dθ/dt = −γ φ e / (α + φᵀφ)

[Figure: y and ym over 0–100 s for square-wave amplitudes (a) 0.1, (b) 1 and (c) 3.5 with the normalized law.]

Summary

◮ Servo problem
  Model following
  The MIT rule
  Good excitation through reference signal
◮ The error equation
    e(t) = (G(p, θ) − Gm(p)) uc(t)
◮ Gradient procedure
    dθ/dt = −γ φ e,  φ = (∂G(p, θ)/∂θ) uc
◮ Normalized adaptation laws
    dθ/dt = −γ φ e / (α + φᵀφ)
  Laws based on recursive system identification give normalization automatically

Model Reference Adaptive Control

1. Introduction
2. The MIT Rule
3. Lyapunov Theory
   Ordinary differential equations
   Stability
   Lyapunov's idea
   Finding Lyapunov functions
4. Adaptation Laws based on Lyapunov Theory
5. Passivity
6. Adaptation Laws based on Passivity
7. Nonlinear Systems
8. Summary

Alexandr Lyapunov 1857–1918

◮ MS Physico-Math Dept, St Petersburg University 1876
◮ Chebyshev, Markov
◮ PhD Moscow University, The general problem of the stability of motion, 1892
◮ Chair of Mechanics, Kharkov University 1885
◮ Professor of Applied Math, St Petersburg 1902

Ordinary Differential Equations

Consider the solution x(t) = 0 to
    dx/dt = f(x),  f(0) = 0
Existence and uniqueness if f is Lipschitz:
    ‖f(x) − f(y)‖ ≤ L‖x − y‖,  L > 0
Otherwise pathologies are possible.
Many solutions:
    dx/dt = √x,  x(0) = 0  has  x(t) = 0 for t ≤ t0 and x(t) = (t − t0)²/4 for t > t0
Finite escape time:
    dx/dt = x²,  x(0) = 1  has  x(t) = 1/(1 − t)

Lyapunov Stability

Definition
The solution x(t) = 0 is stable if for given ε > 0 there exists a number δ(ε) > 0 such that all solutions with initial conditions
    ‖x(0)‖ < δ
have the property
    ‖x(t)‖ < ε for 0 ≤ t < ∞
The solution is unstable if it is not stable. The solution is asymptotically stable if it is stable and δ can be found such that all solutions with ‖x(0)‖ < δ have the property that ‖x(t)‖ → 0 as t → ∞.

Notice
◮ Stability of a particular solution
◮ Local concept
◮ How to make it global?

Lyapunov's Idea

Inspiration from mechanics - energy function. Stable and unstable equilibria.

[Figure: level curves V(x) = const around x = 0 in the (x1, x2) plane, with the vector field dx/dt crossing them inwards.]

Condition
    (∂V/∂x)ᵀ f(x) < 0

Formalities

Definition
A continuously differentiable function V: Rⁿ → R is called positive definite in a region U ⊂ Rⁿ containing the origin if
1. V(0) = 0
2. V(x) > 0, x ∈ U and x ≠ 0
A function is called positive semidefinite if Condition 2 is replaced by V(x) ≥ 0.

Theorem
If there exists a function V: Rⁿ → R that is positive definite such that
    dV/dt = (∂V/∂x)ᵀ dx/dt = (∂V/∂x)ᵀ f(x) = −W(x)
is negative semidefinite, then the solution x(t) = 0 is stable. If dV/dt is negative definite, then the solution is also asymptotically stable.

Time-Varying Systems

    dx/dt = f(x, t)

Definition
The solution x(t) = 0 is uniformly stable if for ε > 0 there exists a number δ(ε) > 0, independent of t0, such that
    ‖x(t0)‖ < δ  ⇒  ‖x(t)‖ < ε  ∀t ≥ t0 ≥ 0
The solution is uniformly asymptotically stable if it is uniformly stable and there is c > 0, independent of t0, such that x(t) → 0 as t → ∞, uniformly in t0, for all ‖x(t0)‖ < c.

Definition
A continuous function α: [0, a) → [0, ∞) is said to belong to class K if it is strictly increasing and α(0) = 0. It is said to belong to class K∞ if a = ∞ and α(r) → ∞ as r → ∞.

Lyapunov Functions for Linear Systems

Let the linear system
    dx/dt = Ax
be stable. Pick Q positive definite. The Lyapunov equation
    AᵀP + PA = −Q
always has a unique solution with P positive definite, and the function
    V(x) = xᵀPx
is a Lyapunov function.
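Such a P can be computed numerically; a small sketch using SciPy, with an assumed example system:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Stable A (eigenvalues -1 and -2); solve A^T P + P A = -Q with Q = I.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Q = np.eye(2)
# solve_continuous_lyapunov(M, R) solves M X + X M^H = R, so pass M = A^T, R = -Q
P = solve_continuous_lyapunov(A.T, -Q)
```

Since A is stable, P comes out symmetric and positive definite, so V(x) = xᵀPx is a valid Lyapunov function.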

Lyapunov Theorem

Theorem
Let x = 0 be an equilibrium point and D = {x ∈ Rⁿ : ‖x‖ < r}. Let V be a continuously differentiable function such that
    α1(‖x‖) ≤ V(x, t) ≤ α2(‖x‖)
    dV/dt = ∂V/∂t + (∂V/∂x) f(x, t) ≤ −α3(‖x‖)
for all t ≥ 0, where α1, α2, and α3 are class K functions. Then x = 0 is uniformly asymptotically stable.

Model Reference Adaptive Control

1. Introduction
2. The MIT Rule
3. Lyapunov Theory
4. Adaptation Laws based on Lyapunov Theory
   Feedforward Gain
   First Order System
   Linear Systems - State Feedback
   Linear Systems - Output Feedback
   Kalman-Yakubovich Lemma - Passivity
5. Passivity
6. Adaptation Laws based on Passivity
7. Nonlinear Systems
8. Summary

Adaptation Laws based on Lyapunov Theory

◮ Replace ad hoc designs with designs that give guaranteed stability
◮ Lyapunov function V(x) > 0 positive definite
    dx/dt = f(x),  dV/dt = (∂V/∂x) f(x) < 0
◮ Determine a controller structure
◮ Derive the error equation
◮ Find a Lyapunov function
◮ dV/dt ≤ 0, Barbalat's lemma
◮ Determine an adaptation law

Adaptation of Feedforward Gain

Process model:
    dy/dt = −ay + ku
Desired response:
    dym/dt = −aym + k0 uc
Controller:
    u = θ uc
Introduce the error e = y − ym; the error equation becomes
    de/dt = −ae + (kθ − k0)uc
Candidate Lyapunov function
    V(e, θ) = (γ/2) e² + (k/2)(θ − k0/k)²
Time derivative
    dV/dt = γe(−ae + (kθ − k0)uc) + k(θ − k0/k) dθ/dt
          = −γae² + (kθ − k0)(dθ/dt + γ uc e)
Choosing the adaptation law dθ/dt = −γ uc e gives dV/dt = −γae² ≤ 0.

Adaptation of Feedforward Gain: MIT Rule and Lyapunov Rules

MIT rule (a):
    dθ/dt = −γ ym e
Lyapunov rule (b):
    dθ/dt = −γ uc e
Example: G(s) = 1/(s + 1), sinusoidal input of varying frequency.

[Block diagrams: in (a) the model output ym multiplies the error e before the integrator −γ/s; in (b) the command uc multiplies the error instead.]

[Figure: estimates θ̂ over 0–20 s under the Lyapunov rule and the MIT rule for the sinusoidal input.]

A minor change of architecture (moving one wire) has dramatic effect!

First Order System

Process model and desired behavior
    dy/dt = −ay + bu,  dym/dt = −am ym + bm uc
Controller and error
    u = θ1 uc − θ2 y,  e = y − ym
Ideal parameters
    θ1 = bm/b,  θ2 = (am − a)/b
The derivative of the error
    de/dt = −am e − (bθ2 + a − am)y + (bθ1 − bm)uc
Candidate for Lyapunov function
    V(e, θ1, θ2) = ½ (e² + (1/(bγ))(bθ2 + a − am)² + (1/(bγ))(bθ1 − bm)²)

Derivative of Lyapunov Function

Derivative of error and Lyapunov function
    dV/dt = e de/dt + (1/γ)(bθ2 + a − am) dθ2/dt + (1/γ)(bθ1 − bm) dθ1/dt
          = −am e² + (1/γ)(bθ2 + a − am)(dθ2/dt − γye) + (1/γ)(bθ1 − bm)(dθ1/dt + γuc e)
Adaptation law
    dθ1/dt = −γ uc e,  dθ2/dt = γ ye  ⇒  dV/dt = −am e²
Error will always go to zero; what about the parameters?
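A quick numerical check that V is nonincreasing under this adaptation law; the square-wave command and the gains are assumed values:

```python
import numpy as np

# Lyapunov-rule MRAS for the first-order system: a=1, b=0.5, am=bm=2, gamma=1.
a, b, am, bm, gamma, dt, T = 1.0, 0.5, 2.0, 2.0, 1.0, 1e-3, 100.0
n = int(T / dt)
y = ym = th1 = th2 = 0.0
V = np.zeros(n)
for i in range(n):
    uc = 1.0 if (i * dt) % 20.0 < 10.0 else -1.0
    e = y - ym
    V[i] = 0.5 * (e**2 + (b*th2 + a - am)**2 / (b*gamma)
                  + (b*th1 - bm)**2 / (b*gamma))
    u = th1 * uc - th2 * y
    y += dt * (-a * y + b * u)          # process
    ym += dt * (-am * ym + bm * uc)     # reference model
    th1 += dt * (-gamma * uc * e)       # Lyapunov rule
    th2 += dt * (gamma * y * e)
```

In continuous time dV/dt = −am e² ≤ 0, so the recorded V should decrease apart from small Euler discretization effects.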

Simulation a = 1, b = 0.5, am = bm = 2, γ = 1

[Figure: (a) output y and model output ym; (b) control signal u, over 0–100 s; parameters θ1 and θ2 for γ = 0.2, 1 and 5.]

Comparison with MIT rule

[Block diagrams of the Lyapunov rule and the MIT rule: the structures are identical except that the MIT rule filters uc and y through am/(s + am) before the multipliers.]

A minor change of architecture (removing two filters) has dramatic effect!

State Feedback

Process model
    dx/dt = Ax + Bu
Desired response to command signals
    dxm/dt = Am xm + Bm uc
Control law
    u = Muc − Lx
The closed-loop system
    dx/dt = (A − BL)x + BMuc = Ac(θ)x + Bc(θ)uc
Parametrization
    Ac(θ⁰) = Am
    Bc(θ⁰) = Bm
Compatibility conditions
    A − Am = BL
    Bm = BM

The Error Equation

Error
    e = x − xm
    de/dt = dx/dt − dxm/dt = Ax + Bu − Am xm − Bm uc
Hence
    de/dt = Am e + (A − Am − BL)x + (BM − Bm)uc
          = Am e + (Ac(θ) − Am)x + (Bc(θ) − Bm)uc
          = Am e + Ψ(θ − θ⁰)
The Lyapunov Function

The error equation
    de/dt = Am e + Ψ(θ − θ⁰)
Try
    V(e, θ) = ½ (γ eᵀPe + (θ − θ⁰)ᵀ(θ − θ⁰))
Hence
    dV/dt = −(γ/2) eᵀQe + γ(θ − θ⁰)ᵀΨᵀPe + (θ − θ⁰)ᵀ dθ/dt
          = −(γ/2) eᵀQe + (θ − θ⁰)ᵀ (dθ/dt + γ ΨᵀPe)
where Q is positive definite and
    AmᵀP + PAm = −Q
Adaptation law: dθ/dt = −γ ΨᵀPe  ⇒  dV/dt = −(γ/2) eᵀQe

Output Feedback

Process
    dx/dt = Ax + B(θ − θ⁰)uc
    e = Cx
Adaptation law
    dθ/dt = −γ uc BᵀPx
Can we find P such that
    BᵀP = C?
The adaptation law then becomes
    dθ/dt = −γ uc e

Kalman-Yakubovich Lemma

Definition
A rational transfer function G with real coefficients is positive real (PR) if
    Re G(s) ≥ 0 for Re s ≥ 0
A transfer function G is strictly positive real (SPR) if G(s − ε) is positive real for some real ε > 0.

Lemma
The transfer function
    G(s) = C(sI − A)⁻¹B
is strictly positive real if and only if there exist positive definite matrices P and Q such that
    AᵀP + PA = −Q
and
    BᵀP = C

Summary

◮ Lyapunov Stability Theory
  Stability concept
  Lyapunov's theorem
  How to use it?
◮ Adaptive laws with guaranteed stability
◮ Simple design procedure
  Find control law
  Derive error equation
  Find Lyapunov function
  Choose adjustment law so that dV/dt ≤ 0
◮ Remark
  Strong similarities with MIT rule
  Often simpler
  No normalization
  Connection to passivity!

Model Reference Adaptive Control

1. Introduction
2. The MIT Rule
3. Lyapunov Theory
4. Adaptation Laws based on Lyapunov Theory
5. Passivity
   Input-output view of systems
   The notions of gain and phase
   The small gain theorem
   The passivity theorem
6. Adaptation Laws based on Passivity
7. Nonlinear Systems
8. Summary

The Input-Output View of Systems

Introduction
  Conceptually - the table
  White boxes and black boxes
  Input-output descriptions
  How to generalize from linear to nonlinear?
The Small Gain Theorem (SGT)
  The notion of gain
  Examples
  The main result
The Passivity Theorem (PT)
  Passivity and phase
  Examples
Relations between SGT and PT
Applications to Adaptive Control
  The augmented error
  MRAS and STR
Conclusions

The Notion of Gain

Signal spaces
    L2:  ‖u‖ = (∫₋∞∞ u²(t) dt)^(1/2)
    L∞:  ‖u‖ = sup₀≤t<∞ |u(t)|
Extended spaces
    xT(t) = x(t) for 0 ≤ t ≤ T, and 0 for t > T
    x ∈ Xe if xT ∈ X for all T
The notion of gain (= operator norm)
    γ(S) = sup over u ∈ Xe of ‖Su‖ / ‖u‖
The gain of S is the smallest value γ such that
    ‖Su‖ ≤ γ(S)‖u‖ for all u ∈ Xe

Examples

Linear systems with signals in L2e
    ‖y‖ ≤ max over ω of |G(iω)| · ‖u‖,  worst input u0 = sin ωt
Linear systems with signals in L∞
    γ(G) = ∫₀∞ |h(τ)| dτ,  worst input u0(s) = u0 sign(h(t − s))
Static nonlinear system
    [Figure: a nonlinearity f(x) confined between the lines f = γx and f = −γx; the gain is γ.]
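The L2 gain max |G(iω)| can be approximated on a frequency grid; the resonant second-order system below is an assumed example:

```python
import numpy as np

# L2 gain of G(s) = 1/(s^2 + 0.2 s + 1): the peak of |G(i w)| over frequency.
w = np.linspace(0.0, 10.0, 100001)
G = 1.0 / ((1j * w)**2 + 0.2 * (1j * w) + 1.0)
l2_gain = np.max(np.abs(G))
```

For this damping (ζ = 0.1) the analytical peak is 1/(2ζ√(1 − ζ²)) ≈ 5.025 near ω ≈ 1, which the grid search reproduces.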

The Small Gain Theorem

Definition
A system is called bounded-input, bounded-output (BIBO) stable if the system has bounded gain.

Theorem
Consider the feedback system with H1 in the forward path, driven by e = u − H2 y, and H2 in the feedback path. Let γ1 and γ2 be the gains of the systems H1 and H2. The closed-loop system is BIBO stable if
    γ1 γ2 < 1
and its gain is less than
    γ = γ1 / (1 − γ1 γ2)

Passivity

The idea
  Energy dissipation
  Capacitors, inductors, resistors
  Mass, spring, dashpot
  Circuit theory
  Mechatronics
Mathematical formalization
The notion of phase
Examples
  Positive real linear systems
The passivity theorem
Using passivity in system design

A Formal Statement

Definition
A system with input u and output y is passive if
    ⟨y | u⟩ ≥ 0
The system is input strictly passive (ISP) if there exists ε > 0 such that
    ⟨y | u⟩ ≥ ε‖u‖²
and output strictly passive (OSP) if there exists ε > 0 such that
    ⟨y | u⟩ ≥ ε‖y‖²

The Notion of Phase

Let the signal space have an inner product. The phase for a given input u can then be defined as
    cos φ = ⟨y | u⟩ / (‖u‖ ‖y‖) = ⟨Hu | u⟩ / (‖u‖ ‖Hu‖)
Passivity implies that the phase is in the range
    −π/2 ≤ φ ≤ π/2
Intuitively
◮ Think about u and y as voltage and current, or force and velocity
◮ Causality?

Linear Time-invariant Systems

    ⟨y | u⟩ = ∫₀∞ y(t)u(t) dt = (1/2π) ∫₋∞∞ Y(iω)U(−iω) dω
            = (1/2π) ∫₋∞∞ G(iω)U(iω)U(−iω) dω
            = (1/π) ∫₀∞ Re{G(iω)} |U(iω)|² dω

Definition
A rational transfer function G with real coefficients is positive real (PR) if
    Re G(s) ≥ 0 for Re s ≥ 0
A transfer function G is strictly positive real (SPR) if G(s − ε) is positive real for some real ε > 0.

Characterizing Positive Real Transfer Functions

Theorem
A rational transfer function G(s) with real coefficients is PR if and only if the following conditions hold.
(i) The function has no poles in the right half-plane.
(ii) If the function has poles on the imaginary axis or at infinity, they are simple poles with positive residues.
(iii) The real part of G is nonnegative along the iω axis, that is,
    Re(G(iω)) ≥ 0
A transfer function is SPR if conditions (i) and (iii) hold and if condition (ii) is replaced by the condition that G(s) has no poles or zeros on the imaginary axis.

Examples

Recall
    ⟨y | u⟩ = (1/π) ∫₀∞ Re{G(iω)} |U(iω)|² dω
◮ Positive real (PR):  Re G(iω) ≥ 0
◮ Input strictly passive (ISP):  Re G(iω) ≥ ε > 0
◮ Output strictly passive (OSP):  Re G(iω) ≥ ε|G(iω)|²

Example
    G(s) = s + 1            SPR and ISP, not OSP
    G(s) = 1/(s + 1)        SPR and OSP, not ISP
    G(s) = (s + 1)/(s + 2)  SPR, OSP and ISP
    G(s) = 1/s              PR, not SPR, not OSP or ISP

Nonlinear Static Systems y = f(u)

    ⟨y | u⟩ = ∫₀∞ f(u(t))u(t) dt
◮ Passive if xf(x) ≥ 0
◮ Input strictly passive (ISP) if xf(x) ≥ δ|x|²
◮ Output strictly passive (OSP) if xf(x) ≥ δf²(x)
Geometric interpretation
◮ f(x) = x + x³ input strictly passive
◮ f(x) = x/(1 + |x|) output strictly passive
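The frequency-domain conditions can be checked numerically; here for the first two examples, on a finite grid whose limits are an assumption:

```python
import numpy as np

w = np.linspace(0.0, 100.0, 200001)
G1 = 1j * w + 1.0          # G(s) = s + 1
G2 = 1.0 / (1j * w + 1.0)  # G(s) = 1/(s + 1)

isp_margin_G1 = np.min(G1.real)                   # stays at 1 -> ISP
osp_margin_G1 = np.min(G1.real / np.abs(G1)**2)   # -> 0 as w grows: not OSP
isp_margin_G2 = np.min(G2.real)                   # -> 0 as w grows: not ISP
osp_margin_G2 = np.min(G2.real / np.abs(G2)**2)   # stays at 1 -> OSP
```

For G(s) = s + 1 the real part is identically 1 while |G(iω)|² grows without bound, and for G(s) = 1/(s + 1) the real part equals |G(iω)|², which makes the ISP/OSP classification visible directly.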

The Passivity Theorem

Theorem
Consider a system obtained by connecting two systems H1 and H2 in a feedback loop, with H1 in the forward path driven by e = u − H2 y. Let H1 be strictly output passive and H2 be passive. The closed-loop system is then BIBO stable.

Intuitive: think about phase.

Use of passivity in system design
◮ Force control in robotics
◮ Remote manipulator - Mark Spong

Relations Between Small Gain and Passivity Theorems

[Block diagrams a)–d): loop transformations connecting a passivity-theorem loop to a small-gain loop.]

    S1 = (H1 + I)⁻¹(H1 − I),  H1 = −(S1 + I)(S1 − I)⁻¹

This transformation maps the left half-plane to the unit circle.

Summary

◮ Passivity is a very powerful idea
◮ Feedback and parallel connections of passive systems are passive
◮ Related to
  Energy
  Phase shift
◮ Can be used in many different ways
  Stable control laws
  Remote control
  Adaptive control

Model Reference Adaptive Control

1. Introduction
2. The MIT Rule
3. Lyapunov Theory
4. Adaptation Laws based on Lyapunov Theory
5. Passivity
6. Adaptation Laws based on Passivity
   Stability
   Augmented error
   PI adjustment
7. Backstepping
8. Summary

Adaptation of Feedforward Gain

[Block diagrams: (a) the MIT rule, where the model output ym multiplies the error before the integrator −γ/s; (b) the Lyapunov rule, where the command uc multiplies the error. Redraw (b) for the passivity analysis.]

Analysis

Lemma
Let r be a bounded square integrable function, and let G(s) be a transfer function that is positive real. The system whose input-output relation is given by
    y = r (G(p)(r u))
is then passive.

Example: PI adjustments
    θ(t) = −γ1 uc(t)e(t) − γ2 ∫₀ᵗ uc(τ)e(τ) dτ
Explore the advantages of PI adjustments analytically and by simulation!

A Modified Algorithm

[Block diagram: the loop with error e = G(θ − θ⁰)uc is redrawn so that the linear part is separated from the adjustment mechanism −γ/s, with the command uc filtered before the multiplier.]

The Augmented Error

Consider the error
    e = G(θ − θ⁰)uc
      = G(θ − θ⁰)uc + (θ − θ⁰)Guc − (θ − θ⁰)Guc
Introduce the augmented error
    ε = e + η
where
    η = (θ − θ⁰)Guc − G(θ − θ⁰)uc = θGuc − Gθuc
so that ε = (θ − θ⁰)Guc. Notice that η is zero under stationary conditions.
Use the adaptation law
    dθ/dt = −γ ε Guc
Stability now follows from the passivity theorem.

Make Gc G SPR. Still a problem with pole excess > 1. The idea can be extended to the general case, but the details are messy.

A Minor Extension

Factor
    G = G1 G2
where the transfer function G1 is SPR. The error e = y − ym can then be written as
    e = G(θ − θ⁰)uc = (G1 G2)(θ − θ⁰)uc
      = G1 (G2(θ − θ⁰)uc + (θ − θ⁰)G2 uc − (θ − θ⁰)G2 uc)
Introduce
    ε = e + η
where η is the error augmentation defined by
    η = G1((θ − θ⁰)G2 uc) − G((θ − θ⁰)uc) = G1(θ G2 uc) − Gθ uc
so that ε = G1((θ − θ⁰)G2 uc).
Use the adaptation law
    dθ/dt = −γ ε G2 uc

MRAS with Augmented Error - Monopoli

[Block diagram: model k0 G and process kG with controller gain θ; the augmentation η is formed from G1 and G2 and added to e to give ε, which drives the adjustment −γ/s.]

Summary

◮ The concepts
  Notions of gain, phase and passivity
  Positive real (PR) and strictly positive real (SPR)
◮ The key results
  The small gain theorem
  The passivity theorem
  Equivalence - complex variable, LHP to interior of unit circle

Model Reference Adaptive Control

1. Introduction
2. The MIT Rule
3. Lyapunov Theory
4. Adaptation Laws based on Lyapunov Theory
5. Passivity
6. Adaptation Laws based on Passivity
7. Nonlinear Systems
   Feedback linearization
   Adaptive feedback linearization
8. Summary
Feedback Linearization - Example

Consider the system
    dx1/dt = x2 + f(x1),  dx2/dt = u
where f is a differentiable function. Introduce new coordinates
    ξ1 = x1,  ξ2 = x2 + f(x1)
Then
    dξ1/dt = ξ2,  dξ2/dt = ξ2 f′(ξ1) + u
Introduce the control law
    u = −a2 ξ1 − a1 ξ2 − ξ2 f′(ξ1) + v
which gives the closed-loop system
    dξ/dt = [0 1; −a2 −a1] ξ + [0; 1] v
This system is linear with the characteristic equation
    s² + a1 s + a2 = 0

Feedback Linearization - Example ...

Transforming back to the original coordinates, the control law becomes
    u = −a2 x1 − (a1 + f′(x1))(x2 + f(x1)) + v
The design method used in the example can be interpreted as the analog of pole placement design for linear systems. The closed-loop system obtained will behave like a linear system. This is the reason why the method is called feedback linearization.
The system in the example is very special, but the method can be applied to several other systems, for example
    dx/dt = f(x) + u g(x)
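A simulation sketch of the example, with the assumed choices f(x) = x³, a1 = 2, a2 = 1 and v = 0:

```python
# Feedback linearization of dx1/dt = x2 + f(x1), dx2/dt = u with f(x) = x^3.
f = lambda x: x**3
fp = lambda x: 3 * x**2          # f'(x)
a1, a2, v, dt, T = 2.0, 1.0, 0.0, 1e-3, 20.0
x1, x2 = 1.0, 0.0
for _ in range(int(T / dt)):
    u = -a2 * x1 - (a1 + fp(x1)) * (x2 + f(x1)) + v
    x1, x2 = x1 + dt * (x2 + f(x1)), x2 + dt * u
```

In the ξ coordinates the closed loop is linear with characteristic polynomial s² + 2s + 1 (a double pole at −1), so the state decays to the origin despite the cubic nonlinearity.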

Feedback Linearization

Consider
    dx/dt = f(x) + u g(x)
First pick
    ξ1 = h(x)
as a new state variable, where h(x) is chosen so that h′(x)g(x) = 0. The time derivative of ξ1 is
    dξ1/dt = h′(x)(f(x) + u g(x)) = h′(x)f(x)
Since h′(x)g(x) = 0, we introduce the new state variable ξ2 = h′(x)f(x). We proceed as long as the control variable u does not appear explicitly on the right-hand side. In this way we obtain the state variables ξ1 … ξr, which are combined into the vector ξ ∈ Rʳ, where r ≤ n. We also introduce the new state variables η1 … ηn−r, which are combined into the vector η ∈ Rⁿ⁻ʳ. This can be done in many different ways.

Feedback Linearization

We obtain the equations
    dξ1/dt = ξ2
    dξ2/dt = ξ3
    ...
    dξr/dt = α(ξ, η) + u β(ξ, η)
    dη/dt = γ(ξ, η)
The state variables ξ represent a chain of r integrators, where the integer r is the nonlinear equivalent of pole excess. The variables η will not appear if r = n. This case corresponds to a system without zeros.

Feedback Linearization

A design procedure, which is the nonlinear analog of pole placement, can be constructed if β(ξ, η) ≠ 0. If this is the case, we can introduce the feedback law
    u = (1/β(ξ, η)) (−ar ξ1 − ar−1 ξ2 − … − a1 ξr − α(ξ, η) + b0 v)
and the closed-loop system becomes
    dξ/dt = [0 1 0 … 0; 0 0 1 … 0; …; −ar −ar−1 −ar−2 … −a1] ξ + [0; …; 0; b0] v
    dη/dt = γ(ξ, η)
The relation between v and ξ1 is given by a linear dynamical system with the transfer function
    G(s) = b0 / (sʳ + a1 s^(r−1) + … + ar)

Feedback Linearization

The differential equation above has a triangular structure. The vector ξ is governed by a linear system that is decoupled from the variable η. If ξ = 0, η is governed by
    dη/dt = γ(0, η)
This equation represents the zero dynamics. It is necessary for this system to be stable if the proposed control design is going to work. For linear systems the zero dynamics are the dynamics associated with the zeros of the transfer function. Feedback linearization is the nonlinear analog of pole placement for linear systems.

Adaptive Feedback Linearization

Consider the system
    dx1/dt = x2 + θ f(x1),  dx2/dt = u
where θ is an unknown parameter and f is a known differentiable function. Applying the certainty equivalence principle gives the following control law:
    u = −a2 x1 − (a1 + θ̂ f′(x1))(x2 + θ̂ f(x1)) + v
Introducing this into the system equations gives an error equation that is nonlinear in the parameter error. This makes it very difficult to find a parameter adjustment law that gives a stable system. Therefore it is necessary to use another approach.

Adaptive Feedback Linearization ...

Introduce the new coordinates
    ξ1 = x1,  ξ2 = x2 + θ̂ f(x1)
where θ̂ is an estimate of θ. We have
    dξ1/dt = dx1/dt = x2 + θ f(x1) = ξ2 + (θ − θ̂)f(ξ1)
    dξ2/dt = u + (dθ̂/dt) f(x1) + θ̂ (x2 + θ f(x1)) f′(x1)
The control law
    u = −a2 ξ1 − a1 ξ2 − θ̂ (x2 + θ̂ f(x1)) f′(x1) − (dθ̂/dt) f(x1) + v
gives the closed-loop system
    dξ/dt = [0 1; −a2 −a1] ξ + [f(ξ1); θ̂ f(ξ1) f′(ξ1)] (θ − θ̂) + [0; 1] v
Adaptive Feedback Linearization ...

Introduce the reference model
    dxm/dt = [0 1; −a2 −a1] xm + [0; a2] um
Let e = ξ − xm and v = a2 um; the error equation becomes
    de/dt = [0 1; −a2 −a1] e + [f(ξ1); θ̂ f(ξ1) f′(ξ1)] θ̃ = Ae + Bθ̃
where θ̃ = θ − θ̂. The matrix A has all eigenvalues in the left half-plane if a1, a2 > 0; we can then find a matrix P such that
    AᵀP + PA = −I
Choosing the Lyapunov function
    V = eᵀPe + (1/γ) θ̃²
gives
    dV/dt = eᵀ(AᵀP + PA)e + 2θ̃BᵀPe + (2/γ) θ̃ dθ̃/dt
The adaptation law
    dθ̂/dt = γ BᵀPe
gives
    dθ̃/dt = d(θ − θ̂)/dt = −dθ̂/dt = −γ BᵀPe
and the derivative of the Lyapunov function becomes
    dV/dt = −eᵀe
This function is negative as long as any component of the error vector is different from zero, and the tracking error will thus always go to zero.
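A simulation sketch of this scheme; f(x) = sin x, a1 = a2 = 1, γ = 2, the true θ = 2 and the square-wave command are all assumed, and P solves AᵀP + PA = −I for this A:

```python
import numpy as np

# Adaptive feedback linearization: plant dx1/dt = x2 + theta*f(x1), dx2/dt = u.
f, fp = np.sin, np.cos
a1 = a2 = 1.0
theta, gamma, dt, T = 2.0, 2.0, 1e-3, 50.0
A = np.array([[0.0, 1.0], [-a2, -a1]])
P = np.array([[1.5, 0.5], [0.5, 1.0]])   # solves A^T P + P A = -I for this A
x = np.zeros(2)
xm = np.zeros(2)
th_hat = 0.0
n = int(T / dt)
V = np.zeros(n)
for i in range(n):
    um = 1.0 if (i * dt) % 10.0 < 5.0 else -1.0
    xi = np.array([x[0], x[1] + th_hat * f(x[0])])
    e = xi - xm
    B = np.array([f(xi[0]), th_hat * f(xi[0]) * fp(xi[0])])
    V[i] = e @ P @ e + (theta - th_hat)**2 / gamma
    dth = gamma * (B @ P @ e)            # adaptation law dth_hat/dt = gamma*B^T*P*e
    u = (-a2 * xi[0] - a1 * xi[1]
         - th_hat * (x[1] + th_hat * f(x[0])) * fp(x[0])
         - dth * f(x[0]) + a2 * um)
    x = x + dt * np.array([x[1] + theta * f(x[0]), u])   # true plant
    xm = xm + dt * (A @ xm + np.array([0.0, a2 * um]))   # reference model
    th_hat += dt * dth
```

Since dV/dt = −eᵀe in continuous time, the recorded Lyapunov function should decrease once the command excites the loop.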

Summary

◮ Passivity is a powerful concept
◮ Admits design of stable adaptive systems
◮ Strongly intuitive
◮ Straightforward for linear systems
◮ Nonlinear systems are difficult
◮ Feedback linearization
◮ There are other methods, like backstepping

Model Reference Adaptive Control

1. Introduction
2. The MIT Rule
3. Lyapunov Theory
4. Adaptation Laws based on Lyapunov Theory
5. Passivity
6. Adaptation Laws based on Passivity
7. Nonlinear Systems
8. Summary

Summary

◮ Model reference systems are useful
◮ Servo problem
◮ MIT rule
  Simple and straightforward
  Does not guarantee stability
  Passivity gives insight
  MRAS has been used in aerospace
◮ Lyapunov theory
  Guaranteed stability
◮ Passivity
  Guaranteed stability
  Augmented error
◮ Some results for nonlinear systems
