
MIT OpenCourseWare

http://ocw.mit.edu

16.323 Principles of Optimal Control


Spring 2008

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
16.323 Lecture 11

Estimators/Observers

• Bryson Chapter 12

• Gelb – Optimal Estimation


Spr 2008 16.323 11–1
Estimators/Observers

• Problem: So far we have assumed that we have full access to the state x(t) when we designed our controllers.
  – Most often, not all of this information is available.
  – And certainly there is usually error in our knowledge of x.

• Usually can only feed back information that is developed from the sensor measurements.
  – Could try “output feedback” u = Kx ⇒ u = K̂y
  – But this type of controller is hard to design.

• Alternative approach: Develop a replica of the dynamic system that provides an “estimate” of the system states based on the measured output of the system.

• New plan: called a “separation principle”


1. Develop estimate of x(t), called x̂(t).
2. Then switch from u = −Kx(t) to u = −Kx̂(t).

• Two key questions:


– How do we find x̂(t)?
– Will this new plan work? (yes, and very well)



Spr 2008 Estimation Schemes 16.323 11–2

• Assume that the system model is of the form:
  ẋ = Ax + Bu , x(0) unknown
  y = Cy x
  where
  – A, B, and Cy are known – possibly time-varying, but that is suppressed here.
  – u(t) is known
  – Measurable outputs are y(t), and in general Cy ≠ I, so the full state is not measured directly.

• Goal: Develop a dynamic system whose state
  x̂(t) = x(t) ∀ t ≥ 0
  Two primary approaches:
  – Open-loop.
  – Closed-loop.



Spr 2008 Open-loop Estimator 16.323 11–3

• Given that we know the plant matrices and the inputs, we can just
perform a simulation that runs in parallel with the system
x̂˙(t) = Ax̂ + Bu(t)
– Then x̂(t) ≡ x(t) ∀ t provided that x̂(0) = x(0)

[Block diagram: the input u(t) drives the System (A, B, Cy), with state x(t) and measured output y(t), and in parallel an Observer copy (A, B, Cy), with state x̂(t) and output ŷ(t).]

• To analyze this case, start with:


ẋ(t) = Ax(t) + Bu(t)
x̂˙(t) = Ax̂(t) + Bu(t)

• Define the estimation error: x̃(t) = x(t) − x̂(t).


– Now want x̃(t) = 0 ∀ t, but is this realistic?

• Major Problem: We do not know x(0)



Spr 2008 16.323 11–4

• Subtract to get:
  d/dt (x − x̂) = A(x − x̂) ⇒ x̃˙(t) = Ax̃(t)
  which has the solution
  x̃(t) = e^{At} x̃(0)
– Gives the estimation error in terms of the initial error.

• Does this guarantee that x̃ = 0 ∀ t? Or even that x̃ → 0 as t → ∞? (which is a more realistic goal)
  – Response is fine if x̃(0) = 0. But what if x̃(0) ≠ 0?

• If A is stable, then x̃ → 0 as t → ∞, but the dynamics of the estimation error are completely determined by the open-loop dynamics of the system (eigenvalues of A).
  – Could be very slow (illustrated in the sketch below).
  – No obvious way to modify the estimation error dynamics.

• Open-loop estimation is not a very good idea.
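• To see how slow this can be, here is a minimal sketch (assuming the example system introduced on page 11-7, with x̂(0) = 0 so that x̃(0) = x(0)) that just propagates x̃(t) = e^{At} x̃(0):

% Open-loop estimation error: xtilde(t) = expm(A*t)*xtilde(0)
% Example plant from page 11-7; slowest eigenvalue is about -0.18
A = [-1 1.5; 1 -2];
xtilde0 = [-0.5; -1];            % x(0) - xhat(0), with xhat(0) = 0
t = 0:0.1:30;
xtilde = zeros(2, length(t));
for k = 1:length(t)
    xtilde(:,k) = expm(A*t(k))*xtilde0;   % error decays with the open-loop modes
end
plot(t, xtilde); xlabel('time'); ylabel('estimation error')
% The -0.18 mode gives a ~5.5 sec time constant, so the error takes tens of
% seconds to become negligible - with no way to speed it up.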



Spr 2008 Closed-loop Estimator 16.323 11–5

• Obvious fix to problem: use the additional information available:


– How well does the estimated output match the measured output?
Compare: y = Cy x with ŷ = Cy x̂
– Then form ỹ = y − ŷ ≡ Cy x̃

[Block diagram: u(t) drives the System (A, B, Cy), with state x(t) and output y(t), and the Observer (A, B, Cy), with state x̂(t) and output ŷ(t); the output error y − ŷ is fed back into the observer through the gain L.]

• Approach: Feed back ỹ to improve our estimate of the state. Basic form of the estimator is:
  x̂˙(t) = Ax̂(t) + Bu(t) + Lỹ(t)
  ŷ(t) = Cy x̂(t)
  where L is a user-selectable gain matrix.

• Analysis:
  x̃˙ = ẋ − x̂˙ = [Ax + Bu] − [Ax̂ + Bu + L(y − ŷ)]
     = A(x − x̂) − L(Cy x − Cy x̂)
     = Ax̃ − LCy x̃ = (A − LCy)x̃



Spr 2008 16.323 11–6

• So the closed-loop estimation error dynamics are now
  x̃˙ = (A − LCy)x̃ with solution x̃(t) = e^{(A−LCy)t} x̃(0)

• Bottom line: Can select the gain L to attempt to improve the convergence of the estimation error (and/or speed it up).
  – But now must worry about observability of the system [A, Cy].

• Note the similarity:
  – Regulator Problem: pick K for A − BK
    · Choose K ∈ R^{1×n} (SISO) such that the closed-loop poles
      det(sI − A + BK) = Φc(s)
      are in the desired locations.
  – Estimator Problem: pick L for A − LCy
    · Choose L ∈ R^{n×1} (SISO) such that the closed-loop poles
      det(sI − A + LCy) = Φo(s)
      are in the desired locations.

• These problems are obviously very similar – in fact they are called dual problems.
  – Note: poles of (A − LCy) and (A − LCy)^T are identical.
  – Also have that (A − LCy)^T = A^T − Cy^T L^T
  – So designing L^T for this transposed system looks like a standard regulator problem (A − BK), with the substitutions (see the sketch below):
    A ⇒ A^T
    B ⇒ Cy^T
    K ⇒ L^T
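• A minimal numerical sketch of this duality (the plant is the example from page 11-7 and the pole locations are the ones used in the code on page 11-23):

% Estimator gain from a "regulator" design on the transposed system
A  = [-1 1.5; 1 -2];
Cy = [1 0];
p  = [-3 -4];                % desired estimator pole locations

K = place(A', Cy', p);       % standard regulator design for (A^T, Cy^T)
L = K';                      % transpose back: this is the estimator gain

eig(A - L*Cy)                % returns -3 and -4, as requested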



Spr 2008 16.323 11–7
Estimator Example 10–1

• Simple system (see page 11-23):
  A = [−1, 1.5; 1, −2] , B = [1; 0] , Cy = [1 0] , D = 0 , x(0) = [−0.5; −1]
  – Assume that the initial conditions are not well known.
  – System stable, but λmax(A) = −0.18
  – Test observability:
    rank [Cy; Cy A] = rank [1, 0; −1, 1.5] = 2, so the system is observable.

• Use open and closed-loop estimators. Since the initial conditions are not well known, use x̂(0) = [0; 0].

• Open-loop estimator:
x̂˙ = Ax̂ + Bu
ŷ = Cy x̂

• Closed-loop estimator:
x̂˙ = Ax̂ + Bu + Lỹ = Ax̂ + Bu + L(y − ŷ)
= (A − LCy )x̂ + Bu + Ly
ŷ = Cy x̂
– Dynamic system with poles λi(A − LCy ) that takes the measured
plant outputs as an input and generates an estimate of x.
– Use place command to set closed-loop pole locations



Spr 2008 16.323 11–8

• Typically simulate both systems together for simplicity

• Open-loop case:
  ẋ = Ax + Bu
  y = Cy x
  x̂˙ = Ax̂ + Bu
  ŷ = Cy x̂
  ⇒ [ẋ; x̂˙] = [A, 0; 0, A] [x; x̂] + [B; B] u , [x(0); x̂(0)] = [−0.5; −1; 0; 0]
    [y; ŷ] = [Cy, 0; 0, Cy] [x; x̂]

• Closed-loop case:
  ẋ = Ax + Bu
  x̂˙ = (A − LCy)x̂ + Bu + LCy x
  ⇒ [ẋ; x̂˙] = [A, 0; LCy, A − LCy] [x; x̂] + [B; B] u

• Example uses a strong u(t) to shake things up



Spr 2008 16.323 11–9

Figure 11.1: Open-loop estimator (states x1, x2 and estimation error vs. time). Estimation error converges to zero, but very slowly.

Figure 11.2: Closed-loop estimator (states x1, x2 and estimation error vs. time). Convergence looks much better.



Spr 2008 16.323 11–10
Estimator Poles?
• Location heuristics for poles still apply – use Bessel, ITAE, . . .
– Main difference: probably want to make the estimator faster than
you intend to make the regulator – should enhance the control,
which is based on x̂(t).
– Rule of thumb (ROT): factor of 2–3 in the time constant ζωn associated with the regulator poles.

• Note: When designing a regulator, we were concerned with the “bandwidth” of the control getting too high ⇒ often results in control commands that saturate the actuators and/or change rapidly.

• Different concerns for the estimator:


– Loop closed inside computer, so saturation not a problem.
– However, the measurements y are often “noisy”, and we need to
be careful how we use them to develop our state estimates.

⇒ High-bandwidth estimators tend to accentuate the effect of sensing noise in the estimate.
  – State estimates tend to “track” the measurements, which are fluctuating randomly due to the noise.

⇒ Low-bandwidth estimators have lower gains and tend to rely more heavily on the plant model.
  – Essentially an open-loop estimator – tends to ignore the measurements and just uses the plant model.



Spr 2008 16.323 11–11
Optimal Estimator
• Can also develop an optimal estimator for this type of system.
– Given duality of regulator and estimator, would expect to see close
connection between optimal estimator and regulator (LQR)

• Key step is to balance the effect of the various types of random noise
in the system on the estimator:
ẋ = Ax + Bu + Bw w
y = Cy x + v
– w: “process noise” – models uncertainty in the system model.
– v: “sensor noise” – models uncertainty in the measurements.

• Typically assume that w(t) and v(t) are zero mean (E[w(t)] = 0, E[v(t)] = 0) and
  – Uncorrelated Gaussian white random noises: no correlation between the noise at one time instant and another
    E[w(t1)w(t2)^T] = Rww(t1)δ(t1 − t2) ⇒ w(t) ∼ N(0, Rww)
    E[v(t1)v(t2)^T] = Rvv(t1)δ(t1 − t2) ⇒ v(t) ∼ N(0, Rvv)
    E[w(t1)v(t2)^T] = 0
Figure 11.3: Example of the impact of the covariance σ² on the distribution of the PDF (σ = 0.25, 0.5, 1, 5, 10, 25) – a wide distribution corresponds to large uncertainty in the variable.



Spr 2008 Analysis 16.323 11–12

• With noise in the system, the model is of the form:


ẋ = Ax + Bu + Bw w , y = Cy x + v
– And the estimator is of the form:
x̂˙ = Ax̂ + Bu + L(y − ŷ) , ŷ = Cy x̂

• Analysis: in this case:


x̃˙ = ẋ − x̂˙ = [Ax + Bu + Bw w] − [Ax̂ + Bu + L(y − ŷ)]
= A(x − x̂) − L(Cy x − Cy x̂) + Bw w − Lv
= Ax̃ − LCy x̃ + Bw w − Lv
= (A − LCy )x̃ + Bw w − Lv (11.18)

• This equation for the estimation error explicitly shows the conflict in the estimator design process. Must balance between:
  – Speed of the estimator decay rate, which is governed by Re[λi(A − LCy)]
  – Impact of the sensing noise v through the gain L

• Fast state reconstruction requires a rapid decay rate – typically requires a large L, but that tends to magnify the effect of v on the estimation process.
  – The effect of the process noise is always there, but the choice of L will tend to mitigate/accentuate the effect of v on x̃(t).

• Kalman Filter needs to provide an optimal balance between the two conflicting problems for a given “size” of the process and sensing noises.
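• A rough sketch of this trade-off for the example plant of page 11-7 (the pole choices below are illustrative only): a faster error decay needs a larger L, and the sensor noise enters the error dynamics through the term −Lv.

% Compare a slow and a fast estimator: error decay rate vs. noise gain
A  = [-1 1.5; 1 -2];  Cy = [1 0];

Lslow = place(A', Cy', [-1 -2])';     % modest estimator poles
Lfast = place(A', Cy', [-10 -12])';   % much faster estimator poles

% Slowest decay rate of the error dynamics (A - L*Cy)
max(real(eig(A - Lslow*Cy)))          % -1
max(real(eig(A - Lfast*Cy)))          % -10

% norm(L) is a crude measure of how strongly v is injected into xtilde
norm(Lslow)
norm(Lfast)                           % much larger -> noisier estimates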
Spr 2008 16.323 11–13
Optimization
• Note that Eq. 11.18 is of the form of a linear time-varying system driven by white Gaussian noise
  – Can predict the mean square value of the state (estimation error in this case) Q(t) = E[x̃(t)x̃(t)^T] over time using Q(0) = Q0 and
    Q̇(t) = [A − LCy] Q(t) + Q(t) [A − LCy]^T + [Bw −L] [Rww, 0; 0, Rvv] [Bw −L]^T
         = [A − LCy] Q(t) + Q(t) [A − LCy]^T + Bw Rww Bw^T + L Rvv L^T
  – Called a matrix differential Lyapunov Equation¹⁶
• Note that ideally would like to minimize Q(t) or trace Q(t), but that is difficult to do & describe easily¹⁷.
• Instead, consider the option of trying to minimize trace Q̇(t), the argument being that then ∫_0^t trace Q̇(τ) dτ is small.
– Not quite right, but good enough to develop some insights
• To proceed, note that
  ∂/∂X trace[AXB] = ∂/∂X trace[B^T X^T A^T] = A^T B^T
  and
  ∂/∂X trace[AXBX^T C] = A^T C^T X B^T + CAXB
• So for a minimum we require that
  ∂/∂L trace Q̇ = −2Q^T Cy^T + 2L Rvv = 0
  which implies that
  L = Q(t) Cy^T Rvv^{-1}

16 See K+S, chapter 1.11 for details.


17 My 16.324 notes discuss how to pose the problem in discrete time and then let Δt → 0 to recover the continuous-time results.



Spr 2008 16.323 11–14

• Note that if we use this expression for L in the original differential Lyapunov Equation, we obtain:
  Q̇(t) = [A − LCy] Q(t) + Q(t) [A − LCy]^T + Bw Rww Bw^T + L Rvv L^T
       = [A − Q(t)Cy^T Rvv^{-1} Cy] Q(t) + Q(t) [A − Q(t)Cy^T Rvv^{-1} Cy]^T + Bw Rww Bw^T + Q(t)Cy^T Rvv^{-1} Rvv (Q(t)Cy^T Rvv^{-1})^T
       = AQ(t) + Q(t)A^T − 2Q(t)Cy^T Rvv^{-1} Cy Q(t) + Bw Rww Bw^T + Q(t)Cy^T Rvv^{-1} Cy Q(t)

  Q̇(t) = AQ(t) + Q(t)A^T + Bw Rww Bw^T − Q(t)Cy^T Rvv^{-1} Cy Q(t)

  which is obviously a matrix differential Riccati equation.



Spr 2008 Optimal Kalman Filter 16.323 11–15

• Goal: develop an estimator x̂(t) which is a linear function of the measurements y(τ), 0 ≤ τ ≤ t, and minimizes the function
  J = trace(Q(t)) , Q(t) = E[ {x(t) − x̂(t)}{x(t) − x̂(t)}^T ]
  which is the covariance for the estimation error.

• Solution:¹⁸ a closed-loop estimator
  x̂˙(t) = Ax̂(t) + L(t)(y(t) − Cy x̂(t))
  where L(t) = Q(t)Cy^T Rvv^{-1} and Q(t) ≥ 0 solves
  Q̇(t) = AQ(t) + Q(t)A^T + Bw Rww Bw^T − Q(t)Cy^T Rvv^{-1} Cy Q(t)

  – Note that x̂(0) and Q(0) are known
  – Differential equation for Q(t) is solved forward in time (see the numerical sketch below).
  – Filter form of the differential matrix Riccati equation for the error covariance.
  – Note that the AQ(t) + Q(t)A^T . . . is different than with the regulator, which had P(t)A + A^T P(t) . . .

• Called Kalman-Bucy Filter – linear quadratic estimator (LQE)

18 See OCW notes for 16.322 “Stochastic Estimation and Control” for the details of this derivation.
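• A minimal sketch of propagating this Riccati equation forward in time with ode45 (the plant and noise data are the oscillator used later in Example 10-2, and Q(0) = I is an assumed initial covariance):

% Forward integration of Qdot = A*Q + Q*A' + Bw*Rww*Bw' - Q*Cy'*(1/Rvv)*Cy*Q
A = [0 1; -4 0];  Bw = [0; 1];  Cy = [1 0];
Rww = 1;  Rvv = 1e-2;
Q0  = eye(2);                                   % assumed Q(0)

Qdot = @(t,q) reshape( ...
    A*reshape(q,2,2) + reshape(q,2,2)*A' + Bw*Rww*Bw' ...
    - reshape(q,2,2)*(Cy'/Rvv)*Cy*reshape(q,2,2), 4, 1);

[t, q] = ode45(Qdot, [0 5], Q0(:));             % solved forward from Q(0)
Qend = reshape(q(end,:), 2, 2);                 % settles to the steady-state value
Lend = Qend*Cy'/Rvv                             % L(t) = Q(t)*Cy'*inv(Rvv)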



Spr 2008 16.323 11–16

• Note that an increase in Q(t) corresponds to increased uncertainty in the state estimate. Q̇(t) has several contributions:
  – AQ(t) + Q(t)A^T is the homogeneous part
  – Bw Rww Bw^T : increase due to the process noise
  – Q(t)Cy^T Rvv^{-1} Cy Q(t) : decrease due to the measurements

• The estimator gain is L(t) = Q(t)Cy^T Rvv^{-1}
  – Feedback on the innovation, y − ŷ
  – If the uncertainty about the state is high, then Q(t) is large, and so the innovation y − Cy x̂ is weighted heavily (L ↑)
  – If the measurements are very accurate (Rvv ↓), then the measurements are heavily weighted



Spr 2008 Steady State 16.323 11–17

• Assume that:¹⁹
  1. Rvv > 0, Rww > 0
  2. All plant dynamics are constant in time
  3. [A, Cy] detectable
  4. [A, Bw] stabilizable

• Then, as with the LQR problem, the covariance of the LQE quickly settles down to a constant Qss, independent of Q(0), as t → ∞, where
  A Qss + Qss A^T + Bw Rww Bw^T − Qss Cy^T Rvv^{-1} Cy Qss = 0
  – Stabilizable/detectable gives a unique Qss ≥ 0
  – Qss > 0 iff [A, Bw] controllable
  – Lss = Qss Cy^T Rvv^{-1}

• If Qss exists, the steady-state filter
  x̂˙(t) = Ax̂(t) + Lss(y(t) − Cy x̂(t)) = (A − Lss Cy)x̂(t) + Lss y(t)
  is asymptotically stable iff (1)–(4) above hold.

19 Compare this with 4–10
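• A minimal sketch of computing Qss and Lss in MATLAB (Control System Toolbox assumed; the numbers are the oscillator of Example 10-2 with r = 10⁻²):

% Steady-state error covariance and filter gain
A = [0 1; -4 0];  Bw = [0; 1];  Cy = [1 0];
Rww = 1;  Rvv = 1e-2;

% Solve A*Q + Q*A' + Bw*Rww*Bw' - Q*Cy'*(1/Rvv)*Cy*Q = 0 via the dual ARE
Qss = care(A', Cy', Bw*Rww*Bw', Rvv);
Lss = Qss*Cy'/Rvv

% lqe returns the same gain and covariance directly
[L2, Q2] = lqe(A, Bw, Cy, Rww, Rvv);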



Spr 2008 Filter Interpretation 16.323 11–18

• Given that x̂˙ = (A − LCy )x̂ + Ly

• Consider a scalar system, and take the Laplace transform of both sides to get:
  X̂(s)/Y(s) = L / (sI − (A − LCy))

• This is the transfer function from the “measurement” to the “estimated state”
  – It looks like a low-pass filter.

• Clearly, by lowering Rvv, and thus increasing L, we are pushing out the pole.
  – DC gain asymptotes to 1/Cy as L → ∞
[Figure: magnitude of the scalar transfer function from Y to X̂ versus frequency (rad/sec) for increasing L – the filter bandwidth increases with L.]
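• A quick sketch of that behaviour for an assumed scalar plant (a = −1, Cy = 1; the gains are arbitrary):

% Scalar estimator as a low-pass filter: Xhat(s)/Y(s) = L/(s - (a - L*c))
a = -1;  c = 1;
w = logspace(-1, 4, 400);                 % frequency grid (rad/sec)
for L = [1 10 100 1000]
    G = L ./ (1j*w - (a - L*c));          % frequency response at s = jw
    loglog(w, abs(G)); hold on            % pole pushes out as L increases
end
hold off; xlabel('Freq (rad/sec)'); ylabel('|Xhat/Y|')
% DC gain = L/(L*c - a) -> 1/c as L grows, as noted above.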



Spr 2008 Example 10–2 16.323 11–19

• Lightly Damped Harmonic Oscillator
  [ẋ1; ẋ2] = [0, 1; −ω0², 0] [x1; x2] + [0; 1] w
  and y = x1 + v, where Rww = 1 and Rvv = r.
  – Can sense the position state of the oscillator, but want to develop an estimator to reconstruct the velocity state.

• Symmetric root locus exists for the optimal estimator. Can find the location of the optimal poles using a SRL based on the TF
  Gyw(s) = [1 0] [s, −1; ω0², s]^{-1} [0; 1] = 1/(s² + ω0²) = N(s)/D(s)
  – SRL for the closed-loop poles λi(A − LC) of the estimator, which are the LHP roots of:
    D(s)D(−s) ± (Rww/Rvv) N(s)N(−s) = 0
  – Pick the sign to ensure that there are no poles on the jω-axis (other than for a gain of zero)
  – So we must find the LHP roots of
    (s² + ω0²)((−s)² + ω0²) + 1/r = (s² + ω0²)² + 1/r = 0
[Figure: symmetric root locus of the estimator poles in the complex plane (real axis −1 to 1, imaginary axis −1.5 to 1.5).]
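• A minimal numerical sketch of this SRL calculation (ω0 = 2 to match the code on page 11-24; the r values are the ones used in the later figures):

% LHP roots of (s^2 + w0^2)^2 + 1/r = 0 for several sensor-noise levels
w0 = 2;
D2 = conv([1 0 w0^2], [1 0 w0^2]);        % D(s)*D(-s) = (s^2 + w0^2)^2
for r = [1e-2 1e-4 1e-6 1e-8]
    p = roots(D2 + [0 0 0 0 1/r]);        % all four symmetric roots
    p = p(real(p) < 0)                    % keep the LHP pair (estimator poles)
end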



Spr 2008 16.323 11–20

• Note that as r → 0 (clean sensor), the estimator poles tend to ∞ along the ±45 deg asymptotes, so the poles are approximately
  s ≈ (−1 ± j)/√r  ⇒  Φe(s) = s² + (2/√r) s + 2/r = 0

• Can use these estimated pole locations in acker, to get that
  L = ( [0, 1; −ω0², 0]² + (2/√r) [0, 1; −ω0², 0] + (2/r) I ) [C; CA]^{-1} [0; 1]
    = [2/r − ω0², 2/√r; −(2/√r)ω0², 2/r − ω0²] [1, 0; 0, 1]^{-1} [0; 1]
    = [2/√r; 2/r − ω0²]
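• A minimal sketch checking this closed-form gain against MATLAB's acker (using r = 10⁻² and ω0 = 2, the values from the code on page 11-24):

% Closed-form L vs. acker at the approximate poles (-1 +/- j)/sqrt(r)
r = 1e-2;  w0 = 2;
A = [0 1; -w0^2 0];  C = [1 0];

p = [(-1+1j)/sqrt(r), (-1-1j)/sqrt(r)];   % pole locations from above
L_acker  = acker(A', C', p)'              % observer gain via the dual acker call
L_closed = [2/sqrt(r); 2/r - w0^2]        % closed-form result: [20; 196] here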

• Given L, A, and C, we can develop the estimator transfer function from the measurement y to x̂2:
  X̂2(s)/Y(s) = [0 1] ( sI − (A − LCy) )^{-1} L
             = [0 1] [s + 2/√r, −1; 2/r, s]^{-1} [2/√r; 2/r − ω0²]
             = [ −(2/r)(2/√r) + (s + 2/√r)(2/r − ω0²) ] / [ s² + (2/√r)s + 2/r ]
             = [ (2/r − ω0²) s − (2/√r) ω0² ] / [ s² + (2/√r)s + 2/r ]

• Filter zero asymptotes to s = 0 as r → 0 and the two poles → ∞


• Resulting estimator looks like a “band-limited” differentiator.
  – Expected because we measure position and want to estimate velocity.
  – Frequency band over which the filter performs differentiation is determined by the “relative cleanliness” of the measurements.



Spr 2008 16.323 11–21

Figure 11.4: Bandlimited differentiation of the position measurement from the LQE (magnitude and phase versus frequency, rad/sec) for r = 10⁻², 10⁻⁴, 10⁻⁶, and 10⁻⁸.



Spr 2008 16.323 11–22
Final Thoughts

• Note that the feedback gain L in the estimator only stabilizes the
estimation error.
– If the system is unstable, then the state estimates will also go to
∞, with zero error from the actual states.

• Estimation is an important concept in its own right.
  – Not always just “part of the control system”
  – Critical issue for guidance and navigation systems

• More complete discussion requires that we study stochastic processes and optimization theory.

• Estimation is all about which you trust more: your measurements or your model.

• Strong duality between LQR and LQE problems (see the numerical check below):
  A ↔ A^T
  B ↔ Cy^T
  Cz ↔ Bw^T
  Rzz ↔ Rww
  Ruu ↔ Rvv
  K(t) ↔ L^T(tf − t)
  P(t) ↔ Q(tf − t)
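• A minimal numerical check of this duality (Control System Toolbox assumed; the plant and noise data are the oscillator from Example 10-2): the steady-state estimator gain can be computed with lqr on the transposed system.

% Steady-state LQE gain from the dual LQR problem
A = [0 1; -4 0];  Bw = [0; 1];  Cy = [1 0];
Rww = 1;  Rvv = 1e-2;

L_lqe  = lqe(A, Bw, Cy, Rww, Rvv);           % direct filter design
L_dual = lqr(A', Cy', Bw*Rww*Bw', Rvv)';     % dual regulator design, transposed

[L_lqe L_dual]                               % the two columns agree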



Spr 2008 16.323 11–23

Basic Estimator (examp1.m) (See page 11-7)

% Examples of estimator performance
% Jonathan How, MIT
% 16.333 Fall 2005
%
% plant dynamics
%
a=[-1 1.5;1 -2];b=[1 0]';c=[1 0];d=0;
%
% estimator gain calc
%
l=place(a',c',[-3 -4]);l=l'
%
% plant initial cond
xo=[-.5;-1];
% estimator initial cond
xe=[0 0]';
t=[0:.1:10];
%
% inputs
%
u=0;u=[ones(15,1);-ones(15,1);ones(15,1)/2;-ones(15,1)/2;zeros(41,1)];
%
% open-loop estimator
%
A_ol=[a zeros(size(a));zeros(size(a)) a];
B_ol=[b;b];
C_ol=[c zeros(size(c));zeros(size(c)) c];
D_ol=zeros(2,1);
%
% closed-loop estimator
%
A_cl=[a zeros(size(a));l*c a-l*c];B_cl=[b;b];
C_cl=[c zeros(size(c));zeros(size(c)) c];D_cl=zeros(2,1);

[y_cl,x_cl]=lsim(A_cl,B_cl,C_cl,D_cl,u,t,[xo;xe]);
[y_ol,x_ol]=lsim(A_ol,B_ol,C_ol,D_ol,u,t,[xo;xe]);

figure(1);clf;subplot(211)
plot(t,x_cl(:,[1 2]),t,x_cl(:,[3 4]),'--','LineWidth',2);axis([0 4 -1 1]);
title('Closed-loop estimator');ylabel('states');xlabel('time')
text(.25,-.4,'x_1');text(.5,-.55,'x_2');subplot(212)
plot(t,x_cl(:,[1 2])-x_cl(:,[3 4]),'LineWidth',2)
%setlines;
axis([0 4 -1 1]);grid on
ylabel('estimation error');xlabel('time')

figure(2);clf;subplot(211)
plot(t,x_ol(:,[1 2]),t,x_ol(:,[3 4]),'--','LineWidth',2);axis([0 4 -1 1])
title('Open loop estimator');ylabel('states');xlabel('time')
text(.25,-.4,'x_1');text(.5,-.55,'x_2');subplot(212)
plot(t,x_ol(:,[1 2])-x_ol(:,[3 4]),'LineWidth',2)
%setlines;
axis([0 4 -1 1]);grid on
ylabel('estimation error');xlabel('time')

print -depsc -f1 est11.eps; jpdf('est11')
print -depsc -f2 est12.eps; jpdf('est12')



Spr 2008 16.323 11–24

Filter Interpretation

% Simple LQE example showing SRL
% 16.323 Spring 2007
% Jonathan How
%
a=[0 1;-4 0];
c=[1 0]; % pos sensor
c2=[0 1]; % vel state out
f=logspace(-4,4,800);

r=1e-2;
l=polyvalm([1 2/sqrt(r) 2/r],a)*inv([c;c*a])*[0 1]'
[nn,dd]=ss2tf(a-l*c,l,c2,0); % to the vel estimate
g=freqresp(nn,dd,f*j);
[r roots(nn)]
figure(1)
subplot(211)
f1=f;g1=g;
loglog(f,abs(g))
%hold on;fill([5e2 5e2 1e3 1e3 5e2]',[1e4 1e-4 1e-4 1e4 1e4]','c');hold off
xlabel('Freq (rad/sec)')
ylabel('Mag')
title(['Vel sens to Pos state, sen noise r=',num2str(r)])
axis([1e-3 1e3 1e-4 1e4])
subplot(212)
semilogx(f,unwrap(angle(g))*180/pi)
xlabel('Freq (rad/sec)')
ylabel('Phase (deg)')
axis([1e-3 1e3 0 200])

figure(2)
r=1e-4;
l=polyvalm([1 2/sqrt(r) 2/r],a)*inv([c;c*a])*[0 1]'
[nn,dd]=ss2tf(a-l*c,l,c2,0); % to the vel estimate
g=freqresp(nn,dd,f*j);
[r roots(nn)]
subplot(211)
f2=f;g2=g;
loglog(f,abs(g))
%hold on;fill([5e2 5e2 1e3 1e3 5e2]',[1e4 1e-4 1e-4 1e4 1e4]','c');hold off
xlabel('Freq (rad/sec)')
ylabel('Mag')
title(['Vel sens to Pos state, sen noise r=',num2str(r)])
axis([1e-3 1e3 1e-4 1e4])
subplot(212)
semilogx(f,unwrap(angle(g))*180/pi)
xlabel('Freq (rad/sec)')
ylabel('Phase (deg)')
%bode(nn,dd);
axis([1e-3 1e3 0 200])

figure(3)
r=1e-6;
l=polyvalm([1 2/sqrt(r) 2/r],a)*inv([c;c*a])*[0 1]'
[nn,dd]=ss2tf(a-l*c,l,c2,0); % to the vel estimate
g=freqresp(nn,dd,f*j);
[r roots(nn)]
subplot(211)
f3=f;g3=g;
loglog(f,abs(g))
%hold on;fill([5e2 5e2 1e3 1e3 5e2]',[1e4 1e-4 1e-4 1e4 1e4]','c');hold off
xlabel('Freq (rad/sec)')
ylabel('Mag')
title(['Vel sens to Pos state, sen noise r=',num2str(r)])
axis([1e-3 1e3 1e-4 1e4])
subplot(212)
semilogx(f,unwrap(angle(g))*180/pi)
xlabel('Freq (rad/sec)')
ylabel('Phase (deg)')
%bode(nn,dd);
title(['Vel sens to Pos state, sen noise r=',num2str(r)])
axis([1e-3 1e3 0 200])

figure(4)
r=1e-8;
l=polyvalm([1 2/sqrt(r) 2/r],a)*inv([c;c*a])*[0 1]'
[nn,dd]=ss2tf(a-l*c,l,c2,0); % to the vel estimate
g=freqresp(nn,dd,f*j);
[r roots(nn)]
f4=f;g4=g;
subplot(211)
loglog(f,abs(g))
%hold on;fill([5e2 5e2 1e3 1e3 5e2]',[1e4 1e-4 1e-4 1e4 1e4]','c');hold off
xlabel('Freq (rad/sec)')
ylabel('Mag')
title(['Vel sens to Pos state, sen noise r=',num2str(r)])
axis([1e-3 1e3 1e-4 1e4])
title(['Vel sens to Pos state, sen noise r=',num2str(r)])
subplot(212)
semilogx(f,unwrap(angle(g))*180/pi)
xlabel('Freq (rad/sec)')
ylabel('Phase (deg)')
%bode(nn,dd);
axis([1e-3 1e3 0 200])

print -depsc -f1 filt1.eps; jpdf('filt1')
print -depsc -f2 filt2.eps;jpdf('filt2')
print -depsc -f3 filt3.eps;jpdf('filt3')
print -depsc -f4 filt4.eps;jpdf('filt4')

figure(5);clf
%subplot(211)
loglog(f1,abs(g1),f2,abs(g2),f3,abs(g3),f4,abs(g4),'Linewidth',2)
%hold on;fill([5e2 5e2 1e3 1e3 5e2]',[1e4 1e-4 1e-4 1e4 1e4]','c');hold off
xlabel('Freq (rad/sec)')
ylabel('Mag')
title(['Vel sens to Pos state, sen noise r=',num2str(r)])
axis([1e-4 1e4 1e-4 1e4])
title(['Vel sens to Pos state, sen noise r=',num2str(r)])
legend('r=10^{-2}','r=10^{-4}','r=10^{-6}','r=10^{-8}','Location','NorthWest')
%subplot(212)
figure(6);clf
semilogx(f1,unwrap(angle(g1))*180/pi,f2,unwrap(angle(g2))*180/pi,...
f3,unwrap(angle(g3))*180/pi,f4,unwrap(angle(g4))*180/pi,'Linewidth',2);hold off
xlabel('Freq (rad/sec)')
ylabel('Phase (deg)')
legend('r=10^{-2}','r=10^{-4}','r=10^{-6}','r=10^{-8}')
%bode(nn,dd);
axis([1e-4 1e4 0 200])
print -depsc -f5 filt5.eps;jpdf('filt5')
print -depsc -f6 filt6.eps;jpdf('filt6')

