
MIT OpenCourseWare

http://ocw.mit.edu

16.323 Principles of Optimal Control


Spring 2008

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
16.323 Lecture 12

Stochastic Optimal Control

• Kwakernaak and Sivan, Chapters 3.6 and 5

• Bryson Chapter 14

• Stengel Chapter 5

Spr 2008 16.323 12–1


Stochastic Optimal Control
• Goal: design optimal compensators for systems with incomplete and
noisy measurements
– Consider this first simplified step: assume that we have noisy system
with perfect measurement of the state.

• System dynamics:
ẋ(t) = A(t)x(t) + Bu(t)u(t) + Bw (t)w(t)
– Assume that w(t) is a white Gaussian noise 20 ⇒ N (0, Rww )
– The initial conditions are random variables too, with
E[x(t0)] = 0, and E[x(t0)xT (t0)] = X0
– Assume that a perfect measure of x(t) is available for feedback.

• Given the noise in the system, need to modify our cost functions from
before ⇒ consider the average response of the closed-loop system
Js = E[ ½ xT(tf) Ptf x(tf) + ½ ∫_{t0}^{tf} ( xT(t) Rxx(t) x(t) + uT(t) Ruu(t) u(t) ) dt ]
– Average over all possible realizations of the disturbances.

• Key observation: since w(t) is white, then by definition, the correlation time-scales are very short compared to the system dynamics
– Impossible to predict w(τ ) for τ > t, even with perfect knowledge of the state for τ ≤ t
– Furthermore, by definition, the system state x(t) encapsulates all past information about the system
– Then the optimal controller for this case is identical to the deterministic one considered before.

20 16.322 Notes



Spr 2008 16.323 12–2
Spectral Factorization
• Had the process noise w(t) had “color” (i.e., not white), then we need
to include a shaping filter that captures the spectral content (i.e.,
temporal correlation) of the noise Φ(s)
– Previous picture: system is y = G(s)w1, with white noise input:
w1 → G(s) → y

– New picture: system is y = G(s)w2, with shaped noise input:
w2 → G(s) → y

• Account for the spectral content using a shaping filter H(s), so that the picture now is of a system y = G(s)H(s)w1, with a white noise input:
w1 → H(s) → w̃2 → G(s) → y

– Then must design filter H(s) so that the output is a noise w̃2 that
has the frequency content that we need

• How to design H(s)? Spectral Factorization – design a stable, minimum-phase linear transfer function that replicates the desired spectrum of w2.
– Basis of approach: If e2 = H(s)e1 and e1 is white, then the spectrum of e2 is given by
Φe2 (jω) = H(jω)H(−jω)Φe1 (jω)
where Φe1 (jω) = 1 because it is white.



Spr 2008 16.323 12–3

• Typically Φw2 (jω) will be given as an expression in ω², and we factor that into two parts, one of which is stable and minimum phase, so if

Φw2 (jω) = 2σ²α² / (ω² + α²)
         = (√2 σα)/(α + jω) · (√2 σα)/(α − jω) = H(jω)H(−jω)

so clearly H(s) = √2 σα/(s + α), which we write in state-space form as

ẋH = −α xH + √2 ασ w1
w2 = xH

• More generally, the shaping filter will be


ẋH = AH xH + BH w1
w2 = CH xH
which we then augment to the plant dynamics, to get:

[ ẋ ; ẋH ] = [ A , Bw CH ; 0 , AH ] [ x ; xH ] + [ Bu ; 0 ] u + [ 0 ; BH ] w1

y = [ Cy  0 ] [ x ; xH ]

where the noise input w1 is a white Gaussian noise.

• Clearly this augmented system has the same form as the original system
that we analyzed - there are just more states to capture the spectral
content of the original shaped noise.
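
• For concreteness, a minimal MATLAB sketch of this augmentation is shown below; the plant matrices and the spectrum parameters α and σ are placeholder values chosen for illustration, not numbers from the notes. It mirrors the augmentation used in the helicopter example later in this lecture.

alpha = 2; sigma = 1;                          % assumed spectrum parameters
Ah = -alpha; Bh = sqrt(2)*alpha*sigma; Ch = 1; % shaping filter: w2 = Ch*xH
A  = [0 1; 0 -0.5]; Bu = [0; 1];               % assumed plant dynamics
Bw = [0; 1]; Cy = [1 0];
Aa  = [A Bw*Ch; zeros(1,2) Ah];                % augmented dynamics matrix
Bua = [Bu; 0];                                 % control only enters plant states
Bwa = [zeros(2,1); Bh];                        % white noise w1 drives the filter
Cya = [Cy 0];                                  % output unchanged by filter state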



Spr 2008 Disturbance Feedforward 16.323 12–4

• Now consider the stochastic LQR problem for this case.
– Modify the state weighting matrix so that
R̃xx = [ Rxx , 0 ; 0 , 0 ]
⇒ i.e., no weighting on the filter states – Why is that allowed?
– Then, as before, the stochastic LQR solution for the augmented system is the same as the deterministic LQR solution (6–9)
u = −[ K  Kd ] [ x ; xH ]
– So the full state feedback controller requires access to the state in the shaping filter, which is fictitious and needs to be estimated

• Interesting result is that the gain K on the system states is completely independent of the properties of the disturbance
– In fact, if the solution of the steady-state Riccati equation in this case is partitioned as
Paug = [ Pxx , PxxH ; PxHx , PxHxH ]
it is easy to show that
∗ Pxx can be solved for independently, and
∗ is the same as it would be in the deterministic case with the disturbances omitted21
– Of course the control inputs that are also based on xH will improve the performance of the system ⇒ disturbance feedforward.

21 K+S pg 262



Spr 2008 Performance Analysis 16.323 12–5

• Recall that the specific initial conditions do not affect the LQR controller, but they do impact the cost-to-go from t0
– Consider the stochastic LQR problem, but with w(t) ≡ 0 so that the only uncertainty is in the initial conditions
– Have already shown that the LQR cost can be written in terms of the solution of the Riccati equation (4–7):
JLQR = ½ xT(t0)P(t0)x(t0)
⇒ Js = E[ ½ xT(t0)P(t0)x(t0) ]
     = ½ E{ trace[ P(t0)x(t0)xT(t0) ] }
     = ½ trace[ P(t0)X0 ]
which gives the expected cost-to-go with uncertain IC.

• Now return to the case with w ≠ 0 – consider the average performance of the stochastic LQR controller.

• To do this, recognize that if we apply the LQR control, we have a system where the cost is based on zT Rzz z = xT Rxx x for the closed-loop system:
ẋ(t) = (A(t) − Bu(t)K(t))x(t) + Bw(t)w(t)
z(t) = Cz(t)x(t)

• This is of the form of a linear time-varying system driven by white Gaussian noise – called a Gauss-Markov random process22.
22 Bryson 11.4



Spr 2008 16.323 12–6

• For a Gauss-Markov system we can predict the mean square value


of the state X(t) = E[x(t)x(t)T ] over time using X(0) = X0 and

Ẋ(t) = [A(t) − Bu(t)K(t)] X(t) + X(t) [A(t) − Bu(t)K(t)]T + Bw Rww BwT

– Matrix differential Lyapunov Equation.

• Can also extract the mean square control values using


E[u(t)u(t)T ] = K(t)X(t)K(t)T
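
• As an illustration, a minimal MATLAB sketch that propagates X(t) with this differential Lyapunov equation and extracts E[u uT]; the system data and the constant gain K here are assumed values for demonstration only.

A = [0 1; 0 0]; Bu = [0; 1]; Bw = [0; 1]; Rww = 1; % assumed LTI data
K = [1 2];                                         % assumed constant gain
X0 = eye(2);                                       % initial mean square state
Acl = A - Bu*K;
dXdt = @(t,Xv) reshape(Acl*reshape(Xv,2,2) + reshape(Xv,2,2)*Acl' ...
               + Bw*Rww*Bw', 4, 1);                % vectorized Lyapunov ODE
[t, Xv] = ode45(dXdt, [0 10], X0(:));
Xt  = reshape(Xv(end,:), 2, 2);                    % X(t) at the final time
Euu = K*Xt*K';                                     % mean square control E[u u']

For a stable time-invariant closed loop this settles to the steady-state value given by lyap(Acl, Bw*Rww*Bw'), which is used on the following pages.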

• Now write the performance evaluation as:

Js = E[ ½ xT(tf)Ptf x(tf) + ½ ∫_{t0}^{tf} ( xT(t)Rxx(t)x(t) + uT(t)Ruu(t)u(t) ) dt ]
   = E[ ½ trace{ Ptf x(tf)xT(tf) + ∫_{t0}^{tf} ( Rxx(t)x(t)xT(t) + Ruu(t)u(t)uT(t) ) dt } ]
   = ½ trace{ Ptf X(tf) + ∫_{t0}^{tf} ( Rxx(t)X(t) + Ruu(t)K(t)X(t)K(t)T ) dt }

• Not too useful in this form, but if P(t) is the solution of the LQR Riccati equation, then can show that the cost can be written as:

Js = ½ trace{ P(t0)X(t0) + ∫_{t0}^{tf} P(t)Bw Rww BwT dt }

– First part, ½ trace{P(t0)X(t0)}, is the same cost-to-go from the uncertain initial condition that we identified on 11–5
– Second part shows that the cost increases as a result of the process noise acting on the system.



Spr 2008 16.323 12–7

• Sketch of Proof: first note that

P(t0)X(t0) − Ptf X(tf) + ∫_{t0}^{tf} d/dt ( P(t)X(t) ) dt = 0

so that

Js = ½ trace{ Ptf X(tf) + P(t0)X(t0) − Ptf X(tf) }
   + ½ trace{ ∫_{t0}^{tf} [ Rxx(t)X(t) + Ruu(t)K(t)X(t)K(t)T ] dt }
   + ½ trace{ ∫_{t0}^{tf} [ Ṗ(t)X(t) + P(t)Ẋ(t) ] dt }

and (the first reduces to the standard CARE if K(t) = Ruu⁻¹ BuT P(t))

−Ṗ(t)X(t) = (A − BuK(t))T P(t)X(t) + P(t)(A − BuK(t))X(t) + Rxx X(t) + K(t)T Ruu K(t) X(t)

P(t)Ẋ(t) = P(t)(A − BuK(t))X(t) + P(t)X(t)(A − BuK(t))T + P(t)Bw Rww BwT

• Rearrange terms within the trace and then cancel terms to get the final result.



Spr 2008 16.323 12–8
Steady State Values
• Problems exist if we set t0 = 0 and tf → ∞ because the performance will be infinite
– Modify the cost to consider the time-average
Ja = lim_{tf→∞} Js/(tf − t0)
– No impact on the necessary conditions since this is still a fixed end-time problem.
– But now the initial conditions become irrelevant, and we only need focus on the integral part of the cost.

• For an LTI system with stationary process noise (constant Rww) and a well-posed time-invariant control problem (steady gain u(t) = −Kss x(t)), the mean square value of the state settles down to a constant
lim_{tf→∞} X(t) = Xss
0 = (A − BuKss) Xss + Xss (A − BuKss)T + Bw Rww BwT
– Can show that the time-averaged mean square performance is
Ja = ½ trace{ [Rxx + KssT Ruu Kss] Xss }
   ≡ ½ trace[ Pss Bw Rww BwT ]

• Main point: this gives a direct path to computing the expected performance of a closed-loop system
– Process noise enters into the computation of Xss
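
• A minimal MATLAB sketch of this computation, using an assumed double-integrator plant and unit weights purely for illustration; the two expressions for Ja should agree.

A = [0 1; 0 0]; Bu = [0; 1]; Bw = [0; 1];      % assumed LTI plant
Rxx = eye(2); Ruu = 1; Rww = 1;                % assumed weights and noise intensity
[Kss, Pss] = lqr(A, Bu, Rxx, Ruu);             % steady-state LQR gain and Riccati solution
Xss = lyap(A - Bu*Kss, Bw*Rww*Bw');            % steady-state E[x x']
Ja1 = 0.5*trace((Rxx + Kss'*Ruu*Kss)*Xss);     % via the mean square state
Ja2 = 0.5*trace(Pss*Bw*Rww*Bw');               % via the Riccati solution
% Ja1 and Ja2 match (to numerical precision)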



Spr 2008 Missile Example 16.323 12–9

• Consider a missile roll attitude control system with ω the roll angular velocity, δ the aileron deflection, Q the aileron effectiveness, and φ the roll angle, then
ω̇ = −(1/τ)ω + (Q/τ)δ + n(t)    δ̇ = u    φ̇ = ω
where n(t) is a noise input.

• Then this can be written as:

[ δ̇ ]   [ 0     0     0 ] [ δ ]   [ 1 ]     [ 0 ]
[ ω̇ ] = [ Q/τ  −1/τ   0 ] [ ω ] + [ 0 ] u + [ 1 ] n
[ φ̇ ]   [ 0     1     0 ] [ φ ]   [ 0 ]     [ 0 ]

• Use τ = 1, Q = 10, Ruu = 1/π² and
Rxx = diag[ (π/12)² , 0 , (π/180)² ]
then solve the LQR problem to get the feedback gains:
K=lqr(A,B,Rxx,Ruu)
K = [26.9 29.0 180.0]
• Then if n(t) has a spectral density of 1000 (deg/sec²)²·sec,23

• find the RMS response of the system from
X=lyap(A-B*K,Bw*Rww*Bw')
X = [ 95 −42 −7 ; −42 73 0 ; −7 0 0.87 ]
so that the RMS roll angle is √E[φ²] ≈ 0.93 deg

23 Process noise input to a derivative of ω, so the units of n(t) must be deg/sec², but since E[n(t)n(τ)] = Rww δ(t − τ) and ∫ δ(t)dt = 1, the units of δ(t) are 1/sec and thus the units of Rww are (rad/sec²)²·sec = rad²/sec³
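
• A minimal MATLAB script assembling the example above; the data are from the notes, and the commented results are the values quoted on this page for comparison.

tau = 1; Q = 10;
A  = [0 0 0; Q/tau -1/tau 0; 0 1 0];   % state x = [delta; omega; phi]
B  = [1; 0; 0];                        % aileron rate input
Bw = [0; 1; 0];                        % noise enters the omega equation
Rxx = diag([(pi/12)^2, 0, (pi/180)^2]);
Ruu = 1/pi^2;
K = lqr(A, B, Rxx, Ruu);               % compare: K = [26.9 29.0 180.0]
Rww = 1000;                            % noise spectral density
X = lyap(A - B*K, Bw*Rww*Bw');         % steady-state mean square state
rms_phi = sqrt(X(3,3));                % RMS roll angle, approx 0.93 deg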



Spr 2008 Full Control Problem 16.323 12–10

• Goal: design an optimal controller for a system with incomplete


and noisy measurements

• Setup: for the system (possibly time-varying)


ẋ = Ax + Buu + Bw w
z = Cz x
y = Cy x + v
with
– White, Gaussian noises w ∼ N (0, Rww ) and v ∼ N (0, Rvv ), with
Rww > 0 and Rvv > 0
– Initial conditions x(t0), a stochastic vector with E[x(t0)] = x̄0 and
E[(x(t0) − x̄0)(x(t0) − x̄0)T ] = Q0 so that
x(t0) ∼ N (x̄0, Q0)

• Cost:

J = E[ ½ xT(tf)Ptf x(tf) + ½ ∫_{t0}^{tf} ( zT(t)Rzz z(t) + uT(t)Ruu u(t) ) dt ]

with Rzz > 0, Ruu > 0, Ptf ≥ 0

• Stochastic Optimal Output Feedback Problem: Find


u(t) = f [y(τ ), t0 ≤ τ ≤ t] t0 ≤ t ≤ tf
that minimizes J

• The solution is the Linear Quadratic Gaussian Controller, which uses


– LQE (10–15) to get optimal state estimates x̂(t) from y(t) using
gain L(t)
– LQR to get the optimal feedback control u(t) = −K(t)x
– Separation principle to implement u(t) = −K(t)x̂(t)



Spr 2008 Solution: LQG 16.323 12–11

• Regulator: u(t) = −K(t)x̂(t)

K(t) = Ruu⁻¹ BuT P(t)
−Ṗ(t) = AT P(t) + P(t)A + CzT Rzz Cz − P(t)Bu Ruu⁻¹ BuT P(t)
P(tf) = Ptf

• Estimator from:
x̂˙(t) = Ax̂ + Bu u + L(t)(y(t) − Cy x̂(t))
where x̂(t0) = x̄0 and Q(t0) = Q0
Q̇(t) = AQ(t) + Q(t)AT + Bw Rww BwT − Q(t)CyT Rvv⁻¹ Cy Q(t)
L(t) = Q(t)CyT Rvv⁻¹

• A compact form of the compensator is:


ẋc = Acxc + Bcy
u = −Ccxc
with xc ≡ x̂ and

Ac = A − BuK(t) − L(t)Cy

Bc = L(t)

Cc = K(t)

• Valid for SISO and MIMO systems. The plant dynamics can also be time-varying, but that is suppressed here for simplicity.
– Obviously the compensator is constant if we use the steady-state regulator and estimator gains for an LTI system.
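
• A minimal MATLAB sketch of assembling the steady-state compensator in this form, using the double-integrator data from the simple example on 12–19; lqe is used for the estimator gain, as in the B747 code later in this lecture.

A = [0 1; 0 0]; Bu = [0; 1]; Bw = [0; 1];        % plant from 12-19
Cy = [1 0]; Cz = eye(2);
Rzz = eye(2); Ruu = 1; Rww = 1; Rvv = 1;         % weights from 12-19
[K, P] = lqr(A, Bu, Cz'*Rzz*Cz, Ruu);            % steady-state regulator gain
[L, Q] = lqe(A, Bw, Cy, Rww, Rvv);               % steady-state estimator gain
Ac = A - Bu*K - L*Cy;  Bc = L;  Cc = K;          % compensator realization
Gc = ss(Ac, Bc, Cc, 0);                          % u = -Gc * y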



Spr 2008 Infinite Horizon LQG 16.323 12–12

• Assuming LTI plant


• As with the stochastic LQR case, use a time-averaged cost
– To ensure that the estimator settles down, must take t0 → −∞ and tf → ∞, so that for any t, t0 ≪ t ≪ tf
J̄ = lim_{tf→∞, t0→−∞} J/(tf − t0)
– Again, this changes the cost, but not the optimality conditions

• Analysis of J̄ shows that it can be evaluated as
J̄ = E[ zT(t)Rzz z(t) + uT(t)Ruu u(t) ]
  = Tr[ Pss Lss Rvv LssT + Qss CzT Rzz Cz ]
  = Tr[ Pss Bw Rww BwT + Qss KssT Ruu Kss ]
where Pss and Qss are the steady-state solutions of
AT Pss + Pss A + CzT Rzz Cz − Pss Bu Ruu⁻¹ BuT Pss = 0
A Qss + Qss AT + Bw Rww BwT − Qss CyT Rvv⁻¹ Cy Qss = 0
with
Kss = Ruu⁻¹ BuT Pss and Lss = Qss CyT Rvv⁻¹

• Can evaluate the steady-state performance from the solution of 2 Riccati equations
– More complicated than stochastic LQR because J̄ must account for the performance degradation associated with estimation error.
– Since in general x̂(t) ≠ x(t), have two contributions to the cost
∗ Regulation error x ≠ 0
∗ Estimation error x̃ ≠ 0



Spr 2008 Interpretations 16.323 12–13

• Note that
J̄ = Tr[ Pss Lss Rvv LssT + Qss CzT Rzz Cz ]
  = Tr[ Pss Bw Rww BwT + Qss KssT Ruu Kss ]
both of which contain terms that are functions of the control and estimation problems.

• To see how both terms contribute, let the regulator get very fast ⇒ Ruu → 0. A full analysis requires that we then determine what happens to Pss and thus J̄. But what is clear is that:
lim_{Ruu→0} J̄ ≥ Tr[ Qss CzT Rzz Cz ]
which is independent of Ruu
– Thus even in the limit of no control penalty, the performance is lower bounded by the term associated with the estimation error Qss.

• Similarly, can see that lim_{Rvv→0} J̄ ≥ Tr[ Pss Bw Rww BwT ], which is related to the regulation error and provides a lower bound on the performance with a fast estimator
– Note that this is the average cost for the stochastic LQR problem.

• Both cases illustrate that it is futile to make either the estimator or


regulator much “faster” than the other
– The ultimate performance is limited, and you quickly reach the
“knee in the curve” for which further increases in the authority of
one over the other provide diminishing returns.
– Also suggests that it is not obvious that either one of them should
be faster than the other.

• Rule of Thumb: for given Rzz and Rww , select Ruu and Rvv so that
the performance contributions due to the estimation and regulation
error are comparable.



Spr 2008 Separation Theorem 16.323 12–14

• Now consider what happens when the control u = −Kx is changed to the new control u = −Kx̂ (same K).
– Assume steady-state values here, but that is not needed.
– Previous treatments of this would have analyzed the closed-loop stability, as follows, but we also want to analyze performance.
plant : ẋ = Ax + Bu u + Bw w
z = Cz x
y = Cy x + v
compensator : ẋc = Ac xc + Bc y
u = −Cc xc

• These give the closed-loop dynamics

[ ẋ ; ẋc ] = [ A , −Bu Cc ; Bc Cy , Ac ] [ x ; xc ] + [ Bw , 0 ; 0 , Bc ] [ w ; v ]
z = [ Cz  0 ] [ x ; xc ]
y = [ Cy  0 ] [ x ; xc ] + v

• It is not obvious that this system will even be stable: Re λi(Acl) < 0?
– To analyze, introduce n = x − xc, and the similarity transform
T = [ I , 0 ; I , −I ] = T⁻¹ ⇒ [ x ; n ] = T [ x ; xc ]
so that Acl ⇒ T Acl T⁻¹ ≡ Ācl, and when you work through the math, you get
Ācl = [ A − BuK , BuK ; 0 , A − LCy ]



Spr 2008 16.323 12–15

• Absolutely key points:

1. λi(Ācl) ≡ λi(Acl)
2. Ācl is block upper triangular, so we can find the poles by inspection:
det(sI − Ācl) = det(sI − (A − BuK)) · det(sI − (A − LCy))

The closed-loop poles of the system consist of the union of the regulator and estimator poles

– This shows that we can design any estimator and regulator separately with confidence that the combination will stabilize the system.
∗ Also means that the LQR/LQE problems decouple in terms of being able to predict the stability of the overall closed-loop system.
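
• A quick MATLAB check of this result, again using the double-integrator data from 12–19 (any stabilizable and detectable example would do):

A = [0 1; 0 0]; Bu = [0; 1]; Bw = [0; 1]; Cy = [1 0];
K = lqr(A, Bu, eye(2), 1);                   % any stabilizing regulator gain
L = lqe(A, Bw, Cy, 1, 1);                    % any stabilizing estimator gain
Ac  = A - Bu*K - L*Cy;                       % compensator dynamics
Acl = [A -Bu*K; L*Cy Ac];                    % closed loop in [x; xc]
p_cl  = sort(eig(Acl));
p_sep = sort([eig(A - Bu*K); eig(A - L*Cy)]);
% p_cl and p_sep are identical: regulator poles union estimator poles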

• Let Gc(s) be the compensator transfer function (matrix), where
u = −Cc(sI − Ac)⁻¹ Bc y = −Gc(s)y
– The reason for this is that when implementing the controller, we often do not just feed back −y(t), but instead have to include a reference command r(t)
– Use the servo approach and feed back e(t) = r(t) − y(t) instead:
r → ⊕ → e → Gc(s) → u → G(s) → y (with y fed back to the summing junction)
– So now u = Gc e = Gc(r − y), and if r = 0, then we have u = Gc(−y)

• Important points:
– Closed-loop system will be stable, but the compensator dynamics
need not be.
– Often very simple and useful to provide classical interpretations of
the compensator dynamics Gc(s).



Spr 2008 Performance Optimality 16.323 12–16

• Performance optimality of this strategy is a little harder to establish
– Now saying more than just that the separation principle is a “good” idea ⇒ we are trying to say that it is the “best” possible solution

• Approach:
– Rewrite the cost and system in terms of the estimator states and dynamics ⇒ recall we have access to these
– Design a stochastic LQR for this revised system ⇒ full state feedback on x̂(t)
• Start with the cost (use a similar process for the terminal cost)
E[zT Rzzz] = E[xT Rxxx] {±x̂}
= E[(x − x̂ + x̂)T Rxx(x − x̂ + x̂)] {x̃ = x − x̂}
= E[x̃T Rxxx̃] + 2E[x̃T Rxxx̂] + E[x̂T Rxxx̂]

• Note that x̂(t) is the minimum mean square estimate of x(t) given
y(τ ), u(τ ), t0 ≤ τ ≤ t.
– Key property of that estimate is that x̂ and x̃ are uncorrelated24
E[x̃T Rxxx̂] = trace[E{x̃x̂T }Rxx] = 0

• Also,
E[x̃T Rxxx̃] = E[trace(Rxxx̃x̃T )] = trace(RxxQ)
where Q is the solution of the LQE Riccati equation (11–11)

• So, in summary we have:


E[xT Rxxx] = trace(RxxQ) + E[x̂T Rxxx̂]
24 Gelb, pg 112
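
• A minimal MATLAB check of this decomposition at steady state, using the double-integrator data from 12–19; the joint covariance of [x; x̂] comes from a Lyapunov equation for the LQG closed loop.

A = [0 1; 0 0]; Bu = [0; 1]; Bw = [0; 1]; Cy = [1 0];
Rxx = eye(2); Ruu = 1; Rww = 1; Rvv = 1;
K = lqr(A, Bu, Rxx, Ruu);
[L, Q] = lqe(A, Bw, Cy, Rww, Rvv);                 % Q = E[xtilde xtilde']
Acl = [A -Bu*K; L*Cy A-Bu*K-L*Cy];                 % states [x; xhat]
Bcl = [Bw zeros(2,1); zeros(2,1) L];               % inputs [w; v]
S   = lyap(Acl, Bcl*blkdiag(Rww, Rvv)*Bcl');       % joint steady-state covariance
Ex  = trace(Rxx*S(1:2,1:2));                       % E[x' Rxx x]
Exh = trace(Rxx*S(3:4,3:4));                       % E[xhat' Rxx xhat]
% Ex - Exh equals trace(Rxx*Q): the estimation-error contribution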



Spr 2008 16.323 12–17

• Now the main part of the cost function can be rewritten as

J = E[ ½ ∫_{t0}^{tf} ( zT(t)Rzz z(t) + uT(t)Ruu u(t) ) dt ]
  = E[ ½ ∫_{t0}^{tf} ( x̂T(t)Rxx x̂(t) + uT(t)Ruu u(t) ) dt ] + ½ ∫_{t0}^{tf} trace(Rxx Q) dt

– The last term is independent of the control u(t) ⇒ it is only a function of the estimation error
– The objective now is to choose the control u(t) to minimize the first term

• But first we need another key fact25: If the optimal estimator is


x̂˙(t) = Ax̂(t) + Buu(t) + L(t)(y(t) − Cy x̂(t))
then by definition, the innovations process
i(t) ≡ y(t) − Cy x̂(t)
is a white Gaussian process, so that i(t) ∼ N (0, Rvv + Cy QCyT )

• Then we can rewrite the estimator as


x̂˙(t) = Ax̂(t) + Buu(t) + L(t)i(t)
which is an LTI system with i(t) acting as the process noise through
a computable L(t).

25 Gelb, pg 317



Spr 2008 16.323 12–18

• So combining the above, we must pick u(t) to minimize

J = E[ ½ ∫_{t0}^{tf} ( x̂T(t)Rxx x̂(t) + uT(t)Ruu u(t) ) dt ] + term independent of u(t)

subject to the dynamics
x̂˙(t) = Ax̂(t) + Bu u(t) + L(t)i(t)
– Which is a strange-looking stochastic LQR problem
– As we saw before, the solution is independent of the driving process noise
u(t) = −K(t)x̂(t)
– where K(t) is found from the LQR with the data A, Bu, Rxx, and Ruu, and thus will be identical to the original problem.

• Combination of LQE/LQR gives performance optimal result.



Spr 2008 Simple Example 16.323 12–19

ẋ = [ 0 1 ; 0 0 ] x + [ 0 ; 1 ] u + [ 0 ; 1 ] w
z = [ 1 0 ; 0 1 ] x
y = [ 1 0 ] x + v
where in the LQG problem we have
Rzz = [ 1 0 ; 0 1 ]    Ruu = 1    Rvv = 1    Rww = 1

• Solve the SS LQG problem to find that
Tr[Pss Lss Rvv LssT] = 8.0      Tr[Qss CzT Rzz Cz] = 2.8
Tr[Pss Bw Rww BwT] = 1.7        Tr[Qss KssT Ruu Kss] = 9.1


• Suggests to me that we need to improve the estimation error ⇒ that Rvv is too large. Repeat with
Rzz = [ 1 0 ; 0 1 ]    Ruu = 1    Rvv = 0.1    Rww = 1

Tr[Pss Lss Rvv LssT] = 4.1      Tr[Qss CzT Rzz Cz] = 1.0
Tr[Pss Bw Rww BwT] = 1.7        Tr[Qss KssT Ruu Kss] = 3.7

and
Rzz = [ 1 0 ; 0 1 ]    Ruu = 1    Rvv = 0.01    Rww = 1

Tr[Pss Lss Rvv LssT] = 3.0      Tr[Qss CzT Rzz Cz] = 0.5
Tr[Pss Bw Rww BwT] = 1.7        Tr[Qss KssT Ruu Kss] = 1.7



Spr 2008 16.323 12–20

• LQG analysis code

A=[0 1;0 0];                        % double integrator dynamics
Bu=[0 1]';                          % control input
Bw=[0 1]';                          % process noise input
Cy=[1 0];                           % measurement: position only
Cz=[1 0;0 1];                       % performance output: full state
Rww=1;                              % process noise intensity
Rvv=1;                              % sensor noise intensity
Rzz=diag([1 1]);                    % performance weighting
Ruu=1;                              % control weighting
[K,P]=lqr(A,Bu,Cz'*Rzz*Cz,Ruu);     % regulator gain and Riccati solution
[L,Q]=lqr(A',Cy',Bw*Rww*Bw',Rvv);L=L';   % estimator gain by duality
N1=trace(P*(L*Rvv*L'))              % Tr[Pss Lss Rvv Lss']
N2=trace(Q*(Cz'*Rzz*Cz))            % Tr[Qss Cz' Rzz Cz]  (estimation-error term)
N3=trace(P*(Bw*Rww*Bw'))            % Tr[Pss Bw Rww Bw']  (regulation term)
N4=trace(Q*(K'*Ruu*K))              % Tr[Qss Kss' Ruu Kss]
[N1 N2;N3 N4]                       % N1+N2 = N3+N4 = Jbar



Spr 2008 16.323 12–21

Stochastic Simulation
• Consider the linearized longitudinal dynamics of a hypothetical helicopter. The model of the helicopter requires four state variables:
– θ(t):fuselage pitch angle (radians)
– q(t):pitch rate (radians/second)
– u(t):horizontal velocity of CG (meters/second)
– x(t):horizontal distance of CG from desired hover (meters)

The control variable is:

– δ (t): tilt angle of rotor thrust vector (radians)

Figure by MIT OpenCourseWare.

Figure 12.1: Helicopter in Hover


• The linearized equations of motion are:
θ̇(t) = q(t)

q̇(t) = −0.415q(t) − 0.011u(t) + 6.27δ(t) − 0.011w(t)

u̇(t) = 9.8θ(t) − 1.43q(t) − .0198u(t) + 9.8δ(t) − 0.0198w(t)

ẋ(t) = u(t)

– w(t) represents a horizontal wind disturbance


– Model w(t) as the output of a first order system driven by zero
mean, continuous time, unit intensity Gaussian white noise ξ(t):
ẇ(t) = −0.2w(t) + 6ξ(t)



Spr 2008 16.323 12–22

• First, treat original (non-augmented) plant dynamics.


– Design LQR controller so that an initial hover position error, x(0) =
1 m is reduced to zero (to within 5%) in approximately 4 sec.

Figure 12.2: Results show that Ruu = 5 gives reasonable performance.

• Augment the noise model, and using the same control gains, form the
closed-loop system which includes the wind disturbance w(t) as part
of the state vector.
• Solve necessary Lyapunov equations to determine the (steady-state)
variance of the position hover error, x(t) and rotor angle δ(t).
– Without feedforward:
E[x²] = 0.048      E[δ²] = 0.017
• Then design an LQR for the augmented system and repeat the process.
– With feedforward:
E[x²] = 0.0019     E[δ²] = 0.0168



Spr 2008 16.323 12–23

• Now do a stochastic simulation of the closed-loop system using Δt = 0.1.

– Note the subtlety here that the design was for a continuous system, but the simulation will be discrete
– Are assuming that the integration step is constant.
– Need to create ζ using the randn function, which gives zero-mean, unit-variance Gaussian noise.
– To scale it correctly for a discrete simulation, multiply the output of randn by 1/√Δt, where Δt is the integration step size.26
– Could also just convert the entire system to its discrete-time equivalent, and then use a process noise that has a covariance
Qd = Rww /Δt

26 Franklin and Powell, Digital Control of Dynamic Systems



Spr 2008 16.323 12–24

Figure 12.3: Stochastic Simulations with and without disturbance feedforward.



Spr 2008 16.323 12–25

Helicopter stochastic simulation

1 % 16.323 Spring 2008

2 % Stochastic Simulation of Helicopter LQR

3 % Jon How

4 %

5 clear all, clf, randn(’seed’,sum(100*clock));

6 % linearized dynamics of the system

7 A = [ 0 1 0 0; 0 -0.415 -0.011 0;9.8 -1.43 -0.0198 0;0 0 1 0];

8 Bw = [0 -0.011 -0.0198 0]’;

9 Bu = [0 6.27 9.8 0]’;

10 Cz = [0 0 0 1];

11 Rxx = Cz’*Cz;

12 rho = 5;

13 Rww=1;

14
15 % lqr control

16 [K,S,E]=lqr(A,Bu,Rxx,rho);

17 [K2,S,E]=lqr(A,Bu,Rxx,10*rho);

18 [K3,S,E]=lqr(A,Bu,Rxx,rho/10);

19
20 % initial response with given x0

21 x0 = [0 0 0 1]’;

22 Ts=0.1; % small discrete step to simulate the cts dynamics

23 tf=8;t=0:Ts:tf;

24 [y,x] = initial(A-Bu*K,zeros(4,1),Cz,0,x0,t);

25 [y2,x2] = initial(A-Bu*K2,zeros(4,1),Cz,0,x0,t);

26 [y3,x3] = initial(A-Bu*K3,zeros(4,1),Cz,0,x0,t);

27 subplot(211), plot(t,[y y2 y3],[0 8],.05*[1 1],’:’,[0 8],.05*[-1 -1],’:’,’LineWidth’,2)

28 ylabel(’x’);title(’Initial response of the closed loop system with x(0) = 1’)

29 h = legend([’LQR: \rho = ’,num2str(rho)],[’LQR: \rho = ’,num2str(rho*10)],[’LQR: \rho = ’,num2str(rho/10)]);

30 axes(h)

31 subplot(212), plot(t,[(K*x’)’ (K2*x2’)’ (K3*x3’)’],’LineWidth’,2);grid on

32 xlabel(’Time’), ylabel(’\delta’)

33 print -r300 -dpng heli1.png

34
35 % shaping filter

36 Ah=-0.2;Bh=6;Ch=1;

37 % augment the filter dynamics

38 Aa = [A Bw*Ch; zeros(1,4) Ah];

39 Bua = [Bu;0];

40 Bwa = [zeros(4,1); Bh];

41 Cza = [Cz 0];

42 Ka = [K 0]; % i.e. no dist FF

43 Acla = Aa-Bua*Ka; % close the loop using NO dist FF

44 Pass = lyap(Acla,Bwa*Rww*Bwa’); % compute SS response to the dist

45 vx = Cza*Pass*Cza’; % state resp

46 vd = Ka*Pass*Ka’; % control resp

47
48 zeta = sqrt(Rww/Ts)*randn(length(t),1); % discrete equivalent noise

49 [y,x] = lsim(Acla,Bwa,Cza,0,zeta,t,[x0;0]); % cts closed-loop sim

50 %

51 % second simulation approach: discrete time

52 %

53 Fa=c2d(ss(Acla,Bwa,Cza,0),Ts); % discretize the closed-loop dynamics

54 [dy,dx] = lsim(Fa,zeta,[],[x0;0]); % stochastic sim in discrete time

55 u = Ka*x’; % find control commands given the state response

56
57 % disturbance FF

58 [KK,SS,EE]=lqr(Aa,Bua,Cza’*Cza,rho); % now K will have dist FF

59 Acl=Aa-Bua*KK;

60 PP=lyap(Acl,Bwa*Rww*Bwa’);

61 vxa = Cza*PP*Cza’;

62 vda = KK*PP*KK’;

63 [ya,xa] = lsim(Acl,Bwa,Cza,0,zeta,t,[x0;0]); % cts sim

64 F=c2d(ss(Acl,Bwa,Cza,0),Ts); % discretize the closed-loop dynamics

65 [dya,dxa] = lsim(F,zeta,[],[x0;0]); % stochastic sim in discrete time

66 ua = KK*xa’; % find control commands given the state response

67



Spr 2008 16.323 12–26

68 figure(2);

69 subplot(211)

70 plot(t,y,’LineWidth’,2)

71 hold on;

72 plot(t,dy,’r-.’,’LineWidth’,1.5)

73 plot([0 max(t)],sqrt(vx)*[1 1],’m--’,[0 max(t)],-sqrt(vx)*[1 1],’m--’,’LineWidth’,1.5);

74 hold off

75 xlabel(’Time’);ylabel(’y(t)’);legend(’cts’,’disc’)

76 title(’Stochastic Simulation of Helicopter Response: No FF’)

77 subplot(212)

78 plot(t,u,’LineWidth’,2)

79 xlabel(’Time’);ylabel(’u(t)’);legend(’No FF’)

80 hold on;

81 plot([0 max(t)],sqrt(vd)*[1 1],’m--’,[0 max(t)],-sqrt(vd)*[1 1],’m--’,’LineWidth’,1.5);

82 hold off

83

84 figure(3);

85 subplot(211)

86 plot(t,ya,’LineWidth’,2)

87 hold on;

88 plot(t,dya,’r-.’,’LineWidth’,1.5)

89 plot([0 max(t)],sqrt(vxa)*[1 1],’m--’,[0 max(t)],-sqrt(vxa)*[1 1],’m--’,’LineWidth’,1.5);

90 hold off

91 xlabel(’Time’);ylabel(’y(t)’);legend(’cts’,’disc’)

92 title(’Stochastic Simulation of Helicopter Response: with FF’)

93 subplot(212)

94 plot(t,ua,’LineWidth’,2)

95 xlabel(’Time’);ylabel(’u(t)’);legend(’with FF’)

96 hold on;

97 plot([0 max(t)],sqrt(vda)*[1 1],’m--’,[0 max(t)],-sqrt(vda)*[1 1],’m--’,’LineWidth’,1.5);

98 hold off

99

100 print -f2 -r300 -dpng heli2.png


101 print -f3 -r300 -dpng heli3.png



Spr 2008 LQG for Helicopter 16.323 12–27

• Now consider what happens if we reduce the measurable states and


use LQG for the helicopter control/simulation
• Consider full vehicle state measurement (i.e., not the disturbance
state)
Cy = [ I4 0 ]
• Consider only partial vehicle state measurement
Cy = [ 0 1 0 0 0 ; 0 0 0 1 0 ]
• Set Rvv small.



Spr 2008 16.323 12–28

Figure 12.4: LQR with disturbance feedforward compared to LQG



Spr 2008 16.323 12–29

Figure 12.5: Second LQR with disturbance feedforward compared to LQG



Spr 2008 16.323 12–30

Helicopter LQG

1 % 16.323 Spring 2008

2 % Stochastic Simulation of Helicopter LQR - from Bryson’s Book

3 % Jon How

4 %

5 clear all, clf, randn(’seed’,sum(100*clock));

6 set(0,’DefaultAxesFontName’,’arial’)

7 set(0,’DefaultAxesFontSize’,12)

8 set(0,’DefaultTextFontName’,’arial’)

9 % linearized dynamics of the system state=[theta q dotx x]

10 A = [ 0 1 0 0; 0 -0.415 -0.011 0;9.8 -1.43 -0.0198 0;0 0 1 0];

11 Bw = [0 -0.011 -0.0198 0]’;

12 Bu = [0 6.27 9.8 0]’;

13 Cz = [0 0 0 1];

14 Rxx = Cz’*Cz; Rww=1;

15 rho = 5;

16 % lqr control

17 [K,S,E]=lqr(A,Bu,Rxx,rho);

18
19 % initial response with given x0

20 x0 = [0 0 0 1]’;

21 Ts=0.01; % small discrete step to simulate the cts dynamics

22 tf=20;t=0:Ts:tf;nt=length(t);

23 % Now consider shaped noise with shaping filter

24 Ah=-0.2;Bh=6;Ch=1;

25 % augment the filter dynamics

26 Aa = [A Bw*Ch; zeros(1,4) Ah];

27 Bua = [Bu;0];

28 Bwa = [zeros(4,1); Bh];

29 Cza = [Cz 0];

30 x0a=[x0;0];

31 %zeta = Rww/sqrt(Ts)*randn(length(t),1); % discrete equivalent noise

32 zeta = sqrt(Rww/Ts)*randn(length(t),1); % discrete equivalent noise

33
34 %%%% Now consider disturbance FF

35 [KK,SS,EE]=lqr(Aa,Bua,Cza’*Cza,rho); % now K will have dist FF

36 Acl=Aa-Bua*KK;

37 PP=lyap(Acl,Bwa*Rww*Bwa’);

38 vxa = Cza*PP*Cza’; %state

39 vda = KK*PP*KK’; %control

40 %

41 [ya,xa] = lsim(Acl,Bwa,Cza,0,zeta,t,x0a); % cts sim

42 F=c2d(ss(Acl,Bwa,Cza,0),Ts); % discretize the closed-loop dynamics

43 [dya,dxa] = lsim(F,zeta,[],x0a); % stochastic sim in discrete time

44 ua = KK*xa’; % find control commands given the state response

45
46 %%%% Now consider Output Feedback Case

47 % Assume that we can only measure the system states

48 % and not the dist one

49 FULL=1;

50 if FULL

51 Cya=eye(4,5); % full veh state


52 else

53 Cy=[0 1 0 0;0 0 0 1]; % only meas some states

54 Cya=[Cy [0;0]];

55 end

56 Ncy=size(Cya,1);Rvv=(1e-2)^2*eye(Ncy);

57 [L,Q,FF]=lqr(Aa’,Cya’,Bwa*Rww*Bwa’,Rvv);L=L’;% LQE calc

58 %closed loop dyn

59 Acl_lqg=[Aa -Bua*KK;L*Cya Aa-Bua*KK-L*Cya];

60 Bcl_lqg=[Bwa zeros(5,Ncy);zeros(5,1) L];

61 Ccl_lqg=[Cza zeros(1,5)];Dcl_lqg=zeros(1,1+Ncy);

62 x0_lqg=[x0a;zeros(5,1)];

63 zeta_lqg=zeta;

64 % now just treat this as a system with more sensor noise acting as more

65 % process noise

66 for ii=1:Ncy

67 zeta_lqg = [zeta_lqg sqrt(Rvv(ii,ii)/Ts)*randn(nt,1)];% discrete equivalent noise



Spr 2008 16.323 12–31

68 end

69 [ya_lqg,xa_lqg] = lsim(Acl_lqg,Bcl_lqg,Ccl_lqg,Dcl_lqg,zeta_lqg,t,x0_lqg); % cts sim

70 F_lqg=c2d(ss(Acl_lqg,Bcl_lqg,Ccl_lqg,Dcl_lqg),Ts); % discretize the closed-loop dynamics

71 [dya_lqg,dxa_lqg] = lsim(F_lqg,zeta_lqg,[],x0_lqg); % stochastic sim in discrete time

72 ua_lqg = [zeros(1,5) KK]*xa_lqg’; % find control commands given the state estimate

73

74 %LQG State Perf Prediction

75 X_lqg=lyap(Acl_lqg,Bcl_lqg*[Rww zeros(1,Ncy);zeros(Ncy,1) Rvv]*Bcl_lqg’);

76 vx_lqg=Ccl_lqg*X_lqg*Ccl_lqg’;

77 vu_lqg=[zeros(1,5) KK]*X_lqg*[zeros(1,5) KK]’;

78

79 figure(3);clf

80 subplot(211)

81 plot(t,ya,’LineWidth’,3)

82 hold on;

83 plot(t,dya,’r-.’,’LineWidth’,2)

84 plot([0 max(t)],sqrt(vxa)*[1 1],’m--’,[0 max(t)],-sqrt(vxa)*[1 1],’m--’,’LineWidth’,1);

85 hold off

86 xlabel(’Time’);ylabel(’y(t)’);legend(’cts’,’disc’)

87 title(’Stochastic Simulation of Helicopter Response: with FF’)

88 subplot(212)

89 plot(t,ua,’LineWidth’,2)

90 xlabel(’Time’);ylabel(’u(t)’);legend(’with FF’)

91 hold on;

92 plot([0 max(t)],sqrt(vda)*[1 1],’m--’,[0 max(t)],-sqrt(vda)*[1 1],’m--’,’LineWidth’,1);

93 axis([0 tf -0.2 .6])

94 hold off

95 print -f3 -r300 -dpng heli_lqg_1.png;

96

97 figure(4);clf

98 subplot(211)

99 plot(t,ya_lqg,’LineWidth’,3)

100 hold on;


101 plot(t,dya_lqg,’r-.’,’LineWidth’,2)
102 plot([0 max(t)],sqrt(vx_lqg)*[1 1],’m--’,[0 max(t)],-sqrt(vx_lqg)*[1 1],’m--’,’LineWidth’,1);
103 hold off
104 xlabel(’Time’);ylabel(’y(t)’);legend(’cts’,’disc’)
105 title([’Stochastic Simulation of Helicopter Response: LQG R_{v v} = ’,num2str(Rvv(1,1))])
106 subplot(212)
107 plot(t,ua_lqg,’LineWidth’,2)
108 xlabel(’Time’);ylabel(’u(t)’);%legend(’with FF’)
109 if FULL
110 legend(’Full veh state’)
111 else
112 legend(’Pitch rate, Horiz Pos’)
113 end
114 hold on;
115 plot([0 max(t)],sqrt(vu_lqg)*[1 1],’m--’,[0 max(t)],-sqrt(vu_lqg)*[1 1],’m--’,’LineWidth’,1);
116 axis([0 tf -0.2 .6])
117 hold off
118 if FULL
119 print -f4 -r300 -dpng heli_lqg_2.png;
120 else
121 print -f4 -r300 -dpng heli_lqg_3.png;
122 end



Spr 2008 16.323 12–32
Example: 747
• Bryson, page 209: Consider the stabilization of a 747 at 40,000 ft and a Mach number of 0.80. The perturbation dynamics from elevator angle to pitch angle are given by

θ(s)/δe(s) = G(s) = 1.16(s + 0.0113)(s + 0.295) / { [s² + (0.0676)²][(s + 0.375)² + (0.882)²] }

1. Note that these aircraft dynamics can be stabilized with a simple lead compensator

δe(s)/θ(s) = 3.50 (s + 0.6)/(s + 3.6)

2. Can also design an LQG controller for this system by assuming that
Bw = Bu and Cz = Cy , and then tuning Ruu and Rvv to get a
reasonably balanced performance.
– Took Rww = 0.1 and tuned Rvv
[Pole-zero map of the lead and LQG compensators (real vs. imaginary axis).]

Figure 12.6: B747: Compensators



Spr 2008 16.323 12–33

[Two root-locus plots: “RL of B747 system with the given Lead Comp” and “RL of B747 system with the LQG Comp” (real vs. imaginary axis).]

Figure 12.7: B747: root locus (Lead on left, LQG on right, shown as a function of the overall compensator gain)



Spr 2008 16.323 12–34

3. Compare the Bode plots of the lead compensator and LQG designs.

[Magnitude and phase plots of G, Gc_lead, and Gc_lqg, and of the loop transfer functions Loop_lead and Loop_lqg, versus frequency (rad/sec).]

Figure 12.8: B747: Compensators and loop TF



Spr 2008 16.323 12–35

4. Consider the closed-loop TF for the system.

[Magnitude plots of G, Gcl_lead, and Gcl_lqg versus frequency (rad/sec).]

Figure 12.9: B747: closed-loop TF

5. Compare the impulse responses of the two closed-loop systems.

[Output y(t) for G, Gcl_lead, and Gcl_lqg, and control u(t) for Gc_lead and Gc_lqg, versus time.]

Figure 12.10: B747: Impulse response

6. So while LQG controllers might appear to be glamorous, they are actually quite ordinary for SISO systems.
– Where they really shine is that it is this simple to design a MIMO controller.



Spr 2008 16.323 12–36

B747 LQG

1 % 16.323 B747 example

2 % Jon How, MIT, Spring 2007

3 %

4 clear all

5 set(0,’DefaultAxesFontName’,’arial’)

6 set(0,’DefaultAxesFontSize’,12)

7 set(0,’DefaultTextFontName’,’arial’)

8
9 gn=1.16*conv([1 .0113],[1 .295]);

10 gd=conv([1 0 .0676^2],[1 2*.375 .375^2+.882^2]);

11 % lead comp given

12 kn=3.5*[1 .6];kd=[1 3.6];

13
14 f=logspace(-3,1,300);

15 g=freqresp(gn,gd,2*pi*f*sqrt(-1));

16
17 [nc,dc]=cloop(conv(gn,kn),conv(gd,kd)); % CLP with lead

18 gc=freqresp(nc,dc,2*pi*f*sqrt(-1)); % CLP with lead

19 %roots(dc)

20 %loglog(f,abs([g gc]))

21
22 %get state space model

23 [a,b,c,d]=tf2ss(gn,gd);

24 % assume that Bu and Bw are the same

25 % take y=z

26 Rzz=1;Ruu=0.01;Rww=0.1;Rvv=0.01;

27 [k,P,e1] = lqr(a,b,c’*Rzz*c,Ruu);

28 [l,Q,e2] = lqe(a,b,c,Rww,Rvv);

29 [ac,bc,cc,tdc] = reg(a,b,c,d,k,l);

30 [knl,kdl]=ss2tf(ac,bc,cc,tdc);

31 N1=trace(P*(l*Rvv*l’))%

32 N2=trace(Q*(c’*Rzz*c))%

33 N3=trace(P*(b*Rww*b’))%

34 N4=trace(Q*(k’*Ruu*k))%

35 N=[N1 N2 N1+N2;N3 N4 N3+N4]

36
37 [ncl,dcl]=cloop(conv(gn,knl),conv(gd,kdl)); % CLP with lqg

38 gcl=freqresp(ncl,dcl,2*pi*f*sqrt(-1)); % CLP with lqg

39 [[roots(dc);0;0;0] roots(dcl)]

40 figure(2);clf;

41 loglog(f,abs([g gc gcl])) % mag plot of closed loop system

42 setlines(2)

43 legend(’G’,’Gcl_{lead}’,’Gcl_{lqg}’)

44 xlabel(’Freq (rad/sec)’)

45
46 Gclead=freqresp(kn,kd,2*pi*f*sqrt(-1));

47 Gclqg=freqresp(knl,kdl,2*pi*f*sqrt(-1));

48
49 figure(3);clf;

50 subplot(211)

51 loglog(f,abs([g Gclead Gclqg])) % Bode of compensators

52 setlines(2)

53 legend(’G’,’Gc_{lead}’,’Gc_{lqg}’)

54 xlabel(’Freq (rad/sec)’)

55 axis([1e-3 10 1e-2 1e2])

56 subplot(212)

57 semilogx(f,180/pi*unwrap(phase([g])));hold on

58 semilogx(f,180/pi*unwrap(phase([Gclead])),’g’)

59 semilogx(f,180/pi*unwrap(phase([Gclqg])),’r’)

60 xlabel(’Freq (rad/sec)’)

61 hold off

62 setlines(2)

63 legend(’G’,’Gc_{lead}’,’Gc_{lqg}’)

64
65 figure(6);clf;

66 subplot(211)

67 loglog(f,abs([g g.*Gclead g.*Gclqg])) % Bode of Loop transfer function



Spr 2008 16.323 12–37

68 setlines(2)

69 legend(’G’,’Loop_{lead}’,’Loop_{lqg}’)

70 xlabel(’Freq (rad/sec)’)

71 axis([1e-3 10 1e-2 1e2])

72 subplot(212)

73 semilogx(f,180/pi*unwrap(phase([g])));hold on

74 semilogx(f,180/pi*unwrap(phase([g.*Gclead])),’g’)

75 semilogx(f,180/pi*unwrap(phase([g.*Gclqg])),’r’)

76 xlabel(’Freq (rad/sec)’)

77 hold off

78 setlines(2)

79 legend(’G’,’Loop_{lead}’,’Loop_{lqg}’)

80

81 % RL of 2 closed-loop systems

82 figure(1);clf;rlocus(conv(gn,kn),conv(gd,kd));axis(2*[-2.4 0.1 -0.1 2.4])

83 hold on;plot(roots(dc)+sqrt(-1)*eps,’md’,’MarkerFaceColor’,’m’);hold off

84 title(’RL of B747 system with the given Lead Comp’)

85 figure(4);clf;rlocus(conv(gn,knl),conv(gd,kdl));axis(2*[-2.4 0.1 -0.1 2.4])

86 hold on;plot(roots(dcl)+sqrt(-1)*eps,’md’,’MarkerFaceColor’,’m’);hold off

87 title(’RL of B747 system with the LQG Comp’)

88

89 % time simulations

90 Ts=0.01;

91 [y1,x,t]=impulse(gn,gd,[0:Ts:10]);

92 [y2]=impulse(nc,dc,t);

93 [y3]=impulse(ncl,dcl,t);

94 [ulead]=lsim(kn,kd,y2,t); % noise free sim

95 [ulqg]=lsim(knl,kdl,y3,t); % noise free sim

96

97 figure(5);clf;

98 subplot(211)

99 plot(t,[y1 y2 y3])

100 xlabel(’Time’)
101 ylabel(’y(t)’)
102 setlines(2)
103 legend(’G’,’Gcl_{lead}’,’Gcl_{lqg}’)
104 subplot(212)
105 plot(t,[ulead ulqg])
106 xlabel(’Time’)
107 ylabel(’u(t)’)
108 setlines(2)
109 legend(’Gc_{lead}’,’Gc_{lqg}’)
110
111 figure(7)
112 pzmap(tf(kn,kd),’g’,tf(knl,kdl),’r’)
113 legend(’lead’,’LQG’)
114
115 print -depsc -f1 b747_1.eps;jpdf(’b747_1’)
116 print -depsc -f2 b747_2.eps;jpdf(’b747_2’)
117 print -depsc -f3 b747_3.eps;jpdf(’b747_3’)
118 print -depsc -f4 b747_4.eps;jpdf(’b747_4’)
119 print -depsc -f5 b747_5.eps;jpdf(’b747_5’)
120 print -depsc -f6 b747_6.eps;jpdf(’b747_6’)
121 print -depsc -f7 b747_7.eps;jpdf(’b747_7’)
122

