Advanced Control Using MATLAB
or Stabilising the Unstabilisable

David I. Wilson
Auckland University of Technology
New Zealand

May 15, 2015
All rights reserved. No part of this work may be reproduced, stored in a retrieval system, or
transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or
otherwise, without prior permission.
Contents

1 Introduction
  1.1 Notation and colour conventions
  1.2 Matlab for computer aided control design
      1.2.1 Alternative computer design aids
  1.3 Laboratory equipment for control tests
      1.3.1 Plants with one input and one output
      1.3.2 Multi-input and multi-output plants
      1.3.3 Slowing down Simulink
  1.4 Economics of control
2 From differential to difference equations
  2.9.1 Stability in the continuous domain
  2.9.2 Stability of the closed loop
  2.9.3 Stability of discrete time systems
  2.9.4 Stability of nonlinear differential equations
  2.9.5 Expressing matrix equations succinctly using Kronecker products
  2.9.6 Summary of stability analysis
  2.10 Summary
      6.6.2 Robust model fitting
      6.6.3 Common nonlinear model structures
  6.7 Online model identification
      6.7.1 Recursive least squares
      6.7.2 Recursive least-squares in Matlab
      6.7.3 Tracking the precision of the estimates
  6.8 The forgetting factor and covariance windup
      6.8.1 The influence of the forgetting factor
      6.8.2 Covariance wind-up
      6.8.3 Maintaining a symmetric positive definite covariance matrix
  6.9 Identification by parameter optimisation
  6.10 Online estimating of noise models
      6.10.1 A recursive extended least-squares example
      6.10.2 Recursive identification using the SI toolbox
      6.10.3 Simplified RLS algorithms
  6.11 Closed loop identification
      6.11.1 Closed loop RLS in Simulink
  6.12 Summary

7 Adaptive Control
  7.1 Why adapt?
      7.1.1 The adaption scheme
      7.1.2 Classification of adaptive controllers
  7.2 Gain scheduling
  7.3 The importance of identification
      7.3.1 Polynomial manipulations
  7.4 Self tuning regulators (STRs)
      7.4.1 Simple minimum variance control
  7.5 Adaptive pole-placement
      7.5.1 The Diophantine equation and the closed loop
      7.5.2 Solving the Diophantine equation in Matlab
      7.5.3 Adaptive pole-placement with identification
      7.5.4 Adaptive pole-placement in Simulink
  7.6 Practical adaptive pole-placement
      7.6.1 Dealing with non-minimum phase systems
      7.6.2 Separating stable and unstable factors
      7.6.3 Experimental adaptive pole-placement
      7.6.4 Minimum variance control with dead time
  7.7 Summary of adaptive control

8 Multivariable controller design
  8.1 Controllability and observability
      8.1.1 Controllability
      8.1.2 Observability
      8.1.3 Computing controllability and observability
      8.1.4 State reconstruction
  8.2 State space pole-placement controller design
      8.2.1 Poles and where to place them
      8.2.2 Deadbeat control
      8.2.3 Pole-placement for tracking control
  8.3 Estimating the unmeasured states
  8.4 Combining estimation and state feedback
  8.5 Generic model control
      8.5.1 The tuning parameters
      8.5.2 GMC control of a linear model
10 Predictive control
  10.1 Model predictive control
      10.1.1 Constrained predictive control
      10.1.2 Dynamic matrix control
  10.2 A Model Predictive Control Toolbox
      10.2.1 A model predictive control GUI
      10.2.2 MPC toolbox in Matlab
      10.2.3 Using the MPC toolbox in Simulink
      10.2.4 MPC control of an unstable 3 degree of freedom helicopter
      10.2.5 Further readings on MPC
  10.3 Optimal control using linear programming
      10.3.1 Development of the LP problem
A List of symbols
C Transform pairs
List of Figures
5.24 Two signals
5.25 Critical radio frequencies
5.26 Power spectrum for a signal
5.27 Smoothing by Fourier transform
5.28 Differentiating and smoothing noisy measurement
5.29 Filtering industrial data
8.1 Reconstructing states
8.2 Pole-placement
8.3 Pole-placement degradation
8.4 Deadbeat control
8.5 State feedback control system with an integral output state
8.6 State feedback with integral states
8.7 Simultaneous control and state estimation
8.8 Control and estimation
8.9 GMC tuning
9.1 IAE areas
9.2 ITAE breakdown
9.3 Optimal responses
9.4 Optimal PID tuning
9.5 Optimum PI tuning of the blackbox plant
9.6 Simulink model
9.7 Production of a valuable chemical in a batch reactor
9.8 Temperature profile optimisation
9.9 Optimum temperature profile comparison for different number of temperatures
9.10 Optimal control for the Rayleigh problem
9.11 Optimal control for the batch reactor
9.12 Optimal control with targets
9.13 Steady-state and time-varying LQR control
9.14 Steady-state continuous LQR controller
9.15 A block diagram of LQR control
9.16 Comparing discrete and continuous LQR controllers
9.17 LQR control with varying tuning
9.18 Pole-placement and LQR
9.19 Pole-placement and LQR showing the input
9.20 Trial pole locations
9.21 Trial pole-placement performance
9.22 LQR servo control
9.23 LQI servo control
9.24 Black box servo control
9.25 A block diagram of a state estimator
9.26 PDF of a random variable
9.27 Correlated noisy x, y data
9.28 2D ellipse
9.29 A block diagram of a stochastic state-space system
9.30 Kalman filter
9.31 Prediction-type Kalman filter
9.32 Current-type Kalman filter
9.33 Kalman filter demonstration
9.34 The performance of a Kalman filter for different q/r ratios
9.35 A random walk process
9.36 LQG in Simulink
9.37 LQG of an aircraft
10.10DMC control . . . . . . . . . . . . . . . . . . . .
10.11Adaptive DMC of the blackbox . . . . . . . . .
10.12An MPC graphical user interface . . . . . . . .
10.13Multivariable MPC . . . . . . . . . . . . . . . .
10.14S IMULINK and MPC . . . . . . . . . . . . . . .
10.15MPC . . . . . . . . . . . . . . . . . . . . . . . .
10.16A 3 degree of freedom helicopter . . . . . . . .
10.17MPC control of a helicopter structure . . . . . .
10.18MPC control of a helicopter results . . . . . . .
10.19LP constraint matrix dimensions . . . . . . . .
10.20LP optimal control . . . . . . . . . . . . . . . .
10.21LP optimal control with active constraints . . .
10.22Non-square LP optimal control . . . . . . . . .
10.23LP optimal control showing acausal behaviour
xv
List of Tables

1.1 Computer aids
Listings
5.3 Designing a high-pass Butterworth filter with a cut-off frequency of fc = 800 Hz
5.4 Designing Chebyshev filters
5.5 Computing a Chebyshev filter
5.6 Converting a 7th-order Butterworth filter to 4 second-order sections
5.7 Comparing DFII and SOS digital filters in single precision
5.8 Designing and visualising a 5th order elliptic band-stop filter
5.9 Routine to compute the power spectral density plot of a time series
5.10 Smoothing and differentiating a noisy signal
6.1 Identification of a first-order plant with deadtime from an openloop step response using the Areas method from Algorithm 6.1
6.2 Frequency response identification of an unknown plant directly from input/output data
6.3 Non-parametric frequency response identification using etfe
6.4 Function to generate output predictions given a trial model and input data
6.5 Optimising the model parameters
6.6 Validating the fitted model
6.7 Continuous model identification of a non-minimum phase system
6.8 Generate some input/output data for model identification
6.9 Estimate an ARX model from an input/output data series using least-squares
6.10 An alternative way to construct the data matrix for ARX estimation using Toeplitz matrices. See also Listing 6.9
6.11 Offline system identification using arx from the System Identification Toolbox
6.12 Offline system identification with no model/plant mismatch
6.13 Demonstrate the fitting of an AR model
6.14 Create an input/output sequence from an output-error plant
6.15 Parameter identification of an output error process using oe and arx
6.16 A basic recursive least-squares (RLS) update (without forgetting factor)
6.17 Tests the RLS identification scheme using Listing 6.16
6.18 A recursive least-squares (RLS) update with a forgetting factor. (See also Listing 6.16.)
6.19 Adaption of the plant gain using steepest descent
6.20 Create an ARMAX process and generate some input/output data suitable for subsequent identification
6.21 Identify an ARMAX process from the data generated in Listing 6.20
6.22 Recursively identify an ARMAX process
6.23 Kaczmarz's algorithm for identification
7.1 Simple minimum variance control where the plant has no time delay
7.2 A Diophantine routine to solve FA + BG = T for the polynomials F and G
7.3 Alternative Diophantine routine to solve FA + BG = T for the polynomials F and G. Compare with Listing 7.2
7.4 Constructing polynomials for the Diophantine equation example
7.5 Solving the Diophantine equation using polynomials generated from Listing 7.4
7.6 Adaptive pole-placement control with 3 different plants
7.7 The pole-placement control law when H = 1/B
7.8 Factorising an arbitrary polynomial B(q) into stable, B+(q), and unstable and poorly damped, B-(q), factors such that B = B+B- and B+ is defined as monic
7.9 Minimum variance control design
8.1 A simple state reconstructor following Algorithm 8.1
8.2 Pole-placement control of a well-behaved system
8.3 A deadbeat controller simulation
8.4 Pole placement for controllers and estimators
8.5 GMC on a Linear Process
8.6 GMC for a batch reactor
8.7 The dynamic equations of a batch reactor
8.8 Find the Lie derivative for a symbolic system
8.9 Establish relative degree, r (ignore degree 0 possibility)
Chapter 1

Introduction
Mathematicians may flatter themselves that they possess new ideas which mere human language is as yet
unable to express. Let them make the effort to express those ideas in appropriate words without the aid of
symbols, and if they succeed, they will not only lay us laymen under a lasting obligation but, we venture to
say, they will find themselves very much enlightened during the process, and will even be doubtful
whether the ideas expressed as symbols had ever quite found their way out of the equations into their
minds.
James Clerk Maxwell, 1890
Control, in an engineering sense, is where actions are taken to ensure that a particular physical process responds in some desired manner. Automatic control is where we have relieved the human operator from the tedium of consistently monitoring the process and supplying the necessary corrections. Control as a technical discipline is therefore important not only in the fields of engineering, but also in economics, sociology, and indeed in most aspects of our life. When studying control, we naturally assume that we do conceivably have some chance of influencing things. For example, it is worthwhile to study the operation of a coal-fired power plant in order to minimise possibly polluting emissions, but it is not worth our time trying to save the world from the next ice age. Similarly, a special study group that investigated methods designed to protect the world from a stray comet (such as the one postulated to have wiped out the dinosaurs 80 million years ago) concluded that there was nothing feasible we could do, such as change the earth's orbit or blast the asteroid, to avoid the collision. In these latter examples, the problem exists, but our influence is negligible.
The teaching of control has changed in emphasis over the last decade from that of linear single-input/single-output systems elegantly described in the Laplace domain, to general nonlinear multiple-input/multiple-output systems best analysed in the state space domain. This change has been motivated by the increasing demands by industry and public to produce more, faster or cleaner, and is now much more attractive due to the impressive improvements in computer aided tools, such as MATLAB used in these notes. This new emphasis is called advanced (or modern) control as opposed to the traditional or classical control. This set of notes is intended for students who have previously attended a first course in automatic control covering the usual continuous control concepts like Laplace transforms, Bode diagrams, stability of linear differential equations, PID controllers, and perhaps some exposure to discrete time control topics like z-transforms and state space.
This book attempts to describe what advanced control is, and how it is applied in engineering applications, with emphasis on the construction of controllers using computer aided design tools such as the numerical programming environment MATLAB from the MathWorks, [136]. With
this tool, we can concentrate on the intentions behind the design procedures, rather than the
mechanics to follow them.
Part one contains some revision material in z-transforms, modelling and PID controller tuning. The discrete domain, z-transforms, and stability concepts with a brief discussion of appropriate numerical methods are introduced in chapter 2. A brief potpourri of modelling is summarised in chapter 3. Chapter 4 is devoted to the most common industrial controller, the three-term PID controller, with emphasis on tuning, implementation and limitations. Some basic concepts from signal processing such as filtering and smoothing are introduced in chapter 5. Identification and the closely related adaptive control are treated together in chapters 6 and 7. State space analysis and optimal control design are given in the later chapters 8 and 9.
1.1 Notation and colour conventions
Throughout these notes I have used some typographical and colour conventions. In mathematical expressions, scalar variables are written in italics such as a, b, c, or as lower-case Greek letters, while vectors x, y are upright bold lower case and matrices, A, are bold upper case. More notation is introduced as required.
MATLAB instructions to be typed in as a command are given in a fixed-width font as A=chol(B*B) and it is important to carefully note the typographical difference between similar symbols such as the number zero, 0, and the capital letter O; the number one (1) and the letter l; and the Greek letters ν and ρ and the English italic letters v and p.
A section of MATLAB code and the associated output is given as

>> G = tf(1,[1 3])

Transfer function:
  1
-----
s + 3

However in many cases the output is not explicitly given and the MATLAB prompt is omitted.
The book is primarily concerned with control loops, for which we will use, where practical, a consistent colour convention. The inputs or manipulated variables to the plant are coloured blue, while the plant outputs or measurements are coloured red. A suitable mnemonic is to think of a simple domestic hot water heater where the plant (dirty industrial brown) heats up the incoming cold water (blue input) to produce hot water (red output), as shown in Fig. 1.1.
Figure 1.1: Standard colour conventions used for inputs (blue) and outputs (red) in this book.
The default colouring convention for a closed loop control system is shown in Fig. 1.2. Here we
colour the reference or setpoint in red (or pink), and disturbing signals such as noise (perhaps
arising from nature) are coloured green.
Fig. 1.3 shows an example of a typical controlled response given in this book. The top plot shows the output in red, with the setpoint in dotted pink. The lower trend shows the manipulated input in blue. Other information, such as the constraint bounds for the manipulated variable, is overlaid in grey.
Figure 1.2: Standard colour conventions used for closed loop block diagrams.
Figure 1.3: The standard colour and plot layout for a controlled response used in this book.

1.2 Matlab for computer aided control design
Modern control design has heavy computing requirements. In particular one needs to:

1. manipulate symbolic algebraic expressions, and

2. perform intensive numerical calculations and simulations for prototyping and testing, quickly and reliably, and finally

3. implement the controller at high speed in special hardware such as an embedded controller or a digital signal processing (DSP) chip, perhaps using assembler.
To use this new theory, it is essential to use computer aided design (CAD) tools efficiently, as real-world problems can rarely be solved manually. But as [171] point out, the use of computers in the design of control systems has a long and fairly distinguished history. This book uses MATLAB for the design, simulation and prototyping of controllers.

MATLAB (which is short for MATrix LABoratory) is a programming environment that grew out of an effort to create an easy user-interface to the very popular and well regarded public domain FORTRAN linear algebra collection of programmes, LINPACK and EISPACK. With this direct interpretive interface, one can write quite sophisticated algorithms in a very high level language that are consequently easy to read and maintain. MATLAB is supported with a variety of toolboxes comprising collections of source code subroutines organised in areas of specific interest. The toolboxes we are most interested in, and used in this book, are:
Control toolbox containing functions for controller design, frequency domain analysis, conversions between various model forms, pole placement, optimal control, etc. (Used throughout.)

Symbolic toolbox which contains a gateway to the symbolic capabilities of MAPLE.

Signal processing toolbox containing filters, waveform generation and spectral analysis. (Used principally in chapter 5.)

System identification toolbox for identifying the parameters of various dynamic model types. (Used in chapter 6.) You may also find useful the free statistics toolbox available at www.maths.lth.se/matstat/stixbox/, or the R package from [163].

Optimisation which contains routines to maximise or minimise algebraic or dynamic functions. One suitable free toolbox is available from www.i2c2.aut.ac.nz/Wiki/OPTI/. This toolbox is used primarily in chapters 9 and 10.

Real-time toolbox which can be used to interface MATLAB to various analogue to digital converters. A free version suitable for Measurement Computing's MCC DAQs is available from www.i2c2.aut.ac.nz/Resources/Software/SysIDToolbox.html.
Additional documentation to that supplied with MATLAB is the concise and free summary notes [185] or the more recent [69]. Recently there has been exponential growth in other texts that heavily use MATLAB (such as this one), and a current list is available from the MathWorks anonymous ftp server at www.mathworks.com. This server also contains many user contributed codes, as well as updates, bug fixes etc.

If MATLAB, or even programming in a high level language, is new to you, then [204] is a cheap recommended compendium, similar in form to this, covering topics in numerical analysis, again with many MATLAB examples.
1.2.1 Alternative computer design aids

Table 1.1 lists a number of alternative computer-aided design and modelling environments similar and complementary to MATLAB.
Product         WWW site                        Comment
SciLab          www.scilab.org                  free Matlab/Simulink clone
Python          code.google.com/p/pythonxy      free programming environment
Octave          www.octave.org                  free Matlab clone, inactive
RLab            rlabplus.sourceforge.net        Matlab clone, Linux
VisualModelQ    www.qxdesign.com                shareware Simulink clone
MathViews       www.mathwizards.com             shareware
MuPAD           www.mupad.de                    interfaces with SciLab
Maple           www.maplesoft.com               commercial CAS
Mathematica     www.mathematica.com             commercial CAS

Table 1.1: Shareware or freeware Matlab lookalikes and computer algebra systems
Unlike MATLAB, symbolic manipulators are computer programs that, by manipulating symbols, can perform algebra. Such programs are alternatively known as computer algebra systems or CAS. The most well known examples are MATHEMATICA, MAPLE, MUPAD, and MACSYMA (see Table 1.1). These programs can find analytical solutions to many mathematical problems involving integrals, limits, special functions and so forth. They are particularly useful in the controller design stage.

The Numerics in Control group in Europe has collected together a freeware FORTRAN subroutine library, SLICOT, of routines relevant in systems and control.
Problem 1.1

1. Familiarise yourself with the fundamentals of MATLAB. Run the MATLAB demo by typing demo once inside MATLAB.

2. Try the MATLAB tutorial (part 1).

3. Read through the MATLAB primer, [185] or [69], and you should get acquainted with the MATLAB user's manual.
1.3 Laboratory equipment for control tests

Obviously if we are to study automatic control with the aim of eventually controlling chemical plants, manufacturing processes and robots, or undertaking filtering for active noise cancellation and so forth, we should practice, preferably on simpler, better understood, and potentially less hazardous equipment.

In the Automatic Control Laboratory in the Department of Electrical Engineering at Karlstad University, Sweden, we have a number of simple bench-scale plants to test identification and control algorithms on.
1.3.1 Plants with one input and one output

The blackbox
Fig. 1.4 and Fig. 1.5(a) show what we perhaps unimaginatively refer to as a black-box. It is a box, and it is coloured black. Subjecting the box to an input voltage from 0 to 5 volts delivers an output voltage also spanning from around 0 to 5 volts, but lagging behind the input voltage, since the internals of the blackbox are simply either 7 or 9 (depending on the switch position) low-pass passive filters cascaded together.
The blackbox is a relatively well behaved underdamped stable system with dominant time constants of around 5 to 10 seconds. Fig. 1.5(b) shows the response of the blackbox to two input steps. The chief disadvantage of using this device for control studies is that the output response is not visible to the naked eye, and that we cannot manually introduce disturbances. One complication you can introduce is to cascade two blackboxes together to modify the dynamics.
The electro-magnetic balance arm
The electromagnetic balance arm shown in Fig. 1.6(a) is a fast-acting, highly oscillatory, plant
with little noise. The aim is to accurately weigh small samples by measuring the current required
to keep the balance arm level, or alternatively just to position the arm at different angles. The
output response to a step in input shown in Fig. 1.6(b) indicates how long it would take for the
oscillations to die away.
Figure 1.4: Blackbox configuration. The manual switch will toggle between either 7 or 9 low-pass filters.
Figure 1.5: (a) Black-box wiring to the National Instruments LabPC terminator. (b) The response of the blackbox to 2 step inputs.
Figure 1.6: The response of the balance arm (arm position and input) to a step in input.

Figure 1.7: The fan and flapper with an adjustable counter weight as a disturbance.
Stepper motors

A stepper motor is an example of a totally discrete system.
1.3.2 Multi-input and multi-output plants
It is possible to construct multivariable interacting plants by physically locating two plants close to each other. One possibility is to locate two flappers adjacent to each other; another possibility is one flapper and one balance arm. The extent of the interaction can be varied by adjusting the relative position of the two plants.
Helicopter
The model helicopter, Fig. 1.9, is an example of a highly unstable, multivariable (3 inputs, 2 outputs), nonlinear, strongly interacting plant. It is a good example of where we must apply control (or crash and burn). Fig. 1.10(b) shows the controlled response using 2 PID controllers to control the direction and altitude. Fig. 1.11(a) shows a 3-dimensional view of the desired and actual flight path. Section 10.2.4 illustrates the control of a different three degree of freedom helicopter.
Figure 1.9: Helicopter plant with 2 degrees of freedom. See also Fig. 1.10(a).
1.3.3 Slowing down Simulink
For some applications, like the development of PID controllers, you want to be able to slow down the Simulink simulation to have time to manually introduce step changes, add disturbances, switch from automatic to manual, etc. If left alone in simulation mode, Simulink will run as fast as possible, but it will slow down when it needs to sub-sample the integrator around discontinuities or during periods when the system is very stiff.

The Simulink Execution Control block allows you to specify that the simulation runs at a multiple of real-time. This is most useful when you want to slow down a simulation, or ensure that it runs at a constant rate.
Figure 1.10: (a) The flying helicopter balanced using 2 PID controllers since it is openloop unstable and would otherwise crash. (b) Multivariable PID control of the helicopter exhibiting mediocre controlled response and severe derivative kick.
Figure 1.11: Helicopter flight path (up/down against east/west position over time).
Figure 1.12: Real-time Simulink simulations. A parameter in the Simulink Execution Control block sets the speed of the simulation.
An alternative method to slow down a Simulink simulation and force it to run at some set rate is
to use the commercial Humusoft real-time toolbox, but not with the A/D card actually interfaced
to anything.
1.4 Economics of control
Most people would agree that engineers apply technology, but what do these two words really mean? Technology is derived from two Greek words: techne, which means skill or art, and logia, which means science or study. The interesting point here is that the art component is included. The English language unfortunately confuses the word engine with engineering, so that many people have the mistaken view that engineers (mostly) drive engines. Actually engineer is derived from the Latin ingeniatorium, which means one who is ingenious at devising. A far cry from the relatively simple act of piloting jumbos. An interesting American perspective of the professional engineer and modern technology is given as light reading in [2] and Florman's The Existential Pleasures of Engineering.
CHAPTER 1. INTRODUCTION
12
Advanced control
Optimisation
Direct search
Rule definition
Online simulation
Constraint
Steady state
dynamic methods
single variable
Regulatory
Signal condition
PID algorithm
multivariable
Deadtime
feedforward control
compensation
Basic
analogue control
Field
Valves
PLCs
Transmitters
DCS
Process computer
smart transmitters
Onstream analysers
Traditional control
Figure 1.13: A comparison of traditional vs. advanced process control techniques. Adapted from
[7].
Figure 1.14: Economic improvements owing to better control. If the control scheme can reduce the variance, the setpoint can be shifted closer to the operating or quality constraint, thereby decreasing operating costs.
However some information of this type is given in [64, pp131-149] and [18]. One good balance
for the practitioner is [145].
Chapter 2

From differential to difference equations
Excerpt from What a sorry state of affairs, Martin Walker, Guardian Weekly, June 29, 1997.
2.1
The cost and flexibility advantages of implementing a control scheme in software, rather than fabricating it in discrete components, are today simply too large to ignore. However, inserting a computer to run the software necessitates that we work with discrete, regularly sampled signals. This added complexity, which by the way is more than compensated for by the above mentioned advantages, introduces a whole new control discipline: that of discrete control.
Fig. 2.1 shows a common configuration of the computer in the loop. For the computer to respond to any outside events, the signals must first be converted from an analogue form to a digital signal, say 1 to 5 volts, which can, with suitable processing, be wired to an input port of the computer's processor. The device that accomplishes this conversion is called an analogue to digital converter or A/D. Similarly, any binary output from the pins on a processor must first be converted to an analogue signal using a digital to analogue converter, or D/A. In some micro-controllers (rather than micro-processors), such as Intel's 8048 or some versions of the 8051, these converters may be implemented on the micro-controller chip.

Digital to analogue conversion is easy and cheap. One simply loads each bit across different resistors and sums the resultant voltages. The conversion is essentially instantaneous. Analogue to digital conversion is not nearly as easy nor as cheap, and this is the reason that the common data acquisition cards you can purchase for your PC will often multiplex the analogue input channel. There are various schemes for the A/D, one using a D/A inside a loop with a binary search algorithm. Obviously this conversion is not instantaneous, although this is not normally considered a problem for process control applications. Any introductory electrical engineering text such as [172] will give further details on the implementation.
Figure 2.1: A common configuration of the computer in the control loop.
2.1.1 Sampling an analogue signal
It is the A/D converter that is the most interesting for our analysis of the discrete control loop. The A/D converter will, at periodic intervals defined by the computer's clock, sample the continuous analogue input signal. The value obtained is typically stored in a device called a zeroth-order hold until it is eventually replaced by the new sample collected one sample time later. Given constraints on cost, the A/D converter will only have a limited precision, or a limited number of bits in which to store the incoming discrete value. Common A/D cards such as the PCLabs card, [3], use 12 bits giving 2^12, or slightly over 4 thousand, discretisation levels. The residual chopped away is referred to as the quantisation error. For a given number of bits, b, used in the converter, the amplitude quantisation, expressed as a fraction of the full range, is

    ∆ = 2^(-b)

Low cost analogue converters may only use 8 bits, while digital audio equipment uses between 16 and 18 bits.
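For instance, the resolution of a hypothetical 12-bit converter follows directly:

b = 12;              % number of bits in the A/D converter
levels = 2^b         % 4096 discretisation levels
delta = 2^(-b)       % quantisation as a fraction of the full range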
Fig. 2.2 shows the steps in sampling an analogue signal with a three bit (8 discrete levels) A/D sampler. The dashed stair plot gives an accurate representation of the sampled signal, but owing to the quantisation error, we are left with the solid stair plot. You can reproduce Fig. 2.2 in MATLAB using the fix command to do the chopping, and stairs to construct the stair plot.
Figure 2.2: Sampling a continuous signal with a three-bit A/D converter, showing the analogue, sampled, and quantised signals against sample time, (kT).
While other types of hold are possible, anything higher than a first-order hold is rarely used.
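The fragment below is a minimal sketch of how a figure like Fig. 2.2 can be reproduced; the particular analogue signal, sample time and scaling are arbitrary choices for illustration.

t = linspace(0,12,500);        % "continuous" time vector
y = 4 + 3*sin(0.5*t);          % an arbitrary analogue signal spanning 0 to 8
Ts = 0.75;                     % sample time
tk = 0:Ts:12;                  % sampling instants
yk = 4 + 3*sin(0.5*tk);        % sampled signal
yq = fix(yk);                  % quantised to integer levels by chopping

plot(t,y), hold on
stairs(tk,yk,'--')             % sampled signal (zeroth-order hold)
stairs(tk,yq)                  % sampled & quantised signal
hold off, xlabel('Sample time, (kT)')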
2.1.2 Selecting a sample rate
Once we have decided to implement discrete control rather than continuous control, we must decide on a reasonable sampling rate. This is a crucial parameter in discrete control systems.
The sample time (T, or sometimes denoted ∆t) is measured in time units, say seconds or, in industrial applications, minutes. The reciprocal of the sample time is the sample frequency, f, and is usually measured in cycles/second or Hertz. The radial or angular velocity (which some confusingly also term frequency) is denoted ω and is measured in radians/second. The inter-relationships between these quantities are

    f = 1/T  [cycles/second] = ω/(2π)  [(radians/s)/(radians/cycle)]        (2.1)

The faster the sampling rate (the smaller the sampling time, T), the better our discretised signal approximates the real continuous signal. However, it is uneconomic to sample too fast, as the computing and memory hardware may become too expensive. When selecting an appropriate sampling interval, or sample rate, we should consider the following issues:
- The maximum frequency of interest in the signal.

- The sampling theorem, which specifies a lower limit required on the sampling rate to resolve any particular frequency unambiguously. (See §2.1.3 following.)

- Any analogue filtering that may be required (to reduce the problem of aliasing).

- The cost of the hardware and the speed of the A/D converters.
Ogata discusses the selection of a sample time qualitatively in [150, p38]. However, for most chemical engineering processes, which are dominated by relatively slow and overdamped dynamics, the sample time should lie somewhere between ten times the computational delay of the hardware, tc, and some small fraction of the process dominant time constant, τ, say

    10 tc ≤ T ≤ τ/10        (2.2)

For most chemical engineering applications, the computational delay is negligible compared with the process time constant (tc ≈ 0), so we often choose T ≈ τ/10. Thus for a simple first-order rise, we would expect to have about 20-30 data samples from 0 to 99%. Some may argue that even this sampling rate is too high, and opt for a more conservative (larger) sample time down to τ/6. Note that commonly used guidelines such as those presented in Table 22.1 in [181, p535] span a wide range of recommended sample times.
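As a hypothetical worked example of rule (2.2), consider a plant with an assumed dominant time constant of 8 seconds and an assumed 10 ms computational delay:

tc = 0.01;           % assumed computational delay (s)
tau = 8;             % assumed dominant process time constant (s)
T_min = 10*tc        % lower bound on the sample time: 0.1 s
T = tau/10           % common choice: T = 0.8 s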
2.1.3 The sampling theorem

Dinsdale was a gentleman. And what's more he knew how to treat a female impersonator.
John Cleese in Monty Python (15-9-1970)
The sampling theorem gives the conditions necessary to ensure that the inverse sampling procedure is a one-to-one relationship. To demonstrate the potential problems when sampling, consider the case where we have two sinusoidal signals at different frequencies,

    y1 = -sin(2π (7/8) t)    and    y2 = sin(2π (1/8) t)
two distinct sine curves. However if we sample at T = 1, we obtain identical points.
Aliasing
1
t = [0:0.04:8]';
y1= -sin(2*pi*7/8*t);
y2= sin(2*pi/8*t);
plot(t,[y1, y2])
0.5
signal
0
0.5
1
0
4
time (s)
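Repeating the exercise at the slower rate of T = 1 s confirms the claim; the check below is a small addition to the listing above.

k = [0:1:8]';                               % now sample at only 1 Hz (T = 1 s)
max(abs(-sin(2*pi*7/8*k) - sin(2*pi/8*k)))  % ans = 0 (to machine precision),
                                            % so the sampled points are identical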
Consequently, at the slower sampling rate of 1 Hz, we cannot distinguish between the two different signals. This is the phenomenon known as aliasing since one of the frequencies pretends
to be another. In conclusion, two specific sinusoids of different frequencies can have identical
sampled signals. Thus in the act of taking samples from a continuous measurement, we have lost
some information. Since we have experienced a problem when we sample too slowly, it is reasonable to ask what the minimum rate is so that no aliasing occurs. This question is answered by the
sampling theorem which states:
To recover a signal from its sample, one must sample at least two times a
period, or alternatively sample at a rate twice the highest frequency of interest
in the signal.
Alternatively, the highest frequency we can unambiguously reconstruct for a given sampling rate,
1/T , is half of this, or 1/(2T ). This is called the Nyquist frequency, fN .
In the second example above, when we sampled at 1 Hz, the sampling radial velocity was ωs = 2π = 6.28 rad/s. This was satisfactory to reconstruct the low frequency signal (f1 = 1/8 Hz) since 2ω1 = 1.57 rad/s. We are sampling faster than this minimum, so we can reconstruct this signal. However for the faster signal (f2 = 7/8 Hz), we cannot reconstruct this signal since 2ω2 = 11.0 rad/s, which is faster than the sampling radial velocity.
2.1.4 Discrete frequency
If we define the digital frequency, Ω, as

    Ω = ωT = 2πfT = 2π f/fs

then the range of analogue frequencies is 0 < f < ∞, while the range of digital frequencies is limited by the Nyquist sampling limit, fs/2, giving the allowable range for the digital frequency as

    0 ≤ Ω ≤ π

MATLAB unfortunately decided on a slightly different standard in the Signal Processing toolbox. Instead of a range from zero to π, MATLAB uses a range from zero to 2, where 1 corresponds to half the sampling frequency, or the Nyquist frequency. See [44].
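For example, in the toolbox convention a low-pass filter whose cut-off sits at a quarter of the sampling frequency is specified with a normalised cut-off of 0.5, i.e. half the Nyquist frequency. (The Butterworth filter and its order here are arbitrary illustrative choices.)

[b,a] = butter(4, 0.5);    % 4th-order low-pass, cut-off at 0.5*(fs/2) = fs/4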
In summary:

    quantity              symbol                units
    sample time           T or ∆t               s
    sampling frequency    fs = 1/T              Hz
    angular velocity      ω = 2πf               rad/s
    digital frequency     Ω = ωT = 2πf/fs       rad

The continuous signal may contain any frequency, but once sampled, the recoverable frequencies are limited by the Nyquist frequency, fN = fs/2.
It is practically impossible to avoid aliasing problems when sampling using only digital filters. Almost all measured signals are corrupted by noise, and this noise usually has some high or even infinite frequency components; thus the noise is not band-limited. With this noise, no matter how fast we sample, we will always have some reflection of a higher frequency component that appears as an impostor or alias frequency.

If aliasing is still a problem, and you cannot sample at a higher rate, then you can insert a low-pass analogue filter between the measurement and the analogue to digital converter (sampler). The analogue filter, known in this role as an anti-aliasing filter, will band-limit the signal, but not corrupt it with any aliasing. Expensive high fidelity audio equipment will still use analogue filters in this capacity. Analogue and digital filters are discussed in more detail in chapter 5.
Detecting aliases
Consider the trend y(t) in Fig. 2.3 where we wish to estimate the important frequency components
of the signal. It is evident that y(t) is comprised of one or two dominating harmonics.
The spectral density when sampling at T = 0.7s ( in Fig.2.3) given in the upper trend of Fig. 2.4
exhibits three distinct peaks. These peaks are the principle frequency components of the signal and are obtained by plotting the absolute value of the Fourier transform of the time signal2 ,
(Figure 2.3: the continuous trend y(t) (output versus time (s)) with the samples taken at Ts = 0.7 s and Ts = 1.05 s overlaid.)
Figure 2.4: The frequency component of a signal sampled at Ts = 0.7s (upper) and Ts = 1.05s
(lower). The Nyquist frequencies for both cases are shown as vertical dashed lines. See also
Fig. 2.5.
Reading off the peak positions, and for the moment overlooking any potential problems with undersampling, we would expect y(t) to be something like

y(t) ≈ sin(2π·0.1t) + sin(2π·0.5t) + sin(2π·0.63t)
However in order to construct Fig. 2.4 we had to sample the original time series y(t), possibly introducing spurious frequency content. The Nyquist frequency fN = 1/(2Ts) = 0.7143 Hz is shown as a vertical dashed line in Fig. 2.4 (top). The power spectrum is reflected about this line, but the reflection is not shown in Fig. 2.4.
If we were to re-sample the process at a different frequency and re-plot the power density plot, then the frequencies that were aliased will move in this second plot. The points in Fig. 2.3 are sampled at Δt = 1.05 s with the corresponding spectral power plot given in the lower trend of Fig. 2.4. The important data from Fig. 2.4 is repeated below.

  Curve    Ts (s)   fN (Hz)   peak 1   peak 2   peak 3
  top      0.7      0.7143    0.1      0.50     0.63
  bottom   1.05     0.4762    0.1      0.152    0.452
First we note that the low frequency peak (f1 = 0.1 Hz) has not shifted from curve a (top) to curve b (bottom), so we would be reasonably confident that f1 = 0.1 Hz is genuine and is not corrupted by the sampling process.
²More about the spectral analysis or the power spectral density of signals comes in chapter 5.
However, the other two peaks have shifted, and this shift must be due to the sampling process.
Let us hypothesize that f2 = 0.5Hz. If this is the case, then it will appear as an alias in curve b
since the Nyquist frequency for curve b (fN (b) = 0.48) is less than the proposed f2 = 0.5, but it
will appear in the correct position on curve a. The apparent frequency f2 (b) on curve b will be
f2(b) = 2fN(b) − f2 = 2 × 0.4762 − 0.5 = 0.4524
which corresponds to the third peak on curve b. This would seem to indicate that our hypothesis
is correct for f2 . Fig. 2.5 shows this reflection.
Figure 2.5: Reflecting the frequency response in the Nyquist frequency from Fig. 2.4 shows why one of the peaks is evident in the frequency spectrum at Ts = 1.05 s, but what about the other peak?
Now we turn our attention to the final peak: f3(a) = 0.63 and the remaining unexplained peak on curve b at 0.152 Hz. Let us again hypothesize that f3(a) is the true frequency, f3 = 0.63 Hz. If this were the case, the apparent frequency on curve b would be f3(b) = 2fN(b) − f3 = 0.3224 Hz. There is no peak on curve b at this frequency, so our guess is probably wrong. Let us instead suppose that the peak at f3(a) is itself an alias, reflected once about fN(a). In that case the true frequency will be f3 = 2fN(a) − f3(a) = 0.8 Hz. Now we check using curve b. If the true third frequency is 0.8 Hz, then the apparent frequency on curve b will be f3(b) = 2fN(b) − f3 = 0.153 Hz. We have a peak here, which indicates that our guess is a good one. In summary, a reasonable guess for the unknown underlying function is

y(t) ≈ sin(2π·0.1t) + sin(2π·0.5t) + sin(2π·0.8t)
although we can never be totally sure of the validity of this model. At best we could either resample at a much higher frequency, say fs > 10Hz, or introduce an analogue low pass filter to cut
out, or at least substantially reduce, any high frequencies that may be reflected.
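As an illustration of this re-sampling test, here is a minimal sketch assuming the underlying signal really is the three sinusoids hypothesised above. The genuine 0.1 Hz peak stays put when Ts changes, while the aliased peaks move.

y = @(t) sin(2*pi*0.1*t) + sin(2*pi*0.5*t) + sin(2*pi*0.8*t);
for Ts = [0.7 1.05]
    t = (0:Ts:700)';                     % re-sample at the two rates
    Y = abs(fft(y(t)));                  % magnitude spectrum
    f = (0:length(t)-1)'/(Ts*length(t)); % frequency axis in Hz
    semilogy(f,Y), xlim([0 1/(2*Ts)])    % view up to fN = 1/(2Ts)
    pause                                % compare the peak positions
end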
2.2 Finite difference models
Suppose we wish to solve the general differential equation

dy/dt = f(t, y)        (2.3)

for y(t). The derivative can be approximated by the simple backward difference formula

dy/dt ≈ (y_t − y_{t-T})/T        (2.4)
where T is the step size in time, or the sampling interval. Provided T is kept sufficiently small (but not zero), the discrete approximation to the continuous derivative is reasonable. Inserting this approximation, Eqn. 2.4, into our original differential equation, Eqn. 2.3, and rearranging for y_t results in the famous Euler backward difference scheme

y_t = y_{t-T} + T f(t − T, y_{t-T})        (2.5)

Such a scheme is called a recurrence scheme since a previous solution, y_{t-T}, is used to calculate y_t, and is suitable for computer implementation. The beauty of this crude method is its simplicity and versatility. Note that we can discretise almost any function, linear or nonlinear, without needing to solve the original differential equation analytically. Of course whether our approximation to the original problem is adequate is a different story, and this complex but important issue is addressed in the field of numerical analysis.
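As a minimal sketch of such a recurrence scheme, consider solving the (hypothetical) test equation dy/dt = −2y + 1 with y(0) = 0 using Eqn. 2.5, and comparing with the known analytical solution.

T = 0.05; t = (0:T:3)';       % a small, but non-zero, step size
f = @(t,y) -2*y + 1;          % right-hand side of Eqn. 2.3
y = zeros(size(t));           % initial condition y(0) = 0
for k = 2:length(t)
    y(k) = y(k-1) + T*f(t(k-1),y(k-1)); % Euler recurrence, Eqn. 2.5
end
plot(t,y, t,(1-exp(-2*t))/2,'--')       % compare with the analytical solution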
Problem 2.1  Discretise the second order differential equation

d²y/dt² + 2 dy/dt + y = ku

using the backward difference approximation of Eqn. 2.4.

2.2.1 Difference equations
Difference equations are the discrete counterpart to continuous differential equations. Often the difference equations we are interested in are the result of the discretisation of a continuous differential equation, but sometimes, as in the case below, we may be interested in the difference equation in its own right. Difference equations are much easier to simulate and study on a digital computer than differential equations, since they are already written in a form where we can just step along following a relation

x_{k+1} = f(x_k)

rather than resorting to integration. This step by step procedure is also known as a mathematical mapping, since we map the old vector x_k to a new vector x_{k+1}.
Hénon's chaotic attractor

A well-known example of a discrete-time mapping is Hénon's attractor,

x_{k+1} = 1 − 1.4x_k² + y_k,   x₀ = 1        (2.6)
y_{k+1} = 0.3x_k,              y₀ = 1        (2.7)
Figure 2.7: Hénon's attractor, (Eqns. 2.6–2.7), modelled in SIMULINK. See also Fig. 2.8.
Figure 2.8: The time response (left) and phase plot (right) of Hénon's attractor as computed by the SIMULINK block diagram in Fig. 2.7.
The time response of x and y (left plots in Fig. 2.8) of this mapping is not particularly interesting, but the phase plot (without joining the dots, right plot in Fig. 2.8) is an interesting mapping.

Actually what the chaotic attractor means is not important at this stage; this is after all only a demonstration of a difference equation. However the points to note in this exercise are that we never actually integrated any differential equations, but only stepped along using the unit delay blocks in SIMULINK. Consequently these types of simulations are trivial to program, and run very fast on a computer. Another example of a purely discrete system is given later in Fig. 6.10.
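For those without SIMULINK, a minimal MATLAB equivalent of the block diagram in Fig. 2.7 (assuming the standard Hénon parameters 1.4 and 0.3 used in the diagram) is:

N = 1000;
x = zeros(N,1); y = zeros(N,1);
x(1) = 1; y(1) = 1;                 % initial conditions, Eqns. 2.6-2.7
for k = 1:N-1
    x(k+1) = 1 - 1.4*x(k)^2 + y(k); % step along the mapping
    y(k+1) = 0.3*x(k);
end
plot(x,y,'.')                       % phase plot, cf. Fig. 2.8 (right)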
2.3 The z-transform
The sampling of a continuous function f(t) at regular sampling intervals equally spaced T time units apart creates a new function f*(t),

f*(t) = Σ_{k=0}^∞ f(kT) δ(t − kT)        (2.8)

where δ(x) is the Dirac delta function which is defined as being a perfect impulse at x = 0, and zero elsewhere. Note that the actual value of δ(0) is infinity, but the integral of δ(t) is 1.0. Expanding the summation term in Eqn. 2.8 gives

f*(t) = f₀δ(t) + f_T δ(t − T) + f_{2T} δ(t − 2T) + f_{3T} δ(t − 3T) + ⋯
Now suppose we wish to know the value of the sampled function at the third sample time, f*(t = 3T). For simplicity we will assume the sample time is T = 1.

f*(3) = f₀δ(3) + f₁δ(2) + f₂δ(1) + f₃δ(0) + f₄δ(−1) + ⋯

All the terms except for the term containing δ(0) are zero, while δ(0) = ∞. Thus the value of the function f*(3) = ∞. Often you will see a graph of f*(t) depicted where the heights of the values at the sample times are the same as the height of the continuous distribution, such as sketched in Fig. 2.14. Strictly this is incorrect, as it is the strength or integral of f*(t) which is the same as the value of the continuous distribution f(t). I think of the function f*(t) as a series of impulses whose integral is equal to the value of the continuous function f(t).
If we take the Laplace transform of the sampled function f*(t), we get

L{f*(t)} = L{ Σ_{k=0}^∞ f_{kT} δ(t − kT) }        (2.9)

Now the function f_{kT} is assumed constant so it can be factored out of the Laplace transform, and the Laplace transform of δ(t) is simply 1.0. If the impulse is delayed kT units, then the Laplace transform is e^{−kTs}·1. Thus Eqn. 2.9 simplifies to

L{f*(t)} = Σ_{k=0}^∞ f_{kT} e^{−kTs} = Σ_{k=0}^∞ f_k z^{−k}

where we have defined

z ≜ e^{sT}        (2.10)

so that the z-transform of f(t) is

F(z) = Z{f(t)} = Σ_{k=0}^∞ f_k z^{−k}        (2.11)
The function F (z) is an infinite series, but can be written in a closed form if f (t) is a rational
function.
2.3.1 z-transforms of common functions
We can use the definition of the z-transform, Eqn. 2.11, to compute the z-transform of common
functions such as steps, ramps, sinusoids etc. In this way we can build for ourselves a table of
z-transforms such as those found in many mathematical or engineering handbooks.
Step

For the unit step function, x(k) = 1 for k ≥ 0, the z-transform is, by the definition of Eqn. 2.11,

S(z) = Σ_{k=0}^∞ z^{−k} = 1 + z⁻¹ + z⁻² + z⁻³ + ⋯        (2.12)

By using the sum to infinity for a geometric series,³ we obtain the closed form equivalent for Eqn. 2.12 as

S(z) = 1/(1 − z⁻¹)        (2.13)
Ramp and exponential functions

For the ramp function, x(k) = kT for k ≥ 0, and the exponential function, x(k) = e^{−akT} for k ≥ 0, we could go through the same formal analytical procedure, but in this case we can use the ztrans command from the symbolic toolbox in MATLAB to compute the z-transforms Z{kT} and Z{e^{−akT}}.
>> syms k T
>> ztrans(k*T)         % z-transform of the ramp, Z{kT}
ans =
T*z/(z-1)^2

>> syms k T a
>> ztrans(exp(-a*k*T)) % z-transform of the exponential, Z{exp(-akT)}
ans =
z/exp(-a*T)/(z/exp(-a*T)-1)
>> simplify(ans)
ans =
z*exp(a*T)/(z*exp(a*T)-1)
In a similar manner we can build our own table of z-transforms of common functions.
Final and initial value theorems
Tables and properties of z-transforms of common functions are given in many handbooks and
texts such as [150, p49,67], but two of the more useful theorems are the initial value and the final
value theorems given in Table 2.1. As in the continuous case, the final value theorem is only
applicable if the transfer function is stable.
2.4 Inversion of z-transforms
The usefulness of the z-transform is that we can do algebraic manipulations on discrete difference equations in the same way we can do algebraic manipulations on differential equations using the Laplace transform.

³The sum of the first n terms of a geometric series with first term a and common ratio r is

Sₙ = a(1 − rⁿ)/(1 − r),   r ≠ 1

and if |r| < 1, then the sum for an infinite number of terms converges to

S∞ = a/(1 − r)
Table 2.1: Comparing final and initial value theorems for continuous and discrete systems

                          Continuous          Discrete
  Initial value theorem   lim_{s→∞} sY(s)     lim_{z→∞} Y(z)
  Final value theorem     lim_{s→0} sY(s)     lim_{z→1} (1 − z⁻¹)Y(z)
The inverse z-transform returns the sampled sequence,

f*(t) = Z⁻¹{F(z)}        (2.14)

Note that only the sampled function f*(t) is returned, not the original f(t). Thus the full inversion process to the continuous domain f(t) is a one-to-many operation.
There are various ways to invert z-transforms:
1. Use tables of z-transforms and their inverses in handbooks, or use a symbolic manipulator.
(See 2.4.1.)
2. Use partial-fraction expansion and tables. In M ATLAB use residue to compute the partial
fractions. (See 2.4.2.)
3. Long division for rational polynomial expressions in the discrete domain. In M ATLAB use
deconv to do the polynomial division. (See 2.4.3.)
4. A computational approach, which while suitable for computers, has the disadvantage that the answer is not in a closed form. (See 2.4.4.)
5. Analytical formula from complex variable theory. (Not useful for engineering applications,
see 2.4.5.)
2.4.1 Inverting z-transforms symbolically
The easiest, but perhaps not the most instructive, way to invert z-transforms is to use a computer algebra package or symbolic manipulator such as the symbolic toolbox or MuPAD. One simply enters the z-transform, then requests the inverse using the iztrans function in a manner similar to the forward direction shown on page 25.
>> syms z
>> G = (10*z+5)/(z-1)/(z-1/4)
G =
(10*z+5)/(z-1)/(z-1/4)
>> pretty(G)
     10 z + 5
-----------------
(z - 1) (z - 1/4)
>> iztrans(G)   % Invert the z-transform, Z⁻¹{G(z)}
ans =
20*kroneckerDelta(n, 0) - 40*(1/4)^n + 20
The Kronecker function, or kroneckerDelta(n, 0) is a shorthand way of expressing piecewise functions in M ATLAB. The expression kroneckerDelta(n, 0) returns 1 when n = 0, and
0 for all other values.
This is a rather messy and needlessly complicated artifact due to the fact that the symbolic toolbox does not know that n is defined as positive. Also MATLAB automatically chose n for the indexing variable, but we might prefer to use k and explicitly inform MATLAB that k > 0. This gives us a cleaner solution.
>> syms z
>> syms k positive
>> G = (10*z+5)/(z-1)/(z-1/4);
>> y = iztrans(G,k)
y =
20-40*(1/4)^k
>> limit(y,k,inf)
ans =
20
The last command computed the steady-state by taking the limit of y[k] as k → ∞.
2.4.2 The partial fraction method
Inverting transforms by partial fractions is applicable for both discrete z-transforms and continuous Laplace transforms. In both cases, we find an equivalent expression for the transform in simple terms that are summed together. Hopefully we can then consult a set of tables, such as [150, Table 2.1 p49], to invert each term individually. The inverse of the sum is the sum of the inverses, meaning that these expressions, when summed together, give the full inversion. The MATLAB function residue forms partial fractions from a continuous transfer function, although the help file warns that this operation is numerically poorly conditioned. Likewise the routine residuez (that is, residue with a trailing z) is designed to extract partial fractions in a form suitable when using z-transforms.
As an example, suppose we wish to invert

G(s) = (−4s² − 58s − 24)/(s³ + 2s² − 24s) = (−4s² − 58s − 24)/(s(s + 6)(s − 4))
The partial fraction decomposition of G(s) can be written as

G(s) = (−4s² − 58s − 24)/(s(s + 6)(s − 4)) = A/s + B/(s + 6) + C/(s − 4)

where the coefficients A, B and C can be found either by equating the coefficients or by using the cover-up rule shown below.
A = (−4s² − 58s − 24)/((s + 6)(s − 4)) |_{s=0} = 1,   B = (−4s² − 58s − 24)/(s(s − 4)) |_{s=−6} = 3

C = (−4s² − 58s − 24)/(s(s + 6)) |_{s=4} = −8
In MATLAB we can use the residue command to extract the partial fractions.

>> B = [-4 -58 -24]; A = [1 2 -24 0];
>> [r,p,k] = residue(B,A)   % r: residues, p: factors or poles, k: no extra parts
The order of the residue and pole coefficients produced by residue is the same for each, so the partial fraction decomposition is again

G(s) = (−4s² − 58s − 24)/(s³ + 2s² − 24s) = 3/(s + 6) − 8/(s − 4) + 1/s

and we can invert each term individually, perhaps using tables, to obtain the time domain solution

g(t) = 3e⁻⁶ᵗ − 8e⁴ᵗ + 1
If the rational polynomial has repeated roots or complex poles, slight modifications to the procedure are necessary so you may need to consult the help file.
Symbolic partial fractions in MATLAB

Suppose we want to invert

G(z) = 10z/(z² + 4z + 3)

to g[k] using partial fractions. Unfortunately there is no direct symbolic partial fraction command, but Scott Budge from Utah University found the following sneaky trick which involves first integrating, then differentiating the polynomial. This round-about way works because MATLAB integrates the expression by internally forming partial fractions and then integrating term by term. Taking the derivative brings you back to the original expression, but now in partial fraction form.
>> syms z
>> G = 10*z/(z^2 + 4*z + 3);
>> G = diff(int(G/z))   % Extract partial fractions of G(z)/z
G =
5/(z+1)-5/(z+3)
>> G = expand(G*z)      % Gives us G(z) = 5z/(z+1) - 5z/(z+3)
G =
5*z/(z+1)-5*z/(z+3)
>> syms k positive; iztrans(G,z,k)
ans =
-5*(-3)^k+5*(-1)^k
While this strategy currently works, we should take note that in future versions of the symbolic
toolbox this may not continue to work. In the above example we also have the option of using
residuez.
2.4.3 Long division
Consider the special case of the rational polynomial where the denominator is simply 1,

F(z) = c₀ + c₁z⁻¹ + c₂z⁻² + c₃z⁻³ + ⋯

The inverse of this transform is trivial since the time solution at sample number k, or t = kT, is simply the coefficient of the term z⁻ᵏ. Consequently it follows that f(0) = c₀, f(T) = c₁, f(2T) = c₂, …, f(kT) = cₖ etc. Thus if we have a rational transfer function, all we must do is synthetically divide out the two polynomials to obtain a single, albeit possibly infinite, polynomial in z⁻¹.

This approach is known as direct division [150, p69] and in general will result in an infinite power series in z⁻¹. As such it is not a particularly elegant closed-form solution, but for most stable systems the power series can eventually be truncated with little error.
So to invert a z-transform using long-division, we:
1. Convert the z-transform to a (possibly infinite) power series by dividing the denominator
into the numerator using long division or deconv.
2. The coefficients of z⁻ᵏ are the solutions at the sample times kT.
Long division by hand is tedious and error-prone, but we can use the polynomial division capabilities of MATLAB's deconv to do this long division automatically. Since deconv only performs integer division returning a quotient and remainder, we must fool it into giving us the infinite series by padding the numerator with zeros.
To invert

Y(z) = (z² + z)/((z − 1)(z² − 1.1z + 1)) = (z² + z)/(z³ − 2.1z² + 2.1z − 1)

using deconv for the long division, we pad the numerator on the right with say 5 zeros (that is, we multiply by z⁵), and then do the division:

(z⁷ + z⁶)/(z³ − 2.1z² + 2.1z − 1) = z⁴ + 3.1z³ + 4.41z² + 3.751z + 1.7161 + remainder

where the quotient Q is the truncated power series. We are not particularly interested in the remainder polynomial. The more zeros we add (5 in the above example), the more solution points we get.
>> Yn = [1,1,0];                     % Numerator of Y(z): z² + z
>> Yd = conv([1,-1],[1 -1.1 1]);     % Denominator of Y(z): (z − 1)(z² − 1.1z + 1)
>> [Q,R] = deconv([Yn,0,0,0,0,0],Yd) % Multiply by z⁵ to zero pad, then do long division
Q =
    1.0000    3.1000    4.4100    3.7510    1.7161
>> dimpulse(Yn,Yd,7);                % Matlab's check
We can also numerically verify the inversion using dimpulse for six samples as shown in Fig. 2.9.
(Figure 2.9: The impulse response of Y(z), output versus sample number, as computed by dimpulse, reproducing the long division coefficients.)
Problem 2.2  1. Calculate the response of y(nT) to a unit step change in x using long division.

2. Determine the inverse by long division of

G(z) = z(z + 1)/((z − 1)(z² − z + 1))
2.4.4 Computational approach
The computational method is so-called because it is convenient for a computer solution technique
such as M ATLAB as opposed to an analytical explicit solution. In the computational approach we
convert the z-transform to a difference equation and use a recurrence solution.
Consider the following transform (used in the example from page 26) which we wish to invert
back to the time domain.
X(z)/U(z) = (10z + 5)/((z − 1)(z − 0.25)) = (10z + 5)/(z² − 1.25z + 0.25) = (10z⁻¹ + 5z⁻²)/(1 − 1.25z⁻¹ + 0.25z⁻²)        (2.15)
where we have coloured the input values blue and the output values red. (We will again use this
colour convention in chapter 6).
The inverse is equivalent to solving for x(t) when u(t) is an impulse function. The transform can be expanded and written as a difference equation

x_k = 1.25x_{k-1} − 0.25x_{k-2} + 10u_{k-1} + 5u_{k-2}        (2.16)
Since U(z) is an impulse, U(z) = 1, which means that only u(0) = 1 while all other samples u(k) = 0, k ≠ 0. Now we substitute k = 0 to start, and note that u(k) = x(k) = 0 when k < 0 by definition. We now have enough information to compute x(0).
x₀ = 1.25x₋₁ − 0.25x₋₂ + 10u₋₁ + 5u₋₂
   = 1.25 × 0 − 0.25 × 0 + 10 × 0 + 5 × 0
   = 0
Continuing, we substitute k = 1 into Eqn. 2.16 and, using our previously computed x₀ and noting that now u₀ is nonzero, we can find the next term.

x₁ = 1.25x₀ − 0.25x₋₁ + 10u₀ + 5u₋₁
   = 1.25 × 0 − 0.25 × 0 + 10 × 1 + 5 × 0
   = 10
Just to clarify this recurrence process further, we can try the next iteration of Eqn. 2.16 with k = 2,

x₂ = 1.25x₁ − 0.25x₀ + 10u₁ + 5u₀
   = 1.25 × 10 − 0.25 × 0 + 10 × 0 + 5 × 1
   = 17.5
Now we can use the full recurrence relation, Eqn. 2.16, to build up the solution in a stepwise manner as shown in Table 2.2. All the terms on the right hand side of Eqn. 2.16 are either known initially (u(t)), or involve past information, hence it is possible to solve. Note that in this inversion scheme, u(t) can be any known input series.
MATLAB is well suited for this type of computation. To invert Eqn. 2.15 we can use the discrete filter command. We rewrite Eqn. 2.15 in the form of a rational polynomial in z⁻¹,

G(z) = (b₀ + b₁z⁻¹ + b₂z⁻²)/(1 + a₁z⁻¹ + a₂z⁻²)
where the numerator and denominator are entered as row vectors in decreasing powers of z. We
then create an impulse input vector u with say 6 samples and then compute the output of the
system
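A minimal sketch of this computation (the variable names are illustrative) is:

b = [0 10 5];       % numerator of Eqn. 2.15: 10/z + 5/z^2
a = [1 -1.25 0.25]; % denominator of Eqn. 2.15: 1 - 1.25/z + 0.25/z^2
u = [1 zeros(1,5)]; % impulse input with 6 samples
x = filter(b,a,u)   % x = 0, 10, 17.5, 19.375, ... as in Table 2.2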
Table 2.2: The inversion of a z-transform using the computational method. Only the first 5 samples and the final value are calculated. The final value is as calculated using the symbolic manipulator earlier on page 27.

  time (kT)   u(k)   x(k)
     −2        0      0
     −1        0      0
      0        1      0
      1        0      10
      2        0      17.5
      3        0      19.375
      4        0      19.8438
      ⋮        ⋮       ⋮
      ∞        0      20
>> sysd = tf([10 5],[1 -1.25 0.25],1)  % System of interest, (10z + 5)/(z² − 1.25z + 0.25)

Transfer function:
      10 z + 5
-------------------
z^2 - 1.25 z + 0.25

Sampling time: 1
Finally, the contour integral method can also be used to invert z-transforms, [150, 33], but Seborg et al. maintain it is seldom used in engineering practice, [181, p571], because it is fraught with numerical implementation difficulties. These difficulties are also present in the continuous equivalent, some of which are illustrated in the next section.
2.4.5 Numerically inverting the Laplace transform
Surprisingly, the numerical inversion of continuous transfer functions is considered far less important than the computation of the inverse of discrete transfer functions. This is fortunate because the numerical inversion of Laplace transforms is devilishly tricky. Furthermore, it is unlikely that you would ever want to numerically invert a Laplace transform in control applications, since most problems involve little more than a rational polynomial with a possible exponential term. For these problems we can easily factorise the polynomial and use partial fractions, or use the step or impulse routines for continuous linear responses.
However in the rare cases where we have a particularly unusual F(s) which we wish to convert back to the time domain, we might be tempted to use the analytical expression for the inverse directly,

f(t) = (1/(2πj)) ∫_{γ−j∞}^{γ+j∞} F(s) e^{st} ds        (2.17)

where γ is chosen to be larger than the real part of any singularity of F(s). Eqn. 2.17 is sometimes known as the Bromwich integral or Mellin's inverse formula.
The following example illustrates a straightforward application of Eqn. 2.17 to invert a Laplace transform. However be warned that for all but the most well behaved rational polynomial examples, this strategy is not to be recommended as it results in severe numerical roundoff error due to poor conditioning.
Suppose we try to numerically invert a simple Laplace transform,

F(s) = 3/(s(s + 5))   ⟺   f(t) = (3/5)(1 − e⁻⁵ᵗ)
with a corresponding simple time solution. We can start by defining an anonymous function of
the Laplace transform to be inverted.
Fs = @(s) 3.0./(s+5)./s;        % Laplace function to invert, F(s) = 3/(s(s+5))
Fsexp = @(s,t) Fs(s).*exp(s*t); % and associated integrand F(s)e^{st}
Now since the largest singularity of F(s) is 0, we can choose the contour path safely to be say γ = 0.1. We can also approximate the infinite integration interval by reasonably large numbers. It is convenient that the MATLAB routine integral works directly in the complex domain.
c = 0.1;                % value larger than any singularity of F(s)
a = c-100j; b = c+200j; % contour path approximating the limits γ − j∞ to γ + j∞
t = linspace(0,8);
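The listing above stops short of the integration itself; a minimal sketch of the remaining step, assuming a straight-line contour from a to b, might be:

f = zeros(size(t));
for i = 1:length(t)  % evaluate Eqn. 2.17 at each time point
    f(i) = integral(@(s) Fsexp(s,t(i)), a, b)/(2i*pi);
end
plot(t, real(f), t, 3/5*(1-exp(-5*t)), '--') % compare with the analytical f(t)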
Due to small numerical roundoff however, the returned solution has a small imaginary component, which we can in this instance safely ignore. In this rather benign example, where we know the analytical solution, we can validate the accuracy as shown in Fig. 2.10(a), which it has to be admitted, is not that wonderful. We should also note that in the simplistic implementation above we have not adequately validated that our algorithmic choices, such as the finite integration limits, are appropriate, so we should treat any subsequent result with caution. In fact if you repeat this calculation with a larger value of γ, or a smaller integration range such as say 1 − 10j to 1 + 10j, then the quality of the solution drops alarmingly.
(a) Inverting a benign transfer function, L⁻¹{3/(s(s + 5))}.   (b) Inverting a challenging transfer function, L⁻¹{1/(s² + 1)}.

Figure 2.10: Numerically inverting Laplace transforms by direct evaluation of the Bromwich integral. Since this strategy is susceptible to considerable numerical errors, it is not to be recommended.
One way to improve the integration accuracy is to use the waypoints option in the integral routine.
How the Bromwich integral, Eqn. 2.17, actually works can be illustrated with the somewhat unusual pair

F(s) = e^{−√s}/s   ⟺   f(t) = erfc(1/(2√t))        (2.18)
The top two plots in Fig. 2.11(a) plot the two surfaces of the complex function F(s)e^{st} (which is the integrand in Eqn. 2.17, the thing we need to integrate) for the specific case when t = 1.5. We need to plot two surfaces because, since s is a complex variable, F(s)e^{st} will in general be complex as well. Consequently we can plot the real surface on one plot and the imaginary surface on another.

Now to perform the integration in Eqn. 2.17, we need to integrate this complex function along the imaginary axis, or in this case just slightly east of the imaginary axis in order to miss the singularity in F(s) at the origin. The two semi-transparent windows vertically cutting the surfaces show the path along which we need to integrate.

To better visualise exactly what we are integrating, the plots of the function just along the wall are repeated in the figures below in Fig. 2.11(b). Note how the integrand for the imaginary part is odd, so the definite integral for symmetric limits will turn out to be zero. However the integral for the real part (left hand plot in Fig. 2.11(b)) is definitely not zero, but clearly seems to converge as s tends towards ±j∞. The total integral will therefore be a pure imaginary number which is scaled by 1/(2πj) to give a pure real number representing the time domain solution f(t = 1.5).
Over the years there have been many attempts to improve the inversion strategy as reviewed
in [1], but nearly all of the proposals run into numerical problems of precision. One popular
approximation of order M for inverse Laplace transforms introduced by Gaver and subsequently
(a) The real and imaginary surfaces of the complex function F(s)e^{st} where F(s) = e^{−√s}/s for the case when t = 1.5. Note the singularity at the origin.

(b) The shaded part represents the integral of F(s)e^{st} along the imaginary axis. The real part is even (so the integral is non-zero), while the imaginary part is odd, so the integral is zero.
improved by Stehfest is

f(t) = L⁻¹{F(s)} ≈ (ln(2)/t) Σ_{k=1}^{2M} wₖ F(k ln(2)/t)        (2.19)

with the weights given by

wₖ = (−1)^{M+k} Σ_{j=⌊(k+1)/2⌋}^{min(k,M)} (j^{M+1}/M!) (M choose j)(2j choose j)(j choose k−j)        (2.20)
Listing 2.1 implements this algorithm for scalar t. The routine could be made more efficient by
pre-computing the weights just once for a given approximation order M so in the common case
where t is a vector, we do not waste time recomputing the weights. Note also that the approximation assumes that t is strictly greater than zero, so for t = 0 we could use the initial value
theorem.
Listing 2.1: Numerically inverting Laplace transforms using the Gaver-Stehfest algorithm
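A minimal implementation of Eqns. 2.19-2.20 for scalar t could look as follows; the function name gavsteh and its calling convention are assumptions for illustration.

function f = gavsteh(F,t,M)
% Approximately invert the Laplace transform F (a function handle) at the
% scalar time t > 0 using the order-M Gaver-Stehfest formula, Eqns. 2.19-2.20.
f = 0;
for k = 1:2*M
    wk = 0;  % weight w_k of Eqn. 2.20
    for j = floor((k+1)/2):min(k,M)
        wk = wk + j^(M+1)/factorial(M)*nchoosek(M,j)* ...
                  nchoosek(2*j,j)*nchoosek(j,k-j);
    end
    wk = (-1)^(M+k)*wk;
    f = f + wk*F(k*log(2)/t);  % the summation of Eqn. 2.19
end
f = log(2)/t*f;

For example, gavsteh(@(s) 1./s.^2, 1.0, 8) should return a value close to the exact inverse, t = 1.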
Test transforms of increasing difficulty to evaluate the inversion routines are:

  Description    F(s)                f(t)
  Easy test      1/s                 1
  Control TF     1/((s+2)(s+3))      e^{−2t} − e^{−3t}
  Oscillatory    1/√(s²+1)           J₀(t)
  Nasty          e^{−1/s}/√s         cos(√(4t))/√(πt)
in Fig. 2.13 using a modest tolerance of 10⁻². The routine splits the time axis up into decades, and inverts each separately, which is why we can see some numerical noise starting at t = 10.
All of these numerical inversion strategies, starting from the direct application of the Bromwich
integral in Eqn. 2.17, the Gaver-Stehfest algorithm and variants and the implementation invlap
require some care to get reasonable results. Unfortunately in practical cases it can be difficult to
(Two figures: the Gaver-Stehfest inversion of the "Nasty" test function with # of terms = 8, 16, 20 and 26; and Fig. 2.13, showing f(t) = L⁻¹{e^{−1/s}/√s} computed with invlap together with the associated error over time.)
choose suitable algorithmic tuning constants, which means that it is difficult to generate any sort of error bound on the computed curve.
2.5 The sample and hold
In 2.1.1 we saw how the continuous analogue signal must be sampled to be digestible by the process control computer. To analyse this sampling operation, we can approximate the real sampler
with an ideal sampler in cascade with a hold element such as given in Fig. 2.14.
(Figure 2.14: A real sampler approximated by an ideal sampler producing f*(t) in cascade with a hold element producing fh(t); the continuous signal is converted to a sampled signal and then to a rectangular pulse train of width equal to the sample time, T.)
Taking the Laplace transform of the rectangular pulse of height yₜ and duration T,

∫₀ᵀ y(t) e⁻ˢᵗ dt = yₜ ∫₀ᵀ 1·e⁻ˢᵗ dt = yₜ (1 − e⁻ˢᵀ)/s

we obtain the transform for the zeroth-order hold, (1 − e⁻ˢᵀ)/s. In summary, the transformation of a continuous plant G(s) with zero-order hold is

G(z) = (1 − z⁻¹) Z{G(s)/s}        (2.21)

Note that this is not the same as simply the z-transform of G(s).
Suppose we wish to discretise the continuous plant

G(s) = 8/(s + 3)        (2.22)
with a zero-order hold at a sample time of T = 0.1 seconds. We can do this analytically using
tables, or we can trust it to M ATLAB. First we try analytically and apply Eqn 2.21 to the continuous
process. The difficult bit is the z-transformation. We note that
Z{G(s)/s} = Z{8/(s(s + 3))} = (8/3) Z{3/(s(s + 3))}        (2.23)
and that this is in the form given in tables of z-transforms such as [150, p50], part of which is
repeated here:
  X(s)            X(z)
  a/(s(s + a))    (1 − e⁻ᵃᵀ)z⁻¹ / ((1 − z⁻¹)(1 − e⁻ᵃᵀz⁻¹))        (2.24)
We can repeat this conversion numerically using the continuous-to-discrete function, c2d. We
first define the continuous system as an LTI object, and then convert to the discrete domain using
a zeroth order hold.
>> sysc = tf(8,[1 3])

Transfer function:
  8
-----
s + 3

>> sysd = c2d(sysc,0.1)   % ZOH discretisation with T = 0.1

Transfer function:
 0.6912
----------
z - 0.7408

Sampling time: 0.1
which once again is Eqn. 2.24. In fact we need not specify the zoh option when constructing a
discrete model using c2d or even in S IMULINK, since a zeroth-order hold is employed by default.
Methods for doing the conversion symbolically are given next in section 2.5.1.
2.5.1 Converting Laplace transforms to z-transforms
In many practical cases we may already have a continuous transfer function of a plant or controller which we wish to discretise. In these circumstances we would like to convert from a continuous transfer function in s, (such as say a Butterworth filter) to an equivalent discrete transfer
function in z. The two common ways to do this are:
1. Analytically, by first inverting the Laplace transform back to the time domain, and then taking the (forward) z-transform,

G(z) ≜ Z{ L⁻¹{G(s)} }        (2.25)
2. Or by using an approximate method known as the bilinear transform.

The bilinear method, whilst approximate, has the advantage that it just involves a simple algebraic substitution for s in terms of z. The bilinear transform is further covered in 2.5.2.
The formal method we have already seen previously, but since it is such a common operation, we
can write a symbolic M ATLAB script to do it automatically for us for any given transfer function.
However we should be aware whether the transformation includes the sample and hold, or if that
inclusion is left up to the user.
Listing 2.2 converts a Laplace expression to a z-transform expression without a zeroth-order hold.
Including the zeroth-order hold option is given in Listing 2.3.
Listing 2.2: Symbolic Laplace to z-transform conversion
function Gz = lap2ztran(G)
% Convert symbolically G(s) to G(z)
syms t k T z
Gz = simplify(ztrans(subs(ilaplace(G),t,k*T),k,z));
return
Listing 2.3: Symbolic Laplace to z-transform conversion including a zeroth-order hold
function Gz = lap2ztranzoh(G)
% Convert symbolically G(s) to G(z) including a zeroth-order hold, Eqn. 2.21
syms s t k T z
Gz = simplify((1-1/z)*ztrans(subs(ilaplace(G/s),t,k*T),k,z));
return
We can test the conversion routines in Listings 2.2 and 2.3 on the trial transfer function G(s) = 1/s².
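A sketch of such a test, assuming Listings 2.2 and 2.3 are on the path, is:

syms s
G = 1/s^2;              % trial transfer function G(s) = 1/s^2
Gz = lap2ztran(G)       % returns T*z/(z-1)^2, the z-transform of the ramp
Gzzoh = lap2ztranzoh(G) % returns T^2*(z+1)/(2*(z-1)^2) with the hold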
2.5.2 The bilinear transform
The analytical "one step back, one step forward" procedure, while strictly correct, is a little tedious, so a simpler, albeit approximate, way to transform between the Laplace domain and the z-domain is to use the bilinear transform method, sometimes known as Tustin's method. By definition z ≜ e^{sT}, or z⁻¹ = e^{−sT} (from Eqn. 2.10). If we substituted directly, natural logs would appear in the rational polynomial in z,

s = ln(z)/T        (2.26)

making the subsequent analysis difficult. For example the resulting expression could not be transformed into a difference equation, which is what we desire.
We can avoid the troublesome logarithmic terms by using a Padé approximation for the exponential term as

e^{−sT} = z⁻¹ ≈ (2 − sT)/(2 + sT)        (2.27)

or alternatively

s ≈ (2/T)(1 − z⁻¹)/(1 + z⁻¹)        (2.28)
This allows us to transform a continuous time transfer function G(s) to a discrete time transfer function G(z),

G(z) = G(s) |_{s = (2/T)(1−z⁻¹)/(1+z⁻¹)}        (2.29)
Eqn. 2.29 is called the bilinear transform owing to the linear terms in both the numerator and
denominator and it has the advantage that a stable continuous time filter will be stable in the discrete domain. The disadvantage is that the algebra required soon becomes unwieldy if attempted
manually. Other transforms are possible and are discussed in 4-2 p308 of Ogata [150]. Always
remember that this transformation is approximate, being equivalent to a trapezoidal integration.
Suppose we wish to discretise the continuous plant

G(s) = 1/((s + 1)(s + 2))

at a sample time of T = 0.1 using the bilinear transform. The discrete approximate transfer function is obtained by substituting Eqn. 2.28 for s and simplifying.
G(z) = 1/((s + 1)(s + 2)) |_{s = (2/T)(1−z⁻¹)/(1+z⁻¹)}

     = 1/( (2/T)(1 − z⁻¹)/(1 + z⁻¹) + 1 ) × 1/( (2/T)(1 − z⁻¹)/(1 + z⁻¹) + 2 )

     = T²(z + 1)² / ( (2T² + 6T + 4)z² + (4T² − 8)z + 2T² − 6T + 4 )

We can perform this substitution for a general T using the symbolic toolbox:
>> syms s T z
>> G = 1/(s+1)/(s+2);
>> Gd = simplify(subs(G,s,2/T*(1-1/z)/(1+1/z))); % Substitute Eqn. 2.28 for s.
>> pretty(Gd)
     2        2
    T  (z + 1)
1/2 -------------------------------------
    (2 z - 2 + T z + T) (z - 1 + T z + T)
Alternatively we could use M ATLAB to numerically verify our symbolic solution given that T =
0.1.
>> Gc = tf(1,conv([1 1],[1 2]));   % G(s) = 1/((s+1)(s+2))
>> Gd = c2d(Gc,0.1,'tustin')       % bilinear (Tustin) transform with T = 0.1

Transfer function:
0.002165 z^2 + 0.004329 z + 0.002165
------------------------------------
       z^2 - 1.723 z + 0.7403

Sampling time: 0.1
The bilinear command in the S IGNAL P ROCESSING toolbox automatically performs this mapping from s to z. This is used for example in the design of discrete versions of common analogue
filters such as the Butterworth or Chebyshev filters. These are further described in 5.2.3.
Problem 2.3  1. Use Tustin's method (approximate z-transform) to determine the discrete time response of

G(s) = 4/((s + 4)(s + 2))

to a unit step change in input by long division. The sample time is T = 1; solve for 7 time steps. Compare with the exact solution.
The effect of the zeroth-order hold
The zeroth order hold element modifies the transfer function, so consequently influences the
closed loop stability, and frequency characteristics of the discrete process. We can investigate
this influence by plotting the discrete Bode and Nyquist plots for a sampled process with and
without a zeroth order hold.
Suppose we have the continuous process

Gp(s) = 1.57/(s(s + 1))        (2.30)

which we will sample at T = π/2. Without a hold, using the table entry of Eqn. 2.24 with a = 1, the discrete transfer function is

Gp(z) = 1.57 (1 − e⁻ᵀ)z⁻¹/((1 − z⁻¹)(1 − e⁻ᵀz⁻¹))        (2.31)
      = 1.2436z/((z − 1)(z − 0.2079))        (2.32)
To get the discrete model with the zeroth-order hold, we again use Eqn. 2.21 and the tables. Doing this in a similar manner to what was done for Eqn. 2.24, we get

Gh0·Gp(z) = (1.2215z + 0.7306)/((z − 1)(z − 0.208))        (2.33)
Now we can plot the discrete frequency responses using M ATLAB to duplicate the figure given in
[109, p649] as shown in Fig. 2.16.
In the listing below, I first construct the symbolic discrete transfer function, substitute the sampling time, then convert to a numeric rational polynomial. This rational polynomial is now in a
suitable form to be fed to the control toolbox routines such as bode.
syms s T z
G = 1.57/(s*(s+1))      % continuous plant, Eqn. 2.30
Gd = lap2ztran(G)       % discretise without a hold using Listing 2.2
Gd2 = subs(Gd,T,pi/2)   % insert the sampling time T = pi/2
[num,den] = numden(Gd2) % extract numerator & denominator
To generate the discretisation including the zeroth-order hold, we could use the symbolic lap2ztranzoh
routine given in Listing 2.3, or we could use the built-in c2d function.
Bc = sym2poly(num); Ac = sym2poly(den);
Gd = tf(Bc,Ac,pi/2)        % no-hold discrete model as an LTI object
Gc = tf(1.57,[1 1 0])      % continuous plant as an LTI object
Gdzoh = c2d(Gc,pi/2,'zoh') % discretise with a zeroth-order hold
bode(Gc,Gd,Gdzoh)          % compare all three, see Fig. 2.16
legend('Continuous','No ZOH','with ZOH')
In this case I have used the vanilla bode function which will automatically recognise the difference between discrete and continuous transfer functions. Note that it also automatically selects both a reasonable frequency spacing, and inserts the Nyquist frequency at fN = 1/(2T) = 1/π Hz, or ωN = 2 rad/s.
These Bode plots shown in Fig. 2.16, or the equivalent Nyquist plots, show that the zeroth order
hold destabilises the system. The process with the hold has a larger peak resonance, and smaller
gain and phase margins.
(Figure 2.16: Bode diagram (magnitude (dB) and phase (deg) versus frequency (rad/sec)) comparing the continuous plant with the discrete models with and without a zeroth-order hold.)
Of course we should compare the discrete Bode diagrams with that for the original continuous process. The MATLAB bode function is in this instance the right tool for the job, but I will construct the plot manually just to demonstrate how trivial it is to substitute s = iω, and compute the magnitude and phase of G(iω) for all frequencies of interest.
num = 1.57;
den = [1 1 0];
w = logspace(-2,1)';  % frequency vector of interest
iw = 1i*w;
G = polyval(num,iw)./polyval(den,iw); % G(s = iw)
loglog(w,abs(G))            % plot magnitude |G(iw)|, see Fig. 2.17.
semilogx(w,angle(G)*180/pi) % and phase of G(iw)
We can compare this frequency response of the continuous system with the discretised version in
Fig. 2.16.
(Figure 2.17: The amplitude ratio (AR) and phase of the continuous plant as a function of frequency [rad/s].)
2.6 The root locus
The root locus diagram is a classical technique used to study the characteristic response of transfer functions to a single varying parameter such as different controller gains. Traditionally the construction of root locus diagrams required tedious manual algebra, but now programs such as MATLAB can easily construct reliable root locus diagrams, which makes the design procedure far more attractive.
For many controlled systems, the loop is stable for small values of controller gain Kc , but unstable
for a gain above some critical value Ku . It is therefore natural to ask how does the stability (and
response) vary with controller gain? The root locus gives this information in a plot form which
shows how the poles of a closed loop system change as a function of the controller gain Kc . Given
a process transfer function G and a controller Kc Q, the closed loop transfer function is the familiar
Gcl = Kc Q(z)G(z) / (1 + Kc Q(z)G(z))        (2.34)
and the characteristic equation, whose roots determine the stability of the closed loop, is

1 + Kc Q(z)G(z) = 0        (2.35)

Suppose we are given the plant

G(z) = 0.5(z + 0.6)/(z²(z − 0.4))        (2.36)
which is a discrete description of a first order process with dead time. We also add a discrete integral controller of the form

Gc(z) = Kc · z/(z − 1)
We wish to investigate the stability of the characteristic equation as a function of controller gain. With no additional arguments apart from the transfer function, rlocus will draw the root locus selecting sensible values for Kc automatically. For this example, I will constrain the plots to have a square aspect ratio, and I will overlay a grid of constant shape factor ζ and natural frequency ωn using the zgrid function.
Q = tf([1 0],[1 -1],-1);             % controller without gain, Q(z)
G = tf(0.5*[1 0.6],[1 -0.4 0 0],-1); % plant, Gp(z), Eqn. 2.36
Gol = Q*G;                           % open loop Q(z)Gp(z)
zgrid('new'); xlim([-1.5 1.5]); axis equal
rlocus(Gol)                          % plot root locus, see Fig. 2.18.
rlocfind(Gol)
Once the plot such as Fig. 2.18 is on the screen, I can use rlocfind interactively to establish the
values of Kc at different critical points in Fig. 2.18. Note that the values obtained are approximate.
  Pole description            Name              Kc
  border-line stability       ultimate gain     0.6
  crosses the ζ = 0.5 line    design gain       0.2
  critically damped           breakaway gain    0.097
  overdamped                                    0.09
Once I select the gain that corresponds to the ζ = 0.5 crossing, I can simulate the closed loop step response.

Gol.Ts = 1;                  % Set sampling time T = 1
Kc = 0.2;                    % A gain where we expect zeta = 0.5
step(Kc*Gol/(1 + Kc*Gol),50) % Simulate closed loop, see Fig. 2.19.
A comparison of step responses for various controller gains is given in Fig. 2.19. The curve in Fig. 2.19 with Kc = 0.2, where ζ ≈ 0.5, exhibits an overshoot of about 17%, which agrees well with the expected value for a second order response with a shape factor of ζ = 0.5. The other curves do not behave exactly as one would expect since our process is not exactly a second order transfer function.
2.7 Multivariable control and state space analysis
In the mid 1970s, the western world suffered an oil shock when the petroleum producers and
consumers alike realised that oil was a scarce, finite and therefore valuable commodity. This had
a number of important implications, and one of these in the chemical processing industries was
the increased use of integrated heat plants. This integration physically ties separate parts of the
plant together and demands a corresponding integrated or plant-wide control system. Classical single-input/single-output (SISO) controller design techniques such as transfer functions,
frequency analysis or root locus were found to be deficient, and with the accompanying development of affordable computer control systems that could administer hundreds of inputs and outputs, many of which were interacting, nonlinear and time varying, new tools were required and the systematic approach offered by state space analysis became popular.
Figure 2.18: The discrete root locus of Eqn. 2.36 plotted using rlocus, with the ultimate, design and breakaway gains marked. See also Fig. 2.19 for the resultant step responses for various controller gains.
Figure 2.19: Various discrete closed loop responses for gains Kc = 0.6, 0.2 and 0.1. Note that with Kc = 0.6 we observe a response on the stability boundary, while for Kc = 0.1 we observe a critically damped response as anticipated from the root locus diagram given in Fig. 2.18.
Many physical processes are multivariable. A binary distillation column such as given in Fig. 2.20
typically has four inputs; the feed flow and composition and reflux and boil-up rates, and at least
four outputs, the product flow and compositions. To control such an interacting system, multivariable control such as state space analysis is necessary. The Wood-Berry distillation column
model (Eqn 3.24) and the Newell & Lee evaporator model are other examples of industrial process
orientated multivariable models.
Figure 2.20: A binary distillation column with multiple inputs (feed rate, feed composition, reflux rate, reboiler heat), disturbances, and multiple outputs (distillate composition xD, distillate rate D, tray temperatures T5 and T15, bottoms flowrate B, bottoms composition xB).
State space analysis only considers first order differential equations. To model higher order systems, one needs only to build systems of first order equations. These equations are conveniently
collected, if linear, in one large matrix. The advantage of this approach is a compact representation, and a wide variety of good mathematical and robust numerical tools to analyse such a
system.
A few words of caution
The state space analysis is sometimes referred to as the modern control theory, despite the fact that it has been around since the early 1960s. However by the end of the 1970s, the promise of
the academic advances made in the previous decade were turning out to be ill-founded, and it
was felt that this new theory was ill-equipped to cope with the practical problems of industrial
control. In many industries therefore, Advanced Process Control attracted a bad smell. Writing
a decade still later in 1987, Morari in [138] attempts to rationalise why this disillusionment was
around at the time, and whether the subsequent decade of activity alleviated any of the concerns.
Morari summarises that commentators such as [70] considered theory such as linear multivariable
control theory, (i.e. this chapter) which seemed to promise so much, actually delivered very little
and had virtually no impact on industrial practice. There were other major concerns such as
the scarcity of good process models, the increasing importance of operating constraints, operator
acceptance etc, but the poor track record of linear multivariable control theory landed top billing.
Incidentally, Morari writing almost a decade later still in 1994, [141], revisits the same topic giving
the reader a nice linear perspective.
2.7.1 States
State space equations are just a mathematical equivalent way of writing the original common differential equation. While the vector/matrix construction of the state space approach may initially
appear intimidating compared with high order differential equations, it turns out to be more convenient and numerically more robust to manipulate these equations when using programs such
as M ATLAB. In addition the state space form can be readily expanded into multivariable systems
and even nonlinear systems.
The state of a system is the smallest set of values such that, given knowledge of these values, of any future inputs, and of the governing dynamic equations, it is possible to predict everything about the future (and past) output of the system. The state variables are often written as a column vector of length n and denoted x. The state space is the n dimensional coordinate space in which all possible state vectors must lie.
The input to a dynamic system is the vector of m input variables, u, that affect the state variables. Historically control engineers further subdivided the input variables into those they could easily and consciously adjust (known as control or manipulated variable inputs), and those variables that may change, but are outside the engineer's immediate sphere of influence (like the weather), known as disturbance variables. The number of input variables need not be the same as the number of state variables, and indeed m is typically less than n.
The state space equations are the n ordinary differential equations that relate the state derivatives to the states themselves, the inputs if any, and time. We can write these equations as

ẋ = f(x, u, t),   x(t = 0) = x₀,   x ∈ ℝ^{n×1}        (2.37)

where f(·) is a vector function, the n initial conditions at t = 0 for the system are some known point, x₀, and the vector x has dimensions (n × 1).
In control applications, the input is given by a control law which is often itself a function of the states,

u = h(x)        (2.38)

which if we substitute into Eqn. 2.37, gives the closed loop response

ẋ = f(x, h(x), t)

which is now no longer an explicit function of the input u. For autonomous closed loop systems there is no explicit dependence on time, so the differential equation is simply

ẋ = f(x)        (2.39)
For the continuous linear time invariant case, Eqn. 2.37 simplifies to

ẋ = Ax + Bu        (2.40)

while the discrete time equivalent is

xₖ₊₁ = Φxₖ + Δuₖ        (2.41)
Figure 2.21: A block diagram of a state space dynamic system, (a) continuous system: ẋ = Ax + Bu, and (b) discrete system: xₖ₊₁ = Φxₖ + Δuₖ. (See also Fig. 2.22.)
The output or measurement vector, y, contains the p variables that are directly measured from the operating plant, since often the true states cannot themselves be directly measured. These outputs are related to the states by

y = g(x),   y ∈ ℝ^{p×1}        (2.42)

or if the measurement relation is linear, then

y = Cx        (2.43)
Sometimes the input may directly affect the output bypassing the dynamic elements, in which case the full linear system in state space is described by

ẋ = Ax + Bu
y = Cx + Du        (2.44)

which is the standard starting point for the analysis of continuous linear dynamic systems, and is shown in block diagram form in Fig. 2.22.
Figure 2.22: A complete block diagram of a continuous state space dynamic system with outputs and direct measurement feed-through, Eqn. 2.44.
Note that in Fig. 2.22, the internal state vector (bold lines) typically has more elements than either the input or output vectors (thin lines). The diagram also highlights the fact that as an observer of the plant, we can relatively easily measure our outputs (appropriately called measurements), and we presumably know our inputs to the plant, but we do not necessarily have access to the internal state variables since they are always contained inside the dashed box in Fig. 2.22. Strategies to estimate these hidden internal states are known as state estimation and are described in section 9.5. Fig. 2.22 also introduces a convention we will use later in system identification, where I colour the input signals blue, and the outputs red.
It is important to be clear on the dimensions of the standard state-space system description given by Eqn. 2.44. We will take the convention that the state vector x is an (n × 1) column vector, or alternatively written as x ∈ ℝ^{n×1}. That means we have n states, m inputs, and p measurements, or

x ∈ ℝ^{n×1},   u ∈ ℝ^{m×1},   y ∈ ℝ^{p×1}        (2.45)

Consequently, the dimensions of the model (transition) matrix, A, the input matrix, B, and the output or measurement matrix, C, are

A ∈ ℝ^{n×n},   B ∈ ℝ^{n×m},   C ∈ ℝ^{p×n},   D ∈ ℝ^{p×m}        (2.46)
These four matrices in Eqn. 2.44 can be concatenated into one large block, thus obtaining a shorthand way of storing these numbers:

                            n (# of states)   m (# of inputs)
  n (# of states)              A (n × n)         B (n × m)
  p (# of measurements)        C (p × n)         D (p × m)
which is often called a packed matrix notation of the quadruplet (A, B, C, D). If you get confused about the appropriate dimensions of A, B etc., then this is a handy check, or alternatively you can use the diagnostic routine abcdchk. Note that in practice, the D matrix is normally the zero matrix since the input does not generally affect the output immediately without first passing through the system dynamics.
Following our coloring convention introduced previously, we have coloured the control matrix B
blue in the packed matrix because it is associated with the input, coloured the output matrix C
red because it is associated with the outputs, and the direct term D purple because it is associated
with both inputs and outputs.
2.7.2 Converting differential equations to state space form

Suppose we start with a general single-input/single-output transfer function

G(s) = Y(s)/U(s) = (b₀sⁿ + b₁sⁿ⁻¹ + ⋯ + b_{n−1}s + bₙ)/(sⁿ + a₁sⁿ⁻¹ + ⋯ + a_{n−1}s + aₙ)        (2.48)
We can cast this transfer function (or alternatively the original differential equation) into state
space in a number of equivalent forms. They are equivalent in the sense that the input/output
behaviour is identical, but not necessarily the internal state behaviour.
The controllable canonical form is
    ⎡ ẋ₁  ⎤   ⎡  0     1     0    ⋯    0  ⎤ ⎡ x₁  ⎤   ⎡ 0 ⎤
    ⎢ ẋ₂  ⎥   ⎢  0     0     1    ⋯    0  ⎥ ⎢ x₂  ⎥   ⎢ 0 ⎥
    ⎢  ⋮  ⎥ = ⎢  ⋮     ⋮     ⋮    ⋱    ⋮  ⎥ ⎢  ⋮  ⎥ + ⎢ ⋮ ⎥ u        (2.49)
    ⎢ ẋₙ₋₁⎥   ⎢  0     0     0    ⋯    1  ⎥ ⎢ xₙ₋₁⎥   ⎢ 0 ⎥
    ⎣ ẋₙ  ⎦   ⎣ −aₙ  −aₙ₋₁ −aₙ₋₂  ⋯  −a₁  ⎦ ⎣ xₙ  ⎦   ⎣ 1 ⎦

    y = [ bₙ − aₙb₀   bₙ₋₁ − aₙ₋₁b₀   ⋯   b₁ − a₁b₀ ] x + b₀u        (2.50)
which is useful when designing pole-placement controllers. The observable canonical form is
    ⎡ ẋ₁ ⎤   ⎡ 0  0  ⋯  0  −aₙ   ⎤ ⎡ x₁ ⎤   ⎡ bₙ − aₙb₀     ⎤
    ⎢ ẋ₂ ⎥   ⎢ 1  0  ⋯  0  −aₙ₋₁ ⎥ ⎢ x₂ ⎥   ⎢ bₙ₋₁ − aₙ₋₁b₀ ⎥
    ⎢  ⋮ ⎥ = ⎢ ⋮  ⋱      ⋮    ⋮  ⎥ ⎢  ⋮ ⎥ + ⎢      ⋮        ⎥ u        (2.51)
    ⎣ ẋₙ ⎦   ⎣ 0  0  ⋯  1  −a₁   ⎦ ⎣ xₙ ⎦   ⎣ b₁ − a₁b₀     ⎦

    y = [ 0  0  ⋯  0  1 ] x + b₀u        (2.52)
If the transfer function defined by Eqn. 2.48 has real and distinct poles,

Y(s)/U(s) = (b₀sⁿ + b₁sⁿ⁻¹ + ⋯ + b_{n−1}s + bₙ)/((s + p₁)(s + p₂)⋯(s + pₙ))
          = b₀ + c₁/(s + p₁) + c₂/(s + p₂) + ⋯ + cₙ/(s + pₙ)

then we can derive an especially elegant state space form
    ⎡ ẋ₁ ⎤   ⎡ −p₁   0   ⋯    0  ⎤ ⎡ x₁ ⎤   ⎡ 1 ⎤
    ⎢ ẋ₂ ⎥   ⎢  0   −p₂       0  ⎥ ⎢ x₂ ⎥   ⎢ 1 ⎥
    ⎢  ⋮ ⎥ = ⎢  ⋮         ⋱   ⋮  ⎥ ⎢  ⋮ ⎥ + ⎢ ⋮ ⎥ u        (2.53)
    ⎣ ẋₙ ⎦   ⎣  0    0   ⋯  −pₙ  ⎦ ⎣ xₙ ⎦   ⎣ 1 ⎦

    y = [ c₁  c₂  ⋯  cₙ ] x + b₀u        (2.54)
that is purely diagonal. As evident from the diagonal system matrix, this system is totally decoupled and possesses the best numerical properties for simulation. For systems with repeated roots, or complex roots, or both, the closest we can come to a diagonal form is the block Jordan form. We can interconvert between all these above forms using the MATLAB canon command.
>> G = tf([5 6],[1 2 3 4]);  % Define transfer function G = (5s + 6)/(s^3 + 2s^2 + 3s + 4)
>> canon(G,'companion')      % one companion-type realisation

a =
        x1   x2   x3
   x1    0    1    0
   x2    0    0    1
   x3   -4   -3   -2

b =
        u1
   x1    0
   x2    5
   x3   -4

c =
        x1   x2   x3
   y1    1    0    0

d =
        u1
   y1    0
These transformations are discussed further in [150, p515]. M ATLAB uses a variation of this form
when converting from a transfer function to state space in the tf2ss function.
2.7.3 Converting between state space and transfer function forms
The transfer function form and the state space form are, by and large, equivalent, and we can convert from one representation to the other. Starting with our generic state space model, Eqn. 2.40,

ẋ = Ax + Bu        (2.55)

taking Laplace transforms with zero initial conditions and solving for the states gives

Gp(s) = (sI − A)⁻¹B        (2.56)

where Gp(s) is a matrix of expressions in s and is referred to as the transfer function matrix. The MATLAB command ss2tf (state space to transfer function) performs this conversion.
In the common single-input/single-output case,

ẋ = Ax + Bu,   y = Cx        (2.57)

the transfer function is G(s) = C(sI − A)⁻¹B, whose denominator polynomial is det(sI − A), which we recognise as the characteristic polynomial for the matrix A. The roots of this polynomial are the poles of G(s) and are given by the eigenvalues of A.
The numerator polynomial is

N(s) = C adj(sI − A) B        (2.58)
where adj() is the adjugate, (or sometimes known as the adjoint), of a matrix. The roots of this
polynomial are the zeros of G(s). In M ATLAB we can use the zero command for SISO systems or
tzero for the transmission zeros for MIMO systems.
It is interesting to note that the stability, oscillation and decay rate are governed only by the poles
and hence the system dynamic matrix A, while the zeros are determined by the dynamics as well
as the B and C matrices which in turn are a function of the sensor and manipulated variable
positions, [87, p46].
State space to transfer function conversion
Suppose we wish to convert the following state space description to transfer function form.
    ẋ = ⎡ −7  10 ⎤ x + ⎡ 4  −1 ⎤ u        (2.59)
        ⎣ −3   4 ⎦     ⎣ 2  −3 ⎦
From Eqn. 2.56, the transfer function matrix is defined as

Gp(s) = (sI − A)⁻¹B

      = ( s ⎡ 1  0 ⎤ − ⎡ −7  10 ⎤ )⁻¹ ⎡ 4  −1 ⎤
        (   ⎣ 0  1 ⎦   ⎣ −3   4 ⎦ )   ⎣ 2  −3 ⎦

      = ⎡ s + 7   −10  ⎤⁻¹ ⎡ 4  −1 ⎤        (2.60)
        ⎣   3    s − 4 ⎦   ⎣ 2  −3 ⎦
Now at this point, the matrix inversion in Eqn. 2.60 becomes involved because of the presence of the symbolic s in the matrix. Inverting the symbolic matrix expression gives the transfer function matrix

Gp(s) = (1/((s + 2)(s + 1))) ⎡ s − 4    10  ⎤ ⎡ 4  −1 ⎤
                             ⎣  −3    s + 7 ⎦ ⎣ 2  −3 ⎦

      = ⎡ 4/(s + 2)    −(s + 26)/(s² + 3s + 2)  ⎤
        ⎣ 2/(s + 2)   −3(s + 6)/(s² + 3s + 2)   ⎦
We can directly apply Eqn. 2.56 using the symbolic toolbox in M ATLAB.
>> syms s
>> A = [-7 10; -3 4]; B = [4 -1; 2 -3];
>> G = (s*eye(2)-A)\B % Pulse transfer function Gp (s) = (sI A)1 B.
G =
[
4/(s+2), -(s+26)/(2+s2+3*s)]
[
2/(s+2), -3*(s+6)/(2+s2+3*s)]
The method for converting from a state space description to a transfer function matrix described by Eqn. 2.56 is not very suitable for numerical computation owing to the symbolic matrix inversion required. However [175, p35] describes a method due to Faddeeva that is suitable for numerical computation in MATLAB.

Faddeeva's algorithm converts a state space description, ẋ = Ax + Bu, into transfer function form, Gp(s).
E_{n-1} = I,   E_{n-1-k} = A E_{n-k} + a_k I,   k = 1, 2, …, n − 1        (2.61)

giving the transfer function matrix as

Gp(s) = (E_{n-1}s^{n-1} + E_{n-2}s^{n-2} + ⋯ + E₀) B / a(s)

where a(s) = sⁿ + a₁sⁿ⁻¹ + ⋯ + aₙ is the characteristic polynomial of A.

We can repeat the state space to transfer function example, Eqn. 2.59, given on page 54 using Faddeeva's algorithm, starting by computing the characteristic polynomial of A,

a(s) = s² + 3s + 2 = (s + 1)(s + 2)

and with n = 2 we can compute

E₁ = ⎡ 1  0 ⎤   and   E₀ = AE₁ + 3I = ⎡ −4  10 ⎤
     ⎣ 0  1 ⎦                         ⎣ −3   7 ⎦
syms s
a = poly(A); n = size(A,1); % characteristic polynomial coefficients of A
as = poly2sym(a,'s');       % a(s) as a symbolic polynomial
E = eye(n,n); I = eye(n,n);
Gp = s^(n-1)*E;
for i = n-2:-1:0
    E = A*E + a(i+2)*I;     % Faddeeva recursion, Eqn. 2.61
    Gp = Gp + s^i*E;
end
Gp = simplify(Gp*B/as);     % Gp(s) = (sum of E_i s^i) B/a(s)
Finally, we can also use the control toolbox for the conversion from state space to transfer function
form. First we construct the state space form,
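A sketch of that construction, assuming the A and B matrices of Eqn. 2.59 and taking C as the identity so that all the states are measured, is:

>> sys = ss([-7 10; -3 4],[4 -1; 2 -3],eye(2),[])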
where in this case if we leave out the D matrix, MATLAB assumes no direct feed-through path. We can now convert to the transfer function form:
>> sys_tf = minreal(tf(sys)) % Convert to TF & remember to cancel common factors

Transfer function from input 1 to output...
         4
 #1:   -----
       s + 2

         2
 #2:   -----
       s + 2

Transfer function from input 2 to output...
          -s - 26
 #1:   -------------
       s^2 + 3 s + 2

         -3 s - 18
 #2:   -------------
       s^2 + 3 s + 2
Once again we get the same transfer function matrix. Note however that it is also possible to use ss2tf.
Transfer function to state space
To convert in the reverse direction, we can use the M ATLAB function tf2ss (transfer function to
state space) thus converting an arbitrary transfer function description to the state space format of
Eqn 2.40. For example starting with the transfer function
G(s) = (s + 3)(s + 4)/((s + 1)(s + 2)(s + 5)) = (s² + 7s + 12)/(s³ + 8s² + 17s + 10)
>> num = [1 7 12];            % Numerator: s² + 7s + 12
>> den = [1 8 17 10];         % Denominator: s³ + 8s² + 17s + 10
>> [A,B,C,D] = tf2ss(num,den) % convert to state space

    A = ⎡ −8  −17  −10 ⎤     B = ⎡ 1 ⎤
        ⎢  1    0    0 ⎥         ⎢ 0 ⎥
        ⎣  0    1    0 ⎦         ⎣ 0 ⎦

    C = [ 1  7  12 ],   D = 0
This form is a variation of the controllable canonical form described previously in 2.7.2. This is not the only state space realisation possible however, as the Control Toolbox will return a slightly different (A, B, C, D) package.
Alternatively we could start with the zero-pole-gain form and obtain yet another equivalent state space form.
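A sketch of the zero-pole-gain route, re-using the same example plant, is:

>> sys_zpk = zpk([-3 -4],[-1 -2 -5],1) % G(s) = (s+3)(s+4)/((s+1)(s+2)(s+5))
>> sys_ss = ss(sys_zpk)                % yet another equivalent realisation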
2.7.4 Similarity transformations
The state space description of a differential equation is not unique. We can transform from one description to another by using a linear invertible transformation such as x = Tz. Geometrically in 2 dimensions, this is equivalent to rotating the axes or plane. When one rotates the axes, the inter-relationships between the states do not change, so the transformation preserves the dynamic model.
Suppose we have the dynamic system

ẋ = Ax + Bu        (2.62)

which we wish to transform in some manner using the non-singular transformation matrix, T, where x = Tz. Naturally the reverse transformation z = T⁻¹x exists because we have restricted ourselves to consider only the cases when the inverse of T exists. Writing Eqn. 2.62 in terms of our new variable z we get
our new variable z we get
d(Tz)
= ATz + Bu
dt
z = T1 ATz + T1 Bu
(2.63)
Eqn. 2.63 and Eqn. 2.62 represent the same dynamic system, just viewed from a different coordinate system. The mapping from A to T⁻¹AT is called a similarity transform and preserves the eigenvalues; the two matrices are said to be similar. The proof of this is detailed in [85, p300] and [150, pp513–514].
The usefulness of these types of transformations is that the dynamics of the states are preserved (since the eigenvalues are the same), but the shape and structure of the system has changed. The motivation is that for certain operations (control, estimation, modelling), different shapes are more convenient. A purely (or nearly) diagonal A matrix, for example, has much better numerical properties than a full matrix, with the added advantage that less computer storage and fewer operations are needed.
To convert a system to a diagonal form, we use the transformation matrix T, where the columns
of T are the eigenvectors of A. Systems where the model (A) matrix is diagonal are especially
easy to manipulate. We can find the eigenvectors T and the eigenvalues e of a matrix A using the
eig command, and construct the new transformed, and hopefully diagonal system matrix,
[T,e] = eig(A)  % find eigenvectors & eigenvalues of A
V = T\A*T       % New system matrix, T^{-1}AT, or use canon
Another use arises when testing new control or estimation algorithms: it is sometimes instructive to devise non-trivial systems with specified properties. For example you may wish to use as a test case a 3 × 3 system that is stable and interacting, and has one over-damped mode and two oscillatory modes. That is, we wish to construct a full A matrix with specified eigenvalues. We can use similarity transformations to obtain these models, as sketched below.
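As an illustration of this idea (a minimal sketch; the specific eigenvalues and the random transformation are my own choices, not from the text), we can build a full 3 × 3 matrix similar to a real block-diagonal form holding one over-damped mode and one oscillatory complex-conjugate pair:

% Desired modes: one over-damped (-0.5) and one oscillatory pair (-1 +/- 2i)
D = [-0.5  0  0;
      0   -1  2;
      0   -2 -1];     % real block form with the specified eigenvalues
T = randn(3);         % random invertible transformation (check cond(T) is modest)
A = T*D/T;            % A = T*D*T^{-1} is similar to D, so same eigenvalues
eig(A)                % verify: -0.5, -1+2i, -1-2i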
Other useful transformations such as the controllable and observable canonical forms are covered in [150, p646]. The MATLAB function canon can convert state space models to diagonal or observable canonical form (sometimes known as the companion form). Note however the help file for this routine discourages the use of the companion form due to its numerical ill-conditioning. MATLAB comes with some utility functions to generate test models, such as ord2 which generates stable second order models of a specified damping and natural frequency, and rmodel which is a flexible, general, stable random model builder of arbitrary order.
2.7.5 INTERCONVERTING BETWEEN TRANSFER FUNCTION FORMS
The previous section described how to convert between different representations of linear dynamic systems such as differential equations, transfer functions and state space descriptions. This
section describes the much simpler task of converting between the different ways we can write
transfer functions.
Modellers tend to think in continuous time systems, G(s), and in terms of process gain and time constants, so will naturally construct transfer functions in factored form as
$$ G(s) = \frac{K \prod_i (\beta_i s + 1)}{\prod_j (\tau_j s + 1)} \tag{2.64} $$
where all the variables of interest such as the time constants τ_j are immediately apparent. On the other hand, system engineers tend to think in terms of poles and zeros, so naturally construct transfer functions also in factored form, but with a different scaling
$$ G(s) = \frac{K \prod_i (s + z_i)}{\prod_j (s + p_j)} \tag{2.65} $$
where now the poles, p_j, and zeros, z_i, are immediately apparent. This is the form that MATLAB uses in the zero-pole-gain format, zpk.
Finally the hardware engineer would prefer to operate in the expanded polynomial form (particularly in discrete cases), where the transfer function is of the form
$$ G(s) = \frac{b_m s^m + b_{m-1}s^{m-1} + \cdots + b_0}{s^n + a_{n-1}s^{n-1} + \cdots + a_0} \tag{2.66} $$
This is the form that MATLAB uses in the transfer-function format, tf. Note that the leading coefficient in the denominator is set to 1. As a MATLAB user, you can define a transfer function that does not follow this convention, but MATLAB will quietly convert to this normalised form if you type something like G = tf(zpk(G)). The inter-conversions between the forms are not difficult; converting between Eqn. 2.64 and Eqn. 2.65 simply requires some adjusting of the gains and factors, while converting from Eqn. 2.66 requires one to factorise polynomials.
For example, the following three transfer function descriptions are all equivalent:
$$ \underbrace{\frac{2(10s+1)(-3s+1)}{(20s+1)(2s+1)(s+1)}}_{\text{time constant}} = \underbrace{\frac{-1.5(s-0.3333)(s+0.1)}{(s+1)(s+0.5)(s+0.05)}}_{\text{zero-pole-gain}} = \underbrace{\frac{-60s^2+14s+2}{40s^3+62s^2+23s+1}}_{\text{expanded polynomial}} $$
It is trivial to interconvert between the zero-pole-gain form, Eqn. 2.65, and the transfer function format, Eqn. 2.66, in MATLAB, but it is less easy to convert to the time constant description. Listing 2.4 extracts the time constants, the numerator time constants, and the plant gain K from an arbitrary transfer function.
Listing 2.4: Extracting the gain, time constants and numerator time constants from an arbitrary transfer function format
Gplant = tf(2*conv([10 1],[-3 1]), conv([20 1],conv([2 1],[1 1]))) % TF of interest
G = zpk(Gplant);                      % Convert TF description to zero-pole-gain
Kp = G.k;                             % Extract zpk gain
p = cell2mat(G.p); z = cell2mat(G.z); % Extract poles & zeros
delay0 = G.iodelay;                   % Extract deadtime (if any)
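The listing stops short of the actual time constant calculation; assuming distinct, non-zero poles and zeros (no integrators), a minimal completion might be:

tau = -1./p;                 % time constants: [1 2 20]
tau_num = -1./z;             % numerator time constants: [-3 10]
K = Kp*prod(-z)/prod(-p);    % steady-state gain in time-constant form: K = 2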
We could of course use the control toolbox functions pole and zero to extract the poles and
zeros from an arbitrary LTI model.
2.7.6 THE STEADY STATE
The steady-state, x_ss, of a general nonlinear system ẋ = f(x, u) is the point in state space such that all the derivatives are zero, or the solution of
$$ \mathbf{0} = \mathbf{f}(\mathbf{x}_{ss}, \mathbf{u}) \tag{2.67} $$
If the system is linear, ẋ = Ax + Bu, then the steady state can be evaluated algebraically in closed form
$$ \mathbf{x}_{ss} = -\mathbf{A}^{-1}\mathbf{B}\mathbf{u} \tag{2.68} $$
Consequently, to solve for the steady-state one could invert the model matrix A, but this may be ill-conditioned or computationally time consuming. In MATLAB one should use xss = -A\B*u.
If A has no inverse, then no (or alternatively infinite) steady states exist. An example of a process that has no steady state is a tank-flow system where a pump withdraws fluid from the outlet at a constant rate, independent of liquid height, just exactly balancing an input flow, as shown in the left hand schematic in Fig. 2.23. If the input flow suddenly increased, then the level will rise until the tank eventually overflows. If instead the tank is drained by a valve partially open at a constant value, then as the level rises, the increased pressure (head) will force more material out through the valve (right-hand side of Fig. 2.23). Eventually the system will rise to a new steady state. It may however overflow before the new steady state is reached, but that is a constraint on the physical system that is outside the scope of the simple mathematical description used at this time.
Figure 2.23: Unsteady (left) and steady (right) states for level systems
If the system is nonlinear, there is the possibility that multiple steady states may exist. To solve for
the steady state of a nonlinear system, one must use a nonlinear algebraic solver such as described
in chapter 3.
Example The steady state of the differential equation
$$ \frac{d^2y}{dt^2} + 7\frac{dy}{dt} + 12y = 3u $$
where dy/dt = y = 0 at t = 0 and u = 1, t ≥ 0, can be evaluated using Laplace transforms and the final value theorem. Transforming to the Laplace domain we get
$$ s^2Y(s) + 7sY(s) + 12Y(s) = 3U(s) \quad\Rightarrow\quad \frac{Y(s)}{U(s)} = \frac{3}{s^2+7s+12} = \frac{3}{(s+3)(s+4)} $$
while for a step input
$$ Y(s) = \frac{1}{s}\cdot\frac{3}{s^2+7s+12} $$
The final value theorem is only applicable if the system is stable. To check, we require that the roots of the denominator, s² + 7s + 12, lie in the left hand plane,
$$ s = \frac{-7 \pm \sqrt{7^2 - 4\cdot 12}}{2} = -4 \text{ and } -3 $$
Given that both roots have negative real parts, we have verified that our system is stable and we are allowed to apply the final value theorem to solve for the steady-state, y(∞),
$$ \lim_{t\to\infty} y(t) = \lim_{s\to 0} sY(s) = \lim_{s\to 0} \frac{3}{s^2+7s+12} = \frac{3}{12} = 0.25 $$
Using the state space approach to replicate the above, we first cast the second order differential equation into two first order differential equations using the controllable canonical form given in Eqn. 2.50. Let z₁ = y and z₂ = dy/dt, then
$$ \dot{\mathbf{z}} = \begin{bmatrix} 0 & 1 \\ -12 & -7 \end{bmatrix}\mathbf{z} + \begin{bmatrix} 0 \\ 3 \end{bmatrix} u $$
and, following Eqn. 2.68, the steady state is
$$ \mathbf{z}_{ss} = -\mathbf{A}^{-1}\mathbf{B}u = \begin{bmatrix} 7/12 & 1/12 \\ -1 & 0 \end{bmatrix}\begin{bmatrix} 0 \\ 3 \end{bmatrix}\cdot 1 = \begin{bmatrix} 0.25 \\ 0 \end{bmatrix} $$
Noting that z1 = y, we see that the steady state is also at 0.25. Furthermore, the derivative term
(z2 ) is zero, which is as expected at steady state.
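A quick numerical check of this steady state in MATLAB, using the matrices above:

A = [0 1; -12 -7]; B = [0; 3];
zss = -A\B*1        % returns [0.25; 0], matching the hand calculation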
2.7.7 POLES AND ZEROS
As mentioned in section 2.7.3, the poles of a state space dynamic system are the roots of the polynomial det(sI − A), and the zeros are the roots of C adj(sI − A) B. The concept of zeros, however, in multivariable systems is more complicated than poles. Because forming the adjugate is awkward, it is easier to note the identity
$$ \mathbf{C}\,\mathrm{adj}(s\mathbf{I}-\mathbf{A})\,\mathbf{B} = \det(\mathbf{R}(s)) \quad\text{where}\quad \mathbf{R}(s) \stackrel{\text{def}}{=} \begin{bmatrix} s\mathbf{I}-\mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{0} \end{bmatrix} $$
That means we can find the transmission zeros of a state space system by computing the generalised eigenvalues of the matrix pencil
$$ \begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{0} \end{bmatrix}, \qquad \begin{bmatrix} \mathbf{I} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} $$
as detailed in [87, p55]. Listing 2.5 does this calculation, which essentially duplicates tzero in MATLAB.
Listing 2.5: Find the transmission zeros for a MIMO system
function [tzeros] = tzero2(G)
% Transmission zeros for a MIMO system (See also tzero.m)
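The body of the listing is truncated in this copy; a minimal sketch of the pencil calculation it describes, assuming a square system with D = 0, might read:

function tzeros = tzero2(G)
% Transmission zeros for a MIMO system (See also tzero.m)
[A,B,C,D] = ssdata(G);               % extract state space matrices
n = size(A,1);
M = [A, B; C, D];                    % pencil M - lambda*N
N = blkdiag(eye(n), zeros(size(D)));
z = eig(M, N);                       % generalised eigenvalues
tzeros = z(isfinite(z));             % keep the finite ones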
2.8 SOLVING THE VECTOR DIFFERENTIAL EQUATION
Since we can solve differential equations by inverting Laplace transforms, we would expect to be able to solve state space differential equations such as Eqn. 2.40 in a similar manner. If we look at the Laplace transform of a simple linear scalar differential equation, ẋ = ax + bu, we find two terms,
$$ sX(s) - x_0 = aX(s) + bU(s) \quad\Rightarrow\quad X(s) = \frac{x_0}{s-a} + \frac{b}{s-a}U(s) \tag{2.69} $$
One of these terms is the response of the system owing to the initial condition, x₀e^{at}, and is called the homogeneous solution; the other term is due to the particular input we happen to be using. This is called the particular integral, and we must know the form of the input, u(t), before we solve this part of the problem. The total solution is the sum of the homogeneous and particular components.
THE HOMOGENEOUS SOLUTION
For the moment we will consider just the homogeneous solution to our vector differential equation. That is, we will assume no driving input, or u(t) = 0. (In the following section we will add in the particular integral due to a non-zero input.) Our vector differential equation, ignoring any input, is simply
$$ \dot{\mathbf{x}} = \mathbf{A}\mathbf{x}, \qquad \mathbf{x}(t=0) = \mathbf{x}_0 \tag{2.70} $$
Taking Laplace transforms, and not forgetting the initial conditions, we have
$$ s\mathbf{x}(s) - \mathbf{x}_0 = \mathbf{A}\mathbf{x}(s) $$
Finally, inverting the Laplace transform back to the time domain gives
$$ \mathbf{x}(t) = \mathcal{L}^{-1}\left\{ (s\mathbf{I}-\mathbf{A})^{-1} \right\}\mathbf{x}_0 \tag{2.71} $$
Alternatively we can solve Eqn. 2.70 by separating the variables and integrating, obtaining
$$ \mathbf{x}(t) = e^{\mathbf{A}t}\,\mathbf{x}_0 \tag{2.72} $$
where the exponential of a matrix, e^{At}, is itself a matrix of the same size as A. We call this matrix exponential the transition matrix because it transforms the state vector at some initial time, x₀, to some point in the future, x_t. We will give it the symbol Φ(t).
The matrix exponential is defined just as in the scalar case as a Taylor series expansion,
$$ e^{\mathbf{A}t} \;\text{or}\; \mathbf{\Phi}(t) \stackrel{\text{def}}{=} \mathbf{I} + \mathbf{A}t + \frac{\mathbf{A}^2t^2}{2!} + \frac{\mathbf{A}^3t^3}{3!} + \cdots = \sum_{k=0}^{\infty} \frac{\mathbf{A}^k t^k}{k!} \tag{2.73} $$
although this series expansion method is not recommended as a reliable computational strategy.
Better strategies are outlined on page 66.
Comparing Eqn. 2.71 with Eqn. 2.72 we can see that the matrix e^{At} = L⁻¹{(sI − A)⁻¹}.
So to compute the solution, x(t), we need to know the initial condition and a strategy to numerically compute a matrix exponential.
THE PARTICULAR SOLUTION
Now we consider the full differential equation with nonzero input, ẋ = Ax + Bu. Building on the solution to the homogeneous part in Eqn. 2.72, we get
$$ \mathbf{x}(t) = \mathbf{\Phi}(t)\mathbf{x}_0 + \int_0^t \mathbf{\Phi}(t-\tau)\,\mathbf{B}\mathbf{u}(\tau)\,d\tau \tag{2.74} $$
where now the second term accounts for the particular input vector u.
Eqn. 2.74 is not particularly useful as written, since both terms are time varying. However the continuous time differential equation can be converted to a discrete time difference equation that is suitable for computer control implementation, provided the sampling rate is fixed and the input is held constant between the sampling intervals. We would like to convert Eqn. 2.40 to the discrete time equivalent, Eqn. 2.41, repeated here
$$ \mathbf{x}_{k+1} = \mathbf{\Phi}\mathbf{x}_k + \mathbf{\Delta}\mathbf{u}_k \tag{2.75} $$
where x_k is the state vector x at time t = kT and T is the sample period. Once the sample period is fixed, then Φ and Δ are also constant matrices. We have also assumed here that the input vector u is constant, or held, over the sample interval, which is the norm for control applications.
So starting with a known x_k at time t = kT, we desire the state vector at the next sample time, x_{k+1}, or
$$ \mathbf{x}_{k+1} = e^{\mathbf{A}T}\mathbf{x}_k + \int_{kT}^{(k+1)T} e^{\mathbf{A}\left((k+1)T-\tau\right)}\,\mathbf{B}\mathbf{u}(\tau)\,d\tau \tag{2.76} $$
But as we have assumed that the input u is constant using a zeroth-order hold between the sampling intervals kT and (k+1)T, Eqn. 2.76 simplifies to
$$ \mathbf{x}_{k+1} = e^{\mathbf{A}T}\mathbf{x}_k + \left( \int_0^T e^{\mathbf{A}\tau}\,d\tau \right)\mathbf{B}\mathbf{u}_k \tag{2.77} $$
so that
$$ \mathbf{\Phi} = e^{\mathbf{A}T} \tag{2.78} $$
$$ \mathbf{\Delta} = \left( \int_0^T e^{\mathbf{A}\tau}\,d\tau \right)\mathbf{B} \tag{2.79} $$
which gives us our desired transformation in the form of Eqn. 2.75. In summary, to discretise ẋ = Ax + Bu at sample interval T, we must compute a matrix exponential, Eqn. 2.78, and integrate a matrix exponential, Eqn. 2.79.
Note that Eqn. 2.77 involves no approximation to the continuous differential equation provided
the input is constant over the sampling interval. Also note that as the sample time tends to zero,
the state transition matrix tends to the identity, I.
If the matrix A is non-singular, then
$$ \mathbf{\Delta} = \left( e^{\mathbf{A}T} - \mathbf{I} \right)\mathbf{A}^{-1}\mathbf{B} \tag{2.80} $$
For example, for the double integrator introduced earlier with
$$ \mathbf{A} = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}, \qquad \mathbf{B} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} $$
we obtain
$$ \mathbf{\Phi} = e^{\mathbf{A}T} = \begin{bmatrix} 1 & 0 \\ T & 1 \end{bmatrix}, \qquad \mathbf{\Delta} = \int_0^T \begin{bmatrix} 1 & 0 \\ \tau & 1 \end{bmatrix} d\tau \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} T \\ \tfrac{T^2}{2} \end{bmatrix} $$
Note that here A is singular, so Eqn. 2.80 cannot be used and the integral in Eqn. 2.79 must be evaluated directly.
For small problems such as this, the symbolic toolbox helps do the computation.
Symbolic example with invertible A matrix. We can discretise the continuous state space system
$$ \dot{\mathbf{x}} = \begin{bmatrix} -1.5 & -0.5 \\ 1 & 0 \end{bmatrix}\mathbf{x} + \begin{bmatrix} 1 \\ 0 \end{bmatrix}u $$
symbolically as follows.
>> A = [-1.5 -0.5; 1 0]; B = [1; 0];
>> syms T
>> Phi = expm(A*T)
Phi =
[  2*exp(-T)-exp(-1/2*T),    exp(-T)-exp(-1/2*T)]
[ -2*exp(-T)+2*exp(-1/2*T),  2*exp(-1/2*T)-exp(-T)]
Since A is invertible in this example, we can use the simpler Eqn. 2.80:

>> Delta = (Phi - eye(2))*(A\B);  % Delta = (e^{AT} - I) A^{-1} B
>> simplify(Delta)
ans =
[ -2*exp(-T)+2*exp(-1/2*T)]
[  2*exp(-T)-4*exp(-1/2*T)+2]
Of course it is evident from the above example that the symbolic expressions for Φ(T) and Δ(T) rapidly become unwieldy for dimensions much larger than about 2. For this reason, analytical expressions are of limited practical worth. Alternative numerical schemes are discussed in the following section.
2.8.1 NUMERICALLY COMPUTING Φ AND Δ
Calculating numerical values for the matrices Φ and Δ can be done by hand for small dimensions by converting to a diagonal or Jordan form, or numerically using the exponential of a matrix. Manual calculations are neither advisable nor enjoyable, but [20, p35] mentions that if you first compute
$$ \mathbf{\Psi} = \int_0^T e^{\mathbf{A}\tau}\,d\tau = \mathbf{I}T + \frac{\mathbf{A}T^2}{2!} + \frac{\mathbf{A}^2T^3}{3!} + \cdots + \frac{\mathbf{A}^{k-1}T^k}{k!} + \cdots $$
then
$$ \mathbf{\Phi} = \mathbf{I} + \mathbf{A}\mathbf{\Psi} \quad\text{and}\quad \mathbf{\Delta} = \mathbf{\Psi}\mathbf{B} \tag{2.83} $$
A better approach, at least when using MATLAB, follows from Eqn. 2.78 and Eqn. 2.79 where
$$ \frac{d\mathbf{\Phi}}{dt} = \mathbf{A}\mathbf{\Phi} = \mathbf{\Phi}\mathbf{A} \tag{2.84} $$
$$ \frac{d\mathbf{\Delta}}{dt} = \mathbf{\Phi}\mathbf{B} \tag{2.85} $$
These two equations can be concatenated to give
$$ \frac{d}{dt}\begin{bmatrix} \mathbf{\Phi} & \mathbf{\Delta} \\ \mathbf{0} & \mathbf{I} \end{bmatrix} = \begin{bmatrix} \mathbf{\Phi} & \mathbf{\Delta} \\ \mathbf{0} & \mathbf{I} \end{bmatrix} \begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \tag{2.86} $$
which is in the same form as the scalar equation da/dt = ab. Rearranging this to ∫ da/a = ∫ b dt leads to the analytical solution
$$ \begin{bmatrix} \mathbf{\Phi} & \mathbf{\Delta} \\ \mathbf{0} & \mathbf{I} \end{bmatrix} = \exp\left( \begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} T \right) \tag{2.87} $$
enabling us to extract the required Φ and Δ matrices, provided we can reliably compute the exponential of a matrix. The MATLAB CONTROL TOOLBOX routine to convert from continuous to discrete systems, c2d, essentially follows this algorithm.
We could try the augmented version of Eqn. 2.87 to compute both Φ and Δ with one call to the matrix exponential function for the example started on page 64.

>> [n,m] = size(B);            % n states, m inputs
>> Aa = [A, B; zeros(m,n+m)]   % Augmented matrix, [A B; 0 0]
Aa =
   -1.5   -0.5    1.0
    1.0    0.0    0.0
    0.0    0.0    0.0
>> Phi_a = expm(Aa*T);         % compute the exponential of the augmented matrix
>> Phi = Phi_a(1:n,1:n)        % Extract upper left block of composite solution
Phi =
[  2*exp(-T)-exp(-1/2*T),    exp(-T)-exp(-1/2*T)]
[ -2*exp(-T)+2*exp(-1/2*T),  2*exp(-1/2*T)-exp(-T)]
>> Delta = Phi_a(1:n,n+1:end)
Delta =
[ -2*exp(-T)+2*exp(-1/2*T)]
[  2*exp(-T)-4*exp(-1/2*T)+2]
RELIABLY COMPUTING MATRIX EXPONENTIALS
There are in fact several ways to numerically compute a matrix exponential. Ogata gives three computational techniques, [150, pp526–533], and a paper by Cleve Moler (one of the original MATLAB authors) and Charles Van Loan is titled Nineteen dubious ways to compute the exponential of a matrix⁵, and contrasts methods involving approximation theory, differential equations, matrix eigenvalues and others. Of these 19 methods, two found their way into MATLAB, namely expm, which is the recommended default strategy, and expm1, intended to compute eˣ − 1 accurately for small x.
The one time when matrix exponentials are trivial to compute is when the matrix is diagonal. Physically this implies that the system is totally decoupled, since the matrix A is comprised of only diagonal elements, and the corresponding exponential matrix is simply the exponential of the individual elements. So given the diagonal matrix
$$ \mathbf{D} = \begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix} $$
then the matrix exponential is simply
$$ e^{\mathbf{D}} = \begin{bmatrix} e^{\lambda_1} & 0 & 0 \\ 0 & e^{\lambda_2} & 0 \\ 0 & 0 & e^{\lambda_3} \end{bmatrix} $$
which is trivial and reliable to compute. So one strategy is to transform our system to a diagonal form (if possible), and then simply find the standard scalar exponential of the individual elements. However some matrices, such as those with repeated eigenvalues, are impossible to convert to diagonal form, so in those cases the best we can do is convert the matrix to a Jordan block form as described in [150, p527], perhaps using the jordan command from the Symbolic toolbox.
However this transformation is very sensitive to numerical roundoff and for that reason is not used for serious computation. For example the matrix
$$ \mathbf{A} = \begin{bmatrix} 1 & 1 \\ \epsilon & 1 \end{bmatrix} $$
with ε = 0 has a Jordan form of
$$ \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} $$

⁵SIAM Review, vol. 20, 1978, pp802–836, and reprinted in [158, pp649–680]. In fact 25 years later, an update was published with some recent developments.
but for ε ≠ 0, the Jordan form drastically changes to the diagonal matrix
$$ \begin{bmatrix} 1+\sqrt{\epsilon} & 0 \\ 0 & 1-\sqrt{\epsilon} \end{bmatrix} $$
In summary, for serious numerical calculation we should use the matrix exponential function expm. Remember not to confuse finding the exponential of a matrix, expm, with the MATLAB function exp, which simply finds the exponential of each individual element in the matrix.
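The distinction is easy to see with a small experiment (the nilpotent test matrix here is my own example, chosen so the hand calculation is trivial):

A = [0 1; 0 0];       % nilpotent: A^2 = 0, so expm(A) = I + A exactly
expm(A)               % matrix exponential: [1 1; 0 1]
exp(A)                % element-wise exponential: [1 e; 1 1] -- not the same thing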
2.8.2 USING MATLAB TO DISCRETISE SYSTEMS
All of the complications of discretising continuous systems to their discrete equivalent can be circumvented by using the MATLAB command c2d, which is short for continuous to discrete. Here we need only pass the continuous system of interest and the sampling time. As an example we can verify the conversion of the double integrator system shown earlier.
>> Gc_ss = ss([0 0; 1 0],[1; 0],[0 1],0);  % double integrator, G(s) = 1/s^2
>> Gd_ss = c2d(Gc_ss,2)                    % discretise with T = 2
a =
        x1   x2
   x1    1    0
   x2    2    1
b =
        u1
   x1    2
   x2    2
c =
        x1   x2
   y1    0    1
d =
        u1
   y1    0
Unlike in the analytical case presented previously, here we must specify a numerical value for the
sample time, T .
Example: Discretising an underdamped second order system with a sample time of T = 3, following Eqn. 2.87, using MATLAB to compute the matrix exponential of the augmented system.

[na,nb] = size(B);                     % na states, nb inputs
X = expm([A, B; zeros(nb,na+nb)]*Ts);  % matrix exponential, Eqn. 2.87
Phi = X(1:na,1:na);                    % Pull off blocks to obtain Phi & Delta
Del = X(1:na,na+1:end);
We can use MATLAB to numerically verify the expression found for the discretised double integrator on page 64 at a specific sample time, say T = 4. The double integrator in input/output form is
$$ G(s) = \frac{1}{s^2} $$
which in packed state space notation (refer page 51) is
$$ \begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} $$
>> dt = 4;
>> Gd_ss = c2d(Gc_ss,dt)
a =
           x1      x2
   x1   1.000       0
   x2   4.000   1.000
b =
           u1
   x1   4.000
   x2   8.000
c =
           x1      x2
   y1       0   1.000
d =
        u1
   y1    0
Once the two system matrices Φ and Δ are known, solving the difference equation for further values of k just requires the repeated application of Eqn. 2.41. Starting from known initial conditions x₀, the state vector x at each sample instant is calculated thus:
$$ \begin{aligned} \mathbf{x}_1 &= \mathbf{\Phi}\mathbf{x}_0 + \mathbf{\Delta}\mathbf{u}_0 \\ \mathbf{x}_2 &= \mathbf{\Phi}\mathbf{x}_1 + \mathbf{\Delta}\mathbf{u}_1 = \mathbf{\Phi}^2\mathbf{x}_0 + \mathbf{\Phi}\mathbf{\Delta}\mathbf{u}_0 + \mathbf{\Delta}\mathbf{u}_1 \\ \mathbf{x}_3 &= \mathbf{\Phi}\mathbf{x}_2 + \mathbf{\Delta}\mathbf{u}_2 = \mathbf{\Phi}^3\mathbf{x}_0 + \mathbf{\Phi}^2\mathbf{\Delta}\mathbf{u}_0 + \mathbf{\Phi}\mathbf{\Delta}\mathbf{u}_1 + \mathbf{\Delta}\mathbf{u}_2 \\ &\;\;\vdots \\ \mathbf{x}_n &= \mathbf{\Phi}^n\mathbf{x}_0 + \sum_{k=0}^{n-1} \mathbf{\Phi}^{n-1-k}\mathbf{\Delta}\mathbf{u}_k \end{aligned} \tag{2.88} $$
The MATLAB function ltitr, which stands for linear time invariant time response, or the more general dlsim, will solve the general linear discrete time model as in Eqn. 2.88. In special cases such as step or impulse tests, you can use dstep or dimpulse.
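Eqn. 2.88 is also trivial to implement directly as a loop; a minimal sketch (the variable names Phi, Del, U and x0 are assumptions, not from the text):

% Propagate x_{k+1} = Phi*x_k + Del*u_k over a scalar input sequence U
N = size(U,1);
X = zeros(N, length(x0));    % storage for the state trajectory
x = x0;
for k = 1:N
    X(k,:) = x';
    x = Phi*x + Del*U(k);    % repeated application of Eqn. 2.41
end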
Problem 2.4
1. Evaluate Aⁿ where
$$ \mathbf{A} = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} $$
for different values of n. What is so special about the elements of Aⁿ? (Hint: find the eigenvalues of A.)
2. What is the determinant of Aⁿ?
3. Write a couple of M ATLAB m-functions to convert between the A, B, C and D form and the
packed matrix form.
4. Complete the state space to transfer function conversion analytically started in §2.7.3, Eqn. 2.60. Compare your answer with using ss2tf.
A submarine example A third-order model of a submarine from [63, p416 Ex9.17] is
$$ \dot{\mathbf{x}} = \begin{bmatrix} 0 & 1 & 0 \\ -0.0071 & -0.111 & 0.12 \\ 0 & 0.07 & -0.3 \end{bmatrix}\mathbf{x} + \begin{bmatrix} 0 \\ -0.095 \\ 0.072 \end{bmatrix}u, \qquad \mathbf{x} = \begin{bmatrix} \theta \\ d\theta/dt \\ \alpha \end{bmatrix} \tag{2.89} $$
The state θ is the inclination of the submarine and α is the angle of attack above the horizontal. The scalar manipulated variable u is the deflection of the stern plane. We will assume that of the three states, we can only actually measure two of them, θ and α, thus
$$ \mathbf{C} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} $$
In this example, we consider just three states x, one manipulated variable u, and two outputs, y. The stability of the open-loop system is given by the eigenvalues of A. In MATLAB, the command eig(A) returns
$$ \mathrm{eig}(\mathbf{A}) = \begin{bmatrix} -0.0383 + 0.07i \\ -0.0383 - 0.07i \\ -0.3343 \end{bmatrix} $$
showing that all eigenvalues have negative real parts, indicating that the submarine is oscillatory, though stable. In addition, the complex conjugate pair indicates that there will be some oscillation in the response to a step change in stern plane.
We can simulate the response of our submarine system (Eqn. 2.89) to a step change in stern plane movement, u, using the step command. The smooth plot in Fig. 2.24 shows the result of this continuous simulation, while the staircase plot shows the result of the discrete simulation using a sample time of T = 5. We see two curves corresponding to the two outputs.
Listing 2.6: Submarine simulation

A = [0,1,0; -0.0071,-0.111,0.12; 0,0.07,-0.3]; B = [0,-0.095,0.072]';
C = [1 0 0; 0 0 1];
Gc = ss(A,B,C,0)
dt = 5;          % sample time T = 5 (rather coarse)
Gd = c2d(Gc,dt)  % Create discrete model
step(Gc,Gd);
Fig. 2.24 affirms that the openloop process response is stable, supporting the eigenvalue analysis.
Notice how the step command automatically selected appropriate time scales for the simulation.
How did it do this?
The steady state of the system given that u = 1 is, using Eqn. 2.68,

>> u = 1;
>> xss = -A\B*u;
>> yss = C*xss
yss =
   -9.3239
    0.2400

Figure 2.24: Continuous (smooth) and discrete (staircase) step responses of the two measured outputs of the submarine model.
This script should give the same Φ and Δ matrices as the c2d function, since A is non-singular.
Given a discrete or continuous model, we can compute the response to an arbitrary input sequence using lsim:

t = [0:dt:100]'; U = sin(t/20);  % Arbitrary input U(t)
lsim(Gc,U,t)
lsim(Gd,U)
We could explicitly demand a first-order hold, as opposed to the default zeroth-order hold, by setting the options for c2d, as shown below.
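For instance (a sketch; the exact option syntax depends on the Control Toolbox version):

Gd_foh = c2d(Gc, dt, 'foh');   % first-order hold instead of the default 'zoh'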
Problem 2.5 By using a suitable state transformation, convert the following open-loop system
$$ \dot{x}_1 = x_2, \qquad \dot{x}_2 = 10 - 2x_2 + u $$
where the input u is the following function of the reference r,
$$ u = -10 + 9r - \begin{bmatrix} 9 & 4 \end{bmatrix}\mathbf{x} $$
into a closed loop form suitable for simulation in MATLAB using the lsim function, (ie: ẋ = Ax + Br). Write down the relevant MATLAB code segment. (Note: In practice, it is advisable to use lsim whenever possible for speed reasons.)
Problem 2.6
1. Write an m-file that returns the state space system in packed matrix form that connects the two dynamic systems G₁ and G₂ together,
$$ G_{all} = G_1 G_2 $$
Assume that G₁ and G₂ are already given in packed matrix form.
2. (Problem from [184, p126]) Show that the packed matrix form of the inverse of Eqn. 2.44, u = G⁻¹y, is
$$ G^{-1} = \begin{bmatrix} \mathbf{A} - \mathbf{B}\mathbf{D}^{-1}\mathbf{C} & \mathbf{B}\mathbf{D}^{-1} \\ -\mathbf{D}^{-1}\mathbf{C} & \mathbf{D}^{-1} \end{bmatrix} \tag{2.90} $$
assuming D is invertible, and we have the same number of inputs as outputs.
2.8.3 THE STATE SPACE MODEL WITH DEAD TIME
The discrete state space equation, Eqn. 2.40, does not account for any time delay or dead time between the input u and the output x. This makes it difficult to model systems with delay. However, by introducing new variables we can accommodate dead time or time delays. Consider the continuous differential equation,
$$ \dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t) + \mathbf{B}\mathbf{u}(t-\theta) \tag{2.91} $$
where θ is the dead time and, in this particular case, is exactly equal to 2 sample times (θ = 2T). The discrete time equivalent to Eqn. 2.91 for a two sample time delay is
$$ \mathbf{x}_{k+3} = \mathbf{\Phi}\mathbf{x}_{k+2} + \mathbf{\Delta}\mathbf{u}_k \tag{2.92} $$
which is not quite in our standard state space form of Eqn. 2.41 owing to the difference in time subscripts between u and x.
Now let us introduce a new vector of state variables, z, where
$$ \mathbf{z}_k \stackrel{\text{def}}{=} \begin{bmatrix} \mathbf{x}_k \\ \mathbf{x}_{k+1} \\ \mathbf{x}_{k+2} \end{bmatrix} \tag{2.93} $$
which is sometimes known as a shift register or tapped delay line of states. Using this new state vector, we can write the dynamic system, Eqn. 2.92, compactly as
$$ \mathbf{z}_{k+1} = \begin{bmatrix} \mathbf{0} & \mathbf{I} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{I} \\ \mathbf{0} & \mathbf{0} & \mathbf{\Phi} \end{bmatrix}\mathbf{z}_k + \begin{bmatrix} \mathbf{0} \\ \mathbf{0} \\ \mathbf{\Delta} \end{bmatrix}\mathbf{u}_k \tag{2.95} $$
This augmented system, Eqn. 2.95 (with 2 units of dead time), is now larger than the original Eqn. 2.92, given that we have 2n extra states. Furthermore, note that the new augmented transition matrix in Eqn. 2.95 is no longer of full rank (provided it was in the first place). If we wish to incorporate a delay θ of N sample times (θ = NT), then we must introduce N dummy state vectors into the system, creating a new state vector z of dimension n(N+1) × 1;
$$ \mathbf{z}_k = \begin{bmatrix} \mathbf{x}_k^T & \mathbf{x}_{k+1}^T & \mathbf{x}_{k+2}^T & \cdots & \mathbf{x}_{k+N}^T \end{bmatrix}^T $$
The augmented state transition matrix and augmented control matrix are of the form
$$ \tilde{\mathbf{\Phi}} = \begin{bmatrix} \mathbf{0} & \mathbf{I} & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{I} & \cdots & \mathbf{0} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{I} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{\Phi} \end{bmatrix}, \quad\text{and}\quad \tilde{\mathbf{\Delta}} = \begin{bmatrix} \mathbf{0} \\ \mathbf{0} \\ \vdots \\ \mathbf{0} \\ \mathbf{\Delta} \end{bmatrix} \tag{2.96} $$
so that the augmented system is
$$ \mathbf{z}_{k+1} = \tilde{\mathbf{\Phi}}\mathbf{z}_k + \tilde{\mathbf{\Delta}}\mathbf{u}_k \tag{2.97} $$
and the original state is recovered with
$$ \mathbf{x}_k = \begin{bmatrix} \mathbf{I} & \mathbf{0} & \cdots & \mathbf{0} \end{bmatrix}\mathbf{z}_k \tag{2.98–2.99} $$
If the dead time is not an exact multiple of the sample time, then more sophisticated analysis is
required. See [71, p174]. Systems with a large value of dead time become very unwieldy as a
dummy vector is required for each multiple of the sample time. This creates large systems with
many states that possibly become numerically ill-conditioned and difficult to manipulate.
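Building Eqn. 2.96 in MATLAB is mostly an exercise in block indexing; a minimal sketch (assuming Phi and Del have already been computed, and N = 2 whole samples of delay):

[n,m] = size(Del);                  % n states, m inputs
N = 2;                              % delay of N whole sample periods
Phi_aug = [zeros(n*N,n), eye(n*N);  % shift-register (tapped delay line) structure
           zeros(n,n*N), Phi];
Del_aug = [zeros(n*N,m); Del];      % input only enters the final block row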
Problem 2.7
1. Simulate the submarine example (Eqn 2.89) but now introducing a dead time
of 2 sample time units.
2. Explain how the functions step and dstep manage to guess appropriate sampling rates,
and simulation horizons. Under what circumstances will the heuristics fail?
2.9 STABILITY
Stability is a most desirable characteristic of a control system. As a control designer, you should only design control systems that are stable. For this reason, one must at least be able to analyse a potential control system to establish whether it is theoretically stable. Once the controller has been implemented, it may be too late and costly to discover that the system was actually unstable. One definition of stability is that a system of differential equations is stable if the output is bounded for all time for any given bounded input, and unstable otherwise. This is referred to as bounded input–bounded output (BIBO) stability.
While for linear systems the concept of stability is well defined and relatively straightforward to evaluate⁶, this is not the case for nonlinear systems, which can exhibit complex behaviour. The next two sections discuss the stability of linear systems. Section 2.9.4 briefly discusses techniques that can be used to assess stability for nonlinear systems.
Before any type of stability analysis can be carried out, it is important to define clearly what sort
of analysis is desired. For example is a Yes/No result acceptable, or do you want to quantify
how close to instability the current operating point is? Is the system linear? Are the nonlinearities
hard or soft? Fig. 2.25 highlights some of these concerns.
Figure 2.25: Assessing system stability. For linear transfer functions one can use the Routh array or Jury test (after a Padé approximation of any non-polynomial terms), the eigenvalues or roots of the denominator, or Bode plots with gain and phase margins; for nonlinear systems one can linearise, apply Lyapunov methods, or simulate extensively, which may be hard.
2.9.1 STABILITY IN THE CONTINUOUS DOMAIN
The most important requirement for any controller is that the controller is stable. Here we describe how one can analyse a continuous transfer function to determine whether it is stable or
not.
First recall the conditions for BIBO stability. In the Laplace domain, the transfer function system
is stable if all the poles (roots of the denominator) have negative real parts. Thus if the roots of the
denominator are plotted on the complex plane, then they must all lie to the left of the imaginary
axis or in the Left Hand Plane (LHP). In other words, the time domain solution of the differential
equation must contain only e^{−at} terms and no terms of the form e^{+at}, assuming a is positive.
Why is the stability only determined by the denominator of the transfer function and not by the
numerator or the input?
1. First the input is assumed bounded and stable (by BIBO definition).
⁶Actually, the stability criterion can be divided into 3 categories: stable, unstable and critically (un)stable. Physically, critical stability never really occurs in practice.
2. If the transfer function is expanded as a sum of partial fractions, then the time solution terms
will be comprised of elements that are summed together. For the system to be unstable,
at least one of these individual terms must be unstable. Conversely if all the terms are
stable themselves, then the summation of these terms will also be stable. The input will
also contribute separate fractions that are summed, but these are assumed to be stable by
definition, (part 1).
Note that it is assumed for the purposes of this analysis that the transfer function is written in its simplest form, that it is physically realisable, and that any time delays are expanded as a Padé approximation.
In summary, to establish the stability of a transfer function, we could factorise the denominator polynomial by computing the roots numerically, such as with MATLAB's roots routine. Alternatively, we can just hunt for the signs of the real parts of the roots, a much simpler operation, and this is the approach taken by the Routh and Jury tests described next.
ROUTH STABILITY CRITERION
The easiest way to establish the absolute stability of a transfer function is simply to extract the roots of the denominator, perhaps using MATLAB's roots command. In fact this is overkill, since to establish absolute stability we need only know of the presence of any roots with positive real parts, not their actual values.
Aside from efficiency, there are two cases where this strategy falls down. One is where we do not have access to MATLAB, or the polynomial is particularly ill-conditioned such that simple root extraction techniques fail for numerical reasons; the other case is where we may have a free parameter such as a controller gain in the transfer function. This has prompted the development of simpler exact algebraic methods such as the Routh array or Lyapunov's method to assess stability.
The Routh criterion states that all the roots of the polynomial characteristic equation,
$$ a_n s^n + a_{n-1}s^{n-1} + a_{n-2}s^{n-2} + a_{n-3}s^{n-3} + \cdots + a_1 s + a_0 = 0 \tag{2.100} $$
have negative real parts if and only if all the elements in the first column of the Routh table (as described below) have the same sign. Otherwise the number of sign changes is equal to the number of right-hand plane roots.
The first two rows of the Routh table are simply formed by listing the coefficients of the characteristic polynomial, Eqn. 2.100, in the following manner
$$ \begin{array}{c|cccc} s^n & a_n & a_{n-2} & a_{n-4} & \cdots \\ s^{n-1} & a_{n-1} & a_{n-3} & a_{n-5} & \cdots \\ s^{n-2} & b_1 & b_2 & b_3 & \cdots \\ s^{n-3} & c_1 & c_2 & c_3 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \end{array} $$
where the new entries b and c in the subsequent rows are defined as
$$ b_1 = \frac{a_{n-1}a_{n-2} - a_n a_{n-3}}{a_{n-1}}, \qquad b_2 = \frac{a_{n-1}a_{n-4} - a_n a_{n-5}}{a_{n-1}}, \qquad \text{etc.} $$
$$ c_1 = \frac{b_1 a_{n-3} - a_{n-1} b_2}{b_1}, \qquad \text{etc.} $$
The Routh array is only applicable for continuous time systems; for discrete time systems the Jury test can be used in a manner similar to the Routh table for continuous systems. The construction of the Jury table is slightly more complicated than the Routh table and is described in [61, pp117–118]. The Routh array is most useful when investigating the stability as a function of a variable such as a controller gain. Rivera-Santos has written a MATLAB routine to generate the Routh array with the possibility of including symbolic variables.⁷
Suppose we wish to determine the range of stability for a closed loop transfer function with characteristic equation
$$ s^4 + 3s^3 + 3s^2 + 2s + K = 0 $$
as a function of the unknown gain K.⁸
Listing 2.7: Example of the Routh array using the symbolic toolbox
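The body of the listing is not reproduced in this copy; a hand construction of the array for this quartic using the symbolic toolbox might look like the following sketch (the row variable names are mine):

syms K
% Routh array for s^4 + 3s^3 + 3s^2 + 2s + K
r1 = [1, 3, K];                   % s^4 row: a4, a2, a0
r2 = [3, 2, 0];                   % s^3 row: a3, a1
b1 = (3*3 - 1*2)/3;               % = 7/3
b2 = (3*K - 1*0)/3;               % = K
r3 = [b1, b2];                    % s^2 row
c1 = simplify((b1*2 - 3*b2)/b1);  % = 2 - (9/7)K, the s^1 row
r4 = [K];                         % s^0 row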
Since all the elements in the first column of the table must be positive, we know that for stability −(9/7)K + 2 > 0 and that K > 0, or
$$ 0 < K < 14/9 $$
STABILITY AND TIME DELAYS
Time delay or dead time does not affect the stability of the open loop response, since it does not change the shape of the output curve, but only shifts it to the right. This is due to the non-polynomial term e^{−θs} in the numerator of the transfer function. Hence dead time can be ignored for stability considerations in the open loop. However in the closed loop the dead time term appears in both the numerator and the denominator; it now does affect the stability characteristics and can no longer be ignored. Since dead time is a non-polynomial term, we cannot simply find the roots of the denominator, since now there are an infinite number of them; instead we must approximate the exponential term with a truncated polynomial such as the first-order Padé approximation,
$$ e^{-\theta s} \approx \frac{1 - \frac{\theta}{2}s}{1 + \frac{\theta}{2}s} $$
and then apply either a Routh array in the continuous time case, or a Jury test in the discrete time case. Note however that the Nyquist stability criterion yields exact results, even for those systems with time delays. The drawback is that the computation is tedious without a computer.
Problem 2.8 Given
$$ G(s) = \frac{15(4s+1)e^{-\theta s}}{(3s+1)(7s+1)} $$
find the value of dead time θ such that the closed-loop is just stable, using both a Padé approximation and a Bode or Nyquist diagram.
⁷The routine, routh.m, is available from the MathWorks users group collection at www.mathworks.com/matlabcentral/fileexchange/.
⁸Problem adapted from [152, p288].
2.9.2 STABILITY OF THE CLOSED LOOP
Prior to the days of cheap computers that can readily manipulate polynomial expressions and
extract the roots of polynomials, one could infer the stability of the closed loop from a Bode or
Nyquist diagram of the open loop.
The openloop system
$$ G(s) = \frac{100(s+2)}{(s+5)(s+4)(s+3)}e^{-0.6s} \tag{2.101} $$
is clearly stable, but the closed loop system, G/(1 + G), may or may not be stable. If we substitute s = iω and compute the complex quantity G(iω) as a function of ω, we can apply either the Bode stability criterion, or the equivalent Nyquist stability criterion, to establish the stability of the closed loop without deriving an expression for the closed loop and subsequently solving for the closed loop poles.
The Bode diagram consists of two plots: the magnitude of G(iω) versus ω, and the phase of G(iω) versus ω. Alternatively we could plot the real and imaginary parts of G(iω) as a function of frequency, but this results in a three dimensional plot. Such a plot for the system given in Eqn. 2.101 is given in Fig. 2.26(a).
Figure 2.26: Nyquist diagram of Eqn. 2.101 in (a) three dimensions and (b) as typically presented
in two dimensions.
However three dimensional plots are difficult to manipulate, so we normally ignore the frequency component, and just view the shadow of the plot on the real/imaginary plane. The two dimensional Nyquist curve corresponding to Fig. 2.26(a) is given in Fig. 2.26(b). It is evident from either plot that the curve does encircle the critical (−1, 0i) point, so the closed loop system will be unstable.
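We can reproduce this check numerically (a sketch; here the dead time is attached via the OutputDelay property of the LTI object):

G = tf(100*[1 2], conv([1 5], conv([1 4],[1 3])), 'OutputDelay', 0.6); % Eqn. 2.101
nyquist(G)             % does the curve encircle the (-1, 0i) point?
margin(G)              % gain & phase margins confirm closed-loop instability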
Establishing the closed loop stability using the Nyquist criterion is slightly more general than using the Bode criterion. This is because systems that exhibit non-monotonically decreasing curves may cross the critical lines more than once, leading to misleading results. An interesting example from [80] is used to illustrate this potential problem.
2.9.3 STABILITY OF DISCRETE TIME SYSTEMS
The definition of stability for a discrete time system is similar to that of the continuous time system, except that the definition is only concerned with the values of the output and input at the sample times. Thus it is possible for a system to be stable at the sample points, but be unstable between the sample points. This is called a hidden oscillation, although it rarely occurs in practice. Also note that the stability of a discrete time system is dependent on the sample time, T. Recall that with sample time T, by definition z = exp(sT), so the poles p_i of the continuous transfer function map to exp(p_i T) in the discrete domain. If p_i is negative, then exp(p_i T) will be less than 1.0. If p_i is positive, then the corresponding discrete pole z_i will be larger than 1.0. Strictly, if p_i is complex, then it is the magnitude of z_i that must be less than 1.0 for the system to be stable.
In summary, the discrete time transfer function G(z) is stable if G(z) has no poles on, or outside, the circle of radius 1, called the unit circle. Now the stability evaluation procedure is the same as that for the continuous case: one simply factorises the characteristic equation of the transfer function and inspects the roots. If any of the roots z_i lie outside the unit circle, the system will be unstable. If any of the poles lie exactly on the unit circle, there can be some ambiguity about the stability, see [119, p120]; for example the transfer function
$$ G(z) = \frac{1}{z^2+1} $$
has poles exactly on the unit circle at z = ±i. As a second example, consider discretising the stable continuous transfer function
$$ G(s) = \frac{1}{6s^2+5s+1} $$
which has poles at s = −1/2 and s = −1/3. Discretising with, say, T = 2 maps these poles to
$$ e^{-2/2} = 0.3679 \quad\text{and}\quad e^{-2/3} = 0.5134 $$
Since both discrete poles lie inside the unit circle, the discretised transfer function with a first-order hold is still stable.
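This mapping is easy to confirm with c2d (a sketch assuming T = 2, which reproduces the pole values quoted above):

G = tf(1,[6 5 1]);        % continuous poles at -1/2 and -1/3
Gd = c2d(G, 2, 'foh');    % discretise with a first-order hold, T = 2
pole(Gd)                  % 0.3679 and 0.5134, both inside the unit circle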
If you are using the state space description, then simply check the eigenvalues of the homogeneous part of the equation (the A or Φ matrix).
2.9.4 STABILITY OF NONLINEAR DIFFERENTIAL EQUATIONS
Despite the fact that nonlinear differential equations are in general very difficult, if not impossible
to solve, one can attempt to establish the stability without needing to find the solution. Studies
of this sort fall into the realm of nonlinear systems analysis which demands a high degree of
mathematical insight and competence. [189] is a good introductory text for this subject that avoids
much of the sophisticated mathematics.
Two methods due to the Russian mathematician Lyapunov⁹ address this nonlinear system stability problem. The indirect or linearisation method is based on the fact that the stability near the equilibrium point will be closely approximated by the linearised approximation at that point. However it is the second method, or direct method, that, being exact, is a far more interesting and powerful analysis. Since the Lyapunov stability method is applicable for the general nonlinear differential equation, it is of course applicable for linear systems as well.
The Lyapunov stability theorem says that given the differential system
$$ \dot{\mathbf{x}} = \mathbf{f}(\mathbf{x},t) \tag{2.102} $$
where f(0, t) = 0 for all t, if there exists a scalar function V(x) having continuous first partial derivatives such that V(x) is positive definite and its time derivative, V̇(x), is negative definite, then the equilibrium position at the origin is uniformly asymptotically stable. If additionally V(x) → ∞ as ‖x‖ → ∞, then it is stable in the large. V(x) is called a Lyapunov function of the system.
⁹Some authors, e.g. Ogata, tend to spell it as Liapunov. Note however that MATLAB uses the lyap spelling.
If the Lyapunov function is thought of as the total energy of the system, then we know that the total energy must always be positive, hence the restriction that V(x) is positive definite; also, for the system to be asymptotically stable, this energy must slowly die away with time, hence the requirement that V̇(x) < 0. The hard part about this method is finding a suitable Lyapunov function in the first place. Testing the conditions is easy, but note that if your particular candidate V(x) function failed the requirements for stability, this either means that your system is actually unstable, or that you just have not found the right Lyapunov function yet. The problem is that you don't know which. In some cases, particularly vibrating mechanical ones, good candidate Lyapunov functions can be found by using the energy analogy; however in other cases this may not be productive.
Algorithm 2.2 Lyapunov stability analysis of nonlinear continuous and discrete systems.
The second method of Lyapunov is applicable for both continuous and discrete systems, [189, p65], [150, p557]. For the continuous system ẋ = f(x), f(0) = 0, the origin is asymptotically stable in the large if a function V(x) exists where:
1. V(x) is positive definite,
2. V̇(x) is negative definite, and
3. V(x) → ∞ as ‖x‖ → ∞.
For the discrete system x_{k+1} = f(x_k), f(0) = 0, the analogous conditions are:
1. V(x) is positive definite,
2. ΔV(x_k) = V(f(x_k)) − V(x_k) is negative definite, and
3. V(x) → ∞ as ‖x‖ → ∞.
Example of assessing the stability of a nonlinear system using Lyapunov's second method. Consider the dynamic equations from Ogata [152, Ex 9-18, p729]
$$ \dot{x}_1 = x_2 - x_1\left(x_1^2 + x_2^2\right) $$
$$ \dot{x}_2 = -x_1 - x_2\left(x_1^2 + x_2^2\right) $$
and we want to establish whether the system is stable or not. The system is nonlinear and the origin is the only equilibrium state. We can propose a trial Lyapunov function V(x) = x₁² + x₂², which is positive definite. (Note that that was the hard bit!) Now differentiating gives
$$ \dot{V}(\mathbf{x}) = 2x_1\frac{dx_1}{dt} + 2x_2\frac{dx_2}{dt} = -2\left(x_1^2+x_2^2\right)^2 $$
which is negative definite. Since V(x) → ∞ as ‖x‖ → ∞, the system is stable in the large. Relevant problems in [152, p771] include B-9-15, B-9-16 and B-9-17.
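The differentiation can be checked with the symbolic toolbox (a sketch, using V and f as defined in the example above):

syms x1 x2 real
f = [ x2 - x1*(x1^2 + x2^2);     % the nonlinear dynamics from the example
     -x1 - x2*(x1^2 + x2^2)];
V = x1^2 + x2^2;                 % candidate Lyapunov function
Vdot = simplify(jacobian(V,[x1 x2])*f)   % returns -2*(x1^2 + x2^2)^2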
The problem with assessing the stability of nonlinear systems is that the stability can change
depending on the values of the states and/or inputs. This is not the case for linear systems,
where the stability is a function of the system, not the operating point. An important and relevant
chemical engineering example of a nonlinear system that can either be stable or unstable is a
continuously stirred tank reactor. The reactor can either be operated in the stable or unstable
regime depending on the heat transfer characteristics. The stability of such reactors is important
for exothermic reactions, since the thermal run-away is potentially very hazardous.
STABILITY OF LINEAR SYSTEMS USING LYAPUNOV
Sections 2.9.1 and 2.9.3 established stability criteria for linear systems. In addition to these methods, the second method of Lyapunov can be used, since the linear system is simply a special case of the general nonlinear system. The advantages of using the method of Lyapunov are:
1. The Lyapunov method is analytical even for nonlinear systems, so the criterion is, in principle, exact.
2. It is computationally simple and robust and does not require the solution to the differential
equation.
3. We do not need to extract eigenvalues of an n n matrix which is an expensive numerical computation, (although we may need to check the definiteness of a matrix which does
require at least a Cholesky decomposition).
Of course similar advantages apply to the Routh array procedure, but unlike the Routh array, this
approach can be extended to nonlinear problems.
To derive the necessary condition for stability of ẋ = Ax using the method of Lyapunov, we follow the procedure outlined in Algorithm 2.2, where we choose a possible Lyapunov function in quadratic form,
$$ V(\mathbf{x}) = \mathbf{x}^T\mathbf{P}\mathbf{x} $$
The Lyapunov function will be positive definite if the matrix P is positive definite. The time derivative is given by
$$ \dot{V}(\mathbf{x}) = \dot{\mathbf{x}}^T\mathbf{P}\mathbf{x} + \mathbf{x}^T\mathbf{P}\dot{\mathbf{x}} = (\mathbf{A}\mathbf{x})^T\mathbf{P}\mathbf{x} + \mathbf{x}^T\mathbf{P}(\mathbf{A}\mathbf{x}) = \mathbf{x}^T\underbrace{\left(\mathbf{A}^T\mathbf{P} + \mathbf{P}\mathbf{A}\right)}_{-\mathbf{Q}}\mathbf{x} $$
A solution procedure that avoids us testing many different Lyapunov candidates is to proceed in reverse; namely choose an arbitrary symmetric positive definite Q (say the identity matrix), and then solve
$$ \mathbf{A}^T\mathbf{P} + \mathbf{P}\mathbf{A} = -\mathbf{Q} \tag{2.103} $$
for the matrix P. If P is positive definite, then the system ẋ = Ax is stable. Eqn. 2.103 is known as the matrix Lyapunov equation; the more general form, AP + PB = Q, is known as Sylvester's equation.
One obvious way to solve for the symmetric matrix P in Eqn. 2.103 is by equating coefficients, which results in a system of n(n+1)/2 linear algebraic equations. It is convenient in MATLAB to use the Kronecker tensor product to set up the equations and then use MATLAB's backslash operator to solve them. This has the advantage that it is expressed succinctly in MATLAB, but it has poor storage and algorithmic characteristics and is recommended only for small dimensioned problems. Section 2.9.5 explains Kronecker tensor products and this analytical solution strategy in further detail.
Listing 2.8: Solve the continuous matrix Lyapunov equation using Kronecker products

n = size(Q,1);  % Solve A'P + PA = -Q, with Q = Q'
I = eye(n);     % Identity I_n, the same size as A
P = reshape(-(kron(I,A')+kron(A',I))\Q(:),n,n); % (I(x)A' + A'(x)I) vec(P) = -vec(Q)
P = (P+P')/2;   % force symmetric (hopefully unnecessary)
The final line, while strictly unnecessary, simply forces the computed result to be symmetric, which of course it should be in the first place. Less restrictive conditions on the form of Q exist that could make the solution procedure easier to do manually; these are discussed in [152, p766] and [150, p554]. Note however that using a computational tool such as MATLAB, we would not normally bother with these modifications.
The CONTROL TOOLBOX for MATLAB includes the function lyap that solves the matrix Lyapunov equation, and which is more numerically robust and efficient than equating the coefficients or using the Kronecker tensor product method given on page 81. However note that the definition of the Lyapunov matrix equation used by MATLAB,
$$ \mathbf{A}\mathbf{P} + \mathbf{P}\mathbf{A}^T = -\mathbf{Q} \tag{2.104} $$
is not exactly the same as that defined by Ogata and used previously in Eqn. 2.103. (See the MATLAB help file for clarification of this and compare with example 9–20 in [152, p734].)
Suppose we wish to establish the stability of the continuous system
$$ \dot{\mathbf{x}} = \mathbf{A}\mathbf{x} = \begin{bmatrix} 0 & 1 \\ -1 & -0.4 \end{bmatrix}\mathbf{x} $$
using lyap, as opposed to, say, extracting eigenvalues. Now we wish to solve AᵀP + PA = −Q for P, where Q is some conveniently chosen positive definite matrix such as Q = I. Since MATLAB's lyap function solves the equivalent matrix equation given by Eqn. 2.104, we must pass Aᵀ rather than A as the first argument to lyap.
Listing 2.9: Solve the matrix Lyapunov equation using the lyap routine

>> A = [0,1;-1,-0.4];
>> Q = eye(2);
>> P = lyap(A',Q)   % Solve A'P + PA = -Q
P =
    2.7000    0.5000
    0.5000    2.5000

The solution has minor determinants of 2.7 and 6.5. Since both are positive, following Sylvester's criteria, P is positive definite and so the system is stable.
While we can use Sylvester's criteria to establish whether P is positive definite or not, and hence the system's stability, it is not efficient for large systems. To establish if a symmetric matrix is positive definite, the easiest way is to look at the eigenvalues: if they are all positive, the matrix is positive definite. Unfortunately, we were originally trying to avoid solving for eigenvalues, so this defeats the original purpose somewhat. Another efficient strategy to check for positive definiteness in MATLAB is to attempt a Cholesky decomposition using the [R,p]=chol(A) command. The decomposition will be successful if A is positive definite, and will terminate early with a suitable error message if not.
For example, suppose we wanted to establish the stability of the transfer function
$$ G = \frac{s-1}{(s+1)(s+2)(s-3)} $$
which is clearly unstable due to the pole at s = +3. In this case we convert to state space, solve for P, and then attempt a Cholesky decomposition to establish if P is positive definite.
Listing 2.10: Using Lyapunov to establish stability of a linear system

>> G = zpk(1,[-1 -2 3],1);    % G = (s-1)/((s+1)(s+2)(s-3))
>> [A,B,C,D] = ssdata(G);
>> P = lyap(A',eye(3));       % Solve A'P + PA = -I
>> chol(P)
Error using chol
Matrix must be positive definite.
As a final check, we could evaluate the eigenvalues of A in MATLAB by typing eig(A). Of course in the above case we will get −1, −2 and +3.
DISCRETE TIME SYSTEMS
For linear discrete time systems, the establishment of stability is similar to the continuous case given previously. Given the discrete time system x_{k+1} = Φx_k, we will again choose a positive definite quadratic function for the trial Lyapunov function, V(x) = xᵀPx, where the matrix P is positive definite, and compute the forward difference
$$ \Delta V(\mathbf{x}) = V(\mathbf{x}_{k+1}) - V(\mathbf{x}_k) = \mathbf{x}_k^T\left(\mathbf{\Phi}^T\mathbf{P}\mathbf{\Phi} - \mathbf{P}\right)\mathbf{x}_k $$
which, for stability, requires
$$ \mathbf{\Phi}^T\mathbf{P}\mathbf{\Phi} - \mathbf{P} = -\mathbf{Q} \tag{2.105} $$
The reverse solution procedure, i.e. to solve for the matrix P given Q and Φ, is analogous to the continuous time case given in Listing 2.8 using the Kronecker tensor product.
Listing 2.11: Solve the discrete matrix Lyapunov equation using Kronecker products

n = size(Q,1);   % Solve Phi'*P*Phi - P = -Q
I = eye(n);
P = reshape(-(kron(Phi',Phi')-kron(I,I))\Q(:),n,n);
P = (P+P')/2;    % force symmetric
Once again, the more efficient dlyap routine from the CONTROL TOOLBOX can also be used. In fact dlyap simply calls lyap after some minor pre-processing. We can demonstrate this by establishing the stability of
$$ \mathbf{x}_{k+1} = \begin{bmatrix} 3 & 0.5 \\ 0 & 0.8 \end{bmatrix}\mathbf{x}_k $$
In this case, since Φ is upper triangular, we can read the eigenvalues by inspection (they are the diagonal elements, 3 and 0.8). Since one of the eigenvalues is outside the unit circle, we know immediately that the process is unstable. Notwithstanding, following the method of Lyapunov and solving Eqn. 2.105 for P leads to the same conclusion: the resulting P is not positive definite.
2.9.5 EXPRESSING MATRIX EQUATIONS SUCCINCTLY USING KRONECKER PRODUCTS
The strategy employed in Listings 2.8 and 2.11 to solve the Lyapunov equation used Kronecker
products and vectorisation or stacking to express the matrix equation succinctly. This made it easy
to solve since the resulting expression was now reformulated into a system of linear equations
which can be solved using standard linear algebra techniques. Further details on the uses and
properties of Kronecker products (or tensor products) are given in [88, p256], [115, Chpt 13] and
[130]. The review in [36] concentrates specifically on Kronecker products used in control applications.
The Kronecker product, given the symbol ⊗, of two matrices A (m × n) and B (p × q), results in a new (large) matrix
$$ \mathbf{A} \otimes \mathbf{B} = \begin{bmatrix} a_{11}\mathbf{B} & \cdots & a_{1n}\mathbf{B} \\ \vdots & \ddots & \vdots \\ a_{m1}\mathbf{B} & \cdots & a_{mn}\mathbf{B} \end{bmatrix} \quad \text{of dimension } (mp \times nq) \tag{2.106} $$
The vectorisation of a matrix A is an operation that converts a rectangular matrix into a single column by stacking the columns of A on top of each other. In other words, if A is an (n × m) matrix, then vec(A) is the resulting (nm × 1) column vector. In MATLAB we convert block matrices or row vectors to columns simply using the colon operator, as in A(:).
We can combine this vectorisation operation with Kronecker products to express matrix multiplication as a linear transformation. For example, for two matrices A and B of compatible dimensions,
$$ \mathrm{vec}(\mathbf{A}\mathbf{B}) = (\mathbf{I} \otimes \mathbf{A})\,\mathrm{vec}(\mathbf{B}) \tag{2.107} $$
$$ \phantom{\mathrm{vec}(\mathbf{A}\mathbf{B})} = \left(\mathbf{B}^T \otimes \mathbf{I}\right)\mathrm{vec}(\mathbf{A}) \tag{2.108} $$
Table II in [36] summarises these and many other properties of the algebra of Kronecker products and sums.
This gives us an alternative way to express matrix expressions such as the Sylvester equation AX + XA = Q, where we wish to solve for the matrix X given the matrix A. In this case, using Eqn. 2.107 and Eqn. 2.108, we can write
$$ \mathrm{vec}(\mathbf{A}\mathbf{X} + \mathbf{X}\mathbf{A}) = \left(\mathbf{I} \otimes \mathbf{A} + \mathbf{A}^T \otimes \mathbf{I}\right)\mathrm{vec}(\mathbf{X}) = \mathrm{vec}(\mathbf{Q}) $$
which is in the form of a system of linear equations Gx = q, where the vectors x and q are simply the stacked columns of the matrices X and Q, and the matrix G is given by (I ⊗ A + Aᵀ ⊗ I). We first solve for the unknown vector x using say x = G⁻¹q, or some numerically sound equivalent, and then we reassemble the matrix X by un-stacking the columns from x. Of course this strategy is memory intensive because the size of the matrix G is (n² × n²). However [36] describes some modifications to this approach to reduce the dimensionality of the problem.
USING fsolve TO SOLVE MATRIX EQUATIONS
The general nonlinear algebraic equation solver fsolve within the O PTIMISATION T OOLBOX has
the nice feature that it can solve matrix equations such as the continuous time Lyapunov equation
directly. Here we wish to find the square matrix X such that
$$ \mathbf{A}\mathbf{X} + \mathbf{X}\mathbf{A}^T + \mathbf{Q} = \mathbf{0} $$
given an arbitrary square matrix A and a positive definite matrix Q. For an initial estimate for X
we should start with a positive definite matrix such as I. We can compare this solution with the
one generated by the dedicated lyap routine.
>> n = 3;
>> A = rand(n);
>> Q1 = randn(n); Q = Q1'*Q1;   % construct a positive definite Q
>> Xx = lyap(A,Q);
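The fsolve call itself is not reproduced in this copy; a sketch of how it might look (fsolve happily accepts matrix-valued residual functions):

F = @(X) A*X + X*A' + Q;        % matrix residual of the Lyapunov equation
X = fsolve(F, eye(n));          % start from a positive definite initial guess
norm(X - Xx)                    % should agree with the dedicated lyap solution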
Xx =
   -0.8574   -0.6776    1.0713
   -0.6776    2.7742   -2.6581
    1.0713   -2.6581    0.7611

2.9.6 SUMMARY OF STABILITY ANALYSIS
The stability of a linear continuous time transfer function is determined by the sign of the real part of the poles: for the transfer function to be stable, all the poles must lie strictly in the left-hand plane. In the discrete case, the stability is determined by the magnitude of the possibly complex poles: to be stable, the discrete poles must lie within the unit circle. (See Fig. 6.3.) The key difference between the stability of discrete and continuous transfer functions is that the sample time plays an important role. Generally, as one increases the sample time, the discrete system tends towards instability.
Figure 2.28: Regions of stability for the poles of continuous (left) and discrete (right) systems: the entire left half of the s-plane, and the interior of the unit circle in the z-plane.
To establish the stability of a transfer function, one need not explicitly solve for the roots of the denominator polynomial, since exact algebraic methods such as the Routh array, the Jury test and Lyapunov methods exist. However, with current computer aided tools such as MATLAB, the task of reliably extracting the roots of high order polynomials is not considered the hurdle it once was. The Lyapunov method is also applicable for nonlinear systems. See Ogata's comments, [150, p250].
2.10 SUMMARY
This chapter briefly developed the tools needed to analyse discrete dynamic systems. While most physical plants are continuous systems, most control is performed on a digital computer, and therefore the discrete description is far more natural for computer implementation or simulation. Converting a continuous differential equation to a discrete difference equation can be done either by using a backward difference approximation such as Euler's scheme, or by using z-transforms. The z-transform is the discrete equivalent of the Laplace transform in the continuous domain.
A vector/matrix approach to systems modelling and control has the following characteristics:
1. We can convert any linear time invariant differential equation into the state space form ẋ = Ax + Bu.
2. Once we have selected a sampling time, T, we can convert from the continuous time domain to the discrete time equivalent; x_{k+1} = Φx_k + Δu_k.
3. The stability of both the discrete and continuous time systems is determined by the eigenvalues of the A or Φ matrix.
4. We can transform the mathematical description of the process to dynamically similar descriptions by adjusting our base coordinates, which may make certain computations easier.
Stability is one of the most important concepts for the control engineer. Continuous linear systems are stable if all the poles lie in the left hand side of the complex plane, ie they have negative real parts. For discrete linear systems, the poles must lie inside the unit circle. No easy check can be made for nonlinear systems, although the method due to Lyapunov can possibly be used, or one can approximate the nonlinear system as a linear one and check the stability of the latter.
When we deal with discrete systems, we must sample the continuous function. Sampling can introduce problems in that we may miss interesting information if we don't sample fast enough. The sampling theorem tells us how fast we must sample to reconstruct particular frequencies. Over-sampling is expensive, and could possibly introduce unwanted artifacts such as RHP zeros.
CHAPTER 3 MODELLING DYNAMIC SYSTEMS
3.1 DYNAMIC SYSTEM MODELS
Dynamic system models are groups of equations that involve time derivative terms that attempt
to reproduce behaviour that we observe around us. I admire the sense of optimism in the quote
given above by Hans Mark: if we know the governing equations, and the initial conditions, then
the future is assured. We now know that the 18th century thinkers were misguided in this sense,
but nevertheless engineers are a conservative lot, and still today with similar tools, they aim to
predict the known universe.
It is important to realise that with proven process models, control system design becomes systematic, hence the importance of modelling.
Physically dynamic systems are those that change with time. Examples include vibrating structures (bridges and buildings in earthquakes), bodies in motion (satellites in orbit, fingers on a
typewriter), changing compositions (chemical reactions, nuclear decay) and even human characteristics (attitudes to religion or communism throughout a generation, or attitudes to the neighbouring country throughout a football match) and of course, there are many other examples. The
aim of many modellers is to predict the future. For many of these types of questions, it is infeasible to experiment to destruction, but it is feasible to construct a model and test this. Today, it is
easiest to construct a simulation model and test this on a computer. The types of models referred
to in this context are mathematically based simulation models.
Black-box or heuristic models, where we just fit any old curve to the experimental data, and white-box or fundamental models, where the curves are fitted to well established physical laws, represent two possible extremes in modelling philosophy. In practice, most engineering models lie somewhere in the grey zone, known appropriately as grey-box models, where we combine our partial prior knowledge with black-box components to fit any residuals. Computer tools for grey-box modelling are developed in [32].
3.1.1 Steady state and dynamic models
Steady state models only involve algebraic equations. As such they are more useful for design
rather than control tasks. However they can be used in control to give broad guidelines about
the operating characteristics of the process. One example is Bristol's relative gain array (§3.2.6),
which uses steady state data to analyse for multivariable interaction. But it is the dynamic model
that is of most importance to the control engineer. Dynamic models involve differential equations.
Solving systems of differential equations is much harder than solving algebraic equations, but
is essential for the study and application of control. A good reference for techniques of modelling
dynamic chemical engineering applications is [198].
3.2 Dynamic process models
An economist is an expert who will know tomorrow why the things he predicted yesterday didn't happen today.
Laurence J. Peter
This section reviews the basic steps for developing dynamic process models and gives a few examples of common dynamic models. Modelling is particularly important in the capital-intensive
world of chemical manufacture. Perhaps there is no better example of the benefits of modelling
than in distillation. One industry magazine reported that distillation accounted for around 3% of
the total world energy consumption. Clearly this provides a large economic incentive to improve
operation.
When modelling chemical engineering unit operations, we can consider (in increasing complexity):
1. Lumped parameter systems such as well-mixed reactors or evaporators.

2. Staged processes such as distillation columns, multiple-effect evaporators, flotation.

3. Distributed parameter systems such as packed columns, poorly mixed reactors, and dispersion in lakes & ponds.
An evaporator is an example of a lumped parameter system and is described in §3.2.3, while a
rigorous distillation column model, described on page 102, is an example of a staged process.
For the present we will investigate only lumped parameter systems and staged processes, which
involve only ordinary differential equations (albeit possibly a large number of them), as opposed
to partial differential equations. Problems which involve algebraic equations coupled to the ordinary
differential equations are briefly considered in section 3.4.2.
A collection of industrial process models is available in the nonlinear model library from www.hedengren.net/resear
There are many text books devoted to general modelling for engineers, notably [35, 114] for general issues, [45, 197] for solution techniques and [131, 198] for chemical engineering applications.
I recommend the following stages in the model building process:
1. Draw a diagram of the system, marking a boundary and labelling the input/output flows that
cross the boundary.
2. Decide which are the state variables, and which are the parameters, manipulated variables, uncontrolled disturbances etc.
3. Write down any governing equations that may be relevant such as conservation of mass and
energy.
4. Massage these equations into a standard form suitable for simulation.
The following sections will describe models for a flow system into and out of a tank with external
heating, a double integrator such as an inverted pendulum or satellite control, a forced circulation
evaporator, and a binary distillation column.
3.2.1 Simple models
The best material model of a cat is another, or preferably the same, cat.
Arturo Rosenblueth, Philosophy of Science, 1945
Pendulums
Pendulums provide an easy and fruitful target to model. They can easily be built, are visually
impressive, and the governing differential equations derived from the physics are simple but
nonlinear. Furthermore, if you invert the pendulum such as in a rocket, the system is unstable,
and needs an overlaying control scheme.
[Figure: A pendulum of mass m and length l, driven by an applied torque Tc. The hanging pendulum is stable; the inverted pendulum is not.]
Defining the states as the angle and the angular velocity,

$$x_1 \overset{\text{def}}{=} \theta \tag{3.2}$$

$$x_2 \overset{\text{def}}{=} \dot{x}_1 = \frac{d\theta}{dt} \tag{3.3}$$

the dynamics can be written as a pair of first order equations,

$$\dot{x}_1 = x_2 \tag{3.4}$$

$$\dot{x}_2 = -\frac{g}{l}\sin(x_1) + \frac{T_c}{ml^2} \tag{3.5}$$

Linearising Eqn. 3.5 is trivial since sin(x1) ≈ x1 for small angles. Consequently, both nonlinear and
linear expressions are compared below:

$$\underbrace{\dot{\mathbf{x}} = \begin{bmatrix} x_2 \\ -\frac{g}{l}\sin(x_1) \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{1}{ml^2} \end{bmatrix} T_c}_{\text{nonlinear}} \tag{3.6}$$

$$\underbrace{\dot{\mathbf{x}} = \begin{bmatrix} 0 & 1 \\ -\frac{g}{l} & 0 \end{bmatrix}\mathbf{x} + \begin{bmatrix} 0 \\ \frac{1}{ml^2} \end{bmatrix} T_c}_{\text{linear}} \tag{3.7}$$
For the inverted pendulum the g/l term changes sign, but that sign change is enough to place the linearised pole in the right-hand plane.
A double integrator
The double integrator is a simple linear example from classical mechanics. It can describe, amongst
other things, a satellite attitude control system,
$$J\,\frac{d^2\theta}{dt^2} = u \tag{3.8}$$

where J is the moment of inertia, θ is the attitude angle, and u is the control torque (produced by
small attitude rockets mounted on the satellite's side). We can convert the second order system in
Eqn. 3.8 to two first order systems by again defining a new state vector

$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \overset{\text{def}}{=} \begin{bmatrix} \theta \\ \dot{\theta} \end{bmatrix} \tag{3.9}$$

Substituting this definition into Eqn. 3.8, we get $J\dot{x}_2 = u$ and $\dot{x}_1 = x_2$, or in matrix-vector form

$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{1}{J} \end{bmatrix} u \tag{3.10}$$
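As a quick illustration (not a listing from the text), the double integrator of Eqn. 3.10 can be built and stepped in MATLAB, assuming an arbitrary inertia:

J = 10;                                    % a hypothetical moment of inertia
sys = ss([0 1; 0 0], [0; 1/J], [1 0], 0);  % Eqn. 3.10 with the angle x1 = θ as output
step(sys, 10)                              % a step in torque gives a parabolic angle response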
[Figure: A tank with inlet flow Fin and an exit valve. The true height, h, is sensed indirectly as the measured height, h̃, in an external level leg connected through a restriction valve.]

A mass balance on the well-mixed tank contents gives

$$\frac{dh}{dt} = \frac{F_{in} - F_{out}}{\rho A} \tag{3.12}$$

since the area and density are constant. If the liquid passes through a valve at the bottom of the
tank, the flowrate out will be a function of height. For many processes, the flow is proportional
to the square root of the pressure drop (Bernoulli's equation),

$$F_{out} \propto \sqrt{\Delta P} = k\sqrt{h}$$
This square root relation introduces a mild nonlinearity into the model.
The tank level is not actually measured in the tank itself, but in a level leg connected to the
tank. The level in this level leg lags behind the true tank level owing to the restriction valve on
the interconnecting line. This makes the system slightly more difficult to control owing to the
added measurement dynamics. We assume that the level leg can be modelled using a first order
dynamic equation, with a gain of 1 and a time constant of about 2 seconds. I can estimate this by
simply watching the apparatus. Thus
$$\frac{d\tilde{h}}{dt} = \frac{h - \tilde{h}}{2} \tag{3.13}$$
The level signal is a current, I, between 4 and 20 mA. This current is algebraically related (by
some function f(·), although often approximately linear) to the level in the level leg. Thus the
variable that is actually measured is

$$I = f(\tilde{h}) \tag{3.14}$$
The current I is called the output or measured variable. In summary, the full model equations
are:

$$\frac{dh}{dt} = \frac{F_{in} - k\sqrt{h}}{\rho A} \tag{3.15}$$

$$\frac{d\tilde{h}}{dt} = \frac{h - \tilde{h}}{2} \tag{3.16}$$

$$I = f(\tilde{h}) \tag{3.17}$$
The input variables (which we may have some hope of changing) are the input flow rate, Fin, and
the exit valve position which changes k. The dependent (state) variables are the actual tank level,
h, and the measurable level in the level leg, h̃. The output variable is the current, I. Constant
parameters are the density, ρ, and the tank cross-sectional area, A. In Table 3.1, which summarises this
nomenclature, I have written vectors in a bold face, and scalars in italics. The input variables can be
further separated into variables that can be easily changed or changed on demand such as the
outlet valve position (manipulated variables), and inputs that change outside our control (disturbance variables), in this case the inlet flowrate.
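Eqns. 3.15–3.17 are straightforward to simulate; the following is a minimal sketch assuming purely illustrative parameter values (and omitting the trivial measurement equation):

A = 0.5; rho = 1000; k = 3; Fin = 6;              % hypothetical parameter values
tank = @(t,x) [(Fin - k*sqrt(x(1)))/(rho*A); ...  % dh/dt, Eqn. 3.15
               (x(1) - x(2))/2];                  % dh~/dt, Eqn. 3.16
[t,x] = ode45(tank, [0 5e3], [1; 1]);             % integrate from h = h~ = 1
plot(t,x)                                         % the level-leg state lags the true level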
Table 3.1: Standard nomenclature used in modelling dynamic systems

  type          symbol   variable
  independent     t      time, t
  states          x      h, h̃
  inputs          u      Fin, k
  outputs         y      I
  parameters      θ      ρ, A

A stirred tank heater
Suppose an electrical heater is added causing an increase in water temperature. Now in addition
to the mass balance, we have an energy balance. The energy balance in words is the rate of
temperature rise of the contents in the tank is proportional to the amount of heat that the heater is
supplying plus the amount of heat that is arriving in the incoming flow minus the heat lost with
the out flowing stream minus any other heat losses.
We note that for water the heat capacity, cp, is almost constant over the limited temperature range
we are considering, and that the enthalpy of water in the tank is defined as

$$H = M c_p \Delta T \tag{3.18}$$

where ΔT is the temperature difference between the actual tank temperature T and some reference temperature, Tref. Writing an energy balance gives
$$M c_p \frac{dT}{dt} = F_{in}\, c_p (T_{in} - T_{ref}) - F_{out}\, c_p\, \Delta T + Q - q \tag{3.19}$$
where Q is the heater power input (kW) and q is the heat loss (kW). These two equations (Eqns. 3.12
and 3.19) are coupled. This means that a change in the mass in the tank, M, will affect the temperature
in the tank, T (but not the other way around). This is called one-way coupling.
For most purposes, this model would be quite adequate. But you may decide that the heat loss q
to the surroundings is not constant and that the variation is significant. A more realistic approximation
of the heat loss would be to assume that it is proportional to the temperature difference between
the vessel and the ambient (room) temperature, Troom. Thus q = k(T − Troom). Note that the heat
loss can now be either positive or negative. You may also decide that the heat capacity, cp, and
density, ρ, of water are functions of temperature. Now you replace these constants with the functions
cp(T) and ρ(T). These functions are tabulated in the steam tables. The more complete model
would then be
would then be
dh
Fin Fout
=
dt
(T )A
dT
A(T )
= Fin cp (T ) (Tin T ref ) Fout cp (T )T + Q k (T Troom )
dt
(3.20)
Caveat emptor
Finally we should always state under what conditions the model is valid and invalid. For this
example we should note that the vessel temperature should not vary outside the range 0 °C < T <
100 °C, since the enthalpy equation does not account for the latent heat of freezing or boiling. We
should also note that h, Fin, Fout, M, A, k, cp, ρ are all assumed positive.
As a summary, and as a check on the completeness of the model, it is wise to list all the variables,
and state whether each is a dependent or independent variable, whether it is a dynamic
or constant variable, or whether it is a parameter. The degrees of freedom for a well posed problem should be zero. That is, the number of unknowns should equal the number of equations.
Incomplete models and/or bad assumptions can give very misleading and bad results.
A great example highlighting the dangers of extrapolation from obviously poor models is the bi-annual UK government forecast of Britain's economic performance. This performance, or growth,
is defined as the change in the Gross Domestic Product (GDP), which is the total amount of goods
and services produced in the country that year. Figure 3.3 plots the actual growth over the last
few years (solid line descending), compared with the forecasts given by the Chancellor of the Exchequer (dotted lines optimistically ascending). I obtained this information from a New Scientist
article provocatively titled Why the Chancellor is always wrong, [51]. Looking at this performance,
it is easy to be cynical about governments using enormous resources to model things that are
essentially so complicated as to be unmodellable, and then producing results that are politically
advantageous.
3.2.2 A continuously-stirred tank reactor
Tank reactors are a common unit operation in chemical processing. A model presented in [84]
considers a simple A → B reaction with a cooling jacket to adjust the temperature, and hence the
reaction rate, as shown in Fig. 3.4.
The reactor model has two states: the concentration of compound A, given the symbol Ca and measured in mol/m³, and the temperature of the reaction vessel liquid, T, measured in K. The manipulated variables are the cooling jacket water temperature, Tc, the temperature of the feed, Tf, and the concentration of the feed, Caf.
[Figure 3.3: The actual UK economic growth (solid line, descending) compared with the official forecasts (dotted lines, optimistically ascending) over 1989–1993.]
[Figure 3.4: A continuously stirred tank reactor (CSTR) with feed (Tf, Caf), coolant input Tc, and product stream (T, Ca).]
The model equations are

$$\frac{dC_a}{dt} = \frac{q}{V}(C_{af} - C_a) - k_0 \exp\left(-\frac{E}{RT}\right)C_a \tag{3.21}$$

$$\frac{dT}{dt} = \frac{q}{V}(T_f - T) + \frac{-\Delta H}{\rho C_p}\, k_0 \exp\left(-\frac{E}{RT}\right)C_a + \frac{UA}{V\rho C_p}(T_c - T) \tag{3.22}$$
(3.22)
xss =
T
Ca
=
ss
324.5
0.8772
Tc
300
def
uss = Tf = 350
Caf ss
1
(3.23)
The values of the model parameters such as ρ, Cp etc. are given in Table 3.2.
Note: At a jacket temperature of Tc = 305 K, the reactor model has an oscillatory response. The
oscillations are characterised by reaction run-away with a temperature spike. When the concentration drops to a low value, the reactor cools until the concentration builds and there is another
run-away reaction.

Table 3.2: Parameters of the CSTR model

  name                                     variable         unit
  Volumetric flowrate                      q = 100          m³/sec
  Volume of CSTR                           V = 100          m³
  Density of A-B mixture                   ρ = 1000         kg/m³
  Heat capacity of A-B mixture             Cp = 0.239       J/kg-K
  Heat of reaction for A → B               −ΔH = 5×10⁴      J/mol
  Activation energy term                   E/R = 8750       K
  Pre-exponential factor                   k0 = 7.2×10¹⁰    1/sec
  Overall heat transfer coefficient-area   UA = 5×10⁴       W/K
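A minimal simulation sketch of Eqns. 3.21–3.22 (not a listing from the text), using the parameter values of Table 3.2, the steady state of Eqn. 3.23, and the jacket temperature Tc = 305 K mentioned above, is:

q = 100; V = 100; rho = 1000; Cp = 0.239;            % parameters from Table 3.2
mdH = 5e4; EoR = 8750; k0 = 7.2e10; UA = 5e4;        % mdH is −ΔH in J/mol
Tc = 305; Tf = 350; Caf = 1;                         % inputs, jacket warm enough to oscillate
f = @(t,x) [q/V*(Caf - x(1)) - k0*exp(-EoR/x(2))*x(1); ...           % dCa/dt, Eqn. 3.21
            q/V*(Tf - x(2)) + mdH/(rho*Cp)*k0*exp(-EoR/x(2))*x(1) ...% dT/dt, Eqn. 3.22
            + UA/(V*rho*Cp)*(Tc - x(2))];
[t,x] = ode45(f, [0 10], [0.8772; 324.5]);           % start near the steady state of Eqn. 3.23
plot(t,x(:,1))                                       % concentration dips at each run-away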
3.2.3 A forced circulation evaporator
Evaporators such as illustrated in Fig. 3.5 are used to concentrate fruit juices, caustic soda, alumina and many other mixtures of a solvent and a solute. The incentive for modelling such unit
operations is that the model can provide insight towards better operating procedures or future
design alternatives. A relatively simple dynamic model of a forced circulation evaporator developed in [147, chapter 2] is reproduced here as a test process for a wide variety of control techniques of interest to chemical engineers. Other similar evaporator models have been reported
in the literature; [66] documents research over a number of years at the University of Alberta
on a pilot scale double effect evaporator, and [42] describes a linearised pilot scale double effect
evaporator used in multivariable optimal self-tuning control studies.
The nonlinear evaporator model can be linearised into a state space form with the variables described in Table 3.3. The parameters of the linearised state space model are given in Appendix D.1.
Table 3.3: The important variables in the forced circulation evaporator from Fig. 3.5.

  type         name               variable   unit      range
  State        level              L2         m         0–2
               steam pressure     P2         kPa       unspecified
               product conc       x2         %         0–100
  Manipulated  product flow       F2         kg/min    0–10
               steam pressure     P100       kPa       100–350
               c/w flow           F200       kg/min    0–300
  Disturbance  circulating flow   F3         kg/min    30–70
               feed flow          F1         kg/min    5–15
               feed composition   x1         %         0–10
               feed temperature   T1         °C        20–60
               c/w temperature    T200       °C        15–35
The distinguishing characteristics of this evaporator system from a control point of view are that the detailed plant model is nonlinear, the model is non-square (the number of manipulated variables,
u, does not equal the number of state variables, x), and that one of the states, level, is an integrator. The dynamic model is both observable and controllable, which means that by measuring the
outputs only, we can, at least in theory, control all the important state variables by changing the
inputs.
[Figure 3.5: A forced circulation evaporator from [147, p7] showing state (black), manipulated (blue) and disturbance variables (green). See also Table 3.3.]
Problem 3.1
1. Look at the development of the nonlinear model given in [147, chapter 2].
What extensions to the model would you suggest? Particularly consider the assumption of
constant mass (M = 50kg) in the evaporator circuit and the observation that the level in the
separator changes. Do you think this change will make a significant difference?
2. Describe in detail what tests you would perform experimentally to verify this model if you
had an actual evaporator available.
3. Construct a MATLAB simulation of the nonlinear evaporator. The relevant equations can be
found in [147, Chpt 2].
3.2.4 Distillation columns
Distillation columns, which are used to separate mixtures of components with different vapour pressures into
almost pure components, are an important chemical unit operation. The columns are expensive
to manufacture, and the running costs are high owing to the high heating requirement. Hence
there is much incentive to model them with the aim of operating them efficiently. Schematics of two
simple columns are given in Figures 3.6 and ??.
Wood and Berry experimentally modelled a 9 inch diameter, 8-tray binary distillation column in
[205] that separated methanol from water as shown in Fig. 3.6. The transfer function model they
derived by step testing a real column has been extensively studied, although some authors, such
as [187], point out that the practical impact of all these studies has been, in all honesty, minimal.
Other distillation column models that are not obtained experimentally are called rigorous models
and are based on fundamental physical and chemical relations. A more recent distillation column
model, developed by Shell and used as a test model, is discussed in [139].
[Figure 3.6: Schematic of a distillation column with feed (F, xF), reflux R, distillate (D, xD), reboiler steam S and bottoms (B, xB).]
Columns such as given in Fig. 3.6 typically have at least 5 control valves, but because the hydrodynamics and the pressure loops are much faster than the composition dynamics, we can use the
bottoms exit valve to control the level at the bottom of the column, the distillate exit valve
to control the level in the separator, and the condenser cooling water valve to control the column
pressure. This leaves the two remaining valves (reboiler steam and reflux) to control the top and
bottoms compositions.
The Wood-Berry model is written as a matrix of Laplace transforms in deviation variables
18.9e3s
12.8es
16.7s + 1
21s + 1
y=
6.6e7s
19.4e3s
10.9s + 1 14.4s + 1
{z
|
G(s)
3.8e8s
u + 14.9s + 1 d
4.9e3.4s
13.2s + 1
}
(3.24)
where the inputs u1 and u2 are the reflux and reboiler steam flowrates (in lb/min) respectively, the
outputs y1 and y2 are the weight % of methanol in the distillate and bottoms, and the disturbance
variable, d, is the feed flowrate. The time scale is in minutes.
It is evident from the transfer function model structure in Eqn. 3.24 that the plant will be interacting and that all the transfer functions have some time delay associated with them; the
off-diagonal terms have the slightly larger time delays. Both these characteristics will make controlling the column difficult.
In MATLAB, we can construct such a matrix of transfer functions (ignoring the feed) with a matrix
of deadtimes as follows:

s = tf('s');
G = [12.8*exp(-s)/(16.7*s+1), -18.9*exp(-3*s)/(21*s+1); ...
     6.6*exp(-7*s)/(10.9*s+1), -19.4*exp(-3*s)/(14.4*s+1)]

which displays, for example, the two bottoms elements as

bottoms:
               6.6
  exp(-7*s) * ----------
              10.9 s + 1

bottoms:
              -19.4
  exp(-3*s) * ----------
              14.4 s + 1
Once we have formulated the transfer function model G, we can perform the usual types of analysis, such as step tests, as shown in Fig. 3.7. An alternative implementation of the column model
in SIMULINK is shown in Fig. 3.8.
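For instance, the step responses of Fig. 3.7 follow directly from the G constructed above (a one-line sketch):

step(G, 100)   % step responses over 100 minutes, cf. Fig. 3.7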
[Figure 3.7: Step responses of the Wood-Berry column model, Eqn. 3.24, from the reflux and steam inputs to the distillate and bottoms compositions.]

[Figure 3.8: Wood-Berry column model implemented in SIMULINK. (a) The masked column subsystem with reflux and steam inputs, and distillate and bottoms outputs. (b) Inside the Wood-Berry column mask: four first-order transfer functions with transport delays. Compare this with Eqn. 3.24.]
Another, larger, distillation column model with a variable side stream draw-off from [153] is
$$\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} \dfrac{0.66\,e^{-2.6s}}{6.7s+1} & \dfrac{-0.61\,e^{-3.5s}}{8.64s+1} & \dfrac{-0.0049\,e^{-s}}{9.06s+1} \\[0.6em] \dfrac{1.11\,e^{-6.5s}}{3.25s+1} & \dfrac{-2.36\,e^{-3s}}{5s+1} & \dfrac{-0.012\,e^{-1.2s}}{7.09s+1} \\[0.6em] \dfrac{-34.68\,e^{-9.2s}}{8.15s+1} & \dfrac{46.2\,e^{-9.4s}}{10.9s+1} & \dfrac{0.87(11.61s+1)\,e^{-2.6s}}{(3.89s+1)(18.8s+1)} \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} + \begin{bmatrix} \dfrac{0.14\,e^{-1.2s}}{6.2s+1} & \dfrac{-0.0011(26.32s+1)\,e^{-2.66s}}{(3.89s+1)(14.63s+1)} \\[0.6em] \dfrac{0.53\,e^{-10.5s}}{6.9s+1} & \dfrac{-0.0032(19.62s+1)\,e^{-3.44s}}{(7.29s+1)(8.94s+1)} \\[0.6em] \dfrac{-11.54\,e^{-0.6s}}{7.01s+1} & \dfrac{0.32\,e^{-2.6s}}{7.76s+1} \end{bmatrix} \begin{bmatrix} d_1 \\ d_2 \end{bmatrix} \tag{3.25}$$
which provides an alternative model for testing multivariable control schemes. In this model
the three outputs are the overhead ethanol mole fraction, the side stream ethanol mole fraction,
and the temperature on tray 19, the three inputs are the reflux flow rate, the side stream product
rate and the reboiler steam pressure, and the two disturbances are the feed flow rate and feed
temperature. This system is sometimes known as the OLMR after the initials of the authors.
Problem 3.2

1. Assuming no disturbances (d = 0), what are the steady state gains of the Wood-Berry column model, Eqn. 3.24? Use the final value theorem.

2. Sketch the response for y1 and y2 for:
(a) a change in reflux flow of +0.02;
(b) a change in the reflux flow of −0.02 and a change in reboiler steam flow of +0.025 simultaneously.

3. Modify the SIMULINK simulation to incorporate the feed dynamics.
3.2.5 Rigorous distillation column models
Distillation column models are important to chemical engineers involved in the operation and
maintenance of these expensive and complicated units. While the behaviour of an actual multicomponent tower is very complicated, models that assume ideal binary systems are often good
approximations for many columns. We will deal in mole fractions of the more volatile component,
x for liquid, y for vapour, and develop a column model following [132, p69].
A generic simple binary component model of a distillation column such as shown in
Fig. ?? assumes:

1. Equal molar overflow applies (the heats of vapourisation of both components are equal, and
mixing and sensible heats are negligible).

2. The liquid level on every tray remains above the weir height.

3. The relative volatility, α, and the heat of vapourisation are constant. Assuming a constant
relative volatility simplifies the vapour-liquid equilibria (VLE) model to

$$y_n = \frac{\alpha x_n}{1 + (\alpha - 1)x_n} \tag{3.26}$$
[Figure: An N-tray binary distillation column: the feed (F, xF) enters at the feed tray NF; vapour V is generated in the reboiler; the overhead vapour is condensed and collected, providing reflux (R, xD) and distillate (D, xD); the bottoms (B, xB) leave the reboiler.]
For a generic tray n, the total material and light-component balances are

$$\frac{dM_n}{dt} = L_{n+1} - L_n \tag{3.27}$$

$$\frac{d(M_n x_n)}{dt} = L_{n+1}x_{n+1} - L_n x_n + V(y_{n-1} - y_n) \tag{3.28}$$
In summary, the numbers of variables and equations for the distillation column model are:

  VARIABLES                                       #        EQUATIONS
  Tray compositions, xn, yn                      2NT       NT, NT
  Tray liquid flows, Ln                           NT       NT + 1
  Tray liquid holdups, Mn                         NT       NT
  Reflux comp., flow & hold up, xD, R, D, MD      4        2
  Base comp., flow & hold up, xB, yB, V, B, MB    5        2
  Total                                         4NT + 9   4NT + 7
This leaves two degrees of freedom. From a control point of view we normally fix the boilup
rate, Q̇, and the reflux flow rate (or ratio) with some sort of controller,

$$R = f(x_D), \qquad V \propto \dot{Q} = f(x_B)$$
Our dynamic model of a binary distillation column is relatively large, with two inputs (R, V) and two
outputs (xB, xD). With a composition and a holdup for each tray, the reboiler and the condensate drum, a typical column of about 20 trays gives around
n = 44 states, which means 44 ordinary differential equations. However the (linearised) Jacobian
for this system, while a large 44 × 44 matrix, is sparse. In our case the percentage of non-zero
elements, or sparsity, is 9%.
$$\dot{\mathbf{x}} = \underbrace{\mathbf{A}}_{44\times 44}\,\mathbf{x} + \underbrace{\mathbf{B}}_{44\times 2}\,\mathbf{u}$$
The structure of the A and B matrices is shown, using the spy command, in Fig. 3.10.
There are many other examples of distillation column models around (e.g. [142, pp 459]). This
model has about 40 trays, and assumes a binary mixture at constant pressure and constant relative
volatility.
Simulation of the column model
The simple nonlinear dynamic simulation of the binary distillation column model can be used in
a number of ways, including investigating the openloop response and interactions, and quantifying the
extent of the nonlinearities. It can be used to develop simple linear approximate transfer function
models, or we could pose "What if?" type questions such as quantifying the response given feed
disturbances.

An openloop simulation of a distillation column gives some idea of the dominant time constants
and the possible nonlinearities. Fig. 3.11 shows an example of one such simulation where we step
change the reflux from R = 128 to R = 128 + 0.2, and the reboiler vapour flow from V = 178 to V = 178 + 0.2.
From Fig. 3.11 we can note that the open loop step results are overdamped, and that the steady-state
gains are very similar in magnitude.
105
0
5
ODE equations
10
15
20
25
30
35
40
45
10
20
nz = 25
30
40
Figure 3.10: The incidence of the Jacobian (blue) and B matrix (red) for the ideal binary distillation
column model. Over 90% of the elements in the matrix are zero.
[Figure 3.11: Open-loop responses of the distillate and base concentrations to step changes in the reflux and in the vapour boilup.]
Furthermore the response looks very like a 2 × 2 matrix of second order overdamped transfer functions,

$$\begin{bmatrix} x_D(s) \\ x_B(s) \end{bmatrix} = \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix} \begin{bmatrix} R(s) \\ V(s) \end{bmatrix}$$
So it is natural to wonder at this point if it is possible to approximate this response with a low-order model rather than requiring the entire 44 states and associated nonlinear differential equations.
Controlled simulation
A closed loop simulation of the 20 tray binary distillation column with a feed disturbance from
xF = 0.5 to xF = 0.54 at t = 10 minutes is given in Fig. 3.12. Note that while we have plotted the
liquid concentrations on all the 20 trays, we are really only interested in the very top (distillate)
and bottom (reboiler) trends. However simply to obtain the top and bottom trends, we are forced
to calculate all the tray concentrations in this rigorous model. We can look more closely in Fig. 3.13
at the distillate and base compositions to see if they really are in control, namely at xB = 0.02 and xD = 0.98.

[Figure 3.12: Closed-loop response of the liquid concentrations on all 20 trays, together with the manipulated reflux R and vapour V flows, following the feed disturbance at t = 10 minutes.]
Distillation columns are well known to interact, and these interactions cause difficulties in tuning.
We will simulate in Fig. 3.14 a step change in the distillate setpoint from xD = 0.98 to xD = 0.985
at t = 10 min and a step change in the bottoms concentration at 150 minutes. The interactions
are evident in the base composition transients owing to the changing distillate composition and
vice versa. These interactions can be minimised either by tightly tuning one of the loops, consequently leaving the other loose, or by using a steady-state or dynamic decoupler, or even a
multivariable controller.
[Figure 3.13: A closer look at the distillate and bottoms compositions, nominally in control at xD = 0.98 and xB = 0.02.]
[Figure 3.14: Distillation interactions are evident when we step change the distillate and bottoms setpoints independently.]

3.2.6 Interaction and the Relative Gain Array
Bristol's relative gain array (RGA) quantifies this steady-state interaction. Mathematically the relative gain array, Λ, is formed by multiplying together Gss and Gss⁻ᵀ
elementwise,³

$$\boldsymbol{\Lambda} = \mathbf{G}_{ss} \otimes \mathbf{G}_{ss}^{-T} \tag{3.30}$$

where the special symbol ⊗ means to take the Hadamard product (also known as the Schur
product), or simply the elementwise product of two equally dimensioned matrices, as opposed to
the normal matrix multiplication.
In MATLAB, the evaluation of the relative gain array is easy.

>> Gss = [12.8, -18.9; 6.6, -19.4] % Steady-state gain from Eqn. 3.29
>> L = Gss.*inv(Gss)'              % See Eqn. 3.30. Don't forget the dot-times (.*)
L =
    2.0094   -1.0094
   -1.0094    2.0094
Note that all the columns and rows sum to 1.0. We would expect this system to exhibit severe
interactions, although the reverse pairing would be worse.
The usefulness of the RGA is in choosing which manipulated variable should go with which
control variable in a decentralised control scheme. We juggle the manipulated/control variable
pairings until Λ most approaches the identity matrix. This is an important and sensitive topic in
process control and is discussed in more detail in [193, pp494–503], and in [181, p457]. The most
frequently discussed drawback of the relative gain array is that the technique only addresses the
steady state behaviour, and ignores the dynamics. This can lead to poor manipulated/output
pairing in some circumstances where the dynamic interaction is particularly strong. The next
section further illustrates this point.
The dynamic relative gain array
As mentioned above, the RGA is only a steady-state interaction indicator. However we could use
the same idea to generate an interaction matrix, but this time consider the elements of the transfer
function matrices as a function of frequency by substituting s = jω. This now means that the
dynamic relative gain array, Λ(ω), is a matrix whose elements are functions of ω as opposed to
being constants.
Consider the (2 × 2) system from [183],

$$\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \underbrace{\begin{bmatrix} \dfrac{2.5\,e^{-5s}}{(15s+1)(2s+1)} & \dfrac{5}{4s+1} \\[0.6em] \dfrac{1}{3s+1} & \dfrac{-4\,e^{-5s}}{20s+1} \end{bmatrix}}_{G(s)} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} \tag{3.31}$$
which has as its distinguishing feature significant time delays on the diagonal elements. We could
compute the dynamic relative gain array matrix using the definition

$$\boldsymbol{\Lambda}(s) = G(s) \otimes G(s)^{-T} \tag{3.32}$$

then substituting jω for s, and perhaps using the symbolic toolbox to help us with the possibly
unwieldy algebra.
³ Note that this does not imply that Λ = Gss Gss⁻ᵀ, i.e. without the Hadamard product. See [181, p456] for further details.
>> RGA = G.*inv(G')            % Λ(s), Eqn. 3.32
>> DRGA = subs(RGA,'s',1j*w)   % Substitute s = jω
>> abs(subs(DRGA,w,1))         % |Λ(jω)| at ω = 1 rad/s
ans =
    0.0379    0.9793
    0.9793    0.0379
You can see from the numerical values of the elements of Λ at ω = 1 rad/s that this system is
not diagonally dominant at this important frequency. Fig. 3.15 validates this observation.
An alternative numerical way to generate the elements of the DRGA matrix as a function of frequency is to compute the Bode diagram for the multivariable system, extract the current gains,
and then form the RGA from Eqn. 3.30.
Listing 3.2: Computing the dynamic relative gain array numerically as a function of ω. See also
Listing 3.1.
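A sketch of this numerical approach, assuming the model G of Eqn. 3.31 has been built with tf objects as for the Wood-Berry column earlier:

s = tf('s');                               % build G(s) of Eqn. 3.31
G = [2.5*exp(-5*s)/((15*s+1)*(2*s+1)), 5/(4*s+1); ...
     1/(3*s+1), -4*exp(-5*s)/(20*s+1)];
w = logspace(-3,1,200);                    % frequency vector, ω
Gw = freqresp(G,w);                        % 2×2×200 array of frequency responses
lam = zeros(size(Gw));
for k = 1:length(w)                        % Λ(jω) elementwise, Eqn. 3.30
    lam(:,:,k) = Gw(:,:,k).*inv(Gw(:,:,k)).';
end
semilogx(w, abs(squeeze(lam(1,1,:))), w, abs(squeeze(lam(1,2,:))))  % cf. Fig. 3.15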
The trends of the diagonal and off-diagonal elements of Λ are plotted in Fig. 3.15. What is interesting about this example is that if we only consider the steady-state case, ω = 0, then Λ is
diagonally dominant, and our pairing looks suitable. However what we should really be concentrating on are the values around the corner frequency at ω ≈ 0.1, where the off-diagonal
terms start to dominate.

For comparison, Fig. 3.16 shows the dynamic RGA for the (3 × 3) OLMR distillation column model
from Eqn. 3.25. In this case the off-diagonal elements do not dominate at any frequency.

[Figure 3.15: The diagonal (λ1,1) and off-diagonal (λ1,2) elements of the RGA matrix as a function of frequency.]

[Figure 3.16: The diagonal (λ11, λ22, λ33) and off-diagonal (λ12, λ13, λ23) elements of the dynamic RGA for the OLMR column model as a function of frequency.]
3.3 Parameter estimation
In most cases of industrial importance, the models are never completely white-box, meaning that
there will always be some parameters that need to be fitted, or at least fine-tuned, to experimental
data. A common performance index to be minimised when fitting parameters is the sum of squared
errors between the N measurements and the corresponding model predictions,

$$J = \sum_{i=1}^{N} (y_i - \hat{y}_i)^2 \tag{3.33}$$
where the ith model prediction, ŷi, is a function of the model parameters, θ, and the input data,

$$\hat{y}_i = f(\boldsymbol{\theta}, \mathbf{x}_i) \tag{3.34}$$

We want to find the best set of parameters, θ*, or in other words the set of θ that minimises J in
Eqn. 3.33,

$$\boldsymbol{\theta}^* = \arg\min_{\boldsymbol{\theta}} \left\{ \sum_{i=1}^{N} (y_i - \hat{y}_i)^2 \right\} \tag{3.35}$$
The arg min term in Eqn. 3.35 can be read as "the argument which minimises ...", since we
are not particularly interested in the actual value of the performance function at the optimum,
J*, but rather the value of the parameters at the optimum, θ*. This least-squares estimation is an
intuitive performance index, and has certain attractive analytical properties. The German mathematician Gauss is credited with popularising this approach, and indeed with using it for analysing
data taken while surveying Germany and making astronomical observations. Note however that
other objectives than Eqn. 3.33 are possible, such as minimising the sum of the
absolute values of the deviations, or minimising the single maximum deviation. Both these latter
approaches have seen a resurgence of interest over the last few years since computer tools have
enabled investigators to by-pass the difficulty in analysis.
There are many ways to search for the parameters, and any statistical text book will cover these.
However the least squares approach is popular because it is simple, requires few prior assumptions, and typically gives good results.
3.3.1 Least-squares polynomial regression
A popular way to fit smooth curves to experimental data is to find a polynomial that passes
through the cloud of experimental data. This is known as polynomial regression. The nth order
polynomial to be regressed is

$$y = \theta_n x^n + \theta_{n-1} x^{n-1} + \cdots + \theta_1 x + \theta_0 \tag{3.36}$$

where we try different values of the (n + 1) parameters in the vector θ until the calculated dependent variable ŷ is close to the actual measured value y. Mathematically the vector of parameters
is obtained by solving a least squares optimisation problem.
Given a set of parameters, we can predict the ith model observation using

$$\hat{y}_i = \begin{bmatrix} x_i^n & x_i^{n-1} & \cdots & x_i & 1 \end{bmatrix} \boldsymbol{\theta} = \mathbf{x}_i^T \boldsymbol{\theta} \tag{3.37}$$
where xi is the independent (column) data vector for the ith observation. If all N observations are
stacked vertically together, we obtain the matrix system

$$\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{bmatrix} = \underbrace{\begin{bmatrix} x_1^n & x_1^{n-1} & \cdots & x_1 & 1 \\ x_2^n & x_2^{n-1} & \cdots & x_2 & 1 \\ \vdots & & & & \vdots \\ x_N^n & x_N^{n-1} & \cdots & x_N & 1 \end{bmatrix}}_{\mathbf{X}} \boldsymbol{\theta} \tag{3.38}$$
or y = Xθ in a more compact matrix notation. The matrix comprised of the stacked rows of
measured independent data, X, in Eqn. 3.38 is called the Vandermonde matrix or data matrix and
can be easily constructed in MATLAB using the vander command, although it is well known that
this matrix can become very ill-conditioned.
We can search for the parameters in a number of ways, but finding the parameters that minimise
the sum of squared errors is a common and easy approach. In matrix notation, the error vector is
defined as

$$\boldsymbol{\epsilon} \overset{\text{def}}{=} \mathbf{y} - \mathbf{X}\boldsymbol{\theta} \tag{3.39}$$
and in this case our scalar objective function that we want to minimise (sometimes called the cost
function) is the sum of squared errors,

$$J = \boldsymbol{\epsilon}^T\boldsymbol{\epsilon} = (\mathbf{y} - \mathbf{X}\boldsymbol{\theta})^T(\mathbf{y} - \mathbf{X}\boldsymbol{\theta}) = \mathbf{y}^T\mathbf{y} - \boldsymbol{\theta}^T\mathbf{X}^T\mathbf{y} - \mathbf{y}^T\mathbf{X}\boldsymbol{\theta} + \boldsymbol{\theta}^T\mathbf{X}^T\mathbf{X}\boldsymbol{\theta} \tag{3.40}$$

We want to choose θ such that J is minimised, i.e. a stationary point, thus we can set the partial
derivatives of J with respect to the parameters to the zero row vector,⁴

$$\frac{\partial J}{\partial \boldsymbol{\theta}} = \mathbf{0}^T = -2\mathbf{y}^T\mathbf{X} + 2\boldsymbol{\theta}^T\mathbf{X}^T\mathbf{X} \tag{3.41}$$

giving the optimum parameters

$$\boldsymbol{\theta}^* = \underbrace{\left(\mathbf{X}^T\mathbf{X}\right)^{-1}\mathbf{X}^T}_{\text{pseudo-inverse}}\,\mathbf{y} \tag{3.42}$$
As a consequence of the fact that we carefully chose our model structure in Eqn. 3.36 to be linear
in the parameters, θ, the solution given by Eqn. 3.42 is analytic and therefore very reliable,
and straightforward to implement, as opposed to nonlinear regression which requires iterative
solution techniques.
This method of fitting a polynomial through experimental data is called polynomial least-squares
regression. In general, the number of measurements must be greater than (or equal to) the number
of parameters. Even with that proviso, the data matrix XᵀX can get very ill-conditioned, and
hence it becomes hard to invert in a satisfactory manner. This problem occurs more often when
high order polynomials are used or when you are trying to over-parameterise the problem. One
solution to this problem is given in section 3.3.2.
The matrix (XᵀX)⁻¹Xᵀ is called the left pseudo-inverse of X, and is sometimes denoted X⁺.
The pseudo-inverse is a generalised inverse applicable even for non-square matrices and is discussed in [152, p928]. MATLAB can compute (XᵀX)⁻¹Xᵀ with the pseudo-inverse command,
pinv, enabling the parameters to be evaluated simply by typing theta = pinv(X)*y, although it is more efficient to simply use the backslash command, theta = X\y.
An example
Tabulated below is the density of air as a function of temperature. We wish to fit a smooth
quadratic curve to this data.
  Temperature [°C]      −100   −50    0     60     100    160    250    350
  Air density [kg/m³]   1.98   1.53   1.30  1.067  0.946  0.815  0.675  0.566
To compute the three model parameters from the air density data we can run the following m-file.
Listing 3.3: Curve fitting using polynomial least-squares
4 See also section 9.3.1. Ogata, [150, p938], also gives some helpful rules when using matrix-vector differentials, but
uses a slightly different nomenclature standard.
[Figure 3.17: The fitted quadratic compared with the raw air density data.]
T = [-100, -50, 0, 60, 100, 160, 250, 350]';                   % Temperature, T
rho = [1.98, 1.53, 1.30, 1.067, 0.946, 0.815, 0.675, 0.566]';  % Air density, ρ
X = vander(T); X = X(:,end-2:end);  % data matrix for a quadratic
theta = X\rho;                      % solve for the parameters θ

% validate with a plot
Ti = linspace(-130,400);
rho_pred = polyval(theta,Ti);
plot(T,rho,'o',Ti,rho_pred,'r-')    % See Fig. 3.17.
The resulting curve is compared with the experimental data in Fig. 3.17.
In the example above, we constructed the Vandermonde data matrix explicitly, and solved for the
parameters using the pseudo-inverse. In practice, however, we would normally simply use the
built-in MATLAB command polyfit which essentially does the same thing.
3.3.2 Improvements
There are many extensions to this simple multi-linear regression algorithm that try to avoid the
poor numerical properties of the scheme given above. If you consider pure computational speed,
then solving a set of linear equations is about twice as fast as inverting a matrix. Therefore instead
of Eqn. 3.42, the equivalent

$$\mathbf{X}^T\mathbf{X}\,\boldsymbol{\theta} = \mathbf{X}^T\mathbf{y} \tag{3.43}$$

is the preferred scheme to calculate θ. MATLAB's backslash operator, \, used in the example
above follows this scheme internally. We write it as if we expect an inverse, but it actually solves
a system of linear equations.
A numerical technique that uses singular value decomposition (SVD) gives better accuracy (for
the same number of bits to represent the data) than just applying Eqn. 3.42, [161, 178]. Singular
values stem from the property that we can decompose any matrix T into the product of two
orthogonal matrices (U, V) and a diagonal matrix Σ,

$$\mathbf{T} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T \tag{3.44}$$

where

$$\mathbf{U}^T\mathbf{U} = \mathbf{I} = \mathbf{U}\mathbf{U}^T \quad \text{and} \quad \mathbf{V}\mathbf{V}^T = \mathbf{I} = \mathbf{V}^T\mathbf{V} \tag{3.45}$$
The diagonal matrix Σ consists of the square roots of the eigenvalues of TᵀT, which are called
the singular values of T, and the number that differ significantly from zero is the rank of T. We
can make one further modification to Eqn. 3.42 by adding a weighting matrix W. This makes the
solution more general, but often in practice one sets W = I. Starting with Eqn. 3.43 including the
weighting matrix W⁻¹ = GᵀG,

$$\mathbf{X}^T\mathbf{W}^{-1}\mathbf{X}\,\boldsymbol{\theta} = \mathbf{X}^T\mathbf{W}^{-1}\mathbf{y} \tag{3.46}$$

$$\mathbf{X}^T\mathbf{G}^T\mathbf{G}\mathbf{X}\,\boldsymbol{\theta} = \mathbf{X}^T\mathbf{G}^T\mathbf{G}\mathbf{y} \tag{3.47}$$

and defining $\mathbf{T} \overset{\text{def}}{=} \mathbf{G}\mathbf{X}$ and $\mathbf{z} \overset{\text{def}}{=} \mathbf{G}\mathbf{y}$, we obtain

$$\mathbf{T}^T\mathbf{T}\,\boldsymbol{\theta} = \mathbf{T}^T\mathbf{z} \tag{3.48}$$
Now we take the singular value decomposition (SVD) of T in Eqn. 3.48, giving

$$\mathbf{V}\boldsymbol{\Sigma}\mathbf{U}^T\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T\boldsymbol{\theta} = \mathbf{V}\boldsymbol{\Sigma}\mathbf{U}^T\mathbf{z} \tag{3.49}$$

$$\mathbf{V}\boldsymbol{\Sigma}^2\mathbf{V}^T\boldsymbol{\theta} = \mathbf{V}\boldsymbol{\Sigma}\mathbf{U}^T\mathbf{z} \tag{3.50}$$

and, using the identity $\mathbf{V}\mathbf{V}^T = \mathbf{I}$,

$$\boldsymbol{\Sigma}\mathbf{V}^T\boldsymbol{\theta} = \mathbf{U}^T\mathbf{z} \tag{3.51, 3.52}$$

$$\boldsymbol{\theta} = \underbrace{\mathbf{V}\boldsymbol{\Sigma}^{-1}\mathbf{U}^T}_{\text{pseudo-inverse}}\,\mathbf{z} \tag{3.53}$$
The inverse of Σ is simply the inverse of each of the individual diagonal (and non-zero) elements,
since it is diagonal. The key point here is that nowhere in the algorithm did we need to calculate the
possibly ill-conditioned matrix TᵀT. Consequently we should always use Eqn. 3.53 in preference
to Eqn. 3.42 due to its more robust numerical properties, as shown in Listing 3.4.
Listing 3.4: Polynomial least-squares using singular value decomposition. This routine follows
from, and provides an alternative to, Listing 3.3.

[U,S,V] = svd(X,0);       % Use the economy-sized SVD, UΣVᵀ = X
theta2 = V*(S\U')*rho     % θ = VΣ⁻¹Uᵀy, Eqn. 3.53.
It does not take much for the troublesome XᵀX matrix to get ill-conditioned. Consider the temperature/density data for air from Fig. 3.17, but in this case we will use temperature in degrees
Kelvin (as opposed to Celsius), and we will take data over a slightly larger range of temperatures.
This is a reasonable experimental request, and so we should expect to be able to fit a polynomial
to this new data in much the same way we did in Listing 3.3.
However, as shown in Fig. 3.18, we cannot reliably fit a fifth-order polynomial to this data set using standard
least-squares, although we can reliably fit such a 5th-order polynomial using the
SVD strategy of Eqn. 3.53. Note that MATLAB's polyfit uses the reliable SVD strategy internally.
A further refined regression technique is termed partial least squares (PLS), and is summarised
in [74]. Typically these schemes decompose the data matrix X into other matrices that are
ordered in some manner that roughly corresponds to information. The matrices with the least
information (and typically most noise) are omitted, and the subsequent regression uses only the
remaining information.
The example on page 112 used the least squares method for estimating the parameters of an
algebraic equation. However the procedure for estimating a dynamic equation remains the same.
This will be demonstrated later in §6.7.
[Figure 3.18: Fitting a 5th-order polynomial to the air density data with temperature in degrees Kelvin. The standard least-squares solution, θ = (XᵀX)⁻¹Xᵀy, fails on this data, while the SVD solution, θ = VΣ⁻¹Uᵀy, fits reliably.]
Problem 3.3 Accuracy and numerical stability are very important when we are dealing with computed solutions. Suppose we wish to invert

$$\mathbf{A} = \begin{bmatrix} 1 & 1 \\ 1 & 1+\epsilon \end{bmatrix}$$

where ε is a small quantity (such as nearly the machine eps).

1. What is the (algebraic) inverse of A?

2. Obviously if ε = 0 we will have trouble inverting since A is singular, but what does the
pseudo-inverse, A⁺, converge to as ε → 0?
3.3.3 Identification of nonlinear models
If the model equation is nonlinear in the parameters, then the solution procedure to find the optimum parameters requires a nonlinear optimiser rather than the relatively robust explicit relation
given by Eqn. 3.42 or equivalent. Nonlinear optimisers are usually based on iterative schemes,
additionally often requiring good initial parameter estimates, and even then may quite possibly
fail to converge to a sensible solution. There are many algorithms for nonlinear optimisation
problems including exhaustive search, the simplex method due to Nelder-Mead, and gradient
methods.
MATLAB provides a simple unconstrained optimiser, fminsearch, which uses the Nelder-Mead
simplex algorithm. A collection of more robust algorithms, including algorithms to optimise constrained
systems possibly with integer constraints, is the OPTI toolbox.
The following biochemical example illustrates the solution of a nonlinear algebraic optimisation
problem. Many biological reactions are of the Michaelis-Menten form, where the cell number y(t)
at time t is given by the relation

$$y(t) = \frac{\alpha t}{\beta t + 1} \tag{3.54}$$

In this model, time is the independent variable, y(t) is the observed variable, and the two parameters are

$$\boldsymbol{\theta} \overset{\text{def}}{=} \begin{bmatrix} \alpha & \beta \end{bmatrix}^T$$
Suppose we have some experimental data for a particular reaction given as
  time, t        0    0.2   0.5   1     1.5   2     2.5
  cell count, y  0    1.2   2.1   2.43  2.52  2.58  2.62
where we wish to estimate the parameters α and β in Eqn. 3.54 using nonlinear optimisation techniques. Note that the model equation (3.54) is nonlinear in the parameters. While it is not possible
to write the equation in a form linear in the parameters as done in §3.3.1, it is possible to linearise the
equation (Lineweaver-Burk plot) by transforming it. However this will also transform and bias the errors around the parameters. This is not good practice as it introduces bias
in the estimates, but it does give good starting estimates for the nonlinear estimator. Transforming
Eqn. 3.54 we get

$$y = \frac{\alpha}{\beta} - \frac{1}{\beta}\,\frac{y}{t}$$

Thus plotting y against y/t should result in a straight line with an intercept of α/β and a slope of
−1/β.
[Figure: The transformed data, y plotted against y/t, falls on an approximately straight line.]

Ignoring the first and possibly the second data points, we use the polyfit function to fit a
line, giving a slope of −0.2686 and an intercept of 2.98. This corresponds to α = 11.1 and β = 3.723.
Now we can refine these estimates using the fminsearch Nelder-Mead nonlinear optimiser.
We must first construct a small objective function that evaluates the sum of squared
errors for a given set of trial parameters and the experimental data. This is succinctly coded as an
anonymous function in Listing 3.5 following.
Since we require the experimental data variables y and t in the objective function, but we are not
optimising with respect to them, we must pass these variables as additional parameters to the
optimiser.
Listing 3.5: Curve fitting using a generic nonlinear optimiser
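A minimal sketch of such a listing, assuming the anonymous sum-of-squares function described above and the starting estimates from the linearisation:

t = [0, 0.2, 0.5, 1, 1.5, 2, 2.5]';          % experimental time data
y = [0, 1.2, 2.1, 2.43, 2.52, 2.58, 2.62]';  % experimental cell counts
sse = @(p,t,y) sum((y - p(1)*t./(p(2)*t+1)).^2);   % Σ(y − ŷ)² for the model of Eqn. 3.54
theta = fminsearch(@(p) sse(p,t,y), [11.1; 3.723]) % refine α and β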
A comparison of both the experimental data, •, and the model's predictions is given in Fig. 3.19.
[Figure 3.19: The fitted (dashed) and experimental (•) data for a bio-chemical reaction. The asymptotic final cell count, α/β, is given by the dashed horizontal line.]

Curve fitting using the OPTI optimisation toolbox
The MATLAB Optimisation toolbox contains more sophisticated routines specifically intended
for least-squares curve fitting. Rather than write an objective function to compute the sum of
squares and then subsequently call a generic optimiser as we did in Listing 3.5, we can solve the
problem in a much more direct manner. The opti_lsqcurvefit routine in the
OPTI toolbox is the equivalent that solves least-squares regression problems.
Listing 3.6: Curve fitting using the O PTI optimisation toolbox. (Compare with Listing 3.5.)
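A sketch of this approach, assuming opti_lsqcurvefit shares the calling convention of the Optimisation toolbox routine lsqcurvefit (the data vectors t and y are as before):

f = @(p,t) p(1)*t./(p(2)*t + 1);                 % model, Eqn. 3.54
theta = opti_lsqcurvefit(f, [11.1; 3.723], t, y) % fit directly to the data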
Note that in Listing 3.6 we encapsulate the function in an anonymous function which is then
passed to the least-squares curve fit routine, opti_lsqcurvefit.
Higher dimensional model fitting
Searching for parameters where we have made two independent measurements is just like searching when we have made one independent measurement, except that to plot the results we must
resort to contour or three-dimensional plots. This makes the visualisation a little more difficult,
but changes nothing in the general technique.
We will aim to fit a simple four-parameter, two-variable function to model the compressibility
of water. Water, contrary to what we were taught in school, is compressible, although this is only
noticeable at very high pressures. If we look up the physical properties of compressed water
in steam tables,⁶ we will find something like Table 3.4. Fig. 3.20(a) graphically illustrates the
strong temperature influence on the density compared with pressure. Note how the missing data,
represented by NaNs in MATLAB, is ignored in the plot.
Our model relates the density of water as a function of temperature and pressure. The proposed
⁶ Rogers & Mayhew, Thermodynamic and Transport Properties of Fluids, 3rd Edition, (1980), p11.
Table 3.4: Density of compressed water, ρ (×10³ kg/m³)

                                Temperature, °C
  Pressure (bar)   0.01     100     200     250     300     350     374.15
  100             1.0050  0.9634  0.8711  0.8065  0.7158    —        —
  221.2           1.0111  0.9690  0.8795  0.8183  0.7391  0.6120     —
  500             1.0235  0.9804  0.8969  0.8425  0.7770  0.6930   0.6410
  1000            1.0460  1.0000  0.9242  0.8772  0.8244  0.7610   0.7299
[Figure 3.20: A 2-D model for the density of compressed water: (a) the density surface as a function of temperature and pressure; (b) contours of the fitted model (solid) compared with contours derived from the experimental data (dashed). In (b), the ⊗ mark the experimental data points used to construct the model.]
model structure is

$$\rho = P^k \exp\left(a + \frac{b}{T} + \frac{c}{T^3}\right) \tag{3.55}$$

where the four model parameters,

$$\boldsymbol{\theta} \overset{\text{def}}{=} \begin{bmatrix} a & b & c & k \end{bmatrix}^T, \tag{3.56}$$

need to be determined.
Again the approach is to minimise numerically the sum of the squared errors using an optimiser
such as fminsearch. We can check the results of the optimiser using a contour plot with the
experimental data from Table 3.4 superimposed.
The script file in Listing 3.7 calls the minimiser, which in turn calls the anonymous function
J_rhowat which, given the experimental data and proposed parameters, returns the sum of squared
errors. This particular problem is tricky, since it is difficult to know appropriate starting guesses
for θ, and the missing data must be eliminated before the optimisation. Before embarking on the full
nonlinear minimisation, I first try a linear fitting to obtain good starting estimates for θ. I also
scale the parameters so that the optimiser deals with numbers around unity.
Listing 3.7: Fitting water density as a function of temperature and pressure

rhowat = @(a,P,T) P.^a(1).*exp(a(2) + ...
    1e2*a(3)./T + 1e7*a(4)./T.^3);     % Assumed model ρ = P^k exp(a + b/T + c/T³), Eqn. 3.55
J_rhowat = @(a,P,T,rho) ...
    sum((rhowat(a,P,T) - rho).^2);     % Sum of squared errors, Σ(ρ − ρ̂)²

% Do nonlinear fit from the scaled starting guess theta
theta_opt = fminsearch(@(theta) J_rhowat(theta,Pv,Tv,rhofv),theta);

Here Pv, Tv and rhofv are the vectors of experimental data with the missing values removed, and the parameters are scaled inside rhowat.
Fig. 3.20(b) compares contour plots of the density of water as a function of pressure and temperature derived from the experimental data and from the fitted model. The ⊗ show the locations of the
experimental data points. The solid contour lines give the predicted density of water compared
with the contours derived from the experimental data (dashed lines). The optimum parameters
found above are

$$\boldsymbol{\theta} = \begin{bmatrix} a \\ b \\ c \\ k \end{bmatrix} = \begin{bmatrix} 5.5383 \\ 5.8133 \times 10^2 \\ -1.8726 \times 10^7 \\ 0.0305 \end{bmatrix} \tag{3.57}$$
3.3.4 The confidence of the optimised parameters
Finding the optimum model parameters to fit the data is only part of the task. A potentially
more difficult objective is to try to establish how precise these optimum parameters are, or what
the confidence regions of the parameters are. Simply expressing the uncertainty of your model as
parameter values ± some interval is a good first approximation, but it neglects the interaction
of the other parameters, the so-called correlation effect.
The following data was taken from [86, p198], but with some scaling, modifications and corrections. Himmelblau, [86, p198], gives some reaction rate data, y, as a function of pressure, x, as:
  Pressure, x   2      3      3.5    4      5      5.5    6
  rate, y       0.68   0.858  0.939  0.999  1.130  1.162  1.190

The model to be fitted to this data is of the form

$$y = \frac{\theta_1 x}{1 + \theta_2 x} \tag{3.58}$$
Now suppose that we use a nonlinear optimiser to search for the parameters θ* such that the sum
of squared errors is minimised,

$$\boldsymbol{\theta}^* = \arg\min_{\boldsymbol{\theta}} \left( \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 \right) \tag{3.59}$$

If we do this optimisation using, say, the same technique as described in §3.3.3, calling lsqcurvefit
with an initial guess of, say, θᵀ = [1, 1]:
Listing 3.8: Finding the optimum parameters for a nonlinear reaction rate model

x = [2,3,3.5,4,5,5.5,6]';                        % Pressure, x
y = [0.68,0.858,0.939,0.999,1.130,1.162,1.190]'; % Reaction rate, y(x)
f = @(theta,x) theta(1)*x./(1+theta(2)*x);       % y(x) = θ1x/(1 + θ2x), Eqn. 3.58
theta = lsqcurvefit(f,[1;1],x,y);                % find the optimum parameters

This returns the optimum parameters

$$\boldsymbol{\theta}^* \approx \begin{bmatrix} 0.5154 \\ 0.2629 \end{bmatrix} \tag{3.60}$$

and the raw data and model predictions are compared in Fig. 3.21.
[Figure 3.21: The experimental reaction rate data (o) and the fitted model.]
The fit in Fig. 3.21 looks reasonable, but it would be prudent to be able to statistically defend the
quality of the fit. To do that, we need to establish the individual confidence limits of the parameters,
and to do that, we need to know the following:
1. The n optimum parameters θ* obtained from the m experimental observations. (For the
nonlinear case, this can be found using a numerical optimiser as shown above in Listing 3.8.)

2. A linearised (m × n) data matrix, also known as a Jacobian, X, centred about the optimised
parameters at each observation, defined as

$$X_{i,j} \overset{\text{def}}{=} \frac{\partial \hat{y}_i}{\partial \theta_j} \tag{3.61}$$

Thus each of the m rows of X contains the partial derivatives of that particular observation with
respect to each of the n parameters. Note that the matrix X should be taller than it is
wide.
It is possible that the numerical optimiser can export X at the solution, or we could compute
it using either an analytical expression or finite differences.
3. We can now form the (scaled) covariance matrix

$$\mathbf{P} \overset{\text{def}}{=} \left(\mathbf{X}^T\mathbf{X}\right)^{-1} \tag{3.62}$$

from the data matrix.
Being a covariance matrix, P must be symmetric positive definite, although in practice this
requirement may not always hold owing to poor numerical conditioning. The diagonal
elements are the variances we will use for the individual confidence limits. When using
MATLAB, it is better to look at the singular values, or check the rank of XᵀX, before doing
the inversion.
4. An estimate of the measurement noise, sYi. This may be known from prior knowledge, or
simply approximated by

$$s_{Y_i} \approx \sqrt{s_r^2} = \sqrt{\frac{\boldsymbol{\epsilon}^T\boldsymbol{\epsilon}}{m-n}} \tag{3.63}$$

where εᵀε is the sum of squared residuals, and the positive integer ν = (m − n) is known
as the degrees of freedom.
5. Finally we also need to decide on a suitable confidence interval, typically 90%, 95% or 99%.
Once this is decided, we can compute the value of the inverse cumulative t-distribution at ν
degrees of freedom using appropriate statistical tables, or the simple statistical functions
given in Listing 3.9.
Listing 3.9: Routines for the cumulative probability distribution, pt, and the inverse CDF,
qt, for the t-distribution.
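A compact sketch of such routines, using only core MATLAB functions (betainc and fzero), might be:

pt = @(t,v) 0.5 + 0.5*sign(t).*(1 - betainc(v./(v + t.^2), v/2, 0.5)); % CDF of the t-distribution
qt = @(p,v) fzero(@(t) pt(t,v) - p, 1);                                % inverse CDF via root finding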
Once we have computed all of the above, the confidence interval for parameter θi is given by

$$\theta_i \pm t_{(1-\alpha/2)}\, s_{Y_i} \sqrt{P_{i,i}} \tag{3.64}$$

Note that if we assume we have perfect measurements and a perfect model, then we would expect
that the sum of the errors will be exactly zero given the true parameters, and hence the uncertainty
interval to be zero. This is consistent with Eqn. 3.63 when εᵀε = 0, although of course this occurs
rarely in practice!
The t-statistic in Eqn. 3.64, t(1−α/2), is the value of x to the right of which only α/2 of the integral lies. This quantity can easily be obtained from statistical tables, or using the qt routine from the
STIXBOX collection mentioned on page 4 for m-file implementations of some commonly used
statistical functions, or tinv from the Statistics toolbox.
Fig. 3.22 plots the inverse of the cumulative Student's t-distribution as a function of degrees of
freedom for the three commonly used confidence limits: 90%, 95% and 99%. Note that in the
case of 95% (α = 5%), provided we have a sufficient degree of freedom (i.e. we are not trying
to identify n parameters with the minimum of n experiments), the t-statistic both converges
to the inverse of the cumulative normal distribution and, furthermore, is approximately equal to
2.
[Figure 3.22: The inverse of the cumulative density function of the t-distribution, qt(p, ν), for varying degrees of freedom, ν, at the 90%, 95% and 99% confidence limits. Note that t(1−α/2) ≈ 2 for α = 5% for all but very low degrees of freedom.]
For the regression problem started on page 119, we can compute the Jacobian X in closed form.
The partial derivatives of the model, Eqn. 3.58, with respect to the parameters are

$$\frac{\partial y}{\partial \theta_1} = \frac{x}{1 + \theta_2 x}, \qquad \frac{\partial y}{\partial \theta_2} = -\frac{\theta_1 x^2}{(1 + \theta_2 x)^2}$$

Given the optimum parameters established in Listing 3.8, the individual uncertainties are calculated in Listing 3.10. The cumulative density function for the t-distribution, pt, and its inverse,
qt, could be substituted with tcdf and tinv from the Statistics toolbox respectively, or with
routines from the R statistical package, [163].
Listing 3.10: Parameter confidence limits for a nonlinear reaction rate model. This routines follows from Listing 3.8 and uses the statistical functions from Listing 3.9.
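A sketch of the confidence limit calculation, assuming x, y and theta from Listing 3.8 and the qt function from Listing 3.9 are available in the workspace:

m = length(y); n = length(theta);              % # of observations and parameters
X = [x./(1 + theta(2)*x), ...                  % ∂ŷ/∂θ1
     -theta(1)*x.^2./(1 + theta(2)*x).^2];     % ∂ŷ/∂θ2, the Jacobian, Eqn. 3.61
P = inv(X'*X);                                 % scaled covariance, Eqn. 3.62
e = y - theta(1)*x./(1 + theta(2)*x);          % residuals
sY = sqrt(e'*e/(m - n));                       % noise estimate, Eqn. 3.63
ci = qt(1 - 0.05/2, m - n)*sY*sqrt(diag(P));   % ± interval at 95%, Eqn. 3.64
[theta(:) - ci, theta(:) + ci]                 % compare with Eqn. 3.65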
This gives the 95% confidence limits for the parameters as⁷

$$\begin{bmatrix} 0.4808 \\ 0.2301 \end{bmatrix} < \boldsymbol{\theta} < \begin{bmatrix} 0.5501 \\ 0.2956 \end{bmatrix} \tag{3.65}$$
What is important to check in an expression like Eqn. 3.65 is that none of the parameter bounds
span zero. If they did, then this suggests that the parameter should be dropped.
While the bounds of the parameters are useful, what is more useful is the corresponding bounds
on the predictions. The non-simultaneous prediction bounds are

$$B = \hat{y} \pm \underbrace{t_{1-\alpha/2}\sqrt{s^2 + \mathbf{x}\mathbf{S}\mathbf{x}^T}}_{\text{bounds}} \tag{3.66}$$

where S = (XᵀX)⁻¹ sY and x is the ith row vector of the Jacobian associated with measurement
yi.
A plot of the raw data (•), the model with the optimum estimated parameters, and the associated
error bounds using Eqn. 3.65 is given in Fig. 3.23.
[Figure 3.23: The experimental data, the fitted model, and the 95% confidence bounds on the predictions, as a function of pressure.]
The confidence region (an ellipse in the two variable case) can also be plotted. This gives a deeper
understanding of the parameter interactions, but the visualisation becomes near impossible as
the number of parameters gets much larger than 3. The region for this example is evaluated in the
following section.
The confidence region
An approximate confidence region can be constructed by linearising the nonlinear model about
the optimum parameters. The covariance of the parameter estimate is

$$\text{cov}(\mathbf{a}) \approx \left(\mathbf{X}^T\mathbf{X}\right)^{-1} \sigma_{Y_i}^2 \tag{3.67}$$

where a are the estimated parameters and θ are the true, but unknown, parameters. Now the
confidence region, an ellipse in two-parameter space, or an ellipsoid in higher dimensions, is

⁷ These figures differ from the worked example in Himmelblau. I think he swapped two matrices by mistake, then
continued with this error.
the region within which, with a confidence limit of say 95%, we are certain the true parameter (θ) lies:

$$(\boldsymbol{\theta} - \mathbf{a})^T \left(\mathbf{X}^T\mathbf{X}\right) (\boldsymbol{\theta} - \mathbf{a}) = s_{Y_i}^2\, m\, F_{1-\alpha}[m, n-m] \tag{3.68}$$

where Fα[m, n − m] is the upper limit of the F distribution for m and n − m degrees of freedom,
where m is the number of parameters and n is the number of data points. This value can be found
in statistical tables. Note that the middle term of Eqn. 3.68 is XᵀX and not the inverse of this.
For the previous example, we can plot the ellipse about the optimised parameters. Fig. 3.24
shows the 95% confidence ellipse about the optimised parameter (marked with a ×) and the individual confidence lengths from Eqn. 3.65 superimposed. Since the ellipse is narrow and slanted,
we note that the parameters are strongly correlated with each other.
[Figure 3.24: The approximate 95% confidence elliptical region in two-parameter space, compared with the individual rectangular confidence bounds. The optimised parameter is marked with a ×.]
The ellipse defines an area in which the probability that the true parameter vector lies is better
than 95%. If we use the parameter points marked in Fig. 3.24, then the model prediction
will differ from the prediction using the optimised parameter vector. This difference is shown as
the errorbars in the lower plot of Fig. 3.24. Note that these errorbars are much smaller than the
errorbars I obtained from considering only the confidence interval for each individual parameter
given in Fig. 3.23. Here, owing to the correlation (slanting of the ellipse in Fig. 3.24), the parameters
obtained when neglecting the correlation lie far outside the true 95% confidence region. Hence
the bigger errorbars.
3.4 The numerical solution of differential equations
Unfortunately there are no known methods of solving Eqn 1 (a nonlinear differential equation).
This, of course, is very disappointing.
M. Braun, [35, p493]
Since differential equations are difficult to solve analytically, we typically need to resort to numerical methods as described in many texts such as [41, 43, 48, 94, 103, 170] in addition to [204].
Differential equations are broadly separated into two families:
ODEs Ordinary differential equations (ODEs) where time is the only independent variable. The
solution to ODEs can be described using standard plots where the dependent variables are
plotted on the ordinate or y axis against time on the x axis. Ways to numerically solve these
equations are described in section 3.4.1.
PDEs Partial differential equations (PDEs) are where space (and perhaps time) are independent
variables. To display the solution of PDEs, contour maps or 3D plots are required in general.
Generally PDEs are much harder to solve than ODEs and are beyond the scope of this text.
3.4.1 The numerical integrator
One of the most important tools in the desk-top experimenter's collection is a good numerical integrator. We need numerical integrators whenever we are faced with a nonlinear differential equation, the solutions of which are intractable by any other means. MATLAB supplies a number of different numerical integrators optimised for different classes of problems. Typically the 4th/5th dual order Runge-Kutta implementation with adaptive stepsizing, ode45, is a good first choice.
To use the integrators, you must write a function that contains the differential system to be solved, which will be called by the numerical integration routine. Two files are required:
1. The script file that calls the integration routine (e.g. ode45) with the name of the function to be integrated and parameters including the initial conditions, tolerance and time span.
2. The function (named in the script file) that calculates the derivatives, ẏ, as a function of the independent variable (usually time) and the dependent variable, f(t, y). In many cases we are solving a system of ordinary differential equations where the variables are coupled. In this case we often use the nomenclature x(t) to represent the vector of dependent variables.
We will demonstrate this procedure by constructing templates of the above two files to solve a
small nonlinear differential equation. One can reuse this template as the base for solving other
ODE systems.
ODE TEMPLATE EXAMPLE
Suppose our task is to investigate the response of the nonlinear pendulum system given in Eqn. 3.6 and compare this with the linearised version. The script file given in Listing 3.11 computes the angle θ = x₁(t) trajectory for both the nonlinear system and the linear approximation.
Listing 3.11: Comparing the dynamic response of a pendulum to the linear approximation
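The original listing is not reproduced here; a minimal sketch, under the assumption that Eqn. 3.6 is the classic pendulum ẍ₁ = −(g/ℓ) sin x₁ with assumed parameter values, might look like:

g = 9.81; l = 1; x0 = [1; 0];          % assumed parameters & initial state
pend = @(t,x) [x(2); -g/l*sin(x(1))];  % nonlinear pendulum dynamics
t = (0:0.05:10)';                      % uniform time grid (needed by lsim)
[t,x] = ode45(pend,t,x0);              % nonlinear solution
Gl = ss([0 1; -g/l 0],[0;0],[1 0],0);  % linearised model (sin(x1) ~ x1)
yl = lsim(Gl,0*t,t,x0);                % linear response via lsim
plot(t,x(:,1),t,yl,'--')               % compare the angle trajectories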
Note how the trajectories of the nonlinear system (solid), as calculated from ode45, and the linear approximation, calculated using lsim, gradually diverge in Fig. 3.25.
Figure 3.25: The trajectory of the true nonlinear pendulum compared with the linearised approximation.

A CHAOTIC PENDULUM
A second example of an interesting dynamic system is a chaotic forced pendulum from [24]. The
dynamic equation system is
$$\frac{d\omega}{dt} = -\frac{\omega}{q} - \sin\theta + g\cos\phi$$
$$\frac{d\theta}{dt} = \omega$$
$$\frac{d\phi}{dt} = \omega_d \tag{3.69}$$
where the three states are the angular frequency ω, the angular position θ, and the phase of the forcing term, φ. We are particularly interested in investigating the behaviour of this system as a function of the three key parameters, g, q and ωd.
Listing 3.12: The dynamics of a chaotic pendulum from Eqn. 3.69.
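A minimal sketch of such a function (the name chaos_pend is an assumption):

function dx = chaos_pend(t,x,g,q,wd)
% Chaotic pendulum dynamics of Eqn. 3.69; states x = [omega; theta; phi]
dx = [-x(1)/q - sin(x(2)) + g*cos(x(3)); % d(omega)/dt
      x(1);                              % d(theta)/dt = omega
      wd];                               % d(phi)/dt = omega_d
end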
Note how in the definition above for the dynamic system we have included the three key parameters as trailing arguments so that we can easily change them. For example, for the case where we are interested in g = 1.15, q = 2 and ωd = 2/3, we simulate
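with something like the following, where the time span and initial state are assumptions:

g = 1.15; q = 2; wd = 2/3;
[t,x] = ode45(@(t,x) chaos_pend(t,x,g,q,wd),[0 100],[0; 0.1; 0]);
plot(t,x(:,2))   % angular position, theta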
1. One of my favourite differential equations is called the Van der Pol equation;
$$m\frac{d^2y}{dt^2} - b\left(1 - y^2\right)\frac{dy}{dt} + ky = 0$$
where m = 1, b = 3 and k = 2. Solve the equation for 0 < t < 20 using both ode23 and ode45. Sketch the solution. Which routine is better? You will need to create a function called vandepol(t,y) which contains the system of equations to be solved.
Problem 3.5 A particularly nasty nonlinear chemical engineering type model is found in [82]. This
is a model of two CSTRs8 in series, cooled with a co-current cooling coil. The model equations
are:
$$\dot{T}_1 = \frac{q}{V_1}(T_f - T_1) + \frac{(-\Delta H)\,k_0 C_{a1}}{\rho c_p}\exp\left(\frac{-E}{RT_1}\right) + \frac{\rho_c c_{pc}}{\rho c_p V_1}\, q_c\left[1 - \exp\left(\frac{-hA_1}{q_c \rho_c c_{pc}}\right)\right]\left(T_{cf} - T_1\right)$$
$$\dot{T}_2 = \frac{q}{V_2}(T_1 - T_2) + \frac{(-\Delta H)\,k_0 C_{a2}}{\rho c_p}\exp\left(\frac{-E}{RT_2}\right) + \frac{\rho_c c_{pc}}{\rho c_p V_2}\, q_c\left[1 - \exp\left(\frac{-hA_2}{q_c \rho_c c_{pc}}\right)\right]\left[T_1 - T_2 + \exp\left(\frac{-hA_1}{q_c \rho_c c_{pc}}\right)\left(T_{cf} - T_1\right)\right]$$
where the state, control, disturbance and measured variables are defined as
$$\mathbf{x} \overset{\text{def}}{=} \begin{bmatrix} C_{a1} & T_1 & C_{a2} & T_2 \end{bmatrix}^T, \quad u \overset{\text{def}}{=} q_c, \quad \mathbf{d} \overset{\text{def}}{=} \begin{bmatrix} C_{af} & T_{cf} \end{bmatrix}^T, \quad y \overset{\text{def}}{=} C_{a2}$$
description                 variable     value        units
reactant flow               q            100          l/min
conc. of feed reactant A    Caf          1.0          mol/l
temp. of feed               Tf           350          K
temp. of cooling feed       Tcf          350          K
volume of vessels           V1 = V2      100          l
heat transfer coeff.        hA1 = hA2    1.67 × 10⁵   J/min.K
pre-exp. constant           k0           7.2 × 10¹⁰   min⁻¹
E/R                         E/R          1.0 × 10⁴    K
reaction enthalpy           ΔH           4.78 × 10⁴   J/mol
fluid density               ρ = ρc       1.0          g/l
heat capacity               cp = cpc     0.239        J/g.K
Note that the model equations are nonlinear since the state variable T₁ (amongst others) appears nonlinearly in the differential equation. In addition, this model is referred to as control nonlinear, since the manipulated variable enters nonlinearly. Simulate the response of the concentration in the second tank (Ca2) to a step change of 10% in the cooling flow using the initial conditions given in Table 3.6.

⁸ continuously stirred tank reactors
Table 3.6: The initial state and manipulated variables for the CSTR simulation

description          variable    value         units
coolant flow         qc          99.06         l/min
conc. in vessel 1    Ca1         8.53 × 10⁻²   mol/l
temp. in vessel 1    T1          441.9         K
conc. in vessel 2    Ca2         5.0 × 10⁻³    mol/l
temp. in vessel 2    T2          450           K
You will need to write a .m file containing the model equations, and use the integrator ode45.
Solve the system over a time scale of about 20 minutes. What is unusual about this response?
How would you expect a linear system to behave in similar circumstances?
Problem 3.6 Develop a M ATLAB simulation for the high purity distillation column model given in
[142, pp459]. Verify the open loop responses given on page 463. Ensure that your simulation is
easily expanded to accommodate a different number of trays, different relative volatility etc.
Problem 3.7 A slight complexity to the simple ODE with specified initial conditions is where we
still have the ODE system, but not all the initial conditions. In this case we may know some of the
end conditions instead, thus the system is in principle able to be solved, but not in the standard
manner; we require a trial and error approach. These types of problems are called two point
boundary problems and we see them arise in heat transfer problems and optimal control. They
are so called two point boundary problems because for ODEs we have two boundaries for the
independent variable; the start and the finish. Try to solve the following from [198, p178].
A fluid enters an immersed cooling coil 10 m long at 200°C and is required to leave at 40°C. The cooling medium is at 20°C. The heat balance is
$$\frac{d^2T}{dx^2} = 0.01\left(T - 20\right)^{1.4}, \qquad T\big|_{x=10} = 40 \tag{3.70}$$
To solve the system, we must rewrite Eqn. 3.70 as a system of two first-order differential equations, and supply a guess for the missing initial condition, $dT/dx|_{x=0}$. We can then integrate the system until x = 10 and check that the final condition is as required. Solve the system. Hint: try $-47 < dT/dx|_{x=0} < -42$.

3.4.2 Differential and algebraic equation systems
In many cases when deriving models from first principles, we end up with a coupled system of some differential equations and some algebraic equations. This is often due to our practice of writing down conservation type equations (typically dynamic), and constraint equations, say thermodynamic, which are typically algebraic. Such a system in general can be written as
$$f(\dot{x}, x, t) = 0 \tag{3.71}$$
and is termed a DAE or differential/algebraic equation system. If we assume that our model is
well posed, that is we have some hope of finding a solution, then we expect that we have the
same number of variables as equations, and it follows that some of the variables will not appear
in the differential part, but only in the algebraic part. We may be able to substitute those algebraic
variables out, such that we are left with only ODEs, which can then be solved using standard numerical schemes such as rk2.m. However it is more likely that
we cannot extract the algebraic variables out, and thus we need special techniques to solve these
sorts of problems (Eqn. 3.71) as one.
In a now classic article titled Differential/Algebraic Equations are not ODEs, [159] gives some insight
into the problems. It turns out that even for linear DAE systems, the estimate of the error, which is
typically derived from the difference between the predictor and the corrector, does not decrease as
the step size is reduced. Since most normal ODE schemes are built around this assumption, they
will fail. This state of affairs will only occur with certain DAE structures where the nilpotency or index is 3 or higher. Index 2 systems also tend to cause BDF schemes to fail, but can be tackled using other methods. The index problem is important because in many cases it can be changed (preferably reduced to less than two) by rearranging the equations. Automated modelling tools tend, if left alone, to create overly complex models with a high index that are impossible to solve. However if we, either by using an intelligent symbolic manipulator or by hand, change these equations, we may be able to reduce the index.
THE PROBLEM OF ALGEBRAIC LOOPS
One reason that computer aided modelling tools such as S IMULINK have taken so long to mature
is the problem of algebraic loops. This is a particular problem when the job of assembling the
many different differential and algebraic equations in an efficient way is left to a computer.
Suppose we want to simulate a simple feedback process where the gain is a saturated function of
the output, say
K(y) = max(min(y, 5), 0.1)
If we simulate this in Simulink, with a step input driving the transfer function 1/(s + 1) and the saturated gain fed back through a product block,
we run into an Algebraic Loop error. M ATLAB returns the following error diagnostic (or something
similar):
Warning: Block diagram sgainae contains 1 algebraic loop(s).
Found algebraic loop containing block(s):
sgainae/Gain1
sgainae/Saturation (discontinuity)
sgainae/Product (algebraic variable)
Discontinuities detected within algebraic loop(s), may have trouble solving
One pragmatic fix is to insert a transfer function with a small time constant in place of the algebraic gain in the feedback loop. While we desire the dynamics of the gain to be very fast so that it approximates the original algebraic gain, overly fast dynamics cause numerical stability problems, hence there is a trade off.
3.5 Linearisation of nonlinear dynamic equations
While most practical engineering problems are nonlinear to some degree, it is often useful to be
able to approximate the dynamics with a linear differential equation which means we can apply
linear control theory. While it is possible to design compensators for the nonlinear system directly,
this is in general far more complicated, and one has far fewer reliable guidelines and recipes to
follow. One particular version of nonlinear controller design called exact nonlinear feedback is
discussed briefly in 8.6.
Common nonlinearities can be divided into two types: hard nonlinearities such as hysteresis, stiction, and dead zones, and soft nonlinearities such as the Arrhenius temperature dependency, power laws, etc. Hard nonlinearities are characterised by functions that are not differentiable, while soft nonlinearities are. Many strategies for compensating for nonlinearities are only applicable for systems which exhibit soft nonlinearities.
We can approximate soft nonlinearities by truncating a Taylor series approximation of the original system. The success of this approximation depends on whether the original function has any non-differentiable terms such as hysteresis or saturation elements, and how far we deviate from the point of linearisation. This section follows the notation of [76, 3.10].
Suppose we have the general nonlinear dynamic plant
$$\dot{x} = f(x(t), u(t)) \tag{3.72}$$
$$y = g(x, u) \tag{3.73}$$
and we wish to find an approximate linear model about some operating point $(x_a, u_a)$. A first-order Taylor series expansion of Eqns 3.72–3.73 is
$$\dot{x} \approx f(x_a, u_a) + \left.\frac{\partial f}{\partial x}\right|_{\substack{x=x_a\\u=u_a}}\left(x(t) - x_a\right) + \left.\frac{\partial f}{\partial u}\right|_{\substack{x=x_a\\u=u_a}}\left(u(t) - u_a\right) \tag{3.74}$$
$$y \approx g(x_a, u_a) + \left.\frac{\partial g}{\partial x}\right|_{\substack{x=x_a\\u=u_a}}\left(x(t) - x_a\right) + \left.\frac{\partial g}{\partial u}\right|_{\substack{x=x_a\\u=u_a}}\left(u(t) - u_a\right) \tag{3.75}$$
where the $ij$th element of the Jacobian matrix $\partial f/\partial x$ is $\partial f_i/\partial x_j$. Note that for linear systems, the Jacobian is simply $\mathbf{A}$ in this notation, although some authors define the Jacobian as the transpose of this, or $\mathbf{A}^T$.
The linearised system Eqns 3.74–3.75 can be written as
$$\dot{x} = \mathbf{A}x(t) + \mathbf{B}u(t) + \mathbf{E} \tag{3.76}$$
$$y = \mathbf{C}x(t) + \mathbf{D}u(t) + \mathbf{F} \tag{3.77}$$
where the constant matrices A, B, C and D are defined as
$$\mathbf{A} = \left.\frac{\partial f}{\partial x}\right|_{\substack{x=x_a\\u=u_a}}, \qquad \mathbf{B} = \left.\frac{\partial f}{\partial u}\right|_{\substack{x=x_a\\u=u_a}}, \qquad \mathbf{C} = \left.\frac{\partial g}{\partial x}\right|_{\substack{x=x_a\\u=u_a}}, \qquad \mathbf{D} = \left.\frac{\partial g}{\partial u}\right|_{\substack{x=x_a\\u=u_a}}$$
and the constant bias terms are
$$\mathbf{E} = f(x_a, u_a) - \left.\frac{\partial f}{\partial x}\right|_{\substack{x=x_a\\u=u_a}} x_a - \left.\frac{\partial f}{\partial u}\right|_{\substack{x=x_a\\u=u_a}} u_a, \qquad \mathbf{F} = g(x_a, u_a) - \left.\frac{\partial g}{\partial x}\right|_{\substack{x=x_a\\u=u_a}} x_a - \left.\frac{\partial g}{\partial u}\right|_{\substack{x=x_a\\u=u_a}} u_a$$
Note that Eqns 3.76–3.77 are almost in the standard state-space form, but they include the extra bias constant matrices E and F. It is possible, by introducing a dummy unit input, to convert this form into the standard state-space form,
$$\dot{x} = \mathbf{A}x + \begin{bmatrix} \mathbf{B} & \mathbf{E} \end{bmatrix}\begin{bmatrix} u \\ 1 \end{bmatrix} \tag{3.78}$$
$$y = \mathbf{C}x + \begin{bmatrix} \mathbf{D} & \mathbf{F} \end{bmatrix}\begin{bmatrix} u \\ 1 \end{bmatrix} \tag{3.79}$$
which we can then directly use in standard linear controller design routines such as lsim.
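As a sketch, Eqns 3.78–3.79 can be assembled and simulated in MATLAB, assuming the matrices A, B, C, D, E, F and the signals u, t and x0 have already been computed:

sys = ss(A,[B E],C,[D F]);   % augmented state-space model, Eqns 3.78-3.79
U = [u, ones(size(t))];      % append the dummy unit input
y = lsim(sys,U,t,x0);        % simulate with lsim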
In summary, the linearisation requires one to construct matrices of partial derivatives with respect
to state and input. In principle this can be automated using a symbolic manipulator provided the
derivatives exist. In both the S YMBOLIC T OOLBOX and in M APLE we can use the jacobian
command.
Example: Linearisation using the SYMBOLIC toolbox. Suppose we want to linearise the nonlinear system
$$\dot{x} = \begin{bmatrix} a x_1 \exp\left(1 - \dfrac{b}{x_2}\right) \\[4pt] c x_1\left(x_2 - u^2\right) \end{bmatrix}$$
at an operating point, $x_a = [1, 2]^T$, $u_a = 20$.
First we start by defining the nonlinear plant of interest,
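The original listing is not reproduced here, but a minimal sketch follows; the variable names Avar, Bvar and Evar are those used in the output below, and everything else follows from the jacobian command.

syms x1 x2 u a b c real
f = [a*x1*exp(1-b/x2); c*x1*(x2-u^2)];  % the nonlinear plant above
Avar = jacobian(f,[x1 x2]);             % df/dx
Bvar = jacobian(f,u);                   % df/du
Evar = f - Avar*[x1;x2] - Bvar*u;       % bias term E of Eqn. 3.76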
>> A = subs(Avar,{a,b,c,x1,x2,u},{5,6,7,1,2,20})
A =
  1.0e+003 *
    0.0007    0.0010
   -2.7860    0.0070
>> B = subs(Bvar,{a,b,c,x1,x2,u},{5,6,7,1,2,20})
B =
[    0]
[ -280]
>> E = subs(Evar,{a,b,c,x1,x2,u},{5,6,7,1,2,20})
E =
  1.0e+003 *
   -0.0020
    5.5860
At this point we could compare in simulation the linearised version with the full nonlinear model.
3.5.1 Linearising a tank model
Suppose we wish to linearise the model of the level in a tank given on page 93 where the tank geometry is such that A = 1. The nonlinear dynamic system for the level h then simplifies to
$$\frac{dh}{dt} = -k\sqrt{h} + F_{in}$$
For a constant flow in, $F_{in}^{ss}$, which we will term steady state and denote with the superscript ss, the resulting steady-state level is given by
$$h^{ss} = \left(\frac{F_{in}^{ss}}{k}\right)^2$$
noting that dh/dt = 0. We wish to linearise the system about this steady state, so we will actually work with the deviation variables
$$x \overset{\text{def}}{=} h - h^{ss}, \qquad u \overset{\text{def}}{=} F_{in} - F_{in}^{ss}$$
Now following Eqn. 3.74, we have
$$\dot{h} = \dot{x} = \underbrace{f(h^{ss}, F_{in}^{ss})}_{=0} + \frac{\partial f}{\partial h}\left(h - h^{ss}\right) + \frac{\partial f}{\partial F_{in}}\left(F_{in} - F_{in}^{ss}\right)$$
Note that since $\partial f/\partial h = -k/(2\sqrt{h^{ss}})$, our linearised model about $(F_{in}^{ss}, h^{ss})$ is
$$\dot{x} = -\frac{k}{2\sqrt{h^{ss}}}\, x + u$$
which is in state-space form in terms of the deviation variables x and u.
We can compare the linearised model with the nonlinear model in Fig. 3.26 about a nominal input flow of $F_{in}^{ss} = 2$ with k = 1, giving a steady-state level of $h^{ss} = 4$. Note that we must subtract and add the relevant biases when using the linearised model.
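For reference, a sketch of the analytic linearisation as a state-space object in deviation variables; this should match the linmod result in Listing 3.13 below.

k = 1; Fin_ss = 2;
hss = (Fin_ss/k)^2;              % steady-state level, 4
G = ss(-k/(2*sqrt(hss)),1,1,0)   % gives A = -0.25, B = C = 1, D = 0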
An alternative, and much simpler way to linearise a dynamic model is to use linmod which
extracts the Jacobians from a S IMULINK model by finite differences.
(a) A Simulink nonlinear tank model and linear state-space model for comparison
Figure 3.26: Comparing the linearised model with the full nonlinear tank level system
Listing 3.13: Using linmod to linearise an arbitrary Simulink module.

k = 1; Fin_ss = 2;   % Model parameters and steady-state input
hss = (Fin_ss/k)^2   % Steady-state level, hss
[A,B,C,D] = linmod('sNL_tank_linmod',hss,Fin_ss) % Linearise

A =
   -0.2500
B =
    1
C =
    1
D =
    0
QUANTIFYING THE EXTENT OF NONLINEARITY
It is important for the control designer to be able to quantify, if only approximately, the extent of the open-loop process nonlinearity. If for example the process was deemed only marginally nonlinear, then one would be confident that a controller designed assuming a linear underlying plant would perform satisfactorily. On the other hand, if the plant was strongly nonlinear, then such a linear controller may not be suitable. Ideally we would like to be able to compute a nonlinearity metric, say from 0 (identically linear) to 1 (wildly nonlinear), that quantifies this idea simply by measuring the open-loop input/output data. This of course is a complex task, and the result is going to be a function of the type of input signals, the duration of the experiment, whether the plant is stable or unstable, and whether feedback is present.
One such strategy is proposed in [81] and used to assess the suitability of linear control schemes
in [179]. The idea is to compute the norm of the difference between the best linear approximation and the true nonlinear response for the worst input trajectory within a predetermined set
of trajectories. This is a nested optimisation problem with a min-max construction. Clearly the
choice of linear model family from which to choose the best one, and the choice of the set of input
trajectories will have an effect on the final computed nonlinear measure.
3.6 SUMMARY
. . . state the variables, the laws linking the variables, and the initial and boundary conditions, and from these compute the forward trajectory of the biosphere.

In actual fact he was lamenting that this strategy, sometimes known as scientific determinism and voiced by Laplace in the early 19th century, was not always applicable to our world as we understand it today. Nonetheless, for our aim of modelling for control purposes, this philosophy has been, and I suspect will remain for some time, remarkably successful.
Modelling of dynamic systems is important in industry. These types of models can be used for design and/or control. Effective modelling is an art. It requires mathematical skill and engineering judgement. The scope and complexity of the model depend on the purpose of the model. For design studies a detailed model is usually required, although normally only steady-state models are needed at this stage. For control, simpler (although dynamic) models can be used since the feedback component of the controller will compensate for any model-plant mismatch. However most control schemes require dynamic models, which are more complicated than the steady-state equivalent.
Many of the dynamic models used in chemical engineering applications are built from conservation laws with thermodynamic constraints. These are often expressed as ordinary differential equations where we equate the rate of accumulation of something (mass or energy) to the inputs, outputs and generation in a defined control volume. In addition there may be some restrictions on the allowable states, which introduces some accompanying algebraic equations. Thus general dynamic models can be expressed as a combination of dynamic and algebraic relations
$$\frac{dx}{dt} = f(x, u, \theta, t) \tag{3.80}$$
$$0 = g(x, u, \theta, t) \tag{3.81}$$
which are termed DAEs (differential and algebraic equations), and special techniques have been
developed to solve them efficiently. DAEs crop up frequently in automated computer modelling
packages, and can be numerically difficult to solve. [135] provides more details in this field.
Steady-state models are a subset of the general dynamic model where the dynamic term, Eqn. 3.80, is set equal to zero. We now have an augmented problem of the form of Eqn. 3.81 only. Linear dynamic models are useful in control because of the many simple design techniques that exist. These models can be written in the form
$$\dot{x} = \mathbf{A}x + \mathbf{B}u \tag{3.82}$$
where the model structure and parameters are linear and often time invariant.
Models are obtained, at least in part, by writing the governing equations of the process. If these
are not known, experiments are needed to characterise fully the process. If experiments are used
the model is said to be heuristic. If the model has been obtained from detailed chemical and
physical laws, then the model is said to be mechanistic. In practice, most models are a mixture of
these two extremes. However, whatever model is used, it is still only an approximation to the real world. For this reason, the assumptions that are used in the model development must be clearly stated and understood before the model is used.
CHAPTER 4

THE PID CONTROLLER

4.1 INTRODUCTION
The PID controller is the most common general purpose controller in the chemical process industry today. It can be used as a stand-alone unit, or it can be part of a distributed computer control
system. Over 30 years ago, PID controllers were pneumatic-mechanical devices, whereas nowadays they are implemented in software in electronic controllers. The electronic implementation
is much more flexible than the pneumatic devices since the engineer can easily re-program it to
change the configuration of things like alarm settings, tuning constants etc.
Once we have programmed the PID controller, and have constructed something, either in software or hardware to control, we must tune the controller. This is surprisingly tricky to do successfully, but some general hints and guidelines will be presented in 4.6. Finally, the PID controller is
not perfect for everything, and some examples of common pitfalls when using the PID are given
in 4.8.
4.1.1 P, PI OR PID CONTROL
For many industrial process control requirements, proportional only control is unsatisfactory
since the offset cannot be tolerated. Consequently the PI controller is probably the most common controller, and is adequate when the dynamics of the process are essentially first or damped
second order. PID is satisfactory when the dynamics are second or higher order. However the
derivative component can introduce problems if the measured signal is noisy. If the process has
a large time delay (dead time), derivative action does not seem to help much. In fact PID control
finds it difficult to control processes of this nature, and generally a more sophisticated controller
such as a dead time compensator or a predictive controller is required. Processes that are highly
underdamped with complex conjugate poles close to the imaginary axis are also difficult to control with a PID controller. Processes with this type of dynamic characteristic are rare in the chemical processing industries, although more common in mechanical or robotic systems comprising flexible structures.
My own impressions are that the derivative action is of limited use since industrial measurements
such as level, pressure, temperature are usually very noisy. As a first step, I generally only use a PI
controller, the integral part removes any offset, and the two parameter tuning space is sufficiently
small, that one has a chance to find reasonable values for them.
4.2 THE CONTINUOUS PID CONTROLLER
This section describes how to implement a simple continuous-time PID controller. We will start with the classical textbook algorithm, although industrially available PID controllers are never this simple, for practical reasons which will soon become clear. Further details on the subtleties of implementing a practically useful PID controller are described in [55] and the texts [14, 16, 98].
The purpose of the PID controller is to measure a process error and calculate a manipulated
action u. Note that while u is referred to as an input to the process, it is the output of the controller.
The textbook non-interacting continuous PID controller follows the equation
$$u = K_c\left(\epsilon + \frac{1}{\tau_i}\int_0^t \epsilon\, dt + \tau_d\frac{d\epsilon}{dt}\right) \tag{4.1}$$
where the three tuning constants are the controller proportional gain, Kc, the integral time, τi, and the derivative time, τd, the latter two constants having units of time, often minutes for industrial controllers. Personally, I find it more intuitive to use the reciprocal of the integral time, 1/τi, which is called reset and has units of repeats per unit time. This nomenclature scheme has the advantage that no integral action equates to zero reset, rather than the cumbersome infinite integral time. Just to confuse you further, for some industrial controllers to turn off the integral component, rather than type in something like 99999, you just type in zero. It is rare in engineering that we get to approximate infinity with zero! Table 4.1 summarises these alternatives.
Table 4.1: Alternative PID tuning parameter conventions

Parameter         symbol   units          alternative         symbol   units
Gain              Kc       input/output   Proportional band   PB       %
integral time     τi       seconds        reset               1/τi     seconds⁻¹
derivative time   τd       seconds

The Laplace transform of the ideal PID controller given in Eqn. 4.1 is
$$\frac{U(s)}{E(s)} = C(s) = K_c\left(1 + \frac{1}{\tau_i s} + \tau_d s\right) \tag{4.2}$$
and the equivalent block diagram in parallel form is given in Fig. 4.1, where it is clear that the three terms are computed in parallel, which is why this form of the PID controller is sometimes also known as the parallel PID form.
We could rearrange Eqn. 4.2 in a more familiar numerator/denominator transfer function format,
$$C(s) = K_c\,\frac{\tau_i\tau_d s^2 + \tau_i s + 1}{\tau_i s} \tag{4.3}$$
where we can clearly see that the ideal PID controller is not proper, that is, the order of the numerator (2) is larger than the order of the denominator (1), and that we have a pole at s = 0. We shall see in section 4.2.1 that when we come to fabricate these controllers we must physically have a proper transfer function, and so we will need to modify the ideal PID transfer function slightly.
4.2.1 IMPLEMENTING A REALISABLE PID CONTROLLER
The textbook PID algorithm of Eqn. 4.2 includes a pure derivative term τd s. Such a term is not physically realisable, nor would we really want to implement it anyway, since abrupt changes in the error would demand excessively large derivative action.
Figure 4.1: The parallel PID controller: the error input passes through the integrator (1/τᵢs) and differentiator (τd s) paths in parallel, is scaled by Kc, and summed to give the controller output.
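The display below was presumably generated by a command along these lines, inferred from the parameters shown:

C = pidstd(1,2,3,100)   % Kp = 1, Ti = 2, Td = 3, derivative filter N = 100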
C(s) = Kp * (1 + 1/(Ti*s) + Td*s/((Td/N)*s + 1))

with Kp = 1, Ti = 2, Td = 3, N = 100
The generated controller is of a special class known as a pidstd object, but we can convert that to the more familiar transfer function with the overloaded tf command.
>> tf(C)
Transfer function:
101 s^2 + 33.83 s + 16.67
-------------------------
      s^2 + 33.33 s
A related M ATLAB function, pid, constructs a slight modification of the above PID controller in
what M ATLAB refers to as parallel form,
$$C_{\text{parallel}}(s) = P + \frac{I}{s} + D\,\frac{s}{\tau_f s + 1}$$
where in this case, as the derivative filter time constant, τf, approaches zero, the derivative term approaches the pure differentiator.
4.2.2 VARIATIONS OF THE PID ALGORITHM
The textbook algorithm of the PID controller given in Eqn. 4.2 is sometimes known as the parallel or non-interacting form; however, due to historical reasons there is another form of the PID controller that is sometimes used. This is known as the series, cascade or interacting form,
$$G_c(s) = K_c'\left(1 + \frac{1}{\tau_i' s}\right)\left(1 + \tau_d' s\right) \tag{4.7}$$
where the three series PID controller tuning constants, Kc′, τi′ and τd′, are related to, but distinct from, the original PID tuning constants, Kc, τi and τd. A block diagram of the series PID controller is given in Fig. 4.2.
Figure 4.2: Block diagram of the series (interacting) form of the PID controller.
The tuning constants of the two forms are related by
$$K_c = K_c'\,\frac{\tau_i' + \tau_d'}{\tau_i'}, \qquad \tau_i = \tau_i' + \tau_d', \qquad \tau_d = \frac{\tau_i'\tau_d'}{\tau_i' + \tau_d'} \tag{4.8}$$
and conversely
$$K_c' = K_c\,\frac{1 + \sqrt{1 - 4\tau_d/\tau_i}}{2} \tag{4.9}$$
$$\tau_i' = \tau_i\,\frac{1 + \sqrt{1 - 4\tau_d/\tau_i}}{2} \tag{4.10}$$
$$\tau_d' = \tau_i\,\frac{1 - \sqrt{1 - 4\tau_d/\tau_i}}{2} \tag{4.11}$$
since a series form only exists if τi > 4τd. For this reason, the series form of the PID controller is less commonly used, although some argue that it is easier to tune, [188]. Note that both controller forms are the same if the derivative component is not used. A more detailed discussion on the various industrial controllers and associated nomenclatures is given in [98, pp32–33].
Figure 4.3: Comparing PI and integral-only control for the real-time control of a noisy flapper
plant with sampling time T = 0.08.
4.2.3 INTEGRAL-ONLY CONTROL
For some types of control, integral-only action is desired. Fig. 4.3(a) shows the controlled response of the laboratory flapper, (1.3.1), controlled with a PI controller. The interesting characteristic of this plant's behaviour is the significant disturbances. Such disturbances make it both difficult to control and demand excessive manipulated variable action.
If however we drop the proportional term, and use only integral control we obtain much the same
response but with a far better behaved input signal as shown in Fig. 4.3(b). This will decrease the
wear on the actuator.
If you are using Eqn. 4.1 with the gain Kc set to zero, then no control at all will result irrespective
of the integral time. For this reason, controllers either have four parameters as opposed to the
three in Eqn. 4.1, or we can follow the S IMULINK convention shown in Fig. 4.6.
4.3 SIMULATING PID CONTROL IN SIMULINK
S IMULINK is an ideal platform to rapidly simulate control problems. While it does not have the
complete flexibility of raw M ATLAB, this is more than compensated for in the ease of construction
and good visual feedback necessary for rapid prototyping. Fig. 4.4 shows a S IMULINK block
diagram for the continuous PID control of a third order plant.
Figure 4.4: A Simulink block diagram of a PID controller and a third-order plant.
The PID controller block supplied as part of Simulink is slightly different from the classical description given in Eqn. 4.2. Simulink's continuous PID block uses the complete parallel skeleton,
$$G_c(s) = P + \frac{I}{s} + D\,\frac{Ns}{s+N} \tag{4.12}$$
where we choose the three tuning constants, P, I and D, and optionally the filter coefficient N, which typically lies between 10 and 100. You can verify this by unmasking the PID controller block (via the options menu) to exhibit the internals as shown in Fig. 4.5.
Figure 4.5: The internals of the Simulink PID controller block: proportional, integral and derivative gains, with the filter coefficient N in a feedback loop around an integrator.
Note how the derivative component of the S IMULINK PID controller in Eqn. 4.12 follows the
realisable approximation given in Eqn. 4.4 by using a feedback loop with an integrator and gain
of N = 100 as a default.
Block diagrams of both the S IMULINK implementation and the classical PID scheme are compared
in Fig. 4.6. Clearly the tuning constants for both schemes are related as follows:
$$P = K_c, \qquad I = \frac{K_c}{\tau_i}, \qquad D = K_c\tau_d \tag{4.13}$$
or alternatively
$$K_c = P, \qquad \tau_i = \frac{K_c}{I}, \qquad \tau_d = \frac{D}{K_c} \tag{4.14}$$
The S IMULINK scheme has the advantage of allowing integral-only control without modification.
Figure 4.6: Block diagram of PID controllers as implemented in Simulink (left) and in classical textbooks (right).
If you would rather use the classical textbook style PID controller, then it is easy to modify the
S IMULINK PID controller block. Fig. 4.7 shows a S IMULINK implementation which includes the
filter on the derivative component following Eqn. 4.6. Since PID controllers are very common,
you may like to mask the controller as illustrated in Fig. 4.8, add a suitable graphic, and add this
to your S IMULINK library.
Figure 4.7: A realisable continuous PID controller implemented in Simulink with a filter on the derivative action.
Figure 4.8: PID control of the third-order plant 3/(3s³ + 5s² + 6s + 1.6) with tuning constants P = 2, I = 1.5, D = 2, or in classical form Kc = 2, τi = 1.33, τd = 1.
Because the derivative term in the PID controller acts on the error rather than the output, we see
a large derivative kick in the controller output1 . We can avoid this by using the PID block with
anti-windup, or by modifying the PID block itself. Section 4.4.1 shows how this modification
works.
4.4 INDUSTRIAL PID CONTROLLERS
Industrial PID controllers are in fact considerably more complicated than the textbook formula
of Eqn. 4.1 would lead you to believe. Industrial PID controllers have typically between 15 and
25 parameters. The following describes some of the extra functionality one needs in an effective
commercial PID controller.
4.4.1 AVOIDING DERIVATIVE KICK
If we implement the classical PID controller such as Eqn. 4.3 with significant derivative action, the input will jump excessively every time either the output or the setpoint changes abruptly. Under normal operating conditions, the output is unlikely to change rapidly, but during a setpoint change, the setpoint will naturally change abruptly, and this causes a large, though brief, spike in the derivative of the error. This spike is fed to the derivative part of the PID controller, and causes unpleasant transients in the manipulated variable. If left unmodified, this may cause excessive wear in the actuator. Industrial controllers and derivative kick are further covered in [181, p191].
It is clear that there is a problem with the controller giving a large kick when the setpoint abruptly changes. This is referred to as derivative kick and is due to the near-infinite derivative of the error when the setpoint instantaneously changes. One way to avoid problems of this nature is to use the derivative of the measurement, ẏ, rather than the derivative of the error, ε = y* − y. If we do this, the derivative kick is eliminated, and the input is much less excited. The equations of both the classical and the anti-kick PID controllers are compared below.
$$\text{Normal:}\qquad G_c(\epsilon) = K_c\left(\epsilon + \frac{1}{\tau_i}\int_0^t \epsilon\, d\tau + \tau_d\frac{d\epsilon}{dt}\right) \tag{4.15}$$
$$\text{Anti-kick:}\qquad G_c(\epsilon, y) = K_c\left(\epsilon + \frac{1}{\tau_i}\int_0^t \epsilon\, d\tau - \tau_d\frac{dy}{dt}\right) \tag{4.16}$$
The anti-derivative kick controller is sometimes known as a PI-D controller with the dash indicating that the PI part acts on the error, and the D part acts on the output measurement. In S IMULINK
you can build a PID controller with anti-kick by modifying the standard PID controller block as
shown in Fig. 4.9.
Figure 4.9: PID controller with anti-derivative kick applied to the plant 3/((s + 4)(s + 1)(s + 0.4)). See also Fig. 4.10.
Fig. 4.10 compares the controlled performance for the third order plant and tuning constants
given previously in Fig. 4.4 where the derivative term uses the error (lefthand simulation), with
the modification where the derivative term uses the measured variable (righthand simulation).
Evident from Fig 4.10 is that the PID controller using the measurement rather than the error
behaves better, with much less controller action. Of course, the performance improvement is
only evident when the setpoint is normally stationary, rather than a trajectory following problem.
Stationary setpoints are the norm for industrial applications, but if for example, we subjected the
closed loop to a sine wave setpoint, then the PID that employed the error in the derivative term would perform better than the anti-kick version.

Figure 4.10: PID control of the plant given in Fig. 4.9. The derivative kick is evident in the input (lower trend of the left-hand simulation) of the standard PID controller. We can avoid this kick by using the derivative of the output rather than the error, as seen in the right-hand trend. Note that the output controlled performance is similar in both cases.
AN ELECTROMAGNETIC BALANCE ARM
The electromagnetic balance arm described previously in 1.3.1 is extremely oscillatory, as shown in Fig. 1.6(b). The overshoot is about 75%, which corresponds to a damping ratio of ζ ≈ 0.1 assuming a prototype second-order process model.
To stabilise a system with poles so close to the imaginary axis requires substantial derivative
action. Without derivative action, the integral component needed to reduce the offset would
cause instability. Unfortunately however the derivative action causes problems with noise and
abrupt setpoint changes as shown in Fig. 4.11(a).
The controller output, u(t) exhibits a kick every time the setpoint is changed. So instead of the
normal PID controller used in Fig. 4.11(a), an anti-kick version is tried in Fig. 4.11(b). Clearly
there are no spikes in the input signal when Eqn. 4.16 is used, and the controlled response is
slightly improved.
Abrupt setpoint changes are not the only thing that can trip up a PID controller. Fig. 4.12 shows
another response from the electromagnetic balance, this time with even more derivative action.
At low levels the balance arm is very oscillatory, although this behaviour tends to disappear at
the higher levels owing to the nonlinear friction effects.
Figure 4.11: Illustrating the improvement of anti-derivative kick schemes for PID controllers when applied to the experimental electromagnetic balance.
Figure 4.12: A response from the electromagnetic balance with increased derivative action.
4.4.2 INPUT SATURATION
Invariably in practical cases the actual manipulated variable value, u, demanded by the PID controller is impossible to implement owing to physical constraints on the system. A control valve for example cannot shut less than 0% open or open more than 100%. Normally clamps are placed on the manipulated variable to prevent unrealisable input demands occurring, such as
$$u_{min} < u < u_{max}$$
or in other words, if the input u is larger than the maximum input allowed, u_max, it will be reset to that maximum input, and similarly the input is saturated if it is less than the minimum allowable limit. In addition to an absolute limit on the position of u, the manipulated variable cannot instantaneously change from one value to another. This can be expressed as a limit on the derivative of u, such as $|du/dt| < c_d$.
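In code, such a clamp is a single line, with u_min and u_max being the physical limits:

u = max(min(u,u_max),u_min);   % saturate the manipulated variable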
However the PID control algorithm presented so far assumes that the manipulated variable is
unconstrained so when the manipulated variable does saturate, the integral error term continues
to grow without bound. When finally the plant catches up and the error is reduced, the integral
term is still large, and the controller must eat up this error. The result is an overly oscillatory
controlled response.
This is known as integral windup. Historically with the analogue controllers, integral windup
was not much of a problem since the pneumatic controllers had only a limited integral capacity.
However, this limit is effectively infinite in a digital implementation.
There are a number of ways to prevent integral windup, and these are discussed in [16, pp10–14] and more recently in [140, p60] and [33]. The easiest way is to check the manipulated variable position, and ignore the integral term if the manipulated variable is saturated.
An alternative modification is to compute the difference between the desired manipulated variable and the saturated version and to feed this value back to the integrator within the controller.
This is known as anti-windup tracking and is shown in Fig. 4.13(a). When the manipulated input is
not saturated, there is no change to normal PID algorithm. Note that in this modified PID configuration we differentiate the measured value (not the error), we approximate the derivative term
as discussed in section 4.2.1, and we can turn off the anti-windup component with the manual
switch.
As an example (adapted from [20, Fig 8.9]), suppose we are to control an integrator plant, G(s) =
1/s, with tight limits on the manipulated variable, |u| < 0.1. Without anti-windup, the controller
output rapidly saturates, and the uncompensated response shown in Fig. 4.13(b) is very oscillatory. However with anti-windup enabled, the controlled response is much improved.
4.5 THE DISCRETE PID CONTROLLER
To implement a PID controller as given in Eqn. 4.1 on a computer, one must first discretise or approximate the continuous controller equation. Assuming we are operating at a constant sample time T, if the error at time t = kT is εk, then the continuous expression
$$u(t) = K_c\left(e(t) + \frac{1}{\tau_i}\int_0^t e(t)\,dt + \tau_d\frac{de(t)}{dt}\right) \tag{4.17}$$
(a) The PID controller with anti-windup tracking implemented in Simulink.
(b) Turning on the anti-windup controller at t = 660 dramatically improves the controlled response.
Figure 4.13: The advantages of using anti-windup are evident after t = 660.
can be approximated at the kth sample time by
$$u_k = K_c\Bigg(\epsilon_k + \underbrace{\frac{T}{\tau_i}\sum_{j=0}^{k}\epsilon_j}_{\text{integral}} + \underbrace{\frac{\tau_d}{T}\left(\epsilon_k - \epsilon_{k-1}\right)}_{\text{differential}}\Bigg) \tag{4.18}$$
The integral in Eqn. 4.17 is approximated in Eqn. 4.18 by the rectangular rule and the derivative
is approximated as a first order difference, although other discretisations are possible. Normally
the sample time, T , used by the controller is much faster than the dominant time constants of the
process so the approximation is satisfactory and the discrete PID controller is, to all intents and
purposes, indistinguishable from a continuous controller.
It is considerably more convenient to consider the discrete PID controller as a rational polynomial expression in z. Taking z-transforms of the discrete position form of the PID controller, Eqn. 4.18, we get
$$U(z) = K_c\Bigg(E(z) + \underbrace{\frac{T}{\tau_i}\left(\cdots + z^{-2} + z^{-1} + 1\right)E(z)}_{\text{integral}} + \underbrace{\frac{\tau_d}{T}\left(1 - z^{-1}\right)E(z)}_{\text{differential}}\Bigg) \tag{4.19}$$
$$= K_c\left(1 + \frac{T}{\tau_i}\,\frac{1}{1 - z^{-1}} + \frac{\tau_d}{T}\left(1 - z^{-1}\right)\right)E(z) \tag{4.20}$$
Eqn. 4.19 is the discrete approximation to Eqn. 4.2 and the approximation is reasonable provided
the sample time, T, is sufficiently short. A block diagram of this discrete approximation computes the proportional gain Kc in series with three parallel paths: unity, the integral term $\frac{T}{\tau_i}\frac{1}{1-z^{-1}}$, and the difference term $\frac{\tau_d}{T}(1-z^{-1})$; we could further manipulate this, using block diagram algebra rules, to use only delay elements $z^{-1}$.
There are other alternatives for a discrete PID controller depending on how we approximate the integral term. One difference scheme gives the integral add-on term
$$G_i(z) = \frac{T}{z-1}$$
or, using a forward difference,
$$G_i(z) = \frac{Tz}{z-1}$$
or, using a trapezoidal approximation,
$$G_i(z) = \frac{T(z+1)}{2(z-1)}$$
We can insert any one of these alternatives to give the overall discrete z-domain PID controller, although the trapezoidal scheme gives
$$G_{PID}(z) = K_c\left(1 + \frac{T\left(1 + z^{-1}\right)}{2\tau_i\left(1 - z^{-1}\right)} + \frac{\tau_d}{T}\left(1 - z^{-1}\right)\right) \tag{4.21}$$
$$= \frac{K_c}{2T\tau_i}\,\frac{\left(T^2 + 2\tau_i T + 2\tau_d\tau_i\right) + \left(T^2 - 2\tau_i T - 4\tau_i\tau_d\right)z^{-1} + 2\tau_i\tau_d\, z^{-2}}{1 - z^{-1}} \tag{4.22}$$
Figure 4.14: A discrete PID controller in Simulink using a trapezoidal approximation for the integral with sample time T, following Eqn. 4.21. This controller block is used in the simulation presented in Fig. 4.15.
4.5.1 DISCRETISING CONTINUOUS PID CONTROLLERS
The easiest way to generate a discrete PID controller is to simply call the standard MATLAB PID function pidstd with a trailing argument to indicate that you want a discrete controller. Since the default discretisation strategy is the forward Euler, it would be prudent to explicitly state the stable backward Euler option for both the integration and differentiation.
Listing 4.2: Constructing a discrete (filtered) PID controller

>> C = pidstd(1,2,3,100,0.1, ...
      'IFormula','BackwardEuler','DFormula','BackwardEuler')
Discrete-time PIDF controller in standard form:

C(z) = Kp * (1 + (1/Ti)*(Ts*z/(z-1)) + Td/(Td/N + Ts*z/(z-1)))

with Kp = 1, Ti = 2, Td = 3, N = 100, Ts = 0.1

>> tf(C)
Transfer function:
24.13 z^2 - 47.4 z + 23.31
--------------------------
 z^2 - 1.231 z + 0.2308

Sampling time: 0.1
VELOCITY FORM OF THE PID CONTROLLER
Eqn. 4.18 is called the position form implementation of the PID controller. An alternative form is the velocity form, which is obtained by subtracting the previous control input $u_{k-1}$ from the current input $u_k$ to get
$$\Delta u_k = u_k - u_{k-1} = K_c\left[\left(1 + \frac{T}{\tau_i} + \frac{\tau_d}{T}\right)\epsilon_k - \left(1 + \frac{2\tau_d}{T}\right)\epsilon_{k-1} + \frac{\tau_d}{T}\,\epsilon_{k-2}\right] \tag{4.23}$$
The velocity form in Eqn. 4.23 has three advantages over the position form (see [193, pp636–637]): it requires no initialisation (the computer does not need to know the current u), it has no integral windup problems, and the algorithm offers some protection against computer failure in that if the computer crashes, the input remains at the previous, presumably reasonable, value. One drawback, however, is that it should not be used for P or PD controllers since the controller is then unable to maintain the reference value.
Fig. 4.15 shows a S IMULINK block diagram of a pressure control scheme on a headbox of a paper
machine. The problem was that the pressure sensor was very slow, only delivering a new pressure
reading every 5 seconds. The digital PID controller however was running with a frequency of
1 Hz. The control engineer in this application faced with poor closed loop performance, was
concerned that the pressure sensor was too slow and therefore should be changed. In Fig. 4.15,
the plant is a continuous transfer function while the discrete PID controller runs with sample
period of T = 1 second, and the pressure sensor is modelled with a zeroth-order hold with T = 5
seconds. Consequently the simulation is a mixed continuous/discrete example, with more than
one discrete sample time.
Figure 4.15: Headbox control of the plant 3.5/(200s² + 30s + 1) with a slow pressure transducer. The discrete PID controller was given in Fig. 4.14.
Fig. 4.16 shows the controlled results. Note how the pressure signal to the controller lags behind
the true pressure, but the controller still manages to control the plant.
Figure 4.16: Headbox control with a slow pressure transducer measurement sample time of T = 5
while the control sample time is T = 1. Upper: The pressure setpoint (dotted), the actual pressure
and the sampled-and-held pressure fed back to the PID controller. Lower: The controller output.
4.5.2 SIMULATING A PID CONTROLLED RESPONSE IN MATLAB
We can simulate a PID controlled plant in M ATLAB by writing a small script file that calls a general
PID controller function. The PID controller is written in the discrete velocity form following
Eqn. 4.23 in the M ATLAB function file pidctr.m shown in listing 4.3.
Listing 4.3: A simple discrete velocity-form PID controller
function [u,e] = pidctr(err,u,dt,e,pidt)
% [u,e] = pidctr(err,u,dt,e,pidt)
% Discrete velocity-form PID controller, Eqn. 4.23
% err = current error; u = u(t), the current manipulated variable
% dt = T, the sample time; e = row vector of the past 3 errors
% pidt = [Kc, 1/taui, taud] = tuning constants
k = pidt(1); rs = pidt(2); td2 = pidt(3)/dt;
e = [err,e(1:2)];                       % error shift register
du = k*[1+rs*dt+td2, -1-2*td2, td2]*e'; % Eqn. 4.23
u = du + u;                             % update control value: u_new = u_old + du
return
This simple PID controller function is a very naive implementation of a PID controller without any
of the necessary modifications common in robust commercial industrial controllers as described
in section 4.4.
The plant to be controlled for this simulation is
$$G_p(q^{-1}) = q^{-d}\,\frac{1.2\, q^{-1}}{1 - 0.25 q^{-1} - 0.5 q^{-2}} \tag{4.24}$$
where the sample time is T = 2 seconds and the dead time is d = 3 sample time units, and the setpoint is a long-period square wave. For this simulation, we will try out the PID controller with tuning constants of Kc = 0.3, 1/τi = 0.2 and τd = 0.1. How I arrived at these tuning constants is discussed later in 4.6.
The M ATLAB simulation using the PID function from Listing 4.3 is given by the following script
file:
a=[0.25,0.5]; b=1.2; theta=[a b]'; dead = 3;
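Only the first line of the original script survives above; a sketch of the remainder follows, in which the setpoint, horizon and any variable names beyond those already defined are assumptions.

dt = 2; t = dt*(0:250)';                          % sample time T = 2
yspt = sign(sin(2*pi*t/400));                     % long-period square wave setpoint
pidt = [0.3, 0.2, 0.1];                           % Kc, 1/taui, taud
y = zeros(size(t)); u = y; e = zeros(1,3);        % collect i/o
for i = 4+dead:length(t)
  y(i) = a(1)*y(i-1)+a(2)*y(i-2)+b*u(i-1-dead);   % system prediction, Eqn. 4.24
  err = yspt(i)-y(i);                             % current error
  [u(i),e] = pidctr(err,u(i-1),dt,e,pidt);        % PID controller from Listing 4.3
end
subplot(2,1,1); plot(t,y,t,yspt,'--')             % plot results as in Fig. 4.17
subplot(2,1,2); stairs(t,u)                       % input as zeroth-order hold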
Figure 4.17 shows the controlled response of this simulation. Note how I have plotted the input (lower plot of Fig. 4.17) as a series of horizontal lines, using the stairs function, to show that the input is actually a piecewise constant zeroth-order hold for this discrete case.
4.5.3 CONTROLLER PERFORMANCE AS A FUNCTION OF SAMPLE TIME
Given that the discrete PID controller is an approximation to the continuous controller, we must
expect a deterioration in performance with increasing sample time. Our motivation to use coarse
sampling times is to reduce the computational load. Fig. 4.18 compares the controlled response
of the continuous plant,
$$G_p(s) = \frac{s+3}{(s+4)\left(\tau^2 s^2 + 2\tau\zeta s + 1\right)}$$
Figure 4.17: The PID controlled response of the plant of Eqn. 4.24; the input (lower trend) is held piecewise constant between samples.

Figure 4.18: The effect of varying the sampling time, T (= 1, 2, 4 and 6), when using a discrete PID controller compared to a continuous PID controller. As T → 0, the discrete PID controller converges to the continuous PID controller.
with τ = 4, ζ = 0.4, given the same continuous controller settings at different sampling rates. Note that the reset and derivative controller settings for the discrete controller are functions of the sample time, and must be adjusted accordingly. Fig. 4.18 shows that the controller performance improves as the sampling time is decreased and converges to the continuous case. However be warned: if the sampling time is too small, the discrete PID controller is then susceptible to numerical problems.
4.6 PID TUNING

Tuning PID controllers is generally considered an art and is an active research topic both in academia and in industry. Typical chemical processing plants have hundreds or perhaps thousands of control loops, the majority of which are non-critical and of the PID type.
4.6.1 OPEN LOOP TUNING METHODS
Open loop tuning methods are where the feedback controller is disconnected and the experimenter excites the plant and measures the response. The key point here is that since the controller is now disconnected, the plant is clearly no longer strictly under control. If the loop is critical, then this test could be hazardous. Indeed if the process is open-loop unstable, then you will be in trouble before you begin. Notwithstanding this, for many process control applications open loop experiments are usually quick to perform, and deliver informative results.
To obtain any information about a dynamic process, one must excite it in some manner. If the
system is steady at setpoint, and remains so, then you have no information about how the process
behaves. (However you do have good control so why not quit while you are ahead?) The type
of excitation is again a matter of choice. For the time domain analysis, there are two common
types of excitation signal; the step change, and the impulse test, and for more sophisticated
analysis, one can try a random input test. Each of the three basic alternatives has advantages and
disadvantages associated with them, and the choice is a matter for the practitioner.
Step change The step change method is where the experimenter abruptly changes the input to
the process. For example, if you wanted to tune a level control of a buffer tank, you could
sharply increase (or decrease) the flow into the tank. The controlled variable then slowly
rises (or falls) to the new operating level. When I want a quick feel for a new process, I
like to perform a step test and this quickly gives me a graphical indication of the degree of
damping, overshoot, rise time and time constants better than any other technique.
Impulse test The impulse test method is where the input signal is abruptly changed to a new
value, then immediately equally abruptly changed back to the old value. Essentially you
are trying to physically alter the input such that it approximates a Dirac delta function.
Technically both types of inputs are impossible to perform perfectly, although the step test
is probably easier to approximate experimentally.
The impulse test has some advantages over the step test. First, since the input after the experiment is the same as before the experiment, the process should return to the same process
value. This means that the time spent producing off-specification (off-spec) product is minimised. If the process does not return to the same operating point, then this indicates that
the process probably contains an integrator. Secondly the impulse test (if done perfectly)
contains a wider range of excitation frequencies than the step test. An excitation signal with
a wide frequency range gives more information about the process. However the impulse
test requires slightly more complicated analysis.
Random input The random input technique assumes that the input is a random variable approximating white noise. Pure white noise has a wide (theoretically infinite) frequency range,
and can therefore excite the process over a similarly wide frequency range. The step test,
even a perfect step test, has a limited frequency range. The subsequent analysis of this type
of data is now much more tedious, though not really any more difficult, but it does require
a data logger (rather than just a chart recorder) and a computer with simple regression
software. Building on this type of process identification, where the input is assumed, within reason, arbitrary, are methods referred to as Time Series Analysis (TSA) or spectral analysis, both of which are dealt with in more detail in chapter 6.
OPEN-LOOP OR STEP RESPONSE TUNING METHODS
There are various tuning strategies based on an open-loop step response. While they all follow the same basic idea, they differ slightly in how they extract the model parameters from the recorded response, and also in how they relate appropriate tuning constants to those model parameters. This section describes three alternatives: the classic Ziegler-Nichols open loop test, the Cohen-Coon test, and the Åström-Hägglund suggestion.
The classic way of open loop time domain tuning was first published in the early 1940s by Ziegler and Nichols³, and is further described in [152, pp596–597] and in [181, chap 13]. Their scheme requires you to apply a unit step to the open-loop process and record the output. From the response, you graphically estimate the two parameters T and L as shown in Fig. 4.19. Naturally if your response is not sigmoidal or S-shaped such as that sketched in Fig. 4.19, but instead exhibits overshoot or an integrator, then this tuning method is not applicable.
This method implicitly assumes the plant can be adequately approximated by a first-order transfer function with time delay,
$$G_p \approx \frac{K e^{-\theta s}}{\tau s + 1} \tag{4.25}$$
where L is approximately the dead time θ, and T is the open loop process time constant τ. Once you have recorded the open-loop input/output data, and subsequently measured the times T and L, the PID tuning parameters can be obtained directly from Table 4.2.
³ Ziegler & Nichols actually published two methods, one open loop and one closed loop, [207]. However it is only the second, closed loop, method that is generally remembered today as the Ziegler–Nichols tuning method.
Figure 4.19: The parameters T and L to be graphically estimated for the open-loop tuning method relations given in Table 4.2.
A similar open loop step tuning strategy, due to Cohen and Coon and published in the early 1950s, is where you record the time taken to reach 50% of the final output value, t₂, and the time taken to reach 63% of the final value, t₃. You then calculate the effective deadtime with
$$\theta = \frac{t_2 - \ln(2)\, t_3}{1 - \ln(2)}$$
The open loop gain, K, can be calculated by dividing the final change in output by the change in the input step.
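A sketch of this identification step, assuming t2, t3 and the output and input step sizes dy and du have been measured from the recorded response:

theta = (t2 - log(2)*t3)/(1 - log(2));  % effective deadtime
tau = t3 - theta;                       % since t3 = theta + tau (63% point)
K = dy/du;                              % open loop gain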
Once again, now that you have a model of the plant to be controlled in the form of Eqn. 4.25, you can use one of the alternative heuristics given in Table 4.2. The recommended range of the deadtime ratio for the Cohen-Coon values is 0.1 < θ/τ < 1. Also listed in Table 4.2 are the empirical suggestions from [17] known as AMIGO, or approximate M-constrained integral gain optimisation. These values have the same form as the Cohen-Coon suggestions but perform slightly better.
Fig. 4.20 illustrates the approximate first-order plus deadtime model fitted to a higher-order overdamped process using the two points at 50% and 63%. The subsequent controlled response using the values derived from Table 4.2 is given in Fig. 4.21.
Conventional thought now considers that both the Ziegler-Nichols scheme in Table 4.2 and the Cohen-Coon scheme give controller constants that are too oscillatory, and hence other modified tuning parameters exist, [180, p329]. Problem 4.1 demonstrates this tuning method.
Problem 4.1 Suppose you have a process that can be described by the transfer function

    Gp = K / ((3s + 1)(6s + 1)(0.2s + 1))

Evaluate the time domain response to a unit step change in input and graphically estimate the parameters L and T. Design a PI and PID controller for this process using Table 4.2.
Table 4.2: The PID tuning parameters as a function of the open loop model parameters K, τ and θ from Eqn. 4.25, as derived by Ziegler-Nichols (open loop method), Cohen and Coon, or alternatively the AMIGO rules from [17].

                               Controller   Kc                           τi                           τd
 Ziegler-Nichols (open loop)   P            (1/K)(τ/θ)                   -                            -
                               PI           (0.9/K)(τ/θ)                 θ/0.3                        -
                               PID          (1.2/K)(τ/θ)                 θ/0.5                        0.5θ
 Cohen-Coon                    P            (1/K)(τ/θ)(1 + θ/(3τ))       -                            -
                               PI           (1/K)(τ/θ)(0.9 + θ/(12τ))    θ(30 + 3θ/τ)/(9 + 20θ/τ)     -
                               PID          (1/K)(τ/θ)(4/3 + θ/(4τ))     θ(32 + 6θ/τ)/(13 + 8θ/τ)     4θ/(11 + 2θ/τ)
 AMIGO                         PID          (1/K)(0.2 + 0.45τ/θ)         θ(0.4θ + 0.8τ)/(θ + 0.1τ)    0.5θτ/(0.3θ + τ)
Figure 4.20: The approximate first-order plus deadtime model fitted to the actual response of a higher-order overdamped process, using the times t2 and t3 at which the output reaches 50% and 63% of its final value.
If we have gone to all the trouble of estimating a model of the process, then we could in principle use this model for controller design in a more formal manner than just relying on the suggestions given in Table 4.2. This is the thinking behind the Internal Model Control or IMC controller design strategy. The IMC controller is a very general controller, but if we restrict our attention to just controllers of the PID form, we can derive simple relations between the model parameters and appropriate controller settings.
The nice feature of the IMC strategy is that it provides the scope to adjust the tuning with a single parameter, the desired closed loop time constant, τc, something that is missing from the strategies given previously in Table 4.2. A suitable starting guess for the desired closed loop time constant is to set it equal to the dominant open loop time constant.
Table 4.3 gives the PID controller settings based on various common process models. For a more
complete table containing a larger selection of transfer functions, consult [181, p308].
Figure 4.21: The closed loop response for a P, PI and PID controlled system using the Cohen-Coon
strategy from Fig. 4.20.
Table 4.3: PID controller settings based on IMC for a small selection of common plants where the control engineer gets to choose a desired closed loop time constant, τc.

 Plant                      KcK                      τi          τd
 K/(τs + 1)                 τ/τc                     τ           -
 K/((τ1s + 1)(τ2s + 1))     (τ1 + τ2)/τc             τ1 + τ2     τ1τ2/(τ1 + τ2)
 K/s                        2/τc                     2τc         -
 K/(s(τs + 1))              (2τc + τ)/τc²            2τc + τ     2τcτ/(2τc + τ)
 Ke^{-θs}/(τs + 1)          (τ + θ/2)/(τc + θ/2)     τ + θ/2     τθ/(2τ + θ)
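As a rough illustration of the last row of Table 4.3, the following sketch assumes a hypothetical first-order plus deadtime model with K = 1, τ = 10 and θ = 3:

K = 1; tau = 10; theta = 3;                 % hypothetical model parameters
tauc = tau;                                 % desired closed loop time constant, τc
Kc = (tau + theta/2)/(K*(tauc + theta/2));  % controller gain from Table 4.3
taui = tau + theta/2;                       % integral time
taud = tau*theta/(2*tau + theta);           % derivative time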
A simplification of this IMC idea, in an effort to make the tuning as effortless as possible, is given in [188].
Perhaps the easiest way to tune a plant when the transfer function is known is to use the M ATLAB
function pidtune, or the GUI, pidtool as shown in Fig. 4.22.
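For example, a minimal sketch using a hypothetical plant:

G = tf(1,[10 1],'iodelay',3);  % hypothetical first-order plus deadtime plant
C = pidtune(G,'PID')           % automatically tuned PID controller
step(feedback(C*G,1))          % check the closed loop servo response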
4.6.2 Closed loop tuning methods
The main disadvantage of the open loop tuning methods is that they are performed with the controller switched to manual, i.e. leaving the output uncontrolled in open loop. This is often unreasonable for systems that are open-loop unstable, and impractical for plants where there is a large vested interest and the operating engineers are nervous. The Ziegler-Nichols continuous cycling method described next is a well-known closed loop tuning strategy used to address this problem, although a more recent single response strategy given later in 4.6.3 is faster, safer, and easier to use.
Figure 4.22: PID tuning of an arbitrary transfer function using the M ATLAB GUI.
Ziegler-Nichols continuous cycling method
The Ziegler-Nichols continuous cycling method is one of the best known closed loop tuning strategies. The controller is left on automatic, but the reset and derivative components are turned off. The controller gain is then gradually increased (or decreased) until the process output continuously cycles after a small step change or disturbance. At this point, the controller gain you have selected is the ultimate gain, Ku, and the observed period of oscillation is the ultimate period, Pu. Ziegler and Nichols originally suggested, in 1942, PID tuning constants as a function of the ultimate gain and ultimate period as shown in the first three rows of Table 4.4. While these values give near-optimum responses for load changes, practical experience and theoretical considerations (i.e. [16, 31]) have shown that these tuning values tend to give responses that are too oscillatory for setpoint changes due to the small phase margin. For this reason, various people have subsequently modified the heuristics slightly, as listed in the remaining rows of Table 4.4 which is expanded from that given in [180, p318].
Experience has shown that the Chien-Hrones-Reswick values give an improved response over the original Ziegler-Nichols values, while the Åström-Hägglund suggestions allow one to specify the desired phase margin directly.
Table 4.4: Various alternative Ziegler-Nichols type PID tuning rules as a function of the ultimate gain, Ku, and ultimate period, Pu.

                               Response type      Kc           τi          τd
 Ziegler-Nichols               P                  0.5Ku        -           -
                               PI                 0.45Ku       Pu/1.2      -
                               PID                0.6Ku        Pu/2        Pu/8
 Modified ZN                   No overshoot       0.2Ku        Pu/2        Pu/2
                               Some overshoot     0.33Ku       Pu/2        Pu/3
 Tyreus-Luyben                 PI                 0.31Ku       2.2Pu       -
                               PID                0.45Ku       2.2Pu       Pu/6.3
 Chien-Hrones-Reswick          PI                 0.47Ku
 Åström-Hägglund               PI                 0.32Ku       0.94Pu      -
 Specified phase-margin, φm    PID                Ku cos(φm)   f τd        Eqn. 4.58
To tune a PID controller using the closed-loop Ziegler-Nichols method, do the following:
1. Connect a proportional controller to the plant to be controlled. I.e. turn the controller on automatic, but turn off the derivative and integral action. (If your controller uses integral time, you will need to set τi to the maximum allowable value.)
2. Choose a sensible trial controller gain, Kc, to start.
3. Disturb the process slightly and record the output.
4. If the response is unstable, decrease Kc and go back to step 3, otherwise increase Kc and
repeat step 3 until the output response is a steady sinusoidal oscillation. Once a gain Kc has
been found to give a stable oscillation proceed to step 5.
5. Record the current gain, and measure the period of oscillation in time units (say seconds).
These are the ultimate gain, Ku and corresponding ultimate period, Pu .
6. Use Table 4.4 to establish the P, PI, or PID tuning constants.
7. Test the closed loop response with the new PID values. If the response is not satisfactory,
further manual fine-tuning may be necessary.
This method has proved so popular that automatic tuning procedures have been developed based around this theory, as detailed in chapter 7. Despite the fact that this closed loop test is difficult to apply experimentally, and gives only marginal tuning results in many cases, it is widely used and very well known. Much of the academic control literature uses the Z-N tuning method as a basis on which to compare more sophisticated schemes, but often conveniently forgets in the process that the Z-N scheme was really developed for load disturbances as opposed to setpoint changes. Finally, many practicing instrument engineers (who are the ones actually tuning the plant) know only one formal tuning method: this one.
Consequently it is interesting to read the following quote from a review of a textbook in process control⁴: ". . . The inclusion of tuning methods based on the control loop cycling (Ziegler-Nichols method) without a severe health warning to the user reveals a lack of control room experience on behalf of the author."

⁴Review of A Real-Time Approach to Process Control by Svrcek, Mahoney & Young, reviewed by Patrick Thorpe in The Chemical Engineer, January 2002, p30.
Ziegler-Nichols tuning given a plant model
Finding the ultimate gain and frequency in practice requires a tedious trial-and-error approach.
If however, we already have the model of the process, say in the form of a transfer function, then
establishing the critical frequency analytically is much easier, although we may still need to solve
a nonlinear algebraic equation.
Suppose we have identified a model of our plant as

    G(s) = e^{-3s} / (6s² + 7s + 1)    (4.26)

and we want to control this plant using a PID controller. To use Ziegler-Nichols settings we need to establish the ultimate frequency, ωu, where the angle of G(s = iωu) is -π radians, or solve the nonlinear expression

    ∠G(iωu) = -π    (4.27)

for ωu. In the specific case given in Eqn. 4.26, we have

    ∠ ( e^{-3iωu} / (1 - 6ωu² + 7iωu) ) = -3ωu - tan⁻¹( 7ωu / (1 - 6ωu²) ) = -π    (4.28)

which is a non-trivial function of the ultimate frequency, ωu. However it is easy to graph this relation as a function of ωu and look for the zero crossing such as shown in Fig. 4.23, or use a numerical technique such as Newton-Raphson to establish that ωu ≈ 0.4839 rad/s, implying an ultimate period of Pu = 2π/ωu = 12.98 seconds per cycle.
Figure 4.23: The phase relation f(ω) = -3ω - tan⁻¹(7ω/(1 - 6ω²)) + π plotted as a function of ω, showing the zero crossing at the ultimate frequency ωu ≈ 0.48 rad/s.
A quick way to numerically solve Eqn. 4.28 is to first define an anonymous function and then to
use fzero to find the root.
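A minimal sketch of that approach, using the phase relation of Eqn. 4.28:

f = @(w) -3*w - atan2(7*w, 1-6*w.^2) + pi;  % phase relation of Eqn. 4.28 (see Fig. 4.23)
wu = fzero(f,0.5)                           % ultimate frequency, ωu ≈ 0.4839 rad/s
Pu = 2*pi/wu                                % ultimate period, ≈ 12.98 seconds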
Once we have found the critical frequency, ωu, we can establish the magnitude at this frequency by substituting s = iωu into the process transfer function, Eqn. 4.26,

    |G(iωu)| = |G(0.48i)| ≈ 0.2931

which gives a critical gain, Ku = 1/0.2931 = 3.412. An easy way to do this calculation in MATLAB is to simply use bode at a single frequency.
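For instance, a sketch assuming the plant of Eqn. 4.26 is stored in G:

G = tf(1,[6 7 1],'iodelay',3);  % the plant of Eqn. 4.26
[mag,phase] = bode(G,0.4839);   % magnitude and phase at ω = ωu
Ku = 1/mag                      % ultimate gain, ≈ 3.41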
Now that we have both the ultimate gain and frequency, we can use the classic Ziegler-Nichols rules in Table 4.4 to obtain our tuning constants and simulate the controlled process as shown in Fig. 4.24.
Figure 4.24: The step and load (at t = 100) responses of a P (left) and PI (middle) and PID (right)
controlled process using the closed loop Ziegler-Nichols suggestions from Table 4.4.
The tuned response to both a setpoint change and a load disturbance at t = 100 of all three
candidates shown in Fig. 4.24 is reasonable, but as expected the P response exhibits offset, and
the PI is sluggish. We would expect a good response from the PID controller because the actual
plant model structure (second order plus deadtime), is similar to the assumed model structure by
Ziegler and Nichols, which is first order plus deadtime. However while it could be argued that
the step response is still too oscillatory, the response to the load disturbance is not too bad.
In the very common case of a first order plus deadtime structure, Eqn. 4.25, we can find the ultimate frequency by solving the nonlinear equation

    θωu + tan⁻¹(τωu) - π = 0    (4.29)

for ωu, and then calculate the ultimate gain from a direct substitution,

    Ku = 1/|G(iωu)| = √(1 + τ²ωu²) / K    (4.30)
The tedious part of the above procedure is generating the nonlinear equation in ωu for the specific plant structure of interest. We can however automate this equation generation for standard polynomial transfer functions with deadtime as shown in the listing below.

[num,den] = tfdata(G,'v');  % Or use: cell2mat(G.num)
iodelay = G.iodelay;        % extract the deadtime
Gjw = @(w) polyval(num,1i*w)./polyval(den,1i*w);      % frequency response of the rational part
wu = fzero(@(w) angle(Gjw(w)) - iodelay*w + pi, 0.5); % solve arg G(iω) = -π for ωu
Ku = 1/abs(Gjw(wu));        % ultimate gain
Rather than solve for the critical frequency algebraically, we could alternatively rely on the MATLAB margin routine which extracts the gain and phase margins and associated frequencies for arbitrary transfer functions. Listing 4.4 shows how we can tune a PID controller for an arbitrary transfer function, although note that because this strategy attempts to solve a nonlinear expression for the critical frequency using a numerical iterative scheme, this routine is not infallible.
Listing 4.4: Ziegler-Nichols PID tuning rules for an arbitrary transfer function
G = tf(1,conv([6 1],[1 1]),'iodelay',5);  % Plant of interest, G = e^{-5s}/((6s+1)(s+1))
[Gm,Pm,wcg,wcp] = margin(G);   % gain margin and phase crossover frequency
Ku = Gm; Pu = 2*pi/wcg;        % ultimate gain and period
Kc = 0.6*Ku; taui = Pu/2; taud = Pu/8;  % classical ZN PID rules from Table 4.4
Gpid = pidstd(Kc,taui,taud);   % the resulting PID controller
Note that while the use of the ultimate oscillation Ziegler-Nichols tuning strategy is generally discouraged by practitioners, it is the oscillation near instability which is the chief concern, not the general idea of relating the tuning constants to the ultimate gain and frequency. If, for example, we already have a plant model in the form of a transfer function, perhaps derived from first principles or system identification techniques, then we can by-pass the potentially hazardous oscillation step, and compute the tuning constants directly as a function of ultimate gain and frequency.
4.6.3 Closed loop single-test tuning
Despite the fact that the Ziegler-Nichols continuous cycling tuning method is performed in closed loop, the experiment is both tedious and dangerous. The Yuwana-Seborg tuning method described here, [206], retains the advantages of the ZN tuning method, but avoids many of the disadvantages. Given the attractions of this closed-loop reaction curve tuning methodology, many extensions have subsequently been proposed, some of which are summarised in [50, 99]. The following development is based on the modifications of [49] with the corrections noted by [195].
The Yuwana-Seborg (YS) tuning technique is based on the assumption that the plant transfer function, while unknown, can be approximated by the first order plus deadtime structure,

    Gm(s) = Km e^{-θm s} / (τm s + 1)    (4.31)

with three plant parameters: process gain Km, time constant τm, and deadtime θm.
Surrounding this process with a trial, but known, proportional-only controller Kc gives a closed loop transfer function between reference r(t) and output y(t) of

    Y(s)/R(s) = Kc Km e^{-θm s} / (τm s + 1 + Kc Km e^{-θm s})    (4.32)
A Pade approximation can be used to expand the non-polynomial term in the denominator, although various people have subsequently modified this aspect of the method, such as using a different deadtime polynomial approximation. Using some approximation for the deadtime, we can approximate the closed loop response of Eqn. 4.32 with the more convenient

    Gcl ≈ K e^{-θs} / (τ²s² + 2ζτs + 1)    (4.33)

and we can extract the model parameters K, τ, ζ and θ in Eqn. 4.33 from a single closed loop control step test.
Suppose we have a trial proportional-only feedback controlled response with a known controller gain Kc. It does not really matter what this gain is, so long as the response oscillates sufficiently as shown in Fig. 4.25. If we step change the reference setpoint from r0 to r1, we are likely to see an underdamped response such as shown in Fig. 4.25. From this response we measure the initial output y0, the first peak yp1, the first trough ym1, and the second peak yp2, together with the associated times. Under these conditions we expect to see some offset given the proportional-only controlled loop, and the controller gain should be sufficiently high such that the response exhibits some underdamped (oscillatory) behaviour.
Figure 4.25: A typical closed loop response of a stable system to a P-controller
The steady-state output, y∞, can be estimated from the peaks and trough,

    y∞ = (yp1 yp2 - ym1²) / (yp1 + yp2 - 2ym1)    (4.34)

or alternatively, if the experimental test is of sufficient duration, y∞ could simply be the last value of y collected. Now we are ready to graphically fit the parameters to the closed loop model, Eqn. 4.33. The closed loop gain K is given by

    K = (y∞ - y0) / (r1 - r0)    (4.35)

while the damping ratio ζ follows from the decay of successive peaks,

    ζ = -ln(M) / √(π² + ln²(M)),   M = (yp2 - y∞)/(yp1 - y∞)    (4.36)
The closed loop deadtime is

    θ = 2tp1 - tm    (4.37)

the period of the damped oscillation is

    P = 2(tm - tp1)    (4.38)

and the time constant is

    τ = (tm - tp1) √(1 - ζ²) / π    (4.39)
Now that we have fitted a closed loop model, Eqn. 4.33, we can compute the ultimate gain, Ku, and ultimate frequency, ωu, by solving the nonlinear expression

    θωu + tan⁻¹( 2ζτωu / (1 - τ²ωu²) ) = π    (4.40)

for ωu, perhaps using an iterative scheme starting with ωu = 0. It follows then that the ultimate gain is

    Ku = Kc ( 1 + 1/|Gcl(iωu)| )    (4.41)

where the magnitude is

    |Gcl(iωu)| = K / √( (1 - τ²ωu²)² + (2τζωu)² )    (4.42)
Alternatively of course we could try to use the MATLAB routine margin to extract Ku and ωu. At this point, given the key design parameters Ku and ωu, we can tune PID controllers using the ZN scheme directly following Table 4.4. Note that we have now achieved our aim of tuning a PID controller using the Ziegler-Nichols rules without undertaking a repetitive, tedious, and possibly dangerous, experiment searching for the ultimate oscillations.
Alternatively we can derive the parameters for the assumed open loop plant model, Eqn. 4.31, as

    Km = |y∞ - y0| / ( Kc ( |r1 - r0| - |y∞ - y0| ) )    (4.43)
    τm = (1/ωu) √( Ku²Km² - 1 )    (4.44)
    θm = (1/ωu) ( π - tan⁻¹(τm ωu) )    (4.45)

and then subsequently we can tune it either using the ZN strategy given in Listing 4.4, the internal model controller strategy from Table 4.3, or perhaps the ITAE optimal tuning constants.
The optimal ITAE tuning parameters for P, PI and PID controllers are derived from

    Kc = (A/Km) (θm/τm)^B,   τi = C τm (θm/τm)^D,   τd = E τm (θm/τm)^F    (4.46)

where the controller structure-specific constants A, B, C, D, E, F are given in Table 4.5. We are now in a position to replace our trial Kc with the hopefully better controller tuning constants.
Note that the change in setpoint, |r1 - r0|, must be larger than the actual response change, |y∞ - y0|, owing to the offset exhibited by a proportional-only controller. If the system does not respond in this manner, then a negative process gain is implied, which in turn implies nonsensical imaginary time constants. Another often overlooked point is that if the controlled process response does not oscillate, then the trial gain Kc is simply not high enough. This gain should be increased, and the
Table 4.5: The ITAE optimal tuning constants A through F in Eqn. 4.46 for P, PI, and PID controllers.

 mode    A        B         C        D        E        F
 P       0.49     -1.084    -        -        -        -
 PI      0.859    -0.977    1.484    0.680    -        -
 PID     1.357    -0.947    1.176    0.738    0.381    0.99
test repeated. If the process never oscillates, no matter how high the gain is, then one can use an
infinite gain controller at least in theory.
Algorithm 4.2 summarises this single-test closed loop tuning strategy.
Algorithm 4.2 Yuwana-Seborg closed loop PID tuning
Tuning a PID controller using the closed-loop Yuwana-Seborg method is very similar to Algorithm 4.1, but avoids the trial and error search for the ultimate gain and period.
1. Connect a proportional only controller to the plant.
2. Choose a sensible trial controller gain, Kc, introduce a setpoint change, and record the output.
3. Check that the output is underdamped and shaped similar to Fig. 4.25, and if not increase
the trial controller gain, Kc , and repeat step 2.
4. From the recorded underdamped response, measure the peaks and troughs, and note the
demanded setpoint change. You may find Listing 4.5 helpful here.
5. Compute the parameters of the closed loop plant and solve for the ultimate frequency,
Eqn. 4.40 and gain, Eqn. 4.41.
6. Optionally compute the open loop plant model using Eqns. 4.43-4.45.
7. Compute the P, PI or PID tuning constants using Table 4.5 or perhaps using the model and
the ZN strategy in Listing 4.4.
8. Test the closed loop response with the new PID values. If the response is not satisfactory,
further manual fine-tuning may be necessary.
A MATLAB implementation to compute the tuning constants given response data is given in the listings below (Listings 4.5 to 4.9).
As a consequence of the assumption that the plant is adequately modelled using first-order dynamics with deadtime, the Yuwana-Seborg tuning scheme is not suitable for highly oscillatory processes, nor for plants with inverse response behaviour as illustrated in section 4.8.1. Furthermore, a consequence of the low order Pade approximation is that the strategy fails for processes with a large relative deadtime. Ways to use this tuning scheme in the presence of excessive measurement noise are described in section 5.2.1.
An example of a closed loop response tuner
Suppose we are to tune a PID controller for an inverse response process,

    G(s) = (-s + 1) / ( (s + 1)²(2s + 1) )    (4.47)
Such processes with right-hand plane zeros are challenging because of the potential instability owing to the initial response heading off in the opposite direction, as described further in 4.8.1.
The response to a unit setpoint change is given in the top figure of Fig. 4.26. By recording the
values of the peaks and troughs, we can estimate the parameters of an approximate model of the
closed loop response, Eqn. 4.33. A comparison of this model with the actual data is given in the
middle plot of Fig. 4.26. It is not a perfect fit because the underlying plant is not a pure first order
with deadtime, and because Eqn. 4.33 is only approximate anyway.
Figure 4.26: Tuning an inverse response process. Top: the trial closed loop step test with Kc = 1, showing the peaks and trough. Middle: the fitted closed loop model, Eqn. 4.33, compared with the actual response. Bottom: the identified open loop model compared with the true plant.
Given the closed loop model, we can now extract the open loop model of the plant which is compared with the true plant in the bottom trend of Fig. 4.26. Note how the first-order plus deadtime model, specifically in this case

    Ĝ = (1.003 / (3.128s + 1)) e^{-2.24s},    (4.48)

approximates the true plant with the inverse response replaced by additional deadtime.
Figure 4.27: The PI (left) and PID (right) closed loop responses using the ITAE (upper) and IMC
(lower) tuning schemes based on the identified model, Eqn. 4.48, but applied to the actual plant,
Eqn. 4.47.
The controlled responses shown in Fig. 4.27 are adequate, though somewhat conservative, in this case. Better performance would be achieved by reducing τc.
Automating the Yuwana-Seborg tuner
Because this is such a useful strategy for PID tuning, it is helpful to have a semi-automated procedure. The first step once you have collected the closed loop data with a trial gain is to identify
the values and timing of the peaks and troughs, yp1 , ym1 and yp2 . Listing 4.5 is a simple routine
that attempts to automatically extract these values. As with all small programs of this nature, the
identification part can fail, particularly given excessive noise or unusual response curves.
Listing 4.5: Identifies the characteristic points for the Yuwana-Seborg PID tuner from a trial closed
loop response
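A minimal sketch of what such a routine might do, assuming the trial response is logged in vectors t and y and that the Signal Processing Toolbox findpeaks function is available:

[pks, ip] = findpeaks(y);       % locate the peaks of the response
[trs, it] = findpeaks(-y);      % troughs found as peaks of the negated signal
yp1 = pks(1); tp1 = t(ip(1));   % first peak, yp1, and its time, tp1
yp2 = pks(2);                   % second peak, yp2
ym1 = -trs(1); tm = t(it(1));   % first trough, ym1, and its time, tm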
Using this peak and trough data, we can compute the parameters in the closed loop model following Listing 4.6.
Listing 4.6: Compute the closed loop model from peak and trough data
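A minimal sketch following Eqns. 4.34 to 4.39, where the decay-ratio expression for ζ is an assumed form of Eqn. 4.36:

yinf = (yp1*yp2 - ym1^2)/(yp1 + yp2 - 2*ym1);  % steady-state output, Eqn. 4.34
K = (yinf - y0)/(r1 - r0);                     % closed loop gain, Eqn. 4.35
M = (yp2 - yinf)/(yp1 - yinf);                 % peak decay ratio
zeta = -log(M)/sqrt(pi^2 + log(M)^2);          % damping ratio (assumed form of Eqn. 4.36)
dt = 2*tp1 - tm;                               % closed loop deadtime, Eqn. 4.37
tau = (tm - tp1)*sqrt(1 - zeta^2)/pi;          % time constant, Eqn. 4.39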
Now that we have the closed loop model, we can compute the ultimate gain, Ku, and ultimate frequency, ωu.
Listing 4.7: Compute the ultimate gain and frequency from the closed loop model parameters.
phase = @(w) dt*w + atan2(2*zeta*tau*w, 1-tau^2*w.^2) - pi;  % LHS of Eqn. 4.40
wu = fzero(phase,1);                               % ultimate frequency, ωu
B = K/sqrt((1-tau^2*wu^2)^2+(2*tau*zeta*wu)^2);    % |Gcl(iωu)|, Eqn. 4.42
Kcu = Kc*(1+1/B);                                  % Ultimate gain, Ku, Eqn. 4.41
Finally we can now extract the open loop first-order plus deadtime model using Listing 4.8.

Listing 4.8: Compute the open loop model, Gm, Eqn. 4.31.
Km = yinf/(Kc*(A-yinf));            % Plant gain, Km, Eqn. 4.43, with A the setpoint change
taum = 1/wu*sqrt(Kcu^2*Km^2-1);     % Plant time constant, τm, Eqn. 4.44
dm = 1/wu*(pi - atan(taum*wu));     % Plant delay, θm, Eqn. 4.45
Gm = tf(Km,[taum 1],'iodelay',dm);  % Open loop model estimate, Gm(s), Eqn. 4.31
Now that we have an estimate of the plant model, it is trivial to use, say, the IMC relations to compute reasonable PI or PID tuning constants. We do, however, need to decide on an appropriate desired closed loop time constant.
Listing 4.9: Compute appropriate PI or PID tuning constants based on a plant model, Gm, using the IMC schemes.
tauc = (taum+dm)/3;   % IMC desired closed loop time constant, τc
controller = 'PID';   % Type of desired controller, PI or PID
switch controller     % See IMC tuning schemes in Table 4.3.
    case 'PI'
        Kc = 1/Km*taum/(tauc+dm); taui = taum;
    case 'PID'
        Kc = 1/Km*(taum+dm/2)/(tauc+dm/2);
        taui = taum+dm/2; taud = taum*dm/(2*taum+dm);
end % switch
Now all that is left to do is load the controller tuning constants into the PID controller, and set the
controller to automatic.
4.6.4 Summary
These automated PID tuners are, not surprisingly, quite popular in industry and also in academic circles. Most of the schemes are based on what was described above: the unknown process is approximated by a simple transfer function for which the ultimate gain and frequency are known as a function of the parameters. Once these parameters are curve fitted, the PID controller is designed via a Ziegler-Nichols technique. There are numerous modifications to this basic technique, although most of them are minor. One extension is reported in [50] while [195] summarises subsequent modifications, corrects errors, and compares alternatives. The relay feedback method of Hägglund and Åström, [16], described next in 4.7, is also a closed loop single test method.
Problem 4.2
1. Modify the ys_ident.m function to improve the robustness. Add improved error checking, and try to avoid other common potential pitfalls such as negative controller gain, integrator responses, severe nonlinearities, excessive noise, spikes etc.
2. Test your automated closed loop tuner on the following three processes Gp(s) (adapted from [50]):

    G1(s) = e^{-4s} / (s + 1)    (4.49)
    G2(s) = e^{-3s} / ((s + 1)²(2s + 1))    (4.50)
    G3(s) = e^{-0.5s} / ((s - 1)(0.15s + 1)(0.05s + 1))    (4.51)

G1 and G2 show the effect of some dead time, and G3 is an open loop unstable process.
3. Create an m-file similar to the one above that implements the closed loop tuning method as described by Chen, [50]. You will probably need to use the MATLAB fsolve function to solve the nonlinear equation.
4.7 Automated tuning by relay feedback
Because the manual tuning of PID controllers is troublesome and tedious, it is rarely done, which is the motivation behind the development of self-tuning controllers. This tuning-on-demand behaviour, after which the controller reverts back to a standard PID controller, is different from adaptive control where the controller is constantly monitoring and adjusting the controller tuning parameters or even the algorithm structure. Industrial acceptance of this type of smart controller was much better than for the fully adaptive controller, partly because plant engineers were far more confident about the inherent robustness of a self-tuning controller.
One method to self-tune, employed by the ECA family of controllers manufactured by ABB shown in Fig. 4.28, uses a nonlinear element, the relay. The automated tuning by relays is based on the assumption that the Ziegler-Nichols style of controller tuning is a good one, but is flawed in practice since finding the ultimate frequency ωu is a potentially hazardous, tedious, trial and error experiment. The development of this algorithm was a joint project between SattControl and Lund University, Sweden [16]. This strategy of tuning by relay feedback is alternatively known as the Auto Tune Variation or ATV method. A good tutorial for estimating models using relay feedback is given in [124, section 4].
Figure 4.29: A process under relay tuning with the PID regulator disabled.
A relay can be thought of as an on/off controller, or a device that approximates a proportional controller with an infinite gain but hard limits on the manipulated variable. Thus the relay has two outputs: if the error is negative, the relay sends a correcting signal of -d units, and if the error is positive, the relay sends a correcting signal of +d units.
4.7.1 Describing functions
A relay is a nonlinear element where, unlike the situation with a linear component, the output to a sinusoidal input will not in general be sinusoidal. However for many plants that are low-pass in nature, the higher harmonics are attenuated, and we need only consider the fundamental harmonic of the nonlinear output.

The describing function N of a nonlinear element is defined as the complex ratio of the fundamental harmonic of the output to the input, or

    N ≝ (Y1/X1) ∠φ    (4.52)

where X1, Y1 are the amplitudes of the input and output respectively, and φ is the phase shift of the fundamental harmonic component of the output, [152, p652]. If there is no energy storage in the nonlinear element, then the describing function is a function only of the input amplitude.
By truncating a Fourier series of the output, and using Eqn. 4.52, one can show that the describing function for a relay with amplitude d is

    N(a) = 4d/(πa)    (4.53)

A sustained limit cycle oscillation occurs when the Nyquist curve intersects the negative inverse of the describing function,

    G(iω) = -1/N    (4.54)

Now since in the case of a relay, N given by Eqn. 4.53 is purely real, the intersection of G(iω) and -1/N is somewhere along the negative real axis. This will occur at a gain of -πa/(4d), so the ultimate gain, Ku, which is the reciprocal of this, is

    Ku = 4d/(πa)    (4.55)
The ultimate frequency, ωu = 2π/Pu (in rad/s), is simply the frequency of the observed output oscillation and can be calculated by measuring the time between zero crossings to establish the period, with the output amplitude, a, obtained by measuring the swing of the output. We now know from this simple experiment one point on the Nyquist curve, and a reasonable PID controller can be designed based on this point using, say, the classical Ziegler-Nichols table. A typical test setup and response is given in Fig. 4.30.
In summary, under relay feedback control, most plants will exhibit some type of limit oscillation such as a sinusoidal-like wave with a phase lag of 180°. This is at the point where the open loop transfer function crosses the negative real axis on a Nyquist plot, although not necessarily at the critical -1 point.
Given that we know a single point on the Nyquist curve, how do we obtain controller parameters? It is convenient to restrict the controller to the PID structure since that is what is mostly used in practice. The original ZN tuning criteria gave responses that should give a specified gain and phase margin. For good control a gain margin of about 1.7 and a phase margin of about φm = 30° is recommended. Note that by using different settings of the PID controller, one can move a single point on the Nyquist curve to any other arbitrary point. Let us assume that we know the point where the open loop transfer function of the process, Gp(s), crosses the negative real axis, ωu. This point has a phase (or argument) of -π or -180°. We wish the open loop transfer function, GcGp,
Figure 4.30: An unknown plant under relay feedback exhibits an oscillation with amplitude a,
and period P .
to have a specified phase margin of φm by choosing appropriate controller tuning constants. Thus equating the arguments of the two complex numbers gives

    arg( 1 + 1/(iωuτi) + iωuτd ) = φm    (4.56)

which simplifies to⁵

    ωuτd - 1/(ωuτi) = tan(φm)    (4.57)

Since we have two tuning parameters (τi, τd), and only one specified condition (φm), we have many solutions. Since there are many PID controller constants that could satisfy this phase margin, Åström, [12], chose that the integral time should be some factor of the derivative time,

    τi = f τd

Then the derivative time, which must be real and positive, is

    τd = ( f tan(φm) + √( f² tan²(φm) + 4f ) ) / (2 f ωu)    (4.58)

where the arbitrary factor is f ≈ 4. Given the ultimate gain Ku, the controller gain is

    Kc = Ku cos(φm)    (4.59)

⁵Remember that for a complex number z = x + iy, the argument of z is the angle z makes with the real axis: arg(z) = arg(x + iy) = tan⁻¹(y/x).
4.7.2 An example of relay tuning
We can compare the experimentally determined ultimate gain, Ku, and ultimate frequency, ωu, from a relay experiment with the theoretical values for the plant

    G(s) = 1/(τs + 1)⁴    (4.60)

where the time constant is τ = 0.25. The ultimate gain and ultimate frequency are analytically obtained by solving the two equations

    | Ku/(τs + 1)⁴ |_{s=iωu} = 1,   and   arg( Ku/(τs + 1)⁴ )_{s=iωu} = -π

simultaneously, which is somewhat involved (and of course requires knowledge of the plant which in practice we would not have). Also note that due to the discontinuity in the tan⁻¹(x) function at the solution, we need to slightly modify the nonlinear equation for the phase angle to be solved when using fsolve.
G = zpk([],[-4 -4 -4 -4],4^4);               % plant 1/(0.25s+1)^4 = 256/(s+4)^4
Mag = @(x) x(1)*abs(evalfr(G,1i*x(2))) - 1;  % |Ku G(iωu)| - 1 = 0
Ang = @(x) 4*atan(0.25*x(2)) - pi;           % phase written to avoid the tan⁻¹ discontinuity
Fun = @(x) [Mag(x); Ang(x)];
x0 = [0.1,1]';
x = fsolve(Fun,x0)                           % x = [Ku; ωu] ≈ [4; 4]
An alternative graphical solution is obtained using Bode and/or Nyquist diagrams in MATLAB.

G = zpk([],[-4 -4 -4 -4],4^4);   % 1/(0.25s+1)^4 rewritten as 256/(s+4)^4
nyquist(G,logspace(-1,3,200));
bode(G)  % Bode diagram; or margin is an alternative
The Nyquist diagram given in the upper plot of Fig. 4.31 shows the curve crossing the negative real axis at -1/Ku ≈ -0.25, thus Ku ≈ 4. To find the ultimate angular frequency, we need to plot the phase lag of the Bode diagram shown in the bottom figure of Fig. 4.31. The phase angle crosses the -180° point at a frequency of about ωu = 4 rad/s. Thus the ultimate period is Pu = 2π/ωu ≈ 1.57 seconds. Note that it was unnecessary to plot the Nyquist diagram since we could also extract the ultimate gain information from the magnitude part of the Bode diagram, or even simply just use margin.
Figure 4.31: Nyquist (top) and Bode (bottom) diagrams for 1/(0.25s + 1)⁴. The Nyquist curve crosses the negative real axis at about -0.25. The frequency at which the Bode phase curve passes through φ = -180° is ωu ≈ 4 rad/s.

Estimating Ku and ωu by relay experiment
Now we can repeat the evaluation of Ku and ωu, but this time by using the relay experiment. This is the approach we would need to take in real life where we are faced not with a convenient mathematical transfer function, but an actual physical plant.
I will use a relay with a relay gain of d = 2 but with no hysteresis (h = 0) and a sample time
of T = 0.05 seconds. To get the system moving, I will choose a starting value away from the
setpoint, y0 = 1.
G = zpk([],[-4 -4 -4 -4],4^4);      % plant 1/(0.25s+1)^4
Ts = 0.05; t = [0:Ts:7]';           % sample time and simulation horizon
u = 0*t; y = NaN*t;                 % input and output storage
d = 2;                              % relay amplitude
[Phi,Del,C,D] = ssdata(c2d(G,Ts));  % discretise the plant
x = [0,0,0,0.1]';                   % some non-zero initial state
for i=1:length(t)-1
    x = Phi*x + Del*u(i);
    y(i) = C*x;
    u(i+1) = -d*sign(y(i));         % relay feedback (setpoint is zero)
end % for
plot(t,y,t,u,'r--')
The result of the relay simulation is given in Fig. 4.32 where we can see that once the transients have died out, the amplitude of the oscillation of the output (or error, since the setpoint is constant) is a = 0.64 units, with a period of P = 1.6 seconds. Using Eqn. 4.55, we get estimates for the ultimate period and gain,

    P̂u = P = 1.6 ≈ 1.57 s,   and   K̂u = 4d/(πa) = 3.978 ≈ 4

which approximately equal the estimates that were found graphically using the Bode and Nyquist diagrams of the true plant.
Figure 4.32: PID tuning using relay feedback. For the first 10 seconds the relay is enabled, and the
process oscillates at the ultimate frequency. After 10s, the relay is turned off, the PID tuner with
the updated constants is enabled, and the plant is under control.
Once we have the ultimate gain and frequency, the PID controller constants are obtained from Eqns. 4.57 to 4.59 and are shown in Table 4.6. Alternatively we can use any Z-N based tuning scheme.
Table 4.6: The PID controller tuning parameters obtained from a relay based closed loop experiment.

 PID parameter    Åström-Hägglund    Ziegler-Nichols    units
 Kc               3.445              0.6Ku = 3.4        -
 τi               0.866              Pu/2 = 0.8         s
 τd               0.217              Pu/8 = 0.2         s
Now that we have the PID tuning constants, we can simulate the closed loop response to setpoint changes. We could create the closed loop transfer function using G*Gpid/(1 + Gpid*G), however the preferred approach is to use the routines series and feedback. This has the advantage that it will cancel common factors and perform a balanced realisation.

Gcl = feedback(series(Gpid,G),1);  % closed loop transfer function
step(Gcl)
The remainder of Fig. 4.32 for the period after 10 seconds shows the quite reasonable response
using the PID constants obtained via a relay experiment. Perhaps the overshoot at around 40%
and oscillation is a little high, but this is a well known problem of controller tuning via the classical
ZN approach, and can be addressed by using the modified ZN tuning constants.
4.7.3 Self-tuning
The practical implementation of the relay feedback identification scheme requires that we measure the period of oscillation P, and the error amplitude a, automatically. Both these parameters are easily established manually by visual inspection if the process response is well behaved and noise free, but noisy industrial outputs can make even these simple characteristics difficult to obtain reliably and automatically.

When considering disturbances, [12] develops two modifications to the period and amplitude identification part of the self-tuning algorithm. The first involves a least squares fit (described below) and the second involves an extended Kalman filter described in Problem 9.6. Of course hybrids of both these schemes with additional heuristics could also be used. Using the same process and relay as in 4.7.2, we can introduce some noise into the system as shown in the SIMULINK model given in Fig. 4.33(a). Despite the low noise power (0.001), and the damping of the fourth-order process, the oscillations vary in amplitude and frequency as shown in Fig. 4.33(b).
To obtain reliable estimates of the amplitude and period we can use the fact that a sampled sinusoidal function with period P and sample time T satisfies

    y(t) - θ1 y(t - T) + y(t - 2T) + θ2 = 0    (4.61)

with

    θ1 = 2 cos(2πT/P)    (4.62)

The extra constant, θ2, is introduced to take into account non-zero means. Eqn. 4.61 is linear in the parameters, θ, so we can solve the linear least-squares regression problem in MATLAB using the backslash command,

    θ = [θ1; θ2] = [y_{t-T}  1]⁺ (y_t + y_{t-2T})    (4.63)

where ⁺ denotes the pseudo-inverse. Once we have established θ1 (we are uninterested in θ2), the period is obtained by inverting Eqn. 4.62,

    P = 2πT / cos⁻¹(θ1/2)    (4.64)
Figure 4.33: (a) A SIMULINK model of the relay feedback experiment: the plant 256/((s+4)(s+4)(s+4)(s+4)) under relay feedback with band-limited white noise added. (b) The resulting noisy output oscillations over 20 seconds.
Once the period P is established, the amplitude follows from a second linear regression, fitting the data to a sinusoid of known period, y(t) ≈ θ3 sin(2πt/P) + θ4 cos(2πt/P), giving

    a = √(θ3² + θ4²)    (4.65)

By doing the regression in this two step approach, we avoid the nonlinearity of Eqn. 4.65 if we were to estimate P as well in an attempt to do the optimisation in one step.
Running the S IMULINK simulation given in Fig. 4.33(a) gives a trend something like Fig. 4.33(b).
The only small implementation problem is that S IMULINK does not necessarily return equally
sampled data for continuous system simulation. The easiest work-around is to define a regularly
spaced time vector and do a table lookup on the data using interp1. The script file given in
listing 4.10 calculates the period and amplitude using least-squares.
Listing 4.10: Calculates the period and amplitude of a sinusoidal time series using least-squares.
N = length(t); % Length of data series
tr = linspace(t(1),t(length(t)),1.5*N)'; dt = tr(2)-tr(1);
yr = interp1(t,y,tr,'linear'); % S IMULINK is not regular in time
y=yr; t = tr; % Now sample time T is regularly spaced.
5
Running this script on the data of Fig. 4.33(b) returns estimates of the period and an amplitude of a = 0.616, which are not far off the values obtained with no disturbances of P = 1.6 and a = 0.64.
Relay based tuning of a black-box
Using a data acquisition card and a real-time toolbox extension⁶ to MATLAB, we can repeat the trial of the relay based tuning method, this time controlling a real process, in this case a black box. The SIMULINK controller and input/output data are given in Fig. 4.34.
From Fig. 4.34, we can measure the period and amplitude of the oscillating error,

    Pu = 7 seconds,   a = 0.125

from which the ultimate gain, Ku, follows from Eqn. 4.55. Applying a Ziegler-Nichols style tuning scheme then gives

    Kc = 3.4,   τi = Pu/2 = 3.5 s,   τd = Pu/3 = 2.3 s

Since SIMULINK uses a slightly different PID controller scheme from the classical formula, the required constants are

    P = Kc = 3.4,   I = Kc/τi = 0.97,   D = Kc τd = 7.8

Using these values in the PID controller with the black-box, we obtain the controlled results shown in Fig. 4.35 which, apart from the noisy input and derivative kick (which is easily removed, refer 4.4.1), does not look too bad.
4.7.4 Modifications

The algorithm using a relay under feedback as described in section 4.7 establishes just one point on the Nyquist curve. However it is possible, by using a richer set of relay feedback experiments, to build up a more complex model, and perhaps better characterise a wider collection of plants.
⁶Available from Humusoft.
Figure 4.34: Relay-based tuning of the black box: the SIMULINK real-time controller with a switch between the relay and the PID controller (top), and the recorded output and input data (bottom).
Relay with hysteresis

Physical relays always have some hysteresis, h, to provide mechanical robustness when faced with noise. By adjusting the hysteresis width we can excite the plant at different points on the Nyquist curve.

The response of a relay with hysteresis is given in Fig. 4.36. Note that the relay will not flip to +d from the -d position until we have a small positive error, in this case +h. Likewise, from the reverse direction, the error must drop below a small negative error, -h, before the relay flips from +d to -d.
The describing function for a relay with amplitude d and hysteresis width h is

    -1/N(a) = -(π/(4d)) √(a² - h²) - j (πh/(4d))    (4.66)

which is a line parallel to the real axis in the complex plane. Compare this with the describing function for the pure relay in Eqn. 4.53.
The intersection of this line and G(iω) is the resultant steady-state oscillation due to the relay, refer Fig. 4.37. By adjusting the relay hysteresis width, h, we can move the -1/N(a) line up and
Figure 4.35: Experimental PID control of the black-box using parameters obtained by the relay-based tuning experiment. (See also Fig. 4.34.)
Figure 4.36: The output of a relay with amplitude d and hysteresis width h as a function of the error. The relay switches from -d to +d only once the error exceeds +h, and back once the error drops below -h.
down, thereby establishing different points on the Nyquist diagram. Of course if we increase the hysteresis too much, then we may shift the -1/N(a) curve too far down so that it never intercepts the G(iω) path. In this situation, we will not observe any limit cycle in the closed loop experiment.
Figure 4.37: The intersection of the -1/N(a) line with the Nyquist curve of G(iω) establishes the limit cycle oscillation. Increasing the hysteresis width h moves the -1/N(a) line further below the real axis.
Inserting an integrator
A second simple modification is to insert an integrator prior to the relay. The integrator subtracts π/2 (or 90°) from the phase, and multiplies the gain by a factor of 1/ω. In this way, under relay feedback we can estimate the point where the Nyquist curve crosses the negative imaginary axis as well as the negative real axis. Such a scheme is shown in Fig. 4.39 and is described by [201], which is a specific case of the general schemes described by [117, 146]. A survey of other modifications to the basic relay feedback idea for controller tuning is given in [46].
Figure 4.39: A SIMULINK model for relay feedback experiments where an integrator can be manually switched in before the relay, and the plant is a transfer function with a transport delay.
To identify the three parameters of the first-order plus deadtime model

    G(s) = Kp e^{-Ls} / (τs + 1)    (4.67)

we need two arbitrary distinct points. That is, we establish the magnitude and phase of G(iω) at two frequencies, ω1 and ω2, via two relay experiments. Defining the magnitudes and angles to be

    |G(iω1)| = 1/k1,  ∠G(iω1) = φ1,  |G(iω2)| = 1/k2,  ∠G(iω2) = φ2

the plant gain is

    Kp = √( (ω2² - ω1²) / (k1²ω2² - k2²ω1²) )    (4.68)

the time constant is

    τ = (1/ω1) √( k1² Kp² - 1 )    (4.69)

and the deadtime is

    L = -(1/ω1) ( φ1 + tan⁻¹(τω1) )    (4.70)

In the two relay experiments the measured points correspond to the phases φ1 = -π (plain relay) and φ2 = -π/2 (relay with integrator).
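A sketch of this two-point fit, assuming the two limit cycle measurements (w1,k1) at phase -π and (w2,k2) at phase -π/2 are already available (the numerical values below are hypothetical):

w1 = 0.52; k1 = 3.9;   % hypothetical measurements from the plain relay test
w2 = 0.18; k2 = 1.4;   % hypothetical measurements from the relay + integrator test
Kp = sqrt((w2^2 - w1^2)/(k1^2*w2^2 - k2^2*w1^2));  % plant gain, Eqn. 4.68
tau = sqrt(k1^2*Kp^2 - 1)/w1;                      % time constant, Eqn. 4.69
L = (pi - atan(tau*w1))/w1;                        % deadtime, Eqn. 4.70 with phi1 = -pi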
Identifying the parameters of the second order plus deadtime model

    G(s) = Kp e^{-Ls} / (s² + as + b)    (4.71)

is more complicated. The following algorithm, due to [117], first estimates the deadtime via a nonlinear function, then extracts the coefficients Kp, a and b using least-squares.
First we define the matrices

    A = [A1; A2; A3; A4] = [-ω1²  0  1;  -ω2²  0  1;  0  ω1  0;  0  ω2  0],
    X = [1/Kp;  a/Kp;  b/Kp],
    Y = [sin(Lω1);  cos(Lω1);  sin(Lω2);  cos(Lω2)]    (4.72)
185
0.4
Step Response
2
0.2
G
G
1est
2est
0.2
Amplitude
{G(i )}
1.5
0.4
0.5
0.6
0.8
0.5
1
1
0.5
0.5
{G(i )}
1.5
20
40
60
80
100
Time (sec)
120
140
160
180
Figure 4.40: Identification of transfer function models using a two-point relay experiment.
and

    B = [B1; B2; B3; B4] = [  Im{G⁻¹(jω1)}   Re{G⁻¹(jω1)}   0              0;
                              0              0              Im{G⁻¹(jω2)}   Re{G⁻¹(jω2)};
                             -Re{G⁻¹(jω1)}   Im{G⁻¹(jω1)}   0              0;
                              0              0             -Re{G⁻¹(jω2)}   Im{G⁻¹(jω2)} ]    (4.73)

Now provided we know the deadtime L, we can solve for the remaining parameters in X using

    X = [A1; A2; A3]⁻¹ [B1; B2; B3] Y    (4.74)

while the deadtime L itself follows from solving the remaining nonlinear equation

    A4 X = B4 Y    (4.75)
Once given the model, any standard model-based controller tuning scheme such as internal model control (IMC) can be used to establish appropriate PID tuning constants.

Fig. 4.40 illustrates the procedure for when the true, but unknown, plant is the third-order system

    G = e^{-6s} / ((9s + 1)(6s + 1)(4s + 1))

Problem 4.3
1. Try the relay-based identification and subsequent tuning on the plant

    G = 4/(τs + 1)³

where τ = 2.0.
2. Write down pseudo code for the automatic tuning (one button tuning) of a PID controller using the ZN tuning criteria. Ensure that your algorithm will work under a wide variety of conditions and will cause little, if any, instability. Consider safety and robustness aspects for your scheme.
Figure 4.41: The step responses and associated monotonicity index, α (here 0.39, 0.71 and 1.00), for three representative systems.
3. Write a .m file to implement a relay feedback tuning scheme using the Åström and Hägglund tuning method. Use the pidsim file as a starting point. Pass as parameters the relay gain, d, and the hysteresis delay, h. Return the ultimate period and gain and the controller tuning parameters.
4.8 Limitations of PID control

The PID controller, despite being a simple, intuitive and flexible controller, does have some drawbacks. For example PID control is not particularly suitable for processes where the open loop step response is highly oscillatory. A way to quantify the extent of the wobble, proposed by [14, p226], is to compute a monotonicity index

    α = ∫₀^∞ g(t) dt / ∫₀^∞ |g(t)| dt    (4.76)

where g(t) is the impulse response of the system. Note that if the step response wobbles, then the impulse response goes negative. For systems with no wobble, the area under the impulse response and the area under the absolute value of the impulse response is the same, so α = 1. For systems that do wobble, the area under the impulse curve will be less due to the periods of negative areas, so α < 1. The step response and associated monotonicity index for three systems are compared in Fig. 4.41. PID control is deemed suitable for process responses where α > 0.8.
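The index is easily computed numerically; as a rough sketch, here using the inverse response plant of Eqn. 4.80 as a test case:

G = tf([-5 1],conv([3 1],[1 1]));  % an example plant with an inverse response (Eqn. 4.80)
[g,t] = impulse(G);                % impulse response g(t)
alpha = trapz(t,g)/trapz(t,abs(g)) % monotonicity index, Eqn. 4.76; alpha < 1 indicates wobble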
Further limitations of PID control are the inability to handle responses with large deadtimes, or
inverse responses. These are highlighted when trying to control an inverse response as demonstrated in 4.8.1, and a simple add-on for deadtime processes is given in 4.9.
4.8.1 Inverse response processes
An inverse response is where the response to a step change first heads in one direction, but then eventually arrives at a steady state in the reverse direction, as shown in the middle step response of Fig. 4.41 or Fig. 4.43(b). Naturally this sort of behaviour is very difficult and un-intuitive to control, and if one tends to overreact, perhaps because the controller's gain is too high, then excessive oscillation is likely to result. Inverse responses are common in the control of the liquid water level in a steam drum boiler as described in [21] due to a phenomenon known as the swell-and-shrink effect. For example the water level first increases when the steam valve is opened because the drum pressure will drop causing a swelling of the steam bubbles below the surface. This undershoot behaviour can be produced by two opposing first-order systems in parallel,

    G = K1/(τ1s + 1) - K2/(τ2s + 1)    (4.77)
      = ( (K1τ2 - K2τ1)s + K1 - K2 ) / ( (τ1s + 1)(τ2s + 1) )    (4.78)

where the first term is G1 and the second is G2. If K1 > K2, then for the system to have right-hand plane zeros (numerator polynomial equal to zero in Eqn. 4.78), we require

    K1τ2 < K2τ1    (4.79)

If K1 < K2, then the opposite of Eqn. 4.79 must hold.
A SIMULINK simulation of a simple inverse response process,

    G(s) = 4/(3s + 1) - 3/(s + 1) = (-5s + 1) / ( (3s + 1)(s + 1) )    (4.80)

of the form given in Eqn. 4.77 (with G1 = 4/(3s + 1) and G2 = -3/(s + 1)), and clearly showing the right-hand plane zero at s = 1/5, is shown in Fig. 4.43(a). The simulation results for the two individual transfer functions that combine to give the overall inverse response are given in Fig. 4.43(b).
Controlling an inverse response process

Suppose we wish to control the inverse response of Eqn. 4.80 with a PI controller tuned using the ZN relations based on the ultimate gain and frequency. The following listing adapts the strategy described in Listing 4.4 for the PID tuning of an arbitrary transfer function.
Figure 4.43: (a) A SIMULINK simulation of the inverse response process described by Eqn. 4.80. (b) An inverse response process is comprised of two component curves, G1 + G2. Note that the inverse response first decreases to a minimum y ≈ -1 at around t = 1.5 but then increases to a steady-state value of y = 1.
G = tf([-5 1],conv([3 1],[1 1]));            % inverse response plant, Eqn. 4.80
[Ku,~,wu] = margin(G); Pu = 2*pi/wu;         % ultimate gain and period
Gcl = feedback(pidstd(0.45*Ku,Pu/1.2)*G,1);  % Closed loop with a ZN PI controller
step(Gcl)                                    % Controlled response as shown in Fig. 4.44
The controlled performance in Fig. 4.44 is extremely sluggish taking about 100 seconds to reach
the setpoint while the open loop response as shown in Fig. 4.43(b) takes only about one tenth of
that to reach steady-state. The controlled response also exhibits an inverse response.
The crucial problem with using a PID controller for an inverse response process is that if the
controller is tuned too tight, in other words the gain is too high and the controller too fast acting,
then the controller will look at the early response of the system, which being an inverse system is
going the wrong way, and then try to correct it, thus making things worse. If the controller were
more relaxed about the correction, then it would wait and see the eventual reversal in gain, and
not give the wrong action.
Figure 4.44: The closed loop response of the inverse response process, Eqn. 4.80, under ZN-tuned PI control.
In reality, when engineers are faced with inverse response systems, the gains are deliberately detuned giving a sluggish response. While this is the best one can do with a simple PID controller, far better results are possible with a model based predictive controller such as Dynamic Matrix Control (DMC). In fact this type of process behaviour is often used as the selling point for these more advanced schemes. Processes G(s) that have RHP zeros are not open-loop unstable (they require RHP poles for that), but the inverse process, G(s)⁻¹, will be unstable. This means that some internal model controllers, which attempt to create a controller that approximates the inverse process, will be unstable, and thus unsuitable in practice.
4.8.2 Approximating inverse response systems

We cannot easily remove the inverse response, but we can derive an approximate transfer function that has no right-hand plane zeros using a Pade approximation. We may wish to do this to avoid the problem of an unstable inverse in a controller design, for example. The procedure is to replace the non-minimum phase term (-βs + 1) in the numerator with the factorisation ((-βs + 1)/(βs + 1)) (βs + 1), and then approximate the first, all-pass, term with a deadtime element, (-βs + 1)/(βs + 1) ≈ e^{-2βs}. In some respects we are replacing one evil (the right-hand plane zeros) with another, an additional deadtime.
For example the inverse response plant

    G = ( (-2s + 1) / ( (3s + 1)(s + 1) ) ) e^{-3s}    (4.81)

has a right-hand zero at s = 1/2 which we wish to approximate as an additional deadtime element. Now we can re-write the plant as

    G = ( (-2s + 1)/(2s + 1) ) ( (2s + 1) / ( (3s + 1)(s + 1) ) ) e^{-3s} ≈ ( (2s + 1) / ( (3s + 1)(s + 1) ) ) e^{-7s}    (4.82)

Note that the original system had an unstable zero and 3 units of deadtime, while the approximate system has no unstable zeros, but its deadtime has increased to 7.
We can compare the two step responses using the step command.
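A minimal sketch of that comparison, with the transfer functions written directly from Eqns. 4.81 and 4.82:

G = tf([-2 1],conv([3 1],[1 1]),'iodelay',3);  % true plant, Eqn. 4.81
Ga = tf([2 1],conv([3 1],[1 1]),'iodelay',7);  % deadtime approximation, Eqn. 4.82
step(G,Ga)                                     % compare the two step responses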
The resulting responses are given in Fig. 4.45 and, apart from the fact that the approximate version fails to capture the inverse response, they look reasonably similar.

Figure 4.45: Step responses of the true system, Eqn. 4.81, and its Pade/deadtime approximation, Eqn. 4.82.
Problem 4.4 A model of a flexible beam pinned to a motor shaft with a sensor at the free end, repeated in [54, p12], is approximated by the non-minimum phase model

    Gp(s) = (-6.475s² + 4.0302s + 175.77) / ( s(5s³ + 3.5682s² + 139.5021s + 0.0929) )    (4.83)
4.9 Dead time compensation

Processes that exhibit dead time are difficult to control and unfortunately all too common in the chemical processing industries. The PID controller, when controlling a process with deadtime, falls prey to the same sorts of problems exhibited by the inverse response processes simulated in 4.8.1, and in practice the gain must be detuned so much that the controlled performance suffers. One successful technique for controlling processes with significant dead time, and one that has subsequently spawned many other types of controllers, is the Smith predictor.
To implement a Smith predictor, we must have a model of the process, which we can artificially split into two components: the estimated model without deadtime, Ĝ(s), and the estimated dead time element, e^{-θ̂s},

    Gp(s) = G(s) e^{-θs} ≈ Ĝ(s) e^{-θ̂s}    (4.84)
The fundamental idea of the Smith predictor is that we do not control the actual process, Gp(s), directly, but instead use the structure shown in Fig. 4.46, where the model Ĝ(s)e^{-θ̂s} runs in parallel with the plant inside the deadtime compensator.

Figure 4.46: The Smith predictor structure: controller D(s), plant G(s)e^{-θs}, and a deadtime compensator built from the model Ĝ(s) and its delayed version Ĝ(s)e^{-θ̂s}.

The resulting closed loop transfer function is

    Y(s)/R(s) = D(s)G(s)e^{-θs} / ( 1 - D(s)Ĝ(s)e^{-θ̂s} + Ĝ(s)D(s) + D(s)G(s)e^{-θs} )    (4.85)

and, if Ĝ = G and θ̂ = θ,

    Y(s)/R(s) = ( D(s)G(s) / (1 + G(s)D(s)) ) e^{-θs}    (4.86)

The nice property of the Smith predictor is now apparent in Fig. 4.47 in that the deadtime has been cancelled from the denominator of Eqn. 4.86, and now sits outside the feedback loop.
Figure 4.47: The Smith predictor structure from Fig. 4.46 assuming no model/plant mis-match.
For a successful implementation, the following requirements should hold:

1. The plant must be stable in the open loop.
2. There must be little or no model/plant mismatch.
3. The delay or dead time must be realisable. In a discrete controller this may just be a shift register, but if the dead time is equivalent to a large number of sample times, or a non-integral number of sample times, or we plan to use a continuous controller, then the implementation of the dead time element is complicated.
Smith predictors in SIMULINK

Suppose we wish to control the process

    G(s) = 2e^{-50s} / (τ²s² + 2ζτs + 1)    (4.87)

where τ = 10, ζ = 0.34. A PID controller with tuning constants K = 0.35, I = 0.008, D = 0.8 is unlikely to manage very well, so we will investigate the improvement using a Smith predictor.
A S IMULINK implementation of a Smith predictor is given in Fig. 4.48 which closely follows the
block diagram of Fig. 4.46. To turn off the deadtime compensation, simply double click on the
manual switch. This will break the compensation loop leaving just the PID controller.
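Outside SIMULINK, the same structure can be sketched with transfer function objects, here assuming a perfect model (Ĝ = G) and the PID constants quoted above:

s = tf('s');
G0 = 2/(100*s^2 + 6.8*s + 1);  % delay-free part of Eqn. 4.87
G = G0; G.iodelay = 50;        % true plant with 50 units of deadtime
D = pid(0.35,0.008,0.8);       % the PID controller, K = 0.35, I = 0.008, D = 0.8
Deq = feedback(D, G0 - G);     % Smith compensator: D/(1 + D(Ghat - Ghat e^{-θs}))
Gcl = feedback(Deq*G,1);       % closed loop with the predictor active
step(Gcl,2000)                 % servo response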
Figure 4.48: A S IMULINK implementation of a Smith predictor. As shown, the deadtime compensator is activated. Refer also to Fig. 4.46.
The configuration given in Fig. 4.48 shows the predictor active, but if we change the constant block to zero, the switch will toggle, removing the compensating Smith predictor, and we are left with the classical feedback controller. Fig. 4.49 highlights the usefulness of the Smith predictor, showing what happens when we turn on the predictor at t = 1750. Suddenly the controller finds it easier to drive the process since it need not worry about the deadtime. Consequently the controlled performance improves. We could further improve the tuning when using the Smith predictor, but then in this case the uncompensated controlled response would be unstable. In the simulation given in Fig. 4.49 there is no model/plant mis-match, but it is trivial to re-run the simulation with a different dead time in the model, and test the robustness capabilities of the Smith predictor.
Fig. 4.50 extends this application to the control of an actual plant, (but with artificially added
deadtime simply because the plant has little natural deadtime) and a deliberately poor model
(green box in Fig. 4.50(a)). Fig. 4.50(b) demonstrates that turning on the Smith predictor at t = 100
shows a big improvement in the controlled response. However the control input shows a little
too much noise.
Figure 4.49: Dead time compensation. The Smith predictor is turned on at t = 2100 and the
controlled performance subsequently rapidly improves.
4.10 Sensitivity functions

Fig. 4.51 depicts a plant G controlled by a controller C to a reference r in the presence of process disturbances v and measurement noise w. A good controller must achieve a number of different aims. It should encourage the plant to follow a given setpoint, it should reject disturbances introduced by possible upstream process upsets, and it should minimise the effect of measurement noise. Finally, the performance should be robust to reasonable changes in plant dynamics.
The output Y(s) from Fig. 4.51 is dependent on the three inputs, R(s), V(s) and W(s), and using block diagram simplification, is given by

    Y(s) = (CG/(1 + CG)) R(s) + (G/(1 + CG)) V(s) - (CG/(1 + CG)) W(s)    (4.88)
Correspondingly, the error, E(s) = R(s) - Y(s), is

    E(s) = (1/(1 + CG)) R(s) - (G/(1 + CG)) V(s) + (CG/(1 + CG)) W(s)    (4.89)
We can simplify this expression by defining the open loop transfer function, sometimes referred to as just the loop transfer function,

    L(s) = C(s)G(s)    (4.90)

giving

    E(s) = (1/(1 + L)) R(s) - (G/(1 + L)) V(s) + (L/(1 + L)) W(s)    (4.91)
By rearranging the blocks in Fig. 4.51, we can derive the effect of the output on the error, the input on the setpoint and so on. These are known as sensitivity functions and they play an important role in controller design.

The sensitivity function is defined as the transfer function from setpoint to error,

    S(s) = 1/(1 + L(s)) = Ger(s)    (4.92)
Figure 4.50: Deadtime compensation applied to the blackbox. The compensator is activated at
time t = 100 after which the controlled performance improves.
Figure 4.51: Closed loop with plant G(s) and controller C(s) subjected to disturbances, v, and
measurement noise, w.
The complementary sensitivity function is defined as the transfer function from reference to output,

T(s) = \frac{L(s)}{1+L(s)} = G_{yr}(s) = -G_{yw}(s)   (4.93)
The disturbance sensitivity function is

GS(s) = \frac{G(s)}{1+L(s)} = G_{yv}(s)   (4.94)

and the control sensitivity function is

CS(s) = \frac{C(s)}{1+L(s)} = G_{ur}(s) = -G_{uw}(s)   (4.95)
The error from Eqn. 4.91 (which we desire to keep as small as practical), now in terms of the sensitivity and complementary sensitivity transfer functions, is

E(s) = S(s)R(s) - S(s)G(s)V(s) + T(s)W(s)   (4.96)
If we are to keep the error small for a given plant G(s), then we need to design a controller C(s)
to keep both S(s) and T (s) small. However there is a problem because a direct consequence of
the definitions in Eqn. 4.92 and Eqn. 4.93 is that
S(s) + T (s) = 1
which means we cannot keep both small simultaneously. However, we may be able to make S(s) small over the frequencies where R(s) is large, and T(s) small where the measurement noise W(s) dominates. In many practical control loops the servo response R(s) dominates at low frequencies, and the measurement noise dominates at high frequencies. This is known as loop shaping.
We are primarily interested in the transfer function T(s), since it describes how the output changes given changes in the reference command. At low frequencies we demand that T(s) ≈ 1, which means that we have no offset at steady-state and that we have an integrator somewhere in the loop. At high frequencies we demand that T(s) ≈ 0, which means that fast changes in the command signal are ignored. Correspondingly, since S(s) + T(s) = 1, S(s) should be small at low frequencies and ≈ 1 at high frequencies, such as shown in Fig. 4.52.
The maximum values of |S| and |T| are also useful robustness measures. The maximum sensitivity

‖S‖∞ = \max_\omega |S(i\omega)|   (4.97)

is marked in Fig. 4.52 and is inversely proportional to the minimal distance from the loop transfer function to the critical (−1, 0i) point. Note how the circle of radius 1/‖S‖∞ centered at (−1, 0i) just touches the Nyquist curve shown in Fig. 4.53. A large peak in the sensitivity plot in Fig. 4.52 corresponds to a small distance between the critical point and the Nyquist curve, which means that the closed loop is sensitive to modelling errors and hence not very robust. Values of ‖S‖∞ of around 1.7 are considered reasonable, [107].
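These sensitivity functions are easy to compute numerically. The fragment below is a minimal sketch using an illustrative plant (the Fig. 4.48 model reused here) and an invented PI controller; only the relations S = 1/(1+L) and T = L/(1+L) come from the text.

s = tf('s');
G = 2/(100*s^2 + 6.8*s + 1);  % an illustrative plant
C = 1 + 1/(20*s);             % an invented PI controller
L = C*G;                      % loop transfer function, Eqn. 4.90
S = feedback(1, L);           % sensitivity, S = 1/(1+L), Eqn. 4.92
T = feedback(L, 1);           % complementary sensitivity, T = L/(1+L), Eqn. 4.93
bodemag(S, T); legend('S','T')
Ms = norm(S, inf)             % maximum sensitivity, ||S||_inf, Eqn. 4.97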
[Figure 4.52: Magnitude plots of the sensitivity, S(s), and complementary sensitivity, T(s), functions, with the peak ‖S‖∞ marked.]

Figure 4.53: A circle of radius 1/‖S‖∞ centered at (−1, 0i) just touches the Nyquist curve. See also Fig. 4.52.

4.11 SUMMARY
PID controllers are simple and common in process control applications. The three terms of a PID controller relate to the magnitude of the current error, the history of the error (integral), and the current direction of the error (derivative). The integral component is necessary to remove offset,
but can destabilise the closed loop. While this may be countered by adding derivative action,
noise and abrupt setpoint changes may cause problems. Commercial PID controllers usually add
a small time constant to the derivative component to make it physically realisable, and often only
differentiate the process output, rather than the error.
Many industrial PID controllers are poorly tuned, which is clearly uneconomic. Tuning controllers can be difficult and time consuming, and there is no correct, fail-safe, idiot-proof procedure. The single closed-loop experiment (Yuwana-Seborg) is the safest way to tune a critical loop, although the calculations required are slightly more complicated than for the traditional methods, and the method is designed for processes that approximate a first-order system with time delay. The tuning parameters obtained by all these methods should only be used as starting estimates. The fine adjustment of the tuning parameters is best done by an enlightened trial and error procedure on the actual equipment.
CHAPTER 5

DIGITAL FILTERING AND SMOOTHING

Never let idle hands get in the way of the devil's work.
Basil Fawlty (circa '75)
5.1 INTRODUCTION
This chapter is concerned with the design and use of filters. Historically, analogue filters were the filters that electrical engineers have been inserting for many years into radios, fabricated out of passive components such as resistors, capacitors and inductors, or more recently and reliably out of operational amplifiers. When we filter something, be it home-made wine, sump oil, or undesirable material on the Internet, we are really interested in purifying, or separating the noise or unwanted from the desired.
The classical approach to filtering assumes that the useful signals lie in one frequency band, and
the unwanted noise in another. Then simply by constructing a transfer function as our filter,
we can pass the wanted band (signal), while rejecting the unwanted band (noise) as depicted in
Fig. 5.1.
Figure 5.1: A filter as a low-pass transfer function
Unfortunately, even if the frequency bands of the signal and noise are totally distinct, we still cannot completely separate the signals in an online filtering operation. In practice, however, we can design filters with such sharp cut-offs that a practical separation is achieved. More commonly the signal is not distinct from the noise (since the noise tends to inhabit all frequency bands), so some compromise between noise attenuation and signal distortion must be made.

5.1.1 THE NATURE OF NOISE
Real industrial measurements are always corrupted with noise. This noise (or error) can be attributed to many causes, but even if these causes are known, the noise is usually still unpredictable. Causes for this noise include mechanical vibrations, poor electrical connections between instrument and transducer sensor, electrical interference from other equipment, or a combination of the above. Whatever the cause, these errors are usually undesirable. Remember, if it is not unpredictable, then it is not noise.

An interesting example of the debilitating effect of noise is shown in the controlled response in Fig. 5.2. In this real control loop, the thermocouple measuring the temperature in a sterilising furnace was disturbed by a mobile phone a couple of metres away receiving a call. Under normal conditions the standard deviation of the measurement is around 0.1 degrees, which is acceptable, but with the phone operating, this increases to 2 or 3 degrees.
[Figure 5.2: Furnace temperature (°C) and heater power (%) under PID control; the measurement noise increases markedly while the mobile phone is in range.]
5.1.2 DIFFERENTIATING WITHOUT SMOOTHING
As seen in chapter 4, the Proportional-Integral-Derivative controller uses the derivative component, or D part, to stabilise the control scheme and offer improved control. However, in practice the D part is rarely used in industrial applications, because of the difficulty of differentiating the raw measured signal in real time.
The electromagnetic balance arm shown in Fig. 1.6 is a very oscillatory device that, if left uncompensated, would wobble for a considerable time. If we were to compensate it with a PID controller such that the balance arm closely follows the desired setpoint, we would find that we need substantial derivative action to counter the oscillatory poles of the plant. The problem is how we can reliably generate a derivative of a noisy signal for our PID controller.
Fig. 5.4 demonstrates experimentally what happens when we attempt to differentiate a raw and noisy measurement using a crude backward difference with a sample time T,

\frac{dy}{dt} \approx \frac{y_t - y_{t-1}}{T}
The upper plot shows the actual position of the balance arm under real-time closed-loop PID control using a PC with a 12-bit (4096 discrete levels) analogue input/output card. In this instance, I tuned the PID controller with a large amount of derivative action, and it is this that dominates the controller output (lower plot). While the controlled response is reasonable (the arm position follows the desired setpoint), the input is far too noisy, and will eventually wear out the actuators if used for any length of time. Note that this noise is a substantial fraction of the full-scale input range.
Figure 5.4: The significant derivative action of a PID controller given the slightly noisy data from an electromagnetic balance arm (solid line in upper plot) causes the controller input signal (lower plot) to exhibit very high frequencies.
One solution to reduce the excitation caused by the noisy input signal, whilst still retaining the differentiating characteristic of the controller, is to low-pass filter (smooth) the measured variable before we differentiate it. This is accomplished with a filter, and the design of these filters is what this chapter is all about. Alternatively, the controller could be completely redesigned to avoid the derivative step.
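The effect is easy to reproduce in simulation. The minimal sketch below, using an invented test signal, compares the crude backward difference applied to the raw measurement with the same difference applied after a simple first-order low-pass filter.

T = 0.01; t = (0:T:5)';           % sample time and time vector
y = sin(t) + 0.01*randn(size(t)); % smooth signal plus a little noise
dy_raw = [0; diff(y)]/T;          % crude backward difference of the raw data

a = 0.05;                         % first-order low-pass filter parameter
yf = filter(a, [1 a-1], y);       % y_f(k) = (1-a)*y_f(k-1) + a*y(k)
dy_filt = [0; diff(yf)]/T;        % difference the smoothed signal instead

plot(t, dy_raw, t, dy_filt, t, cos(t)) % compare with the true derivative
legend('raw difference','filtered then differenced','true derivative')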
5.2 SMOOTHING, FILTERING AND PREDICTION
If we have all the data at our disposal, we can draw a smooth curve through the data. In the past, draughtsmen may have used a flexible ruler, or spline, to construct a smoothing curve by eye. Techniques such as least-squares splines or other classes of piecewise low-order polynomials could be used if a more mathematically rigorous fitting is required. Techniques such as these fall into the realm of regression, and examples using MATLAB functions such as least-squares fitting, splines, and smoothing splines are covered further in [204]. If the noise is to be discarded and information about the signal in the future is used, this technique is called smoothing. Naturally this technique can only be applied offline, after all the data is collected, since the future data must be known. A smoothing method using the Fourier transform is discussed in section 5.4.5.

Unfortunately, for real-time applications such as the PID controller application in section 5.1.2, we cannot look into the future and establish what the signal will be for certain. In this case we are restricted to smoothing the data using only historical output values. This type of noise rejection is called real-time filtering, and it is this aspect of filtering that is of most importance to control engineers. Common analogue filters are discussed in section 5.2.3, but to implement them using digital hardware we would rather use an equivalent digital description. The conversion from the classical analogue filter to the equivalent digital filter using the bilinear transform is described in section 5.3.

Finally, we may wish to predict data in the future. This is called prediction.
5.2.1 A SMOOTHING APPLICATION
Section 4.6.3 described one way to tune a PID controller where we first subject the plant to a closed-loop test with a trial gain, and then record the peaks and troughs of the resultant curve. However, as shown in the actual industrial example in Fig. 5.5(a), due to excessive noise on the thermocouple (introduced by mobile phones in the near vicinity!), it is difficult to extract the peaks and troughs from the measured data. It is especially difficult to write a robust algorithm that will do this automatically with such noise.
The problem is that to achieve a reasonable smoothing we also introduce a large lag. This probably does not affect the magnitude of the peaks and troughs, but it does affect the timing, so our estimates of the model parameters will be wrong.

One solution is to use an acausal filter, which does not exhibit any phase lag. MATLAB provides a double-pass filter called filtfilt, which is demonstrated in Fig. 5.5(b). Using the smoothed data from this acausal filter generates reasonable estimates for both the magnitude and the timing of the characteristic points needed for the tuning. The drawback is that the filtering must be done off-line.
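A minimal sketch of the double-pass idea follows, using invented noisy data rather than the industrial record of Fig. 5.5, but the same filter settings (n = 5, ωc = 0.03) quoted in the figure.

[b,a] = butter(5, 0.03);                 % 5th-order low-pass Butterworth
t = (0:999)';
y = sin(2*pi*t/250) + 0.3*randn(1000,1); % invented noisy measurement
yc = filter(b, a, y);                    % causal filter: smooth, but lags
ya = filtfilt(b, a, y);                  % forward & reverse pass: no phase lag, offline only
plot(t, y, t, yc, t, ya)
legend('raw','causal filter','acausal filtfilt')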
5.2.2 FILTER TYPES
There are four common filter types: the low-pass, high-pass, band-pass and notch (band-stop) filters. Each of these implementations can be derived from the low-pass filter, meaning that it is only necessary to study the design of the first in detail. A comparison of the magnitude response of these filters is given in Fig. 5.8.
(a) To extract the key points needed for PID tuning, we must first smooth the noisy data. Unfortunately, to achieve good smoothing using a causal filter such as the 5th-order Butterworth used here (n = 5, ωc = 0.03), we introduce a large lag.

(b) [The same data smoothed with an acausal filter: the key points are recovered without lag.]
Figure 5.5: In this industrial temperature control of a furnace, we need to smooth the raw measurement data in order to extract characteristic points required for PID tuning.
LOW-PASS FILTERS

These types of filters are used to smooth out noisy data, or to retain the long-term trends rather than the short-term noise-contaminated transients.
The ideal low-pass filter is a transfer function that has zero phase lag at any input frequency, and an infinitely sharp amplitude cut-off above a specified frequency ωc. The Bode diagram of such a filter would have a step function down to zero on the amplitude ratio plot and a horizontal line on the phase angle plot, as shown in Fig. 5.8. Clearly it is physically impossible to build such a filter. However, there are many alternatives for realisable filters that approximate this behaviour. Fig. 5.6 shows a reasonable attempt to approximate the ideal low-pass filter. Typically the customer will specify a cut-off frequency, and it is the task of the filter designer to select sensible values for the pass- and stop-band oscillations (or ripple) and the pass- and stop-band frequencies to achieve these, balancing filter performance against hardware cost.
The most trivial low-pass filter approximation to the ideal filter is a single lag with a cut-off frequency ωc of 1/τ. If you wanted a higher-order filter, say an nth-order filter, you could simply cascade n of these filters together as shown in Fig. 5.7,

G_n(s) = \frac{1}{\left( s/\omega_c + 1 \right)^n}   (5.1)
Actually Eqn. 5.1 is a rather poor approximation to the ideal filter, and better approximations exist for the same complexity or filter order, although they are slightly more difficult to design and implement in hardware. Two classic analogue filter designs are described in section 5.2.3 following.
[Figure 5.6: Specifications for a realisable low-pass filter: a pass-band gain between 1 and 1/(1+ε²), a stop-band gain below 1/A², and the pass- and stop-band edge frequencies ωp and ωs.]
Figure 5.7: Three single low-pass filters cascaded together to make a third-order filter.
HIGH-PASS FILTERS
There are, however, applications where high-pass filters are desired, although these are far more common in the signal processing and electronic engineering fields. These filters do the opposite of the low-pass filter; namely they attenuate the low frequencies and pass the high frequencies unchanged. This can be likened to the old saying "If I've already seen it, I'm not interested!" These filters remove the DC component of the signal and tend to produce the derivative of the data. You might use a high-pass filter if you wanted to remove long-term trends from your data, such as seasonal effects for example.
BAND-PASS AND NOTCH FILTERS
The third type of filter is a combination of the high- and low-pass filters. If these two are combined, they can form a band-pass or alternatively a notch filter. Notch filters are used to remove a particular frequency, such as the common 50 Hz hum caused by the mains power supply. Band-pass filters are used in the opposite manner, and are used in transistor radio sets so that the listener can tune into one particular station without hearing all the other stations. The difference between the expensive radios and the cheap ones is that the expensive radios have a sharp, narrow pass band so as to reduce the amount of extraneous frequencies passed. Oddly enough, for some tuning situations a notch filter is also used to remove neighbouring disturbing signals. Some satellite TV stations introduce into the TV signal a disturbing component to prevent non-subscribers from watching their transmission without paying for the secret decoder ring. This disturbance, which essentially makes the TV unwatchable, must of course be within the TV station's allowed frequency range. An outer band-pass filter will tune the TV to the station's signal, and a subsequent notch filter will remove the disturbing signal. Fig. 5.8 contrasts the amplitude response of these filters.
Figure 5.8: Amplitude response for the ideal, low-pass, high-pass and band-pass filters.
FILTER TRANSFORMATION
Whatever type of filter we desire (high-pass, band-pass, notch etc.), it can be obtained by first designing a normalised low-pass filter prototype as described by [160, p415]. This filter is termed normalised because it has a cut-off frequency of 1 rad/s. Then the desired filter, high-pass, band-pass or whatever, is obtained by transforming the normalised low-pass filter using the relations given in Table 5.1, where ωu is the upper and ωl the lower cut-off frequency. This is very convenient, since we need only be able to design a low-pass filter, from which all the others can be obtained using a variable transformation.

Table 5.1: The filter transformations needed to convert from the prototype filter to other general filters. To design a general filter, replace the s in the prototype filter with one of the following transformed expressions.
Desired filter    transformation
low-pass          s → s/ωu
band-pass         s → (s² + ωu ωl) / (s(ωu − ωl))
band-stop         s → s(ωu − ωl) / (s² + ωu ωl)
high-pass         s → ωu/s
Low-pass filter design is described for the common classical analogue filter families in section 5.2.3. MATLAB also uses this approach in the SIGNAL PROCESSING toolbox with what it terms analogue low-pass prototype filters.
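The SIGNAL PROCESSING toolbox functions lp2lp, lp2hp, lp2bp and lp2bs perform the substitutions of Table 5.1 numerically. A minimal sketch, with an arbitrarily chosen cut-off of ωu = 10 rad/s:

[z,p,k] = buttap(3);       % normalised 3rd-order low-pass prototype, wc = 1 rad/s
[B,A] = zp2tf(z,p,k);
wu = 10;                   % desired cut-off frequency, [rad/s] (arbitrary choice)
[Bhp,Ahp] = lp2hp(B,A,wu); % substitute s -> wu/s, the last row of Table 5.1
bode(tf(B,A), tf(Bhp,Ahp)) % compare prototype and transformed high-pass filter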
5.2.3 CLASSICAL ANALOGUE FILTERS
Two common families of analogue filters superior to the simple cascaded first-order filter network are the Butterworth and the Chebyshev filters. Both of these filters require slightly more computation to design, but once implemented, require no more computation time or hardware components than any other filter of the same order. Historically these were important continuous-time filters, and to use discrete filter designs we must convert the description into the z plane using the bilinear transform.

In addition to the Butterworth and Chebyshev filters, there are two slightly less common classic analogue filters: the Chebyshev type II (sometimes called the inverse Chebyshev filter), and the elliptic or Cauer filter. All four filters are derived by approximating the sharp ideal filter in different ways. For the Butterworth, the filter is constructed in such a way as to maximise the number of derivatives equal to zero at ω = 0 and ω = ∞. The Chebyshev filter exhibits minimum error over the pass-band, while the type II version minimises the error over the stop-band. Finally the elliptic filter uses a Chebyshev mini-max approximation in both pass-band and stop-band. The calculation of the elliptic filter requires substantially more computation than the other three. The SIGNAL PROCESSING toolbox contains both discrete and continuous design functions for all four analogue filter families.
BUTTERWORTH FILTERS

The magnitude-squared function of the Butterworth filter is

|H_B(\omega)|^2 = \frac{1}{1 + \left( \omega/\omega_c \right)^{2n}}   (5.2)
where n is the filter order and ωc is the desired cut-off frequency. If we substitute s = iω, we find that the 2n poles of the squared magnitude function, Eqn. 5.2, are

s_p = (-1)^{1/(2n)} \, (i\omega_c)

That is, they are spaced equally π/n radians apart around a circle of radius ωc. Given that we desire a stable filter, we choose just the n stable poles of the squared magnitude function for H_B(s). The transfer function of the general Butterworth filter in factored form is

H_B(s) = \prod_{k=1}^{n} \frac{1}{s - p_k}   (5.3)

where

p_k = \omega_c \exp\left( -i\pi \left( \frac{1}{2} + \frac{2k-1}{2n} \right) \right)   (5.4)
Thus all the poles are equally spaced around the stable half-circle of radius ωc in the s plane. The MATLAB function buttap (Butterworth analogue low-pass filter prototype) uses Eqn. 5.4 to generate the poles.

We can plot the poles of a 5th-order Butterworth filter, after converting the zero-pole-gain form to a transfer function form, using the pole-zero map function pzmap, or of course just plot the poles directly.
[z,p,k] = buttap(5)   % 5th-order Butterworth analogue low-pass prototype
pzmap(zpk(z,p,k))     % plot the pole-zero map
Note how the poles of the Butterworth filter are equally spread around the stable half-circle in Fig. 5.9. Compare this circle with the ellipse pole-zero map for the Chebyshev filter shown in Fig. 5.10.

[Figure 5.9: Pole-zero map of a 5th-order Butterworth filter: the poles lie equally spaced on the stable half-circle of radius ωc.]
Alternatively, using only real arithmetic, the n poles of the Butterworth filter are p_k = a_k ± b_k i, where

a_k = -\omega_c \sin\theta, \qquad b_k = \omega_c \cos\theta   (5.5)

with

\theta = \frac{(2k-1)\pi}{2n}   (5.6)
Expanding the polynomial given in Eqn. 5.3 gives the general Butterworth filter template in the continuous time domain,

H_B(s) = \frac{1}{ c_n \left( \frac{s}{\omega_c} \right)^n + c_{n-1} \left( \frac{s}{\omega_c} \right)^{n-1} + \cdots + c_1 \frac{s}{\omega_c} + 1 }   (5.7)

where the c_i are the coefficients of the filter polynomial. Using Eqn. 5.4, the coefficients for the general nth-order Butterworth filter can be evaluated.
Taking advantage of the complex arithmetic abilities of MATLAB, we can generate the poles by typing Eqn. 5.4 almost verbatim. Note how I use 1i (that is, I type the numeral 1 followed by an i with no intervening space) to ensure I get a complex number.
Listing 5.1: Designing Butterworth filters using Eqn. 5.4.
>>n = 4;       % Filter order
>>wc = 1.0;    % Cut-off frequency, wc
>>k = [1:n]';
>>p = wc*exp(-pi*1i*(0.5 + (2*k-1)/2/n)); % Poles, Eqn. 5.4
>>c = poly(p); % should force real
>>c = real(c)  % look at the coefficients
c =
    1.0000    2.6131    3.4142    2.6131    1.0000
Once we have the poles, it is a simple matter to expand the polynomial out and find the coefficients. In this instance, owing to numerical round-off, we are left with a small complex residual which we can safely delete. The 4th-order Butterworth filter with a normalised cut-off frequency is

H_B(s) = \frac{1}{s^4 + 2.6131s^3 + 3.4142s^2 + 2.6131s + 1}
Of course, using the SIGNAL PROCESSING toolbox we could duplicate the above with a single call to butter.

We can design a low-pass filter at an arbitrary cut-off frequency by scaling the normalised analogue prototype filter using Table 5.1. Listing 5.2 gives an example of the design of a low-pass filter with a cut-off of fc = 800 Hz.
Listing 5.2: Designing a low-pass Butterworth filter with a cut-off frequency of fc = 800 Hz.
ord = 2;                                    % Desired filter order
[Z,P,K] = buttap(ord); Gc = tf(zpk(Z,P,K)); % 2nd-order Butterworth filter prototype
Fc = 800; wc = 2*pi*Fc;                     % Cut-off frequency, [rad/s]
[B,A] = zp2tf(Z,P,K);
[B2,A2] = lp2lp(B,A,wc); Gc2 = tf(B2,A2);   % scale prototype: s -> s/wc (assumed step; original line lost)
p = bodeoptions; p.FreqUnits = 'Hz';        % Set Bode frequency axis units as Hz (not rad/s)
bode(Gc,Gc2,p)
hline(-90); vline(Fc)
A high-pass filter can be designed in much the same way, but this time substituting ωc/s for s.

Listing 5.3: Designing a high-pass Butterworth filter with a cut-off frequency of fc = 800 Hz.
ord = 2; [Z,P,K] = buttap(ord); Gc = tf(zpk(Z,P,K)); % Low-pass Butterworth prototype
Fc = 800; wc = Fc*pi*2;                              % Frequency specifications
[B,A] = zp2tf(Z,P,K);
[B2,A2] = lp2hp(B,A,wc); Gc2 = tf(B2,A2);            % substitute s -> wc/s (assumed step; original line lost)
The frequency response of the Butterworth filter has a reasonably flat pass-band, and then falls away monotonically in the stop-band. The high-frequency asymptote has a slope of −n on a log-log scale. Other filter types such as the Chebyshev (see following section) or elliptic filters are contained in the SIGNAL PROCESSING toolbox, and [44] details their use.
Algorithm 5.1 Butterworth filter design

Given a cut-off frequency, ωc (in rad/s), and desired filter order, n:

1. Compute the n poles of the filter, H_B(s), using

p_k = \omega_c \exp\left( -i\pi\left( \frac{1}{2} + \frac{2k-1}{2n} \right) \right), \qquad k = 1, 2, \ldots, n

2. Construct the filter H_B(s) either using the poles directly in factored form, or expanding the polynomial

H_B(s) = \frac{1}{\text{poly}(p_k)}

3. Convert the low-pass filter to high-pass, band-pass etc. by substituting for s as given in Table 5.1.

4. Convert to a discrete filter using c2dm if desired, ensuring that the cut-off frequency is much less than the Nyquist frequency; ωc ≪ ωN = π/T.
Algorithm 5.1 suffers from the flaw that it assumes the filter designer has already selected a cut-off frequency, ωc, and filter order, n. However, for practical cases it is characteristics such as pass-band ripple and stop-band attenuation that are specified by the customer, such as given in Fig. 5.6, and not the filter order. We need some way to calculate the minimum filter order (to construct the cheapest filter) from the specified magnitude response characteristics. Once that is established, we can proceed with the filter design calculations using the above algorithm. Further descriptions of the approximate order are given in [92] and [196]. The functionally equivalent routines buttord and cheb1ord in the SIGNAL PROCESSING toolbox attempt to find a suitable order for a set of specified design constraints.
In the case of Butterworth filters, the filter order is given by

n = \left\lceil \frac{ \log_{10}\left[ \left( 10^{R_p/10} - 1 \right) \big/ \left( 10^{A_s/10} - 1 \right) \right] }{ 2\log_{10}\left( \omega_p/\omega_s \right) } \right\rceil   (5.8)

where R_p is the pass-band ripple, A_s is the stop-band ripple, and ω_p, ω_s are the pass- and stop-band frequencies respectively. In general, the order given by the value inside the ceiling brackets in Eqn. 5.8 will not be an exact integer, so we should select the next largest integer to ensure that we meet or exceed the customer's specifications. MATLAB's ceiling function ceil is useful here.
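For example, suppose the customer specifies at most Rp = 1 dB of ripple below ωp = 10 rad/s and at least As = 40 dB of attenuation above ωs = 20 rad/s (invented numbers). Evaluating Eqn. 5.8:

Rp = 1; As = 40;   % pass-band ripple and stop-band attenuation, [dB]
wp = 10; ws = 20;  % pass- and stop-band edge frequencies, [rad/s]
n = ceil(log10((10^(Rp/10)-1)/(10^(As/10)-1))/(2*log10(wp/ws))) % Eqn. 5.8, gives n = 8
[n2,wn] = buttord(wp, ws, Rp, As, 's')  % the toolbox equivalent, for comparison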
Problem 5.1

2. Draw a Bode diagram for a fourth-order Butterworth filter with ωc = 1 rad/s. On the same plot, draw the Bode diagram for the filter

H(s) = \frac{1}{\left( s/\omega_c + 1 \right)^4}

with the same ωc. Which filter would you choose for a low-pass filtering application?
Hints: Look at problem A-4-5 (DCS) and use the MATLAB function bode. Plot the Bode diagrams especially carefully around the corner frequency. You may wish to use the MATLAB commands freqs or freqz.

3. Find and plot the poles of a continuous-time 5th-order Butterworth filter. Also plot a circle centered on the origin with radius ωc.

4. Plot the frequency response (Bode diagram) of both a high-pass and a low-pass 4th-order Butterworth filter with a cut-off frequency of 2 rad/s.
CHEBYSHEV FILTERS

Related to the Butterworth filter family is the Chebyshev family. The squared magnitude function of the Chebyshev filter is

|H_C(\omega)|^2 = \frac{1}{1 + \epsilon^2 C_n^2\!\left( \omega/\omega_c \right)}   (5.9)

where again n is the filter order, ωc is the nominal cut-off frequency, and C_n is the nth-order Chebyshev polynomial. (For a definition of Chebyshev polynomials, see for example [204], and in particular the cheb_pol function.) The design parameter ε is related to the amount of allowable pass-band ripple, δ,

\delta = 1 - \frac{1}{\sqrt{1 + \epsilon^2}}   (5.10)

Alternatively, the filter design specifications may be given in decibels of allowable pass-band ripple, r_dB,

\delta = 1 - 10^{-r_{dB}/20}   (5.11)
This gives the Chebyshev filter designer two degrees of freedom (filter order n, and ε or equivalently δ), rather than just the order as in the Butterworth case. Similar to the Butterworth filter, the poles of the squared magnitude function, Eqn. 5.9, are located equally spaced along an ellipse in the s plane, with a minor axis radius of aωc aligned along the real axis, where

a = \frac{\gamma^{1/n} - \gamma^{-1/n}}{2}   (5.12)

with

\gamma = \frac{1}{\epsilon} + \sqrt{1 + \frac{1}{\epsilon^2}}   (5.13)

and a major axis radius of bωc aligned along the imaginary axis, where

b = \frac{\gamma^{1/n} + \gamma^{-1/n}}{2}   (5.14)

Since the poles p_k of the Chebyshev filter H_C(s) are equally spaced along the stable half of the ellipse (refer to Fig. 5.10 for a verification of this), the real and imaginary parts are given by

\Re(p_k) = a\omega_c \cos\left( \frac{\pi}{2} + \frac{(2k+1)\pi}{2n} \right), \quad \Im(p_k) = b\omega_c \sin\left( \frac{\pi}{2} + \frac{(2k+1)\pi}{2n} \right), \quad k = 0, 1, \ldots, n-1   (5.15)

and the corresponding transfer function is given by

H_C(s) = \frac{K}{\prod_{k=0}^{n-1} (s - p_k)}

where the numerator K is chosen to make the steady-state gain equal to 1 if n is odd, or 1/\sqrt{1+\epsilon^2} for even n. Algorithm 5.2 summarises the design of a Chebyshev filter with ripple in the pass-band.
Algorithm 5.2 Chebyshev type I filter design

Given a cut-off frequency, ωc, pass-band ripple, δ (or r_dB), and filter order, n:

1. Calculate the ripple factor

\epsilon = \sqrt{10^{r_{dB}/10} - 1} = \sqrt{\frac{1}{(1-\delta)^2} - 1}

2. Calculate the radii of the minor, a, and major, b, axes of the ellipse using Eqns 5.12–5.14.

3. Calculate the n stable poles from Eqn. 5.15 and expand out to form the denominator of H_C(s).

4. Choose the numerator K such that the steady-state gain is

\begin{cases} 1/\sqrt{1+\epsilon^2}, & n \text{ even} \\ 1, & n \text{ odd} \end{cases}

We can calculate the gain of a transfer function using the final value theorem, which assuming a unit step gives

K = \begin{cases} a_0/\sqrt{1+\epsilon^2}, & n \text{ even} \\ a_0, & n \text{ odd} \end{cases}

where a_0 is the coefficient of s^0 in the denominator of H_C(s).
We can test this algorithm by designing a Chebyshev type I filter with 3 dB ripple in the pass-band (r_dB = 3 dB, or δ = 1 − 10^{−3/20}), and a cross-over frequency of ωc = 2 rad/s.
Listing 5.4: Designing Chebyshev filters
n = 4; wc = 2;             % Order & cut-off frequency, n, wc
r = 3;                     % Ripple. Note 3dB is approx 1/sqrt(2)
rn = 1/n;
e = sqrt(10^(r/10)-1);     % ripple factor, step 1 of Algorithm 5.2
g = 1/e + sqrt(1 + 1/e/e); % gamma, Eqn. 5.13
minora = (g^rn - g^(-rn))/2; % minor axis, Eqn. 5.12
majorb = (g^rn + g^(-rn))/2; % major axis, Eqn. 5.14
k = [0:n-1]'; n2 = 2*n;
realp = minora*wc*cos(pi/2 + pi*(2*k+1)/n2); % Eqn. 5.15
imagp = majorb*wc*sin(pi/2 + pi*(2*k+1)/n2);
polesc = realp + 1i*imagp;
Ac = real(poly(polesc));   % expand to the denominator polynomial
Bc = Ac(n+1);              % numerator gain, K = a0
if ~rem(n,2)               % for even n, K = a0/sqrt(1+e^2)
  Bc = Bc/sqrt(1+e^2);
end % if
You may like to compare this implementation with the equivalent S IGNAL P ROCESSING toolbox
version of the type 1 Chebyshev analogue prototype by typing type cheb1ap.
We can verify in Fig. 5.10 that the computed poles do indeed lie on the stable half of the ellipse with minor and major axes a and b respectively. I used the axis('equal') command for this figure, forcing the x and y axes to be equally, as opposed to conveniently, spaced to highlight the ellipse. Compare this with the poles of a Butterworth filter plotted in Fig. 5.9.
As the above relations indicate, the generation of Chebyshev filter coefficients is slightly more complex than for Butterworth filters, although they are also part of the SIGNAL PROCESSING toolbox. Further design relations are given in [154, p221]. We can compare our algorithm with the equivalent toolbox function cheby1, which also designs type I Chebyshev filters. We want the continuous description, hence the optional 's' parameter. Alternatively we could use the more specialised cheb1ap function, which designs an analogue low-pass filter prototype and is in fact called by cheby1.
Listing 5.5: Computing a Chebyshev filter
n = 4; wc = 2;   % Order of Chebyshev filter; cut-off, wc = 2 rad/s
r = 3;           % dB of ripple, 3dB is approx 1/sqrt(2)
[Bc,Ac] = cheby1(n,r,wc,'s'); % design a continuous filter
[Hc,w] = freqs(Bc,Ac);        % compute the frequency response
semilogx(w,abs(Hc));

[Figure 5.10: The poles of the 4th-order Chebyshev filter lie equally spaced on the stable half of an ellipse.]
These filter coefficients should be identical to those obtained following Algorithm 5.2. You should exercise some care when attempting to design extremely high-order IIR filters. While the algorithm is reasonably stable from a numerical point of view if we restrict ourselves to the factored form, we may see unstable poles once we have expanded the polynomial, owing to numerical round-off, for orders larger than about 40. Of course, in applications, while it is common to require high orders for FIR filters, it is unusual to need larger than about n = 10 for IIR filters.
Fig. 5.11 compares the magnitudes of both the Butterworth and Chebyshev filters, and was generated using, in part, the above code. For this plot I have broken from convention and plotted the magnitude on a linear rather than logarithmic scale, to better emphasise the asymptote towards zero. Note that for even-order Chebyshev filters, such as the 4th-order filter presented in Fig. 5.11, the steady-state gain is not equal to 1, but rather 1 minus the allowable pass-band ripple. If this is a problem, one can compensate by increasing the filter gain. For odd-order Chebyshev filters the steady-state gain is 1, which is perhaps preferable.

The difference between a maximally flat pass-band filter such as the Butterworth and an equal-ripple filter such as the Chebyshev is clear. [133, p176] claims that most designers prefer the Chebyshev filter owing to the sharper transition across the stop-band, provided the inevitable ripple is acceptable.
Figure 5.11: Comparing the magnitude characteristics of 4th-order Butterworth and Chebyshev filters. The Chebyshev filter is designed to have a ripple equal to 3 dB (dotted line), which is approximately 30%. Both filter cut-offs are at ωc = 2 rad/s.

In addition to the Butterworth and Chebyshev filters, there are two more classic analogue filters: the Chebyshev type II and the elliptic or Cauer filter. All four filters are derived by approximating the sharp ideal filter in different ways. For the Butterworth, the filter is constructed in such a way as to maximise the number of derivatives equal to zero at ω = 0 and ω = ∞. The Chebyshev filter exhibits minimum error over the pass-band, while the type II version minimises the error over the stop-band. Finally the elliptic filter uses a Chebyshev mini-max approximation in both pass-band and stop-band. The calculation of the elliptic filter requires substantially more computation than the other three, although all four can be computed using the SIGNAL PROCESSING toolbox.
5.3 DISCRETE FILTERS
Analogue filters are fine for small applications and for situations where aliasing may be a problem, but they are more expensive and more inflexible than the discrete equivalent. We would much rather fabricate a digital filter, where the only difference between filter orders is a slightly longer tapped delay or shift register, and proportionally slower computation, albeit with a longer word length. Discrete filters are also easy to implement in MATLAB using the filter command.
To design a digital filter, one option is to start the design from the classical analogue filters and use a transformation such as the bilinear transform, described in chapter 2, to the discrete domain. In fact this is the approach that MATLAB uses, in that it first designs the filter in the continuous domain, then employs bilinear to convert to the discrete domain. Digital filter design is discussed further in section 5.3.2.
Traditionally, workers in the digital signal processing (DSP) area have referred to two types of filters: one where the response to an impulse is finite, the other where the response is theoretically of infinite duration. The latter, the recursive filter or Infinite Impulse Response (IIR) filter, is one where previous filtered values are used to calculate the current filtered value. The former, the non-recursive filter or Finite Impulse Response (FIR) filter, is one where the filtered value is dependent only on past values of the input, hence A(z) = 1. FIR filters find use in some continuous signal processing applications where the operation is to be run for very long sustained periods, where the alternative IIR filters are sensitive to numerical blow-up owing to the recursive nature of the filtering algorithm. FIR filters will never go unstable, since they have only zeros, which cannot cause instability by themselves. FIR filters also exhibit what is known as linear phase lag; that is, each of the signal's frequencies is delayed by the same amount of time. This is a desirable attribute, especially in communications, and one that a causal IIR filter cannot match.

However, better filter performance is generally obtained by using IIR filters for the same number of discrete components. In applications where the FIR filter is necessary, 50 to 150 terms are common. To convert approximately from an IIR to a FIR filter, divide A(z) into B(z) using long division, and truncate the series at some suitably large value. Normally, the filter would be implemented in a computer in the digital form as a difference equation, or perhaps in hardware using a series of delay elements and shift registers.
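The long division mentioned above is equivalent to computing the impulse response of the IIR filter, so it is one line in MATLAB. A minimal sketch with an invented first-order example:

B = 0.2; A = [1 -0.8];               % an invented first-order IIR filter
N = 50;                              % number of FIR terms to retain
h = filter(B, A, [1; zeros(N-1,1)]); % long division of B(z)/A(z) via the impulse response

u = randn(200,1);                    % test input
y_iir = filter(B, A, u);             % original IIR filter
y_fir = filter(h, 1, u);             % truncated FIR approximation
max(abs(y_iir - y_fir))              % small, since the neglected tail decays as 0.8^k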
5.3.1 A LOW-PASS FILTERING APPLICATION
Consider the situation where you are asked to filter a measurement signal from a sensitive pressure transducer in a level leg in a small tank. The level in the tank is oscillating about twice a second, but this measurement is corrupted by the mains electrical frequency of 50 Hz.

Suppose we are sampling at fs = 1 kHz, which corresponds to a sample time of T = 0.001 s and a Nyquist frequency of fN = fs/2 = 500 Hz. We should note that this sampling rate is sufficient to avoid aliasing problems, since it is above twice the highest frequency of interest, 2 × 50 Hz.

We can simulate this scenario in MATLAB. First we create a time vector t, and then the two signals: one is the true measurement y1, and the other is the corrupting noise from the mains, y2. The actual signal, as received by your data logger, is the sum of these two signals, y = y1 + y2. The purpose of filtering is to extract the true signal y1 from the corrupted y as closely as possible.
Ts = 1/1000;              % Sample time (sec)
t = [0:Ts:1]';            % time vector
y1 = sin(2*pi*2*t);       % true signal
y2 = 0.1*sin(2*pi*50*t);  % corrupting 50 Hz noise
y = y1+y2;                % what we read
plot(t,y)
Clearly, even from just the time-domain plot, the signal y is composed of two dominant frequencies. If a frequency-domain plot were to be created, it would be clearer still. To reconstruct the true level reading, we need to filter the measured signal y. We could design a filter with a cut-off frequency of fc = 30 Hz. This should in theory pass the slower y1 signal, but attenuate the corrupting y2. To implement such a digital filter we must:

1. select a filter type, say Butterworth (refer page 205),

2. select a filter order, say 3, and

3. calculate the dimensionless frequency related to the specified cut-off fc = 30 Hz and the sample time T (or Nyquist frequency fN),

\omega^* = \frac{f_c}{f_N} = 2Tf_c = 2 \times 10^{-3} \times 30 = 0.06
We are now ready to use the SIGNAL PROCESSING toolbox to design our discrete IIR filter, or perhaps build our own, starting with a continuous filter and transforming to the discrete domain as described on page 206.
>> fc = 30;             % Required cut-off frequency, [Hz]
>> wc = 2*Ts*fc;        % Dimensionless cut-off frequency
>> [B,A] = butter(3,wc) % Find the A and B polynomials
B =
    0.0007    0.0021    0.0021    0.0007
A =
    1.0000   -2.6236    2.3147   -0.6855
>> yf = filter(B,A,y);  % filter the measured data
>> plot(t,y,t,yf,'--',t,y1) % any better?
Once having established the discrete filter coefficients, we can proceed to filter our raw signal and compare with the unfiltered data, as shown in Fig. 5.12.

[Figure 5.12: The raw data compared with the 3rd- and 6th-order Butterworth filtered signals.]
The filtered data in Fig. 5.12 looks smoother than the original, but it really is not smooth enough. We should try a higher-order filter, say a 6th-order filter.
[B6,A6] = butter(6,wc); % 6th order, same cut-off
yf6 = filter(B6,A6,y);  % smoother?
plot(t,[y,yf,yf6])      % look any better?
This time the 6th order filtered data in Fig. 5.12 is acceptably smooth, but there is quite a noticeable
phase lag such that the filtered signal is behind the original data. This is unfortunate, but shows
that there must be a trade off between smoothness and phase lag for all causal or online filters.
The difference in performance between the two digital filters can be shown graphically by plotting the frequency response of the filter. The easiest way to do this is to convert the filter to a transfer function and then use bode.

H3 = tf(B,A,Ts);   % 3rd-order discrete filter
H6 = tf(B6,A6,Ts); % 6th-order discrete filter
bode(H3,H6);       % See frequency response plot in Fig. 5.13.
Alternatively we could use freqz and construct the frequency response manually.
[h,w] = freqz(B,A,200);      % discrete frequency response
[h6,w6] = freqz(B6,A6,200);
w = w/pi*500;                % convert from rad/s to Hz
w6 = w6/pi*500;
loglog(w,abs(h), w6,abs(h6)) % See magnitude plot in Fig. 5.13.
The sixth-order filter in Fig. 5.13 has a much steeper roll-off after 30 Hz, and so more closely approximates the ideal filter. Given only this curve, we would select the higher-order (6th) filter. To plot the second half of the Bode diagram, we must unwrap the phase angle, and for convenience we will also convert the phase angle from radians to degrees.
ph = unwrap(angle(h))*360/2/pi;   % phase in degrees
ph6 = unwrap(angle(h6))*360/2/pi;
semilogx(w,ph,w6,ph6)             % plot the 2nd half of the Bode diagram

[Figure 5.13: Amplitude ratio and phase angle of the 3rd- and 6th-order discrete Butterworth filters, with the cut-off frequency fc and the Nyquist frequency fN marked.]
The phase lag at high frequencies for the third-order filter is 3 × 90 = 270°, while the phase lag for the sixth-order filter is 6 × 90 = 540°. The ideal filter has zero phase lag at all frequencies.

The amplitude ratio plot in Fig. 5.13 asymptotes to a maximum frequency of 500 Hz. This may seem surprising, since the Bode diagram of a true continuous filter does not asymptote to any limiting frequency. What has happened is that Fig. 5.13 is the plot of a discrete filter, and the limiting frequency is the Nyquist frequency (π/T). However, the discrete Bode plot is a good approximation to the continuous filter up to about half the Nyquist frequency.
A MATRIX OF FILTERS

Fig. 5.14 compares the performance of nine filters applied to the two-frequency-component signal from section 5.3.1. The nine simulations in Fig. 5.14 show the performance for three different orders (2, 5 and 8) in the columns, at three different cut-off frequencies (20, 50 and 100 Hz) in the rows. This figure neatly emphasises the trade-off between smoothness and phase lag.
5.3.2 APPROXIMATING CONTINUOUS FILTERS
As mentioned in the introduction, one way to design digital filters is to first start with a continuous design, and then transform the filter to the digital domain. The four common classical continuous filter families, Butterworth, Chebyshev, elliptic and Bessel, are all strictly continuous filters, and if we are to implement them in a computer or a DSP chip, we must approximate these filters with a discrete filter.

To convert from a continuous filter H(s) to a digital filter H(z), we could use the bilinear transform as described in section 2.5.2. Since this transformation method is approximate, we will find that some frequency distortion will be introduced. However, we can minimise this distortion by adjusting the frequency response of the filter H(s) before we transform to the discrete approximation. This procedure is called using the bilinear transform with frequency pre-warping.

Figure 5.14: The performance of various Butterworth filters of different orders 2, 5, and 8 (left to right) and different cut-off frequencies, ωc, of 20, 50 and 100 Hz (top to bottom), applied to a two-frequency-component signal.
With the bilinear transform, the analogue frequency ω_a that corresponds to a given digital frequency ω_d is

\omega_a = \frac{2}{T} \tan\left( \frac{\omega_d T}{2} \right)   (5.16)

This means that the magnitude of H(s) at frequency ω_a, |H(iω_a)|, is equal to the magnitude of H(z) at frequency ω_d, |H(e^{iω_d T})|. Suppose we wish to discretise the continuous filter H(s) using the bilinear transform, but we also would like the digital approximation to have the same magnitude at the corner frequency (ωc = 10); then we must substitute using Eqn. 5.16.
Example of the advantages of frequency pre-warping.
Suppose we desire to fabricate an electronic filter for a guitar tuner. In this application the guitar
player strikes a possibly incorrectly tuned string, and the tuner box recognises which string is
struck, and then subsequently generates the correct tone. The player can then adjust the tuning of
the guitar. Such a device can be fabricated in a DSP chip using a series of bandpass filters tuned
to the frequencies of each string (give or take a note or two).
We will compare a continuous bandpass filter centered around a frequency of 440 Hz (concert
pitch A), with digital filters using a sampling rate of 2kHz. Since we are particularly interested
in the note A, we will take special care that the digital filter approximates the continuous filter
around f = 440 Hz. We can do this by frequency pre-warping. The frequency responses of the three filters are shown in Fig. 5.15.
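The construction of the continuous band-pass prototype Gc is not shown in this extract, so the lines below are an assumed stand-in (a 2nd-order band-pass centred on 440 Hz with an invented 30 Hz bandwidth), sufficient to make the following commands run:

s = tf('s');
wo = 2*pi*440;                   % centre frequency: concert pitch A, [rad/s]
bw = 2*pi*30;                    % an assumed 30 Hz bandwidth
Gc = (bw*s)/(s^2 + bw*s + wo^2); % 2nd-order continuous band-pass filter
Ts = 1/2000;                     % 2 kHz sampling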
Gd = c2d(Gc,Ts,'tustin')
% Now sample it using bilinear transform
Gd_pw = c2d(Gc,Ts,'prewarp',440*2*pi) % Now sample it with pre-warping
Fig. 5.15 highlights that the uncompensated digital filter (dashed) actually bandpasses signals in
the range of 350 to 400Hz rather than the 420 to 450Hz required. This significant error is addressed
by designing a filter to match the continuous filter at f = 440Hz using frequency pre-warping
(dotted).
Figure 5.15: Comparing a continuous bandpass filter with a discrete filter and discrete filter with
frequency pre-warping (dashed). Note the error in the passband when using the bilinear transform without frequency pre-warping.
5.3.3 EFFICIENT HARDWARE IMPLEMENTATIONS
A digital filter is implemented as the difference equation

y_k = -\sum_{i=1}^{n_a} a_i y_{k-i} + \sum_{i=0}^{n_b} b_i u_{k-i}   (5.18)
meaning now we can implement this using operational amplifiers and delay elements. One scheme, following Eqn. 5.18 directly, is given in Fig. 5.16 and is known as the Direct Form I. It is easy to verify that the Direct Form I filter has the transfer function

H(z^{-1}) = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + \cdots + b_{n_b} z^{-n_b}}{1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_{n_a} z^{-n_a}}   (5.19)

[Figure 5.16: The Direct Form I filter structure: a tapped delay line through the numerator coefficients b_0, ..., b_nb feeding a tapped delay line through the denominator coefficients a_1, ..., a_na.]
Alternatively, defining the intermediate variable

W(z^{-1}) = \frac{1}{A(z^{-1})} U(z^{-1})

means that we can describe Eqn. 5.19 as one pure infinite impulse response (IIR) and one finite impulse response (FIR) process,

Y(z^{-1}) = B(z^{-1}) W(z^{-1})   (5.20)

A(z^{-1}) W(z^{-1}) = U(z^{-1})   (5.21)
A block diagram following this topology is given in Fig. 5.17 which is known as Direct Form II,
DFII.
Figure 5.17: An IIR filter with a minimal number of delays, Direct Form II
We could have reached the same topological result by noticing that if we swap the order of the tapped delay lines in the Direct Form I structure, Fig. 5.16, then the two delay lines are separated by a unity-gain branch. Thus we can eliminate one of the delay lines, and save the hardware cost of half the delays.

Further forms of the digital filter are possible, and are used in special circumstances, such as when we have parallel processes or when we are concerned with the quantisation of the filter coefficients. One special case is described further in section 5.3.4.
With MATLAB we would normally use the optimised filter command to simulate IIR filters, but this option is not available when using, say, C on a microprocessor or DSP chip. The following script implements the topology given in Fig. 5.17 for the example filter

\frac{B(z^{-1})}{A(z^{-1})} = \frac{(z^{-1} - 0.4)(z^{-1} - 0.98)}{(z^{-1} - 0.5)(z^{-1} - 0.6)(z^{-1} - 0.7)(z^{-1} - 0.8)(z^{-1} - 0.9)}
B = poly([0.4 0.98]);          % Example stable process
A = poly([0.5:0.1:0.9]);
U = [ones(30,1); randn(50,1)]; % example input profile
Yx = dlsim(B,A,U);             % Test Matlab's version

% Version with 1 shift element -----------------
nb = length(B); A(1) = []; na = length(A); % remember A = A-1
nw = max(na,nb);
shiftw = zeros(size(1:nw))';   % initialise with zeros
Y = [];
for i=1:length(U);
  w = -A*shiftw(1:na) + U(i);  % Eqn. 5.21
  shiftw = [w; shiftw(1:nw-1)];
  Y(i,1) = B*shiftw(1:nb);     % Eqn. 5.20
end % for

t = [0:length(U)-1]'; [tt,Ys] = stairs(t,Y); [tt,Yx] = stairs(t,Yx);
plot(tt,Ys,tt,Yx,'--');        % plotting verification
MATLAB's dlsim gives a similar output, but simply shifted one sample to the right.
5.3.4 NUMERICAL ISSUES IN FILTER IMPLEMENTATION
Due to commercial constraints, for many digital filtering applications we want to be able to realise cheap, high-quality filters running at high speed. Essentially this means implementing high-order IIR filters on low-cost hardware with short word lengths. The problem is that high-order IIR filters are very sensitive to quantisation effects, especially when implemented in the expanded polynomial form such as DFII.

One way to reduce the effects of numerical round-off is to rewrite the DFII filter in factored form as a cascaded collection of second-order filters, as shown in Fig. 5.18. Each individual filter is a second-order filter, as shown in Fig. 5.19. These are known as bi-quad filters, or second-order sections (SOS). For an nth-order filter we will have ⌈n/2⌉ sections; that is, for filters with an odd order, one of the second-order sections will actually be first order.
We can convert higher-order filters to a collection of second-order sections using the tf2sos or equivalent commands in MATLAB. You will notice that the magnitude and range of the coefficients of the SOS filter realisation is much smaller than for the equivalent expanded polynomial filter.

The separation of the factors into groups of second-order sections for a given high-order filter is not unique, but one near-optimal way is to pair the poles nearest the unit circle with the zeros nearest those poles, and so on. This is the strategy that tf2sos uses, and it is explained in further detail in the help documentation.
Listing 5.6 demonstrates how we can optimally decompose a 7th order Butterworth filter, into
four second-order sections. We start by designing a filter in expanded form.
Figure 5.18: Cascaded second-order sections to realise a high-order filter. See also Fig. 5.19.
[Figure 5.19: A single second-order section (bi-quad) realised in Direct Form II.]
We can see the 7th-order filter is rather unwieldy in its expanded form,

\frac{B}{A} = \frac{0.2363z^7 + 1.6542z^6 + 4.9626z^5 + 8.2710z^4 + 8.2710z^3 + 4.9626z^2 + 1.6542z + 0.2363}{z^7 + 4.1823z^6 + 7.8717z^5 + 8.5309z^4 + 5.7099z^3 + 2.3492z^2 + 0.5483z + 0.0558}
but if we convert this filter to four second-order sections,

>> [SOS,G] = tf2sos(B,A) % Convert 7th order polynomial to 4 second-order sections
SOS =
    1.0000    0.9945         0    1.0000    0.5095         0
    1.0000    2.0095    1.0095    1.0000    1.0578    0.3076
    1.0000    2.0026    1.0027    1.0000    1.1841    0.4636
    1.0000    1.9934    0.9934    1.0000    1.4309    0.7687
G =
    0.2363
then the numbers look far more numerically consistent. Each row of the second-order section matrix output from tf2sos is ordered as

sos = [ b0  b1  b2  a0  a1  a2 ]

representing the second-order transfer function

\frac{b_0 + b_1 z^{-1} + b_2 z^{-2}}{1 + a_1 z^{-1} + a_2 z^{-2}}

From the listing above, we can see that the first of the four second-order sections is actually a first-order section. This is evident by noticing that the coefficients b2 and a2 are equal to zero for the first filter section.
The advantage when using second-order sections is that we can maintain reasonable accuracy even when using relatively short word lengths. Fig. 5.20(a), which is generated by Listing 5.7, illustrates that we cannot successfully run a 7th-order elliptic low-pass filter in single precision in the expanded polynomial form (DFII), one reason being that the resultant filter is actually unstable! However, we can reliably implement the same filter in single precision if we use four cascaded second-order sections. In fact for this application, the single-precision filter implemented in SOS is indistinguishable in Fig. 5.20(a) from the double-precision implementation, given that the average error is always less than 10⁻⁶.
Listing 5.7: Comparing DFII and SOS digital filters in single precision.
[B,A] = ellip(7,0.5,20,0.08); % Design a 7th order elliptic filter
U = ones(100,1);              % Consider a step response
Y = filter(B,A,U);            % Compute filter in double precision
Ys = filter(single(B), single(A), single(U)); % Compute filter in single precision
[sos,g] = tf2sos(B,A);        % convert to second-order sections
Yss = g*sosfilt(single(sos), single(U)); % SOS filter in single precision (assumed step; original lines lost)
It is also clear from the pole-zero plot in Fig. 5.20(b) that the single precision DFII implementation
has poles outside the unit circle due to the quantisation of the coefficients. This explains why this
single precision filter implemented in expanded DFII is unstable and should never be used.
5.3.5 FILTER VISUALISATION TOOL
The GUI routine fvtool from the SIGNAL PROCESSING toolbox is a quick and convenient way to visualise the characteristics of various filters. Listing 5.8 shows how we can use fvtool to validate the design of a filter with a stop band between 0.4 and 0.5 (normalised frequency), allowing at most a 3 dB wobble in the pass band and requiring at least 30 dB of attenuation in the stop band.
0.5
Figure 5.20: Comparing single precision second-order sections with filters in direct form II transposed form. Note that the direct form II filter is actually unstable when run in single precision.
Listing 5.8: Designing and visualising a 5th order elliptic band-stop filter.
2
N = 5
% Design a 5th order elliptic filter
% stop band is 0.4 to 0.5 with a -3dB wobble and a -30dB in the stop band
[B,A] = ellip(N,3,30,[0.4 0.5],'stop')
h = fvtool(B,A)
% See magnitude response in Fig. 5.21
The magnitude response displayed by the visualisation tool, shown in Fig. 5.21, makes it easy to check your filter design.
5.4 THE FOURIER TRANSFORM
The Fourier Transform (FT) is one of the most well known techniques in the mathematical and
scientific world today. However it is also one of the most confusing subjects in engineering,
partly owing to the plethora of definitions, normalising factors, and other incompatibilities found
in many standard text books on the subject. The Fourier transform is covered in many books on
digital signal processing, (DSP), such as [38, 133], and can also be found in mathematical physics
texts such as [106, 161]. The Fourier Transform is a mathematical transform equation that takes a
function in time and returns a function in frequency. The Fast Fourier Transform, or FFT, is simply
an efficient way to compute this transformation.
Chapter 2 briefly mentioned the term spectral analysis when discussing aliases introduced by the sampling process. Actually, a very learned authority, [177, p53], tells us that spectral formally means pertaining to a spectre or ghost, and then goes on to make jokes about ghosts and gremlins causing noisy problems. But nowadays spectral analysis means analysis based around the spectrum of something. We are going to use spectral analysis to investigate the frequency components of a measured signal, since in many cases these frequency components may reveal critical information hidden in the time-domain description. This decomposition of the time-domain signal into its frequency components is just one application of the FFT.
Commonly in mathematics, we can approximate any function f (t) using (amongst other things)
a Taylor series expansion, or a sine and cosine series expansion. If the original signal is periodic,
then the latter technique generally gives better results with fewer terms. This latter series is called
the Fourier series expansion.
The fundamental concept on which the Fourier transform (FT) is based is that almost any periodic wave form can be represented as a sum of sine waves of different periods and amplitudes,

f(t) = \sum_{n=0}^{\infty} A_n \sin(2\pi n f_0 t) + \sum_{n=0}^{\infty} B_n \cos(2\pi n f_0 t)   (5.22)
This means that we can decompose any periodic wave form with a fundamental frequency² f₀ into a sum of sinusoidal waves (which is known as frequency analysis), or alternatively we can construct complicated waves from simple sine waves (which is known as frequency synthesis). Since it is impractical to have an infinite number of terms, the series is truncated, and the right-hand side of Eqn. 5.22 now only approximates f(t). Naturally, as the number of terms tends to infinity, the approximation to the true signal f(t) improves. All we have to do now is to determine the constants A_n and B_n for a given function f(t). These coefficients can be evaluated by
A_n = \frac{2}{P} \int_{-P/2}^{P/2} f(t) \sin(2\pi n f_0 t)\, dt \qquad \text{and} \qquad B_n = \frac{2}{P} \int_{-P/2}^{P/2} f(t) \cos(2\pi n f_0 t)\, dt   (5.23)

and

B_0 = \frac{1}{P} \int_{-P/2}^{P/2} f(t)\, dt   (5.24)
In summary, f (t) tells us how the signal develops in time, while the constants An , Bn give us a
method to generate f (t).
We can re-write Eqn. 5.22 as a sum of cosines and a phase angle,

f(t) = \sum_{n=0}^{\infty} C_n \cos(2\pi n f_0 t + \phi_n)   (5.25)

where

C_n = \sqrt{A_n^2 + B_n^2} \qquad \text{and} \qquad \phi_n = \tan^{-1}\left( -\frac{A_n}{B_n} \right)   (5.26)

²Remember the relationship between the frequency f, the period P, and the angular velocity ω: f = 1/P = ω/(2π).
If we plot C_n vs f, this is called the frequency spectrum of f(t). Note that C_n has the same units as A_n and B_n, which have the same units as whatever the time series f(t) is measured in.
5.4.1 FOURIER TRANSFORM DEFINITIONS
Any function f(t) that is finite in energy can be represented in the frequency domain as F(ω) via the transform

F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt   (5.27)
          = \int_{-\infty}^{\infty} f(t) \cos(\omega t)\, dt - i \int_{-\infty}^{\infty} f(t) \sin(\omega t)\, dt   (5.28)

We can also convert back to the time domain by using the inverse Fourier transform,

f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\, e^{i\omega t}\, d\omega
You may notice that the transform pair is not quite symmetrical: we have a factor of 2π in the inverse relation. If we use frequency measured in Hertz, where ω = 2πf, rather than ω, we obtain a much more symmetrical, and easier to remember, pair of equations. [161, pp381–382] comment on this, and [141, p145] gives misleading relations.
The spectrum of a signal f(t) is defined as the square of the absolute value of the Fourier transform,

\Phi_f(\omega) = |F(\omega)|^2

Parseval's equation reminds us that the energy of a signal is the same whether we describe it in the frequency domain or the time domain,

\int_{-\infty}^{\infty} f(t)^2\, dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} |F(\omega)|^2\, d\omega
We can numerically approximate the function F(ω) by evaluating Eqn. 5.28 using a numerical integration technique such as Euler's method. Since this numerical integration is computationally expensive, and we must do this for every ω of interest, the calculation procedure is quite time consuming. I will refer to this technique as the Slow Fourier Transform (SFT). Remember that F(ω) is a complex function of angular velocity, ω, and the result is best displayed as a graph with ω, measured in radians/s, as the independent variable.
SOME HELPFUL PROPERTIES
Here are some helpful properties of the Fourier transform that may make analysis easier.
1. If f(−t) = −f(t), which means that f(t) is an odd function like sin t, then all the cosine terms disappear.

2. If f(−t) = f(t), which means that f(t) is an even function like cos t, then all the sine terms disappear.

3. If f(t) = −f(t + P/2), then the series will contain only odd components; n = 1, 3, 5, . . .

4. If f(t) = f(t + P/2), then the series will contain only even components; n = 2, 4, 6, . . .
Naturally we are not at liberty to change the shape of our measured time series f (t), but we may
be allowed to choose the position of the origin such that points 3 and 4 are satisfied.
T HE E ULER RELATIONS
We can also write the Fourier series using complex quantities with the Euler relations.
e^{i\theta} = \cos\theta + i \sin\theta \quad \text{and} \quad e^{-i\theta} = \cos\theta - i \sin\theta        (5.29)

where i = \sqrt{-1}, and

\cos\theta = \frac{e^{i\theta} + e^{-i\theta}}{2} \quad \text{and} \quad \sin\theta = \frac{e^{i\theta} - e^{-i\theta}}{2i}        (5.30)

The Fourier series, Eqn. 5.22, can then be written compactly as

f(t) = \sum_{n=-\infty}^{\infty} D_n\, e^{i 2\pi n f_0 t}        (5.31)

where

D_n = \frac{1}{P} \int_{-P/2}^{P/2} f(t)\, e^{-i 2\pi n f_0 t}\, dt        (5.32)
For example, the sine series coefficients of a square wave evaluate to

A_n = \frac{1}{n\pi}\left(1 - \cos n\pi\right)

which is zero for even n, giving the series

f(t) = \frac{2}{\pi} \sin 2\pi t + \frac{2}{3\pi} \sin 6\pi t + \cdots + \frac{2}{(2n-1)\pi} \sin 2\pi(2n-1)t + \cdots        (5.33)
As the number of terms (n) in Eqn. 5.33 increases, the approximation improves, as illustrated in Fig. 5.23 which shows the Fourier approximation for 1 through 4 terms. Later in §5.4.3 we will try to do the reverse: namely, given the square wave signal, we will extract numerically the coefficients of the sine and cosine series.
Figure 5.22: The original square wave signal f(t) (top), and an approximation to f(t) using one sine term (bottom).

[Figure 5.23: Fourier series approximations to the square wave using 1, 2, 3 and 4 terms.]
5.4.2 ORTHOGONALITY
Consider the function y(t) = sin(3t) sin(5t) which is plotted for 2 full periods in Fig. 5.24 (top). If we integrate it over one full period, we can see that the positive areas cancel out the negative areas, hence the total integral is zero. In other words,

\int_0^{P} y(t)\, dt = 0

The only time the integral over a full period will not be zero is when the entire curve is either all positive or all negative. The only time that can happen is when the two frequencies are the same, such as in Fig. 5.24 (bottom). Thus given

y = \sin(nt)\,\sin(mt)        (5.34)

we have

\int_0^{P} y(t)\, dt = 0 \quad \text{if } n \neq m, \qquad \int_0^{P} y(t)\, dt \neq 0 \quad \text{if } n = m

so the integral is non-zero only when n = m.
Figure 5.24: A plot of (top) y = sin(3t) sin(5t) and (bottom) y = sin(3t) sin(3t). In the top figure the shaded regions cancel giving a total integral of zero for the two periods.
t=[0:0.01:2*pi]';
% create a time vector exactly 2 Periods long
y3=sin(3*t); y5=sin(5*t);
plot(t,[y3.*y3, y3.*y5]) % plot both curves
We can numerically integrate these curves using the trapz command. The integral of the top curve in Fig. 5.24 is

>> trapz(t,y3.*y5)
ans = -1.9699e-05

which is zero to within the accuracy of the numerical integration, while the integral \int_0^{2\pi} \sin^2 3t\, dt evaluates to approximately π ≈ 3.1416 and is definitely non-zero. Note that for this example we have integrated over 2 periods, so we should divide the resultant integral by 2. This however does not change our conclusions.
PERIOD

The period of y = sin(nt) is P = 2π/n. The period of y = sin(nt) sin(nt) can be evaluated using an expansion for sin²(nt) as

y = \sin^2(nt) = \frac{1}{2}\left(1 - \cos(2nt)\right)

thus the period is P = π/n.
The period of the general expression y = sin(nt) sin(mt), where n and m are potentially different, is found by using the expansion

y = \frac{1}{2}\left[ -\cos(n+m)t + \cos(n-m)t \right]

which for our example y = sin(3t) sin(5t) becomes

y = \frac{1}{2}\left[ -\cos 8t + \cos 2t \right]        (5.35)

The period of the first term in Eqn. 5.35 is P_1 = \pi/4 and for the second term is P_2 = \pi. The total period is the smallest common multiple of these two periods, which for this case is P = \pi.
The orthogonality property given in Eqn. 5.34 can be exploited for spectral analysis. Suppose we have measured a signal x(t) that is made up of a sine wave with a particular, but unknown, frequency, and we wish to establish this unknown frequency using spectral analysis. All we need to do is to multiply x(t) by a trial sine wave, say sin 5t, integrate over one period, and look at the result. If the integral is close to zero, then our original signal, x(t), had no sin 5t term; if however we obtained a significant non-zero integral, then we would conclude that the measured signal had a sin 5t term somewhere. Note that to obtain a complete frequency description of the signal, we must try all possible frequencies from 0 to ∞.
So in summary, the idea is to multiply the signal x(t) with different sine waves containing the
frequency of interest, integrate, and then see if the result is non-zero. We could plot the integral
result against the frequency thus producing a spectral plot of x(t).
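A minimal MATLAB sketch of this multiply-and-integrate idea (the hidden signal, the trial frequency grid and the use of trapz are illustrative assumptions):

t = (0:0.001:2*pi)';                   % time vector spanning one fundamental period
x = sin(5*t);                          % measured signal with an unknown frequency
w = 1:0.5:10;                          % trial frequencies to test [rad/s]
I = zeros(size(w));
for k = 1:length(w)
    I(k) = trapz(t, x.*sin(w(k)*t));   % integrate x(t) times each trial sine
end
plot(w, I)                             % a large value near w = 5 reveals the frequency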
5.4.3 THE fft FUNCTION
M ATLAB provides a convenient way to analyse the frequency component of a finite sampled
signal by using the built-in fft function. This is termed spectral analysis and is at its simplest
just a plot of the magnitude of the Fourier transform of the time series, or abs(fft(x)). The
complexities are due to the scaling of the plot, possible windowing to improve the frequency resolution at the expense of the magnitude value (or vice versa), and possible adjustment for non-periodic signals.
The S IGNAL P ROCESSING toolbox includes a command psd which is an improved version of the
example given in Technical Support Note # 1702 available from
www.mathworks.com/support/tech-notes/1700/1702.html.
Other functions to aid
the spectral analysis of systems are spectrum and the associated specplot, and etfe (empirical transfer function estimate) from the S YSTEM I DENTIFICATION toolbox. All of these routines
perform basically the same task.
To look at how we can perform a spectral decomposition, let us start with a simple example where we know the answer, namely

x(t) = 4.5 \sin(2\pi \cdot 200\, t)

It is immediately apparent that at a frequency of 200 Hz we should observe a single Fourier coefficient with a magnitude of 4.5, while all others are zero. Our aim is to verify this using MATLAB's fft routine.
The function spsd (scaled PSD) given in Listing 5.9 will use the Fourier transform to decompose the signal into different frequencies and display this information. This function was adapted from the version given in Technical Support Note #1702 available at the MathWorks ftp server, and further details are given in the note. Since we have assumed that the input signal is real, we know that the Fourier transform will be conjugate symmetrical about the Nyquist frequency, thus we can throw the upper half away. Doing this, we must multiply the magnitude by 2 to maintain the correct size. MATLAB also normalises the transform by dividing by the number of data points, so we must also account for this in the magnitude. Since the FFT works much faster when the series is a power of two, we automatically pad with zeros up to the next power of 2. In the example following, I generate 1000 samples, so 24 zeros are added to make up the difference to 2^10 = 1024.
Listing 5.9: Routine to compute the power spectral density plot of a time series

function [MX,f] = spsd(x,Ts)          % scaled power spectral density of x
Fs = 1/Ts; Fn = Fs/2;                 % sampling & Nyquist frequencies
NFFT = 2^nextpow2(length(x));         % pad to the next power of 2
FFTX = fft(x,NFFT);
NumUniquePts = ceil((NFFT+1)/2);
MX = 2*abs(FFTX(1:NumUniquePts));     % keep lower half & double the magnitude ...
MX(1)=MX(1)/2;                        % ... except for the DC component
MX(length(MX))=MX(length(MX))/2;      % ... and the Nyquist component
MX=MX/length(x);                      % normalise by the # of data points
f=(0:NumUniquePts-1)*2*Fn/NFFT;       % frequency vector [Hz]
plot(f,MX); xlabel('frequency [Hz]')
We are now ready to try our test signal to verify spsd.m given in Listing 5.9.

Fs = 1000;                   % Sampling frequency, Fs, in Hz
t = 0:1/Fs:1;                % Time vector sampled at Fs for 1 second
x = 4.5*sin(2*pi*t*200)';    % scaled sine wave
spsd(x,1/Fs)
[Figure: scaled power spectral density of the test signal showing a single peak of height 4.5 at 200 Hz; frequency axis 0 to 500 Hz.]
We should expect to see a single distinct spike at f = 200Hz with a height of 4.5.
With the power spectrum density function spsd, we can also analyse the frequency components of a square wave. We can generate a square wave using the square function in a manner similar to that of generating a sine wave, or simply wrap the signum function around a sine wave, as sketched below.
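A minimal sketch of this square wave test (the 1 Hz fundamental frequency and the 4 second record length are illustrative assumptions):

Fs = 1000; t = (0:1/Fs:4)';    % 4 seconds sampled at 1 kHz
xsq = sign(sin(2*pi*1*t));     % square wave with a 1 Hz fundamental
spsd(xsq,1/Fs)                 % expect peaks at the odd harmonics 1, 3, 5, ... Hz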
[Figure: spectral density of the square wave showing dominant peaks at the odd harmonic frequencies; frequency axis 0 to 15 Hz.]
Note that all the dominant frequency components appear at the odd frequencies (1, 3, 5, 7, . . . ), and that the frequency components at the even frequencies are suppressed, as anticipated from §5.4.1. Of course they theoretically should be zero, but the discretisation, finite signal length, sampling and so forth introduce small errors here.
5.4.4 PERIODOGRAM
There exists a parallel universe to the time domain called the frequency domain, and all the operations we can do in one domain can be performed in the other. Our gateway to this frequency domain, as previously noted, is the Fast Fourier Transform, known colloquially as the FFT. Instead of plotting the signal in time, we could plot the frequency spectrum of the signal. That is, we plot the power of the signal as a function of frequency. The power of the signal y(t) over a frequency band is defined as the absolute square of the Fourier transform;

\Phi_y(\omega) \stackrel{\text{def}}{=} |Y(\omega)|^2 = \left| \sum_t y(t)\, e^{-i\omega t} \right|^2
When we plot the power as a function of frequency, it is much like a histogram where each frequency bin contains some power. This plot is sometimes referred to as a periodogram.

Constructing the periodogram for real data can give rather messy, inconclusive and widely fluctuating results. For a data series, you may find that different time segments of the series give different looking spectra even when you suspect that the true underlying frequency characteristics are unchanged. One way around this is to compute the spectra or periodogram for different segments and average the results, or you could use a windowing function. The spa and etfe functions in the SYSTEM IDENTIFICATION toolbox attempt to construct periodograms from logged data.
Problem 5.2 Fig. 5.25 trends the monthly critical radio frequencies in Washington, D.C. from May
1934 until April 1954. These frequencies reflect the highest radio frequency that can be used for
broadcasting.
[Figure 5.25: Monthly critical radio frequencies, Washington D.C., 1935 to 1955; vertical axis: critical frequency, horizontal axis: year.]

This data is available from the collection of classic time series data archived in: http://www-personal.buseco.mon
(See also Problem 6.3 for further examples using data from this collection.)
1. Find the dominant periods (in days) and amplitudes of this series.
2. Can you hypothesise any natural phenomena that could account for this periodicity?
5.4.5 FOURIER SMOOTHING
It is very easy to smooth data in the Fourier domain. You simply take all your data, FFT it,
multiply this transformed data by the filter function H(f ), and then do the inverse FFT (IFFT)
to get back into the time domain. The computationally difficult parts are the FFT and IFFT, but
this is easily achieved in M ATLAB or in custom integrated circuits. Note that this section is called
smoothing as opposed to online filtering because these smoothing filters are acausal since future
data is required to smooth present data as well as past data. Filters that only operate on past
data are referred to as causal filters. Acausal filters tend to perform better than causal filters as
they exhibit less phase lag but they cannot be used in real time applications. In process control,
physical realisation constraints mean that only causal filters are possible. However in the audio
world (compact disc and digital tapes), acausal filters are used since a time delay is not noticeable
and hence allowed. Surprisingly time delays are allowed on telephone communications as well.
Numerical Recipes [161] has a good discussion of the advantages of Fourier smoothing.
Let us smooth the measurement data given in §5.3.1 and compare the smoothed signal with the online filtered signal. This time however we will deliberately generate 2^10 = 1024 data pairs rather than an even thousand so that we can take advantage of the fast Fourier transform algorithm. It is still possible to evaluate the Fourier transform of a series with a non-power-of-2 number of pairs, but the calculation is much slower so its use is discouraged.
Again we will generate a measurement signal as

y = \sin(2\pi f_1 t) + \frac{1}{10} \sin(2\pi f_2 t)

where f₁ = 2 Hz and f₂ = 50 Hz as before. We will generate 2^10 data pairs at a sampling frequency of 200 Hz.
t = 1/200*[0:pow2(10)-1]';                % 2^10 data pairs sampled at 200 Hz
y = sin(2*pi*2*t) + 0.1*sin(2*pi*50*t);   % Signal: y = sin(2*pi*f1*t) + (1/10)sin(2*pi*f2*t)
Generating the Fourier transform of the data is easy. We can plot the amount of power in each frequency bin, Y(f), by typing

yp = fft(y);
plot(abs(yp).^2)       % See periodogram in Fig. 5.26
semilogy(abs(yp).^2)   % clearer plot
In Fig. 5.26, we immediately note that the plot is mirrored about the central Nyquist frequency
meaning that we can ignore the right-hand side of the plot. The x-axis frequency scale is normalised, but you can easily detect the two peaks corresponding to the two dominant frequencies
of 2 and 50 Hz. Since we are trying to eradicate the higher frequency, what we would desire in
the power spectrum is only one spike at a frequency of f = 2 Hz. We can generate a power curve
of this nature, by multiplying it with the appropriate filter function H(f ).
[Figure 5.26: Periodogram of y(t), mirrored about the Nyquist frequency, together with the ideal low-pass filter H(f) and a zoomed view of the filtered spectrum; vertical axis: power, horizontal axis: frequency.]
Let us generate the filter H(f) in the frequency domain. We want to pass all frequencies less than say 30 Hz, and attenuate all frequencies above 30 Hz. We do this by constructing a vector where 1 corresponds to the frequencies we wish to retain, and zero for those we do not want. Then we elementwise multiply the two frequency series, which is equivalent to convolving the filter with the data in the time domain.
n = 20;
h = [ones(1,n) zeros(1,length(y)-2*n) ones(1,n)]';  % Filter H(f)
ypf = yp.*h;                                        % multiply signal & filter, Y(f)H(f)
plot([abs(yp), h, abs(ypf)])
Now we can take the inverse FFT to transform back to the time domain and obtain our low-pass filtered signal. Technically the inverse Fourier transform of the Fourier transform of a real signal should also return a real time series of numbers. However owing to numerical roundoff, the computed series may contain a small complex component. In practice we can safely ignore these, and force the signal to be real, as sketched below.
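A short sketch of this final step (variable names follow the listing above):

yf = real(ifft(ypf));   % inverse FFT; discard the tiny imaginary residue
plot(t,[y, yf])         % compare original & smoothed signals, see Fig. 5.27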
[Figure 5.27: The original noisy signal and the Fourier-smoothed output over 0 to 0.8 seconds.]
The result shown in Fig. 5.27 is an acceptably smooth signal without the disturbing phase lag that was unavoidable in the online filtering example.
Problem 5.3 Design a notch filter to band-stop 50 Hz. Test your filter using an input signal with a time varying frequency content such as u(t) = sin(16t⁴). Plot both the original and filtered signal together. Sample around 1000 data pairs using a sampling frequency of 1000 Hz. Also plot the apparent frequency as a function of time for u(t).
5.5 NUMERICAL DIFFERENTIATION
In industrial applications, we often want to know the slope of a measured process variable. For example, we may wish to know the flowrate of material into a vessel, when we only measure the level of the vessel. In this case, we differentiate the level signal to calculate the flowrate. However for industrial signals, which typically have significant noise, calculating an accurate derivative is not trivial. A crude technique such as differencing will give intolerable error. For these types of online applications, we must first smooth the data, then differentiate the smoothed data. This can be accomplished using discrete filters.
5.5.1 ESTABLISHING FEEDRATES
This case study demonstrates one way to differentiate noisy data. For this project, we were trying to implement a model based estimation scheme on a 1000 litre fed-batch chemical reactor. For our model, we needed to know the feedrate of the material being fed to the reactor. However we did not have a flow meter on the feed line, but instead we could measure the weight of the reactor using load cells. As the 3 hour batch progressed, feed was admitted to the reactor, and the weight increased from about 800 kg to 1200 kg. By differentiating this weight, we could at any time approximate the instantaneous feed rate. This is not an optimal arrangement, since we expect considerable error, but in this case it was all that we could do.

The weight measurement of the reactor is quite noisy. The entire reactor is mounted on load cells, and all the lines to and from the reactor (such as cooling/heating etc.) are connected with flexible couplings. Since the reactor was very exothermic, the feed rate was very small (about 30 g/s) and this is almost negligible compared to the combined weight of the vessel (about 3000 kg) and contents (about 1000 kg). The raw weight signal (dotted) of the vessel over the batch is shown in the upper figures in Fig. 5.29. An enlargement is given in the upper right plot. The sample time for this application was T = 6 seconds.
METHOD OF DIFFERENTIATING
The easiest (and crudest) method to establish the feedrate is to difference the weight signal. Suppose the sampled weight signal is W(t); then the feedrate, \dot{W} = F, is approximated by

F = \dot{W} \approx \frac{W_i - W_{i-1}}{T}

But due to signal discretisation and noise, this results in an unusable mess.³ The differencing can be achieved elegantly in MATLAB using the diff command, as sketched below. Clearly we must first smooth the signal before differencing.
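A minimal sketch of the crude differencing using a synthetic weight signal (the numbers are illustrative assumptions loosely based on the case study above):

T = 6;                                   % sample time [s]
t = (0:1799)'*T;                         % a 3 hour batch
W = 800 + 0.03*t + 0.5*randn(size(t));   % synthetic noisy weight signal [kg]
F = diff(W)/T;                           % crude feedrate estimate [kg/s]
plot(t(2:end)/3600, F)                   % unusably noisy compared to the true 0.03 kg/s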
USING A DISCRETE FILTER

This application is not just smoothing, but also differentiating. We can do this in one step by combining a low-pass filter, G_{lp}(s), with a differentiator, s, as

G_d(s) = G_{lp}(s) \cdot s        (5.36)

We can design any low-pass filter, (say a 3rd order Butterworth filter using the butter command), then multiply the filter polynomials by the differentiator. In the continuous domain, a differentiator is simply s, but in the discrete domain we have a number of choices how to implement this. Ogata, [150, pp 308] gives these alternatives;
Ogata, [150, pp 308] gives these alternatives;
1z 1
Backward difference
T 1
1z
(5.37)
s
Forward difference
z 1 T
1 z 1
T
(5.38)
with sample time T . Thus the denominator is multiplied by T , and the numerator gets convolved
with the factor (1 z 1 ). We need to exercise some care in doing the convolution, because the
polynomials given by butter are written in decreasing powers of z (not z 1 ). Note however that
fliplr(conv(fliplr(B),[-1,1])); is equivalent to conv(B,[1,-1]);. Listing 5.10 tests
this smoothing differentiator algorithm.
Listing 5.10: Smoothing and differentiating a noisy signal
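A minimal sketch of such a smoothing differentiator, assuming a 3rd-order Butterworth low-pass design, a backward-difference differentiator, and a synthetic noisy sinusoid as the test signal:

T = 0.1;                                   % sample time [s] (assumed)
t = (0:T:20)';
y = 0.2*sin(0.5*t) + 0.005*randn(size(t)); % noisy measurement to differentiate
[B,A] = butter(3, 0.1);                    % 3rd-order low-pass, cut-off 0.1 of Nyquist
Bd = conv(B,[1 -1]);                       % numerator convolved with (1 - z^{-1})
Ad = A*T;                                  % denominator multiplied by T
dy  = filter(Bd, Ad, y);                   % smoothed derivative estimate
dyc = [0; diff(y)]/T;                      % crude differenced derivative for comparison
plot(t, [dyc, dy, 0.1*cos(0.5*t)])         % cf. Fig. 5.28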
The results of the smoothing and the differentiating of the noisy signal are given in Fig. 5.28. The upper plot compares the noisy measurement with the smoothed version exhibiting the inevitable phase lag. The lower plot in Fig. 5.28 shows the crude approximation of the derivative obtained by differencing (with little phase lag, but substantial noise), and the smoothed and differentiated version.
Figure 5.28: Smoothing (upper) and differentiating (lower) a noisy measurement.
Note that even though the noise on the raw measurement is insignificant, the finite differencing amplifies the noise considerably. While the degree of smoothing can be increased by decreasing the low-pass cut-off frequency, ω_c, we pay the price of increasing phase lag.
Fig. 5.29 shows actual industrial results using this algorithm. The right hand plots show an enlarged portion of the signal. The upper left plot shows the raw (dotted) and smoothed (solid) weight signal. The smoothed signal was obtained from the normal unity-gain Butterworth filter with a cut-off frequency f_c = 0.02 \cdot \frac{1}{2T} = \frac{1}{600} s^{-1}. There is a short transient at the beginning, when the filter is first invoked, since I made no attempt to correct for the non-zero initial conditions. The lower two plots show the differentiated signal, in this case feed rate, using the digital filter and differentiator. Note that since the filter is causal, there will be a phase lag, and this is evident in the filtered signal given in the top right plot. Naturally the differentiated signal will also lag behind the true feedrate. In this construction, the feed rate is occasionally negative, corresponding to a decrease in weight or a filter transient. This is physically impossible, and the derived feedrate should be chopped at zero if it were to be used further in a model based control scheme, for example.
5.6 SUMMARY
Filtering, or better still smoothing, is one way to remove unwanted noise from your data. Smoothing is best done offline if possible. The easiest way to smooth data is to decide which frequencies you wish to retain as the signal, and which frequencies you wish to discard. One then constructs a filter function, H(f), which is multiplied with the signal in the frequency domain to create the spectrum of the smoothed signal. To get the filtered time signal, you transform back to the time domain. One uses the Fourier transform to convert between the time and frequency domains.

If you cannot afford the luxury of offline filtering, then one must use a causal filter. IIR filters perform better than FIRs, but both have phase lags at high frequencies and rounded edges that are deleterious to the filtered signal. The Butterworth filter is better than simple cascaded first-order filters: it has only one parameter (the cut-off frequency), is relatively simple to design, and suits most applications. Other more complicated filters such as Chebyshev or elliptic filters, with correspondingly more design parameters, are easily implemented using the signal processing toolbox in MATLAB if desired. Usually for process engineering applications we are most interested in low-pass filters, but in some applications where we may wish to remove slow long term trends (adjustment for seasons, which is common in reporting unemployment figures for example), or differentiate, or remove a specific frequency such as the 50 Hz mains hum, then high-pass or notch filters are to be used. These can be designed in exactly the same way as a low-pass filter, but with a variable transformation.

Figure 5.29: Industrial data showing (upper) the raw & filtered weight of a fed-batch reactor vessel [kg] during a production run, and (lower) the feedrate [kg/s].
CHAPTER 6
IDENTIFICATION OF PROCESS MODELS
Fiedler's forecasting rules:
Forecasting is very difficult, especially if it's about the future. Consequently when presenting a forecast:
Give them a number or give them a date, but never both.
6.1
Most advanced control schemes require a good dynamic process model to be effective. One option
is to set the control law to be simply the inverse of the plant so when the inevitable disturbances
occur, the controller applies an equal but opposite input to maintain a constant output. In practice
this desire must be relaxed somewhat to be physically realisable, but the general idea of inserting
a dynamic element (controller) into the feedback loop to obtain suitable closed loop dynamics
necessitates a process model.
When we talk about models, we mean some formal way to predict the system outputs knowing the input. Fig. 6.1 shows the three components of interest: the input, the plant, and the output. If we know any two components, we can calculate the third.
[Figure 6.1: The three components of interest: the input u, the model, and the output y. Does this match with reality?]

The simulation problem is: given the input and the plant, compute the output. The estimation problem is: given the output and plant, one must propose a plausible input to explain the observed results. And the identification problem, the one we are concerned with in this chapter, is to propose a suitable model for the unknown plant, given input and output data.
Our aim for automatic control problems is to produce a usable model that, given typical plant input data, will predict within a reasonable tolerance the consequences. We should however remember the advice of the early English theologian and philosopher, William of Occam (1285–1349), who said: "What can be accounted for by fewer assumptions is explained in vain by more." This is now known as the principle of Occam's razor, where with a razor we are encouraged to cut away from our model all that is not strictly necessary. This motivates us to seek simple, low order, low complexity, succinct, elegant models.
For the purposes of controller design we are interested in dynamic models as opposed to simply
curve fitting algebraic models. In most applications, we collect discrete data, and employ digital
controllers, so we are only really interested in discrete dynamic models.
Sections 3.3 and 3.3.3 revise traditional curve and model fitting such as what you would use in algebraic model fitting. Section 6.7 extends this to fit models to dynamic data.
Good texts for system identification include the classic [34] for time series analysis, and [190] and [125] accompanied by the closely related MATLAB toolbox for system identification, [126]. For identification applications in the processing industries, [129] contains the basics for modelling and estimation suitable for those with a chemical engineering background. A good collection is found in [149], and a now somewhat dated detailed survey of applications, including some interesting practical hints and observations with over 100 references, is given in [79].
6.1.1 BASIC DEFINITIONS
When we formulate and use engineering models, we typically have in mind a real physical process or plant, that is perhaps so complex, large, or just downright inconvenient, that we want to
use a model instead. We must be careful at all times to distinguish between the plant, model, data
and predictions. The following terms are adapted from [190].
System, S This is the system that we are trying to identify. It is the physical process or plant that
generates the experimental data. In reality this would be the actual chemical reactor, plane,
submarine or whatever. In simulation studies, we will refer to this as the truth model.
Model, M This is the model that we wish to find such that the model predictions generated by M are sufficiently close to the actual observed system S output. Sometimes the model may be non-parametric, where it is characterised by a curve or a Bode plot for example, or the model might be characterised by a finite vector of numbers or parameters, M(θ), such as say the coefficients of a polynomial transfer function or weights in a neural network model. In either case, we can think of this as a black-box model where if we feed in the inputs and turn the handle, we will get output predictions that hopefully match the true system outputs.

Parameters, θ. These are the unknowns in the model M that we are trying to establish such that our output predictions ŷ are as close as possible to the actual output y. In a transfer function model, the parameters are the coefficients of the polynomials; in a neural network, they would be the weights of the neurons.

This then brings us to the fundamental aim of system identification: given sufficient input/output data, and a tentative model structure M with free parameters θ, we want to find reasonable values for θ such that our predictions ŷ closely follow the behaviour of our plant under test, y, as shown in Fig. 6.2.
Figure 6.2: A good model, M, duplicates the behaviour of the true plant, S: both are driven by the same input u, and the error between their outputs is hopefully small.
Models can be classified as linear or nonlinear. In the context of parameter identification, which is the theme of this chapter, it is assumed that the linearity (or lack of it) is with respect to the parameters, and not to the shape of the actual model output. A linear model does not mean just a straight line, but includes all models where the model parameters, θ, enter linearly. If we have the general model

y = f(\theta, x)

with parameters θ and independent variables x, then if ∂y/∂θ is a constant vector independent of θ, the model is linear in the parameters; otherwise the model is termed nonlinear. For example, y = θ₁ + θ₂x² is linear in the parameters, while y = e^{−θ₁x} is not. Linear models can be written in vector form as

y = \theta^T f(x)

Just as linear models are easier to use than nonlinear models, so too are discrete models compared with continuous models.
6.1.2 BLACK, WHITE AND GREY-BOX MODELS
The two extreme approaches to modelling are sometimes termed black and white box models.
The black box, or data-based model, is built exclusively from experimental data, and a deliberate boundary is cast around the model, the underlying contents of which we neither know nor care about. The Wood-Berry column model described in Eqn. 3.24 is a black box model and the
parameters were obtained using the technique of system identification. The white box, theoretical
or first-principles model, preferred by purists, contains already well established physical and
chemical relations ideally so fundamental, no experiments are needed to characterise any free
parameters.
However most industrial processes are such that the models are often partly empirical and partly fundamental. This intermediate stage lying between black and white is known as a grey-box model. Ljung in [128] somewhat whimsically distinguishes between palettes of grey. Smoke-grey models are those where we may try to find a suitable nonlinear transformation such that the model is better described by a linear model. Steel-grey models are those where we may locally linearise about an operating point, while slate-grey models are those where a hybrid of models is suitable.
An example of this type of model is the Van der Waals gas law or correlations for the friction factor in turbulent flow. Finally if the process is so complex that no fundamental model is possible, then a black box type model may be fitted. Particularly hopeless models, based mostly on whim and what functions happened to be lying around at the time, can be found in [77]. These are the types of models that should never have been regressed.
Of course many people have argued for many years about the relative merits of these two extreme approaches. As expected, the practising engineer tends to have the most success somewhere in the middle with what, not unexpectedly, are termed grey-box models. The survey in [186] describes the advantages and disadvantages of these colour-coded models in further detail.
6.1.3 TECHNIQUES FOR IDENTIFICATION
There are two main ways to identify process models: offline, or online in real-time. In both cases however, it is unavoidable to collect response data from the system of interest as shown in Fig. 6.3.
Most of the more sophisticated model identification is performed in the offline mode where all
the data is collected first and only analysed later to produce a valid model. This is typically
convenient since before any regressing is performed we know all the data, and we can do this in
comfort in front of our own computer workstation. The offline developed model (hopefully) fits
the data at the time that it was collected, but there is no guarantee that it will be still applicable for
data collected sometime in the future. Process or environmental conditions may change which
may force the process to behave unexpectedly.
Figure 6.3: An experimental setup for input/output identification. We log both the input and the response data to a computer for further processing.
The second way to identify models is to start the model regression analysis while you are collecting the data in real-time. Clearly this is not a preferred method when compared with offline identification, but in some cases, which we will study later, such as in adaptive control, or where we have little input/output data at our immediate disposal but are gradually collecting more, we must use on-line identification. As you collect more data, you continually update and correct the model parameters that were established previously. This is necessary when we require a model during the experiment, but the process conditions are likely to change during the period of interest. We will see in chapter 7 that online least-squares recursive parameter estimation is the central core of adaptive control schemes.
6.2
Dynamic model identification is essentially curve fitting as described in section 3.3 but for dynamic systems. For model based control, (our final aim in this section), it is more useful to have
parametric dynamic models. These are models which have distinct parameters to be estimated in
243
a specified structure. Conversely, non-parametric models are ones where the model is described by a continuous curve, such as say a Bode or Nyquist plot, or are collections of models where the model structure has not been determined beforehand, but is established from the data in addition to establishing the parameters.
In both parametric and non-parametric cases we can
make predictions, which is a necessary requirement of
any model, but it is far more convenient in a digital computer to use models based on a finite number of parameters rather than resort to some sort of table lookup or
digitise a non-parametric curve.
How one identifies the parameters in proposed models
from experimental data is mostly a matter of preference.
There are two main environments in which to identify
models:
1. Time domain analysis described in §6.2.1, and
2. frequency domain analysis described in §6.2.2.
6.2.1 TIME DOMAIN IDENTIFICATION
Many engineers tend to be more comfortable in the time domain, and with the introduction of computers, identification in the time domain, while tedious, is now reliable and quite straightforward. The underlying idea is to perturb the process somehow, and watch what happens. Knowing both the input and output signals, one can in principle compute the process transfer function. The principal experimental design decision is the type of exciting input signal.
STEP AND IMPULSE INPUTS
Historically, the most popular perturbation test for visualisation and manual identification was
the step test. The input step test itself is very simple, and the major response types such as first
order, underdamped second order, integrators and so forth are all easily recognisable from their
step tests by the plant personnel as illustrated in Fig. 6.4.
Probably the most common model structure for industrial plants is the first-order plus deadtime model, (FOPDT),

G_p(s) = \frac{K e^{-\theta s}}{\tau s + 1}        (6.1)

where we are to extract reasonable values for the three parameters: plant gain K, time constant τ, and deadtime θ. There are a number of strategies to identify these parameters, but one of the more robust is called the Method of Areas, [120, pp32–33].
Algorithm 6.1 FOPDT identification from a step response using the Method of Areas.
Collect some step response data (u, y) from the plant, then
[Figure 6.4: Typical open loop step tests for a variety of different standard plants: a simple first order 1/(s+1), a first order with deadtime e^{-4s}/(s+1), an underdamped second order 1/(s^2+0.8s+1), a third order 1/(6s^3+11s^2+6s+1), a pure integrator 1/s, and an inverse response plant (-6s+2)/(3s^2+4s+1).]
1. Identify the plant gain,

K = \frac{y(\infty) - y(0)}{u(\infty) - u(0)}

and form the scaled step response

y_{us}(t) = \frac{y(t) - y(0)}{u(\infty) - u(0)}

2. Compute the area

A_0 = \int_0^{\infty} \left( K - y_{us}(t) \right) dt        (6.2)

3. With t_1 = A_0/K, compute the area

A_1 = \int_0^{t_1} y_{us}(t)\, dt        (6.3)

4. The time constant and deadtime are then

\tau = \frac{e A_1}{K} \quad \text{and} \quad \theta = \frac{A_0 - e A_1}{K}        (6.4)

where e = exp(1). Note that we must constrain that the deadtime is positive for causal plants.
Listing 6.1 gives a simple MATLAB routine to do this calculation and Fig. 6.5 illustrates the integrals to be computed. Examples of the identification are given in Fig. 6.6 for a variety of simulated plants (1st and 2nd order, and an integrator) with different levels of measurement noise superimposed. Fig. 6.7 shows the experimental results from a step test applied to the blackbox filter with the identified model.
Figure 6.5: Areas method for a first-order plus time delay model identification, showing the areas A0 and A1.
Listing 6.1: Identification of a first-order plant with deadtime from an openloop step response using the Areas method from Algorithm 6.1.

Gp = tf(3, [7 2 1],'iodelay',2);  % True plant: Gp(s) = 3e^{-2s}/(7s^2 + 2s + 1)
[y,t] = step(Gp);                 % Do step experiment to generate data, see Fig. 6.5.
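A sketch completing the Areas calculation of Algorithm 6.1 on this step data (the variable names and the use of trapz are assumptions of this example):

u0 = 0; uinf = 1;                      % unit step input assumed by step()
K = (y(end)-y(1))/(uinf-u0);           % plant gain
yus = (y-y(1))/(uinf-u0);              % scaled step response
A0 = trapz(t, K - yus);                % area above the response, Eqn. 6.2
t1 = A0/K;                             % integration limit for A1
idx = t <= t1;
A1 = trapz(t(idx), yus(idx));          % area under the response, Eqn. 6.3
tau = exp(1)*A1/K;                     % time constant, Eqn. 6.4
thetad = max(0,(A0-exp(1)*A1)/K);      % deadtime, constrained non-negative
Gm = tf(K,[tau 1],'iodelay',thetad)    % identified FOPDT model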
[Figure 6.6: Method of Areas identification (plant vs fitted model) for a variety of simulated plants with different levels of measurement noise.]
Of course not all plants are adequately modelled using a first-order plus deadtime model. For
example, Fig. 6.8 shows the experimental results from a step test applied to the electromagnetic
balance arm which is clearly second order with poles very close to the imaginary axis.
246
600
Output
400
200
Blackbox
Model
1000
500
0
0
10
15
time
20
25
3000
Output
2000
1000
0
1000
Figure 6.8: Two step tests subjected to the balance arm. Note the slow decay of the oscillations indicate that the plant has at least two
complex conjugate poles very close to the stable half of the imaginary axis.
Input
2000
1000
0
0
10
time
15
However many plant operators do not appreciate the control engineer attempting to step test the plant under open loop conditions, since it necessarily causes periods of off-specification product at best and an unstable plant at worst. An alternative to the step test is the impulse test where the plant will also produce off-spec product for a limited period,¹ but at least it returns to the same operating point, even when using limited precision equipment. This is important if the plant exhibits severe nonlinearities and the choice of operating point is crucial. The impulse response is ideal for frequency domain testing since it contains equal amounts of stimulation at all frequencies, but is difficult to approximate in practice. Given that we must then accept a finite height pulse with finite slope, we may be limited as to the amount of energy we can inject into the system without hitting a limiting nonlinearity. The impulse test is also more difficult than the step test to analyse using manual graphical techniques (pencil and graph paper), as opposed to computerised regression techniques.
RANDOM INPUTS
An alternative to the step or impulse tests is actually not giving any deliberate input to the plant at all, but just relying on the ever-present natural input disturbances. This is a popular approach when one still wants to perform an identification, but cannot, or is unable to, influence the system themselves. Economists, astronomers and historians fall into this group of armchair-sitting, non-interventionist identificationalists. This is sometimes referred to as time series analysis (TSA), see [34] and [56].
Typically for this scheme to work in practice, one must additionally consciously perturb the input
(control valve, heating coils etc) in some random manner, since the natural disturbance may not
normally be sufficient in either magnitude, or even frequency content by itself. In addition, it is
more complex to check the quality of model solution from comparison with the raw time series
analysis. The plant model analysed could be anything, and from a visual inspection of the raw
data it is difficult to verify that the solution is a good one. The computation required for this type
of analysis is more complex than graphically curve fitting and the numerical algorithms used for
time series analysis are much more sensitive to the initial parameters. However using random
inputs has the advantage that they continually excite or stimulate the process and this excitation,
provided the random sequence is properly chosen, stimulates the process over a wide frequency
range. For complicated models, or for low signal to noise ratios, lengthy experiments may be
necessary, which is infeasible using one-off step or impulse tests. This property of continual
stimulation is referred to as persistent excitation and is a property of the input signal, and should be considered in conjunction with the proposed model. See [19, §2.4] for further details.
Two obvious choices for random sequences are selecting the variable from a finite width uniform
distribution, or from a normal distribution. In practice, when using a normal distribution, one
must be careful not to inject values far from the desired mean input signal, since this could cause
additional unwanted process upsets.
RANDOM BINARY SIGNALS
Electrical engineers have traditionally used random signals for testing various components where
the signal is restricted to one of two discrete signal levels. Such a random binary signal is called an
RBS. A binary signal is sometimes preferred to the normal random signal, since one avoids possible outliers, and dead zone nonlinearities when using signals with very small amplitudes. We can
generate an RBS in M ATLAB using the random number generator and the signum function, sign,
as suggested in [125, p373]. Using the signum function could be dangerous since technically it is possible to have three values, −1, 1 and zero. To get different power spectra of our binary test signal, we could filter the white noise before we sign it.

x = randn(100,1);    % white noise
rbsx = 0.0 < x;      % binary signal
Compare in Fig. 6.9 the white noise (left plots) with the filtered white noise through a low-pass filter (solid). The filter used in Fig. 6.9 is 0.2/(1 − 0.8q^{-1}).
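A sketch of how such a coloured binary signal might be generated (the filter is the one quoted above):

x = randn(100,1);                 % white noise
xc = filter(0.2,[1 -0.8],x);      % coloured noise through 0.2/(1 - 0.8q^{-1})
rbsc = 0.0 < xc;                  % coloured binary signal
stairs([0.0 < x, rbsc])           % compare white & coloured binary sequences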
If you fabricate in hardware one of these random signal generators using standard components
such as flip-flops or XOR gates, rather than use a process control computer, then to minimise
component cost, you want to reduce the number of these physical components. This is done in
hardware using a linear feedback shift register (LFSR), which is an efficient approximation to our
248
Figure 6.9: Comparing white and filtered (coloured) random signals. Upper: Normal signals, lower: Two-level or binary signals.
Figure 6.10: A 5-element binary shift register to generate a pseudo-random binary sequence.
(almost random) random number generator. Fig. 6.10 shows the design of one version which uses
a 5 element binary shift register. Initially the register is filled with all ones, and at each clock cycle,
the elements all shift one position to the right, and the new element is formed by adding the 3rd
and 5th elements modulus 2.
Since we have only 5 positions in the shift register in Fig. 6.10, the maximum length the sequence can possibly be before it starts repeating is 2^5 − 1 or 31 elements. Fig. 6.11 shows a SIMULINK implementation and the resulting binary sequence which repeats every 31 samples. A direct MATLAB sketch of the same register is given below.
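A minimal MATLAB sketch of the 5-element shift register of Fig. 6.10 (taking the last register element as the output is an assumption from the diagram):

r = ones(1,5);                  % register initially filled with ones
N = 70; out = zeros(N,1);
for k = 1:N
    newbit = mod(r(3)+r(5),2);  % add 3rd & 5th elements modulus 2
    out(k) = r(5);              % output is the last register element
    r = [newbit, r(1:4)];       % shift everything one position right
end
stairs(out)                     % the sequence repeats with period 2^5-1 = 31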
Random number generators such as Fig. 6.10 are completely deterministic, so are called pseudo-random binary generators, since in principle anyone can regenerate the exact sequence given the starting point and the construction in Fig. 6.10. These pseudo-random binary sequences (PRBS) can be used as alternatives to our random sequences for identification. These and other commonly used test and excitation signals are compared and discussed in [203, p143].
Figure 6.11: SIMULINK simulation of a 5-element binary shift register (upper) and resulting output (lower). Note the period of 31 samples. Refer also to Fig. 6.10.
6.2.2 EXPERIMENTAL FREQUENCY RESPONSE ANALYSIS
When first faced with an unknown plant, it is often a good idea to perform a non-parametric model identification such as a frequency response analysis. Such information, quantified perhaps as a Bode diagram, can then be used for subsequent controller design, although it is not suitable for direct simulation. For that we would require a transfer function or equivalent, but the frequency response analysis will aid us in the choice of appropriate model structure if we were to further regress a parametric model.
The frequency response of an unknown plant G(s) can be calculated by substituting s = iω and, following the definition of the Fourier transform, one gets

G(i\omega) \stackrel{\text{def}}{=} \frac{Y(i\omega)}{X(i\omega)} = \frac{\int_0^{\infty} y(t)\, e^{-i\omega t}\, dt}{\int_0^{\infty} x(t)\, e^{-i\omega t}\, dt}        (6.5)

For every value of frequency, 0 < ω < ∞, we must approximate the integrals in Eqn. 6.5. If we
choose the input to be a perfect Dirac pulse, then the denominator in Eqn. 6.5 will always equal
1. This simplifies the computation, but complicates the practical aspects of the experiment. If we
use arbitrary inputs, we must compute the integrals numerically, although they are in fact simply
the Fourier transform of y(t) and x(t) and can be computed efficiently using the Fast Fourier
Transform or FFT.
We will compare three alternative techniques to compute the frequency response of an actual
experimental laboratory plant, namely:
1. subjecting the plant to a series of sinusoids of different frequencies,
2. subjecting the plant to a single sinusoid with a time-varying frequency component (chirp signal), and
3. subjecting the plant to a pseudo-random input sequence.
A SERIES OF SINE WAVES

The simplest way to experimentally characterise the frequency response of a plant is to input a series of pure sine waves at differing frequencies as shown in the SIMULINK diagram Fig. 6.12 which uses the real-time toolbox to communicate with the experimental blackbox.
Figure 6.12: Using the real-time toolbox in MATLAB to subject sine waves to the blackbox for an experimental frequency analysis.
Fig. 6.13 shows the response of the laboratory blackbox when subjected to 12 sine waves of increasing frequency spanning from ω = 0.2 to 6 radians/second. The output clearly shows a reduction in amplitude at the higher frequencies, but also a small amplification at modest frequencies. By carefully reading the output amplitude and phase lag, we can establish the 12 experimental points given in Table 6.1, suitable say for a Bode diagram.
However, while conceptually simple, the sine wave approach is neither efficient nor practical. Lees and Hougen, [118], while attempting a frequency response characterisation of a steam/water heat exchanger, noted that it was difficult to produce a good quality sine wave input to the plant when using the control valve. In addition, since the range of frequencies spanned over 3.5 orders of magnitude (from 0.001 rad/s to 3 rad/s), the testing time was extensive.
Additional problems of sine-wave testing are that the gain and phase lag are very sensitive to bias, drift and output saturation. In addition it is often difficult in practice to measure phase lag accurately. [143, pp117–118] give further potential problems and some modifications to lessen the effects of nonlinearities. The most important conclusion from the Lees and Hougen study was that they obtained better frequency information, with much less work, using pulse testing and subsequently analysing the input/output data using Fourier transforms.
A CHIRP SIGNAL

If we input a signal with a time varying frequency component such as a chirp signal, u(t) = sin(t²), to an unknown plant, we can obtain an idea of the frequency response with a single, albeit long drawn-out, experiment, unlike the series of experiments required in the pure sinusoidal case.
Figure 6.13: The black box frequency response analysis using a series of sine waves of different frequencies (ω = 0.2 to 6.0 rad/s). This data was subsequently processed and is tabulated in Table 6.1 and plotted in Fig. 6.20.
Table 6.1: Experimentally determined frequency response of the blackbox derived from the laboratory data shown in Fig. 6.13. For a Bode plot of this data, see Fig. 6.20.

Input frequency (rad/s)   Amplitude ratio   Phase lag (degrees)
0.2                       1.19              -4.8
0.3                       1.21              -5.1
0.5                       1.27              -11.1
0.8                       1.42              -20.8
1.0                       1.59              -27.6
1.3                       1.92              -46.7
1.5                       2.08              -64.4
2.0                       1.61              -119.2
2.5                       0.88              -144.3
3.0                       0.54              -156.8
4.0                       0.27              -172.6
6.0                       0.11              -356.1
Fig. 6.14 shows some actual input/output data from the blackbox laboratory plant sampled at T = 0.1 seconds. As the input frequency is increased, the output amplitude decreases as expected for a plant with low-pass filtering characteristics. However the phase lag information is difficult to determine at this scale from the plot given in Fig. 6.14. Our aim is to quantify this amplitude reduction and phase lag as a function of frequency for this experimental plant.
Figure 6.14: Input (lower)/output (upper) data collected from the black box where the input is a chirp signal.
Since the phase information is hard to visualise in Fig. 6.14, we can plot the input and output signals as a phase plot similar to a Lissajous figure. The resulting curve must be an ellipse (since the input and output frequencies are the same), where the ratio of the height to width is the amplitude ratio, and the eccentricity and orientation are a function of the phase lag.
If we plot the start of the phase plot using the same input/output data as Fig. 6.14, we should see
the trajectory of an ellipse gradually rotating around, and growing thinner and thinner. Fig. 6.15
shows both the two-dimensional phase plot and a three dimensional plot with the time axis
added.
Fig. 6.16 shows the same experiment performed on the flapper. This experiment indicates that the flapper has minimal dynamics, since there is little discernible amplitude attenuation and phase lag, but substantial noise. The saturation of the output is also seen in the squared edges of the ellipse in the phase plot, in addition to the flattened sinusoids of the flapper response.
A PSEUDO-RANDOM INPUT
A more efficient experimental technique is to subject the plant to an input signal that contains
a wide spectrum of input frequencies, and then compute the frequency response directly. This
has the advantage that the experimentation is considerably shorter, potentially processes the data
more efficiently and removes some of the human bias.
Figure 6.15: Phase plot of the experimental black box when the input is a chirp signal. See also Fig. 6.14.
Figure 6.16: Flapper response to a chirp signal. There is little evidence of any appreciable dynamics over the range of frequencies investigated.
The experimental setup in Fig. 6.17 shows an unknown plant subjected to a random input. If we
collect enough input/output data pairs, we should be able to estimate the frequency response of
the unknown plant.
Fig. 6.18 compares the magnitude and phase of the ratio of the Fourier transforms of the output and input with the analytically computed Bode plot, given that the unknown plant is in fact G(s) = e^{-2s}/(5s² + s + 6).
The actual numerical computations to produce the Bode plot in Fig. 6.18 are given in Listing 6.2.
While the listing looks complex, the key calculation is the computation of the fast Fourier transform, and the subsequent division of the two vectors of complex numbers. It is important to note
that S IMULINK must deliver equally spaced samples necessary for the subsequent FFT transformation, and that the frequency vector is equally spaced from 0 to half the Nyquist frequency.
[Figure 6.17: SIMULINK setup with a band-limited white noise source driving the unknown plant G.]

[Figure 6.18: The experimentally estimated magnitude |G(iω)| and phase compared with the analytical Bode plot.]

Listing 6.2: Frequency response identification of an unknown plant directly from input/output data

Fs = 1/Ts; Fn = Fs/2;         % sampling & Nyquist frequencies
Giw = fft(y)./fft(u);         % empirical frequency response Y(iw)/U(iw)
w = 2*pi*(0:n-1)*2*Fn/n;      % frequency vector [rad/s]
...
ylim([-720,10])
ylabel('Phase angle [degrees]'); xlabel('frequency \omega [rad/s]');
We can try the same approach to find the frequency response of the blackbox experimentally. Fig. 6.19 shows part of the 2^13 = 8192 samples collected at a sampling rate of 10 Hz (T = 0.1) to be used to construct the Bode diagram.
Figure 6.19: Part of the experimental black box response data series given a pseudo-random input sequence. This data was processed to produce the frequency response given in Fig. 6.20.
The crude frequency response obtained simply by dividing the two Fourier transforms of the
input/output data from Fig. 6.19 is given in Fig. 6.20 which, for comparison, also shows the
frequency results obtained from a series of distinct sine wave inputs and also from using the
M ATLAB toolbox command etfe which will be subsequently explained in section 6.2.3.
Since we have not employed any windows in the FFT analysis, the spectral plot is unduly noisy,
especially at the high frequencies. The 12 amplitude ratio and phase lag data points manually
read from Fig. 6.13 (repeated in Table 6.1) show good agreement across the various methods.
6.2.3 AN ALTERNATIVE EMPIRICAL TRANSFER FUNCTION ESTIMATE
The frequency responses obtained from experimental input/output data shown in Fig. 6.18 and particularly Fig. 6.20 illustrate that the simple approach of numerically dividing the Fourier transforms is susceptible to excessive noise. A better approach is to take more care in the data pre-processing stage by judicious choice of windows. The empirical transfer function estimate routine, etfe, from the SYSTEM IDENTIFICATION toolbox does this and delivers more reliable and consistent results. Compare the result given in Fig. 6.21 from Listing 6.3 with the simpler strategy used to generate Fig. 6.18.
Listing 6.3: Non-parametric frequency response identification using etfe.

Dat = iddata(y,u,Ts);
Gmodel = etfe(Dat);
bode(Gmodel)
Figure 6.20: The frequency response of the black box computed using FFTs (cyan line), the System ID toolbox function etfe (blue line), and the discrete sine wave input data read from Fig. 6.13. The FFT analysis technique is known to give poor results at high frequencies.
[Figure 6.21: The etfe estimate of the magnitude and phase compared with the actual frequency response.]
Problem 6.1
1. Plot the frequency response of an unknown process on both a Bode diagram
and the Nyquist diagram by subjecting the process to a series of sinusoidal inputs. Each
input will contribute one point on the Bode or Nyquist diagrams. The unknown transfer
function is
G(s) = \frac{0.0521 s^2 + 0.4896 s + 1}{0.0047 s^4 + 0.0697 s^3 + 0.5129 s^2 + 1.4496 s + 1}
Compare your plot with one generated using bode. Note that in practice we do not know
the transfer function in question, and we construct a Bode or Nyquist plot to obtain it. This
is the opposite of typical textbook student assignments.
Hint: Do the following for a dozen or so different frequencies:
(a) First specify the unknown continuous time
transfer function.
(b) Choose a test frequency, say ω = 2 rad/s, and
generate a sine wave at this frequency.
(c) Simulate the output of the process given this input. Plot input/output together and read off the
phase lag and amplitude ratio.
Note that the perfect frequency response can be obtained using the freqs command.
2. The opposite calculation, that is fitting a transfer function to an experimentally obtained frequency response is trivial with the invfreqs command. Use this command to reconstruct
the polynomial coefficients of G(s) using only your experimentally obtained frequency response.
Problem 6.2 Write a MATLAB function file to return G_p(iω) given the input/output data x(t), y(t) using Eqn. 6.5 for some specified frequency. Test your subroutine using the transfer function from Problem 6.1 and some suitable input signal, and compare the results with the exact frequency response.
Hint: To speed up the calculations, you should vectorise the function as much as possible. (Of course for the purposes of this educational example, do not use an FFT.)
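One possible vectorised sketch of such a function (the name gfreq and the use of trapz to approximate the integrals in Eqn. 6.5 are assumptions of this example):

function G = gfreq(t,x,y,w)
% Estimate Gp(iw) from input x(t) & output y(t) via Eqn. 6.5
E = exp(-1i*t(:)*w(:)');                      % e^{-iwt}, one column per frequency
G = trapz(t, y(:).*E) ./ trapz(t, x(:).*E);   % ratio of the two integrals
end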
6.3
Historically graphical methods, especially for continuous time domain systems, used to be very
popular partly because one needed little more than pencil and graph paper. For first-order or for
some second-order models common in process control, we have a number of graphical recipes
which if followed, enable us to extract reasonable parameters. See [181, Chpt 7] for further details.
While we could in principle invert the transfer function given a step input to the time domain and then directly compare the model and actual outputs, this has the drawback that the solutions are often convoluted nonlinear functions of the parameters. For example, the step responses of first-order and second-order overdamped systems are:

$$G(s) = \frac{K}{\tau s + 1}: \qquad y(t) = K\left(1 - e^{-t/\tau}\right)$$

$$G(s) = \frac{K}{(\tau_1 s + 1)(\tau_2 s + 1)}: \qquad y(t) = K\left(1 - \frac{\tau_1 e^{-t/\tau_1} - \tau_2 e^{-t/\tau_2}}{\tau_1 - \tau_2}\right)$$

and the problem is that while the gain appears linearly in the time domain expressions, the time constants appear nonlinearly. This makes subsequent parameter estimation difficult, and in general we would need to use a tool such as the OPTIMISATION toolbox to find reasonable values for the time constants. The following section describes this approach.
6.3.1 Fitting transfer functions using nonlinear least-squares
Fig. 6.22 shows some actual experimental data collected from a simple black box plant subjected to a low-frequency square wave. The data was sampled at 10 Hz, or Ts = 0.1 s. Notice that in this case the noise characteristics seem to depend clearly on the magnitude of y. By collecting this input/output data we will try to identify a continuous transfer function.
Figure 6.22: Experimental data from a continuous plant subjected to a square wave.
We can see at a glance from the response in Fig. 6.22 that a suitable model for the plant would be an under-damped second order transfer function with deadtime

$$G(s) = \frac{K e^{-as}}{\tau^2 s^2 + 2\tau\zeta s + 1} \tag{6.6}$$

where we are interested in estimating the four model parameters of Eqn. 6.6, θ ≝ [K, τ, ζ, a]ᵀ.
This is a nonlinear least-squares regression optimisation problem we can solve using lsqcurvefit.
Listing 6.4 will be called by the optimisation routine in order to generate the model predictions
for a given trial set of parameters.
Listing 6.4: Function to generate output predictions given a trial model and input data.
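The body of this function is along the following lines; a sketch assuming the parameter ordering θ = [K, τ, ζ, a]ᵀ defined above and the tf and lsim commands from the Control System toolbox:

function yhat = fmodelsim(theta,t,u)
% Output prediction for the trial model G(s) = K*exp(-a*s)/(tau^2*s^2 + 2*tau*zeta*s + 1)
K = theta(1); tau = theta(2); zeta = theta(3); a = theta(4);
G = tf(K,[tau^2, 2*tau*zeta, 1],'IODelay',a); % 2nd-order model + deadtime
yhat = lsim(G,u,t);                           % simulate the response
return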
The optimisation routine in Listing 6.5 calls the function given in Listing 6.4 repeatedly in order
to search for good values for the parameters such that the predicted outputs match the actual
outputs. In parameter fitting cases such as these, it is prudent to constrain the parameters to be
non-negative. This is particularly important in the case of the deadtime where we want to ensure
that we do not inadvertently try models with acausal behaviour.
Listing 6.5: Optimising the model parameters.
K0 = 1.26; tau0 = 1.5; zeta0 = 0.5; deadt0 = 0.1; % Initial guesses for theta0
theta0 = [K0,tau0,zeta0,deadt0]';
LowerBound = zeros(size(theta0)); % Avoid negative deadtime
theta_opt = lsqcurvefit(@(x,t) fmodelsim(x,t,u), theta0,t,y,LowerBound)
Once we have some optimised values, we should plot the predicted model output on top of the
actual output as shown in Fig. 6.23 which, in this case, is not too bad.
Listing 6.6: Validating the fitted model.
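The body of this validation step is along these lines (a sketch, reusing the fmodelsim function and the optimised parameters from Listing 6.5):

y_pred = fmodelsim(theta_opt,t,u);  % model prediction with optimised theta
plot(t,y,t,y_pred,'--')             % overlay prediction on measured output
legend('Actual','fitted')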
Figure 6.23: A continuous-time model, G(s) = 1.166/(0.5712s² + 0.317s + 1), fitted to input/output data from Fig. 6.22.
The high degree of similarity between the model and the actual plant shown in Fig. 6.23 suggests that, with some care, this method does work in practice using real experimental data complete with substantial noise. On the downside, however, we did use a powerful nonlinear optimiser, we were careful to eliminate the possibility that any of the parameters could be negative, we chose sensible starting estimates, and we used nearly 1000 data points.
6.3.2 Identification of continuous models using successive differentiation
While most modern system identification and model based control is done in the discrete domain, it is possible, as shown above, to identify the parameters in continuous models, G(s), directly. This means that we can establish the continuous time poles and zeros of a transfer function without first identifying a discrete time model and then converting back to the continuous time domain.

However, there are two main reasons why continuous time model identification is far less useful than its discrete time equivalent. The first reason is that since discrete time models are more useful for digital control, it makes sense to estimate them directly rather than go through a temporary continuous model. The second drawback is purely practical. Either we must resort to the delicate task of applying nonlinear optimisation techniques such as illustrated above, or, if we wish to use the robust linear least-squares approach, we need to reliably estimate high order derivatives of both the input and output data. In the presence of measurement noise and possible discontinuities
in the gradient of the input, such as step changes, it is almost impossible to construct anything higher than a second-order derivative, thus restricting our models to second order or less. Notwithstanding these important implementation issues, [31] describes a recursive least-squares approach for continuous system identification.
As continuous time model identification requires one to measure not only the input and output,
but also the high order derivatives of y(t) and u(t), it turns out that this construction is delicate
even for well behaved simulated data, so we would anticipate major practical problems for actual
experimental industrial data.
If we start with the continuous transfer function form,

$$Y(s) = \frac{B(s)}{A(s)}\,U(s)$$

inverting to the time domain and assuming zero initial conditions, we get the differential equation

$$a_n y^{(n)} + a_{n-1} y^{(n-1)} + \cdots + a_1 y^{(1)} + y = b_m u^{(m)} + b_{m-1} u^{(m-1)} + \cdots + b_1 u^{(1)} + b_0 u$$
where the notation y⁽ⁿ⁾ means the n-th time derivative of y(t). This can be written with some rearrangement as a vector dot product

$$y = \begin{bmatrix} -y^{(n)} & -y^{(n-1)} & \cdots & -y^{(1)} & u^{(m)} & u^{(m-1)} & \cdots & u \end{bmatrix}\begin{bmatrix} a_n \\ a_{n-1} \\ \vdots \\ a_1 \\ b_m \\ b_{m-1} \\ \vdots \\ b_0 \end{bmatrix} \tag{6.7}$$
which is now in a form suitable for least-squares regression of the vector of unknown coefficients of the A and B polynomials. Note that if we desire nₐ poles, we must differentiate the output nₐ times. Similarly, we must differentiate the input once for each zero we want to estimate. A block diagram of this method is given in Fig. 6.24.
A simple simulation is given in Listing 6.7. I first generate a truth model with four poles and
three zeros. Because I want to differentiate the input, I need to make it sufficiently smooth by
passing a random input through a very-low-pass filter. To differentiate the logged data I will use
the built-in gradient command which approximates the derivative using finite differences. To
obtain the higher-order differentials, I just repeatedly call gradient. With the input/output data
and its derivatives, the normal equations are formed, and solved for the unknown polynomial
coefficients.
Listing 6.7: Continuous model identification of a non-minimum phase system

Gplant = tf(3*poly([3,-3,-0.5]), ...
    poly([-2 -4 -7 -0.4]));  % G(s) = 3(s-3)(s+3)(s+0.5)/((s+7)(s+4)(s+2)(s+0.4))
[Figure 6.24: Block diagram of continuous model identification: the input u(t) and the plant output y(t) are each passed through a chain of differentiators, and the resulting signals are regressed to give the parameters θ.]
Ur = randn(size(t));                           % white noise
[Bf,Af] = butter(4,0.2); u = filter(Bf,Af,Ur); % Input now smooth
y = lsim(Gplant,u,t); plot(t,[u,y]);           % do input/output experiment

% Now do the identification . . .
na = 2; nb = 1;
dy = y;
for i=1:na
    dy(:,i+1) = gradient(dy(:,i),t);
end % for
du = u;
for i=1:nb
    du(:,i+1) = gradient(du(:,i),t);
end % for

b = y;                      % RHS
X = [-dy(:,2:na+1), du];    % data matrix
theta = X\b;                % solve for parameters, theta
Even though we know the truth model possessed four poles and three zeros, our fitted model with just two poles and one zero gave a close match, as shown in Fig. 6.25. In fact the plant output (heavy line) is almost indistinguishable from the predicted output (light line), and this is due to the lack of noise and the smoothed input trajectory. However, if you get a little over-ambitious in your identification, the algorithm will go unstable owing to the difficulty in constructing reliable derivatives.
Figure 6.25: A simulated example of a continuous model identification comparing the actual plant with the model prediction, showing almost no discernible error. Compare these results with an actual experiment in Fig. 6.26.
6.3.3 Practical considerations
Given the results presented in Fig. 6.25, we may be tempted to suppose that identifying a continuous transfer function from logged data is relatively straightforward. Unfortunately, however, if you actually try this scheme on a real plant, even a well behaved plant such as the black-box, you will find practical problems of such magnitude that they render the scheme almost worthless. We should also note that if the deadtime is unknown, then the identification problem is nonlinear in the parameters, which makes the subsequent identification problematic. This is partly the reason why many common textbooks on identification and control pay only scant attention to identification in the continuous domain, and instead concentrate on identifying discrete models. (Discrete model identification is covered next in section 6.4.)
The upper plot in Fig. 6.26 shows my attempt at identifying a continuous model of the blackbox which, compared with Fig. 6.25, is somewhat less convincing. Part of the problem lies in the difficulty of constructing the derivatives of the logged input/output data. The lower plot in Fig. 6.26 shows the logged output together with the first and second derivatives computed crudely using finite differences. One can clearly see the compounding effects on the higher derivatives of measurement noise, discretisation errors, and so forth.
For specific cases of continuous transfer functions, there are a number of optimised algorithms to directly estimate the continuous coefficients. Himmelblau [86, pp358–360] gives one such procedure which reduces to a linear system that is easy to solve in MATLAB. However, this algorithm essentially suffers the same drawbacks as the sequential differentiation schemes given previously.
Improving the identification using Laguerre functions
One scheme that has been found to be practical is based on approximating the continuous response with a series of orthogonal functions known as Laguerre functions. The methodology was developed at the University of British Columbia and is available commercially in BrainWave, now part of Andritz Automation. Further details about the identification strategy are given in [200].
Figure 6.26: A practical continuous model identification of the blackbox is not as successful as the simulated example given in Fig. 6.25. Part of the problem is the reliable construction of the higher derivatives. Upper: The plant (solid) and model prediction (dashed). Lower: The output y and the derivatives ẏ and ÿ.
Compared to the standard curve fitting techniques, this strategy using orthogonal functions is more numerically robust, and only involves numerical integration of the raw data (as opposed to the troublesome differentiation). On the downside, manipulating the Laguerre polynomial functions is cumbersome, and the user must select an appropriate scaling factor to obtain reasonable results. While [200] suggests some rules of thumb and some algorithms for choosing the scaling factor, it is still a delicate and sensitive calculation and therefore tricky in practice.
Suppose we wish to use Laguerre functions to identify the complex high-order transfer function

G(s) = ⋯    (6.8)

which was used as a challenging test example in [200, p32]. To perform the identification, the user must decide on an appropriate order (8 in this case) and a suitable scaling factor, p. Fig. 6.27 compares the collected noisy step data with the identified step response, compares the identified and true impulse responses, and gives the error in the impulse response.

Figure 6.27: Identification of the 14th order system given in Eqn. 6.8 using an 8th-order Laguerre series and a near-optimal scaling parameter (p = 0.05).
6.4 Popular discrete-time linear models
The previous section showed that while it was possible in theory to identify continuous models by differentiating the raw data, in practice this scheme lacked robustness. Furthermore, for evenly spaced sampled-data systems it makes sense not to estimate continuous time models, but to estimate discrete models directly, since we have sampled the plant to obtain the input/output
data anyway. This section describes the various common forms of linear discrete models that we
will find useful in modelling and control.
If we collect the input/output data at regular sampling intervals, t = kT, for most applications it suffices to use linear difference models such as

$$y(k) + a_1 y(k-1) + a_2 y(k-2) + \cdots + a_n y(k-n) = b_0 u(k) + b_1 u(k-1) + \cdots + b_m u(k-m) \tag{6.9}$$

where in the above model the current output y(k) is dependent on the m + 1 past and present inputs u and the n immediate past outputs y. The aᵢ's and bᵢ's are the model parameters we seek.
A shorthand description for Eqn. 6.9 is

$$A(q^{-1})\,y(k) = B(q^{-1})\,u(k) + e(k) \tag{6.10}$$

where we have added a noise term, e(k), and where q⁻¹ is defined as the backward shift operator. The polynomials A(q⁻¹), B(q⁻¹) are known as shift polynomials,

$$A(q^{-1}) = 1 + a_1 q^{-1} + a_2 q^{-2} + \cdots + a_n q^{-n}$$
$$B(q^{-1}) = b_0 + b_1 q^{-1} + b_2 q^{-2} + \cdots + b_m q^{-m}$$
where the A(q⁻¹) polynomial is defined by convention to be monic, that is, the leading coefficient is 1. In the following development, we will typically drop the argument of the A and B polynomials.
Even with the unmeasured noise term, we can still estimate the current y(k) given old values of y and old and present values of u using

$$\hat{y}(k|k-1) = \underbrace{-a_1 y(k-1) - \cdots - a_n y(k-n)}_{\text{past outputs}} + \underbrace{b_0 u(k) + b_1 u(k-1) + \cdots + b_m u(k-m)}_{\text{past \& present inputs}} \tag{6.11}$$

$$\phantom{\hat{y}(k|k-1)} = (1 - A)\,y(k) + B\,u(k) \tag{6.12}$$
Our estimate or prediction of the current y(k), based on information up to but not including sample k, is ŷ(k|k−1). If we are lucky, our estimate is close to the true value, or ŷ(k|k−1) ≈ y(k). Note that the prediction model of Eqn. 6.11 involves just known terms: the model parameters and past measured data. It does not involve the unmeasured noise terms.
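As a minimal sketch, the one-step-ahead prediction of Eqn. 6.12 can be computed for an entire data record with two calls to filter, assuming A and B are coefficient row vectors in powers of q⁻¹ and u, y are logged data columns:

yhat = filter([0, -A(2:end)],1,y) + filter(B,1,u); % yhat(k|k-1) = (1-A)y(k) + Bu(k)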
The model described by Eqn. 6.10 is very common and is called an Auto-Regressive with eXogenous (externally generated) input, or ARX, model. The auto-regressive refers to the fact that the output y is dependent on old y's (i.e. it is regressed on itself), in addition to an external input, u. A signal flow diagram of the model is given in Fig. 6.28.
Figure 6.28: A signal flow diagram of an auto-regressive model with exogenous input or ARX
model. Compare this structure with the similar output-error model in Fig. 6.31.
There is no reason why we could not write our models such as Eqn. 6.10 in the forward shift operator, q, as opposed to the backward shift, q⁻¹. Programmers and those working in the digital signal processing areas tended to favour the backward shift because it is easier and more natural to write computational algorithms such as Eqn. 6.11, whereas some control engineers favoured the forward notation because the time delay is given naturally by the difference in order of the A and B polynomials.
6.4.1 Extending the noise model
Since the white noise term in Eqn. 6.10 is also filtered by the plant denominator polynomial, A(q), this form is known as an equation error model structure. However, in many practical cases we will need more flexibility when describing the noise term. One obvious extension is to filter the noise term with a moving average filter,

$$\underbrace{y(k) + a_1 y(k-1) + \cdots + a_n y(k-n) = b_0 u(k) + b_1 u(k-1) + \cdots + b_m u(k-m)}_{\text{ARX model}} + e(k) + c_1 e(k-1) + \cdots$$
This is now known as coloured noise, the colouring filter being the C polynomial. The AUTO-REGRESSIVE MOVING-AVERAGE with EXOGENOUS input, or ARMAX, model is written in compact polynomial form as

$$A(q)\,y(k) = B(q)\,u(k) + C(q)\,e(k) \tag{6.13}$$
where we have dropped the notation showing the explicit dependence on sample interval k. The ARMAX signal flow is given in Fig. 6.29. This model now has two inputs, one deterministic, u, and one disturbing noise, e, and one output, y. If the input u is zero, then the system is known simply as an ARMA process.
Figure 6.29: A signal flow diagram of a ARMAX model. Note that the only difference between
this, and the ARX model in Fig. 6.28, is the inclusion of the C polynomial filtering the noise term.
While the ARMAX model offers more flexibility than the ARX model, the regression of the parameters is now a nonlinear optimisation problem. The nonlinear estimation of the parameters in the A, B and C polynomials is demonstrated on page 309. Also note that the noise term is still filtered by the A polynomial, meaning that both the ARX and ARMAX models are suitable if the dominating noise enters the system early in the process, such as a disturbance in the input variable.
Suppose, for example, we wish to simulate the ARMAX model with polynomials

$$A(q) = q^2 - q + 0.5, \qquad B(q) = 2q + 0.5, \qquad C(q) = q^2 - 0.2q + 0.05$$

Note that these polynomials are written in the forward shift operator, and the time delay given by the difference in degrees, nₐ − n_b, is 1. A MATLAB script to construct the model is:
A = [1 -1 0.5];
B = [2 0.5];
C = [1 -0.2 0.05];
ntot = 10;
U = randn(ntot,1);
E = randn(ntot,1);  % noise sequence (this line is assumed; E is used below)
Now to compute y given the inputs u and e, we use a for-loop and write out the equation explicitly as

$$y(k+2) = \underbrace{y(k+1) - 0.5y(k)}_{\text{old outputs}} + \underbrace{2u(k+1) + 0.5u(k)}_{\text{old inputs}} + \underbrace{e(k+2) - 0.2e(k+1) + 0.05e(k)}_{\text{noise}}$$
It is more convenient to wind the indices back two steps so we calculate the current y(k) as opposed to the future y(k + 2) as above. We will assume zero initial conditions to get started.
n = length(A)-1;                        % Order na
z = zeros(n,1);                         % Padding: Assume ICs = zero
Uz = [z;U]; Ez = [z;E]; y = [z;NaN*U];  % Initialise
d = length(A)-length(B);                % deadtime = na - nb
zB = [zeros(1,d),B];                    % pad B(q) with leading zeros
for i=n+1:length(Uz)
    y(i) = -A(2:end)*y(i-1:-1:i-n) + zB*Uz(i:-1:i-n) + C*Ez(i:-1:i-n);
end
y(1:n) = [];                            % strip off initial conditions
This strategy of writing out the difference equations explicitly is rather messy and error-prone, so it is cleaner, but less transparent, to use the idpoly routine to create an ARMAX model and the idmodel/sim command in the identification toolbox to simulate it. Again we must pad the front of the B polynomial with zeros to account for the deadtime.
>> G = idpoly(A,[0,B],C,1,1) % Create polynomial model A(q)y(t) = B(q)u(t) + C(q)e(t)
Discrete-time IDPOLY model: A(q)y(t) = B(q)u(t) + C(q)e(t)
A(q) = 1 - q^-1 + 0.5 q^-2
  ...
% test simulation
Both methods (writing the equations out explicitly, or using idpoly) will give identical results. We can replicate this simulation in SIMULINK by using discrete filters as shown in Fig. 6.30, which directly follows Fig. 6.29. For the B and C blocks I could either use a FIR filter block or the more general IIR discrete filter with a denominator of 1. Note that we must ensure that the B polynomial is padded with one zero to take account of the one unit of deadtime.
Figure 6.30: A discrete time ARMAX model implemented in S IMULINK. (See also Fig. 6.29.)
6.4.2 Output-error models
Both the linear ARX model of Eqn. 6.10 and the version with the filtered noise, Eqn. 6.13, assume that both the noise and the input are filtered by the plant denominator, A. A more flexible approach is to treat the input and noise sequences separately, as illustrated in Fig. 6.31. This is known as an output-error model. This model is suitable when the dominating noise term enters late in the process, such as a disturbance in the measuring transducer, for example.
Figure 6.31: A signal flow diagram of an output-error model. Compare this structure with the
similar ARX model in Fig. 6.28.
Again, a drawback of this model structure compared to the ARX model is that the optimisation function is a nonlinear function of the unknown parameters. This is because the estimate ŷ is not directly observable, but is itself a function of the parameters in the polynomials B and F. This makes the solution procedure more delicate and iterative. The SYSTEM IDENTIFICATION toolbox can identify output-error models with the oe command.
6.4.3 General input/output models
The most general linear input/output model with one noise input and one known input is depicted in Fig. 6.32. The relation is described by

$$A(q)\,y(k) = \frac{B(q)}{F(q)}\,u(k) + \frac{C(q)}{D(q)}\,e(k) \tag{6.14}$$
although it is rare that any one particular application will require all the polynomials. For example, [91, p76] points out that usually only the process dynamics B and F are identified, and D is a design parameter. Estimating the noise C polynomial is difficult in practice because the noise sequence is unknown and so must be approximated by the residuals.
Figure 6.32: A general input/output model structure for linear dynamic systems involving 5
polynomials.
Following the block diagram in Fig. 6.32 or Eqn. 6.14, we have the following common reduced special cases (see the sketch after this list):

1. ARX: Auto-Regressive with eXogenous input; C = D = F = 1
2. AR: Auto-Regressive with no external input; C = D = 1, and B = 0
3. FIR (Finite Impulse Response): C = D = F = A = 1
4. ARMAX: Auto-Regressive, Moving-Average with eXogenous input; D = F = 1
5. OE (Output Error): C = D = A = 1
6. BJ (Box-Jenkins): A = 1
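Each of these special cases has a dedicated estimation routine in the SYSTEM IDENTIFICATION toolbox; the order vectors below are symbolic placeholders rather than recommendations:

m = arx(dat,[na nb nk])        % ARX
m = ar(dat,na)                 % AR (single time series)
m = armax(dat,[na nb nc nk])   % ARMAX
m = oe(dat,[nb nf nk])         % Output error
m = bj(dat,[nb nc nd nf nk])   % Box-Jenkins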
In all these models, our aim is to estimate the model parameters, which are the coefficients of the various polynomials, given the observable input/output experimental plant data. As always, we want the smallest, most efficient model that we can get away with. The next section will describe how we can find suitable parameters for these models. Whether we have the correct model structure is another story, and that tricky and delicate point is described in section 6.6. Nonlinear models are considered in section 6.6.3.
6.5 Regressing discrete model parameters
The previous section listed the various members of the family of discrete-time models, of which the ARX and ARMAX are the most popular for control applications. Now we come to the identification stage. Our aim is, given a set of input and output data pairs (uₖ, yₖ), to regress suitable parameters for the polynomials in Fig. 6.32.
The simple auto-regressive with exogenous input (ARX) model,

$$A(q^{-1})\,y(k) = B(q^{-1})\,u(k) \tag{6.15}$$

can be rearranged to give the current output in terms of the past outputs and current and past inputs as

$$y(k) = \left(1 - A(q^{-1})\right)y(k) + B(q^{-1})\,u(k) \tag{6.16}$$

$$\phantom{y(k)} = -a_1 y(k-1) - \cdots - a_n y(k-n) + b_0 u(k) + b_1 u(k-1) + \cdots + b_m u(k-m) \tag{6.17}$$
Our job is to estimate reasonable values for the unknown n + m + 1 parameters a1 , . . . , an and
b0 , . . . , bm such that our model is a reasonable approximation to data collected from the true plant.
Collecting all the unknown parameters into a column vector, θ, defined as

$$\boldsymbol{\theta} \overset{\text{def}}{=} \begin{bmatrix} a_1 & a_2 & \cdots & a_n & \vdots & b_0 & b_1 & \cdots & b_m \end{bmatrix}^T \tag{6.18}$$

and collecting the past outputs and the past and present inputs into the data vector

$$\boldsymbol{\varphi}_k \overset{\text{def}}{=} \begin{bmatrix} -y(k-1) & -y(k-2) & \cdots & -y(k-n) & \vdots & u(k) & u(k-1) & \cdots & u(k-m) \end{bmatrix}^T \tag{6.19}$$

allows us to write Eqn. 6.17 compactly as

$$y(k) = \boldsymbol{\varphi}_k^T\,\boldsymbol{\theta} \tag{6.20}$$
The data vector φ is also known as the regression vector since we are using it to regress the values of the parameter vector θ. Note that we have placed the negative signs associated with the outputs in Eqn. 6.19, although there is nothing to stop us placing them in Eqn. 6.18 instead.

While Eqn. 6.20 contains (n + m + 1) unknown parameters, we have only one equation. Clearly we need more information in the form of additional equations to reduce the degrees of freedom, which in turn means collecting more input/output data pairs.
Suppose we collect a total of N data pairs. The first n are used to make the first prediction, and the next N − n are used to estimate the parameters. Stacking all the N − n equations, of which Eqn. 6.17 is the first, in matrix form gives

$$\begin{bmatrix} y_n \\ y_{n+1} \\ \vdots \\ y_N \end{bmatrix} = \begin{bmatrix} -y_{n-1} & -y_{n-2} & \cdots & -y_0 & u_n & u_{n-1} & \cdots & u_{n-m} \\ -y_n & -y_{n-1} & \cdots & -y_1 & u_{n+1} & u_n & \cdots & u_{n-m+1} \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\ -y_{N-1} & -y_{N-2} & \cdots & -y_{N-n} & u_N & u_{N-1} & \cdots & u_{N-m} \end{bmatrix}\boldsymbol{\theta} \tag{6.21}$$
or in a compact form

$$\mathbf{y}_N = \mathbf{X}_N\,\boldsymbol{\theta} \tag{6.22}$$
The model error is quantified by the sum of squared errors,

$$\mathcal{J} = \sum_{k=n}^{N}\left(y_k - \boldsymbol{\varphi}_k^T\boldsymbol{\theta}\right)^2 \tag{6.23}$$

which we wish to minimise. In practice, we are rarely interested in the numerical value of J itself but in the values of the elements in the parameter vector, θ, that minimise it,

$$\hat{\boldsymbol{\theta}} = \operatorname*{argmin}_{\boldsymbol{\theta}} \sum_{k=n}^{N}\left(y_k - \boldsymbol{\varphi}_k^T\boldsymbol{\theta}\right)^2 \tag{6.24}$$
Eqn. 6.24 reads as: the optimum parameter vector is given by the argument that minimises the sum of squared errors. Eqn. 6.24 is a standard linear regression problem which has the same solution as Eqn. 3.42, namely

$$\hat{\boldsymbol{\theta}} = \left(\mathbf{X}_N^T\mathbf{X}_N\right)^{-1}\mathbf{X}_N^T\,\mathbf{y}_N \tag{6.25}$$
For the parameters to be unique, the inverse of the matrix X_NᵀX_N must exist. In order to ensure this, the N × (n + m + 1) data matrix, X_N, must at least have more rows than columns, or in other words it should be taller than it is wide. (Admittedly that does not appear to be the case at first glance in Eqn. 6.21.) Secondly, the matrix X_NᵀX_N, which has the dimensions of the number of unknowns, must have full rank. We can ensure this second requirement by constructing an appropriate input sequence to the unknown plant. This is known as experimental design.
Finally, we should take care in numerically computing the unknown vector by using the improved strategies given in section 3.3.2, or at least by using MATLAB's backslash operator, as opposed to attempting to follow Eqn. 6.25 verbatim.
Example of offline least-squares identification
Figure 6.33: I have always wondered what the regression gun does.
Suppose we have collected the following four input/output data pairs from an unknown plant:

time, k      0     1     2     3
input, uₖ    −1    4     −3    2
output, yₖ   1     10    −29   64
to which we would like to fit an ARX model of the form A(q)yₖ = B(q)uₖ. We propose a model structure of

$$A(q^{-1}) = 1 + a_1 q^{-1}, \qquad\text{and}\qquad B(q^{-1}) = b_0$$

which has two unknown parameters, θ ≝ [a₁, b₀]ᵀ, which we are to estimate such that our predictions adequately match the measured output above. We can write our model as a difference equation at time k,

$$y_k = -a_1 y_{k-1} + b_0 u_k$$
which, if we write out in full at time k = 3 using the 4 previously collected input/output data pairs, gives

$$\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} -y_0 & u_1 \\ -y_1 & u_2 \\ -y_2 & u_3 \end{bmatrix}\begin{bmatrix} a_1 \\ b_0 \end{bmatrix}, \qquad \begin{bmatrix} 10 \\ -29 \\ 64 \end{bmatrix} = \begin{bmatrix} -1 & 4 \\ -10 & -3 \\ 29 & 2 \end{bmatrix}\begin{bmatrix} a_1 \\ b_0 \end{bmatrix}$$
The solution for the parameter vector is over-constrained as we have 3 equations and only 2 unknowns, so we will use Eqn. 6.25 to solve for the optimum θ,

$$\hat{\boldsymbol{\theta}}_N = \left(\mathbf{X}_N^T\mathbf{X}_N\right)^{-1}\mathbf{X}_N^T\,\mathbf{y}_N$$
and substituting N = 3 gives

$$\hat{\boldsymbol{\theta}}_3 = \left(\begin{bmatrix} -1 & -10 & 29 \\ 4 & -3 & 2 \end{bmatrix}\begin{bmatrix} -1 & 4 \\ -10 & -3 \\ 29 & 2 \end{bmatrix}\right)^{-1}\begin{bmatrix} -1 & -10 & 29 \\ 4 & -3 & 2 \end{bmatrix}\begin{bmatrix} 10 \\ -29 \\ 64 \end{bmatrix} = \frac{1}{20262}\begin{bmatrix} 29 & -84 \\ -84 & 942 \end{bmatrix}\begin{bmatrix} 2136 \\ 255 \end{bmatrix} = \begin{bmatrix} 2 \\ 3 \end{bmatrix}$$

This means the parameters are a₁ = 2 and b₀ = 3, giving the model

$$\mathcal{M}(\boldsymbol{\theta}) = \frac{3}{1 + 2q^{-1}}$$
This then should reproduce the measured output data, and if we are lucky, be able to predict
future outputs too. It is always a good idea to compare our predictions with what was actually
measured.
time         0     1     2     3
input        −1    4     −3    2
output       1     10    −29   64
prediction   —     10    −29   64
Well, surprise surprise, our predictions look unrealistically perfect! (This problem is continued on page 290, where we demonstrate how, by using a recursive technique, we can efficiently process new incoming data pairs.)
6.5.1 Simple offline identification routines
The following routines automate the offline identification procedure given in the previous section. It should be noted, however, that if you are serious about system identification, then a better option is to use the more robust routines from the SYSTEM IDENTIFICATION TOOLBOX. These routines will be described in section 6.5.3.
In Listing 6.8 below, we will use the true ARX process with polynomials

$$A(q^{-1}) = 1 - 1.9q^{-1} + 1.5q^{-2} - 0.5q^{-3}, \qquad\text{and}\qquad B(q^{-1}) = 1 + 0.2q^{-1} \tag{6.26}$$

that is, the plant G(q⁻¹) = (1 + 0.2q⁻¹)/(1 − 1.9q⁻¹ + 1.5q⁻² − 0.5q⁻³).
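A data-generation script in the spirit of Listing 6.8 might look as follows; the input length is an assumption:

G = tf([1 0.2],[1 -1.9 1.5 -0.5],-1,'Variable','z^-1'); % true plant, Eqn. 6.26
u = randn(1000,1);   % persistently exciting random input (length assumed)
y = lsim(G,u);       % logged plant output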
Now that we have some trial input/output data, we can write some simple routines to calculate, using least-squares, the coefficients of the underlying model by solving Eqn. 6.21. Once we have performed the regression, we can, in this simulated example, compare our estimated coefficients with the true coefficients given in Eqn. 6.26.
Listing 6.9: Estimate an ARX model from an input/output data series using least-squares

na = 3; nb = 2;                       % # of A(q) & B(q) coefficients (Eqn. 6.26)
N = length(y); nmax = max(na,nb)+1;   % (assumed preamble)
X = y(nmax-1:N-1);                    % Construct Eqn. 6.21
for i=2:na
    X = [X, y(nmax-i:N-i)];
end
for i=0:nb-1
    X = [X, u(nmax-i:N-i)];
end
y_lhs = y(nmax:N);       % left hand side of y = X*theta
theta = X\y_lhs          % theta = X+ y
G_est = tf(theta(na+1:end)',[1 -theta(1:na)'],-1,'Variable','z^-1')
If you run the estimation procedure in Listing 6.9, your estimated model, G_est, should be identical to the model used to generate the data, G, in Listing 6.8. This is because in this simulation
example we have no model/plant mismatch, no noise and we use a rich, persistently exciting
identification signal.
As an aside, it is possible to construct the Vandermonde-style data matrix (data matrix X in Eqn. 6.22) using Toeplitz (or the closely related Hankel) matrices, as shown in Listing 6.10.
Listing 6.10: An alternative way to construct the data matrix for ARX estimation using Toeplitz matrices. See also Listing 6.9.

na = length(A)-1; nb = length(B); % # of parameters to be estimated
nmax = max(na,nb);
Y = toeplitz(y(nmax:end-1),y(nmax:-1:nmax-na+1));
U = toeplitz(u(nmax+1:end),u(nmax+1:-1:nmax-nb+2));
X = [Y,U];               % Form data matrix, X.
theta = X\y(nmax+1:end)  % theta = X+ y
Note that in this implementation, na is both the order of the A(q 1 ) polynomial and the number
of unknown coefficients to be estimated given that A is monic, while nb is the order of B plus one.
6.5.2 Bias in the parameter estimates
The strategy to identify ARX systems has the advantage that the regression problem is linear, and the estimated parameters will be consistent, that is, they can be shown to converge to the true parameters as the number of samples increases, provided the disturbing noise is white. If, however, the disturbances are non-white, as in the case where we have a non-trivial C(q) polynomial, then the estimated A(q) and B(q) parameters will exhibit a bias.
To illustrate this problem, suppose we try to estimate the A and B polynomial coefficients in the ARMAX process

$$(1 + 0.4q^{-1} - 0.5q^{-2})\,y_k = q^{-1}(1.2 - q^{-1})\,u_k + \underbrace{(1 + 0.7q^{-1} + 0.3q^{-2})}_{\text{noise filter}}\,e_k \tag{6.27}$$
where we have a non-trivial colouring C polynomial disturbing the process. The results shown in Fig. 6.34 illustrate the failure of the ARX estimation routine to properly identify the A(q) and B(q) parameters. Note that the estimate of a₁ after 300 iterations is around 0.2, which is some distance from the true value of 0.4. Similar errors are evident for a₂ and b₁, and it does not look as if they will converge even if we collect more data. This inconsistency of the estimated parameters is a direct result of the C(q) polynomial which filters the white noise, eₖ. As the disturbing signal is no longer white, the ARX estimation routine is no longer consistent.
[Figure 6.34: ARX estimation applied to the ARMAX process of Eqn. 6.27. Upper: input/output data. Lower: the evolution of the parameter estimates θ with sample number k, which fail to converge to the true values.]
To obtain the correct parameters without the bias, we have no option other than to estimate the C(q) polynomial as well. This ARMAX estimation strategy is unfortunately a nonlinear estimation problem; appropriate identification strategies are available in the SYSTEM IDENTIFICATION TOOLBOX described next.
6.5.3 Using the System Identification Toolbox
As an alternative to the rather simple and somewhat home-grown identification routines developed in section 6.5.1 above, we could use the collection in the SYSTEM IDENTIFICATION TOOLBOX. This comprehensive set of routines for dynamic model identification closely follows [125] and is far more reliable and robust than the purely illustrative routines presented so far.
The simplest model considered by the SYS ID toolbox is the auto-regressive with exogenous input, or ARX, model described in Eqn. 6.10. However, for computational reasons, the toolbox separates the deadtime from the B polynomial so that the model is now written

$$A(q^{-1})\,y = q^{-d}B(q^{-1})\,u + e \tag{6.28}$$
where the integer d is the assumed number of delays. The arx routine will, given input/output data, try to establish reasonable values for the coefficients of the polynomials A and B using least-squares, functionally similar to the procedure given in Listing 6.9.

Of course in practical identification cases we do not know the structure of the model (i.e. the order of B, A and the actual sample delay), so we must guess a trial structure. The arx command returns both the estimated parameters and associated uncertainties in an object which we will call testmodel and which we can view with present. Once we have fitted our model, we can compare the model predictions with the actual output data using the compare command. The script in Listing 6.11 demonstrates the loading, fitting and comparing, but in this case we deliberately use a structurally deficient model.
Listing 6.11: Offline system identification using arx from the System Identification Toolbox

>> plantio = iddata(y,u,1);            % Use the data generated from Listing 6.8.
>> testmodel = arx(plantio,[2 1 1]);   % (deliberately deficient structure; exact orders assumed)
>> present(testmodel)                  % view parameters & uncertainties
>> compare(plantio,testmodel)          % compare predictions with the measured data

The resulting compare plot overlays the measured output and the deficient model, testmodel, which achieves a fit of only 71.34%.
Note, however, that with no structural mismatch, the arx routine in Listing 6.12 should manage to find the exact values for the model coefficients,

Listing 6.12: Offline system identification with no model/plant mismatch

>> testmodel = arx(plantio,[3 2 0]) % Data from Listing 6.11 with correct model structure.
Discrete-time IDPOLY model: A(q)y(t) = B(q)u(t) + e(t)
A(q) = 1 - 1.9 q^-1 + 1.5 q^-2 - 0.5 q^-3
B(q) = 1 + 0.2 q^-1
which it does.
Note that in this case the identification routine perfectly identified our plant, and that the loss function given in the result summary is practically zero (≈ 10⁻³⁰). This indicates an extraordinary goodness of fit. This is not unexpected, since we have used a perfect linear process whose structure we knew exactly, although naturally we could not expect this in practice.
The arx command essentially solves a linear regression problem using the special MATLAB command \ (backslash), as shown in the simple version given in Listing 6.9. Look at the source of arx for further details.
Closely related are auto-regressive (AR) models,

$$A(q^{-1})\,y(k) = e(k) \tag{6.29}$$

which are used when we have a single time series of data which we suspect is simply disturbed by random noise. In Listing 6.13 we attempt to fit a 3rd-order AR model and we get reasonable results, with the coefficients within 1% of the true values. (We should of course compare the pole-zero plots of the model and plant, because the coefficients of the polynomials are typically numerically ill-conditioned.) Note, however, that we used 1000 data points for this 3-parameter model, far more than we would need for an ARX model.
Listing 6.13: Demonstrate the fitting of an AR model.
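The body of this listing would be along the following lines; the true AR coefficients below are illustrative assumptions:

Atrue = [1 -0.9 0.5 -0.1];          % a true AR(3) polynomial (assumed values)
y = filter(1,Atrue,randn(1000,1));  % a noise-driven time series
m = ar(iddata(y),3);                % fit a 3rd-order AR model
present(m)                          % coefficients should be within ~1% of Atrue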
Estimating noise models
Estimating noise models is complicated, since some of the regressors are now themselves functions of the unknown parameters. This means the optimisation problem is now nonlinear, compared to the relatively straightforward linear arx case. However, as illustrated in section 6.5.2, we need to estimate the noise models to eliminate the bias in the parameter estimation.
In Listing 6.14 we generate some input/output data from an output-error process,

$$y(k) = \frac{B}{F}\,u(k-d) + e(k) = \frac{1 + 0.5q^{-1}}{1 - 0.5q^{-1} + 0.2q^{-2}}\,u(k-1) + e(k)$$

whose parameters are, of course, in reality unknown. The easiest way to simulate an output-error process is to use the idmodel/sim command from the System ID toolbox.
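A sketch of this data-generation step follows; the positional idpoly syntax, the input type, noise variance and data length are all assumptions:

B = [0 1 0.5];                 % B(q), padded with a zero for the one-sample delay
F = [1 -0.5 0.2];
m0 = idpoly(1,B,1,1,F,0.1,1);  % output-error structure: y = (B/F)u + e
u = idinput(1000,'prbs');      % pseudo-random binary test input
e = sqrt(0.1)*randn(1000,1);
y = sim(m0,[u e]);             % simulate, injecting the noise path (older toolbox syntax)
datOE = iddata(y,u,1);         % package for the estimation routines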
Using the data generated in Listing 6.14, we can attempt to fit both an output-error and an arx model. Listing 6.15 shows that, in fact, the knowingly incorrect arx model is almost as good as the structurally correct output-error model.
Listing 6.15: Parameter identification of an output error process using oe and arx.
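A sketch of the comparison; the order vectors assume the true process structure above:

mOE  = oe(datOE,[2 2 1]);    % structurally correct output-error model
mARX = arx(datOE,[2 2 1]);   % knowingly incorrect ARX structure
compare(datOE,mOE,mARX)      % both fits turn out surprisingly close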
The System Identification toolbox has optional parameters for the fitting of the nonlinear models. One of the more useful options is the ability to fix certain parameters to known values. This can be achieved using the FixParameter option.
6.5.4 Fitting state-space models
So far we have concentrated on finding parameters for transfer function models, which means we are searching for the coefficients of just two polynomials. However, in the multivariable case we need to identify state-space models, which means estimating the elements in the Φ, Δ and C matrices. If we are able to reliably measure the states, then we can use the same least-squares approach (using the pseudo-inverse) that we used in the transfer function identification. If, however, and more likely, we cannot directly measure the states, x, but only the output variables, y, then we need to use the more complex subspace approach described on page 279.
The state-space identification problem starts with a multivariable discrete model

$$\mathbf{x}_{k+1} = \Phi\,\mathbf{x}_k + \Delta\,\mathbf{u}_k, \qquad \mathbf{x}\in\mathbb{R}^n,\ \mathbf{u}\in\mathbb{R}^m \tag{6.30}$$

where we want to establish the elements in the model matrices Φ, Δ given the input/state sequences uₖ and xₖ. To ensure we have enough degrees of freedom, we would expect to need to collect at least (n + m) data pairs, although typically we would collect many more if we could.
Transposing Eqn. 6.30 gives xᵀₖ₊₁ = xᵀₖΦᵀ + uᵀₖΔᵀ, or written out,

$$\mathbf{x}_{k+1}^T = \underbrace{\begin{bmatrix} x_1 & x_2 & \cdots & x_n & u_1 & u_2 & \cdots & u_m \end{bmatrix}_k}_{1\times(n+m)}\;\underbrace{\begin{bmatrix} \Phi^T \\ \Delta^T \end{bmatrix}}_{(n+m)\times n} \tag{6.31}$$

where we have stacked the unknown model matrices Φ and Δ together into one large parameter matrix Θ. If we can estimate this matrix, we have achieved our model fitting.
Currently Eqn. 6.31 comprises n equations, but far more, (n + m)n, unknowns: the elements in the Φ and Δ matrices. To address this degree-of-freedom problem, we need to collect more input/state data. If we collect say N more input/state pairs and stack them underneath Eqn. 6.31, we
get

$$\underbrace{\begin{bmatrix} \mathbf{x}_{k+1}^T \\ \mathbf{x}_{k+2}^T \\ \vdots \\ \mathbf{x}_{k+N+1}^T \end{bmatrix}}_{N\times n} = \underbrace{\begin{bmatrix} x_1 & x_2 & \cdots & x_n & u_1 & \cdots & u_m \\ \vdots & & & \vdots & \vdots & & \vdots \\ x_1 & x_2 & \cdots & x_n & u_1 & \cdots & u_m \end{bmatrix}_{k,\,k+1,\,\ldots,\,k+N}}_{N\times(n+m)}\;\Theta \tag{6.32}$$
or Y = XΘ. Inverting this, using the pseudo-inverse of the data matrix X, gives the least-squares parameters

$$\Theta = \begin{bmatrix} \Phi^T \\ \Delta^T \end{bmatrix} = \mathbf{X}^{+}\mathbf{Y} \tag{6.33}$$

and hence the model coefficient matrices Φ and Δ as desired.
The computation of the model matrices Φ and Δ is a linear least-squares regression which is easy to program and robust to execute. However, the normal restrictions on the invertibility and conditioning of the data matrix X still apply in order to obtain meaningful solutions. The largest drawback of this method is that we must measure the states x, as opposed to only the outputs. (If we only have outputs, then we need to use the more sophisticated technique described on page 279.) If we had measured the outputs in addition to the states, it would be easy to calculate the C matrix subsequently using another least-squares regression.
In MATLAB we could use the fragment below to compute Φ and Δ given a sequence of x(k) and u(k).

[y,t,X] = lsim(G,U,[],x0);             % do experiment & log the states, X
Xdata = [X(1:end-1,:), U(1:end-1,:)];  % data matrix [x_k, u_k] (this line is assumed)
Y = X(2:end,:);                        % future states; delete first row
theta = Xdata\Y;                       % solve for parameters, Theta = X+ Y
Phiest = theta(1:n,:)'; Delest = theta(n+1:end,:)'; % Reconstruct Phi & Delta.
If any of the parameters are known a priori, they can be removed from the parameter matrix and the constants substituted. If any dependencies between the parameters are known (such as a = 3b), then this may be incorporated as a constraint, and a nonlinear optimiser used to search for the resulting best-fit parameter vector. The SYSTEM IDENTIFICATION TOOLBOX incorporates both these extensions.
Estimating the confidence limits for parameters in dynamic systems uses the same procedure as outlined in section 3.3.4. The difficult calculation is to extract the data matrix, X (as defined by Eqn. 3.61), accurately. [25, pp226–228] describes an efficient method which develops and subsequently integrates sensitivity equations, which is, in general, the preferred technique to using finite differences.
Identification using only input/output data
The previous method required state measurements, but it would be advantageous still to be able to estimate the model matrices while only being required to measure input, u, and output, y, data. This is addressed in a method known as subspace identification, described in [72, p536] and in more detail by the developers in [155]. The state-space subspace algorithm for identification, known as n4sid, is available in the System Identification toolbox.
Summary

Wish to calculate        Algorithm                   page #
xₖ                       Kalman filter               444
Φ, Δ, C                  Normal linear regression    278
n, Φ, Δ, C, xₖ           Subspace identification     279

6.6 Model structure determination and validation
The basic linear ARX model used in Eqn. 6.28 has three structural parameters: the number of poles, nₐ, the number of zeros, n_b, and the number of samples of delay, d. ARMAX models have four structural parameters. Before regressing suitable parameters given a tentative model structure, we need some method to establish what a tentative structure is. Typically this involves some iteration in the model fitting procedure.

If we inadvertently choose too high an order for the model, we are in danger of overfitting the data, which will probably mean that our model will perform poorly on new data. Furthermore, for reasons of efficiency and elegance, we would like the simplest model with the smallest number of parameters that still adequately predicts the output. This is known as the principle of parsimony, which aims to reward simpler rather than complex models. This is the rationale for dividing the collected experimental data into two sets: one estimation set used for model fitting, and one validation set used to distinguish between model structures.
One way to penalise overly complex models is to use the Akaike Information Criterion (AIC) which, for normally distributed errors, is

$$\text{AIC} = \frac{2p}{N} + \ln(V) \tag{6.34}$$

where p is the number of parameters, N is the number of observations, and V is the loss function or sum of squared errors. If we increase the number of parameters to be estimated, the sum of squared errors decreases, but this is partially offset by the 2p/N term which increases. Hence by using the AIC we try to find a balance between models that fit the data but are parsimonious in parameters. The following example illustrates using the AIC for determining the appropriate order of an interpolating polynomial for some (x, y) data. Since this is a linear optimisation problem, it is mathematically equivalent to the fitting of an ARX model.
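Computing Eqn. 6.34 for a family of candidate models is straightforward; a minimal sketch for polynomial fits, assuming the data is held in vectors x and y and scanning up to 20 parameters:

N = length(x); pmax = 20;
AIC = zeros(pmax,1);
for p = 1:pmax
    c = polyfit(x,y,p-1);            % a (p-1)th-order polynomial has p parameters
    V = sum((y - polyval(c,x)).^2);  % loss function: sum of squared errors
    AIC(p) = 2*p/N + log(V);         % Eqn. 6.34
end
plot(1:pmax,AIC), xlabel('# of parameters'), ylabel('AIC')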
Determining the order of an interpolating polynomial
Suppose we have collected some experimental data as shown in Fig. 6.36 and we wish to fit a polynomial through the data cloud. The key question is: what is an appropriate polynomial order to use? Fig. 6.36 shows three alternatives: a quadratic (with 3 parameters), a cubic (with 4), and an extreme 10th-order polynomial (with 11). Which polynomial is best?

[Figure 6.36: The experimental data overlaid with fitted polynomials of n = 3, 4 and 11 parameters.]
Now clearly our fit is going to improve as we increase the interpolation order, but that is probably not what we want, since we end up fitting overly complex models that essentially just fit the noise. Instead we should plot the AIC from Eqn. 6.34 as a function of order, as shown in Fig. 6.37.
[Figure 6.37: The AIC as a function of the number of parameters, using the same data for both fitting and validation.]
When using the AIC criterion we are looking for a distinct minimum to tell us the appropriate polynomial order. However, looking at Fig. 6.37 we can see that the AIC improves rapidly up to n = 5, meaning that a quartic polynomial is appropriate, but then the curve flattens out. There is a distinct knee in the curve, but the AIC does not convincingly increase for the high order polynomials.
The oversight that we have made in Fig. 6.37 is that we have used the same data for the validation as that used for the fitting. In situations where we have scant experimental data we may have to resort to this, but it is far preferable to collect some separate data that we put aside and use only for validation. Fig. 6.38 shows the improvement this separation of fitted and validation data makes. Now we can see a distinct minimum in the AIC plot, which clearly shows the penalty of overfitting.
As it turns out, the experimental data used in this example was synthesised from a simple quartic polynomial with some added noise. Consequently, in this artificial case we know the true number of parameters is 5. Note, however, that the AIC suggests that the optimum number of parameters is 4, which in my opinion is acceptably close.
The SYSTEM IDENTIFICATION toolbox computes the AIC with the aic command, or when using the automated structure selection routine, selstruc. Alternatively, for a given model estimated say with armax, we can extract the required quantities ourselves:

p = nparams(Model,'free')  % # of free parameters
N = Model.es.DataLength    % # of data pairs, N
V = Model.es.LossFcn       % Sum of squared errors

Figure 6.38: Using the AIC to establish model structure based on validation data. Compare the AIC plot in this case, when using the validation data, with that where we used only the fitted data in Fig. 6.37.
Given a collection of potential models, we should pick the one that minimises the AIC.
6.6.1 Estimating model order
If we restrict our attention to linear models, the most crucial decision to make is that of model order. We can estimate the model order from a spectral analysis, perhaps by looking at the high-frequency roll-off of the experimentally derived Bode diagram.

An alternative strategy is to monitor the rank of the matrix XᵀX. If we over-estimate the order, then the matrix XᵀX will become singular. Of course with additive noise the matrix may not be exactly singular, but it will certainly become ill-conditioned. Further details are given in [127, pp496–497].
Identification of deadtime
The number of delay samples, d, in Eqn. 6.28 is a key structural parameter to be established prior to any model fitting. In some cases an approximate value of the deadtime is known from the physics of the system, but this is not always a reliable indicator. For example, the blackbox from Fig. 1.5 should not possess any deadtime since it is composed only of a passive resistor/capacitor network. However, the long time constant, the high system order, or the enforced one-sample delay in the A/D sampler may make it appear as if there is some deadtime.
If we suspect some deadtime, then we have the following options:

1. Do a step test and look for a period of inactivity.
2. Use the SYSTEM IDENTIFICATION TOOLBOX (SITB) and optimise the fit for deadtime. This will probably require a trial & error search. The delayest routine may help us with this identification.
3. Use some statistical tests (such as correlation) to estimate the deadtime.

Note: Obviously the deadtime will be an integer multiple of the sampling time. So if you want an accurate deadtime, you will need a small sample time T. However, overly fast sampling will cause problems in the numerical coefficients of the discrete model.
Method #1 Apply a step response & look for the first sign of life.
1. First we do an open loop step test using a sample time T = 0.5 seconds.
Figure 6.39: Identification of deadtime from the step response of the blackbox. Note that the deadtime is approximately 2T ≈ 1 second.
2. We should repeat the experiment at a different sampling rate to verify the deadtime. Fig. 6.40 compares both T = 1 and T = 0.1 seconds.

[Figure 6.40: The blackbox step response sampled at both T = 1 and T = 0.1 seconds to verify the deadtime.]
Method #2: Use trial & error with the SYSTEM IDENTIFICATION TOOLBOX. We can collect some input/output data and identify a model M(θ) with different deadtimes, namely d = 0 to 3. Clearly the case in Fig. 6.41 where d = 2 has the least error.
Figure 6.41: Model prediction and actual using a variety of tentative deadtimes, d = 0, 1, 2, 3.
This shows that, at least for the model M(θ) with nₐ = n_b = 2, the optimum number of delays is d = 2.
Modelling the blackbox
A model of the experimental blackbox is useful to test control schemes before you get to the lab. Based on the resistor/capacitor internals of the black-box, we expect an overdamped plant with time constants in the range of 2 to 5 seconds.
Identification tests on my blackbox gave the following continuous transfer function model

$$\hat{G}_{bb}(s) = \frac{0.002129s^2 + 0.02813s + 0.5788}{s^2 + 2.25s + 0.5362}\,e^{-0.5s}$$

or in factored form

$$\hat{G}_{bb}(s) \approx \frac{1.08\left(0.00368s^2 + 0.0486s + 1\right)}{(3.69s + 1)(0.505s + 1)}\,e^{-0.5s}$$
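This model is easily constructed for simulation studies; a sketch using the Control System toolbox:

Gbb = tf([0.002129 0.02813 0.5788],[1 2.25 0.5362],'IODelay',0.5);
step(Gbb)   % sanity check: overdamped response, steady-state gain of about 1.08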
Fig. 6.42 illustrates the SIMULINK experimental setup and compares the model predictions to the actual measured data.
6.6.2 Robust model fitting
So far we have chosen to fit models by selecting parameters that minimise the sum of squared residuals. This is known as the principle of least-squares prediction error and is equivalent to the maximum likelihood method when the uncertainty distribution is Gaussian. This strategy has
some very nice properties, first identified by Gauss in 1809: the estimates are consistent (that is, they eventually converge to the true parameters), the estimates are efficient in that no other unbiased estimator has a smaller variance, and the numerical strategies are simple and robust, as described in [11].

Figure 6.42: A dynamic model for the blackbox compared to actual output data.
However, by squaring the error we run into problems in the presence of outliers in the data. In these cases, which are clearly non-Gaussian but actually quite common, we must modify the form of the loss function. One strategy, due to [90], is to use a quadratic form when the errors are small but a linear form for large errors; another, suggested in [11], is to use something like

$$f(\epsilon) = \frac{\epsilon^2}{1 + a\epsilon^2} \tag{6.35}$$

and there are many others. Numerical implementations of these robust fitting routines are given in [204].
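As a minimal sketch, the loss of Eqn. 6.35 can be minimised with a general-purpose optimiser; the model function fmodel, starting guess theta0 and tuning constant a are assumptions:

a = 1;                                  % tuning constant in Eqn. 6.35
resid = @(theta) y - fmodel(theta,t);   % residuals for a trial parameter vector
robloss = @(theta) sum(resid(theta).^2 ./ (1 + a*resid(theta).^2));
theta_rob = fminsearch(robloss,theta0); % robust parameter estimates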
6.6.3 Common nonlinear model structures
Nonlinear models, or more precisely dynamic models that are nonlinear in the parameters, are considerably more complex to identify than linear models such as ARX, or even pseudo-linear models such as ARMAX, as outlined in the unified survey presented in [186]. For this reason, fully general nonlinear models are far less commonly used for control and identification than linear models. However, there are cases where we will need to account for some process nonlinearities, and one popular way is to consider only static nonlinearities that can be separated from the linear dynamics.
Fig. 6.43 shows how these block-oriented static nonlinear models can be structured. These are known as Hammerstein–Wiener models, or block-oriented static nonlinear models. The only difference between the two structures is the position of the nonlinear block element. In the Hammerstein model it precedes the linear dynamics, as when we are modelling input nonlinearities such as equal-percentage control valves; in the Wiener model the nonlinearity follows the linear dynamics, as when we are modelling nonlinear thermocouples. Note of course that, unlike linear systems, the order of the nonlinear blocks does make a difference, since in general nonlinear systems do not commute.
[Figure 6.43: Block-oriented static nonlinear models. Hammerstein model: a static nonlinearity vₜ = N(uₜ) precedes the linear dynamics B(q⁻¹)/A(q⁻¹). Wiener model: the linear dynamics B(q⁻¹)/A(q⁻¹) precede a static nonlinearity yₜ = N(vₜ).]
6.7 Online model identification
In many practical cases, such as adaptive control, or where the system may change from day to day, we will estimate the parameters of our models online. Other motivations for online identification given by [125, p303] include optimal control with model following, using matched filters, failure prediction, etc. Because the identification is online, it must also be reasonably automatic. The algorithm must pick a suitable model form (number of deadtimes, order of the difference equation etc.), guess initial parameters, and then calculate residuals and measures of fit.
As time passes, we should expect that, owing to the increasing number of data points that we are continually collecting, we are able to build better and better models. The problem with using the offline identification scheme as outlined in section 6.4 is that the data matrix, X in Eqn. 6.22, will grow and grow as we add more input/output rows to it. Eventually this matrix will grow too large to store or manipulate in our controller. There are two obvious solutions to this problem:
1. Use a sliding window where we retain only, say, the last 50 input/output data pairs, or

2. Use a recursive scheme where we achieve the same result, but without the waste of throwing old data away.

The following section develops this second, more attractive, alternative known as RLS or recursive least-squares.
6.7.1 Recursive least-squares
As more data is collected, we can update the current estimate of the model parameters. If we make the update at every sample time, our estimation procedure is no longer offline, but online. One approach to take advantage of the new data pair would be to add another row to the X matrix in Eqn. 6.22 as each new data pair is collected. Then a new estimate can be obtained using Eqn. 6.25 with the now augmented X matrix. This equation would be solved every sample time. However, this scheme has the rather big disadvantage that the X matrix grows as each new data point is collected, so consequently one will eventually run out of storage memory, to say nothing of the growing impracticality of the matrix inversion required. The solution is to use a recursive formula for the estimation of the new θ_{N+1} given the previous θ_N. Using this scheme, we have constant-sized matrices to store and process, but without the need to sacrifice old data.
The mean calculated recursively
Digressing somewhat, let us briefly look at the power of a recursive calculation scheme and the possible advantages it has for online applications. Consider the problem of calculating the mean or average, x̄, of a collection of N samples, xᵢ. The well known formula for the mean is

$$\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i$$

Now what happens (as it often does when I collect and grade student assignments) is that after I have laboriously summed and averaged the class assignments, I receive a late assignment or value x_{N+1}. Do I need to start my calculations all over again, or can I use a recursive formula, making use of the old mean x̄_N based on N samples, to calculate the new mean based on N + 1 samples? Starting with the standard formula for the new mean, we can develop an equivalent recursive equation
$$\bar{x}_{N+1} = \frac{1}{N+1}\sum_{i=1}^{N+1} x_i = \frac{1}{N+1}\left(x_{N+1} + \sum_{i=1}^{N} x_i\right) = \frac{1}{N+1}\left(x_{N+1} + N\bar{x}_N\right)$$

$$= \underbrace{\bar{x}_N}_{\text{old}} + \underbrace{\frac{1}{N+1}}_{\text{gain}}\underbrace{\left(x_{N+1} - \bar{x}_N\right)}_{\text{error}}$$

which is in the form: new estimate = old estimate + gain × error. Note that now the computation of the new mean, assuming we already have the old mean, is much faster and more numerically sound.
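A quick numerical check of this update (illustrative only):

x = randn(100,1);
xbar = x(1);
for N = 1:length(x)-1
    xbar = xbar + (x(N+1) - xbar)/(N+1);  % new = old + gain*error
end
abs(xbar - mean(x))   % should be at machine-precision level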
In a similar manner, the variance can also be calculated recursively. The variance at sample number N + 1, based on the mean and variance using samples 1 to N, is

$$\sigma^2_{N+1} = \sigma^2_N + \frac{1}{N+1}\left[\frac{N}{N+1}\left(x_{N+1} - \bar{x}_N\right)^2 - \sigma^2_N\right] \tag{6.36}$$
which once again is a linear combination of the previous estimate and a residual. For further details, see [121, p29]. This recursive way to calculate the variance, published by B.P. Welford, goes back to 1962 and has been shown to have good numerical properties.
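A matching numerical check of Eqn. 6.36 (note that the variance is updated before the mean, since the recursion uses x̄_N):

x = randn(500,1); xbar = x(1); s2 = 0;
for N = 1:length(x)-1
    s2 = s2 + ((N/(N+1))*(x(N+1)-xbar)^2 - s2)/(N+1);  % Eqn. 6.36
    xbar = xbar + (x(N+1)-xbar)/(N+1);                 % recursive mean
end
[s2, var(x,1)]   % should agree; var(x,1) is the population variance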
A recursive least-squares algorithm
Returning to the least-squares problem: suppose, having computed the estimate θ_N from N data pairs, we collect a new input/output pair and form the new regressor vector

$$\boldsymbol{\varphi}_{N+1} = \begin{bmatrix} -y(N) & \cdots & -y(N-n+1) & u(N+1) & \cdots & u(N-m+1) \end{bmatrix}^T \tag{6.37}$$

Appending the corresponding new row to Eqn. 6.22 gives

$$\begin{bmatrix} \mathbf{X}_N \\ \boldsymbol{\varphi}_{N+1}^T \end{bmatrix}\boldsymbol{\theta} = \begin{bmatrix} \mathbf{y}_N \\ y_{N+1} \end{bmatrix} \tag{6.38}$$

The old estimate, from Eqn. 6.25, was

$$\boldsymbol{\theta}_N = \left(\mathbf{X}_N^T\mathbf{X}_N\right)^{-1}\mathbf{X}_N^T\,\mathbf{y}_N \tag{6.39}$$

while the new estimate is

$$\boldsymbol{\theta}_{N+1} = \left(\begin{bmatrix} \mathbf{X}_N^T & \boldsymbol{\varphi}_{N+1}\end{bmatrix}\begin{bmatrix} \mathbf{X}_N \\ \boldsymbol{\varphi}_{N+1}^T \end{bmatrix}\right)^{-1}\begin{bmatrix} \mathbf{X}_N^T & \boldsymbol{\varphi}_{N+1}\end{bmatrix}\begin{bmatrix} \mathbf{y}_N \\ y_{N+1} \end{bmatrix} \tag{6.40}$$

except that now our problem has grown in the vertical dimension. Note, however, that the number of unknowns in the parameter vector remains the same.
However, we can simplify the matrix to invert by noting that

$$\begin{bmatrix} \mathbf{X}_N^T & \boldsymbol{\varphi}_{N+1}\end{bmatrix}\begin{bmatrix} \mathbf{X}_N \\ \boldsymbol{\varphi}_{N+1}^T \end{bmatrix} = \mathbf{X}_N^T\mathbf{X}_N + \boldsymbol{\varphi}_{N+1}\boldsymbol{\varphi}_{N+1}^T \tag{6.41}$$
The trick is to realise that we have already computed the inverse of X_NᵀX_N previously in Eqn. 6.25, and we would like to exploit this in the calculation of Eqn. 6.41. We can do so using the matrix inversion lemma, which states

$$\left(\mathbf{A} + \mathbf{BDC}\right)^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{B}\left(\mathbf{D}^{-1} + \mathbf{CA}^{-1}\mathbf{B}\right)^{-1}\mathbf{CA}^{-1} \tag{6.42}$$

In the special case where B = b and C = cᵀ are both n × 1 vectors, Eqn. 6.42 simplifies to

$$\left(\mathbf{A} + \mathbf{b}\mathbf{c}^T\right)^{-1} = \mathbf{A}^{-1} - \frac{\mathbf{A}^{-1}\mathbf{b}\mathbf{c}^T\mathbf{A}^{-1}}{1 + \mathbf{c}^T\mathbf{A}^{-1}\mathbf{b}} \tag{6.43}$$

Substituting A = X_NᵀX_N and b = c ≝ φ_{N+1} gives

$$\left(\mathbf{X}_N^T\mathbf{X}_N + \boldsymbol{\varphi}_{N+1}\boldsymbol{\varphi}_{N+1}^T\right)^{-1} = \left(\mathbf{X}_N^T\mathbf{X}_N\right)^{-1} - \frac{\left(\mathbf{X}_N^T\mathbf{X}_N\right)^{-1}\boldsymbol{\varphi}_{N+1}\boldsymbol{\varphi}_{N+1}^T\left(\mathbf{X}_N^T\mathbf{X}_N\right)^{-1}}{1 + \boldsymbol{\varphi}_{N+1}^T\left(\mathbf{X}_N^T\mathbf{X}_N\right)^{-1}\boldsymbol{\varphi}_{N+1}} \tag{6.44}$$
Substituting Eqn. 6.44 into Eqn. 6.40 gives

$$\boldsymbol{\theta}_{N+1} = \left[\left(\mathbf{X}_N^T\mathbf{X}_N\right)^{-1} - \frac{\left(\mathbf{X}_N^T\mathbf{X}_N\right)^{-1}\boldsymbol{\varphi}_{N+1}\boldsymbol{\varphi}_{N+1}^T\left(\mathbf{X}_N^T\mathbf{X}_N\right)^{-1}}{1 + \boldsymbol{\varphi}_{N+1}^T\left(\mathbf{X}_N^T\mathbf{X}_N\right)^{-1}\boldsymbol{\varphi}_{N+1}}\right]\left(\mathbf{X}_N^T\mathbf{y}_N + \boldsymbol{\varphi}_{N+1}\,y_{N+1}\right) \tag{6.45}$$
and recalling that the old parameter value, θ_N, was given by Eqn. 6.39, we can develop the new parameter vector in terms of the old,

$$\boldsymbol{\theta}_{N+1} = \boldsymbol{\theta}_N - \frac{\left(\mathbf{X}_N^T\mathbf{X}_N\right)^{-1}\boldsymbol{\varphi}_{N+1}\boldsymbol{\varphi}_{N+1}^T\boldsymbol{\theta}_N}{1 + \boldsymbol{\varphi}_{N+1}^T\left(\mathbf{X}_N^T\mathbf{X}_N\right)^{-1}\boldsymbol{\varphi}_{N+1}} + \frac{\left(\mathbf{X}_N^T\mathbf{X}_N\right)^{-1}\boldsymbol{\varphi}_{N+1}\,y_{N+1}}{1 + \boldsymbol{\varphi}_{N+1}^T\left(\mathbf{X}_N^T\mathbf{X}_N\right)^{-1}\boldsymbol{\varphi}_{N+1}} \tag{6.46}$$

$$= \boldsymbol{\theta}_N + \frac{\left(\mathbf{X}_N^T\mathbf{X}_N\right)^{-1}\boldsymbol{\varphi}_{N+1}}{1 + \boldsymbol{\varphi}_{N+1}^T\left(\mathbf{X}_N^T\mathbf{X}_N\right)^{-1}\boldsymbol{\varphi}_{N+1}}\left(y_{N+1} - \boldsymbol{\varphi}_{N+1}^T\boldsymbol{\theta}_N\right) \tag{6.47}$$
If we define the covariance matrix P_N as

$$\mathbf{P}_N \overset{\text{def}}{=} \left(\mathbf{X}_N^T\mathbf{X}_N\right)^{-1} \tag{6.48}$$

then the parameter update equation, Eqn. 6.47, is more concisely written as

$$\boldsymbol{\theta}_{N+1} = \boldsymbol{\theta}_N + \mathbf{K}_{N+1}\left(y_{N+1} - \boldsymbol{\varphi}_{N+1}^T\boldsymbol{\theta}_N\right) \tag{6.49}$$
with the gain K in Eqn. 6.49 and the new covariance, from Eqn. 6.44, given by

$$\mathbf{K}_{N+1} = \frac{\mathbf{P}_N\boldsymbol{\varphi}_{N+1}}{1 + \boldsymbol{\varphi}_{N+1}^T\mathbf{P}_N\boldsymbol{\varphi}_{N+1}} \tag{6.50}$$

$$\mathbf{P}_{N+1} = \mathbf{P}_N\left[\mathbf{I} - \frac{\boldsymbol{\varphi}_{N+1}\boldsymbol{\varphi}_{N+1}^T\mathbf{P}_N}{1 + \boldsymbol{\varphi}_{N+1}^T\mathbf{P}_N\boldsymbol{\varphi}_{N+1}}\right] = \left[\mathbf{I} - \mathbf{K}_{N+1}\boldsymbol{\varphi}_{N+1}^T\right]\mathbf{P}_N \tag{6.51}$$
Note that the form of the parameter update equation, Eqn. 6.49, is such that the new value θ_{N+1} is the old value θ_N plus a correction term, which is a recursive form. As expected, the correction term is proportional to the observed prediction error. A summary of the recursive least-squares (RLS) estimation scheme is given in Algorithm 6.2.
Algorithm 6.2 Recursive least-squares estimation
Initialise the parameter vector, \boldsymbol{\theta}_0, to something sensible (say random values), and set the covariance to a large value, \mathbf{P}_0 = 10^6\mathbf{I}, to reflect the initial uncertainty in the trial guesses.
1. At sample N, collect a new input/output data pair, y_{N+1} and u_{N+1}.
2. Form the new \boldsymbol{\varphi}_{N+1} vector by inserting u_{N+1} and y_N into Eqn. 6.37.
3. Evaluate the new gain matrix \mathbf{K}_{N+1}, Eqn. 6.50.
4. Update the parameter vector \boldsymbol{\theta}_{N+1}, Eqn. 6.49.
5. Update the covariance matrix \mathbf{P}_{N+1}, Eqn. 6.51, which is required for the next iteration.
6. Wait out the remainder of one sample time T, increment the sample counter, N \leftarrow N+1, then go back to step 1.
The routine in Listing 6.16 implements the basic recursive least-squares (RLS) Algorithm 6.2 in MATLAB. This routine will be used in subsequent identification applications, although later, in section 6.8, we will improve this basic algorithm to incorporate a forgetting factor.
Listing 6.16: A basic recursive least-squares (RLS) update (without forgetting factor)

function [thetaest,P] = rls0(y,phi,thetaest,P)
% Basic RLS update (the function signature follows the call in Listing 6.17)
K = P*phi/(1 + phi'*P*phi);                  % gain K, Eqn. 6.50
P = P-(P*phi*phi'*P)/(1 + phi'*P*phi);       % covariance update, P, Eqn. 6.51
thetaest = thetaest + K*(y - phi'*thetaest); % theta(N+1) = theta(N) + K(N+1)*(y(N+1) - phi(N+1)'*theta(N)), Eqn. 6.49
return % end rls0.m
A RECURSIVE ESTIMATION EXAMPLE

On page 270 we solved for the parameters of an unknown model using an offline technique by collecting four data pairs. Now we will try the recursive scheme, given that we have just collected a new fifth input/output data pair at sample time k = 4,
time, k   | ⋯ |   1  |   2  |   3  |   4
u_k       | ⋯ |   4  |  −3  |   2  |   6
y_k       | ⋯ |  10  |  29  |  64  | −10

Given the previous parameter estimate \boldsymbol{\theta}_3 = [2, -3]^T and the new data vector \boldsymbol{\varphi}_4 = [64, 6]^T, the prediction error for this new observation,

\epsilon_4 = y_4 - \boldsymbol{\varphi}^T_4\boldsymbol{\theta}_3 = -10 - \begin{bmatrix} 64 & 6 \end{bmatrix}\begin{bmatrix} 2 \\ -3 \end{bmatrix} = -120
is clearly non-zero, indicating that something has changed, so we should continue to update the parameter estimates. The covariance matrix at sample time k = 3 has previously been computed and (repeated here) is
\mathbf{P}_3 \overset{\text{def}}{=} \left(\mathbf{X}^T_3\mathbf{X}_3\right)^{-1} = \frac{1}{20262}\begin{bmatrix} 29 & 84 \\ 84 & 942 \end{bmatrix} = \begin{bmatrix} 0.0014 & 0.0041 \\ 0.0041 & 0.0465 \end{bmatrix}
so the new gain matrix \mathbf{K}_4 from Eqn. 6.50 is now

\mathbf{K}_4 = \frac{\mathbf{P}_3\boldsymbol{\varphi}_4}{1 + \boldsymbol{\varphi}^T_4\mathbf{P}_3\boldsymbol{\varphi}_4} = \begin{bmatrix} 0.0099 \\ 0.0464 \end{bmatrix}

giving the updated parameter estimate

\boldsymbol{\theta}_4 = \boldsymbol{\theta}_3 + \mathbf{K}_4\epsilon_4 = \begin{bmatrix} 2 \\ -3 \end{bmatrix} + \begin{bmatrix} 0.0099 \\ 0.0464 \end{bmatrix}(-120) = \begin{bmatrix} 0.8074 \\ -8.5727 \end{bmatrix}
and the covariance matrix \mathbf{P}_4 from Eqn. 6.51 is also updated to

\mathbf{P}_4 = \mathbf{P}_3 - \frac{\mathbf{P}_3\boldsymbol{\varphi}_4\boldsymbol{\varphi}^T_4\mathbf{P}_3}{1 + \boldsymbol{\varphi}^T_4\mathbf{P}_3\boldsymbol{\varphi}_4} = \begin{bmatrix} 0.0003 & -0.0013 \\ -0.0013 & 0.0212 \end{bmatrix}
Note that \mathbf{P}_4 is still symmetric and, less immediately apparent, also still positive definite, which we could verify by computing the eigenvalues (incidentally 0.0002 and 0.0213, both positive). When we subsequently obtain another input/output data pair (u_5, y_5), we can just continue with this recursive estimation scheme.
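As a quick check of the arithmetic, the update above can be reproduced in a few lines of MATLAB. This is a minimal sketch using the numerical values quoted in this example.

P3 = [0.0014 0.0041; 0.0041 0.0465];        % covariance at k = 3
theta3 = [2; -3]; phi4 = [64; 6]; y4 = -10; % old estimate & new data pair
e4 = y4 - phi4'*theta3                      % prediction error, -120
K4 = P3*phi4/(1 + phi4'*P3*phi4)            % gain, Eqn. 6.50
theta4 = theta3 + K4*e4                     % parameter update, Eqn. 6.49
P4 = P3 - (P3*(phi4*phi4')*P3)/(1 + phi4'*P3*phi4) % covariance update, Eqn. 6.51
eig(P4)                                     % both eigenvalues should be positive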
We can compare the new estimates, \boldsymbol{\theta}_4, with the values obtained if we used the non-recursive method of Eqn. 6.25 on the full data set. At this level, both methods show much the same complexity in terms of computation count and numerical round-off problems, but as N tends to \infty, the non-recursive scheme is increasingly disadvantaged.
STARTING

In practice there is a small difficulty in starting these recursive-style estimation schemes, since initially we do not know the covariance \mathbf{P}_N. However, we have two alternatives: we could calculate the first parameter vector using a non-recursive scheme and then switch to a recursive scheme once we know \mathbf{P}_N, or we could just assume that at the beginning, since we have no estimates, our expected error will be large, say \mathbf{P}_0 = 10^4\mathbf{I}. The latter method is more common in practice, although it takes a few iterations to converge to the correct estimates, if they exist.
6.7.2 RECURSIVE LEAST-SQUARES IN MATLAB
MATLAB can automate the estimation procedure so that it is suitable for online applications. To test this scheme we will try to estimate the same system \mathcal{S} from Eqn. 6.26 used previously on page 272, repeated here:
\mathcal{S}: \quad G(q^{-1}) = \frac{1 + 0.2q^{-1}}{1 - 1.9q^{-1} + 1.5q^{-2} - 0.5q^{-3}}
where the unknown vector of A and B polynomial coefficients consists of the five values

\boldsymbol{\theta} = \begin{bmatrix} 1.9 & -1.5 & 0.5 & \vdots & 1 & 0.2 \end{bmatrix}^T
which are to be estimated. Note of course that we do not need to estimate the leading 1 in the monic A polynomial.

The estimation is performed in open loop using a white noise input, simply to ensure persistent excitation. We will select a model structure the same as our true system. Our initial parameter estimate is a random vector, and the initial covariance matrix is set to the relatively large value of 10^6\mathbf{I}, as shown in Fig. 6.44. Under these conditions, we expect near-perfect estimation.
Listing 6.17 calls the rls0 routine from Listing 6.16 to do the updating of the estimated parameters and associated statistics using Eqns 6.50 and 6.51.
[Figure 6.44: Ideal RLS parameter estimation: white noise excites the unknown plant, and the RLS block, initialised with \boldsymbol{\theta}_0 random and a large initial covariance, delivers the parameter estimates. (See also Fig. 6.46(a).)]
Listing 6.17: Tests the RLS identification scheme using Listing 6.16.

G = tf([1,0.2,0,0],[1 -1.9 1.5 -0.5],1); % unknown plant G = (1+0.2q^-1)/(1-1.9q^-1+1.5q^-2-0.5q^-3)
G.variable = 'z^-1';
N = 12;                     % # of samples
U = randn(N,1);             % design random input u(k) = N(0,1)
Y = lsim(G,U);              % compute output

thetaest = randn(5,1);      % random initial parameter estimate, as described in the text
P = 1e6*eye(5); Param = []; % large initial covariance, P0 = 1e6*I

for i=4:N                   % start estimation loop
    phi = [Y(i-1:-1:i-3); U(i:-1:i-1)];       % shift phi register (column)
    [thetaest,P] = rls0(Y(i),phi,thetaest,P); % RLS update of theta, Listing 6.16
    Param = [Param; thetaest'];               % collect data
end
If you run the script in Listing 6.17 you should expect something like Fig. 6.45, giving the online estimated parameters (lower) which quickly converge to the true values after about 5 iterations. The upper plot in Fig. 6.45 shows the input/output sequence I used for this open-loop identification scheme. Remember that for this identification application we are not trying to control the process, and provided the input is sufficiently exciting for the identification, we do not really care precisely what particular values we use.

The near-perfect estimation in Fig. 6.45 is to be expected, and is due to the fact that we have no systematic error or random noise, and we have sufficient excitation in the input to stimulate a response in the output. This means that after 4 iterations (we have five parameters), we are solving an over-constrained system of linear equations for the exact parameter vector. Of course if we superimpose some random noise on y, which is more realistic, then our estimation performance will suffer.
[Figure 6.45: RLS estimation of the plant in Eqn. 6.26. Upper: the input/output data. Lower: the estimated parameters a_i and b_i converging to the true values within about 5 samples.]
A convenient way to generate the \boldsymbol{\varphi} vector is to store the old outputs in a shift register; for example, for three past outputs,

\begin{bmatrix} y_k \\ y_{k-1} \\ y_{k-2} \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} y_{k-1} \\ y_{k-2} \\ y_{k-3} \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} y_k \qquad (6.52)

where the state transition matrix has ones on the first sub-diagonal.
[Figure 6.46: RLS under SIMULINK: band-limited white noise drives the plant, and an AutoRegressive with eXternal input (ARX) model estimator block identifies the parameters; the lower trace shows the error in the simulated model. In this case, with no model/plant mismatch and no disturbances, we rapidly obtain perfect estimation.]
In general, if we want to construct a shift register of multiple old values of input and output, we
can construct a discrete state-space system as
\begin{bmatrix} y_k \\ y_{k-1} \\ \vdots \\ y_{k-n_a} \\ u_k \\ u_{k-1} \\ \vdots \\ u_{k-n_b} \end{bmatrix} = \begin{bmatrix} \begin{matrix} 0 & \cdots & 0 & 0 \\ 1 & & & 0 \\ & \ddots & & \vdots \\ & & 1 & 0 \end{matrix} & \mathbf{0} \\ \mathbf{0} & \begin{matrix} 0 & \cdots & 0 & 0 \\ 1 & & & 0 \\ & \ddots & & \vdots \\ & & 1 & 0 \end{matrix} \end{bmatrix}\begin{bmatrix} y_{k-1} \\ y_{k-2} \\ \vdots \\ y_{k-n_a-1} \\ u_{k-1} \\ u_{k-2} \\ \vdots \\ u_{k-n_b-1} \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 0 \\ \vdots & \vdots \\ 0 & 1 \\ \vdots & \vdots \\ 0 & 0 \end{bmatrix}\begin{bmatrix} y_k \\ u_k \end{bmatrix} \qquad (6.53)
where we are collecting n_a shifted values of y, and n_b values of u.
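As a concrete illustration, the following minimal MATLAB sketch (the variable names are my own) builds the block-diagonal shift-register matrices of Eqn. 6.53 for arbitrary n_a and n_b.

na = 3; nb = 2;                 % # of past outputs & inputs to store
S = @(n) diag(ones(n-1,1),-1);  % shift matrix: ones on the first sub-diagonal
A = blkdiag(S(na+1),S(nb+1));   % block-diagonal state transition matrix
B = zeros(na+nb+2,2);
B(1,1) = 1; B(na+2,2) = 1;      % the new y(k) and u(k) enter the top of each register
x = zeros(na+nb+2,1);           % state: [y(k)...y(k-na), u(k)...u(k-nb)]'
% at each sample: x = A*x + B*[yk; uk];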
[Figure 6.47: Implementing RLS under SIMULINK without using the System Identification toolbox. (a) Band-limited white noise drives the "unknown" plant while a subsystem assembles \boldsymbol{\varphi} and an RLS block computes the estimates. (b) The internals of the RLS block: the inputs are the current plant output and \boldsymbol{\varphi}, and the outputs are the parameter estimate, \boldsymbol{\theta}, and the covariance matrix \mathbf{P}, built from matrix-multiply, bias and unit-delay blocks.]
6.7.3 TRACKING
One must be careful not to place too much confidence in the computed or inherited covariance matrix \mathbf{P}. Ideally this matrix gives an indication of the errors surrounding the parameters, but since we started with an arbitrary value for \mathbf{P}, we should not expect to believe these values too literally. Even if we did start with the non-recursive scheme and then later switched, the now supposedly correct covariance matrix will be affected by the ever-present nonlinearities and by the underlying assumptions of the least-squares regression algorithm (such as perfect independent-variable knowledge), which are rarely satisfied in practice.
Suppose we attempt to estimate the two parameters of the plant

G = \frac{b_0}{1 + a_1 q^{-1}} \qquad (6.54)
where the exact values for the parameters are b_0 = 1 and a_1 = 0.5. Fig. 6.48 shows the parameter estimates slowly converging to the true values (starting with an unreasonably low \mathbf{P}_0 = 2\mathbf{I} to better illustrate the trend), and the elements of the covariance matrix \mathbf{P} decreasing over the first 10 samples.
[Figure 6.48: Recursive least-squares parameter estimation. Upper: the input/output data. Middle: the estimates of b_0 and a_1 slowly converging to the true values. Lower: trace(\mathbf{P}) decreasing over the first 10 samples.]
Now even though in this simulated example we have the luxury of knowing the true parameter values, the RLS scheme also delivers an uncertainty estimate of the estimated parameters via the covariance matrix \mathbf{P}. Fig. 6.49, which uses the same data as Fig. 6.48, superimposes the confidence ellipses in parameter space around the current best estimate. (The darker points and associated ellipses correspond to more recent data.) It is interesting to note that after the 10 or so samples, the estimates in Fig. 6.48 have nearly converged to the true values, but the confidence ellipses in Fig. 6.49 still show surprisingly large uncertainties.
[Figure 6.49: The trend of the 95% confidence regions in (b_0, a_1) parameter space for the first 10 estimates shown in Fig. 6.48.]
The impressive results of the fully deterministic estimation example shown in Fig. 6.45 are somewhat misleading, since we had no process noise, good input excitation, no model/plant structural mismatch, and we only attempted to identify the model once. All of these issues are crucial for the practical implementation of an online estimator. The following simulation examples will investigate some of these problems.
Suppose we want to identify a plant where there is an abrupt change in dynamics at sample time k = 50,

G = \begin{cases} \cdots, & k < 50 \\[4pt] \dfrac{1.2\,q^{-1}}{1 - 0.3\,q^{-1} - 0.7\,q^{-2}}, & k \ge 50 \end{cases}
A real example of such an abrupt change in dynamics might arise with a gantry crane, where model 1 applies when the crane is unloaded, and model 2 when the crane has picked up a heavy shipping container, for example.

In the case shown in Fig. 6.50, as in Fig. 6.45, the parameters quickly converge to the correct values for the first plant, but after the abrupt model change they only reluctantly converge to the new values.
[Figure 6.50: Identifying abrupt plant changes. Upper: the input/output data and prediction error. Lower: the estimates of b_0, a_1 and a_2 over 300 samples; convergence is rapid at the start, but only sluggish after the plant change.]
6.8 THE FORGETTING FACTOR
Sometimes it is a good idea to forget about the past. This is especially true when the process parameters have changed in the interim. As we saw in Fig. 6.50, if we simply implement Eqns 6.50 and 6.49, or alternatively run the routine in Listing 6.17, we will find the estimation works well initially: the estimates converge to the true values, and the trace of the covariance matrix decreases as our confidence improves. However if the process subsequently changes again, we find that the parameters do not converge as quickly to the new values. This may be surprising, since the convergence worked fine at first, so why didn't it work a second time?

The reason for this failure to converge a second time is that the now minuscule covariance matrix, reflecting our large but misplaced confidence, inhibits any further large changes in \boldsymbol{\theta}.

One solution to this problem is to reset the covariance matrix to some large value, say 10^3\mathbf{I}, when the system change occurs. While this works well, it is not generally feasible, since we do not know when the change is likely to take place. If we did, then gain scheduling may be more appropriate.
An alternative scheme is to introduce a forgetting factor, \lambda, to decrease the influence of old samples. With the inclusion of the forgetting factor, the objective function of the fitting exercise is modified from Eqn. 6.23 to

\mathcal{J} = \sum_{k=1}^{N} \lambda^{N-k}\left(y_k - \boldsymbol{\varphi}^T_k\boldsymbol{\theta}\right)^2, \qquad \lambda < 1 \qquad (6.55)

where data n samples in the past is weighted by \lambda^n. The smaller \lambda is, the quicker the identification scheme discounts old data, as shown in Fig. 6.51. Incidentally, it would make more sense if the forgetting factor were called the remembering factor, since a higher value means more memory!
Incorporating the forgetting factor into the recursive least-squares scheme modifies the gain and covariance matrix updates only very slightly to

\mathbf{K}_{N+1} = \frac{\mathbf{P}_N\boldsymbol{\varphi}_{N+1}}{\lambda + \boldsymbol{\varphi}^T_{N+1}\mathbf{P}_N\boldsymbol{\varphi}_{N+1}} \qquad (6.56)

\mathbf{P}_{N+1} = \frac{1}{\lambda}\left(\mathbf{P}_N - \frac{\mathbf{P}_N\boldsymbol{\varphi}_{N+1}\boldsymbol{\varphi}^T_{N+1}\mathbf{P}_N}{\lambda + \boldsymbol{\varphi}^T_{N+1}\mathbf{P}_N\boldsymbol{\varphi}_{N+1}}\right) = \frac{\mathbf{P}_N}{\lambda}\left(\mathbf{I} - \boldsymbol{\varphi}_{N+1}\mathbf{K}^T_{N+1}\right) \qquad (6.57)
To choose an appropriate forgetting factor, we should note that the information dies away with a time constant of N sample units, where

N = \frac{1}{1-\lambda} \qquad (6.58)

As evident from Fig. 6.51, a value of \lambda of 0.99 (which is typical) gives a time constant of 100 samples.
Consequently, most textbooks recommend a forgetting factor of between 0.95 and 0.999, but lower values may be suitable if you expect to track fast plant changes. However if we drop \lambda too much, we will run into covariance windup problems, which are further described in section 6.8.2.

We can demonstrate the usefulness of the forgetting factor by creating and identifying a time-varying model, but first we need to make some small changes to the recursive least-squares update function rls given previously in Listing 6.16 to incorporate the forgetting factor, as shown
[Figure 6.51: The weighting \lambda^n applied to data n samples in the past, for \lambda = 1, 0.995, 0.99, 0.98 and 0.95. As the forgetting factor is reduced, older samples are deemed less important.]
in Listing 6.18, which we should use from now on. Note that if \lambda is not specified explicitly, a value of 0.995 is used by default.
Listing 6.18: A recursive least-squares (RLS) update with a forgetting factor. (See also Listing 6.16.)

function [thetaest,P] = rls(y,phi,thetaest,P,lambda)
if nargin < 5, lambda = 0.995; end                 % default forgetting factor
K = P*phi/(lambda + phi'*P*phi);                   % gain K, Eqn. 6.56
P = (P-(P*phi*phi'*P)/(lambda+phi'*P*phi))/lambda; % covariance update, P, Eqn. 6.57
thetaest = thetaest + K*(y - phi'*thetaest);       % parameter update, Eqn. 6.49
return % end rls.m
6.8.1 THE INFLUENCE OF THE FORGETTING FACTOR
Figure 6.52 shows how varying the forgetting factor alters the estimation performance when faced with a time-varying system. The true time-varying parameters \mathcal{S}(\boldsymbol{\theta}) are the dotted lines, and the estimated parameters \mathcal{M}(\boldsymbol{\theta}) for different choices of \lambda from 0.75 to 1.1 are the solid lines. When \lambda = 1 (top trend in Fig. 6.52), effectively no forgetting is used, and the simulation shows quick convergence of the estimated parameters to the true parameters initially; but when the true plant \mathcal{S}(\boldsymbol{\theta}) abruptly changes at t = 200, the estimates \mathcal{M}(\boldsymbol{\theta}) follow only very sluggishly, and indeed never really converge convincingly to the new \mathcal{S}(\boldsymbol{\theta}). However, by incorporating a forgetting factor of \lambda = 0.95 (second trend in Fig. 6.52), better convergence is obtained for the second and subsequent plant changes.
[Figure 6.52: Comparing the estimation performance of an abruptly time-varying unknown plant with different forgetting factors, \lambda. Estimated parameters b_1, a_2 and a_3 (solid) and true parameters (dashed) for (a) \lambda = 1, (b) \lambda = 0.95, (c) \lambda = 0.75, (d) \lambda = 1.1. Note that normally one would use \lambda \approx 0.95, and never \lambda > 1.]
Reducing the forgetting factor still further, to \lambda = 0.75 (third trend in Fig. 6.52), increases the convergence speed, although this scheme will exhibit less robustness to noisy data. We should also note that there are larger errors in the parameter estimates, \mathcal{S}(\boldsymbol{\theta}) - \mathcal{M}(\boldsymbol{\theta}), during the transients when \lambda = 0.75 than in the case where \lambda = 1. This is a trade-off that should be considered in the design of these estimators. We could further decrease the forgetting factor, and we would expect faster convergence, although with larger errors in the transients. In this simplified simulation we have no model/plant mismatch, no noise, and abrupt true process changes. Consequently, in this unrealistic environment a very small forgetting factor is optimal. Clearly in practice the above conditions are not met, and values just shy of 1.0, say 0.995, are recommended.

One may speculate what would happen if the forgetting factor were set greater than unity (\lambda > 1). Here the estimator is heavily influenced by past data: the more distant the past, the more the influence. In fact, it essentially disregards all recent data. How recent is recent? Well, actually it will disregard all data except perhaps the first few data points collected. This will not make an effective estimator. A simulation where \lambda = 1.1 is shown in the bottom trend of Fig. 6.52, and demonstrates that the estimated parameters converge to the correct parameters initially, but fail to budge from then on. The conclusion is not to let \lambda assume values greater than 1.0.
6.8.2 COVARIANCE WIND-UP
Online estimation in practice tends to work well at first, but if the estimator is left running for extended periods of time, such as a few days or even weeks, the algorithm often eventually fails. One of the reasons leading to a breakdown is that during periods of little interest, such as when the system is at steady-state, there is no information arriving at the estimator to further reduce the uncertainty of the estimated parameters. In fact the reverse occurs: the uncertainty grows, and this is seen as a slow but sure exponential growth in the elements of the covariance matrix \mathbf{P}. Eventually the matrix will overflow, and this is referred to as covariance wind-up. Any practical algorithm must monitor for this problem and take steps to avoid it.
During periods of little interest, the covariance update, Eqn. 6.57, at sample time k reduces to effectively

\mathbf{P}_{k+1} = \frac{\mathbf{P}_k}{\lambda}\Big(\mathbf{I} - \underbrace{\boldsymbol{\varphi}_{k+1}\mathbf{K}^T_{k+1}}_{\text{almost zero}}\Big) \approx \frac{\mathbf{P}_k}{\lambda} \qquad (6.59)
which means that the \mathbf{P} matrix increases each sample by a factor of 1/\lambda, which under normal circumstances is around 2%. It will continue this exponential increase until either there is sufficient excitation in input and output to start to drop the covariance again, or it overflows.
Consequently it is prudent to watch the size of \mathbf{P} by monitoring the size of the individual elements. A suitable scalar measure is the trace of the \mathbf{P} matrix, which is simply the sum of the diagonal elements,

\text{tr}(\mathbf{P}) \overset{\text{def}}{=} \sum_{i=1}^{n} p_{i,i} = \sum_{i=1}^{n} \lambda_i

The trace of a matrix has the nice property that it also equals the sum of the n eigenvalues, \lambda_i. Since \mathbf{P} is positive definite, the trace will always be positive. If the trace gets too large, then some corrective action is required.
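In code, this monitoring amounts to one extra check per iteration. The following is a minimal sketch; the threshold and the reset value are assumptions to be tuned for the application at hand.

% Guard against covariance windup by monitoring tr(P)
if trace(P) > 1e8            % overflow threshold (an assumption)
    P = 1e3*eye(size(P,1));  % reset to a large, but finite, covariance
end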
Suppose we try to identify the same system as used in section 6.7.2,

\mathcal{S} = \frac{1.2q^{-1}}{1 - 0.25q^{-1} - 0.50q^{-2}}
using a recursive least-squares algorithm with a forgetting factor of 0.98. Rather than using the optimal white noise as an input excitation signal, we will use a poor input excitation, such as a square wave with an excessively long period. While the parameters do converge to the correct values, Fig. 6.53 also shows that the trace of \mathbf{P} increases during the periods of no input excitation to astronomically large values, such as 10^{15}. Clearly the covariance matrix grows without bound and will soon exceed the numerical limits of our control computer. If instead we had used the ideal white noise input signal, the continual excitation would keep the trace of \mathbf{P} small. As evident from Eqn. 6.59, the smaller our forgetting factor is, the faster the covariance matrix will wind up.
MODIFICATIONS
Introducing a forgetting factor is just one attempt to control the behaviour of the covariance matrix. As you might anticipate, there are many more modifications to the RLS scheme along these lines, such as variable forgetting factors, directional forgetting factors, constant-trace algorithms, adaptive forgetting factors, startup forgetting factors, etc. See [203] for further details.

The variable forgetting factor introduced in [68] is particularly easy to implement. A similar algorithm was developed at around the same time in [202], and the two algorithms are compared in [176].
[Figure 6.53: Covariance windup. Upper: the input/output data with a poorly exciting, long-period square-wave input. Lower: with \lambda = 0.98, tr(\mathbf{P}) grows to astronomical values during the periods of no excitation.]
The time-varying forgetting factor is now no longer a constant value, but is adjusted according to

\lambda(t) = 1 - \frac{\left(1 - \boldsymbol{\varphi}^T_t\mathbf{K}_t\right)\epsilon^2_t}{\Sigma_0} \qquad (6.60)

where \epsilon is the residual error, and \Sigma_0 is chosen as N_\Sigma\sigma^2, where \sigma^2 is the anticipated measurement noise variance of the system, and N_\Sigma adjusts the rate of adaption. One way to interpret N_\Sigma is from Eqn. 6.58, thus giving

\Sigma_0 = N_\Sigma\sigma^2 = \frac{\sigma^2}{1-\lambda}

using a nominal value of \lambda = 0.995. It may also be prudent to place some safety limits on \lambda(t), such as

0.3 \le \lambda(t) \le 0.9999
Fig. 6.54 shows the result of using this simple improvement to the estimation algorithm when we have multiple abrupt plant changes. Note that in this case (as opposed to the situation given in Fig. 6.50), we do manage to re-identify the second and subsequent changes, and the trace of the covariance matrix never gets too large. During the changes, however, the varying forgetting factor does drop significantly, at times reaching the lower limit of 0.3.
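A minimal sketch of an RLS update using this variable forgetting factor is given below. The ordering of the gain and \lambda computations, and the inputs sigma2 (the anticipated noise variance) and Nsigma, are assumptions following Eqns 6.60 and 6.58, not a definitive implementation.

function [thetaest,P,lambda] = rls_vff(y,phi,thetaest,P,sigma2,Nsigma)
% RLS update with a variable forgetting factor, Eqn. 6.60 (a sketch)
e = y - phi'*thetaest;                         % prediction error
K = P*phi/(1 + phi'*P*phi);                    % gain, Eqn. 6.50
lambda = 1 - (1 - phi'*K)*e^2/(Nsigma*sigma2); % variable lambda, Eqn. 6.60
lambda = min(max(lambda,0.3),0.9999);          % safety limits on lambda(t)
thetaest = thetaest + K*e;                     % parameter update, Eqn. 6.49
P = (P - K*(phi'*P))/lambda;                   % covariance update, cf. Eqn. 6.57
end % rls_vff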
6.8.3 MAINTAINING A POSITIVE DEFINITE COVARIANCE MATRIX

A key concern when implementing a recursive identification algorithm, particularly in limited-precision arithmetic, is to always ensure that the covariance matrix remains symmetric positive definite, or SPD. Due to arithmetic operations carried out with limited precision, the \mathbf{P} matrix can lose its positive definite property, causing the algorithm to fail. The question then is what to do about that. Ideally we would like to first check if the matrix is indefinite, and if so, adjust the matrix to the nearest positive definite matrix. Unfortunately this is not a simple or computationally cheap correction, but the first thing we could do is to enforce symmetry with a simple correction like P = (P + P')/2,
[Figure 6.54: Using a variable forgetting factor to improve the identification of abrupt plant changes. From top: the input/output data, the parameter estimates, tr(\mathbf{P}), and the varying forgetting factor \lambda(t), which at times drops to its lower limit of 0.3. Note the improvement over the problems exhibited in Fig. 6.50.]
and then, to enforce positive definiteness, we can take an eigenvalue decomposition, simply reset any negative eigenvalues to a small positive number, and then rebuild the original matrix.

[V,D] = eig(P);                   % eigenvalue decomposition, P = V*D*V'
P = V*diag(max(diag(D),eps))*V';  % reset negative eigenvalues to eps & rebuild
Strictly speaking, the final line should have been \mathbf{VDV}^{-1}, but since \mathbf{V} is orthogonal, \mathbf{V}^T = \mathbf{V}^{-1}. In fact this correction strategy is itself susceptible to small numerical errors, so the similar routine nearestSPD by John D'Errico, available from the Mathworks website, should be used.
6.9 IDENTIFICATION BY PARAMETER OPTIMISATION
Model identification can also be achieved using local parametric optimisation theory. This has the advantage that it is conceptually simple, allows for general, possibly nonlinear, functions, and closely follows classical optimisation techniques. The basic idea is to update the parameter vector so that a performance index is minimised. In this manner, the problem is reduced to a normal minimisation problem, for which many computer algorithms already exist. However one typically assumes that we have good starting guesses for the parameters, and one must live with the resulting slow rate of convergence. Also, these simple optimisation schemes can become unstable, and the resulting stability analysis is typically quite involved. This section closely follows [113, pp66–73].
Suppose we are given a system with known denominator dynamics,

y(t) = \frac{b_0}{1 + a_1 s + a_2 s^2}\, u(t) \qquad (6.61)
but where we wish to estimate the steady-state gain, b_0. Therefore our model \mathcal{M} of the process is

\mathcal{M}: \quad (1 + a_1 s + a_2 s^2)\,\hat{y}(t) = \hat{b}_0 u(t) \qquad (6.62)

where we wish \hat{b}_0 \rightarrow b_0. We will take the squared error, \epsilon = y - \hat{y}, as our performance index,

\mathcal{J} = \frac{1}{2}\int \epsilon^2\, dt \qquad (6.63)

and adjust the parameter estimate in the direction of steepest descent of \mathcal{J},

\Delta\hat{b}_0 = -k\,\frac{\partial \mathcal{J}}{\partial \hat{b}_0} \qquad (6.64)

where k > 0 is the adaption gain, which can be adjusted to vary the speed of convergence. The rate of change of our parameter is then

\frac{d\hat{b}_0}{dt} = -k\,\frac{\partial}{\partial t}\,\frac{\partial \mathcal{J}}{\partial \hat{b}_0} \qquad (6.65)
Now we can reverse the order of differentiation in Eqn. 6.65, provided the system does not change too fast,

\frac{d\hat{b}_0}{dt} = -k\,\frac{\partial}{\partial \hat{b}_0}\,\frac{\partial \mathcal{J}}{\partial t} = -\frac{k}{2}\,\frac{\partial \epsilon^2}{\partial \hat{b}_0} = -k\,\epsilon\,\frac{\partial \epsilon}{\partial \hat{b}_0} \qquad (6.66)
Eqn. 6.66 is referred to as the MIT rule. To use the MIT rule in an adaptive scheme, we must evaluate the sensitivity function,

\frac{\partial \epsilon}{\partial \hat{b}_0} = \underbrace{\frac{\partial y}{\partial \hat{b}_0}}_{=0} - \frac{\partial \hat{y}}{\partial \hat{b}_0} = -\frac{\partial \hat{y}}{\partial \hat{b}_0} \qquad (6.67)

so the adaption law becomes

\frac{d\hat{b}_0}{dt} = k\,\epsilon\,\frac{\partial \hat{y}}{\partial \hat{b}_0} \qquad (6.68)
The sensitivity function of the predicted output \hat{y} with respect to the parameter is obtained by differentiating the original model, Eqn. 6.62. Note that

(1 + a_1 s + a_2 s^2)\,\hat{y} = \hat{b}_0 u \quad \Longleftrightarrow \quad \hat{y} + a_1\dot{\hat{y}} + a_2\ddot{\hat{y}} = \hat{b}_0 u

so that the partial derivative of model \mathcal{M} with respect to \hat{b}_0 is

\frac{\partial \hat{y}}{\partial \hat{b}_0} + a_1\frac{\partial \dot{\hat{y}}}{\partial \hat{b}_0} + a_2\frac{\partial \ddot{\hat{y}}}{\partial \hat{b}_0} = u \qquad (6.69)

or, swapping the order of the time and parameter differentiations,

\frac{\partial \hat{y}}{\partial \hat{b}_0} + a_1\frac{\partial}{\partial t}\frac{\partial \hat{y}}{\partial \hat{b}_0} + a_2\frac{\partial^2}{\partial t^2}\frac{\partial \hat{y}}{\partial \hat{b}_0} = u \qquad (6.70)

(1 + a_1 s + a_2 s^2)\,\frac{\partial \hat{y}}{\partial \hat{b}_0} = u \qquad (6.71)

so the required sensitivity is

\frac{\partial \hat{y}}{\partial \hat{b}_0} = \frac{u}{1 + a_1 s + a_2 s^2} \qquad (6.72)

= \frac{\hat{y}}{\hat{b}_0} \qquad (6.73)
which would all work very nicely except that we do not know b_0! (It is, after all, what we are trying to estimate.) So we substitute the current estimate \hat{b}_0 in the adaption law, giving

\frac{d\hat{b}_0}{dt} = k\,\epsilon\,\frac{\hat{y}}{\hat{b}_0} \qquad (6.74)
Algorithm 6.3 Searching for the plant gain
Given an unknown system \mathcal{S} producing measurable outputs y(t), and a model \mathcal{M} with an initial parameter estimate \hat{b}_0 producing outputs \hat{y}(t), both subjected to known inputs u(t):
1. Using the model \mathcal{M} with the current best estimate of \hat{b}_0, predict \hat{y} and compare with the actual system output, \epsilon(t) = y(t) - \hat{y}(t).
2. Update the parameter estimate using the adaption law, Eqn. 6.74.
3. Go back to step 1.
EXAMPLE OF PARAMETRIC ADAPTION

We will demonstrate this parametric adaption where our true, but only partially known, system and corresponding model are

\mathcal{S} = \frac{0.45}{s^2 + 4.2s + 4.4}, \qquad \mathcal{M} = \frac{\hat{b}_0}{s^2 + 4.2s + 4.4}

where the initial estimate of the unknown gain b_0 has the wrong sign, at \hat{b}_0 = -0.3.
We will do most of the calculations in the discrete domain, so we will approximate the time derivative in the adaption law, d\hat{b}_0/dt, with an Euler approximation, which when discretised at sample time T gives

\hat{b}_0(t) = \hat{b}_0(t-1) + T\,k\,\epsilon\,\frac{\hat{y}}{\hat{b}_0}
The upper plot in Fig. 6.55, calculated from Listing 6.19, compares the result of using the model \mathcal{M} with no adaption with one where the model is adapted according to the MIT rule. Note that eventually the model with adaption closely follows the true process. The lower plot shows the parameter estimate \hat{b}_0 slowly converging to the true value of 0.45. Note that in this case the adaptation gain is k = 8, and that the adapted model has a much reduced integrated error compared to the model with no adaption.
[Figure 6.55: Parameter adaption using the steepest descent method, with adaption gain k = 8 and sample time T = 0.175 s. The upper plot compares the true system (solid) with the open-loop model (dotted) and the adapted model (dashed). The lower plot shows the parameter adapting gradually to the correct value of b_0 = 0.45.]
Listing 6.19: Parameter adaption of the plant gain using the MIT rule.

S = tf(0.45,[1 4.2 4.4]);    % true plant S = 0.45/(s^2+4.2s+4.4), as defined in the text
b0_est = -0.3;               % estimate of parameter b0 = -0.3, while actual b0 = 0.45
G = tf(b0_est,[1 4.2 4.4]);  % estimated model b0/(s^2 + 4.2s + 4.4)
dt = 0.175; t = [0:dt:60]';  % simulate truth & open loop with a random input
U = 0.6*randn(size(t))+0.5*sin(t/4);
Yx = lsim(S,U,t);            % truth output, S
Yol = lsim(G,U,t);           % open loop estimate, M
Y = Yx; B = NaN*Y; k = 8;    % adaption gain, k = 8

for i=3:length(t)-1          % now do the estimation ...
    G.num = b0_est; Gd = c2d(G,dt);
    [num,den] = tfdata(Gd,'v');
    Y(i) = -den(2:3)*Y(i-1:-1:i-2) + num*U(i:-1:i-2);
    e = Yx(i)-Y(i);          % model/plant error
    b0_est = b0_est + dt*k*e*Y(i)/b0_est; % b0 <- b0 + T*k*e*yhat/b0, Eqn. 6.74
    B(i,:) = b0_est;
end % for

d = [Y-Yx, Yx-Yol]; jise = sum(abs(d)); % performance analysis, see Fig. 6.55
Choosing an appropriate adaptation gain is somewhat of an art. Too small a value, and the adaptation will be too slow; too large, and the adaptation may become unstable. Fig. 6.56 trends the integrated absolute error for the adaption scheme as a function of adaptation gain, which can be compared with the open-loop scheme over the time horizon used in Fig. 6.55. In this application the best setting for k is around 50, but this is very dependent on starting conditions, problem type, etc. Above this value the performance rapidly deteriorates, and one would often be better off using only the uncorrected open-loop model. As k \rightarrow 0 there is effectively no adaption, and this is equivalent to the open-loop model.
[Figure 6.56: The integrated error of the adaption scheme as a function of the adaptation gain, k, showing an optimal gain of around k \approx 50.]

6.10 IDENTIFICATION WITH COLOURED NOISE
So far we have assumed that the noise acting on our system has been random white noise. However in practice, coloured noise is far more common, and blindly applying the RLS scheme to a
process disturbed by coloured noise will in general result in biased estimates. The text [203, p102]
contains further details. Recall that the ARMAX model of Eqn. 6.13, repeated here,
A(q)y_k = B(q)u_{k-d} + \underbrace{C(q)e_k}_{\text{coloured noise}} \qquad (6.75)

with

A = 1 + a_1q^{-1} + \cdots + a_{n_a}q^{-n_a}, \quad B = b_0 + b_1q^{-1} + \cdots + b_{n_b}q^{-n_b}, \quad C = 1 + c_1q^{-1} + \cdots + c_{n_c}q^{-n_c}

where the A and C polynomials are assumed monic. A schematic of Eqn. 6.75 is given in Fig. 6.57, which is a special case of the general linear model of Fig. 6.32.
The noise model polynomial, C, is typically never larger than first or second order. We wish to estimate the n_a + n_b + n_c + 1 unknown coefficients of A, B and C. The problem we face when trying to estimate the coefficients of C is that we can never directly measure the noise term e_k. The best we can do is replace this unknown error term with an estimate, such as the prediction error

\epsilon_k = y_k - \boldsymbol{\varphi}_k\hat{\boldsymbol{\theta}}_{k-1} \qquad (6.76)

and augment the data vector with estimates for the C polynomial terms. This scheme is referred to as recursive extended least-squares, or RELS. Note carefully the difference in nomenclature between the actual, but unmeasured, error e and the estimate of that error, \epsilon.
[Figure 6.57: Addition of coloured noise to a dynamic process: the input u(t) passes through q^{-d}B and 1/A to form the output y(t), with the coloured noise entering through the 1/A block. See also Fig. 6.32.]
The recursive extended least-squares algorithm given in Algorithm 6.4 is an extension of the standard recursive least-squares algorithm.

Algorithm 6.4 Recursive extended least-squares
We aim to recursively estimate the parameters of the A, B and C polynomials in the dynamic process

A(q^{-1})y_k = B(q^{-1})u_{k-d} + C(q^{-1})e_k

by measuring u_k and y_k. We note that A and C are assumed monic, so the parameter vector is

\boldsymbol{\theta} \overset{\text{def}}{=} \begin{bmatrix} a_1, \ldots, a_{n_a} & \vdots & b_0, \ldots, b_{n_b} & \vdots & c_1, \ldots, c_{n_c} \end{bmatrix}^T

1. Construct the augmented data row vector using past input/output data and past prediction-error data,

\boldsymbol{\varphi}_k = \Big[\underbrace{y_{k-1}, \ldots, y_{k-n_a}}_{\text{past outputs}},\ \underbrace{u_{k-d}, \ldots, u_{k-d-n_b}}_{\text{inputs}},\ \underbrace{\epsilon_{k-1}, \ldots, \epsilon_{k-n_c}}_{\text{predicted errors}}\Big]

2. Collect the current input/output data, y_k, u_k, and calculate the present prediction error using the previously obtained model parameter estimates,

\epsilon_k = y_k - \boldsymbol{\varphi}_k\hat{\boldsymbol{\theta}}_{k-1}

3. Update the parameter covariance matrix, gain, and parameter estimate vector using the standard RLS update equations (Eqns 6.49, 6.50 and 6.51) from Algorithm 6.2.

4. Wait out the remainder of the sample time, and then return to step 1.
Of course we could modify these update equations to incorporate additional features, such as forgetting factors, in the same manner as in the normal recursive least-squares case.

An alternative algorithm is the approximate maximum likelihood, where instead of using the prediction error, one uses the residual error,

\eta_k = y_k - \boldsymbol{\varphi}_k\hat{\boldsymbol{\theta}}_k \qquad (6.77)

with minor changes to the covariance and parameter update equations. (Note the difference between this and Eqn. 6.76: the residual uses the current parameter estimate, the prediction error the previous one.) Further implementation details are given in [203, p105].
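The distinction between Eqns 6.76 and 6.77 is just which estimate is used. In code (a sketch, reusing the column-vector convention and the rls update of Listing 6.18):

e_pred = y - phi'*thetaest;            % prediction error, Eqn. 6.76 (prior estimate)
[thetaest,P] = rls(y,phi,thetaest,P);  % RLS update
e_resid = y - phi'*thetaest;           % residual error, Eqn. 6.77 (posterior estimate)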
6.10.1 A RECURSIVE EXTENDED LEAST-SQUARES EXAMPLE
For this simulated example, we will compare the parameter estimates of an unknown plant disturbed by coloured noise using both the normal recursive least-squares (where we anticipate biased estimates) and the recursive extended least-squares algorithm, which additionally estimates the C polynomial and removes the bias in the estimates of the A and B polynomials. Our true plant is
\underbrace{(1 - 0.4q^{-1} + 0.5q^{-2})}_{A(q^{-1})}\, y_k = \underbrace{(1.2q^{-1} + 0.3q^{-2})}_{B(q^{-1})}\, u_k + \underbrace{(1 + 0.8q^{-1} - 0.1q^{-2})}_{C(q^{-1})}\, e_k \qquad (6.78)
where e_k \sim \mathcal{N}(0, 2.5) and the input u_k \sim \mathcal{N}(0, 1). Note that we have coloured noise since C \ne 1, and that this noise variance is relatively large.

If we just ignore the noise model and attempt to estimate only the four unknown A and B polynomial coefficients using the normal RLS algorithm, then we get large biases, as is especially evident in the first half of Fig. 6.58.
[Figure 6.58: Estimating model parameters using recursive least-squares in the presence of coloured noise, with forgetting factor \lambda = 0.999. The first period shows the A and B estimates (solid) and true values (dashed) using normal RLS. The second half of the trend shows the A, B and C estimates (solid) and true values (dashed) using extended RLS. Note the eventual lack of bias in the latter RELS estimates.]
If, however, we swap over to the extended recursive least-squares algorithm at time k = 2000, then the estimation performance is much improved, exhibits far less bias, and even manages (eventually) to estimate the noise polynomial correctly, as shown in the latter section of Fig. 6.58. The downside, of course, is that now we must estimate the noise polynomial coefficients as well, increasing the number of unknown parameters from 4 to 6.
Example: Estimating parameters in an ARMAX model.
For the dynamic system described by Eqn. 6.78, we will create an ARMAX model using the idpoly object, create some suitable random input, and use the model to generate an output, as shown in Listing 6.20. Note how I must pad the B polynomial with a leading zero to take into account the deadtime.
Listing 6.20: Create an ARMAX process and generate some input/output data suitable for subsequent identification.
A = [1 -0.4 0.5]; B = [0, 1.2, 0.3]; C = [1, 0.8 -0.1]; % true coefficients
G = idpoly(A,B,C);          % create true ARMAX plant
u = randn(1e4,1);           % input u ~ N(0,1)  (sample size of 1e4 assumed)
e = sqrt(2.5)*randn(1e4,1); % noise e ~ N(0,2.5)
y = sim(G,[u e]);           % simulate Ay = Bu + Ce
Zdata = iddata(y,u,1);      % collect output/input data, sample time T = 1
To estimate the parameters we could either use the general pem or the more specific armax commands. We will estimate a structurally perfect model, that is 2 parameters in each of the A, B and
C polynomials and 1 unit of dead time.
Listing 6.21: Identify an ARMAX process from the data generated in Listing 6.20.
>> G_est = pem(Zdata,'na',2,'nb',2,'nc',2,'nk',1);  % estimate A(q^-1), B(q^-1) & C(q^-1)
>> G_est = armax(Zdata,'na',2,'nb',2,'nc',2,'nk',1) % alternative to using pem.m
>> present(G_est)                                   % present results
G_est =
Discrete-time ARMAX model: A(z)y(t) = B(z)u(t) + C(z)e(t)
  A(z) = 1 - 1.416 z^-1 + 0.514 z^-2
  B(z) = 1.209 z^-1 + 0.2935 z^-2
  C(z) = 1 + 0.7703 z^-1 - 0.1181 z^-2
Fig. 6.59 compares the true and predicted output data. In the left-hand case, the fit is not perfect even though we used the correct model structure. This is partly due to the iterative nature of the nonlinear regression, but mostly the poor fit is because we used only 100 input/output samples. On the other hand, if we use 10,000 samples for the fitting, as in the right-hand case, we estimate a far superior model. It is interesting to note just how many input/output samples we need to estimate a relatively modest-sized system of only 6 parameters.
[Figure 6.59: A validation plot showing the actual plant output compared to the model predictions of an ARMAX model using series of different lengths (left: using only 100 points for estimation; right: using 10,000 points). Note that to obtain good results, we need a long data series.]
Listing 6.22 repeats this estimation, but this time does it recursively in MATLAB.
Listing 6.22: Recursive (extended) estimation of the ARMAX process.

A = [1 -0.4 0.5]; B = [1.2, 0.3]; C = [1, 0.8 -0.1]; % true plant to be estimated
na = length(A); nb = length(B); nc = length(C);      % order of the polynomials+1
d = na-nb;                                           % deadtime

Npts = 5e4;
u = randn(Npts,1); a = randn(Npts,1);       % input & noise sequences
y = zeros(Npts,1); err = y; thetaAll = [];  % dummy storage

np = na+nb+nc - 2;                    % # of parameters to be estimated
theta = randn(np,1); P = 1e6*eye(np); % initial guesses for theta & P
lambda = 1;                           % forgetting factor, lambda <= 1

for i=na:Npts
    y(i) = -A(2:end)*y(i-1:-1:i-na+1) + B*u(i-d:-1:i-nb+1-d) + C*a(i:-1:i-nc+1);
    % remainder of the loop: a minimal RELS update following Algorithm 6.4
    % (Fig. 6.60 also includes an abrupt plant change at k = 2e4, not shown here)
    phi = [-y(i-1:-1:i-na+1); u(i-d:-1:i-nb+1-d); err(i-1:-1:i-nc+1)];
    err(i) = y(i) - phi'*theta;               % prediction error, Eqn. 6.76
    [theta,P] = rls(y(i),phi,theta,P,lambda); % RLS update, Listing 6.18
    thetaAll = [thetaAll; theta'];            % collect estimates
end
As shown in Fig. 6.60, the estimates do eventually converge to the correct values without bias,
and furthermore, the step response of the final estimate compares favourably with the true step
response.
6.10.2 RECURSIVE IDENTIFICATION USING THE SI TOOLBOX
The System Identification toolbox can also calculate the parameter estimates recursively. This essentially duplicates the material presented in section 6.7.1, although in this case there are many more options to choose from regarding the type of method, numerical control, starting point, etc.

The toolbox has a number of recursive algorithms (see [126, p164]), but they all work in the same way. For most demonstration purposes, we will recursively estimate the data pseudo-online. This means that we will first generate all the data, and then estimate the parameters; however, we will make sure that when we are processing element i, the data ahead in time, elements i+1, \ldots, are not available. This pseudo-online approach is quite common in simulation applications. The toolbox command that duplicates the recursive least-squares estimation of an ARMAX process with a forgetting factor is rarmax(z,nn,'ff',0.95).
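For example, a minimal pseudo-online call (the argument values are illustrative) looks like:

nn = [2 2 2 1];                          % orders [na nb nc nk], as in Listing 6.21
[thm,yhat] = rarmax([y u],nn,'ff',0.95); % recursive ARMAX estimates, one row of thm per sample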
6.10.3 SIMPLIFIED RLS ALGORITHMS
The standard recursive least-squares (RLS) algorithm, such as presented in section 6.7.1, requires two update equations: one for the update gain, and one for the covariance matrix. But for implementation in a small computer, there is some motivation to simplify the algorithm, even at the expense of the quality of the parameter estimates. Åström and Wittenmark [18, pp69–70] describe one such method, known as Kaczmarz's projection algorithm.
[Figure 6.60: Recursive extended least-squares estimation with \lambda = 0.999, where there is an abrupt model change at time k = 2\times10^4. The inserted figures compare a step response of the true plant with the current estimate at k = 2\times10^4 and k = 4\times10^4.]
The input/output relation is assumed to be

y_k = \boldsymbol{\varphi}_k\boldsymbol{\theta}

where \boldsymbol{\theta} is the parameter vector and \boldsymbol{\varphi} is the row data vector, and the correction to the estimated parameter vector is

\hat{\boldsymbol{\theta}}_k = \hat{\boldsymbol{\theta}}_{k-1} + \alpha\boldsymbol{\varphi}^T_k

where \alpha is chosen such that y_k = \boldsymbol{\varphi}_k\hat{\boldsymbol{\theta}}_k, which gives the full updating formula in our now familiar form as

\hat{\boldsymbol{\theta}}_k = \hat{\boldsymbol{\theta}}_{k-1} + \frac{\boldsymbol{\varphi}^T_k}{\boldsymbol{\varphi}_k\boldsymbol{\varphi}^T_k}\left(y_k - \boldsymbol{\varphi}_k\hat{\boldsymbol{\theta}}_{k-1}\right) \qquad (6.79)

Note that in [18] there is some nomenclature change, and that the data vector is defined as a column vector, where here I have assumed it is a row vector. This update scheme (Eqn. 6.79) is sometimes modified to avoid potential numerical problems when the data vector equals zero.
The script in listing 6.23 first generates data from an ARX plant, and then the data is processed
pseudo-online to obtain the estimated parameters.
Listing 6.23: Kaczmarz's algorithm for identification

u = randn(50,1);                  % random input
y = filter(0.5,[1 -0.8],u);       % ARX plant to be identified (coefficients assumed for illustration)
theta = zeros(2,1); Param = [];   % initial parameter estimate
for i=2:length(u)
    phi = [-y(i-1); u(i)];        % data (column) vector
    theta = theta + phi*(y(i)-phi'*theta)/(phi'*phi); % Kaczmarz update, Eqn. 6.79
    Param = [Param; theta'];      % collect parameters
end % for
The input/output data (top) and parameters from this data are shown in Fig. 6.61. The estimated
parameters converge relatively quickly to the true parameters, although not as quickly as in the
full recursive least-squares algorithm.
[Figure 6.61: The performance of a simplified recursive least-squares algorithm, Kaczmarz's algorithm. Upper: the input/output data. Lower: the estimated parameters (solid) eventually converge to the true parameters (dashed), although not as quickly as with full RLS.]
Problem 6.3 The following questions use input/output data from the collection of classic time series data available from the time series data library maintained by Rob Hyndman at www-personal.buseco.monash.edu.au/hyndman/TSDL/, or alternatively from the DAISY: Database for the Identification of Systems collection maintained by De Moor, Department of Electrical Engineering, ESAT/SISTA, K.U.Leuven, Belgium, at www.esat.kuleuven.ac.be/sista/daisy/. These collections are useful to test and verify new identification algorithms. Original sources for this data include [34] amongst others.

1. The data given in /industry/sfbw2.dat shows the relationship in a paper-making machine between the input stock flow (gal/min) and basis weight (lb/3300 ft^2), a quality control parameter of the finished paper. Construct a suitable model of this time series.
Ref: [157, pp500–501].
2. This problem is detailed and suitable for a semester group project.
An alternative suite of programs to aid in model development and parameter identification is described in [34, Part V]. The collection of seven programmes closely follows the methodology in the text, and they are outlined in pseudo-code. Construct a TIME SERIES ANALYSIS toolbox with graphics based around these programs.
Test your toolbox on one of the time series tabulated in the text. A particularly relevant series is the furnace data that shows the percentage of carbon dioxide output from a gas furnace at 9-second intervals as a function of methane input (ft^3/min), [34, Series J, pp532–533]. This input/output data is reproduced in Fig. 6.62, although the input methane flowrate is occasionally given as negative, indicating that the series is probably scaled. The original reference footnoted that the unscaled data lay between 0.6 and 0.04 ft^3/min.
[Figure 6.62: The gas furnace data: the %CO2 output and the methane flow input over 2500 s.]

6.11 CLOSED LOOP IDENTIFICATION
The identification techniques described thus far assume that the input signal can be chosen freely and that this signal is persistently exciting. However if you are trying to identify while controlling, you have lost the freedom to choose an arbitrary input since, naturally, the input is now constrained to give good control rather than good estimation. Under good control, the input will probably no longer be exciting, nor inject enough energy into the process, for there to be sufficient information available for the identification. Closed loop identification occurs when the process is so important, or hazardous, that one does not have the luxury of opening the control loop simply for identification. Closed loop identification is also necessary, by definition, in many adaptive control algorithms.
If we use a low-order controller, then the columns of the data matrix \mathbf{X} in Eqn. 6.22 become linearly dependent. This is easy to see if we use a proportional controller, u_t = -ky_t, since this in turn creates columns in \mathbf{X} that differ only by the constant controller gain k. In reality, any noise will destroy this exact dependency, but the numerical inversion may still be a computationally difficult task.
One way to ensure that u(t) still persistently excites the process is to add a small dither or perturbation signal to the input u. This noisy signal is usually of such a small amplitude that the control is not seriously affected, but the identification can still proceed. In some cases there may be enough natural plant disturbance to make it unnecessary to add noise to u. With this added noise, introduced either deliberately or otherwise, the estimation procedure is the same as described in section 6.7.
The above arguments seem to suggest that closed loop identification is suboptimal, and that if possible one should always try to do the identification in open loop, design a suitable controller, and then close the loop. This avoids the possible ill-conditioning and the biasing of the estimated parameters. However, Landau, in a series of publications culminating in [112], argues that in fact closed loop control, with suitable algorithms, actually develops better models, since the identification is constrained to the frequencies of interest for good closed loop control. In the final analysis, it all depends on whether we want a good model of the plant, for say design, in which case we do the identification in open loop, or good control, in which case closed loop identification may be better.

A very readable summary of closed loop identification is given in [93, pp517–518], [18, p82] and especially [203]. Identification in the closed loop is further discussed in chapter 7.
6.11.1 CLOSED LOOP RLS IN SIMULINK
Implementing recursive identification using SIMULINK is not as straightforward as writing raw MATLAB code, because of the difficulty in updating the parameters inside a transfer function block; under normal operation these are considered constant parameters within the filter block. Fig. 6.63 shows the closed loop estimation using the same RLS blocks as those used in Fig. 6.47 on page 295. In this example the identified parameters are not used in the controller in any adaptive manner; for that we need an adaptive controller, which is the subject of the following chapter.
[Figure 6.63: Closed loop estimation using RLS in SIMULINK: a signal generator sets the setpoint for a PID controller driving the unknown plant Gd, while a \boldsymbol{\varphi}-generator block feeds the RLS block, which outputs the parameter estimates \boldsymbol{\theta} and the covariance \mathbf{P}. See also Fig. 6.47.]
6.12 SUMMARY
System identification is where we try to regress, or fit, the parameters of a given model skeleton or structure to input/output data from a dynamic process. Ideally we would obtain not only the values of the parameters in question, but also an indication of the goodness of fit, and of the appropriateness of the model structure to which the parameters are fitted. Identification methodologies usually rely on a trial & error approach, where different model structures are fitted and the results compared. The SYSTEM IDENTIFICATION TOOLBOX within MATLAB, [126], is a good source of tools for experimentation in this area, because the model fitting and analysis can be done so quickly and competently.
Identification can also be attempted online, where the current parameter estimate is improved or
updated as new process data becomes available in an efficient way without the need to store all
the previous historical data. We do this by re-writing the least-squares update in a recursive algorithm. Updating the parameters in this way enables one to follow time-varying models or even
nonlinear models more accurately. This philosophy of online estimation of the process model is a
central component of an adaptive control scheme which is introduced in chapter 7.
However there are two key problems with the vanilla recursive least-squares algorithm, which become especially apparent when we start to combine identification with adaptive control. The first is the need to prevent the covariance matrix \mathbf{P} from becoming ill-conditioned; this can, in part, be solved by ensuring persistent excitation in the input. The second problem is that, given non-white or coloured noise, the RLS scheme will deliver biased estimates. However, there are many extensions to the standard schemes that address these and other identification issues. Succinct summaries of system identification with some good examples are given in [23, pp422–431] and [150, p856], while dedicated texts include [125, 190, 203].
PROBLEMS
Problem 6.4 An identification benchmark problem from [108].
1. Simulate the second-order model with external disturbance v_t,

y_t = \sum_{i=1}^{2} a_i y_{t-i} + \sum_{i=0}^{2} b_i u_{t-i} + \sum_{i=1}^{2} d_i v_{t-i} \qquad (6.80)

with parameter values
a_1   a_2   b_0   b_1    b_2   d_1   d_2
 ⋯    0.9   0.5   0.25   0.1   0.8   0.2
The input u_t is normally distributed discrete white noise, and the external disturbance is to be simulated as a rectangular signal alternating periodically between the values +1 and -1 at t = 100, 200, \ldots. Run the simulation for 600 time steps, and at t = 300 change a_1 to 0.98.
2. This is a challenging identification problem, because the rarely varying external disturbance signal gives little information about the parameters d_i. Identify the parameters in Eqn. 6.80, perhaps starting with \hat{\boldsymbol{\theta}}_{1|0} = \mathbf{0}, \mathbf{P}_{1|0} = 50\mathbf{I}, and using an exponential forgetting factor, \lambda = 0.8.
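As a starting point, the disturbance signal described above can be generated in one line of MATLAB (a sketch; the initial sign is an assumption):

t = (1:600)';                % simulation horizon
v = (-1).^floor((t-1)/100);  % rectangular disturbance: +1 or -1, switching every 100 samples
u = randn(600,1);            % white noise input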
Problem 6.5
1. Run through the demos contained in the SYSTEM IDENTIFICATION toolbox for MATLAB.
2. Investigate the different ways to simulate a discrete plant with a disturbing input. Use the model structure

A(q^{-1})y_t = B(q^{-1})u_t + C(q^{-1})e_t

Choose a representative stable system, say something like

A(q^{-1}) = 1 - 0.8q^{-1}, \qquad B(q^{-1}) = 0.5, \qquad C(q^{-1}) = 1 + 0.2q^{-1}

and choose some sensible inputs for u_t and e_t. Simulate the following and explain any differences you find.
(a) Write your own finite difference equation in MATLAB, and compare this with using filter.
(b) Use the transfer function object, tf, in MISO (multiple input/single output) mode.
Hint: Use cell arrays as

>> G = tf({B,C},{A,A},T); % discrete version
>> G.variable = 'q';

(c) Use idpoly and idmodel/sim from the SYSTEM IDENTIFICATION toolbox.
(d) Build a discrete SIMULINK model.
Hint: You may find problems due to the differing lengths of the polynomials. Either ensure all polynomials are the same length, or swap from z^{-1} to z, or introduce delays.
3. Repeat problem 2 with an unstable model (A is unstable).
4. What is the difference in SIMULINK when you place the A polynomial together with the B and C polynomials? Pay particular attention to the case where A is unstable.
5. Identification of plants that include coloured noise inputs is considerably more complex than just using arx. The following illustrates this.
(a) Create a truth plant with polynomials A, B, C. (Use idpoly.)
6. The DAISY (Data Base for the Identification of Systems), [137], is located at www.esat.kuleuven.ac.be/sista/daisy/ and contains a number of simulated and experimental multivariable data sets. Download one of these data sets, preferably one of the real industrial data trends, and make an identification.
7. RLS with modified forgetting factors.
Construct a simulation to estimate the parameters in a dynamic model (as in tident.m) to test various extensions to the forgetting factor idea. Create a simulation with output noise, and step-change the model parameters part way through the run. (For more information on these various extensions, see [203, pp140–160].)
(a) Use a start-up forgetting factor of the form

\lambda_1(t) = \lambda_0 + (1 - \lambda_0)\left(1 - \exp\left(-t/\tau_f\right)\right) \qquad (6.81)

where \lambda_0 is the initial forgetting factor (say 0.9–0.95), and \tau_f is the time constant that determines how fast \lambda(t) approaches 1.0.
Note that Eqn. 6.81 can be re-written in recursive form as

\lambda_1(t) = \beta\lambda_1(t-1) + (1-\beta)

where \beta = e^{-1/\tau_f} and \lambda_1(0) = \lambda_0.
(b) Combine with the start-up forgetting factor an adaptive forgetting factor of the form

\lambda_2(t) = 1 - \frac{\tau_f - 1}{\tau_f}\,\frac{\epsilon^2(t)}{s_f(t)} \qquad (6.82)

where s_f(t) is a weighted average of past values of the squared error \epsilon^2. [203, p159] suggests using the filter

s_f(t) = \frac{\tau_f - 1}{\tau_f}\, s_f(t-1) + \frac{\epsilon^2(t)}{\tau_f}

Combine both the start-up and the adaptive forgetting factors to get a varying forgetting factor \lambda(t) = \lambda_1(t)\,\lambda_2(t).
(c) A directional forgetting factor tries to update in only those directions in parameter space where there is information available. The covariance update with directional forgetting is

\mathbf{P}_t = \mathbf{P}_{t-1}\left[\mathbf{I} - \frac{\mathbf{x}_t\mathbf{x}^T_t\mathbf{P}_{t-1}}{r^{-1}_{t-1} + \mathbf{x}^T_t\mathbf{P}_{t-1}\mathbf{x}_t}\right] \qquad (6.83)

where the directional forgetting factor r(t) is

r_t = \lambda - \frac{1 - \lambda}{\mathbf{x}^T_{t+1}\mathbf{P}_t\mathbf{x}_{t+1}} \qquad (6.84)
8. Square-root covariance updates. Noting that the covariance matrix can be factored as \mathbf{P} = \mathbf{SS}^T, one can propagate the factor \mathbf{S} instead. Update \mathbf{S}_t by following

\mathbf{f}_t = \mathbf{S}^T_{t-1}\boldsymbol{\varphi}_t
\beta_t = 1 + \mathbf{f}^T_t\mathbf{f}_t
\alpha_t = \frac{1}{\beta_t + \sqrt{\beta_t}}
\mathbf{L}_t = \mathbf{S}_{t-1}\mathbf{f}_t
\mathbf{S}_t = \mathbf{S}_{t-1} - \alpha_t\mathbf{L}_t\mathbf{f}^T_t

and update the parameter estimates using

\boldsymbol{\theta}_{t+1} = \boldsymbol{\theta}_t + \frac{\mathbf{L}_t}{\beta_t}\left(y_t - \boldsymbol{\varphi}^T_t\boldsymbol{\theta}_t\right)
Note that as of Matlab R12, there is considerable support for vector/matrix manipulations in
raw S IMULINK.
11. Construct a square-root update algorithm (from Problem 8) in SIMULINK, perhaps using the matrix/vector blocks in the DSP blockset.
12. Wellstead & Zarrop, [203, p103], develop an expression for the bias as a function of the model parameters for a simple 1-parameter model,

y_t = ay_{t-1} + e_t + ce_{t-1}, \qquad |a| < 1, \quad |c| < 1

\hat{a} - a = \frac{c(1-a^2)}{1 + c^2 + 2ac}

(a) Validate with simulation the expression for the size of the bias, \hat{a} - a.
(b) The size of the bias should not be a function of the size of the variance of e_t. Is this a surprising result? Verify this by simulation. What happens if you drop the variance of the noise, \sigma^2_e, to zero? Do you still observe any bias? Should you?
13. Starting with the SIMULINK demo rlsest, which demonstrates adaptive pole-placement of a discrete plant, do the following tasks.
(a) Modify the rlsest simulation to use a continuous plant of your choice and, for the moment, a fixed (non-adaptive) PID controller. (I.e. disconnect the closed-loop adaption part of the controller.) Use Fig. 6.64 as a guide to what you should construct.
[Figure 6.64: A recursive least-squares parameter estimator running alongside a fixed PID controller regulating the plant 3(s+0.3)/((s+2)(s+1)(s+0.4)), with a disturbance added to the plant output.]
(c) Modify the RLS estimator S-function to export the covariance matrix as well as the updated parameters. Do these confirm your findings from part 13b?
(d) Modify your RLS scheme to better handle noise and the closed loop conditions. You may like to use some simple heuristics based on the trace of the covariance matrix, or use a directional and/or variable forgetting factor.
14. Read the MATLAB technical support note #1811, How Do I Create Time Varying Matrix Gain and State-space Blocks as C MEX S-functions?. Download the relevant files and implement an adaptive controller.
15. Model identification in state-space with state information.
The English mathematician Richardson proposed the following simple model for an arms race between two countries:

x_{k+1} = ax_k + by_k + f \qquad (6.85)
y_{k+1} = cx_k + dy_k + g \qquad (6.86)

where x_k and y_k are the expenditures on arms of the two nations, and a, b, c, d, f and g are constants. The following data have been compiled by the Swedish International Peace Research Institute (SIPRI yearbook 1982).
Research Institute, (SIPRI yearbook 1982).
Millions of U.S. dollars at 1979 prices and 1979 exchange rates
year
1972
1973
1974
1975
1976
1977
1978
1979
1980
1981
Iran
2891
3982
8801
11230
12178
9867
9165
5080
4040
Iraq
909
1123
2210
2247
2204
2303
2179
2675
NATO
216478
211146
212267
210525
205717
212009
215988
218561
225411
233957
WTO
112893
115020
117169
119612
121461
123561
125498
127185
129000
131595
(a) Determine the parameters of the model and investigate the stability of this model.
(b) Determine the estimates of the parameters based on 3 consecutive years. Look at the variability of the estimates.
(c) Create a recursive estimate of the parameters. Start in 1975 with the initial values determined from 1972–74.
CHAPTER 7

ADAPTIVE CONTROL

    Adopt, adapt and improve . . . Motto of the round table.
    John Cleese, whilst robbing a lingerie boutique.
7.1 WHY ADAPT?
Since no one has yet found the holy grail of controllers that will work on all conceivable occasions, we must not only select an appropriate class of controller, but we are also faced with the unpleasant task of tuning it to operate satisfactorily in our particular environment. This job is typically done only sporadically in the life of any given control loop, but in many cases, owing to equipment or operating modifications, the dynamics of the plant will have changed so as to render the controller sub-optimal. Ideally the necessary re-tuning could be automated, and this is the idea behind an adaptive controller.
In a control context, adaption is where we adjust the controller as the process conditions change. The problem is that for tight control we essentially want a high feedback gain, giving a rapid response and good correction with minimal offset. However if the process dynamics subsequently change, due to environmental changes for example, the high feedback gain may create an unstable controller. To prevent this problem, we adapt the controller design as the process conditions change, to achieve a consistently high performance.
Adaptive control was first used in the aerospace industry around the late 1950s, where automatic pilots adapted the control scheme depending on external factors such as altitude, speed, payload and so forth. One classic example was the adaptive flight control system (AFCS) for the X-15 in the late 1960s described in [8]. The adaptive control system was developed by Honeywell and was fundamentally a gain scheduler that adapted the control loop gains during flights that ranged from sea level to space at 108 km.

Unfortunately, the crash of X-15-3 and the loss of pilot Michael Adams in 1967 somewhat dampened the enthusiasm for adaptive flight control, and the research program was terminated shortly afterwards.
In the past couple of decades however, adaptive control has spread to the processing industries
with several commercial industrial adaptive controllers now available. Some applications where
adaptive control is now used are:
Aeronautical control As the behaviour of high performance fighter aircraft depends on altitude,
323
324
payload, speed etc, an adaptive control is vital for the flight to be feasible.
Tanker steering The dynamics of oil supertankers change drastically depending on the depth of
water, especially in relatively shallow and constricted water-ways.
Chemical reactions Chemical processes that involve catalytic reactions will vary as the catalyst
denatures over time. Often coupled to this, is that the heat transfer will drop as the tubes in
the heat exchanger gradually foul.
Medical The anesthetist must control the drug delivery to the patient undergoing an operation,
avoiding the onsets of either death or consciousness.
Some further examples of adaptive control are given in a survey paper in [182].
Adaptive control is suitable when the plant dynamics change with time or operating point in a manner that a single fixed controller cannot accommodate.
7.1.1 THE ADAPTION SCHEME
When faced with tuning a new control loop, the human operator would probably do the following
steps:
1. Disturb the plant and collect the input/output data.
2. Use this data to estimate the plant dynamics.
3. Using the fitted model, an appropriate controller can be designed.
4. Update the controller with the new parameters and/or structure and test it.
If the scheme outlined above is automated, (or you find an operator who is cheap and does not
easily get bored), we can return to step 1, and repeat the entire process at regular intervals as
shown in Fig. 7.1. This is particularly advantageous in situations where process conditions can
change over time. The process and feedback controller are as normal, but we have added a new
module in software (dashed box) consisting of an on-line identification stage to extract a process model M(θ), and, given this model information, a controller design stage to find suitable
controller tuning constants.
Of course there will be some constraints if we were to regularly tune our controllers. For example,
it is advisable to do the estimation continuously in a recursive manner, and not perturb the
plant very much. We could adapt the three tuning constants of a standard PID controller, or we
could choose a controller structure that is more flexible such as pole-placement. The adaption
module could operate continuously or it could be configured to adapt only on demand. In the
latter case, once the tuning parameters have converged, the adaption component of the controller
turns itself off automatically, and the process is now controlled by a constant-tuning parameter
controller such as a standard PID controller.
Figure 7.1: The adaptive control scheme. The plant S, with input u(t) and output y(t), is controlled by a normal feedback controller from the setpoint r(t), while the adaptive module (dashed) measures the input & output, identifies a process model M(θ), and re-designs the controller.
7.1.2 CLASSIFICATION OF ADAPTIVE CONTROLLERS
As one may expect, there are many different types of adaptive control, all possessing the same
general components, and an almost equal number of classification schemes. To get an overview
of all these similar, but subtly different controllers, we can classify them according to what we
adapt:
Parametric adaption Here we continually update the parameters of the process model, or even
controller, or
Signal adaption where we vary a supplementary signal to be added directly to the output of the
static controller.
Alternatively we can classify them as to how we choose a performance measure,
Static A static performance measure is one such as overall efficiency, or maximising production.
Dynamic A dynamic performance measure is to optimise the shape of a response to a step test.
Functional of states and inputs This more general performance index is discussed in chapter 9.
7.2 GAIN SCHEDULING
The simplest type of adaptive control is gain scheduling. In this scheme, one tries to maintain a constant overall loop gain, K, by changing the controller gain, Kc, in response to a changing process gain, Kp. Thus the product Kc*Kp is always constant.
An example where gain scheduling is used is in the control of a liquid level tank that has a varying
cross sectional area. Suppose we are trying to control the level in a spherical tank by adjusting a
valve on the outlet. The exiting flow is proportional to the square root of the height of liquid above
the valve, which in turn is a nonlinear function of the volume in the tank. Fig. 7.2 shows how
this gain varies as a function of material in the tank, illustrating why this is a challenging control
problem. No single constant controller setting is optimal for both operating in the middle and the
extremes of the tank.
Figure 7.2: The plant gain, Kp, is proportional to the square root of the hydraulic head, which in turn is a nonlinear function of the volume in the tank. Consequently the controller gain, Kc, should be the inverse of the plant gain.
A way around this problem is to make the controller parameter settings a function of the level
(the measured variable, y). Here the controller first measures the level, then it decides what
controller constants (Kc, τi, τd) to use based on the level, and then finally takes corrective action
if necessary. Because the controller constants change with position, the system is nonlinear and
theoretical analysis of the control system is more complicated.
To do this in practice one must:
1. Determine the open loop gain at the desired operating point and several points either side
of the desired operating point.
2. Establish good control settings at each of these three operating points. (This can be done by
any of the usual tuning methods such as Ziegler-Nichols, Cohen-Coon etc)
3. Generate a simple function (usually piecewise linear) that relates the controller settings to the measured variable (level), as sketched below.
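A minimal sketch of step 3 might look as follows, where the three operating points and the tuning constants are invented purely for illustration:

% Piecewise-linear gain schedule: controller settings as a function of level
V    = [0.2 0.5 0.8];      % levels (fraction of Vmax) where the loop was tuned
Kc   = [2.0 0.8 2.0];      % controller gains found at those levels
taui = [6 10 6];           % integral times found at those levels
level = 0.62;              % current measured level
Kc_now   = interp1(V, Kc,   level, 'linear', 'extrap')
taui_now = interp1(V, taui, level, 'linear', 'extrap')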
The gain scheduling described above is a form of open-loop or feedforward adaption. The parameters are changed according to a pre-set programme. If the actual tank differs from the tank that was used to calculate the parameter adaption functions, (for example it has a different cross section), then the controller will not function properly. This is because the controller is not comparing its
own system model with the true system model at any time. However despite this drawback, gain
scheduling is an appropriate way to control things like tanks of varying cross sectional area. It is
unlikely that the tank's cross sectional area will change significantly with time, so the instrument
engineer can be quite confident that the process once tuned, will remain in tune.
7.3 THE IMPORTANCE OF IDENTIFICATION
The adaptive controller (with feedback adaption) must be aware of the process, or more accurately it must be aware of changes in the process behaviour, hence the importance of the system identification covered in the previous chapter. Central to the concept of controllers that adapt is an identification algorithm that constantly monitors the process, trying to establish if any significant changes have occurred. In the gain scheduler given in section 7.2, the identification component was trivial: it was simply the level reading. The purists will argue that the gain scheduler is not a true adaptive
controller, because if the dynamics of the system change (such as the cross section of the tank
suddenly varies), the controller is unaware of this change, and cannot take appropriate corrective
action.
There are two popular adaptive control techniques used currently in industry. The first approach
is to use a controller with auto-tuning. Normally these controllers are operating in just the same
manner as a traditional PID controller, but if the operator requests, the controller can be tuned
automatically. This is sometimes called tuning on demand. Once the auto-tuning part is complete, the tuning procedure switches itself off and the controller reverts back to the traditional PID
mode. These types of adaptive controllers are used where adaptive control is required, but the
loop may be too sensitive or critical to have constant adaptation.
The second approach to adaptive control is called true adaptive control. This is where the controller adapts itself to the changing process without operator demand. Unlike the auto-tuners, the true adaptive controller continuously identifies the process and updates the controller parameters. Many use variants of the least-squares identification that was described in section 6.7.
Industrially auto-tuners have been successful and are now accepted in the processing industries.
Most of the major distributed computer control companies provide an adaptive controller in the form of a self-tuner as part of their product (Satt Control, Bailey, ABB, Leeds-Northrup, Foxboro). Åström and Hägglund describe some of these auto-tuning commercial products in chapter 5 of [16, pp105-132].
However true adaptive control has not been accepted as much as the self-tuner. This is due to
the fact that true adaptive control requires a higher level of expertise than self-tuning regulators.
In addition, because the adaptive control is continuous, it requires a much larger safety network
to guarantee correct operation. Even though the adaptive controller does not require tuning, correct operation still requires estimates of certain quantities before the estimator can function properly.
These types of parameters are things such as expected model order, sampling period and expected
dead time. If these values are significantly different from the true values, then poor adaption and
hence control will result. Section 7.4 will consider self-tuning regulators.
7.3.1 POLYNOMIAL MANIPULATIONS
The controller design strategies in this section will use the so-called polynomial approach. In this
design technique, we will need to be able to add and multiply polynomials, of possibly differing
lengths. The two routines given in Appendix B on page 507 add and multiply polynomials and
are used in the following controller design scripts in this chapter. An alternative programming
methodology is to create a polynomial object and then overload the plus and times operators.
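Since the routines themselves live in Appendix B, a minimal sketch of what the addition routine might look like (the actual Listing B.1 may well differ in detail) is:

function p = polyadd(a,b)
% Add two polynomials in q^-1 of possibly differing lengths by aligning
% the leading (q^0) coefficients & padding the shorter with trailing zeros.
n = max(length(a), length(b));
p = [a, zeros(1,n-length(a))] + [b, zeros(1,n-length(b))];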
7.4 SELF TUNING REGULATORS
A better adaptive controller than a gain schedular is one where the controller parameters are
continually updated as the process characteristics change with time in an undetermined manner.
These are called self tuning regulators and an early application was in the pulp and paper industry controlling a paper machine cited in [10]. Further development culminating in the self tuning
regulator is described in the classic reference, [13]. Self tuning regulators are more flexible than gain scheduling controllers which, to be effective, require the relation between the input and output to be known beforehand and assumed constant. A textbook covering many aspects of the design
and use of self tuning regulators is [203].
The basic elements of a self tuning regulator are:
1. Identify the model using input-output data. Usually the model is estimated online by using
a recursive least squares parameter estimation.
2. A relation between the model parameters estimated in step 1 and the controller parameters.
3. The control algorithm. Many different types of control algorithms exist, but many are functions that try to adjust the controller parameters, given the continually changing process
parameters in such a way that the closed loop response is constant. Because the controller
is being continuously updated to the new varying conditions, these control algorithms can
be quite vigorous.
Step 1, the identification phase, was covered in chapter 6. Steps 2 and 3 are the controller tuning
and the controller structure sections and are discussed further in this chapter.
Suppose we have previously identified our plant under test using an ARX model of the form
    A(q) y_k = B(q) u_k + e_k                               (7.1)

then it is natural to ask what the control law should be. The simplest control law is a direct model inversion: we choose u such that y_k = y*_k + e_k, which is the best possible result given the unknown noise. This results in the control law

    u_k = (A(q)/B(q)) y*_k                                  (7.2)

which is called minimum variance control, [10, 13, 15]. Expanding this control law gives the manipulated input at sample time k as

    u_k = (1/b_0)(y*_k + a_1 y_{k-1} + a_2 y_{k-2} + ... + a_n y_{k-n} - b_1 u_{k-1} - ... - b_m u_{k-m})      (7.3)
7.4.1 SIMPLE MINIMUM VARIANCE CONTROL
The algorithm to construct an adaptive minimum variance controller consists of two parts: the identification of the model parameters, θ, in this instance using recursive least-squares, and the implementation of the subsequent minimum variance control law.
Algorithm 7.1 Simple minimum variance control for plants with no delay

    G1(q^{-1}) = ...                                        (7.4)
    G2(q^{-1}) = ...                                        (7.5)
For this simulation we will assume no model/plant structural mismatch and we will use the
recursive estimation with the forgetting factor option.
Listing 7.1: Simple minimum variance control where the plant has no time delay

t = [0:200]'; yspt = square(t/18);       % time vector & setpoint
y = zeros(size(yspt)); u = zeros(size(y));
nr = 0.02;                               % noise level

a = [1 -1.4 0.45]; b = [1.2 0.5];        % Actual plant, G (coefficients assumed in this sketch)
na = length(a)-1; nb = length(b);
theta = [zeros(na,1); 1; zeros(nb-1,1)]; % Estimated plant: [a1,...,a_na, b0,...,b_{nb-1}]'
P = 1e4*eye(na+nb); lambda = 0.99;       % RLS initial covariance & forgetting factor

for i=4:length(t)
    yv = -y(i-1:-1:i-na);                % old output & input data
    % Min. Var. control law, Eqn. 7.3: drive the one-step prediction to the setpoint
    u(i) = (yspt(i) - [yv', u(i-1:-1:i-nb+1)']*theta([1:na, na+2:end]))/theta(na+1);
    c = [yv', u(i:-1:i-nb+1)'];
    y(i) = c*[a(2:end)'; b'] + 0.01*randn(1)*nr;  % true process with noise
    % Now do identification: RLS update with forgetting (see also Listing 6.18)
    K = P*c'/(lambda + c*P*c');
    theta = theta + K*(y(i) - c*theta);
    P = (P - K*c*P)/lambda;
end
Initially we will set the forgetting factor to 0.99, and we will introduce a small amount of noise
so the estimation remains reasonable even under closed loop conditions, nr=0.02. We will aim
to follow the trajectory given as the dashed line in the upper figure of Fig. 7.3.
Our minimum variance controller will both try to estimate the true plant and control the process
to the setpoint, y*. The lower trend in Fig. 7.3 shows the parameter values for the estimated
model.
Figure 7.3: Simple minimum variance control with an abrupt plant change at t = 100. (See also
Fig. 7.4.)
In this unrealistically ideal situation with no model/plant mismatch, the controlled response looks practically perfect, despite the plant change at t = 100 (after, of course, the initial convergence period). Fig. 7.4 shows an enlarged portion of Fig. 7.3 around the time of a setpoint change. Note
how the response reaches the setpoint in one sample time, and stays there. This is as expected
when using minimum variance control since we have calculated the input to force y exactly equal
to the desired output y in one shot. In practice this is only possible for delay-free plants with no
model-plant mismatch.
Of course under industrial conditions (as opposed to simulation), we can only view the controlled result, and perhaps the trends of the estimated parameters; we don't know if they have converged to the true plant parameters. However if the estimates converge at all to steady values,
and the diagonal elements of the covariance matrix are at reasonable values, (these are application
dependent), then one may be confident, but never totally sure, that a reasonable model has been
identified.
Figure 7.4: An enlarged portion of Fig. 7.3 around the time of a setpoint change.

7.5 ADAPTIVE POLE-PLACEMENT
Clearly the problem with the simple minimum variance control is the excessive manipulated
variable demands imposed by the controller in an attempt to reach the setpoint in one sample
time. While our aim is to increase the speed of the open-loop response, we also need to reduce
the sensitivity to external disturbances. Both these aims should be addressed despite changes in
the process.
Fig. 7.5 shows a generalised control structure given in shift polynomial form known as a two
degree of freedom polynomial controller. In this section we are going to modify our colour scheme
slightly. As shown in the figure, the controller polynomials will be coloured in blue hues, and the plant to be controlled in red hues. The B polynomial, because it operates on the blue input, is coloured purple, i.e. between blue and red.
Figure 7.5: A two degree of freedom polynomial controller. The setpoint r(t) passes through the pre-compensator H(q^{-1}); the feedback controller comprises G(q^{-1}) and 1/F(q^{-1}); and the plant is G(q^{-1}) = B(q^{-1})/A(q^{-1}) with input u(t) and output y(t).
The controller polynomials F, G and H are to be chosen such that the desired closed-loop specifications are met. The closed loop transfer function of Fig. 7.5 is

    y(t) = (HB / (FA + GB)) r(t)
where clearly the stability is given by the denominator, F A + GB. Note that this polynomial
equation consists of two terms, each with a blue term multiplied by a red term. This fact will be
important later when we derive an algorithm to solve for the unknown blue polynomials.
Rather than demand the one-step transient response used in the minimum variance case, we can
relax this criterion, and ask only that the controlled response follow some pre-defined trajectory.
Suppose that we restrict the class of dynamic systems to the form

    A y(t) = B u(t-1)                                       (7.6)

where we have assumed a one sample time delay between u and y. This is to leave us enough time to perform any controller calculations necessary. We will use the controller structure given in Fig. 7.5,

    F u(t) = H r(t) - G y(t)                                (7.7)

where

    F = 1 + f_1 q^{-1} + f_2 q^{-2} + ... + f_{nf} q^{-nf}
    G = g_0 + g_1 q^{-1} + g_2 q^{-2} + ... + g_{ng} q^{-ng}
    H = h_0 + h_1 q^{-1} + h_2 q^{-2} + ... + h_{nh} q^{-nh}

With the computational delay, the closed-loop is

    (FA + q^{-1} BG) y(t) = q^{-1} BH r(t)                  (7.8)

and we desire the closed-loop poles to be assigned to the locations specified in the polynomial T,

    T = 1 + t_1 q^{-1} + t_2 q^{-2} + ... + t_{nt} q^{-nt}  (7.9)
Assigning the closed-loop poles to given locations is called pole-placement or pole assignment.
(In practice, the closed loop polynomial T will be a design requirement given to us perhaps by
the customer.)
Note that some authors such as [22] use a slightly different standard nomenclature convention for
the controller polynomials. Rather than use the variable names F, G, H in the control law Eqn. 7.7,
the variable names R, S, T are used, giving a control law of Ru(t) = Tr(t) - Sy(t).
7.5.1 THE DIOPHANTINE EQUATION
To assign our poles specified in T, we must select our controller design polynomials F, G such that the denominator of the actual closed-loop is what we desire,

    FA + q^{-1} BG = T                                      (7.10)

where I have coloured the design polynomial red indicating that it is to drive the output to the setpoint. The polynomial equation Eqn. 7.10 is called the Diophantine equation, and may have many solutions. To obtain a unique solution, A, B must be coprime (no common factors), and we should select the order of the polynomials such that

    n_f = n_b,    n_g = n_a - 1,  (n_a ≠ 0),    n_t ≤ n_a + n_b
One way to establish the coefficients of the design polynomials F, G, and thus solve the Diophantine equation, is by expanding Eqn. 7.10 and equating coefficients in q^{-i}, giving the system of linear equations

\[
\underbrace{\begin{bmatrix}
1 & 0 & \cdots & 0 & b_0 & 0 & \cdots & 0 \\
a_1 & 1 & \ddots & \vdots & b_1 & b_0 & \ddots & \vdots \\
a_2 & a_1 & \ddots & 0 & b_2 & b_1 & \ddots & 0 \\
\vdots & a_2 & \ddots & 1 & \vdots & b_2 & \ddots & b_0 \\
a_{n_a} & \vdots & & a_1 & b_{n_b} & \vdots & & b_1 \\
0 & a_{n_a} & & a_2 & 0 & b_{n_b} & & b_2 \\
\vdots & & \ddots & \vdots & \vdots & & \ddots & \vdots \\
0 & \cdots & 0 & a_{n_a} & 0 & \cdots & 0 & b_{n_b}
\end{bmatrix}}_{\mathcal{A}}
\underbrace{\begin{bmatrix}
f_1 \\ f_2 \\ \vdots \\ f_{n_f} \\ g_0 \\ g_1 \\ \vdots \\ g_{n_g}
\end{bmatrix}}_{c}
=
\underbrace{\begin{bmatrix}
t_1 - a_1 \\ t_2 - a_2 \\ \vdots \\ t_{n_t} - a_{n_t} \\ -a_{n_t+1} \\ \vdots \\ -a_{n_a} \\ 0 \\ \vdots \\ 0
\end{bmatrix}}_{b}
\tag{7.11}
\]
The matrix A in Eqn. 7.11 has a special structure, and is termed a Sylvester matrix; it will be invertible if the polynomials A, B are co-prime. Solving Eqn. 7.11,

    c = A^{-1} b

gives us the coefficients of our controller polynomials, F, G. Now the dynamics of the closed-loop requirement are satisfied,

    y(t) = (BH/T) r(t-1)
but not the steady-state gain, which ideally should be 1. We could either set H to be the scalar

    H = T/B |_{q=1}                                         (7.12)

which forces the steady-state gain to 1, or preferably cancel the zeros with

    H = 1/B                                                 (7.13)

which improves the transient response in addition to keeping the steady-state gain at unity. Cancelling the zeros should only be attempted if all the zeros of B are well inside the unit circle. Section 7.6.1 describes what to do if the polynomial B has poorly damped or unstable zeros.
This scheme has now achieved our aim of obtaining a consistent dynamic response irrespective of the actual process under consideration. Since we cannot expect to know the exact model B, A, we will use an estimated model, B̂, Â, in the controller design.
7.5.2 SOLVING THE DIOPHANTINE EQUATION IN MATLAB
Since solving the Diophantine equation is an important part of the adaptive pole-placement regulator, the following functions will do just that. Listing 7.2 solves for the polynomials F and G
such that F A + BG = T . If a single time delay is introduced into the B polynomial such as in
Eqn. 7.10, then one needs simply to pad the B polynomial with a single zero. Note that the expression AF + BG is optionally computed using the polyadd routine given in Listing B.1, and
this polynomial should be close to the T polynomial specified.
Listing 7.2: Solving the Diophantine equation FA + BG = T for the polynomials F and G

function [F,G,Tcheck] = dioph(A,B,T)
% Solve the Diophantine equation FA + BG = T for the controller polynomials
% F & G. F is monic & is returned without its leading 1.
% (Body reconstructed as a sketch; only the final check survived extraction.)
na = length(A)-1; nb = length(B)-1;     % polynomial orders
nf = nb; ng = na-1;                     % required orders of F & G
n = na+nb;                              % order of the closed loop T
Tp = [T(:); zeros(n+1-length(T),1)];    % pad T with trailing zeros
S = zeros(n+1, nf+ng+1);                % Sylvester-type matrix, Eqn. 7.11
for i = 1:nf                            % columns of shifted A coefficients
    S(1+i:1+i+na, i) = A(:);
end
for j = 0:ng                            % columns of shifted B coefficients
    S(1+j:1+j+nb, nf+1+j) = B(:);
end
c = S\(Tp - [A(:); zeros(nb,1)]);       % solve the linear system for [f; g]
F = c(1:nf)'; G = c(nf+1:end)';
if nargout > 2  % then check that AF + BG = T as required
    Tcheck = polyadd(conv(A,[1,F]), conv(B,G));
end
return
AN ALTERNATIVE DIOPHANTINE SCHEME
An alternative, and somewhat more elegant, scheme for solving the Diophantine equation is
given in Listing 7.3 which uses matrix convolution to construct the Sylvester matrix.
Listing 7.3: Alternative Diophantine routine to solve FA + BG = T for the polynomials F and G. Compare with Listing 7.2.

function [F,G,Tcheck] = dioph2(A,B,T)
% Alternative Diophantine solver using convolution matrices to build the
% Sylvester matrix. Here F retains its leading unity coefficient.
% (Body reconstructed as a sketch around the surviving comments; the
%  function name is assumed. convmtx needs the Signal Processing Toolbox.)
da = length(A)-1; db = length(B)-1;      % da = deg(A) etc.
nf = db+1; ng = da;                      % # of coefficients in F & G
n = da+db+1;                             % # of coefficients in T
T = [T(:); zeros(n-length(T),1)];        % pad with zeros; convert to full length
RA = convmtx(A(:), nf);                  % convolution matrix of A
RB = [convmtx(B(:), ng); zeros(1,ng)];   % pad with zeros so the rows match
RAB = [RA, RB];                          % assuming RAB to be square
c = RAB\T;                               % solve
F = c(1:nf)'; G = c(nf+1:end)';
if nargout > 2
    Tcheck = polyadd(conv(A,F), conv(B,G));
end
return
The alternative scheme in Listing 7.3 is slightly different from the one presented earlier in that
this time the F polynomial contains the leading unity coefficient. A third method to construct
the Sylvester matrix is to use the lower triangle of a Toeplitz matrix in MATLAB, say with the
command tril(toeplitz(1:5)).
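For example, padding the coefficient vector [1 2 3] with zeros before taking the lower triangle of its Toeplitz matrix gives exactly the shifted-copy column structure of the blocks in Eqn. 7.11:

>> tril(toeplitz([1 2 3 0 0]))   % shifted copies of [1 2 3] in each column
ans =
     1     0     0     0     0
     2     1     0     0     0
     3     2     1     0     0
     0     3     2     1     0
     0     0     3     2     1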
EXAMPLE OF A DIOPHANTINE PROBLEM
Consider a plant with n_a = 3 and n_b = 2, so that n_f = n_b = 2, n_g = n_a - 1 = 2, and n_t ≤ n_a + n_b = 5. Expanding the closed-loop equation Eqn. 7.10,

    (1 + f_1 q^{-1} + f_2 q^{-2})(1 + a_1 q^{-1} + a_2 q^{-2} + a_3 q^{-3}) +
        q^{-1}(b_0 + b_1 q^{-1} + b_2 q^{-2})(g_0 + g_1 q^{-1} + g_2 q^{-2}) = 1 + t_1 q^{-1}

which, by equating the coefficients in q^{-i}, i = 1, ..., 5, gives the linear system following Eqn. 7.11,

\[
\begin{bmatrix}
1 & 0 & b_0 & 0 & 0 \\
a_1 & 1 & b_1 & b_0 & 0 \\
a_2 & a_1 & b_2 & b_1 & b_0 \\
a_3 & a_2 & 0 & b_2 & b_1 \\
0 & a_3 & 0 & 0 & b_2
\end{bmatrix}
\begin{bmatrix} f_1 \\ f_2 \\ g_0 \\ g_1 \\ g_2 \end{bmatrix}
=
\begin{bmatrix} t_1 - a_1 \\ -a_2 \\ -a_3 \\ 0 \\ 0 \end{bmatrix}
\]
AN EXAMPLE OF SOLVING THE DIOPHANTINE EQUATION
In this example we start with some known polynomials, and then try to reconstruct them using
the Diophantine routines presented previously.
Suppose we start with some given plant polynomials, say

    A(q^{-1}) = 1 + 2q^{-1} + 3q^{-2} + 4q^{-3},    B(q^{-1}) = 5 + 6q^{-1} + 7q^{-2}

where the orders are n_a = 3 and n_b = 2. This means we should choose orders n_f = n_b = 2 and n_g = n_a - 1 = 2, so let us invent some controller polynomials

    F(q^{-1}) = 1 + 8q^{-1} + 9q^{-2},    G(q^{-1}) = 10 + 11q^{-1} + 12q^{-2}

noting that A and F are monic. We can now compute the closed loop, AF + BG, using convolution and polynomial addition as shown in Listing 7.4.
Listing 7.4: Constructing polynomials for the Diophantine equation example
>> A = [1 2 3 4]; B = [5 6 7];
>> F = [1 8 9]; G = [10 11 12];        % Note nf = nb & ng = na - 1.
>> T = polyadd(conv(A,F), conv(B,G))   % closed loop T = AF + BG
T =
    51   125   224   195   143    36
>> [F2,G2] = dioph(A,B,T)              % now recover F (sans leading 1) & G
F2 =
     8     9
G2 =
    10    11    12
Note that in both cases we manage to compute the polynomials F(q^{-1}) and G(q^{-1}) which reconstruct back to the correct T(q^{-1}), although the routine in Listing 7.2 drops the leading 1 in the monic F polynomial.
7.5.3 ADAPTIVE POLE-PLACEMENT WITH RLS IDENTIFICATION
We now have all the components required for an adaptive controller. We will use an RLS module for the plant identification, and then subsequently solve the Diophantine equation to obtain the required closed loop response. Algorithm 7.2 describes the steps behind this adaptive controller.
Algorithm 7.2 Adapting pole-placement algorithm
We will follow Fig. 7.6 which is a specific case of the general adaption scheme given in Fig. 7.1.
1. At each sample time t, collect the new system output, y(t).
2. Update the polynomial model estimates A, B using recursive least-squares on

       A y(t) = B u(t-1) + e(t)
3. Synthesize the controller polynomials, F, G by solving the identity,
       FA + q^{-1} BG = T
where T is the desired closed-loop response. (You could use the dioph routine given in
Listing 7.2.)
4. Construct H using either Eqn. 7.12 or Eqn. 7.13.
5. Apply control step
       F u(t) = H r(t) - G y(t)
6. Wait out the remainder of the sample time & return to step 1.
Figure 7.6: Adaptive pole-placement control structure with RLS identification. See also Fig. 7.5.
AN ADAPTIVE CONTROLLER WITH RLS IDENTIFICATION EXAMPLE
In this example we will attempt to control three different plants in the configuration shown in
Fig. 7.7. The open loop step tests of the plants are compared in Fig. 7.8. Of the three plants, G1 is
stable and well behaved, G2 has open-loop unstable poles, and G3 has unstable zeros.
Figure 7.7: Control of multiple plants with an adapting controller. We desire the same closed loop
response irrespective of the choice of plant.
In practice we must estimate the process model and if the plant changes, we redesign our controller such that we obtain the same closed loop response. The customer for this application
demands a slight overshoot for the controlled response as shown in Fig. 7.9. We can translate this
Figure 7.8: Step responses of the 3 discrete open-loop plants (one of which is unstable), and the
corresponding pole-zero maps. Note that G3 has zeros outside the unit circle.
requirement for the denominator of the closed-loop transfer function to be

    T = 1 - 1.4685q^{-1} + 0.6703q^{-2}
Figure 7.9: The desired closed-loop response, 1/T(q^{-1}), showing a slight overshoot.
Listing 7.6 and Fig. 7.10 show the impressive results for this combined identification and adaptive control simulation when using the three plants from Fig. 7.8. Note that the controller design computation is done in the dioph.m routine given in Listing 7.2, while the identification is done using the rls routine.

Here I assume a model structure with n_a = 3, n_b = 3, and I add a small amount of noise, σ = 0.005, to keep the estimation quality good. A forgetting factor of λ = 0.95 was used since the model changes were known a priori to be rapid.
Listing 7.6: Adaptive pole-placement control with 3 different plants

Gx = struct('b',[3 0.2 0.5],'a',poly([0.95 0.9 0.85])); % 3 different plants
Gx(2) = struct('b',[2,2,1],'a',poly([1.1 0.7 0.65]));
Gx(3) = struct('b',[0.5 0.2 1],'a',poly([0.6 0.95 0.75]));

t = [0:1500]'; yspt = square(t/25);   % time & setpoint (assumed in this sketch)
tG2 = 500; tG3 = 1000;                % plant swap times (assumed in this sketch)
y = zeros(size(t)); u = y; k = 1;     % start with plant #1
T = [1 -1.4685 0.6703];               % desired closed-loop polynomial T

na = 3; nb = 3;                       % order of polynomials, na, nb
a = randn(1,na); b = randn(1,nb);     % model estimates
np = length([a,b]);                   % # of parameters to be estimated
P = 1.0e4*eye(np);                    % Large initial co-variance, P0 = 1e4*I
lambda = 0.95;                        % Forgetting factor, 0 < lambda < 1
param_est = [a,b]';

for i=10:length(t)
    if t(i) == tG2, k=2; end          % Swap over to model #2
    if t(i) == tG3, k=3; end          % Swap over to model #3

    % Plant response (with a little noise to keep the estimation alive)
    y(i) = Gx(k).b*u(i-1:-1:i-length(Gx(k).b)) - ...
        Gx(k).a(2:end)*y(i-1:-1:i-length(Gx(k).a)+1) + 0.005*randn(1);

    % Model identification
    x = [-y(i-1:-1:i-na)', u(i-1:-1:i-length(b))'];  % shift register
    [param_est,P] = rls(y(i),x',param_est,P,lambda); % update parameters, Listing 6.18.
    a = [1, param_est(1:na)']; b = param_est(na+1:end)';

    % Controller re-design & control law (sketch): solve FA + q^-1*BG = T
    [F,G] = dioph(a,[0,b],T);         % see Listing 7.2
    h = sum(T)/sum(b);                % scalar pre-compensator H, Eqn. 7.12
    u(i) = h*yspt(i) - G*y(i:-1:i-length(G)+1) - F*u(i-1:-1:i-length(F));
end
Fig. 7.10 shows the impressive results of this adaptive controller. When the plant changes, unbeknown to the controller, the system identification updates the new model, and the controller
consequently updates to deliver a consistent response.
If the pole-placement controller is doing its job, then the desired closed loop response should
match the actual output of the plant. These two trajectories are compared in Fig. 7.11 which
shows an enlarged portion of Fig. 7.10. Here it can be seen that apart from the short transients
when the plant does change, and the identification routine is struggling to keep up, the adaptive
pole-placement controller is satisfying the requirement to follow the reference trajectory.
7.5.4 ADAPTIVE POLE-PLACEMENT IN SIMULINK
The time-varying filter blocks make it relatively easy to implement an adaptive pole-placement controller in SIMULINK as shown in Fig. 7.12. The controller design is executed in the orange RLS & APP sub-block which is shown in Fig. 7.13. This sub-block contains two m-file scripts
Figure 7.10: Adaptive pole-placement with RLS identification. After a period for the estimated
parameters to converge (lower trends), the controlled output response (upper trend) is again reasonably consistent despite model changes. Results from Listing 7.6.
Figure 7.11: An enlarged portion of Fig. 7.10 showing the difference between the actual plant
output y(t), and the reference output, y (t).
that first do the RLS model identification update, and then do the pole-placement design based
on the currently identified model. The various memory blocks and the initial condition blocks are
needed to ensure that we do not have any algebraic loops when we start the simulation.
Figure 7.12: Adaptive pole-placement with identification in S IMULINK. See also Fig. 7.13.
Figure 7.13: Internals of the Adaptive pole-placement design routine from Fig. 7.12.
7.6 PRACTICAL ADAPTIVE POLE-PLACEMENT

Some problems still remain before the adaptive pole-placement algorithm is a useful and practical control alternative. These problems include:
- Poor estimation performance while the system is in good control, and vice-versa (poor control during the periods suitable for good estimation). This is known as the dichotomy of adaptive control, or dual control.
- Non-minimum phase systems possessing unstable zeros in B, which must not be cancelled by the controller.
- Dead time, where at least b_0 = 0.
We must try to solve all these practical problems before we can recommend this type of controller.
AN EXAMPLE OF BURSTING
If our controller does its job perfectly, then the output remains at the setpoint. Under these circumstances we do not have any identification potential left, and the estimated model starts to drift. The model will continue to drift until the control deteriorates so much that there is a period of good excitation (and poor control), during which the model re-converges again.
Fig. 7.14 shows the experimental results from using an adaptive controller, (in this case an adaptive LQR regulator) on the blackbox over a substantial period of time. The controller sampling at
T = 1 second was left to run over night for 10 hours (36,000 data points), and for a 3 hour period
in the middle we had no setpoint changes, and presumably no significant disturbances. During
this time, part of which is shown in Fig. 7.14, the adaptive controller oscillated into, and out of,
good control. This is sometimes known as bursting.
Figure 7.14: Bursting phenomena exhibited by the black box when using an adaptive controller.
With no setpoint changes or external disturbances, the adaptive controller tends to oscillate between periods of good control and bad estimation and the reverse.
7.6.1 DEALING WITH POORLY DAMPED OR UNSTABLE ZEROS
If we choose H to be the scalar

    H = T/B |_{q=1}

as proposed in Eqn. 7.12, then we do not cancel any zeros, and we avoid the problem of inadvertently inverting unstable model zeros. However we can achieve a better transient response if we cancel the zeros with the pre-compensator

    H = 1/B

giving

    y(t) = (HB/T) r(t-1) = (1/T) r(t-1)
which looks fine until we investigate closer what is likely to happen if B has zeros outside the unit
circle. Remember that the polynomial B is a function of the plant that we are trying to control
and hence an output of the estimation scheme. We have no influence on the properties of B; we
get what we are given.
In fact, we will have problems not only if the zeros are unstable, but even if they are stable but
poorly damped. Fig. 7.15 shows the adaptive pole-placement response of a plant with stable, but
poorly damped zeros when using H = 1/B. In fact the zeros are at 0.95e^{±j140°}, indicating that they
are close to, but just inside the unit circle. While the output response looks fine, it is the ringing
behaviour of the input that should cause concern.
The results in Fig. 7.15 were obtained by using the modified controller given in Listing 7.7 in Listing 7.6.

Listing 7.7: The pole-placement control law when H = 1/B
% Sketch of the modified control law F u = (1/B) r - G y: the setpoint is
% first filtered through 1/B before the usual feedback law is applied.
% (Reconstructed; only a fragment of the original listing survived.)
rf(i) = (yspt(i) - b(2:end)*rf(i-1:-1:i-length(b)+1))/b(1);   % rf = r/B
u(i) = rf(i) - G*y(i:-1:i-length(G)+1) - F*u(i-1:-1:i-length(F));
For example, consider a plant with B = 1 - 1.5q^{-1} (a zero outside the unit circle at q = 1.5) and a desired closed loop of T = 1 - 0.7q^{-1}. Naively cancelling the entire plant numerator q^{-1}B with the pre-compensator H = 1/(q^{-1}B) gives

    y(t) = (q^{-1} BH / (FA + q^{-1} BG)) r(t) = (q^{-1}(1 - 1.5q^{-1}) / (1 - 0.7q^{-1})) * (q / (1 - 1.5q^{-1})) r(t)
344
Figure 7.15: Adaptive pole-placement with a stable, but poorly damped, B polynomial. The
poorly damped zeros cause the ringing behaviour in the input.
which has a time-advance term, q^{+1}, meaning that the control law is a non-causal function of the setpoint. In other words, we need to know r(t+1) at time t. In fact since we need to know only future values of the setpoint, and not of the output, this type of control law is easily implemented.
The main drawback stems, however, not from the acausality, but from the ill-advised cancellation.
In all practical cases, we will experience some small numerical round-off error, such as where the pole is represented in a finite-word-length computer as 1.50001 rather than exactly 1.5. Consequently the pole will not completely cancel the zero,

    y(t) = (q^{-1}(1 - 1.5q^{-1}) / (1 - 0.7q^{-1})) * (q / (1 - 1.50001q^{-1})) r(t)

where the imperfectly cancelled final factor carries the small error.
The remedy is that if we discover unstable modes in B, we simply do not cancel them, although we continue to cancel the stable modes. We achieve this by separating the B polynomial into two groups of factors: the stable roots, B+, (which we will colour green for good), and the unstable roots, B-, (which we will colour red for bad),

    B := B+ B-                                              (7.14)

and then set H = 1/B+ only. A simple MATLAB routine to do this factorisation is given in Listing 7.8 in section 7.6.2.
The closed-loop specification is modified so that the stable plant zeros, B+, also appear as factors of the closed loop,

    FA + q^{-1} BG = T B+                                   (7.15)

giving the closed-loop response

    y(t) = (HB/(T B+)) r(t-1) = (HB-/T) r(t-1)              (7.16)
We solve the modified pole-assignment problem, Eqn. 7.15, by noting that B+ must be a factor of F,

    F = B+ F_1

so we solve the now reduced Diophantine equation

    A F_1 + q^{-1} B- G = T

in the normal manner.
We should always check for unstable zeros in the B polynomial when using online estimation
because we have no control over what the RLS routine will deliver. Even in a situation where we
know, or reasonably suspect, a stable B, we may stumble across an unstable B̂ while the estimates are still converging to the correct values.
As an example, we can apply the adaptive pole-placement controller where we cancel the stable B
polynomial zeros, but not the unstable ones, using the three plants from page 337. Fig. 7.16 shows the result of this controller, where we note that the performance of plant #3 is also acceptable, exhibiting no ringing.
For diagnostic purposes, we can also plot the instantaneous poles and zeros of the identified model, and perhaps display this information as a movie, or, as given in Fig. 7.17, with decreasing intensity of colour for the older data. This will show how the poles and zeros migrate, and when zeros, or even poles, leap outside the unit circle. Fig. 7.17 also shows the magnitude of the estimated poles and zeros, which clearly indicates when they exceed 1 and jump outside the unit circle.
7.6.2 SEPARATING STABLE AND UNSTABLE FACTORS
When applying pole-placement in the case with unstable zeros, we need to be able to factor B(q) into stable and unstable factors. Listing 7.8 does this factorisation, and additionally it also rejects the stable but poorly damped factors, given a user-specified damping ratio, ζ. (If you want simply stable/unstable, use ζ = 0.) Fig. 7.18 illustrates the region for discrete well-damped poles, where well-damped is taken to mean damping ratios ζ > 0.15. This algorithm is adapted from [150, p238].
Listing 7.8: Factorising an arbitrary polynomial B(q) into stable, B+(q), and unstable and poorly damped, B-(q), factors such that B = B+ B- and B+ is defined as monic.
Figure 7.16: Adaptive pole-placement control with H = 1/B+ and an unstable B polynomial in Plant #3. Note that the controller does not exhibit any ringing, even with a plant with unstable zeros.
Figure 7.17: The migration of poles and zeros in the complex plane (left hand plot) and the time
trend of the magnitude of the poles and zeros (right-hand plot) from the simulation presented in
Fig. 7.16.
function [Bp,Bm] = factorBz(B,zeta)
% Factorise B(q) into well-damped, Bp = B+(q), & poorly damped or unstable,
% Bm = B-(q), factors such that B = B+ B-.
% The optional parameter, zeta, defines the cutoff. Use zeta = 0 for inside/outside circle.
% Note B+(q) or Bp is defined as monic.
% (Function header & some fixes reconstructed as a sketch from the surviving body.)
if exist('zeta','var') ~= 1
    zeta = 0.15;                  % default reasonable cut-off is zeta = 0.15.
end
zeta = max(min(zeta,1),0);        % Ensure 0 <= zeta <= 1.
tol = 1e-6; r = roots(B);

% Now separate into poorly damped & good poles
mag_r = abs(r); rw = abs(angle(r))/2/pi;       % See equations in [150, p238].
mz = exp(-2*pi*zeta/sqrt(1-zeta^2)*rw);        % |z| on the damping boundary
idx = mag_r < mz;                 % roots inside the well-damped region
Bp = poly(r(idx));                % Set B+(q) to contain all the well-damped zeros (monic)
Bm = poly(r(~idx))*B(1);          % All the others, scaled by the leading coefficient of B
if norm(conv(Bp,Bm)-B) > tol      % Check B = B+ B-
    warning('Problem with factorisation')
end % if
return
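As a quick check of the routine above, consider B(q) = (q - 0.5)(q - 1.5) = q^2 - 2q + 0.75, which has one zero inside and one outside the unit circle:

>> [Bp,Bm] = factorBz([1 -2 0.75], 0)  % zeta = 0: split at the unit circle
Bp =
    1.0000   -0.5000
Bm =
    1.0000   -1.5000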
7.6.3 EXPERIMENTAL ADAPTIVE POLE-PLACEMENT
Actually running an adaptive pole-placement scheme on a real plant, even a well behaved one
in the laboratory is considerably more difficult than the simulation examples presented so far.
Our desire is to test an adaptive pole-placement controller on the black-box to demonstrate the
following:
1. To follow a specified desired trajectory (as quantified by the T polynomial),
say the discretised equivalent of the continuous-time response

    Gcl = 1/(4.41s^2 + 2.52s + 1)

which at a sample time of ΔT = 0.6 seconds is

    Gcl(q^{-1}) = (0.03628 + 0.03236q^{-1}) / (1 - 1.641q^{-1} + 0.7097q^{-2})

2. To maintain that response despite changes in the plant dynamics.
7.6.4 MINIMUM VARIANCE CONTROL WITH DEAD TIME

The minimum variance control formulation of section 7.4.1 is not very useful, not least due to its inability to handle any dead time or unstable process zeros. Even without the dead time, practical
experience has shown that this control is too vigorous and tends to be over-demanding, causing excessive manipulated variable fatigue. Moreover, when using discrete time models, delay between input and output is much more common, and typically the deadtime is at least one sample period. The best we can do in these circumstances is to make the best prediction of the output d sample times ahead, and use this prediction in our control law. This introduces the general concept of minimum variance control which combines prediction with control. Further details are given in [23, pp367-379].
Suppose we start with the ARMAX canonical model written in the forward shift operator q,

    A(q) y_k = B(q) u_k + C(q) e_k                          (7.17)
Figure 7.19: Adaptive pole-placement of the black-box showing consistent output response behaviour despite the plant change when the switch was manually activated at t = 128 seconds.
Plot (A): black-box output (solid), intended trajectory (dashed) and setpoint (dotted). (Refer also to Fig. 7.20 for an enlarged version.)
Plot (B): Controller input to the plant, Plot (C): Plant parameter estimates, and Plot (D): Trace of
the covariance matrix.
Figure 7.20: An enlarged portion of Fig. 7.19 comparing the actual output y, the intended trajectory y*, and the setpoint around the time of the plant change.
and we assume that A and C are monic. We can always divide out the coefficients such that A
is monic, and we can choose the variance of the white noise term so that C is monic. For this
simplified development we will also need to assume that all the zeros of both C and B are inside
the unit circle. The pole excess of the system d, is the difference in the order of the A and B
polynomials and is equivalent to the system time delay. The output d steps ahead in the future
is dependent on the current and past control actions, u, which are known, and the current, past and future disturbances, the last of which are not known. To avoid using the unknown disturbance values we need to make an optimal prediction.
OPTIMAL PREDICTION

The disturbance contribution to the output is

    y_k = (C(q)/A(q)) e_k                                   (7.18)

and its optimal d-step-ahead prediction can be written as

    ŷ_{k+d|k} = (q G(q)/C(q)) y_k                           (7.19)

where the quotient polynomial F and remainder polynomial G satisfy

    q^{d-1} C(q) = A(q) F(q) + G(q)                         (7.20)

We can calculate the quotient F and remainder G polynomials in Eqn. 7.20 from polynomial division in MATLAB using the deconv function, or alternatively, one could calculate them by equating the coefficients in Eqn. 7.20.
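For example, with some invented polynomials and a pole excess of d = 2, the division might be performed as:

% Quotient F & remainder G of q^(d-1) C = AF + G via deconvolution
A = [1 -0.9 0.2]; C = [1 0.3 0.1]; d = 2;   % invented example polynomials
qC = conv([1, zeros(1,d-1)], C);            % q^(d-1) C(q)
[F,G] = deconv(qC, A)                       % so that qC = conv(A,F) + G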
MINIMUM VARIANCE CONTROL LAW

Once we can make an optimal prediction using the formulation of Eqn. 7.20, we can predict the output d sample times into the future and then calculate the control action required to equate this with the setpoint. This gives the desired minimum variance control law as

    u_k = -(G(q) / (B(q) F(q))) y_k                         (7.21)
The closed loop formulation found by combining Eqn. 7.17 and Eqn. 7.21 is

    A y_k = -B (G/(BF)) y_k + C e_k
    AF y_k = -G y_k + CF e_k
    (AF + G) y_k = CF e_k
    y_k = (CF/(AF + G)) e_k = q^{-(d-1)} F e_k              (7.22)

where the last equation is simplified using Eqn. 7.20. Eqn. 7.22 indicates that the closed loop response is a moving average process of order d-1. This provides us with a useful check in that the covariance of the error will vanish for delays larger than d-1.
Listing 7.9 designs the F(q) and G(q) polynomials required in the control law, Eqn. 7.21, by using deconvolution. Note this routine uses stripleadzeros from Listing B.3 in Appendix B to strip any leading zeros from the B(q) polynomial.

Listing 7.9: Minimum variance control design
function [F,G,BF] = minvar(A,B,C)
% Minimum variance control design: solve q^(d-1) C = AF + G by polynomial
% division for the control law u = -G/(BF) y.
% (Body reconstructed as a sketch; the function name is assumed.)
B = stripleadzeros(B);        % strip leading zeros of B, Listing B.3
d = length(A) - length(B);    % pole excess = deadtime
qd = [1, zeros(1,d-1)];       % Construct q^(d-1)
qC = conv(qd, C);             % Construct q^(d-1) C(q)
[F,G] = deconv(qC, A);        % quotient F & remainder G: qC = AF + G
BF = conv(B,F);
return
The step response and pole-zero plot in Fig. 7.21 illustrate the potential difficulties with this plant.
Figure 7.21: A non-minimum phase plant with an unstable zero which causes an inverse step response.
We can design a moving average controller for this NMP plant as follows.

B = stripleadzeros(B);   % strip the deadtime (leading zeros) from B, Listing B.3
[Bp,Bm] = factorBz(B);   % factor B = B+ B- with Listing 7.8; only B+ is cancelled
Note that this controller does not attempt to cancel both process zeros, since that would result in an unstable controller giving rise to unbounded internal signals. Instead we cancel only the one stable process zero, giving rise to a moving average controller.

In the example shown in Fig. 7.22, F(q) = q + 5.0567, which indicates that the expected variance of the output is σy² = (1² + 5.06²)σe² = 26.6 σe² which, provided we take enough samples as shown in Fig. 7.22(b), it is.
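This prediction is easily verified by simulation; a minimal sketch filtering unit-variance white noise through F is:

% Check sigma_y^2 = (1^2 + 5.0567^2) for the closed-loop MA process y = F(q) e
F = [1 5.0567]; e = randn(1e6,1);   % unit-variance white noise
y = filter(F,1,e);
[var(y), sum(F.^2)]                 % both should be close to 26.6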
353
Random
Number
F(z)
ey
1
To Workspace
F
C
C(z)
1
B(z)
A(z)
Add
Scope
Gain
G(z)
den(z)
Min var ctrlr
Variance
20
variance of output
15
10
variance of noise
5
0
3
10
10
10
Experiment duration
10
Figure 7.22: Minimum variance moving average control. If the controller is designed correctly,
then the output variance should be approximately equal to the variance of the noise signal filtered
through F , which in this case it is, provided we take enough samples.
7.7 SUMMARY

Adaptive control is where the controller adjusts in response to changing external conditions. It is suitable for plants that are time varying, severely nonlinear, or perhaps just unknown at the controller design stage. The general adaptive controller has two parts, an identification part and a controller design part, although in some implementations these two components are joined together. The main styles of adaptive control are:
- Where the controller parameters are some static function of the output. The simplest example is gain scheduling. This has the advantage that it is easy to implement, but it is really only open loop adaption, and may not perform as expected if the conditions change in an unpredictable manner.
- Where the classical PID controller is augmented with a self-tuning option. This can be based on the Ziegler-Nichols design, using a relay feedback to extract the ultimate frequency and gain.
- Where the parameters of the plant transfer function model are estimated online using the techniques described in chapter 6, and the controller is designed to control this model. If the controller is approximately the inverse of the process model, this leads to minimum variance control, but that is usually too severe, and even tends to instability in many actual applications. A better alternative is adaptive pole-placement, where reasonable closed loop poles are specified by the control engineer, and the controller is varied to provide this performance as the identified model changes.
As with all high performance control schemes, there are some open issues with adaptive controllers. For the simple schemes presented here we should be aware of the following issues:
1. The structure (order and amount of deadtime) of the plant must be known beforehand, and
typically is assumed constant.
2. Minimum variance control, while simple to design, attempts to force the process back to the
desired operating point in one sample time. This results in overly severe control action.
3. There are potential numerical problems when implementing recursive estimators, particularly in fixed point or with short word lengths.
4. If dead time is present, and dead time of at least one sample time usually is, then the controller will be unrealisable since future values of y are needed to calculate current values of u. In this case a sub-optimal controller is used, where the dead time is simply ignored, or the future output is predicted.
However the real problem of adaptive control is that when the plant is in control, there is little or no information going to the estimator. Hence the controller parameters begin to oscillate
wildly. This is not normally noticed by the operator, because the process is in control. However as
soon as a disturbance enters the plant, the controller will take extra action attempting to correct
for the upset, but now based on the wildly inaccurate plant model. During this time of major
transients however, the widely changing process conditions provide good information to the estimator which soon converges to good process parameters, which in turn subsequently generate
appropriate controller parameters. The adaptive controller then brings the process back under
control and the loop starts all over again. This results in alternating periods of very good control
and short periods of terrible control. One solution to this problem is to turn off the estimator
whenever the system is at the setpoint.
In summary, when applying adaptive control, one wants:
- a constant output for good control, but
- a varying input and output for good estimation (which is required for good control)!
This conflict of interest is sometimes called the dichotomy of adaptive control.
PROBLEMS
Problem 7.1
1. Construct a SIMULINK diagram to simulate the general linear controller with 2 degrees of freedom; that is, with the R, S and T controller polynomials. Follow [19, Fig 3.2, p93].
Hint: Use the discrete filter blocks in z^{-1}, since these are the only blocks in which you can construct S(z^{-1})/1.
2. Simulate the controlled response in problem 1 using the polynomials from [19, Ex 3.1], repeated here:

    B(q)/A(q) = (0.1065q + 0.0902) / (q^2 - 1.6065q + 0.6065)

and

    R(q) = q + 0.8467,    T(q) = 1.6530q
3. Verify that the closed loop is the same as the expected model,

    BT/(AR + BS) = Bm/Am

in example 3.1, [19, Ex 3.1, pp97-98].
Hint: You may like to use the MATLAB symbolic toolbox.
4. Construct an m-file to do the minimum degree pole-placement design.
(a) Do an "all zeros cancelled" version.
(b) Do a "no zeros cancelled" version.
(c) Do a general version, perhaps with some automated decision as to which zeros to cancel.
5. For the minimum variance controller we need to be able to compute the variance of the
output of a FIR filter, refer [19, chapter 4].
(a) What is the theoretical output variance of an FIR filter with input variance σe²?
i. In SIMULINK you have the choice of the DSP format z^{-1} or the transfer function version z. What would be the natural choice for q?
ii. Test your answer to part (1) by constructing a simulation in SIMULINK.
(b) Approximately how many samples do you need to obtain 2, 3 and 4 decimal places of accuracy in your simulation?
(c) Using the hist function look at the distribution of both the input, e, and output, y. Is
the output Gaussian and how could you validate that statistically?
6. For stochastic control and for regulatory control topics we want to be able to elegantly simulate models of the form
    A(q) y(t) = B(q) u(t) + C(q) e(t)                       (7.23)
where A, B, C are polynomials in the forward shift operator q. Note that the deadtime is
defined as deg(A)-deg(B).
(a) For the standard model in Eqn. 7.23 what are the constraints on the polynomials
A, B, C?
i. Choose sensible polynomials for A, B, C with at least one sample of deadtime.
Create a model object from the System Identification toolbox using the idpoly
command.
CHAPTER 8

MULTIVARIABLE CONTROLLER DESIGN
As our engineering endeavors became more sophisticated, so did our control problems. The
oil crises of the mid 1970s gave impetus for process engineers to find ways to recycle energy
and materials. Consequently the plants became much harder to control because they were now
tightly coupled to other plants. As the plants became multivariable, it made sense to consider
multivariable controllers.
If we have a dynamic system with say 3 states and 3 inputs, then assuming that our system is
controllable, we can feedback the three states directly to the three inputs, state 1 to input 1, state 2
to input 2 etc. This is termed de-centralised control, and our first dilemma is which state should
be paired with which input. We then have the problem of tuning the three loops, especially if
there are some interactions.
Alternatively we could feed back each state to all the inputs, which is termed centralised control. In matrix form, the two alternative control laws look like

\[
u = -\underbrace{\begin{bmatrix} \star & 0 & 0 \\ 0 & \star & 0 \\ 0 & 0 & \star \end{bmatrix}}_{\text{de-centralised}} x
\qquad \text{or} \qquad
u = -\underbrace{\begin{bmatrix} \star & \star & \star \\ \star & \star & \star \\ \star & \star & \star \end{bmatrix}}_{\text{centralised}} x
\]
Exploiting the off-diagonals of the gain matrix tends to be more efficient and is often what is meant by multivariable control. While centralised or multivariable control structures typically give better results for similar complexity, they are harder to design and tune since we have more controller parameters (9 as opposed to 3) to select. Not surprisingly, decentralised control remains popular in industry, not least for the following reasons given by [89]:
1. Decentralised control is easier to design and implement than a full multivariable controller,
and also requires fewer tuning parameters.
2. They are easier for the operators to understand and possibly retune.
3. Decentralised controllers are more tolerant of manipulated or measurement variable failure.
4. Startup and shutdown operating policies are easier with decentralised controllers, since the
control loops can be gradually brought into service one by one.
Despite these application style advantages of decentralised controllers, it is useful to study what
is possible with a full multivariable controller. This is the purpose of this chapter.
Finally a one-step optimal controller applicable to nonlinear processes, called GMC (Generic Model Control), is presented in section 8.5. Exact feedback linearisation, a generalisation of the GMC controller, follows in section 8.6.
8.1 CONTROLLABILITY AND OBSERVABILITY

This section will introduce in a descriptive manner the concepts of controllability, observability and other "abilities". For a more formal treatment see for example [152, §9-4, pp699-713] or [150, pp627-662]. The criterion of controllability is given first in section 8.1.1, followed by the related property of observability (which is the dual of controllability) in section 8.1.2. These conditions are necessary for some of the multi-variable controller and estimator designs that follow. Methods to compute the controllability matrix, and to establish the condition, are given in section 8.1.3.
8.1.1 CONTROLLABILITY
When designing control systems, we implicitly assume that we can, at least in principle, influence the system to respond in some desired manner. However, it is conceivable, perhaps due to some structural deficiency, that we cannot independently manipulate all the states of the system. The controllability criterion tells us if this is the case.

Formally defined, a system is controllable if it is possible to transfer from any arbitrary initial state x0 to any other desired state x* in a finite time using only an unconstrained control signal u.
Note that while the transformation is allowed to take place over more than one sample time,
it must be achieved in a finite time. In addition, even if the above controllability requirement is satisfied, you may not be able to keep the state at the desired position; the guarantee is only that x could pass through x* at some time in the future. Nevertheless, it is instructive to know if a given system is controllable or not. Some care is needed with these terms. Åström and Wittenmark, [23, p127], define reachability as I have defined controllability above, and then define controllability as a restricted version where x* = 0. These two definitions are equivalent if A or Φ is invertible.
First consider a continuous dynamic system that has a diagonal A matrix and an arbitrary B matrix,

\[
\dot{x} = \begin{bmatrix} \star & 0 & \cdots & 0 \\ 0 & \star & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & \star \end{bmatrix} x + B u
\]
where a ⋆ indicates the presence of a non-zero value. Now suppose that u(t) = 0; then the steady state is x_ss = 0. Since A is a diagonal matrix, there is no interaction between the states. This means that if one particular state, x_i, is 0 at some time t, then it will always remain at zero since u = 0. Now consider the full system (u ≠ 0), but suppose that the elements of the ith row of B are all zeros, such as the second row in the following control matrix,

\[
B = \begin{bmatrix} \star & \star \\ 0 & 0 \\ \star & \star \\ \star & \star \end{bmatrix}
\]
Then if xi happens to be zero, it will remain at zero for all time irrespective of the value of the
manipulated variable u. Quantitatively this is due to the following two points:
359
1. No manipulated variable, uj , can modify the value of xi because all the coefficients in the B
matrix for that particular state are zero.
2. No other states can influence xi because in this special case the system matrix A is diagonal.
Hence if xi is zero, it will stay at zero. The fact that at least one particular state is independent of
the manipulated vector means that the system is uncontrollable. We cannot influence (and hence
control) all the states by only changing the manipulated variables.
In summary, a system will be uncontrollable if the system matrix A is diagonal and any one or more rows of B are all zero. This is sometimes known as Gilbert's criterion for controllability.
If the matrix A is not diagonal, then it may be transformed to a diagonal form (or Jordan block form), and the above criterion applied. We can easily transform the system to a block diagonal form using the canon function in MATLAB. However a more general way to establish the controllability of the n x n system is to first construct the n x (nm) controllability matrix C,
\[
\mathcal{C} \stackrel{\text{def}}{=} \begin{bmatrix} B & AB & A^2B & \cdots & A^{n-1}B \end{bmatrix} \tag{8.1}
\]
and then compute the rank of this matrix. If the rank of C is n, then all n system modes are controllable. This is proved in problem [150, A-6-6, p753]. The discrete controllability matrix is defined in a similar manner,

\[
\mathcal{C} \stackrel{\text{def}}{=} \begin{bmatrix} \Delta & \Phi\Delta & \Phi^2\Delta & \cdots & \Phi^{n-1}\Delta \end{bmatrix} \tag{8.2}
\]
and also must be of rank n for the discrete system to be controllable.
Output controllability is given in Example A-9-11 Ogata, [152, p756], and [150, p636].
The rank of a matrix is defined as the number of independent rows (or columns) in a matrix, and its computation is a delicate numerical task using a finite precision computer. The presence of a non-zero determinant is not a good indication of rank because the value of the determinant scales so poorly. By the same argument, a very small determinant, such as say 10^{-6}, is also not a reliable indication of dependence. Numerically the rank is computed reliably by counting the number of non-zero singular values, or the number of singular values above some reasonable threshold value. See also section 8.1.3.
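For instance, a numerically safer rank computation for a controllability matrix counts the significant singular values; the two-state system below is invented purely for illustration:

A = [0 1; -0.5 -1.5]; B = [0; 1];   % an example 2-state system
Co = ctrb(A,B);                     % controllability matrix [B, AB]
s = svd(Co);                        % singular values
n_ctrb = sum(s > max(size(Co))*eps(max(s)))   % numerical rank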
The controllability matrix for the submarine system in section 2.8.2 is

\[
\mathcal{C} = \begin{bmatrix} B & AB & A^2B \end{bmatrix} = \begin{bmatrix} 0 & 0.095 & 0.0192 \\ 0.095 & 0.0192 & 0.0048 \\ 0.072 & 0.0283 & 0.0098 \end{bmatrix}
\]

which has a rank of 3. Hence the submarine system is controllable.
8.1.2 OBSERVABILITY

Formally defined, a system is observable if the initial state x0 can be determined from a finite sequence of output measurements, y.
Note that the observation is allowed to take place over more than one sample time. Once we have
back-calculated x0 , we can use the history of the manipulated variable ut to forward calculate
what the current state vector xt is. It is clear that this condition of observability is important in
the field of estimation.
As a trivial example, suppose we had a 3 state system and we have three outputs which are related to the states by

\[
y = Cx = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} x
\]

Here the outputs (measurements) are identically equal to the states (C = I). It is clear that we can evaluate all the states simply by measuring the outputs and inverting the measurement matrix, x = C^{-1} y = I y = y, since C^{-1} = I in this trivial example. This system is therefore observable.
Suppose however we measure only two outputs, where C is now the non-square matrix

\[
y = Cx = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} x
\]
Now, obviously, if we recorded only one output vector (2 measurements), we could not hope to
reconstruct the full 3 state vector. But the observability property does not require us to estimate x
in one sample time, only in a finite number of sample times. Hence it may be possible for a system
with fewer outputs than states to be still observable.
To get a feel for the observability of a system, we will use a similar line of argument to that used for the controllability. Suppose we have a diagonal system and an arbitrary output matrix

\[
\dot{x} = \begin{bmatrix} \star & 0 & \cdots & 0 \\ 0 & \star & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & \star \end{bmatrix} x \tag{8.4}
\]

\[
y = Cx \tag{8.5}
\]
Note that any system can be converted into the above form, and for the observability criterion we are
not interested in the manipulated variable effects. Now if any of the columns of the C matrix are
all zeros, then this is equivalent to saying that a particular state will never influence any of
the outputs, and, since the system matrix is diagonal, that same state is never influenced by any
of the other states. Therefore that state can never be deduced from the outputs, and the system is
unobservable. Note the symmetry of this criterion to the controllability criterion.
Formally, for a linear continuous time system described by Eqn. 2.40, the n-state system is observable if the np × n observability matrix O is of rank n, where

$$\mathcal{O} \overset{\text{def}}{=} \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{n-1} \end{bmatrix}, \qquad \text{or in the discrete case} \qquad \mathcal{O} \overset{\text{def}}{=} \begin{bmatrix} C \\ C\Phi \\ C\Phi^2 \\ \vdots \\ C\Phi^{n-1} \end{bmatrix} \tag{8.6}$$

Some authors, perhaps to save space, define O as a row concatenation of matrices rather than
stacking them in a column as done above.
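A minimal sketch of this test in MATLAB (the example system is an assumption):

A = [-2 1; 0 -3]; C = [1 0];  % an example 2-state system
O = obsv(A,C);                % stacks C; C*A; ... as in Eqn. 8.6
rank(O)                       % = 2 = n, so this system is observable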
Problem 8.1
1. What are the dimensions of the observability and controllability matrices? What is the maximum rank they can achieve?
2. Establish C and O for the Newell & Lee evaporator.
8.1.3 Computing controllability and observability
The MATLAB functions ctrb and obsv create the controllability and observability matrices respectively, and the rank function can be used to establish their rank. While the concepts
of observability and controllability apply to all types of dynamic systems, Eqns 8.1 and 8.6 are
restricted to linear time invariant systems. Read problem A-6-1 [150, p745] for a discussion on the
equivalent term reachable.
The algorithms given above for computing controllability or observability, (Eqns 8.1 and 8.6,
or the discrete equivalents), seem quite straightforward, but are in fact often beset by subtle
numerical errors. There exist other equivalent algorithms, but many of these are also susceptible
to similar numerical problems. Issues in reliable numerical techniques relevant to control are
discussed in a series of articles collected by [158].
An example quoted in [156] illustrates common problems to which these simple algorithms can
fall prey. Part of the problem lies in the reliable computation of the rank of a matrix which is best
done by computing singular values.
Consider computing the controllability of the dynamic system

$$A = \begin{bmatrix} 1 & \mu \\ 0 & 1 \end{bmatrix}, \qquad B = \begin{bmatrix} 1 \\ \mu \end{bmatrix}$$

The controllability matrix is

$$\mathcal{C} = \begin{bmatrix} B & AB \end{bmatrix} = \begin{bmatrix} 1 & 1+\mu^2 \\ \mu & \mu \end{bmatrix}$$

which has a nonzero determinant of |C| = −µ³, indicating that the system is technically controllable
given a nonzero µ.

However in this example we are particularly interested in small, (but not minute), values of µ, say
where |µ| < √ε, where ε is the machine precision. Note that this effectively means that 1 + µ² = 1,
whereas 1 + µ > 1, when executed on the computer. Consequently the controllability matrix is
now

$$\mathcal{C} = \begin{bmatrix} 1 & 1 \\ \mu & \mu \end{bmatrix}, \qquad \text{for } |\mu| < \sqrt{\varepsilon}$$

which indicates the system is uncontrollable.
We can try this numerically for values near the expected limit, µ = √ε,

>> mu = sqrt(eps(1))       % mu at the machine precision limit
mu =
   1.4901e-08
>> A = [1 mu; 0 1]; B = [1; mu];
>> det(ctrb(A,B))
ans =
  -3.3087e-24

where fortuitously in this case we get a non-zero determinant equal to −µ³ as hoped. However
for (slightly) smaller values of µ, the computed determinant is exactly zero, and the system is
incorrectly flagged as uncontrollable.
A related classic example of numerical sensitivity is the evaluation of the matrix exponential via
its truncated Taylor series,

$$e^{At} = \sum_{k=0}^{\infty} \frac{1}{k!}\left(At\right)^k, \qquad A = \begin{bmatrix} -49 & 24 \\ -64 & 31 \end{bmatrix}$$

where severe cancellation between the large intermediate terms of the series destroys the accuracy of the sum.
Problem 8.3
1. When using good (in the numerical reliability sense) software, we tend to forget
one fundamental folklore of numerical computation. This folklore, repeated in [158, p2],
cautions against explicitly forming the product AᵀA. Consider the matrix

$$A = \begin{bmatrix} 1 & 1 \\ \delta & 0 \\ 0 & \delta \end{bmatrix}$$

where δ is a real number such that |δ| < √ε, where ε is the machine precision. Calculate the
singular values both analytically, and using the relationship between singular values and the
eigenvalues of AᵀA.
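A numerical sketch of this problem (the value of δ is an assumption):

delta = 1e-9;                 % |delta| < sqrt(eps)
A = [1 1; delta 0; 0 delta];
svd(A)                        % returns sqrt(2+delta^2) and |delta| reliably
sqrt(eig(A'*A))               % forming A'*A loses the small singular value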
8.1.4 State reconstruction
If a system is observable, we should, in principle, be able to reconstruct the states based only
on the current, and perhaps historical, outputs. So, supposing we have collected the following
input/output data,

    time (k)   u(k)   y(k)   x1(k)   x2(k)
       0         4      4      ?       ?
       1         5      4      ?       ?
       2         -     36      ?       ?
Note that we cannot simply invert the measurements back to the states, partly because the measurement matrix is not square.
However we can see that y(k) = 4x₁(k) from the measurement relation, so we can immediately fill
in some of the blank spaces in the above table, in the x₁(k) column: x₁(0) = 1, x₁(1) = 1, x₁(2) = 9.
From the model relation, we can see that x₁(k + 1) = x₂(k), so we can set x₂(0) = x₁(1) = 1.
Now we have established the initial condition

$$x(0) = \begin{bmatrix} 1 & 1 \end{bmatrix}^T$$

and with the subsequent manipulated variable history u(t), we can calculate x(t) simply using
the recursive formula, Eqn. 2.41. This technique of estimation is sometimes called state reconstruction, and assumes a perfect model and no measurement noise. Furthermore, the
reconstruction takes, at most, n input-output sample pairs.
As a check, we can construct the observability matrix,

$$\mathcal{O} = \begin{bmatrix} C \\ C\Phi \end{bmatrix} = \begin{bmatrix} 4 & 0 \\ \begin{bmatrix} 4 & 0 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 3 & 2 \end{bmatrix} \end{bmatrix} = \begin{bmatrix} 4 & 0 \\ 0 & 4 \end{bmatrix}$$

which clearly has a rank of 2, indicating that the state reconstruction is in fact possible.
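We can verify this numerically; a sketch, where the model matrices are assumed from the relations deduced above:

Phi = [0 1; 3 2]; C = [4 0];  % assumed model: x1(k+1) = x2(k), y = 4*x1
O = [C; C*Phi]                % = [4 0; 0 4], cf. Eqn. 8.6
rank(O)                       % = 2, so reconstruction is possible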
We can generalise the state reconstruction scheme outlined above, following [20, p315], for any
observable discrete plant

$$x_{k+1} = \Phi x_k + \Delta u_k$$
$$y_k = Cx_k$$

where we wish to reconstruct the current state, x_k, given past values of input and output, u_k, u_{k−1}, . . .
and y_k, y_{k−1}, . . .

Our basic approach will be to use the outputs from the immediate past n samples to develop
linear equations for the n unknown states at the point n samples in the past. We will then use the
knowledge of the past inputs to roll this state forwards to the current sample k. For the present
development, we will just consider SISO systems.
Listing the immediate previous n values finishing at the current sample k gives

$$\begin{aligned}
y_{k-n+1} &= Cx_{k-n+1}\\
y_{k-n+2} &= C\Phi x_{k-n+1} + C\Delta u_{k-n+1}\\
y_{k-n+3} &= C\Phi^2 x_{k-n+1} + C\Phi\Delta u_{k-n+1} + C\Delta u_{k-n+2}\\
&\;\;\vdots\\
y_k &= C\Phi^{n-1}x_{k-n+1} + C\Phi^{n-2}\Delta u_{k-n+1} + \cdots + C\Delta u_{k-1}
\end{aligned} \tag{8.9}$$
To make things clearer, we can stack the collected input and output values in vectors,

$$Y_k = \begin{bmatrix} y_{k-n+1} \\ y_{k-n+2} \\ \vdots \\ y_{k-1} \\ y_k \end{bmatrix}, \qquad U_{k-1} = \begin{bmatrix} u_{k-n+1} \\ u_{k-n+2} \\ \vdots \\ u_{k-1} \end{bmatrix} \tag{8.10}$$

and we note that the input vector collection U_{k−1} is one element shorter than the output vector
collection Y_k. Now we can write Eqn. 8.9 more succinctly as

$$Y_k = \mathcal{O}x_{k-n+1} + W_uU_{k-1} \tag{8.11}$$
with matrices defined as

$$\mathcal{O} = \begin{bmatrix} C \\ C\Phi \\ C\Phi^2 \\ \vdots \\ C\Phi^{n-1} \end{bmatrix}, \qquad W_u = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ C\Delta & 0 & \cdots & 0 \\ C\Phi\Delta & C\Delta & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ C\Phi^{n-2}\Delta & C\Phi^{n-3}\Delta & \cdots & C\Delta \end{bmatrix} \tag{8.12}$$
Solving Eqn. 8.11 for the states (in the past at sample time k − n + 1) gives

$$x_{k-n+1} = \mathcal{O}^{-1}Y_k - \mathcal{O}^{-1}W_uU_{k-1} \tag{8.13}$$

provided O is non-singular. This is why we require the observability condition to be met for state
reconstruction.

The minor problem with Eqn. 8.13 is that it delivers the state n sample times in the past, rather
than the current state, x_k, which is of more immediate interest. However, it is trivial to use
the plant model and knowledge of the past inputs to roll the states onwards to the current time
k. Repeated forward use of the model gives

$$x_k = \Phi^{n-1}x_{k-n+1} + \Phi^{n-2}\Delta u_{k-n+1} + \Phi^{n-3}\Delta u_{k-n+2} + \cdots + \Delta u_{k-1}$$
The entire procedure for state reconstruction is summarised in the following algorithm.

Algorithm 8.1 State reconstruction assuming a perfect observable model and no noise. Given

$$x_{k+1} = \Phi x_k + \Delta u_k, \qquad y_k = Cx_k$$

the current state is reconstructed from the past n input/output pairs as

$$x_k = \Phi^{n-1}\mathcal{O}^{+}\left(Y_k - W_uU_{k-1}\right) + \begin{bmatrix} \Phi^{n-2}\Delta & \Phi^{n-3}\Delta & \cdots & \Delta \end{bmatrix}U_{k-1}$$

and Y_k, U_{k−1} are defined in Eqn. 8.10 and O, W_u are defined in Eqn. 8.12. We should note:

- To reconstruct the state (at the current time k), we need to log past input and output data,
and the minimum number of data pairs required is n, where n is the number of states. Any
more will give a least-squares fit.
- Since we invert O, this matrix must not be singular. (Alternatively, we require that the
system must be observable.) For the multivariable case we should use the pseudo-inverse,
O⁺, as written above.
- We use in the algorithm, but do not invert, the controllability matrix.
A simple state reconstructor following Algorithm 8.1 is given in Listing 8.1.

Listing 8.1: A simple state reconstructor following Algorithm 8.1.

function x_est = state_extr(G,Yk,Uk)
% Back extract current state from outputs & inputs (assuming observable)
[Phi,Del,C] = ssdata(G);           % discrete plant matrices
n = size(Phi,1);                   % # of states
[p,m] = size(C*Del);               % # of outputs & inputs

Wo = obsv(G);                      % same as: Wo = obsv(Phi,C)
Ay = Phi^(n-1)/Wo;                 % similar to Phi^(n-1)*inv(Wo)

S = C*Del;                         % build Wu, Eqn. 8.12
for i=1:n-2
  S = [S; C*Phi^i*Del];
end % for
zC = zeros(p,m);                   % or zeros(size(C*Del));
Wu1 = [zC;S]; Wu = Wu1;
for i=n-3:-1:0
  Wu1 = [zC; Wu1]; Wu1(end-p+1:end,:) = [];
  Wu = [Wu, Wu1];
end % for

Cb = Del;                          % [Phi^(n-2)*Del, ..., Phi*Del, Del]
for i=1:n-2
  Cb = [Phi^i*Del, Cb];
end % for
x_est = Ay*(Yk - Wu*Uk) + Cb*Uk;   % Algorithm 8.1 (tail reconstructed)
return
Fig. 8.1 illustrates the results of the state reconstruction algorithm in the case where there is no
model/plant mismatch, but there is noise on both the input and output. Consequently, given that
the algorithm sees slightly corrupted u(t) and y(t) trends, the reconstructed states in Fig. 8.1 are
slightly different from the true states.
In summary, this reconstruction algorithm takes current and historical outputs and inputs, and,
provided the system is observable, reconstructs the current states.
366
0.3
Output
0.2
0.1
0
actual y
measured y
0.1
4
actual u
measured u
2
0
2
2
est
1
0
1
2
x
est
1
x
Figure 8.1:
Reconstructing states.
Since the state reconstructor is given
an incorrect input and a noisy output, then it is no surprise that the estimated states, , are slightly different
from the true states (solid lines in the
lower two trends).
0
1
time [s]
This strategy is important because it then allows us to use state feedback in the cases where we only measure the
outputs. In the presence of process or measurement noise, and/or model/plant mismatch, this
reconstruction will be in error, although in many cases the error may not be significant. An improved
version of this algorithm that takes into account the statistics of the expected noise and plant
errors is known as state estimation, and is described in section 9.5.
8.2 Controller design by pole-placement

This section describes how to design a controller for a given process, using state space
techniques in the discrete domain.
When a process engineer asks the control engineer to design a control system, the first question
that the control engineer should ask back, is what sort of performance is required? In this way,
we start with some sort of closed loop specification, which, for example, may be that we demand
that the system return to setpoint after a disturbance with no offset, or rise to a new setpoint in a
reasonable time, perhaps one open-loop time constant. If the loop is a critical one, then we may
wish to tighten this requirement by specifying a smaller time to reach control, or conversely if
we have a non-critical loop we could be satisfied with a more relaxed settling time requirement.
It is our job as control engineers to translate these vague verbal requirements into mathematical
constraints, and if possible, then design a controller that meets these constraints.
When we talk about a response, we are essentially selecting the poles (or eigenvalues) of the closed
loop transfer function, and to some lesser degree, also the zeros. Specifying these eigenvalues by
changing the controller parameters is called controller design by pole placement. In order to
design a controller by pole placement, we must know two things:
1. A dynamic model of the process, and
2. what type of response we want (i.e. what the closed loop eigenvalues should be).
However choosing appropriate closed loop poles is non-trivial. Morari in [138] reminds us that
poles are a highly inadequate measure of closed loop performance, and he continues to assert
that because of the lack of guidance to the control designer, pole placement techniques have been
abandoned by almost everybody today. Notwithstanding this opinion, it is still instructive for
us to understand how our closed loop response is affected by different choices of eigenvalues, and
how this forms the basis for more acceptable tuning methods such as linear quadratic controllers
discussed in the following chapter.
Suppose we have a single-input/multiple-output (SIMO) system such as

$$x_{k+1} = \Phi x_k + \Delta u_k \tag{8.14}$$

where u is a scalar input, and we wish to control all n states by using a proportional feedback
control law such as

$$u = -Kx \tag{8.15}$$

where K is called the state feedback gain matrix. (Actually in this SIMO case, the gain matrix
collapses to a row vector.) Substituting Eqn. 8.15 into Eqn. 8.14 gives the closed loop response as

$$x_{k+1} = \left(\Phi - \Delta K\right)x_k \tag{8.16}$$
The gain K that places the closed loop eigenvalues at the desired locations can be computed with Ackermann's formula,

$$K = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \end{bmatrix}\begin{bmatrix} \Delta & \Phi\Delta & \Phi^2\Delta & \cdots & \Phi^{n-1}\Delta \end{bmatrix}^{-1}\alpha(\Phi) \tag{8.17}$$

where α(Φ) is the matrix characteristic equation of the desired closed loop transfer function. Note
that the matrix to be inverted in Eqn. 8.17 is the discrete controllability matrix C_o, hence the need
for full rank.
A quick way to calculate α(Φ) is to use poly and polyvalm. For instance we can verify the
Cayley-Hamilton theorem, (any matrix satisfies its own characteristic equation), elegantly as sketched below.
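A minimal sketch (the test matrix is an arbitrary assumption):

A = magic(4);      % any square test matrix
c = poly(A);       % coefficients of the characteristic polynomial of A
polyvalm(c,A)      % evaluates to (numerically) the zero matrix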
Suppose we wish to design a pole-placement controller for a discrete plant, giving matrices

$$\Phi = \begin{bmatrix} 2 & -0.5 \\ 2 & 0 \end{bmatrix}, \qquad \Delta = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$

To use Ackermann's formula, Eqn. 8.17, we need to both compute the controllability matrix and the
matrix characteristic equation α. Placing the desired closed loop poles at µ = 0.5 ± 0.5i gives the
desired characteristic polynomial α(z) = z² − z + 0.5, and the controllability matrix is

$$\mathcal{C}_o \overset{\text{def}}{=} \begin{bmatrix} \Delta & \Phi\Delta \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 0 & 2 \end{bmatrix}$$

so Eqn. 8.17 delivers the state feedback gain

$$K = \begin{bmatrix} 0 & 1 \end{bmatrix}\mathcal{C}_o^{-1}\alpha(\Phi) = \begin{bmatrix} 1 & -0.25 \end{bmatrix}$$
We can easily check that the closed loop poles are as required with

>> eig(G.a-G.b*K)    % check that mu = eig(Phi - Delta*K)
ans =
   0.5000 + 0.5000i
   0.5000 - 0.5000i

Rather than constructing Ackermann's equation by hand, MATLAB can design a pole placement
controller using the acker routine. At heart this routine simply computes Eqn. 8.17 as

k = ctrb(a,b)\polyvalm(real(poly(p)),a);
k = k(n,:);
or, as noted in the next section, acker is very ill-conditioned, so we could use the preferred place
function. The latter function gives an estimate of the precision of the closed loop eigenvalues, and
an optional error message if necessary. (See also Fig. 8.3.)

[K,prec] = place(G.a,G.b,mu)
K =
    1.0000   -0.2500
prec =
    15
Eqn. 8.15 assumes that the setpoint is at x = 0. (We can always transform our state variables by
subtracting the setpoint so that this condition is satisfied, but this becomes inefficient if we are
likely to change the setpoint regularly.) In these cases, it is better to modify our control law from
Eqn. 8.15 to

$$u = K(r - x) \tag{8.18}$$

where r is a vector of the state setpoints or reference states, now assumed non-zero. Consequently
the closed loop response is now modified from Eqn. 8.16 to

$$x_{k+1} = \left(\Phi - \Delta K\right)x_k + \Delta Kr_k \tag{8.19}$$

8.2.1 Where to place the poles
While pole-placement seems an attractive controller design strategy, the problem lies in deciding
where the closed loop poles ought to be.

Listing 8.2 designs a pole-placement controller for the blackbox, G(s) = 0.98/((2s + 1)(s + 1)), with
discrete closed loop poles specified, somewhat arbitrarily, at µᵀ = [0.9, 0.8].
Listing 8.2: Pole-placement control of a well-behaved system

>> Gc = tf(0.98,conv([2 1],[1,1]))   % Blackbox model
>> G = ss(c2d(Gc,0.1))               % Convert to discrete state-space with sample time Ts = 0.1
>> [Phi,Delta,C,D] = ssdata(G);
>> K = place(Phi,Delta,[0.9, 0.8]);  % design gain (this step is assumed from the context)

Now we can check the closed loop performance from a non-zero initial condition with

Gcl = ss(Phi-Delta*K,zeros(size(Delta)),eye(size(Phi)),0,G.Ts);
x0 = [1; -2];        % start position
initial(Gcl,x0)      % See Fig. 8.2
The results of a controlled disturbance recovery are given in the left hand plot of Fig. 8.2. However
an obvious question at this point is where exactly should I place the poles? The three trends in
Fig. 8.2 show the closed loop response for three different pole locations.
Figure 8.2: Pole-placement of the black box for three different pole locations, including µᵀ = [0.9, 0.8]
and µ = 0.6e^{±j80°}. (Each panel shows the states x₁ & x₂ and the input.)

The numerical difficulties of pole-placement
The design of controllers using pole-placement is inherently numerically ill-conditioned, especially for large systems. Ackermann's algorithm requires double precision for systems larger than
about 10th order, and outright fails for systems larger than about 15th order. Fig. 8.3 illustrates
this degradation in accuracy by plotting an estimate of the number of accurate decimal places in
the solution using place for 1500 randomly generated discrete models from orders 3 to 15. Note
that once the order is above 9 or so, we should only expect about 6 decimal places of accuracy.
Figure 8.3: The loss of accuracy using pole-placement algorithms place and acker as a function
of system order for 500 randomly chosen models. It is clear that both routines struggle for systems
of order above about 6.
8.2.2 Deadbeat control
If the system is controllable such that any arbitrary poles are allowable, one is naturally inclined to
extrapolate the selection to poles that will give a near perfect response. Clearly a perfect response
is one where we are always at the setpoint. If a step change in reference value occurs, then we
instantaneously apply a corrective measure to push the system immediately to the new desired
point. A system that rises immediately to the new setpoint has a closed loop time constant of τ = 0,
or alternatively a continuous pole at s = −∞. This corresponds to a discrete pole at z = e^{sT}, namely the
origin. If we decide to design this admittedly rather drastic controller, we would select all the
desired closed loop poles at µ = 0. Such a controller is feasible in the discrete domain and is
called a deadbeat controller.
The design of a deadbeat controller is the same as for an arbitrary pole placement controller.
However it is better to use the acker function rather than the place function, because the place
function will not allow you to specify more repeated eigenvalues than inputs. This restriction does
not apply to the Ackermann algorithm.
Suppose we design a deadbeat controller for the continuous plant

$$\dot{x} = \begin{bmatrix} -0.3 & 0 & 0 \\ 0 & -0.5 & 2 \\ 0 & -2 & -0.5 \end{bmatrix}x + \begin{bmatrix} 3 \\ -2 \\ 5 \end{bmatrix}u \tag{8.20}$$

To design a deadbeat controller, we specify the desired discrete closed loop eigenvalues as µ = 0,
and selecting a sample time of T = 4 we obtain

$$K = \begin{bmatrix} 3.23\times 10^{-2} & 3.69\times 10^{-4} & 1.98\times 10^{-2} \end{bmatrix}$$
The closed loop transition matrix is

$$\Phi_{cl} = \Phi - \Delta K = \begin{bmatrix} 0.075 & -0.003 & -0.139 \\ -0.063 & -0.019 & 0.095 \\ -0.060 & -0.133 & -0.056 \end{bmatrix}$$

and we could also compute that all the elements in Φ³_cl are of the order of 10⁻¹⁹ and all the
eigenvalues of eig(Φ − ΔK) are of the order of 10⁻⁷, as required.
Such a matrix where A^{n−1} ≠ 0, but Aⁿ = 0, is called a nilpotent matrix.
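A quick numerical check of this nilpotency, assuming the discrete model Gd and gain K constructed in Listing 8.3 below:

Phi_cl = Gd.a - Gd.b*K;   % closed loop transition matrix
norm(Phi_cl^2)            % not yet zero for this third-order system
norm(Phi_cl^3)            % effectively zero (around 1e-19), so nilpotent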
Fig. 8.4 shows the impressive controlled response of the plant (Eqn. 8.20) under deadbeat control.
The top half of the diagram shows the states which converge from the initial condition,
x₀ = [−2, −3, 1]ᵀ, to the setpoint x = 0 in only three sample times (3T = 12 s). It takes three
sample times for the process to reach the setpoint because it is a third order system.
Figure 8.4: Deadbeat control: the states (top) and the input (bottom) versus time (s), with sample
time T = 4.
The deadbeat controlled response looks almost perfect in Fig. 8.4, at least after the 3 samples
required to get to the setpoint. We can investigate this by repeating the simulation at a smaller
time interval (say T = 0.05 s), but with the same input as before, to see what happens between
the sample intervals. (This is how the smooth state curve in Fig. 8.4 between the states at the
controller sample times was evaluated in Listing 8.3.) Note that the states oscillate
between the samples, but these oscillations quickly die out after 10 seconds.
Listing 8.3: A deadbeat controller simulation

x0= [-2 -3 1]'; dt=4; tf=dt*5; t=[0:dt:tf]';   % initial condition & time vector
A= [-0.3,0,0;0,-0.5,2;0,-2,-0.5]; B= [3;-2;5]; % Continuous plant, Eqn. 8.20
Gc= ss(A,B,eye(3),0); Gd= c2d(Gc,dt,'zoh');    % Convert to a discrete model

mu= [0;0;0];             % Desired closed loop eigenvalues, mu = 0
K= acker(Gd.a,Gd.b,mu);  % construct deadbeat controller
Gd_cl = Gd; Gd_cl.a = Gd.a-Gd.b*K;             % closed loop
[y,t,x]= lsim(Gd_cl,zeros(size(t)),[],x0);
U = [-K*x']';            % back extract input

% Reduce sample time at same conditions to see inter-sample ripple
dt = dt/100; t2 = [0:dt:tf]';   % new time scale, Ts/100
[ts,Us] = stairs(t,U);          % create new input
ts = ts + [1:length(ts)]'/1e10; ts(1) = 0;
U2 = interp1(ts,Us,t2);         % approx new input
[y2,t2,x2]= lsim(Gc,U2,t2,x0);  % continuous plant simulation
subplot(2,1,1); plot(t,x,'o',t2,x2); % Refer Fig. 8.4
subplot(2,1,2); plot(ts,Us);
Naturally under practical industrial conditions, the plant will not be brought under control so
rapidly, owing either to model-plant mismatch, or to manipulated variable constraints. This type
of controller can be made more robust by specifying a larger sample time, which is in fact the
only tuning constant possible in a deadbeat controller. Ogata gives some comments and cautions
about deadbeat control in [150, pp670-71].
Problem 8.4 Create a system with eigenvalues of your choice, and design a deadbeat controller.
Simulate this using the dlsim function. (Try a sample time of 7.) Investigate the same controller,
but at a higher sampling rate, and plot the intersample behaviour of the states.
Investigate the effect on the gain when using different sample rates.
8.2.3 Pole-placement with integral states

To remove steady-state offset we can augment the plant with integral states z, where ż = x, giving
the augmented system

$$\frac{d}{dt}\begin{bmatrix} x \\ z \end{bmatrix} = \begin{bmatrix} A & 0 \\ I & 0 \end{bmatrix}\begin{bmatrix} x \\ z \end{bmatrix} + \begin{bmatrix} B \\ 0 \end{bmatrix}u \tag{8.22}$$

and the feedback control law

$$u = -\begin{bmatrix} K_p & K_i \end{bmatrix}\begin{bmatrix} x \\ z \end{bmatrix} \tag{8.23}$$

where K_p are the gains for the proportional controller and K_i are the gains for the integral controller. Now the pole-placement controller is designed for the new system with n additional
integral states in the same manner as for the proportional state only case. In MATLAB we can
formulate the new augmented system as

Ai = [A zeros(size(A)); eye(size(A)) zeros(size(A))]
Bi = [B; zeros(size(B))]
This modification improves the system type, (i.e. the number of poles at the origin), enabling
our controller to track inputs, as sketched in the example below. [110, §10-14, p802] gives further details. We shall see later in section 9.4.6 how we can design optimal pole-placement servo controllers known as linear quadratic
regulators.
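A sketch of a complete augmented design along these lines (the plant matrices and pole locations are assumptions):

A = [-4 3; 1 -6]; B = [2; 1];                         % an example plant
Ai = [A zeros(size(A)); eye(size(A)) zeros(size(A))]; % augmented: zdot = x
Bi = [B; zeros(size(B))];
Ki = place(Ai, Bi, [-6+2i, -6-2i, -8, -10]);          % n extra poles for the n integral states
Kp = Ki(:,1:2); Kint = Ki(:,3:4);                     % proportional & integral gains, Eqn. 8.23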
SISO servo integral output control

If we want to track setpoint changes (servo-control) for a SISO system, we need only add one
extra integral state. The augmented system is

$$\dot{x} = Ax + Bu$$
$$\dot{z} = r - y$$
$$y = Cx$$

where x is the vector of original states, and u, y and r are the scalar input, output and setpoint respectively. We simply concatenate the new integral state to the existing states,
$$\tilde{x} \overset{\text{def}}{=} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \\ z \end{bmatrix}$$

so that z is relabelled as x_{n+1}. The feedback gain row-vector in the control law u = −K̃x̃ now has
an additional element,

$$\tilde{K} = \begin{bmatrix} k_1 & k_2 & \cdots & k_n & \vdots & k_{n+1} \end{bmatrix}$$
and the augmented closed-loop system is now

$$\frac{d\tilde{x}}{dt} = \left(\tilde{A} - \tilde{B}\tilde{K}\right)\tilde{x} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}r, \qquad \tilde{A} = \underbrace{\begin{bmatrix} A & 0 \\ -C & 0 \end{bmatrix}}_{(n+1)\times(n+1)}, \quad \tilde{B} = \underbrace{\begin{bmatrix} B \\ 0 \end{bmatrix}}_{(n+1)\times 1}$$
To implement this scheme, the augmented system (Ã, B̃) must be controllable, which is a tougher requirement than
just the controllability of (A, B). A block diagram of a pole-placement controller with a single integral state is
given in Fig. 8.5, after [110, p802]. The wide lines in the diagram are vector paths.
Figure 8.5: State feedback control system with an integral output state.
We can compare the effect of adding an integral state to remove the offset given the SISO system

$$\dot{x} = \begin{bmatrix} -4 & 3 \\ 1 & -6 \end{bmatrix}x + \begin{bmatrix} 2 \\ 1 \end{bmatrix}u$$
$$y = \begin{bmatrix} 1 & 0 \end{bmatrix}x$$
where we want to control x₁ to follow a reference r(t) using pole-placement. I will place the closed-loop eigenvalues for the original system at

$$\mu = \begin{bmatrix} -6 + 2i \\ -6 - 2i \end{bmatrix}$$

and with the extra integral state, I will add another pole at µ₃ = −10. Fig. 8.6 compares the
response of both cases, following the design sketched below. Clearly the version with the integral state that tracks the setpoint with no
steady-state error is preferable.
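A sketch of this servo design (the plant matrices follow the example above):

A = [-4 3; 1 -6]; B = [2; 1]; C = [1 0];
At = [A zeros(2,1); -C 0];                 % augmented A-tilde
Bt = [B; 0];                               % augmented B-tilde
Kt = place(At, Bt, [-6+2i, -6-2i, -10]);   % poles including the integral state
Gcl = ss(At-Bt*Kt, [0;0;1], [C 0], 0);     % r(t) drives the integrator
step(Gcl)                                  % cf. Fig. 8.6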
Figure 8.6: Comparing pole-placement control with an integral state against the standard proportional-only version for a setpoint change.

8.3 Estimating the unmeasured states
The pole-placement control design strategies as developed in §8.2 utilise all the states in the
control law u = −Kx, so implicitly assume that all the states are measurable. In many cases this
is not practically possible, so the only alternative is to estimate the states.
The states can be reconstructed using the outputs, the inputs, and the model equations. In fact we
need not restrict ourselves to just estimating states; we could also estimate derived quality control
variables, or even estimate the outputs themselves. The algorithms that deliver these estimated quantities are known as
observers, and they offer cost and reliability advantages over actual sensors in many applications.
The simplest state estimator is where we use a model and subject it to the same input u that is
applied to the true process. The resultant x̂ is the estimate for the true state x,

$$\hat{x}_{k+1} = \Phi\hat{x}_k + \Delta u_k \tag{8.24}$$

If we know the true model, Φ, Δ, and the manipulated variable history u_k, then this estimator will
work provided we select the correct initial condition x̂₀. Since we have no feedback information
as to how well we are doing, this estimator is called an open loop estimator. If the conditions
stated above are not met, and they will rarely be satisfied in practice, then the estimator will
provide estimates that will probably get worse and worse. The error of the estimate is

$$\epsilon_k = x_k - \hat{x}_k \tag{8.25}$$
Ideally, we would like to force this error vector to zero. This is actually still a control problem, although now we are not controlling the plant, but rather the error of the estimation scheme.
We introduce a feedback error scheme such as

$$\hat{x}_{k+1} = \Phi\hat{x}_k + \Delta u_k + L\left(y_k - C\hat{x}_k\right) \tag{8.26}$$

which is called a prediction error observer, since the prediction is one sample period ahead of the
observation. In the single output case, L is an (n × 1) column vector called the observer gain
matrix, and is a design parameter. The symbol L stems from the fact that it is a Luenberger observer, and
we need to carefully distinguish it from the controller gain, K. Following our colour convention,
we will sometimes colour the estimator gain, L, red because it involves states, and the feedback gain,
K, blue because it is part of the controller.
Now the error model is

$$\epsilon_{k+1} = \left(\Phi - LC\right)\epsilon_k \tag{8.27}$$

We must select the matrix L such that the observer system Φ − LC is stable and has eigenvalues
(observer poles) that are suitably fast. This means that even if we were to get the initial condition
x̂₀ wrong, we can still use the estimator, since any initial error will eventually be forced to
zero. Typically we would choose the poles of the observer system (Eqn. 8.27) much faster than
the open-loop system. To be able to solve for L, the system must be observable.
For many chemical process applications the sampling rate is relatively slow, since the processes
are generally quite slow, so the basic estimator of Eqn. 8.26 can be modified so that the prediction
of x̂_{k+1} uses the current measurement y_{k+1}, rather than the previous measurement y_k as is the case
in Eqn. 8.26. This is only possible if the computational time is short compared with the sample
time, but if this is the case, then the sample period is better utilised. This type of formulation is
termed a current estimator, and both are described in [71, pp147-8] and in terms of the Kalman
filter in [150, pp873-75]. The current estimator is

$$\tilde{x}_{k+1} = \Phi\hat{x}_k + \Delta u_k \tag{8.28}$$
$$\hat{x}_{k+1} = \tilde{x}_{k+1} + L\left(y_{k+1} - C\tilde{x}_{k+1}\right) \tag{8.29}$$

In practice, for most systems, the two forms (current and prediction estimators) are almost equivalent.
The design of the observer gain L by pole-placement is similar to that of the controller gain K given in
section 8.2, but in the observer case we have the closed loop expression

$$\text{eig}(\Phi - LC) \quad \text{rather than} \quad \text{eig}(\Phi - \Delta K)$$

However we note that

$$\text{eig}(\Phi - LC) = \text{eig}(\Phi^T - C^TL^T)$$

since eig(A) = eig(Aᵀ), so we can use the MATLAB functions place or acker for the observer
problem by using the dual property

$$L = \text{place}(\Phi^T, C^T, \mu)^T$$
Table 8.1: The relationship between regulation and estimation
Control
B
K
Co
eig( K)
Estimation
C
L
Ob
eig( LC)
8.4 Combining estimation and state feedback
We can combine the state feedback control from section 8.2 with the state estimation from section 8.3 into a single algorithm. This is useful because now we can use the estimator to provide
the states for our state feedback controller. For linear plants, the design of the state feedback
controller is independent of the design of the observer, a result known as the separation principle. The block diagram shown in Fig. 8.7,
which is adapted from [20, p438], shows the configuration of the estimator, L, and state-feedback
controller, K.
Figure 8.7: The configuration of the combined estimator, L, and state feedback controller, K, around the plant.

The plant is

$$x_{k+1} = \Phi x_k + \Delta u_k \tag{8.30}$$
$$y_k = Cx_k \tag{8.31}$$

and we apply the state feedback control law, now using the estimated states,

$$u_k = -K\hat{x}_k \tag{8.32}$$

where the estimates are delivered by the prediction error observer of Eqn. 8.26. Substituting the
control law, Eqn. 8.32, into the observer gives

$$\hat{x}_{k+1} = \left(\Phi - \Delta K - LC\right)\hat{x}_k + LCx_k \tag{8.35}$$

Concatenating Eqns 8.30 and 8.35 together gives the homogeneous equation

$$z_{k+1} = \begin{bmatrix} x_{k+1} \\ \hat{x}_{k+1} \end{bmatrix} = \underbrace{\begin{bmatrix} \Phi & -\Delta K \\ LC & \Phi - \Delta K - LC \end{bmatrix}}_{\Phi_{\text{cl-ob}}}\begin{bmatrix} x_k \\ \hat{x}_k \end{bmatrix} \tag{8.36}$$
I will call the transition matrix for Eqn. 8.36 the closed-loop/observer transition matrix, Φ_cl-ob.
Ogata gives a slightly different formulation of the combined observer and state feedback control system as Eqn 6-161 in [150, p714]. Ogata also recommends choosing the observer poles 4 to 5 times
faster than the system response.
Example of combined control and estimation. Suppose we wish to control the plant

$$G(s) = \frac{0.98}{(3s+1)(2s+1)}$$

which we will sample at T = 0.1. The design specifications are that the estimator poles are to be
relatively fast compared to the controller poles,

$$\mu_{\text{est}} \overset{\text{def}}{=} \begin{bmatrix} 0.8 \\ 0.85 \end{bmatrix} \qquad \text{and} \qquad \mu_{\text{ctrl}} \overset{\text{def}}{=} \begin{bmatrix} 0.92 \\ 0.93 \end{bmatrix}$$
We can design the controller and estimator gains as follows.

Listing 8.4: Pole placement for controllers and estimators

G = tf(0.98,conv([2 1],[3 1])); % Blackbox plant
Gss = ss(G);                    % convert to state-space
Gss1 = Gss;                     % Now create a special version with full outputs
Gss1.c = eye(2); Gss1.d = [0;0];
T = 0.1; Gssz = c2d(Gss,T);

p = [0.92 0.93]';               % controller poles
K = place(Gssz.a, Gssz.b, p);

pe = [0.80 0.85]';              % Design estimator gain, L, such that ...
L = place(Gssz.a', Gssz.c', pe)';      % mu_est = eig(Phi - LC)
sort(eig(Gssz.a-L*Gssz.c)) - sort(pe)  % Test: results should be zero
The resulting gains are

$$K = \begin{bmatrix} 1.3848 & 1.6683 \end{bmatrix} \qquad \text{and} \qquad L = \begin{bmatrix} 0.2146 \\ 0.4109 \end{bmatrix}$$
Fig. 8.8(a) gives the SIMULINK configuration for the controller, which closely resembles the block
diagram given in Fig. 8.7. To test the estimation and control, we will simulate a trend where the
actual and estimated starting conditions are different, namely

$$x_0 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad \text{which is not the same as} \qquad \hat{x}_0 = \begin{bmatrix} 1.6 \\ 2 \end{bmatrix}$$

The trends shown in Fig. 8.8(b) show the (discrete) estimated states converging rapidly to the true
continuous states, and both converging, although slightly less rapidly, to the setpoint.

Figure 8.8: (a) The SIMULINK configuration of the estimator and state feedback controller, cf. Fig. 8.7.
(b) The estimated states converge to the true states, which in turn converge to the setpoint.
8.5 Generic model control

The control of the general nonlinear process

$$\dot{x} = f(x, u, t) \tag{8.37}$$

is, in general, much harder than that for linear processes. Previously we discussed the simple state
feedback control law, u = K(x⋆ − x), where x⋆ is the desired state vector or setpoint. We showed
that we could augment this simple proportional-only law to incorporate integral action to ensure
zero offset, to get a control law such as Eqn. 8.23. Now suppose that we wish to control not x, but the
derivative of x, dx/dt, to follow some desired derivative trajectory (ẋ)⋆. We now wish to find u(t)
such that ẋ = r(t), where r(t) is some reference function that we can specify, and where dx/dt is
given by Eqn. 8.37.

For any controller, when the process is away from the desired state, x⋆, we want to force the rate
of change of the states, ẋ, to be such that it is moving in the direction of the desired setpoint, and
in addition, we would like the closed loop to have zero offset. The control law

$$(\dot{x})^\star = K_1(x^\star - x) + K_2\int_0^t (x^\star - x)\,d\tau \tag{8.38}$$

combines these two weighted ideas. Ignoring any model-plant mismatch, (ẋ)⋆ = ẋ, so substituting the plant dynamics, Eqn. 8.37, into Eqn. 8.38, we get the control law

$$f(x, u, t) - K_1(x^\star - x) - K_2\int_0^t (x^\star - x)\,d\tau = 0 \tag{8.39}$$

Solving Eqn. 8.39 for u is the control law, and this scheme is called Generic Model Control or
GMC, developed by Lee and Sullivan in [116]. Note that solving Eqn. 8.39 for u is in
effect solving a system of m nonlinear algebraic equations, and this must be done at every time step.
Actually, the solution procedure is not so difficult, since the previous solution u_{t−1} can be used as
the initial estimate for u_t, as sketched below.
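A sketch of one such implicit solution step, with an assumed toy plant f(x, u):

f = @(x,u) [-x(1)^3 + u(1); x(1)*x(2) + u(2)];  % assumed nonlinear plant
x = [1; 2]; xspt = [0; 0]; Ie = [0.1; 0];       % current state, setpoint & integral error
K1 = eye(2)/20; K2 = eye(2)/400;                % GMC tuning
res = @(u) f(x,u) - K1*(xspt-x) - K2*Ie;        % residual of Eqn. 8.39
u = fsolve(res, [0; 0])                         % warm-start with the previous u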
8.5.1 The tuning parameters

The GMC control law (Eqn. 8.39) embeds a model of the process and two matrix tuning parameters, K₁ and K₂. Selecting different tuning parameters changes the type of response. Assuming
we have a perfect model, the closed loop behaviour (Eqn. 8.39) is

$$\dot{x} = K_1(x^\star - x) + K_2\int_0^t (x^\star - x)\,d\tau \tag{8.40}$$

or, differentiating once,

$$\ddot{x} = K_1(\dot{x}^\star - \dot{x}) + K_2(x^\star - x) \tag{8.41}$$
Figure 8.9: The normalised (τ = 1) response curve for a perfect GMC controller for various choices
of shape factor ξ (ξ = 0.2, 0.5, 1, 3).
and taking Laplace transforms of Eqn. 8.41 gives

$$s^2x + sK_1x + K_2x = sK_1x^\star + K_2x^\star$$

$$\frac{x(s)}{x^\star(s)} = \frac{K_1s + K_2}{s^2 + K_1s + K_2} = \frac{2\xi\tau s + 1}{\tau^2s^2 + 2\xi\tau s + 1} \tag{8.42}$$

where

$$\tau = \frac{1}{\sqrt{k_2}}, \qquad \xi = \frac{k_1}{2\sqrt{k_2}} \tag{8.43}$$

or alternatively

$$k_1 = \frac{2\xi}{\tau}, \qquad k_2 = \frac{1}{\tau^2} \tag{8.44}$$

Note that Eqn. 8.42 is a modified second-order transfer function. The time domain solution to Eqn.
8.42 for a step change in setpoint, x⋆(s), is given by

$$x(t) = 1 + e^{-\xi t/\tau}\left(\frac{\xi}{\sqrt{1-\xi^2}}\sin\left(\frac{\sqrt{1-\xi^2}}{\tau}t\right) - \cos\left(\frac{\sqrt{1-\xi^2}}{\tau}t\right)\right), \qquad \xi \neq 1 \tag{8.45}$$

For the limiting case where ξ = 1, the solution to Eqn. 8.42 simplifies to just

$$x(t) = 1 + e^{-t/\tau}\left(\frac{t}{\tau} - 1\right) \tag{8.46}$$
Fig. 8.9 shows the response of a perfectly GMC controlled process for different ξ. Note that as
ξ → ∞, the output response approaches a step response at t = 0, and as ξ → 0, the output tends to
the undamped oscillation x(t) = 1 − cos(t/τ). This improvement in speed comes, naturally, at the expense of decreased robustness.
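The curves of Fig. 8.9 can be reproduced directly from Eqn. 8.45; a minimal sketch:

t = linspace(0,10,500)'; tau = 1;    % normalised time constant
for xi = [0.2 0.5 3]                 % shape factors (xi = 1 needs Eqn. 8.46)
  wd = sqrt(1-xi^2)/tau;             % complex for xi > 1, but x(t) stays real
  x = 1 + exp(-xi*t/tau).*(xi/sqrt(1-xi^2)*sin(wd*t) - cos(wd*t));
  plot(t, real(x)); hold on
end
hold off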
Fig. 8.10 shows the relation between the overshoot, the decay ratio, and the tuning value ξ. These
values are easily obtained from Fig. 8.9, or analytically starting with Eqn. 8.45. Note that for ξ > 1
there is no oscillation in the response, and that ξ ≈ 0.22 gives a quarter decay ratio.
382
overshoot
decay ratio
0.8
0.6
0.4
0.2
0
0
0.2
0.4
0.6
0.8
shape factor
1.2
8.5.2 GMC applied to a linear plant

Since GMC can be applied to a nonlinear process, one would expect it to work especially well given
a linear process, ẋ = Ax + Bu. Now the GMC control law simplifies from the general Eqn. 8.39 to

$$u = B^{-1}\left(K_1x^\star - (K_1 + A)x + K_2\int_0^t (x^\star - x)\,d\tau\right) \tag{8.47}$$

noting that B⁻¹ must exist, and hence B at least should be square. This means that we must have
the same number of manipulated variables as state variables.
It is easy to test such a controller using a stable random model generated with rmodel. Hopefully our
randomly generated control matrix B is invertible. (If it is not, just re-run the script file.) To tune
the response, I will select a slightly different shape factor and time constant for each state, namely

$$\xi_1 = 0.7,\; \tau_1 = 1.5 \qquad \text{and} \qquad \xi_2 = 0.8,\; \tau_2 = 2.6$$
Running the script file in Listing 8.5 gives results much like Fig. 8.11.

Listing 8.5: GMC control of a random linear plant (the sizes, sample time and setpoint trend are assumed)

n = 2; dt = 0.5;                      % # of states; sample time
[A,B,C,D] = rmodel(n,n,n);            % generate random continuous model
D = zeros(size(D)); C = eye(size(C)); % not interesting
[Phi,Del] = c2dm(A,B,C,D,dt,'zoh');   % discretise
K1 = diag(2*[0.7 0.8]./[1.5 2.6]); K2 = diag(1./[1.5 2.6].^2); % Eqn. 8.44
t = [0:dt:80]'; Xspt = square(t*[1/8, 1/10]); % setpoint trends
x = zeros(n,1); Ie = zeros(n,1); X = []; Uc = [];
for i=1:length(t)
  Ie = Ie + (Xspt(i,:)'-x)*dt;             % update integral error
  u = B\(K1*(Xspt(i,:)'-x) + K2*Ie - A*x); % GMC control law, Eqn. 8.47
  x = Phi*x + Del*u;                       % step the discrete plant
  Uc = [Uc; u']; X = [X; x'];
end % for
plot(t,[Xspt, X],t,Uc,'--')
We would expect the GMC controller to perform well, but what should really convince us is if the
actual response y(t) follows the analytical and desired trajectory y⋆(t), since GMC is really a model
reference controller. These two curves are compared in Fig. 8.12, which shows an enlarged portion
of Fig. 8.11. The differences between the actual response and the specified trajectory are due to
the sampling discretisation.
This demonstration of the GMC controlling a linear process does not do the algorithm justice,
since the controller is really intended for nonlinear processes, as demonstrated in the following section.
8.5.3 GMC applied to a nonlinear plant

Section 8.5.2 did not show the true power of the GMC algorithm, since the plant was linear and
any linear controller such as an LMR would work. However this section introduces a nonlinear
model of a continuously stirred tank reactor (CSTR) from [123].
Figure 8.11: GMC control of a linear plant: the states and their setpoints (top) and the inputs
(bottom) versus time.

Figure 8.12: Comparing the actual response, y, to a setpoint change with the expected response,
y⋆. This response is a zoomed part of the response given in Fig. 8.11.
The CSTR model equations are

$$\dot{T} = \frac{F_aT_a}{V} + \frac{F_bT_b}{V} - \frac{(F_a + F_b)T}{V} - \frac{\Delta H}{\rho C_p}r - \frac{Ua_r}{V\rho C_p}(T - T_c) \tag{8.48}$$

$$\dot{C}_a = \frac{F_aC_{ai}}{V} - \frac{(F_a + F_b)C_a}{V} - r \tag{8.49}$$

$$\dot{T}_c = \frac{F_c}{V_c}(T_{ci} - T_c) + \frac{Ua_r}{V_c\rho C_p}(T - T_c) \tag{8.50}$$

where the reaction rate is

$$r = Ke^{-E/(RT)}C_a^2 \tag{8.51}$$
385
Table 8.2: The state, manipulated and disturbance variables and parameter values for the Litchfield nonlinear CSTR, [123].
(a) State, input and disturbance variables
variable
T
Ca
Tc
Tci
Cai
Fb
Ta
Fa
Tb
Fc
description
reactor temperature
concentration of A
temperature of jacket
temperature of cooling inlet
inlet concentration of A
inlet flow of stream B
inlet temp. of stream A
inlet flow of stream A
inlet temp. of stream B
cooling flow
units
K
kgmol m3
K
K
kgmol m3
m3 s1
50 C (333K)
3.335 106 m3 s1
50 C (333K)
3 104 m3 s1
parameter
V
Vc
Cp
value
0.0045
0.0074
1000
4.187
units
m3
m3
kg/m3
kJ/kg/K
parameter
K
E/R
H
Uar
value
16.6 106
5.6452 103
74.106 103
0.175
units
m3 /kmol/s
K
kJ/kg
kW/K
The state, manipulated and disturbance variables, as explained in Table 8.2, are defined as¹

$$x \overset{\text{def}}{=} \begin{bmatrix} T & C_a & T_c \end{bmatrix}^T, \qquad u \overset{\text{def}}{=} \begin{bmatrix} T_{ci} & C_{ai} & F_b \end{bmatrix}^T, \qquad d \overset{\text{def}}{=} \begin{bmatrix} T_a & F_a & T_b & F_c \end{bmatrix}^T \tag{8.52}$$

The plant dynamics defined by Eqns 8.48 to 8.50 are nonlinear, owing to the reaction rate r being
an exponential function of the temperature T. This type of nonlinearity is generally considered
significant.

¹The original reference had only one manipulated variable, T_ci. This was changed for this simulation so the GMC is
now a true multivariable controller.
Designing the GMC controller

First we note that we have n states and n manipulated variables. Thus the B matrix, or at least
a linearised version of the nonlinear control function, will be square. We also note that the GMC
algorithm assumes that the process states x are measured, hence the measurement relation must
be C = I. The GMC control law, Eqn. 8.39, for this case is

$$\begin{bmatrix} 0 & 0 & \frac{T_b - T}{V} \\ 0 & \frac{F_a}{V} & -\frac{C_a}{V} \\ \frac{F_c}{V_c} & 0 & 0 \end{bmatrix}\begin{bmatrix} T_{ci} \\ C_{ai} \\ F_b \end{bmatrix} + \begin{bmatrix} \frac{F_a}{V}(T_a - T) - \frac{\Delta H}{\rho C_p}r - \frac{Ua_r}{V\rho C_p}(T - T_c) \\ -\frac{F_aC_a}{V} - r \\ -\frac{F_cT_c}{V_c} + \frac{Ua_r}{V_c\rho C_p}(T - T_c) \end{bmatrix} - K_1(x^\star - x) - K_2\int_0^t (x^\star - x)\,d\tau = 0$$

Now you will notice that the control law is a system of algebraic equations in the unknown u. For this
case, the control law is linear in u, because I had the freedom of choosing the manipulated variables
carefully. If, for example, I had chosen F_a rather than F_b as one of the manipulated variables, then
the system would be control nonlinear, and hence more difficult to solve. But in this case the
system is control linear, the GMC control law is a linear system of algebraic equations, and the
3 × 3 matrix to be inverted can be permuted to a triangular form, making the computation straightforward.
Simulating GMC

We will use a GMC controller to drive the states to follow the setpoint trajectory

$$x^\star = \begin{cases} \begin{bmatrix} 300 & 0.03 & 290 \end{bmatrix}^T & 0 \le t < 5 \text{ min} \\ \begin{bmatrix} 310 & 0.03 & 290 \end{bmatrix}^T & 5 \le t < 10 \text{ min} \\ \begin{bmatrix} 310 & 0.025 & 295 \end{bmatrix}^T & t > 10 \text{ min} \end{cases}$$

starting from an initial condition of x₀ = [290, 0.035, 280]ᵀ, and with a relatively coarse sample
time of T = 10 seconds. We set the shape factor to ξ = 0.5, and the time constant for the response
to τ = 20 seconds. This gives k₁ = 1/20 and k₂ = 1/400. The disturbance variables d are assumed
constant for this simulation, and are taken from Table 8.2.

The state derivative ẋ as a function of x, u, d and t is evaluated using Eqns 8.48 to 8.50, and is given
in Listing 8.7, which is called by the integrator ode45 in Listing 8.6.
Listing 8.6: GMC for a batch reactor

x0 = [290; 0.035; 280];               % initial condition
dt = 10;                              % sample time (seconds)
t = [0:dt:15*60]';                    % time vector (span assumed)
Xspt = [300+10*(t/60>=5), ...         % setpoint trajectory
        0.03-0.005*(t/60>=10), ...    %  (reconstructed from the text)
        290+5*(t/60>=10)];
d = [333; 3.335e-6; 333; 3e-4];       % disturbance vector, constant (Table 8.2)
Ta = d(1); Fa = d(2); Tb = d(3); Fc = d(4);

k1 = 1/20; k2 = 1/400;                % GMC tuning constants
K1 = eye(3)*k1; K2 = eye(3)*k2;

% Parameters for the CSTR
V = 0.0045; Vc = 0.0074; rho = 1000.0;
K = 16.6e6; ER = 5.6452e3; dH = -74.106e3;
Cp = 4.187;  % kJ/kg/K
Uar = 0.175; % kW/K
rCp = rho*Cp;

x = x0; Ie = zeros(size(x)); X = []; U = [];
for i=1:length(t)                     % start simulation
  xspt = Xspt(i,:)';                  % current setpoint
  T = x(1); Ca = x(2); Tc = x(3);     % state variables
  Ie = Ie + (xspt-x)*dt;              % update integral error
  r = K*exp(-ER/T)*Ca*Ca;             % reaction rate
  R = [Fa/V*(Ta-T)-dH*r/rCp-Uar*(T-Tc)/rCp; -(Fa*Ca/V+r); ...
       -Fc*Tc/Vc+Uar/rCp/Vc*(T-Tc)];
  A = [0 0 (Tb-T)/V; 0 Fa/V -Ca/V; Fc/Vc 0 0];
  u = A\(K1*(xspt-x)+K2*Ie - R);      % solve GMC control law for u
  % u = max(zeros(size(u)),u);        % ensure feasible (see text)
  [ts,xv] = ode45('fcstr',[0,dt/2,dt],x,[],u, ...
      V, Vc, K, ER, dH, rCp, Uar, Ta, Fa, Tb, Fc); % integrate nonlinear ODEs
  x = xv(3,:)';                       % keep only last state
  X = [X; x']; U = [U; u'];
end % for
Physically the manipulated variables are constrained to be positive, and unless these constraints
are added explicitly to the GMC control law, negative values for some of the inputs
will occur. In the above example I simply clamp the input when these violations occur. If you
remove the u = max(zeros(size(u)),u); line from the control law, you will have the unconstrained version that ignores the physically realistic constraints, which is the case shown in the
simulations given in Fig. 8.13.
Fig. 8.13 shows the response of the CSTR controlled by the GMC. The upper plots show the state
trajectories, and the lower plots give the manipulated variables used. Note that the controlled
response is good, but not perfect, as we can still see some interaction between the states. The reaction concentration Ca particularly seems strongly coupled to the other states, and this coupling
is not completely removed by the GMC controller. We could improve on this response either by
decreasing the sample time, or by tightening the controller design specifications.
Just for fun, we can plot a three dimensional plot of the trajectory of the state variables from
the CSTR example under GMC control. Fig. 8.14 shows the actual state trajectory (solid line)
converging to the 4 distinct setpoints in state space. Note that the independent variable, time,
is removed in this illustration.
Had we instead chosen u = [F_a, F_c, T_{ci}]ᵀ as the manipulated variables, the GMC control law would
have the structure

$$\begin{bmatrix} \times & 0 & 0 \\ \times & 0 & 0 \\ 0 & \times & \times \end{bmatrix}\begin{bmatrix} F_a \\ F_c \\ T_{ci} \end{bmatrix} = g(x) + K_1(x^\star - x) + K_2\int_0^t (x^\star - x)\,d\tau \tag{8.53}$$
Figure 8.13: GMC control of the CSTR: the temperatures T and Tc and concentration Ca (upper),
and the manipulated variables Tci, Cai and Fb (lower), versus time (min).

Figure 8.14: The state trajectory of the CSTR under GMC control converging to the 4 distinct
setpoints in the (T, Ca, Tc) state space.
where we know everything on the right hand side of Eqn. 8.53 (it is a constant for the purposes
of this calculation), and wish to solve for u. Unfortunately, however, we can see from the structure
of the matrix on the left hand side of Eqn. 8.53 that F_a appears alone in two equations, and the remaining two manipulated variables appear together in one equation. Thus this nonlinear algebraic
system is degenerate, since in general we cannot expect a single unknown F_a to satisfy two independent conditions. Note that even though we do not have an overall degrees of freedom problem
(we have three unknowns and three equations), we do have a sub-structure that has a degree of
freedom problem.
Problem 8.5
1. Re-run this simulation and test the disturbance rejection properties of the GMC
controller. You will need to choose some suitable disturbance profile.
2. Re-run the simulation for different choices of GMC tuning constants. For each simulation,
verify that the response is the linear response that you actually specified in the GMC design.
(You may need to decrease the sampling interval to T ≈ 2 seconds to get good results.) Try
for example:
(a) τ = 20, ξ = 1.5
(b) τ = 5, ξ = 0.5
(c) any other suitable choice.
8.6 Exact feedback linearisation

For certain nonlinear systems, we can derive controllers such that the closed loop input/output
behaviour is exactly linear. This is quite different from our traditional approach of approximating
the nonlinearities with a Taylor series approximation and then designing a linear controller based
on the approximate model. Designing exact feedback linearisation uses nonlinear results from
the field of differential geometry, which, to say the least, is somewhat abstract to most engineers.
Lie algebra for nonlinear systems will replace matrix algebra for linear systems, and Lie brackets
and Lie derivatives will be extensions to normal matrix operations. A good tutorial of this topic,
with applications of interest to the process industry, and one from where most of this material
was taken, is [104] and the follow up, [105]. A review of nonlinear control for chemical processes
which includes a section on exact feedback linearisation is [30], and the classic text book for this
and other topics in nonlinear control is [189]. Extensions for when the process is not affine in
input, and for systems where the relative degree is greater than unity, are discussed in [82].
8.6.1 The nonlinear system

We consider the single-input/single-output nonlinear system

$$\dot{x} = f(x) + g(x)u \tag{8.54}$$
$$y = h(x) \tag{8.55}$$

where f, g are vector functions, or vector fields, h(x) is a scalar field, and the input u and output
y are scalars. Note that the system described by Eqn. 8.54 is written in affine form, that is, the
manipulated variable enters linearly into the system. It is always possible to transform an arbitrary
system into affine form by introducing a new variable. For example, given the non-affine system

$$\dot{x} = f(x, u)$$
390
we can define a new input, , which is the derivative of our original input, u = , so now our
augmented system becomes
x
f (x, u)
0
=
+
(8.56)
u
0
1
which is now in affine form. Whether creating a new variable which requires differentiating the
input is practical, or even feasible under industrial conditions is a different matter.
8.6.2 The input/output linearising control law

Given the nonlinear system in Eqns 8.54 and 8.55, we will try to construct a control law of the
form

$$u = p(x) + q(x)\,y^\star \tag{8.57}$$

where y⋆ is the desired setpoint. The closed loop system obtained by substituting Eqn. 8.57 into
Eqns 8.54 and 8.55 is

$$\dot{x} = f(x) + g(x)p(x) + g(x)q(x)\,y^\star \tag{8.58}$$
$$y = h(x) \tag{8.59}$$
and is linear from reference variable y⋆ to actual output y. The dashed box in Fig. 8.15 contains
the nonlinearity, but as far as we are concerned, viewing the system from the outside, it may as
well be a linear black box.

Figure 8.15: The configuration of an input/output feedback linearisation control law.
Note that for a controller in this configuration, the internal states will still be nonlinear. Notwithstanding, once we have a linear dynamic relation from reference, y⋆(t), to output, y(t), it is much
easier to design output-based controllers for this now linear system than for the original nonlinear system. It is also clear from Fig. 8.15 that the controller requires state information
to be implemented.
The control design problem is to select a desired linear response, perhaps by specifying the time
constants of the desired closed loop, and then to construct the p(x) and q(x) functions in the
control law, Eqn. 8.57. These are given by the following relations:

$$p(x) = \frac{-\left(L_f^rh(x) + \beta_1L_f^{r-1}h(x) + \cdots + \beta_{r-1}L_fh(x) + \beta_rh(x)\right)}{L_gL_f^{r-1}h(x)} \tag{8.60}$$

$$q(x) = \frac{1}{L_gL_f^{r-1}h(x)} \tag{8.61}$$
where the Lie derivative, L_f h(x), is simply the directional derivative of the scalar function h(x)
in the direction of the vector f(x). Here, the Lie derivative is calculated by

$$L_fh(x) = f(x)\cdot\frac{\partial h(x)}{\partial x} = \sum_{i=1}^{n}f_i(x)\frac{\partial h(x)}{\partial x_i} \tag{8.62}$$

One can apply the Lie derivative operator repeatedly, L²_f = L_fL_f, either to the same function, or to
different functions as in L_gL²_f.
The relative order of a system such as Eqns 8.54 and 8.55 is the smallest integer r such that

$$L_gL_f^{r-1}h(x) \neq 0 \tag{8.63}$$

An alternative interpretation of relative order is that r is the smallest order of derivatives of y that
depends explicitly on u. For linear systems, the relative order is simply the difference between
the number of poles and the number of zeros of the system, i.e. n_a − n_b.
Finally, the β_i in Eqn. 8.60 are the user-chosen tuning constants that determine how our output
will respond to the command y⋆. They are defined by

$$\frac{d^ry}{dt^r} + \beta_1\frac{d^{r-1}y}{dt^{r-1}} + \cdots + \beta_{r-1}\frac{dy}{dt} + \beta_ry = y^\star \tag{8.64}$$

and typically it is convenient to choose Eqn. 8.64 as some sort of stable low-pass filter, such as a
Butterworth filter.
Algorithm 8.4 Exact feedback linearisation design procedure:

1. We are given a single-input/single-output nonlinear model (which may need to be transformed into affine form), Eqns 8.54 and 8.55.
2. Find the relative order, r, of the system using Eqn. 8.63. This will govern the order of the
desired response differential equation chosen in the next step. (If r = 1, perhaps a GMC
controller is easier to develop.)
3. We choose a desired linear response, which we consider reasonable, in the form of Eqn. 8.64;
that is, we need to choose r time constants or equivalent parameters.
4. We create our exact linearising nonlinear controller, Eqn. 8.57, using the relations Eqns 8.60
and 8.61 by constructing Lie derivatives. (A symbolic manipulator such as MAPLE may be
useful here.)
5. Closing the loop creates a linear input/output system, which can be controlled by a further controller sitting around this system, designed by any standard linear controller design
technique such as LMR or pole placement. (However this is hardly necessary, since we have
the freedom to choose any reasonable response in step 3.)
Cautions and restrictions

This technique is not applicable if the zero dynamics are unstable. This is analogous to the
linear case where we are not allowed to cancel right-half-plane zeros with unstable poles. Technically it works, but in practice the slightest model mismatch will cause an unstable pole-zero
cancellation, which in turn will create an unstable controller. This then excludes process systems with dead time.
Clearly also, owing to the nature of the cancellation of the process nonlinearities, exact feedback
linearisation is very sensitive to modelling errors. The technique demands state feedback, but
only delivers output linearisation. Full state linearisation techniques do exist, but only under
very restrictive conditions.
8.6.3 Exact feedback example
Building an exact feedback controller is relatively easy for well behaved smooth nonlinear systems. The only calculus operation required is differentiation, and this poses few problems for
automated computer tools. This example shows how to build such a controller using a simple
program written for a symbolic manipulator. With this program, we can subsequently change
the system, and regenerate the new nonlinear control law. This is a departure from all the other
simulations presented so far, since here we can change the entire structure of the process in our
program, not just the order or the values of the parameters, and subsequently re-run the simulation.
Suppose we have a simple 3-state nonlinear SISO system,

$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} = \underbrace{\begin{bmatrix} \sin(x_2) + (x_2 + 1)x_3 \\ x_1^5 + x_3 \\ x_1^2 \end{bmatrix}}_{f(x)} + \underbrace{\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}}_{g(x)}u \tag{8.65}$$

$$y = \underbrace{\begin{bmatrix} 1 & 0 & 0 \end{bmatrix}}_{h(x)}x \tag{8.66}$$

which is already in the affine form of Eqns 8.54 and 8.55, and we wish to construct an
exact feedback linearised system following Eqn. 8.57. To do this we will follow Algorithm 8.4.
which is already in the form of Eqn. 8.54Eqn. 8.55 or affine form, and we wish to construct an
exact feedback linearised system following Eqn. 8.57. To do this we will follow Algorithm 8.4.
Since we will need to compute some Lie derivatives analytically,
Lf h(x) = f (x)
h(x)
x
to establish the relative order r, it may help to write a small procedure to do this automatically.
Listing 8.8 gives an example using the Symbolic toolbox.
function L = lie_d(F,h,x)
% Find the Lie derivative Lf h(x)
% Note jacobian returns a row vector
L = F'*jacobian(h,x)'; % f (x)
return
h(x)
x
First we define the actual nonlinear functions f(x), h(x), and g(x) for our particular dynamic
problem.

syms x1 x2 x3 ystar real % Specify symbolic variables as real or will have conjugate problems
x = [x1 x2 x3]';
F = [sin(x2)+(x2+1)*x3; x1^5+x3; x1^2]; % f(x), Eqn. 8.65
g = [0; 0; 1];                          % g(x)
h = x1;                                 % h(x) = [1 0 0]x, Eqn. 8.66
To find the relative degree, we simply repeatedly call the function in Listing 8.8, starting at r = 1
and incrementing r until L_gL_f^{r−1}h(x) ≠ 0, at which point we have established the relative order.

Listing 8.9: Establish relative degree, r (ignore degree 0 possibility)

Lfn = h;                   % dummy start
LgLfnh = lie_d(g,h,x);     % Lg Lf h(x) case
r = 1;                     % relative degree counter
while LgLfnh == 0
  r = r+1;                 % increment relative degree counter, r
  Lfn = lie_d(F,Lfn,x);    % need to build up the repeated Lie derivative
  LgLfnh = lie_d(g,Lfn,x); % stop when non-zero
end % while
Now that we know the relative degree, r = 2, we can design our desired response filter.

Listing 8.10: Design Butterworth filter of order r
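A minimal sketch of such a filter design, assuming a cut-off frequency of 2 rad/s:

wc = 2;                          % assumed filter cut-off frequency [rad/s]
[num,den] = butter(r, wc, 's');  % analogue Butterworth filter of order r = 2
beta = den(2:end)/den(1);        % the beta_1 ... beta_r of Eqn. 8.64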
Now we can construct the controller components p(x) and q(x), Eqns 8.60 and 8.61, to form the closed loop.

Listing 8.11: Symbolically create the closed loop expression
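A possible body for this listing, assuming the lie_d helper and the beta coefficients from Listing 8.10:

Lfh   = lie_d(F,h,x);                         % Lf h(x)
Lf2h  = lie_d(F,Lfh,x);                       % L2f h(x)
LgLfh = lie_d(g,Lfh,x);                       % Lg Lf h(x)
p = -(Lf2h + beta(1)*Lfh + beta(2)*h)/LgLfh;  % Eqn. 8.60
q = 1/LgLfh;                                  % Eqn. 8.61
u = simplify(p + q*ystar)                     % control law, Eqn. 8.57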
In practice we are mainly interested in the control law, Eqn. 8.57, which is implemented in the SIMULINK simulation shown in Fig. 8.16.
Figure 8.16: Exact feedback linearisation: (a) the SIMULINK implementation of the nonlinear plant
with the control law components p(x) and q(x), and the desired Butterworth reference filter; (b)
the closed loop response of the controlled nonlinear system compared to the desired linear response. This example shows that there is no practical difference between the output of the
nonlinear system and the desired linear filter.
Problem 8.6 Create an exact nonlinear feedback controller for a penicillin fermentation process.
The fermentation model is

$$\dot{X} = \mu(S, X)X - DX \tag{8.67}$$
$$\dot{P} = \pi(S)X - DP \tag{8.68}$$
$$\dot{S} = -\sigma(S, X)X + D(S_f - S) \tag{8.69}$$

where X, S are the cell mass and glucose concentration, D is the dilution rate and manipulated
variable, and S_f is the feed glucose concentration, considered constant in this example. The
empirical growth rate expressions are:

$$\mu(S, X) = \frac{\mu_xS}{K_xX + S} \tag{8.70}$$
$$\sigma(S, X) = \mu(S, X)/Y_{xs} + \pi(S)/Y_{ps} + m \tag{8.71}$$
$$\pi(S) = \frac{\mu_pS}{K_p + S(1 + S/K_i)} \tag{8.72}$$

with parameters

    mu_x = 1.5     K_x = 2.34
    mu_p = 3.45    K_p = 1.56
    Y_xs = 0.45    Y_ps = 0.78
    K_i  = 0.89    m    = 0.34

manipulated variable 0 < D < 1, disturbance S_f = 1.25, and initial conditions X₀ = 0.05, S₀ = 0.6.

Choose an appropriate β and design a linearising controller using Eqn. 8.57. Test the closed loop
response by simulation, and verify that your input/output response is actually linear.
Problem 8.7
1. What is the relative order of the following nasty system from §3.4.1?

$$\dot{x}_1 = x_2$$
$$\dot{x}_2 = -6x_1 - 7x_2 + 24x_3 + 10x_2^2 + u$$
$$\dot{x}_3 = -x_3 - x_2 - x_2^5$$
$$y = x_1$$
8.7 Summary
This chapter introduced the theory and technique of the design of state space control systems.
Writing the dynamic equations in a discrete matrix form is both convenient for complex systems
and intuitive, since the design is done in the time domain. The major disadvantage is the fact that
the computation is more complex, which is the reason why the design is typically carried out on a
computer. The important conclusions of this chapter are:

- Controller design by pole placement means that we choose a gain matrix K such that the
closed loop system (Φ − ΔK) has a desired dynamic response, or alternatively has the desired eigenvalues. To do this, the system must be controllable.
- We can design observers L in much the same way as controllers.

However, in this chapter we only covered the design of single-input/multiple-output controllers,
or single-measurement/multiple-state observers. The true multivariable case is under-constrained,
and is covered in chapter 9.
Control of a nonlinear process may require a different approach, and typically more computation.
Generic model control or GMC is a simple technique that can be used to control a nonlinear plant
in an optimal manner. Many of the so-called difficult processes in chemical process control are
difficult due to nonlinearities, and these are well suited to techniques such as GMC. An alternative, more
general nonlinear controller, particularly suitable for SISO nonlinear systems in control affine
form, is one where a controller is sought that transforms the nonlinear system into a linear open-loop system that can subsequently be controlled using the far more extensive library of linear
controllers.
Chapter 9

Classical optimal control

God does not throw dice? Nor is it our business to prescribe to God how He should run the
world.
Attributed to Niels Bohr, replying to Einstein.
9.1 Introduction

This, and the following chapter, is concerned with designing optimal controllers. In practice optimal controllers are complicated, owing to issues such as plant nonlinearities and constraints on state
and manipulated variables for example, but for some applications the improvement in controlled
response is well worth the implementation effort. Following a small aside demonstrating optimal tuning of a known controller, we will begin with the general optimal control problem in §9.3,
develop a solution strategy, and test it on a batch reactor example in §9.2.4. Building on the general nonlinear optimisation material, we will make some simplifying assumptions, and develop
an optimal linear controller in §9.4.

Chapter 8 described the pole placement method of controller design for single-input systems. It was also briefly mentioned that the evaluation of the gain matrix K for
MIMO systems is under-constrained, and an infinite number of possible controller configurations satisfy the original design specifications. However, by using these extra degrees of freedom,
optimal controllers are possible. Section 9.4 will describe an optimal multivariable controller
termed a Linear Quadratic Regulator (LQR). Section 9.5 will present the dual problem of an optimal estimator, the Kalman filter.
Model predictive control, which is the subject of chapter 10, lies somewhere between the full
generic optimal control problem, and the linear quadratic control. This chapter introduces the
concept of a moving horizon for the optimisation and compares a computationally intensive but
very general nonlinear optimal scheme with a much faster linear optimal controller based on a
FIR model.
9.2 Parametric optimisation

The simplest type of optimisation problem is known as parametric optimisation, where we attempt to find the best values of a finite number of parameters. Finding the best three PID tuning
constants to minimise, say, the integral of the squared error is an example of parametric optimisation,
described in §9.2.1.
Alternatively, we could try to find the best shape of a trajectory for a missile to hit a target whilst, say, minimising energy. In principle the shape may require an infinite number of discrete parameters to describe it completely, perhaps using a Taylor series. This then becomes an infinite dimensional parametric optimisation problem, so we must tackle it with other techniques such as variational calculus.
9.2.1 CHOOSING A PERFORMANCE INDICATOR

Optimal control is where you try to extremetise something to get the best controlled result. To optimise, we must at least be able to quantify the controlled performance, that is, to put numbers to the quality of control. Typically however, this assessment is, in practice, a tricky function of many variables such as process economics, safety, customer satisfaction, etc., which are all very difficult to quantify.
We call the value that quantifies the performance the objective function, and typically use the symbol J. Some authors use the term performance index or perhaps profit function, and naturally we would like to maximise this quantity. However, by convention most numerical software packages are designed to minimise the objective function, and we certainly do not want to minimise the profit function, but rather minimise the cost function. Consequently the term cost function is often used in optimisation, where it is assumed that we are interested in minimising the objective function. For our purposes, where we are typically interested in minimising the error residuals, we will use cost function for our objective function.
A simple example of optimal control is the optimal tuning of a PID controller for a given process
responding to a setpoint change. First we must choose an objective function that we wish to optimise, often some sort of cost term (which we want to minimise), and then we search for the tuning
variables that minimise this cost. Since it is sometimes difficult to correlate cost to performance
in a simple manner, we often choose a performance objective more easily quantifiable such as the
integral of the difference between the actual process and the setpoint. This is termed the integral
of the absolute error or IAE performance index,
$$ J_{\text{IAE}} = \int_0^\infty |\epsilon(t)|\, dt \qquad (9.1) $$

where the error ε(t) is defined as the difference between setpoint and actual output, r(t) − y(t), at time t. Classically, control engineers have tried to design closed-loop control systems that behave similarly to a second-order, slightly under-damped process. From experience this is found to be a reasonable trade-off between speed of response, overshoot and ease of design. We can plot both the process response, and visualise the area given by Eqn. 9.1 that we wish to minimise, using the floodfill command fill.
t = linspace(0,10)';
[num,den] = ord2(1,0.54);  % 2nd-order prototype with wn = 1 & damping zeta = 0.54
y = step(num,den,t);       % do simulation
fill([t;0],[y;1],'c')      % floodfill area as shown in Fig. 9.1.
Figure 9.1: The area of the shaded figure is the approximate value of the performance integral. You could try other values for ζ, say 0.1.
Fig. 9.2 shows the steps to numerically compute the ITAE performance index for a given controlled response. In this case the ITAE is 3.85 cost units at time t = 10. Ideally we need to wait until the final plot of the integral stops increasing, but in this case we can see that while we have stopped slightly prematurely, there is little to be gained from simulating much longer. Notice how the time weighting emphasises deviations late in the response.
Figure 9.2: Computing the ITAE performance index: the output y and setpoint, the error (y − r), the absolute error |y − r|, the time-weighted absolute error |y − r|·t, and its running integral which reaches ITAE ≈ 3.85 at t = 10.
We can compare the performance of a second order process with different damping ratios to find the optimum damping ratio, ζ, for a given criterion, as sketched in the code below.
t = linspace(0,10)'; dt = t(2)-t(1);
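The remainder of this listing was not reproduced here; a minimal sketch of the idea (the variable names and the unit-step prototype are assumptions) might be:

zeta = linspace(0.1,1.5,50)';           % range of damping ratios to test
[ISE,IAE,ITAE,ITSE] = deal(zeros(size(zeta)));
for i = 1:length(zeta)
    [num,den] = ord2(1,zeta(i));        % 2nd-order prototype plant
    e = 1 - step(num,den,t);            % error for a unit setpoint
    ISE(i)  = sum(e.^2)*dt;     IAE(i)  = sum(abs(e))*dt;
    ITSE(i) = sum(t.*e.^2)*dt;  ITAE(i) = sum(t.*abs(e))*dt;
end
plot(zeta,[ISE,IAE,ITSE,ITAE])          % compare criteria, cf. Fig. 9.3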
These curves are given in Fig. 9.3, from which we can extract the optimum damping ratios summarised in Table 9.1. Note that the optima are all in the 0.5 to 0.75 region. Ogata, [152, pp295–303], and especially Fig. 4-38 on page 298, compares these alternatives in a similar manner.
Figure 9.3: Various performance criteria (ISE, ITSE, IAE, ITAE) as a function of damping ratio for a second order plant.
The minima of the various choices are listed in Table 9.1.
Table 9.1: Common integral performance indices and the optimum shape factor, ζ, assuming a second order prototype response.

  Name                                          J                  ζ       Comment
  Integral Squared Error, ISE                   ∫₀∞ ε²(t) dt       0.507
  Integral Time-weighted Squared Error, ITSE    ∫₀∞ t ε²(t) dt     0.597
  Integral Absolute Error, IAE                  ∫₀∞ |ε(t)| dt      0.657   Recommended index
  Integral Time-weighted Absolute Error, ITAE   ∫₀∞ t |ε(t)| dt    0.748   Similar to ITSE
9.2.2 OPTIMAL TUNING OF A PID REGULATOR
For example, say we wish to tune a PID controller to minimise the integral of the absolute error
times time of a response to a step change in setpoint. For this example I will choose the ITAE
performance index, which is a slight modification on the IAE performance index given previously
in Eqn. 9.1,
$$ J = \int_0^\infty t\,|\epsilon(t)|\, dt \qquad (9.2) $$
The parameters, θ, we can adjust in this example are the PID tuning constants Kc, τi and τd, while the controller structure is fixed as a three term PID controller. The optimisation problem is to choose suitable parameters θ such that the scalar J is minimised, subject to the constraint that

$$ \frac{Y(s)}{R(s)} = \frac{G_c G_p}{1 + G_c G_p} \qquad (9.3) $$
where R(s) = 1/s. Here Gc is assumed to be a PID controller, and we will also assume that the manipulated variable is unbounded. Since we chose the ITAE performance index, (Eqn. 9.2), no analytical solution will exist for the optimisation. The ITAE is a good performance index, but difficult to manipulate analytically, hence the solution is best achieved numerically.
First the performance index, J, must be written in terms of the parameter vector θ. To do this, we must insert our particular process model into Eqn. 9.3 and also substitute the PID controller transfer function with the unknown parameter vector θ. One can then algebraically invert Eqn. 9.3 to obtain a time domain solution y(t) if the model is sufficiently simple, or we could simulate the closed loop system for a given θ. The error ε(t) is r(t) − y(t), where r(t) is the reference setpoint, in this case 1. One then substitutes this equation into Eqn. 9.2 and numerically integrates to calculate the performance index J. Finally, any numerical optimisation routine can be used to locate the minimum J as a function of the PID tuning parameters. Many such general purpose optimisers exist, such as Powell's conjugate directions, Nelder-Mead simplex, etc. (See also [204] for optimisation routines in MATLAB.)
The hard part in attempting to optimally tune PID controllers is locating the optimal solution. In the above description, it was done via simulation using an iterative numerical technique. This solution technique is completely general and can be applied to systems with nonlinear process models, bounds on the manipulated variables and complex performance indices. It is however often slow and computationally expensive, and prone to falling into local minima, as is any numerical technique.
Example: Optimal PID tuning. Suppose we wish to optimally tune a PID controller given the non-minimum phase plant,

$$ G_p(s) = \frac{-3s + 1}{(s^2 + 0.8s + 0.52)(s + 4)(s + 0.4)} \qquad (9.4) $$

We can see the non-minimum phase behaviour from the right-half-plane zero at s = +1/3 in the numerator. Gp(s) has two complex and two real poles.
Simulating the open-loop model gives us some information about the severity of the inverse response and the dominant time constants.

>> Gp = zpk(1/3,[-0.4, -4, -0.4+0.6i, -0.4-0.6i],-3)
Gp =

            -3 (s-0.3333)
  ---------------------------------
  (s+0.4) (s+4) (s^2 + 0.8s + 0.52)
>> step(Gp)   % Step response of Gp, showing the initial inverse response.
We can use fminsearch (which uses the Nelder-Mead simplex method), or one of the optimisation routines from the OPTIMISATION toolbox, to do the searching for the parameters. Listing 9.1 returns the IAE performance for a given plant and PID controller tuning constants subjected to a step-test.
Listing 9.1: Returns the IAE performance for a given tuning.
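The body of Listing 9.1 was not reproduced here; a minimal sketch of such an objective function (the function name and the implementation details are assumptions) could be:

function J = fopt_pid(x,Gp)
% (Assumed sketch) Return the IAE performance for PID constants x = [Kc, taui, taud]
Kc = x(1); taui = x(2); taud = x(3);
Gc = tf(Kc*[taui*taud, taui, 1],[taui, 0]);  % ideal PID controller, Gc(s)
Gcl = feedback(Gc*Gp,1);                     % closed loop, Eqn. 9.3
t = (0:0.1:50)';                             % horizon for the setpoint step test
y = step(Gcl,t);
J = sum(abs(1-y))*(t(2)-t(1));               % IAE with setpoint r(t) = 1
end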
The main script file that calls the optimiser to establish the best PID tuning constants is given in
Listing 9.2. Here we specify the plant of interest which we would like to tune, Eqn. 9.4, some
trial PID constants as a first estimate, and then we call the numerical search routine to refine these
estimates.
Listing 9.2: Optimal tuning of a PID controller for a non-minimum phase plant. This script file
uses the objective function given in Listing 9.1.
Kc = 0.5; taui = 0.2; taud = 1.0;               % trial start PID constants
x0 = [Kc,taui,taud]';                           % trial tuning vector
Gp = zpk(1/3,[-0.4,-4,-0.4+0.6i,-0.4-0.6i],-3); % plant Gp(s) of Eqn. 9.4
x = fminsearch(@(x) fopt_pid(x,Gp),x0)          % (assumed call) minimise J(x) = ∫₀⁵⁰ |ε| dt
In the above example we used the IAE performance objective, but we could just as well have
chosen other alternatives such as ISE or ITAE, and we would see slightly different optimal tuning
parameters. The following table gives the values to which I converged for the three commonly
used performance indices.
  Performance    Kc      reset, 1/τi    τd
  IAE            0.407   0.382          1.43
  ISE            0.359   0.347          1.93
  ITAE           0.388   0.317          1.17
A comparison of the closed-loop responses with a PID controller using these tuning constants is given in Fig. 9.4. You can see that all the responses are similar, although I personally favour the ITAE alternative.
Figure 9.4: A comparison of the optimal PID tuning for a step response of a non-minimum phase system using IAE, ISE and ITAE performance objectives. The achieved indices are ∫ε² = 7.100, ∫|ε| = 7.267 and ∫t|ε| = 7.352.

IMPLEMENTATION DETAILS
To be honest, I have cheated somewhat in the above example. Optimisation, even for small dimensional problems such as this, is always difficult and typically needs some expertise to get working properly. For the above case, even though I chose good starting guesses for the tuning constants, the optimiser still had some numerical conditioning problems while searching for the solution. We can avoid these problems partly by encouraging the optimiser to search in locations where we think the solution will be, and also by explicitly telling the optimiser which parameter spaces to avoid, such as negative values. If the optimiser tentatively tries a negative controller gain, the closed loop system will be unstable, and the objective function meaningless. However the computer program does not know this, and will still try to extract an improved estimate from this information. Typically in these types of examples, the objective function loses any real dependence on the parameters, and the response surface becomes flat, or worse, vertical! This is the principal cause behind the poor conditioning warnings MATLAB sometimes gives.
The final point to watch out for in an optimisation problem such as this is that the optimum does not lie far outside any reasonable tuning values. For first or second order systems, the optimum controller gain is infinite, and the optimiser will naturally tend in this direction. In this case either constraints are needed, or the objective function can be modified to take into account the vigorous controller action, or a new, more complex, model can be developed.
Optimal PID tuning #2. Finding optimal PID tuning constants for actual plant equipment is a good example of practical optimisation. In this example, we wish to find the best Kc and τi values for a step response of the black box plant, Gbb(s).
Figure 9.5: Optimum PI tuning of the blackbox plant. The contour plot shows the ISE performance for a range of gains and integral times, while the inserts show the actual error given a step change in reference at those specific Kc, τi combinations. The optimum tuning constants are Kc = 4.4 and τi = 9.8.
Naturally the optimum is only for a step test using our specific model, although the results for other similar input scenarios should be similar. To construct the contour plot of Fig. 9.5 we used an exhaustive grid search (sketched below), which requires lots of trials, hence the use of a model. An alternative, and more efficient, strategy is to use a nonlinear numeric optimiser, which is the subject of the next section.
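A grid search of this kind is conceptually simple; a sketch (the grid ranges and the objective function name fopt_pi, a hypothetical PI analogue of Listing 9.1, are assumptions) is:

Kc = linspace(0.5,5,25); taui = linspace(2,30,25);  % tuning grid
J = zeros(length(taui),length(Kc));
for i = 1:length(Kc)
    for j = 1:length(taui)
        J(j,i) = fopt_pi([Kc(i); taui(j)]);  % performance of a step test at each pair
    end
end
contour(Kc,taui,J,20)                        % cf. Fig. 9.5
xlabel('Controller gain, K_c'), ylabel('Integral time, \tau_i')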
9.2.3 USING SIMULINK INSIDE AN OPTIMISER
In cases where we have a complex model with nonlinearities or time delays (such as the black box model of Fig. 9.5), it may be more convenient to use SIMULINK inside the objective function of an optimiser. In this case we can use the sim command to execute a SIMULINK simulation from within the objective function, but we must take care to tell SIMULINK to obtain its parameters from the current workspace as opposed to the global workspace, which it will do by default. Further implementation details for these types of optimisation problems using SIMULINK are given in [60, §8.6].
Example: Find the optimum PI tuning constants for controlling the plant

$$ G(s) = \frac{2}{0.25s^2 + 0.6s + 1} $$
The SIMULINK model (named in this example sopt_pi.mdl) is given in Fig. 9.6.

Figure 9.6: SIMULINK model where the PI tuning constants are to be optimised. The loop consists of a step input, the PI controller K·[τi 1](s)/(τi s), the plant 2/(0.25s² + 0.6s + 1) and a scope. Note that we export the error in an output block at the highest level and that the PI controller has parameters K and taui which are to be optimised.

The objective function, given in Listing 9.3, is called via the optimiser (fminunc in this case), which in turn executes the SIMULINK model from Fig. 9.6 and returns the time-weighted integral of the square error as a performance index.
Listing 9.3: Optimisation objective function which internally runs a SIMULINK simulation.

function j = foptslink_pi(x)
% Optimisation objective function that will call a SIMULINK model
K = x(1); taui = x(2);    % parameters to be optimised, x
t0 = 0; tf = 20; h = 0.1; % start & finish times
% Key line: System parameters used by SIMULINK come from the current workspace
opts = simset('SrcWorkspace','current','DstWorkspace','current');
[t,x,e] = sim('sopt_pi',[t0:h:tf],opts); % (assumed) run the model; e is the exported error
j = sum(t.*e.^2)*h;                      % (assumed) time-weighted ISE performance
end
From the MATLAB command line we call the general unconstrained optimiser to compute the two optimum PI tuning constants.

>> x0 = [0.2, 0.1]';             % Start guess for Kc and τi
>> x = fminunc(@foptslink_pi,x0) % See function in Listing 9.3.

The optimisation routine found that a gain of Kc = 66.5 and an integral time of τi = 25.7 minimised our integral-time squared error performance index.
9.2.4 AN OPTIMAL BATCH REACTOR TEMPERATURE POLICY
This example illustrates how to establish an optimal input profile u(t) for a given dynamic system and initial conditions. In this section we are going to approximate the smooth profile with a small number of constant variables, and use parametric optimisation to obtain a slightly sub-optimal solution. Later, in section 9.3.4, we will repeat this optimisation problem, but solve for the continuous profile using functional optimisation.
Suppose we have a batch reactor as shown in Fig. 9.7, in which two chemical reactions are carried out in series,

$$ A \xrightarrow{k_1} B \xrightarrow{k_2} C \qquad (9.5) $$

The first reaction produces the desired product, B, which is what we wish to maximise after the two hour reaction duration, but the second reaction consumes our valuable product, and produces a poisonous byproduct C, which naturally we wish to minimise.

The states are the concentrations of A and B, x = [c_A, c_B]ᵀ. As the temperature varies, so do the reaction rates, since they are governed by the Arrhenius rate constants (k₁, k₂), which in turn changes the ratio of desired product to by-product.
The plant dynamics are given by the reaction kinetics and initial conditions are
dx1
= 2e6/u x1 ,
dt
dx2
= 2e6/u x1 5e11/u x2 ,
dt
x1 (t = 0) = 0.9
x2 (t = 0) = 0.1
Our objective for the operation of this reactor is to maximise the amount of B after 2 hours, by adjusting the temperature over time as the reaction progresses, or

$$ J(u) = x_2(t_f = 2) + \int_0^{t_f=2} 0\, dt \qquad (9.6) $$
The reason for the seemingly superfluous greyed out integral in the performance objective above
will become evident when we look at the standard optimal control objective function in the next
section.
If we decide to operate at a constant temperature for the entire 2 hour batch, then Fig. 9.8(a) shows us which constant temperature maximises the amount of B at the end of the reaction. Alternatively, if we decide to run the batch reactor for 1 hour at one temperature, and the second hour at another, then we would expect the optimum to improve. Fig. 9.8(b) shows the three dimensional response surface for this case.
Figure 9.8: Visualising the objective function using one, two, and three different temperatures throughout the 2 hour batch reaction; (c) shows the response volume when using 3 temperatures. Compare with Fig. 9.9.
We could in principle continue this discretisation of the input profile. Fig. 9.8(c) shows the situation when we decide to use three temperatures equally spaced in time (40 minutes each) across the 2 hour duration of the batch. In this case the objective function becomes a response volume, so we must use volume visualisation techniques to see where the optimum lies. Fig. 9.9 compares the optimum trajectories for the cases where we select 1, 2, 3, 5 and 7 equally spaced temperature divisions. Also given is the limiting case where the number of temperature divisions tends to ∞. The analytical computation for this latter case is given in section 9.3.4.
Figure 9.9: The optimum temperature profiles for cases where we have different numbers of temperature divisions (constant, 2, 3, 5 and 7 divisions, together with the analytical T★ profile). Note how, as the number of allowable temperature divisions increases, the discretised trajectory converges to the smooth analytical optimum profile.

9.3 THE GENERAL OPTIMAL CONTROL PROBLEM

To pose a general optimal control problem we must specify three things:

1. The objective function, which quantifies the quality of a given trajectory so that it can be extremetised.

2. The constraints, which define the feasible search space, and include the process dynamics and any bounds on the state and manipulated variables.
3. The optimisation horizon is the length of time over which the optimisation is allowed to
take place. Sometimes the horizon is infinite, but in practical cases the horizon has some
upper limit over which an optimal solution is sought to minimise the objective function.
The optimal control problem can be written as a standard mathematical optimisation problem, thus many algorithms, and therefore computer programmes, exist to give a solution. Two common solution procedures are Pontryagin's maximum principle and dynamic programming. [64, pp16–20] or [53] cover these aspects in more detail, and a simple introduction to Pontryagin's maximum principle is given in [173, Chpt 13] and [169, pp84–105].
The optimal control problem as described above is quite general, and the solution will be different for each case. Section 9.3.2 develops the equations to solve the general optimal control problem using variational calculus, and section 9.2.4 demonstrates this general approach for a simple nonlinear batch reactor application. The accompanying MATLAB simulation clearly shows that the computation required is involved, and motivates the development of less general linear optimal control designs. This results in the Linear Quadratic Regulator (LQR) described subsequently in section 9.4.
There are many other texts that describe the development of optimal controllers. Ogata has already been mentioned, and you should also consult [100, p226] for general linear optimal control, [164, 168, 169] for chemical engineering applications, while [40] is considered a classic text in the optimisation field. A short summary available on the web is [47]. A survey really highlighting the paucity of applications of linear quadratic control, (which is a simplified optimal controller for linear systems), in the process industries is [97].
9.3.1 A REVISION OF MULTIVARIABLE CALCULUS
This section reviews some basic theory of the derivatives of scalar and vector functions with
respect to a vector. We used this information when developing the least-squares solution to polynomial fits in section 3.3.1.
In the following, we will follow the convention that the derivative of a scalar function V with respect to a vector x,

$$ \frac{dV}{d\mathbf{x}} = \begin{bmatrix} \dfrac{\partial V}{\partial x_1} & \dfrac{\partial V}{\partial x_2} & \cdots & \dfrac{\partial V}{\partial x_n} \end{bmatrix} \qquad (9.7) $$

is a row vector. (I have deliberately chosen the symbol V for this scalar since one can immediately recognise this as a voltage-like variable, voltage being a scalar quantity.)
The Jacobian, J, the matrix of derivatives of an m-vector function f with respect to an n-vector x, is an (m × n) matrix,

$$ \mathbf{J} = \begin{bmatrix}
\dfrac{\partial f_1}{\partial x_1} & \dfrac{\partial f_1}{\partial x_2} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\
\dfrac{\partial f_2}{\partial x_1} & \dfrac{\partial f_2}{\partial x_2} & \cdots & \dfrac{\partial f_2}{\partial x_n} \\
\vdots & & & \vdots \\
\dfrac{\partial f_m}{\partial x_1} & \dfrac{\partial f_m}{\partial x_2} & \cdots & \dfrac{\partial f_m}{\partial x_n}
\end{bmatrix} $$

which can also be thought of as a series of individual gradients stacked on top of each other.
Using these definitions, we can derive some common derivatives. The derivative of a (constant) matrix times a vector is

$$ \frac{d(\mathbf{A}\mathbf{x})}{d\mathbf{x}} = \mathbf{A} \qquad (9.8) $$

which makes intuitive sense.
The derivative of a quadratic form is

$$ \frac{d(\mathbf{x}^T\mathbf{A}\mathbf{x})}{d\mathbf{x}} = \mathbf{x}^T\left(\mathbf{A}^T + \mathbf{A}\right) \qquad (9.9) $$
$$ \qquad\qquad = 2\mathbf{x}^T\mathbf{A} \quad \text{if } \mathbf{A} \text{ is symmetric} \qquad (9.10) $$
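We can quickly verify Eqn. 9.9 with the SYMBOLIC toolbox; a small sketch (the test matrix is an arbitrary assumption) is:

syms x1 x2 real
x = [x1; x2];
A = [1 2; 3 4];              % arbitrary non-symmetric test matrix
V = x.'*A*x;                 % scalar quadratic form
g = jacobian(V,x);           % row vector dV/dx
simplify(g - x.'*(A + A.'))  % returns [0 0], confirming Eqn. 9.9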
9.3.2 THE OPTIMAL CONTROL PROBLEM
To develop our optimal controller, we need to define, as listed in section 9.3, an objective function, a feasible search space, and list our constraints. The Optimal Control Problem, or OCP, is to minimise the scalar performance criterion

$$ J(u(t)) = \phi(\mathbf{x}(t_f)) + \int_0^{t_f} L(\mathbf{x},\mathbf{u},t)\, dt \qquad (9.11) $$

over a time horizon from 0 to a final time tf, where our system dynamics follow the general nonlinear dynamic model,

$$ \dot{\mathbf{x}} = \mathbf{f}(\mathbf{x},\mathbf{u},t), \qquad \mathbf{x}(t=0) = \mathbf{x}_0 \qquad (9.12) $$
by choosing an appropriate input trajectory u(t).
This performance criterion, J(u), to be minimised is quite general, is usually related to process economic considerations, and is often called a cost function. Technically, Eqn. 9.11 is termed a functional since it maps the function u(t) to the scalar J.
The objective function, repeated here,

$$ J(u(t)) = \underbrace{ \underbrace{\phi(\mathbf{x}(t_f))}_{\text{Mayer problem}} + \underbrace{\int_0^{t_f} L(\mathbf{x},\mathbf{u},t)\, dt}_{\text{Lagrange problem}} }_{\text{Bolza problem}} \qquad (9.13) $$

is quite general and consists of two terms. The first term (or Mayer problem) is called the termination criterion, and is only concerned with the penalty of the final state. The second, integral, term is concerned with the cost incurred getting to the final state, and is typically some sort of ISE or IAE type term. The scalar integrand, L(x, u, t), is called the Lagrangian. For example if

$$ L(\mathbf{x},\mathbf{u},t) = \mathbf{x}^T\mathbf{Q}\mathbf{x} $$

where the matrix Q is a positive definite matrix, then this term computes the squared deviations in state variables, and is used when we are concerned with the cost of deviations in state variables.
Alternatively, if

$$ L(\mathbf{x},\mathbf{u},t) = 1 $$

then we are concerned solely with the total time for the optimisation (which we want to minimise). If only the terminal criterion, φ, is present, we have a Mayer problem; if we are only concerned with the integral term, we have a Lagrange problem; and if we have both terms, then we have what is known as a Bolza problem. We can convert between all three types of problems by introducing extra state variables, see [53, p71] for more hints on how to do this.¹

¹Note that Ogata, [150, Appendix A], has a very good summary of matrix operations including differentiation, although he defines the gradient as a column vector. Hence he uses the following:

$$ \frac{\partial(\mathbf{A}\mathbf{x})}{\partial \mathbf{x}} = \mathbf{A}^T, \qquad \frac{\partial(\mathbf{x}^T\mathbf{A}\mathbf{x})}{\partial \mathbf{x}} = 2\mathbf{A}\mathbf{x} \quad \text{if } \mathbf{A} \text{ is symmetric.} $$
THE AUGMENTED OBJECTIVE FUNCTION

To incorporate the model constraint, Eqn. 9.12, we adjoin it to the objective function with a vector of Lagrange multipliers, λ, giving the augmented objective functional

$$ J = \phi(\mathbf{x}(t_f)) + \int_0^{t_f} \left[ L(\mathbf{x},\mathbf{u},t) + \boldsymbol{\lambda}^T\left(\mathbf{f}(\mathbf{x},\mathbf{u},t) - \dot{\mathbf{x}}\right)\right] dt \qquad (9.14) $$

where the dimension of the introduced variables is dim(λ) = dim(x) = n. Since we have introduced one λ for each state, these Lagrange multipliers are sometimes known as co-state variables, or adjoint states.
It is convenient to simplify the notation by introducing the scalar Hamiltonian

$$ H \overset{\text{def}}{=} L + \boldsymbol{\lambda}^T\mathbf{f}(\mathbf{x},\mathbf{u}) \qquad (9.15) $$

so now our augmented objective function, Eqn. 9.14, is more concisely written as

$$ J = \phi + \int_0^{t_f} \left( H - \boldsymbol{\lambda}^T\dot{\mathbf{x}} \right) dt \qquad (9.16) $$

Integrating the last term by parts gives

$$ \int_0^{t_f} \boldsymbol{\lambda}^T\dot{\mathbf{x}}\, dt = \left[\boldsymbol{\lambda}^T\mathbf{x}\right]_0^{t_f} - \int_0^{t_f} \dot{\boldsymbol{\lambda}}^T(t)\,\mathbf{x}(t)\, dt \qquad (9.17) $$

Substituting the right-hand expression in Eqn. 9.17 for the λᵀẋ term in Eqn. 9.16 gives

$$ J = \phi - \left[\boldsymbol{\lambda}^T\mathbf{x}\right]_0^{t_f} + \int_0^{t_f} \left( H + \dot{\boldsymbol{\lambda}}^T\mathbf{x} \right) dt \qquad (9.18) $$
Using variational calculus, we get the Euler-Lagrange equations which define the dynamics of the co-state variables,

$$ \dot{\boldsymbol{\lambda}}^T = -\frac{\partial H}{\partial \mathbf{x}}, \quad \text{with final conditions} \quad \boldsymbol{\lambda}^T(t_f) = \frac{\partial \phi(\mathbf{x}(t_f))}{\partial \mathbf{x}} \qquad (9.19) $$
where we emphasise that the co-state dynamics are specified by terminal boundary conditions at
final time t = tf , as opposed to the initial conditions at t = 0 given for the state dynamics in
Eqn. 9.12.
Finally, the optimum control input profile is given by extremetising the Hamiltonian, or equivalently solving for u such that

$$ \frac{\partial H}{\partial \mathbf{u}} = \mathbf{0} \qquad (9.20) $$
Solving these two equations, and applying the manipulated variable to our dynamic system
(Eqn. 9.12) will result in optimal operation.
In many practical cases, the manipulated variable may be constrained between an upper and
lower limit. In this case, we would not blindly use the stationary point of the Hamiltonian
(Eqn. 9.20) to extract the manipulated variable, but we would simply minimise H, now conscious
of the constrained manipulated variable.
In summary, the Optimal Control Problem, or OCP, is given in Algorithm 9.1.
Algorithm 9.1 The Optimal Control Problem
The solution procedure of the optimum control problem (OCP) is as follows:
1. Given a dynamic model with initial conditions,

$$ \dot{\mathbf{x}} = \mathbf{f}(\mathbf{x},\mathbf{u},t), \qquad \mathbf{x}(t=0) = \mathbf{x}_0 \qquad (9.21) $$

and a scalar performance index which we wish to extremetise over a finite time from t = 0 to final time tf,

$$ J(u) = \phi(\mathbf{x}(t_f)) + \int_0^{t_f} L(\mathbf{x},\mathbf{u},t)\, dt \qquad (9.22) $$

2. Form the Hamiltonian,

$$ H(\mathbf{x},\mathbf{u},\boldsymbol{\lambda}) = L + \boldsymbol{\lambda}^T\mathbf{f} \qquad (9.23) $$

3. Solve the co-state dynamics backwards from the terminal condition,

$$ \dot{\boldsymbol{\lambda}} = -\left(\frac{\partial H}{\partial \mathbf{x}}\right)^T, \qquad \boldsymbol{\lambda}(t_f) = \left(\frac{\partial \phi(\mathbf{x}(t_f))}{\partial \mathbf{x}}\right)^T \qquad (9.24) $$

4. Solve the stationary condition

$$ \frac{\partial H}{\partial \mathbf{u}} = \mathbf{0} \qquad (9.25) $$

for u(t).
Section 9.3.4 solves the optimal control problem for some simple examples, including the optimal temperature profile of the batch reactor from section 9.2.4. For the batch reactor we have only a termination criterion in our objective function, and no state or manipulated variable energy cost. This makes the calculation somewhat easier.
9.3.3 THE TWO-POINT BOUNDARY VALUE PROBLEM
Equations 9.12 and 9.19–9.20 need to be solved in order to find the desired u(t), the optimal control policy, at every sampling instance. This, however, is not a trivial task. Normally we would expect to know the initial conditions for the states, so in principle we could integrate Eqn. 9.12 from the known x(0) for one sample interval, except that we do not know the correct control variable (u) to use. However, we can solve for the instantaneous optimum u by solving Eqn. 9.20. The only problem now is that we do not know the correct co-state (λ) variables to use in Eqn. 9.20. So far the only things we know about the co-state variables are the dynamics (Eqn. 9.19), and the final conditions, but unfortunately not the initial conditions.
These types of ODE problems are known as Two Point Boundary Value problems. If we have n state
dynamic equations, then the number of state and co-state dynamic equations is 2n. We know n
initial state conditions, but that still leaves us n initial conditions short for the costates. (To solve
the system of ODEs, we must know as many initial conditions as DEs). However we do know the
final values of the costates, and this information supplies the missing n conditions.
Methods to solve two point boundary value problems are described in [161, Chp16] and [198, p176]. They are in general much harder to solve than initial value ODE problems, and often require an iterative approach. The most intuitive way to solve two point boundary value ODEs is the shooting method or, as it is sometimes termed, the boundary condition iteration, or BCI [168, p237]. This technique is analogous to the artillery gunner who knows both his own and the enemy's position, but does not know the correct angle to point the gun so that the shell lands on the enemy. Typically an iterative approach is used where the gun is pointed at different angles, with an observer who watches where the shells land, and then communicates with the gunner what corrections to make. However, the iteration loop outside the integration is not the only complication. In many physical cases, numerically integrating the state equations backwards or the co-state equations forwards is unstable, while integrating in the opposite direction causes no problems.
An alternative method, known as Control Vector Iteration (CVI), avoids some of the failings of the BCI method. A comparison of these two techniques is given in [95]. Perhaps the most robust scheme is to use collocation, which is the strategy used by the MATLAB routine bvp4c designed for boundary value problems.
9.3.4 OPTIMAL CONTROL EXAMPLES

The following examples illustrate two optimal control problems. In both cases we follow Algorithm 9.1 and use the SYMBOLIC toolbox to assist in developing the necessary equations.
RAYLEIGH'S PROBLEM

We wish to minimise

$$ J(u) = \int_0^{2.5} \left( x_1^2 + u^2 \right) dt \qquad (9.26) $$

subject to the dynamics

$$ \dot{x}_1 = x_2, \qquad x_1(t=0) = -5 $$
$$ \dot{x}_2 = -x_1 + \left(2 - 0.1x_2^2\right)x_2 + 4u, \qquad x_2(t=0) = -5 $$

Following Algorithm 9.1, the Hamiltonian is

$$ H \overset{\text{def}}{=} x_1^2 + u^2 + \lambda_1 x_2 + \lambda_2\left( -x_1 + \left(2 - \frac{x_2^2}{10}\right)x_2 + 4u \right) $$
Given the Hamiltonian, we can construct the co-state dynamics and the terminal conditions from Eqn. 9.24,

$$ \begin{bmatrix} \dot{\lambda}_1 \\ \dot{\lambda}_2 \end{bmatrix} = \begin{bmatrix} \lambda_2 - 2x_1 \\ -\lambda_1 + \left(\dfrac{3x_2^2}{10} - 2\right)\lambda_2 \end{bmatrix}, \qquad \begin{bmatrix} \lambda_1(t_f) \\ \lambda_2(t_f) \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} $$

The optimal control input is given by solving Eqn. 9.25 for u(t), or

$$ \frac{\partial H}{\partial u} = 2u + 4\lambda_2 = 0 $$

showing us that

$$ u^\star(t) = -2\lambda_2(t) $$

Now all that is left to do is to form and solve our two-point boundary value problem. We can substitute the expression for the optimum u(t) found above into the state dynamics, and append the co-state dynamics.
So in summary, to solve the optimal control problem, we need to solve the following boundary value ODE problem,

$$ \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{\lambda}_1 \\ \dot{\lambda}_2 \end{bmatrix} = \begin{bmatrix} x_2 \\ -x_1 + \left(2 - 0.1x_2^2\right)x_2 - 8\lambda_2 \\ \lambda_2 - 2x_1 \\ -\lambda_1 + \left(\dfrac{3x_2^2}{10} - 2\right)\lambda_2 \end{bmatrix}, \qquad \begin{matrix} x_1(0) = -5 \\ x_2(0) = -5 \\ \lambda_1(t_f) = 0 \\ \lambda_2(t_f) = 0 \end{matrix} $$

using the two-point boundary value ODE routine bvp4c. We additionally need to specify a suitable grid for the collocation points to give an initial estimate trajectory for the state and co-state profiles over the integration interval. In this case I will simply supply constant trajectories.
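The solution code was not reproduced here; a minimal sketch using bvp4c (variable names are assumptions) could be:

zdot = @(t,z) [z(2); ...                                  % z = [x1; x2; lambda1; lambda2]
               -z(1) + (2-0.1*z(2)^2)*z(2) - 8*z(4); ...
                z(4) - 2*z(1); ...
               -z(3) + (3*z(2)^2/10 - 2)*z(4)];
bcres = @(z0,ztf) [z0(1)+5; z0(2)+5; ztf(3); ztf(4)];     % x(0) = [-5,-5] & lambda(tf) = 0
solinit = bvpinit(linspace(0,2.5,20),[0 0 0 0]);          % constant initial trajectories
sol = bvp4c(zdot,bcres,solinit);                          % solve the TPBVP
u = -2*sol.y(4,:);                                        % optimum input, u*(t) = -2*lambda2(t)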
The optimum profile is given in Fig. 9.10, where we can quickly check that the boundary conditions are satisfied. We can also visually check that the optimum input is u★(t) = −2λ₂(t), as required. The circles plotted show the position of the collocation points, and we should check that they capture the general trend of the profile.
AN OPTIMAL BATCH REACTOR TEMPERATURE PROFILE

In the optimum profile determination for the batch reactor example in section 9.2.4, we approximated the continuous temperature profile with a finite series of constant zeroth-order holds. In this example, we will compute the exact continuous curve.
Recall that the plant dynamics are given by the reaction kinetics, with initial conditions,

$$ \frac{dx_1}{dt} = -2e^{-6/u}\,x_1, \quad \text{starting from } x_1(t=0) = 0.9 $$
$$ \frac{dx_2}{dt} = 2e^{-6/u}\,x_1 - 5e^{-11/u}\,x_2, \quad \text{starting from } x_2(t=0) = 0.1 \qquad (9.27) $$

and the objective function to be maximised after a batch reaction of total time tf = 2 hours is

$$ J = x_2(t=2) + \int_0^{t=2} 0\, dt $$

which indicates that the Lagrangian L(x, u, t) = 0 and φ(x(tf)) = x₂(tf). The Hamiltonian function, H = L + λᵀf, in this instance is

$$ H = 0 + \begin{bmatrix} \lambda_1 & \lambda_2 \end{bmatrix} \begin{bmatrix} -2e^{-6/u}x_1 \\ 2e^{-6/u}x_1 - 5e^{-11/u}x_2 \end{bmatrix} = -2\lambda_1 e^{-6/u}x_1 + \lambda_2\left(2e^{-6/u}x_1 - 5e^{-11/u}x_2\right) $$
Figure 9.10: The optimum state trajectories x(t), input u(t), and co-state trajectories λ(t) for Rayleigh's problem, with the initial co-states λ(0) marked.
The co-states vary with time following λ̇ = −(∂H/∂x)ᵀ, and finish with

$$ \frac{d\lambda_1}{dt} = 2e^{-6/u}\left(\lambda_1 - \lambda_2\right), \quad \text{ending at } \lambda_1(t=2) = 0 $$
$$ \frac{d\lambda_2}{dt} = 5e^{-11/u}\,\lambda_2, \quad \text{ending at } \lambda_2(t=2) = 1 \qquad (9.28) $$

since φ = x₂. Now the optimum temperature trajectory is the one that maximises H, or the solution to ∂H/∂u = 0, which in this case can be solved analytically to give

$$ u^\star(t) = \frac{5}{\ln\left( \dfrac{55\lambda_2 x_2}{12 x_1 (\lambda_2 - \lambda_1)} \right)} \qquad (9.29) $$
We can replicate this development of constructing the co-state dynamics and optimal input with the help of a symbolic manipulator, as shown in Listing 9.4 below.

Listing 9.4: Analytically computing the co-state dynamics and optimum input trajectory as a function of states and co-states.

>> syms u lam1 lam2 x1 x2 real
>> x = [x1; x2]; lam = [lam1; lam2];  % (assumed setup lines)
>> f = [-2*exp(-6/u)*x1; 2*exp(-6/u)*x1 - 5*exp(-11/u)*x2]; % dynamics, Eqn. 9.27
>> phi = x2; L = 0;
>> H = L + lam.'*f;                   % Hamiltonian
>> lam_dot = -jacobian(H,x).'         % co-state dynamics, Eqn. 9.28
lam_dot =
(2*lam1)/exp(6/u) - (2*lam2)/exp(6/u)
(5*lam2)/exp(11/u)
>> lam_final = jacobian(phi,x)'       % terminal condition, lambda(tf) = dphi/dx
lam_final =
0
1
>> uopt = solve(diff(H,u),u)
uopt =
5/log(-(55*lam2*x2)/(12*lam1*x1 - 12*lam2*x1))
Once again we can solve the two-point boundary value problem succinctly using bvp4c, as demonstrated in Listing 9.5. We need to supply the state and co-state dynamic equations, Eqn. 9.27 and Eqn. 9.28, the four required boundary conditions, namely the initial x₁(0), x₂(0) and the final λ₁(2), λ₂(2), and the optimum input trajectory as a function of states and co-states.
Listing 9.5: Solving the reaction profile boundary value problem using the boundary value problem solver, bvp4c.m.
u = @(x) 5/log(55*x(4)*x(2)/12/x(1)/(x(4)-x(3))); % Optimum input, Eqn. 9.29, with x = [x1; x2; lambda1; lambda2]
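The remainder of Listing 9.5 was not reproduced; a minimal sketch of the missing part (names assumed), consistent with Eqns 9.27 and 9.28, could be:

zdot = @(t,x) [-2*exp(-6/u(x))*x(1); ...                    % Eqn. 9.27
                2*exp(-6/u(x))*x(1) - 5*exp(-11/u(x))*x(2); ...
                2*exp(-6/u(x))*(x(3)-x(4)); ...             % Eqn. 9.28
                5*exp(-11/u(x))*x(4)];
bcres = @(x0,xtf) [x0(1)-0.9; x0(2)-0.1; xtf(3); xtf(4)-1]; % x(0) & lambda(2)
solinit = bvpinit(linspace(0,2,20),[0.5 0.5 0.5 0.5]);      % initial trajectory guess
sol = bvp4c(zdot,bcres,solinit);                            % solve the TPBVP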
The optimum input temperature policy is shown in Fig. 9.11 with the resultant state and costate
trajectories. The amount of desired product B that is formed after 2 hours using this policy is
0.438 and we can be confident that no other temperature policy will improve on that result.
In both the Rayleigh and the batch reactor optimal control examples we could extract analytical expressions for the optimal control policy as an explicit function of states and co-states, u★(t) = g(x, λ, t), which made the subsequent boundary-value problem easier to solve. However, in many practical cases, the solution of Eqn. 9.25 to give u★(t) may not be possible to find in closed form. In these cases, we may need to embed an algebraic equation solver such as fsolve, or an optimiser such as fminunc, inside the BVP solution routine. However, the problem to be solved is now a differential algebraic equation, or DAE, which introduces considerable computational problems.
Figure 9.11: The concentrations, the optimum temperature trajectory, and co-states for the batch reactor as evaluated using the optimum control policy. The final amount of desired product is c_B(2) = 0.438.

9.3.5 PROBLEMS WITH SPECIFIED TARGET SETS

In some optimal control problems we wish not only to establish the optimum profile for u(t) starting from a given initial condition, x(0), but we might also require some (or all) of our states to satisfy specified target conditions at the final time,

$$ \boldsymbol{\psi}(\mathbf{x}(t_f)) = \mathbf{0} \qquad (9.30) $$
These problems are known as OCP problems with a target or manifold set. Further details and solution strategies for these sorts of problems are given in [47, 164]. The main difference here is that we need to introduce a new vector of Lagrange multipliers, ν, of the same dimension as the number of target state constraints in Eqn. 9.30, and these get added to the modified objective functional.
A simplified, but very common, situation is where we wish some of our states to hit a specified target, and perhaps leave the remainder of our states free. In this case the boundary conditions simplify to:

1. if xᵢ(tf) is fixed, then we use that as a boundary constraint, i.e. xᵢ(tf) = x_{if},

2. if xᵢ(tf) is free, then we use λᵢ(tf) = ∂φ/∂xᵢ(tf) as usual for the boundary constraint.
Note that together, these two conditions supply the additional n boundary conditions we need to
solve the two-point boundary value problem of dimension 2n.
Suppose we wish to find the optimum profile u(t) over the interval t ∈ [0, 1] to minimise

$$ J = \int_0^1 u^2\, dt $$

subject to the dynamics

$$ \dot{x} = x^2\sin(x) + u, \qquad x(0) = 0 $$

but where we also require the state x to hit a target at the final time,

$$ x(1) = 0.5 $$
In this case we have one state variable and one associated co-state variable, so we need two
boundary conditions. One is the initial state condition, and the other is the specified state target.
This means that for this simple problem we do not need to specify any boundary conditions for
the costate. The rest of the OCP problem is much the same as that given in Algorithm 9.1.
syms x u lam real
tf = 1;               % final time
xtarget = 0.5;        % target constraint
f = x^2*sin(x) + u;   % state dynamics
phi = 0; L = u^2;
H = L + lam'*f        % Hamiltonian, H = L + lambda'*f
Running the above code gives the optimum input as u = −λ/2, and the two-point boundary value problem to solve as

$$ \begin{bmatrix} \dot{x} \\ \dot{\lambda} \end{bmatrix} = \begin{bmatrix} x^2\sin(x) - \lambda/2 \\ -\lambda\left( x^2\cos(x) + 2x\sin(x) \right) \end{bmatrix} \qquad (9.31) $$

subject to x(0) = 0 and x(1) = 0.5.
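The helper variables passed to bvp4c in the snippet below were defined earlier in the (not reproduced) part of the listing; a minimal sketch consistent with Eqn. 9.31 (names taken from the call below) is:

zdot = @(t,z) [z(1)^2*sin(z(1)) - z(2)/2; ...                % state with u = -lambda/2
              -z(2)*(z(1)^2*cos(z(1)) + 2*z(1)*sin(z(1)))];  % co-state dynamics
BCres = @(z0,ztf) [z0(1); ztf(1)-xtarget];                   % x(0) = 0 & x(1) = 0.5
xlam_init = [0.1; 0.1];                                      % constant initial guess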
solinit = bvpinit(linspace(0,tf,10),xlam_init);
sol = bvp4c(zdot,BCres,solinit);  % Now solve the TPBVP
t = sol.x; z = sol.y';            % Extract solution
x = z(:,1); lam = z(:,2);
u = -lam/2;                       % Optimum input: u* = -lambda/2
Fig. 9.12 shows the result of this optimal control problem. We should note that the state, x, did indeed manage to hit the specified target at t = 1. Finally, we can establish the optimum profile for u(t).
Figure 9.12: The optimum state and co-state trajectories for the targeted problem; the state x hits the specified target x(1) = 0.5.
Problem 9.1 (These problems were taken in part from [53, p76].)

1. Show that determining the optimal input trajectory u(t) for the one dimensional system,

$$ \dot{x} = -x + u, \qquad x(0) = 1 $$

such that

$$ J(u) = \frac{1}{2}\int_0^1 \left( x^2 + u^2 \right) dt $$

is minimised reduces to solving the two point boundary value problem,

$$ \begin{bmatrix} \dot{x} \\ \dot{\lambda} \end{bmatrix} = \begin{bmatrix} -1 & -1 \\ -1 & 1 \end{bmatrix}\begin{bmatrix} x \\ \lambda \end{bmatrix}, \qquad \begin{matrix} x(0) = 1 \\ \lambda(1) = 0 \end{matrix} $$
9.4 THE LINEAR QUADRATIC REGULATOR
The general optimal controller from section 9.3 has some nice properties, not least that the trajectory is optimal. However, there are some obvious disadvantages, some of which are substantial. Quite apart from the sheer complexity of the problem formulation and the subsequent numerical solution of the two-point boundary value problem, there is the fact that the final result is a manipulated variable trajectory, which is essentially an open loop controller. This is fine provided we always start from the same conditions, and follow down the prescribed path, but what happens if, due to some unexpected disturbance, we fall off this prescribed path? Do we still follow the pre-computed trajectory for u(t), or must we re-compute a new optimal profile for u(t)?
Ideally we would like our optimal controller to have some element of feedback, which after all is a
characteristic of all our other practical controllers. Furthermore if we restrict our attention to just
linear plant models, and use just a quadratic performance objective, then the subsequent problem setup, and implementation, is made considerably easier. These types of optimal controllers
designed to control linear plants with quadratic performance are known as Linear Quadratic Regulators, or LQR.
9.4.1 THE CONTINUOUS LINEAR QUADRATIC REGULATOR

The linear quadratic control problem is to choose a manipulated variable trajectory u(t) that minimises a combination of termination error and the integral of the squared state and manipulated variable error over a time horizon from t = 0 to some specified (although possibly infinite) final time t = T,

$$ J = \frac{1}{2}\mathbf{x}^T(T)\,\mathbf{S}\,\mathbf{x}(T) + \frac{1}{2}\int_0^T \left( \mathbf{x}^T\mathbf{Q}\mathbf{x} + \mathbf{u}^T\mathbf{R}\mathbf{u} \right) dt \qquad (9.32) $$
given the constraint of linear plant dynamics,

$$ \dot{\mathbf{x}} = \mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{u}, \qquad \mathbf{x}(t=0) = \mathbf{x}_0 \qquad (9.33) $$
As we assume linear plant dynamics, and the performance objective, Eqn. 9.32, is of a quadratic form, this type of controller is known as a continuous Linear Quadratic Regulator or LQR. The weighting matrices Q and S are positive semi-definite (and typically diagonal) matrices, while R must be a strictly positive definite matrix. Note that any real positive diagonal matrix is also positive definite. The expressions xᵀQx and uᵀRu are called quadratic forms, and are sometimes compactly written as ‖x‖²_Q and ‖u‖²_R respectively.
Using the terminology introduced in the general optimal control problem given in section 9.3.2, Eqn. 9.11, we have the following in the specific case of an LQR:

$$ \mathbf{f} = \mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{u}, \qquad \mathbf{x}(t=0) = \mathbf{x}_0 \qquad (9.34) $$
$$ L(\mathbf{x},\mathbf{u},t) = \frac{1}{2}\mathbf{x}^T\mathbf{Q}\mathbf{x} + \frac{1}{2}\mathbf{u}^T\mathbf{R}\mathbf{u} \qquad (9.35) $$
$$ \phi = \frac{1}{2}\mathbf{x}^T(T)\,\mathbf{S}\,\mathbf{x}(T) \qquad (9.36) $$

for the state dynamics constraint, the Lagrangian or 'getting there' cost, and the final cost term respectively.
For many applications there is no explicit termination cost, since all the cost of state deviations is accounted for in the energy term, thus often simply S = 0. In addition, the upper limit of the time horizon is often set to T = ∞, in which case we are obviously no longer concerned with the termination cost term.
The Hamiltonian in the LQR case (following the definition from Eqn. 9.15) is

$$ H = L + \boldsymbol{\lambda}^T\mathbf{f} = \frac{1}{2}\mathbf{x}^T\mathbf{Q}\mathbf{x} + \frac{1}{2}\mathbf{u}^T\mathbf{R}\mathbf{u} + \boldsymbol{\lambda}^T\left(\mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{u}\right) \qquad (9.37) $$
Now applying the Euler-Lagrange equations (Eqns 9.19 and 9.20), using the differentiation rules from section 9.3.1, gives

$$ \frac{d\boldsymbol{\lambda}^T}{dt} = -\frac{\partial H}{\partial \mathbf{x}} = -\frac{\partial}{\partial \mathbf{x}}\left( \frac{1}{2}\mathbf{x}^T\mathbf{Q}\mathbf{x} + \boldsymbol{\lambda}^T\mathbf{A}\mathbf{x} \right) = -\mathbf{x}^T\mathbf{Q} - \boldsymbol{\lambda}^T\mathbf{A} $$

or, transposing,

$$ \dot{\boldsymbol{\lambda}} = -\mathbf{Q}\mathbf{x} - \mathbf{A}^T\boldsymbol{\lambda} \qquad (9.38) $$

since Q is symmetric and we have made use of the matrix identity (AB)ᵀ = BᵀAᵀ.
Similarly, using Eqn. 9.20 gives

$$ \frac{\partial H}{\partial \mathbf{u}} = \mathbf{0} = \mathbf{u}^T\mathbf{R} + \boldsymbol{\lambda}^T\mathbf{B} \qquad (9.39) $$

or Rᵀu = −Bᵀλ (again using the above matrix identity), which implies that the optimal control law is

$$ \mathbf{u} = -\mathbf{R}^{-1}\mathbf{B}^T\boldsymbol{\lambda} \qquad (9.40) $$
Eqn. 9.40 defines the optimal u as a function of the co-states, λ. Note that now our problem has 2n unknown states: the original n x states, and the n introduced co-states, λ. We can write this in a compact form using Eqns 9.40, 9.34 and 9.38,

$$ \begin{bmatrix} \dot{\mathbf{x}} \\ \dot{\boldsymbol{\lambda}} \end{bmatrix} = \begin{bmatrix} \mathbf{A} & -\mathbf{B}\mathbf{R}^{-1}\mathbf{B}^T \\ -\mathbf{Q} & -\mathbf{A}^T \end{bmatrix}\begin{bmatrix} \mathbf{x} \\ \boldsymbol{\lambda} \end{bmatrix} \qquad (9.41) $$

with mixed boundary conditions

$$ \mathbf{x}(0) = \mathbf{x}_0 \quad \text{and} \quad \boldsymbol{\lambda}(T) = \mathbf{S}\mathbf{x}(T) $$
From Lyapunov's theorem, we know that the co-states are related to the states via

$$ \boldsymbol{\lambda}(t) = \mathbf{P}(t)\,\mathbf{x}(t) \qquad (9.42) $$

where P(t) is a time varying matrix. Differentiating Eqn. 9.42 using the product rule gives

$$ \dot{\boldsymbol{\lambda}} = \dot{\mathbf{P}}\mathbf{x} + \mathbf{P}\dot{\mathbf{x}} \qquad (9.43) $$
Now we can substitute the control law based on the co-states (Eqn. 9.40) into the original linear dynamic model (Eqn. 9.34), and further substitute the relation for the co-states in terms of the states (Eqn. 9.42), giving

$$ \dot{\mathbf{x}} = \mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{u} \qquad (9.44) $$
$$ \quad = \mathbf{A}\mathbf{x} + \mathbf{B}\left(-\mathbf{R}^{-1}\mathbf{B}^T\boldsymbol{\lambda}\right) \qquad (9.45) $$
$$ \quad = \mathbf{A}\mathbf{x} - \mathbf{B}\mathbf{R}^{-1}\mathbf{B}^T\mathbf{P}\mathbf{x} \qquad (9.46) $$

Now substituting Eqn. 9.46 into Eqn. 9.43, and equating with the Euler-Lagrange equation, Eqn. 9.38, gives

$$ -\mathbf{Q}\mathbf{x} - \mathbf{A}^T\mathbf{P}\mathbf{x} = \dot{\mathbf{P}}\mathbf{x} + \mathbf{P}\left(\mathbf{A}\mathbf{x} - \mathbf{B}\mathbf{R}^{-1}\mathbf{B}^T\mathbf{P}\mathbf{x}\right) \qquad (9.47) $$
Rearranging, and cancelling x from each term, gives the matrix differential equation

$$ \dot{\mathbf{P}} = -\mathbf{P}\mathbf{A} - \mathbf{A}^T\mathbf{P} + \mathbf{P}\mathbf{B}\mathbf{R}^{-1}\mathbf{B}^T\mathbf{P} - \mathbf{Q} \qquad (9.48) $$

which is called the differential matrix Riccati equation. The boundary requirement, or termination value, from Eqn. 9.19 is

$$ \mathbf{P}(t=T) = \mathbf{S} \qquad (9.49) $$
which can be solved backwards in time using the original dynamic model matrices A, B, and the controller design matrices Q, R. While we need to know the model and design matrices for this solution, we need not know the initial condition for the states. This is fortunate, since it means that we can solve the matrix Riccati equation offline once, store the series of P(t), and apply this solution to our problem irrespective of initial condition. We only need to repeat this computation if the plant model or design criteria change.
In summary, we have now developed an optimal state feedback controller of the proportional form

$$ \mathbf{u} = -\mathbf{K}(t)\,\mathbf{x} \qquad (9.50) $$

where the time varying matrix feedback gain is (from Eqn. 9.40) defined as

$$ \mathbf{K}(t) = \mathbf{R}^{-1}\mathbf{B}^T\mathbf{P}(t) \qquad (9.51) $$

where the time-varying P matrix is given by Eqn. 9.48 ending with Eqn. 9.49. Since we have assumed that the setpoint is at the origin in Eqn. 9.50, we have a regulatory problem, rather than a servo or tracking problem. Servo controllers are discussed in section 8.2.3.
9.4.2 ANALYTICAL SOLUTION TO THE LQR PROBLEM
The problem with implementing the full time-varying optimal controller is that we must solve the nonlinear differential matrix Riccati equation, Eqn. 9.48. This is a complicated computational task, and furthermore we must store offline the resultant gain matrices at various times throughout the interval of interest. However, there is an analytical solution to this problem, which is described in [62, §2.4] and [164, §5.2]. The downside is that it requires us to compute matrix exponentials, and if we want to know P(t), as opposed to just the co-states, we must do this computation every sample time.
We start by solving for both states and co-states in Eqn. 9.41, repeated here,

$$ \begin{bmatrix} \dot{\mathbf{x}} \\ \dot{\boldsymbol{\lambda}} \end{bmatrix} = \underbrace{\begin{bmatrix} \mathbf{A} & -\mathbf{B}\mathbf{R}^{-1}\mathbf{B}^T \\ -\mathbf{Q} & -\mathbf{A}^T \end{bmatrix}}_{\mathcal{H}} \begin{bmatrix} \mathbf{x} \\ \boldsymbol{\lambda} \end{bmatrix}, \qquad \begin{matrix} \mathbf{x}(0) = \text{known} \\ \boldsymbol{\lambda}(T) = \text{known} \end{matrix} \qquad (9.52) $$
where the matrix ℋ is known as the Hamiltonian matrix. This approach has the attraction that it is a linear, homogeneous differential equation with an analytical solution. Starting from any known point at t = 0, the states and co-states at any future time are given by

$$ \begin{bmatrix} \mathbf{x}(t) \\ \boldsymbol{\lambda}(t) \end{bmatrix} = e^{\mathcal{H}t}\begin{bmatrix} \mathbf{x}(0) \\ \boldsymbol{\lambda}(0) \end{bmatrix} \qquad (9.53) $$

However, the outstanding problem is that we do not know λ(0), so we cannot get started! To make progress we define the transition matrix

$$ \boldsymbol{\Phi}(t) \overset{\text{def}}{=} e^{\mathcal{H}t} \qquad (9.54) $$

and remember that with MATLAB it is straightforward to compute this using matrix exponentials, since it is a function of known, constant matrices. We then partition the matrix into four equal sized n × n blocks as

$$ \boldsymbol{\Phi}(t) = \begin{bmatrix} \boldsymbol{\Phi}_{11}(t) & \boldsymbol{\Phi}_{12}(t) \\ \boldsymbol{\Phi}_{21}(t) & \boldsymbol{\Phi}_{22}(t) \end{bmatrix} \qquad (9.55) $$

which means we can express Eqn. 9.54 as

$$ \mathbf{x}(t) = \boldsymbol{\Phi}_{11}(t)\,\mathbf{x}(0) + \boldsymbol{\Phi}_{12}(t)\,\boldsymbol{\lambda}(0) \qquad (9.56) $$
Applying the terminal boundary condition λ(T) = Sx(T) and solving for the unknown initial co-states gives

$$ \boldsymbol{\lambda}(0) = \left[ \boldsymbol{\Phi}_{22}(T) - \mathbf{S}\boldsymbol{\Phi}_{12}(T) \right]^{-1}\left[ \mathbf{S}\boldsymbol{\Phi}_{11}(T) - \boldsymbol{\Phi}_{21}(T) \right]\mathbf{x}(0) \qquad (9.57) $$

and, more generally, the time-varying Riccati matrix is

$$ \mathbf{P}(t) = \left[ \boldsymbol{\Phi}_{22}(T-t) - \mathbf{S}\boldsymbol{\Phi}_{12}(T-t) \right]^{-1}\left[ \mathbf{S}\boldsymbol{\Phi}_{11}(T-t) - \boldsymbol{\Phi}_{21}(T-t) \right] \qquad (9.58) $$

In many applications, it suffices to ignore the co-states and just consider P(0), which is given directly by

$$ \mathbf{P}(0) = \left[ \boldsymbol{\Phi}_{22}(T) - \mathbf{S}\boldsymbol{\Phi}_{12}(T) \right]^{-1}\left[ \mathbf{S}\boldsymbol{\Phi}_{11}(T) - \boldsymbol{\Phi}_{21}(T) \right] \qquad (9.59) $$
We will investigate further this important simplification in section 9.4.3.
AN LQR EXAMPLE

Suppose we wish to control the plant

$$ \dot{\mathbf{x}} = \begin{bmatrix} -2 & -4 \\ 3 & -1 \end{bmatrix}\mathbf{x} + \begin{bmatrix} 4 \\ 6 \end{bmatrix}u, \qquad \mathbf{x}(0) = \begin{bmatrix} -2 \\ 1 \end{bmatrix} $$

using an optimal controller of the form u = −K(t)x to drive the process from this specified initial condition to zero. For this application, we decide that the manipulated variable deviations are more costly than the state deviations,

$$ \mathbf{Q} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad \mathbf{R} = 2, \qquad \mathbf{S} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} $$
Listing 9.6 first defines the plant, then solves for the initial co-states, λ(0), using Eqn. 9.57, and then finally simulates the augmented state and co-state system given by Eqn. 9.52.
Listing 9.6: The analytical time-varying LQR solution via the Hamiltonian matrix.

A = [-2 -4; 3 -1]; B = [4,6]'; x0 = [-2,1]'; % Plant: dx/dt = Ax+Bu & initial state x(0)
n = length(A);                         % problem dimension
Q = eye(2); R = 2; S = zeros(size(A)); % design criteria: Q, R and S
T = 0.5;                               % final time, T
H = [A, -B/R*B'; -Q, -A'];             % Hamiltonian matrix, Eqn. 9.52
Phi = expm(H*T);                       % transition matrix, Eqn. 9.54
% (assumed lines) partition Phi & solve for the initial co-states, Eqn. 9.57
P11 = Phi(1:n,1:n);      P12 = Phi(1:n,n+1:end);
P21 = Phi(n+1:end,1:n);  P22 = Phi(n+1:end,n+1:end);
Lam_0 = (P22 - S*P12)\(S*P11 - P21)*x0;
z0 = [x0; Lam_0];                      % augment states & co-states
G = ss(H,zeros(2*n,1),eye(2*n),0);
initial(G,z0,T)                        % simulate it
The results of this simulation are given in Fig. 9.13, which also compares the near identical results from a slightly sub-optimal controller that simply uses the steady-state solution to the Riccati equation, discussed next in section 9.4.3. Note that the performance drop when using the considerably simpler steady-state controller is small, and is only noticeable when the total time horizon is relatively short.
An alternative solution procedure to the analytical solution given in Listing 9.6 is to numerically
integrate the continuous time Riccati equation, Eqn. 9.48, backwards using say the workhorse
ode45 with initial condition Eqn. 9.49. The trick that the function in Listing 9.7 does is to collapse
the matrix Eqn. 9.48 to a vector form that the ODE function can then integrate. The negative sign
on the penultimate line is because we are in fact integrating backwards.
Listing 9.7: The continuous time differential Riccati equation. This routine is called from Listing 9.8.
function dPdt = mRiccati(t, P, A, B, Q)
% Solve the continuous matrix differential Riccati equation, Eqn. 9.48,
% dP/dt = -A'P - PA + PBB'P - Q    (R is absorbed into B, see below)
P = reshape(P, size(A));                % (assumed) convert the vector back to a matrix
dPdt = -A.'*P - P*A + P*(B*B.')*P - Q;  % Eqn. 9.48
dPdt = -dPdt(:);                        % re-vectorise; negated as we integrate backwards
end
We call the function containing the matrix differential equation in Listing 9.7 via the ode45 integrator, as shown in Listing 9.8. Note that we can express BR⁻¹Bᵀ efficiently as ZZᵀ where Z = B(chol(R))⁻¹.

Listing 9.8: Solves the continuous time differential Riccati equation using a numerical ODE integrator.
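The body of Listing 9.8 was not reproduced; a minimal sketch of the call (assumed details, using the plant and weights from Listing 9.6) is:

Z = B/chol(R);                                % so that B*inv(R)*B' = Z*Z'
[t,Pv] = ode45(@(t,P) mRiccati(t,P,A,Z,Q), [0 T], S(:));
% each row of Pv holds the vectorised P evaluated at time T - t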
Figure 9.13: Comparing the full time-varying LQR control with a steady-state sub-optimal controller for a long horizon (left) and a short horizon (right). The upper two plots trend the two states and one input for the steady-state case and for the time varying case. The lower three plots trend the time-varying elements of the matrices K(t), P(t), and the co-state vector λ(t) for the full time-varying optimal case.
Now we have generated a trajectory for the four elements of the P(t) matrix as a function of time. We could then pre-compute the time varying gains, K(t), and store them ready to be used in our closed loop simulation.

Compared to the analytical solution, this numerical integration scheme requires that we integrate n² equations, as opposed to 2n when we combine the co-states. Furthermore, we must store the trajectory of pre-computed time-varying gains. In practice of course, this storage requirement rapidly becomes unwieldy and, as we shall see in the next section, is rarely worth the effort.
9.4.3 THE STEADY-STATE RICCATI EQUATION
In the optimal controller scheme presented above, the gain, K(t), is a continuously time varying matrix, as opposed to a more convenient constant matrix. A time varying controller gain requires excessive computer resources, even if we did discretise, since we need to store an (m × n) gain matrix at every sample time in the control computer over the horizon of interest.

We can make an approximation to our already approximate optimal controller by using only the steady-state gain derived from the steady-state solution of the matrix differential Riccati equation when Ṗ = 0. With this approximation, the differential Eqn. 9.48 simplifies to the algebraic matrix equation,

$$ \mathbf{0} = \mathbf{A}^T\mathbf{P}_\infty + \mathbf{P}_\infty\mathbf{A} - \mathbf{P}_\infty\mathbf{B}\mathbf{R}^{-1}\mathbf{B}^T\mathbf{P}_\infty + \mathbf{Q} \qquad (9.60) $$

where the steady-state solution, P∞, is now a constant matrix which is considerably more convenient to implement. Eqn. 9.60 is known as the continuous-time algebraic Riccati equation, or CARE.

The now slightly sub-optimal steady-state controller gain is still obtained from Eqn. 9.51,

$$ \mathbf{K}_\infty = \mathbf{R}^{-1}\mathbf{B}^T\mathbf{P}_\infty \qquad (9.61) $$
The CARE can be solved numerically in MATLAB with, for example, the care routine, while the linear quadratic regulator lqr routine finds the controller gain directly, which is slightly more convenient than the two-step procedure of solving the algebraic Riccati equation explicitly. All three routines are demonstrated in Listing 9.9.
Listing 9.9: Computing the steady-state LQR gain.

% Check results
>> [K2,P2,E] = lqr(A,B,Q,R)
K2 =
    0.1847    0.6603
This completes the controller design problem, the rest is just a simple closed-loop simulation as
shown in Listing 9.10.
Listing 9.10: Closed loop simulation using an optimal steady-state controller gain.
C = eye(size(A)); D = zeros(2,1);
Gcl = ss(A-B*K,zeros(size(B)),C,D); % Hopefully stable optimal closed loop
T = 1; x0 = [-2,1]';
[Y,t,X] = initial(Gcl,x0,T);        % simulate from the given initial condition
U = -X*K';                          % back-calculate the input, u = -Kx
The state and input trajectories of this optimal controller given a non-zero initial condition are given in Fig. 9.14.

Figure 9.14: The states x₁(t), x₂(t) and input u(t) of the steady-state LQR controller given a non-zero initial condition.
The difference between the fully optimal time varying case and the simplification using the steady-state solution to the Riccati equation is given in Fig. 9.13, which uses the same example from page 428. For short optimisation intervals there is a small difference between the steady-state solution and the full evolving optimal solution, as shown in the right-hand figure of Fig. 9.13. However, as the final time increases, there is no discernible difference in the trajectories of the states nor in the input in the left-hand figure of Fig. 9.13. This goes a long way to explaining why control engineers hardly ever bother with the full optimal solution. The lower plots in Fig. 9.13 trend the time varying P(t) and K(t) matrices.
MULTIPLE SOLUTIONS TO THE ALGEBRAIC RICCATI EQUATION
As mentioned in section 9.4.3, being a quadratic, there are multiple solutions to the continuous
algebraic Riccati equation, Eqn. 9.60. We can investigate all these solutions using the symbolic
toolbox for small sized problems by equating the coefficients and solving the resultant system
of quadratic equations. Note that there are in fact only (n2 + n)/2 unknown elements in P as
opposed to n2 due to the symmetry.
Suppose we wish to design an LQR for the simple SISO system G(s) = 1/(s(s + 1)), with equal
state and input weighting which is a problem initially discussed in [5].
G = tf(1,[1 1 0])           % Plant, G(s) = 1/(s(s+1))
[A,B,C,D] = ssdata(ss(G))   % actually transposed from [5], but no matter ...
Q = eye(2); R = 1;

which returns, amongst others, the input matrix

B =
     1
     0
Now inserting the plant and tuning matrices into the continuous algebraic Riccati equation, Eqn. 9.60, gives the nonlinear matrix system

$$ \mathbf{0} = \mathbf{A}^T\mathbf{P} + \mathbf{P}\mathbf{A} - \mathbf{P}\mathbf{B}\mathbf{R}^{-1}\mathbf{B}^T\mathbf{P} + \mathbf{Q} $$
$$ = \begin{bmatrix} -1 & 1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix} + \begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix}\begin{bmatrix} -1 & 0 \\ 1 & 0 \end{bmatrix} - \begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix}\begin{bmatrix} 1 & 0 \end{bmatrix}\begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} $$
noting of course that since P is symmetric, p12 = p21 . From this point onwards, the algebraic
manipulation gets messy, so we will use the Symbolic toolbox to help us.
>> n = 2;                     % Dimension of problem
>> P = sym('P',[n,n]);        % Create symbolic P matrix
>> P = triu(P) + triu(P,1).'  % Force P to be symmetric
>> Res = A'*P + P*A - P*B/R*B'*P + Q % Continuous algebraic Riccati equation, Eqn. 9.60
Res =
[ - P1_1^2 - 2*P1_1 + 2*P1_2 + 1,    P2_2 - P1_2 - P1_1*P1_2]
[       P2_2 - P1_2 - P1_1*P1_2,                 1 - P1_2^2]
This final line shows us that we have 4 conditions to be satisfied (although two of them are the same),

$$ 0 = -p_{11}^2 - 2p_{11} + 2p_{12} + 1 $$
$$ 0 = p_{22} - p_{12} - p_{11}p_{12} $$
$$ 0 = 1 - p_{12}^2 $$

giving us enough equations to solve for the three unknown elements in P. We can see immediately that p₁₂² = 1, giving us two solutions: p₁₂ = ±1. We could continue this substitution by hand, but it might be faster to use the analytical solve command:
The analytical solve routine returned three solutions for the three unknowns, namely

$$ \mathbf{P} = \begin{bmatrix} -1 & -1 \\ -1 & 0 \end{bmatrix}, \quad \begin{bmatrix} -3 & 1 \\ 1 & -2 \end{bmatrix}, \quad \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix} $$

of which only the last is positive definite, with eig(Pc) = {2.6, 0.4}, and therefore stabilising. Practically we can extend this approach to solving 3rd (with 8 solutions) and 4th order (with 15 solutions) systems, but anything larger seems to stall.
Of course, we could have used the standard numerical MATLAB command and avoided all the spurious non-stabilising solutions:

[P,E,K] = care(A,B,Q,R) % standard way
SOLVING THE ALGEBRAIC RICCATI EQUATION USING KRONECKER PRODUCTS
Notwithstanding that the algebraic Riccati equation of Eqn. 9.60 is nonlinear, we can devise an iterative scheme to solve for P∞ in a manner similar to that for the Lyapunov equation using vectorisation and Kronecker products, described in section 2.9.5.

Collapsing Eqn. 9.60 to vectorised form gives

$$ \left( \mathbf{I}\otimes\mathbf{A}^T + \mathbf{A}^T\otimes\mathbf{I} - \mathbf{I}\otimes\left(\mathbf{P}_\infty\mathbf{B}\mathbf{R}^{-1}\mathbf{B}^T\right) \right)\text{vec}(\mathbf{P}_\infty) = -\text{vec}(\mathbf{Q}) \qquad (9.62) $$

which we now must solve iteratively due to the unknown P∞ in the coefficient matrix in Eqn. 9.62. Listing 9.11 uses this solution strategy, which works (most of the time) for modest sized problems, but is not numerically competitive with, nor as reliable as, the ARE scheme in MATLAB.
Listing 9.11: Solving the algebraic Riccati equation for P using Kronecker products and vectorisation given matrices A, B, Q and R.
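The listing body was not reproduced; a minimal sketch of such an iteration (the function name and implementation details are assumptions) is:

function P = are_kron(A,B,Q,R)
% (Sketch) Iteratively solve Eqn. 9.62 for the steady-state Riccati solution
n = size(A,1); I = eye(n);
P = zeros(n);                                    % starting guess
for i = 1:200                                    % fixed-point iteration
    M = kron(I,A') + kron(A',I) - kron(I,P*(B/R)*B');
    Pnew = reshape(-M\Q(:),n,n);                 % solve Eqn. 9.62 for vec(P)
    if norm(Pnew-P,'fro') < 1e-8, P = Pnew; return; end
    P = Pnew;
end
end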
In a similar manner to solving the Lyapunov equation using fsolve on page 85, we could also construct the algebraic Riccati matrix equation to be solved and submit that to the general nonlinear algebraic equation solver fsolve. This works in so far as it delivers a solution, but the solution may not be a stabilising one. In other words, while the matrix P does satisfy the original matrix equation, Eqn. 9.60, it may not be symmetric positive definite, and hence may not lead to a stable closed loop system, as shown in the example given on page 429. This is due to the fact that the matrix Riccati equation has multiple solutions, and general purpose numerical schemes such as fsolve will return the first solution they come across, as opposed to the stabilising one that we are interested in. For this reason it is better to use the dedicated Riccati solving routines than the general purpose fsolve.
9.4.4 THE DISCRETE LQR
In the discrete domain, the optimisation problem analogous to Eqn. 9.32 is to search for the N future optimal control moves, u₀, u₁, …, u_{N−1}, to minimise

$$ J = \frac{1}{2}\mathbf{x}_N^T\mathbf{S}\mathbf{x}_N + \frac{1}{2}\sum_{k=0}^{N-1}\left( \mathbf{x}_k^T\mathbf{Q}\mathbf{x}_k + \mathbf{u}_k^T\mathbf{R}\mathbf{u}_k \right) \qquad (9.63) $$

where the continuous plant dynamics, Eqn. 2.40, are now replaced by the discrete equivalent

$$ \mathbf{x}_{k+1} = \boldsymbol{\Phi}\mathbf{x}_k + \boldsymbol{\Delta}\mathbf{u}_k \qquad (9.64) $$

and the state feedback control law, Eqn. 9.50, is as before. Now the discrete matrix difference Riccati equation analogous to Eqn. 9.48 is

$$ \mathbf{P}_k = \mathbf{Q} + \boldsymbol{\Phi}^T\mathbf{P}_{k+1}\boldsymbol{\Phi} - \boldsymbol{\Phi}^T\mathbf{P}_{k+1}\boldsymbol{\Delta}\left( \mathbf{R} + \boldsymbol{\Delta}^T\mathbf{P}_{k+1}\boldsymbol{\Delta} \right)^{-1}\boldsymbol{\Delta}^T\mathbf{P}_{k+1}\boldsymbol{\Phi} \qquad (9.65) $$
with final boundary condition

$$ \mathbf{P}_N = \mathbf{S} \qquad (9.66) $$

and the time varying feedback gain is

$$ \mathbf{K}_k = \left( \mathbf{R} + \boldsymbol{\Delta}^T\mathbf{P}_{k+1}\boldsymbol{\Delta} \right)^{-1}\boldsymbol{\Delta}^T\mathbf{P}_{k+1}\boldsymbol{\Phi} \qquad (9.67) $$

An alternative formulation is

$$ \mathbf{K}_k = \mathbf{R}^{-1}\boldsymbol{\Delta}^T\boldsymbol{\Phi}^{-T}\left( \mathbf{P}_k - \mathbf{Q} \right) \qquad (9.68) $$

which uses the current P_k rather than the future version, but has the disadvantage that Φ must be invertible. Note that the optimal feedback gain K is now an (m × n) matrix rather than a row vector as in the SIMO pole placement case described in chapter 8.
It is possible to compute the value of the performance index J of Eqn. 9.63 in terms of the initial states and the initial P₀ matrix,

$$ J = \frac{1}{2}\mathbf{x}_0^T\mathbf{P}_0\mathbf{x}_0 \qquad (9.69) $$

although this precise value of J has probably little practical worth in most industrial control applications.
USING THE STEADY-STATE GAIN

Clearly the discrete optimal gain given by Eqn. 9.67 is again time varying, as it was in Eqn. 9.51 for the continuous case, but it can be computed offline since it is only dependent on the constant plant matrices Φ and Δ, the weighting matrices Q and R, and the time varying P_k.

However, once again as in the continuous case given in section 9.4.3, we can use the considerably simpler steady-state approximation to Eqn. 9.65 with little reduction in optimality. We solve for the steady-state covariance,

$$ \mathbf{P}_\infty = \mathbf{Q} + \boldsymbol{\Phi}^T\mathbf{P}_\infty\boldsymbol{\Phi} - \boldsymbol{\Phi}^T\mathbf{P}_\infty\boldsymbol{\Delta}\left( \mathbf{R} + \boldsymbol{\Delta}^T\mathbf{P}_\infty\boldsymbol{\Delta} \right)^{-1}\boldsymbol{\Delta}^T\mathbf{P}_\infty\boldsymbol{\Phi} \qquad (9.70) $$

by starting with P = 0 and stepping (rather than integrating) backwards in time until P_k approximately equals P_{k+1}. This is known as the discrete-time algebraic Riccati equation, or DARE.
It follows that the resultant steady-state (and slightly sub-optimal) gain given by Eqn. 9.67 is

$$ \mathbf{K}_\infty = \left( \mathbf{R} + \boldsymbol{\Delta}^T\mathbf{P}_\infty\boldsymbol{\Delta} \right)^{-1}\boldsymbol{\Delta}^T\mathbf{P}_\infty\boldsymbol{\Phi} $$

SOLVING THE DISCRETE ALGEBRAIC RICCATI EQUATION
A simple way to solve the discrete algebraic Riccati equation, implemented in Listing 9.12, is to iterate around Eqn. 9.70 starting from a zero initial condition.

Listing 9.12: Calculate the discrete optimal steady-state gain by iterating until exhaustion. Note it is preferable for numerical reasons to use lqr for this computation.
function [K,P] = dlqr_ss(Phi,Delta,Q,R)
% Steady-state discrete LQR by unwinding the Riccati difference eqn, Eqn. 9.65.
% Routine not as reliable as dlqr.m
P = zeros(size(Phi)); % Start with P = 0 at t = infinity
tol = 1e-5;           % default tolerance
P0 = P + tol*1e3;
while norm(P0-P,'fro') > tol
    P0 = P;
    P = Q + Phi'*P*Phi - Phi'*P*Delta*((Delta'*P*Delta+R)\Delta')*P*Phi;
end % while
K = (Delta'*P*Delta + R)\Delta'*P*Phi; % K = (R + Delta'*P*Delta)^{-1} Delta'*P*Phi
end % end function dlqr_ss
However, the backward iteration technique given in dlqr_ss in Listing 9.12 is both somewhat crude and computationally dangerous. For this reason it is advisable to use the functionally equivalent, but more robust, dlqr (or lqr) provided in the CONTROL TOOLBOX. We could also use dare if we only wanted to solve for the P matrix.
COMPARING THE CONTINUOUS AND DISCRETE LQR CONTROLLERS
Listing 9.13 compares a continuous LQR regulator for the system given on page 424 with a discrete LQR sampled at a relatively coarse rate of Ts = 0.25 s. Note that in both the discrete and continuous cases we design the optimal gain using the same lqr function. MATLAB can tell which algorithm to use from the supplied model characteristics.
Listing 9.13: Comparing the continuous and discrete LQR controllers.
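A minimal sketch of such a comparison, assuming the plant matrices A, B, C, D and the weights Q and R from the earlier example are already in the workspace (these details are reconstructions, not the original listing):

Ts = 0.25;                           % coarse sample time for the discrete design
Gc = ss(A,B,C,D);                    % continuous plant
Kc = lqr(Gc,Q,R);                    % continuous LQR gain
Kd = lqr(c2d(Gc,Ts),Q,R);            % discrete LQR gain; lqr detects the domain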
Even though the heavily discretised LQR controller has quite a different control input trajectory, as evident in the lower figure of Fig. 9.16, the difference between the discrete output (circles) and the continuous LQR controller (solid line) is noticeable in that the solid line does not pass exactly through the circles, but the difference is very marginal.
[Figure 9.16: Comparing the continuous LQR controller (solid) with the discrete LQR sampled at Ts = 0.25 s (circles): the states x1(t) and x2(t) (top) and the continuous and discrete inputs (bottom) versus time (s).]
Problem 9.2

1. Load in the Newell & Lee evaporator matrices (refer Appendix D.1) and design an LQR with equal weighting on the Q and R matrices by typing Q = eye(size(A)); R = eye(size(B,2)); and using the dlqr_ss function with a sample time of 1.0. This feedback matrix should be the same as the gain matrix given in [147, p55]³. Comment on any differences in solution you obtain.

2. Repeat the LQR design using the supplied dlqr function rather than the dlqr_ss function. Using dlqr however, the gain elements are now slightly different. Is this difference important, and how would you justify your answer?

3. Use either dlqr_ss or dlqr to generate feedback gains for the limiting cases when either (or both) Q or R are zero. What difficulties do you encounter, and how do you solve them?

³ Newell & Lee define the control law in a slightly different way, u = Kx, omitting the negative sign. This means that the K on p55 should be the negative of that evaluated by MATLAB.
THE PERFORMANCE OF THE LQR CONTROLLER
As mentioned previously, in most cases we are uninterested in the exact value of the objective function, since we are far more interested in the control law that minimises it. However it is interesting to numerically validate the actual performance against the analytical expectations. In the steady-state case, we are uninterested in the terminal part of the performance index of Eqn. 9.63, leaving just
$$\mathcal{J} = \frac{1}{2}\sum_{k=0}^{\infty}\left(x_k^T Q x_k + u_k^T R u_k\right) \qquad (9.71)$$

$$\phantom{\mathcal{J}} = \frac{1}{2}x_0^T P_\infty x_0 \qquad (9.72)$$
reinforcing the fact that while the controller gain is not dependent on the initial condition, the
value of the performance index is.
Suppose we had a plant

$$x_{k+1} = \begin{bmatrix} 0 & 0 \\ -0.5 & 1 \end{bmatrix} x_k + \begin{bmatrix} 1 \\ 0 \end{bmatrix} u_k, \quad \text{starting from } x_0 = \begin{bmatrix} 2 \\ 2 \end{bmatrix}$$

with controller design matrices Q = diag[1, 0.5] and R = 1, and we wanted to compute the performance index $\mathcal{J}$, comparing the numerical summation given by Eqn. 9.71 against the analytical expression given by Eqn. 9.72.
>> Phi = [0 0; -0.5, 1]; Del = [1 0]'; % Plant matrices Phi & Delta
>> Q = diag([1 0.5]); r = 1;           % controller design Q & R
>> [K,P,E] = dlqr(Phi,Del,Q,r) % Solve the algebraic Riccati equation for K & P
K =
    0.2207   -0.4414
P =
    1.5664   -1.1328
   -1.1328    2.7656
Now that we have the steady-state Riccati equation solution we can compute the performance index $\mathcal{J}$ directly.

>> x0 = [2,2]';       % Initial condition x0
>> J = 0.5*x0'*P*x0   % Performance J = 0.5*x0'*P*x0, Eqn. 9.72
J =
    4.1328
To obtain an independent assessment of this, we could simulate until exhaustion and numerically compute Eqn. 9.71. Since we have simulated the closed loop, we need to back-extract the input by using the control law and the states.

Now MATLAB has returned a vector of inputs and states, and we need to compute the summation given in Eqn. 9.71. One elegant way to calculate expressions such as $\sum x^T Q x$, given the states stacked in rows, is to use the MATLAB construction sum(sum(x.*x*Q)), which is valid here since Q is diagonal. This has the advantage that it can be expressed succinctly in one line of code and it avoids an explicit loop.
>> uRu = u'*u*r; % sum_k u_k' R u_k, but only works for scalar r
>> J_numerical = 0.5*(sum(sum(x.*x*Q'))+uRu) % Performance index, Eqn. 9.71
J_numerical =
    4.1328
In this case the numerical value of J is very close to the analytical value.
VARYING THE Q AND R WEIGHTING MATRICES
The Q and R weighting matrices in the objective function Eqn. 9.32 reflect the relative importance of the cost of state deviations with respect to manipulated variable deviations, and are essentially the tuning parameters for this control algorithm. If we increase the Q matrix, then the optimum controller tries hard to minimise state deviations and is prepared to demand very energetic control moves to do it. Likewise, if we increase R, the weighting on the deviations from u, then excessive manipulated variable moves are discouraged, so consequently the quality of the state control deteriorates. If we are only really interested in output deviations, $y^T y$, as opposed to state deviations, then Q is often set equal to $C^T C$. This follows simply from the output relation y = Cx.
Fig. 9.17 and the listing below demonstrate the design and implementation of a discrete LQR for an arbitrary stable, interacting plant. We will start from a non-zero initial condition and we subject the system to a small external disturbance at sample time k = 20. In this listing, the state deviations are considered more important than manipulated deviations.
% (reconstructed) an arbitrary stable, interacting plant Phi, Del, C, D is assumed
N = 40; t = [0:N-1]'; dist = 20; % (reconstructed) horizon & disturbance sample
Q = eye(3); R = eye(2);        % tuning constants
K = dlqr(Phi,Del,100*Q,R);     % Design LQR controller with Q = 100I, R = I
Gd = ss(Phi-Del*K,Del,C,D,1);  % closed loop system
U = zeros(size([t,t])); U(dist,:) = [1,0.2]; % small disturbance at sample k = 20
x0 = [-1,2,0]';                % given initial condition
[Y,T,X3] = lsim(Gd,U,t,x0);    % do closed loop simulation
U3 = X3*K';                    % extract control vector from x
stairs(t,[X3,U3]);
% Compute performance indices: sum x'Qx & sum u'Ru
perfx3 = sum(sum(X3.*(Q*X3')')), perfu3 = sum(sum(U3.*(R*U3')'))
Fig. 9.17 compares two responses. The left controlled response is when we consider state deviations more important than manipulated variations, i.e. the weighting factor for the states is 100 times that for the manipulated variables. In the right plot, the opposite scenario is simulated. Clearly in Fig. 9.17 the left controlled response returns a relatively low ISE (integral squared error) on the states, $\sum x^T Q x = 12.3$, but at the cost of a high ISE on the manipulated variables, $\sum u^T R u = 1.57$. Conversely the right controlled response shows the situation where, perhaps owing to manipulated variable constraints, we achieve a much lower ISE on the manipulated variables, but at the cost of doubling the state deviation penalty.
[Figure 9.17: Varying the state and manipulated weights of a discrete linear quadratic regulator where there is a disturbance introduced at sample time k = 20. When state deviations are costly (q is large compared to r), the states rapidly return to setpoint at the expense of a large manipulated movement. Left panel: Q = 100I, R = I, giving Σx'Qx = 12.3 and Σu'Ru = 1.571; right panel: Q = I, R = 100I, giving Σx'Qx = 27.4 and Σu'Ru = 0.059.]

As Fig. 9.17 shows, the design of an optimal LQR requires the control designer to weigh the relative costs of state and manipulated deviations. By changing the ratio of these weights, different controlled responses can be achieved.
As we did in section 8.2, we can modify the simulation given above to account for non-zero setpoints, $x^\star$, by using the control law $u = -K(x - x^\star)$ rather than Eqn. 9.50, so simulating the closed loop

$$x_{k+1} = \left(\Phi - \Delta K\right)x_k + \Delta K x_k^\star$$

thus enabling a response to setpoint changes.
9.4.5 A NUMERICAL VALIDATION OF THE LQR
If the LQR is indeed an optimal controller, its performance must be superior to that of any arbitrarily designed pole-placement controller, such as those tested in section 8.2.1. Listing 9.14 designs an optimal controller for the blackbox plant with equal state and input weights.
Listing 9.14: An LQR controller for the blackbox

>> G = c2d(ss(tf(0.98,conv([2 1],[1,1]))),0.1) % Blackbox plant with sample time Ts = 0.1
>> K = dlqr(G.a,G.b,eye(2),1) % equal state & input weights (this design line reconstructed)
K =
    4.7373   -3.5942
Now it would be interesting to compare this supposedly optimal controller with a pole-placement controller with poles arbitrarily placed at, say, $\mu = [0.5 \pm 0.5i]$.
Listing 9.15: Comparing an LQR controller from Listing 9.14 with a pole-placement controller
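A minimal sketch of such a comparison, assuming the discrete plant G and optimal gain K from Listing 9.14 (the initial condition here is illustrative, not the original listing's):

mu = [0.5+0.5i, 0.5-0.5i];             % arbitrary conjugate pole pair
Kp = place(G.a,G.b,mu);                % pole-placement gain
x0 = [-2; 2];                          % (assumed) initial condition
initial(ss(G.a-G.b*K, G.b,G.c,0,G.Ts),x0); hold on  % LQR closed loop
initial(ss(G.a-G.b*Kp,G.b,G.c,0,G.Ts),x0); hold off % pole-placement closed loop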
The trajectories of the states for the two schemes are given in Fig. 9.18. However what is surprising is that the pole-placement response looks superficially considerably better than the somewhat sluggish, though supposedly optimal, LQR response. How could this be? Surely the optimally designed controller should be better than any ad-hoc designed controller?
[Figure 9.18: Comparing the state trajectories (x1 versus x2) for both pole-placement (poles at 0.5 ± 0.5i) and dlqr. Note that the optimal LQR is considerably more sluggish to return to the setpoint than the pole-placement controller.]
What Fig. 9.18 has not shown us is the input trajectory. This is also important when comparing controlled responses, although typically perhaps not quite as important as the state variable trajectories.

We need to consider the input trajectory for several reasons. First, excessive deviations in input will almost certainly reach the input saturation constraints that are present in any real control system. Secondly, excessive movement in the input can cause unreasonable wear in the actuators. Finally, in the case where the input magnitude is related to cost, say fuel cost in an engine, then the larger the magnitude, the more expensive the operating cost of the corrective action will be.
Fig. 9.19 reproduces the same problem, but this time includes the input trajectory. In this instance,
the cost of the pole-placement controller is J = 1288 units, while for the LQR it is under half at
553.
[Figure 9.19: Comparing pole-placement and LQR, but this time including the input trajectories. Compared to Fig. 9.18, the input trajectory for the optimal LQR controller is considerably less active.]

Obviously different pole locations will give different performances, and it is up to us as control designers to choose the most appropriate pole location for the particular control application at hand. Using MATLAB, it is easy to do an exhaustive search over a reasonable domain to find the best pole location.
For real systems (that is, systems with real coefficients), the desired poles must appear in conjugate pairs. That means that we only need to consider one half of the unit circle. Furthermore, we should only consider the 1st quadrant, because in the 2nd quadrant the response will be overly oscillatory. Naturally it goes without saying that in the discrete domain we must be inside the unit circle. I have chosen a set of trial poles lying on a polar grid from 0° round to 90°, and from a magnitude of about 0.1 to 0.9. These (unequally spaced) trial pole locations are shown in Fig. 9.20, and a sketch of such a search follows the figure.
[Figure 9.20: The (unequally spaced) trial pole locations on a polar grid covering the first quadrant of the unit circle, with magnitudes from about 0.1 to 0.9.]
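A minimal sketch of such an exhaustive search, assuming the discrete plant matrices Phi and Del, the weights Q and R, and an initial condition x0 are given (the grid spacing and horizon here are illustrative):

mag = linspace(0.1,0.9,9); ang = linspace(0.05,pi/2,10); % polar grid of trial poles
J = zeros(length(mag),length(ang));
for i = 1:length(mag)
  for j = 1:length(ang)
    p = mag(i)*exp(1i*ang(j));
    K = place(Phi,Del,[p, conj(p)]); % gain for this conjugate pole pair
    x = x0; Jij = 0;
    for k = 1:50                     % accumulate the weighted cost
      u = -K*x; Jij = Jij + x'*Q*x + u'*R*u;
      x = Phi*x + Del*u;
    end % for k
    J(i,j) = Jij;
  end % for j
end % for i
contour(J)                           % cf. the contour plot of Fig. 9.21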
As you can see from the resultant contour plot in Fig. 9.21, the performance of our previously arbitrarily chosen pole location of $\mu = [0.5 \pm 0.5i]$ is reasonable, but not great, and clearly not optimal. From the plot, the best performance is located at a pole location of approximately $[0.75 \pm 0.15i]$, marked in Fig. 9.21.

The question is: can we find this location without doing an exhaustive search, or indeed any type of numerical optimisation search strategy?

[Figure 9.21: The performance of the closed loop for all the possible pole locations in the first quadrant. When $\mu = [0.5 \pm 0.5i]$, the weighted performance is approximately 1288, while the optimum performance is considerably better at around 550, at pole location $[0.78 \pm 0.17i]$.]
When we look at the closed loop poles of the LQR controller,
Listing 9.16: Computing the closed loop poles from the optimal LQR controller from Listing 9.14.
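A minimal sketch, assuming the discrete plant G and the optimal gain K from Listing 9.14:

eig(G.a - G.b*K)   % closed-loop poles of the optimal LQR system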
then we see that in fact the computed closed loop poles are intriguingly close to the optimum found numerically in Fig. 9.21. It turns out that the LQR routine did indeed find the optimum controller without going to the computational trouble of an exhaustive search. Just remember, of course, that this is not a proof of the optimality of the LQR; it is just a numerical validation.
(a) Q = R = I
(b) Q = 10³R = I
(c) 10³Q = R = I
(d) any other combination of your choice.
In each case, sketch the results of the input u, and the states x with time and calculate the
ISE for the states, manipulated variables and the weighted sum. What conclusions can you
deduce from the relative importance of Q and R?
2. Compare the performance of the full multivariable LQR controller with that of a controller
comprised of just the diagonal elements of the controller gain. What does this say about the
process interactions?
9.4.6 TRACKING LQR CONTROL
Since the LQR is essentially just an optimal version of pole-placement, for the tracking (or servo) control problem we can simply follow the approach taken in section 8.2.3, where we introduced auxiliary states defined as the integral of the outputs. The MATLAB routine lqi designs LQR controllers for servo applications.
As an example, let us design a continuous servo LQR controller for the SISO system

$$G(s) = \frac{20(s + 2)}{(s + 3)(s + 5)(s + 7)}$$
Clearly this system has m = 1 input, p = 1 output and n = 3 internal states. We desire that the output follows some reference command signal without offset. To achieve this we are going to design an optimal LQR controller with weighting matrices $Q_a$ (of the dimension of the augmented states, n + p) and R of dimension m, as shown in Listing 9.17.
Listing 9.17: Tracking LQR controller with integral states
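A minimal sketch of such a tracking design using lqi (the weight values follow the choice discussed below; the exact listing details are assumptions):

G = ss(tf(20*[1 2],conv(conv([1 3],[1 5]),[1 7]))); % plant with n = 3 states
Qa = diag([0 0 0 10]); R = 1;  % weight only the integral-of-output state
Ka = lqi(G,Qa,R)               % augmented gain with n + p = 4 elements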
This final augmented gain is comprised of two parts: the first n elements are associated with the original plant states, and the final p elements are associated with the outputs we wish to control with zero offset. An implementation in SIMULINK for this application showing the signal dimensions is given in Fig. 9.22.
[Figure 9.22: An LQR servo controller for a SISO plant with 3 states implemented in SIMULINK: the state-space plant (A, B, C blocks around an integrator) is augmented with an integrator driven by the output error, and the augmented state vector is fed back through the gain −Ka. A band-limited white noise source disturbs the loop. See also Fig. 9.23 for the results of this simulation.]
If we are really only interested in controlling the outputs to their setpoint, as opposed to driving all the states to zero, then we should indicate that desire by the appropriate design of the (augmented) controller weighting matrix $Q_a$. In this case, if I wish to have fast control at the expense of input movement, I would choose something like

$$Q_a = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 10 \end{bmatrix}, \qquad R = 1$$
remembering that $Q_a$ need only be positive semidefinite, but R must be strictly positive definite. Fig. 9.23 shows the result for this servo controlled example for three different tuning ratios. Note that as Q gets large relative to R, the output tends faster to the setpoint at the expense of a more vigorous input.
[Figure 9.23: The servo LQR response for three different tuning ratios (Q = 1, R = 1; Q = 5, R = 1; Q = 1, R = 20): output and setpoint (top) and input (bottom) versus time (s).]
In experimental, rather than simulated, applications, since we do not measure the states directly, we can use a plant model to produce state estimates based on the measured outputs using the simple state reconstructor from section 8.1.4, or we could use a state estimator, which is the subject of the following section, section 9.5.

In the experimental optimal regulator example shown in Fig. 9.24, at each sample time we identify the plant model parameters online using recursive least-squares, reconstruct the states, update the controller gain, and apply the state feedback control.
[Figure 9.24: An experimental application of LQR servo control of the black box with 10³q = r, with online identification (RLS, λ = 0.995) and state reconstruction. Plots: (a) output and setpoint; (b) input; (c) model parameters; (d) controller gains. The control started after 20 seconds.]
[Figure 9.25: A block diagram of state estimation: the plant, driven by the input u and corrupted by state and measurement noise, produces the actual outputs y; a software model g(x) and estimator produce predicted and then corrected states x̂, which are used for the state feedback u = −Kx̂.]
9.5 STATE ESTIMATION
Chapter 6 looked at various alternatives to identify the parameters of dynamic systems given possibly noisy input/output measured data. This section looks at ways to estimate not the parameters, (they are assumed known by now), but the states themselves, x, by only measuring the inputs u and the outputs y. This is a very powerful technique since, if this estimation works, we can use state feedback control schemes without actually going to the trouble of measuring the states, but instead using only their estimates.

In section 8.1.4 we built a simple state reconstructor which works well in the case of no measurement noise or model/plant mismatch. The problem is that in practical cases there will always be disturbing signals which will degrade our state estimates derived from such a straightforward algorithm. The question is: can we devise a better scheme that mitigates this degradation? The answer is given in the form of an optimal state estimator.
These estimators work by using a model to predict the output, ŷ, which can be compared to the measured output, y. Based on this difference we can adjust, if necessary, the predicted states. The estimation of states in this way is known, for historical reasons, as a Kalman filter. A block scheme of how the estimator works is given in Fig. 9.25. Here we have a control scheme that, while demanding state-feedback, actually uses only state estimates rather than the actual hard-to-obtain states.
There are many texts describing the development (eg: [52, 96]) and applications (eg: [39, 191]) of
the Kalman filter.
9.5.1 RANDOM PROCESSES
The only reason that our estimation scheme will not work perfectly, at least in theory, is due to the corrupting noise, and to a lesser degree, incorrect initial conditions. Now in practice, noise is the garbage we get superimposed when we sample a real process, but when we run pure simulations, we can approximate the noise using some sort of random number generator such as described previously in section 6.2. These are correctly termed pseudo random number generators, and most of them are of the very simple linear congruential type, and are therefore deterministic and not strictly random. Whole books have been written on this subject and many of them quote John von Neumann saying in 1951: "Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin."
MATLAB can generate two types of random numbers: namely uniform or rectangular, and normal or Gaussian. Using the uniform scheme, we can generate any other arbitrary distribution we may desire, including Gaussian. However in most practical cases, it is the Gaussian distributed random number we are more interested in. To visualise the difference, try:
x = rand(1000,1); hist(x)   % uniformly distributed
x = randn(1000,1); hist(x)  % normally distributed
Sometimes it is interesting to compare the actual probability density function or PDF obtained using the hist command with the theoretical PDF. To directly compare these, we need to scale the output of the histogram such that the integral of the PDF is 1. Fig. 9.26 compares the actual (scaled) histogram from 10⁴ samples of a random variable, $\mathcal{N}(\mu = 12, \sigma = 2)$, with the theoretical probability density function. The PDF of a normal variate is, [192, p60],

$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right) \qquad (9.73)$$
[Figure 9.26: The probability density function of a normally distributed random variable $x \sim \mathcal{N}(\mu = 12, \sigma = 2)$; theoretical (solid) and experimental (histogram). The sample statistics were mean = 12.0116 and $\sigma^2$ = 4.03369.]
Listing 9.18: Comparing the actual normally distributed random numbers with the theoretical probability density function.

xmean = 12; xsig = 2;              % desired mean & standard deviation
x = xsig*randn(10000,1)+xmean;     % sample (realisation) of r.v. x
xv = linspace(min(x),max(x),100)'; % approx range of x
pdf = @(xv) 1/sqrt(2*pi)/xsig*exp(-0.5*(((xv-xmean)/xsig).^2)); % expected PDF, Eqn. 9.73
area = trapz(xv,pdf(xv));          % Note integral of PDF should = 1.0
[numx,xn] = hist(x,30);            % Generate histogram [incidence #, x-value]
dx = mean(diff(xn));               % assume evenly spaced
bar(xn,numx/(sum(numx)*dx)); hold on % scale histogram so its area is 1
plot(xv,pdf(xv)); hold off         % compare with the theoretical PDF
MULTIVARIABLE RANDOM NUMBERS
For a single time series $x_k$ of n elements, the variance is calculated from the familiar formula

$$\text{Var}(x) = \sigma_x^2 = \frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2 \qquad (9.74)$$

where $\bar{x}$ is the mean of the vector of data. In the case of two time series of data $(x_k, y_k)$, the variances of x and y are calculated in the normal fashion using Eqn. 9.74, but the cross-variance or co-variance between x and y is calculated by

$$\text{Cov}(x, y) = \frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right) \qquad (9.75)$$
Note that while the variance must be positive or zero, the co-variance can be both positive and negative, and that Cov(x, y) = Cov(y, x). For this two-vector system, we could assemble all the variances and cross-variances in a matrix P known as the variance-covariance matrix

$$P = \begin{bmatrix} \sigma_x^2 & \text{Cov}(x,y) \\ \text{Cov}(y,x) & \sigma_y^2 \end{bmatrix}$$

which is of course symmetric with positive diagonals for time series of real numbers. In normal usage, the name of this matrix is shortened to just the covariance matrix.
For an n variable system, the co-variance matrix P will be a positive definite symmetric $n \times n$ matrix. MATLAB can calculate the standard deviation (σ) of a vector with the std command, and it can calculate the co-variance matrix of a multiple series using the cov command.
One characteristic of two independent random time series of data is that they should have no cross-correlation, or in other words, the co-variance between x and y should be approximately zero. For this reason, most people assume that the co-variance matrices are simply positive diagonal matrices, and that the off-diagonal or cross-covariance terms are all zero. Suppose we create three random time series vectors in MATLAB (x, y, z).
x=randn(1000,1) +3;
y=randn(1000,1)*6+1;
z=randn(1000,1)*2-2;
a=[x y z];
cov(a)
The co-variance matrix of the three time series is in my case, (yours will be slightly different),

$$\text{Cov}(x, y, z) = \begin{bmatrix} 1.02 & 0.11 & 0.10 \\ 0.11 & 37.74 & 0.30 \\ 0.10 & 0.30 & 4.22 \end{bmatrix} \approx \begin{bmatrix} 1 & 0 & 0 \\ 0 & 36 & 0 \\ 0 & 0 & 4 \end{bmatrix}$$

which shows the dominant diagonal terms, which are approximately the true variances (1, 36, 4), and the small, almost zero, off-diagonal terms.
COVARIANCE CONFIDENCE ELLIPSES
In many cases the random data does exhibit cross-correlation, and we will see this clearly if the x, y data pairs fall in a pattern described by a rotated ellipse. We can compute the 95% confidence limits of this ellipse using

$$\left(\mathbf{x} - \mu\right)^T \mathbf{P}^{-1}\left(\mathbf{x} - \mu\right) \le \frac{p(n-1)}{n-p}\,F_{\alpha,p,(n-p)}$$

where p is the number of parameters (usually 2), and n is the number of data points (usually > 100). $F_{\alpha,p,(n-p)}$ is the F-distribution at a confidence of α, in this case 95%. To compute values from the F-distribution we could either use the functions given in Listing 9.19,
Listing 9.19: Probability and inverse probability distributions for the F -distribution.
pf = @(x,v1,v2) betainc(v1*x/(v1*x+v2),v1/2,v2/2);     % cumulative F distribution
qf = @(P,v1,v2) fsolve(@(x) pf(x,v1,v2)-P,max(v1-1,1)); % inverse (quantile) F distribution

or use equivalent routines from the STIXBOX toolbox from Holtsberg mentioned on page 4, or alternatively use finv(P,v1,v2) from the Statistics toolbox.
The easiest way to construct this ellipse is to first plot a circle parametrically, scale the x and y
axes, rotate these axes, then finally translate the ellipse so that the center is at the mean of the
data. The axes of the ellipse are given by the eigenvectors of P, and the lengths are given by the
square root of the eigenvalues of P.
Example of confidence limit plotting
Here we will generate 1000 random (x, y) samples with deliberate correlation and we will compute the ellipse at a 95% confidence limit. Furthermore we can check this limit by using the
inpolygon routine to calculate the actual proportion of data points that lie inside the ellipse. We
expect around 95%.
1. First we generate some correlated random data,

$$x \sim \mathcal{N}(-0.3, 1) \quad \text{and} \quad y \sim \mathcal{N}(2, 1.1) + x$$

These data pairs are correlated because y is explicitly related to x.

Listing 9.20: Generate some correlated random data.
N = 1e3;                  % # of samples in Monte-Carlo
x = randn(N,1)-0.3;       % x ~ N(mean = -0.3, sigma = 1)
y = 1.1*randn(N,1)+x + 2; % Deliberately introduce some cross-covariance
plot(x,y,'.')             % See Fig. 9.27(a).
We can plot the (x, y) points, and we can see that the buckshot pattern falls in a rotated ellipse, indicating correlation as expected, as shown in Fig. 9.27(a).
2. The distribution of the data in Fig. 9.27(a) is more clearly shown in a density plot or a 3D histogram. Unfortunately MATLAB does not have a 3D histogram routine, but we can use the normal 2D histogram in slices along the y-axis, as demonstrated in the following code snippet.
Listing 9.21: Plot a 3D histogram of the random data from Listing 9.20.
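A minimal sketch of such a sliced histogram, using the x, y data from Listing 9.20 (the slice counts and the bar3 display are assumptions, not the original listing):

yedges = linspace(min(y),max(y),12);   % slice boundaries along the y-axis
xcentres = linspace(min(x),max(x),20); % histogram bins along the x-axis
H = zeros(length(yedges)-1,length(xcentres));
for i = 1:length(yedges)-1
    idx = (y >= yedges(i)) & (y < yedges(i+1)); % points in this y-slice
    H(i,:) = hist(x(idx),xcentres);    % 2D histogram of each slice
end % for
bar3(H)                                % display the slices as a 3D histogram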
This gives the density and 3D histogram plots given in Fig. 9.27(b) and (c).
Figure 9.27: Correlated noisy x, y data. (a) Individual (x, y) data. (b) A density plot. (c) A histogram in 2 independent variables.
3. To plot an ellipse, we first plot $x = a\cos(t)$, $y = b\sin(t)$ with $t \in [0, 2\pi]$, rotate, and finally shift the origin to μ. The directions of the semi-major and semi-minor axes of the ellipse are given by the eigenvectors of P, and the lengths are given by the square roots of the eigenvalues.

If V is the collection of eigenvectors $v_i$ of the $(p \times p)$ matrix P, then in the usual case of p = 2,

$$V \overset{\text{def}}{=} \begin{bmatrix} v_1 & v_2 \end{bmatrix}$$

and the transformation to the un-rotated ellipse is

$$u = V^T\left(x - \mu\right)$$

This is shown in Listing 9.22 which calls the F-distribution using the anonymous functions given previously in Listing 9.19.
Listing 9.22: Compute the uncertainty regions from the random data from Listing 9.21.
X = cov(x,y);                    % sampled covariance matrix
[V,D] = eig(X); e = diag(D);     % eigenvectors & eigenvalues of P
ksc = 2*(N-1)/(N-2)*qf(0.95,2,N-2); % (reconstructed) 95% scale factor via Listing 9.19
theta = linspace(0,2*pi,100)';   % (reconstructed) parametric angle
uxc = sqrt(ksc*e(1))*cos(theta); % un-rotated ellipse axes, scaled by the
uyc = sqrt(ksc*e(2))*sin(theta); %   square roots of the eigenvalues
% Now rotate back from u = V'(x - mu)
c = (inv(V')*[uxc, uyc]')';      % basically V^{-T} u
xc = c(:,1); yc = c(:,2);
xc = xc + mean(x); yc = yc + mean(y); % shift for mean
4. As a final check, we could use the inpolygon routine to see if the actual proportion of points inside the ellipse matches the expected 95%. The final data points and the 95% ellipse are given in Fig. 9.28.

Listing 9.23: Validating the uncertainty regions computed theoretically from Listing 9.22.
in = inpolygon(x,y,[xc;xc(1)],[yc;yc(1)]); % close the ellipse polygon
prop_inside = sum(in)/N;                   % proportion of points inside
plot(x(in),y(in),'r.','MarkerSize',1); hold on
plot(x(~in),y(~in),'b.','MarkerSize',1); plot(xc,yc,'k-');
hold off; axis('equal'); grid % See Fig. 9.28.

For my data, 94.6% of the 1000 data points lie inside the confidence limits, which is in good agreement with the expected 95%.
[Figure 9.28: The correlated (x, y) data points together with the theoretical 95% confidence ellipse.]
9.5.2 COMBINING DETERMINISTIC AND STOCHASTIC PROCESSES
If we have a perfect model of the process, no noise, and we know the initial conditions $x_0$ exactly, then a perfect estimator will be simply the open loop predictions of x, given the system model and present and past inputs u. However industrially there is always noise present, both in the measurements and in the state dynamics, so this open loop prediction scheme will soon fail even if the initial estimates and model are good. A more realistic description of an industrial process is

$$x_{k+1} = \Phi x_k + \Delta u_k + w_k, \qquad w_k \sim \mathcal{N}(0, Q) \qquad (9.76)$$

$$y_k = Cx_k + v_k, \qquad v_k \sim \mathcal{N}(0, R) \qquad (9.77)$$

where w, v are zero mean, white noise sequences with covariance matrices Q and R respectively. Note how in Fig. 9.29 the two unobservable noise signals, v and w, are wholly inside the dashed box, as are the true states, $x_k$, coloured grey in this figure to highlight that they are essentially hidden. We, as external observers, can only access those signals outside the dashed box, namely $u_k$ and $y_k$.
[Figure 9.29: The discrete plant $x_{k+1} = \Phi x_k + \Delta u_k + w_k$, $y_k = Cx_k + v_k$, showing the plant noise w and measurement noise v inside the dashed box; only $u_k$ and $y_k$ are accessible to the external observer.]
MEAN AND VARIANCE PROPAGATION
Predicting the behaviour of the state mean and variance given a stochastic dynamic system sounds quite complicated, but actually in the discrete domain it is simple. The mean of the system, Eqn. 9.76, is the expected value,

$$\bar{x}_{k+1} = \mathcal{E}\left\{\Phi x_k + \Delta u_k + w_k\right\} \qquad (9.78)$$

$$= \Phi\bar{x}_k + \Delta u_k + \bar{w}_k \qquad (9.79)$$

$$= \Phi\bar{x}_k + \Delta u_k \qquad (9.80)$$

which is simply the deterministic part of the full stochastic dynamic process, and which should also be intuitive.
The propagation of the state variance, P, is slightly more complex. The state variance at the next sample time is, by definition,

$$P_{k+1} \overset{\text{def}}{=} \mathcal{E}\left\{\left(x_{k+1} - \bar{x}_{k+1}\right)\left(x_{k+1} - \bar{x}_{k+1}\right)^T\right\} \qquad (9.81)$$

and if we substitute the new mean from Eqn. 9.80 and the process, Eqn. 9.76, we get

$$P_{k+1} = \mathcal{E}\left\{\left(\Phi(x_k - \bar{x}_k) + w_k\right)\left(\Phi(x_k - \bar{x}_k) + w_k\right)^T\right\} \qquad (9.82)$$

which, if we expand out and note that the variance of w is Q, gives the following matrix equation

$$P_{k+1} = \Phi P_k\Phi^T + \underbrace{\Phi P_{x_k w_k} + P_{w_k x_k}\Phi^T}_{\text{very small, } \approx\, 0} + Q \qquad (9.83)$$

$$= \Phi P_k\Phi^T + Q \qquad (9.84)$$

We can ignore the two inner cross-product terms in Eqn. 9.83 because of the uncorrelatedness of the states and the white noise w. In a similar manner to the above application of definition, expansion and eliminating cross terms, one can find expressions for the propagation of the mean and variance of the output variables, y. Further details are given in [121, pp62-64].
Eqn. 9.84 is called the discrete Algebraic Lyapunov difference Equation, or ALE. If the system is stable, then the steady-state covariance, $P_\infty$, will be given at sample $k = \infty$, or by solving the following matrix equation,

$$P_\infty = \Phi P_\infty\Phi^T + Q \qquad (9.85)$$

in exactly the same way we used in section 2.9.4 when establishing the stability of a linear discrete-time system, either using Kronecker products, or using the MATLAB dlyap command. Note that the final state covariance does not depend on the initial uncertainty, $P_0$.
The propagation of the mean and variance of a linear dynamic system, given stochastic disturbances, is fundamental to what will be developed in the following section which describes the
Kalman filter estimation scheme. The Kalman filter tries to reduce the variance over time which
in effect is to reduce the magnitude of the matrix P using a clever feedback scheme, thus obtaining
improved state estimates.
Problem 9.4 Verify by MATLAB Monte-Carlo simulation that the steady-state covariance for a stable linear system subjected to white noise input with known variance is given by the solution to the ALE, Eqn. 9.85. Compare your simulated actual covariance (perhaps using cov) with the predicted covariance (perhaps using dlyap). A sketch of one such verification is given below.
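A minimal sketch of this verification, with an assumed (stable) example plant:

Phi = [0.8 0.1; 0 0.7]; Q = 0.5*eye(2); % assumed stable plant & noise covariance
N = 1e5; x = zeros(2,N);
for k = 1:N-1
    x(:,k+1) = Phi*x(:,k) + sqrt(0.5)*randn(2,1); % w_k ~ N(0, Q)
end % for
P_sim = cov(x')      % simulated steady-state covariance
P_ale = dlyap(Phi,Q) % covariance predicted by the ALE, Eqn. 9.85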
9.5.3 THE KALMAN FILTER
The estimation scheme consists of two parts: a prediction and a correction. For the prediction step, we will predict, using the open loop model, the new state vector and the state covariance matrix. The correction part will adjust the state prediction based on the current measurement, and in so doing, hopefully reduce the estimated state covariance matrix.

We will denote the state vector as x and an estimate of this as x̂. The prediction of the state using just the model is called the a priori estimate and denoted x̂⁻. Once we have assimilated the measurement information, we have the a posteriori estimate, which is denoted simply as x̂.

At sample time k, we have the a priori covariance

$$P_k^- = \mathcal{E}\left\{\left(x_k - \hat{x}_k^-\right)\left(x_k - \hat{x}_k^-\right)^T\right\}$$
Now we want to improve the estimate of $x_k$ by using the current measurement. We will do this by the linear blending equation

$$\hat{x}_k = \hat{x}_k^- + L_k\left(y_k - C\hat{x}_k^-\right) \qquad (9.86)$$

where $\hat{x}_k$ is the corrected state estimate or a posteriori estimate, and $L_k$ the yet to be determined blending factor or Kalman gain. We use the symbol L for the Kalman gain instead of the commonly used K because we have already used that symbol for the similar feedback gain for the linear quadratic regulator.
The best value of L will be one that minimises the mean-square error of the a posteriori covariance,

$$P_k = \mathcal{E}\left\{\left(x_k - \hat{x}_k\right)\left(x_k - \hat{x}_k\right)^T\right\} \qquad (9.87)$$

Substituting Eqn. 9.86 and the measurement equation gives

$$P_k = \mathcal{E}\left\{\left[(x_k - \hat{x}_k^-) - L_k(Cx_k + v_k - C\hat{x}_k^-)\right]\left[(x_k - \hat{x}_k^-) - L_k(Cx_k + v_k - C\hat{x}_k^-)\right]^T\right\}$$

which expands to

$$P_k = \left(I - L_kC\right)P_k^-\left(I - L_kC\right)^T + L_kRL_k^T \qquad (9.88)$$

$$P_k = P_k^- \underbrace{- L_kCP_k^- - P_k^-C^TL_k^T}_{\text{linear in } L} + \underbrace{L_k\left(CP_k^-C^T + R_k\right)L_k^T}_{\text{quadratic in } L}$$
We wish to minimise the trace of $P_k$, since that is the sum of the diagonal elements, which we want to minimise. (Minimising the sum is the same as minimising each individually when positive.) Setting

$$\frac{d\,\text{trace}(P_k)}{d\,L_k} = -2P_k^-C^T + 2L_k\left(CP_k^-C^T + R\right) = 0$$

and solving for the optimum gain gives

$$L_k = P_k^-C^T\left(CP_k^-C^T + R\right)^{-1} \qquad (9.89)$$

Substituting this optimum gain back into Eqn. 9.88 gives the equivalent expressions for the a posteriori covariance

$$P_k = P_k^- - P_k^-C^T\left(CP_k^-C^T + R_k\right)^{-1}CP_k^-$$

$$P_k = P_k^- - L_k\left(CP_k^-C^T + R_k\right)L_k^T$$

$$P_k = \left(I - L_kC\right)P_k^- \qquad (9.90)$$

Note that:

1. All but Eqn. 9.88 are valid only for the optimum gain. (Eqn. 9.88 is valid for any gain.)

2. The last and simplest expression is the most used, but is susceptible to numerical ill-conditioning.
Prediction updates

The best a priori prediction of the state at the next sample time is

$$\hat{x}_{k+1}^- = \Phi\hat{x}_k$$

Note that we can ignore $w_k$ since it has mean zero and is uncorrelated with any previous w's. The a priori error is

$$e_{k+1}^- = x_{k+1} - \hat{x}_{k+1}^- = \left(\Phi x_k + w_k\right) - \Phi\hat{x}_k = \Phi e_k + w_k$$

from which the a priori covariance follows as

$$P_{k+1}^- = \Phi P_k\Phi^T + Q$$
THE PREDICTION STEP
The best prediction we can make for the new state vector is by using the deterministic part of Eqn. 9.76. Remember that Eqn. 9.76 is our true process $\mathcal{S}(\cdot)$, and the deterministic part excludes the noise term since we cannot predict it. We may also note that the expected value of the noise term is zero, so if we were to add anything, it would be zero. Thus the prediction of the new state estimate at sample time kT is

$$\hat{x}_{k|k-1} = \Phi\hat{x}_{k-1|k-1} + \Delta u_k \qquad (9.91)$$

which is precisely Eqn. 9.80. In addition to predicting the states one sample time in the future, we can also predict the statistics of those states. We can predict the co-variance matrix of the state predictions, $P_{k|k-1}$, the a priori co-variance, as

$$P_{k|k-1} = \Phi P_{k-1|k-1}\Phi^T + Q \qquad (9.92)$$
which again is Eqn. 9.84. Without any measurement information, the state estimates obtained from Eqn. 9.91 with variances from Eqn. 9.92 are the best one can achieve. If our original dynamic system is stable, (Φ is stable), then over time the estimate uncertainty P will converge to a steady-state value, and will not further deteriorate, despite the continual presence of process noise. This is termed statistical steady-state. Conversely however, if our process is unstable, then in most cases the uncertainty of the states will increase without bound, and P will not converge.

Stable system or unstable, we can improve these estimates if we take measurements. This introduces the correction step of the Kalman filter.
THE CORRECTION STEP
We first take a measurement from the real process to obtain $y_k$. We can compare this actual measurement with a predicted measurement to form an error vector $\epsilon_k$ known as the predicted measurement residual error

$$\epsilon_k = y_k - \hat{y}_k = y_k - C\hat{x}_{k|k-1} \qquad (9.93)$$

It would seem natural to correct our predicted state vector by a magnitude proportional to this error,

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + L_k\left(y_k - C\hat{x}_{k|k-1}\right) \qquad (9.94)$$

where $L_k$ is called the Kalman gain matrix at time t = kT and is similar to the gain matrix in the observers presented in chapter 8. The Kalman gain matrix $L_k$ and the co-variance of the state variables after the correction, $P_{k|k}$, the a posteriori co-variance, are

$$L_k = P_{k|k-1}C^T\left(CP_{k|k-1}C^T + R_k\right)^{-1} \qquad (9.95)$$

$$P_{k|k} = \left(I - L_kC\right)P_{k|k-1} \qquad (9.96)$$

and note the similarity of these equations to Eqns 9.65 and 9.67. This predictor/corrector scheme comprises the Kalman filter update.
INITIALISING THE ALGORITHM

The two initialisation formulae (Eqns 9.97-9.98) can be derived (assuming no input) from Eqn. 3.46 where the weighting matrix W = R. However [75, p1539] argues that using these equations for the initial estimates is probably not worth the extra effort.
Algorithm 9.2 Discrete Kalman filter algorithm

What we must know before we can implement an evolving Kalman filter:

- Model matrices Φ, Δ, C
- The covariance matrices Q, R of the noise sequences.

1. Compute the Kalman gain from the current a priori covariance,
$$L_k = P_k^-C^T\left(CP_k^-C^T + R\right)^{-1}$$

2. Correction of the a priori state estimate and its error covariance using the current measurement,
$$\hat{x}_k = \hat{x}_k^- + L_k\left(y_k - C\hat{x}_k^-\right)$$
$$P_k = \left(I - L_kC\right)P_k^-$$

3. Prediction of the new state estimate and its error covariance,
$$\hat{x}_{k+1}^- = \Phi\hat{x}_k + \Delta u_k$$
$$P_{k+1}^- = \Phi P_k\Phi^T + Q$$

4. Wait out the remainder of the sample time, increment the counter, k ← k + 1, and then go back to step 1.
Kalman filter example

Simulated results presented in Fig. 9.30 show the improvement of the estimated state when using a Kalman filter with a randomly generated 2-state discrete model, and plant and measurement noises with covariances

$$Q = 0.2I, \qquad R = I$$

Fig. 9.30 compares (a) an open loop model with (b) states computed by directly inverting the measurement matrix, and (c) states obtained using the Kalman filter. The best practical scheme is the one where the estimated states best approximate the true states.
[Figure 9.30: Comparing an open loop model prediction (left column, sse = 281) with direct measurement inversion (middle column) and the Kalman filter (right column), showing the estimated and true states x1 and x2. The Kalman filter has the lowest state error. See also the performance of the Kalman filter for a range of conditions in Fig. 9.34.]
To compare the performance quantitatively, the sum of the square errors, (sse), is computed for
each of the three alternatives. The Kalman filter, not surprisingly, delivers the best performance.
9.5.4 THE STEADY-STATE KALMAN FILTER
Analogous to the optimal regulator gain given in Eqn. 9.51, the Kalman gain matrix $L_k$ in Eqn. 9.94 is also time varying. However in practical instances this matrix is essentially constant for most of the duration of the control interval, so a constant matrix can be used with very little drop in performance. This constant matrix is called the steady-state form of the Kalman gain matrix and is equivalent to the steady-state LQR gain matrix used in section 9.4.
We can solve for the steady-state Kalman gain matrix by finding P such that
$$P = Q + \Phi P\Phi^T - \Phi PC^T\left(R + CPC^T\right)^{-1}CP\Phi^T \qquad (9.99)$$
is satisfied. To solve Eqn. 9.99 for P, we could use an exhaustive iteration strategy where we
iterate until P converges, or we could notice that this equation is in the form of a discrete algebraic
Riccati equation for which we can use the function dare. We should note that routine dare
solves a slightly different Riccati form than Eqn. 9.99, so follow the procedure given in Listing 9.24.
Listing 9.24: Solving the discrete time Riccati equation using exhaustive iteration around Eqn. 9.99
or alternatively using the dare routine.
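A minimal sketch of both approaches, assuming the plant matrices Phi, C and covariances Q, R are given (note the transposes needed to map Eqn. 9.99 onto dare's convention):

P = Q; P0 = P + 1;              % iterate around Eqn. 9.99 until converged
while norm(P0-P,'fro') > 1e-8
    P0 = P;
    P = Q + Phi*P*Phi' - Phi*P*C'*((R + C*P*C')\(C*P*Phi'));
end % while
P2 = dare(Phi',C',Q,R);         % or solve the dual problem with dare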
Once we have the steady-state matrix P, then the steady-state Kalman gain, cf. Eqn. 9.95, is

$$L = \Phi PC^T\left(CPC^T + R\right)^{-1} \qquad (9.100)$$

which is known as the prediction-form of the Kalman filter; see also Eqn. 9.103. Note also that I have dropped the time subscripts on P and L since now they are both time invariant.

The MATLAB routine dlqe (discrete linear quadratic estimator) uses an equivalent, but numerically robust, scheme to compute the steady-state discrete optimum Kalman gain matrix L and steady-state covariance P.
9.5.5 CURRENT AND PREDICTION FORMS OF THE KALMAN FILTER

In fact there are two common ways to implement a discrete Kalman filter, depending on what computational order is convenient in the embedded system. Ogata refers to them as the Prediction-type and the Current Estimation type in [150, p880-881], while the Control toolbox in MATLAB refers to them as delayed and current in the help file for kalman.
The main difference between the current estimate and the prediction type is that the current estimate uses the most current measurements $y_k$ to correct the state $\hat{x}_k$, whereas the prediction type uses the old measurements $y_{k-1}$ to correct the state $\hat{x}_k$.

For both cases, we assume our true plant is of the form we used in Eqns 9.76 and 9.77, repeated here with Γ = I,

$$x_{k+1} = \Phi x_k + \Delta u_k + w_k$$
$$y_k = Cx_k + v_k$$
Prediction-type

The prediction-type state estimate is given by

$$\hat{x}_{k+1} = \Phi\hat{x}_k + \Delta u_k + L\left(y_k - C\hat{x}_k\right) \qquad (9.101)$$

where the steady-state covariance satisfies

$$P = Q + \Phi P\Phi^T - \Phi PC^T\left(R + CPC^T\right)^{-1}CP\Phi^T \qquad (9.102)$$

and the gain is

$$L = \Phi PC^T\left(R + CPC^T\right)^{-1} \qquad (9.103)$$
A block diagram of the prediction-type Kalman filter implementation is given in Fig. 9.31.

[Figure 9.31: A block diagram of a steady-state prediction-type Kalman filter applied to a linear discrete plant with process noise $w_k$ and measurement noise $v_k$.]
Current estimation-type

The current-estimation state estimate is given by

$$\hat{x}_k = z_k + L\left(y_k - Cz_k\right) \qquad (9.104)$$

where

$$z_k = \Phi\hat{x}_{k-1} + \Delta u_{k-1} \qquad (9.105)$$

and where the covariance is the same as above in Eqn. 9.102,

$$P = Q + \Phi P\Phi^T - \Phi PC^T\left(R + CPC^T\right)^{-1}CP\Phi^T \qquad (9.106)$$

but the Kalman gain is now

$$L = PC^T\left(R + CPC^T\right)^{-1} \qquad (9.107)$$
[Figure 9.32: A block diagram of a steady-state current estimator-type Kalman filter applied to a linear discrete plant: the prediction $z_k = \Phi\hat{x}_{k-1} + \Delta u_{k-1}$ is corrected by the Kalman gain acting on the error $y_k - Cz_k$. Compare with the alternative prediction form in Fig. 9.31.]
A block diagram of the current estimator-type Kalman filter implementation is given in Fig. 9.32. Note that the prediction-type Kalman filter gain is obtained simply by multiplying the current estimate gain by Φ, as is evident by comparing equations 9.103 and 9.107. Also note that the MATLAB routine dlqe solves for the current estimator type.
Listing 9.25 computes the Kalman gain for a discrete plant with matrices

$$\Phi = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \quad C = \begin{bmatrix} 2 & 3 \end{bmatrix}, \quad \text{and with covariances } Q = 2I, \ R = 1$$

>> A = [1 2; 3 4]; C = [2 3];         % plant matrices (setup lines reconstructed)
>> Gam = eye(2); Q = 2*eye(2); R = 1; % noise covariances
>> P = dare(A',C',Q,R);               % a priori covariance (reconstructed)
>> L = A*P*C'/(C*P*C'+R)              % prediction-type gain, Eqn. 9.103
L =
   -1.0801
   -2.3377
>> Ke = dlqe(A,Gam,C,Q,R) % Note dlqe delivers the current estimator-type Kalman gain
Ke =
   -0.1776
   -0.4513
Both implementations of the steady-state Kalman filter are very similar to the estimator part of Fig. 8.7. The main difference between this optimal estimator and the estimator designed using pole-placement in sections 8.3 and 8.4 is that we now take explicit notice of the process and measurement noise signals in the design of the estimator.

Of course, knowing the states is not much use if you are not going to do anything with that information, so we could add a feedback controller to this strategy in the same way as we did for the pole-placement designed simultaneous state feedback controller and estimator in Fig. 8.7.
IN CLOSED FORM

If we combine the plant dynamics with the Kalman filter, we get the following system which is convenient for simulation.

For the predictor form, by combining the plant dynamics in Eqns 9.76 and 9.77 with the Kalman filter update Eqn. 9.101, we get

$$\begin{bmatrix} x_{k+1} \\ \hat{x}_{k+1} \end{bmatrix} = \begin{bmatrix} \Phi & 0 \\ LC & \Phi - LC \end{bmatrix}\begin{bmatrix} x_k \\ \hat{x}_k \end{bmatrix} + \begin{bmatrix} \Delta & \Gamma & 0 \\ \Delta & 0 & L \end{bmatrix}\begin{bmatrix} u_k \\ w_k \\ v_k \end{bmatrix} \qquad (9.108)$$

where x is the true but unknown state vector, x̂ are the corrected best estimates of x, u are the manipulated variable inputs, and w and v are the process and measurement noise respectively. We colour the true states grey, because they are unobservable and hence hard to see.
For the current estimator form, by combining the plant dynamics with Eqns 9.104 and 9.105, we get

$$\begin{bmatrix} x_{k+1} \\ \hat{x}_{k+1} \end{bmatrix} = \begin{bmatrix} \Phi & 0 \\ LC\Phi & \Phi - LC\Phi \end{bmatrix}\begin{bmatrix} x_k \\ \hat{x}_k \end{bmatrix} + \begin{bmatrix} \Delta & \Gamma & 0 \\ \Delta & LC\Gamma & L \end{bmatrix}\begin{bmatrix} u_k \\ w_k \\ v_k \end{bmatrix} \qquad (9.109)$$
For simplicity, let us suppose that we want equal weightings on the states and measurements. With this simplification we can write

$$Q = qI, \qquad R = rI$$

where q and r are positive scalars. As $q/r \to 0$, $L \to 0$ and the estimated dynamic equation becomes $\hat{x}_{k+1|k} = \Phi\hat{x}_{k|k-1} + \Delta u_k$, which is the pure open-loop model prediction. Conversely as $q/r \to \infty$, we are essentially discarding all process information, and only relying on measurement back-calculation.
Problem 9.5 Design a Kalman gain for the Newell & Lee evaporator. In this example, we are only measuring level and pressure, so the measurement matrix is now

$$C = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
The covariance matrices and optimum discrete gain are
9.5.6 AN APPLICATION OF THE KALMAN FILTER
Let us suppose the composition meter on the evaporator failed, but we are required to estimate it for a state feed-back controller. Since we have an industrial evaporator, we have noise superimposed on both the measurements and the process. We can simulate a scenario such as this. We will use Eqn. 9.109 as a starting point, but we will also add another equation, that of the uncorrected model predictions. Now we can compare 4 things:

1. The raw measurement, unavoidably corrupted by noise, and possibly incomplete;
$$y_k = Cx_k + v \qquad (9.110)$$

2. The uncorrected open-loop model predictions;
$$\hat{x}_{k+1} = \Phi\hat{x}_k + \Delta u_k \qquad (9.111)$$

3. The corrected state estimates using both the model and the measurements processed by the Kalman Filter;
$$\hat{x}_{k+1} = f\left(\hat{x}_k, y_k\right) \qquad (9.112)$$

4. and of course the true state variable
$$x_{k+1} = \Phi x_k + \Delta u_k + w \qquad (9.113)$$
The simulation given in Listing 9.26 uses a randomly generated discrete plant model. The actual manipulated variable trajectories are unimportant for this example, and the process noise variance is one fifth of the measurement noise variance. To further demonstrate the advantage of the estimator, I will assume that we have an error in the initial condition of the states.
Listing 9.26: State estimation of a randomly generated discrete model using a Kalman filter.
n = 3; nc = 2; m = 2;            % # of states, outputs, inputs
[Phi,Del,C,D] = drmodel(n,nc,m); % generate random discrete model
Gam = eye(n); D = zeros(size(D)); % process noise model & ignore any direct pass-through
v = 0.5; w = 0.1; Q = w*eye(n); R = v*eye(nc); % Noise covariances, Q = wI, R = vI
N = 150; t = [0:N-1]'; U = randn(N,m); % (reconstructed) horizon & arbitrary inputs
W = sqrt(w)*randn(N,n); V = sqrt(v)*randn(N,nc); % State & measurement noise sequences
L = dlqe(Phi,Gam,C,Q,R);         % (reconstructed) steady-state Kalman gain
LC = L*C;                        % (reconstructed) augmented plant & estimator,
Phia = [Phi, zeros(n); LC*Phi, Phi-LC*Phi]; %   as in Eqn. 9.109
Dela = [Del, Gam, zeros(n,nc); Del, LC*Gam, L];
Ca = eye(2*n); Da = zeros(2*n,m+n+nc);
x0_true = zeros(n,1); x0_est = ones(n,1); % (reconstructed) erroneous initial estimate
xp = dlsim(Phi,Del,eye(n),zeros(n,m),U,x0_est); % open-loop (uncorrected) prediction
[y,x] = dlsim(Phia,Dela,Ca,Da,[U,W,V],[x0_true;x0_est]);
x_true = x(:,1:n);               % true states, x
x_est = x(:,n+1:2*n);            % Kalman corrected estimates
y_meas = (C*x_true')'+V;         % noisy measurements
y_est = (C*x_est')';
subplot(3,1,1); plot(t,x_true(:,1),t,x_est(:,1),'--',t,xp(:,1));
subplot(3,1,2); plot(t,x_true(:,2),t,x_est(:,2),'--',t,xp(:,2));
subplot(3,1,3); plot(t,x_true(:,3),t,x_est(:,3),'--',t,xp(:,3));
The most obvious scheme to reconstruct an estimate of the true states is to use only the measurements y. However, using this scheme we have two difficulties: first, we have noise superimposed on the measurements, and second, we cannot reconstruct all the states since C is not invertible; it is not even square. Without the estimator, we can only use the incomplete raw output measurements, rather than either the corrected states or filtered measurements. In addition, with the estimator the poor state estimate at t = 0 is soon compensated for.
[Figure 9.33: The three states of the randomly generated plant versus sample time: the true states, the Kalman-corrected estimates (KF corr), and the uncorrected open-loop predictions (No corr).]
We can improve on the measurements by using the model information and the Kalman filter. Fig. 9.33 clearly shows that the Kalman estimates are better than those from simply an open-loop model estimator.
9.5.7 THE ROLE OF THE Q AND R MATRICES
The Kalman filter is an optimum model based filter, and is closely related to the optimum LQR controller described in section 9.4. If properly designed, the Kalman filter should give better results than a filter using only measurement or model information. We can rank different design alternatives by comparing the integral of the squared error between the true states x and our predictions x̂. The only design freedom for the Kalman filter is the selection of the Q and R matrices. The theory leads us to believe that if we select these matrices to exactly match the actual noise conditions, then we will minimise the error between our prediction and the true states.
Normally the covariance matrices Q and R are positive diagonal matrices, although strictly they need only be positive definite. The elements on the diagonal must be positive since they are variances, which are squared terms. If Q is large relative to R, then the model uncertainty is high and the estimated values will closely follow the measured values. Conversely, if R is large relative to Q, then the measurement uncertainty is large, and the estimate will closely follow the model.
Suppose we have a very simple discrete dynamic system

$$x_{k+1} = 0.5x_k + w_k \qquad (9.114)$$
$$y_k = x_k + v_k \qquad (9.115)$$

If we were to design a Kalman filter with model and measurement covariances of Q = 1.5 and R = 0.5, we could use dlqe as shown in Listing 9.27.
Listing 9.27: Computing the Kalman gain using dlqe.
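A minimal sketch of such a computation for the scalar plant of Eqns 9.114-9.115:

Phi = 0.5; Gam = 1; C = 1; % scalar plant, Eqns 9.114-9.115
Q = 1.5; R = 0.5;          % model & measurement noise covariances
L = dlqe(Phi,Gam,C,Q,R)    % current estimator-type Kalman gain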
Now if we were to lose all confidence in our measurements, effectively that means Q = 0 (or R becomes very large), and we essentially discard any update. In this case the Kalman gain should be L = 0.

>> L = dlqe(Phi,Gam,C,0,R)
L =
     0

Alternatively, if we lose all confidence in our model, then R becomes small, and the Kalman gain in this case tends to 1 for the current estimator. (Note that R ≠ 0 because R must be strictly positive definite.) Since the correction is

$$\hat{x}_k = \hat{x}_k^- + L\left(y_k - \hat{x}_k^-\right)$$

then obviously if L = 1, we have $\hat{x}_k = y_k$, and the estimate simply follows the measurement.
OPTIMALITY OF THE KALMAN FILTER
The optimality of the Kalman filter is best demonstrated using a Monte-Carlo simulation. The script given in Listing 9.28 consists of two parts: first, the inner loop simulates the estimation of a simple third order plant and calculates the ISE performance,

$$\mathcal{J} = \int_0^{t_f}\left(\hat{x} - x\right)^T\left(\hat{x} - x\right)\,dt \qquad (9.116)$$

using a steady-state Kalman filter, while the outer loop varies the Kalman design q/r ratios. The inner loop is repeated many times (since slightly different results are possible owing to the different noise signals) in order to obtain a statistically valid result.

In Listing 9.28, the true process has q/r = 20. We will then compare the performance of a Kalman filter using different q/r values as the design parameters. If all goes well, we should detect a minimum (optimum) near the known true value of q/r = 20.
Listing 9.28: Demonstrating the optimality of the Kalman filter.
% (reconstructed preamble): 3-state plant with true noise ratio q/r = 20
n = 3; p = 1; m = 1; Ts = 1; nsum = 1000; N = 100; t = Ts*[0:N-1]';
[Phi,Del,C,D] = drmodel(n,p,m); Gam = eye(n);
wes = 2; ves = 0.1;            % true process & measurement noise variances
U = zeros(N,m); x0 = zeros(n,1); x0_est = ones(n,1);
qr_v = logspace(-2,4,20); Jstats = zeros(length(qr_v),4);
for i = 1:length(qr_v)         % (reconstructed) loop over trial design ratios
  L = dlqe(Phi,Gam,C,qr_v(i)*eye(n),eye(p)); % design KF with trial q/r ratio
  LC = L*C;                    % Formulate the extended system
  Phib = [Phi, zeros(n); LC, Phi-LC]; % Current predictor
  Delb = [Del,Gam,zeros(size(L)); Del+L*D,zeros(size(Gam)),L];
  Ga = ss(Phib,Delb,eye(size(Phib)),0,Ts);

  jx = [];
  for j = 1:nsum               % Repeat this simulation many times
    W = sqrt(wes)*randn(N,n); V = sqrt(ves)*randn(N,p); % Add noise
    x = lsim(Ga,[U,W,V],t,[x0;x0_est]);
    dx = x(:,[1:3])-x(:,[4:6]); % x - xhat
    jx(j) = sum(sum(dx.*dx));   % ISE performance, J, Eqn. 9.116
  end % j
  Jstats(i,:) = [mean(jx),std(jx),min(jx),max(jx)]; % collect statistics
end % i

Jm = Jstats(:,1);
h = loglog(qr_v,Jm,'-',qr_v,Jstats(:,[3 4]),':',...
      qr_v,Jm*[1 1]+Jstats(:,2)*[1 -1],'--');
set(line(qr_v,Jm),'MarkerSize',18,'LineStyle','.');
y = axis; y = y(3:4); h = line(wes/ves*[1,1],y);
set(h,'LineStyle','-.','color','k');
xlabel(sprintf('Q/R ratio (# of loops=%3g)',nsum)); ylabel('error')
The plot you obtain should be similar to Fig. 9.34, which was produced using 10³ inner loops. This curve shows that the actual minimum obtained by simulation agrees quite well with the theoretically expected optimum at q/r = 20. Actually, the curvature of this function near the optimum is quite flat, which means that one can design the q/r ratio an order of magnitude in error, and it really does not degrade the system performance much. This is fortunate since it is almost impossible to choose these design values accurately for any practical industrial process.
[Figure 9.34: The Kalman filter estimation error as a function of the design Q/R ratio (1000 inner loops), showing the mean, spread, and maximum of the ISE; the minimum error occurs near the true ratio q/r = 20.]
TUNING THE Q AND R MATRICES
In any actual implementation of the Kalman filter, one does not usually know a priori appropriate values for the Q and R covariance matrices, and therefore these can be effectively viewed as tuning parameters. Tuning the KF is a cumbersome trial-and-error process that normally requires expensive state verification. Some techniques make use of the fact that the innovations (or residuals, see Eqn. 9.93) of an optimal KF should be zero-mean white noise. One can easily store these innovations, and test this assumption once a sufficient number have been collected. [27] describes one technique, due to Mehra, that updates the Kalman gain and the R matrix if the whiteness condition of the innovations is not satisfied. There are many variations of adaptive KF tuning based on this theme, but for a variety of reasons they have found little acceptance in practice.
9.5.8 EXTENSIONS TO THE BASIC KALMAN FILTER ALGORITHM
There are many extensions to the basic Kalman filter algorithm, and the next few sections briefly describe some of the more important ones. (A full treatment is given in [96] and in the later chapters of [52], to name just a few.) The next few sections describe modifications to the update equations that are more robust to numerical round-off, using the filter for nonlinear problems, tricks to save computing time, and estimating parameters along with states. These alternatives are generally preferred to the standard algorithms.
DESIGN EQUATIONS
The Kalman filter design equations presented in section 9.5, whilst correct, do have a tendency to fail on actual industrial examples. This is due, in part, to:

- The covariance matrices becoming indefinite, (i.e., losing their positive definiteness), owing to the limited precision of the machine, and
- the poor numerical properties of the covariance update equation.

This section describes an alternative scheme to design a Kalman filter that tries to avoid these two problems. Some solution strategies are very similar to the covariance concerns expressed in section 6.8.3 when considering parameter identification, and if we inadvertently lose positive definiteness in the middle of our control calculation, we can always reset to the nearest symmetric positive definite matrix via nearestSPD as described in section 6.8.3. See [101] or [6, p147] for further details.
Recall that the measurement update is

$$\hat{x}_k = \hat{x}_k^- + K\left(y_k - C\hat{x}_k^-\right) \qquad (9.117)$$

$$P_k = \left(I - K_kC\right)M_k \qquad (9.118)$$
The troublesome equation is the measurement covariance update, Eqn. 9.118. There are many variations attempting to improve the numerics of the recursive update, and some of the more simple rearrangements are given in [121, p69-70]. Two of the most popular are the Upper-Diagonal (U-D) factorisation due to Bierman, and Potter's square root algorithm. The key idea for the U-D algorithm is that the covariance matrix is factored into

$$P = UDU^T$$

where U and D are upper triangular and diagonal matrices respectively, and these are the matrices that are propagated rather than P. The algorithm for the U-D update is slightly complicated and difficult to vectorise in a form elegant for MATLAB, but many programs exist and one version is given in [203, p167-168].
THE CHOLESKY MATRIX SQUARE ROOT
Any positive semi-definite symmetric matrix can be factored into the product of a lower triangular matrix and its transpose. This is analogous to a square root in the scalar sense, and is called the Cholesky matrix square root. If the Cholesky square root of matrix P is S, then by definition

$$P \overset{\text{def}}{=} SS^T \qquad (9.119)$$

For example,

$$\underbrace{\begin{bmatrix} 1 & 2 & 3 \\ 2 & 8 & 2 \\ 3 & 2 & 14 \end{bmatrix}}_{P} = \underbrace{\begin{bmatrix} 1 & 0 & 0 \\ 2 & 2 & 0 \\ 3 & -2 & 1 \end{bmatrix}}_{S}\begin{bmatrix} 1 & 2 & 3 \\ 0 & 2 & -2 \\ 0 & 0 & 1 \end{bmatrix} \qquad (9.120)$$
MATLAB can extract the Cholesky square root from a matrix using the chol command, although this function actually returns the transpose $S^T$, as demonstrated below.
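A quick demonstration using the example of Eqn. 9.120:

P = [1 2 3; 2 8 2; 3 2 14];
S = chol(P)'  % chol returns S', so transpose to recover the lower-triangular S
S*S'          % reproduces P, confirming Eqn. 9.119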
UPDATING THE COVARIANCES

The basic idea of the square root filter is to replace the covariance matrices with their square roots. The covariance matrix can then never be indefinite, since it is calculated as the product of its two square roots. Using the square roots also has the advantage that we can compute with double the accuracy. We can define the following square roots;

$$M \overset{\text{def}}{=} SS^T, \quad P \overset{\text{def}}{=} S_+S_+^T, \quad Q \overset{\text{def}}{=} UU^T, \quad R \overset{\text{def}}{=} VV^T \qquad (9.121)$$

The measurement update is then computed via

$$F = S^TC^T \qquad (9.122)$$

$$G = \left(R + F^TF\right)^{1/2} \qquad (9.123)$$

$$S_+ = S - SFG^{-T}\left(G + V\right)^{-1}F^T \qquad (9.124)$$

$$\hat{x} = \hat{x}^- + SFG^{-T}G^{-1}\left(y - C\hat{x}^-\right) \qquad (9.125)$$

In Eqn. 9.123, one finds the square root of a matrix. MATLAB can do this with the sqrtm (square root matrix) command. The expression $A^{-T}$ is a shorthand way of writing $(A^{-1})^T$, which is also equivalent to $(A^T)^{-1}$. We can verify Eqn. 9.124 by multiplying it with its transpose and comparing the result with Eqn. 9.118. This algorithm is often called Potter's algorithm.
Listing 9.29 calculates the steady-state Kalman gain using the square root method of Potter. The functionality and arguments are the same as the built-in MATLAB function dlqe.

Listing 9.29: Potter's algorithm.
function [K,P] = potter(Phi,Gam,C,Q,R)
% Steady-state Kalman gain via Potter's square root algorithm.
% Same functionality & arguments as dlqe.
% (Initialisation & time-update lines reconstructed.)
n = size(Phi,1);
V = sqrtm(R);                       % square root of R, Eqn. 9.121
P = Q; S = sqrtm(P);                % initial covariance square root
P0 = P + 1e8;
while norm(P0-P,'fro') > 1e-8
    P0 = P;
    F = S'*C'; G = sqrtm(R+F'*F);   % Eqns 9.122 & 9.123
    Sp = S - (S*F/(G'))*((G+V)\F'); % measurement update, Eqn. 9.124
    P = Sp*Sp'; % Will always be positive definite
    K = S*F/(G')/G;
    S = chol(Phi*P*Phi' + Gam*Q*Gam')'; % time update to the a priori square root
end % while
return % end function potter.m
Modify the Potter algorithm to make it more efficient. (For example, the calculation of K is not strictly necessary within the loop.)
SEQUENTIAL PROCESSING

The Kalman filter was developed and first implemented in the 1960s when computational power was considered expensive. For this reason, there has been much optimisation done on the algorithm. One of these improvements was to sequentially process the measurements one at a time rather than as a vector. [6, p142] gives two reasons for this advantage. First, if the measurement noise covariance matrix R is diagonal, there is a reduction in processing time, and second, if for some reason there is not enough time between the samples to process the entire data, (owing to an interrupt for example), then there is only a partial loss of update information to the optimised states, rather than a full loss of improvement.

In the sequential processing formulations of the Kalman filter equations, the Kalman gain matrix L is in fact a single column vector, and the measurement matrix C is a row vector, as sketched below.
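A minimal sketch of one such sequential update, assuming a diagonal R and an a priori estimate xm with covariance Pm (these variable names are illustrative):

x = xm; P = Pm;
for i = 1:length(y)
    c = C(i,:);                 % i-th measurement row
    L = P*c'/(c*P*c' + R(i,i)); % scalar innovation variance, so no matrix inverse
    x = x + L*(y(i) - c*x);     % assimilate measurement i
    P = (eye(size(P)) - L*c)*P; % covariance update, Eqn. 9.118
end % for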
DIAGONALISING THE R MATRIX
If the measurement noises are correlated, then R will not be diagonal. In this rare case, we can
transform the system to where the measurements are uncorrelated, and work with these. The
transformation uses a Cholesky factorisation. R is factored into

$$R = S S^T \tag{9.126}$$

and the measurements are transformed as

$$\bar{y} = S^{-1} y \tag{9.127}$$
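A sketch of this decorrelation (the measurement matrix must be transformed consistently):

S = chol(R)';   % R = S*S', Eqn. 9.126
ybar = S\y;     % transformed, now uncorrelated, measurements, Eqn. 9.127
Cbar = S\C;     % transformed measurement matrix to match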
9.5.9 THE EXTENDED KALMAN FILTER
The Extended Kalman Filter or EKF is used when the process or measurement models are nonlinear. Once we have a nonlinear process, all the guarantees that the Kalman filter will converge
and that the filter will deliver the optimal state estimates no longer apply. Notwithstanding, it is
still worth a try, since in most cases even sub-optimal estimates are better than none. Almost all
the published accounts of using a Kalman filter in the chemical processing industries are actually
an extended Kalman filter applied to a nonlinear model.
The EKF is a logical extension from the normal linear Kalman filter. The estimation scheme is
still divided up into a prediction step and an estimation step. However for the model and measurement prediction steps, the full nonlinear relations are used. Integrating the now nonlinear
process differential equation may require a simple numerical integrator such as the Runge-Kutta
algorithm. Thus the linear model update and measurement equations are now replaced by
$$\hat{x}_{k|k-1} = f\left( \hat{x}_{k-1|k-1}, u_k, d, \theta, t \right) \tag{9.128}$$
$$\hat{y} = g\left( \hat{x} \right) \tag{9.129}$$
Given the nonlinear model and measurement relations, updating the covariance estimates is no
longer a simple, or even tractable task. Here we assume that over our small prediction step of
interest, the system is sufficiently linear that we can approximate the change in error statistics,
which are only approximate anyway, with a linear model. Now all the other equations of the Kalman filter algorithm are the same, except that the discrete state transition matrix, Φ, is obtained from a linearised version of the nonlinear model evaluated at the best current estimate, x̂_{k|k-1}, following the techniques described in §3.5. The creation of the linear approximation to the nonlinear model can be done at every time step, or less frequently, depending on the application and the availability of cheap computing. Linearising the measurement equation, Eqn. 9.129, can be done in a similar manner.
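Where an analytical Jacobian is inconvenient, the linearisation can be approximated numerically. A hedged sketch using forward differences (f, xhat, u and dt are illustrative names, not from the original text):

n = length(xhat);
A = zeros(n); f0 = f(xhat,u);          % nonlinear model at the estimate
for i = 1:n
  dx = zeros(n,1); dx(i) = 1e-6;
  A(:,i) = (f(xhat+dx,u) - f0)/1e-6;   % i-th column of the Jacobian
end % for
Phi = expm(A*dt);                      % discrete state transition matrix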
ESTIMATING PARAMETERS USING THE KALMAN FILTER
Chapter 6 discussed estimating model parameters online which could then be subsequently used in an adaptive control scheme. In addition, there are many textbooks explaining how to regress parameters offline from experimental data. While there is some debate about the nomenclature, parameters are generally considered to be constant, or at least to change much more slowly than the states. In a heat exchanger for example, the heat exchange surface area is constant, thus a parameter, while the flow may vary depending on demand, thus a state. Some variables lie somewhere in between; they are essentially constant, but may vary over a long time period. The heat transfer coefficient may be one such variable. It will gradually decrease as the exchanger fouls (owing to rust for example), but this occurs over time spans much larger than the expected state changes.

In many engineering applications, however, there is some, (although possibly not complete) knowledge about the plant, and this partial knowledge should be exploited. This was not the case for the identification given in chapter 6, where only an arbitrary ARMA model was identified. The Kalman filter can be extended to estimate both states, as given in §9.5, and also parameters. Of course once parameters are to be estimated, the extended system generally becomes nonlinear, (or if already nonlinear, now more nonlinear) so the Extended Kalman Filter or EKF must be used. In this framework, the parameters to be estimated are treated just like states. I typically simply append the parameter vector θ to the bottom of the state vector x. Normally we would like to assume that the parameter is roughly constant, so it is reasonable to model the parameter variation as a random walk model,
$$\frac{d\theta}{dt} = 0 + v \tag{9.130}$$
where v is a random vector of zero mean and known variance. A plot comparing normally distributed white noise with a random walk is shown in Fig. 9.35 and can be reproduced using:
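A minimal sketch consistent with the figure:

N = 500;
v = randn(N,1);     % normally distributed white noise, zero mean
theta = cumsum(v);  % random walk: integrated white noise, Eqn. 9.130
plot([v theta]); xlabel('Sample #')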
Eqn. 9.130 is the simplest dynamic equation for a parameter. In some specialised examples, it may be worthwhile to have other simple dynamic models for the parameters, such as random
ramps to model instrument drift for example, but in general this extension is not justified. Of
course, with the extra parameters, the dimension of the estimation problem has increased, and
the observability criteria must still hold.
[Figure 9.35: A random walk compared with normally distributed white noise over 500 samples.]
NON-GAUSSIAN NOISE
The assumptions that the measurement and process noise sequences are Gaussian, uncorrelated, and zero mean are usually not satisfied in practice. Coupled with the fact that the true system is probably nonlinear, it is surprising that an optimal linear estimator, such as the Kalman filter, or even the EKF, works at all.

Most textbooks on regression analysis approximate the residual between actual experimental data and a model as made up of two components; a systematic error and a random error. The Kalman filter is based on the assumption that the only error is due to a random component. This, in general, is a very strong assumption, and one not satisfied in practice. However coloured noise is treated theoretically in [52, Chpt 5].
One easy test to make is how the Kalman filter performs when the noise is correlated. We can
generate correlated or coloured noise by passing uncorrelated or white noise through a low pass
filter.
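For example, a sketch of generating such coloured noise with a simple first-order low-pass filter:

w = randn(1000,1);           % uncorrelated (white) noise
a = 0.9;                     % filter pole: the closer to 1, the more colour
e = filter(1-a, [1 -a], w);  % correlated (coloured) noise sequence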
Problem 9.6 Chapter 7 discussed auto-tuning by relay feedback. One of the key steps in constructing an automated tuning procedure is to identify the period and amplitude of the oscillation in the measurement. Often, however, this is difficult owing to the noise on the measurement, but we can use an extended Kalman filter to estimate these quantities in the presence of noise. Further details and alternative schemes are discussed in [12]. A signal that is the sum of a sinusoid and a constant bias can be represented by the following equations:
$$\dot{x}_1 = 2\pi \frac{x_2}{x_3}, \quad \dot{x}_2 = -2\pi \frac{x_1}{x_3}, \quad \dot{x}_3 = 0, \quad \dot{x}_4 = 0$$
$$y = x_1 + x_4$$

and the period and amplitude of the oscillation given by

$$P = x_3, \quad \alpha = \sqrt{x_1^2 + x_2^2}$$
The idea is to estimate the states of the process given the output y, and then to calculate the
period and amplitude online.
9.5.10 COMBINING ESTIMATION AND STATE FEEDBACK
One of the key aims of the Kalman filter is to provide estimates of unmeasured states that can then
be used for feedback. When we combine state estimation with state feedback we have a Linear
Quadratic Gaussian or LQG controller.
Fig. 9.36 shows how we could implement such a combined estimator and controller in SIMULINK, which essentially follows the scheme given in Fig. 8.7. The actual plant is drawn in grey and is disturbed by state and measurement noise (green). The estimator (Kalman) gain is in dark red, and the state feedback controller is in blue.
[Figure 9.36: A combined Kalman estimator and state feedback controller (LQG) implemented in SIMULINK. The estimated states x̂ from the estimator (gain Ke) are fed back through the controller gain Kc.]
The controlled results are given in Fig. 9.37 which also shows the (typically invisible) true states
in grey.
Figure 9.37: LQG regulatory control of an aircraft. The true, but hidden, states are given in grey.
(See also Fig. 9.36.)
9.5.11 OPTIMAL CONTROL USING OUTPUT FEEDBACK
The optimal controllers of §9.4 had the major drawback that they required state feedback since the control law was a function of the states, namely Eqn. 9.50 or u = -Kx. We addressed this problem in §9.5 by first designing an observer to produce some estimated states which we could use in place of the true, but unknown, states in the controller. However it would be valuable to
be able to design an optimal controller directly that uses output, rather than state feedback. Such
an optimal output feedback controller in fact exists, but it is difficult to compute, and may only
deliver a local optimum. The derivation of the optimum output controller can be found in [122,
chapter 8], and improvements to the design algorithm are given in [166].
Once again we assume a linear plant

$$\dot{x} = Ax + Bu, \quad y = Cx, \quad x(0) = x_0$$

and we seek a constant gain output feedback law, u = -Ky, that minimises a quadratic performance index. The optimal gain K must simultaneously satisfy the three coupled design equations

$$0 = A_c^T P + P A_c + C^T K^T R K C + Q \tag{9.131}$$
$$0 = A_c S + S A_c^T + X \tag{9.132}$$
$$K = R^{-1} B^T P S C^T \left( C S C^T \right)^{-1} \tag{9.133}$$

where the closed loop A_c and the covariance of the initial state are defined as

$$A_c = A - BKC, \quad X \stackrel{\text{def}}{=} E\left\{ x(0) x^T(0) \right\}$$

and the performance index is given by

$$J = \text{trace}\left( P X \right) \tag{9.134}$$
The coupled nature of the multiple Lyapunov equations means that we will need to use a numerical technique to solve for the K that satisfies all three design equations. Our alternatives are:

1. To use a general-purpose unconstrained optimiser to search for the values of K that minimise Eqn. 9.134 directly, where P is given by the solution to Eqn. 9.131.

2. To use an unconstrained optimiser and exploit the gradient information, since the gradient of J with respect to K is

$$\frac{\partial J}{\partial K} = 2\left( R K C S C^T - B^T P S C^T \right)$$

where we have computed the auxiliary matrices P, S by solving the two Lyapunov equations, Eqns 9.131–9.132, perhaps using the MATLAB function lyap.

3. To follow a simple fixed-point algorithm where we guess a value for K and then iterate around Eqns 9.131 to 9.133, as sketched below.
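A minimal sketch of this fixed-point iteration, assuming an initial stabilising guess K and given A, B, C, Q, R and X:

for i = 1:50
  Ac = A - B*K*C;
  P = lyap(Ac', C'*K'*R*K*C + Q);  % Eqn. 9.131
  S = lyap(Ac, X);                 % Eqn. 9.132
  K = R\(B'*P*S*C')/(C*S*C');      % Eqn. 9.133
end % for
J = trace(P*X);                    % performance index, Eqn. 9.134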
Numerical experiments show that the direct optimisation is prone to infinite looping, and the
fixed-point algorithm can converge to local optima.
9.6 SUMMARY
This chapter dealt with optimal control using classical methods. Optimal control is where we try to control a process in some 'best' manner. The term best in the engineering field usually means most economic (either most profit, or least cost). While it is naive to expect our controllers to perform optimally in the field, in practice they have been found to out-perform more conventional controllers designed via methods such as loop shaping or frequency response. I think of optimal controllers as simply another set of design rules to construct a workable, satisfactory controller, but would not expect them to be the best in any strict and absolute sense.
A convenient objective function is the quadratic objective function that tries to minimise the
weighted sum of state and manipulated deviations squared over time. The optimal controller
design problem given nonlinear process dynamics and constraints is, in general, too hard to solve
online, although offline open-loop optimal profiles are possible using numerical techniques. The
optimisation problem given linear dynamics is tractable and the solution is termed the linear multivariable regulator (LMR or LQG). The LMR in its simple guise is a multivariable proportional
controller. The fact that estimation is closely related to regulation was shown in chapter 8 and the
optimal estimator is the Kalman filter.
CHAPTER 10

PREDICTIVE CONTROL
To be computationally practical, the optimal controllers of chapter 9 required that the system
satisfy certain mathematical restrictions. To use variational calculus, the dynamic models must
be differentiable; to use LQG, we require linear models with no saturation constraints. In contrast
the controllers described in this chapter take into account constraints in the manipulated and
control variables, something which is crucial for practical control applications.
10.1 MODEL PREDICTIVE CONTROL
Model predictive control, or MPC, first started appearing in a variety of flavours in the late 1970s and has since emerged as one of the most popular and potentially valuable control methodologies, particularly in the chemical processing industries. Good comprehensive surveys of theoretical and practical applications are given in [54, 73, 162, 167].
The central theme of all these variants of predictive control is a controller that uses a process
model to predict responses in a moving or receding horizon. Unlike the optimal control of section 9.3.2 or the linear cases following, this control strategy uses a finite optimisation horizon,
(refer Fig. 10.1). At every sample time, the optimiser built into the controller selects the best finite
number of future control moves over a horizon of 10 or so samples, of which the controller will
subsequently implement just the first. At the next sample time, the whole optimisation procedure
would be repeated over the new horizon, which has now been pushed back one step. This is also
known as receding horizon control. Since an optimisation problem is to be solved each sampling
time step, one hopes that the calculations are not too involved!
This long-term predictive strategy coupled with the seeking of multiple optimal future control
moves turns out to be rather a flexible control scheme where constraints in manipulated and
state variables can be easily incorporated, and some degree of tuning achieved by lengthening or
shortening the horizon. By predicting over the dead time and two or three time constants into the
future, and by allowing the controller to approach the setpoint in a relaxed manner, this scheme
avoids some of the pitfalls associated with the traditionally troublesome processes such as those
with inverse response or with excessive dead-time.
However, relying on predictions from models is not without problems. I pity the Reserve Bank of New Zealand which has the unenviable task of predicting the future time-varying interest rates at quarterly intervals as shown in Fig. 10.2. The dark solid line shows the actual 90-day interest rates from March 1999 until March 2003 compared to the predictions made at the originating point. This data was obtained from [194, p18] where it is cautioned: 'While we consider our
Figure 10.1: The basic idea behind predictive control showing the control and prediction horizons.
The controller tries to select future input values such that the output tends to the setpoint in some
optimal manner.
approach generally useful, there are risks. For example, if the interest rate projections are taken as a resolute commitment to a particular course of policy action, then those that interpret it as such will be misled.'
The idea behind MPC is that we have a dynamic, possibly nonlinear, plant described as
$$y_{i+1} = f(y_i, u_i) \tag{10.1}$$
where we wish the plant output y(t) to follow as closely as practical a possibly varying setpoint
or reference r(t).
To establish the control move, we find a suitable u(t) that minimises the quantity

$$J_t = \underbrace{\sum_{i=1}^{N_p} \left( r_{t+i} - y_{t+i} \right)^2}_{\text{prediction horizon}} + \underbrace{\sum_{i=1}^{N_c} w \, \Delta u_{t+i-1}^2}_{\text{control horizon}} \tag{10.2}$$

subject to the input constraints

$$u_i \in U \tag{10.3}$$
The set U in the constraint equation, Eqn. 10.3 is the set of allowable values for the input and
typically takes into account upper and lower saturation limits in the actuator. For example in the
case of a control valve, the allowable set would be from 0 (shut) to 100% (fully open).
The optimisation of Eqn. 10.2 involves finding not just the one next control move, but a series of Nc future moves, one for each sample time over the future control horizon. In practice the control horizon is often about 5–15 samples long. The quantity optimised in Eqn. 10.2 is comprised of two parts. The first expression quantifies the expected error between the future model predictions y and the future setpoints, r. Typically the future setpoint is a constant, but if changes in reference are known in advance, they can be incorporated. The comparison between setpoint and expected output is considered over the prediction horizon, Hp, which must be longer than the control horizon.

The second term in Eqn. 10.2 quantifies the change in control moves. By making the weighting factor, w, large, one can discourage the optimal controller from making overly excited manipulated variable demands.
At each sample time, a new constrained optimisation problem is solved producing an updated
vector of future control. At the next sample instance, the horizons are shifted forward one sample
time T , and the optimisation repeated. Since the horizons are continually moving, they are often
termed receding horizons.
The chief drawback with this scheme is the computational load of the optimisation every sample
time. If the model, Eqn. 10.1, is linear, and we have no constraints, then we are able to analytically extract the optimum of Eqn. 10.2 without resorting to a time consuming numerical search.
Conversely if we meet active constraints or have a nonlinear plant, then the computation of the
optimum solution becomes a crucial step. Further details for nonlinear predictive control are
given in the survey of [83].
Algorithm 10.1 Model Predictive Control (MPC) algorithm
The basic general predictive control algorithm consists of a design phase and an implementation phase. The design phase is:
1. Identify a (discrete-time) model of the plant to be controlled.
2. Select a suitable sampling time and choose the lengths of the control and prediction horizons, Hc, Hp, where Hc ≤ Hp, and the control move weightings, w. Typically Hp is chosen
sufficiently long in order to capture the dominant dynamics of the plant step response.
3. For further flexibility, one can exponentially weight future control or output errors, or not
include the output errors in the performance index for the first few samples. Making the
prediction horizon longer than the control horizon effectively adds robustness to the algorithm, without excessively increasing the optimisation search space.
4. Incorporate constraints in the manipulated variable if appropriate. In the absence of constraints, and as the prediction horizons tend to infinity, the MPC will lead to the standard linear quadratic regulator as described in §9.4.
The implementation phase is carried out each sampling time.
1. Use an optimiser to search for the next future Nc optimum control moves ut starting from
the current position.
2. Implement only the first optimum control move found from step 1.
3. Wait one sample period, then go back to step 1. Use the previous optimum ut vector, (shifted
forward one place) for the initial estimate in the optimiser.
The following sections demonstrate various predictive controllers. Section 10.1.1 applies a predictive controller to a linear plant, but with input constraints. Because this is a nonlinear optimisation
problem, a generic nonlinear optimiser is used to solve for the control moves. As you will see if
you run the example in Listing 10.1, this incurs a significant computational burden. If we ignore
the constraints, then we can use a fast analytical solution as shown in section 10.1.2, but this loses
one of the big advantages of MPC in the first place, namely the ability to handle constraints. The
M ODEL P REDICTIVE C ONTROL T OOLBOX for M ATLAB described in [28] is another alternative to
use for experimenting with MPC.
10.1.1 CONSTRAINED PREDICTIVE CONTROL
This section demonstrates a constrained predictive control example using the following linear non-minimum phase plant,

$$G_p = \frac{1.5(1 - 2.5s)}{4s^2 + 1.8s + 1} \tag{10.4}$$

but where the manipulated variable is constrained to lie between the values -3.5 < u < 3. Sampling Eqn. 10.4 at T = 0.75 with a zeroth-order hold gives the discrete system

$$G_p(z) = \frac{0.005z^{-1} - 0.4419z^{-2}}{1 - 1.596z^{-1} + 0.7136z^{-2}}$$
[Figure: step response of the plant Gp = 1.5(1-2.5s)/(4s²+1.8s+1) sampled at Ts = 0.75, showing the initial inverse response.]
Listing 10.1: Predictive control with input saturation constraints using a generic nonlinear optimiser
ntot=150; dt=0.75; t=dt*[0:ntot-1]'; % time vector
% ...
for i=1:ntot-Hp-1 % for each sample time ...
  Umin = [Umin(2:end);Umin(end)]; % roll & repeat previous solution as start guess
  Umin = fmincon(@(u)fmodstpc(u,Gss,x,Yspt(i:i+Hp-1),Hp),Umin, ...
      [],[],[],[],Ulb,Uub,[],optns); % constrained search for the future moves
  % ...
end % for i
The objective function to be minimised called by the optimiser is given in Listing 10.2. Note that auxiliary variables, such as the model and the horizon lengths, are passed through to the function to be optimised via the anonymous function.
Listing 10.2: Objective function to be minimised for the predictive control algorithm with input saturation constraints

function J = fmodstpc(u,Gss,x,rs,Hp)
% Performance index J. Only the comment line of the original listing
% survives, so the body below is a reconstructed sketch.
u = [u(:); u(end)*ones(Hp-length(u),1)]; % hold last move over the horizon
y = zeros(Hp,1);
for k=1:Hp % simulate the discrete model over the prediction horizon
  x = Gss.a*x + Gss.b*u(k);
  y(k) = Gss.c*x + Gss.d*u(k);
end % for
J = sum((rs(:)-y).^2); % Performance index J
The controlled performance as shown in Fig. 10.4 is very good, as indeed it should be.
MPC: T =0.75, H =20, H =10
s
5
0
5
10
4
umax
Input
2
0
2
4
umin
0
20
40
60
time [s]
80
100
120
Figure 10.4: Predictive control of a non-minimum phase system with constraints on u with a
control horizon of Hc = 10 and a prediction horizon of Hp = 20. Refer also to Fig. 10.5 for a
zoomed portion of this plot.
Especially note that since we are making use of a vector of future setpoints, the control algorithm starts preparing for the setpoint change before the actual setpoint changes. This acausal behaviour, illustrated more clearly in Fig. 10.5, is possible because we assume we know the future setpoints, and it improves the response compared with a traditional feedback-only PID controller. Naturally the control becomes looser as the manipulated variable hits the upper and lower bounds.
You will soon realise, if you run this example, that the computational load for the optimal input trajectory is considerable. With such a performance load, one needs to investigate ways to
increase the speed of the simulation. Reasonable starting estimates for the new set of future control moves are simply obtained from the previous optimisation problem. Good starting guesses
for the nonlinear optimiser are essential to ensure that the controller manages to find a reasonable output in the time available. Even though our model, ignoring the manipulated variable
constraints, is linear, we have used a general nonlinear optimiser to solve the control problem.
This means that the same algorithm will work in principle for any plant model, whether linear
[Figure 10.5: A zoomed portion of Fig. 10.4 around the setpoint change, showing the control and prediction horizons, Hc and Hp.]
or nonlinear, and any constraint formulation. Naturally there are many extensions to the basic
algorithm that address this issue such as only optimising every n-th sample time, or lowering Nc .
However substantial simplifications can be made if we restrict our attention to just linear plants and bounded output, input and input rate constraints. Under these restrictions, the optimal MPC formulation can be re-written as a standard quadratic program, or QP, for which there are efficient solution algorithms. Further details of this are given in [26, 165], and a commercial MATLAB implementation is described in [29].
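To give a flavour of the recasting, a minimal sketch using quadprog is given below. It assumes the dynamic matrix G and weighting λ of the following section, and the bound variables dumin and dumax are hypothetical:

H = 2*(G'*G + lambda*eye(Nc)); % quadratic term of the QP cost
f = -2*G'*(r - yfree);         % linear term
dU = quadprog(H,f,[],[],[],[], ...
      dumin*ones(Nc,1), dumax*ones(Nc,1)); % bounded control moves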
VARYING THE HORIZONS

Investigating different controller design decisions is computationally expensive for this application, but we can relax the termination requirements for the optimiser with little noticeable change in the results. From a computational complexity viewpoint, we want to reduce the search space, and hence reduce the horizons. However, reducing the horizons excessively will result in a poor controlled response.
Fig. 10.6 demonstrates what is likely to happen as the control and prediction horizons grow
shorter. For a short control horizon, Nc = 2, and a reasonably short prediction horizon, Np = 8,
but one that still covers inverse response, the controlled response is still quite reasonable as shown
in Fig. 10.6(a). However decreasing the prediction horizon any further, even just slightly such as
to Np = 7, gives problems as exhibited in Fig. 10.6(b). The inverse response horizon is 5.
MPC OF THE BLACKBOX
Fig. 10.7 shows the result of using the constrained MPC on the black box assuming a constant model of the blackbox as

$$\mathcal{M}: \quad G(s) = \frac{0.98}{(3s+1)(5s+1)}$$
sampling at T = 0.75 seconds with a prediction horizon of Hp = 20, and control horizon of
Hc = 10 samples. Clearly we should also estimate the controller model online perhaps using a
recursive least-squares type algorithm.
Figure 10.6: Varying the horizons of predictive control of a non-minimum phase system with
constraints on u.
Figure 10.7: Constrained MPC applied to the laboratory blackbox with sampling at T = 0.75 sec
and prediction & control horizons Hp = 20, Hc = 10.
10.1.2 DYNAMIC MATRIX CONTROL
If you actually ran the simulation of the general optimal predictive controller from §10.1.1, and waited for it to finish, you would have stumbled across the one big drawback: computational load. As previously anticipated, one way to quicken the search is to assume a linear system, ignore any constraints, and make use of the analytical optimisation solution. To do this, we must express our model as functions of input only, that is, a finite impulse response model, rather than the more parsimonious recursive infinite impulse response model. There are again many different flavours of this scheme, but the version here is based on the step response model,

$$y(t) = \sum_{i=1}^{\infty} g_i \, \Delta u(t-i) \tag{10.5}$$
where the g_i's are the step response coefficients, and Δu(t) is the change in input, or u_t - u_{t-1}, refer Fig. 10.8. Suppose we start from rest, and make a unit step change in u at time t = 0. Using Eqn. 10.5, the first few output predictions are

$$y_1 = g_1 \underbrace{\Delta u_0}_{=1} + g_2 \underbrace{\Delta u_{-1}}_{=0} + \cdots \tag{10.6}$$
$$y_2 = g_1 \underbrace{\Delta u_1}_{=0} + g_2 \underbrace{\Delta u_0}_{=1} + g_3 \underbrace{\Delta u_{-1}}_{=0} + \cdots \tag{10.7}$$
$$y_3 = g_1 \Delta u_2 + g_2 \Delta u_1 + g_3 \Delta u_0 + \cdots \tag{10.8}$$

Since all the control move changes are zero except Δu_0 = 1, it follows that y_1 = g_1, y_2 = g_2, y_3 = g_3 and so on. So the values of the output at each sample time y_k in response to a unit step at k = 0 are the step response coefficients, g_k. These coefficients form an infinite series in general, but for stable systems converge to a limit as shown in Fig. 10.8.
[Figure: the step response coefficients g_i of a stable system converge to an approximate steady state; the impulse response coefficients are the differences of successive step response coefficients, e.g. h_6 = g_6 - g_5.]
Figure 10.8: Step response coefficients, gi , and impulse response coefficients, hi , for a stable system.
We can write the response of the model in a compact matrix form as

$$\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_{N_p} \end{bmatrix} = \underbrace{\begin{bmatrix} g_1 & 0 & 0 & \cdots & 0 \\ g_2 & g_1 & 0 & \cdots & 0 \\ g_3 & g_2 & g_1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ g_{N_p} & g_{N_p-1} & g_{N_p-2} & \cdots & g_{N_p-N_c+1} \end{bmatrix}}_{\text{dynamic matrix}} \begin{bmatrix} \Delta u_0 \\ \Delta u_1 \\ \vdots \\ \Delta u_{N_c-1} \end{bmatrix}$$

or

$$y_{\text{forced}} = \mathbf{G} \, \Delta u \tag{10.9}$$
The Np × Nc matrix G in Eqn. 10.9 is referred to as the dynamic matrix, hence the term dynamic matrix control or DMC. It is also closely related to a Hankel matrix, which provides us with a convenient, but not particularly efficient, way to generate this matrix in MATLAB. Eqn. 10.9 tells us how our system will respond to changes in future moves in the manipulated variable. Specifically, Eqn. 10.9 will predict, over the prediction horizon Np, the effect of Nc control moves. As in the general optimisation case, the input is assumed constant, Δu = 0, past the end of the control horizon.
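One way to build the dynamic matrix, sketched here with toeplitz rather than the text's route, assuming a plant model Gp and the horizons are given:

g = step(Gp, dt*(1:Np)');              % step response coefficients g1..gNp
G = toeplitz(g, [g(1) zeros(1,Nc-1)]); % Np x Nc lower-triangular dynamic matrix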
All we need to do for our optimal controller, is to find out what to set these future control moves
to, in order that the future outputs are close to the desired setpoints. Actually Eqn. 10.9 only
predicts how our system will behave given future control changes, (termed the forced response),
but it ignores the ongoing effect of any previous moves, (termed the free response). We must
add the effect of the free response, if any, to Eqn. 10.9 to obtain a valid model prediction. This
simple addition of two separate responses is a consequence of the linear superposition principle
applicable for linear systems. We can calculate the free response over the prediction horizon, yfree
simply by integrating the model from the current state with the current constant input for Np
steps. Therefore to ensure our future response is equal to the setpoint, r, at all sample times over
the prediction horizon, we require
r = yfree + yforced
(10.10)
Again, since we will make the requirement that the control horizon is shorter than the prediction horizon, our dynamic matrix, G, is non-square, so the solution of Eqn. 10.9 for the unknown control moves involves a pseudo-inverse. Typically we further weight the control moves by a positive weight λ to prevent over-excited input demands, so the optimal solution to the predictive control problem is

$$\Delta u = \left( \mathbf{G}^T \mathbf{G} + \lambda I \right)^{-1} \mathbf{G}^T \left( r - y_{\text{free}} \right) \tag{10.11}$$
This analytical solution to the optimal control problem is far preferable to the numerical search procedure required for the general nonlinear case. Note that the matrix inversion in Eqn. 10.11 is only done once, offline, and thus does not constitute a severe online numerical problem. The control move weighting also improves the numerical conditioning of the matrix inversion.
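A minimal sketch of this control law in practice (variable names are illustrative):

Kdmc = (G'*G + lambda*eye(Nc))\G'; % gain computed once, offline, Eqn. 10.11
% ... then at every sample time:
dU = Kdmc*(r - yfree);             % optimal future control moves
u = u + dU(1);                     % implement only the first move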
Fig. 10.9 shows how this control philosophy works for a plant with a prediction horizon of 10 and
a control horizon of 4. The output is intended to follow the setpoint, and in this case the setpoint
change at k = 15 is known in advance and that information is available to the controller. (There
is also a disturbance at k = 19 which is not known to the controller.) A small amount of noise is
added to the output to continually stress the controller.
The control law, Eqn. 10.11, generates 4 control moves in the future, but implements only the first.
The grey trends in both the output and the input show at each sample time what the controller is
thinking will happen in the future. Fig. 10.9(b) shows a zoomed portion of Fig. 10.9(a) just prior
to the step change. The four future predictions are plotted, getting progressively lighter in colour
further into the future.
A DYNAMIC MATRIX CONTROL EXAMPLE
Fig. 10.10 shows the performance of a DMC controller for the non-minimum phase plant of Eqn. 10.4 from §10.1.1. I generated the step response coefficients elegantly from the transfer function model simply using the dstep command. For illustration purposes in this simulation I have varied the control move weighting λ in Eqn. 10.11 as λ = 100 for t < 50, then λ = 1 for the interval 50 < t < 100, and then finally λ = 0.

The controller uses a prediction horizon of Hp = 25, a control horizon of Hc = 8 at a sample time of T = 0.5. The manipulated variable constraints present in §10.1.1 are relaxed for this example. As the control move weighting is decreased, the output performance improves at the expense of vigorous manipulated variable movements. Note that a weighting λ = 0 is allowable, but will result in an over-excited controller.

The m-file for this DMC example, but using a constant weight of λ = 1, is given in Listing 10.3.
(a) Dynamic matrix control showing the predictions for a step change and a disturbance.
[Panel: Ts = 1, Hp = 10, Hc = 4, λ = 10.0]
(b) A zoomed portion of the DMC control showing the future predictions at each sample interval
getting lighter in colour as the prediction gets longer.
Figure 10.9: Dynamic matrix predictive control showing at each sample time the future control moves and the resultant prediction of the plant's output given those anticipated control moves. For example, at each sample, the controller generates the next 4 control moves, but actually only implements the first.
[Figure 10.10: DMC control of the non-minimum phase plant with Ts = 0.5, Hp = 25 and Hc = 8 as the control move weighting λ is progressively reduced.]

Listing 10.3: An excerpt of the DMC control simulation.
[p,px] = dlsim(Phi,Del,C,D,u*ones(Hp+1,1),x); % free response over horizon
p(1)=[]; % kill repeat of x0
  % ...
  Y(i,:) = y; Uopt(i,:) = u; % log the results
end % for i
Yspt(1) = []; Yspt=[Yspt;1];
subplot(2,1,1); plot(t,Y,t,Yspt,':') % plot results
subplot(2,1,2); stairs(t,Uopt);
The high fidelity of the controlled response of both predictive control algorithms, (Figures 10.4
and 10.10) is helped by the lack of model/plant mismatch in the previous simulations. In practice,
faced with the inevitable mismatch, one can extend the basic DMC algorithm such as varying the
weights and/or horizon lengths or by including an adaptive model and therefore performing the
matrix inversion online. In some cases, particularly for unstable processes, or for cases where the sampling rate is either very fast or very slow compared with the step response of the system, the DMC algorithm will fail, primarily owing to the poor conditioning of G. In these cases a singular value decomposition may help.
DMC OF THE BLACKBOX
A DMC controller is tried on the laboratory blackbox in Fig. 10.11. Unlike the earlier blackbox MPC example, however, here we will identify the model online using a recursive least-squares estimation algorithm. The model-based controller assumes the structure

$$\mathcal{M}: \quad G(q^{-1}) = \frac{b_0 + b_1 q^{-1} + b_2 q^{-2}}{1 + a_1 q^{-1} + a_2 q^{-2} + a_3 q^{-3}}$$

for the blackbox with a sample time of T = 1.5 seconds and a forgetting factor of λ = 0.995. The control horizon is set to 8 samples, and the prediction horizon is set to 20 samples. The control move weighting is set to 0.2.
10.2 A MODEL PREDICTIVE CONTROL TOOLBOX
A Model Predictive Control toolbox developed by Currie, [58, 59] allows one to rapidly build and
test various MPC configurations for a wide variety of linear, or linearisable nonlinear plants with
both soft and hard constraints on the output, the input and input rate of change. The following
sections describe this toolbox in more detail.
10.2.1 THE MPC GUI
The easiest way to get started is to use the Model Predictive Control toolbox GUI as shown in
Fig. 10.12. In this M ATLAB GUI, one can load and experiment with different plants (and models of
those plants), adjust prediction horizons, operating constraints, and setpoints. What is interesting
to observe in the GUI application are animations of the future output and input predictions given
to the right of the solid line in the scrolling plot.
Figure 10.11: Adaptive dynamic matrix control (DMC) of the blackbox with control horizon Hc =
8 and prediction horizon Hp = 25.
10.2.2 MPC TOOLBOX IN MATLAB
The MPC toolbox is easy to use in script mode. This section shows how to build a multivariable
MPC controller using the Wood-Berry column plant in Eqn. 3.24 from section 3.2.4. This plant
has 2 inputs (reflux and steam) and two outputs (distillate and bottoms concentration) and we
will use a sampling time of Ts = 2 seconds. For this application we will assume no model/plant
mismatch.
Now we are ready to specify the controller tuning parameters which include the length of the
control and prediction horizons, the values of the input and output saturation constraints, and
the relative input and output weights in the objective function.
Np = 10; Nc = 5;     % horizons (values from the results plot, Fig. 10.13)
con.u = [-1 1 0.1;   % input constraints [min max |delta u| max]
         -1 1 0.1];  %   (upper bound & rate values partly reconstructed)
con.y = [-2 2;       % output constraints [min max]
         -2 2];
uwt = [1 1]';        % Individual weighting on the inputs
ywt = [1 0.1]';      % We are less interested in controlling bottoms, y2
Kest = dlqe(Model);  % Estimator Gain
Now we are ready to initialise the model (and plant). To do that, we need to know how many internal states we have in our discrete model. That number depends on the discretisation method and the sampling rate, so the easiest way is to inspect the properties of the plant and model.
>> Plant
Plant =
-- jMPC State Space Object --
        States: 10
        Inputs: 2
       Outputs: 2
 Sampling Time: 2 sec
Finally we are ready to design a setpoint trajectory, assemble the MPC controller, and simulate
the results. In this case we are using the high-speed MEX version of the optimisation routines.
T = 200; % Desired simulation length
setp = 0.5*square(0.05*Ts*[0:T]',50)+0.7; % Design interesting setpoint
setp(:,2) = 0.5*square(0.1*Ts*[0:T]',50)-0.7;
setp(1:10,:) = 0;
% ... (assembly of the MPC1 controller & simulation options elided)
simresult = sim(MPC1,simopts,'Mex'); % Simulate & plot results
plot(MPC1,simresult,'summary');
suptitle(sprintf('Horizons: N_p=%2i, N_c=%2i',Np,Nc))
The results for this multivariable MPC example are shown in Fig. 10.13. Note how the control of y2 is considerably looser than that for y1. This is a consequence of the output weights, which are ten times higher for y1. Also note that the change in input, Δu, never exceeds the required 0.1.
[Figure 10.13: Multivariable MPC of the Wood-Berry column with horizons Np = 10, Nc = 5, showing the outputs yp(k), states xp(k), and inputs u(k).]
10.2.3 USING THE MPC TOOLBOX IN SIMULINK
It is also easy to implement MPC controllers in S IMULINK. Fig. 10.14 shows an MPC controller
from the jMPC toolbox connected to an unstable, non-minimum phase plant
$$G(s) = \frac{s-1}{(s-2)(s+1)} \tag{10.12}$$
which is difficult to control and has been used in various benchmark control studies, [102].
To use the MPC controller block in SIMULINK, we first must design an MPC controller in the same way as we did in section 10.2.2. Once we have designed the controller, we can simply insert it into the discrete MPC controller block and connect it to our plant. However in this case, instead of the Wood-Berry column, we use the unstable plant of Eqn. 10.12.
[Figure 10.14: The discrete MPC controller block from the jMPC toolbox connected to the plant (s-1)/((s-2)(s+1)) in SIMULINK, with a signal generator supplying the setpoint.]
Np = 40; Nc = 5;        % prediction & control horizons
con.u = [-5 5 0.3];     % input constraints [min max |delta u| max]
con.y = [-2 2];         % output constraints
uwt = 1; ywt = 0.5;     % input & output weights
Kest = dlqe(Model);     % estimator gain
% ... (setpoint design elided)
MPC1 = jMPC(Model,Np,Nc,uwt,ywt,con,Kest); % assemble the controller
The last line creates an MPC controller object called MPC1 which we can place in the MPC controller block in the SIMULINK diagram in Fig. 10.14. Note that the true plant, Eqn. 10.12, is never actually used in the MPC controller. The results of the subsequent SIMULINK simulation in Fig. 10.15 show that we hit, but do not exceed, the input rate constraint during the transients.
10.2.4 MPC CONTROL OF A 3-DOF HELICOPTER
The 3-DOF helicopter manufactured by Quanser¹ shown in Fig. 10.16 is a popular test rig for testing multivariable controllers, [57]. The helicopter can be modelled using 6 states,

$$x^T \stackrel{\text{def}}{=} \begin{bmatrix} \epsilon & p & \lambda & \dot{\epsilon} & \dot{p} & \dot{\lambda} \end{bmatrix}$$

which are the angles (measured in radians) of the elevation, ε, the pitch, p, and the travel, λ, from a nominal zero position, and their corresponding rates of change. The physical limitations of the elevation angle are about ±50°, while the pitch is limited to about ±100°. The travel angle
1 www.quanser.com
[Figure 10.15: SIMULINK MPC control of the unstable plant of Eqn. 10.12 showing the input rate hitting, but not exceeding, its constraint during the transients.]
is unlimited due to the slip rings on the central shaft. These state constraints provide a good
incentive to use MPC control.
The angles are directly measured via rotary encoders, but the time derivatives, (ε̇, ṗ, λ̇), must be estimated either using a state estimator, or some sort of carefully designed differentiator. The two control inputs are the two independent voltages applied to the two motors driving the front and back propellers,

$$\ddot{\epsilon} = \frac{K_f l_a}{J_e} (V_f + V_b) \cos(p) - \frac{T_g}{J_e} \tag{10.14}$$
$$\ddot{p} = \frac{K_f l_h}{J_p} (V_f - V_b) \tag{10.15}$$
$$\ddot{\lambda} = \frac{K_f l_a}{J_t} (V_f + V_b) \sin(p) \tag{10.16}$$
with the model parameters either directly measurable from the equipment, or alternatively available from the Quanser documentation. Note that the sin() and cos() terms introduce both nonlinearities and cross-coupling to the model and that the double integrators for all three states mean
that this system is triply unstable.
Fig. 10.17 shows how we would control the 3-DOF helicopter using an MPC controller. The plant block in Fig. 10.17(a), expanded in Fig. 10.17(b), contains the dynamic equations given above, or we could directly control the actual helicopter. The former allows us to test the controller in
a simulation mode, the latter allows us to actually control some external hardware provided we
have suitable external connections.
(b) The internals of the nonlinear helicopter plant above following Eqns 10.14–10.16.
In order to build an MPC controller for the helicopter, the model equations, Eqns (10.14)–(10.16), are entered into MATLAB as a normal ODE function file as shown in Listing 10.5. We will use this nonlinear model as the base for the MPC controller design.
Listing 10.5: Nonlinear dynamics of a helicopter
function xdot = heli_nlQ(t,x,u,Je,La,Kf,Fg,Tg,Jp,Lh,Jt,m_f,m_b,m_w,Lw,g)
% Nonlinear 3-DOF helicopter dynamics. Only two lines of the original
% listing survive; the rest is a sketch following Eqns 10.14-10.16.
Vf = u(1); Vb = u(2); % front & back motor voltages
xdot(1,1) = x(4); % elevation velocity
xdot(2,1) = x(5); % pitch velocity
xdot(3,1) = x(6); % rotation (travel) velocity
xdot(4,1) = (Kf*La*(Vf+Vb)*cos(x(2)) - Tg)/Je; % Eqn 10.14 (assumed form)
xdot(5,1) = Kf*Lh*(Vf-Vb)/Jp;                  % Eqn 10.15
xdot(6,1) = Kf*La*(Vf+Vb)*sin(x(2))/Jt;        % Eqn 10.16
We can either use this nonlinear model as the basis of the MPC design (as shown in the listing
below), or we can deliberately use a different model, perhaps to test some model/plant mismatch.
[Kf,m_h,m_w,m_f,m_b,Lh,La,Lw, ...
g,K_EC_T,K_EC_P,K_EC_E,Je,Jt,Jp,Fg,Tg] = setup_constants();
C = eye(3,6); % Output Matrix, C. We only measure the angles, not the rates of change
Vop = g*(Lw*m_w - La*m_f - La*m_b)/(2*La*Kf); % Operating Point
param = {Je,La,Kf,Fg,Tg,Jp,Lh,Jt,m_f,m_b,m_w,Lw,g}; % parameter cell array
Plant = jNL(@heli_nlQ,C,param);
% See Listing 10.5 for the nonlinear ODEs
u0 = [Vop;Vop];
Now we need to linearise the plant about some operating point in order to build the linear internal
model that will be used in the MPC controller.
Now that we have a model, we need to specify the MPC tuning options such as control and
prediction horizons, the input and output constraints, the various control weightings, and the
design of the estimator.
% Constraints (the numerical values are elided in this excerpt)
% ...
% Input & output weighting
uwt = [1 1]';
ywt = [10 5 15]';
% Estimator gain
Q = 2*eye(10); R = 0.1*eye(3); Q(4,4) = 1; Q(5,5) = 1; Q(6,6) = 0.1;
Kest = dlqe(Model,Q,R);
Finally we are ready to construct the MPC controller. It might be useful to include descriptive
variable names and units.
opts=jMPCset('InitialU',u0,'QPMaxIter',20,...
'InputNames',{'Motor 1 Input','Motor 2 Input'},...
'InputUnits',{'V','V'},...
'OutputNames',{'Elevation Angle','Pitch Angle','Rotation Angle'},...
'OutputUnits',{'rad','rad','rad'});
% Build MPC controller object & place in the S IMULINK block in Fig. 10.17(a).
MPC1 = jMPC(Model,Np,Nc,uwt,ywt,con,Kest,opts);
The control object, MPC1 in the above listing, can now be inserted into the SIMULINK MPC control block in Fig. 10.17(a). We are now ready to test our MPC controller.

The resultant MPC control given in Fig. 10.18 shows the results of an extremely large travel setpoint change of 400°, over a full circle. Notice how the MPC controller does violate the (soft) state constraints at times, but does eventually manage to bring the helicopter under control. For comparison, a linear controller such as LQR/LQG would not be able to maintain stability for such a large setpoint change.
[Figure 10.18: MPC control of the helicopter for a large travel setpoint change, showing the elevation, pitch and travel angles, and the two motor voltages.]

10.2.5 FURTHER READINGS ON MPC
Model predictive control is now sufficiently mature that it has appeared in numerous texts, [134, 174, 199], all of which give an overview of the underlying theory, and insight into the industrial practice. Industrial applications using commercial implementations such as Pavilion Technologies' Pavilion8, Honeywell's RMPCT or Perceptive Engineering have been surveyed in [162]. A commercial MPC toolbox for MATLAB is available in [29] while freeware versions also exist from [58] and [4].
10.3 OPTIMAL CONTROL USING LINEAR PROGRAMMING
Optimal control problems quickly demand excessive computation, which all but eliminates them
from practical real-time applications. If however our model is a linear discrete-time state-space
model, our performance objective is to minimise the sum of the absolute value, rather than the
more normal least-squares, and we have linear constraints on the manipulated variables, then
the general constrained optimisation problem becomes a linear program. Linear programs or LPs
are much more attractive to solve than a general constrained nonlinear optimisation problem of
comparable complexity because, in theory, they can be solved in a finite number of operations,
and in practice they are far more successful at solving problems with more than 50 variables and
100 constraints on modest hardware. The theory of linear programming is well known, and if the
problem is poorly conditioned, or not well-posed, then the software will quickly recognise this,
ensuring that we do not waste time on impossible, or poorly thought out problems.
There are many computer codes available to solve linear programs, including various routines in OPTI or linprog from the OPTIMIZATION TOOLBOX. An introduction to linear programming with emphasis on the numerical issues with MATLAB is given in [204], and an experimental application is given in [148] from which this section was adapted.
As in the predictive control development, we want a controlled response that minimises the error
between setpoint, r, and the actual state, say something like Eqn. 9.11,

$$J = \int_0^\infty f(x, r) \, dt$$

where we could either use the absolute value of the error (the l₁ norm), or perhaps minimise the maximum error (the l∞ norm). In both cases these problems can be converted to a convex linear program.
We will also consider only a finite time horizon, and since the model is discrete, we will use a simple sum in place of the integral. The optimisation problem is then to choose the set of future manipulated variables u_k over the time horizon such that the performance index

$$J = \sum_{k=0}^{N} \sum_{i=1}^{n} |r_i - x_i|_k \tag{10.17}$$

is minimised subject to the process model. In Eqn. 10.17, n is the number of states, and N is the number of samples in the time horizon, (typically around 10–20). If desired, Eqn. 10.17 could be further modified by adding state weighting if some states are more important than others.
10.3.1 DEVELOPMENT OF THE LP PROBLEM
The hardest part of this optimal controller is formulating the LP problem. Once that is done, it
can be exported to any number of standard numerical LP software packages to be solved.
We will use a discrete state model for the predictive part of the controller,

$$x_{k+1} = \Phi x_k + \Delta u_k \tag{10.18}$$

Unwinding the recursion in Eqn. 10.18, we can calculate any subsequent state, x_k, given the manipulated variable history and the starting condition, x_0, (refer Eqn. 2.88),

$$x_k = \Phi^k x_0 + \Phi^{k-1} \Delta u_0 + \Phi^{k-2} \Delta u_1 + \cdots + \Delta u_{k-1} \tag{10.19}$$
$$= \Phi^k x_0 + \sum_{j=0}^{k-1} \Phi^{k-1-j} \Delta u_j \tag{10.20}$$
We desire to minimise the sum of the absolute values of these errors, as given in Eqn. 10.17, but the absolute value sign is difficult to incorporate in a linear programming format. We can instead introduce two new non-negative variables,

$$\epsilon_{1i} := r_i - x_i, \quad \epsilon_{2i} := 0 \qquad \text{if } r_i \geq x_i \tag{10.21}$$
$$\epsilon_{1i} := 0, \quad \epsilon_{2i} := x_i - r_i \qquad \text{otherwise} \tag{10.22}$$

Note that we could write the above four relations in MATLAB without the IF statement as something like:

e1 = abs(r-x).*round(sign(r-x)/2 + 0.5)
e2 = abs(x-r).*round(sign(x-r)/2 + 0.5)

With these definitions,

$$\epsilon_{1i} + \epsilon_{2i} = |r_i - x_i| \tag{10.23}$$
$$r_i - x_i = \epsilon_{1i} - \epsilon_{2i} \tag{10.24}$$

Both relations are valid at all times because only one of either ε₁ or ε₂ is non-zero for any particular state at any particular time.
Re-writing the objective function, Eqn. 10.17, in terms of the new variables using Eqn. 10.23 gives the linear relation without the troublesome absolute values,

$$J = \sum_{k=0}^{N} \sum_{i=1}^{n} \left( \epsilon_{1i} + \epsilon_{2i} \right)_k \tag{10.25}$$
The penalty paid is that we have doubled the number of decision variables, thereby doubling the
optimisation space.
Eqn. 10.25 is a linear objective function suitable for a linear program. The constraints arise from the model dynamics which must be obeyed. Substituting Eqn. 10.24 into Eqn. 10.20 gives N blocks of equality constraints, each with n equations. To make things clearer, the first few blocks are written out explicitly below.

$$\underbrace{\epsilon_{11} - \epsilon_{21}}_{r_1 - x_1} + \Delta u_0 = r_1 - \Phi x_0$$
$$\underbrace{\epsilon_{12} - \epsilon_{22}}_{r_2 - x_2} + \Phi \Delta u_0 + \Delta u_1 = r_2 - \Phi^2 x_0$$
$$\vdots$$
$$\underbrace{\epsilon_{1k} - \epsilon_{2k}}_{r_k - x_k} + \Phi^{k-1} \Delta u_0 + \Phi^{k-2} \Delta u_1 + \cdots + \Delta u_{k-1} = r_k - \Phi^k x_0$$
Naturally this is best written in a matrix form.
$$\begin{bmatrix} \Delta & 0 & \cdots & 0 & I & -I & & & \\ \Phi\Delta & \Delta & \cdots & 0 & & & I & -I & \\ \vdots & \vdots & \ddots & \vdots & & & & & \ddots \\ \Phi^{N-1}\Delta & \Phi^{N-2}\Delta & \cdots & \Delta & & & & & I \; {-I} \end{bmatrix} \begin{bmatrix} \Delta u_0 \\ \vdots \\ \Delta u_{N-1} \\ \epsilon_{11} \\ \epsilon_{21} \\ \epsilon_{12} \\ \epsilon_{22} \\ \vdots \\ \epsilon_{1N} \\ \epsilon_{2N} \end{bmatrix} = \begin{bmatrix} r_1 - \Phi x_0 \\ r_2 - \Phi^2 x_0 \\ \vdots \\ r_N - \Phi^N x_0 \end{bmatrix} \tag{10.26}$$
The dimensions of the partitions in this large constraint matrix are given in Fig. 10.19 where
N is the number of future sample horizons, n is the number of states, and m is the number of
manipulated variables.
[Figure 10.19: The dimensions of the partitions in the constraint matrix: the Δu partition is Nn × Nm, and the ε partition of paired identity matrices I, -I is Nn × 2Nn.]
Standard linear programs require non-negative decision variables, while the control moves Δu may be of either sign, so we substitute the difference of two new non-negative variables,

$$\Delta u = s - t, \quad s, t \geq 0 \tag{10.27}$$

If the unrestricted Δu > 0, then t = 0, otherwise Δu is negative and s = 0. This substitution modifies the constraint equation to
$$\begin{bmatrix} \Delta & \cdots & 0 & -\Delta & \cdots & 0 & I & -I & \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & & & \ddots \\ \Phi^{N-1}\Delta & \cdots & \Delta & -\Phi^{N-1}\Delta & \cdots & -\Delta & & & I \; {-I} \end{bmatrix} \begin{bmatrix} s_0 \\ \vdots \\ s_{N-1} \\ t_0 \\ \vdots \\ t_{N-1} \\ \epsilon_{11} \\ \epsilon_{21} \\ \vdots \\ \epsilon_{2N} \end{bmatrix} = \begin{bmatrix} r_1 - \Phi x_0 \\ \vdots \\ r_N - \Phi^N x_0 \end{bmatrix}$$
Using this ordering of decision variables, the objective function, Eqn. 10.25, is the sum of the ε's. In linear programming, the objective function is written as J = c^T x, where c is termed the cost vector, and x is the decision variable vector to be optimised. In our case the cost vector is given as

$$c^T = \Big[ \; \underbrace{0 \; 0 \; \cdots \; 0}_{2Nm} \;\; \underbrace{1 \; 1 \; \cdots \; 1}_{2Nn} \; \Big] \tag{10.28}$$
Note that the manipulated variables do not explicitly enter the objective function. Naturally the
ones in Eqn. 10.28 could be replaced by different positive individual time and state weightings, if
so desired.
This completes the formal development of the LP as far as the unconstrained optimum controller
is concerned. However since we are already solving a constrained optimisation problem, there
is little overhead in adding extra constraints such as controller upper and lower bounds to the
problem. In practice the manipulated space is limited, (a control valve can only open so far, an
electric drive is limited in top speed), so we are interested in constraints of the form
$$u_{\text{lower}} \leq u \leq u_{\text{upper}} \tag{10.29}$$
SIMPLIFICATIONS
When using linprog.m from the OPTIMIZATION TOOLBOX, certain simplifications are possible to the standard LP problem form. These simplifications reduce the problem size, and therefore also reduce the solution time. Since the algorithm does not make the non-negative constraint assumption, we do not need to introduce the variable transformation given by Eqn. 10.27, and suffer the associated growth of the constraint matrix. This is not the case in general; if we were to use other LP routines such as described in [204] or [78], then we would need to use non-negative variables.
Most LP programs assume the constraints are given as Ax ≤ b, i.e. less-than inequalities rather than equalities as in Eqn. 10.26. While the conversion is trivial, it again increases the number of constraint equations. However the routine linprog conveniently allows the user to specify both equality and inequality constraints.
The third simplification is to exploit how linprog handles constant upper and lower bounds such as needed in Eqn. 10.29. These are not added to the general constraint matrix, Ax ≤ b, but are supplied separately to the routine. All these modifications reduce the computation required to solve the problem.
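In outline, the resulting call then has the shape below (a sketch using the quantities developed above; the bound vectors lb and ub collect Eqn. 10.29 and the non-negativity of the ε's):

xopt = linprog(c, [], [], Aeq, beq, lb, ub); % equality-constrained LP
dU = xopt(1:N*m);                            % the first N*m variables are the moves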
AN OPTIMAL CONTROL EXAMPLE
Listing 10.6 demonstrates a finite time-horizon optimal controller with a randomly generated stable multiple-input/multiple-output (MIMO) model. The optimisation problem is solved using the linear program linprog from the OPTIMIZATION TOOLBOX.
Listing 10.6: Optimal control using linear programming
n = 2; m = 2; % # of states & manipulated variables
[Ac,Bc,C,D]=rmodel(n,1,m); % Some arbitrary random model for testing
Gc = ss(Ac,Bc,C,D);
dt = min(1. ./abs(eig(Ac)))/0.5; % sparse sample time reasonable guess
G = c2d(Gc,dt,'zoh');
G.d = zeros(size(G.d)); % clear this
if rank(ctrb(G)) ~= n
error('System not controllable')
end % if
N = 20;
% future optimisation horizon
x0 = randn(n,1);
% random start point
r = [ones(floor(N/2),1)*randn(1,n); ...
ones(ceil(N/2),1)*randn(1,n)]; % setpoint change
% start construction of the LP matrix
Phin = G.b; Aeq = G.b; % better for when m > 1
for i=2:N
Phin = G.a*Phin;
Aeq = [Aeq; Phin]; % rough method
end % for
Z = zeros(size(G.b)); % padding
for i=2:N
Aeq = [Aeq, [Z;Aeq(1:(N-1)*n,(i-2)*m+1:(i-1)*m) ]];
end % for i
A(:,1:2*N*n) = -eye(2*N*n); % inequality part for the epsilon variables
% ... (cost vector assembly & the call to linprog elided in this excerpt)
subplot(2,1,2);
plot(ts,Us,t,ones(size(t))*[ulow, uhigh],'--');
h = line(ts,Us); set(h,'LineWidth',2);
%set(gca,'YLim',[-4,4.5])
xlabel('sample #'); ylabel('Manipulated')
% Sub-sample output response
nsub = 500; % # of sub-samples total (approx 500)
ts2 = ts(:,1) + [1:length(ts)]'/1e10; ts2(1) = 0; % make strictly monotonic for interp1
t2 = linspace(0,t(end),nsub)';
U2 = interp1(ts2,Us,t2);
[y,t2,xoptsc] = lsim(Gc,U2,t2,x0);
subplot(2,1,1);
plot(ts(:,1),rs,':',t2,xoptsc);
h = line(t,xopt); set(h,'Marker','.','MarkerSize',24,'LineStyle','none');
ylabel('state'); title('LP optimal control')
You should run the simulation a number of times with different models to see the behaviour of the optimal controller. One ideal 2-input/2-state case is given in Fig. 10.20. Here the states reach the setpoint in two sample times, and remain there. The manipulated variables remain inside the allowable upper (4) and lower (-3) bounds. It is clear that the trajectory is optimal, since the sum of the absolute error at the sample points is zero. One cannot do better than that.
In another case, the manipulated bounds may be active and consequently the setpoint may not be
reached in the minimum time, and our objective function will not reach zero. In Fig. 10.21, both
steady-states are reachable, but owing to the upper and lower bounds on u, they cannot be met
in just two sample times. In other words we have hit the manipulated variable constraints during
the transients.
Our system does not need to be square; Fig. 10.22 shows the results for a 3-state, 2-input problem. In this case, one state is left floating. Some small inter-sample ripple is also evident in this case.
Note that since this is a predictive controller, and it knows about future setpoint changes, it anticipates the changes and starts moving the states before the setpoint change. Fig. 10.23 shows the case for a 2 × 3 system where, owing to the active upper and lower bounds on u, the controller knows that it cannot reach the setpoint in one sample time, hence it chooses a trajectory that will minimise the absolute error and, in doing so, starts the states moving before the setpoint change.
Figure 10.20: LP optimal control with no active manipulated variable bounds. Note the perfect
controlled performance. We hit the setpoint in one sample time, and stay there. One cannot
do much better than that within the limitations of discrete control. The plot was generated by
Listing 10.6.
[Figure 10.21: LP optimal control with active manipulated variable bounds: the setpoints cannot be reached in minimum time.]
[Figure 10.22: LP optimal control of a non-square 3-state, 2-input system.]
Figure 10.23: LP optimal control showing acausal behaviour.
APPENDIX A

LIST OF SYMBOLS
SYMBOL       DESCRIPTION                     UNITS

t            time                            s
T, Δt        sample time                     s
τ            time constant                   s
K            gain
α            shape factor
δ(·)         Dirac delta function
f(t)         function of time
f(kT)        sampled function f(kT)
L(·)         Laplace transform
Z(·)         z transform
s            Laplace variable
z            z transform variable
q⁻¹          backward shift operator
ω            radial velocity                 radians/s
f            frequency                       Hz
V(x)         scalar Lyapunov function        -
P, Q         positive definite matrices
J            Jacobian matrix
num          numerator
den          denominator

M            mass                            kg
F            material flow                   kg/s
ρ            density                         kg/m³
h            height                          m
A            cross sectional area            m²
ΔP           pressure differential           kPa
T            temperature                     K, °C
H            enthalpy                        kJ
I            current                         mA
cp           heat capacity                   kJ/kg.K
ΔT           temperature differential        K
Q, q         heat loss (gain)                kJ/s
J            moment of inertia
θ            angle                           radians
Λ            relative gain matrix            -

u            manipulated variable
ε            error
τi           integral time                   s
τd           derivative time                 s
Kc           proportional gain
Ku           ultimate gain
ωu           ultimate frequency              rad/s
Pu           ultimate period                 s

φm           phase margin
τi*, τd*     actual PID parameters           s
θd           dead time                       samples, s
P            period                          s
P            signal power
w            weight

M            model
S            system
θ            vector of parameters
ε            error vector
t            t-statistic
cov(·)       co-variance
X            past input/output data
K            gain matrix
P            co-variance matrix
I            identity matrix
λ            forgetting factor
f            factor
E{·}         expected value

T            sample time                     s
x            state vector                    n × 1
u            input vector                    m × 1
y            output vector                   r × 1
d            disturbance vector              p × 1
A            system matrix                   n × n
B            control matrix                  n × m
D            disturbance matrix              n × p
C            measurement matrix              r × n
Φ            transition matrix               n × n
Δ            discrete control matrix         n × m
Θ            discrete disturbance matrix     n × p
θ            time delay                      s
k            # whole samples delay
z            augmented state vector
Co           controllability matrix
Ob           observability matrix
K            controller gain
L            observer gain                   -

J, j         performance index               $
Q, R         weighting matrices
x̄            mean of x
σx           standard deviation of x
λ, Λ         Lagrange multipliers
H            Hamiltonian
ε            termination criteria

SUBSCRIPTS   DESCRIPTION
N            Nyquist
u, l         upper & lower
ss           steady-state

SUPERSCRIPTS DESCRIPTION
x̂            estimate (of x)
x̄            mean (of x)
x*           desired value (of x)
APPENDIX B

USEFUL UTILITY FUNCTIONS IN MATLAB
The routine in Listing B.1 adds two or more polynomials of possibly differing lengths. The resultant polynomial order is the same as the maximum order of the input polynomials. Note that this routine pads with zeros to the left so that all the polynomials are of the same size. It is assumed (but not checked) that the polynomials are row vectors in descending order.
Listing B.1: Polynomial addition.
function R = polyadd(A,varargin)
% Adds two or more row vectors of possibly differing lengths.
% R(x) = A(x) + B(x) + C(x) + ...
np = length(varargin); nmax = length(A);
for i=1:np % find maximum order in the input argument list
  nmax = max(nmax,length(varargin{i}));
end % for
R = [zeros(1,nmax-length(A)),A]; % left-pad with zeros
for i=1:np
  varargin{i} = [zeros(1,nmax-length(varargin{i})), varargin{i}];
  R = R+varargin{i};
end % for
return
This routine is useful for checking the results of a Diophantine solution, A(x)R(x) + B(x)S(x) = T (x).
Tc = polyadd(conv(A,R),conv(B,S))
The routine in Listing B.2 convolves or multiplies two or more polynomials. Unlike polynomial addition, we do not need to ensure that all the polynomials are of the same length.
Listing B.2: Convolution (polynomial multiplication) of two or more polynomials.

function y = mconv(a,varargin)
% Multiple convolution of polynomial vectors
% R(x) = A(x)B(x)C(x)...
% Would be faster to divide & conquer, doing lots of the smaller ones first.
if nargin < 2
    error('Need two or more arguments')
end % if
y = a;
for i=1:length(varargin) % convolve pairwise through the argument list
    y = conv(y,varargin{i});
end % for
return
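For example, multiplying out (x + 1)(x + 2)(x + 3):

>> mconv([1 1],[1 2],[1 3])
ans =
     1     6    11     6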
Listing B.3 strips the leading zeros from a row polynomial, and optionally returns the number of zeros stripped, which may, depending on the application, be the deadtime.
Listing B.3: Strip leading zeros from a polynomial.
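A minimal sketch consistent with that description (the function name stripleadzeros and the output ordering are assumptions, not necessarily the original) is:

function [B,n] = stripleadzeros(A)
% Strip leading zeros from a row polynomial A.
% Optionally returns n, the # of zeros stripped (possibly the deadtime).
n = find(A ~= 0, 1) - 1;   % index of first non-zero coefficient, less one
if isempty(n)              % the all-zero polynomial
    B = 0; n = length(A);
else
    B = A(n+1:end);
end % if
return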
Appendix C

Transform pairs

time function f(t)                       Laplace transform F(s)     z-transform F(z)
unit step                                1/s                        z/(z − 1)
impulse δ(t)                             1                          1
unit ramp t                              1/s²                       Tz/(z − 1)²
t²                                       2/s³                       T²z(z + 1)/(z − 1)³
tⁿ                                       n!/sⁿ⁺¹
e⁻ᵃᵗ                                     1/(s + a)                  z/(z − e⁻ᵃᵀ)
t e⁻ᵃᵗ                                   1/(s + a)²                 Tz e⁻ᵃᵀ/(z − e⁻ᵃᵀ)²
(1 − at) e⁻ᵃᵗ                            s/(s + a)²
sin(at)                                  a/(s² + a²)                z sin(aT)/(z² − 2z cos(aT) + 1)
cos(at)                                  s/(s² + a²)
(e⁻ᵃᵗ − e⁻ᵇᵗ)/(b − a)                    1/((s + a)(s + b))
e⁻ᵇᵗ sin(at)                             a/((s + b)² + a²)
e⁻ᵇᵗ cos(at)                             (s + b)/((s + b)² + a²)
1/(ab) + (a e⁻ᵇᵗ − b e⁻ᵃᵗ)/(ab(b − a))   1/(s(s + a)(s + b))
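Individual rows of this table are easy to spot-check numerically. For example, long division of the z-transform of e⁻ᵃᵗ (implemented with filter) recovers the sampled time function; the values of a and T below are arbitrary choices.

a = 0.5; T = 0.1; N = 6;
f = exp(-a*T*(0:N-1))                   % samples of f(t) = exp(-a*t) at t = kT
num = [1, 0]; den = [1, -exp(-a*T)];    % z/(z - exp(-aT)) from the table above
g = filter(num,den,[1, zeros(1,N-1)])   % inverse z-transform by long division
% The two rows f and g should be identical (to rounding).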
Appendix D

Useful test models

The following is a collection of multivariable models in state-space format which can be used to test control and estimation algorithms.

D.1 Forced circulation evaporator

The linearised forced circulation evaporator developed in [147, chapter 2] and mentioned in section 3.2.3, written in continuous state-space form, ẋ = Ax + Bu + Bd d, is
A  = [0, 0.10455, 0.37935;
      0, −0.1, 0;
      0, −1.034×10⁻², −5.4738×10⁻²]                              (D.1)

B  = [−0.1, −0.37266, 0;
      −0.1, 0, 0;
      0, 3.6914×10⁻², −7.5272×10⁻³]

Bd = [0.36676, 0.38605, 0, −3.636×10⁻², 0;
      0, 0.1, 0.1, 0, 0;
      3.6302×10⁻², 3.2268×10⁻³, 0, 3.5972×10⁻³, 1.7785×10⁻²]     (D.2)

where the state, manipulated input and disturbance vectors are

x = [L2, x2, P2]ᵀ,   u = [F2, P100, F200]ᵀ                       (D.3)

d = [F3, F1, x1, T1, T200]ᵀ                                      (D.4)

In this case we assume that the concentration cannot be measured online, so the output equation is

y = [1, 0, 0;
     0, 0, 1] x                                                  (D.5)
In Matlab, the complete model with the manipulated and disturbance inputs collected into a single input matrix is

A = [0, 0.10455, 0.37935; ...
     0, -0.1, 0; ...
     0, -1.034e-2, -5.4738e-2];
B = [-0.1, -0.37266, 0, 0.36676, 0.38605, ...
      0 , -3.636e-2, 0; ...
     -0.1, 0, 0, 0, 0.1, ...
      0.1, 0 , 0; ...
      0, 3.6914e-2, -7.5272e-3, 3.6302e-2, 3.2268e-3, ...
      0 , 3.5972e-3, 1.7785e-2];
C = [1, 0, 0; ...
     0, 0, 1];
G = ss(A,B,C,0);
G.InputName = {'F2','P100','F200','F3','F1','x1','T1','T200'}
G.Notes = {'Newell & Lee Evaporator'};
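Once entered, the model is easily interrogated. A brief sketch, assuming G was built as above:

eig(G.a)                      % open-loop poles: an integrator plus two stable modes
rank(ctrb(G.a,G.b(:,1:3)))    % = 3 if controllable from the manipulated inputs alone
step(G(:,1:3),100)            % responses to steps in F2, P100 & F200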
D.2 Aircraft model

The following model is of an aircraft reported in [62, p31]. The state model is

A = [0,  0,       1.132,  0,      −1;
     0, −0.0538, −0.1712, 0,       0.0705;
     0,  0,       0,      1,       0;
     0,  0.0485,  0,     −0.8556, −1.013;
     0, −0.2909,  0,      1.0532, −0.6859]                       (D.6)

B = [0,      0, 0;
     −0.12,  1, 0;
     0,      0, 0;
     4.419,  0, −1.665;
     1.575,  0, −0.0732]

C = [1, 0, 0, 0, 0;
     0, 1, 0, 0, 0;
     0, 0, 1, 0, 0]                                              (D.7)
where the aircraft states & control inputs are defined as:

state  description           units    measured
x1     altitude              m        yes
x2     forward velocity      m/s      yes
x3     pitch angle           degrees  yes
x4     pitch rate, ẋ3        deg/s    no
x5     vertical speed        m/s      no
u1     spoiler angle         deg/10
u2     forward acceleration  m/s²
u3     elevator angle        deg
Note that we only measure altitude, velocity and pitch angle; we do not measure the rate states.
A reasonable initial condition is

x₀ = [10, 100, 15, 1, 25]ᵀ
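In Matlab the model and a free response from this initial condition can be run as follows (a sketch; the 20 s simulation horizon is an arbitrary choice):

A = [0,  0,       1.132,  0,      -1; ...
     0, -0.0538, -0.1712, 0,       0.0705; ...
     0,  0,       0,      1,       0; ...
     0,  0.0485,  0,     -0.8556, -1.013; ...
     0, -0.2909,  0,      1.0532, -0.6859];
B = [0, 0, 0; -0.12, 1, 0; 0, 0, 0; 4.419, 0, -1.665; 1.575, 0, -0.0732];
C = [eye(3), zeros(3,2)];     % measure altitude, velocity & pitch angle only
G = ss(A,B,C,0);
x0 = [10, 100, 15, 1, 25]';   % suggested initial condition
initial(G,x0,20)              % unforced response of the measured outputs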
Bibliography

[1] J. Abate and P.P. Valko. Multi-precision Laplace transform inversion. International Journal for Numerical Methods in Engineering, 60:979–993, 2004. 34, 36

[2] James L. Adams. Flying Buttresses, Entropy, and O-rings: The World of an Engineer. Harvard University Press, 1991. 10

[3] Advantech Co. Ltd. PC-Multilab Card User's Manual, 1992. 16

K. J. Åström. Computer Control of a Paper Machine: an Application of Linear Stochastic Control Theory. IBM Journal of Research and Development, 11(4):389–405, 1967. 328

[11] K. J. Åström. Maximum likelihood and prediction error methods. Automatica, 16:551–574, 1980. 285

[12] K. J. Åström.

[13] K. J. Åström and B. Wittenmark. On Self Tuning Regulators. Automatica, 9:185–199, 1973. 328

[14] Karl J. Åström and Tore Hägglund. Advanced PID Control. ISA, Research Triangle Park, NC, USA, 2006. 138, 186

[15] Karl-Johan Åström and Björn Wittenmark. Computer-Controlled Systems: Theory and Design. Prentice-Hall, 3rd edition, 1997. 65, 147, 363, 377

[21] K.J. Åström and R.D. Bell. Drum-boiler dynamics. Automatica, 36(3):363–378, 2000. 186

[22] K.J. Åström and B. Wittenmark. Adaptive Control. Addison-Wesley, 1989. 332

[23] K.J. Åström and B. Wittenmark. Computer Controlled Systems: Theory and Design. Prentice-Hall, 2nd edition, 1990. 316, 348, 351, 358

[24] Gregory L. Baker and J. P. Gollub. Chaotic Dynamics: An Introduction. Cambridge University Press, 1990. 126

[25] Yonathan Bard. Nonlinear Parameter Estimation. Academic Press, 1974. 279

[26] R.A. Bartlett, A. Wachter, and L. T. Biegler. Active set vs. interior point strategies for model predictive control. In Proceedings of the American Control Conference, pages 4229–4233, Chicago, Illinois, USA, 2000. 481

[27] B. Bellingham and F.P. Lees. The detection of malfunction using a process control computer: A Kalman filtering technique for general control loops. Trans. IChemE, 55:253–265, 1977. 464

[28] Alberto Bemporad, Manfred Morari, and N. Lawrence Ricker. Model Predictive Control Toolbox. The MathWorks Inc., 2006. 478

[29] Alberto Bemporad, Manfred Morari, and N. Lawrence Ricker. Model Predictive Control Toolbox 3. Technical report, The MathWorks, 2009. 481, 495

[33] C. Bohn and D.P. Atherton. An Analysis Package Comparing PID Anti-Windup Strategies. IEEE Control Systems, 15:34–40, 1995. 147

[34] G.E.P. Box and G.M. Jenkins. Time Series Analysis: Forecasting and Control. Holden-Day, 1970. 240, 247, 313, 314

[35] M. Braun. Differential Equations and their Applications. Springer-Verlag, 1975. 90, 124

[36] John W. Brewer. Kronecker products and matrix calculus in system theory. IEEE Transactions on Circuits and Systems, 25(9):772–781, September 1978. 84, 85

[37] E.H. Bristol. On a new measure of interaction for multivariable process control. IEEE Transactions on Automatic Control, AC-11:133–134, 1966. 107

[38] Jens Trampe Broch. Principles of Experimental Frequency Analysis. Elsevier Applied Science, London & New York, 1990. 224

[39] Robert Grover Brown and Patrick Y.C. Hwang. Introduction to Random Signals and Applied Kalman Filtering. John Wiley & Sons, 2nd edition, 1992. 444

A Multivariable Self-tuning Regulator to Control a Double Effect Evaporator. Automatica, 17(5):737–743, 1981. 97

[43] Richard L. Burden and J. Douglas Faires. Numerical Analysis. PWS Publishing, 5th edition, 1993. 124

[44] C. Sidney Burrus, James H. McClellan, Alan V. Oppenheim, Thomas W. Parks, Ronald W. Schafer, and Hans W. Schuessler. Computer-Based Exercises for Signal Processing using Matlab. Prentice-Hall, 1994. 19, 207

[46] C.C. Hang, K.J. Åström, and Q.G. Wang. Relay feedback auto-tuning of process controllers: a tutorial overview. J. Process Control, 12:143–162, 2002. 183

[47] B. Chachuat. Nonlinear and dynamic optimization: From theory to practice. Technical report, Automatic Control Laboratory, EPFL, Switzerland, 2007. Available from infoscience.epfl.ch/record/111939/files/. 409, 418

[48] Edward R. Champion. Numerical Methods for Engineering Applications. Marcel Dekker, Inc., 1993. 124

[49] Cheng-Liang Chen. A simple method of on-line identification and controller tuning. AIChE Journal, 35(12):2037–2039, 1989. 164

[50] Cheng-Liang Chen. A closed-loop reaction-curve method for controller tuning. Chemical Engineering Communications, 104:87–100, 1991. 164, 171

[51] Robert Chote. Why the Chancellor is always wrong. New Scientist, page 26, 31 October 1992. 95

[52] C.K. Chui and G. Chen. Kalman Filtering with Real-Time Applications. Springer-Verlag, 1987. 444, 465, 469

[55] David W. Clarke. PID Algorithms and their Computer Implementation. Trans. Inst. Measurement and Control, 6(6):305–316, Oct–Dec 1984. 138

[71] G.F. Franklin and J.D. Powell. Digital Control of Dynamic Systems, pages 131–183. Addison-Wesley, 1980. 45, 72, 376

[72] G.F. Franklin, J.D. Powell, and M.L. Workman. Digital Control of Dynamic Systems. Addison-Wesley, 3rd edition, 1998. 279

[73] C.E. Garcia, D.M. Prett, and M. Morari. Model predictive control: Theory and practice – a survey. Automatica, 25(3):335–348, 1989. 475

[74] Paul Geladi and Bruce R. Kowalski. Partial Least-Squares Regression: A Tutorial. Analytica Chimica Acta, 185:1–17, 1986. 114

[75] S.F. Goldmann and R.W.H. Sargent. Applications of linear estimation theory to chemical processes: A feasibility study. Chemical Engineering Science, 26:1535–1553, 1971. 454

[76] Graham C. Goodwin, Stefan F. Graebe, and Mario E. Salgado. Control System Design. Prentice-Hall, 2001. 130

[77] Felix Gross, Dag Ravemark, Peter Terwiesch, and David Wilson. The Dynamics of Chocolate in Beer: The Kinetic Behaviour of theobroma cacao Paste in a CH3CH2OH–H2O–CO2 Solution. Journal of Irreproducible Results, 37(4):2–4, 1992. 241

[78] Tore K. Gustafsson and Pertti M. Mäkilä.

[85] David R. Hill. Experiments in Computational Matrix Algebra. Random House, 1988. 58

[86] David M. Himmelblau. Process Analysis by Statistical Methods. John Wiley & Sons, 1970. 119, 262

[87] Jesse B. Hoagg and Dennis S. Bernstein. Nonminimum-phase zeros: Much to do about nothing. IEEE Control Systems Magazine, pages 45–57, June 2007. 54, 61, 187

[88] Roger A. Horn and Charles R. Johnson. Topics in Matrix Analysis. Cambridge University Press, 1991. 84

[89] Morten Hovd and Sigurd Skogestad. Pairing Criteria for Unstable Plants. In AIChE Annual Meeting, paper 149i, St. Louis, November 1993. 357

[90] P. J. Huber. Robust regression: Asymptotics, conjectures, and Monte Carlo. Ann. Math. Statist., 1(5):799–821, 1973. 285

[91] Enso Ikonen and Kaddour Najim. Advanced Process Identification and Control. Marcel Dekker, 2002. 268

[94] M.L. James, G. M. Smith, and J. C. Wolford. Applied Numerical Methods for Digital Computation with Fortran and CSMP. Harper & Row, 2nd edition, 1977. 124

[105] Costas Kravaris and Jeffery C. Kantor. Geometric Methods for Nonlinear Process Control. 2. Controller Synthesis. Ind. Eng. Chem. Res., 29:2310–2323, 1990. 389

[106] Erwin Kreyszig. Advanced Engineering Mathematics. John Wiley & Sons, 7th edition, 1993. 224

[107] B. Kristiansson and B. Lennartson. Robust and optimal tuning of PI and PID controllers. IEE Proc.-Control Theory and Applications, 149(1):17–25, January 2002. 195

[114] Leon Lapidus and John H. Seinfeld. Numerical Solution of Ordinary Differential Equations. Academic Press, 1971. 90

[115] Alan J. Laub. Matrix Analysis for Scientists and Engineers. Society for Industrial and Applied Mathematics, 2004. 84

[116] P.L. Lee and G.R. Sullivan. Generic Model Control (GMC). Computers & Chemical Engineering, 12(6):573–580, 1988. 379

[133] Paul A. Lynn and Wolfgang Fuerst. Introductory Digital Signal Processing with Computer Applications. John Wiley & Sons, 2nd edition, 1994. 211, 224

[134] J. M. Maciejowski. Predictive Control with Constraints. Prentice-Hall, 2002. 495

[135] Sven Erik Mattsson. On Modelling and Differential/Algebraic Systems. Simulation, pages 24–32, January 1989. 134

[136] Cleve Moler and John Little. Matlab User's Guide. The MathWorks Inc., December 1993. 1

[143] Jerzy Moscinski and Zbigniew Ogonowski. Advanced Control with Matlab and Simulink. Ellis Horwood, 1995. 250

[144] Frank Moss and Kurt Wiesenfeld. The Benefits of Background Noise. Scientific American, pages 50–53, August 1995. See also the Amateur Scientist in the same issue. 199

[145] Ivan Nagy. Introduction to Chemical Process Instrumentation. Elsevier, 1992. 13

[146] Ioan Nascu and Robin De Keyser. A novel application of relay feedback for PID auto-tuning. In IFAC CSD'03 Conference on Control Systems Design, Bratislava, Slovak Republic, 2003. 183

[147] R.B. Newell and P.L. Lee. Applied Process Control: A Case Study. Prentice-Hall, 1989. 97, 98, 434, 460, 511

[148] R.E. Nieman and D.G. Fisher. Experimental Evaluation of Optimal Multivariable Servo-control in Conjunction with Conventional Regulatory Control. Chemical Engineering Communications, 1, 1973. 496

[149] R.E. Nieman, D.G. Fisher, and D.E. Seborg. A review of process identification and parameter estimation techniques. Int. J. of Control, 13(2):209–264, 1971. 240

[150] Katsuhiko Ogata. Discrete Time Control Systems. Prentice-Hall, 1987. 17, 25, 27, 29, 32, 39, 41, 53, 58, 66, 77, 79, 81, 86, 112, 235, 316, 345, 347, 358, 359, 361, 373, 376, 378, 410, 456

[151] Katsuhiko Ogata. Discrete-Time Control Systems. Prentice-Hall, 1987. 28

[152] Katsuhiko Ogata. Modern Control Engineering. Prentice-Hall, 2nd edition, 1990. 75, 79, 81, 84, 112, 156, 173, 358, 359, 361, 400

[153] B. A. Ogunnaike, J. P. Lemaire, M. Morari, and W. H. Ray. Advanced multivariable control of a pilot-plant distillation column. AIChE Journal, 29(4):632–640, July 1983. 101

[154] Alan V. Oppenheim and Ronald W. Schafer. Digital Signal Processing. Prentice-Hall, 1975. 210

[156] Chris C. Paige. Properties of Numerical Algorithms Related to Computing Controllability. IEEE Transactions on Automatic Control, AC-26(1):130–138, 1981. 361

[157] Sudharkar Madhavrao Pandit and Shien-Ming Wu. Time Series and System Analysis with Applications. Wiley, 1983. 313

[158] Rajni V. Patel, Alan J. Laub, and Paul M. van Dooren. Numerical Linear Algebra Techniques for Systems and Control. IEEE Press, 1994. A selected reprint volume. 66, 361, 362

[159] Linda Petzold. Differential/Algebraic Equations are not ODEs. SIAM J. Sci. Stat. Comput., 3(3):367–384, 1982. 129

[160] C.L. Phillips and H.T. Nagle. Digital Control System Analysis and Design. Prentice-Hall, 2nd edition, 1990. 204

[161] W.H. Press, B.P. Flannery, S.A. Teukolsky, and W.T. Vetterling. Numerical Recipes: The Art of Scientific Computing. Cambridge University Press, 1986. 113, 224, 225, 232, 413

[162] S. Joe Qin and Thomas A. Badgwell. A survey of industrial model predictive control technology. Control Engineering Practice, 11(7):733–764, July 2003. 475, 495

[172] William H. Roadstrum and Dan H. Wolaver. Electrical Engineering for all Engineers. John Wiley & Sons, 2nd edition, 1994. 15

[173] E.R. Robinson. Time Dependent Chemical Processes. Applied Science Publishers, 1975. 409

[174] J. A. Rossiter. Model-Based Predictive Control: A Practical Approach. CRC Press, 2003. 495

[175] Hadi Saadat. Computational Aids in Control Systems using Matlab. McGraw-Hill, 1993. 55

[176] S.P. Sanoff and P.E. Wellstead. Comments on: implementation of self-tuning regulators with variable forgetting factors. Automatica, 19(3):345–346, 1983. 301

[177] Robert Schoenfeld. The Chemist's English. VCH Verlagsgesellschaft mbH, D-6940 Weinheim, Germany, 2nd edition, 1986. 224

[178] J. Schoukens and R. Pintelon. Identification of Linear Systems. Pergamon, 1991. 113

[179] Tobias Schweickhardt and F. Allgöwer. Linear control of nonlinear systems based on nonlinearity measures. Journal of Process Control, 17:273–284, 2007. 134

[181] D.E. Seborg, T.F. Edgar, and D.A. Mellichamp. Process Dynamics and Control. Wiley, 1989. 17, 32, 108, 144, 156, 158, 257

Jonas Sjöberg, Qinghua Zhang, Lennart Ljung, Albert Benveniste, Bernard Delyon, Pierre-Yves Glorennec, Håkan Hjalmarsson, and Anatoli Juditsky. Nonlinear black-box modeling in system identification: a unified overview. Automatica, 31(12):1691–1724, 1995. Trends in System Identification. 242, 285

[187] S. Skogestad. Dynamics and Control of Distillation Columns: A Critical Survey. In DYCORD+, pages 11–35. Int. Federation of Automatic Control, 1992. 99

[188] Sigurd Skogestad. Simple analytic rules for model reduction and PID controller tuning. Journal of Process Control, 13(4):291–309, 2003. 140, 159

[189] Jean-Jacques E. Slotine and Weiping Li. Applied Nonlinear Control. Prentice-Hall, 1991. 78, 79, 389

[190] T. Söderström and P. Stoica. System Identification. Prentice-Hall, 1989. 240, 316

[191] Harold W. Sorenson, editor. Kalman Filtering: Theory and Application. Selected reprint series. IEEE Press, 1985. 444

[192] Henry Stark and John W. Woods. Probability, Random Processes, and Estimation Theory for Engineers. Prentice-Hall, 2nd edition, 1994. 445

[195] O. Taiwo. Comparison of four methods of on-line identification and controller tuning. IEE Proceedings-D, 140(5):323–327, 1993. 164, 171

[197] J. Villadsen and M.L. Michelsen. Solution of Differential Equation Models by Polynomial Approximation. Prentice-Hall, 1978. 90

[198] Stanley M. Walas. Modeling with Differential Equations in Chemical Engineering. Butterworth-Heinemann, 1991. 90, 128, 413

[199] Liuping Wang. Model Predictive Control System Design and Implementation using Matlab. Springer, 2009. 495

[200] Liuping Wang and William R. Cluett. From Plant Data to Process Control. Taylor and Francis, 11 New Fetter Lane, London, EC4P 4EE, 2000. 262, 263

[201] Ya-Gang Wang, Zhi-Gang Shi, and Wen-Jian Cai. PID autotuner and its application in HVAC systems. In Proceedings of the American Control Conference, pages 2192–2196, Arlington, VA, 25–27 June 2001. 183

[202] P.E. Wellstead and S.P. Sanoff. Extended self-tuning algorithm. International Journal of Control, 34(3):433–455, 1981. 301

[203] P.E. Wellstead and M.B. Zarrop. Self-tuning Systems: Control and Signal Processing. John Wiley & Sons, 1991. 248, 301, 307, 308, 315, 316, 318, 320, 328, 465

[204] David I. Wilson. Introduction to Numerical Analysis with Matlab or what's a NaN and why do I care? Auckland University of Technology, Auckland, New Zealand, July 2007. 465pp. 4, 124, 201, 208, 285, 401, 496, 499

[205] R.K. Wood and M.W. Berry. Terminal composition control of a binary distillation column. Chemical Engineering Science, 28(9):1707–1717, 1973. 98