APC Part 6 Introduction To State Estimation

The document introduces state estimation and its applications in process control when states cannot be directly measured. It provides examples of continuous stirred tank reactor and quad tank systems to illustrate how dynamic models can be used to infer unmeasured states from available measurements through state estimation techniques like the Kalman filter. The goal is to obtain state estimates for feedback control when not all states are measurable online.

Introduction to Soft Sensing and State Estimation

Sachin C. Patwardhan
Department of Chemical Engineering
I.I.T. Bombay
Email: [email protected]
Automation Lab

Outline
IIT Bombay

• Why state estimation?
• Observability
• Recursive Estimators and Luenberger Observer
• Optimal Recursive Estimation and Kalman Filtering
• Properties and Interpretations of Kalman Filtering
• Stationary Kalman Predictor and Time Series Models
• Extended Kalman Filtering
• Simulation examples and experimental case study
1/16/2014 State Estimation 2


Motivation

• Quality variables: product concentration, average molecular weight, melt viscosity, etc.
  – Costly to measure on-line
  – Measured through lab assays, sampled at irregular intervals
• Measurements available from wireless sensors arrive at irregular intervals due to packet losses
• For satisfactory control of such processes, quality variables / efficiency parameters should be estimated at a higher frequency
• Remedy: Soft Sensing and State Estimation

State Feedback Controller Design


Discrete time state space model:
x(k+1) = Φ x(k) + Γ u(k)
y(k) = C x(k)

State feedback multivariable control law:
u(k) = G [x_s(k) - x(k)]

• Step 1: Assume the states are measurable and design a stable control law / controller
• Step 2: Design a state estimator which constructs estimates of the states by fusing measurements with model predictions
• Step 3: Implement the controller using the estimated states

The separation principle ensures nominal closed loop stability for the state estimator-controller pair.
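The three-step design above can be sketched in a few lines of code. The scalar system and the gain values G and L below are illustrative choices (not taken from the slides), picked so that the controller pole sits at 0.5 and the observer pole at 0.4:

```python
import numpy as np

# Hypothetical scalar example: Phi, Gam, C, G, L are illustrative values.
Phi, Gam, C = 1.2, 1.0, 1.0        # open-loop unstable plant (|Phi| > 1)
G = 0.7                            # Step 1: places controller pole at Phi - Gam*G = 0.5
L = 0.8                            # Step 2: places observer pole at Phi - L*C = 0.4
x_s = 0.0                          # state setpoint (regulation problem)

x, x_hat = 1.0, 0.0                # true state and its estimate disagree at k = 0
for k in range(50):
    u = G * (x_s - x_hat)          # Step 3: control law uses the ESTIMATED state
    y = C * x                      # only y(k) is measured, not x(k) itself
    x_hat = Phi * x_hat + Gam * u + L * (y - C * x_hat)   # state estimator
    x = Phi * x + Gam * u          # true (unmeasured) plant state

print(abs(x), abs(x - x_hat))      # both go to 0: separation principle at work
```

Despite the controller acting on x_hat rather than x, the closed loop is stable, because the closed-loop eigenvalues are exactly the union of the controller poles and the observer poles.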
Inferential Measurement: Basic Idea

Since fast sampled (primary) variables (temperatures, pressures, levels, pH) are correlated with the quality variable, can we infer values of the quality variables from measurements of the primary variables?

On-line state estimation became feasible with the availability of fast computers.

Model Based Soft Sensing

Fast-rate, low-cost measurements from the plant (temperature / pressure / speed) and irregularly / slowly sampled quality variables from lab assays are fused by a dynamic model (ODEs / PDEs) to produce on-line, fast-rate estimates of the quality variables.

Soft Sensing: a cost effective solution

Soft Sensing Approaches

Soft sensing techniques fall into two broad groups:
• Static algebraic correlations: Principal Components Analysis, Neural Networks
• Dynamic model based state estimation: stochastic observers (e.g. Kalman filters) and deterministic observers (e.g. Luenberger observers)

Example: Continuously Stirred Tank Reactor

Consider the non-isothermal CSTR dynamics
dCA/dt = f1(CA, T, F, Fc, CA0, Tcin)
dT/dt = f2(CA, T, F, Fc, CA0, Tcin)

States (X) = [CA  T]^T
Measured output (Y) = [T]
Manipulated inputs (U) = [F  Fc]^T  (F: feed flow rate, Fc: coolant flow rate)
Unmeasured disturbance (Du) = [CA0]  (feed concentration)
Measured disturbance (Dm) = [Tcin]  (cooling water inlet temperature)

If the model parameters are known accurately, can we estimate CA from measurements of T alone?

CSTR: Model Parameters and Steady State Operating Point

V (reactor volume) = 1 m3 ; F (inlet flow) = 1 m3/min ;
CA0 (inlet concentration of A) = 2.0 kmol/m3 ;
T0 (inlet temperature) = 50 °C ; Fc (coolant flow) = 15 m3/min ;
Cp (specific heat of reacting mixture) = 1 cal/(g K) ;
Tcin (coolant inlet temperature) = 92 °C ;
Cpc (specific heat of coolant) = 1 cal/(g K) ;
ρ (reacting liquid density) = 10^6 g/m3 ; ρc (coolant density) = 10^6 g/m3 ;
-ΔHrxn (heat of reaction) = 130 x 10^6 cal/kmol ;
a = 1.678 x 10^6 cal/min ; b = 0.5 ; E/R = 8330.1 K

Operating steady state:
CA (concentration of A) = 0.265 kmol/m3 ; T (reactor temperature) = 121 °C

CSTR: Continuous Perturbation Model


Continuous time linear state space model (perturbation variables about the steady state, denoted by *):
x(t) = [CA(t) - CA*  T(t) - T*]^T ; u(t) = [F(t) - F*  Fc(t) - Fc*]^T ; d(t) = CAi(t) - CAi*
dx/dt = [-7.56  -0.09 ; 852.72  5.77] x(t) + [0  1.735 ; -6.07  -70.95] u(t) + [1 ; 0] d(t)
y(t) = [0  1] x(t)

Discrete time linear state space model (sampling time T = 0.1 min):
x(k+1) = [0.185  -0.01 ; 73.49  1.33] x(k) + [0.005  0.13 ; -0.73  -1.8] u(k) + [0.06 ; 3.9] d(k)
y(k) = [0  1] x(k)

Example: Quadruple Tank System

dh1/dt = -(a1/A1) √(2 g h1) + (a3/A1) √(2 g h3) + (γ1 k1/A1) v1
dh2/dt = -(a2/A2) √(2 g h2) + (a4/A2) √(2 g h4) + (γ2 k2/A2) v2
dh3/dt = -(a3/A3) √(2 g h3) + ((1 - γ2) k2/A3) v2
dh4/dt = -(a4/A4) √(2 g h4) + ((1 - γ1) k1/A4) v1

(Schematic: Tanks 3 and 4 drain into Tanks 1 and 2; Pump 1 supplies v1 and Pump 2 supplies v2.)

Manipulated inputs: v1 and v2
Measured outputs: h1 and h2

If the model parameters are known accurately, can we estimate the levels in Tanks 3 and 4 from measurements of the levels in Tanks 1 and 2?
State Estimation Problem

It is desired to implement a state feedback control law; however, not all the states are measured. Thus, given the computer-control-relevant discrete model
x(k+1) = Φ x(k) + Γ u(k) + Γd d(k)
y(k) = C x(k) + v(k)
and input-output data
y(0), y(1), ....., y(N) and u(0), u(1), ....., u(N),
can we estimate the state sequence x̂(0), x̂(1), ....., x̂(k)?

Simplified Problem Statement


Consider the ideal situation where
• disturbances and measurement errors are absent
• the model is perfect

Problem: Given measurements y(0), y(1), ....., y(N) and inputs u(0), u(1), ....., u(N), together with the model
x(k+1) = Φ x(k) + Γ u(k)
y(k) = C x(k)
estimate the state sequence x(0), x(1), ....

Since we have the model, it is sufficient to estimate only x̂(0); x̂(1), x̂(2), .... can then be generated through recursive use of the model.

Initial State

Let x(0) denote the initial state. Given the input sequence u(0), u(1), u(2), ....., we can use the model to compute
x(1) = Φ x(0) + Γ u(0)
x(2) = Φ x(1) + Γ u(1) = Φ^2 x(0) + Φ Γ u(0) + Γ u(1)
x(3) = Φ^3 x(0) + Φ^2 Γ u(0) + ....

How do we find x(0)?

Estimation of Initial State


Given measurements y(0), y(1), ....., y(n-1) and inputs u(0), u(1), u(2), ....., we can write
C x(0) = y(0)
C x(1) = y(1)  ⇒  C Φ x(0) = y(1) - C Γ u(0)
.......................................
C Φ^(n-1) x(0) = y(n-1) - C Φ^(n-2) Γ u(0) - .... - C Γ u(n-2)

Can we uniquely estimate the initial state by solving the above set of linear algebraic equations?

Estimation of Initial State

Combining the equations, we have
[C ; CΦ ; CΦ^2 ; .... ; CΦ^(n-1)] x(0) = [y(0) ; y(1) - CΓu(0) ; y(2) - CΦΓu(0) - CΓu(1) ; ......... ; y(n-1) - CΦ^(n-2)Γu(0) - .... - CΓu(n-2)]

i.e. A x(0) = b, where A is the (nr x n) stacked matrix on the left and b is the (nr x 1) known quantity on the right.

A unique solution x(0) can be found only if the matrix A has rank equal to n.
Observability

Observability: the system is said to be observable if the initial state can be uniquely estimated from the output observations.

The initial state can be uniquely estimated from measurements of inputs and outputs if the following rank condition on the observability matrix holds:
rank [C ; CΦ ; .... ; CΦ^(n-1)] = n = state dimension
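The rank condition is easy to evaluate numerically. A minimal sketch, exercised on the CSTR perturbation model that appears on the next slide:

```python
import numpy as np

def observability_matrix(Phi, C):
    """Stack C, C@Phi, ..., C@Phi^(n-1), as in the rank condition above."""
    n = Phi.shape[0]
    blocks, M = [], C
    for _ in range(n):
        blocks.append(M)
        M = M @ Phi
    return np.vstack(blocks)

# CSTR perturbation model: temperature measured, concentration not
Phi = np.array([[0.185, -0.008], [73.492, 1.333]])
C = np.array([[0.0, 1.0]])
W = observability_matrix(Phi, C)
print(np.linalg.matrix_rank(W))   # 2 = state dimension, so observable
```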

Observability: CSTR Example


Can we estimate the concentration from measurements of temperature?
Φ = [0.185  -0.008 ; 73.492  1.333] ; C = [0  1]
rank [C ; CΦ] = rank [0  1 ; 73.492  1.333] = 2
The linear perturbation model of the CSTR is observable.

Let x(0) = (0.1, 1) and u(0) = (0, 0). Then x(1) = (0.0104, 8.682), and the temperature measurements are y(0) = 1, y(1) = 8.682.
Estimated initial state from these measurements: (0.1, 1).


Quadruple Tank System

Discrete time state space model (sampling time T = 5 sec):
x(k+1) = Φ x(k) + Γ u(k)
y(k) = C x(k) + v(k)

Φ = [0.9233  0  0.1813  0 ; 0  0.9462  0  0.1493 ; 0  0  0.8112  0 ; 0  0  0  0.8465]
Γ = [0.4001  0.02276 ; 0.01209  0.3055 ; 0  0.2159 ; 0.1438  0]
C = [0.5  0  0  0 ; 0  0.5  0  0]
Quadruple Tank System

 0.5000 0 0 0 
 0 0.5000 0 0 

 C   0.4617 0 0.0906 0 
 C   0 0.4731 0 0.0746 

 
C 2   0.4263 0 0.1572 0 
 3  
C   0 0.4476 0 0.1338 
 0.3936 0 0.2048 0 
 
 0 0.4235 0 0.1800 

Rank of observability matrix = 4

Levels in Tank 3 and Tank 4 can be estimated


Using measurements of Tank 1 and Tank 2
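A short numerical sketch reproducing the observability check above, using the Φ and C matrices as printed on the slides:

```python
import numpy as np

Phi = np.array([[0.9233, 0, 0.1813, 0],
                [0, 0.9462, 0, 0.1493],
                [0, 0, 0.8112, 0],
                [0, 0, 0, 0.8465]])
C = np.array([[0.5, 0, 0, 0],
              [0, 0.5, 0, 0]])

# Observability matrix: stack C Phi^k for k = 0..3
W = np.vstack([C @ np.linalg.matrix_power(Phi, k) for k in range(4)])
print(np.linalg.matrix_rank(W))   # 4: h3 and h4 are observable from h1, h2
```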
Measurements with Errors

What if the measurements have errors?
y(k) = yT(k) + v(k)
where yT(k) is the true value and v(k) is the measurement noise. Collect a larger sample of size N >> n and perform least squares estimation:

min over x̂(0) of  Σ_{k=0..N} v̂(k)^T R^-1 v̂(k)
subject to
v̂(k) = y(k) - C [Φ^k x̂(0) + Σ_{j=1..k} Φ^(j-1) Γ u(k-j)]

R: measurement noise covariance


CSTR Example (Contd.)

Let the process initial state be (0.1, 1) and the input sequence be u(0) = u(1) = … = u(5) = (0, 0). Suppose we collect the following 6 temperature measurements corrupted with measurement noise:
Ym = (0.957, 8.516, 12.353, 11.498, 6.975, 1.291)

The least squares estimate of the state vector is
x̂(0) = (0.1003, 0.924)
The estimate improves if more measurements are added.

Difficulty in on-line implementation: the optimization problem size grows with time!
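The batch least-squares idea can be sketched as follows. Here synthetic noise-free measurements are generated from the model itself (so recovery is exact), unlike the noisy sequence Ym above; with a single measured output, the scalar R^-1 weighting does not change the minimizer, so it is omitted:

```python
import numpy as np

Phi = np.array([[0.185, -0.008], [73.492, 1.333]])
C = np.array([[0.0, 1.0]])
x0_true = np.array([0.1, 1.0])

# With u(k) = 0, y(k) = C Phi^k x(0): stack these rows over k = 0..N-1
# and solve for x(0) in a least squares sense.
N = 6
A = np.vstack([C @ np.linalg.matrix_power(Phi, k) for k in range(N)])
b = A @ x0_true                       # synthetic noise-free "measurements"
x0_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x0_hat)                         # ≈ [0.1, 1.0], the true initial state
```

Adding measurement noise to b reproduces the situation on the slide: the estimate is then only approximate, and improves as N grows, but so does the size of the problem.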
On-line State Observer

On-line recursive estimation of the states from measured data and a mathematical model.

(Block diagram: the input u(k) drives both the process, which produces y(k), and the model, which produces ŷ(k). Note x(0) ≠ x̂(0).)

True process:
x(k+1) = Φ x(k) + Γ u(k)   ....(1)
y(k) = C x(k)

"Open-loop" state estimator:
x̂(k+1) = Φ x̂(k) + Γ u(k)   ....(2)
ŷ(k) = C x̂(k)

Difficulty: the initial state x̂(0) is not known exactly.


On-line State Observer

Defining the estimation error
ε(k) = x(k) - x̂(k)
and subtracting (2) from (1), we have
ε(k+1) = Φ ε(k)  ⇒  ε(k) = Φ^k ε(0)
If the process is stable, i.e. ρ(Φ) < 1, then ε(k) → 0 as k → ∞.

Difficulties: this open-loop scheme cannot be used if the process is marginally stable or unstable, and, even when ρ(Φ) < 1, ρ(Φ) decides the rate of convergence of ε(k). Can we accelerate the convergence?
“Closed Loop” State Observer

Open-loop observer difficulties:
1. Not applicable to unstable systems
2. Rate of convergence governed by the spectral radius of the Φ matrix

(Block diagram: the output prediction error e(k) = y(k) - ŷ(k), formed by comparing the process output with the model output, is fed back to the model.)

Use of the output prediction error can:
1. Stabilize the estimator for unstable processes
2. Improve the rate of convergence for stable systems
Recursive Estimation

Recursive on-line state estimator:
x̂(k+1) = Φ x̂(k) + Γ u(k) + L e(k)
e(k) = y(k) - C x̂(k) = y(k) - ŷ(k)
where e(k), the estimation error, provides the feedback correction.

True process dynamics (deterministic case):
x(k+1) = Φ x(k) + Γ u(k)
y(k) = C x(k)
How do we choose the estimator gain matrix L such that the estimation error reduces to zero as quickly as possible?
Estimator Error Dynamics

Estimation error: ε(k) = x(k) - x̂(k)
ε(k+1) = (Φ - LC) ε(k)
or ε(k) = (Φ - LC)^k ε(0)
Choose the observer gain L such that
max_i |λ_i(Φ - LC)| < 1
where λ_i(·) is the i-th eigenvalue of the matrix (Φ - LC).
The above choice ensures ε(k) → 0 as k → ∞, since (Φ - LC)^k → null matrix as k → ∞, irrespective of the choice of x̂(0), i.e. of ε(0).
Single Output System (SOS): Luenberger Observer
Deterministic observer design: choose the observer gain matrix L such that the matrix Φ - LC has poles at the desired locations (pole placement).

Choice of observer poles: a compromise between decay of the estimation error and sensitivity to measurement noise / modeling errors. Choosing poles so as to systematically account for measurement noise and unmeasured disturbances is difficult.
Consequence: sub-optimal performance in the presence of stochastic disturbances.
SOS Luenberger Observer
Coordinate transformation: η(k) = T x(k)

Original model:
x(k+1) = Φ x(k) + Γ u(k)   .......(I)
y(k) = C x(k)

Observable canonical form:
η(k+1) = Φo η(k) + Γo u(k)   .......(II)
y(k) = Co η(k)
Φo = [-a1  1  0 ....  0 ; -a2  0  1 ....  0 ; .... ; -an  0 ....  0] ; Γo = [b1 ; b2 ; .... ; bn] ; Co = [1  0 .....  0]

Transfer function:
y(k) = [(b1 z^(n-1) + b2 z^(n-2) + ...... + bn) / (z^n + a1 z^(n-1) + .... + an)] u(k)

Design procedure:
• Transform the model to observable canonical form
• In the transformed coordinates, choose the observer gain vector such that the poles of (Φo - LoCo) are placed at the desired locations
• Express the observer gain matrix in the original coordinate system
SOS Luenberger Observer
Design the observer in transformed coordinates:
η̂(k+1) = Φo η̂(k) + Γo u(k) + Lo Co [η(k) - η̂(k)]
Φo - LoCo = [-a1 - lo,1  1  0 ....  0 ; -a2 - lo,2  0  1 ....  0 ; .... ; -an - lo,n  0 ....  0]
det[λI - (Φo - LoCo)] = λ^n + (a1 + lo,1) λ^(n-1) + ....... + (an + lo,n)

Let the desired observer characteristic polynomial be
P(λ) = λ^n + p1 λ^(n-1) + ....... + pn
where the polynomial on the R.H.S. has its roots at the desired pole locations.

Equating the coefficients of det[λI - (Φo - LoCo)] with those of P(λ), we have
pi = ai + lo,i  ⇒  lo,i = pi - ai  for i = 1, 2, ..., n
SOS Luenberger Observer
Transform the observer gain Lo back to the original state space as
L = T^-1 Lo
with the coordinate transformation η = T x given by
T = W̃OBS^-1 WOBS
W̃OBS = [Co ; CoΦo ; ..... ; CoΦo^(n-1)] ; WOBS = [C ; CΦ ; ..... ; CΦ^(n-1)]

Note that the above coordinate transformation is possible only if the original system is observable, i.e. Rank(WOBS) = n.
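For a single-output system, the canonical-form procedure above collapses into Ackermann's formula, L = P(Φ) WOBS^-1 [0 ... 0 1]^T, where P(·) is the desired characteristic polynomial; this is equivalent to designing in observable canonical form and transforming back. A sketch, exercised on the CSTR model with both observer poles at 0.5 (the pole locations used in a later example):

```python
import numpy as np

def luenberger_gain(Phi, C, poles):
    """Observer gain via Ackermann's formula:
    L = P(Phi) @ inv(W_obs) @ [0, ..., 0, 1]^T."""
    n = Phi.shape[0]
    W = np.vstack([C @ np.linalg.matrix_power(Phi, k) for k in range(n)])
    coeffs = np.poly(poles)            # desired characteristic polynomial P
    P_Phi = np.zeros_like(Phi)
    for c in coeffs:                   # Horner evaluation of P at the matrix Phi
        P_Phi = P_Phi @ Phi + c * np.eye(n)
    e_n = np.zeros(n)
    e_n[-1] = 1.0
    return P_Phi @ np.linalg.solve(W, e_n).reshape(n, 1)

Phi = np.array([[0.185, -0.008], [73.492, 1.333]])
C = np.array([[0.0, 1.0]])
L = luenberger_gain(Phi, C, [0.5, 0.5])    # both observer poles at 0.5
print(np.linalg.eigvals(Phi - L @ C))      # eigenvalues land at 0.5
```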
Prediction Estimation
The observer we have designed corresponds to "prediction estimation":
x̂(k+1|k) = Φ x̂(k|k-1) + Γ u(k) + L_P [y(k) - C x̂(k|k-1)]
x̂(k+1|k): prediction estimate of the state at time instant (k+1), based on information up to time instant (k).

This form can be employed if the sampling time is very small and the time for the estimator calculations is significant relative to the sampling interval: the x̂(k|k-1) calculations can be carried out during the intersample period and used for controller implementation at the k-th instant.
Disadvantage: unit information delay.

Prediction Estimation and Current State Estimation


Current state estimator:
Prediction step:
x̂(k|k-1) = Φ x̂(k-1|k-1) + Γ u(k-1)
Measurement update:
x̂(k|k) = x̂(k|k-1) + L_C [y(k) - C x̂(k|k-1)]
Estimation error dynamics:
ε(k+1|k) = Φ ε(k|k)
ε(k|k) = [I - L_C C] ε(k|k-1)
⇒ ε(k+1|k) = Φ [I - L_C C] ε(k|k-1)
The prediction estimator and current state estimator gain matrices are related as
L_P = Φ L_C  or  L_C = Φ^-1 L_P
CSTR Example

Linearized (original) state space model:
x(k+1) = [0.185  -0.01 ; 73.49  1.33] x(k) + [0.005  0.13 ; -0.73  -1.8] u(k)
y(k) = [0  1] x(k)

Observable canonical form:
η(k+1) = [1.518  1 ; -0.836  0] η(k) + [-0.7335  -1.797 ; 0.3256  -10.18] u(k)
y(k) = [1  0] η(k)

L_P = T^-1 [p1 + 1.518 ; p2 - 0.836]

T = W̃OBS^-1 WOBS = [1  0 ; 1.518  1]^-1 [0  1 ; 73.492  1.333] = [0  1 ; 73.492  -0.185]
CSTR: “Open Loop” Observer


“Open Loop” Observer” : no Measurement based Correction
Linear Plant- Linear “Open Loop” Observer
State Estimates: (-) True (+) estimated
0.4
Conc.(mod/m3)
0.35

0.3

0.25

0.2
0 1 2 3 4 5
Time (min)
State Estimates: (-) True (+) estimated
405

400
Temp.(K)

395

390

385
0 1 2 3 4 5
Time (min)

CSTR: “Open Loop” Observer


“Open Loop” Observer” : no Measurement based Correction
Linear Plant- Linear “Open Loop” Observer
Open Loop Observer: Error Dynamics
0.2
Error in Conc. Estimate

0.1

-0.1

-0.2
0 1 2 3 4 5
Time (min)

15
Error in Temp. Estimate

10

-5

-10
0 1 2 3 4 5
Time (min)

CSTR: Luenberger Observer


Observer poles: Both poles placed at 0.5
Linear Plant- Linear Observer
S tate E s tim ate s : (-) T rue (+) e s tim ate d
0 .4
Conc.(mod/m3)

0 .3 5

0 .3

0 .2 5

0 .2
0 0 .5 1 1 .5 2
T im e (m in)

S tate E s tim ate s : (-) T rue (+) e s tim ate d


405
Temp.(K)

400

395

390
0 0 .5 1 1 .5 2
T im e (m in)

CSTR: Luenberger Observer


Observer poles: both poles placed at 0.5. Linear plant, linear observer.

(Figure: Luenberger observer (poles at 0.5) error dynamics; error in concentration estimate and error in temperature estimate versus time (min).)
CSTR Example: Dead-beat Observer

Linear plant, linear observer; ε(0) = [0.1  2]^T.

(Figure: state estimates, (-) true, (+) estimated; concentration (kmol/m3) and temperature (K) versus time (min).)
CSTR Example: Dead-beat Observer
Linear plant, linear observer. Dead-beat observer: poles at (0, 0).

(Figure: dead-beat observer error dynamics; error in concentration estimate and error in temperature estimate versus time (min).)
CSTR Example: Dead-beat Observer

Non-linear plant, linear observer; ε(0) = [0.1  2]^T.

(Figure: state estimates, (-) true, (+) estimated; concentration (kmol/m3) and temperature (K) versus time (min).)
CSTR Example: Dead-beat Observer

Non-linear plant, linear observer.

(Figure: observer error dynamics; error in concentration estimate and error in temperature estimate versus time (min).)

Estimation Error Variances

Luenberger observer:
                   Conc.           Temp.
Linear plant       3.993 x 10^-5   1.112
Nonlinear plant    2.534 x 10^-4   3.3303

Kalman predictor:
                   Conc.           Temp.
Linear plant       3.984 x 10^-5   1.113
Nonlinear plant    2.547 x 10^-4   3.341

Unmeasured Disturbances
What if there are unknown disturbances influencing the state dynamics? What if the measurements are corrupted with measurement noise?

Suppose we have stochastic models for the time evolution of these unmeasured disturbances and the measurement noise. Can we use these models to design a state estimator which filters out the measurement noise but compensates for the unmeasured disturbances?

It is difficult to carry out pole placement based on these noise models such that the desired goal is achieved.

Unmeasured Disturbances

Consider the continuous time linear perturbation model obtained through linearization of a mechanistic model:
dx/dt = A x(t) + B u(t) + H d(t)
y(t) = C x(t)
Perturbation variables (deviations from the steady state, denoted by *):
x(t) = X(t) - X* ; y(t) = Y(t) - Y* ; u(t) = U(t) - U* ; d(t) = D(t) - D*
In computer controlled systems, the manipulated inputs are piecewise constant:
u(t) = u(k) for kT ≤ t < (k+1)T

Difficulty: the disturbance inputs d(t) are NOT piecewise constant functions! How do we develop a discrete time model?

Unmeasured Disturbances
Simplifying Assumption 1: the sampling interval (T) is small enough that the disturbance inputs can be approximated as piecewise constant during the sampling interval:
d(t) = d(k) for kT ≤ t < (k+1)T

Simplifying Assumption 2: {d(k)} is a zero mean white noise process with Cov[d(k)] = E[d(k) d(k)^T] = Qd

Simplifying Assumption 3: the measurements are corrupted with a zero mean white noise process {v(k)} with Cov[v(k)] = E[v(k) v(k)^T] = R
Unmeasured Disturbances
Define w(k) = Γd d(k). Then
E[w(k)] = Γd E[d(k)] = 0
Cov[w(k)] = E[w(k) w(k)^T] = Γd E[d(k) d(k)^T] Γd^T = Γd Qd Γd^T
Let Q = Γd Qd Γd^T. The discrete model becomes
x(k+1) = Φ x(k) + Γ u(k) + w(k)
y(k) = C x(k) + v(k)
Given measurements {y(k)}, inputs {u(k)} and the model, how do we construct an optimal state estimate?

Primary requirement: the error between the state estimate and the true process state should be "as small as possible".
Optimal State Estimation

Thus, given the stochastic state space model
x(k+1) = Φ x(k) + Γ u(k) + w(k)
y(k) = C x(k) + v(k)
where {w(k)} and {v(k)} are uncorrelated (in time and with each other) random sequences with zero mean and known covariances
E[w(k) w(k)^T] = Q ; E[v(k) v(k)^T] = R
Q quantifies uncertainties in the state dynamics and/or modeling errors; R quantifies the variability of the measurement errors.
How do we design an optimal state estimator?
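A quick simulation sketch of this stochastic model (the Φ, C, Q, R values below are illustrative choices, not from the slides), checking that the sampled noise sequences indeed have the prescribed covariances:

```python
import numpy as np

rng = np.random.default_rng(0)
Phi = np.array([[0.9, 0.1], [0.0, 0.8]])   # illustrative stable system
C = np.array([[1.0, 0.0]])
Q = np.diag([0.04, 0.01])                  # state (process) noise covariance
R = np.array([[0.25]])                     # measurement noise covariance

x = np.zeros(2)
ws, vs = [], []
for k in range(20000):
    w = rng.multivariate_normal(np.zeros(2), Q)   # w(k) ~ N(0, Q)
    v = rng.multivariate_normal(np.zeros(1), R)   # v(k) ~ N(0, R)
    y = C @ x + v                                 # noisy measurement
    x = Phi @ x + w                               # stochastic state update
    ws.append(w)
    vs.append(v)

print(np.cov(np.array(ws).T))   # empirical covariance ≈ Q
print(np.cov(np.array(vs).T))   # empirical covariance ≈ R
```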
Optimal State Estimation

• Since the sequences {w(k)} and {v(k)} are stochastic processes, the state sequence {x(k)} is also a stochastic process.

• Notice that, through the difference equation, x(k) and x(k-j) are correlated. Thus, even when the sequences {w(k)} and {v(k)} are white noise processes, {x(k)} is a correlated stochastic process.

• Two important statistical measures that can be used to characterize the stochastic process {x(k)} are its mean and covariance functions, which are related to the characteristics of {w(k)} and {v(k)}. The models for {w(k)} and {v(k)}, together with the difference equation, can be used to generate this information.
Preliminaries

Define the set
Y^k = { (y(0), u(0)), (y(1), u(1)), ....., (y(k), u(k)) }

Under weak conditions, the best (i.e. optimal) estimate is the conditional (or a posteriori) mean
x̂(k|k) = E[x(k) | Y^k]
Prediction step:
E[x(k) | Y^(k-1)] = E[Φ x(k-1) + Γ u(k-1) + w(k-1) | Y^(k-1)]
 = Φ E[x(k-1) | Y^(k-1)] + Γ u(k-1) + E[w(k-1)],  with E[w(k-1)] = 0
OR x̂(k|k-1) = Φ x̂(k-1|k-1) + Γ u(k-1)
Preliminaries
Cov[x(k) | Y^(k-1)] = E[(x(k) - x̄(k))(x(k) - x̄(k))^T | Y^(k-1)]
where x̄(k) = E[x(k) | Y^(k-1)].
Subtracting the equation governing the mean
x̂(k|k-1) = Φ x̂(k-1|k-1) + Γ u(k-1)
from the equation governing the system dynamics
x(k) = Φ x(k-1) + Γ u(k-1) + w(k-1)
we have
ε(k|k-1) = Φ ε(k-1|k-1) + w(k-1)
where the prediction error is ε(k|k-1) = x(k) - x̂(k|k-1) and the estimation error is ε(k-1|k-1) = x(k-1) - x̂(k-1|k-1).

Preliminaries

Update step (with an arbitrary gain matrix L(k)):
x̂(k|k) = x̂(k|k-1) + L(k) e(k)
e(k) = y(k) - ŷ(k|k-1)
where the "innovation" e(k) is related to the state estimation error as follows:
e(k) = y(k) - ŷ(k|k-1) = C x(k) + v(k) - C x̂(k|k-1) = C ε(k|k-1) + v(k)

The prediction and estimation errors are related as follows:
x̂(k|k) = x̂(k|k-1) + L(k) e(k)
⇒ x(k) - x̂(k|k) = x(k) - x̂(k|k-1) - L(k) e(k)
⇒ ε(k|k) = [I - L(k) C] ε(k|k-1) - L(k) v(k)

Mean Values of Estimation Errors

Error dynamics:
ε(k|k-1) = Φ ε(k-1|k-1) + w(k-1)
ε(k|k) = [I - L(k) C] ε(k|k-1) - L(k) v(k)
Combining:
ε(k|k) = [I - L(k) C][Φ ε(k-1|k-1) + w(k-1)] - L(k) v(k)

Simplifying Assumption 4: the initial state at k = 0 is a random variable such that
E[x(0)] = E[x̂(0|0)] ; Cov[x(0)] = P(0)
⇒ E[x(0) - x̂(0|0)] = E[ε(0|0)] = 0

Mean Values of Estimation Errors

Eε(1 | 1)  I  L(1)CEε(0 | 0)  w (0)  L(1)Ev (1)  0

Eε(2 | 2)  I  L(2)CEε(1 | 1)  w (1)  L(2)Ev (2)  0


.........

Eε(k | k )  I  L(k )CEε(k  1 | k  1)  w (k  1)  L(k )Ev (k )  0

 Eε(k | k  1)  Eε(k  1 | k  1)  w(k  1)  0

Thus, the proposed linear observer is unbiased

Estimation Errors: Covariance Matrices

Define
P(k|k-1) = Cov[ε(k|k-1)] = E[ε(k|k-1) ε(k|k-1)^T]
P(k-1|k-1) = Cov[ε(k-1|k-1)] = E[ε(k-1|k-1) ε(k-1|k-1)^T]
Now
ε(k|k-1) ε(k|k-1)^T = [Φ ε(k-1|k-1) + w(k-1)][Φ ε(k-1|k-1) + w(k-1)]^T
Taking the expectation on both sides, and noting that ε(k-1|k-1) and w(k-1) are uncorrelated, i.e.
E[ε(k-1|k-1) w(k-1)^T] = [0],
it follows that
P(k|k-1) = Φ P(k-1|k-1) Φ^T + Q
(recursive equation for the update of the prediction covariance)

Prediction Error

The innovation
e(k) = y(k) - ŷ(k|k-1) = C ε(k|k-1) + v(k) = C[Φ ε(k-1|k-1) + w(k-1)] + v(k)
contains information about w(k-1) and v(k).

It is desired to compensate the state estimate for w(k) while filtering v(k) out.

The update step can be viewed as
x̂(k|k) = x̂(k|k-1) + ŵ(k-1|k)
ŵ(k-1|k) = L(k) e(k)
L(k) decides the "portion of e(k)" used for disturbance compensation.

Means and Covariance of Errors

Mean of the innovation:
E[e(k)] = C E[ε(k|k-1)] + E[v(k)] = 0

Covariance of the innovation:
P_e(k) = E[e(k) e(k)^T] = E[(C ε(k|k-1) + v(k))(C ε(k|k-1) + v(k))^T]
 = C Cov[ε(k|k-1)] C^T + Cov[v(k)] = C P(k|k-1) C^T + R

Estimation error:
ε(k|k) = ε(k|k-1) - L(k) e(k)
⇒ E[ε(k|k)] = E[ε(k|k-1) - L(k) e(k)] = 0

Means and Covariance of Errors

Estimation error covariance:
Cov[ε(k|k)] = Cov[ε(k|k-1)] + L(k) Cov[e(k)] L(k)^T
  - E[ε(k|k-1) e(k)^T] L(k)^T - L(k) E[e(k) ε(k|k-1)^T]
Defining the cross covariance
P_εe(k) = E[ε(k|k-1) e(k)^T] = E[ε(k|k-1)(C ε(k|k-1) + v(k))^T] = P(k|k-1) C^T
we have
P(k|k) = P(k|k-1) + L(k) P_e(k) L(k)^T - L(k) P_εe(k)^T - P_εe(k) L(k)^T

Minimum Variance Design

Find the gain matrix L(k) such that the estimation error variance is minimum.
Minimum variance design:
Min over L(k) of  tr P(k|k)
Necessary condition for optimality:
∂ tr P(k|k) / ∂ L(k) = [0]

Note: properties of the trace of a matrix:
tr(C + D) = tr(C) + tr(D)
tr(C) = tr(C^T)

Matrix Calculus

Consider X (an m x n matrix) and y = f(X), a scalar function of X. Then ∂y/∂X is the m x n matrix whose (i, j)-th entry is ∂y/∂x_ij:
∂y/∂X = [∂y/∂x_11  ∂y/∂x_12 ....  ∂y/∂x_1n ; ∂y/∂x_21  ∂y/∂x_22 ....  ∂y/∂x_2n ; .... ; ∂y/∂x_m1  ∂y/∂x_m2 ....  ∂y/∂x_mn]

Rules of differentiation:
∂ tr(AX)/∂X = ∂ tr(XA)/∂X = A^T
Let B represent a symmetric matrix; then
∂ tr(X B X^T)/∂X = 2 X B
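These two differentiation rules are easy to verify numerically with a finite-difference gradient (the random matrices below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))          # A @ X is square for X of shape (3, 2)
B0 = rng.standard_normal((2, 2))
B = (B0 + B0.T) / 2                      # symmetric B, as the rule requires
X = rng.standard_normal((3, 2))

def num_grad(f, X, h=1e-6):
    """Central finite-difference gradient of a scalar function of a matrix."""
    G = np.zeros_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            E = np.zeros_like(X)
            E[i, j] = h
            G[i, j] = (f(X + E) - f(X - E)) / (2 * h)
    return G

g1 = num_grad(lambda X: np.trace(A @ X), X)        # rule says this equals A.T
g2 = num_grad(lambda X: np.trace(X @ B @ X.T), X)  # rule says this equals 2 X B
print(np.allclose(g1, A.T, atol=1e-5), np.allclose(g2, 2 * X @ B, atol=1e-5))
```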
Minimum Variance Observer
∂/∂L(k) tr[L(k) P_e(k) L(k)^T] = 2 L(k) P_e(k)
∂/∂L(k) tr[L(k) P_εe(k)^T + P_εe(k) L(k)^T] = 2 ∂/∂L(k) tr[L(k) P_εe(k)^T] = 2 P_εe(k)

Thus, it follows that
∂ tr P(k|k)/∂L(k) = 2 L(k) P_e(k) - 2 P_εe(k) = [0]
⇒ L*(k) = L(k)_OPT = P_εe(k) P_e(k)^-1
⇒ P(k|k)_OPT = P(k|k-1) - L*(k) P_e(k) L*(k)^T = [I - L*(k) C] P(k|k-1)
Kalman Filter: Summary
Prediction:
x̂(k|k-1) = Φ x̂(k-1|k-1) + Γ u(k-1)
P(k|k-1) = Φ P(k-1|k-1) Φ^T + Q
Kalman gain computation:
L*(k) = P_εe(k) P_e(k)^-1 = P(k|k-1) C^T [C P(k|k-1) C^T + R]^-1
Update:
e(k) = y(k) - C x̂(k|k-1)
x̂(k|k) = x̂(k|k-1) + L*(k) e(k)
P(k|k) = [I - L*(k) C] P(k|k-1)
Interpretations

The covariance matrix quantifies the uncertainty associated with the estimated state.

(Figure: uncertainty ellipses in the (x1, x2) plane; the ellipse shrinks from the prediction step to the update step, illustrating P(k|k-1) ≥ P(k|k).)

P(k|k) = P(k|k-1) - L*(k) P_e(k) L*(k)^T
Since L*(k) P_e(k) L*(k)^T is a positive semi-definite matrix,
⇒ P(k|k) ≤ P(k|k-1)
The update step reduces the covariance associated with the estimate.
Gaussian Distributions

Why study the multivariate Gaussian distribution?
• From the Central Limit Theorem, it follows that a sum of many independent and identically distributed random variables can be well approximated by a Gaussian distribution. If unknown disturbances are assumed to arise from many independent physical sources, then a Gaussian distribution is appropriate for modeling their behavior.
• Attractive mathematical properties: linear transformations of Gaussian random variables are still Gaussian distributed.
• For Gaussian distributed random variables, optimal estimates have a simple form.
Multivariate Gaussian Distribution

Consider a random variable x ∈ R^n. Let x̄ ∈ R^n represent the mean of x and P a positive definite covariance matrix. Then
p(x) = N(x̄, P) = [(2π)^(n/2) det(P)^(1/2)]^-1 exp[ -(1/2) (x - x̄)^T P^-1 (x - x̄) ]
The density is characterized completely by the mean (x̄) and covariance (P).

If x ~ N(x̄, P) is a random vector, A is an (r x n) matrix of rank r and b is an (r x 1) vector, then
z = A x + b
is also Gaussian distributed: z ~ N(A x̄ + b, A P A^T)
Consequence: linear filtering of a Gaussian distributed input will generate a Gaussian distributed output.
Multivariate Gaussian Distribution

Consider two random vectors x and z. Let x̄ and z̄ represent the means of x and z, respectively.

Random vectors x and z are said to be uncorrelated if
E[(x - x̄)(z - z̄)^T] = [0]
Random vectors x and z are said to be independent if
p(x, z) = p(x) p(z)

If the vectors x and z are independent, then x and z are uncorrelated.
If the vectors x and z are uncorrelated and Gaussian, then x and z are independent.
Gaussian Noise and KF

Let the process noise, the measurement noise and the initial state
have Gaussian (normal) distributions, i.e.
    w ~ N(0, Q), v ~ N(0, R) and x(0) ~ N(x̂(0|0), P(0))
Then, from the properties of Gaussian distributions, it follows that
    p[x(k) | Y^(k-1)] ~ N(x̂(k|k-1), P(k|k-1))
and
    p[x(k) | Y^k] ~ N(x̂(k|k), P(k|k))
Also, the innovation sequence is a Gaussian stochastic process:
    p[e(k) | Y^(k-1)] ~ N(0, Pe(k))
    Pe(k) = C P(k|k-1) C^T + R
Gaussian Noise and KF

When the process noise, the measurement noise and the initial state
have Gaussian (normal) distributions, it can be shown that

x̂(k|k) generated using the Kalman filter maximizes p[x(k) | Y^k],
i.e. it is a "maximum a posteriori" or MAP estimate.

x̂(k|k) generated using the Kalman filter maximizes the
log likelihood function, i.e.
    log p[x(k) | Y^k] = log p[x(k), Y^k] - log p[Y^k]

In other words, the KF generates the solution that minimizes

    x̂(k|k) = arg min over x(k) of
        ||y(k) - C x(k)||²_(R^-1) + ||x(k) - x̂(k|k-1)||²_(P(k|k-1)^-1)

Thus, the Kalman Filter is a "Maximum Likelihood" (ML) Estimator.
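The equivalence between the gain-form update and this weighted least-squares (MAP) objective can be checked numerically; the matrices and measurement below are illustrative, not from the slides:

```python
import numpy as np

# Gain-form KF update vs. the normal equations of the MAP objective
# min_x ||y - Cx||^2_(R^-1) + ||x - xhat(k|k-1)||^2_(P(k|k-1)^-1)
P = np.array([[2.0, 0.3],
              [0.3, 1.5]])        # P(k|k-1), assumed
C = np.array([[1.0, 0.0]])
R = np.array([[0.5]])
x_pred = np.array([0.2, -0.1])    # xhat(k|k-1), assumed
y = np.array([1.0])               # measurement y(k), assumed

# (a) gain-form update
L = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
x_gain = x_pred + L @ (y - C @ x_pred)

# (b) solve the normal equations of the weighted least-squares problem
Rinv, Pinv = np.linalg.inv(R), np.linalg.inv(P)
H = C.T @ Rinv @ C + Pinv
g = C.T @ Rinv @ y + Pinv @ x_pred
x_map = np.linalg.solve(H, g)

print(np.allclose(x_gain, x_map))   # the two updates coincide
```

The identity behind this check is the matrix inversion lemma, which converts the "information form" solution (b) into the familiar gain form (a).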
Kalman Filter: Advantages
• Generates the maximum likelihood (ML) and
maximum a posteriori (MAP) estimates of the
states when the noises are Gaussian
• Can be derived, without making any assumptions on the
distributions of the noises, as a minimum variance
estimator
• Requires only the first and second moments of the
conditional densities of the states and the
innovations
• Relatively easy to adapt to multi-rate and irregular
sampling scenarios
• Much easier to design than the pole placement approach
Convergence of Estimation Errors
Consider a KF implemented on a linear
deterministic system of the form
    x(k+1) = Φ x(k) + Γ u(k)
    y(k) = C x(k)
which is free of state uncertainty and measurement noise.

Kalman gain computation using the Riccati equations:
    P(k|k-1) = Φ P(k-1|k-1) Φ^T + Q
    L*(k) = P(k|k-1) C^T [C P(k|k-1) C^T + R]^(-1)
    P(k|k) = [I - L*(k)C] P(k|k-1)
where Q > 0 and R > 0 are tuning matrices.
Convergence of Estimation Errors

Kalman filter:
    x̂(k+1|k) = Φ x̂(k|k) + Γ u(k)
    x̂(k|k) = x̂(k|k-1) + L*(k)[y(k) - C x̂(k|k-1)]

Under nominal conditions, the only source of
estimation error is the initial state x̂(0|0).

Error dynamics:
    ε(k+1|k) = Φ ε(k|k)
    ε(k|k) = [I - L*(k)C] ε(k|k-1)
Combining:
    ε(k+1|k) = Φ[I - L*(k)C] ε(k|k-1)  .........(3)

Equation (3) is a linear time-varying system.
Stability analysis cannot be carried out using eigenvalues.
Convergence of Estimation Errors

Define matrices
    Π(k|k-1) = P(k|k-1)^(-1) and Π(k|k) = P(k|k)^(-1)

Using the matrix inversion lemma
    (A^(-1) + B)^(-1) = A - A(A + B^(-1))^(-1) A
and the Riccati equations, the following inequality can be proved:

    Π(k+1|k) ≤ Φc(k)^(-T) Π(k|k-1) Φc(k)^(-1) - Φc(k)^(-T) Δ(k) Φc(k)^(-1)  ...........(4)

    Φc(k) = Φ[I - L*(k)C]
    Δ(k) = Π(k|k-1) - Π(k|k-1)[Π(k|k) + Φ^T Q^(-1) Φ]^(-1) Π(k|k-1)
Convergence of Estimation Errors

Define the Lyapunov function
    V(k) = ε(k|k-1)^T Π(k|k-1) ε(k|k-1)
Combining equation (3) with inequality (4):
    V(k+1) ≤ V(k) - ε(k|k-1)^T Δ(k) ε(k|k-1)
    Δ(k) = Π(k|k-1) - Π(k|k-1)[Π(k|k) + Φ^T Q^(-1) Φ]^(-1) Π(k|k-1)

Since Δ(k) is always +ve definite,
    ε(k|k-1)^T Δ(k) ε(k|k-1) > 0
and the error dynamics given by equation (3) is Lyapunov stable.
Convergence of Estimation Errors

Assumption: there exist pL, pH > 0 such that
    pL I ≤ P(k|k-1) ≤ pH I and pL I ≤ P(k|k) ≤ pH I

Then
    (1/pH)||ε(k|k-1)||² ≤ V(k) ≤ (1/pL)||ε(k|k-1)||²

and Δ(k) admits a lower bound
    Δ(k) ≥ α I with α = 1 / [pH² (1/pH + ||Φ||²/λmin(Q))] > 0

so that
    V(k+1) ≤ V(k) - α ||ε(k|k-1)||²

Thus, the estimation error dynamics is asymptotically stable.
Stationary Kalman Filter

Thus, as k → ∞,
    P(k|k-1) → P̃∞, P(k|k) → P∞ and L*(k) → L*∞

Stationary Kalman gain computation using the
Algebraic Riccati Equation (ARE):
    P̃∞ = Φ P∞ Φ^T + Q
    L*∞ = P̃∞ C^T [C P̃∞ C^T + R]^(-1)
    P∞ = [I - L*∞ C] P̃∞

Prediction and update:
    x̂(k|k-1) = Φ x̂(k-1|k-1) + Γ u(k-1)
    x̂(k|k) = x̂(k|k-1) + L*∞ [y(k) - C x̂(k|k-1)]
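In practice, the stationary gain can be obtained by simply iterating the Riccati recursion until it converges; the system matrices below are illustrative:

```python
import numpy as np

# Iterate the Riccati equations until P(k|k-1) converges to the
# ARE solution, then read off the stationary gain L*∞.
Phi = np.array([[0.9, 0.1],
                [0.0, 0.8]])      # assumed Φ (stable, observable pair with C)
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.1]])

P = Q.copy()                      # P(0|0), an arbitrary PSD start
for _ in range(1000):
    P_pred = Phi @ P @ Phi.T + Q                    # prediction
    L = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
    P_new = (np.eye(2) - L @ C) @ P_pred            # update
    if np.allclose(P_new, P, atol=1e-12):
        break
    P = P_new

print(np.round(L, 4))             # stationary Kalman gain L*∞
```

At convergence, the filter error dynamics matrix (I - L*∞C)Φ has all eigenvalues inside the unit circle.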
Example: Quadruple Tank System

True initial state:
    x(0) = [2  -2  2  -2]^T

Kalman filter parameters:
    Cov[w(k)] = Q = 0.01 I (4 × 4)
    Cov[v(k)] = R = 0.01 I (2 × 2)
    x̂(0|0) = [0  0  0  0]^T and P(0|0) = Q

Stationary Kalman filter gain:
           [ 0.7825    0    ]
           [   0     0.7921 ]
    L*∞ =  [ 0.2212    0    ]
           [   0     0.2365 ]

    Eigenvalues of (I - L*∞C)Φ: {0.6337, 0.7195, 0.6196, 0.7806}
Example: Quadruple Tank System

(Figure: state estimation using the Kalman filter - true and estimated
levels 1 and 2 over 400 sec; the estimates track the true states.)
Example: Quadruple Tank System

(Figure: state estimation using the Kalman filter - true and estimated
levels 3 and 4 over 400 sec.)
Example: Quadruple Tank System

(Figure: spectral radii of the predicted covariance P(k+1|k) and the
updated covariance P(k|k) vs. time - P(k|k-1) ≥ P(k|k) at every
instant, and both settle to stationary values.)
Dealing with Non-stationary Disturbances

Augment the state space model with extra artificial states (equal in
number to the outputs), which behave as integrated white noise
sequences and can capture drifting disturbances:
    x(k+1) = Φ x(k) + Γ u(k) + Γη η(k) + w(k)
    η(k+1) = η(k) + wη(k)
    y(k) = C x(k) + v(k)

State noise covariance:
    [ Q    [0] ]
    [ [0]  Qη  ]

Choice of the Γη matrix (a tuning parameter):
    bias in the input model: Γη = Γ
    mean shift in the disturbance: Γη = Γd

Design the Kalman filter / predictor using the augmented model.
Fast changing disturbance: use high values of the covariance Qη.
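The augmentation above can be sketched as follows for a hypothetical 2-state, 1-output model with an input-bias disturbance (Γη = Γ); all values are illustrative:

```python
import numpy as np

# Build the augmented matrices (Φa, Γa, Ca) for one integrated
# white-noise disturbance state.
Phi = np.array([[0.9, 0.1],
                [0.0, 0.8]])
Gamma = np.array([[0.5],
                  [1.0]])
C = np.array([[1.0, 0.0]])
Gamma_eta = Gamma                 # bias-in-input disturbance model

n, r = Phi.shape[0], Gamma_eta.shape[1]
Phi_a = np.block([[Phi, Gamma_eta],
                  [np.zeros((r, n)), np.eye(r)]])   # η(k+1) = η(k) + wη(k)
Gamma_a = np.vstack([Gamma, np.zeros((r, Gamma.shape[1]))])
C_a = np.hstack([C, np.zeros((C.shape[0], r))])     # η does not enter y directly

print(Phi_a)
print(Gamma_a.T, C_a)
```

The Kalman filter is then designed for (Φa, Γa, Ca) exactly as for the original model.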
Kalman Predictor: Summary

Initialization step: initial mean x̂(0|-1),
initial covariance P(0|-1)

At instant 'k':

Step 1: Compute the Kalman (predictor) gain L*p(k)
    L*p(k) = Φ P(k|k-1) C^T [R + C P(k|k-1) C^T]^(-1)

Step 2: Recursive prediction estimator
    e(k) = y(k) - C x̂(k|k-1)
    x̂(k+1|k) = Φ x̂(k|k-1) + Γ u(k) + L*p(k) e(k)

Step 3: Update the covariance matrix
    P(k+1|k) = Φ P(k|k-1) Φ^T + Q - L*p(k) C P(k|k-1) Φ^T
Kalman Predictor: CSTR Example

    x(k+1) = [ 0.185  -0.01 ] x(k) + [ 0.005   0.13 ] u(k) + [ 0.06 ] d(k)
             [ 73.49   1.33 ]        [ -0.73   -1.8 ]        [ 3.9  ]

    y(k) = [0  1] x(k) + v(k)

    Qd = (0.05)²
    Q = Γd Qd Γd^T = 0.05² [ 0.0036  0.234 ]
                           [ 0.234   15.21 ]
    Cov[v(k)] = R = (0.5)²

A priori estimate of the initial state:
    x̂(0|-1) = [0  0]^T
Initial state covariance estimate (selected arbitrarily large):
    P(0|-1) = 1 × 10³ I

After about 20 iterations, the Kalman (predictor) gain settles to the
steady-state value
    Lp,∞ = [-0.00516  0.696]^T
CSTR Example: Kalman Predictor
Linear Plant - Linear Observer

(Figure: true and estimated states - concentration (mol/m³) and
temperature (K) vs. time over 4 min, with initial estimation error
ε(0) = [0.1  2]^T; the estimates converge to the true states.)
CSTR Example: Kalman Predictor
Linear Plant - Linear Observer

(Figure: Kalman predictor error dynamics - errors in the concentration
and temperature estimates decay toward zero over 4 min.)
CSTR Example: Kalman Predictor
Non-Linear Plant - Linear Observer

(Figure: true and estimated states - concentration (mol/m³) and
temperature (K) vs. time over 10 min, with initial estimation error
ε(0) = [0.1  2]^T.)
CSTR Example: Kalman Predictor
Non-Linear Plant - Linear Observer

(Figure: Kalman predictor error dynamics - errors in the concentration
and temperature estimates over 10 min.)
"Steady State" Kalman Predictor

As k → ∞, under weak conditions,
the optimal estimator becomes time invariant.

Theorem
Assume the pair (Φ, Q^(1/2)) is stabilizable and the pair (Φ, C) is detectable.
Then the solution of the Riccati equation P(k|k-1) → P∞ ≥ 0,
where P∞ denotes the solution of the Algebraic Riccati Equation
    P∞ = Φ P∞ Φ^T + Q - L*p,∞ C P∞ Φ^T
    L*p,∞ = Φ P∞ C^T [R + C P∞ C^T]^(-1)

Lemma
Assume the pair (Φ, Q^(1/2)) is controllable and R is non-singular.
Then all eigenvalues of (Φ - L*p,∞ C) lie inside the unit circle.
(The dynamics governing the estimation error ε(k|k-1) is asymptotically stable.)
"Steady State" Kalman Predictor

As k → ∞, P(k|k-1) → P∞,
where P∞ denotes the solution of the Algebraic Riccati Equation
    P∞ = Φ P∞ Φ^T + Q - L*p,∞ C P∞ Φ^T
    L*p,∞ = Φ P∞ C^T [R + C P∞ C^T]^(-1)

Recursive prediction estimator:
    e(k) = y(k) - C x̂(k|k-1)
    x̂(k+1|k) = Φ x̂(k|k-1) + Γ u(k) + L*p,∞ e(k)

The above "steady state observer" can be written as
    x̂(k+1|k) = Φ x̂(k|k-1) + Γ u(k) + L*p,∞ e(k)
    y(k) = C x̂(k|k-1) + e(k)
    E[e(k)] = 0 and Cov[e(k)] = R + C P∞ C^T
Connection with Time Series Models

The stationary form of the Kalman predictor
is also known as the innovation form of the state space model:
    x(k+1) = Φ x(k) + Γ u(k) + L e(k)
    y(k) = C x(k) + e(k)
    E[e(k)] = 0 and Cov[e(k)] = Pe

The above stationary form of the state space model is equivalent to a
Box-Jenkins type time series model:
    y(k) = G(q) u(k) + H(q) e(k)
    G(q) = C(qI - Φ)^(-1) Γ ;  H(q) = I + C(qI - Φ)^(-1) L

Estimation of a time series model (ARX/ARMAX/BJ)
from input-output data is equivalent to identifying the
stationary form of the Kalman predictor.
Connection with Time Series Models

• Thus, the stationary form of the Kalman predictor can be
identified directly from input-output data using ARX /
ARMAX / Box-Jenkins parameterization and
converting into a state space realization.
• Advantage: no need to model the state noise, w(k),
and the measurement noise, v(k).

Innovation form of the state space model:
    x(k+1) = Φ x(k) + Γ u(k) + w(k)
    y(k) = C x(k) + v(k)
    w(k) = L e(k) and v(k) = e(k)
    E[w(k)] = E[e(k)] = 0
    Cov[w(k)] = L Pe L^T and Cov[v(k)] = Pe
    Cov[w(k), v(k)] = E[w(k) v(k)^T] = L Pe
Notation

Mechanistic model:
    State dynamics: dx/dt = f(x, u, d, θ)
    Measurement model: y = H[x]

Assumptions:
    Manipulated inputs are piecewise constant:
        u(t) = u(k) for t_k ≤ t < t_(k+1)
    Unmeasured disturbances are random fluctuations
    in the neighborhood of a mean value:
        d(t) = d̄ + w(k)

Control-relevant discrete time representation
(t_k = kT, t_(k+1) = (k+1)T, T: sampling time):

    x(t_(k+1)) = x(t_k) + ∫ from t_k to t_(k+1) of f(x(τ), u(k), d̄ + w(k), θ) dτ

i.e.
    x(k+1) = F[x(k), u(k), w(k), θ]
    y(k) = H[x(k)]
Extended Kalman Filter: Summary

Successive local linearization:
1. Compute the local Jacobian matrices
    A(t_(k-1)) = [∂f/∂x] and B_d(t_(k-1)) = [∂f/∂d]
   evaluated at (*) = (x̂(k-1|k-1), u(k-1), 0)
2. Compute the matrices
    Φ(k, k-1) = exp[A(t_(k-1)) T] and
    Γ_d(k, k-1) = [Φ(k, k-1) - I] A(t_(k-1))^(-1) B_d(t_(k-1))

Prediction step:
    x̂(k|k-1) = F[x̂(k-1|k-1), u(k-1), 0]
    P(k|k-1) = Φ(k,k-1) P(k-1|k-1) Φ(k,k-1)^T + Γ_d(k,k-1) Q_d Γ_d(k,k-1)^T
Extended Kalman Filter: Summary

Kalman gain computation:
    Compute C(k) = [∂H/∂x] at x̂(k|k-1) and
    L(k) = Pxe(k) Pe(k)^(-1)
         = P(k|k-1) C(k)^T [C(k) P(k|k-1) C(k)^T + R]^(-1)

Update step:
    e(k) = y(k) - H[x̂(k|k-1)]
    x̂(k|k) = x̂(k|k-1) + L(k) e(k)
    P(k|k) = [I - L(k)C(k)] P(k|k-1)

An approach originally derived for state estimation of a discrete
linear system is used for state estimation of a nonlinear system.
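One EKF predict/update cycle can be sketched for a scalar nonlinear system; the model, Jacobians, and numbers below are illustrative (F is taken directly as the discrete-time map, so Φ = dF/dx):

```python
import numpy as np

# One EKF predict/update cycle for a scalar nonlinear system
# x(k+1) = F(x) + w, y = h(x) + v (all values are illustrative).
def F(x):      return 0.9 * x + 0.2 * np.sin(x)
def h(x):      return x ** 2
def F_jac(x):  return 0.9 + 0.2 * np.cos(x)   # dF/dx
def h_jac(x):  return 2.0 * x                 # dh/dx

Q, R = 0.01, 0.1
x_hat, P = 1.0, 0.5          # xhat(k-1|k-1), P(k-1|k-1)
y = 1.3                      # measurement received at instant k

# Prediction step (linearize F at the updated estimate)
Phi = F_jac(x_hat)
x_pred = F(x_hat)
P_pred = Phi * P * Phi + Q

# Kalman gain and update step (linearize h at the predicted estimate)
Ck = h_jac(x_pred)
Lk = P_pred * Ck / (Ck * P_pred * Ck + R)
x_upd = x_pred + Lk * (y - h(x_pred))
P_upd = (1.0 - Lk * Ck) * P_pred

print(x_upd, P_upd)
```

Because the Jacobians are re-evaluated at the current estimates, both the gain and the covariance depend on the trajectory, unlike in the linear case.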
EKF: CSTR Example

(Figure: true and estimated states - concentration (mol/m³) and
temperature (K) vs. time over 10 min; the EKF estimates track the
true states.)
EKF: CSTR Example

(Figure: extended Kalman filter error dynamics - errors in the
concentration and temperature estimates over 10 min.)
EKF: Plug Flow (Tubular) Reactor (PFR)

(Schematic: steam-jacketed tubular reactor carrying out the
endothermic series reaction A → B → C; feed CA0, TR0 enters, the
product stream CA(1,t), CB(1,t), CC(1,t), TR(1,t) leaves, and several
temperature sensors are placed along the length.)

State estimation problem:
Estimate the concentration profile inside the reactor using a few
temperature measurements along the length.
Fixed Bed Reactor

Material balances (distributed parameter system):
    ∂CA/∂t = -vl ∂CA/∂z - k10 e^(-E1/RTr) CA                        ... Reactant A
    ∂CB/∂t = -vl ∂CB/∂z + k10 e^(-E1/RTr) CA - k20 e^(-E2/RTr) CB   ... Product B

Energy balances:
    ∂Tr/∂t = -vl ∂Tr/∂z + [(-ΔHr1)/(ρm Cpm)] k10 e^(-E1/RTr) CA
             + [(-ΔHr2)/(ρm Cpm)] k20 e^(-E2/RTr) CB
             + [Uw/(ρm Cpm Vr)] (Tj - Tr)                           ... Reactor Temp.
    ∂Tj/∂t = -vj ∂Tj/∂z + [Uwj/(ρmj Cpmj Vj)] (Tr - Tj)             ... Jacket Temp.
PDE To ODE Model (Finite Differencing)

Discretization grid: 0, 1, 2, ..., N, N+1

                                            Plant   Model
No. of internal discretization points        19       4
No. of states                                80      20
No. of jacket side temp. measurements         3       3
No. of reactor side temp. measurements        3       3
State Estimation using EKF

Simulation parameters:

Variable                    Nominal value   Fluctuations added
Feed flow                   1 m/min         0.01 m/min
Feed concentration          4 mol/lit       0.14 mol/lit
Temperature measurements    -               0.4 K
Steam flow rate             1 m/min         -

• The performance of the EKF under the effect of feed flow and
feed concentration fluctuations was studied
• The estimated concentration approaches the true
concentration within 5 minutes
(Figure: fluctuations in feed flow and feed concentration)

(Figure: actual and estimated exit concentration of B)

(Figure: simulation result - concentration profiles of product B at
different time instants)
State and Parameter Estimation

Estimation of deterministic changes in
unmeasured disturbances / model parameters:

    x(k+1) = x(k) + ∫ from kT to (k+1)T of f(x(τ), u(k), d̄ + w(k), θ(k)) dτ
    θ(k+1) = θ(k) + wθ(k)
    y(k) = H x(k) + v(k)

Augment the model with a fictitious discrete evolution equation.
θ(k): vector containing slowly drifting model parameters /
unmeasured disturbances to be estimated along with the states.
State and Parameter Estimation

Prediction step for the augmented model:
    [ x̂(k|k-1) ]   [ F[x̂(k-1|k-1), u(k-1), 0, θ̂(k-1|k-1)] ]
    [ θ̂(k|k-1) ] = [ θ̂(k-1|k-1)                           ]

Update step for the augmented model:
    [ x̂(k|k) ]   [ x̂(k|k-1) ]
    [ θ̂(k|k) ] = [ θ̂(k|k-1) ] + La(k) [y(k) - C x̂(k|k-1)]

Local Jacobians for the predicted covariance update:
    A(t_(k-1)) = [∂f/∂x] ; Bθ(t_(k-1)) = [∂f/∂θ] ; B_d(t_(k-1)) = [∂f/∂d]
    evaluated at (*) = (x̂(k-1|k-1), u(k-1), d̄, θ̂(k-1|k-1))
State and Parameter Estimation

Predicted covariance update using augmented matrices:
    Γθ(k, k-1) = ∫ from 0 to T of exp[A(t_(k-1))τ] Bθ(t_(k-1)) dτ

    Φa(k, k-1) = [ Φ(k,k-1)  Γθ(k,k-1) ]    Γa(k, k-1) = [ Γd(k,k-1)  [0] ]
                 [ [0]       I          ]                 [ [0]        I   ]

State noise covariance:
    Qa = [ Qd   [0] ]
         [ [0]  Qθ  ]

Fast changing parameter / disturbance:
use high values of the covariance Qθ (a tuning parameter).

    Pa(k|k-1) = Φa(k,k-1) Pa(k-1|k-1) Φa(k,k-1)^T + Γa(k,k-1) Qa Γa(k,k-1)^T
Extended Kalman Filter: Summary

Kalman gain computation:
    Compute Ca(k) = [ [∂H/∂x]  [0] ] at x̂(k|k-1) and θ̂(k|k-1), and
    La(k) = Pxe(k) Pe(k)^(-1)
          = Pa(k|k-1) Ca(k)^T [Ca(k) Pa(k|k-1) Ca(k)^T + R]^(-1)

Updated covariance:
    Pa(k|k) = [I - La(k)Ca(k)] Pa(k|k-1)

Simultaneous state and parameter estimation can be used as
• a soft sensor for slowly changing parameters / unmeasured
disturbances
• a detector for faults in the system, which are viewed as
changing parameters
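A minimal sketch of this idea on a linear scalar plant with an unknown gain-like parameter (all model values are illustrative; the random-walk augmentation makes the problem bilinear, so the filter is an EKF):

```python
import numpy as np

# Sketch: recover an unknown parameter a by augmenting the state with
# a random-walk model theta(k+1) = theta(k) + w_theta.
# Plant: x(k+1) = a*x(k) + u(k), y = x + v (illustrative values).
a_true = 0.7
Qa = np.diag([1e-5, 1e-5])        # state / parameter noise covariances
R = np.array([[0.0025]])          # measurement noise variance (std 0.05)

rng = np.random.default_rng(2)
x = 1.0                            # true state
xa = np.array([1.0, 0.3])          # [xhat, theta_hat], theta initialized at 0.3
Pa = np.eye(2)

for k in range(500):
    u = np.sin(0.3 * k)            # persistent excitation
    y = x + rng.normal(0.0, 0.05)
    # update step: y depends only on x, so Ca = [1, 0]
    Ca = np.array([[1.0, 0.0]])
    L = Pa @ Ca.T @ np.linalg.inv(Ca @ Pa @ Ca.T + R)
    xa = xa + (L @ (np.array([y]) - Ca @ xa)).ravel()
    Pa = (np.eye(2) - L @ Ca) @ Pa
    # prediction step: linearize the augmented map [x; th] -> [th*x + u; th]
    Phi_a = np.array([[xa[1], xa[0]],
                      [0.0, 1.0]])
    xa = np.array([xa[1] * xa[0] + u, xa[1]])
    Pa = Phi_a @ Pa @ Phi_a.T + Qa
    # true plant moves one step
    x = a_true * x + u

print(round(xa[1], 3))             # estimated parameter
```

With a persistently exciting input and low measurement noise, the parameter estimate drifts from its initial value of 0.3 toward the true value.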
Experiment: Combined State and Parameter Estimation on the Heater-Mixer Setup

(Schematic: two-tank heater-mixer setup - cold water flows into
Tank 1, which is heated through a thyristor power control unit
(4-20 mA input signal); Tank 1 overflows into Tank 2, which also
receives cold water through control valve CV-2 (3-15 psi input);
temperature and level instrumentation on both tanks.)
Example: Stirred Tank Heater-Mixer

    dT1/dt = (F1/V1)(Ti1 - T1) + Q(I1)/(ρ V1 Cp)
    dh2/dt = (1/A2)[F1 + F2(I2) - F3(h2)]
    dT2/dt = [1/(h2 A2)] [F1(T1 - T2) + F2(Ti2 - T2) - UA(T2 - Tatm)/(ρ Cp)]

    Q(I1) = 7.979 I1 + 0.989 I1² + 0.0073 I1³
    F2(I2) = 3.9 + 27 I2 + 0.71 I2² + 0.0093 I2³
    U = 139.5 J/m²Ks ; F3(h2) = k√h2

    I1: % current input to the thyristor power controller
    I2: % current input to the control valve
Estimation of States and the Heat Loss Parameter using EKF - Experimental Conditions

• The Tank 1 temperature and the heat loss parameter are to be
estimated
• The Tank 2 temperature and level are measured
• The system is kept in a perturbed state by perturbing the
inputs (heater input and Tank 2 inlet flow)
• The flow to Tank 1 is kept constant. This implies that the
overflow to Tank 2 is also constant
• The parameter is initialized with a value of 0.8
(Figure: experimental result - estimates of the Tank 1 temperature
and the heat loss parameter.)
Issues in State Estimation

• Robustness to plant-model mismatch: model accuracy is
critical to state estimation
• Noise model parameters: measurement and state noise
covariances are difficult to estimate.
These matrices are often treated as tuning parameters
• The number of extra states (unmeasured disturbances /
parameters) estimated cannot exceed the number of
measurements
• Modifications are necessary for multi-rate sampled
data systems
Summary
• Dynamic model based state observers can be used to
reconstruct unmeasured states from frequently
measured outputs
• Kalman filters generate state estimates with minimum
estimation error variance, provided the state and
measurement noise models are known accurately
• Extended Kalman filtering can be used for estimating
the states of nonlinear systems
• Note: the KF and EKF belong to a class of filters
called Bayesian estimators, which are used in a wide
range of engineering applications (robotics, process
control, target tracking, speech recognition, image
reconstruction)
References

• Soderstrom, T., Discrete Time Stochastic Systems, Springer, 2002.
• Gelb, A., Applied Optimal Estimation, MIT Press, 1974.
• Astrom, K. J. and B. Wittenmark, Computer Controlled Systems,
Prentice Hall, 1994.