
Kalman Filters basics

Tim Mazumdar
The immortal one
Hungarian-born U.S. scientist and professor Rudolf Emil Kalman (born 1930) is widely
regarded as the creator of modern control theory and system theory. His research reshaped
the field of control engineering. His most widely known accomplishment is his development
of the Kalman filter, a mathematical method now widely used in EE, ME and communication.
He was born in Budapest, Hungary, on May 19, 1930, the son of an electrical engineer.
He emigrated to the United States in 1943 and studied EE at MIT.
Kalman received his BSEE from MIT in 1953 and his MSEE in 1954. He got his PhD (1957)
under John R. Ragazzini at Columbia. The original paper on Kalman filtering was entitled
"A New Approach to Linear Filtering and Prediction Problems," Transactions of the ASME,
Journal of Basic Engineering, March 1960.
In Dedication to the man who taught Kalman filtering

Professor Uday B. Desai –
Respected academic, Professor at IIT Bombay.
Father of orthogonal filters, the Desai-Weinert smoother, the robust recursive
least squares algorithm and about 100 other major innovations.
B.Tech, IIT Bombay, 1969-1974. PhD, Johns Hopkins University, 1979.
Former Director, IIT Hyderabad.
Objective
● Why Kalman filter? Background of the Kalman filter
● What is an innovations sequence and how it is generated
● The geometric idea behind innovations sequences
● Derivation of the discrete time Kalman filter from first principles
● Prediction, filtering, smoothing and system identification
● How to build a practical Kalman filter
● Concepts – Kalman gain, process noise, measurement noise
● A-priori and a-posteriori measurement errors
● Example Kalman filters – some test cases
● Fast Kalman filters and extended Kalman filters
Objectives

● Mathematical relationship between the Kalman filter and RLS
algorithms – the Kalman filter as the underlying mathematical basis for RLS
● The algebraic Riccati equation and solving the ARE

Apollo 11 Sextant
Apollo 11 was the world's most important space mission that
utilized an amazing low-memory version of the Kalman filter.
Definitions
● Linear estimator
● Measurement noise – Noise added to an internal state variable to form a
set of noisy observations. The Kalman filter must recover the state from
these noisy observations.
● Estimation – Estimation of a state in the presence of process and
measurement noise is the problem addressed by the Kalman filter. It allows
us to estimate the state of dynamic systems with certain types of
random behavior using statistical moments, e.g. the mean. The Kalman filter
tracks the probability distribution of its estimation errors.
● Kalman vs Wiener filters – The Kalman filter is recursive in nature and
updates the filter parameters upon receipt of every sample of
observation. The Wiener filter requires the full set of observations to work.
● LQG – Linear quadratic Gaussian estimation problem. The estimator
is a linear function of the observations, the error criterion is a quadratic
form, and the noise is Gaussian in amplitude.
The Principle of Orthogonality
● Let x and z be two random variables and let xhat = a1*z be an estimator for x.
In estimation problems one tries to come up with a constant a1
that minimizes the mean squared error J = E[(x - a1*z)^2]. This gives rise to the
principle of orthogonality: the error of the optimal estimate is orthogonal to the
observations, E[(x - xhat)*z] = 0.
The optimal estimate is a weighted function of the observations, generally expressed
as xhat = E[x | z], the estimate of x given that z is a set of observations.

● The other name is "minimum variance unbiased estimator". The minimization
of variance is due to the square nature of J, and the unbiasedness is due
to the fact that E[x - xhat] = 0.
<.,.> stands for the inner product in an abstract vector space; orthogonality
means <x - xhat, z> = 0.
Elaboration of orthogonality
● Orthogonality is best explained geometrically in the context of
estimation. Let z be a vector of observations and let the estimator be a
linear function of z. That is, in formal terms we are required to minimize
J = E[(x - a1*z)^2].
The estimation error is minimal when x - a1*z is orthogonal to z – perpendicular
to the observations, as shown in the (not reproduced) projection diagram.
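A minimal numerical sketch of this principle (the closed form a1 = E[xz]/E[z^2]
follows from setting dJ/da1 = 0; the variable names are illustrative):

% Orthogonality principle demo: estimate x from z = x + noise
N = 1e5;
x = randn(1, N);                 % zero-mean signal
z = x + 0.5*randn(1, N);         % noisy observation of x
a1 = mean(x.*z) / mean(z.^2);    % optimal linear gain, E[xz]/E[z^2]
err = x - a1*z;                  % estimation error
disp(mean(err.*z));              % approximately 0: error is orthogonal to z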
Controllability and Observability
● Kalman gave us some deep insights into the concepts of
controllability and observability.
● A system is said to be controllable if there exists a set of control
signal inputs uk defined over 0 <= k <= N that can make the
system transition from state x0 to state xN in N sampling
instants, where N is a positive integer. For a state variable
model this is formally expressed as a rank condition: the controllability
matrix [B AB A^2B ... A^(N-1)B] must have full rank. A MATLAB check
appears below.

● The definition of observability implies that by using only the H matrix
and the observations it is possible to determine the system state at any
time, conceptually x = inv(H)*Y. This implies the observability matrix
[H; HA; ...; HA^(N-1)] has a rank of N.
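These rank tests are one-liners in MATLAB (ctrb and obsv are Control System
Toolbox functions; A, B, C here are taken from the plant example later in this deck):

A = [1.1269 -0.4940 0.1129; 1 0 0; 0 1 0];
B = [-0.3832; 0.5919; 0.5191];
C = [1 0 0];
Co = ctrb(A, B);   % controllability matrix [B A*B A^2*B]
rank(Co)           % = 3 (full rank) for this plant: controllable
Ob = obsv(A, C);   % observability matrix [C; C*A; C*A^2]
rank(Ob)           % = 3 (full rank) for this plant: observable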
Kailath's original 1970 definition of innovations process
● Given a stochastic process, its innovations process will be defined as a white
Gaussian noise process obtained from the original process by a causally
invertible transformation.
● What is a causally invertible transformation? A causally invertible transformation
is a mathematical operation that allows one to transform the white Gaussian noise
process back to the original stochastic process.
● Smoother – A smoother is a form of optimal filter which estimates the state of a
system at t = kT with samples from before (like (k-1)T, (k-2)T, …) and after (like
(k+1)T, (k+2)T, …).
● Fixed interval smoother – a popular form of Kalman filter or smoother which uses
all the observations of a fixed length interval to estimate the system state at all
times during that interval. It mostly operates on an off-line basis.
● Fixed-point smoothers – Estimate the system state at (k-m)T using
all observations up to the current time.
Recursive nature of the Kalman Filter
● The Kalman filter is a set of mathematical equations that
provides an efficient recursive computational solution to
discrete time data filtering problems. The K filter removes
extraneous "noise" from a given stream of data. Kalman’s
greatest contribution was that he came up with two sets of
algebraic equations that solve real-time problems.
● His solution to the discrete-time problem led him to tackle
the continuous-time problem. He developed the
continuous-time version of the K. filter with Richard Bucy
(1960 – 1961) and published the all-important paper that
describes a way to recursively find solutions to the
discrete-data linear filtering problem.
What made the Kalman filter so celebrated in the media
Kalman's original work was funded by the Air Force Office of Scientific Research
(AFOSR) as it related to space vehicles. The AFOSR sponsored the research done at
RIAS by Kalman and Bucy. Kalman and Bucy's work revolutionized the field of
estimation and had an enormous impact on the design and development of precise
navigation systems.
The Kalman filter was a major breakthrough in guidance technology. In fact the
original paper is the 25th most quoted paper in all of EE. It was the basis of the
Apollo guidance system and it enabled the first moon landing. It was the core of a
1965 Apollo guidance memo by Donald Fraser, "Recursive filtering applied to system
identification," and was used by James Potter.
STRUCTURE OF THE GOVERNING KALMAN FILTER EQUATIONS

STATE EQUATION + STATE NOISE
MEASUREMENT EQUATION + MEASUREMENT NOISE
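A sketch of the two governing equations in the notation of the Hayes-style
derivation later in this deck (A is the state transition matrix and C the
measurement matrix; some slides write F and H instead):

x(n) = A x(n-1) + w(n)    (state equation; w = state/process noise)
y(n) = C x(n) + v(n)      (measurement equation; v = measurement noise)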
KALMAN FILTER IN TERMS OF PROBABILITY, STATISTICS, AND RANDOM PROCESS

[Quadrant diagram: STATISTICS and PROBABILITY above, RANDOM PROCESS below;
caption: SEQUENTIAL STATISTICAL ANALYSIS OF RANDOM PROCESS IS THE KALMAN FILTER.]

UP ARROW: TIME INDEPENDENT; DOWN ARROW: TIME DEPENDENT;
RIGHT ARROW: DIRECT PROBLEM; LEFT ARROW: INVERSE PROBLEM.
THE WAY KALMAN FILTER WORKS

STATE EQUATION (IMPROVED QUAL + QUAN) + STATE NOISE (EFFECT IS SUPPRESSED)
MEASUREMENT EQUATION (IMPROVED QUAL + QUAN) + MEASUREMENT NOISE

THE KALMAN FILTER DOES MANY THINGS AND CAN BE INTERPRETED IN MANY WAYS
MANY THINGS WHICH KALMAN FILTER DOES-1
(FROM AIAA-2000-4319)
● FILTERS THE NOISES
● ESTIMATES THE STATES
● ESTIMATES PARAMETERS; IDENTIFIES QUALITATIVELY THE STATE AND THE
MEASUREMENT MODELS
● ESTIMATES QUANTITATIVELY THE PARAMETERS IN THE ABOVE MODELS
● FUSES/INTEGRATES THE DATA FROM DIFFERENT MEASUREMENT SOURCES – DATA FUSION
● FILTERS OUTLIERS IN MEASUREMENT
● ASSIMILATES THE DATA FROM DIFFERENT SOURCES AND IMPROVES BOTH THE STATE
AND MEASUREMENT MODELS
● HISTORICALLY THE KALMAN FILTER HAS EVOLVED TO DO THE ABOVE TASKS, WHICH
ARE INCREASINGLY MORE INVOLVED
MANY THINGS WHICH KALMAN FILTER DOES-2
(FROM AIAA-2000-4319)

● UNOBSERVABLES FROM OBSERVABLES
● EXPANSION OF THE SCENARIO
● ESTIMATION OF UNMODELLABLE INPUTS BY MODELLING AS NOISE
● PROBABILISTIC MIXING
● HANDLING DETERMINISTIC ERRORS BY NOISE ADDITION
● HANDLING NUMERICAL ERRORS AS NOISE ADDITION
● IT IS A STOCHASTIC CORRECTIVE PROCESS
MANY MORE INTERPRETATIONS ARE POSSIBLE
DEFINITION OF KALMAN FILTER
The Kalman filter assimilates the measurement
information with uncertain system and
measurement models, using a probabilistically
weighted linear addition of the predicted state
and the measurement data, to adapt both the
state and measurement models in a statistically
consistent way.
Another interpretation is that the Kalman filter (KF)
is an expanding knowledge front, assimilating
newer information in a meaningful way.
Mathematics of the Kalman filter – Bishop's approach
Bishop and Welch – Kalman
The state variable model and update equations for the Kalman filter
(equations (1.9) through (1.13) in Welch and Bishop's numbering):
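(The slide leaves a placeholder for these equations; what follows is the standard
Welch–Bishop formulation, reproduced here as a sketch:)

Time update ("predict"):
  xhat_k^- = A xhat_{k-1} + B u_{k-1}            (1.9)
  P_k^-    = A P_{k-1} A' + Q                    (1.10)
Measurement update ("correct"):
  K_k      = P_k^- H' (H P_k^- H' + R)^-1        (1.11)
  xhat_k   = xhat_k^- + K_k (z_k - H xhat_k^-)   (1.12)
  P_k      = (I - K_k H) P_k^-                   (1.13)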
The first task during the measurement update is to compute the Kalman gain, K_k,
as in (1.11). The next step is to actually measure the process to obtain z_k, and
then to generate an a posteriori state estimate by incorporating the measurement
as in (1.12). The final step is to obtain an a posteriori error covariance
estimate via (1.13).

The difference between the Kalman and Wiener filters is that the Kalman filter
is recursive, and at each time step the Kalman gain is computed.
Complete update of Kalman filter

Each step requires solving 5 equations. On modern processors it is possible to
solve the Kalman filter equations either in SW as embedded code or in HW.
There have been Kalman filters designed for systolic array based computation,
one of which we will review in this session.
Sorenson (1971): the quantity z_k - H x_k is known in Kalman filtering as the "residual".
One way of looking at a single cycle of computation – Kalman Filter

● The time update projects the current estimate of the state ahead in time.
● The measurement update corrects the projected estimate with an additional
observation at time k.
Some Process related definitions
● Stationary – If the mean and autocorrelation are independent of the
time origin then a process is said to be stationary in the wide sense. Much
of communication theory relies on the mathematics of stationary processes.
● Ergodic – A process is ergodic if the probability distribution of its values
at a single time instant, taken over the ensemble of sample functions, equals
the distribution over all time values of a randomly chosen member function.
Ergodicity and stationarity are independent characteristics.
3 types of Estimation
● Filtering – filtering uses previous as well as the current
observation to estimate the state of a linear system.
● Prediction – prediction uses previous observations to predict
the state of a linear system at future time instants.
● Smoothing – uses observations from the future as well as previous
observations to estimate the state of a dynamical system.
● Unbiased estimator – The Kalman filter computes the conditional
mean and variance of the PDF of the state vector of a linear
system. The process noise and measurement noise must be
white Gaussian. In the case of a continuous time Kalman filter the
conditional mean is propagated by a linear differential equation,
or a linear difference equation in discrete time. The variance is
propagated by a second order differential equation or its
difference equation equivalent.
Setting up the plant (Matlab)
>> A = [ 1.1269 -0.4940 0.1129; 1 0 0 ; 0 1 0 ];
>> B = [ -0.3832; 0.5919; 0.5191];
>> C = [ 1 0 0];
>> A
A =
    1.1269   -0.4940    0.1129
    1.0000         0         0
         0    1.0000         0
>> Bp = B'
Bp =
   -0.3832    0.5919    0.5191
>> Q = 1; R = 1;
>> Plant = ss(A,[B B],C,0,-1);   % state-space plant (inputs u and w); see the full ss(...) call later
>> [kalmf,L,P,K] = kalman(Plant,Q,R);
>> K
K =
    0.3798
    0.0817
   -0.2570
K is really the Kalman gain.
Once we find the Kalman gain
The inputs are the process noise input u and the noisy output observations y;
the outputs are the state estimates xhat(n) and the estimate of the next
observation. Compute the first output of the kalmf function:
kalmf = kalmf(1,:);
[Block diagram: u and y enter kalmf; outputs are y_e and xhat(n).]
Kalman filter outputs
a =
              x1_e      x2_e     x3_e
   x1_e     0.7683    -0.494   0.1129
   x2_e     0.6202         0        0
   x3_e  -0.081732         1        0
b =
                 u         y
   x1_e    -0.3832    0.3586
   x2_e     0.5919    0.3798
   x3_e     0.5191   0.08173
c =
              x1_e      x2_e     x3_e
   y_e      0.6202         0        0
d =
                 u         y
   y_e           0    0.3798
Beginnings – Discrete time State variable model
Discrete Kalman filter Time update equations
Discrete Kalman filter Measurement update equations
When does the Kalman gain converge quickly?
The Kalman gain converges quickly when both the process
noise covariance and the measurement noise covariance are constant
(Q and R are constant).

One approach is to run the filter off-line to estimate the degree
of perturbation due to Q and R and arrive at an estimate of the
steady-state Kalman gain. This is known as Grewal's method (1993).

What does the word recursive imply?
Recursive does not imply that all previous observations are
stored and must be processed every time a new data
sample is received. This non-requirement of storage
implies that the Kalman filter requires less computation and storage
than a Wiener filter.
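A minimal sketch of the off-line idea for a scalar system (values are the ones
used in the 1-D example later in this deck): iterate the covariance/gain
recursion with constant Q and R until the gain settles, then freeze it.

% Steady-state Kalman gain for x(n)=a*x(n-1)+w, z(n)=h*x(n)+v
a = 1; h = 3; Q = 0.01; R = 2;
p = 1; k = 0;
for n = 1:50
    p = a*p*a' + Q;              % a priori covariance
    k = p*h' / (h*p*h' + R);     % Kalman gain
    p = (1 - k*h)*p;             % a posteriori covariance
end
k   % converged (steady-state) gain; can be hard-coded in a fixed-gain filter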
What are the 3 underlying assumptions of the K filter
● The Kalman filter is essentially an optimal data processing algorithm.
It combines all available measurement data and prior knowledge of the
system and measurement sensors to produce an estimate of the
system state in such a way that the error is minimized statistically as the
iterations progress.
● The assumption of linearity – linear system analysis is standard and
well understood, and can be extended to non-linear systems through
linearization techniques.
● The assumption of whiteness of measurement and process noise.

Whiteness implies the noise samples are not correlated in time. Simply put, if
you know the noise value now it will not do you any good at another point
in time. In real life white noise would have infinite energy, as its PSD is flat,
and so it cannot exist. White noise can be approximated by real
wideband noise, and without white noise the mathematics of the Kalman
filter would not be tractable.
Assumptions – continued
3. The assumption of Gaussian nature of noise – When a
number of non-Gaussian RVs are added, the result tends to
Gaussian in the limit as the number of RVs tends to infinity.
This is known as the strong form of the central limit theorem.
While "whiteness" of measurement noise and process noise
has to do with the PSD of the noise, Gaussian nature implies the
PDF of the noise assumes a bell shaped curve over the range of
amplitudes. Gaussian nature is independent of whiteness; in
fact it imposes an additional constraint on the noise.

Gaussian nature makes the mathematics more tractable, as a
Gaussian PDF implies the first and second order conditional
densities completely determine the observation and process
noise.
What is the Riccati equation
● The ARE or algebraic Riccati equation is due to Jacopo Riccati in its
continuous form. The Riccati equation occurs in linear quadratic
Gaussian estimation theory. This is the original context in which the
Riccati equation arises in control theory.
There are two forms of the ARE – one for continuous time systems and
another for discrete time systems. We demonstrate both here so that the
student has some understanding of the near central importance of the ARE in
both communication and control.
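(The slide's equation images are not reproduced; in the estimation/filter setting
the two standard forms are, with P the steady-state error covariance:)

Continuous (CARE):  A P + P A' - P H' R^-1 H P + Q = 0
Discrete (DARE):    P = A [ P - P H' (H P H' + R)^-1 H P ] A' + Q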
The Discrete Algebraic Riccati equation
The DARE is more applicable for discrete time control systems.

● The Riccati equations accomplish 3 things:
● They allow prediction of the statistical performance of the estimator
before the estimator is actually deployed (a-priori).
● They allow computation of the optimal gain – the Kalman gain.
● For the simplest cases the Riccati equations can be solved
analytically, but usually robust numerical techniques are used to solve
the Riccati equations – e.g. Laub's method.
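In MATLAB the DARE can be solved numerically with the Control System Toolbox
function dare, which uses a Schur-decomposition approach in the spirit of Laub's
method; for the filter ARE the system matrices are transposed (a sketch, using the
scalar example from earlier):

% Solve the filter DARE for the scalar system a=1, h=3, Q=0.01, R=2
a = 1; h = 3; Q = 0.01; R = 2;
P = dare(a', h', Q, R);        % steady-state a priori error covariance
K = P*h' / (h*P*h' + R);       % corresponding steady-state Kalman gain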
Development of the Kalman Filter equations
● This development follows Monson Hayes; it is a lot less complicated.

Next we need to define the Q and R matrices, which are the covariances of
the process noise and the measurement noise.
Expressions for Process noise and Measurement noise
Process noise is the noise term added to the state equation, and
measurement noise is the noise term added to the output equation.

Here the Hermitian conjugate of the noise vector really boils down to a
simple transpose.

One objective is to derive the optimum Kalman gain that minimizes the MSE
of the estimation error.

● We define the best possible linear estimate based on observations y(i),
given that the observations are from i = 1, 2, 3, …, n-1. We define the 2
error sequences.
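(In Hayes' notation these definitions read as follows – a sketch, since the
slide's equation images are not reproduced:)

Q(n) = E[ w(n) w^H(n) ],  R(n) = E[ v(n) v^H(n) ],  E[ w(n) v^H(k) ] = 0
e(n|n-1) = x(n) - xhat(n|n-1)          (a priori error)
e(n|n)   = x(n) - xhat(n|n)            (a posteriori error)
P(n|n-1) = E[ e(n|n-1) e^H(n|n-1) ],   P(n|n) = E[ e(n|n) e^H(n|n) ]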
The all important P matrix or error covariance matrix

For each n, xhat(n-1|n-1) and P(n-1|n-1) are available. When a
new observation y(n) arrives, the new estimate for the state vector
must be computed and the error covariance matrix P(n|n-1) must be
"propagated" to P(n|n).
Equations relating a-priori and a-posteriori error
Step 2: Incorporation of new measurement
Expression for updated estimate:
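(A sketch of the standard update expression in the same notation, since the
slide's equation is not reproduced:)

xhat(n|n) = xhat(n|n-1) + K(n) [ y(n) - C xhat(n|n-1) ]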

The next big conceptual jump the uncorrelated nature of the


process noise
And the measurement noise.

E(v(n)w(n))= 0 ;E(v(n)x(n))= AE(v(n)E(x(n))+E(v(n)w(n))=0


This implies v(n) and x(n) are uncorrelated.
Error covariance matrix propagation

Now we take the derivative of the (trace of the) error covariance matrix with
respect to the Kalman gain and set it to zero.
Final expression for optimal gain:
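(The standard result of that minimization, stated as a sketch in the same notation:)

K(n) = P(n|n-1) C^H [ C P(n|n-1) C^H + R(n) ]^-1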
Closure: Plugging the optimal gain into the error covariance
For the final expression we plug the optimal Kalman gain value
into the error covariance propagation equation.
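(This yields the familiar closed form, again as a sketch:)

P(n|n)   = [ I - K(n) C ] P(n|n-1)
P(n+1|n) = A P(n|n) A^H + Q(n)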
Time and measurement updates for the continuous time Kalman filter
The continuous time model of a linear stochastic system can be
expressed as follows.
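(The slide's equations are not reproduced; the standard continuous-time model
they refer to is:)

dx(t)/dt = F(t) x(t) + G(t) w(t)     (state dynamics)
z(t)     = H(t) x(t) + v(t)          (measurement)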
Matlab Examples
This example directly applies the Kalman equations

%Define the length of the simulation.
nlen=20;
%Define the system. Change these to change the system.
a=1; h=3;
%Define the noise covariances.
Q=0.01; R=2;
%Preallocate memory for all arrays.
x=zeros(1,nlen);         z=zeros(1,nlen);
xapriori=zeros(1,nlen);  xaposteriori=zeros(1,nlen);
residual=zeros(1,nlen);
papriori=ones(1,nlen);   paposteriori=ones(1,nlen);
k=zeros(1,nlen);
%Unit-variance white noise sequences (implied by the slide, which
%only shows the scaling step).
w1=randn(1,nlen); v1=randn(1,nlen);
w=w1*sqrt(Q); v=v1*sqrt(R);
%Initial condition on the state, x.
x_0=1.0;
%Initial guesses for state and a posteriori covariance.
xaposteriori_0=1.5; paposteriori_0=1;
%Calculate the state and the output.
x(1)=a*x_0+w(1);
z(1)=h*x(1)+v(1);
%Predictor equations.
xapriori(1)=a*xaposteriori_0;
residual(1)=z(1)-h*xapriori(1);
papriori(1)=a*a*paposteriori_0+Q;
%Corrector equations.
k(1)=h*papriori(1)/(h*h*papriori(1)+R);
paposteriori(1)=papriori(1)*(1-h*k(1));
xaposteriori(1)=xapriori(1)+k(1)*residual(1); %completes the corrector (truncated on the slide)
This example directly applies the Kalman equations (continued)

%Calculate the rest of the values.
for j=2:nlen,
    %Calculate the state and the output.
    x(j)=a*x(j-1)+w(j);
    z(j)=h*x(j)+v(j);
    %Predictor equations.
    xapriori(j)=a*xaposteriori(j-1);
    residual(j)=z(j)-h*xapriori(j);
    papriori(j)=a*a*paposteriori(j-1)+Q;
    %Corrector equations.
    k(j)=h*papriori(j)/(h*h*papriori(j)+R);
    paposteriori(j)=papriori(j)*(1-h*k(j));
    xaposteriori(j)=xapriori(j)+k(j)*residual(j); %truncated on the slide
end

j=1:nlen;
subplot(221); %Plot states and state estimates.
h1=stem(j+0.25,xapriori,'b');
hold on
h2=stem(j+0.5,xaposteriori,'g');
h3=stem(j,x,'r');
hold off
%Make nice formatting.
legend([h1(1) h2(1) h3(1)],'a priori','a posteriori','exact');
title('State with a priori and a posteriori elements');
ylabel('State, x');
xlim=[0 length(j)+1];
set(gca,'XLim',xlim);
subplot(223); %Plot errors.
h1=stem(j,x-xapriori,'b');
hold on
h2=stem(j,x-xaposteriori,'g');
hold off
legend([h1(1) h2(1)],'a priori','a posteriori');
title('Actual a priori and a posteriori error');
ylabel('Errors');
set(gca,'XLim',xlim); %Set limits the same as first graph.
%Plot kalman gain, k.
subplot(224);
h1=stem(j,k,'b');
legend([h1(1)],'kalman gain');
title('Kalman gain');
ylabel('Kalman gain, k');
set(gca,'XLim',xlim); %Set limits the same as first graph.
Curves for 1-D Kalman Filter
Another simple Kalman filter for position
Kalman demo 1
duration=5;
dt=0.2;
[pos,posmeas,poshat] = kalman_demo1(duration,dt);

The kalman_demo1 function is available from the instructor.


A 4x4 Kalman filter example
This example includes a 4x4 estimator for the movement of a point in a
2-dimensional plane, starting from (10,10). Consider a particle moving in the
plane at constant velocity, subject to random perturbations in its trajectory.
The new position (x1, x2) is the old position plus the velocity (dx1, dx2) plus
noise w.

[ x1(t)  ]   [1 0 1 0] [ x1(t-1)  ]   [ wx1  ]
[ x2(t)  ] = [0 1 0 1] [ x2(t-1)  ] + [ wx2  ]
[ dx1(t) ]   [0 0 1 0] [ dx1(t-1) ]   [ wdx1 ]
[ dx2(t) ]   [0 0 0 1] [ dx2(t-1) ]   [ wdx2 ]

We assume we only observe the position of the particle.

[ y1(t) ]   [1 0 0 0] [ x1(t)  ]   [ vx1 ]
[ y2(t) ] = [0 1 0 0] [ x2(t)  ] + [ vx2 ]
                      [ dx1(t) ]
                      [ dx2(t) ]
Plant = ss(A,[B B],C,0,-1,'inputname',{'u' 'w'},'outputname',{'y1' 'y2'});

[x,y] = sample_lds(F, H, Q, R, initx, T);
[xfilt, Vfilt, VVfilt, loglik] = kalman_filter(y, F, H, Q, R, initx, initV);

>> F = [1 0 1 0; 0 1 0 1; 0 0 1 0; 0 0 0 1];
H = [1 0 0 0; 0 1 0 0];
Q = 0.1*eye(ss);   % ss = number of states (4)
R = 1*eye(os);     % os = number of observations (2)
>> F
F =
   1 0 1 0
   0 1 0 1
   0 0 1 0
   0 0 0 1
Kalman Filter "Covariance" Demo2

● Another example of a Kalman filter, with 3 axis position estimation. It also
shows how the variance of the estimation error propagates between time steps
and decreases as each measurement is processed.
● F and H matrices: [the slide's matrix images are not reproduced]
"Simon" – the 3rd Kalman example
● Position estimation of a vehicle is performed. The state consists of vehicle
position and velocity. The input u is the commanded acceleration and the output
y is the measured position.

1. Velocity at one time step from the current one = current velocity +
commanded acceleration * Kalman filter timing interval.
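(As equations, this kinematic model reads, with T the sampling interval:)

vel(k+1) = vel(k) + T*u(k)
pos(k+1) = pos(k) + T*vel(k) + (T^2/2)*u(k)

which gives A = [1 T; 0 1] and B = [T^2/2; T], matching the matrices
below for T = 0.1 s.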
Building the F, H, G matrices
T = 0.1 sec;
H = [ 1 0];
A = [ 1 0.1; 0 1]; B = [ 0.005; 0.1]; C = [ 1 0];
measnoise = 10; accelnoise = 0.2;  % example noise magnitudes (not shown on the slide)
Xinit = [0;0]; x = Xinit;
xhat = Xinit;                      % Initial state estimate
Sz = measnoise^2;                  % R matrix (measurement noise covariance)
Sw = accelnoise^2*[ 1 0 ; 0 1];    % Q matrix (process noise covariance)
P = Sw;
% Definition of position vector and position estimate vectors
pos = [];
poshat = [];
posmeas = [];
vel = [];
velhat = [];
duration = 10;
dt = 0.1;                          % needed by the simulation loop below
Kalman Position Tracker
for t = 0:dt:duration
    u = 1;
    % Simulate the linear system
    Processnoise = accelnoise*[(dt^2/2)*randn; dt*randn];
    x = A*x + B*u + Processnoise;
    % Simulate the observation equation
    y = C*x + measnoise*randn;   % noise sample (the slide omits the randn factor)
    % Project the state estimate
    xhat = A*xhat + B*u;
    % Form the innovation vector
    Inno_vec = y - C*xhat;
    % Covariance of the innovations process
    S = C*P*C' + Sz;
    % Gain
    K = A*P*C'*inv(S);
    % Update the state estimate
    xhat = xhat + K*Inno_vec;
    % Compute the covariance of the estimation error
    P = A*P*A' - A*P*C'*inv(S)*C*P*A' + Sw;
    pos = [pos; x(1)]; posmeas = [posmeas; y]; poshat = [poshat; xhat(1)];
    vel = [vel; x(2)]; velhat = [velhat; xhat(2)];
end
close all;
t = [0:dt:duration];
Tracking error at two values of process noise
This example is run for two different values of process noise. The difference
between the two estimates is telling: the second estimate of the position is
much closer to the noisy measurement.
What are the key issues in implementation
Verhaegen and Van Dooren proved in a legendary paper that the effectiveness of
the Kalman filter is dependent on the numerical accuracy with which the error
covariance matrix, the P(n|n-1) matrix, is represented. Loss of symmetry of the
error covariance matrix leads to serious issues with the implementation of the
Kalman filter. Numerical stability is exacerbated by the fact that the Riccati
equation solution can get into trouble due to roundoff errors, and the
eigenvalues of the error covariance matrix can lead to a blowout of its
condition number, that is, the ratio of the largest and smallest eigenvalues.

There are many approaches to making the Kalman filter more stable. Not
surprisingly, one approach – the Potter-Stern approach – propagates not P
itself but its square root.
Equations to explain stability
A second way to write the propagated Kalman filter error

● The error covariance matrix can be distorted in two ways. Both of these ways
are documented well by Sorenson (1971) and Verhaegen (1999). One is when
Pk loses its symmetry; it is better to propagate a symmetric version of Pk. Pk
can lose its symmetry due to numerical errors as the Kalman filter processes
data.
● The second type of numerical problem is when the error covariance matrix
becomes too small, thereby reducing the Kalman gain. A reduced Kalman gain
implies newer samples are given too little weight. This happens when the plant
noise is very small.
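Two classical remedies, sketched in MATLAB with illustrative matrices (the
Joseph-form update and explicit re-symmetrization are standard fixes, not
specific to this deck):

% Joseph stabilized covariance update with re-symmetrization
A = [1 0.1; 0 1]; C = [1 0]; Q = 0.04*eye(2); R = 100;
P = eye(2);
for n = 1:100
    P = A*P*A' + Q;                              % time update
    K = P*C' / (C*P*C' + R);                     % Kalman gain
    P = (eye(2)-K*C)*P*(eye(2)-K*C)' + K*R*K';   % Joseph form: symmetric, PSD for any K
    P = (P + P')/2;                              % propagate a symmetric version of P
end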
Comparison of continuous time and discrete time Kalman filter equations
How do we relate the A…F matrices?
The best way is to use first order approximations to arrive at the difference
equations from the differential equations. The steps are not shown here, but
the final expressions are:
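(The slide's expressions are not reproduced; the standard first-order results
are, with T the sampling interval:)

A ≈ I + F*T        (state transition)
B ≈ G*T            (input matrix)
Q_d ≈ Q_c*T        (process noise covariance)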
Computational complexity of the Kalman filter
● It depends on the number of states in the state vector and the number of
observations, whichever is bigger.
● If the number of states is n and the number of observations is m, the
computational complexity can be very simply expressed as O(max(n^3, m^3)).
● This cubic relationship shows that the Kalman filter is really pretty
computationally complex – especially compared to LMS or fast LMS or even RLS.
Exactly what was Kalman's contribution
● The two most significant contributions that the Kalman filter makes are as
follows:
● The Kalman filter equations are very suitable for implementation by a digital
computer. This is what led to its adoption as a core engine in the Apollo
program. The user of a Kalman filter today just needs to know basic matrix
operations, without having any understanding of the underlying theory. It is
very easy to modify the Kalman filter equations so that matrix inversion is
avoided. It is especially so for (I-KC)^-1.
● In contrast the Wiener filter requires inversion of a matrix and spectral
factorization.
● The recursive nature of the Kalman filter, and the fact that the update
requires only one additional sample, also helped in its adoption.
DKF and EKF
● DKF – Discrete Kalman Filter; this is the form in which Kalman invented the
filter. Primarily applicable to linear state models, it has been proven in
practice for the last 50 years.
● DKF is a robust state estimator with separate models for process and
measurement noise.
● It rests on the 3 assumptions of Maybeck: whiteness of process and
measurement noise, Gaussianness of process noise and measurement noise, and a
linear state equation.
● DKF is applicable to practically all forms of linear signal processing.
● EKF or Extended Kalman filter is suited for non-linear state models.
● EKF has many flavors – in fact each author has come up with her/his own
interpretation of EKF.
● EKF performance is mixed. It does not work very well if the process and
measurement noise differs from Gaussian.
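(A sketch of the linearization step that all EKF flavors share, in the deck's
notation; f and h are the nonlinear state and measurement maps:)

x(n) = f(x(n-1), u(n)) + w(n),   y(n) = h(x(n)) + v(n)
A(n) = df/dx evaluated at xhat(n-1|n-1),   C(n) = dh/dx evaluated at xhat(n|n-1)

The DKF equations are then run with these Jacobians in place of A and C.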
3 dimensional position tracking -an example
Full Form of A Matrix
How the matrix elements are derived
How the matrix elements are derived- Y axis

This gives the second row of the A Matrix


Row 4 of the A matrix
Kalman filter for a second order system
Restructuring the DKF equations for HW
implementation

Reduces number of matrix multiplications by a quarter


THANKS
On Linear prediction for
communications
Mazumdar and Kadambi
From the bone to a satellite – Stanley Kubrick -1968
