Kalman Equations v3
Tim Mazumdar
The immortal one
Hungarian-born U.S. scientist and professor Rudolf Emil Kalman (born 1930) is widely
regarded as the creator of modern control theory and system theory. His research reshaped
the field of control engineering. His most widely known accomplishment is his development
of the Kalman filter, a mathematical method now widely used in electrical engineering,
mechanical engineering, and communications.
He was born in Budapest, Hungary, on May 19, 1930, the son of an electrical engineer.
He immigrated to the United States in 1943 and studied electrical engineering at MIT.
Kalman received his BSEE from MIT in 1953 and his MSEE in 1954. He earned his PhD (1957)
under John R. Ragazzini at Columbia. The original paper on Kalman filtering was entitled
"A New Approach to Linear Filtering and Prediction Problems," Transactions of the ASME,
Journal of Basic Engineering, March 1960.
In dedication to the man who taught Kalman filtering
Apollo 11 Sextant
Apollo 11, arguably the world's most important space mission, relied on a remarkably
low-memory implementation of the Kalman filter.
Definitions
● Linear estimator – An estimator whose output is a linear function of the
observations.
● Measurement noise – Noise added to an internal state variable, through the
measurement equation, to form a set of noisy observations. The Kalman filter
must recover the state in spite of this noise.
● Estimation – Estimation of a state in the presence of process and
measurement noise is the problem addressed by the Kalman filter. It allows
us to estimate the state of dynamic systems with certain types of
random behavior using statistical moments, e.g., the mean. The Kalman filter
tracks the probability distribution of its estimation errors.
● Kalman vs. Wiener filters – The Kalman filter is recursive in nature and
updates the filter parameters upon receipt of every sample of the
observation. The Wiener filter requires the full set of observations before it
can produce an estimate.
● LQG – The linear quadratic Gaussian estimation problem: the estimator
is a linear function of the observations, the error criterion is of quadratic
form, and the noise is Gaussian in amplitude. The underlying state-space
model is written out after this list.
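All of these definitions refer to the standard linear state-space model. As a reference, the discrete-time model assumed throughout (notation follows the rest of the deck: state x, measurement z, state matrix A, measurement matrix H) can be written as
$$x(n+1) = A\,x(n) + w(n), \qquad z(n) = H\,x(n) + v(n),$$
where $w(n) \sim \mathcal{N}(0, Q)$ is the process noise and $v(n) \sim \mathcal{N}(0, R)$ is the measurement noise, both white and mutually uncorrelated.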
The Principle of Orthogonality
● Let $x$ and $z$ be two random variables and $\hat{x} = \alpha z$ an estimator for $x$.
In estimation problems one tries to come up with a constant $\alpha$
that minimizes the expectation of the squared error, $E[(x - \alpha z)^2]$. This gives
rise to the principle of orthogonality: at the optimum, the estimation error is
orthogonal to the observations,
$$E[(x - \hat{x})\,z] = 0.$$
The optimal estimate is a weighted function of the observations, generally expressed
as $\hat{x} = E[x \mid z]$, the estimate of $x$ given that $z$ is a set of observations.
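The one-line derivation behind this principle is worth spelling out, a sketch using the scalar estimator $\hat{x} = \alpha z$ defined above:
$$J(\alpha) = E[(x-\alpha z)^2], \qquad \frac{dJ}{d\alpha} = -2\,E[(x-\alpha z)\,z] = 0 \;\Rightarrow\; \alpha^{*} = \frac{E[xz]}{E[z^2]},$$
so the error $x - \alpha^{*} z$ is uncorrelated with, i.e., orthogonal to, the observation $z$.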
[Diagram: the Kalman filter in terms of probability, statistics, and random processes, showing the measurement equation and measurement noise.]
The difference between the Kalman and Wiener filters is that the Kalman filter is
recursive: at each time step a new Kalman gain is computed.
Complete update of the Kalman filter
$$\hat{x}(n|n-1) = A\,\hat{x}(n-1|n-1), \qquad P(n|n-1) = A\,P(n-1|n-1)\,A^{\mathsf H} + Q$$
$$K(n) = P(n|n-1)\,H^{\mathsf H}\left(H\,P(n|n-1)\,H^{\mathsf H} + R\right)^{-1}$$
$$\hat{x}(n|n) = \hat{x}(n|n-1) + K(n)\left(z(n) - H\,\hat{x}(n|n-1)\right), \qquad P(n|n) = \left(I - K(n)H\right)P(n|n-1)$$
where, for real-valued signals, the Hermitian conjugate ($^{\mathsf H}$) of the noise vector really boils down to a simple transpose.
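As a compact reference, here is a minimal MATLAB sketch of one predict/correct cycle implementing exactly the equations above (the function name kalman_step and the variable names are illustrative, not from the original deck):

function [xhat, P] = kalman_step(xhat, P, z, A, H, Q, R)
% One complete update of the discrete Kalman filter.
xpred = A*xhat;                  % a priori state estimate
Ppred = A*P*A' + Q;              % a priori error covariance
K = Ppred*H'/(H*Ppred*H' + R);   % Kalman gain
xhat = xpred + K*(z - H*xpred);  % a posteriori state estimate
P = (eye(size(P,1)) - K*H)*Ppred; % a posteriori error covariance
end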
One objective is to derive the optimum Kalman gain that minimizes the mean-square
estimation error, i.e., the trace of the error covariance matrix $P(n|n)$.
Now we take the derivative of the (trace of the) error covariance matrix with respect to the Kalman
gain and set it to zero:
$$\frac{\partial\,\operatorname{tr} P(n|n)}{\partial K} = -2\left(H\,P(n|n-1)\right)^{\mathsf T} + 2K\left(H\,P(n|n-1)\,H^{\mathsf T} + R\right) = 0.$$
Final expression for the optimal gain:
$$K(n) = P(n|n-1)\,H^{\mathsf T}\left(H\,P(n|n-1)\,H^{\mathsf T} + R\right)^{-1}.$$
Closure: for the final expression for the error covariance, we plug the optimal Kalman gain
into the error propagation covariance equation, which collapses to
$$P(n|n) = \left(I - K(n)H\right)P(n|n-1).$$
Time and measurement updates for the continuous-time Kalman filter
The continuous-time model of a linear stochastic system can be expressed as follows:
$$\dot{x}(t) = F(t)\,x(t) + G(t)\,w(t), \qquad z(t) = H(t)\,x(t) + v(t),$$
with white process noise $w(t)$ of covariance $Q(t)$ and white measurement noise $v(t)$ of covariance $R(t)$. The time and measurement updates merge into a single set of differential equations (the Kalman-Bucy filter):
$$\dot{\hat{x}} = F\hat{x} + K\left(z - H\hat{x}\right), \qquad K = P\,H^{\mathsf T} R^{-1}, \qquad \dot{P} = F P + P F^{\mathsf T} + G Q G^{\mathsf T} - P H^{\mathsf T} R^{-1} H P.$$
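To make the continuous-time update concrete, here is a minimal MATLAB sketch (a scalar example with assumed values, not from the original deck) that integrates the Riccati differential equation above with ode45:

% Scalar continuous-time Riccati equation: dP/dt = 2FP + GQG' - (PH)^2/R.
F = -1; G = 1; H = 1; Q = 0.1; R = 0.5; P0 = 1;
ricc = @(t, P) 2*F*P + G*Q*G' - (P*H)^2/R;
[t, P] = ode45(ricc, [0 5], P0);
K = P*H/R;   % continuous-time Kalman gain history
plot(t, P); xlabel('time'); ylabel('P(t)');
title('Riccati equation solution (scalar sketch)');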
Matlab Examples
This example directly applies the Kalman equations.
%Define the length of the simulation.
nlen=20;
%Define the system. Change these
% to change the system.
a=1; h=3;
%Define the noise covariances.
Q=0.01; R=2;
% Preallocate memory for all arrays.
x=zeros(1,nlen);
z=zeros(1,nlen);
xapriori=zeros(1,nlen);
xaposteriori=zeros(1,nlen);
residual=zeros(1,nlen);
papriori=ones(1,nlen);
paposteriori=ones(1,nlen);
k=zeros(1,nlen);
%Unit-variance white noise sequences, scaled by the covariances.
w1=randn(1,nlen); v1=randn(1,nlen);
w=w1*sqrt(Q); v=v1*sqrt(R);
%Initial condition on the state, x.
x_0=1.0;
%Initial guesses for state and a posteriori covariance.
xaposteriori_0=1.5; paposteriori_0=1;
%Calculate the state and the output for the first sample.
x(1)=a*x_0+w(1);
z(1)=h*x(1)+v(1);
%Predictor equations.
xapriori(1)=a*xaposteriori_0;
residual(1)=z(1)-h*xapriori(1);
papriori(1)=a*a*paposteriori_0+Q;
%Corrector equations.
k(1)=h*papriori(1)/(h*h*papriori(1)+R);
paposteriori(1)=papriori(1)*(1-h*k(1));
xaposteriori(1)=xapriori(1)+k(1)*residual(1);
This example directly applies the Kalman equations (continued).
%Calculate the rest of the values.
for j=2:nlen
    %Calculate the state and the output.
    x(j)=a*x(j-1)+w(j);
    z(j)=h*x(j)+v(j);
    %Predictor equations.
    xapriori(j)=a*xaposteriori(j-1);
    residual(j)=z(j)-h*xapriori(j);
    papriori(j)=a*a*paposteriori(j-1)+Q;
    %Corrector equations.
    k(j)=h*papriori(j)/(h*h*papriori(j)+R);
    paposteriori(j)=papriori(j)*(1-h*k(j));
    xaposteriori(j)=xapriori(j)+k(j)*residual(j);
end
j=1:nlen;
subplot(221); %Plot states and state estimates.
h1=stem(j+0.25,xapriori,'b');
hold on
h2=stem(j+0.5,xaposteriori,'g');
h3=stem(j,x,'r');
hold off
%Make nice formatting.
legend([h1(1) h2(1) h3(1)],'a priori','a posteriori','exact');
title('State with a priori and a posteriori elements');
ylabel('State, x');
xlim=[0 length(j)+1];
set(gca,'XLim',xlim);
subplot(223); %Plot errors.
h1=stem(j,x-xapriori,'b');
hold on
h2=stem(j,x-xaposteriori,'g');
hold off
legend([h1(1) h2(1)],'a priori','a posteriori');
title('Actual a priori and a posteriori error');
ylabel('Errors');
set(gca,'XLim',xlim); %Set limits the same as first graph.
subplot(224); %Plot Kalman gain, k.
h1=stem(j,k,'b');
legend([h1(1)],'kalman gain');
title('Kalman gain');
ylabel('Kalman gain, k');
set(gca,'XLim',xlim); %Set limits the same as first graph.
Curves for 1-D Kalman Filter
Another simple Kalman filter for position
Kalman demo 1
duration=5;
dt=0.2;
[pos,posmeas,poshat]=kalman_demo1(duration,dt);
1. Velocity one time step ahead of the current one = current velocity + commanded
acceleration * Kalman filter timing interval, as written out below.
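Written out as equations (a standard constant-acceleration kinematic model; the symbols $p$, $v$, $u$, $T$ are generic, matching the matrices built next):
$$v(k+1) = v(k) + u(k)\,T, \qquad p(k+1) = p(k) + v(k)\,T + \tfrac{1}{2}\,u(k)\,T^2,$$
which in matrix form gives the state equation $x(k+1) = A\,x(k) + B\,u(k)$ with state $x = [\,p \;\; v\,]^{\mathsf T}$.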
Building the F, H, G matrices
With T = 0.1 s,
$$F = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}, \qquad G = \begin{bmatrix} T^2/2 \\ T \end{bmatrix}, \qquad H = \begin{bmatrix} 1 & 0 \end{bmatrix}.$$
In MATLAB:
A = [1 0.1; 0 1]; B = [0.005; 0.1]; C = [1 0];
dt = 0.1;          % Timing interval T
measnoise = 10;    % Measurement noise magnitude (example value)
accelnoise = 0.2;  % Acceleration noise magnitude (example value)
Xinit = [0;0]; x = Xinit;
xhat = Xinit;                  % Initial state estimate
Sz = measnoise^2;              % R matrix (measurement noise covariance)
Sw = accelnoise^2*[1 0; 0 1];  % Q matrix (process noise covariance)
P = Sw;
% Define the position vector and estimated-position vectors.
pos = [];
poshat = [];
posmeas = [];
vel = [];
velhat = [];
duration = 10;
Kalman Position Tracker
for t = 0:dt:duration
    u = 1; % Commanded acceleration
    % Simulate the linear system.
    Processnoise = accelnoise*[(dt^2/2)*randn; dt*randn];
    x = A*x + B*u + Processnoise;
    % Simulate the observation equation.
    y = C*x + measnoise*randn;
    % Propagate (time-update) the state estimate.
    xhat = A*xhat + B*u;
    % Form the innovation vector.
    Inno_vec = y - C*xhat;
    % Compute the covariance of the innovations process.
    S = C*P*C' + Sz;
    % Compute the Kalman gain.
    K = A*P*C'*inv(S);
    % Update the state estimate.
    xhat = xhat + K*Inno_vec;
    % Compute the covariance of the estimation error.
    P = A*P*A' - A*P*C'*inv(S)*C*P*A' + Sw;
    % Save the data for plotting.
    pos = [pos; x(1)]; posmeas = [posmeas; y]; poshat = [poshat; xhat(1)];
    vel = [vel; x(2)]; velhat = [velhat; xhat(2)];
end
close all;
t = [0:dt:duration];
Tracking error at two values of process noise
This example is run for two different values of the process noise. The difference
between the two estimates is telling: the second estimate of the position is much closer
to the noisy measurement, because a larger assumed process noise produces a larger
Kalman gain, which weights new measurements more heavily.
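A minimal MATLAB sketch of this comparison via the steady-state gain (this uses dare from the Control System Toolbox with the standard filtering duality; the two accelnoise values are illustrative):

% Compare steady-state a priori covariance and gain at two process-noise levels.
A = [1 0.1; 0 1]; C = [1 0]; Sz = 10^2;
for accelnoise = [0.2 2.0]           % small vs. large process noise
    Sw = accelnoise^2*eye(2);
    Pss = dare(A', C', Sw, Sz);      % steady-state a priori covariance
    Kss = Pss*C'/(C*Pss*C' + Sz);    % steady-state Kalman gain
    fprintf('accelnoise = %.1f: gain = [%.3f %.3f]\n', accelnoise, Kss);
end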
What are the key issues in implementation
Verhaegen and Van Dooren proved in a landmark paper that the effectiveness of the
Kalman filter depends on the numerical accuracy with which the error covariance
matrix, i.e., the P(n|n-1) matrix, is represented. Loss of symmetry of the error
covariance matrix leads to serious issues with the implementation of the Kalman
filter. Numerical stability is exacerbated by the fact that the Riccati
equation solution can get into trouble due to roundoff errors, and the eigenvalues
of the error covariance matrix can lead to a blowout of its condition number, that is,
the ratio of the largest and smallest eigenvalues.
● The error covariance matrix can be distorted in two ways, both of which are
documented well by Sorenson (1971) and Verhaegen (1999). The first is when
Pk loses its symmetry, which can happen through accumulated numerical errors as the
Kalman filter processes samples; it is better to propagate a symmetric version of Pk.
● The second type of numerical problem is when the error covariance matrix becomes
too small, thereby reducing the Kalman gain. A reduced Kalman gain implies newer
samples are given too little weight. This happens when the plant noise is very
small; two standard countermeasures are sketched below.
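Both problems are commonly addressed as in this minimal MATLAB sketch (the toy values are illustrative; in practice P, K, H, R come from the running filter): re-symmetrize P after each update, and use the Joseph-form covariance update, which preserves symmetry and positive semidefiniteness even with roundoff.

% Toy dimensions; in practice these come from the running filter.
n = 2; P = eye(n); H = [1 0]; R = 4;
K = P*H'/(H*P*H' + R);                % gain for this step
I = eye(n);
P = (I - K*H)*P*(I - K*H)' + K*R*K';  % Joseph-form covariance update
P = (P + P')/2;                       % re-symmetrize against roundoff drift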
Comparison of continuous-time and discrete-time Kalman filter equations
How do we relate the A and F matrices? The best way is to use first-order
approximations to arrive at the difference equations from the differential equations.
The steps are not shown here, but the final expressions are
$$A \approx I + F\,T, \qquad Q_d \approx G\,Q\,G^{\mathsf T}\,T,$$
i.e., the discrete state matrix is the first-order truncation of the matrix exponential
$e^{FT}$, and the discrete process-noise covariance scales the continuous one by the
sample interval T; the measurement matrix is unchanged.
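A quick MATLAB check of this approximation (c2d is the standard discretization routine from the Control System Toolbox; the numerical values are illustrative):

% Continuous-time model matrices (example values).
F = [0 1; 0 0]; G = [0; 1]; H = [1 0]; T = 0.1;
A_first_order = eye(2) + F*T    % first-order approximation
A_exact = expm(F*T)             % exact matrix exponential
sysd = c2d(ss(F, G, H, 0), T);  % full discretization for comparison
A_c2d = sysd.A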
Computational complexity of the Kalman filter
● It depends on the number of states in the state vector and the number of
observations, whichever is bigger.
● If the number of states is n and the number of observations is m, the
computational complexity can be expressed very simply as O(max(n^3, m^3)).
● This cubic relationship shows that the Kalman filter is computationally
expensive, especially compared to LMS, fast LMS, or even RLS.
Exactly what was Kalman's contribution
● The two most significant contributions that the Kalman filter makes are as follows:
● The Kalman filter equations are very suitable for implementation on a digital
computer. This is what led to its adoption as a core engine in the Apollo program.
The user of a Kalman filter today just needs to know basic matrix operations, without
any understanding of the underlying theory. It is very easy to modify the Kalman
filter equations so that matrix inversion is avoided; this is especially so for (I-KC)^-1.
● In contrast, the Wiener filter requires matrix inversion and spectral factorization.
● The recursive nature of the Kalman filter, and the fact that each update requires
only one additional sample, also helped its adoption.
DKF and EKF
● DKF – the discrete Kalman filter; this is the form in which Kalman invented the filter.
Primarily applicable to linear state models, it has been proven in practice for the last
50 years.
● The DKF is a robust state estimator with separate models for process and
measurement noise.
● It rests on the three assumptions listed by Maybeck: whiteness of the process and
measurement noise, Gaussianness of the process and measurement noise, and a
linear state equation.
● The DKF is applicable to practically all forms of linear signal processing.
● EKF – the extended Kalman filter, which is suited for non-linear state models.
● The EKF has many flavors; in fact, each author has come up with her/his own
interpretation of the EKF.
● EKF performance is mixed: it does not work very well if the process and
measurement noise differ from Gaussian. A generic EKF update is sketched below.
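For concreteness, here is a minimal sketch of one generic EKF cycle in MATLAB (the function handles f, h and their Jacobians F_jac, H_jac are placeholders for a user-supplied nonlinear model; this is one common formulation, not the only one):

function [xhat, P] = ekf_step(xhat, P, z, f, F_jac, h, H_jac, Q, R)
% One predict/correct cycle of a generic extended Kalman filter.
xpred = f(xhat);                  % nonlinear state propagation
F = F_jac(xhat);                  % Jacobian of f at the old estimate
Ppred = F*P*F' + Q;               % linearized covariance propagation
H = H_jac(xpred);                 % Jacobian of h at the prediction
K = Ppred*H'/(H*Ppred*H' + R);    % Kalman gain
xhat = xpred + K*(z - h(xpred));  % correct with nonlinear measurement
P = (eye(size(P,1)) - K*H)*Ppred; % a posteriori covariance
end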
3-dimensional position tracking - an example
Full form of the A matrix
How the matrix elements are derived
How the matrix elements are derived - Y axis
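As a sketch of what the full A matrix looks like (assuming the same constant-velocity model per axis as in the 1-D tracker above, with state $[\,x\;\dot{x}\;y\;\dot{y}\;z\;\dot{z}\,]^{\mathsf T}$):
$$A = \begin{bmatrix} 1 & T & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & T & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & T \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix},$$
each 2x2 diagonal block being the familiar $\begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}$ from the 1-D case; the Y-axis block (rows 3 and 4) is derived exactly like the X-axis block.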