
19AI409 - APPLIED ARTIFICIAL INTELLIGENCE

PROBABILISTIC REASONING II
UNIT II
Contents

Probabilistic reasoning over time


 Time and uncertainty

 Inference in temporal models

 Hidden Markov Models

 Kalman filters

 Dynamic Bayesian networks


 Probabilistic programming
DYNAMIC BAYESIAN NETWORK

• A dynamic Bayesian network (DBN) is a Bayesian network that represents a temporal probability model.

• Each slice of a DBN can have any number of state variables X_t and evidence variables E_t.
• Assume that the variables and their links are exactly replicated from slice to slice and that the DBN represents a first-order Markov process, so that each variable can have parents only in its own slice or the immediately preceding slice.
• Every hidden Markov model can be represented as a DBN with a single state variable and a single evidence variable.

DBN for Robot
Monitoring a battery-powered robot moving in the X–Y plane:
 State variables: X_t = (X_t, Y_t) for position and X'_t = (X'_t, Y'_t) for velocity.
 Some method of measuring position, perhaps a fixed camera or onboard GPS (Global Positioning System), yielding measurements Z_t.

 The position at the next time step depends on the current position and velocity, as in the standard Kalman filter model.
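A minimal sketch of this transition and sensor model in Python (the time step and noise values are illustrative assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0           # assumed time step
sigma_trans = 0.1  # assumed transition noise std dev
sigma_sense = 0.5  # assumed measurement noise std dev

def transition(pos, vel):
    """Next position depends on current position and velocity (linear-Gaussian)."""
    return pos + vel * dt + rng.normal(0.0, sigma_trans, size=2)

def sense(pos):
    """Noisy position measurement Z_t, e.g. from a camera or GPS."""
    return pos + rng.normal(0.0, sigma_sense, size=2)

pos, vel = np.array([0.0, 0.0]), np.array([1.0, 0.5])
for t in range(3):
    z = sense(pos)
    print(f"t={t}  true pos={pos.round(2)}  measurement Z_t={z.round(2)}")
    pos = transition(pos, vel)
```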
DYNAMIC BAYESIAN NETWORK
 For continuous measurements, a Gaussian error distribution with a small variance might be used as the sensor model.
 Alternatively, approximate the Gaussian with a distribution in which the probability of error drops off in the appropriate way, so that the probability of a large error is very small.
 Hands-on experience with robotics and computerized process control shows that real sensors fail; the most common kind of failure is a transient failure.
 When a transient failure occurs and the Gaussian error model does not accommodate such failures, a single erroneous reading can throw the state estimate badly off.
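A small illustration of the problem (the battery levels, prior, and variance below are hypothetical numbers, not from the slides): with a pure Gaussian sensor model, one spurious reading of 0 drags the posterior toward 0, because the model assigns essentially no probability to a large error.

```python
import numpy as np

def gaussian_pdf(x, mean, sigma):
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

levels = np.arange(6)                                     # assumed discrete battery levels 0..5
prior = np.array([0.01, 0.01, 0.01, 0.01, 0.01, 0.95])    # fairly sure the level is 5
sigma = 0.5                                               # assumed small sensor variance

reading = 0.0                                             # transient failure: the meter suddenly reads 0
posterior = prior * gaussian_pdf(reading, levels, sigma)  # Bayesian update with the Gaussian model
posterior /= posterior.sum()
print(posterior.round(3))  # the belief in level 5 collapses after a single bad reading
```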
DYNAMIC BAYESIAN NETWORK

Even subtle phenomena, such as sensor drift, sudden de-calibration, and the effects of exogenous conditions (such as weather) on sensor readings, can be handled by explicit representation within dynamic Bayesian networks.
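A sketch of the standard way to make the previous example robust by representing failure explicitly in the sensor model (the mixture weight and failure behaviour below are assumptions for illustration): with small probability the sensor has failed and reads 0 regardless of the true state.

```python
import numpy as np

def gaussian_pdf(x, mean, sigma):
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

levels = np.arange(6)
prior = np.array([0.01, 0.01, 0.01, 0.01, 0.01, 0.95])
sigma, p_fail = 0.5, 0.03   # assumed: 3% chance of a transient failure that reads 0

def sensor_likelihood(reading, level):
    """Mixture sensor model: usually Gaussian around the true level,
    but with small probability the sensor fails and reads 0."""
    normal = gaussian_pdf(reading, level, sigma)
    failure = 1.0 if reading == 0.0 else 0.0
    return (1 - p_fail) * normal + p_fail * failure

posterior = prior * sensor_likelihood(0.0, levels)
posterior /= posterior.sum()
print(posterior.round(3))  # most of the mass stays on level 5 despite the bad reading
```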
DYNAMIC BAYESIAN NETWORK - Inference
• Dynamic Bayesian networks are Bayesian networks, so the inference algorithms already developed for Bayesian networks apply.
• Given a sequence of observations, one can construct the full Bayesian network representation of a DBN by replicating slices until the network is large enough to accommodate the observations. This technique is called unrolling.
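A minimal sketch of unrolling (the slice template and variable names are illustrative assumptions): replicate the slice once per observation and link each slice to the previous one.

```python
def unroll(num_steps):
    """Build an unrolled network as a dict: node -> list of parent nodes.
    Assumed slice template: state X_t with parent X_{t-1}, evidence Z_t with parent X_t."""
    network = {"X_0": []}                 # prior slice has no parents
    for t in range(1, num_steps + 1):
        network[f"X_{t}"] = [f"X_{t-1}"]  # inter-slice link replicated for every slice
        network[f"Z_{t}"] = [f"X_{t}"]    # intra-slice sensor link replicated for every slice
    return network

print(unroll(3))
# {'X_0': [], 'X_1': ['X_0'], 'Z_1': ['X_1'], 'X_2': ['X_1'], 'Z_2': ['X_2'], ...}
```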
DYNAMIC BAYESIAN NETWORK - Inference
Can use any of the inference algorithms on the unrolled network: variable elimination, clustering methods, and so on.

 If we want to perform filtering or smoothing with a long sequence of observations e_1:t, the unrolled network would require O(t) space and would thus grow without bound as more observations were added.
 Moreover, if we simply run the inference algorithm anew each time an observation is added, the inference time per update will also increase as O(t).
 Constant time and space per filtering update can be achieved if the computation can be done recursively: each update adds the variables of the new slice and then sums out the variables of the previous slice.
 Summing out variables is exactly what the variable elimination (Figure 14.11) algorithm does, and it turns out that running variable elimination with the variables in temporal order exactly mimics the operation of the recursive filtering update (sketched below).
 As the variable elimination proceeds, the factors grow to include all the state variables.

 The maximum factor size is O(d^(n+k)) and the total update cost per step is O(n d^(n+k)), where n is the number of state variables, d is the domain size of the variables, and k is the maximum number of parents of any state variable.
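A minimal sketch of the recursive filtering update for a DBN with a single discrete state variable (the transition and sensor tables are illustrative assumptions): only the current belief is kept, so space per update is constant rather than O(t).

```python
import numpy as np

# Assumed two-state example: state in {0, 1}
T = np.array([[0.7, 0.3],    # P(X_{t+1} | X_t = 0)
              [0.3, 0.7]])   # P(X_{t+1} | X_t = 1)
S = np.array([[0.9, 0.1],    # P(e | X = 0) for e in {0, 1}
              [0.2, 0.8]])   # P(e | X = 1)

def filter_step(belief, evidence):
    """One recursive filtering update: predict (sum out X_t), then condition on e_{t+1}."""
    predicted = belief @ T                 # sum_x P(X_{t+1} | x) P(x | e_1:t)
    updated = S[:, evidence] * predicted   # multiply in P(e_{t+1} | X_{t+1})
    return updated / updated.sum()         # normalize

belief = np.array([0.5, 0.5])              # assumed uniform prior
for e in [1, 1, 0]:                        # a short evidence sequence
    belief = filter_step(belief, e)
    print(belief.round(3))
```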
Even though we can use DBNs to represent very complex
temporal processes with many sparsely connected variables,
we cannot reason efficiently and exactly about those
processes.

Approximate Inference

 Candidate algorithms: likelihood weighting and Markov chain Monte Carlo.
 Adapting likelihood weighting to the sequential setting leads to particle filtering: a fixed-size population of samples (particles) is propagated forward through the transition model, weighted by the evidence, and resampled.
 In the propagation step, the number of samples reaching state x_t+1 from each state x_t is roughly proportional to the transition probability, so the sample population tracks the posterior.
 Does this work with a constant number of samples? In practice, it seems that the answer is yes: particle filtering seems to maintain a good approximation to the true posterior using a constant number of samples.
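A minimal particle filtering sketch for an assumed two-state model (the same toy transition and sensor tables as in the filtering sketch above; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000                                   # constant number of particles per step
T = np.array([[0.7, 0.3], [0.3, 0.7]])     # assumed P(X_{t+1} | X_t)
S = np.array([[0.9, 0.1], [0.2, 0.8]])     # assumed P(e | X)

def particle_filter_step(particles, evidence):
    """Propagate each particle through the transition model, weight it by the
    evidence likelihood, then resample N particles in proportion to the weights."""
    propagated = np.array([rng.choice(2, p=T[x]) for x in particles])
    weights = S[propagated, evidence]
    weights = weights / weights.sum()
    return rng.choice(propagated, size=N, p=weights)

particles = rng.choice(2, size=N)          # assumed uniform prior over {0, 1}
for e in [1, 1, 0]:
    particles = particle_filter_step(particles, e)
    print("P(X=1 | evidence so far) approx", (particles == 1).mean())
```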
