Exact Inference in Bayesian Networks

Introduction
Exact inference in Bayesian Networks (BNs) involves determining the exact probabilities of
query variables given some observed evidence. A Bayesian Network is a probabilistic
graphical model that uses a Directed Acyclic Graph (DAG) to represent random variables
(nodes) and their conditional dependencies (edges).

The goal of exact inference is to compute the posterior distribution of the query variables,
denoted as P(Q | E), where Q represents the query variables and E is the set of evidence
variables.

Key Components of Exact Inference

1. Bayesian Network Structure:
- Represents variables and their dependencies using a DAG.
- Each variable X_i is associated with a Conditional Probability Table (CPT) P(X_i | Parents(X_i)), which quantifies these dependencies; a minimal representation is sketched in code after this list.

2. Problem Statement:

- Given:
  - Query variables (Q).
  - Evidence variables (E = e).
- Compute P(Q | E = e), which requires:
P(Q | E = e) = P(Q, E = e) / P(E = e)
- Here, P(Q, E = e) is the joint probability of Q and the evidence, and P(E = e) is the marginal probability of the evidence, found by summing the joint distribution over all possible states of the non-evidence variables.
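
To make the CPT idea concrete, here is a minimal sketch (in Python, with made-up variable names and probabilities) of how the tables for a two-node network A -> B might be stored and combined; it is an illustration, not a standard library API.

    # P(A): prior distribution over the two states of A
    P_A = {True: 0.3, False: 0.7}

    # P(B | A): one distribution over B for each state of the parent A
    P_B_given_A = {
        True:  {True: 0.9, False: 0.1},   # row for A = True
        False: {True: 0.2, False: 0.8},   # row for A = False
    }

    # Chain rule of Bayesian Networks: the probability of a full
    # assignment is the product of the matching CPT entries.
    def joint(a, b):
        return P_A[a] * P_B_given_A[a][b]

    print(joint(True, True))  # 0.3 * 0.9 = 0.27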

Methods for Exact Inference

1. Enumeration
This method computes the full joint probability distribution and then marginalizes out the variables that are neither queried nor observed.

Steps:

1. Express the full joint distribution as a product of local probabilities using the chain rule of Bayesian Networks:
P(X_1, ..., X_n) = ∏_i P(X_i | Parents(X_i))
2. Marginalize over the hidden variables (H) and normalize:
P(Q | E = e) = (∑_H P(Q, H, E = e)) / (∑_Q ∑_H P(Q, H, E = e))
Limitation: This approach is computationally expensive because the number of joint entries grows exponentially with the number of variables.
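
As an illustration of enumeration, here is a minimal Python sketch for a three-variable chain A -> B -> C that computes P(A | C = c) by summing the full joint over the hidden variable B. The CPT numbers and helper names are illustrative assumptions, not from the original text.

    # Illustrative CPTs for the chain A -> B -> C
    P_A = {0: 0.6, 1: 0.4}
    P_B_given_A = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.1, 1: 0.9}}
    P_C_given_B = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.4, 1: 0.6}}

    def joint(a, b, c):
        # Chain rule: P(a, b, c) = P(a) P(b|a) P(c|b)
        return P_A[a] * P_B_given_A[a][b] * P_C_given_B[b][c]

    def query_A_given_C(c):
        # Sum the full joint over the hidden variable B for each value of A,
        # then normalize by P(C = c), the sum over A as well.
        unnormalized = {a: sum(joint(a, b, c) for b in (0, 1)) for a in (0, 1)}
        z = sum(unnormalized.values())
        return {a: p / z for a, p in unnormalized.items()}

    print(query_A_given_C(1))  # posterior P(A | C = 1)

For n binary variables this style of computation touches 2^n joint entries, which is exactly the exponential blow-up noted above.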

2. Variable Elimination
A more efficient method that avoids computing the full joint distribution by summing out
variables systematically.

Steps:

1. Identify the variables to be summed out (those not in Q or E).
2. Rearrange terms in P(Q, E = e) so that the sums are computed in an order that minimizes the size of the intermediate factors.
3. Use marginalization and the distributive property to eliminate these variables one at a time, storing each intermediate result as a factor.

Example:

Suppose P(A, B, C, D) = P(A)P(B|A)P(C|B)P(D|C). To compute P(A | D), sum out B and C efficiently:
P(A | D) = (∑_B ∑_C P(A)P(B|A)P(C|B)P(D|C)) / (∑_A ∑_B ∑_C P(A)P(B|A)P(C|B)P(D|C))
By the distributive property, the numerator factors as P(A) ∑_C P(D|C) ∑_B P(B|A)P(C|B), so no intermediate table over more than two variables is ever built.
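
The same chain can be handled in code without ever building the full joint. A minimal Python sketch follows, assuming illustrative CPT numbers and hypothetical helper names; it eliminates B first and then C, matching the factorization above.

    # Illustrative CPTs for the chain A -> B -> C -> D
    P_A = {0: 0.5, 1: 0.5}
    P_B_given_A = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}
    P_C_given_B = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.2, 1: 0.8}}
    P_D_given_C = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.1, 1: 0.9}}

    def query_A_given_D(d):
        # Eliminate B: f1(a, c) = sum_B P(b|a) P(c|b)
        f1 = {(a, c): sum(P_B_given_A[a][b] * P_C_given_B[b][c] for b in (0, 1))
              for a in (0, 1) for c in (0, 1)}
        # Eliminate C: f2(a) = sum_C f1(a, c) P(D=d|c)
        f2 = {a: sum(f1[(a, c)] * P_D_given_C[c][d] for c in (0, 1))
              for a in (0, 1)}
        # Multiply in the prior P(A) and normalize over A
        unnormalized = {a: P_A[a] * f2[a] for a in (0, 1)}
        z = sum(unnormalized.values())
        return {a: p / z for a, p in unnormalized.items()}

    print(query_A_given_D(1))  # posterior P(A | D = 1)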

3. Belief Propagation (Message Passing)
Efficient for tree-structured networks, this method computes marginals by passing messages (probability distributions) between neighboring nodes in the network.

Steps:

1. Pass messages from the leaf nodes to the root (upward pass).
2. Compute the marginal probabilities at the root.
3. Pass messages back down from the root to compute the marginals of the remaining nodes (downward pass).
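
For a concrete (if tiny) tree, the chain A -> B -> C with evidence C = c admits both passes directly. The sketch below is a minimal illustration with made-up numbers; each message is a small table indexed by the receiving variable's states.

    # Illustrative CPTs for the chain A -> B -> C
    P_A = {0: 0.6, 1: 0.4}
    P_B_given_A = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.1, 1: 0.9}}
    P_C_given_B = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.4, 1: 0.6}}

    def normalize(f):
        z = sum(f.values())
        return {k: v / z for k, v in f.items()}

    c = 1  # observed value of C

    # Upward pass (leaf C toward root A):
    # the message C -> B is the evidence likelihood P(C = c | B)
    m_CB = {b: P_C_given_B[b][c] for b in (0, 1)}
    # the message B -> A sums B out of P(B|A), weighted by the incoming message
    m_BA = {a: sum(P_B_given_A[a][b] * m_CB[b] for b in (0, 1)) for a in (0, 1)}

    # Root marginal: P(A | C = c) is proportional to P(A) * m_BA(A)
    posterior_A = normalize({a: P_A[a] * m_BA[a] for a in (0, 1)})

    # Downward pass: the message A -> B marginalizes A out of P(A) P(B|A)
    m_AB = {b: sum(P_A[a] * P_B_given_A[a][b] for a in (0, 1)) for b in (0, 1)}
    # P(B | C = c) is proportional to the product of messages from both sides
    posterior_B = normalize({b: m_AB[b] * m_CB[b] for b in (0, 1)})

    print(posterior_A, posterior_B)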

4. Junction Tree Algorithm
Used for networks whose undirected structure contains cycles, where tree-based belief propagation cannot be applied directly.

Steps:

1. Convert the graph into a junction tree by clustering nodes into cliques.
2. Perform message passing between cliques to compute marginal probabilities.
3. Combine results to infer the posterior probabilities.
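
Implementing a junction tree from scratch is involved, so in practice a library is typically used. The sketch below assumes the open-source pgmpy package (pip install pgmpy), whose BeliefPropagation class builds a junction tree internally; class names may differ between pgmpy versions. The network and numbers are the classic "sprinkler" example, chosen because its undirected skeleton contains a cycle.

    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import BeliefPropagation

    # Cloudy -> {Sprinkler, Rain} -> WetGrass: the undirected skeleton
    # has the loop Cloudy-Sprinkler-WetGrass-Rain-Cloudy.
    model = BayesianNetwork([('Cloudy', 'Sprinkler'), ('Cloudy', 'Rain'),
                             ('Sprinkler', 'WetGrass'), ('Rain', 'WetGrass')])
    model.add_cpds(
        TabularCPD('Cloudy', 2, [[0.5], [0.5]]),
        TabularCPD('Sprinkler', 2, [[0.5, 0.9], [0.5, 0.1]],
                   evidence=['Cloudy'], evidence_card=[2]),
        TabularCPD('Rain', 2, [[0.8, 0.2], [0.2, 0.8]],
                   evidence=['Cloudy'], evidence_card=[2]),
        TabularCPD('WetGrass', 2, [[1.0, 0.1, 0.1, 0.01], [0.0, 0.9, 0.9, 0.99]],
                   evidence=['Sprinkler', 'Rain'], evidence_card=[2, 2]),
    )
    model.check_model()

    # BeliefPropagation triangulates the graph, builds the junction tree,
    # and passes messages between cliques.
    bp = BeliefPropagation(model)
    print(bp.query(variables=['Rain'], evidence={'WetGrass': 1}))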

Challenges of Exact Inference

- Exponential Complexity: The number of terms grows exponentially with the number of
variables in dense networks.
- Treewidth: The complexity of inference depends on the size of the largest clique in the
junction tree (treewidth). Networks with high treewidth require significant computational
resources.
- Scalability: Exact inference becomes intractable for large-scale networks with many
variables and dependencies.

Example
Consider a Bayesian Network with three variables A, B, and C, where P(A), P(B|A), and P(C|B) are given.

To compute P(A|C = c):

1. Use the chain rule to express P(A, C = c), summing out the hidden variable B:
P(A, C = c) = ∑_B P(A) P(B|A) P(C = c | B)
2. Normalize to find P(A | C = c):
P(A | C = c) = P(A, C = c) / ∑_A P(A, C = c)
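
With illustrative numbers (not given in the original), suppose P(A=1) = 0.4, P(B=1|A=1) = 0.9, P(B=1|A=0) = 0.3, P(C=1|B=1) = 0.6, and P(C=1|B=0) = 0.2. Then:

P(A=1, C=1) = 0.4 × (0.9 × 0.6 + 0.1 × 0.2) = 0.4 × 0.56 = 0.224
P(A=0, C=1) = 0.6 × (0.3 × 0.6 + 0.7 × 0.2) = 0.6 × 0.32 = 0.192
P(A=1 | C=1) = 0.224 / (0.224 + 0.192) ≈ 0.538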

Applications
- Medical diagnosis (e.g., identifying diseases based on symptoms).
- Predictive systems (e.g., fault diagnosis in machines).
- Decision-making (e.g., financial risk analysis).

Exact inference is foundational to Bayesian reasoning, but in large, complex networks it is often replaced by approximate methods such as sampling.
