Exp ML
Aim: Implement a Bayesian Belief Network for the following problem statement
Theory:
A Bayesian Network (also called a Belief Network or Bayes Net) is a probabilistic graphical model
representing a set of variables and their conditional dependencies via a directed acyclic graph
(DAG). It models complex relationships between random variables and uses probabilities to reason
under uncertainty.
Key Concepts
1. Structure (DAG): Each node represents a random variable, and each directed edge encodes a
direct dependency; the graph must contain no cycles.
2. Conditional Probability Tables (CPTs): Each node stores the probability of each of its values
given every combination of its parents' values.
3. Joint Probability Factorization: The chain rule factors the full joint distribution as
P(X1, ..., Xn) = P(X1 | Parents(X1)) × ... × P(Xn | Parents(Xn)).
o This approach breaks down a complex, multi-variable probability calculation into simpler,
manageable terms.
4. Inference in Bayesian Networks:
o Inference involves computing the probability of a variable given evidence about other
variables. For example, in a medical diagnosis network, one might want to calculate the
probability of a disease given observed symptoms.
o Common inference techniques include (a brute-force enumeration sketch appears after this list):
 Variable Elimination: sums out hidden variables one at a time, avoiding enumeration of the full joint distribution.
 Belief Propagation: passes messages (probabilities) between nodes to update each node's beliefs.
 Sampling Methods: uses Monte Carlo simulation to approximate the desired probabilities.
5. Applications of Bayesian Networks:
o Bayesian Networks are widely used in areas like medical diagnosis, decision-making, fault
detection, and NLP, where reasoning with uncertainty and dependencies is essential.
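To make the factorization and inference ideas concrete, below is a minimal Python sketch of
inference by enumeration, the brute-force baseline that variable elimination improves upon. The
network (Rain → Sprinkler, with both feeding WetGrass), every probability value, and the helper
names P_rain, P_sprinkler, P_wet, and joint are illustrative assumptions, not part of this
experiment's problem statement.

from itertools import product

# Illustrative CPTs (assumed values, not from the problem statement)
P_rain = {True: 0.2, False: 0.8}                    # P(Rain)
P_sprinkler = {True: {True: 0.01, False: 0.99},     # P(Sprinkler | Rain)
               False: {True: 0.40, False: 0.60}}
P_wet = {(True, True): 0.99, (True, False): 0.80,   # P(WetGrass | Rain, Sprinkler)
         (False, True): 0.90, (False, False): 0.00}

def joint(r, s, w):
    # Chain-rule factorization: P(R, S, W) = P(R) * P(S | R) * P(W | R, S)
    p_w = P_wet[(r, s)] if w else 1.0 - P_wet[(r, s)]
    return P_rain[r] * P_sprinkler[r][s] * p_w

# Query P(Rain=True | WetGrass=True): sum out the hidden Sprinkler variable
numerator = sum(joint(True, s, True) for s in (True, False))
evidence = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(f"P(Rain=True | WetGrass=True) = {numerator / evidence:.3f}")  # ~0.358

Summing the factored joint over the hidden Sprinkler variable answers the query using only the
three small CPTs, which is exactly the "simpler, manageable terms" idea described above.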
Bayesian Networks provide an efficient way to model systems with probabilistic dependencies,
breaking down complex relationships into manageable, conditional probabilities. This allows for
accurate prediction and reasoning even in cases where information is incomplete.
Code:
from collections import defaultdict
import numpy as np

class NaiveBayesClassifier:
    def __init__(self):
        self.class_prior = {}
        self.classes = []
        self.feature_likelihoods = {}

    def fit(self, X, y):
        # Count how often each class label appears in the training data
        class_counts = defaultdict(int)
        for label in y:
            class_counts[label] += 1
        total_count = len(y)
        self.classes = list(class_counts.keys())

        # Prior P(class) = relative frequency of the class
        self.class_prior = {label: class_counts[label] / total_count
                            for label in self.classes}

        # Likelihood P(feature=value | class) = relative frequency of the
        # value among training samples of that class
        value_counts = {label: defaultdict(lambda: defaultdict(int))
                        for label in self.classes}
        for sample, label in zip(X, y):
            for feature, value in sample.items():
                value_counts[label][feature][value] += 1
        self.feature_likelihoods = {
            label: {feature: {value: count / sum(counts.values())
                              for value, count in counts.items()}
                    for feature, counts in value_counts[label].items()}
            for label in self.classes
        }

    def predict(self, X):
        # Log-posterior for each class: log P(class) + sum of log-likelihoods
        posteriors = {}
        for label in self.classes:
            posterior = np.log(self.class_prior[label])
            for feature, value in X.items():
                if (feature in self.feature_likelihoods[label]
                        and value in self.feature_likelihoods[label][feature]):
                    posterior += np.log(self.feature_likelihoods[label][feature][value])
                else:
                    # Unseen feature value: small constant avoids log(0)
                    posterior += np.log(1e-6)
            posteriors[label] = posterior
        # Return the class with the highest log-posterior
        return max(posteriors, key=posteriors.get)

data = [
    {'Age': '1', 'Food': 'Meat'},
    {'Age': '3', 'Food': 'Meat'},
    {'Age': '7', 'Food': 'Grass'},
    {'Age': '10', 'Food': 'Meat'},
    {'Age': '3', 'Food': 'Grass'},
    {'Age': '9', 'Food': 'Grass'},
    {'Age': '5', 'Food': 'Meat'},
    {'Age': '6', 'Food': 'Grass'}
]
labels = ['Dangerous Tiger', 'Dangerous Tiger', 'Zebra', 'Tiger', 'Zebra',
          'Zebra', 'Tiger', 'Zebra']

nb_classifier = NaiveBayesClassifier()
nb_classifier.fit(data, labels)
The classifier above applies Naive Bayes, which can be viewed as the simplest Bayesian network:
the class node is the lone parent of every feature node, so features are conditionally
independent given the class. Each class's log-posterior is its log-prior plus the sum of the
log-likelihoods of the observed feature values.
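As a brief usage sketch, the fitted classifier can now score a new sample; the query below is an
illustrative example, not part of the original dataset:

# Illustrative query sample (assumed, not from the lab handout)
query = {'Age': '3', 'Food': 'Meat'}
print(nb_classifier.predict(query))  # -> 'Dangerous Tiger' for this training set

Because Age '3' and Food 'Meat' both appear among the 'Dangerous Tiger' training examples, that
class receives the highest log-posterior.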
Output:
Conclusion:
The Bayesian Network code demonstrates how to model probabilistic relationships among variables
by capturing dependencies through directed edges and conditional probabilities. Using the joint
probability distribution and inference methods, a Bayesian Network efficiently calculates the
likelihood of outcomes, enabling predictions under uncertainty. This approach is valuable for
applications like diagnosis and decision-making, where understanding and computing with
conditional dependencies are essential.