
Experiment 10

Aim: Implement a Bayesian Belief Network for the given problem statement

Theory:
A Bayesian Network (also called a Belief Network or Bayes Net) is a probabilistic graphical model
representing a set of variables and their conditional dependencies via a directed acyclic graph
(DAG). It models complex relationships between random variables and uses probabilities to reason
under uncertainty.

Key Concepts

1. Nodes and Edges:
   o Each node represents a random variable (e.g., symptoms, diseases, or weather conditions).
   o Directed edges between nodes indicate conditional dependencies. An edge from node A to node B suggests that B depends on A, or that knowledge of A affects the probability of B.
2. Conditional Probability Tables (CPTs):
   o Each node has a Conditional Probability Table that quantifies the effect of the parent nodes on it. For example, if node B depends on A, the CPT of B provides values for P(B∣A), the probability of B given A.
   o These tables capture the relationships and are the building blocks for calculating probabilities in the network.
3. Joint Probability Distribution:
   o A Bayesian Network represents the joint probability distribution of all variables using conditional probabilities for each node given its parents. This factorization greatly reduces the number of probabilities needed to represent the system:

     P(X1, X2, ..., Xn) = Π_i P(Xi | Parents(Xi))

   o This approach breaks down a complex, multi-variable probability calculation into simpler, manageable terms.
4. Inference in Bayesian Networks:
   o Inference involves computing the probability of a variable given evidence about other variables. For example, in a medical diagnosis network, one might want to calculate the probability of a disease given observed symptoms.
   o Inference techniques include (short sketches of enumeration and sampling follow this section):
      Variable Elimination: Removes irrelevant variables to simplify the calculation.
      Belief Propagation: Passes probabilities between nodes to update beliefs.
      Sampling Methods: Uses methods like Monte Carlo to approximate probabilities.
5. Applications of Bayesian Networks:
   o Bayesian Networks are widely used in areas like medical diagnosis, decision-making, fault detection, and NLP, where reasoning with uncertainty and dependencies is essential.
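
As a small illustration of the factorization and enumeration ideas in points 3 and 4, below is a minimal sketch in Python of the classic rain/sprinkler/wet-grass network. The structure and CPT values here are illustrative assumptions, not part of this experiment:

# A three-node network: Rain -> Sprinkler, and both Rain and Sprinkler -> GrassWet.
# CPT entries give P(node = True | parents); complements give the False case.
P_rain = 0.2                              # P(Rain = True)
P_sprinkler = {True: 0.01, False: 0.40}   # P(Sprinkler = True | Rain)
P_wet = {                                 # P(GrassWet = True | Sprinkler, Rain)
    (True, True): 0.99, (True, False): 0.90,
    (False, True): 0.80, (False, False): 0.00,
}

def joint(rain, sprinkler, wet):
    # Factorized joint: P(R, S, W) = P(R) * P(S | R) * P(W | S, R)
    p = P_rain if rain else 1 - P_rain
    p *= P_sprinkler[rain] if sprinkler else 1 - P_sprinkler[rain]
    p *= P_wet[(sprinkler, rain)] if wet else 1 - P_wet[(sprinkler, rain)]
    return p

# Inference by enumeration: P(Rain = True | GrassWet = True)
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print("P(Rain | GrassWet) =", num / den)  # about 0.358 for these CPTs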

Bayesian Networks provide an efficient way to model systems with probabilistic dependencies,
breaking down complex relationships into manageable, conditional probabilities. This allows for
accurate prediction and reasoning even in cases where information is incomplete.
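
The same network also supports the sampling-based inference mentioned in point 4. A minimal rejection-sampling sketch, reusing the CPTs from the example above (the sample count and seed are arbitrary choices):

import random

random.seed(0)
accepted = rain_and_wet = 0
for _ in range(100_000):
    # Sample each node given its already-sampled parents (ancestral sampling)
    rain = random.random() < P_rain
    sprinkler = random.random() < P_sprinkler[rain]
    wet = random.random() < P_wet[(sprinkler, rain)]
    if wet:                        # keep only samples consistent with the evidence
        accepted += 1
        rain_and_wet += rain
print("Estimated P(Rain | GrassWet) =", rain_and_wet / accepted)  # close to 0.358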

Code:
from collections import defaultdict
import numpy as np

class NaiveBayesClassifier:
    def __init__(self):
        self.class_prior = {}          # P(class) for each class label
        self.classes = []              # class labels seen during training
        self.feature_likelihoods = {}  # P(feature=value | class)

    def fit(self, X, y):
        # Count occurrences of each class label
        class_counts = defaultdict(int)
        for label in y:
            class_counts[label] += 1
        total_count = len(y)
        self.classes = list(class_counts.keys())

        # Prior probability of each class: count / total samples
        for label, count in class_counts.items():
            self.class_prior[label] = count / total_count

        # Count feature-value occurrences per class
        feature_counts = {label: defaultdict(lambda: defaultdict(int))
                          for label in self.classes}
        for features, label in zip(X, y):
            for feature, value in features.items():
                feature_counts[label][feature][value] += 1

        # Convert counts to conditional probabilities P(feature=value | class)
        self.feature_likelihoods = {label: {} for label in self.classes}
        for label in self.classes:
            for feature, value_counts in feature_counts[label].items():
                total_feature_count = sum(value_counts.values())
                self.feature_likelihoods[label][feature] = {
                    value: count / total_feature_count
                    for value, count in value_counts.items()
                }

    def predict(self, X):
        # X is a single sample: a dict mapping feature -> value
        posteriors = {}
        for label in self.classes:
            # Work in log space to avoid numerical underflow
            posterior = np.log(self.class_prior[label])
            for feature, value in X.items():
                if (feature in self.feature_likelihoods[label]
                        and value in self.feature_likelihoods[label][feature]):
                    posterior += np.log(self.feature_likelihoods[label][feature][value])
                else:
                    # Unseen feature value: small constant instead of zero probability
                    posterior += np.log(1e-6)
            posteriors[label] = posterior

        # Return the class with the highest log-posterior
        return max(posteriors, key=posteriors.get)

data = [
    {'Age': '1', 'Food': 'Meat'},
    {'Age': '3', 'Food': 'Meat'},
    {'Age': '7', 'Food': 'Grass'},
    {'Age': '10', 'Food': 'Meat'},
    {'Age': '3', 'Food': 'Grass'},
    {'Age': '9', 'Food': 'Grass'},
    {'Age': '5', 'Food': 'Meat'},
    {'Age': '6', 'Food': 'Grass'}
]
labels = ['Dangerous Tiger', 'Dangerous Tiger', 'Zebra', 'Tiger', 'Zebra',
          'Zebra', 'Tiger', 'Zebra']

nb_classifier = NaiveBayesClassifier()
nb_classifier.fit(data, labels)

new_data = {'Age': '10', 'Food': 'Grass'}

prediction = nb_classifier.predict(new_data)
print("Predicted class:", prediction)
Explanation:

1. NaiveBayesClassifier Class: Implements a Naive Bayes classifier for categorical features.
2. fit Method: Trains the classifier:
   o Calculates the prior probability for each class by counting occurrences.
   o Computes feature likelihoods for each class and feature-value pair, which are the conditional probabilities used for making predictions.
3. predict Method: Predicts the class of a new data point:
   o Computes posterior probabilities for each class by combining the class priors and feature likelihoods.
   o Returns the class with the highest posterior probability, representing the most likely class for the input.
4. Example Usage:
   o Trains the model with a dataset of animals and their features.
   o Predicts the class for a new animal given its features.

This functionality uses Naive Bayes principles, where class probabilities are updated based on
feature evidence to make predictions.
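
Tracing predict by hand for new_data = {'Age': '10', 'Food': 'Grass'} makes this concrete; the numbers below follow directly from the training data and the 1e-6 fallback for unseen feature values:

from math import log

zebra = log(4/8) + log(1e-6) + log(4/4)  # prior 4/8; Age '10' unseen; Grass seen 4/4
tiger = log(2/8) + log(1/2) + log(1e-6)  # prior 2/8; Age '10' seen 1/2; Grass unseen
# 'Dangerous Tiger' incurs two unseen-value penalties and scores lower still.
print(round(zebra, 2), round(tiger, 2))  # -14.51 vs -15.89, so 'Zebra' wins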
Output:
Predicted class: Zebra
Conclusion:
The experiment demonstrates how a Bayesian Network models probabilistic relationships among variables by capturing dependencies through directed edges and conditional probabilities; the Naive Bayes classifier implemented above is a simple special case in which every feature depends only on the class label. Using the joint probability distribution and inference methods, a Bayesian Network efficiently calculates the likelihood of outcomes, enabling predictions under uncertainty. This approach is valuable for applications like diagnosis and decision-making, where understanding and computing with conditional dependencies are essential.
