
TABLE OF CONTENTS

S.NO  DATE        LIST OF EXPERIMENTS                                                     FACULTY SIGNATURE

1     31-08-2024  BREADTH FIRST SEARCH AND DEPTH FIRST SEARCH
2     07-08-2024  SOLVE PATH FINDING PROBLEM USING A* SEARCH
3     07-08-2024  EXPLORE THE BASIC PROBABILITY FUNCTIONS
4     21-08-2024  POSTERIOR PROBABILITY
5     28-08-2024  BAYESIAN NETWORKS: IMPLEMENTING CONDITIONAL PROBABILITY DISTRIBUTIONS
6     11-09-2024  PRIOR, POSTERIOR AND PROBABILITY DENSITY FUNCTIONS
7     11-09-2024  BAYESIAN NETWORKS: VARIABLE ELIMINATION
8     28-09-2024  INFERENCE IN TEMPORAL MODELS
9     16-10-2024  HIDDEN MARKOV MODELS (HMM)
10    23-10-2024  KALMAN FILTER

EX NO-01
BREADTH FIRST SEARCH AND DEPTH FIRST SEARCH
DATE

AIM:
To implement the Breadth First Search (BFS) and Depth First Search (DFS)
algorithms on an undirected graph representing the state space of the given problem
and to find the shortest path.

BREADTH FIRST SEARCH


ALGORITHM:

1. Initialization:

• Create a list explored to keep track of visited nodes.
• Create a queue initialized with a list containing the start node ([[start]]). This
queue will help in traversing the graph level by level.
• Check for the trivial case: if the start node is the same as the goal node, print
"Same Node" and return.

2. Traversal:

• While the queue is not empty:
o Dequeue the first path from the queue.
o Extract the last node from this path.
o If this node is not in the explored list:
▪ Get its neighbours from the adjacency list of the graph.
▪ For each neighbour, create a new path by extending the current path
with this neighbour, enqueue the new path, and check whether the
neighbour is the goal node; if it is, print the shortest path and return.
▪ Mark the current node as visited by adding it to the explored list.

3. Completion:

• If the queue is exhausted and the goal node is not found, print a message
indicating that no connecting path exists.

PROGRAM:
import networkx as nx
import matplotlib.pyplot as plt

# Python implementation to find the shortest
# path in the graph using dictionaries

# Function to find the shortest path between two nodes of a graph
def BFS_SP(graph, start, goal):
    explored = []

    # Queue for traversing the graph in the BFS
    queue = [[start]]

    # If the desired node is already reached
    if start == goal:
        print("Same Node")
        return

    # Loop to traverse the graph with the help of the queue
    while queue:
        path = queue.pop(0)
        node = path[-1]

        # Condition to check if the current node is not visited
        if node not in explored:
            neighbours = graph[node]

            # Loop to iterate over the neighbours of the node
            for neighbour in neighbours:
                new_path = list(path)
                new_path.append(neighbour)
                queue.append(new_path)

                # Condition to check if the neighbour node is the goal
                if neighbour == goal:
                    print("Shortest path = ", *new_path)
                    return
            explored.append(node)

    # Condition when the nodes are not connected
    print("So sorry, but a connecting path doesn't exist :(")
    return

# Adjacency list of the undirected graph
graph = {'A': ['B', 'C', 'F'],
         'B': ['A', 'C', 'G'],
         'C': ['A', 'B', 'F', 'D', 'E', 'G'],
         'D': ['C', 'F', 'E', 'J'],
         'E': ['C', 'D', 'G', 'K', 'J'],
         'F': ['A', 'C', 'D'],
         'G': ['C', 'E', 'K'],
         'K': ['E', 'G', 'J'],
         'J': ['D', 'E', 'K']}

# Build a NetworkX graph from the adjacency list and draw it
G = nx.Graph(graph)
nx.draw(G, with_labels=True)
plt.title("Undirected Graph")
plt.show()

BFS_SP(graph, 'A', 'K')

OUTPUT:

Shortest path using BFS = A B G K
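A note on the queue: list.pop(0) shifts every remaining element, so each dequeue costs
O(n). For larger graphs, collections.deque gives O(1) pops from the left. A minimal
sketch of the same traversal with a deque (hypothetical helper name; the logic is
identical to BFS_SP above):

from collections import deque

def bfs_sp_deque(graph, start, goal):
    # Same traversal as BFS_SP, but the queue of paths is a deque
    if start == goal:
        return [start]
    explored = set()
    queue = deque([[start]])
    while queue:
        path = queue.popleft()  # O(1) dequeue instead of list.pop(0)
        node = path[-1]
        if node not in explored:
            for neighbour in graph[node]:
                new_path = path + [neighbour]
                if neighbour == goal:
                    return new_path
                queue.append(new_path)
            explored.add(node)
    return None

# Example: bfs_sp_deque(graph, 'A', 'K') returns ['A', 'B', 'G', 'K']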

DEPTH FIRST SEARCH


ALGORITHM:
Step 1: SET STATUS = 1 (ready state) for each node in G
Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)
Step 3: Repeat Steps 4 and 5 until STACK is empty
Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)
Step 5: Push on the stack all the neighbors of N that are in the ready state (whose
STATUS = 1) and set their STATUS = 2 (waiting state)
[END OF LOOP]
Step 6: EXIT
PROGRAM:
import networkx as nx
import matplotlib.pyplot as plt

def dfs_shortest_path(graph, start, goal):
    visited = set()
    parent = {}
    stack = [(start, 0)]  # Store nodes and their path lengths
    while stack:
        node, path_length = stack.pop()
        visited.add(node)

        if node == goal:
            # Reconstruct the path by walking back through the parents
            path = []
            while node != start:
                path.append(node)
                node = parent[node]
            path.append(start)
            path.reverse()
            return path

        for neighbor in graph[node]:
            if neighbor not in visited:
                parent[neighbor] = node
                stack.append((neighbor, path_length + 1))
    return None  # No path found

graph = {'A': ['B', 'C', 'F'],
         'B': ['A', 'C', 'G'],
         'C': ['A', 'B', 'F', 'D', 'E', 'G'],
         'D': ['C', 'F', 'E', 'J'],
         'E': ['C', 'D', 'G', 'K', 'J'],
         'F': ['A', 'C', 'D'],
         'G': ['C', 'E', 'K'],
         'K': ['E', 'G', 'J'],
         'J': ['D', 'E', 'K']}

G = nx.Graph(graph)
nx.draw(G, with_labels=True)
plt.title("Undirected Graph")
plt.show()

shortest_path = dfs_shortest_path(graph, 'A', 'K')
print("Shortest path:", shortest_path)

OUTPUT:

Shortest path using DFS : ['A', 'F', 'D', 'J', 'K']

RESULT:
The undirected graph was traversed using Breadth First Search and Depth First Search,
and a path was found from the source node (A) to the goal node (K) in each case. BFS,
which explores level by level, returned the shortest path (A B G K); DFS returned a
valid path (A F D J K), which is not guaranteed to be the shortest.
EX NO-02
SOLVE PATH FINDING PROBLEM USING A* SEARCH
DATE

AIM:
To find the shortest path between two nodes in an undirected, weighted graph
using the A* search algorithm. The graph consists of several nodes connected by edges,
each with an associated cost representing the distance or effort required to traverse that
edge.
ALGORITHM:
1. Initialization:
o Start by adding the start node to a priority queue with its total cost.
o Initialize a visited set to keep track of explored nodes.
2. Traversal Loop:
o Pop the node with the lowest total cost (`f`) from the priority queue.
o If this node is the goal node, return the path and cost.
o Otherwise, add the node to the visited set.
o For each neighbour of the current node:
▪ Calculate the new actual cost (`g`) and total cost (`f`).
▪ If the neighbour has not been visited, add it to the priority queue.
3. Termination:
o The loop continues until the goal node is reached or the priority queue is
empty (meaning no path exists).
Implementation Details in Provided Code

• Graph Representation:
o The graph is represented using a dictionary where each node maps to a
list of tuples containing the cost and the connected node.
o Heuristic values are stored in a separate dictionary.
• Priority Queue:
o A heap-based priority queue (heapq) is used to efficiently retrieve the
node with the lowest total cost.
• Handling Paths and Costs:
o Each entry in the priority queue contains the total cost (`f`), actual cost
(`g`), current node, and the path taken to reach this node.
o The algorithm updates and extends the path as it explores new nodes.
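Since each priority-queue entry is a tuple (f, g, node, path), heapq compares entries
lexicographically: lowest f first, with ties broken by lower g. A minimal illustration
of this ordering (values chosen arbitrarily):

import heapq

pq = [(10, 9, 'D', []), (10, 3, 'E', []), (11, 11, 'G', [])]
heapq.heapify(pq)
print(heapq.heappop(pq)[:3])  # (10, 3, 'E'): lowest f wins, ties broken by g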

PROGRAM:
import heapq

class Graph:
    def __init__(self):
        self.edges = {}
        self.h = {}  # Heuristic function

    def add_edge(self, node1, node2, cost):
        if node1 not in self.edges:
            self.edges[node1] = []
        if node2 not in self.edges:
            self.edges[node2] = []

        # Add both directions since it's an undirected graph
        self.edges[node1].append((cost, node2))
        self.edges[node2].append((cost, node1))

    def set_heuristic(self, node, h_value):
        self.h[node] = h_value

    def astar(self, start, goal):
        # Priority queue for A*
        pq = [(0 + self.h[start], 0, start, [])]
        heapq.heapify(pq)
        visited = set()
        while pq:
            (f, g, current, path) = heapq.heappop(pq)

            # Skip nodes that have already been expanded
            if current in visited:
                continue

            # Append current node to path
            path = path + [current]

            # Goal check
            if current == goal:
                return path, g

            visited.add(current)

            # Expand neighbors
            for cost, neighbor in self.edges[current]:
                if neighbor not in visited:
                    new_g = g + cost
                    new_f = new_g + self.h[neighbor]
                    heapq.heappush(pq, (new_f, new_g, neighbor, path))

        return None, float('inf')  # If no path is found

# Initialize the graph
graph = Graph()

# Add edges (undirected)
graph.add_edge('A', 'B', 2)
graph.add_edge('A', 'E', 3)
graph.add_edge('B', 'C', 1)
graph.add_edge('B', 'G', 9)
graph.add_edge('E', 'D', 6)
graph.add_edge('D', 'G', 1)

# Set heuristic values (h function)
graph.set_heuristic('A', 11)
graph.set_heuristic('B', 6)
graph.set_heuristic('C', 99)
graph.set_heuristic('D', 1)
graph.set_heuristic('E', 7)
graph.set_heuristic('G', 0)

# Run A* search
path, cost = graph.astar('A', 'G')
print(f"Path: {path}, Cost: {cost}")

OUTPUT:

Path: ['A', 'E', 'D', 'G'], Cost: 10

RESULT:
The A* search algorithm successfully found the shortest path from node 'A' to node
'G' in the given graph by combining actual path costs with heuristic estimates. (Note that
the heuristic used here is not admissible; for example, h(C) = 99 far exceeds C's true
distance to G, so A* is not guaranteed to return the optimal path in general, although it
does in this instance.) This example demonstrates the effectiveness of A* in navigating
complex graphs efficiently.
EX NO-03
EXPLORE THE BASIC PROBABILITY FUNCTIONS
DATE

AIM:
The aim of this program is to generate random samples from different statistical
distributions (Normal, Bernoulli, Discrete Uniform, and Continuous Uniform) and visualize
their properties by plotting histograms of the generated samples.
ALGORITHM:
1. Normal Distribution
Input Parameters:
o Mean (mu): The average of the distribution.
o Standard deviation (sigma): The measure of the spread of the
distribution.
o Sample size: Number of random samples to generate.
Generate Samples:
o Use the normal distribution formula to generate random samples with
the specified mean and standard deviation.
Plot the Distribution:
o Create a histogram of the generated samples.
o Set the title, labels for the x-axis (Value), and y-axis (Density).
o Display the plot.
2. Bernoulli Distribution
Input Parameters:
o Probability of success (p): The probability of getting a "1" in each
trial.
o Sample size: Number of random samples to generate.
Generate Samples:
o Use the Bernoulli distribution to generate random samples, where each
sample is either 0 or 1.
Plot the Distribution:
o Create a histogram of the generated samples.
o Set the title, labels for the x-axis (Outcome), and y-axis (Density).
o Display the plot with ticks on the x-axis for the outcomes (0 and 1).
3. Discrete Uniform Distribution
Input Parameters:
o Lower bound (low): The minimum integer value.
o Upper bound (high): The maximum integer value.
o Sample size: Number of random samples to generate.
Generate Samples:
o Use the discrete uniform distribution to generate random samples from
the specified integer range.
Plot the Distribution:
o Create a histogram of the generated samples.
o Set the title, labels for the x-axis (Value), and y-axis (Density).
o Display the plot with ticks for each possible value in the range.
4. Continuous Uniform Distribution
Input Parameters:
o Lower bound (low): The minimum value.
o Upper bound (high): The maximum value.
o Sample size: Number of random samples to generate.
Generate Samples:
o Use the continuous uniform distribution to generate random samples
within the specified range.
Plot the Distribution:
o Create a histogram of the generated samples.
o Set the title, labels for the x-axis (Value), and y-axis (Density).
o Display the plot.

PROGRAM:
1. Normal Distribution
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
mean = 0
std_dev = 1
sample_size = 1000
normal_samples = np.random.normal(mean, std_dev, sample_size)
plt.hist(normal_samples, bins=30, density=True, alpha=0.6, color='b', edgecolor='black')
xmin, xmax = plt.xlim()
x = np.linspace(xmin, xmax, 100)
p = stats.norm.pdf(x, mean, std_dev)
plt.plot(x, p, 'k', linewidth=2)
plt.title('Normal Distribution with Bell Curve')
plt.xlabel('Value')
plt.ylabel('Density')
plt.show()
2. Bernoulli Distribution
import numpy as np
import matplotlib.pyplot as plt
p = 0.5
sample_size = 1000
bernoulli_samples = np.random.binomial(1, p, sample_size)
plt.hist(bernoulli_samples, bins=2, density=True, alpha=0.6, color='g')
plt.title('Bernoulli Distribution')
plt.xlabel('Outcome')
plt.ylabel('Density')
plt.xticks([0, 1])
plt.show()
3. Discrete Uniform Distribution
import numpy as np
import matplotlib.pyplot as plt
low = 1
high = 6
sample_size = 1000
discrete_uniform_samples = np.random.randint(low, high + 1, sample_size)
plt.hist(discrete_uniform_samples, bins=range(low, high + 2), density=True, alpha=0.6, color='r')
plt.title('Discrete Uniform Distribution')
plt.xlabel('Value')
plt.ylabel('Density')
plt.show()
4. Continuous Uniform Distribution
import numpy as np
import matplotlib.pyplot as plt
low = 0
high = 1
sample_size = 1000
continuous_uniform_samples = np.random.uniform(low, high, sample_size)
plt.hist(continuous_uniform_samples, bins=30, density=True, alpha=0.6, color='purple')
plt.title('Continuous Uniform Distribution')
plt.xlabel('Value')
plt.ylabel('Density')
plt.show()
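The histograms above will differ from run to run because the samples are random. For
reproducible output, the NumPy random generator can be seeded before sampling; a
minimal sketch (the seed value is arbitrary):

import numpy as np

np.random.seed(42)  # fix the seed so every run draws the same samples
print(np.random.normal(0, 1, 3))  # the same three draws on every run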

RESULT:
The generated distributions illustrate different probability scenarios: the Normal
distribution shows a bell curve in which values cluster around the mean; the Bernoulli
distribution models binary outcomes based on a success probability; the Discrete Uniform
distribution gives equal likelihood to the integers within a range; and the Continuous
Uniform distribution spreads probability evenly across a continuous interval.
EX NO-04
POSTERIOR PROBABILITY
DATE

AIM:
The aim of calculating the posterior probability in the given problem is to
determine the probability that a person actually has the rare disease given that they have
tested positive for it.
ALGORITHM:
1. Define Prior Probabilities:
o Establish the initial probabilities of the events of interest, in this case, the
probability of having the disease (p_disease) and not having the disease
(p_no_disease).

2. Define Conditional Probabilities (Likelihoods):


o Determine the probabilities of observing certain evidence given the
occurrence of each event.
o Here, it's the probability of testing positive given the presence of the disease
(p_pos_given_disease) and the probability of testing negative given the
absence of the disease (p_neg_given_no_disease).

3. Calculate Probability of Evidence:


o Use the law of total probability to calculate the overall probability of
observing the evidence, in this case, the probability of testing positive
(p_pos).
o This involves summing the probabilities of testing positive under both
scenarios (having the disease and not having it), weighted by their
respective prior probabilities.

4. Apply Bayes' Theorem:

o Utilize Bayes' Theorem to calculate the posterior probability, which is the
updated probability of an event occurring given the observed evidence.
o In this scenario, it's the probability of having the disease given a positive
test result (p_disease_given_pos). This is calculated using the formula:
P(Disease | Positive Test) = [P(Positive Test | Disease) * P(Disease)] / P(Positive Test)

5. Construct Joint Probability Distribution Table:

o Create a table that summarizes the probabilities of all possible
combinations of events and their corresponding evidence.
o In this case, it includes the probabilities of testing positive or negative
given the presence or absence of the disease.

6. Plot:
o Visualize the joint probability distribution using a bar plot to gain a better
understanding of the relationships between the events and their
probabilities.
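As a quick worked example (with assumed illustrative numbers, not values taken from
the record): suppose P(Disease) = 0.01, P(Positive | Disease) = 0.9 and
P(Negative | No Disease) = 0.9. Then

P(Positive) = 0.9 * 0.01 + 0.1 * 0.99 = 0.108
P(Disease | Positive) = (0.9 * 0.01) / 0.108 ≈ 0.083

so even a fairly accurate test yields only about an 8% chance of disease after one
positive result, because the disease is rare. The program below computes exactly this
quantity from user-supplied values.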

PROGRAM:
p_disease = float(input("Enter the overall probability of having the disease : "))
p_pos_given_disease = float(input("Enter the probability of testing positive given having the disease : "))
p_neg_given_no_disease = float(input("Enter the probability of testing negative given not having the disease : "))

p_no_disease = 1 - p_disease

# Law of total probability: P(positive test)
p_pos = (p_pos_given_disease * p_disease) + ((1 - p_neg_given_no_disease) * p_no_disease)

# Bayes' theorem: P(disease | positive test)
p_disease_given_pos = (p_pos_given_disease * p_disease) / p_pos
print("Posterior probability of having disease given positive test: ", p_disease_given_pos)

joint_prob_table = {
    ('Positive Test', 'Disease'): p_pos_given_disease * p_disease,
    ('Positive Test', 'No Disease'): (1 - p_neg_given_no_disease) * p_no_disease,
    ('Negative Test', 'Disease'): (1 - p_pos_given_disease) * p_disease,
    ('Negative Test', 'No Disease'): p_neg_given_no_disease * p_no_disease
}

import pandas as pd
df = pd.DataFrame(joint_prob_table, index=['Probability']).transpose()
print("\nJoint Probability Distribution Table:")
print(df)

import matplotlib.pyplot as plt
df.plot(kind='bar', figsize=(8, 6))
plt.title('Joint Probability Distribution')
plt.ylabel('Probability')
plt.xlabel('(Test Result, Disease Status)')
plt.xticks(rotation=0)
plt.tight_layout()
plt.show()

INPUT:

OUTPUT:

RESULT:
The posterior probability provides the updated likelihood of having the disease given a
positive test result. It combines prior knowledge about the disease prevalence with the
test's accuracy.
EX NO-05
BAYESIAN NETWORK: IMPLEMENTING CONDITIONAL PROBABILITY DISTRIBUTIONS
DATE

AIM:
To implement a Bayesian Network using pgmpy for modeling dependencies
between random variables, define Conditional Probability Distributions (CPDs), check
model consistency, and visualize the network.
ALGORITHM:
1. Define the Bayesian Network Structure:
• Create a Bayesian Network and add nodes representing variables like
Low CW, Unknown SR, Blockage, RT-too high, P too high, and Alarm.
• Add directed edges between nodes to indicate dependencies.
2. Define Conditional Probability Distributions (CPDs):
• Define CPDs for each variable:
o Low CW: Represents the probability of Low CW being true or false.
o Unknown SR: Represents the probability of Unknown SR being true or
false.
o Blockage: Represents the prior probability of a blockage being present.
o RT-too high: Probability based on Low CW and Unknown SR.
o P too high: Probability based on Blockage.
o Alarm: Probability based on RT-too high and P too high.
3. Add CPDs to the Model:
• Add each CPD to the Bayesian Network model.
4. Check Model Consistency:
• Validate the model to ensure the network is correctly structured with
consistent CPDs.
5. Visualize the Bayesian Network:
• Use NetworkX to visualize the Bayesian Network.
6. Print the CPDs:
• Output the CPDs for each variable to understand the defined probabilities.

PROGRAM:
import networkx as nx
import matplotlib.pyplot as plt
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD

# Define the Bayesian Network structure
model = BayesianNetwork([
    ('Low CW', 'RT-too high'),
    ('Unknown SR', 'RT-too high'),
    ('Blockage', 'P too high'),
    ('RT-too high', 'Alarm'),
    ('P too high', 'Alarm')
])

# Define the CPDs
low_cw_cpd = TabularCPD(variable='Low CW', variable_card=2, values=[[0.9], [0.1]])
unknown_sr_cpd = TabularCPD(variable='Unknown SR', variable_card=2, values=[[0.8], [0.2]])
# Prior for Blockage (values assumed; the original record does not list this CPD,
# but check_model() requires a CPD for every node in the network)
blockage_cpd = TabularCPD(variable='Blockage', variable_card=2, values=[[0.9], [0.1]])
rt_too_high_cpd = TabularCPD(variable='RT-too high', variable_card=2,
                             values=[[0.10, 0.20, 0.30, 0.40],
                                     [0.90, 0.80, 0.70, 0.60]],
                             evidence=['Low CW', 'Unknown SR'], evidence_card=[2, 2])
p_too_high_cpd = TabularCPD(variable='P too high', variable_card=2,
                            values=[[0.95, 0.05],
                                    [0.05, 0.95]],
                            evidence=['Blockage'], evidence_card=[2])
alarm_cpd = TabularCPD(variable='Alarm', variable_card=2,
                       values=[[0.95, 0.80, 0.70, 0.05],
                               [0.05, 0.20, 0.30, 0.95]],
                       evidence=['RT-too high', 'P too high'], evidence_card=[2, 2])

# Add CPDs to the model
model.add_cpds(low_cw_cpd, unknown_sr_cpd, blockage_cpd, rt_too_high_cpd,
               p_too_high_cpd, alarm_cpd)

# Check the model for consistency
model.check_model()

# Visualize the Bayesian Network
pos = nx.circular_layout(model)
nx.draw(model, with_labels=True, node_size=2000, node_color='skyblue', pos=pos)
plt.show()

# Print the CPDs
print("Conditional Probability Table for Low CW:")
print(low_cw_cpd)

print("\nConditional Probability Table for Unknown SR:")
print(unknown_sr_cpd)

print("\nConditional Probability Table for RT-too high:")
print(rt_too_high_cpd)

print("\nConditional Probability Table for P too high:")
print(p_too_high_cpd)

print("\nConditional Probability Table for Alarm:")
print(alarm_cpd)
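Once check_model() passes, the network can also be queried directly. A minimal sketch
using pgmpy's VariableElimination (the evidence value here is an arbitrary illustration;
inference is covered in detail in EX NO-07):

from pgmpy.inference import VariableElimination

infer = VariableElimination(model)
# Distribution of Alarm given that 'Low CW' is in state 1
print(infer.query(variables=['Alarm'], evidence={'Low CW': 1}))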

RESULT:
The Bayesian Network was successfully implemented, with defined
CPDs and verified model consistency. The network's structure and CPDs were
visualized, demonstrating the dependencies between variables effectively.
EX NO-06
PRIOR, POSTERIOR AND PROBABILITY DENSITY FUNCTIONS
DATE

AIM:
To construct and visualize the prior, posterior, and probability density functions to analyze
the relationships and updates in Bayesian inference.
ALGORITHM:
1. Define the Prior Distribution:
Select a prior distribution to represent initial beliefs about the parameter of interest before
observing any data (e.g., use a normal distribution with a mean and variance based on prior
knowledge or assumptions).
Define the mathematical form and parameters of the prior distribution. For example, a
prior could be represented by a normal distribution with mean (mu = 0) and standard
deviation (sigma = 1).
2. Define the Likelihood Function:
Choose a likelihood function that gives the probability of the observed data for each
candidate value of the parameter.
Define its mathematical form and parameters. In the program below, the likelihood is a
normal density with mean (mu = 1) and standard deviation (sigma = 1), evaluated over the
same parameter grid as the prior.
3. Apply Bayes' Theorem to Compute the Posterior Distribution:
Multiply the prior and the likelihood pointwise over the parameter grid, following Bayes'
theorem: posterior ∝ prior × likelihood.
Normalize the product so it sums to one over the grid, yielding the posterior distribution
that combines the prior belief with the observed evidence.
4. Visualize the Prior, Likelihood and Posterior Distribution:
Use a plotting library (e.g., `matplotlib` in Python) to generate visualizations for the prior,
likelihood, and posterior distributions on the same graph.
Set appropriate labels for each curve and provide titles and axis labels to make the graph
interpretable.
Display the plot, showing how the posterior combines information from both the prior and
likelihood.
PROGRAM:

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

# Define the prior, likelihood, and posterior over a parameter grid
x = np.linspace(-5, 5, 1000)
prior = norm.pdf(x, loc=0, scale=1)
likelihood = norm.pdf(x, loc=1, scale=1)
posterior = prior * likelihood
posterior /= posterior.sum()  # normalize so the posterior sums to 1 over the grid

# Plot the distributions


plt.plot(x, prior, label="Prior", color="blue")
plt.plot(x, likelihood, label="Likelihood", color="green")
plt.plot(x, posterior, label="Posterior", color="red")
plt.title("Prior, Likelihood, and Posterior Distributions")
plt.xlabel("Parameter")
plt.ylabel("Density")
plt.legend()
plt.grid(True)
plt.show()

OUTPUT:
(Plot of the prior, likelihood, and posterior curves.)
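For the normal prior and normal likelihood used above, the posterior is also available in
closed form, which gives a sanity check on the grid computation: with a N(0, 1) prior and
a N(1, 1) likelihood (known variance), the precisions add and the posterior is normal with
mean 0.5 and variance 0.5. A minimal sketch of the check:

import numpy as np
from scipy.stats import norm

x = np.linspace(-5, 5, 1000)
posterior = norm.pdf(x, loc=0, scale=1) * norm.pdf(x, loc=1, scale=1)
posterior /= posterior.sum()

# The grid estimate of the posterior mean should be close to the
# closed-form value of 0.5
print(x @ posterior)  # ~0.5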

RESULT:
The graphical representation of the prior, likelihood, and posterior distributions was
successfully constructed, illustrating the Bayesian updating process. The prior distribution
reflects initial beliefs, the likelihood represents observed data, and the posterior combines both
to update the belief about the parameter in light of the new evidence.
EX NO-07
BAYESIAN NETWORKS: VARIABLE ELIMINATION
DATE

AIM:
To implement Variable Elimination algorithms to perform inference on a Bayesian Network,
enabling the calculation of conditional and joint probabilities for various queries.

ALGORITHM:

1. Define the Bayesian Network Structure:

Create nodes representing variables of interest and establish directed edges between
nodes to represent dependencies.

Each node corresponds to a variable, and the directed edges represent probabilistic
relationships in the network.

2. Define Conditional Probability Distributions (CPDs):

For each node, define the CPD based on its dependencies (parent nodes). Specify
probabilities for each state given its parent states, either based on observed data or
hypothetical assumptions.

Add each CPD to the Bayesian Network.

3. Set Up Query Parameters:

Identify the variables of interest for conditional and joint probability queries, along
with any observed variables or evidence.

Prepare the query by specifying which variables are to be inferred and which are to
be eliminated.

4. Apply Variable Elimination:

Initialization: Organize factors based on the variables involved in the query and evidence.

Factor Reduction: Apply evidence by eliminating irrelevant values from factors involving
observed variables.

Summing Out: Sequentially eliminate non-query variables by summing over all possible
values for those variables in the network. Multiply factors that involve the variable to be
eliminated and then sum out the variable.

Normalization: Normalize the results to ensure the probabilities sum to 1, resulting in a
valid probability distribution.

5. Answer Queries:

For a conditional probability query, compute the probability distribution of the
query variable given the evidence.

For a joint probability query, compute the joint distribution across the specified
variables.
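To make the summing-out step concrete, the conditional query used in the program below,
P(D | A=1), can be reproduced by hand with plain NumPy (a minimal sketch; the CPD
numbers are the ones defined in the program):

import numpy as np

# CPDs from the program, as arrays indexed by state (0 or 1)
p_B_given_A1 = np.array([0.8, 0.2])   # P(B | A=1)
p_C_given_A1 = np.array([0.4, 0.6])   # P(C | A=1)
# P(D | B, C), reshaped so the axes are (D, B, C)
p_D_given_BC = np.array([[0.9, 0.1, 0.6, 0.4],
                         [0.1, 0.9, 0.4, 0.6]]).reshape(2, 2, 2)

# Sum out B and C: P(D | A=1) = sum over B,C of P(B|A=1) P(C|A=1) P(D|B,C)
p_D = np.einsum('b,c,dbc->d', p_B_given_A1, p_C_given_A1, p_D_given_BC)
print(p_D)  # [0.432 0.568], matching the VariableElimination query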

PROGRAM:
import matplotlib.pyplot as plt
import networkx as nx
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Define Bayesian Network structure
model = BayesianNetwork([
    ('A', 'B'),
    ('A', 'C'),
    ('B', 'D'),
    ('C', 'D')
])

# Define CPDs for each node
cpd_A = TabularCPD(variable='A', variable_card=2, values=[[0.7], [0.3]])
cpd_B = TabularCPD(variable='B', variable_card=2, values=[[0.2, 0.8], [0.8, 0.2]],
                   evidence=['A'], evidence_card=[2])
cpd_C = TabularCPD(variable='C', variable_card=2, values=[[0.6, 0.4], [0.4, 0.6]],
                   evidence=['A'], evidence_card=[2])
cpd_D = TabularCPD(variable='D', variable_card=2, values=[[0.9, 0.1, 0.6, 0.4],
                                                          [0.1, 0.9, 0.4, 0.6]],
                   evidence=['B', 'C'], evidence_card=[2, 2])

# Add CPDs to the model
model.add_cpds(cpd_A, cpd_B, cpd_C, cpd_D)
model.check_model()

# Perform inference using Variable Elimination
inference = VariableElimination(model)

# Query examples
conditional_prob = inference.query(variables=['D'], evidence={'A': 1})
joint_prob = inference.query(variables=['A', 'B', 'D'])

# Print the output
print("Conditional Probability (P(D | A=1)):\n", conditional_prob)
print("\nJoint Probability P(A, B, D):\n", joint_prob)

# Create a NetworkX graph from the Bayesian Network structure
G = nx.DiGraph()  # Directed graph for Bayesian Network
G.add_edges_from(model.edges())

# Plot the Bayesian Network
plt.figure(figsize=(10, 6))
pos = nx.spring_layout(G, seed=42)  # Layout for a clear network diagram
nx.draw(G, pos, with_labels=True, node_size=3000, node_color="skyblue",
        font_size=14, font_weight="bold", edge_color="gray")
plt.title("Bayesian Network Structure", fontsize=16)
plt.show()
RESULT:
The Variable Elimination algorithm was successfully implemented to perform
inference on the Bayesian Network. The algorithm provided the conditional
probability distribution for specified evidence and computed the joint probability
distribution across queried variables, demonstrating effective Bayesian inference
through variable elimination.
EX NO-08
INFERENCE IN TEMPORAL MODELS
DATE

AIM:
To implement a filtering algorithm for temporal models to estimate the current
state of a system based on past observations and a probabilistic model of the system's
dynamics.

ALGORITHM:

1. Initialize the belief state at time t=0.


2. For each time step t, the algorithm performs the following steps:
Prediction: Project the belief state forward by one time step using the state
transition model.
Update: Incorporate new evidence into the projected belief state using the
observation model.
3. Repeat for each new time step.

Steps for Filtering in Temporal Models

1. Initialize Belief: Set the initial probability distribution over the states.
2. Prediction Step: Use the transition model to predict the new belief state.
3. Update Step: Use the observation model to update the belief state based on new
observations.
4. Repeat: Iterate through the time steps and continuously update the belief.
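Each pass of the filtering loop in the program below implements the standard forward
recursion. Writing f(1:t) for the filtered state distribution after t observations, T for
the transition matrix and e(t+1) for the vector of observation likelihoods at step t+1:

prediction = T' · f(1:t)
f(1:t+1) = α · ( e(t+1) ⊙ prediction )

where T' is the transpose of T, ⊙ denotes element-wise multiplication, and α is the
normalizing constant that makes the result sum to 1. (In the code this appears as
transition_model.dot(posterior); the matrix used is symmetric, so T and its transpose
coincide.)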

PROGRAM:
# FILTERING
import numpy as np

# Initial prior
prior = np.array([0.5, 0.5])

# Evidence matrix (observation likelihoods per day, per state)
evidence = np.array([
    [0.9, 0.2],
    [0.9, 0.2]
])

# Transition model matrix
transition_model = np.array([
    [0.7, 0.3],
    [0.3, 0.7]
])

# Start with the prior as the posterior
posterior = prior

# Iterating through each piece of evidence
for t, evidence_t in enumerate(evidence):
    # Prediction step
    prediction = transition_model.dot(posterior)

    # Update step (Bayesian update)
    likelihood = evidence_t * prediction
    normalization_factor = likelihood.sum()

    # Compute the posterior
    posterior = likelihood / normalization_factor

    # Round the posterior for neat output
    posterior = np.round(posterior, 3)

    # Print the results
    print(f"P(R{t+1} | u1: {t+1}) = {posterior}")

OUTPUT:
P(R1 | u1: 1) = [0.818 0.182]
P(R2 | u1: 2) = [0.883 0.117]

# SMOOTHING
import numpy as np

# Define the forward (filtered) probabilities for each day
forward_probabilities = np.array([
    [0.616, 0.384],  # Day 1
    [0.427, 0.573],  # Day 2
    [0.535, 0.465]   # Day 3
])

# Define the transition matrix
transition_model = np.array([
    [0.7, 0.3],
    [0.3, 0.7]
])

# Define the evidence probabilities for each day
evidence_probabilities = np.array([
    [0.7, 0.3],  # Day 2
    [0.2, 0.8],  # Day 3
    [0.7, 0.3],  # Day 4
    [0.2, 0.8]   # Day 5
    # Add more days as needed
])

# Initialize smoothed estimates
smoothed_estimates = []

# Computing the smoothed estimate for each day
for day in range(len(forward_probabilities)):
    forward_prob = forward_probabilities[day]
    backward_probabilities = np.ones(2)  # Initialize backward probabilities

    # Backward pass over the remaining evidence
    for t in reversed(range(len(evidence_probabilities))):
        if t == len(evidence_probabilities) - 1:
            backward_probabilities = transition_model.T.dot(evidence_probabilities[t])
        else:
            backward_probabilities = transition_model.T.dot(evidence_probabilities[t] * backward_probabilities)

    # Compute the smoothed estimate for the current day
    smoothed_estimate = forward_prob * backward_probabilities
    smoothed_estimate /= smoothed_estimate.sum()  # Normalize
    smoothed_estimates.append(smoothed_estimate)

# Print the smoothed estimates for each day
for day, estimate in enumerate(smoothed_estimates, start=1):
    print(f"Smoothed Estimate for Rain on Day {day}, (given prob of day {day} & evidences till day 5): {estimate[0]:.2f}")

import numpy as np
import networkx as nx
import matplotlib.pyplot as plt

# Define states and transition matrix
states = ["Sunny", "Rainy", "Foggy"]
transition_matrix = np.array([
    [0.8, 0.15, 0.05],
    [0.2, 0.5, 0.3],
    [0.2, 0.2, 0.6]
])

# Initial probabilities
initial_probabilities = {"Sunny": 0, "Rainy": 1, "Foggy": 0}

# Function to calculate probabilities after n days
def calculate_probability_after_n_days(initial_prob, transition_matrix, n):
    current_prob = initial_prob
    for _ in range(n):
        current_prob = np.dot(current_prob, transition_matrix)
    return current_prob

# Input for number of days
while True:
    try:
        n = int(input("After how many days do you want the probabilities? "))
        if n < 0:
            print("Please enter a non-negative integer.")
            continue
        break
    except ValueError:
        print("Invalid input. Please enter a non-negative integer.")

# Calculate probabilities
probabilities = calculate_probability_after_n_days(
    np.array([initial_probabilities["Sunny"],
              initial_probabilities["Rainy"],
              initial_probabilities["Foggy"]]),
    transition_matrix, n
)

# Output the probabilities
for i, state in enumerate(states):
    print(f"The probability of being {state} after {n} days is: {probabilities[i]:.4f}")

# Check if the sum of probabilities is equal to 1
if np.isclose(sum(probabilities), 1.0):
    print("The sum of probabilities is equal to 1.")
else:
    print("The sum of probabilities is not equal to 1.")

# Create a directed graph
G = nx.DiGraph()

# Add nodes to the graph
for state in states:
    G.add_node(state)

# Add edges with weights to the graph
for i in range(len(states)):
    for j in range(len(states)):
        weight = transition_matrix[i][j]
        if weight > 0:  # Only add edges with positive weights
            G.add_edge(states[i], states[j], weight=weight)

# Draw the graph
if len(G) > 0:  # Check if the graph is not empty
    pos = nx.circular_layout(G)  # Use circular layout instead of spring layout
    labels = {edge: f"{G[edge[0]][edge[1]]['weight']:.2f}" for edge in G.edges()}
    # Draw nodes and edges, then the transition-probability labels
    nx.draw(G, pos, with_labels=True, node_size=800, node_color='lightblue', font_size=10,
            connectionstyle='arc3,rad=0.1', edge_color='blue')
    nx.draw_networkx_edge_labels(G, pos, edge_labels=labels)
    plt.show()

RESULT:
The filtering algorithm effectively updates the belief state over time, incorporating
new evidence and refining the estimation of the current state.
EX NO-09
HIDDEN MARKOV MODELS (HMM)
DATE

AIM:
To produce repeated observation patterns that are useful for modelling Hidden Markov
Models (HMMs) with hidden states and observed states.

ALGORITHM:
1. Define the Hidden States: These are the latent states of the system.
2. Define the Observed States: These are the states that can be observed and are
linked to the hidden states.
3. Define Transition and Emission Probabilities: Transition probabilities describe
how the hidden states evolve over time, while emission probabilities describe how
likely an observation is, given the hidden state.
4. Generate Observations: Based on the transition and emission probabilities,
generate repeated observation sequences for training the model.
5. Plot the Repeated Patterns: Visualize how hidden and observed states evolve
over time.
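The transition matrix below also determines the long-run "repeated pattern" of the hidden
states: the stationary distribution pi satisfying pi = pi T. A minimal sketch computing it
for the Sunny/Rainy matrix used in the program:

import numpy as np

T = np.array([[0.8, 0.2],
              [0.4, 0.6]])

# The stationary distribution is the left eigenvector of T with eigenvalue 1
vals, vecs = np.linalg.eig(T.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
print(pi)  # ~[0.667 0.333]: about two sunny days for every rainy one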

PROGRAM:
import numpy as np
import matplotlib.pyplot as plt

# Define the states and observations
states = ['Sunny', 'Rainy']
observations = ['Umbrella', 'No Umbrella']

# Transition probability matrix
# Rows represent current state; columns represent next state
transition_matrix = np.array([[0.8, 0.2],   # Sunny -> Sunny, Sunny -> Rainy
                              [0.4, 0.6]])  # Rainy -> Sunny, Rainy -> Rainy

# Emission probability matrix
# Rows represent states; columns represent observations
emission_matrix = np.array([[0.9, 0.1],   # Sunny -> Umbrella, Sunny -> No Umbrella
                            [0.2, 0.8]])  # Rainy -> Umbrella, Rainy -> No Umbrella

# Initial probabilities
initial_probabilities = np.array([0.5, 0.5])  # Initial probability for Sunny and Rainy

# Function to simulate HMM
def simulate_hmm(num_steps):
    states_sequence = []
    observations_sequence = []

    # Choose initial state based on initial probabilities
    current_state = np.random.choice(states, p=initial_probabilities)
    states_sequence.append(current_state)

    for _ in range(num_steps):
        # Choose the next state based on transition probabilities
        if current_state == 'Sunny':
            current_state = np.random.choice(states, p=transition_matrix[0])
        else:
            current_state = np.random.choice(states, p=transition_matrix[1])
        states_sequence.append(current_state)

        # Choose observation based on current state
        if current_state == 'Sunny':
            observation = np.random.choice(observations, p=emission_matrix[0])
        else:
            observation = np.random.choice(observations, p=emission_matrix[1])
        observations_sequence.append(observation)
    return states_sequence, observations_sequence

# Simulate the HMM for 10 steps
num_steps = 10
states_seq, obs_seq = simulate_hmm(num_steps)

# Print the results
print("States:", states_seq)
print("Observations:", obs_seq)

# Map the sequences to 0/1 so the y-tick labels line up with the plotted data
states_numeric = [0 if s == 'Sunny' else 1 for s in states_seq]
obs_numeric = [1 if o == 'Umbrella' else 0 for o in obs_seq]

# Visualizing the results
plt.figure(figsize=(12, 6))
plt.subplot(2, 1, 1)
plt.plot(states_numeric, marker='o', color='blue', label='Weather State')
plt.ylim(-0.5, 1.5)
plt.yticks([0, 1], ['Sunny', 'Rainy'])
plt.title('Weather State Over Time')
plt.grid()

plt.subplot(2, 1, 2)
plt.plot(obs_numeric, marker='o', color='orange', label='Umbrella Decision')
plt.ylim(-0.5, 1.5)
plt.yticks([0, 1], ['No Umbrella', 'Umbrella'])
plt.title('Umbrella Decision Over Time')
plt.grid()

plt.tight_layout()
plt.show()
RESULT:
Produced a sequence of observations and corresponding hidden states that exhibit
repeated patterns. These patterns can be used to train an HMM to learn the underlying
state transitions and emission probabilities.
EX NO-10
KALMAN FILTER
DATE

AIM:
To apply the Kalman filter to estimate the position of an agent and provide a graphical
representation.

ALGORITHM:
1. Define the System: We define the agent's state as [position, velocity], assuming
the velocity is constant.
2. Initialize the Kalman Filter Parameters: These include the initial estimate, state
transition matrix, control matrix, process noise, and measurement noise.
3. Simulate Measurements: Create noisy measurements of the agent's position
over time.
4. Apply the Kalman Filter: Use the Kalman Filter equations to estimate the
agent’s true position and compare it with the noisy measurements.
5. Plot the Results: Graphically represent the estimated position vs. the noisy
measurements and true position.
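Because the state here is one-dimensional, the general matrix form of the Kalman filter
reduces to the scalar update rules that kalman_filter() in the program below implements
directly:

Prediction:  mu_pred = mu + xdot * delta        sigma_pred = sigma + sigma_x
Gain:        K = sigma_pred / (sigma_pred + sigma_z)
Update:      mu' = mu_pred + K * (z - mu_pred)   sigma' = (1 - K) * sigma_pred

Here z is the new measurement, sigma_x the process-noise variance and sigma_z the
measurement-noise variance; the gain K weights the measurement against the prediction
according to their relative uncertainties.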

PROGRAM:

import numpy as np
import matplotlib.pyplot as plt

# Prediction and Update steps of Kalman Filter
def kalman_filter(mu0, sigma0, delta, xdot, sigma_x, zt, sigma_z):
    # Prediction step
    mu_pred = mu0 + xdot * delta     # Predicted mean (position estimate)
    sigma_pred = sigma0 + sigma_x    # Predicted variance (uncertainty)

    # Update step
    kalman_gain = sigma_pred / (sigma_pred + sigma_z)    # Kalman Gain
    mu_updated = mu_pred + kalman_gain * (zt - mu_pred)  # Updated mean
    sigma_updated = (1 - kalman_gain) * sigma_pred       # Updated variance
    return round(mu_updated, 3), round(sigma_updated, 3)

# Initial conditions
mu0 = 0.0     # Initial position
sigma0 = 1.0  # Initial position uncertainty
xdot = 1.0    # Constant velocity

# Model parameters
delta = 1.0    # Time interval
sigma_x = 0.1  # Transition model noise (process noise)
sigma_z = 0.1  # Sensor noise

# Observations (for example, a series of measurements)
observations = [1.1, 1.9, 3.0, 4.2, 5.8]  # Example measurements

# Lists to store results for plotting
estimated_means = []

# Perform Kalman filtering for each observation
for zt in observations:
    mu0, sigma0 = kalman_filter(mu0, sigma0, delta, xdot, sigma_x, zt, sigma_z)

    # Append the updated mean to the list for plotting
    estimated_means.append(mu0)

    # Print the updated state estimate
    print(f"Updated Mean: {mu0}, Updated Covariance: {sigma0}")

# Plotting the results
plt.figure(figsize=(6, 4))
plt.title('Kalman Filter Results')
plt.xlabel('Time Steps')
plt.ylabel('Estimated Mean')
plt.plot(observations, 'ro-', label='Measurements (Old Position)')
plt.plot(estimated_means, 'g*-', label='Estimated Mean (New Position)')
plt.legend()
plt.grid(True)
plt.show()

OUTPUT:
Updated Mean: 1.092, Updated Covariance: 0.092
Updated Mean: 1.966, Updated Covariance: 0.066
Updated Mean: 2.987, Updated Covariance: 0.062
Updated Mean: 4.119, Updated Covariance: 0.062
Updated Mean: 5.54, Updated Covariance: 0.062

RESULT:
This code implements a Kalman filter to estimate the position of an object
based on noisy observations over time. It demonstrates how the filter updates the state
estimate and uncertainty with each new measurement, effectively smoothing out the noise.
