EX NO-01
IMPLEMENT BREADTH FIRST SEARCH (BFS) AND DEPTH FIRST SEARCH (DFS)
DATE

AIM:
To Implement Breadth First Search (BFS) and Depth First Search (DFS)
algorithm on an undirected graph representing the state space of the given problem, and to find
the shortest path.
1. Initialization:
• Insert the start node into the frontier (a queue for BFS, a stack for DFS) and mark it as visited.
• Keep a parent record for every node so the path can be reconstructed later.
2. Traversal:
• Repeatedly remove a node from the frontier; if it is the goal node, reconstruct the path by following parent links back to the start.
• Otherwise, mark each unvisited neighbour as visited and add it to the frontier.
• If the queue is exhausted and the goal node is not found, print a message
indicating that no connecting path exists.
PROGRAM:
import networkx as nx
import matplotlib.pyplot as plt

def dfs(graph, start, goal):
    # Depth First Search with parent tracking for path reconstruction
    stack = [(start, 0)]
    visited = {start}
    parent = {}
    while stack:
        node, path_length = stack.pop()
        if node == goal:
            path = []
            while node != start:
                path.append(node)
                node = parent[node]
            path.append(start)
            path.reverse()
            return path
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                parent[neighbor] = node
                stack.append((neighbor, path_length + 1))
    return None  # No path found
G = { 'A': ['B', 'C', 'F'],
      'B': ['A', 'C', 'G'],
      'C': ['A', 'B', 'F', 'D', 'E', 'G'],
      'D': ['C', 'F', 'E', 'J'],  # 'I' corrected to 'J' to keep the graph symmetric
      'E': ['C', 'D', 'G', 'K', 'J'],
      'F': ['A', 'C', 'D'],
      'G': ['C', 'E', 'K'],
      'K': ['E', 'G', 'J'],
      'J': ['D', 'E', 'K']}

# Draw the undirected graph built from the adjacency list
nx.draw(nx.Graph(G), with_labels=True)
plt.title("Undirected Graph")
plt.show()

print("DFS path from A to K:", dfs(G, 'A', 'K'))
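# The AIM also calls for BFS, which finds the shortest path in an unweighted
# graph because it explores nodes level by level. Below is a minimal
# queue-based sketch of the traversal described in the algorithm; the
# deque-based frontier is an implementation choice.
from collections import deque

def bfs(graph, start, goal):
    queue = deque([start])
    visited = {start}
    parent = {}
    while queue:
        node = queue.popleft()
        if node == goal:
            path = [node]
            while node != start:
                node = parent[node]
                path.append(node)
            path.reverse()
            return path
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                parent[neighbor] = node
                queue.append(neighbor)
    print("No connecting path exists")
    return None

print("BFS shortest path from A to K:", bfs(G, 'A', 'K'))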
OUTPUT:
RESULT:
The undirected graph was traversed using Breadth First and Depth First Search,
and the shortest path was found from the source node (A) to the goal node (K).
EX NO-02
SOLVE PATH FINDING PROBLEM USING A* SEARCH
DATE
AIM:
To find the shortest path between two nodes in an undirected, weighted graph
using the A* search algorithm. The graph consists of several nodes connected by edges,
each with an associated cost representing the distance or effort required to traverse that
edge.
ALGORITHM:
1. Initialization:
o Start by adding the start node to a priority queue with its total cost.
o Initialize a visited set to keep track of explored nodes.
2. Traversal Loop:
o Pop the node with the lowest total cost (`f`) from the priority queue.
o If this node is the goal node, return the path and cost.
o Otherwise, add the node to the visited set.
o For each neighbour of the current node:
▪ Calculate the new actual cost (`g`) and total cost (`f`).
▪ If the neighbour has not been visited, add it to the priority queue.
3. Termination:
o The loop continues until the goal node is reached or the priority queue is
empty (meaning no path exists).
Implementation Details in the Provided Code
• Graph Representation:
o The graph is represented using a dictionary where each node maps to a
list of tuples containing the cost and the connected node.
o Heuristic values are stored in a separate dictionary.
• Priority Queue:
o A heap-based priority queue (heapq) is used to efficiently retrieve the
node with the lowest total cost.
• Handling Paths and Costs:
o Each entry in the priority queue contains the total cost (`f`), actual cost
(`g`), current node, and the path taken to reach this node.
o The algorithm updates and extends the path as it explores new nodes.
PROGRAM:
import heapq

class Graph:
    def __init__(self):
        self.edges = {}  # node -> list of (cost, neighbor) tuples
        self.h = {}      # Heuristic function: estimated cost to the goal

    def a_star(self, start, goal):
        # Each queue entry: (total cost f, actual cost g, node, path so far)
        pq = [(self.h[start], 0, start, [start])]
        visited = set()
        while pq:
            f, g, current, path = heapq.heappop(pq)
            # Goal check
            if current == goal:
                return path, g
            visited.add(current)
            # Expand neighbors
            for cost, neighbor in self.edges[current]:
                if neighbor not in visited:
                    new_g = g + cost
                    new_f = new_g + self.h[neighbor]
                    heapq.heappush(pq, (new_f, new_g, neighbor, path + [neighbor]))
        return None, float('inf')  # No path found
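# NOTE: The original listing omits the graph data. The edge costs and
# heuristic values below are illustrative assumptions, chosen so that h never
# overestimates the true remaining cost to 'G'.
g = Graph()
g.edges = {
    'A': [(1, 'B'), (4, 'C')],
    'B': [(1, 'A'), (2, 'C'), (5, 'D')],
    'C': [(4, 'A'), (2, 'B'), (1, 'D'), (3, 'G')],
    'D': [(5, 'B'), (1, 'C'), (2, 'G')],
    'G': [(3, 'C'), (2, 'D')],
}
g.h = {'A': 5, 'B': 4, 'C': 2, 'D': 2, 'G': 0}

path, cost = g.a_star('A', 'G')
print("Shortest path:", path, "with cost", cost)  # cost 6 for this data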
RESULT:
The A* search algorithm successfully found the shortest path from node 'A' to node
'G' in the given graph by intelligently combining actual path costs with heuristic estimates.
This example demonstrates the effectiveness of A* in navigating complex graphs efficiently.
EX NO-03
EXPLORE THE BASIC PROBABILITY FUNCTIONS
DATE
AIM:
The aim of this program is to generate random samples from different statistical
distributions (Normal, Bernoulli, Discrete Uniform, and Continuous Uniform) and visualize
their properties by plotting histograms of the generated samples.
ALGORITHM:
1. Normal Distribution
Input Parameters:
o Mean (mu): The average of the distribution.
o Standard deviation (sigma): The measure of the spread of the
distribution.
o Sample size: Number of random samples to generate.
Generate Samples:
o Use the normal distribution formula to generate random samples with
the specified mean and standard deviation.
Plot the Distribution:
o Create a histogram of the generated samples.
o Set the title, labels for the x-axis (Value), and y-axis (Density).
o Display the plot.
2. Bernoulli Distribution
Input Parameters:
o Probability of success (p): The probability of getting a "1" in each
trial.
o Sample size: Number of random samples to generate.
Generate Samples:
o Use the Bernoulli distribution to generate random samples, where each
sample is either 0 or 1.
Plot the Distribution:
o Create a histogram of the generated samples.
o Set the title, labels for the x-axis (Outcome), and y-axis (Density).
o Display the plot with ticks on the x-axis for the outcomes (0 and 1).
3. Discrete Uniform Distribution
Input Parameters:
o Lower bound (low): The minimum integer value.
o Upper bound (high): The maximum integer value.
o Sample size: Number of random samples to generate.
Generate Samples:
o Use the discrete uniform distribution to generate random samples from
the specified integer range.
Plot the Distribution:
o Create a histogram of the generated samples.
o Set the title, labels for the x-axis (Value), and y-axis (Density).
o Display the plot with ticks for each possible value in the range.
4. Continuous Uniform Distribution
Input Parameters:
o Lower bound (low): The minimum value.
o Upper bound (high): The maximum value.
o Sample size: Number of random samples to generate.
Generate Samples:
o Use the continuous uniform distribution to generate random samples
within the specified range.
Plot the Distribution:
o Create a histogram of the generated samples.
o Set the title, labels for the x-axis (Value), and y-axis (Density).
o Display the plot.
PROGRAM:
1. Normal Distribution
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
mean = 0
std_dev = 1
sample_size = 1000
normal_samples = np.random.normal(mean, std_dev, sample_size)
plt.hist(normal_samples, bins=30, density=True, alpha=0.6, color='b',
edgecolor='black')
xmin, xmax = plt.xlim()
x = np.linspace(xmin, xmax, 100)
p = stats.norm.pdf(x, mean, std_dev)
plt.plot(x, p, 'k', linewidth=2)
plt.title('Normal Distribution with Bell Curve')
plt.xlabel('Value')
plt.ylabel('Density')
plt.show()
2. Bernoulli Distribution
import numpy as np
import matplotlib.pyplot as plt
p = 0.5
sample_size = 1000
bernoulli_samples = np.random.binomial(1, p, sample_size)
plt.hist(bernoulli_samples, bins=2, density=True, alpha=0.6, color='g')
plt.title('Bernoulli Distribution')
plt.xlabel('Outcome')
plt.ylabel('Density')
plt.xticks([0, 1])
plt.show()
3. Discrete Uniform Distribution
import numpy as np
import matplotlib.pyplot as plt
low = 1
high = 6
sample_size = 1000
discrete_uniform_samples = np.random.randint(low, high + 1, sample_size)
plt.hist(discrete_uniform_samples, bins=range(low, high + 2), density=True,
alpha=0.6, color='r')
plt.title('Discrete Uniform Distribution')
plt.xlabel('Value')
plt.ylabel('Density')
plt.show()
4. Continuous Uniform Distribution
import numpy as np
import matplotlib.pyplot as plt
low = 0
high = 1
sample_size = 1000
continuous_uniform_samples = np.random.uniform(low, high, sample_size)
plt.hist(continuous_uniform_samples, bins=30, density=True, alpha=0.6,
color='purple')
plt.title('Continuous Uniform Distribution')
plt.xlabel('Value')
plt.ylabel('Density')
plt.show()
OUTPUT:

RESULT:
The generated distributions illustrate different probability scenarios: the Normal
distribution shows a bell curve in which values cluster around the mean; the Bernoulli
distribution models binary outcomes based on a success probability; the Discrete Uniform
gives equal likelihood to every integer in a range; and the Continuous Uniform spreads
probability evenly across a continuous interval.
EX NO-04
POSTERIOR PROBABILITY
DATE
AIM:
The aim of calculating the posterior probability in the given problem is to
determine the probability that a person actually has the rare disease given that they have
tested positive for it.
ALGORITHM:
1. Define Prior Probabilities:
o Establish the initial probabilities of the events of interest, in this case, the probability of having the disease (p_disease) and not having the disease (p_no_disease).
2. Read Test Characteristics:
o Obtain the probability of testing positive given the disease (sensitivity) and the probability of testing negative given no disease (specificity).
3. Compute the Total Probability of a Positive Test:
o Combine both ways of testing positive: p_pos = p_pos_given_disease * p_disease + (1 - p_neg_given_no_disease) * p_no_disease.
4. Apply Bayes' Theorem:
o Compute the posterior probability: p_disease_given_pos = (p_pos_given_disease * p_disease) / p_pos.
5. Build the Joint Probability Table:
o Tabulate the joint probability of each (test result, disease status) combination.
6. Plot:
o Visualize the joint probability distribution using a bar plot to gain a better understanding of the relationships between the events and their probabilities.
PROGRAM:
p_disease = float(input("Enter the overall probability of having the disease : "))
p_pos_given_disease = float(input("Enter the probability of testing positive given having the disease : "))
p_neg_given_no_disease = float(input("Enter the probability of testing negative given not having the disease : "))
p_no_disease = 1 - p_disease
p_pos = (p_pos_given_disease * p_disease) + ((1 - p_neg_given_no_disease) * p_no_disease)
p_disease_given_pos = (p_pos_given_disease * p_disease) / p_pos
print("Posterior probability of having disease given positive test: ", p_disease_given_pos)
joint_prob_table = {
('Positive Test', 'Disease'): p_pos_given_disease * p_disease,
('Positive Test', 'No Disease'): (1 - p_neg_given_no_disease) * p_no_disease,
('Negative Test', 'Disease'): (1 - p_pos_given_disease) * p_disease,
('Negative Test', 'No Disease'): p_neg_given_no_disease * p_no_disease
}
import pandas as pd
df = pd.DataFrame(joint_prob_table, index=['Probability']).transpose()
print("\nJoint Probability Distribution Table:")
print(df)
import matplotlib.pyplot as plt
df.plot(kind='bar', figsize=(8, 6))
plt.title('Joint Probability Distribution')
plt.ylabel('Probability')
plt.xlabel('(Test Result, Disease Status)')
plt.xticks(rotation=0)
plt.tight_layout()
plt.show()
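Note: as a quick sanity check with assumed example inputs (p_disease = 0.01,
p_pos_given_disease = 0.95, p_neg_given_no_disease = 0.90), the program computes
p_pos = 0.95 × 0.01 + 0.10 × 0.99 = 0.1085, so the posterior is
0.0095 / 0.1085 ≈ 0.0876. Even with an accurate test, the low prevalence keeps
the probability of disease below 9%.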
INPUT:

OUTPUT:

RESULT:
The posterior probability provides the updated likelihood of having the disease given a
positive test result. It combines prior knowledge about the disease prevalence with the
test's accuracy.
EX NO-05
BAYESIAN NETWORK: IMPLEMENTING CONDITIONAL PROBABILITY DISTRIBUTIONS
DATE
AIM:
To implement a Bayesian Network using pgmpy for modeling dependencies
between random variables, define Conditional Probability Distributions (CPDs), check
model consistency, and visualize the network.
ALGORITHM:
1. Define the Bayesian Network Structure:
• Create a Bayesian Network and add nodes representing variables like
Low CW, Unknown SR, RT-too high, P too high, and Alarm.
• Add directed edges between nodes to indicate dependencies.
2. Define Conditional Probability Distributions (CPDs):
• Define CPDs for each variable:
o Low CW: Represents the probability of Low CW being true or false.
o Unknown SR: Represents the probability of Unknown SR being true or
false.
o RT-too high: Probability based on Low CW and Unknown SR.
o P too high: Probability based on Blockage.
o Alarm: Probability based on RT-too high and P too high.
3. Add CPDs to the Model:
• Add each CPD to the Bayesian Network model.
4. Check Model Consistency:
• Validate the model to ensure the network is correctly structured with
consistent CPDs.
5. Visualize the Bayesian Network:
• Use NetworkX to visualize the Bayesian Network.
6. Print the CPDs:
• Output the CPDs for each variable to understand the defined probabilities.
PROGRAM:
import networkx as nx
import matplotlib.pyplot as plt
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
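# NOTE: The original listing stops after the imports. The network below is a
# minimal sketch of the structure described in the algorithm; every CPD value
# is an illustrative assumption, and 'P too high' is modeled as a root node
# because its stated parent ('Blockage') is not among the listed variables.
model = BayesianNetwork([
    ('Low CW', 'RT too high'),
    ('Unknown SR', 'RT too high'),
    ('RT too high', 'Alarm'),
    ('P too high', 'Alarm'),
])

cpd_low_cw = TabularCPD('Low CW', 2, [[0.9], [0.1]])
cpd_unknown_sr = TabularCPD('Unknown SR', 2, [[0.8], [0.2]])
cpd_rt = TabularCPD('RT too high', 2,
                    [[0.99, 0.7, 0.6, 0.1],
                     [0.01, 0.3, 0.4, 0.9]],
                    evidence=['Low CW', 'Unknown SR'], evidence_card=[2, 2])
cpd_p = TabularCPD('P too high', 2, [[0.95], [0.05]])
cpd_alarm = TabularCPD('Alarm', 2,
                       [[0.999, 0.2, 0.1, 0.01],
                        [0.001, 0.8, 0.9, 0.99]],
                       evidence=['RT too high', 'P too high'], evidence_card=[2, 2])

model.add_cpds(cpd_low_cw, cpd_unknown_sr, cpd_rt, cpd_p, cpd_alarm)
print("Model is consistent:", model.check_model())

# Visualize the network structure with NetworkX
nx.draw(nx.DiGraph(model.edges()), with_labels=True, node_size=3000,
        node_color='lightblue')
plt.show()

# Print the CPDs
for cpd in model.get_cpds():
    print(cpd)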
OUTPUT:

RESULT:
The Bayesian Network was successfully implemented, with defined
CPDs and verified model consistency. The network's structure and CPDs were
visualized, demonstrating the dependencies between variables effectively.
EX NO-06
PRIOR, POSTERIOR AND PROBABILITY DENSITY FUNCTIONS
DATE
AIM:
To construct and visualize the prior, posterior, and probability density functions to analyze
the relationships and updates in Bayesian inference.
ALGORITHM:
1. Define the Prior Distribution:
Select a prior distribution to represent initial beliefs about the parameter of interest before
observing any data (e.g., use a normal distribution with a mean and variance based on prior
knowledge or assumptions).
Define the mathematical form and parameters of the prior distribution. For example, a
prior could be represented by a normal distribution with mean (mu = 0) and standard
deviation (sigma = 1).
2. Define the Likelihood Function:
Select a likelihood function that expresses how probable the observed data is for each
candidate value of the parameter (e.g., a normal likelihood centered on the observed
measurement).
Define the mathematical form and parameters of the likelihood. For example, the
likelihood of an observation could be a normal density whose mean is the parameter of
interest and whose standard deviation reflects the measurement noise.
3. Apply Bayes' Theorem to Compute the Posterior Distribution:
Multiply the prior density by the likelihood at every parameter value to obtain the
unnormalized posterior.
Normalize the product so that the posterior integrates to one; the resulting distribution
combines the prior belief with the evidence contributed by the observed data.
4. Visualize the Prior, Likelihood and Posterior Distribution:
Use a plotting library (e.g., `matplotlib` in Python) to generate visualizations for the prior,
likelihood, and posterior distributions on the same graph.
Set appropriate labels for each curve and provide titles and axis labels to make the graph
interpretable.
Display the plot, showing how the posterior combines information from both the prior and
likelihood.
PROGRAM:
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
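# NOTE: The original listing stops after the imports. Below is a minimal
# sketch assuming a standard normal prior, one observed value x = 1.5, and a
# measurement noise of 0.8 (all assumed values).
theta = np.linspace(-4, 4, 500)          # Grid of parameter values

prior = norm.pdf(theta, loc=0, scale=1)  # Prior belief about the parameter

# Likelihood of the observation for each candidate parameter value
likelihood = norm.pdf(1.5, loc=theta, scale=0.8)

# Posterior is proportional to prior x likelihood; normalize on the grid
posterior = prior * likelihood
posterior /= np.trapz(posterior, theta)

plt.plot(theta, prior, label='Prior')
plt.plot(theta, likelihood, label='Likelihood')
plt.plot(theta, posterior, label='Posterior')
plt.title('Prior, Likelihood and Posterior')
plt.xlabel('Parameter value')
plt.ylabel('Density')
plt.legend()
plt.show()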
OUTPUT:

RESULT:
The graphical representation of the prior, likelihood, and posterior distributions was
successfully constructed, illustrating the Bayesian updating process. The prior distribution
reflects initial beliefs, the likelihood represents observed data, and the posterior combines both
to update the belief about the parameter in light of the new evidence.
EX NO-07
BAYESIAN NETWORKS: VARIABLE ELIMINATION
DATE
AIM:
To implement Variable Elimination algorithms to perform inference on a Bayesian Network,
enabling the calculation of conditional and joint probabilities for various queries.
ALGORITHM:
1. Define the Network Structure:
Create nodes representing variables of interest and establish directed edges between
nodes to represent dependencies. Each node corresponds to a variable, and the directed
edges represent probabilistic relationships in the network.
2. Define the CPDs:
For each node, define the CPD based on its dependencies (parent nodes). Specify
probabilities for each state given its parent states, either based on observed data or
hypothetical assumptions.
3. Specify the Queries:
Identify the variables of interest for conditional and joint probability queries, along
with any observed variables or evidence. Prepare the query by specifying which variables
are to be inferred and which are to be eliminated.
4. Run Variable Elimination:
Initialization: Organize factors based on the variables involved in the query and evidence.
Factor Reduction: Apply evidence by eliminating irrelevant values from factors involving
observed variables.
Summing Out: Sequentially eliminate non-query variables by summing over all possible
values for those variables in the network. Multiply factors that involve the variable to be
eliminated and then sum out the variable.
PROGRAM
import matplotlib.pyplot as plt
import networkx as nx
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination
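# NOTE: The original listing omits the network definition. The model below is
# a minimal sketch with a hypothetical chain A -> B -> D and illustrative CPD
# values, so that the queries that follow can run.
model = BayesianNetwork([('A', 'B'), ('B', 'D')])
cpd_a = TabularCPD('A', 2, [[0.6], [0.4]])
cpd_b = TabularCPD('B', 2, [[0.7, 0.2], [0.3, 0.8]],
                   evidence=['A'], evidence_card=[2])
cpd_d = TabularCPD('D', 2, [[0.9, 0.4], [0.1, 0.6]],
                   evidence=['B'], evidence_card=[2])
model.add_cpds(cpd_a, cpd_b, cpd_d)
assert model.check_model()

# Draw the network structure
nx.draw(nx.DiGraph(model.edges()), with_labels=True, node_size=2000)
plt.show()

inference = VariableElimination(model)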
# Query examples
conditional_prob = inference.query(variables=['D'], evidence={'A': 1})
print(conditional_prob)
joint_prob = inference.query(variables=['A', 'B', 'D'])
print(joint_prob)
RESULT:
Variable Elimination was performed on the Bayesian Network, answering both the
conditional and the joint probability queries.

EX NO-08
IMPLEMENT FILTERING ON TEMPORAL MODELS
DATE

AIM:
To implement a filtering algorithm for temporal models to estimate the current
state of a system based on past observations and a probabilistic model of the system's
dynamics.
ALGORITHM:
1. Initialize Belief: Set the initial probability distribution over the states.
2. Prediction Step: Use the transition model to predict the new belief state.
3. Update Step: Use the observation model to update the belief state based on new
observations.
4. Repeat: Iterate through the time steps and continuously update the belief.
PROGRAM:
#FILTERING
import numpy as np
# Initial prior over the two hidden states
prior = np.array([0.5, 0.5])
# Transition model P(X_t | X_{t-1}); the values are illustrative assumptions
transition_model = np.array([[0.7, 0.3],
                             [0.3, 0.7]])
# Evidence matrix: one row of likelihoods P(e_t | state) per time step
evidence = np.array([
    [0.9, 0.2],
    [0.9, 0.2]
])
posterior = prior
for likelihood in evidence:
    # Prediction step
    prediction = transition_model.dot(posterior)
    # Update step: weight the prediction by the evidence and normalize
    posterior = likelihood * prediction
    posterior /= posterior.sum()
    print("Filtered belief:", posterior)
#SMOOTHING
import numpy as np
# Backward pass over the same transition model and evidence as above
evidence_probabilities = evidence
for t in reversed(range(len(evidence_probabilities))):
    if t == len(evidence_probabilities) - 1:
        backward_probabilities = transition_model.T.dot(evidence_probabilities[t])
    else:
        backward_probabilities = transition_model.T.dot(
            evidence_probabilities[t] * backward_probabilities)
print("Backward message:", backward_probabilities)
# For full smoothing, each filtered (forward) estimate would be multiplied
# elementwise by the matching backward message and normalized; this snippet
# computes only the backward pass.
#PREDICTION
import numpy as np
def calculate_probability_after_n_days(initial, transition_matrix, n):
    # Repeatedly apply the transition matrix to the state distribution
    probabilities = initial
    for _ in range(n):
        probabilities = transition_matrix.T.dot(probabilities)
    return probabilities
# Transition matrix rows: from Sunny, Rainy, Foggy (illustrative values)
transition_matrix = np.array([[0.8, 0.05, 0.15],
                              [0.2, 0.6, 0.2],
                              [0.2, 0.3, 0.5]])
n = 3  # Number of days to predict ahead (assumed)
# Initial probabilities
initial_probabilities = {"Sunny": 0, "Rainy": 1, "Foggy": 0}
# Calculate probabilities
probabilities = calculate_probability_after_n_days(
    np.array([initial_probabilities["Sunny"],
              initial_probabilities["Rainy"],
              initial_probabilities["Foggy"]]),
    transition_matrix, n
)
print("[P(Sunny), P(Rainy), P(Foggy)] after", n, "days:", probabilities)
OUTPUT:

RESULT:
The filtering algorithm effectively updates the belief state over time, incorporating
new evidence and refining the estimation of the current state.
EX NO-09
HIDDEN MARKOV MODELS (HMM)
DATE
AIM:
To generate repeated observation patterns from a Hidden Markov Model (HMM)
with hidden states and observed states, and to visualize them.
ALGORITHM:
1. Define the Hidden States: These are the latent states of the system.
2. Define the Observed States: These are the states that can be observed and are
linked to the hidden states.
3. Define Transition and Emission Probabilities: Transition probabilities describe
how the hidden states evolve over time, while emission probabilities describe how
likely an observation is, given the hidden state.
4. Generate Observations: Based on the transition and emission probabilities,
generate repeated observation sequences for training the model.
5. Plot the Repeated Patterns: Visualize how hidden and observed states evolve
over time.
PROGRAM:
import numpy as np
import matplotlib.pyplot as plt

states = ['Sunny', 'Rainy']  # Hidden states
num_steps = 50               # Length of the generated sequence (assumed)
# Initial probabilities
initial_probabilities = np.array([0.5, 0.5])  # Initial probability for Sunny and Rainy
# Transition matrix P(next state | current state); illustrative values
transition_matrix = np.array([[0.8, 0.2],
                              [0.3, 0.7]])

current_state = np.random.choice(states, p=initial_probabilities)
states_sequence = [current_state]
obs_seq = []
for _ in range(num_steps):
    # Choose the next state based on transition probabilities
    if current_state == 'Sunny':
        current_state = np.random.choice(states, p=transition_matrix[0])
    else:
        current_state = np.random.choice(states, p=transition_matrix[1])
    states_sequence.append(current_state)
    # Emission: P(Umbrella | Rainy) = 0.9, P(Umbrella | Sunny) = 0.1 (assumed)
    p_umbrella = 0.9 if current_state == 'Rainy' else 0.1
    obs_seq.append(np.random.choice([0, 1], p=[1 - p_umbrella, p_umbrella]))

# Plot the hidden states and the observations over time
plt.subplot(2, 1, 1)
plt.plot([states.index(s) for s in states_sequence], marker='o', label='Weather')
plt.yticks([0, 1], states)
plt.title('Hidden Weather State Over Time')
plt.grid()
plt.subplot(2, 1, 2)
plt.plot(obs_seq, marker='o', color='orange', label='Umbrella Decision')
plt.ylim(-0.5, 1.5)
plt.yticks([0, 1], ['No Umbrella', 'Umbrella'])
plt.title('Umbrella Decision Over Time')
plt.grid()
plt.tight_layout()
plt.show()
RESULT:
Repeated hidden-state and observation patterns were generated from the Hidden
Markov Model and visualized over time.

EX NO-10
KALMAN FILTERS
DATE

AIM:
To apply a Kalman Filter to estimate the position of an agent and provide a
graphical representation.
ALGORITHM:
1. Define the System: We define the agent's state as [position, velocity], assuming
the velocity is constant.
2. Initialize the Kalman Filter Parameters: These include the initial estimate, state
transition matrix, control matrix, process noise, and measurement noise.
3. Simulate Measurements: Create noisy measurements of the agent's position
over time.
4. Apply the Kalman Filter: Use the Kalman Filter equations to estimate the
agent’s true position and compare it with the noisy measurements.
5. Plot the Results: Graphically represent the estimated position vs. the noisy
measurements and true position.
PROGRAM:
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt
# Initial conditions
mu0 = 0.0 # Initial position
sigma0 = 1.0 # Initial position uncertainty
xdot = 1.0 # Constant velocity
# Model parameters
delta = 1.0 # Time interval
sigma_x = 0.1 # Transition model noise (process noise)
sigma_z = 0.1 # Sensor noise
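# NOTE: The original listing stops after the parameter definitions. Below is a
# minimal sketch of the predict/update loop and the plot, assuming simulated
# noisy measurements of a constant-velocity trajectory.
np.random.seed(0)
num_steps = 30

# Simulate true positions and noisy sensor readings
true_positions = mu0 + xdot * delta * np.arange(1, num_steps + 1)
measurements = true_positions + np.random.normal(0, sigma_z, num_steps)

mu, sigma2 = mu0, sigma0 ** 2
estimates = []
for z in measurements:
    # Prediction step: constant-velocity motion plus process noise
    mu_pred = mu + xdot * delta
    sigma2_pred = sigma2 + sigma_x ** 2
    # Update step: blend prediction and measurement via the Kalman gain
    K = sigma2_pred / (sigma2_pred + sigma_z ** 2)
    mu = mu_pred + K * (z - mu_pred)
    sigma2 = (1 - K) * sigma2_pred
    estimates.append(mu)

plt.plot(true_positions, label='True position')
plt.plot(measurements, 'x', label='Noisy measurements')
plt.plot(estimates, label='Kalman estimate')
plt.xlabel('Time step')
plt.ylabel('Position')
plt.title('Kalman Filter Position Estimation')
plt.legend()
plt.show()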
RESULT:
This code implements a Kalman filter to estimate the position of an object
based on noisy observations over time. It demonstrates how the filter updates the state
estimate and uncertainty with each new measurement, effectively smoothing out the noise.