Exp.no.1a Implementation of Uninformed search algorithms (BFS, DFS)
Aim:
To write a Python program to implement Breadth First Search (BFS).
Algorithm:
1. Start
2. Put any one of the graph’s vertices at the back of the queue.
3. Take the front item of the queue and add it to the visited list.
4. Create a list of that vertex's adjacent nodes. Add those which are not within the visited
list to the rear of the queue.
5. Repeat steps 3 and 4 until the queue is empty.
6. Stop
Program:
graph = {
    '5' : ['3','7'],
    '3' : ['2', '4'],
    '7' : ['8'],
    '2' : [],
    '4' : ['8'],
    '8' : []
}
visited = []   # List for visited nodes
queue = []     # Initialize a queue
def bfs(visited, graph, node):   # Function for BFS
    visited.append(node)
    queue.append(node)
    while queue:   # Loop to visit each node
        m = queue.pop(0)
        print(m, end=" ")
        for neighbour in graph[m]:   # Enqueue unvisited neighbours
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)
bfs(visited, graph, '5')   # Driver code
Result:
Thus, the Python program to implement Breadth First Search (BFS) was developed
successfully.
Exp.no.1b Implementation of Uninformed search algorithms (BFS, DFS)
Aim:
To write a Python program to implement Depth First Search (DFS).
Algorithm:
1. Start
2. Put any one of the graph's vertices on top of the stack.
3. Take the top item of the stack and add it to the visited list.
4. Create a list of that vertex's adjacent nodes. Push those which are not in the
visited list onto the top of the stack.
5. Repeat steps 3 and 4 until the stack is empty.
6. Stop
Program:
graph = {
    '5' : ['3','7'],
    '3' : ['2', '4'],
    '7' : ['8'],
    '2' : [],
    '4' : ['8'],
    '8' : []
}
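# A minimal sketch of the missing traversal routine (recursive DFS, assumed; not in the original record)
visited = set()   # Set to keep track of visited nodes
def dfs(visited, graph, node):   # Function for DFS
    if node not in visited:
        print(node, end=" ")
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)
dfs(visited, graph, '5')   # Driver code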
Result:
Thus the Python program to implement Depth First Search (DFS) was developed
successfully.
Exp.no.2a Implementation of Informed search algorithms (A*, memory-bounded A*)
Aim:
To write a Python program to implement the A* search algorithm.
Algorithm:
Step 1: Create a priority queue and push the starting node onto the queue.
Step 2: Create a set to store the visited nodes.
Step 3: Repeat the following steps until the queue is empty:
: Pop the node with the lowest cost + heuristic from the queue.
: If the current node is the goal, return the path to the goal.
: If the current node has already been visited, skip it.
: Mark the current node as visited.
: Expand the current node and add its neighbors to the queue.
Step 4: If the queue is empty and the goal has not been found, return None (no path found).
Step 5: Stop
Program:
import heapq
class Node:
    def __init__(self, state, parent, cost, heuristic):
        self.state = state
        self.parent = parent
        self.cost = cost
        self.heuristic = heuristic
    def __lt__(self, other):   # Order nodes by f = cost + heuristic
        return (self.cost + self.heuristic) < (other.cost + other.heuristic)
def astar(start, goal, graph):
    heap = [Node(start, None, 0, 0)]   # Priority queue of frontier nodes
    visited = set()
    while heap:
        current = heapq.heappop(heap)
        if current.state == goal:   # Reconstruct the path via parent links
            path = []
            while current is not None:
                path.append(current.state)
                current = current.parent
            return path[::-1]
        if current.state in visited:
            continue
        visited.add(current.state)
        for state, cost in graph[current.state].items():
            if state not in visited:
                heuristic = 0   # Replace with your heuristic function
                heapq.heappush(heap, Node(state, current, current.cost + cost, heuristic))
    return None   # No path found
graph = {
    'A': {'B': 1, 'D': 3},
    'B': {'A': 1, 'C': 2, 'D': 4},
    'C': {'B': 2, 'D': 5, 'E': 2},
    'D': {'A': 3, 'B': 4, 'C': 5, 'E': 3},
    'E': {'C': 2, 'D': 3}
}
start, goal = 'A', 'E'
result = astar(start, goal, graph)
print(result)
Result:
Thus, the Python program for A* search was developed and the output was verified
successfully.
Exp.no.2b Implementation of Informed search algorithms (A*, memory-bounded A*)
Aim:
To write a Python program to implement the memory-bounded A* search algorithm.
Algorithm:
Step 1: Create a priority queue and push the starting node onto the queue.
Step 2: Create a set to store the visited nodes.
Step 3: Set a counter to keep track of the number of nodes expanded.
Step 4: Repeat the following steps until the queue is empty or the node counter exceeds the
max_nodes:
: Pop the node with the lowest cost + heuristic from the queue.
: If the current node is the goal, return the path to the goal.
: If the current node has already been visited, skip it.
: Mark the current node as visited.
: Increment the node counter.
: Expand the current node and add its neighbors to the queue.
Step 5: If the queue is empty and the goal has not been found, return None (no path found).
Step 6: Stop
Program:
import heapq
class Node:
    def __init__(self, state, parent, cost, heuristic):
        self.state = state
        self.parent = parent
        self.cost = cost
        self.heuristic = heuristic
    def __lt__(self, other):   # Order nodes by f = cost + heuristic
        return (self.cost + self.heuristic) < (other.cost + other.heuristic)
def astar(start, goal, graph, max_nodes):
    heap = [Node(start, None, 0, 0)]   # Priority queue of frontier nodes
    visited = set()
    node_counter = 0                   # Number of nodes expanded so far
    while heap and node_counter < max_nodes:   # Stop at the node limit
        current = heapq.heappop(heap)
        if current.state == goal:
            path = []
            while current is not None:
                path.append(current.state)
                current = current.parent
            return path[::-1]   # Return the reversed path
        if current.state in visited:
            continue
        visited.add(current.state)
        node_counter += 1
        for state, cost in graph[current.state].items():
            if state not in visited:
                heuristic = 0   # Replace with your heuristic function
                heapq.heappush(heap, Node(state, current, current.cost + cost, heuristic))
    return None   # No path found within the node limit
# Example graph
graph = {
    'A': {'B': 1, 'C': 4},
    'B': {'A': 1, 'C': 2, 'D': 5},
    'C': {'A': 4, 'B': 2, 'D': 1},
    'D': {'B': 5, 'C': 1}
}
# Driver Code
start = 'A'
goal = 'D'
max_nodes = 10
result = astar(start, goal, graph, max_nodes)
print("Path found using Memory Bounded A*:", result)
Result:
Thus, the Python program for memory-bounded A* search was developed and the
output was verified successfully.
Exp.no.3 Implement Naive Bayes models
Aim:
To write a Python program to implement the Naïve Bayes model.
Algorithm:
Step 1. Load the libraries: import the required libraries such as pandas, numpy, and sklearn.
Step 2. Load the data into a pandas dataframe.
Step 3. Clean and preprocess the data as necessary. For example, you can handle missing
values, convert categorical variables into numerical variables, and normalize the data.
Step 4. Split the data into training and test sets using the train_test_split function from scikit-
learn.
Step 5. Train the Gaussian Naive Bayes model using the training data.
Step 6. Evaluate the performance of the model using the test data and the accuracy_score
function from scikit-learn.
Step 7. Finally, you can use the trained model to make predictions on new data.
Program:
from sklearn.naive_bayes import GaussianNB, MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.feature_extraction.text import CountVectorizer
# ==========================================
# 1️⃣ Gaussian Naïve Bayes (For Numeric Data)
# ==========================================
print("\n=== Gaussian Naïve Bayes (Numbers) ===")
# ==========================================
# 2️⃣ Multinomial Naïve Bayes (For Text)
# ==========================================
print("\n=== Multinomial Naïve Bayes (Text) ===")
Result:
Thus the Python program for implementing Naïve Bayes model was developed and the
output was verified successfully.
Exp.no.4 BAYESIAN NETWORKS
Aim:
To write a Python program to implement a Bayesian network for Traffic.
Algorithm:
Step 1. Start by importing the required libraries (tkinter and pgmpy).
Step 2. Define the Bayesian network structure with edges from Rain and Accident to
Traffic.
Step 3. Define the conditional probability tables (TabularCPD) for Rain, Accident,
and Traffic, and add them to the model.
Step 4. Create a VariableElimination object to run inference on the model.
Step 5. Build a tkinter GUI with checkboxes for the Rain and Accident evidence.
Step 6. On each query, call the query method on Traffic with the selected evidence
and display the resulting probabilities.
Step 7. Stop
Program:
import tkinter as tk
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination
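# A minimal model definition (assumption: the CPD values below are illustrative; not in the original record)
model = BayesianNetwork([('Rain', 'Traffic'), ('Accident', 'Traffic')])
cpd_rain = TabularCPD('Rain', 2, [[0.7], [0.3]])           # P(Rain): row 0 = no, row 1 = yes
cpd_accident = TabularCPD('Accident', 2, [[0.8], [0.2]])   # P(Accident)
cpd_traffic = TabularCPD('Traffic', 2,
                         [[0.9, 0.5, 0.4, 0.1],            # P(Traffic = 0 | Rain, Accident)
                          [0.1, 0.5, 0.6, 0.9]],           # P(Traffic = 1 | Rain, Accident)
                         evidence=['Rain', 'Accident'], evidence_card=[2, 2])
model.add_cpds(cpd_rain, cpd_accident, cpd_traffic)
infer = VariableElimination(model)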
# GUI function
def calculate():
    result = infer.query(['Traffic'], evidence={'Rain': rain.get(), 'Accident': accident.get()})
    label.config(text=f"No Traffic: {result.values[0]:.2f}\nTraffic: {result.values[1]:.2f}")
# GUI setup
root = tk.Tk()
rain, accident = tk.IntVar(), tk.IntVar()
tk.Label(root, text="Rain?").pack()
tk.Checkbutton(root, variable=rain).pack()
tk.Label(root, text="Accident?").pack()
tk.Checkbutton(root, variable=accident).pack()
tk.Button(root, text="Check", command=calculate).pack()
label = tk.Label(root, text="Traffic Probability")
label.pack()
root.mainloop()
Result:
Thus, the Python program for implementing Bayesian Networks was successfully
developed and the output was verified.
Exp.No.5 REGRESSION MODEL
Aim:
To write a Python program to build regression models.
Algorithm:
1. Import necessary libraries: numpy, pandas, matplotlib.pyplot, LinearRegression,
mean_squared_error, and r2_score.
2. Create a numpy array for waist and weight values and store them in separate variables.
3. Create a pandas DataFrame with waist and weight columns using the numpy arrays.
4. Extract input (X) and output (y) variables from the DataFrame.
5. Create an instance of LinearRegression model.
6. Fit the LinearRegression model to the input and output variables.
7. Create a new DataFrame with a single value of waist.
8. Use the predict() method of the LinearRegression model to predict the weight for the
new waist value.
9. Calculate the mean squared error and R-squared values using mean_squared_error()
and r2_score() functions respectively.
10. Plot the actual and predicted values using matplotlib.pyplot.scatter() and
matplotlib.pyplot.plot() functions.
Program:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
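# A minimal data setup (assumption: illustrative waist/weight values, per the algorithm; not in the original record)
waist = np.array([70, 75, 80, 85, 90, 95, 100, 105]).reshape(-1, 1)   # cm
weight = np.array([55, 60, 64, 70, 74, 80, 85, 90])                   # kg
X_train, X_test, y_train, y_test = train_test_split(waist, weight, test_size=0.25, random_state=0)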
model = LinearRegression()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
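# A minimal evaluation and plot (per algorithm steps 9-10; settings illustrative)
from sklearn.metrics import mean_squared_error, r2_score
print("MSE:", mean_squared_error(y_test, y_pred))
print("R-squared:", r2_score(y_test, y_pred))
plt.scatter(waist, weight, label="Actual")
plt.plot(waist, model.predict(waist), color="red", label="Predicted")
plt.xlabel("Waist (cm)")
plt.ylabel("Weight (kg)")
plt.legend()
plt.show()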
Result:
Thus, the Python program to build regression models was developed successfully.
Exp.No.6 DECISION TREE AND RANDOM FOREST
Aim:
To write a Python program to build decision tree and random forest.
Algorithm:
3. Extract the features into an array X, and the target variable into an array y.
5. Split the data into training and testing sets using train_test_split function.
6. Create a DecisionTreeClassifier object, fit the model to the training data, and visualize
the decision tree using plot_tree.
7. Create a RandomForestClassifier object with 100 estimators, fit the model to the
training data, and visualize the random forest by displaying 6 trees.
8. Print the accuracy of the decision tree and random forest models using the score
method on the test data.
Program:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
data = {'Age': [25, 30, 35, 40, 45, 50, 55, 60],
        'Salary': [30000, 40000, 50000, 60000, 70000, 80000, 90000, 100000],
        'Buy_Car': [0, 0, 0, 1, 1, 1, 1, 1]}
df = pd.DataFrame(data)
X = df[['Age', 'Salary']]
y = df['Buy_Car']
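# A minimal completion sketch (assumption: split and model settings are illustrative; not in the original record)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
dt_model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
rf_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("Decision tree accuracy:", dt_model.score(X_test, y_test))
print("Random forest accuracy:", rf_model.score(X_test, y_test))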
plt.figure(figsize=(8,5))
plot_tree(dt_model, feature_names=['Age', 'Salary'], class_names=['No', 'Yes'], filled=True)
plt.title("Decision Tree")
plt.show()
Result:
Thus the Python program to build decision tree and random forest was developed
successfully.
Exp.No.7 SVM MODELS
Aim:
To write a Python program to build SVM model.
Algorithm:
1. Import the necessary libraries (numpy, pandas, matplotlib.pyplot, and SVC from
sklearn.svm).
2. Define the features (X) and labels (y) for the dataset.
3. Split the data into training and test sets using train_test_split.
4. Create an SVC model with a linear kernel and fit it to the training data.
5. Predict the labels of the test set and evaluate the model using accuracy_score.
Program:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
data = {'Age': [22, 25, 30, 35, 40, 45, 50, 55, 60, 65],
        'Salary': [25000, 30000, 40000, 50000, 60000, 70000, 80000, 90000, 100000, 110000],
        'Buy_Car': [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]}
df = pd.DataFrame(data)
X = df[['Age', 'Salary']]
y = df['Buy_Car']
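# A minimal completion sketch (assumption: split settings are illustrative; not in the original record)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)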
svm_model = SVC(kernel='linear')
svm_model.fit(X_train, y_train)
y_pred = svm_model.predict(X_test)
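# Evaluate on the held-out test set
print("Accuracy:", accuracy_score(y_test, y_pred))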
Result:
Thus, the Python program to build an SVM model was developed, and the output was
successfully verified.
Exp.no.8 Implement Ensembling techniques
Aim:
To implement the ensembling technique of Blending with the given Alcohol QCM
Dataset.
Algorithm:
1. Split the training dataset into train, test and validation dataset.
2. Fit all the base models using train dataset.
3. Make predictions on validation and test dataset.
4. Use these predictions as features (meta-features) to build a second-level model.
5. Use the second-level model to make final predictions on the test meta-features.
Program:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.datasets import load_iris
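# A minimal completion sketch (assumption: a single random forest stands in for the full blending pipeline; not in the original record)
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)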
# Make predictions
y_pred = model.predict(X_test)
# Print accuracy
print(f"Accuracy: {accuracy_score(y_test, y_pred):.2f}")
Result:
Thus, the program to implement an ensembling technique was executed successfully
and the output was verified.
Exp.No.9 Implement clustering algorithms
Aim:
To implement the K-Means clustering algorithm to cluster the Iris dataset.
Algorithm:
Step-1: Select the number K of clusters.
Step-2: Initialize K centroids, for example by picking K random data points.
Step-3: Assign each data point to the nearest centroid using the Euclidean distance.
Step-4: Recompute each centroid as the mean of the points assigned to it.
Step-5: Repeat steps 3 and 4 until the assignments no longer change.
Step-6: Our model is ready.
Program:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
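# A minimal completion sketch (assumption: 3 clusters and the first two features for plotting; not in the original record)
iris = load_iris()
X = StandardScaler().fit_transform(iris.data)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
plt.scatter(X[:, 0], X[:, 1], c=kmeans.labels_)
plt.xlabel("Sepal length (scaled)")
plt.ylabel("Sepal width (scaled)")
plt.title("K-Means clusters on the Iris dataset")
plt.show()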
Result:
Thus, the program to implement the K-Means clustering algorithm on the Iris dataset
was executed successfully and the output was verified.
Exp.No.10 Implement EM for Bayesian Networks
Aim:
To write a python program to implement EM for Bayesian Networks.
Algorithm:
1. Start with a Bayesian network that has some variables that are not directly
observed. For example, suppose we have a Bayesian network with variables A, B,
C, and D, where A and D are observed and B and C are latent.
2. Initialize the parameters of the network. This includes the conditional
probabilities for each variable given its parents, as well as the prior
probabilities for the root variables.
3. E-step: Compute the expected sufficient statistics for the latent variables. This
involves computing the posterior probability distribution over the latent variables
given the observed data and the current parameter estimates. This can be done
using the forward-backward algorithm or the belief propagation algorithm.
4. M-step: Update the parameter estimates using the expected sufficient statistics
computed in step 3. This involves maximizing the likelihood of the data with
respect to the parameters of the network, given the expected sufficient
statistics.
5. Repeat steps 3-4 until convergence. Convergence can be measured by
monitoring the change in the log-likelihood of the data, or by monitoring the
change in the parameter estimates.
Program:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import networkx as nx
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import BayesianEstimator
from pgmpy.inference import VariableElimination
# Load dataset
file_path = "C:/Users/navee/OneDrive/Desktop/AIML PROGRAMS/heart.csv"
heartDisease = pd.read_csv(file_path)
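# A minimal completion sketch (assumption: the structure and column names below are illustrative
# and depend on the columns actually present in heart.csv; not in the original record)
model = BayesianNetwork([('age', 'target'), ('sex', 'target'), ('cp', 'target')])
model.fit(heartDisease, estimator=BayesianEstimator, prior_type='BDeu')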
# Inference
infer = VariableElimination(model)
Result:
Thus the Python program to Implement EM for Bayesian Networks was developed
successfully.
Exp.No.11 Build Simple NN Models
Aim:
To write a Python program to build simple neural network models.
Algorithm:
1. Import the required libraries (tensorflow, keras, numpy, matplotlib).
2. Prepare the input features and target labels.
3. Build a Sequential model with dense layers.
4. Compile the model with an optimizer and a loss function.
5. Train the model, evaluate it, and plot the training loss.
Program:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import matplotlib.pyplot as plt
import numpy as np
import random
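# A minimal completion sketch (assumption: XOR toy data and a tiny architecture; not in the original record)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)
model = keras.Sequential([
    layers.Dense(8, activation='relu', input_shape=(2,)),
    layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit(X, y, epochs=500, verbose=0)
print("Predictions:", model.predict(X).round().ravel())
plt.plot(history.history['loss'])
plt.title("Training loss")
plt.show()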
Result:
Thus the Python program to build simple NN Models was developed
successfully.
Exp.No.12 Deep Learning NN Models
Aim:
To write a Python program to build deep learning neural network models.
Algorithm:
1. Import the required libraries (tensorflow, keras, sklearn, numpy, matplotlib).
2. Generate the make_moons dataset and split it into training and test sets.
3. Standardize the features using StandardScaler.
4. Build a Sequential model with multiple dense hidden layers.
5. Compile, train, and evaluate the model on the test set.
Program:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
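# A minimal completion sketch (assumption: depth/width and training settings are illustrative; not in the original record)
X, y = make_moons(n_samples=500, noise=0.2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
model = keras.Sequential([
    layers.Dense(32, activation='relu', input_shape=(2,)),
    layers.Dense(32, activation='relu'),
    layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=100, verbose=0)
loss, acc = model.evaluate(X_test, y_test, verbose=0)
print(f"Test accuracy: {acc:.2f}")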
Result:
Thus the Python program to implement deep learning of NN Models was developed
successfully.