AI Practical TYCS
Artificial Intelligence 2024-25
Index

Sr.No  Topic  (Date / Sign)

1. Breadth First Search & Iterative Depth First Search
   • Implement the Breadth First Search algorithm.
   • Implement the Iterative Depth First Search algorithm.
2. A* Search and Recursive Best-First Search
   • Implement the A* Search algorithm for solving a pathfinding problem.
   • Implement the Recursive Best-First Search algorithm.
3. Decision Tree Learning
   • Implement the decision tree learning algorithm to build a decision tree for a given dataset.
4. Feed Forward Back Propagation Neural Network
   • Implement the Feed Forward Back Propagation algorithm to train a neural network.
   • Use a given dataset to train the neural network for a specific task.
5. AdaBoost Ensemble Learning
   • Implement the AdaBoost algorithm to create an ensemble of weak classifiers.
   • Train the ensemble model on a given dataset and evaluate its performance.
   • Compare the results with the individual weak classifiers.
6. Naive Bayes' Classifier
   • Implement the Naive Bayes algorithm for classification.
   • Train a Naive Bayes model using a given dataset and calculate class probabilities.
   • Evaluate the accuracy of the model on test data and analyze the results.
7. K-Nearest Neighbours (K-NN)
   • Implement the K-NN algorithm for classification or regression.
   • Apply the K-NN algorithm on the given dataset and predict the class or value for test data.
8. Association Rule Mining
   • Implement the Association Rule Mining algorithm (e.g. Apriori) to find frequent itemsets.
   • Generate association rules from the frequent itemsets and calculate their support.
Practical 1
Aim: Breadth First Search & Iterative Depth First Search
Dataset: RMP.py File
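RMP.py itself is not reproduced in this manual. From the imports used below, it is assumed to define two dictionaries: dict_gn, mapping each city to its neighbours with road distances (step costs), and dict_hn, mapping each city to its straight-line distance to Bucharest (the heuristic used in Practical 2). A sketch of the expected shape, with values taken from the standard Russell & Norvig Romania map:

# RMP.py (sketch -- shape assumed from the imports; values from the standard
# Russell & Norvig Romania map, abridged to the Arad-Bucharest corridor)
dict_gn = {
    'Arad':      {'Zerind': 75, 'Timisoara': 118, 'Sibiu': 140},
    'Sibiu':     {'Arad': 140, 'Oradea': 151, 'Fagaras': 99, 'Rimnicu': 80},
    'Fagaras':   {'Sibiu': 99, 'Bucharest': 211},
    'Rimnicu':   {'Sibiu': 80, 'Pitesti': 97, 'Craiova': 146},
    'Pitesti':   {'Rimnicu': 97, 'Craiova': 138, 'Bucharest': 101},
    'Bucharest': {'Fagaras': 211, 'Pitesti': 101},
    # ... remaining cities omitted in this sketch
}
dict_hn = {
    'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
    'Fagaras': 176, 'Rimnicu': 193, 'Pitesti': 100, 'Craiova': 160,
    'Oradea': 380, 'Bucharest': 0,
    # ... remaining cities omitted in this sketch
}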
Program 1: Implement the Breadth First Search algorithm to solve a given problem.
import queue as Q
from RMP import dict_gn   # dict_gn: Romania road-map adjacency dictionary from RMP.py

start = 'Arad'
goal = 'Bucharest'
result = ''
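The listing breaks off here; the body of the search is not in the source. A minimal sketch of the missing part, continuing from the setup above and assuming dict_gn is the nested adjacency dictionary described under Dataset:

def BFS(city, cityq, visitedq):
    # visit `city`, enqueue its unseen neighbours, then recurse on the
    # front of the queue; stop as soon as the goal appears as a neighbour
    global result
    if city == start:
        result = result + ' ' + city
    for eachcity in dict_gn[city].keys():
        if eachcity == goal:
            result = result + ' ' + eachcity
            return
        if eachcity not in cityq.queue and eachcity not in visitedq.queue:
            cityq.put(eachcity)
            result = result + ' ' + eachcity
    visitedq.put(city)
    BFS(cityq.get(), cityq, visitedq)

def main():
    cityq = Q.Queue()       # FIFO frontier
    visitedq = Q.Queue()    # cities already expanded
    BFS(start, cityq, visitedq)
    print("BFS Traversal from", start, "to", goal, "is:")
    print(result)

main()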
Output:

Program 2: Implement the Iterative Depth First Search algorithm.
import queue as Q
from RMP import dict_gn

start = "Arad"
goal = "Bucharest"
result = ""

def DLS(city, visitedstack, startlimit, endlimit):
    # Depth-Limited Search: recursive DFS that stops at the given depth limit
    global result
    found = 0
    result = result + city + " "
    visitedstack.append(city)
    if city == goal:
        return 1
    if startlimit == endlimit:
        return 0
    for eachcity in dict_gn[city].keys():
        if eachcity not in visitedstack:
            found = DLS(eachcity, visitedstack, startlimit + 1, endlimit)
            if found:
                return found
    return 0

def IDDFS(city, visitedstack, endlimit):
    # Iterative deepening: run DLS with limits 0, 1, 2, ... until the goal is found
    global result
    for i in range(0, endlimit):
        print("Searching at Limit:", i)
        found = DLS(city, visitedstack, 0, i)
        if found:
            print("Found")
            break
        else:
            print("Not Found!")
            print(result)
            print("______")
            result = ""
            visitedstack = []

def main():
    visitedstack = []
    IDDFS(start, visitedstack, 9)
    print("IDDFS Traversal from", start, "to", goal, "is:")
    print(result)

main()
Output:
Practical 2
Aim: A* Search and Recursive Best-First Search
Dataset: RMP.py File
Program 1: Implement the A* Search algorithm for solving a pathfinding problem.
import queue as Q
from RMP import dict_gn   # road distances between neighbouring cities (step costs)
from RMP import dict_hn   # straight-line distances to Bucharest (heuristic)

start = 'Arad'
goal = 'Bucharest'
result = ''

def get_fn(citystr):
    # f(n) = g(n) + h(n) for the path encoded as a comma-separated city string
    cities = citystr.split(",")
    gn = 0
    for ctr in range(0, len(cities) - 1):
        gn = gn + dict_gn[cities[ctr]][cities[ctr + 1]]   # g(n): cost of the path so far
    hn = dict_hn[cities[len(cities) - 1]]                 # h(n): heuristic of the last city
    return hn + gn

def expand(cityq):
    # repeatedly expand the cheapest frontier entry until the goal is dequeued
    global result
    tot, citystr, thiscity = cityq.get()
    if thiscity == goal:
        result = citystr + "::" + str(tot)
        return
    for cty in dict_gn[thiscity]:
        cityq.put((get_fn(citystr + "," + cty), citystr + "," + cty, cty))
    expand(cityq)

def main():
    cityq = Q.PriorityQueue()
    thiscity = start
    cityq.put((get_fn(start), start, thiscity))
    expand(cityq)
    print("The A* path with the total is: ")
    print(result)

main()
Output:
Program 2: Implement the Recursive Best-First Search algorithm.

import queue as Q
from RMP import dict_gn
from RMP import dict_hn

start = 'Arad'
goal = 'Bucharest'
result = ''

def get_fn(citystr):
    # f(n) = g(n) + h(n), as in the A* program above
    cities = citystr.split(",")
    gn = 0
    for ctr in range(0, len(cities) - 1):
        gn = gn + dict_gn[cities[ctr]][cities[ctr + 1]]
    hn = dict_hn[cities[len(cities) - 1]]
    return hn + gn

def printout(cityq):
    for i in range(0, cityq.qsize()):
        print(cityq.queue[i])

def expand(cityq):
    global result
    tot, citystr, thiscity = cityq.get()
    nexttot = 999   # f-limit: f(n) of the second-best alternative (999 acts as infinity)
    if not cityq.empty():
        nexttot, nextcitystr, nextthiscity = cityq.queue[0]
    if thiscity == goal and tot < nexttot:
        result = citystr + "::" + str(tot)
        return
    print("Expanded city ---------", thiscity)
    print("Second best f(n) ---------", nexttot)
    tempq = Q.PriorityQueue()
    for cty in dict_gn[thiscity]:
        tempq.put((get_fn(citystr + ',' + cty), citystr + ',' + cty, cty))
    # keep the two best successors; once a successor exceeds the f-limit,
    # re-insert the current path with that backed-up cost instead
    for ctr in range(1, 3):
        ctrtot, ctrcitystr, ctrthiscity = tempq.get()
        if ctrtot < nexttot:
            cityq.put((ctrtot, ctrcitystr, ctrthiscity))
        else:
            cityq.put((ctrtot, citystr, thiscity))
            break
    printout(cityq)
    expand(cityq)

def main():
    cityq = Q.PriorityQueue()
    thiscity = start
    cityq.put((999, "NA", "NA"))   # sentinel so the queue is never empty
    cityq.put((get_fn(start), start, thiscity))
    expand(cityq)
    print(result)

main()
Output:
Practical 3
Aim:
• Implement the decision tree learning algorithm to build a decision tree
for a given dataset.
• Evaluate the accuracy and efficiency on the test data set.
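No listing for this practical appears in the source. A minimal sketch covering both points, assuming scikit-learn's DecisionTreeClassifier and the built-in Iris dataset (the manual does not name a dataset):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# load the built-in Iris dataset and hold out 30% as a test set
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=7)

# build the tree using entropy (information gain) as the split criterion
model = DecisionTreeClassifier(criterion='entropy', random_state=7)
model.fit(X_train, y_train)

# evaluate accuracy on the held-out test data
predictions = model.predict(X_test)
print("Accuracy on test data:", accuracy_score(y_test, predictions))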
Practical 4
AIM: Feed Forward Back Propagation Neural Network
• Implement the Feed Forward Back Propagation algorithm to train a neural network.
• Use a given dataset to train the neural network for a specific task.
import numpy as np

class NeuralNetwork():
    def __init__(self):
        np.random.seed()   # seed from OS entropy; pass a constant for reproducible runs
        # one neuron with 3 inputs: weights start as a 3x1 matrix in [-1, 1)
        self.synaptic_weights = 2 * np.random.random((3, 1)) - 1

if __name__ == "__main__":
    # Initializing the neuron class
    neural_network = NeuralNetwork()
    print("Beginning Randomly Generated Weights: ")
    print(neural_network.synaptic_weights)
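The listing ends after printing the initial weights; the forward pass and the back-propagation update are missing. A minimal self-contained sketch of the rest, assuming the classic single-neuron sigmoid example this snippet appears to follow (the seed, iteration count, and truth-table training data are illustrative, not from the manual):

import numpy as np

class NeuralNetwork():
    def __init__(self):
        np.random.seed(1)   # fixed seed (an assumption) so every run starts the same
        self.synaptic_weights = 2 * np.random.random((3, 1)) - 1

    def sigmoid(self, x):
        # activation: squashes any value into (0, 1)
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        # gradient of the sigmoid, where x is already a sigmoid output
        return x * (1 - x)

    def think(self, inputs):
        # forward pass: weighted sum of the inputs through the activation
        return self.sigmoid(np.dot(inputs.astype(float), self.synaptic_weights))

    def train(self, inputs, outputs, iterations):
        # back propagation: nudge weights by the error scaled by the sigmoid gradient
        for _ in range(iterations):
            output = self.think(inputs)
            error = outputs - output
            self.synaptic_weights += np.dot(inputs.T, error * self.sigmoid_derivative(output))

if __name__ == "__main__":
    # illustrative 3-input truth table: the output equals the first input
    training_inputs = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
    training_outputs = np.array([[0, 1, 1, 0]]).T
    neural_network = NeuralNetwork()
    neural_network.train(training_inputs, training_outputs, 10000)
    print("Weights After Training:")
    print(neural_network.synaptic_weights)
    print("Output for [1, 0, 0]:")
    print(neural_network.think(np.array([1, 0, 0])))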
Output:
Practical 5
AIM: AdaBoost Ensemble Learning
• Implement the AdaBoost algorithm to create an ensemble of weak classifiers.
• Train the ensemble model on a given dataset and evaluate its performance.
• Compare the results with the individual weak classifiers.
import pandas
from sklearn import model_selection
from sklearn.ensemble import AdaBoostClassifier

url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = pandas.read_csv(url, names=names)
array = dataframe.values
X = array[:, 0:8]   # the eight input features
Y = array[:, 8]     # the class label
seed = 7
num_trees = 30
# An explicit k-fold splitter may be passed to cross_val_score via cv=kfold:
# kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=seed)
# n_estimators is the number of weak learners to build before predicting;
# more estimators generally give better voting options and performance.
model = AdaBoostClassifier(n_estimators=num_trees, random_state=seed, algorithm='SAMME')
# cross_val_score estimates the model's accuracy via cross-validation
# (the cv argument is optional; the default splitter is used here).
results = model_selection.cross_val_score(model, X, Y)
print(results.mean())
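The aim also asks for a comparison with the individual weak classifiers, which the listing omits. A minimal follow-up sketch, reusing X, Y, and results from above and assuming the default decision-stump weak learner:

from sklearn.tree import DecisionTreeClassifier

# a single decision stump: the same weak learner AdaBoost boosts by default
weak = DecisionTreeClassifier(max_depth=1)
weak_results = model_selection.cross_val_score(weak, X, Y)
print("Single weak classifier accuracy:", weak_results.mean())
print("AdaBoost ensemble accuracy:", results.mean())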
Output:
Practical 6
AIM: Naive Bayes' Classifier
• Implement the Naive Bayes algorithm for classification.
• Train a Naive Bayes model using a given dataset and calculate class probabilities.
• Evaluate the accuracy of the model on test data and analyze the results.
Code:
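No listing follows in the source. A minimal sketch, assuming scikit-learn's GaussianNB and the built-in Iris dataset (the manual does not name one):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=7)

model = GaussianNB()          # Gaussian Naive Bayes for continuous features
model.fit(X_train, y_train)

# per-class probabilities for the first few test samples
print("Class probabilities:")
print(model.predict_proba(X_test[:3]))

# accuracy on the held-out test data
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))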
Practical 7
Aim:
• Implement the K-NN Algorithm for classification or regression.
• Apply the K-NN Algorithm on the given dataset and predict the class or value for test data.
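No listing appears for this practical either. A minimal sketch, assuming scikit-learn's KNeighborsClassifier and the built-in Iris dataset (the manual does not name one):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=7)

# classify each test point by a majority vote of its k = 5 nearest neighbours
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)

print("Predicted classes for the first five test samples:", knn.predict(X_test[:5]))
print("Accuracy:", accuracy_score(y_test, knn.predict(X_test)))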
Practical 8
Aim:
• Implement the Association Rule Mining algorithm (e.g. Apriori) to find frequent itemsets.
• Generate association rules from the frequent itemsets and calculate their support.
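No listing appears for this practical in the source. A minimal sketch using the mlxtend library's apriori and association_rules functions on a small hypothetical basket dataset (both the library choice and the data are assumptions; the manual supplies neither):

import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# a small illustrative basket dataset (hypothetical)
transactions = [['milk', 'bread', 'butter'],
                ['bread', 'butter'],
                ['milk', 'bread'],
                ['milk', 'butter'],
                ['bread', 'butter', 'jam']]

# one-hot encode the transactions into a boolean DataFrame
te = TransactionEncoder()
df = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)

# Apriori: keep itemsets appearing in at least 40% of transactions
frequent_itemsets = apriori(df, min_support=0.4, use_colnames=True)
print(frequent_itemsets)

# derive rules and report their support and confidence
rules = association_rules(frequent_itemsets, metric='confidence', min_threshold=0.6)
print(rules[['antecedents', 'consequents', 'support', 'confidence']])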