CS3491 AI and ML Lab Manual
COLLEGE OF ENGINEERING
Paravai, MADURAI - 625 402
(Approved by AICTE, New Delhi & Affiliated to Anna University, Chennai)
BONAFIDE CERTIFICATE
Register No
of …………year……....Semester in ……………………………………………………
Exp.No : 1a Implementation of Uninformed search algorithms (BFS)
Date:
AIM:
Write a Program to Implement Breadth First Search.
Procedure:
Create a graph as key-value pairs (node : list of neighbours).
Assign an empty list for visited nodes.
Initialize an empty list as the queue.
Define a subprogram bfs that takes visited, graph and node as parameters.
Append the starting node to visited and to the queue.
Loop until the queue is empty, dequeuing a node and enqueuing its unvisited neighbours.
Print each node as it is dequeued, giving the breadth-first order.
PROGRAM
graph = {
'5' : ['3','7'],
'3' : ['2', '4'],
'7' : ['8'],
'2' : [],
'4' : ['8'],
'8' : []
}
visited = [] # List for visited nodes.
queue = [] #Initialize a queue
def bfs(visited, graph, node): #function for BFS
    visited.append(node)
    queue.append(node)

    while queue:  # creating loop to visit each node
        m = queue.pop(0)  # dequeue the next node
        print(m, end=" ")

        for neighbour in graph[m]:  # enqueue unvisited neighbours
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

# Driver Code
print("Following is the Breadth-First Search:")
bfs(visited, graph, '5')
OUTPUT
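For the graph above, starting from node '5', the program prints:
Following is the Breadth-First Search:
5 3 7 2 4 8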
Result:
Thus the Breadth First Search program was successfully executed.
Exp.No : 1b Implementation of Uninformed search algorithms (DFS)
Date:
AIM:
Write a Program to Implement Depth First Search.
Procedure:
Create a graph as key-value pairs (node : list of neighbours).
Assign an empty set for visited nodes.
Define a subprogram dfs that takes visited, graph and node as parameters.
If the node is unvisited, print it and add it to visited.
Recursively call dfs on each neighbour to visit every node in depth-first order.
PROGRAM
graph = {
'5' : ['3','7'],
'3' : ['2', '4'],
'7' : ['8'],
'2' : [],
'4' : ['8'],
'8' : []
}
visited = set() # Set to keep track of visited nodes of graph.

def dfs(visited, graph, node):  # function for DFS
    if node not in visited:
        print(node)
        visited.add(node)
        for neighbour in graph[node]:  # recurse into unvisited neighbours
            dfs(visited, graph, neighbour)

# Driver Code
print("Following is the Depth-First Search:")
dfs(visited, graph, '5')
OUTPUT
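For the same graph, starting from node '5', the program prints one node per line:
Following is the Depth-First Search:
5
3
2
4
8
7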
Result:
Thus the Depth First Search program was successfully executed.
Exp.No : 2 Implementation of Informed search algorithms (Hill Climbing Algorithm)
Date:
Aim:
Write a program to implement the Hill Climbing algorithm.
Program:
import random

def randomSolution(tsp):
    cities = list(range(len(tsp)))
    solution = []
    for i in range(len(tsp)):
        # pick a random remaining city for the tour
        randomCity = cities[random.randint(0, len(cities) - 1)]
        solution.append(randomCity)
        cities.remove(randomCity)
    return solution

def routeLength(tsp, solution):
    routeLength = 0
    for i in range(len(solution)):
        routeLength += tsp[solution[i - 1]][solution[i]]
    return routeLength

def getNeighbours(solution):
    # neighbours are all tours obtained by swapping two cities
    neighbours = []
    for i in range(len(solution)):
        for j in range(i + 1, len(solution)):
            neighbour = solution.copy()
            neighbour[i] = solution[j]
            neighbour[j] = solution[i]
            neighbours.append(neighbour)
    return neighbours

def getBestNeighbour(tsp, neighbours):
    bestRouteLength = routeLength(tsp, neighbours[0])
    bestNeighbour = neighbours[0]
    for neighbour in neighbours:
        currentRouteLength = routeLength(tsp, neighbour)
        if currentRouteLength < bestRouteLength:
            bestRouteLength = currentRouteLength
            bestNeighbour = neighbour
    return bestNeighbour, bestRouteLength

def hillClimbing(tsp):
    currentSolution = randomSolution(tsp)
    currentRouteLength = routeLength(tsp, currentSolution)
    neighbours = getNeighbours(currentSolution)
    bestNeighbour, bestNeighbourRouteLength = getBestNeighbour(tsp, neighbours)

    # keep moving to the best neighbour while it improves the route
    while bestNeighbourRouteLength < currentRouteLength:
        currentSolution = bestNeighbour
        currentRouteLength = bestNeighbourRouteLength
        neighbours = getNeighbours(currentSolution)
        bestNeighbour, bestNeighbourRouteLength = getBestNeighbour(tsp, neighbours)

    return currentSolution, currentRouteLength

# 4-city symmetric distance matrix (first three rows assumed)
tsp = [
    [0, 400, 500, 300],
    [400, 0, 300, 500],
    [500, 300, 0, 400],
    [300, 500, 400, 0]
]

print(hillClimbing(tsp))
OUTPUT
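A typical run prints the best tour found and its length, for example ([1, 0, 3, 2], 1400); the exact tour varies with the random starting solution.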
Result:
Thus the Hill Climbing algorithm was successfully executed.
Exp.No : 3 Implement naïve Bayes models and Bayesian Networks
Date:
Aim:
Write a program to construct a Bayesian network considering medical data. Use this model to
demonstrate the diagnosis of heart patients using standard Heart Disease Data Set. You can
use Java/Python ML library classes/API.
Attribute Information:
-- Only 14 used
-- 1. #3 (age)
-- 2. #4 (sex)
-- 3. #9 (cp)
-- 4. #10 (trestbps)
-- 5. #12 (chol)
-- 6. #16 (fbs)
-- 7. #19 (restecg)
-- 8. #32 (thalach)
-- 9. #38 (exang)
-- 10. #40 (oldpeak)
-- 11. #41 (slope)
-- 12. #44 (ca)
-- 13. #51 (thal)
-- 14. #58 (num)
Program:
import numpy as np
import urllib
import pandas as pd
from pgmpy.models import BayesianModel
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

names = ['age', 'sex', 'cp', 'trestbps', 'chol', 'fbs', 'restecg', 'thalach', 'exang',
         'oldpeak', 'slope', 'ca', 'thal', 'heartdisease']
heartDisease = pd.read_csv('heart.csv', names=names)  # data file name assumed
heartDisease = heartDisease.replace('?', np.nan)

model = BayesianModel([('age','trestbps'), ('age','fbs'), ('sex','trestbps'),
                       ('exang','trestbps'), ('trestbps','heartdisease'),
                       ('fbs','heartdisease'), ('heartdisease','restecg'),
                       ('heartdisease','thalach'), ('heartdisease','chol')])
model.fit(heartDisease, estimator=MaximumLikelihoodEstimator)
HeartDisease_infer = VariableElimination(model)
q = HeartDisease_infer.query(variables=['heartdisease'])  # marginal query (form assumed)
print(q)
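To diagnose an individual patient, the same inference object can be queried with patient-specific evidence; a minimal sketch (the evidence variable and value here are assumptions, not from the original program):

# Posterior over heartdisease given fasting blood sugar > 120 mg/dl (evidence assumed)
q2 = HeartDisease_infer.query(variables=['heartdisease'], evidence={'fbs': 1})
print(q2)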
OUTPUT
╒═════════════════╤═════════════════════╕
│ heartdisease    │   phi(heartdisease) │
╞═════════════════╪═════════════════════╡
│ heartdisease_0  │              0.5593 │
├─────────────────┼─────────────────────┤
│ heartdisease_1  │              0.4407 │
╘═════════════════╧═════════════════════╛
Result:
Thus the Bayesian network for diagnosing heart patients was successfully constructed and executed.
Exp.No : 4 Build Regression models in order to fit data points. Select an appropriate data set for your experiment and draw graphs.
Date:
Aim:
Write a program to build Regression models in order to fit data points. Select an appropriate data set for your experiment and draw graphs.
Regression is a statistical technique used to predict the value of a desired target quantity when that quantity is continuous.
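Concretely, for simple linear regression (one feature, as in the program below) the fitted line minimises the squared errors, with the closed-form least-squares solution:

\[
\hat{y} = a + b x, \qquad
b = \frac{\sum_{i}(x_i-\bar{x})(y_i-\bar{y})}{\sum_{i}(x_i-\bar{x})^2}, \qquad
a = \bar{y} - b\,\bar{x}
\]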
Program:
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.linear_model import LinearRegression

iris = load_iris()
X = iris.data[:, 0].reshape(-1, 1)  # sepal length
y = iris.data[:, 2]  # petal length as the target (assumed; the original line is missing)

model = LinearRegression()
model.fit(X, y)
y_pred = model.predict(X)

# Plot the data points and the fitted regression line
plt.scatter(X, y, color='blue')
plt.plot(X, y_pred, color='red')
plt.xlabel('Sepal length (cm)')
plt.ylabel('Petal length (cm)')

# Show plot
plt.show()
OUTPUT
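[Output: scatter plot of the data points with the fitted regression line]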
Result:
Thus the Regression model was successfully built and executed.
Exp.No : 5 Build decision trees
Date:
Aim:
Write a program to demonstrate the working of the decision tree based ID3 algorithm. Use an
appropriate data set for building the decision tree and apply this knowledge to classify a new
sample.
Program:
# Load libraries
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.model_selection import train_test_split
from sklearn import metrics #Import scikit-learn metrics module for accuracy calculation

df = pd.read_csv('zoo.csv')
feature_cols = ['feathers', 'eggs', 'milk', 'airborne', 'aquatic', 'predator',
                'backbone', 'venomous', 'legs', 'tail']
X = df[feature_cols]
y = df['type']

# Train/test split (split parameters assumed; the original line is missing)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

clf = DecisionTreeClassifier()
clf = clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print("Accuracy:", metrics.accuracy_score(y_test, y_pred))

# Visualise the fitted tree
class_names = [str(c) for c in df['type'].unique().tolist()]
plt.figure(figsize=(20, 10))
plot_tree(clf, feature_names=feature_cols, class_names=class_names, filled=True)
plt.savefig('diabetes.png')
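The aim also asks to classify a new sample; the fitted tree can be queried directly. A minimal sketch (the feature values below are illustrative, describing a bird-like animal):

# Classify one illustrative animal: feathered, egg-laying, airborne, two-legged
sample = pd.DataFrame([[1, 1, 0, 1, 0, 0, 1, 0, 2, 1]], columns=feature_cols)
print("Predicted type:", clf.predict(sample)[0])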
Output:
Accuracy: 0.9354838709677419
Result:
Thus the Decision Tree algorithm was successfully executed.
Exp.No : 6 Build SVM Models
Date:
Aim:
Write a program to demonstrate the working of the SVM models. Use an appropriate data set
for building the SVM model and apply this knowledge to classify a new sample.
Program:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, accuracy_score

dataset = pd.read_csv('Social_Network_Ads.csv')
X = dataset.iloc[:, [2, 3]].values  # Age and Estimated Salary (columns assumed)
y = dataset.iloc[:, 4].values
# Split parameters assumed; the original line is missing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
classifier = SVC(kernel='rbf', random_state=0)  # kernel choice assumed
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(X_set[:, 0].min() - 1, X_set[:, 0].max() + 1, 0.01),
                     np.arange(X_set[:, 1].min() - 1, X_set[:, 1].max() + 1, 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                color=ListedColormap(('red', 'green'))(i), label=j)
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
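A new sample can be classified after scaling it with the same StandardScaler; a minimal sketch (the age and salary values are illustrative):

# Predict the class for a 30-year-old with an estimated salary of 87,000 (illustrative)
new_sample = sc.transform([[30, 87000]])
print(classifier.predict(new_sample))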
OUTPUT
[[64 4]
[ 3 29]]
Result:
Thus the SVM model was successfully executed.
Exp.No : 7 Implement Ensemble techniques
Date:
Aim:
Write a program to demonstrate the working of Ensemble techniques. Use an appropriate data set to build the ensemble and apply this knowledge to classify a new sample.
Program:
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, plot_tree

df = pd.read_csv('zoo.csv')
feature_cols = ['feathers', 'eggs', 'milk', 'airborne', 'aquatic', 'predator',
                'backbone', 'venomous', 'legs', 'tail']
X = df[feature_cols]
y = df['type']
# Train/test split (split parameters assumed; the original line is missing)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Train three different base models on the same data
lr = LogisticRegression()
lr.fit(X_train, y_train)
rf = RandomForestClassifier(random_state=42)
rf.fit(X_train, y_train)
dt = DecisionTreeClassifier(random_state=42)
dt.fit(X_train, y_train)

# Classify a new sample with each model and combine the votes
testdata = [1, 0, 1, 1, 1, 1, 0, 1, 4, 0]
X_new = pd.DataFrame([testdata], columns=feature_cols)
lr_pred = lr.predict(X_new)
rf_pred = rf.predict(X_new)
dt_pred = dt.predict(X_new)
pred_df = pd.DataFrame({'LogisticRegression': lr_pred,
                        'RandomForest': rf_pred,
                        'DecisionTree': dt_pred})  # table construction assumed
final_pred = pred_df.mode(axis=1)[0].values  # majority vote across the three models
print(pred_df)
print()
print("DECISION TREE")
class_names = [str(c) for c in df['type'].unique().tolist()]
plt.figure(figsize=(20, 10))
plot_tree(dt, feature_names=feature_cols, class_names=class_names, filled=True)
plt.savefig('Zoo.png')
Output:
Decision Tree
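[Output: the three models' predictions for the test sample, followed by the decision tree plot saved as Zoo.png]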
Result:
Thus the Ensemble Techniques Algorithm was successfully executed.
Exp.No : 8 Implement clustering algorithms
Date:
Aim:
To write a Python program to implement a clustering algorithm.
Program:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

# Generate 100 random two-dimensional points
data = np.random.rand(100, 2)
k = 3
kmeans = KMeans(n_clusters=k)
kmeans.fit(data)
labels = kmeans.labels_
centroids = kmeans.cluster_centers_

# Plot each cluster and mark the centroids
for i in range(k):
    cluster_points = data[labels == i]
    plt.scatter(cluster_points[:, 0], cluster_points[:, 1], label='Cluster %d' % (i + 1))
plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', s=200, c='black', label='Centroids')
plt.legend()
plt.show()
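Once fitted, the model can also assign a new point to its nearest cluster; a minimal sketch (the point is illustrative):

# Assign an illustrative new point to a cluster
new_point = np.array([[0.5, 0.5]])
print("Cluster:", kmeans.predict(new_point)[0])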
Output:
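[Output: scatter plot of the 100 random points coloured by cluster, with the three centroids marked by black crosses]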
Result:
Thus the Clustering algorithm was successfully executed.
Exp.No : 9 Build a simple NN model
Date:
Aim:
To write a Python program to build a simple NN model.
Program:
import numpy as np

class NeuralNetwork():
    def __init__(self):
        # Seed the random number generator so every run starts with the same weights
        np.random.seed(1)
        # 3 input connections, 1 output neuron; weights in the range (-1, 1)
        self.synaptic_weights = 2 * np.random.random((3, 1)) - 1

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        return x * (1 - x)

    # Training the model to make accurate predictions while adjusting weights continually
    def train(self, training_inputs, training_outputs, training_iterations):
        for iteration in range(training_iterations):
            output = self.think(training_inputs)
            error = training_outputs - output
            adjustments = np.dot(training_inputs.T, error * self.sigmoid_derivative(output))
            self.synaptic_weights += adjustments

    def think(self, inputs):
        inputs = inputs.astype(float)
        output = self.sigmoid(np.dot(inputs, self.synaptic_weights))
        return output

if __name__ == "__main__":
    neural_network = NeuralNetwork()
    print("Beginning Randomly Generated Weights: ")
    print(neural_network.synaptic_weights)

    training_inputs = np.array([[0,0,1],
                                [1,1,1],
                                [1,0,1],
                                [0,1,1]])
    training_outputs = np.array([[0,1,1,0]]).T

    # Train for 15,000 iterations (iteration count assumed; the original line is missing)
    neural_network.train(training_inputs, training_outputs, 15000)
    print("Ending Weights After Training: ")
    print(neural_network.synaptic_weights)
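Once trained, the network can be asked to "think" about an unseen pattern; a minimal usage sketch to append to the driver block (the input pattern is illustrative):

    # Query the trained network with a new situation
    print("Considering new situation [1, 0, 0] -> ?")
    print(neural_network.think(np.array([1, 0, 0])))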
Output:
Beginning Randomly Generated Weights:
[[-0.16595599]
 [ 0.44064899]
 [-0.99977125]]
Ending Weights After Training:
[[10.08740896]
 [-0.20695366]
 [-4.83757835]]
Result:
Thus the NN model was successfully executed.
Exp.No : 10 Build deep learning NN models
Date:
Aim:
To write a Python program to build deep learning NN models.
Program:
import keras
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

iris = load_iris()
X = iris.data
y = iris.target

# Split, then standardise with training statistics (split parameters assumed)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
mean = X_train.mean(axis=0)
std = X_train.std(axis=0)
X_train = (X_train - mean) / std
X_test = (X_test - mean) / std

# One-hot encode the class labels
num_classes = len(np.unique(y))
y_train = to_categorical(y_train, num_classes)
y_test = to_categorical(y_test, num_classes)

# Small feed-forward network (hidden layer size assumed)
model = Sequential()
model.add(Dense(16, activation='relu', input_shape=(X.shape[1],)))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
history = model.fit(X_train, y_train,
                    epochs=10,
                    batch_size=16,
                    validation_data=(X_test, y_test))
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.show()
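After training, the model can also be scored on the held-out test set with Keras' standard evaluate method; a minimal sketch:

# Evaluate the trained model on the test split
loss, acc = model.evaluate(X_test, y_test, verbose=0)
print("Test loss: %.4f, test accuracy: %.4f" % (loss, acc))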
Output:
Epoch 1/10
8/8 [==============================] - 1s 35ms/step - loss: 1.4290 - accuracy: 0.1417 - val_loss: 1.3888 - val_accuracy: 0.0667
Epoch 2/10
8/8 [==============================] - 0s 7ms/step - loss: 1.3532 - accuracy: 0.1250 - val_loss: 1.3127 - val_accuracy: 0.0667
Epoch 3/10
8/8 [==============================] - 0s 6ms/step - loss: 1.2788 - accuracy: 0.1167 - val_loss: 1.2409 - val_accuracy: 0.1000
Epoch 4/10
8/8 [==============================] - 0s 6ms/step - loss: 1.2147 - accuracy: 0.1250 - val_loss: 1.1718 - val_accuracy: 0.1333
Epoch 5/10
8/8 [==============================] - 0s 6ms/step - loss: 1.1498 - accuracy: 0.1250 - val_loss: 1.1083 - val_accuracy: 0.1333
Epoch 6/10
8/8 [==============================] - 0s 7ms/step - loss: 1.0934 - accuracy: 0.2083 - val_loss: 1.0478 - val_accuracy: 0.2667
Epoch 7/10
8/8 [==============================] - 0s 6ms/step - loss: 1.0356 - accuracy: 0.3417 - val_loss: 0.9933 - val_accuracy: 0.3667
Epoch 8/10
8/8 [==============================] - 0s 6ms/step - loss: 0.9866 - accuracy: 0.4250 - val_loss: 0.9436 - val_accuracy: 0.5000
Epoch 9/10
8/8 [==============================] - 0s 6ms/step - loss: 0.9400 - accuracy: 0.4833 - val_loss: 0.8979 - val_accuracy: 0.5333
Epoch 10/10
8/8 [==============================] - 0s 7ms/step - loss: 0.8984 - accuracy: 0.5417 - val_loss: 0.8563 - val_accuracy: 0.6000
Result:
Thus the Deep learning NN model was successfully executed.