CS3491 AI and ML Lab Manual

The document outlines a series of experiments conducted at Mangayarkarasi College of Engineering, focusing on various algorithms and models in computer science, including search algorithms, Bayesian networks, regression models, decision trees, SVM models, and ensemble techniques. Each experiment includes aims, procedures, and sample code implementations, demonstrating the successful execution of each algorithm. The document serves as a practical guide for students to understand and apply these concepts in their academic work.


MANGAYARKARASI COLLEGE OF ENGINEERING
Paravai, Madurai - 625 402
(Approved by AICTE, New Delhi & Affiliated to Anna University, Chennai)

BONAFIDE CERTIFICATE
Register No

This is to certify that this is the bonafide record of work done

by Selvan / Selvi ………………………...................... Roll No ………………….

of ………… year …….... Semester in ……………………………………………………

Engineering / Technology for ……………………………………………………........

Lab during the academic year …………………………………..

Staff in charge Head of the Department

Submitted for University Practical Examination held on………………………

Internal Examiner External Examiner


LIST OF EXPERIMENTS

S.No  Date  Experiment                                                      Marks  Sign
1           Implementation of Uninformed search algorithms (BFS, DFS)
2           Implementation of Informed search algorithms (Hill Climbing)
3           Implement naïve Bayes models and Bayesian Networks
4           Build Regression models
5           Build decision trees
6           Build SVM models
7           Implement ensembling techniques
8           Implement clustering algorithms
9           Build simple NN models
10          Build deep learning NN models
Exp.No : 1a    Implementation of Uninformed search algorithms (BFS)
Date:

AIM:
Write a Program to Implement Breadth First Search.

Procedure:
Create a graph as a dictionary of key-value pairs.
Create an empty list to record visited nodes.
Initialize an empty list as the queue.
Define a function bfs that takes visited, graph, and node as parameters.
Append the start node to both visited and the queue.
Loop while the queue is not empty, dequeuing one node at a time and enqueuing its unvisited neighbours, so that nodes are visited in breadth-first order.
Print each node as it is dequeued.

PROGRAM
graph = {
    '5' : ['3','7'],
    '3' : ['2', '4'],
    '7' : ['8'],
    '2' : [],
    '4' : ['8'],
    '8' : []
}

visited = []  # List for visited nodes
queue = []    # Initialize a queue

def bfs(visited, graph, node):  # function for BFS
    visited.append(node)
    queue.append(node)
    while queue:  # loop to visit each node
        m = queue.pop(0)
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

# Driver Code
print("Following is the Breadth-First Search")
bfs(visited, graph, '5')  # function calling

OUTPUT

Following is the Breadth-First Search 5 3 7 2 4 8

Result:

Thus the Breadth First Search was successfully executed.

Exp.No : 1b Implementation of Uninformed search algorithms (DFS)
Date:

AIM:
Write a Program to Implement Depth First Search.

Procedure:
Create a graph as a dictionary of key-value pairs.
Create an empty set to keep track of visited nodes.
Define a function dfs that takes visited, graph, and node as parameters.
If the node has not been visited, print it and add it to the visited set.
Recursively call dfs on each of the node's neighbours, so that nodes are visited in depth-first order.

PROGRAM
graph = {
    '5' : ['3','7'],
    '3' : ['2', '4'],
    '7' : ['8'],
    '2' : [],
    '4' : ['8'],
    '8' : []
}

visited = set()  # Set to keep track of visited nodes of graph

def dfs(visited, graph, node):  # function for dfs
    if node not in visited:
        print(node)
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

# Driver Code
print("Following is the Depth-First Search")
dfs(visited, graph, '5')

OUTPUT

Following is the Depth-First Search 5 3 2 4 8 7

Result:

Thus the Depth First Search was successfully executed.

Exp.No : 2    Implementation of Informed search algorithms (Hill Climbing Algorithm)
Date:

Aim:
Write a program to implement Hill Climbing Algorithm

Program:
import random

def randomSolution(tsp):
    # build a random tour by picking cities without replacement
    cities = list(range(len(tsp)))
    solution = []
    for i in range(len(tsp)):
        randomCity = cities[random.randint(0, len(cities) - 1)]
        solution.append(randomCity)
        cities.remove(randomCity)
    return solution

def routeLength(tsp, solution):
    # total length of the cyclic tour (solution[i-1] -> solution[i] wraps around)
    routeLength = 0
    for i in range(len(solution)):
        routeLength += tsp[solution[i - 1]][solution[i]]
    return routeLength

def getNeighbours(solution):
    # neighbours are all tours obtained by swapping two cities
    neighbours = []
    for i in range(len(solution)):
        for j in range(i + 1, len(solution)):
            neighbour = solution.copy()
            neighbour[i] = solution[j]
            neighbour[j] = solution[i]
            neighbours.append(neighbour)
    return neighbours

def getBestNeighbour(tsp, neighbours):
    bestRouteLength = routeLength(tsp, neighbours[0])
    bestNeighbour = neighbours[0]
    for neighbour in neighbours:
        currentRouteLength = routeLength(tsp, neighbour)
        if currentRouteLength < bestRouteLength:
            bestRouteLength = currentRouteLength
            bestNeighbour = neighbour
    return bestNeighbour, bestRouteLength

def hillClimbing(tsp):
    currentSolution = randomSolution(tsp)
    currentRouteLength = routeLength(tsp, currentSolution)
    neighbours = getNeighbours(currentSolution)
    bestNeighbour, bestNeighbourRouteLength = getBestNeighbour(tsp, neighbours)
    # keep moving to the best neighbour while it improves the route
    while bestNeighbourRouteLength < currentRouteLength:
        currentSolution = bestNeighbour
        currentRouteLength = bestNeighbourRouteLength
        neighbours = getNeighbours(currentSolution)
        bestNeighbour, bestNeighbourRouteLength = getBestNeighbour(tsp, neighbours)
    return currentSolution, currentRouteLength

# Distance matrix for a 4-city TSP instance
tsp = [
    [0, 400, 500, 300],
    [400, 0, 300, 500],
    [500, 300, 0, 400],
    [300, 500, 400, 0]
]

print(hillClimbing(tsp))

OUTPUT

([1, 0, 3, 2], 1400)
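Hill climbing can stop at a local optimum, so a single run may not find the shortest tour. A minimal sketch of a random-restart extension, reusing the hillClimbing function above (the restart count of 20 is an arbitrary choice for illustration):

# Random restarts: keep the best tour over several independent runs (illustrative sketch)
bestSolution, bestLength = None, float('inf')
for _ in range(20):  # number of restarts chosen arbitrarily
    solution, length = hillClimbing(tsp)
    if length < bestLength:
        bestSolution, bestLength = solution, length
print(bestSolution, bestLength)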

Result:

Thus the Hill Climbing Algorithm was successfully executed.

Exp.No : 3 Implement naïve Bayes models and Bayesian Networks
Date:
Aim:
Write a program to construct a Bayesian network considering medical data. Use this model to
demonstrate the diagnosis of heart patients using standard Heart Disease Data Set. You can
use Java/Python ML library classes/API.

Attribute Information:
-- Only 14 used
-- 1. #3 (age)
-- 2. #4 (sex)
-- 3. #9 (cp)
-- 4. #10 (trestbps)
-- 5. #12 (chol)
-- 6. #16 (fbs)
-- 7. #19 (restecg)
-- 8. #32 (thalach)
-- 9. #38 (exang)
-- 10. #40 (oldpeak)
-- 11. #41 (slope)
-- 12. #44 (ca)
-- 13. #51 (thal)
-- 14. #58 (num)

Program:
import numpy as np
import pandas as pd
from pgmpy.inference import VariableElimination
from pgmpy.models import BayesianModel
from pgmpy.estimators import MaximumLikelihoodEstimator

names = ['age', 'sex', 'cp', 'trestbps', 'chol', 'fbs', 'restecg', 'thalach', 'exang',
         'oldpeak', 'slope', 'ca', 'thal', 'heartdisease']

heartDisease = pd.read_csv('heart.csv', names=names)
heartDisease = heartDisease.replace('?', np.nan)

# Structure of the Bayesian network over the selected attributes
model = BayesianModel([('age','trestbps'), ('age','fbs'), ('sex','trestbps'),
                       ('exang','trestbps'), ('trestbps','heartdisease'),
                       ('fbs','heartdisease'), ('heartdisease','restecg'),
                       ('heartdisease','thalach'), ('heartdisease','chol')])

# Learn the conditional probability tables from the data
model.fit(heartDisease, estimator=MaximumLikelihoodEstimator)

HeartDisease_infer = VariableElimination(model)

q = HeartDisease_infer.query(variables=['heartdisease'], evidence={'age': '67', 'sex': '1'})
print(q)
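The same inference object can answer other diagnostic queries by changing the evidence. A small illustrative follow-up (the evidence value is hypothetical and assumes fbs is recorded as strings, like age and sex above):

# Illustrative follow-up query: heart disease given fasting blood sugar evidence
q2 = HeartDisease_infer.query(variables=['heartdisease'], evidence={'fbs': '1'})
print(q2)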

OUTPUT

╒═════════════════╤═════════════════════╕
│ heartdisease    │   phi(heartdisease) │
╞═════════════════╪═════════════════════╡
│ heartdisease_0  │              0.5593 │
├─────────────────┼─────────────────────┤
│ heartdisease_1  │              0.4407 │
╘═════════════════╧═════════════════════╛

Result:

Thus the Bayesian Network was successfully executed.

Exp.No : 4    Build Regression models to fit data points (select an appropriate data set and draw graphs)
Date:

Aim:
Write a program to build the Regression models in order to fit data points. Select appropriate
data set for your experiment and draw graphs.

Regression is a technique from statistics that is used to predict values of a desired target quantity when the target quantity is continuous.

In regression, we seek to identify (or estimate) a continuous variable y associated with a given input vector x.

y is called the dependent variable.

x is called the independent variable.
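For simple linear regression the fitted model is y = w*x + b, where least squares chooses the slope w and intercept b that minimize the sum of squared errors. A minimal hand-rolled sketch with numpy, using small made-up data points (not the Iris data used in the program below):

import numpy as np

# Made-up data points for illustration
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Closed-form least-squares estimates of slope and intercept
w = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b = y.mean() - w * x.mean()
print(w, b)  # approximately 1.96 and 0.14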

Program:
import matplotlib.pyplot as plt

import pandas as pd

from sklearn.linear_model import LinearRegression

from sklearn.datasets import load_iris

from sklearn.metrics import r2_score

# Load Iris dataset from scikit-learn library

iris = load_iris()

X = iris.data[:, 0].reshape(-1, 1) # sepal length

y = iris.data[:, 2].reshape(-1, 1) # petal length

# Create linear regression model and fit it to the data

model = LinearRegression()

model.fit(X, y)

# Generate predictions from the model

y_pred = model.predict(X)

# Calculate the coefficient of determination (R-squared) of the model

r_squared = r2_score(y, y_pred)

print("Coefficient of determination (R-squared): {:.2f}".format(r_squared))

# Create scatter plot with regression line

plt.scatter(X, y, color='blue')

plt.plot(X, y_pred, color='red', linewidth=2)

# Set plot title and axis labels

plt.title("Linear Regression of Sepal Length vs. Petal Length")

plt.xlabel("Sepal Length (cm)")

plt.ylabel("Petal Length (cm)")

# Show plot

plt.show()

OUTPUT

Coefficient of determination (R-squared): 0.76

Result:

Thus the Regression Algorithm was successfully executed.

Exp.No : 5 Build decision trees
Date:

Aim:
Write a program to demonstrate the working of the decision tree based ID3 algorithm. Use an
appropriate data set for building the decision tree and apply this knowledge to classify a new
sample.

Program:
# Load libraries
import pandas as pd
from sklearn.tree import DecisionTreeClassifier  # Import Decision Tree Classifier
from sklearn.model_selection import train_test_split  # Import train_test_split function
from sklearn import metrics  # Import scikit-learn metrics module for accuracy calculation

df = pd.read_csv('zoo.csv')

feature_cols = ['feathers','eggs','milk','airborne','aquatic','predator','backbone','venomous','legs','tail']
X = df[feature_cols]
y = df['type']

# Create Decision Tree classifier object
clf = DecisionTreeClassifier()

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Train Decision Tree Classifier
clf = clf.fit(X_train, y_train)

# Predict the response for the test dataset
y_pred = clf.predict(X_test)
print("Accuracy:", metrics.accuracy_score(y_test, y_pred))

# Plot the learned tree and save it to an image file
from sklearn.tree import plot_tree
import matplotlib.pyplot as plt

class_names = df['type'].unique().tolist()
plt.figure(figsize=(20,10))
plot_tree(clf, filled=True, rounded=True, feature_names=feature_cols, class_names=class_names)
plt.savefig('zoo.png')
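The aim also asks that the trained tree be used to classify a new sample. A minimal sketch, assuming the same ten features in the order of feature_cols (the attribute values below are made up for illustration):

# Hypothetical new animal: lays eggs, aquatic, predator, has a backbone and tail
new_sample = pd.DataFrame([[0,1,0,0,1,1,1,0,0,1]], columns=feature_cols)
print("Predicted type:", clf.predict(new_sample)[0])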

Output:

Accuracy: 0.9354838709677419

Result:

Thus the Decision Tree Algorithm was successfully executed.

Exp.No : 6 Build SVM Models
Date:

Aim:
Write a program to demonstrate the working of the SVM models. Use an appropriate data set
for building the SVM model and apply this knowledge to classify a new sample.

Program:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

dataset = pd.read_csv('Social_Network_Ads.csv')
X = dataset.iloc[:, [2, 3]].values  # Age and Estimated Salary
y = dataset.iloc[:, 4].values

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

from sklearn.svm import SVC
classifier = SVC(kernel='rbf', random_state=0)
classifier.fit(X_train, y_train)

y_pred = classifier.predict(X_test)

from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
print(accuracy_score(y_test, y_pred))

# Visualise the decision boundary on the test set
from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('red', 'green'))(i), label=j)
plt.title('SVM (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
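To classify a new sample, it must first be scaled with the same StandardScaler that was fitted on the training data. A minimal sketch (the age and salary values below are hypothetical):

# Hypothetical new individual: Age = 30, Estimated Salary = 87000
new_sample = sc.transform([[30, 87000]])
print("Predicted class:", classifier.predict(new_sample)[0])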

OUTPUT
[[64 4]
[ 3 29]]

Result:

Thus the SVM model was successfully executed.

Exp.No : 7 Implement Ensemble techniques
Date:

Aim :
Write a program to demonstrate the working of ensemble techniques. Use an appropriate
data set to build the ensemble and apply this knowledge to classify a new sample.

Program:
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv('zoo.csv')

feature_cols = ['feathers','eggs','milk','airborne','aquatic','predator','backbone','venomous','legs','tail']
X = df[feature_cols]
y = df['type']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=52)

lr = LogisticRegression()
lr.fit(X_train, y_train)

rf = RandomForestClassifier(random_state=42)
rf.fit(X_train, y_train)

dt = DecisionTreeClassifier(random_state=42)
dt.fit(X_train, y_train)

# Features are [feathers, eggs, milk, airborne, aquatic, predator, backbone, venomous, legs, tail]
testdata = [1,0,1,1,1,1,0,1,4,0]
X_new = pd.DataFrame([testdata], columns=X.columns)

lr_pred = lr.predict(X_new)
rf_pred = rf.predict(X_new)
dt_pred = dt.predict(X_new)

pred_df = pd.DataFrame({'Logistic regression': lr_pred, 'Random forest': rf_pred, 'Decision Tree': dt_pred})

# Max-vote ensembling: the most frequent prediction across the three models wins
final_pred = pred_df.mode(axis=1)[0].values

from sklearn.tree import plot_tree
import matplotlib.pyplot as plt

print("Prediction from 3 models \n")
print(pred_df)
print()
print(" \n THE FINAL PREDICTION by Max Vote is : ", final_pred[0])

print("DECISION TREE")
class_names = df['type'].unique().tolist()
plt.figure(figsize=(20,10))
plot_tree(dt, filled=True, rounded=True, feature_names=feature_cols, class_names=class_names)
plt.savefig('Zoo.png')

Output:

Prediction from 3 models

  Logistic regression Random forest Decision Tree
0              mammal       reptile       reptile

THE FINAL PREDICTION by Max Vote is: reptile

[Decision tree plot saved as Zoo.png]

Result :
Thus the Ensemble Techniques Algorithm was successfully executed.

Exp.No : 8 Implement clustering algorithms
Date:

Aim:
To write a Python program to implement a clustering algorithm (K-Means).

Program:

import numpy as np
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

# Generate some random data
data = np.random.rand(100, 2)

# Specify the number of clusters
k = 3

# Initialize the KMeans model
kmeans = KMeans(n_clusters=k)

# Fit the model to the data
kmeans.fit(data)

# Get the cluster labels for each point in the data
labels = kmeans.labels_

# Get the centroids of the clusters
centroids = kmeans.cluster_centers_

# Plot the data with different colors for each cluster
colors = ['b', 'g', 'r']
for i in range(k):
    plt.scatter(data[labels == i, 0], data[labels == i, 1], c=colors[i],
                label='Cluster {}'.format(i+1))

# Plot the centroids as black circles
plt.scatter(centroids[:, 0], centroids[:, 1], marker='o', c='k', s=100, linewidths=2,
            label='Centroids')

# Add legend and title
plt.legend()
plt.title('KMeans Clustering with k={}'.format(k))

# Show the plot
plt.show()
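The number of clusters was fixed at k = 3 above. A common heuristic for choosing k is the elbow method, which plots the within-cluster sum of squares (inertia) against k and looks for the "elbow" where improvement flattens out. A minimal sketch, assuming the same data array (the range of k values is an arbitrary choice for illustration):

# Elbow method: inertia vs. number of clusters (illustrative sketch)
inertias = []
for k_try in range(1, 10):
    km = KMeans(n_clusters=k_try).fit(data)
    inertias.append(km.inertia_)
plt.plot(range(1, 10), inertias, marker='o')
plt.xlabel('Number of clusters k')
plt.ylabel('Inertia')
plt.title('Elbow Method')
plt.show()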

Output:

[Scatter plot of the clustered points in three colours, with centroids marked as black circles]

Result:
Thus the Clustering algorithm was successfully executed.

Exp.No : 9    Build simple NN models
Date:

Aim:
To write a Python program to build a simple NN model.

Program:

import numpy as np

class NeuralNetwork():

    def __init__(self):
        # seeding for random number generation
        np.random.seed(1)
        # converting weights to a 3 by 1 matrix with values from -1 to 1 and mean of 0
        self.synaptic_weights = 2 * np.random.random((3, 1)) - 1

    def sigmoid(self, x):
        # applying the sigmoid function
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        # computing derivative to the Sigmoid function
        return x * (1 - x)

    def train(self, training_inputs, training_outputs, training_iterations):
        # training the model to make accurate predictions while adjusting weights continually
        for iteration in range(training_iterations):
            # siphon the training data via the neuron
            output = self.think(training_inputs)
            # computing error rate for back-propagation
            error = training_outputs - output
            # performing weight adjustments
            adjustments = np.dot(training_inputs.T, error * self.sigmoid_derivative(output))
            self.synaptic_weights += adjustments

    def think(self, inputs):
        # passing the inputs via the neuron to get output
        # converting values to floats
        inputs = inputs.astype(float)
        output = self.sigmoid(np.dot(inputs, self.synaptic_weights))
        return output

if __name__ == "__main__":

    # initializing the neuron class
    neural_network = NeuralNetwork()

    print("Beginning Randomly Generated Weights: ")
    print(neural_network.synaptic_weights)

    # training data consisting of 4 examples--3 input values and 1 output
    training_inputs = np.array([[0,0,1],
                                [1,1,1],
                                [1,0,1],
                                [0,1,1]])
    training_outputs = np.array([[0,1,1,0]]).T

    # training taking place
    neural_network.train(training_inputs, training_outputs, 15000)

    print("Ending Weights After Training: ")
    print(neural_network.synaptic_weights)

    user_input_one = str(input("User Input One: "))
    user_input_two = str(input("User Input Two: "))
    user_input_three = str(input("User Input Three: "))

    print("Considering New Situation: ", user_input_one, user_input_two, user_input_three)
    print("New Output data: ")
    print(neural_network.think(np.array([user_input_one, user_input_two, user_input_three])))
    print("Wow, we did it!")

Output:

Beginning Randomly Generated Weights:

[[-0.16595599]

[ 0.44064899]

[-0.99977125]]

Ending Weights After Training:

[[10.08740896]

[-0.20695366]

[-4.83757835]]

Result:
Thus the NN model was successfully executed.

Exp.No : 10 Build deep learning NN models
Date:

Aim:
To write a Python program to build a deep learning NN model.

Program:

import keras
from keras.models import Sequential
from keras.layers import Dense
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Load the iris dataset
from sklearn.datasets import load_iris
iris = load_iris()

# Separate the features and labels
X = iris.data
y = iris.target

# Split the data into training and testing sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Normalize the data
mean = X_train.mean(axis=0)
std = X_train.std(axis=0)
X_train = (X_train - mean) / std
X_test = (X_test - mean) / std

# Convert the labels to one-hot encoded vectors
num_classes = len(np.unique(y))
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

# Define the neural network model
model = Sequential()
model.add(Dense(16, input_shape=(4,), activation='relu'))
model.add(Dense(num_classes, activation='softmax'))

# Compile the model
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

# Train the model
history = model.fit(X_train, y_train,
                    epochs=10,
                    batch_size=16,
                    validation_data=(X_test, y_test))

# Plot the training history
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
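After training, the model can be evaluated on the held-out test set and used to classify a new flower. A minimal sketch (the measurement values below are hypothetical, and the new sample must be normalized with the same mean and std computed from the training data):

# Evaluate on the test set
test_loss, test_acc = model.evaluate(X_test, y_test, verbose=0)
print("Test accuracy:", test_acc)

# Classify a hypothetical new flower (sepal/petal measurements in cm)
new_flower = (np.array([[5.1, 3.5, 1.4, 0.2]]) - mean) / std
print("Predicted class:", np.argmax(model.predict(new_flower), axis=1)[0])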

Output:

Epoch 1/10
8/8 [==============================] - 1s 35ms/step - loss: 1.4290 -
accuracy: 0.1417 - val_loss: 1.3888 - val_accuracy: 0.0667
Epoch 2/10
8/8 [==============================] - 0s 7ms/step - loss: 1.3532 -
accuracy: 0.1250 - val_loss: 1.3127 - val_accuracy: 0.0667
Epoch 3/10
8/8 [==============================] - 0s 6ms/step - loss: 1.2788 -
accuracy: 0.1167 - val_loss: 1.2409 - val_accuracy: 0.1000
Epoch 4/10
8/8 [==============================] - 0s 6ms/step - loss: 1.2147 -
accuracy: 0.1250 - val_loss: 1.1718 - val_accuracy: 0.1333
Epoch 5/10
8/8 [==============================] - 0s 6ms/step - loss: 1.1498 -
accuracy: 0.1250 - val_loss: 1.1083 - val_accuracy: 0.1333
Epoch 6/10
8/8 [==============================] - 0s 7ms/step - loss: 1.0934 -
accuracy: 0.2083 - val_loss: 1.0478 - val_accuracy: 0.2667
Epoch 7/10
8/8 [==============================] - 0s 6ms/step - loss: 1.0356 -
accuracy: 0.3417 - val_loss: 0.9933 - val_accuracy: 0.3667
Epoch 8/10
8/8 [==============================] - 0s 6ms/step - loss: 0.9866 -
accuracy: 0.4250 - val_loss: 0.9436 - val_accuracy: 0.5000
Epoch 9/10
8/8 [==============================] - 0s 6ms/step - loss: 0.9400 -
accuracy: 0.4833 - val_loss: 0.8979 - val_accuracy: 0.5333
Epoch 10/10
8/8 [==============================] - 0s 7ms/step - loss: 0.8984 -
accuracy: 0.5417 - val_loss: 0.8563 - val_accuracy: 0.6000

Result:
Thus the Deep learning NN model was successfully executed.
