AIML Lab Record
2023-2024
FOURTH SEMESTER
S.NO   DATE   NAME OF EXPERIMENT   PAGE NO   REMARKS
EX NO:1
IMPLEMENTATION OF UNINFORMED SEARCH
DATE: ALGORITHMS (BFS, DFS)
AIM:
To implement uninformed search algorithms (BFS and DFS) using Python.
ALGORITHM:
BFS ALGORITHM:
1. Put the starting node in a queue and mark it as visited.
2. Remove the front node of the queue and print it.
3. Append every unvisited neighbour of that node to the queue and mark it as visited.
4. Repeat steps 2-3 until the queue is empty.
DFS ALGORITHM:
1. Start at the given node, mark it as visited, and print it.
2. Recursively visit every unvisited neighbour of the current node.
3. Backtrack when a node has no unvisited neighbours, until every reachable node is visited.
PROGRAM:
BFS PROGRAM:
graph = {
'5' : ['3','7'],
'3' : ['2', '4'],
'7' : ['8'],
'2' : [],
'4' : ['8'],
'8' : []
}
visited = []   # List to keep track of visited nodes
queue = []     # Initialize a queue

def bfs(visited, graph, node):   # function for BFS
    visited.append(node)
    queue.append(node)
    while queue:                 # visit nodes level by level
        m = queue.pop(0)
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)
# Driver Code
print("Following is the Breadth-First Search")
bfs(visited, graph, '5') #function calling
DFS PROGRAM:
graph = {
'5' : ['3','7'],
'3' : ['2', '4'],
'7' : ['8'],
'2' : [],
'4' : ['8'],
'8' : []
}
visited = set()  # Set to keep track of visited nodes of graph.

def dfs(visited, graph, node):   # function for DFS
    if node not in visited:
        print(node)
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

# Driver Code
print("Following is the Depth-First Search")
dfs(visited, graph, '5')  # function calling
OUTPUT:
BFS OUTPUT
Following is the Breadth-First Search
5 3 7 2 4 8
DFS OUTPUT
Following is the Depth-First Search
5
3
2
4
8
7
RESULT:
Thus the program for BFS and DFS has been implemented successfully.
EX NO:2
IMPLEMENTATION OF INFORMED SEARCH ALGORITHMS
DATE: (A*, MEMORY-BOUNDED A*)
AIM:
To implement informed search algorithms (A* and memory-bounded A*) using Python.
A* ALGORITHM:
the square. If you are keeping your open list sorted by F score, you
may need to re-sort the list to account for the change.
D) Stop when you:
● Add the target square to the closed list, in which case the path has
been found, or
● Fail to find the target square, and the open list is empty. In this
case, there is no path.
3. Save the path. Working backwards from the target square, go from each
square to its parent square until you reach the starting square. That is your path.
PROGRAM:
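A minimal A* sketch using Python's heapq; the graph, edge costs, and heuristic values below are illustrative assumptions. A memory-bounded variant such as SMA* would additionally cap the open list's size, dropping the least promising node when the limit is reached.

from heapq import heappush, heappop

def a_star(graph, h, start, goal):
    """graph: node -> list of (neighbour, edge_cost); h: heuristic estimates to goal."""
    open_list = [(h[start], start)]   # priority queue ordered by f = g + h
    g = {start: 0}                    # best known cost from start to each node
    parent = {start: None}            # parent pointers for path reconstruction
    closed = set()                    # nodes already fully processed
    while open_list:
        _, current = heappop(open_list)
        if current == goal:           # target reached: walk back to the start
            path = []
            while current is not None:
                path.append(current)
                current = parent[current]
            return path[::-1]
        if current in closed:
            continue
        closed.add(current)
        for neighbour, cost in graph[current]:
            new_g = g[current] + cost
            if neighbour not in g or new_g < g[neighbour]:
                g[neighbour] = new_g  # found a cheaper route: update G and re-queue
                parent[neighbour] = current
                heappush(open_list, (new_g + h[neighbour], neighbour))
    return None                       # open list empty: no path exists

# Illustrative graph, edge costs, and admissible heuristic values
graph = {'A': [('B', 1), ('C', 3)], 'B': [('D', 1)], 'C': [('D', 1)], 'D': []}
h = {'A': 2, 'B': 1, 'C': 1, 'D': 0}
print(a_star(graph, h, 'A', 'D'))  # expected output: ['A', 'B', 'D']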
OUTPUT:
RESULT:
Thus the informed search algorithms (A* and memory-bounded A*) were implemented successfully.
EX NO:3
IMPLEMENT NAIVE BAYES MODELS
DATE:
AIM:
To implement the Naive Bayes model using Python.
ALGORITHM:
Naive Bayes is one of the simplest and most powerful classification
algorithms; it is based on Bayes' theorem with an assumption of
independence among the predictors.
The Naive Bayes classifier assumes that the presence of a feature in a
class is unrelated to the presence of any other feature.
Naive Bayes is a classification algorithm for binary and multi-class
classification problems.
BAYES THEOREM:
Based on prior knowledge of conditions that may be related to an event,
Bayes' theorem describes the probability of that event.
• Conditional probability can be found this way.
• Assume we have a hypothesis (H) and evidence (E). According to Bayes'
theorem, the relationship between the probability of the hypothesis
before getting the evidence, P(H), and the probability of the
hypothesis after getting the evidence, P(H|E), is:
P(H|E) = P(E|H) * P(H) / P(E)
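As a quick worked illustration (all numbers below are made up for the example), let H be "an email is spam" and E be "the email contains a particular word":

# Worked Bayes' theorem example with illustrative numbers
p_h = 0.01         # prior P(H): 1% of emails are spam
p_e_given_h = 0.9  # likelihood P(E|H): the word appears in 90% of spam
p_e = 0.05         # evidence P(E): the word appears in 5% of all emails

p_h_given_e = p_e_given_h * p_h / p_e  # posterior P(H|E)
print(p_h_given_e)  # ~0.18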
PROGRAM:
# Importing libraries
import math
import random
import csv
# the categorical class names are changed to numeric data
# eg: yes and no encoded to 1 and 0
def encode_class(mydata):
    classes = []
    for i in range(len(mydata)):
        if mydata[i][-1] not in classes:
            classes.append(mydata[i][-1])
    for i in range(len(classes)):
        for j in range(len(mydata)):
            if mydata[j][-1] == classes[i]:
                mydata[j][-1] = i
    return mydata
# Splitting the data into train and test sets (ratio = fraction used for training)
def splitting(mydata, ratio):
    train_num = int(len(mydata) * ratio)
    train = []
    test = list(mydata)          # initially the test set holds every row
    while len(train) < train_num:
        index = random.randrange(len(test))
        # from the test set, pop data rows and put them in train
        train.append(test.pop(index))
    return train, test
# Calculating Mean
def mean(numbers):
    return sum(numbers) / float(len(numbers))

# Calculating Standard Deviation
def std_dev(numbers):
    avg = mean(numbers)
    variance = sum([pow(x - avg, 2) for x in numbers]) / float(len(numbers) - 1)
    return math.sqrt(variance)
def MeanAndStdDev(mydata):
    info = [(mean(attribute), std_dev(attribute)) for attribute in zip(*mydata)]
    # eg: mydata = [[a, b, c], [m, n, o], [x, y, z]]
    # here mean of 1st attribute = (a + m + x)/3, mean of 2nd attribute = (b + n + y)/3
    # delete the summary of the last column (the class label)
    del info[-1]
    return info

# Group the data rows under each class in a dictionary
def groupUnderClass(mydata):
    grouped = {}
    for i in range(len(mydata)):
        if mydata[i][-1] not in grouped:
            grouped[mydata[i][-1]] = []
        grouped[mydata[i][-1]].append(mydata[i])
    return grouped

# Find the mean and standard deviation of each attribute under each class
def MeanAndStdDevForClass(mydata):
    info = {}
    grouped = groupUnderClass(mydata)
    for classValue, instances in grouped.items():
        info[classValue] = MeanAndStdDev(instances)
    return info

# Gaussian probability density function
def calculateGaussianProbability(x, mean, stdev):
    expo = math.exp(-(math.pow(x - mean, 2) / (2 * math.pow(stdev, 2))))
    return (1 / (math.sqrt(2 * math.pi) * stdev)) * expo

# Probability of a test row belonging to each class
def calculateClassProbabilities(info, test):
    probabilities = {}
    for classValue, classSummaries in info.items():
        probabilities[classValue] = 1
        for i in range(len(classSummaries)):
            mean, std_dev = classSummaries[i]
            x = test[i]
            probabilities[classValue] *= calculateGaussianProbability(x, mean, std_dev)
    return probabilities

# Predict the class with the highest probability
def predict(info, test):
    probabilities = calculateClassProbabilities(info, test)
    bestLabel, bestProb = None, -1
    for classValue, probability in probabilities.items():
        if bestLabel is None or probability > bestProb:
            bestProb = probability
            bestLabel = classValue
    return bestLabel

# Return predictions for a whole set of test rows
def getPredictions(info, test):
    return [predict(info, test[i]) for i in range(len(test))]
# Accuracy score
def accuracy_rate(test, predictions):
    correct = 0
    for i in range(len(test)):
        if test[i][-1] == predictions[i]:
            correct += 1
    return (correct / float(len(test))) * 100.0
# driver code
# add the data path in your system
filename = r'E:\user\MACHINE LEARNING\machine learning algos\Naive bayes\filedata.csv'

# load the file and store its rows in a list
mydata = list(csv.reader(open(filename, "rt")))
mydata = encode_class(mydata)
for i in range(len(mydata)):
    mydata[i] = [float(x) for x in mydata[i]]

# split the data: 70% for training, 30% for testing
ratio = 0.7
train_data, test_data = splitting(mydata, ratio)

# prepare model
info = MeanAndStdDevForClass(train_data)

# test model
predictions = getPredictions(info, test_data)
accuracy = accuracy_rate(test_data, predictions)
print("Accuracy of your model is: ", accuracy)
INPUT:
DATASET DOWNLOAD
https://gist.github.com/ktisha/c21e73a1bd1700294ef790c56c8aec1f
OUTPUT:
RESULT:
Thus the Naive Bayes model was implemented successfully using Python.
EX NO:4
IMPLEMENT BAYESIAN NETWORKS
DATE:
AIM:
To construct a Bayesian network and perform inference on it using the pgmpy library in Python.
PROCEDURE:
1. Define the structure of the Bayesian network as a set of directed edges.
2. Define the conditional probability distribution (CPD) of each node.
3. Associate the CPDs with the network and validate the model.
4. Perform inference on the network (e.g., by variable elimination).
5. Visualize the network.
PROGRAM:
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination
import networkx as nx
import pylab as plt

# Defining the Bayesian structure (the Monty Hall problem)
model = BayesianNetwork([('Guest', 'Host'), ('Price', 'Host')])

# Defining the conditional probability distributions (CPDs)
cpd_guest = TabularCPD('Guest', 3, [[1/3], [1/3], [1/3]])
cpd_price = TabularCPD('Price', 3, [[1/3], [1/3], [1/3]])
cpd_host = TabularCPD('Host', 3,
                      [[0, 0, 0, 0, 0.5, 1, 0, 1, 0.5],
                       [0.5, 0, 1, 0, 0, 0, 1, 0, 0.5],
                       [0.5, 1, 0, 1, 0.5, 0, 0, 0, 0]],
                      evidence=['Guest', 'Price'], evidence_card=[3, 3])

# Associating the CPDs with the network and validating the model
model.add_cpds(cpd_guest, cpd_price, cpd_host)
model.check_model()

# Inference: which door will the host open, given the guest's and the prize's doors?
infer = VariableElimination(model)
print(infer.query(['Host'], evidence={'Guest': 2, 'Price': 2}))

# Drawing and saving the network graph
nx.draw(model, with_labels=True)
plt.savefig('model.png')
plt.close()
INPUT :
DATASET DOWNLOAD
https://drive.google.com/file/d/17vwRLAY8uR-6vWusM5prn08it-BEGp-f/view
OUTPUT:
RESULT:
Thus the Bayesian network was constructed and inferences were performed
successfully using Python.
EX NO:5
BUILD REGRESSION MODELS
DATE:
AIM:
To build a regression model using Python and scikit-learn.
ALGORITHM:
PROGRAM:
We will start by importing the required libraries and loading a dataset. For
this example, we will be using the Boston Housing dataset, which is available
in scikit-learn. (Note: load_boston was removed in scikit-learn 1.2, so this
example needs an older version of the library.)

import pandas as pd
import numpy as np
from sklearn.datasets import load_boston

boston = load_boston()
df = pd.DataFrame(boston.data, columns=boston.feature_names)
df['MEDV'] = boston.target
Next, we need to split the dataset into training and testing sets. We will use 80%
of the data for training and 20% for testing.
from sklearn.model_selection import train_test_split

X = df.drop('MEDV', axis=1)
y = df['MEDV']
# random_state fixed for reproducibility (an added assumption)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
Step 3: Create a linear regression model and fit it to the training data
Now we can create a linear regression model and fit it to the training data.
from sklearn.linear_model import LinearRegression

model = LinearRegression()
model.fit(X_train, y_train)
Step 4: Make predictions on the testing data and evaluate the model's
performance
Finally, we can use the trained model to make predictions on the testing data
and evaluate its performance.
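A minimal sketch of this step, using scikit-learn's mean_squared_error and r2_score functions:

from sklearn.metrics import mean_squared_error, r2_score

# Predict on the held-out test set and score the predictions
y_pred = model.predict(X_test)
print('Mean squared error:', mean_squared_error(y_test, y_pred))
print('R-squared:', r2_score(y_test, y_pred))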
This will output the mean squared error and coefficient of determination (R-
squared) for our model. The mean squared error gives us an idea of how well
the model is able to predict the target variable, while the R-squared value tells
us how much of the variation in the target variable is explained by the model.
And that's it! We have built a simple linear regression model using Python. Of
course, there are many other regression models you can build and various ways
to fine-tune the model's performance, but this should give you a good starting
point.
OUTPUT:
RESULT:
Thus the regression model was built successfully using Python.
EX NO:6
BUILD DECISION TREES AND RANDOM FORESTS
DATE:
AIM:
To build decision tree and random forest models using Python and scikit-learn.
PROCEDURE:
DECISION TREES
A decision tree is a tree-like model of decisions and their possible consequences.
It consists of nodes, which represent the decisions, and edges, which represent
the possible outcomes. The tree starts at the root node and follows a path down to
the leaves, which represent the final decisions.
1. Select the attribute that best separates the data. You want to select the
attribute that best separates the data into distinct groups that have
different outcomes. You can use different measures to determine which
attribute is the best, such as the Gini index or the information gain (a
small sketch of the Gini index appears below).
2. Split the data based on the selected attribute. You want to split the
data into smaller groups based on the selected attribute. Each group
should have similar outcomes.
3. Repeat the process for each group. For each group, you want to
repeat the process by selecting the next best attribute and splitting
the data.
4. Stop when you reach a stopping criterion. You want to stop building
the tree when you reach a stopping criterion, such as a minimum number
of instances in a node or a maximum depth of the tree.
Once you have built the decision tree, you can use it to make predictions for
new instances by following the path from the root to the leaf node that
corresponds to the instance.
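For concreteness, the Gini index mentioned in step 1 can be computed as below (a small sketch; the groups and labels are made up for the illustration):

# Gini index of a candidate split: weighted impurity of the resulting groups
def gini(groups):
    total = sum(len(g) for g in groups)
    score = 0.0
    for g in groups:
        if not g:
            continue
        p = [g.count(c) / len(g) for c in set(g)]
        score += (1 - sum(x * x for x in p)) * (len(g) / total)
    return score

# A perfect split (each group is pure) scores 0; a 50/50 mix scores 0.5
print(gini([['yes', 'yes'], ['no', 'no']]))  # 0.0
print(gini([['yes', 'no'], ['yes', 'no']]))  # 0.5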
RANDOM FORESTS
1. Select a random subset of the data. You want to select a random subset
of the data to train each decision tree.
2. Select a random subset of the attributes. For each split in each decision
tree, you want to select a random subset of the attributes to consider.
3. Build a decision tree for each subset. You want to build a decision tree for
each random subset of the data and attributes.
4. Combine the decision trees to make predictions. To make predictions for
new instances, you want to combine the predictions of all the decision
trees in the random forest.
Random forests are typically more accurate than single decision trees and are
less prone to overfitting. They are widely used in machine learning for
classification and regression tasks.
PROGRAM:
Here's some sample code that you can use to get started:
Now, let's import the necessary modules and load the dataset:
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
# Load the dataset
iris = load_iris()
X = iris.data
y = iris.target

# Split into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Build and train a decision tree
dtc = DecisionTreeClassifier()
dtc.fit(X_train, y_train)

# Make predictions on the test set and calculate the accuracy
y_pred = dtc.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
print('Decision tree accuracy:', accuracy)

# Build and train a random forest and calculate its accuracy
rfc = RandomForestClassifier()
rfc.fit(X_train, y_train)
rf_accuracy = accuracy_score(y_test, rfc.predict(X_test))
print('Random forest accuracy:', rf_accuracy)
In this example, we're using the iris dataset and splitting it into
training and testing sets. Then, we build a decision tree and a
random forest using scikit-learn's DecisionTreeClassifier and
RandomForestClassifier classes, respectively. Finally, we make
predictions on the test set and calculate the accuracy of each model
using scikit-learn's accuracy_score function.
OUTPUT:
RESULT:
Thus the decision tree and random forest models were built successfully
using Python.
EX NO:7
BUILD SVM MODELS
DATE:
AIM:
To build an SVM model using Python and scikit-learn.
PROCEDURE:
1. Load the data: Import the libraries and load your dataset.
2. Preprocess the data: Clean the data and scale the features if needed.
3. Split the data: Divide the dataset into training and testing sets.
4. Train your model: Fit an SVM classifier with a suitable kernel to the
training data.
5. Evaluate your model: Use the testing data to evaluate your SVM model.
Calculate the accuracy, precision, recall, and F1 score to measure the
performance of your model.
6. Tune your model: If your model does not perform well, you may need to
adjust the hyperparameters, such as the regularization parameter, kernel
coefficient, or degree of the polynomial function.
7. Predict with your model: Once you are satisfied with your model, you can
use it to predict new data.
PROGRAM:
Here is some sample code in Python using the scikit-learn library to build
an SVM model with an RBF kernel:
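The sketch below makes the usual assumptions (the iris dataset as input and illustrative hyperparameter values); it ends by computing the accuracy variable printed on the next line:

from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Load a dataset (iris is used here purely for illustration)
iris = datasets.load_iris()
X, y = iris.data, iris.target

# Split into training and testing sets (the split is random, so results vary)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Build and train an SVM with an RBF kernel
svm = SVC(kernel='rbf', C=1.0, gamma='scale')
svm.fit(X_train, y_train)

# Evaluate on the testing data
y_pred = svm.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)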
print('Accuracy:', accuracy)
OUTPUT:
Accuracy: <accuracy_value>
Note: The <accuracy_value> will vary slightly due to the random splitting of the
data in the train_test_split function. You can expect a value in the range of 80% to
95%, indicating the model's performance on unseen testing data.
RESULT:
Thus the SVM model was built successfully using Python.
EX NO:8
IMPLEMENT ENSEMBLING TECHNIQUES
DATE:
AIM:
To implement ensembling techniques using Python and scikit-learn.
PROCEDURE:
PROGRAM:
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier, VotingClassifier
from sklearn.metrics import accuracy_score

# Load the dataset (load_iris is used here as a stand-in for the unspecified load_data())
X, y = load_iris(return_X_y=True)

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

# Initialize the base models
model1 = LogisticRegression()
model2 = GaussianNB()

# Initialize the ensemble models
# (in scikit-learn >= 1.4, pass estimator= instead of base_estimator=)
bagging = BaggingClassifier(base_estimator=model1, n_estimators=10, random_state=42)
boosting = AdaBoostClassifier(base_estimator=model2, n_estimators=10, random_state=42)

# Fit the ensemble models and combine them by majority voting
bagging.fit(X_train, y_train)
boosting.fit(X_train, y_train)
ensemble = VotingClassifier(estimators=[('bagging', bagging), ('boosting', boosting)])
ensemble.fit(X_train, y_train)

# Evaluate the combined model on the test set
ensemble_preds = ensemble.predict(X_test)
ensemble_acc = accuracy_score(y_test, ensemble_preds)
print('Accuracy:', ensemble_acc)
OUTPUT:
Accuracy: 0.875
Note: The exact value will vary depending on the dataset used and the
randomness in the splitting process.
RESULT:
Thus the ensembling techniques were implemented successfully using Python
and machine learning libraries.
EX NO:9
IMPLEMENT CLUSTERING ALGORITHMS
DATE:
AIM:
To implement clustering algorithms using Python and the scikit-learn library.
PROCEDURE:
1. K-Means Clustering
K-means clustering is one of the most popular clustering algorithms. The
algorithm partitions a set of data points into k clusters, where k is a
predefined number. The algorithm works by iteratively assigning data
points to the nearest cluster centroid, and then updating the centroid based
on the new assignments. This process continues until convergence.
PROGRAM:
Here's some sample Python code for K-Means Clustering using the scikit-
learn library:
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1, 2], [1, 4], [1, 0], [4, 2], [4, 4], [4, 0]])
kmeans = KMeans(n_clusters=2, random_state=0).fit(X)
print(kmeans.labels_)
OUTPUT:
[1 1 1 0 0 0]
RESULT:
Thus the clustering algorithms were implemented successfully using Python
and the scikit-learn library.
EX NO:10
IMPLEMENT EM FOR BAYESIAN NETWORKS
DATE:
AIM:
To implement the EM (Expectation-Maximization) algorithm for Bayesian networks.
PROCEDURE:
1. Define the structure of the Bayesian network and identify the latent
(unobserved) variables.
2. Initialize the parameters of the network.
3. E-step: compute the posterior distribution over the latent variables given
the observed data and the current parameters.
4. M-step: update the parameters to maximize the expected log-likelihood of
the data under that posterior; repeat steps 3-4 until convergence.
5. Use the learned parameters: Once the EM algorithm has converged, the
learned parameters can be used to make predictions, perform inference, and
generate samples from the Bayesian network.
Note that the E-step and M-step for a Bayesian network can be more complex
than those for a simple mixture model or a linear regression model. In particular,
the E-step involves computing the posterior distribution over the latent variables,
which can be computationally expensive for large networks. The M-step involves
maximizing the log-likelihood of the data with respect to the parameters, which
can involve solving a non-convex optimization problem. Therefore, efficient
algorithms and approximations are often used to implement the EM algorithm for
Bayesian networks.
ALGORITHM:
Input: data D, network structure G, initial parameters θ
Output: learned parameters θ
1. Initialize θ.
2. Repeat until convergence:
   a. E-step: compute the posterior distribution over the latent variables
      given D and the current θ.
   b. M-step: update θ to maximize the expected log-likelihood of D under
      that posterior.
3. Return θ
Note that the specific form of the expected sufficient statistics and
the update equations for each node's parameters will depend on the type of
distribution used to model the node's conditional probabilities. For example, if
the node's distribution is Gaussian, the expected sufficient statistics would be the
mean and variance, and the update equation would involve setting the mean and
variance to the sample mean and variance of the data weighted by the posterior
probabilities.
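Since this exercise gives no program listing, here is a sketch of the loop above on the simplest possible case: a network with one hidden binary node C and one observed Gaussian child X (a two-component mixture). All data and starting values are made up for the illustration.

import numpy as np

# EM for the simplest Bayesian network with a latent node
np.random.seed(0)
data = np.concatenate([np.random.normal(0, 1, 200), np.random.normal(5, 1, 200)])

prior = np.array([0.5, 0.5])   # P(C)
means = np.array([1.0, 4.0])   # mean of X given each class
stds = np.array([1.0, 1.0])    # std of X given each class

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

for _ in range(50):  # a fixed iteration count stands in for a convergence test
    # E-step: posterior P(C | X) for every data point
    joint = prior * np.stack([gaussian(data, means[k], stds[k]) for k in range(2)], axis=1)
    resp = joint / joint.sum(axis=1, keepdims=True)
    # M-step: re-estimate the parameters from the data weighted by the posteriors
    Nk = resp.sum(axis=0)
    prior = Nk / len(data)
    means = (resp * data[:, None]).sum(axis=0) / Nk
    stds = np.sqrt((resp * (data[:, None] - means) ** 2).sum(axis=0) / Nk)

print('priors:', prior)
print('means:', means)
print('stds:', stds)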
RESULT:
Thus the EM algorithm for Bayesian networks was implemented successfully.
EX NO:11
BUILD SIMPLE NN MODEL
DATE:
AIM:
To build a simple neural network model by using a simple neural network with
one input layer, one hidden layer, and one output layer.
PROCEDURE:
First, we need to import the necessary libraries:
import numpy as np
import tensorflow as tf
Next, let's define the model architecture. For this example, we will create a
neural network with 2 input nodes, 4 hidden nodes, and 1 output node.
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(4, input_shape=(2,), activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
In the code above, we create a sequential model, which is a linear stack of layers.
The first layer is a dense layer with 4 nodes and the activation function ‘relu’. The
input shape of this layer is (2,), which means we expect 2 input values for each
sample. The second layer is also a dense layer with 1 node and the activation
function ‘sigmoid’.
Now, let's compile the model with a loss function, optimizer, and metrics:

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
We use the binary_crossentropy loss function because this is a binary classification
problem. For the optimizer, we use adam, which is a popular choice for gradient
descent. We also specify that we want to track the accuracy metric during training.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])
model.fit(X, y, epochs=1000)
In this example, we use the XOR logical function as our dataset. X contains
the input values, and y contains the output values. We train the model for 1000
epochs using the fit() function.
That's it! You've created a simple neural network model. Of course, you can
modify the number of layers, nodes, activation functions, loss functions, and
optimizers to suit your needs.
RESULT:
Thus a simple neural network model was built successfully using TensorFlow/Keras.
EX NO:12
BUILD DEEP LEARNING NN MODELS
DATE:
AIM:
To build deep learning neural network models using Python and Keras.
ALGORITHM:
PROGRAM:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical
model = Sequential()
model.add(Dense(units=64, activation='relu', input_dim=100))
model.add(Dense(units=10, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])

# Generate dummy data: 1000 samples, 100 features, 10 classes
data = np.random.random((1000, 100))
labels = np.random.randint(10, size=(1000, 1))
one_hot_labels = to_categorical(labels, num_classes=10)

# Train the model for 10 epochs in batches of 32 samples
model.fit(data, one_hot_labels, epochs=10, batch_size=32)

# Evaluate the model on fresh dummy data
test_data = np.random.random((100, 100))
test_labels = to_categorical(np.random.randint(10, size=(100, 1)), num_classes=10)
loss, accuracy = model.evaluate(test_data, test_labels, verbose=0)
print('Test loss:', loss)
print('Test accuracy:', accuracy)
OUTPUT:
Epoch 1/10
32/32 [==============================] - 0s 2ms/step - loss: 2.4028 -
accuracy: 0.0900
Epoch 2/10
32/32 [==============================] - 0s 1ms/step - loss: 2.3642 -
accuracy: 0.0980
...
Epoch 10/10
32/32 [==============================] - 0s 1ms/step - loss: 2.3169 -
accuracy: 0.1120
Test loss: 2.3998091220855713
Test accuracy: 0.07999999821186066
RESULT:
Thus the Python program to build deep learning neural network models
was developed successfully.