AIML LAB MANUAL
(Regulation 2021)
Breadth First Search
Algorithm:
1. Initialize a queue with the starting node.
2. Initialize an empty set of visited nodes.
3. While the queue is not empty:
4. Dequeue the node at the front of the queue.
5. If the node has not been visited, add it to the visited set.
6. Enqueue all neighbors of the node that have not been visited.
def bfs(graph, start):
    visited = set()
    queue = [start]
    while queue:
        vertex = queue.pop(0)
        visited.add(vertex)
        # Enqueue neighbors that have not been visited yet
        queue.extend(graph[vertex] - visited)
    return visited

# Example graph as adjacency sets; only the 'D' entry survived extraction,
# so the remaining entries are a plausible reconstruction
graph = {
    'A': set(['B', 'C']),
    'B': set(['A', 'D', 'E']),
    'C': set(['A', 'F']),
    'D': set(['B']),
    'E': set(['B', 'F']),
    'F': set(['C', 'E'])
}

start_node = 'A'
bfs_result = bfs(graph, start_node)
print("BFS result:", bfs_result)
Output:
Depth First Search
Algorithm:
1. Initialize an empty set of visited nodes.
2. Call the dfs_helper function with the starting node and visited set as arguments.
3. In the dfs_helper function, take the current node and the visited set as input.
4. Check whether the current node has been visited.
5. If the current node has not been visited, add it to the visited set.
6. For each unvisited neighbor of the current node:
7. Recursively call the dfs_helper function with the neighbor node and visited set as arguments.
Program:
def dfs(graph, start, visited=None):
    if visited is None:
        visited = set()
    visited.add(start)
    # Recurse into neighbors that have not been visited yet
    for next_node in graph[start] - visited:
        dfs(graph, next_node, visited)
    return visited

# Uses the same example graph as the BFS program above
start_node = 'A'
dfs_result = dfs(graph, start_node)
print("DFS result:", dfs_result)
Output:
Result:
Thus the given uninformed search algorithms (BFS, DFS) were executed successfully using Python.
Ex No: 2 Implementation of Informed search algorithms (A*, memory-bounded A*)
Aim:
To write a python program to implement informed search algorithms (A*, memory-bounded A*).
A* Search Algorithm:
Algorithm:
1. Initialize the open list with the starting node and a cost of zero.
2. Initialize an empty set of visited nodes.
3. While the open list is not empty:
   a. Remove the node with the lowest cost from the open list.
   b. If the removed node is the goal node, return the solution.
   c. Add the removed node to the set of visited nodes.
   d. For each neighbor of the removed node:
      i. If the neighbor has not been visited before, calculate its cost as the actual cost from the starting node to the neighbor (the cost of the removed node plus the edge weight) plus the estimated cost from the neighbor to the goal node. For example, if the removed node's cost so far is 3, the edge weight is 2, and the heuristic estimate is 4, the neighbor's cost is (3 + 2) + 4 = 9.
      ii. If the neighbor is not in the open list, add it to the open list with its cost.
      iii. If the neighbor is already in the open list with a higher cost, update its cost in the open list.
4. If the open list becomes empty and no solution is found, return failure.
Program:
from queue import PriorityQueue

# Heuristic function: Manhattan distance between two grid points
def heuristic(a, b):
    (x1, y1) = a
    (x2, y2) = b
    return abs(x1 - x2) + abs(y1 - y2)

def astar(graph, start, goal):
    frontier = PriorityQueue()
    frontier.put((0, start))
    came_from = {start: None}
    cost_so_far = {start: 0}
    while not frontier.empty():
        _, current = frontier.get()
        if current == goal:
            break
        for next_node, weight in graph[current].items():
            new_cost = cost_so_far[current] + weight
            if next_node not in cost_so_far or new_cost < cost_so_far[next_node]:
                cost_so_far[next_node] = new_cost
                priority = new_cost + heuristic(next_node, goal)
                frontier.put((priority, next_node))
                came_from[next_node] = current
    return came_from, cost_so_far

# Example grid graph as {node: {neighbor: edge weight}}; the original
# entries were lost in extraction, so these values are illustrative
graph = {
    (0, 0): {(0, 1): 1, (1, 0): 1},
    (0, 1): {(1, 1): 1},
    (1, 0): {(1, 1): 1},
    (1, 1): {}
}
start = (0, 0)
goal = (1, 1)
came_from, cost_so_far = astar(graph, start, goal)

# Reconstruct the path from goal back to start
path = []
current = goal
while current != start:
    path.append(current)
    current = came_from[current]
path.append(start)
path.reverse()
print("Path:", path)
Output:
Memory-Bounded A* Search Algorithm:
Algorithm:
1. Initialize a priority queue with the starting node and a priority of zero.
2. Initialize an empty set of visited nodes.
3. While the priority queue is not empty:
   a. Remove the node with the highest priority (i.e., the lowest cost) from the queue.
   b. If the removed node is the goal node, return the solution.
   c. Add the removed node to the set of visited nodes.
   d. For each neighbor of the removed node:
      i. If the neighbor has not been visited before, calculate its priority as the actual cost from the starting node to the neighbor plus the estimated cost from the neighbor to the goal node.
      ii. If the priority of the neighbor is less than or equal to the memory limit, add the neighbor to the priority queue with its priority.
4. If the priority queue becomes empty and no solution is found, return failure.
Program:
import heapq

class Node:
    def __init__(self, val):
        self.val = val
        self.neighbors = []
    def add_neighbor(self, neighbor, cost):
        self.neighbors.append((neighbor, cost))
    def __lt__(self, other):
        # Tie-breaker so heapq can order entries with equal priority
        return self.val < other.val

def heuristic(node, goal):
    return abs(node.val - goal.val)

# The function header and queue initialization were lost in extraction;
# this reconstruction follows the algorithm above
def memory_bounded_astar(start_node, goal_node, memory_limit):
    queue = [(heuristic(start_node, goal_node), start_node, 0)]
    visited = set()
    while queue:
        _, current_node, actual_cost = heapq.heappop(queue)
        if current_node == goal_node:
            return actual_cost
        visited.add(current_node)
        for neighbor, cost in current_node.neighbors:
            if neighbor not in visited:
                priority = actual_cost + cost + heuristic(neighbor, goal_node)
                # Prune entries whose priority exceeds the memory limit
                if priority <= memory_limit:
                    heapq.heappush(queue, (priority, neighbor, actual_cost + cost))
    return None

# Example usage
start = Node(1)
n2 = Node(2)
n3 = Node(3)
n4 = Node(4)
n5 = Node(5)
goal = Node(6)
start.add_neighbor(n2, 1)
start.add_neighbor(n3, 2)
n2.add_neighbor(n4, 3)
n3.add_neighbor(n4, 1)
n3.add_neighbor(n5, 2)
n4.add_neighbor(goal, 3)
n5.add_neighbor(goal, 2)

result = memory_bounded_astar(start, goal, 15)  # memory limit of 15 assumed; original value not recoverable
if result is not None:
    print("Cost of the path found:", result)
else:
    print("No path found within the memory limit")
Output:
Result:
Thus the given informed search algorithms (A*, memory-bounded A*) were executed successfully.
Ex No: 3 Implement naïve Bayes models
Aim:
To write a python program to implement naïve Bayes models.
Algorithm:
1. Collect and preprocess the training data: This involves cleaning, transforming, and converting the data into a format that can be used by the algorithm.
2. Calculate the prior probabilities: Prior probabilities are the probabilities of each class occurring in the training data. This can be calculated by dividing the number of instances of each class by the total number of instances.
3. Calculate the conditional probabilities: For each feature in the data, calculate the conditional probabilities of the feature given the class. This can be done by dividing the number of instances of the feature in each class by the total number of instances in that class.
4. Make predictions: Calculate the probability of a new instance belonging to each class by multiplying the prior probability and the conditional probabilities of each feature. The class with the highest probability is the predicted class (see the sketch after this list).
5. Evaluate the model: Use metrics such as accuracy, precision, recall, and F1 score to measure the performance of the model.
6. Improve the model: You can improve the performance of the model by using techniques such as feature selection, smoothing, or hyperparameter tuning.
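As a concrete illustration of steps 2-4, here is a minimal sketch that computes prior and conditional probabilities from raw counts on a tiny categorical dataset. The weather/play values are invented for illustration only; the lab program below uses scikit-learn's GaussianNB instead of this hand-rolled version.

from collections import Counter, defaultdict

# Toy categorical dataset of (weather, play) pairs -- invented for illustration
data = [('sunny', 'yes'), ('sunny', 'no'), ('rainy', 'yes'),
        ('rainy', 'yes'), ('sunny', 'yes'), ('rainy', 'no')]

# Step 2: prior probabilities P(class) = count(class) / total instances
class_counts = Counter(label for _, label in data)
total = len(data)
priors = {c: n / total for c, n in class_counts.items()}

# Step 3: conditional probabilities P(feature value | class)
feature_counts = defaultdict(Counter)
for feature, label in data:
    feature_counts[label][feature] += 1
conditionals = {c: {f: n / class_counts[c] for f, n in counts.items()}
                for c, counts in feature_counts.items()}

# Step 4: score each class by prior * conditional; the highest score wins
def predict(feature):
    scores = {c: priors[c] * conditionals[c].get(feature, 0.0) for c in priors}
    return max(scores, key=scores.get)

print(priors)            # {'yes': 0.666..., 'no': 0.333...}
print(predict('sunny'))  # 'yes'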
Program:
# Import required libraries
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Load the iris dataset
iris = load_iris()

# Split the data into training and test sets
# (split parameters assumed; the originals were lost in extraction)
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=42)

# Create and train the Gaussian naive Bayes model
nb_model = GaussianNB()
nb_model.fit(X_train, y_train)

# Make predictions on the test set
y_pred = nb_model.predict(X_test)

# Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
Output:
Accuracy: 0.9777777777777777
Result:
Thus the given naïve Bayes models were executed successfully using the iris dataset.
Ex No: 4 Implement Bayesian Networks
Aim:
To write a python program to implement Bayesian networks.
Algorithm:
1. Define the structure of the network: Determine the nodes in the network and their dependencies, i.e., the directed edges between them.
2. Specify the conditional probability distributions (CPDs): For each node in the network, define the CPD, which specifies the probabilities of each possible value of the node given the values of its parent nodes.
3. Check the model: Verify that the network structure and CPDs are valid (i.e., the probabilities in each CPD sum to one).
4. Use the network to make predictions: Given a set of observed variables, compute the posterior probabilities of the query variables using inference. This can be done using various algorithms, such as the Variable Elimination algorithm.
5. Evaluate the model: Check the network's predictions against known data.
6. Improve the model: You can improve the performance of the model by using more data or by refining the network structure and CPDs.
Program:
# Import required libraries
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Defining the network structure (A and B influence C; C influences D and E)
model = BayesianNetwork([('A', 'C'), ('B', 'C'), ('C', 'D'), ('C', 'E')])

# Defining the CPDs; the numeric values here are illustrative,
# as the original values were lost in extraction
cpd_a = TabularCPD(variable='A', variable_card=2, values=[[0.6], [0.4]])
cpd_b = TabularCPD(variable='B', variable_card=2, values=[[0.7], [0.3]])
cpd_c = TabularCPD(variable='C', variable_card=3,
                   values=[[0.1, 0.5, 0.3, 0.5],
                           [0.3, 0.2, 0.4, 0.2],
                           [0.6, 0.3, 0.3, 0.3]],
                   evidence=['A', 'B'], evidence_card=[2, 2])
cpd_d = TabularCPD(variable='D', variable_card=2,
                   values=[[0.6, 0.4, 0.5], [0.4, 0.6, 0.5]],
                   evidence=['C'], evidence_card=[3])
cpd_e = TabularCPD(variable='E', variable_card=2,
                   values=[[0.7, 0.2, 0.5], [0.3, 0.8, 0.5]],
                   evidence=['C'], evidence_card=[3])

# Associating the CPDs with the network and checking the model is valid
model.add_cpds(cpd_a, cpd_b, cpd_c, cpd_d, cpd_e)
model.check_model()

# Creating an inference object
infer = VariableElimination(model)

# Computing posterior probabilities given evidence
posterior = infer.query(variables=['C'], evidence={'A': 1, 'B': 0})
print(posterior)
posterior = infer.query(variables=['D'], evidence={'E': 1})
print(posterior)
Output:
+------+----------+
| C    |   phi(C) |
+======+==========+
| C(0) |   0.5062 |
+------+----------+
| C(1) |   0.1937 |
+------+----------+
| C(2) |   0.3001 |
+------+----------+

+------+----------+
| D    |   phi(D) |
+======+==========+
| D(0) |   0.5207 |
+------+----------+
| D(1) |   0.4793 |
+------+----------+
Result:
Thus the given Bayesian network was built and executed successfully.
Ex No: 5 Build Regression models
Aim:
To write a python program to build regression models.
Algorithm:
1. Collect and preprocess the data: Collect the data relevant to the problem and preprocess it into a format suitable for modeling.
2. Split the data into training and testing sets: Split the data into two sets: a training set used to build the model and a testing set used to evaluate the model's performance.
3. Choose a regression algorithm: Select a regression algorithm that is appropriate for the problem and the data. Common regression algorithms include Linear Regression, Ridge Regression, and Lasso Regression.
4. Train the regression model: Use the training set to fit the regression model to the data. This involves estimating the coefficients of the regression equation that best fit the data.
5. Evaluate the model's performance: Use the testing set to evaluate the model's performance. Common evaluation metrics include Mean Squared Error, Root Mean Squared Error, and R-squared (see the metrics sketch after this list).
6. Tune the model: Adjust the model's hyperparameters, such as the regularization strength, to improve the model's performance.
7. Use the model to make predictions: Once the model is trained and evaluated, use it to make predictions on new data.
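Step 5 can be made concrete with scikit-learn's metric helpers. A minimal sketch follows; the y_test and y_pred arrays here are invented stand-ins for a real test set and model predictions:

import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

# Invented true values and predictions, standing in for a real test set
y_test = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.8, 5.4, 2.9, 6.6])

mse = mean_squared_error(y_test, y_pred)   # Mean Squared Error
rmse = np.sqrt(mse)                        # Root Mean Squared Error
r2 = r2_score(y_test, y_pred)              # R-squared

print('MSE: {:.3f}, RMSE: {:.3f}, R^2: {:.3f}'.format(mse, rmse, r2))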
Program:
# Import required libraries
import pandas as pd
from sklearn.linear_model import LinearRegression, Ridge, Lasso

# Load the dataset
data = pd.read_csv('data.csv')

# Split data into features and target variable
X = data.drop('target_variable', axis=1)
y = data['target_variable']

# Create and fit the Linear Regression model
lin_reg = LinearRegression()
lin_reg.fit(X, y)
print('Linear Regression')
print('Coefficients:', lin_reg.coef_)
print('Intercept:', lin_reg.intercept_)

# Create and fit the Ridge Regression model
ridge_reg = Ridge(alpha=1.0)
ridge_reg.fit(X, y)
print('Ridge Regression')
print('Coefficients:', ridge_reg.coef_)
print('Intercept:', ridge_reg.intercept_)

# Create and fit the Lasso Regression model
lasso_reg = Lasso(alpha=1.0)
lasso_reg.fit(X, y)
print('Lasso Regression')
print('Coefficients:', lasso_reg.coef_)
print('Intercept:', lasso_reg.intercept_)
Output:
Intercept: -0.9862635235260367
Result:
Thus the given regression models were built and executed successfully.
Ex No: 6 Build decision trees and random forest
Aim:
To write a python program to build decision trees and random forests.
Algorithm:
1. Load the diabetes dataset and split it into features and target variable.
2. Split the data into training and test sets.
3. Create a decision tree regressor and fit it on the training set.
4. Create a random forest regressor and fit it on the training set.
5. Make predictions on the test set with both models.
6. Evaluate the models using Mean Squared Error and the coefficient of determination (R^2).
Program:
# Import required libraries
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score

# Load diabetes dataset
diabetes = load_diabetes()

# Split data into features and target variable
X = diabetes.data
y = diabetes.target

# Split data into training and test sets
# (split parameters assumed; the originals were lost in extraction)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Create and fit the decision tree model
tree_reg = DecisionTreeRegressor(random_state=42)
tree_reg.fit(X_train, y_train)

# Make predictions on the test set
tree_pred = tree_reg.predict(X_test)
tree_mse = mean_squared_error(y_test, tree_pred)
print('Decision Tree MSE: {:.2f}'.format(tree_mse))

# Create and fit the random forest model
forest_reg = RandomForestRegressor(random_state=42)
forest_reg.fit(X_train, y_train)

# Make predictions on the test set
forest_pred = forest_reg.predict(X_test)
forest_mse = mean_squared_error(y_test, forest_pred)
forest_r2 = r2_score(y_test, forest_pred)
print('Random Forest MSE: {:.2f}'.format(forest_mse))
print('Coefficient of determination (R^2): {:.2f}'.format(forest_r2))
Output:
Coefficient of determination (R^2): 0.24
Result:
Thus the decision tree and random forest models were built and executed successfully.
Ex No: 7 Build SVM Models
Aim:
To write a python program to build SVM models.
Algorithm:
1. Import necessary libraries, including svm from scikit-learn and any other necessary libraries for data processing and visualization.
2. Load the dataset you want to use for the model. This can be done using scikit-learn's built-in datasets or by loading a custom dataset using a library like pandas.
3. Split the dataset into training and test sets using train_test_split from scikit-learn. This is done to evaluate the performance of the model on unseen data.
4. Preprocess the data as necessary. This may involve scaling the data to a common range, encoding categorical variables, or removing outliers (see the scaling sketch after this list).
5. Create an SVM model using the svm.SVR or svm.SVC class from scikit-learn. Specify the kernel function to be used and any other necessary parameters, such as the regularization parameter C.
6. Fit the SVM model on the training set using the fit method.
7. Evaluate the performance of the model on the test set using a metric such as mean squared error or coefficient of determination (R^2). This can be done using functions like mean_squared_error or r2_score from scikit-learn.
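A minimal sketch of the scaling mentioned in step 4, using scikit-learn's StandardScaler. The feature matrices here are invented stand-ins for a real train/test split; the lab program below skips this preprocessing step:

import numpy as np
from sklearn.preprocessing import StandardScaler

# Invented feature matrices standing in for a real train/test split
X_train = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])
X_test = np.array([[1.5, 250.0]])

# Fit the scaler on the training data only, then apply the same
# transformation to the test data, so no test information leaks in
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
print(X_train_scaled)
print(X_test_scaled)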
Program:
# Import required libraries
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, r2_score

# Load diabetes dataset
diabetes = load_diabetes()

# Split data into features and target variable
X = diabetes.data
y = diabetes.target

# Split data into training and test sets
# (split parameters assumed; the originals were lost in extraction)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Create and fit the SVM regression model
svm_reg = SVR(kernel='linear')
svm_reg.fit(X_train, y_train)

# Make predictions on the test set
svm_pred = svm_reg.predict(X_test)

# Evaluate the model using performance metrics
svm_mse = mean_squared_error(y_test, svm_pred)
svm_r2 = r2_score(y_test, svm_pred)
print('SVM Model')
print('Mean squared error: {:.2f}'.format(svm_mse))
print('Coefficient of determination (R^2): {:.2f}'.format(svm_r2))
Output:
SVM Model
Result:
Thus the given SVM models were built and executed successfully.
Ex No: 8 Implement ensembling techniques
Aim:
To write a python program for implementing ensembling techniques.
Algorithm:
1. Load the dataset using a library like pandas or scikit-learn.
2. Preprocess the data as necessary.
3. Split the dataset into training and test sets using train_test_split from scikit-learn.
4. Create the individual base models, for example a random forest regressor and an SVM regressor.
5. Combine the base models into a single ensemble model.
6. Fit the ensemble model on the training set using the fit method.
7. Use the ensemble model to make predictions on the test set using the predict method.
8. Evaluate the performance of the ensemble model using performance metrics like mean_squared_error and r2_score from scikit-learn.
Program:
# Import required libraries
from sklearn.datasets import load_boston  # note: load_boston was removed in scikit-learn >= 1.2
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, r2_score

# Load the Boston housing dataset
boston = load_boston()

# Split data into features and target variable
X = boston.data
y = boston.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Create the base models
rf_reg = RandomForestRegressor(n_estimators=10, random_state=42)
svm_reg = SVR(kernel='linear')

# Combine the base models (the original construction line was lost in
# extraction; a VotingRegressor is a plausible reconstruction)
ensemble = VotingRegressor([('rf', rf_reg), ('svm', svm_reg)])
ensemble.fit(X_train, y_train)

# Make predictions on the test set and evaluate
ensemble_pred = ensemble.predict(X_test)
ensemble_mse = mean_squared_error(y_test, ensemble_pred)
ensemble_r2 = r2_score(y_test, ensemble_pred)
print('Ensemble Model')
print('Mean squared error: {:.2f}'.format(ensemble_mse))
print('Coefficient of determination (R^2): {:.2f}'.format(ensemble_r2))
Output:
Ensemble Model
Result:
Thus the given ensembling techniques were implemented and executed successfully.
Ex No: 9 Implement clustering algorithms
Aim:
To write a python program to implement clustering algorithms.
Algorithm:
1. Choose the number of clusters, k.
2. Initialize the cluster centroids, for example by selecting k random data points.
3. Repeat the following steps until convergence (a from-scratch sketch of this loop follows the list):
   a. Assign each data point to the nearest cluster centroid based on the Euclidean distance between the data point and the centroids.
   b. Update the centroid of each cluster by taking the mean of all data points assigned to that cluster.
   c. Check for convergence by comparing the new centroids with the previous centroids. If the difference between the old and new centroids is less than a threshold, terminate the algorithm.
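The assign/update/converge loop in steps a-c can be written out directly. A minimal NumPy sketch on made-up 2-D points; scikit-learn's KMeans, used in the program below, performs the same loop internally:

import numpy as np

# Made-up 2-D points and two initial centroids
X = np.array([[1.0, 1.0], [1.5, 2.0], [8.0, 8.0], [9.0, 9.5]])
centroids = np.array([[1.0, 1.0], [9.0, 9.0]])
threshold = 1e-4

while True:
    # a. Assign each point to its nearest centroid (Euclidean distance)
    distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = distances.argmin(axis=1)
    # b. Move each centroid to the mean of the points assigned to it
    new_centroids = np.array([X[labels == k].mean(axis=0)
                              for k in range(len(centroids))])
    # c. Stop when the centroids move less than the threshold
    if np.linalg.norm(new_centroids - centroids) < threshold:
        break
    centroids = new_centroids

print("Centroids:", centroids)
print("Labels:", labels)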
Program:
# Import required libraries
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Load the iris dataset
iris = load_iris()

# Split data into features
X = iris.data

# Create and fit the KMeans clustering model
kmeans = KMeans(n_clusters=3, random_state=42)
kmeans.fit(X)

# Get the cluster label of each data point
labels = kmeans.labels_

# Evaluate the clustering using the silhouette score
silhouette_avg = silhouette_score(X, labels)
print('KMeans Clustering Model')
print('Silhouette score: {:.2f}'.format(silhouette_avg))
Output:
KMeans Clustering Model
Silhouette score: 0.55
Result:
Thus the KMeans clustering algorithm was implemented successfully.
Ex No: 10 Build simple NN Models
Aim:
To write a python program to build simple NN models.
Algorithm:
1. Import the necessary libraries, including TensorFlow and Keras.
2. Load the dataset to be used for training and testing.
3. Preprocess the data, for example by scaling the input values to a common range.
4. Split the data into training and test sets.
5. Define the neural network architecture using the Keras Sequential API.
6. Add the input layer and specify the number of neurons and activation function.
7. Add one or more hidden layers and specify the number of neurons and activation function for each layer.
8. Add the output layer and specify the number of neurons and activation function.
9. Compile the model and specify the loss function, optimizer, and evaluation metrics.
10. Train the model by fitting it to the training data and specify the number of epochs and batch size.
11. Evaluate the model by predicting the test data and computing the loss and accuracy metrics.
12. Fine-tune the model by adjusting the hyperparameters such as the number of neurons, layers, activation functions, learning rate, etc.
Program:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Load the dataset (MNIST assumed; the load line was lost in extraction)
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

# Scale pixel values to the range [0, 1]
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0

# Define the network architecture
model = keras.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(128, activation="relu"),
    layers.Dense(10)
])

# Compile the model
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

# Train the model (epoch and batch-size values assumed)
model.fit(x_train, y_train, epochs=5, batch_size=32)

# Evaluate the model on the test set
test_loss, test_acc = model.evaluate(x_test, y_test)
print("Test loss:", test_loss)
print("Test accuracy:", test_acc)
Result:
Thus the given simple NN models were built and executed successfully.
Ex No: 11 Build deep learning NN models
Aim:
To write a python program to build deep learning NN models.
Algorithm:
1. Import necessary libraries: TensorFlow, Keras, numpy, and any other libraries needed for your specific application.
2. Load your dataset and preprocess it as needed. This may include normalizing the data, splitting it into training and validation sets, and one-hot encoding the labels.
3. Define your neural network architecture. This involves selecting the number of layers, the number of neurons in each layer, the activation function for each layer, and any other relevant parameters. For example, you might define a model with three dense layers, each with ReLU activation, and a final output layer with softmax activation.
4. Compile the model. This involves specifying the loss function, optimizer, and metrics to be used during training. For example, you might use categorical cross-entropy as the loss function, Adam as the optimizer, and accuracy as the metric.
5. Train the model. This involves calling the model.fit() method and passing in your training and validation data. You can specify the number of epochs, batch size, and other training parameters as needed.
6. Evaluate the model. Once the model has been trained, you can evaluate its performance on a separate test set using the model.evaluate() method. This will provide you with the model's loss and accuracy on the test data.
7. Make predictions. You can use the model.predict() method to make predictions on new data. This is often done by passing in a single example at a time, rather than a batch of examples (see the sketch after this list).
8. Fine-tune the model as needed. Depending on the results of your evaluation, you may need to fine-tune the model by adjusting the architecture, training parameters, or other settings.
9. Save the model. Once you are satisfied with the performance of your model, you can save it to disk using the model.save() method. This will allow you to load the model later and make predictions without having to retrain it from scratch (also shown in the sketch below).
10. Deploy the model. Finally, you can deploy your model to production, either by integrating it into a larger software system or by making it available as a web service.
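Steps 7 and 9 are not exercised by the program below, so here is a minimal sketch of both, using an untrained stand-in model of the same shape; the file name my_model.keras is arbitrary:

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# A stand-in model with the same input/output shape as the one below
model = tf.keras.Sequential([layers.Flatten(input_shape=(28, 28)),
                             layers.Dense(10)])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# Step 7: predict on a single (28, 28) example -- Keras expects a batch,
# hence the added leading dimension
single_image = np.zeros((28, 28), dtype="float32")  # placeholder input
logits = model.predict(single_image[None, ...])
predicted_class = int(np.argmax(logits, axis=1)[0])
print("Predicted class:", predicted_class)

# Step 9: save the model to disk and load it back without retraining
model.save("my_model.keras")
restored = tf.keras.models.load_model("my_model.keras")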
Program:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Load the dataset (Fashion-MNIST assumed from the reported accuracies;
# the load line was lost in extraction)
(x_train, y_train), (x_test, y_test) = keras.datasets.fashion_mnist.load_data()

# Scale pixel values to lie between 0 and 1
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0

# Define the model
model = tf.keras.models.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.2),
    layers.Dense(10)
])

# Compile the model
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

# Train the model (a validation split of 0.2 matches the 1500 steps per
# epoch seen in the output below)
model.fit(x_train, y_train, epochs=10, validation_split=0.2)

# Evaluate the model on the test set
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
print("Test loss:", test_loss)
print("Test accuracy:", test_acc)

Output:
Epoch 1/10
1500/1500 [==============================] - 4s 3ms/step - loss: 0.5203 - accuracy:
0.8123
Epoch 2/10
1500/1500 [==============================] - 4s 3ms/step - loss: 0.3901 - accuracy:
0.8585
Epoch 3/10
1500/1500 [==============================] - 4s 3ms/step - loss: 0.3514 - accuracy:
0.8714
Epoch 4/10
1500/1500 [==============================] - 4s 3ms/step - loss: 0.3277 - accuracy:
0.8797
Epoch 5/10
1500/1500 [==============================] - 4s 3ms/step - loss: 0.3107 - accuracy:
0.8860
Epoch 6/10
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2976 - accuracy:
0.8906
Epoch 7/10
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2869 - accuracy:
0.8945
Epoch 8/10
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2777 - accuracy:
0.8977
Epoch 9/10
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2697 - accuracy:
0.9005
Epoch 10/10
1500/1500 [==============================] - 4s 3ms/step - loss: 0.2625 - accuracy:
0.9030
Test loss: 0.2787592418193817
Test accuracy: 0.8959000110626221
Result:
Thus the given deep learning NN models were built and executed successfully.