LAB Manual - OCS351 AIMF Laboratory-1
EXP NO : 01
Implement breadth first search
DATE:
Aim:
To implement breadth first search
Algorithm:
Step 1: Initially the queue and visited list are empty.
Step 2: Push the starting node 0 into the queue and mark it as visited.
Step 3: Remove node 0 from the front of the queue, visit its unvisited neighbors, and push
them into the queue.
Step 4: Remove node 1 from the front of the queue, visit its unvisited neighbors, and push
them into the queue.
Step 5: Remove node 2 from the front of the queue, visit its unvisited neighbors, and push
them into the queue.
Step 6: Remove node 3 from the front of the queue, visit its unvisited neighbors, and push
them into the queue.
Step 7: Remove node 4 from the front of the queue, visit its unvisited neighbors, and push
them into the queue.
Program:
graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': [],
    '4': ['8'],
    '8': []
}
visited = []  # List for visited nodes.
queue = []    # Initialize a queue

def bfs(visited, graph, node):  # Function for BFS
    visited.append(node)
    queue.append(node)
    while queue:  # Loop to visit each node in FIFO order
        m = queue.pop(0)
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

# Driver Code
print("Following is the Breadth-First Search:")
bfs(visited, graph, '5')  # Function calling
Output:
Following is the Breadth-First Search:
5 3 7 2 4 8
Result:
Thus the breadth first search algorithm has been executed successfully and the output was
verified.
EXP NO : 02
Implement depth first search
DATE:
Aim:
To implement depth first search
Algorithm:
Step 1: Push the starting node 0 onto the stack.
Step 2: Visit node 0 and push its adjacent nodes which are not visited yet onto the stack.
Step 3: Node 1 is now at the top of the stack, so visit node 1, pop it from the stack, and push
all of its adjacent nodes which are not visited onto the stack.
Step 4: Node 2 is now at the top of the stack, so visit node 2, pop it from the stack, and push
all of its adjacent nodes which are not visited (i.e., 3 and 4) onto the stack.
Step 5: Node 4 is now at the top of the stack, so visit node 4, pop it from the stack, and push
all of its adjacent nodes which are not visited onto the stack.
Step 6: Node 3 is now at the top of the stack, so visit node 3, pop it from the stack, and push
all of its adjacent nodes which are not visited onto the stack.
Program:
# Using a Python dictionary to act as an adjacency list
graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': [],
    '4': ['8'],
    '8': []
}
visited = set()  # Set to keep track of visited nodes of graph.

def dfs(visited, graph, node):  # Function for DFS
    if node not in visited:
        print(node, end="")
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

# Driver Code
print("Following is the Depth-First Search:")
dfs(visited, graph, '5')
Output:
Following is the Depth-First Search:
532487
Result:
Thus the depth first search algorithm has been executed successfully and the output was
verified.
EXP NO : 03
Analysis of breadth first and depth first search in
terms of time and space
DATE:
Aim:
To implement breadth first and depth first search and analyse them in terms of time and space
Algorithm:
Step 1: Create a queue for BFS and initialize it with the starting node
Step 2: Create a set to mark visited nodes and mark the starting node as visited
Step 3: While the queue is not empty, repeat Steps 4 to 6
Step 4: Remove the node at the front of the queue as the current node
Step 5: Process the current node (in this example, print it)
Step 6: Mark each unvisited neighbour of the current node as visited and enqueue it
Program:
from collections import deque

def bfs(graph, start):
    queue = deque([start])  # create a queue for BFS, initialized with the starting node
    visited = {start}       # create a set of visited nodes and mark the start as visited
    while queue:
        current_node = queue.popleft()
        print(current_node, end="")  # process the current node
        for neighbour in graph[current_node]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)

# graph is an adjacency list such as {'A': ['B', 'C'], ...}; see the sketch below
print("BFS traversal starting from node 'A':")
bfs(graph, 'A')
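The aim calls for analysis in terms of time and space, which the listing above does not measure. Below is a minimal sketch of how running time and peak memory could be recorded with the standard time and tracemalloc modules; the example graph is an assumption (the manual does not list it), chosen so that BFS from 'A' prints ABCDEFGH. Both BFS and DFS visit every vertex and edge once, so each runs in O(V + E) time; BFS needs O(V) space for its queue and DFS O(V) for its stack.

import time
import tracemalloc

# Assumed example graph (not given in the manual)
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
         'D': ['H'], 'E': [], 'F': [], 'G': [], 'H': []}

tracemalloc.start()
start_time = time.perf_counter()
bfs(graph, 'A')  # bfs as defined in the program above
elapsed = time.perf_counter() - start_time
current_bytes, peak_bytes = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f"\nBFS time: {elapsed:.6f} s, peak memory: {peak_bytes} bytes")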
Output:
BFS traversal starting from node 'A':
ABCDEFGH
Result:
Thus the analysis of breadth first and depth first search in terms of time and space has been
executed successfully and the output was verified.
EXP NO : 04
Implement and compare Greedy and A* algorithms
DATE:
Aim:
To implement and compare the Greedy and A* search algorithms
Algorithm (Greedy Best-First Search):
1. Initialize an empty priority queue (or a min-heap) with the starting node.
2. Initialize a set to keep track of visited nodes.
3. While the priority queue is not empty:
a. Dequeue the node with the lowest heuristic value.
b. If the current node is the goal, the algorithm terminates with success.
c. Mark the current node as visited.
d. Enqueue all unvisited neighbors of the current node into the priority queue with their
heuristic values as the priority.
4. If the priority queue becomes empty and the goal is not reached, the algorithm
terminates with failure.
Algorithm (A* Search):
1. Initialize an empty priority queue (or a min-heap) with the starting node and set the
priority to the sum of the cost-so-far and the heuristic estimate.
2. Initialize a set to keep track of visited nodes.
3. While the priority queue is not empty:
a. Dequeue the node with the lowest total cost (cost-so-far + heuristic estimate).
b. If the current node is the goal, the algorithm terminates with success.
c. Mark the current node as visited.
d. Enqueue all unvisited neighbors of the current node into the priority queue with
updated total costs.
4. If the priority queue becomes empty and the goal is not reached, the algorithm
terminates with failure.
Program:
import heapq
from typing import Dict, List, Tuple

class Node:
    def __init__(self, id: str, heuristic: int = 0, cost_so_far: int = 0):
        self.id = id
        self.heuristic = heuristic
        self.cost_so_far = cost_so_far

    def __lt__(self, other):
        # Priority is f = g + h; for greedy search g (cost_so_far) stays 0, so f = h
        return (self.cost_so_far + self.heuristic) < (other.cost_so_far + other.heuristic)

def search(graph: Dict[str, List[Tuple[str, int]]], start: str, goal: str, is_a_star: bool) -> bool:
    priority_queue = []
    visited = set()
    heapq.heappush(priority_queue, Node(start, get_heuristic(start), 0))
    while priority_queue:
        current = heapq.heappop(priority_queue)
        if current.id == goal:
            print(f"Goal reached using {'A* Search' if is_a_star else 'Greedy Best-First Search'}.")
            return True
        if current.id in visited:
            continue
        visited.add(current.id)
        # Enqueue all unvisited neighbours with their heuristic (and, for A*, path cost)
        for neighbour, step_cost in graph[current.id]:
            if neighbour not in visited:
                g = current.cost_so_far + step_cost if is_a_star else 0
                heapq.heappush(priority_queue, Node(neighbour, get_heuristic(neighbour), g))
    print(f"Goal not reached using {'A* Search' if is_a_star else 'Greedy Best-First Search'}.")
    return False
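The manual's listing stops before get_heuristic and the driver code. Below is a minimal sketch of both, assuming a small weighted graph and heuristic table in which the goal 'G' is deliberately unreachable, so that the run reproduces the output shown below:

# Assumed weighted adjacency list and heuristic table (not given in the manual)
graph = {'A': [('B', 1), ('C', 4)], 'B': [('D', 5)], 'C': [('D', 1)], 'D': []}
heuristic = {'A': 3, 'B': 2, 'C': 2, 'D': 1}

def get_heuristic(node_id: str) -> int:
    return heuristic.get(node_id, 0)

if __name__ == "__main__":
    search(graph, 'A', 'G', is_a_star=False)
    search(graph, 'A', 'G', is_a_star=True)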
Output:
Goal not reached using Greedy Best-First Search.
Goal not reached using A* Search.
Result:
Thus the Greedy and A* algorithms have been executed successfully and the output was
verified.
EXP NO : 05
Implement the non-parametric locally weighted regression
algorithm in order to fit data points. Select an appropriate data set
for your experiment and draw graphs
DATE:
Aim:
To implement the non-parametric locally weighted regression algorithm in order to fit data
points, selecting an appropriate data set and drawing graphs
Program:
import numpy as np
import matplotlib.pyplot as plt

def kernel(x0, X, tau):
    # Gaussian weights that fall off with distance from the query point x0
    return np.exp(-np.sum((X - x0)**2, axis=1) / (2 * tau**2))

def locally_weighted_regression(X, y, tau):
    m = X.shape[0]
    y_pred = np.zeros(m)
    for i in range(m):
        weights = np.diag(kernel(X[i], X, tau))
        # Weighted normal equation: theta = (X^T W X)^(-1) X^T W y
        theta = np.linalg.inv(X.T @ weights @ X) @ X.T @ weights @ y
        y_pred[i] = X[i] @ theta
    return y_pred

# Generate synthetic data
np.random.seed(42)
X = np.linspace(-3, 3, 100).reshape(-1, 1)
y = (np.sin(X) + np.random.normal(0, 0.1, size=X.shape)).ravel()

# Add intercept
X_bias = np.hstack((np.ones_like(X), X))

# Fit model and predict
tau = 0.5
y_pred = locally_weighted_regression(X_bias, y, tau)

# Plot results
plt.scatter(X, y, label="Data", color="blue")
plt.plot(X, y_pred, label="Locally Weighted Regression", color="red")
plt.xlabel("X")
plt.ylabel("y")
plt.legend()
plt.title("Locally Weighted Regression")
plt.show()
Output:
(A scatter plot of the noisy data with the fitted locally weighted regression curve is displayed.)
Result:
Thus the non-parametric locally weighted regression algorithm has been executed
successfully and the output was verified.
EXP NO : 06
Demonstrate the working of the decision tree based algorithm
DATE:
Aim:
To write a python program to demonstrate the working of the decision tree based algorithm
Program:
class DataPoint:
    def __init__(self, feature1, feature2, target):
        self.feature1 = feature1
        self.feature2 = feature2
        self.target = target
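The listing shows only the DataPoint container; the tree itself and the driver code are missing from the manual. Below is a minimal sketch of a hand-built decision tree such a program might use, where the split thresholds and the test sample are assumptions chosen so the prediction matches the output shown below:

def predict(point):
    # Hand-built tree: split on feature1, then on feature2 (assumed thresholds)
    if point.feature1 <= 2.5:
        return 0
    return 1 if point.feature2 > 1.0 else 0

# Assumed test sample: feature1 > 2.5 and feature2 > 1.0, so class 1 is predicted
sample = DataPoint(3.0, 1.5, target=None)
print("Predicted Class:", predict(sample))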
Output:
Predicted Class: 1
Result:
Thus the decision tree based algorithm have been executed successfully and the output got
verified.
EXP NO : 07
Build an artificial neural network by implementing the back
propagation algorithm and test the same using appropriate data sets
DATE:
Aim:
To build an artificial neural network by implementing the back propagation algorithm and
test the same using appropriate data sets
Algorithm:
The neural network is defined by the NeuralNetwork class, containing the input size,
hidden layer size, output layer size, weight matrices for the input-to-hidden and
hidden-to-output layers, and vectors for the hidden and output layers.
Activation Functions:
sigmoid and sigmoid_derivative implement the sigmoid activation function and its
derivative, used in the forward pass and in backpropagation respectively.
Initialize Neural Network:
Initializes the neural network with random weights in the specified range.
Forward Pass:
Performs the forward pass of the neural network, calculating the values of the hidden
and output layers using the sigmoid activation function.
Backpropagation:
Implements the backpropagation algorithm to update weights based on the error between
predicted and target outputs. It computes output errors, hidden errors, and updates
weights accordingly.
Program:
import random
import math

class NeuralNetwork:
    def __init__(self, input_size, hidden_size, output_size):
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        # Random weights in [-1, 1] for the input-to-hidden and hidden-to-output layers
        self.weights_input_hidden = [[random.uniform(-1, 1) for _ in range(input_size)]
                                     for _ in range(hidden_size)]
        self.weights_hidden_output = [[random.uniform(-1, 1) for _ in range(hidden_size)]
                                      for _ in range(output_size)]
        self.hidden_layer = [0.0] * hidden_size
        self.output_layer = [0.0] * output_size

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    return x * (1.0 - x)

def backpropagation(nn, input_data, target, lr):
    # Output error deltas: (target - prediction) scaled by the sigmoid derivative
    output_errors = [(target[i] - nn.output_layer[i]) * sigmoid_derivative(nn.output_layer[i])
                     for i in range(nn.output_size)]
    # Hidden error deltas, computed before the hidden-to-output weights are updated
    hidden_errors = [0.0] * nn.hidden_size
    for i in range(nn.hidden_size):
        for j in range(nn.output_size):
            hidden_errors[i] += output_errors[j] * nn.weights_hidden_output[j][i]
        hidden_errors[i] *= sigmoid_derivative(nn.hidden_layer[i])
    # Update hidden-to-output weights
    for i in range(nn.output_size):
        for j in range(nn.hidden_size):
            nn.weights_hidden_output[i][j] += lr * output_errors[i] * nn.hidden_layer[j]
    # Update input-to-hidden weights
    for i in range(nn.hidden_size):
        for j in range(nn.input_size):
            nn.weights_input_hidden[i][j] += lr * hidden_errors[i] * input_data[j]
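The algorithm section describes a forward pass and a training phase that the listing omits. Below is a minimal sketch of both, assuming an XOR-style data set for illustration; note that the manual's network has no bias terms, so the fit on XOR is only approximate:

def forward(nn, input_data):
    # Hidden layer activations
    for i in range(nn.hidden_size):
        total = sum(nn.weights_input_hidden[i][j] * input_data[j] for j in range(nn.input_size))
        nn.hidden_layer[i] = sigmoid(total)
    # Output layer activations
    for i in range(nn.output_size):
        total = sum(nn.weights_hidden_output[i][j] * nn.hidden_layer[j] for j in range(nn.hidden_size))
        nn.output_layer[i] = sigmoid(total)
    return nn.output_layer

if __name__ == "__main__":
    data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]  # assumed XOR data
    nn = NeuralNetwork(input_size=2, hidden_size=4, output_size=1)
    for epoch in range(10000):
        for inputs, target in data:
            forward(nn, inputs)
            backpropagation(nn, inputs, target, lr=0.5)
    for inputs, _ in data:
        print(inputs, "->", [round(o, 3) for o in forward(nn, inputs)])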
Result:
Thus the Python program implementing the back propagation algorithm to build an
artificial neural network has been executed successfully and the output was verified.
EXP NO : 08
Write a program to implement the naïve Bayesian classifier
DATE:
Aim:
To write python program to implement the naïve Bayesian classifier
Algorithm:
Training Algorithm:
For each training sample, increment the count for its class label and, for every feature
value in the sample, increment that feature's count for the class.
Prediction Algorithm:
For each class, compute the prior probability from the class counts and combine it with the
conditional probability of each test feature given the class; return the class with the
highest resulting probability.
Program:
from collections import defaultdict

class NaiveBayesClassifier:
    def __init__(self):
        self.feature_count_by_class = defaultdict(lambda: defaultdict(int))
        self.class_count = defaultdict(int)
        self.total_samples = 0

class TrainingSample:
    def __init__(self, features, label):
        self.features = features
        self.label = label
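The manual omits the bodies of train and predict between the class definitions and the driver code. Below is a minimal sketch of the two, assuming categorical features and add-one (Laplace) smoothing; they are attached to NaiveBayesClassifier so the driver below runs unchanged:

import math

def train(self, samples):
    # Count class occurrences and per-class feature occurrences
    for sample in samples:
        self.class_count[sample.label] += 1
        self.total_samples += 1
        for feature in sample.features:
            self.feature_count_by_class[sample.label][feature] += 1

def predict(self, features):
    # Pick the class with the highest log posterior probability
    best_label, best_log_prob = None, float("-inf")
    for label, count in self.class_count.items():
        log_prob = math.log(count / self.total_samples)  # class prior
        for feature in features:
            # Add-one smoothing so unseen features do not zero the product
            numerator = self.feature_count_by_class[label][feature] + 1
            denominator = count + len(features)
            log_prob += math.log(numerator / denominator)
        if log_prob > best_log_prob:
            best_label, best_log_prob = label, log_prob
    return best_label

NaiveBayesClassifier.train = train
NaiveBayesClassifier.predict = predict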
if __name__ == "__main__":
    classifier = NaiveBayesClassifier()
    training_data = [
        TrainingSample(["Sunny", "Hot", "High", "Weak"], "No"),
        # ... (other training samples)
    ]
    classifier.train(training_data)
    test_features = ["Rain", "Cool", "Normal", "Weak"]
    predicted_class = classifier.predict(test_features)
    print("Predicted Class:", predicted_class)
Output:
Predicted Class: No
Result:
Thus the Python program implementing the naive Bayesian classifier has been executed
successfully and the output was verified.
EXP NO : 09
Implementing neural network using self-organizing map
DATE:
Aim :
To implement a neural network using a self-organizing map
Algorithm :
1. Initialization:
o initializeWeights initializes SOM weights randomly.
2. Finding the Best Matching Unit (BMU):
o findBestMatchingUnit identifies the unit closest to the input vector.
3. Updating Weights:
o updateWeights adjusts weights based on BMU and input, using Gaussian
neighborhood function.
4. Training:
o train iterates through data for epochs.
o For each input vector, finds BMU and updates weights.
o Adjusts learning rate and radius after each epoch.
5. Get Weights and Print:
o getWeights retrieves weights of a unit.
o printWeights prints trained weights for each unit.
6. Main Function:
o Sets parameters (input size, map size, epochs, learning rate, radius).
o Provides training data, initializes SOM.
o Trains SOM, prints final weights for each unit.
Program :
import random
import math

class SOM:
    def __init__(self, input_size, map_size, learning_rate, radius):
        self.input_size = input_size
        self.map_size = map_size
        self.learning_rate = learning_rate
        self.radius = radius
        # Flat weight vector: map_size units, each with input_size weights
        self.weights = [random.random() for _ in range(input_size * map_size)]

    def get_weights(self, unit):
        start = unit * self.input_size
        return self.weights[start:start + self.input_size]

    def print_weights(self):
        for i in range(self.map_size):
            unit_weights = self.get_weights(i)
            print(f"Unit {i} weights:", *unit_weights)
Result:
Thus the Python program implementing a neural network using a self-organizing map has
been executed successfully and the output was verified.
EXP NO : 10
Implementing k-Means algorithm to cluster a set of data
DATE:
Aim :
To implement the k-Means algorithm to cluster a set of data
Algorithm :
1. Initialization:
o initializeCentroids: Randomly selects k data points from the input data to
initialize cluster centroids.
2. Distance Calculation:
o distance: Calculates the Euclidean distance between two points.
3. Finding the Nearest Centroid:
o findNearestCentroid: Determines the index of the nearest centroid for a given
data point based on Euclidean distance.
4. Assigning Points to Clusters:
o assignToClusters: Assigns each data point to its nearest cluster based on the
nearest centroid.
5. Updating Centroids:
o updateCentroids: Updates centroids by calculating the mean of points assigned
to each cluster.
6. Convergence Check:
o hasConverged: Checks for convergence by comparing the distance between
old and new centroids.
7. Training (k-Means Algorithm):
o kMeans: Executes the main k-Means algorithm, including centroid
initialization, point assignment, centroid update, and convergence check.
Repeated for a specified number of iterations or until convergence.
8. Printing Clusters:
o printClusters: Outputs the final clusters along with their data points.
9. Main Function:
o Initializes a set of 2D points as input data.
o Creates an instance of the KMeans class with the desired number of clusters
(k).
o Calls the kMeans method to perform the clustering.
o Prints the final clusters using the printClusters method.
Program :
import random
import math

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

class KMeans:
    def __init__(self, k):
        self.k = k
        self.centroids = []
        self.clusters = []

    def update_centroids(self):
        # Each centroid becomes the mean of the points assigned to its cluster
        for i in range(self.k):
            if self.clusters[i]:
                sum_x = sum(point.x for point in self.clusters[i])
                sum_y = sum(point.y for point in self.clusters[i])
                centroid_x = sum_x / len(self.clusters[i])
                centroid_y = sum_y / len(self.clusters[i])
                self.centroids[i] = Point(centroid_x, centroid_y)

    def print_clusters(self):
        for i, cluster in enumerate(self.clusters):
            print(f"Cluster {i + 1}: ", end="")
            print(*[(point.x, point.y) for point in cluster])
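The listing omits initialize_centroids, distance, find_nearest_centroid, assign_to_clusters, and the k_means driver that the algorithm section describes. Below is a minimal sketch, attaching the methods to the KMeans class above; the iteration limit, convergence tolerance, and input data are assumptions:

def initialize_centroids(self, data):
    # Randomly select k data points as the initial centroids
    self.centroids = [Point(p.x, p.y) for p in random.sample(data, self.k)]

def distance(self, p, q):
    return math.sqrt((p.x - q.x) ** 2 + (p.y - q.y) ** 2)

def find_nearest_centroid(self, point):
    distances = [self.distance(point, c) for c in self.centroids]
    return distances.index(min(distances))

def assign_to_clusters(self, data):
    self.clusters = [[] for _ in range(self.k)]
    for point in data:
        self.clusters[self.find_nearest_centroid(point)].append(point)

def k_means(self, data, max_iterations=100):
    self.initialize_centroids(data)
    for _ in range(max_iterations):
        old_centroids = [Point(c.x, c.y) for c in self.centroids]
        self.assign_to_clusters(data)
        self.update_centroids()
        # Converged when no centroid has moved
        if all(self.distance(o, c) < 1e-9 for o, c in zip(old_centroids, self.centroids)):
            break

KMeans.initialize_centroids = initialize_centroids
KMeans.distance = distance
KMeans.find_nearest_centroid = find_nearest_centroid
KMeans.assign_to_clusters = assign_to_clusters
KMeans.k_means = k_means

# Assumed 2-D input data for illustration
data = [Point(1, 1), Point(1.5, 2), Point(1, 0.5), Point(8, 8), Point(9, 9), Point(8.5, 9.5)]
km = KMeans(k=2)
km.k_means(data)
km.print_clusters()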
Result:
Thus the Python program implementing the k-Means algorithm to cluster a set of data has
been executed successfully and the output was verified.
EXP NO : 11
Implementing hierarchical clustering algorithm
DATE:
Aim :
To implement the hierarchical clustering algorithm
Algorithm :
1. Initialization:
The constructor stores the points and builds a matrix of pairwise Euclidean distances.
2. Clustering:
cluster repeatedly finds the two closest clusters, merges them, and updates the distance
matrix until a single cluster remains.
3. Main Function:
The main function demonstrates the usage of the HierarchicalClustering class with a
list of points.
Program :
import math

def euclidean_distance(p, q):
    return math.sqrt((p.x - q.x) ** 2 + (p.y - q.y) ** 2)

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

class HierarchicalClustering:
    def __init__(self, points):
        self.points = points
        self.distances = [[0.0] * len(points) for _ in range(len(points))]
        self.initialize_distances()

    def initialize_distances(self):
        for i in range(len(self.points)):
            for j in range(i):
                distance = euclidean_distance(self.points[i], self.points[j])
                self.distances[i][j] = self.distances[j][i] = distance

    def cluster(self):
        while len(self.points) > 1:
            # Find the closest pair of clusters
            min_i, min_j = 0, 0
            min_distance = float('inf')
            for i in range(len(self.points)):
                for j in range(i):
                    if self.distances[i][j] < min_distance:
                        min_distance = self.distances[i][j]
                        min_i, min_j = i, j
            self.merge_clusters(min_i, min_j)

    def merge_clusters(self, i, j):
        # Replace cluster i by the midpoint of clusters i and j
        self.points[i] = Point((self.points[i].x + self.points[j].x) / 2,
                               (self.points[i].y + self.points[j].y) / 2)
        # Single-linkage update of the distance matrix
        for k in range(len(self.points)):
            if k != i:
                self.distances[i][k] = self.distances[k][i] = min(self.distances[i][k],
                                                                  self.distances[j][k])
        # Remove cluster j from the points and the distance matrix
        del self.points[j]
        del self.distances[j]
        for row in self.distances:
            del row[j]

    def print_clusters(self):
        print("Final Clusters:")
        for point in self.points:
            print(f"({point.x}, {point.y})")

if __name__ == "__main__":
    data = [Point(1, 2), Point(5, 8), Point(1.5, 1.8), Point(8, 8), Point(1, 0.6), Point(9, 11)]
    clustering = HierarchicalClustering(data)
    clustering.cluster()
    clustering.print_clusters()
Output:
Final Clusters:
(4.4375, 5.375)
Result:
Thus the hierarchical clustering algorithm has been executed successfully and the output
was verified.