CS3491
LAB MANUAL
II YEAR IT
R-2021
Syllabus
Ex.1.1 Date:
Aim:
To implement uninformed search algorithms such as Breadth First Search and Depth First Search using Python.
1.1 Breadth First Search
Algorithm:
1. Start
2. Create a list of visited nodes and a queue, and add the starting node to both.
3. While the queue is not empty, remove the node at the front of the queue and print it.
4. Append every unvisited neighbour of that node to the visited list and to the queue.
5. Repeat steps 3-4 until the queue is empty.
6. Stop
Program:
graph = {
    '5' : ['3','7'],
    '3' : ['2','4'],   # entry restored so every listed node is reachable
    '7' : ['8'],
    '2' : ['1','6'],
    '4' : ['8'],
    '8' : [],
    '1' : [],
    '6' : []
}

visited = []   # List to keep track of visited nodes
queue = []     # Initialize a queue

def bfs(visited, graph, node):
    visited.append(node)
    queue.append(node)
    while queue:                     # visit nodes in FIFO order
        m = queue.pop(0)
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

# Driver Code
print("Following is the Breadth-First Search")
bfs(visited, graph, '5')
Ex.1.2 Date:
1.2 Depth First Search
Algorithm:
1. Start
2. Create a set to keep track of visited nodes.
3. If the current node has not been visited, print it and mark it as visited.
4. Recursively apply step 3 to each neighbour of the current node.
5. Repeat until all reachable nodes have been visited.
6. Stop
Program:
graph = {
    '5' : ['3','7'],
    '3' : ['2','4'],   # entry restored so every listed node is reachable
    '7' : ['8'],
    '2' : ['1','6'],
    '4' : ['8'],
    '8' : [],
    '1' : [],
    '6' : []
}
visited = set()   # Set to keep track of visited nodes

def dfs(visited, graph, node):
    if node not in visited:
        print(node)
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

# Driver Code
print("Following is the Depth-First Search")
dfs(visited, graph, '5')
Result:
Thus the uninformed search algorithms BFS and DFS were implemented and executed successfully.
Ex.2 Date:
Aim:
To implement informed search algorithms such as A* and memory-bounded A* search using Python.
2.1 A* Search algorithm
Algorithm:
1. Start
2. Begin with the start node in the open list and an empty closed list.
3. While there are nodes in the open list, continue iterating.
4. Choose the node with the lowest combined cost and heuristic value (f(n) = g(n) + h(n)); a short sketch of this ordering follows the list.
5. If the current node is the goal, reconstruct and return the path.
6. Remove the current node from the open list, add it to the closed list, and expand its
neighbors.
7. Update the cost (g(n)), heuristic estimate (h(n)), and parent of each neighbor if necessary.
8. Add neighbors to the open list if they are not already present.
9. After finding the goal, reconstruct the path from the start node to the goal node.
10. Stop
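Step 4 relies on the open list always yielding the node with the smallest f(n). The program below realises this with Python's heapq module by pushing (f, node) tuples, which the heap orders by their first element. A minimal sketch, with illustrative f values and coordinates:

import heapq

open_set = []
# Push (f = g + h, node) pairs; heapq keeps the smallest f on top.
heapq.heappush(open_set, (6, (0, 1)))
heapq.heappush(open_set, (4, (1, 0)))
heapq.heappush(open_set, (5, (1, 1)))

f, node = heapq.heappop(open_set)
print(f, node)   # 4 (1, 0) -- the lowest-f node is expanded first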
Program:
import heapq
from collections import deque

def reconstruct_path(came_from, current):
    path = deque()
    while current in came_from:      # walk back from the goal to the start
        path.appendleft(current)
        current = came_from[current]
    return path                      # note: the start node itself is not included

def a_star(grid, start, goal):
    def heuristic(a, b):             # Manhattan distance
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_set = []
    heapq.heappush(open_set, (heuristic(start, goal), start))
    came_from = {}
    g_score = {start: 0}
    while open_set:
        current = heapq.heappop(open_set)[1]
        if current == goal:
            return reconstruct_path(came_from, current)
        for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            neighbor = (current[0] + dx, current[1] + dy)
            if 0 <= neighbor[0] < len(grid) and 0 <= neighbor[1] < len(grid[0]) and not grid[neighbor[0]][neighbor[1]]:
                tentative_g_score = g_score[current] + 1
                if neighbor not in g_score or tentative_g_score < g_score[neighbor]:
                    came_from[neighbor] = current
                    g_score[neighbor] = tentative_g_score
                    heapq.heappush(open_set, (tentative_g_score + heuristic(neighbor, goal), neighbor))
    return None

# Driver Code
grid = [
    [0, 0, 0, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 0, 0, 0]
]
start = (0, 0)
goal = (4, 4)
path = a_star(grid, start, goal)
print(list(path))
Output:
[(0, 1), (0, 2), (0, 3), (0, 4), (1, 4), (2, 4), (3, 4), (4, 4)]
2.2 Memory-Bounded A* Search
Algorithm:
1. Start
2. Create an open list (priority queue) to store nodes to be explored.
3. Create a closed set to store explored nodes.
4. Push the start node onto the open list with a cost estimate (f_cost) calculated as the sum
of its g_cost (cost from start to current node) and h_cost (heuristic estimate from current
node to goal).
5. While the open list is not empty:
a. Pop the node with the lowest f_cost from the open list.
b. This node becomes the current node.
6. If the current node is the goal node,
a. Reconstruct the path by following the parent pointers from the goal node back to
the start node.
b. Return the path and the total cost.
7. Else:
a. Add the current node to the closed set.
8. Generate successor nodes by applying valid actions (neighbors) to the current node.
9. For each successor node:
a. Calculate its g_cost (cost from start to successor node) by adding the cost of
reaching the successor node from the current node.
b. Calculate its h_cost (heuristic estimate from successor node to goal).
c. Calculate its f_cost as the sum of g_cost and h_cost.
d. If the f_cost of the successor node is within the memory limit (see the sketch after this list):
i. If the successor node has not been explored (not in the closed set), add it
to the open list.
10. If the goal is not found within the memory limit or there is no valid path,
a. Return None.
11. Stop.
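Step 9d is what makes the search memory-bounded: a successor whose f_cost exceeds the limit is never queued at all. A minimal illustration of the check, using the same limit of 10 as the driver code below (the successor costs here are hypothetical):

memory_limit = 10
g_cost, h_cost = 4, 7            # hypothetical successor costs
f_cost = g_cost + h_cost         # f = 11
if f_cost <= memory_limit:
    print("queued")
else:
    print("pruned")              # printed here, since 11 > 10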
Program:
import heapq

class Node:
    def __init__(self, state, g_cost, h_cost, parent=None):
        self.state = state
        self.g_cost = g_cost
        self.h_cost = h_cost
        self.f_cost = g_cost + h_cost
        self.parent = parent

    def __lt__(self, other):         # lets heapq order nodes by f_cost
        return self.f_cost < other.f_cost

def memory_bounded_a_star(start, goal, heuristic, neighbors, memory_limit):
    open_list = [Node(start, 0, heuristic(start, goal))]
    closed_set = set()
    while open_list:
        current_node = heapq.heappop(open_list)
        if current_node.state in closed_set:
            continue                 # skip states that were already expanded
        if current_node.state == goal:
            # Reconstruct path
            path = []
            cost = current_node.g_cost
            while current_node:
                path.append(current_node.state)
                current_node = current_node.parent
            return path[::-1], cost
        closed_set.add(current_node.state)
        for successor in neighbors(current_node.state):
            g_cost = current_node.g_cost + 1       # unit step cost
            h_cost = heuristic(successor, goal)
            # Queue the successor only if its f_cost fits within the memory limit
            if g_cost + h_cost <= memory_limit and successor not in closed_set:
                heapq.heappush(open_list, Node(successor, g_cost, h_cost, current_node))
    return None

# Example usage:
def heuristic(state, goal):
    return abs(state[0] - goal[0]) + abs(state[1] - goal[1])

def neighbors(state):
    x, y = state
    return [(x-1, y), (x+1, y), (x, y-1), (x, y+1)]

start = (0, 0)
goal = (4, 4)
memory_limit = 10   # Example memory limit (an upper bound on f_cost)

result = memory_bounded_a_star(start, goal, heuristic, neighbors, memory_limit)
if result:
    path, cost = result
    print("Path:", path)
else:
    print("Path not found within memory limit.")
Output:
Result:
Thus the informed search algorithms A* and memory-bounded A* were implemented and executed successfully.
Ex.3 Date:
Aim:
To implement a Naïve Bayes classifier model using Python and scikit-learn.
Algorithm:
Step 1: Import the required libraries.
Step 2: Prepare the sample data X and the class labels y.
Step 3: Encode the categorical features as integers using LabelEncoder.
Step 4: Split the data into training and testing sets.
Step 5: Train and Predict given data using Naive Bayes Classifier.
Step 6: Evaluate the predictions with the accuracy score.
Program:
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Sample data
X = [[1, 'S'], [1, 'M'], [1, 'M'], [1, 'S'], [1, 'S'],
     [2, 'S'], [2, 'M'], [2, 'M'], [2, 'L'], [2, 'L'],
     [3, 'L'], [3, 'M'], [3, 'M'], [3, 'L'], [3, 'L']]
y = ['N', 'N', 'Y', 'Y', 'N', 'N', 'N', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'N']

# Encoding the categorical second feature ('S'/'M'/'L') as integers
le = LabelEncoder()
X = [[row[0], code] for row, code in zip(X, le.fit_transform([row[1] for row in X]))]

# Splitting (split parameters assumed; a 0.2 test size leaves 3 test samples)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

# Training
model = GaussianNB()
model.fit(X_train, y_train)
# Predicting
y_pred = model.predict(X_test)
# Accuracy
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
Output:
Accuracy: 0.3333333
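The accuracy of 1/3 is unsurprising: the test split holds only three samples, and GaussianNB assumes continuous, normally distributed features, while both features here are categorical. A variant using scikit-learn's CategoricalNB, which models discrete features directly, is sketched below (assuming the same encoded X_train, X_test, y_train and y_test as above):

from sklearn.naive_bayes import CategoricalNB

# CategoricalNB treats each non-negative integer feature value as a
# category index, a better match for label-encoded data than a Gaussian.
cat_model = CategoricalNB()
cat_model.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, cat_model.predict(X_test)))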
Result:
Thus the Naïve Bayes classifier model was implemented in Python using scikit-learn and executed successfully.
Ex.4 Date:
Aim:
The aim of implementing Bayesian Networks is to model the probabilistic relationships between a set of variables.
Algorithm:
1. Define the variables: The first step in implementing a Bayesian Network is to define the
variables that will be used in the model. Each variable should be clearly defined and its possible
states should be enumerated.
2. Determine the relationships between variables: The next step is to determine the
probabilistic relationships between the variables. This can be done by identifying the causal
relationships between the variables or by using data to estimate the conditional probabilities of
each variable given its parents.
3. Construct the Bayesian Network: The Bayesian Network can be constructed by
representing the variables as nodes in a directed acyclic graph (DAG). The edges between the
nodes represent the conditional dependencies between the variables.
4. Assign probabilities to the variables: Once the structure of the Bayesian Network has
been defined, the probabilities of each variable must be assigned. This can be done by using
expert knowledge, data, or a combination of both.
5. Inference: Inference refers to the process of using the Bayesian Network to make
predictions or draw conclusions. This can be done by using various inference algorithms, such as
variable elimination or belief propagation.
6. Learning: Learning refers to the process of updating the probabilities in the Bayesian
Network based on new data. This can be done using various learning algorithms, such as
maximum likelihood or Bayesian learning.
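Before turning to the full program, here is a minimal sketch of steps 1-5 on a toy two-variable network (the variable names and probabilities are invented for illustration, and pgmpy is assumed to be installed):

from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Steps 1-3: two Boolean variables and one edge, Rain -> WetGrass (a DAG)
model = BayesianNetwork([('Rain', 'WetGrass')])

# Step 4: assign probabilities from "expert knowledge"
cpd_rain = TabularCPD('Rain', 2, [[0.8], [0.2]])     # P(Rain)
cpd_wet = TabularCPD('WetGrass', 2,
                     [[0.9, 0.1],                    # P(WetGrass=0 | Rain)
                      [0.1, 0.9]],                   # P(WetGrass=1 | Rain)
                     evidence=['Rain'], evidence_card=[2])
model.add_cpds(cpd_rain, cpd_wet)
model.check_model()

# Step 5: inference by variable elimination
infer = VariableElimination(model)
print(infer.query(variables=['Rain'], evidence={'WetGrass': 1}))
# By Bayes' rule, P(Rain=1 | WetGrass=1) = 0.18 / 0.26, roughly 0.69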
Program:
import numpy as np
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

heartDisease = pd.read_csv('heart.csv').replace('?', np.nan)  # file name assumed
print(heartDisease.head())

# Structure of the DAG; only the last two edges survive in the source,
# the remaining edges are assumed for illustration
model = BayesianNetwork([('age','heartdisease'),('sex','heartdisease'),('cp','heartdisease'),
    ('heartdisease','restecg'),('heartdisease','thalach'),('heartdisease','chol')])

# Learning CPDs using Maximum Likelihood Estimators
print('\n Learning CPD using Maximum likelihood estimators')
model.fit(heartDisease, estimator=MaximumLikelihoodEstimator)

# Inference with Variable Elimination
HeartDisease_infer = VariableElimination(model)
q = HeartDisease_infer.query(variables=['heartdisease'], evidence={'age': 28})
print(q)   # older pgmpy versions indexed the result: print(q['heartdisease'])
q = HeartDisease_infer.query(variables=['heartdisease'], evidence={'chol': 100})  # second query; evidence assumed
print(q)
Output:
   age  sex  cp  trestbps  ...  slope  ca  thal  heartdisease
0   63    1   1       145  ...      3   0     6             0
1   67    1   4       160  ...      2   3     3             2
2   67    1   4       120  ...      2   2     7             1
3   37    1   3       130  ...      3   0     3             0
4   41    0   2       130  ...      1   0     3             0
[5 rows x 14 columns]
╒════════════════╤═════════════════════╕
│ heartdisease │ phi(heartdisease) │
╞════════════════╪═════════════════════╡
│ heartdisease_0 │ 0.6791 │
├────────────────┼─────────────────────┤
│ heartdisease_1 │ 0.1212 │
├────────────────┼─────────────────────┤
│ heartdisease_2 │ 0.0810 │
├────────────────┼─────────────────────┤
│ heartdisease_3 │ 0.0939 │
├────────────────┼─────────────────────┤
│ heartdisease_4 │ 0.0247 │
╘════════════════╧═════════════════════╛
╒════════════════╤═════════════════════╕
│ heartdisease │ phi(heartdisease) │
╞════════════════╪═════════════════════╡
│ heartdisease_0 │ 0.5400 │
├────────────────┼─────────────────────┤
│ heartdisease_1 │ 0.1533 │
├────────────────┼─────────────────────┤
│ heartdisease_2 │ 0.1303 │
├────────────────┼─────────────────────┤
│ heartdisease_3 │ 0.1259 │
├────────────────┼─────────────────────┤
│ heartdisease_4 │ 0.0506 │
╘════════════════╧═════════════════════╛
Result:
Thus the Bayesian Network for the heart disease dataset was constructed and inference was executed successfully.