
Omsakthi

Adhiparasakthi Engineering College, Melmaruvathur

Department of Information Technology

CS3491

ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING LABORATORY

LAB MANUAL

II YEAR IT
R-2021
Syllabus

PRACTICAL EXERCISES: 30 PERIODS

1. Implementation of Uninformed search algorithms (BFS, DFS)
2. Implementation of Informed search algorithms (A*, memory-bounded A*)
3. Implement naïve Bayes models
4. Implement Bayesian Networks
5. Build Regression models
6. Build decision trees and random forests
7. Build SVM models
8. Implement ensembling techniques
9. Implement clustering algorithms
10. Implement EM for Bayesian networks
11. Build simple NN models
12. Build deep learning NN models

1. Implementation of Uninformed search algorithms (BFS, DFS)

Ex.1.1 Date:

Aim:

To implement uninformed search algorithms such as Breadth First Search and Depth First Search using Python.
1.1 Breadth First Search

Algorithm:

1. Start
2. Insert the start node into the queue and mark it as visited.
3. While the queue is not empty, remove the node at the front of the queue and print it.
4. For each unvisited neighbour of that node, mark it as visited and add it to the rear of the queue.
5. Stop when the queue is empty.

Program:

graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': ['1', '6'],
    '4': ['8'],
    '8': [],
    '1': [],
    '6': []
}

visited = []   # List for visited nodes.
queue = []     # Initialize a queue

def bfs(visited, graph, node):   # Function for BFS
    visited.append(node)
    queue.append(node)

    while queue:   # Loop to visit each node
        m = queue.pop(0)
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

# Driver code
print("Following is the Breadth-First Search")
bfs(visited, graph, '5')   # Function call


Output:

Following is the Breadth-First Search
5 3 7 2 4 8 1 6
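Note: queue.pop(0) shifts every remaining element, so each dequeue is O(n). For larger graphs a collections.deque gives O(1) dequeues; a minimal variant of the same traversal, reusing the graph defined above:

from collections import deque

def bfs_deque(graph, start):        # BFS with an O(1) queue
    visited = {start}               # Set membership tests are O(1) as well
    queue = deque([start])
    while queue:
        m = queue.popleft()         # O(1) removal from the front
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)

bfs_deque(graph, '5')               # Prints: 5 3 7 2 4 8 1 6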

Ex.1.2 Date:

1.2 Depth First Search

Algorithm:

1. Start
2. If the current node has not been visited, print it and mark it as visited.
3. Recursively apply the same procedure to every neighbour of the current node.
4. Stop when all reachable nodes have been visited.

Program:

# Using a Python dictionary to act as an adjacency list
graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': ['1', '6'],
    '4': ['8'],
    '8': [],
    '1': [],
    '6': []
}

visited = set()   # Set to keep track of visited nodes of the graph.

def dfs(visited, graph, node):   # Function for DFS
    if node not in visited:
        print(node)
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

# Driver code
print("Following is the Depth-First Search")
dfs(visited, graph, '5')


Output:

Following is the Depth-First Search


5
3
2
1
6
4
8
7
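The recursive version above is limited by Python's default recursion depth (about 1000 frames). An equivalent iterative sketch with an explicit stack avoids this; for the graph above it prints the same order:

def dfs_iterative(graph, start):    # DFS with an explicit stack instead of recursion
    visited = set()
    stack = [start]
    while stack:
        node = stack.pop()          # LIFO: most recently added node first
        if node not in visited:
            print(node)
            visited.add(node)
            # Push neighbours in reverse so they are expanded in listed order
            stack.extend(reversed(graph[node]))

dfs_iterative(graph, '5')           # Prints: 5 3 2 1 6 4 8 7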

Result:
Thus the uninformed search algorithms BFS and DFS were implemented and executed successfully.

2. Implementation of Informed search algorithms (A*, memory-bounded A*)

Ex.2 Date:

Aim:

To implement informed search algorithms such as A* and memory-bounded A* search using Python.
2.1 A* Search algorithm

Algorithm:
1. Start
2. Begin with the start node in the open list and an empty closed list.
3. While there are nodes in the open list, continue iterating.
4. Choose the node with the lowest combined cost and heuristic value (f(n) = g(n) + h(n)).
5. If the current node is the goal, reconstruct and return the path.
6. Remove the current node from the open list, add it to the closed list, and expand its
neighbors.
7. Update the cost (g(n)), heuristic estimate (h(n)), and parent of each neighbor if necessary.
8. Add neighbors to the open list if they are not already present.
9. After finding the goal, reconstruct the path from the start node to the goal node.
10. Stop
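For example, with the Manhattan-distance heuristic used in the program below, the start cell (0, 0) of a grid whose goal is (4, 4) has h = |0 - 4| + |0 - 4| = 8; since g = 0 at the start, its initial priority is f = g + h = 8.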

Program:

import heapq
from collections import deque

def a_star(grid, start, goal):

    def heuristic(a, b):
        # Manhattan distance between cells a and b
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def reconstruct_path(came_from, current):
        # Walk the parent pointers back from the goal; the start node has
        # no entry in came_from, so the returned path excludes the start.
        path = deque()
        while current in came_from:
            path.appendleft(current)
            current = came_from[current]
        return path

    open_set = []
    heapq.heappush(open_set, (0, start))
    came_from = {}
    g_score = {start: 0}
    f_score = {start: heuristic(start, goal)}

    while open_set:
        current = heapq.heappop(open_set)[1]

        if current == goal:
            return reconstruct_path(came_from, current)

        # The four grid neighbours: down, up, right, left
        for neighbor in [(current[0] + 1, current[1]), (current[0] - 1, current[1]),
                         (current[0], current[1] + 1), (current[0], current[1] - 1)]:
            # Stay inside the grid and skip blocked cells (value 1)
            if (0 <= neighbor[0] < len(grid) and 0 <= neighbor[1] < len(grid[0])
                    and not grid[neighbor[0]][neighbor[1]]):
                tentative_g_score = g_score[current] + 1
                if tentative_g_score < g_score.get(neighbor, float("inf")):
                    came_from[neighbor] = current
                    g_score[neighbor] = tentative_g_score
                    f_score[neighbor] = tentative_g_score + heuristic(neighbor, goal)
                    heapq.heappush(open_set, (f_score[neighbor], neighbor))

    return None

grid = [
    [0, 0, 0, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 0, 0, 0]
]

start = (0, 0)
goal = (4, 4)

path = a_star(grid, start, goal)
print(list(path))

Output:

[(0, 1), (0, 2), (0, 3), (0, 4), (1, 4), (2, 4), (3, 4), (4, 4)]
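Note that reconstruct_path only collects nodes that have an entry in came_from, so the start cell (0, 0) is not part of the returned path. If the full route is wanted, prepend it explicitly:

print([start] + list(path))   # [(0, 0), (0, 1), ..., (4, 4)]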

2.2 Memory bounded A* Search algorithm

Algorithm:

1. Start
2. Create an open list (priority queue) to store nodes to be explored.
3. Create a closed set to store explored nodes.
4. Push the start node onto the open list with a cost estimate (f_cost) calculated as the sum of its g_cost (cost from start to current node) and h_cost (heuristic estimate from current node to goal).
5. While the open list is not empty:
   a. Pop the node with the lowest f_cost from the open list.
   b. This node becomes the current node.
6. If the current node is the goal node,
   a. Reconstruct the path by following the parent pointers from the goal node back to the start node.
   b. Return the path and the total cost.
7. Else,
   a. Add the current node to the closed set.
8. Generate successor nodes by applying valid actions (neighbors) to the current node.
9. For each successor node:
   a. Calculate its g_cost by adding the cost of reaching the successor from the current node.
   b. Calculate its h_cost (heuristic estimate from the successor to the goal).
   c. Calculate its f_cost as the sum of g_cost and h_cost.
   d. If the f_cost of the successor is within the memory limit and the successor has not been explored (not in the closed set), add it to the open list.
10. If the goal is not found within the memory limit or there is no valid path, return None.
11. Stop.
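With the uniform step cost and Manhattan heuristic used in the program below, every node on an optimal (0, 0) to (4, 4) path has f = g + h = 8, so memory_limit = 10 admits the optimal route while discarding any node whose estimated total cost exceeds 10. Note that bounding the f_cost this way is a simplification: full SMA* instead bounds the number of stored nodes and evicts the worst node when memory fills.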

Program:

import heapq

class Node:
    def __init__(self, state, g_cost, h_cost, parent=None):
        self.state = state
        self.g_cost = g_cost
        self.h_cost = h_cost
        self.f_cost = g_cost + h_cost
        self.parent = parent

    def __lt__(self, other):
        # Lets heapq order nodes by their f_cost
        return self.f_cost < other.f_cost

def memory_bounded_astar(start, goal, memory_limit, heuristic, neighbors_fn, cost_fn):
    open_list = []
    closed_set = set()
    heapq.heappush(open_list, Node(start, 0, heuristic(start, goal)))

    while open_list:
        current_node = heapq.heappop(open_list)
        closed_set.add(current_node.state)

        if current_node.state == goal:
            # Reconstruct the path by following parent pointers
            path = []
            cost = current_node.g_cost
            while current_node:
                path.append(current_node.state)
                current_node = current_node.parent
            return path[::-1], cost

        for neighbor in neighbors_fn(current_node.state):
            if neighbor not in closed_set:
                g_cost = current_node.g_cost + cost_fn(current_node.state, neighbor)
                h_cost = heuristic(neighbor, goal)
                new_node = Node(neighbor, g_cost, h_cost, current_node)
                if new_node.f_cost <= memory_limit:   # Prune nodes beyond the bound
                    heapq.heappush(open_list, new_node)

    return None, None

# Example usage:
def heuristic(state, goal):
    return abs(state[0] - goal[0]) + abs(state[1] - goal[1])

def neighbors(state):
    x, y = state
    return [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]

def cost(state, next_state):
    return 1   # Uniform cost for grid movement

start = (0, 0)
goal = (4, 4)
memory_limit = 10   # Example memory limit

# Name the result total_cost so the cost() function above is not shadowed
path, total_cost = memory_bounded_astar(start, goal, memory_limit, heuristic, neighbors, cost)

if path is not None:
    print("Optimal path found with cost:", total_cost)
    print("Path:", path)
else:
    print("Path not found within memory limit.")

Output:

Optimal path found with cost: 8
Path: [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (2, 3), (3, 3), (4, 3), (4, 4)]
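Lowering the bound below the optimal cost demonstrates the pruning: every successor of the start node already has f_cost >= 8, so with memory_limit = 7 all successors are discarded and the search reports failure. A quick check, reusing the functions defined above:

path, total_cost = memory_bounded_astar(start, goal, 7, heuristic, neighbors, cost)
print(path)   # None -- no node with f_cost <= 7 can reach the goal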

Result:

Thus the informed search algorithms A* and memory-bounded A* were implemented and executed successfully using Python.

3. Implementation of Naïve Bayes models

Ex.3 Date:

Aim:

To implement a Naïve Bayes classifier model in Python using scikit-learn.

Algorithm:

Step 1: Start the program.

Step 2: Import the necessary libraries such as scikit-learn.

Step 3: Define the toy example data and encode the categorical features.

Step 4: Split the dataset into training and test sets.

Step 5: Train the Naïve Bayes classifier and predict on the test data.

Step 6: Calculate the accuracy of the prediction.

Step 7: Stop the program.

Program:

from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import LabelEncoder

# Sample data
X = [[1, 'S'], [1, 'M'], [1, 'M'], [1, 'S'], [1, 'S'],
     [2, 'S'], [2, 'M'], [2, 'M'], [2, 'L'], [2, 'L'],
     [3, 'L'], [3, 'M'], [3, 'M'], [3, 'L'], [3, 'L']]

y = ['N', 'N', 'Y', 'Y', 'N', 'N', 'N', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'N']

# Encoding categorical features: fit one encoder per column so that every
# category in a column maps to a consistent integer code (fitting a fresh
# encoder on each single value would encode everything as 0)
le_first = LabelEncoder().fit([x[0] for x in X])
le_second = LabelEncoder().fit([x[1] for x in X])
X_encoded = [[le_first.transform([x[0]])[0], le_second.transform([x[1]])[0]] for x in X]

# Splitting the dataset
X_train, X_test, y_train, y_test = train_test_split(X_encoded, y, test_size=0.2,
                                                    random_state=42)

# Initializing and training the Naïve Bayes classifier
model = GaussianNB()
model.fit(X_train, y_train)

# Predicting
y_pred = model.predict(X_test)

# Accuracy
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)

Output:

Accuracy: 0.3333333

(On a dataset this small the test set holds only 3 samples, so the reported accuracy is highly sensitive to the random train/test split.)
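To classify a new, unseen sample, encode it with the same per-column encoders before calling predict. A minimal sketch using the names defined above (the sample itself is hypothetical):

# Hypothetical new sample: first feature 2, second feature 'S'
new_sample = [[le_first.transform([2])[0], le_second.transform(['S'])[0]]]
print(model.predict(new_sample))   # Prints the predicted label, 'N' or 'Y'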

Result:

Thus a Naïve Bayes classifier model was implemented in Python using scikit-learn and executed successfully.

4. Implementation of Bayesian network model

Ex.4 Date:

Aim:

To implement a Bayesian Network that models the probabilistic relationships between a set of variables.

Algorithm:

1. Define the variables: The first step in implementing a Bayesian Network is to define the
variables that will be used in the model. Each variable should be clearly defined and its possible
states should be enumerated.

2. Determine the relationships between variables: The next step is to determine the
probabilistic relationships between the variables. This can be done by identifying the causal
relationships between the variables or by using data to estimate the conditional probabilities of
each variable given its parents.

3. Construct the Bayesian Network: The Bayesian Network can be constructed by
representing the variables as nodes in a directed acyclic graph (DAG). The edges between the
nodes represent the conditional dependencies between the variables.

4. Assign probabilities to the variables: Once the structure of the Bayesian Network has
been defined, the probabilities of each variable must be assigned. This can be done by using
expert knowledge, data, or a combination of both.

5. Inference: Inference refers to the process of using the Bayesian Network to make
predictions or draw conclusions. This can be done by using various inference algorithms, such as
variable elimination or belief propagation.

6. Learning: Learning refers to the process of updating the probabilities in the Bayesian
Network based on new data. This can be done using various learning algorithms, such as
maximum likelihood or Bayesian learning.

7. Evaluation: The final step in implementing a Bayesian Network is to evaluate its performance. This can be done by comparing the predictions of the model to actual data and computing various performance metrics, such as accuracy or precision.
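As a minimal illustration of steps 1 to 5 before the full program, the sketch below builds a hypothetical two-variable network (Rain -> WetGrass) with invented probabilities and queries it; it assumes pgmpy is installed. Recent pgmpy versions name the model class BayesianNetwork, while older releases use BayesianModel as in the program that follows.

from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Steps 1-3: two variables and one edge in a DAG
model = BayesianNetwork([('Rain', 'WetGrass')])

# Step 4: assign (invented) probabilities
cpd_rain = TabularCPD('Rain', 2, [[0.8], [0.2]])           # P(Rain=0), P(Rain=1)
cpd_wet = TabularCPD('WetGrass', 2,
                     [[0.9, 0.1],                          # P(WetGrass=0 | Rain)
                      [0.1, 0.9]],                         # P(WetGrass=1 | Rain)
                     evidence=['Rain'], evidence_card=[2])
model.add_cpds(cpd_rain, cpd_wet)
assert model.check_model()

# Step 5: inference by variable elimination
infer = VariableElimination(model)
print(infer.query(['WetGrass'], evidence={'Rain': 1}))     # P(WetGrass | Rain=1)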

Program:

import numpy as np
import pandas as pd

from pgmpy.models import BayesianModel
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

# Read the Cleveland Heart Disease data
heartDisease = pd.read_csv('heart.csv')
heartDisease = heartDisease.replace('?', np.nan)

# Display the data
print('Few examples from the dataset are given below')
print(heartDisease.head())

# Model the Bayesian Network
model = BayesianModel([('age', 'trestbps'), ('age', 'fbs'), ('sex', 'trestbps'),
                       ('exang', 'trestbps'), ('trestbps', 'heartdisease'),
                       ('fbs', 'heartdisease'), ('heartdisease', 'restecg'),
                       ('heartdisease', 'thalach'), ('heartdisease', 'chol')])

# Learning CPDs using Maximum Likelihood Estimators
print('\n Learning CPD using Maximum likelihood estimators')
model.fit(heartDisease, estimator=MaximumLikelihoodEstimator)

# Inferencing with Bayesian Network
print('\n Inferencing with Bayesian Network:')
HeartDisease_infer = VariableElimination(model)

# Computing the probability of heart disease given age
print('\n 1. Probability of HeartDisease given Age=28')
q = HeartDisease_infer.query(variables=['heartdisease'], evidence={'age': 28})
print(q)   # query() returns a factor that can be printed directly

# Computing the probability of heart disease given cholesterol
print('\n 2. Probability of HeartDisease given cholesterol=100')
q = HeartDisease_infer.query(variables=['heartdisease'], evidence={'chol': 100})
print(q)

Output:

Few examples from the dataset are given below
   age  sex  cp  trestbps  ...  slope  ca  thal  heartdisease
0   63    1   1       145  ...      3   0     6             0
1   67    1   4       160  ...      2   3     3             2
2   67    1   4       120  ...      2   2     7             1
3   37    1   3       130  ...      3   0     3             0
4   41    0   2       130  ...      1   0     3             0

[5 rows x 14 columns]

Learning CPD using Maximum likelihood estimators

Inferencing with Bayesian Network:

1. Probability of HeartDisease given Age=28

╒════════════════╤═════════════════════╕

│ heartdisease │ phi(heartdisease) │

╞════════════════╪═════════════════════╡

│ heartdisease_0 │ 0.6791 │

├────────────────┼─────────────────────┤

│ heartdisease_1 │ 0.1212 │

├────────────────┼─────────────────────┤

│ heartdisease_2 │ 0.0810 │

├────────────────┼─────────────────────┤

│ heartdisease_3 │ 0.0939 │

├────────────────┼─────────────────────┤

│ heartdisease_4 │ 0.0247 │

╘════════════════╧═════════════════════╛

2. Probability of HeartDisease given cholesterol=100

╒════════════════╤═════════════════════╕

│ heartdisease │ phi(heartdisease) │

╞════════════════╪═════════════════════╡

│ heartdisease_0 │ 0.5400 │

├────────────────┼─────────────────────┤

│ heartdisease_1 │ 0.1533 │

├────────────────┼─────────────────────┤

│ heartdisease_2 │ 0.1303 │

├────────────────┼─────────────────────┤

│ heartdisease_3 │ 0.1259 │

├────────────────┼─────────────────────┤

│ heartdisease_4 │ 0.0506 │

╘════════════════╧═════════════════════╛

Result:

Thus the Bayesian Network model was implemented and executed successfully, and the output was verified.
