AIML Lab Manual
EXPT NO: 1(a) Implementation of Uninformed Search Algorithms - Breadth-First Search (BFS)
Aim: To implement the simple uninformed search algorithm Breadth-First Search (BFS) using Python.
Procedure:
1. Start by putting any one of the graph’s vertices at the back of the queue.
2. Now take the front item of the queue and add it to the visited list.
3. Create a list of that vertex's adjacent nodes. Add the ones that are not in the visited list to the back of the queue.
4. Repeat steps 2 and 3 until the queue is empty.
Program:
from collections import deque

class Graph:
    def __init__(self, directed=True):
        self.edges = {}
        self.directed = directed

    def add_edge(self, node1, node2, reversed=False):
        try:
            neighbors = self.edges[node1]
        except KeyError:
            neighbors = set()
        neighbors.add(node2)
        self.edges[node1] = neighbors
        # For an undirected graph, also add the reverse edge
        if not self.directed and not reversed:
            self.add_edge(node2, node1, True)

    def neighbors(self, node):
        try:
            return self.edges[node]
        except KeyError:
            return []

    def breadth_first_search(self, start, goal):
        found = False
        fringe = deque([start])        # FIFO queue of nodes to expand
        visited = set([start])
        came_from = {start: None}
        print('Expand Node | Fringe')
        print('-', '|', start)
        while not found and len(fringe):
            current = fringe.pop()
            print(current, end=' | ')
            if current == goal:
                found = True
                break
            for node in self.neighbors(current):
                if node not in visited:
                    visited.add(node)
                    fringe.appendleft(node)
                    came_from[node] = current
            print(', '.join(fringe))
        if found:
            print()
            return came_from
        else:
            print('No path from {} to {}'.format(start, goal))
            return None

    @staticmethod
    def print_path(came_from, goal):
        parent = came_from[goal]
        if parent:
            Graph.print_path(came_from, parent)
            print(' =>', end=' ')
        print(goal, end='')

    def __str__(self):
        return str(self.edges)

# Create the graph
graph = Graph(directed=False)
graph.add_edge('A', 'B')
graph.add_edge('A', 'S')
graph.add_edge('S', 'G')
graph.add_edge('S', 'C')
graph.add_edge('C', 'F')
graph.add_edge('G', 'F')
graph.add_edge('C', 'D')
graph.add_edge('C', 'E')
graph.add_edge('E', 'H')
graph.add_edge('G', 'H')
# Perform BFS
start, goal = 'A', 'H'
traced_path = graph.breadth_first_search(start, goal)
if traced_path:
    print('Path:', end=' ')
    Graph.print_path(traced_path, goal)
    print()
Output:
Expand Node | Fringe
- | A
A | S, B
B | S
S | G, C
C | D, E, F, G
G | H, D, E, F
F | H, D, E
E | H, D
D | H
H |
Path: A => S => G => H
Result:
Thus, the program for Breadth-First Search was executed and the output was verified.
EXPT NO: 1(b) Implementation of Uninformed Search Algorithms - Depth-First Search (DFS)
Aim: To implement the simple uninformed search algorithm Depth-First Search (DFS) using Python.
Procedure:
1. Start by putting any one of the graph's vertices on top of the stack.
2. Take the top item of the stack and add it to the visited list.
3. Create a list of that vertex's adjacent nodes. Add the ones that are not in the visited list to the top of the stack.
4. Repeat steps 2 and 3 until the stack is empty.
Program:
from collections import deque

class Graph:
    def __init__(self, directed=True):
        self.edges = {}
        self.directed = directed

    def add_edge(self, node1, node2, reversed=False):
        try:
            neighbors = self.edges[node1]
        except KeyError:
            neighbors = set()
        neighbors.add(node2)
        self.edges[node1] = neighbors
        # For an undirected graph, also add the reverse edge
        if not self.directed and not reversed:
            self.add_edge(node2, node1, True)

    def neighbors(self, node):
        try:
            return self.edges[node]
        except KeyError:
            return []

    def depth_first_search(self, start, goal):
        found = False
        fringe = deque([start])        # LIFO stack of nodes to expand
        visited = set([start])
        came_from = {start: None}
        print('Expand Node | Fringe')
        print('-', '|', start)
        while not found and len(fringe):
            current = fringe.pop()
            print(current, end=' | ')
            if current == goal:
                found = True
                break
            for node in self.neighbors(current):
                if node not in visited:
                    visited.add(node)
                    fringe.append(node)   # append (not appendleft) so the fringe behaves as a stack
                    came_from[node] = current
            print(', '.join(fringe))
        if found:
            print()
            return came_from
        else:
            print('No path from {} to {}'.format(start, goal))
            return None

    @staticmethod
    def print_path(came_from, goal):
        parent = came_from[goal]
        if parent:
            Graph.print_path(came_from, parent)
            print(' =>', end=' ')
        print(goal, end='')

    def __str__(self):
        return str(self.edges)
# Create the graph
graph = Graph(directed=False)
graph.add_edge('A', 'B')
graph.add_edge('A', 'S')
graph.add_edge('S', 'G')
graph.add_edge('S', 'C')
graph.add_edge('C', 'F')
graph.add_edge('G', 'F')
graph.add_edge('C', 'D')
graph.add_edge('C', 'E')
graph.add_edge('E', 'H')
graph.add_edge('G', 'H')
# Perform DFS
print('Depth-First Search:')
start, goal = 'A', 'H'
traced_path = graph.depth_first_search(start, goal)
if traced_path:
    print('Path:', end=' ')
    Graph.print_path(traced_path, goal)
    print()
Output:
Depth-First Search:
Expand Node | Fringe
- | A
A | S, B
B | S
S | G, C
C | G, D, E, F
F | G, D, E
E | G, D, H
H |
Path: A => S => C => E => H
Result:
Thus, the program for Depth-First Search was executed and the output was verified.
EXPT NO: 2 Implementation of the N-Queens Problem
AIM:
To place N queens on an N×N chessboard in such a way that no queen is under attack, and to implement this N-Queens problem using Python.
Procedure:
1. Read N from the user and create an N×N board initialized with zeros.
2. The function attack(i, j) checks whether position (i, j) is attacked vertically, horizontally, or diagonally; it returns True if a queen already threatens that position.
3. The function N_queens(n) places the remaining n queens recursively: it puts a queen on a safe square, recurses, and backtracks (resets the square) if the placement leads to no solution.
4. N_queens(n) returns True once all n queens have been placed successfully.
5. The variable n defines how many queens still need to be placed for the board to be considered complete.
Program:
N = int(input())
# Create an N x N board filled with zeros
board = [[0] * N for _ in range(N)]

def attack(i, j):
    # Checking vertically and horizontally
    for k in range(0, N):
        if board[i][k] == 1 or board[k][j] == 1:
            return True
    # Checking diagonally
    for k in range(0, N):
        for l in range(0, N):
            if (k + l == i + j) or (k - l == i - j):
                if board[k][l] == 1:
                    return True
    return False

def N_queens(n):
    if n == 0:
        return True
    for i in range(0, N):
        for j in range(0, N):
            if (not attack(i, j)) and (board[i][j] != 1):
                board[i][j] = 1
                if N_queens(n - 1) == True:
                    return True
                board[i][j] = 0   # backtrack
    return False

N_queens(N)
for i in board:
    print(i)
Output:
[1, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 1, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 1]
[0, 0, 0, 0, 0, 1, 0, 0]
[0, 0, 1, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 1, 0]
[0, 1, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 1, 0, 0, 0, 0]
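In this output the queens occupy positions (row, column) = (0, 0), (1, 4), (2, 7), (3, 5), (4, 2), (5, 6), (6, 1), (7, 3). All eight column indices are distinct, and so are all row + column sums and row - column differences, so no two queens share a row, column, or diagonal.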
Result:
Thus the program to implement the N-Queens search strategy was implemented and executed successfully.
EXPT NO: 3 Implementation of Propositional Model Checking
AIM:
To implement the propositional model checking (DPLL) algorithm using Python.
Procedure:
1. Define a class Literal with attributes name and sign to denote whether the literal is positive or negative.
2. Implement the __neg__ function to return a new literal with the same name but the opposite sign.
3. Implement the __repr__ function to return the string of the literal name (or the string with a negative sign) each time the instance of the literal is called.
4. Create the CNFconvert function to convert the knowledge base (KB) from a list of sets to a list of lists for easier computation.
5. Create the VariableSet function to find all the used literals in the KB, which assists in running the DPLL algorithm.
6. Implement the Negativeofx function to hold the negative form of a given literal.
7. Create the pickX function to pick a literal from the variable set and work with it as a node in the search tree.
8. Implement unitResolution, dpll and DPLL to perform unit propagation and the recursive DPLL search, and to report whether the KB is satisfiable.
Code:
import re
class Literal:
    # Class Literal: attributes name and sign denote whether the literal is positive or negative in use
    def __init__(self, name, sign=True):
        self.name = str(name)
        self.sign = sign

    def __neg__(self):
        # Returns a new literal with the same name but the opposite sign of its parent literal
        return Literal(self.name, not self.sign)

    def __str__(self):
        return str(self.name)

    def __repr__(self):
        # Returns the quoted literal name (with a leading '-' when negative)
        # each time the instance of the literal is called
        if self.sign:
            return "'%s'" % self.name
        else:
            return "'-%s'" % self.name

def CNFconvert(KB):
    # This function converts the KB from a list of sets to a list of lists for easier computing
    storage = []
    for i in KB:
        i = list(i)
        for j in i:
            j = str(j)
        storage.append(i)
    return storage
def VariableSet(KB):
    # This function finds all the used literals in the KB, in order to assist with running the DPLL
    KB = eval((CNFconvert(KB).__str__()))
    storage = []
    for obj in KB:
        for item in obj:
            if item[0] == '-':
                if item[1:] not in storage:
                    storage.append(str(item[1:]))
            elif item not in storage:
                storage.append(str(item))
    return storage
def Negativeofx(x):
    # This function holds the negative form of the literal, for use in the DPLL algorithm
    check = re.match("-", str(x))
    if check:
        return str(x[1:])
    else:
        return "-" + str(x)

def pickX(literals, varList):
    # This function picks a literal from the variable set and works with it as a node in the tree
    for x in varList:
        if x not in literals:
            break
    return x
def splitFalseLiterals(cnf, x):
    # Removes the (now false) literal x from every clause that contains it
    holder = []
    for item in cnf:
        if x in item:
            item.remove(x)
        holder.append(item)
    return holder

def splitTrueLiteral(cnf, x):
    # Drops every clause that contains the (now true) literal x
    holder = []
    for item in cnf:
        if x in item:
            continue
        else:
            holder.append(item)
    return holder
def unitResolution(clauses):
    literalholder = {}  # Dictionary for holding the literals and their boolean values
    i = 0
    # This part of the code goes through each and every clause until all unit literals in the KB are resolved
    while i < len(clauses):
        newClauses = []
        clause = clauses[i]
        # A unit clause contains only one literal
        if len(clause) == 1:
            literal = str(clause[0])
            pattern = re.match("-", literal)
            # Record the forced assignment
            if pattern:
                nx = literal[1:]
                literalholder[nx] = False
            else:
                nx = "-" + literal
                literalholder[literal] = True
            # Checks for all other appearances of the literal or its opposite in the KB
            for item in clauses:
                if item != clauses[i]:
                    if nx in item:
                        item.remove(nx)
                    newClauses.append(item)
            i = 0
            clauses = newClauses
        # No unit clause
        else:
            i += 1
    return literalholder, clauses
def dpll(clauses, varList):
    # Performs unit resolution first, then the recursive DPLL search
    literals, cnf = unitResolution(clauses)
    if cnf == []:
        return literals
    elif [] in cnf:
        return "notsatisfiable"
    else:
        # Pick a literal which isn't set yet but has an impact on the KB, and then work on it recursively
        while True:
            x = pickX(literals, varList)
            x = str(x)
            nx = Negativeofx(x)
            ncnf = splitTrueLiteral(cnf, x)
            ncnf = splitFalseLiterals(ncnf, nx)
            if ncnf == cnf:
                varList.remove(x)
            else:
                break
        # Does the same DPLL recursively, but follows the true path for that variable
        case1 = dpll(ncnf, varList)
        if case1 != "notsatisfiable":
            copy = case1.copy()
            copy.update(literals)
            copy.update({x: True})
            return copy
        # Does the DPLL recursively, but follows the false path for that variable
        ncnf = splitTrueLiteral(cnf, nx)
        ncnf = splitFalseLiterals(ncnf, x)
        case2 = dpll(ncnf, varList)
        if case2 != "notsatisfiable":
            copy = case2.copy()
            copy.update(literals)
            copy.update({x: False})
            return copy
        else:
            return "notsatisfiable"
def DPLL(KB):
    # Finally restructures the output to fit the required output by the assignment description
    KB = eval((CNFconvert(KB).__str__()))
    varList = VariableSet(KB)
    result = dpll(KB, varList)
    if result == 'notsatisfiable':
        return False
    else:
        for i in varList:
            if i in result and result[i] == True:
                result[i] = 'true'
            elif i in result and result[i] == False:
                result[i] = 'false'
            else:
                result[i] = 'free'
        return result

A = Literal('A')
B = Literal('B')
C = Literal('C')
D = Literal('D')
KB = [{A, B}, {-A, C}, {-C, D}]   # example knowledge base (assumed; the original KB is not shown)
print(DPLL(KB))
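For reference, a compact, self-contained DPLL sketch is given below. It is a simplification of the class-based program above, not a reproduction of it: clauses are plain Python sets of string literals with a leading '-' marking negation, and the knowledge base at the end is a hypothetical example.

def negate(lit):
    # '-A' -> 'A', 'A' -> '-A'
    return lit[1:] if lit.startswith('-') else '-' + lit

def assign(clauses, lit, assignment):
    # Record the assignment implied by making `lit` true, then simplify the clause set.
    assignment[lit.lstrip('-')] = not lit.startswith('-')
    simplified = []
    for clause in clauses:
        if lit in clause:
            continue                      # clause satisfied, drop it
        reduced = clause - {negate(lit)}  # remove the falsified literal
        if not reduced:
            return None, assignment       # empty clause means a conflict
        simplified.append(reduced)
    return simplified, assignment

def dpll_check(clauses, assignment=None):
    # Returns a satisfying assignment (dict) or None if unsatisfiable.
    assignment = dict(assignment or {})
    # Unit propagation: keep assigning literals that appear alone in a clause.
    while True:
        unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        clauses, assignment = assign(clauses, unit, assignment)
        if clauses is None:
            return None
    if not clauses:
        return assignment
    # Branch on an arbitrary literal from the first remaining clause.
    literal = next(iter(clauses[0]))
    for choice in (literal, negate(literal)):
        branch, branch_assignment = assign(list(clauses), choice, dict(assignment))
        if branch is not None:
            result = dpll_check(branch, branch_assignment)
            if result is not None:
                return result
    return None

# Hypothetical example KB: (A or B) and (-A or C) and (-C)
KB_example = [{'A', 'B'}, {'-A', 'C'}, {'-C'}]
print(dpll_check(KB_example) or "notsatisfiable")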
OUTPUT:
Result:
Thus the program to implement the propositional model checking (DPLL) algorithm was implemented and executed successfully.
EXPT NO: 4 Implementation of a Chatbot Model
AIM:
To create a chatbot that can assist university students with their common queries
Procedure:
1. Install Dependencies: Install the gradio package (for example, pip install gradio).
2. Copy Code: Save the provided chatbot code into a file, e.g., mkce_chatbot.py.
3. Run Script: Run the file with Python so that Gradio starts a local web server.
4. Access URL: Open the displayed URL (e.g., http://127.0.0.1:7860) in a web browser.
5. Use Chatbot: Enter queries in the chat interface and view responses.
6. Reset Chat: Use the "Reset" tab to clear the chat history.
import gradio as gr
from difflib import get_close_matches   # assumed: used for the fuzzy matching below

# Predefined responses (entries are shown only partially in the source listing;
# "..." marks omitted text, and the first key is assumed to be "library hours")
responses = {
    "library hours": "... weekends.",
    "course information": "You can find detailed course information on the college website or ...",
    "exam schedule": "The exam schedule will be uploaded to the college portal. Please check your login dashboard.",
    "clubs and events": "We have various clubs such as Robotics, Coding, and Cultural Clubs. ...",
    "hostel information": "Hostel facilities are available for both boys and girls. Contact the hostel ...",
    "canteen menu": "Today's menu: Breakfast - Idli, Pongal; Lunch - Rice, Sambar, Poriyal; ...",
}

chat_history = []

def chatbot(user_query):
    global chat_history
    user_query = user_query.lower()
    # Fuzzy-match the query against the known topics (difflib assumed)
    closest_match = get_close_matches(user_query, responses.keys(), n=1, cutoff=0.5)
    if closest_match:
        bot_response = responses[closest_match[0]]
    else:
        bot_response = ("I'm sorry, I don't understand that. "
                        "Please contact the administration office for assistance.")
    chat_history.append(f"You: {user_query}")
    chat_history.append(f"Bot: {bot_response}")
    # Return the complete chat history as a single string
    return "\n".join(chat_history)

def reset_chat():
    global chat_history
    chat_history = []
    return "Chat history has been cleared. How can I assist you?"

# Gradio interface
interface = gr.Interface(
    fn=chatbot,
    inputs="text",
    outputs="text",
    description="This chatbot can assist you with library hours, course information, exam schedules, "
                "hostel details, canteen menu, and more. Type your query below!",
    examples=[
        "library hours",      # example queries assumed; the originals are not shown
        "exam schedule",
        "canteen menu",
    ],
    live=True,
    allow_flagging="never"
)

# Add a Reset button to clear chat history
reset_interface = gr.Interface(
    fn=reset_chat,
    inputs=None,
    outputs="text",
    live=True,
    title="Reset Chat"
)
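The listing above defines the two interfaces but does not show how they are combined and launched; one possible way, assuming Gradio's TabbedInterface, is the following sketch.

# Combine the chat and reset tabs and start the local server
# (an assumption; the original launch code is not shown in the listing).
app = gr.TabbedInterface([interface, reset_interface], ["Chat", "Reset Chat"])
app.launch()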
Output:
A Gradio chat interface is displayed with a title, a description ("This chatbot can assist you with library hours, course information, exam schedules, hostel details, canteen menu, and more."), example buttons, a user-input box, and a bot-response box.
Sample user input:
You: what are the library hours?
Result:
The chatbot successfully answers predefined queries related to university resources like library
hours, hostel details, and canteen menus, using fuzzy matching for user inputs. It also allows the chat history to be cleared through the Reset tab.
EXPT NO: 5(a) Implementation of Naive Bayes Model
AIM:
To construct and implement a Naive Bayes model using the Gaussian Naive Bayes classifier in Python.
Algorithm
1. Load the Iris dataset and separate the features and the target labels.
2. Split the data into a training set and a test set.
3. Next, a Gaussian Naive Bayes classifier is trained using the training set.
4. Then predictions are made on the test set and the model's accuracy is assessed.
5. Finally, a confusion matrix is created to show how well each prediction was classified as correct or incorrect.
6. The code is used to train a Gaussian Naive Bayes classifier and then use it to make predictions.
7. The code prints the model's predictions, as well as the test set's actual labels, for comparison.
Program:
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix
# Load the Iris dataset
iris = datasets.load_iris()
X = iris.data
Y = iris.target
# Split the dataset into training and testing sets (split ratio and random_state assumed)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33, random_state=1)
# Train the Gaussian Naive Bayes classifier and make predictions
model = GaussianNB()
model.fit(X_train, Y_train)
model_predictions = model.predict(X_test)
print("\nPredictions:", model_predictions)
print("Actual labels:", Y_test)
# Confusion matrix of actual vs. predicted labels
cm = confusion_matrix(Y_test, model_predictions)
print("\nConfusion Matrix:")
print(cm)
Output:
Predictions: [1 0 2 1 1 0 1 2 1 1 2 0 0 0 0 2 2 1 1 2 0 2 0 2 2 2 2 2 0 0
0 0 1 0 0 2 1
0 0 0 2 1 1 0 0 1 1 2 1 2]
Actual labels: [1 0 2 1 1 0 1 2 1 1 2 0 0 0 0 1 2 1 1 2 0 2 0 2 2 2 2 2 0
0 0 0 1 0 0 2 1
0 0 0 2 1 1 0 0 1 2 2 1 2]
Confusion Matrix:
[[19 0 0]
[ 0 14 1]
[ 0 1 15]]
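From this confusion matrix, the accuracy on the 50 test samples is (19 + 14 + 15) / 50 = 48/50 = 0.96; only two samples are misclassified (one class-1 sample predicted as class 2 and one class-2 sample predicted as class 1).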
Result:
Thus the program to implement the Naïve Bayes model was implemented and executed successfully.
EXPT NO: 5(b) Implementation of Bayesian Network using Python
Aim
To construct a Bayesian network (a Bayesian neural network built with torchbnn) for the Iris dataset and perform inference using Python.
Algorithm
1. Load the Iris dataset and convert the features and targets into PyTorch tensors.
2. Define a Bayesian neural network using torchbnn's Bayesian linear layers.
3. Train the model by minimizing the cross-entropy loss plus a weighted KL-divergence loss.
4. Predict the class of every sample and plot the real labels against the predicted labels.
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torchbnn as bnn
import matplotlib.pyplot as plt
from sklearn import datasets

# Load the Iris dataset and convert it to tensors
dataset = datasets.load_iris()
data = dataset.data
target = dataset.target
data_tensor = torch.from_numpy(data).float()
target_tensor = torch.from_numpy(target).long()

# Bayesian neural network (layer sizes and prior parameters assumed)
model = nn.Sequential(
    bnn.BayesLinear(prior_mu=0, prior_sigma=0.1, in_features=4, out_features=100),
    nn.ReLU(),
    bnn.BayesLinear(prior_mu=0, prior_sigma=0.1, in_features=100, out_features=3),
)
cross_entropy_loss = nn.CrossEntropyLoss()
klloss = bnn.BKLLoss(reduction='mean', last_layer_only=False)
klweight = 0.01
optimizer = optim.Adam(model.parameters(), lr=0.01)

# Training loop (number of steps assumed)
for step in range(3000):
    # Forward pass
    models = model(data_tensor)
    cross_entropy = cross_entropy_loss(models, target_tensor)
    kl = klloss(model)
    # Total cost = data-fit term + weighted KL term
    total_cost = cross_entropy + klweight * kl
    # Backward pass
    optimizer.zero_grad()
    total_cost.backward()
    optimizer.step()

# Predictions after training
models = model(data_tensor)
_, predicted = torch.max(models.data, 1)

# Final output
correct = (predicted == target_tensor).sum()
kl = klloss(model)
print('- Accuracy: %f %%' % (100 * float(correct) / target_tensor.size(0)))
print('- CE : %2.2f, KL : %2.2f' % (cross_entropy.item(), kl.item()))

def draw_graph(predicted):
    # Scatter plots of the first two features, coloured by real and predicted labels
    fig = plt.figure(figsize=(16, 8))
    fig_1 = fig.add_subplot(1, 2, 1)
    fig_2 = fig.add_subplot(1, 2, 2)
    z1_plot = fig_1.scatter(data[:, 0], data[:, 1], c=target)
    z2_plot = fig_2.scatter(data[:, 0], data[:, 1], c=predicted)
    plt.colorbar(z1_plot, ax=fig_1)
    plt.colorbar(z2_plot, ax=fig_2)
    fig_1.set_title("REAL")
    fig_2.set_title("PREDICT")
    plt.show()

draw_graph(predicted)
Output:
Two scatter plots are displayed side by side, titled "REAL" and "PREDICT", comparing the actual Iris class labels with the labels predicted by the model.
Result:
Thus, the program to implement Bayesian networks and perform inferences was implemented and executed successfully.
EXPT NO: 6(a) Implementation of Regression Model using Linear Regression
Aim
To construct and implement a Python program for a regression model using linear regression.
Algorithm
1. Import Libraries: Import necessary libraries like pandas, numpy, and sklearn.
2. Load and Preprocess Data: Load the dataset, handle missing values (e.g., fill missing
values with the median), and split it into features (X) and target (y).
3. Train the Model: Use Linear Regression to train the model on the training data.
4. Make Predictions: Use the trained model to predict prices for new data.
5. Evaluate the Model: Calculate performance metrics like Mean Squared Error (MSE) and R-squared on the test data.
Code:
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Load the home-prices dataset (file name and column names assumed: area, bedrooms, age, price)
df = pd.read_csv("homeprices.csv")
print("Original Dataset:")
print(df)

# Step 1: Handle missing values in 'bedrooms' column by filling it with the median value
df['bedrooms'] = df['bedrooms'].fillna(df['bedrooms'].median())
print(df)

# Step 2: Prepare the independent variables (X) and dependent variable (y)
X = df[['area', 'bedrooms', 'age']]
y = df['price']

# Step 3: Split the data into Training and Test sets (80% Training, 20% Test)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Step 4: Create a Linear Regression model and fit it to the training data
reg = LinearRegression()
reg.fit(X_train, y_train)

# Step 6: Predict the price for a home with 3000 sqft area, 3 bedrooms, 40 years old
predicted_price_1 = reg.predict([[3000, 3, 40]])
print("\nPredicted price for a home with 3000 sqft area, 3 bedrooms, 40 years old: $",
      predicted_price_1[0])

# Step 7: Predict the price for a home with 2500 sqft area, 4 bedrooms, 5 years old
predicted_price_2 = reg.predict([[2500, 4, 5]])
print("\nPredicted price for a home with 2500 sqft area, 4 bedrooms, 5 years old: $",
      predicted_price_2[0])

# Evaluate the model on the test set
predictions = reg.predict(X_test)
print("R-squared:", r2_score(y_test, predictions))
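If the home-prices CSV is not available, the same workflow can be exercised on a small in-memory DataFrame; the values below are hypothetical (chosen to be consistent with the single row visible in the output) and the column names are the ones assumed above.

import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical home-prices data (columns assumed: area, bedrooms, age, price)
df = pd.DataFrame({
    'area':     [2600, 3000, 3200, 3600, 4000],
    'bedrooms': [3, 4, None, 3, 5],
    'age':      [20, 15, 18, 30, 8],
    'price':    [550000, 565000, 610000, 595000, 760000],
})
df['bedrooms'] = df['bedrooms'].fillna(df['bedrooms'].median())

reg = LinearRegression()
reg.fit(df[['area', 'bedrooms', 'age']], df['price'])
print(reg.predict([[3000, 3, 40]]))   # predicted price for a 3000 sqft, 3-bedroom, 40-year-old home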
Output:
Original Dataset:
2 3200 NaN 18 610000
Predicted price for a home with 3000 sqft area, 3 bedrooms, 40 years old: $ 498408.25158031
Predicted price for a home with 2500 sqft area, 4 bedrooms, 5 years old: $ 578876.03748933
R-squared: 0.9481663636254594
Result:
Thus, the Python program for the linear regression model was executed successfully.
EXPT NO: 6(b) Implementation of Regression Model using Logistic Regression
AIM:
To construct and implement a Python program for a regression model using logistic regression.
Algorithm
Step 4: Feature scaling
Code:
import pandas as pd
import math
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("insurance_data.csv")
print(df.head())

# Visualize age vs. bought_insurance
plt.scatter(df.age, df.bought_insurance, marker='+', color='red')
plt.xlabel('Age')
plt.ylabel('Bought insurance')
plt.show()

# Split the data (split ratio and random_state assumed)
X_train, X_test, y_train, y_test = train_test_split(df[['age']], df.bought_insurance, test_size=0.2, random_state=5)
print("Test data:")
print(X_test)

# Train the logistic regression model
model = LogisticRegression()
model.fit(X_train, y_train)
y_predicted = model.predict(X_test)
print("Predicted values:")
print(y_predicted)
print(model.predict_proba(X_test))

# Step 7: Coefficients of the model
print("Coefficient:", model.coef_)
print("Intercept:", model.intercept_)

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def prediction_function(age):
    z = model.coef_[0][0] * age + model.intercept_[0]   # assumed: uses the fitted coefficient and intercept
    y = sigmoid(z)
    return y

age = 35
print("Probability of buying insurance at age", age, ":", prediction_function(age))
age = 43
print("Probability of buying insurance at age", age, ":", prediction_function(age))
Dataset (age, bought_insurance):
22 0
25 0
47 1
52 0
46 1
62 1
23 0
58 1
50 1
54 1
Output:
age  bought_insurance
0 22 0
1 25 0
2 47 1
3 52 0
4 46 1
Test data:
age
4 46
8 62
26 23
17 58
24 50
25 54
Predicted values:
[0 1 0 1 1 1]
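Comparing these predictions with the dataset labels for the corresponding test ages (46, 62, 23, 58, 50, 54, whose recorded labels are 1, 1, 0, 1, 1, 1), five of the six predictions are correct; only the age-46 sample is predicted as 0 while its actual label is 1, giving a test accuracy of 5/6 ≈ 0.83.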
Result:
Thus, the Python program for the logistic regression model was executed successfully.
EXPT NO: 7(a) Implementation of Decision Tree
Aim:
To construct a Python program for a decision tree using a Gaussian classifier and to visualize the graph using the Weka tool.
Procedure:
4. As all the columns are categorical, check the unique values of each column.
5. Check how these unique categories are distributed among the columns.
6. Plot a heatmap of the dataset's columns against each other; it shows Pearson's correlation.
7. As scikit-learn algorithms do not generally work with string values, convert the string categories to integers.
import pandas as pd

# Load the dataset (file name assumed)
file_path = "covid_19_india.csv"
data = pd.read_csv(file_path)
print("Dataset Information:")
print(data.info())
print(data.head())

# Example Operations:
# 1. Total confirmed cases per state/union territory
statewise_total = data.groupby('State/UnionTerritory')['Confirmed'].sum().reset_index()
print(statewise_total)

# 2. Plot confirmed cases over time for a specific state (e.g., Kerala)
import matplotlib.pyplot as plt
state = 'Kerala'
state_data = data[data['State/UnionTerritory'] == state]
plt.plot(state_data['Date'], state_data['Confirmed'], label=state)
plt.title(f'Confirmed Cases Over Time in {state}')
plt.xlabel('Date')
plt.ylabel('Confirmed Cases')
plt.xticks(rotation=45)
plt.legend()
plt.grid()
plt.show()

# 3. Total deaths per state/union territory
statewise_deaths = data.groupby('State/UnionTerritory')['Deaths'].sum().reset_index()
print(statewise_deaths)

# Save the processed data
processed_file_path = "processed_data.csv"
data.to_csv(processed_file_path, index=False)
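The listing above only loads and explores the data; the decision tree named in the aim is not shown. A minimal decision-tree sketch, assuming scikit-learn's DecisionTreeClassifier and using the built-in Iris dataset as a stand-in for the original data, is given below.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in dataset; the original experiment's feature and target columns are not shown
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=42)

tree = DecisionTreeClassifier(criterion='entropy', max_depth=3, random_state=42)
tree.fit(X_train, y_train)
print("Test accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=list(iris.feature_names)))   # text view of the learned tree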
Output:
Dataset Information:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 18110 entries, 0 to 18109
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Sno 18110 non-null int64
1 Date 18110 non-null object
2 Time 18110 non-null object
3 State/UnionTerritory 18110 non-null object
4 ConfirmedIndianNational 18110 non-null object
5 ConfirmedForeignNational 18110 non-null object
6 Cured 18110 non-null int64
7 Deaths 18110 non-null int64
8 Confirmed 18110 non-null int64
dtypes: int64(4), object(5)
memory usage: 1.2+ MB
None
Result:
Thus the Python program for the decision tree was executed successfully.
EXPT NO: 7(b) Implementation of Random Forest Tree
Aim:
To construct a Python program for a random forest tree and build the model.
Procedure:
4. Since all columns are categorical, check for unique values in each column.
5. Check how these unique categories are distributed among the columns.
6. Create a heatmap of the columns in the dataset with each other, showing Pearson's correlation.
14. Train the model using the training set; then use y_pred = model.predict(X_test) to make predictions on the test set.
16. Calculate the model's accuracy to determine how often the classifier is correct.
Code:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
from google.colab import files   # assumed: the CSV is uploaded in Google Colab
uploaded = files.upload()
file_name = list(uploaded.keys())[0]

# Read the CSV in chunks and coerce the columns to numeric values (chunk size assumed)
processed_data = []
for chunk in pd.read_csv(file_name, chunksize=100000):
    chunk = chunk.apply(pd.to_numeric, errors='ignore')
    # Replace non-numeric values with NaN and fill with 0
    chunk.fillna(0, inplace=True)
    processed_data.append(chunk)
data = pd.concat(processed_data)

# Downcast numeric columns to reduce memory usage
for col in data.columns:
    if data[col].dtype == 'float64':
        data[col] = data[col].astype('float32')
    elif data[col].dtype == 'int64':
        data[col] = data[col].astype('int32')

# Downsample the data for training (optional, if dataset is still too large)
data = data.sample(frac=0.1, random_state=42)

# Split the dataset into training and testing sets (target column name assumed)
X = data.drop('target', axis=1)
y = data['target']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train the random forest model and make predictions
rf_model = RandomForestClassifier(n_estimators=100, random_state=42)
rf_model.fit(X_train, y_train)
y_pred = rf_model.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
print(f"Accuracy: {accuracy:.2f}")
print("\nClassification Report:")
print(classification_report(y_test, y_pred))
print("\nConfusion Matrix:")
print(confusion_matrix(y_test, y_pred))
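If the uploaded CSV is not available, the same random-forest pipeline can be exercised on a built-in dataset; the sketch below uses scikit-learn's wine dataset purely as a stand-in.

from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

# Stand-in dataset (the original uploaded CSV and its target column are not shown)
X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))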
Output:
Confusion Matrix:
[[0 1 0 ... 0 0 0]
[0 1 0 ... 0 0 0]
[0 2 2 ... 0 0 0]
...
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 1 0]]
Result:
Thus, the Python program for the random forest tree was executed successfully.