AI RECORD

EX NO: 1 WRITE A PROGRAM TO SOLVE 8 QUEENS PROBLEM
DATE: 27/1/24

AIM:

To write a program to solve the 8 Queens problem.

ALGORITHM:

1. Get the number of queens N to be placed on the board.
2. Construct an N×N board on which to place the queens.
3. Check for an existing queen in the current row (horizontal), column (vertical) and diagonals.
4. If the position is not attacked, place the queen there; otherwise search the remaining cells by iteration, backtracking when no safe cell exists.

PROGRAM:

# Taking number of queens as input from user
print("Enter the number of queens")
N = int(input())

# Here we create a chessboard: an N x N matrix with all elements set to 0
board = [[0] * N for _ in range(N)]

def attack(i, j):
    # Checking vertically and horizontally
    for k in range(0, N):
        if board[i][k] == 1 or board[k][j] == 1:
            return True
    # Checking diagonally
    for k in range(0, N):
        for l in range(0, N):
            if (k + l == i + j) or (k - l == i - j):
                if board[k][l] == 1:
                    return True
    return False

def N_queens(n):
    if n == 0:
        return True
    for i in range(0, N):
        for j in range(0, N):
            if (not attack(i, j)) and (board[i][j] != 1):
                board[i][j] = 1
                if N_queens(n - 1):
                    return True
                board[i][j] = 0
    return False

N_queens(N)
for i in board:
    print(i)
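
For reference, the board size N is read at run time, so the same program also solves smaller instances. A sample run with N = 4 (traced by hand through the backtracking order, so treat it as a sketch) should print one valid placement:

Enter the number of queens
4
[0, 1, 0, 0]
[0, 0, 0, 1]
[1, 0, 0, 0]
[0, 0, 1, 0]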

OUTPUT:

RESULT:

Thus the Python program has been successfully implemented and the output is verified.
EX NO: 2 SOLVE A PROBLEM USING DEPTH FIRST SEARCH
DATE: 3/2/24

AIM:

To solve a problem using Depth First Search.

ALGORITHM:

1. Assign the values of the parent and children at each level of the tree.
2. Traverse the tree from the root node down to the left-most end node.
3. Continue the traversal towards the right end of each node.
4. After processing all the nodes, stop the process.

PROGRAM:

# Using a Python dictionary to act as an adjacency list
graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': [],
    '4': ['8'],
    '8': []
}

visited = set()  # Set to keep track of visited nodes of the graph

def dfs(visited, graph, node):  # Function for DFS
    if node not in visited:
        print(node)
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

# Driver code
print("Following is the Depth-First Search")
dfs(visited, graph, '5')
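
Since the adjacency list and the starting node '5' are fixed, the traversal order is deterministic; the program is expected to print:

Following is the Depth-First Search
5
3
2
4
8
7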
OUTPUT:

RESULT:

Thus the Python program has been successfully implemented and the output is verified.
EX NO: 3 IMPLEMENT MINIMAX ALGORITHM
DATE: 10/2/24

AIM:

To implement the Minimax algorithm.

ALGORITHM:

1. Define the Game State:

● Represent the current state of the game. This could be a board configuration, a position
in a maze, or any relevant data structure.

2. Define the Terminal Conditions:

● Determine the conditions that indicate the end of the game, such as a win, loss, or draw.
These conditions will be used to evaluate the utility of a game state.

3. Define the Utility Function:

● Assign a utility value to each terminal state. This value represents the outcome of the
game for the current player (e.g., +1 for a win, -1 for a loss, 0 for a draw).

4. Generate Possible Moves:

● Generate all possible moves for the current player from the current game state. This
typically involves identifying empty spaces on the game board or legal moves in the
context of the specific game.

5. Evaluate Each Possible Move:

● Recursively apply the Minimax algorithm to evaluate each possible move. For each
move:
● If the game has reached a terminal state, return the utility value.
● If not, continue the recursion.
6. Maximize and Minimize:

● If it's the current player's turn (maximizing player), choose the move with the
maximum utility value among the available moves.
● If it's the opponent's turn (minimizing player), choose the move with the minimum
utility value among the available moves.

7. Backtrack:

● Propagate the chosen utility value back up the recursion stack to the original call,
updating the utility values of parent nodes.

8. Make the Optimal Move:

● Once the recursion is complete, choose the move with the highest utility value from the
root node. This is the optimal move for the current player.

PROGRAM:

import math

def minimax(curDepth, nodeIndex, maxTurn, scores, targetDepth):
    # Base case: targetDepth reached (leaf node)
    if curDepth == targetDepth:
        return scores[nodeIndex]

    if maxTurn:
        return max(
            minimax(curDepth + 1, nodeIndex * 2, False, scores, targetDepth),
            minimax(curDepth + 1, nodeIndex * 2 + 1, False, scores, targetDepth)
        )
    else:
        return min(
            minimax(curDepth + 1, nodeIndex * 2, True, scores, targetDepth),
            minimax(curDepth + 1, nodeIndex * 2 + 1, True, scores, targetDepth)
        )

# Driver code
scores = [3, 5, 2, 9, 12, 5, 23, 23]
treeDepth = int(math.log(len(scores), 2))  # depth of the score tree (3 for 8 leaves)
print("The optimal value is:", end="")
print(minimax(0, 0, True, scores, treeDepth))
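
For the given scores the tree depth is log2(8) = 3. Working up from the leaves: the maximizing level yields max(3, 5) = 5, max(2, 9) = 9, max(12, 5) = 12 and max(23, 23) = 23; the minimizing level yields min(5, 9) = 5 and min(12, 23) = 12; the root (maximizing) therefore returns max(5, 12) = 12, so the expected output is:

The optimal value is:12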

OUTPUT:

RESULT:

Thus the Python program has been successfully implemented and the output is verified.
EX NO: 4 IMPLEMENTATION OF A* ALGORITHM
DATE:17/2/24

AIM:

To implement A* Algorithm.

ALGORITHM:

1. Initialize Open and Closed Sets:


● Create two sets: Open and Closed.
● The Open set contains nodes to be evaluated, initially containing only the start node.
● The Closed set is empty.

2. Initialize Costs:
● Assign a cost of 0 to the start node.
● Calculate the heuristic estimate from the start node to the goal node.
3. Iterate Until Goal is Reached or Open Set is Empty:
● While the Open set is not empty:
1. Select the node with the lowest total cost (actual cost + heuristic) from the Open
set. This node is the current node.
2. Move the current node from the Open set to the Closed set.

4. Check for Goal:


● If the current node is the goal node, the algorithm has found the optimal path.
● Construct and return the path from the start to the goal by following the parent
pointers.
PROGRAM:

def aStarAlgo(start_node, stop_node):
    open_set = set([start_node])
    closed_set = set()
    g = {}        # store distance from starting node
    parents = {}  # parents contains an adjacency map of all nodes

    # Distance of starting node from itself is zero
    g[start_node] = 0
    # start_node is the root node, so it has no parent nodes;
    # set start_node as its own parent node
    parents[start_node] = start_node

    while len(open_set) > 0:
        n = None
        # Node with the lowest f() = g() + heuristic() is selected
        for v in open_set:
            if n is None or (g[v] + heuristic(v) < g[n] + heuristic(n)):
                n = v

        if n == stop_node or Graph_nodes.get(n) is None:
            pass
        else:
            for (m, weight) in get_neighbors(n):
                # Nodes 'm' not in open_set and closed_set are added to open_set
                # and n is set as their parent
                if m not in open_set and m not in closed_set:
                    open_set.add(m)
                    parents[m] = n
                    g[m] = g[n] + weight
                else:
                    # Check if it is quicker to first visit the current node, then the neighbour
                    if g[m] > g[n] + weight:
                        # Update the cost and parent
                        g[m] = g[n] + weight
                        parents[m] = n

        if n is None:
            print('Path does not exist!')
            return None

        # If the current node is the stop_node, reconstruct and return the path
        if n == stop_node:
            path = []
            while parents[n] != n:
                path.append(n)
                n = parents[n]
            path.append(start_node)
            path.reverse()
            print('Path found:', path)
            return path

        # Mark the current node as closed
        open_set.remove(n)
        closed_set.add(n)

    print('Path does not exist!')
    return None

# Define function to return neighbors and their distances from the passed node
def get_neighbors(v):
    return Graph_nodes.get(v, [])

# For simplicity, we'll consider heuristic distances given.
# This function returns the heuristic distance for each node.
def heuristic(n):
    H_dist = {
        'A': 11,
        'B': 6,
        'C': 99,
        'D': 1,
        'E': 7,
        'G': 0,
    }
    return H_dist.get(n, 0)

# Describe your graph here
Graph_nodes = {
    'A': [('B', 2), ('E', 3)],
    'B': [('C', 1), ('G', 9)],
    'C': None,
    'E': [('D', 6)],
    'D': [('G', 1)],
}

aStarAlgo('A', 'G')
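
With this graph and heuristic, the search expands A (f = 11), then B (f = 8), E (f = 10) and D (f = 10), relaxing G from cost 11 (via B) down to 10 (via D), so the expected output is:

Path found: ['A', 'E', 'D', 'G']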

OUTPUT:

RESULT:

Thus the Python program has been successfully implemented and the output is verified.
EX NO: 9 SIMPLE PROLOG PROBLEM
DATE:23/3/24

AIM:

To implement a simple Prolog problem.

ALGORITHM:

1. Define the relations (facts).
2. Construct rules to define the additional relationships (grandparent and sibling).
3. Find the relation between the given inputs by querying.

PROGRAM:

% Facts: Define family relationships
parent(john, jim).
parent(john, sara).
parent(jane, jim).
parent(jane, sara).
parent(bob, john).
parent(bob, lisa).
parent(lisa, ann).

% Rules: Define additional relationships using facts
grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
sibling(X, Y) :- parent(Z, X), parent(Z, Y),
                 X \= Y.  % X and Y are not the same person

% Query examples:
% - Who are the children of Jane?
%   Query: parent(jane, Child).
%
% - Who are the siblings of Jim?
%   Query: sibling(jim, Sibling).
%
% - Who are the grandparents of Ann?
%   Query: grandparent(Grandparent, ann).
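
For reference, the example queries should bind as follows (a sibling is reported once per shared parent, so a duplicate answer appears):

?- parent(jane, Child).
Child = jim ;
Child = sara.

?- sibling(jim, Sibling).
Sibling = sara ;
Sibling = sara.

?- grandparent(Grandparent, ann).
Grandparent = bob.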


OUTPUT:

RESULT:

Thus the program has been successfully implemented and the output is verified.
EX NO: 10 UNIFICATION AND RESOLUTION USING PROLOG
DATE: 6/4/24

AIM:

To implement unification and resolution using Prolog.

ALGORITHM:

UNIFICATION ALGORITHM:

1. Define the terms containing variables.
2. Apply unification among the terms.

PROGRAM:

% unify/2 takes two terms and unifies them if possible
unify(X, X) :- var(X), !.
unify(X, Y) :- var(X), !, X = Y.
unify(X, Y) :- var(Y), !, Y = X.
unify(X, Y) :- atomic(X), atomic(Y), !,
               X == Y.   % identical constants unify (needed so e.g. a with a succeeds)
unify(X, Y) :- compound(X), compound(Y), !,
               X =.. [F|Xs], Y =.. [F|Ys],
               unify_list(Xs, Ys).

% unify_list/2 unifies two lists of terms
unify_list([], []).
unify_list([X|Xs], [Y|Ys]) :- unify(X, Y), unify_list(Xs, Ys).
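
As a quick check after consulting the file in SWI-Prolog (example terms chosen for illustration):

?- unify(f(X, b), f(a, Y)).
X = a,
Y = b.

?- unify(f(a), g(a)).
false.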

RESOLUTION ALGORITHM:

1. Define the goal.
2. List the possible clauses.
3. Derive the goal with the support of the clauses.
PROGRAM:

% resolution/2 resolves a goal (a list of literals) with a list of clauses
resolution(Goal, Clauses) :-
    member(Clause, Clauses),
    resolve(Goal, Clause, Resolvent),
    ( Resolvent == []                 % empty resolvent: the goal is proved
    -> writeln('Goal is true.')
    ;  resolution(Resolvent, Clauses)
    ).

% resolve/3 resolves two clauses and produces the resolvent
resolve(Goal, Clause, Resolvent) :-
    select(Literal, Goal, RestGoal),
    select(not(Literal), Clause, RestClause),
    append(RestGoal, RestClause, Resolvent).
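
A small usage sketch (the clause encoding is chosen for illustration): to prove p from the clauses not(p) v q and not(q), the goal [p] resolves with [not(p), q] to give [q], which resolves with [not(q)] to give the empty clause:

?- resolution([p], [[not(p), q], [not(q)]]).
Goal is true.
true.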

OUTPUT:

RESULT:
Thus the program has been successfully implemented and the output is verified.
EX NO:11 IMPLEMENT THE BACKWARD CHAIN USING PROLOG
DATE: 13/4/24

AIM:

To implement backward chaining using Prolog.

ALGORITHM:

1. Define Facts and Rules


2. Define Backward Chaining Predicate
3. Query the Backward Chaining Predicate

PROGRAM:

% Step 1: Define Facts and Rules

% Facts
is_a(dog, mammal).
is_a(mammal, animal).
is_a(cat, mammal).

% Rules
is_a(X, animal) :- is_a(X, mammal).
is_a(X, mammal) :- is_a(X, dog).

% Step 2: Define Backward Chaining Predicate
% A minimal backward-chaining meta-interpreter (a sketch; it relies on
% clause/2 being allowed to inspect the consulted is_a/2 clauses, as in SWI-Prolog).
backward_chain(true) :- !.
backward_chain((A, B)) :- !, backward_chain(A), backward_chain(B).
backward_chain(Goal) :-
    % The goal succeeds if it is a fact (Body = true) or if the body
    % of a matching rule can itself be proven.
    clause(Goal, Body),
    backward_chain(Body).

% Step 3: Query the Backward Chaining Predicate
% Example query: Is a cat an animal?
% Query: backward_chain(is_a(cat, animal)).
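
Assuming SWI-Prolog (whose clause/2 may inspect ordinary consulted predicates), the example query should succeed: the rule is_a(X, animal) reduces the goal to is_a(cat, mammal), which is a fact.

?- backward_chain(is_a(cat, animal)).
true.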
OUTPUT:

RESULT:

Thus the program has been successfully implemented and the output is verified.
EX NO: 5 BLOCKS WORLD PROBLEM
DATE: 24/2/24

AIM:

To implement the blocks world problem.

ALGORITHM:

1. Define the class BlocksWorld to hold the board state.
2. Define the function display_state(state) to print the board.
3. Define the function is_goal_state(state, goal_state) to test for the goal.
4. Define the function generate_successors(state) to list the states reachable by sliding a block into an adjacent blank cell.

PROGRAM:

class BlocksWorld:
    def __init__(self, initial_state):
        self.state = initial_state

    def display_state(self):
        for row in self.state:
            print(" ".join(row))
        print()

    def is_goal_state(self, goal_state):
        return self.state == goal_state

    def generate_successors(self):
        successors = []
        for i in range(len(self.state)):
            for j in range(len(self.state[i])):
                if self.state[i][j] != " ":
                    # Try moving the block up
                    if i > 0 and self.state[i - 1][j] == " ":
                        new_state = [row.copy() for row in self.state]
                        new_state[i][j], new_state[i - 1][j] = new_state[i - 1][j], new_state[i][j]
                        successors.append(BlocksWorld(new_state))
                    # Try moving the block down
                    if i < len(self.state) - 1 and self.state[i + 1][j] == " ":
                        new_state = [row.copy() for row in self.state]
                        new_state[i][j], new_state[i + 1][j] = new_state[i + 1][j], new_state[i][j]
                        successors.append(BlocksWorld(new_state))
                    # Try moving the block left
                    if j > 0 and self.state[i][j - 1] == " ":
                        new_state = [row.copy() for row in self.state]
                        new_state[i][j], new_state[i][j - 1] = new_state[i][j - 1], new_state[i][j]
                        successors.append(BlocksWorld(new_state))
                    # Try moving the block right
                    if j < len(self.state[i]) - 1 and self.state[i][j + 1] == " ":
                        new_state = [row.copy() for row in self.state]
                        new_state[i][j], new_state[i][j + 1] = new_state[i][j + 1], new_state[i][j]
                        successors.append(BlocksWorld(new_state))
        return successors

def depth_first_search(initial_state, goal_state):
    stack = [BlocksWorld(initial_state)]
    visited = set()
    while stack:
        current_state = stack.pop()
        if current_state.is_goal_state(goal_state):
            print("Goal state reached:")
            current_state.display_state()
            return True
        visited.add(tuple(map(tuple, current_state.state)))
        successors = current_state.generate_successors()
        for successor in successors:
            if tuple(map(tuple, successor.state)) not in visited:
                stack.append(successor)
    print("Goal state not reachable.")
    return False

# Example usage:
initial_state = [
    ["A", "B", " "],
    ["C", " ", "D"],
]
goal_state = [
    [" ", "B", "A"],
    ["C", "D", " "],
]
depth_first_search(initial_state, goal_state)
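
A quick sanity check on the successor function: in the initial state the blanks are at (0,2) and (1,1), so exactly five moves are legal (B right, B down, C right, D up, D left), and generate_successors() should return five states.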

OUTPUT:

RESULT:

Thus the program has been successfully implemented and the output is verified.
EX NO: 6 IMPLEMENTATION OF SVM MODEL
DATE: 2/3/24

AIM:

To implement the SVM model.

ALGORITHM:

1. Select the data set.


2. Perform the data preprocessing
3. Check for the data set balance status
4. Split the data set into training and test sets
5. Perform the SVM model with the training set.
6. Apply the test data on trained SVM model.
7. Evaluate the performance of the model using evaluation metrics.

PROGRAM:

from sklearn import datasets
import matplotlib.pyplot as plt
import pandas as pd

# Loading dataset
df = datasets.load_iris()
print(df)

# Features
print(df.feature_names)
print(df.target_names)
data = df.data
target = df.target
print(data)
print(target)

# Check whether the data set is balanced (samples per class)
x = pd.Series(target).value_counts()
print(x)

import seaborn as sns
sns.barplot(x=x.index, y=x)

X = data
Y = target

# Split the data into train and test datasets.
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)

from sklearn.metrics import accuracy_score
from sklearn.svm import SVC

svm = SVC(gamma='auto')
svm.fit(X_train, y_train)
yhat = svm.predict(X_test)
print(y_test.T)
print(yhat)

svm_acc = accuracy_score(y_test, yhat)
print("Accuracy Score = ", accuracy_score(y_test, yhat))

from sklearn.metrics import classification_report
print(classification_report(y_test, yhat))

X_new = pd.DataFrame([[2, 2, 1, 0.2],
                      [4.9, 2.2, 2.8, 1.1],
                      [5.3, 2.5, 4.6, 1.9]])
prediction = svm.predict(X_new)
print("Prediction of Species:", prediction)
for i in prediction:
    print(df.target_names[i])
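
Note that train_test_split is called here without a fixed random_state, so the reported accuracy varies slightly between runs; for a reproducible record the split can be pinned (the seed value below is illustrative):

X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=42)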
OUTPUT:

RESULT:

Thus the program has been successfully implemented and the output is verified.
EX NO: 7 IMPLEMENTATION OF ANN WITH BP
DATE: 9/3/24

AIM:

To implement an ANN with backpropagation (BP).

ALGORITHM:

1 Import Libraries
2 Load Dataset
3 Prepare Dataset
4 Split train and test set
5 Initialize Hyperparameters and Weights
6 Helper Functions
7 Backpropagation Neural Network
8 Plot MSE and Accuracy

PROGRAM:

# Import Libraries
import numpy as np
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt

# Load dataset
data = load_iris()

# Get features and target
X = data.data
y = data.target

# Get dummy variable
y = pd.get_dummies(y).values
y[:3]

# Split data into train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=20, random_state=4)

# Initialize variables
learning_rate = 0.1
iterations = 5000
N = y_train.size

# number of input features
input_size = 4
# number of hidden layer neurons
hidden_size = 2
# number of neurons at the output layer
output_size = 3

results = pd.DataFrame(columns=["mse", "accuracy"])

# Initialize weights
np.random.seed(10)
# initializing weight for the hidden layer
W1 = np.random.normal(scale=0.5, size=(input_size, hidden_size))
# initializing weight for the output layer
W2 = np.random.normal(scale=0.5, size=(hidden_size, output_size))

# Helper functions (algorithm step 6)
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def mean_squared_error(y_pred, y_true):
    return ((y_pred - y_true) ** 2).sum() / (2 * y_pred.size)

def accuracy(y_pred, y_true):
    return (y_pred.argmax(axis=1) == y_true.argmax(axis=1)).mean()

# Backpropagation training loop (algorithm step 7)
for itr in range(iterations):
    # feedforward propagation
    # on hidden layer
    Z1 = np.dot(X_train, W1)
    A1 = sigmoid(Z1)
    # on output layer
    Z2 = np.dot(A1, W2)
    A2 = sigmoid(Z2)

    # Calculating error
    mse = mean_squared_error(A2, y_train)
    acc = accuracy(A2, y_train)
    results = pd.concat([results, pd.DataFrame([{"mse": mse, "accuracy": acc}])],
                        ignore_index=True)

    # backpropagation
    E1 = A2 - y_train
    dW1 = E1 * A2 * (1 - A2)
    E2 = np.dot(dW1, W2.T)
    dW2 = E2 * A1 * (1 - A1)

    # weight updates
    W2_update = np.dot(A1.T, dW1) / N
    W1_update = np.dot(X_train.T, dW2) / N
    W2 = W2 - learning_rate * W2_update
    W1 = W1 - learning_rate * W1_update

# Plot MSE and Accuracy
results.mse.plot(title="Mean Squared Error")
plt.show()
results.accuracy.plot(title="Accuracy")
plt.show()

# feedforward on the test set
Z1 = np.dot(X_test, W1)
A1 = sigmoid(Z1)
Z2 = np.dot(A1, W2)
A2 = sigmoid(Z2)
acc = accuracy(A2, y_test)
print("Accuracy: {}".format(acc))

OUTPUT:

RESULT:
Thus the program has been successfully implemented and the output is verified.
EX NO: 8a IMPLEMENTATION OF DECISION TREE
DATE:16/3/24

AIM:

To implement a decision tree program.

ALGORITHM:

1. Select the data set.
2. Define the class DecisionTree().
3. Build the tree with DecisionTree() and print it with print_tree().

PROGRAM:

from google.colab import drive
drive.mount('/content/gdrive')

import math
import pandas as pd
from operator import itemgetter

class DecisionTree:
    def __init__(self, df, target, positive, parent_val, parent):
        self.data = df
        self.target = target
        self.positive = positive
        self.parent_val = parent_val
        self.parent = parent
        self.childs = []
        self.decision = ''

    def _get_entropy(self, data):
        p = sum(data[self.target] == self.positive)
        n = data.shape[0] - p
        p_ratio = p / (p + n)
        n_ratio = 1 - p_ratio
        entropy_p = -p_ratio * math.log2(p_ratio) if p_ratio != 0 else 0
        entropy_n = -n_ratio * math.log2(n_ratio) if n_ratio != 0 else 0
        return entropy_p + entropy_n

    def _get_gain(self, feat):
        avg_info = 0
        for val in self.data[feat].unique():
            avg_info += self._get_entropy(self.data[self.data[feat] == val]) \
                        * sum(self.data[feat] == val) / self.data.shape[0]
        # gain = entropy of the current node's data minus the average information of the split
        return self._get_entropy(self.data) - avg_info

    def _get_splitter(self):
        self.splitter = max(self.gains, key=itemgetter(1))[0]

    def update_nodes(self):
        self.features = [col for col in self.data.columns if col != self.target]
        self.entropy = self._get_entropy(self.data)
        if self.entropy != 0:
            self.gains = [(feat, self._get_gain(feat)) for feat in self.features]
            self._get_splitter()
            residual_columns = [k for k in self.data.columns if k != self.splitter]
            for val in self.data[self.splitter].unique():
                df_tmp = self.data[self.data[self.splitter] == val][residual_columns]
                tmp_node = DecisionTree(df_tmp, self.target, self.positive, val, self.splitter)
                tmp_node.update_nodes()
                self.childs.append(tmp_node)

def print_tree(n):
    for child in n.childs:
        if child:
            print(child.__dict__.get('parent', ''))
            print(child.__dict__.get('parent_val', ''), '\n')
            print_tree(child)

df = pd.read_csv('/content/gdrive/MyDrive/colab /ML Lab/EX - 7 Decision Tree with ID3 Algorithm/PlayTennis.csv')
col = df.columns
print(col.values[-1])
dt = DecisionTree(df, 'Play Tennis', 'Yes', '', '')
dt.update_nodes()
print_tree(dt)
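
The program assumes the classic PlayTennis data set (the path above is the author's own Drive folder). Its CSV is expected to look like this, with the first rows shown for illustration:

Outlook,Temperature,Humidity,Wind,Play Tennis
Sunny,Hot,High,Weak,No
Sunny,Hot,High,Strong,No
Overcast,Hot,High,Weak,Yes
Rain,Mild,High,Weak,Yes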

OUTPUT:

RESULT:

Thus the program has been successfully implemented and the output is verified.
EX NO: 8b IMPLEMENTATION OF K-MEANS CLUSTERING
DATE: 16/3/24

AIM:

To implement K-means clustering.

ALGORITHM:

1. Select the data set.
2. Find a suitable K value using the elbow method.
3. Split the data into K clusters.
4. Display the clusters graphically.

PROGRAM:

import matplotlib.pyplot as plt
from kneed import KneeLocator
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Generate a synthetic data set with three centers
features, true_labels = make_blobs(n_samples=200, centers=3, cluster_std=2.75, random_state=42)
print(features[:5])
print(true_labels[:5])
plt.scatter(features[:, 0], features[:, 1], s=50)

# Standardize the features
scaler = StandardScaler()
scaled_features = scaler.fit_transform(features)
print(scaled_features[:5])
plt.scatter(scaled_features[:, 0], scaled_features[:, 1], s=50)

# Cluster with k = 3
kmeans = KMeans(init="random", n_clusters=3, n_init=10, max_iter=300, random_state=42)
kmeans.fit(scaled_features)
y_pred = kmeans.predict(scaled_features)
plt.scatter(scaled_features[:, 0], scaled_features[:, 1], c=y_pred, s=50, cmap='viridis')
centers = kmeans.cluster_centers_
plt.scatter(centers[:, 0], centers[:, 1], c='black', s=200, alpha=0.5)
print(kmeans.inertia_)
print(kmeans.cluster_centers_)
print(kmeans.n_iter_)

# Elbow method: run K-means for k = 1..10 and record the SSE
kmeans_kwargs = {"init": "random", "n_init": 10, "max_iter": 300, "random_state": 42}
sse = []
for k in range(1, 11):
    kmeans = KMeans(n_clusters=k, **kmeans_kwargs)
    kmeans.fit(scaled_features)
    sse.append(kmeans.inertia_)

plt.style.use("fivethirtyeight")
plt.plot(range(1, 11), sse)
plt.xticks(range(1, 11))
plt.xlabel("Number of Clusters")
plt.ylabel("SSE")
plt.show()

# Locate the elbow programmatically
kl = KneeLocator(range(1, 11), sse, curve="convex", direction="decreasing")
print(kl.elbow)

# Re-cluster with the detected elbow value
kmeans = KMeans(init="random", n_clusters=kl.elbow, n_init=10, max_iter=300, random_state=42)
kmeans.fit(scaled_features)
y_pred = kmeans.predict(scaled_features)
plt.scatter(scaled_features[:, 0], scaled_features[:, 1], c=y_pred, s=50, cmap='viridis')
centers = kmeans.cluster_centers_
plt.scatter(centers[:, 0], centers[:, 1], c='black', s=200, alpha=0.5)
plt.show()
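
Since make_blobs generates three centers, the elbow reported by KneeLocator should be k = 3, and the final scatter plot should show three clusters around their black centroids.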
OUTPUT:

RESULT:

Thus the program has been successfully implemented and the output is verified.
