
UNIVERSITY OF ENGINEERING AND TECHNOLOGY,

LAHORE (FAISALABAD CAMPUS)

Lab Manual

Submitted to: Mr. Asim Naveed

Submitted by: Mehroze Javed

Registration No: 2022-CS-826

Course: Artificial Intelligence

Department: Computer Science

Semester: 5th Semester

Table of Contents

Lab # 01: Python Basics
Lab # 02: String Manipulation
Lab # 03: Basic Python Programming (Conditionals, Loops, Lists, and Functions)
Lab # 04: Foundational Python Programming
Lab # 05: Understanding of Decision Tree
Lab # 06: Graph Traversal Techniques: DFS and BFS
Lab # 07: Uniform Cost Search Algorithm
Lab # 08: Iterative Deepening
Lab # 10: Best First Search
Lab # 11: A*
Lab # 13: Naive Bayes
Lab # 14: Multilayer Perceptron
Lab # 15: Adversarial Search

Lab # 01

Lab Topic: Python Basics

1.1. Objective: The objective of this lab was to introduce fundamental concepts of basic Python programming, such as printing output, writing comments, declaring variables, type casting, and dynamic typing. The lab aimed to help students become familiar with Python's basic syntax and variable behavior.
1.2. Introduction: Python is a versatile and powerful high-level programming language widely used for many types of programming tasks. Its simple, readable syntax makes it a great choice for beginners, while its extensive libraries make it popular among professionals in web development, data analysis, and artificial intelligence.
In this lab, we explored several foundational concepts:

- Printing to the console using print()
- Writing comments to document code
- Defining and using variables
- Dynamic typing and type conversion

Each of these elements plays an essential role in Python programming, and understanding them is crucial for
writing effective code.

1.3. Software: For performing these lab tasks we are using Google Colab.

1.4. Code Explanation: This lab was all about the basics of Python operations. The major parts are broken
down below:
1.4.1. Printing Outputs: Python’s print() function allows us to display text or other types of output in the
console. The basic syntax involves passing a string or variable to the print() function, which then outputs it
to the console.
Code:
print("Hello, World!")

Output:
Hello, World!

1.4.2. Comments: Comments in Python help in documenting the code, making it easier to understand.
Python supports two types of comments:
- Single-line comments
- Multi-line comments
Code:
#This is a single line comment
print("Hello, World!")
"""
This is a comment
written in
more than just one line
"""
print("Hello, World!")
1.4.3. Variables: Variables are containers for storing data values.
Code:
#variables
x=5
y = "John"
print(x)
print(y)
Output:
5
John

1.4.4. Type Casting: To explicitly set the type of a variable, Python provides casting functions such as int(),
float(), and str().
Code:
#If you want to specify the data type of a variable, this can be done with casting.
x = str(3) # x will be '3'
y = int(3) # y will be 3
z = float(3) # z will be 3.0

1.4.5. Case-Sensitive Variables: Variable names are case-sensitive.


Code:
#Variable names are case-sensitive.
a=4
A = "Sally"
#A will not overwrite a

print(A)
print(a)
Output:
Sally
4

1.4.6. Slicing: You can return a range of characters by using the slice syntax.
Code:
#slicing string
b = "Hello, World!"
print(b[2:5])
Output:
llo

1.5. The Overall Lab Code:

print("Hello, World!")

#This is a single line comment


print("Hello, World!")

"""
This is a comment
written in
more than just one line
"""
print("Hello, World!")

#variables
x=5
y = "John"
print(x)
print(y)

#Variables do not need to be declared with any particular type, and can even change type after they have been set.
x = 4 # x is of type int
x = "Sally" # x is now of type str
print(x)

#If you want to specify the data type of a variable, this can be done with casting.
x = str(3) # x will be '3'
y = int(3) # y will be 3
z = float(3) # z will be 3.0

#You can get the data type of a variable with the type() function.
x=5
y = "John"
print(type(x))
print(type(y))

#String variables can be declared either by using single or double quotes


x = "John"
print(x)
# is the same as
x = 'John'
print(x)

#Variable names are case-sensitive.
a=4
A = "Sally"
#A will not overwrite a

print(A)
print(a)

#Legal variable names


myvar = "John"
my_var = "John"
_my_var = "John"
myVar = "John"
MYVAR = "John"
myvar2 = "John"
print(myvar)
print(my_var)
print(_my_var)
print(myVar)
print(MYVAR)
print(myvar2)

#Python allows you to assign values to multiple variables in one line


x, y, z = "Orange", "Banana", "Cherry"
print(x)
print(y)
print(z)

x = y = z = "Orange"
print(x)
print(y)
print(z)

#Unpack a Collection
fruits = ["apple", "banana", "cherry"]
x, y, z = fruits
print(x)
print(y)
print(z)

#global variable
x = "awesome"

def myfunc():
    print("Python is " + x)

myfunc()

x = "awesome"

def myfunc():
    x = "fantastic"
    print("Python is " + x)

myfunc()

print("Python is " + x)

#If you use the global keyword, the variable belongs to the global scope
def myfunc():
    global x
    x = "fantastic"

myfunc()

print("Python is " + x)

#Import the random module, and display a random number between 1 and 9
import random

print(random.randrange(1, 10))

#casting integers
x = int(1) # x will be 1
y = int(2.8) # y will be 2
z = int("3") # z will be 3
print(x)
print(y)
print(z)

#casting floats
x = float(1)
y = float(2.8)
z = float("3")
w = float("4.2")
print(x)
print(y)
print(z)
print(w)

#casting strings
x = str("s1")
y = str(2)
z = str(3.0)
print(x)
print(y)
print(z)

#Assign String to a Variable


a = "Hello"
print(a)

#Multiline Strings
a = """Lorem ipsum dolor sit amet,
consectetur adipiscing elit,
sed do eiusmod tempor incididunt
ut labore et dolore magna aliqua."""
print(a)
#slicing string
b = "Hello, World!"
print(b[2:5])

print(10 + 5)

#boolean
a = 200
b = 33

if b > a:
    print("b is greater than a")
else:
    print("b is not greater than a")

1.6. Conclusion: In this lab, we covered the basics of Python programming, including printing output, using
comments, and working with variables. We learned that Python uses dynamic typing, meaning that variables
can change type during execution. Moreover, we explored type casting and how to explicitly set a variable's
type when necessary. By understanding these basic principles, we are now able to build more complex
Python programs in the coming labs.

Lab # 02

Lab Topic: String Manipulation

2.1. Objective: The main objective of this lab is to learn string manipulation techniques in Python, such as
string capitalization, length calculation, centering, justification, finding characters, counting occurrences,
and Unicode encoding.
2.2. Introduction: In this lab, we explored several string manipulation techniques, which are fundamental
concepts for handling text data in Python. These techniques include:

- Capitalizing strings using capitalize()
- Calculating string length with len()
- Justifying text with center(), rjust(), and ljust()
- Finding and counting characters with find() and count()
- Checking string properties such as isalnum(), isalpha(), and isspace()
- Transforming case using methods like upper(), lower(), and swapcase()
- Encoding strings in UTF-8

Each of these elements is essential for string manipulation in Python, and mastering them is crucial for
working with text data effectively in programming tasks.

2.3. Code Explanation: This lab was all about string manipulation in Python. The major parts are broken down below:
2.3.1. String Capitalizing: This converts the first letter of the string to uppercase.
Code:
str ="mehroze"
str.capitalize()
Output:
'Mehroze'

2.3.2. String Length Calculation: The len() function is used to determine the length of a string, counting
the characters including spaces.
Code:
str="mehroze is a good girl"
print(len(str))
Output:
22

2.3.3. String Centering: This centers the string and pads it with the character to fit a total width of desired
characters.
Code:
str="mehroze is a good girl"
print(len(str))
str.center(24,'h')

Output:
22
'hmehroze is a good girlh'

2.3.4. String Justification: This right-justifies the string by padding it with character on the left to fit all to
desired characters.
Code:
str="mehroze"
len(str)
str.rjust(8,'a')
Output:
'amehroze'

2.3.5. Counting Occurrences of Characters: This counts how many times 'e' occurs in the string.
Code:
str="mehroze is my name"
str.count('e',0,22)
Output:
3

2.3.6. Check String Properties: Check if strings start or end with specific characters.
Code:
str1="i like darkness"
print(str1.startswith("i",0,100))
print(str1.endswith("like",0,100))
Output:
True
False

2.3.7. Checking String Case: islower() returns True only when all cased characters in the string are lowercase.
Code:
str="i am MEHROZE"
str1="i am mehroze"
print(str.islower())
print(str1.islower())
Output:
False
True

2.3.8. Encoding Unicode strings:


Code:
# unicode string
string = 'anaconda!'

# print string
print('The string is:', string)

# default encoding to utf-8


string_utf = string.encode()

# print result
print('The encoded version is:', string_utf)
Output:
The string is: anaconda!
The encoded version is: b'anaconda!'

2.4. Overall Code of this Lab:


str ="mehroze"
str.capitalize()

str="mehroze is a good girl"


print(len(str))
str.center(24,'h')

str="mehroze is a good girl."


len(str)

str="mehroze"
len(str)
str.rjust(8,'a')

str="mehroze is a good girl"


len(str)

str="mehroze is my name"
str.count('e',0,22)

str="Mehroze"
str.count('h',0,7)

str1="i like darkness"


print(str1.startswith("i",0,100))
print(str1.endswith("like",0,100))

str="mehroze\tjaved"
str.expandtabs(100)

str="my name is mehroze"


print(len(str))
str.find("e",0,16)

str="my name is mehroze"


print(len(str))

str="mehroze11 is a student"
print(str.isalnum())

str1="haven"
print(str1.isalnum())

str=" mehroze is a student"


str1="haven"
print(str.isalnum())
print(str1.isalnum())

str="mehrozeismyid"
print(str.isalpha())

str="mehroze"
print(str.isalpha())

str="mehroze57"
print(str.isdigit())

str="2345"
print(str.isdigit())

str="i am MEHROZE"
str1="i am mehroze"
print(str.islower())
print(str1.islower())

str=" "
str1="byeeeee"
print(str.isspace())
print(str1.isspace())

str="This Is Excellent"
str1="This is excellent"
print(str.istitle())
print(str1.istitle())

s = "_";
seq = ("I", "AM", "Mehroze");
print( s.join( seq ))

str="mehroze is calm"
len(str)

str="mehroze"
len(str)
str.ljust(8,'h')

str="826 is my registration number"


print(len(str))
str.ljust(31,'1')

str="I Am MeHrOze"
print(str.lower())
print(str.upper())

str="wwwwwwwhow are you sir?rrrrr"


print(str.lstrip('h'))
print(str.rstrip('r'))

str="i am amina"
print(max(str))

str="i am amina"

print(min(str))

str="we have seen different people"


str.replace('e','$',9)

str="222222 we are good 222222"


str.strip('2')

str="222222 we are good 222222"


str.rstrip('2')

str="222222 we are good 222222"


str.lstrip('2')

str="i am amina"
print(str.split('-',3))
print(str.split('a',2))

str="i am amina"
str.title()

str="i am amina"
str1="I AM AMINA"
print(str.swapcase())
print(str1.swapcase())

str="amina"
str.zfill(10)

# unicode string
string = 'amina!'

# print string
print('The string is:', string)

# default encoding to utf-8


string_utf = string.encode()

# print result
print('The encoded version is:', string_utf)

# unicode string
string = 'pythön!'

# print string
print('The string is:', string)

# default encoding to utf-8


string_utf = string.encode()

# print result
print('The encoded version is:', string_utf)

# unicode string
string = 'anaconda!'

# print string
print('The string is:', string)

# default encoding to utf-8


string_utf = string.encode()

# print result
print('The encoded version is:', string_utf)

2.5. Conclusion: This lab explored a wide range of string manipulation techniques in Python, including
capitalization, length calculation, justification, encoding, and case conversion. These skills are crucial for
text processing tasks in various applications of AI and software development.

Lab # 03

Lab Topic: Basic Python Programming (Conditionals, Loops, Lists, and Functions)


3.1. Objective: The objective of this lab is to provide hands-on experience in basic Python programming
concepts such as conditionals (if-else statements), list operations, loops (for and while), and defining user
functions. The exercises focus on understanding how to create simple decision-making programs,
manipulate lists, and use loops for iteration. It also aims to strengthen problem-solving skills by
implementing custom logic using functions with different types of parameters.
3.2. Introduction: In this lab, we explore essential programming concepts to build a solid foundation in
Python. Students will learn how to work with conditional statements to control program flow based on user
input. We will also manipulate lists, iterate through data using loops, and define user functions that perform
specific tasks. The lab introduces different real-life scenarios such as admission eligibility based on marks,
managing friend lists, and building customized message systems. By working through these exercises,
students will gain confidence in writing structured Python programs.
3.3. Code:
name = str(input("Enter your good name : "))
if(name=='Mehroze'):
print("Welcome",name)
else:
print("You are not Eligible to enter")

name = str(input("Enter your name : "))


if(name=='Mehroze'):
print("Welcome",name)
else:
print("You are not Eligible to enter")

print("+++++ UET FAISALABAD +++++")


print("\nWelcome to Aggregate System")
print("\nYou can have idea about your admission in diffetrent departments of University")
name= input("\nPlease Enter Your Name ")
Marks= input("Plaease Enter Your Marks ")
Marks=int(Marks)
if Marks>=900:
print(name+" Congratulations Your admission is done in Mechanical engineering")

elif Marks>=850:
print(name+" Congratulations Your admission is done in Electrical engineering")

elif Marks>=800:
print(name+" Congratulations Your admission is done in Civil engineering")

elif Marks>=750:
print(name+" Congratulations Your admission is done in Computer Science and engineering")

elif Marks>=700:
print(name+" Congratulations Your admission is done in Biomediacal engineering")

elif Marks<700:
15
print(name+" Sorry your Marks are too low to get admission in any department")

friends = ["Areej", "Hajra", "Areeba","Zainab"]


print(friends)

friends = ["Areej", "Hajra", "Areeba","Zainab"]


print(friends[0])

friends = ["Areej", "Hajra", "Areeba","Zainab"]


print(friends[1])

friends = ["Areej", "Hajra", "Areeba","Zainab"]


print(friends[-1])

friends = ["Areej", "Hajra", "Areeba","Zainab"]


print(friends[-2])

friends = ["Areej", "Hajra", "Areeba","Zainab", "Maheen", "Amna"]


print(friends[2:5])

friends = ["Areej", "Hajra", "Areeba","Zainab", "Maheen", "Amna"]


print(friends[2:4])

friends = ["Areej", "Hajra", "Areeba","Zainab", "Maheen", "Amna"]


print(friends[1:5])

friends = ["Areej", "Hajra", "Areeba","Zainab", "Maheen", "Amna"]


friends[0]="Asma"
print(friends)

friends = ["Areej", "Hajra", "Areeba","Zainab", "Maheen", "Amna"]


friends[3]="Asma"
print(friends)

friends = ["Areej", "Hajra", "Areeba","Zainab", "Maheen", "Amna"]


friends.append("Bakhtawar")
print(friends)

depart = ["CS", "Electrical", "Mechanical","Civil"]


for x in depart:
    print(x)

for x in "Engineering":
print(x)

for x in "cs":
print(x)

depart = ["CS", "Electrical", "Mechanical","Civil"]


for x in depart:
print(x)
if x=="Mechanical":
break

16
depart = ["CS", "Electrical", "Mechanical","Civil"]
for x in depart:
print(x)
if x=="CS":
break

depart = ["CS", "Electrical", "Mechanical","Civil"]


for x in depart:

if x=="Mechanical":
continue
print(x)

depart = ["CS", "Electrical", "Mechanical","Civil"]


for x in depart:

if x=="CS":
continue
print(x)

adj = ["red", "big", "tasty"]


fruits = ["apple", "banana", "cherry"]

for x in adj:
for y in fruits:
print(x, y)

name = ["Zainab", "Maheen", "Amna"]


fruits = ["1", "2", "3"]

for x in name:
for y in fruits:
print(x, y)

i = 1
while i <= 5:
    print(i)
    i += 1

i = 1
while i < 6:
    print(i)
    if i == 4:
        break
    i += 1

i = 0
while i < 6:
    i += 1
    if i == 3:
        continue
    print(i)

i = 1
while i < 6:
    print(i)
    i += 1
else:
    print("i is no longer less than 6")

def my_function():
    print("Function is called")

my_function()

def my_class():
    print("class is called")

my_class()

def reply(name):
    if (name=='Mehroze'):
        print('welcome Sir')
    elif (name=="Javed"):
        print("Boss is waiting for you.")
    else:
        print("You may come tomorrow.")

name = str(input("please tell your name :"))
reply(name)

def my_function(fname):
    print(fname + " is a good girl")

my_function("ABC")
my_function("EFG")
my_function("HIJ")

def my_function(name, prof):
    print(name + " is a " + prof)

my_function("Mehroze", "scholar")

def find_near(*country):
    print("The nearest country is " + country[2])

find_near("Italy", "Pakistan", "Malta")

def find_near(*Mehroze):
    print("Mehroze is " + Mehroze[1])

find_near("Russian", "Indian", "Australian")

def friends(frnd3, frnd2, frnd1):
    print("The richest friend is " + frnd3)

friends(frnd1 = "ABC", frnd2 = "GDF", frnd3 = "XFJG")

""" def my_function(country = "Arabia"):
    print("I am from " + country)

my_function("Pakistan")
my_function("India")
my_function()
my_function("Turkey")
"""

3.4. Conclusion: This lab enhanced the understanding of key programming concepts, including conditionals,
loops, and functions, which are critical for solving more complex problems in AI and software development.
Through practical exercises, students gained experience in making decisions programmatically, handling
lists, and creating reusable functions. These skills are fundamental for progressing into more advanced
topics in AI and automation, ensuring students are well-prepared for future projects.

Lab # 04

Lab Topic: Foundational Python programming


4.1. Objective: The objective of this combined lab is twofold: to practice fundamental programming skills
by applying conditionals, loops, and list operations, and to learn how to implement functions for basic
problem-solving.

4.2. Code:

name = input(("Enter Your Name:"))


if name == "Mehroze":
print(name + " " "Welcome to Our University.")
else:
print(name + " " "You are not Elible to Enter in Our Universisty.")

print("******UET FSD*******")
print("\n Welcome to Aggregate System")
print("\nYou can have idea about your admission in diffetrent departments of University")
name= str(input("\nPlease Enter Your Name "))
Marks= int(input("Plaease Enter Your Marks "))
Marks=int(Marks)
if Marks>=900:
print(name+" Congratulations Your admission is done in Computer Science.")

elif Marks>=850:
print(name+" Congratulations Your admission is done in Electrical Engineering")

elif Marks>=800:
print(name+" Congratulations Your admission is done in Chemical Engineering.")

elif Marks>=750:
print(name+" Congratulations Your admission is done in Textile Engineering.")

elif Marks>=700:
print(name+" Congratulations Your admission is done in Chemistry")

elif Marks<700:
print(name+" Sorry your Marks are too low to get admission in any department")

mylist = ['Mehroze', 'Alia', 'Alina', 'Mehar', 'Rafia', 'ABC']

mylist[1] #access specific item

mylist[:2]

mylist[1:5]

mylist[2:]

mylist[0:5:2] #acessing different items using silcing

mylist[3] = 'Mehroze' #change value in list
mylist

mylist2 = [1,2,3,44,4]

mylist.append(mylist2)
mylist

#for loops

mylist = ['Mehroze', 'Alia', 'Alina', 'Mehar', 'Rafia', 'ABC']

for x in range(len(mylist)):
    print(mylist[x]) #print all element of the list

str1 = 'Mehroze' #print each char of the string

for x in str1:
    print(x)

depart = ["CS", "Electrical", "Mechatronics","Textile"]


for x in depart:
print(x)
if x=="Mechatronics":
break # The break Statement. This is used to stop loop traversing when a specific condition comes
true.

depart = ["CS", "Electrical", "Mechatronics","Textile"]


for x in depart:
if x=="Mechatronics":
continue # The continue Statement. This is used to skip a specific value in the list and then
continue.
print(x)

#nested loop

for x in range(3):
    for y in range(3):
        print(x,y)

#while loop
x = 1
while x < 6:
    print(x)
    x += 1

# functions

def my_function():
    print("Hello from a function")

my_function() #func with no arguments

def my_function(fname):
    print(fname + "Javed")

my_function("Mehroze")
my_function("Alina")
my_function("Amna") #func with arguments

4.3. Conclusion: This lab provided hands-on experience with Python's foundational elements such as
conditionals, loops, lists, and functions.

Lab # 05

Lab Topic: Understanding of Decision Tree

5.1. Objective: The lab aims to predict student performance using decision tree classifiers, focusing on data
pre-processing, model training with Gini Index and Entropy, and evaluating performance. Students will also
learn to visualize and interpret decision trees, emphasizing key machine learning processes like data
preparation and model evaluation.

5.2. Introduction: This lab applies decision tree classifiers to predict student performance using features
like study hours, attendance, previous grades, and socioeconomic background. It covers key machine
learning concepts such as data-set preparation, feature encoding, and model evaluation through metrics like
accuracy and confusion matrices. The performance of two decision tree models, based on Gini Index and
Entropy, is trained and compared.
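For reference, the two split criteria compared in this lab can be written as follows (standard textbook definitions, not quoted from the lab text), where p_i is the proportion of samples of class i among the C classes at a node:

\mathrm{Gini}(t) = 1 - \sum_{i=1}^{C} p_i^{2}
\mathrm{Entropy}(t) = - \sum_{i=1}^{C} p_i \log_2 p_i

A pure node (all samples from one class) scores 0 under both measures, and the tree prefers splits that reduce these impurity values the most.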

5.3. Code:

# Importing the required packages


import numpy as np
import pandas as pd
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, plot_tree
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder

# Function to import the dataset


def importdata():
    # Corrected: Use read_excel instead of read_csv for xlsx files
    balance_data = pd.read_excel(r"/content/drive/MyDrive/stu.xlsx")

    # Displaying dataset information
    print("Dataset Length: ", len(balance_data))
    print("Dataset Shape: ", balance_data.shape)
    print("Dataset: ", balance_data.head())

    return balance_data

# Function to encode categorical variables
def encode_data(balance_data):
    # Creating a LabelEncoder object
    label_encoder = LabelEncoder()

    # Encoding categorical columns manually
    balance_data['Previous_Grades'] = label_encoder.fit_transform(balance_data['Previous_Grades'])
    balance_data['Extracurricular_Participation'] = label_encoder.fit_transform(balance_data['Extracurricular_Participation'])
    balance_data['Socioeconomic_Background'] = label_encoder.fit_transform(balance_data['Socioeconomic_Background'])
    balance_data['Parental_Involvement'] = label_encoder.fit_transform(balance_data['Parental_Involvement'])

    return balance_data

# Function to split the dataset into training and testing sets
def splitdataset(balance_data):
    # Separating the features (X) and target (Y)
    X = balance_data.iloc[:, :-1]  # Assuming all but the last column are features
    Y = balance_data.iloc[:, -1]   # Assuming the last column is the target

    # Splitting the dataset into training and testing sets (70% training, 30% testing)
    X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=100)

    return X, Y, X_train, X_test, y_train, y_test

# Function to train using the Gini Index
def train_using_gini(X_train, X_test, y_train):
    # Creating the classifier object
    clf_gini = DecisionTreeClassifier(criterion="gini", random_state=100, max_depth=3, min_samples_leaf=5)

    # Performing training
    clf_gini.fit(X_train, y_train)
    return clf_gini

# Function to train using the Entropy
def train_using_entropy(X_train, X_test, y_train):
    # Decision tree with entropy
    clf_entropy = DecisionTreeClassifier(criterion="entropy", random_state=100, max_depth=3, min_samples_leaf=5)

    # Performing training
    clf_entropy.fit(X_train, y_train)
    return clf_entropy

# Function to make predictions
def prediction(X_test, clf_object):
    y_pred = clf_object.predict(X_test)
    print("Predicted values:")
    print(y_pred)
    return y_pred

# Function to calculate accuracy
def cal_accuracy(y_test, y_pred):
    print("Confusion Matrix: \n", confusion_matrix(y_test, y_pred))
    print("Accuracy : ", accuracy_score(y_test, y_pred) * 100)
    print("Report : \n", classification_report(y_test, y_pred))

# Function to plot the decision tree
def plot_decision_tree(clf_object, feature_names, class_names):
    plt.figure(figsize=(15, 10))
    plot_tree(clf_object, filled=True, feature_names=feature_names, class_names=class_names, rounded=True)
    plt.show()

# Main function
if __name__ == "__main__":
    # Import the dataset
    data = importdata()

    # Encode categorical data into numerical values
    data = encode_data(data)

    # Splitting the dataset into train and test sets
    X, Y, X_train, X_test, y_train, y_test = splitdataset(data)

    # Training using Gini Index
    clf_gini = train_using_gini(X_train, X_test, y_train)

    # Training using Entropy
    clf_entropy = train_using_entropy(X_train, X_test, y_train)

    # Visualizing the Decision Trees
    plot_decision_tree(clf_gini, X.columns, ['Fail', 'Pass'])  # Replace class names as per your dataset
    plot_decision_tree(clf_entropy, X.columns, ['Fail', 'Pass'])

    # Results Using Gini Index
    print("Results Using Gini Index:")
    y_pred_gini = prediction(X_test, clf_gini)
    cal_accuracy(y_test, y_pred_gini)

    # Results Using Entropy
    print("Results Using Entropy:")
    y_pred_entropy = prediction(X_test, clf_entropy)
    cal_accuracy(y_test, y_pred_entropy)

# Importing necessary libraries


import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, classification_report
from sklearn import tree

# Load the dataset


data = {
"Student_ID": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20],
"Hours_Study_Per_Week": [10, 5, 12, 3, 8, 4, 9, 2, 7, 6, 15, 1, 11, 5, 13, 7, 14, 4, 10, 3],
"Attendance_Percentage": [90, 75, 95, 60, 85, 70, 88, 50, 80, 78, 98, 40, 93, 73, 90, 80, 97, 65, 85,
55],
"Previous_Grades": ['A', 'C', 'B', 'D', 'B', 'C', 'A', 'D', 'B', 'C', 'A', 'F', 'B', 'D', 'A', 'C', 'A', 'D', 'B',
'F'],
"Extracurricular_Participation": ['Yes', 'No', 'Yes', 'No', 'Yes', 'No', 'Yes', 'No', 'Yes', 'No', 'Yes', 'No',
'Yes', 'No', 'Yes', 'No', 'Yes', 'No', 'Yes', 'No'],
"Socioeconomic_Background": ['High', 'Medium', 'Low', 'Low', 'Medium', 'Medium', 'High', 'Low',
'Medium', 'Medium', 'High', 'Low', 'High', 'Medium', 'High', 'Medium', 'High', 'Low', 'Medium', 'Low'],
"Parental_Involvement": ['High', 'Medium', 'Low', 'Low', 'High', 'Medium', 'High', 'Low', 'Medium',
'Low', 'High', 'Low', 'High', 'Low', 'High', 'Medium', 'High', 'Low', 'High', 'Low'],
"Test_Scores": [85, 65, 78, 50, 80, 55, 90, 45, 75, 60, 92, 30, 88, 52, 87, 65, 91, 48, 82, 42],
"Performance": ['Pass', 'Fail', 'Pass', 'Fail', 'Pass', 'Fail', 'Pass', 'Fail', 'Pass', 'Fail', 'Pass', 'Fail',
'Pass', 'Fail', 'Pass', 'Fail', 'Pass', 'Fail', 'Pass', 'Fail']
}

# Convert the dataset into a pandas DataFrame


df = pd.DataFrame(data)

# Convert categorical variables into numerical values


df['Previous_Grades'] = df['Previous_Grades'].map({'A': 5, 'B': 4, 'C': 3, 'D': 2, 'F': 1})
df['Extracurricular_Participation'] = df['Extracurricular_Participation'].map({'Yes': 1, 'No': 0})
df['Socioeconomic_Background'] = df['Socioeconomic_Background'].map({'High': 3, 'Medium': 2,
'Low': 1})
df['Parental_Involvement'] = df['Parental_Involvement'].map({'High': 3, 'Medium': 2, 'Low': 1})
df['Performance'] = df['Performance'].map({'Pass': 1, 'Fail': 0})

# Define features (X) and target (Y)


X = df.drop(['Student_ID', 'Performance'], axis=1)
Y = df['Performance']

# Split the data into training and testing sets


X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=42)

# Create and train the Decision Tree Classifier


clf = DecisionTreeClassifier(criterion='gini', max_depth=3, random_state=42)
clf.fit(X_train, y_train)

# Predict the performance on the test set


y_pred = clf.predict(X_test)

# Evaluate the model


accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
print("Classification Report:\n", classification_report(y_test, y_pred))

# Visualize the Decision Tree


tree.plot_tree(clf, feature_names=X.columns, class_names=['Fail', 'Pass'], filled=True)

5.4. Conclusion: Through this lab, students gained practical knowledge of building and evaluating decision
tree classifiers using Python. They learned how to preprocess data, split it into training and testing sets, and
train models using different criteria. Visualizing the decision trees allowed students to understand how the
model makes decisions. This exercise emphasized the importance of feature selection and model evaluation
in machine learning tasks, equipping students with essential skills for future projects.

Lab # 06

Lab Topic: Graph Traversal Techniques: DFS and BFS

6.1. Objective: The objective of this lab is to implement and compare two fundamental graph traversal
algorithms: Depth-First Search (DFS) and Breadth-First Search (BFS). By utilizing these algorithms, we aim
to explore the structure of a graph and demonstrate how each algorithm traverses nodes using different
strategies. Additionally, the lab seeks to reinforce the understanding of stacks and queues as data structures
used in these algorithms.

6.2. Introduction: Graph traversal algorithms are essential for exploring the nodes and edges of a graph data
structure. Two primary methods for graph traversal are Depth-First Search (DFS) and Breadth-First Search
(BFS). DFS explores as far as possible along a branch before backtracking, utilizing a stack data structure,
which can be implemented either explicitly or through recursion. Conversely, BFS explores all neighbors at
the present depth before moving on to nodes at the next depth level, relying on a queue data structure. This
lab involves implementing both algorithms to traverse a simple undirected graph defined by an adjacency
list. By comparing the outputs of DFS and BFS, we gain insights into their respective behaviors and use
cases.
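Since the listing in 6.3 implements DFS with an explicit stack, the recursive formulation mentioned above can be sketched as follows (an illustration on the same example graph, not part of the lab listing):

# Minimal recursive DFS sketch on the same example graph used below
graph = {0: [1, 2], 1: [2, 0], 2: [1, 0, 3, 4], 3: [2], 4: [2]}

def dfs_recursive(node, visited=None):
    if visited is None:
        visited = set()
    if node in visited:
        return
    print(node)           # visit the node
    visited.add(node)
    for neighbor in graph[node]:
        dfs_recursive(neighbor, visited)  # the call stack replaces the explicit stack

dfs_recursive(1)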

6.3. Code of DFS:

graph = {
    0: [1, 2],
    1: [2, 0],
    2: [1, 0, 3, 4],
    3: [2],
    4: [2]
}

start_node = 1

stack = [start_node]
visited = set()

# DFS
while stack:
    node = stack.pop()

    if node not in visited:
        print(node)
        visited.add(node)

        for neighbor in reversed(graph[node]):
            if neighbor not in visited:
                stack.append(neighbor)

6.4. Output:
1
2
0
3
4

6.5. Code of BFS:

from collections import deque

graph = {
    0: [1, 2],
    1: [2, 0],
    2: [1, 0, 3, 4],
    3: [2],
    4: [2]
}

start_node = 1

queue = deque([start_node])
visited = set()

# BFS
while queue:
    node = queue.popleft()

    if node not in visited:
        print(node)
        visited.add(node)

        for neighbor in graph[node]:
            if neighbor not in visited:
                queue.append(neighbor)

6.6. Output:
1
2
0
3
4

6.7. Conclusion: In this lab, we successfully implemented Depth-First Search (DFS) and Breadth-First
Search (BFS) to traverse a graph. The results demonstrated the distinct approaches of both algorithms: DFS
delves deep into the graph structure before exploring other branches, while BFS systematically explores all
neighboring nodes at the current level before proceeding. Understanding these traversal techniques is
fundamental for tackling more complex graph-related problems, as they form the basis for various
applications, including pathfinding, web crawling, and social network analysis. The choice between DFS
and BFS depends on the specific requirements of the problem, including memory efficiency and the
structure of the graph being traversed.

Lab # 07

Lab Topic: Uniform Cost Search Algorithm

7.1. Objective: The objective of this lab is to implement the Uniform Cost Search (UCS) algorithm to find
the least-cost path between a start node and a goal node in a weighted graph. By employing a priority queue
to explore nodes based on cumulative cost, this algorithm aims to provide an efficient method for
pathfinding in scenarios where the cost of traversing edges varies.

7.2. Introduction: Uniform Cost Search (UCS) is a variant of Dijkstra’s algorithm designed to find the
least-cost path in a weighted graph. It explores nodes by prioritizing those with the lowest cumulative cost,
using a priority queue for efficient retrieval. This lab implements UCS on a sample graph represented by an
adjacency list with edge costs, illustrating how the algorithm identifies the optimal path from a start node to
a goal node.

7.3. Code:
import heapq

def uniform_cost_search(graph, start, goal):
    priority_queue = []
    heapq.heappush(priority_queue, (0, start))
    costs = {start: 0}
    parents = {start: None}

    while priority_queue:
        current_cost, current_node = heapq.heappop(priority_queue)
        if current_node == goal:
            path = []
            while current_node is not None:
                path.append(current_node)
                current_node = parents[current_node]
            return path[::-1], current_cost

        for neighbor, cost in graph[current_node].items():
            new_cost = current_cost + cost

            if neighbor not in costs or new_cost < costs[neighbor]:
                costs[neighbor] = new_cost
                parents[neighbor] = current_node
                heapq.heappush(priority_queue, (new_cost, neighbor))

    return None, float('inf')

if __name__ == "__main__":
    graph = {
        'A': {'B': 1, 'C': 4},
        'B': {'A': 1, 'D': 2, 'E': 5},
        'C': {'A': 4, 'F': 3},
        'D': {'B': 2, 'F': 1},
        'E': {'B': 5, 'F': 2},
        'F': {'C': 3, 'D': 1, 'E': 2},
    }

    start_node = 'A'
    goal_node = 'F'

    path, total_cost = uniform_cost_search(graph, start_node, goal_node)

    if path is not None:
        print(f"Path found: {' -> '.join(path)} with total cost: {total_cost}")
    else:
        print("Path not found.")

7.4. Output:
Path found: A -> B -> D -> F with total cost: 4

7.5. Conclusion: In this lab, we successfully implemented the Uniform Cost Search algorithm to identify the
least-cost path in a weighted graph. The results illustrated UCS's capability to efficiently explore the graph
by expanding nodes based on the cumulative cost of reaching them. We found that the path from the start
node 'A' to the goal node 'F' is optimal, with a total cost determined by the weights of the edges traversed.
This algorithm is particularly useful in real-world applications where costs vary, such as in transportation
and network routing.

Lab # 08

Lab Topic: Iterative Deepening

8.1. Objective: The objective of this lab is to understand and implement the Iterative Deepening Search
(IDS) algorithm in Python. IDS is a graph traversal and search technique that combines the benefits of
Depth-First Search (DFS) and Breadth-First Search (BFS) to explore a graph efficiently while avoiding the
memory overhead of BFS. The goal of this lab is to enable students to apply IDS to find a path from a
starting node to a goal node within a depth-limited framework.

8.2. Introduction: Iterative Deepening Search (IDS) is a search algorithm that incrementally increases the
search depth until the goal node is found. It is particularly useful in scenarios where the depth of the goal
node is unknown and there is a need to avoid the high memory usage of BFS. IDS performs a series of
Depth-Limited Searches (DLS) with increasing depth limits, exploring the graph progressively deeper until
it reaches the goal. This approach is memory-efficient like DFS and guarantees finding the shortest path in
terms of depth like BFS. In this lab, we implement IDS by building two functions:
iterative_deepening_search, which manages depth limits, and depth_limited_search, which performs the
depth-limited exploration of nodes.

8.3. Code:

def iterative_deepening_search(graph, start, goal):
    depth = 0
    while True:
        path = depth_limited_search(graph, start, goal, depth)
        if path:
            return path
        depth += 1

def depth_limited_search(graph, start, goal, depth_limit):
    visited = set()
    stack = [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == goal:
            return path
        if depth_limit == 0:
            continue
        visited.add(node)
        if node not in graph:
            continue
        for neighbor in graph[node]:
            if neighbor not in visited:
                if len(path) < depth_limit:
                    stack.append((neighbor, path + [neighbor]))
    return None

graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': ['G'],
    'E': ['G'],
    'F': ['G'],
    'G': []
}
start_node = 'A'
goal_node = 'G'
path = iterative_deepening_search(graph, start_node, goal_node)
if path:
    print(f"Path from {start_node} to {goal_node}: {path}")
else:
    print(f"No path found from {start_node} to {goal_node}")

8.4. Output:
Path from A to G: ['A', 'C', 'F', 'G']

8.5. Conclusion: In this lab, we successfully implemented the Iterative Deepening Search (IDS) algorithm,
demonstrating how to find a path from a start node to a goal node in a graph with minimal memory usage.
IDS combines the advantages of DFS and BFS, exploring each depth level thoroughly before moving deeper,
which ensures finding the shortest path in terms of depth. This approach is ideal for large search spaces with
unknown goal depths. Through this lab, we gained practical insight into recursive depth-limited search and
iterative deepening strategies, both of which are foundational for developing efficient search algorithms in
AI.

Lab # 10

Lab Topic: Best First Search

10.1. Objective: The objective of this lab is to understand and implement the Best First Search (BFS)
algorithm in Python, a heuristic-based graph traversal technique that combines the advantages of
Breadth-First Search and Greedy algorithms.

10.2. Introduction: The Best First Search algorithm is a graph traversal technique that explores nodes
based on a specific heuristic function, such as the shortest distance to the goal node or the lowest cost.

10.3. Code:
from queue import PriorityQueue

v = 14
graph = [[] for i in range(v)]

def best_first_search(actual_Src, target, n):
    visited = [False] * n
    pq = PriorityQueue()
    pq.put((0, actual_Src))
    visited[actual_Src] = True

    while pq.empty() == False:
        u = pq.get()[1]
        print(u, end=" ")

        if u == target:
            break

        for v, c in graph[u]:
            if visited[v] == False:
                visited[v] = True
                pq.put((c, v))
    print()

def addedge(x, y, cost):
    graph[x].append((y, cost))
    graph[y].append((x, cost))

addedge(0, 1, 3)
addedge(0, 2, 6)
addedge(0, 3, 5)
addedge(1, 4, 9)
addedge(1, 5, 8)
addedge(2, 6, 12)
addedge(2, 7, 14)
addedge(3, 8, 7)
addedge(8, 9, 5)
addedge(8, 10, 6)
addedge(9, 11, 1)
addedge(9, 12, 10)
addedge(9, 13, 2)

source = 0
target = 9
best_first_search(source, target, v)

10.4. Output:
0 1 3 2 8 9

10.5. Conclusion: In this lab, students learn the working mechanism of the Best First Search algorithm and
implement it in Python to solve a graph traversal problem. They also come to understand the importance of
heuristics in optimizing graph search algorithms and gain insight into real-world applications of Best First
Search in AI and pathfinding problems.

Lab # 11

Lab Topic: A*

11.1. Objective: The objective of this lab is to understand and implement the A* (A Star) algorithm in
Python, a powerful and widely used pathfinding and graph traversal algorithm.
11.2. Introduction: The A* algorithm is a graph traversal and search algorithm that finds the shortest path
from a start node to a goal node. It uses a cost function to guide its search, defined as:
f(n) = g(n) + h(n)
where g(n) is the cost of the path from the start node to n and h(n) is a heuristic estimate of the cost from n to the goal.
11.3. Code:
import math
import heapq

class Cell:
    def __init__(self):
        self.parent_i = 0
        self.parent_j = 0
        self.f = float('inf')
        self.g = float('inf')
        self.h = 0

ROW = 9
COL = 10

def is_valid(row, col):
    return (row >= 0) and (row < ROW) and (col >= 0) and (col < COL)

def is_unblocked(grid, row, col):
    return grid[row][col] == 1

def is_destination(row, col, dest):
    return row == dest[0] and col == dest[1]

def calculate_h_value(row, col, dest):
    return ((row - dest[0]) ** 2 + (col - dest[1]) ** 2) ** 0.5

def trace_path(cell_details, dest):
    print("The Path is ")
    path = []
    row = dest[0]
    col = dest[1]

    while not (cell_details[row][col].parent_i == row and cell_details[row][col].parent_j == col):
        path.append((row, col))
        temp_row = cell_details[row][col].parent_i
        temp_col = cell_details[row][col].parent_j
        row = temp_row
        col = temp_col

    path.append((row, col))
    path.reverse()

    for i in path:
        print("->", i, end=" ")
    print()

def a_star_search(grid, src, dest):
    if not is_valid(src[0], src[1]) or not is_valid(dest[0], dest[1]):
        print("Source or destination is invalid")
        return

    if not is_unblocked(grid, src[0], src[1]) or not is_unblocked(grid, dest[0], dest[1]):
        print("Source or the destination is blocked")
        return

    if is_destination(src[0], src[1], dest):
        print("We are already at the destination")
        return

    closed_list = [[False for _ in range(COL)] for _ in range(ROW)]
    cell_details = [[Cell() for _ in range(COL)] for _ in range(ROW)]

    i = src[0]
    j = src[1]
    cell_details[i][j].f = 0
    cell_details[i][j].g = 0
    cell_details[i][j].h = 0
    cell_details[i][j].parent_i = i
    cell_details[i][j].parent_j = j
    open_list = []
    heapq.heappush(open_list, (0.0, i, j))
    found_dest = False

    while len(open_list) > 0:
        p = heapq.heappop(open_list)
        i = p[1]
        j = p[2]
        closed_list[i][j] = True

        directions = [(0, 1), (0, -1), (1, 0), (-1, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)]
        for dir in directions:
            new_i = i + dir[0]
            new_j = j + dir[1]

            if is_valid(new_i, new_j) and is_unblocked(grid, new_i, new_j) and not closed_list[new_i][new_j]:
                if is_destination(new_i, new_j, dest):
                    cell_details[new_i][new_j].parent_i = i
                    cell_details[new_i][new_j].parent_j = j
                    print("The destination cell is found")
                    trace_path(cell_details, dest)
                    found_dest = True
                    return
                else:
                    g_new = cell_details[i][j].g + 1.0
                    h_new = calculate_h_value(new_i, new_j, dest)
                    f_new = g_new + h_new

                    if cell_details[new_i][new_j].f == float('inf') or cell_details[new_i][new_j].f > f_new:
                        heapq.heappush(open_list, (f_new, new_i, new_j))
                        cell_details[new_i][new_j].f = f_new
                        cell_details[new_i][new_j].g = g_new
                        cell_details[new_i][new_j].h = h_new
                        cell_details[new_i][new_j].parent_i = i
                        cell_details[new_i][new_j].parent_j = j

    if not found_dest:
        print("Failed to find the destination cell")

def main():
    grid = [
        [1, 0, 1, 1, 1, 1, 0, 1, 1, 1],
        [1, 1, 1, 0, 1, 1, 1, 0, 1, 1],
        [1, 1, 1, 0, 1, 1, 0, 1, 0, 1],
        [0, 0, 1, 0, 1, 0, 0, 0, 0, 1],
        [1, 1, 1, 0, 1, 1, 1, 0, 1, 0],
        [1, 0, 1, 1, 1, 1, 0, 1, 0, 0],
        [1, 0, 0, 0, 0, 1, 0, 0, 0, 1],
        [1, 0, 1, 1, 1, 1, 0, 1, 1, 1],
        [1, 1, 1, 0, 0, 0, 1, 0, 0, 1]
    ]

    src = [8, 0]
    dest = [0, 0]

    a_star_search(grid, src, dest)

if __name__ == "__main__":
    main()

11.4. Output:

11.5. Conclusion: In this lab, students will gain a comprehensive understanding of the theory and working
mechanism of the A* algorithm, a widely used pathfinding and graph traversal technique. They will
implement the algorithm in Python to solve pathfinding problems on graphs or grids, enhancing their
practical programming skills. Additionally, students will learn the importance of designing effective
heuristic functions to optimize the algorithm's performance. Through this lab, they will also recognize the
versatility of A* in addressing real-world optimization challenges across various domains.

Lab # 13

Lab Topic: Naive Bayes

13.1. Objective: The objective of this lab is to understand the Naive Bayes classification algorithm and its
applications in solving problems such as spam detection, sentiment analysis, and other classification tasks.

13.2. Introduction: Naive Bayes is a probabilistic machine learning algorithm based on Bayes' Theorem,
assuming independence among features. Despite its simplicity, it is efficient and effective for text
classification and other domains where independence assumptions are reasonable.
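For reference (standard textbook form, not quoted from the lab text), the classifier assigns the class c that maximizes the posterior probability under the feature-independence assumption:

\hat{c} = \arg\max_{c} \; P(c) \prod_{i=1}^{n} P(x_i \mid c)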

13.3. Code:
# Import necessary libraries
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, classification_report

# Load the dataset (20 newsgroups dataset for text classification)


categories = ['sci.space', 'comp.graphics', 'rec.sport.hockey', 'talk.politics.mideast']
newsgroups = fetch_20newsgroups(subset='all', categories=categories, shuffle=True, random_state=42)

# Splitting data into training and testing sets


X_train, X_test, y_train, y_test = train_test_split(newsgroups.data, newsgroups.target, test_size=0.3,
random_state=42)

# Convert text data into numerical format using TF-IDF


vectorizer = CountVectorizer()
X_train_counts = vectorizer.fit_transform(X_train)
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)

# Train the Naive Bayes classifier


clf = MultinomialNB()
clf.fit(X_train_tfidf, y_train)

# Test the classifier


X_test_counts = vectorizer.transform(X_test)
X_test_tfidf = tfidf_transformer.transform(X_test_counts)
y_pred = clf.predict(X_test_tfidf)

# Evaluate the classifier


print("Accuracy:", accuracy_score(y_test, y_pred))
print("\nClassification Report:\n", classification_report(y_test, y_pred,
target_names=newsgroups.target_names))
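As a short usage sketch (an assumption, meant to be run after the training code above, with a made-up sample sentence):

# Classify a new piece of text with the vectorizer, transformer, and classifier trained above
sample = ["The rocket launch to the space station was delayed by bad weather."]
sample_tfidf = tfidf_transformer.transform(vectorizer.transform(sample))
predicted = clf.predict(sample_tfidf)
print("Predicted category:", newsgroups.target_names[predicted[0]])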

13.4. Output:

13.5. Conclusion: This lab provided practical insights into the working of the Naive Bayes algorithm. It
highlighted its strengths in handling high-dimensional data and its limitations in scenarios with highly
correlated features.

Lab # 14

Lab Topic: Multilayer Perceptron


14.1. Objective: The objective of this lab is to understand the structure and functionality of Multilayer
Perceptrons (MLPs) in neural networks. Students will implement an MLP model, train it on a dataset, and
evaluate its performance to learn about forward propagation, backpropagation, and the role of activation
functions.
14.2. Introduction: A Multilayer Perceptron (MLP) is a class of feedforward artificial neural networks that
consist of an input layer, one or more hidden layers, and an output layer. Each layer is fully connected, with
neurons using activation functions to introduce non-linearity. MLPs are widely used in supervised learning
tasks such as classification and regression, where they learn patterns from labeled data through optimization
techniques like gradient descent.
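Since the listing in 14.3 reuses the text-classification pipeline from Lab 13, a minimal MLP sketch matching the structure described above is given here as an illustration (an assumption, using scikit-learn's MLPClassifier on its built-in digits dataset rather than the lab's dataset):

# Minimal MLP sketch: one hidden layer trained by backpropagation on the digits dataset
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Load a small labeled dataset (8x8 digit images flattened into 64 input features)
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Input layer (64 features) -> one hidden layer of 64 ReLU neurons -> output layer (10 classes)
mlp = MLPClassifier(hidden_layer_sizes=(64,), activation='relu', max_iter=300, random_state=42)
mlp.fit(X_train, y_train)  # weights are adjusted by backpropagation (Adam optimizer by default)

# Forward propagation on unseen data, then evaluation
y_pred = mlp.predict(X_test)
print("MLP accuracy:", accuracy_score(y_test, y_pred))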

14.3. Code:
# Import necessary libraries
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, classification_report

# Load the dataset (20 newsgroups dataset for text classification)


categories = ['sci.space', 'comp.graphics', 'rec.sport.hockey', 'talk.politics.mideast']
newsgroups = fetch_20newsgroups(subset='all', categories=categories, shuffle=True, random_state=42)

# Splitting data into training and testing sets


X_train, X_test, y_train, y_test = train_test_split(newsgroups.data, newsgroups.target, test_size=0.3,
random_state=42)

# Convert text data into numerical format using TF-IDF


vectorizer = CountVectorizer()
X_train_counts = vectorizer.fit_transform(X_train)
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)

# Train the Naive Bayes classifier


clf = MultinomialNB()
clf.fit(X_train_tfidf, y_train)

# Test the classifier


X_test_counts = vectorizer.transform(X_test)
X_test_tfidf = tfidf_transformer.transform(X_test_counts)
y_pred = clf.predict(X_test_tfidf)

# Evaluate the classifier


print("Accuracy:", accuracy_score(y_test, y_pred))
print("\nClassification Report:\n", classification_report(y_test, y_pred,
target_names=newsgroups.target_names))

14.4. Output:

14.5. Conclusion: This lab provided hands-on experience with Multilayer Perceptrons, showcasing their
ability to model complex relationships in data. By implementing and training an MLP, we explored its
structure, activation functions, and backpropagation, reinforcing its importance in solving real-world AI
problems.

Lab # 15

Lab Topic: Adversarial Search


15.1. Objective: The objective of this lab is to understand the concept of adversarial search in AI, focusing
on game-playing algorithms like Minimax and Alpha-Beta Pruning. The goal is to implement these
algorithms and analyze their performance in strategic decision-making scenarios.

15.2. Introduction: Adversarial search is a fundamental concept in Artificial Intelligence used for decision-
making in competitive environments, such as two-player games. It involves strategies to maximize a player's
advantage while minimizing the opponent's chances of success. Minimax and Alpha-Beta Pruning are key
algorithms that explore game trees to determine optimal moves, making them essential for solving problems
in game theory and AI.
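The listing in 15.3 applies plain minimax to a Connect Four style board; as a point of comparison, a minimal alpha-beta pruning sketch on a generic game tree is shown here (the tree and its leaf values are made up for illustration and are not part of the lab code):

# Minimal alpha-beta pruning sketch on a hypothetical game tree
def alphabeta(node, depth, alpha, beta, maximizing):
    # Leaves are plain numbers; internal nodes are lists of child subtrees
    if depth == 0 or not isinstance(node, list):
        return node
    if maximizing:
        value = -float('inf')
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the minimizer will never let play reach this branch
        return value
    else:
        value = float('inf')
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break  # alpha cutoff: the maximizer already has a better option elsewhere
        return value

# Two plies of choices ending in numeric leaf scores; prints 6
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, 2, -float('inf'), float('inf'), True))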

15.3. Code:
def is_moves_left(board):
    for row in board:
        for cell in row:
            if cell == '':
                return True
    return False

def evaluate(b):
    for row in range(6):
        for col in range(4):
            if b[row][col] == b[row][col+1] == b[row][col+2] == b[row][col+3] == 'o':
                return 10
    for col in range(7):
        for row in range(3):
            if b[row][col] == b[row+1][col] == b[row+2][col] == b[row+3][col] == 'o':
                return 10
    for row in range(3):
        for col in range(4):
            if b[row][col] == b[row+1][col+1] == b[row+2][col+2] == b[row+3][col+3] == 'o':
                return 10
    for row in range(3, 6):
        for col in range(3):
            if b[row][col] == b[row-1][col+1] == b[row-2][col+2] == b[row-3][col+3] == 'o':
                return 10

    return 0

def minimax(board, depth, is_max):
    score = evaluate(board)

    if score == 10:
        return score - depth
    if not is_moves_left(board):
        return 0
    if is_max:
        best_val = -float('inf')
        for col in range(7):
            for row in range(5, -1, -1):
                if board[row][col] == '':
                    board[row][col] = 'x'
                    best_val = max(best_val, minimax(board, depth+1, not is_max))
                    board[row][col] = ''
                    break
        return best_val
    else:
        best_val = float('inf')
        for col in range(7):
            for row in range(5, -1, -1):
                if board[row][col] == '':
                    board[row][col] = 'o'
                    best_val = min(best_val, minimax(board, depth+1, not is_max))
                    board[row][col] = ''
                    break
        return best_val

def find_optimal_move(board):
    best_move = None
    best_val = -float('inf')

    for col in range(7):
        for row in range(5, -1, -1):
            if board[row][col] == '':
                board[row][col] = 'o'
                move_val = minimax(board, 0, False)
                board[row][col] = ''

                if move_val > best_val:
                    best_val = move_val
                    best_move = (row, col)
                break
    return best_move

board = [
    ['x', 'x', 'o', '', '', '', 'x'],
    ['o', 'o', 'o', 'x', '', '', 'x'],
    ['x', 'o', '', '', '', '', ''],
    ['x', 'o', 'o', '', '', '', ''],
    ['x', 'x', 'x', 'o', '', '', ''],
    ['o', 'o', 'x', 'o', 'x', '', '']
]
optimal_move = find_optimal_move(board)
if optimal_move:
    print("The optimal move for player O to win is:", optimal_move)
else:
    print("Player O cannot win with the current board configuration")

15.4. Output:

15.5. Conclusion: This lab demonstrated the application of adversarial search in strategic decision-making.
By implementing and analyzing Minimax and Alpha-Beta Pruning, we gained insights into optimizing game
strategies while managing computational resources efficiently.
