
SIDDARTHA INSTITUTE OF SCIENCE AND TECHNOLOGY: PUTTUR
(AUTONOMOUS)
(Approved by AICTE, New Delhi & Affiliated to JNTUA, Ananthapuramu)
(Accredited by NBA for EEE, Mech., ECE & CSE; Accredited by NAAC with ‘A’ Grade)
Puttur - 517583, Tirupati District, A.P. (India)

Department of Computer Science and Engineering

(23CS0903) ARTIFICIAL INTELLIGENCE & MACHINE LEARNING LAB

II B.Tech - II Semester

Lab Observation Book

Academic Year: _____________________________________________

Name : _____________________________________________

Roll. Number : _____________________________________________

Year & Branch :____________________________________________

Section :____________________________________________

SIDDARTHA INSTITUTE OF SCIENCE AND TECHNOLOGY


VISION
To become an eminent institute for academics and research, producing global leaders in
science and technology to serve the betterment of mankind.
MISSION
M1. To provide broad-based education and contemporary knowledge by adopting modern
teaching-learning methods.
M2. To inculcate a spirit of research and innovation in students through industrial interactions.
M3. To develop individuals’ potential to the fullest extent so that they can emerge as gifted
leaders in their fields.
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
VISION
To become a well-known department of Computer Science and Engineering producing
competent professionals with research and innovation skills, inculcating moral values and
societal concerns.
MISSION OF THE DEPARTMENT
M1. To educate students to become highly qualified computer engineers with full commitment
to professional ethics.
M2. To inculcate a spirit of innovative research in the field of computer science and related
interdisciplinary areas to provide advanced professional service to society.
M3. To prepare students with an industry-ready knowledge base as well as entrepreneurial skills
by introducing duly required industry-oriented educational programs.
PROGRAM EDUCATIONAL OBJECTIVES STATEMENTS
PEO1. Graduates with basic and advanced knowledge in science, mathematics, computer science
and allied engineering, capable of analysis, design and development of solutions for real-life
problems.
PEO2. Graduates who serve the industry, consulting, government organizations, or who pursue
higher education or research.
PEO3. Graduates with qualities of professional leadership, communication skills, teamwork,
ethical values and lifelong learning abilities.


PROGRAMME OUTCOMES (POs):

PO1 Engineering knowledge: Apply the knowledge of mathematics, science, engineering
fundamentals, and an engineering specialization to the solution of complex engineering
problems.
PO2 Problem analysis: Identify, formulate, review research literature, and analyze complex
engineering problems reaching substantiated conclusions using first principles of mathematics,
natural sciences, and engineering sciences.
PO3 Design/development of solutions: Design solutions for complex engineering problems and
design system components or processes that meet the specified needs with appropriate
consideration for the public health and safety, and the cultural, societal, and environmental
considerations.
PO4 Conduct investigations of complex problems: Use research-based knowledge and research
methods including design of experiments, analysis and interpretation of data, and synthesis of
the information to provide valid conclusions.
PO5 Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools including prediction and modeling to complex engineering activities
with an understanding of the limitations.
PO6 The engineer and society: Apply reasoning informed by the contextual knowledge to assess
societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to
the professional engineering practice.
PO7 Environment and sustainability: Understand the impact of the professional engineering
solutions in societal and environmental contexts, and demonstrate the knowledge of, and need
for sustainable development.
PO8 Ethics: Apply ethical principles and commit to professional ethics and responsibilities and
norms of the engineering practice.
PO9 Individual and team work: Function effectively as an individual, and as a member or leader
in diverse teams, and in multidisciplinary settings.
PO10 Communication: Communicate effectively on complex engineering activities with the
engineering community and with society at large, such as, being able to comprehend and write
effective reports and design documentation, make effective presentations, and give and receive
clear instructions.
PO11 Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one’s own work, as a member and
leader in a team, to manage projects and in multidisciplinary environments.
PO12 Life-long learning: Recognize the need for, and have the preparation and ability to engage in
independent and life-long learning in the broadest context of technological change.
PROGRAMME SPECIFIC OUTCOMES (PSOs):
PSO1 Architecture of Computer System: Ability to visualize and articulate computer hardware
and software systems for various complex applications.
PSO2 Design and develop computer programs: Ability to design and develop computer-based
systems in the areas related to algorithms, networking, web design, cloud computing, IoT,
data analytics and mobile applications of varying complexity.
PSO3 Applications of Computing and Research Ability: Ability to use knowledge in various
domains to identify research gaps and hence to provide solution to new ideas and
innovations.


SIDDARTHA INSTITUTE OF SCIENCE AND TECHNOLOGY: PUTTUR
(AUTONOMOUS)
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
Do’s
 Follow the right procedures for starting and shutting down the computer to avoid loss of
data and damage to computer components.
 Use only the specific program/site your lab faculty has assigned to you.
 Report any problems/damages immediately to the lab In-charge.
 Always use a light touch on the keyboard.
 When working on documents, save your work several times while working on it.
 Remember to log out whenever you are done using any lab computer.
 Shut down the computer properly before you leave the lab.
 Know the location of the fire extinguisher and the first aid box and how to use them in
case of an emergency.
 Read and understand how to carry out an experiment thoroughly before coming to the
laboratory.
Don’ts
 Computers and peripherals are not to be moved or reconfigured without approval of the Lab
In-charge.
 Students may not install software on lab computers. If you have a question regarding
specific software that you need to use, contact the Lab In-charge.
 Never bang on the keys.
 Do not open the metallic covers of computers or peripheral devices, particularly when the
power is on.
 Never open attachments from unknown sources.
 Keep all liquids away from computers and equipment; spilled liquid may cause an electrical
shock or prevent the computer from operating properly.
 Do not insert metal objects such as clips, pins and needles into the computer casings.
They may cause fire.
 Do not touch, connect or disconnect any plug or cable without the Lab In-charge’s
permission.


SIDDARTHA INSTITUTE OF SCIENCE AND TECHNOLOGY


(AUTONOMOUS)
Narayanavanam Road, PUTTUR-517 583
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

Name of the Lab: ARTIFICIAL INTELLIGENCE & MACHINE LEARNING LAB (23CS0903)
Year & SEM : II B.TECH – II Sem.

COURSE OBJECTIVES
The objectives of this course are:
1. To make the student study the concepts of Artificial Intelligence.
2. To make the student learn the methods of solving problems using AI.
3. To introduce the concepts of Expert Systems and Machine Learning.
4. To learn about computing central tendency measures and data pre-processing
techniques.
5. To learn about classification and regression algorithms.
6. To apply different clustering algorithms for a problem.

COURSE OUTCOMES (COs)


At the end of the course, the student will be able to:
1. Understand the mathematical and statistical perspective of machine learning
algorithms through Python programming.
2. Appreciate the importance of visualization in the data analytics solution.
3. Derive insights using machine learning algorithms.
4. Evaluate and demonstrate AI and ML algorithms.
5. Evaluate different algorithms.

Software Required for ML: Python/R/Weka

List of Experiments

1. Pandas Library
a. Write a python program to implement Pandas Series with labels.
b. Create a Pandas Series from a dictionary.
c. Creating a Pandas Data Frame.
d. Write a program which makes use of the following Pandas methods
i) describe()  ii) head()  iii) tail()  iv) info()
2. Pandas Library: Visualization
a. Write a program which uses pandas’ inbuilt visualization to plot the following graphs:
i) Bar plots ii) Histograms iii) Line plots iv) Scatter plots
3. Write a Program to Implement Breadth First Search using Python.
4. Write a program to implement Best First Searching Algorithm
5. Write a Program to Implement Depth First Search using Python.
6. Write a program to implement the Heuristic Search


7. Write a python program to implement A* and AO* algorithm. (Ex: find the shortest path)
8. Apply the following Pre-processing techniques for a given dataset.
a. Attribute selection
b. Handling Missing Values
c. Discretization
d. Elimination of Outliers
9. Apply KNN algorithm for classification and regression
10. Demonstrate decision tree algorithm for a classification problem and perform parameter
tuning for better results
11. Apply Random Forest algorithm for classification and regression
12. Demonstrate Naïve Bayes Classification algorithm.
13. Apply Support Vector algorithm for classification
14. Implement the K-means algorithm and apply it to the data you selected. Evaluate
performance by measuring the sum of the Euclidean distance of each example from its
class center. Test the performance of the algorithm as a function of the parameters K.

REFERENCES
1. Stuart J. Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 4th
Edition, Pearson, 2020.
2. Martin C. Brown, Python: The Complete Reference, McGraw Hill Education,
Fourth Edition, 2018.
3. R. Nageswara Rao, Core Python Programming, Dreamtech Press India Pvt Ltd, 2018.
4. Tom M. Mitchell, Machine Learning, McGraw-Hill Publication, 2017.
5. Peter Harrington, Machine Learning in Action, DreamTech.
6. Pang-Ning Tan, Michael Steinbach, Vipin Kumar, Introduction to Data Mining, 7th
Edition, 2019.


INDEX

Sl.No. Date Name of the Experiment Page No. Sign.


Ex.No-1                                                              Date:
1. Pandas Library
   a. Write a python program to implement Pandas Series with labels.
   b. Create a Pandas Series from a dictionary.
   c. Creating a Pandas Data Frame.
   d. Write a program which makes use of the following Pandas methods:
      i) describe()  ii) head()  iii) tail()  iv) info()

Aim:
a. To write a python program to implement Pandas Series with labels.
b. To create a Pandas Series from a dictionary.
c. To create a Pandas Data Frame.
d. To write a program which makes use of the following Pandas methods
i) describe()  ii) head()  iii) tail()  iv) info()

Procedure:

a. To write a python program to implement Pandas Series with labels.

Labels
The labels in the Pandas Series are index numbers by default. Like in dataframe
and array, the index number in series starts from 0. Such labels can be used to access a
specified value.

Source Code:

import pandas as pd
# create a list
a = [1, 3, 5]
# create a series and specify labels
my_series = pd.Series(a, index = ["x", "y", "z"])
print(my_series)

Output:

b. To create a Pandas Series from a dictionary.

Source Code:

import pandas as pd

# create a dictionary
grades = {"Semester1": 3.25, "Semester2": 3.28, "Semester3": 3.75}

# create a series from the dictionary
my_series = pd.Series(grades)

# display the series
print(my_series)

Output:

c. To create a Pandas Data Frame.

DataFrames in Pandas:
Data sets in Pandas are usually multi-dimensional tables, called DataFrames. A Series is
like a column; a DataFrame is the whole table.

Source code:
import pandas as pd
data = {
"calories": [420, 380, 390],
"duration": [50, 40, 45]
}
myvar = pd.DataFrame(data)
print(myvar)

Output:


d. To write a program which makes use of the following Pandas methods:
i) describe()  ii) head()  iii) tail()  iv) info()

Source Code:

i) describe()
import numpy as np
import pandas as pd
s = pd.Series([2, 3, 4])
s.describe()

Output:

 Consider the following code for head, tail and info methods.

import pandas as pd
# Creating a sample DataFrame
data = {'Name': ['Ankit', 'Bhavya', 'Charvi', 'Diya', 'Eesha'],
'Age': [25, 30, 22, 28, 35],
'City': ['New York', 'London', 'Paris', 'Tokyo', 'Sydney']}
df = pd.DataFrame(data)

ii) head()
print(df.head(3))

Output:


iii) tail()
print(df.tail(2))

Output:

iv) info()
df.info()

Output:

Result:


Ex.No-2                                                              Date:
Pandas Library: Visualization
Write a program which uses pandas’ inbuilt visualization to plot the following graphs:
i) Bar plots  ii) Histograms  iii) Line plots  iv) Scatter plots

Aim:
To write a program which uses pandas’ inbuilt visualization to plot the following graphs:
i) Bar plots  ii) Histograms  iii) Line plots  iv) Scatter plots

Source Code:

i) Bar Plots

# importing matplotlib
import matplotlib.pyplot as plt
# importing pandas as pd
import pandas as pd
# importing numpy as np
import numpy as np
# creating a dataframe
df = pd.DataFrame(np.random.rand(10, 3), columns=['a', 'b', 'c'])
df.plot.bar()
plt.show()

Output:


ii) Histograms

import matplotlib.pyplot as plt
import pandas as pd
import numpy as np

plt.close("all")
df4 = pd.DataFrame({
    "a": np.random.randn(1000) + 1,
    "b": np.random.randn(1000),
    "c": np.random.randn(1000) - 1,
}, columns=["a", "b", "c"])
plt.figure()
df4.plot.hist(alpha=0.5)
plt.show()

Output:

iii) Line Plots

import pandas as pd
import matplotlib.pyplot as plt

data = {
    'Year': [2015, 2016, 2017, 2018, 2019, 2020],
    'Sales': [200, 250, 300, 350, 400, 450],
    'Profit': [20, 30, 50, 70, 90, 110]
}

df = pd.DataFrame(data)

# Plotting both Sales and Profit over the years
ax = df.plot(x='Year', y='Sales', kind='line', marker='o',
             title='Sales and Profit Over Years')
df.plot(x='Year', y='Profit', kind='line', marker='s', ax=ax, secondary_y=True)

# Adding labels and grid
ax.set_ylabel('Sales')
ax.right_ax.set_ylabel('Profit')
ax.grid(True)

plt.show()

Output:

iv) Scatter Plots

# Program to draw scatter plot using DataFrame.plot

# Import libraries
import pandas as pd
import matplotlib.pyplot as plt

# Prepare data
data = {'Name': ['Dhanashri', 'Smita', 'Rutuja',
                 'Sunita', 'Poonam', 'Srushti'],
        'Age': [20, 18, 27, 50, 12, 15]}

# Load data into DataFrame
df = pd.DataFrame(data=data)

# Draw a scatter plot
df.plot.scatter(x='Name', y='Age', s=100)
plt.show()

Output:

Result:


Ex.No-3                                                              Date:
Write a Program to Implement Breadth First Search using Python.

Aim:

To write a Program to Implement Breadth First Search using Python.

Description:

1. To begin, place any one of the vertices of the graph at the rear of the queue.
2. Take the front element of the queue and add it to the list of visited nodes.
3. Create a list of all the nodes adjacent to that vertex. Nodes which are not in the visited
list should be added to the rear of the queue.
4. Repeat steps 2 and 3 until the queue is empty.
5. As breadth-first search scans every node of the given graph, a standard BFS algorithm
splits each node or vertex of the tree or graph into two distinct groups:
 Visited
 Not visited
6. The objective of the technique is to visit each vertex while avoiding repeated visits. BFS
starts with a single node, then examines all nodes within one distance, then all nodes
within two distances, and so forth. To retain the nodes that remain to be visited, BFS
requires a queue (in Python, a list can be used).

Source Code:

graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': [],
    '4': ['8'],
    '8': []
}

visited = []
queue = []

def bfs(visited, graph, node):
    visited.append(node)
    queue.append(node)

    while queue:
        m = queue.pop(0)
        print(m, end=" ")

        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

print("Following is the Breadth-First Search")
bfs(visited, graph, '5')

Output:

Result:


Ex.No-4                                                              Date:
Write a program to implement Best First Searching Algorithm

Aim:
To write a program to implement Best First Searching Algorithm

Description:

The algorithm for Best-First Search follows a strategy of exploring the most promising
states first. Here are the key steps of the algorithm:

1. Initialize an empty priority queue and insert the initial state.
2. While the priority queue is not empty, do the following:
   a. Remove the state with the highest priority (based on the evaluation function).
   b. If the removed state is the desired goal state, stop and return the solution.
   c. Generate all possible next states from the current state.
   d. Assign a priority value to each next state using the evaluation function.
   e. Insert the next states into the priority queue based on their priority values.
3. If the priority queue becomes empty and no solution is found, the problem is unsolvable.

Source code:

from queue import PriorityQueue

v = 14
graph = [[] for i in range(v)]

def best_first_search(actual_Src, target, n):
    visited = [False] * n
    pq = PriorityQueue()
    pq.put((0, actual_Src))
    visited[actual_Src] = True

    while pq.empty() == False:
        u = pq.get()[1]
        print(u, end=" ")
        if u == target:
            break

        for v, c in graph[u]:
            if visited[v] == False:
                visited[v] = True
                pq.put((c, v))
    print()

def addedge(x, y, cost):
    graph[x].append((y, cost))
    graph[y].append((x, cost))

addedge(0, 1, 3)
addedge(0, 2, 6)
addedge(0, 3, 5)
addedge(1, 4, 9)
addedge(1, 5, 8)
addedge(2, 6, 12)

source = 0
target = 9
best_first_search(source, target, v)

Output:

Result:


Ex.No-5                                                              Date:
Write a Program to Implement Depth First Search using Python.

Aim:
To write a Program to Implement Depth First Search using Python.

Description:

The DFS algorithm is as follows:

1. Start by putting any one of the graph's vertices on top of the stack.
2. Take the top item of the stack and add it to the visited list.
3. Create a list of the adjacent nodes of that vertex. Add the ones which aren't in the
visited list to the top of the stack.
4. Keep repeating steps 2 and 3 until the stack is empty.

Source code:

graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': [],
    '4': ['8'],
    '8': []
}

visited = set()

def dfs(visited, graph, node):
    if node not in visited:
        print(node, end=" ")
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

print("Following is the Depth-First Search")
dfs(visited, graph, '5')

Output:

Result:


Ex.No-6 Date:
Write a program to implement the Heuristic Search

Aim:

To write a program to implement the Heuristic Search

Description:

Heuristic functions are particularly useful in various problem types, including:

Pathfinding Problems: Pathfinding problems, such as navigating a maze or finding the
shortest route on a map, benefit greatly from heuristic functions that estimate the
distance to the goal.
Constraint Satisfaction Problems: In constraint satisfaction problems, such as
scheduling and puzzle-solving, heuristics help in selecting the most promising variables
and values to explore.
Optimization Problems: Optimization problems, like the traveling salesman problem,
use heuristics to find near-optimal solutions within a reasonable time frame.

In this example, the A* algorithm is used to demonstrate the heuristic approach to searching.

Source code:

import heapq

def heuristic(a, b):
    return abs(a - b)

def a_star(graph, start, goal):
    open_set = [(0, start)]
    came_from = {}
    g_score = {node: float('inf') for node in graph}
    g_score[start] = 0
    f_score = {node: float('inf') for node in graph}
    f_score[start] = heuristic(start, goal)
    while open_set:
        current_f, current = heapq.heappop(open_set)
        if current == goal:
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1], g_score[goal]
        for neighbor, cost in graph[current].items():
            tentative_g_score = g_score[current] + cost
            if tentative_g_score < g_score[neighbor]:
                came_from[neighbor] = current
                g_score[neighbor] = tentative_g_score
                f_score[neighbor] = tentative_g_score + heuristic(neighbor, goal)
                if (f_score[neighbor], neighbor) not in open_set:
                    heapq.heappush(open_set, (f_score[neighbor], neighbor))
    return None, None

graph = {
    0: {1: 1, 2: 3},
    1: {3: 1},
    2: {1: 1, 3: 1},
    3: {4: 3},
    4: {}
}

start_node = 0
goal_node = 4
path, cost = a_star(graph, start_node, goal_node)
if path:
    print("Shortest path:", path)
    print("Total cost:", cost)
else:
    print("No path found.")

Output:

Result:


Ex.No-7                                                              Date:
Write a python program to implement A* and AO* algorithm. (Ex: find the shortest path)

Aim:

To write a python program to implement A* and AO* algorithm for finding the shortest
path.

Description:

A* Algorithm:

A* is a popular pathfinding and graph traversal algorithm, often used in games and artificial
intelligence.
1. Initialize the open list with the start node and the closed list as empty.
2. Repeat the following steps until the open list is empty:
1. Remove the node with the lowest f value (where f = g + h) from the open list and
add it to the closed list.
2. Generate all possible successor nodes of the current node.
3. For each successor:
 If the successor is the goal node, terminate the search and reconstruct the
path.
 If the successor is not in the open list, add it and calculate its g and h
values.
 If the successor is in the open list with a higher g value, update its g value
and parent node.
3. Continue until the goal node is found or the open list is empty.

Source Code:

# A* Search Algorithm
import heapq

def heuristic(a, b):
    return abs(a - b)

def a_star(graph, start, goal):
    open_set = [(0, start)]
    came_from = {}
    g_score = {node: float('inf') for node in graph}
    g_score[start] = 0
    f_score = {node: float('inf') for node in graph}
    f_score[start] = heuristic(start, goal)
    while open_set:
        current_f, current = heapq.heappop(open_set)
        if current == goal:
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1], g_score[goal]
        for neighbor, cost in graph[current].items():
            tentative_g_score = g_score[current] + cost
            if tentative_g_score < g_score[neighbor]:
                came_from[neighbor] = current
                g_score[neighbor] = tentative_g_score
                f_score[neighbor] = tentative_g_score + heuristic(neighbor, goal)
                if (f_score[neighbor], neighbor) not in open_set:
                    heapq.heappush(open_set, (f_score[neighbor], neighbor))
    return None, None

graph = {
    0: {1: 1, 2: 3},
    1: {3: 1},
    2: {1: 1, 3: 1},
    3: {4: 3},
    4: {}
}
start_node = 0
goal_node = 4
path, cost = a_star(graph, start_node, goal_node)
if path:
    print("Shortest path:", path)
    print("Total cost:", cost)
else:
    print("No path found.")

Output:


AO* Algorithm:

AO* is a heuristic search algorithm for AND-OR graphs, used in problem-solving and decision-making.


1. Initialize the graph with the start node and add it to the open list.
2. Repeat the following steps:
i) Select the most promising node from the open list.
ii) Expand the selected node by generating its successors.
iii) If a successor leads to a solution, mark the path and backtrack to update costs and
pointers.
iv) For each successor, calculate the heuristic cost.
v) Update the graph based on the new information, adjusting the paths as necessary.
3. Repeat until the goal is reached or the open list is empty.

Source Code:

import heapq

def ao_star(graph, start, goal):
    open_set = [(0, start)]
    came_from = {}
    g_score = {node: float('inf') for node in graph}
    g_score[start] = 0
    f_score = {node: float('inf') for node in graph}
    f_score[start] = heuristic(start, goal)

    while open_set:
        current_f, current = heapq.heappop(open_set)

        if current == goal:
            path = reconstruct_path(came_from, current)
            return path, g_score[goal]

        for neighbor, cost in graph[current].items():
            tentative_g_score = g_score[current] + cost
            if tentative_g_score < g_score[neighbor]:
                came_from[neighbor] = current
                g_score[neighbor] = tentative_g_score
                f_score[neighbor] = tentative_g_score + heuristic(neighbor, goal)

                if (f_score[neighbor], neighbor) not in open_set:
                    heapq.heappush(open_set, (f_score[neighbor], neighbor))
    return None, None

def heuristic(a, b):
    return abs(a - b)

def reconstruct_path(came_from, current):
    path = [current]
    while current in came_from:
        current = came_from[current]
        path.append(current)
    return path[::-1]

graph = {
    0: {1: 1, 2: 3},
    1: {3: 1},
    2: {1: 1, 3: 1},
    3: {4: 3},
    4: {}
}

start_node = 0
goal_node = 4

path, cost = ao_star(graph, start_node, goal_node)

if path:
    print("Shortest path:", path)
    print("Total cost:", cost)
else:
    print("No path found.")

Output:

Result:

Ex.No-8                                                              Date:
Apply the following Pre-processing techniques for a given dataset.
a. Attribute selection
b. Handling Missing Values
c. Discretization
d. Elimination of Outliers

Aim:

To apply the following Pre-processing techniques for a given dataset.


a. Attribute selection
b. Handling Missing Values
c. Discretization
d. Elimination of Outliers

Description:

To apply the pre-processing techniques, we require a sample dataset. It is created using
pandas. Here is the sample dataset containing three feature variables (A, B, C) and one target
variable (Target).

Dataset Creation using Pandas Dataframe

import pandas as pd
import numpy as np

data = {'A': [1, 2, 3, 4, 5, np.nan, 7, 8, 9, 10],
        'B': [11, 12, 13, 14, 15, 16, 17, 18, 19, 20],
        'C': [21, 22, 23, 24, 25, 26, 27, np.nan, 29, 30],
        'Target': [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]}
df = pd.DataFrame(data)
print(df)

Output:

Source Code:


a. Attribute Selection (Example: Select 'A', 'C', and 'Target')

selected_attributes = ['A', 'C', 'Target']


df_selected = df[selected_attributes]
print("Selected attributes:\n", df_selected)

Output:

b. Handling Missing Values (Example: Fill with mean)


df_filled = df_selected.fillna(df_selected.mean())
print("\nMissing values filled:\n", df_filled)

Output:

c. Discretization (Example: Equal-width binning for column 'A')


num_bins = 3
df_filled['A_discretized'] = pd.cut(df_filled['A'], bins=num_bins, labels=False)
print("\nDiscretized 'A' column:\n", df_filled)

Output:

d. Elimination of Outliers (Example: Remove outliers based on IQR for column 'C')
Q1 = df_filled['C'].quantile(0.25)
Q3 = df_filled['C'].quantile(0.75)
IQR = Q3 - Q1
lower_bound = Q1 - 1.5 * IQR
upper_bound = Q3 + 1.5 * IQR
df_no_outliers = df_filled[~((df_filled['C'] < lower_bound) | (df_filled['C'] > upper_bound))]
print("\nDataframe after removing outliers:\n", df_no_outliers)

Output:

Result:


Ex.No-9 Date:
Apply KNN algorithm for classification and regression

Aim:

To apply KNN algorithm for classification and regression

Description:

k-NN for Classification

Steps:
1. Load the dataset.
2. Preprocess the data (handle missing values, encode categorical variables, etc.).
3. Split the dataset into training and testing sets.
4. Train the k-NN model.
5. Make predictions and evaluate the model.

Source Code:

from sklearn.datasets import load_iris


from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

iris = load_iris()
X, y = iris.data, iris.target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

knn = KNeighborsClassifier(n_neighbors=3)

knn.fit(X_train, y_train)

y_pred = knn.predict(X_test)

accuracy = accuracy_score(y_test, y_pred)


print(f"Accuracy: {accuracy}")

Output:


k-NN for Regression

Steps:
1. Load the dataset.
2. Preprocess the data (handle missing values, encode categorical variables, etc.).
3. Split the dataset into training and testing sets.
4. Train the k-NN model.
5. Make predictions and evaluate the model.

Source Code:

import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error, r2_score

data = {'A': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],


'B': [11, 12, 13, 14, 15, 16, 17, 18, 19, 20],
'C': [21, 22, 23, 24, 25, 26, 27, 28, 29, 30],
'Target': [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]}
df = pd.DataFrame(data)

X = df[['A', 'B', 'C']]


y = df['Target']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

knn_regressor = KNeighborsRegressor(n_neighbors=3)
knn_regressor.fit(X_train, y_train)

y_pred = knn_regressor.predict(X_test)

mse = mean_squared_error(y_test, y_pred)


r2 = r2_score(y_test, y_pred)

print(f"Mean Squared Error: {mse}")


print(f"R-squared: {r2}")

Output:

Result:


Ex.No-10                                                             Date:
Demonstrate decision tree algorithm for a classification problem and perform parameter
tuning for better results

Aim:

To demonstrate decision tree algorithm for a classification problem and perform parameter
tuning for better results.

Description:

Here are the steps to implement the decision tree algorithm:


1. Load the dataset.
2. Preprocess the data (if necessary).
3. Split the dataset into training and testing sets.
4. Train the Decision Tree model.
5. Evaluate the model.
6. Perform parameter tuning using Grid Search.
7. Evaluate the tuned model.

Source Code:

from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
import matplotlib.pyplot as plt

iris = load_iris()
X, y = iris.data, iris.target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

dt_classifier = DecisionTreeClassifier()

param_grid = {
    'criterion': ['gini', 'entropy'],
    'max_depth': [None, 10, 20, 30],
    'min_samples_split': [2, 5, 10],
    'min_samples_leaf': [1, 2, 4]
}

# Perform GridSearchCV for hyperparameter tuning
grid_search = GridSearchCV(estimator=dt_classifier, param_grid=param_grid, cv=5,
                           scoring='accuracy')
grid_search.fit(X_train, y_train)

best_dt_classifier = grid_search.best_estimator_
print("Best parameters:", grid_search.best_params_)

y_pred = best_dt_classifier.predict(X_test)

accuracy = accuracy_score(y_test, y_pred)
conf_matrix = confusion_matrix(y_test, y_pred)
class_report = classification_report(y_test, y_pred)
print("Accuracy:", accuracy)
print("Confusion Matrix:\n", conf_matrix)
print("Classification Report:\n", class_report)

# Plotting the decision tree
plt.figure(figsize=(15, 10))
plot_tree(best_dt_classifier,
          feature_names=iris.feature_names,
          class_names=iris.target_names,
          filled=True, rounded=True)
plt.show()

Output:


Result:


Ex.No-11                                                             Date:
Apply Random Forest algorithm for classification and regression

Aim:

To apply Random Forest algorithm for classification and regression.

Description:

Random Forest for Classification


Steps:
1. Load the dataset.
2. Preprocess the data (if necessary).
3. Split the dataset into training and testing sets.
4. Train the Random Forest model.
5. Make predictions and evaluate the model.

Source Code:

from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor


from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, mean_squared_error

iris = load_iris()
X, y = iris.data, iris.target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

rf_classifier = RandomForestClassifier(n_estimators=100, random_state=42)


rf_classifier.fit(X_train, y_train)
y_pred_class = rf_classifier.predict(X_test)
accuracy = accuracy_score(y_test, y_pred_class)
print(f"Random Forest Classifier Accuracy: {accuracy}")

Output:


Random Forest for Regression


Steps:
1. Load the dataset.
2. Preprocess the data (if necessary).
3. Split the dataset into training and testing sets.
4. Train the Random Forest model.
5. Make predictions and evaluate the model

Source Code:

from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor


from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, mean_squared_error

iris = load_iris()
X, y = iris.data, iris.target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

rf_regressor = RandomForestRegressor(n_estimators=100, random_state=42)


rf_regressor.fit(X_train, y_train)
y_pred_reg = rf_regressor.predict(X_test)
mse = mean_squared_error(y_test, y_pred_reg)
print(f"Random Forest Regressor Mean Squared Error: {mse}")

Output:

Result:


Ex.No-12 Date:
Demonstrate Naïve Bayes Classification algorithm.

Aim:
To demonstrate Naïve Bayes Classification algorithm.

Description:

Naïve Bayes Classifier:

The Naive Bayes Classifier is a machine learning model used for classification tasks. It is
based on Bayes' Theorem, which calculates the probability of an event based on prior knowledge
of conditions related to the event. The classifier assumes that all features (attributes) are
independent of each other, which is why it's called "naive."
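In symbols, for a class y and feature values x1, ..., xn, the classifier picks the class that
maximizes

P(y | x1, ..., xn) ∝ P(y) · P(x1 | y) · P(x2 | y) · ... · P(xn | y)

where the product form is exactly the naive independence assumption.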

Despite this naive assumption, the Naive Bayes Classifier performs well in many
situations, especially in text classification tasks like spam detection and sentiment analysis. It is
known for its simplicity and efficiency.

Steps to Implement Naive Bayes Classifier


1. Load the dataset.
2. Preprocess the data (if necessary).
3. Split the dataset into training and testing sets.
4. Train the Naive Bayes model.
5. Make predictions and evaluate the model.

Source code:

from sklearn.datasets import load_iris


from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

iris = load_iris()
X, y = iris.data, iris.target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

gnb = GaussianNB()
y_pred = gnb.fit(X_train, y_train).predict(X_test)

print("Accuracy:", accuracy_score(y_test, y_pred))


print("\nClassification Report:\n", classification_report(y_test, y_pred))
print("\nConfusion Matrix:\n", confusion_matrix(y_test, y_pred))


Output:

Result:


Ex.No-13 Date:
Apply Support Vector algorithm for classification.

Aim:

To apply Support Vector algorithm for classification.

Description:

Support Vector Machine:

Support Vector Machine (SVM) is a classification algorithm that works by finding the best
hyperplane that separates data points of different classes.
1. Hyperplane: A decision boundary that separates different classes.
2. Support Vectors: Data points closest to the hyperplane, crucial for defining its position.
3. Margin: The gap between the hyperplane and the nearest data points from each class,
which the SVM aims to maximize.

SVMs can handle both linear and non-linear data using something called the kernel trick.
They're especially effective in high-dimensional spaces.
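As an illustrative aside (not part of the prescribed procedure below), switching to a non-linear
kernel in scikit-learn only requires changing the kernel parameter; a minimal sketch, assuming
the same kind of training data used in the source code that follows:

from sklearn.svm import SVC

# RBF (Gaussian) kernel instead of a linear one; gamma='scale' is scikit-learn's default
svm_rbf = SVC(kernel='rbf', C=1.0, gamma='scale')
# svm_rbf.fit(X_train, y_train) would then learn a non-linear decision boundary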

Steps:

1. Import Libraries
2. Load the Dataset
3. Split the Dataset
4. Train the Support Vector Machine Model
5. Make Predictions
6. Evaluate the Model

Source code:

from sklearn.svm import SVC


from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay, accuracy_score
import matplotlib.pyplot as plt

X = [[1, 2], [2, 3], [3, 1], [4, 3], [5, 3], [6, 2]]
y = [0, 0, 0, 1, 1, 1]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

svm_classifier = SVC(kernel='linear')
svm_classifier.fit(X_train, y_train)

y_pred = svm_classifier.predict(X_test)


accuracy = accuracy_score(y_test, y_pred)


print(f"Accuracy: {accuracy}")
cm = confusion_matrix(y_test, y_pred)
disp = ConfusionMatrixDisplay(confusion_matrix=cm)
disp.plot()
plt.show()

Output:

Result:


Ex.No-14                                                             Date:
Implement the K-means algorithm and apply it to the data you selected. Evaluate
performance by measuring the sum of the Euclidean distance of each example from its class
center. Test the performance of the algorithm as a function of the parameter K.

Aim:

To implement the K-means algorithm and apply it to the selected data, to evaluate
performance by measuring the sum of the Euclidean distance of each example from its class
center, and to test the performance of the algorithm as a function of the parameter K.

Description:

K-means Algorithm:

The K-Means Algorithm is a clustering technique that partitions data into K clusters. It
assigns data points to the nearest cluster center (centroid) and iteratively refines the cluster
centers until convergence. The sum of these distances gives you a measure of how well the data
points are clustered.

Source code:
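As a starting point, a minimal sketch is given below. It assumes scikit-learn's KMeans and the
Iris dataset purely for illustration (any selected dataset can be substituted), and the helper
name sum_of_distances is illustrative rather than prescribed by the manual.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X = load_iris().data

def sum_of_distances(X, labels, centers):
    # Sum of (non-squared) Euclidean distances of each example from its cluster center
    return sum(np.linalg.norm(X[labels == k] - centers[k], axis=1).sum()
               for k in range(len(centers)))

# Test the performance of the algorithm as a function of K
for k in range(2, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    total = sum_of_distances(X, km.labels_, km.cluster_centers_)
    print(f"K = {k}: sum of distances = {total:.2f}")

As K increases, the sum of distances decreases; plotting it against K (the elbow method) helps
in choosing a reasonable value of K.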


Output:

Result:
