Aml Lab

The document outlines a course on Artificial Intelligence and Machine Learning fundamentals, detailing various programming assignments related to search algorithms, supervised learning, and unsupervised learning. It includes implementations of breadth-first search, depth-first search, decision trees, neural networks, and clustering algorithms, along with guidelines for programming languages and data set sources. Additionally, sample code for BFS and DFS algorithms in Python is provided to illustrate the concepts discussed.


OCS351 ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FUNDAMENTALS

Programs for Problem solving with Search


1. Implement breadth first search
2. Implement depth first search
3. Analysis of breadth first and depth first search in terms of time and space
4. Implement and compare Greedy and A* algorithms.
Supervised learning
5. Implement the non-parametric locally weighted regression algorithm in order to fit data points.
Select appropriate data set for your experiment and draw graphs
6. Write a program to demonstrate the working of the decision tree based algorithm.
7. Build an artificial neural network by implementing the back propagation algorithm and test the
same using appropriate data sets.
8. Write a program to implement the naïve Bayesian classifier.
Unsupervised learning
9. Implementing neural network using self-organizing maps
10. Implementing k-Means algorithm to cluster a set of data.
11. Implementing hierarchical clustering algorithm.
Note:
- Installation of gnu-prolog; study of Prolog (gnu-prolog).
- The programs can be implemented in C++/Java/Python, or appropriate tools can be used by designing a good user interface.
- Data sets can be taken from standard repositories (https://archive.ics.uci.edu/ml/datasets.html) or constructed by the students.


# BFS algorithm in Python

import collections

def bfs(graph, root):
    visited, queue = set(), collections.deque([root])
    visited.add(root)
    while queue:
        # Dequeue a vertex from the queue and print it
        vertex = queue.popleft()
        print(str(vertex) + " ", end="")
        # Mark unvisited neighbours as visited and enqueue them
        for neighbour in graph[vertex]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)

if __name__ == '__main__':
    graph = {0: [1, 2], 1: [2], 2: [3], 3: [1, 2]}
    print("Following is Breadth First Traversal: ")
    bfs(graph, 0)
2

# DFS algorithm in Python

def dfs(graph, start, visited=None):
    if visited is None:
        visited = set()
    visited.add(start)
    print(start)
    # Recurse into neighbours that have not been visited yet
    for next_node in graph[start] - visited:
        dfs(graph, next_node, visited)
    return visited

graph = {'0': set(['1', '2']),
         '1': set(['0', '3', '4']),
         '2': set(['0']),
         '3': set(['1']),
         '4': set(['2', '3'])}

dfs(graph, '0')
3

import time
import heapq

class Graph:
    def __init__(self):
        self.edges = {}

    def add_edge(self, from_node, to_node, weight):
        if from_node not in self.edges:
            self.edges[from_node] = []
        self.edges[from_node].append((to_node, weight))

def dfs(graph, start, goal):
    # Iterative depth-first search that records the path taken
    stack = [(start, [start])]
    visited = set()
    while stack:
        (vertex, path) = stack.pop()
        if vertex in visited:
            continue
        visited.add(vertex)
        for next_node, _ in graph.edges.get(vertex, []):
            if next_node == goal:
                return path + [next_node]
            else:
                stack.append((next_node, path + [next_node]))
    return None

def best_first_search(graph, start, goal):
    # Greedy best-first search: always expand the node reached by the
    # cheapest edge (the edge weight serves as the priority)
    pq = [(0, start, [start])]
    visited = set()
    while pq:
        (cost, vertex, path) = heapq.heappop(pq)
        if vertex in visited:
            continue
        visited.add(vertex)
        for next_node, weight in graph.edges.get(vertex, []):
            if next_node == goal:
                return path + [next_node]
            else:
                heapq.heappush(pq, (weight, next_node, path + [next_node]))
    return None

def analyze_search(graph, start, goal):
    start_time = time.time()
    dfs_path = dfs(graph, start, goal)
    dfs_time = time.time() - start_time

    start_time = time.time()
    best_path = best_first_search(graph, start, goal)
    best_time = time.time() - start_time

    print(f"DFS Path: {dfs_path}, Time: {dfs_time:.6f} seconds")
    print(f"Best-First Path: {best_path}, Time: {best_time:.6f} seconds")

# Example usage
graph = Graph()
graph.add_edge('A', 'B', 1)
graph.add_edge('A', 'C', 3)
graph.add_edge('B', 'D', 2)
graph.add_edge('C', 'D', 4)
graph.add_edge('D', 'E', 5)

analyze_search(graph, 'A', 'E')

OUTPUT:
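Program 4 asks to compare Greedy and A* search, while the analysis program above only covers greedy best-first. A minimal A* sketch is given below; it uses a plain adjacency dictionary (the same example edges as above, restated so the sketch is self-contained) and an illustrative, made-up admissible heuristic. The priority is f = g + h, where g is the cost accumulated so far and h is the heuristic estimate to the goal.

```python
import heapq

def a_star(edges, start, goal, heuristic):
    # edges: {vertex: [(neighbour, weight), ...]}
    # Priority queue entries: (f = g + h, g, vertex, path so far)
    pq = [(heuristic.get(start, 0), 0, start, [start])]
    visited = set()
    while pq:
        f, g, vertex, path = heapq.heappop(pq)
        if vertex == goal:
            return path, g
        if vertex in visited:
            continue
        visited.add(vertex)
        for next_node, weight in edges.get(vertex, []):
            if next_node not in visited:
                new_g = g + weight
                heapq.heappush(pq, (new_g + heuristic.get(next_node, 0),
                                    new_g, next_node, path + [next_node]))
    return None, float('inf')

edges = {'A': [('B', 1), ('C', 3)], 'B': [('D', 2)],
         'C': [('D', 4)], 'D': [('E', 5)]}
# Illustrative heuristic values (never overestimating the true cost to E)
h = {'A': 7, 'B': 6, 'C': 5, 'D': 5, 'E': 0}
path, cost = a_star(edges, 'A', 'E', h)
print(path, cost)  # ['A', 'B', 'D', 'E'] 8
```

Unlike the greedy version, A* returns the cheapest path (cost 8 via B rather than 12 via C) because it accounts for the accumulated cost g, not just the next edge weight.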

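For the unsupervised-learning assignments, program 10 (k-Means) can be started from a minimal sketch of Lloyd's algorithm on 2-D points, written in pure Python. The sample points and the choice of k = 2 are made up for illustration; real experiments should use a data set from the repositories noted above.

```python
import random

def k_means(points, k, iterations=100):
    # Initialise centroids by sampling k distinct data points
    centroids = random.sample(points, k)
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            distances = [sum((a - b) ** 2 for a, b in zip(p, c))
                         for c in centroids]
            clusters[distances.index(min(distances))].append(p)
        # Update step: move each centroid to the mean of its cluster
        new_centroids = []
        for cluster, old in zip(clusters, centroids):
            if cluster:
                new_centroids.append(tuple(sum(coord) / len(cluster)
                                           for coord in zip(*cluster)))
            else:
                new_centroids.append(old)  # keep empty clusters in place
        if new_centroids == centroids:
            break  # converged: assignments can no longer change
        centroids = new_centroids
    return centroids, clusters

random.seed(0)
points = [(1, 1), (1.5, 2), (8, 8), (9, 9)]
centroids, clusters = k_means(points, 2)
print(centroids)
```

On these four points the two centroids settle at the means of the two obvious groups, (1.25, 1.5) and (8.5, 8.5), regardless of which points are sampled as the initial centroids.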