AI Lab Programs

1. Write a program to implement DFS.

Depth First Search Description: The Depth First Search (DFS) algorithm traverses a graph in a depthward motion and
uses a stack to remember the next vertex from which to continue the search when a dead end is reached in an iteration.

Algorithm:

The DFS algorithm works as follows:


1. We will start by putting any one of the graph's vertices on top of the stack.
2. After that, take the top item of the stack and add it to the visited list.
3. Next, create a list of that vertex's adjacent nodes. Add the ones which aren't in the visited list to the
top of the stack.
4. Lastly, keep repeating steps 2 and 3 until the stack is empty.
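A minimal iterative sketch of these steps, using an explicit stack (the graph and node names are assumed to match the adjacency-list program given below):

# Iterative DFS following the stack-based steps above
def dfs_iterative(graph, start):
    visited = []                      # the visited list from step 2
    stack = [start]                   # step 1: put the starting vertex on the stack
    while stack:                      # step 4: repeat until the stack is empty
        vertex = stack.pop()          # step 2: take the top item of the stack
        if vertex not in visited:
            visited.append(vertex)    # ... and add it to the visited list
            # step 3: push adjacent nodes that are not yet visited
            for neighbour in reversed(graph[vertex]):
                if neighbour not in visited:
                    stack.append(neighbour)
    return visited

# dfs_iterative(graph, '5') with the graph below returns ['5', '3', '2', '4', '8', '7']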

Pseudo Algorithm:
DFS(G, u)
u.visited = true
for each v ∈ G.Adj[u]
if v.visited == false
DFS(G,v)
init() {
For each u ∈ G
u.visited = false
For each u ∈ G
DFS(G, u)
}

Program:

# Using a Python dictionary to act as an adjacency list


graph = {
'5' : ['3','7'],
'3' : ['2', '4'],
'7' : ['8'],
'2' : [],
'4' : ['8'],
'8' : []
}

visited = set() # Set to keep track of visited nodes of graph.

def dfs(visited, graph, node): #function for dfs


if node not in visited:
print (node)
visited.add(node)
for neighbour in graph[node]:
dfs(visited, graph, neighbour)

# Driver Code
print("Following is the Depth-First Search")
dfs(visited, graph, '5')


Output:
5 3 2 4 8 7


2. Write a program to implement BFS

Breadth First Search Description:

The Breadth First Search (BFS) algorithm traverses a graph in a breadthward motion and uses a queue to remember
the next vertex from which to continue the search when a dead end is reached in an iteration.

Algorithm:
The steps of the algorithm work as follows:
1. Start by putting any one of the graph’s vertices at the back of the queue.
2. Now take the front item of the queue and add it to the visited list.
3. Create a list of that vertex's adjacent nodes. Add those which are not within the visited list to the rear of
the queue.
4. Keep continuing steps two and three till the queue is empty.

Pseudo Algorithm:
create a queue Q
mark v as visited and put v into Q
while Q is non-empty
remove the head u of Q
mark and enqueue all (unvisited) neighbors of u

Program:
graph = {
'5' : ['3','7'],
'3' : ['2', '4'],
'7' : ['8'],
'2' : [],
'4' : ['8'],
'8' : []
}

visited = [] # List for visited nodes.


queue = [] #Initialize a queue

def bfs(visited, graph, node): #function for BFS


visited.append(node)
queue.append(node)

while queue: # Creating loop to visit each node


m = queue.pop(0)
print (m, end = " ")

for neighbour in graph[m]:


if neighbour not in visited:
visited.append(neighbour)
queue.append(neighbour)

# Driver Code
print("Following is the Breadth-First Search")
bfs(visited, graph, '5') # function calling


Output:
5 3 7 2 4 8
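One refinement worth noting: queue.pop(0) on a Python list is O(n) per dequeue, so a common alternative sketch (same graph and same output as above) uses collections.deque for O(1) dequeues:

from collections import deque

def bfs_deque(graph, start):
    visited = [start]
    queue = deque([start])            # deque gives O(1) popleft()
    order = []
    while queue:
        m = queue.popleft()
        order.append(m)
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)
    return order

# bfs_deque(graph, '5') returns ['5', '3', '7', '2', '4', '8']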


3. Write a program to implement A* algorithm

A* algorithm problem description:

This algorithm is an advanced form of the BFS algorithm (Breadth-first search), which searches shorter paths
before longer ones. It is a complete as well as an optimal solution for solving path and grid problems.
In the A* search algorithm, we use a search heuristic as well as the cost to reach the node. Hence we can combine both
costs as f(n) = g(n) + h(n), where g(n) is the cost to reach node n from the start and h(n) is the heuristic estimate
of the cost from n to the goal; this sum f(n) is called the fitness number.

Algorithm:
1: Firstly, place the starting node into OPEN and find its f(n) value.
2: Then remove the node from OPEN having the smallest f(n) value. If it is a goal node, then stop and return
success.
3: Else find all the successors of the removed node.
4: Find the f(n) value of all the successors, place them into OPEN, and place the removed node into CLOSE.
5: Go to Step 2.
6: Exit.

Pseudo code:
let openList equal empty list of nodes
let closedList equal empty list of nodes
put startNode on the openList (leave its f at zero)
while openList is not empty
let currentNode equal the node with the least f value
remove currentNode from the openList
add currentNode to the closedList
if currentNode is the goal
You've found the exit!
let children of the currentNode equal the adjacent nodes
for each child in the children
if child is in the closedList
continue to beginning of for loop
child.g = currentNode.g + distance b/w child and current
child.h = distance from child to end
child.f = child.g + child.h
if child.position is in the openList's nodes positions
if child.g is higher than the openList node's g
continue to beginning of for loop
add the child to the openList


Program:

from collections import deque

class Graph:
def __init__(self, adjac_lis):
self.adjac_lis = adjac_lis

def get_neighbors(self, v):


return self.adjac_lis[v]

# This is a heuristic function which assigns equal values to all nodes
def h(self, n):
H={
'A': 1,
'B': 1,
'C': 1,
'D': 1
}

return H[n]

def a_star_algorithm(self, start, stop):


# open_lst is a list of nodes which have been visited, but whose
# neighbours haven't all been inspected; it starts off with the start
# node.
# closed_lst is a list of nodes which have been visited
# and whose neighbours have all been inspected.
open_lst = set([start])
closed_lst = set([])

# poo stores the current distances from start to all other nodes;
# the default value is +infinity
poo = {}
poo[start] = 0

# par contains the parent mapping of all nodes


par = {}
par[start] = start

while len(open_lst) > 0:


n = None

# it will find a node with the lowest value of f() -


for v in open_lst:
if n == None or poo[v] + self.h(v) < poo[n] + self.h(n):
n = v;

if n == None:
print('Path does not exist!')
return None

# if the current node is the stop node,
# then reconstruct the path from it back to the start


if n == stop:
reconst_path = []

while par[n] != n:
reconst_path.append(n)
n = par[n]

reconst_path.append(start)

reconst_path.reverse()

print('Path found: {}'.format(reconst_path))


return reconst_path

# for all the neighbors of the current node do


for (m, weight) in self.get_neighbors(n):
# if m is not present in both open_lst and closed_lst,
# add it to open_lst and note n as its parent
if m not in open_lst and m not in closed_lst:
open_lst.add(m)
par[m] = n
poo[m] = poo[n] + weight

# otherwise, check if it's quicker to first visit n, then m


# and if it is, update par data and poo data
# and if the node was in the closed_lst, move it to open_lst
else:
if poo[m] > poo[n] + weight:
poo[m] = poo[n] + weight
par[m] = n

if m in closed_lst:
closed_lst.remove(m)
open_lst.add(m)

# remove n from the open_lst, and add it to closed_lst


# because all of its neighbours were inspected
open_lst.remove(n)
closed_lst.add(n)

print('Path does not exist!')


return None

Input:
adjac_lis = {
    'A': [('B', 1), ('C', 3), ('D', 7)],
    'B': [('D', 5)],
    'C': [('D', 12)]
}
graph1 = Graph(adjac_lis)
graph1.a_star_algorithm('A', 'D')

Output:
Path found: ['A', 'B', 'D']
['A', 'B', 'D']
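As a quick check against the input costs (the heuristic is the same constant for every node, so it does not change the ordering): the path A -> B -> D costs 1 + 5 = 6, going directly A -> D costs 7, and A -> C -> D costs 3 + 12 = 15, so ['A', 'B', 'D'] is indeed the cheapest route.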


4. Write A Program To Implement Hill Climbing Problem

Description:
The hill-climbing algorithm is a local search algorithm used in mathematical optimization. An important property
of local search algorithms is that the path to the goal does not matter, only the goal itself matters. Because of this,
we do not need to worry about which path we took in order to reach a certain goal state, all that matters is that
we reached it.
The basic principle behind the algorithm is moving across neighboring states according to elevation or increase in
value. This working principle also causes the algorithm to be susceptible to local maximums.

Algorithm (pseudo code):

HillClimbing(problem) {
currentState = problem.startState
goal = false
while(!goal){
neighbour = highest valued successor of currentState
if neighbour.value <= currentState.value
goal = true
else
currentState = neighbour
}
}

Explanation:
1. We begin with a starting state that we assign to the currentState variable. Following that, we proceed to
perform a loop until we reach our goal state.
2. The current objective function of our algorithm is to find the maximum valued state or, in simpler terms, a
‘peak’.
3. In order to proceed, we first find out the immediate neighbors of our current state. The way this is done is left
up to the reader since the data organization of the problem may vary.
4. After we have found the neighbors, we take the highest valued neighbor and compare it with currentState.
5. If the neighbor’s value is higher than our current state, we move to the neighboring state; else, we end the
loop (since according to the algorithm we have found our peak).

Program:
import numpy as np

def find_neighbours(state, landscape):


neighbours = []
dim = landscape.shape

# left neighbour
if state[0] != 0:
neighbours.append((state[0] - 1, state[1]))

# right neighbour
if state[0] != dim[0] - 1:
neighbours.append((state[0] + 1, state[1]))


# top neighbour
if state[1] != 0:
neighbours.append((state[0], state[1] - 1))

# bottom neighbour
if state[1] != dim[1] - 1:
neighbours.append((state[0], state[1] + 1))

# top left
if state[0] != 0 and state[1] != 0:
neighbours.append((state[0] - 1, state[1] - 1))

# bottom left
if state[0] != 0 and state[1] != dim[1] - 1:
neighbours.append((state[0] - 1, state[1] + 1))

# top right
if state[0] != dim[0] - 1 and state[1] != 0:
neighbours.append((state[0] + 1, state[1] - 1))

# bottom right
if state[0] != dim[0] - 1 and state[1] != dim[1] - 1:
neighbours.append((state[0] + 1, state[1] + 1))

return neighbours

# Current optimization objective: local/global maximum


def hill_climb(curr_state, landscape):
neighbours = find_neighbours(curr_state, landscape)
ascended = False
next_state = curr_state
for neighbour in neighbours: #Find the neighbour with the greatest value
if landscape[neighbour[0]][neighbour[1]] > landscape[next_state[0]][next_state[1]]:
next_state = neighbour
ascended = True

return ascended, next_state

def __main__():
landscape = np.random.randint(1, high=50, size=(10, 10))
print(landscape)
start_state = (3, 6) # matrix index coordinates
current_state = start_state
count = 1
ascending = True
while ascending:
print("\nStep #", count)
print("Current state coordinates: ", current_state)
print("Current state value: ", landscape[current_state[0]][current_state[1]])
count += 1


        ascending, current_state = hill_climb(current_state, landscape)

print("\nStep #", count)


print("Optimization objective reached.")
print("Final state coordinates: ", current_state)
print("Final state value: ", landscape[current_state[0]][current_state[1]])

__main__()

Example: a sample run prints the randomly generated landscape followed by the coordinates and value of each state visited until a local maximum is reached.

5. Write a program to implement Towers of Hanoi problem

Problem description:
The Tower of Hanoi is a mathematical problem which consists of three rods and multiple disks. Initially, all the
disks are placed on one rod, one over the other in ascending order of size, similar to a cone-shaped tower.
The objective of this problem is to move the stack of disks from the initial rod to another rod, following these rules:
• Only one disk can be moved at a time.
• No disk may be placed on top of a smaller disk.
The goal is to move all the disks from the leftmost rod to the rightmost rod. To move N disks from one rod to
another, 2^N - 1 steps are required. So, to move 3 disks from the starting rod to the ending rod, a total of 7 steps
are required.

Example
This example runs for 3 disks and 3 rods as described in the diagram above. It displays all the steps it follows to
take the stack of disks from start to end.

Note: An Aux is the rod helping the movement of the disk. This rod contains the disks which are not to be moved
in the current function call.
Initially, aux rod is set as middle tower.

Program:

#include <iostream>
#include <string>
using namespace std;

void TowerOfHanoi(int n, string from_tower, string to_tower, string aux_tower)


{
if (n == 1)
{


cout << "Move disk 1 from rod " << from_tower << " to rod " << to_tower<<endl;
return;
}
TowerOfHanoi(n - 1, from_tower, aux_tower, to_tower);
cout << "Move disk " << n << " from rod " << from_tower << " to rod " << to_tower << endl;
TowerOfHanoi(n - 1, aux_tower, to_tower, from_tower);
}

int main()
{
int n = 3; // Number of disks
TowerOfHanoi(n, "Start", "End", "Mid"); //names of the towers
return 0;
}

Output:
Move disk 1 from rod Start to rod End

Move disk 2 from rod Start to rod Mid

Move disk 1 from rod End to rod Mid

Move disk 3 from rod Start to rod End

Move disk 1 from rod Mid to rod Start

Move disk 2 from rod Mid to rod End

Move disk 1 from rod Start to rod End
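For consistency with the other (Python) exercises in this manual, an equivalent recursive sketch in Python, using the same rod names as the C++ driver above, could be:

def tower_of_hanoi(n, from_tower, to_tower, aux_tower):
    # Move n disks from from_tower to to_tower, using aux_tower as the spare rod
    if n == 1:
        print("Move disk 1 from rod", from_tower, "to rod", to_tower)
        return
    tower_of_hanoi(n - 1, from_tower, aux_tower, to_tower)   # move n-1 disks out of the way
    print("Move disk", n, "from rod", from_tower, "to rod", to_tower)
    tower_of_hanoi(n - 1, aux_tower, to_tower, from_tower)   # move them back onto the largest disk

tower_of_hanoi(3, "Start", "End", "Mid")   # prints the same 7 moves as the output above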


6. Write a Program to find the solution for travelling salesman Problem

Problem description: Travelling Salesman Problem (TSP) : Given a set of cities and distances between every pair of
cities, the problem is to find the shortest possible route that visits every city exactly once and returns to the
starting point.

Algorithm:

Naive Solution:
1) Consider city 1 as the starting and ending point.
2) Generate all (n-1)! Permutations of cities.
3) Calculate cost of every permutation and keep track of minimum cost permutation.
4) Return the permutation with minimum cost.
Time Complexity: Θ(n!)

Output of Given Graph:


Minimum weight Hamiltonian Cycle: 10 + 25 + 30 + 15 = 80

Program:

# Python3 program to implement traveling salesman


# problem using naive approach.
from sys import maxsize
from itertools import permutations
V=4

# implementation of traveling Salesman Problem


def travellingSalesmanProblem(graph, s):

# store all vertex apart from source vertex


vertex = []
for i in range(V):
if i != s:
vertex.append(i)


# store minimum weight Hamiltonian Cycle


min_path = maxsize
next_permutation=permutations(vertex)
for i in next_permutation:

# store current Path weight(cost)


current_pathweight = 0

# compute current path weight


k=s
for j in i:
current_pathweight += graph[k][j]
k=j
current_pathweight += graph[k][s]

# update minimum
min_path = min(min_path, current_pathweight)

return min_path

# Driver Code
if __name__ == "__main__":

# matrix representation of graph


graph = [[0, 10, 15, 20], [10, 0, 35, 25],
[15, 35, 0, 30], [20, 25, 30, 0]]
s=0
print(travellingSalesmanProblem(graph, s))


7. Write A Program To Implement Simulated Annealing Algorithm

What is Annealing?
In simple terms, ‘Annealing’ is a technique, where a metal is heated to a high temperature and slowly cooled
down to improve its physical properties. When the metal is hot, the molecules randomly re-arrange
themselves at a rapid pace.
As the metal starts to cool down, the re-arranging process occurs at a much slower rate. In the end, the
resultant metal will be a desired workable metal. The factors of time and metal’s energy at a particular time
will supervise the entire process.
In machine learning, Simulated annealing algorithm mimics this process and is used to find optimal (or most
predictive) features in the feature selection process.
Let's now try to draw parallels between annealing in metallurgy and simulated annealing for feature
selection:

In terms of feature selection,


1. Set of features represents the arrangement of molecules in the material(metal).
2. No. of iterations represents time. Therefore, as the no. of iterations increases, the temperature
decreases.
3. Change in Predictive performance between the previous and the current iteration represents the
change in material’s energy.
Simulated Annealing is a stochastic global search optimization algorithm which means it operates well on
non-linear objective functions as well while other local search algorithms won’t operate well on this
condition.
Ok, it sounds somewhat similar to Stochastic hill climbing. What’s the difference?

Stochastic Hill Climbing (Vs) Simulated Annealing


A considerably upgraded version of stochastic hill-climbing is simulated annealing
Consider that you are climbing a hill and trying to find the optimal steps to reach the top. The main difference
between stochastic hill-climbing and simulated annealing is that in stochastic hill-climbing steps are taken at
random and the current point is replaced with a new point provided the new point is an improvement to the
previous point.
Whereas in simulated annealing, the search works the same way but sometimes the worse points are also
accepted to allow the algorithm to learn answers that are eventually better.

Simulated annealing algorithm


Let’s go over the exact Simulated Annealing algorithm, step-by-step.
1. The initial step is to select a subset of features at random.
2. Then choose the no. of iterations. A ML model is then built and the predictive performance
(otherwise called objective function) is calculated.
3. A small percentage of features are randomly included/excluded from the model. This is just to
‘perturb’ the features. Then the predictive performance is calculated once again for this new set of
features.
Two things can happen here:
1. If performance Increases in the new set then the new feature set is Accepted.
2. If the performance of the new feature set has worse performance, then the Acceptance
Probability (otherwise called metropolis acceptance criterion) is calculated. (You will see the
formula and its significance shortly. Stay with the flow for now.)
Once the acceptance probability is calculated, generate a random number between 0 – 1 and :
1. If the Random Number > Acceptance Probability then the new feature set is Rejected and the
previous feature set will be continued to be used.


2. If the Random Number < Acceptance Probability then the new feature set is Accepted.

The impact of randomness by this process helps simulated annealing to not get stuck at local optimums in
search of a global optimum.
Keep doing this for the chosen number of iterations.

Now, how is all of this related to the 'annealing' concept of cooling temperature? you might wonder.
The ‘acceptance probability’ takes care of that. The formula for acceptance probability is designed in such a
way that, as the number of iterations increase, the probability of accepting bad performance comes down. As
a result, fewer changes are accepted.
Let’s look at the formula now.

Formula for acceptance probability( a.k.a Metropolis acceptance criterion)


The formula for acceptance probability is as follows:

Where,
i = No. Of Iterations,
c = controls the amount of perturbation that can happen,
old = Old score,
new = New score.
The acceptance probability can be understood as a function of time and change in performance with a
constant ‘c’, which is used to control the rate of perturbation happening in the features. Usually, ‘c’ is set to
be 1.

Working example of acceptance probability formula:


Consider the problem in hand is to optimize the accuracy of a machine learning model. Assume that the
previous solution is 77% and the current solution is 73% :
When,
no. of iteration i = 1 and c = 1

iteration i = 5 and c = 1

and finally when iteration i = 10 and c = 1

As you can see after 10 iterations the acceptance probability came down to 0.0055453. Thus, as the no. of
iterations increases, the chances of accepting a worse solution decreases.
Now, why does simulated annealing accept worse-performing feature sets?

If the algorithm tends to accept only the best-performing feature sets, the probability of getting stuck in the
local optima gets very high, which is not good. Even if the algorithm is going to continuously face poor-
performing feature sets for a certain number of times, it allows for better chances of finding the global optima
which may exist elsewhere. As the acceptance probability decreases with time (iterations), it tends to go back
to the last known local optimum and starts its search for the global optimum once again. So the chances of
settling on a worse-performing result are diminished.


When the temperature is high the chances of worse-performing features getting accepted is high and as the
no. of iterations goes up, temperature decreases, and that in turn decreases the chances of worse-
performing features getting accepted.
The intent here is that, when the temperature is high, the algorithm moves freely in the search space, and as
temperature decreases the algorithm is forced to converge at global optima.

Implementing Simulated annealing from scratch in python


Consider the problem of hill climbing. Consider a person named ‘Mia’ trying to climb to the top of the hill or
the global optimum. In this search hunt towards global optimum, the required attributes will be:
1. Area of the search space. Let’s say area to be [-6,6]
2. A start point where ‘Mia’ can start her search hunt.
3. Step_size that ‘Mia’ is going to take.
4. Number of attempts ‘Mia’ is going to make. As of algorithm this would be no. of iterations.
5. Steeps and slopes she climbs as she tries to reach the top/global optimum. As of algorithm this
would be temperature.
Another thing to note here is that both the temperature and no. of iterations arguments will be predefined.
Now how would 'Mia' know whether her step is an improvement over the previous step or not?
Her steps are validated by a function called 'objective'. The objective function will be the 'square of the step
taken'. This is because the steps 'Mia' is going to take are going to be totally random between the bounds of
the specified area, which means there is a chance of getting a negative value too; to make it positive, the
objective function is used. If this new step is an improvement, then she will continue on that path.
If her step is not good: The acceptance probability/Metropolis acceptance criterion is calculated. After that, a
random number will be generated using rand().

If random_number > Acceptance probability:


Reject the new step
Else:
Accept the new step

Just an overview,
In this code, the steps taken by ‘Mia’ will be random and not user-fed values. Each time there is an
improvement/betterment in the steps taken towards global optimum, those values alongside the previous
value get saved into a list called outputs.
The initial step is to import necessary libraries.

from numpy import asarray, exp


from numpy.random import randn, rand, seed
from matplotlib import pyplot

Let’s define the objective function to evaluate the steps taken by mia.

# Objective function is the square of steps taken


def objective(step):
return step[0] ** 2

Now that the objective function is defined. ‘Mia’ needs to start the search hunt from some point right ?. Only
if she has a start point she can progress towards the global optimum.
The below code cell gives us a random start point between the range of the area of the search space. Let’s
also see the evaluation of this start_point.

seed(1)
area = asarray([[-6.0, 6.0]])


# area[:,0] = -6.0, area[:,1] = 6.0, rand(len(area)) generates a random number within the length of area.
# length of the interval is 1.

start_point = area[:, 0] + rand( len( area ) ) * ( area[:, 1] - area[:, 0] )


print(start_point)
print('start_point=', start_point)
print('objective function evaluation of start point=',objective(start_point))
[-0.99573594]
start_point= [-0.99573594]
objective function evaluation of start point= 0.9914900693154707

Seems like the new point obtained( objective function evaluated point ) is better than the start_point.
seed(1) is a Pseudorandom_number_generator.

By using seed(1) same random numbers will get generated each time the code cell is run,
Let’s now define the simulated annealing algorithm as a function.
The parameters needed are:
1. Objective function.
2. Area of the search space.
3. No. of iterations.
4. step_size.
5. Temperature.

After defining the function, the start_point is initialized then, this start_point is getting evaluated by
the objective function and that is stored into start_point_eval
def sa(objective, area = asarray([[-6.0, 6.0]]), iterations = 1200, step_size = 0.1, temperature = 12):
# Generating a random start point for the search hunt
start_point = area[:, 0] + rand( len( area ) ) * ( area[:, 1] - area[:, 0] )
# Evaluating the start_point
start_point_eval = objective(start_point)

Now start_point and objective function evaluation of start point(start_point_eval) needs to be stored so that
each time an improvement happens, the progress can be seen.
# Storing the start point and its objective function evaluation into mia_start_point and mia_start_eval.
mia_start_point, mia_start_eval = start_point, start_point_eval

# this empty list will get updated over time once looping starts.
outputs = []

‘Mia’ start point and her start point evaluation are stored into mia_start_point and mia_start_eval. Outputs is
an empty list that will get updated over time once looping starts. As of now, ‘Mia’ started at a point and
evaluated that point. Now she has to take her first step towards her search hunt and to do so, a for loop is
defined ranging from 0 to the iteration number we specify.

# Looping from 0 to the iteration number we specify


for i in range(iterations):
# First step taken by mia
mia_step = mia_start_point + randn( len( area ) ) * step_size

The first step will be in accordance with a Gaussian distribution where the mean is the current point and the
standard deviation is defined by the step_size. In a nutshell, this means the steps taken will almost always lie
within 3 * step_size of the current point.


# Evaluating the first step


mia_step_eval = objective(mia_step)

This new point obtained must be checked whether it is better than the current point, if it is better, then
replace the current point with the new point. Then append those new points into our outputs list. If the new
point is better:

(i) Iteration count


(ii) Previous best
(iii) New best
are printed.

# The new step is checked whether it is better than current step. If better, the current point is replaced with new
point.
if mia_step_eval < start_point_eval:
start_point, start_point_eval = mia_step, mia_step_eval
# Step gets appended into the list
outputs.append(start_point_eval)
# printing out the iteration number, best_so_far and new_best
print('iteration Number = ',i," ", 'best_so_far = ',start_point," " ,'new_best = ',start_point_eval)

If the new point isn’t a promising solution, then the difference between the objective function evaluation of
the current solution(mia_step_eval) and current working solution(mia_start_eval) is calculated. One of the
popular ways of calculating temperature is by using the “Fast Simulated Annealing Method” which is as
follows:

temperature = initial_temperature / (iteration_number + 1)

difference = mia_step_eval - mia_start_eval


# Temperature is calculated
t = temperature / float(i + 1)

difference gives us the difference between the old point and the new point so that the acceptance
probability/metropolis acceptance criterion can be calculated. This helps in calculating the probability of
accepting a point with worse performance than the current point.

# Acceptance probability is calculated


mac = exp(-difference / t)

Then a random number is generated using rand() and if the Random Number > Acceptance Probability then
the new point will be Rejected and if Random Number < Acceptance Probability then the new point will
be Accepted.

if difference < 0 or rand() < mac:


# Storing the values
mia_start_point, mia_start_eval = mia_step, mia_step_eval
return [start_point, start_point_eval, outputs] #indenting is outside because return belongs to 'SA' function

The last step is to pass values to the parameters of the simulated annealing function.

seed(1)
# define the area of the search space
area = asarray([[-6.0, 6.0]])


# initial temperature
temperature = 12
# define the total no. of iterations
iterations = 1200
# define maximum step_size
step_size = 0.1
# perform the simulated annealing search
start_point, output, outputs = sa(objective, area, iterations, step_size, temperature)

Program:
from numpy import asarray, exp
from numpy.random import randn, rand, seed
from matplotlib import pyplot

# Define objective function


def objective(step):
return step[0] ** 2.0

# Define simulated annealing algorithm


def sa(objective, area, iterations, step_size, temperature):
# create initial point
start_point = area[:, 0] + rand( len( area ) ) * ( area[:, 1] - area[:, 0] )
# evaluate initial point
start_point_eval = objective(start_point)
# Assign previous and new solution to previous and new_point_eval variable
mia_start_point, mia_start_eval = start_point, start_point_eval
outputs = []
for i in range(iterations):
# First step by mia
mia_step = mia_start_point + randn( len( area ) ) * step_size
mia_step_eval = objective(mia_step)
        # Compute the temperature and acceptance probability first so they can be printed below
        difference = mia_step_eval - mia_start_eval
        t = temperature / float(i + 1)
        # calculate Metropolis Acceptance Criterion / Acceptance Probability
        mac = exp(-difference / t)
        if mia_step_eval < start_point_eval:
            start_point, start_point_eval = mia_step, mia_step_eval
            # Append the new values into the output list
            outputs.append(start_point_eval)
            print('Acceptance Criteria = %.5f' % mac, " ", 'iteration Number = ', i, " ", 'best_so_far = ', start_point,
                  " ", 'new_best = %.5f' % start_point_eval)
# check whether the new point is acceptable
if difference < 0 or rand() < mac:
mia_start_point, mia_start_eval = mia_step, mia_step_eval
return [start_point, start_point_eval, outputs]

seed(1)
# define the area of the search space
area = asarray([[-6.0, 6.0]])
# initial temperature
temperature = 12
# define the total no. of iterations


iterations = 1200
# define maximum step_size
step_size = 0.1
# perform the simulated annealing search
start_point, output, outputs = sa(objective, area, iterations, step_size, temperature)
#plotting the values
pyplot.plot(outputs, 'ro-')
pyplot.xlabel('Improvement Value')
pyplot.ylabel('Evaluation of Objective Function')
pyplot.show()


8. Write A Program To Implement 8 Puzzle Problem

Problem Description:
Given a 3×3 board with 8 tiles (every tile has one number from 1 to 8) and one empty space. The objective is to
place the numbers on tiles to match the final configuration using the empty space. We can slide four adjacent
(left, right, above, and below) tiles into the empty space.

1. DFS (Brute-Force)
We can perform a depth-first search on state-space (Set of all configurations of a given problem i.e. all states
that can be reached from the initial state) tree.

State Space Tree for 8 Puzzle


In this solution, successive moves can take us away from the goal rather than bringing us closer. The search of
state-space tree follows the leftmost path from the root regardless of the initial state. An answer node may
never be found in this approach.
2. BFS (Brute-Force)
We can perform a Breadth-first search on the state space tree. This always finds a goal state nearest to the root.
But no matter what the initial state is, the algorithm attempts the same sequence of moves like DFS.


Complete Algorithm:
/* Algorithm LCSearch uses c(x) to find an answer node
* LCSearch uses Least() and Add() to maintain the list
of live nodes
* Least() finds a live node with least c(x), deletes
it from the list and returns it
* Add(x) adds x to the list of live nodes
* Implement list of live nodes as a min-heap */

struct list_node
{
list_node *next;

// Helps in tracing path when answer is found


list_node *parent;
float cost;
}

algorithm LCSearch(list_node *t)


{
// Search t for an answer node
// Input: Root node of tree t
// Output: Path from answer node to root
if (*t is an answer node)
{
print(*t);
return;
}

E = t; // E-node

Initialize the list of live nodes to be empty;


while (true)
{
for each child x of E
{
if x is an answer node
{
print the path from x to t;
return;
}
Add (x); // Add x to list of live nodes;
x->parent = E; // Pointer for path to root
}

if there are no more live nodes


{
print ("No answer node");
return;
}

// Find a live node with least estimated cost


E = Least();


// The found node is deleted from the list of


// live nodes
}
}

The below diagram shows the path followed by the above algorithm to reach the final configuration from the
given initial configuration of the 8-Puzzle. Note that only nodes having the least value of cost function are
expanded.

Program:
# Python3 program to print the path from root
# node to destination node for N*N-1 puzzle
# algorithm using Branch and Bound
# The solution assumes that instance of
# puzzle is solvable

# Importing copy for deepcopy function


import copy

# Importing the heap functions from python


# library for Priority Queue
from heapq import heappush, heappop

# This variable can be changed to change


# the program from 8 puzzle(n=3) to 15
# puzzle(n=4) to 24 puzzle(n=5)...
n=3

# bottom, left, top, right


row = [ 1, 0, -1, 0 ]
col = [ 0, -1, 0, 1 ]

# A class for Priority Queue


class priorityQueue:

# Constructor to initialize a
# Priority Queue
def __init__(self):
self.heap = []

# Inserts a new key 'k'


def push(self, k):
heappush(self.heap, k)

# Method to remove minimum element


# from Priority Queue
def pop(self):
return heappop(self.heap)

# Method to know if the Queue is empty


def empty(self):
if not self.heap:
return True
else:
return False

# Node structure
class node:

def __init__(self, parent, mat, empty_tile_pos,


cost, level):

# Stores the parent node of the


# current node helps in tracing
# path when the answer is found
self.parent = parent

# Stores the matrix


self.mat = mat

# Stores the position at which the


# empty space tile exists in the matrix
self.empty_tile_pos = empty_tile_pos

# Stores the number of misplaced tiles


self.cost = cost

# Stores the number of moves so far


self.level = level

# This method is defined so that the


# priority queue is formed based on
# the cost variable of the objects
def __lt__(self, nxt):
return self.cost < nxt.cost


# Function to calculate the number of


# misplaced tiles ie. number of non-blank
# tiles not in their goal position
def calculateCost(mat, final) -> int:

count = 0
for i in range(n):
for j in range(n):
if ((mat[i][j]) and
(mat[i][j] != final[i][j])):
count += 1

return count

def newNode(mat, empty_tile_pos, new_empty_tile_pos,


level, parent, final) -> node:

# Copy data from parent matrix to current matrix


new_mat = copy.deepcopy(mat)

# Move tile by 1 position


x1 = empty_tile_pos[0]
y1 = empty_tile_pos[1]
x2 = new_empty_tile_pos[0]
y2 = new_empty_tile_pos[1]
new_mat[x1][y1], new_mat[x2][y2] = new_mat[x2][y2], new_mat[x1][y1]

# Set number of misplaced tiles


cost = calculateCost(new_mat, final)

new_node = node(parent, new_mat, new_empty_tile_pos,


cost, level)
return new_node

# Function to print the N x N matrix


def printMatrix(mat):

for i in range(n):
for j in range(n):
print("%d " % (mat[i][j]), end = " ")

print()

# Function to check if (x, y) is a valid


# matrix coordinate
def isSafe(x, y):

return x >= 0 and x < n and y >= 0 and y < n

# Print path from root node to destination node


def printPath(root):

if root == None:


return

printPath(root.parent)
printMatrix(root.mat)
print()

# Function to solve N*N - 1 puzzle algorithm


# using Branch and Bound. empty_tile_pos is
# the blank tile position in the initial state.
def solve(initial, empty_tile_pos, final):

# Create a priority queue to store live


# nodes of search tree
pq = priorityQueue()

# Create the root node


cost = calculateCost(initial, final)
root = node(None, initial,
empty_tile_pos, cost, 0)

# Add root to list of live nodes


pq.push(root)

# Finds a live node with least cost,


# add its children to list of live
# nodes and finally deletes it from
# the list.
while not pq.empty():

# Find a live node with least estimated


# cost and delete it from the list of
# live nodes
minimum = pq.pop()

# If minimum is the answer node


if minimum.cost == 0:

# Print the path from root to


# destination;
printPath(minimum)
return

# Generate all possible children


for i in range(n):
new_tile_pos = [
minimum.empty_tile_pos[0] + row[i],
minimum.empty_tile_pos[1] + col[i], ]

if isSafe(new_tile_pos[0], new_tile_pos[1]):

# Create a child node


child = newNode(minimum.mat,
minimum.empty_tile_pos,


new_tile_pos,
minimum.level + 1,
minimum, final,)

# Add child to list of live nodes


pq.push(child)

# Driver Code

# Initial configuration
# Value 0 is used for empty space
initial = [ [ 1, 2, 3 ],
[ 5, 6, 0 ],
[ 7, 8, 4 ] ]

# Solvable Final configuration


# Value 0 is used for empty space
final = [ [ 1, 2, 3 ],
[ 5, 8, 6 ],
[ 0, 7, 4 ] ]

# Blank tile coordinates in


# initial configuration
empty_tile_pos = [ 1, 2 ]

# Function call to solve the puzzle


solve(initial, empty_tile_pos, final)


9. Implement Wumpus World Problem

Wumpus world:
The Wumpus world is a simple world example to illustrate the worth of a knowledge-based agent and to represent
knowledge representation. It was inspired by a video game Hunt the Wumpus by Gregory Yob in 1973.
The Wumpus world is a cave which has 4×4 rooms connected with passageways. So there are a total of 16 rooms which
are connected with each other. We have a knowledge-based agent who will go forward in this world. The cave has
a room with a beast which is called Wumpus, who eats anyone who enters the room. The Wumpus can be shot by
the agent, but the agent has a single arrow. In the Wumpus world, there are some Pits rooms which are
bottomless, and if agent falls in Pits, then he will be stuck there forever. The exciting thing with this cave is that in
one room there is a possibility of finding a heap of gold. So the agent's goal is to find the gold and climb out of the cave
without falling into a pit or being eaten by the Wumpus. The agent will get a reward if he comes out with the gold, and he
will get a penalty if he is eaten by the Wumpus or falls into a pit.

Note: Here Wumpus is static and cannot move.

Following is a sample diagram for representing the Wumpus world. It is showing some rooms with Pits, one room
with Wumpus and one agent at (1, 1) square location of the world.

There are also some components which can help the agent to navigate the cave. These components are given as
follows:
a. The rooms adjacent to the Wumpus room are smelly, so that it would have some stench. The room
adjacent to PITs has a breeze, so if the agent reaches near to PIT, then he will perceive the breeze.
b. There will be glitter in the room if and only if the room has gold.
c. The Wumpus can be killed by the agent if the agent is facing to it, and Wumpus will emit a horrible
scream which can be heard anywhere in the cave.


PEAS description of Wumpus world:

To explain the Wumpus world we have given PEAS description as below:

Performance measure:
o +1000 reward points if the agent comes out of the cave with the gold.
o -1000 points penalty for being eaten by the Wumpus or falling into the pit.
o -1 for each action, and -10 for using an arrow.
o The game ends if either the agent dies or comes out of the cave.
Environment:
o A 4*4 grid of rooms.
o The agent initially in room square [1, 1], facing toward the right.
o Location of Wumpus and gold are chosen randomly except the first square [1,1].
o Each square of the cave can be a pit with probability 0.2 except the first square.
Actuators:
o Left turn,
o Right turn
o Move forward
o Grab
o Release
o Shoot.
Sensors:
o The agent will perceive the stench if he is in the room adjacent to the Wumpus. (Not diagonally).
o The agent will perceive breeze if he is in the room directly adjacent to the Pit.
o The agent will perceive the glitter in the room where the gold is present.
o The agent will perceive the bump if he walks into a wall.
o When the Wumpus is shot, it emits a horrible scream which can be perceived anywhere in the cave.
o These percepts can be represented as five element list, in which we will have different indicators for each
sensor.
o Example if agent perceives stench, breeze, but no glitter, no bump, and no scream then it can be
represented as:
[Stench, Breeze, None, None, None].
The Wumpus world Properties:
o Partially observable: The Wumpus world is partially observable because the agent can only perceive the
close environment such as an adjacent room.
o Deterministic: It is deterministic, as the result and outcome of the world are already known.
o Sequential: The order is important, so it is sequential.
o Static: It is static as Wumpus and Pits are not moving.
o Discrete: The environment is discrete.
o One agent: The environment is a single agent as we have one agent only and Wumpus is not considered
as an agent.
Exploring the Wumpus world:
Now we will explore the Wumpus world and will determine how the agent will find its goal by applying logical
reasoning.


Agent's First step:


Initially, the agent is in the first room or on the square [1,1], and we already know that this room is safe for the
agent, so to represent on the below diagram (a) that room is safe we will add symbol OK. Symbol A is used to
represent agent, symbol B for the breeze, G for Glitter or gold, V for the visited room, P for pits, W for Wumpus.
At Room [1,1] agent does not feel any breeze or any Stench which means the adjacent squares are also OK.

Agent's second Step:


Now agent needs to move forward, so it will either move to [1, 2], or [2,1]. Let's suppose agent moves to the room
[2, 1], at this room agent perceives some breeze which means Pit is around this room. The pit can be in [3, 1], or
[2,2], so we will add symbol P? to say that, is this Pit room?
Now agent will stop and think and will not make any harmful move. The agent will go back to the [1, 1] room. The
room [1,1], and [2,1] are visited by the agent, so we will use symbol V to represent the visited squares.

Agent's third step:


At the third step, now agent will move to the room [1,2] which is OK. In the room [1,2] agent perceives a stench
which means there must be a Wumpus nearby. But Wumpus cannot be in the room [1,1] as by rules of the game,
and also not in [2,2] (Agent had not detected any stench when he was at [2,1]). Therefore agent infers that
Wumpus is in the room [1,3], and in current state, there is no breeze which means in [2,2] there is no Pit and no
Wumpus. So it is safe, and we will mark it OK, and the agent moves further in [2,2].


Agent's fourth step:


At room [2,2], here no stench and no breezes present so let's suppose agent decides to move to [2,3]. At room
[2,3] agent perceives glitter, so it should grab the gold and climb out of the cave.
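No program is given in this manual for the Wumpus world itself; a minimal Python sketch of how the 4×4 cave and its percepts could be represented is shown below. The pit, Wumpus and gold locations are illustrative assumptions chosen to match the exploration example above, and the helper names are my own:

# Minimal sketch of the Wumpus world grid and percept generation
# Squares are (column, row) pairs with [1,1] at the bottom-left, as in the description above
wumpus_world = {
    'wumpus': (1, 3),                   # assumed location, consistent with the stench felt at [1,2]
    'gold':   (2, 3),                   # assumed location, consistent with the glitter seen at [2,3]
    'pits':   [(3, 1), (3, 3), (4, 4)]  # assumed locations, consistent with the breeze felt at [2,1]
}

def adjacent(square):
    # Rooms directly adjacent (not diagonally) and inside the 4x4 cave
    x, y = square
    candidates = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    return [(a, b) for (a, b) in candidates if 1 <= a <= 4 and 1 <= b <= 4]

def percepts(square, world):
    # Build the five-element percept list [Stench, Breeze, Glitter, Bump, Scream]
    stench = 'Stench' if world['wumpus'] in adjacent(square) else None
    breeze = 'Breeze' if any(pit in adjacent(square) for pit in world['pits']) else None
    glitter = 'Glitter' if world['gold'] == square else None
    return [stench, breeze, glitter, None, None]

print(percepts((1, 1), wumpus_world))   # [None, None, None, None, None]: the start square is OK
print(percepts((2, 1), wumpus_world))   # breeze: a pit is adjacent
print(percepts((1, 2), wumpus_world))   # stench: the Wumpus is in an adjacent room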


10. Build a Chatbot using AWS Lex, Pandora bots.


(i) What is a Chatbot?
At the most basic level, a chatbot is a computer program that simulates and processes human conversation
(either written or spoken), allowing humans to interact with digital devices as if they were communicating
with a real person. Chatbots can be as simple as rudimentary programs that answer a simple query with a
single-line response, or as sophisticated as digital assistants that learn and evolve to deliver increasing levels
of personalization as they gather and process information.
(ii) What is AWS Chatbot?
AWS Chatbot is an interactive agent that makes it easy to monitor, operate, and troubleshoot your AWS
workloads in your chat channels. With AWS Chatbot, you can receive alerts, run commands to retrieve
diagnostic information, configure AWS resources, and initiate workflows.
With just a few clicks, you can receive AWS notifications and run AWS Command Line Interface (CLI)
commands from your chat channels in a secure and efficient manner. AWS Chatbot manages the integration
and security permissions between the AWS services and your Slack channels or Amazon Chime chatrooms.
AWS Chatbot makes it easier for your team to stay updated, collaborate, and respond quickly to incidents,
security findings, and other alerts for applications running in your AWS environment. Your team can run
commands to safely configure AWS resources, resolve incidents, and run tasks from Slack channels without
switching context to other AWS Management Tools.
(iii) Building a Chatbot using AWS Lex.
To create an Amazon Lex bot (console)
1. Sign in to the AWS Management Console and open the Amazon Lex console at https://console.aws.amazon.com/lex/.
2. Choose Get Started; otherwise, on the Bots page, choose Create.
3. On the Create your Lex bot page, provide the following information, and then choose Create.
• Choose the OrderFlowers blueprint.
• Leave the default bot name (OrderFlowers).
• For COPPA, choose No.
• For User utterance storage, choose the appropriate response.
4. Choose Create. The console makes the necessary requests to Amazon Lex to save the configuration.
The console then displays the bot editor window.
5. Wait for confirmation that your bot was built.
6. Test the bot.

(iv) Pandora bot:


A chatbot is a computer program or conversational agent that interacts with a real person online to
simulate the experience of talking to another human. It is predominantly used in customer service
platforms to decrease the load of responding to numerous repetitive queries from customers. Chatbots
have automated responses for services that range from fun and entertainment to receiving customer
feedback and answering queries.
Pandorabots is one such platform that is used to build a conversational chatbot with use of AIML
(Artificial Intelligence Markup Language). AIML is an extension of XML.


11. Build a bot which provides all the information related to your college.

Let us have a quick glance at Python’s ChatterBot to create our bot. ChatterBot is a Python library built based
on machine learning with an inbuilt conversational dialog flow and training engine. The bot created using this
library will get trained automatically with the response it gets from the user.

A simple implementation:

Installation

Install chatterbot using Python Package Index(PyPi) with this command


pip install chatterbot
Below is the implementation.

# Import "chatbot" from


# chatterbot package.
from chatterbot import ChatBot

# Inorder to train our bot, we have


# to import a trainer package
# "ChatterBotCorpusTrainer"
from chatterbot.trainers import ChatterBotCorpusTrainer

# Give a name to the chatbot “corona bot”


# and assign a trainer component.

chatbot=ChatBot('corona bot')

# Create a new trainer for the chatbot


trainer = ChatterBotCorpusTrainer(chatbot)

# Now let us train our bot with multiple corpus


trainer.train("chatterbot.corpus.english.greetings",
"chatterbot.corpus.english.conversations" )

response = chatbot.get_response('What is your Number')


print(response)

response = chatbot.get_response('Who are you?')


print(response)
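The program above only trains on ChatterBot's generic English corpora. To make the bot answer questions about a particular college, as the exercise asks, the same bot can additionally be trained on a hand-written list of question/answer pairs using ChatterBot's ListTrainer; the college details below are placeholders to be replaced with real information:

# Train the same bot on college-specific conversations (placeholder data)
from chatterbot.trainers import ListTrainer

college_trainer = ListTrainer(chatbot)
college_trainer.train([
    "Where is the college located?",
    "The college is located on the main campus road.",            # placeholder answer
    "What courses are offered?",
    "The college offers B.Tech programmes in CSE, ECE and ME.",   # placeholder answer
    "When was the college established?",
    "The college was established in 1995.",                        # placeholder answer
])

print(chatbot.get_response('Where is the college located?'))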


12. Build A Virtual Assistant For Wikipedia Using Wolfram Alpha And Python

Problem Description: The Wolfram|Alpha Webservice API provides a web-based API allowing the computational
and presentation capabilities of Wolfram|Alpha to be integrated into web, mobile, desktop, and enterprise
applications. Wolfram Alpha is an API which can compute expert-level answers using Wolfram's algorithms,
knowledgebase and AI technology. It is made possible by the Wolfram Language. This article tells how to create
a simple assistant application in Python which can answer simple questions like the ones listed below.

Input : What is the capital of India?


Output : New Delhi

Input : What is sin(30)?


Output : 0.5

Prerequisite: Basic understanding of python syntax and functions.


Getting API Id
1. Create a account at Wolfram alpha. The account can be created at the official website.
2. After signing up, sign in using your Wolfram ID.

3. Now you will see the homepage of the website. Head to the section in the top right corner where you see
your email. In the drop-down menu, select the My Apps (API) option.

4. Click the Get an AppID button to get the id.

5. In the next dialog box, give the app a suitable name and description.


6. Note down the APPID that appears in the next dialog box. This app id will be specific to the application.

Implementation:
Make sure that wolframalpha python package is installed beforehand. It can be done by running the following
command in the terminal or cmd –
pip install wolframalpha

Below is the implementation

# Python program to
# demonstrate creation of an
# assistant using the Wolfram Alpha API

import wolframalpha

# Taking input from user


question = input('Question: ')

# App id obtained by the above steps


app_id = 'Your app_id'

# Instance of wolf ram alpha


# client class
client = wolframalpha.Client(app_id)

# Stores the response from


# wolf ram alpha


res = client.query(question)

# Includes only text from the response


answer = next(res.results).text

print(answer)

Output:


13. The following is a function that counts the number of times a string occurs in another string:

# Count the number of times string s1 is found in string s2
def countsubstring(s1, s2):
    count = 0
    for i in range(0, len(s2) - len(s1) + 1):
        if s1 == s2[i:i+len(s1)]:
            count += 1
    return count

For instance, countsubstring('ab', 'cabalaba') returns 2.
Write a recursive version of the above function. To get the rest of a string (i.e. everything but the first
character), you can use slicing: s[1:].
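The program below, given as the solution, counts how many times the second string occurs in the first as a possibly discontinuous subsequence. A direct recursive version of countsubstring itself, as asked in the exercise statement, might look like this sketch:

def countsubstring_rec(s1, s2):
    # Base case: s2 is shorter than s1, so no occurrence can start here
    if len(s2) < len(s1):
        return 0
    # Check for a match at the start of s2, then recurse on the rest of s2
    found = 1 if s2[:len(s1)] == s1 else 0
    return found + countsubstring_rec(s1, s2[1:])

# countsubstring_rec('ab', 'cabalaba') returns 2, like the iterative version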

# A Naive recursive Python program


# to find the number of times the
# second string occurs in the first
# string, whether continuous or
# discontinuous

# Recursive function to find the


# number of times the second string
# occurs in the first string,
# whether continuous or discontinuous
def count(a, b, m, n):

# If both first and second string


# is empty, or if second string
# is empty, return 1
if ((m == 0 and n == 0) or n == 0):
return 1

# If only first string is empty


# and second string is not empty,
# return 0
if (m == 0):
return 0

# If last characters are same


# Recur for remaining strings by
# 1. considering last characters
# of both strings
# 2. ignoring last character
# of first string
if (a[m - 1] == b[n - 1]):
return (count(a, b, m - 1, n - 1) +
count(a, b, m - 1, n))
else:

# If last characters are different,


# ignore last char of first string
# and recur for remaining string
return count(a, b, m - 1, n)

# Driver code
a = "GeeksforGeeks"
b = "Gks"

print(count(a, b, len(a),len(b)))


14. Higher order functions. Write a higher-order function count that counts the number of elements in a list that
satisfy a given test. For instance: count(lambda x: x>2, [1,2,3,4,5]) should return 3, as there are three elements in
the list larger than 2. Solve this task without using any existing higher-order function.
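The examples below count elements with map(), sum(), generator expressions and reduce(); a sketch of the plain count function the task asks for, written without any existing higher-order function, could be:

def count(test, elements):
    # Count the elements for which test(element) is true, using only a loop
    matched = 0
    for element in elements:
        if test(element):
            matched += 1
    return matched

# count(lambda x: x > 2, [1, 2, 3, 4, 5]) returns 3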

Higher order functions implementation:

Count odd numbers in the list:

A single-line solution is:
listOfElems = [11, 22, 33, 45, 66, 77, 88, 99, 101]
count = sum(map(lambda x : x%2 == 1, listOfElems))
print('Count of odd numbers in a list : ', count)

Output:
Count of odd numbers in a list : 6

Count even numbers in the list:


listOfElems = [11, 22, 33, 45, 66, 77, 88, 99, 101]
# Count even numbers in the list
count = sum(map(lambda x : x%2 == 0, listOfElems))
print('Count of even numbers in a list : ', count)

Output:
Count of even numbers in a list : 3

Count numbers in a list which are greater than 5:


listOfElems = [11, 22, 33, 45, 66, 77, 88, 99, 101]
# count numbers in the list which are greater than 5
count = sum(map(lambda x : x>5, listOfElems))
print('Count of numbers in a list which are greater than 5: ', count)

Output:
Count of numbers in a list which are greater than 5: 9

Count numbers in a list which are greater than 5 but less than 20:
listOfElems = [11, 22, 33, 45, 66, 77, 88, 99, 101]
# count numbers in the list which are greater than 5 but less than 20
count = getCount(listOfElems, lambda x : x>5 and x < 20)
print('Count of numbers in a list which are greater than 5 but less than 20 : ', count)

Output:
Count of numbers in a list which are greater than 5 but less than 20 : 1

Count total number of elements in the list


listOfElems = [11, 22, 33, 45, 66, 77, 88, 99, 101]
# Get total number of elements in the list
count = getCount(listOfElems)
print('Total Number of elements in List: ', count)

Output


Total Number of elements in List: 9

Complete Implementation Is As Follows:

from functools import reduce


def getCount(listOfElems, cond = None):
'Returns the count of elements in list that satisfies the given condition'
if cond:
count = sum(cond(elem) for elem in listOfElems)
else:
count = len(listOfElems)
return count
def main():
# List of numbers
listOfElems = [11, 22, 33, 45, 66, 77, 88, 99, 101]
print('**** Use map() & sum() to count elements in a list that satisfy certain conditions ****')
print('** Example 1 **')
# Count odd numbers in the list
count = sum(map(lambda x : x%2 == 1, listOfElems))
print('Count of odd numbers in a list : ', count)
print('** Example 1 : Explanation **')
# Get a map object by applying given lambda to each element in list
mapObj = map(lambda x : x%2 == 1, listOfElems)
print('Contents of map object : ', list(mapObj))
print('** Example 2**')
# Count even numbers in the list
count = sum(map(lambda x : x%2 == 0, listOfElems))
print('Count of even numbers in a list : ', count)
print('** Example 3**')
# count numbers in the list which are greater than 5
count = sum(map(lambda x : x>5, listOfElems))
print('Count of numbers in a list which are greater than 5: ', count)
print('**** Using sum() & Generator expression to count elements in list based on conditions ****')
# count numbers in the list which are greater than 5
count = getCount(listOfElems, lambda x : x>5)
print('Count of numbers in a list which are greater than 5: ', count)
# count numbers in the list which are greater than 5 but less than 20
count = getCount(listOfElems, lambda x : x>5 and x < 20)
print('Count of numbers in a list which are greater than 5 but less than 20 : ', count)
# Get total number of elements in the list
count = getCount(listOfElems)
print('Total Number of elements in List: ', count)
print('**** Use List comprehension to count elements in list based on conditions ****')
# count numbers in the list which are greater than 5
count = len([elem for elem in listOfElems if elem > 5])
print('Count of numbers in a list which are greater than 5: ', count)
print('**** Use reduce() function to count elements in list based on conditions ****')
# count numbers in the list which are greater than 5
count = reduce(lambda default, elem: default + (elem > 5), listOfElems, 0)
print('Count of numbers in a list which are greater than 5: ', count)
if __name__ == '__main__':
main()


Output:
**** Use map() & sum() to count elements in a list that satisfy certain conditions ****
** Example 1 **
Count of odd numbers in a list : 6
** Example 1 : Explanation **
Contents of map object : [True, False, True, True, False, True, False, True, True]
** Example 2**
Count of even numbers in a list : 3
** Example 3**
Count of numbers in a list which are greater than 5: 9
**** Using sum() & Generator expression to count elements in list based on conditions ****
Count of numbers in a list which are greater than 5: 9
Count of numbers in a list which are greater than 5 but less than 20 : 1
Total Number of elements in List: 9
**** Use List comprehension to count elements in list based on conditions ****
Count of numbers in a list which are greater than 5: 9
**** Use reduce() function to count elements in list based on conditions ****
Count of numbers in a list which are greater than 5: 9


15. Brute force solution to the Knapsack problem. Write a function that allows you to generate random problem
instances for the knapsack program. This function should generate a list of items containing N items that each
have a unique name, a random size in the range 1 ... 5 and a random value in the range 1 ... 10.
Next, you should perform performance measurements to see how long the given knapsack solver takes to solve
different problem sizes. You should perform at least 10 runs with different randomly generated problem
instances for the problem sizes 10, 12, 14, 16, 18, 20 and 22. Use a backpack size of 2.5 x N for each problem
size N. Please note that the method used to generate random numbers can also affect performance, since
different distributions of values can make the initial conditions of the problem slightly more or less demanding.
How much longer time does it take to run this program when we increase the number of items? Does the
backpack size affect the answer? Try running the above tests again with a backpack size of 1 x N and with 4.0 x
N.

Fractional Knapsack Problem Description:

Given weights and values of n items, we need to put these items in a knapsack of capacity W to get the maximum
total value in the knapsack.

In the 0-1 Knapsack problem, we are not allowed to break items. We either take the whole item or don’t take it.

Input:
Items as (value, weight) pairs
arr[] = {{60, 10}, {100, 20}, {120, 30}}
Knapsack Capacity, W = 50;

Output:
Maximum possible value = 240
by taking items of weight 10 and 20 kg and 2/3 fraction
of 30 kg. Hence total price will be 60+100+(2/3)(120) = 240

In Fractional Knapsack, we can break items for maximizing the total value of knapsack. This problem in which we
can break an item is also called the fractional knapsack problem.

Input :
Same as above

Output :
Maximum possible value = 240
By taking full items of 10 kg, 20 kg and
2/3rd of last item of 30 kg

A brute-force solution would be to try all possible subsets with all different fractions, but that would take far too
much time.

An efficient solution is to use a Greedy approach. The basic idea of the greedy approach is to calculate the ratio
value/weight for each item and sort the items on the basis of this ratio. Then take the items with the highest ratio
first and keep adding them until the next item no longer fits as a whole; at that point add as large a fraction of that
item as still fits. This always gives an optimal solution to the fractional problem.
In the Python code below the comparison is done through the ItemValue class's __lt__ method, which compares
items by their value/weight ratio; calling sort(reverse=True) therefore arranges the items in non-increasing
(highest ratio first) order.


After sorting, we loop over the items and add them to the knapsack according to the criteria described above.

Below is the implementation of the above idea:


# Python3 program to solve the fractional
# Knapsack Problem

class ItemValue:

    """Item Value DataClass"""

    def __init__(self, wt, val, ind):
        self.wt = wt
        self.val = val
        self.ind = ind
        # value-to-weight ratio (true division, so the ordering is exact)
        self.cost = val / wt

    def __lt__(self, other):
        return self.cost < other.cost

# Greedy Approach
class FractionalKnapSack:

    """Time Complexity O(n log n)"""

    @staticmethod
    def getMaxValue(wt, val, capacity):
        """function to get maximum value"""
        iVal = []
        for i in range(len(wt)):
            iVal.append(ItemValue(wt[i], val[i], i))

        # sorting items by value/weight ratio in descending order
        iVal.sort(reverse=True)

        totalValue = 0
        for i in iVal:
            curWt = int(i.wt)
            curVal = int(i.val)
            if capacity - curWt >= 0:
                # the whole item fits: take all of it
                capacity -= curWt
                totalValue += curVal
            else:
                # only a fraction still fits: take it and stop
                fraction = capacity / curWt
                totalValue += curVal * fraction
                capacity = int(capacity - (curWt * fraction))
                break
        return totalValue

# Driver Code
if __name__ == "__main__":
    wt = [10, 40, 20, 30]
    val = [60, 40, 100, 120]
    capacity = 50

    # Function call
    maxValue = FractionalKnapSack.getMaxValue(wt, val, capacity)
    print("Maximum value in Knapsack =", maxValue)


16. Assume that you are organising a party for N people and have been given a list L of people who, for social
reasons, should not sit at the same table. Furthermore, assume that you have C tables (that are infinitely large).
Write a function layout(N, C, L) that can give a table placement (i.e. a number from 0 to C-1) for each guest such
that there will be no social mishaps.
For simplicity we assume that you have a unique number 0 to N-1 for each guest and that the list of restrictions
is of the form [(X, Y), ...], denoting guests X and Y that are not allowed to sit together. Answer with a dictionary
mapping each guest to a table assignment; if there is no possible layout of the guests you should answer False.
(A sketch of such a layout function is given after the coloring implementation below.)

Solution:

The above problem can be mapped to the graph coloring problem: each guest is a vertex, each restriction (X, Y) is
an edge, and the C tables play the role of the colors.

Given an undirected graph and a number m, determine if the graph can be colored with at most m colors such
that no two adjacent vertices of the graph have the same color. Here coloring of a graph means the assignment
of colors to all vertices.
Input-Output format:

Input:
1. A 2D array graph[V][V] where V is the number of vertices in graph and graph[V][V] is an adjacency matrix
representation of the graph. A value graph[i][j] is 1 if there is a direct edge from i to j, otherwise
graph[i][j] is 0.
2. An integer m is the maximum number of colors that can be used.
Output:
An array color[V] that should have numbers from 1 to m. color[i] should represent the color assigned to the
ith vertex. The code should also return false if the graph cannot be colored with m colors.

Example:

Input:
graph = {0, 1, 1, 1},
{1, 0, 1, 0},
{1, 1, 0, 1},
{1, 0, 1, 0}
Output:
Solution Exists:
Following are the assigned colors
1 2 3 2
Explanation: By coloring the vertices
with the following colors, adjacent
vertices do not have the same color

Input:
graph = {1, 1, 1, 1},
{1, 1, 1, 1},
{1, 1, 1, 1},
{1, 1, 1, 1}
Output: Solution does not exist.

Explanation: No solution exists.


Naive Approach: Generate all possible configurations of colors. Since each node can be colored using any of
the m available colors, the total number of possible color configurations is m^V.
After generating a color configuration, check whether any two adjacent vertices have the same color. If the
configuration is valid, print it and stop.

Algorithm:
1. Create a recursive function that takes the current index, the number of vertices and the output color array.
2. If the current index is equal to the number of vertices, check whether the output color configuration is safe,
   i.e. whether no two adjacent vertices share a color. If it is safe, print the configuration and stop.
3. Otherwise, assign a color (1 to m) to the current vertex.
4. For every assigned color, recursively call the function with the next index and the number of vertices.
5. If any recursive call returns true, break the loop and return true.

Below is the implementation of the above idea:

# Number of vertices in the graph
V = 4

# check whether the colored graph is safe,
# i.e. no edge joins two vertices of the same color
def isSafe(graph, color):
    # check every edge
    for i in range(V):
        for j in range(i + 1, V):
            if graph[i][j] and color[j] == color[i]:
                return False
    return True

# This function solves the m-Coloring problem using recursion.
# It returns False if the m colors cannot be assigned;
# otherwise it returns True and prints the assignment of colors
# to all vertices. Note that there may be more than one solution;
# this function prints one of the feasible solutions.
def graphColoring(graph, m, i, color):

    # if the current index reached the end
    if i == V:

        # if the coloring is safe
        if isSafe(graph, color):

            # Print the solution
            printSolution(color)
            return True
        return False

    # Assign each color from 1 to m
    for j in range(1, m + 1):
        color[i] = j

        # Recur for the remaining vertices
        if graphColoring(graph, m, i + 1, color):
            return True
        color[i] = 0
    return False

# A utility function to print the solution
def printSolution(color):
    print("Solution Exists: Following are the assigned colors ")
    for i in range(V):
        print(color[i], end=" ")

# Driver code
if __name__ == '__main__':

    # Create the following graph and
    # test whether it is 3-colorable
    #   (3)---(2)
    #    |   / |
    #    |  /  |
    #    | /   |
    #   (0)---(1)
    graph = [
        [0, 1, 1, 1],
        [1, 0, 1, 0],
        [1, 1, 0, 1],
        [1, 0, 1, 0],
    ]
    m = 3  # Number of colors

    # Initialize all color values as 0.
    # This initialization is needed for the
    # correct functioning of isSafe()
    color = [0 for i in range(V)]

    if not graphColoring(graph, m, 0, color):
        print("Solution does not exist")
