KCA301 AI Unit 2 Searching Techniques

Artificial Intelligence

Dr Bishwajeet Pandey, SMIEEE


Professor, Department of MCA, GL Bajaj College of
Technology and Management, India
PhD (Gran Sasso Science Institute, L'Aquila, Italy)
M. Tech in CSE (IIIT Gwalior, India)
Visiting Professor at
UCSI UNIVERSITY-Malaysia (QS World Rank 265)
Eurasian National University-Kazakhstan (QS World Rank 321)
ABOUT COURSE TEACHER

• PhD from Gran Sasso Science Institute, Italy
• PhD Supervisor: Prof Paolo Prinetto from Politecnico di Torino
• MTech from Indian Institute of Information Technology, Gwalior
• Visited 49 countries across the globe
• Written 300+ research papers with 218 researchers from 93 universities
• Scopus Profile: https://www.scopus.com/authid/detail.uri?authorId=57203239026
• Google Scholar: https://scholar.google.com/citations?user=UZ_8yAMAAAAJ&hl=hi
• IBM Certified Solution Designer
• EC-Council Certified Ethical Hacker
• AWS Certified Cloud Practitioner
• Qualified GATE 4 times
• Email: [email protected], [email protected]
My CERTIFICATE
PROFESSOR OF THE YEAR AWARD-2023
BY LONDON ORGANIZATION OF SKILLS DEVELOPMENT
Syllabus of AI: Unit 2 Searching
Techniques
• Introduction of Searching Techniques
• Problem Solving by Searching
• Searching for Solutions
• Uninformed Searching Techniques
• Informed Searching Techniques
• Local Search Algorithm
• Adversarial Search Methods
• Search Techniques Used in Games
• Alpha Beta Pruning
Introduction to Searching
Techniques
• UNINFORMED SEARCH
  • Linear Search
  • Binary Search
  • Binary Search with recursion
  • Binary Search with iteration
  • Breadth First Search
  • Depth First Search
  • Depth-limited Search
  • Iterative deepening depth-first search
  • Uniform cost search
  • Bidirectional Search
• INFORMED SEARCH
  • Best First Search
  • A* Search
  • Hill Climbing
  • Simulated Annealing
  • Constraint Satisfaction
Linear Search
Python Code of Linear Search
# Python program for Linear Search
def linear_search(arr, target):
    # Traverse through all elements in the array
    for index in range(len(arr)):
        # If the element is found, return its index
        if arr[index] == target:
            return index
    # If the element is not found, return -1
    return -1

# Example usage:
arr = [10, 23, 45, 70, 11, 15]
target = 17

# Function call
result = linear_search(arr, target)

if result != -1:
    print(f"Element found at index: {result}")
else:
    print("Element not found in the array")
Array and Target Definition
• Example usage:
  arr = [10, 23, 45, 70, 11, 15]
• arr is a list of integers; target is the integer value we want to find in the array.
• With target = 17, the result is: Element not found in the array
• With target = 70, the result is: Element found at index: 3
Function Definition
# Python program for Linear Search
def linear_search(arr, target):
    # Traverse through all elements in the array
    for index in range(len(arr)):
        # If the element is found, return its index
        if arr[index] == target:
            return index
    # If the element is not found, return -1
    return -1

• This function takes two parameters: arr (the array) and target (the element to search for).
• It iterates through each element in the array using a for loop.
• If it finds an element equal to the target, it returns the index of that element.
• If it reaches the end of the array without finding the target, it returns -1 to indicate that the element is not present.
• Time Complexity: O(n)
Binary Search
Binary Search
• In a nutshell, this search algorithm takes advantage of a collection of elements that is
already sorted by ignoring half of the elements after just one comparison.

• Compare x with the middle element.

• If x matches with the middle element, we return the mid index.

• Else if x is greater than the mid element, then x can only lie in the right (greater) half
subarray after the mid element. Then we apply the algorithm again for the right half.

• Else if x is smaller, the target x must lie in the left (lower) half. So we apply the
algorithm for the left half.
Binary Search
def binary_search(arr, x):
    low, high = 0, len(arr) - 1

    while low <= high:

        mid = (high + low) // 2

        # If x is greater, ignore left half
        if arr[mid] < x:
            low = mid + 1

        # If x is smaller, ignore right half
        elif arr[mid] > x:
            high = mid - 1

        # x matches the middle element
        else:
            return mid

    # Element is not present in the array
    return -1
Binary Search
• Time Complexity: O(log n)
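The introductory slide also lists "Binary Search with recursion"; the following is a minimal recursive sketch of the same idea (not from the original slides), where low and high bound the current subarray:

def binary_search_recursive(arr, x, low, high):
    # Base case: empty subarray means x is not present
    if low > high:
        return -1
    mid = (high + low) // 2
    if arr[mid] == x:
        return mid
    # If x is greater, search the right half
    elif arr[mid] < x:
        return binary_search_recursive(arr, x, mid + 1, high)
    # If x is smaller, search the left half
    else:
        return binary_search_recursive(arr, x, low, mid - 1)

# Example usage:
arr = [2, 5, 8, 12, 23, 38, 56]
print(binary_search_recursive(arr, 23, 0, len(arr) - 1))   # prints 4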
Uninformed Search Algorithms
• Uninformed search is a class of general-purpose search algorithms which
operate in a brute-force way. Uninformed search algorithms have no
additional information about the state or search space other than how to
traverse the tree, which is why they are also called blind search.
Following are the various types of uninformed search algorithms:

1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional Search
Breadth First Search

ROOT

LEAVES
Not Only in Computer Science but also In
Bhagwat Geeta Chapter 15 Verse 1
• The Root (God) is at the top and the Leaves (worldly affairs) are below
Breadth First Search: Graph
Representation
graph = {
    'A': ['B', 'C'],
    'B': ['D', 'F'],
    'C': ['F', 'G'],
    'D': [],
    'E': [],
    'F': [],
    'G': []
}
• Node 'A' is connected to 'B' and 'C'.
• Node 'B' is connected to 'D' and 'F'.
• Node 'C' is connected to 'F' and 'G'.
• Nodes 'D', 'E', 'F', and 'G' have no outgoing connections (empty lists).
Breadth First Search Function
def bfs(visited, graph, node):
    visited.append(node)   # Mark the starting node as visited
    queue.append(node)     # Add the starting node to the queue

    while queue:
        s = queue.pop(0)   # Remove the first node from the queue

        if s == 'F':       # Check if the node is 'F'
            print(s)       # Print 'F' and terminate the loop
            break
Breadth First Search Function
        print(s, end=" --> ")              # Print the node and an arrow

        for neighbour in graph[s]:         # Iterate through neighbors of the current node
            if neighbour not in visited:   # If the neighbor hasn't been visited
                visited.append(neighbour)  # Mark it as visited
                queue.append(neighbour)    # Add it to the queue
Driver Code
print("Breadth-First Search (stops when 'F' is found):")
bfs(visited, graph, 'A')

• This line calls the bfs function with the graph and the starting node 'A'.
• It initializes the traversal and outputs the nodes in the order they are visited.
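Putting the slide fragments together, a minimal runnable sketch might look like this. The global visited and queue lists are an assumption, since the slides use them but never show their initialization:

graph = {
    'A': ['B', 'C'],
    'B': ['D', 'F'],
    'C': ['F', 'G'],
    'D': [], 'E': [], 'F': [], 'G': []
}

visited = []   # assumed global list of visited nodes
queue = []     # assumed global FIFO queue

def bfs(visited, graph, node):
    visited.append(node)
    queue.append(node)
    while queue:
        s = queue.pop(0)
        if s == 'F':            # stop when the goal 'F' is found
            print(s)
            break
        print(s, end=" --> ")
        for neighbour in graph[s]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

print("Breadth-First Search (stops when 'F' is found):")
bfs(visited, graph, 'A')        # expected output: A --> B --> C --> D --> F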
Depth First Search
Depth First Search: Graph
Representation
GRAPH = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F', 'G'],
    'D': ['H'],
    'E': [],
    'F': [],
    'G': [],
    'H': []
}
• The graph is represented as a dictionary where each key is a node, and its corresponding value is a list of its neighbors (nodes connected directly to it).
• For example, node 'A' is connected to nodes 'B' and 'C'.
DFS Function
def dfs(graph, node, visited_set=None, path=None):

• graph: The graph defined as a dictionary.
• node: The current node being explored.
• visited_set: A set that keeps track of all visited nodes to avoid revisiting them.
• path: A list that stores the order of nodes visited during the traversal.
Initial Setup
    if visited_set is None:
        visited_set = set()
    if path is None:
        path = []

• If visited_set and path are not provided when the function is called, they are initialized.
• visited_set is a set used to track visited nodes, ensuring each node is visited only once.
• path is a list that accumulates the nodes in the order they are visited.
Visiting The Node
    visited_set.add(node)
    path.append(node)

• The current node (node) is marked as visited by adding it to visited_set.
• It is also added to path to record the order of traversal.
Exploring Neighbors
    if node in graph:
        for neighbor in graph[node]:
            if neighbor not in visited_set:
                dfs(graph, neighbor, visited_set, path)

• The function checks if the node has any neighbors in the graph.
• If it does, it iterates over each neighbor. If a neighbor hasn't been visited yet, the function calls itself (dfs) recursively on that neighbor.
• This recursive call continues until all connected nodes are visited.
Returning the Path
    return path

• Finally, the function returns the path list containing the order of nodes visited during the traversal.
Execution
print(dfs(GRAPH, 'A'))

• The DFS function is called starting from node 'A'.
• It prints the path returned, which shows the order in which nodes are visited: ['A', 'B', 'D', 'H', 'E', 'C', 'F', 'G'].
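Assembling the fragments from the preceding slides, a complete runnable sketch looks roughly like this:

GRAPH = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F', 'G'],
    'D': ['H'],
    'E': [], 'F': [], 'G': [], 'H': []
}

def dfs(graph, node, visited_set=None, path=None):
    # Initialize the visited set and path list on the first call
    if visited_set is None:
        visited_set = set()
    if path is None:
        path = []
    # Visit the current node
    visited_set.add(node)
    path.append(node)
    # Recursively explore unvisited neighbors
    if node in graph:
        for neighbor in graph[node]:
            if neighbor not in visited_set:
                dfs(graph, neighbor, visited_set, path)
    return path

print(dfs(GRAPH, 'A'))   # ['A', 'B', 'D', 'H', 'E', 'C', 'F', 'G']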
Depth-Limited Search Algorithm
A depth-limited search algorithm is similar to depth-first search with a
predetermined depth limit. Depth-limited search removes the drawback of the
infinite path in depth-first search. In this algorithm, a node at the
depth limit is treated as if it has no further successor nodes.
Depth-limited search can terminate with two conditions of failure:
● Standard failure value: it indicates that the problem does not have any
solution.
● Cutoff failure value: it indicates that there is no solution for the problem
within the given depth limit.
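A minimal recursive sketch of depth-limited search on the dictionary-style graphs used above; the names dls and limit and the 'cutoff' return value are illustrative, not from the slides:

def dls(graph, node, goal, limit):
    if node == goal:
        return node                      # goal found
    if limit == 0:
        return 'cutoff'                  # depth limit reached: treat node as having no successors
    cutoff_occurred = False
    for child in graph.get(node, []):
        result = dls(graph, child, goal, limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return result
    return 'cutoff' if cutoff_occurred else None   # cutoff vs. standard failure

# Example usage on the DFS graph from earlier slides:
GRAPH = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'], 'D': ['H'],
         'E': [], 'F': [], 'G': [], 'H': []}
print(dls(GRAPH, 'A', 'H', 2))   # 'cutoff'  (H is at depth 3)
print(dls(GRAPH, 'A', 'H', 3))   # 'H'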
Depth-Limited Search Algorithm
Uniform-cost Search Algorithm
● The primary goal of uniform-cost search is to find a path to the
goal node which has the lowest cumulative cost.
● Uniform-cost search is equivalent to the BFS algorithm if the path cost
of all edges is the same.
● A uniform-cost search algorithm is implemented with a priority
queue.
● It does not care about the number of steps involved in searching
and is only concerned with path cost. Because of this, the algorithm
may get stuck in an infinite loop (for example, along an infinite sequence of zero-cost actions).
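A minimal sketch of uniform-cost search using Python's heapq as the priority queue; the weighted graph below is illustrative, not from the slides:

import heapq

def uniform_cost_search(graph, start, goal):
    # Priority queue of (cumulative cost, node, path); always expand the cheapest node first
    frontier = [(0, start, [start])]
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbour, step_cost in graph.get(node, []):
            if neighbour not in explored:
                heapq.heappush(frontier, (cost + step_cost, neighbour, path + [neighbour]))
    return None

# Example usage (weights chosen arbitrarily):
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 6)], 'B': [('G', 1)], 'G': []}
print(uniform_cost_search(graph, 'S', 'G'))   # (4, ['S', 'A', 'B', 'G'])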
Uniform-cost Search Algorithm
Iterative deepening depth-first Search
• The iterative deepening algorithm is a combination of the DFS and BFS
algorithms.

• This search algorithm finds the best depth limit by
gradually increasing the limit until a goal is found.

• It combines the benefits of breadth-first search's fast search
and depth-first search's memory efficiency.

• Iterative deepening is a useful uninformed search when the
search space is large and the depth of the goal node is unknown.
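A minimal sketch that combines the depth-limited search sketched earlier with a gradually increasing limit (the function name is illustrative):

def iterative_deepening_dfs(graph, start, goal, max_depth=20):
    # Repeatedly run depth-limited search with a growing limit
    for limit in range(max_depth + 1):
        result = dls(graph, start, goal, limit)   # dls() as sketched earlier
        if result != 'cutoff':
            return result        # either the goal or None (no solution at any depth)
    return 'cutoff'

print(iterative_deepening_dfs(GRAPH, 'A', 'H'))   # 'H'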
Iterative deepening depth-first Search
Bidirectional Search Algorithm
• The bidirectional search algorithm runs two simultaneous
searches, one from the initial state, called the forward search,
and the other from the goal node, called the backward search,
to find the goal node.
• Bidirectional search replaces one single search graph with
two small subgraphs: one starts the search from the
initial vertex and the other starts from the goal vertex.
• The search stops when these two graphs intersect each
other.
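A minimal sketch of bidirectional breadth-first search on an assumed undirected dictionary graph (names are illustrative); it alternates expanding the two frontiers and stops as soon as they intersect:

from collections import deque

def bidirectional_search(graph, start, goal):
    if start == goal:
        return start
    # One frontier and one visited set per search direction
    frontiers = {'fwd': deque([start]), 'bwd': deque([goal])}
    visited = {'fwd': {start}, 'bwd': {goal}}
    while frontiers['fwd'] and frontiers['bwd']:
        for side, other in (('fwd', 'bwd'), ('bwd', 'fwd')):
            node = frontiers[side].popleft()
            for neighbour in graph.get(node, []):
                if neighbour in visited[other]:
                    return neighbour          # the two searches meet here
                if neighbour not in visited[side]:
                    visited[side].add(neighbour)
                    frontiers[side].append(neighbour)
    return None

# Example usage on a small undirected graph:
g = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C', 'E'], 'E': ['D']}
print(bidirectional_search(g, 'A', 'E'))   # a meeting node such as 'C'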
Bidirectional Search Algorithm
Heuristic search techniques in AI (Artificial
Intelligence)
Best First Search
• In BFS and DFS, when we are at a node, we can consider any adjacent node as the next node.
So both BFS and DFS blindly explore paths without considering any cost function.

• The idea of Best First Search is to use an evaluation function to decide which adjacent node is
most promising, and then explore that node.

• Best First Search falls under the category of Heuristic Search or Informed Search.

• The worst-case time complexity of Best First Search is O(n * log n), where n is the
number of nodes: in the worst case, we may have to visit all nodes before we reach the goal.
Note that the priority queue is implemented using a Min (or Max) Heap, so insert and remove
operations take O(log n) time.
Best First Search
Best First Search
• The BFS begins at node 'A' and tries to find the target node 'H'.
• The traversal order and queue updates work as follows:
• Starts at 'A':
• Prints 'A'.
• Adds neighbors 'B' (weight 12) and 'C' (weight 4) to the queue.
• The queue is sorted: [('C', 4), ('B', 12)].
• Processes 'C' next (smallest weight):
• Prints 'C'.
• Adds neighbors 'F' (weight 8) and 'G' (weight 2) to the queue.
• The queue is sorted: [('G', 2), ('F', 8), ('B', 12)].
• Processes 'G':
• Prints 'G'.
• Adds neighbor 'H' (weight 0) to the queue.
• The queue is sorted: [('H', 0), ('F', 8), ('B', 12)].
• The first element in the queue is now 'H', the target node.
• It prints 'H' and terminates the search.
Best First Search in Python
graph = {
    'A': [('B', 12), ('C', 4)],
    'B': [('D', 7), ('E', 3)],
    'C': [('F', 8), ('G', 2)],
    'D': [],
    'E': [('H', 0)],
    'F': [('H', 0)],
    'G': [('H', 0)]
}
• Each key is a node, and each value is a list of tuples representing connected nodes and the
weights of the edges to those nodes.
• For example, node 'A' is connected to 'B' with a weight of 12 and to 'C' with a weight of 4.
Best First Search in Python
def bfs(start, target, graph, queue=None, visited=None):
    # Initialize queue and visited if not provided
    if queue is None:
        queue = []
    if visited is None:
        visited = []

• The function takes the start node, target node, the graph, and optional parameters: queue (to keep track of nodes to visit) and visited (to track already visited nodes).
• If queue or visited is not provided, they are initialized as empty lists.
Best First Search in Python
    # If the start node is not visited, visit it
    if start not in visited:
        print(start)
        visited.append(start)

        # Add the neighbors of the start node to the queue if not visited
        queue += [x for x in graph[start] if x[0] not in visited]

        # Sort the queue based on the second element (the weight)
        queue.sort(key=lambda x: x[1])

• If the start node has not been visited, it gets printed and added to the visited list.
• All unvisited neighbors of the start node are added to the queue.
• The queue is then sorted by the weights of the edges (the second element of each tuple) in ascending order. This ensures that the BFS explores the node with the smallest weight next.
Best First Search in Python
        # If the target is the first element in the queue, we found it
        if queue and queue[0][0] == target:
            print(queue[0][0])

• If the first element in the sorted queue is the target node, it is printed, indicating that the target has been found.
Best First Search in Python
        else:
            # Otherwise, keep processing the next node in the queue
            if queue:
                processing = queue.pop(0)
                bfs(processing[0], target, graph, queue, visited)

• If the target is not found, the function processes the next node in the queue (i.e., the node with the smallest weight), removing it from the queue (queue.pop(0)).
• The function then calls itself recursively with this node as the new start node.
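Assembled from the fragments above into a single runnable sketch (indentation inferred from the slides); running it reproduces the trace A, C, G, H described earlier:

graph = {
    'A': [('B', 12), ('C', 4)],
    'B': [('D', 7), ('E', 3)],
    'C': [('F', 8), ('G', 2)],
    'D': [],
    'E': [('H', 0)],
    'F': [('H', 0)],
    'G': [('H', 0)]
}

def bfs(start, target, graph, queue=None, visited=None):
    if queue is None:
        queue = []
    if visited is None:
        visited = []
    if start not in visited:
        print(start)
        visited.append(start)
        # Add unvisited neighbors and keep the queue sorted by edge weight
        queue += [x for x in graph[start] if x[0] not in visited]
        queue.sort(key=lambda x: x[1])
        if queue and queue[0][0] == target:
            print(queue[0][0])
        else:
            if queue:
                processing = queue.pop(0)
                bfs(processing[0], target, graph, queue, visited)

bfs('A', 'H', graph)   # prints: A C G H (each on its own line)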
A* Search Algorithm
• The A* Search Algorithm is a simple and efficient search algorithm that can be
used to find the optimal path between two nodes in a graph. It is typically used for
shortest-path finding.

• It is an extension of Dijkstra's shortest path algorithm (Dijkstra's Algorithm).
In a typical implementation, the frontier is stored in a priority queue backed by a
heap (a binary tree), so the cheapest node can be retrieved efficiently.

• The A* Search Algorithm also uses a heuristic function that provides an estimate
of how far away from the goal node we are. This heuristic is combined with the
path cost in the priority queue in order to make searching more efficient.
A* Search Algorithm
❑ A* search is the most commonly known form of best-first search. It uses a heuristic
function h(n) and the cost to reach node n from the start state, g(n). It combines the
features of UCS and greedy best-first search, which lets it solve problems efficiently.
The A* search algorithm finds the shortest path through the search space using the
heuristic function. This search algorithm expands fewer nodes of the search tree and
provides optimal results faster.
❑ The A* algorithm is similar to UCS except that it uses g(n)+h(n) instead of g(n).
❑ In the A* search algorithm, we use the search heuristic as well as the cost to reach the node.
A* Algorithm
Step 1: Place the starting node in the OPEN list.
Step 2: Check if the OPEN list is empty or not; if the list is empty then
return failure and stop.
Step 3: Select the node from the OPEN list which has the smallest value
of the evaluation function (g+h). If node n is the goal node then return
success and stop; otherwise go to Step 4.
Step 4: Expand node n, generate all of its successors, and put n into
the CLOSED list. For each successor n', check whether n' is already in
the OPEN or CLOSED list; if not, compute the evaluation function
for n' and place it into the OPEN list.
Step 5: Else, if node n' is already in OPEN or CLOSED, attach it to the
back pointer which reflects the lowest g(n') value.
Step 6: Return to Step 2.
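A minimal sketch of the OPEN/CLOSED loop above using heapq. The graph and heuristic values below are reconstructed from the worked example on the following slides (the original figure may contain additional edges), and the helper name a_star is illustrative:

import heapq

def a_star(graph, h, start, goal):
    # OPEN list as a priority queue ordered by f(n) = g(n) + h(n)
    open_list = [(h[start], 0, start, [start])]
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        if node in closed:
            continue
        closed.add(node)
        for neighbour, cost in graph.get(node, []):
            if neighbour not in closed:
                g2 = g + cost
                heapq.heappush(open_list, (g2 + h[neighbour], g2, neighbour, path + [neighbour]))
    return None   # OPEN list exhausted: failure

# Example usage, consistent with the iterations shown on the next slides:
graph = {'S': [('A', 1), ('G', 10)], 'A': [('B', 2), ('C', 1)],
         'B': [], 'C': [('G', 4), ('D', 3)], 'D': [], 'G': []}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
print(a_star(graph, h, 'S', 'G'))   # (6, ['S', 'A', 'C', 'G'])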
Contd....
❑ Advantages:
❖ The A* search algorithm performs better than other search algorithms.
❖ The A* search algorithm is optimal and complete.
❖ This algorithm can solve very complex problems.
❑ Disadvantages:
❖ It does not always produce the shortest path, as it is mostly
based on heuristics and approximation.
❖ The A* search algorithm has some complexity issues.
❖ The main drawback of A* is its memory requirement: it
keeps all generated nodes in memory, so it is not
practical for many large-scale problems.
Example
We will traverse the given graph using the A* algorithm. The heuristic
value of all states is given in the table below, so we will calculate the f(n)
of each state using the formula f(n) = g(n) + h(n), where g(n) is the cost to
reach any node from the start state. Here we will use the OPEN and
CLOSED lists.
Contd
Contd..
❑ Initialization: {(S, 5)}
❑ Iteration 1: {(S-->A, 4), (S-->G, 10)}
❑ Iteration 2: {(S-->A-->C, 4), (S-->A-->B, 7), (S-->G, 10)}
❑ Iteration 3: {(S-->A-->C-->G, 6), (S-->A-->C-->D, 11), (S-->A-->B, 7), (S-->G, 10)}
❑ Iteration 4 gives the final result: S-->A-->C-->G provides
the optimal path with cost 6.
Contd....
Points to remember
❑ The A* algorithm returns the path which occurs first, and it does not search all
remaining paths.
❑ The efficiency of the A* algorithm depends on the quality of the heuristic.
❑ The A* algorithm expands all nodes which satisfy the condition f(n) < C*, where C*
is the cost of the optimal solution.
❑ Complete: the A* algorithm is complete as long as:
❖ the branching factor is finite, and
❖ the cost of every action is fixed.
❑ Optimal: the A* search algorithm is optimal if it satisfies the following two conditions:
❖ Admissible: the first condition required for optimality is that h(n) should be an
admissible heuristic for A* tree search. An admissible heuristic is optimistic in
nature (it never overestimates the true cost to the goal).
❖ Consistency: the second condition, consistency, is required only for A* graph search. If
the heuristic function is admissible, then A* tree search will always find the least-cost
path.
Contd...
❑ Time Complexity: The time complexity of the A* search algorithm
depends on the heuristic function, and the number of nodes expanded is
exponential in the depth of the solution d. So the time complexity is
O(b^d), where b is the branching factor.
❑ Space Complexity: The space complexity of the A* search algorithm is
O(b^d).
Greedy Best First Search
❑ The greedy best-first search algorithm always selects the path which appears best at that moment.
It is a combined approach of BFS and DFS.
It uses the heuristic function to guide the search.
❑ We can choose the most promising node at each step with the help of best-first search.
In the greedy best-first search algorithm, we expand the node which is closest to the goal node,
where the remaining cost is estimated by the heuristic function.
❑ The evaluation function is f(n) = h(n), where h(n) = estimated cost from node n to the goal.
❑ The greedy search algorithm ignores the cost of the path that has already been traversed to reach n.
The solution given is therefore not necessarily optimal.
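A minimal sketch of greedy best-first search using heapq, ordering the frontier purely by h(n); the small graph and heuristic values here are illustrative, not the example on the next slides:

import heapq

def greedy_best_first_search(graph, h, start, goal):
    # Frontier ordered only by the heuristic h(n); path cost g(n) is ignored
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, _cost in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(frontier, (h[neighbour], neighbour, path + [neighbour]))
    return None

# Example usage (illustrative data):
graph = {'A': [('B', 2), ('C', 1)], 'B': [('D', 3)], 'C': [('D', 5)], 'D': []}
h = {'A': 4, 'B': 2, 'C': 3, 'D': 0}
print(greedy_best_first_search(graph, 'A' and h, 'A', 'D') if False else greedy_best_first_search(graph, h, 'A', 'D'))   # ['A', 'B', 'D']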
Greedy BFS Example
[Graph figure: weighted graph over nodes A–I; the edge costs along the greedy path are A–E = 140, E–F = 99, F–I = 211]

State   Heuristic h(n)
A       366
B       374
C       329
D       244
E       253
F       178
G       193
H       98
I       0
Greedy BFS
Example
Path: A ---> E ---> F ---> I
Estimated Path Cost = 140 + 99 + 211 = 450
Heuristic Cost = 253 + 178
Greedy BFS
❑ It is incomplete without systematic checking of repeated states.
❑ It is not optimal.
❑ The worst-case time and space complexity is O(b^d).
AO* Search
❑ An AND-OR graph is useful for representing the solution of problems that can be solved by
decomposing them into a set of smaller problems, all of which must then be solved.

Amit wants a bike

Steal it        Earn Money        Buy it


AO* Algorithm
Step 1: Create an initial graph with a single node (the start node).
Step 2: Traverse the graph following the current path, accumulating
nodes that have not yet been expanded or solved.
Step 3: Select any of these nodes and expand it. If it has no successors
then assign it the value FUTILITY; else calculate f'(n) for each of its
successors.
Step 4: If f'(n) = 0, then mark the node as SOLVED.
Step 5: Change the value of f'(n) for the newly created node to reflect
its successors by back-propagation.
Step 6: Whenever possible use the most promising routes. If a node is
marked as SOLVED then mark the parent node as SOLVED.
Step 7: If the starting node is SOLVED or its value is greater
than FUTILITY, then stop; else repeat from Step 2.
AO* Example Solution
• In this example, we will demonstrate how the AO* algorithm works
using an AND-OR graph.

• Each node in the graph is assigned a heuristic value, denoted as h(n),
and every edge length is taken as 1.
AO* Example Solution
• Step 1: Initial Evaluation Using f(n) = g(n) + h(n)
• Starting from node A, we use the evaluation function:
  f(A -> B) = g(B) + h(B)
            = 1 + 5        (g(n) = 1 is the default path cost)
            = 6
• For the path involving AND nodes (C and D):
  f(A -> C + D) = g(C) + h(C) + g(D) + h(D)
                = 1 + 2 + 1 + 4   (C & D are AND nodes)
                = 8
• Since f(A -> B) = 6 is smaller than f(A -> C + D) = 8, we select the
path A -> B.

AO* Example Solution
Step 3: Compare and Explore Paths
Now, we compare f(A -> B) = 9 with f(A -> C + D) = 8. Since f(A -> C + D)
is smaller, we explore this path and move to node C.
For node C:
  f(C -> G) = g(G) + h(G)
            = 1 + 3
            = 4
  f(C -> H + I) = g(H) + h(H) + g(I) + h(I)
                = 1 + 0 + 1 + 0   (H & I are AND nodes)
                = 2
The path f(C -> H + I) = 2 is selected as the path with the lowest cost, and the
heuristic of C is left unchanged because it matches the actual cost. Paths H and I
are considered solved because their heuristic values are 0, but path A -> D still
needs to be evaluated because it is part of an AND arc.
AO* Algorithm
Advantages
➢ It is an optimal algorithm.
➢ It traverses the graph according to the ordering of nodes and can be used for both
OR and AND graphs.
Disadvantages
➢ Sometimes, for unsolvable nodes, it cannot find the optimal path. Its
complexity is higher than that of other algorithms.
Local Search and Optimization Problems

❑ In the problems we have studied so far, the solution is the path.

❑ In many optimization problems, the path is irrelevant; the
goal state itself is the solution.

❑ The state space is a set of "complete"
configurations, and the optimal configuration is one of them.

❑ An iterative improvement algorithm keeps a single "current"
state and tries to improve it.

❑ The space complexity is constant.
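As an illustration of the iterative improvement idea above, here is a small hill-climbing-style sketch (not from the slides): it keeps a single current state and repeatedly moves to the best neighbour until no neighbour improves the score:

def iterative_improvement(initial_state, neighbours, score, max_steps=1000):
    # Keep a single "current" state and greedily move to the best improving neighbour
    current = initial_state
    for _ in range(max_steps):
        candidates = neighbours(current)
        if not candidates:
            return current
        best = max(candidates, key=score)
        if score(best) <= score(current):
            return current        # local optimum: no neighbour improves the score
        current = best
    return current

# Example usage: maximize f(x) = -(x - 3)^2 over integers, stepping by +/- 1
score = lambda x: -(x - 3) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(iterative_improvement(0, neighbours, score))   # 3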


Travelling Salesman Problem
Travelling Salesman Problem:
Algorithm
• Consider city 1 as the starting and ending point.
• Generate all (n-1)! permutations of the remaining cities.
• Calculate the cost of every permutation and keep track of the minimum-cost permutation.
• Return the permutation with minimum cost (a brute-force sketch follows below).
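A minimal brute-force sketch of this algorithm using itertools.permutations; the 4-city cost matrix is illustrative:

from itertools import permutations

def tsp_brute_force(cost, start=0):
    n = len(cost)
    other_cities = [c for c in range(n) if c != start]
    best_cost, best_tour = float('inf'), None
    # Generate all (n-1)! permutations of the remaining cities
    for perm in permutations(other_cities):
        tour = [start] + list(perm) + [start]
        total = sum(cost[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))
        if total < best_cost:
            best_cost, best_tour = total, tour
    return best_cost, best_tour

# Example usage with a 4-city symmetric cost matrix (illustrative values):
cost = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(tsp_brute_force(cost))   # (80, [0, 1, 3, 2, 0])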
