AI-03 Informed Search

The document covers informed search algorithms, including greedy best-first search and A* search, and the properties of the heuristic functions they rely on. Greedy best-first search expands the node with the lowest heuristic value h(n), while A* search combines the actual path cost g(n) and the estimated cost to the goal h(n) into the evaluation function f(n) = g(n) + h(n). The optimality proofs for A* require admissible heuristics (and consistent heuristics for the graph-search version). The document closes with ways of generating admissible heuristics and homework assignments on search algorithms.


ARTIFICIAL INTELLIGENCE

Lecture 3
Abhijit Boruah
DUIET
Informed (Heuristic) Search
 Uses problem-specific knowledge beyond the definition of the problem itself.

 The general approach is best-first search, where a node is selected for expansion based on an evaluation function f(n), which is a cost estimate (a minimal code sketch appears after this list).

 The node with the lowest f(n) is expanded first. The choice of f determines the search strategy.

 Most best-first search algorithms include a heuristic function as a component of f:


h(n) = estimated cost of the cheapest path from the state at node n to a goal state.

 Special cases:
 Greedy Best first search
 A* search
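The special cases above differ only in the evaluation function f. Below is a minimal best-first search sketch in Python, assuming a successors(state) function that yields (next_state, step_cost) pairs and an f(state, g) evaluation function; these interface names are illustrative, not from the slides.

import heapq
import itertools

def best_first_search(start, is_goal, successors, f):
    """Expand the frontier node with the lowest f-value first."""
    counter = itertools.count()              # tie-breaker so heapq never compares states
    frontier = [(f(start, 0), next(counter), start, 0, [start])]
    best_g = {}                              # cheapest path cost found so far per state
    while frontier:
        _, _, state, g, path = heapq.heappop(frontier)
        if is_goal(state):
            return path, g
        if state in best_g and best_g[state] <= g:
            continue                         # already reached via a cheaper path
        best_g[state] = g
        for nxt, cost in successors(state):
            g2 = g + cost
            heapq.heappush(frontier, (f(nxt, g2), next(counter), nxt, g2, path + [nxt]))
    return None, float("inf")                # no solution found

Greedy best-first search and A* below simply plug different evaluation functions f into this skeleton.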
Greedy Best-First Search
 Greedy best-first search expands the node that appears to be closest to the goal.

 Evaluates a node by its heuristic function alone, i.e.,

f(n) = h(n).

 e.g., hSLD(n) = straight-line distance from n to Bucharest.
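As a usage sketch, reusing the best_first_search function above: greedy best-first search simply ignores the path cost g. Here romania_roads and h_sld are assumed data structures, not defined in the slides.

# Greedy best-first: f(n) = h(n); the path cost so far is ignored.
# romania_roads: assumed dict {city: [(neighbour, road_km), ...]}
# h_sld:         assumed dict of straight-line distances to Bucharest
path, cost = best_first_search(
    start="Arad",
    is_goal=lambda s: s == "Bucharest",
    successors=lambda s: romania_roads[s],
    f=lambda state, g: h_sld[state],
)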
hSLD example: from Arad to Bucharest
Greedy best-first search example (worked on the Romania map over several figure slides)
Properties of greedy best-first search
 Complete? No – can get stuck in loops.
 Time? O(b^m), but a good heuristic can give dramatic improvement.
 Space? O(b^m) – keeps all nodes in memory.
 Optimal? No.
e.g., Arad → Sibiu → Rimnicu Vilcea → Pitesti → Bucharest is shorter!

*m is maximum depth of the search space


A* Search
 Idea: avoid expanding paths that are already expensive.

 Many games and web-based map services use this algorithm to find shortest paths efficiently.

 Evaluates a node by combining g(n), the cost to reach the node, and h(n), the estimated cheapest cost to get from the node to a goal:

 f(n) = g(n) + h(n) (estimated cost of the cheapest solution through n).

 Hence expand the node with the lowest value of g(n) + h(n).
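Under the same assumptions as the greedy sketch earlier (romania_roads and h_sld are illustrative data, and best_first_search is the sketch given above), A* only changes the evaluation function:

# A*: f(n) = g(n) + h(n)
path, cost = best_first_search(
    start="Arad",
    is_goal=lambda s: s == "Bucharest",
    successors=lambda s: romania_roads[s],
    f=lambda state, g: g + h_sld[state],
)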
A* search example (worked on the Romania map over several figure slides)
Conditions for optimality: Admissibility
and consistency
 The first condition for optimality is admissibility.

 An admissible heuristic never overestimates the cost to reach the goal, so h(n) in f(n) = g(n) + h(n) never causes f(n) to overestimate the true cost of a solution through n. WHY?

 A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n)
is the true cost to reach the goal state from n.

 Example: hSLD(n) (never overestimates the actual road distance)
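This definition can be checked directly on a finite state space. A minimal sketch, assuming h_star(n) returns the true optimal cost from n to the goal (e.g., computed once by uniform-cost search); both names are illustrative.

def is_admissible(h, h_star, states):
    """True iff h never overestimates: h(n) <= h*(n) for every state n."""
    return all(h(n) <= h_star(n) for n in states)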

 Theorem: If h(n) is admissible, the TREE-SEARCH version of A* is optimal.
Optimality of A* tree search version (proof)
 Suppose some suboptimal goal G2 has been generated and is in the fringe.
Let n be an unexpanded node in the fringe such that n is on a shortest path
to an optimal goal G.

We want to prove:
f(n) < f(G2)
(then A* will prefer n over G2)

 f(G2) = g(G2) since h(G2) = 0
 f(G) = g(G) since h(G) = 0
 g(G2) > g(G) since G2 is suboptimal
 f(G2) > f(G) from above
Optimality of A* (proof)

 Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be
an unexpanded node in the fringe such that n is on a shortest path to an optimal goal
G.

 f(G2) > f(G) from the previous slide
 h(n) ≤ h*(n) since h is admissible (never overestimates)
 g(n) + h(n) ≤ g(n) + h*(n) from above
 f(n) ≤ f(G) since g(n) + h(n) = f(n) and g(n) + h*(n) = g(G) = f(G), because n lies on an optimal path to G
 f(n) < f(G2) combining this with the first line.
Hence: n is preferred over G2
Conditions for optimality: Admissibility and
consistency
 A second, stronger condition for optimality is consistency (also called monotonicity).
 A heuristic h(n) is consistent if, for every node n and every successor n' of n generated by any action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n' plus the estimated cost of reaching the goal from n':

h(n) ≤ c(n,a,n') + h(n')

 If h is consistent, we have
f(n') = g(n') + h(n')
      = g(n) + c(n,a,n') + h(n')
      ≥ g(n) + h(n) = f(n)
so f(n') ≥ f(n), i.e., f(n) is non-decreasing along any path. (It's the triangle inequality!)

 Theorem:
If h(n) is consistent, GRAPH-SEARCH version of A* is optimal
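Like admissibility, consistency can be checked mechanically on a finite state space. A minimal sketch, assuming successors(n) yields (n_prime, step_cost) pairs; the interface is illustrative.

def is_consistent(h, successors, states):
    """True iff h(n) <= c(n, a, n') + h(n') for every state n and successor n'."""
    return all(h(n) <= cost + h(n2)
               for n in states
               for n2, cost in successors(n))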
Heuristics
 We can calculate g, but how do we calculate h?
 Either calculate the exact value of h (which is certainly time-consuming),
             OR
approximate the value of h using some heuristic (less time-consuming).
 Exact Heuristics
1. Pre-compute the distance between each pair of cells before running the A* search algorithm.
2. If there are no blocked cells/obstacles, we can find the exact value of h without any pre-computation, using the distance formula (Euclidean distance).

Video: A* Search
Heuristics
 Approximation Heuristics
 Manhattan Distance: the sum of the absolute differences between the goal's x and y coordinates and the current cell's x and y coordinates, i.e., h = abs(current_cell.x – goal.x) + abs(current_cell.y – goal.y).
When to use this heuristic? When we are allowed to move in four directions only (right, left, up, down).

 Diagonal Distance: the maximum of the absolute differences between the goal's x and y coordinates and the current cell's x and y coordinates, i.e., h = max{ abs(current_cell.x – goal.x), abs(current_cell.y – goal.y) }. Use it when we are allowed to move in eight directions only (similar to a King's move in chess).

 Euclidean Distance: the distance between the current cell and the goal cell using the distance formula, h = sqrt((current_cell.x – goal.x)² + (current_cell.y – goal.y)²). Use it when we are allowed to move in any direction. (All three are sketched in code after this list.)
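A minimal sketch of the three grid heuristics, assuming cells are given as (x, y) tuples:

import math

def manhattan(cell, goal):
    """4-directional movement (right, left, up, down)."""
    return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

def diagonal(cell, goal):
    """8-directional movement (king moves in chess), unit diagonal cost."""
    return max(abs(cell[0] - goal[0]), abs(cell[1] - goal[1]))

def euclidean(cell, goal):
    """Movement allowed in any direction."""
    return math.hypot(cell[0] - goal[0], cell[1] - goal[1])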
Properties of A* (home task)
If C* is the cost of the optimal solution path, then
 A* expands all nodes with f(n) < C*
 A* then might expand some nodes in the goal contour (where f(n) = C*)
before selecting a goal node.
 Completeness? – requires there to be only finitely many nodes with
cost less than or equal to C*.
 Time/Space complexities?
 Time: O(b^Δ); for constant step costs, O(b^(εd)), where Δ = h* − h is the absolute error and ε = (h* − h)/h* is the relative error.
 Memory inefficient: it keeps all generated nodes, so it runs out of space and is not practical for large-scale problems.
 Go for memory-bounded heuristic search instead (e.g., Recursive Best-First Search (RBFS), MA*).
 Optimality? – optimally efficient for any given consistent heuristic.
Generation of Heuristic Functions
(Self Study)
 Simplified Memory-Bounded A* search (SMA*) (see https://userweb.cs.txstate.edu/~ma04/files/CS5346/SMA%20search.pdf)
 Generating admissible heuristics from relaxed problems.
 Generating admissible heuristics from subproblems.
 Learning heuristics from experience.
Assignments
1. Prove each of the following statements:
a) Breadth-first search is a special case of uniform-cost search.
b) Breadth-first search, depth-first search, and uniform-cost search are
special cases of best-first search.
c) Uniform-cost search is a special case of A* search.

2. The heuristic path algorithm is a best-first search in which the objective
function is f(n) = (2 − w)g(n) + w·h(n). For what values of w is this
algorithm guaranteed to be optimal? (You may assume that h is
admissible.) What kind of search does this perform when w = 0? When
w = 1? When w = 2?
