
Informed Search Old

The document discusses problem-solving in artificial intelligence through informed search methods, focusing on heuristics, dominance, and relaxed problems like the 8-puzzle and traveling salesperson problem. It explains the properties of A* search, including its optimality and consistency, and introduces various search algorithms such as greedy best-first search and recursive best-first search (RBFS). Additionally, it highlights the drawbacks of RBFS and IDA* and mentions memory-bounded search algorithms.


Artificial Intelligence

Problem Solving by Informed Search

Conducted by:
Dr. Fazlul Hasan Siddiqui
([email protected])

Credit: MIT, USA


Admissible heuristics
E.g., for the 8-puzzle:
h1(n) = number of misplaced tiles
h2(n) = total Manhattan distance
(i.e., the number of squares each tile is from its desired location)

Start State        Goal State
7 2 4              1 2 3
5   6              4 5 6
8 3 1              7 8

h1(S) = 6
h2(S) = 4+0+3+3+1+0+2+1 = 14

Chapter 4, Sections 1–3 34
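As a sketch, the two heuristics can be computed directly for the start state above (tile layout taken from the slide; 0 denotes the blank):

```python
# Compute h1 (misplaced tiles) and h2 (total Manhattan distance)
# for the 8-puzzle start and goal states shown above.
start = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)
goal  = (1, 2, 3,
         4, 5, 6,
         7, 8, 0)

def h1(state, goal):
    """Number of tiles (not counting the blank) out of place."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Sum of Manhattan distances of each tile from its goal square."""
    pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        r, c = divmod(i, 3)
        gr, gc = pos[tile]
        total += abs(r - gr) + abs(c - gc)
    return total

print(h1(start, goal))  # 6
print(h2(start, goal))  # 14
```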


Dominance
Given two admissible heuristics ha, hb,
if hb(n) ≥ ha(n) for all n then hb dominates ha and is better for search
In the 8-puzzle h2 dominates h1. Typical search costs:
d = 14:  IDS = 3,473,941 nodes;  A∗(h1) = 539 nodes;  A∗(h2) = 113 nodes
d = 24:  IDS ≈ 54,000,000,000 nodes;  A∗(h1) = 39,135 nodes;  A∗(h2) = 1,641 nodes
Given two admissible heuristics ha, hb,
h(n) = max(ha(n), hb(n))
is also admissible and dominates ha, hb
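A minimal illustration of this composition rule; the node names and heuristic values below are made up:

```python
# If ha and hb are both admissible (neither overestimates the true cost),
# then h(n) = max(ha(n), hb(n)) still never overestimates, so it is also
# admissible -- and by construction it dominates both components.
def max_heuristic(*hs):
    return lambda n: max(f(n) for f in hs)

# Toy illustration with made-up heuristic tables for two hypothetical nodes.
ha = {"n1": 3, "n2": 5}.get
hb = {"n1": 4, "n2": 2}.get
h = max_heuristic(ha, hb)
print(h("n1"), h("n2"))  # 4 5
```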



Relaxed problems
Admissible heuristics can be derived from the optimal
solution cost of a relaxed version of the problem
Rules of the 8-puzzle:
a tile can move from square A to square B if
A is adjacent to B and B is blank;
Relaxations:
♦ a tile can move from square A to square B if A is adjacent to B
♦ a tile can move from square A to square B if B is blank
♦ a tile can move from square A to square B

Key point: the optimal solution cost of a relaxed problem


is no greater than the optimal solution cost of the real problem



Relaxed problems contd.
Well-known example: travelling salesperson problem (TSP)
Find the shortest tour which visits all cities exactly once and then returns to
the first city: the set of arcs which covers all nodes at minimum cost
so that each node has degree 2 (i.e., forms a tour).


Relaxed problems contd.
Minimum spanning tree (MST): the set of arcs which covers all nodes at
minimum cost (the degree-2 tour constraint is dropped).

The minimum spanning tree can be computed in O(n²)
and is a lower bound on the cost of the shortest (open) tour.
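A sketch of this bound using Prim's O(n²) algorithm; the city coordinates below are made up for illustration:

```python
import math

# Prim's algorithm in O(n^2): the MST cost over a set of city coordinates
# is an admissible lower bound on the cost of any (open) tour, since a tour
# minus one edge is a spanning tree.
def mst_cost(cities):
    n = len(cities)
    dist = lambda a, b: math.dist(cities[a], cities[b])
    in_tree = [False] * n
    best = [math.inf] * n   # cheapest edge connecting each node to the tree
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v]:
                best[v] = min(best[v], dist(u, v))
    return total

cities = [(0, 0), (0, 1), (1, 1), (1, 0)]   # unit square
print(mst_cost(cities))  # 3.0 (three unit-length edges)
```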



Consistency
A heuristic h is consistent if, for every node n and each successor n′ of n
generated by an action a,
h(n) ≤ c(n, a, n′) + h(n′)
If h is consistent, then h is admissible, and f(n) is nondecreasing along any path:
f(n′) = g(n′) + h(n′)
      = g(n) + c(n, a, n′) + h(n′)
      ≥ g(n) + h(n)
      = f(n)

Consequently, when expanding a node, we cannot get a node with a smaller f,
and so the value of the best node on the fringe will never decrease.
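The edge-by-edge condition above is easy to check mechanically. A small sketch, with a made-up weighted graph and heuristic table:

```python
# Check consistency h(n) <= c(n, a, n') + h(n') on every edge of a small
# graph. Node names, step costs, and h-values are illustrative only.
graph = {  # node -> {successor: step cost}
    "A": {"B": 1, "C": 4},
    "B": {"C": 2, "G": 5},
    "C": {"G": 3},
    "G": {},
}
h = {"A": 5, "B": 4, "C": 3, "G": 0}

def is_consistent(graph, h):
    return all(h[n] <= c + h[m]
               for n, succs in graph.items()
               for m, c in succs.items())

print(is_consistent(graph, h))  # True
```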



Optimality of A∗ (based on consistency)
Lemma: if h is consistent, A∗ expands nodes in order of increasing f value
Gradually expands “f -contours” of nodes (cf. breadth-first expands layers,
uniform-cost expands g-contours)
Contour i has all nodes with f = fi, where fi < fi+1
[Figure: f-contours at f = 380, 400, and 420 drawn over the Romania map]


Properties of A∗
Complete?? Yes, unless there are infinitely many nodes with f ≤ f(G)
Time?? Exponential in [relative error in h × length of soln.]
Space?? Keeps all nodes in memory
Optimal?? Yes, since it cannot expand fi+1 until fi is finished
A∗ expands all nodes with f(n) < C∗
A∗ expands some nodes with f(n) = C∗
A∗ expands no nodes with f(n) > C∗
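These properties can be observed on the classic Romania route-finding example. A minimal A* sketch (road costs and hSLD values taken from that example) that logs each node's f-value at expansion, showing the f-contours grow monotonically:

```python
import heapq

# Minimal A* graph search on a fragment of the Romania map, with
# straight-line-distance-to-Bucharest as the (consistent) heuristic.
graph = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
    "Rimnicu Vilcea": {"Pitesti": 97},
    "Fagaras": {"Bucharest": 211},
    "Pitesti": {"Bucharest": 101},
    "Timisoara": {}, "Zerind": {}, "Bucharest": {},
}
h = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
     "Pitesti": 100, "Timisoara": 329, "Zerind": 374, "Bucharest": 0}

def astar(start, goal):
    fringe = [(h[start], 0, start, [start])]   # (f, g, node, path)
    closed = set()
    order = []                                 # expansion order with f-values
    while fringe:
        f, g, node, path = heapq.heappop(fringe)
        if node in closed:
            continue
        closed.add(node)
        order.append((node, f))
        if node == goal:
            return path, g, order
        for m, c in graph[node].items():
            if m not in closed:
                heapq.heappush(fringe, (g + c + h[m], g + c, m, path + [m]))
    return None, float("inf"), order

path, cost, order = astar("Arad", "Bucharest")
print(path, cost)  # optimal route via Rimnicu Vilcea and Pitesti, cost 418
print(order)       # f-values at expansion are nondecreasing
```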



Best-first search

The general approach we will consider is called best-first search. Best-first
search is an instance of the general TREE-SEARCH or GRAPH-SEARCH
algorithm in which a node is selected for expansion based on an evaluation
function, f(n).
Implementation:

Best-first search can be implemented within our general search
framework via a priority queue, a data structure that will maintain
the fringe in ascending order of f-values.

A key component of these algorithms is a heuristic function, denoted by h(n):
h(n) = estimated cost of the cheapest path from node n to a goal node.
if n is a goal node, then h(n) = 0.
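As a sketch of this framework, a generic best-first search over a made-up graph, parameterized by the evaluation function f:

```python
import heapq

# Generic best-first search: the fringe is a priority queue ordered by an
# evaluation function f(n, g). Plugging in f = g gives uniform-cost search,
# f = h gives greedy best-first, and f = g + h gives A*. The toy graph and
# costs below are illustrative.
def best_first_search(start, is_goal, successors, f):
    fringe = [(f(start, 0), 0, start, [start])]   # (priority, g, node, path)
    best_g = {}
    while fringe:
        _, g, node, path = heapq.heappop(fringe)
        if is_goal(node):
            return path, g
        if node in best_g and best_g[node] <= g:
            continue                               # stale queue entry
        best_g[node] = g
        for m, cost in successors(node):
            heapq.heappush(fringe, (f(m, g + cost), g + cost, m, path + [m]))
    return None, float("inf")

edges = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("G", 5)],
         "C": [("G", 3)], "G": []}
path, cost = best_first_search("A", lambda n: n == "G",
                               lambda n: edges[n], lambda n, g: g)
print(path, cost)  # a cheapest path from A to G, cost 6
```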
Greedy best-first search

Greedy best-first search tries to expand the node
that is closest to the goal.

Evaluation function f(n) = h(n) (heuristic)

e.g., hSLD(n) = straight-line distance from n to
Bucharest

Greedy best-first search expands the node that
appears to be closest to goal
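A minimal greedy best-first sketch on a fragment of the Romania example (road costs and hSLD values from that example). Note that it finds the route via Fagaras, which is not the cheapest one:

```python
import heapq

# Greedy best-first search: the fringe is ordered by h alone (hSLD to
# Bucharest). It rushes toward the goal and returns the route through
# Fagaras (cost 450), missing the cheaper route through Rimnicu Vilcea
# and Pitesti (cost 418).
graph = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
    "Rimnicu Vilcea": {"Pitesti": 97},
    "Fagaras": {"Bucharest": 211},
    "Pitesti": {"Bucharest": 101},
    "Timisoara": {}, "Zerind": {}, "Bucharest": {},
}
h = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
     "Pitesti": 100, "Timisoara": 329, "Zerind": 374, "Bucharest": 0}

def greedy_best_first(start, goal):
    fringe = [(h[start], 0, start, [start])]   # ordered by h only
    visited = set()
    while fringe:
        _, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return path, g
        if node in visited:
            continue
        visited.add(node)
        for m, c in graph[node].items():
            heapq.heappush(fringe, (h[m], g + c, m, path + [m]))
    return None, float("inf")

path, cost = greedy_best_first("Arad", "Bucharest")
print(path, cost)  # ['Arad', 'Sibiu', 'Fagaras', 'Bucharest'] 450
```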
Recursive best-first search (RBFS)

RBFS is a simple recursive algorithm that attempts to mimic the
operation of standard best-first search, but using only linear
space.

Its structure is similar to that of a recursive depth-first search, but
rather than continuing indefinitely down the current path it keeps
track of the f-value of the best alternative path available from any
ancestor of the current node.

If the f-value of the current node exceeds this limit, the recursion
unwinds back to the alternative path.

As the recursion unwinds, RBFS replaces the f -value of each
node along the path with the best f -value of its children.
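The steps above can be sketched as follows. This is a simplified RBFS, not the textbook's exact pseudocode; the Romania fragment (road costs and hSLD values from the standard example) serves as test data:

```python
# Linear-space RBFS sketch: recurse on the best child while its f-value
# stays within f_limit (the best alternative f among ancestors), and back
# up the minimum child f-value as the recursion unwinds.
INF = float("inf")

graph = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
    "Rimnicu Vilcea": {"Pitesti": 97},
    "Fagaras": {"Bucharest": 211},
    "Pitesti": {"Bucharest": 101},
    "Timisoara": {}, "Zerind": {}, "Bucharest": {},
}
h = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
     "Pitesti": 100, "Timisoara": 329, "Zerind": 374, "Bucharest": 0}

def rbfs(node, goal, g=0, f_limit=INF, f_node=None):
    if f_node is None:
        f_node = g + h[node]
    if node == goal:
        return [node], g
    # each child carries [backed-up f, name, g]; inherit parent's f if larger
    children = [[max(g + c + h[m], f_node), m, g + c]
                for m, c in graph[node].items()]
    if not children:
        return None, INF
    while True:
        children.sort()
        best = children[0]
        if best[0] > f_limit:
            return None, best[0]        # unwind; report backed-up f-value
        alternative = children[1][0] if len(children) > 1 else INF
        result, value = rbfs(best[1], goal, best[2],
                             min(f_limit, alternative), best[0])
        if result is not None:
            return [node] + result, value
        best[0] = value                 # replace f with best f of children

path, cost = rbfs("Arad", "Bucharest")
print(path, cost)  # optimal route, cost 418
```

Tracing it reproduces the "changes its mind" behavior described below: the search descends via Rimnicu Vilcea, backs up to try Fagaras, then switches back.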

RBFS is somewhat more efficient than IDA*, but still suffers from excessive
node re-generation.
– RBFS follows the path via Rimnicu Vilcea, then “changes its mind” and
tries Fagaras, and then changes its mind back again.
– These mind changes occur because every time the current best path is
extended, its f -value is likely to increase—h is usually less optimistic for
nodes closer to the goal.
– When this happens, the second-best path might become the best path,
so the search has to backtrack to follow it.

Like A* tree search, RBFS is an optimal algorithm if the heuristic
function h(n) is admissible.

RBFS's space complexity is linear in the depth of the deepest optimal
solution, but its time complexity is rather difficult to characterize: it
depends both on the accuracy of the heuristic function and on how often
the best path changes as nodes are expanded.
Drawbacks of RBFS & IDA*

IDA* and RBFS suffer from using too little memory.

Between iterations, IDA* retains only a single number: the current
f -cost limit. RBFS retains more information in memory, but it uses
only linear space: even if more memory were available, RBFS
has no way to make use of it.
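For contrast, a minimal IDA* sketch in which the only state carried between iterations is the current f-cost limit; the Romania fragment (costs and hSLD values from the standard example) is used as test data:

```python
# IDA*: depth-first search bounded by f = g + h, restarting with the
# smallest f-value that exceeded the previous bound. Between iterations
# only `bound` is retained.
def ida_star(successors, h, start, goal):
    def dfs(node, g, bound, path):
        f = g + h[node]
        if f > bound:
            return None, f              # prune; report f for the next bound
        if node == goal:
            return path, f
        next_bound = float("inf")
        for m, cost in successors(node):
            if m in path:               # avoid cycles along the current path
                continue
            found, t = dfs(m, g + cost, bound, path + [m])
            if found is not None:
                return found, t
            next_bound = min(next_bound, t)
        return None, next_bound
    bound = h[start]
    while True:
        found, t = dfs(start, 0, bound, [start])
        if found is not None:
            return found, t
        if t == float("inf"):
            return None, t              # no solution
        bound = t                       # the next iteration's f-cost limit

graph = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
    "Rimnicu Vilcea": {"Pitesti": 97},
    "Fagaras": {"Bucharest": 211},
    "Pitesti": {"Bucharest": 101},
    "Timisoara": {}, "Zerind": {}, "Bucharest": {},
}
h = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
     "Pitesti": 100, "Timisoara": 329, "Zerind": 374, "Bucharest": 0}

path, cost = ida_star(lambda n: graph[n].items(), h, "Arad", "Bucharest")
print(path, cost)  # optimal route, cost 418
```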

Because they forget most of what they have done, both
algorithms may end up reexpanding the same states many times
over.

It therefore seems sensible to use all available memory. Two
algorithms that do this are MA* (memory-bounded A*) and
SMA* (simplified MA*).
Summary

Uninformed/Blind Search
✔ DFS
✔ BFS
✔ Depth-limited search
✔ IDS (or iterative deepening depth-first search)
✔ Bidirectional
✔ Uniform Cost

Informed/Heuristic Search

Best-First Search

Greedy Best-First Search

A*

Memory Bounded Search

IDA*

RBFS

MA*

SMA*
