
Informed search algorithms

Course Teacher: Moona Kanwal


Informed Search Techniques
• Best-first search
– Greedy best-first search
– A* search
• Heuristics
• Memory-Bounded Search
– Iterative Deepening A* Search
– Simplified Memory Bounded A*
• Local Search Algorithms and Optimization Problems
– Hill-climbing search
– Simulated Annealing
Review: Tree search

• A search strategy is defined by picking the order of node expansion
Best-first search
• Idea: use an evaluation function f(n) for each node
– estimate of "desirability"
→ expand the most desirable unexpanded node

• Implementation: order the nodes in the fringe in decreasing order of
desirability (a generic sketch follows below)

• Special cases:
– greedy best-first search
– A* search
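A minimal Python sketch of this scheme, assuming the fringe is a priority queue ordered by f; the function names and the (action, next_state, step_cost) successor signature are illustrative choices, not from the slides:

import heapq
import itertools

def best_first_search(start, goal_test, successors, f):
    """Generic best-first search over a graph.

    start      -- initial state (hashable)
    goal_test  -- state -> bool
    successors -- state -> iterable of (action, next_state, step_cost)
    f          -- (state, path_cost_g) -> evaluation value; lower = more desirable
    Returns a list of actions reaching a goal, or None.
    """
    tie = itertools.count()            # tie-breaker so the heap never compares states
    frontier = [(f(start, 0), next(tie), 0, start, [])]
    explored = set()
    while frontier:
        _, _, g, state, actions = heapq.heappop(frontier)
        if goal_test(state):
            return actions
        if state in explored:
            continue
        explored.add(state)
        for action, nxt, cost in successors(state):
            if nxt not in explored:
                heapq.heappush(frontier,
                               (f(nxt, g + cost), next(tie), g + cost, nxt,
                                actions + [action]))
    return None

Greedy best-first search and A* below plug different f functions into this same skeleton.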
Romania with step costs in km
[Figure: map of Romania, not reproduced in this text version]
Greedy best-first search
• Evaluation function f(n) = h(n) (heuristic) = estimate of cost from n to the goal
• e.g., hSLD(n) = straight-line distance from n to Bucharest
• Greedy best-first search expands the node that appears to be closest to the goal
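Plugging into the assumed skeleton above, greedy best-first search simply sets f(n) = h(n); the hSLD values below are an excerpt from the standard Romania map:

# Greedy best-first search: f(n) = h(n); the cost so far (g) is ignored.
h_sld = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176, 'Bucharest': 0}   # excerpt

def greedy_f(state, g):
    return h_sld[state]          # g is deliberately unused

# plan = best_first_search('Arad', lambda s: s == 'Bucharest', successors, greedy_f)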
Greedy best-first search example

[Four slides: successive node expansions on the Romania map, guided by hSLD toward Bucharest; figures not reproduced]
Properties of greedy best-first search
• Complete? No – can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt
• Time? O(b^m), but a good heuristic can give dramatic improvement
• Space? O(b^m) – keeps all nodes in memory
• Optimal? No
A* search example

• A* uses f(n) = g(n) + h(n), where g(n) is the cost so far to reach n and h(n) is the estimated cost from n to the goal: avoid expanding paths that are already expensive

[Six slides: successive expansions on the Romania map; figures not reproduced]
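With the same assumed skeleton and hSLD table as above, A* simply adds the path cost g(n):

# A* search: f(n) = g(n) + h(n).
def astar_f(state, g):
    return g + h_sld[state]

# plan = best_first_search('Arad', lambda s: s == 'Bucharest', successors, astar_f)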
Consistent heuristics
• A heuristic h is consistent if, for every node n and every successor n' of n generated by any action a:
h(n) ≤ c(n, a, n') + h(n')

• Theorem: if h(n) is consistent, A* using GRAPH-SEARCH is optimal
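The inequality can be spot-checked numerically; a small sketch, assuming for illustration that the state space is a dict mapping each state to a list of (next_state, step_cost) pairs and that h is a dict:

def is_consistent(h, graph):
    """Spot-check h(n) <= c(n, a, n') + h(n') over every edge."""
    return all(h[n] <= cost + h[nxt]
               for n, edges in graph.items()
               for nxt, cost in edges)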
Properties of A*
• Complete? Yes (unless there are infinitely
many nodes with f ≤ f(G) )
• Time? Exponential
• Space? Keeps all nodes in memory
• Optimal? Yes
Admissible heuristics
• A heuristic h(n) is admissible if it never overestimates the true cost to reach the goal: h(n) ≤ h*(n)

E.g., for the 8-puzzle:
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance
(i.e., number of squares each tile is from its desired location)

[Figure: a sample start state S and the goal state, not reproduced]

• h1(S) = 8
• h2(S) = 3+1+2+2+2+3+3+2 = 18
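Both heuristics are straightforward to compute; a sketch assuming the board is encoded as a tuple of 9 values in row-major order with 0 for the blank, and an assumed goal layout (the slide's actual figure is not reproduced):

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal layout

def h1(board):
    """Number of misplaced tiles (blank not counted)."""
    return sum(1 for i, tile in enumerate(board)
               if tile != 0 and tile != GOAL[i])

def h2(board):
    """Total Manhattan distance of each tile from its goal square."""
    total = 0
    for i, tile in enumerate(board):
        if tile == 0:
            continue
        gi = GOAL.index(tile)
        total += abs(i // 3 - gi // 3) + abs(i % 3 - gi % 3)
    return total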
Dominance
• If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1
• h2 is better for search

• Typical search costs (average number of nodes expanded):
[Table not reproduced in this text version]
Relaxed problems
• A problem with fewer restrictions on the actions
is called a relaxed problem

• The cost of an optimal solution to a relaxed
problem is an admissible heuristic for the
original problem

• If the rules of the 8-puzzle are relaxed so that a
tile can move anywhere, then h1(n) gives the
shortest solution

• If the rules are relaxed so that a tile can move to
any adjacent square, then h2(n) gives the
shortest solution
Memory Bounded Search
• Iterative Deepening A* Search
– Performs repeated depth-first searches, each bounded by an f-cost limit (a contour); after each iteration the limit grows to the smallest f-value that exceeded it.

• Simplified Memory Bounded A* Search
– When memory is full, the shallowest of the highest f-cost leaf nodes is discarded.
– The parent of a discarded node remembers the lowest f-cost among its forgotten descendants, so the subtree is regenerated only if it becomes worthwhile.
– Parent nodes update their f-cost to the minimum f-cost among their children.
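A compact sketch of the IDA* contour idea, as a recursive depth-first search bounded by the current f-cost limit (names are illustrative; cycle checking is omitted, as in the basic tree-search formulation):

def ida_star(start, goal_test, successors, h):
    """Iterative deepening A*: repeated depth-first searches with a growing f-cost limit."""
    def dfs(state, g, limit, path):
        f = g + h(state)
        if f > limit:
            return None, f                  # cut off; report the f that broke the contour
        if goal_test(state):
            return path, f
        smallest = float('inf')             # smallest f-value seen beyond the limit
        for action, nxt, cost in successors(state):
            result, new_f = dfs(nxt, g + cost, limit, path + [action])
            if result is not None:
                return result, new_f
            smallest = min(smallest, new_f)
        return None, smallest

    limit = h(start)
    while limit < float('inf'):
        result, limit = dfs(start, 0, limit, [])
        if result is not None:
            return result
    return None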



Local search algorithms
• In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution

• State space = set of "complete" configurations

• Find a configuration satisfying constraints, e.g., n-queens

• In such cases, we can use local search algorithms: keep a single "current" state and try to improve it
Example: n-queens
• Put n queens on an n × n board with no
two queens on the same row, column, or
diagonal
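A common encoding for local search (an assumption here, not fixed by the slide) places one queen per column, with state[c] giving the row of the queen in column c; the constraint-violation count is then:

def attacking_pairs(state):
    """Number of pairs of queens attacking each other (same row or same diagonal).
    state[c] is the row of the queen in column c, so same-column attacks cannot occur."""
    n = len(state)
    return sum(1
               for c1 in range(n)
               for c2 in range(c1 + 1, n)
               if state[c1] == state[c2]                     # same row
               or abs(state[c1] - state[c2]) == c2 - c1)     # same diagonal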

Hill-climbing search
• "Like climbing Everest in thick fog with
amnesia"

Hill-climbing search
• Problem: depending on initial state, can
get stuck in local maxima

Hill-climbing search: 8-queens problem

[Figure: an 8-queens board state, not reproduced]

• h = number of pairs of queens that are attacking each other, either directly or indirectly
• h = 17 for the state shown

Hill-climbing search: 8-queens problem

[Figure: a board state that is a local minimum, not reproduced]

• A local minimum with h = 1
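A minimal steepest-descent hill-climbing sketch for 8-queens, minimizing the attacking_pairs heuristic above; the move set (relocate one queen within its column) is the standard formulation, assumed here:

import random

def hill_climbing(n=8):
    """Steepest descent on h = attacking_pairs; may halt at a local minimum."""
    state = [random.randrange(n) for _ in range(n)]
    while True:
        h = attacking_pairs(state)
        if h == 0:
            return state                               # solved
        best_h, best_state = h, None
        for col in range(n):                           # best single-queen move
            for row in range(n):
                if row == state[col]:
                    continue
                neighbor = state[:]
                neighbor[col] = row
                nh = attacking_pairs(neighbor)
                if nh < best_h:
                    best_h, best_state = nh, neighbor
        if best_state is None:
            return state                               # stuck: local minimum, h > 0
        state = best_state

Random restarts are a common remedy when this halts at a local minimum with h > 0.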



Simulated annealing search
• Idea: escape local maxima by allowing some
"bad" moves but gradually decrease their
frequency
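A sketch of the standard loop; the exponential cooling schedule and all parameter values are illustrative assumptions:

import math
import random

def simulated_annealing(start, neighbors, value, t0=1.0, cooling=0.995, t_min=1e-4):
    """Maximize value(state). 'Bad' moves (delta < 0) are accepted with
    probability exp(delta / T), which shrinks as the temperature T decreases."""
    state, t = start, t0
    while t > t_min:
        nxt = random.choice(neighbors(state))
        delta = value(nxt) - value(state)
        if delta > 0 or random.random() < math.exp(delta / t):
            state = nxt
        t *= cooling                       # assumed exponential cooling schedule
    return state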
Properties of simulated annealing search
• One can prove: if the temperature T decreases slowly enough, then simulated annealing will find a global optimum with probability approaching 1

• Widely used in VLSI layout, airline scheduling, etc.
Genetic algorithms
• A successor state is generated by combining two parent states

• Start with k randomly generated states (the population)

• A state is represented as a string over a finite alphabet (often a string of 0s and 1s)

• Evaluation function (fitness function): higher values for better states

• Produce the next generation of states by selection, crossover, and mutation
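A compact sketch of one evolution cycle with fitness-proportional selection, single-point crossover, and per-gene mutation; all parameters and the binary alphabet are illustrative assumptions:

import random

def genetic_algorithm(population, fitness, mutate_rate=0.1, generations=100):
    """population: list of equal-length sequences over a finite alphabet.
    Assumes at least one individual has positive fitness in every generation."""
    for _ in range(generations):
        weights = [fitness(ind) for ind in population]
        next_gen = []
        for _ in range(len(population)):
            # selection: two parents, chosen with fitness-proportional probability
            p1, p2 = random.choices(population, weights=weights, k=2)
            # crossover: single cut point
            cut = random.randrange(1, len(p1))
            child = list(p1[:cut]) + list(p2[cut:])
            # mutation: occasionally replace one gene
            if random.random() < mutate_rate:
                pos = random.randrange(len(child))
                child[pos] = random.choice('01')   # assumes a binary alphabet
            next_gen.append(child)
        population = next_gen
    return max(population, key=fitness)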
