AI Lecture 03
Example of UCS: Romania
[Figure: uniform cost search on the Romania route map, starting from Arad (cost 0). Successive frames show the frontier growing through cities labelled Oradea, Lugoj, Mehadia, C, P, and Dobreta; eventually the frontier holds Dobreta (cost 384) and Bucharest (cost 418), and expanding Bucharest reaches the goal.]
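To make the UCS procedure concrete, here is a minimal Python sketch (not code from the lecture). The adjacency-list graph format, the function name ucs, and the toy edge costs are illustrative assumptions, not the Romania map's actual distances.

    import heapq

    def ucs(graph, start, goal):
        """Uniform cost search: always expand the frontier node with the
        smallest path cost g(n). Returns (cost, path) or (None, None)."""
        frontier = [(0, start, [start])]          # priority queue ordered by g(n)
        explored = set()
        while frontier:
            g, node, path = heapq.heappop(frontier)
            if node == goal:                      # goal test at expansion => optimal path
                return g, path
            if node in explored:
                continue
            explored.add(node)
            for neighbor, step_cost in graph.get(node, []):
                if neighbor not in explored:
                    heapq.heappush(frontier, (g + step_cost, neighbor, path + [neighbor]))
        return None, None

    # Toy weighted graph (illustrative costs, not the lecture's Romania distances).
    toy_graph = {
        'A': [('B', 2), ('C', 5)],
        'B': [('C', 1), ('D', 4)],
        'C': [('D', 1)],
        'D': [],
    }
    print(ucs(toy_graph, 'A', 'D'))               # (4, ['A', 'B', 'C', 'D'])

The key design choice is a priority queue ordered by the path cost g(n): the cheapest frontier node is always expanded first, which is why the first time the goal is popped its path is the cheapest one.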
We don't know which state is closest to goal
• Finding the shortest path is the whole point of the search.
• If we already knew which state was closest to the goal, there would be no reason to do the search.
• Figuring out which one is closest is, in general, a complexity O(b^d) problem.
[Figure: maze with the start state and the goal state marked.]
Search heuristics: estimates of distance-to-goal
• Often, even if we don't know the distance to the goal, we can estimate it.
• This estimate is called a heuristic.
• A heuristic is useful if it is:
  1. Accurate: h(n) ≈ d(n), where h(n) is the heuristic estimate and d(n) is the true distance to the goal.
  2. Cheap: it can be computed with complexity less than O(b^d).
Example heuristic: Manhattan distance
• If there were no walls in the maze, then the number of steps from position (x₁, y₁) to the goal position (x₂, y₂) would be |x₁ − x₂| + |y₁ − y₂|.
[Figure: maze with start and goal states. If there were no walls, this would be the path to goal: straight down, then straight right.]
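As a concrete sketch (assuming grid positions are (x, y) tuples; the example coordinates are made up for illustration), the Manhattan-distance heuristic can be written as:

    def manhattan_distance(pos, goal):
        """Number of grid steps from pos to goal if there were no walls.
        Admissible for a maze (walls only make the true path longer,
        so h(n) <= d(n)) and cheap (O(1) per node)."""
        (x1, y1), (x2, y2) = pos, goal
        return abs(x1 - x2) + abs(y1 - y2)

    # 5 steps in x plus 7 steps in y (illustrative coordinates):
    print(manhattan_distance((0, 0), (5, 7)))     # 12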
Outline of today's lecture
1. Uniform Cost Search (UCS): like BFS, but for actions that have different costs
   • Complete: always finds a solution, if one exists
   • Optimal: finds the best solution
   • Time complexity = # nodes whose path cost is less than the goal's cost
   • Space complexity = # nodes whose path cost is less than the goal's cost
2. Heuristics, e.g., Manhattan distance
3. Greedy best-first search
4. A*: like UCS, but adds an estimate of the remaining path length
   • Complete: always finds a solution, if one exists
   • Optimal: finds the best solution
   • Time complexity = # nodes whose cost + heuristic is less than the goal's cost
   • Space complexity = # nodes whose cost + heuristic is less than the goal's cost
Greedy Best-First Search
• Instead of choosing the node with the smallest total cost so far (as in UCS), greedy best-first search chooses the frontier node with the smallest heuristic estimate h(n), i.e., the node that seems closest to the goal (a code sketch follows below).
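A minimal Python sketch of greedy best-first search, assuming the same adjacency-list graph format as the UCS sketch above; the function name and the heuristic argument h (for grid nodes, something like lambda n: manhattan_distance(n, goal)) are illustrative, not from the lecture.

    import heapq

    def greedy_best_first(graph, start, goal, h):
        """Greedy best-first search: always expand the frontier node with the
        smallest heuristic value h(n), ignoring the cost g(n) already paid."""
        frontier = [(h(start), start, [start])]   # priority queue ordered by h(n) only
        explored = set()
        while frontier:
            _, node, path = heapq.heappop(frontier)
            if node == goal:
                return path                       # a path, but not necessarily the cheapest
            if node in explored:
                continue
            explored.add(node)
            for neighbor, _cost in graph.get(node, []):
                if neighbor not in explored:
                    heapq.heappush(frontier, (h(neighbor), neighbor, path + [neighbor]))
        return None

Note that the priority queue is ordered by h(n) alone, ignoring how much the path so far already costs; that is exactly what the following examples show going wrong.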
Greedy Search Example
[Figure: If our random choice goes badly, we might end up very far from the goal.]

The problem with Greedy Search
[Figure: That's not a useful path…]

The problem with Greedy Search
[Figure: Neither is that one…]
What went wrong?
Outline of today's lecture
1. Uniform Cost Search (UCS): like BFS, but for actions that have different costs
   • Complete: always finds a solution, if one exists
   • Optimal: finds the best solution
   • Time complexity = # nodes whose path cost is less than the goal's cost
   • Space complexity = # nodes whose path cost is less than the goal's cost
2. Heuristics, e.g., Manhattan distance
3. Greedy best-first search
4. A*: like UCS, but adds an estimate of the remaining path length
   • Complete: always finds a solution, if one exists
   • Optimal: finds the best solution
   • Time complexity = # nodes whose cost + heuristic is less than the goal's cost
   • Space complexity = # nodes whose cost + heuristic is less than the goal's cost
The problem with Greedy Search
• Among the nodes on the frontier, this one seems closest to the goal (smallest h(n), where h(n) ≤ d(n)).
• But the true cost of the full path through n is c(n) = g(n) + d(n) ≥ g(n) + h(n).
The problem with Greedy Search
• Of these three frontier nodes, this one has the smallest g(n) + h(n): g(n) + h(n) = 21 + 14 = 35.
A* notation
• c(n) = cost of the total path (START, …, n, …, GOAL).
• d(n) = distance of the remaining partial path (n, …, GOAL).
• g(n) = cost already spent ("gone") on the path so far, (START, …, n).
Since h(n) ≤ d(n): c(n) = g(n) + d(n) ≥ g(n) + h(n)
A* Search
• Idea: avoid expanding paths that are already expensive.
• The evaluation function f(n) is the estimated total cost of the path through node n to the goal:
  f(n) = g(n) + h(n)
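A minimal Python sketch of A*, again assuming the adjacency-list graph format of the earlier sketches; the only change relative to the UCS sketch is that the priority queue is ordered by f(n) = g(n) + h(n) rather than by g(n) alone.

    import heapq

    def a_star(graph, start, goal, h):
        """A* search: expand the frontier node with the smallest
        f(n) = g(n) + h(n). With an admissible heuristic (h(n) <= d(n)),
        the path returned for the goal is optimal."""
        frontier = [(h(start), 0, start, [start])]   # entries are (f, g, node, path)
        best_g = {start: 0}                          # cheapest known cost to each node
        while frontier:
            _f, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return g, path
            if g > best_g.get(node, float('inf')):
                continue                             # stale entry: a cheaper path was found later
            for neighbor, step_cost in graph.get(node, []):
                g2 = g + step_cost
                if g2 < best_g.get(neighbor, float('inf')):
                    best_g[neighbor] = g2
                    heapq.heappush(frontier, (g2 + h(neighbor), g2, neighbor, path + [neighbor]))
        return None, None

With h(n) = 0 for every node this reduces to UCS; with the priority h(n) alone it would reduce to greedy best-first search.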