04 Informed Search Annot

The document discusses informed search methods in artificial intelligence, focusing on best-first search and A* algorithms. It explains the concepts of heuristic functions, admissible heuristics, and how to find effective heuristics for problem-solving. Additionally, it introduces iterative deepening A* as a combination of A* and iterative deepening search techniques.


Informed Search

Introduction to Artificial Intelligence


© G. Lakemeyer

G. Lakemeyer

Winter Term 2024/25


Best-First Search

Search methods differ in their strategy for which node to expand next.

Uninformed search: fixed strategies, without information about the cost from a given node to a goal.

Informed search (heuristic search): uses information about the cost from a given node to a goal in the form of an evaluation function f, assigning each node a real number.

Best-First Search: always expand the node with the “best” f-value.

Greedy Search: h(n) = estimated cost from the state at node n to a goal state. Expand the node n where h(n) is minimal, i.e., use f = h.
AI/WS-2024/25 2 / 20
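The greedy strategy above can be sketched as a priority-queue search ordered by h alone. A minimal sketch in Python (the graph fragment and straight-line distances below are taken from the Romania example on the next slide; the function and variable names are ours):

```python
import heapq

def greedy_best_first(start, goal, successors, h):
    """Greedy best-first search: always expand the frontier node
    with the smallest heuristic value h(n).  f = h, so the path
    cost so far is ignored; the result is not optimal in general."""
    frontier = [(h(start), start, [start])]
    expanded = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in expanded:
            continue
        expanded.add(node)
        for succ, _cost in successors(node):   # step cost is ignored
            if succ not in expanded:
                heapq.heappush(frontier, (h(succ), succ, path + [succ]))
    return None

# Fragment of the Romania map (road costs) and straight-line
# distances to Bucharest, as on the slides.
ROADS = {
    'Arad': [('Sibiu', 140), ('Timisoara', 118), ('Zerind', 75)],
    'Sibiu': [('Arad', 140), ('Fagaras', 99), ('Oradea', 151), ('Rimnicu', 80)],
    'Fagaras': [('Sibiu', 99), ('Bucharest', 211)],
    'Rimnicu': [('Sibiu', 80), ('Pitesti', 97), ('Craiova', 146)],
    'Pitesti': [('Rimnicu', 97), ('Craiova', 138), ('Bucharest', 101)],
    'Timisoara': [('Arad', 118)],
    'Zerind': [('Arad', 75)],
    'Oradea': [('Sibiu', 151)],
    'Craiova': [('Rimnicu', 146), ('Pitesti', 138)],
    'Bucharest': [],
}
SLD = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
       'Fagaras': 178, 'Oradea': 380, 'Rimnicu': 193, 'Pitesti': 98,
       'Craiova': 160, 'Bucharest': 0}

path = greedy_best_first('Arad', 'Bucharest', ROADS.get, SLD.get)
```

On this fragment greedy search follows the minimal h values 366 → 253 → 178 → 0 and returns the Fagaras route (cost 450), which is not the cheapest route to Bucharest.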
Greedy Search Example
[Figure: Romania road map with step costs, and a table of straight-line distances to Bucharest, e.g. Arad 366, Sibiu 253, Fagaras 178, Rimnicu Vilcea 193, Timisoara 329, Zerind 374, Bucharest 0.]

Greedy search from Arad always expands the node with minimal h: Arad (h=366), then Sibiu (h=253), then Fagaras (h=178), then Bucharest (h=0), yielding the path Arad, Sibiu, Fagaras, Bucharest (which is not the cheapest path).
A*
combines uniform cost search with greedy search.

g(n) = actual cost from the initial state to n.
h(n) = estimated cost from n to the nearest goal.
f(n) := g(n) + h(n).

f(n) is the estimated cost of the cheapest path that passes through n.

Let h*(n) be the actual cost of the optimal path from n to the nearest goal.

Admissible Heuristic
h is called admissible if we have for all n:

h(n) ≤ h*(n).

We require for A* that h is admissible.
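The definitions above translate directly into code: order the frontier by f = g + h and track the best known g per state. A minimal sketch (names and the duplicate-detection scheme are ours, not from the slides):

```python
import heapq

def a_star(start, goal, successors, h):
    """A* search: expand the frontier node with minimal
    f(n) = g(n) + h(n).  With an admissible h (h(n) <= h*(n))
    the first goal popped from the frontier is optimal."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if g > best_g.get(node, float('inf')):
            continue                              # stale queue entry
        for succ, cost in successors(node):
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):
                best_g[succ] = g2
                heapq.heappush(frontier,
                               (g2 + h(succ), g2, succ, path + [succ]))
    return None, float('inf')

# Same Romania fragment and straight-line distances as before.
ROADS = {
    'Arad': [('Sibiu', 140), ('Timisoara', 118), ('Zerind', 75)],
    'Sibiu': [('Arad', 140), ('Fagaras', 99), ('Oradea', 151), ('Rimnicu', 80)],
    'Fagaras': [('Sibiu', 99), ('Bucharest', 211)],
    'Rimnicu': [('Sibiu', 80), ('Pitesti', 97), ('Craiova', 146)],
    'Pitesti': [('Rimnicu', 97), ('Craiova', 138), ('Bucharest', 101)],
    'Timisoara': [('Arad', 118)],
    'Zerind': [('Arad', 75)],
    'Oradea': [('Sibiu', 151)],
    'Craiova': [('Rimnicu', 146), ('Pitesti', 138)],
    'Bucharest': [],
}
SLD = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
       'Fagaras': 178, 'Oradea': 380, 'Rimnicu': 193, 'Pitesti': 98,
       'Craiova': 160, 'Bucharest': 0}

path, cost = a_star('Arad', 'Bucharest', ROADS.get, SLD.get)
```

Unlike greedy search, A* here returns the optimal route via Rimnicu Vilcea and Pitesti with cost 140 + 80 + 97 + 101 = 418.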
Example A*

[Figure: A* search tree from Arad with f = g + h at each node. Arad: f = 0+366 = 366. Expanding Arad: Sibiu f = 140+253 = 393, Timisoara f = 118+329 = 447, Zerind f = 75+374 = 449. Expanding Sibiu: Arad f = 280+366 = 646, Fagaras f = 239+178 = 417, Oradea f = 146+380 = 526, Rimnicu Vilcea f = 220+193 = 413. Expanding Rimnicu Vilcea: Craiova f = 366+160 = 526, Pitesti f = 317+98 = 415, Sibiu f = 300+253 = 553.]

Note: in the example f is monotone nondecreasing.

The following can always be guaranteed:

Path-Max Equation
Let n, n′ be nodes, where n is the parent of n′. Then let

f(n′) = max(f(n), g(n′) + h(n′)).
Example Why Path-Max Eq. Makes Sense

[Handwritten sketch.] Suppose h(n) = 4 and g(n) = 3, so f(n) = 7. For the child n′, g(n′) = 4 but h(n′) = 2, so g(n′) + h(n′) = 6, which would drop below f(n) = 7. Since every path through n′ also passes through n, the true cost through n′ is at least the true cost through n, so the path-max equation sets f(n′) = max(7, 6) = 7.
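The path-max equation is a one-liner; the numbers below follow our reading of the handwritten annotation above (parent with g = 3, h = 4; child with g = 4, h = 2):

```python
def f_pathmax(f_parent, g_child, h_child):
    """Path-max equation: f(n') = max(f(n), g(n') + h(n')).
    Keeps f monotone nondecreasing along a path even when the
    heuristic h itself is inconsistent."""
    return max(f_parent, g_child + h_child)

# Parent n: g = 3, h = 4  =>  f(n) = 7.
# Child n': g = 4, h = 2  =>  g + h = 6 would drop below 7;
# path-max keeps f(n') = 7.
f_child = f_pathmax(7, 4, 2)
```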
Example Why A* is Optimal

[Handwritten sketch, partly illegible: a start state with two paths to a goal, annotated with g- and h-values.] Key point of the sketch: if h is non-admissible (here, h = 20 overestimates the true remaining cost), then the optimal solution is not found.
Contour Lines in A*

[Figure: the Romania map with contour lines of f drawn around Arad, at f = 380, 400, and 420.]

Nodes outside the contour lines are never expanded.
Another Example Illustrating Contour Lines

[Handwritten sketch with f-contours at 40 and 48.]

Time and space complexity O(b^d): node expansion is exponential in general, except when (letting h*(n) be the optimal cost of reaching a goal from n)

|h(n) − h*(n)| = O(log h*(n)).
Theorem: A* is Optimal for Admissible h

Proof by contradiction: Suppose A* picks a suboptimal goal G′ for expansion, and let n be a node on the path to an optimal goal G that is still a leaf in the search tree, i.e., not yet chosen for expansion. Since G′ is a goal, h(G′) = 0, hence f(G′) = g(G′), which exceeds the optimal cost f(G). Since h is admissible and n lies on the optimal path, f(n) = g(n) + h(n) ≤ f(G) < f(G′). But then A* would have chosen n, not G′, for expansion, a contradiction.
Heuristic Function 1

[Figure: an 8-puzzle start state and goal state.]

h1 = number of tiles in the wrong position; for the start state s shown, h1(s) = 7.

h2 = sum of the distances of all tiles to their goal locations (Manhattan distance); for the start state shown, h2(s) = 18.
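Both heuristics are easy to compute from a board encoded as a flat tuple with 0 for the blank. A minimal sketch (the tiny example instance is ours; the slide's exact board came through garbled in this copy):

```python
def h1(state, goal):
    """h1: number of tiles (blank = 0 excluded) that are not on
    their goal position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal, width=3):
    """h2: sum of the Manhattan distances of all tiles to their
    goal positions.  Dominates h1: h1(s) <= h2(s) <= h*(s)."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        gidx = goal.index(tile)
        total += (abs(idx // width - gidx // width)
                  + abs(idx % width - gidx % width))
    return total

# Tiny illustrative instance: tiles 7 and 8 each one step away.
state = (1, 2, 3, 4, 5, 6, 0, 7, 8)
goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
```

Here two tiles are misplaced and each is one move from its goal cell, so h1 = h2 = 2.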
Heuristic Function 1

[Figure repeated: 8-puzzle start and goal states; h1 = misplaced tiles, h2 = Manhattan distance.]

Effective branching factor b*: If A* generates N nodes with solution depth d, then b* is the branching factor of a uniform tree of depth d with N + 1 nodes, i.e.

N + 1 = 1 + b* + (b*)^2 + … + (b*)^d

b* is a measure of the goodness of h: the closer b* is to 1, the better.
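The defining equation has no closed form for general d, but since the tree size is strictly increasing in b, it can be solved numerically. A sketch by bisection (function name is ours):

```python
def effective_branching_factor(n, d, tol=1e-9):
    """Solve N + 1 = 1 + b + b^2 + ... + b^d for b by bisection.
    A b* close to 1 means the heuristic prunes almost everything."""
    target = n + 1

    def tree_size(b):
        # 1 + b + b^2 + ... + b^d, increasing in b for b > 0.
        return sum(b ** i for i in range(d + 1))

    lo, hi = 1.0, max(2.0, float(n))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if tree_size(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For example, N = 6 nodes at depth d = 2 solves 1 + b + b² = 7 exactly at b* = 2. Note that the table on the next slide reports values averaged over many problem instances, so its entries need not satisfy the equation exactly for the row's N and d.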
Heuristic Function 2

Comparison of search cost (nodes generated) and effective branching factor for IDS, A*(h1), and A*(h2) at solution depth d:

  d |     Search Cost          |  Effective Branching Factor
    |     IDS  A*(h1)  A*(h2)  |   IDS   A*(h1)  A*(h2)
  2 |      10       6       6  |  2.45    1.79    1.79
  4 |     112      13      12  |  2.87    1.48    1.45
  6 |     680      20      18  |  2.73    1.34    1.30
  8 |    6384      39      25  |  2.80    1.33    1.24
 10 |   47127      93      39  |  2.79    1.38    1.22
 12 |  364404     227      73  |  2.78    1.42    1.24
 14 | 3473941     539     113  |  2.83    1.44    1.23
 16 |       –    1301     211  |    –     1.45    1.25
 18 |       –    3056     363  |    –     1.46    1.26
 20 |       –    7276     676  |    –     1.47    1.27
 22 |       –   18094    1219  |    –     1.48    1.28
 24 |       –   39135    1641  |    –     1.48    1.26
How to Find a Heuristic

General Strategy:
Simplify the problem
Compute the exact solution for the simplified problem
Use the solution cost as a heuristic.

For example:
h1 is the solution cost for the simplified 8-puzzle where tiles can be
placed at an arbitrary position with a single action.
h2 corresponds to the exact solution cost if tiles can be moved to an arbitrary position, but each action is restricted to moving a tile to a neighboring position.

Pattern Databases

[Figure: 8-puzzle start state (7 2 4 / 5 _ 6 / 8 3 1) and goal state (_ 1 2 / 3 4 5 / 6 7 8), together with abstracted versions that keep only a subset of the tiles.]

Idea: Compute the exact solution for each pattern with four numbers and use that value as heuristic. When more than one pattern applies, use the maximum value. Better than Manhattan!

Note: one cannot simply take the sum of the values of two matching patterns, because their solutions share moves. The solution here: count only the moves of the numbered tiles of each pattern; values of disjoint patterns may then be added.
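The additive variant described above can be sketched as a backward search from the goal over abstracted boards, where only moves of pattern tiles are charged (a 0-1 BFS, since non-pattern moves cost 0). All names here are ours; this is a sketch of the idea, not the course's reference implementation:

```python
from collections import deque

N = 3  # 3x3 board

def adjacent(pos):
    """Cells orthogonally adjacent to cell index pos."""
    r, c = divmod(pos, N)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < N and 0 <= cc < N:
            yield rr * N + cc

def build_pattern_db(goal, pattern):
    """Backward 0-1 BFS from the goal over abstracted boards.
    Moving a pattern tile costs 1; moving a 'don't care' tile
    (None) costs 0, so values of disjoint patterns are additive."""
    start = tuple(t if t == 0 or t in pattern else None for t in goal)
    dist = {start: 0}
    db = {}                      # pattern-tile positions -> cost
    dq = deque([start])
    while dq:
        state = dq.popleft()
        d = dist[state]
        key = tuple(state.index(t) for t in sorted(pattern))
        if d < db.get(key, float('inf')):
            db[key] = d
        blank = state.index(0)
        for nb in adjacent(blank):
            cost = 1 if state[nb] in pattern else 0
            nxt = list(state)
            nxt[blank], nxt[nb] = nxt[nb], nxt[blank]
            nxt = tuple(nxt)
            if d + cost < dist.get(nxt, float('inf')):
                dist[nxt] = d + cost
                # 0-1 BFS: zero-cost moves go to the front.
                (dq.appendleft if cost == 0 else dq.append)(nxt)
    return db

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)
db = build_pattern_db(GOAL, frozenset({1, 2, 3, 4}))
```

At lookup time, read off the positions of each pattern's tiles in the current state and add the values of the disjoint databases.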
Iterative Deepening A*
Combination of A* and iterative deepening search.
Instead of a fixed depth increase per iteration, use f-cost limits:
- Do a DFS using the current f-cost limit.
- Remember the f-costs of the nodes that first exceed the current f-cost limit.
- Pick the smallest of those values as the limit for the next iteration.

[Handwritten sketch: successive f-contours, e.g. at f = 20 and f = 40.]
IDA* Algorithm

function IDA*(problem) returns a solution sequence
  inputs: problem, a problem
  static: f-limit, the current f-COST limit
          root, a node
  root ← MAKE-NODE(INITIAL-STATE[problem])
  f-limit ← f-COST(root)
  loop do
    solution, f-limit ← DFS-CONTOUR(root, f-limit)
    if solution is non-null then return solution
    if f-limit = ∞ then return failure
  end

function DFS-CONTOUR(node, f-limit) returns a solution sequence and a new f-COST limit
  inputs: node, a node
          f-limit, the current f-COST limit
  static: next-f, the f-COST limit for the next contour, initially ∞
  if f-COST[node] > f-limit then return null, f-COST[node]
  if GOAL-TEST[problem](STATE[node]) then return node, f-limit
  for each node s in SUCCESSORS(node) do
    solution, new-f ← DFS-CONTOUR(s, f-limit)
    if solution is non-null then return solution, f-limit
    next-f ← MIN(next-f, new-f)
  end
  return null, next-f
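The pseudocode above can be turned into a compact recursive implementation. A sketch (the toy graph, its costs, and the admissible estimates are our own illustration):

```python
import math

def ida_star(start, goal, successors, h):
    """IDA*: iterated depth-first contour search.  Each iteration
    runs a DFS bounded by the current f-limit and records the
    smallest f-value that exceeded it, which becomes the next limit."""

    def dfs_contour(node, g, path, f_limit):
        f = g + h(node)
        if f > f_limit:
            return None, f
        if node == goal:
            return path, f_limit
        next_f = math.inf
        for succ, cost in successors(node):
            if succ in path:                     # avoid cycles on this path
                continue
            sol, new_f = dfs_contour(succ, g + cost, path + [succ], f_limit)
            if sol is not None:
                return sol, f_limit
            next_f = min(next_f, new_f)
        return None, next_f

    f_limit = h(start)
    while True:
        sol, f_limit = dfs_contour(start, 0, [start], f_limit)
        if sol is not None:
            return sol
        if f_limit == math.inf:
            return None

# Toy graph with admissible estimates: optimal route S-A-B-G, cost 6.
EDGES = {'S': [('A', 1), ('B', 4)], 'A': [('S', 1), ('B', 2), ('G', 12)],
         'B': [('S', 4), ('A', 2), ('G', 3)], 'G': []}
H = {'S': 5, 'A': 4, 'B': 2, 'G': 0}
route = ida_star('S', 'G', EDGES.get, H.get)
```

Memory use is linear in the length of the current path, which is the point of IDA* compared to A*'s exponential frontier.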
Hill Climbing
[Sketch: an evaluation landscape; start with a randomly chosen current state and repeatedly move to a better neighboring state.]

function HILL-CLIMBING(problem) returns a solution state
  inputs: problem, a problem
  static: current, a node
          next, a node
  current ← MAKE-NODE(INITIAL-STATE[problem])
  loop do
    next ← a highest-valued successor of current
    if VALUE[next] < VALUE[current] then return current
    current ← next
  end

Note: one may end up in local maxima.
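A minimal steepest-ascent version in Python. One deliberate deviation from the pseudocode above: we also stop when the best successor is merely equal in value, which guarantees termination on plateaus (at the price of not exploring them):

```python
def hill_climbing(initial, successors, value):
    """Steepest-ascent hill climbing: move to the best successor
    until no successor strictly improves on the current state.
    May stop at a local maximum."""
    current = initial
    while True:
        succs = successors(current)
        if not succs:
            return current
        best = max(succs, key=value)
        if value(best) <= value(current):
            return current            # local maximum (or plateau)
        current = best

# One-dimensional example: maximize -(x - 3)^2 over the integers.
peak = hill_climbing(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2)
```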
Another Problem with Local Search: Ridges
[Figure: a ridge in the evaluation landscape, a sequence of local maxima along which greedy local search makes little progress.]
Beam Search

Beam search is a refinement of local search with random restarts.
- Start with k randomly chosen states.
- Generate all successors of all states.
- Pick the k best states.
- Iterate.

Focuses search on the most promising states generated.
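The four steps above can be sketched directly; the one-dimensional test landscape is our own illustration:

```python
import random

def beam_search(sample_state, successors, value, k=4, iterations=25):
    """Local beam search: keep only the k best states among all
    successors of the current k states.  Unlike k independent
    restarts, the beam concentrates on the most promising region."""
    states = [sample_state() for _ in range(k)]
    for _ in range(iterations):
        # Generate all successors of all current states.
        pool = {s for state in states for s in successors(state)}
        if not pool:
            break
        # Keep the k best.
        states = sorted(pool, key=value, reverse=True)[:k]
    return max(states, key=value)

random.seed(0)
best = beam_search(lambda: random.randint(-10, 10),
                   lambda x: [x - 1, x + 1],
                   lambda x: -(x - 3) ** 2)
```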
Simulated Annealing

function SIMULATED-ANNEALING(problem, schedule) returns a solution state
  inputs: problem, a problem
          schedule, a mapping from time to “temperature”
  static: current, a node
          next, a node
          T, a “temperature” controlling the probability of downward steps
  current ← MAKE-NODE(INITIAL-STATE[problem])
  for t ← 1 to ∞ do
    T ← schedule[t]
    if T = 0 then return current
    next ← a randomly selected successor of current
    ∆E ← VALUE[next] − VALUE[current]
    if ∆E > 0 then current ← next
    else current ← next only with probability e^(∆E/T)
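A direct transcription of the pseudocode; the linear cooling schedule and the toy landscape are illustrative choices of ours, not prescribed by the slides:

```python
import math
import random

def simulated_annealing(initial, successors, value, schedule):
    """Always accept uphill moves; accept a downhill move
    (dE < 0) with probability e^(dE/T).  Terminates when the
    schedule reaches temperature 0."""
    current = initial
    t = 1
    while True:
        T = schedule(t)
        if T == 0:
            return current
        nxt = random.choice(successors(current))
        dE = value(nxt) - value(current)
        if dE > 0 or random.random() < math.exp(dE / T):
            current = nxt
        t += 1

random.seed(0)
# Linear cooling reaching T = 0 at t = 2000 (illustrative schedule).
result = simulated_annealing(0,
                             lambda x: [x - 1, x + 1],
                             lambda x: -(x - 3) ** 2,
                             lambda t: max(0.0, 2.0 - 0.001 * t))
```

At high T the walk accepts most moves; as T falls, downhill steps become rare and the process behaves like hill climbing near the best states found.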
