
Artificial Intelligence Searching

Heuristic Search

Reading: Sections 4.1–4.3 of the textbook (Russell & Norvig)


Heuristic search uses problem-specific information, beyond the problem
definition itself, to search the state space more efficiently.

What kind of heuristic information would be useful?

Information that helps:
• Decide which node to expand next, instead of expanding nodes in a
strictly breadth-first or depth-first order;
• Decide, while expanding a node, which successor(s) to generate,
instead of blindly generating all possible successors at once;
• Decide which nodes to discard (prune) from the search space.


Best-First Search
Idea: use an evaluation function f(n) that indicates, for each node,
how desirable it is to expand next.
– f(n) usually estimates the cost to reach the goal.
– The node with the lowest f value is expanded first.

• A key component of f(n) is a heuristic function, h(n), which encodes
additional knowledge of the problem.

Special cases, depending on the choice of evaluation function
(a minimal implementation sketch follows below):
– greedy best-first search
– A* search
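A minimal sketch of the general best-first framework in Python, assuming the successor function returns (state, step cost) pairs and the evaluation function f is passed in as a callable; the function names and data layout are illustrative choices, not from the slides. Greedy best-first search corresponds to f(n) = h(n); A*, shown later, uses f(n) = g(n) + h(n).

```python
import heapq

def best_first_search(start, goal, successors, f):
    """Generic best-first search.

    successors(state) -> iterable of (next_state, step_cost)
    f(state, g)       -> evaluation value used to order the frontier
    Returns (path, cost) or None if no path exists.
    """
    frontier = [(f(start, 0.0), 0.0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0.0}
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):      # repeated-state checking
                best_g[nxt] = g2
                heapq.heappush(frontier, (f(nxt, g2), g2, nxt, path + [nxt]))
    return None

# Greedy best-first search: order the frontier by h alone, e.g. (h assumed given)
#   best_first_search(S, G, successors, lambda n, g: h[n])
```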


Greedy best-first search

Evaluation function: f(n) = h(n) (heuristic)
= estimate of the cost from node n to the goal,
e.g., h(n) = straight-line distance from n to G.

Greedy best-first search expands the node that appears to be closest
to the goal.

[Figure: example graph with heuristic values h(S)=10, h(A)=8.5, h(B)=6,
h(C)=3, h(D)=8, h(E)=6, h(F)=3. Expanding S puts A (f=8.5) and D (f=8)
on the frontier; D is expanded first, yielding A (f=8.5) and E (f=6),
and so on.]


Greedy best-first search: Discussion

• Complete? No – it can get stuck in loops, e.g., going from C to G:
C → B → C → …
Yes, it is complete in a finite space with repeated-state checking.

• Time? O(b^m), but a good heuristic can give dramatic improvement
by avoiding the expansion of unnecessary nodes.
• Space? O(b^m), since it keeps all nodes in memory.
• Optimal? No.


A* Search
Idea: avoid expanding paths that are already expensive.
A* finds a minimal-cost path joining the start node and a goal node.

Evaluation function: f(n) = g(n) + h(n)

where
g(n) = cost so far to reach n,
h(n) = estimated cost from n to the goal,
f(n) = estimated total cost of the path through n to the goal.

[Figure: for intermediate nodes m, n, p between start S and goal G,
g*(·) denotes the true cost from S to the node and h*(·) the true cost
from the node to G.]

A* Search

Worked example (an implementation sketch follows below):

[Figure: example graph with heuristic values in brackets:
h(S)=10, h(A)=8.5, h(B)=6, h(C)=3, h(D)=8, h(E)=6, h(F)=3;
edge costs S–A = 3, S–D = 4, A–B = 4, A–D = 5, B–C = 4, B–E = 5,
D–E = 2, E–F = 4, F–G = 3, C–G = 3.]

Search tree, with f(n) = g(n) + h(n) at each node:
S expands to A (f = 3 + 8.5 = 11.5) and D (f = 4 + 8 = 12);
A expands to B (7 + 6 = 13) and D (8 + 8 = 16);
D (f = 12) expands to A (9 + 8.5 = 17.5) and E (6 + 6 = 12);
E expands to B (11 + 6 = 17) and F (10 + 3 = 13);
B (f = 13, reached via A) expands to C (11 + 3 = 14) and E (12 + 6 = 18);
F expands to the goal G with f = 13 + 0 = 13,
so A* returns the path S → D → E → F → G of cost 13.
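A compact A* sketch in Python, run on the example graph above. The adjacency-dictionary encoding and the function name astar are illustrative choices, and the edge costs are those reconstructed from the slide's figure.

```python
import heapq

# Example graph from the slide: edge costs and heuristic values h(n).
GRAPH = {
    "S": {"A": 3, "D": 4},
    "A": {"S": 3, "B": 4, "D": 5},
    "B": {"A": 4, "C": 4, "E": 5},
    "C": {"B": 4, "G": 3},
    "D": {"S": 4, "A": 5, "E": 2},
    "E": {"B": 5, "D": 2, "F": 4},
    "F": {"E": 4, "G": 3},
    "G": {"C": 3, "F": 3},
}
H = {"S": 10, "A": 8.5, "B": 6, "C": 3, "D": 8, "E": 6, "F": 3, "G": 0}

def astar(start, goal):
    """A* with f(n) = g(n) + h(n) and repeated-state checking."""
    frontier = [(H[start], 0, start, [start])]          # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in GRAPH[node].items():
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + H[nxt], g2, nxt, path + [nxt]))
    return None

print(astar("S", "G"))   # (['S', 'D', 'E', 'F', 'G'], 13)
```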

Admissibility of A*
If the estimated distance h(n) never exceeds the true distance h*(n)
from the current node to the goal node, then A* will always find a
shortest path. This property is known as the admissibility of the A*
algorithm, and such an h(n) is called an admissible heuristic.

IF 0 ≤ h(n) ≤ h*(n) for every node n, and the costs of all arcs are positive,
THEN A* is guaranteed to find a solution path of minimal cost if any
solution path exists.

Theorem: A* is optimal if h(n) is admissible.
(A check on the example graph is sketched below.)
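As a sanity check, h*(n) can be computed exactly with Dijkstra's algorithm from the goal and compared against h(n). This sketch reuses the GRAPH and H dictionaries defined in the A* example above; the function names are illustrative choices.

```python
import heapq

def true_costs_to_goal(graph, goal):
    """Dijkstra from the goal over the graph: returns h*(n) for every node."""
    dist = {goal: 0}
    frontier = [(0, goal)]
    while frontier:
        d, node = heapq.heappop(frontier)
        if d > dist.get(node, float("inf")):
            continue
        for nxt, cost in graph[node].items():
            if d + cost < dist.get(nxt, float("inf")):
                dist[nxt] = d + cost
                heapq.heappush(frontier, (d + cost, nxt))
    return dist

def check_admissible(graph, h, goal):
    """True if 0 <= h(n) <= h*(n) for every node n."""
    h_star = true_costs_to_goal(graph, goal)
    return all(0 <= h[n] <= h_star[n] for n in graph)

print(check_admissible(GRAPH, H, "G"))   # True: h never overestimates h*
```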


Optimality of A* (Proof)
Suppose some suboptimal goal G2 has been generated and is in the queue.
Let n be an unexpanded node on a shortest path to an optimal goal G,
and let the cost of the optimal solution be C*.

f(G2) = g(G2) + h(G2) = g(G2) > C*, since h(G2) = 0 and G2 is suboptimal.

f(n) = g(n) + h(n) ≤ C*, since h(n) is admissible.

Therefore f(n) ≤ C* < f(G2), so A* will never select G2 for expansion.

Note: this proof can break down when repeated-state checking (graph
search) is used; the next slides address that case via monotonicity.

Monotonicity
A heuristic h(n) is monotone (consistent) if, for every node n and every
successor n′ of n, the estimated cost of reaching the goal from n is no
greater than the step cost of getting to n′ plus the estimated cost of
reaching the goal from n′:
h(n) ≤ c(n, n′) + h(n′).    (triangle inequality)

Theorem: Every monotone heuristic is admissible. (Exercise 4.7)

Let h*(n) be the true cost of getting from n to the goal.

(Proof by induction, working backwards from the goal to show h(n) ≤ h*(n) for every node n.)
Base case (k = 1): let n′ be the goal node and n one step away on a shortest path; then
h(n) ≤ c(n, n′) + h(n′) = c(n, n′) = h*(n).
Inductive case: assume n′ lies on a shortest path k steps from the goal and that h(n′) is
admissible (inductive hypothesis). Then h(n) ≤ c(n, n′) + h(n′) ≤ c(n, n′) + h*(n′) = h*(n),
so h(n) is also admissible for nodes k + 1 steps from the goal.


Monotonicity
Theorem: If h(n) is monotone, then the values of f(n) along any path
are nondecreasing.

Proof sketch: g(n′) = g(n) + c(n, n′), so
f(n′) = g(n′) + h(n′) = g(n) + c(n, n′) + h(n′) ≥ g(n) + h(n) = f(n).

Theorem: A* (with repeated-state checking) is optimal if h(n) is monotone.
(A consistency check is sketched below.)
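A short sketch that verifies the triangle inequality h(n) ≤ c(n, n′) + h(n′) over every edge of the example graph from the A* slide, again reusing the illustrative GRAPH and H dictionaries defined earlier.

```python
def check_monotone(graph, h):
    """Return True if h satisfies h(n) <= c(n, n') + h(n') for every edge."""
    return all(
        h[n] <= cost + h[nxt]
        for n, neighbours in graph.items()
        for nxt, cost in neighbours.items()
    )

# The example heuristic is consistent, hence also admissible:
print(check_monotone(GRAPH, H))   # True
```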


Generating Admissible Heuristics

• For many tasks, a good heuristic is the key to finding a solution:
– it prunes the search space;
– it guides the search towards the goal.
• Relaxed problems:
– place fewer restrictions on the successor function (operators);
– the exact solution cost of a relaxed problem is an admissible heuristic
for the original problem.
• If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1 and is
better for search.
• Relaxing the original problem in different ways yields a family of
heuristics h1(n), h2(n), ..., hm(n).
• Admissible heuristics can also be derived from the solution cost of a
subproblem of the given problem.
• The best of these can be combined as: h(n) = max{h1(n), h2(n), ...,
hm(n)}.


Generating Admissible Heuristics

Example: 8-puzzle
The figure shows an 8-puzzle start state and goal state; the optimal solution is 26 steps long.
• h1(n) = number of misplaced tiles
• h2(n) = sum of the (Manhattan) distances of the tiles from their goal positions
(horizontal and vertical moves only, no diagonals).

• h1(S) = 8
• h2(S) = 3+1+2+2+2+3+3+2 = 18
• h(S) = max{h1(S), h2(S)} = 18
(A code sketch follows below.)
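A sketch computing h1, h2, and their maximum for the textbook's start and goal configurations (R&N Figure 4.7, presumably the figure on this slide, since it reproduces the values above). Encoding the board as a row-major tuple with 0 for the blank is an illustrative choice.

```python
# 0 denotes the blank; boards are row-major 3x3 tuples.
START = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)
GOAL = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)

def h1(state, goal=GOAL):
    """Number of misplaced tiles (the blank is not a tile)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Sum of Manhattan distances of each tile from its goal position."""
    pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            r, c = divmod(i, 3)
            gr, gc = pos[tile]
            total += abs(r - gr) + abs(c - gc)
    return total

print(h1(START), h2(START), max(h1(START), h2(START)))   # 8 18 18
```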


Relaxing the 8-Puzzle

Original problem: a tile can move from square A to square B if A is
horizontally or vertically adjacent to B and B is blank.

This problem can be relaxed as:
a. A tile can move from square A to square B if A is adjacent to B.  => h2
b. A tile can move from square A to square B.  => h1


Local search algorithms

In many optimization problems, the path to the goal is irrelevant;
the goal state itself is the solution.
State space = set of "complete" configurations.
Goal: find a configuration satisfying the constraints, e.g., n-queens.
In such cases we can use local search algorithms, which operate on a
single current state (rather than multiple paths) and generally move
only to neighbors of that state.

Advantages:
– They use very little memory (they are not required to store all nodes).
– They can often find reasonable solutions even in infinite state spaces.
– They are applicable to finding the best state according to an objective function.

Example: n-queens
Put n queens on an n × n board with no two
queens on the same row, column, or diagonal.


Hill-climbing Search
“Like climbing Everest in thick fog with amnesia.”
Always move to the neighboring state that appears closest to the goal
(according to the heuristic/objective function h).
It is a special case of best-first search in which only the current node is
kept in memory. A sketch for n-queens is given below.

Problem: depending on the initial state, hill climbing can get stuck in
local maxima, shoulders, etc.
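A minimal hill-climbing sketch for n-queens, assuming the common complete-state formulation (one queen per column, the state is a tuple of row indices). The function names and the use of attacking pairs as the objective are illustrative choices.

```python
import random
from itertools import combinations

def attacking_pairs(state):
    """Number of pairs of queens attacking each other (to be minimized)."""
    return sum(
        1
        for (c1, r1), (c2, r2) in combinations(enumerate(state), 2)
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2)
    )

def hill_climb(n=8, max_steps=1000):
    """Steepest-ascent hill climbing; may stop at a local optimum."""
    state = tuple(random.randrange(n) for _ in range(n))
    for _ in range(max_steps):
        current = attacking_pairs(state)
        if current == 0:
            return state                      # solution found
        # All states obtained by moving one queen within its column.
        neighbours = [
            state[:col] + (row,) + state[col + 1:]
            for col in range(n)
            for row in range(n)
            if row != state[col]
        ]
        best = min(neighbours, key=attacking_pairs)
        if attacking_pairs(best) >= current:
            return state                      # stuck at a local maximum/plateau
        state = best
    return state

print(hill_climb())   # a solution, or a local optimum with few conflicts
```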


Simulated Annealing Search

Idea: escape local maxima by allowing some "bad" (i.e., downhill) moves.
The algorithm picks a random move; if the move improves the situation it is
always accepted, otherwise it is accepted with some probability less than 1.
The probability decreases exponentially with the badness of the move.
Two parameters are used: d, the change in the objective value, and T, the
temperature. A sketch follows below.

Example: say there are 3 moves available, with changes in the objective
function of d1 = -0.1, d2 = 0.5, d3 = -5, and let T = 1.
Pick a move at random:
if d2 is picked, move there;
if d1 or d3 is picked, the probability of accepting the move is exp(d/T):
move 1: prob1 = exp(-0.1) ≈ 0.9, i.e., about 90% of the time we accept this move;
move 3: prob3 = exp(-5) ≈ 0.007, i.e., we accept this move less than 1% of the time.
T = "temperature" parameter:
high T => the probability of a "locally bad" move is higher;
low T => the probability of a "locally bad" move is lower;
as T → 0, the behavior approaches hill climbing.
Typically, T is decreased as the algorithm runs longer.
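A minimal simulated-annealing sketch in Python, reusing the attacking_pairs objective from the hill-climbing example (so a "good" move decreases the number of conflicts). The geometric cooling schedule and parameter values are illustrative assumptions, not from the slides.

```python
import math
import random

def simulated_annealing(n=8, t0=1.0, cooling=0.995, t_min=1e-3):
    """Minimize attacking_pairs(state), occasionally accepting bad moves."""
    state = tuple(random.randrange(n) for _ in range(n))
    t = t0
    while t > t_min:
        if attacking_pairs(state) == 0:
            break
        # Random neighbour: move one queen to a new row in its column.
        col = random.randrange(n)
        row = random.randrange(n)
        nxt = state[:col] + (row,) + state[col + 1:]
        # d > 0 means the move improves (reduces) the number of conflicts.
        d = attacking_pairs(state) - attacking_pairs(nxt)
        if d > 0 or random.random() < math.exp(d / t):
            state = nxt
        t *= cooling           # decrease the temperature over time
    return state, attacking_pairs(state)

print(simulated_annealing())   # often (solution, 0) for 8 queens
```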

Genetic algorithms

• A GA keeps track of k states in memory rather than just one.
• A successor state is generated by combining two parent states rather than
by modifying a single state.
• Start with k randomly generated states (the population).
• A state is represented as a string over a finite alphabet (often a string of 0s
and 1s).
• An evaluation function (fitness function) assigns higher values to better
states, which then have better chances of reproduction.
• Produce the next generation of states by selection, crossover, and mutation.

GAs combine an uphill tendency with random exploration and exchange of
information among the parallel search threads.

Genetic algorithms

Example: 8-queens
• Fitness function: number of non-attacking pairs of queens (min = 0,
max = 8 × 7/2 = 28).
• Selection probabilities are proportional to fitness, e.g., for a population
with fitness values 24, 23, 20, 11:
24/(24+23+20+11) ≈ 31%, 23/(24+23+20+11) ≈ 29%, etc.
(A GA sketch for this problem follows below.)
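A compact genetic-algorithm sketch for 8-queens along the lines described above (fitness-proportional selection, one-point crossover, random mutation); the parameter values and helper names are illustrative assumptions.

```python
import random

N = 8
MAX_PAIRS = N * (N - 1) // 2          # at most 28 non-attacking pairs

def fitness(state):
    """Number of non-attacking pairs of queens (higher is better)."""
    attacking = sum(
        1
        for c1 in range(N) for c2 in range(c1 + 1, N)
        if state[c1] == state[c2] or abs(state[c1] - state[c2]) == c2 - c1
    )
    return MAX_PAIRS - attacking

def genetic_algorithm(pop_size=20, generations=1000, mutation_rate=0.1):
    population = [tuple(random.randrange(N) for _ in range(N))
                  for _ in range(pop_size)]
    for _ in range(generations):
        weights = [fitness(s) for s in population]
        if MAX_PAIRS in weights:
            return population[weights.index(MAX_PAIRS)]       # perfect individual
        next_gen = []
        for _ in range(pop_size):
            # Fitness-proportional selection of two parents.
            p1, p2 = random.choices(population, weights=weights, k=2)
            cut = random.randrange(1, N)                      # one-point crossover
            child = list(p1[:cut] + p2[cut:])
            if random.random() < mutation_rate:               # random mutation
                child[random.randrange(N)] = random.randrange(N)
            next_gen.append(tuple(child))
        population = next_gen
    return max(population, key=fitness)

print(genetic_algorithm())   # usually a state with fitness 28 (a solution)
```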

See “Search Home Work.pdf” for Homework

