
Artificial Intelligence

Topic – 5: Informed Search and Exploration
Tahmina Islam
Lecturer

Department of CSE
University of Information Technology and Sciences
Topic Contents
• Informed (Heuristic) Search Strategies
• Heuristic Functions
• Local Search Algorithms and Optimization Problems
  ☞ Hill-climbing search
  ☞ Simulated annealing search
  ☞ Local beam search
  ☞ Genetic algorithms
Best-First Search

• A node is selected for expansion based on an evaluation function f(n)
• The evaluation function estimates distance to the goal
• Choose the node that appears best
• Implementation:
  – fringe is a priority queue sorted in ascending order of f-values

3
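The priority-queue implementation described above can be sketched in Python. This is a minimal illustration, not from the slides; the function and parameter names are assumptions, and the standard-library `heapq` module serves as the priority queue:

```python
import heapq

def best_first_search(start, successors, f, is_goal):
    """Expand the fringe node with the lowest f-value first.
    The fringe is a priority queue ordered by f, as described above."""
    fringe = [(f(start), start, [start])]        # (f-value, state, path)
    visited = set()
    while fringe:
        _, state, path = heapq.heappop(fringe)   # node that appears best
        if is_goal(state):
            return path
        if state in visited:                     # guard against repeated states
            continue
        visited.add(state)
        for nxt in successors(state):
            if nxt not in visited:
                heapq.heappush(fringe, (f(nxt), nxt, path + [nxt]))
    return None
```

With f(n) = h(n) this behaves as greedy best-first search; with f(n) = g(n) + h(n) it becomes A*, both discussed in the following slides.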
A Heuristic Function h(n)

• Dictionary definition: “A rule of thumb, simplification, or educated guess that reduces or limits the search for solutions in domains that are difficult and poorly understood”
• For best-first search:
  – h(n) = estimated cost of the cheapest path from node n to a goal node

4
Romania with Step Costs in Km
• hSLD = straight-line
distance heuristic
• hSLD cannot be
computed from the
problem description
itself
• In greedy best-first
search f(n)=h(n)
– Expand node that is
closest to goal

5
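Greedy best-first search with f(n) = hSLD can be illustrated on a fragment of the Romania map. This is a simplified sketch, not from the slides: it follows the single best unvisited neighbor rather than maintaining a full priority queue, and the distances are the standard straight-line values to Bucharest:

```python
# Straight-line distances to Bucharest (hSLD), in km
H_SLD = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
         "Fagaras": 176, "Oradea": 380, "Rimnicu Vilcea": 193,
         "Pitesti": 100, "Bucharest": 0}

# A fragment of the Romania road map (adjacency only)
ROADS = {"Arad": ["Sibiu", "Timisoara", "Zerind"],
         "Sibiu": ["Arad", "Fagaras", "Oradea", "Rimnicu Vilcea"],
         "Fagaras": ["Sibiu", "Bucharest"],
         "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
         "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
         "Timisoara": ["Arad"], "Zerind": ["Arad"],
         "Oradea": ["Sibiu", "Zerind"],
         "Bucharest": ["Fagaras", "Pitesti"]}

def greedy_best_first(start, goal):
    """Repeatedly move to the unvisited neighbor that appears closest
    to the goal according to the heuristic alone: f(n) = h(n)."""
    path, current, visited = [start], start, {start}
    while current != goal:
        candidates = [c for c in ROADS[current] if c not in visited]
        if not candidates:
            return None                 # dead end: greedy search is incomplete
        current = min(candidates, key=H_SLD.get)
        visited.add(current)
        path.append(current)
    return path
```

From Arad this reaches Bucharest via Sibiu and Fagaras (450 km of road), whereas the optimal route through Rimnicu Vilcea and Pitesti costs 418 km: the non-optimality noted on the following slides.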
Greedy Search: Example
[Figures: successive greedy best-first expansions from Arad on the Romania map, always expanding the node with the smallest hSLD]
Greedy Search: Example

• Goal reached
– For this example no node is expanded that is not
on the solution path
– But not optimal (see Arad, Sibiu, Rimnicu Vilcea,
Pitesti)
9
Greedy Search: Evaluation
• Complete or optimal: no
– Minimizing h(n) can result in false starts, e.g. Iasi
to Fagaras
– Check on repeated states

10
Greedy Search: Evaluation

• Time and space complexity:


– In the worst case all the nodes in the search tree are generated: O(b^m)
  (m is the maximum depth of the search tree and b is the branching factor)
– But: choice of a good heuristic can give
dramatic improvement

11
A* Search
• Best-known form of best-first search
• Idea: avoid expanding paths that are already
expensive
• Evaluation function f(n)= g(n) + h(n)
– g(n): the cost (so far) to reach the node
– h(n): estimated cost to get from the node to the goal
– f(n): estimated total cost of path through n to goal
• A* search is both complete and optimal if h(n)
satisfies certain conditions

12
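The evaluation function f(n) = g(n) + h(n) can be sketched as a minimal A* implementation. The names are illustrative, not from the slides; the road fragment and hSLD values follow the standard Romania example:

```python
import heapq

def a_star(start, goal, edges, h):
    """A* search: expand the fringe node minimizing f(n) = g(n) + h(n)."""
    fringe = [(h(start), 0, start, [start])]       # (f, g, state, path)
    best_g = {start: 0}
    while fringe:
        f, g, state, path = heapq.heappop(fringe)
        if state == goal:
            return path, g
        for nxt, cost in edges[state]:
            g2 = g + cost                          # cost so far to reach nxt
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(fringe, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Romania fragment: step costs in km, hSLD to Bucharest
EDGES = {"Arad": [("Sibiu", 140)],
         "Sibiu": [("Arad", 140), ("Fagaras", 99), ("Rimnicu Vilcea", 80)],
         "Fagaras": [("Sibiu", 99), ("Bucharest", 211)],
         "Rimnicu Vilcea": [("Sibiu", 80), ("Pitesti", 97)],
         "Pitesti": [("Rimnicu Vilcea", 97), ("Bucharest", 101)],
         "Bucharest": []}
H = {"Arad": 366, "Sibiu": 253, "Fagaras": 176,
     "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

path, cost = a_star("Arad", "Bucharest", EDGES, H.get)
```

Unlike greedy search, A* finds the optimal route through Rimnicu Vilcea and Pitesti (418 km) rather than the shorter-looking Fagaras route (450 km).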
A* Search
• A* search is optimal if h(n) is an admissible
heuristic
• A heuristic is admissible if it never overestimates
the cost to reach the goal
– h(n) ≤ h*(n) where h*(n) is the true cost from n
• Admissible heuristics are optimistic about the
cost of solving the problem
• e.g. hSLD(n) never overestimates the actual road
distance

13
Romania Example
[Figure: Romania road map with step costs in km and straight-line distances to Bucharest]

A* Search: Example
[Figures: successive A* expansions from Arad on the Romania map, always expanding the node with the smallest f(n) = g(n) + h(n)]
Optimality of A*
• Suppose a suboptimal goal node G2 appears on
the fringe and let the cost of the optimal solution be
C*.
• Since G2 is suboptimal and
h(G2) = 0 (true for any goal node),
we know that f(G2) = g(G2) + h(G2) = g(G2) > C*
• Now consider a fringe node n that is on an optimal
solution path.
• If h(n) does not overestimate the cost of completing
the solution path,
then we know that f(n) = g(n) + h(n) ≤ C*
• Since f(n) ≤ C* < f(G2), G2 will not be expanded
and A* search must return an optimal solution.
21
A* Search: Evaluation

• Complete: yes
– Unless there are infinitely many nodes with f(n) ≤ f(G)
• Optimal: yes
– A* is also optimally efficient for any given
h(n). That is, no other optimal algorithm is
guaranteed to expand fewer nodes than A*.

22
A* Search: Evaluation

• Time complexity:
– number of nodes expanded is still exponential
in length of solution
• Space complexity:
– All generated nodes are kept in memory
– A* usually runs out of space before running
out of time

23
Local Search Algorithms and
Optimization Problems
• Local search algorithms care only about the final (goal) state, not the path taken to reach it.
  – e.g. 8-queens
• Local search algorithms are not systematic: they might never explore a portion of the search space where a solution actually resides.
• Memory requirements can be dramatically reduced, since only the current state needs to be kept and modified.

24
Local Search Algorithms and
Optimization Problems
• Local search = keep a single current state and move to neighboring states.
• Advantages:
  – Use very little memory
  – Often find reasonable solutions in large or infinite state spaces
• Also useful for pure optimization problems.

25
Local Search Algorithms and Optimization Problems
[Figure: a one-dimensional state-space landscape showing the current state, local maxima, the global maximum, a flat local maximum, and a shoulder]

26
Hill-Climbing Search
• The hill-climbing algorithm is a heuristic search algorithm that continually moves in the direction of increasing value to find the peak of the mountain, i.e. the best solution to the problem.
• It keeps track of one current state and on each iteration moves to the neighboring state with the highest value; that is, it heads in the direction that provides the steepest ascent.
• No backtracking.
• Greedy approach.

27
Hill-Climbing Search
• The hill-climbing algorithm terminates when it reaches a peak, i.e. a state where no neighbour has a higher value.
• Hill-climbing is also called greedy local search
• Hill climbing is mostly used when a good heuristic is
available
• Hill-climbing is a very simple strategy with low
space requirements

28
Hill-Climbing Search
• Different regions in the state-space landscape:
• Local maximum: a state that is better than its neighbouring states, but lower than some other state elsewhere in the landscape.
• Global maximum: the best possible state in the state-space landscape; it has the highest value of the objective function.
• Current state: the state in the landscape diagram where the agent is currently present.
• Flat local maximum: a flat region of the landscape in which all neighbours of the current state have the same value.
• Shoulder: a plateau region that has an uphill edge.
30
Hill-Climbing Search
Algorithm
• Evaluate the initial state
• Loop until a solution is found or there are no operators left:
  – Select and apply a new operator
  – Evaluate the new state:
    • if it is the goal state, then quit
    • if it is better than the current state, then it becomes the current state
Hill-Climbing Search Example
[Figure: an 8-puzzle instance solved by hill climbing. Starting from a state with h=5, each move selects a better neighbor, passing through states with h=4, h=3, h=2, and h=1 before reaching the goal state.]
Hill-Climbing Search Example
[Figure: an 8-puzzle start state with h=5 and the four successors produced by moving the blank up, down, left, and right, with h=5, h=6, h=4, and h=5 respectively; hill climbing moves to the successor with h=4.]
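The slides do not state which heuristic produced the h values in these examples; a common choice for the 8-puzzle is the number of misplaced tiles. A minimal sketch, assuming a state is a 9-tuple in row-major order with 0 for the blank (the names are illustrative):

```python
def misplaced_tiles(state, goal):
    """h(n) = number of tiles (not counting the blank, 0) out of place."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def successors(state):
    """States reachable by sliding the blank up, down, left, or right."""
    i = state.index(0)
    row, col = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            s = list(state)
            j = r * 3 + c
            s[i], s[j] = s[j], s[i]     # swap blank with adjacent tile
            yield tuple(s)
```

A corner blank yields two successors, a center blank four, matching the four moves (up, down, left, right) shown above.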


Drawbacks
• Ridge = a sequence of local maxima that is difficult for greedy algorithms to navigate
• Plateau = an area of the state space where the evaluation function is flat
• Gets stuck 86% of the time (e.g. on random 8-queens instances)

34
Hill-Climbing Search
• Problems/Limitations
– local maxima
• algorithm can’t go higher, but is not at a satisfactory solution
– plateau
• area where the evaluation function is flat
– ridges
• search may oscillate from side to side, making little progress

35
Solutions to Hill-Climbing Weaknesses
• Random initialization of the starting state (random restarts)
• Jumps (random moves to escape local maxima and plateaus)
Hill-Climbing Search
function HILL-CLIMBING(problem) returns a state that is a local maximum
  inputs: problem, a problem
  local variables: current, a node
                   neighbor, a node

  current ← MAKE-NODE(INITIAL-STATE[problem])
  loop do
    neighbor ← a highest-valued successor of current
    if VALUE[neighbor] ≤ VALUE[current] then return STATE[current]
    current ← neighbor

37
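The pseudocode above translates almost line for line into Python. This is a sketch: `successors` and `value` stand in for the problem's successor function and objective function.

```python
def hill_climbing(initial, successors, value):
    """Steepest-ascent hill climbing: return as soon as no successor
    improves on the current state (a local maximum or plateau)."""
    current = initial
    while True:
        neighbors = list(successors(current))
        if not neighbors:
            return current
        neighbor = max(neighbors, key=value)     # highest-valued successor
        if value(neighbor) <= value(current):
            return current                       # local maximum reached
        current = neighbor
```

For example, maximizing value(x) = -(x - 3)² over the integers with successors x ± 1 climbs from 0 to the peak at x = 3.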
Hill-Climbing Variations

• Stochastic hill-climbing
– Random selection among the uphill moves.
– The selection probability can vary with the
steepness of the uphill move.
• First-choice hill-climbing
– implements stochastic hill climbing by
generating successors randomly until a better
one is found.
• Random-restart hill-climbing
– Tries to avoid getting stuck in local maxima.

38
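Random-restart hill climbing can be sketched as a thin wrapper around the basic climb. This is an illustrative sketch, not from the slides; `starts` would typically be a stream of randomly generated initial states:

```python
def random_restart_hill_climbing(starts, successors, value):
    """Hill-climb from each start state and keep the best local maximum
    found; with random restarts, the chance of finding the global
    maximum approaches 1 as the number of restarts grows."""
    def climb(current):
        while True:
            neighbor = max(successors(current), key=value)
            if value(neighbor) <= value(current):
                return current                  # a local maximum
            current = neighbor
    return max((climb(s) for s in starts), key=value)
```

In practice `starts` would be something like `(random_state() for _ in range(k))`, trading extra climbs for a better chance of escaping local maxima.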
Genetic Algorithms

• Variant of local beam search with sexual recombination.

39
Genetic Algorithms
• GAs begin with a set of k randomly generated states, called
the population.
• Each state, or individual, is represented as a string over a
finite alphabet—most commonly, a string of 0s and 1s.
• For example, an 8-queens state must specify the positions of
8 queens, each in a column of 8 squares, and so requires
8 x log2 8 = 24 bits.
• Alternatively, the state could be represented as 8 digits, each
in the range from 1 to 8.
• Figure 4.6(a) shows a population of four 8-digit strings
representing 8-queens states.


40
Genetic Algorithms

• Figure 4.6: The three major operations in genetic algorithm.


The initial population in (a) is ranked by the fitness function
in (b), resulting in pairs for mating in (c). They produce
offspring in (d), which are subject to mutation in (e).

41
Genetic Algorithms
• The production of the next generation of states is shown in
Figure 4.6(b)–(e).
• In (b), each state is rated by the objective function, or (in GA
terminology) the fitness function.
• A fitness function should return higher values for better
states, so, for the 8-queens problem we use the number of
nonattacking pairs of queens, which has a value of 28 for a
solution.
• The values of the four states are 24, 23, 20, and 11.
• In this particular variant of the genetic algorithm, the
probability of being chosen for reproducing is directly
proportional to the fitness score, and the percentages are
shown next to the raw scores.

43
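The fitness computation described above can be sketched as follows, with a state encoded as on the slides: one digit per column giving that column's queen row, 1 to 8.

```python
def fitness(state):
    """Number of nonattacking pairs of queens; a full solution scores
    8 * 7 / 2 = 28 pairs. state[i] is the row of the queen in column i."""
    n = len(state)
    attacking = sum(1
                    for i in range(n) for j in range(i + 1, n)
                    if state[i] == state[j]                  # same row
                    or abs(state[i] - state[j]) == j - i)    # same diagonal
    return n * (n - 1) // 2 - attacking
```

For the first string in Figure 4.6, 24748552, this returns 24, matching the values quoted above.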
Genetic Algorithms
• In (c), two pairs are selected at random for reproduction, in
accordance with the probabilities in (b).
• Notice that one individual is selected twice and one not at all.
• For each pair to be mated, a crossover point is chosen
randomly from the positions in the string.
• In Figure 4.6, the crossover points are after the third digit in
the first pair and after the fifth digit in the second pair.
• In (d), the offspring themselves are created by crossing over
the parent strings at the crossover point.
• For example, the first child of the first pair gets the first three
digits from the first parent and the remaining digits from the
second parent, whereas the second child gets the first three
digits from the second parent and the rest from the first
parent.
44
Genetic Algorithms
• The 8-queens states involved in this reproduction step are
shown in Figure 4.7.

• Figure 4.7: The 8-queens states corresponding to the first


two parents in Figure 4.6(c) and the first offspring in
Figure 4.6(d).

45
Genetic Algorithms
• The example shows that when two parent states are
quite different, the crossover operation can produce
a state that is a long way from either parent state.
• It is often the case that the population is quite
diverse early on in the process, so crossover (like
simulated annealing) frequently takes large steps in
the state space early in the search process and
smaller steps later on, when most individuals are
quite similar.

46
Genetic Algorithms
• Finally, in (e), each location is subject to random
mutation with a small independent probability.
• One digit was mutated in the first, third, and fourth
offspring.
• In the 8-queens problem, this corresponds to choosing a
queen at random and moving it to a random square in its
column.

47
Genetic Algorithms
• Figure: A genetic algorithm. The algorithm is the same as the
one diagrammed in the figure in slide no. 45, with one variation:
in this more popular version, each mating of two parents
produces only one offspring, not two.

48
Genetic Algorithms

function GENETIC-ALGORITHM(population, FITNESS-FN) returns an individual
  inputs: population, a set of individuals
          FITNESS-FN, a function that measures the quality of an individual

  repeat
    new_population ← empty set
    loop for i from 1 to SIZE(population) do
      x ← RANDOM-SELECTION(population, FITNESS-FN)
      y ← RANDOM-SELECTION(population, FITNESS-FN)
      child ← REPRODUCE(x, y)
      if (small random probability) then child ← MUTATE(child)
      add child to new_population
    population ← new_population
  until some individual is fit enough, or enough time has elapsed
  return the best individual in population, according to FITNESS-FN

49
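The pseudocode above can be sketched in Python. This is an illustrative implementation, not the textbook's code: fitness-proportional selection via `random.choices`, one-point crossover producing a single child (the "one offspring" variant noted on the previous slide), and per-child mutation with small probability.

```python
import random

def genetic_algorithm(population, fitness_fn, gene_pool,
                      mutation_rate=0.05, generations=200, target=None):
    """GA sketch: individuals are lists of genes drawn from gene_pool."""
    def reproduce(x, y):
        c = random.randrange(1, len(x))          # random crossover point
        return x[:c] + y[c:]

    def mutate(child):
        i = random.randrange(len(child))         # random location
        return child[:i] + [random.choice(gene_pool)] + child[i + 1:]

    for _ in range(generations):
        weights = [fitness_fn(ind) for ind in population]   # fitness-proportional
        new_population = []
        for _ in range(len(population)):
            x, y = random.choices(population, weights=weights, k=2)
            child = reproduce(x, y)
            if random.random() < mutation_rate:
                child = mutate(child)
            new_population.append(child)
        population = new_population
        if target is not None and max(map(fitness_fn, population)) >= target:
            break
    return max(population, key=fitness_fn)
```

For example, maximizing the number of 1s in an 8-bit string (`fitness_fn=sum`, `gene_pool=[0, 1]`) typically converges toward the all-ones individual within a few hundred generations.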
