CS2201 5

Local search algorithms focus on improving a single current state by exploring its neighbors, making them efficient in terms of memory and capable of finding reasonable solutions in large state spaces. They are particularly useful for optimization problems where the path to the goal is irrelevant, and include variants like simple hill climbing, steepest-ascent hill climbing, and stochastic hill climbing. However, these algorithms can get stuck in local maxima, plateaus, or ridges, which may require techniques like backtracking or random restarts to overcome.

Local Search

• The uninformed and informed search algorithms that we have seen are designed to
explore search spaces systematically.

• They keep one or more paths in memory and record which alternatives have
been explored at each point along the path.

• When a goal is found, the path to that goal also constitutes a solution to the
problem.

• In many problems, however, the path to the goal is irrelevant.

• If the path to the goal does not matter, we might consider a different class of
algorithms that do not worry about paths at all. => local search algorithms
Local Search
• Local search algorithms operate using a single current node and generally move
only to neighbors of that node.
• Local search algorithms ease up on completeness and optimality in the interest
of improving time and space complexity.
• Although local search algorithms are not systematic, they have two key
advantages:
1. They use very little memory (usually a constant amount), and
2. They can often find reasonable solutions in large or infinite (continuous) state
spaces.

• In addition to finding goals, local search algorithms are useful for solving pure
optimization problems, in which the aim is to find the best state according to an
objective function.
• In optimization problems, the path to the goal is irrelevant and the goal state itself is
the solution.
• In some optimization problems, the goal is not known and the aim is to find the
best state.
Local search algorithms
• In many optimization problems, the path to the
goal is irrelevant; the goal state itself is the
solution

• State space = set of "complete" configurations


• Find configuration satisfying constraints, e.g., n-
queens

• In such cases, we can use local search algorithms
• Keep a single "current" state, try to improve it

Example: n-queens
• Put n queens on an n × n board with no
two queens on the same row, column, or
diagonal
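
A minimal Python sketch of one common complete-state formulation (an assumption, not prescribed by the slides), where state[i] holds the row of the queen in column i; the helper names random_state and neighbors are illustrative:

import random

# state[i] = row of the queen in column i, so every column holds exactly
# one queen and only rows and diagonals can conflict.

def random_state(n=8):
    """Return a random complete configuration (one queen per column)."""
    return tuple(random.randrange(n) for _ in range(n))

def neighbors(state):
    """All states reachable by moving a single queen within its own column."""
    n = len(state)
    result = []
    for col in range(n):
        for row in range(n):
            if row != state[col]:
                result.append(state[:col] + (row,) + state[col + 1:])
    return result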

Hill-climbing search
• "Like climbing Everest in thick fog with
amnesia"

Hill-climbing search
• Problem: depending on initial state, can
get stuck in local maxima

Hill-climbing search: 8-queens problem

• h = number of pairs of queens that are attacking each other, either directly
or indirectly
• h = 17 for the above state
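
A short sketch of this heuristic, assuming the tuple representation above (state[i] = row of the queen in column i):

from itertools import combinations

def attacking_pairs(state):
    """h = number of pairs of queens attacking each other.

    With state[i] = row of the queen in column i, two queens attack
    each other if they share a row or a diagonal.
    """
    h = 0
    for (c1, r1), (c2, r2) in combinations(enumerate(state), 2):
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2):
            h += 1
    return h

# Example: all queens on the main diagonal -> every pair attacks, h = 28.
print(attacking_pairs((0, 1, 2, 3, 4, 5, 6, 7)))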

Hill Climbing Features
• Features of Hill Climbing:

• Generate and Test variant: Hill climbing is a variant of the Generate and Test
method. The Generate and Test method produces feedback which helps to decide
which direction to move in the search space.
• Greedy approach: The hill-climbing search moves in the direction which
optimizes the cost.
• No backtracking: It does not backtrack in the search space, as it does not
remember the previous states.
Hill Climbing Types
• Simple hill climbing
• Steepest-ascent hill climbing
• Stochastic hill climbing
Simple Hill Climbing
• It evaluates only one neighbor node state at a time and selects
the first one which improves the current cost, setting it as the
current state.
Algorithm for Simple Hill Climbing:

• Step 1: Evaluate the initial state; if it is the goal state, then
return success and stop.
• Step 2: Loop until a solution is found or there is no new
operator left to apply.
• Step 3: Select and apply an operator to the current
state.
• Step 4: Check the new state:
– If it is the goal state, then return success and quit.
– Else, if it is better than the current state, then make the new state
the current state.
– Else, if it is not better than the current state, then return to Step 2.
• Step 5: Exit.
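
A minimal Python sketch of these steps, assuming caller-supplied helpers neighbors(state), value(state) (higher is better), and is_goal(state); the names are illustrative:

def simple_hill_climbing(start, neighbors, value, is_goal):
    """Simple hill climbing: take the FIRST neighbor that improves on the
    current state (Steps 1-5 above), not necessarily the best neighbor."""
    current = start
    while True:
        if is_goal(current):
            return current
        improved = False
        for candidate in neighbors(current):      # Step 3: apply an operator
            if is_goal(candidate):                # Step 4: goal check
                return candidate
            if value(candidate) > value(current): # first improving neighbor
                current = candidate
                improved = True
                break
        if not improved:                          # no operator helps: stop
            return current                        # may be a local maximum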
Steepest-Ascent hill climbing:

• This algorithm examines all the neighboring nodes of the current state and
selects the neighbor node which is closest to the goal state.
• It is more time consuming than simple hill climbing.
Steepest-Ascent hill climbing:
• Step 1: Evaluate the initial state; if it is the goal state, then return
success and stop, else make the initial state the current state.
• Step 2: Loop until a solution is found or the current state
does not change.
– Let SUCC be a state such that any successor of the current state will
be better than it.
– For each operator that applies to the current state:
• Apply the operator and generate a new state.
• Evaluate the new state.
• If it is the goal state, then return it and quit; else compare it to SUCC.
• If it is better than SUCC, then set SUCC to the new state.
• If SUCC is better than the current state, then set the current state to
SUCC.
• Step 3: Exit.
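
A corresponding sketch of the steepest-ascent loop, under the same illustrative helpers (neighbors, value, is_goal):

def steepest_ascent_hill_climbing(start, neighbors, value, is_goal):
    """Steepest-ascent variant: examine ALL successors, keep the best one
    (SUCC), and move only if SUCC beats the current state."""
    current = start
    if is_goal(current):
        return current
    while True:
        succ = None                                # best successor seen so far
        for candidate in neighbors(current):       # examine every neighbor
            if is_goal(candidate):
                return candidate
            if succ is None or value(candidate) > value(succ):
                succ = candidate
        if succ is not None and value(succ) > value(current):
            current = succ                         # uphill move
        else:
            return current                         # no better neighbor: stop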
Stochastic hill climbing:

• Stochastic hill climbing does not examine all of its neighbors before moving.
Rather, this search algorithm selects one neighbor node at random and decides
whether to choose it as the current state or to examine another state.
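
A minimal sketch of this idea, assuming neighbors(state) returns a list and value(state) is the objective to maximize:

import random

def stochastic_hill_climbing(start, neighbors, value, max_steps=1000):
    """Pick ONE neighbor at random each step; move only if it is better."""
    current = start
    for _ in range(max_steps):
        options = neighbors(current)
        if not options:
            break
        candidate = random.choice(options)       # a single random neighbor
        if value(candidate) > value(current):
            current = candidate                   # accept uphill moves only
    return current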
First-Choice Hill Climbing
• First-choice hill climbing implements stochastic hill climbing by generating successors randomly
until one is generated that is better than the current state.
• This is a good strategy when a state has many successors.

• First-choice hill climbing is also NOT complete.
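
A minimal sketch, assuming an illustrative random_successor(state) that generates one random successor at a time (useful when enumerating all successors would be too expensive):

def first_choice_hill_climbing(start, random_successor, value, max_tries=100):
    """Generate random successors one at a time; take the first improvement."""
    current = start
    while True:
        for _ in range(max_tries):
            candidate = random_successor(current)   # sampled, not enumerated
            if value(candidate) > value(current):
                current = candidate                 # first better successor wins
                break
        else:
            return current    # no improvement found in max_tries samples: give up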


Random-Restart Hill Climbing
• Random-Restart Hill Climbing conducts a series of hill-climbing searches from
randomly generated initial states, until a goal is found.
• Random-Restart Hill Climbing is complete if infinite (or sufficiently many
tries) are allowed.
• If each hill-climbing search has a probability p of success, then the expected
number of restarts required is 1/p.
• For 8-queens instances with no sideways moves allowed, p ≈ 0.14, so we need
roughly 7 iterations to find a goal (6 failures and 1 success).
• For 8-queens, then, random-restart hill climbing is very effective indeed.
• Even for three million queens, the approach can find solutions in under a minute.
• The success of hill climbing depends very much on the shape of the state-space
landscape:
• If there are few local maxima and plateaux, random-restart hill climbing will find
a good solution very quickly.
• On the other hand, many real problems have many local maxima to get stuck
on.
• NP-hard problems typically have an exponential number of local maxima to get
stuck on.
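
A minimal sketch of the restart loop, assuming random_state() builds a random initial configuration and climb is one of the hill-climbing routines sketched earlier:

def random_restart_hill_climbing(random_state, climb, is_goal, max_restarts=100):
    """Repeat hill climbing from fresh random initial states until a goal
    is found; complete if enough restarts are allowed."""
    for _ in range(max_restarts):
        result = climb(random_state())   # one complete hill-climbing search
        if is_goal(result):
            return result                # e.g. attacking_pairs(result) == 0
    return None                          # restart budget exhausted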
Limitations of Hill Climbing
• 1. Local Maximum: A local maximum is a peak
state in the landscape which is better than each
of its neighboring states, but there is another
state elsewhere in the landscape which is higher
than the local maximum.
• Solution: A backtracking technique can be a
solution to the local maximum problem in the
state-space landscape. Keep a list of promising
paths so that the algorithm can backtrack in the
search space and explore other paths as well.

• 2. Plateau: A plateau is a flat area of the search space
in which all the neighbor states of the current state
have the same value; because of this, the algorithm does
not find any best direction to move. A hill-climbing search
might get lost in the plateau area.

• Solution: The solution for the plateau is to take big steps
or very small steps while searching.
Randomly select a state which is far away from the
current state, so it is possible that the algorithm will find a
non-plateau region.
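
A minimal sketch of this escape strategy, assuming an illustrative random_far_state(state) helper that jumps to a configuration far from the current one:

def hill_climb_with_random_jumps(start, neighbors, value, random_far_state,
                                 max_jumps=10):
    """When no strictly better neighbor exists (a plateau or local maximum),
    jump to a state far from the current one instead of stopping."""
    current, jumps = start, 0
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) > value(current):
            current = best                        # normal uphill move
        elif jumps < max_jumps:
            current = random_far_state(current)   # big random step, as suggested
            jumps += 1
        else:
            return current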

• 3. Ridges: A ridge is a special form of
local maximum. It is an area which is
higher than its surrounding areas, but which
itself has a slope, and it cannot be climbed in a
single move.
• Solution: By using bidirectional
search, or by moving in different
directions at once, we can work around this problem.
Solved Examples
Example 1
Example 1: Questions

a. What solution path is found by Greedy Best-first search using h2?
Break ties alphabetically.

b. What solution path is found by Uniform-Cost search?

c. Give the three solution paths found by algorithm A* using each of the three
heuristic functions, respectively. Break ties alphabetically.

Example 1: Answers

a. S, A, G

b. Sequence of nodes expanded: S, B, D, C, A, G
   Solution path: S, B, C, G

c. h0: This is the same as uniform-cost search, so the answer is the same as (b).
   That is, the solution path is: S, B, C, G
   h1: S, B, C, G expanded; solution: S, B, C, G
   h2: S, B, D, G expanded; solution: S, B, D, G
