Local Search Algorithms

Local search algorithms are metaheuristic methods for solving computationally hard optimization problems by moving from solution to solution in the search space through local changes until an optimal solution is found or a time bound elapses. They work by maintaining a current state and gradually improving it with strategies such as hill climbing or simulated annealing. A common challenge is getting stuck in local optima, which techniques like simulated annealing and random restarts help address. Local search is also well suited to online search problems, where the environment is unknown and must be explored incrementally.


Local search algorithms

In computer science, local search is a metaheuristic method for solving computationally hard optimization problems. Local search can be used on problems that can be formulated as finding a solution maximizing a criterion among a number of candidate solutions. Local search algorithms move from solution to solution in the space of candidate solutions (the search space) by applying local changes, until a solution deemed optimal is found or a time bound elapses.

Local search algorithms

In many optimization problems, the state space is the space of all possible complete solutions. We have an objective function that tells us how good a given state is, and we want to find the solution (goal) by minimizing or maximizing the value of this function.

Example: n-queens problem

Put n queens on an n × n board with no two queens on the same row, column, or diagonal.
State space: all possible n-queen configurations
What's the objective function? The number of pairwise conflicts.

Example: Traveling salesman problem

Find the shortest tour connecting a given set of cities.
State space: all possible tours
Objective function: length of the tour

Local search algorithms

In many optimization problems, the state space is the space of all possible complete solutions. We have an objective function that tells us how good a given state is, and we want to find the solution (goal) by minimizing or maximizing the value of this function. The start state may not be specified, and the path to the goal doesn't matter. In such cases, we can keep a single current state and use local search algorithms to gradually try to improve it.

Example: n-queens problem

Put n queens on an n × n board with no two queens on the same row, column, or diagonal.
State space: all possible n-queen configurations
Objective function: number of pairwise conflicts
What's a possible local improvement strategy? Move one queen within its column to reduce conflicts, as sketched below.
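A minimal Python sketch of this objective and move strategy. Encoding a state as one queen per column, with board[col] = row, is an illustrative assumption, not something the slides specify.

def conflicts(board):
    # board[c] is the row of the queen in column c; columns are distinct by encoding
    n = len(board)
    return sum(1 for c1 in range(n) for c2 in range(c1 + 1, n)
               if board[c1] == board[c2]                   # same row
               or abs(board[c1] - board[c2]) == c2 - c1)   # same diagonal

def best_single_queen_move(board):
    # try every row for every column's queen; keep the least-conflicted board
    best = list(board)
    for col in range(len(board)):
        for row in range(len(board)):
            candidate = list(board)
            candidate[col] = row
            if conflicts(candidate) < conflicts(best):
                best = candidate
    return best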

Example: Traveling salesman problem

Find the shortest tour connecting n cities.
State space: all possible tours
Objective function: length of the tour
What's a possible local improvement strategy? Start with any complete tour and perform pairwise exchanges, as sketched below.
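A sketch of one improvement pass in Python. Representing cities as (x, y) coordinates and the tour as a list of city indices is an assumption for illustration.

import math

def tour_length(tour, coords):
    # total length of the closed tour; coords[i] is the (x, y) position of city i
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def pairwise_exchange(tour, coords):
    # swap every pair of cities; return the first tour that is strictly shorter
    base = tour_length(tour, coords)
    for i in range(len(tour)):
        for j in range(i + 1, len(tour)):
            candidate = list(tour)
            candidate[i], candidate[j] = candidate[j], candidate[i]
            if tour_length(candidate, coords) < base:
                return candidate
    return tour  # no improving exchange exists: a local optimum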

Hill-climbing search

Initialize current to the starting state
Loop:
    Let next = highest-valued successor of current
    If value(next) ≤ value(current), return current
    Else let current = next

Variants: choose the first better successor, or choose randomly among better successors.
Like climbing Mount Everest in thick fog with amnesia.
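A generic Python rendering of this loop. The successors and value callables are assumed to be supplied by the caller; this is a sketch, not the slides' exact formulation.

def hill_climbing(start, successors, value):
    # steepest-ascent hill climbing: stop when no successor improves on current
    current = start
    while True:
        neighbors = successors(current)
        if not neighbors:
            return current
        best = max(neighbors, key=value)
        if value(best) <= value(current):
            return current  # local maximum (or plateau)
        current = best

For n-queens, successors would generate all single-queen moves and value would be the negated conflict count, so that maximizing value minimizes conflicts.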

The state-space landscape

[Figure: the state-space landscape]

Iterative Improvement and Hill Climbing

The main problem that hill climbing can encounter is that of local maxima. This occurs when the algorithm stops making progress towards an optimal solution, mainly due to the lack of immediate improvement in adjacent states. Local maxima can be avoided by a variety of methods: simulated annealing tackles this issue by allowing some steps to be taken which decrease the immediate optimality of the current state.

Random-Restart Hill-Climbing

Another way of solving the local maxima problem involves repeated explorations of the problem space. Random-restart hill-climbing conducts a series of hill-climbing searches from randomly generated initial states, running each until it halts or makes no discernible progress, and keeps the best result found, as sketched below.
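A short sketch wrapping the hill climber from above. The random_state generator and the restart count are assumptions.

def random_restart_hill_climbing(random_state, successors, value, restarts=25):
    # run independent hill-climbing searches from random starts; keep the best
    best = hill_climbing(random_state(), successors, value)
    for _ in range(restarts - 1):
        candidate = hill_climbing(random_state(), successors, value)
        if value(candidate) > value(best):
            best = candidate
    return best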

Simulated annealing search

Idea: escape local maxima by allowing some "bad" moves, but gradually decrease their frequency. The probability of taking a downhill move decreases with the number of iterations and with the steepness of the downhill move, controlled by an annealing schedule. Inspired by the annealing (slow, controlled cooling) of glass and metal.

Simulated annealing search

Initialize current to the starting state
For i = 1 to ∞:
    If T(i) = 0, return current
    Let next = random successor of current
    Let ΔE = value(next) − value(current)
    If ΔE > 0, then let current = next
    Else let current = next with probability exp(ΔE/T(i))
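A Python transcription of this loop. The geometric cooling schedule standing in for T(i) is an assumption; the slides leave the schedule abstract.

import math
import random

def simulated_annealing(start, successors, value, t0=1.0, cooling=0.995, t_min=1e-4):
    # downhill moves (delta < 0) are accepted with probability exp(delta / T)
    current, t = start, t0
    while t > t_min:  # T(i) reaching (near) zero ends the search
        nxt = random.choice(successors(current))
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        t *= cooling  # annealing schedule: geometric cooling
    return current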

Effect of temperature

[Figure: acceptance probability exp(ΔE/T) as a function of temperature]

Simulated annealing search

One can prove: if the temperature decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching one. However, this usually takes impractically long, and the more downhill steps you need to escape a local optimum, the less likely you are to make all of them in a row. More modern techniques: the general family of Markov chain Monte Carlo (MCMC) algorithms for exploring complicated state spaces.

Local beam search

Start with k randomly generated states. At each iteration, all the successors of all k states are generated. If any one is a goal state, stop; else select the k best successors from the complete list and repeat. Is this the same as running k greedy searches in parallel? No: the k states share one pool of successors, so information passes between the parallel searches. A sketch follows below.
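One way to implement this in Python. The is_goal, successors, value, and random_state callables and the iteration cap are assumed caller-supplied; the pooled selection is what distinguishes it from k independent greedy searches.

def local_beam_search(random_state, successors, value, is_goal, k=10, max_iters=1000):
    # keep the k best states from the pooled successors of all current states
    states = [random_state() for _ in range(k)]
    for _ in range(max_iters):
        pool = [s for state in states for s in successors(state)]
        if not pool:
            break
        for s in pool:
            if is_goal(s):
                return s
        states = sorted(pool, key=value, reverse=True)[:k]
    return max(states, key=value)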

[Figures: greedy search vs. beam search]

Genetic algorithms

A variant of local beam search with sexual recombination.


Genetic algorithm

function GENETIC_ALGORITHM(population, FITNESS_FN) returns an individual
    inputs: population, a set of individuals
            FITNESS_FN, a function which determines the quality of an individual
    repeat
        new_population ← empty set
        loop for i from 1 to SIZE(population) do
            x ← RANDOM_SELECTION(population, FITNESS_FN)
            y ← RANDOM_SELECTION(population, FITNESS_FN)
            child ← REPRODUCE(x, y)
            if (small random probability) then child ← MUTATE(child)
            add child to new_population
        population ← new_population
    until some individual is fit enough or enough time has elapsed
    return the best individual
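A compact Python rendering of this pseudocode for individuals encoded as lists of bits. The fitness-proportional selection, single-point crossover, and mutation rate are illustrative assumptions; fitness values are assumed positive.

import random

def reproduce(x, y):
    # single-point crossover of two equal-length bit lists
    c = random.randrange(1, len(x))
    return x[:c] + y[c:]

def genetic_algorithm(population, fitness_fn, generations=200, mutation_rate=0.05):
    for _ in range(generations):
        weights = [fitness_fn(ind) for ind in population]  # assumes positive fitness
        new_population = []
        for _ in range(len(population)):
            x, y = random.choices(population, weights=weights, k=2)
            child = reproduce(x, y)
            if random.random() < mutation_rate:  # small random probability
                i = random.randrange(len(child))
                child[i] = 1 - child[i]          # flip one bit
            new_population.append(child)
        population = new_population
    return max(population, key=fitness_fn)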


Exploration problems

Until now, all algorithms were offline. Offline: the solution is determined before executing it. Online: computation and action are interleaved. Online search is necessary for dynamic and semi-dynamic environments, where it is impossible to take into account all possible contingencies. It is used for exploration problems with unknown states and actions, e.g. any robot in a new environment, or a newborn baby.


Online search problems

Agent knowledge:
ACTIONS(s): list of allowed actions in state s
c(s, a, s′): step-cost function (only usable after s′ is determined)
GOAL-TEST(s)

An agent can recognize previously visited states. Actions are deterministic. The agent has access to an admissible heuristic h(s), e.g. the Manhattan distance.

Online search problems

Objective: reach the goal with minimal cost. Cost = total cost of the travelled path. Competitive ratio = comparison of this cost with the cost of the solution path if the search space were known. It can be infinite if the agent accidentally reaches a dead end.

The adversary argument

Assume an adversary who can construct the state space while the agent explores it. The agent has visited states S and A: what next? Either choice fails in one of the possible state spaces. No algorithm can avoid dead ends in all state spaces.

Online search agents

The agent maintains a map of the environment, updated based on percept input. This map is used to decide the next action. Note the difference with e.g. A*: an online version can only expand the node it is physically in (local order).

Online DF-search

function ONLINE_DFS-AGENT(s′) returns an action
    input: s′, a percept identifying the current state
    static: result, a table indexed by action and state, initially empty
            unexplored, a table that lists, for each visited state, the actions not yet tried
            unbacktracked, a table that lists, for each visited state, the backtracks not yet tried
            s, a, the previous state and action, initially null
    if GOAL-TEST(s′) then return stop
    if s′ is a new state then unexplored[s′] ← ACTIONS(s′)
    if s is not null then do
        result[a, s] ← s′
        add s to the front of unbacktracked[s′]
    if unexplored[s′] is empty then
        if unbacktracked[s′] is empty then return stop
        else a ← an action b such that result[b, s′] = POP(unbacktracked[s′])
    else a ← POP(unexplored[s′])
    s ← s′
    return a
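A stateful Python sketch of this agent. The class wrapper stands in for the static tables, and returning None for stop is an assumption of this sketch.

class OnlineDFSAgent:
    # online depth-first exploration; assumes actions are reversible
    def __init__(self, actions_fn, goal_test):
        self.actions_fn = actions_fn   # ACTIONS(s): allowed actions in state s
        self.goal_test = goal_test
        self.result = {}               # (a, s) -> s'
        self.unexplored = {}           # s -> actions not yet tried
        self.unbacktracked = {}        # s -> states to backtrack to, most recent first
        self.s = self.a = None         # previous state and action

    def __call__(self, s1):            # s1 plays the role of the percept s'
        if self.goal_test(s1):
            return None                # stop
        if s1 not in self.unexplored:  # s' is a new state
            self.unexplored[s1] = list(self.actions_fn(s1))
        if self.s is not None:
            self.result[(self.a, self.s)] = s1
            self.unbacktracked.setdefault(s1, []).insert(0, self.s)
        if not self.unexplored[s1]:
            if not self.unbacktracked.get(s1):
                return None            # stop: nowhere left to backtrack
            target = self.unbacktracked[s1].pop(0)
            self.a = next(b for b in self.actions_fn(s1)
                          if self.result.get((b, s1)) == target)
        else:
            self.a = self.unexplored[s1].pop()
        self.s = s1
        return self.a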

Online DF-search, example

Assume a maze problem on a 3×3 grid. s′ = (1,1) is the initial state. result, unexplored (UX), and unbacktracked (UB) are empty; s and a are also null.

Online DF-search, example

s′ = (1,1)
GOAL-TEST((1,1))? s′ ≠ G, thus false
(1,1) a new state? True; ACTIONS((1,1)) → UX[(1,1)] = {RIGHT, UP}
s is null? True (initially)
UX[(1,1)] empty? False
a ← POP(UX[(1,1)]) = UP
s ← (1,1)
Return a

Online DF-search, example

s′ = (2,1)
GOAL-TEST((2,1))? s′ ≠ G, thus false
(2,1) a new state? True; ACTIONS((2,1)) → UX[(2,1)] = {DOWN}
s is null? False (s = (1,1)); result[UP, (1,1)] ← (2,1); UB[(2,1)] = {(1,1)}
UX[(2,1)] empty? False
a ← POP(UX[(2,1)]) = DOWN; s ← (2,1); return a

Online DF-search, example

s′ = (1,1)
GOAL-TEST((1,1))? s′ ≠ G, thus false
(1,1) a new state? False
s is null? False (s = (2,1)); result[DOWN, (2,1)] ← (1,1); UB[(1,1)] = {(2,1)}
UX[(1,1)] empty? False
a ← POP(UX[(1,1)]) = RIGHT; s ← (1,1); return a

Online DF-search, example

s′ = (1,2)
GOAL-TEST((1,2))? s′ ≠ G, thus false
(1,2) a new state? True; UX[(1,2)] = {RIGHT, UP, LEFT}
s is null? False (s = (1,1)); result[RIGHT, (1,1)] ← (1,2); UB[(1,2)] = {(1,1)}
UX[(1,2)] empty? False
a ← POP(UX[(1,2)]) = LEFT; s ← (1,2); return a

Online DF-search, example

s′ = (1,1)
GOAL-TEST((1,1))? s′ ≠ G, thus false
(1,1) a new state? False
s is null? False (s = (1,2)); result[LEFT, (1,2)] ← (1,1); UB[(1,1)] = {(1,2), (2,1)}
UX[(1,1)] empty? True (both RIGHT and UP have been tried)
UB[(1,1)] empty? False
a ← the action b such that result[b, (1,1)] = POP(UB[(1,1)]) = (1,2), i.e. b = RIGHT
a = RIGHT; s ← (1,1); return a

Online DF-search

Worst case: each node is visited twice. An agent can go on a long walk even when it is close to the solution. An online iterative deepening approach solves this problem. Online DF-search works only when actions are reversible.

Online local search

Hill-climbing is already online: only one state is stored. It performs badly due to local maxima, and random restarts are impossible. Solution: a random walk introduces exploration (but can produce exponentially many steps).

Online local search

Solution 2: add memory to the hill climber. Store the current best estimate H(s) of the cost to reach the goal. H(s) is initially the heuristic estimate h(s) and is afterward updated with experience (see below).

Learning real-time A* (LRTA*)
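The slides break off here, but the update they refer to is the standard LRTA* rule: after expanding s, H(s) is revised to the cheapest one-step lookahead, step cost plus estimate. A hedged sketch, assuming successors(s) yields (action, next_state) pairs and cost_fn is the step-cost function c(s, a, s′):

def lrta_star_step(H, h, s, successors, cost_fn):
    # one LRTA* step: revise H(s) by one-step lookahead, then move greedily
    def lookahead(a, s2):
        return cost_fn(s, a, s2) + H.get(s2, h(s2))  # H defaults to h for new states
    H[s] = min(lookahead(a, s2) for a, s2 in successors(s))
    best_action, _ = min(successors(s), key=lambda pair: lookahead(*pair))
    return best_action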
