04 Local Search 2017 Ihler
Random-restart wrapper
• Run local search from a new random initial state on each iteration, keep the best result found so far, and stop when “you are tired of doing it”; then return best_found.
Typically, “you are tired of doing it” means that some resource limit is exceeded, e.g., number of iterations, wall-clock time, CPU time, etc. It may also mean that result improvements are small and infrequent, e.g., less than 0.1% result improvement in the last week of run time.
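A minimal Python sketch of the random-restart wrapper described above. The names local_search, random_state, and value are hypothetical placeholders for the underlying solver, the random-state generator, and the objective function; a fixed iteration limit stands in for “you are tired of doing it”.

    def random_restart(local_search, random_state, value, max_iters=100):
        """Run local_search from many random initial states; keep the best result."""
        best_found = None
        for _ in range(max_iters):          # resource limit = fixed number of restarts
            result = local_search(random_state())
            if best_found is None or value(result) > value(best_found):
                best_found = result
        return best_found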
Tabu search wrapper
• Add recently visited states to a tabu list (see the sketch below)
– These states are temporarily excluded from being visited again
– Forces the solver away from recently explored regions
– Helps avoid getting stuck in local optima (in principle)
(Figure: the tabu list stored in a hash table; each candidate state is looked up (“Present?”) before being visited.)
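A sketch of the tabu-search wrapper above, using a Python set as the hash table and a fixed-size recency window as the eviction policy; neighbors and value are assumed helper functions, and states are assumed to be hashable.

    from collections import deque

    def tabu_search(start, neighbors, value, tabu_size=100, max_iters=1000):
        """Hill climbing that refuses to revisit recently seen states."""
        current, best = start, start
        tabu = deque(maxlen=tabu_size)        # recency-bounded tabu list
        tabu_set = {start}                    # hash table for O(1) “Present?” checks
        tabu.append(start)
        for _ in range(max_iters):
            candidates = [s for s in neighbors(current) if s not in tabu_set]
            if not candidates:                # every neighbor is tabu
                break
            current = max(candidates, key=value)   # best non-tabu neighbor (may be worse)
            if len(tabu) == tabu.maxlen:           # about to evict the oldest entry
                tabu_set.discard(tabu[0])
            tabu.append(current)
            tabu_set.add(current)
            if value(current) > value(best):
                best = current
        return best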
Hill-climbing difficulties
Note: these difficulties apply to all local search algorithms, and usually become
much worse as the search space becomes higher dimensional
Gradient ascent / descent
• Gradient vector ∇f(x) = ( ∂f/∂x1, …, ∂f/∂xn )
– Indicates the direction of steepest ascent (its negative is the direction of steepest descent)
• Basic procedure (shown for ascent; flip the sign of the step for descent):
1. Compute the gradient ∇f(x).
2. Take a step x′ = x + α ∇f(x).
3. Check if f(x′) > f(x) (or use the Armijo rule, etc.)
4. If true then accept the move; if not, “reject” it (decrease the step size α, etc.)
5. Repeat.
Gradient descent
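A sketch of the accept/reject gradient procedure above, written for descent on a made-up quadratic objective; the analytic gradient plays the role of step 1, and halving the step size on rejection is a simple stand-in for a full Armijo line search.

    def f(x, y):                  # example objective (an assumption, not from the slides)
        return (x - 1.0) ** 2 + 3.0 * (y + 2.0) ** 2

    def grad_f(x, y):             # analytic gradient, derived by multivariate calculus
        return 2.0 * (x - 1.0), 6.0 * (y + 2.0)

    def gradient_descent(x, y, alpha=0.25, iters=100):
        for _ in range(iters):
            gx, gy = grad_f(x, y)
            nx, ny = x - alpha * gx, y - alpha * gy   # step in the steepest-descent direction
            if f(nx, ny) < f(x, y):                   # step 3: did the move improve f?
                x, y = nx, ny                         # step 4: accept the move
            else:
                alpha *= 0.5                          # reject: shrink the step size
        return x, y

    print(gradient_descent(5.0, 5.0))   # approaches the minimum at (1, -2)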
Hill-climbing in continuous spaces
• How do I determine the gradient?
– Derive formula using multivariate calculus.
– Ask a mathematician or a domain expert.
– Do a literature search.
Simulated annealing
• Accept a worsening move with probability e^(ΔE/T), where ΔE < 0 is the change in value and T is the current temperature.
• High temperature T: moves are accepted almost at random; low temperature T: behaves almost like pure hill climbing.
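A sketch of the acceptance rule above for a maximization problem; neighbors and value are assumed helpers, and the geometric cooling schedule is one common choice rather than anything specified on the slide.

    import math, random

    def simulated_annealing(start, neighbors, value, T=1.0, cooling=0.995, T_min=1e-3):
        current = start
        while T > T_min:
            nxt = random.choice(neighbors(current))   # assumes a non-empty neighbor list
            dE = value(nxt) - value(current)          # positive = improvement
            if dE > 0 or random.random() < math.exp(dE / T):
                current = nxt                         # always accept improvements;
                                                      # accept worse moves w.p. e^(dE/T)
            T *= cooling                              # lower T: closer to pure hill climbing
        return current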
Local beam search
• Keep k states; at each step, generate all successors of all k states.
• If any one is a goal state, stop; else select the k best successors from the complete list and repeat.
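A sketch of local beam search as described above; random_state, successors, value, and is_goal are assumed, problem-specific helpers.

    import heapq

    def local_beam_search(random_state, successors, value, is_goal, k=4, max_iters=1000):
        states = [random_state() for _ in range(k)]        # start from k random states
        for _ in range(max_iters):
            if any(is_goal(s) for s in states):
                return next(s for s in states if is_goal(s))
            pool = [c for s in states for c in successors(s)]   # all successors of all k states
            if not pool:
                break
            states = heapq.nlargest(k, pool, key=value)    # keep the k best from the complete list
        return max(states, key=value)                      # best state found if no goal reached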
Genetic algorithms: fitness-proportional selection (8-queens example)
• Fitness function: number of non-attacking queen pairs
– min = 0, max = 8 × 7/2 = 28
• To convert a fitness value into a probability of being in the next generation:
P(individual i) = fitness_i / Σ_i fitness_i
• Σ_i fitness_i = 24 + 23 + 20 + 11 = 78
• P(pick child_1 for next gen.) = fitness_1 / Σ_i fitness_i = 24/78 ≈ 31%
• P(pick child_2 for next gen.) = fitness_2 / Σ_i fitness_i = 23/78 ≈ 29%; etc.
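The same fitness-proportional selection can be done directly with random.choices, which draws items with probability proportional to their weights; the fitness values are the ones from the 8-queens example above, while the population names are placeholders.

    import random

    fitness = [24, 23, 20, 11]                 # fitness of the four children above
    total = sum(fitness)                       # 78
    probs = [f / total for f in fitness]       # roughly [0.31, 0.29, 0.26, 0.14]

    population = ["child_1", "child_2", "child_3", "child_4"]
    next_gen = random.choices(population, weights=fitness, k=4)   # sample the next generation
    print(probs, next_gen)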
Partially observable systems
• What if we don’t even know what state we’re in?
• Can reason over “belief states”
– What worlds might we be in? “State estimation” or “filtering” task
– Typical for probabilistic reasoning
– May become hard to represent (“state” is now very large!)
Recall: the “vacuum world” example.
• Particle filters
– Population approach to state estimation
– Keep list of (many) possible states
– Observations increase or decrease each particle’s weight
– Resampling improves the density of samples in high-probability regions
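A minimal particle-filter sketch of the propagate / weight / resample loop above; transition, likelihood, and the observation sequence are assumed, problem-specific inputs.

    import random

    def particle_filter(observations, init_particles, transition, likelihood):
        """Track a belief state with a population of weighted samples."""
        particles = list(init_particles)
        n = len(particles)
        for obs in observations:
            # Propagate each particle through the (stochastic) transition model.
            particles = [transition(p) for p in particles]
            # Weight each particle by how well it explains the observation.
            weights = [likelihood(obs, p) for p in particles]
            if sum(weights) == 0:            # degenerate case: fall back to uniform weights
                weights = [1.0] * n
            # Resample: concentrates particles in high-probability regions.
            particles = random.choices(particles, weights=weights, k=n)
        return particles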
Linear Programming
• Optimize a linear objective function subject to linear (in)equality constraints; problems of this special form can be solved very efficiently.
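As a concrete illustration of that setting (the objective and constraints below are made-up example numbers, not from the slides), a minimal sketch with scipy.optimize.linprog, which minimizes c·x subject to A_ub·x ≤ b_ub and variable bounds:

    from scipy.optimize import linprog

    # maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x >= 0, y >= 0
    # linprog minimizes, so negate the objective coefficients.
    c = [-3.0, -2.0]
    A_ub = [[1.0, 1.0],
            [1.0, 3.0]]
    b_ub = [4.0, 6.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)   # optimal point and the maximized objective value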
• Types of local search:
– hill climbing, gradient ascent
– simulated annealing, Monte Carlo methods
– Population methods: beam search; genetic / evolutionary algorithms
– Wrappers: random restart; tabu search