SP22-AI-Hill Climbing


Hill Climbing in Artificial Intelligence

Bart Selman
CS4700
1
So far:
methods that systematically explore the search space, possibly
using principled pruning (e.g., A*)

Current best such algorithms can handle search spaces of up to 10^100
states / around 500 binary variables ("ballpark" number only!)

What if we have much larger search spaces?

Search spaces for some real-world problems may be much larger


e.g. 10^30,000 states, as in certain reasoning and planning tasks.

A completely different kind of method is called for --- non-systematic:

Local search
(sometimes called: Iterative Improvement Methods)

Intro example: N-queens
Problem: Place N queens on an NxN chess board so that no queen attacks
another.

Example solution for N = 8.

How hard is it to find such solutions? What if N gets larger?

Can be formulated as a search problem:
Start with empty board. [Ops? How many?]
Operators: place queen on location (i,j). [N^2. Goal?]
Goal state: N queens on board. No one attacks another.
N=8, branching factor 64. Solution at what depth? N.
Search space: (N^2)^N. Informed search? Ideas for a heuristic?

N-Queens demo!

Issues: (1) We don't know much about the goal state. That's what we
are looking for!
(2) Also, we don't care about the path to the solution!

What algorithm would you write to solve this?


Local Search: General Principle
Key idea (surprisingly simple):

1) Select (random) initial state (initial guess at solution)


e.g. guess random placement of N queens

2) Make local modification to improve current state


e.g. move queen under attack to “less attacked” square

3) Repeat Step 2 until goal state found (or out of time);
the cycle can be done billions of times.

(Unsolvable if out of time? Not necessarily! The method is incomplete.)

Requirements:
– generate an initial guess (often random; probably not optimal or even valid)
– evaluate quality of a guess
– move to another state (well-defined neighborhood function)
. . . and do these operations quickly
. . . and don't save paths followed
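The loop described on this slide can be sketched generically in Python. This is only a sketch with hypothetical names (`hill_climb`, `neighbors`, `score` are mine, not from the slides); `score` is minimized and `neighbors` enumerates the local modifications of a state:

```python
def hill_climb(initial, neighbors, score, max_steps=10_000):
    """Greedy local search: repeatedly move to the best-scoring neighbor.

    `score` is minimized; stops at a local optimum (no improving neighbor)
    or after `max_steps` moves.  The path followed is not saved.
    """
    current = initial
    for _ in range(max_steps):
        best = min(neighbors(current), key=score, default=current)
        if score(best) >= score(current):
            return current  # no strictly better neighbor: local optimum / plateau
        current = best
    return current
```

For example, minimizing `(x - 7)**2` over the integers with neighbors `x - 1` and `x + 1` climbs straight to 7 from any start. Note how the three requirements above map onto the three parameters.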
Local Search

1) Hill-climbing search or greedy local search


2) Simulated annealing
3) Local beam search
4) Genetic algorithms (related: genetic programming)
5) Tabu search (not covered)

Hill-climbing search

“Like climbing Everest in thick fog with amnesia”


Keep trying to move to a better “neighbor”,
using some quantity to optimize.

Note: (1) a "successor" is normally called a neighbor.

(2) maximization and minimization are isomorphic.
(3) stops when no improvement, but often better to just
"keep going", especially if improvement = 0.
4-Queens
States: 4 queens in 4 columns (256 states)
Neighborhood Operators: move queen in column
Evaluation / Optimization function: h(n) = number of attacks / “conflicts”
Goal test: no attacks, i.e., h(G) = 0

Initial state (guess).

Local search: because we only consider local changes to the state
at each step. We generally make sure that a series of local changes
can reach all possible states.
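The evaluation function h(n) from this slide, the number of attacking pairs, can be written directly. A minimal sketch assuming the usual one-queen-per-column representation (the name `conflicts` is mine; `state[i]` is the row of the queen in column i, so only row and diagonal attacks are possible):

```python
def conflicts(state):
    """h(n): number of attacking pairs of queens.

    state[i] is the row of the queen in column i, so two queens attack
    iff they share a row or a diagonal (|row difference| == column distance).
    """
    n = len(state)
    return sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if state[i] == state[j] or abs(state[i] - state[j]) == j - i
    )
```

The goal test is then simply `conflicts(state) == 0`, matching h(G) = 0 above.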
8-Queens
[Figure: 8-queens board; each square is annotated with the number of
conflicts h that would result from moving a queen there.]
Representation: 8 integer variables giving positions of 8 queens in columns
(e.g. <2, 5, 7, 4, 3, 8, 6, 1>)
Section 6.4 R&N (“hill-climbing with min-conflict heuristics”)
Pick initial complete assignment (at random)
Repeat
• Pick a conflicted variable var (at random)
• Set the new value of var to minimize the number of conflicts
• If the new assignment is not conflicting then return it

(The min-conflicts heuristic inspired GSAT and Walksat.)


Remarks
Local search with the min-conflicts heuristic works extremely well for
N-queens problems. Can do millions of queens and up in seconds. Similarly
for many other problems (planning, scheduling, circuit layout, etc.)
Why?
Commonly given: solutions are densely distributed in the O(n^n)
space; on average a solution is a few steps away from a randomly picked
assignment. But solutions are still exponentially rare!
In fact, density of solutions not very relevant. Even problems with a single
solution can be “easy” for local search!
It all depends on the structure of the search space and the guidance
for the local moves provided by the optimization criterion.

For N-queens, consider h(n) = k if k queens are attacked.


Does this still give a valid solution? Does it work as well?
What happens if h(n) = 0 if no queen under attack; h(n) = 1 otherwise?
Does this still give a valid solution? Does it work as well?

“Blind” search! No gradient in optimization criterion!


Issues for hill-climbing search
Problem: depending on initial state, can get stuck in local optimum
(here maximum)
How to overcome
local optima and plateaus ?

→ Random-restart hill climbing

But the 1D figure is deceptive. True local optima are surprisingly rare in
high-dimensional spaces! There often is an escape to a better state.
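Random-restart hill climbing can be sketched on a toy 1-D landscape. Everything here is an illustrative assumption, not from the slides: `descend` is greedy descent over the integers, and the bumpy test function has local minima near multiples of 10 with its global minimum at x = 70:

```python
import random

def descend(x, f, lo, hi):
    """Greedy descent on integers in [lo, hi] with neighbors x-1 and x+1."""
    while True:
        nbrs = [y for y in (x - 1, x + 1) if lo <= y <= hi]
        best = min(nbrs, key=f)
        if f(best) >= f(x):
            return x            # local minimum: no strictly better neighbor
        x = best

def random_restart(f, lo, hi, restarts=50, rng=random):
    """Random-restart hill climbing: rerun local search from fresh random
    starting points and keep the best local optimum found."""
    results = [descend(rng.randrange(lo, hi + 1), f, lo, hi)
               for _ in range(restarts)]
    return min(results, key=f)
```

A single `descend` run usually gets stuck in one of the local basins; with enough restarts, at least one start lands in the global basin and the minimum over all runs is the true optimum.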
Potential Issues with Hill Climbing / Greedy
Local Search

Local Optima: No neighbor is better, but not at global optimum.


– May have to move away from goal to find (best) solution.
– But again, true local optima are rare in many high-dimensional spaces.

Plateaus: All neighbors look the same.


– 8-puzzle: perhaps no action will change # of tiles out of place.
– Solution: just keep moving around! (will often find some improving
move eventually)

Ridges: sequence of local maxima

May not know global optimum: Am I done?

Improvements to Greedy /
Hill-climbing Search
Issue:
– How to move more quickly to successively better plateaus?
– Avoid “getting stuck” / local maxima?

Idea: Introduce "noise":

downhill (uphill) moves to escape from
plateaus or local maxima (minima).
E.g., make a move that increases the number of attacking pairs.

