AI Lecture 6
March 6, 2023
In many optimization problems, the path to the
goal is irrelevant; the goal state itself is the
solution
Local search algorithms are useful for solving
optimization problems
Find the best possible state according to a
given objective function
Optimize the number of products purchased
by an E-Commerce user
State: Action taken by the user plus the
resulting page-view
No record is kept of the path costs between
states
All that matters is whether the user ends up
buying more products (or not).
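As a minimal sketch of this formulation (the `State` fields and `objective` function below are hypothetical, not from any real e-commerce system): a state bundles the user's action with the resulting page-view, and the objective simply counts purchases, since local search only compares objective values between states.

```python
from dataclasses import dataclass

# Hypothetical state for the e-commerce example: the user's action
# plus the resulting page-view, with purchases assumed to be tracked.
@dataclass(frozen=True)
class State:
    action: str      # e.g. "clicked_recommendation"
    page_view: str   # e.g. "product_page"
    purchases: int   # products bought so far in the session

def objective(s: State) -> int:
    # Local search compares only objective values; no path costs are kept.
    return s.purchases

a = State("clicked_recommendation", "product_page", 2)
b = State("used_search", "cart_page", 3)
print(max((a, b), key=objective).page_view)  # the higher-purchase state wins
```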
"Like climbing Everest in thick fog with
amnesia"
A loop that continually moves in the direction of
increasing value, i.e., uphill
Terminates when it reaches a peak where no
neighbor has a higher value
Fog with Amnesia: Doesn’t look ahead beyond
the immediate neighbors of the current state.
1. Pick a random point in the search space
2. Consider all the neighbors of the current
state
3. Choose the neighbor with the best quality
and move to that state
4. Repeat steps 2 and 3 until all the
neighboring states are of lower quality
5. Return the current state as the solution
state.
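The steps above can be sketched as steepest-ascent hill climbing. This is an illustrative toy (the quadratic landscape and integer neighbor function are assumptions, not from the slides):

```python
import random

def hill_climb(objective, neighbors, start):
    """Repeatedly move to the best neighbor until no neighbor
    has a higher objective value (steps 2-5 above)."""
    current = start
    while True:
        best = max(neighbors(current), key=objective)
        if objective(best) <= objective(current):
            return current  # peak: every neighbor is lower or equal
        current = best

# Toy landscape: maximize f(x) = -(x - 7)^2 over integer states.
f = lambda x: -(x - 7) ** 2
step = lambda x: [x - 1, x + 1]
start = random.randint(0, 20)   # step 1: pick a random point
print(hill_climb(f, step, start))  # reaches the single peak x = 7
```

Because this landscape has only one peak, the climb always ends at the global maximum; the next slide shows why that is not true in general.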
Greedy Local Search: grabs a good neighbor
state without thinking about where to go next
However, greedy algorithms generally do make
good progress towards the solution
Unfortunately, hill-climbing
Can get stuck in local maxima
Can be stuck by ridges (a series of local
maxima that occur close together)
Can be stuck by plateaux (a flat area in the
state space landscape)
Shoulder: a flat area from which the landscape
rises uphill further on
Flat local maximum: no uphill rise exists.
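The local-maximum failure mode is easy to see on a small hand-built landscape (the `values` array below is an assumed illustration with a local peak at x = 3 and the global peak at x = 8):

```python
# Heights over integer states 0..10: local maximum at x = 3,
# global maximum at x = 8.
values = [0, 1, 2, 3, 2, 1, 2, 5, 8, 5, 2]

def climb(x):
    """Hill climb on the array above: move to the higher neighbor
    until neither neighbor is higher."""
    while True:
        nbrs = [n for n in (x - 1, x + 1) if 0 <= n < len(values)]
        best = max(nbrs, key=lambda n: values[n])
        if values[best] <= values[x]:
            return x
        x = best

print(climb(0))  # stuck at the local maximum x = 3
print(climb(6))  # starting on the other slope reaches x = 8
```

Where the climb ends depends entirely on the random starting point, which is why plain hill climbing is incomplete.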
Stochastic Hill Climbing: Chooses at random
from among the uphill moves, with a selection
probability that can vary with the steepness
of the move
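One common way to realize this (a sketch, assuming selection probability proportional to the improvement each uphill move offers):

```python
import random

def stochastic_hill_climb(objective, neighbors, start, rng=random):
    """Pick randomly among uphill moves, weighted by how much
    each move improves the objective; stop when none go uphill."""
    current = start
    while True:
        uphill = [n for n in neighbors(current)
                  if objective(n) > objective(current)]
        if not uphill:
            return current  # no uphill move: current state is a peak
        weights = [objective(n) - objective(current) for n in uphill]
        current = rng.choices(uphill, weights=weights, k=1)[0]

# Same toy setup as before: single peak at x = 5.
f = lambda x: -(x - 5) ** 2
step = lambda x: [x - 1, x + 1]
print(stochastic_hill_climb(f, step, 0))  # converges to x = 5
```

Compared with always taking the steepest step, this converges more slowly but can find better solutions on landscapes where the steepest direction leads to a poor local maximum.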