
Lecture 6 – Local Search

Instructor: Hafiz Mueez Ameen

March 6, 2023
• In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution
• State space = set of configurations
• Find a configuration satisfying your constraints, e.g., n-queens (a code sketch follows this list)
• In such cases, we can use local search algorithms:
  • Keep a single "current" state and move between states, without keeping track of paths
  • Use very limited memory
  • Find reasonable solutions in large state spaces
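To make the n-queens example concrete, here is a minimal Python sketch of a complete-state formulation, assuming one queen per column; the helper names (random_state, conflicts) are our own, not from the lecture.

```python
import random

def random_state(n):
    """A random configuration: state[c] is the row of the queen in column c."""
    return tuple(random.randrange(n) for _ in range(n))

def conflicts(state):
    """Objective to minimize: the number of pairs of queens attacking
    each other (0 means the constraints are satisfied)."""
    n = len(state)
    return sum(
        1
        for c1 in range(n) for c2 in range(c1 + 1, n)
        if state[c1] == state[c2]                   # same row
        or abs(state[c1] - state[c2]) == c2 - c1    # same diagonal
    )
```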
• Local search algorithms are useful for solving pure optimization problems
  • Find the best possible state according to a given objective function
• Example: optimize the number of products purchased by an e-commerce user
  • State: an action taken by the user plus the resulting page view
  • No track is kept of the path costs between states
  • All that matters is whether the user is buying more products (or not)
 "Like climbing Everest in thick fog with
amnesia“
 A loop that continually moves in the direction of
increasing value, i.e., uphill
 Terminates when it reaches a peak where no
neighbor has a higher value
 Fog with Amnesia: Doesn’t look ahead beyond
the immediate neighbors of the current state.

March 6, 2023 5
March 6, 2023 6
1. Pick a random point in the search space
2. Consider all the neighbors of the current state
3. Choose the neighbor with the best quality and move to that state
4. Repeat steps 2 and 3 until all the neighboring states are of lower quality
5. Return the current state as the solution state
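The steps above translate directly into code. Below is a hedged Python sketch of steepest-ascent hill climbing, plus a neighbor function for the n-queens state from the earlier sketch; the function names are our own, not from the lecture.

```python
def hill_climb(state, neighbors, value):
    """Move to the best neighbor until no neighbor improves on the
    current state, then return the current state (a local maximum)."""
    current = state
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):  # no uphill neighbor: at a peak
            return current
        current = best

def queen_neighbors(state):
    """All states reachable by moving a single queen within its column."""
    n = len(state)
    for c in range(n):
        for r in range(n):
            if r != state[c]:
                yield state[:c] + (r,) + state[c + 1:]

# Usage: minimize conflicts by maximizing their negation.
solution = hill_climb(random_state(8), queen_neighbors,
                      lambda s: -conflicts(s))
```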
• Greedy local search: grabs a good neighbor state without thinking ahead about where to go next
• However, greedy algorithms generally do make good progress toward the solution
• Unfortunately, hill climbing:
  • Can get stuck in local maxima
  • Can get stuck on ridges (a series of local maxima that occur close together)
  • Can get stuck on plateaux (a flat area in the state-space landscape)
    • Shoulder: a flat area from which an uphill rise exists further on
    • Flat local maximum: no uphill rise exists
• Stochastic hill climbing: chooses at random from among the uphill moves, based on a probability distribution
• First-choice hill climbing: implements stochastic hill climbing by generating successors randomly until one is generated that is better than the current state
• Random-restart hill climbing: runs a series of hill-climbing searches from randomly selected initial states until a solution is found (both variants are sketched after this list)
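As a rough illustration of two of these variants, the Python sketches below build on the hill_climb and n-queens helpers defined earlier; the sampling budget and restart count are arbitrary choices of ours, not values from the lecture.

```python
import random

def first_choice(state, neighbors, value, tries=100):
    """First-choice hill climbing: sample successors at random and move
    to the first one better than the current state; stop after `tries`
    consecutive samples bring no improvement."""
    current = state
    while True:
        for _ in range(tries):
            nxt = random.choice(list(neighbors(current)))
            if value(nxt) > value(current):
                current = nxt
                break
        else:                      # no improving sample found: stop
            return current

def random_restart(n, restarts=50):
    """Random-restart hill climbing for n-queens: hill-climb from a
    series of random initial boards and keep the best result seen."""
    best = random_state(n)
    for _ in range(restarts):
        result = hill_climb(random_state(n), queen_neighbors,
                            lambda s: -conflicts(s))
        if conflicts(result) < conflicts(best):
            best = result
        if conflicts(best) == 0:   # solution found: stop restarting
            break
    return best
```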
• Idea: escape local maxima by allowing some "bad" moves, but gradually decrease their frequency
• A worse move is accepted with probability e^(ΔE/T), where ΔE = Value[Next] − Value[Current] (negative for a worse move) and T is the current temperature:
  • If Value[Next] is close to Value[Current], the move is more likely to be accepted
  • If the temperature is high, the exponent will be close to zero, and the probability will be close to 1
  • As the temperature approaches zero, the exponent approaches −∞, and the probability approaches zero
• One can prove: if T decreases slowly enough, then simulated annealing will find a global optimum with probability approaching 1
• Widely used in VLSI layout, airline scheduling, etc.
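Here is a minimal Python sketch of simulated annealing under the acceptance rule described above, reusing the n-queens helpers from the earlier sketches; the cooling schedule and its length are illustrative assumptions, not values from the lecture.

```python
import math
import random

def simulated_annealing(state, neighbors, value, schedule):
    """Always accept uphill moves; accept a downhill move with
    probability e^(dE/T), which shrinks as the temperature T cools.
    `schedule(t)` maps the step number to a temperature; a temperature
    of 0 ends the search."""
    current = state
    t = 0
    while True:
        t += 1
        T = schedule(t)
        if T <= 0:                        # schedule exhausted: stop
            return current
        nxt = random.choice(list(neighbors(current)))
        dE = value(nxt) - value(current)  # negative for a downhill move
        if dE > 0 or random.random() < math.exp(dE / T):
            current = nxt

# Usage with the n-queens helpers above and a simple linear cooling
# schedule (the schedule and its length are arbitrary choices):
board = simulated_annealing(random_state(8), queen_neighbors,
                            lambda s: -conflicts(s),
                            lambda t: max(0.0, 1 - t / 20000.0))
```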
