
Artificial Intelligence

Beyond classical search

B E (CSE) B Section, Semester 5


T. T. Mirnalinee
Department of Computer Science and Engineering
SSN College of Engineering
24th August 2020
Introduction
• So far: environments that are observable, deterministic, and known
• Local search: rather than systematically exploring the whole state space, explore and evaluate states starting from the current state
• Simulated annealing
• Genetic Algorithm
Local Search
• Local search keeps only one (or a few) current nodes in memory, whereas exhaustive search maintains all the successors
• Uses very little memory – very good in very large or infinite state spaces
• The state space is continuous in several ML problems
• It can often find a reasonable solution (with respect to optimization)
• It is used both in search and in optimization problems
• Searches for a state with the maximum/minimum objective-function value
• Profit – maximize; error/cost – minimize. Maximization can be converted to minimization by negating the objective: maximizing f(x) is the same as minimizing −f(x)
Local search
• Informed and uninformed search explore the state space systematically and store all the states on the path from the initial state to the solution
• In local search, the path to the goal is not relevant
• E.g., in the 8-queens problem, only the final configuration matters
• This class of algorithms is called local search – they focus on the current state and its neighbouring states
• Paths are not retained
• With little memory, even when the state space is large, they can often find a reasonable solution
Local search
• Local search can be used when we have a heuristic (objective) function – so it is a form of informed search
• Hill climbing
• Simulated annealing
• Genetic algorithm
• Local maxima – the process may stop even though a better solution exists; remedy: backtracking
• Plateau – all neighbours have the same value; remedy: a big jump, i.e., randomly select a state far away from the current state
• Ridges – several peaks in the neighbourhood; remedy: move simultaneously in various directions and select the best
State space landscape

(Figure: state-space landscape)
• Shoulder – improvement is still possible
• Plateau – no improvement
• A local search algorithm explores the landscape to find the global maximum/minimum
Local search algorithms and optimization problems
• Optimization – defined by an objective function
• In local search, each state is a candidate solution
• 8-queens objective function: number of pairs of queens attacking each other
• The expected value for a solution is 0 – so it is a minimization problem
• On an n×n board with one queen per column there are n^n candidate states
• In local search we search directly in this solution space
• States are evaluated independently of each other
• In systematic search the solution is a path: the path cost to the goal and the order in which states are visited matter (e.g., the order in which the 8 queens are placed)
• Local search both finds goals and suits optimization problems – finding the best state according to an objective function
• Desirable properties: completeness and optimality
8-queens problem
• Complete-state formulation – each state has all eight queens on the board, one per column
• Successor – move a single queen to another square in the same column
• Each state has 56 successors (8 × 7)
• Heuristic function: number of pairs of queens attacking each other – a minimization problem (see the sketch below)
• The hill-climbing algorithm chooses randomly among the best successors if there is more than one
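A minimal Python sketch of this formulation (the helper names attacking_pairs and successors are illustrative): a state is a tuple of 8 row indices, one per column.

```python
from itertools import combinations

def attacking_pairs(state):
    """Heuristic: number of pairs of queens attacking each other.
    state[c] is the row of the queen in column c, so with one queen
    per column only rows and diagonals can clash."""
    pairs = 0
    for (c1, r1), (c2, r2) in combinations(enumerate(state), 2):
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2):
            pairs += 1
    return pairs

def successors(state):
    """Move one queen within its column: 8 columns x 7 rows = 56 successors."""
    return [state[:c] + (r,) + state[c + 1:]
            for c in range(8) for r in range(8) if r != state[c]]

print(attacking_pairs((0, 4, 7, 5, 2, 6, 1, 3)))  # a known solution: prints 0
print(len(successors((0, 4, 7, 5, 2, 6, 1, 3))))  # prints 56
```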
Random Restart
• A trivial algorithm – repeatedly sample a random new state
• Take one queen and put it in the first column, another queen in the second column, and so on, choosing rows at random
• If we keep doing this, at some point we will hit an optimal solution
• Random walk: start from a state and randomly jump to any one of its neighbours (see the sketch below)
• At some point we will reach a solution, provided the state space has only one connected component
• Every state has neighbours; greedy search – out of many alternatives, choose the best and move
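A minimal sketch of these two trivial strategies for 8-queens (helper names are illustrative):

```python
import random

def random_state(n=8):
    """Random sampling: one queen per column in a uniformly random row."""
    return tuple(random.randrange(n) for _ in range(n))

def random_walk_step(state):
    """Random walk: jump to a uniformly random neighbour
    (move one queen to a different row in its column)."""
    col = random.randrange(len(state))
    row = random.choice([r for r in range(len(state)) if r != state[col]])
    return state[:col] + (row,) + state[col + 1:]
```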
Hill climbing
• Greedy local search – move to the best neighbouring state
• Local maximum: a peak higher than the neighbouring states but lower than the global maximum
• Ridges: result in a sequence of local maxima that is very difficult for greedy search to navigate
• Plateau: a flat area of the state-space landscape
• Can get trapped in local maxima/minima – so it is incomplete
Hill-climbing search(Greedy local search)
• Steepest ascent
• Continuously moves in the direction of increasing value (uphill)
• Terminates when it reaches a peak – no neighbour has a higher value
• Greedy local search – grabs a good neighbour without bothering about where to go next
function HILL-CLIMBING(problem) returns a state that is a local maximum
  current ← MAKE-NODE(problem.INITIAL-STATE)
  loop do
    neighbor ← a highest-valued successor of current
    if neighbor.VALUE ≤ current.VALUE then return current.STATE
    current ← neighbor
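A runnable Python rendering of this loop for 8-queens – a sketch, not the canonical implementation; since we minimize conflicts, "highest-valued successor" becomes "lowest-conflict successor":

```python
import random
from itertools import combinations

def conflicts(state):
    """Number of attacking queen pairs; 0 means a solution."""
    return sum(r1 == r2 or abs(r1 - r2) == abs(c1 - c2)
               for (c1, r1), (c2, r2) in combinations(enumerate(state), 2))

def hill_climbing(state):
    """Steepest descent on conflicts; returns a local minimum,
    which may or may not be a global solution."""
    while True:
        nbrs = [state[:c] + (r,) + state[c + 1:]
                for c in range(len(state))
                for r in range(len(state)) if r != state[c]]
        best = min(nbrs, key=conflicts)
        if conflicts(best) >= conflicts(state):  # no better neighbour: stop
            return state
        state = best

start = tuple(random.randrange(8) for _ in range(8))
result = hill_climbing(start)
print(result, "conflicts:", conflicts(result))
```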
Variant of hill climbing -- incomplete
• Hill climbing is incomplete – the algorithm only moves to a neighbour that is better than the current state, so it can get stuck
• Greedy HC – always selects the best neighbour
• Stochastic HC – chooses at random from among the uphill moves
• Selects any better neighbour randomly, with a probability distribution over the uphill moves
• When the current state has a large number of neighbours, the memory problem still exists
• First-choice HC – randomly generates successors until one is found that is better than the current state (examines only a subset of successors; see the sketch below)
• Random-restart (complete) – runs a series of searches until a goal is found; with few local maxima and plateaus it finds the solution quickly
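A minimal sketch of a first-choice step, assuming the conflicts helper from the hill climbing sketch above (max_tries is an illustrative cutoff):

```python
import random

def first_choice_step(state, conflicts, max_tries=100):
    """Sample random successors until one improves on the current state,
    examining only a subset rather than all 56 neighbours."""
    for _ in range(max_tries):
        col = random.randrange(len(state))
        row = random.randrange(len(state))
        nxt = state[:col] + (row,) + state[col + 1:]
        if nxt != state and conflicts(nxt) < conflicts(state):
            return nxt
    return state  # no improving successor found within the budget
```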
Variant of hill climbing
• Randomly generate a state and do hill climbing
• Generate another state and do hill climbing again
• Repeat (see the sketch below)
• Since a randomly generated start state will eventually lead to a goal, the method is complete
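A minimal random-restart wrapper, again assuming the hill_climbing and conflicts helpers from the earlier sketch (max_restarts is an illustrative bound):

```python
import random

def random_restart(hill_climbing, conflicts, n=8, max_restarts=1000):
    """Restart hill climbing from fresh random states until a goal
    (zero conflicts) is found."""
    for _ in range(max_restarts):
        start = tuple(random.randrange(n) for _ in range(n))
        state = hill_climbing(start)
        if conflicts(state) == 0:
            return state
    return None  # no solution within the restart budget
```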
Nature inspired algorithm
• Designing algorithms by taking inspiration from nature – nature-inspired/biological algorithms
• Model the problem as a landscape
• Local minima vs the global minimum
• There may be many local minima but only one global minimum
Simulated Annealing
• Combines hill climbing with a random walk – to achieve both efficiency and completeness
• Annealing is the process used to temper or harden metals/glass by heating them to a high temperature and then gradually cooling them
• The material reaches a low-energy crystalline state
• Picture gradient descent where we shake the surface whenever the process gets stuck in a local minimum
• Imagine the task of getting a ping-pong ball into the deepest valley of a bumpy surface
• Shake just hard enough that the ball escapes local minima, but not hard enough to knock it out of the global minimum
Simulated Annealing
• The intensity of shaking corresponds to temperature
• Start by shaking hard (high temperature)
• Then decrease the intensity of shaking (gradually decrease the temperature)
• Instead of picking the best neighbour, pick a random neighbour
• If the random move improves the situation it is always accepted; otherwise it is accepted with a probability less than one
• Complete and efficient
Simulated annealing- Algorithm
function SIMULATED-ANNEALING(problem, schedule) returns a solution state
  inputs: problem, a problem
          schedule, a mapping from time to "temperature"
  current ← MAKE-NODE(problem.INITIAL-STATE)
  for t = 1 to ∞ do
    T ← schedule(t)
    if T = 0 then return current
    next ← a randomly selected successor of current
    ΔE ← next.VALUE − current.VALUE
    if ΔE > 0 then current ← next
    else current ← next only with probability e^(ΔE/T)

• The acceptance probability e^(ΔE/T) decreases as ΔE gets worse (more negative) and as T decreases
• At high values of T, bad moves are more likely to be accepted; as T decreases, bad moves become less and less likely to be accepted
• With a sufficiently slow schedule, the algorithm finds a global optimum with probability approaching 1
Simulated Annealing
• When T = 0, there is no need to shake any more
• Generate one random successor (not all successors)
• In hill climbing, if the next state has a better value we move; otherwise we discard it and select a new successor
• In SA, even if the successor is worse it is not always discarded – sometimes we accept it
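A minimal Python rendering of the pseudocode for 8-queens (the exponential cooling schedule and its constants are illustrative choices; since we minimize conflicts, ΔE is defined as current minus next so that ΔE > 0 means an improvement):

```python
import math
import random
from itertools import combinations

def conflicts(state):
    return sum(r1 == r2 or abs(r1 - r2) == abs(c1 - c2)
               for (c1, r1), (c2, r2) in combinations(enumerate(state), 2))

def simulated_annealing(state, t0=1.0, decay=0.995, t_min=1e-3):
    t = t0
    while t > t_min:
        # one randomly selected successor, not a scan of all 56
        col = random.randrange(len(state))
        row = random.randrange(len(state))
        nxt = state[:col] + (row,) + state[col + 1:]
        delta_e = conflicts(state) - conflicts(nxt)  # > 0: nxt is better
        if delta_e > 0 or random.random() < math.exp(delta_e / t):
            state = nxt  # take good moves always, bad ones with prob e^(dE/T)
        t *= decay       # gradually cool
    return state

state = simulated_annealing(tuple(random.randrange(8) for _ in range(8)))
print(state, "conflicts:", conflicts(state))
```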
Local beam search

• Keeps track of k states rather than one (when memory is not the limitation)
• Starts with k randomly generated states
• At each step, all the successors of all k states are generated
• If any one is a goal, the algorithm stops; otherwise it selects the k best successors
• In random-restart search, each search process runs independently
• In local beam search, useful information is passed among the parallel searches
Local beam search
• Resembles random restart (randomly generate one initial state, run HC, and repeat)
• But local beam search generates k states initially (the searches appear to run in parallel)
• All k current states share useful information (the best k are selected from all the successors of all k states; see the sketch below)
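A minimal sketch of local beam search for 8-queens (the beam width k, step budget, and helper functions are illustrative, carried over from the earlier sketches):

```python
import random
from itertools import combinations

def conflicts(state):
    return sum(r1 == r2 or abs(r1 - r2) == abs(c1 - c2)
               for (c1, r1), (c2, r2) in combinations(enumerate(state), 2))

def successors(state):
    return [state[:c] + (r,) + state[c + 1:]
            for c in range(len(state))
            for r in range(len(state)) if r != state[c]]

def local_beam_search(k=10, steps=200, n=8):
    beam = [tuple(random.randrange(n) for _ in range(n)) for _ in range(k)]
    for _ in range(steps):
        # pool the successors of all k states: the searches share information
        pool = [s for state in beam for s in successors(state)]
        pool.sort(key=conflicts)
        if conflicts(pool[0]) == 0:
            return pool[0]        # goal found
        beam = pool[:k]           # keep the k best successors overall
    return min(beam, key=conflicts)

print(local_beam_search())
```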
Stochastic beam search

• Instead of the k best, select the k successors at random
• The probability of selecting a given successor is proportional to its fitness value
• Genetic algorithm – resembles natural selection:
• Organism – state
• Population – set of states
• Offspring – successors
• Values – fitness
• Select the next states based on fitness
Genetic Algorithm
• Hard optimization problems can be solved using nature-inspired algorithms
• Modeled on how biological processes evolve
• Two states are combined to generate a new successor
Genetic algorithm
• A new state is generated by combining two parent states
• GAs begin with a set of k randomly generated states, called the population
• Each state, or individual, is represented as a string over a finite alphabet (generally a bit string of 0s and 1s, or a digit string)

8-queens problem
• Example individual: 24613578 (the row of the queen in each of the eight columns)
GA
• Fitness function
• Each state is evaluated by the fitness function
• 8-queens problem: number of non-attacking pairs of queens
GA Operation
• Selection
• Crossover
• Mutation
• Example fitness values: 24, 23, 20, 11 (number of non-attacking pairs)
• S = 24 + 23 + 20 + 11 = 78
• P(individual 1) = 24/S, and so on (see the sketch below)
• Four individuals are selected from these; the same individual can be selected more than once
• Crossover
• Crossover points are selected randomly
• Early in the search, crossover takes large steps in the state space
• Later, near the optimum, it takes smaller steps
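A short sketch of this fitness-proportionate (roulette-wheel) selection using the slide's numbers; with S = 78, the first individual is chosen with probability 24/78 ≈ 0.31:

```python
import random

fitness = [24, 23, 20, 11]            # non-attacking pairs for 4 individuals
total = sum(fitness)                  # S = 78
probs = [f / total for f in fitness]  # ~ [0.31, 0.29, 0.26, 0.14]

# Draw 4 parents; the same individual can be chosen more than once.
parents = random.choices(range(4), weights=fitness, k=4)
print(probs, parents)
```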
Genetic algorithm
function GENETIC-ALGORITHM(population, FITNESS-FN) returns an individual
  inputs: population, a set of individuals
          FITNESS-FN, a function that measures the fitness of an individual
  repeat
    new_population ← empty set
    for i = 1 to SIZE(population) do
      x ← RANDOM-SELECTION(population, FITNESS-FN)
      y ← RANDOM-SELECTION(population, FITNESS-FN)
      child ← REPRODUCE(x, y)
      if (small random probability) then child ← MUTATE(child)
      add child to new_population
    population ← new_population
  until some individual is fit enough, or enough time has elapsed
  return the best individual in population, according to FITNESS-FN

function REPRODUCE(x, y) returns an individual
  inputs: x, y, parent individuals
  n ← LENGTH(x); c ← random number from 1 to n
  return APPEND(SUBSTRING(x, 1, c), SUBSTRING(y, c + 1, n))
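A compact Python rendering of these two functions for 8-queens (population size, mutation rate, and the generation cap are illustrative choices; fitness is the number of non-attacking pairs, whose maximum for 8 queens is 28):

```python
import random
from itertools import combinations

def fitness(state):
    """Non-attacking pairs of queens; 28 = 8*7/2 is the maximum."""
    attacking = sum(r1 == r2 or abs(r1 - r2) == abs(c1 - c2)
                    for (c1, r1), (c2, r2) in combinations(enumerate(state), 2))
    return 28 - attacking

def reproduce(x, y):
    c = random.randrange(1, len(x))  # random crossover point
    return x[:c] + y[c:]

def mutate(state):
    col = random.randrange(len(state))
    return state[:col] + (random.randrange(8),) + state[col + 1:]

def genetic_algorithm(pop_size=100, p_mutate=0.1, generations=1000):
    population = [tuple(random.randrange(8) for _ in range(8))
                  for _ in range(pop_size)]
    for _ in range(generations):
        if max(map(fitness, population)) == 28:  # fit enough: a solution
            break
        weights = [fitness(s) for s in population]
        new_population = []
        for _ in range(pop_size):
            # fitness-proportionate selection; repeats are allowed
            x, y = random.choices(population, weights=weights, k=2)
            child = reproduce(x, y)
            if random.random() < p_mutate:
                child = mutate(child)
            new_population.append(child)
        population = new_population
    return max(population, key=fitness)

best = genetic_algorithm()
print(best, "fitness:", fitness(best))
```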
Genetic algorithm
• Combines an uphill tendency (moving to better neighbours) with random exploration
• Information is shared among all the states in the population
• Granularity of the search: a schema is a substring in which some of the positions can be left unspecified, e.g., 246*****
• Since 246 is a safe substring, each * can be replaced by any value
• Schemas with the best fitness values tend to survive
Summary
• Local search suits hard optimization problems
• Examples: circuit layout, job scheduling
• ACO (ant colony optimization) – inspired by how ants effectively find food
• PSO (particle swarm optimization) – inspired by how flocks of birds find paths