3 Optimizing Search

I will give you some money, z dollars, after you tell me the values of x and y. z is defined as sin(x) + tan(y) + 1.25, where x and y can range from 0 to 10. To find optimal (or near-optimal) solutions, algorithms such as hill climbing, simulated annealing, and genetic algorithms can be used. These algorithms iteratively improve candidate solutions by making small, random changes and exploring neighborhoods of the search space.


Imagine that I am in a good mood, and that I am going to give you some money!

In particular, I am going to give you z dollars after you tell me the values of x and y, where

z(x, y) = sin(x) + tan(y) + 1.25

with x and y in the range of 0 to 10.
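To see why this payoff is worth searching over, here is a minimal brute-force sketch (the 0.1 grid spacing is an arbitrary choice, not part of the example):

```python
import math

def z(x, y):
    """The payoff from the example: z = sin(x) + tan(y) + 1.25."""
    return math.sin(x) + math.tan(y) + 1.25

# Naive approach: try every point on a 0.1-spaced grid over [0, 10] x [0, 10].
best = max((z(i / 10, j / 10), i / 10, j / 10)
           for i in range(101) for j in range(101))
# tan(y) blows up near odd multiples of pi/2, so the grid point just below
# one of those asymptotes dominates the payoff.
print(best)
```

Exhaustive grids like this scale terribly as the resolution or the number of variables grows, which is the motivation for the iterative-improvement algorithms that follow.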

Optimizing Search
(Iterative Improvement Algorithms)

i.e., hill climbing, simulated annealing, genetic algorithms


Optimizing search differs from the path-finding search we have studied in many ways:
The problems are ones for which exhaustive and heuristic search are NP-hard.
The path is not important (for that reason we typically don't bother to keep a tree around); thus we are CPU bound, not memory bound.
Every state is a solution.
The search space is (often) continuous.
Usually we abandon hope of finding the best solution and settle for a very good solution.
The task is usually to find the minimum (or maximum) of a function.

Example Problem I
(Continuous)

Finding the maximum (minimum) of some function y = f(x) within a defined range.

Example Problem II
(Discrete)

The Traveling Salesman Problem (TSP)
A salesman spends his time visiting n cities. In one tour he visits each city just once, and finishes up where he started. In what order should he visit them to minimize the distance traveled?
There are (n-1)!/2 possible tours.

        A    B    C   ...
  A     0   12   34   ...
  B    12    0   76   ...
  C    34   76    0   ...
  ...  ...  ...  ...
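The distance matrix above is all a search algorithm needs to score a candidate tour. A minimal sketch, using only the three cities whose distances the slide gives:

```python
import math

# The distance matrix from the slide, restricted to the three cities shown.
dist = {
    ('A', 'A'): 0,  ('A', 'B'): 12, ('A', 'C'): 34,
    ('B', 'A'): 12, ('B', 'B'): 0,  ('B', 'C'): 76,
    ('C', 'A'): 34, ('C', 'B'): 76, ('C', 'C'): 0,
}

def tour_length(tour):
    """Length of a closed tour that returns to its starting city."""
    return sum(dist[a, b] for a, b in zip(tour, tour[1:] + tour[:1]))

n = 3
print(math.factorial(n - 1) // 2)    # (n-1)!/2 distinct tours
print(tour_length(['A', 'B', 'C']))  # 12 + 76 + 34 = 122
```

For n = 3 there is only one distinct tour, but (n-1)!/2 grows factorially, which is why exhaustive enumeration is hopeless for interesting n.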

Example Problem III
(Continuous and/or discrete)

Function Fitting
Depending on the way the problem is set up, this could be continuous and/or discrete.
Discrete part: finding the form of the function. Is it x^2, or x^4, or abs(log(x)) + 75?
Continuous part: finding the value for x. Is it x = 3.1 or x = 3.2?
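The discrete part of function fitting can be sketched as choosing among candidate forms by their error against the data. The data points and candidate forms below are illustrative assumptions, not from the slide:

```python
import math

# Hypothetical data to fit (secretly generated by x^2).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.0, 4.0, 9.0, 16.0]

# Discrete part: candidate forms of the function.
forms = {
    'x^2': lambda x: x ** 2,
    'x^4': lambda x: x ** 4,
    'abs(log(x)) + 75': lambda x: abs(math.log(x)) + 75,
}

def error(f):
    """Sum of squared differences between f(x) and the data."""
    return sum((f(x) - y) ** 2 for x, y in zip(xs, ys))

best_form = min(forms, key=lambda name: error(forms[name]))
print(best_form)
```

The continuous part (tuning a constant inside a chosen form, e.g. x = 3.1 vs x = 3.2) would be handled by the numerical search techniques the rest of these slides cover.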

Assume that we can:
Represent a state.
Quickly evaluate the quality of a state.
Define operators to change from one state to another.

Traveling Salesman
State: a tour, e.g. A C F K W..Q A
Quality: the tour length: A to C = 234, C to F = 142, ... Total 10,231
Operator: swap two cities, e.g.
A C F K W..Q A
A C K F W..Q A

Function Optimizing
State: values for the variables, e.g. x = 2; y = 7 for y = log(x) + sin(tan(y-x))
Quality: the value of the function: log(2) + sin(tan(7-2)) = 2.00305
Operator: nudge a value, e.g.
x = add_10_percent(x)
y = subtract_10_percent(y)
...

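The state/quality/operator triples above can be sketched directly. Note that Python's math.log is the natural log, so the quality value below comes out differently from the slide's 2.00305, which may assume a different log base:

```python
import math
import random

random.seed(0)

# TSP state: an ordering of cities. Operator: swap two cities.
def swap_two(tour):
    i, j = random.sample(range(len(tour)), 2)
    tour = list(tour)
    tour[i], tour[j] = tour[j], tour[i]
    return tour

# Function-optimizing state: values for x and y. Operators nudge one value.
def quality(x, y):
    return math.log(x) + math.sin(math.tan(y - x))

def add_10_percent(v):
    return v * 1.1

def subtract_10_percent(v):
    return v * 0.9

x, y = 2.0, 7.0
before = quality(x, y)       # evaluate the current state
x = add_10_percent(x)        # move to a neighbouring state
y = subtract_10_percent(y)
after = quality(x, y)
```

Everything that follows (hill climbing, simulated annealing, genetic algorithms) is just a policy for deciding which operator applications to keep.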

Hill-Climbing I

function Hill-Climbing (problem) returns a solution state
  inputs: problem            // a problem
  local variables: current   // a node
                   next      // a node

  current ← Make-Node(Initial-State[problem])   // make random initial state
  loop do
    next ← a highest-valued successor of current
    if Value[next] < Value[current] then return current
    current ← next
  end
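A minimal sketch of this pseudocode in Python, applied to the money function z = sin(x) + tan(y) + 1.25 from the opening example (the 0.01 step size is an arbitrary choice, and <= is used instead of the pseudocode's strict < so the loop also stops on plateaus):

```python
import math

def z(state):
    x, y = state
    return math.sin(x) + math.tan(y) + 1.25

def successors(state, step=0.01):
    """Neighbouring states: nudge x or y by one step, staying in [0, 10]."""
    x, y = state
    cand = [(x + step, y), (x - step, y), (x, y + step), (x, y - step)]
    return [s for s in cand if 0 <= s[0] <= 10 and 0 <= s[1] <= 10]

def hill_climb(state, max_steps=100_000):
    for _ in range(max_steps):
        nxt = max(successors(state), key=z)       # highest-valued successor
        if z(nxt) <= z(state):                    # no improvement: local maximum
            return state
        state = nxt
    return state

top = hill_climb((5.0, 0.5))
```

From this start the climber rides tan(y) up toward the asymptote near y = pi/2 and stops on the last grid point before it; whether that is the global optimum depends entirely on where the climb started.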

How would Hill-Climbing do on the following problems?

How can we improve Hill-Climbing?
Random restarts! Intuition: call hill-climbing as many times as you can afford, and choose the best answer.
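The random-restart idea can be sketched on a 1-D objective with several local maxima (the function, step size, and number of restarts below are all arbitrary choices for illustration):

```python
import math
import random

def f(x):
    """A bumpy objective with several local maxima."""
    return math.sin(5 * x) - (x - 2) ** 2 / 10

def climb(x, step=0.01, max_steps=10_000):
    """Plain hill climbing on a 1-D grid of width `step`."""
    for _ in range(max_steps):
        best = max((x - step, x, x + step), key=f)
        if best == x:          # neither neighbour improves
            return x
        x = best
    return x

random.seed(0)
starts = [random.uniform(-5, 5) for _ in range(20)]
best = max((climb(s) for s in starts), key=f)   # keep the best of 20 climbs
```

A single climb frequently parks on whichever local bump its start point belongs to; taking the best of many independent climbs makes finding a high peak far more likely, at the cost of more CPU time.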

Simulated Annealing

function Simulated-Annealing (problem, schedule) returns a solution state
  inputs: problem     // a problem
          schedule    // a mapping from time to "temperature"
  local variables: current   // a node
                   next      // a node
                   T         // a "temperature" controlling the probability of downward steps

  current ← Make-Node(Initial-State[problem])
  for t ← 1 to ∞ do
    T ← schedule[t]
    if T = 0 then return current
    next ← a randomly selected successor of current
    ΔE ← Value[next] - Value[current]
    if ΔE > 0 then current ← next
    else current ← next only with probability e^(ΔE/T)
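A direct transcription of the pseudocode, applied to a toy 1-D maximization. The geometric cooling schedule and the ±0.5 neighbourhood are arbitrary choices, not part of the pseudocode:

```python
import math
import random

def simulated_annealing(value, start, neighbour, schedule):
    """Accept every uphill move; accept a downhill move with
    probability e^(dE/T), where T is the current temperature."""
    current = start
    t = 0
    while True:
        t += 1
        T = schedule(t)
        if T <= 0:
            return current
        nxt = neighbour(current)
        dE = value(nxt) - value(current)
        if dE > 0 or random.random() < math.exp(dE / T):
            current = nxt

random.seed(1)
f = lambda x: -(x - 3) ** 2                       # maximize: peak at x = 3
cooling = lambda t: 0 if t > 5000 else 2.0 * 0.999 ** t
result = simulated_annealing(
    f, start=-5.0,
    neighbour=lambda x: x + random.uniform(-0.5, 0.5),
    schedule=cooling)
```

Early on, the high temperature lets the walk accept bad moves and escape local maxima; as T decays, e^(dE/T) collapses toward zero for any dE < 0 and the process freezes into hill climbing.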

Genetic Algorithms I (R and N, pages 619-621)

Variation (members of the same species differ in some ways).
Heritability (some of the variability is inherited).
Finite resources (not every individual will live to reproductive age).

Given the above, the basic idea of natural selection is this:
Some of the characteristics that are variable will be advantageous to survival. Thus, the individuals with the desirable traits are more likely to reproduce and have offspring with similar traits...
And therefore the species evolves over time.

Since natural selection is known to have solved many important optimization problems, it is natural to ask: can we exploit the power of natural selection?

[Photo: Richard Dawkins]

Genetic Algorithms II
The basic idea of genetic algorithms (evolutionary programming):
Initialize a population of n states (randomly).
While time allows:
  Measure the quality of the states using some fitness function.
  Kill off some of the states.
  Allow the surviving states to reproduce (sexually or asexually or...).
end
Report the best state as the answer.

All we need do is... (A) Figure out how to represent the states. (B) Figure out a fitness function. (C) Figure out how to allow our states to reproduce.
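The loop above can be sketched end to end on a toy problem. The states are bitstrings, the fitness function counts 1s (the classic "OneMax" problem), and the population size, generation count, and mutation rate are all arbitrary choices:

```python
import random

random.seed(0)
GENES, POP, GENERATIONS = 20, 30, 60

def fitness(s):
    """Toy fitness: number of 1 bits (the 'OneMax' problem)."""
    return sum(s)

def crossover(a, b):
    """Sexual reproduction: child takes a's prefix and b's suffix."""
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(s, rate=0.01):
    """Flip each bit independently with small probability."""
    return [bit ^ (random.random() < rate) for bit in s]

# Initialize a population of random states.
pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[: POP // 2]                 # kill off the worst half
    children = [mutate(crossover(*random.sample(survivors, 2)))
                for _ in range(POP - len(survivors))]
    pop = survivors + children
best = max(pop, key=fitness)                    # report best state as answer
```

The three design decisions (A), (B), (C) show up directly as the bitstring representation, the `fitness` function, and the `crossover`/`mutate` pair.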

Genetic Algorithms III

One possible representation of the states is a tree structure, e.g. for log(x^y) + sin(tan(y-x)).
[Figure: expression tree with nodes +, log, pow, sin, tan]

Another is a bitstring:
100111010101001

For problems where we are trying to find the best order to do something (TSP), a linked list might work...

Genetic Algorithms IV

Usually the fitness function is fairly trivial.
For the function-maximizing problem we can evaluate the given function with the state (the values for x, y, z... etc).
For the function-finding problem we can evaluate the function and see how closely it matches the data.
For TSP the fitness function is just the length of the tour represented by the linked list.
[Figure: a linked-list tour with edge lengths C 23 E 12 F 56 D 77 B 36 A 83, and an expression tree]

Genetic Algorithms V

Sexual Reproduction (crossover)

Tree states: the child of A and B combines a subtree of parent state A with a subtree of parent state B.
[Figure: expression trees for parent state A, parent state B, and the child of A and B]

Bitstring states:
Parent state A:   11101000
Parent state B:   10011000
Child of A and B: 10011101

Genetic Algorithms VI

Asexual Reproduction (mutation)

Tree states: the child of A is a copy of parent state A with one node mutated.
[Figure: expression trees for parent state A and the child of A]

Bitstring states:
Parent state A: 10011101
Child of A:     10011111   (one bit flipped)

Ordering states: mutation can swap two elements, e.g. D and F.
[Figure: linked-list states for parent state A and the child of A]
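The bitstring operators from these two slides can be sketched in a few lines. The parent strings are taken from the slides; the crossover cut point and which bit gets mutated are arbitrary:

```python
import random

random.seed(0)

def crossover(parent_a, parent_b, cut):
    """Single-point crossover on bitstrings: a's prefix + b's suffix."""
    return parent_a[:cut] + parent_b[cut:]

def mutate(s):
    """Asexual reproduction with mutation: flip one randomly chosen bit."""
    i = random.randrange(len(s))
    return s[:i] + ('1' if s[i] == '0' else '0') + s[i + 1:]

child = crossover('11101000', '10011000', cut=2)   # parents from the slide
mutant = mutate('10011101')                        # parent state A
```

Tree-structured states work the same way in spirit: crossover swaps randomly chosen subtrees between parents, and mutation replaces one node or subtree.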

Discussion of Genetic Algorithms

It turns out that the policy of keeping the best n individuals is not the best idea.
Genetic algorithms require many parameters... (population size, fraction of the population generated by crossover, mutation rate, number of sexes...). How do we set these?
Genetic algorithms are really just a kind of hill-climbing search, but seem to have fewer problems with local maxima.
Genetic algorithms are very easy to parallelize...
Applications: protein folding, circuit design, the job-shop scheduling problem, timetabling, designing wings for aircraft.
