
Variations of Evolution Strategies
2-25-2020
Does a parent population exist?
• Yes, in earlier versions of ES

• Not in CMA-ES: there, the mean and variances of the sampling distribution are evolved and explicitly represented instead.
What information is associated with each individual?
• Originally, just the position in data space, with a single step size for
the whole population and all dimensions.

• Both position and step size in later versions of ES, with the step size also being inherited from the parents and subject to mutation and crossover (see the sketch after this list).

• Nothing, in CMA-ES; i.e., the various parameters are associated with the entire distribution.
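Below is a minimal sketch of the later-style ES individual mentioned above: a position plus its own per-dimension step sizes, with the step sizes mutated by the standard log-normal self-adaptation rule before they are used to perturb the position. The function and variable names (mutate, tau, tau_prime) are illustrative, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate(position, sigmas, tau=None, tau_prime=None):
    """Self-adaptive ES mutation: perturb the step sizes first (log-normal),
    then use the new step sizes to perturb the position."""
    n = len(position)
    tau = tau if tau is not None else 1.0 / np.sqrt(2.0 * np.sqrt(n))           # per-dimension learning rate
    tau_prime = tau_prime if tau_prime is not None else 1.0 / np.sqrt(2.0 * n)  # global learning rate
    new_sigmas = sigmas * np.exp(tau_prime * rng.normal() + tau * rng.normal(size=n))
    new_position = position + new_sigmas * rng.normal(size=n)
    return new_position, new_sigmas

# One individual in a 3-dimensional search space: position and its own step sizes.
x, s = np.array([1.0, -2.0, 0.5]), np.array([0.3, 0.3, 0.3])
child_x, child_s = mutate(x, s)
```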
How does selection occur?
• Early ‘,’ (comma) ES: the next generation consists of the best of the offspring only.

• Early ‘+’ (plus) ES: the next generation consists of the best of the combined parents and offspring.

• CMA-ES: the new “mean” is computed as the old mean perturbed in the direction of a weighted average of the best offspring from the previous generation, with greater weights for better offspring (sketched below).
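A minimal sketch of the CMA-ES-style mean update described in the last bullet, assuming minimization and the common log-rank weighting scheme (the weight formula is a standard choice, not taken from the slides):

```python
import numpy as np

def update_mean(old_mean, offspring, fitnesses, mu):
    """Move the old mean toward a weighted average of the best mu offspring,
    with larger weights for better-ranked offspring."""
    order = np.argsort(fitnesses)              # best (lowest) fitness first
    best = np.asarray(offspring)[order[:mu]]   # the mu best offspring, shape (mu, n)
    weights = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    weights /= weights.sum()                   # positive weights summing to 1
    return old_mean + weights @ (best - old_mean)
```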
How many kinds of step size parameters exist?
• Simplest: just one.

• Next: one per individual in parent population.

• Dimensional: one per dimension per individual.

• CMA-ES: one per dimension (array shapes for each case are sketched below).
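Purely to make the four cases above concrete, here are the array shapes each parameterization would use for a population of 4 individuals in a 3-dimensional space (illustrative code, not from the slides):

```python
import numpy as np

pop_size, n_dims = 4, 3
sigma_single = 0.3                                        # simplest: one step size for everyone
sigma_per_individual = np.full(pop_size, 0.3)             # one per individual in the parent population
sigma_per_dim_per_ind = np.full((pop_size, n_dims), 0.3)  # dimensional: one per dimension per individual
sigma_per_dim = np.full(n_dims, 0.3)                      # CMA-ES (per the slide): one per dimension
```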


How is a step size parameter adapted?
• Simplest: no adaptation.
• Scheduled: fixed decrease in step size as iterations progress; if
population is restarted, then start with large value again.
• Next: the 1/5 success rule (sketched after this list)
• A little better: 1/5 with 1/20 rule
• Dimensional relationships: use the covariance matrix (itself adapted
based on progress)
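A minimal sketch of the 1/5 success rule named in this list: track how often mutations improve on the parent over a window of trials, then enlarge the step size if the success rate is above 1/5 and shrink it if below. The adjustment factor 0.85 is a conventional choice, not from the slides.

```python
def adapt_step_size(sigma, num_successes, num_trials, factor=0.85):
    """1/5 success rule: widen the search when successes are frequent,
    narrow it when they are rare."""
    success_rate = num_successes / num_trials
    if success_rate > 1.0 / 5.0:
        return sigma / factor   # too easy to improve: take bigger steps
    if success_rate < 1.0 / 5.0:
        return sigma * factor   # too hard to improve: take smaller steps
    return sigma
```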
Adapting the Covariance Matrix in CMA-ES
New matrix C_{g+1} = a weighted average of three terms:
• Old matrix C_g
• Rank-μ update that captures information from the best μ of the
offspring.
• Rank-one update that captures the “evolution path”: the direction along which the mean has moved in recent generations, with greater weight for more recent moves. (The full update is written out below.)
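Written out, a common textbook form of this weighted average is given below; the coefficients c_1 and c_mu, the weights w_i, and the normalized steps y_i = (x_i - m_g) / sigma_g follow standard CMA-ES notation and are not specified on the slide.

```latex
C_{g+1} \;=\;
\underbrace{(1 - c_1 - c_\mu)\, C_g}_{\text{old matrix}}
\;+\;
\underbrace{c_1\, p_c\, p_c^{\top}}_{\text{rank-one update (evolution path)}}
\;+\;
\underbrace{c_\mu \sum_{i=1}^{\mu} w_i\, y_i\, y_i^{\top}}_{\text{rank-}\mu\text{ update (best }\mu\text{ offspring)}}
```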
