A Comparison of Selection Schemes Used in Genetic Algorithms
Tobias Blickle and Lothar Thiele, Computer Engineering and Communication Networks Lab (TIK), Swiss Federal Institute of Technology (ETH), Gloriastrasse 35, 8092 Zurich, Switzerland, fblickle,[email protected]. TIK-Report Nr. 11, December 1995, Version 2 (2nd Edition).
Abstract
Genetic Algorithms are a common probabilistic optimization method based on the model of natural evolution. One important operator in these algorithms is the selection scheme, for which a new description model is introduced in this paper. With this model a mathematical analysis of tournament selection, truncation selection, linear and exponential ranking selection, and proportional selection is carried out that allows an exact prediction of the fitness values after selection. The further analysis derives the selection intensity, selection variance, and the loss of diversity for all selection schemes. For completeness a pseudo-code formulation of each method is included. The selection schemes are compared and evaluated according to their properties, leading to a unified view of these different selection schemes. Furthermore, the correspondence of binary tournament selection and ranking selection in the expected fitness distribution is proven.
Foreword
This paper is the revised and extended version of the TIK-Report No. 11 from April, 1995. The main additions to the first edition are the analysis of exponential ranking selection and proportional selection. Proportional selection is only included for completeness; we believe that it is a very unsuited selection method, and we will show this (as has been done by other researchers, too) based on a mathematical analysis in chapter 7. Furthermore, for each selection scheme a pseudo-code notation is given and a short remark on time complexity is included. The main correction concerns the approximation formula for the selection variance of tournament selection. The approximation given in the first edition was completely wrong. In this report the approximation formula is derived by a genetic algorithm, or more precisely by the genetic programming optimization method. The method used is described in appendix A and is also applied to derive an analytic approximation for the selection intensity and selection variance of exponential ranking selection. We hope that this report summarizes the most important facts for these five selection schemes and gives all researchers a well-founded basis to choose the appropriate selection scheme for their purpose.
Tobias Blickle
Contents
1 Introduction
2 Description of Selection Schemes
  2.1 Average Fitness
  2.2 Fitness Variance
  2.3 Reproduction Rate
  2.4 Loss of Diversity
  2.5 Selection Intensity
  2.6 Selection Variance
3 Tournament Selection
  3.1 Concatenation of Tournament Selection
  3.2 Reproduction Rate
  3.3 Loss of Diversity
  3.4 Selection Intensity
  3.5 Selection Variance
4 Truncation Selection
  4.1 Reproduction Rate
  4.2 Loss of Diversity
  4.3 Selection Intensity
  4.4 Selection Variance
5 Linear Ranking Selection
  5.1 Reproduction Rate
  5.2 Loss of Diversity
  5.3 Selection Intensity
  5.4 Selection Variance
6 Exponential Ranking Selection
  6.1 Reproduction Rate
  6.2 Loss of Diversity
  6.3 Selection Intensity and Selection Variance
7 Proportional Selection
  7.1 Reproduction Rate
  7.2 Selection Intensity
8 Comparison of Selection Schemes
  8.1 Reproduction Rate and Universal Selection
  8.2 Comparison of the Selection Intensity
  8.3 Comparison of Loss of Diversity
  8.4 Comparison of the Selection Variance
  8.5 The Complement Selection Schemes: Tournament and Linear Ranking
9 Conclusion
  A.1 Approximating the Selection Variance of Tournament Selection
  A.2 Approximating the Selection Intensity of Exponential Ranking Selection
  A.3 Approximating the Selection Variance of Exponential Ranking Selection
C Glossary
Bibliography
Chapter 1 Introduction
Genetic Algorithms (GA) are probabilistic search algorithms characterized by the fact that a number N of potential solutions (called individuals J_i ∈ J, where J represents the space of all possible individuals) of the optimization problem simultaneously sample the search space. This population P = {J_1, J_2, ..., J_N} is modified according to the natural evolutionary process: after initialization, selection Ω : J^N → J^N and recombination (also a mapping J^N → J^N) are executed in a loop until some termination criterion is reached. Each run of the loop is called a generation and P(τ) denotes the population at generation τ.

The selection operator is intended to improve the average quality of the population by giving individuals of higher quality a higher probability to be copied into the next generation. Selection thereby focuses the search on promising regions in the search space. The quality of an individual is measured by a fitness function f : J → R. Recombination changes the genetic material in the population either by crossover or by mutation in order to explore new points in the search space.

The balance between exploitation and exploration can be adjusted either by the selection pressure of the selection operator or by the recombination operator, e.g. by the probability of crossover. As this balance is critical for the behavior of the GA, it is of great interest to know the properties of the selection and recombination operators in order to understand their influence on the convergence speed.

Some work has been done to classify the different selection schemes such as proportionate selection, ranking selection, and tournament selection. Goldberg [Goldberg and Deb, 1991] introduced the term of takeover time. The takeover time is the number of generations that is needed for a single best individual to fill up the whole population if no recombination is used. Recently Back [Back, 1994] has analyzed the most prominent selection schemes used in Evolutionary Algorithms with respect to their takeover time. In [Muhlenbein and Schlierkamp-Voosen, 1993] the selection intensity in the so-called Breeder Genetic Algorithm (BGA) is used to measure the progress in the population. The selection intensity is derived for proportional selection and truncation selection. De la Maza and Tidor [de la Maza and Tidor, 1993] analyzed several selection methods according to their scale and translation invariance.

An analysis based on the behavior of the best individual (as done by Goldberg and Back) or on the average population fitness (as done by Muhlenbein) only describes one aspect of a selection method. In this paper a selection scheme is described by its interaction with the distribution of fitness values. Out of this description several properties can be derived, e.g. the behavior of the best or average individual. The description is introduced in the next chapter. In chapter 3 an analysis of tournament selection is carried out and the properties of tournament selection are derived. The subsequent chapters deal with truncation selection, linear ranking selection, and exponential ranking selection. Chapter 7 is devoted to proportional selection, which represents some kind of exception to the other selection schemes analyzed in this paper. Finally all selection schemes are compared.
Figure 2.1: Flowchart of the Genetic Algorithm.

It is possible to describe a selection method as a function that transforms a fitness distribution into another fitness distribution.

Definition 2.0.2 (Selection method) A selection method Ω is a function that transforms a fitness distribution s into a new fitness distribution s':

s' = Ω(s, par_list)   (2.1)

par_list is an optional parameter list of the selection method. As the selection methods are probabilistic, we will often make use of the expected fitness distribution.
Ω*(s, par_list) denotes the expected fitness distribution after applying the selection method Ω to the fitness distribution s, i.e.

Ω*(s, par_list) = E( Ω(s, par_list) )   (2.2)

The notation s* = Ω*(s, par_list) will be used as an abbreviation.
It is interesting to note that it is also possible to calculate the variance of the resulting distribution.

Theorem 2.0.1 The mean variance in obtaining the expected fitness distribution s* is

σ̄_{s*}²(f_i) = s*(f_i) ( 1 − s*(f_i)/N )   (2.3)
Proof: s*(f_i) denotes the expected number of individuals with fitness value f_i after selection. It is obtained by performing N experiments "select an individual from the population using a certain selection mechanism". Hence the selection probability of an individual with fitness value f_i is given by p_i = s*(f_i)/N. To each fitness value there corresponds a Bernoulli trial "an individual with fitness f_i is selected". As the variance of a Bernoulli trial repeated N times is given by σ² = N p (1 − p), (2.3) is obtained using p_i. □

The index s in σ̄_s stands for "sampling", as it is the mean variance due to the sampling of the finite population. The variance of (2.3) is obtained by performing the selection method in N independent experiments. It is possible to reduce the variance almost completely by using more sophisticated sampling algorithms to select the individuals. We will introduce Baker's "stochastic universal sampling" algorithm (SUS) [Baker, 1987], which is an optimal sampling algorithm, when we compare the different selection schemes in chapter 8.

Definition 2.0.4 (Cumulative fitness distribution) Let n be the number of unique fitness values and f_1 < ... < f_{n−1} < f_n (n ≤ N) the ordering of the fitness values, with f_1 denoting the worst fitness occurring in the population and f_n denoting the best fitness in the population. S(f_i) denotes the number of individuals with fitness value f_i or worse and is called the cumulative fitness distribution, i.e.

S(f_0) = 0 and S(f_i) = Σ_{j=1}^{i} s(f_j) for 1 ≤ i ≤ n   (2.4)
Example 2.0.1 As an example we use the fitness distribution of the "wall-following-robot" from Koza [Koza, 1992]. This distribution is typical of problems solved by genetic programming (many bad and only very few good individuals exist). Figure 2.2 shows the distribution s(f) (left) and the cumulative distribution S(f) (right).

We will now describe the distribution s(f) as a continuous distribution s̄(f), allowing the following properties to be derived more easily. To do so, we assume
Figure 2.2: The fitness distribution s(f) and the cumulative fitness distribution S(f) for the "wall-following-robot" problem.

continuously distributed fitness values. The range of the function s̄(f) is f_0 < f ≤ f_n, using the same notation as in the discrete case. We denote all functions in the continuous case with a bar, e.g. we write s̄(f) instead of s(f). Similarly, sums are replaced by integrals; for example
S̄(f) = ∫_{f_0}^{f} s̄(x) dx   (2.5)
We will also make use of the Gaussian distribution G(μ, σ) with mean μ and variance σ²:

G(μ, σ)(x) = (1/(√(2π) σ)) e^{−(x−μ)²/(2σ²)}
Definition 2.1.1 (Average fitness) M denotes the average fitness of the population before selection and M* denotes the expected average fitness after selection:

M = (1/N) ∫_{f_0}^{f_n} s̄(f) f df   (2.6)

M* = (1/N) ∫_{f_0}^{f_n} s̄*(f) f df   (2.7)
Figure 2.3: The fitness distribution s̄_G(f) (left) and the cumulative fitness distribution S̄_G(f) (right).
Definition 2.2.1 (Fitness variance) The fitness variance σ² denotes the variance of the fitness distribution s̄(f) before selection, and σ*² denotes the variance of the expected fitness distribution s̄*(f) after selection:

σ²  = (1/N) ∫_{f_0}^{f_n} s̄(f) (f − M)² df  = (1/N) ∫_{f_0}^{f_n} f² s̄(f) df − M²   (2.9)

σ*² = (1/N) ∫_{f_0}^{f_n} s̄*(f) (f − M*)² df = (1/N) ∫_{f_0}^{f_n} f² s̄*(f) df − (M*)²   (2.10)

Note the difference between this variance and the variance in obtaining a certain fitness distribution characterized by theorem 2.0.1.
The reproduction rate R(f) denotes the ratio of the number of individuals with fitness value f after and before selection:

R(f) = s*(f)/s(f),   s(f) > 0   (2.11)

A reasonable selection method should favor good individuals by assigning them a reproduction rate R(f) > 1 and punish bad individuals by a reproduction rate R(f) < 1.
Definition 2.4.1 (Loss of diversity) The loss of diversity p_d is the proportion of individuals of the population that is not selected during the selection phase.

Theorem 2.4.1 Let f_z denote the fitness value with reproduction rate R(f_z) = 1. Then the loss of diversity is

p_d = (1/N) ( S̄(f_z) − S̄*(f_z) )   (2.12)

Proof: For all fitness values f ∈ (f_0, f_z] the reproduction rate is less than one. Hence the number of individuals that are not selected during selection is given by ∫_{f_0}^{f_z} (s̄(x) − s̄*(x)) dx. It follows that

p_d = (1/N) ∫_{f_0}^{f_z} (s̄(x) − s̄*(x)) dx
    = (1/N) ( ∫_{f_0}^{f_z} s̄(x) dx − ∫_{f_0}^{f_z} s̄*(x) dx )
    = (1/N) ( S̄(f_z) − S̄*(f_z) )   □

The loss of diversity should be as low as possible because a high loss of diversity increases the risk of premature convergence. In his dissertation [Baker, 1989], Baker has introduced a similar measure called "reproduction rate RR". RR gives the percentage of individuals that is selected to reproduce, hence RR = 100(1 − p_d).
We use the term "selection intensity" in the same way it is used in population genetics [Bulmer, 1980]. Muhlenbein has adopted the definition and applied it to genetic algorithms [Muhlenbein and Schlierkamp-Voosen, 1993]. Recently more and more researchers are using this term to characterize selection schemes [Thierens and Goldberg, 1994a; Thierens and Goldberg, 1994b; Back, 1995; Blickle and Thiele, 1995]. The change of the average fitness of the population due to selection is a reasonable measure for the selection intensity. In population genetics the term selection intensity was introduced to obtain a normalized, dimension-less measure. The idea is to measure the progress due to selection by the so-called "selection differential", i.e. the difference between the average fitness of the population after and before selection. Dividing this selection differential by the mean variance σ̄ of the population fitness leads to the desired dimension-less measure, which is called the selection intensity:

I = (M* − M) / σ̄   (2.13)
By this, the selection intensity depends on the fitness distribution of the initial population. Hence, different fitness distributions will in general lead to different selection intensities for the same selection method. For comparison it is necessary to restrict oneself to a certain initial distribution. Using the normalized Gaussian distribution G(0,1) as initial fitness distribution leads to the following definition.
Definition 2.5.2 (Standardized selection intensity) The standardized selection intensity I is the expected average fitness value of the population after applying the selection method Ω to the normalized Gaussian distribution G(0,1)(f) = (1/√(2π)) e^{−f²/2}:

I = ∫_{−∞}^{∞} f Ω*(G(0,1))(f) df   (2.14)
The "effective" average fitness value of a Gaussian distribution with mean μ and variance σ² can easily be derived as M* = I σ + μ. Note that this definition of the standardized selection intensity can only be applied if the selection method is scale and translation invariant. This is the case for all selection schemes examined in this paper except proportional selection. Likewise, this definition has no equivalent in the case of discrete fitness distributions. If the selection intensity for a discrete distribution has to be calculated, one must refer to Definition 2.5.1. In the remainder of this paper we use the term "selection intensity" as equivalent to "standardized selection intensity", as our intention is the comparison of selection schemes.
Definition 2.6.1 (Selection variance) The selection variance V is the normalized expected variance of the fitness distribution of the population after applying the selection method Ω to the fitness distribution s(f), i.e.

V = σ*²/σ²   (2.15)

Definition 2.6.2 (Standardized selection variance) The standardized selection variance V is the normalized expected variance of the fitness distribution of the population after applying the selection method Ω to the normalized Gaussian distribution G(0,1):

V = ∫_{−∞}^{∞} (f − I)² Ω*(G(0,1))(f) df   (2.16)

which is equivalent to

V = ∫_{−∞}^{∞} f² Ω*(G(0,1))(f) df − I²   (2.17)
Note that there is a difference between the selection variance and the loss of diversity. The loss of diversity gives the proportion of individuals that are not selected, regardless of their fitness value. The standardized selection variance is defined as the new variance of the fitness distribution, assuming a Gaussian initial fitness distribution. Hence a selection variance of 1 means that the variance is not changed by selection. A selection variance less than 1 indicates a decrease in variance. The lowest possible value of V is zero, which means that the variance of the fitness values of the population after selection is itself zero. Again we will use the term "selection variance" as equivalent to "standardized selection variance".
Chapter 3 Tournament Selection
Algorithm 1: (Tournament Selection)
Input: The population P(τ) and the tournament size t ∈ {1, 2, ..., N}
Output: The population after selection P(τ)'

tournament(t, J_1, ..., J_N):
  for i ← 1 to N do
    J'_i ← best fit individual out of t randomly picked individuals from {J_1, ..., J_N}
  od
  return {J'_1, ..., J'_N}

The outline of the algorithm shows that tournament selection can be implemented very efficiently, as no sorting of the population is required; implemented in the way above it has time complexity O(N). Using the notation introduced in the previous chapter, the entire fitness distribution after selection can be predicted. The prediction will be made for the discrete (exact) fitness distribution as well as for a continuous fitness distribution. These results were first published in [Blickle and Thiele, 1995]. The calculations assume that tournament selection is done with replacement.
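As a concrete illustration of the pseudo-code above, the following is a minimal Python sketch of tournament selection with replacement; the population is represented simply as a list of fitness values, and the function name and interface are illustrative assumptions, not part of the original report.

import random

def tournament_selection(fitnesses, t, rng=random):
    """Return N fitness values selected by tournament selection (higher fitness is better)."""
    n = len(fitnesses)
    selected = []
    for _ in range(n):
        # pick t individuals uniformly at random (with replacement) and keep the best one
        competitors = [rng.randrange(n) for _ in range(t)]
        winner = max(competitors, key=lambda idx: fitnesses[idx])
        selected.append(fitnesses[winner])
    return selected

if __name__ == "__main__":
    random.seed(1)
    pop = [random.gauss(0.0, 1.0) for _ in range(1000)]
    after = tournament_selection(pop, t=10)
    print(sum(pop) / len(pop), sum(after) / len(after))  # average fitness rises after selection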
Theorem 3.0.1 The expected fitness distribution after performing tournament selection with tournament size t on the distribution s is

Ω*_T(s, t)(f_i) = s*(f_i) = N ( (S(f_i)/N)^t − (S(f_{i−1})/N)^t )   (3.1)

Proof: We first calculate the expected number of individuals with fitness f_i or worse, i.e. S*(f_i). An individual with fitness f_i or worse can only win the tournament if all other individuals in the tournament have a fitness of f_i or worse. This means we have to calculate the probability that all t individuals have a fitness of f_i or worse. As the probability to choose an individual with fitness f_i or worse is given by S(f_i)/N, we get

S*(f_i) = N (S(f_i)/N)^t   (3.2)

Using this equation and the relation s*(f_i) = S*(f_i) − S*(f_{i−1}) (see Definition 2.0.4) we obtain (3.1). □

Equation (3.1) shows the strong influence of the tournament size t on the behavior of the selection scheme. Obviously for t = 1 we obtain (on average) the unchanged initial distribution, as Ω*_T(s, 1)(f_i) = N (S(f_i)/N − S(f_{i−1})/N) = S(f_i) − S(f_{i−1}) = s(f_i). In [Back, 1994] the probability for individual number i to be selected by tournament selection is given by p_i = N^{−t} ( (N − i + 1)^t − (N − i)^t ), under the assumption that the individuals are ordered according to their fitness value f(J_1) ≥ f(J_2) ≥ ... ≥ f(J_N). Note that Back uses a "reversed" fitness function where the best individual has the lowest index. For comparison with our results we transform the task into a maximization task using j = N − i + 1:

p_j = N^{−t} ( j^t − (j − 1)^t ),   1 ≤ j ≤ N   (3.3)

This formula is a special case of (3.1) with all individuals having a different fitness value: then s(f_i) = 1 for all i ∈ [1, N], S(f_i) = i, and p_i = s*(f_i)/N yields the same equation as given by Back. Note that (3.3) is not valid if some individuals have the same fitness value.

Example 3.0.1 Using the discrete fitness distribution from Example 2.0.1 (Figure 2.2), we obtain the fitness distribution shown in Figure 3.1 after applying tournament selection with a tournament size t = 10. In addition to the expected distribution, the two graphs for s*(f) − σ̄_{s*}(f) and s*(f) + σ̄_{s*}(f) are also shown. Hence a distribution obtained from one tournament run will lie in the given interval (the confidence interval) with a probability of 68%. The high agreement between the theoretically derived results and a simulation is verified in Figure 3.2, where the distribution according to (3.1) and the average of 20 simulations are shown.
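The prediction of Theorem 3.0.1 is easy to check numerically. The sketch below (a small illustrative distribution is assumed, not the "wall-following-robot" data) compares the expected distribution from equation (3.1) with the average of repeated tournament selection runs.

import random
from collections import Counter

def expected_tournament_distribution(s, t):
    """Expected fitness distribution after tournament selection, equation (3.1).

    s -- dict mapping fitness value -> number of individuals
    """
    n = sum(s.values())
    expected, cum_prev = {}, 0
    for f in sorted(s):
        cum = cum_prev + s[f]                                  # S(f_i)
        expected[f] = n * ((cum / n) ** t - (cum_prev / n) ** t)
        cum_prev = cum
    return expected

if __name__ == "__main__":
    random.seed(0)
    s, t = {1: 50, 2: 30, 3: 15, 4: 5}, 3                      # example distribution, N = 100
    print(expected_tournament_distribution(s, t))

    # Monte Carlo check: average of many independent tournament selection runs
    pop = [f for f, cnt in s.items() for _ in range(cnt)]
    runs, acc = 200, Counter()
    for _ in range(runs):
        acc.update(max(random.choices(pop, k=t)) for _ in range(len(pop)))
    print({f: acc[f] / runs for f in sorted(s)})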
Figure 3.1: The resulting expected fitness distribution and the confidence interval of 68% after applying tournament selection with a tournament size of 10.

In Example 3.0.1 we can see a very high variance in the distribution, which arises from the fact that the individuals are selected in N independent trials. In chapter 8.1 we will meet the so-called "stochastic universal sampling" method that minimizes this mean variance.
Theorem 3.0.2 Let s̄(f) be the continuous fitness distribution of the population. Then the expected fitness distribution after performing tournament selection with tournament size t is

Ω̄*_T(s̄, t)(f) = s̄*(f) = t s̄(f) ( S̄(f)/N )^{t−1}   (3.4)

Proof: Analogous to the proof of the discrete case, the probability of an individual with fitness f or worse to win the tournament is given by

S̄*(f) = N ( S̄(f)/N )^t   (3.5)

Differentiating with respect to f yields (3.4). □
Example 3.0.2 Figure 3.3 shows the resulting fitness distributions after applying tournament selection with different tournament sizes to a Gaussian initial fitness distribution.
Figure 3.2: Comparison between the theoretically derived distribution (—) and simulation (- - -) for tournament selection (tournament size t = 10).
3.1 Concatenation of Tournament Selection

Theorem 3.1.1 Let s̄(f) be the continuous fitness distribution of the population. Applying tournament selection with tournament size t_1 and subsequently tournament selection with tournament size t_2 is equivalent (in expectation) to applying tournament selection with tournament size t_1 t_2 once:

Ω̄*_T( Ω̄*_T(s̄, t_1), t_2 ) = Ω̄*_T(s̄, t_1 t_2)   (3.6)

Proof: As

(1/N) ∫_{f_0}^{f} t_1 s̄(x) ( (1/N) ∫_{f_0}^{x} s̄(y) dy )^{t_1 − 1} dx = ( (1/N) ∫_{f_0}^{f} s̄(x) dx )^{t_1}
Figure 3.3: A Gaussian fitness distribution approximately leads again to Gaussian distributions after tournament selection (from left to right: initial distribution, t = 2, t = 5, t = 10).

we can write

Ω̄*_T( Ω̄*_T(s̄, t_1), t_2 )(f)
  = t_2 t_1 s̄(f) ( (1/N) ∫_{f_0}^{f} s̄(x) dx )^{t_1 − 1} ( (1/N) ∫_{f_0}^{f} s̄(x) dx )^{t_1 (t_2 − 1)}
  = t_2 t_1 s̄(f) ( (1/N) ∫_{f_0}^{f} s̄(x) dx )^{t_1 t_2 − 1}
  = Ω̄*_T(s̄, t_1 t_2)(f)   □
In [Goldberg and Deb, 1991] the proportion P of best-fit individuals after τ selections with tournament size t (without recombination) is given as

P(τ) = 1 − (1 − P_0)^{t^τ}   (3.7)

This can be obtained as a special case from Theorem 3.1.1 if only the best-fit individuals are considered.

Corollary 3.1.1 Let s̄(f) be a fitness distribution representable as

s̄(f) = β ḡ(f) ( (1/N) ∫_{f_0}^{f} ḡ(x) dx )^{β − 1}   (3.8)

with β ≥ 1 and ∫_{f_0}^{f_n} ḡ(x) dx = N. Then the expected distribution after tournament selection with tournament size t is

s̄*(f) = β t ḡ(f) ( (1/N) ∫_{f_0}^{f} ḡ(x) dx )^{β t − 1}   (3.9)

Proof: As s̄(f) can be interpreted as the result of tournament selection with tournament size β on the distribution ḡ(f), (3.9) is directly obtained using Theorem 3.1.1. □

3.2 Reproduction Rate

The reproduction rate of tournament selection is

R_T(f) = s̄*(f)/s̄(f) = t ( S̄(f)/N )^{t−1}   (3.10)

This is directly obtained by substituting (3.4) into (2.11). Individuals with the lowest fitness have a reproduction rate of almost zero and individuals with the highest fitness have a reproduction rate of t.
3.3 Loss of Diversity

The loss of diversity of tournament selection is

p_{d,T}(t) = t^{−1/(t−1)} − t^{−t/(t−1)}   (3.11)

It turns out that the number of individuals lost increases with the tournament size (see Figure 3.4). About half of the population is lost at tournament size t = 5.
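Equation (3.11) is straightforward to evaluate; the following small sketch (illustrative only) lists a few values and confirms the claim about t = 5.

def loss_of_diversity_tournament(t):
    """Loss of diversity of tournament selection, equation (3.11)."""
    if t == 1:
        return 0.0                      # t = 1 leaves the distribution unchanged
    return t ** (-1.0 / (t - 1)) - t ** (-t / (t - 1.0))

if __name__ == "__main__":
    for t in (1, 2, 3, 5, 10, 30):
        print(t, round(loss_of_diversity_tournament(t), 3))
    # t = 2 -> 0.25, t = 5 -> about 0.53, i.e. roughly half of the
    # population is not selected at tournament size 5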
Figure 3.4: The loss of diversity p_{d,T}(t) of tournament selection in dependence of the tournament size t.
3.4 Selection Intensity

The selection intensity of tournament selection is

I_T(t) = ∫_{−∞}^{∞} x t ( ∫_{−∞}^{x} (1/√(2π)) e^{−y²/2} dy )^{t−1} (1/√(2π)) e^{−x²/2} dx   (3.13)

These integral equations can be solved analytically for the cases t = 1, ..., 5 ([Blickle and Thiele, 1995; Back, 1995; Arnold et al., 1992]). For a tournament size of two, Thierens and Goldberg derive the same average fitness value [Thierens and Goldberg, 1994a] in a completely different manner, but their formulation cannot be extended to other tournament sizes. For larger tournament sizes (3.13) can be accurately evaluated by numerical integration. The result is shown on the left side of Figure 3.5 for tournament sizes from 1 to 30. But an explicit expression of (3.13) may not exist. By means of the steepest descent method (see, e.g., [Henrici, 1977]) an approximation for large tournament sizes can be given; even for small tournament sizes this approximation gives acceptable results. The calculations lead to the following recursion equation:

I_T(t)_k ≈ sqrt( c_k ( ln(t) − ln(I_T(t)_{k−1}) ) )   (3.14)

with I_T(t)_0 = 1 and k the recursion depth. The calculation of the constants c_k is difficult. Taking a rough approximation with k = 2, the following equation is obtained, which approximates (3.13) with a relative error of less than 2.4% for t ∈ [2, 5]; for tournament sizes t > 5 the relative error is less than 1%:

I_T(t) ≈ sqrt( 2 ( ln(t) − ln( sqrt(4.14 ln(t)) ) ) )   (3.15)
Figure 3.5: Dependence of the selection intensity (left) and the selection variance (right) on the tournament size t.
3.5 Selection Variance

The selection variance of tournament selection is

V_T(t) = ∫_{−∞}^{∞} (x − I_T(t))² t ( ∫_{−∞}^{x} (1/√(2π)) e^{−y²/2} dy )^{t−1} (1/√(2π)) e^{−x²/2} dx   (3.16)

Here again (3.16) can be solved by numerical integration. The dependence of the selection variance on the tournament size is shown on the right of Figure 3.5. To obtain a useful analytic approximation for the selection variance, we perform a symbolic regression using the genetic programming optimization method. Details about the way the data was computed can be found in appendix A. The following formula approximates the selection variance with a relative error of less than 1.6% for t ∈ {1, ..., 30}:

V_T(t) ≈ sqrt( (2.05 + t) / (3.14 t^{3/2}) ),   t ∈ {1, ..., 30}   (3.17)
Chapter 4 Truncation Selection
Algorithm 2: (Truncation Selection)
Input: The population P(τ) and the truncation threshold T ∈ [0, 1]
Output: The population after selection P(τ)'

truncation(T, J_1, ..., J_N):
  J ← sorted population J according to fitness, with the worst individual at the first position
  for i ← 1 to N do
    r ← random{ ⌈(1 − T) N⌉, ..., N }
    J'_i ← J_r
  od
  return {J'_1, ..., J'_N}

As sorting of the population is required, truncation selection has a time complexity of O(N ln N). Although this method has been investigated several times, we will describe it using the methods derived here, as additional properties can be observed.
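A minimal Python sketch of the pseudo-code above (the helper name and the use of plain fitness lists are illustrative assumptions, and 0 < T <= 1 is assumed):

import math
import random

def truncation_selection(fitnesses, T, rng=random):
    """Truncation selection: draw N individuals uniformly from the best T*N ones."""
    n = len(fitnesses)
    ranked = sorted(fitnesses)                 # worst individual first
    cutoff = math.ceil((1.0 - T) * n)          # first index that may still be selected
    return [ranked[rng.randrange(cutoff, n)] for _ in range(n)]

if __name__ == "__main__":
    random.seed(2)
    pop = [random.gauss(0.0, 1.0) for _ in range(1000)]
    after = truncation_selection(pop, T=0.3)
    print(min(after), sum(after) / len(after))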
Theorem 4.0.1 The expected fitness distribution after performing truncation selection with threshold T on the distribution s is

Ω*_Γ(s, T)(f_i) = s*(f_i) =
  0                                   : S(f_i) ≤ (1 − T) N
  ( S(f_i) − (1 − T) N ) / T          : S(f_{i−1}) ≤ (1 − T) N < S(f_i)
  s(f_i) / T                          : else                                 (4.1)

Proof: The first case in (4.1) gives zero offspring to individuals with a fitness value below the truncation threshold. The second case reflects the fact that the threshold may lie within s(f_i); then only the fraction above the threshold, S(f_i) − (1 − T) N, may be selected, and this fraction is on average copied 1/T times. The last case in (4.1) gives all individuals above the threshold the multiplication factor 1/T that is necessary to keep the population size constant. □
Theorem 4.0.2 Let s̄(f) be the continuous fitness distribution of the population. Then the expected fitness distribution after performing truncation selection with threshold T is

Ω̄*_Γ(s̄, T)(f) =  s̄(f)/T : S̄(f) > (1 − T) N
                   0       : else                        (4.2)

4.1 Reproduction Rate

R_Γ(f) =  1/T : S̄(f) > (1 − T) N
           0   : else                                     (4.3)

4.2 Loss of Diversity

The loss of diversity of truncation selection is simply

p_{d,Γ}(T) = 1 − T   (4.4)
4.3 Selection Intensity

The selection intensity of truncation selection is

I_Γ(T) = (1/T) (1/√(2π)) e^{−f_c²/2}   (4.5)

Proof: The selection intensity is defined as the average fitness of the population after selection assuming an initial normalized Gaussian distribution G(0,1), hence I = ∫ Ω̄*(G(0,1))(f) f df. As no individual with a fitness value worse than f_c will be selected, the lower integration bound can be replaced by f_c. Here f_c is determined by

S̄(f_c) = (1 − T) N = 1 − T   (4.6)

because N = 1 for the normalized Gaussian distribution. So we can compute

I_Γ(T) = ∫_{f_c}^{∞} (1/T) (1/√(2π)) e^{−f²/2} f df = (1/T) (1/√(2π)) e^{−f_c²/2}

where f_c is determined by (4.6). Solving (4.6) for T yields

T = 1 − ∫_{−∞}^{f_c} (1/√(2π)) e^{−f²/2} df = ∫_{f_c}^{∞} (1/√(2π)) e^{−f²/2} df   □
A lower bound for the selection intensity in terms of T is reported in [Muhlenbein and Voigt, 1995]. Figure 4.1 shows on the left the selection intensity in dependence of the parameter T.
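Equation (4.5) is easy to check numerically. The sketch below uses scipy for the inverse normal CDF (an assumption about the available environment) and compares the closed form with a direct simulation of truncation selection on normally distributed fitness values.

import math
import numpy as np
from scipy.stats import norm

def intensity_truncation(T):
    """Selection intensity of truncation selection, equation (4.5)."""
    fc = norm.ppf(1.0 - T)                       # truncation point, equation (4.6)
    return math.exp(-fc * fc / 2.0) / (T * math.sqrt(2.0 * math.pi))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T = 0.3
    pop = rng.standard_normal(200_000)
    survivors = np.sort(pop)[int((1.0 - T) * len(pop)):]   # best T*N individuals
    print("simulated:", survivors.mean())
    print("formula  :", intensity_truncation(T))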
Figure 4.1: Selection intensity (left) and selection variance (right) of truncation selection.
4.4 Selection Variance

A calculation analogous to the one for the selection intensity gives the selection variance of truncation selection:

V_Γ(T) = 1 − I_Γ(T) ( I_Γ(T) − f_c )

with f_c determined by (4.6). The right side of Figure 4.1 shows the selection variance in dependence of T.
Chapter 5 Linear Ranking Selection

In linear ranking selection the individuals are sorted according to their fitness values, and the selection probability of an individual depends linearly on its rank i (rank 1 is assigned to the worst and rank N to the best individual):

p_i = (1/N) ( η⁻ + (η⁺ − η⁻) (i − 1)/(N − 1) ),   i ∈ {1, ..., N}   (5.1)

Here η⁻/N is the probability of the worst individual to be selected and η⁺/N the probability of the best individual to be selected. As the population size is held constant, the conditions η⁺ = 2 − η⁻ and η⁻ ≥ 0 must be fulfilled. Note that all individuals get a different rank, i.e. a different selection probability, even if they have the same fitness value.

Koza [Koza, 1992] determines the probabilities by a multiplication factor r_m that determines the gradient of the linear function. A transformation into the form of (5.1) is possible by η⁻ = 2/(r_m + 1) and η⁺ = 2 r_m/(r_m + 1).

Whitley [Whitley, 1989] describes ranking selection by transforming an equally distributed random variable χ ∈ [0, 1] to determine the index of the selected individual:

j = ⌊ (N/(2(c − 1))) ( c − sqrt( c² − 4(c − 1) χ ) ) ⌋   (5.2)

where c is a parameter called "selection bias". Back has shown that for 1 < c ≤ 2 this method is almost identical to the probabilities in (5.1) with η⁺ = c [Back, 1994].
Algorithm 3: (Linear Ranking Selection)
Input: The population P(τ) and the reproduction rate of the worst individual η⁻ ∈ [0, 1]
Output: The population after selection P(τ)'

linear_ranking(η⁻, J_1, ..., J_N):
  J ← sorted population J according to fitness, with the worst individual at the first position
  s_0 ← 0
  for i ← 1 to N do
    s_i ← s_{i−1} + p_i   (Equation 5.1)
  od
  for i ← 1 to N do
    r ← random[0, 1)
    J'_i ← J_l such that s_{l−1} ≤ r < s_l
  od
  return {J'_1, ..., J'_N}

Algorithm 3 gives the pseudo-code implementation of linear ranking selection. The method requires sorting of the population, hence the complexity of the algorithm is dominated by the complexity of sorting, i.e. O(N log N).
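A minimal Python sketch of algorithm 3, using the probabilities of equation (5.1); names and the plain-list interface are illustrative assumptions.

import bisect
import random

def linear_ranking_selection(fitnesses, eta_minus, rng=random):
    """Linear ranking selection with reproduction rate eta_minus for the worst individual."""
    n = len(fitnesses)
    eta_plus = 2.0 - eta_minus
    ranked = sorted(fitnesses)                     # worst individual first (rank 1)
    # selection probabilities of equation (5.1), accumulated into a CDF
    cdf, acc = [], 0.0
    for i in range(1, n + 1):
        acc += (eta_minus + (eta_plus - eta_minus) * (i - 1) / (n - 1)) / n
        cdf.append(acc)
    cdf[-1] = 1.0                                  # guard against rounding error
    return [ranked[bisect.bisect_right(cdf, rng.random())] for _ in range(n)]

if __name__ == "__main__":
    random.seed(3)
    pop = [random.gauss(0.0, 1.0) for _ in range(1000)]
    after = linear_ranking_selection(pop, eta_minus=0.1)
    print(sum(pop) / len(pop), sum(after) / len(after))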
Theorem 5.0.2 The expected fitness distribution after performing linear ranking selection with η⁻ on the distribution s is

Ω*_R(s, η⁻)(f_i) = s*(f_i) = s(f_i) ( (η⁻ N − 1)/(N − 1) ) + ((1 − η⁻)/(N − 1)) ( S(f_i)² − S(f_{i−1})² )   (5.3)

Proof: We first calculate the expected number of individuals with fitness f_i or worse, i.e. S*(f_i). As the individuals are sorted according to their fitness value, this number is given by the sum of the probabilities of the S(f_i) less fit individuals:
S*(f_i) = N Σ_{j=1}^{S(f_i)} p_j
        = Σ_{j=1}^{S(f_i)} ( η⁻ + (η⁺ − η⁻) (j − 1)/(N − 1) )
        = η⁻ S(f_i) + ((1 − η⁻)/(N − 1)) S(f_i) ( S(f_i) − 1 )

using η⁺ = 2 − η⁻. With s*(f_i) = S*(f_i) − S*(f_{i−1}) we obtain

s*(f_i) = η⁻ ( S(f_i) − S(f_{i−1}) ) + ((1 − η⁻)/(N − 1)) ( S(f_i)(S(f_i) − 1) − S(f_{i−1})(S(f_{i−1}) − 1) )

which simplifies to (5.3). □
Example 5.0.1 As an example we use again the fitness distribution of the "wall-following-robot" from Example 2.0.1. The resulting distribution after linear ranking selection with η⁻ = 0.1 is shown in Figure 5.1; here again the confidence interval is shown. A comparison between the theoretical analysis and the average of 20 simulations is shown in Figure 5.2. Again a very high agreement with the theoretical results is observed.
Figure 5.1: The resulting expected fitness distribution and the confidence interval of 68% after applying linear ranking selection with η⁻ = 0.1.
Theorem 5.0.3 Let s̄(f) be the continuous fitness distribution of the population. Then the expected fitness distribution after performing linear ranking selection with η⁻ on the distribution s̄ is

Ω̄*_R(s̄, η⁻)(f) = s̄*(f) = s̄(f) ( η⁻ + 2 (1 − η⁻) S̄(f)/N )   (5.4)

Proof: The continuous form of (5.1) is given by p̄(x) = (1/N) ( η⁻ + (η⁺ − η⁻) x/N ), so we calculate S̄*(f) using η⁺ = 2 − η⁻:

S̄*(f) = N ∫_0^{S̄(f)} p̄(x) dx
Figure 5.2: Comparison between the theoretically derived distribution (—) and the average of 20 simulations (- - -) for linear ranking selection with η⁻ = 1/N.
       = ∫_0^{S̄(f)} η⁻ dx + ( 2 (1 − η⁻)/N ) ∫_0^{S̄(f)} x dx
       = η⁻ S̄(f) + (1 − η⁻) S̄(f)²/N

Differentiating with respect to f yields (5.4). □
Example 5.0.2 Figure 5.3 shows the initial continuous fitness distribution s̄_G and the resulting distributions after performing linear ranking selection.
Figure 5.3: Gaussian fitness distribution s̄_G(f) and the resulting distributions after performing linear ranking selection with η⁻ = 0.5 and η⁻ = 0 (from left to right).
5.1 Reproduction Rate

From (5.4) the reproduction rate of linear ranking selection is obtained as

R_R(f) = s̄*(f)/s̄(f) = η⁻ + 2 (1 − η⁻) S̄(f)/N   (5.5)

5.2 Loss of Diversity

The loss of diversity of linear ranking selection is

p_{d,R}(η⁻) = (1 − η⁻)/4   (5.6)

Proof: Using Theorem 2.4.1 and realizing that S̄(f_z) = N/2, we calculate:

p_{d,R}(η⁻) = (1/N) ( S̄(f_z) − S̄*(f_z) )
            = (1/N) ( S̄(f_z) − η⁻ S̄(f_z) − (1 − η⁻) S̄(f_z)²/N )
            = (1/N) ( N/2 − η⁻ N/2 − (1 − η⁻) N/4 )
            = (1 − η⁻)/4   □

Baker has derived this result using his term of "reproduction rate" [Baker, 1989]. Note that the loss of diversity is again independent of the initial distribution.
5.3 Selection Intensity

The selection intensity of linear ranking selection is

I_R(η⁻) = (1 − η⁻)/√π   (5.7)

Proof: Using the definition of the selection intensity (Definition 2.5.2) and the Gaussian function for the initial fitness distribution, we obtain

I_R(η⁻) = ∫_{−∞}^{∞} x (1/√(2π)) e^{−x²/2} ( η⁻ + 2 (1 − η⁻) ∫_{−∞}^{x} (1/√(2π)) e^{−y²/2} dy ) dx
        = (η⁻/√(2π)) ∫_{−∞}^{∞} x e^{−x²/2} dx + ((1 − η⁻)/π) ∫_{−∞}^{∞} x e^{−x²/2} ∫_{−∞}^{x} e^{−y²/2} dy dx

As ∫_{−∞}^{∞} x e^{−x²/2} dx = 0 and ∫_{−∞}^{∞} x e^{−x²/2} ∫_{−∞}^{x} e^{−y²/2} dy dx = √π (see appendix B), we obtain (5.7). □

The selection intensity of linear ranking selection is shown in Figure 5.4 (left) in dependence of the parameter η⁻.
Figure 5.4: Selection intensity (left) and selection variance (right) of linear ranking selection.
5.4 Selection Variance

The selection variance of linear ranking selection is

V_R(η⁻) = 1 − I_R(η⁻)²   (5.8)

Proof: Using Definition 2.6.2 we obtain

V_R(η⁻) = ∫_{−∞}^{∞} f² (1/√(2π)) e^{−f²/2} ( η⁻ + 2 (1 − η⁻) ∫_{−∞}^{f} (1/√(2π)) e^{−y²/2} dy ) df − I_R(η⁻)²
        = η⁻ ∫_{−∞}^{∞} f² (1/√(2π)) e^{−f²/2} df + ((1 − η⁻)/π) ∫_{−∞}^{∞} f² e^{−f²/2} ∫_{−∞}^{f} e^{−y²/2} dy df − I_R(η⁻)²
        = η⁻ + (1 − η⁻) − I_R(η⁻)² = 1 − I_R(η⁻)²   □

The selection variance of linear ranking selection is plotted on the right of Figure 5.4.
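The closed forms (5.7) and (5.8) can be verified by sampling. The sketch below duplicates a compact version of the ranking procedure so that the block is self-contained; the population and parameter values are arbitrary illustrative choices.

import bisect
import math
import random
import statistics

def linear_ranking(fitnesses, eta_minus, rng=random):
    """Linear ranking selection (equation (5.1)); same logic as the sketch after algorithm 3."""
    n = len(fitnesses)
    ranked = sorted(fitnesses)
    cdf, acc = [], 0.0
    for i in range(1, n + 1):
        acc += (eta_minus + 2.0 * (1.0 - eta_minus) * (i - 1) / (n - 1)) / n
        cdf.append(acc)
    cdf[-1] = 1.0
    return [ranked[bisect.bisect_right(cdf, rng.random())] for _ in range(n)]

if __name__ == "__main__":
    random.seed(4)
    eta_minus = 0.2
    pop = [random.gauss(0.0, 1.0) for _ in range(100_000)]
    after = linear_ranking(pop, eta_minus)
    intensity = (1.0 - eta_minus) / math.sqrt(math.pi)    # equation (5.7)
    print("mean     simulated %.3f  predicted %.3f" % (statistics.fmean(after), intensity))
    print("variance simulated %.3f  predicted %.3f" % (statistics.pvariance(after), 1.0 - intensity ** 2))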
Chapter 6 Exponential Ranking Selection
In exponential ranking selection the individuals are again sorted according to their fitness values, with rank 1 assigned to the worst and rank N to the best individual. In contrast to linear ranking, the selection probabilities are weighted exponentially with a base c ∈ ]0, 1]:

p_i = c^{N−i} / Σ_{j=1}^{N} c^{N−j},   i ∈ {1, ..., N}   (6.1)

The sum Σ_{j=1}^{N} c^{N−j} normalizes the probabilities to ensure that Σ_{i=1}^{N} p_i = 1. As Σ_{j=1}^{N} c^{N−j} = (c^N − 1)/(c − 1), we can rewrite the above equation:

p_i = ((c − 1)/(c^N − 1)) c^{N−i},   i ∈ {1, ..., N}   (6.2)
The algorithm for exponential ranking selection (algorithm 4) is similar to the algorithm for linear ranking. The only difference lies in the calculation of the selection probabilities.
Theorem 6.0.2 The expected fitness distribution after performing exponential ranking selection with base c on the distribution s is

Ω*_E(s, c)(f_i) = s*(f_i) = (N/(c^N − 1)) c^{N−S(f_i)} ( c^{s(f_i)} − 1 )   (6.3)
Algorithm 4: (Exponential Ranking Selection)
Input: The population P(τ) and the ranking base c ∈ ]0, 1]
Output: The population after selection P(τ)'

exponential_ranking(c, J_1, ..., J_N):
  J ← sorted population J according to fitness, with the worst individual at the first position
  s_0 ← 0
  for i ← 1 to N do
    s_i ← s_{i−1} + p_i   (Equation 6.2)
  od
  for i ← 1 to N do
    r ← random[0, 1)
    J'_i ← J_l such that s_{l−1} ≤ r < s_l
  od
  return {J'_1, ..., J'_N}
Proof: We first calculate the expected number of individuals with fitness f_i or worse, i.e. S*(f_i). As the individuals are sorted according to their fitness value, this number is given by the sum of the probabilities of the S(f_i) less fit individuals:

S*(f_i) = N Σ_{j=1}^{S(f_i)} p_j
        = N ((c − 1)/(c^N − 1)) Σ_{j=1}^{S(f_i)} c^{N−j}
        = N ((c − 1)/(c^N − 1)) c^{N−S(f_i)} (c^{S(f_i)} − 1)/(c − 1)
        = (N/(c^N − 1)) c^{N−S(f_i)} ( c^{S(f_i)} − 1 )

With s*(f_i) = S*(f_i) − S*(f_{i−1}) we obtain

s*(f_i) = (N/(c^N − 1)) ( c^{N−S(f_i)} (c^{S(f_i)} − 1) − c^{N−S(f_{i−1})} (c^{S(f_{i−1})} − 1) )
        = (N/(c^N − 1)) ( c^{N−S(f_{i−1})} − c^{N−S(f_i)} )
        = (N/(c^N − 1)) c^{N−S(f_i)} ( c^{s(f_i)} − 1 )   □
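A minimal Python sketch of algorithm 4, with the probabilities of equation (6.2); it reuses the same roulette-wheel structure as the linear ranking sketch, and the names and interface are illustrative assumptions.

import bisect
import random

def exponential_ranking_selection(fitnesses, c, rng=random):
    """Exponential ranking selection with base 0 < c < 1 (equation (6.2))."""
    n = len(fitnesses)
    ranked = sorted(fitnesses)                      # worst individual first (rank 1)
    norm = (c ** n - 1.0) / (c - 1.0)               # sum of c^(N-j) for j = 1..N
    cdf, acc = [], 0.0
    for i in range(1, n + 1):
        acc += c ** (n - i) / norm
        cdf.append(acc)
    cdf[-1] = 1.0
    return [ranked[bisect.bisect_right(cdf, rng.random())] for _ in range(n)]

if __name__ == "__main__":
    random.seed(5)
    pop = [random.gauss(0.0, 1.0) for _ in range(1000)]
    after = exponential_ranking_selection(pop, c=0.99)
    print(sum(pop) / len(pop), sum(after) / len(after))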
Example 6.0.1 As an example we use again the fitness distribution of the "wall-following-robot" from Example 2.0.1. The resulting distribution after exponential ranking selection with c = 0.99 and N = 1000 is shown in Figure 6.1 in comparison to the average of 20 simulations. Again a very high agreement with the theoretical results is observed.
Figure 6.1: Comparison between the theoretically derived distribution (—) and the average of 20 simulations (- - -) for exponential ranking selection with c = 0.99.
Theorem 6.0.3 Let s̄(f) be the continuous fitness distribution of the population. Then the expected fitness distribution after performing exponential ranking selection Ω̄_E with base c on the distribution s̄ is

Ω̄*_E(s̄, c)(f) = s̄*(f) = N ( c^N ln c/(c^N − 1) ) s̄(f) c^{−S̄(f)}   (6.4)

Proof: The continuous form of (6.2) is given by p̄(x) = ( ln c/(c^N − 1) ) c^{N−x}, so we calculate:

S̄*(f) = N ( c^N ln c/(c^N − 1) ) ∫_0^{S̄(f)} c^{−x} dx
As s̄*(f) = dS̄*(f)/df, (6.4) follows. □

It is useful to introduce the new variable κ = c^N to eliminate the explicit dependence on the population size N:

Ω̄*_E(s̄, κ)(f) = s̄*(f) = ( ln κ/(κ − 1) ) κ^{1 − S̄(f)/N} s̄(f)   (6.5)

The meaning of κ will become apparent in the next section.
6.1 Reproduction Rate

From (6.5) the reproduction rate of exponential ranking selection is obtained as

R_E(f) = s̄*(f)/s̄(f) = ( ln κ/(κ − 1) ) κ^{1 − S̄(f)/N}   (6.6)

6.2 Loss of Diversity

Using Theorem 2.4.1 with R_E(f_z) = 1, a straightforward calculation yields the loss of diversity of exponential ranking selection:

p_{d,E}(κ) = 1/(1 − κ) + ( 1 − ln((κ − 1)/ln κ) )/ln κ   (6.7)
Figure 6.2: The loss of diversity p_{d,E}(κ) for exponential ranking selection. Note the logarithmic scale of the κ-axis.
Figure 6.3: Selection intensity (left) and selection variance (right) of exponential ranking selection. Note the logarithmic scale of the κ-axis.

6.3 Selection Intensity and Selection Variance

The selection intensity and the selection variance of exponential ranking selection were computed by numerical integration; analytic approximations were obtained by symbolic regression using the genetic programming method described in appendix A. The resulting approximation formula (6.9) for the selection intensity has a relative error of less than 6% for κ ∈ [10⁻²⁰, 0.8]. Similarly, the approximation formula (6.10) for the selection variance has a relative error of less than 5% for κ ∈ [10⁻²⁰, 0.8]. The numerically computed values, the approximations, and the relative errors are listed in the tables of appendix A.
Chapter 7 Proportional Selection

The probability of an individual to be selected is simply proportionate to its fitness value, i.e.

p_i = f_i/(N M)   (7.1)

Algorithm 5: (Proportional Selection)
Input: The population P(τ)
Output: The population after selection P(τ)'

proportional(J_1, ..., J_N):
  s_0 ← 0
  for i ← 1 to N do
    s_i ← s_{i−1} + p_i   (Equation 7.1)
  od
  for i ← 1 to N do
    r ← random[0, 1)
    J'_i ← J_l such that s_{l−1} ≤ r < s_l
  od
  return {J'_1, ..., J'_N}

Algorithm 5 displays the method using a pseudo-code formulation. The time complexity of the algorithm is O(N). Obviously, this mechanism will only work if all fitness values are greater than zero. Furthermore, the selection probabilities strongly depend on the scaling of the fitness function. As an example, assume a population of 10 individuals with the best individual having a fitness value of 11 and the worst a fitness value of
1. The selection probability for the best individual is hence p_b ≈ 16.6% and for the worst p_w ≈ 1.5%. If we now translate the fitness function by 100, i.e. we just add the constant value 100 to every fitness value, we calculate p_b ≈ 10.4% and p_w ≈ 9.5%. The selection probabilities of the best and the worst individual are now almost identical. This undesirable property arises from the fact that proportional selection is not translation invariant (see e.g. [de la Maza and Tidor, 1993]). Because of this, several scaling methods have been proposed to keep proportional selection working, e.g. linear static scaling, linear dynamic scaling, exponential scaling, logarithmic scaling [Grefenstette and Baker, 1989], and sigma truncation [Brill et al., 1992]. Another method to improve proportional selection is the "over selection" of a certain percentage of the best individuals, i.e. to force that 80% of all individuals are taken from the best 20% of the population. This method was used in [Koza, 1992]. In [Muhlenbein and Schlierkamp-Voosen, 1993] it is already stated that "these modifications are necessary, not tricks to speed up the algorithm". The following analysis will confirm this statement.

Theorem 7.0.1 The expected fitness distribution after performing proportional selection on the distribution s is

Ω*_P(s)(f_i) = s*(f_i) = s(f_i) f_i/M   (7.2)
7.1 Reproduction Rate

The reproduction rate of proportional selection is hence

R_P(f) = f/M   (7.3)

7.2 Selection Intensity

The selection intensity of proportional selection is

I_P = σ̄/M   (7.4)

where σ̄ is the mean variance of the fitness values of the population before selection.
The other properties we are interested in, like the selection variance and the loss of diversity, are difficult to investigate for proportional selection. The crucial point is the explicit occurrence of the fitness value in the expected fitness distribution after selection (7.2). Hence an analysis is only possible if we make further assumptions on the initial fitness distribution. This is why other work on proportional selection assumes some special functions to be optimized (e.g. [Goldberg and Deb, 1991]). Another weak point is that the selection intensity, even in the early stage of the optimization (when the variance is high), is too low. Measurements on a broad range of problems sometimes showed a negative selection intensity. This means that in some cases (due to sampling) there is a decrease in average population fitness. Seldom a very high selection intensity occurred (I ≈ 1.8), when a super-individual was created, but the measured average selection intensity was in the range of 0.1 to 0.3. All these undesired properties together led us to the conclusion that proportional selection is a very unsuited selection scheme. Informally one can say that the only advantage of proportional selection is that it is so difficult to prove its disadvantages.
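The scaling problem described above is easy to reproduce. The following sketch uses an assumed population of ten fitness values between 1 and 11 (the exact values of the chapter's example are not given, so these are illustrative) and shows how adding a constant flattens the selection probabilities of equation (7.1).

def proportional_probabilities(fitnesses):
    """Selection probabilities of proportional selection, equation (7.1)."""
    total = float(sum(fitnesses))          # equals N * M
    return [f / total for f in fitnesses]

if __name__ == "__main__":
    pop = [1, 3, 5, 6, 7, 8, 8, 8, 9, 11]  # illustrative population, best = 11, worst = 1
    for shift in (0, 100):                 # translate the fitness function by 100
        shifted = [f + shift for f in pop]
        probs = proportional_probabilities(shifted)
        print("shift %3d  p_best %.3f  p_worst %.3f" % (shift, max(probs), min(probs)))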
Chapter 8 Comparison of Selection Schemes
Tournament:          R_T(f_i) = (N/s(f_i)) ( (S(f_i)/N)^t − (S(f_{i−1})/N)^t )
Truncation:          R_Γ(f_i) = 0 if S(f_i) ≤ (1 − T)N;
                                 ( S(f_i) − (1 − T)N )/(s(f_i) T) if S(f_{i−1}) ≤ (1 − T)N < S(f_i);
                                 1/T otherwise
Linear Ranking:      R_R(f_i) = (η⁻ N − 1)/(N − 1) + ((1 − η⁻)/(N − 1)) ( 2 S(f_i) − s(f_i) )
Exponential Ranking: R_E(f_i) = (N/(s(f_i)(c^N − 1))) c^{N−S(f_i)} ( c^{s(f_i)} − 1 )
Proportional:        R_P(f_i) = f_i/M

Table 8.1: Comparison of the reproduction rate of the selection methods for discrete distributions.

8.1 Reproduction Rate and Universal Selection

Baker's "stochastic universal sampling" (SUS) [Baker, 1987] is an optimal sampling algorithm: the expected number of offspring of an individual equals its algorithmic sampling frequency. Furthermore, SUS has a minimal spread, i.e. the range of the possible values for s*(f_i) is
s*(f_i) ∈ { ⌊s*(f_i)⌋, ⌈s*(f_i)⌉ }   (8.1)

The outline of the SUS algorithm is given by algorithm 6. The standard sampling mechanism uses one spin of a roulette wheel (divided into segments for each individual, with the segment size proportional to the reproduction rate) to determine one member of the next generation. Hence N trials have to be performed to obtain an entire population. As these trials are independent of each other, a relatively high variance in the outcome is observed (see also chapter 2 and theorem 2.0.1). This is also the case for tournament selection, although there is no explicitly used roulette-wheel sampling. In contrast, for SUS only a single spin of the wheel is necessary, as the roulette has N markers for the "winning individuals" and hence all individuals are chosen at once. By means of the SUS algorithm the outcome of a certain run of the selection scheme is as close as possible to the expected behavior, i.e. the mean variation is minimal. Even though it is not clear whether there are any performance advantages in using SUS, it makes the run of a selection method more "predictable". To be able to apply SUS one has to know the expected number of offspring of each individual. Baker has applied this sampling method only to linear ranking selection, as here the expected number of offspring is known by construction (see chapter 5). As we have derived these offspring values for the selection methods discussed in the previous chapters, it is possible to use stochastic universal sampling for all these selection schemes. Hence we may obtain a unified view of selection schemes if we neglect the way the reproduction rates were derived and construct a "universal selection method" in the following way: First we compute
the fitness distribution of the population. Next the expected reproduction rates are calculated using the equations derived in the preceding chapters and summarized in table 8.1. In the last step SUS is used to obtain the new population after selection. This algorithm is given in algorithm 7, and the SUS algorithm is outlined by algorithm 6.

Algorithm 6: (Stochastic Universal Sampling)
Input: The population P(τ) and the reproduction rate R_i ∈ [0, N] of each individual
Output: The population after selection P(τ)'

SUS(R_1, ..., R_N, J_1, ..., J_N):
  sum ← 0
  j ← 1
  ptr ← random[0, 1)
  for i ← 1 to N do
    sum ← sum + R_i   (R_i is the reproduction rate of individual J_i)
    while (sum > ptr) do
      J'_j ← J_i
      j ← j + 1
      ptr ← ptr + 1
    od
  od
  return {J'_1, ..., J'_N}
Algorithm 7: (Universal Selection Method)
Input: The population P(τ)
Output: The population after selection P(τ)'

universal_selection(J_1, ..., J_N):
  s ← fitness_distribution(J_1, ..., J_N)
  r ← reproduction_rate(s)
  J' ← SUS(r, J)
  return J'
The time complexity of the universal selection method is O(N ln N), as the fitness distribution has to be computed. Hence, if we perform "tournament selection" with this algorithm, we pay for the lower mean variation with a higher computational complexity.
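A minimal Python sketch of algorithm 6 (the function name and the plain-list interface are illustrative assumptions); it returns the indices of the selected individuals, so it can be combined with any of the reproduction rates from table 8.1.

import random

def stochastic_universal_sampling(expected_copies, rng=random):
    """Stochastic universal sampling (algorithm 6).

    expected_copies -- expected number of offspring per individual; must sum to N
    Returns the list of selected indices.
    """
    selected = []
    ptr = rng.random()            # a single spin of the wheel
    cum = 0.0
    for i, r in enumerate(expected_copies):
        cum += r
        while cum > ptr:
            selected.append(i)
            ptr += 1.0
    return selected

if __name__ == "__main__":
    random.seed(6)
    # expected offspring counts, e.g. obtained from the reproduction rates of table 8.1
    expected = [0.2, 0.5, 0.8, 1.0, 1.5, 2.0]   # sums to 6 = N
    print(stochastic_universal_sampling(expected))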
8.2 Comparison of the Selection Intensity

Table 8.2: Comparison of the selection intensity of the selection methods.

As the selection intensity is a very important property of a selection method, we give in table 8.3 some settings for the selection methods that yield the same selection intensity.

I                 0.34     0.84      1.03      1.16      1.35      1.87      2.16
Τ: t              —        3         4         5         7         20        40
Γ: T              0.8      0.47      0.36      0.30      0.22      0.08      0.04
R: η⁻             0.4      —         —         —         —         —         —
E: κ              0.29     0.032     9.8·10⁻³  3.5·10⁻³  4.7·10⁻⁴  1·10⁻⁹    2.4·10⁻¹⁸
E: c (N = 1000)   0.999    0.997     0.995     0.994     0.992     0.979     0.960
Table 8.3: Parameter settings for truncation selection Γ, tournament selection Τ, linear ranking selection R, and exponential ranking selection E to achieve the same selection intensity I.

The importance of the selection intensity is based on the fact that the behavior of a simple genetic algorithm can be predicted if the fitness distribution is normally distributed. In [Muhlenbein and Schlierkamp-Voosen, 1993] a prediction is
made for a genetic algorithm optimizing the ONEMAX (or bit-counting) function. Here the fitness is given by the number of 1's in the binary string of length n. Uniform crossover is used and assumed to be a random process which creates a binomial fitness distribution. As a result, after each recombination phase the input of the next selection phase approximates a Gaussian distribution. Hence, a prediction of this optimization using the selection intensity should be possible. For a sufficiently large population Muhlenbein calculates

p(τ) = 0.5 ( 1 + sin( (I/√n) τ + arcsin(2 p_0 − 1) ) )   (8.2)

where p_0 denotes the fraction of 1's in the initial random population and p(τ) the fraction of 1's in generation τ. Convergence is characterized by the fact that p(τ_c) = 1, so the convergence time for the special case of p_0 = 0.5 is given by τ_c = (π/2) √n/I. Muhlenbein derived this formula for truncation selection, but as only the selection intensity is used, it is straightforward to give the convergence time for any other selection method by substituting I with the corresponding terms derived in the preceding sections. For tournament selection we have

τ_{c,T}(t) ≈ (π/2) √n / sqrt( 2 ( ln t − ln sqrt(4.14 ln t) ) )   (8.3)

for truncation selection

τ_{c,Γ}(T) = (π/2) √n T √(2π) e^{f_c²/2}   (8.4)

with f_c determined by (4.6), for linear ranking selection

τ_{c,R}(η⁻) = (π/2) √(π n) / (1 − η⁻)   (8.5)

and for exponential ranking selection

τ_{c,E}(κ) = (π/2) √n / I_E(κ)   (8.6)

with I_E(κ) given by the approximation (6.9).
8.3 Comparison of Loss of Diversity

Tournament:          p_{d,T}(t) = t^{−1/(t−1)} − t^{−t/(t−1)}
Truncation:          p_{d,Γ}(T) = 1 − T
Linear Ranking:      p_{d,R}(η⁻) = (1 − η⁻)/4
Exponential Ranking: p_{d,E}(κ) = 1/(1 − κ) + ( 1 − ln((κ − 1)/ln κ) )/ln κ

Table 8.4: Comparison of the loss of diversity of the selection methods.

To be able to compare the loss of diversity of the different selection schemes, a common measure is needed; we choose this measure to be the selection intensity: the loss of diversity of the selection methods is viewed as a function of the selection intensity. To calculate the corresponding graph, one first computes the value of the parameter of a selection method (i.e. t for tournament selection, T for truncation selection, η⁻ for linear ranking selection, and κ for exponential ranking selection) that is necessary to achieve a certain selection intensity. With this value the loss of diversity is then obtained using the corresponding equations, i.e. (3.11), (4.4), (5.6), (6.7). Figure 8.1 shows the result of this comparison: the loss of diversity for the different selection schemes in dependence of the selection intensity. To achieve the same selection intensity, more bad individuals are replaced using truncation selection than using tournament selection or one of the ranking selection schemes, respectively. This means that more "genetic material" is lost using truncation selection. If we suppose that a lower loss of diversity is desirable as it reduces the risk of premature convergence, we expect that truncation selection should be outperformed by the other selection methods. But in general it depends on the problem and on the representation of the problem to be solved whether a low loss of diversity is "advantageous". Figure 8.1 nevertheless provides a useful tool to make the right decision for a particular problem.

Another interesting fact can be observed if we look again at table 8.4: the loss of diversity is independent of the initial fitness distribution. Nowhere in the derivation of these equations was a certain fitness distribution assumed, and nowhere does the fitness distribution s(f) occur in the equations. In contrast, the (standardized) selection intensity and the (standardized) selection variance are computed for a certain initial fitness distribution (the normalized Gaussian distribution). Hence, the loss of diversity can be viewed as an inherent property of a selection method.
Figure 8.1: The dependence of the loss of diversity p_d on the selection intensity I for tournament selection, truncation selection, linear ranking selection, and exponential ranking selection. Note that for tournament selection only the dotted points on the graph correspond to valid (integer) tournament sizes.

8.4 Comparison of the Selection Variance

A similar comparison can be made for the selection variance as a function of the selection intensity. Figure 8.2 shows the dependence of the selection variance on the selection intensity. It can be seen clearly that truncation selection leads to a lower selection variance than tournament selection. The highest selection variance is obtained by exponential ranking. An interpretation of the results may be difficult, as it depends on the optimization task and the kind of problem to be solved whether a high selection variance is advantageous or not. But again this graph may help to decide on the "appropriate" selection method for a particular optimization problem. If we accept the assumption that a higher variance is advantageous to the optimization process, exponential ranking selection reveals itself to be the best selection scheme. In [Muhlenbein and Voigt, 1995] it is stated that "if two selection methods have the same selection intensity, the method giving the higher standard deviation of the selected parents is to be preferred". From this point of view exponential ranking selection should be the "best" selection method.
The selection variances of the selection methods are summarized below:

Tournament:          V_T(t) ≈ sqrt( (2.05 + t)/(3.14 t^{3/2}) )
Truncation:          V_Γ(T) = 1 − I_Γ(T) ( I_Γ(T) − f_c )
Linear Ranking:      V_R(η⁻) = 1 − I_R(η⁻)²
Exponential Ranking: V_E(κ) — approximation (6.10), see appendix A
8.5 The Complement Selection Schemes: Tournament and Linear Ranking

Comparing the expected fitness distributions of tournament selection and linear ranking selection, an interesting relation between the two selection schemes can be proven.

Theorem 8.5.1 Binary tournament selection and linear ranking selection with η⁻ = 1/N lead to the same expected fitness distribution, i.e.

Ω*_R(s, 1/N) = Ω*_T(s, 2)   (8.7)

Proof: Substituting η⁻ = 1/N in (5.3) yields

Ω*_R(s, 1/N)(f_i) = s(f_i) ( (N/N − 1)/(N − 1) ) + ((1 − 1/N)/(N − 1)) ( S(f_i)² − S(f_{i−1})² )
                  = (1/N) ( S(f_i)² − S(f_{i−1})² )
                  = N ( (S(f_i)/N)² − (S(f_{i−1})/N)² ) = Ω*_T(s, 2)(f_i)   □

Goldberg and Deb [Goldberg and Deb, 1991] have also shown this result, but only for the behavior of the best-fit individual. By this we see the complementary character of the two selection schemes: for lower selection intensities (I ≤ 1/√π) linear ranking selection is the appropriate selection mechanism, while for higher selection intensities (I ≥ 1/√π) tournament selection is better suited. At the border the two selection schemes are identical.
Figure 8.2: The dependence of the selection variance V on the selection intensity I for tournament selection, truncation selection, linear ranking selection, and exponential ranking selection. Note that for tournament selection only the dotted points on the graph correspond to valid (integer) tournament sizes.
Chapter 9 Conclusion
In this paper a unified and systematic approach to analyzing selection methods was developed and applied to the selection schemes tournament selection, truncation selection, linear and exponential ranking selection, and proportional selection. This approach is based on the description of the population using fitness distributions. Although this idea is not new, its consequent realization led to a powerful framework that gives a unified view of the selection schemes and allowed several aspects of these selection schemes, obtained up to now independently and in isolation, to be derived with one single methodology. Besides, some interesting features of selection schemes could be proven, e.g. the concatenation of several tournament selections (theorem 3.1.1) and the equivalence of binary tournament and linear ranking (theorem 8.5.1). Furthermore, the derivation of the major characteristics of a selection scheme, i.e. the selection intensity, the selection variance and the loss of diversity, could easily be achieved with this approach. The selection intensity was used to obtain a convergence prediction of the simple genetic algorithm with uniform crossover optimizing the ONEMAX function. The comparison of the loss of diversity and the selection variance based on the selection intensity allowed for the first time a comparison of "second order" properties of selection schemes. This comparison gives a well-grounded basis to decide which selection scheme should be used, if the impact of these properties on the optimization process is known for the particular problem. The one exception in this paper is proportional selection, which withdraws itself from a detailed mathematical analysis. But based on some basic analysis and some empirical observations we regard proportional selection as a very unsuited selection scheme. The presented analysis can easily be extended to other selection schemes and other properties of selection schemes.
Appendix A

- use of one-step hill-climbing to adjust the RFPC numbers

The last two items need further explanation. The marking crossover introduced in [Blickle and Thiele, 1994] works as follows: during the evaluation of the fitness function all edges in the tree of the individual are marked. The edges that remain unmarked after calculating the fitness value are said to be redundant, because they were never used for the fitness calculation. The crossover operator then only selects marked edges for crossover, because only changes at these edges may lead to individuals with a different fitness score. With this approach an increase in performance of almost 50% was achieved for the 6-multiplexer problem [Blickle and Thiele, 1994]. "One-step hill-climbing" works in the following way: after evaluating the fitness of an individual, all random constants in the trees are successively changed by a small amount δ. If this change leads to a better individual it is accepted, otherwise it is rejected. In our experiments the setting was δ = 0.1. The very large population size was chosen because only small trees were allowed. No further tuning of the parameters was made, and no comparison of performance with other possible optimization methods (e.g. simulated annealing) was carried out, as this is beyond the scope of this paper. The intention was only to find one good approximation for each data set. The problem was programmed on a SPARCstation 20 using the YAGPLIC library [Blickle, 1995]. A run over 30 generations took about 15 minutes of CPU time. The given solutions were found after 15-23 generations.
A.1 Approximating the Selection Variance of Tournament Selection

The symbolic regression yields a formula that approximates the selection variance of tournament selection with a relative error of less than 1.6% for t ∈ {1, ..., 30}:

V_T(t) ≈ sqrt( (2.05 + t)/(3.14 t^{3/2}) )   (3.17)

Table A.1 displays the numerically calculated values for the selection variance, the approximation by (3.17), and the relative error of the approximation for the tournament sizes t = 1, ..., 30.
where RFPC is a random floating-point number in the range [-10, 10], determined once at creation time of the population. One solution with an accuracy of 5.4% found by GP was Log[Subtract[Divide[2.840000, Subtract[Times[Exp[0.796000], …], Log[…]]], 1.196000]]. Further manual tuning of the constants led to the approximation formula (6.10) for the selection variance of exponential ranking selection. Table A.3 displays again the numerically calculated values for the selection variance, the approximation by (6.10), and the relative error of the approximation.
Tournament size t 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
VT (t) 1 0.6816901138160949 0.5594672037973512 0.4917152368747342 0.4475340690206629 0.4159271089832759 0.3919177761267493 0.3728971432867331 0.357353326357783 0.3443438232607686 0.3332474427030835 0.3236363870477149 0.3152053842122778 0.3077301024704087 0.3010415703137873 0.2950098090102839 0.2895330036877659 0.2845301297414324 0.2799358049283933 0.2756966156185853 0.2717684436810235 0.2681144875238161 0.2647037741277227 0.2615098815029825 0.2585107005876581 0.2556866644747772 0.2530210522851858 0.2504992994478195 0.2481086538596352 0.245837896441101
Approximation (3.17) 0.985314748118875 0.6751186081382552 0.5561984979283774 0.4906341319420119 0.4480145502588547 0.4175510364657733 0.394389935195578 0.3760023889838275 0.3609311128657064 0.3482720218045281 0.3374316422588417 0.3280026037472064 0.3196949671601049 0.3122960960698358 0.3056460664995608 0.299621986989894 0.2941276564766719 0.2890865414042389 0.2844368844693661 0.2801282213768255 0.2761188509706837 0.2723739652629051 0.2688642452925616 0.2655647916870945 0.2624542995840844 0.2595144145665026 0.2567292244751288 0.2540848544627421 0.2515691413740026 0.249171369706327
rel. Error in % 1.468525188112524 0.964001904186694 0.5842533479688358 0.2198640293503141 0.1073619354261157 0.3904355949452292 0.6307851338769677 0.832735179927804 1.00119020701134 1.140777989441309 1.255583395274936 1.349111803935622 1.424335741931216 1.483765664383183 1.52952171388625 1.563398178210858 1.586918496469898 1.601381079377116 1.607897047011976 1.60742116775606 1.600777202362149 1.588678694101006 1.571746069185771 1.55057627681493 1.525507063135685 1.49704721581323 1.465558757444203 1.431363290367017 1.394746801667437 1.355963955713685
κ: 1·10⁻²⁰, 1·10⁻¹⁹, 1·10⁻¹⁸, 1·10⁻¹⁷, 1·10⁻¹⁶, 1·10⁻¹⁵, 1·10⁻¹⁴, 1·10⁻¹³, 1·10⁻¹², 1·10⁻¹¹, 1·10⁻¹⁰, 1·10⁻⁹, 1·10⁻⁸, 1·10⁻⁷, 1·10⁻⁶, 0.00001, 0.0001, 0.001, 0.01, 0.0158489, 0.0251189, 0.0398107, 0.0630957, 0.1, 0.125893, 0.158489, 0.199526, 0.251189, 0.316228, 0.398107, 0.501187, 0.630957, 0.794328
IE ( ) 2.21187 2.19127 2.16938 2.14604 2.12104 2.09416 2.0651 2.03349 1.99889 1.96069 1.91813 1.87015 1.81525 1.7513 1.67494 1.58068 1.4585 1.28826 1.02756 0.958452 0.88211 0.797944 0.705529 0.604719 0.55122 0.495745 0.438398 0.379315 0.318668 0.256659 0.193519 0.129509 0.0649044
Approximation (6.9) 2.26634 2.23693 2.20597 2.17329 2.13869 2.10192 2.06269 2.02067 1.97541 1.92637 1.87286 1.81399 1.74857 1.67495 1.59077 1.49248 1.37426 1.22496 1.01518 0.959374 0.895999 0.823028 0.738058 0.638636 0.58292 0.523127 0.459482 0.392562 0.323416 0.253675 0.185613 0.122102 0.0663754
rel. error in % 2.46276 2.08369 1.6866 1.26989 0.83183 0.370482 0.116247 0.630581 1.17477 1.75086 2.36022 3.00251 3.67343 4.35951 5.02548 5.58 5.77587 4.91391 1.20517 0.0961498 1.57453 3.14361 4.61058 5.60873 5.75089 5.52332 4.8093 3.49231 1.49006 1.16253 4.08562 5.71875 2.26635
κ: 1·10⁻²⁰, 1·10⁻¹⁹, 1·10⁻¹⁸, 1·10⁻¹⁷, 1·10⁻¹⁶, 1·10⁻¹⁵, 1·10⁻¹⁴, 1·10⁻¹³, 1·10⁻¹², 1·10⁻¹¹, 1·10⁻¹⁰, 1·10⁻⁹, 1·10⁻⁸, 1·10⁻⁷, 1·10⁻⁶, 0.00001, 0.0001, 0.001, 0.01, 0.0158489, 0.0251189, 0.0398107, 0.0630957, 0.1, 0.125893, 0.158489, 0.199526, 0.251189, 0.316228, 0.398107, 0.501187, 0.630957, 0.794328
VE ( ) 0.224504 0.227642 0.231048 0.234767 0.238849 0.243361 0.248386 0.254032 0.260441 0.267807 0.276403 0.286619 0.299052 0.314661 0.335109 0.363607 0.407156 0.482419 0.624515 0.664523 0.70839 0.755421 0.804351 0.853263 0.876937 0.899606 0.920882 0.940366 0.957663 0.972403 0.98425 0.992927 0.998221
Approximation (6.10) 0.232462 0.235033 0.237881 0.241055 0.244614 0.248632 0.253204 0.258454 0.264544 0.271695 0.280208 0.290515 0.303252 0.319393 0.340517 0.36936 0.411119 0.47699 0.595566 0.631164 0.672819 0.721681 0.778705 0.843853 0.878753 0.914171 0.948648 0.979962 1.00499 1.0198 1.02009 1.00213 0.964101
rel. error in % 3.54445 3.24672 2.95725 2.67849 2.41344 2.16573 1.93978 1.74096 1.57569 1.45147 1.37665 1.35936 1.40448 1.50382 1.61388 1.58242 0.973227 1.12538 4.63544 5.01997 5.02134 4.46638 3.18843 1.1029 0.207166 1.61901 3.01517 4.21072 4.94227 4.87463 3.6411 0.927139 3.41803
Appendix B

The following integrals over the Gaussian function are used in the derivations of the selection intensity and the selection variance. Integrating by parts with $\frac{d}{dx}\left(-e^{-\frac{x^2}{2}}\right) = x\,e^{-\frac{x^2}{2}}$ gives

$$
\int_{-\infty}^{\infty} x\,e^{-\frac{x^2}{2}} \int_{-\infty}^{x} e^{-\frac{y^2}{2}}\,dy\;dx
= \left[-e^{-\frac{x^2}{2}} \int_{-\infty}^{x} e^{-\frac{y^2}{2}}\,dy\right]_{-\infty}^{\infty}
+ \int_{-\infty}^{\infty} e^{-\frac{x^2}{2}}\,e^{-\frac{x^2}{2}}\,dx
= 0 + \int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}.
$$

Furthermore, since $\frac{d}{dx}\left(\int_{-\infty}^{x} e^{-\frac{y^2}{2}}\,dy\right)^{t} = t\,e^{-\frac{x^2}{2}}\left(\int_{-\infty}^{x} e^{-\frac{y^2}{2}}\,dy\right)^{t-1}$,

$$
\int_{-\infty}^{\infty} t\,e^{-\frac{x^2}{2}} \left(\int_{-\infty}^{x} e^{-\frac{y^2}{2}}\,dy\right)^{t-1} dx
= \left[\left(\int_{-\infty}^{x} e^{-\frac{y^2}{2}}\,dy\right)^{t}\right]_{-\infty}^{\infty}
= (2\pi)^{\frac{t}{2}}.
$$
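Both identities can also be checked numerically. The sketch below (our own, not from the report) uses a simple trapezoidal rule and the error function for the inner integral, and verifies the second identity for t = 5:

    import math

    def inner(x):
        # int_{-inf}^{x} exp(-y^2/2) dy = sqrt(pi/2) * (1 + erf(x / sqrt(2)))
        return math.sqrt(math.pi / 2.0) * (1.0 + math.erf(x / math.sqrt(2.0)))

    def trapezoid(f, lo=-10.0, hi=10.0, n=20000):
        # Plain trapezoidal quadrature of f over [lo, hi].
        h = (hi - lo) / n
        s = 0.5 * (f(lo) + f(hi))
        for i in range(1, n):
            s += f(lo + i * h)
        return s * h

    t = 5
    lhs1 = trapezoid(lambda x: x * math.exp(-0.5 * x * x) * inner(x))
    lhs2 = trapezoid(lambda x: t * math.exp(-0.5 * x * x) * inner(x) ** (t - 1))
    print(lhs1, math.sqrt(math.pi))            # both approximately 1.7725
    print(lhs2, (2.0 * math.pi) ** (t / 2.0))  # both approximately 98.96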
Appendix C Glossary
Parameter of Exponential Ranking (= c^N)
c        Basis for Exponential Ranking Selection
Probability of Worst Fit Individual in Ranking Selection
f        Fitness Value
f(J)     Fitness Value of Individual J
G(μ, σ²) Gaussian Distribution with Mean μ and Variance σ²
I        Selection Intensity
J        Individual
J        Space of all Possible Individuals
M        Average Population Fitness
N        Population Size
Selection Method
Exponential Ranking Selection (E)
Tournament Selection (T)
Truncation Selection
Proportional Selection (P)
Ranking Selection (R)
p_c      Crossover Probability
p_d      Loss of Diversity
P        Population
R        Reproduction Rate
R        Set of Real Numbers
s        (Discrete) Fitness Distribution
s̄        (Continuous) Fitness Distribution
S        Cumulative (Discrete) Fitness Distribution
S̄        Cumulative (Continuous) Fitness Distribution
Mean of the Population Fitness
Variance of the Population Fitness
t        Tournament Size
T        Truncation Threshold
Generation
Convergence Time (in Generations) for the ONEMAX Example
V        Selection Variance
Z        Set of Integers
Bibliography
[Arnold et al., 1992] B. C. Arnold, N. Balakrishnan, and H. N. Nagaraja. A First Course in Order Statistics. Wiley Series in Probability and Mathematical Statistics, Wiley, New York, 1992.
[Bäck, 1994] Thomas Bäck. Selective pressure in evolutionary algorithms: A characterization of selection mechanisms. In Proceedings of the First IEEE Conference on Evolutionary Computation. IEEE World Congress on Computational Intelligence (ICEC94), pages 57-62, 1994.
[Bäck, 1995] Thomas Bäck. Generalized convergence models for tournament- and (μ, λ)-selection. In L. Eshelman, editor, Proceedings of the Sixth International Conference on Genetic Algorithms (ICGA95), San Francisco, CA, 1995. Morgan Kaufmann Publishers.
[Baker, 1987] J. E. Baker. Reducing bias and inefficiency in the selection algorithm. In Proceedings of the Second International Conference on Genetic Algorithms, pages 14-21, Cambridge, MA, 1987. Lawrence Erlbaum Associates.
[Baker, 1989] J. E. Baker. An Analysis of the Effects of Selection in Genetic Algorithms. PhD thesis, Graduate School of Vanderbilt University, Nashville, Tennessee, 1989.
[Blickle and Thiele, 1994] Tobias Blickle and Lothar Thiele. Genetic programming and redundancy. In J. Hopf, editor, Genetic Algorithms within the Framework of Evolutionary Computation (Workshop at KI-94, Saarbrücken), pages 33-38. Max-Planck-Institut für Informatik (MPI-I-94-241), 1994.
[Blickle and Thiele, 1995] Tobias Blickle and Lothar Thiele. A mathematical analysis of tournament selection. In L. Eshelman, editor, Proceedings of the Sixth International Conference on Genetic Algorithms (ICGA95), San Francisco, CA, 1995. Morgan Kaufmann Publishers.
[Blickle, 1995] Tobias Blickle. YAGPLIC - User Manual. Computer Engineering and Communication Networks Lab (TIK), Swiss Federal Institute of Technology (ETH) Zurich, Gloriastrasse 35, CH-8092 Zurich, 1995.
[Brill et al., 1992] F. Z. Brill, D. E. Brown, and W. N. Martin. Fast genetic selection of features for neural network classifiers. IEEE Transactions on Neural Networks, 2(3):324-328, March 1992.
[Bulmer, 1980] M. G. Bulmer. The Mathematical Theory of Quantitative Genetics. Clarendon Press, Oxford, 1980.
[Crow and Kimura, 1970] J. F. Crow and M. Kimura. An Introduction to Population Genetics Theory. Harper and Row, New York, 1970.
[de la Maza and Tidor, 1993] Michael de la Maza and Bruce Tidor. An analysis of selection procedures with particular attention paid to proportional and Boltzmann selection. In Stephanie Forrest, editor, Proceedings of the Fifth International Conference on Genetic Algorithms, pages 124-131, San Mateo, CA, 1993. Morgan Kaufmann Publishers.
[Goldberg and Deb, 1991] David E. Goldberg and Kalyanmoy Deb. A comparative analysis of selection schemes used in genetic algorithms. In G. Rawlins, editor, Foundations of Genetic Algorithms, pages 69-93, San Mateo, 1991. Morgan Kaufmann.
[Goldberg, 1989] David E. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley Publishing Company, Inc., Reading, Massachusetts, 1989.
[Grefenstette and Baker, 1989] John J. Grefenstette and James E. Baker. How genetic algorithms work: A critical look at implicit parallelism. In J. David Schaffer, editor, Proceedings of the Third International Conference on Genetic Algorithms, pages 20-27, San Mateo, CA, 1989. Morgan Kaufmann Publishers.
[Henrici, 1977] P. Henrici. Applied and Computational Complex Analysis, volume 2. A Wiley-Interscience Series of Texts, Monographs, and Tracts, 1977.
[Holland, 1975] John H. Holland. Adaptation in Natural and Artificial Systems. The University of Michigan Press, Ann Arbor, MI, 1975.
[Koza, 1992] John R. Koza. Genetic Programming: On the Programming of Computers by Means of Natural Selection. The MIT Press, Cambridge, Massachusetts, 1992.
[Mühlenbein and Schlierkamp-Voosen, 1993] Heinz Mühlenbein and Dirk Schlierkamp-Voosen. Predictive models for the breeder genetic algorithm. Evolutionary Computation, 1(1), 1993.
[Mühlenbein and Voigt, 1995] Heinz Mühlenbein and Hans-Michael Voigt. Gene pool recombination in genetic algorithms. In I. H. Osman and J. P. Kelly, editors, Proceedings of the Metaheuristics Inter. Conf., Norwell, 1995. Kluwer Academic Publishers.
[Shapiro et al., 1994] Jonathan Shapiro, Adam Prügel-Bennett, and Magnus Rattray. A statistical mechanical formulation of the dynamics of genetic algorithms. In Terence C. Fogarty, editor, Evolutionary Computing AISB Workshop. Springer, LNCS 865, 1994.
[Thierens and Goldberg, 1994a] D. Thierens and D. Goldberg. Convergence models of genetic algorithm selection schemes. In Yuval Davidor, Hans-Paul Schwefel, and Reinhard Männer, editors, Parallel Problem Solving from Nature - PPSN III, pages 119-129, Berlin, 1994. Lecture Notes in Computer Science 866, Springer-Verlag.
[Thierens and Goldberg, 1994b] Dirk Thierens and David Goldberg. Elitist recombination: an integrated selection recombination GA. In Proceedings of the First IEEE Conference on Evolutionary Computation. IEEE World Congress on Computational Intelligence (ICEC94), pages 508-512, 1994.
[Whitley, 1989] Darrell Whitley. The GENITOR algorithm and selection pressure: Why rank-based allocation of reproductive trials is best. In J. David Schaffer, editor, Proceedings of the Third International Conference on Genetic Algorithms, pages 116-121, San Mateo, CA, 1989. Morgan Kaufmann Publishers.