
Nonlinear Analysis, Theory, Methods & Applications, Vol. 30, No. 1, pp. 4529-4538, 1997
Proc. 2nd World Congress of Nonlinear Analysts
© 1997 Elsevier Science Ltd. Printed in Great Britain. All rights reserved
0362-546X/97 $17.00 + 0.00
Pergamon    PII: S0362-546X(96)00367-7

A NEW HYBRID GENETIC ALGORITHM FOR GLOBAL OPTIMIZATION

SOTIROPOULOS D.G., STAVROPOULOS E.C. and VRAHATIS M.N.
Department of Mathematics, University of Patras, GR-261.10 Patras, Greece

Key words and phrases: Global optimization, hybrid algorithm, interval arithmetic, interval branch-and-bound, genetic algorithms, termination criterion.

1. INTRODUCTION

We address the problem of finding reliable solutions of the global optimization problem

    f* = min f(x),  x ∈ X⁰,    (1.1)

where the objective function f : X⁰ ⊂ ℝⁿ → ℝ is continuously differentiable and the compact set X⁰ ⊂ ℝⁿ is an n-dimensional box. Classical gradient and random search methods behave well on simple unimodal functions but are inappropriate for difficult (and more common) problems, such as non-differentiable, multimodal or noisy functions. In such cases, where traditional optimization methods fail to provide reliable results, Genetic Algorithms (GAs) can be an interesting alternative. GAs are optimization methods that evolve a population of potential solutions using mechanisms inspired by those of genetics. The choice of a good initial population as well as the definition of an efficient termination criterion are quite difficult tasks. Interval analysis comes to tackle these difficulties. Interval branch-and-bound algorithms are applied in order to discard from consideration large regions of the search space, where the global solution cannot exist, and to bound the global minimum.

In this paper a Hybrid Interval Genetic algorithm (HIG) is presented. The algorithm consists of two phases: In the first phase, interval arithmetic, and especially an interval branch-and-bound algorithm, is used to obtain small regions where candidate solutions lie. In this way, a population of potential solutions is initialized and initial bounds for the global minimum f* are obtained. In the sequel, a genetic algorithm is applied in such a way that all the above pieces of information are exploited. The construction of a mechanism that updates the bounds in each generation gives the ability to define an efficient termination criterion. When the criterion is fulfilled, the algorithm converges to the global minimum f* with certainty and extra effort can be avoided.

The contents of this paper are as follows: In Section 2 we shortly review some of the relevant material on genetic algorithms and interval analysis. Section 3 describes the proposed hybrid algorithm HIG. Numerical experiences are presented in Section 4. The final section contains concluding remarks and a short discussion of further work.
2. PRELIMINARIES

This section begins with a brief discussion about the basic concepts of GAs. For a more thorough treatment of this subject, it is recommended to see [1,2,3]. In the sequel, the interval arithmetic tools which are needed for the treatment of (1.1) are established. A thorough introduction to the area of interval arithmetic can be found in [4,5,6,7].

2.1. Genetic Algorithms (GAs)

GAs are adaptive search and optimization methods based on the genetic processes of biological organisms. Their principles were first laid down by Holland [8]. The aim of GAs is to optimize a problem-defined function, called the fitness function. To do this, GAs maintain a population of individuals (suitably represented candidate solutions) and evolve this population over time. At each iteration, called a generation, the new population is created by the process of selecting individuals according to their level of fitness in the problem domain and breeding them together using operators borrowed from natural genetics, such as crossover and mutation. As the population evolves, the individuals in general tend toward the optimal solution. The basic structure of a GA is the following:
ALGORITHM 1. Simple Genetic Algorithm

1. Initialize a population of individuals;
2. Evaluate each individual in the population;
3. while termination criterion not reached do
4.   Select individuals for the next population;
5.   Apply genetic operators (crossover, mutation) to produce new individuals;
6.   Evaluate the new individuals;
7. return the best individual
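As a concrete illustration, here is a minimal sketch of Algorithm 1 in Python. It is ours, not the paper's implementation: the real-valued encoding, binary tournament selection, arithmetic crossover and uniform mutation are illustrative choices.

```python
import random

def simple_ga(fitness, bounds, popsize=50, pc=0.6, pm=0.01, generations=100):
    """Minimal real-coded GA minimizing `fitness` over the box `bounds`."""
    # Step 1: initialize a population of individuals (randomly here).
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(popsize)]
    # Step 2: evaluate each individual in the population.
    scores = [fitness(ind) for ind in pop]
    for _ in range(generations):                  # Step 3: termination criterion
        new_pop = []
        while len(new_pop) < popsize:
            # Step 4: binary tournament selection (lower value = fitter).
            parents = []
            for _ in range(2):
                a, b = random.sample(range(popsize), 2)
                parents.append(pop[a] if scores[a] < scores[b] else pop[b])
            # Step 5: crossover and mutation.
            if random.random() < pc:              # arithmetic crossover
                w = random.random()
                child = [w * x + (1 - w) * y for x, y in zip(*parents)]
            else:
                child = list(parents[0])
            for i, (lo, hi) in enumerate(bounds):
                if random.random() < pm:          # uniform mutation within the box
                    child[i] = random.uniform(lo, hi)
            new_pop.append(child)
        pop = new_pop
        scores = [fitness(ind) for ind in pop]    # Step 6: evaluate new individuals
    best = min(range(popsize), key=scores.__getitem__)
    return pop[best], scores[best]                # Step 7: return the best individual
```

For example, `simple_ga(lambda x: (x[0] - 1)**2 + x[1]**2, [(-5, 5), (-5, 5)])` typically returns a point near (1, 0).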

GAs demand only an objective function measuring the fitness of each individual. No other auxiliary knowledge, such as continuity, differentiability or satisfaction of the Lipschitz condition, is required. They avoid many of the shortcomings exhibited by local search techniques on difficult search spaces since they explore new areas using knowledge accumulated during the search, not randomly. They often lead to near-optimal solutions and can be easily parallelized and hybridized. In their recent work, Renders and Flasse [9] proposed hybrid methods which combine principles from genetic algorithms and hill-climbing methods in order to find a trade-off between accuracy, reliability and computing time.

GAs are highly dependent on the choice of the initial population as well as on various heuristically chosen parameters. Population size (Popsize), mutation rate (p_m), crossover rate (p_c) and some other parameters should be properly tuned in order that the GA exhibits its best performance [10]. The starting population can be initialized either heuristically, by using whatever knowledge is available about the possible solutions of the specific problem, or randomly, if no such knowledge is available. Measuring the performance of a GA is not an easy task. Typically this is done by comparing the solutions found on different runs, although this implies an extra amount of function evaluations. Another non-trivial task is the definition of a termination criterion. A GA may terminate when a fixed number of generations has been reached, when an acceptable solution is found, or when the average population fitness converges to stable fixed points. However, an efficient termination criterion is difficult to define. Besides, there are problems that are hard for a GA to solve [11,12]. GAs are not guaranteed to find the optimal solution because (i) the search process does not ergodically cover and search the state space, and (ii) the precision limits in the encoding process can substantially reduce the solution accuracy [13].

2.2. Interval Arithmetic

Interval arithmetic is a generalization or an extension of real arithmetic. It was invented by R. Moore [4] and has recently been used for solving ordinary differential equations and linear systems, for verifying chaos, and for global optimization. Let 𝕀 = {[a, b] | a ≤ b; a, b ∈ ℝ} be the set of compact intervals and 𝕀ⁿ be the set of n-dimensional interval vectors (also called boxes). The interval arithmetic operations are defined by

    A ∗ B = {a ∗ b | a ∈ A, b ∈ B}   for A, B ∈ 𝕀,    (2.2)

where the symbol ∗ may denote +, −, · or /. The above definition is equivalent to the following rules:

    [a, b] + [c, d] = [a + c, b + d],
    [a, b] − [c, d] = [a − d, b − c],
    [a, b] · [c, d] = [min{ac, ad, bc, bd}, max{ac, ad, bc, bd}],
    [a, b] / [c, d] = [a, b] · [1/d, 1/c]   if 0 ∉ [c, d].

Throughout this paper, we denote real numbers by x, y, ... and real bounded and closed intervals by X = [x̲, x̄], Y = [y̲, ȳ], etc. The width of the interval X is defined by w(X) = x̄ − x̲ if X ∈ 𝕀, and w(X) = maxᵢ w(Xᵢ) if X ∈ 𝕀ⁿ. The midpoint of the interval X is defined by m(X) = (x̲ + x̄)/2 if X ∈ 𝕀, and m(X) = (m(Xᵢ)) if X ∈ 𝕀ⁿ. An interval function F(X₁, ..., Xₙ) of intervals X₁, ..., Xₙ is an interval valued function of one or more variables. F(X₁, ..., Xₙ) is said to be an interval extension of a real function f(x₁, ..., xₙ) if f(x₁, ..., xₙ) ∈ F(X₁, ..., Xₙ) whenever xᵢ ∈ Xᵢ for all i = 1, ..., n. An interval function F is said to be inclusion monotonic if Xᵢ ⊆ Yᵢ (i = 1, ..., n) implies F(X₁, ..., Xₙ) ⊆ F(Y₁, ..., Yₙ). The power of interval methods in solving optimization problems, as well as in other applications, is exhibited in the following result due to Moore [4,5]: Let F(X₁, ..., Xₙ) be an inclusion monotonic interval extension of a real function f(x₁, ..., xₙ). Then F(X₁, ..., Xₙ) contains the range of f(x₁, ..., xₙ) for all xᵢ ∈ Xᵢ (i = 1, ..., n).

Inclusion functions can be constructed in any programming language in which interval arithmetic is simulated or implemented via natural interval extensions. However, computing an interval bound costs 2 to 4 times as much effort as evaluating f [7]. Interval methods for solving optimization problems consist of: (a) the main algorithm, which is a sequential deterministic algorithm where branch-and-bound techniques are used, and (b) accelerating devices such as the cut-off test, the monotonicity test, interval Newton-like steps, the concavity test, or local search procedures. Branch-and-bound techniques split up the whole domain into areas (branching) in which bounds on the objective function f are computed (bounding). The starting box X⁰ ∈ 𝕀ⁿ is successively subdivided into smaller subboxes in such a way that subregions which do not contain a global minimizer of f are discarded, while the other subregions are subdivided again until the desired width of the interval vectors is achieved. The development of interval tools appropriate for dealing with optimization problems is presented in [14,15,16].
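These rules translate directly into code. The following sketch (ours, in Python) implements them naively — in particular it ignores the outward rounding that a verified library such as C-XSC performs — and shows a natural interval extension of a toy function:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, o):      # [a,b] + [c,d] = [a+c, b+d]
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):      # [a,b] - [c,d] = [a-d, b-c]
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):      # min/max over all four endpoint products
        p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(p), max(p))

    def __truediv__(self, o):  # defined only if 0 is not in [c,d]
        assert o.lo > 0 or o.hi < 0, "division by an interval containing 0"
        return self * Interval(1.0 / o.hi, 1.0 / o.lo)

def width(x: Interval) -> float:     # w(X)
    return x.hi - x.lo

def midpoint(x: Interval) -> float:  # m(X)
    return (x.lo + x.hi) / 2.0

# Natural interval extension of f(x, y) = x*y - x: replace each real
# operation by its interval counterpart; by Moore's result the value
# encloses the range of f over the box.
def F(X: Interval, Y: Interval) -> Interval:
    return X * Y - X

# True range of f over [1,2] x [3,4] is [2, 6]; the extension returns
# [1, 7] -- a valid but overestimated enclosure (the dependency effect).
print(F(Interval(1, 2), Interval(3, 4)))
```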
3. THE NEW HYBRID INTERVAL GENETIC ALGORITHM

In this section we present our Hybrid Interval Genetic algorithm (HIG). Firstly, we give a simple model algorithm which is based on the branch-and-bound principle. It is used in the first part of our hybrid algorithm in order to produce boxes (with relatively small diameter ε) from which we get a good initial population used in the genetic portion of HIG. Secondly, we describe how the initial population is formed and propose our algorithm. Finally, we explain how a termination criterion can be defined using the obtained information on the bounds of f*.
3.1. Interval Subdivision Algorithm

This algorithm has common features with an interval subdivision method for global optimization, but does not include local search procedures, concavity tests or interval Newton-like steps, as the latter require the inclusion of the Hessian [14,16,17]. On the contrary, the cut-off and monotonicity tests are applied. The cut-off test uses the inclusion function F and an upper bound f̄ for the global minimum f*. Boxes X with min F(X) > f̄ cannot contain any global minimum point and, therefore, can be reliably deleted. Moreover, if f is differentiable then the monotonicity test can be applied. The monotonicity test allows one to automatically recognize whether f is strictly monotone in one of the variables in some subbox Y ⊆ X. Let ∇F be the inclusion function of the gradient ∇f of f. If 0 ∉ ∇Fⱼ(Y) for some j = 1, ..., n, then box Y can be discarded or replaced by an edge piece [16].

The algorithm requires the following set of parameters: the initial box X⁰; the inclusion function F for f : X⁰ → ℝ; and the maximum diameter ε of an accepted box. On output it returns a list of boxes, L, and an interval F* containing initial bounds for the global minimum f*. The model algorithm is as follows:

ALGORITHM 2. Interval Subdivision Model Algorithm

Set Y = X⁰, y = min F(X⁰), and f̄ = max F(X⁰) as an upper bound for f*. Initialize the working list W = {(Y, y)} and the candidate list L = {}.

1. Choose a coordinate direction k parallel to the edge of maximum length of Y = Y₁ × ... × Yₙ.
2. Bisect Y normal to direction k, obtaining boxes V₁, V₂ such that Y = V₁ ∪ V₂.
3. Calculate F(V₁) and F(V₂). Set vᵢ = min F(Vᵢ) for i = 1, 2.
4. Improve the upper bound: f̄ = min{f̄, max F(V₁), max F(V₂)}.
5. Cut-off test: discard the pair (Vᵢ, vᵢ) if vᵢ > f̄, for i = 1, 2.
6. Monotonicity test: discard the remaining pair(s) (Vᵢ, vᵢ) if 0 ∉ ∇Fⱼ(Vᵢ) for any j ∈ {1, 2, ..., n}, and i = 1, 2.
7. Remove (Y, y) from the working list W.
8. If w(F(Vᵢ)) < ε then insert the pair (Vᵢ, vᵢ) into the candidate list L; else insert it into the working list W. The insertion is done in such a way that the second members vᵢ of all pairs do not decrease.
9. If the list W becomes empty, then set as lower bound, F̲*, the second member of the first element of list L, and as upper bound, F̄*, the current f̄. Return list L and the interval F* containing the bounds.
10. Denote the first element of W by (Y, y). Go to Step 1.
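The following is a compact sketch of Algorithm 2 (ours, in Python). The monotonicity test of Step 6 is omitted, since it would require an interval gradient, and rounding is ignored; boxes are lists of (lo, hi) pairs and `F` is any inclusion function mapping a box to a (lo, hi) enclosure of the range of f:

```python
import bisect

def interval_subdivision(F, box, eps):
    """Sketch of Algorithm 2. Returns the candidate boxes L and an
    interval bounding f*."""
    y, f_bar = F(box)                    # f_bar: upper bound for f*
    work = [(y, box)]                    # working list W of (min F(Y), Y) pairs
    cand = []                            # candidate list L
    while work:
        y, Y = work.pop(0)               # head of W has the least lower bound
        # Step 1: direction parallel to the edge of maximum length.
        k = max(range(len(Y)), key=lambda i: Y[i][1] - Y[i][0])
        mid = (Y[k][0] + Y[k][1]) / 2.0
        # Step 2: bisect Y normal to direction k into V1 and V2.
        for part in ((Y[k][0], mid), (mid, Y[k][1])):
            V = list(Y)
            V[k] = part
            v_lo, v_hi = F(V)            # Step 3: calculate F(Vi)
            f_bar = min(f_bar, v_hi)     # Step 4: improve the upper bound
            if v_lo > f_bar:             # Step 5: cut-off test
                continue                 # (Step 6, monotonicity, omitted)
            # Step 8: narrow enough -> candidate list, else back to W;
            # insertion keeps the lower bounds non-decreasing.
            target = cand if v_hi - v_lo < eps else work
            bisect.insort(target, (v_lo, V))
        # the improved f_bar may also cut off pairs stored earlier
        work = [(v, B) for v, B in work if v <= f_bar]
    # Step 9: W empty -> bounds from the head of L and the current f_bar.
    # (The box holding a global minimizer is never cut off, so L is nonempty.)
    return [B for _, B in cand], (cand[0][0], f_bar)
```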

The above sequential deterministic algorithm has been mainly established to produce a list of relatively small boxes containing the various stationary (minimum, maximum or saddle) points. According to our verifying procedure, the global minimizer exists with certainty in one of these boxes. Of course, if the inclusion function gives the exact range of f in a particular box, then the box with the minimal lower bound in the list L contains the global minimizer.

In order to discard additional regions, any local (non-interval) optimization method that pursues the aim of delivering small function values at the first stages of Algorithm 2 can be used. In this way, the cut-off test (Step 5) is more effective and regions containing various useless stationary points can be discarded. The cut-off test does not require additional pieces of information, but it may decrease the space complexity, i.e. the maximal length of the working list W. Evidently, the monotonicity test (Step 6) can be applied only when the function f is differentiable. If this is not the case, Step 6 must be removed. Additionally, in many cases a huge amount of time can be gained by applying different strategies in Step 1, instead of bisecting a box orthogonal to the direction with greatest diameter. Various strategies and confirmation of this effect have recently been proposed by Csendes and Ratz [18,19]. Also, related approaches and implementations of Algorithm 2 can be found in [17]. Furthermore, Algorithm 2 gives additional information regarding the bounds of f*. These bounds are fundamental for the construction of our termination criterion (explained later) used in the genetic portion of HIG.

3.2. HIG Algorithm

In the second phase, a genetic algorithm is applied. As stated before, the boxes obtained by Algorithm 2 are used to form the initial population of a GA. This initialization can be done as follows: Firstly, the population size, Popsize, is defined. The midpoints of the boxes in list L are taken as members of the initial population. In this way, the number of individuals is equal to the number of the above boxes, #L. If #L is greater than Popsize, then Popsize is replaced by the value #L. If it is smaller, the population is increased by taking sequentially a box from the list L, randomly selecting a point within it, and adding this point to the population. This procedure takes place cyclically until the number of individuals reaches the value Popsize (see the sketch after Algorithm 3 below). The number of boxes contained in L depends on the choice of the heuristic parameter ε of Algorithm 2. For all the problems tested, a value of ε ∈ [0.001, 0.1] returns a list length #L smaller than Popsize = 50. In general, according to our experience, the choice of ε is proportional to the diameter of the initial region and the morphology of the objective function.

Next, we combine Algorithms 1 and 2 to obtain the following hybrid algorithm:

ALGORITHM 3. Hybrid Interval Genetic Algorithm, HIG

1. Apply an interval subdivision algorithm;
2. Initialize the population;
3. Evaluate each individual in the population;
4. while termination criterion not reached do
5.   Update the bounds;
6.   Select individuals for the next population;
7.   Apply genetic operators to produce new individuals;
8.   Evaluate the new individuals;
9. return f* and x*.
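The population initialization described before Algorithm 3 (midpoints first, then points drawn cyclically from the boxes of L) might look as follows; the Python representation of boxes as lists of (lo, hi) pairs is our assumption:

```python
import random

def initial_population(boxes, popsize):
    """Initial population from the boxes returned by Algorithm 2:
    box midpoints first, then random points drawn cyclically."""
    pop = [[(lo + hi) / 2.0 for lo, hi in box] for box in boxes]
    if len(pop) >= popsize:       # #L > Popsize: Popsize grows to #L
        return pop
    i = 0
    while len(pop) < popsize:     # cycle through the boxes of L
        box = boxes[i % len(boxes)]
        pop.append([random.uniform(lo, hi) for lo, hi in box])
        i += 1
    return pop
```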


The bounds F̲* and F̄* which are obtained in Step 1 of the above algorithm satisfy the following relation:

    F̲* ≤ f* ≤ F̄*.

Since, in general, we choose a relatively large value of ε, and since the inclusion function is actually the natural interval extension of f, the bounds F̲* and F̄* are overestimated. The basic idea of the termination criterion referred to in Step 4 is to make these bounds on f* sharper at each generation. Now, although it is always possible for a genetic algorithm to update and improve the upper bound at each generation, it is impossible for it to give a better lower bound; moreover, the lower bound is much smaller than the global minimum value. GAs are not able to provide a safe mechanism for updating the lower bound since they sample the objective function at only a finite number of points. Thus, we update the bounds in the following way: At each generation, the upper bound F̄* of the global minimum is replaced by the minimum of the current upper bound and the best individual's performance. That is,

    F̄* = min{F̄*, BestPerformance}.

Updating the lower bound is not an easy task. To do this, we utilize interval arithmetic and the notion of the current shrinking box, defined in the sequel. A current shrinking box, denoted by X_S, is the smallest convex interval vector containing a subset S of n-dimensional individuals x₁, x₂, ..., x_k, where k ≤ Popsize, whose performance is within the interval [F̲*, F̄*]. The shrinking box is constructed when the number k of individuals with performance within the current interval [F̲*, F̄*] exceeds a predefined number, τ, which is proportional to the total population size, Popsize. According to nature's survival-of-the-fittest principle, the number k will certainly exceed τ at some generation. Of course, the construction (or not) of the shrinking box can be handled as a convergence test of the GA: if it is never constructed, the algorithm does not converge. When a shrinking box X_S is constructed, an estimation of the range of f over X_S is obtained using interval arithmetic. In this way, new bounds F̲_S and F̄_S are obtained. Thus, the current bounds of the global minimum are updated as follows:

    F̄* = min{F̄*, F̄_S},    F̲* = max{F̲*, F̲_S}.

Evidently, as w(X_S) tends to zero, the individuals accumulate at a point. Additionally, if w(F*) = w([F̲*, F̄*]) tends to zero, which means that F̲* ≈ F̄* ≈ f*, then this accumulation point is a global minimizer of f. Based on this, our algorithm proceeds until the following relations hold:

    w(X_S) < ε_x   and   w(F*) < ε_f,

where ε_x and ε_f are the tolerances for x* and f*, respectively. In our opinion, the above termination criterion is very effective compared with other widely used criteria, and by using it extra computational effort is saved.
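Putting the above update rules together, Step 5 of Algorithm 3 might be sketched as follows (ours, in Python; in particular, applying the membership test before tightening the upper bound is our reading of the text):

```python
def update_bounds(pop, perf, F, f_low, f_up, tau):
    """Step 5 of Algorithm 3: one generation of bound updating.
    perf[i] is the objective value of pop[i]; F maps a box to a
    (lo, hi) range enclosure; tau is the threshold on k."""
    # Individuals whose performance lies in the current interval
    # [F_low, F_up] (selected before the upper bound is tightened).
    S = [x for x, p in zip(pop, perf) if f_low <= p <= f_up]
    # Upper bound <- min(current upper bound, best performance).
    f_up = min(f_up, min(perf))
    if len(S) <= tau:
        return f_low, f_up, None        # shrinking box not yet constructed
    n = len(pop[0])
    # Shrinking box X_S: smallest interval vector containing S.
    XS = [(min(x[i] for x in S), max(x[i] for x in S)) for i in range(n)]
    fs_lo, fs_hi = F(XS)                # interval range of f over X_S
    return max(f_low, fs_lo), min(f_up, fs_hi), XS

def terminated(XS, f_low, f_up, eps_x, eps_f):
    """Stop when w(X_S) < eps_x and w(F*) < eps_f."""
    if XS is None:
        return False
    return max(hi - lo for lo, hi in XS) < eps_x and f_up - f_low < eps_f
```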

4. NUMERICAL EXPERIENCES

The numerical tests have been carried out on an 80486/133MHz IBM-compatible PC using an implementation of the HIG algorithm in C-XSC, which is a C++ class library for scientific computing with automatic result verification [17]. The inclusion functions have been produced by natural interval extensions. The performance of the HIG algorithm is measured according to the Expected Number of Evaluations per Success (ENES) performance index. This number was defined at the IEEE International Conference on Evolutionary Computation (ICEC'96), May 20-22, 1996, Nagoya, Japan. Details on this can be found at the home page http://iridia.ulb.ac.be/langarman/ICEO.html. ENES represents the mean number of function evaluations needed in order that the HIG algorithm reaches the termination criterion, and it is computed by running twenty independent runs of the algorithm

with the same parameters, until the termination criterion is fulfilled. If NS is the number of successes, that is, the number of runs in which the termination criterion is reached, and NE is the total number of function evaluations during the 20 runs, then ENES is defined as ENES = NE/NS. If the desired value for the global minimum can never be reached, then ENES is not defined. We have selected optimization problems that are difficult for both interval analysis and genetic algorithms, and the experimental results are exhibited in the sequel. The HIG algorithm has been compared with the GENESIS 5.0 optimization system due to Grefenstette [20]. Two different sets of runs of GENESIS have been made: GENESIS-RP and GENESIS-HP, where the initial population has been initialized randomly and heuristically (the same as the HIG initial population), respectively. For each test problem we have executed twenty independent runs. Both HIG and GENESIS have been provided with the same set of parameters. GENESIS terminates when a predefined number of trials (function evaluations) is reached. Assuming that one interval evaluation is equivalent to two floating-point evaluations, the number of total trials supplied to GENESIS has been computed by the formula

    TT = 2 (IFE + IGE) + MNE,

where IFE is the total number of interval function calls to determine the range of the function, IGE is the total number of interval gradient evaluations, and MNE is the maximum number of real function evaluations of a particular run, after twenty runs of HIG. For the following test problems, the further reported parameters are: n, the dimension of the problem; X⁰, the starting box; x*, the global minimizer; and f*, the global minimum. BV is the best value each algorithm has reached, and TOL = ε_x = ε_f is the error tolerance for approximating x* and f*.
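As a worked check of this formula (ours), using the figures reported for the Levy problem below (IFE = 292, IGE = 178, MNE = 2650):

    TT = 2 (292 + 178) + 2650 = 3590 ≈ 3600,

which matches the number of trials supplied to GENESIS in Table 1.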
PROBLEM 4.1. Levy function (n = 2) [18]. This function is defined by

    f(x) = Σ_{i=1}^{5} i cos[(i + 1)x₁ + i] · Σ_{j=1}^{5} j cos[(j + 1)x₂ + j] + (x₁ + 1.42513)² + (x₂ + 0.80032)²

within the initial box X⁰ specified by −10 ≤ xᵢ ≤ 10, i = 1, 2. The global minimum is f* = −176.1375 at x* = (−1.3068, −1.4248). There are about 760 local minima in the minimization region. The large number of local optimizers makes it extremely difficult for any approximation method to find the global minimizer.

All genetic algorithms have been run with the same set of parameters, that is: population size Popsize = 50, crossover rate p_c = 0.6, and mutation rate p_m = 0.01. Algorithm 2 has returned 25 boxes and has found that f* belongs to the interval [−191.8058, −175.0057], with total effort IFE = 292 and IGE = 178. We have executed the HIG algorithm for twenty independent runs and we have found that the maximum number of real function evaluations of a particular run has been MNE = 2650. According to the previous formula, the total number of trials for both GENESIS-HP and GENESIS-RP is TT ≈ 3600. The results exhibited in Table 1 clearly show that HIG has been the only algorithm that has found the global minimum with certainty. HIG has succeeded in all runs and has found the optimal solution

              HIG           GENESIS-HP    GENESIS-RP
BV            -176.1375     -174.9709     -117.363658
NS            20/20         0/20          0/20
ENES          2202          --            --
MNE           2650          3600          3600
Success       100%          0%            0%

Table 1: Classification of the Levy function (n = 2).

with the desired accuracy TOL = 0.001. The termination criterion has been verified in all runs. For each run the maximum number of real function evaluations has been less than or equal to 2650. The ENES index for HIG has been 2202, while the corresponding index for the other algorithms cannot be defined. It is clearly seen that GENESIS-HP is superior to GENESIS-RP. This confirms our argument that the choice of a good initial population is crucial for the efficiency of a pure genetic algorithm.
PROBLEM 4.2. Goldstein-Price function (n = 2) [18]. This function is defined by

    f(x) = [1 + (x₁ + x₂ + 1)² (19 − 14x₁ + 3x₁² − 14x₂ + 6x₁x₂ + 3x₂²)]
           · [30 + (2x₁ − 3x₂)² (18 − 32x₁ + 12x₁² + 48x₂ − 36x₁x₂ + 27x₂²)]

within the initial box X⁰ specified by −2 ≤ xᵢ ≤ 2, i = 1, 2. The global minimum is f* = 3.0 at x* = (0.0, −1.0).

For this problem, the common set of parameters has been: Popsize = 50, p_c = 0.6, and p_m = 0.01. Algorithm 2 has returned 15 boxes. The initial bounds for f* have been [−3.4875 × 10⁶, 32.6875] and the total effort has been IFE = 60 and IGE = 30. As shown in Table 2, although all algorithms have found the global minimum, with TOL = 0.001, the ENES indexes indicate that HIG outperforms the rest of them. Both HIG and GENESIS-HP have succeeded in all runs, in contrast with GENESIS-RP, which has succeeded in only half of them. In addition, HIG appears to be the least costly algorithm, and the termination criterion has been fulfilled in all runs. For each run the maximum number of real function evaluations has been less than or equal to 1950. GENESIS-HP has exhibited better performance than GENESIS-RP, due to the good choice of initial population. However, their comparatively good performance is justified by the relatively small search region as well as by the flatness of the objective function in the global minimum's neighborhood.

              HIG       GENESIS-HP    GENESIS-RP
BV            3.000     3.000         3.000
NS            20/20     20/20         10/20
ENES          1340      1885          4089
MNE           1950      2150          2150
Success       100%      100%          50%

Table 2: Classification of the Goldstein-Price function (n = 2).

PROBLEM 4.3. Griewank function (n = 7) [18]. This function is defined by

    f(x) = (1/4000) Σ_{i=1}^{7} xᵢ² − Π_{i=1}^{7} cos(xᵢ/√i) + 1,

within the initial box X⁰ specified by −600 ≤ xᵢ ≤ 500, i = 1, 2, ..., 7. The global minimum is f* = 0.0 at x* = (0, 0, 0, 0, 0, 0, 0). It is an extremely difficult test problem since there are several thousands of local minima in this relatively large minimization region.

The common set of parameters for the genetic algorithms has been: Popsize = 50, p_c = 0.6, and p_m = 0.001. Algorithm 2 has returned only one box, with total effort IFE = 580 and IGE = 386.
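To make concrete what the natural interval extensions used here compute, the following sketch (ours, in Python, with a crude cosine enclosure and no outward rounding) bounds the range of the Griewank function over a box:

```python
import math

def icos(iv):
    """Enclosure of cos over iv = (lo, hi): endpoint values, plus +/-1
    for every critical point k*pi lying inside the interval."""
    lo, hi = iv
    if hi - lo >= 2.0 * math.pi:
        return (-1.0, 1.0)
    vals = [math.cos(lo), math.cos(hi)]
    k = math.ceil(lo / math.pi)
    while k * math.pi <= hi:
        vals.append(1.0 if k % 2 == 0 else -1.0)
        k += 1
    return (min(vals), max(vals))

def imul(a, b):
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

def isq(a):
    """Enclosure of x**2 over a (tighter than imul(a, a))."""
    lo = 0.0 if a[0] <= 0.0 <= a[1] else min(abs(a[0]), abs(a[1]))
    hi = max(abs(a[0]), abs(a[1]))
    return (lo * lo, hi * hi)

def griewank_extension(box):
    """Natural interval extension of the n = 7 Griewank function."""
    s = (0.0, 0.0)
    prod = (1.0, 1.0)
    for i, (lo, hi) in enumerate(box, start=1):
        q = isq((lo, hi))
        s = (s[0] + q[0] / 4000.0, s[1] + q[1] / 4000.0)
        r = math.sqrt(i)
        prod = imul(prod, icos((lo / r, hi / r)))
    # interval value of s - prod + 1
    return (s[0] - prod[1] + 1.0, s[1] - prod[0] + 1.0)

# Over a small box around the global minimizer the enclosure contains f* = 0:
print(griewank_extension([(-0.1, 0.1)] * 7))
```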

              HIG       GENESIS-HP    GENESIS-RP
BV            0.0001    0.0005        1.0292
NS            20/20     20/20         0/20
ENES          430       2450          --
MNE           500       2450          2450
Success       100%      100%          0%

Table 3: Classification of the Griewank function (n = 7).

The large amount of this effort, compared with the previous test problems, is due to the high dimension of the problem, as well as to the wide range of the search space. In Table 3, it is easily seen that both HIG and GENESIS-HP have succeeded in all the runs but, clearly, HIG has been more efficient than the latter. Observing the ENES indexes, it is evident that an efficient termination criterion for a genetic algorithm is of great significance. HIG needs only a few trials to reach the optimal solution (with TOL = 0.001), while the others consume all the trials and terminate without any guarantee that the global minimum has been found. GENESIS-HP has performed well (by taking advantage of its good initial population), in contrast with GENESIS-RP, which has been completely misled by the large search region and the enormous number of local minima. Of course, better results for GENESIS-RP can be obtained by tuning the size of the randomly selected population.
5. CONCLUSIONS AND FURTHER WORK

In this contribution, we present a hybrid genetic algorithm for finding guaranteed and reliable solutions of global optimization problems. This algorithm uses the branch-and-bound principle to obtain small regions where candidate solutions lie. In this way, a highly performing initial population is formed and initial bounds for the global minimum f* are obtained. By applying a genetic algorithm using this population, as well as a safe and reliable technique for properly updating the bounds of f*, we are able to compute with certainty global minima for various difficult test problems. The proposed algorithm becomes more effective when the new termination criterion is used. This criterion is based on the notion of a shrinking box, and by using it extra computational effort is avoided.

HIG has exhibited high performance when applied to difficult problems, especially to multimodal and high-dimensional objective functions. It is clear that HIG outperforms traditional genetic algorithms. Also, for all the problems examined, HIG has given better results than Algorithms 1 and 2 studied separately. In its present form, HIG gives us one global minimum. Assuming that clusters of global minima do not exist, HIG can give all the global minimizers, using dynamically produced subpopulations. As future work, we are going to investigate the construction of a pure genetic algorithm whose genetic operators will be based on interval arithmetic. In this way, and using only function evaluations, we hope that all the global minimizers will be computed with certainty.
REFERENCES

1. GOLDBERG D.E., Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, Mass., (1989).
2. DAVIS L., Handbook of Genetic Algorithms, Van Nostrand Reinhold, New York, (1991).
3. MICHALEWICZ Z., Genetic Algorithms + Data Structures = Evolution Programs, Springer-Verlag, (1996).
4. MOORE R.E., Interval Analysis, Prentice-Hall, Englewood Cliffs, New Jersey, (1966).
5. MOORE R.E., Methods and Applications of Interval Analysis, SIAM Publ., Philadelphia, (1979).
6. ALEFELD G. & HERZBERGER J., Introduction to Interval Computations, Academic Press, New York, (1983).
7. NEUMAIER A., Interval Methods for Systems of Equations, Cambridge University Press, Cambridge, (1990).
8. HOLLAND J.H., Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, Mass., (1975).
9. RENDERS J. & FLASSE S., Hybrid Methods Using Genetic Algorithms for Global Optimization, IEEE Trans. Syst., Man, Cybern., 26, 243-258, (1996).
10. GREFENSTETTE J.J., Optimization of Control Parameters for Genetic Algorithms, IEEE Trans. Systems, Man, and Cybernetics, 16, 122-128, (1986).
11. FORREST S. & MITCHELL M., What Makes a Problem Hard for a Genetic Algorithm? Some Anomalous Results and Their Explanation, Machine Learning, 13, 285-319, (1993).
12. MAGOULAS G.D., VRAHATIS M.N. & ANDROULAKIS G.S., Effective backpropagation training with variable stepsize, Neural Networks, 9, No. 6, (1996), in press.
13. INGBER L. & ROSEN B., Genetic Algorithms and Very Fast Simulated Reannealing: A Comparison, J. Mathematical and Computer Modelling, 16, 87-100, (1992).
14. RATSCHEK H. & ROKNE J., New Computer Methods for Global Optimization, Ellis Horwood Limited, (1988).
15. RATSCHEK H. & ROKNE J., Interval Tools for Global Optimization, Computers Math. Applic., 21, 41-50, (1991).
16. HANSEN E., Global Optimization Using Interval Analysis, Marcel Dekker Inc., (1992).
17. HAMMER R., HOCKS M., KULISCH U. & RATZ D., C++ Toolbox for Verified Computing, Springer-Verlag, (1995).
18. RATZ D. & CSENDES T., On the Selection of Subdivision Directions in Interval Branch-and-Bound Methods for Global Optimization, J. Global Optimization, 7, 183-207, (1995).
19. CSENDES T. & RATZ D., Subdivision direction selection in interval methods for global optimization, to appear in SIAM Journal of Numerical Analysis.
20. GREFENSTETTE J.J., A User's Guide to GENESIS, Version 5.0, (1990).
