Optimization Method Based On Genetic Algorithms
Introduction
For three decades, many mathematical programming methods have been developed to solve optimization problems. However, no single method has yet proved totally efficient and robust across all the optimization problems that arise in the different engineering fields.

Most engineering design problems involve choosing the design variable values that best describe the behavior of a system. At the same time, the results should satisfy the requirements and specifications imposed by the norms for that system. This last condition leads to predicting which input parameter values will produce designs that comply with the norms and also perform well, which constitutes the inverse problem.

Generally, the variables in design problems are discrete from the mathematical point of view. However, most mathematical optimization applications are focused on, and developed for, continuous variables. At present there are many research articles about optimization methods; the typical ones are based on calculus, numerical methods, and random searches. The calculus-based methods have been studied intensely and are subdivided into two main classes: 1) direct search methods, which find a local maximum by moving along the function in the direction of the local gradient; and 2) indirect methods, which usually find the local extrema by solving the set of non-linear equations that results from setting the gradient of the objective function to zero, i.e., by means of the multidimensional generalization of the notion of extreme points from elementary calculus: given a smooth function without restrictions, the search for a possible maximum is restricted to those points whose slope is zero in all directions. Both classes of methods have been improved and extended; however, they lack robustness for two main reasons: 1) they have a local focus, since they seek the maximum in the neighborhood of the analyzed point; and 2) they depend on the existence of derivatives of the objective function, which many practical problems do not provide.
$$ m(H,t+1) \;\ge\; m(H,t)\,\frac{f(H)}{f_{avg}}\left[1 - p_c\,\frac{\delta(H)}{l-1} - o(H)\,p_m\right] \qquad (1) $$
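Equation (1) is the schema growth bound. As a minimal illustration, the sketch below evaluates it directly; since the symbol definitions fall on pages not reproduced here, the standard meanings are assumed: m(H,t) is the number of strings matching schema H at generation t, f(H) its average fitness, f_avg the population average fitness, δ(H) the defining length, o(H) the order, l the string length, and p_c, p_m the crossover and mutation probabilities.

```python
def schema_growth_bound(m, f_H, f_avg, delta_H, o_H, l, p_c, p_m):
    """Lower bound on the expected count of schema H in the next generation, eq. (1)."""
    survival = 1.0 - p_c * delta_H / (l - 1) - o_H * p_m
    return m * (f_H / f_avg) * survival

# A short, low-order schema with above-average fitness receives more copies.
print(schema_growth_bound(m=10, f_H=1.2, f_avg=1.0,
                          delta_H=2, o_H=3, l=8, p_c=0.6, p_m=0.01))
```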
Before crossover (crossover point after the sixth bit):
    string A1: 101001 | 01
    string A2: 111100 | 00
After crossover:
    string A1: 101001 | 00
    string A2: 111100 | 01
The new strings A1 and A2 are part of the new generation.
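A minimal sketch of the single-point crossover operator illustrated above; the function name and the random choice of the crossover point are illustrative assumptions.

```python
import random

def single_point_crossover(parent_a, parent_b):
    """Exchange the tails of two equal-length bit strings at a random crossover point."""
    point = random.randint(1, len(parent_a) - 1)
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])

# Reproducing the outline above with the crossover point fixed after the sixth bit:
a, b = "10100101", "11110000"
point = 6
print(a[:point] + b[point:], b[:point] + a[point:])   # -> 10100100 11110001
```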
As in biological systems, mutation manifests itself as a small change in the genetic string of an individual. In the case of artificial genetic strings, a mutation corresponds to a change in an elementary portion (allele) of the individual's code. After the mutation the individual has characteristics different from those it had at the beginning, characteristics that possibly did not exist in the population. From the point of view of problem optimization, this is equivalent to a change of the search area in the parameter space. The above is illustrated with the following outline:
Before mutation:
    string A1: 10100101
After mutation (mutation point at the second bit):
    string A1: 11100101
The string A1 belongs to the new generation.
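A minimal sketch of bit-flip mutation on such a string, where each allele is flipped independently with probability p_m; the names are illustrative.

```python
import random

def mutate(bitstring, p_m=0.01):
    """Flip each bit of the string independently with probability p_m."""
    return "".join(
        ("1" if bit == "0" else "0") if random.random() < p_m else bit
        for bit in bitstring
    )

random.seed(1)
print(mutate("10100101", p_m=0.1))
```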
where X_U and X_L are the upper and lower bounds of the continuous variable X. It is advisable to adapt the precision to the problem, because the search process can become inefficient when higher precision, and therefore a longer string, is required.

Decoding is carried out basically to evaluate each individual of the population in the objective function, and it is applied to all the members of the population.
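A minimal sketch of decoding a binary string into a value of the continuous variable X in [X_L, X_U]; since the encoding formula itself falls on a page not reproduced here, the usual linear mapping with 2^l − 1 steps for a string of length l is assumed.

```python
def decode(bitstring, x_low, x_high):
    """Map a binary string onto the interval [x_low, x_high]."""
    l = len(bitstring)
    integer = int(bitstring, 2)                     # integer value of the binary code
    return x_low + integer * (x_high - x_low) / (2 ** l - 1)

# An 8-bit string gives a resolution of (x_high - x_low) / 255.
print(decode("10100101", x_low=0.0, x_high=5.0))
```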
Selection Strategies
At first, the genetic algorithm generates random strings for the solution population. The following generation is developed by applying the genetic operators: reproduction, crossover and mutation. The new generation evolves on the basis of the probabilities assigned to each individual by its objective-function fitness; i.e., individuals with poor objective-function fitness values have little probability of surviving into the next generation. In this way, the generations are engendered with the strings, or individuals, that improve the objective-function fitness value. Those that do not meet this condition disappear completely.

Reproduction is in essence a selection process. The best-known selection schemes are the proportional scheme and the group scheme. The proportional selection process assigns each individual a reproduction rate according to its fitness value. In the group selection process, the population is divided into groups according to their fitness values, and every member of a group has the same reproduction value.

For instance, proportional selection can be expressed mathematically in the following way:
$$ P_i = \frac{f_i}{\sum_j f_j} \qquad (3) $$
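A minimal sketch of proportional (roulette-wheel) selection according to equation (3), assuming non-negative fitness values; the names are illustrative.

```python
import random

def proportional_selection(population, fitness, n_select):
    """Pick n_select individuals with probability P_i = f_i / sum_j f_j, eq. (3)."""
    total = sum(fitness)
    probabilities = [f / total for f in fitness]
    # random.choices performs the weighted (roulette-wheel) draw with replacement.
    return random.choices(population, weights=probabilities, k=n_select)

random.seed(2)
pop = ["10100101", "11110000", "00011011"]
fit = [3.0, 1.5, 0.5]
print(proportional_selection(pop, fit, n_select=3))
```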
The choice of the control parameters has been addressed in diverse studies [4], [6] and [11]. These studies have focused, respectively, on the relationship between the mutation values and convergence; on the relationship between the population size and the crossover probability values; and on the relationship among population size, crossover probability and selection. These studies have also focused on specific simplified problems, so their results cannot be carried over directly to practical problems. For the above-mentioned reasons it is necessary to carry out convergence tests with varying values, taking into account that the population size, the mutation probability and the crossover probability are interrelated in the determination of the best control-parameter values. An appropriate approach [9] to begin a search is to consider a population size between 30 and 50 individuals, a crossover probability of about 0.6 and a small mutation probability of about 0.01.
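A minimal sketch of how the operators described above fit together under the suggested control parameters (a population of 30-50 individuals, p_c of about 0.6, p_m of about 0.01) follows; the objective function is a placeholder to be replaced by the problem at hand.

```python
import random

POP_SIZE, P_C, P_M, STRING_LEN, GENERATIONS = 40, 0.6, 0.01, 16, 100

def objective(bits):
    """Placeholder objective: number of ones (replace with the real problem)."""
    return bits.count("1")

def evolve():
    pop = ["".join(random.choice("01") for _ in range(STRING_LEN))
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        fit = [objective(ind) for ind in pop]
        # proportional selection of the mating pool (offset avoids all-zero weights)
        pool = random.choices(pop, weights=[f + 1e-9 for f in fit], k=POP_SIZE)
        nxt = []
        for a, b in zip(pool[::2], pool[1::2]):
            if random.random() < P_C:                    # crossover with probability P_C
                cut = random.randint(1, STRING_LEN - 1)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            nxt += [a, b]
        # bit-flip mutation with probability P_M per allele
        pop = ["".join(c if random.random() >= P_M else "10"[int(c)] for c in ind)
               for ind in nxt]
    return max(pop, key=objective)

random.seed(3)
print(evolve())                                          # best string found
```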
Applications
Optimization in electromagnetic problems often involves many parameters, and those parameters may be discrete. An example is the low side-lobe optimization of non-equidistantly spaced elements in a long array antenna, when the excitations and phases take quantized values. Although the number of possibilities in the search space is finite, an exhaustive search is not practical [12], [13]. The radiation pattern generated by an array antenna [12] is given by:
$$ AF(\theta) = 2\sum_{n=1}^{N_{el}} \sin\theta \,\cos\!\left[k\left(\sum_{m=1}^{n} d_m - \frac{d_l}{2}\right)\cos\theta\right] \qquad (6) $$

where d_l/2 is the distance from element l to the physical center of the array and d_m is the spacing between element m−1 and element m, so that the distance from element n to the center of the array is given by the term Σ_{m=1}^{n} d_m − d_l/2 appearing in (6).
Here it is assumed that element n lies closer to the center than element n+1, and that only inter-element distances greater than zero are considered. It is clear that the problem becomes more complicated as the number of array elements increases. In this case the most appropriate optimization method is the genetic algorithm.
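As an illustration of the kind of objective function the genetic algorithm evaluates here, the sketch below computes a side-lobe level for a symmetric, non-uniformly spaced array. The simplified array factor used (2 Σ_n cos(k x_n cos θ), with x_n the element distances from the center and no element pattern) and all names are assumptions for illustration, not the exact formulation of [12].

```python
import numpy as np

def array_factor(theta, positions, wavelength=1.0):
    """Simplified AF of a symmetric array: 2 * sum_n cos(k * x_n * cos(theta))."""
    k = 2.0 * np.pi / wavelength
    phase = k * np.outer(np.cos(theta), positions)       # shape (angles, elements)
    return 2.0 * np.cos(phase).sum(axis=1)

def sidelobe_level_db(positions, wavelength=1.0):
    """Objective a GA could minimize: highest side lobe relative to the main beam (dB)."""
    theta = np.linspace(1e-3, np.pi - 1e-3, 1801)
    af = np.abs(array_factor(theta, positions, wavelength))
    mask = np.abs(theta - np.pi / 2) > 0.25              # crude main-lobe exclusion
    return 20.0 * np.log10(af[mask].max() / af.max())

# Distances of the elements of one half of the array from its center, in wavelengths.
positions = np.cumsum([0.5, 0.7, 0.6, 0.8])
print(sidelobe_level_db(positions))
```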
Another case is the prediction of the far field from near-field measurements [14]. The mathematical model used in the far-field prediction involves a great number of parameters, such as the complex excitations, positions and orientations of a physical set of elemental dipoles that generate the same pattern as the one obtained from the measurements. In this optimization problem the number of parameters grows in proportion to the number of elements considered (8 parameters per element). For instance, if a set of four elemental dipoles is used to predict the far field of some electronic device, the search space will have 28 parameters, each of them restricted to an interval. For this particular case the proposed objective function is:
$$ F(s) = \sum_{m=1}^{M} g_m\bigl(v_m - f_m(r_m, s)\bigr) = 0 \qquad (7) $$
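A minimal sketch of evaluating an objective of the form (7); since the symbol definitions fall on a page not reproduced here, it is assumed that v_m is the measured value at point r_m, f_m(r_m, s) is the value predicted by the dipole model with parameter vector s, and g_m is the squared magnitude of the difference. The toy model below is purely illustrative.

```python
import numpy as np

def objective_F(s, points, measured, model):
    """F(s) = sum_m g_m(v_m - f_m(r_m, s)), with g_m taken as the squared magnitude."""
    residuals = [v - model(r, s) for r, v in zip(points, measured)]
    return float(np.sum(np.abs(residuals) ** 2))

# Toy stand-in model: one "dipole" of amplitude s[0] with a 1/r field decay.
model = lambda r, s: s[0] / np.linalg.norm(r)
points = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0])]
measured = [2.0, 1.0]
print(objective_F([2.0], points, measured, model))   # -> 0.0 at the true parameters
```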
$$ \overline{R}_i(f) = \frac{R_i(f) + \overline{R}_{i-1}(f)\,e^{-2jk_{i-1}(f)\,t_{i-1}}}{1 + R_i(f)\,\overline{R}_{i-1}(f)\,e^{-2jk_{i-1}(f)\,t_{i-1}}}, \qquad R_i(f) = \frac{\mu_{i-1}(f)\,k_i(f) - \mu_i(f)\,k_{i-1}(f)}{\mu_{i-1}(f)\,k_i(f) + \mu_i(f)\,k_{i-1}(f)}, \quad \text{for } i > 0, \qquad (8) $$

$$ k_i(f) = 2\pi f \sqrt{\mu_i(f)\,\varepsilon_i(f)}, \qquad (9) $$

and

$$ \overline{R}_0(f) = -1. \qquad (10) $$

The goals are the minimization of the maximum reflection and of the total thickness. It is clear that the goals are opposed: the minimization of the maximum reflection is achieved with a greater thickness of the absorbent medium, while that thickness is also to be minimized. The technique used in this case found the trade-off between the thickness of the absorbent medium and the minimum reflection of that material.
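A minimal sketch of the recursive reflection computation (8)-(10) and of a two-goal fitness that trades the worst in-band reflection against the total thickness; the layer data, the normalized units, and the weighting of the two goals are illustrative assumptions, not the exact choices of [15].

```python
import numpy as np

def total_reflection(freq, eps, mu, t):
    """Eqs. (8)-(10): generalized reflection coefficient of a layered absorber.

    Index 0 is the backed side (R_0 = -1), the last index is the outer half-space;
    eps[i], mu[i], t[i] are the complex material constants and thickness of layer i
    (t is unused for the first and last entries). Units are normalized so that c = 1.
    """
    k = [2.0 * np.pi * freq * np.sqrt(m * e) for m, e in zip(mu, eps)]   # eq. (9)
    R_bar = -1.0 + 0.0j                                                  # eq. (10)
    for i in range(1, len(eps)):
        r_i = (mu[i - 1] * k[i] - mu[i] * k[i - 1]) / \
              (mu[i - 1] * k[i] + mu[i] * k[i - 1])                      # interface term of (8)
        phase = np.exp(-2j * k[i - 1] * t[i - 1])
        R_bar = (r_i + R_bar * phase) / (1.0 + r_i * R_bar * phase)      # recursion (8)
    return R_bar

def fitness(layers, freqs, weight=0.1):
    """GA fitness to minimize: worst reflection over the band plus a thickness penalty."""
    eps, mu, t = zip(*layers)
    worst = max(abs(total_reflection(f, eps, mu, t)) for f in freqs)
    return worst + weight * sum(t)

# Hypothetical two-layer absorber on a conducting backing: (eps, mu, thickness) per entry.
layers = [(1.0, 1.0, 0.0),            # backing side; values unused since R_0 = -1, t_0 = 0
          (4.0 - 0.5j, 1.0, 0.02),    # absorber layer 1
          (2.0 - 0.1j, 1.0, 0.05),    # absorber layer 2
          (1.0, 1.0, 0.0)]            # outer half-space (air)
print(fitness(layers, freqs=np.linspace(1.0, 5.0, 9)))
```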
In [16] the problem of extracting the intrinsic, frequency-dependent dielectric properties of a medium is presented. In circuit design it is important to know the real and imaginary parts of the magnetic permeability, the real and imaginary parts of the electric permittivity, and the electric conductivity when the operating frequency is in the GHz range. Under these conditions the dispersion losses are quite significant and their estimation is not a simple task. That work proposes a systematic method for estimating these properties.
Conclusions
A quick review of the current literature shows that genetic algorithms have grown in popularity for solving optimization problems in diverse scientific research subjects. The electromagnetic area is no exception; a clear reference on the subject is [20]. The few examples selected in this paper report a great simplification of the optimization work, with quite acceptable results. However, in each case the genetic algorithm has to be adapted to the problem being treated. In certain cases it is necessary to combine this technique with others (as in [15]) and to check the results against other methods of the same class (such as simulated annealing). Although genetic algorithms do not demand prior or additional knowledge (such as derivatives) of the function being optimized, it is necessary to have a sense of whether a global optimum exists. Another aspect to take into account is the growing parameter space, i.e., the characteristics of the problem plus those of the genetic algorithm's control parameters; for the latter there is no method that provides exact values, so it will always be necessary to carry out tests to determine the best ones. The only inconvenience of this technique may be the computation time required to find the solution of a problem, depending on its complexity. In general, genetic algorithms are an excellent option for the robust global search for the optimum of non-linear, high-dimensional functions.
References
[1] T. Bäck, Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms, Oxford University Press, New York, 1996.
[2] C.A. Balanis, Antenna Theory: Analysis and Design, 2nd ed., John Wiley & Sons, 1997.