Particle Swarm Optimization for Integer Programming

E.C. Laskari, K.E. Parsopoulos and M.N. Vrahatis
University of Patras, Greece
Abstract - The investigation of the performance of the Particle Swarm Optimization (PSO) method in Integer Programming problems is the main theme of the present paper. Three variants of PSO are compared with the widely used Branch and Bound technique on several Integer Programming test problems. Results indicate that PSO handles such problems efficiently, and in most cases it outperforms the Branch and Bound technique.

I. INTRODUCTION

A remarkably wide variety of problems can be represented as discrete optimization models [17]. An important area of application concerns the efficient management of a limited number of resources so as to increase productivity and/or profit. Such applications are encountered in Operational Research problems such as goods distribution, production scheduling, and machine sequencing. There are applications in mathematics to the subjects of combinatorics, graph theory and logic. Statistical applications include problems of data analysis and reliability. Recent scientific applications involve problems in molecular biology, high energy physics and x-ray crystallography. A political application concerns the division of a region into election districts [17]. Capital budgeting, portfolio analysis, network and VLSI circuit design, as well as automated production systems, are some more applications in which Integer Programming problems are met [17].

Yet another, recent, and promising application is the training of neural networks with integer weights, where the activation function and weight values are confined in a narrow band of integers. Such neural networks are better suited for hardware implementations than real-weight ones [26].

The Unconstrained Integer Programming problem can be defined as

    min_x f(x),   x ∈ S ⊆ Z^D,   (1)

where Z is the set of integers, and S is a not necessarily bounded set, which is considered as the feasible region. Maximization of Integer Programming problems is very common in the literature, but we will consider only the minimization case, since a maximization problem can easily be transformed to a minimization problem and vice versa. The problem defined in Eq. (1) is often called the "All-Integer Programming Problem", since all the variables are integers, in contrast to the "Mixed-Integer Programming Problem", where some of the variables are real.

Optimization techniques developed for real search spaces can be applied on Integer Programming problems to determine the optimum solution, by rounding off the real optimum values to the nearest integer [17], [28]. One of the most common deterministic approaches for tackling Integer Programming problems is the Branch and Bound (BB) technique [10], [18], [28]. According to this technique, the initial feasible region is split into several sub-regions. For each one of these sub-regions, a constrained sub-problem is solved, treating the integer problem as a continuous one. The procedure is repeated until the real variables are fixed to integer values.

Evolutionary and Swarm Intelligence algorithms are stochastic optimization methods that involve algorithmic mechanisms similar to natural evolution and social behavior, respectively. They can cope with problems that involve discontinuous objective functions and disjoint search spaces [7], [14], [30]. Genetic Algorithms (GA), Evolution Strategies (ES), and the Particle Swarm Optimizer (PSO) are the most common paradigms of such methods. GA and ES draw from principles of natural evolution, which are regarded as rules in the optimization process. On the other hand, PSO is based on the simulation of social behavior. Early approaches in the direction of Evolutionary Algorithms for Integer Programming are reported in [8], [11].
In GA, the potential solutions are encoded in binary bit strings. Since the integer search space of the problem defined in Eq. (1) is potentially not bounded, the representation of a solution using a fixed-length binary string is not feasible [29]. Alternatively, ES can be used, by embedding the search space Z^D into R^D and truncating the real values to integers. However, this approach is not always efficient, due to the existence of features of ES which contribute to the detection of real-valued minima with arbitrary accuracy. These features are not always needed in integer spaces, since the smallest distance between two points, in the ℓ1-norm, is equal to 1 [29].

This paper aims to investigate the performance of the PSO method on Integer Programming problems. The truncation of real values to integers seems not to affect significantly the performance of the method, as the experimental results indicate. Moreover, PSO outperforms the BB technique for most test problems.

The rest of the paper is organized as follows: in Section II, the PSO method is described. In Section III, the BB algorithm is briefly exposed. In Section IV, the experimental results are reported, and conclusions are given in Section V.

II. THE PARTICLE SWARM OPTIMIZATION METHOD

PSO is a Swarm Intelligence method for global optimization. It differs from other well-known Evolutionary Algorithms (EA) [2], [4], [7], [14], [30]. As in EA, a population of potential solutions is used to probe the search space, but no operators inspired by evolution procedures are applied on the population to generate new promising solutions. Instead, in PSO, each individual of the population, named particle (the population itself is called swarm), adjusts its trajectory toward its own previous best position, and toward the previous best position attained by any member of its topological neighborhood [12]. In the global variant of PSO, the whole swarm is considered as the neighborhood. Thus, global sharing of information takes place, and the particles profit from the discoveries and previous experience of all other companions during the search for promising regions of the landscape. For example, in the single-objective minimization case, such regions possess lower function values than others visited previously.

Several variants of the PSO technique have been proposed so far, following Eberhart and Kennedy [4], [13], [14]. In our experiments, three different global versions of PSO were investigated. They are all defined by the equations described in the following paragraph [14].

First, let us define the notation adopted in this paper: assuming that the search space is D-dimensional, the i-th particle of the swarm is represented by the D-dimensional vector X_i = (x_{i1}, x_{i2}, ..., x_{iD}), and the best particle of the swarm, i.e. the particle with the smallest function value, is denoted by the index g. The best previous position (i.e. the position giving the lowest function value) of the i-th particle is recorded and represented as P_i = (p_{i1}, p_{i2}, ..., p_{iD}), and the position change (velocity) of the i-th particle is V_i = (v_{i1}, v_{i2}, ..., v_{iD}).

The particles are manipulated according to the following equations (the superscripts denote the iteration):

    V_i^{n+1} = χ ( w V_i^n + c_1 r_1^n (P_i^n - X_i^n) + c_2 r_2^n (P_g^n - X_i^n) ),   (2)

    X_i^{n+1} = X_i^n + V_i^{n+1},   (3)

where i = 1, 2, ..., N; N is the swarm's size; χ is a constriction factor used to control and constrict velocities; w is the inertia weight; c_1 and c_2 are two positive constants, called the cognitive and social parameter respectively; r_1^n and r_2^n are two random numbers uniformly distributed within the range [0, 1].

Eq. (2) is used to calculate, at each iteration, the i-th particle's new velocity. Three terms are taken into consideration. The first term, w V_i^n, is the particle's previous velocity weighted by the inertia weight w. The second term, (P_i^n - X_i^n), is the distance between the particle's best previous position and its current position. Finally, the third term, (P_g^n - X_i^n), is the distance between the swarm's best experience and the i-th particle's current position. The parameters c_1 r_1^n and c_2 r_2^n provide randomness that renders the technique less predictable yet more flexible [12]. Eq. (3) provides the new position of the i-th particle, adding its new velocity to its current position. In general, the performance of each particle is measured according to a fitness function, which is problem-dependent. In optimization problems, the fitness function is usually the objective function under consideration.

The role of the inertia weight w is considered crucial for PSO's convergence behavior. The inertia weight is employed to control the impact of the history of velocities on the current velocity. In this way, the parameter w regulates the trade-off between the global (wide-ranging) and the local (nearby) exploration abilities of the swarm. A large inertia weight facilitates global exploration (searching new areas), while a small one tends to facilitate local exploration, i.e. fine-tuning the current search area. A suitable value for the inertia weight w provides balance between the global and local exploration ability of the swarm, resulting in better convergence rates. Experimental results suggest that it is better to set the inertia to a large initial value, in order to promote global exploration of the search space, and gradually decrease it to obtain refined solutions [31]. Our approach employs a time-decreasing inertia weight value.
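To make the update rule concrete, the following is a minimal NumPy sketch of the global PSO variant described above, with positions truncated to the nearest integer (as in the experiments of Section IV). The function name pso_integer, the handling of the evaluation budget, and the linear inertia schedule are our own illustrative choices, not the authors' implementation; setting chi = 1 corresponds to PSO-In, fixing w_start = w_end = 1 to PSO-Co, and the defaults (both factors active) to PSO-Bo:

    import numpy as np

    def pso_integer(f, dim, swarm_size, bounds=(-100.0, 100.0), max_evals=25000,
                    chi=0.729, w_start=1.0, w_end=0.1, c1=2.0, c2=2.0, v_max=4.0):
        """Global PSO for integer minimization (illustrative sketch)."""
        rng = np.random.default_rng()
        lo, hi = bounds
        X = np.round(rng.uniform(lo, hi, (swarm_size, dim)))  # integer positions
        V = rng.uniform(lo, hi, (swarm_size, dim))            # initial velocities
        P = X.copy()                                          # best previous positions P_i
        p_val = np.array([f(x) for x in X])
        g = int(np.argmin(p_val))                             # index of the best particle
        n_iters = max_evals // swarm_size
        for n in range(n_iters):
            # time-decreasing inertia weight, from w_start towards w_end
            w = w_start + (w_end - w_start) * n / max(n_iters - 1, 1)
            r1 = rng.random((swarm_size, dim))
            r2 = rng.random((swarm_size, dim))
            V = chi * (w * V + c1 * r1 * (P - X) + c2 * r2 * (P[g] - X))  # Eq. (2)
            V = np.clip(V, -v_max, v_max)                     # velocity clamping at V_max
            X = np.round(X + V)                               # Eq. (3), truncated to integers
            vals = np.array([f(x) for x in X])
            improved = vals < p_val
            P[improved] = X[improved]
            p_val[improved] = vals[improved]
            g = int(np.argmin(p_val))
        return P[g], p_val[g]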
The initial population, as well as the velocities, can be generated either randomly or by a Sobol sequence generator [27], which ensures that the D-dimensional vectors will be uniformly distributed within the search space.

Some variants of PSO impose a maximum allowed velocity V_max to prevent the swarm from exploding. Thus, if v_{id}^{n+1} > V_max in Eq. (2), then v_{id}^{n+1} = V_max [14].
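As an illustration of the Sobol-based initialization, the sketch below uses SciPy's quasi-Monte Carlo module rather than the Numerical Recipes generator cited in [27]; the function name and default arguments are our own assumptions:

    import numpy as np
    from scipy.stats import qmc

    def sobol_swarm(swarm_size, dim, low=-100.0, high=100.0, seed=0):
        """Sobol-sequence initialization of integer particle positions.

        SciPy emits a balance warning unless swarm_size is a power of two,
        which is harmless for this illustration.
        """
        sampler = qmc.Sobol(d=dim, seed=seed)
        points = sampler.random(swarm_size)   # low-discrepancy points in [0, 1)^dim
        X = qmc.scale(points, low, high)      # map onto the search box
        return np.round(X)                    # truncate to integers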
PSO resembles, to some extent, EA. Although it does not rely on a direct recombination operator, the recombination concept is accounted for by the stochastic movement of each particle toward its own best previous position, as well as toward the global best position of the entire swarm or its neighborhood's best position, depending on the variant of PSO that is used [6]. Moreover, PSO's mutation-like behavior is directional, due to the velocity of each particle, with a kind of momentum built in. In other words, PSO is considered as performing mutation with a "conscience", as pointed out by Eberhart and Shi [6].

The PSO technique has proved to be very effective in solving real-valued global optimization problems, in static, noisy, as well as continuously changing environments, and for performing neural network training [19]-[22], exhibiting competitive results with EA [1]. Moreover, it can cope efficiently with Multiobjective Optimization problems [25] and specialized problems, like the ℓ1-norm errors-in-variables problems [23]. Its convergence rates can be improved by properly initializing the population, e.g. using a derivative-free method like the Nonlinear Simplex Method of Nelder and Mead [24].

III. THE BRANCH AND BOUND TECHNIQUE

The BB technique is widely used for solving optimization problems. In BB, the feasible region of the problem is relaxed, and subsequently partitioned into several sub-regions; this is called branching. Over these sub-regions, lower and upper bounds for the values of the function can be determined; this is the bounding part of the algorithm. The BB technique can be algorithmically sketched as follows [3], [15], [16]:

Step 1. Start with a relaxed feasible region M_0 ⊇ S and partition M_0 into finitely many subsets M_i, i = 1, 2, ..., m, where S is the feasible region of the problem.

Step 2. For each subset M_i, determine lower (and, if possible, upper) bounds, β(M_i) and α(M_i) respectively, satisfying

    β(M_i) ≤ inf f(M_i ∩ S) ≤ α(M_i),

where f is the objective function under consideration. Then, the bounds defined as

    β := min_{i=1,2,...,m} β(M_i),   and   α := min_{i=1,2,...,m} α(M_i),

are "overall" bounds, i.e.

    β ≤ min f(S) ≤ α.

Step 3. If α = β (or α - β ≤ ε, for a predefined constant ε > 0), then stop.

Step 4. Otherwise, choose some of the subsets M_i and partition them, in order to obtain a more refined partition of M_0. Determine new (hopefully better) bounds on the new partition elements, and repeat the procedure.

An advantage of the BB technique is that, during the iteration process, one can usually delete subsets of S in which the minimum of f cannot be attained. Important issues that arise during the BB procedure are those of properly partitioning the feasible region and selecting which sub-problem to evaluate.

The BB technique has been successfully applied to Integer Programming problems. The algorithm applied in this paper transforms the initial integer problem into a continuous one. Subsequently, following the prescribed procedure, it restricts the domain of the variables, which are still considered continuous, and solves the generated sub-problems using the Sequential Quadratic Programming method. This process is repeated until the variables are fixed to integer values. For the branching, Depth-First traversal with backtracking was used.
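For illustration, here is a schematic Python sketch of this bound-and-branch loop for box-constrained integer problems. It uses SciPy's SLSQP (a sequential quadratic programming routine) for the continuous relaxations and a best-first queue rather than the paper's depth-first traversal; it assumes the relaxed optimum is a valid lower bound over each box, which holds for convex objectives but is only heuristic otherwise. All names are our own:

    import heapq
    import numpy as np
    from scipy.optimize import minimize

    def branch_and_bound(f, box, eps=1e-6):
        """Schematic BB for box-constrained integer minimization.

        box is a list of (low, high) pairs with integer endpoints.
        Each node's continuous relaxation is solved with SLSQP; its
        value serves as the node's lower bound (exact for convex f).
        """
        def relax(b):
            x0 = np.array([(lo + hi) / 2.0 for lo, hi in b])
            res = minimize(f, x0, method="SLSQP", bounds=b)
            return res.x, res.fun

        x, lb = relax(box)
        incumbent = np.round(x)
        best_val = f(incumbent)                   # initial incumbent from rounding
        heap, tick = [(lb, 0, box, x)], 0         # best-first priority queue
        while heap:
            lb, _, b, x = heapq.heappop(heap)
            if lb >= best_val - eps:              # bounding: prune the node
                continue
            j = int(np.argmax(np.abs(x - np.round(x))))
            if abs(x[j] - round(x[j])) < eps:     # relaxation already integral
                xi = np.round(x)
                if f(xi) < best_val:
                    incumbent, best_val = xi, f(xi)
                continue
            for lo, hi in ((b[j][0], np.floor(x[j])), (np.ceil(x[j]), b[j][1])):
                if lo > hi:                       # empty sub-box
                    continue
                child = list(b)
                child[j] = (float(lo), float(hi))
                cx, clb = relax(child)            # branching: solve the sub-problem
                xi = np.round(cx)
                if f(xi) < best_val:
                    incumbent, best_val = xi, f(xi)
                tick += 1
                heapq.heappush(heap, (clb, tick, child, cx))
        return incumbent, best_val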
IV. EXPERIMENTAL RESULTS

Seven Integer Programming test problems were selected to investigate the performance of the PSO method. Each particle of the swarm was truncated to the closest integer, after the determination of its new position using Eq. (3).

The considered test problems, defined immediately below, are frequently encountered in the relevant literature:

Test Problem 1, [29]:

    F_1(x) = ||x||_1 = |x_1| + ... + |x_D|,

with x = (x_1, ..., x_D) ∈ [-100, 100]^D, where D is the corresponding dimension. The solution is x_i* = 0, i = 1, ..., D, with F_1(x*) = 0. This problem was considered in dimensions 5, 10, 15, 20, 25, and 30.

Test Problem 2, [29]:

    F_2(x) = x^T x = x_1^2 + ... + x_D^2,

with x = (x_1, ..., x_D)^T ∈ [-100, 100]^D, where D is the corresponding dimension. The solution is x_i* = 0, i = 1, ..., D, with F_2(x*) = 0. This is a quite trivial problem, and it was considered in dimension 5.

Test Problem 3, [9]:

    F_3(x) = -(15, 27, 36, 18, 12) x + x^T Q x,   where

    Q = (  35  -20  -10   32  -10
          -20   40   -6  -31   32
          -10   -6   11   -6  -10
           32  -31   -6   38  -20
          -10   32  -10  -20   31 ),

with best known solutions x* = (0, 11, 22, 16, 6)^T and x* = (0, 12, 23, 17, 6)^T, with F_3(x*) = -737.

Test Problem 4, [9]:

    F_4(x) = (9 x_1^2 + 2 x_2^2 - 11)^2 + (3 x_1 + 4 x_2^2 - 7)^2,

with solution x* = (1, 1)^T and F_4(x*) = 0.

Test Problem 5, [9]:

    F_5(x) = (x_1 + 10 x_2)^2 + 5 (x_3 - x_4)^2 + (x_2 - 2 x_3)^4 + 10 (x_1 - x_4)^4,

with solution x* = (0, 0, 0, 0)^T and F_5(x*) = 0.

Test Problem 6, [28]:

    F_6(x) = 2 x_1^2 + 3 x_2^2 + 4 x_1 x_2 - 6 x_1 - 3 x_2,

with solution x* = (2, -1)^T and F_6(x*) = -6.

Test Problem 7, [9]:

    F_7(x) = -3803.84 - 138.08 x_1 - 232.92 x_2 + 123.08 x_1^2 + 203.64 x_2^2 + 182.25 x_1 x_2,

with solution x* = (0, 1)^T and F_7(x*) = -3833.12.

TABLE I
SUCCESS RATE, MEAN NUMBER, STANDARD DEVIATION, AND MEDIAN OF FUNCTION EVALUATIONS, FOR THE TEST PROBLEM F1.

    Function      Method   Succ.    Mean      St.D.    Median
    F1, 5 dim     PSO-In   30/30    1646.0     661.5    1420
                  PSO-Co   30/30     744.0      89.8     730
                  PSO-Bo   30/30     692.6      97.2     680
                  BB       30/30    1167.83    659.8    1166
    F1, 10 dim    PSO-In   30/30    4652.0     483.2    4610
                  PSO-Co   30/30    1362.6     254.7    1360
                  PSO-Bo   30/30    1208.6     162.7    1230
                  BB       30/30    5495.8    1676.3    5154
    F1, 15 dim    PSO-In   30/30    7916.6     624.1    7950
                  PSO-Co   30/30    3538.3     526.6    3500
                  PSO-Bo   30/30    2860.0     220.2    2850
                  BB       30/30   10177.1    2393.4   10011
    F1, 20 dim    PSO-In   30/30    8991.6     673.3    9050
                  PSO-Co   30/30    4871.6     743.3    4700
                  PSO-Bo   29/30    4408.3    3919.4    3650
                  BB       30/30   16291.3    3797.9   14550
    F1, 25 dim    PSO-In   30/30   11886.6     543.7   11900
                  PSO-Co   30/30    9686.6     960.1    9450
                  PSO-Bo   25/30    9553.3    7098.6    6500
                  BB       20/30   23689.7    2574.2   25043
    F1, 30 dim    PSO-In   30/30   13186.6     667.8   13050
                  PSO-Co   30/30   12586.6    1734.9   12500
                  PSO-Bo   19/30   13660.0    8863.9    7500
                  BB       14/30   25908.6     755.5   26078

Three variants of PSO were used in the experiments: one with inertia weight and without constriction factor, denoted as PSO-In; one with constriction factor and without inertia weight, denoted as PSO-Co; and one with both constriction factor and inertia weight, denoted as PSO-Bo. For all experiments, the maximum number of allowed function evaluations was set to 25000; the desired accuracy was 10^-6; the constriction factor χ was set equal to 0.729; the inertia weight w was gradually decreased from 1 towards 0.1; c_1 = c_2 = 2; and V_max = 4. The aforementioned values for all of PSO's parameters are considered default values, and they are widely used in the relevant literature [14]. There was no preprocessing stage that might yield more suitable values for the parameters. For each test problem, 30 experiments were performed, starting with a swarm and velocities uniformly distributed within the range [-100, 100]^D, where D is the dimension of the corresponding problem, and truncated to the nearest integer.

For the BB algorithm, 30 experiments were performed for each test problem, starting from a randomly selected point within [-100, 100]^D, truncated to the nearest integer.
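As a usage illustration, two of the above test problems can be coded and passed to the pso_integer sketch from Section II (hypothetical names; the swarm size follows Table III below):

    import numpy as np

    # Test problems F1 and F6 from above, written for the pso_integer sketch.
    def F1(x):
        return np.sum(np.abs(x))          # ||x||_1; global minimum 0 at the origin

    def F6(x):
        x1, x2 = x
        return 2*x1**2 + 3*x2**2 + 4*x1*x2 - 6*x1 - 3*x2   # minimum -6 at (2, -1)

    best_x, best_val = pso_integer(F6, dim=2, swarm_size=10)
    print(best_x, best_val)               # typically [ 2. -1.] and -6.0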
The maximum number of allowed function evaluations and the desired accuracy were the same as for PSO.

For both algorithms, the number of successes in detecting the integer global minimum of the corresponding problem within the maximum number of function evaluations, as well as the mean, the standard deviation, and the median of the required number of function evaluations, were recorded, and they are reported in Tables I and II.

The swarm's size was problem dependent. The swarm's size for each test problem is reported in Table III. It should be noted at this point that, although the swarm's size was problem dependent, the maximum number of allowed function evaluations was equal to 25000 for all cases.

TABLE II
SUCCESS RATE, MEAN NUMBER, STANDARD DEVIATION, AND MEDIAN OF FUNCTION EVALUATIONS, FOR THE TEST PROBLEMS F2-F7.

    Function     Method   Succ.    Mean     St.D.    Median
    F2, 5 dim    PSO-In   30/30   1655.6     618.4   1650
                 PSO-Co   30/30    428.0      57.9    430
                 PSO-Bo   30/30    418.3      83.9    395
                 BB       30/30    139.7     102.6     93
    F3           PSO-In   30/30   4111.3    1186.7   3850
                 PSO-Co   30/30   2972.6     536.4   2940
                 PSO-Bo   30/30   3171.0     493.6   3080
                 BB       30/30   4185.5      32.8   4191
    F4           PSO-In   30/30    304.0     101.6    320
                 PSO-Co   30/30    297.3      50.8    290
                 PSO-Bo   30/30    302.0      80.5    320
                 BB       30/30    316.9     125.4    386
    F5           PSO-In   30/30   1728.6     518.9   1760
                 PSO-Co   30/30   1100.6     229.2   1090
                 PSO-Bo   30/30   1082.0     295.6   1090
                 BB       30/30   2754.0    1030.1   2714
    F6           PSO-In   30/30    178.0      41.9    180
                 PSO-Co   30/30    198.6      59.2    195
                 PSO-Bo   30/30    191.0      65.9    190
                 BB       30/30    211.1      15.0    209
    F7           PSO-In   30/30    334.6      95.5    340
                 PSO-Co   30/30    324.0      78.5    320
                 PSO-Bo   30/30    306.6      96.7    300
                 BB       30/30    358.6      14.7    355

TABLE III
DIMENSION AND SWARM'S SIZE FOR ALL TEST PROBLEMS.

    Function   Dimension   Swarm's Size
    F1             5            20
    F1            10            20
    F1            15            50
    F1            20            50
    F1            25           100
    F1            30           100
    F2             5            10
    F3             5            70
    F4             2            20
    F5             4            20
    F6             2            10
    F7             2            20
In a second round of experiments, a PSO with gradually truncated particles was used. Specifically, the particles for the first 50 iterations were rounded to 6 decimal digits (d.d.), for another 100 iterations they were rounded to 4 d.d., for another 100 iterations they were rounded to 2 d.d., and for the remaining iterations they were rounded to the nearest integer. The results obtained using this gradually truncated variant of PSO were very similar to the results reported in Tables I and II for the plain PSO.
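The rounding schedule of this second round can be stated compactly; the following helper is a sketch under the same naming assumptions as the earlier code:

    def round_particle(x, iteration):
        """Gradual truncation: 6 decimal digits for the first 50 iterations,
        4 d.d. up to iteration 150, 2 d.d. up to iteration 250, and the
        nearest integer thereafter."""
        if iteration < 50:
            decimals = 6
        elif iteration < 150:
            decimals = 4
        elif iteration < 250:
            decimals = 2
        else:
            decimals = 0
        return x.round(decimals)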
V. CONCLUSIONS

The ability of the PSO method to cope with Integer Programming problems formed the core of the paper. Experimental results for seven widely used test problems indicate that PSO is a very effective method and should be considered as a good alternative for handling such problems. The behavior of PSO seems to be stable even in high dimensional cases, exhibiting high success rates even in cases in which the BB technique failed. In most cases, PSO outperformed the BB approach in terms of the mean number of required function evaluations.

Moreover, the method does not seem to suffer from search stagnation. The aggregate movement of each particle, towards its own best position and the best position ever attained by the swarm, added to its weighted previous position change, ensures that particles maintain a position change of proper magnitude during the process of optimization.

Regarding the three different variants of PSO, PSO-Bo, which utilizes both inertia weight and constriction factor, was the fastest, but the other two approaches possess better global convergence abilities, especially on high dimensional problems. In most experiments, PSO-Co, which utilizes only a constriction factor, was significantly faster than PSO-In, which utilizes only inertia weight.

In general, PSO seems an efficient alternative for solving Integer Programming problems when deterministic approaches fail, or it can be considered as an algorithm for providing good initial points to deterministic methods, such as the BB technique, thus helping them converge to the global minimizer of the integer problem.

VI. ACKNOWLEDGEMENT

Part of this work was done while the authors (K.E.P. and M.N.V.) were at the Department of Computer Science, University of Dortmund, D-44221 Dortmund, Germany. This material was partially supported by the Deutsche Forschungsgemeinschaft (DFG, German National Research Foundation) as part of the collaborative research center "Computational Intelligence" (SFB 531).

References

[1] P.J. Angeline, "Evolutionary Optimization Versus Particle Swarm Optimization: Philosophy and Performance Differences", Evolutionary Programming VII, pp. 601-610, 1998.
[2] W. Banzhaf, P. Nordin, R.E. Keller and F.D. Francone, Genetic Programming - An Introduction, Morgan Kaufmann: San Francisco, 1998.
[3] B. Borchers and J.E. Mitchell, "Using an Interior Point Method in a Branch and Bound Algorithm for Integer Programming", Technical Report, Rensselaer Polytechnic Institute, July 1992.
[4] R.C. Eberhart, P.K. Simpson and R.W. Dobbins, Computational Intelligence PC Tools, Academic Press Professional: Boston, 1996.
[5] R.C. Eberhart and Y.H. Shi, "Evolving Artificial Neural Networks", Proc. Int. Conf. on Neural Networks and Brain, Beijing, P.R. China, 1998.
[6] R.C. Eberhart and Y.H. Shi, "Comparison Between Genetic Algorithms and Particle Swarm Optimization", Evolutionary Programming VII, pp. 611-615, 1998.
[7] D.B. Fogel, Evolutionary Computation: Toward a New Philosophy of Machine Intelligence, IEEE Press: New York, 1995.
[8] D.A. Gall, "A Practical Multifactor Optimization Criterion", in A. Levi and T.P. Vogl (Eds.), Recent Advances in Optimization Techniques, pp. 369-386, 1966.
[9] A. Glankwahmdee, J.S. Liebman and G.L. Hogg, "Unconstrained Discrete Nonlinear Programming", Engineering Optimization, Vol. 4, pp. 95-107, 1979.
[10] R. Horst and H. Tuy, Global Optimization: Deterministic Approaches, Springer, 1996.
[11] R.C. Kelahan and J.L. Gaddy, "Application of the Adaptive Random Search to Discrete and Mixed Integer Optimization", International Journal for Numerical Methods in Engineering, Vol. 12, pp. 289-298, 1978.
[12] J. Kennedy, "The Behavior of Particles", Evolutionary Programming VII, pp. 581-587, 1998.
[13] J. Kennedy and R.C. Eberhart, "Particle Swarm Optimization", Proc. of the IEEE International Conference on Neural Networks, Piscataway, NJ, USA, pp. 1942-1948, 1995.
[14] J. Kennedy and R.C. Eberhart, Swarm Intelligence, Morgan Kaufmann Publishers, 2001.
[15] E.L. Lawler and D.W. Wood, "Branch and Bound Methods: A Survey", Operations Research, Vol. 14, pp. 699-719, 1966.
[16] V.M. Manquinho, J.P. Marques Silva, A.L. Oliveira and K.A. Sakallah, "Branch and Bound Algorithms for Highly Constrained Integer Programs", Technical Report, Cadence European Laboratories, Portugal, 1997.
[17] G.L. Nemhauser, A.H.G. Rinnooy Kan and M.J. Todd (Eds.), Handbooks in OR & MS, Vol. 1: Optimization, Elsevier, 1989.
[18] G.L. Nemhauser and L.A. Wolsey, Integer and Combinatorial Optimization, John Wiley and Sons, 1988.
[19] K.E. Parsopoulos, V.P. Plagianakos, G.D. Magoulas and M.N. Vrahatis, "Objective Function 'Stretching' to Alleviate Convergence to Local Minima", Nonlinear Analysis TMA, Vol. 47(5), pp. 3419-3424, 2001.
[20] K.E. Parsopoulos, V.P. Plagianakos, G.D. Magoulas and M.N. Vrahatis, "Stretching Technique for Obtaining Global Minimizers Through Particle Swarm Optimization", Proc. of the Particle Swarm Optimization Workshop, Indianapolis (IN), USA, pp. 22-29, 2001.
[21] K.E. Parsopoulos and M.N. Vrahatis, "Modification of the Particle Swarm Optimizer for Locating All the Global Minima", in V. Kurkova, N. Steele, R. Neruda and M. Karny (Eds.), Artificial Neural Networks and Genetic Algorithms, Springer: Wien (Computer Science Series), pp. 324-327, 2001.
[22] K.E. Parsopoulos and M.N. Vrahatis, "Particle Swarm Optimizer in Noisy and Continuously Changing Environments", in M.H. Hamza (Ed.), Artificial Intelligence and Soft Computing, IASTED/ACTA Press, pp. 289-294, 2001.
[23] K.E. Parsopoulos, E.C. Laskari and M.N. Vrahatis, "Solving ℓ1 Norm Errors-In-Variables Problems Using Particle Swarm Optimizer", in M.H. Hamza (Ed.), Artificial Intelligence and Applications, IASTED/ACTA Press, pp. 185-190, 2001.
[24] K.E. Parsopoulos and M.N. Vrahatis, "Initializing the Particle Swarm Optimizer Using the Nonlinear Simplex Method", Proc. WSES Evolutionary Computation 2002 Conference, Interlaken, Switzerland, in press.
[25] K.E. Parsopoulos and M.N. Vrahatis, "Particle Swarm Optimization Method in Multiobjective Problems", ACM SAC 2002 Conference, Madrid, Spain, in press.
[26] V.P. Plagianakos and M.N. Vrahatis, "Training Neural Networks with Threshold Activation Functions and Constrained Integer Weights", Proc. of the IEEE International Joint Conference on Neural Networks (IJCNN 2000), Como, Italy, 2000.
[27] W.H. Press, W.T. Vetterling, S.A. Teukolsky and B.P. Flannery, Numerical Recipes in Fortran 77, Cambridge University Press: Cambridge, 1992.
[28] S.S. Rao, Engineering Optimization: Theory and Practice, Wiley Eastern: New Delhi, 1996.
[29] G. Rudolph, "An Evolutionary Algorithm for Integer Programming", in Y. Davidor, H.-P. Schwefel and R. Manner (Eds.), Parallel Problem Solving from Nature 3, pp. 139-148, Springer, 1994.
[30] H.-P. Schwefel, Evolution and Optimum Seeking, Wiley, 1995.
[31] Y. Shi and R.C. Eberhart, "Parameter Selection in Particle Swarm Optimization", Evolutionary Programming VII, pp. 591-600, 1998.