European Journal of Operational Research 94 (1996) 392-404

Theory and Methodology

A genetic algorithm for the set covering problem

J.E. Beasley *, P.C. Chu
The Management School, Imperial College, London SW7 2AZ, UK

Received July 1994; revised June 1995

* Corresponding author. E-mail: jbeasley@ic.ac.uk

Abstract

In this paper we present a genetic algorithm-based heuristic for non-unicost set covering problems. We propose several modifications to the basic genetic procedures, including a new fitness-based crossover operator (fusion), a variable mutation rate and a heuristic feasibility operator tailored specifically for the set covering problem. The performance of our algorithm was evaluated on a large set of randomly generated problems. Computational results showed that the genetic algorithm-based heuristic is capable of producing high-quality solutions.

Keywords: Genetic algorithms; Set covering; Combinatorial optimisation

1. Introduction

The set covering problem (SCP) is the problem of covering the rows of an m-row, n-column, zero-one matrix (a_{ij}) by a subset of the columns at minimal cost. Defining x_j = 1 if column j (with cost c_j > 0) is in the solution and x_j = 0 otherwise, the SCP is

  Minimise \sum_{j=1}^{n} c_j x_j    (1)

subject to

  \sum_{j=1}^{n} a_{ij} x_j \ge 1,  i = 1, ..., m,    (2)

  x_j \in \{0, 1\},  j = 1, ..., n.    (3)

Eq. (2) ensures that each row is covered by at least one column and (3) is the integrality constraint. If all the cost coefficients c_j are equal, the problem is called the unicost SCP. The SCP has been proven to be NP-complete [10].

A number of optimal and heuristic solution algorithms have been presented in the literature in recent years. Fisher and Kedia [9] presented an optimal solution algorithm based on a dual heuristic and applied it to SCPs with up to 200 rows and 2000 columns.
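To make the 0-1 model above concrete, the following minimal sketch (the instance and function names are invented for illustration, not taken from the paper) evaluates objective (1) and checks the cover constraints (2) for a candidate solution vector x:

```python
# Illustrative sketch: a tiny SCP instance and an evaluation of
# objective (1) plus feasibility check of constraints (2).

def scp_cost_and_feasible(a, c, x):
    """a: m x n zero-one coverage matrix, c: column costs, x: 0-1 decision vector."""
    m, n = len(a), len(c)
    # Constraint (2): every row i must satisfy sum_j a[i][j] * x[j] >= 1.
    feasible = all(sum(a[i][j] * x[j] for j in range(n)) >= 1 for i in range(m))
    # Objective (1): total cost of the selected columns.
    cost = sum(c[j] * x[j] for j in range(n))
    return cost, feasible

# 3 rows, 4 columns: column j covers the rows i with a[i][j] == 1.
a = [[1, 0, 1, 0],
     [1, 1, 0, 0],
     [0, 1, 0, 1]]
c = [2, 3, 1, 1]
print(scp_cost_and_feasible(a, c, [1, 1, 0, 0]))  # (5, True): columns 0, 1 cover all rows
print(scp_cost_and_feasible(a, c, [1, 0, 0, 0]))  # (2, False): row 2 uncovered
```

Setting all c_j = 1 in such an instance would give the unicost SCP mentioned above.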
Beasley [7] combined a Lagrangian heuristic, feasible solution exclusion constraints, Gomory f-cuts and an improved branching strategy to enhance his previous algorithm [5], and solved problems with up to 400 rows and 4000 columns. Of recent interest is the work of Harche and Thompson [13], who developed an exact algorithm, called column subtraction, which is capable of solving large sparse instances of set covering problems. These optimal solution algorithms are based on tree-search procedures.

Among the heuristic methods, Beasley [6] presented a Lagrangian heuristic algorithm and reported that his heuristic gave better-quality results than a number of other heuristics [2,19], for problems involving up to 1000 rows and 10000 columns. Recently, Jacobs and Brusco [15] developed a heuristic based on simulated annealing and reported considerable success on problems with up to 1000 rows and 10000 columns. Sen [18] investigated the performances of a simulated annealing algorithm and a simple genetic algorithm on the minimal cost set covering problem, but few computational results were given. A comparative study of several different approximation algorithms for the SCP was conducted in Ref. [12].

0377-2217/96/$15.00 Copyright © 1996 Elsevier Science B.V. All rights reserved. SSDI 0377-2217(95)00159-X

2. Genetic algorithms

A genetic algorithm (GA) can be understood as an "intelligent" probabilistic search algorithm which can be applied to a variety of combinatorial optimisation problems [16]. The theoretical foundations of GAs were originally developed by Holland [14]. The idea of GAs is based on the evolutionary process of biological organisms in nature.
During the course of evolution, natural populations evolve according to the principles of natural selection and "survival of the fittest". Individuals which are more successful in adapting to their environment will have a better chance of surviving and reproducing, whilst individuals which are less fit will be eliminated. This means that the genes from the highly fit individuals will spread to an increasing number of individuals in each successive generation. The combination of good characteristics from highly adapted ancestors may produce even more fit offspring. In this way, species evolve to become more and more well adapted to their environment.

A GA simulates these processes by taking an initial population of individuals and applying genetic operators in each reproduction. In optimisation terms, each individual in the population is encoded into a string or chromosome which represents a possible solution to a given problem. The fitness of an individual is evaluated with respect to a given objective function. Highly fit individuals or solutions are given opportunities to reproduce by exchanging pieces of their genetic information, in a crossover procedure, with other highly fit individuals. This produces new "offspring" solutions (i.e. children), which share some characteristics taken from both parents. Mutation is often applied after crossover by altering some genes in the strings. The offspring can either replace the whole population (generational approach) or replace less fit individuals (steady-state approach). This evaluation-selection-reproduction cycle is repeated until a satisfactory solution is found. The basic steps of a simple GA are shown below. A more comprehensive overview of GAs can be found in Refs.
[3,4,11,16].

  Generate an initial population;
  Evaluate fitness of individuals in the population;
  repeat
    Select parents from the population;
    Recombine (mate) parents to produce children;
    Evaluate fitness of the children;
    Replace some or all of the population by the children;
  until a satisfactory solution has been found;

3. The GA-based heuristic

We modified the basic GA described in the previous section in a way such that problem-specific knowledge is considered. Without loss of generality, we shall assume that the columns in the SCP are ordered in increasing order of cost, and columns of equal cost are ordered in decreasing order of the number of rows that they cover. Our modified GA is as follows.

3.1. Representation and fitness function

The first step in designing a genetic algorithm for a particular problem is to devise a suitable representation scheme. The usual 0-1 binary representation is an obvious choice for the SCP since it represents the underlying 0-1 integer variables. We used an n-bit binary string as the chromosome structure, where n is the number of columns in the SCP. A value of 1 for the jth bit implies that column j is in the solution. The binary representation of an individual's chromosome (solution) for the SCP is illustrated in Fig. 1.

Fig. 1. Binary representation of an individual's chromosome.

Fig. 2. Non-binary representation of an individual's chromosome.

The fitness of an individual is directly related to its objective function value. With the binary representation, the fitness f_i of an individual i is calculated simply by

  f_i = \sum_{j=1}^{n} c_j s_{ij},

where s_{ij} is the value of the jth bit (column) in the string corresponding to the ith individual and c_j is the cost of bit (column) j.
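The fitness calculation above is simply the cost of the selected columns; a minimal sketch (function names are illustrative, not the authors' code):

```python
# Fitness under the n-bit binary representation: f_i = sum_j c_j * s_ij,
# i.e. the total cost of the columns whose bit is set. Lower is better,
# since the SCP is a minimisation problem.

def fitness(chromosome, costs):
    return sum(c for bit, c in zip(chromosome, costs) if bit == 1)

costs = [2, 3, 1, 1, 4]
print(fitness([1, 0, 1, 0, 1], costs))  # 2 + 1 + 4 = 7
print(fitness([0, 0, 0, 0, 0], costs))  # 0 (empty, infeasible solution)
```

Note that the fitness alone says nothing about feasibility; as discussed next, infeasible strings must be handled separately.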
An important issue concerning the use of the binary representation is that when applying genetic operators to the binary strings, the resulting solutions are no longer guaranteed to be feasible. There are two ways of dealing with infeasible solutions. One way is to apply a penalty function to penalise the fitnesses of infeasible solutions without distorting the fitness landscape [17]. The other way is to design heuristic operators which transform infeasible solutions into feasible solutions. We chose the latter approach of using heuristic operators because a "good" penalty function is often difficult to determine.

The problem of maintaining feasibility may also be resolved by using a non-binary representation. One possible representation is to have the chromosome size equal to the number of rows in the SCP. In this representation, the location of each gene corresponds to a row in the SCP and the encoded value of each gene is a column that covers that row (see Fig. 2). Since the same column may be represented in more than one gene location, a modified decoding method for fitness evaluation is used by extracting only the unique set of columns which the chromosome represents (i.e. repeated columns are only counted once). With this representation, feasibility can generally be maintained throughout the crossover and mutation procedures. But the evaluation of the fitness may become ambiguous, because the same solution can be represented in different forms and each form may give a different fitness depending on how the string is represented. Limited computational experience led us to believe that the performance of the GA using this non-binary representation was inferior to that of the GA using the binary representation.

3.2. Parent selection techniques

Parent selection is the task of assigning reproductive opportunities to each individual in the population. There are a number of widely used methods, including
proportionate selection and tournament selection.

The proportionate selection method calculates the probabilities of individuals being selected as proportional to their fitnesses and selects individuals for mating based on this probability distribution. The tournament selection method works by forming two pools of individuals, each consisting of T individuals drawn from the population randomly. Two individuals with the best fitness, each taken from one of the two tournament pools, are chosen for mating. Using a larger value for T has the effect of increasing selection pressure on the more fit individuals.

We chose binary tournament selection (i.e. T = 2) as the method for parent selection because it can be implemented very efficiently (without having to calculate an individual's selection probability as required by proportionate selection). Our study showed that the performance of binary tournament selection in terms of solution quality is comparable to that of proportionate selection.

3.3. Crossover operators

In a traditional GA, simple crossover operators such as one-point or two-point crossover are often used. These crossover operators work by randomly generating one or more crossover point(s) and then swapping segments of the two parent strings to produce two child strings. The one-point crossover is formally defined as follows. Let P_1 and P_2 be the parent strings P_1[1], ..., P_1[n] and P_2[1], ..., P_2[n], respectively. Generate a crossover point k, 1 <= k <= n - 1; the two children are then formed by swapping the segments of P_1 and P_2 after position k.

3.5. Heuristic feasibility operator

Let S denote the set of columns in a solution, alpha_i the set of columns that cover row i, beta_j the set of rows covered by column j, w_i the number of columns in S that cover row i, and U the set of uncovered rows. The operator proceeds as follows.

(i) Compute w_i := |S intersect alpha_i| for every row i.
(ii) Set U := {i | w_i = 0}, the set of uncovered rows.
(iii) For each uncovered row i in U: find the first column j in alpha_i (in increasing order of cost) that minimises c_j / |U intersect beta_j|; add j to S, set w_i := w_i + 1 for all i in beta_j, and set U := U - beta_j.
(iv) For each column j in S (in decreasing order of cost): if w_i >= 2 for all i in beta_j, set S := S - {j} and set w_i := w_i - 1 for all i in beta_j.
(v) S is now a feasible solution for the SCP that contains no redundant columns.

Steps (i) and (ii) identify the uncovered rows. Steps (iii) and (iv) are "greedy" heuristics in the sense that in step (iii) columns with low cost ratios are being considered first, and in step (iv) columns with high costs are dropped first whenever possible.
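The repair steps can be sketched compactly as below; this is a minimal illustration under the stated notational assumptions (alpha_i = columns covering row i, beta_j = rows covered by column j, w_i = cover counts), not the authors' own code:

```python
# Sketch of the heuristic feasibility operator: greedily cover uncovered
# rows by cost ratio, then drop redundant columns in decreasing cost order.

def repair(S, a, c):
    m, n = len(a), len(c)
    beta = [[i for i in range(m) if a[i][j]] for j in range(n)]   # rows covered by column j
    alpha = [[j for j in range(n) if a[i][j]] for i in range(m)]  # columns covering row i
    w = [sum(1 for j in S if a[i][j]) for i in range(m)]          # (i) cover counts
    U = {i for i in range(m) if w[i] == 0}                        # (ii) uncovered rows
    while U:                                                      # (iii) greedy cover
        i = min(U)
        j = min(alpha[i], key=lambda j: c[j] / len(U & set(beta[j])))
        S = S | {j}
        for r in beta[j]:
            w[r] += 1
        U -= set(beta[j])
    for j in sorted(S, key=lambda j: -c[j]):                      # (iv) drop redundancy
        if all(w[r] >= 2 for r in beta[j]):
            S = S - {j}
            for r in beta[j]:
                w[r] -= 1
    return S                                                      # (v) feasible, no redundancy

a = [[1, 0, 1, 0],
     [1, 1, 0, 0],
     [0, 1, 0, 1]]
c = [2, 3, 1, 1]
print(sorted(repair(set(), a, c)))  # [0, 3]: covers all rows at cost 3
print(sorted(repair({1}, a, c)))    # [1, 2]: row 0 repaired with cheap column 2
```

Applied after crossover and mutation, such an operator guarantees every child entering the population is a feasible, irredundant cover.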
3.6. Population replacement model

Once a new feasible child solution has been generated, the child will replace a randomly chosen member (usually one with an above-average fitness value) in the population. Note that an above-average fitness means less fit, since the SCP is a minimisation problem. This type of replacement method is called incremental or steady-state replacement. Another commonly used method is generational replacement, where a new population of children is generated and the whole parent population is replaced.

The advantages of the steady-state replacement method are that the best solutions are always kept in the population and the newly generated solution is immediately available for selection and reproduction. Hence, a GA which uses the steady-state replacement method tends to converge faster than one which uses the generational replacement method. Limited computational experience showed that our GA using the steady-state replacement method produced better results than when using the generational replacement method.

When using the steady-state replacement method, care must be taken to prevent a duplicate solution from entering the population. A duplicate child is one whose solution structure is identical to any one of the N solution structures in the population. Allowing duplicate solutions to exist in the population may be undesirable because a population could come to consist of N identical solutions.

3.7. Overview

To summarise our modified GA for the SCP, the following steps are used.
(i) Generate an initial population of N random solutions. Set t := 0.
(ii) Select two solutions P_1 and P_2 from the population using binary tournament selection.
(iii) Combine P_1 and P_2 to form a new solution C using the fusion crossover operator.
(iv) Mutate k randomly selected columns in C, where k is determined by the variable mutation schedule.
(v) Make C feasible and remove redundant columns in C by applying the heuristic operator.
(vi) If C is identical to any one of the solutions in the population, go to step (ii); otherwise, set t := t + 1 and go to step (vii).
(vii) Replace a randomly chosen solution with an above-average fitness in the population by C (steady-state replacement method).
(viii) Repeat steps (ii)-(vii) until t = M (i.e., M non-duplicate solutions have been generated).

4.1. Population size and initial population

Here lambda is a factor representing the average number of appearances of a column in the initial population. The size of the population is then

  N = lambda * n * d,    (6)

where d is the density of the SCP. Eq. (6) shows that the required size of the population is proportional to the number of columns and the density of a SCP. For a large SCP, the size of the population may become very large. For example, if we let lambda = 20 to ensure adequate coverage, a SCP with 10000 columns and 5% density will require a population size N = (20)(10000)(0.05) = 10000. This size is clearly too large for the GA to work efficiently.

In order to make the population size less dependent on problem size, we need to modify the initial solution selection rule above. One simple way is to generate an initial population that covers only part of the whole solution domain. To do this, we modify step (iia) above to

(iia) randomly select a column j in alpha_i^k, a subset of alpha_i,

where alpha_i^k is defined as the set of the k least-cost columns in alpha_i. Since the columns in alpha_i are in increasing order of cost, alpha_i^k is simply the set of the first k columns in alpha_i, and alpha_i^k = alpha_i when |alpha_i| <= k. The initial population will now need to cover only the union of the alpha_i^k over all rows i, which is a subset of J. In general, the value of k should be chosen such that there is a high probability that the set of columns in the optimal solution S_opt is a subset of this union. For the non-unicost SCP, S_opt generally consists of columns which have low cost.
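The modified initialisation rule (iia) above can be sketched as follows (a hedged illustration; the function names and the tiny instance are invented, and for each row a random column is drawn from its k least-cost covering columns):

```python
import random

# Sketch of restricted-domain initialisation: for each row i, pick a random
# column from alpha_i^k, the k least-cost columns covering i (k = 5 in the paper).

def initial_individual(a, c, k=5):
    m, n = len(a), len(c)
    S = set()
    for i in range(m):
        alpha_i = sorted((j for j in range(n) if a[i][j]), key=lambda j: c[j])
        S.add(random.choice(alpha_i[:k]))  # alpha_i^k = alpha_i when |alpha_i| <= k
    return S                               # every row is covered by construction

def initial_population(a, c, N=100, k=5):
    return [initial_individual(a, c, k) for _ in range(N)]

a = [[1, 0, 1, 0],
     [1, 1, 0, 0],
     [0, 1, 0, 1]]
c = [2, 3, 1, 1]
pop = initial_population(a, c, N=10, k=2)
# Every individual covers all rows, while only low-cost columns are sampled.
print(all(all(any(a[i][j] for j in S) for i in range(3)) for S in pop))  # True
```

Because each row contributes exactly one column draw, individuals are always feasible covers, though possibly with redundant columns that the feasibility operator later removes.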
In our study, we chose k = 5 for all the test problems we considered.

To estimate the effect that the modified initial population selection rule above has on the required population size with respect to different problem sizes and densities, we obtained some empirical data based on our test problems. Table 1 shows the average number of columns in the new restricted solution domain and the values of lambda with respect to different problem sizes and densities when N = 100 and k = 5. The sizes and densities of these test problems are given in Table 2. The data in Table 1 shows that by modifying the initial population selection criterion, only a small population size (N = 100) is necessary to provide adequate coverage of the initial restricted solution domain, regardless of the size and density of the problem.

Table 1. Average number of columns in the restricted solution domain and the values of lambda (N = 100, k = 5).

4.2. Mutation schedule

The variable mutation rate described in Section 3.4 involves three constant coefficients:
(i) m_f, which specifies the final stable mutation rate;
(ii) m_c, which specifies the number of child solutions (t) that are generated before the mutation schedule reaches half of the stable mutation rate;
(iii) m_g, which specifies the gradient of the mutation function at t = m_c.

As discussed in Section 3.4, the mutation operator becomes the main force for searching when the GA begins to converge. However, because mutation is only an intermediate step in the reproduction process, the bits which have been mutated may still be altered later by the heuristic feasibility operator. Therefore, m_f does not explicitly determine the magnitude of the search by the mutation operator as it would do without the presence of the heuristic feasibility operator. Our study indicated that the final converged solutions are generally not very sensitive to the values of m_f.
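Eq. (5), the mutation function itself, does not survive legibly in this copy; the logistic schedule below is an assumed reconstruction consistent with the coefficient definitions above (number of mutated bits rising from near zero to the stable rate m_f, passing m_f/2 at t = m_c, with gradient governed by m_g), so treat its exact form, including the ceiling, as an assumption:

```python
import math

# Assumed form of the variable mutation schedule (Eq. (5) is illegible in
# this copy): a logistic curve in t, the number of child solutions generated.

def mutation_rate(t, m_f=10, m_c=200, m_g=2.0):
    return math.ceil(m_f / (1 + math.exp(-4 * m_g * (t - m_c) / m_f)))

print(mutation_rate(0))     # 1  (at most one mutated bit early on)
print(mutation_rate(200))   # 5  (= m_f / 2 at t = m_c)
print(mutation_rate(1000))  # 10 (= m_f, the stable rate)
```

Under this reading, mutation is essentially dormant while crossover drives the search, then ramps up as the population converges.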
Choosing a value between 5 and 10 for m_f is usually adequate to prevent the GA from being trapped in a local minimum. The values of m_c and m_g are determined according to the manner in which the GA converges. In principle, the mutation schedule should "match" the convergence of the GA in the manner shown in Fig. 3. The GA convergence curve can be estimated by running the GA once without the mutation operator. The desirable mutation curve is then approximated (through visual inspection) by manipulating the mutation function in Eq. (5) using m_c and m_g.

Table 2. Test problem details

Problem set   Rows (m)   Columns (n)   Density (%)   Number of problems
4             200        1000          2             10
5             200        2000          2             10
6             200        1000          5             5
A             300        3000          2             5
B             300        3000          5             5
C             400        4000          2             5
D             400        4000          5             5
E             500        5000          10            5
F             500        5000          20            5
G             1000       10000         2             5
H             1000       10000        5             5

Another important parameter concerning mutation is the set of bits (columns) in a string in which the mutation should take place such that the GA is effective. It is obvious that mutating high-cost columns is not as productive as mutating low-cost columns, since the chance of high-cost columns constituting part of the optimal solution is rather small. Therefore, it is reasonable, as well as computationally more efficient, to allow the mutation to occur only in a set of "elite" bits (columns) which have a relatively better chance to be in the optimal solution. In our GA, we defined the elite column set based on the same criterion as that used for generating the initial population, that is, the union over all rows i of alpha_i^5, where alpha_i^5 is the set of the 5 least-cost columns in alpha_i.

5. Computational results

The algorithm presented in this paper was coded in C and tested on a Silicon Graphics Indigo workstation (R4000, 100 MHz). Our computational study was conducted on 11 test problem sets (a total of 65 SCPs) of various sizes and densities. These test problem sets were obtained electronically from OR-Library
(e-mail the message scpinfo to o.rlibrary@ic.ac.uk). The details of these test problems are given in Table 2. Problem sets 4-6 and A-D are ones for which optimal solution values are known. Problem sets E-H are large SCPs for which optimal solution values are not known. Note here that we did not incorporate into our algorithm any of the problem reduction tests [5,6] available for SCPs.

Table 3. Mutation coefficients

Problem set   m_c    m_g
4             200    1.3
5             200    0.6
6             200    2.0
A             200    2.0
B             300    1.0
C             200    2.0
D             200    2.0
E             350    1.1
F             400    1.0
G             250    1.3
H             400    0.5

In our computational study, 10 trials of the GA heuristic (each with a different random seed) were made for each of the test problems. Each trial terminated when M = 100000 non-duplicate solutions had been generated. The population size N was set to 100 for all the problems, and the coefficients of the mutation function (m_f, m_c and m_g) for each of the problem sets are listed in Table 3. The coefficients in Table 3 are determined with relation to the GA convergence by using the first problem in each of the problem sets. Since the convergence behaviours are similar within each of the problem sets, we used only the first problem in each set to generalise the convergence behaviour for that set. Table 3 shows that the final mutation rate m_f is set to 10 for all the problems and the coefficients m_c and m_g are set based on the GA convergence curves given in Fig. 4. Since the m_c's and m_g's are relatively similar for all the problems, we made a further generalisation and used m_c = 200 and m_g = 2.0 for all the test problems. The GA convergence curves in Fig. 4 also show that the GA generally converges very rapidly (when roughly 200 non-duplicate child solutions had been generated).

The computational results are shown in Table 4. In Table 4 we give, for each problem:
(i) the optimal solution value (problem sets 4-6 and A-D) from Ref.
[7] or the previous best-known solution value (problem sets E-H) from Ref. [15];
(ii) the best solution value in each of the 10 trials and the average percentage deviation from the optimal solution value (problem sets 4-6 and A-D) or from the previous best-known solution value (problem sets E-H);
(iii) the average solution time and the average execution time.

The average percentage deviation is calculated by

  \sum_{i=1}^{10} (ST_i - S*) / (10 S*) x 100%,    (7)

where ST_i is the ith trial minimum (best) solution value and S* is the optimal solution value or the previous best-known solution value. The solution time is measured in CPU seconds and is the time that the GA takes to first reach the final best solution. The execution time is the time (CPU seconds) that the GA takes to generate 100000 non-duplicate child solutions. For a comparative performance of different computer systems, see Ref. [8].

Examining Table 4, we observe that:
(i) For problem sets 4-6 and A-D, the GA-based heuristic found optimal solutions in at least one of the 10 trials for all but one of the 45 test problems. The average percentage deviations from the optimal values are between zero and 1.4%.
(ii) For problem sets E-H, the GA-based heuristic produced better (or equal) solutions than the previous best-known solutions in at least one of the 10 trials for all 20 test problems.
(iii) In 7 out of the 20 problems in problem sets E-H, the GA-based heuristic generated better solution values than the previous best-known solutions. The negative average percentage deviation indicates the average percentage improvement over the previous best-known solution.
(iv) The average solution times are reasonably small for all the problems (under 800 CPU seconds). The average solution times are also much less than the average execution times. This suggests that the GA terminating condition of generating 100000 solutions could be reduced without affecting solution quality.

We also compared the performance of different crossover operators, namely, the one-point, the two-point, the uniform and the fusion crossover operators
We also compared the performance of different crossover operators, namely, the one-point, the two- point, the uniform and the fusion crossover operators 8, Beasley, PC. Chu/Buropean Journal of Operational Research 94 (1996) 392-404 401 ‘ble 4 Computational resus Problem —Opimal/ PBS! Best solution in each ofthe 10 tas ‘verge Averge Average ‘% solution tine execution time 4a 0 0 42 0 430 40 430 © 0 O16 1042 2704 42 Ge el oe eee oe 32 2766 3 ee Oh oes oe eee er 3k 2482 44 D0 6 8 « So 9 0 0 OG S02 287 45 Be Oh Geel el oe cece. ee en 155 m9 46 © 8 59 © 8 6 © 6 0 8 OO 37 2100 4 © 8 A © 0 6 0 oo 8 Os aD 208 43 Bo 0 © 0 9 09 0 9 0 om no 2163, 49 © GS 0 © 9 6 0 650 64S GS 08 ar 2350 410 © 8 6 © 9 © © © © oO 43 2716 St So060600060006 G 32s 52 a0s 0 305 9 405 9 0 308 305 9 05D BO 3805 33 228 2S 8 228 WE 2 28 DB 2 WE ORK a9 6st sa © 9 0 0 2 © 0 0 © 0 008 36 3760 55 0 9 0 0 0 © 9 9 © @ 0 53 asa 56 © 8 0 © © © © 6 © © © 06 S738 57 2 0 0 0 0 © 9 0 0 © 0 689 soa 53 269 249 289 9-289 28289 2H 28D OK aS 4369 59 © 9 0 0 @ 9 0 6 © o 0 46 4062 510 0 0 6 9 © 0 5 6 © oO 6s si68 61 666566060506 0 163) 2766 62 ete ole ce (os ) 60) foil ig a ra 2867 63 669005655056 2 42 a0 64 oe ee ie eco) tcl ce he che (0) a 2565 63 o 0 © 0 «© o@ I 9 0 o oly 43 266 aL O&'O 0-0 5-00-06 a) 186 5906 Az 2 0 © © © 9 © © oo Oo uss sr AS © 0 0 © 2 2 9 23 2m 2 On 449 6207 Aa 0 0 0 @ © © 8 0 © 9 164 668 AS © 8 8 6 8 6 © 8 © © 0 a4 6639 Bi 0 9 6 © 8 6 © © © 0 a 780 Ba 606560665605 3 41 anna Ba © 8 6 © © © © 6 © © Oo 2508 6685 Ba eee Oe ol 0 06 7527 us 0666006656506 0 im 763 cl 0 0fF 0505-05605 664 902 cz 219) ot 2k go 9 OG a 9560 o 249248 24S 247251 DMB 248 44 DT D8 7858 ca 219 6 eo 8 8 8 2 oS su 5030 cs 21588 6 © oo 8 8 0 OS 285 ites ba @ 0 2 © © © © 8 60 © 9 0 49 12406 ba ee ee | 6) 0 702 926 D3 ff ee ete 7c ches etl ee) oie aris 0913 ba Oe i ee 0 260 10199 bs 61 0000566050096 0 82 M46. 
El % 9 © © 0 0 © © © o © Oo a3 mua BD 0 fee 9 Ota oe oh siete = st (2008 2133 desi 53 yO Ow 0 HR 0 Wo 260 6B 24883 Ea 2% 9 0 © 9 0 9 0 © 0 © 0 08, 24748 ES % 09 0 © 9 © © 0 o © @ O i 979 02 Table 4— continued JE, Beasley PC. Chu/ European Journal of Operational Research $4 (1996) 392-404 Problem Optimsl/PBS* Best solution in each ofthe 10 ils ‘Averige Average Average ‘or solution ime execution time et fh 0 oo oo 0 0 oO Go oO 270 208 2 BP ofb09000G 0540 2B 276 44284 BS Oo 606660605000 a 983 sas Fa 9 6 0 © 9 9 © © © © 8 781 45682 FS “wo 9 9 8 9 9 0 o 0 218 337 46080 Gt 199178 178 178 0176 178 «178 «178 178 178-073-3610 ass G2 18 SS SS 157 1389137 oS SS ton 2Tt 20004 G3 169 166 16K 108 168 5 0 167 10k 168 068 2897 2s212 ca 172171 170 170 168 0 168 170 171 171 9 098 5323 260 Gs 18 oo 169170174 169169 16916969838 aosnt Hi ee oe eel oe eee cee, un cad ‘3067 #2 (ee ee 8 0h (ot cis ote. 10) ee toe gf0\ = 157A) 4600.0 Ha 59 a 5059 5 59 S99 S95) 180 ITS $5629 Ha 38 0 SK OSE OSH 8D 9 6) SR ONT T703 979 HS s 2 0 0 9 6 0 6 8 IB 18D 4881 Previous best known soliton vale (Optimal soluon value or previous best known scltion vale Average Fanass ‘Average Fess 200 00 00 aor 200 ° ° ‘0100 a0 CO eg ange a0] ooo eaten 0 400 “ 20 200 20 10 fa. 10 ~ on er ae ee 2091000 Noe of Chia Solutions Goraatog ‘Nevoer of Gris Stent Goreates JE. Beatle. PC, Chu/uropean Journal of Operational Research 9 (1996) 382-04 403 Tbe 5 Average redundancy rates Problem set One-poimt —Two-point_ Uniform Fusion 4 594 Foe ye 5 04 ile (ase a0 6 37s fale eT) 7) A se Saale fe aa0le 5) 8 336 330 Ma 259 © 12 seo 08 D 447 3160 E 187 182 FE 2 2k 33 o 61 36 BS 4 383 26 TTS in terms of solution quality and computational cost using all the problem sets. We found thatthe four dif- ferent crossover techniques are comparable in terms Of solution quality and solution time. The differences in the number of optimal/current best-known solu- tions found between the crossover operators. were. 
slight, although the fusion operator performed best, failing to find the optimal/current best-known solution in only one of the 65 test problems. Overall we can conclude that our GA-based heuristic performs well irrespective of the choice of crossover operator.

To measure the efficiency of the GA using each of the crossover operators, we recorded the redundancy rate for each of the four crossover operators, as shown in Table 5. The redundancy rate is defined as

  (# duplicate children) / (# duplicate children + # non-duplicate children) x 100%.

Recall here that a duplicate child is discarded in the steady-state replacement model. Table 5 shows that the one-point and two-point crossover operators had higher redundancy rates than the uniform and fusion crossover operators. This indicates that the one-point and two-point crossover operators are less successful at generating new solution structures from the mating parents than the uniform and fusion crossover operators.

6. Conclusion

We have developed a heuristic for the non-unicost set covering problem based on genetic algorithms. We designed a new fusion crossover operator, a variable mutation rate and a heuristic feasibility operator to improve the performance of our GA. Computational results indicated that our GA-based heuristic is able to generate optimal solutions for small-size problems as well as to generate high-quality solutions for large-size problems.

References

[1] T. Bäck, "Optimal mutation rates in genetic search", in: S. Forrest (ed.), Proc. Fifth International Conference on Genetic Algorithms, Morgan Kaufmann, San Mateo, CA, 1993, 2-9.
[2] E. Balas and A. Ho, "Set covering algorithms using cutting planes, heuristics and subgradient optimization: A computational study", Mathematical Programming Study 12 (1980) 37-60.
[3] D. Beasley, D.R. Bull and R.R. Martin, "An overview of genetic algorithms: Part 1, Fundamentals", University Computing 15 (1993) 58-69.
[4] D. Beasley, D.R. Bull and R.R. Martin, "An overview of genetic algorithms: Part 2, Research topics", University Computing 15 (1993) 170-181.
[5] J.E. Beasley, "An algorithm for set covering problems", European Journal of Operational Research 31 (1987) 85-93.
[6] J.E. Beasley, "A Lagrangian heuristic for set-covering problems", Naval Research Logistics 37 (1990) 151-164.
[7] J.E. Beasley and K. Jörnsten, "Enhancing an algorithm for set covering problems", European Journal of Operational Research 58 (1992) 293-300.
[8] J.J. Dongarra, "Performance of various computers using standard linear equations software", Computer Architecture News 20 (1992) 22-44.
[9] M.L. Fisher and P. Kedia, "Optimal solution of set covering/partitioning problems using dual heuristics", Management Science 36 (1990) 674-688.
[10] M.R. Garey and D.S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W.H. Freeman, San Francisco, 1979.
[11] D. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, New York, 1989.
[12] T. Grossman and A. Wool, "Computational experience with approximation algorithms for the set covering problem", Working paper, Theoretical Division and CNLS, Los Alamos National Laboratory, Los Alamos, NM, February 1995.
[13] F. Harche and G.L. Thompson, "The column subtraction algorithm: An exact method for solving weighted set covering, packing and partitioning problems", Computers & Operations Research 21 (1994) 689-705.
[14] J.H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, MA, 1975.
[15] L.W. Jacobs and M.J. Brusco, "A simulated annealing based heuristic for the set-covering problem", Working paper, Operations Management and Information Systems Department, Northern Illinois University, DeKalb, IL, March 1993.
[16] C.R. Reeves, Modern Heuristic Techniques for Combinatorial Problems, Blackwell Scientific, 1995.
[17] J. Richardson, M. Palmer, G. Liepins and M. Hilliard, "Some guidelines for genetic algorithms with penalty functions", in: J. Schaffer (ed.), Proc. Third International Conference on Genetic Algorithms, Morgan Kaufmann, 1989, 191-197.
[18] S. Sen, "Minimal cost set covering using probabilistic methods", Proc. 1993 ACM/SIGAPP Symposium on Applied Computing, 1993, 157-168.
[19] F.J. Vasko and G.R. Wilson, "An efficient heuristic for large set covering problems", Naval Research Logistics Quarterly 31 (1984) 163-171.
