
The current issue and full text archive of this journal is available at

www.emeraldinsight.com/1756-378X.htm

Intelligent water drops algorithm
A new optimization method for solving the multiple knapsack problem
Hamed Shah-Hosseini
Electrical and Computer Engineering Department,
Shahid Beheshti University, Tehran, Iran

Received 12 December 2007
Revised 25 February 2008
Accepted 4 March 2008

Abstract
Purpose - The purpose of this paper is to test the capability of a new population-based optimization algorithm for solving an NP-hard problem, called the multiple knapsack problem (MKP).
Design/methodology/approach - The intelligent water drops (IWD) algorithm, a population-based optimization algorithm, is modified to include a suitable local heuristic for the MKP. The proposed algorithm is then used to solve the MKP.
Findings - The proposed IWD algorithm for the MKP is tested on standard benchmark problems, and the results demonstrate that the proposed IWD-MKP algorithm is reliable and promising in finding optimal or near-optimal solutions. It is proved that the IWD algorithm has the property of convergence in value.
Originality/value - This paper applies the new optimization algorithm, IWD, for the first time to the MKP and shows that the IWD is applicable to this NP-hard problem. This research paves the way for adapting the IWD to other optimization problems, and for obtaining possibly better results by modifying the proposed IWD-MKP algorithm.
Keywords Programming and algorithm theory, Optimization techniques, Systems and control theory
Paper type Research paper

1. Introduction
The multiple knapsack problem (MKP), is an NP-hard combinatorial optimization
problem with applications such as cutting stock problems (Gilmore and Gomory, 1966),
processor allocation in distributed systems (Gavish and Pirkul, 1982), cargo loading
(Shih, 1979), capital budgeting (Weingartner, 1966), and economics (Martello and Toth,
1990).
Two general approaches exist for solving the MKP: the exact algorithms and the
approximate algorithms. The exact algorithms are used for solving small- to
moderate-size instances of the MKP such as those based on dynamic programming
employed by Gilmore and Gomory (1966) and Weingartner and Ness (1967) and those
based on the branch-and-bound approach suggested by Shih (1979) and Gavish and
Pirkul (1985). A recent review of the MKP is given by Freville (2004).
The approximate algorithms may use metaheuristic approaches to approximately
solve difficult optimization problems. The term metaheuristic was introduced by
Glover, and it refers to general-purpose algorithms which can be applied to different optimization problems with usually few modifications for adaptation to the given specific problem.

The author would like to express his gratitude to the anonymous referees for their valuable comments and suggestions, which led to a better presentation of this paper in IJICC.

International Journal of Intelligent Computing and Cybernetics
Vol. 1 No. 2, 2008
pp. 193-212
© Emerald Group Publishing Limited
1756-378X
DOI 10.1108/17563780810874717
Metaheuristic algorithms include simulated annealing (Kirkpatrick et al., 1983), tabu search (Glover, 1989), evolutionary algorithms like genetic algorithms (Holland, 1975), evolution strategies (Rechenberg, 1973), and evolutionary programming (Fogel et al., 1966), as well as ant colony optimization (Dorigo et al., 1991), scatter search (Glover, 1977), the greedy randomized adaptive search procedure (Feo and Resende, 1989, 1995), iterated local search (Lourenco et al., 2003), guided local search (Voudouris and Tsang, 1995), variable neighborhood search (Mladenovic and Hansen, 1997), particle swarm optimization (Kennedy and Eberhart, 2001), electromagnetism-like optimization (Birbil and Fang, 2003), and intelligent water drops (IWD) (Shah-Hosseini, 2007). For a review of the field of metaheuristics, the book by Glover and Kochenberger (2003) is suggested.
Several kinds of metaheuristic algorithms have been used for the MKP to obtain
near-optimal or hopefully optimal solutions including those based on evolutionary
algorithms such as Glover and Kochenberger (1996) and Chu and Beasley (1998).
Moreover, several variants of hybrid evolutionary algorithms have also been
implemented which are reviewed in Raidl and Gottlieb (2005). Ant Colony-based
algorithms are also used for the MKP including Fidanova (2002) and Leguizamon and
Michalewicz (1999).
A metaheuristic algorithm can be classed as a constructive approach or a local
search method. A constructive algorithm builds solutions from scratch by gradually adding solution components to an initially empty solution, whereas a local search algorithm starts from a complete solution and then tries to improve it over time.
Evolutionary-based algorithms are local search algorithms whereas the Ant
Colony-based algorithms are constructive algorithms. Moreover, a metaheuristic
algorithm may use a single solution or a population of solutions to proceed at each
iteration. Simulated Annealing uses a single solution whereas evolutionary algorithms
are population-based algorithms.
Recently, the new metaheuristic algorithm, intelligent water drops (IWD), has been introduced in the literature and used for solving the traveling salesman problem (TSP). The TSP is also an NP-hard combinatorial optimization problem; therefore, the IWD algorithm should also be applicable to the MKP. This paper tries to solve the
MKP using an IWD-based algorithm. The IWD algorithm is a population-based
optimization algorithm that uses the constructive approach to find the optimal
solution(s) of a given problem. Its ideas are based on the water drops that flow in
nature such that each water drop constructs a solution by traversing in the search
space of the problem and modifying its environment.
The next section of the paper introduces the MKP. Section 3 overviews the general
principles of the IWD algorithm. Section 4 proposes the modified IWD algorithm for
the MKP. After that, a section on the convergence properties of the IWD algorithm is
stated. Experimental results with the IWD algorithm are presented in section 6. The
final section of the paper includes the concluding remarks.
2. The multiple knapsack problem
Consider we have a set of items i ∈ I, where each item i gives the profit b_i and requires the resource (capacity) r_i. The knapsack problem (KP, for short) is to select a subset of

items of the set I in such a way that they all fit in a knapsack of limited capacity and the
sum of profits of the selected items is maximized.
The MKP generalizes the KP by considering multiple resource constraints.
Therefore, the MKP is considered to have multiple knapsacks.
Assume the variable y_i denotes the inclusion of the item i in the knapsack such that:

y_i = \begin{cases} 1 & \text{if the item } i \text{ is added to the knapsack} \\ 0 & \text{otherwise} \end{cases} \quad (1)

Moreover, the variable r_{ij} is assumed to represent the resource requirement of the item i with respect to the resource constraint j having the capacity a_j. The MKP with m constraints and n items can be formulated as follows:

\max \sum_{i=1}^{n} y_i b_i \quad (2)

subject to the following constraints:

\sum_{i=1}^{n} r_{ij}\, y_i \le a_j \quad \text{for } j = 1, 2, \ldots, m \quad (3)

such that y_i \in \{0, 1\} for i = 1, 2, \ldots, n. In the MKP, it is often assumed that the profits b_i and the resource requirements r_{ij} are non-negative values.
Here, an MKP is viewed as a graph (N, E) where the set N represents the items of the MKP and the set E represents the arcs (paths) between the items. A solution is then a set N' ⊆ N of items such that they do not violate the constraints in equation (3).
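As a concrete illustration of the formulation in equations (2) and (3), the following sketch encodes a small hypothetical MKP instance (the numbers are invented for illustration, not taken from the paper) and evaluates the feasibility and profit of a candidate item set:

```python
# Hypothetical 2-constraint, 4-item MKP instance (illustrative values only).
profits = [10, 7, 12, 4]           # b_i
resources = [[3, 5, 4, 2],         # r_ij, one row per constraint j
             [4, 2, 6, 1]]
capacities = [9, 8]                # a_j

def feasible(selected):
    """Check the m resource constraints of equation (3)."""
    return all(
        sum(resources[j][i] for i in selected) <= capacities[j]
        for j in range(len(capacities))
    )

def profit(selected):
    """Objective of equation (2): total profit of the selected items."""
    return sum(profits[i] for i in selected)
```

For instance, the set {0, 1} is feasible here with profit 17, while {0, 2} violates the second constraint.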
3. Basic principles of the IWD algorithm
Water drops that flow in rivers, lakes, and seas are the sources of inspiration for
developing the IWD. This intelligence is more obvious in rivers which find their ways
to lakes, seas, or oceans despite many different kinds of obstacles on their ways. In the
water drops of a river, the gravitational force of the earth provides the tendency for
flowing toward the destination. If there were no obstacles or barriers, the water drops
would follow a straight path toward the destination, which is the shortest path from
the source to the destination. However, due to the different kinds of obstacles in their way to the destination, which constrain the path construction, the real path has to differ from the ideal path, and many twists and turns are observed in the river path. The interesting point is that this constructed path seems to be optimal in terms of distance from the destination and the constraints of the environment.
Imagine a water drop is going to move from a point of river to the next point in the
front as shown in Figure 1. It is assumed that each water drop flowing in a river can
carry an amount of soil which is shown by the size of the water drop in the figure. The amount of soil of the water drop increases as it reaches the point on the right in Figure 1, while the soil of the river bed decreases. In fact, some soil of the river bed is removed by the water drop and added to the soil of the water drop. This
property is embedded in the IWDs such that each IWD holds soil in itself and removes
soil from its path during movement in the environment.



Figure 1. The IWD on the left flows to the right side while removing soil from the river bed and adding it to its soil.

Figure 2. The faster IWD gathers more soil than the slower IWD while both flow from the left side of the river bed to the right side.

A water drop also has a velocity, and this velocity plays an important role in removing soil from the beds of rivers. Let two water drops having the same amount of soil move from a point of a river to the next point as shown in Figure 2. The water drop with the bigger arrow has higher velocity than the other one. When both water drops arrive at the next point on the right, the faster water drop is assumed to gather more soil than the other one. This assumption is shown in Figure 2, in which a bigger circle on
the right, which has gathered more soil, denotes the faster water drop. The mentioned
property of soil removing which is dependent on the velocity of the water drop is
embedded in each IWD of the IWD algorithm.
It was stated above that the velocity of an IWD flowing over a path determines the
amount of soil that is removed from the path. In contrast, the velocity of the IWD is also
changed by the path such that a path with little amount of soil increases the velocity of
the IWD more than a path with a considerable amount of soil. This assumption is
shown in Figure 3 in which two identical water drops with the same velocity flow on
two different paths. The path with little soil lets the flowing water drop gather more
soil and gain more speed whereas the path with large soil resists more against the
flowing water drop such that it lets the flowing water drop gather less soil and gain
less speed.
What makes a water drop choose one branch of a path among the several choices in front of it? Obviously, a water drop prefers an easier path to a harder path when it has to choose between the several branches that exist in the path from the source to the destination.

Note: The soil on the bed is denoted by a light gray color.
Note: The size of the IWD shows the soil it carries.
In the IWD algorithm, the hardness is translated to the amount of soil on the path.
If a branch of the path contains higher amount of soil than other branches, it
becomes less desirable than the other ones. This branch selection on the path is
implemented by a probabilistic function of inverse of soil, which is explained in the
next section.
In nature, countless water drops flow together to form the optimal path for reaching
their destination. In other words, it is a population-based intelligent mechanism.
The IWD algorithm employs this mechanism by using a population of IWDs to
construct paths and among all these paths over time, the optimal or near optimal path
emerges.


4. The proposed IWD algorithm


The IWDs (Shah-Hosseini, 2007) have been designed to imitate the prominent properties of the natural water drops that flow in the beds of rivers. Each IWD is assumed to have an amount of soil it carries, soil(IWD), and a current velocity, velocity(IWD).
The environment in which IWDs are moving is assumed to be discrete. This
environment may be considered to be composed of Nc nodes and each IWD needs to
move from one node to another. Every two nodes are linked by an arc which holds an
amount of soil. Based on the activities of the IWDs flowing in the environment, the soil
of each arc may be increased or decreased.
Consider an IWD is in the node i and wants to move to the next node j. The amount
of the soil on the arc between these two nodes, represented by soil(i, j), is used for
updating the velocity vel^IWD(t) of the IWD by:

vel^{IWD}(t+1) = vel^{IWD}(t) + \begin{cases} \dfrac{a_v}{b_v + c_v\, soil^{b}(i, j)} & \text{if } soil^{b}(i, j) \neq -b_v/c_v \\ 0 & \text{otherwise} \end{cases} \quad (4)

where vel^IWD(t+1) represents the updated velocity of the IWD at the next node j. Moreover, a_v, b_v, and c_v are constant velocity parameters that are set for the given problem.

Figure 3. Two identical IWDs flow in two different rivers.
Note: The IWD that flows in the river with less soil gathers more soil and gets a greater increase in speed.


According to the velocity updating in equation (4), the velocity of the IWD increases if soil^b(i, j) lies in the open interval (−b_v/c_v, +∞): the more the soil soil(i, j), the less the updated velocity vel^IWD(t+1) will be. In contrast, if soil^b(i, j) lies in the open interval (−∞, −b_v/c_v), the velocity of the IWD decreases such that the less the soil soil(i, j), the less the updated velocity vel^IWD(t+1) will be.
In the original work on the IWD-based algorithm (Shah-Hosseini, 2007), the parameter b was not considered and, implicitly, it was assumed that b = 1. Assuming that a_v, b_v, and c_v are chosen as positive values, selecting an even power for the soil(i, j) in equation (4) has the advantage that the velocity vel^IWD(t+1) never becomes negative even if soil(i, j) falls below zero, and the velocity updating in equation (4) reduces to the following formula:

vel^{IWD}(t+1) = vel^{IWD}(t) + \dfrac{a_v}{b_v + c_v\, soil^{2\alpha}(i, j)} \quad (5)

such that b = 2α. To avoid possible negative values for the velocity, in this paper b = 2.
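Equation (5) with b = 2 can be written directly; a minimal sketch, using the parameter values a_v = 1, b_v = 0.01, and c_v = 1 that the paper suggests later in Step 1:

```python
def update_velocity(vel, soil_ij, a_v=1.0, b_v=0.01, c_v=1.0):
    """Velocity update of equation (5) with the even power b = 2:
    squaring the soil keeps the increment non-negative for a_v, b_v, c_v > 0."""
    return vel + a_v / (b_v + c_v * soil_ij ** 2)
```

On a soil-free arc (soil = 0) the increment is a_v/b_v = 100, so an IWD starting at InitVel = 4 jumps to 104, whereas on a heavily soiled arc the increment is negligible.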
Consider that a local heuristic function HUD(·,·) has been defined for a given problem to measure the undesirability of an IWD moving from one node to another. The time taken for an IWD having the velocity vel^IWD(t+1) to move from the current node i to its next node j, denoted by time(i, j; vel^IWD(t+1)), is calculated by:

time(i, j; vel^{IWD}) = \dfrac{HUD(i, j)}{vel^{IWD}} \quad (6)
Such that:

vel^{IWD} = \begin{cases} \varepsilon & \text{if } |vel^{IWD}(t+1)| < \varepsilon \\ vel^{IWD}(t+1) & \text{otherwise} \end{cases} \quad (7)

where vel^IWD is obtained from vel^IWD(t+1) to keep its value away from zero with radius ε. The constant parameter ε is a small positive value; here, ε = 0.001. The function HUD(i, j) denotes the heuristic undesirability of moving from node i to node j.
For the TSP, the form of HUD(i, j), denoted by HUD_TSP(i, j), has been suggested as follows:

HUD(i, j) = HUD_{TSP}(i, j) = \lVert c(i) - c(j) \rVert \quad (8)

where c(k) represents the two-dimensional position vector of the city k and ‖·‖ denotes the Euclidean norm. As a result, when two nodes (cities) i and j are near each other, the heuristic undesirability HUD(i, j) becomes small, which reduces the time taken for the IWD to pass from city i to city j.
For the MKP, a few heuristics have been suggested and used in ant-based optimization algorithms (Dorigo and Stutzle, 2004), some of which are fairly complex. Here, a simple local heuristic is used which reflects the undesirability of adding an item to the current partial solution. Let the heuristic undesirability HUD(i, j) for the MKP, denoted by HUD_MKP(j), be defined as:

HUD_{MKP}(j) = \dfrac{\bar{r}_j}{b_j} \quad (9)

Such that:

\bar{r}_j = \dfrac{1}{m} \sum_{k=1}^{m} r_{jk} \quad (10)

where b_j is the profit of item j and \bar{r}_j is the average resource requirement of item j. As equation (9) shows, HUD_MKP(j) decreases if the profit b_j is high, while HUD_MKP(j) increases if the average resource requirement \bar{r}_j becomes high. Therefore, among the items that can be selected for the next move of an IWD, the item which needs fewer resources and has higher profit is more desirable.
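Equations (9)-(10) amount to a couple of lines of code. In this sketch, the layout `resources[k][j]` for r_jk (one row per constraint) is an assumption made for illustration:

```python
def hud_mkp(j, profits, resources):
    """Heuristic undesirability of equations (9)-(10): the average
    resource requirement of item j divided by its profit b_j."""
    m = len(resources)                                    # number of constraints
    r_bar = sum(resources[k][j] for k in range(m)) / m    # equation (10)
    return r_bar / profits[j]                             # equation (9)
```

A high-profit, low-requirement item yields a small HUD_MKP value and is therefore reached faster by an IWD.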
As an IWD moves from the current node i to its next node j, it removes an amount of
soil from the path (arc) joining the two nodes. The amount of the soil being removed
depends on the velocity of the moving IWD. For the TSP (Shah-Hosseini, 2007), it was
suggested to relate the amount of the soil taken from the path with the inverse of the
time that the IWD needs to pass the arc or path between the two nodes. So, a fast IWD
removes more soil from the path it flows on than a slower IWD. This mechanism is an
imitation of what happens in the natural rivers. Fast rivers can make their beds deeper
because they remove more soil from their beds in a shorter time while slow flowing
rivers lack such strong soil movements. Moreover, even in a single river, parts of the
river that water drops flow faster often have deeper beds than the slower parts.
Specifically, for the TSP, the amount of the soil that the IWD removes from its
current path from node i to node j is calculated by:

\Delta soil(i, j) = \dfrac{a_s}{b_s + c_s\, time(i, j; vel^{IWD})} \quad (11)

where Δsoil(i, j) is the soil which the IWD with velocity vel^IWD removes from the path between nodes i and j. The a_s, b_s, and c_s are constant soil parameters whose values depend on the given problem. The value time(i, j; vel^IWD) was defined in equation (6) and represents the time taken for the IWD to flow from i to j.
Here, equation (11) is slightly improved to include a power for the time value in the denominator as follows:

\Delta soil(i, j) = \begin{cases} \dfrac{a_s}{b_s + c_s\, time^{v}(i, j; vel^{IWD})} & \text{if } time^{v}(i, j; vel^{IWD}) \neq -b_s/c_s \\ 0 & \text{otherwise} \end{cases} \quad (12)

In the IWD algorithm for the TSP, the parameter v was not considered and thus, implicitly, v = 1. For the MKP, the parameter v is set to two. Again, by assuming the parameters a_s, b_s, and c_s are selected as positive numbers, selecting an even value for the power, v = 2u, simplifies equation (12) to:

\Delta soil(i, j) = \dfrac{a_s}{b_s + c_s\, time^{2u}(i, j; vel^{IWD})} \quad (13)
After an IWD moves from node i to node j, the soil soil(i, j) on the path between the two nodes is reduced by:

soil(i, j) = \rho_o\, soil(i, j) - \rho_n\, \Delta soil(i, j) \quad (14)

where ρ_o and ρ_n are positive numbers that should be chosen between zero and one. In the original algorithm for the TSP, ρ_o = 1 − ρ_n.
The IWD that has moved from node i to j increases the soil soil^IWD it carries by:

soil^{IWD} = soil^{IWD} + \Delta soil(i, j) \quad (15)

where Dsoil(i, j) is obtained from equation (13). Therefore, the movement of an IWD
between two nodes reduces the soil on the path between the two nodes and increases
the soil of the moving IWD.
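The local soil bookkeeping of equations (6) and (13)-(15) for a single move can be sketched as follows, with u = 1 and the Step 1 parameter values (a_s = 1, b_s = 0.01, c_s = 1, ρ_n = 0.9, ε = 0.001):

```python
EPS = 0.001  # the paper's epsilon, keeps the velocity away from zero

def delta_soil(hud, vel, a_s=1.0, b_s=0.01, c_s=1.0):
    """Soil removed on one move, equations (6) and (13) with u = 1."""
    vel = vel if abs(vel) >= EPS else EPS    # radius-epsilon guard, eq. (7)
    t = hud / vel                            # time(i, j; vel), equation (6)
    return a_s / (b_s + c_s * t ** 2)

def local_soil_update(soil_ij, soil_iwd, dsoil, rho_n=0.9):
    """Equations (14)-(15) with rho_o = 1 - rho_n: the arc loses soil
    and the IWD gains it."""
    return (1 - rho_n) * soil_ij - rho_n * dsoil, soil_iwd + dsoil
```

As intended, a fast IWD (small time value) removes close to the maximum a_s/b_s soil per move, deepening its path.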
One important mechanism that each IWD must contain is how to select its next node. An IWD prefers a path that contains a smaller amount of soil over the other paths. This preference is implemented by assigning a probability to each path from the current node to all valid nodes which do not violate the constraints of the given problem. Let an IWD be at the node i; then the probability p_i^{IWD}(j) of going from node i to node j is calculated by:

p_i^{IWD}(j) = \dfrac{f(soil(i, j))}{\sum_{k \notin vc(IWD)} f(soil(i, k))} \quad (16)

Such that f(soil(i, j)) computes the inverse of the soil between nodes i and j. Specifically:

f(soil(i, j)) = \dfrac{1}{\varepsilon_s + g(soil(i, j))} \quad (17)

The constant parameter ε_s is a small positive number to prevent a possible division by zero in the function f(·); it is suggested to use ε_s = 0.01. g(soil(i, j)) is used to shift the soil(i, j) on the path joining nodes i and j toward positive values and is computed by:

g(soil(i, j)) = \begin{cases} soil(i, j) & \text{if } \min_{l \notin vc(IWD)} soil(i, l) \ge 0 \\ soil(i, j) - \min_{l \notin vc(IWD)} soil(i, l) & \text{otherwise} \end{cases} \quad (18)

The function min(·) returns the minimum value of its arguments. The set vc(IWD) denotes the nodes that the IWD should not visit in order to keep the constraints of the problem satisfied.
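The selection mechanism of equations (16)-(18) can be sketched as follows, where `soil_row` maps a candidate node k to soil(i, k) and `candidates` stands for the nodes not in vc(IWD) (both names are assumptions made for illustration):

```python
EPS_S = 0.01  # epsilon_s of equation (17)

def next_node_probs(soil_row, candidates):
    """Selection probabilities of equation (16), with the soils shifted
    to non-negative values by g() of equation (18)."""
    mn = min(soil_row[k] for k in candidates)
    g = {k: soil_row[k] - (mn if mn < 0 else 0.0) for k in candidates}
    f = {k: 1.0 / (EPS_S + g[k]) for k in candidates}   # equation (17)
    total = sum(f.values())
    return {k: f[k] / total for k in candidates}        # equation (16)
```

Arcs with less soil receive higher probability, and the shift in g() keeps all weights positive even when some soils have gone negative.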
Every IWD that has been created in the algorithm moves from its initial node to
next nodes till it completes its solution. For the given problem, an objective or quality
function is needed to measure the fitness of solutions. Consider the quality function of a
problem to be denoted by q(.). Then, the quality of a solution T IWD found by the IWD is
given by q(T IWD). One iteration of the IWD algorithm is said to be complete when all
IWDs have constructed their solutions. At the end of each iteration, the best solution
T IB of the iteration found by the IWDs is obtained by:
T^{IB} = \arg\max_{\forall T^{IWD}} q(T^{IWD}) \quad (19)

Therefore, the iteration-best solution T^IB is the solution that has the highest quality over all solutions T^IWD.
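The selection in equation (19) is a plain arg max over the solutions of the current iteration; a minimal sketch (the quality function q here is an assumption; any real-valued fitness works):

```python
def iteration_best(solutions, q):
    """Equation (19): return the solution with the highest quality q."""
    return max(solutions, key=q)
```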

Based on the quality of the iteration-best solution, q(T^IB), only the paths of the solution T^IB are updated. This soil updating should reflect the quality of the solution. Specifically:

soil(i, j) = \rho_s\, soil(i, j) + \rho_{IWD}\, k(N_c)\, soil_{IB}^{IWD} \quad \forall (i, j) \in T^{IB} \quad (20)

where soil_IB^IWD represents the soil of the iteration-best IWD, i.e. the IWD that has constructed the iteration-best solution T^IB. k(N_c) denotes a positive coefficient which depends on the number of nodes N_c; here, k(N_c) = 1/(N_c − 1) is used. ρ_s should be a constant positive value, whereas the constant parameter ρ_IWD should be a negative value. The first term on the right-hand side of equation (20) represents the amount of soil that remains from the previous iteration. In contrast, the second term on the right-hand side of equation (20) reflects the quality of the current solution obtained by the IWD. Therefore, in equation (20), a proportion of the soil gathered by the IWD is subtracted from the total soil soil(i, j) of the path between nodes i and j.
This way, the best-iteration solutions are gradually reinforced and they lead the
IWDs to search near the good solutions in the hope of finding the globally optimal
solution.
At the end of each iteration of the algorithm, the total best solution T TB is updated
by the current iteration-best solution T IB as follows:
T^{TB} = \begin{cases} T^{TB} & \text{if } q(T^{TB}) \ge q(T^{IB}) \\ T^{IB} & \text{otherwise} \end{cases} \quad (21)
By doing this, it is guaranteed that T TB holds the best solution obtained so far by the
IWD algorithm.
In summary, the proposed IWD algorithm for the MKP is specified in the following
steps:
Step 1. Initialization of static parameters: the number of items N_c along with the profit b_i for each item i, the number of constraints m such that each resource constraint j has the capacity a_j, and the resource matrix R of size N_c × m, which holds the elements r_ij, are all the parameters of the given MKP.
Set the number of water drops N^IWD to a positive integer value; here, it is suggested that N^IWD be set equal to the number of items N_c. For velocity updating, the parameters are set as a_v = 1, b_v = 0.01, and c_v = 1. For soil updating, a_s = 1, b_s = 0.01, and c_s = 1. The local soil updating parameter ρ_n, which should be a positive number less than one, is chosen as ρ_n = 0.9. The global soil updating parameter ρ_IWD, which should be chosen from [−1, 0], is set as ρ_IWD = −0.9. Moreover, the initial soil on each path is denoted by the constant InitSoil such that the soil of the path between every two items i and j is set by soil(i, j) = InitSoil. The initial velocity of IWDs is denoted by the constant InitVel. Both InitSoil and InitVel are user-selected; in this paper, InitSoil = 1,000 and InitVel = 4. The quality of the best solution T^TB is initially set as q(T^TB) = −1. Moreover, the maximum number of iterations it_max that the algorithm should be repeated needs to be specified.
Step 2. Initialization of dynamic parameters: for every IWD, a visited node list Vc(IWD) is considered and is set to the empty list: Vc(IWD) = {}. The velocity of each IWD is set to InitVel, and the initial soil of each IWD is set to zero.

Step 3. For every IWD, randomly select a node and associate the IWD with this node.
Step 4. Update the visited node list of each IWD to include the nodes just visited.
Step 5. For each IWD that has not completed its solution, repeat Steps 5.1-5.4.
Step 5.1. Choose the next node j to be visited by the IWD among those that are not in its visited node list and do not violate the m constraints defined in equation (3). When there is no unvisited node that does not violate the constraints, the solution of this IWD has been completed. Otherwise, when the IWD is in node i, choose the next node j with the probability p_i^{IWD}(j) defined in equation (16) and update its visited node list.
Step 5.2. For each IWD moving from node i to node j, update its velocity vel^IWD(t) by setting α = 1 in equation (5), which yields:

vel^{IWD}(t+1) = vel^{IWD}(t) + \dfrac{a_v}{b_v + c_v\, soil^{2}(i, j)} \quad (22)

such that vel^IWD(t+1) is the updated velocity of the IWD.


Step 5.3. Compute the amount of soil, Δsoil(i, j), that the current water drop IWD with the updated velocity vel^IWD = vel^IWD(t+1) loads from its current path between nodes i and j by setting u = 1 in equation (13):

\Delta soil(i, j) = \dfrac{a_s}{b_s + c_s\, time^{2}(i, j; vel^{IWD})} \quad (23)

Such that:

time(i, j; vel^{IWD}) = \dfrac{HUD_{MKP}(j)}{vel^{IWD}}

where the heuristic undesirability HUD_MKP(j) is computed by equation (9).
Step 5.4. Update the soil of the path traversed by that IWD, soil(i, j), and the soil that the IWD carries, soil^IWD, using equations (14) and (15) as follows:

soil(i, j) = (1 - \rho_n)\, soil(i, j) - \rho_n\, \Delta soil(i, j)
soil^{IWD} = soil^{IWD} + \Delta soil(i, j)
Step 6. Find the iteration-best solution T^IB from all the solutions found by the IWDs using equation (19).
Step 7. Update the soils of the paths that exist in the current iteration-best solution T^IB using equation (20) by setting:

\rho_s = 1 - \rho_{IWD} \quad (24)

which yields:

soil(i, j) = (1 - \rho_{IWD})\, soil(i, j) + \rho_{IWD}\, \dfrac{1}{N_c - 1}\, soil_{IB}^{IWD} \quad \forall (i, j) \in T^{IB} \quad (25)

Step 8. Update the total best solution T^TB by the current iteration-best solution T^IB using equation (21).
Step 9. Go to Step 2 until the maximum number of iterations it_max is reached.
Step 10. The algorithm stops here with the final solution T^TB.

It is possible to use only T^IB and remove Step 8 of the IWD algorithm. However, by doing this, some good solutions may temporarily be lost, and the algorithm needs more time to find them again. Therefore, it is better to keep the total best solution T^TB of all iterations than to rely only on the iteration-best solution T^IB.
The steps of the proposed IWD algorithm are expressed in the two flowcharts shown in Figure 4. The flowchart in Figure 4(a) shows the main steps of the algorithm; Step 5 of the IWD algorithm is depicted in more detail in the flowchart of Figure 4(b).
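Steps 1-10 above can be condensed into a short sketch. Two simplifications are assumptions of this sketch, not the paper's design: soil is kept per item rather than per arc (i, j), and the first item of each IWD is chosen by the same probabilistic rule instead of Step 3's uniform random placement. Parameter values follow Step 1 (a_v = a_s = c_v = c_s = 1, b_v = b_s = 0.01, ρ_n = 0.9, ρ_IWD = −0.9, InitSoil = 1000, InitVel = 4):

```python
import random

def iwd_mkp(profits, resources, capacities, iterations=30, seed=0):
    """Condensed IWD-MKP sketch: each of N_IWD = N_c water drops builds a
    maximal feasible item set (Step 5), the iteration-best solution
    reinforces its items (Step 7), and the total best is tracked (Step 8)."""
    rng = random.Random(seed)
    n, m = len(profits), len(capacities)
    soil = [1000.0] * n                              # InitSoil per item
    hud = [sum(resources[j][i] for j in range(m)) / m / profits[i]
           for i in range(n)]                        # equation (9)
    best, best_q = [], -1.0                          # q(T_TB) = -1, Step 1
    for _ in range(iterations):
        it_best, it_q, it_soil = [], -1.0, 0.0
        for _ in range(n):                           # one IWD per item
            vel, carried, sol, used = 4.0, 0.0, [], [0.0] * m
            while True:
                cand = [i for i in range(n) if i not in sol and
                        all(used[j] + resources[j][i] <= capacities[j]
                            for j in range(m))]      # Step 5.1 feasibility
                if not cand:
                    break
                mn = min(soil[k] for k in cand)
                w = [1.0 / (0.01 + soil[k] - min(mn, 0.0)) for k in cand]
                j = rng.choices(cand, weights=w)[0]  # equations (16)-(18)
                vel += 1.0 / (0.01 + soil[j] ** 2)   # equation (22)
                t = hud[j] / max(vel, 0.001)         # equation (6)
                ds = 1.0 / (0.01 + t * t)            # equation (23)
                soil[j] = 0.1 * soil[j] - 0.9 * ds   # equation (14)
                carried += ds                        # equation (15)
                sol.append(j)
                for c in range(m):
                    used[c] += resources[c][j]
            q = float(sum(profits[i] for i in sol))
            if q > it_q:                             # equation (19)
                it_best, it_q, it_soil = sol, q, carried
        for j in it_best:                            # equation (25)
            soil[j] = 1.9 * soil[j] - 0.9 * it_soil / max(n - 1, 1)
        if it_q > best_q:                            # equation (21)
            best, best_q = it_best, it_q
    return sorted(best), best_q
```

Being stochastic, the sketch guarantees only a maximal feasible solution per construction; the reinforcement of iteration-best items pushes later constructions toward the optimum.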


Figure 4. The flowchart of the proposed IWD algorithm: (a) the main steps of the IWD algorithm; (b) a detailed flowchart of the sub-steps of Step 5.


5. Convergence properties of the IWD algorithm


In this section, the purpose is to show that the IWD algorithm is able to find the optimal solution at least once during its lifetime if the number of iterations the algorithm is run for is sufficiently large. For a few particular ACO algorithms with careful parameter settings, such a property has been shown to exist, and this kind of convergence is called convergence in value (Dorigo and Stutzle, 2004). In the following, convergence in value for the IWD algorithm is investigated.
For any IWD in the proposed algorithm, the next node of the IWD is found
probabilistically by using equation (16). Therefore, as long as the probability of
visiting any node is above zero, in the long run, it is expected with probability one that
an IWD of the algorithm will choose that node at some iteration.
Any solution S of the given problem is composed of a number of nodes {np, nq, . . . ,
nr} selected by an IWD during an iteration of the algorithm. As a result, if it is shown
that the chance of selecting any node nk in the graph (N, E) of the problem is above zero,
then the chance of finding any feasible solution from the set of all solutions of the
problem is nonzero. As a consequence, if it is proved that there is a positive chance for any feasible solution to be found by an IWD in an iteration of the algorithm, it is guaranteed that the optimal solution is eventually found: once an IWD finds an optimal solution, that solution becomes the iteration-best solution, and thus the total-best solution is updated to the newly found optimal solution, as expressed in Step 8 of the algorithm. In summary, convergence in value is proven to exist if the probability of choosing any node of the problem's graph in a solution is nonzero.
Let the graph (N, E) represent the graph of the given problem; this graph is a fully connected graph with N_c nodes. Also, let N^IWD represent the number of IWDs in the algorithm. In the soil updating of the algorithm, two extreme cases are considered: case 1, which includes only those terms that increase the soil of an arc of (N, E), and case 2, which includes only those terms that decrease the soil of an arc of (N, E). For each case, the worst case is followed: for case 1, the highest possible value of soil that an arc can hold is computed; for case 2, the lowest possible value of soil for an arc is computed. Equations (14) and (20) contain the formulas that update the soil of an arc. In the following, each case is studied separately.
Case 1. For simplicity, the initial soil of an arc (i, j) is denoted by IS_0. This arc (i, j) is supposed to contain the maximum possible value of soil and is called arc_max. In equation (14), the first term on the right-hand side, ρ_o soil(i, j), is the only term with a positive sign. To consider the extreme case, it is assumed that in one iteration of the algorithm this term is applied just once to the arc, because the parameter ρ_o is supposed to have its value between zero and one. In equation (20), the first term on the right-hand side, ρ_s soil(i, j), has a positive sign; in the extreme case, this term is applied once in one iteration of the algorithm. As a result, by replacing soil(i, j) with IS_0 in the mentioned terms, the amount of soil of arc_max will be (ρ_s ρ_o) IS_0 after one iteration. Let m denote the number of iterations that the algorithm has been repeated so far. Therefore, the soil of arc_max, soil(arc_max), will be (ρ_s ρ_o)^m IS_0 at the end of m iterations of the algorithm:

soil(arc_{max}) = (\rho_s\, \rho_o)^m\, IS_0 \quad (26)

Case 2. In this case, the lowest amount of soil of an arc (i, j) is estimated.
Let arc_min denote the aforementioned arc (i, j). Here, only the negative terms of
equations (14) and (20) are considered. From equation (14), the term −ρ_n Δsoil(i, j) is
supposed to be applied N_IWD times to arc_min in one iteration, which is the extreme
case for making the soil as low as possible. The extreme high value of Δsoil(i, j) is
obtained from equation (13) by setting the time in the denominator to zero, which yields
the positive value a_s/b_s. Therefore, the most negative value in one iteration that can
come from equation (14) is −ρ_n N_IWD (a_s/b_s).
From equation (20), the negative term is −ρ_IWD k(N_c) soil^IWD_IB. The highest
value of soil^IWD_IB can be (N_c − 1)(a_s/b_s), and since k(N_c) = 1/(N_c − 1), the most
negative value of this term is −ρ_IWD (a_s/b_s). As a result, in one iteration of the
algorithm, arc_min holds an amount of soil that is greater than or equal to
−(ρ_IWD + ρ_n N_IWD)(a_s/b_s). Similar to case 1, let m denote the number of iterations
that the algorithm has been repeated. Then the soil of arc_min after m iterations is:

$$ \mathrm{soil}(arc_{\min}) = -\,m\,(\rho_{IWD} + \rho_n N_{IWD})\,\frac{a_s}{b_s} . \qquad (27) $$

The values soil(arc_min) and soil(arc_max) are the extreme lower and upper bounds of
the soil of arcs in the graph (N, E) of the given problem, respectively. Therefore, the soil
of any arc after m iterations of the IWD algorithm remains in the interval [soil(arc_min),
soil(arc_max)].
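As a quick numeric sanity check, the two worst-case bounds of equations (26) and (27) can be evaluated directly. The parameter values below are illustrative assumptions chosen for the sketch, not the settings used in the paper.

```python
# Worst-case soil bounds after m iterations, following equations (26) and (27).
# All parameter values here are illustrative assumptions.
rho_s, rho_o = 0.9, 0.9      # global and local soil-decay parameters
IS0 = 10000.0                # initial soil of every arc
rho_IWD, rho_n = 1.0, 0.9    # soil-update strengths
N_IWD = 50                   # number of IWDs
a_s, b_s = 1.0, 0.01         # constants of the soil-update formula (13)

def soil_arc_max(m):
    # Equation (26): the positive (reinforcing) terms applied once per iteration
    return (rho_s * rho_o) ** m * IS0

def soil_arc_min(m):
    # Equation (27): the most negative terms applied every iteration
    return -m * (rho_IWD + rho_n * N_IWD) * (a_s / b_s)

m = 100
lo, hi = soil_arc_min(m), soil_arc_max(m)
assert lo < 0 < hi                         # every arc's soil stays in [lo, hi]
assert soil_arc_max(m) < soil_arc_max(1)   # upper bound decays: rho_s * rho_o < 1
```

With these values the interval keeps shrinking from above and widening linearly from below, which is exactly the behaviour the proof exploits.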
Consider the algorithm at the stage of choosing the next node j for an IWD that is
currently at node i. The value g(soil(i, j)) of arc (i, j) is calculated from equation (18),
which positively shifts soil(i, j) by the lowest negative soil value over the available arcs,
min_{l ∉ vc(IWD)} soil(i, l), as explained before. To consider the worst case, let this lowest
negative soil value be soil(arc_min) and let soil(i, j) be equal to soil(arc_max). As a result,
the value of g(soil(i, j)) becomes soil(arc_max) − soil(arc_min), with the assumption that
soil(arc_min) is negative, which is the worst case. Equation (16) is used to calculate the
probability of an IWD going from node i to node j. For this purpose, f(soil(i, j)) is
computed by equation (17), which yields:

$$ f(\mathrm{soil}(i,j)) = \frac{1}{\varepsilon_s + \big(\mathrm{soil}(arc_{\max}) - \mathrm{soil}(arc_{\min})\big)} . \qquad (28) $$

The denominator of formula (16) takes its largest possible value when each soil(i, k) in
equation (16) is assumed to be zero. Consequently, the probability p^IWD_i(j) of the IWD
going from node i to node j is bigger than a value p_lowest such that:

$$ p_i^{IWD}(j) > p_{lowest} = \frac{\varepsilon_s}{(N_c - 1)\big(\varepsilon_s + \mathrm{soil}(arc_{\max}) - \mathrm{soil}(arc_{\min})\big)} . \qquad (29) $$
The value of p_lowest is above zero.
With some assumptions on the relations between the parameters of the algorithm,
p_lowest can become even bigger. For example, if it is assumed that ρ_s ρ_o < 1, then
soil(arc_max) in equation (26) goes to zero as m increases. Moreover, if
ρ_n = −ρ_IWD / N_IWD, then soil(arc_min) in equation (27) becomes zero. These two
assumptions yield soil(arc_max) − soil(arc_min) = 0. Therefore, p_lowest = 1/(N_c − 1),
which is again above zero, and it is the biggest value that p_lowest can take in the
worst case.
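The bound of equation (29) is easy to evaluate numerically. In the sketch below, the values of ε_s, N_c, and the soil bounds are illustrative assumptions, not values from the paper.

```python
# Lower bound p_lowest of equation (29) on the node-transition probability.
# eps_s, N_c, and the soil bounds are illustrative assumptions.
eps_s = 0.01          # small positive constant of equation (17)
N_c = 100             # number of nodes in the problem graph
soil_max = 50.0       # worst-case upper soil bound, equation (26)
soil_min = -200.0     # worst-case lower soil bound, equation (27)

p_lowest = eps_s / ((N_c - 1) * (eps_s + soil_max - soil_min))
assert p_lowest > 0   # strictly positive, as the convergence argument requires

# When soil_max - soil_min shrinks to zero, the bound tightens to 1/(N_c - 1):
tight = eps_s / ((N_c - 1) * (eps_s + 0.0))
assert abs(tight - 1.0 / (N_c - 1)) < 1e-12
```

The point of the exercise is only that p_lowest is bounded away from zero, however small it may be for a large graph.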


The probability of finding any feasible solution by an IWD in iteration m will be
(p_lowest)^(N_c − 1). Since there are N_IWD IWDs, the probability p(s; m) of finding any
feasible solution s by the IWDs in iteration m is:

$$ p(s; m) = N_{IWD}\,(p_{lowest})^{\,N_c - 1} . \qquad (30) $$

Now, the probability of finding any feasible solution s at the end of M iterations of the
algorithm will be:

$$ P(s; M) = 1 - \prod_{m=1}^{M} \big(1 - p(s; m)\big) . \qquad (31) $$

Because 0 < p(s; m) ≤ 1, the product tends to zero as M becomes large:

$$ \lim_{M \to \infty} \prod_{m=1}^{M} \big(1 - p(s; m)\big) = 0 . $$

Therefore:

$$ \lim_{M \to \infty} P(s; M) = 1 . $$

This fact indicates that any solution s of the given problem is found at least once
by at least one IWD of the algorithm if the number of iterations M is big enough.
The following proposition summarizes the above finding.
Proposition 5.1. Let P(s; M) represent the probability of finding any feasible
solution s within M iterations of the IWD algorithm. As M gets larger, P(s; M)
approaches one:

$$ \lim_{M \to \infty} P(s; M) = 1 . \qquad (32) $$
Knowing that the optimal solution s* is a feasible solution of the problem, the
following proposition is concluded from the above.
Proposition 5.2. The IWD algorithm finds the optimal solution s* of the given
problem with probability one if the number of iterations M is sufficiently large.
It is noted that the number of iterations M required to find the optimal solution s*
can be decreased by careful tuning of the parameters of the IWD algorithm for a
given problem.
6. Experimental results
The proposed IWD algorithm for solving the MKP is tested here with a set of MKPs
from the OR-Library (http://people.brunel.ac.uk/~mastjjb/jeb/orlib/files). For each test
problem, the algorithm is run ten times. All experiments are implemented on a
personal computer with a Pentium 4 CPU at 1.80 GHz running Windows XP, using the
C# language in the Microsoft Visual Studio 2005 environment.
The first data set used for testing the proposed IWD-MKP algorithm comes
from the file mknap1.txt of the OR-Library, which contains seven test problems of the
MKP. For these seven problems, the qualities of the optimal solutions are known.
Therefore, the IWD-MKP algorithm is tested with these problems to see whether the
algorithm is able to find the optimal solutions.

Table I reports the quality of the total best of each run of the IWD-MKP algorithm
for each test problem in the file mknap1.txt. The IWD-MKP reaches the optimal
solutions for five of the problems in the average number of iterations reported in
Table I. For the other two test problems, the algorithm reaches very near-optimal
solutions after 100 iterations. For the problem with ten constraints and 20 items, the
qualities of iteration-best solutions for the ten runs of the IWD algorithm are shown in
Figure 5. The best run of the algorithm converges to the optimum solution 6,120 in four
iterations whereas its worst run converges in 39 iterations. Similar convergence curves
are shown in Figure 6 for the problem with ten constraints and 28 items. The best run
converges to the optimum solution 12,400 in five iterations whereas the worst run of
the algorithm converges in 20 iterations.
The second data set is taken from the file mknapcb1 of the OR-Library, in which
each test problem has five constraints and 100 items. Table II shows the results of
applying the proposed IWD-MKP to the first ten problems of the set. For each problem,
the best and the average quality of ten runs of the IWD-MKP are reported. For
comparison, the results of the two Ant Colony Optimization-based algorithms of
Leguizamon and Michalewicz (1999) (for short, L&M) and Fidanova (2002) are
mentioned. Moreover, the results obtained by the LP relaxation method that exist in the
Table I. The problems of the OR-Library in file mknap1.txt, which are solved by the
IWD-MKP algorithm.

Constraints  Variables  Quality of        Solution quality  Average no. of iterations
                        optimum solution  of the IWD-MKP    of the IWD-MKP
10           6          3,800             3,800             3.3
10           10         8,706.1           8,706.1           12.9
10           15         4,015             4,015             30.9
10           20         6,120             6,120             18.7
10           28         12,400            12,400            11.9
5            39         10,618            10,563.6          100
5            50         16,537            16,405            100

Note: The actual optimal qualities are known for these problems and are shown above.

[Figure 5. Convergence curves (solution quality vs. iterations) of ten runs of the IWD
algorithm for the MKP in Table I with the optimum 6,120; each curve shows one run of
the algorithm.]

[Figure 6. Convergence curves (solution quality vs. iterations) of ten runs of the IWD
algorithm for the MKP in Table I with the optimum 12,400; each curve shows one run of
the algorithm.]

Table II. The problems with five constraints and 100 items of the OR-Library in file
mknapcb1.txt, solved using 100 iterations of the proposed IWD-MKP algorithm.

Constraints.variables-  LP optimal  L&M     Fidanova  IWD-MKP  IWD-MKP
problem number                      best    best      best     average
5.100-00                24,585      24,381  23,984    24,295   24,175.4
5.100-01                24,538      24,274  24,145    24,158   24,031.3
5.100-02                23,895      23,551  23,523    23,518   23,404
5.100-03                23,724      23,527  22,874    23,218   23,120.9
5.100-04                24,223      23,991  23,751    23,802   23,737.2
5.100-05                24,884      24,613  24,601    24,601   24,554
5.100-06                25,793      25,591  25,293    25,521   25,435.6
5.100-07                23,657      23,410  23,204    23,374   23,344.9
5.100-08                24,445      24,204  23,762    24,148   24,047
5.100-09                24,635      24,411  24,255    24,366   24,317

Note: The results are compared with the LP optimal solutions and best solutions of two
ant-based algorithms: Leguizamon and Michalewicz (L&M), and Fidanova.

file mkcbres.txt of the OR-Library are also included. The solutions of the IWD-MKP
are often better than those obtained by Fidanova, and they are close to the results of
the L&M and LP relaxation methods. These near-optimal results of the proposed
IWD-MKP are obtained with the simple local heuristic used in the algorithm, whereas
the results of the L&M ACO-based algorithm rely on a rather complex heuristic.
Generally, the solution qualities of LP relaxation are better than those of the other
algorithms in Table II. Therefore, there is much room to improve these
population-based optimization algorithms, including the proposed IWD-MKP
algorithm, to reach the qualities of specialized optimization algorithms such as
LP relaxation or the algorithms in Vasquez and Hao (2001) and Vasquez and Yannick
(2005), which combine LP relaxation and tabu search.

The problems with ten constraints and 100 items of the OR-Library in file
mknapcb4.txt are also solved using 100 iterations of the IWD-MKP algorithm, and
the solutions are reported in Table III. The results of the IWD-MKP are compared with
the LP optimal solutions and the best solutions of two other Ant Colony
Optimization-based algorithms: L&M (Leguizamon and Michalewicz, 1999) and
Ant-Knapsack (Alaya et al., 2004). Again, the solutions of the LP relaxation are better
than those of the other methods in the table. The solutions of the IWD-MKP are close
to those of LP relaxation and the two ACO algorithms.
The experiments show that the proposed IWD-MKP algorithm is capable of obtaining
optimal or near-optimal solutions for different kinds of MKPs.


7. Conclusion
In this paper, a new population-based optimization algorithm called the Intelligent
Water Drops (IWD) algorithm, which is based on the mechanisms that exist in natural
rivers and among water drops, is proposed for the MKP and is thus called
IWD-MKP. The IWD-MKP considers an MKP as a graph and lets each IWD
traverse the arcs between the nodes of the graph and change their amounts of soil
according to the mechanisms embedded in the algorithm. In fact, each IWD
constructs a solution while modifying its environment. Then, at the end of each
iteration of the algorithm, the iteration-best solution is found and rewarded by
removing an amount of soil from all the arcs that form the solution. The amount of
soil removed is proportional to the amount of soil that the IWD has gathered
from the arcs of the solution in that iteration.
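As a rough, self-contained sketch of this iteration loop (many solutions are constructed, the iteration-best is rewarded by removing soil, and the total-best is updated), the toy below applies the same control flow to a trivial single-constraint knapsack. The construction rule, parameter values, and data are illustrative stand-ins, not the paper's actual IWD-MKP operators.

```python
import random

# Toy sketch of the IWD iteration loop summarized above, applied to a trivial
# single-constraint knapsack.  Everything here is an illustrative stand-in.
random.seed(0)
profits = [10, 7, 4, 3]
weights = [5, 4, 3, 2]
capacity = 9
soil = {i: 100.0 for i in range(len(profits))}  # one "arc" per item
rho_s, rho_IWD = 0.9, 1.0                       # stand-in update strengths

def construct():
    # Items with less soil tend to be tried first; noise keeps exploration alive.
    order = sorted(soil, key=lambda i: soil[i] + 50.0 * random.random())
    picked, load = [], 0
    for i in order:
        if load + weights[i] <= capacity:
            picked.append(i)
            load += weights[i]
    return picked

def quality(items):
    return sum(profits[i] for i in items)

total_best = []
for _ in range(30):                                # iterations of the algorithm
    solutions = [construct() for _ in range(5)]    # five IWDs build solutions
    it_best = max(solutions, key=quality)          # find the iteration-best
    for i in it_best:                              # reward: remove soil from its arcs
        soil[i] = rho_s * soil[i] - rho_IWD
    if quality(it_best) > quality(total_best):     # update the total-best
        total_best = it_best

print(quality(total_best))   # best profit found by the toy loop
```

Lowering the soil on rewarded arcs makes them more attractive in later constructions, which is the positive-feedback loop the paper describes.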
A simple local heuristic is used in the proposed IWD-MKP algorithm, and the
IWD-MKP algorithm is tested with different kinds of MKPs. The solutions obtained
by the IWD-MKP are optimal or near-optimal. The convergence properties of the
IWD algorithm are also discussed, and it is shown that the algorithm has the property
of convergence in value.

Table III. The problems with ten constraints and 100 items of the OR-Library in file
mknapcb4.txt, solved using 100 iterations of the proposed IWD-MKP algorithm.

Constraints.variables-  LP optimal  L&M     Ant-Knapsack  IWD-MKP  IWD-MKP
problem number                      best    best          best     average
10.100-00               23,480      23,057  23,064        22,936   22,754
10.100-01               23,220      22,801  22,801        22,591   22,517.8
10.100-02               22,493      22,131  22,131        21,969   21,793
10.100-03               23,087      22,772  22,717        22,416   22,269.2
10.100-04               23,073      22,654  22,654        22,466   22,270.9
10.100-05               23,053      22,652  22,716        22,475   22,369.2
10.100-06               22,257      21,875  21,875        21,731   21,634.8
10.100-07               22,964      22,551  22,551        22,542   22,226.4
10.100-08               22,882      22,418  22,511        22,218   22,099.3
10.100-09               23,090      22,702  22,702        22,702   22,527

Note: The results are compared with the LP optimal solutions and best solutions of two
ant-based algorithms: Leguizamon and Michalewicz (L&M), and Ant-Knapsack.


Different local heuristics may be proposed for use in the IWD algorithm for the
MKP in order to improve the quality of the solutions. New mechanisms,
preferably those that have roots in nature, may also be employed in the algorithm to
help it reach globally optimal solutions. It is emphasized that there are other
mechanisms and interactions in rivers and among natural water drops that have not
been employed in the IWD algorithm. As a result, the way is open for new ideas to be
used in the IWD algorithm.
Moreover, the mechanisms that have been used in the IWD algorithm need to be
analyzed both theoretically and experimentally. The IWD algorithm should be
modified for use with other combinatorial problems, and also for continuous
optimization problems. Local searches are often used in other optimization algorithms;
therefore, a local search algorithm may also be used in the IWD.
References
Alaya, I., Solnon, C. and Ghedira, K. (2004), Ant algorithm for the multidimensional knapsack
problem, International Conference on Bioinspired Optimization Methods and their
Applications, BIOMA 2004, pp. 63-72.
Birbil, I. and Fang, S.C. (2003), An electro-magnetism-like mechanism for global optimization,
Journal of Global Optimization, Vol. 25, pp. 263-82.
Chu, P. and Beasley, J. (1998), A genetic algorithm for the multi-constraint knapsack problem,
Journal of Heuristics, Vol. 4, pp. 63-86.
Dorigo, M. and Stutzle, T. (2004), Ant Colony Optimization, MIT Press, Cambridge, MA.
Dorigo, M., Maniezzo, V. and Colorni, A. (1991), Positive feedback as a search strategy,
Technical Report 91-016, Dipartimento di Elettronica, Politecnico di Milano, Milan.
Feo, T.A. and Resende, M.G.C. (1989), A probabilistic heuristic for a computationally difficult
set covering problem, Operations Research Letters, Vol. 8, pp. 67-71.
Feo, T.A. and Resende, M.G.C. (1995), Greedy randomized adaptive search procedures, Journal
of Global Optimization, Vol. 6, pp. 109-33.
Fidanova, S. (2002), Evolutionary algorithm for multidimensional knapsack problem,
PPSNVII-Workshop.
Fogel, L.J., Owens, A.J. and Walsh, M.J. (1966), Artificial Intelligence through Simulated Evolution,
Wiley, New York, NY.
Freville, A. (2004), The multidimensional 0-1 knapsack problem: an overview, European
Journal of Operational Research, Vol. 155, pp. 1-21.
Gavish, B. and Pirkul, H. (1982), Allocation of databases and processors in a distributed
computing system, Management of Distributed Data Processing, North Holland,
Amsterdam, pp. 215-31.
Gavish, B. and Pirkul, H. (1985), Efficient algorithms for solving the multiconstraint
zero-one knapsack problem to optimality, Mathematical Programming, Vol. 31,
pp. 78-105.
Gilmore, P. and Gomory, R. (1966), The theory and computation of knapsack functions,
Operations Research, Vol. 14, pp. 1045-74.
Glover, F. (1977), Heuristics for integer programming using surrogate constraints, Decision
Sciences, Vol. 8, pp. 156-66.

Glover, F. (1989), Tabu search Part I, ORSA Journal on Computing, Vol. 1 No. 3,
pp. 190-206.
Glover, F. and Kochenberger, A. (1996), Critical event tabu search for multidimensional
knapsack problems, in Osman, I.H. and Kelly, J.P. (Eds), Metaheuristics: Theory and
Applications, Kluwer Academic Publishers, Dordrecht, pp. 407-42.
Glover, F. and Kochenberger, G. (Eds) (2003), Handbook of Metaheuristics, Kluwer Academic
Publishers, Norwell, MA.
Holland, J. (1975), Adaptation in Natural and Artificial Systems, University of Michigan Press,
Ann Arbor, MI.
Kennedy, J. and Eberhart, R. (2001), Swarm Intelligence, Morgan Kaufmann Publishers, Inc.,
San Francisco, CA.
Kirkpatrick, S., Gelatt, C.D. and Vecchi, M.P. (1983), Optimization by simulated annealing,
Science, Vol. 220, pp. 671-80.
Leguizamon, G. and Michalewicz, Z. (1999), A new version of ant system for subset problem,
Congress on Evolutionary Computation, IEEE Press, Piscataway, NJ, pp. 1459-64.
Lourenco, H.R., Martin, O. and Stutzle, T. (2003), Iterated local search, in Glover, F. and
Kochenberger, G. (Eds), Handbook of Metaheuristics, International Series in Operations
Research & Management Science, Vol. 57, Kluwer Academic Publishers, Norwell, MA,
pp. 321-53.
Martello, S. and Toth, P. (1990), Knapsack Problems: Algorithms and Computer Implementations,
Wiley, New York, NY.
Mladenovic, N. and Hansen, P. (1997), Variable neighborhood search, Computers &
Operations Research, Vol. 24, pp. 1097-100.
Raidl, G.R. and Gottlieb, J. (2005), Empirical analysis of locality, heritability and heuristic bias in
evolutionary algorithms: a case study for the multidimensional knapsack problem,
Evolutionary Computation Journal, Vol. 13, pp. 441-7.
Rechenberg, I. (1973), Evolutionstrategie-Optimierung Technischer Systeme nach Prinzipien der
Biologischen Information, Fromman Verlag, Freiburg.
Shah-Hosseini, H. (2007), Problem solving by intelligent water drops, IEEE Congress on
Evolutionary Computation, Swissotel The Stamford, Singapore.
Shih, W. (1979), A branch and bound method for the multiconstraint zero-one knapsack
problem, Journal of Operational Research Society, Vol. 30, pp. 369-78.
Vasquez, M. and Hao, J-K. (2001), A hybrid approach for the 0-1 multidimensional knapsack
problem, 17th International Conference on Artificial Intelligence, pp. 328-33.
Vasquez, M. and Yannick, V. (2005), Improved results on the 0-1 multidimensional knapsack
problem, European Journal of Operational Research, Vol. 165, pp. 70-81.
Voudouris, C. and Tsang, E. (1995), Guided local search, Technical Report CSM-247,
Department of Computer Science, University of Essex, Colchester.
Weingartner, H. (1966), Capital budgeting of interrelated projects: survey and synthesis,
Management Science, Vol. 12, pp. 485-516.
Weingartner, H.M. and Ness, D.N. (1967), Methods for the solution of the multidimensional 0/1
knapsack problem, Operations Research, Vol. 15, pp. 83-103.

Further reading
Kellerer, H., Pferschy, U. and Pisinger, D. (2004), Knapsack Problems, Springer, New York, NY.

About the author


Hamed Shah-Hosseini was born in Tehran, Iran, in 1970. He received the BS degree in
Computer Engineering from Tehran University, and the MS and PhD degrees from
Amirkabir University of Technology, all with high honors. He is now with the
Electrical and Computer Engineering Department, Shahid Beheshti University,
Tehran, Iran. His research interests include Computational Intelligence, especially
Time-Adaptive Self-Organizing Maps, Evolutionary Computation, Swarm
Intelligence, and Computer Vision. Hamed Shah-Hosseini can be contacted at:
[email protected]; [email protected]. His personal homepage is
www.drshahhoseini.com
