Distributed Graph Coloring: An Approach Based On The Calling Behavior of Japanese Tree Frogs
Abstract
Graph coloring—also known as vertex coloring—considers the problem of assigning
colors to the nodes of a graph such that adjacent nodes do not share the same color.
The optimization version of the problem concerns the minimization of the number of used
colors. In this paper we deal with the problem of finding valid colorings of graphs in a
distributed way, that is, by means of an algorithm that only uses local information for
deciding the color of the nodes. Such algorithms dispense with any central control. Because
quite a few practical applications require colorings to be computed in a distributed
way, the interest in distributed algorithms for graph coloring has been growing during
the last decade. As an example consider wireless ad-hoc and sensor networks, where
tasks such as the assignment of frequencies or the assignment of TDMA slots are strongly
related to graph coloring.
The algorithm proposed in this paper is inspired by the calling behavior of Japanese
tree frogs. Male frogs use their calls to attract females. Interestingly, groups of males that
are located nearby each other desynchronize their calls. This is because female frogs are
only able to correctly localize the male frogs when their calls are not too close in time.
We experimentally show that our algorithm is very competitive with the current state
of the art, using different sets of problem instances and comparing to one of the most
competitive algorithms from the literature.
1 Introduction
Given an undirected graph G = (V, E), where V is the node set and E is the edge set, and a
number k > 0 of colors, a valid k-coloring of the graph is the assignment of exactly one color
to each node such that adjacent nodes (that is, nodes that are connected by an edge) do not
share the same color. Formally, we say that a k-coloring of an undirected graph G = (V, E)
is a function c : V → {1, 2, . . . , k} such that c(u) ≠ c(v) for each edge (u, v) ∈ E. The
optimization version of the graph coloring problem (GCP), which is NP-hard [22], consists
in finding the minimum number k∗ of colors such that a valid k∗ -coloring can be found. This
number is called the chromatic number of graph G and is denoted by χ(G). The GCP is
a quite generic problem. Practical applications originate especially from problems that can
be modelled by networks and graphs, for example, communication networks. Several tasks
in modern wireless ad-hoc networks, such as sensor networks, are related to graph coloring.
Examples include TDMA slot assignment [20], detection of mobile objects and reduction of
signaling actuators [38], distributed MAC layer management [18], energy-efficient coverage [9],
delay efficient sleep scheduling [30] or wakeup scheduling [24]. Due to the distributed nature
of these networks, algorithms for solving problems related to graph coloring are generally also
required to be distributed [32]. Such algorithms make an exclusive use of local information
for deciding the color of the nodes, that is, they are characterized by the absence of any
central control mechanism. The goal of this paper is to devise an algorithm for generating
valid colorings in a distributed manner.
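As an illustration of the problem definition, the following minimal C++ sketch (a hypothetical helper, not part of the algorithm proposed later) checks whether a given color assignment is a valid coloring of an undirected graph given as an edge list:

#include <utility>
#include <vector>

// Returns true if no edge connects two nodes of the same color.
// coloring[v] holds the color assigned to node v.
bool isValidColoring(const std::vector<std::pair<int, int>>& edges,
                     const std::vector<int>& coloring) {
    for (const auto& e : edges) {
        if (coloring[e.first] == coloring[e.second]) {
            return false;  // adjacent nodes share a color
        }
    }
    return true;
}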
The distributed conception of an algorithm is generally beneficial for its scalability. More-
over, in comparison to centralized approaches it is generally much easier to adapt a distributed
algorithm to dynamic changes during execution. Unfortunately, the exclusive use of local in-
formation is often not sufficient to completely capture the internal structure of certain graphs
or networks. The following example helps to understand the tradeoff between generating
colorings from a local and a global perspective. Figure 1 shows a graph which has been con-
structed using four different triangles, that is, complete graphs of three nodes. Hereby, we
distinguish between three inner triangles (the three groups of nodes that are close together)
and one outer triangle. The three inner triangles are connected to the outer triangle such that
each node of a specific inner triangle is connected to a different node of the outer triangle.
Even in a distributed manner it is fairly easy to obtain optimal colorings for each of the inner
triangles. Depending on the specific color assignment concerning the three inner triangles the
outer triangle may be colored with the same three colors (as in Figure 1(a)) or with three
additional colors (as in Figure 1(b)). Unfortunately, the probability of the latter case is quite
high, especially when the complexity of the graph is increased by adding more inner triangles.
As mentioned already above, one of the key difficulties when coloring graphs in a distributed
manner is that each node is only provided with local information and, therefore, it is unable
to detect situations such as the one from Figure 1(b).
Figure 1: Simple graph topology (composed of three inner triangles and one outer triangle).
(a) shows an optimal 3-coloring, while (b) shows a sub-optimal 6-coloring. Distributed
algorithms most often produce a 6-coloring, because global knowledge would be necessary
to capture the graph structure.
1.1 Our Contribution
In this paper we propose a distributed algorithm for graph coloring based on the calling be-
havior exhibited by male Japanese tree frogs for the attraction of females. Several researchers
have observed that male Japanese tree frogs decouple their calls [37]. This property has
evolved because females can only localize the males when their calling is not too close in
time. In [1] Aihara et al. proposed a theoretical model for simulating the behavior of these
frogs. The authors describe an oscillator system, where each oscillator has a phase θ ∈ [0, 2π]
that changes over time with frequency ω (where 2π is the time interval between two calls of
the same frog). When the phase reaches 2π, the oscillator fires and returns to the baseline
phase (θ = 0). The proposed system works such that oscillators try to maximize the distance
between their phases. This model works nicely for the desynchronization of two oscillators.
However, when more than two oscillators are concerned, the model does not accurately reflect
the real behavior of the frogs. A subsequent work [2] mentions some potential applications of
this model in artificial life and robotics. In both works the author(s) mention the limitations
of the systems when operating with groups of more than two coupled oscillators. In fact,
already with three oscillators the final solution (and its stability) strongly depends on the
initial variable settings.
The desynchronization of the frogs’ calls is achieved in a self-organized way. Therefore, the
algorithm proposed in this paper, which is based on this self-desynchronization mechanism,
can be regarded as a swarm intelligence approach [7, 5]. Swarm intelligence is a field of
computer science which is inspired by the collective behavior of social animals and other self-
organizing processes from nature. Successful examples from the literature include particle
swarm optimization (PSO) [23], which is an algorithm for optimization inspired by bird
flocking and fish schooling, and ant colony optimization (ACO) [11], which is inspired by the
foraging behavior of ant colonies. One of the distinguishing properties of a swarm intelligence
approach is the fact that the problem at hand is solved from a local perspective. Moreover,
problem solving is based on the cooperation of rather simple entities. Instead of each entity
trying to solve the problem by itself, they perform simple tasks from a local perspective. The
global problem is solved as a result of cooperation. Therefore, swarm intelligence principles
are well suited for their use in distributed algorithms.
The proposed algorithm uses a desynchronization method based on the original model by
Aihara et al. [1], with some small modifications. The algorithm can be easily implemented,
for example, in sensor networks. In addition to competitive results it comes with several
advantages such as, for example, a low consumption of energy resources and its potential
ability to adapt to changes in the network topology. As mentioned before, the main goal
of the algorithm is to obtain valid colorings that use as few colors as possible, while also
keeping the number of iterations necessary to reach these results low. An
extensive experimental evaluation shows that the results of the algorithm are comparable to or
better than the ones of state-of-the-art algorithms with respect to the number of colors. In
particular, the good performance of our algorithm for grid graphs of any size is remarkable.
On the downside, the results also show that our algorithm may require a slightly higher
number of communication rounds than other state-of-the-art algorithms.
1.2 Prior Work on Graph Coloring
Concerning prior work, a distinction must be made between centralized and distributed algo-
rithms. Concerning centralized algorithms, the literature offers both exact approaches that
guarantee to find an optimal solution in bounded time and (meta-)heuristic approaches. A
recent survey can be found in [33]. Due to the intractable nature of the GCP, larger problem
instances can only be tackled efficiently by heuristic approaches. Especially effective are the
tabu search algorithm from [4], a hybrid approach combining tabu search and evolutionary
algorithms from [31] and a variable neighborhood search technique [21]. These algorithms are
nowadays the best centralized metaheuristics for solving the GCP.
When considering distributed algorithms, it is very difficult (if not impossible) to narrow
down the state of the art to a small set of algorithms. This is because distributed algorithms
may be designed with very different goals. These goals may concern, for example, the per-
formance for particular topologies, the minimization of execution time (or communication
rounds), the generation of the best colorings possible, or the performance for dynamically
changing topologies. In addition, a general problem is that most proposals are not evaluated
on publicly available sets of benchmark instances. Moreover, results are generally not shown
per instance, making it difficult to compare with the proposed algorithms. In the following we
only focus on algorithms that generate valid solutions and possibly refer to their simplicity,
solution quality and time complexity.1 It must also be noted that many of the proposed
distributed algorithms were developed for applications in networks of devices with scarce re-
sources. For this reason, authors often study the message load the algorithm implies and try
to minimize the amount of computation required by the algorithm. Typically, these algorithms are
meant to work on a lower layer of the network in parallel with the applications or information
flows that the user may require to send. In [15], Fraigniaud et al. study the effect of the
amount of information shared between the nodes on the quality of the obtained colorings.
One of the most general works was presented by Finocchi et al. in [13]. The authors
introduced three versions of a distributed algorithm and study its behavior under various
conditions. The authors considered both the problem of obtaining O(∆ + 1)-colorings in as
few communication rounds as possible, as well as the problem of generating the best possible
colorings without any limit on the number of communication rounds. The authors provide
extensive experimental results for both cases. Most of their experimentation is based on
random graphs, which are not publicly available. However, they also offer results on a well-
known set of publicly available instances from the DIMACS challenge [14]. As the algorithm
proposed in [13] was shown to outperform the state of the art, we have chosen this algorithm
for comparison.
Concerning distributed algorithms based on swarm intelligence principles, the literature
offers, for example, a method inspired by the synchronous flashing of fireflies (see [27]). This
algorithm, which allows a simple implementation, reaches valid colorings fast, in a constant
number of communication rounds, regardless of the size of the network. However, this work
does not focus on minimizing the number of colors. The first attempt to use the calling behavior
of frogs for graph coloring was presented in [29]. Valid colorings are obtained by assigning a
color to each phase used by the nodes (that is, the oscillators). Therefore, if two nodes are
synchronized to exactly the same phase, they will be sharing a common color (the authors
consider a function f : [0, 2π] → (R, G, B), where 2π is the time frame between two callings
¹ In the scope of this paper the time complexity is, as usual, measured in terms of communication rounds.
A communication round is the unit of time in which each node is allowed to send at most one message.
of the same frog). The main drawback of this approach is that nodes with very near phases
will be colored with different colors. As such small deviations usually occur when the number
of nodes in the system increases, the algorithm does not obtain competitive results. This
work was further extended by adding a parameter for setting a priori the number of allowed
phases [28]. Experimentation shows that the system is able to find optimal solutions for small
topologies, provided the optimal number of colors is known. Note that in contrast to these
works, the algorithm that we propose aims for the minimization of the used number of colors
without any prior knowledge about the optimal solution.
The literature also offers many works that consider distributed graph coloring from a
theoretical point of view. Most of them concern upper bounds for the coloring quality as
well as the time complexity under different constraints. Hansen et al. [19] proposed the
distributed largest-first (DLF) algorithm that runs in O(∆² log n) communication rounds for
arbitrary graphs and that was proven to provide good upper bounds for specific topologies.
This algorithm was based on the largest-first approach which consists in giving priority for
choosing a color to the nodes with the highest degree (∆). This work was further extended
by Kosowski and Kuszner [25] who reduced the time complexity to O(∆ log n log ∆). These
authors also proved that some other approaches, like smallest-last or dynamic-saturation,
are not suitable for distributed environments. Later, in [34] Moscibroda and Wattenhofer
introduced an algorithm for obtaining O(∆)-colorings in O(∆ log n) time when considering
random geometric graphs and other well-known models for wireless multi-hop networks (no
results are given for other topologies). Other theoretical works which may be of interest
for the development of new algorithms are the game theoretic approach for efficient graph
coloring from Panagopoulou and Spirakis [36] and the work by Kuhn and Wattenhofer [26],
which introduces a new lower bound on the number of colors used by algorithms that are
restricted to a single communication round and a new lower bound on the time complexity
of obtaining an O(∆)-coloring of a graph.
(a) Fictitious initial situation with two frogs calling close in time. (b) The system after some iterations: the system has managed to increase the distance between the calls of the two frogs. (c) Final situation: the two frogs call in perfect anti-phase.
Figure 2: Graphical illustration of the working of a system of two coupled oscillators. The
circle in all three graphics represents the time frame between two calls of the same frog (2π),
the calling period. The nodes marked by integer numbers 1 and 2 indicate the phase of the
corresponding frogs, that is, the moment of time in which they call. (a) shows a fictitious
initial situation. (b) shows the situation after some iterations. Clearly the system tries to
put some distance between the calling of frogs 1 and 2. (c) shows an optimal final situation
in which the frogs (or oscillators) are in perfect anti-phase, that is, their respective calls have
reached the maximum distance in time (half a circle).
More recently, Aihara et al. [1] introduced a formal model based on a set of coupled
oscillators each one simulating the phase change in the calling period of a single frog. As
oscillators are associated to frogs, we will use both terms in the following with the same
meaning. The basic way of working of this model is graphically illustrated in Figure 2. The
circle represents—in all three graphics—the time frame between two calls of the same frog
(2π), the calling period. The nodes marked by integer numbers 1 and 2 indicate the phase
of the corresponding frogs, that is, the moment of time in which they call. Note that the
oscillators are not able to reach perfect anti-phase in a single step. In general, an indefinite
number of steps is needed before reaching the stable situation corresponding to perfect anti-
phase. Moreover, the difficulty of reaching the optimal configuration tends to increase with
an increasing number of frogs and also with an increasing degree of interaction between them
(note that two frogs that cannot hear each other do not influence each other).
Technically, the system introduced by Aihara et al. [1] works as follows. Each oscillator
i has a phase θi ∈ [0, 2π] that changes over time with frequency ωi (where 2π is the time
interval between two calls of the same frog, the calling period). When the phase reaches 2π,
the oscillator fires and returns to the baseline. In addition, oscillators may be coupled with
other oscillators. In case an oscillator j is coupled to an oscillator i, when oscillator i fires,
oscillator j receives a boost and changes the frequency of firing in the next round depending
on the gap ∆ji ∈ [0, 2π] (see below) between both oscillators. These changes do not happen
instantly upon receiving the stimulus. The corresponding oscillator rather waits until it fires.
The model can be summarized in the following equations. First, the behavior of an isolated
oscillator i is modelled as follows:
dθi/dt = ωi    (1)
Assuming that oscillators j and i are coupled, the gap between their (current) phases is
defined as:
∆ji = θj − θi (2)
Now, the change in the behavior of oscillator j as influenced by oscillator i can be described
as follows:
dθj/dt = ωj + g(∆ji) ,    (3)
where g(·) is the phase shift function which is responsible for changing the phase of the frogs
that are influenced by other frogs. In [1], the authors suggest the use of the following phase
shift function:
g(x) = α sin(x) (4)
We say that this system of oscillators is in a stable situation and in anti-phase when the
following two conditions are satisfied:
for all i ≠ j. The system presented in [1] is able to successfully place two coupled oscillators
in perfect anti-phase, independent of the initial settings of θ1 and θ2 . Unfortunately, several
problems arise when the number of oscillators grows. Figure 3 shows two examples for such
problems. Given an undirected graph G = (V, E), henceforth we will assign one oscillator to
each node in the graph. Therefore, in the following the terms node and oscillator will refer
to the same. We consider that two oscillators are coupled if and only if their corresponding
nodes are connected by an edge. Depending on the initial phases of the oscillators, for both
topologies shown in Figures 3(a) and 3(d) it is possible to reach suboptimal desynchroniza-
tions (as shown in Figures 3(b) and 3(e)). The corresponding optimal desynchronizations are
shown in Figures 3(c) and 3(f). In [1] the authors provide analytical results for using three
oscillators and show that there is a high system sensitivity with respect to the initial phases
(only a small subset of the possible initial settings leads to an optimal solution).
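To make Equations 1–4 concrete, the following C++ sketch advances a set of coupled oscillators by one Euler step, summing the pairwise phase shift g(x) = α sin(x) over all coupled neighbors. The continuous-time update, the step size and the summation over neighbors are simplifying assumptions of this illustration (in the original model the adjustment only takes effect when an oscillator fires):

#include <cmath>
#include <vector>

// One Euler step of a system of coupled oscillators following
// dtheta_i/dt = omega + sum over coupled j of alpha * sin(theta_j - theta_i).
void eulerStep(std::vector<double>& theta,
               const std::vector<std::vector<int>>& neighbors,
               double omega, double alpha, double dt) {
    const double twoPi = 6.283185307179586;
    std::vector<double> derivative(theta.size(), omega);
    for (std::size_t i = 0; i < theta.size(); ++i) {
        for (int j : neighbors[i]) {
            derivative[i] += alpha * std::sin(theta[j] - theta[i]);  // g(x) = alpha * sin(x)
        }
    }
    for (std::size_t i = 0; i < theta.size(); ++i) {
        theta[i] = std::fmod(theta[i] + derivative[i] * dt, twoPi);  // keep the phase in [0, 2*pi)
        if (theta[i] < 0.0) theta[i] += twoPi;
    }
}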
The initial model by Aihara et al. [1] was later extended by Mutazono et al. [35]. They
used their extended model for anti-phase synchronization for the purpose of collision-free
transmission scheduling in sensor networks. In order to make the system applicable to larger
topologies (sensor networks may consist of hundreds of nodes), they introduced weights in
order to regulate the coupling between each pair of oscillators. The resulting phase shift
function as introduced in [35] can be described as follows:
Thanks to these weights, the system reaches stable situations more easily, especially when
rather small values of α are used. The authors experimented with topologies of up to 20
nodes and although the system still showed certain difficulties to reach stable solutions, the
sensitivity to initial conditions decreased significantly.
Mutazono et al. [35] compared the results of their system to another mechanism for coupled
oscillator desynchronization proposed in [10]. Note that the mechanism from [10] is not based
(d) Topology 2. (e) Suboptimal desynchronization of topology 2 (with 4 different phases). (f) Optimal desynchronization of topology 2 (with three different phases).
Figure 3: Two examples for graph topologies (graphics (a) and (d)) that may cause problems
for the desynchronization as performed by the model proposed in [1]. Graphics (b) and (e)
show suboptimal desynchronizations (corresponding to stable attractors of the system) for
both topologies. In contrast, graphics (c) and (f) show optimal desynchronizations.
on the calling behavior of Japanese tree frogs. The main difference to frog-inspired systems
is the fact that the phase change of a node is made on the basis of only two other nodes. The
phase values make it possible to order all the nodes sequentially from small to large phase values. The
nodes whose phase values are used to change the phase value of a node are determined as
the predecessor and the successor in this (cyclic) sequence. As shown in [35], both systems
achieve similar results although no extensive experimentation is made on a broad-enough set
of network topologies: mostly random geometric graphs and hand-made instances with at
most eight nodes were used.
Another extension of the system by Aihara et al. [1] was introduced in [29]. The changes
concern the use of different weights for the phase shift function and the introduction of a
so-called frustration parameter which reduces the coupling between each pair of nodes. The
authors show that their system is able to obtain better solutions than the original model for
many different topologies as, for example, k-partite graphs, grids or platonic solids. Moreover,
the authors make some interesting observations: (1) the number of oscillators is not the key
factor for achieving desynchronization. It is rather the topology which most determines the
problem complexity. (2) the time distance between phases is not uniformly distributed around
the whole period. The number of nodes firing at each phase strongly affects the amount of
time between the phases.
Algorithm 1 Sensor event of node i
1: if less than K communication rounds executed then
2: θi := recalculateTheta()
3: ci := minimumColorNotUsed()
4: sendColoringMessage()
5: αi := αi /ρ
6: else
7: if first communication round of Phase II then
8: if (ci = 1) then pi := randomPositiveInteger()
9: else pi := 0 endif
10: else if ∃m ∈ Mi | (powerm ≥ pi ) then
11: ci := minimumColorNotUsedByNeighborsWithHigherPower()
12: pi := adoptPowerFromStrongestNode()
13: end if
14: sendRefinementMessage()
15: end if
16: clearMessageQueue()
tree. Next, the master node uses this tree to broadcast an alert to start running the FrogSim
algorithm, that is, the first communication round is triggered. This message also includes the
height of the tree which will be used later on by each node to define the amount of informa-
tion that it must store. The simulation of the FrogSim algorithm is composed of two distinct
phases. The first phase (called phase I; see lines 1–5 of Algorithm 1) makes use of the model
for the desynchronization of frog calling as introduced by Aihara et al. [1], with only a few
modifications. The main difference to other distributed graph coloring algorithms inspired
by this model is as follows. The θi values are used for determining the order in which the
nodes are allowed to choose colors, whereas in previous algorithms these values were directly
associated to specific colors. Note that our algorithm produces a valid coloring already in the
first communication round. The second phase (called phase II, see lines 7–15 of Algorithm 1),
which is initiated after K > 0 communication rounds of phase I, serves to improve the current
coloring by means of a refinement technique, similar to distributed local search.
Phases I and II of FrogSim will be described in detail in Sections 3.1 and 3.2. Moreover,
we will outline how the initially computed tree structure will be used to communicate and
store the best coloring found by the algorithm. In this process, each node collects the color
identifiers used by its children, determines the highest color used, and sends this information
to its parent node. In those cases in which the master node recognizes that the number of
colors used in a certain communication round improves over the currently best solution it
notifies all the other nodes. This procedure is explained in detail in Section 3.3.
where thetam ∈ [0, 1) contains the θ-value of the emitter, colorm is the color currently used by
the emitter and relevancem is a parameter that depends on the number of messages received
by the emitter during the last communication round. This parameter controls the weight
that is given by node i to the corresponding message m. In particular, less weight is given to
messages that were emitted by nodes that are influenced by many other nodes. The intuition
for this definition of the weights is that the θ-values of nodes that are little influenced by
other nodes should converge first. This facilitates the convergence of the θ-values of highly-
influenced nodes, which in turn facilitates that the system reaches a stable situation, a term
which refers to a situation in which the θ-values do not change anymore.
Based on the messages in Mi , function recalculateTheta() recalculates a new value for θi :
θi := θi + αi · Σ_{m∈Mi} relevancem ∗ inc[θm − θi] ,    (10)
where αi is a parameter used to control the convergence of the system, initially set to 0.5. In
general, the lower the value of αi the smaller the change applied to θi . Moreover, inc[·] is a
function—corresponding to the phase shift function of Equation 4—that is defined as follows:
inc[x] = x − 0.5 if x ≥ 0,  and  inc[x] = x + 0.5 if x < 0    (11)
Note that this function replaces the sine function which was originally used in [1] as the
phase shift function. This is because we have noticed that it leads to a better
convergence behavior than the sine function. Next, node i decides on a possibly new color
in function minimumColorNotUsed(). Formally, the possible color change by node i can be
described as:
ci := min{c ∈ N | ∄ m ∈ Mi with colorm = c}    (12)
In words, node i chooses among the colors that do not appear in any of the received messages
m ∈ Mi , the one with the lowest identifier. Finally, node i sends the following message m
(see function sendColoringMessage()):
m = ⟨ thetam := θi , colorm := ci , relevancem := 1/|Mi|² ⟩    (13)
Moreover, node i decreases the value of αi (see line 5 of Algorithm 1). Hereby, ρ is a parameter
of the algorithm that controls the rate of convergence of the θ-values. Note that once the
θ-values have converged the current coloring does not change anymore. To conclude a sensor
event, node i deletes all messages from its queue Mi (see function clearMessageQueue()), that
is, Mi = ∅.
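The following C++ sketch condenses one phase I sensor event of node i (lines 2–5 of Algorithm 1, Equations 10–13). Radio transmission, the Message type and the handling of an empty message queue are illustrative assumptions of this sketch, not details taken from the paper:

#include <cmath>
#include <set>
#include <vector>

struct Message { double theta; int color; double relevance; };

// inc[.] is the phase shift function of Equation 11.
double inc(double x) { return (x >= 0.0) ? x - 0.5 : x + 0.5; }

struct Node {
    double theta = 0.0;          // phase in [0, 1)
    double alpha = 0.5;          // convergence parameter, initially 0.5
    int color = 1;
    std::vector<Message> queue;  // messages M_i received in the current round

    // Phase I sensor event for a given rho > 1 (line 5 of Algorithm 1).
    Message sensorEventPhaseI(double rho) {
        // Equation 10: recalculate theta from the received messages.
        double shift = 0.0;
        for (const Message& m : queue) shift += m.relevance * inc(m.theta - theta);
        theta += alpha * shift;
        theta -= std::floor(theta);             // keep theta in [0, 1)

        // Equation 12: smallest color not used in any received message.
        std::set<int> used;
        for (const Message& m : queue) used.insert(m.color);
        color = 1;
        while (used.count(color) > 0) ++color;

        // Equation 13: outgoing message; then decrease alpha and clear the queue.
        double n = static_cast<double>(queue.size());
        Message out{theta, color, queue.empty() ? 1.0 : 1.0 / (n * n)};
        alpha /= rho;
        queue.clear();                          // clearMessageQueue()
        return out;
    }
};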
(a) Fictitious situation after phase I. (b) Some actions have created a conflict. (c) Conflicts are resolved.
Figure 4: Example of the working of phase II of FrogSim. Nodes are labeled with their
respective color. The nodes’ powers are shown as subscripts of their colors. Graphic (a)
shows a fictitious situation after phase I. Three colors are used in the current feasible coloring.
The fictitious θ-values are as indicated beside the nodes. Note that in phase II they will not
change anymore. Initially the nodes with color 1 receive a random power greater than 0 (in
this case, 2 and 5, respectively), while the remaining nodes receive a power of 0. First, the node
with highest power forces its neighbor to adopt its power (a color change of the neighbor is
not necessary). Then, this neighbor, which has color 2, forces its other neighbor to adopt
color 1 and power 5 (see graphic (b)). This creates a conflict. However, due to the fact that
power 5 is greater than power 2, the last node is forced to change its color from 1 to 2. Note
that the final situation uses one color less than the original one.
Node i changes its color only if there exists a message m ∈ Mi such that colorm = ci and powerm ≥ pi. In words, node i only changes its color if
there is an adjacent node with the same color and a higher (or equal) power value. The new
color chosen by node i is the first free color that is not already in use by a node influencing
node i and that has a power equal to or greater than the power value of node i. Formally,
the new color ci is chosen in function minimumColorNotUsedByNeighborsWithHigherPower() as
follows:
ci := min{c ∈ N | ∄ m ∈ Mi with colorm = c ∧ powerm ≥ pi}    (15)
In addition, node i updates its power value in function adoptPowerFromStrongestNode() in
the following way:
pi := max_{m∈Mi} {powerm}    (16)
This is the highest power among the powers of the nodes that have forced node i to choose
its current color. As a result, in following communication rounds node i will not be forced
to change its color, because with the new power it has priority over all nodes with a lower
power. Finally, node i sends a refinement message m in function sendRefinementMessage(),
where m is defined as follows:
The last action of the sensor event consists again in deleting all messages from the message
queue Mi , that is, Mi = ∅. Figure 4 shows a small example of the kind of conflicts that
phase II is supposed to resolve.
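A compact C++ sketch of this conflict-resolution step (lines 10–12 of Algorithm 1, Equations 15 and 16) is given below; the RefinementMessage type and the surrounding message handling are illustrative assumptions:

#include <set>
#include <vector>

struct RefinementMessage { int color; int power; };

// Phase II update for node i: the color is changed only if a neighbor with the
// same color and at least the same power forces it (Equations 15 and 16).
void refineColor(int& color, int& power, const std::vector<RefinementMessage>& msgs) {
    bool forced = false;
    for (const RefinementMessage& m : msgs) {
        if (m.color == color && m.power >= power) { forced = true; break; }
    }
    if (!forced) return;

    // Equation 15: smallest color not used by any influencing node of power >= p_i.
    std::set<int> blocked;
    int strongest = power;
    for (const RefinementMessage& m : msgs) {
        if (m.power >= power) blocked.insert(m.color);
        if (m.power > strongest) strongest = m.power;
    }
    int c = 1;
    while (blocked.count(c) > 0) ++c;
    color = c;
    power = strongest;  // Equation 16: adoptPowerFromStrongestNode()
}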
3.3 Determining and Storing the Best Coloring Found
It is intuitively clear that the current coloring of our system—that is, the coloring defined by
colors ci for all nodes i—does not necessarily improve monotonically over time. In some communication rounds,
especially during the second phase of the algorithm, the new coloring after the choice of new
colors might actually be worse than the coloring of the previous communication round. This
behavior is very natural, because the search space of a combinatorial optimization problem
is characterized by rather many local minima. If we assume that the current solution corre-
sponds to such a local minimum, the only way to find a better solution is to accept worse
solutions for some iterations. In the context of metaheuristic algorithms such an action is
known as escaping from a local minimum [6].
In order to store the best coloring found by our algorithm over the whole simulation time,
the following mechanism is used. Remember that the first action of the algorithm (before
simulating phases I and II) consisted in the generation of a virtual minimum spanning tree
over the network, resulting in a root node (the master). This tree is characterized by its height
h, which corresponds to the maximum number of communication rounds that a broadcast
message sent by the root node needs in order to reach all nodes of the network. In this context,
note that h may be minimized by using a priori some methods from the literature which are
able to generate spanning trees with minimum diameter in a distributed manner [8].
Each node is required to store its colors from the last 2h communication rounds. Moreover,
we assume that each node stores the color it has used in the best-found coloring in a specific
variable. The way in which this best-found coloring is determined is as follows. First, at each
communication round a node sends the maximum color used by itself and its children (with
respect to the tree) to its parent in the tree. Such a message only contains two integers (the
maximum color and the communication round identifier). Moreover, no additional messages
are required because this information can easily be added to the messages that are sent anyway
(see lines 4 and 14 of Algorithm 1). Given the height h of the tree, it takes h communication
rounds until all the information regarding a specific communication round has reached the
root node. Moreover, the number of colors used at this communication round is the maximum
color identifier that reaches the root node via one of its children. In case this maximum color
is lower than the number of colors used in the currently best-found coloring, the root node
broadcasts a message with the corresponding communication round identifier in which this
coloring was obtained. In order for this information to reach all the nodes of the network,
another h communication rounds are necessary. This is why all nodes must store their colors
from the last 2h communication rounds. Note that these alert messages from the root node can
also be propagated using the normal messages of Algorithm 1.
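As a minimal sketch of this bookkeeping, the following C++ fragment shows how a node could combine its own color with the maximum colors reported by its children for a given communication round, and how the root decides whether to announce an improvement. The data structures are hypothetical, since in FrogSim this information simply piggybacks on the messages of Algorithm 1:

#include <algorithm>
#include <vector>

// Report sent upwards in the spanning tree for one communication round.
struct ColorReport { int round; int maxColor; };

// Combine the node's own color with the reports received from its children
// for the same round; the result is forwarded to the parent node.
ColorReport aggregate(int round, int ownColor,
                      const std::vector<ColorReport>& childReports) {
    int maxColor = ownColor;
    for (const ColorReport& r : childReports) {
        if (r.round == round) maxColor = std::max(maxColor, r.maxColor);
    }
    return ColorReport{round, maxColor};
}

// At the root: if the aggregated value improves on the best coloring seen so
// far, remember it; the root then broadcasts the round identifier to all nodes.
bool rootImproves(int& bestColors, const ColorReport& aggregated) {
    if (aggregated.maxColor < bestColors) {
        bestColors = aggregated.maxColor;
        return true;
    }
    return false;
}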
4 Experimental Results
We coded our algorithm by means of discrete event simulation, implemented from scratch
in C++. For the experimental evaluation we chose a large set of different graph topologies:
random geometric graphs of different densities, grid graphs of different sizes, and most of the
graphs used for the DIMACS challenge [14]. All graphs that we used for the experimental
evaluation can be found for download at http://www.lsi.upc.edu/~hhernandez/graphcoloring.
Note that an edge connecting two nodes indicates that both nodes are able to communicate
directly with each other via their radio antennas.
For the purpose of comparison we re-implemented one of the currently best algorithms
from the literature. This algorithm was presented by Finocchi et al. in [13]. For simplicity,
this algorithm will henceforth be referred to as Finocchi. Unfortunately, the description of this
algorithm in the original article contains some ambiguities, which required us to make some
decisions regarding certain aspects in the context of the re-implementation. Fortunately, our
own implementation of the Finocchi algorithm generally provides better results than the ones
reported in [13]. This can be verified by comparing the results of the original implementation
with the results of our re-implementation for the graph topologies that are used both in [13]
and in the present paper.
In the following we present the results of three algorithms: (1) Finocchi [13], (2) FrogSim−,
which is the FrogSim algorithm without phase II, and (3) FrogSim, which is the complete
FrogSim algorithm. In our opinion, the study of the results of FrogSim− is worthwhile,
because it reflects the power of the frog-based model without any additional improvements
of the refinement phase. We applied each of these three stochastic algorithms 100 times
to each graph topology and report the best coloring found in all 100 runs, as well as the
average quality of the best colorings found per run. The number of rounds necessary to reach
these solutions is—due to space reasons—not included in the result tables. However, it is
important to note that algorithms such as Finocchi and FrogSim, when used in sensor networks,
are generally carried out continuously in a lower-level layer of the network. Therefore, the
number of communication rounds necessary to reach the best solution are not that significant.
Instead our algorithm continually tries to improve the current solution. As an informative
note, our algorithm requires, on average, 10.34 communication rounds for finding its best
solution in phase I. After entering phase II the best solution is reached, on average, after 3.46
communication rounds. In total, FrogSim requires, on average, 24.33 communication rounds
for finding its best solution. The algorithm of Finocchi et al. uses, on average, a comparable
number of communication rounds (19.83). It should be noted that, in the case of FrogSim,
these numbers do not depend so much on the size of the network. However, FrogSim takes
generally more communication rounds for those graphs that have a larger number of edges.
After tuning by hand, we decided to use a communication round limit of 100 rounds for
FrogSim. Moreover, parameter K, which specifies the number of communication rounds for
phase I, was always set to 80. As a last remark, note that the size of the messages used in
FrogSim is constant (O(1)). In other words, the message size does not depend on the network
size. This is surely a desirable property of a distributed algorithm for graph coloring.
For each algorithm we first give the number of colors from the best coloring found over 100
independent runs. In the second column, we show the average number of colors used by the
100 colorings obtained in 100 runs. For ease of comparison the best performing algorithm for
each instance is indicated in bold face. Hereby, the best performing algorithm is defined as the
algorithm that finds the best coloring. Ties are broken (if possible) by the average values. The
four bottom rows of the table provide a summary of the results. The first one of these rows
gives averages for each column. In addition, the last three rows summarize how each algorithm
is performing in comparison to the others. The first of these rows (labelled # times better)
indicates for each algorithm the number of instances for which the corresponding algorithm
was the sole winner, that is, better than the other two algorithms. The second row (labelled
# times all equal) indicates for how many instances the results of the three algorithms
were equal, whereas the last table row indicates for each algorithm the number of instances
for which the corresponding algorithm was the sole loser.
As expected, the results show that the smaller the size of the graph, the easier it is to
find good colorings. The algorithms obtain equivalent results for 24 out of 40 instances (note
that all small instances with 20 and 50 nodes are included in this set). Although Finocchi is the
sole winner for 3 instances, it is also the sole loser for 11 topologies. More importantly,
Finocchi is not always able to match the FrogSim algorithms in terms of the best colorings
for each instance. More specifically, Finocchi uses 0.250 colors more on average than both
FrogSim algorithms. Although FrogSim− is not able to outperform the other two algorithms
for any given instance, it only obtains the worst result for 5 instances. FrogSim improves
over the results of FrogSim− especially for the larger instances. It turns out to be the sole
winner for 10 instances. It is interesting to note that in those cases where FrogSim is better
than FrogSim− this is due to the average solution quality. In this sense it can be said that
in the context of random geometric graphs the use of phase II makes the FrogSim algorithm
more robust. It is also important to note that the best colorings obtained are—for almost all
instances—better than ∆ + 1 colors.
In addition to Table 1, the results are also presented in a visual form in Figure 5. For
each graph (x-axis) the improvement of FrogSim− and FrogSim over Finocchi in terms of the
best coloring (top graphic) and the average solution quality (bottom graphic) is presented.
The 40 considered graphs are ordered from left to right as they appear in Table 1. These
graphics show nicely that the FrogSim algorithms gain an advantage over Finocchi with
growing instance size (from left to right). The bottom graphic shows that there are only
three graphs for which Finocchi achieves a better average solution quality.
The results are presented in Tables 2 and 3, in the same way as in the case of random
Table 1: Results for random geometric graphs.
Instance (n, ∆, χ) | Finocchi: colors, avg. | FrogSim−: colors, avg. | FrogSim: colors, avg.
random-graph-n20-r05-1.gph (20,2,?) 2 2.000 2 2.000 2 2.000
random-graph-n20-r05-10.gph (20,1,?) 2 2.000 2 2.000 2 2.000
random-graph-n20-r05-2.gph (20,2,?) 3 3.000 3 3.000 3 3.000
random-graph-n20-r05-3.gph (20,2,?) 2 2.000 2 2.000 2 2.000
random-graph-n20-r05-4.gph (20,3,?) 3 3.000 3 3.000 3 3.000
random-graph-n20-r05-5.gph (20,3,?) 3 3.000 3 3.000 3 3.000
random-graph-n20-r05-6.gph (20,1,?) 2 2.000 2 2.000 2 2.000
random-graph-n20-r05-7.gph (20,2,?) 3 3.000 3 3.000 3 3.000
random-graph-n20-r05-8.gph (20,2,?) 3 3.000 3 3.000 3 3.000
random-graph-n20-r05-9.gph (20,2,?) 2 2.000 2 2.000 2 2.000
random-graph-n50-r05-1.gph (50,6,?) 6 6.000 6 6.000 6 6.000
random-graph-n50-r05-10.gph (50,6,?) 5 5.000 5 5.000 5 5.000
random-graph-n50-r05-2.gph (50,3,?) 3 3.000 3 3.000 3 3.000
random-graph-n50-r05-3.gph (50,4,?) 4 4.000 4 4.000 4 4.000
random-graph-n50-r05-4.gph (50,4,?) 3 3.260 3 3.000 3 3.000
random-graph-n50-r05-5.gph (50,4,?) 3 3.000 3 3.000 3 3.000
random-graph-n50-r05-6.gph (50,4,?) 4 4.000 4 4.000 4 4.000
random-graph-n50-r05-7.gph (50,6,?) 4 4.000 4 4.000 4 4.000
random-graph-n50-r05-8.gph (50,4,?) 3 3.000 3 3.000 3 3.000
random-graph-n50-r05-9.gph (50,3,?) 3 3.000 3 3.000 3 3.000
random-graph-n100-r05-1.gph (100,8,?) 5 5.000 5 5.000 5 5.000
random-graph-n100-r05-10.gph (100,8,?) 5 5.000 5 5.820 5 5.220
random-graph-n100-r05-2.gph (100,7,?) 4 4.420 4 4.430 4 4.000
random-graph-n100-r05-3.gph (100,7,?) 6 6.000 6 6.000 6 6.000
random-graph-n100-r05-4.gph (100,9,?) 5 5.000 5 5.560 5 5.410
random-graph-n100-r05-5.gph (100,7,?) 4 4.500 4 4.470 4 4.000
random-graph-n100-r05-6.gph (100,6,?) 6 6.000 6 6.000 6 6.000
random-graph-n100-r05-7.gph (100,6,?) 5 5.000 4 4.450 4 4.200
random-graph-n100-r05-8.gph (100,6,?) 5 5.000 4 4.110 4 4.000
random-graph-n100-r05-9.gph (100,7,?) 6 6.000 6 6.000 6 6.000
random-graph-n200-r05-1.gph (200,13,?) 10 10.000 8 8.500 8 8.360
random-graph-n200-r05-10.gph (200,13,?) 8 8.000 8 8.000 8 8.000
random-graph-n200-r05-2.gph (200,12,?) 8 8.000 8 8.030 8 8.000
random-graph-n200-r05-3.gph (200,12,?) 8 8.000 7 7.640 7 7.490
random-graph-n200-r05-4.gph (200,12,?) 9 9.000 8 8.100 8 8.000
random-graph-n200-r05-5.gph (200,17,?) 10 10.000 8 8.990 8 8.840
random-graph-n200-r05-6.gph (200,12,?) 8 8.260 8 8.000 8 8.000
random-graph-n200-r05-7.gph (200,12,?) 7 7.000 6 6.830 6 6.750
random-graph-n200-r05-8.gph (200,11,?) 8 8.660 7 7.630 7 7.490
random-graph-n200-r05-9.gph (200,11,?) 7 7.000 7 7.260 7 7.050
average 4.925 4.978 4.675 4.845 4.675 4.770
# times better 3 0 10
# times all equal 24 24 24
# times worse 11 5 0
geometric graphs. Concerning the chromatic numbers, in many cases they are known. In the
cases in which they are not known, we either provide an upper bound (in the form ≤X) or a
question mark. As a general remark before analyzing the results in depth, we would like to
mention that for distributed algorithms it is very difficult, if not impossible, to capture the
global structure of these graphs in many cases. Therefore, it is not surprising that the results
obtained by distributed algorithms are often far away from the chromatic numbers.
First it should be emphasized that the FrogSim algorithms achieve the best results for all
instances except for instance zeroin.i.2.col (see Table 3), where Finocchi achieves a slightly bet-
ter average solution quality. Moreover, in only seven further cases is Finocchi able to match
the results of the FrogSim algorithms. On the other hand, for some instances the FrogSim al-
gorithms improve remarkably over Finocchi. Consider, for example, instance DSJC1000.9.col
Figure 5: Summary of results for random geometric graphs. Both graphics show the perfor-
mance improvement of FrogSim− (light gray bars) and FrogSim (dark gray bars) over Finocchi
(in percent). The instances of Table 1 are treated from left to right in the same order. The top
graphic concerns the best colorings found, whereas the bottom graphic concerns the average
solution quality.
(see Table 3) where the best colorings found by the FrogSim algorithms need 297 colors,
while the best coloring found by Finocchi uses 315 colors. Other examples of remarkable
improvements over Finocchi are the six flat∗ instances from Table 3. Concerning the com-
parison between FrogSim− and FrogSim, we can state that the power of the algorithm can
clearly be attributed to the first (frog-inspired) phase. As in the case of random geometric
graphs, phase II of FrogSim basically helps to make the algorithm more robust. It should
also be emphasized that, in all cases, the FrogSim colorings require a number of colors that
is smaller than ∆ + 1. Although in most cases the best solution obtained is not an optimal
coloring—or we do not know whether it is—for most of the instances of type
mulsol.X, myciel.X and zeroin.X our algorithm generates optimal colorings in each of the 100
applications per instance.
Finally, in Figures 6 and 7 the results of Tables 2 and 3 are provided again in a graphical
form.
Table 2: Results for the first set of instances from the DIMACS challenge.
Instance (n, ∆, χ) | Finocchi: colors, avg. | FrogSim−: colors, avg. | FrogSim: colors, avg.
DSJC1000.1.col (1000,127,≤20) 30 31.250 29 29.564 29 29.564
DSJC1000.5.col (1000,551,≤83) 124 126.550 118 120.505 118 120.505
DSJC1000.9.col (1000,924,≤224) 315 321.140 297 303.594 297 303.594
DSJC125.1.col (125,23,?) 8 8.000 7 7.485 7 7.386
DSJC125.5.col (125,75,?) 24 25.630 22 23.535 22 23.475
DSJC125.9.col (125,120,?) 54 55.300 50 53.030 50 53.020
DSJC250.1.col (250,38,?) 12 12.750 11 11.941 11 11.931
DSJC250.5.col (250,147,?) 40 42.420 38 39.792 38 39.772
DSJC250.9.col (250,234,?) 95 97.290 89 92.297 89 92.297
DSJC500.1.col (500,68,≤12) 18 19.310 17 18.218 17 18.178
DSJC500.5.col (500,286,≤48) 70 72.800 67 68.762 67 68.762
DSJC500.9.col (500,471,≤126) 170 177.000 164 167.703 164 167.703
DSJR500.1.col (500,25,?) 14 14.540 13 13.960 13 13.901
DSJR500.1c.col (500,497,≤85) 100 108.290 97 103.129 97 102.980
DSJR500.5.col (500,388,≤122) 142 146.770 141 146.337 140 144.634
flat1000-50-0.col (1000,520,50) 121 124.420 116 118.139 116 118.139
flat1000-60-0.col (1000,524,60) 121 124.730 115 118.604 115 118.604
flat1000-76-0.col (1000,532,76) 121 125.220 117 119.119 117 119.119
flat300-20-0.col (300,160,20) 44 46.400 42 43.485 42 43.455
flat300-26-0.col (300,158,26) 46 47.660 42 44.198 42 44.188
flat300-28-0.col (300,162,28) 45 47.260 43 44.366 43 44.366
fpsol2.i.1.col (496,252,65) 65 65.000 65 65.000 65 65.000
fpsol2.i.2.col (451,346,30) 30 30.360 30 30.178 30 30.030
fpsol2.i.3.col (425,346,30) 30 30.450 30 30.109 30 30.059
inithx.i.1.col (864,502,54) 54 54.000 54 54.000 54 54.000
inithx.i.2.col (645,541,31) 31 31.020 31 31.000 31 31.000
inithx.i.3.col (621,542,31) 31 31.000 31 31.000 31 31.000
le450-15a.col (450,99,15) 21 21.930 20 21.010 20 20.733
le450-15b.col (450,94,15) 20 21.440 20 21.059 20 20.693
le450-15c.col (450,139,15) 29 30.580 28 29.535 28 29.257
le450-15d.col (450,138,15) 29 30.510 28 29.545 28 29.366
le450-25a.col (450,128,25) 27 28.830 27 27.832 26 27.416
le450-25b.col (450,111,25) 26 27.660 26 27.317 26 26.941
le450-25c.col (450,179,25) 35 35.890 34 35.317 33 34.861
le450-25d.col (450,157,25) 35 35.650 33 35.406 33 34.851
le450-5a.col (450,42,5) 13 13.230 12 12.129 11 12.069
le450-5b.col (450,42,5) 12 13.220 12 12.030 12 12.020
le450-5c.col (450,66,5) 15 16.590 11 13.218 11 13.178
le450-5d.col (450,68,5) 15 16.580 11 13.347 11 13.327
average 57.231 59.197 54.821 56.584 54.718 56.446
# times better 0 0 25
# times all equal 3 3 3
# times worse 36 0 0
An optimal coloring can easily be achieved by starting the coloring process at a single node
with the first color, and then proceeding incrementally. The next step consists
in coloring all the neighbors of the starting node with the second color. All the neighbors of
these nodes are colored with the first color again, and so on. However, when considering
distributed computing, nodes only have local information, whereas information about the
position in the grid is missing. Moreover, the incremental process described above is difficult
to achieve without a global control. Therefore, when coloring grids in a distributed way,
what usually happens is that the coloring process is initiated in several different nodes. If
the coloring of these nodes does not follow the chessboard distribution of colors, eventually
borders will form where additional colors are needed in order to obtain valid colorings. An
example is shown in Figure 9. In this context, remember that numbers correspond to colors.
The process of an incremental coloring is shown starting at the top left grid and ending at the
Table 3: Results for the second set of instances from the DIMACS challenge.
Instance (n, ∆, χ) | Finocchi: colors, avg. | FrogSim−: colors, avg. | FrogSim: colors, avg.
anna.col (138,71,11) 11 11.000 11 11.000 11 11.000
david.col (87,82,11) 11 11.720 11 11.446 11 11.297
games120.col (120,13,9) 9 9.000 9 9.040 9 9.000
homer.col (561,99,13) 14 14.070 13 13.644 13 13.158
huck.col (74,53,11) 11 11.000 11 11.000 11 11.000
jean.col (80,36,10) 10 10.000 10 10.069 10 10.000
miles1000.col (128,86,42) 43 44.990 43 44.327 42 44.000
miles1500.col (128,106,73) 74 74.220 73 73.861 73 73.614
miles250.col (128,16,8) 9 10.160 8 8.782 8 8.683
miles500.col (128,38,20) 21 22.120 20 21.297 20 21.139
miles750.col (128,64,31) 32 33.330 31 33.050 31 32.832
mulsol.i.1.col (197,121,49) 49 49.000 49 49.000 49 49.000
mulsol.i.2.col (188,156,31) 31 31.360 31 31.000 31 31.000
mulsol.i.3.col (184,157,31) 31 31.140 31 31.000 31 31.000
mulsol.i.4.col (185,158,31) 31 31.060 31 31.000 31 31.000
mulsol.i.5.col (186,159,31) 31 31.330 31 31.000 31 31.000
myciel2.col (5,2,3) 3 3.000 3 3.000 3 3.000
myciel3.col (11,5,4) 4 4.060 4 4.000 4 4.000
myciel4.col (23,11,5) 5 5.180 5 5.000 5 5.000
myciel5.col (47,23,6) 6 6.230 6 6.000 6 6.000
myciel6.col (95,47,7) 7 7.080 7 7.000 7 7.000
myciel7.col (191,95,8) 8 8.290 8 8.059 8 8.000
queen10-10.col (100,35,?) 15 15.420 14 14.228 14 14.188
queen11-11.col (121,40,11) 17 17.230 14 15.653 14 15.653
queen12-12.col (144,43,?) 17 17.700 16 16.960 16 16.921
queen13-13.col (169,48,13) 19 19.950 17 18.188 17 18.178
queen14-14.col (196,51,?) 20 20.730 18 19.545 18 19.535
queen15-15.col (225,56,?) 21 22.160 20 20.762 20 20.762
queen16-16.col (256,59,?) 21 23.100 21 21.990 21 21.990
queen5-5.col (25,16,5) 5 6.790 7 7.238 5 6.752
queen6-6.col (36,19,7) 9 9.760 8 8.743 8 8.743
queen7-7.col (49,24,7) 10 10.920 10 10.079 10 10.000
queen8-12.col (96,32,12) 15 15.280 13 14.386 13 14.327
queen8-8.col (64,27,9) 11 12.330 11 11.752 11 11.752
queen9-9.col (81,32,10) 12 13.510 12 13.000 12 12.911
school1.col (385,282,?) 40 41.800 35 38.772 35 38.703
school1-nsh.col (352,232,?) 37 38.780 31 35.762 31 35.614
zeroin.i.1.col (211,111,49) 49 49.170 49 49.000 49 49.000
zeroin.i.2.col (211,140,30) 30 30.000 30 30.010 30 30.010
zeroin.i.3.col (206,140,30) 30 30.310 30 30.010 30 30.000
average 20.725 21.357 20.050 20.741 19.975 20.669
# times better 1 0 19
# times all equal 4 4 4
# times worse 32 3 0
bottom right grid. The first row shows several nodes where the coloring is initiated with color
1. These wrong initial decisions lead to borders (see the gray-colored nodes in the bottom
row) where additional colors are needed.
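For reference, the incremental coloring described above is straightforward with global knowledge. The following C++ sketch 2-colors a bipartite graph such as a grid by breadth-first search, starting from an arbitrary node; it is a centralized baseline for illustration only, not part of FrogSim:

#include <queue>
#include <vector>

// Centralized BFS 2-coloring of a bipartite graph (e.g., a grid graph):
// the start node receives color 1, its neighbors color 2, and so on.
std::vector<int> bfsTwoColoring(const std::vector<std::vector<int>>& adj, int start) {
    std::vector<int> color(adj.size(), 0);  // 0 means "not yet colored"
    std::queue<int> pending;
    color[start] = 1;
    pending.push(start);
    while (!pending.empty()) {
        int u = pending.front();
        pending.pop();
        for (int v : adj[u]) {
            if (color[v] == 0) {
                color[v] = (color[u] == 1) ? 2 : 1;  // alternate colors between levels
                pending.push(v);
            }
        }
    }
    return color;
}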
Computational results are shown in Table 4. Note that in this case all chromatic numbers
are known as they can be established theoretically. While small grids can basically be colored
correctly by all three algorithms, both Finocchi and FrogSim− have—as expected—increasing
difficulties when the grid size grows. Even so, FrogSim− has clear advantages
over Finocchi. This is indicated by the average numbers given in the fourth-to-last table row,
and also by the fact that Finocchi is the sole loser in 39 cases, whereas FrogSim− is the sole
loser in only 2 cases. In contrast to the deteriorating performance of Finocchi and FrogSim−
when the grid size grows, FrogSim achieves perfect colorings in all 100 applications for all
Figure 6: Summary of results for the first set of instances from the DIMACS challenge. Both
graphics show the performance improvement of FrogSim− (light gray bars) and FrogSim (dark
gray bars) over Finocchi (in percent). The instances of Table 2 are treated from left to right
in the same order. The top graphic concerns the best colorings found, whereas the bottom
graphic concerns the average solution quality.
instances, which is a remarkable achievement. Even the large grids with periodic boundary
conditions (see graphs Ising32x8.col and Ising32x8-torus.col used in [27]) do not pose any
difficulty for FrogSim. In contrast, both Finocchi and FrogSim− use four colors instead of
the optimal two colors in each coloring generated. Summarizing, we can state that phase II
of FrogSim is very useful when applied to grid topologies, helping the algorithm to achieve
an excellent performance.
Figure 10 summarizes graphically the results from Table 4. Note that the y-axis is scaled
differently than in the other summarizing figures in this section due to the range of the
plotted data. The significant improvement of FrogSim over both Finocchi and FrogSim− can be
nicely appreciated in these graphics. Also the growing advantage of the FrogSim algorithms
over Finocchi can be seen by the fact that the height of the bars generally increases from left
to right. Considering the bottom graphic, which concerns the average solution quality, we
can note that FrogSim− is much less robust than FrogSim.
Figure 7: Summary of results for the second set of instances from the DIMACS challenge.
Both graphics show the performance improvement of FrogSim− (light gray bars) and FrogSim
(dark gray bars) over Finocchi (in percent). The instances of Table 3 are treated from left
to right in the same order. The top graphic concerns the best colorings found, whereas the
bottom graphic concerns the average solution quality.
The algorithm proposed in [27] assumes that the number of colors required for the coloring is known a priori, that is, the
algorithm must be run for a pre-fixed number of colors. When graphs are large and chromatic
numbers are unknown, such an algorithm is not practical. In any case, FrogSim and the algorithm
from [27] behave very similarly for most instances, with some exceptions: for hexagon-based
instances, FrogSim is not quite able to match the average results obtained by the algorithm
from [27]. Moreover, concerning icosahedron.col, the best solution found by FrogSim uses one
color more than the best one by Lee’s algorithm. On the other hand, concerning 4-partite-
4-diff-sizes.col and dodecahedron.col, FrogSim improves over the average results obtained by
Lee’s algorithm.
As shown in Table 5, the three algorithms achieve equal results in 7 out of 14 cases. Only
in one case (1hexagon-tess.col) is Finocchi slightly better than the FrogSim algorithms, due
to the fact that it achieves an optimal coloring in all 100 applications. In the remaining cases
both FrogSim− and FrogSim obtain better results than Finocchi. Moreover, it is remarkable
that both FrogSim− and FrogSim obtain optimal solutions for 12 of the 14 instances. The
difference between FrogSim− and FrogSim is again to be found in the fact that FrogSim is
more robust, which is indicated by a better average solution quality.
Figure 11 graphically summarizes the results as in the previous subsections. Again this
graphical way of presenting the results helps to show the improvement of FrogSim over
FrogSim− in terms of the average solution quality.
Table 4: Results for grid (respectively, torus) topologies.
Instance (n, ∆, χ) | Finocchi: colors, avg. | FrogSim−: colors, avg. | FrogSim: colors, avg.
grid2x1 (2,1,2) 2 2.000 2 2.000 2 2.000
grid2x2 (4,2,2) 2 2.000 2 2.000 2 2.000
grid3x1 (3,2,2) 2 2.000 2 2.000 2 2.000
grid3x2 (6,3,2) 2 2.140 2 2.000 2 2.000
grid3x3 (9,4,2) 2 2.360 2 2.376 2 2.000
grid4x1 (4,2,2) 2 2.250 2 2.000 2 2.000
grid4x2 (8,3,2) 2 2.570 2 2.000 2 2.000
grid4x3 (12,4,2) 2 3.280 2 2.465 2 2.000
grid4x4 (16,4,2) 2 3.180 2 2.465 2 2.000
grid5x1 (5,2,2) 2 2.450 2 2.000 2 2.000
grid5x2 (10,3,2) 2 2.600 2 2.238 2 2.000
grid5x3 (15,4,2) 2 2.420 2 2.238 2 2.000
grid5x4 (20,4,2) 2 3.350 2 2.515 2 2.000
grid5x5 (25,4,2) 2 3.470 2 2.683 2 2.000
grid6x1 (6,2,2) 2 2.870 2 2.000 2 2.000
grid6x2 (12,3,2) 2 2.740 2 2.535 2 2.000
grid6x3 (18,4,2) 2 3.230 2 2.426 2 2.000
grid6x4 (24,4,2) 3 3.050 2 2.980 2 2.000
grid6x5 (30,4,2) 4 4.000 2 2.931 2 2.000
grid6x6 (36,4,2) 3 3.860 2 3.069 2 2.000
grid7x1 (7,2,2) 2 2.300 2 2.000 2 2.000
grid7x2 (14,3,2) 3 3.260 2 2.455 2 2.000
grid7x3 (21,4,2) 3 3.530 2 2.584 2 2.000
grid7x4 (28,4,2) 3 3.750 2 3.050 2 2.000
grid7x5 (35,4,2) 4 4.000 2 3.366 2 2.000
grid7x6 (42,4,2) 4 4.230 3 3.851 2 2.000
grid7x7 (49,4,2) 3 3.930 4 4.000 2 2.000
grid8x1 (8,2,2) 2 2.500 2 2.000 2 2.000
grid8x2 (16,3,2) 2 2.620 2 2.158 2 2.000
grid8x3 (24,4,2) 3 3.730 2 3.168 2 2.000
grid8x4 (32,4,2) 2 3.570 2 3.465 2 2.000
grid8x5 (40,4,2) 3 3.800 2 3.356 2 2.000
grid8x6 (48,4,2) 4 4.000 3 3.673 2 2.000
grid8x7 (56,4,2) 4 4.000 2 3.396 2 2.000
grid8x8 (64,4,2) 4 4.130 3 3.782 2 2.000
grid9x1 (9,2,2) 2 2.630 2 2.396 2 2.000
grid9x2 (18,3,2) 2 3.590 2 2.485 2 2.000
grid9x3 (27,4,2) 3 3.860 2 3.030 2 2.000
grid9x4 (36,4,2) 4 4.000 2 3.149 2 2.000
grid9x5 (45,4,2) 4 4.000 3 3.465 2 2.000
grid9x6 (54,4,2) 4 4.010 2 3.307 2 2.000
grid9x7 (63,4,2) 4 4.000 3 3.822 2 2.000
grid9x8 (72,4,2) 4 4.000 3 3.901 2 2.000
grid9x9 (81,4,2) 4 4.000 3 3.762 2 2.000
Ising32x8.col (256,4,2) 4 4.000 4 4.000 2 2.000
Ising32x8-torus.col (256,4,2) 4 4.000 4 4.000 2 2.000
average 2.804 3.288 2.283 2.838 2.000 2.000
# times better 0 0 36
# times all equal 3 3 3
# times worse 39 2 0
Figure 10: Summary of results for grid and torus topologies. Both graphics show the perfor-
mance improvement of the two FrogSim variants (light and dark gray bars) over Finocchi
(in percent). The instances of Table 4 are treated from left to right in the same order. The top
graphic concerns the best colorings found, whereas the bottom graphic concerns the average
solution quality.
the studied instances. The benchmark set that we chose for comparison includes random
geometric graphs, most of the graphs of the DIMACS challenge, and grid graphs. Apart from
the favorable results, the proposed algorithm comes with some additional benefits. It is possible,
for example, to adjust the speed of convergence depending on the time the user is willing to spend
on the algorithm. Moreover, the number of communication rounds required is comparable to
that of other algorithms that provide high-quality solutions. Finally, our algorithm
already provides a valid coloring in the very first communication round.
With regard to future work, we consider the use of the proposed algorithm for time
division multiplexing (TDM), a mechanism for collision-free communication in wireless
networks that is strongly related to graph coloring. Moreover, due to its adaptive nature, our
algorithm might also be of interest for mobile networks, or for any other dynamically changing
network; nodes appearing or disappearing at certain points in time is nothing unusual in wireless
ad hoc networks. A simple sketch of how a valid coloring translates into a TDM schedule is given below.
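As a hint of how a valid coloring would be turned into a TDM schedule, the following sketch (our illustration, not part of FrogSim) assigns each node the time slot given by its color in a coloring of the conflict graph, so that two nodes sharing an edge never transmit in the same slot; the frame length equals the number of colors used.

# Minimal sketch: turning a valid coloring of a conflict graph into a TDM schedule.
# Nodes that share an edge in the conflict graph never get the same slot, so they
# never transmit simultaneously. This is our illustration only.

from typing import Dict, Iterable, List, Tuple

def tdm_schedule(coloring: Dict[str, int]) -> Tuple[int, Dict[int, List[str]]]:
    """Map colors 1..k to time slots 0..k-1; returns (frame_length, slot -> nodes)."""
    frame_length = max(coloring.values())
    slots: Dict[int, List[str]] = {s: [] for s in range(frame_length)}
    for node, color in coloring.items():
        slots[color - 1].append(node)
    return frame_length, slots

def is_collision_free(slots: Dict[int, List[str]],
                      conflicts: Iterable[Tuple[str, str]]) -> bool:
    """Check that no two conflicting nodes were placed in the same slot."""
    slot_of = {node: s for s, nodes in slots.items() for node in nodes}
    return all(slot_of[u] != slot_of[v] for u, v in conflicts)

if __name__ == "__main__":
    # A small conflict graph (a path a-b-c-d) with a valid 2-coloring.
    conflicts = [("a", "b"), ("b", "c"), ("c", "d")]
    coloring = {"a": 1, "b": 2, "c": 1, "d": 2}
    frame_length, slots = tdm_schedule(coloring)
    print(frame_length, slots)        # 2, {0: ['a', 'c'], 1: ['b', 'd']}
    assert is_collision_free(slots, conflicts)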
Table 5: Results of the algorithms on instances from the article [27].

Figure 11: Summary of results for the small graphs from [27]. Both graphics show the
performance improvement of the two FrogSim variants (light and dark gray bars) over
Finocchi (in percent). The instances of Table 5 are treated from left to right in the same order.
The top graphic concerns the best colorings found, whereas the bottom graphic concerns the
average solution quality.

Acknowledgment
This work was supported by grant TIN2007-66523 (FORMALISM) of the Spanish government,
and by the EU project FRONTS (FP7-ICT-2007-1). In addition, C. Blum acknowledges
support from the Ramón y Cajal program of the Spanish Government, and H. Hernández
acknowledges support from the Comissionat per a Universitats i Recerca del Departament
d’Innovació, Universitats i Empresa de la Generalitat de Catalunya and from the European
Social Fund.
References
[1] I. Aihara, H. Kitahata, K. Yoshikawa, and K. Aihara. Mathematical modeling of frogs’
calling behavior and its possible application to artificial life and robotics. Artificial Life
and Robotics, 12(1):29–32, 2008.
[2] I. Aihara. Modeling synchronized calling behavior of Japanese tree frogs. Physical
Review E, 80(1):11–18, 2009.
[3] B. Awerbuch. Optimal distributed algorithms for minimum weight spanning tree, count-
ing, leader election, and related problems. In A. V. Aho, editor, Proceedings of STOC
87 – The 19th Annual ACM Symposium on Theory of Computing, pages 230–240, New
York, NY, USA, 1987. ACM.
[4] I. Blöchliger and N. Zufferey. A graph coloring heuristic using partial solutions and a
reactive tabu scheme. Computers & Operations Research, 35(3):960–975, 2008.
[5] C. Blum and D. Merkle, editors. Swarm Intelligence: Introduction and Applications.
Natural Computing. Springer Verlag, Berlin, Germany, 2008.
[7] E. Bonabeau, M. Dorigo, and G. Theraulaz. Swarm Intelligence: From Natural to Arti-
ficial Systems. Oxford University Press, New York, NY, 1999.
[8] M. Bui, F. Butelle, and C. Lavault. A distributed algorithm for constructing a minimum
diameter spanning tree. Journal of Parallel and Distributed Computing, 64(5):571–577,
2004.
[9] M. Cardei, E. D. MacCallum, and X. Cheng. Wireless sensor networks with energy
efficient organization. Journal of Interconnection Networks, 3(4):213–229, 2002.
[11] M. Dorigo and T. Stützle. Ant Colony Optimization. MIT Press, 2004.
[12] M. Elkin. A faster distributed protocol for constructing a minimum spanning tree. Jour-
nal of Computer and System Sciences, 72(8):1282–1308, 2006.
[14] Center for Discrete Mathematics and Theoretical Computer Science. DIMACS implemen-
tation challenges, 2006.
[15] P. Fraigniaud, C. Gavoille, D. Ilcinkas, and A. Pelc. Distributed computing with advice:
Information sensitivity of graph coloring. Distributed Computing, 21(6):395–403, 2009.
[16] R.G. Gallager, P.A. Humblet, and P.M. Spira. A distributed algorithm for minimum-
weight spanning trees. ACM Transactions on Programming Languages and Systems
(TOPLAS), 5(1):66–77, 1983.
[17] J.A. Garay, S. Kutten, and D. Peleg. A sublinear time distributed algorithm for
minimum-weight spanning trees. SIAM Journal on Computing, 27(1):302–316, 1998.
[18] C. Guo, L. C. Zhong, and J.M. Rabaey. Low power distributed MAC for ad hoc sensor ra-
dio networks. In IEEE GLOBECOM ’01 – IEEE Global Telecommunications Conference,
volume 5, pages 2944–2948, 2001.
[20] T. Herman and S. Tixeuil. A distributed TDMA slot assignment algorithm for wireless
sensor networks. In S. Nikoletseas and J. D. P. Rolim, editors, ALGOSENSORS 2004
– Proceedings of 1st International Workshop on Algorithmic Aspects of Wireless Sensor
Networks, pages 45–58. Springer, 2004.
[21] A. Hertz, M. Plumettaz, and N. Zufferey. Variable space search for graph coloring.
Discrete Applied Mathematics, 156(13):2551–2560, 2008.
[22] R.M. Karp. Reducibility among combinatorial problems. In Proceedings of the Symposium
on Complexity of Computer Computations, pages 85–103, 1972.
[23] J. Kennedy and R. Eberhart. Particle swarm optimization. In Proceedings of the IEEE
International Conference on Neural Networks, volume 4, pages 1942–1948. IEEE Press,
1995.
[25] A. Kosowski and L. Kuszner. On greedy graph coloring in the distributed model. In
W. Nagel, W. Walter, and W. Lehner, editors, Euro-Par 2006 Parallel
Processing – Proceedings of the 12th International Euro-Par Conference, pages 592–601.
Springer Berlin / Heidelberg, 2006.
[26] F. Kuhn and R. Wattenhofer. On the complexity of distributed graph coloring. In PODC
2006 – Proceedings of the 25th Annual ACM Symposium on Principles of Distributed
Computing, page 15. ACM, 2006.
[27] S. A. Lee. Firefly inspired distributed graph coloring algorithms. In H. R. Arabnia
and Y. Mun, editors, Proceedings of PDPTA 2008 – International Conference on
Parallel and Distributed Processing Techniques and Applications, pages 211–217. CSREA
Press, 2008.
[29] S. A. Lee and R. Lister. Experiments in the dynamics of phase coupled oscillators when
applied to graph coloring. In ACSC 2008 – Proceedings of the Thirty-First Australasian
Conference on Computer Science, pages 83–89. Australian Computer Society, Inc., 2008.
[30] G. Lu, N. Sadagopan, B. Krishnamachari, and A. Goel. Delay efficient sleep scheduling in
wireless sensor networks. In K. Makki and E. Knightly, editors, IEEE INFOCOM
2005 – Proceedings of the 24th International Conference on Computer Communications,
pages 2470–2481. IEEE, 2005.
[31] Z. Lü and J.K. Hao. A memetic algorithm for graph coloring. European Journal of
Operational Research, 203(1):241–250, 2010.
[33] E. Malaguti and P. Toth. A survey on vertex coloring problems. International Transac-
tions in Operational Research, 17(1):1–34, 2010.
[36] P. Panagopoulou and P. Spirakis. A game theoretic approach for efficient graph coloring.
In S.-H. Hong, H. Nagamochi, and T. Fukunaga, editors, ISAAC 2008 –
19th International Symposium on Algorithms and Computation, pages 183–195. Springer
Berlin / Heidelberg, 2008.
[37] K.D. Wells. The social behaviour of anuran amphibians. Animal Behaviour, 25:666–693,
1977.
[38] W. Zhang, G. Wang, Z. Xing, and L. Wittenburg. Distributed stochastic search and
distributed breakout: properties, comparison and applications to constraint optimization
problems in sensor networks. Artificial Intelligence, 161(1-2):55–87, 2005.