
European Journal of Operational Research 127 (2000) 317–331

www.elsevier.com/locate/dsw

The disjunctive graph machine representation of the job shop scheduling problem

Jacek Błażewicz a,*, Erwin Pesch b,1, Małgorzata Sterna a,2

a Institute of Computing Science, Poznań University of Technology, Piotrowo 3A, 60-965 Poznań, Poland
b Institute of Economics and Business Administration, BWL 3, University of Bonn, Adenauerallee 24-42, 53113 Bonn, Germany

Abstract

The disjunctive graph is one of the most popular models used for describing instances of the job shop scheduling problem, which has been explored very intensively. In this paper, a new time and memory efficient representation of the disjunctive graph is proposed. It has the form of a graph matrix and combines advantages of a few classical graph representations, enabling easy manipulation of the problem data. Computational experiments have proved the higher efficiency of the proposed approach over the classical ones. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Disjunctive graph; Graph representations; Graph matrix; Scheduling theory

1. Introduction

The disjunctive graph proposed by Roy and Sussmann [9] is one of the most popular models used for describing instances of the job shop scheduling problem, which is actually one of the most intensively explored scheduling problems (cf. [1,3–5,7]).
The job shop scheduling problem (cf. [1,3]) can be formulated, in general, as a problem of processing a set of jobs J = {J1, ..., Jj, ..., JN} on a set of machines M = {M1, ..., Mk, ..., MM}. Each job is an ordered subset of a set of tasks T = {T1, ..., Ti, ..., Tn} for which precedence constraints are defined. If there is a precedence constraint between tasks Ti and Tj (i.e. Ti → Tj), then Ti has to be finished before Tj can start. Tasks Ti of a job have to be performed in the predefined order by specified machines M(Ti) = Mk within pi time units. Each machine can process only one job at the same time and each job can be processed by only one machine at the same time.

* Corresponding author. Tel.: +48-61-8790790; fax: +48-61-8771525.
E-mail addresses: [email protected] (J. Błażewicz), [email protected] (E. Pesch), [email protected] (M. Sterna).
1 Fax: +49-22-8735048.
2 Tel.: +48-61-8528503 ext. 278.


The task sequences on the machines are unknown and have to be determined in order to minimise a given optimality criterion, such as the schedule length. The assignment of tasks to machines must be feasible, i.e. it must not violate any problem constraint.
In this paper, a new time and memory efficient representation of the disjunctive graph is proposed. It has the form of a graph matrix G = [gij] of size (n+2)×(n+2) and combines advantages of a few classical graph representations, enabling easy manipulation of the problem data.
In Section 2, the disjunctive graph model is presented in detail and a short recollection of the classical graph representations is given. In Section 3, the new data structure representing the disjunctive graph, the graph matrix, is presented together with appropriate examples. Section 4 contains a complexity analysis and a theoretical comparison of the graph matrix efficiency with the classical representations. The results of computational experiments are discussed in Section 5 and the paper finishes with the conclusions given in Section 6.

2. The disjunctive graph model

The disjunctive graph [9] is a directed graph G = (V, C ∪ D). V denotes the set of vertices corresponding to tasks of jobs. This set contains two additional vertices: a source and a sink, which represent the start and the end of a schedule. The source is equivalent to a dummy task T0 preceding all other tasks, and the sink to a dummy task T(n+1) succeeding all other tasks. Both dummy tasks have zero processing time. C is a set of conjunctive arcs which reflect the precedence constraints, initially connecting every two consecutive tasks of the same job. Undirected disjunctive edges belonging to set D connect mutually unordered tasks which require the same machine for their execution (a disjunctive edge can be represented by two opposite directed arcs). Each arc is labelled with a positive weight equal to the processing time of the task where the arc begins.
The job shop scheduling problem requires finding an optimal order of all tasks on the machines, resulting in a schedule with the minimal length. In the disjunctive graph model, this is equivalent to selecting one arc in each disjunction, i.e. to turning each undirected disjunctive edge into a directed conjunctive one. By fixing the directions of all disjunctive edges, the execution order of all conflicting tasks requiring the same machine is determined and a complete schedule is obtained. Moreover, the resulting graph has to be acyclic and, if the schedule is optimal, the length of the longest path from the source to the sink is minimal. This longest path determines the makespan, i.e. the duration of the longest chain of tasks in the job shop.
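To make the last observation concrete, the following fragment is a minimal sketch (ours, not part of the original paper) of computing the makespan of a complete selection as the longest source-to-sink path. It assumes the acyclic conjunctive graph is stored as plain successor adjacency lists, that the vertices are already indexed in a topological order, and that arc (Ti, Tj) carries the weight pi, as described above; all names are illustrative.

/* A minimal sketch (not from the paper): makespan of a complete selection
 * computed as the longest T0 -> T(n+1) path in the acyclic conjunctive
 * graph. Vertices 0..n+1 are assumed to be indexed in a topological
 * order, so a single left-to-right sweep suffices. */
#define MAXV 64                  /* enough room for n + 2 vertices        */

int nsucc[MAXV];                 /* number of successors of each vertex   */
int succ[MAXV][MAXV];            /* successor adjacency lists (assumed)   */
int p[MAXV];                     /* processing times, p[0] = p[n+1] = 0   */
int head[MAXV];                  /* longest path length from T0 to Ti     */

int makespan(int n)
{
    for (int i = 0; i <= n + 1; i++)
        head[i] = 0;
    for (int i = 0; i <= n + 1; i++)
        for (int s = 0; s < nsucc[i]; s++) {
            int j = succ[i][s];
            if (head[i] + p[i] > head[j])
                head[j] = head[i] + p[i];   /* relax arc (Ti, Tj) of weight p[i] */
        }
    return head[n + 1];                     /* length of the longest chain of tasks */
}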

Example 1. A job shop consists of a set of three machines M = {M1, M2, M3} and a set of three jobs J = {J1, J2, J3} which are described by the following chains of tasks: J1: T1 → T2 → T3; J2: T4 → T5; J3: T6 → T7 → T8. For each task Ti, the required machine M(Ti) and the processing time pi are presented in Table 1.
The corresponding disjunctive graph for the given instance is presented in Fig. 1 (weights of disjunctive edges are omitted for simplicity).

Table 1
Task requirements

Ti       T1   T2   T3   T4   T5   T6   T7   T8
M(Ti)    M1   M2   M3   M3   M2   M2   M1   M3
pi       3    2    3    3    4    6    3    2

Fig. 1. A disjunctive graph example.

The disjunctive graph contains all information needed to describe a partial or complete solution of the job shop scheduling problem, so its proper representation significantly influences the efficiency of an algorithm solving the problem.
The disjunctive graph, as every graph, can be represented in the form of classical data structures [6] such as a neighbourhood matrix, predecessor lists or successor lists. The neighbourhood matrix is a square matrix in which the entry of indices i, j stores the value 1 if there is an arc from vertex i to vertex j; otherwise it is equal to 0. The predecessor (successor) list stores for each task the list of tasks preceding (succeeding) it.
The graph matrix is a specialised representation of a disjunctive graph. It combines the three classical graph representations mentioned above: a neighbourhood matrix, predecessor lists (Predecessor lists) and successor lists (Successor lists), keeping their advantages. Moreover, it allows one to store information on the lists of tasks for which no precedence relations have been disclosed during the solution process (Unknown lists). Using this square matrix of size (n+2)×(n+2), it is possible to obtain information on the mutual order of any pair of tasks in constant time (by checking the value of a single matrix entry) and to get access to the three mentioned lists for each task: the Predecessor, Successor and Unknown lists. These lists reflect mutual relations among vertices in a disjunctive graph, i.e. among tasks in a job shop, according to the implemented algorithm. Thus, successor (predecessor) lists can contain only immediate successors (predecessors) of a task, or all its successors (predecessors) detected as a result of a transitive closure computation. The graph matrix is a flexible machine representation containing only those arcs that are important for a given strategy of solving the job shop scheduling problem. The arcs which are introduced into and those which are removed from the graph matrix depend on the particular implemented procedures managing this data structure.

3. The graph matrix

The graph matrix G = [gij] of size (n+2)×(n+2) represents a disjunctive graph. A particular entry gij takes values from the range -n ≤ gij ≤ 2n. The value of entry gij, where 1 ≤ i, j ≤ n, provides information on the order of any two tasks Ti, Tj in a job shop, i.e. any two vertices in the graph:
-n ≤ gij < 0  ⇔  Tj ∈ Unknown(Ti);
 0 ≤ gij ≤ n  ⇔  Tj ∈ Predecessor(Ti);
 n < gij ≤ 2n ⇔  Tj ∈ Successor(Ti).
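As an illustration only (the function and parameter names below are our assumptions, not part of the paper), the constant-time order test follows directly from these three value ranges:

/* A minimal sketch of the constant-time order test for 1 <= i, j <= n,
 * i != j; g is the (n+2)x(n+2) graph matrix. */
enum relation { UNKNOWN, PREDECESSOR, SUCCESSOR };

enum relation relation_of(int **g, int n, int i, int j)
{
    if (g[i][j] < 0)        /* -n <= gij <  0 : order not yet fixed */
        return UNKNOWN;
    if (g[i][j] <= n)       /*  0 <= gij <= n : Tj precedes Ti      */
        return PREDECESSOR;
    return SUCCESSOR;       /*  n <  gij <= 2n: Tj succeeds Ti      */
}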

Because of the general assumption that a dummy start task T0 always precedes all other tasks and a dummy end task T(n+1) always succeeds all other tasks in a job shop, and moreover there is no self loop for a particular task, the entries gi0, g0i, gi(n+1), g(n+1)i and gii can be used to store additional information.
Entry gi0 (g0i) contains the index of the first (last) element of the predecessor list for task Ti, and entry gi(n+1) (g(n+1)i) contains the index of the first (last) element of the successor list for task Ti. All list operations, such as browsing, adding and removing elements, are performed as in a classical list structure. Enlarging the successor and predecessor lists is additionally accelerated because their ends are immediately accessible. This feature is especially useful for those approaches to the job shop scheduling problem which enlarge a partial solution to a complete one. The order of elements on the particular lists is not important, because they do not reflect mutual relations among the predecessors or successors of a task but only record which tasks follow and which ones precede the considered one. However, these lists could be sorted if it is important for an implemented algorithm.
Entry gii stores the index of the first element of a list containing tasks which belong neither to Ti's predecessor list nor to Ti's successor list (i.e. they belong to Unknown(Ti)). Other entries gij of the graph matrix have a double meaning. As has already been mentioned, they describe the mutual relation between tasks Ti, Tj and, in addition, they are elements of one of the three lists: Unknown(Ti), Successor(Ti) or Predecessor(Ti).
The basic idea of the graph matrix can be easily adjusted to the particular requirements of an implementation, and different variants of it can be proposed.
The basic graph matrix includes the information on all arcs in a disjunctive graph, i.e. on all relations between tasks, which have been obtained as a result of a transitive closure computation. This means that arcs between non-conflicting tasks, belonging to different jobs and requiring different machines for their execution, are also kept.
In this case, for each task Ti the predecessor list is organised according to the following rules (where i, j, k, f, l are integers and 1 ≤ i, j, k, f, l ≤ n):
(gi0 = 0) ∧ (g0i = 0) ⇔ Predecessor(Ti) = ∅;
(gi0 = f) ⇔ Tf is the first element of the list Predecessor(Ti);
(g0i = l) ⇔ Tl is the last element of the list Predecessor(Ti) and gil = l;
(gij = k) ∧ (k ≠ j) ⇔ Tk is the next element of the list Predecessor(Ti) following element Tj on this list.
The successor list is defined in a similar way:
(gi(n+1) = 0) ∧ (g(n+1)i = 0) ⇔ Successor(Ti) = ∅;
(gi(n+1) = f) ⇔ Tf is the first element of the list Successor(Ti);
(g(n+1)i = l) ⇔ Tl is the last element of the list Successor(Ti) and gil = n + l;
(gij = n + k) ∧ (k ≠ j) ⇔ Tk is the next element of the list Successor(Ti) following element Tj on this list.
The remaining tasks belong to the list Unknown(Ti), which is organised as follows:
(gii = -i) ⇔ Unknown(Ti) = ∅;
(gii = -f) ⇔ Tf is the first element of the list Unknown(Ti);
(gil = -l) ⇔ Tl is the last element of the list Unknown(Ti);
(gij = -k) ∧ (k ≠ j) ⇔ Tk is the next element of the list Unknown(Ti) following element Tj on this list.
For a complete matrix determination, the values of the four entries g00, g(n+1)(n+1), g(n+1)0 and g0(n+1) have to be defined. They could be set to 0 or used to store some other global values in order to increase memory utilisation.
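A short sketch (our illustration, not the authors' code; all names are assumptions) shows how these rules are followed when browsing Predecessor(Ti) in the basic variant: start from gi0 and follow the stored indices until an entry points at itself.

#include <stdio.h>

/* Browse Predecessor(Ti) in the basic variant: gi0 holds the index of the
 * first predecessor, each entry gij holds the index of the next element,
 * and the last element Tl satisfies gil = l. */
void print_predecessors(int **g, int i)
{
    int j = g[i][0];            /* first element, 0 if the list is empty */
    while (j != 0) {
        printf("T%d ", j);
        if (g[i][j] == j)       /* the last element points at itself     */
            break;
        j = g[i][j];            /* index of the next predecessor         */
    }
    printf("\n");
}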
For many applications, the information on the numbers of predecessors (denoted for task Ti by NPi) and successors (denoted for task Ti by NSi) being in conflict is important, i.e. the information on the numbers of tasks requiring the same machine for their execution as a chosen task Ti and preceding or succeeding it. A slight modification of the basic approach makes it possible to use the matrix entries connected with the last elements of the predecessor and successor lists for storing the values NPi, NSi for each task Ti. In this case, a graph matrix is correct only if M ≤ n. This condition is always true in job shop definitions, because if there are more machines than tasks for execution then some machines are idle. The proposed modification makes it necessary to change slightly the specifications of the predecessor and successor lists. Actually, the last elements of the mentioned lists can be used for storing other information describing a task, not necessarily the numbers of its predecessors and successors. In this way, a graph matrix can be adjusted to the particular requirements of an implemented algorithm.
After the introduction of counters for each task Ti, the predecessor list is organised according to the following modified rules:
(gi0 = 0) ∧ (g0i = 0) ⇔ Predecessor(Ti) = ∅ and NPi = 0;
(gi0 = f) ⇔ Tf is the first element of the list Predecessor(Ti);
(g0i = l) ⇔ Tl is the last element of the list Predecessor(Ti) and the entry gil can be used for storing NPi, i.e. gil = NPi;
(gij = k) ∧ (j ≠ l) ⇔ Tk is the next element of the list Predecessor(Ti) following element Tj on this list.
The successor list is defined according to the relations given below:
(gi(n+1) = 0) ∧ (g(n+1)i = 0) ⇔ Successor(Ti) = ∅ and NSi = 0;
(gi(n+1) = f) ⇔ Tf is the first element of the list Successor(Ti);
(g(n+1)i = l) ⇔ Tl is the last element of the list Successor(Ti) and the entry gil can be used for storing NSi, i.e. gil = n + NSi;
(gij = n + k) ∧ (j ≠ l) ⇔ Tk is the next element of the list Successor(Ti) following element Tj on this list.
Other entries of the graph matrix are defined as previously.
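Under these modified rules the counters can be read back in constant time. The following fragment is only a sketch with assumed names:

/* Read NPi in the counter variant: g0i holds the index l of the last
 * element of Predecessor(Ti) and the entry gil then stores NPi itself. */
int conflicting_predecessors(int **g, int i)
{
    int l = g[0][i];
    if (l == 0)             /* empty list: NPi = 0 */
        return 0;
    return g[i][l];         /* gil = NPi           */
}

/* Read NSi analogously: g(n+1)i gives the last successor l and gil = n + NSi. */
int conflicting_successors(int **g, int n, int i)
{
    int l = g[n + 1][i];
    if (l == 0)             /* empty list: NSi = 0 */
        return 0;
    return g[i][l] - n;     /* gil = n + NSi       */
}

With the data of Example 1 (cont.), discussed below, the first call returns NP7 = 1 for task T7, since g07 = 1 and g71 = NP7.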
To concentrate on the relations (arcs) between tasks requiring the same machine for their execution, the lists can be restricted only to tasks being in conflict. Such an approach may be very convenient from the implementation's point of view. If Ti and Tj are performed by different machines, i.e. M(Ti) ≠ M(Tj), they do not belong to any list, although the information on their mutual order is still kept in matrix entry gij (and gji as well) in the following way:
gij = 2n ⇔ Tj ∈ Successor(Ti);
gij = 0  ⇔ Tj ∈ Predecessor(Ti);
gij = -n ⇔ Tj ∈ Unknown(Ti).
All lists are organised in the same way as previously, but only tasks being in conflict can be their elements (the lists may also be enlarged with tasks belonging to the same job, if it is useful for an implementation). By limiting the lists for each task, the time necessary for browsing the sets of predecessors, successors or not yet ordered competitors requiring the same machine can be shortened. Note that these operations are usually the most frequently executed ones in any algorithm solving the job shop scheduling problem. Moreover, a graph matrix does not need to store information on all transitive arcs, as has been presented so far. The decision which arcs will be introduced into a disjunctive graph and into a graph matrix depends only on the algorithm used. At the beginning of the search process, the successor and predecessor lists for each task Ti are empty and all other tasks form the list Unknown(Ti). Then, all arcs defining the task sequences of jobs should be added into the graph, causing proper modifications of the graph matrix in order to achieve the initial description of the job shop. Further arcs are then introduced by an algorithm according to the chosen graph matrix variant and the strategy used for solving the job shop scheduling problem.
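A possible initialisation of the basic variant before any arc has been introduced can be sketched as follows (our own naming assumptions): every Predecessor and Successor list starts empty and Unknown(Ti) simply chains all the remaining tasks in increasing index order.

/* Initialise the basic graph matrix for n real tasks: all Predecessor and
 * Successor lists are empty and Unknown(Ti) chains every other task in
 * increasing index order (first element in gii, next elements in gij,
 * last element Tl with gil = -l). Job arcs are added afterwards by the
 * updating routine. */
void init_graph_matrix(int **g, int n)
{
    for (int i = 1; i <= n; i++) {
        g[i][0] = g[0][i] = 0;              /* Predecessor(Ti) empty    */
        g[i][n + 1] = g[n + 1][i] = 0;      /* Successor(Ti) empty      */

        int prev = 0;                       /* previous Unknown element */
        for (int j = 1; j <= n; j++) {
            if (j == i)
                continue;                   /* no self loop             */
            if (prev == 0)
                g[i][i] = -j;               /* first element of Unknown */
            else
                g[i][prev] = -j;            /* Tj follows Tprev         */
            prev = j;
        }
        if (prev == 0)
            g[i][i] = -i;                   /* n = 1: Unknown(Ti) empty */
        else
            g[i][prev] = -prev;             /* last element: gil = -l   */
    }
    g[0][0] = g[0][n + 1] = g[n + 1][0] = g[n + 1][n + 1] = 0;
}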

Example 1 (cont.). The disjunctive graph corresponding to a partial solution of the job shop scheduling
problem given in Fig. 1 is presented in Fig. 2.
The basic graph matrix representing the given job shop (n = 8) is formulated in Fig. 3. For example, for task T2, g20 and g02 are equal to 1, so task T1 is the only element of T2's predecessor list. Task T3 is the first successor of task T2 because g29 = 3. Then the list contains one more task, T5 (g23 = 13 = 8 + 5). It is the last element because g25 = 13 = 8 + 5 and also g92 = 5. List Unknown(T2) contains as its first element task T4 (g22 = -4) and then tasks T6 (g24 = -6), T7 (g26 = -7) and T8 (g27 = -8) as the last one (g28 = -8).

Fig. 2. Disjunctive graph for a partial schedule.

Fig. 3. Basic graph matrix.

Summing up, one gets:

Predecessor(T2) = {T1};
Successor(T2) = {T3, T5};
Unknown(T2) = {T4, T6, T7, T8}.

In the same way, one can define the three lists for the other tasks without losing information on the relation between any pair of them. For example:
T2 → T5 because g25 = 13 and n = 8 < g25 ≤ 2n = 16;
T1 → T7 because g71 = 1 and 0 ≤ g71 ≤ n = 8;
and finally the order of tasks T3, T6 is unknown because g36 = -7 and -8 = -n ≤ g36 < 0.
The graph matrix with counters is shown in Fig. 4. In this case, for example for task T7, g70 is equal to 6, so task T6 starts the list of T7's predecessors. Entry g76 = 1 implies that task T1 also belongs to the list and, because g07 = 1, T1 is its last element. In this situation g71 = NP7 = 1, so T7 has one predecessor on its machine, that is T1. Task T7 has only one successor, T8, because g79 = g97 = 8. Moreover, because g78 = 8 + 0, NS7 = 0, so T7 has no successor on its machine. List Unknown(T7) contains task T2 as the first element (g77 = -2) and then tasks T3 (g72 = -3), T4 (g73 = -4) and T5 (g74 = -5) as the last one (g75 = -5).

Fig. 4. Graph matrix with counters.

Summing up, one gets:

Predecessor(T7) = {T1, T6}; NP7 = 1;
Successor(T7) = {T8}; NS7 = 0;
Unknown(T7) = {T2, T3, T4, T5}.

In addition, task T7 is completely scheduled, because NP7 + NS7 = 1 + 0 = 1 and there are only two tasks requiring machine M1.

4. Graph matrix updating rules and complexity analysis

We see that the graph matrix is a convenient representation of a disjunctive graph, delivering combined information on a job shop in four different ways: as a classical neighbourhood matrix and as lists of predecessors, successors and tasks of unknown execution order. Moreover, this machine representation can be easily created as well as updated during the search for a solution. Actually, to manage a graph matrix, it is sufficient to formulate an updating procedure which modifies the graph matrix after an arc introduction or removal. Those processes require moving elements between the Unknown lists (their beginnings are immediately accessible) and the Predecessor or Successor lists (their beginnings and ends are immediately accessible) or, if the considered tasks are not mutually put on their lists, only modifying the proper matrix entries. Example 2 shows a matrix updating routine introducing a new arc into the graph matrix.

Example 2. Below is an example of an updating procedure that introduces a single new arc (Ti, Tj) into the basic variant of the graph matrix.

{removing Tj from the Unknown(Ti) list}
  if (gii = -j) then
      if (gij = -j) then gii := -i
      else gii := gij
  else
      k := -gii
      while (gik ≠ -j) do k := -gik
      if (gij = -j) then gik := -k
      else gik := gij

{adding Tj to the Successor(Ti) list}
  if (gi(n+1) = 0) then gi(n+1) := j
  if (g(n+1)i ≠ 0) then
      k := g(n+1)i
      gik := j + n
  gij := j + n
  g(n+1)i := j

{removing Ti from the Unknown(Tj) list}
  if (gjj = -i) then
      if (gji = -i) then gjj := -j
      else gjj := gji
  else
      k := -gjj
      while (gjk ≠ -i) do k := -gjk
      if (gji = -i) then gjk := -k
      else gjk := gji

{adding Ti to the Predecessor(Tj) list}
  if (gj0 = 0) then gj0 := i
  if (g0j ≠ 0) then
      k := g0j
      gjk := i
  gji := i
  g0j := i
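For illustration only, the same update can be transcribed into C for the basic variant; the pseudocode above remains the reference, and the function and matrix names here are our assumptions (the routine also assumes that Tj currently belongs to Unknown(Ti) and Ti to Unknown(Tj)).

/* Introduce arc (Ti, Tj) into the basic graph matrix: move Tj from
 * Unknown(Ti) to the end of Successor(Ti) and, symmetrically, move Ti
 * from Unknown(Tj) to the end of Predecessor(Tj). */
void add_arc(int **g, int n, int i, int j)
{
    int k;

    /* remove Tj from Unknown(Ti) */
    if (g[i][i] == -j) {
        g[i][i] = (g[i][j] == -j) ? -i : g[i][j];
    } else {
        k = -g[i][i];
        while (g[i][k] != -j)
            k = -g[i][k];
        g[i][k] = (g[i][j] == -j) ? -k : g[i][j];
    }
    /* add Tj at the end of Successor(Ti) */
    if (g[i][n + 1] == 0)
        g[i][n + 1] = j;            /* Tj becomes the first successor   */
    if (g[n + 1][i] != 0) {
        k = g[n + 1][i];            /* former last successor            */
        g[i][k] = j + n;            /* now points at Tj                 */
    }
    g[i][j] = j + n;                /* Tj is the new last element       */
    g[n + 1][i] = j;

    /* remove Ti from Unknown(Tj) */
    if (g[j][j] == -i) {
        g[j][j] = (g[j][i] == -i) ? -j : g[j][i];
    } else {
        k = -g[j][j];
        while (g[j][k] != -i)
            k = -g[j][k];
        g[j][k] = (g[j][i] == -i) ? -k : g[j][i];
    }
    /* add Ti at the end of Predecessor(Tj) */
    if (g[j][0] == 0)
        g[j][0] = i;                /* Ti becomes the first predecessor */
    if (g[0][j] != 0) {
        k = g[0][j];                /* former last predecessor          */
        g[j][k] = i;                /* now points at Ti                 */
    }
    g[j][i] = i;                    /* Ti is the new last element       */
    g[0][j] = i;
}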

Actually, the formulation of the updating procedure depends on the graph matrix variant used. There is a possibility to decide which tasks should be added to the lists associated with every task and which ones should remain beyond them (see Section 3). Moreover, some matrix entries can be used for storing additional information on a particular task and on a job shop instance in general (see Section 3). But in all cases, the graph matrix can be easily updated. In the procedure given in Example 2, introducing a single new arc, one has to perform a constant number of operations in order to update the predecessor and successor lists of the involved tasks. Only updating the Unknown lists is more complicated, because this operation has the complexity O(n), since all tasks may be unordered with respect to any task connected by a new arc. Enlarging the predecessor and successor lists does not require browsing them, because a new task is added to their ends, which are immediately accessible. In contrast, removing a task from an Unknown list requires finding this element on the list, which can be done only by browsing its elements. However, one can resign from the Unknown lists or restrict them only to tasks requiring the same machine as the considered one. Moreover, in many applications the introduction of a new arc into the graph requires determining a full or partial transitive closure by browsing the predecessor and successor sets of the involved tasks. This process also has the complexity O(n).
Now, we will analyse the memory complexity and the time complexity of browsing rules for different graph representations.
The graph matrix has the memory complexity O(n²), which is the same as for the classical neighbourhood matrix A (cf. Section 2). This graph representation for the example presented in Fig. 2 is given in Fig. 5; transitive arcs are taken into consideration to better illustrate the discussed issues.
For the graph matrix, the process of checking a mutual relation between any pair of tasks is performed in constant time, as for the classical matrix. Actually, this test requires at most five simple arithmetic operations necessary for checking all the doubled conditions for the gij value.

Fig. 5. Neighbourhood matrix.



Fig. 6. Predecessor lists.

Fig. 7. Successor lists.

In the case of the neighbourhood matrix, we have to check two entries aij and aji for tasks Ti, Tj in order to determine their mutual relation (i.e. to check whether either of those entries equals 1).
But the classical neighbourhood matrix makes it necessary to browse a whole matrix row (O(n)) in order to obtain all successors or predecessors of a given task. In the graph matrix, these successor and predecessor lists are immediately accessible without any additional transformations, similarly to classical predecessor and successor lists. These data structures, for the example presented previously, are given in Figs. 6 and 7. Browsing all successors and predecessors using a list structure also requires O(n) steps, but in the mean case the complexity is lower because vertices (strictly, arcs) are distributed among the lists of particular tasks.
Each of these classical list representations has the memory complexity O(n + m), where m denotes the number of arcs in the graph. But if only one type of lists is available, the predecessor lists for example, detecting the successors of a task requires browsing the lists of all tasks to find the given one.
In order to ensure the same access time to predecessors and successors of a task, creating both classical list representations is necessary, whereas the graph matrix contains both of them simultaneously.

Table 2
Complexity comparison for different graph representations (a)

Graph machine representation    Arc existence test    Predecessors/successors browsing    Memory usage
Graph matrix                    O(1)                  O(n) (b)                            O(n²)
Neighbourhood matrix            O(1)                  O(n)                                O(n²)
Successor/predecessor lists     O(n)                  O(n) (b)                            O(n + m)

(a) n – number of vertices, m – number of arcs.
(b) Much better behaviour in the mean case.

In addition, checking the mutual relation between a pair of tasks using classical predecessor or successor lists requires O(n) time. The graph matrix makes this operation possible in constant time (O(1)), as has already been mentioned. Moreover, to obtain information on tasks of unknown order, one needs to browse all the classical predecessor or successor lists. In the graph matrix, such tasks of unknown relation form the third group of lists, which are also immediately accessible.
The process of browsing the lists embedded in the graph matrix requires only computing the indices of their elements by performing 2 (for Successor lists), 1 (for Unknown lists) or even no (for Predecessor lists) additional arithmetical operations. Hence, managing these lists in the graph matrix is no more time consuming than managing classical graph representations of this type, because they are actually typical lists implemented in matrix form.
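The index arithmetic mentioned above can be sketched as follows for the basic variant (illustrative names, not the authors' code): the stored value itself is the next index on a Predecessor list, it is reduced by n on a Successor list and negated on an Unknown list.

#include <stdio.h>

/* Browse Successor(Ti): gi(n+1) gives the first index; a stored value
 * n + k encodes the next element Tk and the last element Tl stores n + l. */
void print_successors(int **g, int n, int i)
{
    int j = g[i][n + 1];
    while (j != 0) {
        printf("T%d ", j);
        int next = g[i][j] - n;     /* one extra subtraction per step  */
        if (next == j)              /* the last element encodes itself */
            break;
        j = next;
    }
    printf("\n");
}

/* Browse Unknown(Ti): -gii gives the first index; a stored value -k
 * encodes the next element Tk and the last element Tl stores -l. */
void print_unknown(int **g, int i)
{
    int j = -g[i][i];
    if (j == i)                     /* gii = -i : Unknown(Ti) is empty */
        return;
    for (;;) {
        printf("T%d ", j);
        int next = -g[i][j];        /* one negation per step */
        if (next == j)
            break;
        j = next;
    }
    printf("\n");
}

On the data of Example 1 (cont.), print_successors reproduces Successor(T2) = {T3, T5} by visiting g29 = 3 and g23 = 13 = 8 + 5.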
Summing up, the graph matrix gives immediate access to the information on the mutual relation between two tasks in constant time, as a neighbourhood matrix does, and to the three groups of lists, Predecessor, Successor and Unknown lists, which are easily browsed without any additional computational effort (cf. Table 2). It combines four data structures using only O(n²) memory units. In the next section, the above considerations will be complemented with a computational analysis.

5. Computational experiments

Computational experiments were performed for 30 different instances of the job shop scheduling problem (see Table 3) obtained from the OR-Library of the Imperial College Management School at the University of London (http://mscmga.ms.ic.ac.uk). Computations were executed on an SGI Power Challenge XL supercomputer at the Supercomputing and Networking Centre of Poznań. The implementation was written in ANSI C.
During the computational experiment, the graph matrix and the classical graph representations were compared. The graph and neighbourhood matrices were allocated statically, whereas the lists of predecessors and successors were implemented as dynamically allocated data structures, i.e. as list structures containing two-field records which store a task index and a pointer to the next element on a list. Obviously, the predecessor and successor lists can be implemented as table structures as well. In that case, each of them uses O(n²) memory entries independently of the graph density, and those two graph representations are equivalent to the lists embedded within the graph matrix.
Table 3
Job shop scheduling problem instances

Instance name     Authors
abz5, abz6        J. Adams, E. Balas, D. Zawack (1988)
ft06, ft10        H. Fisher, G.L. Thompson (1963)
la01–la20         S. Lawrence (1984)
orb01–orb06       D. Applegate, W. Cook (1991)

For this reason, we decided to test the dynamically allocated predecessor and successor lists, which are more exact implementations of these theoretical graph representations.
For a given instance of the job shop scheduling problem, four representations (the graph matrix, the neighbourhood matrix – NM, the predecessor lists – PL, the successor lists – SL) were created and initialised according to the job shop definition by introducing the relations among the tasks constituting particular jobs. Then a complete schedule was found by applying the list algorithm with the first-in-first-out priority dispatching rule [8], and every second disjunction fixed in this solution was introduced into the graph, causing proper updating of the four data structures including the transitive closure calculation. Introducing only half of the conjunctive arcs ensured that the lists of tasks of unknown execution order were not empty. During the next stage of the experiment, the sets of predecessors, successors and unordered tasks for each task were browsed based on the particular graph representations. Finally, the mutual relation between each pair of tasks was checked. These operations are actually the main operations performed on a data structure describing the disjunctive graph.
During the computational experiment, memory usage tests were performed which showed the superiority of the graph matrix and, generally, of the statically allocated data structures over the dynamically allocated lists. Analysing the size of the particular machine graph representations, it is sensible to divide them into two groups whose members have actually the same memory usage: the statically allocated matrices (the graph and neighbourhood matrices) and the dynamically allocated lists (the lists of predecessors and successors). Obviously, the size of data structures depends on the machine and the compiler, so to make the results more representative the tests were done on the SGI Power Challenge supercomputer and on a PC (Pentium 166) with two different C compilers. In the experiment performed on the SGI Power Challenge supercomputer, a single element of a matrix takes 4 bytes and a single element of a list requires 16 bytes, where as many as 12 bytes are used to store the address of the next element on the list. In this environment, the list structures were 83.81% greater than the matrices. The tests were repeated also on the PC, where an element of a matrix takes 2 bytes while a list's element requires 6 bytes. Even for this proportionally smaller size of the address cell, the list structures used 37.86% more memory space than the matrices. In the presented analysis, the maximum size of the lists was considered. At the beginning of each test, they contained only a few elements and then they were enlarged by introducing new arcs, while the matrices did not change their size during the experiment.
This comparison shows that the theoretically smaller lists of predecessors and successors (O(n + m), where n denotes the number of vertices and m the number of arcs) may not keep this advantage in real implementations. The necessity of storing the address of the next element made the list structures much greater than the matrices, which are theoretically less memory efficient (O(n²)). Thus, representing the graph in the form of a square matrix seems to be a good choice. Moreover, one has to remember that the dynamically allocated structures used in the experiments must be released before finishing the program, and this process should also be taken into account. Managing a list structure is also much more difficult from the implementation's point of view than managing a matrix.
Based on the test results, we compare the time of managing the different graph representations. The following figures present the execution time of procedures based on the classical graph representations in reference to the graph matrix.
Computational results show that the creation of the graph matrix lasts longer than for the other data structures (Fig. 8(a)). When building the initial graph matrix, three lists have to be created for each task by properly calculating the entries of the matrix: the lists of predecessors, successors and tasks of unknown execution order. In the case of the neighbourhood matrix, only one entry has to be fixed in order to introduce a single arc, because its entries are mutually independent, that is, they do not form any list structure. The dynamically allocated predecessor and successor lists are created in half of the time necessary for both matrices. In this case, only a few elements of the lists have to be created, representing the sequences of tasks in jobs. In the matrices, all entries have to be set to their initial values. However, the process of data structure initialisation is performed only once and does not require much computational effort.

Fig. 8. The computational time difference for classical machine graph representations with regard to the graph matrix for a procedure of structure creating (a) and updating (b).

For all the remaining procedures of data managing, the graph matrix was faster than the other machine graph representations.
The process of graph updating was performed much faster when the graph matrix representation was used (Fig. 8(b)). The neighbourhood matrix was 0.36 times slower in this case, because it does not ensure immediate access to the lists of predecessors and successors. Thus, the transitive closure calculation lasts longer for it than for the graph matrix. But the time difference is not so overwhelming in this case as can be observed for the dynamically allocated data structures. Actually, updating the predecessor and the successor lists requires 13.31 and 19.9 times more computational time, respectively, than updating the graph matrix. This is caused by the fact that managing dynamically allocated memory is more time consuming than managing static memory. Updating the graph requires browsing a dynamically allocated list and creating its new elements. Moreover, to introduce new arcs into the graph, it is necessary to analyse all predecessors as well as all successors of the considered tasks, and the considered list structures are specialised only for browsing one group of them (see the analysis of a procedure browsing task lists given below). The fact that updating the lists of successors lasted longer than the same procedure for the lists of predecessors probably results from the memory allocation process: the successor lists were created as the second ones, when a part of the program's heap had already been used.
The computational experiments showed that the graph matrix really combines the advantages of the
classical graph representations (Fig. 9).

Fig. 9. The computational time difference for classical machine graph representations with regard to the graph matrix for a procedure browsing predecessors (PB), successors (SB) and a procedure checking arc existence (AE).

The process of browsing the predecessor set of a task is performed as fast as in the predecessor lists, which are specialised for this operation. Actually, the predecessor lists were 0.12 times slower, because the simple arithmetic operations necessary to find the next task predecessor in the graph matrix appear less time consuming than operating on addresses in the dynamically allocated list. In the case of the successor lists, the time difference was much bigger, because this structure represents quite the opposite relation between tasks. The neighbourhood matrix was also slightly less time efficient. For this representation, it is necessary to check the whole row for a task to detect its predecessors, while for the graph matrix, following the indices stored in its entries, the proper elements can be found directly, one by one.
During the process of browsing all task successors, an entry in the graph matrix has to be decreased by the number of real tasks in order to obtain the index of the next element on the Successor list. So, it requires more arithmetical operations than in the case of the Predecessor list. For this reason, the graph matrix is 0.33 times slower than the list of successors. Although the graph matrix is not faster than the list structure in this particular case, it is more stable. It behaves similarly during the analysis of different relations between tasks, while the list structures are specialised for only one of them (it is worth recalling that the lists of successors were much slower when task predecessors were analysed). Consequently, the lists of predecessors, representing the opposite relation to the considered one, require 30 times more computational effort. The time difference between the graph and neighbourhood matrices was in this case less visible than for the predecessor browsing, but it still exists.
It may seem strange that the predecessor and successor lists behave differently in comparison with the graph matrix although they are complementary structures. One probable reason has already been formulated: when browsing predecessor lists in the graph matrix, the index of the next element is stored in a matrix entry explicitly, while in the case of the successor lists it has to be transformed. This phenomenon could also be caused by the memory allocation process, during which the order of data structure creation can be important and may influence the efficiency of their further managing.
Comparing the machine graph representations, the process of checking the mutual relation between a single pair of tasks should also be taken into consideration. The graph matrix ensures the same efficiency of the arc existence test as the neighbourhood matrix. On the other hand, it is much faster than the lists of predecessors and successors, which do not make it possible to check whether there is an arc between two tasks in one step; in the worst case, the whole list must be browsed (O(n)).
When analysing the disjunctive graph, the information on tasks of unknown execution order is also needed and often used. In the process of extracting it, the classical machine graph representations are incomparable with the graph matrix (Fig. 10), which is about 27 times faster than the neighbourhood matrix and very much faster than the predecessor and successor lists.

Fig. 10. The computational time difference for classical machine graph representations with regard to the graph matrix for a procedure browsing a list of tasks of unknown execution order.

Such a big difference may result from the fact that there were relatively few elements of unknown execution order in the analysed graphs, and the process of browsing them was actually reduced to checking whether there existed any task with this property. In the graph matrix, the Unknown lists are explicitly stored and can be immediately accessed. In the classical representations, one has to check the whole structure to detect all pairs of tasks of unknown mutual order.

6. Conclusions

The computational experiments confirmed the high efficiency of the graph matrix. It combines all the advantages of the classical graph representations. The mutual relation of any pair of vertices can be checked in constant time, as in the neighbourhood matrix. The predecessor set is determined as fast as in the predecessor lists and the successor set as fast as in the successor lists. The remaining tasks, for which no precedence relation is fixed, are determined much faster than in the other representations. The graph matrix is not dedicated to one particular relation between tasks and behaves in the same way for any of them. To achieve a similar efficiency of managing the disjunctive graph as in the graph matrix, one would have to create all its classical representations, which requires three times more memory space. The compact form of the graph matrix is especially important for those approaches to the job shop scheduling problem which store information on particular steps of the solution process, such as branch and bound algorithms. Actually, the proposed disjunctive graph representation has been practically applied within the branch and bound method developed for the considered problem [2].
The graph matrix is a data structure specialised for the disjunctive graph model, but it can also be used to represent any graph structure. It stores the full information on a graph and makes it possible to check the mutual order of any pair of tasks (vertices) in constant time, as well as to browse the lists of successors, predecessors and remaining tasks for any vertex in the graph. The flexible matrix definition allows one to easily adjust this data structure to the particular requirements of an implemented algorithm.

Acknowledgements

We appreciate the remarks of the anonymous referees that allowed us to improve this paper. Moreover,
we would like to thank Krzysztof Kowalczykiewicz who supported the implementation of the presented
approach and made the computational experiments possible.

References

[1] J. Błażewicz, K. Ecker, E. Pesch, G. Schmidt, J. Węglarz, Scheduling Computer and Manufacturing Processes, Springer, Berlin, 1996.
[2] J. Błażewicz, E. Pesch, M. Sterna, A branch and bound algorithm for the job shop scheduling problem, in: A. Drexl, A. Kimms (Eds.), Beyond Manufacturing Resource Planning (MRP II). Advanced Models and Methods for Production Planning, Springer, Heidelberg, 1998, pp. 219–254.
[3] P. Brucker, Scheduling Algorithms, Springer, Berlin, 1995.
[4] P. Brucker, B. Jurisch, A. Kramer, The job-shop problem and immediate selection, Annals of Operations Research 50 (1996) 73–114.
[5] J. Carlier, E. Pinson, Adjustment of heads and tails for the job-shop scheduling problem, European Journal of Operational Research 78 (1994) 146–161.
[6] N. Deo, Graph Theory with Applications to Engineering and Computer Science, Prentice-Hall, Englewood Cliffs, NJ, 1974.
[7] A. Drexl, A. Kimms (Eds.), Beyond Manufacturing Resource Planning (MRP II). Advanced Models and Methods for Production Planning, Springer, Berlin, 1998.
[8] R. Haupt, A survey of priority rule-based scheduling, OR Spectrum 11 (1989) 3–16.
[9] B. Roy, B. Sussmann, Les problèmes d'ordonnancement avec contraintes disjonctives, SEMA, Note D.S. No. 9, Paris, 1964.
