Abstract
We address in this paper the parallel machine scheduling problem with a shared loading server and a shared unloading server. Each job has to be loaded by the loading server before being processed on one of the available machines, and unloaded immediately by the unloading server after its processing. The objective is to minimize the overall completion time, known as the makespan. This important problem arises in flexible manufacturing systems, automated material handling, healthcare, and many other industrial fields, and has received little attention so far; research to date has focused on the case of two machines. We consider the regular case of this problem. A mixed integer programming formulation based on completion time variables is suggested to solve small-sized instances of the problem. Due to its \(\mathcal{NP}\)-hardness, we propose two greedy heuristics based on minimizing the waiting time of the loading and unloading server, respectively, and an efficient General Variable Neighborhood Search (GVNS) algorithm. In the computational experiments, the proposed methods are compared on 120 new, publicly available instances. It turns out that the proposed GVNS with an initial solution-finding mechanism based on minimizing the unloading server waiting time significantly outperforms the other approaches.
1 Introduction
The Parallel Machine Scheduling problem with a Single Server (PMSSS problem) has been widely studied in the last two decades (see [6, 11, 16, 18, 27]). In the PMSSS problem, a server, which can represent a team of setup workers or a single operator, is in charge of the setup operation (or loading operation) of jobs. The setup time resulting from this operation is the time required to prepare the necessary resources to perform a task [3]. The PMSSS problem has various applications, e.g., in supply chains [9, 35], in the plastic injection industry [6], in the printing industry [24], in the semiconductor industry [27], and recently in health care [23].
Unlike the PMSSS problem, the problem considering both loading and unloading operations has received less attention in the literature. In this variant, each job has to be loaded by the server before being processed on one of the machines, and unloaded immediately by the same server after its processing. Only a few papers address it. Xie et al. [37] studied a static parallel machine scheduling problem with a single server in charge of loading and unloading jobs on machines. The authors presented complexity results for some particular cases of the problem. They derived optimality properties that helped to prove that the largest processing time heuristic has a tight worst-case bound of \(3/2 - 1/2m\), where m is the number of available machines. Later, Jiang et al. [25] addressed a dynamic scheduling problem on two identical parallel machines with a single server, assuming that both the loading and unloading operations performed by the single server take one unit of time. To solve the problem, they proposed two online algorithms with tight competitive ratios.
For the case with several servers, Kravchenko and Werner addressed the problem with k servers, where \(k > 1\), in order to minimize the makespan [29]. They showed that the problem is unary \({\mathcal {N}}{\mathcal {P}}\)-hard for each \(k < m\) and developed a pseudo-polynomial time algorithm. Later, in 2010, the same authors showed that the problem with k servers and a makespan-minimization objective is binary \({\mathcal {N}}{\mathcal {P}}\)-hard [36]. In addition, they conducted a worst-case analysis of two list scheduling algorithms for makespan minimization. In the setting with multiple unloading servers, it is assumed that a job starts its processing immediately on an available machine without prior setup, and an unloading server is needed to remove the job from the machine. Only one paper deals with this setting. Ou et al. [33] addressed the problem of scheduling m identical parallel machines with multiple unloading servers, where the objective is to minimize the total completion time. They showed that the shortest processing time first algorithm has a worst-case bound of two, and proposed other heuristic algorithms as well as a branch-and-bound algorithm to solve the problem. The authors stated that this problem was motivated by the milk run operations of a logistics company that faces limited unloading docks at the warehouse.
In this paper, we address a non-preemptive parallel machine scheduling problem with two dedicated servers by taking into account loading and unloading operations. The objective is to minimize the makespan. It is assumed that the information about the problem is available before the scheduling starts (i.e., the static case). Following the standard \(\alpha |\beta |\gamma\) classification scheme for scheduling problems known as the Graham triplet [17], the considered problem can be denoted as \(P,S2|s_j,t_j|C_{max}\), where P represents identical parallel machines, S2 represents two common servers, \(s_j\) is the loading time of job j, \(t_j\) is the unloading time of job j and \(C_{max}\) is the objective to be minimized (i.e., the makespan). We consider in this paper the regular case of the problem \(P,S2|s_j,t_j|C_{max}\), where \(\forall i,j \quad p_i < s_j+p_j+t_j\). In contrast, a set of jobs is called general if it does not respect the regularity constraint (see [27] and [16]). Note that the regularity constraint was studied for the parallel machine scheduling problem involving a single server by [28] and [1].
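The regularity condition above only requires comparing the largest processing time against the smallest total job duration. The following short sketch (our illustration, not part of the paper's implementation) checks it for a job set given as \((s_j, p_j, t_j)\) triples:

```python
def is_regular(jobs):
    """Check the regularity condition p_i < s_j + p_j + t_j for all pairs (i, j).

    jobs: list of (s, p, t) tuples (loading, processing, unloading times).
    It suffices to compare the largest processing time with the smallest
    total duration s_j + p_j + t_j over all jobs.
    """
    max_p = max(p for _, p, _ in jobs)
    min_total = min(s + p + t for s, p, t in jobs)
    return max_p < min_total

# A job set satisfying the condition (max p = 25 < min s+p+t = 29) ...
regular = [(5, 20, 5), (8, 15, 6), (6, 25, 7)]
# ... and one violating it (p = 50 exceeds s+p+t = 20 of the first job).
general = [(5, 10, 5), (8, 50, 6)]
```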
In the scheduling literature, only a limited number of papers have tackled the problem \(P,S2|s_j,t_j|C_{max}\). Among them, Jiang et al. [26] addressed the problem \(P2,S2|s_j = 1, t_j = 1|C_{max}\) with unit loading time, unit unloading time, and two identical parallel machines. The authors showed that the classical list scheduling and largest processing time heuristics have worst-case ratios of 8/5 and 6/5, respectively. Recently, Benmansour and Sifaleras [8] proposed an MIP formulation and a general variable neighborhood search metaheuristic for the general case of the problem \(P2,S2|s_j,t_j|C_{max}\) with only two identical parallel machines. To the best of our knowledge, no solution methods have been proposed in the literature for the regular case of problem \(P,S2|s_j,t_j|C_{max}\) with an arbitrary number of machines. We fill this important gap in the literature by making the following contributions:
-
we introduce a novel MIP formulation based on completion time variables for the regular case of the studied problem, and we improve it with a valid inequality.
-
we propose four lower bounds and two greedy heuristics for the regular case of the problem.
-
we design an efficient general variable neighborhood search metaheuristic with different initial solution-finding mechanisms for solving medium and large-sized instances of the regular case of the problem.
-
we provide numerical results for reasonable computing times (with respect to the literature and to the industrial practice), including a comparison with the MIP formulation using a well-known commercial solver.
The rest of the paper is organized as follows: Section 2 provides a formal description, a mixed integer programming formulation, as well as lower bounds for the studied problem. Then, two greedy heuristics are presented in Sect. 3. In Sect. 4, a general variable neighborhood search approach and the neighborhood structures that are used are described. Numerical experiments are reported in Sect. 5. Finally, concluding remarks are made in Sect. 6.
2 Problem formulation and lower bounds
In the problem \(P,S2|s_j,t_j|C_{max}\), a set \(N = \{1, 2, \ldots ,n\}\) of n independent jobs has to be processed on a set \(M = \{1, 2, \ldots ,m\}\) of m identical parallel machines with two common servers. The first server (loading server) is dedicated to the loading operation of jobs on the machines, while the second server (unloading server) is used to unload the jobs after their processing. Each job \(j \in N\) is available at the beginning of the scheduling period and has a known integer processing time \(p_j > 0\). Before its processing, each job j has to be loaded by the loading server, which takes loading time \(s_j > 0\). After its processing, the job has to be unloaded from the machine by the unloading server, which takes unloading time \(t_j > 0\). The processing operation starts immediately after the end of the loading operation, and the unloading operation starts immediately after the end of the processing operation. During the loading (respectively unloading) operation, both the machine and the loading (respectively unloading) server are occupied; after loading (respectively unloading) a job, the server becomes available for the next job. Furthermore, there are no precedence constraints among jobs, and preemption is not allowed. The objective is to find a feasible schedule that minimizes the makespan. In this paper, we consider the regular case of the problem \(P,S2|s_j,t_j|C_{max}\), where \(\forall i,j \in N \quad p_i < s_j + p_j + t_j\).
2.1 Mixed integer programming formulation
In this section, we present a refined version of the MIP formulation proposed in [8], which is based on Completion Time Variables (CTV), for the problem \(P,S2|s_j,t_j|C_{max}\) with a regular job set. The CTV formulation, also known as the natural-date variables formulation, was initially used by Balas [5] to model a job shop scheduling problem. It has also been used to model other \({\mathcal {N}}{\mathcal {P}}\)-hard scheduling problems (see [4, 7]).
The decision variables are defined as follows:
Let L be a large positive integer, computed as \(L = \sum _{i \in N} (s_i+p_i+t_i)\).
The problem \(P,S2|s_j,t_j|C_{max}\) with a regular job set can be formulated as the following MIP. We recall that in [8], the authors considered the general case of the problem \(P2,S2|s_j,t_j|C_{max}\).
In this formulation, the objective function (1) indicates that the makespan has to be minimized. Constraints (2) state that the makespan of an optimal schedule is greater than or equal to the completion time of every executed job. In order to guarantee that each job is scheduled on exactly one machine, constraint set (3) is added to the formulation. The completion time \(C_i\), which is greater than or equal to the sum of the loading, processing, and unloading times of job i, is calculated according to constraints (4). Constraints (5)–(8) ensure that a job can be processed only if the loading server, the machines, and the unloading server are available. Constraints (5) indicate that no two jobs scheduled on the same machine can overlap in time. Constraints (6) state that the loading server can load at most one job at a time, while constraints (7) state that the unloading server can unload at most one job at a time. Constraints (8) impose that, for each pair of jobs (i, j), one must be processed before the other. Finally, constraints (9) and (10) define the binary variables \(x_{ik}\) and \(z_{ij}\).
2.2 Strengthening the formulation
Proposition 1
The following constraints
are valid for MIP.
Proof
Assume the last job (denoted as job n) is executed on machine k.
The quantity \(\sum _{i=1}^{n}~(s_i + p_i+ t_i)x_{ik}\) represents the total time of use of the machine k (since time 0), without counting the idle times. So it is obvious that \(C_n = C_{max} \ge \sum _{i=1}^{n}~(s_i + p_i+ t_i)x_{ik}\). Hence inequalities (11) hold. \(\square\)
Note that we refer to the formulation (1)–(10) as MIP1, and to the formulation (1)–(11), obtained by adding the set of constraints (11), as MIP2.
2.3 Illustrative example
We now illustrate the previous formulation for an instance of \(n = 5\) jobs and \(m = 3\) machines. The processing time \(p_j\), the loading time \(s_j\) and the unloading time \(t_j\) are given in Table 1. It takes 0.14 seconds to solve the instance using the above MIP1 formulation on IBM ILOG CPLEX 12.6. The optimal objective-function value is 20, and the obtained schedule of the problem is given in Fig. 1.
2.4 Lower bounds
In this section, we present a lower bound (LB) on the optimal makespan value for the problem \(P,S2|s_j,t_j|C_{max}\). It is useful for evaluating the quality of the greedy heuristics and the metaheuristic suggested in Sects. 3 and 4, respectively. \(LB = \max (LB_1,LB_2,LB_3,LB_4)\), where \(LB_1\), \(LB_2\), \(LB_3\), and \(LB_4\) are given in Propositions 2, 3, 4, and 5, respectively.
Proposition 2
is a valid lower bound for the problem \(P,S2|s_j,t_j|C_{max}\).
Proof
One can see that the total load to be executed by the m machines represents the sum of all loading, processing, and unloading times (i.e., \(\sum _{i=1}^{n} (s_i + p_i + t_i)\)). Then, it is sufficient to divide this total load by the number of machines to obtain the aforementioned lower bound. \(\square\)
Proposition 3
is a valid lower bound for the problem \(P,S2|s_j,t_j|C_{max}\).
Proof
Let \(C_{max}^{*}\) denote the objective function value of an optimal solution of the problem \(P,S2|s_j,t_j|C_{max}\). The unloading server waiting time of the job in position i corresponds to the time between the end of the unloading operation of the job in position i and the start of the unloading operation of the job in position \(i+1\). If there is no unloading server waiting time in an optimal schedule, then \(C_{max}^{*}\) equals the sum of all unloading times plus the shortest sum of loading and processing times, which corresponds to \(\min _{i \in N}{(s_i+p_i)}\). Hence, \(LB_2\) is valid. \(\square\)
Proposition 4
is a valid lower bound for the problem \(P,S2|s_j,t_j|C_{max}\).
Proof
Let us start by defining the Loading Server Waiting Time (LSWT). The LSWT of the job in position i corresponds to the time between the end of the loading operation of the job in position i and the start of the loading operation of the job in position \(i+1\). If there is no loading server waiting time in an optimal schedule, then \(C_{max}^{*}\) equals the sum of all loading times plus the shortest sum of processing and unloading times, which corresponds to \(\min _{i \in N}{(p_i+t_i)}\). Hence, the aforementioned lower bound (\(LB_3\)) is valid. \(\square\)
Proposition 5
is a valid lower bound for the problem \(P,S2|s_j,t_j|C_{max}\).
Proof
One can see that the completion time of the job with the maximal sum of loading, processing and unloading times represents a valid lower bound for the problem \(P,S2|s_j,t_j|C_{max}\). \(\square\)
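The four bounds above can be combined as in Sect. 2.4. The following sketch (our illustration; the rounding of \(LB_1\) to the next integer is an assumption justified by the integer job data) computes \(LB = \max(LB_1,\ldots,LB_4)\) for a job set given as \((s_j, p_j, t_j)\) triples:

```python
import math

def lower_bound(jobs, m):
    """LB = max(LB1..LB4), following Propositions 2-5 (a sketch of our reading).

    jobs: list of (s, p, t) triples; m: number of machines.
    LB1: total workload spread evenly over the m machines.
    LB2: all unloading times plus the shortest s_i + p_i (Proposition 3).
    LB3: all loading times plus the shortest p_i + t_i (Proposition 4).
    LB4: the longest single job s_i + p_i + t_i (Proposition 5).
    """
    lb1 = math.ceil(sum(s + p + t for s, p, t in jobs) / m)
    lb2 = sum(t for _, _, t in jobs) + min(s + p for s, p, _ in jobs)
    lb3 = sum(s for s, _, _ in jobs) + min(p + t for _, p, t in jobs)
    lb4 = max(s + p + t for s, p, t in jobs)
    return max(lb1, lb2, lb3, lb4)
```

For example, with three jobs (5, 20, 5), (8, 15, 6), (6, 25, 7) and m = 2 machines, the workload bound \(LB_1 = \lceil 97/2 \rceil = 49\) dominates.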
3 Greedy heuristics
In this section, we present two greedy heuristics that aim to minimize the loading/unloading server waiting time for the problem \(P,S2|s_j,t_j|C_{max}\) with a regular job set. The basic objective at each step of the proposed heuristics is to avoid generating loading/unloading server waiting times. The idea relies on the works of [2, 22], and [14] for the problems \(P2,S1|s_j,p_j|C_{max}\) and \(P,S1|s_j,p_j|C_{max}\), which involve only a single server. Abdekhodaee et al. [2] proposed forward and backward heuristics to minimize the server waiting time and the machine idle time, respectively, for the problem \(P2,S1|s_j,p_j|C_{max}\). Hasani et al. [22] suggested two other heuristics for the same problem. Later, El Idrissi et al. [14] generalized these heuristics to the problem \(P,S1|s_j,p_j|C_{max}\), with an arbitrary number of machines and a single server. Note that the unloading server has not been considered in these works.
3.1 Greedy heuristic (USWT)
In this heuristic, jobs are scheduled according to the availability of the machines, the loading server, and the unloading server. The job with the shortest sum of loading and processing times is chosen as the first job of the final sequence; this follows the structure of the lower bound \(LB_2\) presented in Proposition 3. The steps of the first proposed heuristic, called USWT (Unloading Server Waiting Time), are as follows.
Heuristic USWT
-
Step 1: Sort the list of jobs \(\pi =\{\pi _1,\ldots ,\pi _k,\ldots ,\pi _n\}\) in increasing order of the sum of loading and processing times.
-
Step 2: Sequence the first job of the list \(\pi\) on the earliest available machine.
-
Step 3: Set \(\varGamma\) to the time between the end of the loading operation and the end of the unloading operation of the last sequenced job.
-
Step 4: From the unsequenced jobs, find a job with a sum of loading and processing times less than or equal to \(\varGamma\). If there is no such job, select the first one from the list \(\pi\). Schedule the selected job on the first available machine at the earliest possible time, taking into consideration the loading/unloading server constraints.
-
Step 5: Repeat Steps 3 and 4 until all jobs are sequenced.
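The steps above can be sketched as a sequence-construction rule. The sketch below is our simplified illustration, not the authors' implementation: it only produces the job ordering (the assignment to machines happens when the sequence is decoded), and it approximates \(\varGamma\) by \(p + t\) of the last sequenced job, i.e., the window during which the unloading server is committed to that job.

```python
def uswt_sequence(jobs):
    """Simplified sketch of the USWT ordering rule.

    jobs: list of (s, p, t) triples.  Returns a permutation of job indices.
    Gamma is approximated by p + t of the last sequenced job (an assumption).
    """
    # Step 1: sort by s_j + p_j (shortest loading + processing first).
    remaining = sorted(range(len(jobs)), key=lambda j: jobs[j][0] + jobs[j][1])
    seq = [remaining.pop(0)]                     # Step 2: first job of the list
    while remaining:                             # Step 5: loop until done
        last = seq[-1]
        gamma = jobs[last][1] + jobs[last][2]    # Step 3 (simplified)
        # Step 4: prefer a job whose s + p fits inside gamma,
        # otherwise fall back to the first job of the sorted list.
        pick = next((j for j in remaining
                     if jobs[j][0] + jobs[j][1] <= gamma), remaining[0])
        remaining.remove(pick)
        seq.append(pick)
    return seq
```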
3.2 Greedy heuristic (LSWT)
Contrary to the heuristic USWT, in this section we present a constructive heuristic that aims to minimize the loading server waiting time. In this heuristic, the job with the shortest sum of processing and unloading times is scheduled last in the final sequence; this follows the structure of the lower bound \(LB_3\) presented in Proposition 4. The steps of the second proposed heuristic, called LSWT (Loading Server Waiting Time), are as follows.
Heuristic LSWT
-
Step 1: Sort the list of jobs \(\pi\) in increasing order of the sum of processing and unloading times.
-
Step 2: Sequence the first job of the list \(\pi\) in the final position, and sequence the second job of the list \(\pi\) in the first position on the earliest available machine.
-
Step 3: Set \(\varTheta\) to the time between the end of the loading operation and the end of the unloading operation of the last sequenced job.
-
Step 4: From the unsequenced jobs, find a job with a sum of loading and processing times greater than or equal to \(\varTheta\). If there is no such job, select the first one from the list \(\pi\). Schedule the selected job on the first available machine at the earliest possible time, taking into consideration the loading/unloading server constraints.
-
Step 5: Repeat Steps 3 and 4 until all jobs are sequenced.
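Mirroring the USWT sketch, the LSWT ordering rule can be illustrated as follows. Again this is our simplified reading: \(\varTheta\) is approximated by \(p + t\) of the last sequenced job, and only the ordering is produced.

```python
def lswt_sequence(jobs):
    """Simplified sketch of the LSWT ordering rule (mirror of USWT).

    jobs: list of (s, p, t) triples.  The job with the shortest p + t is
    reserved for the final position; Theta is approximated by p + t of the
    last sequenced job (an assumption, as in our USWT sketch).
    """
    # Step 1: sort by p_j + t_j (shortest processing + unloading first).
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][1] + jobs[j][2])
    final_job = order.pop(0)       # Step 2: shortest p + t scheduled last
    seq = [order.pop(0)]           # second shortest opens the sequence
    while order:                   # Step 5: loop until done
        last = seq[-1]
        theta = jobs[last][1] + jobs[last][2]   # Step 3 (simplified)
        # Step 4: prefer a job whose s + p covers theta,
        # otherwise fall back to the first job of the sorted list.
        pick = next((j for j in order
                     if jobs[j][0] + jobs[j][1] >= theta), order[0])
        order.remove(pick)
        seq.append(pick)
    return seq + [final_job]
```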
Note that USWT and LSWT are complementary, and both are used to generate initial solutions for the metaheuristic presented in Sect. 4.
4 General variable neighborhood search (GVNS) metaheuristic
Variable Neighborhood Search (VNS) is a single-solution metaheuristic proposed by Mladenović and Hansen [31] that uses a local search procedure as its basic building block. The basic variant of VNS (BVNS) involves three main steps: Shaking, Local Search (LS), and Change Neighborhood (Move or Not). The shaking step represents the diversification (perturbation) phase, whose role is to ensure escape from local optima. It is always applied to the current best solution and consists of a random perturbation in the given neighborhood. The obtained (shaken) solution is the starting point for the LS step. The role of LS is to improve the shaken solution by examining its neighbors (in one or more neighborhoods). When a local optimum is obtained by LS, BVNS performs the final step (Move or Not), in which the local optimum is compared with the current best solution. If an improvement is obtained, the search concentrates around the newly found best solution: the current best solution and the neighborhood index are updated accordingly. Otherwise, the search is expanded to a wider part of the solution space (if possible) by increasing the neighborhood index for shaking. The main BVNS steps are repeated until a pre-specified stopping criterion is satisfied [19].
Since 1997, VNS has been widely used, and many variants and successful applications can be found in the relevant literature [20, 32]. The simplest VNS variant is Reduced VNS (RVNS), consisting only of the Shaking and Move or Not steps. In most cases, it is applied to provide good initial solutions for other VNS variants. If the shaking step is omitted, the corresponding method is known as Variable Neighborhood Descent (VND) [13], a deterministic algorithm in which LS is performed along multiple neighborhoods. Skewed VNS (SVNS) is a variant that allows the acceptance of non-improving solutions in the Move or Not step with a given probability [10]; an additional parameter \(\alpha\) is introduced to control the quality of these non-improving solutions. Finally, General VNS (GVNS) contains all three main steps and employs VND instead of LS [19].
Here, the GVNS metaheuristic is implemented for the problem \(P,S2|s_j,t_j|C_{max}\) with a regular job set. The details about this implementation are provided in the remainder of this section.
4.1 Implementation details
A solution of the considered scheduling problem \(P, S2|s_j, t_j|C_{max}\) can be represented as a permutation \(\pi =\{\pi _1,\ldots ,\pi _k,\ldots ,\pi _n\}\) of the job set N, where \(\pi _k\) indicates the job processed in the \(k^{th}\) position. This representation is indirect, as it requires actually scheduling the jobs on the machines, taking into consideration the server constraints (i.e., loading and unloading), in order to calculate the value of the objective function (\(C_{max}\)). Since the jobs are independent, all n! permutations represent feasible solutions, and therefore the search space is very large. On the other hand, several neighborhood structures can be defined for this representation.
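To make the indirect representation concrete, the following sketch decodes a permutation into a feasible schedule and returns its makespan. This is our illustrative simplification, not the authors' implementation: jobs are committed in sequence order, and the loading start of each job is delayed, if needed, so that the unloading server is guaranteed free when its processing ends (the no-wait unloading constraint). All names are ours.

```python
def makespan(seq, jobs, m):
    """Greedy decoder: evaluate C_max of a job permutation (a sketch).

    seq: permutation of job indices; jobs: list of (s, p, t); m: machines.
    The timing produced is feasible for the permutation, though not
    necessarily the best possible timing for it.
    """
    machine_free = [0] * m   # next free time of each machine
    load_free = 0            # loading server availability
    unload_free = 0          # unloading server availability
    cmax = 0
    for j in seq:
        s, p, t = jobs[j]
        k = min(range(m), key=lambda i: machine_free[i])  # earliest machine
        start = max(machine_free[k], load_free)
        # Delay loading so the unloading server is free at processing end.
        start = max(start, unload_free - s - p)
        load_free = start + s
        unload_free = start + s + p + t
        machine_free[k] = unload_free   # machine is held through unloading
        cmax = max(cmax, unload_free)
    return cmax
```

For instance, two identical jobs (2, 3, 1) take 12 time units on one machine but only 8 on two, since the second loading can overlap the first job's processing.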
4.1.1 Neighborhood structures
To obtain an efficient VNS metaheuristic, we have to choose suitable neighborhood structures. The following three neighborhood structures (\(l_{max} = 3\)) are proposed to explore the solution space of the problem at hand.
-
\(\mathcal {N}_1(\pi )\) = Swap(\(\pi )\): It consists of all solutions obtained from the solution \(\pi\) by swapping two of its jobs.
-
\(\mathcal {N}_2(\pi )\) = Insert(\(\pi )\): It consists of all solutions obtained from the solution \(\pi\) by reinserting one of its jobs somewhere else in the sequence.
-
\(\mathcal {N}_3(\pi ) = Reverse(\pi )\): It consists of all solutions obtained from the solution \(\pi\) by reversing a sub-sequence of \(\pi\). More precisely, given two jobs \(\pi _i\) and \(\pi _j\), we construct a new sequence by first deleting the connection between \(\pi _i\) and its successor \(\pi _{i+1}\), and the connection between \(\pi _j\) and its successor \(\pi _{j+1}\). Next, we connect \(\pi _{i-1}\) with \(\pi _j\) and \(\pi _{i}\) with \(\pi _{j+1}\).
Note that, these neighborhood structures have been successfully used in different scheduling problems involving a single server (see [15, 21]).
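The three moves operate on the permutation representation only. A minimal sketch (our illustration; position arguments are ours) of one move in each neighborhood:

```python
def swap(seq, i, j):
    """N1: exchange the jobs at positions i and j."""
    s = seq[:]
    s[i], s[j] = s[j], s[i]
    return s

def insert(seq, i, j):
    """N2: remove the job at position i and reinsert it at position j."""
    s = seq[:]
    s.insert(j, s.pop(i))
    return s

def reverse(seq, i, j):
    """N3: reverse the sub-sequence between positions i and j (inclusive)."""
    return seq[:i] + seq[i:j + 1][::-1] + seq[j + 1:]
```

A full neighborhood \(\mathcal{N}_l(\pi)\) is obtained by enumerating all valid position pairs (i, j).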
4.1.2 Variable neighborhood descent
Now, we propose to use \(\mathcal{N}_1\), \(\mathcal{N}_2\) and \(\mathcal{N}_3\) within VND (Algorithm 1). VND starts with an initial solution \(\pi _{0}\) and repeatedly tries to construct an improved solution from the current solution \(\pi\) by exploring its neighborhood \(\mathcal {N}_l(\pi )\). The search continues until no further improvement can be made. The performance of VND depends on the order in which the neighborhoods are explored and on how the search switches from one neighborhood to another. For the change-neighborhood step, we use the basic sequential, pipe, and cyclic strategies, given in Algorithms 2–4, respectively. Exhaustive testing is performed in Sect. 5.3 in order to identify the best order of neighborhoods with respect to the suggested change-neighborhood strategies.
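The three change-neighborhood strategies differ only in how the neighborhood index is updated after an improving move. The sketch below is a simplified, abstract rendering (not Algorithms 1–4 themselves): each neighborhood is represented by an exploration function returning an improving neighbor or `None`, and the cyclic variant uses a simplified termination rule.

```python
def vnd(x0, explore, strategy="sequential"):
    """Sketch of VND with sequential / pipe / cyclic neighborhood changes.

    explore: list of functions; explore[l](x) returns an improving neighbor
    of x in neighborhood l, or None if none improves.  The strategies differ
    only in how l is updated after an improving move: sequential restarts
    from l = 0, pipe stays in the same l, cyclic advances to l + 1.
    """
    x, l = x0, 0
    while l < len(explore):
        y = explore[l](x)
        if y is not None:               # improvement found: move to y
            x = y
            if strategy == "sequential":
                l = 0
            elif strategy == "cyclic":
                l = (l + 1) % len(explore)
            # pipe: keep exploring the same neighborhood l
        else:
            l += 1                      # no improvement: next neighborhood
    return x
```

On a toy minimization of a nonnegative integer with "subtract 3" and "subtract 1" neighborhoods, all three strategies reach the optimum 0.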
4.1.3 Shaking
To escape local optima, a shaking procedure is proposed. It consists of sequentially performing k random jumps from the current solution \(\pi\) using the neighborhood structure \(\mathcal{N}_3\). In preliminary experiments, shaking with additional neighborhood structures reduced the quality of the results. Its pseudo-code is given in Algorithm 5.
4.2 GVNS for the problem \(P, S2|s_j, t_j|C_{max}\)
In this section, we present the overall pseudocode of GVNS as implemented to solve the problem \(P, S2|s_j, t_j|C_{max}\) with a regular job set (see Algorithm 6). The diversification and intensification abilities of GVNS rely on the shaking phase and on VND, respectively. The shaking step of GVNS uses the single neighborhood structure \(\mathcal{N}_3\), while the VND step uses all three proposed neighborhood structures. The stopping criterion is a CPU time limit \(T_{max}\). Since GVNS is a trajectory-based metaheuristic, it must start from a given solution. We therefore refer to GVNS I as GVNS initialized with the USWT heuristic, GVNS II as GVNS initialized with the LSWT heuristic, and GVNS III as GVNS with a random initial solution. All GVNS variants are compared in Sect. 5.
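The overall loop (shake with k random reversals in \(\mathcal{N}_3\), improve with VND, move-or-not, under a time limit) can be sketched as follows. This is a generic illustration with names of our choosing, not Algorithm 6 itself; the VND procedure and cost function are passed in as parameters.

```python
import random
import time

def gvns(x0, cost, vnd_step, k_max=20, t_max=1.0, rng=None):
    """Sketch of the GVNS loop: shaking with k random reversals (N3),
    intensification via a VND procedure, then move-or-not, until the
    CPU time limit t_max (seconds) is reached.

    cost: objective to minimize; vnd_step: maps a solution (a list) to a
    locally improved one.  All names here are illustrative.
    """
    rng = rng or random.Random(0)
    best = x0
    start = time.time()
    while time.time() - start < t_max:
        k = 1
        while k <= k_max and time.time() - start < t_max:
            x = best[:]
            for _ in range(k):          # shaking: k random jumps in N3
                i, j = sorted(rng.sample(range(len(x)), 2))
                x[i:j + 1] = x[i:j + 1][::-1]
            x = vnd_step(x)             # intensification (VND)
            if cost(x) < cost(best):    # move or not
                best, k = x, 1
            else:
                k += 1
    return best
```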
5 Computational results
In this section, we present the computational experiments carried out to evaluate the performance of MIP1 (1)–(10), MIP2 (1)–(11), USWT, LSWT, GVNS I, GVNS II, and GVNS III for the problem \(P,S2|s_j,t_j|C_{max}\) with a regular job set. MIP1 and MIP2 were solved using the Concert Technology library of CPLEX 12.6 with default settings in C++, whereas LSWT, USWT, GVNS I, GVNS II and GVNS III were coded in C++. We used a personal computer with an Intel(R) Core(TM) i7-4600M 2.90 GHz CPU and 16 GB of RAM, running Windows 7. Except for the small-sized instances, for which one run is sufficient, the metaheuristics were executed 10 times in all experiments reported in this section.
5.1 Benchmark instances
To the best of our knowledge, there are no publicly available benchmark instances from the literature for the problem \(P,S2|s_j,t_j|C_{max}\) with a regular job set, so we decided to generate a new set of instances by generalizing the scheme previously proposed by Silva et al. [34] and Benmansour and Sifaleras [8]. To generate a regular job set, we first generate a general job set, where the processing time \(p_j\), loading time \(s_j\), and unloading time \(t_j\) of each job j are drawn from the uniform distributions U[10, 100], U[5, 25] and U[5, 25], respectively. Then, we reduce this general job set to a regular one by adapting Koulamas' reduction algorithm (see Koulamas [28]) to the studied problem. In our generation scheme, we adopted the values \(n \in \{10, 50, 100, 250\}\) and \(m \in \{2,3,5\}\), leading to a total of 12 groups of instances. For each group (n, m), ten instances were created, resulting in a total of 120 new instances. These instances are publicly available at: https://sites.google.com/view/data-set-for-thepmsssproblem/accueil.
5.2 Parameters setting
For the proposed GVNS metaheuristic, two parameters have to be tuned, \(k_{max}\) which represents the maximum level of perturbation and \(T_{max}\) which corresponds to the maximum time allowed to be used by the GVNS. After some preliminary tests, we decided to set \(k_{max}\) to 20 as it offered a reasonable trade-off between the quality of the solution and CPU time. For small-sized instances (\(n=10\) and \(m \in \{2,3,5\}\)), \(T_{max}\) is set to the computing time to find an optimal solution by CPLEX solver. For medium-sized instances (\(n \in \{50,100\}\) and \(m \in \{2,3,5\}\)), \(T_{max}\) is set to 100 seconds. Finally, for large-sized instances (\(n = 250\) and \(m \in \{2,3,5\}\)), \(T_{max}\) is set to 200 seconds. In addition, for all instances, the time limit for CPLEX is set to 1h.
5.3 Comparison of VND variants
Now, we present a detailed comparison of three VND variants, namely sequential VND, pipe VND and cyclic VND. These VND variants have been widely used for different optimization problems (see [12, 30]). The performance of the proposed VND variants depends on the sequence of the three neighborhood structures (\(\mathcal {N}_1, \mathcal {N}_2, \mathcal {N}_3\)) and on the search strategy (first or best improvement). Therefore, six different sequences of neighborhood structures are presented in Table 2. Note that in this comparison, each VND variant starts from a random initial permutation \(\pi\).
Furthermore, 30 instances of size \(n = 250\) with m ranging from 2 to 5 (large-sized instances), are used in this comparison. Table 3 presents the average results for the first improvement strategy (i.e., in each iteration, stop the generation of neighbor solutions as soon as the current solution can be improved), whereas Table 4 presents the average results for the best improvement strategy (i.e., in each iteration, generate all the neighbor solutions and pick up the best one). For each experiment, we indicate the average value of \(C_{max}\) and the average computing time (in seconds).
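The two search strategies described above can be contrasted in a short sketch (our illustration; the neighborhood is passed as a generator of candidate solutions):

```python
def first_improvement(x, neighbors, cost):
    """Return the first neighbor that improves on x, or None if none does."""
    cx = cost(x)
    for y in neighbors(x):
        if cost(y) < cx:     # stop at the first improving neighbor
            return y
    return None

def best_improvement(x, neighbors, cost):
    """Generate all neighbors of x and return the best one if it improves."""
    best = min(neighbors(x), key=cost, default=None)
    return best if best is not None and cost(best) < cost(x) else None
```

First improvement typically performs far fewer evaluations per iteration, which matters when decoding a permutation is costly.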
According to these results, VND is sensitive to both the neighborhood-structure sequence and the search strategy. Cyclic VND with neighborhood sequence 4, combined with the first improvement strategy, leads to the best results, since it returns the minimum average makespan. Therefore, we use these settings within the proposed GVNS.
5.4 Results
Because of the \({\mathcal {N}}{\mathcal {P}}\)-hard nature of the problem \(P,S2|s_j,t_j|C_{max}\), an optimal solution was only obtained for small-sized instances with up to 10 jobs and 5 machines. Therefore, LSWT, USWT, GVNS I, GVNS II and GVNS III were designed to solve medium and large-sized instances. Recall that in GVNS I the initial solution is obtained by the greedy heuristic USWT, in GVNS II it is obtained by the greedy heuristic LSWT, and in GVNS III it is randomly generated.
5.4.1 Exact approaches
In Table 5, we compare the performance of MIP1 and MIP2 for all 12 groups of instances under a time limit of 1 h. Each group of instances is characterized by the following information: the group number, the number n of jobs, and the number m of machines. In addition, for each MIP formulation, the following information is given: i) the average time required to prove optimality, CPU; ii) the average percentage gap to optimality, \(gap_{LB}(\%)\); iii) the average gap between the formulation's lower bound and the best upper bound, \(gap_{LB-UB}(\%)\); and iv) the average gap between the linear programming (LP) relaxation lower bound and the best upper bound, \(gap_{LP}(\%)\). The following observations can be made:
-
For \(n=10\) and \(m \in \{2,3,5\}\): Based on formulations MIP1 and MIP2, CPLEX is able to produce an optimal solution for every instance. For the improved formulation MIP2, CPLEX produces an optimal solution in significantly less computational time than with the original formulation, except for groups 2 and 3.
-
For \(n \in \{50,100,250\}\) and \(m \in \{2,3,5\}\): Based on MIP1 and MIP2, CPLEX is able to find a feasible solution for all groups. The improved formulation MIP2 produces much smaller \(gap_{LB}(\%)\) and \(gap_{LB-UB}(\%)\) values than MIP1. In addition, MIP2 significantly reduces the linear programming relaxation gap.
The overall results show that MIP2 outperforms MIP1, on average, on almost all instances. Furthermore, the impact of the strengthening constraints in Eq. 11 is very positive, as MIP2 produces tighter LP relaxation bounds than MIP1. In the remainder of the paper, we compare only MIP2 with the other approaches, as it produces the best results. Note that MIP2 was not able to prove optimality even for the 50-job instances, which are of limited size. Consequently, there is a need for (meta)heuristics able to find an approximate solution, if possible of high quality, in a short computational time.
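The gap metrics reported in Table 5 can be computed as follows. This is a sketch under the common convention \(100\,(UB-LB)/UB\); the exact normalization used by the authors is not stated, and the numeric values below are hypothetical:

```python
def pct_gap(lower, upper):
    """Percentage gap between a lower and an upper bound, under the
    common definition 100 * (UB - LB) / UB (assumed, not stated in the text)."""
    return 100.0 * (upper - lower) / upper

# Hypothetical values for a single instance:
ub_best = 400.0   # best-known makespan (upper bound)
lb_mip = 380.0    # formulation lower bound at the time limit
lb_lp = 300.0     # LP-relaxation lower bound

gap_lb_ub = pct_gap(lb_mip, ub_best)  # gap_{LB-UB}(%): 5.0
gap_lp = pct_gap(lb_lp, ub_best)      # gap_{LP}(%): 25.0
```

Under this convention, a tighter LP relaxation (a larger LP lower bound) directly translates into a smaller \(gap_{LP}(\%)\), which is how the strengthening effect of Eq. 11 shows up in the table.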
5.4.2 Approximate approaches
In Table 6, we compare the performance of all the proposed methods on small-sized instances (\(n=10\) and \(m \in \{2,3,5\}\)), for which an optimal solution can be found by MIP2 within one hour. Each instance is characterized by the following information. First, an ID (the name of the instance, e.g., nXmYkZ denotes the \(Z^{th}\) instance with \(n=X\) and \(m=Y\)), the lower bound LB computed as in Sect. 2.4, and the optimal value \(C_{max}^*\) of \(C_{max}\) (found with MIP2). Second, the obtained value of \(C_{max}\) is given for USWT and LSWT (not the computing time, as it is always below 0.0001). Finally, the computing time to find an optimal solution is given for MIP2, GVNS I, GVNS II, and GVNS III. The last line of the table indicates average results. The following observations can be made:
-
GVNS I, GVNS II, and GVNS III reach an optimal solution for each instance in significantly less computing time than MIP2.
-
USWT and LSWT are not able to generate an optimal solution for every instance. On average, USWT produces solutions of better quality than LSWT.
-
The theoretical lower bound LB is on average 6.17% below \(C_{max}^*\).
-
GVNS I requires the smallest average computing time (0.70 seconds) to find optimal solutions, compared with MIP2, GVNS II, and GVNS III.
In addition, Table 7 presents the performance of the three metaheuristics in terms of the percentage deviation from the best-known solution for each group of instances (the best solution over all runs of all the metaheuristics, compared with the one obtained by the considered metaheuristic). For each metaheuristic, the following information is given: the minimum (Min), average (Avg), and maximum (Max) values of the percentage deviation over each group of instances. The last line of the table indicates average results. The results show that GVNS I, on average, achieved superior performance in terms of minimum, average, and maximum gaps for each group of instances, compared to GVNS II and GVNS III.
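The Min/Avg/Max statistics of Table 7 can be reproduced as sketched below. The deviation formula \(100\,(value - best)/best\) is an assumed (standard) definition, and the makespan values are hypothetical:

```python
def pct_deviation(value, best):
    """Percentage deviation of a metaheuristic result from the best-known
    solution, 100 * (value - best) / best (assumed definition)."""
    return 100.0 * (value - best) / best

def group_stats(results, best_known):
    """Min / Avg / Max percentage deviation over one group of instances."""
    devs = [pct_deviation(v, b) for v, b in zip(results, best_known)]
    return min(devs), sum(devs) / len(devs), max(devs)

# Hypothetical makespans for a four-instance group, all with best-known 100:
mn, avg, mx = group_stats([105, 100, 110, 102], [100, 100, 100, 100])
# mn = 0.0, avg = 4.25, mx = 10.0
```

A deviation of 0.0 means the metaheuristic matched the best-known solution on that instance, which is why Min is often zero while Avg and Max separate the variants.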
Furthermore, the detailed results of MIP2, the two greedy heuristics, and the three metaheuristics for the remaining instances are given in Appendix 1. Tables 8, 9, and 10 present the performance of all the approaches for medium and large-sized instances with \(n \in \{50,100,250\}\) and \(m \in \{2,3,5\}\), for which only a feasible solution can be found by MIP2 within one hour (the best results are indicated in bold). The instance characteristics are indicated first. Then, for MIP2, the following information is given: the lower bound \(LB_{MIP2}\), the upper bound \(UB_{MIP2}\), the percentage gap to optimality \(Gap_{LB}(\%)\), and the time required to prove optimality (CPU). Next, the obtained value of \(C_{max}\) is given for USWT and LSWT (not the computing time, as it is always below 0.001). Finally, for GVNS I, GVNS II, and GVNS III, the best (respectively average) objective-function value over 10 runs is reported, denoted Best (respectively Avg), together with the average computing time over the 10 runs. Note that the computing time of a run corresponds to the time at which the best visited solution is found. According to these results, overall, out of 120 instances, GVNS I found 73 best solutions (60.83%), whereas GVNS II and GVNS III found only 42 (35%) and 71 (59.16%), respectively. Figure 2 depicts the tradeoff between solution quality and time spent for GVNS I on 10 instances of size \(n=250\) and \(m=5\). We chose to study these instances in particular because they are difficult to solve. On average over these 10 instances, the time to produce a solution within 4% of the best-found solution is 10.7 s.
As shown in Tables 8, 9, and 10, for GVNS I, GVNS II, and GVNS III, the difference between Best and Avg grows with n and m, which likely indicates a degradation of robustness as the instance size increases. It can be noticed that the proposed theoretical lower bound (LB) is equal to the MIP2 formulation's lower bound \(LB_{MIP2}\) for 90 instances. Furthermore, GVNS I, GVNS II, and GVNS III outperformed CPLEX both in the quality of the obtained solutions and in the time needed to generate them. In addition, GVNS I produced, on average, better results in less computational time than GVNS II and GVNS III. This success can be explained by the quality of the initial solution, as the unloading server waiting time minimization strategy contributes significantly to the minimization of the makespan. Hence, one can conclude that GVNS I is the best method for computing good-quality solutions in a small amount of time for the problem \(P,S2|s_j,t_j|C_{max}\) with a regular job set.
6 Conclusions and future work
In this paper, the identical parallel machine scheduling problem with two common servers was addressed. Each job has to be loaded by a loading server immediately before, and unloaded by an unloading server immediately after, being processed on one of the m available machines. The objective function involved makespan minimization. The regular case of this problem was considered, where \(\forall i,j \quad p_i <s_j+p_j+t_j\). A mixed integer programming (MIP) formulation based on completion time variables, as well as a valid inequality, were suggested to solve optimally small-sized instances with up to 10 jobs and 5 machines. In addition, four lower bounds were proposed. Due to the \({\mathcal {N}}{\mathcal {P}}\)-hard nature of the problem, two greedy heuristics based on the minimization of the loading (respectively unloading) server waiting time, as well as a general variable neighborhood search (GVNS) algorithm with different initial solution-finding mechanisms, were designed to obtain solutions for large-sized instances with up to 250 jobs and 5 machines. Computational experiments were carried out on 120 new instances, divided into 12 groups. For small-sized instances, the GVNS algorithm outperformed the MIP2 formulation in terms of the computing time needed to find an optimal solution. For medium and large-sized instances, the GVNS with an initial solution-finding mechanism based on the unloading server waiting time minimization yielded better results than the other approaches. Future work may include larger test instances with equal loading times (\(s_j=s\)) and/or equal unloading times (\(t_j=t\)), new neighborhood structures, and the implementation of other metaheuristic methods for the problem \(P,S2|s_j,t_j|C_{max}\) with a general job set. Additional constraints could also be considered, especially sequence-and-machine-dependent setup times and release dates.
Data availability
The authors declare that the data set used in this study is available within the supplementary information files of the article.
References
Abdekhodaee, A.H., Wirth, A.: Scheduling parallel machines with a single server: some solvable cases and heuristics. Comput. Oper. Res. 29(3), 295–315 (2002)
Abdekhodaee, A.H., Wirth, A., Gan, H.S.: Scheduling two parallel machines with a single server: the general case. Comput. Oper. Res. 33(4), 994–1009 (2006)
Allahverdi, A., Soroush, H.: The significance of reducing setup times/setup costs. Eur. J. Oper. Res. 187(3), 978–984 (2008)
Baker, K.R., Keller, B.: Solving the single-machine sequencing problem using integer programming. Comput. Ind. Eng. 59(4), 730–735 (2010)
Balas, E.: On the facial structure of scheduling polyhedra. In: Mathematical Programming Essays in Honor of George B. Dantzig Part I, pp. 179–218. Springer (1985)
Bektur, G., Saraç, T.: A mathematical model and heuristic algorithms for an unrelated parallel machine scheduling problem with sequence-dependent setup times, machine eligibility restrictions and a common server. Comput. Oper. Res. 103, 46–63 (2019)
Benmansour, R., Allaoui, H., Artiba, A., Hanafi, S.: Minimizing the weighted sum of maximum earliness and maximum tardiness costs on a single machine with periodic preventive maintenance. Comput. Oper. Res. 47, 106–113 (2014)
Benmansour, R., Sifaleras, A.: Scheduling in parallel machines with two servers: the restrictive case. In: Variable Neighborhood Search: 8th International Conference, ICVNS 2021, Abu Dhabi, United Arab Emirates, 21–25 March, 2021, Proceedings 8, pp. 71–82. Springer International Publishing (2021)
Bish, E.K.: A multiple-crane-constrained scheduling problem in a container terminal. Eur. J. Oper. Res. 144(1), 83–107 (2003)
Brimberg, J., Urošević, D., Mladenović, N.: Variable neighborhood search for the vertex weighted k-cardinality tree problem. Eur. J. Oper. Res. 171(1), 74–84 (2006)
Cheng, T., Kravchenko, S.A., Lin, B.M.: Complexity of server scheduling on parallel dedicated machines subject to fixed job sequences. J. Oper. Res. Soc. 1–4 (2020)
Diana, R.O.M., de Souza, S.R.: Analysis of variable neighborhood descent as a local search operator for total weighted tardiness problem on unrelated parallel machines. Comput. Oper. Res. 117, 104886 (2020)
Duarte, A., Mladenović, N., Sánchez-Oro, J., Todosijević, R.: Variable Neighborhood Descent. In: Martí, R., Panos, P., Resende, M.G. (eds.) Handbook of heuristics, pp. 1–27. Springer International Publishing, Cham (2016)
El Idrissi, A., Benbrahim, M., Benmansour, R., Duvivier, D.: Greedy heuristics for identical parallel machine scheduling problem with single server to minimize the makespan. In: MATEC Web of Conferences, vol. 200, p. 00001. EDP Sciences (2018)
Elidrissi, A., Benbrahim, M., Benmansour, R., Duvivier, D.: Variable neighborhood search for identical parallel machine scheduling problem with a single server. In: International Conference on Variable Neighborhood Search, pp. 112–125. Springer (2019)
Elidrissi, A., Benmansour, R., Benbrahim, M., Duvivier, D.: Mathematical formulations for the parallel machine scheduling problem with a single server. Int. J. Prod. Res. 59(20), 6166–6184 (2021)
Graham, R.L., Lawler, E.L., Lenstra, J.K., Kan, A.R.: Optimization and approximation in deterministic sequencing and scheduling: a survey. In: Annals of Discrete Mathematics, vol. 5, pp. 287–326. Elsevier (1979)
Hamzadayi, A., Yildiz, G.: Modeling and solving static \(m\) identical parallel machines scheduling problem with a common server and sequence dependent setup times. Comput. Ind. Eng. 106, 287–298 (2017)
Hansen, P., Mladenović, N., Pérez, J.A.M.: Variable neighbourhood search: methods and applications. Ann. Oper. Res. 175(1), 367–407 (2010)
Hansen, P., Mladenović, N., Todosijević, R., Hanafi, S.: Variable neighborhood search: basics and variants. EURO J. Comput. Optim. 5(3), 423–454 (2017)
Hasani, K., Kravchenko, S.A., Werner, F.: Simulated annealing and genetic algorithms for the two-machine scheduling problem with a single server. Int. J. Prod. Res. 52(13), 3778–3792 (2014)
Hasani, K., Kravchenko, S.A., Werner, F.: Minimizing the makespan for the two-machine scheduling problem with a single server: two algorithms for very large instances. Eng. Optim. 48(1), 173–183 (2016)
Hsu, P.Y., Lo, S.H., Hwang, H.G., Lin, B.M.: Scheduling of anaesthesia operations in operating rooms. Healthcare 9(6), 640 (2021)
Huang, S., Cai, L., Zhang, X.: Parallel dedicated machine scheduling problem with sequence-dependent setups and a single server. Comput. Ind. Eng. 58(1), 165–174 (2010)
Jiang, Y., Yu, F., Zhou, P., Hu, J.: Online algorithms for scheduling two parallel machines with a single server. Int. Trans. Oper. Res. 22(5), 913–927 (2015)
Jiang, Y., Zhou, P., Wang, H., Hu, J.: Scheduling on two parallel machines with two dedicated servers. ANZIAM J. 58(3–4), 314–323 (2017)
Kim, M.Y., Lee, Y.H.: MIP models and hybrid algorithm for minimizing the makespan of parallel machines scheduling problem with a single server. Comput. Oper. Res. 39(11), 2457–2468 (2012)
Koulamas, C.P.: Scheduling two parallel semiautomatic machines to minimize machine interference. Comput. Oper. Res. 23(10), 945–956 (1996)
Kravchenko, S.A., Werner, F.: Scheduling on parallel machines with a single and multiple servers. Otto-von-Guericke-Universitat Magdeburg 30(98), 1–18 (1998)
Mjirda, A., Todosijević, R., Hanafi, S., Hansen, P., Mladenović, N.: Sequential variable neighborhood descent variants: an empirical study on the traveling salesman problem. Int. Trans. Oper. Res. 24(3), 615–633 (2017)
Mladenović, N., Hansen, P.: Variable neighborhood search. Comput. Oper. Res. 24(11), 1097–1100 (1997)
Mladenović, N., Sleptchenko, A., Sifaleras, A., Omar, M. (eds.): Variable neighborhood search. In: 8th International Conference, ICVNS 2021, Abu Dhabi, United Arab Emirates, 21–25 October, 2021, Revised Selected Papers, LNCS, vol. 12559. Springer, Cham (2021)
Ou, J., Qi, X., Lee, C.Y.: Parallel machine scheduling with multiple unloading servers. J. Sched. 13(3), 213–226 (2010)
Silva, J.M.P., Teixeira, E., Subramanian, A.: Exact and metaheuristic approaches for identical parallel machine scheduling with a common server and sequence-dependent setup times. J. Oper. Res. Soc. 72(2), 444–457 (2021)
Torjai, L., Kruzslicz, F.: Mixed integer programming formulations for the biomass truck scheduling problem. CEJOR 24(3), 731–745 (2016)
Werner, F., Kravchenko, S.A.: Scheduling with multiple servers. Autom. Remote Control 71(10), 2109–2121 (2010)
Xie, X., Li, Y., Zhou, H., Zheng, Y.: Scheduling parallel machines with a single server. In: Proceedings of 2012 International Conference on Measurement, Information and Control, vol. 1, pp. 453–456. IEEE (2012)
Funding
Open access funding provided by HEAL-Link Greece.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix 1
Detailed results of MIP2, USWT, LSWT, GVNS I, GVNS II and GVNS III for each instance
This appendix reports the detailed results of all the approaches studied in this paper, over the entire data set, for instances with \(m \in \{2,3,5\}\) and \(n \in \{50, 100, 250\}\), in Tables 8, 9, and 10, respectively.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Elidrissi, A., Benmansour, R. & Sifaleras, A. General variable neighborhood search for the parallel machine scheduling problem with two common servers. Optim Lett 17, 2201–2231 (2023). https://doi.org/10.1007/s11590-022-01925-2