A VARIABLE NEIGHBORHOOD SEARCH ALGORITHM FOR SOLVING THE SINGLE MACHINE SCHEDULING PROBLEM WITH PERIODIC MAINTENANCE

In this paper we address a single machine scheduling problem in which the machine has to undergo periodic preventive maintenance. The objective is to minimize the weighted sum of the completion times. This criterion is regarded as one of the most important objectives in practice, but has not been studied so far for the considered problem. Since the problem is NP-hard, and exact methods based on the mathematical model proposed in the literature only handle small instances, we propose a General Variable Neighborhood Search (GVNS) algorithm to obtain near-optimal solutions for large-sized instances in a small amount of computational time. Mathematics Subject Classification. 90–XX, 68–XX. Received April 30, 2017. Accepted June 27, 2018.

of failure is very high and preventive maintenance, with a much lower cost, is applied to minimize the risk of failure. Also, it can be considered that preventive maintenance is carried out to meet a regulatory requirement or that the machine must undergo, at regular intervals and for practical reasons, a cleaning, cooling or a tool change. On that basis, we assume that the risk of a major breakdown is greatly reduced when periodic preventive maintenance is applied. Consequently, the random minor failures that may occur on the machine have no impact on the execution of the jobs. In addition, they have such negligible duration that we do not take them into account later. For this reason, as part of our study we consider that the risk of failures is non-existent or at least negligible thanks to an efficient preventive maintenance strategy.
Based on the three-field notation α|β|γ known as the Graham triplet [13], this problem is denoted 1|pm|Σ_{i=1}^{n} w_i C_i. The first field means that there is only one machine. The second one specifies that the periodic maintenance (pm) aspect is considered. Finally, the last field contains the problem's objective function, where C_i is the completion time of job J_i and w_i its weight. Many works dealing with periodic maintenance on a single machine have been published, but to the best of our knowledge none has considered such a problem with the objective of minimizing the weighted sum of completion times. This criterion is regarded as one of the most important objectives in practice, as stated by Ashour in his book [3]. Indeed, it can be considered an auxiliary function measuring the total work-in-process inventory cost in a manufacturing system. The weights can also quantify the priority/urgency of a job based on a possible shortage at a downstream level.
The studied problem is NP-hard, since Chen et al. [25] proved that the special case of minimizing the sum of completion times with periodic maintenance is NP-hard. We provide hereafter a brief overview of previous related works. Lee and Liman [22] investigated a non-preemptive single machine scheduling problem to minimize the sum of job flow times subject to scheduled maintenance. They proved the NP-hardness of the problem and showed that the Shortest Processing Time (SPT) rule has a worst-case error bound of 2/7. Liao and Chen [25] proposed to minimize the maximum tardiness; they developed a heuristic and a branch-and-bound algorithm to solve the problem. Lee et al. [23] developed an efficient heuristic to minimize the number of tardy jobs. Ji et al. [19] considered minimizing the makespan; they showed that the worst-case ratio of the LPT rule is 2, and proved that there is no polynomial-time approximation algorithm with a worst-case ratio of less than 2 unless P = NP. Chen et al. [7] minimized the total flow time; they proved some theorems and developed a branch-and-bound algorithm to find an optimal schedule. Low et al. [26] presented a particle swarm optimization algorithm, and Ebrahimy et al. [11] developed a dynamic genetic algorithm to solve that problem with the aim of minimizing the makespan. Wen-Jinn et al. [8] proposed a heuristic to solve the single-machine scheduling problem with periodic maintenance in the preemptive case; they minimized the total flow time and the maximum tardiness simultaneously. Benmansour et al. [6] investigated the single machine scheduling problem with periodic maintenance in order to minimize the weighted sum of maximum earliness and maximum tardiness costs; they proved the NP-hardness of this problem and proposed an efficient heuristic to solve it. Roux et al.
[31] proposed a hybridization of the Nelder-Mead optimization algorithm with a simulation multi-model able to integrate production aspects and maintenance strategies. Su et al. [33] considered four single machine scheduling problems with variable machine maintenance; the objectives are mean lateness, maximum tardiness, total flow time and mean tardiness, respectively, and they proposed a polynomial-time exact algorithm for each of the four criteria. In [10], Wei-Wei et al. proposed to minimize the makespan under flexible maintenance and release date constraints. They proved that the problem can be solved in polynomial time with the Earliest Release Date (ERD) rule in the resumable case. For the non-resumable case, they proposed a mixed integer linear programming (MILP) model, a heuristic named ERD-LPT and a branch-and-bound algorithm. Recently, a scheduling problem with multiple unavailability constraints was studied by Yazdani et al. [35]; the considered criterion is the sum of maximum earliness and tardiness of jobs, and they proposed a mathematical model and a variable neighborhood search algorithm to solve the problem. For problems with setup times, Angel-Bello et al. [1] proposed a mixed integer programming model for the single machine scheduling problem with availability constraints and sequence-dependent setup costs. Later, Luo et al. [27] investigated a single-machine scheduling problem with workload-dependent maintenance duration, with the aim of minimizing the total weighted completion time; they proposed a (2+ε)-approximation algorithm and a fully polynomial-time approximation scheme.
The remainder of the paper is organized as follows: The problem is described in Section 2. Three lower bounds based on job splitting are presented in Section 3. In Section 4 we describe the proposed adaptation of General Variable Neighborhood Search (GVNS) metaheuristic to our scheduling problem. Computational experiments and the calibration tests of GVNS parameters are given in Section 5. Section 6 concludes the study with a discussion and future extensions.

Problem description
The addressed problem can be stated as follows: let N = {1, 2, . . . , n} be a set of n jobs to be processed non-preemptively on a single machine. The machine has to undergo periodic preventive maintenance during the scheduling horizon. We assume that the machine is new at the beginning of the schedule (t = 0) and that each preventive maintenance action renews it. The period and the duration of maintenance are known in advance, and the constraints relating to maintenance are defined as follows: the machine, typically a critical machine of the shop, has to undergo a preventive maintenance of duration δ at each time point kT − δ (k ∈ ℕ*), where T is the interval between two consecutive maintenance activities. The periodic maintenance thus induces several unavailability periods. The studied problem 1|pm|Σ_{i=1}^{n} w_i C_i can be formulated as a Mixed Integer Linear Programming (MILP) model.
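For a given job sequence, the objective can be evaluated in a single pass by packing jobs into the availability windows [kT, (k+1)T − δ). The following Python sketch illustrates this (the authors' implementation is in Java; function names are ours, and we assume every processing time fits within a window, as the instance generation guarantees):

```python
def completion_times(seq, p, w, T, delta):
    """Completion time of each job of `seq` (a permutation of job indices)
    under periodic maintenance: the machine is available on the windows
    [kT, (k+1)T - delta) and down during [(k+1)T - delta, (k+1)T).
    Assumes p[j] <= T - delta for every job j."""
    t = 0
    C = {}
    for j in seq:
        k = t // T                          # index of the current window
        if t + p[j] > (k + 1) * T - delta:
            t = (k + 1) * T                 # job does not fit before the next
                                            # maintenance: wait until it ends
        t += p[j]
        C[j] = t
    return C

def objective(seq, p, w, T, delta):
    # weighted sum of completion times for 1|pm|sum w_i C_i
    C = completion_times(seq, p, w, T, delta)
    return sum(w[j] * C[j] for j in seq)
```

With T = 10 and δ = 2, a job of length 2 that would end at time 9 cannot start at time 7, since the window closes at time 8; it is delayed to time 10.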
Due to the complexity of the considered problem, only small size instances can be solved optimally by exact methods such as branch and bound or dynamic programming. This is why we propose a metaheuristic to solve large instances of the problem.

Lower bound based on job splitting
Job splitting is a general technique to obtain lower bounds. This concept was introduced by Posner [30] and used later by Belouadah et al. [4]. They proved that if a job J_i is split into two jobs J_j and J_k with processing times p_i = p_j + p_k and weights w_i = w_j + w_k, scheduled contiguously in that order, the total weighted completion time decreases by p_k·w_j. This method was used by Kacem et al. [20] to derive lower bounds for a single machine scheduling problem with a single unavailability period during the scheduling horizon.
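Posner's observation can be checked numerically on a small example without maintenance (a Python sketch; the helper name is ours): splitting a job J_i into contiguous pieces J_j and J_k with p_i = p_j + p_k and w_i = w_j + w_k lowers the total weighted completion time by exactly p_k·w_j.

```python
def wct(jobs):
    """Total weighted completion time of a list of (p, w) pairs
    processed back-to-back in the given order, starting at t = 0."""
    t = total = 0
    for p, w in jobs:
        t += p
        total += w * t
    return total

# original schedule: a job (p=5, w=4) after a job (p=2, w=1)
before = wct([(2, 1), (5, 4)])
# split (5, 4) into (3, 3) followed by (2, 1): 5 = 3 + 2 and 4 = 3 + 1
after = wct([(2, 1), (3, 3), (2, 1)])
# the decrease equals p_k * w_j = 2 * 3
assert before - after == 2 * 3
```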
Let P denote the original problem with no split jobs, and let P_1 be the corresponding problem in which each job J_i is split into n_i pieces. Each piece o_{k,i} has a processing time p_{k,i} and a weight w_{k,i} (i ∈ N, k ∈ {1, . . . , n_i}) such that p_i = Σ_{k=1}^{n_i} p_{k,i} and w_i = Σ_{k=1}^{n_i} w_{k,i}; the pieces o_{k,i} are constrained to be scheduled contiguously. Let σ* be an optimal schedule for P and σ*_1 the corresponding optimal schedule for P_1. Belouadah et al. [4] proved the following relation: f(σ*) = f_1(σ*_1) + C_BRK, where f and f_1 denote the total weighted completion times of P and P_1, respectively. As suggested by Belouadah et al. [4], C_BRK may be thought of as the cost of breaking all the jobs J_i into n_i pieces.
By relaxing the contiguity constraint in problem P_1, we obtain a relaxed problem denoted P_2. Let σ*_2 be an optimal schedule for problem P_2. Since P_2 is a relaxation of P_1, Belouadah et al. [4] showed that f_2(σ*_2) ≤ f_1(σ*_1), and therefore f_2(σ*_2) + C_BRK ≤ f(σ*), so that f_2(σ*_2) + C_BRK is a valid lower bound. To get tight lower bounds, the jobs should be split in such a way that the resulting problem P_2 corresponds to a special case for which an optimal schedule σ*_2 is known; this is particularly the case when all pieces have identical processing times. In this context, we consider three special splits.
First split: The first lower bound (LB1) follows from a split for which the optimal solution σ*_2 of the resulting problem P_2 is known; hence LB1 = f_2(σ*_2) + C_BRK.
Second split: The second lower bound (LB2) is obtained by splitting each job J_i (i ∈ N) into two pieces such that p_{1,i} = p_{2,i} = p/2 and w_{1,i} = w_{2,i} = w/2, with p = min_{1≤i≤n} p_i and w = min_{1≤i≤n} w_i. Let σ*_2 denote the optimal solution of the resulting problem; hence LB2 = f_2(σ*_2) + C_BRK.
Third split: P_2 is efficiently solvable when all processing times are set to one unit of time (p_{k,i} = 1), which means that each job J_i (i ∈ N) is split into p_i pieces, its weight being divided among them (w_i = Σ_{k=1}^{p_i} w_{k,i}). Let σ*_2 denote the optimal solution of the resulting problem; hence LB3 = f_2(σ*_2) + C_BRK.

General Variable Neighborhood search
To deal with large-sized instances, we need to develop heuristics which are able to find very good solutions in a small or at least reasonable amount of time. With this aim in mind, in this section a General Variable Neighborhood Search is proposed.
Variable neighborhood search (VNS) is a flexible framework for building heuristics, proposed by Mladenović and Hansen in 1997 [29]. It systematically changes the neighborhood during the search in order to avoid being trapped in a local optimum.
The basic VNS combines two search approaches: a stochastic "shaking" step, which perturbs the incumbent solution by jumping to its kth neighborhood, and a deterministic local search. When the local search is replaced by Variable Neighborhood Descent (VND), which generalizes local search by exploring several neighborhood structures successively instead of a single one, the resulting algorithm is called General VNS (GVNS). VND is a deterministic heuristic: it starts from a feasible solution and carries out a series of neighborhood searches through operators, determining a best solution in each neighborhood. It has been successfully applied to various combinatorial optimization problems [15,16,28].
GVNS has received much attention and showed good performances compared to other VNS variants. It has been applied to many optimization problems [18,24,34].
The proposed GVNS is described in the following subsections.

Neighborhood structures
Five neighborhoods are proposed in the following paragraphs:
Reverse two consecutive (N_1(π)): This neighborhood consists of all solutions obtained from solution π by swapping two consecutive jobs of π. The complexity of this neighborhood structure is Θ(n).
Swap (N_2(π)): This neighborhood consists of all solutions obtained from solution π by swapping all pairs of jobs. The complexity of this neighborhood structure is Θ(n²).
Insertion (N_3(π)): This neighborhood consists of all solutions obtained from solution π by inserting each job of π at position k (1 ≤ k ≤ n). The complexity of this neighborhood structure is Θ(n).
Restart (N_4(π)): This neighborhood consists of all solutions obtained from solution π in such a way as to start from each job π_i and finish with the job π_{i−1} (i ≥ 2). The complexity of this neighborhood structure is Θ(n).
2-opt (N_5(π)): The 2-opt operator is the most classical one for solving the traveling salesman problem [17]. It removes two edges from the circuit and reconnects the two paths created. In our implementation, the 2-opt operator selects two jobs π_i and π_j from the current sequence, then constructs a new sequence by deleting the connection between π_i and its successor π_{i+1} and the connection between π_j and its successor π_{j+1}, and then connecting π_i with π_j and π_{i+1} with π_{j+1}. As a result, the partial sequence between π_{i+1} and π_{j+1} is reversed. The complexity of this neighborhood structure is Θ(n²).
These neighborhood structures have been widely used in the literature to solve different combinatorial optimization problems, especially those represented by permutations [14].
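The Swap, Insertion and 2-opt moves above can be sketched as follows on a permutation of job indices (a Python illustration with 0-based positions; function names are ours):

```python
def swap(pi, i, j):
    # N2: exchange the jobs at positions i and j
    s = pi[:]
    s[i], s[j] = s[j], s[i]
    return s

def insertion(pi, i, k):
    # N3: remove the job at position i and reinsert it at position k
    s = pi[:]
    s.insert(k, s.pop(i))
    return s

def two_opt(pi, i, j):
    # N5 (i < j): reconnect pi_i with pi_j and pi_{i+1} with pi_{j+1},
    # which reverses the partial sequence at positions i+1 .. j
    return pi[:i + 1] + pi[i + 1:j + 1][::-1] + pi[j + 1:]
```

For instance, two_opt([1, 2, 3, 4, 5], 0, 3) reverses the stretch between positions 1 and 3, yielding [1, 4, 3, 2, 5].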

Initial solution
Definition: We first recall the definition of the WSPT (Weighted Shortest Processing Time) rule. WSPT schedules the jobs in non-decreasing order of the ratio p_i/w_i. It was introduced by Smith in 1956 [32], who proved that the WSPT rule optimally solves the scheduling problem 1||Σ_{i=1}^{n} w_i C_i. Since GVNS is a trajectory-based metaheuristic, we need to start from a given solution. Due to the impact of the WSPT rule on the quality of the solution, we design the initial solution by first applying this rule and then a local search based on swapping two jobs. The choice of this neighborhood structure is motivated by the preliminary tests presented in Section 5.

Algorithm 1. Initial Solution.
Data: Max_Iter. Result: π_best.
π ← rank the jobs according to the WSPT rule.
The proposed local search is defined as follows: in each iteration i, we search for the best solution in the neighborhood N_2(π) of the best solution found in iteration i−1. Max_Iter is the maximum number of iterations, and argmin_{y ∈ N_2(π)} f(y) denotes the solution y ∈ N_2(π) for which f(y) attains its minimum.
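Algorithm 1 can be sketched as follows (a Python illustration; names are ours, and we stop early when the swap neighborhood contains no improving solution, which is one reasonable reading of the description):

```python
def initial_solution(p, w, f, max_iter):
    """Sketch of Algorithm 1: WSPT ranking followed by a best-improvement
    local search in the swap neighborhood N2. f maps a sequence of job
    indices to its objective value."""
    n = len(p)
    best = sorted(range(n), key=lambda i: p[i] / w[i])  # WSPT rule
    for _ in range(max_iter):
        # argmin of f over N2(best): every pairwise swap of the incumbent
        cand = min(
            (best[:i] + [best[j]] + best[i + 1:j] + [best[i]] + best[j + 1:]
             for i in range(n) for j in range(i + 1, n)),
            key=f,
        )
        if f(cand) >= f(best):
            break          # no improving neighbor in N2(best)
        best = cand
    return best
```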

Shaking procedure
Our shaking procedure consists of randomly generating a neighboring solution π′ from the current solution π using the 2-opt operator. The random jumps in the search space induced by the path reversal help to escape from local optima and contribute to diversifying the search. Preliminary tests showed that, for our problem, a shaking procedure with more operators reduces the quality of the results. In order to tune the strength of the diversification induced by the 2-opt operator, the shaking move is repeated L_max times. This parameter is fixed on the basis of the preliminary tests presented in Section 5.
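The shaking step can be sketched as repeated random 2-opt reversals (a Python illustration; function names and the 0-based positions are ours):

```python
import random

def shake(pi, l_max, rng=random):
    """Sketch of the shaking procedure: apply the 2-opt move l_max times,
    each time reversing a randomly chosen partial sequence of the
    incumbent. Returns a new permutation; the input is left unchanged."""
    s = pi[:]
    n = len(s)
    for _ in range(l_max):
        i, j = sorted(rng.sample(range(n), 2))
        s[i + 1:j + 1] = reversed(s[i + 1:j + 1])
    return s
```

Passing a seeded random.Random instance makes the perturbation reproducible across runs.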

VND procedure
A complete local search, designed as a VND, is applied to each solution returned by the shaking procedure. In this work, we fixed the number of neighborhoods to three (k_max = 3), as advised in the related literature [12]. To explore possible solutions efficiently, it is important to establish the "best" order of the three neighborhood structures defined above; to this end, we performed a preliminary test. Six different orderings (a_{i,j}) of the three neighborhood structures are listed in Table 1. The proposed VND operates a changing-neighborhood strategy, which consists in exploring a complete neighborhood with the same operator as long as there is an improvement, and switching to another operator otherwise. The pseudo-codes of the VND and GVNS proposed in this work are presented as Algorithms 2 and 3.
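The changing-neighborhood strategy described above can be sketched as follows (a Python illustration with 0-based positions; names are ours, and the neighborhood generators are simplified versions of N_2, N_3 and N_5):

```python
def n_swap(pi):
    # N2: all pairwise swaps
    n = len(pi)
    for i in range(n):
        for j in range(i + 1, n):
            s = pi[:]
            s[i], s[j] = s[j], s[i]
            yield s

def n_insert(pi):
    # N3: remove a job and reinsert it at another position
    n = len(pi)
    for i in range(n):
        for k in range(n):
            if k != i:
                s = pi[:]
                s.insert(k, s.pop(i))
                yield s

def n_two_opt(pi):
    # N5: reverse the partial sequence at positions i+1 .. j
    n = len(pi)
    for i in range(n - 1):
        for j in range(i + 1, n):
            yield pi[:i + 1] + pi[i + 1:j + 1][::-1] + pi[j + 1:]

def vnd(pi, f, neighborhoods=(n_swap, n_insert, n_two_opt)):
    """Sketch of the VND step: keep exploring the same neighborhood as
    long as it improves the incumbent, otherwise switch to the next
    operator; stop when the last operator fails to improve."""
    k = 0
    while k < len(neighborhoods):
        best = min(neighborhoods[k](pi), key=f)
        if f(best) < f(pi):
            pi = best      # improvement: stay with the same operator
        else:
            k += 1         # no improvement: change neighborhood structure
    return pi
```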

Computational results
Since there are no publicly available benchmark instances for the problem under study in the related literature, we carried out experiments with randomly generated instances. The processing times p_i and the weights w_i are uniformly distributed over [20, 100] and [1, n], respectively. The period of maintenance is generated as follows: to ensure feasibility of the generated instances, T is chosen greater than the largest processing time p_i; also, to avoid solutions with only one job per batch, we choose T greater than 4 times the average of the processing times. To get instances with different levels of tightness regarding the duration of maintenance δ, the latter is set to ⌊2·T/100 + 0.5⌋, which is denoted by δ = 2%T. Both the GVNS and the lower bounds are coded in Java. All the experiments described in this section have been carried out on an Intel Core i7 at 2.50 GHz with 16 GB RAM, running Linux.
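The instance generation scheme can be sketched as follows (a Python illustration; the paper does not specify the exact distribution of T above its lower bound, so taking the smallest integer satisfying both constraints is our assumption, as is rounding δ to the nearest integer):

```python
import random

def generate_instance(n, rng=random):
    """Random instance following the scheme described above."""
    p = [rng.randint(20, 100) for _ in range(n)]   # processing times
    w = [rng.randint(1, n) for _ in range(n)]      # weights
    # T must exceed both max(p) and 4 * average(p); take the smallest
    # integer doing so (one simple choice)
    T = max(max(p), (4 * sum(p)) // n) + 1
    # delta = 2%T, rounded to the nearest integer (assumed rounding)
    delta = int(2 * T / 100 + 0.5)
    return p, w, T, delta
```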
The computational experiments were conducted using the MILP presented in [21]. In order to analyze the performances of GVNS and the lower bounds, we include in this paper the results obtained for the MILP from n = 2 to 20 using the CPLEX solver. Due to the NP-hardness of the considered problem, it takes at least 20 min to obtain an optimal solution for each instance of size greater than 14.
Note that, in all the tables presented in this section and for each instance size, we report the average value over fifteen instances. All the tests (except for the MILP) presented in this work are done for instances of sizes n = 2, 3, 4, . . . , 50, 60, 70, 80, 90 and 100. However, to save space, we do not always present the results for all instance sizes.

Local searches in the proposed initial solution
In this section, we compare local searches based on the five predefined neighborhood structures (i.e. N_1, N_2, N_3, N_4, N_5) in order to determine the best one to use within Algorithm 1, which generates the initial solution. Each local search is executed with L_max set to 30. For each operator, we start from the same solution, returned by the WSPT rule. The summarized results are reported in Table 2.
MinV gives the minimum value among the five results (Value_{N_i}, i ∈ {1, . . . , 5}) returned by the tested local searches for each instance of size n. As reported in Table 2, the Swap-based local search performs best overall; this justifies our choice of the Swap operator in the local search proposed within the initial solution algorithm.

Neighborhoods order within VND
It is important to establish the "best" order of the neighborhood structures, since it affects the efficiency of the GVNS algorithm. To do this, we performed a set of preliminary experiments. The different orderings are listed in Table 1 of Section 4. Based on the results observed in the previous paragraph, we decided to implement the complete neighborhood structures N_5, N_3 and N_2, based respectively on the 2-opt, Insertion and Swap operators. The summarized results are reported in Table 3. To test the performance of each order a_{i,j} presented in Table 1, we compared its results with minV. The entries Value_{a_{i,j}} represent the value returned by the proposed VND. As seen in Table 3, the six tests are very close to each other. It follows that, for our problem, VND is not very sensitive to the exploration order of the neighborhood structures. Nevertheless, the results show that the order a_{1,1} (Swap, Insertion, 2-opt) is better than the others, since it returns the minimum value, or a value very close to the minimum, for the large instances. In our proposed GVNS we therefore use this order of neighborhoods in the VND heuristic.

Parameters calibration of GVNS metaheuristic
In this subsection, we examine the influence of the parameter Max_Iter in the initial solution and of L_max in the GVNS. To do this, we tested different values, Max_Iter from 1 to 180 and L_max from 5 to 80. Note that Max_Iter = 0 corresponds to the value returned by the WSPT rule alone.
L_max can be understood as the number of times the shaking move is applied within the proposed GVNS. The obtained results are presented in Tables 4 and 5 for fifteen instances of size n = 100; the large size of these instances reveals the performance of the initial solution for given values of Max_Iter and L_max. For each value of Max_Iter and L_max, the average solution value found and the average time spent upon reaching these solutions are presented. As GVNS is a stochastic heuristic, we run it 10 times for each instance. Since the coefficient of variation (i.e. the relative standard deviation) is smaller than 0.1%, we report only the best of these 10 replicates.
From the results presented in Tables 4 and 5, it follows that the proposed initial solution algorithm is very sensitive to the value of the Max_Iter parameter. It turns out that the initial solution algorithm returns the best solution values when Max_Iter is set to 140, while consuming the least amount of time (0.83 s). A similar conclusion can be drawn for the L_max parameter, as the results returned for L_max ≥ 40 are very close. Consequently, for the rest of the testing, Max_Iter is set to 140 and L_max to 40.

Lower bounds performances
In this section, we compare the computational results of the three proposed lower bounds based on job splitting with those of the linear bound LB_RP, obtained by relaxing the integrality constraints on x_{i,j} in the mixed integer programming model. Relaxing the integrality constraints on y_{i,j} alone is not reported here because it turned out to be of little use (no significant decrease in computational time), and relaxing x_{i,j} and y_{i,j} simultaneously does not provide a good lower bound quality. Table 6 shows the performance of each lower bound, obtained by calculating the ratio between the lower bound LB_i and the optimal value of the MILP as follows: Ratio_i = 100 · (LB_i / Value_MILP). Entries in the last column represent the tightest value returned among the four lower bounds.
The calculated ratios clearly show that LB3 is much tighter than LB1 and LB2. However, the larger the instances, the greater the gap between the splitting-based lower bounds (LB1, LB2, LB3) and the optimal value. The lower bound LB_RP turns out to be better than the others when the instance size exceeds twelve (n > 12).

GVNS performances
In this section, experiments are carried out to evaluate the performance of the proposed GVNS. Due to its stochastic aspect, we run the GVNS algorithm 10 times. As already stated in Section 5.2, the coefficient of variation is smaller than 0.1%; consequently, we only report the best of the 10 replicates. The results of the comparisons between the MILP and GVNS, and between GVNS and LB_best, are presented in Table 7. The entries in columns three and five represent, respectively, the ratio between the optimal value returned by the MILP (Value_MILP) and the objective function value returned by the GVNS (Value_GVNS), and the ratio between the lower bound (Value_LBbest) and Value_GVNS. The former is calculated as Ratio_MH = 100 · (Value_MILP / Value_GVNS). The three last columns report the time consumed upon reaching the values returned by the MILP, the GVNS and the lower bounds, respectively. It follows that GVNS is very fast (an average of 4.22 s for instances of size 100). For instances of size n ≤ 20, the GVNS is very efficient in comparison with the MILP. The ratio Ratio_LH becomes smaller for instance sizes ranging from n = 11 to n = 21. This is certainly due to the lower bound performances; indeed, the gap is rather large for these instance sizes (see Tab. 6). This shows that GVNS is able to reach high-quality solutions in a very small amount of time.
The relaxed problem is easier to solve for small instances than for large ones. On top of this, the computation time is limited to 20 min in both cases, so it is likely that, for large instances, the relaxed MILP struggles to improve (i.e. to minimize) its solution. Hence the value returned by the relaxed MILP remains quite large and relatively close to the solution returned by GVNS. More precisely, when n > 14, the best lower bound LB_best is equal to the lower bound LB_RP based on the relaxation of the integrality constraints of the MILP model. Indeed, via the relaxed MILP, CPLEX solves the problem optimally within the 20 min time limit for instances with n < 50. When n ≥ 50, the returned solutions are not necessarily optimal, and more than 20 min would be required to reach optimality. Therefore, the value returned by the relaxed MILP is closer to the GVNS value than to the optimal solution of the relaxed MILP (LB_RP). Empirical experiments on those instances show that the gap reported by the CPLEX solver remains around 90% (on average over the 15 instances) after 20 min. This explains why the calculated gap (Col. 5 in Tab. 7) is so high for the large instances.

Conclusion
In this paper we studied the single machine scheduling problem with periodic maintenance, with the objective of minimizing the weighted sum of completion times. We proposed a GVNS algorithm to solve this NP-hard problem. The proposed algorithm was tested and compared with a MILP model for small instance sizes, and with four lower bounds for instance sizes greater than 20, three of them based on job splitting and one based on the relaxation of integrality constraints. The computational results show that the proposed GVNS is very efficient both in terms of objective function value and computational time. In future work, it would be interesting to propose additional lower bounds, to generate more test instances, and to compare the performance of GVNS with other metaheuristics after generalizing the problem with more realistic constraints such as setup times, release times or failures. Another extension of this research would be to consider the interval between two consecutive maintenance activities as a decision variable of the problem.