FIX-AND-OPTIMIZE METAHEURISTICS FOR MINMAX REGRET BINARY INTEGER PROGRAMMING PROBLEMS UNDER INTERVAL UNCERTAINTY

Abstract. The Binary Integer Programming problem (BIP) is a mathematical optimization problem, with linear objective function and constraints, in which the domain of all variables is {0, 1}. In BIP, every variable is associated with a deterministic cost coefficient. The Minmax regret Binary Integer Programming problem under interval uncertainty (M-BIP) is a generalization of BIP in which the cost coefficients associated with the variables are not known in advance, but are assumed to be bounded by intervals. The objective of M-BIP is to find a solution that possesses the minimum maximum regret among all feasible solutions of the problem. In this paper, we show that the decision version of M-BIP is Σ^p_2-complete. Furthermore, we tackle M-BIP with both exact and heuristic algorithms. We extend three exact algorithms from the literature to M-BIP and propose two fix-and-optimize heuristic algorithms. Computational experiments, performed on the Minmax regret Weighted Set Covering problem under Interval Uncertainties (M-WSCP) as a test case, indicate that one of the exact algorithms outperforms the others. Furthermore, they show that the proposed fix-and-optimize heuristics, which can be easily employed to solve any minmax regret optimization problem under interval uncertainty, are competitive with ad-hoc algorithms for the M-WSCP.


Introduction
The Binary Integer Programming problem (BIP) is a mathematical optimization problem, with linear objective function and constraints, in which the domain of all variables is {0, 1}. BIP can be formulated by the objective function (1) and the constraints (2) and (3): minimize F(x, c) = Σ_i c_i x_i subject to Ax ≥ b and x ∈ {0, 1}^n, where c is an n-dimensional vector of cost coefficients and the matrix A and the vector b define the linear constraints. In M-BIP, each cost coefficient is only known to lie in an interval c_i ∈ [l_i, u_i], and a scenario s ∈ Γ assigns a value c_i^s ∈ [l_i, u_i] to every coefficient. The regret of a solution x in a scenario s is F(x, s) − F(x^s, s), where x^s denotes an optimal solution in scenario s. For a minimization problem, the regret of x is maximized by the scenario s^x that sets c_i^{s^x} = u_i if x_i = 1 and c_i^{s^x} = l_i otherwise, so the robustness cost of x is R(x) = F(x, s^x) − F(x^{s^x}, s^x).
We refer to this scenario as the worst-case scenario s^x induced by solution x.
It is worth noticing that computing F(x^{s^x}, s^x) is itself a BIP, as in this case the scenario s^x is constant. Therefore, the robustness cost of a solution x is computed by solving a single BIP in the scenario s^x. M-BIP aims at finding a solution with minimum robustness cost. This formulation can be linearized by replacing F(x^{s^x}, s^x) with a free variable ρ and adding a new set of linear constraints that bounds the value of ρ by the value of F(y, s^x) for every feasible solution y. The resulting M-BIP formulation (5)-(9) minimizes the regret subject to the original constraints Ax ≥ b and x ∈ {0, 1}^n, and has an exponentially large number of constraints (6), one for each feasible solution of the problem.

Proposition 1.6. The decision version of M-BIP is Σ^p_2-complete.

Proof. M-BIP is a generalization of the Minmax regret Binary Knapsack problem under interval uncertainty, whose decision version is known to be Σ^p_2-complete [29].
Therefore, M-BIP is harder to solve than classical NP-Hard problems, as its decision version does not belong to NP unless Σ^p_2 = NP [50,53]. This also means one cannot build a compact ILP formulation for M-BIP. Finally, it is also NP-Hard to compute the robustness cost of a single solution of M-BIP.
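To make the robustness cost concrete, the sketch below evaluates R(x) on a toy covering instance by full enumeration. All data are hypothetical, and the enumeration over {0, 1}^n merely stands in for the BIP solver one would use in practice (which is exactly why computing even a single regret is already hard).

```python
from itertools import product

# Tiny illustrative M-BIP instance (a set covering problem): rows of A are
# the covering constraints A x >= b.  All data here are hypothetical.
A = [[1, 0, 1, 0],   # element 0 is covered by subsets 0 and 2
     [1, 1, 0, 0],   # element 1 is covered by subsets 0 and 1
     [0, 1, 1, 1]]   # element 2 is covered by subsets 1, 2, and 3
b = [1, 1, 1]
l = [2, 3, 1, 1]     # lower cost bounds l_i
u = [4, 5, 6, 2]     # upper cost bounds u_i
n = len(l)

def feasible(x):
    return all(sum(a * xi for a, xi in zip(row, x)) >= rhs
               for row, rhs in zip(A, b))

def cost(x, s):                      # F(x, s): cost of x under scenario s
    return sum(si * xi for si, xi in zip(s, x))

def worst_scenario(x):               # s^x: u_i if x_i = 1, l_i otherwise
    return [u[i] if x[i] else l[i] for i in range(n)]

def regret(x):
    s = worst_scenario(x)
    best = min(cost(y, s) for y in product((0, 1), repeat=n) if feasible(y))
    return cost(x, s) - best         # robustness cost R(x)

# Minmax regret by full enumeration (only viable for toy instances).
opt = min((x for x in product((0, 1), repeat=n) if feasible(x)), key=regret)
```

On this instance the solution picking subsets 0 and 3 attains the minimum robustness cost, even though it is not optimal in every scenario.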
In this paper, we propose two Fix-and-Optimize heuristics (FO) for M-BIP that can also be applied to solve any minmax regret optimization problem under interval uncertainty. These are the first FO heuristics in the literature applied to a Σ^p_2-hard problem under interval uncertainty. They are inspired by the Fix-and-Optimize metaheuristic framework proposed by [37] for a variant of the Vehicle Routing Problem and successfully adapted to several other combinatorial optimization problems [20,27,30,38,48,54]. We evaluate the performance of these heuristics using the Minmax regret Weighted Set Covering problem under interval uncertainty (M-WSCP) [51] as a test case. It is one of the most studied minmax regret optimization problems under interval uncertainty [7,22,23,28].
The remainder of this paper is organized as follows. Related work is reviewed in Section 2. Algorithms for M-BIP are presented in Sections 3 and 4: the former shows exact algorithms for this problem, while the latter presents the Fix-and-Optimize heuristics. In the latter, we also show how a lower bound can be obtained by using the proposed heuristics. Then, Section 5 formally defines the M-WSCP and reports the computational experiments. Finally, the concluding remarks are drawn in the last section.

Related work
Minmax regret optimization problems under interval uncertainty have been widely studied in the literature, both from the theoretical point of view, where one is interested in determining a problem's complexity, and from the algorithmic point of view, where one is interested in proposing algorithms for solving it. A general result for minmax regret optimization problems under interval uncertainty is that the uncertain variant is at least as hard to solve as the deterministic version of the problem [43]. This is due to the fact that, for computing the minmax regret of a solution, one must solve the deterministic problem on its worst-case induced scenario, as presented in Section 1. Table 1 summarizes the complexity studies for several minmax regret optimization problems under interval uncertainty in the literature, displaying, for each problem, its known complexity. Several classic problems that are solvable in polynomial time, such as the minimum spanning tree problem and the shortest path problem, turn out to be NP-hard when considering the minmax regret objective function and interval uncertainty. Furthermore, classic problems that were known to be NP-hard remain NP-hard in their minmax regret optimization variant under interval uncertainty. Others become harder to solve, like the Binary Integer Programming problem and the Binary Knapsack problem.
The Minmax regret Weighted Set Covering problem under interval uncertainty was first studied by [51]. The authors proposed an integer linear programming formulation for this problem, which was solved by three algorithms based on Benders' decomposition. Several heuristics were also proposed for this problem. A Genetic Algorithm and a Hybrid Genetic Algorithm were proposed by [51], a scenario-based heuristic with a path-relinking strategy was proposed by [22], while a hybrid heuristic based on the Benders' decomposition algorithm and a scenario-based heuristic were developed by [23]. The latter heuristic was demonstrated to be the most efficient of them, see [23].
A fix-and-optimize heuristic was first used to solve minmax regret optimization problems by [20]. In that paper, the authors considered the Minmax regret Shortest Path Arborescence problem under interval uncertainty and proposed an ad-hoc fix-and-optimize heuristic. The proposed heuristic consists in solving several Minmax regret Shortest Path problems under interval uncertainty and uses their solutions to reduce the number of variables in the shortest path arborescence problem. Despite the encouraging results obtained by this heuristic, it cannot be extended to other minmax regret optimization problems under interval uncertainty.
In this research, we propose fix-and-optimize heuristics that can be easily employed to solve any minmax regret optimization problem under interval uncertainty. Unlike [20], they are based on (i) the linear programming relaxation of the problem; or (ii) the Scenario-based Heuristic (SBA) [23], a well-known heuristic that can be used to solve any problem within this class.

Exact algorithms
The M-BIP formulation (5)-(9) has an exponential number of constraints (6). Enumerating all these constraints is a #P-Complete problem [26], since there is one constraint for each feasible solution of the problem. Therefore, this formulation cannot be explicitly solved by commercial BIP solvers, and more sophisticated methods are needed. Thus, we present below three exact algorithms for solving M-BIP.

Benders-like Decomposition (BLD)
The Benders-like Decomposition algorithm (BLD) relies on a cutting plane algorithm inspired by Benders' Decomposition [14]. It differs from the classic algorithm in that the BLD subproblem is not a linear program. BLD decomposes M-BIP into two problems: (i) the Master Problem (MP); and (ii) the subproblem (SP). The MP consists in solving the formulation defined by the objective function (5) and the constraints (7)-(10), i.e., using only a subset of the constraints in (6). The SP consists in solving a BIP where the cost coefficients of the objective function are given by the worst-case scenario induced by the optimal solution of the MP. BLD iteratively solves the MP and then the SP. The solution y of the SP is added to the cut set, such that Φ_{h+1} ← Φ_h ∪ {y}, where h is the current iteration. Initially, Φ_0 ← ∅. BLD stops when the robustness cost of the solution given by the MP is equal to the smallest robustness cost of a solution computed by the SP. This algorithm is guaranteed to obtain the optimal solution for M-BIP [6].
BLD was successfully applied to solve the Minmax regret Traveling Salesman Problem under Interval Uncertainty [46], the Minmax regret Knapsack problem under Interval Uncertainty [35], the Minmax regret Restricted Shortest Path problem under Interval Uncertainty [7], and the Minmax regret Set Covering problem under Interval Uncertainty [51], among other problems.
Algorithm 1 gives a pseudo-code of BLD. It receives as input the matrix A, the vector b, and the vector c of BIP. Additionally, it receives the vectors l and u of lower and upper bounds for the cost coefficients, such that c_i ∈ [l_i, u_i], i = 1...n. Initially, as suggested by [46] to avoid an unbounded MP, Φ_0 is initialized in lines 1 to 3 with the solutions of two constructive heuristics for minmax regret optimization problems under interval uncertainty: the Algorithm Mean (AM) and the Algorithm Upper (AU) [42]. The best solution and its robustness cost are kept in lines 4 and 5, and the iteration counter is set in line 6. The loop in lines 7-14 corresponds to an iteration of BLD, and it runs until an optimal solution is found. First, the MP is solved in line 8 using constraints (10), giving solution x. Then, the SP corresponding to x is solved in line 9, giving solution y. Next, the primal bound of BLD is updated with the robustness cost of x in line 10, and the solution with the smallest robustness cost is kept in line 11. At the end of the loop, the iteration counter is updated in line 12 and the solution y is inserted into Φ_h in line 13. Finally, the best feasible solution x* computed is returned in line 15.
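The BLD loop described above can be sketched as follows. Enumeration over a toy instance stands in for the ILP solver used for both the MP and the SP, a single feasible seed replaces the AM/AU heuristics, and all data are hypothetical.

```python
from itertools import product

# Toy instance (hypothetical data): covering constraints A x >= 1.
A = [[1, 0, 1, 0], [1, 1, 0, 0], [0, 1, 1, 1]]
l, u = [2, 3, 1, 1], [4, 5, 6, 2]
n = len(l)
X = [x for x in product((0, 1), repeat=n)
     if all(sum(a * xi for a, xi in zip(row, x)) >= 1 for row in A)]

def cost(x, s):
    return sum(si * xi for si, xi in zip(s, x))

def sx(x):  # worst-case scenario induced by x
    return [u[i] if x[i] else l[i] for i in range(n)]

def master(Phi):
    """MP: minimize the maximum regret against the cut set Phi only.
    Enumeration over X stands in for the ILP solver used in the paper."""
    return min(X, key=lambda x: max(cost(x, sx(x)) - cost(y, sx(x))
                                    for y in Phi))

def subproblem(x):
    """SP: optimal solution of the BIP in scenario s^x, and true regret."""
    s = sx(x)
    y = min(X, key=lambda z: cost(z, s))
    return y, cost(x, s) - cost(y, s)

Phi, best, best_r = [X[0]], None, float("inf")
while True:
    x = master(Phi)
    rho = max(cost(x, sx(x)) - cost(y, sx(x)) for y in Phi)  # MP value
    y, r = subproblem(x)
    if r < best_r:                # keep the incumbent with smallest regret
        best, best_r = x, r
    if rho >= best_r:             # MP bound meets the incumbent: optimal
        break
    Phi.append(y)                 # add the SP solution as a new cut
```

On this toy instance the loop converges in two iterations; in general, the MP value is a lower bound that increases as cuts accumulate.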

Extended Benders-like Decomposition (EB)
The Extended Benders-like Decomposition (EB) uses the methodology introduced by [33] for selecting Benders' cuts. It consists in solving one SP for each incumbent solution found by the MP, instead of solving a single SP for the optimal solution of the MP. Thus, the number of iterations in EB is expected to be smaller than in BLD.

Algorithm 1: Benders-like Decomposition (BLD)
input: ⟨A, b, c, l, u⟩ output: the best feasible solution x* found. The pseudo-code for EB is very similar to Algorithm 1 (BLD). The main difference is that, instead of solving a single SP in line 9, it solves an SP for each integer solution found while the MP is solved in line 8.

Branch-and-Cut (BC)
The Branch-and-Cut algorithm (BC) was initially proposed by [46] for solving the Minmax regret Traveling Salesman Problem under Interval Uncertainty. The authors noticed that BLD may be computationally inefficient, as each iteration runs a branch-and-bound algorithm from scratch to solve the MP. In BC, each incumbent solution found by the algorithm is processed by an SP (which is identical to the SP in BLD and EB) to generate a new cut in the original branch-and-bound algorithm. BC differs from EB in the fact that the computed cuts are inserted globally and propagated to all active nodes in the branch-and-bound tree. The MP in BLD and EB, in contrast, is always reset after each iteration.
Algorithm 2 gives a pseudo-code of BC. It receives as input the matrix A, the vector b, and the vector c of M-BIP. Additionally, it receives the vectors l and u of lower and upper bounds for the cost coefficients, such that c_i ∈ [l_i, u_i], i = 1...n. Initially, as suggested by [46] and similarly to BLD and EB, BC solves M-BIP using the AM and AU constructive heuristics in lines 1 and 2, respectively. Then, it initializes Φ' ⊆ Φ with the solutions obtained by these heuristics in line 3. Next, a branch-and-cut framework is performed in line 4, which stops when an optimal solution is found or when a predefined time limit is met. Finally, the best feasible solution x* computed is returned in line 5.

Fix-and-Optimize heuristics
This section proposes two fix-and-optimize heuristics for M-BIP. These heuristics work in two steps. The first one is the preprocessing step, which uses an ad-hoc algorithm to fix the value of some variables in the problem's formulation. This leads to a reduced M-BIP, which is solved in the second step (the solving step). One expects the reduced M-BIP to be solved significantly faster than M-BIP, as the number of variables in the former is much smaller.
The heuristics differ in the ad-hoc algorithm used in the preprocessing step. The first one uses the Scenario-based Heuristic (SBA) [22] to identify variables that can be fixed to zero and removed from the M-BIP formulation, while the second one uses the linear relaxation of the M-BIP formulation to achieve this goal. Despite being proposed for M-BIP, both heuristics are very general and can be applied to solve any minmax regret optimization problem under interval uncertainty. They are described below.

Fix-and-Optimize through Scenario-based Algorithm (FO-SBA)
The Fix-and-Optimize through Scenario-based Algorithm (FO-SBA) relies on the SBA heuristic [22]. The intuition behind this heuristic is that the variables whose value is zero in the optimal solutions of all scenarios solved by SBA are less likely to appear in the optimal solution of M-BIP. Therefore, they are set to zero and removed from the set of variables, leading to a reduced M-BIP. In this subsection, we first introduce SBA in Section 4.1.1. Then, we state FO-SBA in Section 4.1.2.

The Scenario-based Algorithm
SBA was first proposed for the Minmax regret Weighted Set Covering problem under Interval Uncertainties [22,23] and later extended to M-BIP [18] and other minmax regret optimization problems under interval uncertainties [19,20,24]. It consists in investigating a set Λ = {s_1, s_2, ..., s_k} ⊆ Γ of so-called target scenarios. Each one is a linear combination of the lower scenario s^l ∈ Γ (the scenario where all cost coefficients are set to their respective lower bounds, i.e., c_i^{s^l} = l_i) and the upper scenario s^u ∈ Γ (the scenario where all cost coefficients are set to their respective upper bounds, i.e., c_i^{s^u} = u_i). The set Λ of investigated scenarios is computed using three parameters: (i) the initial scenario parameter λ_0; (ii) the final scenario parameter λ_f; and (iii) the step size δ. All parameters are real-valued in the interval [0, 1]. The cost of the uncertain coefficients for each target scenario s_λ ∈ Λ is computed as c_i^{s_λ} = l_i + λ(u_i − l_i).

Algorithm 3 presents the pseudo-code of SBA. It receives as input the matrix A, the vector b, and the vector c of BIP. Additionally, it receives the vectors l and u of lower and upper bounds for the cost coefficients, such that c_i ∈ [l_i, u_i], i = 1...n, and the parameters λ_0, λ_f, and δ. Initially, λ is set to λ_0 in line 1 and the primal bound is set to infinity in line 2. The algorithm consists in the main loop in lines 3-8. In each iteration, a BIP is solved on scenario s_λ (which is defined by the current value of λ) in line 4. If the regret of the computed solution is smaller than that of the previous best solution, the algorithm updates the value of the best solution found in line 6 and stores the solution found in line 7. The value of λ is incremented in line 8. This loop runs until λ is greater than λ_f. The best solution found is then returned in line 9. It is worth noting that SBA is an exponential-time algorithm, since the BIP solved in line 4 is NP-Hard.
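A minimal sketch of SBA on a toy instance, assuming the target scenarios are the convex combinations c_i = l_i + g(u_i − l_i) (the exact combination rule of the cited papers may differ); enumeration stands in for the BIP solver, and all data are hypothetical.

```python
from itertools import product

# Toy instance (hypothetical data): covering constraints A x >= 1.
A = [[1, 0, 1, 0], [1, 1, 0, 0], [0, 1, 1, 1]]
l, u = [2, 3, 1, 1], [4, 5, 6, 2]
n = len(l)
X = [x for x in product((0, 1), repeat=n)
     if all(sum(a * xi for a, xi in zip(row, x)) >= 1 for row in A)]

def cost(x, s):
    return sum(si * xi for si, xi in zip(s, x))

def regret(x):                       # exact regret, by enumeration
    s = [u[i] if x[i] else l[i] for i in range(n)]
    return cost(x, s) - min(cost(y, s) for y in X)

def sba(lo=0.0, hi=1.0, step=0.25):
    """lo, hi, and step play the roles of the initial scenario, final
    scenario, and step size parameters of SBA."""
    best, best_r, g = None, float("inf"), lo
    while g <= hi + 1e-9:
        s = [l[i] + g * (u[i] - l[i]) for i in range(n)]
        x = min(X, key=lambda z: cost(z, s))   # BIP solved on scenario s_g
        if regret(x) < best_r:
            best, best_r = x, regret(x)
        g += step
    return best, best_r
```

Each loop iteration is as hard as the deterministic problem, which is why SBA remains exponential-time in the worst case.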

FO-SBA for the M-BIP
Let V be the set of variables that appear in the optimal solution of at least one of the k = |Λ| scenarios investigated by SBA. The reduced M-BIP consists in the objective function (5) and the constraints (6)-(9) and (11). One may observe that constraint (11) fixes all variables that are not in V to zero.
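The preprocessing idea can be sketched as follows: keep the union V of the supports of the scenario optima, then solve the reduced problem restricted to V. The data and the investigated scenarios below are hypothetical, and enumeration stands in for the exact solver.

```python
from itertools import product

# Toy instance (hypothetical data): covering constraints A x >= 1.
A = [[1, 0, 1, 0], [1, 1, 0, 0], [0, 1, 1, 1]]
l, u = [2, 3, 1, 1], [4, 5, 6, 2]
n = len(l)

def feasible(x):
    return all(sum(a * xi for a, xi in zip(row, x)) >= 1 for row in A)

def cost(x, s):
    return sum(si * xi for si, xi in zip(s, x))

def regret(x):
    s = [u[i] if x[i] else l[i] for i in range(n)]
    best = min(cost(y, s) for y in product((0, 1), repeat=n) if feasible(y))
    return cost(x, s) - best

# Preprocessing: V = union of the supports of the scenario optima.
V = set()
for g in (0.0, 0.5, 1.0):                       # investigated scenarios
    s = [l[i] + g * (u[i] - l[i]) for i in range(n)]
    x = min((x for x in product((0, 1), repeat=n) if feasible(x)),
            key=lambda z: cost(z, s))
    V |= {i for i in range(n) if x[i]}

# Solving step: enumerate only solutions supported on V (constraint (11)).
reduced = [x for x in product((0, 1), repeat=n)
           if feasible(x) and all(x[i] == 0 for i in range(n) if i not in V)]
x_star = min(reduced, key=regret)
```

In this toy instance the reduction happens to preserve the optimal solution; in general FO-SBA is a heuristic and offers no such guarantee.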
Algorithm 4 shows the pseudo-code of FO-SBA. It receives as input the matrix A, the vector b, and the vector c of BIP. Additionally, it receives the vectors l and u of lower and upper bounds for the cost coefficients, such that c_i ∈ [l_i, u_i], i = 1...n. Besides, it also receives as input the parameters of the SBA heuristic. Furthermore, it uses the set V to maintain all variables that should be kept in the reduced M-BIP. Initially, the scenario parameter is set to its initial value in line 1. Then, line 2 initializes the set V as an empty set. The preprocessing step consists of the loop in lines 3-6. At each iteration of this loop, it solves a BIP on the current target scenario in line 4. Then, it stores the variables that were in the optimal solution of the previously solved problem in line 5 and increments the scenario parameter in line 6. The solving step is performed in line 7, which consists in solving the resulting reduced M-BIP formulation. One may observe that this formulation can be solved with any exact algorithm for M-BIP. Finally, FO-SBA returns the computed solution x* in line 8.

Fix-and-Optimize through Linear Relaxation (FO-LR)

The Fix-and-Optimize through Linear Relaxation (FO-LR) relies on the linear relaxation of the M-BIP formulation defined by the objective function (5) and the constraints (7)-(10). In this case, the integrality constraints (8) are relaxed, and Φ_h = Φ_0 contains the solutions of the Algorithm Mean (AM) and the Algorithm Upper (AU) [42]. This approach is the same used in the first iteration of BLD, EB, and BC.
The intuition behind this heuristic is that the variables whose value is zero in the optimal solution of the LP relaxation are less likely to appear in the optimal solution of M-BIP than the others.Therefore, one can build a reduced M-BIP without the null variables in the solution of the linear relaxation.
FO-LR builds a reduced M-BIP by setting some of the variables of M-BIP to zero. Let Z be the set of null variables in the optimal solution of the linear relaxation of M-BIP. The reduced M-BIP formulation consists in solving the formulation (5)-(9) and (11), with the variables in Z fixed to zero.
Algorithm 5 shows the pseudo-code of FO-LR. The preprocessing step consists of lines 1 and 2. The linear relaxation of M-BIP is solved in line 1 using a linear programming solver. The set of variables to be kept receives the strictly positive variables in the optimal solution of the linear relaxation of M-BIP in line 2. The solving step is performed in line 3. It consists in solving the reduced M-BIP formulation, which can be done using any exact algorithm for M-BIP. Finally, the resulting solution x* is returned in line 4.
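A sketch of the FO-LR preprocessing step, assuming SciPy's `linprog` as the LP solver. The relaxation carries two initial cuts Φ_0, built here from scenario optima computed by enumeration as stand-ins for the AM and AU solutions; every variable that is null in the LP optimum would then be fixed to zero. All data are hypothetical.

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

# Toy covering instance (hypothetical data).
A = np.array([[1, 0, 1, 0], [1, 1, 0, 0], [0, 1, 1, 1]])
l = np.array([2, 3, 1, 1])
u = np.array([4, 5, 6, 2])
n = len(l)

def scenario_optimum(scenario):
    """Optimal cover under a fixed scenario, by enumeration (stands in for
    the AM/AU constructive heuristics)."""
    sols = [x for x in product((0, 1), repeat=n)
            if all(A @ np.array(x) >= 1)]
    return np.array(min(sols, key=lambda x: scenario @ np.array(x)))

cuts = [scenario_optimum((l + u) / 2), scenario_optimum(u)]  # Phi_0 seeds

# LP relaxation of (5), (7)-(10): variables (x_1..x_n, rho), minimize rho.
# Cut for y:  sum_i (u_i - (u_i - l_i) y_i) x_i - rho <= sum_i l_i y_i.
A_ub, b_ub = [], []
for y in cuts:
    A_ub.append(np.append(u - (u - l) * y, -1.0))
    b_ub.append(float(l @ y))
for row in A:                            # covering rows as -A x <= -1
    A_ub.append(np.append(-row, 0.0))
    b_ub.append(-1.0)
res = linprog(c=np.append(np.zeros(n), 1.0), A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 1)] * n + [(None, None)], method="highs")

keep = {i for i in range(n) if res.x[i] > 1e-6}  # strictly positive vars
```

The reduced M-BIP would then be solved with only the variables in `keep` free, all others fixed to zero.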

Computational experiments
The computational experiments were performed on a 2.4 GHz Intel Xeon E5645 CPU with 32 GB of RAM, running under Linux Ubuntu. The BIP and linear programming formulations were solved by the ILOG CPLEX solver, version 12.6, with default parameter settings and a single thread. All algorithms were implemented in C++, using ILOG Concert Technology, and compiled with GNU g++ 8.2.0. The parameters of SBA were set according to [22]. In addition, the maximum running time of all algorithms in any of the experiments was set to 3600 seconds.

The test case
The Weighted Set Covering Problem (WSCP) [32] is one of the most studied combinatorial optimization problems. Its decision version was one of the original 21 problems proved to be NP-Complete by [40]. WSCP applications range from data partitioning [49] to vehicle routing problems [17]. For the reader interested in a deeper knowledge of this problem, we refer to the annotated bibliography [21].
WSCP is defined by a set U of objects and a set S of non-empty subsets of U. Furthermore, a weight w_j is associated with each subset j ∈ S. The objective of WSCP is to find S* ⊆ S such that every object is covered by at least one subset in S* and the sum of the weights of the subsets in S* is minimum.
WSCP can be formulated as follows. The decision variable x_j ∈ {0, 1} states whether subset j ∈ S belongs to the optimal WSCP solution (x_j = 1) or not (x_j = 0). Furthermore, let a_ij = 1 if item i ∈ U is covered by subset j, and a_ij = 0 otherwise. The WSCP formulation consists of the objective function (12) and the constraints (13)-(14). The objective function (12) minimizes the total weight of the selected subsets j ∈ S. Inequalities (13) ensure that every item i ∈ U is covered by at least one subset j ∈ S. The domain of the variables x_j is defined by constraints (14).
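The formulation can be exercised on a tiny instance; the enumeration below stands in for an ILP solver, and the data are hypothetical.

```python
from itertools import product

# Tiny WSCP (hypothetical data): 3 items, 4 subsets, a[i][j] = 1 iff item i
# is covered by subset j, weight w[j] per subset.
a = [[1, 0, 1, 0],
     [1, 1, 0, 0],
     [0, 1, 1, 1]]
w = [3, 4, 2, 1]
m = len(w)

def covers(x):                 # constraints (13): sum_j a_ij x_j >= 1
    return all(sum(a[i][j] * x[j] for j in range(m)) >= 1
               for i in range(len(a)))

# Objective (12): minimum-weight feasible cover, by enumeration.
best = min((x for x in product((0, 1), repeat=m) if covers(x)),
           key=lambda x: sum(w[j] * x[j] for j in range(m)))
```

Here the cheapest cover picks subsets 0 and 3, with total weight 4.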

The Minmax regret Weighted Set Covering problem under Interval Uncertainties
The Minmax regret Weighted Set Covering problem under Interval Uncertainties (M-WSCP), as defined by [51], is a variant of WSCP under uncertainty. In this problem, the weight associated with each subset j ∈ S is uncertain and assumed to lie in the interval [l_j, u_j], with 0 ≤ l_j ≤ u_j. The objective of M-WSCP is to find a solution S* with the smallest robustness cost, according to the definitions presented in Section 1.
A BIP formulation for M-WSCP was presented in [51]. It is obtained by linearizing the inner minimization on the worst-case scenario, as explained by [3]. It uses the binary decision variables x_j ∈ {0, 1}, j ∈ S, and the parameters a_ij, i ∈ U, j ∈ S, as for the WSCP. Besides, it also uses an auxiliary variable ρ ∈ R+ to compute the cost of the solution in the worst-case scenario, as in the formulation (5)-(9) for M-BIP. The resulting formulation is defined by (15)-(19). The objective function (15) minimizes the maximum regret. The constraints (16) ensure that every item i ∈ U is covered by at least one subset j ∈ S. The inequalities (17) enforce the correct value of ρ. The constraints (18)-(19) define the domains of the variables x_j and ρ, respectively.

Benchmark instances for the M-WSCP
We used two sets of benchmark instances in our computational experiments, namely OR-Library and Shunji. In both sets, the interval [l_j, u_j], for all j ∈ S, was computed as suggested by [39]. Initially, a random value w_j was obtained from the original instance. Then, the bounds l_j and u_j were set as functions of w_j and a parameter β ∈ {0.3, 0.6, 0.9} that defines the degree of uncertainty of the instance, i.e., the greater the value of β, the greater the difference between l_j and u_j, on average. The set OR-Library of instances was retrieved from the OR-Library, a collection of test data sets for a variety of Operations Research (OR) problems. It was originally described by [13] and encompasses instances for several combinatorial optimization problems. We retrieved the instance sets 4, 5, and 6, which were first used in experiments for the WSCP by [12] and later adapted for the M-WSCP [7,51]. For each instance obtained from the OR-Library, we generated three instances for the M-WSCP, such that each generated instance uses a different value of β. Therefore, there are 75 instances in this set.
The set Shunji of instances was generated using the instance generator for WSCP developed by Shunji Umetani. This generator implements the scheme for constructing WSCP instances proposed by [12] and guarantees that every element in U is covered by at least two subsets in S and that every subset j ∈ S covers at least one element in U. The instances within this set have |U| = 1000 and |S| = 2000. The weight values for all instances range between 1 and 100. Furthermore, the generator has a parameter d ∈ [0, 1] that controls the density of the covering matrix. Given the parameters d ∈ {0.02, 0.05, 0.10} and β ∈ {0.3, 0.6, 0.9}, we generated 5 random instances for each combination of these parameters. Therefore, there are 45 instances in this set.

Results for the exact algorithms
The first set of experiments aims at evaluating the efficiency of BLD, EB, and BC in solving M-WSCP using the instances in sets OR-Library and Shunji. The results are reported in Tables 2 and 3. The first and second columns respectively report the value of β and the set the instance belongs to. For BLD, the third, fourth, fifth, and sixth columns respectively show (i) the number of instances solved to optimality; (ii) the average number of cuts inserted by BLD and its standard deviation; (iii) the average relative optimality gap for the instances whose optimal solution was not found and the standard deviation of this same value; and (iv) the average running time (in seconds) for the instances whose optimal solutions were not found and the standard deviation of this same value. The same data is reported for EB in the seventh, eighth, ninth, and tenth columns, and for BC in the eleventh, twelfth, thirteenth, and fourteenth columns. When optimal solutions were not found for any of the five instances in a group, the fifth column is filled with a '−'. We recall that, for each value of β, sets 4 and 5 have 10 instances, whereas set 6, as well as the instances in Shunji, only has 5 instances each.
Regarding the set OR-Library of instances, it can be seen from Table 2 that all algorithms found optimal solutions for all evaluated instances. Furthermore, the greater the value of β, the greater the average running time of the algorithms. BLD inserted the smallest average number of cuts for all evaluated instance sets, whereas EB inserted the greatest number of cuts, on average. One can see that BC was the fastest among the evaluated algorithms, being able to solve all instance sets within 65 seconds, on average. This smaller running time may be due to the fact that BC does not reinitialize its branching tree, as presented in Section 3.3, while BLD and EB restart their branching trees at each iteration. Regarding the set Shunji of instances, it can be seen from Table 3 that the denser the instance, the less time the algorithms need to find the optimal solution. EB found a smaller number of optimal solutions than BLD and BC. Furthermore, BLD found a greater number of optimal solutions for instances with β = 0.3, while BC found a greater number of optimal solutions for instances with β = 0.9. Despite that, the average relative gaps achieved by BLD, EB, and BC were up to 0.5%, on average, being very close to the optimal solution. We chose BLD for solving the remaining formulation in FO-SBA and FO-LR, as it was able to find a greater number of optimal solutions than EB and BC.
Table 2. Results for the exact algorithms on OR-Library instances.

Results on the preprocessing step of the fix-and-optimize heuristics
The second set of experiments evaluates how efficient FO-SBA and FO-LR are at fixing the value of variables in the M-WSCP formulation defined by the objective function (15) and the constraints (16)-(19). As the number of binary variables equals the cardinality of the set S, we measure as |S'| how many subsets of S remain in the instance after the preprocessing step of each heuristic, and how long it takes to solve the resulting formulation.
The results of this experiment are presented in Tables 4 and 5 for the OR-Library and Shunji instances, respectively. The first column reports the value of β, while the second shows the set of the instance (for OR-Library instances) or the density value d (for Shunji instances). The third column gives the average and the standard deviation of the number |S'| of subsets of S that remained in the instance after the preprocessing step of FO-SBA, while the fourth column presents the average ratio |S'|/|S|, i.e., the proportion of subsets of S that remained in the instance. The average running times, along with their standard deviations, for the preprocessing and solving steps of FO-SBA are presented in the fifth and sixth columns, respectively. The same data is reported for FO-LR in the last four columns. As pointed out in Section 5.2, the solving step of both FO-SBA and FO-LR employs BLD.
Regarding the OR-Library instances, one can see from Table 4 that both heuristics obtained similar results. For both FO-SBA and FO-LR, the greatest average reduction was achieved on instances from set 5, while the smallest average reduction was achieved on instances from set 4. The FO-SBA average preprocessing running time never exceeded 9 seconds, and the heuristic was able to fix more than 90% of the variables of the formulation, on average, for all evaluated instances. FO-LR was faster than FO-SBA, such that the maximum running time of its preprocessing step was 1.4 seconds, for instances from set 6 with β = 0.3. Using this reduced variable set, the BLD average running time never exceeded 1 second for FO-SBA and 1.5 seconds for FO-LR. In addition, one can also see that the accumulated running times of the preprocessing and solving steps of FO-SBA and FO-LR, computed as the sum of the two steps' running times, were significantly smaller than those of the exact algorithms, as reported in Table 2.
Regarding the Shunji instances, one can see from Table 5 that the denser the instance, the smaller the running time of FO-SBA and FO-LR, on average. Besides, the heuristics also fixed a greater number of variables on the denser instances than on the sparser ones, such that only 2.7% and 2.5% of the original variables were not fixed for the instances with β = 0.3 and d = 0.10 using FO-SBA and FO-LR, respectively. The average running time of the FO-SBA preprocessing step was up to 1592 seconds on instances with β = 0.6 and d = 0.05, while that of FO-LR was up to only 430 seconds on instances with β = 0.6 and d = 0.05. However, the solving step of FO-SBA was faster than that of FO-LR for all evaluated instance subsets, being up to 206 seconds on instances with β = 0.6 and density d = 0.05.

Comparison of the heuristics
The last set of experiments compares the results of the proposed heuristics with those of the best known heuristics for M-WSCP. Our heuristics are contrasted with each other and with two other heuristics for the M-WSCP: the Scenario-based Algorithm (SBA) [22] and the Linear Programming Heuristic (LPH) [7]. The results are reported in Tables 6 and 7 for the OR-Library and Shunji instances, respectively. The first column reports the value of β, while the second shows the set of the instance (for OR-Library instances) or the density value d (for Shunji instances). The third and fourth columns report the results of SBA. Letting x̂ be the best known solution for each instance (found by any of the exact or heuristic algorithms used in our experiments), the third column gives the average relative deviation (Z(x) − Z(x̂))/Z(x̂), where Z(x) is the robustness cost of the solution x obtained by SBA, along with the standard deviation of this same metric. The fourth column presents the average and the standard deviation of the running time of SBA. The same data is reported for LPH in the fifth and sixth columns, for FO-SBA in the seventh and eighth columns, and for FO-LR in the ninth and tenth columns.
Regarding the OR-Library instances, one can see from Table 6 that the average running time of the heuristics never exceeded 9 seconds, which was the case of FO-SBA for instances from set 6 with β = 0.3. LPH was the fastest among the evaluated heuristics. However, it also had the greatest average relative deviations among the evaluated heuristics, being up to 54.4% for instances from set 6 with β = 0.3. The best results for all instances were obtained by FO-SBA, which achieved a maximum average relative deviation of 9.9% for instances from set 4 with β = 0.6. Besides, one can observe that the FO-SBA average running time never exceeded 9 seconds, and that it was able to improve the average relative deviations of SBA for 4 out of the 9 subsets of instances evaluated. These results indicate that FO-SBA was able to quickly compute near-optimal solutions, since the optimal solutions for all OR-Library instances were given by BLD, EB, and BC, as shown in Table 2.
Regarding the Shunji instances, one can see from Table 7 that the results were very similar to those obtained for the OR-Library instances. LPH obtained the greatest average relative deviations for all subsets of instances. On the other hand, the FO-SBA solutions had the smallest average relative deviation among the evaluated heuristics for all instance subsets, except for instances with β = 0.3 and d = 0.05. One can observe that FO-SBA was able to improve the results of SBA for instances with β = 0.3 and d = 0.10 and for instances with β = 0.9 and d = 0.10. Furthermore, FO-LR obtained competitive results, being faster than FO-SBA for 7 out of the 9 subsets of instances evaluated and computing solutions with an average relative deviation at least as good as that of FO-SBA for 3 out of the 9 subsets of instances evaluated.
The results in Tables 6 and 7 indicate that FO-SBA obtained the best results among the evaluated heuristics. To test this observation on the OR-Library and Shunji instances, we analyzed our experimental data following the statistical procedure of [36], which comprises three steps. The first step verifies the non-normality of the relative deviations of SBA, LPH, FO-SBA, and FO-LR. The second step then evaluates whether there is a statistically significant difference among the four heuristics. Finally, in case such a difference exists, the third step identifies which pairs of heuristics have significantly different relative deviations. These steps are detailed below. They assume a significance level α = 0.05, i.e., the null hypothesis is rejected if the p-value is smaller than 0.05.
In the first step, we applied the Shapiro-Wilk test of normality [52] to verify whether the relative deviations of SBA, LPH, FO-SBA, and FO-LR follow a normal distribution. With a p-value of 0.001, the test indicated that the data of the four heuristics do not follow a normal distribution. Thus, a non-parametric test is used in the next step.
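For reference, this normality check corresponds to a single call to `scipy.stats.shapiro`. The snippet below is a sketch using made-up relative-deviation values, not the paper's data:

```python
from scipy import stats

# Toy relative-deviation sample for one heuristic (illustrative values only).
deviations = [5.2, 3.1, 54.4, 0.0, 9.9, 1.4, 2.7, 30.0, 0.3, 7.5]

w_stat, p_value = stats.shapiro(deviations)

# Reject normality at significance level 0.05 when p < 0.05.
alpha = 0.05
looks_normal = p_value >= alpha
```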
In the second step, we applied Friedman's test [34] to verify whether there is a statistically significant difference between at least two of the evaluated heuristics. The null hypothesis was that SBA, LPH, FO-SBA, and FO-LR have, on average, the same relative deviation. The data were ranked according to [18]. With a p-value of 0.002, the test rejected the null hypothesis for both the OR-Library and the Shunji instances. Therefore, there is indeed a significant difference among the relative deviations of SBA, LPH, FO-SBA, and FO-LR.
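This step maps directly to `scipy.stats.friedmanchisquare`, which takes one sample per algorithm measured over the same blocks (here, instance subsets). The sketch below uses toy deviations, not the paper's data:

```python
from scipy import stats

# Toy average relative deviations per instance subset (rows are blocks).
sba    = [4.1, 3.8, 5.0, 2.2, 6.3]
lph    = [9.7, 12.4, 54.4, 8.1, 20.0]
fo_sba = [3.9, 3.5, 4.8, 2.0, 6.0]
fo_lr  = [4.0, 3.9, 5.1, 2.5, 6.1]

chi2, p_value = stats.friedmanchisquare(sba, lph, fo_sba, fo_lr)

# Reject the null hypothesis of equal average deviations when p < 0.05.
significant = p_value < 0.05
```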
In the third step, we applied a non-parametric two-tailed Nemenyi post-hoc test, also known as the Nemenyi-Damico-Wolfe-Dunn post-hoc test [47], which compares the results of multiple algorithms. This test evaluated the pair of hypotheses H0: Ri = Rj versus H1: Ri ≠ Rj, for every pair (Ri, Rj) with Ri, Rj ∈ R, where R = {Rsba, Rlph, Rfo-sba, Rfo-lr}, such that Rsba, Rlph, Rfo-sba, and Rfo-lr are, respectively, the average rankings obtained by SBA, LPH, FO-SBA, and FO-LR in the second step. The null hypothesis (H0) states that the average rankings Ri and Rj are not significantly different, thus implying that the results of one of the heuristics are not significantly better than those of the other. The alternative hypothesis (H1) implies that the results associated with Ri are indeed significantly different from those associated with Rj. Table 8 presents the results of the Nemenyi test for the OR-Library and Shunji instances. Each cell of this table displays the p-value obtained by comparing the heuristics indicated at the top of the column and at the beginning of the row.
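The paper reports Nemenyi p-values; an equivalent and easy-to-code formulation compares differences of mean Friedman ranks against a critical difference (as popularized by Demšar, 2006). The sketch below is an illustration under stated assumptions: the mean ranks are toy values, and the studentized-range constant q ≈ 2.569 is the tabulated value for k = 4 algorithms at α = 0.05.

```python
from itertools import combinations
from math import sqrt

# Toy mean Friedman ranks of the four heuristics over n instance subsets.
mean_ranks = {"SBA": 2.4, "LPH": 4.0, "FO-SBA": 1.0, "FO-LR": 2.6}
n = 5                    # number of blocks (instance subsets)
k = len(mean_ranks)      # number of compared heuristics

# Nemenyi critical difference; q_0.05 is about 2.569 for k = 4 (tabulated).
q_alpha = 2.569
cd = q_alpha * sqrt(k * (k + 1) / (6.0 * n))

# Two heuristics differ significantly iff their mean ranks differ by > cd.
different = {
    (a, b): abs(mean_ranks[a] - mean_ranks[b]) > cd
    for a, b in combinations(mean_ranks, 2)
}
```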
Regarding the OR-Library instances, one can see from Table 8 that only FO-SBA and FO-LR significantly differ from each other, with a p-value of 0.03. For the remaining comparisons, the test could not reject the null hypothesis and, thus, we cannot assume that the average relative deviations of the heuristics significantly differ. In particular, we conclude that the results of SBA, LPH, and FO-SBA do not significantly differ from one another. For this set of instances, the recommended heuristic is LPH, since it had a smaller running time than SBA and FO-SBA, as shown in Table 6.
Regarding the Shunji instances, one can see from Table 8 that LPH was the worst heuristic, as the Nemenyi test returned a p-value of 0.01 for the comparison of LPH with each of the other heuristics. The comparison of SBA and FO-SBA returned a p-value of 1.00, while the comparisons of FO-LR with SBA and FO-SBA returned p-values of 0.74 and 0.61, respectively. Thus, the test could not reject the null hypothesis for these comparisons, and we cannot assume that the average relative deviations of SBA and FO-SBA significantly differ. Likewise, we cannot assume that the average relative deviations of FO-LR, SBA, and FO-SBA significantly differ among them. Therefore, for this set of instances, the recommended heuristic is FO-LR, since it is statistically equivalent to SBA and FO-SBA while being faster than FO-SBA for most instance subsets.

Table 3 .
Results for the exact algorithms on Shunji instances.

Table 1 .
Complexity results for min-max regret optimization problems under interval uncertainty.

Table 4 .
Results for the FAO heuristics on OR-Library instances.

Table 5 .
Results for the FAO heuristics on Shunji instances.

Table 6 .
Results for the heuristics on OR-Library instances.

Table 7 .
Results for the heuristics on Shunji instances.

Table 8 .
p-values obtained by the Nemenyi test on OR-Library and Shunji instances.