THE WIDEST 𝑘 -SET OF DISJOINT PATHS PROBLEM

Abstract. Finding disjoint and widest paths are key problems in telecommunication networks. In this paper, we study the Widest 𝑘-set of Disjoint Paths Problem (WKDPP), an 𝒩𝒫-Hard optimization problem that considers both aspects. Given a digraph 𝐺 = (𝑁, 𝐴), WKDPP consists of computing 𝑘 arc-disjoint paths between two nodes such that the sum of their minimum arc capacities is maximized. We present three mathematical formulations for WKDPP and a set of symmetry-breaking inequalities, and propose two heuristic algorithms. Computational experiments compare the proposed heuristics with another from the literature and show the effectiveness of the proposed methods.


Introduction
The problem of finding multiple disjoint paths has been widely studied in telecommunication networks [1][2][3][4][5][6]. Several network protocols route through multiple disjoint paths; examples of such protocols are the Ad hoc On-demand Multipath Distance Vector Routing Protocol (AOMDV) [7] and the Multipath Transmission Control Protocol (MTCP) [8]. One can use disjoint paths as fault-tolerant mechanisms by enabling alternative routing paths when a route link fails [9]. In addition, multiple disjoint paths can act as load-balancing structures, distributing network packets among several routes. Thus, it is possible to increase the network throughput and avoid link congestion [10].
An important characteristic of such paths is their bandwidth, i.e., the number of network packets able to flow through the path in a given time span [11]. We consider the bandwidth of a path 𝑝 to be the smallest link capacity in 𝑝. A path between two network devices 𝑠 and 𝑡 is the widest if it has the greatest bandwidth among all possible paths. The wider the path, the more packets can be sent through it. Therefore, the throughput of a network is directly correlated to the width of its routing paths [12].
The coupled version of the Widest Pair of Disjoint Paths problem (WPDPC) [13] is an 𝒩𝒫-Hard optimization problem that combines the notions of disjoint and widest paths. Let 𝐺 = (𝑁, 𝐴) be a digraph with node set 𝑁 and arc set 𝐴. Moreover, a function 𝑐 : 𝐴 → ℝ>0 associates each arc (𝑖, 𝑗) ∈ 𝐴 with a capacity 𝑐ᵢⱼ. The WPDPC aims to find two arc-disjoint paths between a source node 𝑠 ∈ 𝑁 and a sink node 𝑡 ∈ 𝑁, such that the sum of their widths is maximized.
This work studies a generalization of the WPDPC, called the Widest 𝑘-set of Disjoint Paths Problem (WKDPP), which was also studied by Wang et al. [14]. Given 𝐺 = (𝑁, 𝐴), 𝑠, and 𝑡, as previously defined, the WKDPP aims to find a set 𝑃 = {𝑝₁, 𝑝₂, …, 𝑝ₖ} of 𝑘 arc-disjoint paths from 𝑠 to 𝑡 such that the sum of their widths is maximized, i.e., maximize ∑_{𝑝∈𝑃} min_{(𝑖,𝑗)∈𝑝} 𝑐ᵢⱼ. Since the WKDPP is a generalization of the WPDPC to 𝑘 paths, it is also 𝒩𝒫-Hard.
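To make the objective concrete, the following Python sketch computes path widths and the WKDPP objective value on a toy instance (the graph, node names, and function names are ours, purely for illustration):

```python
# Toy WKDPP instance: arcs mapped to capacities, paths given as arc lists.
cap = {("s", "u"): 7, ("u", "t"): 3, ("s", "v"): 5, ("v", "t"): 9}

def width(path, cap):
    """Width of a path: the smallest capacity among its arcs."""
    return min(cap[a] for a in path)

def wkdpp_objective(paths, cap):
    """Sum of the widths of pairwise arc-disjoint paths (the WKDPP objective)."""
    arcs = [a for p in paths for a in p]
    assert len(arcs) == len(set(arcs)), "paths must be arc-disjoint"
    return sum(width(p, cap) for p in paths)

p1 = [("s", "u"), ("u", "t")]  # width min(7, 3) = 3
p2 = [("s", "v"), ("v", "t")]  # width min(5, 9) = 5
print(wkdpp_objective([p1, p2], cap))  # 8
```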
The WKDPP relates to other network flow problems, such as the Origin-Destination Integer Multicommodity Flow problem [15], the Concurrent 𝑘-Splittable Flow Problem [16,17], the Unsplittable Flow Problem [18], and the Maximum Disjoint Paths with Different Colors problem [19]. It differs from the 𝑘-Splittable Flow Problem [20,21] by forbidding two flows to pass through the same arc. It also relates to the Shortest Pair of Disjoint Paths problem [22] and other disjoint-path problems [23][24][25].
In this paper, we present three Integer Linear Programming (ILP) formulations for the WKDPP. The first model is an arc-path formulation with a potentially exponential number of elementary ⟨𝑠, 𝑡⟩ paths. The second model generalizes the ILP multicommodity flow formulation for the WPDPC, proposed in [13], to the case of 𝑘 disjoint paths. The third model reformulates the second one as another ILP multicommodity flow formulation; it uses two sets of flow conservation constraints to compute the set of widest disjoint paths. In addition, we propose two heuristics for the WKDPP. The first heuristic relies on several executions of the Ford-Fulkerson max-flow algorithm [26]. The second one is a fix-and-optimize heuristic that removes a subset of arcs from 𝐴 and optimizes the remaining model. One can extend the proposed heuristics to other 𝒩𝒫-Hard network flow problems, such as the 𝑘-Splittable Maximum Flow Problem [20] and the Unsplittable Flow Problem [18]. The results obtained by our heuristics are contrasted with those of the Maximum 𝑘-Path Bandwidth Algorithm (MKPB), a heuristic proposed by Wang et al. [14].
We organize the remainder of this paper as follows. Section 2 proposes three ILP formulations for the WKDPP. Section 3 presents the WKDPP heuristics. Section 4 reports the computational experiments of our work. Finally, the last section draws the concluding remarks of this paper.

Arc-path model
The first model is an arc-path formulation. Let 𝑃 be the potentially exponential set of all elementary ⟨𝑠, 𝑡⟩ paths in 𝐺. Besides, let 𝑆 = {𝑝₁, …, 𝑝ₖ} ⊆ 𝑃 be a set of 𝑘 arc-disjoint paths that carry flow from 𝑠 to 𝑡, and let 𝑤ₚ = min_{(𝑖,𝑗)∈𝑝} 𝑐ᵢⱼ be the width of path 𝑝 ∈ 𝑃. We define 𝛿ᵖᵢⱼ = 1 if arc (𝑖, 𝑗) ∈ 𝐴 belongs to path 𝑝 ∈ 𝑃 and 𝛿ᵖᵢⱼ = 0 otherwise. Let 𝑦 be a binary decision vector such that 𝑦ₚ = 1 if path 𝑝 ∈ 𝑃 belongs to the WKDPP solution and 𝑦ₚ = 0 otherwise. We define this model as follows:

max ∑_{𝑝∈𝑃} 𝑤ₚ 𝑦ₚ (1)
s.t. ∑_{𝑝∈𝑃} 𝛿ᵖᵢⱼ 𝑦ₚ ≤ 1, ∀(𝑖, 𝑗) ∈ 𝐴 (2)
∑_{𝑝∈𝑃} 𝑦ₚ = 𝑘 (3)
𝑦ₚ ∈ {0, 1}, ∀𝑝 ∈ 𝑃 (4)

Objective function (1) maximizes the total flow between 𝑠 and 𝑡. Inequalities (2) guarantee that at most one path 𝑝 ∈ 𝑃 uses each arc (𝑖, 𝑗) ∈ 𝐴. Constraint (3) selects 𝑘 paths to send flow from 𝑠 to 𝑡. Constraints (4) define the domain of variables 𝑦ₚ. This model relies on a potentially exponential number of variables 𝑦ₚ; therefore, it cannot be solved directly by an ILP solver without a reformulation.
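The semantics of (1)-(4) can be checked by brute force on small instances. The sketch below (ours; not a practical solver, since |𝑃| grows exponentially) enumerates all elementary ⟨𝑠, 𝑡⟩ paths and picks the best set of 𝑘 pairwise arc-disjoint ones:

```python
from itertools import combinations

def all_simple_paths(adj, s, t):
    """Enumerate every elementary <s, t> path as a list of arcs (iterative DFS)."""
    paths, stack = [], [(s, [], {s})]
    while stack:
        node, arcs, seen = stack.pop()
        if node == t:
            paths.append(arcs)
            continue
        for nxt in adj.get(node, []):
            if nxt not in seen:
                stack.append((nxt, arcs + [(node, nxt)], seen | {nxt}))
    return paths

def widest_k_disjoint(adj, cap, s, t, k):
    """Brute-force analogue of model (1)-(4): choose k pairwise arc-disjoint
    paths maximizing the sum of their widths."""
    cand = all_simple_paths(adj, s, t)
    best, best_val = None, float("-inf")
    for combo in combinations(cand, k):            # constraint (3): k paths
        arcs = [a for p in combo for a in p]
        if len(arcs) != len(set(arcs)):            # constraint (2): disjointness
            continue
        val = sum(min(cap[a] for a in p) for p in combo)  # objective (1)
        if val > best_val:
            best, best_val = combo, val
    return best, best_val

adj = {"s": ["u", "v"], "u": ["t", "v"], "v": ["t"]}
cap = {("s", "u"): 7, ("u", "t"): 3, ("s", "v"): 5, ("v", "t"): 9, ("u", "v"): 4}
sol, val = widest_k_disjoint(adj, cap, "s", "t", 2)
print(val)  # 8: widths 3 (s-u-t) + 5 (s-v-t)
```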

Arc-node model
The authors of [13] proposed an arc-node model for the WPDPC; in this work, we generalize their model to the case of 𝑘 disjoint paths. We define 𝑥ᵖᵢⱼ = 1 if the arc (𝑖, 𝑗) ∈ 𝐴 belongs to path 𝑝, and 𝑥ᵖᵢⱼ = 0 otherwise. Besides, variable 𝑤ₚ stores the width of path 𝑝. We define this model as follows:

max ∑_{𝑝=1}^{𝑘} 𝑤ₚ (5)
s.t. ∑_{𝑗:(𝑖,𝑗)∈𝐴} 𝑥ᵖᵢⱼ − ∑_{𝑗:(𝑗,𝑖)∈𝐴} 𝑥ᵖⱼᵢ = 𝑏ᵢ, ∀𝑖 ∈ 𝑁, 𝑝 = 1, …, 𝑘 (6)
∑_{𝑝=1}^{𝑘} 𝑥ᵖᵢⱼ ≤ 1, ∀(𝑖, 𝑗) ∈ 𝐴 (7)
𝑤ₚ ≤ 𝑐ᵢⱼ 𝑥ᵖᵢⱼ + 𝑀(1 − 𝑥ᵖᵢⱼ), ∀(𝑖, 𝑗) ∈ 𝐴, 𝑝 = 1, …, 𝑘 (8)
𝑥ᵖᵢⱼ ∈ {0, 1}, ∀(𝑖, 𝑗) ∈ 𝐴, 𝑝 = 1, …, 𝑘 (9)
𝑤ₚ ≥ 0, 𝑝 = 1, …, 𝑘 (10)

where 𝑏ₛ = 1, 𝑏ₜ = −1, and 𝑏ᵢ = 0 for every other node 𝑖 ∈ 𝑁. The objective function (5) maximizes the sum of the path widths. Constraints (6) are the classical flow balance constraints and ensure the connectivity of the 𝑘 paths from 𝑠 to 𝑡. Inequalities (7) guarantee that at most one ⟨𝑠, 𝑡⟩ path uses each arc (𝑖, 𝑗) ∈ 𝐴. Constraints (8) compute the width of each path; they use a constant 𝑀 equal to the highest arc capacity in 𝐴. Constraints (9) and (10) respectively define the domains of variables 𝑥ᵖᵢⱼ and 𝑤ₚ.

Reformulated arc-node model
The third model is a reformulation of the previous arc-node model. It describes each path as a flow subproblem. The introduction of a second flow variable 𝑓ᵖᵢⱼ ≥ 0 removes the need for Inequalities (8), thus eliminating the big-𝑀 constant. The second set of flow constraints, which uses variables 𝑓ᵖᵢⱼ, computes the width of path 𝑝. Let 𝑥ᵖᵢⱼ and 𝑤ₚ be as previously defined. We define this model as constraints (11)-(18). Objective function (11), constraints (13), and (14) are the same as in the first arc-node model. Inequalities (12), together with (13), define each flow subproblem and guarantee that the flow is unsplittable and acyclic. Constraints (15) couple the flow variables 𝑓ᵖᵢⱼ and 𝑥ᵖᵢⱼ: they induce variable 𝑓ᵖᵢⱼ to assume the capacity of arc (𝑖, 𝑗) if it belongs to path 𝑝. Finally, constraints (16)-(18) respectively define the domains of variables 𝑥ᵖᵢⱼ, 𝑤ₚ, and 𝑓ᵖᵢⱼ.

Symmetry-breaking constraints
Both arc-node models have a strong symmetry structure induced by the assignment of the decision variables. Let 𝑃* = {𝑝*₁, 𝑝*₂, …, 𝑝*ₖ} be the optimal set of 𝑘 paths for a WKDPP instance, with respective widths {𝑤*₁, 𝑤*₂, …, 𝑤*ₖ}. Note that any permutation of those 𝑘 paths leads to another optimal solution with the same objective value. Therefore, 𝑘! different optimal solutions exist as permutations of those 𝑘 paths.
This symmetric structure can be avoided by introducing a set of variable-ordering constraints [27], as stated by Inequalities (19): 𝑤ₚ ≥ 𝑤ₚ₊₁, ∀𝑝 ∈ {1, …, 𝑘 − 1}. They impose that path 𝑝 is at least as wide as path 𝑝 + 1, thus avoiding permutations of the same solution. However, they cannot prevent situations where the optimal solution has two or more paths 𝑝ᵢ and 𝑝ⱼ with the same width, i.e., 𝑤ᵢ = 𝑤ⱼ.
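A small numeric illustration (the width values are ours) of both the 𝑘! symmetry and the residual symmetry under ties:

```python
from itertools import permutations

# Widths of an optimal set of k = 3 paths; two paths tie at width 5.
widths = [9, 5, 5]

# Any assignment of paths to indices yields the same objective value,
# so a model without (19) admits k! = 6 symmetric optimal assignments.
assignments = list(permutations(widths))
print(len(assignments))  # 6

# Inequalities (19) keep only assignments with non-increasing widths;
# the tie at width 5 still permutes, so symmetry is reduced, not removed.
kept = [a for a in assignments
        if all(a[i] >= a[i + 1] for i in range(len(a) - 1))]
print(len(kept))  # 2
```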

Heuristics for the WKDPP
In this section, we propose two heuristics for the WKDPP. The first relies on several executions of the Ford-Fulkerson max-flow algorithm [26]: it repeatedly runs the algorithm while removing arcs, leaving a graph that contains exactly 𝑘 disjoint paths. The second is a fix-and-optimize heuristic that removes a subset of arcs from 𝐴 and optimizes the remaining model.
Given a WKDPP instance, the first heuristic inspects all arcs separately, in ascending order of capacity. It checks whether removing an arc makes the instance infeasible; if so, it reinserts the arc, otherwise the arc is permanently removed. After inspecting all arcs, the remaining arc set yields exactly 𝑘 disjoint paths. The second heuristic removes all arcs with a capacity smaller than a computed minimum value and then solves an ILP model over the remaining arc subset.

Maximum Flow-Based Algorithm
The Maximum Flow-Based Algorithm (MFBA) is an 𝑂(𝑘|𝐴|²) heuristic for the WKDPP that runs the Ford-Fulkerson maximum flow algorithm [26] |𝐴| times. It aims at removing low-capacity arcs from 𝐴 in order to compute a flow decomposition of 𝐺 into 𝑘 disjoint paths with maximum width. In order to reduce the complexity of this heuristic, the Ford-Fulkerson algorithm is only partially executed: instead of computing the total flow of the graph, it stops as soon as a flow of at least 𝑘 units is found.
Algorithm 1 presents the MFBA heuristic. It receives as input 𝐺, 𝑠, 𝑡, and 𝑘. Initially, line 1 generates an arc set 𝐴′ as an exact copy of 𝐴. Lines 2-3 set the capacity of all arcs in 𝐴′ to 1; the algorithm uses these unit capacities to compute the maximum number of disjoint ⟨𝑠, 𝑡⟩ paths in 𝐺′ = (𝑁, 𝐴′). Lines 4-8 iterate over all arcs (𝑖, 𝑗) in 𝐴, sorted in ascending order of capacity. Line 5 removes the arc (𝑖, 𝑗) from 𝐴′. Then, the algorithm computes the maximum flow in 𝐺′ by partially running the Ford-Fulkerson algorithm, trying to find 𝑘 augmenting paths, which results in a complexity of 𝑂(𝑘|𝐴|). If the resulting maximum flow is smaller than 𝑘 (line 7), then arc (𝑖, 𝑗) cannot be removed from 𝐴′, otherwise the instance becomes infeasible; thus, line 8 reinserts the arc into 𝐴′. Thereafter, only the arcs of the 𝑘 paths remain. Next, the heuristic executes a flow decomposition algorithm [28] on 𝐺′ to compute a feasible solution for the WKDPP. Such a flow decomposition returns exactly 𝑘 disjoint paths.
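The following pure-Python sketch mirrors Algorithm 1 under our own naming and data structures (the toy instance is also ours): a BFS-based Ford-Fulkerson on unit capacities stops after 𝑘 augmentations, arcs are tentatively dropped in ascending capacity order, and a greedy walk decomposes the surviving arcs into 𝑘 paths.

```python
from collections import deque

def max_disjoint_paths(arcs, s, t, k):
    """Number of arc-disjoint <s, t> paths (unit-capacity Ford-Fulkerson),
    stopping early once k augmenting paths have been found."""
    residual = {}
    for u, v in arcs:
        residual.setdefault(u, {})[v] = 1
        residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while flow < k:
        parent, queue = {s: None}, deque([s])
        while queue and t not in parent:       # BFS for an augmenting path
            u = queue.popleft()
            for v, c in residual.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            break                              # no more augmenting paths
        v = t
        while parent[v] is not None:           # push one unit of flow
            u = parent[v]
            residual[u][v] -= 1
            residual[v][u] += 1
            v = u
        flow += 1
    return flow

def mfba(arcs, cap, s, t, k):
    """MFBA sketch: drop low-capacity arcs while k disjoint <s, t> paths
    survive, then decompose the remaining arcs into the k paths."""
    kept = set(arcs)
    for a in sorted(arcs, key=lambda a: cap[a]):   # ascending capacity
        kept.discard(a)
        if max_disjoint_paths(kept, s, t, k) < k:
            kept.add(a)                            # arc is essential: reinsert
    out = {}
    for u, v in kept:                              # flow decomposition
        out.setdefault(u, []).append(v)
    paths = []
    for _ in range(k):
        path, node = [], s
        while node != t:
            nxt = out[node].pop()
            path.append((node, nxt))
            node = nxt
        paths.append(path)
    return paths

# Toy instance (ours): two disjoint paths of widths 3 and 5 survive.
cap = {("s", "u"): 7, ("u", "t"): 3, ("s", "v"): 5, ("v", "t"): 9, ("u", "v"): 4}
paths = mfba(list(cap), cap, "s", "t", 2)
print(sorted(min(cap[a] for a in p) for p in paths))  # [3, 5]
```

The greedy walk in the decomposition step always reaches 𝑡 because the surviving arc set is a minimal one supporting 𝑘 disjoint paths, hence cycle-free and flow-conserving at every intermediate node.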

Fix-and-Optimize Heuristic
The Fix-and-Optimize Heuristic (F&OH) is an 𝑂(𝑘|𝐴|²) preprocessing heuristic for the WKDPP. Given a WKDPP instance, it aims at computing a reduced arc set 𝐴′ ⊆ 𝐴 to be fed into an ILP mathematical model, thus speeding up its resolution. The algorithm searches for a maximum capacity value such that all arcs with lower capacity can be removed without greatly affecting the optimal solution.
Algorithm 2 states the F&OH. It receives as input 𝐺, 𝑠, 𝑡, and 𝑘. Initially, it generates an arc set 𝐴′ as an exact copy of 𝐴 (line 1) and creates a variable to store the maximal minimum arc capacity value such that arcs with smaller capacity can be removed while keeping 𝑘 disjoint paths (line 2). Then, it performs a flow decomposition in 𝐴′, considering that each arc capacity is 1, and stores the paths found in 𝑃. While there are at least 𝑘 disjoint paths in 𝐴′, it computes the minimum arc capacity in 𝑃 and removes from 𝐴′ every arc with capacity less than or equal to it (lines 5-7). Finally, it generates a new arc set 𝐴′ by removing from 𝐴 every arc with capacity smaller than the computed min-arc value (line 8). We use the computed arc set 𝐴′ as input to an ILP solver, using any arc-based model for the WKDPP.
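The threshold search at the heart of the F&OH can be sketched as follows (a simplified variant, ours: it scans candidate capacity thresholds directly instead of using repeated flow decompositions, and omits the downstream ILP step):

```python
from collections import deque

def k_disjoint_ok(arcs, s, t, k):
    """True if the arc set still admits k arc-disjoint <s, t> paths
    (unit-capacity max flow, stopped after k augmentations)."""
    residual = {}
    for u, v in arcs:
        residual.setdefault(u, {})[v] = 1
        residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while flow < k:
        parent, queue = {s: None}, deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in residual.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return False
        v = t
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= 1
            residual[v][u] += 1
            v = u
        flow += 1
    return True

def foh_reduce(cap, s, t, k):
    """Keep raising a capacity threshold while k arc-disjoint paths
    survive; return the reduced arc set A' for the downstream ILP."""
    best = list(cap)
    for tau in sorted(set(cap.values())):          # candidate thresholds
        kept = [a for a in cap if cap[a] >= tau]
        if not k_disjoint_ok(kept, s, t, k):
            break
        best = kept
    return best

# Toy instance (ours): the width-1 detour s-w-t is safely pruned.
cap = {("s", "u"): 7, ("u", "t"): 3, ("s", "v"): 5, ("v", "t"): 9,
       ("s", "w"): 1, ("w", "t"): 1}
reduced = foh_reduce(cap, "s", "t", 2)
print(sorted(reduced))
```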

Computational experiments
We performed the computational experiments on a single core of an Intel Xeon E5645 CPU at 2.4 GHz with 32 GB of RAM, running Ubuntu Server 14.04. We used the branch-and-bound implementation of ILOG CPLEX version 12.6 with default parameter settings to solve the proposed models and the F&OH remaining model. We implemented the heuristics in C++, compiled with GNU g++ 6.2. We limited the running time of all algorithms to 7200 s.
Two instance groups were used in the computational experiments. The first group consists of complete graphs with |𝑁| ∈ {100, 200}. We randomly generated the arc capacities with a normal distribution 𝑁(𝜇, 𝜎²), where 𝜇 = 100 and 𝜎² = 25. The second group was generated with the Transit Grid graph generator of G. Waissi and J. Setubal. These instances are sparse graphs whose topology is similar to a transportation network, and they were previously used in the computational experiments of Truffot and Duhamel [21]. Despite being sparse, these instances were built in such a way that the size of the minimum cut-set is always greater than or equal to 10 (the maximum value of 𝑘 evaluated in this work), thus ensuring that all instances have at least 10 edge-disjoint paths according to Menger's Theorem [29]. Figure 1 presents an example of such a topology. Differently from the first group, we generated the arc capacities of these instances by means of a uniform distribution 𝑈(1, 200). This instance group also contains two subsets, one with |𝑁| = 100 and the other with |𝑁| = 200. Besides, each subset consists of 10 instances; therefore, we evaluate a total of 40 instances in our computational experiments.
Notes. (*) Average time in seconds with respect to optimal and feasible solutions. (**) We found no feasible solution for this instance set.
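For reference, the two capacity distributions can be sampled as below (a sketch, ours; note that 𝜎² = 25 corresponds to a standard deviation of 𝜎 = 5, since `random.gauss` takes the standard deviation, not the variance):

```python
import random

random.seed(42)  # for reproducibility

# Complete-graph group: capacities ~ N(mu = 100, sigma^2 = 25).
complete_caps = [random.gauss(100, 25 ** 0.5) for _ in range(10)]

# Transit-grid group: capacities ~ U(1, 200).
grid_caps = [random.uniform(1, 200) for _ in range(10)]

print(all(1 <= c <= 200 for c in grid_caps))  # True
```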

Mathematical models evaluation
Table 1 shows the computational results of both arc-node models with and without the symmetry-breaking constraints (19). For the sake of brevity, the models presented in Sections 2.2 and 2.3 are respectively referred to as ILP₁ and ILP₂. Besides, ILP₁ with such constraints is denoted as ILP₁ˢ; similarly, ILP₂ with the symmetry-breaking constraints is denoted as ILP₂ˢ. The first column of Table 1 represents the instance subset name. Each subset contains 10 randomly generated instances with the same characteristics. The instance subset names are represented as a tuple 𝑇𝑛, where 𝑇 denotes the instance topology (𝐶 for complete graphs and 𝑇𝐺 for transit grid graphs) and 𝑛 denotes the number of nodes. The second column displays the number 𝑘 of disjoint paths. The third, fourth, and fifth columns refer to ILP₁. The third column displays a triple 𝑜/𝑓/𝑛 with the number of instances solved to optimality (𝑜), the number of instances with feasible solutions (𝑓), and the number of instances without any integer solution found within 7200 s (𝑛). The fourth column displays the average integrality gap computed by CPLEX and its standard deviation. The fifth column shows the model's average running time. The following columns show the same information for ILP₁ˢ, ILP₂, and ILP₂ˢ. We only take into account the relative gap and running time of the instances with optimal or feasible solutions; therefore, we mark the gap and running time of instance subsets without feasible solutions with a dash.

Comparison between 𝐼𝐿𝑃₁ and 𝐼𝐿𝑃₂
The first experiment compares ILP₁ and ILP₂ on the proposed instance set. Table 1 shows that ILP₁ found smaller gaps than ILP₂ for complete graphs with both 100 and 200 nodes. One can see that ILP₁ found optimal solutions for all complete graph instances with 𝑘 = 2 and 𝑘 = 4, while ILP₂ could not properly deal with instances with 𝑘 ≥ 4. Besides, ILP₂ was not able to find integer solutions for complete graphs with 200 nodes and 𝑘 = 10. These results may be due to the great number of additional variables and constraints of model ILP₂, which introduces a new flow variable (𝑓ᵖᵢⱼ) and a new set of flow constraints. Consequently, it was not able to reach gaps as small as those of ILP₁ within the time limit for complete graphs.
On the other hand, ILP₂ performs better than ILP₁ for sparse graphs and large 𝑘 values, as measured by the computational experiments with instances 𝑇𝐺100 and 𝑇𝐺200. It found smaller gaps than ILP₁ for instances 𝑇𝐺100 with 𝑘 ∈ {6, 8, 10} and for instances 𝑇𝐺200 with 𝑘 = 8 and 𝑘 = 10. As 𝑇𝐺100 and 𝑇𝐺200 are sparse graphs, they do not induce a great number of additional variables and constraints in model ILP₂ over ILP₁. Consequently, the strategy used in this model to avoid the big-𝑀 constant proved to be partially effective.

Effectiveness of the symmetry-breaking constraints
The second experiment compares the effectiveness of the symmetry-breaking constraints (19) on models ILP₁ and ILP₂. One can see from Table 1 that the introduction of such constraints significantly improves the first ILP model, since ILP₁ˢ found a greater number of optimal solutions than ILP₁. Besides, the ILP₁ˢ running time is smaller than that of ILP₁ for all instances with 𝑘 ≥ 6. The introduction of constraints (19) also enabled ILP₁ˢ to reduce the relative integrality gap over ILP₁ for all instances not solved to optimality. The same results do not hold for the second ILP model. One can see that the introduction of constraints (19) negatively affects model ILP₂: the model without the symmetry-breaking constraints performs better than its constrained version for all instances with 𝑘 ≥ 6, achieving less than half of the ILP₂ˢ integrality gap in some cases. On the other hand, ILP₂ˢ finds smaller gaps than ILP₂ for both complete and sparse instances with 𝑘 ≤ 4. Such results might be due to the CPLEX scheme for exploring its branch-and-bound tree. When exploring deeper levels of the tree, a great number of variables are fixed; therefore, the solutions found at these levels have less room to be modified. As the introduction of constraints (19) highly constrains the problem, the search tree is more frequently re-initiated in ILP₂ˢ than in ILP₂. Therefore, ILP₂ˢ achieves worse results than ILP₂. The same behavior does not occur with ILP₁ and ILP₁ˢ, since ILP₁ has just one flow subproblem, while ILP₂ is based on 𝑘 flow subproblems and a greater number of variables.

Heuristics evaluation
The last experiment evaluates the heuristics proposed in Section 3, namely the MFBA and the F&OH. It also contrasts their results with those of the MKPB of Wang et al. [14], the most prominent heuristic for the WKDPP in the literature. Tables 2 and 3 show the computational results of this experiment. The F&OH uses ILP₁ˢ to solve the remaining model, since this model has the best average results among the evaluated ILP models, as shown in Table 1.
Table 2 shows the results of the three heuristics. The first column represents the instance subset name. The second column displays the number 𝑘 of disjoint paths computed. The third, fourth, and fifth columns refer to the MKPB. The third column shows the relative deviation of the MKPB over the ILP₁ˢ primal solution, computed as (ILP − 𝐻)/ILP, where 𝐻 denotes the value given by the heuristic and ILP denotes the ILP₁ˢ primal solution value. The fourth column denotes the standard deviation of the MKPB results shown in the previous column. The fifth column reports the heuristic's average running time. Next, the sixth, seventh, and eighth columns give the same information for the F&OH, and the last three columns show the same data for the MFBA heuristic.
Table 3 reports the average arc set reduction for each instance subgroup. The first column displays the number of disjoint paths. The second and third columns refer to instances 𝐶100: the second column shows the number of arcs in the original instance, while the third column displays the average arc set reduction over the 10 instances of the subgroup. The remaining columns show the same information for instances 𝐶200, 𝑇𝐺100, and 𝑇𝐺200.
One can see from Table 2 that the F&OH has the greatest running times, being stopped by the 7200 s time limit in some cases. This is due to the size of the resulting optimization model. The heuristic was able to greatly reduce the arc set of the complete graph instances, as shown in Table 3. However, the F&OH cannot significantly reduce the arc set of the sparse graphs, especially for large values of 𝑘. Despite the resulting arc set size, the reduction approach was effective in providing better solutions than ILP₁ˢ. The F&OH found optimal solutions for complete graphs with 𝑘 ≤ 4 and for sparse graphs with 200 nodes and 𝑘 ≤ 4. Besides, its running time is smaller than that of ILP₁ˢ for such instances. In addition, it improved the ILP₁ˢ primal solution by more than 17% on the 𝑇𝐺200 instances with 10 disjoint paths.
The MFBA and MKPB heuristics behave completely differently from the F&OH. One can see that both MFBA and MKPB running times are much smaller than those of the F&OH: the MFBA's running time never exceeds half a second, while that of the MKPB never exceeds 1.02 s. Moreover, both heuristics were able to improve on the average ILP₁ˢ primal results, especially when |𝑁| = 200, where the average relative deviation was −1.6% for the MFBA and −1.32% for the MKPB. Despite that, in the case of sparse graphs, neither heuristic was able to improve on the ILP₁ˢ solutions: the average relative deviation of the MFBA was 22.08% for instances 𝑇𝐺100 and 11% for instances 𝑇𝐺200, while those of the MKPB were 26.08% and 16.6%, respectively. Comparatively, the average running time of the MFBA was always equal to or smaller than that of the MKPB, which is expected due to the greater computational complexity of the latter [14]. Besides that, the MFBA achieved a smaller average relative deviation than the MKPB in 15 out of the 20 instance subsets. Therefore, these results indicate that the MFBA outperforms the MKPB both in terms of running time and solution quality.

Figure 1. An example of a Transit Grid Graph with 27 nodes and 100 arcs.

Table 1. Evaluation of the symmetry-breaking constraints for models ILP₁ and ILP₂.

Table 2. Evaluation of the heuristics for the WKDPP.

Table 3. Effectiveness of the F&OH in reducing WKDPP instances.