A BI-OBJECTIVE HUMANITARIAN LOGISTICS MODEL CONSIDERING EQUITY IN THE AFFECTED ZONES: APPLICATION TO A RECENT EARTHQUAKE IN MEXICO

In this paper, we propose a bi-objective program to model a post-disaster strategic decision problem. We consider the situation after a catastrophic disaster has occurred, in which temporary distribution centers must be located. The distribution centers consolidate aid to be delivered to affected people. We assume that affected people travel to collect the needed aid from the temporarily located distribution centers. Hence, a predefined mobility radius is considered, which indicates the distance that people are willing to travel to receive aid. Additionally, the aid required by affected individuals is consolidated per affected demand zone, and equity constraints are included to balance the aid given to the affected zones. One objective of the proposed model is to minimize the time employed by demand zones to collect aid. In humanitarian logistics, it is common that the decision maker is associated with either the government or non-profit organizations in charge of relief. Usually, there is a limited budget to conduct the operations. Hence, the decision maker also aims to minimize the cost of locating temporary distribution centers. Both objectives are considered simultaneously. To obtain efficient solutions of this bi-objective problem, an exact AUGMECON method is proposed, which is an improved version of the classic 𝜀-constraint method for multi-objective optimization. To overcome the computational limitations shown by the exact method, a genetic algorithm is also designed and used to approximate the Pareto front. To conduct the computational experiments, a case study and additional random instances are considered. The case study is based on an earthquake that recently occurred in Mexico. The results obtained by both implemented methods are compared by using different well-known metrics, such as the number of solutions, the 𝑘-distance, the size of the space covered, and a coverage measure.
It is shown that, on average, the proposed genetic algorithm outperforms AUGMECON in terms of the quality of the obtained Pareto fronts. The results offer the decision maker the possibility to prioritize either time or cost when locating temporary distribution centers in a catastrophic situation.


Introduction
In recent years, several natural disasters have devastated different regions of the world. A cyclone in Zimbabwe, avalanches in Afghanistan and Pakistan, landslides in Colombia, and floods in China, among others, left over a hundred people dead [17]. Even though the number of natural disasters is lower than in the last decade and mortality has decreased in comparison to the previous average, these recently occurred disasters have had a high economic impact [11]. Because logistics efforts represent around 80% of disaster relief [28], it is important not only to develop strategies to tackle natural catastrophes, but also to reduce the costs of relief operations. The latter may be achieved through multi-objective optimization of humanitarian logistics decisions.
Humanitarian logistics focuses on diverse operations that aid people in case of a catastrophe. These operations start with the coordination of the roles and the participation of different organizations [24,42]. Then, they continue with different tasks, such as locating shelters [10], distributing medical services [56], managing the inventory [5], and collecting debris and reconstructing roads [22], among others. Depending on the moment when the decision-making process takes place, humanitarian logistics can oversee the planning operations, relief distribution, and reconstruction. Generating strategies to solve humanitarian logistics problems involves more than one objective; thus, a multi-objective programming model is needed.
In multi-objective optimization problems, a decision maker is confronted with several conflicting objectives. The aim is to find a set of Pareto-optimal solutions from which the decision maker can choose the solution that best suits his/her needs. In humanitarian logistics, when distributing aid, the objectives could be to optimize response time, travel distance, coverage, reliability, security, cost-effectiveness, or equity (see [20]). Bearing these goals in mind and aiming to develop strategies to help people in need after a disaster, we study the following problem. After a disaster occurs, an overseeing organization, such as the government or a non-profit organization, will use some available public places in the communities (like main squares, schools, and places of worship, among others) to open and operate temporary distribution centers. To decide where to open temporary distribution centers, the overseeing organization must take into account the economic perspective of its decision. Additionally, we assume that people in the affected communities collect the aid from the distribution centers. Hence, they cannot travel more than a threshold distance to receive aid. Since neglecting affected zones is not allowed, an equity constraint is considered in the problem. Given these conditions, our optimization problem chooses the temporary distribution centers for the aid distribution while minimizing cost and delivery time. In particular, a mixed-integer linear bi-objective program is proposed to model the situation studied herein.
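Because the two objectives conflict, the solution concept is the Pareto front. A minimal sketch (pure Python; the (time, cost) pairs are invented for illustration) shows how non-dominated solutions are filtered in a bi-objective minimization:

```python
def dominates(a, b):
    """True if point a = (f1, f2) dominates b in a minimization sense:
    a is no worse in both objectives and a differs from b."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_front(points):
    """Keep only the non-dominated (time, cost) pairs."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (time, cost) outcomes for five candidate location plans.
plans = [(10, 90), (12, 70), (15, 60), (11, 95), (20, 55)]
print(sorted(pareto_front(plans)))  # → [(10, 90), (12, 70), (15, 60), (20, 55)]
```

The plan (11, 95) is discarded because (10, 90) is better in both time and cost; every remaining plan trades one objective against the other, which is exactly the set presented to the decision maker.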
To solve this bi-objective problem, an exact method and a metaheuristic algorithm are proposed. First, an exact AUGMECON method is developed. Preliminary experimental results show that implementing the classic 𝜀-constraint method leads to a limited range of Pareto-optimal solutions using large computational times. In response, we implement an improved version of the 𝜀-constraint method, called AUGMECON, which is proposed in Mavrotas [33]. Indeed, AUGMECON is also an exact method that guarantees finding a Pareto-optimal solution at each iteration. Second, we propose a genetic algorithm to obtain good-quality approximations of the Pareto fronts in short computational times. Both algorithms are tested over a set of instances constructed from the earthquake that occurred in Mexico City in 2017, which is considered during the computational experimentation.
One of the key contributions of this paper is that the proposed model considers a bi-objective programming approach that minimizes the response time and the cost of opening temporary distribution centers. Moreover, it considers that the affected people are the ones who travel to collect the aid given at the temporary distribution centers without surpassing a mobility radius. Also, equity constraints are included in the model to avoid leaving specific affected zones unattended. Additionally, the case study considered in this paper has not yet been analyzed by any other research in humanitarian logistics, which provides another relevant case study to test other models. Finally, this paper proposes an AUGMECON method and a population-based metaheuristic algorithm to efficiently solve the bi-objective problem.
The remainder of the paper is organized as follows. Section 2 presents a literature review regarding multi-objective humanitarian logistics problems, emphasizing the ones that consider time and cost as objective functions. The mathematical bi-objective model is defined in Section 3. Then, Section 4 describes the proposed solution schemes, that is, the exact AUGMECON method and the genetic algorithm. Section 5 shows the numerical results according to the case study under consideration and compares the results obtained through both tested methods. Interesting managerial insights from the case study, based on the obtained results, are also presented. Conclusions and recommendations for future research directions are given in Section 6.

Literature review
Conducting research in humanitarian logistics is challenging insofar as it involves a higher degree of uncertainty than business logistics. Communication is more complex, timely delivery is harder to achieve, and there are limited resources available (see [9]). There is a myriad of approaches to address the challenges presented during and after a natural disaster occurs. Moreover, as stated in Hezam et al. [21], most of the studies in the literature are based on a single objective, and few are based on multiple objectives. Indeed, multi-criteria and multi-objective optimization approaches have received attention in recent years (see a review in [20]). The literature reviews of Gutjahr and Nolz [20], Caunhye [9], and Hezam et al. [21] allow us to compare the contribution of existing optimization models to the area of humanitarian logistics.
Depending on the stage when the decision-making takes place, humanitarian logistics can oversee the operations of planning, relief distribution, and reconstruction (see [28]). In this literature review, we mainly focus on relief distribution models. All the reviewed papers related to relief distribution problems are summarized in Table 1, where we add a description of each case study, similar to the literature review in Juman et al. [25]. Also, it can be noticed that most of these articles present a real case study, in which the proposed models and solution methodologies are applied. Based on Table 1, the following papers are the ones which, just like our problem, consider the minimization of response time and cost (or related objectives) in their objective functions.
There are some papers with multi-objective models in which two out of three considered objective functions correspond to the ones considered in our proposed model. First, in Tzeng et al. [52], the costs are computed as the sum of the costs associated with opening an aid transfer center, transporting aid from the collection centers, and transporting aid from the transfer center to the demand points. The time is computed as the sum of the time of transporting the aid from the collection centers to the transfer centers and from there to the demand points. To solve the problem, fuzzy programming is employed considering only the response time and the transportation costs, which might be redundant since the relationship between response time and transportation cost is not necessarily conflicting. More recently, the study of Praneetpholkrang et al. [43] addresses a shelter location problem to minimize operating costs, evacuation time, and the number of shelters. The authors implement an 𝜀-constraint algorithm and goal programming on instances using flooding data from Surat Thani, Thailand. Numerical results exhibit the benefits of implementing the proposed approach compared with the actual location of shelters.
In Wang et al. [55], the fixed costs of establishing a distribution center and the transportation costs are minimized. The response time is considered as the minimization of the maximum duration time per route. Even though Wang et al. [55] address a routing problem, they consider that a demand point can be supplied by different distribution centers. Additionally, they utilize a non-dominated sorting genetic algorithm and a non-dominated sorting differential evolution algorithm. Indeed, routing and location decisions are integrated in humanitarian logistics, which results in problems that consider multiple objective functions. In particular, Vahdani et al. [54] study a two-stage robust optimization approach to allocate distribution centers in response to disasters. The first stage addresses a location problem that minimizes storage costs, while the second stage is a routing problem that optimizes vehicle travel time and route reliability.
Furthermore, bi-objective models are also reviewed. In Khalilpourazari and Khamseh [26], the first objective function minimizes the costs of establishing permanent blood centers, collecting blood, transporting it to the blood centers, and storing the blood, as well as the fixed costs of transportation and the penalty for blood shortage; the second objective minimizes the transportation time of moving the blood from the collection centers to the blood centers. In that paper, a mobility radius is also considered when deciding where to place the temporary blood collection centers, which is a situation similar to the one presented in our paper. However, Khalilpourazari and Khamseh [26] consider the transportation cost and the transportation time in two different objective functions, which may not necessarily be in conflict. Also, in Rezaei et al. [46], the two objective functions are: the minimization of the weighted average response time over certain possible earthquake scenarios, and the minimization of the cost of establishing warehouses, the penalty for unused goods, the penalty for uncovered demand, the cost of removing perished goods from the warehouse, the cost of buying new goods, and the cost of transporting the goods from the suppliers to the warehouse, minus the revenue from selling goods that have a certain lifespan left and the shortage of goods in the pre-disaster period. Also, an equity parameter is considered in its constraints, similar to the one considered in our model. The main difference is that Rezaei et al. [46] focus mainly on the pre-disaster stage of emergency logistics instead of the relief distribution stage.
Recently, the study of Geng et al. [18] addresses a two-stage optimization approach for shelter allocation. The first stage is a bi-objective problem that determines the type and location of emergency shelters in multiple planning periods, optimizing the suitability of the emergency shelters and the distance travelled by evacuees and delivered materials. The second stage determines the location of short-term shelters and the product flow from distribution centers to shelters, minimizing the distance of product flow and the number of shelters. The solution is limited by constraints on demand satisfaction, budget, and the capacity of shelters and distribution centers. The major differences with our problem are that we assume a single-stage, single-planning period and scenarios of natural or human-caused disasters with limited aid (i.e., demand is not necessarily fully satisfied), which hinders a direct comparison between both models. In addition, Beiki et al. [6] present a location-and-routing problem to allocate temporary healthcare centers and to deliver pharmaceutical commodities. The problem optimizes response times, operational costs, and route reliability. The authors apply a commercial solver to a small randomly generated instance. Finally, Nayeri et al. [36] address a bi-objective location-and-scheduling problem for relief teams in humanitarian logistics to minimize the completion time of a set of incidents and the weighted tardiness of relief operations. The problem includes different completion times for each incident-team pair and time windows to attend the incidents. The authors implement an NSGA-II and a Particle Swarm Optimization algorithm, where numerical results exhibit similar performance of both algorithms when evaluated with different multi-objective metrics.
In Gutjahr and Dzubur [19], a bi-level bi-objective model is proposed. In the upper level, the two objective functions consist of minimizing the cost of opening the distribution centers and minimizing the uncovered demand. Meanwhile, in the lower level, an equilibrium is found when allocating users to distribution centers under the consideration of congestion. In that model, it is assumed that people decide from which distribution center they will collect the aid. The latter supports our assumption that people are in charge of retrieving the required aid and, thus, that distribution costs are associated with that decision.
Ahmadi et al. [2] minimize the distribution time, the penalty for uncovered demand, and the fixed costs of opening local depots. The two objective functions of our interest are aggregated in that single objective function. Because of the conflicting behavior of response time and costs, a single aggregated objective function might modify the Pareto front of the non-dominated solutions. In Döyen et al. [15], a two-echelon stochastic location problem is addressed to minimize a total cost in terms of location, transportation of products, inventory, and product shortage. The authors implement a Lagrangian heuristic with a local search. Numerical results show that the proposed solution method can obtain good solutions for randomly generated instances (up to 50 relief centers and 800 demand points) in short computational times. Balcik and Beamon [3] present a facility location problem in case of disaster with the objective of maximizing the covered demand. The authors present a model based on the maximal covering location problem that considers the facility location, the inventory decisions, and capacity constraints. Then, they analyze the results obtained from the application of the model to different earthquakes around the world and evaluate the economic impact of the strategies presented in their work. Finally, the study of Zhang et al. [57] presents a stochastic location problem to optimize the weighted sum of transportation and deprivation costs (in terms of demand satisfaction and delivery time). The authors implement a commercial solver to generate solutions for instances based on data from Rawls and Turnquist [45] about cities affected by hurricanes. Numerical results show slight differences in the location of facilities while varying the amount of demand in the affected zones.

Mathematical model
Let us suppose that a natural disaster occurs and several affected zones need to receive aid. Usually, affected individuals retrieve the aid in community spaces, such as schools, main squares, or places of worship. The aid is consolidated from collection centers. In these situations, it is important for the affected individuals (grouped in demand zones) to receive the aid as soon as possible. However, sometimes it is also necessary for the government or a humanitarian organization to minimize the cost of operating the temporary distribution centers. The latter is motivated by the limited budget that is usually assigned in this type of disaster. In response, we define a problem to determine the location of distribution centers so as to satisfy a minimum fraction of the demand. This situation can be mathematically modeled as a bi-objective optimization problem, in which the first objective is to minimize the time needed for the aid to go from the temporary distribution centers to the demand zones, and the second objective is to minimize the cost of operating the located temporary distribution centers.
The model is described as follows: let 𝐼 be the set of potential temporary distribution centers that an overseeing organization can operate, and let 𝐽 be the set of all demand zones that need to receive the aid. Consider the parameter 𝑡_𝑖𝑗 as the time it takes to retrieve the aid from a temporary distribution center 𝑖 ∈ 𝐼 by a demand zone 𝑗 ∈ 𝐽, and the parameter 𝑐_𝑖 as the cost of operating the temporary distribution center 𝑖 ∈ 𝐼. In this problem, it is considered that the individuals grouped in the demand zones travel to receive aid from the temporary distribution centers. Hence, the transportation cost is not assumed by the government or humanitarian organization. Recall that the demand zones consider a mobility radius. Hence, let 𝑁(𝑗) be the set of temporary distribution centers 𝑖 ∈ 𝐼 that are within a maximum distance 𝑑_max from the demand zone 𝑗 ∈ 𝐽. In this problem, it is also assumed that the aid needed per demand zone is known and that temporary distribution centers have a certain carrying capacity. Therefore, the parameter 𝑑_𝑗 is the aid needed in a demand zone 𝑗 ∈ 𝐽, and 𝑞_𝑖 is the aid available in a temporary distribution center 𝑖 ∈ 𝐼. Additionally, consider the equity parameter 𝛼_𝑗 as the minimum fraction of aid that a demand zone 𝑗 ∈ 𝐽 must receive; this parameter is introduced in order to avoid neglecting a demand zone.
The decision variables for the bi-objective problem are: 𝑥_𝑖𝑗, the fraction of the demand of demand zone 𝑗 ∈ 𝐽 that is allocated to the temporary distribution center 𝑖 ∈ 𝑁(𝑗), and 𝑦_𝑖, equal to 1 if the temporary distribution center 𝑖 ∈ 𝐼 is opened, and 0 otherwise.
Additionally, to properly compute the travel time, the auxiliary binary variables 𝑤_𝑖𝑗 are considered, where 𝑤_𝑖𝑗 = 1 if the demand zone 𝑗 ∈ 𝐽 collects aid from the temporary distribution center 𝑖 ∈ 𝑁(𝑗), and 0 otherwise. Then, the proposed bi-objective programming model is defined as follows:

min 𝑓1 = ∑_{𝑗∈𝐽} ∑_{𝑖∈𝑁(𝑗)} 𝑡_𝑖𝑗 𝑤_𝑖𝑗 (3.1)
min 𝑓2 = ∑_{𝑖∈𝐼} 𝑐_𝑖 𝑦_𝑖 (3.2)

Subject to:

∑_{𝑖∈𝑁(𝑗)} 𝑥_𝑖𝑗 ≥ 𝛼_𝑗, ∀ 𝑗 ∈ 𝐽, (4.3)
∑_{𝑖∈𝑁(𝑗)} 𝑥_𝑖𝑗 ≤ 1, ∀ 𝑗 ∈ 𝐽, (4.4)
𝑥_𝑖𝑗 ≤ 𝑦_𝑖, ∀ 𝑗 ∈ 𝐽, 𝑖 ∈ 𝑁(𝑗), (4.5)
∑_{𝑗∈𝐽: 𝑖∈𝑁(𝑗)} 𝑑_𝑗 𝑥_𝑖𝑗 ≤ 𝑞_𝑖, ∀ 𝑖 ∈ 𝐼, (4.6)
𝑥_𝑖𝑗 ≤ 𝑤_𝑖𝑗, ∀ 𝑗 ∈ 𝐽, 𝑖 ∈ 𝑁(𝑗), (4.7)
𝑥_𝑖𝑗 ≥ 0, ∀ 𝑗 ∈ 𝐽, 𝑖 ∈ 𝑁(𝑗), (4.8)
𝑦_𝑖 ∈ {0,1}, ∀ 𝑖 ∈ 𝐼, (4.9)
𝑤_𝑖𝑗 ∈ {0,1}, ∀ 𝑗 ∈ 𝐽, 𝑖 ∈ 𝑁(𝑗). (4.10)

The bi-objective problem is defined in (3.1)-(4.10). The first objective function (3.1) minimizes the time in which demand zones receive the aid from the temporary distribution centers that are within their mobility radius. On the other hand, the second objective function (3.2) minimizes the cost of operating the opened temporary distribution centers. This bi-objective problem looks for the solutions that minimize the response time and the cost of opening the distribution centers, simultaneously. The constraints of the model are given by (4.3)-(4.10). The first constraint ensures that each demand zone 𝑗 ∈ 𝐽 receives at least the minimum fraction 𝛼_𝑗 of its total aid needed. Additionally, a demand zone 𝑗 ∈ 𝐽 cannot receive more aid than the one it needs, as expressed in (4.4). Inequality (4.5) specifies that a demand zone 𝑗 ∈ 𝐽 can only receive aid from a temporarily located distribution center 𝑖 ∈ 𝑁(𝑗). The expression in (4.6) limits the aid supplied by a temporary distribution center 𝑖 ∈ 𝐼 to the available aid in that specific temporary distribution center. Inequality (4.7) links the variable 𝑥_𝑖𝑗 to the auxiliary variable 𝑤_𝑖𝑗. Finally, constraints (4.8)-(4.10) state the nature of the decision variables.
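To make the formulation concrete, the following pure-Python sketch evaluates both objective functions and checks the equity, demand, opening, and capacity constraints for a candidate solution. The toy instance and the parameter names (t, c, d, q, alpha, N) are invented purely for illustration; they only mirror the notation described above.

```python
def evaluate(x, y, t, c, d, q, alpha, N):
    """Check feasibility of (x, y) and return the pair (f1, f2).

    x[i, j]: fraction of zone j's demand collected from center i (for i in N[j])
    y[i]:    1 if temporary distribution center i is opened, 0 otherwise
    """
    for j, centers in N.items():
        served = sum(x.get((i, j), 0.0) for i in centers)
        assert served >= alpha[j]              # equity: minimum fraction served
        assert served <= 1.0                   # a zone receives at most its demand
        for i in centers:
            assert x.get((i, j), 0.0) <= y[i]  # only opened centers can serve
    for i in y:
        load = sum(d[j] * x.get((i, j), 0.0) for j in N if i in N[j])
        assert load <= q[i]                    # center capacity
    # travel time counted once for every (center, zone) pair actually used
    f1 = sum(t[i, j] for (i, j) in x if x[i, j] > 0)
    f2 = sum(c[i] * y[i] for i in y)
    return f1, f2

# Invented toy instance: two centers, two zones, full mobility coverage.
t = {(0, 0): 3, (0, 1): 5, (1, 0): 4, (1, 1): 2}
c = {0: 10, 1: 8}
d = {0: 6, 1: 4}
q = {0: 10, 1: 10}
alpha = {0: 0.5, 1: 0.5}
N = {0: [0, 1], 1: [0, 1]}
x = {(0, 0): 1.0, (1, 1): 1.0}   # each zone fully served by its closest center
y = {0: 1, 1: 1}
print(evaluate(x, y, t, c, d, q, alpha, N))  # → (5, 18)
```

Opening both centers costs 18 but yields the smallest total collection time; closing center 1 (if capacity allowed it) would lower cost and raise time, which is the trade-off the bi-objective model captures.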

AUGMECON method
The AUGMECON method developed to find the Pareto front of this problem is now presented. This is an exact algorithm proposed by Mavrotas [33] devoted to solving multi-objective optimization problems. The AUGMECON method is based on the 𝜀-constraint method, which selects one objective function to be optimized and relaxes the other objective functions in order to obtain a mono-objective formulation. Then, the resulting model is solved iteratively while varying the parameter 𝜀, in order to obtain the set of non-dominated solutions. The rationale behind that method is to bound the relaxed objective functions. Nevertheless, the main difference of AUGMECON with respect to the 𝜀-constraint method is that AUGMECON penalizes the relaxed objective functions in such a way that allows obtaining strongly non-dominated solutions. In other words, the value of an obtained solution must be strictly better than the corresponding values of the other solutions in at least one objective function.
Recently, the AUGMECON method has been implemented to provide solutions for bi-objective location-and-inventory problems within the context of supply chains [36]. Hence, to develop this method, the following mono-objective formulation, denoted MHM(𝜀), is obtained by relaxing the objective function given by equation (3.1):

min 𝑓2 − 𝛿 (𝑠/𝑟) (4.1)

Subject to:

𝑓1 + 𝑠 = 𝜀, (4.2)
constraints (4.3)-(4.10), and 𝑠 ≥ 0,

where 𝜀 and 𝑠 represent a bound and a slack variable, respectively. In equation (4.2), the response time is included as a constraint. In addition, 𝑟 is defined by the range of equation (3.1), and 𝛿 is a small parameter, which is usually set between 10^−6 and 10^−3. We consider a value of 𝛿 within this range. The implemented AUGMECON method proceeds as follows:

1: PF ← ∅
2: 𝜀 ← upper bound of 𝑓1
3: while 𝜀 ≥ lower bound of 𝑓1 do
4:   (𝑥, 𝑠) ← Solve(MHM(𝜀))
5:   if MHM(𝜀) is feasible then
6:     PF ← Add(𝑥)
7:     𝜀 ← Get f1(𝑥) − step
8:   else
9:     break
10:  end if
11: end while
12: return PF
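The iterative scheme can be illustrated with a minimal sketch in which the single-objective solver is replaced by brute-force enumeration over a small, invented set of feasible (time, cost) outcomes; in the actual method, each subproblem is a MILP solved exactly. All data and the step size below are assumptions for illustration only.

```python
def solve_subproblem(points, eps, delta, r):
    """min f2 - delta*(s/r) subject to f1 + s = eps, s >= 0 (i.e., f1 <= eps),
    enumerated over a finite set of (f1, f2) outcomes as a solver stand-in."""
    feasible = [p for p in points if p[0] <= eps]
    if not feasible:
        return None
    # rewarding the slack s = eps - f1 breaks ties toward smaller f1,
    # which is what makes the returned point strongly non-dominated
    return min(feasible, key=lambda p: p[1] - delta * (eps - p[0]) / r)

def augmecon(points, step=1, delta=1e-3):
    f1_vals = [p[0] for p in points]
    r = (max(f1_vals) - min(f1_vals)) or 1      # range of objective f1
    eps, front = max(f1_vals), []
    while eps >= min(f1_vals):
        sol = solve_subproblem(points, eps, delta, r)
        if sol is None:
            break
        if sol not in front:
            front.append(sol)
        eps = sol[0] - step                     # tighten the bound on f1
    return front

outcomes = [(10, 90), (12, 70), (15, 60), (11, 95), (20, 55)]
print(augmecon(outcomes))  # → [(20, 55), (15, 60), (12, 70), (10, 90)]
```

Each iteration tightens the time bound just past the last solution found, so the loop walks along the front from cheapest/slowest to fastest/most expensive and never revisits a dominated point such as (11, 95).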

A bi-objective genetic algorithm
Due to the high degree of complexity that location problems entail, it is convenient to consider population-based metaheuristics when looking for solutions. Even though exact optimization techniques exist, they may require long computational times that are not ideal for the immediate response needed in humanitarian logistics. On the other hand, metaheuristics provide other feasible solutions that may have been overlooked by exact optimization tools. To demonstrate the effectiveness of metaheuristics, some of the papers that use those techniques to solve multi-objective humanitarian logistics problems are cited below.
For example, in Ahmadi et al. [2], a variable-neighborhood search algorithm is designed to find solutions to the mono-objective problem of minimizing the distribution time, penalties, and fixed costs of opening distribution centers. The differences between our model and the one presented by Ahmadi et al. [2] have been outlined previously. In Nolz et al. [39], a memetic algorithm is used to solve a multi-objective relief distribution tour in which the main goals are to minimize the risk of the tour, the traveling time, and the total distance between population centers and the facilities set up. Like the model presented in this paper, Nolz et al. [39] consider the mobility of the population as a key part of the model. In Wang et al. [55], the NSGA-II is used to solve their proposed problem; the differences between their model and the one presented in this paper have also been outlined already.
Given the results in the aforementioned papers, and given that our proposed model solves a location problem, a genetic algorithm is chosen to efficiently approximate the Pareto front in order to provide support in the decision-making process. A genetic algorithm is based on the biological process of DNA crossover and mutation in reproduction (see [31]). An overall description of a genetic algorithm is as follows: given an initial population of size 𝑃, the individuals of the population are measured regarding their fitness to survive on to the next generation; if they are fit, they cross over with a different individual of the population. From this crossover, a given offspring will mutate with a certain probability. After this process is completed, the offspring is incorporated into the main population and some existing individuals of the previous population are eliminated. This metaheuristic algorithm allows us to eventually come up with the most successful solutions for a particular problem.
The following are the main steps that make up the genetic algorithm.
- Codification and initial population. For the initial population, a random number 𝑛 between 1 and the total number of available temporary distribution centers is generated. Afterwards, 𝑛 temporary distribution centers are randomly selected. This information is stored in an individual 𝑣 of size equal to the number of available temporary distribution centers; if the temporary distribution center 𝑖 is opened, then 𝑣_𝑖 = 1; otherwise, 𝑣_𝑖 = 0. Because the cost of opening the temporary distribution centers can be implicitly computed from an individual, the only objective function that remains to be minimized is the response time function 𝑓1 in (3.1). If a feasible solution exists for a certain individual 𝑣, the individual is stored in the initial population alongside its corresponding minimized value (fitness) for 𝑓1; if there is no feasible solution, then the individual is discarded. The creation of individuals and the minimization of 𝑓1 are repeated until there are 𝑃 individuals in the initial population.
- Crossover. The most successful individuals are those that are non-dominated. For each individual in the population, a random value between 0 and 1 is drawn. If the number is less than or equal to the fixed crossover probability, the individual is combined with one of the non-dominated individuals. A single-point crossover is considered, that is, a random position 𝑘 in the individual is selected and two offspring result from splitting the individuals at the 𝑘-th position.
- Mutation. For each offspring, a random number between 0 and 1 is drawn. If the number is less than or equal to a fixed mutation probability, then a random position among the opened distribution centers is selected for mutation. In this case, the corresponding distribution center is closed; if the resulting problem is feasible, the solution is stored to update the population.
- Selection. From the population obtained after the genetic operators phase, the non-dominated individuals are added to the new population. To avoid repetition, each individual has its own hash value assigned. If there are still fewer than 𝑃 individuals in the population, then random solutions are selected from the dominated offspring or from the dominated original population. The latter is performed to maintain the size of the population and to include diversity.
- Stopping criterion. If no new non-dominated individuals have been added to the population for a predefined number of generations, the genetic algorithm stops.
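The single-point crossover and the mutation operator described above can be sketched as follows (pure Python; the binary encodings and the crossover point are illustrative only, and the feasibility check performed by the full algorithm is omitted):

```python
import random

def single_point_crossover(parent_a, parent_b, k):
    """Split both parents at position k and swap the tails: two offspring."""
    return parent_a[:k] + parent_b[k:], parent_b[:k] + parent_a[k:]

def mutate(individual, rng):
    """Close one randomly chosen opened distribution center (flip a 1 to 0)."""
    opened = [i for i, gene in enumerate(individual) if gene == 1]
    if not opened:
        return individual
    child = list(individual)
    child[rng.choice(opened)] = 0
    return child

# Each gene marks whether a candidate temporary distribution center is opened.
a, b = [1, 1, 0, 0, 1], [0, 1, 1, 1, 0]
off1, off2 = single_point_crossover(a, b, 2)
print(off1, off2)  # → [1, 1, 1, 1, 0] [0, 1, 0, 0, 1]
```

Closing a center during mutation can only reduce the cost objective, so the feasibility check (equity and capacity constraints still satisfiable) decides whether the mutated individual survives.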

Computational results
In this section, the experimental results obtained by the AUGMECON method and the bi-objective genetic algorithm over a set of instances generated with real data from the last earthquake that occurred in Mexico City are presented. In this regard, the solutions obtained from the exact method are considered to measure the quality and the efficiency of the results obtained by the metaheuristic. Moreover, the exact and approximated Pareto fronts obtained are compared in a fair manner. Both algorithms are coded in C++ using the libraries of CPLEX 12.10 to solve the reduced optimization problem obtained when fixing the location variables. The experimentation is conducted on a computer with an Intel Core i9 processor at 4.70 GHz with 64 GB of RAM.

Case study and random generated instances
Data from the earthquake that occurred on the 19th of September, 2017 in Mexico City are used to generate instances of our problem. The epicenter of this phenomenon was located in the Cocos Plate at a depth of 57 kilometers, and the earthquake was responsible for at least 291 deaths, 2614 injuries (reported by health services), and significant damage to infrastructure and service networks (see [40]). The study of Cruz Atienza et al. [12] states that the earthquake of September 19th, 1985 released 32 times more energy. However, there were still many affected areas in Mexico City, since the epicenter of the 2017 earthquake was closer to the urban area and to the soil surface compared to the 1985 earthquake. Moreover, Mexico City is located on "soft" soil, which promotes high acceleration rates at the top of buildings with more than seven floors. Due to the violent soil movements during the earthquake, some of these buildings collapsed. It is important to highlight that information about the acceleration rates of the buildings, the coordinates of some affected zones, and reports of missing persons was available just a few minutes after the earthquake occurred. The latter is critical in the decision-making process of humanitarian relief for allocating shelters, providing health services, distributing aid kits, and conducting rescue activities, among others. Detailed information related to the earthquake is available in Datos Abiertos [13]. Due to the nature of our proposed optimization problem, we focus on the location and number of affected individuals near the Central District of Mexico City. These data are used to evaluate a potential practical implementation of our methodology. Figure 1 shows a map with the affected zones represented by blue circles and the location of shelters denoted by red crosses. Notice that the selected affected zones are in the south region of Mexico City.
Therefore, 14 instances based on the affected areas in Mexico City are generated. The input data considered for the instance generator are the coordinates and the population of each demand centroid 𝑗 ∈ 𝐽 (obtained from [13]), a mobility radius, the set of locations used as centers to collect aid in the case study, and a number 𝑛 of generated locations of potential facilities. Then, 𝑛 demand centroids are randomly chosen as locations for temporary distribution centers. Next, demand centroids with no access to their demand, regarding the distances to the locations and the mobility radius, are removed. Finally, the capacity of each location 𝑖 ∈ 𝐼 is generated as a fraction of the total potential demand considering the centroids with access to location 𝑖 ∈ 𝐼. The generated instances are divided into four different classes, A, B, C, and D, in terms of the number of locations 𝑛, the coverage radius, and the fraction of demand. Table 2 details the instance classes and their parameters, which represent different scenarios in a region affected by the earthquake, where the demand is set as the population of each demand zone. Notice that instances of type A only consider the locations of temporary distribution centers in the case study and a constant "fraction of demand", but different values for the coverage radius are used. Thus, those instances focus on different mobility parameters while individuals are searching for humanitarian aid. Instances of type B focus on the effect of demand deprivation for affected individuals, since the "fraction of demand" is varied. Instances of type C consist of scenarios with different coverage radii and more potential location sites compared to the real case. Finally, instances of type D represent scenarios with a large number of locations for facilities and a constant coverage radius.
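A simplified sketch of this instance-generation procedure is shown below (pure Python; the coordinates, populations, and parameter names are invented for illustration and do not correspond to the actual case-study data):

```python
import math, random

def build_instance(centroids, population, n_centers, radius, frac, seed=0):
    """Toy instance generator: pick n_centers centroids as candidate locations,
    build the mobility sets N(j), drop unreachable centroids, and size each
    capacity as a fraction of the demand that can reach the location."""
    rng = random.Random(seed)
    centers = rng.sample(range(len(centroids)), n_centers)
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    # N[j]: candidate centers within the mobility radius of demand centroid j
    N = {j: [i for i in centers if dist(centroids[j], centroids[i]) <= radius]
         for j in range(len(centroids))}
    # remove demand centroids with no reachable candidate location
    N = {j: cov for j, cov in N.items() if cov}
    # capacity of each location: fraction of the total demand that can reach it
    q = {i: frac * sum(population[j] for j in N if i in N[j]) for i in centers}
    return centers, N, q

# Invented toy data: four demand centroids on a plane with their populations.
centroids = [(0, 0), (1, 0), (5, 5), (1, 1)]
population = [100, 50, 80, 30]
centers, N, q = build_instance(centroids, population, 2, 2.0, 0.5)
```

Varying `radius` reproduces the spirit of class A (mobility scenarios), varying `frac` that of class B (demand deprivation), and varying `n_centers` that of classes C and D.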
It is important to emphasize that the datasets generated and analyzed during the current study are available from the corresponding author on reasonable request. Now, the numerical results obtained by solving the generated instances with our proposed solution approaches are presented.

Numerical results
To measure the quality of the Pareto fronts obtained by the AUGMECON method and by the genetic algorithm, the following metrics are used (see [32, 58] for further details).
- N Sols: the number of solutions in the approximated Pareto front.
- k-distance: a density measure based on the distance between each non-dominated solution and its k-th nearest neighbors. We consider k = 4, and the average of those distances is reported.
- SSC: the size of the space covered, which is computed as the area of the polytope formed by the solutions in the front and the axes in the objective space.
- C(A, A′): a coverage measure that represents the fraction of solutions in the set A′ that are dominated by or equal to the solutions in the set A. It is computed as C(A, A′) = |{a′ ∈ A′ : ∃ a ∈ A, a ⪯ a′}| / |A′|, where a value of C(A, A′) = 1 indicates that all the solutions in A′ are dominated by or equal to the solutions in A.
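A compact sketch of these metrics for two-dimensional minimization fronts is given below; reading SSC as the area dominated up to a reference point is one concrete interpretation of the polytope described above, and all names are illustrative:

```python
import math

def dominates(a, b):
    """True if objective vector a dominates or equals b (minimization)."""
    return all(x <= y for x, y in zip(a, b))

def k_distance(front, k=4):
    """Average distance from each point to its k-th nearest neighbor."""
    dists = []
    for p in front:
        d = sorted(math.dist(p, q) for q in front if q is not p)
        if len(d) >= k:
            dists.append(d[k - 1])
    return sum(dists) / len(dists)

def ssc(front, ref):
    """Size of the space covered: area dominated by a 2-D minimization
    front up to a reference point, swept in ascending f1 order."""
    area, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(set(front)):   # f1 ascending, f2 descending
        if f2 < prev_f2:
            area += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return area

def coverage(A, B):
    """C(A, B): fraction of solutions in B dominated by or equal to A."""
    return sum(any(dominates(a, b) for a in A) for b in B) / len(B)
```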
The results obtained by the AUGMECON method are presented in Table 3. When a "-" appears, it is due to the lack of solutions in the Pareto front to compute the corresponding measure. Column "N Sols" shows the average number of solutions in the Pareto front, while column "SSC" exhibits the mean value of the size of the space covered; column "k-dist" shows the average of the density measure; and finally, the required computational times, in seconds, are presented in column "Time (s)". On the other hand, the results reported for the genetic algorithm are summarized as follows. It is important to mention that, due to the stochasticity involved in this metaheuristic, ten runs are executed for each instance. Also, different stopping criteria, in terms of the number of generations (10, 25, and 50), are tested. Additionally, the crossover and mutation probabilities are fixed to 0.4 and 0.2, respectively. Table 4 presents the average results for the genetic algorithm. Recall that approximations of the Pareto front are obtained under this approach, and the average results of the ten runs are reported.
A direct comparison between the AUGMECON method and the genetic algorithm with 50 generations as stopping criterion is presented. In summary, the genetic algorithm obtains 206.68% more non-dominated solutions than the AUGMECON, and it uses only 4.12% of the total average time required by the AUGMECON. The values of SSC obtained by the genetic algorithm lead to a relative average improvement of 806.07% compared to the results of the exact method. However, it is worth mentioning that, for instance A2, the AUGMECON is able to obtain almost three times more solutions. Consequently, the space covered and the k-distance measures are significantly better in that specific instance. For instance A3, the AUGMECON obtained only two solutions within 2 h, whilst the genetic algorithm obtains 48.60 solutions on average.
Table 5. Example of the C(A, A′) measure for instance A1.
It is important to mention that, although the presented findings compare the AUGMECON against the genetic algorithm with 50 generations, a similar behavior is observed when the genetic algorithm uses a different number of generations.

Identifying the best approximations of the Pareto front
Besides comparing the average values of the different measures, the "best approximation" of the Pareto front for each instance, among the multiple runs of each algorithm, is identified. In particular, the coverage metric C(A, A′) is used to determine the approximation with the largest number of non-dominated solutions. For example, Table 5 shows the coverage measure C(A, A′) for all pairs (A, A′) of approximations of the Pareto front obtained by using the genetic algorithm with 10 generations on instance A1. Notice that C(A3, A2) = 0.39 indicates that 39% of the solutions in approximation A2 are dominated by approximation A3.
As a result, the "best approximation" may be the one with the minimum measure in its respective column of Table 5. However, some of these approximations contain only a few solutions. In response, the following ratio is used to identify the best run r*, considering a trade-off between the coverage measure and the number of solutions in the approximation.
It is convenient to point out that R represents the set of all the approximations obtained with one of the implemented algorithms for a specific instance. In the case of instance A1, a value of r* = 3 is obtained with formula (5.1), i.e., the best approximation of the Pareto front is obtained in the third run (see the brown plot in Fig. 2).
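Since formula (5.1) is not reproduced in this excerpt, the sketch below only computes the pairwise coverage matrix of Table 5 and a simple column criterion (the total coverage suffered by each run); the actual selection rule additionally weighs the number of solutions in each approximation:

```python
def dominates(a, b):
    """True if objective vector a dominates or equals b (minimization)."""
    return all(x <= y for x, y in zip(a, b))

def coverage(A, B):
    """C(A, B): fraction of solutions in B dominated by or equal to A."""
    return sum(any(dominates(a, b) for a in A) for b in B) / len(B)

def coverage_matrix(runs):
    """Pairwise coverage C(A_r, A_r') between all runs, as in Table 5."""
    return [[coverage(A, B) for B in runs] for A in runs]

def column_scores(M):
    """Total coverage suffered by each run from the other runs;
    a lower score suggests a better candidate approximation."""
    n = len(M)
    return [sum(M[r][c] for r in range(n) if r != c) for c in range(n)]
```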

AUGMECON method versus Genetic algorithm
Once the best approximation for each pair (instance, variant of the genetic algorithm) has been identified, an analogous procedure is used to determine the best variant of the genetic algorithm for each instance. For example, the coverage measure C(A, A′) is computed with the best approximations obtained by the genetic algorithm using 10, 25, and 50 generations. The results indicate that, for all the instances, the best variant of the implemented genetic algorithm is the one that considers 50 generations. Finally, a fair comparison between both implemented approaches can be done. Table 6 shows a comparison between the best solutions found with the AUGMECON method and the genetic algorithm with 50 generations, in terms of the coverage measure: over the 14 instances, C(AUGMECON, Gen50) takes the values 0.65, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.07, 0.02, 0.00, 0.00, 0.00, and 0.00, whereas C(Gen50, AUGMECON) equals 0.00 for every instance. As expected, the AUGMECON solutions are non-dominated. However, the first row indicates that the genetic algorithm generates complementary solutions for this problem. The biggest difference can be seen in instance A1, where the Pareto front obtained by the AUGMECON dominates 65% of the solutions in the Pareto front of the genetic algorithm. Figure 3 presents an example of the latter comparison for instance C2. As shown in Table 6, the best approximation obtained with the genetic algorithm with 50 generations (see Gen50 in the blue plot) provides many non-dominated points, while the Pareto front obtained by the AUGMECON method (see the red plot) finds a point that only dominates three points given by the genetic algorithm. The appendix shows analogous plots for all the instances considered for the Mexican case study.
In general, the numerical results obtained indicate that the genetic algorithm has a more robust behavior in finding better approximations of the Pareto front, considering different measures such as coverage, space covered, and density.
Indeed, the quality of the AUGMECON approximations can be improved by allowing a large amount of computational time. Additional experimentation shows that using a stopping criterion of 24 h leads to a significant improvement in the quality of the approximation of the Pareto front for 20% of the 14 randomly generated instances. However, this option should be carefully considered, since we are dealing with a decision problem in humanitarian relief, where shorter computational times are desired. In contrast, the proposed genetic algorithm is a non-deterministic method that requires short computational times (approximately 5 min on average over instances A-D); thus, we can implement a multi-start approach with the genetic algorithm and consider the union of the approximations obtained in each run. Notice that this approach leads to significantly better results, as shown in Figure 2, while using a significantly smaller amount of computational time.
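The multi-start idea amounts to merging the fronts of the individual runs and discarding dominated points. A minimal sketch, assuming each run returns a list of (response time, cost) objective vectors:

```python
def strictly_dominates(a, b):
    """True if a dominates b in both objectives and differs from it."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def nondominated_union(fronts):
    """Merge the approximations of several runs and keep only the
    mutually non-dominated points (minimization in both objectives)."""
    pool = {p for front in fronts for p in front}
    return sorted(p for p in pool
                  if not any(strictly_dominates(q, p) for q in pool))
```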

Implementing our model into a different case study
Since we are dealing with a location problem that can be implemented in different humanitarian relief scenarios, we consider another case study.In particular, we use the case study of Rawls and Turnquist [45] and Zhang et al. [57], which involves 30 cities affected by hurricanes in the southeastern USA.
To design our instances, we use the number of inhabitants of each city (obtained from the search engine of [53]) and the geographical coordinates of latitude and longitude. Then, the demand zones i ∈ I in our problem represent the 30 cities of Rawls and Turnquist [45], where the demand parameter d_i is fixed as a fraction (0.0277, as used in [57]) of the number of inhabitants of city i, and the parameter t_ij is the Euclidean distance from i to j. We define different instances in terms of the number of potential sites |J| and the maximum travel time parameter t_max. In particular, we define instances E1, E2, E3, E4, E5, and E6 for the (|J|, t_max) pairs (10, 300), (10, 500), (10, 800), (20, 300), (20, 500), and (20, 800), respectively. We randomly choose the locations j from the set of cities, whereas the capacity q_j is defined as a fraction (0.75) of the total demand of all zones i with travel time t_ij ≤ t_max, and the fixed cost is randomly generated between 500 and 1000. Finally, we set the minimum fraction of fulfilled demand to 0.5.
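This construction can be sketched as follows; the city data below are toy placeholders (not the actual figures from Rawls and Turnquist [45]), and the names mirror the description above:

```python
import math
import random

def build_instance(cities, n_sites, t_max, seed=0):
    """cities maps a name to ((x, y) coordinates, inhabitants)."""
    rng = random.Random(seed)
    # demand d_i: fraction of inhabitants, as used in Zhang et al. [57]
    demand = {i: 0.0277 * cities[i][1] for i in cities}
    # potential sites are randomly chosen among the cities
    sites = rng.sample(sorted(cities), n_sites)
    # travel time proxy t_ij: Euclidean distance between coordinates
    dist = {(i, j): math.dist(cities[i][0], cities[j][0])
            for i in cities for j in sites}
    # capacity q_j: 0.75 of the demand reachable within t_max
    capacity = {j: 0.75 * sum(demand[i] for i in cities
                              if dist[i, j] <= t_max)
                for j in sites}
    # fixed installation cost drawn uniformly from [500, 1000]
    fixed_cost = {j: rng.uniform(500, 1000) for j in sites}
    return demand, sites, dist, capacity, fixed_cost
```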
We obtain similar approximations of the Pareto front for each instance by implementing ten runs of our genetic algorithm. For example, we observe no differences in the approximations for instances E1 and E2, which represent scenarios with a small number of potential sites and a small value of t_max. In contrast, the left panel of Figure 4 exhibits slight differences among the approximations of the Pareto front for instance E3. In the latter case, we can identify the "best approximation" (see the right panel of Fig. 4). Figure 5 presents the approximations of the Pareto front for instance E6, which represents the scenario with the largest values of |J| and t_max. Notice that we obtain more dominated points compared with instance E3, and our algorithm generated a different approximation in each run. We use the best approximations to compare scenarios E3 and E6. Notice that the minimal response time improves by 33.93% when using a larger number of potential sites, since the minimum values of the response time for instances E3 and E6 are 4941.3 and 3264.62, respectively (left extreme points of the best approximations). However, the minimum value of the total cost is nearly 2000 in both scenarios (see the right extreme points of the best approximations). Now, we compare the solutions with a good balance between the objective functions, i.e., the points near the ideal solutions (highlighted with red circles in the best approximations of Figs. 4 and 5). The "balance solution" of instance E3 leads to values of 6259.94 and 4424 for the response time and cost, respectively, while we obtain values of 5173.28 and 5880 in the case of instance E6. The latter difference in the costs exists because the balance solutions of E3 and E6 use six and nine facilities, respectively; recall that the total cost depends on the installation of facilities (the users of the service network absorb transportation costs). Regarding computational times, instance E3 required 2.41 s to obtain the Pareto front, and instance E6 required 4.87 s.
The study of Zhang et al. [57] aims to minimize different measures of deprivation cost functions while allocating pre-disaster relief centers, and it shows that there are only slight differences in the location of facilities when the demand varies. In contrast to Zhang et al. [57], we define a deterministic optimization problem to minimize installation costs and response times when allocating relief centers, and we consider a mobility radius for the inhabitants of the demand zones. Thus, the numerical results show that the number and location of relief centers vary significantly across the solutions in the approximations of the Pareto front, due to the characteristics of our problem. In general, we develop a fast optimization approach that can be useful to support decision-making processes by providing information about the impact of location decisions on the minimization of cost and the reduction of response time.

Conclusions and research directions
In this research, we propose a bi-objective location problem to minimize operation cost and response time in the context of humanitarian logistics. In particular, we represent our problem as a mixed-integer linear bi-objective program, and we implement an exact AUGMECON method and a genetic algorithm; the latter approximates the Pareto front for instances based on two case studies. Most of the instances in our experimental stage are intractable in short computational times using the AUGMECON method. In contrast, our genetic algorithm is a fast tool that obtains better-quality approximations for most instances compared with the AUGMECON. The latter is an important characteristic that enables a sensitivity analysis of the problem parameters. For example, increasing the value of the minimum fraction of fulfilled demand fosters an egalitarian demand satisfaction, which is a relevant characteristic for decision makers in the public sector.
Even though information in humanitarian logistics may be considered stochastic, it should be highlighted that nowadays the information flow occurs at higher rates due to current technologies. For example, people used social networks to identify affected zones after the earthquake in Mexico City on September 19th, 2017. In fact, maps to aid the city's population were available within the first hours. Thus, the proposed optimization approach can be complemented with Geographical Information Systems and different measures of spatial accessibility for better decision support [16]. The dynamic nature of information in such contexts demands fast optimization tools that can be run multiple times during the reaction stage after a natural or human-caused disaster. In response, our numerical experimentation on case studies in Mexico and the USA shows that we can analyze the effect of different potential locations on the cost and response time while defining a network of relief centers. Thus, we provide a practical tool that delivers valuable information for decision-makers.
Future research directions may include relaxing the assumptions to consider the distribution of perishable goods, which may imply hard time-window constraints, or the consideration of items with different priorities due to the particular needs of the affected zones. Finally, other measures that are relevant from a social perspective may be considered: for example, the equity of demand satisfaction in the distribution of vaccines in a pandemic context, where equity is essential to reach the most vulnerable people and achieve herd immunity. The latter could be modeled by optimizing the minimum fraction of fulfilled demand, which is bounded with a hard constraint in our problem.

Appendix A.
We present the best approximations obtained by the genetic algorithm with 50 generations and by the AUGMECON for all the generated instances.
We set the tolerance δ = 10^−5 and Δ = UB − LB, where UB and LB are the upper and lower bounds of the objective function (3.1), respectively. The value of the bounds is computed by using a lexicographic optimization approach. Algorithm 4.1 describes the AUGMECON method. The function Compute_Bounds() obtains the upper and lower bounds for equation (3.1). Solve(ε) computes the solution of problem MHM for the value ε and indicates whether the problem is feasible. Add() incorporates the solution into the Pareto front, and Get_f1() evaluates the solution in equation (3.1).
Algorithm 4.1. AUGMECON.
Require: Problem data
Ensure: Pareto front PF
1: PF ← ∅
2: (UB, LB) ← Compute_Bounds()
3: ε ← UB
4: feasible ← True
5: while feasible do
6:   (x, feasible) ← Solve(ε)
7:   if feasible then
8:     Add(PF, x)
9:     ε ← Get_f1(x) − δ
10: return PF
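The loop of Algorithm 4.1 can be illustrated with a minimal Python sketch; the finite candidate set below replaces the actual MILP solver for MHM, so everything here is a toy stand-in rather than the implementation used in the paper:

```python
# Toy illustration of the AUGMECON loop: Solve() is mimicked by
# scanning a finite set of candidate (f1, f2) outcomes, where the
# epsilon-constraint is imposed on the first objective.

DELTA = 1e-5
CANDIDATES = [(2.0, 9.0), (4.0, 5.0), (6.0, 3.0), (10.0, 1.0)]

def solve(eps):
    """Minimize f2 subject to f1 <= eps; None if infeasible."""
    feasible = [s for s in CANDIDATES if s[0] <= eps]
    return min(feasible, key=lambda s: s[1]) if feasible else None

def augmecon(ub, lb):
    """Sweep epsilon from UB downwards, tightening past each solution."""
    pf, eps = [], ub
    while eps >= lb:
        sol = solve(eps)
        if sol is None:
            break
        pf.append(sol)
        eps = sol[0] - DELTA  # Get_f1(sol) - delta
    return pf
```

Each iteration yields one efficient point, so the loop enumerates the whole (finite) Pareto front of the toy problem.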

Figure 1. Location of affected areas and centers to collect aid in Mexico City due to the earthquake of September 19th, 2017 (map generated using data from Datos Abiertos [13] and © OpenStreetMap contributors).

Figure 2. Approximations of the Pareto front for instance A1 obtained with the genetic algorithm with 10 generations.

Figure 4. Pareto front approximations for instance E3 using ten runs of our algorithm, and best approximation.

Figure 5. Pareto front approximations for instance E6 using ten runs of our algorithm, and best approximation.

Figure A.1. Plots of the best Pareto fronts for the A1 instance.

Figure A.2. Plots of the best Pareto fronts for the A2 instance.

Figure A.3. Plots of the best Pareto fronts for the A3 instance.

Figure A.4. Plots of the best Pareto fronts for the B1 instance.

Figure A.5. Plots of the best Pareto fronts for the B2 instance.

Figure A.6. Plots of the best Pareto fronts for the B3 instance.

Figure A.7. Plots of the best Pareto fronts for the B4 instance.

Figure A.8. Plots of the best Pareto fronts for the C1 instance.

Figure A.9. Plots of the best Pareto fronts for the C3 instance.

Figure A.10. Plots of the best Pareto fronts for the D1 instance.

Figure A.11. Plots of the best Pareto fronts for the D2 instance.

Figure A.12. Plots of the best Pareto fronts for the D3 instance.

Figure A.13. Plots of the best Pareto fronts for the D4 instance.

Table 1. Literature review of relief distribution problems.
The ε-constraint is imposed on the objective function (3.1), and its slack is penalized by the value shown in equation (4.1).

Table 2. Classes of randomly generated instances based on real data of the case study.

Table 6. Comparison between the best approximations obtained with the AUGMECON method and the genetic algorithm.