DESIGNING A BI-OBJECTIVE DECISION SUPPORT MODEL FOR DISASTER MANAGEMENT

This paper addresses the allocation and scheduling of relief teams, one of the main issues in the response phase of disaster management. A bi-objective mixed-integer programming (BOMIP) model is proposed to assign and schedule relief teams in disasters. The first objective function minimizes the sum of weighted completion times of the incidents, and the second minimizes the sum of weighted tardiness of the relief operations. To better reflect real-world conditions, time windows for the incidents and damaged routes are considered in this research. Furthermore, the actual relief time of an incident is calculated according to the position of the corresponding relief team and the fatigue effect. Due to the NP-hardness of the considered problem, the proposed model cannot produce the Pareto frontier in a reasonable time; thus, the NSGA-II and MOPSO algorithms are applied to solve the problem. The results of the two algorithms are compared with respect to different performance metrics on large-size test problems. Finally, a sensitivity analysis and managerial suggestions are provided to investigate the impact of some parameters on the Pareto frontier. Mathematics Subject Classification. 90C11, 90C90. Received March 24, 2020. Accepted September 18, 2021.


Introduction
Disasters have always been a major threat to human societies, leading to many casualties and economic losses in recent years. Given the unpredictability of these incidents (e.g., earthquakes and floods), Emergency Operation Centers (EOCs) must always be prepared to respond to disasters efficiently. Thus, designing a Decision Support Model (DSM) for planning emergency resources is essential to reduce economic losses and the mortality rate. Allocation of emergency resources, which includes preparing relief supplies and rescue workers, is a key issue for the EOC [1][2][3][4]. In the real world, many areas in a disaster impact zone are geographically dispersed, which makes emergency action very difficult under time pressure and resource constraints. Hence, effective planning of the rescue units (involving allocation and scheduling) is one of the most important issues in emergency rescue operations [5]. Also, since the severity level of any disaster may vary across the affected areas, the EOC usually sends relief teams to the most severely affected areas first [6,7]. Efficient planning of the relief teams is vital because it reduces casualties and economic losses during the emergency response phase; nevertheless, this context has received little attention in the literature in recent years.
In disasters, allocation and scheduling of the relief teams can be likened to well-known problems in the scheduling and routing literature [8]. On the routing side, the problem is close to the multiple traveling salesman problem (MTSP) [9]: relief teams and incidents can be regarded as salesmen and cities, respectively, and the total tour time consists of the processing times plus the time needed for the relief teams to travel between two successive incidents. Moreover, the relief operation starts from the emergency operation center and is carried out without preemption.
On the other side, if the incidents are considered as jobs, the relief teams as machines, the relief times as processing times, and the travel times as sequence-dependent setup times, the problem resembles the unrelated parallel machine scheduling problem with sequence-dependent setup times. Given this similarity to scheduling and routing problems, allocating and scheduling relief teams in disasters is an NP-hard problem [8].
Recently, the fatigue effect has been introduced in disaster management by Nayeri et al. [10]. According to this phenomenon, the processing times of the incidents are not fixed over the planning horizon: the rescuers lose their physical power after successive activities, their performance decreases step by step, and the time needed to process the incidents increases accordingly. This concept is also considered in this paper.
In the real world, disaster relief must be carried out within a specified time after the incident; otherwise, the damage will be irreparable. We model this requirement as a time window in the research problem. The time window is especially important when people are buried alive and need to be rescued quickly.
In this paper, a bi-objective mixed-integer programming model is proposed for designing an efficient decision support model to allocate and schedule relief teams in disasters. Due to the NP-hardness of the problem, the NSGA-II and MOPSO algorithms are applied to solve it. In addition to the physical power of the rescuers (fatigue effect), time windows and damaged routes are also considered to better reflect the real world.
This paper is organized as follows. The literature review is presented in the next section. Then, the problem definition and the mathematical model are provided in Section 3. The methodology of the research is described in Section 4. In Section 5, computational experiments are presented. Finally, in Section 6, we conclude the paper and suggest future work.

Literature Review
Based on the literature, three phases are considered to handle the challenges of disaster management: the preparation phase, the response phase, and the recovery phase [11][12][13], each of which involves several tasks. The different phases of disaster management and some of the most important tasks are categorized in Fig. 1 [12,[14][15][16][17][18][19][20][21].
Many studies have focused on relief logistics. For instance, Camacho-Vallejo et al. [22] proposed a bi-level optimization model that improves an existing model for distributing relief supplies. Huang et al. [23] considered some humanitarian objectives in the emergency response to disasters. Some researchers optimized relief supply distribution under uncertainty [24][25][26][27], while others integrated location problems, vehicle routing problems (VRP), and allocation of relief supplies [28][29][30][31][32][33][34][35][36].
Regarding the literature, few studies have dealt with the assignment of rescue teams in disasters. Falasca et al. [37] proposed a multi-objective model to assist in the assignment of volunteers to tasks. Rolland et al. [38] proposed a decision support model to allocate the incidents to the relief teams and schedule them; they applied a hybrid heuristic algorithm based on neighborhood search and the adaptive reasoning technique (ART) to solve the proposed model. Wex et al. [8,[39][40][41] studied the disaster management problem from an allocation and scheduling perspective; they examined the problem under certainty and uncertainty and solved the proposed models with a heuristic method based on Monte Carlo simulation. Yuan et al. [42] studied the capabilities of rescuers for different rescue tasks and proposed an assignment model for emergency tasks based on these capabilities. Zhang et al. [43] formulated the resource allocation model as a two-stage mixed-integer linear programming (MILP) model: in the first stage, the total losses are minimized, and in the second stage, the resource allocation for the rescue service is optimized by a heuristic algorithm. Visheratin et al. [44] proposed an advanced hybrid genetic algorithm based on a scheduling algorithm and the replica reduction concept to schedule components of an early warning system (EWS). Zhang et al. [45] presented a multi-stage model to dynamically allocate relief teams in the response phase of a disaster; they also provided NSGA-II and C-METRIC approaches to solve the problem and developed scheduling strategies for a specific disaster situation. Cunha et al. [46] developed a biased random-key genetic algorithm for allocating and scheduling relief teams in natural disasters; they considered fuzzy processing times for the incidents and showed that the proposed algorithm obtains high-quality solutions.
Rauchecker and Schryen [47] developed a branch-and-price algorithm to handle the scheduling of relief teams in disaster response problems in a reasonable time. Nayeri et al. [10] introduced the fatigue effect in disaster management and proposed a MIP model to design a decision support model for the emergency operation center (EOC); they developed a hybrid metaheuristic algorithm to solve the proposed model and showed that it obtains high-quality solutions.
Santoso et al. [48] developed a non-linear model for the assignment and scheduling of relief teams in disasters under uncertainty and solved it with the GRASP metaheuristic algorithm. Sathish Kumar et al. [49] used queuing theory to study the resource scheduling problem in post-disaster management. Shavarani et al. [50] proposed a non-linear model and developed three metaheuristic algorithms for allocating medical staff to operating rooms during a disaster. Xu et al. [51] proposed a mixed-integer non-linear model for the multi-stage construction of rescue teams in disaster management and used an accelerated bi-level decomposition algorithm to solve it. Bodaghi et al. [52] proposed a MIP model to design a decision support model for the ASRT problem under uncertainty. Based on the reviewed literature, the research gaps and the contributions of this study can be summarized as follows: 1) Previous research has paid little attention to multi-objective programming models for the assignment and scheduling of relief teams. In this study, a bi-objective mixed-integer programming model is proposed to cover this gap. 2) As shown in Table 1, the time window has been less addressed in previous research on assigning and scheduling relief teams. Due to its importance in disaster management, this concept is included in this research.
3) Previous studies have paid little attention to the co-allocation of relief teams. In the real world, a relief operation may need more than one capability, or an incident may have to be allocated to more than one relief team. In this research, collaborative assignment of the relief teams is allowed, and an incident can be allocated to more than one relief team. 4) As shown in Table 1, damaged routes have been less addressed in the literature; this study considers them in the proposed model. 5) Since time is a prominent issue in disaster management, and given the complexity of the research problem, two multi-objective metaheuristic approaches, namely the NSGA-II and MOPSO algorithms, are provided to achieve the Pareto solutions in a reasonable time.

Definition and modeling of the problem
It is assumed that there are m relief teams and n incidents. Each incident has a weighting factor corresponding to its casualties and damage, considered as its destruction or severity level. Each incident requires a specific capability, and the relief teams have different capabilities; hence, an incident cannot be assigned to a relief team unless that team is able to handle it. Relief operations must be started within a certain time interval, so we consider time windows for the incidents. We also consider two dummy incidents denoted by 0 and n+1 as the starting location (depot) and the endpoint, respectively. The processing time and severity level of these dummy incidents are equal to zero, but the travel time from the emergency operation center to the location of each incident is taken into account for every relief team. The travel time to dummy incident n+1 is equal to 0 for all the relief teams.
In this study, parallel processing of an incident by the relief teams is also allowed. This means an incident is considered processed only when all the required relief teams have finished their operations. Furthermore, when a relief team has finished processing an incident, it can be allocated to another one. The schematic of the research problem with five incidents, two relief teams (a medical team and a fire brigade), and two damaged routes is illustrated in Fig. 2. For example, the medical team processes three incidents, and the route between the second and third incidents is damaged. After processing all its incidents, the medical team returns to the depot. Parallel processing of an incident is also shown in this figure (yellow section). To calculate the actual processing time from the normal processing time and the fatigue effect, we use a well-known concept in scheduling problems, the learning effect. The learning effect changes the processing time of a job through the repetition of a particular operation, and it depends on skills, knowledge, tools, and equipment. The concept of the learning effect in scheduling problems was introduced by Biskup [55] and Cheng & Wang [56], and some comprehensive surveys of scheduling with learning effects have been provided in recent years [57][58][59][60][61]. According to Biskup [55], the actual processing time of job i, when scheduled in position r of machine k, is calculated by p_ir = p_i · r^a, where p_ir is the actual processing time, p_i is the normal processing time, and a < 0 is the learning index. In contrast, we model the fatigue effect through the same formula by considering a > 0.
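As an illustration, the position-based fatigue formula above can be sketched in a few lines; the function name and the sample numbers are ours, and the fatigue index value here is only an example.

```python
def actual_processing_time(normal_time, position, fatigue_index=0.3):
    """Biskup-style position-based model p_ir = p_i * r^a.
    With a > 0 (fatigue) the time grows with position r;
    with a < 0 (learning) it would shrink instead."""
    return normal_time * position ** fatigue_index

# A 20-minute incident takes longer when it is the team's 4th task.
first = actual_processing_time(20.0, 1)   # no fatigue on the first task
fourth = actual_processing_time(20.0, 4)  # longer under a = 0.3
```

Note how the same formula covers both regimes: only the sign of the exponent changes.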

Model assumptions
The following assumptions are made in this study:
✓ The number of relief teams is less than the number of incidents.
✓ Each relief team has different capabilities (one relief team can have more than one capability), and each incident requires a specific capability to be rescued.
✓ Processing times (relief times) depend on both the incidents and the relief teams.
✓ The travel times between the incidents depend on the relief teams.
✓ All the relief teams begin their relief operations from the depot and return to it after finishing the relief operation.
✓ There is no precedence relationship between the incidents.
✓ No interruption is allowed for the relief operations.
✓ A weighting factor, named the destruction factor or severity level, is assigned to each incident.
✓ Processing times are not fixed, due to the fatigue effect.
✓ Time windows are considered for the incidents.
Fig. 3 shows a conceptual outline of the proposed DSM.

Constraints
• Allocation constraints
• Scheduling constraints
• Capability constraints
• Time-window constraints
• Damaged-route constraints

Main Outputs
• Allocation and scheduling of the relief teams in disasters
• Reduced total completion time of the incidents
• Reduced tardiness of the relief operations

Mathematical model
In this section, the notations and the proposed BOMIP model are presented. The necessary notations are given below. Based on these definitions, the following BOMIP model is proposed for the research problem.

Indices and Parameters
Objective function (1) minimizes the weighted sum of completion times of the incidents. The second objective function (2) minimizes the weighted tardiness of the relief operations. Constraint set (3) guarantees that only one incident is assigned to each position. Constraint sets (4) and (5) show that every relief team starts its relief operation from the depot (dummy incident 0) and ends it at dummy incident n + 1. Constraint set (6) indicates that the positions of each relief team must be occupied in ascending order. The actual processing times of the incidents are measured in constraint set (7). Constraint set (8) measures the travel time between nodes i and j for relief team k, accounting for damaged routes. Constraint sets (9), (10), and (11) determine the traveling time required for relief team k to reach the incident scheduled in position r. Constraint set (12) calculates the completion time of each incident on the corresponding relief team. Constraint set (13) calculates the start time of the incident in position r of relief team k. Constraint set (14) indicates that relief team k is assigned to incident i only if it is capable of serving it, and allows collaborative assignment of the relief teams. Constraint set (15) gives the completion time of incident i. Constraint set (16) determines the tardiness of each relief operation. Time-window restrictions are enforced by constraint sets (17), (18), and (19). Finally, constraint set (20) defines the value ranges of the variables. Constraint sets (8), (12), and (13) are nonlinear; these expressions can be linearized through Property 1.
Property 1: Suppose C = B × A is the product of a binary variable B and a continuous variable A. The following constraints can be used to linearize this nonlinear term, where BigM is a large positive number [75]:
C ≤ BigM · B   (21)
C ≤ A   (22)
C ≥ A − BigM · (1 − B)   (23)
C ≥ 0   (24)
To linearize constraint set (12), we define an auxiliary variable for the product of the binary assignment variable and the continuous time variable; according to the above property, the constraint can then be rewritten in linear form. Similarly, by defining an auxiliary variable for the corresponding product term, constraint set (13) is converted to linear form. Constraint set (8) is also nonlinear and can be linearized in the same way, where Q_ij is an auxiliary binary variable that is equal to 1 if the associated term takes a value in (0, 1).
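A quick numerical check of Property 1 can make the linearization concrete. The values and the BigM choice below are hypothetical; the point is that the four inequalities admit C = B·A as the only feasible value.

```python
def satisfies_property_1(C, B, A, big_m=1000.0):
    """Check the big-M linearization of C = B * A (B binary, A continuous):
    C <= BigM*B, C <= A, C >= A - BigM*(1 - B), C >= 0."""
    return (C <= big_m * B and C <= A
            and C >= A - big_m * (1 - B) and C >= 0)

feasible_when_off = satisfies_property_1(0.0, 0, 7.5)  # B = 0 forces C = 0
feasible_when_on = satisfies_property_1(7.5, 1, 7.5)   # B = 1 forces C = A
infeasible = satisfies_property_1(3.0, 1, 7.5)         # C != A is cut off
```

When B = 0, constraint (21) pins C to 0; when B = 1, constraints (22) and (23) squeeze C to exactly A.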

Solution framework
To solve the research problem, we face two main challenges: the proposed model is bi-objective, and the research problem is complex. To cope with the first challenge, we have to employ one of the multiple-objective decision-making (MODM) methods. In this regard, the augmented ε-constraint method, one of the efficient MODM approaches widely used for multi-objective problems, is applied to solve the small-size test problems. For the second challenge, according to Wex et al. [8], allocation and scheduling of the relief teams in disasters is an NP-hard problem; hence, heuristic or metaheuristic algorithms are needed to solve the problem in a reasonable time. In this regard, two metaheuristic algorithms, NSGA-II and MOPSO, are applied to solve the small and large-size test problems. The main reasons for selecting these algorithms are as follows: (i) they have shown good performance on problems similar to ours, and (ii) they have simple concepts, easy implementation, and quick convergence.
In recent years, multi-objective parallel machine scheduling problems have been studied by many researchers. A large variety of metaheuristic algorithms, including NSGA-II and MOPSO, have been used to solve them, and the results show the superiority of these algorithms in a reasonable time (see [62][63][64][65]).

Augmented ε-constraint method
The ε-constraint method is one of the classic methods used to handle multi-objective problems. In this method, one objective is considered the main objective function, the other objectives are moved into the constraints, and they are bounded by their lower or upper limits. Using this method, the multi-objective problem is converted into a single-objective one. Mavrotas [66] developed this method further and proposed an improved version called the augmented ε-constraint method (AUGMECON). In this method, lexicographic optimization is first performed to construct the payoff table. Afterward, the other objective functions are moved into the constraints, and appropriate slack and surplus variables are added. From the payoff table, the ranges of the right-hand-side (RHS) parameters are obtained, and the RHS of these constraints is changed iteratively; each iteration yields a Pareto solution. Notably, smaller changes in the RHS value in each iteration give a more exact Pareto frontier. For a minimization problem, the augmented ε-constraint formulation is

min f_1(x) − ε · (S_2 + ... + S_p)   (38)
s.t. f_i(x) + S_i = e_i,  i = 2, ..., p,

where S_i represents the slack/surplus variables and ε is usually between 10^-3 and 10^-6. To prevent scaling problems, it is better to replace S_i with S_i / r_i, where r_i is the range of the ith objective function. The normalized formula is then

min f_1(x) − ε · (S_2/r_2 + ... + S_p/r_p).
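The iteration over RHS values can be sketched on a toy problem. This is a minimal illustration on a finite set of outcome vectors (the data, step count, and ε value are invented for the example), not the GAMS implementation used in the paper.

```python
def augmecon_bi(outcomes, steps=10, eps=1e-3):
    """Sketch of the augmented eps-constraint idea on a finite set of
    bi-objective outcomes (f1, f2), both minimized: bound f2 by a grid
    of RHS values e and minimize f1 - eps * slack at each step."""
    f2_vals = [f2 for _, f2 in outcomes]
    lo, hi = min(f2_vals), max(f2_vals)
    pareto = set()
    for t in range(steps + 1):
        e = hi - t * (hi - lo) / steps        # tighten the RHS each iteration
        feasible = [p for p in outcomes if p[1] <= e]
        if feasible:
            # the augmentation term -eps*slack breaks ties toward smaller f2
            pareto.add(min(feasible, key=lambda p: p[0] - eps * (e - p[1])))
    return sorted(pareto)

front = augmecon_bi([(1, 9), (2, 7), (4, 4), (7, 2), (9, 1), (5, 5)])
```

Here the dominated point (5, 5) never wins an iteration, so only Pareto-optimal outcomes survive; finer grids (larger `steps`) trace the frontier more exactly, as noted above.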

Non-dominated sorting genetic algorithm (NSGA-II)
The NSGA-II [67] is one of the most useful and well-known multi-objective evolutionary algorithms for solving NP-hard problems, and it is efficient at obtaining Pareto solutions. The earlier NSGA could already find multiple Pareto-optimal solutions in one simulation run; NSGA-II adds a second-order sorting criterion that reduces the computational complexity and makes the algorithm more reliable and faster than NSGA. This version uses the crowding-distance calculation, which maintains a good spread of solutions over the objective space and gives the decision maker the freedom to choose among the optimized designs [67]. In this algorithm, a population of chromosomes is first generated, and the fitness of each chromosome is calculated. Parents are then selected using strategies such as the roulette wheel, and a new population is reproduced by applying the crossover and mutation operators. Finally, the combined (larger) population is evaluated and sorted based on ranks and crowding distance.
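The crowding-distance criterion mentioned above can be sketched for the bi-objective case; the sample points are made up, and this is the standard textbook calculation rather than our exact implementation.

```python
def crowding_distance(front):
    """Crowding distance for a front of bi-objective points (f1, f2):
    boundary points get infinity, interior points the normalized width
    of the cuboid formed by their neighbours, summed over objectives."""
    n = len(front)
    dist = [0.0] * n
    for m in range(2):
        order = sorted(range(n), key=lambda i: front[i][m])
        dist[order[0]] = dist[order[-1]] = float('inf')  # keep extremes
        span = front[order[-1]][m] - front[order[0]][m] or 1.0
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][m]
                               - front[order[j - 1]][m]) / span
    return dist

d = crowding_distance([(1, 9), (2, 7), (4, 4), (7, 2), (9, 1)])
```

Points in sparse regions get larger distances, so selection favours them and the front stays well spread.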

Solution representation (Chromosome)
Designing an efficient structure for the chromosome is one of the most important steps in the NSGA-II algorithm. The chromosome structure used in this research is built in three steps, illustrated by the example below.
Suppose that there are 6 incidents (i=6) and 3 relief teams (k=3). Moreover, the capabilities matrix of relief teams is as follows. This matrix shows the ability of the relief teams to process the incidents. For example, based on the following matrix, relief team 1 can process incidents 1, 4, and 5, relief team 2 can process incidents 2, 4, and 6, and eventually relief team 3 can process incidents 3, 5, and 6.
Step 1) First, an i×k matrix is randomly created that determines the allocation and sequence of the incidents on the relief teams. For example, according to matrix 'A', incidents 2, 6, 1, 5, 4, and 3 are allocated, in that order, to relief team 1; incidents 1, 2, 5, 3, 4, and 6 are assigned to relief team 2; and incidents 6, 1, 2, 4, 3, and 5 are allocated to relief team 3. Note that this is an initial allocation matrix that is amended in the next steps. Step 2) The matrix is converted to a vector as below: Step 3) The vector is revised according to the Cap_ik matrix:
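The three steps can be sketched as follows, using the capability sets from the example above. This is a simplified illustration (names are ours): the repair step here simply drops assignments a team cannot handle, whereas the paper's revision procedure may reassign them.

```python
import random

# Capability sets from the text: team 1 -> {1,4,5}, team 2 -> {2,4,6}, team 3 -> {3,5,6}
CAPABILITY = {1: {1, 4, 5}, 2: {2, 4, 6}, 3: {3, 5, 6}}

def build_chromosome(n_incidents=6, teams=(1, 2, 3), seed=0):
    rng = random.Random(seed)
    # Step 1: random allocation/sequence matrix (one permutation per team)
    matrix = {k: rng.sample(range(1, n_incidents + 1), n_incidents) for k in teams}
    # Step 2: flatten the matrix into a single vector of (team, incident) genes
    vector = [(k, i) for k in teams for i in matrix[k]]
    # Step 3: revise against the capability matrix (drop impossible assignments)
    return [(k, i) for k, i in vector if i in CAPABILITY[k]]

chrom = build_chromosome()
```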

Crossover operator
In the crossover operator, two strings are selected as parents with probability Pc, and two new offspring are produced. In the crossover operator proposed in this study, first the two parents are selected and the cut point is determined. Then, the first segment of parent 1 is combined with the second segment of parent 2 to generate primary offspring 1; likewise, the first segment of parent 2 is combined with the second segment of parent 1 to create primary offspring 2. Since the elements before and after the cut point may be repeated, the primary offspring are repaired by removing the repeated elements and substituting the missing ones. The crossover operator applied in this study is illustrated in Fig. 5. Note that this operator acts on matrix 'A' of Section 4.2.1; then, to prevent infeasible offspring, steps 2 and 3 of Section 4.2.1 are applied to the offspring after the crossover operation.
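The combine-then-repair step can be sketched on one team's sequence; the parent sequences and the cut point below are illustrative, not taken from Fig. 5.

```python
def one_point_crossover(parent1, parent2, cut):
    """Single-point crossover with repair: combine the two segments, drop
    repeated genes, then append the missing ones from the other parent so
    each offspring stays a valid permutation."""
    def repair(child, donor):
        seen, fixed = set(), []
        for g in child:
            if g not in seen:
                seen.add(g)
                fixed.append(g)
        fixed += [g for g in donor if g not in seen]  # substitute missing genes
        return fixed
    raw1 = parent1[:cut] + parent2[cut:]
    raw2 = parent2[:cut] + parent1[cut:]
    return repair(raw1, parent2), repair(raw2, parent1)

c1, c2 = one_point_crossover([2, 6, 1, 5, 4, 3], [1, 2, 5, 3, 4, 6], cut=3)
```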

Mutation operator
The mutation operator encourages genetic diversity among the solutions and helps prevent the algorithm from converging to a local optimum. For this purpose, two incidents from the schedule list of each relief team are selected randomly, and their positions are swapped in the solution matrix. The mutation operator is depicted in Fig. 6.
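For one team's schedule, the swap can be sketched as follows (the sequence and the fixed seed are illustrative):

```python
import random

def swap_mutation(schedule, rng=None):
    """Swap two randomly chosen incidents in a relief team's schedule list."""
    rng = rng or random.Random(42)
    mutated = list(schedule)
    i, j = rng.sample(range(len(mutated)), 2)  # two distinct positions
    mutated[i], mutated[j] = mutated[j], mutated[i]
    return mutated

child = swap_mutation([2, 6, 1, 5, 4, 3])
```

Because only positions are exchanged, the set of scheduled incidents is preserved and no extra repair is needed.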

Multi-objective particle swarm optimization (MOPSO)
The MOPSO algorithm is a multi-objective metaheuristic for solving multi-objective programming problems in a reasonable time. MOPSO is an extension of the PSO algorithm introduced by Eberhart & Kennedy [68], a common population-based evolutionary algorithm for continuous optimization problems. It uses the concept of Pareto dominance to find solutions of multi-objective problems, and it employs a secondary population, or external archive, to store non-dominated solutions and guide the search of future generations. The MOPSO and PSO algorithms update the particle velocity and position in the same way. Considering the population as a swarm and each solution as a particle, the position and velocity vectors of the particles are updated in each iteration according to Eqs. (40) and (41):

v_i(t+1) = w · v_i(t) + c_1 r_1 (pbest_i − x_i(t)) + c_2 r_2 (gbest − x_i(t))   (40)
x_i(t+1) = x_i(t) + v_i(t+1)   (41)

where x_i(t) and v_i(t) are the position and velocity vectors of particle i in iteration t, pbest_i is the best position found by particle i, and gbest is the best position vector in the population. w is the inertia weight, c_1 and c_2 are the acceleration coefficients, and r_1 and r_2 are random numbers generated in the interval [0, 1]. The MOPSO and PSO algorithms differ in several respects: first, the update and selection methods of the individual and global guides are different; second, the solution obtained by MOPSO is a Pareto-optimal solution set; and third, MOPSO requires an external archive for storing the non-dominated solutions found by the population.
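One velocity/position update per Eqs. (40) and (41) can be sketched for a real-valued particle; the coefficient values and sample vectors are illustrative only.

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One update per Eqs. (40)-(41): the new velocity blends inertia,
    a pull toward the particle's own best, and a pull toward the
    population best; the position then moves by the new velocity."""
    rng = rng or random.Random(1)
    r1, r2 = rng.random(), rng.random()
    v_new = [w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new

x1, v1 = pso_step(x=[0.2, 0.8], v=[0.0, 0.0], pbest=[0.5, 0.5], gbest=[0.4, 0.6])
```

Since both guides lie above the particle in the first dimension and below it in the second, the particle is pulled up in one coordinate and down in the other.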

Initial population generation
In this paper, we use a random-key (RK) technique to generate the initial population of the MOPSO algorithm. First, a random (i×k) matrix is created whose entries are generated uniformly in the interval [0,1]. Then, a position number is assigned to each random number, and the random numbers are sorted in ascending order with respect to their positions. Fig. 7 illustrates the random-key procedure for generating a feasible solution for a problem with 6 incidents and 3 relief teams. The allocation of the incidents to the relief teams is similar to that described for the NSGA-II in Section 4.2.1. Note that the fourth part of Fig. 7 shows the position borders of the particles.
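The decoding step can be sketched as follows; this is a simplified per-team version (names and seed are ours) of the matrix procedure shown in Fig. 7.

```python
import random

def random_key_member(n_incidents=6, n_teams=3, seed=7):
    """Random-key decoding sketch: draw a uniform key per (team, incident)
    cell and visit each team's incidents in ascending key order."""
    rng = random.Random(seed)
    member = {}
    for k in range(1, n_teams + 1):
        keys = {i: rng.random() for i in range(1, n_incidents + 1)}
        member[k] = sorted(keys, key=keys.get)  # smallest key served first
    return member

sol = random_key_member()
```

Because any vector of continuous keys decodes to a valid sequence, the PSO position updates can stay continuous while the schedules remain well-formed.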

Local search
In this research, a local search is also provided to improve the performance of the MOPSO algorithm. For this purpose, the procedure explained in Section 4.2.3 is applied to the matrix obtained in step 4 of Section 4.3.1.

Time window consideration
It should be noted that, for both NSGA-II and MOPSO, a penalty function is defined to discard infeasible solutions that violate the time windows of the incidents: the objective function is heavily penalized whenever a time-window constraint is not satisfied. As a result, infeasible solutions do not survive in the proposed algorithms.
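A minimal sketch of such a penalty term follows; the windows, start times, and penalty magnitude are invented for illustration.

```python
def penalized_objective(objective, start_times, windows, penalty=1e6):
    """Add a large penalty for every incident whose relief starts outside
    its time window, so infeasible schedules lose at every selection step."""
    violations = sum(1 for i, s in start_times.items()
                     if not (windows[i][0] <= s <= windows[i][1]))
    return objective + penalty * violations

ok = penalized_objective(120.0, {1: 5.0, 2: 15.0}, {1: (0, 10), 2: (0, 20)})
bad = penalized_objective(120.0, {1: 5.0, 2: 30.0}, {1: (0, 10), 2: (0, 20)})
```

Any feasible schedule, however poor, then dominates every infeasible one, which is the behaviour the algorithms rely on.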

Computational experiments
In this section, the obtained results from solving the model by NSGA-II and MOPSO algorithms are analyzed. First, the performance metrics to compare the algorithms are described. Then, the outputs of the algorithms are investigated with random test problems.

Performance metrics
In this paper, three metrics are used to evaluate the performances of the two metaheuristic algorithms. These metrics are defined as follows [69].

Number of the Pareto solutions (NPS)
NPS represents the number of local Pareto solutions which are found by the algorithms. The higher value of the NPS corresponds to the better exploration of the solution space and a more diverse search direction.

Diversity metric (D)
The diversity metric measures the spread of the achieved solutions in the solution space. It is calculated as D = sqrt( Σ_{m=1}^{M} (max_i f_m^i − min_i f_m^i)^2 ), where f and M are the practical (estimated) trade-off surface and its dimension, respectively. A higher value of this metric indicates better performance of the algorithm.

Spacing metric (S)
This metric measures the uniformity of the obtained solutions over the estimated trade-off surface; a small value is desirable. It is defined as S = sqrt( (1/(N−1)) · Σ_{i=1}^{N} (d̄ − d_i)^2 ), where d_i denotes the Euclidean distance in objective space between individual i and the nearest member of the Pareto front, d̄ is the average of the d_i values, and N is the number of individuals in the approximation front. Clearly, a lower value indicates better performance.
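The spacing calculation can be sketched as follows; the two sample fronts are made up, one evenly spaced and one with a clustered pair.

```python
import math

def spacing(front):
    """Spacing metric S: deviation of nearest-neighbour distances on the
    approximation front; S = 0 means perfectly uniform spacing."""
    d = [min(math.dist(p, q) for j, q in enumerate(front) if j != i)
         for i, p in enumerate(front)]
    d_bar = sum(d) / len(d)
    return math.sqrt(sum((d_bar - di) ** 2 for di in d) / (len(d) - 1))

uniform = spacing([(0, 4), (1, 3), (2, 2), (3, 1), (4, 0)])
skewed = spacing([(0, 4), (0.1, 3.9), (2, 2), (3, 1), (4, 0)])
```

The evenly spaced front scores zero, while the clustered front scores higher, matching the interpretation that smaller S is better.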

Data generation
In this study, n varies from 6 to 15 and m varies from 2 to 7 for the small-size test problems, while n varies from 20 to 50 and m varies from 10 to 20 for the large-size test problems. Following [8], the processing times of the incidents and the travel times are generated from the normal distributions N(20,10) and N(1,0.3), respectively. Furthermore, the destruction factor denotes the severity of the incident. Following the United States Department of Homeland Security (2008), five severity levels are introduced for an incident: (1) low, (2) guarded, (3) elevated, (4) high, and (5) severe. Thus, a discrete uniform distribution on [1,5] is considered for the severity levels. Furthermore, we suppose that the due date depends on the destruction factor: an incident with a higher destruction factor has an earlier due date to start the relief operations. The time needed to repair the routes is generated from a uniform distribution on [1, 3]. As mentioned earlier, the concept and formulation of the fatigue effect were inspired by the learning effect, a well-known concept in scheduling problems. In scheduling problems with learning effects, the value of this index is usually taken in the range [0.2, 0.4] (many papers set it to 0.3); hence, in this study, we set the fatigue index to 0.3. Moreover, five relief teams, namely policemen, fire brigades, paramedics, search and relief teams, and special casualty access teams, are considered in this paper.

Parameter setting
Undoubtedly, appropriate tuning of the parameters can improve the overall performance of the metaheuristic algorithms. The Taguchi method [70] is applied to tune the parameters of the proposed algorithms. Depending on the type of the response, there are three main groups for calculating the variation in the Taguchi approach: (1) smaller-the-better, (2) nominal-is-best, and (3) larger-the-better [71]. Considering the objective functions of the proposed model, we use the smaller-the-better type of response in this research, for which the signal-to-noise (S/N) ratio is calculated as

S/N = −10 · log10( (1/n) · Σ_{i=1}^{n} Y_i^2 ),

where Y shows the response (i.e., the criterion considered for the experiments, such as the objective function or diversity) and n represents the number of orthogonal arrays [72].
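The smaller-the-better S/N calculation can be sketched directly; the response values below are illustrative only.

```python
import math

def sn_smaller_the_better(responses):
    """Taguchi S/N for smaller-the-better responses:
    S/N = -10 * log10(mean of squared Y)."""
    return -10.0 * math.log10(sum(y * y for y in responses) / len(responses))

sn_good = sn_smaller_the_better([1.0, 1.2, 0.9])  # small responses
sn_bad = sn_smaller_the_better([5.0, 6.0, 5.5])   # large responses
```

Smaller responses yield a higher S/N ratio, which is why the parameter level with the highest S/N is selected as the best.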
As mentioned above, the lowest value of the spacing metric and the largest value of the diversity metric are preferred for multi-objective algorithms. Therefore, we propose a response (Y) that combines the spacing and diversity metrics in Eq. (45), so that both metrics are considered simultaneously when analyzing the performance of the algorithms.
Four parameters with three levels each, namely the maximum number of iterations (MaxIt), the population size (Npop), the crossover rate (Pc), and the mutation rate (Pm), are considered to tune the NSGA-II algorithm. To tune the MOPSO algorithm, five parameters with three levels are considered: MaxIt, the swarm size, w, C1, and C2. Table 2 shows the parameters and their levels for the Taguchi design. The main effects plot for the S/N ratio is illustrated in Fig. 8 for each algorithm; the level with the highest S/N ratio is taken as the best level of each parameter. According to Fig. 8, the best parameter values for the algorithms are summarized in Table 3.

Analysis of the results
In this section, two series of test problems, small-size and large-size, are conducted to examine the performance of the proposed approaches. For this purpose, 10 small-size test problems are generated and solved by the augmented ε-constraint method in GAMS software, and the obtained results are compared with those of the proposed metaheuristic algorithms. Furthermore, 20 large-size test problems are generated and solved by the proposed algorithms. The characteristics of the small and large-size test problems are summarized in Table 4. Afterward, the performance metrics are used to compare the results obtained by the proposed algorithms. All the algorithms are implemented in MATLAB software on a computer with a Core i7 CPU and 4 GB of RAM. It should be noted that in the augmented ε-constraint method, the second objective function is converted into a constraint. Fig. 9 shows the procedure that generates the Pareto optimal curve by AUGMECON.
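The core idea of the ε-constraint scheme can be illustrated on a toy enumerable bi-objective problem: minimize f1 subject to f2 ≤ ε, sweeping ε over the observed range of f2 and collecting the resulting points. This is only the plain scheme under our own toy setup; the paper uses the augmented variant (AUGMECON) in GAMS, which additionally adds a slack term to the objective to avoid weakly dominated points.

```python
def eps_constraint(solutions, f1, f2, steps=10):
    """Plain epsilon-constraint sketch: for each epsilon in a grid over the
    range of f2, minimize f1 over the feasible solutions and record the
    resulting (f1, f2) point."""
    lo, hi = min(map(f2, solutions)), max(map(f2, solutions))
    pareto = set()
    for s in range(steps + 1):
        eps = hi - s * (hi - lo) / steps
        feasible = [x for x in solutions if f2(x) <= eps]
        if feasible:
            best = min(feasible, key=f1)
            pareto.add((f1(best), f2(best)))
    return sorted(pareto)

# Toy conflicting objectives over a one-dimensional solution space:
sols = list(range(5))
front = eps_constraint(sols, f1=lambda x: x, f2=lambda x: 4 - x)
# front is the full trade-off curve [(0, 4), (1, 3), (2, 2), (3, 1), (4, 0)]
```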

Fig. 9: Solution procedures of AUGMECON in this research
The obtained results for the small-size test problems are shown in Table 5, where NPS is the number of Pareto solutions and CPU time is the time consumed to find them. Obviously, a larger NPS and a smaller CPU time indicate a better approach. The best objective values obtained over 10 runs are reported for each test problem in Table 4, together with the gap between the ε-constraint method and the proposed metaheuristic algorithms. Due to the multi-objective nature of the considered problem, the gap is calculated as the average of the gaps of the objective functions. The gap for each objective is computed as |f − f′| / f, where f is the objective value obtained by the ε-constraint method and f′ is the objective value obtained by the metaheuristic algorithm [73]. Regarding Table 5, it can be concluded that the metaheuristic algorithms achieve proper solutions with small gaps. Also, Fig. 10 illustrates the Pareto frontiers obtained by the ε-constraint method and the metaheuristic algorithms for test problem 5.
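The averaged gap described above amounts to the following small computation (the function name is ours):

```python
def average_gap(exact_objs, meta_objs):
    """Average relative gap between the epsilon-constraint objective values
    f and the metaheuristic objective values f': mean of |f - f'| / f
    over the objectives."""
    return sum(abs(f - fp) / f
               for f, fp in zip(exact_objs, meta_objs)) / len(exact_objs)

# e.g. exact objectives (100, 50) vs metaheuristic (105, 52):
# per-objective gaps 0.05 and 0.04, averaged gap 0.045
```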

Table 6 summarizes the results of the proposed algorithms for the large-size test problems. Owing to the NP-hardness of the research problem, the ε-constraint approach run in GAMS could not produce any solution for the large test problems; for the problem with 20 incidents and 10 relief teams, GAMS could not find a feasible solution within 15,000 seconds. Table 6 reports the results of the NSGA-II and MOPSO algorithms for these problems. As mentioned before, for the large-size test problems, the numbers of relief teams and incidents belong to the ranges [10, 30] and [20, 70], respectively; the best results are shown in boldface. According to Table 6, the CPU time of the NSGA-II is lower than that of the MOPSO for all the test problems. Furthermore, Fig. 11 shows the number of Pareto solutions achieved by the proposed algorithms; as can be seen, the MOPSO has better performance in the NPS metric. Also, according to Table 6, the MOPSO algorithm shows better performance in the diversity metric, whereas the NSGA-II algorithm performs better in the spacing metric. We also compare the performance of the proposed algorithms based on the relative percentage deviation (RPD) criterion, calculated as RPD = |sol_ALG − sol_Best| / sol_Best, where sol_ALG denotes the performance measure obtained by each algorithm and sol_Best is the best performance measure obtained over all the algorithms. In this regard, the gap between the NSGA-II and MOPSO algorithms in the NPS, diversity, and spacing metrics is given in Table 7; in this table, a smaller value indicates a better performance in the corresponding metric. To validate the obtained results statistically, the means plots and least significant difference (LSD) intervals are used. According to Fig. 11, there is a significant difference between the proposed algorithms with respect to the NPS, and the MOPSO algorithm has better performance in this metric.
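The RPD comparison amounts to the following computation; the NPS values used here are hypothetical and only illustrate the mechanics:

```python
def rpd(sol_alg, sol_best):
    """Relative percentage deviation of an algorithm's performance measure
    from the best value observed over all algorithms."""
    return abs(sol_alg - sol_best) / sol_best

# Hypothetical NPS values for illustration; since a larger NPS is better,
# the best value is the maximum, and the best algorithm gets RPD = 0.
nps = {"NSGA-II": 12, "MOPSO": 15}
best = max(nps.values())
gaps = {alg: rpd(v, best) for alg, v in nps.items()}
```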
Also, we can conclude from Fig. 12 that the MOPSO algorithm has better performance in diversity. Furthermore, Fig. 13 illustrates that the NSGA-II outperforms the MOPSO regarding the spacing metric. Hence, decision-makers can select the proper algorithm according to the metric they consider most important. For example, if the CPU time is the most important metric for a decision-maker, he/she should select the NSGA-II algorithm.

Sensitivity analysis
In order to analyze the impact of the fatigue effect and the travel times on the objective functions, a test problem with 25 incidents and 12 relief teams is solved under different settings. According to Fig. 15, increasing the value of the fatigue factor from 0.3 to 0.5 leads to an increase of 16% in the average of the first objective function and 13% in the average of the second objective function. Based on these results, the physical power of the relief teams plays an important role in the total time of the relief operations and the delays. To prevent increases in the total relief time and delays, strategies such as increasing the number of relief teams or improving the physical power of the current rescuers can be adopted.
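The mechanism behind this sensitivity can be sketched with a position-based fatigue model. The exact formulation is not reproduced in this excerpt; the form below is an assumption mirroring the classic learning-effect formula p_jr = p_j * r^a, with a positive exponent so that later incidents in a team's schedule take longer.

```python
def actual_relief_time(base_time, position, fatigue_index=0.3):
    """Assumed position-based fatigue model: the relief operation handled
    in schedule position `position` (1-based) by a team takes
    base_time * position ** fatigue_index time units."""
    return base_time * position ** fatigue_index

# The first incident is unaffected; later ones take longer, and raising
# the fatigue index from 0.3 to 0.5 inflates them further:
# actual_relief_time(20, 4, 0.3) ~ 30.3, actual_relief_time(20, 4, 0.5) = 40.0
```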

Sensitivity analysis of the travel time
To examine the effect of the travel time on the results, two values are considered for the average travel time, N(1, 0.3²) and N(3, 0.3²). The new test problem is solved, and the obtained results are presented in Fig. 16. According to Fig. 16, by increasing the average travel time from 1 to 3, the average of the first objective function increases by almost 7% and the average of the second objective function increases by almost 20%. This result shows that using better transportation vehicles can lead to better performance of the relief teams.

Conclusions and future work
This research addresses the allocation and scheduling of relief teams, which play a key role in the response phase of disasters. The limited resources and time pressure increase the complexity of planning to handle disasters. In this study, the disaster management problem is formulated as a scheduling and routing problem. Also, the physical power of the rescuers is modeled as a fatigue effect: tiredness caused by relieving several incidents may increase the time necessary to perform a job. Time windows for incidents and damaged routes are considered in the proposed model in order to bring the problem closer to the real world. As a result, a bi-objective mixed-integer programming model is developed to minimize the sum of the weighted completion times and the weighted tardiness of the relief operations. The problem is computationally intractable; thus, two metaheuristic algorithms (NSGA-II and MOPSO) are proposed for the considered problem. The performance of the metaheuristic algorithms is compared based on CPU time, the number of Pareto solutions, diversity, and spacing. The results demonstrate that the NSGA-II outperforms the MOPSO in the CPU time and spacing metrics, and the MOPSO outperforms the NSGA-II in the number of Pareto solutions and the diversity metric. Also, a sensitivity analysis of the problem with respect to some parameters is conducted, and some managerial suggestions are proposed to improve the allocation and scheduling of relief teams in disasters. Suggestions for future studies include considering uncertain processing times, travel times, and severity levels, as well as investigating the problem in a multi-depot mode. Considering the concept of deprivation costs (see [74]) in the allocation and scheduling of the relief teams could be another direction for future research.