POLYNOMIAL ALGORITHMS FOR SOME SCHEDULING PROBLEMS WITH ONE NONRENEWABLE RESOURCE

Abstract. This paper deals with the Extended Resource Constrained Project Scheduling Problem (ERCPSP), which is defined by events, nonrenewable resources and precedence constraints between pairs of events. The availability of a resource is depleted and replenished at the occurrence times of a set of events. The decision problem of ERCPSP consists of determining whether an instance has a feasible schedule or not. When there is only one nonrenewable resource, this problem is equivalent to finding a feasible schedule that minimizes the number of resource units initially required. It generalizes the maximum cumulative cost problem and the two-machine maximum completion time flow-shop problem. In this paper, we consider this problem with some specific precedence constraints: parallel chains, series-parallel and interval order precedence constraints. For the first two cases, polynomial algorithms based on a linear decomposition of chains are proposed. For the third case, a polynomial list algorithm is introduced, in which the priority between events is defined using the properties of interval orders.


Introduction
In the literature, the Resource Constrained Project Scheduling Problem (RCPSP) plays a fundamental role in scheduling theory. In this problem, non-preemptive activities requiring renewable resources, and subject to precedence constraints, have to be scheduled in order to minimize the makespan. The RCPSP has led to an impressive amount of research in recent decades. A very effective way of solving this NP-hard problem is to decompose an RCPSP instance into as many Cumulative Scheduling Problems as there are resources. This allows us to obtain tight lower bounds as well as efficient head-tail adjustments [6, 7]. The RCPSP with general time lag constraints has also been the subject of several papers [2, 8-10, 15, 19, 20]. Such works concern only renewable resources, such as the workforce. Renewable resources are assigned to activities at their starting times and released at their completion times. In contrast, nonrenewable resources are produced or consumed by activities at their starting times only. Money is an example of a nonrenewable resource, for which Carlier and Rinnooy Kan [3] introduced the financing problem.
The Extended Resource Constrained Project Scheduling Problem (ERCPSP) is a general scheduling problem where the availability of resources is depleted and replenished [4]. An instance of ERCPSP is defined by events, nonrenewable resources and generalized precedence constraints between pairs of events. Each event produces or consumes some units of resources at its occurrence time. The objective is to build a schedule that satisfies the resource and precedence constraints and minimizes the makespan. ERCPSP is an extension of RCPSP where activities requiring renewable resources are replaced by events consuming or producing nonrenewable resources. In fact, we can associate with each instance of RCPSP an equivalent instance of ERCPSP [4,5]. Other authors have worked on models similar to ERCPSP. We can quote the works of Neumann and Schwindt [18] and of Laborie [14]. Neumann and Schwindt formalized the project scheduling problem with inventory constraints where the availability of each resource is at any time upper and lower bounded. To solve this problem, they introduced a branch-and-bound algorithm with a filtered beam search heuristic. Laborie [14] introduced the concept of a Resource Temporal Network (RTN). He proposed a constraint propagation algorithm to solve the problem.
The decision problem of ERCPSP consists of determining whether an instance has a feasible schedule or not. When there is only one nonrenewable resource, it is equivalent to finding a feasible schedule that minimizes the number of resource units initially required. The maximum cumulative cost problem, which was shown to be NP-complete [24], is the special case where the events are sequenced on one machine in such a way that the maximum cumulative consumption is minimized. It corresponds to the problem investigated by the authors of [1, 12, 24]. Abdel-wahab and Kameda [1] have considered the special cases where the precedence constraints can be represented by a parallel chains graph or a series-parallel graph. For the parallel chains case, they have introduced an algorithm for finding optimal schedules. The algorithm decomposes each chain into subchains, then provides an optimal schedule by sequencing these subchains. A dominant chain which minimizes the maximum cumulative cost is obtained. This algorithm has been generalized to the series-parallel case. Other authors have worked on the Two-Machine Maximum Flow Time Problem with Series-Parallel Precedence Constraints. We can quote the works of Monma and Sidney [16, 17] and of Sekiguchi [23]. Flow shop scheduling is a type of scheduling where jobs need to be processed on a set of machines in identical order [25]. In [13], the authors deal with the two-machine flow shop scheduling problem with unlimited periodic and synchronized maintenance applied on both machines. Kaplan and Amir have introduced a simple method for determining the feasibility of the relocation problem [12]. They have proved that the relocation feasibility problem is equivalent to the two-machine flow-shop problem, which can be solved in O(n log n) time using Johnson's rule [11]. The relocation problem can be considered as an ERCPSP problem with parallel chains precedence constraints where there are only two events in each chain.
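For reference, Johnson's rule for the two-machine flow shop, used repeatedly below, can be sketched as follows. This is an illustrative implementation, not the cited authors' code; the function name and the data layout (jobs as pairs of processing times on the first and second machine) are ours:

```python
def johnson_sequence(jobs):
    """Johnson's rule for the two-machine flow shop (F2 || Cmax).

    jobs: list of (p1, p2) processing-time pairs.
    Returns an order of job indices that minimizes the makespan:
    jobs with p1 <= p2 first, by ascending p1; the rest by descending p2.
    """
    group1 = sorted((i for i, (p1, p2) in enumerate(jobs) if p1 <= p2),
                    key=lambda i: jobs[i][0])   # ascending first-machine time
    group2 = sorted((i for i, (p1, p2) in enumerate(jobs) if p1 > p2),
                    key=lambda i: -jobs[i][1])  # descending second-machine time
    return group1 + group2
```

Both groups are built in O(n log n), which matches the complexity quoted above for the relocation feasibility problem.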
Note that the events in ERCPSP are not sequenced on one machine. So instead of sequencing events, we have to schedule them. Hence, more than one event can occur at the same time.
In this paper, we consider the decision problem of ERCPSP with one resource under some specific precedence constraints. We study three special cases which can be solved in polynomial time: the parallel chains case, the series-parallel case and the interval order case. A list algorithm is proposed to construct feasible schedules for the parallel chains case. This algorithm is based on a different and simpler decomposition of chains. We also present another method of decomposition of chains, which provides the same decomposition as the one proposed in [1]. We show that these subchains can be seen as jobs of a two-machine flow-shop. Hence, a feasible schedule can be constructed using Johnson's algorithm. An adaptation of this algorithm is presented for the series-parallel case and for a scheduling problem with a cumulative continuous resource. Finally, a list algorithm is introduced for the interval order case to construct feasible schedules. The priorities of events are defined using the properties of interval orders.
The remainder of this paper is structured as follows. In Section 2, we formulate our problem. In Section 3, we show the relation between scheduling and sequencing problems. In Section 4, we investigate the decision problem of ERCPSP with one resource and parallel chains precedence constraints. In Section 5, we consider the decision problem of ERCPSP with one resource and series-parallel precedence constraints. In Section 6, we solve the decision problem of ERCPSP with one resource and an interval order precedence graph. Finally, we conclude the paper in Section 7.

Problem formulation
This paper deals with the Extended Resource Constrained Project Scheduling Problem (ERCPSP). An instance I = (X, R, E, a, T) of ERCPSP consists of a set X = {0, 1, . . . , n, n + 1} of events, a set R of nonrenewable resources, a set E of precedence constraints, the quantities a of resource produced or consumed by the events, and an upper bound T on the makespan.
The occurrence time of each event i ∈ X is denoted S_i (also denoted S(i)). Of course, S_i is not given and has to be determined. By convention, the two events 0 and n + 1 are added to X to respectively define the start and the end of the schedule.
For each event i ∈ X, a_{i,k} represents the quantity of resource k ∈ R produced or consumed by event i. If a_{i,k} is positive, then event i produces the quantity a_{i,k} of resource k, whereas if a_{i,k} < 0, it consumes the quantity |a_{i,k}| of resource k. For each resource k ∈ R, a_{0,k} corresponds to the initial level of resource k. At any time, the resource availability must be positive or null for each resource k ∈ R.
The precedence constraints express start-to-start relations between pairs of events. They have the form S_i + d_{i,j} ≤ S_j, where d_{i,j} represents the time lag between events i and j. In this paper, we suppose that we have only positive time lags. So, if (i, j) ∈ E then event j cannot occur before time S_i + d_{i,j}. A schedule on the event set X is a function S assigning an occurrence time to each event i ∈ X. The makespan of a schedule can be computed as S_max = S_{n+1}. A schedule is feasible if it satisfies the precedence constraints (2.1) and the resource constraints (2.2):

S_i + d_{i,j} ≤ S_j, ∀(i, j) ∈ E, (2.1)
∑_{i ∈ X(S,t)} a_{i,k} ≥ 0, ∀k ∈ R, ∀t ∈ [0, T], (2.2)

where X(S, t) = {i ∈ X | S_i ≤ t} is the set of events which have occurred by time t ≥ 0, and T is some given upper bound on the makespan, which means that all events have to occur no later than time T. An optimal schedule is a feasible schedule which minimizes the makespan.
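As a concrete reading of the precedence and resource constraints in the single-resource case, a schedule can be checked as follows. This is a sketch with names of our choosing; the resource level at time t is taken as the initial level plus everything produced and consumed by the events that have occurred up to and including t:

```python
def is_feasible(S, a, arcs, a0=0):
    """Check a single-resource ERCPSP schedule (illustrative sketch).

    S: dict event -> occurrence time; a: dict event -> units produced (+)
    or consumed (-); arcs: list of (i, j, d) meaning S[i] + d <= S[j];
    a0: initial resource level.
    """
    # precedence constraints: start-to-start with time lag d
    if any(S[i] + d > S[j] for (i, j, d) in arcs):
        return False
    # resource constraint: cumulative level non-negative at every event time
    for t in sorted(set(S.values())):
        level = a0 + sum(a[e] for e in S if S[e] <= t)
        if level < 0:
            return False
    return True
```

Only the occurrence times of events need to be inspected, since the level is constant between two consecutive occurrence times.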
In the following, we consider only the single-resource case of ERCPSP (|R| = 1). So, each instance will be defined by a quadruplet I = (X, E, a, T). The number of resource units produced or consumed by event i is a_i, and the initial number of resource units of the project corresponds to a_0.

Decision problem
Let I = (X, E, a, T) be an instance of ERCPSP. The Decision Problem consists of determining whether I has a feasible schedule or not. Solving this problem is equivalent to finding a feasible schedule that minimizes the number of resource units initially required. In fact, let U* be the minimal number of resource units initially required to get a feasible schedule. If U* ≤ a_0, then instance I is feasible. Otherwise, I is infeasible. The graph resulting from Example 2.1 is shown in Figure 1. The number associated with a node represents the number of resource units required for that event, and the number associated with an arc represents the time lag. The number associated with event 0 is equal to the initial number of resource units of the project.
The minimal number of resource units initially required to get a feasible schedule is U* = 4. It is obtained by scheduling events 1, 4, 5, 7 and 9 before events 2, 3 and 6. Since U* = 4 < a_0 = 5, the considered instance is feasible.

Notations
Let I = (X, E, a, T) be an instance of ERCPSP. We denote by X+ = {i ∈ X | a_i ≥ 0} (resp. X− = {i ∈ X | a_i < 0}) the set of production (resp. consumption) events. We say that an event i is a direct predecessor of an event j if there exists an arc with non-negative valuation from i to j in the graph G = (X, E), which is equivalent to saying that j is a direct successor of i. d_{i,j} denotes the length of the longest path from i to j. We say that an event i is an ascendant of an event j if there exists a path from i to j with non-negative length d_{i,j}, which is equivalent to saying that j is a descendant of i. We denote the set of direct successors of an event i as Γ+(i), and the set of all descendants of i, not including i, as Γ̂+(i). The corresponding sets of direct predecessors and ascendants are denoted respectively as Γ−(i) and Γ̂−(i). Thus, event 0 (resp. event n + 1) is an ascendant (resp. a descendant) of all the other events.

Sequencing and scheduling problems
We consider a precedence graph with positive valuations. We restrict ourselves to the case of a single resource. Let S be a feasible schedule for a value a_0 of initial units of resource. A sequence of events is said to be feasible if the event order respects the precedence constraints and if, for each event i in the sequence, a_0 plus the sum of the resource units produced and consumed by the events up to and including i is positive or null.

Theorem 3.1. If the valuations are strictly positive, then there is a feasible sequence as soon as there is a feasible schedule.

Proof. Let S be a feasible schedule. If no two events are executed simultaneously, the total order of events obtained by sequencing events in increasing order of their occurrence times is also feasible. Therefore, the only difficulty concerns the events which are executed at the same time in S. So let us consider a subset of events which are executed simultaneously in S. These events are independent (no precedence relations between them), because a strictly positive arc between two events would prevent them from being simultaneously executed. So we can sequence these events locally by ordering the production events before the consumption events. The result is a feasible sequence. By abusing the notation, we also denote by S the feasible sequence.
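The construction used in this proof, ordering events by occurrence time and breaking ties by placing production events before consumption events, can be sketched as follows (the function name and data layout are ours):

```python
def schedule_to_sequence(S, a):
    """Turn a feasible schedule into a feasible sequence (sketch).

    S: dict event -> occurrence time; a: dict event -> units produced (+)
    or consumed (-). Events are sorted by time; among simultaneous events,
    production events (a[e] >= 0) come first.
    """
    return sorted(S, key=lambda e: (S[e], 0 if a[e] >= 0 else 1))
```

Python's `sorted` is stable, so events that are simultaneous and of the same type keep their original relative order.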
One can generalize Theorem 3.1 in some cases of zero valuation by using the same reasoning.
Corollary 3.2. If there are arcs with zero valuation, but none of them is between a consumption event and a production event, then there is an optimal sequence as soon as there is an optimal schedule.

Corollary 3.3. If a consumption event is the only predecessor of a production event and the valuation of the corresponding arc is 0, it is dominant to execute these two events at the same time.
So a pretreatment merging such pairs of events leads to the condition of Corollary 3.2. A similar fusion can treat the case of non-negative valuations with directed cycles of zero valuation (see Fig. 3).

The parallel chains case
In this section, we investigate a special case of ERCPSP with one resource, where the precedence graph consists of a set of parallel chains with positive valuations. This special case is an extension of the problem considered by Abdel-wahab and Kameda [1], in which more than one event can be executed at the same time. These authors introduced an algorithm for minimizing the maximum cumulative cost. The algorithm computes the change of resource level, then determines the production and consumption subchains of each chain. An optimal schedule is obtained by merging the production subchains in nondecreasing order of their fall, followed by the consumption subchains in nonincreasing order of their rise. Thus, a dominant chain which minimizes the maximum cumulative cost is obtained. In this section, we propose a list algorithm to construct feasible schedules for the parallel chains case. This algorithm is based on a different and simpler decomposition of chains into production and consumption subchains. We also adapt the algorithm proposed in [1] to our problem; it still consists of determining the minimum amount of initial resources required. Abdel-wahab and Kameda sequenced the events because they are executed on one machine. In our problem there is no machine, so some events can be executed at the same time. To use our algorithms, a pretreatment is required: for each consumption event i and each production event j, if i is the only predecessor of j and the valuation of the corresponding arc is 0, then we merge these two events into one event k such that a_k = a_i + a_j. After the pretreatment, we are in the condition of Corollary 3.2, so finding a feasible schedule is equivalent to finding a feasible sequence.
The adapted algorithm is actually very similar. It is also based on a decomposition of chains into production and consumption subchains. We will see that these subchains can be seen as jobs of a two-machine flow-shop. The idea is to construct a standard schedule, where the events of each subchain are scheduled next to each other. Then, we apply Johnson's rule to these subchains in order to obtain an optimal sequence. This method is illustrated by an example later in this section.

Definition of OP-subchains and OC-subchains
Suppose that chain h contains q events e_1, e_2, . . . , e_q, listed in the order of the precedence constraints. Chain h will be decomposed into a sequence of optimal subchains. These optimal subchains can be seen as jobs of a flow-shop with two machines. Each subchain consists of two parts, which respectively consume and produce a quantity of resource. The consumption of the first part of the subchain corresponds to the processing time on the first machine, while the production of the second part corresponds to the processing time on the second machine.
An optimal subchain is said to be an optimal production subchain (OP-subchain) if it produces more than it consumes. Otherwise, it is an optimal consumption subchain (OC-subchain).

Definition 4.1. A subchain e_i, . . . , e_j is an optimal subchain if it can be decomposed into two subchains e_i, . . . , e_p and e_{p+1}, . . . , e_j such that the partial sum ∑_{l=i}^{m} a_l reaches its minimum when m is equal to p, and ∑_{l=p+1}^{m} a_l reaches its maximum when m = j. The fall Δ− of an optimal subchain is equal to −∑_{l=i}^{p} a_l, which is positive or null according to Definition 4.1. The fall corresponds to the processing time on the first machine of the flow-shop. The rise Δ+ of an optimal subchain is equal to ∑_{l=p+1}^{j} a_l, which is positive or null according to Definition 4.1. The rise corresponds to the processing time on the second machine of the flow-shop.
An optimal subchain can be a production subchain (OP-subchain) or a consumption subchain (OC-subchain). An OP-subchain is an optimal subchain such that Δ− ≤ Δ+. An OC-subchain is an optimal subchain such that Δ− > Δ+. It is possible that the first part of an optimal subchain does not exist: e_p is not defined if the optimal subchain starts with a production event. It is also possible that the second part of an optimal subchain does not exist: p = j.
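With fall and rise as defined above, both quantities can be read off the running sum of a subchain. The following is an illustrative sketch (names ours), where the pivot is the position of the first minimum of the running sum:

```python
def fall_rise(a):
    """Fall and rise of a subchain given its per-event quantities a (list).

    fall = depth of the first minimum of the running sum (Δ-);
    rise = total produced after that pivot (Δ+).
    """
    prefix, running = [0], 0
    for q in a:
        running += q
        prefix.append(running)
    p = min(range(len(prefix)), key=lambda m: prefix[m])  # first minimum
    fall = -prefix[p]
    rise = prefix[-1] - prefix[p]
    return fall, rise
```

The subchain is then an OP-subchain when `fall <= rise` and an OC-subchain otherwise, matching the flow-shop reading of Δ− and Δ+ as first- and second-machine processing times.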
Theorem 4.2. Given a feasible schedule, there exists a feasible sequence in which the events of any optimal subchain are consecutively sequenced.

Proof. Let us consider a feasible schedule of events. According to Theorem 3.1, we can deduce a feasible sequence of events σ. Let e_1, e_2, . . . , e_q be a chain of events and e_i, . . . , e_j be one of its optimal subchains. Let x be an event not belonging to {e_1, . . . , e_q}.
- If x occurs before e_i or after e_j in σ, then we do not move it.
- If x is the first event between e_i and e_j, then we shift x just before e_i. This respects the precedence constraints, because there is no precedence relation between x and the events of {e_i, . . . , e_j}. It also respects the resource constraints, because according to the definition of an optimal subchain, ∑_{l=i}^{m} a_l ≤ 0 for each event e_m placed before x in σ.
- If x is the last event between e_{p+1} and e_j, then we shift x just after e_j. This also respects the precedence and resource constraints, for the same reasons.
By iterating these modifications, the events of an optimal subchain will be consecutively sequenced.

Definition 4.3.
A sequence in which all the events of each optimal subchain are ordered next to each other is said to be of standard form.

Corollary 4.4. If an instance admits a feasible schedule, then it admits a feasible sequence of standard form.

Proof. Given a feasible schedule, we deduce a feasible sequence σ. Then, we successively apply the modifications described in the previous proof to the optimal subchains. We finally obtain a feasible sequence of standard form. We can remark that when the events of an OP-subchain (resp. OC-subchain) are already consecutive, they remain consecutive when applying the method to another OP-subchain (resp. OC-subchain).

Decomposition of a chain into optimal subchains
In this section, we prove that one can decompose a chain into a subsequence of OP-subchains, followed by a subsequence of OC-subchains. For that, we present an algorithm to find the first and shortest OP-subchain OP_1 of a chain. By removing OP_1 and iterating, we get the other OP-subchains. When there are no more OP-subchains, we can apply the same algorithm on the mirror chain of the remaining chain to get the OC-subchains. In fact, the definitions of OP-subchains and OC-subchains are perfectly symmetrical. Note that a chain e'_1, . . . , e'_q is said to be the mirror chain of e_1, . . . , e_q iff ∀l ∈ {1, . . . , q}, a'_l = −a_{q+1−l}.

Theorem 4.5. A chain e_1, . . . , e_q can be decomposed into a subsequence of OP-subchains, followed by a subsequence of OC-subchains.

Proof. Algorithm 1 determines the shortest initial OP-subchain of a chain. By removing this OP-subchain and iterating, we get the other OP-subchains. When there are no more OP-subchains, we can apply the same algorithm on the mirror chain of the remaining chain to get the OC-subchains. Note that with this method of decomposition, the sequence of the optimal subchains will not necessarily respect Johnson's rule (i.e. that the fall (resp. rise) of the successive OP-subchains (resp. OC-subchains) monotonically increases (resp. decreases)). To achieve that, we have to search for the longest OP-subchain with minimal fall, as explained in Section 4.4.
Algorithm 1: Algorithm to determine the shortest initial OP-subchain.

List schedule for parallel chains case
We present here a method to solve the decision problem of ERCPSP with parallel chains precedence constraints. The idea is to decompose each chain into OP-subchains followed by OC-subchains, which do not necessarily respect Johnson's rule. Then we use a list algorithm to construct a feasible schedule of standard form if the considered instance of ERCPSP admits solutions.
In the first phase of the algorithm, a priority is attributed to each optimal subchain. Based on this priority, the events which belong to the optimal subchain with the highest priority among the available subchains, are the first to be scheduled. The priorities of optimal subchains are defined as follows: -Each OP-subchain has a higher priority than any OC-subchain.
-An OP-subchain OP 1 has a higher priority than an OP-subchain OP 2 iff the two OP-subchains are available and the fall of OP 1 is smaller than the fall of OP 2 . -An OC-subchain OC 1 has a higher priority than an OC-subchain OC 2 iff the two OC-subchains are available and the rise of OC 1 is larger than the rise of OC 2 .
Theorem 4.6. Algorithm 2 constructs a feasible schedule, as soon as the considered instance of ERCPSP with parallel chains precedence constraints admits any solution.
Proof. Let I be an instance of ERCPSP with a parallel chains precedence graph. Suppose that I is feasible and that Algorithm 2 detects an over-consumption during the execution of an optimal production subchain OP. According to the algorithm, all the optimal subchains executed before OP are production subchains, which provide resources. So, even if OP were executed before them, an over-consumption would be detected. Moreover, according to the algorithm, all the optimal subchains which could be scheduled instead of OP have a larger fall than OP (they need more resources to be scheduled). So we can deduce that if I is feasible, then Algorithm 2 cannot detect an over-consumption during the execution of any OP-subchain. Symmetrically, we can prove that if I is feasible, then this algorithm cannot detect an over-consumption during the execution of any OC-subchain.
Algorithm 2: List algorithm to construct a feasible schedule.
Determine the OP-subchains of each chain using Algorithm 1;
Determine the OC-subchains of each chain using Algorithm 1;
p ← total number of optimal subchains;
while some OP-subchains do not yet have a priority do
    Let ℱ be the set of the first OP-subchains without priority of the chains;
    Attribute priority p to the OP-subchain of ℱ having the smallest fall;
    p ← p − 1;
end
p ← 0;
while some OC-subchains do not yet have a priority do
    Let ℱ be the set of the last OC-subchains without priority of the chains;
    Attribute priority p to the OC-subchain of ℱ having the smallest rise;
    p ← p + 1;
end
Let 𝒮 be the set of all the optimal subchains;
while |𝒮| > 0 do
    Let C be the optimal subchain of 𝒮 with the highest priority;
    Schedule the events of C;
    𝒮 ← 𝒮 ∖ {C};
    if an over-consumption is detected then
        return "Infeasible instance";
end
return "Feasible instance";
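The priority scheme of Algorithm 2 can be simulated on subchains summarized as (fall, rise) pairs. The following hedged sketch (names ours) merges OP-subchains by increasing fall, then OC-subchains by decreasing rise, and reports the minimum initial resource level this order requires:

```python
def min_initial_resource(chains_subchains):
    """Minimum initial resource units under the list priorities (sketch).

    chains_subchains: per chain, an ordered list of (fall, rise) pairs,
    OP-subchains (fall <= rise) first. OP-subchains are merged across
    chains by increasing fall, OC-subchains by decreasing rise.
    """
    ops = sorted(s for ch in chains_subchains for s in ch if s[0] <= s[1])
    ocs = sorted((s for ch in chains_subchains for s in ch if s[0] > s[1]),
                 key=lambda s: -s[1])
    level, worst = 0, 0
    for fall, rise in ops + ocs:
        level -= fall              # consumption part of the subchain
        worst = min(worst, level)  # track the deepest deficit
        level += rise              # production part of the subchain
    return -worst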

Adaptation of the Johnson's rule
In this section, we present an adaptation of Johnson's algorithm to solve the decision problem of ERCPSP with parallel chains precedence constraints. The idea is to determine the OP-subchains and OC-subchains of each chain so that they respect Johnson's rule: the fall (resp. the rise) monotonically increases (resp. decreases) for the successive OP-subchains (resp. OC-subchains). Then we construct a schedule of standard form that also respects Johnson's rule. The obtained sequence minimizes the required amount of initial resources, so it is optimal.

Theorem 4.7. A chain e_1, . . . , e_q can be decomposed into a subsequence of OP-subchains, followed by a subsequence of OC-subchains, which respect Johnson's rule: the fall monotonically increases for the successive OP-subchains and the rise monotonically decreases for the successive OC-subchains.
Proof. Algorithm 3 determines the longest OP-subchain with minimal fall of a given chain. It can be iteratively used to find all the OP- and OC-subchains in O(q), where q is the number of events of the chain.
If a_1 ≤ 0, then the algorithm returns a subsequence of events (e_1, . . . , e_p, . . . , e_j) which yields resources. Event e_p corresponds to the event giving the minimum value of SUM, which represents the fall of OP_1. If a_1 > 0, then e_p does not exist and the fall of OP_1 is equal to 0. Algorithm 3 is defined so that the following minimum value of SUM is larger than the previous one. So, if OP_2 exists, then its fall is larger than the fall of OP_1. As a consequence, the fall of the successive OP-subchains monotonically increases. Hence, one of the conditions of Johnson's rule is respected.
When we construct the OC-subchains using the mirror chain, we find that the rise of the successive OC-subchains monotonically decreases.
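One plausible reading of Algorithm 3, sketched below with names of our choosing: the pivot is the first minimum of the running sum SUM, and the subchain extends to the last position where the running sum attains its maximum, provided the rise is at least the fall:

```python
def longest_op_min_fall(a):
    """Longest initial OP-subchain with minimal fall (illustrative sketch).

    a: per-event quantities. Returns (length, fall, rise), or None if the
    chain has no initial OP-subchain.
    """
    sums, run = [0], 0
    for q in a:
        run += q
        sums.append(run)
    p = min(range(len(sums)), key=lambda m: sums[m])   # first minimum of SUM
    fall = -sums[p]
    best = max(sums[p:])                               # maximum after the pivot
    j = len(sums) - 1 - sums[::-1].index(best)         # last position of it
    rise = best - sums[p]
    if j == 0 or rise < fall:
        return None
    return j, fall, rise
```

Ending the subchain at the last maximum keeps the next minimum of SUM larger than the previous one, which is what makes the successive falls increase.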
Algorithm 3: Algorithm to determine the initial longest OP-subchain.

Theorem 4.8. Algorithm 4 constructs a sequence that minimizes the required amount of initial resources.

Proof. Let σ be a sequence of standard form respecting Johnson's rule, obtained by applying Algorithm 4. According to Corollary 4.4, we can deduce an optimal sequence σ′ of standard form from a given optimal schedule. Now, suppose that in σ′ an OP-subchain immediately succeeds an OC-subchain. We shift the OP-subchain immediately before the OC-subchain, since the OP-subchain produces resources and the OC-subchain consumes resources. By iterating, we obtain a new optimal sequence in which the OP-subchains are followed by the OC-subchains.
After that, we repeatedly interchange two adjacent OP-subchains if the fall of the first one is larger than the fall of the second. Subsequently, we interchange two adjacent OC-subchains if the rise of the first one is smaller than the rise of the second. By iterating, we obtain a new optimal sequence σ′′ of standard form respecting Johnson's rule.
Finally, we can deduce σ from σ′′ by interchanging the adjacent OP-subchains (resp. OC-subchains) which have the same fall (resp. rise). So, Algorithm 4 constructs an optimal sequence that minimizes the required amount of initial resources.

Application: continuous case
In ERCPSP, the action of producing or consuming resources is instantaneous. This assumption is generally made in the literature [26]. So, the level of resource is modified at some time point and remains constant until the next discrete change (i.e. the level of resource is a step-wise function). However, this instantaneous production and consumption is inadequate for some real-world process scheduling problems. For example, the filling or the emptying of a tank by a liquid is usually subject to a constant rate depending on the number and the size of the siphons and taps. A second typical example is the charging and discharging of batteries.
This section addresses a scheduling problem with a cumulative continuous resource. Let 𝒯 be a set of preemptive tasks. Each task i has a processing time p_i and requires a continuously divisible resource during its processing time. The initial availability of this resource is equal to a_0. We consider the case where the resource amount required by a task at each time is not fixed but is described by a continuous function. Let c_i be the resource requirement function of task i. Note that c_i(τ) is equal to the quantity of resource produced or consumed by i when its elapsed processing time is equal to τ.
Let x_i(t) be a function which determines whether a task i is in process or not at time t. The elapsed processing time of i at time t is given by e_i(t) = ∫_0^t x_i(u) du. We denote by W_i(t) the cumulative production and consumption of task i at time t: W_i(t) = c_i(e_i(t)). So the level of resource at time t is given by ∑_{i∈𝒯} W_i(t) + a_0. A schedule consists of determining for each task i ∈ 𝒯 a function x_i(t) which allows us to know when task i is executed, i.e. we determine for each task i ∈ 𝒯 the time intervals [t_0, t_1], . . . , [t_{m−1}, t_m] during which i is in process. Thus, we have x_i(t) = 1 if t belongs to one of these intervals, and x_i(t) = 0 otherwise.
A schedule is said to be feasible if ∑_{i∈𝒯} W_i(t) + a_0 ≥ 0, ∀t ≥ 0. The objective in this problem is to construct a feasible schedule that minimizes the number a_0 of resource units initially available.
To solve this problem, we start by decomposing each task into a sequence of subtasks that will be executed without preemption. These subtasks can be seen as jobs of a flow-shop with two machines. Each one of them consists of two parts that respectively consume and produce a quantity of resource. The consumption of the first part corresponds to the processing time on the first machine, while the production of the second part corresponds to the processing time on the second machine. The subtasks of each task respect Johnson's rule. Once the decomposition is done, Johnson's rule is used to get an optimal sequence of subtasks that minimizes a_0.

The series-parallel case
We now consider a more general case, where the precedence relations involved can be represented by a series-parallel graph. This special case of ERCPSP is an extension of the problem considered in [1], where more than one event can be executed at the same time.
A series-parallel graph = ( , ) is a directed graph which can be obtained recursively from a single node by two operations, the series composition (Def. 5.1) and the parallel composition (Def. 5.2) of two series-parallel subgraphs [27].
Abdel-wahab and Kameda [1] define the series-parallel graphs as follows.

Definition 5.3 ([1]).
A graph is a series-parallel graph if it can be reduced to a graph consisting of only two nodes with an arc between them by a sequence of the following operations.
(1) Replace two arcs in series, whose common node has no other incident arc, by a single arc.
(2) Delete an arc in parallel to another arc.
From Definition 5.3, any series-parallel graph has a subgraph consisting of two parallel chains (see Fig. 7), unless it is a single chain. If the precedence relations are represented by a series-parallel graph, then a total order of events can be defined as follows. We first find two parallel chains using the method proposed in [1]. Then we apply Algorithm 4 to obtain a locally optimal sequence. By reasoning as in the previous section, we can show that this sequence is locally dominant. We replace the two parallel chains by a single chain, obtained by adding an arc between every two adjacent subchains according to the optimal sequence. Thus, we obtain another, simpler series-parallel graph. By iterating, the outcome is a single chain which corresponds to a total order of events. This method is illustrated by the example of Figure 8. Abdel-wahab and Kameda proved that any schedule which respects this total order of events minimizes the required amount of initial resources. The same proof can be used in the case of ERCPSP. It is based on the same principle as the parallel chains case (the events of each subchain are clustered around the pivot event, then the subchains are merged). Moreover, the feasibility of the problem in this case can be decided using an O(n^2) algorithm [1].
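The replacement of two parallel chains by a single chain can be sketched on subchains summarized as (fall, rise) pairs. This is an illustration under our assumptions, not the authors' Algorithm 4:

```python
def merge_two_chains(sub1, sub2):
    """Merge the subchains of two parallel chains into one chain (sketch).

    sub1, sub2: ordered (fall, rise) subchains of each chain, OP-subchains
    (fall <= rise) first. OP-subchains are interleaved by increasing fall,
    OC-subchains by decreasing rise, preserving each chain's own order.
    """
    pool = sub1 + sub2
    ops = sorted([s for s in pool if s[0] <= s[1]], key=lambda s: s[0])
    ocs = sorted([s for s in pool if s[0] > s[1]], key=lambda s: -s[1])
    return ops + ocs
```

Applying this merge repeatedly to pairs of parallel chains reduces the series-parallel graph to a single chain, i.e. a total order of events.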

The interval order case
In this section, we investigate the special case of ERCPSP with one resource where the precedence graph G = (X, E) is an interval order graph and the time lags are strictly positive. We introduce for this special case a list algorithm which constructs a feasible schedule if any exists. The priorities of events are defined using the properties of interval orders, such that all production events are scheduled when they are ready, and all consumption events are scheduled when they are ready and are predecessors of all unscheduled production events.

Interval order graph
An interval order graph G = (X, E) is a directed acyclic graph such that with each i ∈ X one can associate a closed interval F(i) of the real line, such that for all i, j ∈ X, (i, j) ∈ E if and only if x < y for all x ∈ F(i) and y ∈ F(j) [22]. The system of intervals F(i) is called an interval representation of G. Figure 9 provides an example of an interval order graph and Figure 10 shows an interval representation of this example. For any interval order graph, we can find a total order of elements (π_0, π_1, . . . , π_{n+1}) such that Γ+(π_{n+1}) ⊆ Γ+(π_n) ⊆ . . . ⊆ Γ+(π_0) (the successors of π_0 include the successors of π_1, which include the successors of π_2, . . ., which include the successors of π_{n+1}). In the example of Figure 9, we can note that Γ+(7) ⊆ Γ+(4) ⊆ Γ+(5) ⊆ Γ+(6) ⊆ Γ+(3) ⊆ Γ+(2) ⊆ Γ+(1). This property can be used to solve the decision problem of the interval order case. The idea is to schedule the production events as soon as possible, and the consumption events when they are available, respecting the list (π_0, π_1, . . . , π_{n+1}). An event is said to be available at time t if and only if all its predecessors are scheduled strictly before t.

List schedule for interval order case

Let I = (X, E, a, T) be an ERCPSP instance with an interval order precedence graph and strictly positive time lags. For each event i ∈ X, let B_i be the subset which contains all the predecessors of all the successors of i. So, an event x belongs to B_i if and only if, for each event j ∈ Γ+(i), x is a predecessor of j. The subset B_i can be partitioned into two subsets B_i^= and B_i^>, where B_i^= contains the events which have the same successors as i and B_i^> contains the events which have more successors than i.
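The total order by inclusion of successor sets can be obtained by sorting on the number of successors. A sketch (names ours), assuming the successor sets Γ+ are indeed totally ordered by inclusion, as interval orders guarantee:

```python
def inclusion_order(succ):
    """Total order of events by successor-set inclusion (interval orders).

    succ: dict event -> set of successors. When the sets are totally
    ordered by inclusion, sorting by decreasing cardinality yields a list
    whose successive successor sets are nested.
    """
    return sorted(succ, key=lambda i: -len(succ[i]))
```

Events with equal successor sets are interchangeable in this order; the list algorithm below schedules them together.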
Let Q_i be the subset containing all the events not belonging to P_i ∪ Γ⁺(i). The subset Q_i can be partitioned into two subsets Q_i^+ and Q_i^−, where Q_i^+ contains the production events of Q_i and Q_i^− contains its consumption events. For each event i′ ∈ Q_i, i has more successors than i′. It follows that there is no precedence relation between any pair of events of Q_i ∪ P_i^=. Figure 11 shows the different subsets associated with event i. Algorithm 5 is a list algorithm which can be used to solve the decision problem of this special case of ERCPSP. In each iteration of the algorithm, all the available production events are scheduled first, followed by all the available consumption events which have the largest number of successors. If during some iteration the level of the resource becomes negative, the algorithm increases the required number of initial resource units R* just enough to satisfy the resource constraints. The algorithm terminates when a full schedule is constructed. We will show that at the end of the algorithm, the obtained schedule minimizes the required amount of initial resources. So if R* ≤ a_0, where a_0 is the initially available amount of the resource, the considered instance of ERCPSP is feasible.
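To make the subsets concrete, here is a small Python sketch (the function name, variable names and the four-event instance are ours, not the paper's Figure 11). It uses the fact that j is a predecessor of every successor of i exactly when the successor set of j contains that of i.

```python
def event_subsets(nodes, arcs, production, i):
    """Partition the events around event i: P_eq (same successor set as i),
    P_gt (strictly more successors), and the remaining events Q split into
    production (Q_plus) and consumption (Q_minus) events."""
    succ = {v: set() for v in nodes}
    for x, y in arcs:
        succ[x].add(y)
    # j is a predecessor of every successor of i  iff  succ[j] ⊇ succ[i]
    P_eq = {j for j in nodes if succ[j] == succ[i]}
    P_gt = {j for j in nodes if succ[j] > succ[i]}   # strict superset
    Q = set(nodes) - P_eq - P_gt - succ[i]
    return P_eq, P_gt, Q & production, Q - production

# Hypothetical instance: production events {2, 3}, consumption events {1, 4}
nodes = [1, 2, 3, 4]
arcs = [(1, 3), (1, 4), (2, 4)]
P_eq, P_gt, Q_plus, Q_minus = event_subsets(nodes, arcs, {2, 3}, 2)
```

For i = 2 this gives P_eq = {2}, P_gt = {1}, Q_plus = {3} and Q_minus empty: event 1 precedes both successors of 2, while event 3 is unrelated to 2.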
Algorithm 5: Algorithm to solve the interval order case.
Let S* be the set of events that are already scheduled;
S* ← {0}; R* ← −∞; t ← 0;
while S* ≠ V do
    while some available production event i is not yet scheduled do
        /* Schedule i as soon as possible */
        t_i ← max(t, max{t_j + v_{ji} | j ∈ Γ⁻(i)});
        t ← t_i + 1;
        S* ← S* ∪ {i};
    end
    if some available consumption events are not yet scheduled then
        Let i be the one with the largest number of successors;
        /* Schedule all the consumption events of P_i^= */
        t ← max(t, max{t_j + v_{ji′} | i′ a consumption event of P_i^= and j ∈ Γ⁻(i′)});
        for all consumption events i′ ∈ P_i^= do
            t_{i′} ← t;
            S* ← S* ∪ {i′};
        end
        t ← t + 1;
    end
end

Proposition 6.2. If a consumption event i is executed at time t by Algorithm 5, then all the events scheduled before or at time t are the ones belonging to P_i ∪ Q_i^+.
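As an illustration of Algorithm 5's priority rule, here is a runnable Python sketch (our own rendering, not the paper's pseudocode). We track only the event order and the resource level, under the assumption that with strictly positive time lags the lags only shift occurrence times and do not affect the minimal initial stock; `a[i]` is the quantity produced by event i, negative for consumption events.

```python
def min_initial_resource(nodes, arcs, a):
    """List-algorithm sketch for the one-resource interval order case:
    schedule every ready production event first, then the ready consumption
    event with the most successors together with all ready consumption
    events sharing its successor set. Returns (order, R_star), where
    R_star is the initial stock needed to keep the level nonnegative."""
    succ = {v: set() for v in nodes}
    pred = {v: set() for v in nodes}
    for x, y in arcs:
        succ[x].add(y)
        pred[y].add(x)
    done, order = set(), []
    level, R_star = 0, 0

    def take(i):
        nonlocal level, R_star
        done.add(i)
        order.append(i)
        level += a[i]
        R_star = max(R_star, -level)  # deficit covered by initial stock

    while len(done) < len(nodes):
        ready = [i for i in nodes if i not in done and pred[i] <= done]
        prods = [i for i in ready if a[i] >= 0]
        if prods:
            for i in prods:
                take(i)
            continue  # new events may have become ready
        i = max((v for v in ready if a[v] < 0), key=lambda v: len(succ[v]))
        for j in [v for v in ready if a[v] < 0 and succ[v] == succ[i]]:
            take(j)
    return order, R_star

# Hypothetical instance: events 2, 3 produce; events 1, 4 consume
order, R = min_initial_resource([1, 2, 3, 4],
                                [(1, 3), (1, 4), (2, 4)],
                                {1: -2, 2: 1, 3: 2, 4: -1})
```

On this instance the algorithm schedules 2, then 1, then 3, then 4, and two units must initially cover the deficit left after event 1, minus the unit produced by event 2, so R = 1.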
Proof. The consumption events are scheduled by Algorithm 5 in decreasing order of their number of successors. Let i_1 and i_2 be two consumption events scheduled respectively at times t_1 and t_2 with t_1 < t_2. We suppose without loss of generality that between t_1 and t_2 only production events are processed. The first scheduled production event was not available at time t_1, so i_1 is one of its predecessors. By iterating, we show that i_1 is a predecessor of all the production events scheduled between t_1 and t_2. Concerning i_2, if it was available at time t_1, its priority is strictly lower than that of i_1, so i_1 has more successors than i_2; consequently i_1 belongs to P_{i_2}^>. Otherwise, if i_2 was not available at time t_1, then a production event scheduled before i_2 by the algorithm precedes i_2, and by transitivity i_1 precedes i_2; so i_1 has more successors than i_2 and again i_1 belongs to P_{i_2}^>. As a result, if a consumption event i is executed at time t by Algorithm 5, then all the consumption events of P_i^> are executed before t and no event of Q_i^− is executed before t. Moreover, according to Algorithm 5, all the consumption events of P_i^= are scheduled at time t. Now we show that all the production events scheduled after t are successors of i. Indeed, let i_1, i_2 and i_3 be three consumption events scheduled at different and increasing times. All the production events scheduled strictly after i_2 and before i_3 are successors of i_2; they are also successors of i_1, because the successors of i_1 include the successors of i_2. As a result, all the production events scheduled after t are successors of i. It follows that all the events of P_i and all the production events of Q_i are scheduled before or at time t.

Theorem 6.3. An instance of ERCPSP with one resource, an interval order precedence graph and strictly positive time lags is feasible if and only if, for each consumption event i, ∑_{j ∈ P_i ∪ Q_i^+} a_j ≥ 0.

Proof. This condition is necessary. Indeed, let us consider a feasible schedule σ and the time t when the first event i_1 of Γ⁺(i) is executed. All the predecessors of i_1 are necessarily executed strictly before t. All the events belonging to P_i are predecessors of i_1, and each predecessor of i_1 not belonging to P_i belongs to Q_i.
Let Q′ be the set of all the events of Q_i scheduled strictly before t. Hence the events which are executed strictly before t are the ones belonging to P_i ∪ Q′. Since σ is feasible, the level of the resource at time t⁻ must be nonnegative, so ∑_{j ∈ P_i ∪ Q′} a_j ≥ 0. Since adding the missing production events of Q_i^+ and removing the consumption events of Q′ cannot decrease this sum, it follows that ∑_{j ∈ P_i ∪ Q_i^+} a_j ≥ 0. Now we show that this condition is sufficient by studying the properties of Algorithm 5. Let σ be the schedule obtained by using Algorithm 5. It is easy to verify that σ is a time-feasible schedule: all the events are scheduled respecting all the precedence constraints. Moreover, suppose that a consumption event i is scheduled by the algorithm at time t. According to Proposition 6.2, all the events scheduled before or at time t are the ones belonging to P_i ∪ Q_i^+. So, if the condition of the theorem holds, the algorithm constructs a feasible schedule. Otherwise, if ∑_{j ∈ P_i ∪ Q_i^+} a_j is negative for some consumption event i, the algorithm increases the required initial resource units just enough to satisfy the necessary condition of the theorem. This shows that at the end of the algorithm, we obtain a time-feasible schedule that minimizes the required amount of initial resources.
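The feasibility condition can also be evaluated without running the list algorithm: for every consumption event i, sum the productions over P_i and the production events outside P_i ∪ Γ⁺(i), and take the worst deficit. A Python sketch under the same assumed encoding as before (our names; `a[j]` negative for consumption events, with any initial stock folded into a dummy source event):

```python
def worst_deficit(nodes, arcs, a):
    """For each consumption event i, sum a_j over P_i (events whose
    successor set contains that of i) and Q_i^+ (remaining production
    events outside the successors of i); the largest deficit over all
    consumption events is the initial stock required."""
    succ = {v: set() for v in nodes}
    for x, y in arcs:
        succ[x].add(y)
    worst = 0
    for i in nodes:
        if a[i] >= 0:
            continue                 # the condition concerns consumption only
        P = {j for j in nodes if succ[j] >= succ[i]}
        Q_plus = {j for j in nodes
                  if j not in P and j not in succ[i] and a[j] >= 0}
        worst = max(worst, -sum(a[j] for j in P | Q_plus))
    return worst
```

On the hypothetical four-event instance used earlier (events 2 and 3 produce 1 and 2 units, events 1 and 4 consume 2 and 1), the binding event is 1, with P_1 ∪ Q_1^+ = {1, 2} summing to −1, so one initial unit suffices.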

Conclusion
In this paper, we have considered ERCPSP, a general scheduling problem where the availability of resources is depleted and replenished over time. We have introduced the decision problem of ERCPSP and reported some complexity results. The decision problem is NP-complete in the general case; however, some specific cases can be solved in polynomial time. We have presented three polynomial cases: the parallel chains case, the series-parallel case and the interval order case. Of course, these algorithms cannot be applied directly to the general case because the problem is NP-hard. It is necessary to adapt them, for instance by factorizing arcs or suppressing arcs. This adaptation is a perspective for our future work.