MINIMIZING THE TOTAL WEIGHTED EARLINESS AND TARDINESS FOR A SEQUENCE OF OPERATIONS IN JOB SHOPS

Abstract. This paper proposes exact algorithms to generate optimal timing schedules for a given sequence of operations in job shops to minimize the total weighted earliness and tardiness. The algorithms are proposed for two job shop scheduling scenarios, one involving due dates only for the last operation of each job and the other involving due dates for all operations of all the jobs. Computational experiments on benchmark problem instances reveal that, in the scenario involving due dates only for the last operation of each job, the proposed exact algorithms generate schedules faster than a popular optimization solver. In the scenario involving due dates for all operations of all the jobs, the exact algorithms are competitive with the optimization solver in terms of computation time for small and medium-size problems.


Introduction
The job shop scheduling problem (JSP) is one of the important machine scheduling problems and is well known to be NP-hard [21, 33]. The problem involves scheduling a set of $n$ jobs on a set of $m$ machines. Each job has a chain of ordered operations to be performed on specific machines, and the processing order on the machines can differ between jobs. The most commonly used scheduling objective in the literature on job shops is to minimize makespan [21]. Since the problem is NP-hard, the state-of-the-art solution methodologies mainly include heuristic and metaheuristic approaches. The Giffler and Thompson (GT) algorithm is a well-known procedure to construct active schedules for a given priority order of operations in JSP with makespan, tardiness and flowtime-based objectives [17, 26]. The schedule generation mechanism in the GT algorithm also allows it to be used with dynamic priority dispatching rules, in which the job priorities change continuously over time during schedule generation [1, 8, 17].
The research on scheduling job shops to minimize total weighted earliness and tardiness (TWET) has gained considerable importance in recent years. The TWET minimization objective in JSP is important to manufacturing industries operating in a just-in-time (JIT) environment [22]. The aim is to reduce inventory costs and simultaneously satisfy customer demands through the timely delivery of products. The problem involves a due date and weights for earliness and tardiness associated with each job (or its operations). Generating a schedule with minimum TWET involves completing the jobs (or their operations) as close as possible to their respective due dates. Since the GT algorithm generates active schedules by left-aligning the operations to their earliest start times, it cannot generate optimal schedules for JSP with the TWET minimization objective. This paper studies JSP with the minimization of TWET as the objective and proposes exact algorithms to generate optimal schedules for a given sequence of operations.
The rest of the paper is organized as follows. Section 2 presents the literature review, Section 3 presents the formulation of the problems, Section 4 presents the proposed optimal timing algorithms, Section 5 presents the computational study of the proposed algorithms and their comparison with the results obtained from a popular optimization solver, and Section 6 concludes with the scope for future work.

Literature review
The early research on the TWET minimization objective in JSP considered the due date and earliness-tardiness weights only for the last operation of each job. Beck and Refalo [7] referred to this problem as the early/tardy scheduling problem (ETSP) and proposed a hybrid technique using constraint programming and linear programming to solve it. Danna et al. [13] adopted the mixed integer programming (MIP) formulation from [7] and proposed three strategies to solve the problem, namely, local branching, relaxation induced neighbourhood search, and guided dives. Kelbel and Hanzalek [23] presented a greedy search tree initialization procedure for solving the ETSP, applied within a constraint programming framework. They used a slice-based search strategy available in a commercial optimization solver to explore the search tree generated by their procedure. Since the above approaches for ETSP use mathematical programming models to solve the problem, they inherently generate optimal schedules and do not require a separate algorithm for schedule generation. Yang et al. [32] presented an enhanced genetic algorithm to solve ETSP with distinct due dates and a common deadline for all the jobs. They used an operation-based scheme to represent the chromosomes and a three-stage decoding procedure to decode each chromosome to a feasible schedule. Though their decoding procedure generates a feasible schedule, it is not proven to provide optimal schedules in all cases. To the best of our knowledge, no exact algorithm other than the mathematical programming-based approaches has been reported in the literature to generate optimal schedules for a given sequence of operations in ETSP.
Recently, the research on JSP with a due date and earliness-tardiness weights associated with each operation has gained importance. In this problem, the operations of all the jobs are scheduled to minimize the weighted sum of the earliness and tardiness associated with the deviation of the completion time of each operation from its respective due date. Baptiste et al. [5] were the first to introduce this problem and referred to it as the just-in-time job shop scheduling problem (JIT-JSP). They presented a mathematical programming formulation for the problem and found lower bounds for 72 problem instances using two Lagrangian relaxation methods. They implemented simple heuristics to derive upper bounds using the Lagrangian relaxations and further improved them using a local search algorithm. Monette et al. [25] introduced a constraint programming approach for JIT-JSP that relies on a branch and bound procedure, a global filtering algorithm, and two search heuristics to solve the problem. Metaheuristic approaches have also been implemented to solve JIT-JSP. Araujo et al. [4] implemented a combination of a genetic algorithm and a local search procedure to solve JIT-JSP in two sequential phases. They generated the schedules for a given sequence of operations by left-aligning the operations to their earliest start times. Dos Santos et al. [14] presented a hybrid method that combines an evolutionary algorithm, a mathematical programming model, and a local search procedure. The mathematical programming model is used to determine the optimal schedule for a given sequence of operations using a commercial optimization solver. Yang et al. [31] implemented an improved genetic algorithm that utilizes an operation-based scheme to represent the chromosomes. Each chromosome is decoded to generate the schedules using a three-stage decoding mechanism, which initially generates a semi-active schedule and then improves the schedule by reducing the earliness cost using greedy insertion mechanisms. Though their decoding procedure generates a feasible schedule, it is not proven to provide the optimal schedule for a given sequence of operations. Wang and Li [30] proposed a combination of variable neighbourhood search and mathematical programming to solve JIT-JSP. They used the mathematical programming model to generate an optimal schedule for a given sequence of operations. Ahmadian and Salehipour [2] presented a matheuristic algorithm to solve JIT-JSP, which operates by decomposing the problem into smaller sub-problems and solving the sub-problems using a commercial optimization solver to obtain optimal or near-optimal schedules. Ahmadian et al. [3] developed a variable neighbourhood search algorithm to solve JIT-JSP. They implemented four neighbourhood structures to generate improved solutions. They used a commercial optimization solver to generate and improve schedules in their algorithm.
The above literature review on JIT-JSP reveals that most researchers have developed metaheuristic algorithms that require a mathematical programming model to generate an optimal schedule for a given sequence of operations. To the best of our knowledge, no exact approach other than the mathematical programming approaches has been reported in the literature to generate optimal schedules. An exact algorithm for schedule generation will also be useful in developing and implementing priority dispatching rules for scheduling job shops with the TWET objective. This paper proposes exact algorithms to generate optimal timing schedules for a given sequence of operations in JIT-JSP. The proposed optimal timing algorithms for JIT-JSP are then extended to generate optimal schedules for a given sequence of operations in ETSP.
The proposed optimal timing (OT) algorithms for JSP are based on the OT algorithms presented in the literature for various other scheduling problems. OT algorithms were initially introduced to generate optimal timing schedules for a given job sequence in the single machine scheduling problem (SMSP) to minimize TWET. Garey et al. [16] presented an OT algorithm with $O(n \log n)$ time complexity for the SMSP with symmetric weights for earliness and tardiness. Szwarc and Mukhopadhyay [27] proposed an OT algorithm with $O(n^2)$ complexity for the SMSP with asymmetric earliness and tardiness weights. Lee and Choi [24] and Wan and Yen [29] also presented OT algorithms that were used to generate optimal timing schedules within their proposed metaheuristic algorithms for the SMSP. Chretienne [10] extended the OT algorithm proposed by Garey et al. [16] to the case of asymmetric and task-independent costs in the SMSP without increasing its worst-case complexity. He also proposed an $O(n^3)$ OT algorithm for the general case of asymmetric and task-dependent costs in the SMSP. Bauman and Jozefowska [6] presented an OT algorithm for the SMSP. Hendel and Sourd [19] proposed an OT algorithm for the earliness-tardiness SMSP with a piece-wise linear cost function for each job. They showed that OT algorithms for the SMSP can be extended to minimize TWET in the permutation flow shop scheduling problem with earliness-tardiness penalties for the last operation of each job. Feng and Lau [15] presented an OT algorithm for the SMSP and showed it to be more efficient than the OT algorithms presented in [16, 27].
Besides the SMSP, OT algorithms have also been implemented for other scheduling problems, such as the resource-constrained project scheduling problem [28], the parallel machine scheduling problem [9], the PERT scheduling problem [11], and the aircraft landing scheduling problem [18].
The OT algorithms presented in the literature are similar in that they identify and shift job clusters to minimize TWET. However, they differ in their implementation to handle problem-specific constraints or to make the algorithms run faster in practice. The proposed OT algorithms for JSP are similar in principle to the existing OT algorithms. They differ mainly in the mechanisms used for handling the specific constraints of the problem.

Just-in-time job shop scheduling problem
The just-in-time job shop scheduling problem (JIT-JSP) can be described as follows [5]. There are a set of $m$ machines, $M = \{M_1, M_2, \ldots, M_m\}$, and a set of $n$ jobs, $J = \{J_1, J_2, \ldots, J_n\}$, to be processed. Let $i$ be the index for jobs and $k$ be the index for machines, i.e. $i = 1, 2, \ldots, n$ and $k = 1, 2, \ldots, m$. Each job $J_i$ requires a set of $n_i$ sequentially ordered operations, $O_i = \{O_{i1}, O_{i2}, \ldots, O_{in_i}\}$, to be performed. Let $j$ be the index for operations, i.e. $j = 1, 2, \ldots, n_i$. Each operation $O_{ij}$ is performed on a specified machine $\mu(O_{ij}) \in M$ and its processing time is given by $p_{ij}$. For each machine $M_k \in M$, $\Omega(M_k)$ represents the set of all operations that are performed on $M_k$. Each operation $O_{ij}$ has a due date $d_{ij}$ such that an early or late completion incurs a penalty proportional to the amount of deviation from $d_{ij}$. Each operation $O_{ij}$ has two penalty coefficients, $w_{ij}$ and $h_{ij}$, to penalize its early and tardy completion, respectively. If $C_{ij}$ represents the scheduled completion time of operation $O_{ij}$, $E_{ij}$ its earliness, and $T_{ij}$ its tardiness, then $E_{ij} = \max(0, d_{ij} - C_{ij})$ and $T_{ij} = \max(0, C_{ij} - d_{ij})$. The objective of JIT-JSP is to determine an optimal schedule that minimizes the total cost due to the deviation of the completion of all the operations from their respective due dates, which is given by $\sum_{i=1}^{n} \sum_{j=1}^{n_i} (w_{ij} E_{ij} + h_{ij} T_{ij})$. The mathematical formulation for the problem is as follows.

Objective:
$$\min \ \sum_{i=1}^{n} \sum_{j=1}^{n_i} (w_{ij} E_{ij} + h_{ij} T_{ij}) \quad (3.1)$$

Subject to:
$$E_{ij} \ge d_{ij} - C_{ij}, \quad E_{ij} \ge 0 \quad \forall i, j \quad (3.2)$$
$$T_{ij} \ge C_{ij} - d_{ij}, \quad T_{ij} \ge 0 \quad \forall i, j \quad (3.3)$$
$$C_{i1} - p_{i1} \ge 0 \quad \forall i \quad (3.4)$$
$$C_{ij} - C_{i,j-1} \ge p_{ij} \quad \forall i, \ j = 2, \ldots, n_i \quad (3.5)$$
$$C_{ij} - C_{i'j'} \ge p_{ij} \ \text{ or } \ C_{i'j'} - C_{ij} \ge p_{i'j'} \quad \forall \ O_{ij}, O_{i'j'} \in \Omega(M_k), \ i \ne i', \ \forall k \quad (3.6)$$

Constraints (3.2) and (3.3) relate the earliness and tardiness of each operation to its completion time and due date. Constraint (3.4) ensures that the first operation of each job starts after time 0. Constraint (3.5) imposes a precedence relationship between two consecutive operations of the same job. Disjunctive constraint (3.6) ensures that two operations cannot be processed simultaneously if they belong to two different jobs and require processing on the same machine. For a given sequence of operations, the disjunctive constraint (3.6) transforms into a simple linear constraint, and the mathematical formulation transforms into a linear programming model.
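For concreteness, the objective value of a fixed schedule follows directly from the definitions above. The Python sketch below is illustrative only (the function name and dictionary-based inputs are our own, not part of the formulation): it accumulates the weighted earliness and tardiness of each scheduled operation.

```python
def twet(completion, due, w_early, h_tardy):
    """Total weighted earliness and tardiness of a fixed schedule.

    All arguments are dicts keyed by operation: `completion[op]` is the
    scheduled completion time C, `due[op]` the due date d, and
    `w_early`/`h_tardy` the earliness/tardiness penalty coefficients.
    """
    total = 0
    for op, c in completion.items():
        earliness = max(0, due[op] - c)   # E = max(0, d - C)
        tardiness = max(0, c - due[op])   # T = max(0, C - d)
        total += w_early[op] * earliness + h_tardy[op] * tardiness
    return total
```

For instance, an operation finishing one unit early with earliness weight 2 and another finishing two units late with tardiness weight 4 contribute a total cost of 10.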

Early/tardy job shop scheduling problem
The problem environment of the early/tardy job shop scheduling problem (ETSP) is the same as that of JIT-JSP, except that only the last operation of each job has a due date, and only its early or tardy completion is penalized. If $d_i$ is the due date of the last operation of job $J_i$, $E_i$ represents its earliness, and $T_i$ represents its tardiness, then $E_i = \max(0, d_i - C_i)$ and $T_i = \max(0, C_i - d_i)$, where $C_i$ is the completion time of the last operation of job $J_i$. Each job $J_i$ has two penalty coefficients, $w_i$ and $h_i$, to penalize its early and tardy completion, respectively. The objective of ETSP is to determine the optimal schedule that minimizes the total cost due to the deviation of the completion of the last operation of all the jobs from their respective due dates, which is given by $\sum_{i=1}^{n} (w_i E_i + h_i T_i)$.

The proposed optimal timing algorithms
This section first presents the implementation of the proposed OT algorithms for JIT-JSP. The extension of the OT algorithms to ETSP is presented subsequently.

The proposed OT algorithms for JIT-JSP
Let $\sigma$ be the given sequence of $N$ operations in JIT-JSP, where $N = \sum_{i=1}^{n} n_i$. Let each operation in $\sigma$ be represented by a unique identifier $k$ ($k = 1, 2, \ldots, N$) based on its position in the sequence. Therefore, each operation identifier $k$ can be mapped to one of the operations in $O_i = \{O_{i1}, O_{i2}, \ldots, O_{in_i}\}$, $i = 1, 2, \ldots, n$ (see Sect. 3.1). Let the ordered set $\Sigma$ represent the set of operation identifiers in a sequence corresponding to the operations in $\sigma$. Let $p_k$ be the processing time, $d_k$ the due date, and $C_k$ the completion time of the $k$th operation in $\sigma$. Let the early and tardy penalty coefficients be represented by $w_k$ and $h_k$, respectively. Corresponding to the $k$th operation in $\sigma$, let the singleton sets $JP(k)$ and $JS(k)$ respectively contain its immediately preceding and succeeding operations on the same job. If the $k$th operation is the first operation of the job, then $JP(k) = \emptyset$, and if the $k$th operation is the last operation, then $JS(k) = \emptyset$. Let the singleton sets $MP(k)$ and $MS(k)$, respectively, contain the immediately preceding and succeeding operations of the $k$th operation performed on the same machine. If the $k$th operation is the first operation on the machine, then $MP(k) = \emptyset$, and if the $k$th operation is the last operation, then $MS(k) = \emptyset$. Let $P(k) = (JP(k) \cup MP(k))$ denote the set of preceding operations and $S(k) = (JS(k) \cup MS(k))$ denote the set of succeeding operations on the same job or the same machine corresponding to the $k$th operation in $\sigma$.
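The predecessor and successor sets can be built in a single pass over the operation sequence. The following Python sketch is our own illustration (the function name and the `job_of`/`machine_of` mappings are assumptions, not notation from the text): for each operation it links the most recent operation of the same job and the most recent operation on the same machine.

```python
def build_pred_succ(sequence, job_of, machine_of):
    """Build the sets P(k) and S(k) for each operation identifier.

    `sequence` lists the operation identifiers in order; `job_of[k]` and
    `machine_of[k]` give the job and machine of operation k.  P(k)
    combines the job predecessor JP(k) and machine predecessor MP(k);
    S(k) is the symmetric successor set.
    """
    preds = {k: set() for k in sequence}
    succs = {k: set() for k in sequence}
    last_of_job, last_on_machine = {}, {}
    for k in sequence:
        for last in (last_of_job.get(job_of[k]), last_on_machine.get(machine_of[k])):
            if last is not None:
                preds[k].add(last)
                succs[last].add(k)
        last_of_job[job_of[k]] = k
        last_on_machine[machine_of[k]] = k
    return preds, succs
```

Here `preds[k]` plays the role of $P(k)$ and `succs[k]` that of $S(k)$.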
Let $\sigma_k$ be the partial sequence that contains the first $k$ operations in $\sigma$, and let $S_k = \{C_1, C_2, \ldots, C_k\}$ be the partial schedule corresponding to $\sigma_k$. Let $Z_k$ be the total weighted earliness and tardiness corresponding to the partial schedule $S_k$. In an optimal partial schedule $S_k$ with minimum $Z_k$, the operations will be aligned as close as possible to their respective due dates. This results in either the operations being scheduled at their due dates or forming clusters, as shown in the Gantt chart in Figure 1. Each cluster consists of a set of contiguously scheduled operations called a block. A pair of operations $(k, k')$, where $k$ precedes $k'$ in $\sigma_k$, is said to be contiguous and to belong to the same block if $C_{k'} = C_k + p_{k'}$, provided that $(k, k')$ are either two consecutive operations of the same job or two consecutive operations on the same machine (i.e. $k \in P(k')$). Let $B_k$ be the block comprising the cluster of contiguously scheduled operations that contains the $k$th operation in $\sigma_k$. In Figure 1, the block $B_{12}$ is a maximum cardinality set formed by the cluster of operations $\{6, 7, 8, 9, 10, 11, 12\}$, which are contiguously scheduled, and contains the operation 12 (i.e. $O_{2,4}$). The operations in the set $\{1, 2, 3, 4, 5\}$ are not contiguously scheduled with any of the operations in $B_{12}$ and, therefore, are not included in the set $B_{12}$. Let $f(B_k)$ be the total weighted earliness and tardiness function corresponding to the set of operations in $B_k$, defined in terms of the completion time of the $k$th operation in $\sigma_k$. Then $f(B_k)$ will always be a piece-wise linear convex cost function when all the operations in $B_k$ are shifted by the same amount of time. This can be explained using the following theorem.
Theorem 4.1. The weighted sum of earliness and tardiness cost function $f(G_k)$ corresponding to a set of operations $G_k \subseteq B_k$ with cardinality $n'$ will always be a piece-wise linear convex function with at most $n'$ breakpoints.
Proof. The weighted sum of earliness and tardiness for the set of operations $G_k$ with cardinality $n'$ is given by
$$f(G_k) = \sum_{i \in G_k} (w_i E_i + h_i T_i). \quad (4.1)$$
The cost function (4.1) can be written in terms of the completion time $C_k$ as
$$f(G_k) = \sum_{i \in G_k} \big( w_i \max(0,\, d_i - (C_k - g_i)) + h_i \max(0,\, (C_k - g_i) - d_i) \big), \quad (4.2)$$
where $g_i$ is the time gap between the completion time of operation $k$ and operation $i$ in $G_k$, i.e. $g_i = C_k - C_i$. The cost function (4.2) can be written as
$$f(G_k) = \sum_{i \in E} w_i (d_i - C_k + g_i) + \sum_{i \in T} h_i (C_k - g_i - d_i), \quad (4.3)$$
where $E$ represents the set of early operations and $T$ represents the set of tardy operations for a given value of $C_k$. The cost function (4.3) can be further rewritten as
$$f(G_k) = \Big( \sum_{i \in T} h_i - \sum_{i \in E} w_i \Big) C_k + \sum_{i \in E} w_i (d_i + g_i) - \sum_{i \in T} h_i (g_i + d_i). \quad (4.4)$$
The above cost function is a straight-line equation with slope $s(G_k) = \sum_{i \in T} h_i - \sum_{i \in E} w_i$ when all the operations are left-shifted by the same amount of time, i.e. $g_i$ remains constant for all the operations in $G_k$. A typical plot of $f(B_k)$ versus $C_k$ is shown in Figure 1. The 3-job, 4-machine JIT-JSP instance shown in Figure 1 contains 7 operations in block $B_{12}$. The sets $E$ and $T$ change when an operation in $G_k$ changes from tardy to early at its due date while $C_k$ is reduced. This leads to a change in the slope of the cost function, resulting in a breakpoint in the plot, as shown in Figure 1. Since there are $n'$ operations in $G_k$, there can be a maximum of $n'$ breakpoints, each occurring at the due date of one of the operations in $G_k$. As the operations in $G_k$ change from tardy to early while $C_k$ is reduced, the slope of the cost function monotonically decreases after each breakpoint. This is evident from the slope equation $s(G_k) = \sum_{i \in T} h_i - \sum_{i \in E} w_i$. The value of the cost function slope becomes negative after the breakpoint with the least value of $f(G_k)$. Therefore, left-shifting the operations to the breakpoint where the cost function slope changes from a positive value to a non-positive value provides the optimal $f(G_k)$. This property forms the basis for optimizing the cost function in the OT algorithm. It also holds for any subset of contiguously scheduled operations in $B_k$ that can be left-shifted without violating the precedence constraints.
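The slope computation in the proof can be illustrated with a short Python sketch (the names and data layout are ours, not the paper's): for a given completion time of the block's last operation, each operation is classified as early or tardy and contributes $-w_i$ or $+h_i$ to the slope, respectively.

```python
def block_slope(ops, c_last, gaps, due, w, h):
    """Slope of the block cost function at completion time `c_last`.

    `gaps[i]` is the fixed gap g_i = C_last - C_i while the whole block
    shifts together; an operation is tardy if C_i > d_i, otherwise it is
    counted on the early side.  The slope is sum(h_i over tardy) minus
    sum(w_i over early), as in equation (4.4).
    """
    slope = 0
    for i in ops:
        c_i = c_last - gaps[i]
        if c_i > due[i]:
            slope += h[i]   # tardy: a unit left shift reduces cost at rate h_i
        else:
            slope -= w[i]   # early/on time: a unit left shift adds cost at rate w_i
    return slope
```

A positive slope means a unit left shift of the block reduces the cost; the block is shifted until the slope becomes non-positive.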
The proposed OT algorithm for JIT-JSP can be described as follows. Initially, the first operation in $\sigma$ is assigned its completion time as $C_1 = d_1$, and the partial schedule is generated as $S_1 = \{C_1\}$. In this case, $Z_1$ corresponding to $S_1$ will be zero. Subsequently, the partial schedule $S_k$ ($2 \le k \le N$) is generated from $S_{k-1}$ by assigning the completion times of all the operations from $S_{k-1}$ to $S_k$. The completion time of the $k$th operation in $\sigma_k$ is determined as $C_k = \max(d_k, \gamma_k + p_k)$, where $\gamma_k = \max_{i \in P(k)} C_i$. If $P(k) = \emptyset$, then $\gamma_k = 0$.
If $C_k = d_k$, then the penalty cost of the $k$th operation in $S_k$ will be zero, and $S_k$ will be optimal with $Z_k = Z_{k-1}$. On the other hand, if $C_k > d_k$, then the $k$th operation will incur a penalty cost due to tardiness, and the partial schedule $S_k$ needs to be optimized based on Theorem 4.1 discussed above. This involves generating the block $B_k$ containing the $k$th operation in $\sigma_k$ and invoking a left-shifting procedure, namely LEFT SHIFT, to optimize the partial schedule $S_k$. Algorithm 1 shows the pseudocode of the proposed OT algorithm for JIT-JSP.
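The forward completion-time assignment can be sketched as follows (a minimal illustration with our own names, assuming the predecessor sets $P(k)$ are available as lists):

```python
def assign_completions(sequence, p, d, preds):
    """Initial completion times for a given operation sequence.

    For each operation k (in sequence order), C_k = max(d_k, gamma_k + p_k),
    where gamma_k is the largest completion time among the preceding
    operations P(k) on the same job or machine (0 if P(k) is empty).
    """
    C = {}
    for k in sequence:
        gamma = max((C[i] for i in preds[k]), default=0)
        C[k] = max(d[k], gamma + p[k])
    return C
```

An operation is thus placed at its due date when possible, and otherwise immediately after its latest predecessor, which is exactly the case $C_k > d_k$ that triggers the left-shifting step.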
In OT algorithms applied to the single machine scheduling problem, all the jobs in a block corresponding to the last operation in the partial sequence are left-shifted to the minimum point of its cost function [24, 29]. Since JIT-JSP involves multiple machines, multiple shiftable blocks can be generated from $B_k$, each containing the $k$th operation in $\sigma_k$. A shiftable block $V$ ($V \subseteq B_k$) comprises a set of operations such that if an operation $i$ is included in $V$, then its immediately preceding contiguous operations in $P(i)$ are also included in $V$. This allows the shiftable block $V$ to be left-shifted by at least one unit of time without violating the precedence constraints of any of its operations with their respective immediately preceding operations. In other words, each shiftable block $V$ is generated by eliminating a set of operations from $B_k$ such that the operations remaining in $V$ can be left-shifted by at least one unit of time. The shiftable blocks generated from $B_{12}$ for the illustration problem shown in Figure 1 are $V_1 = \{12, 9, 7\}$, $V_2 = \{12, 9, 7, 10, 8, 6\}$ and $V_3 = \{12, 9, 7, 10, 8, 6, 11\}$, which are subsets of $B_{12}$ and can be left-shifted by at least one unit of time.
The shiftable block with the highest positive cost function slope value is chosen for left shifting among all the shiftable blocks that can be formed from $B_k$. Let $B_k^*$ denote the block with the highest positive slope value among all the shiftable blocks in $B_k$. To optimize the partial schedule $S_k$, the block $B_k^*$ is left-shifted towards the minimum point of its cost function until the nearest breakpoint is reached or an operation $i \in B_k^*$ becomes contiguous with an operation in $P(i)$ that does not belong to $B_k^*$. If either of these two events occurs, the block $B_k$ and its corresponding shiftable blocks are regenerated using the improved partial schedule $S_k$, and the block $B_k^*$ with the highest positive cost function slope value $s(B_k^*)$ is again chosen for further left shifting. This left-shifting process continues until a shiftable block with a positive cost function slope value can no longer be created from $B_k$. The above procedure optimizes the partial schedule $S_k$ and provides the optimal $Z_k$. This can be explained using the following Theorems 4.2 and 4.3.

Algorithm 1: OT algorithm for JIT-JSP. Data: $\sigma$, $p$, $d$, $w$, $h$, $P(k)$, $S(k)$ $\forall k$.
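The feasible shift amount for the chosen block can be computed as the minimum over the distances to the two stopping events. The sketch below is our own illustration (the guard preventing an operation without predecessors from starting before time 0 is an assumption added for self-containment):

```python
def max_left_shift(block, C, p, d, preds):
    """Largest left shift of `block` before a breakpoint or contiguity event.

    A breakpoint occurs when a tardy operation in the block reaches its
    due date; a contiguity event occurs when an operation in the block
    meets a preceding operation outside the block.
    """
    shift = float('inf')
    for k in block:
        if C[k] > d[k]:                      # distance to the nearest breakpoint
            shift = min(shift, C[k] - d[k])
        if not preds[k]:                     # cannot start before time 0
            shift = min(shift, C[k] - p[k])
        for i in preds[k]:
            if i not in block:               # distance to a contiguity event
                shift = min(shift, C[k] - p[k] - C[i])
    return shift
```

After applying this shift, the blocks are regenerated and the selection repeats, as described above.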
Theorem 4.2. Only the shiftable blocks that are a subset of block $B_k$ and contain the $k$th operation in $\sigma_k$ can have a positive cost function slope value.
Proof. The cost function slope value of any shiftable block generated from $S_k$ can be positive if and only if it contains the $k$th operation in $\sigma_k$. This can be explained by the fact that the left-shifting procedure has already been implemented sequentially for the first $k - 1$ operations in $\sigma_k$ to find the optimal partial schedules $S_1, S_2, \ldots, S_{k-1}$. Therefore, considering Theorem 4.1, any shiftable block formed without the $k$th operation will have a negative cost function slope value, and left-shifting it will lead to an increase in penalty cost. Similarly, any operation or set of contiguously scheduled operations not belonging to $B_k$ will have a negative cost function slope value.

Theorem 4.3. If $\{V_1, V_2, \ldots, V_q\}$ is the set of all the shiftable blocks that can be formed from block $B_k$ with a positive cost function slope value and containing the $k$th operation in $\sigma_k$, then left-shifting the block with the highest slope value towards the nearest breakpoint, or until its shiftable point is reached without violating the precedence constraints, optimizes the partial schedule $S_k$.
Proof. If there exists a shiftable block $V_j$ with a positive slope value $s(V_j)$ corresponding to its cost function $f(V_j)$, then, based on Theorem 4.1, it can be concluded that left-shifting the block $V_j$ improves the penalty cost due to earliness and tardiness. The shiftable block with the highest positive cost function slope value provides the highest improvement in penalty cost per unit time and optimizes the partial schedule $S_k$. Though left-shifting the other shiftable blocks with positive cost function slope values also improves the partial schedule, it may eventually result in sub-optimal schedules. This can be explained with the following example.
Let $V^*$ ($V^* \subset B_k$) be the set of operations with the highest positive cost function slope value. Let $V'$ ($V' \subset B_k$) be a set of operations that are not contained in $V^*$ (i.e. $V'$ and $V^*$ are disjoint sets) and can be left-shifted along with $V^*$. Since the operations in $V'$ are not included in $V^*$, its cost function slope value $s(V')$ will be non-positive. Let us assume that $s(V') < 0$ and $s(V^*) + s(V') > 0$. Obviously, $s(V^*) > s(V^*) + s(V')$. Though left-shifting the operations in $V'$ along with the operations in $V^*$ will improve the total penalty cost (since $s(V^*) + s(V') > 0$), the rate of improvement of the penalty cost by left-shifting the set $(V' \cup V^*)$ will be less than that of left-shifting $V^*$ alone, as $s(V') < 0$. This indicates that left-shifting the operations in $V'$ along with the operations in $V^*$ results in a sub-optimal partial schedule. Therefore, selecting the block with the highest positive cost function slope value for left shifting optimizes the partial schedule $S_k$. Regenerating $B_k$ at each breakpoint, or every time an operation becomes contiguous with a preceding operation, followed by left-shifting the shiftable block with the highest positive cost function slope value, eventually optimizes $Z_k$.
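The rate argument can be made concrete with hypothetical slope values (the numbers below are illustrative only):

```python
# Hypothetical slopes illustrating the dominance argument in the proof.
s_star = 5     # slope s(V*) of the best shiftable block
s_prime = -2   # slope s(V') of a disjoint set that could be shifted along

rate_union = s_star + s_prime   # improvement rate when shifting V* and V' together
rate_alone = s_star             # improvement rate when shifting V* alone

assert rate_union > 0           # shifting the union still reduces the cost...
assert rate_alone > rate_union  # ...but more slowly than shifting V* alone
```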
Algorithm 2: Left shifting procedure to optimize the partial schedule.

Algorithm 2 shows the pseudocode of the left-shifting procedure. The optimal block generation function called in the pseudocode generates the shiftable block with the highest positive cost function slope value, which is also referred to as the optimal block. We propose two methods to generate the optimal block. The first method is an enumeration procedure that generates all possible shiftable blocks with non-negative cost function slope values. Subsequently, the shiftable block with the highest cost function slope value is selected for left shifting. We consider even an optimal block with a slope value equal to zero for left shifting, as the objective value of the resulting schedule will remain the same. The second method is an improvement over the first that uses dominance rules to ignore certain shiftable blocks in the process of finding the optimal block for left shifting.

Enumeration method
This method first generates a tree of sub-blocks in the forward pass. Subsequently, the sub-blocks with non-negative cost function slope values are recombined in the backward pass to form all possible shiftable blocks. An illustrative example of the procedure is shown in Figures 2 and 3.
The procedure starts with generating the sub-block $b_1$ by including the $k$th operation of $B_k$ as the first element in $b_1$. The preceding contiguous operations $k' \in P(i)$ corresponding to each operation $i \in b_1$ are then iteratively included in $b_1$. Subsequently, the succeeding contiguous operations $k' \in S(i)$ with $C_{k'} > d_{k'}$, whose preceding contiguous operations all already belong to $b_1$, corresponding to each operation $i \in b_1$, are iteratively included in $b_1$. Including such succeeding operations in $b_1$ increases its cost function slope value. This procedure generates the shiftable sub-block $b_1 \subseteq B_k$. In Figure 2, the set $b_1$ is generated by first assigning the operation 24 to it. Subsequently, its preceding contiguous operation 20 is assigned to $b_1$. The operation 13 is then assigned to $b_1$, followed by operations 8 and 9, which are the preceding contiguous operations to operation 13. The operation 4 is subsequently assigned to $b_1$, as it precedes and is contiguous with operation 9. There are no other operations in $B_{24}$ which precede and are contiguous with any of the assigned operations in $b_1$.

The succeeding contiguous operations $k' \in S(i)$ with $C_{k'} \le d_{k'}$, or with a preceding contiguous operation outside the current sub-block, corresponding to each operation $i \in b_1$ are subsequently identified, and a sub-block is generated corresponding to each one of them using the above-mentioned procedure. In Figure 2, the operation 15 is the succeeding contiguous operation on the same machine to operation 9 in $b_1$. Since $C_{15} < d_{15}$, operation 15 was not included in $b_1$ and is assigned to the sub-block $b_3$. The operation 14, belonging to $b_4$, has a succeeding contiguous operation 16 on the same machine. Since operation 16 has no preceding contiguous operation other than the operations $\{21, 14, 10, 3\}$ already assigned to $b_4$, and $C_{16} > d_{16}$, it is assigned to the block $b_4$. However, operation 18 is not included in $b_4$, as it is contiguous with its preceding operation 12 on the same job. Therefore, it is assigned to $b_6$. In each newly formed sub-block, only the operations in $B_k$ which were not allocated to the preceding sub-blocks are included. The succeeding contiguous operations corresponding to the operations included in the newly formed sub-blocks are further chosen to form new sub-blocks. This branching procedure is repeated until no more sub-blocks can be generated, and it produces a tree of sub-blocks, as shown in Figure 2. Each sub-block can be left-shifted only if its preceding sub-blocks in the tree are also left-shifted by the same amount of time. However, a sub-block has the option of not being left-shifted while its preceding sub-blocks in the tree are left-shifted.
Each sub-block in the branching procedure can be called a node, with block $b_1$ becoming the root node. Let each node be identified with a unique identifier $u$. Let $A_u$ be the set of operations already assigned in the preceding nodes of node $u$. For example, in Figure 2, $A_2 = \{24, 20, 13, 8, 9, 4\}$ and $A_4 = A_5 = \{24, 20, 13, 8, 9, 4, 15\}$, where the elements in the sets $A_2$, $A_4$ and $A_5$ are operation identifiers that belong to $B_k$. Let $D_u$ denote the set of all the sub-blocks generated in the forward pass originating from node $u$. For example, in Figure 2, $D_1 = \{b_1, b_2, b_3, b_4, b_5, b_6, b_7, b_8\}$, $D_2 = \{b_2\}$ and $D_3 = \{b_3, b_4, b_5, b_6, b_7, b_8\}$. Let $CH_u$ denote the set of immediately succeeding nodes corresponding to a node $u$ in the tree of sub-blocks. For example, in Figure 2, $CH_1 = \{2, 3\}$ and $CH_3 = \{4, 5\}$, where the elements 2, 3, 4 and 5 are node identifiers. As shown in Figure 2, the nodes in the tree of sub-blocks can be categorized into levels such that the succeeding nodes of a node are in the immediately succeeding level and its preceding node is in the immediately preceding level. Let $L$ denote the total number of levels, and let $N_l$ denote the set of nodes at each level $l$ ($l = 1, 2, \ldots, L$). For example, in Figure 2, $N_3 = \{4, 5\}$ and $N_4 = \{6, 7, 8\}$, where the elements 4, 5, 6, 7 and 8 are node identifiers. The nodes at each level are also denoted as $u_{lv}$, where $l$ is the index for the level and $v$ is the index for the nodes in that level.
In the backward pass shown in Figure 3, each node $u$ generates a set of blocks $\{R_{u1}, R_{u2}, \ldots, R_{uq_u}\}$ by combining the sub-block $b_u$ with the sets of blocks $\{R_{u'1}, R_{u'2}, \ldots, R_{u'q_{u'}}\}$ returned by its respective child nodes $u' \in CH_u$. Only the blocks with non-negative cost function slope values in $\{R_{u1}, R_{u2}, \ldots, R_{uq_u}\}$ are returned to the respective parent node. If the cost function slope values of all the blocks in the set $\{R_{u1}, R_{u2}, \ldots, R_{uq_u}\}$ are negative, the node $u$ returns a null set to the parent node. In Figure 3, the blocks $R_{61}$, $R_{71}$ and $R_{81}$ are null sets, since the corresponding sub-blocks $b_6$, $b_7$ and $b_8$ have negative cost function slope values. This procedure ensures that only the blocks with non-negative cost function slope values are combined at each node.
The set of recombined blocks at each node is generated by forming all possible combinations of the blocks returned by the child nodes, ensuring that the operations in a block are not repeated. In Figure 3, the six recombined blocks at node 1 are generated by appending the sub-block of node 1 with the combinations of the blocks returned by child nodes 2 and 3. The recombined block with the highest non-negative cost function slope value at the root node is selected as the optimal block, which is left-shifted to optimize the partial schedule. If all the recombined blocks generated at the root node have negative cost function slope values, the partial schedule and the corresponding timing are optimal.
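The recombination step can be sketched as follows. This is a hedged illustration under our own conventions: a block is modelled as a frozen set of operation identifiers, and the cost function slope computation, which is problem specific, is supplied as a callback.

```python
from itertools import product

def recombine(sub_block_ops, child_block_sets, slope_of):
    """Append this node's sub-block to every combination of the blocks
    returned by its child nodes, discarding combinations in which an
    operation would be repeated."""
    blocks = []
    for combo in product(*child_block_sets):
        ops = set(sub_block_ops)
        feasible = True
        for child_ops in combo:
            if ops & child_ops:      # an operation would repeat: infeasible
                feasible = False
                break
            ops |= child_ops
        if feasible:
            blocks.append(frozenset(ops))
    # Only blocks with non-negative cost function slope are returned to
    # the parent node (a null set is returned if none qualify).
    return [b for b in blocks if slope_of(b) >= 0.0]

# Toy usage with made-up slope values.
slope = {frozenset({1, 2, 4}): 0.2, frozenset({1, 3, 4}): -0.1}
result = recombine(
    sub_block_ops={1},
    child_block_sets=[[frozenset({2}), frozenset({3})], [frozenset({4})]],
    slope_of=lambda b: slope[b],
)
print(result)   # only the combination with non-negative slope survives
```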
Algorithm 3 shows the overall framework of the optimal block generation procedure in the form of pseudocode. The pseudocode shows two function calls, one to generate the tree of sub-blocks in the forward pass and the other to recombine the sub-blocks in the backward pass to find all possible shiftable blocks with non-negative cost function slope values. Subsequently, the optimal block with the highest non-negative cost function slope value is selected (lines 4 to 13 in the pseudocode) and returned to Algorithm 2. Algorithm 4 shows the pseudocode to generate the tree of sub-blocks in the forward pass using a recursive function. Algorithm 5 shows the pseudocode to recombine the sub-blocks in the backward pass. The pseudocodes use the same notations as the illustrative example described above.
The child node identifiers corresponding to the parent node are updated in its set of succeeding nodes. The level identifier corresponding to the node is updated in line 3, and the total number of levels is updated in lines 34 to 37. In line 38, the node identifier is added to the set of nodes at the current level. Lines 39 to 41 update the number of times each operation appears in the tree of sub-blocks originating from the root node; the updated value for each operation is used in the improved method discussed later.
The pseudocode shown in Algorithm 5 accesses the levels in the tree of sub-blocks in decreasing order and finds all possible combinations of blocks corresponding to each node. Line 5 of the pseudocode initializes the set of blocks with the node's own sub-block. Subsequently, lines 6 to 17 generate all possible combinations of blocks by accessing the sets of blocks returned by the child nodes. The condition in line 10 ensures that each block generated does not have any repeating operations. Lines 18 to 24 select the recombined blocks with non-negative cost function slope values to be returned to the parent node.
The total number of shiftable blocks that can be generated from n operations can theoretically be viewed as the problem of generating all k-combinations of n elements for every k (1 ≤ k ≤ n), which is 2^n [12]. However, the actual number of shiftable blocks can be far smaller because, if an operation is not included in a shiftable block, all the operations in the tree of sub-blocks originating from that operation's node are excluded as well. Therefore, many combinations are infeasible, and the proposed OT algorithm ensures that only feasible shiftable blocks are generated. The worst-case time complexity of the OT algorithm is an exponential function of the problem size due to the exponential growth in the number of shiftable blocks. We present the computational performance of the proposed OT algorithm on benchmark instances with up to 30 jobs and 20 machines in Section 5.
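The exclusion property above can be quantified on a toy tree. Since excluding an operation excludes its entire subtree, a feasible block must be closed upward toward the root, and the feasible count can be computed by a simple product recursion. The tree below is made up for illustration, not taken from Figure 2.

```python
def count_feasible(tree, root):
    """Count subsets of a rooted tree's nodes that contain a node only
    if they also contain its parent (upward-closed subsets)."""
    def g(v):
        prod = 1
        for c in tree.get(v, []):
            prod *= g(c)
        return 1 + prod   # the extra 1 excludes v and its whole subtree
    return g(root)

tree = {1: [2, 3], 3: [4, 5]}          # a toy tree on 5 operations
print(count_feasible(tree, 1))         # feasible blocks (including the empty one)
print(2 ** 5)                          # naive upper bound of 32 subsets
```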

Improved method
Algorithms 6 and 7 show the pseudocodes to generate the optimal block using the improved method. The improved method uses dominance rules to ignore, at each node in the backward pass, certain recombined blocks that cannot result in the optimal block. This reduces the search space and improves the computation time required to generate the optimal schedule; the method is therefore an implicit enumeration method. We use the following two dominance rules.
i. The first dominance rule checks whether the operations present in the set of recombined blocks at a node exist in the recombined blocks generated by the other nodes at the same level. If no operation is shared with any other node at the same level, the recombined block with the highest non-negative cost function slope value is selected at the node and returned to its parent node. If the condition is not satisfied, all the recombined blocks with non-negative cost function slope values are selected and returned to the parent node, as in the enumeration method.
ii. The second dominance rule is applied when the sub-block at a node is combined with the blocks returned by its child nodes. The blocks returned by the child nodes that satisfied the condition in the first rule are directly appended to the sub-block at the node; no new set of blocks is generated at the parent node using the blocks returned by such child nodes.
Lines 5 to 17 in the pseudocode shown in Algorithm 6 check whether a node at the current level satisfies the condition in the first dominance rule. For each operation, a counter records the number of times the operation appears among all the sub-blocks in the tree of sub-blocks originating from the node; lines 5 to 12 determine this counter for each operation. The condition in line 14 of the pseudocode verifies that each operation belonging to the tree of sub-blocks exists only within it and does not exist in the other trees of sub-blocks at the same level. A flag records the outcome: the flag is set to 1 if the node satisfies the condition in the first rule and 0 otherwise. Lines 4 to 14 in Algorithm 7 show the procedure to select the recombined block with the highest non-negative cost function slope value when the node satisfies the condition (flag equal to 1). Lines 16 to 21 show the procedure to select all the recombined blocks with non-negative cost function slope values when the node does not satisfy the condition (flag equal to 0). Subsequently, at the immediately lower level, the blocks of the nodes that satisfied the condition at the current level are directly appended to the sub-block, as shown in lines 18 to 22 of Algorithm 6. For the child nodes that did not satisfy the condition, their returned blocks are combined in all possible ways with the sub-block at the parent node, as shown in lines 25 to 38 of the pseudocode.
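The selection logic driven by the first dominance rule can be sketched as follows. This is an illustrative fragment, not the paper's pseudocode: blocks are frozen sets of operation identifiers, the slope computation is supplied as a callback, and the argument names are our own.

```python
def select_blocks(own_blocks, own_ops, sibling_ops, slope_of):
    """Return the blocks a node passes to its parent.

    If the node's operations are disjoint from those of every other node
    at the same level (first dominance rule satisfied), only the block
    with the highest non-negative slope is returned; otherwise all blocks
    with non-negative slope are returned, as in the enumeration method.
    """
    qualifying = [b for b in own_blocks if slope_of(b) >= 0.0]
    if not qualifying:
        return []                        # null set returned to the parent
    if own_ops.isdisjoint(sibling_ops):  # dominance condition satisfied
        return [max(qualifying, key=slope_of)]
    return qualifying

# Toy usage with made-up slope values.
slopes = {frozenset({7}): 0.05, frozenset({7, 9}): 0.13}
blocks = [frozenset({7}), frozenset({7, 9})]
print(select_blocks(blocks, frozenset({7, 9}), frozenset({1, 2}), slopes.get))
```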
Figure 4 shows the optimal block generated using the improved method for the illustration problem shown in Figure 2. The recombined blocks generated at nodes 4 and 5 at level 3 of the backward pass shown in Figure 4 have common operations. Therefore, they do not satisfy the condition in the first dominance rule, and their blocks are combined in all possible ways with the sub-block of node 3 to form the three recombined blocks at node 3.
Node 3 at level 2 satisfies the condition in the first dominance rule, as none of the operations in its tree of sub-blocks exists in the tree of sub-blocks of node 2. Therefore, only the single block with the highest non-negative cost function slope value (0.13) is selected from among the recombined blocks with non-negative cost function slope values at node 3. Similarly, node 2 also satisfies the condition, as none of the operations in its tree of sub-blocks exists in that of node 3. Consequently, at node 1, the blocks returned by nodes 2 and 3 are directly appended to the sub-block of node 1 using the second dominance rule, forming the recombined block that is also the optimal block for the given instance.
The following Theorems 4.4 and 4.5 prove the two dominance rules.
Theorem 4.4. If the operations in the tree of sub-blocks originating from a node do not exist in the trees of sub-blocks originating from the other nodes at the same level, then the recombined block with the highest cost function slope value at that node dominates the remaining recombined blocks.
Proof. Suppose that two blocks with a set of common operations are returned to a parent node from two child nodes. Since the child nodes originate from the same parent node, the sets of operations already assigned in their preceding nodes are the same. Therefore, the sets of remaining operations available to form the respective trees of sub-blocks in the forward pass are also the same, and the two returned blocks are subsets of this common set of remaining operations. As per the forward pass of the optimal block generation procedure shown in lines 28 to 32 of Algorithm 4, the two trees of sub-blocks are formed from two different operations succeeding the set of already assigned operations. Since the two blocks have common operations, the set of all the operations in one tree of sub-blocks is the same as in the other. This is also evident from the illustrative example shown in Figure 2, where the set of operations {16, 14, 10, 3, 18, 12, 21} is the same in the trees of sub-blocks originating from nodes 4 and 5. This is because a tree of sub-blocks is generated by identifying the preceding and succeeding chains of contiguous operations. As a result, any two trees of sub-blocks originating from the same parent node will either have different sets of operations (e.g., the trees of sub-blocks of nodes 2 and 3 in Figure 2) or the same set of operations (e.g., the trees of sub-blocks of nodes 4 and 5 in Figure 2). Nevertheless, the grouping of operations within the sub-blocks of one tree can differ from the grouping in the other trees. This is evident from Figure 2, where the grouping of operations within the sub-blocks of the tree originating from node 4 differs from that of the tree originating from node 5. Consequently, the cost function slope values of the sub-blocks also differ, as is evident from Figure 2. Therefore, it is the optimal combination of sub-blocks belonging to the different trees of sub-blocks originating from the same parent node that eventually optimizes the total cost. Hence, all possible combinations of the recombined blocks are maintained until the trees of sub-blocks with a common set of operations converge at a parent node.
Suppose that the trees of sub-blocks with common operations converge at a node. Then each recombined block generated at the node is a combination of the sub-blocks contained in its tree of sub-blocks. The recombined block containing the optimum combination of sub-blocks has the highest cost function slope value among all the recombined blocks generated at the node, and therefore dominates them.
Theorem 4.5. A recombined block that satisfied the condition in the first dominance rule at a child node can be directly appended to all the recombined blocks at its parent node; recombined blocks without it need not be generated to find the optimal block.

Proof. As discussed in the proof of Theorem 4.4, all possible combinations of the recombined blocks are maintained until the trees of sub-blocks with a common set of operations converge at a parent node. Therefore, the blocks returned by the child nodes having common operations with other nodes at the same level are combined in all possible ways to eventually obtain the optimum combination of operations in the shiftable block with the highest cost function slope value. Since a block that satisfied the condition in the first dominance rule is already the optimum combination of sub-blocks at its child node, it does not need to be recombined in all possible ways; it can simply be appended to all the recombined blocks generated at the parent node.

The proposed OT algorithm for ETSP
The problem environment of ETSP is the same as that of JIT-JSP, except that the due dates and earliness-tardiness penalties are associated only with the last operation of each job. The last operations of the jobs form a subset of the set of all operations. We use the same notations for the earliness-tardiness penalties and due dates of all the operations. The earliness-tardiness penalties and due dates corresponding to the last operations are the inputs to the problem; the earliness-tardiness penalties and due dates of the remaining operations are set to zero. All the other notations used in the description of the OT algorithm for ETSP are the same as those used for JIT-JSP.
Since in ETSP only the last operation of each job has a due date, all but the last operation of each job are scheduled according to their positions in the given sequence at their earliest start times. The last operation of each job is scheduled as per the given sequence either at its due date or at its earliest possible start time, whichever is later. The left shifting procedure is subsequently invoked to optimize the partial schedule. The left shifting procedure is applied only if at least one job's last operation is not contiguously scheduled with any of its immediately preceding operations on the same job or the same machine. If the last operations of the jobs in the partial schedule are contiguous with their respective preceding operations on the same job or the same machine, left shifting is not possible, as all the operations except the last operations of the jobs are already scheduled at their earliest possible start times. Algorithm 8 shows the pseudocode of the OT algorithm for ETSP.
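The initial timing rule above can be sketched as a one-line decision per operation. This sketch assumes that "scheduled at its due date" means the last operation completes exactly at its due date, so its start is the due date minus the processing time; the function name and parameters are our own.

```python
def initial_start(is_last, earliest_start, proc_time, due_date):
    """Initial start time of an operation in the ETSP timing step.

    Non-last operations are left-aligned at their earliest start time;
    a last operation starts so that it completes at its due date, or at
    its earliest possible start time, whichever is later.
    """
    if not is_last:
        return earliest_start
    return max(earliest_start, due_date - proc_time)

# A last operation that could start at t=12 but is only due at t=40
# is deferred so that it completes exactly at the due date:
print(initial_start(True, earliest_start=12, proc_time=5, due_date=40))  # 35
```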
The OPT BLOCK function call in Algorithm 8 is directed to Algorithm 3 presented in Section 4.1. The OPT BLOCK function is invoked only if at least one job's last operation in the partial schedule is not contiguously scheduled with any of its immediately preceding operations. Either the enumeration method (Algorithm 5) or the improved method (Algorithms 6 and 7) can be implemented in the backward pass to generate the set of shiftable blocks. Since the earliness-tardiness penalties and due dates of the non-last operations are zero and only blocks with non-negative cost function slope values are returned to the parent node, the non-last operations remain left-aligned at their earliest start times during left shifting.

The total number of shiftable blocks that can be generated from n operations can theoretically be viewed as the problem of generating all k-combinations of n elements for every k (1 ≤ k ≤ n), which is 2^n. This is the same as the computational complexity of generating the shiftable blocks in the case of JIT-JSP. However, the only non-last operations included in the shiftable blocks are those lying between the last operation of the job that appears first in the sequence and the last operation of the job that appears last in it. Therefore, in a practical scenario, the computational complexity of the OT algorithms for ETSP is much lower than that for JIT-JSP. The computational performance of the proposed OT algorithms on ETSP instances with up to 50 jobs and 30 machines is presented in the subsequent section.
The non-last operations can also be scheduled using the Giffler and Thompson (GT) algorithm to generate active schedules. However, if a non-last operation is sequenced after a last operation on the same machine in the given sequence, the last operation should not be scheduled prior to it. The resulting optimal schedule will be the same as or better than the optimal schedule generated by strictly following the given sequence of operations. Since the GT algorithm cannot be implemented in the optimization solver, we did not use it within the OT algorithms, to allow an effective comparison with the optimization solver in the computational study.

Computational results
The performance of the proposed OT algorithms for JIT-JSP and ETSP is evaluated using a set of benchmark instances from the literature [5]. The problem set consists of 72 instances. Each instance is named according to a pattern encoding the number of jobs, the number of machines, the due date tightness, the penalty setting, and a replicate index. The number of jobs is 10, 15 or 20, and the number of machines is 2, 5 or 10. Jobs are processed exactly once on each machine, though the processing order of jobs on the machines varies. Processing times are in the range [10, 30]. The due date tightness is specified at one of two levels. The penalty setting is one of two types: in the first, the earliness and tardiness penalties are both chosen randomly in the range [0.1, 1]; in the second, the tardiness penalty is chosen in the range [0.1, 1], whereas the earliness penalty is chosen in the range [0.1, 0.3]. There are two instances for each combination of the above parameters. Since the instances used in the literature for ETSP are not publicly available, we use the above 72 JIT-JSP instances and consider the due dates and earliness-tardiness penalties only for the last operation of each job.
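The parameter ranges above can be sketched as a small sampling routine. This is illustrative only: the benchmark set's due-date generation rule is not reproduced, and the setting labels `"both"` and `"asym"` are our own names for the two penalty types described in the text.

```python
import random

def draw_penalties(setting, rng):
    """Draw (earliness, tardiness) penalties for one operation.

    "both": earliness and tardiness penalties both uniform in [0.1, 1].
    "asym": tardiness penalty in [0.1, 1], earliness penalty in [0.1, 0.3].
    """
    tardiness = rng.uniform(0.1, 1.0)
    if setting == "both":
        earliness = rng.uniform(0.1, 1.0)
    else:
        earliness = rng.uniform(0.1, 0.3)
    return earliness, tardiness

rng = random.Random(1)
proc_times = [rng.randint(10, 30) for _ in range(5)]  # processing times in [10, 30]
e, t = draw_penalties("both", rng)
print(proc_times)
```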
The performance of the OT algorithms is compared with the results obtained by solving the linear programming (LP) formulation using the CPLEX solver [20]. The OT algorithms were coded in the C language and run using Visual C++ on a PC with a 3.6 GHz Intel Core i7-9700K octa-core processor, 16 GB RAM, and the Windows 10 operating system. The LP model was coded in C using callable libraries from CPLEX Concert Technology and embedded within the OT algorithm code to compare the results.
To study the performance of the OT algorithms, a simple local search (LS) algorithm is used to generate sequences of operations for each problem instance. Algorithm 9 shows the pseudocode of the LS algorithm. In the LS algorithm, an initial solution is first generated by arranging the operations in increasing order of their due dates. The initial solution is set as the current solution, and a set of neighbourhood solutions is generated from it. The best neighbour (with the least TWET value) replaces the current solution if it is an improved solution. Generating neighbours and selecting the best one to replace the current solution is repeated until there is no further improvement in the objective value. We used the pair-wise interchange mechanism for generating neighbours: two operations in the current solution are swapped if the swap does not violate the precedence relationships between the operations in the sequence. An operation is paired with another operation for swapping only if the distance between their positions in the sequence is within a specified limit, which we set to 50 for the computational study.
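The LS procedure above can be sketched as follows. This is a minimal, hedged version: `precedes(a, b)` is an assumed helper reporting whether operation a must come before operation b, and `twet` is an assumed evaluator returning the total weighted earliness and tardiness of a sequence; neither comes from the paper's code.

```python
def neighbours(seq, precedes, limit=50):
    """Pair-wise interchange neighbourhood: swap two operations whose
    positions are at most `limit` apart, skipping swaps that would
    violate a precedence relationship."""
    out = []
    for i in range(len(seq)):
        for j in range(i + 1, min(i + 1 + limit, len(seq))):
            if precedes(seq[i], seq[j]):   # swap would violate precedence
                continue
            cand = list(seq)
            cand[i], cand[j] = cand[j], cand[i]
            out.append(cand)
    return out

def local_search(seq, precedes, twet):
    """Best-improvement local search: replace the current solution with
    its best neighbour until no neighbour improves the objective."""
    cur, cur_val = list(seq), twet(seq)
    while True:
        nbrs = neighbours(cur, precedes)
        if not nbrs:
            return cur, cur_val
        best = min(nbrs, key=twet)
        if twet(best) < cur_val:
            cur, cur_val = best, twet(best)
        else:
            return cur, cur_val

# Toy usage: no precedence constraints, objective = displacement from sorted order.
cost = lambda s: sum(abs(v - (i + 1)) for i, v in enumerate(s))
print(local_search([3, 1, 2], lambda a, b: False, cost))
```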
Since the proposed OT algorithms and CPLEX are exact methods, the objective values obtained with the two approaches are always the same. Therefore, the computational performance is evaluated based on the computation time. The average (AVG) and the maximum (MAX) computation times required to generate schedules using the OT algorithms for the sequences generated by the LS procedure are considered for comparison. CPLEX was found to generate schedules with only marginal variability in its computation time across the different sequences generated using the LS algorithm; therefore, only the average computation time (AVG) required by CPLEX is considered. The following sections present the performance comparison between the OT algorithms and CPLEX on JIT-JSP and ETSP instances.

Performance comparison on JIT-JSP instances
Tables 1, 2 and 3 show the results obtained with the OT algorithms and CPLEX for the JIT-JSP instances from the literature with 2, 5 and 10 machines, respectively. OT1 represents the enumeration method, and OT2 represents the improved method. The comparison of the average speed-up values (the CPLEX computation time divided by that of the OT algorithm) of OT1 and OT2 reveals that the OT algorithms are approximately 30 to 50 times faster than CPLEX in generating the schedules. With the increase in the number of machines, the average speed-up values of the OT algorithms decrease. The comparison between the average speed-up values of OT1 and OT2 reveals that the OT2 algorithm performs slightly better than the OT1 algorithm. The comparison of the AVG values of the OT algorithms with those of CPLEX reveals that the OT algorithms consistently perform better than CPLEX. The comparison of the MAX values of the OT algorithms with the corresponding AVG values of CPLEX reveals that, for problem instances involving 2 and 5 machines, the OT algorithms consistently perform better than CPLEX. However, for a few instances with 10 machines shown in Table 3, the MAX values of the OT algorithms are inferior to the corresponding AVG values of CPLEX. This indicates that for a few larger instances (i.e., instances with 15 and 20 jobs) with 10 machines, the OT algorithms required more computation time than CPLEX to generate schedules for some of the sequences generated with the LS algorithm. However, the AVG values of the OT algorithms for those instances are marginally better than those of CPLEX. Hence, it can be concluded that the proposed OT algorithms are competitive with CPLEX in terms of computation time for small and medium size problems. For larger problems with 10 machines, the proposed OT algorithms are not consistent in their performance: they require more computation time than CPLEX for some of the sequences generated by the LS algorithm.
The above results obtained with the benchmark instances from the literature reveal that, with the increase in problem size, the performance of the OT algorithms deteriorates relative to CPLEX. To analyze the problem sizes beyond which CPLEX would approach or outperform the OT algorithms in terms of average computation time (AVG), we generated larger instances with up to 30 jobs and 20 machines. The problem instances were generated based on the procedure used in the literature [5], described in Section 5 of this paper. The newly generated instances are named using the same pattern as the instances from the literature shown in Tables 1, 2 and 3. Table 4 shows the results obtained with the OT algorithms and CPLEX for the newly generated larger instances. The results reveal that, for some of the instances involving 20 machines, the speed-up values (CPLEX time divided by OT time) are less than 1; these are highlighted in bold in Table 4. This shows that, as the problem size increases to 25 jobs and 20 machines, CPLEX performs relatively better than the OT algorithms. The MAX values obtained with the OT algorithms for instances involving 20 machines are also much higher, as shown in Table 4. This can be attributed to the exponential complexity of the OT algorithms, which limits their application to small and medium-sized JIT-JSP instances, particularly when implemented within heuristic and metaheuristic algorithms.

Performance comparison on ETSP instances
Tables 5, 6 and 7 show the results obtained with the OT algorithms and CPLEX for the ETSP instances from the literature with 2, 5 and 10 machines, respectively. We considered both the enumeration and the improved methods, represented in the tables as OT1 and OT2, respectively. The speed-up values of OT1 and OT2 in the tables reveal that the OT algorithms are approximately 50 to 1500 times faster than CPLEX in generating the schedules. The comparison of the average AVG values of the OT algorithms for the instances with 2, 5 and 10 machines reveals that the performance of the OT algorithms is only negligibly influenced by the increase in the number of machines. However, the average speed-up values of the OT algorithms increase with the number of machines due to the increase in the AVG values of CPLEX; this shows that CPLEX is more strongly influenced by the increase in problem size than the OT algorithms. The comparison between the average speed-up values of OT1 and OT2 reveals that the OT2 algorithm performs slightly better than the OT1 algorithm. The comparison of the AVG and MAX values of the OT algorithms with those of CPLEX reveals that, irrespective of the problem size, the OT algorithms consistently outperform CPLEX.
In addition to the problem instances from the literature, we generated larger ETSP instances with up to 50 jobs and 30 machines to analyze whether CPLEX would approach or outperform the OT algorithms. Table 8 shows the results obtained with the OT algorithms and CPLEX for the newly generated larger instances. The results reveal that, with the increase in problem size, the OT algorithms perform much better than CPLEX: they generated schedules approximately 15000 times faster than CPLEX for instances with 50 jobs and 30 machines.

Conclusions
In this paper, we presented exact algorithms to generate optimal timing schedules for two job shop scheduling scenarios, namely JIT-JSP and ETSP. In JIT-JSP, each operation has a due date and associated weights to penalize its earliness and tardiness; the scheduling objective is to minimize the weighted sum of earliness and tardiness associated with the deviation of the completion time of each operation from its respective due date. In ETSP, only the last operation of each job has a due date and associated weights; the objective is to minimize the weighted sum of earliness and tardiness associated with the deviation of the completion time of each job from its due date. We proposed two OT algorithms to generate optimal schedules, which can be used for both scheduling scenarios. The first, OT1, is an enumeration method. The second, OT2, improves the first by using dominance rules to reduce the solution space, thereby improving the computation time. The performance of the OT algorithms, OT1 and OT2, was compared with the CPLEX solver on several JIT-JSP and ETSP instances. The computational experiments revealed that the improved method (OT2) performed slightly better than the enumeration method (OT1) on all the problem instances. Though the OT algorithms have exponential complexity, the computational study revealed that they generate schedules in reasonable computation time and are competitive with CPLEX for small and medium size JIT-JSP instances. On the ETSP instances, the OT algorithms generated schedules in short computation times and consistently outperformed CPLEX on all the problem instances. To the best of our knowledge, this is the first reported study on exact approaches for generating optimal timing schedules in job shop scheduling problems with the TWET minimization objective.

Future research can be directed towards improving the proposed OT algorithms to reduce their computational complexity. The schedule generation mechanism in the proposed OT algorithms allows them to be used with priority dispatching rules; a future research direction would therefore be developing and implementing priority dispatching rules for static and dynamic job shop scheduling problems. The proposed OT algorithms can also be employed to generate schedules within heuristic and metaheuristic approaches, so future research can be directed towards developing efficient heuristic and metaheuristic approaches incorporating the proposed OT algorithms, as well as towards extending them to other related multi-machine scheduling problems.

Figure 1. A typical cost function plot obtained by left shifting a set of operations in a block.

Figure 2. An illustrative example showing the generation of a tree of sub-blocks in the forward pass of the optimal block generation procedure.

Figure 3. Generating all possible shiftable blocks in the backward pass for the illustration problem.

Algorithm 3. Pseudocode to generate the optimal block for left shifting: one function call generates the tree of sub-blocks in the forward pass, and a second function call finds the shiftable blocks with non-negative cost function slope values. In Algorithm 4, a sub-block corresponding to an operation is generated considering the operations already assigned in the preceding nodes, and the child nodes corresponding to the node are generated in lines 28 to 33.

Figure 4. Generating the optimal block using the improved method for the illustration problem.

Table 1. Computational results for the JIT-JSP instances from literature with 2 machines.

Table 2. Computational results for the JIT-JSP instances from literature with 5 machines.

Table 3. Computational results for the JIT-JSP instances from literature with 10 machines.

Table 4. Computational results for larger size JIT-JSP instances.

Table 5. Computational results for the ETSP instances from literature with 2 machines.

Table 6. Computational results for the ETSP instances from literature with 5 machines.

Table 7. Computational results for the ETSP instances from literature with 10 machines.

Table 8. Computational results for larger size ETSP instances.