1. Introduction
Minimising the makespan in the Permutation Flow shop Scheduling Problem (PFSP) is one of the most addressed problems in the Operations Research literature [1]. In the PFSP, there are n jobs, each one with m operations, and a set of m machines. Each machine is exclusively responsible for processing one operation of each job and, therefore, each job must be processed on all machines following the same route. The goal of this scheduling problem is to find the best sequence (the same on all machines) to process the jobs according to a certain objective function, the makespan being the most common one in the literature. In this problem, machines are typically classified as primary resources that remain busy throughout the processing of each job. In addition, in several situations, the processing of jobs may also require the use of other resources (such as, e.g., raw materials, human resources, or setup tools), denoted as 'secondary' [2]. Among these types of secondary resources, the use of servers is very common in the real manufacturing industry [3]: examples can be found in flexible manufacturing systems [4]; in scheduling problems with versatile machines and assembly components [5]; in computer-controlled material handling systems [6]; in biomass truck scheduling problems in the context of supply chain optimisation [7]; and in container terminals [8]. Servers are secondary resources in charge of carrying out setups between jobs or families of jobs. In this sense, a server could represent a robot (see, e.g., [4]), a human (see, e.g., [9]), or an automated guided vehicle [10], among others. In the literature, this type of resource has also been denoted as a setup operator (see, e.g., [11,12]). In addition to the previous real implications of the problem, this scheduling problem is especially appropriate for the U-shaped manufacturing layout (see [13] for a definition of this production line layout), where machines are distributed in a flow-shop layout with a U shape and human resources are located in the centre, in charge of carrying out the changeovers.
Despite the practical applications of servers in real scenarios and the recent interest in the literature in these kinds of scheduling problems with secondary resources responsible for setup times (see, e.g., [3,9,14,15,16]), the literature addressing this topic is still very scarce and there are many open research questions. Most papers address the problem with a single server (see, e.g., [6,17]) under very simple layouts, mostly identical parallel machines (see, e.g., [18,19,20]). The case with multiple human resources has been tackled only for unrelated parallel machines, the hybrid flow shop, the open shop, and the no-wait flow shop. Although these papers propose relevant algorithms for solving such scheduling problems, there is still a need to further improve the knowledge of multiple servers, especially by solving the permutation flow shop scheduling problem, the most relevant and addressed layout in the literature, and analysing their influence on it. To cover this gap, we address the Permutation Flow shop Scheduling problem with Multiple Servers, denoted as PFSMS in the following. Regarding the setup times that these servers have to carry out, according to [21], they may be classified first as non-anticipatory or anticipatory: the former (also denoted as inseparable or attached) must be executed after the job arrival; the latter (also denoted as separable or detached) can be performed at the machine at any time after the completion of the previous job (i.e., they may be performed before the job arrival). Second, setup times can be classified as sequence-dependent or sequence-independent, depending on whether or not this amount of time depends on the previous job executed on the same machine. Following the common approach used in the literature, in this paper we address the PFSMS with non-anticipatory and sequence-dependent setup times. This scheduling problem, with the objective of minimising the makespan, is denoted following the notation of [18,22]. This problem is clearly NP-hard, since the same problem without considering the servers is already NP-hard [23].
As this paper deals with the PFSMS for the first time, its contributions can be stated as follows:
A new PFSP problem is defined with multiple servers or human resources in charge of carrying out setups.
We identify efficient formulations to solve the proposed problem by developing three different Mixed Integer Linear Programming (MILP) models.
Another contribution is the development of decoding procedures. These procedures are important to obtain feasible schedules and to reduce the solution space of the problem. Both issues are essential to guarantee the efficiency of the approximate algorithms developed for the problem.
A new procedure to generate static dispatching rules is proposed. Using this procedure, we propose and compare 896 different dispatching rules.
In addition, we propose different NEH-based constructive heuristics to efficiently solve the problem. Using the traditional NEH heuristic [24], we analyse the influence and efficiency of its first phase, embedding all the previous dispatching rules. In this regard, note that heuristics have traditionally been proposed in the literature either to obtain high-quality solutions in short times (required when decisions have to be made almost instantaneously or under high computational requirements) or to provide good initial solutions for more advanced algorithms (such as metaheuristics). Recently, they have also become relevant for quickly reacting to the changing environments of Industry 4.0 [25].
Finally, starting with the best constructive heuristic, we develop an iterated greedy metaheuristic to find near-optimal solutions in large-sized instances.
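Since one of the contributions above is an iterated greedy metaheuristic, a minimal generic skeleton may help fix ideas. The following is a hedged sketch (not the paper's IGV): it implements the classical destruction-construction loop with a constant-temperature acceptance criterion; the evaluation function, parameter values, and names are illustrative assumptions.

```python
import math
import random

def iterated_greedy(jobs, evaluate, d=3, iters=100, temperature=1.0, seed=0):
    """Generic iterated greedy skeleton: remove d random jobs, reinsert each
    one greedily at its best position, and accept worse solutions with a
    simulated-annealing-like probability. `evaluate` maps a sequence to a
    cost to be minimised (e.g., the makespan under a decoding procedure)."""
    rng = random.Random(seed)

    def greedy_insert(seq, job):
        # try every insertion position and keep the cheapest one
        best, best_cost = None, float("inf")
        for pos in range(len(seq) + 1):
            cand = seq[:pos] + [job] + seq[pos:]
            c = evaluate(cand)
            if c < best_cost:
                best, best_cost = cand, c
        return best

    current = list(jobs)
    cur_cost = evaluate(current)
    best, best_cost = current[:], cur_cost
    for _ in range(iters):
        removed = rng.sample(current, min(d, len(current)))      # destruction
        partial = [j for j in current if j not in removed]
        for job in removed:                                      # construction
            partial = greedy_insert(partial, job)
        cand_cost = evaluate(partial)
        # constant-temperature acceptance criterion
        if cand_cost < cur_cost or rng.random() < math.exp(-(cand_cost - cur_cost) / temperature):
            current, cur_cost = partial, cand_cost
        if cur_cost < best_cost:
            best, best_cost = current[:], cur_cost
    return best, best_cost
```

In practice, `evaluate` would be one of the decoding procedures of Section 4 applied to the job sequence.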
Note that with these contributions we try to familiarise future researchers and practitioners with the problem under study, providing them with tools that can be directly applied, combined, or adapted to other related scheduling problems. It is also worth mentioning that, as with most proposals in the literature, we propose deterministic approaches to solve the problem under consideration. Despite the inherent uncertainty in the supply chain [26,27,28], in many scheduling situations deterministic approaches are very robust to stochasticity and uncertainty (see in this regard [25,29]).
The rest of the paper is organised as follows: the problem is described in Section 2, where we also review the literature related to the problem under study. In Section 3, we propose several new formulations for the problem using MILP models. The decoding procedures are explained in Section 4. Regarding approximate algorithms, the proposed dispatching rules and constructive heuristics are detailed in Section 5, while the iterated greedy algorithm is explained in Section 6. The computational results of all previous methods are shown in Section 7. Finally, the conclusions and future research lines are discussed in Section 8.
2. Problem Description and Background
In the PFSP, there is a set of n jobs that must be processed on a set of m machines, following the same route of machines for each job. Each machine is always available and has to process the jobs (one by one) following a certain sequence, π, which is identical for all machines (permutation constraint). Let O_{ij} denote the operation of job j processed on machine i. Whenever it does not cause confusion, let O_{i[k]} be the operation corresponding to the job in position k on machine i (i.e., O_{i,π(k)}). Each operation O_{ij} has a processing time p_{ij}. In addition, each job j requires a setup time s_{ilj} when it is processed after job l on machine i. This setup is both non-anticipatory and sequence-dependent, and has to be carried out on that machine i by a worker w. Let W be the set of r identical workers that can perform all setups. Denoting by C_{ij} the completion time of job j on machine i, the goal of the problem is to find the best schedule that minimises the maximum completion time, C_max, where C_max is equal to max_j C_{mj}.
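To illustrate the completion-time structure of the problem, the following is a minimal Python sketch of the makespan recurrence under the simplifying assumption that a server is always available (i.e., relaxing the worker constraint, which the decoding procedures of Section 4 handle); the data layout and names are illustrative.

```python
def makespan(seq, p, s, s0):
    """Makespan of a permutation flow shop with non-anticipatory,
    sequence-dependent setups, assuming an always-available server.

    seq        : job sequence (same on every machine)
    p[i][j]    : processing time of job j on machine i
    s[i][l][j] : setup time of job j on machine i when it follows job l
    s0[i][j]   : setup time when j is the first job on machine i
    """
    m = len(p)
    C = [[0.0] * len(seq) for _ in range(m)]
    for k, j in enumerate(seq):
        for i in range(m):
            ready = C[i - 1][k] if i > 0 else 0.0   # job arrival at machine i
            free = C[i][k - 1] if k > 0 else 0.0    # machine i becomes free
            setup = s0[i][j] if k == 0 else s[i][seq[k - 1]][j]
            # non-anticipatory: the setup starts only once the job has
            # arrived AND the machine is free
            C[i][k] = max(ready, free) + setup + p[i][j]
    return C[m - 1][len(seq) - 1]
```

With limited workers, the setup start times can only be delayed further, so this relaxation yields a lower bound on the actual makespan.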
In Figure 1, we show an example of the problem with four jobs, four machines, and two workers. We can observe, for example, that one of the setup operations cannot be performed earlier, as all the workers are unavailable. Similarly, some idle time is also forced on machines 3 and 4 by setup operations that have to wait until a worker is available.
As mentioned in the previous section, due to its importance, the permutation flow shop scheduling problem is one of the most active problems in the operations research literature, with hundreds of contributions in recent years addressing different variants of and constraints on the classical problem. Recent examples solving the permutation flow shop can be found in [30,31,32,33,34,35,36], and examples addressing different setup configurations in [37,38,39,40,41,42,43,44]. For comprehensive reviews of the problem under different constraints, we refer the interested reader to [1,45,46,47,48,49]. Despite the extensive flow-shop-based literature, we are not aware of any previous reference to the problem under study (PFSMS) so far. Therefore, we focus this review of the literature on scheduling problems that deal with single or multiple servers. Regarding the single-server literature, most studies focus on solving the parallel machine scheduling problem and its applications. In this regard, this problem has been solved for two machines by, for example, [3,4,50], considering sequence-independent setup times and using objective functions based on the makespan or total idle time. The problem with m machines has been solved by [6] for sequence-dependent setup times and makespan minimisation, and by [17] for sequence-dependent setup times to minimise total weighted earliness and tardiness. The complexity of this kind of problem is addressed by [10]. The problem with a single server has also been solved by [51] for the parallel dedicated machine scheduling problem with sequence-dependent setup times and makespan minimisation, while [20] have proposed a MILP model and two approximate algorithms to solve the unrelated parallel machine scheduling problem with sequence-dependent setup times and machine eligibility restrictions. Regarding other layouts in the literature with a single server, the problem has also been solved for the flow shop scheduling problem with two machines and makespan minimisation by [52]. The authors have addressed the problem with sequence-independent and anticipatory setup times. The complexity of this problem is addressed by [53], considering unit processing times, and, more generally, by [54]. Using the same configuration of setup times and traditional processing times, [55] have solved the no-wait variant of the problem to minimise the total completion time. Anticipatory and non-anticipatory setup times are addressed by [56], minimising the makespan and also considering dismounting times. The problem with m machines and a single server has been solved by [5] for total completion time minimisation, but in this case the server is responsible for both processing and setup operations. The authors have addressed the problem with non-anticipatory and sequence-independent setup times.
Regarding the related scheduling problem with multiple servers, [18] have addressed the identical parallel machine scheduling problem with sequence-independent setup times. For the unrelated parallel machine layout, [15] have proposed a GRASP algorithm to solve the problem with sequence-dependent setup times for makespan minimisation, while [16] have proposed an iterated greedy-based algorithm to solve a bi-objective variant of the problem. Both works have addressed the case where a setup requires more than one server at the same time. Based on the ceramic tile manufacturing sector, [19] have solved a related problem in an unrelated parallel machine layout, where the setup times depend on the assignment of resources. The hybrid flow shop scheduling problem with sequence-independent setup times and multiple servers has recently been addressed by [9,14]. The former have proposed a backtracking search optimisation algorithm to solve the anticipatory variant, while the latter have proposed several constructive and composite heuristics to solve both the anticipatory and non-anticipatory cases. An example in the open shop layout with two machines can be found in [57] for non-anticipatory and sequence-independent setup times. Regarding the flow shop layout with multiple servers, the problem has been solved only by [58,59]. However, they addressed the no-wait variant with anticipatory and sequence-dependent setup times by proposing a genetic algorithm. A summary of the literature review is presented in Table 1.
3. MILP Models
In this section, we elaborate three different MILP models to exactly solve the problem under consideration. The parameters and variables used by the models are presented in Section 3.1. Then, a detailed description of each model is given in Section 3.2, Section 3.3 and Section 3.4 for Models 1, 2, and 3, respectively.
3.1. Notation: Variables and Common Parameters
The three proposed MILP models use the following common indices and parameters:
i (i = 1, …, m): machine index.
j (j = 1, …, n): job index.
k (k = 1, …, n): position index.
w (w = 1, …, r): worker index.
Setup time of a job on machine i when it is processed immediately after job j; if the job is the first job of the sequence, a specific first-job setup time is used instead.
Processing time of job j on machine i.
The proposed models use some of the following variables:
1 if the setup associated with an operation is carried out by worker w, 0 otherwise.
1 if job j is assigned to position k and is processed before a given job, 0 otherwise. Additionally, an auxiliary variable takes the value 1 if a job is the first job in the sequence (0 otherwise).
1 if job j is processed before a given job, 0 otherwise.
1 if job j is assigned to position k, 0 otherwise.
1 if the setup of the job in position k is performed by worker w on machine i, 0 otherwise.
Completion time of job j on machine i.
Makespan.
Completion time of the job in position k on machine i.
Starting time of job j on machine i.
Starting time of the job in position k on machine i.
1 if worker w performs the setup of an operation immediately before another operation, 0 otherwise. Furthermore, an auxiliary variable takes the value 1 if an operation is the first operation assigned to worker w.
If the setups of two operations are performed on the same machine, 1 if the setup of the first is carried out after the setup of the second, and 0 otherwise.
If the setups of two operations are performed on the same machine, 1 if the setup of the first is carried out after the setup of the second, and 0 otherwise (a second, symmetric ordering variable).
3.2. Model 1
Our first proposal, denoted Model 1, emerges from the Wilson model, introduced by [60] for the classical permutation flow shop problem, which belongs to the Wagner family. This model uses binary assignment variables to define the sequence of jobs in the shop. These auxiliary variables are then used to calculate the starting time of the job in position k. In our proposal, we introduce the setup times and workers, which are defined by additional decision variables. Using the previous parameters and variables, Model 1 can be formulated as follows:
The set of constraints (1) ensures that every job is assigned to a unique position, while constraints (2) ensure that exactly one job is assigned to each position. The set (3) establishes that each job has a successor, whereas the predecessors are defined in constraints (4) and (5). The starting time of every job on the first machine is defined in the set (6). Analogously, the starting time of the first job on every machine is calculated using constraints (7) and (8). Constraints (9) establish that a job must start after its previous operation is completed. The set (10) ensures that a job starts once its previous job on the same machine has been processed. Constraints (11) ensure that every setup is assigned to a worker. The sets of constraints (12) and (13) ensure that the same worker does not perform two setups at the same time. Finally, constraints (14) and (15) establish the precedence relationships between the setups of two operations assigned to the same worker, depending on the values of the corresponding ordering variables.
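To fix ideas, the positional structure behind constraints (1) and (2) can be sketched in the usual Wagner/Wilson form; the symbol X_{jk} below is an illustrative placeholder for the job-position assignment variable of Section 3.1, not necessarily the paper's notation.

```latex
% each job j occupies exactly one position (cf. constraints (1))
\sum_{k=1}^{n} X_{jk} = 1 \quad \forall j \in \{1,\dots,n\},
\qquad
% each position k holds exactly one job (cf. constraints (2))
\sum_{j=1}^{n} X_{jk} = 1 \quad \forall k \in \{1,\dots,n\}.
```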
3.3. Model 2
Our second proposal is denoted Model 2. This model belongs to the Manne family of models [60], also denoted as a disjunctive formulation. Some examples of this formulation in flow-shop layouts can be found in the SGST model proposed by [60], in the MILP model proposed by [9], and in the MIP1 model proposed by [5]. This family starts by defining the sequence of jobs by means of precedence variables. In our case, we incorporate an additional variable for the sequences of setups in the workers. In doing so, Model 2 can be formulated as follows:
The set of constraints (16) ensures that each job has a predecessor. Note that a dummy job is introduced to represent the predecessor of the first job in the sequence. Furthermore, each job (including the dummy job) has at most one successor, which is bounded by constraints (17). The set (18) enforces that the setup of each operation is assigned to a unique worker. Constraints (19) define the binary precedence variables. The sets (20) and (21) ensure that an operation has to start processing after its predecessor on the machine and after its predecessor in the worker, respectively. The set of constraints (22) defines the completion time of an operation, and (23) ensures that an operation is not processed before its previous operation (on the previous machine) is completed. Finally, the set (24) defines the makespan of the sequence.
3.4. Model 3
Our last MILP model arises from Model 2, but modifies how workers are treated. In this case, we introduce a variable to define the sequence followed by each worker (the use of a related decision variable can be found in the first MILP model proposed in [61]). The formulation of this model is presented below:
As in the previous model, constraints (25) and (26) fully define the sequencing variables. The completion times on the first machine are established in the set (27), while constraints (28) and (29) bound the completion times of each job on the other machines. To do so, constraints (28) limit these completion times considering the previous machine, while constraints (29) consider the previous job. The worker-sequencing variable is defined in Equations (30)–(33). More specifically, the set (30) ensures that the setup of each operation has either a predecessor in the same worker or a dummy setup. Constraints (31) ensure that each setup has at most one successor in a worker, while constraints (32) and (33) ensure that each worker starts with a setup. The completion times of the jobs and the precedence relationships between the operations are linked in constraints (34) and (35). Finally, the set of constraints (36) defines the makespan.
4. Proposed Decoding Procedures and Complete Enumeration
Due to the limitation of MILP models in solving medium- and large-sized instances, we also propose different approximate algorithms to tackle the problem (described in Section 5). Each of the proposed algorithms uses a sequence of jobs to represent the solutions in a simple way, since each machine must process the jobs in the same order (permutation constraint, see Section 2). This sequence can be formally defined as follows:
Sequence of jobs, π: it represents the order that each machine has to follow to process the jobs, i.e., the job in position k has to be processed before the job in position k+1 on any machine i.
Obviously, for the problem under consideration, the same sequence of jobs could theoretically lead to different semi-active schedules depending on how and in which order the operations of the jobs are assigned to the workers. Therefore, in the proposed algorithms, it is necessary to include mechanisms to construct a unique schedule to represent the sequence. In this regard, the procedure to find a specific schedule (and consequently an objective function value) from a representation of the solutions is denoted as a decoding procedure. For the problem under consideration, a decoding procedure can be determined by establishing the following rules:
Therefore, given a sequence of jobs and the FAW rule, a different decoding procedure can be obtained by varying the priority rule. In this paper, we propose two families (each composed of six priority rules) to order the operations that are waiting to be processed. Each of these rules first selects an operation according to a certain criterion (either the operation whose setup can start the earliest or the operation that can be completed the earliest), breaking ties using a specific mechanism. More specifically, we propose and compare the following priority rules:
This rule selects the operation whose setup can start the earliest, breaking ties in favour of the operation in the lowest position of the sequence.
This rule selects the operation whose setup can start the earliest, breaking ties in favour of the operation that will be processed on the machine with the lowest index.
This rule selects the operation whose setup can start the earliest, breaking ties in favour of the operation with the lowest sum of setup and processing times.
This rule selects the operation whose setup can start the earliest, breaking ties in favour of the operation with the highest sum of setup and processing times.
This rule selects the operation whose setup can start the earliest, breaking ties in favour of the operation with the lowest setup time.
This rule selects the operation whose setup can start the earliest, breaking ties in favour of the operation with the highest setup time.
This rule selects the operation that can be completed the earliest, breaking ties in favour of the operation in the lowest position of the sequence.
This rule selects the operation that can be completed the earliest, breaking ties in favour of the operation that will be processed on the machine with the lowest index.
This rule selects the operation that can be completed the earliest, breaking ties in favour of the operation with the lowest sum of setup and processing times.
This rule selects the operation that can be completed the earliest, breaking ties in favour of the operation with the highest sum of setup and processing times.
This rule selects the operation that can be completed the earliest, breaking ties in favour of the operation with the lowest setup time.
This rule selects the operation that can be completed the earliest, breaking ties in favour of the operation with the highest setup time.
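The twelve rules above can be seen as lexicographic sort keys: a primary criterion (earliest setup start vs. earliest completion) followed by a tie-breaker. The following is a hedged Python sketch of this idea; the candidate fields (est, ect, pos, mach, setup, proc) and the tie-breaker labels are illustrative names, not the paper's.

```python
def select_operation(candidates, family="start", tiebreak="position"):
    """Pick the next waiting operation under a lexicographic priority rule.

    Each candidate is a dict with hypothetical fields:
      est   : earliest time its setup can start
      ect   : earliest time it can be completed
      pos   : position of its job in the sequence
      mach  : machine index
      setup, proc : setup and processing times
    """
    # primary criterion: earliest setup start or earliest completion
    primary = (lambda o: o["est"]) if family == "start" else (lambda o: o["ect"])
    ties = {
        "position": lambda o: o["pos"],              # lowest sequence position
        "machine": lambda o: o["mach"],              # lowest machine index
        "min_sp": lambda o: o["setup"] + o["proc"],  # lowest setup + processing
        "max_sp": lambda o: -(o["setup"] + o["proc"]),
        "min_s": lambda o: o["setup"],               # lowest setup time
        "max_s": lambda o: -o["setup"],
    }[tiebreak]
    return min(candidates, key=lambda o: (primary(o), ties(o)))
```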
Using a specific priority rule PR, the detailed procedure for decoding a sequence can be explained as follows. First, the operation of the first job of the sequence (and its corresponding setup) is processed on the first machine. Once this operation is completed, an operation is selected according to the priority rule PR among the operations that are waiting to be processed. The procedure is repeated until there are no more operations available. In Figure 2, we show an example (following the previous example) of the operations that are waiting to be processed after several operations have been completed. An example of the use of one priority rule to decode the sequence is shown in Figure 1, while in Figure 3 the same sequence is decoded using a different rule.
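The worker-assignment step of the decoding can be illustrated with a small first-available-worker sketch: given the setup requests in the order chosen by the priority rule, each setup goes to the worker who becomes free first, and (non-anticipatory case) cannot start before the job is ready. This is a hedged illustration of the FAW idea, not the paper's full decoder.

```python
import heapq

def assign_setups(requests, r):
    """Assign setups, in the given priority order, to the first available
    of r identical workers. Each request is (ready_time, duration);
    a setup cannot start before its ready time (non-anticipatory).
    Returns the setup start times in request order."""
    workers = [0.0] * r            # time at which each worker becomes free
    heapq.heapify(workers)
    starts = []
    for ready, dur in requests:
        free = heapq.heappop(workers)   # earliest-free worker
        start = max(free, ready)
        starts.append(start)
        heapq.heappush(workers, start + dur)
    return starts
```

With two workers and three unit-ready setups, the third setup is delayed until a worker is released, mirroring the forced idle times of Figure 1.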
Obviously, the use of non-complete representations of the solutions with a specific decoding procedure (i.e., using rules to assign setups to the workers instead of testing every possibility) does not guarantee the optimum of the problem, as many schedules are omitted. However, the solution space using these representations can be strongly reduced, which could be exploited by approximate algorithms to find solutions close to the optimum. To analyse the efficiency of each decoding procedure, we perform a complete enumeration by evaluating each solution in the corresponding solution space (composed of the n! sequences of jobs), whose computational results are shown in Section 7. Once all solutions are evaluated using a specific decoding procedure, the best objective function value found can be obtained.
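The complete enumeration described above is straightforward to sketch: iterate over all n! permutations and keep the best value under a fixed decoding procedure. The decoder below is a placeholder argument; in the paper it would be one of the twelve decoding procedures.

```python
from itertools import permutations

def enumerate_best(n, decode):
    """Complete enumeration over the n! job sequences.

    `decode` maps a sequence (tuple of job indices) to its makespan under
    a fixed decoding procedure; returns the best sequence and its value."""
    best_seq, best_val = None, float("inf")
    for seq in permutations(range(n)):
        val = decode(seq)
        if val < best_val:
            best_seq, best_val = seq, val
    return best_seq, best_val
```

Since the cost grows factorially, this is only viable on the small instances used in Section 7.3.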
7. Computational Results
In this section, we present the computational evaluations carried out in our study. A total of five experiments have been performed: first, we compare the performance of the proposed MILP models in Section 7.2; second, the proposed decoding procedures are compared in Section 7.3 by analysing all their complete solutions (complete enumeration procedure); third, the efficiency of each proposed dispatching rule is tested in Section 7.4; fourth, the proposed constructive heuristics are analysed in Section 7.5; then, the iterated greedy algorithm is calibrated and its performance is evaluated in Section 7.6. To perform all these experiments, we generate three sets of benchmarks, explained in Section 7.1. All the procedures tested have been compared under the same computational conditions: on the same Intel Core i7-3770 with 3.4 GHz and 16 GB RAM; under the same programming languages (C# and Gurobi 9.5.0); and implemented by the same person using the same common functions and libraries. Finally, a sensitivity analysis is performed in Section 7.7 to study the impact of each factor on the problem.
7.1. Benchmarks
In this section, we describe the three sets of instances generated to test the performance of the proposals. The first set is specifically constructed to compare the exact methods, i.e., the proposed MILP models (see Section 3) and the complete enumeration for each of the proposed decoding procedures (see Section 7.3). The second set is constructed to compare the approximate methods, i.e., the proposed dispatching rules (see Section 5.1) and the proposed NEHV (see Section 5.2), and to evaluate the performance of the proposed IGV (see Section 6). Finally, a last benchmark is generated to calibrate IGV.
First set, for the comparison of exact methods: as the complete enumeration procedure has a high computational requirement (due to the evaluation of all solutions), this set is composed of 540 small-sized instances, with processing and setup times following uniform distributions ([14,91,92]). An additional testbed parameter ranges over several levels ([14]). The other parameters (number of jobs, machines, and workers) also take several levels, and five instances are constructed for each combination of the parameters.
Second set, for the comparison of approximate methods: this is a set of medium- and large-sized instances. The benchmark is generated following a procedure similar to that of the first set; thereby, processing and setup times follow the same types of uniform distributions. Regarding the parameters of the testbed, five instances are generated for each combination of the parameter levels.
Third set, for the calibration of the iterated greedy algorithm: similarly, we generate two instances for each combination of the same parameter levels.
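A testbed generator of this kind can be sketched in a few lines; note that the exact distribution bounds below (processing times in U[1,99], setup times in U[1,smax]) are assumptions for illustration, as the paper's precise ranges are given in the cited testbed descriptions.

```python
import random

def generate_instance(n, m, smax, seed=0):
    """Sketch of a benchmark instance generator: processing times drawn
    uniformly from [1, 99] and sequence-dependent setup times from
    [1, smax] (both ranges are illustrative assumptions).

    Returns p[i][j] (processing times) and s[i][l][j] (setup time of job
    j on machine i when it follows job l)."""
    rng = random.Random(seed)
    p = [[rng.randint(1, 99) for _ in range(n)] for _ in range(m)]
    s = [[[rng.randint(1, smax) for _ in range(n)] for _ in range(n)]
         for _ in range(m)]
    return p, s
```

Fixing the seed per instance makes the benchmark reproducible across the compared methods.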
7.2. Computational Evaluation: Comparison of MILP Models
In this section, we test the performance of the MILP models proposed in Section 3. The proposed models are solved using Gurobi 9.5.0 with 500 seconds as the stopping criterion for each instance of the first benchmark set. The models are then compared in terms of solution quality by means of the Relative Percentage Deviation (RPD) indicator (see Equation (37)). The RPD measures (as a percentage) the difference between the objective function value found by procedure p in instance i and the best solution found for this instance. In addition, to compare the computational effort required by the MILP models, we use the average CPU time. Both indicators are summarised in Table 2, grouped by n, m, r, and the remaining testbed parameter. Finally, in Table 3, we also present the number of optimal solutions, feasible solutions, and cases with no solution found.
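The RPD indicator is easy to state in code; the definition below assumes Equation (37) takes the usual form (percentage gap with respect to the best-known value), which is consistent with the textual description above.

```python
def rpd(of_value, best_value):
    """Relative Percentage Deviation (cf. Equation (37)): percentage gap of
    procedure p's objective value on an instance with respect to the best
    value found for that instance; 0 means the best-known solution."""
    return 100.0 * (of_value - best_value) / best_value
```

Averaging this value over all instances of a benchmark yields the ARPD figures reported in the tables.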
According to these computational results, we can observe that the best performance is achieved by Model 1. This model clearly outperforms the other MILP models in terms of average RPD (denoted ARPD) and CPU times. In this regard, Model 1 has an ARPD value of 0.01 (over the whole benchmark), while Models 2 and 3 obtain 0.14 and 0.56, respectively. This difference is statistically significant, with a p-value of 0.004 using a non-parametric Mann-Whitney test between Models 1 and 2. With respect to average CPU times, we can observe the same trend: Model 1 requires a lower average CPU time (61.61 seconds) as compared to Model 2 (84.78) and Model 3 (214.48). Similarly, Models 1 and 2 clearly outperform Model 3 in terms of the number of optimal solutions, both finding a total of 487 optimal solutions versus 332 by Model 3.
7.3. Computational Evaluation: Complete Enumeration Applying Different Decoding Procedures
In this section, using the same set of small instances, we compare the decoding procedures proposed in Section 4. Each procedure is embedded in a complete enumeration procedure and the best solution found is compared using the RPD indicator (see Equation (37), where, again, the reference is the best solution found for instance i, i.e., considering both the MILP models and the complete enumeration procedures). The computational results are shown in Table 4. The best ARPD (2.32) is found by one of the proposed procedures. In its family, the tie-breaking mechanisms have a high influence, reducing the ARPD value from 3.76 to 2.32, while these mechanisms are not as relevant in the other family, where the ARPD only ranges from 7.65 to 7.76. Despite the excellent performance of the best procedure, its difference with respect to the second-best proposal is not statistically significant according to the non-parametric Mann-Whitney test (p-value equal to 0.221). However, with the exception of the two procedures whose ARPDs are similar (2.64 and 2.65, respectively), the difference is statistically significant with respect to all other decoding procedures (in this regard, p-values of 0.000 are found in the corresponding Mann-Whitney tests).
7.4. Computational Evaluation: Dispatching Rules
In this section, we compare the dispatching rules generated following the procedure described in Section 5.1. Using this procedure, we construct 896 (potentially) different sequences of jobs for each instance, based on the specific indicator, measure, and sorting criterion applied. To generate a schedule (and consequently to obtain an objective function value), we apply the best decoding procedure found in the previous section to each proposed dispatching rule. Then, each dispatching rule is tested on the second benchmark set and its performance is evaluated by the RPD indicator (see Equation (37)). In the same way, the reference is the best solution found for instance i, considering both the dispatching rules and the NEHV variants. The computational results are shown in Table 5, Table 6 and Table 7. The ARPD values of the proposed dispatching rules all lie between 23.97 and 28.75. The best results are found by dispatching rules using the WSUM indicator, the C sorting criterion, and any measure that considers the processing times of the operations (i.e., P, PS, PMS, PmS). Thereby, the ARPD values of this group of dispatching rules are 24.01, 23.97, 24.00, and 24.10 (for {P,WSUM,C}, {PS,WSUM,C}, {PMS,WSUM,C}, and {PmS,WSUM,C}, respectively). In fact, there is a statistically significant difference between this group of rules and {P,SUM,C} (the next best rule, with an ARPD of 25.28), with a p-value of 0.000 (using the non-parametric Mann-Whitney test).
7.5. Computational Evaluation: NEHV Variants
In this section, we analyse the performance of the proposed NEHV variants and compare it with the best proposals in the related literature. These 896 proposed heuristics are tested in the set of instances
. The computational results are shown in
Table 8,
Table 9 and
Table 10. In addition, computational results comparing the best proposal with the most promising NEH variants in the literature are shown in
Table 11 (RA, NM, KK1, KK2, AD, and
are the NEH heuristics that apply the initial order proposed by [
74,
75,
76,
77,
78,
79], respectively, while NEH represents the traditional NEH ordering the jobs in non-increasing sum of processing times). As in the previous section, the quality of the solution of the proposals is evaluated using the RPD indicator. In this case, the ARPDs of the proposals range between 2.58 and 3.30. Despite this difference, there are several variants whose ARPD is very similar, close to the best-obtained value. In this regard, the best value is found using the {PMS,ABS,D} rule as an initial sequence (with an ARPD equal to 2.58), while, e.g., the {PS,WABS,H}, {P,SRA,H}, {P,WSRS,H}, {P,WABS,HIH}, and {PS,ABS,D} found an ARPD value equal to 2.61. In this case, no statistically significant differences (with the Mann-Whitney test) were found when comparing { PMS, ABS, D} with any of the previous variants. However, the proposed NEHV({PMS,ABS,D} clearly outperforms every previous proposal in the literature. In this regard, we obtain
p-values equal to 0.031 and 0.004 (using the non-parametric Mann-Whitney test) comparing our best proposal with RA and with the traditional NEH algorithm, whose ARPD values are 2.73 and 2.82, respectively. Finally, in
Table 4 we report the ARPD values obtained by NEHV({PMS,ABS,D}) on the small-sized instances,
. The global ARPD obtained is 6.73, which outperforms the best results obtained by several decoding procedures. Note that the best ARPD achievable using the
decoding procedure is 2.32.
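All NEHV variants share the skeleton of the classical NEH heuristic: sort the jobs with a dispatching rule, then insert each job at the position of the partial sequence that minimises the makespan. A self-contained sketch for the standard permutation flow shop follows; note that it deliberately omits the server constraints and decoding procedures of the problem under study:

```python
def makespan(seq, p):
    """Makespan of a job sequence in a permutation flow shop.

    p[j][k] is the processing time of job j on machine k.
    """
    m = len(p[0])
    c = [0] * m  # completion time of the last scheduled job on each machine
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            # an operation starts when both the machine and the job are free
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def neh(p):
    """Classical NEH: order jobs by non-increasing sum of processing
    times, then insert each job in its best position of the partial
    sequence. NEHV variants replace this initial order by a
    dispatching rule such as {PMS,ABS,D}."""
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = [order[0]]
    for j in order[1:]:
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq, makespan(seq, p)
```

Each insertion step evaluates every position of the partial sequence, which is what makes the choice of the initial order (the first phase modified by the NEHV variants) so influential on the final makespan.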
7.6. Computational Evaluation: IGV
In this section, we analyse the performance of the proposed iterated greedy algorithm, IGV. To do so, we first compare it with both the MILP models and the complete enumeration procedures on small-sized instances (). Second, we compare our proposal with some of the most promising iterated greedy algorithms from the related literature on medium-large-sized instances (). For the latter comparison, the following metaheuristics are re-implemented:
IGP: The iterated greedy algorithm proposed by [
93] for the mixed no-idle permutation flow shop scheduling problem and makespan minimisation.
IGF: The iterated greedy algorithm proposed by [
94] for the classical permutation flow shop scheduling problem with total tardiness minimisation.
IGL: The iterated greedy algorithm proposed by [
95] for the distributed permutation flow shop with makespan minimisation.
IGH: The iterated greedy algorithm proposed by [
96] for the distributed assembly permutation flow shop scheduling problem with sequence-dependent setup times.
IGD: The iterated greedy algorithm proposed by [
97] for the classical permutation flow shop scheduling problem with makespan minimisation.
IGV: The proposed iterated greedy algorithm. Regarding the parameters of the algorithm, we calibrate them based on [
97]. More specifically, we vary parameter
d between 3 and 8, setting T to 0.4. This parameter was not found statistically significant according to the non-parametric Kruskal-Wallis test (
p-value equal to 0.609). A similar result is found for parameter
T when we vary it in
, setting
d to 4. In this case, we obtain a
p-value equal to 0.776. In
Figure 4, we show the 95% confidence intervals obtained in both calibrations. Since there are no statistically significant differences in
d and
T, and the differences between levels are very narrow, we use
and
as applied in the traditional algorithm by [
84].
To adapt these metaheuristics to the problem under consideration, we apply the best decoding procedure (i.e., ) and initialise them with the best NEH algorithm (i.e., NEHV({PMS,ABS,D})). As the stopping criterion, we use , which depends on the size of the problem.
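All of the iterated greedy algorithms above follow the destruction-reconstruction scheme popularised by [97]: remove d jobs at random, reinsert them greedily in their best positions, and accept worse solutions with a simulated-annealing-like criterion governed by T. A generic sketch follows; the objective function is a placeholder parameter, and the temperature is used directly rather than scaled by the average processing time as in [97]:

```python
import math
import random

def iterated_greedy(seq, cost, d=4, T=0.4, iters=1000, seed=0):
    """Generic destruction-reconstruction iterated greedy.

    seq  : initial sequence (e.g., produced by an NEH heuristic)
    cost : function mapping a sequence to its objective value
    d, T : destruction size and acceptance temperature
    """
    rng = random.Random(seed)
    best = cur = list(seq)
    for _ in range(iters):
        # destruction: remove d jobs chosen at random
        cand = list(cur)
        removed = [cand.pop(rng.randrange(len(cand))) for _ in range(d)]
        # reconstruction: NEH-style best-position reinsertion
        for j in removed:
            cand = min((cand[:i] + [j] + cand[i:]
                        for i in range(len(cand) + 1)), key=cost)
        # acceptance: always if better, otherwise with SA-like probability
        if cost(cand) < cost(cur) or rng.random() < math.exp(
                -(cost(cand) - cost(cur)) / max(T, 1e-9)):
            cur = cand
        if cost(cur) < cost(best):
            best = list(cur)
    return best
```

A local search step between reconstruction and acceptance (as in IGV and most of the re-implemented variants) is omitted here for brevity; it would be applied to `cand` before the acceptance test.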
The computational results obtained by IGV on the small-sized instances are shown in
Table 4, while the comparison on medium-large sized instance is shown in
Table 12. Regarding the former, the ARPD obtained by IGV is 1.68, compared to the 0.01 obtained by Model 1 (the best exact method), while requiring an average computational time of only 1.08 seconds instead of the 61.61 seconds required by Model 1. Note that the proposed iterated greedy algorithm explores the space of solutions of
, whose best achievable ARPD is 2.32, and incorporates a local search that tries to reach the global optimum of the problem. This intensive local search helps the algorithm to escape local optima efficiently. The computational results also demonstrate the efficiency of the proposed mechanism, as IGV clearly outperforms the best solutions obtained by any of the proposed complete enumeration procedures. Regarding the comparison with other metaheuristics from the related literature, we again evaluate performance using the RPD indicator (see Equation (37)), with
as the best solution found in instance
i by any of the iterated greedy algorithms. We also include in the comparison the best obtained NEH algorithm. The following conclusions can be derived from the results:
IGV obtains an ARPD of 0.49, clearly outperforming all other metaheuristics. This conclusion is statistically confirmed by a non-parametric Mann-Whitney test comparing IGV and IGD (which obtains an ARPD of 0.61), with a statistically significant difference at a p-value of 0.003.
The constructive heuristic (NEHV({PMS,ABS,D})) performs increasingly well relative to the metaheuristics as n increases. This fact can be explained by the hardness of the problem: when n increases, the local search included in the metaheuristics requires much more computational time, and the number of global iterations decreases compared to small instances.
Despite the excellent performance achieved by some of the metaheuristics in related scheduling problems, their effectiveness on the problem under consideration clearly decreases; see, e.g., IGL and IGH, whose ARPD values are 1.05 and 0.84, respectively.
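Throughout these comparisons, RPD and ARPD quantify the deviation of each method from the best-known solution per instance. A minimal sketch, assuming Equation (37) takes the usual form 100 (C − C*)/C*:

```python
def rpd(c, c_best):
    """Relative percentage deviation of makespan c from the best known
    makespan c_best for the same instance (assumed form of Eq. (37))."""
    return 100.0 * (c - c_best) / c_best

def arpd(makespans, best_makespans):
    """Average RPD of a method over a set of instances."""
    values = [rpd(c, cb) for c, cb in zip(makespans, best_makespans)]
    return sum(values) / len(values)
```

For example, a method whose makespans on two instances are 10% and 5% above the best-known values has an ARPD of 7.5.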
7.7. Sensitivity Analysis
In this section, we conduct a sensitivity analysis to study the influence of the parameters of the problem (that is,
n,
m,
r, and
). We analyse this influence both in the original problem under study by using the best MILP model (i.e., Model 1), and in the reduced problem obtained by applying the best decoding procedure (i.e.,
, whose ‘optimal’ solution can be reached by complete enumeration). To this end, we conduct two analyses of variance (ANOVA), using the previous parameters as independent variables and analysing their influence on the objective function of the problem (i.e., the makespan).
Table 13 reports the results with two-way interactions considering Model 1. We can observe that all the parameters significantly influence the makespan (with
p-value equal to 0.000 in each case). In this regard, the makespan clearly increases when
n,
m, and
increase or when
r decreases. Regarding the second-order interactions, only
is not statistically significant (with
p-value equal to 0.85); i.e., the effect on the makespan of the setup time and server parameters (
r and
) changes with each other parameter. A similar trend can be found by analysing the space of solutions obtained by
in both the main factors and the second-order interactions. The results obtained using
are shown in
Table 14.
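To make the kind of main-effect test underlying these ANOVAs concrete, the sketch below computes a one-way F statistic for a single parameter in plain Python. This is a simplification of the full two-way models with interactions reported in Tables 13 and 14 (which would normally be run in a statistical package), intended only to illustrate the mechanics:

```python
def f_statistic(groups):
    """One-way ANOVA F statistic for observations grouped by the
    levels of one factor (e.g., makespans grouped by values of n).

    groups: list of lists, one list of makespan values per level.
    """
    k = len(groups)                      # number of factor levels
    n = sum(len(g) for g in groups)      # total number of observations
    grand = sum(sum(g) for g in groups) / n
    # between-group sum of squares: variation explained by the factor
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # within-group sum of squares: residual variation
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F value (relative to the F distribution with k − 1 and n − k degrees of freedom) corresponds to the small p-values reported for n, m, and r in the tables.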
8. Conclusions
This paper tackled the permutation flow shop scheduling problem with multiple servers or human resources for the first time. The purpose of our study is twofold: first, to incorporate the multiple-server constraint into one of the most relevant real-world-based layouts; and, second, to analyse and propose efficient methods to solve this problem. To this end, the work developed in this paper can be summarised as follows:
We formulated three different MILP models to exactly solve the proposed problem. The formulations were based on three different efficient families of models from the literature. The computational results identified the proposed Model 1 as the best exact formulation to solve the problem under consideration.
We proposed 12 different decoding procedures to explore the solution spaces of future approximate algorithms. These procedures were integrated into a complete enumeration algorithm to analyse their efficiency. The computational results demonstrated that it is more efficient to always select the operations that can start earlier (i.e., family). Among them, the best result was obtained by breaking ties according to the operation with the lowest setup time (i.e., ).
We analysed the efficiency of dispatching rules on the problem under consideration by combining several different measures, indicators, and sorting criteria. A total of 896 different dispatching rules were proposed and compared. The best values were found by ordering the jobs in non-decreasing order of the processing times (with or without setup times) and giving more importance to their values on the first machines than on the last ones (the WSUM indicator, which is based on the idea of the Johnson algorithm), i.e., dispatching rules {P,WSUM,C}, {PS,WSUM,C}, {PMS,WSUM,C}, and {PmS,WSUM,C}.
We proposed an efficient constructive heuristic to find fast solutions for the problem under consideration. The proposed heuristic is based on the classical NEH algorithm, which we modified by testing all of the previous dispatching rules in its first phase. The computational results demonstrated that the proposed NEHV({PMS,ABS,D}) is the best heuristic for the problem, outperforming both the classical NEH algorithm and the related NEH-based heuristics.
Finally, we developed a new iterated greedy metaheuristic to obtain near-optimal solutions in medium-large-sized instances. The proposed IGV explores different solution spaces by changing the decoding procedure between phases. In doing so, the metaheuristic outperforms the most promising iterated greedy algorithms from the related literature.
With this work, we aim to introduce the problem to future researchers, providing efficient fast heuristics, decoding procedures, and MILP models either to initialise and improve the space of solutions of future proposals (typically metaheuristics) or to be embedded in more advanced approaches (e.g., matheuristics). Furthermore, we provide practitioners with efficient methods (dispatching rules and fast constructive heuristics) that can be easily adapted and implemented in related real manufacturing scenarios, since they are not complex to implement and are appropriate for quick decision-making.
Regarding future research lines, a number of open issues stem from this work. First, further advances should come from the development of more time-consuming approaches to the problem under consideration, which could be compared with the proposed iterated greedy algorithm, initialised with the proposed methods, or use modifications of our proposals in their intermediate phases (e.g., matheuristics relaxing assignment or scheduling constraints). Second, the technological advances recently brought about by Industry 4.0 offer decision makers the ability to integrate information in real time. Preliminary studies have been conducted in this direction; however, we recommend exploring this research line further to improve the efficiency of servers and machines in the problem under consideration. Finally, although the methods proposed in this paper have been developed to be easily adapted to real manufacturing scenarios, future studies could validate and implement such methods either in a real shop or considering additional real-world constraints (such as, e.g., deteriorating jobs, green scheduling, or learning effects).