Article

Dynamic Self-Learning Artificial Bee Colony Optimization Algorithm for Flexible Job-Shop Scheduling Problem with Job Insertion

1 College of Mechanical and Electronic Engineering, Shandong Agricultural University, Tai’an 271018, China
2 School of Mechatronics Engineering, Harbin Institute of Technology, Harbin 150000, China
* Author to whom correspondence should be addressed.
Processes 2022, 10(3), 571; https://doi.org/10.3390/pr10030571
Submission received: 21 January 2022 / Revised: 27 February 2022 / Accepted: 11 March 2022 / Published: 15 March 2022

Abstract

To solve the problem of inserting new jobs into flexible job-shops, this paper proposes a dynamic self-learning artificial bee colony (DSLABC) optimization algorithm for the dynamic flexible job-shop scheduling problem (DFJSP). Through a reasonable arrangement of the processing sequence of the jobs and of the correspondence between operations and machines, the makespan can be shortened, and the economic benefit of the job-shop and the utilization rate of the processing machines can be improved. Firstly, the Q-learning algorithm and the traditional artificial bee colony (ABC) algorithm are combined to form the self-learning artificial bee colony (SLABC) algorithm. Using the learning ability of Q-learning, the update dimension of each iteration of the ABC algorithm can be adjusted dynamically, which improves the convergence accuracy of the ABC algorithm. Secondly, the specific method of dynamic scheduling is determined, and the DSLABC algorithm is proposed: when a new job is inserted, the new job and the operations that have not yet started processing are rescheduled. Finally, by solving the Brandimarte instances, it is shown that the convergence accuracy of the SLABC algorithm is higher than that of other optimization algorithms, and the effectiveness of the DSLABC algorithm is demonstrated by solving a specific instance with a newly inserted job.

1. Introduction

The flexible job-shop scheduling problem (FJSP) is a non-deterministic polynomial (NP)-hard problem and an extension of the classic JSP [1]. In FJSP, the operations of a job can be processed on multiple machines; that is, the correspondence between operations and machines is not fixed in advance [2]. With the diversification of user needs, today's flexible job-shops face frequent insertion of new jobs. When a new job is inserted at a certain moment, to reduce the makespan it is necessary to form a new scheduling scheme dynamically by reasonably arranging the processing sequence of the new and old jobs, as well as the correspondence between operations and machines, so as to improve economic efficiency and machine utilization. This is the DFJSP studied in this paper, a further extension of FJSP. As a combinatorial optimization problem, DFJSP can be solved by an efficient meta-heuristic. In addition, reinforcement learning (RL) and other learning-based algorithms can be used to tune the meta-heuristic, improving the accuracy of flexible job-shop scheduling and helping to obtain the optimal scheduling scheme.
With the continuous development of computer technology, many intelligent algorithms have been applied to combinatorial optimization problems, such as the particle swarm optimization (PSO) algorithm [3,4], the genetic algorithm (GA) [5,6], the artificial fish swarm algorithm (AFSA) [7], Bayesian algorithms [8], the ant colony optimization (ACO) algorithm [9], the gray wolf optimization (GWO) algorithm [10], the lion swarm algorithm [11], and the ABC algorithm [12,13]. Sabharwal et al. [14] proposed an improved GA that generates the initial population probabilistically, avoiding errors caused by interactions between input parameters and solving combinatorial optimization problems well. Wang et al. [15] improved the ACO algorithm by changing its pheromone update mechanism; the modified algorithm shortens the makespan of FJSP, overcomes ACO's tendency to fall into local optima, and improves computational efficiency. Yao et al. [16] improved the population state of the GWO algorithm and proposed the IGWO algorithm through a position-based learning strategy, which enhances the global search ability of GWO. Wang et al. [17] proposed an adaptive multi-objective PSO algorithm with cost and tardiness as objective functions; the algorithm adopts an elite strategy and a small-probability mutation mechanism to avoid premature convergence and improve convergence accuracy. Ge et al. [7] improved the AFSA with the goal of minimizing the makespan, increasing population diversity by adjusting the arrangement mechanism of machines and operations; the algorithm strengthens its local and global search through an attracting behavior and a path search strategy, improving its performance on FJSP. Park et al. [18] applied the same crossover strategy to the categorical and sequential parts of the chromosome and proposed a unified GA, which simplifies the structure of GA and effectively explores the search space of FJSP. It can be seen from the above literature that the key parameters of most algorithms are set in advance, and suitable adjustment methods are difficult to determine [19]. In addition, some improved algorithms have too many input parameters, and their stability needs further research and verification [20].
The artificial bee colony algorithm was proposed by Karaboga et al. [21] in 2005. Owing to its strong stability and small number of parameters, the ABC algorithm has received extensive attention from scholars [20]. For more than ten years, researchers have applied the ABC algorithm and its variants to FJSP and achieved practical results. Li et al. [22] proposed an improved ABC algorithm with a two-dimensional vector coding method, targeting minimum energy consumption and the shortest makespan while accounting for important factors such as job preparation time; the method improves the local and global search capabilities of the ABC algorithm. Pan et al. [23] developed an adaptive strategy targeting the shortest total earliness and tardiness, which improves population diversity and strengthens local reinforcement, and solved the job-shop scheduling problem with a discrete artificial bee colony (DABC) algorithm. Meng et al. [24] proposed a hybrid ABC algorithm that minimizes the total flowtime, balancing local and global search by dynamically adjusting the search range and increasing population diversity. Zhang et al. [25] improved the ABC algorithm for stochastic FJSP, screening solutions and reducing the computational load with a K-armed bandit model to obtain the solution with the shortest lateness. Zheng et al. [26] optimized the ABC population using chaos theory, a left-shift strategy, and a crossover operation with the goal of minimum makespan; the method accelerates convergence, improves global search, and solves the fuzzy FJSP. Gu [27] used an adaptive neighborhood search strategy and a greedy method to optimize the ABC population and retain the optimal solution, preventing its loss and solving the multi-objective low-carbon FJSP. However, in most of this research the parameters of the improved ABC algorithms are set in advance or kept fixed, rather than adjusted as the population state changes, which limits the attainable performance. The Q-learning algorithm, as a reinforcement learning algorithm, focuses on online learning and can maintain a balance between exploration and exploitation; it learns and updates its parameters by receiving rewards from the environment for its actions [28]. The Q-learning algorithm therefore makes it possible to adjust the update dimension of the ABC algorithm dynamically. In addition, given the frequent insertion of new jobs in real flexible job-shops, an algorithm for the dynamic scheduling problem of the job-shop is needed.
To address the facts that the update dimension of the ABC algorithm cannot be adjusted dynamically and that new jobs are frequently inserted into the flexible job-shop, this paper proposes the DSLABC algorithm with the goal of the shortest makespan. Firstly, to adjust the update dimension m dynamically, this paper combines the ABC algorithm with the Q-learning algorithm and proposes the SLABC algorithm. In each iteration of the ABC algorithm, the Q-learning algorithm uses its balance of exploration and exploitation to select the appropriate update dimension according to the population state, realizing dynamic adjustment of the update dimension and improving the convergence accuracy of the ABC algorithm. Secondly, this paper determines the specific method of dynamic scheduling and proposes the DSLABC algorithm. When a new job is inserted at a certain time, the DSLABC algorithm reschedules the new job together with the operations that have not started processing and dynamically generates a new scheduling scheme, which solves the rescheduling problem in the flexible job-shop and reduces the makespan. Finally, this paper compares the DSLABC algorithm with other meta-heuristics and demonstrates its convergence performance and advantages by solving 10 Brandimarte instances; the feasibility of the DSLABC algorithm for solving DFJSP is proved on a specific instance. In conclusion, the DSLABC algorithm proposed in this paper realizes dynamic adjustment of the update dimension, improves the convergence accuracy of the ABC algorithm, and successfully solves the DFJSP.
In this paper, Section 2 introduces the flexible job shop scheduling problem with new job insertion, including the mathematical representation, constraint conditions, and coding of the problem. Section 3 introduces the two basic algorithms needed to propose the DSLABC algorithm: ABC and Q-learning algorithms. Section 4 introduces the flow and specific settings of the DSLABC algorithm. Section 5 verifies the effectiveness and practicability of the DSLABC algorithm. Section 6 is the conclusion of this paper.

2. FJSP with New Job Insertion

2.1. FJSP

The operations of each job in the flexible job-shop can be processed on multiple machines. The purpose of this study is therefore to minimize the makespan through a reasonable arrangement of operations and machines [29,30,31]. The parameters of FJSP and their meanings are listed in Table 1. In the flexible job-shop, each job consists of multiple operations, and each operation can be processed on more than one machine. In this paper, by adjusting the correspondence between $O_{ij}$ and $M_k$, an optimal scheduling scheme is formed and the makespan is reduced. Equation (1) is the objective function used for solving FJSP in this paper.
This paper adopts four constraints for solving FJSP. Condition I: the processing time of any operation $O_{ij}$ is greater than 0. Condition II: the operations of the same job must be processed strictly in sequence. Condition III: any operation $O_{ij}$ can be processed on at least one machine. Condition IV: any machine can process at most one operation at any time. In addition, there is no ordering requirement between different jobs. The formulas in Table 2 are the mathematical representations of these four constraints.
$$\min C_{max} = \min\left(\max_{1 \le i \le n,\ 1 \le j \le l}\left(b_{ijk} + t_{ijk}\right)\right) \quad (1)$$
A suitable coding method can effectively solve the combinatorial optimization problem. This paper illustrates the coding method with a specific instance. Table 3 is an instance of FJSP with a scale of 3 × 3 (jobs × machines), and Figure 1 is the optimal scheduling scheme for Table 3. Figure 2 shows the coding of the scheduling scheme in Figure 1, which is divided into a corresponding operation sequence and machine sequence. The second "2" of the operation sequence denotes the second operation of job 2, and the corresponding "2" of the machine sequence means that $O_{22}$ is processed on machine $M_2$. The processing order on machine $M_1$ is $O_{21} \to O_{11} \to O_{12} \to O_{33}$, on machine $M_2$ it is $O_{31} \to O_{22} \to O_{13}$, and on machine $M_3$ it is $O_{32} \to O_{23}$.
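To make the two-vector encoding concrete, the following minimal Python sketch decodes an operation sequence and machine sequence for the Table 3 instance into a semi-active schedule and computes the makespan. The particular sequences below are one encoding consistent with the per-machine processing orders listed above; they are assumed purely for illustration and are not claimed to reproduce Figure 2 exactly.

```python
# times[job][op][machine] holds the processing times of the 3x3 instance
# in Table 3 (missing keys = operation cannot run on that machine).
times = {
    1: [{1: 2, 2: 3}, {1: 3, 3: 4}, {1: 4, 2: 3, 3: 5}],  # J1: O11, O12, O13
    2: [{1: 2}, {1: 5, 2: 3}, {2: 4, 3: 3}],              # J2: O21, O22, O23
    3: [{1: 3, 2: 2}, {2: 4, 3: 2}, {1: 2, 3: 3}],        # J3: O31, O32, O33
}

op_seq = [2, 1, 3, 2, 1, 3, 1, 2, 3]    # k-th occurrence of job i -> operation O_ik
mach_seq = [1, 1, 2, 2, 1, 3, 2, 3, 1]  # machine assigned to the matching operation

def decode(op_seq, mach_seq):
    """Greedy semi-active decoding: each operation starts as soon as both
    its machine and its job predecessor are free; returns the makespan."""
    job_ready, mach_ready, op_count = {}, {}, {}
    for job, mach in zip(op_seq, mach_seq):
        k = op_count.get(job, 0)                 # index of this job's next operation
        t = times[job][k][mach]                  # processing time t_ijk
        begin = max(job_ready.get(job, 0), mach_ready.get(mach, 0))  # b_ijk
        job_ready[job] = mach_ready[mach] = begin + t
        op_count[job] = k + 1
    return max(job_ready.values())               # C_max over all jobs

print(decode(op_seq, mach_seq))
```

Under this particular encoding the decoder returns a makespan of 10.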

2.2. DFJSP

In real job-shops, unexpected insertions of new jobs occur [32,33]. When a new job is inserted at a certain moment, reasonably adjusting the processing sequence of the new and old jobs, and the correspondence between operations and machines, can effectively reduce the makespan and improve the efficiency of the flexible job-shop. Based on the descriptions of three methods for dynamic scheduling of job-shop tasks [20,34,35], the dynamic scheduling method adopted in this paper is determined. Table 3 is an instance of FJSP, and Table 4 shows the newly inserted job.
Method I: After the original scheduling scheme is completed and the machines are idle, that is, when the machines have no operations left to process, the operations of the new job are processed after the original scheduling scheme, as shown in Figure 3. Method I simply schedules twice in a row, and the makespan is long.
Method II: The operations of the new job are inserted into the original scheduling scheme, and the operations that cannot be inserted are scheduled according to method I. Method II relies too heavily on the original scheduling scheme, which introduces large randomness and low stability.
Method III: Reschedule the newly inserted job together with the operations that have not started processing. According to Equation (2), the operations in the original scheduling scheme that have already begun before the insertion time are identified and kept, where $t_{insert}$ represents the insertion time of the new job; the remaining operations are rescheduled with the new job. Figure 4 shows the new scheme obtained by rescheduling when a new job is inserted. Figure 5 is the coding corresponding to Figure 4, and the red dashed box marks the operations after rescheduling. Compared with methods I and II, method III completes the processing of all jobs in the shortest time and has higher stability. Therefore, this paper uses method III to study dynamic scheduling.
$$b_{ijk} < t_{insert} \quad (2)$$
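A small sketch of the screening rule in Equation (2), under the assumption that the original schedule is available as a map from operations to beginning times (the sample values here are hypothetical): operations that began before the insertion time keep their original assignment, and the rest are collected for rescheduling.

```python
def split_operations(begin_times, t_insert):
    """Partition operations by Equation (2): b_ijk < t_insert keeps the
    original assignment; everything else goes to the rescheduling pool."""
    kept = {op: b for op, b in begin_times.items() if b < t_insert}
    to_reschedule = [op for op, b in begin_times.items() if b >= t_insert]
    return kept, to_reschedule

# (job, operation) -> beginning time in the original schedule (hypothetical values)
begin_times = {(1, 1): 0, (1, 2): 4, (2, 1): 0, (2, 2): 6, (3, 1): 2, (3, 2): 9}
kept, pending = split_operations(begin_times, t_insert=5)
print(pending)  # [(2, 2), (3, 2)] -> rescheduled together with the new job
```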

3. Basic Algorithm

This paper combines the ABC algorithm with the Q-learning algorithm and proposes the SLABC algorithm. Then, the rescheduling problem with new job insertion is studied, and the DSLABC algorithm is proposed. The following is an introduction to the two basic algorithms.

3.1. ABC Algorithm

As an optimization algorithm, the ABC algorithm effectively solves constrained optimization problems [36,37,38]. The ABC algorithm consists of nectar sources, scouts, employed bees, and onlooker bees. Table 5 lists the meaning of each parameter in the ABC algorithm. The higher the fitness of a nectar source, the higher the quality of the corresponding solution. The principle of the ABC algorithm mainly includes four stages [39,40]:
(1) The population initialization stage: the initial nectar sources are generated through Equation (3).
$$X_{id} = X_d^{min} + \phi_1\left(X_d^{max} - X_d^{min}\right) \quad (3)$$
(2) The stage of employed bees: the employed bees generate a new nectar source $V_{ij}$ in the neighborhood of $X_{ij}$ according to Equation (4), and the nectar source with greater fitness is retained by comparing $X_{ij}$ with $V_{ij}$, thereby optimizing the population.
$$V_{ij} = X_{ij} + \phi_2\left(X_{ij} - X_{kj}\right) \quad (4)$$
(3) The stage of onlooker bees: the nectar sources generated in the employed-bee stage are selected through Equation (5) to exclude nectar sources with low fitness.
$$P_i = \frac{F_i}{\sum_{i=1}^{SN} F_i} \quad (5)$$
(4) The stage of scouts: if a nectar source cannot be improved further, new nectar sources are searched for randomly via Equation (3) to prevent the algorithm from falling into a local optimum.
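The following Python sketch runs one ABC cycle (Equations (3)-(5)) on a toy continuous minimization problem. The sphere objective, the bounds, and all parameter values are illustrative only; $\phi_2$ is drawn from [-1, 1] as in Karaboga's original formulation.

```python
import random

D, SN, LIMIT = 4, 20, 10       # dimensions, nectar sources, stagnation limit
LOW, HIGH = -5.0, 5.0          # per-dimension bounds (illustrative)

def fitness(x):
    # Higher fitness = better nectar source; toy sphere objective.
    return 1.0 / (1.0 + sum(v * v for v in x))

def new_source():
    # Equation (3): a random point within the bounds of each dimension.
    return [LOW + random.random() * (HIGH - LOW) for _ in range(D)]

sources = [new_source() for _ in range(SN)]
trials = [0] * SN

def neighbor(i, m=1):
    # Equation (4) applied to m randomly chosen dimensions.
    k = random.choice([s for s in range(SN) if s != i])
    v = sources[i][:]
    for j in random.sample(range(D), m):
        phi2 = random.uniform(-1.0, 1.0)
        v[j] = min(max(v[j] + phi2 * (sources[i][j] - sources[k][j]), LOW), HIGH)
    return v

def greedy_update(i, v):
    # Keep the better of X_i and V_i; count stagnation otherwise.
    if fitness(v) > fitness(sources[i]):
        sources[i], trials[i] = v, 0
    else:
        trials[i] += 1

for i in range(SN):                       # employed-bee stage
    greedy_update(i, neighbor(i))

for _ in range(SN):                       # onlooker-bee stage, Equation (5)
    weights = [fitness(s) for s in sources]
    i = random.choices(range(SN), weights=weights)[0]
    greedy_update(i, neighbor(i))

for i in range(SN):                       # scout stage: abandon stagnant sources
    if trials[i] > LIMIT:
        sources[i], trials[i] = new_source(), 0
```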

3.2. Q-Learning Algorithm

As an RL algorithm, the Q-learning algorithm has a strong real-time learning ability. Its goal is to learn a strategy through interaction with the environment so as to obtain the maximum long-term reward [41,42]. Figure 6 shows the principle of the Q-learning algorithm, in which the agent and the environment are the main components. First, the environment passes its state $s_t$ to the agent. The agent then selects the corresponding action $a_t$ according to $s_t$, and the environment generates the corresponding reward $r_{t+1}$. The agent next selects $a_{t+1}$ according to the reward $r_{t+1}$ and the state $s_{t+1}$ of the environment at the next moment. In this way, Q-learning eventually learns to choose the most appropriate action for the state at any given moment.
Table 6 is the initial Q value table, that is, the table mapping states to actions. The initial Q value table is a zero matrix. With further learning, the Q value table is continually enriched and updated. The update function of the Q value is shown in Equation (6), where $\alpha$ represents the learning rate and $\gamma$ the discount rate [28].
$$Q(s_t, a_t) \leftarrow (1 - \alpha)\, Q(s_t, a_t) + \alpha\left(r_{t+1} + \gamma \max_{a} Q(s_{t+1}, a)\right) \quad (6)$$
In this formula, $Q(s_t, a_t)$ represents the Q value of performing action $a_t$ in state $s_t$, and $\max_a Q(s_{t+1}, a)$ represents the maximum expected Q value over the actions available in state $s_{t+1}$. In the early stage, the Q-learning algorithm learns and accumulates experience to enrich the Q value table. In the later stage, it executes the action corresponding to the maximum Q value of the current state, thereby achieving optimization.
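A toy sketch of the tabular update in Equation (6), combined with the $\varepsilon$-greedy selection described later in Section 4. The state and action spaces here are placeholders rather than the scheduling-specific design of Section 4; the parameter values anticipate Table 7.

```python
import random

ALPHA, GAMMA, EPSILON = 0.75, 0.2, 0.85   # values later fixed in Table 7

n_states, n_actions = 5, 3                 # placeholder sizes
Q = [[0.0] * n_actions for _ in range(n_states)]   # initial Q table is zero

def select_action(s):
    """Epsilon-greedy: exploit the Q table with probability EPSILON,
    otherwise explore a random action."""
    if random.random() < EPSILON:
        return max(range(n_actions), key=lambda a: Q[s][a])
    return random.randrange(n_actions)

def update(s, a, r, s_next):
    """Equation (6): Q(s,a) <- (1-alpha)*Q(s,a) + alpha*(r + gamma*max_a' Q(s',a'))."""
    Q[s][a] = (1 - ALPHA) * Q[s][a] + ALPHA * (r + GAMMA * max(Q[s_next]))
```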

4. DSLABC Algorithm

First, this section introduces the principle of combining ABC and Q-learning algorithms. The combination of the two algorithms can form the SLABC algorithm. Second, it introduces the specific design of the state, action, reward, and action selection strategy in the SLABC algorithm. Finally, the dynamic scheduling problem is studied when a new job is inserted, and the DSLABC algorithm is proposed.

4.1. The Combination of ABC and Q-Learning

The ABC algorithm is a widely used intelligent optimization algorithm. In each iteration, the overall fitness of the population is improved by updating m dimensions of each nectar source. In the traditional ABC algorithm, m is fixed or set in advance, which limits convergence accuracy. As a reinforcement learning algorithm, Q-learning can, after a certain amount of learning and accumulation, select the most suitable update dimension from the Q value table and pass it to the ABC algorithm, overcoming this shortcoming. Therefore, this paper applies the Q-learning algorithm to drive the ABC algorithm: by continuously learning and recording the dimension m used in each iteration, the value of m best suited to each iteration is found, speeding up convergence and improving the accuracy of the ABC algorithm. Figure 7 shows the combination model of the ABC and Q-learning algorithms. The Q-learning algorithm selects an action randomly or from the Q value table according to the state of the ABC algorithm, where the action is the value of m used in the current ABC iteration. After the ABC algorithm executes the action, a reward is generated; whether the reward is positive or negative depends on whether the population's fitness improved. Q-learning updates the Q value table according to the reward and finally forms an optimal correspondence between states and actions. The action selection strategy determines whether Q-learning selects an action randomly or from the Q value table. The state, action, reward, and action selection strategy are designed in detail below.
Figure 8 is a schematic diagram of the SLABC algorithm, the combination of ABC and Q-learning. The agent of the Q-learning algorithm selects an action $a_t$ (i.e., a value of m) according to the state $s_t$ of the ABC algorithm. The ABC algorithm then changes its state by updating m dimensions in the t-th iteration, producing the state $s_{t+1}$ at the next moment. Comparing $s_t$ with $s_{t+1}$ generates a positive or negative reward $r_t$, which updates the Q value table; the agent then chooses action $a_{t+1}$ according to $s_{t+1}$. Repeating these steps completes all iterations of the ABC algorithm.

4.2. The Design of State, Action, Reward, and Action Selection Strategy

The population state S of the ABC algorithm is determined by the fitness and diversity of the population, and the average nectar fitness reflects the overall quality of the population better than the optimal nectar fitness alone. Therefore, in this paper the average nectar fitness ($f_{avg}$), the optimal nectar fitness ($f_{max}$), and the population diversity (d) are used as the factors for computing the population state S. The three factors are weighted with coefficients 0.35, 0.3, and 0.35, respectively [42]. Equations (7)-(9) give the three factors, each normalized by the first-generation population, and Equation (10) is the calculation formula for the state S. The Q-learning algorithm selects the appropriate action a according to the value of S. The dimension of a nectar source is D; if the number m of dimensions updated per iteration is too large, the new nectar source differs too much from the original one. Therefore, in this paper m ∈ [1, D/10]; when the action is $a_1$, m = 1.
Whether the reward is positive or negative is determined by the optimal nectar fitness and the average nectar fitness, where the average nectar fitness represents the quality of the entire population. Equations (11) and (12) express whether the optimal and average nectar fitness of the t-th generation have improved relative to the (t-1)-th generation, denoted $r_{max}$ and $r_{avg}$, respectively. The corresponding weighting coefficients are set to 0.45 and 0.55 [42], and the reward is computed by Equation (13); its sign indicates whether the current action a (i.e., m) is beneficial to improving population fitness. Equation (14) is the action selection strategy of the SLABC algorithm, which introduces the $\varepsilon$-greedy strategy, where $x_{01}$ is a random number in the range [0, 1] and $\varepsilon$ is the greedy rate [43]. In the early stage of the ABC algorithm, that is, when $\varepsilon < x_{01}$, Q-learning selects actions randomly. Once the Q value table has accumulated a certain amount of learning, that is, when $\varepsilon \ge x_{01}$, Q-learning selects the action with the maximum Q value for the current state. As the algorithm runs, the Q value table becomes increasingly rich, the correspondence between states and actions grows, and the probability that the ABC algorithm obtains a suitable action in each iteration increases, which benefits convergence accuracy (a sketch of these computations follows Equation (14)).
$$f_{avg} = \frac{\sum_{i=1}^{SN} f(x_i^t)}{\sum_{i=1}^{SN} f(x_i^1)} \quad (7)$$

$$f_{max} = \frac{\max f(x_i^t)}{\max f(x_i^1)} \quad (8)$$

$$d = \frac{\sum_{i=1}^{SN} \left| f(x_i^t) - \frac{1}{SN}\sum_{i=1}^{SN} f(x_i^t) \right|}{\sum_{j=1}^{SN} \left| f(x_j^1) - \frac{1}{SN}\sum_{j=1}^{SN} f(x_j^1) \right|} \quad (9)$$

$$S = 0.35\, f_{avg} + 0.3\, f_{max} + 0.35\, d \quad (10)$$

$$r_{max} = \frac{\max f(x_i^t) - \max f(x_i^{t-1})}{\max f(x_i^{t-1})} \quad (11)$$

$$r_{avg} = \frac{\sum_{i=1}^{SN} f(x_i^t) - \sum_{i=1}^{SN} f(x_i^{t-1})}{\sum_{i=1}^{SN} f(x_i^{t-1})} \quad (12)$$

$$r = 0.45\, r_{max} + 0.55\, r_{avg} \quad (13)$$

$$\pi(s, a) = \begin{cases} \max_a Q(s, a), & \varepsilon \ge x_{01} \\ a \ \text{(randomly)}, & \varepsilon < x_{01} \end{cases} \quad (14)$$
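The following sketch implements the state and reward computations of Equations (7)-(13); the fitness lists are placeholders standing in for the nectar-source fitness values of the current, previous, and first generations. Equation (14) corresponds to the $\varepsilon$-greedy selection already sketched in Section 3.2.

```python
def state(fits_t, fits_1):
    """Population state S of Equation (10) from the fitness lists of the
    current generation (fits_t) and the first generation (fits_1)."""
    f_avg = sum(fits_t) / sum(fits_1)                        # Equation (7)
    f_max = max(fits_t) / max(fits_1)                        # Equation (8)
    def spread(f):                                           # one side of Equation (9)
        mean = sum(f) / len(f)
        return sum(abs(v - mean) for v in f)
    d = spread(fits_t) / spread(fits_1)                      # Equation (9)
    return 0.35 * f_avg + 0.3 * f_max + 0.35 * d             # Equation (10)

def reward(fits_t, fits_prev):
    """Reward r of Equation (13) from the current and previous generations."""
    r_max = (max(fits_t) - max(fits_prev)) / max(fits_prev)  # Equation (11)
    r_avg = (sum(fits_t) - sum(fits_prev)) / sum(fits_prev)  # Equation (12)
    return 0.45 * r_max + 0.55 * r_avg                       # Equation (13)
```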

4.3. The Design and Process of the DSLABC Algorithm

If a new job is inserted while the original scheduling scheme is running, rescheduling must be carried out according to method III above to reduce the makespan.
This paper proposes the DSLABC algorithm for dynamic scheduling. Its operating principle is as follows: according to the beginning time of each operation in the original scheduling scheme, an operation whose beginning time is earlier than the insertion time is judged to be a finished operation; the rest are unfinished operations and need to be rescheduled together with the new job. First, the operation-machine information of the original scheduling scheme is deleted; then the time at which each machine finishes its last finished operation after the insertion time is calculated; next, the operation-machine information is reinitialized from the unfinished operations, the new job, and the release time of each machine; and finally rescheduling is performed with the SLABC algorithm to obtain a new scheduling scheme, following the principle described in Section 4.1. The DSLABC procedure is shown in Algorithm 1, and Figure 9 shows the flow of the DSLABC algorithm.
Algorithm 1. DSLABC.
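Since the code of this paper is not publicly available (see the Data Availability Statement), the following outline is only a sketch of the rescheduling step described above; the schedule representation and the slabc() callback are assumptions introduced for illustration.

```python
def reschedule(schedule, new_job, t_insert, slabc):
    """schedule: list of dicts with keys job, op, machine, begin, end.
    Splits operations by Equation (2), computes machine release times,
    and re-runs SLABC over everything that still has to be processed."""
    finished = [o for o in schedule if o["begin"] < t_insert]       # Equation (2)
    pending = [(o["job"], o["op"]) for o in schedule if o["begin"] >= t_insert]
    # Each machine is released when its last kept operation finishes
    # (never earlier than the insertion time itself).
    release = {}
    for o in finished:
        release[o["machine"]] = max(release.get(o["machine"], t_insert), o["end"])
    # Reinitialize and re-optimize with SLABC over the unfinished operations
    # plus the new job, with machine availability set to the release times.
    return finished + slabc(pending + list(new_job), release)
```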

5. The Experimental Verification of Algorithm Performance

This section presents the experimental verification of the dynamic scheduling problem and DSLABC algorithm. First, the experiment of selecting parameters is designed, and various parameters needed to run the algorithm are determined. Second, the performance and advantages of the SLABC (i.e., DSLABC) algorithm are verified through experimental comparison. Finally, a specific example is used to verify the effectiveness of the DSLABC algorithm in solving the dynamic scheduling problem.

5.1. The Design of the Experiment

All experiments in this paper are carried out under the same conditions and on the same platform: an Intel(R) Core(TM) i5-6500 CPU @ 3.20 GHz running Windows 10, with MATLAB 2015b as the software platform.
This paper determines the parameters required to run the DSLABC algorithm, including the discount rate, number of iterations, learning rate, and greedy rate. Experiments are carried out to determine the specific values of the parameters, as shown in Figure 10, Figure 11, Figure 12 and Figure 13. In Figure 10, Figure 11 and Figure 12, the variables are the learning rate, discount rate, and greedy rate, respectively, with the other parameters held fixed. To balance convergence accuracy and convergence speed, the parameter values listed in Table 7 are chosen. Figure 13 is the convergence diagram of the algorithm over 2000 iterations; the algorithm converges after approximately 500 iterations. To ensure that the algorithm reaches final convergence while keeping the running time as short as possible, the number of iterations is set to 1000 in this paper. In addition, the population size is set to 400, the initial reward to 1, and the number of runs to 20.

5.2. The Performance Comparison of the SLABC Algorithm

The SLABC algorithm is the core of the DSLABC algorithm, and its performance determines the performance of the DSLABC algorithm. The performance of the SLABC algorithm is first verified by experiments: 10 Brandimarte instances [44] are solved with the shortest makespan as the goal, and the results of the SLABC algorithm are compared with those of other algorithms. Table 8 shows the optimal solutions (in seconds) obtained by 11 algorithms on the 10 instances. Besides the SLABC algorithm, this paper tests TS [44], HA [45], GWO [10], GA [46], SLGA [42], MA-CROG [47], MATS-PSO [48], ABC [49], ABC-S, and ABC-S*Q [41] under the same experimental conditions. The optimal results are marked in parentheses, and HOV denotes the best historical value for the 10 instances [10]. The SLABC algorithm obtains 8 optimal solutions over the 10 instances, indicating excellent convergence performance. From the results of running the MK8 instance 20 times with each of the 11 algorithms, the boxplot in Figure 14 is obtained; it shows intuitively that the optimal and average values obtained by the SLABC algorithm have clear advantages, further verifying the convergence of the algorithm.
Figure 15, Figure 16 and Figure 17 show the scheduling schemes obtained by the SLABC algorithm for MK2, MK4, and MK9, whose job × machine scales are 10 × 6, 15 × 8, and 20 × 10, respectively. The different colors represent different jobs, the vertical axis represents the machines, the horizontal axis represents the processing time, and $C_{max}$ denotes the shortest makespan achieved by the SLABC algorithm. As the figures show, the shortest makespans of MK2, MK4, and MK9 reach 29, 67, and 364, respectively.
Table 9 shows the relative percentage deviation (RPD) calculated by Equation (15). The BV values are given in Table 8, and the optimal RPD is marked in parentheses. The last row of Table 9 gives the average RPD of the 11 algorithms over the 10 instances, and Figure 18 compares the gap between the average RPD of SLABC and that of the other 10 algorithms. This shows that SLABC obtains better solutions than the other algorithms on the same scheduling problems.
$$RPD = \frac{BV - HOV}{HOV} \times 100\% \quad (15)$$
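For example, on MK1 the SLABC algorithm obtains BV = 42 against HOV = 36 (Table 8), giving RPD = (42 − 36)/36 × 100% ≈ 16.67%, the value listed in Table 9.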

5.3. Experiment on Dynamic Scheduling

This paper presents the information of instance MK2 according to the coding method. Table 10 shows the machines on which each operation of MK2 can be processed, and Table 11 shows the corresponding processing times, with a one-to-one correspondence between Table 10 and Table 11. For example, operation $O_1$ of job $J_1$ can be processed on all six machines, and its processing time on machine $M_1$ is 3. Table 12 describes the newly inserted job designed in this paper, and Figure 19 is the scheduling scheme without any emergent job. Figure 20 shows the new scheduling scheme obtained when the emergent Job 11 arrives at time 12 and is rescheduled together with the unfinished operations of the original scheduling scheme; white represents the newly inserted job. Figure 19 and Figure 20 show that the DSLABC algorithm can handle the job-shop scheduling problem under emergencies, proving its practicability.

6. Conclusions

Given the frequent insertion of new jobs in flexible job-shops, this paper determines a specific method of job-shop rescheduling and verifies it experimentally. First, a dynamic scheduling model is proposed: by comparing the insertion time with the beginning time of each operation in the original scheduling scheme, the operations that need to be rescheduled are screened out, and the coding between operations and machines is designed. Second, the DSLABC algorithm for dynamic scheduling is proposed, combining the ABC and Q-learning algorithms, and the specific equations for the state, reward, and action selection strategy are designed. Through the Q value table, the algorithm selects an appropriate action according to the population state in each iteration, which improves the convergence accuracy and stability of the algorithm. Finally, a comparison of 11 algorithms on the Brandimarte instances verifies that the SLABC algorithm (i.e., DSLABC) has good convergence accuracy and stability. Dynamic scheduling of a specific instance verifies the effectiveness of the DSLABC algorithm for dynamic problems and provides a new, high-precision method for the dynamic scheduling of flexible job-shops.
The novelties of this paper are as follows: 1. Since the update dimension of the traditional ABC algorithm cannot be adjusted dynamically, this paper enables the ABC algorithm to obtain a suitable update dimension in each iteration through the Q-learning algorithm, improving its convergence accuracy. 2. In view of real flexible job-shops facing the insertion of new jobs, this paper reschedules the new job together with the operations that have not started processing to generate a new scheduling scheme. The proposed DSLABC algorithm realizes dynamic scheduling of the flexible job-shop, and its effectiveness and advantages are proved by experiments.
The next research plans are as follows: 1. Study dynamic scheduling with multi-job and multi-period insertion to broaden the application scope of the DSLABC algorithm. 2. Study scheduling problems with multiple objectives, such as priority and energy consumption. 3. Verify the algorithm on platforms such as Python. 4. Apply the algorithm to a real flexible job-shop to verify its practical performance.

Author Contributions

Conceptualization: J.Z.; Data curation: J.Z.; methodology: J.Z. and K.Z.; formal analysis: J.Z.; writing—original draft preparation: J.Z.; writing—review and editing: J.Z.; funding acquisition: K.Z., T.J. and X.L.; investigation: J.Z.; project administration: K.Z.; resources: X.L.; software: J.Z.; validation: J.Z. and K.Z.; visualization: J.Z.; supervision: K.Z. and T.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R & D Program of China (2018YFB1308700), the National Key Laboratory Foundation (KGJ6142210210304), the China Postdoctoral Science Foundation (2019M662410) and the National Defense Basic Scientific Research Program of China (JCKY2016204A502).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The code in this article cannot be made public for privacy reasons but can be obtained from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ABC: Artificial bee colony
SLABC: Self-learning artificial bee colony
DSLABC: Dynamic self-learning artificial bee colony
FJSP: Flexible job-shop scheduling problem
DFJSP: Dynamic flexible job-shop scheduling problem

References

1. Brucker, P.; Schlie, R. Job-shop scheduling with multi-purpose machines. Computing 1990, 45, 369–375.
2. Zhang, G.J.; Ding, Q.; Wang, L.J.; Zhou, X.G. Optimization method of production scheduling in flexible job. Comput. Sci. 2018, 45, 269–275.
3. Huang, S.; Na, T.; Yan, W.; Ji, Z. Multi-objective flexible job-shop scheduling problem using modified discrete particle swarm optimization. SpringerPlus 2016, 5, 1432.
4. Nouiri, M.; Bekrar, A.; Jemai, A.; Niar, S.; Ammari, A.C. An effective and distributed particle swarm optimization algorithm for flexible job-shop scheduling problem. J. Intell. Manuf. 2015, 29, 603–615.
5. Wang, L.; Zhu, Q. A Hybrid Genetic Algorithm for Flexible Job Shop Scheduling Problem with Sequence-Dependent Setup Times and Job Lag Times. IEEE Access 2021, 9, 104864–104873.
6. Filho, M.G.; Barco, C.F.; Neto, R. Using Genetic Algorithms to solve scheduling problems on flexible manufacturing systems (FMS): A literature survey, classification and analysis. Flex. Serv. Manuf. J. 2014, 26, 408–431.
7. Ge, H.; Sun, L.; Chen, X.; Liang, Y. An Efficient Artificial Fish Swarm Model with Estimation of Distribution for Flexible Job Shop Scheduling. Int. J. Comput. Intell. Syst. 2016, 9, 917–931.
8. Liu, J.; Sui, Z.; Li, X.; Yang, J. A Bayesian-grouping based hybrid distributed cooperative evolutionary optimization for large-scale flexible job-shop scheduling problem. IEEE Access 2021, 9, 69114–69126.
9. Wu, J.; Wu, G.; Wang, J. Flexible job-shop scheduling problem based on hybrid ACO algorithm. Int. J. Simul. Model. 2017, 16, 497–505.
10. Jiang, T.; Zhang, C. Application of grey wolf optimization for solving combinatorial problems: Job shop and flexible job shop scheduling cases. IEEE Access 2018, 6, 26231–26240.
11. Huang, C.; Yuan, D.; Zhang, H. Optimization of digital twin job scheduling problem based on lion swarm algorithm. J. Shandong Univ. (Eng. Sci.) 2021, 51, 17–23.
12. Gong, G.; Chiong, R.; Deng, Q.; Gong, X. A hybrid artificial bee colony algorithm for flexible job shop scheduling with worker flexibility. Int. J. Prod. Res. 2019, 58, 4406–4420.
13. Sassi, J.; Alaya, I.; Borne, P.; Tagina, M. A decomposition-based artificial bee colony algorithm for the multi-objective flexible jobshop scheduling problem. Eng. Optim. 2021, 11, 524–538.
14. Sabharwal, S.; Bansal, P.; Mittal, N.; Malik, S. Construction of Mixed Covering Arrays for Pair-wise Testing Using Probabilistic Approach in Genetic Algorithm. Arab. J. Sci. Eng. 2016, 41, 2821–2835.
15. Wang, L.; Cai, J.; Li, M.; Liu, Z. Flexible Job Shop Scheduling Problem Using an Improved Ant Colony Optimization. Sci. Program. 2017, 2017, 9016303.
16. Yao, Y.Y.; Ye, C.M. Solving Job-Shop scheduling problem using improved hybrid grey wolf optimizer. Appl. Res. Comput. 2018, 35, 1310–1314.
17. Wang, Y.; Feng, Y.X.; Tan, J.R.; Li, Z.K. Optimization method of flexible job-shop scheduling based on multiobjective particle swarm optimization algorithm. Trans. Chin. Soc. Agric. Mach. 2011, 42, 190–196.
18. Park, J.S.; Ng, H.Y.; Chua, T.J.; Ng, Y.T.; Kim, J.W. Unified Genetic Algorithm Approach for Solving Flexible Job-Shop Scheduling Problem. Appl. Sci. 2021, 11, 6454.
19. Du, Y.; Fang, J.; Miao, C. Frequency-domain system identification of an unmanned helicopter based on an adaptive genetic algorithm. IEEE Trans. Ind. Electron. 2014, 61, 870–881.
20. Gao, K.Z.; Suganthan, P.N.; Chua, T.J.; Chong, C.S.; Pan, Q.K. A two-stage artificial bee colony algorithm scheduling flexible job-shop scheduling problem with new job insertion. Expert Syst. Appl. 2015, 42, 7652–7663.
21. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471.
22. Li, J.; Du, Y.; Tian, J.; Duan, P.; Pan, Q. An Artificial Bee Colony Algorithm for Flexible Job Shop Scheduling with Transportation Resource Constraints. Acta Electron. Sin. 2021, 49, 324–330.
23. Pan, Q.K.; Tasgetiren, M.F.; Suganthan, P.N.; Chua, T.J. A discrete artificial bee colony algorithm for the lot-streaming flow shop scheduling problem. Inf. Sci. 2011, 181, 2455–2468.
24. Meng, T.; Pan, Q.K.; Sang, H.Y. A hybrid artificial bee colony algorithm for a flexible job shop scheduling problem with overlapping in operations. Int. J. Prod. Res. 2018, 56, 5278–5292.
25. Zhang, R.; Wu, C. An artificial bee colony algorithm for the job shop scheduling problem with random processing times. Entropy 2011, 13, 1708–1729.
26. Zheng, X.C.; Gong, W.Y. An improved artificial bee colony algorithm for fuzzy flexible job-shop scheduling problem. Control Theory Appl. 2020, 37, 1284–1292.
27. Gu, X.L. Application Research for Multiobjective Low-Carbon Flexible Job-Shop Scheduling Problem Based on Hybrid Artificial Bee Colony Algorithm. IEEE Access 2021, 9, 135899–135914.
28. Wang, Y.H.; Li, T.H.S.; Lin, C.J. Backward Q-learning: The combination of Sarsa algorithm and Q-learning. Eng. Appl. Artif. Intell. 2013, 26, 2184–2193.
29. Ding, H.; Gu, X. Improved particle swarm optimization algorithm based novel encoding and decoding schemes for flexible job shop scheduling problem. Comput. Oper. Res. 2020, 121, 104951.
30. Tian, X.; Liu, X. Improved Hybrid Heuristic Algorithm Inspired by Tissue-Like Membrane System to Solve Job Shop Scheduling Problem. Processes 2021, 9, 219.
31. Zhang, G.; Liang, G.; Yang, S. An effective genetic algorithm for the flexible job-shop scheduling problem. Expert Syst. Appl. 2011, 38, 3563–3573.
32. Gromicho, J.A.S.; van Hoorn, J.J.; Saldanha-da-Gama, F.; Timmer, G.T. Solving the job-shop scheduling problem optimally by dynamic programming. Comput. Oper. Res. 2012, 39, 2968–2977.
33. Ren, W.; Wen, J.; Yan, Y.; Hu, Y.; Li, J. Multi-objective optimisation for energy-aware flexible job-shop scheduling problem with assembly operations. Int. J. Prod. Res. 2020, 59, 7216–7231.
34. Gao, K.Z.; Suganthan, P.N.; Pan, Q.K.; Tasgetiren, M.F.; Sadollah, A. Artificial bee colony algorithm for scheduling and rescheduling fuzzy flexible job shop problem with new job insertion. Knowl.-Based Syst. 2016, 109, 1–16.
35. Gao, K.Z.; Suganthan, P.N.; Tasgetiren, M.F.; Pan, Q.K.; Sun, Q.Q. Effective ensembles of heuristics for scheduling flexible job shop problem with new job insertion. Comput. Ind. Eng. 2015, 90, 107–117.
36. Karaboga, D.; Ozturk, C. A novel clustering approach: Artificial Bee Colony (ABC) algorithm. Appl. Soft Comput. 2011, 11, 652–657.
37. Akay, B.; Karaboga, D. A modified artificial bee colony algorithm for real-parameter optimization. Inf. Sci. 2012, 192, 120–142.
38. Karaboga, D.; Akay, B. A modified artificial bee colony (ABC) algorithm for constrained optimization problems. Appl. Soft Comput. 2011, 11, 3021–3031.
39. Gao, W.F.; Liu, S.Y. A modified artificial bee colony algorithm. Comput. Oper. Res. 2012, 39, 687–697.
40. Karaboga, D.; Gorkemli, B.; Ozturk, C.; Karaboga, N. A comprehensive survey: Artificial bee colony (ABC) algorithm and applications. Artif. Intell. Rev. 2014, 42, 21–57.
41. Long, X.; Zhang, J.; Qi, X.; Xu, W.; Jin, T.; Zhou, K. A self-learning artificial bee colony algorithm based on reinforcement learning for a flexible job-shop scheduling problem. Concurr. Comput.-Pract. Exp. 2021, 34, e6658.
42. Chen, R.H.; Yang, B.; Li, S.; Wang, S.L. A self-learning genetic algorithm based on reinforcement learning for flexible job-shop scheduling problem. Comput. Ind. Eng. 2020, 149, 106778.
43. Hsieh, Y.Z.; Su, M.C. A Q-learning-based swarm optimization algorithm for economic dispatch problem. Neural Comput. Appl. 2016, 27, 2333–2350.
44. Brandimarte, P. Routing and scheduling in a flexible job shop by tabu search. Ann. Oper. Res. 1993, 41, 157–183.
45. Sutton, R.S.; Barto, A.G. Reinforcement learning: An introduction. IEEE Trans. Neural Netw. 1998, 9, 1054.
46. Kacem, I.; Hammadi, S.; Borne, P. Approach by localization and multiobjective evolutionary optimization for flexible job-shop scheduling problems. IEEE Trans. Syst. Man Cybern. C 2002, 32, 1–13.
47. Marzouki, B.; Driss, O.B.; Ghédira, K. Multi agent model based on chemical reaction optimization with greedy algorithm for flexible job shop scheduling problem. Proc. Comput. Sci. 2017, 112, 81–90.
48. Henchiri, A.; Ennigrou, M. Particle swarm optimization combined with tabu search in a multi-agent model for flexible job shop problem. In Proceedings of the International Conference in Swarm Intelligence; Springer: Berlin/Heidelberg, Germany, 2013; pp. 385–394.
49. Long, X.; Zhang, J.; Yang, S.; Wu, W.; Sun, Y.; Guo, Z.; Zhou, K. Research on Job-shop Scheduling Problem Based on Bee Colony Algorithm. J. Phys. Conf. Ser. 2021, 2033, 012173.
Figure 1. Scheduling scheme of Table 3.
Figure 2. The coded representation of Figure 1.
Figure 3. The scheduling scheme of method I with new job 4 inserted.
Figure 4. Scheduling scheme of method III with new job 4 inserted.
Figure 5. The coded representation of Figure 4 with the new job inserted.
Figure 6. The model framework of the Q-learning algorithm.
Figure 7. The combination model of ABC and Q-learning.
Figure 8. Schematic diagram of the SLABC algorithm.
Figure 9. The flow chart of the DSLABC algorithm.
Figure 10. Experiment to determine the learning rate.
Figure 11. Experiment to determine the discount rate.
Figure 12. Experiment to determine the greedy rate.
Figure 13. Experiment to determine the number of iterations.
Figure 14. Boxplot of 11 algorithms solving MK8.
Figure 15. The scheduling scheme of Brandimarte MK2 (10 × 6).
Figure 16. The scheduling scheme of Brandimarte MK4 (15 × 8).
Figure 17. The scheduling scheme of Brandimarte MK9 (20 × 10).
Figure 18. Difference in average RPD.
Figure 19. The original scheduling scheme of MK2.
Figure 20. The rescheduling scheme obtained by the DSLABC algorithm.
Table 1. The symbols of FJSP.

Symbol | Meaning
n | The total number of jobs.
$J_i$ (i = 1, 2, ..., n) | The i-th job.
$O_{ij}$ (j = 1, 2, ..., l) | The j-th operation of the i-th job.
m | The total number of machines.
$M_k$ (k = 1, 2, ..., m) | The k-th machine.
$C_{max}$ | Makespan.
$b_{ijk}$ | The beginning time of $O_{ij}$ processed on $M_k$.
$t_{ijk}$ | The duration of $O_{ij}$ processed on $M_k$.
$W_{ijk}$ | 1 if $O_{ij}$ is assigned to $M_k$; 0 otherwise.
$Y_{ijaqk}$ | 1 if $O_{ij}$ is processed on machine $M_k$ before $O_{aq}$; 0 otherwise.
Table 2. The mathematical expression of the constraints.

Constraint | Expression
I | $t_{ijk} > 0$
II | $b_{ijk} + t_{ijk} \le b_{i(j+1)k}$
III | $\sum_{k=1}^{m} W_{ijk} \ge 1$
IV | $b_{ijk} + t_{ijk} \le b_{aqk} + Z\left(1 - Y_{ijaqk}\right)$
Table 3. An instance of FJSP with a scale of 3 × 3.

Job | Operation | $M_1$ | $M_2$ | $M_3$
Job 1 | $O_{11}$ | 2 | 3 | -
Job 1 | $O_{12}$ | 3 | - | 4
Job 1 | $O_{13}$ | 4 | 3 | 5
Job 2 | $O_{21}$ | 2 | - | -
Job 2 | $O_{22}$ | 5 | 3 | -
Job 2 | $O_{23}$ | - | 4 | 3
Job 3 | $O_{31}$ | 3 | 2 | -
Job 3 | $O_{32}$ | - | 4 | 2
Job 3 | $O_{33}$ | 2 | - | 3
Table 4. The inserted job 4.

New Job | Operation | $M_1$ | $M_2$ | $M_3$
Job 4 | $O_{41}$ | 4 | 5 | 3
Job 4 | $O_{42}$ | 2 | - | 3
Job 4 | $O_{43}$ | 3 | 4 | -
Table 5. Parameters of the ABC algorithm.

Parameter | Meaning
SN | The total number of nectar sources.
i, k ∈ {1, 2, ..., SN} | The i-th nectar source.
j, d ∈ {1, 2, ..., D} | The number of updated dimensions.
$X_{ij}$ | Original nectar source.
$V_{ij}$ | The new nectar source obtained near the original nectar source $X_{ij}$.
$X_{kj}$ | A randomly selected nectar source different from the original nectar source $X_{ij}$.
$\phi_1$, $\phi_2$ | Random numbers generated in the range [0, 1].
$F_i$ | The fitness of $X_{ij}$.
$P_i$ | The probability of $X_{ij}$ being selected.
$X_{id}$ | The new nectar source obtained by updating the d-th dimension of the i-th nectar source.
$X_d^{min}$ | The lower bound of the d-th dimension.
$X_d^{max}$ | The upper bound of the d-th dimension.
Table 6. Initial Q value table.
Table 7. The parameter values of the SLABC algorithm.

Parameter | Specific Value
Discount rate ($\gamma$) | 0.2
Iteration times | 1000
Learning rate ($\alpha$) | 0.75
Greedy rate ($\varepsilon$) | 0.85
Initial reward | 1
Number of runs | 20
Population size | 400
Table 8. The best value (BV) obtained by 11 algorithms on 10 Brandimarte instances.

Instance | Scale | HOV | ABC | ABC-S | ABC-S*Q | MATS-PSO | MA-CROG | HA | TS | GWO | GA | SLGA | SLABC
MK1 | 10 × 6 | 36 | 51 | 43 | 43 | (40) | 42 | 43 | 44 | (40) | 44 | 42 | 42
MK2 | 10 × 6 | 24 | 38 | 31 | 31 | (29) | 35 | 32 | 34 | 31 | 38 | 31 | (29)
MK3 | 15 × 8 | 204 | 269 | 242 | 242 | 210 | 206 | 207 | 213 | (204) | 225 | 207 | (204)
MK4 | 15 × 8 | 48 | 96 | 82 | 82 | 74 | 69 | 76 | 82 | 72 | 88 | 73 | (67)
MK5 | 15 × 4 | 168 | 218 | 198 | 201 | (175) | 183 | 180 | 189 | 177 | 195 | 188 | (175)
MK6 | 10 × 15 | 33 | 109 | 98 | 98 | 82 | 87 | 85 | 87 | 84 | 83 | 84 | (80)
MK7 | 20 × 5 | 133 | 189 | 156 | 158 | 161 | 173 | 172 | 157 | 157 | 178 | 156 | (155)
MK8 | 20 × 10 | 523 | 590 | 548 | 542 | (523) | 552 | 560 | 530 | (523) | 542 | (523) | (523)
MK9 | 20 × 10 | 299 | 465 | 418 | 418 | 435 | 441 | 421 | 393 | (347) | 380 | 371 | 364
MK10 | 20 × 15 | 165 | 345 | 323 | 320 | 328 | 377 | (283) | 296 | 316 | 311 | 295 | (283)
Table 9. The RPD (%) obtained by 11 algorithms on 10 instances.

Instance | ABC | ABC-S | ABC-S*Q | MATS-PSO | MA-CROG | HA | TS | GWO | GA | SLGA | SLABC
MK1 | 41.67 | 19.44 | 19.44 | (11.11) | 16.67 | 19.44 | 22.22 | (11.11) | 22.22 | 16.67 | 16.67
MK2 | 58.33 | 29.16 | 19.44 | (20.83) | 45.83 | 33.33 | 41.67 | 29.17 | 58.33 | 29.17 | (20.83)
MK3 | 31.86 | 18.63 | 18.63 | 2.94 | 0.98 | 1.47 | 4.41 | (0) | 10.29 | 1.47 | (0)
MK4 | 100 | 70.83 | 70.83 | 54.17 | 43.75 | 58.33 | 70.83 | 50 | 83.33 | 52.08 | (39.58)
MK5 | 29.76 | 17.86 | 19.64 | (4.17) | 8.93 | 7.14 | 12.5 | 5.36 | 16.07 | 11.9 | (4.17)
MK6 | 230.3 | 197.0 | 197.0 | 148.5 | 163.6 | 157.6 | 163.6 | 154.4 | 151.5 | 154.4 | (142.4)
MK7 | 42.11 | 17.29 | 18.80 | 21.05 | 30.08 | 29.32 | 18.05 | 18.05 | 33.83 | 17.29 | (16.54)
MK8 | 12.81 | 4.78 | 3.63 | (0) | 5.54 | 7.07 | 1.34 | (0) | 3.63 | (0) | (0)
MK9 | 55.52 | 39.80 | 39.80 | 45.48 | 47.49 | 40.80 | 31.44 | (16.05) | 27.09 | 24.08 | 21.74
MK10 | 109.1 | 95.76 | 93.94 | 98.79 | 128.5 | (71.52) | 79.39 | 91.52 | 88.48 | 78.79 | (71.52)
Average | 71.15 | 51.06 | 50.12 | 40.70 | 49.14 | 42.60 | 44.55 | 37.57 | 49.48 | 38.59 | (33.35)
Table 10. Machines corresponding to each operation of MK2.

Job | $O_1$ | $O_2$ | $O_3$ | $O_4$ | $O_5$ | $O_6$
$J_1$ | 1,2,3,4,5,6 | 3,6 | 1,2,3,4,5,6 | 2 | 5,6 | 2,3,5
$J_2$ | 1,2,4,5,6 | 5,6 | 5 | 2,3 | 2,3,5 | 1,2,3,4,5,6
$J_3$ | 1,2,3,4,5,6 | 1,2,3,4,5,6 | 2,3,5 | 1,2,3,5,6 | 1,2,3,4,5,6 | 1,3,4,5,6
$J_4$ | 1,2,3,5,6 | 1,2,4,5,6 | 2 | 1,2,3,4,5,6 | 1,2,4,5,6 | 1,2,3,4,5,6
$J_5$ | 1,2,3,4,5,6 | 1,2,4,5,6 | 4 | 1,3,4,5,6 | 1,2,4,5,6 | 2,3
$J_6$ | 1,3,4,5,6 | 3,6 | 1,2,3,5,6 | 1,2,3,4,5,6 | 2 | 1,2,4,5,6
$J_7$ | 1,2,3,4,5,6 | 5 | 1,2,3,4,5,6 | 1,2,3,4,5,6 | 1,3,4,5,6 | -
$J_8$ | 2,3 | 1,2,3,5,6 | 1,2,3,4,5,6 | 1,3,4,5,6 | 1,2,4,5,6 | 1,2,4,5,6
$J_9$ | 2 | 3,6 | 1,2,4,5,6 | 1,2,4,5,6 | 2,3 | -
$J_{10}$ | 4 | 1,2,3,4,5,6 | 1,3,4,5,6 | 1,2,3,4,5,6 | 5,6 | 1,2,4,5,6
Table 11. The corresponding times of the machines in Table 10.

Job | $O_1$ | $O_2$ | $O_3$ | $O_4$ | $O_5$ | $O_6$
$J_1$ | 3,2,3,5,3,6 | 4,5 | 1,6,3,3,6,5 | 6 | 6,3 | 1,2,4
$J_2$ | 3,4,2,6,1 | 3,6 | 2 | 4,3 | 1,2,4 | 3,2,3,5,3,6
$J_3$ | 1,6,3,3,6,5 | 2,4,6,6,3,6 | 1,2,4 | 4,3,5,2,3 | 5,4,3,1,5,3 | 4,6,6,3,3
$J_4$ | 4,3,5,2,3 | 3,4,2,6,1 | 6 | 1,6,3,3,6,5 | 4,3,5,4,3 | 5,4,3,1,5,3
$J_5$ | 2,4,6,6,3,6 | 4,3,5,4,3 | 3 | 4,6,6,3,3 | 3,4,2,6,1 | 4,3
$J_6$ | 4,6,6,3,3 | 4,5 | 4,3,5,2,3 | 2,4,6,6,3,6 | 6 | 3,4,2,6,1
$J_7$ | 5,4,3,1,5,3 | 2 | 2,4,6,6,3,6 | 3,2,3,5,3,6 | 4,6,6,3,3 | -
$J_8$ | 4,3 | 4,3,5,2,3 | 2,4,6,6,3,6 | 4,6,6,3,3 | 4,3,5,4,3 | 3,4,2,6,1
$J_9$ | 6 | 4,5 | 3,4,2,6,1 | 4,3,5,4,3 | 4,3 | -
$J_{10}$ | 3 | 2,4,6,6,3,6 | 4,6,6,3,3 | 5,4,3,1,5,3 | 6,3 | 3,4,2,6,1
Table 12. Information about the newly inserted job.

New Job | Operation | $M_1$ | $M_2$ | $M_3$ | $M_4$ | $M_5$ | $M_6$
Job 11 | $O_{11,1}$ | 2 | - | 5 | 3 | 3 | -
Job 11 | $O_{11,2}$ | - | 3 | - | 3 | 5 | 4
Job 11 | $O_{11,3}$ | - | 6 | 3 | 5 | - | -
Job 11 | $O_{11,4}$ | 4 | 1 | - | 2 | 5 | 3
Job 11 | $O_{11,5}$ | - | 5 | 2 | 4 | - | 3
Job 11 | $O_{11,6}$ | 5 | 6 | 3 | 2 | - | 4
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
