Abstract

The Teaching-Learning-Based Optimization (TLBO) algorithm is a powerful evolutionary algorithm with good global search capability. However, in the later period of evolution, the diversity of learners degrades as the iterations accumulate and the scope of solutions shrinks, which leads to trapping in local optima and premature convergence. This paper presents an improved version of the TLBO algorithm based on the Laplace distribution and an Experience exchange strategy (LETLBO). The Laplace distribution is used to expand the exploration space, and a new experience exchange strategy makes good use of experience information to identify more promising solutions so that the algorithm converges faster. The experimental results verify that the LETLBO algorithm enhances solution accuracy and quality compared with the original TLBO and several TLBO variants, and that it is very competitive with other popular and powerful evolutionary algorithms. Finally, the LETLBO algorithm is applied to parameter estimation of chaotic systems, and the promising results show its applicability to practical problem-solving.

1. Introduction

Evolutionary algorithms (EAs) are population-based optimization search techniques, based on swarm intelligence (SI) and Darwin's theory of evolution, which have been widely used in all kinds of complex real-valued optimization problems [1]. Typical evolutionary optimization algorithms include PSO [2], DE [3], GSO [4], ABC [5], FSA [6], WCA [7], CS [8], DSA [9], ICA [10], BSA [11], ISA [12], and HTS [13].

Recently, Rao et al. [14] proposed the Teaching-Learning-Based Optimization (TLBO) algorithm. In the TLBO algorithm, the optimal individual is marked as the "teacher" and the other individuals are "students"; the algorithm simulates the two behaviors of the "Teacher Stage" and the "Learner Stage" in a class. It has the advantages of simple computation and few controlling parameters beyond the common ones, which makes it easy to implement and gives it a fast convergence speed [15]. TLBO has therefore attracted much attention and has been successfully applied to many real-world optimization problems [16-21].

However, relevant research shows that the higher the dimension of the problem to be optimized, the more prone the algorithm is to slow convergence. For this reason, many TLBO variants have been proposed in recent years. Venkata Rao et al. [22] incorporate an elitist strategy into the TLBO algorithm to identify its effect on the exploration and exploitation capacity. Feng Zou [23] maintains the diversity of the population in the teacher phase by using a ring neighborhood search. Farahani [14] presented differential and interactive mutation operations to improve the exploration capability and maintain diversity. Debao [24] renews part of the individuals according to a random probability, while the remaining individuals renew their positions by learning from the best individual, the worst individual, and another random individual of the current generation. Feng Zou et al. [25] use differential evolution (DE) operators to increase diversity and a repulsion learning method to make learners search for knowledge in different directions. Sai H C [26] uses a confined TLBO (CTLBO) that eliminates the teaching factor and introduces eight new mutation strategies into the teacher phase and four into the student phase to enhance the algorithm's exploitation and exploration capabilities. Jiang et al. [27] designed a neighborhood topology and a fitness-distance-ratio mechanism to maintain the exploration ability of the population.

Although the aforementioned TLBO variants show better performance than the original TLBO, the scope of solutions still shrinks in the later stages of evolution. In addition, the blindness of the random self-learning method, in which a learner learns from another random learner in the Learner Stage of the original TLBO, weakens the exploitation ability of the individuals. To address these issues, this paper proposes a novel version of TLBO that is augmented with the Laplace distribution and an Experience exchange strategy (LETLBO). The major contributions of this paper are as follows:

(i) The Laplace distribution is introduced in place of the uniform distribution, which improves the mutation ability and broadens the scope of solutions in the later stage of evolution.

(ii) An Experience exchange strategy is designed for the Learner Stage, which decreases the blindness of the random self-learning method and improves the exploitation ability of the individuals.

The paper first introduces the background and applications of the original TLBO algorithm, then analyses the strengths and weaknesses of the original TLBO and its variants, and finally proposes a novel version of TLBO. In Section 2, the original TLBO algorithm is described. Section 3 presents TLBO with the Laplace distribution and Experience exchange strategy (LETLBO). In Section 4, the results of LETLBO and related optimization algorithms are analysed via a comparative study. Section 5 applies the LETLBO algorithm to parameter estimation of chaotic systems. The paper concludes with Section 6.

2. Teaching-Learning-Based Optimization

This section gives a brief description of the TLBO algorithm proposed by Rao. The TLBO algorithm is a successful human-inspired method that mimics the teaching-learning interaction between a teacher and learners. The algorithm works in two key parts: the Teacher Stage and the Learner Stage. The Teacher Stage refers to learning from the teacher, while the Learner Stage is a mutual learning process between learners.

2.1. Teacher Stage

In the Teacher Stage, the purpose is for learners to increase their average grades with the help of the teacher of the class, thereby enhancing the mean grade of the whole class. A fresh learner is generated as follows:

X_i,new = X_i + r · (X_teacher − T_F · X_mean),

where X_i (i = 1, 2, ..., N, with N the number of learners) is a learner vector; X_i,new is the new individual generated from X_i; X_teacher is the best individual of the current population; X_mean is the mean of the learners; r is a random number uniformly distributed in [0, 1]; and the parameter T_F is a teaching factor deciding how much of X_mean is changed. The value of T_F is either 1 or 2, indicating that the learner learns something or nothing from the teacher, respectively. It is obtained by the rule

T_F = round[1 + rand(0, 1)].
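As a concrete illustration, the Teacher Stage can be sketched in a few lines of Python. This is a minimal sketch with greedy acceptance of improving moves, as in standard TLBO implementations; the function names and the use of NumPy are ours, not from the paper:

```python
import numpy as np

def teacher_phase(pop, f, rng=None):
    """One Teacher Stage sweep of TLBO (minimisation).

    pop : (N, D) array of learners; f : fitness function on a D-vector.
    A candidate replaces a learner only if it improves that learner's
    fitness (greedy acceptance).
    """
    rng = rng or np.random.default_rng()
    N, D = pop.shape
    fit = np.array([f(x) for x in pop])
    teacher = pop[np.argmin(fit)]        # X_teacher: best learner
    mean = pop.mean(axis=0)              # X_mean: class mean
    new_pop = pop.copy()
    for i in range(N):
        TF = rng.integers(1, 3)          # teaching factor: 1 or 2
        r = rng.uniform(0.0, 1.0, D)
        cand = pop[i] + r * (teacher - TF * mean)
        if f(cand) < fit[i]:             # keep only improving moves
            new_pop[i] = cand
    return new_pop
```

Because acceptance is greedy, the best fitness in the population can never worsen after a sweep.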

2.2. Learner Stage

In the Learner Stage, the learners obtain knowledge by interacting with each other. A fresh learner X_i,new is generated from X_i and a randomly chosen learner X_j using the following expression:

X_i,new = X_i + r · (X_i − X_j), if f(X_i) < f(X_j),
X_i,new = X_i + r · (X_j − X_i), otherwise,

where i and j are mutually exclusive integers selected from 1 to N, N is the population size, r again denotes a random number uniformly distributed in [0, 1], and f is the fitness function (minimisation is assumed). A learner X_i thus learns something new whenever the other learner X_j has more knowledge than him or her.
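The Learner Stage admits a similarly compact sketch (again a minimal illustration with greedy acceptance; the names are ours):

```python
import numpy as np

def learner_phase(pop, f, rng=None):
    """One Learner Stage sweep of TLBO (minimisation).

    Each learner i interacts with a distinct random learner j: it moves
    away from j if i is already better, towards j otherwise.  The move
    is kept only if it improves learner i's fitness.
    """
    rng = rng or np.random.default_rng()
    N, D = pop.shape
    fit = np.array([f(x) for x in pop])
    new_pop = pop.copy()
    for i in range(N):
        j = (i + rng.integers(1, N)) % N     # guarantees j != i
        r = rng.uniform(0.0, 1.0, D)
        if fit[i] < fit[j]:
            cand = pop[i] + r * (pop[i] - pop[j])
        else:
            cand = pop[i] + r * (pop[j] - pop[i])
        if f(cand) < fit[i]:                 # keep only improving moves
            new_pop[i] = cand
    return new_pop
```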

3. A Novel Teaching-Learning-Based Optimization

Several methods improve the overall performance of TLBO by modifying the updating process of the learners, but so far none has introduced the Laplace distribution together with an Experience exchange strategy among learners. In this section, a novel version of TLBO that adopts the Laplace distribution and an Experience exchange strategy (LETLBO) is introduced. Figure 1 shows the flowchart of the LETLBO algorithm.

3.1. Motivation

The motivation of this method is to use the communication experience of the other learners to reduce the blindness of the random self-learning method and to improve the learners' grades or scores, thereby improving the overall performance of TLBO: each learner both incorporates the experience of others and contributes his own. In a real classroom, learners can improve their grades or scores by exchanging and discussing their learning experience with one another. In this paper, the exchanged experience of the other learners is introduced into TLBO to make good use of experience information and identify more promising solutions so that the algorithm converges faster.

3.2. Laplace Distribution

The Laplace distribution [28] is a continuous probability distribution whose probability density function in one dimension is

f(x) = (1 / (2b)) · exp(−|x − μ| / b),

where μ is the location parameter and b > 0 is the scale parameter. It is the standard Laplace distribution when μ equals zero and b equals 1. Figure 2 shows the probability density curves of the standard Gauss distribution, the standard Uniform distribution, and the standard Laplace distribution, respectively. As can be seen from Figure 2, the peak of the Laplace distribution at the origin is the highest of the three distributions, while its tails decay the most slowly. Therefore, if the variation ability of the Laplace distribution is used in the Teacher Stage and the Learner Stage, its perturbation or self-regulation ability is the strongest of the three distributions, and the basic TLBO algorithm becomes more likely to jump out of local optima. Concretely, the uniform random number r in the updating equations is replaced by a Laplace-distributed random number ℓ ~ Laplace(0, 1); for the Learner Stage, for example,

X_i,new = X_i + ℓ · (X_i − X_j), if f(X_i) < f(X_j),
X_i,new = X_i + ℓ · (X_j − X_i), otherwise.
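Laplace-distributed perturbations are easy to generate by inverse-CDF sampling (or directly via `numpy.random.Generator.laplace`). The following sketch shows the inverse-CDF route; it is our illustration, not code from the paper:

```python
import numpy as np

def laplace_sample(size, mu=0.0, b=1.0, rng=None):
    """Draw Laplace(mu, b) variates by inverting the CDF:

        x = mu - b * sign(u) * ln(1 - 2|u|),  u ~ Uniform(-1/2, 1/2).
    """
    rng = rng or np.random.default_rng()
    u = rng.uniform(-0.5, 0.5, size)
    # log1p keeps precision when |u| is small
    return mu - b * np.sign(u) * np.log1p(-2.0 * np.abs(u))
```

For Laplace(0, 1) the variance is 2b² = 2, far larger than the 1/12 of a Uniform(0, 1) step, which is what produces the heavier-tailed jumps that help escape local optima.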

3.3. Experience Exchange Strategy

It can be seen from the Learner Stage update that each learner learns only from a randomly chosen learner. This leads to a certain blindness in the learning direction and thus degrades the algorithm's accuracy. It is well known that exchanging experience is an important way to learn: it ensures that each learner not only incorporates his own experience but also contributes it to the entire group, maximizing the utility of that experience. In the modified Learner Stage, each learner is additionally updated using the mean experience X_mp of the whole class and its own experience X_pbi (the best position found so far by the ith learner), scaled by the exchange learning factor ψ, a vector whose elements are distributed randomly in the range [0, 1.5] [29]. Here X_mp is the mean of the current experiences of all learners and is updated in each generation as

X_mp = (1/N) · (X_pb1 + X_pb2 + ... + X_pbN).
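The bookkeeping behind this strategy, tracking each learner's experience X_pbi and the mean experience X_mp, can be sketched as follows. The class name and interface are ours; the paper's own update equation for the learners is applied on top of this bookkeeping:

```python
import numpy as np

class ExperiencePool:
    """Track each learner's best-so-far position X_pb_i (its 'experience')
    and expose the mean experience X_mp used by the exchange step."""

    def __init__(self, pop, f):
        self.pb = pop.copy()                       # X_pb_i for each learner
        self.pb_fit = np.array([f(x) for x in pop])

    def update(self, pop, f):
        """Refresh the experiences after a generation (minimisation)."""
        fit = np.array([f(x) for x in pop])
        better = fit < self.pb_fit
        self.pb[better] = pop[better]
        self.pb_fit[better] = fit[better]

    @property
    def mean_experience(self):
        """X_mp = (1/N) * sum_i X_pb_i."""
        return self.pb.mean(axis=0)
```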

3.4. Flowchart of the LETLBO Algorithm

As explained above, the flowchart of the novel version of TLBO with the Laplace distribution and Experience exchange strategy (LETLBO) is shown in Figure 1.
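Putting the pieces together, the overall loop of Figure 1 can be sketched end to end. This is our reading of the flowchart, not the authors' code: Laplace-distributed steps replace the uniform random number in both stages, and the experience-exchange move (a pull towards the mean experience X_mp, scaled by ψ with elements in [0, 1.5]) is a plausible stand-in for the paper's exact update equation:

```python
import numpy as np

def letlbo(f, bounds, n_pop=30, n_iter=200, seed=0):
    """Minimal end-to-end sketch of the LETLBO loop (minimisation).

    bounds = (lo, hi) arrays of length D.  Every move is accepted
    greedily, i.e. only if it improves the learner's fitness.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    D = len(lo)
    pop = rng.uniform(lo, hi, (n_pop, D))
    fit = np.array([f(x) for x in pop])
    pb, pb_fit = pop.copy(), fit.copy()            # experiences X_pb_i
    for _ in range(n_iter):
        teacher = pop[np.argmin(fit)]
        mean = pop.mean(axis=0)
        for i in range(n_pop):
            # Teacher Stage with a Laplace-distributed step
            TF = rng.integers(1, 3)
            lap = rng.laplace(0.0, 1.0, D)
            cand = np.clip(pop[i] + lap * (teacher - TF * mean), lo, hi)
            fc = f(cand)
            if fc < fit[i]:
                pop[i], fit[i] = cand, fc
            # Learner Stage with a Laplace-distributed step
            j = (i + rng.integers(1, n_pop)) % n_pop
            lap = rng.laplace(0.0, 1.0, D)
            d = pop[i] - pop[j] if fit[i] < fit[j] else pop[j] - pop[i]
            cand = np.clip(pop[i] + lap * d, lo, hi)
            fc = f(cand)
            if fc < fit[i]:
                pop[i], fit[i] = cand, fc
            # Experience exchange: pull towards the mean experience X_mp
            psi = rng.uniform(0.0, 1.5, D)
            cand = np.clip(pop[i] + psi * (pb.mean(axis=0) - pop[i]), lo, hi)
            fc = f(cand)
            if fc < fit[i]:
                pop[i], fit[i] = cand, fc
            if fit[i] < pb_fit[i]:                 # refresh experience
                pb[i], pb_fit[i] = pop[i].copy(), fit[i]
    best = np.argmin(fit)
    return pop[best], fit[best]
```

Since every acceptance is greedy, the best fitness is monotone non-increasing over the iterations.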

3.5. Analysis of Computational Complexity

Computational complexity [30] is usually used to measure how the running time of an algorithm grows with the size of the input. The LETLBO algorithm consists of four kinds of operations: population initialization, the Teacher Stage, the Learner Stage, and the Experience exchange strategy. Each of these operates on N learners of dimension D and therefore costs O(N · D). Letting n denote the larger of N and D, the total computational time of the LETLBO algorithm in one iteration is no more than O(n²).

According to the above analysis, the total time complexity of the LETLBO algorithm in one cycle is no more than O(n²), while the total time complexity of the original TLBO algorithm in one cycle is also O(n²). This shows that the LETLBO algorithm hardly increases the time complexity of the original TLBO algorithm.

4. Experimental Results and Discussion

This section first investigates the influence of the population size control parameter on the proposed algorithm. Then, the performance of the LETLBO algorithm is evaluated by comparing it with other TLBO variants and with nine original intelligence optimization algorithms. The experimental results verify that the LETLBO algorithm is very competitive in terms of solution accuracy and quality.

4.1. Experimental Designing

To study the performance of the proposed LETLBO algorithm, six benchmark functions [31], listed in Table 1, are numerically simulated in this experiment. All experiments in this paper were run in MATLAB 7.14.0 (R2012a) under Windows XP on a machine with a 2.26 GHz Celeron CPU and 2 GB of memory, and every experiment was repeated 30 times independently.

The parameter settings of all comparative experiments are listed as follows. The dimension (D) of the benchmark functions is set to 30. The other parameters of the compared algorithms are set to the values recommended in their original papers. In addition, the stopping criterion is set to 300,000 Function Evaluations (FEs) [32].

To verify whether the overall optimization performance of the various algorithms differs significantly, the Wilcoxon Rank-Sum Test [33] with a significance level α = 0.05 is conducted in this paper. The Wilcoxon Rank-Sum Test assesses whether the mean values of the solutions from any two algorithms are statistically different from each other. The marks "-", "+", and "≈" denote that the performance of a compared algorithm is significantly worse than, significantly better than, and similar to that of LETLBO, respectively.

4.2. Population Size N Influence on LETLBO Performance

The population size is now varied from 10 to 100 in increments of 10, with the other parameters the same as before, and its influence on the performance of the LETLBO algorithm is investigated. All experiments are conducted on test functions F1-F6. Table 2 shows the results for the different population sizes. From the statistical results in Table 2, it can be seen that the mean value of the LETLBO (N = 30) algorithm is better than that of the other cases on functions F1, F2, F3, F4, and F6 (5 out of the 6 functions). On the remaining function, F5, the mean value of the LETLBO (N = 30) algorithm is similar to that of the other cases. In summary, the LETLBO (N = 30) algorithm ranks first among the cases considered. A likely reason is that a smaller population size may lead to premature convergence, while a larger population size greatly decreases the probability of finding the correct search direction. Therefore, the population size N of the LETLBO algorithm is recommended to be set to 30.

4.3. Comparison of LETLBO with the Original TLBO, LTLBO, and ETLBO Algorithms

For convenient implementation, LETLBO is compared with the original TLBO, with Teaching-Learning-Based Optimization based on the Laplace distribution (LTLBO), which adds only the Laplace distribution, and with Teaching-Learning-Based Optimization based on the Experience exchange strategy (ETLBO), which adds only the Experience exchange strategy. The parameters for LTLBO and ETLBO are the same as for LETLBO. Each algorithm is run independently 30 times, and the statistical results of the mean and SD are provided in Table 3, the last three rows of which summarize the experimental results; the best results are marked in bold. The evolution plots of TLBO, LTLBO, ETLBO, and LETLBO are illustrated in Figure 3. In addition, semi-logarithmic convergence plots are used to analyze the mean errors of the functions.

From Table 3, it can be seen that LETLBO performs much better than TLBO, LTLBO, and ETLBO in most cases. Specifically, LETLBO outperforms TLBO, LTLBO, and ETLBO on five, four, and five of the six test functions, respectively. On function F5, LETLBO performs the same as TLBO, LTLBO, and ETLBO in terms of the statistical mean value. Furthermore, Figure 3 reveals the convergence behaviors of the TLBO, LTLBO, ETLBO, and LETLBO algorithms. As illustrated by these evolution plots, the LETLBO algorithm shows the fastest convergence rates on functions F1, F2, F3, and F5. Therefore, judging by the statistical results of the mean and SD, the overall performance of LETLBO is significantly better than that of the original TLBO, LTLBO, and ETLBO algorithms.

The main reason is that the Laplace distribution expands the search space while the Experience exchange strategy avoids detours on the way to a more accurate solution, helping to identify more promising solutions. Exploration and exploitation are thus better balanced in LETLBO. It can therefore be concluded that LETLBO is the most accurate among the original TLBO, LTLBO, and ETLBO algorithms.

4.4. Comparison with Other Improved TLBO Variants

In this section, we compare LETLBO with four other improved TLBO variants: ETLBO [22], NSTLBO [23], TLBMO [34], and TLBODE [14]. The parameters of these four variants are taken from the references listed above. Each algorithm is run independently 30 times, and the statistical results of the mean and SD are provided in Table 4, the last three rows of which summarize the experimental results. The best results are shown in bold.

From the statistical mean values given in Table 4, the overall performance of LETLBO is significantly better than that of the other four algorithms. More specifically, LETLBO outperforms ETLBO, NSTLBO, TLBMO, and TLBODE on four, five, five, and four of the six test functions, respectively. LETLBO is better than the other four algorithms on functions F1, F2, F3, and F4; it performs the same as the other four algorithms on function F5 and the same as TLBODE on function F6 in terms of the statistical mean value. Therefore, judging by the statistical results of the mean and SD, the overall performance of the LETLBO algorithm is significantly better than that of the improved TLBO variants ETLBO, NSTLBO, TLBMO, and TLBODE.

4.5. Comparison of LETLBO Algorithm with Nine Original Intelligence Optimization Algorithms

In this section, the LETLBO algorithm is compared with nine original intelligence optimization algorithms: PSO [2], DE [3], GSO [4], ABC [5], WCA [7], CS [8], DSA [9], BSA [11], and ISA [12]. From the statistical mean values in Table 5, it can be seen that the LETLBO algorithm performs better than the other nine algorithms according to the Wilcoxon Rank-Sum Test results. More specifically, the LETLBO algorithm outperforms the PSO, DE, ABC, CS, GSO, WCA, DSA, BSA, and ISA algorithms on two, four, five, six, five, six, five, three, and six of the six test functions, respectively. On functions F2, F3, and F4 in particular, the LETLBO algorithm outperforms all nine of the other algorithms. Table 5 also reports the standard deviation (SD) of the ten algorithms; the LETLBO algorithm is superior to all nine other methods on functions F1, F2, F3, and F4, which indicates that its robustness exceeds that of the nine original intelligence optimization algorithms on these functions. Therefore, our approach is effective for solving optimization problems compared with the PSO, DE, ABC, CS, GSO, WCA, DSA, BSA, and ISA algorithms.

5. Application of LETLBO to Parameter Estimation of Chaotic Systems

In this section, we use the LETLBO algorithm to estimate the unknown parameters of the well-known Lorenz chaotic system. Suppose that the chaotic system [35-37] is n-dimensional and described as follows:

dX/dt = F(X, X_0, θ),

where X = (x_1, x_2, ..., x_n)^T and X_0, respectively, represent the state vector and the initial state, and θ denotes the set of real structure parameters of the chaotic system.

When estimating the parameters of the chaotic system, its structure is presumed to be known. Therefore, the estimated system can be denoted as follows:

dX̂/dt = F(X̂, X̂_0, θ̂),

where X̂ and X̂_0, respectively, represent the state vector and the initial state, and θ̂ denotes the set of estimated parameters of the chaotic system. Assume that the observation data and the state of the estimated system at time k are represented by X_k and X̂_k, respectively, and let M denote the total number of observations. The parameter estimation problem is then formulated as minimising the objective function [35]

J(θ̂) = Σ_{k=1}^{M} ‖X_k − X̂_k‖²,

and it is obvious that J equals zero if all the estimated parameters are equal to their real values.

This is clearly a multidimensional optimization problem: the unknown system parameter vector θ̂ is the decision variable, and minimizing J is the optimization goal. Figure 4 illustrates the principle of parameter estimation for a chaotic system via an optimization algorithm. Traditional optimization methods usually have a large computational cost and cannot obtain the global optimum or a satisfactory solution. In this section, we use the LETLBO algorithm on the well-known Lorenz chaotic system to estimate its unknown parameters. The Lorenz system is given by the equations

dx/dt = σ(y − x),
dy/dt = x(ρ − z) − y,
dz/dt = xy − βz,

where the parameters σ, ρ, and β decide the behavior of the system; σ = 10, ρ = 28, and β = 8/3 are the real values of the original system parameters. The running trajectories of the Lorenz chaotic system in each plane are shown in Figure 5.
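To make the objective concrete, the following sketch integrates the Lorenz system with fixed-step RK4 and evaluates J for a candidate parameter set. The integrator, its step size, and the function names are our assumptions for illustration; the paper does not specify them:

```python
import numpy as np

def lorenz_rhs(s, sigma, rho, beta):
    """Right-hand side of the Lorenz equations."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def simulate(params, s0, dt=0.01, steps=200):
    """Integrate the Lorenz system with fixed-step RK4; returns (steps, 3)."""
    s = np.array(s0, dtype=float)
    out = []
    for _ in range(steps):
        k1 = lorenz_rhs(s, *params)
        k2 = lorenz_rhs(s + dt / 2 * k1, *params)
        k3 = lorenz_rhs(s + dt / 2 * k2, *params)
        k4 = lorenz_rhs(s + dt * k3, *params)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(s.copy())
    return np.array(out)

def J(params_hat, observed, s0):
    """Objective: sum of squared state errors over all M observations."""
    est = simulate(params_hat, s0, steps=len(observed))
    return float(np.sum((observed - est) ** 2))
```

By construction J is zero when the candidate parameters (and initial state) match those that generated the observations, and it grows as the estimate drifts away; the chaotic dynamics make J highly sensitive to the parameters, which is exactly what makes this a demanding optimization benchmark.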

In this section, the LETLBO, ETLBO, TLBODE, TLBMO, and NSTLBO algorithms are used to estimate the parameters. The searching ranges of the three parameters σ, ρ, and β are set to bounded intervals containing their true values. In the experiment, the population size N of the LETLBO algorithm is set to 50, and each comparison algorithm is run independently 30 times with 12,000 function evaluations. The settings of the other parameters are the same as in the original algorithms. Table 6 lists the best, worst, and average results together with the parameter estimates of each algorithm. Figure 6 illustrates the evolution trajectories of each estimated parameter of the Lorenz chaotic system.

As can be seen from Table 6, the parameter estimates of the Lorenz chaotic system obtained by the LETLBO algorithm are very close to the true values, and the estimation accuracy is high. In terms of the best, worst, and average results, the LETLBO algorithm is better than the other algorithms. Figure 6 shows that the LETLBO algorithm also outperforms the four other TLBO variants in terms of searching quality and convergence rate. This demonstrates the effectiveness and robustness of the LETLBO algorithm for parameter estimation of the Lorenz chaotic system.

6. Summary and Conclusions

In this paper, a new version of the TLBO algorithm, namely the LETLBO algorithm, is proposed to solve unconstrained optimization problems. The Laplace distribution and an Experience exchange strategy are incorporated into the proposed algorithm, and the influence of the control parameters on its performance is analysed in detail. The performance of the LETLBO algorithm is evaluated by comparison with other TLBO variants and nine original intelligence optimization algorithms. The experimental results verify that the LETLBO algorithm is superior to the other TLBO variants in terms of solution quality in most cases, and it has clear advantages over the nine original intelligence optimization algorithms. In addition, we applied it to parameter estimation of chaotic systems, where it can be regarded as a new choice of high practical utility for the parameter estimation of the Lorenz chaotic system.

Our future work will focus on real-world applications of the proposed LETLBO algorithm in areas such as power systems, material structure design, unmanned aerial vehicle (UAV) route planning, and robotic path planning, all of which are of great value for future research.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The work was supported by the Key Projects of the Hebei Provincial Department of Education (No. ZD2021024), the Research and Practice Project of Education and Teaching Reform in Hebei University of Engineering (No. XN2101306073), the Hebei Graduate Innovation Funding Project (No. CXZZSS2022024), the Key Laboratory of Intelligent Industrial Equipment Technology of Hebei Province (Hebei University of Engineering), and the Handan Science and Technology Planning Project (No. 2142301290).