Introduction

As a population-based evolutionary algorithm, cuckoo search (CS)1 has attracted many scholars since it was proposed in 2009, owing to its few parameters and strong global search ability. To date, the CS algorithm has been successfully applied to various optimization problems in science and engineering2,3,4,5,6.

The core advantage of the CS algorithm is its use of the Levy flight strategy to generate new candidate solutions. This highly random flight mechanism allows individuals to explore the entire solution space, so the population exhibits good diversity, but it also leads to poor local exploitation ability and slow convergence. Furthermore, when dealing with complex optimization problems, the CS algorithm, like other evolutionary computation methods7,8, suffers from low search efficiency. Therefore, many scholars have conducted extensive research to enhance the convergence performance of the algorithm. At present, the improvement strategies for the CS algorithm fall into two main categories: parameter control and hybridization9.

As for parameter control, adaptive schemes are a commonly used strategy. Reda et al.10 developed a double exponential CS algorithm, in which the discovery probability was adjusted adaptively according to the concept of double Mersenne numbers; the proposed algorithm was then compared with other CS variants on the CEC2017 benchmark functions. Wei and Yu11 designed a CS with adaptive parameter control, in which the Cauchy distribution and Lehmer mean were used to dynamically update the control parameters; 48 benchmark functions and two fractional-order chaotic systems were employed to verify its convergence performance. Mareli and Twala12 proposed three CS algorithms based on dynamically increasing conversion parameters and tested them on 10 benchmark functions; simulation results indicated that CS with an exponentially increasing conversion parameter is superior to the other two methods. Bulatović et al.13 presented an improved CS algorithm for constrained optimization problems, in which the step size factor and discovery probability change dynamically with the number of generations; four constrained engineering optimization problems were used to test its effectiveness.

In terms of hybridization with other methods, several attempts have been made to enhance the performance of the CS algorithm. Khadanga et al.14 proposed a CS combined with the grey wolf optimization (GWO) algorithm for load frequency controller design; 10 benchmark functions were first used to verify the superiority of the hybrid algorithm, which was then applied to controller design for a power system. Kumar et al.15 developed a hybrid algorithm based on CS and quantum-behaved particle swarm optimization (QPSO), in which CS was modified by adjusting the step size factor and the total population was divided into two halves, one updated by CS and the other by QPSO. Shehab et al.16 designed an improved CS combined with the bat algorithm (BA), in which two populations were considered: the first was evolved using CS, and its best solution was transferred to the second population, optimized by BA. Based on the above analyses, there is no doubt that the convergence performance of these modified CS algorithms is enhanced in different respects, but the algorithm structure or time complexity usually changes to varying degrees. Therefore, exploring a new CS algorithm still has practical significance.

Motivated by these observations, a new CS algorithm based on the cloud model (CCS) is developed to strengthen search accuracy and efficiency. In the traditional CS algorithm, the step size factor is a very important control parameter that depends on the problem to be solved. When dealing with different types of optimization problems, the step size factor should change with the optimization problem, exhibiting fuzziness and randomness. This uncertainty can be represented by a cloud model. The cloud model is a method of uncertain information processing that can jointly consider randomness and fuzziness17. Therefore, the cloud model is introduced into the CS algorithm to configure the step size factor. First, the step size factor is regarded as a cloud droplet, and the corresponding membership degree is calculated according to the numerical characteristics defined in the cloud model. Next, an exponential function is designed to adaptively determine the step size factor from the membership degree. Notably, CCS has the same algorithm structure as the traditional CS. Finally, 25 benchmark functions and two chaotic systems are employed to evaluate the advantages of the CCS algorithm.

The contributions of this paper are as follows.

1. The step size parameter of the CS algorithm is regarded as a cloud droplet, and the parameter set is seen as a cloud. Three numerical characteristics of the cloud model are defined by using the individual fitness information in the population.

2. According to the fuzziness and randomness of the cloud model, the membership degree corresponding to each cloud droplet is calculated, and an exponential function is designed to map the membership degree to the step size factor.

3. Extensive experiments are conducted on 25 benchmark functions and two chaotic systems to evaluate the advantages and competitiveness of the CCS algorithm.

The remaining sections of this paper are structured as follows. CS algorithm is introduced in section "CS algorithm", and the proposed CCS algorithm is described in section "CS algorithm based on cloud model". The experiments and results are provided in section "Experiments and results", and the conclusions are drawn in section "Conclusions".

CS algorithm

The cuckoo search (CS) algorithm1 is a global optimization method inspired by the brood parasitism of cuckoos, which lay their eggs in the nests of other birds. The CS algorithm mainly includes two important strategies: Levy flight and biased random walk. During each iteration, Levy flight is used to find better nests in which to lay eggs. In the CS algorithm, each nest represents a feasible solution, and the cuckoo population size equals the number of solutions.

Under idealized assumptions, the nest position update formula based on Levy flight is as follows:

$$ x_{i}^{t + 1} = x_{i}^{t} + \alpha \oplus L\left( \lambda \right) $$
(1)
$$ \alpha = \alpha_{0} \otimes \left( {x_{i}^{t} - x_{best} } \right) $$
(2)

where \(x_{i}^{t + 1}\) represents the new solution, \(x_{i}^{t}\) and \(x_{best}\) denote the current solution and the best solution respectively, and \(t\) indicates the current iteration number. \(\oplus\) is entry-wise multiplication, \(\alpha\) is the step size parameter that controls the moving step of the cuckoo, and \(\alpha_{0}\) denotes the step size factor, which is usually set to 0.01. \(L\left( \lambda \right)\) stands for a random search path, expressed as:

$$ L\left( \lambda \right) = \frac{\varphi \times m}{{\left| n \right|^{1/\beta } }} $$
(3)

where \(m\) and \(n\) are two random numbers drawn from the normal distribution, and \(\beta\) is set to 1.5. \(\varphi\) is defined as:

$$ \varphi = \left( {\frac{{\Gamma \left( {1 + \beta } \right) \times \sin \left( {\pi \beta /2} \right)}}{{\Gamma \left( {\left( {1 + \beta } \right)/2} \right) \times \beta \times 2^{{\left( {\beta - 1} \right)/2}} }}} \right)^{1/\beta } $$
(4)

The other strategy of the CS algorithm is the biased random walk. After the nest positions are updated using Levy flight, some solutions are discarded according to the discovery probability \(p_{a}\), and the same number of new solutions is regenerated using the biased random walk, described as follows:

$$ x_{i}^{t + 1} = x_{i}^{t} + rnd \times \left( {x_{g}^{t} - x_{k}^{t} } \right) $$
(5)

where \(rnd\) is a random number uniformly distributed in \([0, 1]\), and \({x}_{g}^{t}\) and \({x}_{k}^{t}\) are two randomly selected solutions.
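For concreteness, the following is a minimal Python sketch of one CS generation assembled from Eqs. (1)–(5); the bound handling (`lb`, `ub`), the greedy selection for minimization and the fitness callback `f` are assumptions about details the equations leave open, not part of the original description.

```python
import numpy as np
from math import gamma, sin, pi

def levy(beta=1.5, size=None):
    # Eq. (4): scale factor phi of the Levy step
    phi = (gamma(1 + beta) * sin(pi * beta / 2) /
           (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    m = np.random.normal(size=size)  # numerator normal random number
    n = np.random.normal(size=size)  # denominator normal random number
    return phi * m / np.abs(n) ** (1 / beta)  # Eq. (3)

def cs_generation(pop, fit, f, alpha0=0.01, pa=0.25, lb=-100.0, ub=100.0):
    """One CS generation: Levy flights (Eqs. 1-2), then biased random walk (Eq. 5)."""
    N, D = pop.shape
    x_best = pop[np.argmin(fit)]
    for i in range(N):
        step = alpha0 * (pop[i] - x_best) * levy(size=D)  # Eqs. (1)-(2)
        trial = np.clip(pop[i] + step, lb, ub)
        ft = f(trial)
        if ft < fit[i]:  # greedy selection (minimization)
            pop[i], fit[i] = trial, ft
    for i in range(N):
        if np.random.rand() < pa:  # nest discovered with probability pa
            g, k = np.random.choice(N, 2, replace=False)
            trial = np.clip(pop[i] + np.random.rand() * (pop[g] - pop[k]), lb, ub)
            ft = f(trial)
            if ft < fit[i]:
                pop[i], fit[i] = trial, ft
    return pop, fit
```

Starting from `pop = np.random.uniform(lb, ub, (N, D))` and `fit = np.array([f(x) for x in pop])`, repeated calls to `cs_generation` drive the population toward the optimum.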

CS algorithm based on cloud model

As mentioned above, the search strategy of the CS algorithm mainly consists of Levy flight and biased random walk, combined with a greedy selection scheme to find promising solutions. In the early stage of the search, a larger step size factor is used to expand the search space, thereby increasing population diversity. In the late stage, the step size factor should be set to a smaller value to enhance the local search capability of the algorithm. However, for complex multimodal problems, a parameter configuration scheme based on a monotonically decreasing step size factor lacks an adaptation mechanism and may cause the algorithm to be trapped in local optima. Therefore, a new CS algorithm based on the cloud model (CCS) is proposed to adaptively adjust the step size factor, so as to strengthen the algorithm's ability to address complex optimization problems.

The cloud model is a cognitive model based on classical probability theory and fuzzy mathematics that realizes the transformation between qualitative concepts and quantitative descriptions, reflecting the fuzziness and randomness of qualitative concepts18. The cloud model has three numerical characteristics: expectation \({E}_{x}\), entropy \({E}_{n}\) and hyper-entropy \({H}_{e}\). Expectation \({E}_{x}\) is the central value of the cloud droplets, i.e., the point that best represents the qualitative concept quantitatively. Entropy \({E}_{n}\) represents the value range of the qualitative concept, reflecting the dispersion degree and floating range of the cloud droplets; the greater the entropy \({E}_{n}\), the more ambiguous the concept. Hyper-entropy \({H}_{e}\) is the uncertainty measure of entropy \({E}_{n}\), reflecting the degree of cloud dispersion. These three values together constitute the basic numerical characteristics of the cloud model, as shown in Fig. 1.

Figure 1. Schematic diagram of cloud model.

In this work, the cloud model is employed to produce the step size factor \({\alpha }_{0}\). First, the control parameter is regarded as a qualitative concept, each step size factor is considered a cloud droplet, and the set of step size factors generated during the evolution of the algorithm is regarded as a cloud. Then, the expectation \({E}_{x}\), entropy \({E}_{n}\) and hyper-entropy \({H}_{e}\) of the step size factor set are obtained using the forward cloud model. Finally, the membership degree of each step size factor is calculated from the three numerical characteristics of the cloud model. According to the principle of the forward cloud model, the numerical characteristics are expressed as:

$$ E_{x} = f\left( {best} \right) $$
(6)
$$ E_{n} = f\left( {avg} \right) - f\left( {best} \right) $$
(7)
$$ H_{e} = \frac{{E_{n} }}{5} $$
(8)

where \(f\left( {best} \right)\) and \(f\left( {avg} \right)\) correspond to the best fitness and average fitness of all individuals in the current population, respectively.

Based on these, the membership degree is defined as:

$$ \mu_{i} = \exp \left( {\frac{{ - \left( {f\left( {x_{i} } \right) - E_{x} } \right)^{2} }}{{2\sigma^{2} }}} \right) $$
(9)

where \(\sigma\) is a normally distributed random number with mean \(E_{n}\) and variance \(H_{e}\), and \(f\left( {x_{i} } \right)\) is the fitness of the current individual.

To set the step size factor \(\alpha_{0}\) reasonably, an exponential function is employed to map the membership degree \(\mu_{i}\) into a suitable interval, so as to adaptively configure \(\alpha_{0}\). The exponential function is given as:

$$ \alpha_{0} = \frac{\gamma }{{1 + e^{{ - \mu_{i} }} }} $$
(10)

where \(\gamma\) is a control coefficient to be preset.

Undoubtedly, the step size factor \(\alpha_{0}\) is closely related to the individual fitness \(f\left( {x_{i} } \right)\), which is conducive to addressing different types of optimization problems. The smaller (better) the individual fitness, the greater the resulting step size factor, and vice versa. In other words, the step size factor of the CCS algorithm is adaptively configured according to individual fitness, which gives it good robustness. Therefore, this cloud-model-based parameter adjustment strategy can alleviate premature convergence and stagnation.
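As a concrete illustration, the following sketch computes one adaptive step size factor per individual from Eqs. (6)–(10); reading "variance \(H_{e}\)" as a standard deviation of \(\sqrt{H_{e}}\) and the small floor on \(\sigma\) are assumptions added here for numerical safety.

```python
import numpy as np

def cloud_step_sizes(fit, gamma_coef=0.2):
    """Return one adaptive step size factor alpha0 per individual (minimization)."""
    Ex = fit.min()               # Eq. (6): best fitness in the population
    En = fit.mean() - fit.min()  # Eq. (7): average fitness minus best fitness
    He = En / 5.0                # Eq. (8)
    # Eq. (9): sigma is normally distributed with mean En and variance He
    sigma = np.random.normal(En, np.sqrt(He), size=fit.shape)
    sigma = np.where(np.abs(sigma) < 1e-12, 1e-12, sigma)  # avoid division by zero
    mu = np.exp(-(fit - Ex) ** 2 / (2.0 * sigma ** 2))     # membership degree
    return gamma_coef / (1.0 + np.exp(-mu))                # Eq. (10)
```

The returned value replaces the fixed \(\alpha_{0} = 0.01\) of Eq. (2) for the corresponding individual, so better individuals receive larger membership degrees and hence larger step size factors.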

Experiments and results

Benchmark functions

In the experiments, two sets of popular benchmark functions are chosen to evaluate the overall performance of the proposed CCS algorithm. The first set contains 11 basic problems, as shown in Table 1; their detailed definitions can be found in the literature19. The second set includes 14 shifted and rotated problems, namely the first 14 problems F1–F14 of CEC 200520. It is worth noting that these are very complex benchmark functions designed to simulate real-world optimization problems.

Table 1 The first set of benchmark functions.

Influence of the control coefficient

In the proposed CCS algorithm, there are two control parameters to be determined, namely the control coefficient \(\gamma \) and the discovery probability \({p}_{a}\). The discovery probability is set to the recommended value of 0.25. To evaluate the sensitivity of CCS performance to the control coefficient, additional experiments are conducted on the first set of benchmark functions with 30 dimensions. The population size of the CCS algorithm is set to 50, and the control coefficient \(\gamma \) is varied from 0.1 to 1 in steps of 0.1. For each control coefficient, CCS is run 30 times on each test function, and each run is stopped after a maximum of 300,000 function evaluations. The mean values of the final function errors are shown in Table 2, with the best result highlighted in bold.

Table 2 Mean values obtained by CCS using different control coefficients.

From Table 2, it can be seen that the control coefficient \(\gamma \) has a significant impact on the convergence accuracy of the CCS algorithm. For example, CCS produces better performance with a larger control coefficient on f6 and f7. On the contrary, CCS performs better on f3 and f9 with a smaller control coefficient. For the other benchmark functions, as the control coefficient \(\gamma \) increases, the search accuracy of CCS first increases and then decreases. Clearly, an appropriate setting of the control coefficient has a beneficial effect on search capability. According to the experimental results, a good trade-off interval for the control coefficient \(\gamma \) is from 0.2 to 0.5.

Comparison with CS algorithms

To assess the advantages of CCS, five CS algorithms are selected for comparative experiments: CS, ICS21, DACS22, VCS23 and DECS10. In the experiments, the population size \(N\) of each algorithm is set to 50, the dimension \(D\) of the benchmark functions is 30 and 50, and each algorithm is run 30 times on each problem. Each run is stopped when the number of function evaluations reaches \(10000\times D\). The parameters of the comparative algorithms are set according to the original literature. For the CCS algorithm, the control coefficient \(\gamma \) is set to 0.2, and the discovery probability \({p}_{a}\) is set to 0.25. Additionally, the mean value (Mean) and standard deviation (Std) of the final function errors are recorded for comparative analysis. For the two sets of benchmark functions with 30 dimensions, the experimental results are listed in Tables 3 and 4 respectively, with the best outputs shown in bold. To make the conclusions more objective, the Friedman test is applied to evaluate the performance differences among these CS algorithms, and the average ranking and final ranking are provided with the numerical results. To visually compare the convergence trends, the convergence graphs of these CS algorithms on some typical benchmark functions with 30 dimensions are given in Fig. 2.
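For reference, the Friedman comparison can be reproduced along the following lines; the `mean_errors.csv` file (one row per function, one column per algorithm, holding the mean final errors) is a hypothetical stand-in for the data of Tables 3 and 4.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

algos = ["CS", "ICS", "DACS", "VCS", "DECS", "CCS"]
errors = np.loadtxt("mean_errors.csv", delimiter=",")  # hypothetical data file

ranks = np.vstack([rankdata(row) for row in errors])   # rank 1 = lowest error
avg_rank = ranks.mean(axis=0)                          # average ranking per algorithm
stat, p = friedmanchisquare(*errors.T)                 # one sample per algorithm
for name, r in sorted(zip(algos, avg_rank), key=lambda t: t[1]):
    print(f"{name}: average ranking {r:.4f}")
print(f"Friedman statistic = {stat:.4f}, p-value = {p:.4g}")
```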

Table 3 Comparison of CS algorithms on the first set of benchmark functions with \(D=30\).
Table 4 Comparison of CS algorithms on the second set of benchmark functions with \(D=30\).
Figure 2. Convergence graphs of CCS and its competitors on the 30-dimensional benchmark functions.

As shown in Table 3, CCS produces better solutions in most cases. In terms of solution quality, ICS produces promising results on f6 and f7, and it also provides high-quality results on f1, f2, f8, f10 and f11. DACS exhibits good overall performance and is good at addressing f2, f3 and f9. Although VCS does not obtain the highest-precision solution for any problem, it produces reasonable results on the whole. Unfortunately, CS and DECS do not perform best on any benchmark function. It should be emphasized that CCS performs best on the unimodal problems f1, f4 and f5 as well as the multimodal problems f8, f9, f10 and f11. Moreover, CCS is the second-best method on f2, f3, f6 and f7. Based on the Friedman test, CCS ranks first with an average ranking of 1.9545, followed by ICS, DACS, VCS, CS and DECS. Therefore, CCS outperforms the other CS algorithms on the first set of benchmark functions with 30 dimensions.

From Table 4, it is observed that CCS still exhibits better convergence performance in tackling these challenging optimization problems. In more detail, CS provides the highest-quality solution on F3, ICS performs best on F1, F5, F9 and F10, DACS obtains promising results on F1 and F2, and VCS is superior to the other algorithms on F7, F12 and F13, while DECS does not provide the lowest mean value for any problem except F8. CCS produces reasonable results on F1, F4, F6, F11 and F14, and it performs second best on F2 and F5. Furthermore, on the multimodal problem F8, all of these CS algorithms produce the same average error, and on the unimodal problem F1, ICS, DACS and CCS all find the global optimal solution. In terms of the Friedman test results, CCS obtains the smallest average ranking of 2.4286, followed by ICS with an average ranking of 2.6429. CS and DECS yield the largest average rankings, meaning they are outperformed by the other algorithms. Therefore, the proposed CCS is the best among these CS algorithms.

It can be seen from Fig. 2 that CCS has a faster convergence speed and better search efficiency. Specifically, for the first-set functions f1, f4 and f11 as well as the second-set functions F1, F4 and F11, the convergence curve of CCS declines faster than those of the other algorithms, indicating that CCS has advantages in both speed and accuracy. For f5 and F6, CCS is slightly better than the other CS algorithms. For f9, CCS converges slightly more slowly than DACS, but both finally find the global optimal solution. Therefore, CCS is a competitive optimization method.

Generally speaking, a promising evolutionary algorithm should also produce reasonable results on high-dimensional problems. To evaluate the impact of increasing dimension on convergence performance, the proposed CCS and the five CS algorithms are compared with \(D = 50\). The maximum number of function evaluations is \(10000 \times D\), and the other parameter configurations are consistent with the previous experiments. For the two sets of benchmark functions, the experimental results are reported in Tables 5 and 6, respectively, together with the statistical results of the nonparametric Friedman test.

Table 5 Comparison of CS algorithms on the first set of benchmark functions with \(D=50\).
Table 6 Comparison of CS algorithms on the second set of benchmark functions with \(D=50\).

It can be observed from Table 5 that CCS maintains strong competitiveness on these 50-dimensional benchmark functions. In terms of solution quality, CCS produces the most accurate solution on 7 problems, namely f1, f4, f5, f8, f9, f10 and f11. ICS performs best on f6, DACS is good at solving the unimodal problems f2 and f3, and VCS is better than the other algorithms on f7. However, DACS and VCS show unstable search behavior on the multimodal problems f9 and f10. Also, neither CS nor DECS achieves the best result on any of these 50-dimensional test functions. According to the Friedman test, CCS produces the best overall performance, followed by ICS. DECS is outperformed by the other algorithms because its robustness across different optimization problems needs to be further strengthened. Therefore, the presented CCS performs better than the other CS algorithms on these high-dimensional basic problems.

As indicated in Table 6, CCS achieves better convergence performance on these 50-dimensional challenging problems. For example, ICS produces reasonable results on F1, F5, F9 and F10; DACS obtains the lowest mean value on F1, F2, F3 and F7 but is not competitive on F9, F10, F11, F12 and F13; VCS provides the highest-quality result on F13; and CS and DECS are still not good at dealing with these complex optimization problems. CCS produces promising results on F1, F4, F6, F11, F12 and F14, and it performs second best on F2, F3, F5, F7 and F13. Moreover, it is difficult to determine which algorithm is most suitable for solving F8. Based on the Friedman test rankings, these CS algorithms are ordered as follows: CCS, ICS, VCS, DACS, DECS and CS. Apparently, CCS outperforms its peers on these challenging benchmark functions. From the above observations, CCS is more competitive than the other CS algorithms.

Additionally, for these 25 benchmark functions with 30 and 50 dimensions, the number of functions on which each algorithm achieves the lowest mean value, together with the final rankings, is plotted in Fig. 3. Compared with the other algorithms, CCS clearly exhibits the most competitive performance on these optimization problems of different dimensions.

Figure 3. Statistical results of tackling these 25 benchmark functions using different algorithms.

Comparison with non-CS algorithms

In this section, CCS is compared with five non-CS algorithms on the two sets of benchmark functions with 30 dimensions: AEFA24, FPA25, SCA26, BOA27 and WOA28. The population size of each algorithm is 50, and the maximum number of function evaluations is set to 300,000. Each algorithm is run 30 times, and the average value and standard deviation of each method on each test function are given in Tables 7 and 8, with the best average value marked in bold. To comprehensively compare the performance of CCS and these non-CS algorithms, the Friedman test is conducted, and the statistical results are provided in the last two rows of these tables.

Table 7 Comparison of AEFA, FPA, SCA, BOA, WOA and CCS on the first set of benchmark functions with \(D=30\).
Table 8 Comparison of AEFA, FPA, SCA, BOA, WOA and CCS on the second set of benchmark functions with \(D=30\).

From Table 7, it can be observed that CCS exhibits better overall performance than these advanced evolutionary algorithms. In more detail, BOA is significantly better than the others on f3, f4, f7 and f8, and it also attains the global optimum on f9. WOA yields the best search results on f1, f2 and f9, but loses its advantage on f4, f10 and f11. It should be noted that AEFA, FPA and SCA do not produce the best result on any problem. CCS provides promising overall results, especially on f5, f6, f9, f10 and f11. According to the average rankings, CCS obtains the smallest average ranking of 2.1818, followed by WOA, BOA, AEFA, SCA and FPA in ascending order, which means that CCS is the best overall.

As shown in Table 8, these algorithms exhibit different strengths on different benchmark functions. In terms of solution accuracy, AEFA obtains higher-accuracy solutions on F8, F9, F10, F11 and F12, FPA performs best on F3, F5 and F7, and SCA produces better performance on F14. However, BOA and WOA are not good at solving these challenging optimization problems and do not produce the highest-quality result on any problem. CCS outperforms the others on F1, F2, F4, F6 and F13, and it is the second-best approach on F3, F5, F7, F9, F10, F11, F12 and F14. According to the average ranking and final ranking, CCS is significantly superior to these non-CS algorithms. Considering all the above analyses, it is clear that the proposed CCS algorithm has better overall performance.

Chaotic time series prediction

In a complex system, the sequence of observed variable values ordered in time is called a time series, which reflects the dynamic properties of the system. A time series obtained from a chaotic system is called a chaotic time series; it is a dataset with nonlinear characteristics that contains rich information about the system dynamics. In practical applications, the prediction of chaotic time series has long been a research hotspot29.

To further investigate the competitiveness of the CCS algorithm, two typical chaotic systems, Lorenz and Mackey–Glass30, are used for numerical simulation and comparison. In this work, a three-layer feed-forward neural network is used as the prediction model of the chaotic time series, and CCS and CS are used as learning algorithms to independently determine the initial weights and thresholds of the network.
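As a sketch of how a nest can encode the network, the following maps a candidate vector to the 3-6-1 architecture used below for the Lorenz series and scores it by the mean squared prediction error; the tanh hidden activation, linear output neuron and one-step-ahead framing are assumptions not fixed by the paper.

```python
import numpy as np

def nn_fitness(x, X, y, n_in=3, n_hid=6):
    """Mean squared one-step prediction error of a 3-6-1 network encoded in x."""
    i = 0
    W1 = x[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = x[i:i + n_hid]; i += n_hid                  # hidden thresholds
    W2 = x[i:i + n_hid]; i += n_hid                  # hidden-to-output weights
    b2 = x[i]                                        # output threshold
    H = np.tanh(X @ W1 + b1)                         # hidden layer
    pred = H @ W2 + b2                               # linear output neuron
    return np.mean((pred - y) ** 2)
```

Under this encoding, each nest is a 31-dimensional vector for the 3-6-1 network (37 dimensions for the 4-6-1 network below), and CCS or CS minimizes `nn_fitness` over the training samples `(X, y)` to determine the initial weights and thresholds.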

The first chaotic system is the Lorenz system described as follows:

$$ \begin{gathered} \dot{x}_{1} \left( t \right) = m\left( {x_{2} \left( t \right) - x_{1} \left( t \right)} \right) \hfill \\ \dot{x}_{2} \left( t \right) = nx_{1} \left( t \right) - x_{2} \left( t \right) - x_{1} \left( t \right)x_{3} \left( t \right) \hfill \\ \dot{x}_{3} \left( t \right) = x_{1} \left( t \right)x_{2} \left( t \right) - px_{3} \left( t \right) \hfill \\ \end{gathered} $$
(11)

where \(m = 10\), \(n = 28\) and \(p = 8/3\).
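A minimal sketch for generating the Lorenz series of Eq. (11) with a fixed-step fourth-order Runge–Kutta integrator is given below; the step size, initial state and series length are assumptions, as the paper does not specify them.

```python
import numpy as np

def lorenz(s, m=10.0, n=28.0, p=8.0 / 3.0):
    x1, x2, x3 = s  # Eq. (11)
    return np.array([m * (x2 - x1), n * x1 - x2 - x1 * x3, x1 * x2 - p * x3])

def simulate_lorenz(steps=3000, h=0.01, s=np.array([1.0, 1.0, 1.0])):
    out = np.empty((steps, 3))
    for t in range(steps):  # classic RK4 step
        k1 = lorenz(s)
        k2 = lorenz(s + h / 2 * k1)
        k3 = lorenz(s + h / 2 * k2)
        k4 = lorenz(s + h * k3)
        s = s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out[t] = s
    return out  # discard an initial transient before forming training samples
```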

The second chaotic system is the Mackey–Glass system, which is formulated as follows:

$$ \dot{x}\left( t \right) = - mx\left( t \right) + \frac{{nx\left( {t - 17} \right)}}{{1 + x^{10} \left( {t - 17} \right)}} $$
(12)

where \(m = 0.1\) and \(n = 0.2\).
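Similarly, the Mackey–Glass series of Eq. (12) can be generated by Euler integration with a history buffer for the delayed state; the step size, constant initial history and series length are assumptions.

```python
import numpy as np

def mackey_glass(steps=5000, h=0.1, tau=17.0, m=0.1, n=0.2, x0=1.2):
    delay = int(round(tau / h))      # number of integration steps in the delay
    x = np.full(steps + delay, x0)   # constant history before t = 0
    for t in range(delay, steps + delay - 1):
        xd = x[t - delay]            # delayed state x(t - 17)
        x[t + 1] = x[t] + h * (-m * x[t] + n * xd / (1.0 + xd ** 10))
    return x[delay:]                 # drop the artificial history segment
```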

In the experiments, the population size of CCS and CS algorithms is 30, and the number of repeated runs on each chaotic time series is 25. For the Lorenz chaotic system, the maximum number of iterations is set to 300, and the number of neurons from the input layer to the output layer of the feed-forward neural network is 3, 6 and 1 respectively. For the Mackey–Glass chaotic system, the maximum number of iterations is set to 500, and the number of nodes in the input layer, hidden layer and output layer is 4, 6 and 1 respectively. The experimental results in terms of mean value and standard deviation of the squared errors are shown in Table 9, and the performance comparison on different chaotic systems is given in Figs. 4 and 5.

Table 9 Experimental results of CS and CCS algorithms.
Figure 4. Comparison of CCS and CS on the Lorenz time series.

Figure 5. Comparison of CCS and CS on the Mackey–Glass time series.

As observed from Table 9, CCS provides the smallest mean errors of 4.18E–003 and 1.50E–002 for the Lorenz and Mackey–Glass chaotic systems, respectively. In other words, CCS has higher prediction accuracy than the CS algorithm. As shown in Figs. 4 and 5, the values predicted by CCS for each chaotic time series are closer to the actual values of the corresponding system. In conclusion, these observations show that the proposed CCS algorithm has good search ability and can improve the prediction accuracy of chaotic time series.

Conclusions

To enhance the convergence performance of the CS algorithm on various optimization problems, a new CS algorithm based on the cloud model is proposed to adaptively determine the step size factor. First, the set of step size factors is considered a cloud, and three numerical characteristics of the cloud model are defined using the individual fitness information in the population. Then, the membership degree of each step size factor is calculated. Finally, an exponential function is designed to adaptively configure the step size factor, so as to enhance the versatility and robustness of the algorithm. To evaluate the effectiveness of the CCS algorithm, a large number of experiments are conducted on two sets of benchmark functions to compare it with other CS algorithms and five advanced evolutionary algorithms, and the nonparametric Friedman test is carried out for comprehensive comparison and analysis. Experimental results indicate that the CCS algorithm is highly competitive in terms of search accuracy and convergence rate. The proposed CCS is also a promising method for predicting two chaotic time series. However, the CCS algorithm has limitations; for example, it cannot produce the most promising results on all benchmark functions. In the future, an adaptive configuration scheme for the control coefficient will be explored to enhance robustness across different optimization problems, and more real-world applications will be considered to further investigate the effectiveness of the CCS algorithm.