
Laplace’s rule of succession: a simple and efficient way to compare metaheuristics

Original Article · Neural Computing and Applications

Abstract

Metaheuristics are algorithms used to solve difficult optimization problems. They are typically stochastic; hence, proper statistical tests are needed to compare them. However, choosing an appropriate statistical test is not trivial, since every test rests on assumptions that must hold before it can be applied. Moreover, the p-value associated with a statistical test is usually difficult to interpret. In this paper, we propose the use of Laplace’s rule of succession to compare different metaheuristic approaches. The rule is simple, intuitive and easy to compute, and it can be used alone or to complement a statistical test. The process of using the rule for comparison is explained in detail and applied to a typical scenario encountered in the field of metaheuristics: an improved variant of an existing metaheuristic algorithm is proposed, and the performance of the two algorithms is evaluated with both Laplace’s rule and a traditional statistical test. Analysis of the results, and guidance on how to interpret them, is provided. The results show that Laplace’s rule is consistent with the statistical test used, while being easier to compute and interpret.
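For concreteness, the rule itself is easy to state: if algorithm A outperforms algorithm B in s out of n independent runs, Laplace’s rule of succession estimates the probability that A beats B on the next run as (s + 1)/(n + 2). The following minimal Python sketch illustrates the computation (the run counts are hypothetical, not the paper’s experimental data):

    def laplace_rule(successes: int, trials: int) -> float:
        # Laplace's rule of succession: estimated probability of a
        # success on the next trial, given `successes` out of `trials`.
        if not 0 <= successes <= trials:
            raise ValueError("need 0 <= successes <= trials")
        return (successes + 1) / (trials + 2)

    # Hypothetical example: algorithm A returned a better objective value
    # than algorithm B in 38 of 50 independent runs on some benchmark.
    print(laplace_rule(38, 50))  # 0.75: estimated P(A beats B on the next run)

Note that with no observations the estimate is 1/2, and it never reaches exactly 0 or 1; this built-in caution is part of what makes the rule easy to interpret.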


Data Availability

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

Notes

  1. A Notebook document with the code and results can be requested from the corresponding author.

  2. The classical Pearson correlation coefficient measures the strength of a linear relationship; it is in fact also strong on this small benchmark set for algorithms A and B. However, Spearman’s rank correlation is more general and more robust (see the sketch below).
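As a hedged illustration of that comparison (the benchmark scores below are hypothetical, not the paper’s data), both coefficients are available in SciPy:

    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    # Hypothetical best objective values found by algorithms A and B on a
    # small benchmark set (lower is better); illustrative numbers only.
    scores_a = np.array([0.12, 1.35, 0.48, 2.10, 0.05, 0.93])
    scores_b = np.array([0.20, 1.10, 0.55, 2.60, 0.09, 0.80])

    r, _ = pearsonr(scores_a, scores_b)      # strength of linear association
    rho, _ = spearmanr(scores_a, scores_b)   # strength of monotonic (rank) association
    print(f"Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")

Spearman’s coefficient is simply Pearson’s coefficient computed on the ranks of the data, which is why it is insensitive to monotonic rescaling and to outliers in the raw scores.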


Acknowledgements

The authors would like to thank the anonymous reviewers for their constructive and helpful comments and suggestions. This work was supported by Gulf University for Science and Technology (Kuwait) under Grant 251896.

Author information


Corresponding author

Correspondence to Mahamed Ghasib Hussein Omran.

Ethics declarations

Conflicts of interest

The authors of this article certify that they have no affiliation with or involvement in any organization or entity with any financial or non-financial interest in the subject matter or materials discussed in this manuscript.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Omran, M.G.H., Clerc, M. Laplace’s rule of succession: a simple and efficient way to compare metaheuristics. Neural Comput & Applic 35, 11807–11814 (2023). https://doi.org/10.1007/s00521-023-08322-5

