Harris hawks optimization: Algorithm and applications

https://doi.org/10.1016/j.future.2019.02.028

Highlights

  • A mathematical model is proposed to simulate the hunting behavior of Harris’ Hawks.

  • An optimization algorithm is proposed using the mathematical model.

  • The proposed HHO algorithm is tested on several benchmarks.

  • The performance of HHO is also examined on several engineering design problems.

  • The results show the merits of the HHO algorithm compared to existing algorithms.

Abstract

In this paper, a novel population-based, nature-inspired optimization paradigm called Harris Hawks Optimizer (HHO) is proposed. The main inspiration of HHO is the cooperative behavior and chasing style of Harris’ hawks in nature, known as the surprise pounce. In this intelligent strategy, several hawks cooperatively pounce on the prey from different directions in an attempt to surprise it. Harris’ hawks can exhibit a variety of chasing patterns depending on the dynamics of the scenario and the escaping patterns of the prey. This work mathematically mimics these dynamic patterns and behaviors to develop an optimization algorithm. The effectiveness of the proposed HHO optimizer is verified, through comparison with other nature-inspired techniques, on 29 benchmark problems and several real-world engineering problems. The statistical results and comparisons show that the HHO algorithm provides very promising and occasionally competitive results compared to well-established metaheuristic techniques. Source code for HHO is publicly available at http://www.alimirjalili.com/HHO.html and http://www.evo-ml.com/2019/03/02/hho.

Introduction

Many real-world problems in machine learning and artificial intelligence are continuous, discrete, constrained, or unconstrained in nature [1], [2]. Due to these characteristics, some classes of problems are hard to tackle using conventional mathematical programming approaches such as conjugate gradient, sequential quadratic programming, steepest descent, and quasi-Newton methods [3], [4]. Several studies have verified that these methods are not always efficient in dealing with many large-scale real-world multimodal, non-continuous, and non-differentiable problems [5]. Accordingly, metaheuristic algorithms have been designed and utilized as competitive alternative solvers for many such problems, owing to their simplicity and ease of implementation. In addition, the core operations of these methods do not rely on gradient information of the objective landscape or its mathematical traits. However, a common shortcoming of most metaheuristic algorithms is that they often show delicate sensitivity to the tuning of user-defined parameters. Another drawback is that metaheuristic algorithms do not always converge to the global optimum [6].

In general, metaheuristic algorithms are of two types [7]: single-solution-based (e.g. Simulated Annealing (SA) [8]) and population-based (e.g. Genetic Algorithm (GA) [9]). As the name indicates, in the former type only one solution is processed during the optimization phase, while in the latter a set of solutions (i.e. a population) is evolved in each iteration of the optimization process. Population-based techniques can often find an optimal or suboptimal solution that either coincides with the exact optimum or lies in its neighborhood. Population-based metaheuristic (P-metaheuristic) techniques mostly mimic natural phenomena [10], [11], [12], [13]. These algorithms start the optimization process by generating a set (population) of individuals, where each individual represents a candidate solution to the optimization problem. The population is evolved iteratively by replacing the current population with a newly generated one using operators that are often stochastic [14], [15]. The optimization process proceeds until a stopping criterion is satisfied (e.g. a maximum number of iterations) [16], [17].
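The generic P-metaheuristic loop just described can be sketched as follows; this is an illustrative skeleton only, in which the Gaussian perturbation and greedy replacement are placeholder stochastic operators, not those of any specific algorithm discussed in this paper:

```python
import random

def minimize(objective, dim, bounds, pop_size=30, iterations=200, seed=0):
    """Skeleton of a population-based metaheuristic: generate a population,
    then repeatedly replace it using stochastic operators until the
    stopping criterion (here, an iteration budget) is met."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=objective)
    for _ in range(iterations):
        new_pop = []
        for x in pop:
            # Placeholder stochastic operator: Gaussian perturbation,
            # clipped back into the feasible bounds.
            cand = [min(hi, max(lo, xi + rng.gauss(0.0, 0.1 * (hi - lo))))
                    for xi in x]
            # Greedy replacement: keep whichever of old/new is better.
            new_pop.append(cand if objective(cand) < objective(x) else x)
        pop = new_pop
        cur = min(pop, key=objective)
        if objective(cur) < objective(best):
            best = cur
    return best, objective(best)

# Example: minimize the sphere function in 5 dimensions.
sphere = lambda x: sum(v * v for v in x)
best, fbest = minimize(sphere, dim=5, bounds=(-10.0, 10.0))
```

Any concrete P-metaheuristic differs mainly in how the new candidates are generated from the current population.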

Based on their inspiration, P-metaheuristics can be categorized into four main groups [18], [19] (see Fig. 1): Evolutionary Algorithms (EAs), physics-based, human-based, and Swarm Intelligence (SI) algorithms. EAs mimic biological evolutionary behaviors such as recombination, mutation, and selection. The most popular EA is the GA, which mimics the Darwinian theory of evolution [20]. Other popular examples of EAs are Differential Evolution (DE) [21], Genetic Programming (GP) [20], and Biogeography-Based Optimizer (BBO) [22]. Physics-based algorithms are inspired by physical laws. Some examples of these algorithms are Big-Bang Big-Crunch (BBBC) [23], Central Force Optimization (CFO) [24], and Gravitational Search Algorithm (GSA) [25]. Salcedo-Sanz [26] has reviewed several physics-based optimizers in depth. The third category of P-metaheuristics comprises algorithms that mimic human behaviors. Some examples of human-based algorithms are Tabu Search (TS) [27], Socio Evolution and Learning Optimization (SELO) [28], and Teaching Learning Based Optimization (TLBO) [29]. As the last class of P-metaheuristics, SI algorithms mimic the social behaviors (e.g. decentralized, self-organized systems) of organisms living in swarms, flocks, or herds [30], [31]. For instance, the flocking behavior of birds is the main inspiration of Particle Swarm Optimization (PSO), proposed by Eberhart and Kennedy [32]. In PSO, each particle in the swarm represents a candidate solution to the optimization problem, and in the optimization process each particle is updated with regard to the position of the global best particle and its own (local) best position. Ant Colony Optimization (ACO) [33], Cuckoo Search (CS) [34], and Artificial Bee Colony (ABC) are other examples of SI techniques.
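The PSO update rule mentioned above (each particle pulled toward its own best-known position and the global best) can be sketched as follows; the inertia and acceleration coefficients used here are common textbook values, chosen for illustration:

```python
import random

def pso_minimize(objective, dim, bounds, swarm=20, iterations=100,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Textbook PSO: each particle's velocity combines inertia, a cognitive
    pull toward its personal best, and a social pull toward the swarm's
    global best."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iterations):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + c2 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Example: minimize the sphere function in 3 dimensions.
sphere = lambda x: sum(v * v for v in x)
gbest, gbest_f = pso_minimize(sphere, dim=3, bounds=(-5.0, 5.0))
```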

Regardless of their variety, these algorithms share a common feature: the search proceeds in two phases, exploration (diversification) and exploitation (intensification) [26]. In the exploration phase, the algorithm should exercise its randomized operators as much as possible to deeply explore various regions of the feature space. Hence, the exploratory behavior of a well-designed optimizer should be random enough to efficiently allocate randomly generated solutions to different areas of the problem topography during the early steps of the search [35]. The exploitation stage is normally performed after the exploration phase. In this phase, the optimizer focuses on the neighborhood of better-quality solutions within the feature space: it intensifies the search in a local region instead of across all regions of the landscape. A well-organized optimizer should be capable of striking a fine balance between the exploration and exploitation tendencies; otherwise, the risks of becoming trapped in local optima (LO) and of immature convergence increase.
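A minimal way to encode this two-phase transition is a diversification weight that decays with the iteration count; the linear decay below is just one illustrative choice, and real optimizers use a variety of schedules:

```python
def phase(t, T):
    """Classify iteration t of a T-iteration run by a decaying
    diversification weight: near 1 early (favor randomized global moves),
    near 0 late (favor local refinement around high-quality solutions)."""
    explore_weight = 1.0 - t / T
    return "exploration" if explore_weight > 0.5 else "exploitation"
```

Under this sketch, the first half of the budget is spent diversifying and the second half intensifying; stochastic schedules blur that boundary rather than switching at a fixed point.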

Recent years have witnessed growing interest in the successful, inexpensive, and efficient application of EAs and SI algorithms. However, according to the No Free Lunch (NFL) theorem [36], all optimization algorithms proposed so far show equivalent performance on average when applied across all possible optimization tasks. Hence, no algorithm can theoretically be considered a general-purpose, universally best optimizer; instead, the NFL theorem encourages the development of more efficient optimizers. As a result, besides the widespread studies on the efficacy and performance of traditional EAs and SI algorithms, new optimizers with specific global and local search strategies have been emerging in recent years to provide a greater variety of choices for researchers and practitioners in different fields.

In this paper, a new nature-inspired optimization technique is proposed to compete with other optimizers. The main idea behind the proposed optimizer is inspired by the cooperative behavior of one of the most intelligent birds, the Harris’ hawk, in hunting escaping prey (rabbits in most cases) [37]. For this purpose, a new mathematical model is developed in this paper, and a stochastic metaheuristic is then designed based on this model to tackle various optimization problems. It should be noted that the name Harris’s hawk and a similar inspiration were used in [38], in which the authors modified the mathematical model of the Grey Wolf Optimizer (MOGWO and GWO) and used it to solve multi-objective optimization problems; no new mathematical equations were introduced there, and that algorithm was developed entirely on the basis of MOGWO and GWO. In this work, by contrast, we propose new mathematical models that mimic all the stages of the hunts used by Harris’s hawks and solve single-objective optimization problems efficiently.

The rest of this paper is organized as follows. Section 2 presents the background and inspiration, with information about the cooperative life of Harris’ hawks. Section 3 presents the mathematical model and computational procedure of the HHO algorithm. The results of HHO on different benchmark and real-world case studies are presented in Section 4. Section 5 discusses the results. Finally, Section 6 concludes the work with some useful perspectives.

Section snippets

Background

In 1997, Louis Lefebvre proposed an approach to measure avian “IQ” based on observed innovations in feeding behaviors [39]. Based on his studies [39], [40], [41], [42], hawks can be listed amongst the most intelligent birds in nature. The Harris’ hawk (Parabuteo unicinctus) is a well-known bird of prey that lives in fairly stable groups found in the southern half of Arizona, USA [37]. Harmonized foraging involving several animals for catching and then sharing the slain animal has …

Harris hawks optimization (HHO)

In this section, we model the exploratory and exploitative phases of the proposed HHO, inspired by the prey exploration, surprise pounce, and different attacking strategies of Harris’ hawks. HHO is a population-based, gradient-free optimization technique; hence, it can be applied to any optimization problem subject to a proper formulation. Fig. 3 shows all phases of HHO, which are described in the next subsections.
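As a minimal sketch of this phase structure, the top-level control flow can be written as follows. It assumes the paper's escaping-energy parameter E = 2*E0*(1 - t/T), with E0 drawn from (-1, 1) each iteration; the actual position-update equations of each phase are the subject of the full model and are not reproduced here:

```python
import random

rng = random.Random(42)

def escaping_energy(t, T):
    """Escaping energy of the prey: E = 2*E0*(1 - t/T), with E0 drawn
    uniformly from (-1, 1) each iteration, so |E| decays on average over
    the run, shifting the search from exploration to exploitation."""
    e0 = 2.0 * rng.random() - 1.0
    return 2.0 * e0 * (1.0 - t / T)

def select_phase(t, T):
    """Top-level control flow: explore while |E| >= 1; otherwise pick one
    of four besiege (exploitation) strategies based on the prey's escape
    chance r and the remaining energy |E|."""
    e = abs(escaping_energy(t, T))
    if e >= 1.0:
        return "exploration"
    r = rng.random()  # chance of a successful escape before the pounce
    if r >= 0.5:
        return "soft besiege" if e >= 0.5 else "hard besiege"
    return ("soft besiege with progressive rapid dives" if e >= 0.5
            else "hard besiege with progressive rapid dives")
```

Early in the run, |E| frequently exceeds 1 and the hawks explore; late in the run, |E| shrinks and one of the four besiege strategies is always selected.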

Benchmark set and compared algorithms

In order to investigate the efficacy of the proposed HHO optimizer, a well-studied set of diverse benchmark functions is selected from the literature [50], [51]. This benchmark set covers three main groups of benchmark landscapes: unimodal (UM), multimodal (MM), and composition (CM). The UM functions (F1–F7), with a unique global best, can reveal the exploitative (intensification) capacities of different optimizers, while the MM functions (F8–F23) can disclose the exploration (diversification) and LO …
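To illustrate the distinction between the two main landscape classes, here are two classic members of such suites, assuming their standard textbook forms (the exact F1–F23 definitions used in the experiments are those of the cited benchmark literature):

```python
import math

def sphere(x):
    """Classic unimodal benchmark: a single global optimum at the origin,
    so progress on it mostly reflects exploitative (intensification)
    capacity."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Classic multimodal benchmark: a regular grid of local optima
    surrounding the global optimum at the origin, so it probes exploration
    and the ability to avoid local-optima stagnation."""
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)
```

Both have their global minimum of 0 at the origin, but a purely exploitative optimizer started near a non-global valley of the Rastrigin landscape will stall at a local optimum, which is exactly what the MM group is designed to expose.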

Discussion on results

As per the results in the previous sections, we can recognize that HHO shows significantly superior results on the multi-dimensional F1–F13 problems and the F14–F29 test cases compared to other well-established optimizers such as the GA, PSO, BBO, DE, CS, GWO, MFO, FPA, TLBO, BA, and FA methods. While the efficacy of methods such as PSO, DE, MFO, and GA degrades significantly with increasing dimensionality, the scalability results in Fig. 12 and Table 2 show that HHO is able to maintain a good equilibrium …

Conclusion and future directions

In this work, a novel population-based optimization algorithm called HHO is proposed to tackle different optimization tasks. The proposed HHO is inspired by the cooperative behaviors and chasing styles of predatory birds, Harris’ hawks, in nature. Several equations are designed to simulate the social intelligence of Harris’ hawks in solving optimization problems. Twenty-nine unconstrained benchmark problems were used to evaluate the performance of HHO. Exploitative, exploratory, and local optima …

Acknowledgments

This research is funded by the Zhejiang Provincial Natural Science Foundation of China (LY17F020012) and the Science and Technology Plan Project of Wenzhou, China (ZG2017019).

We also acknowledge the comments of anonymous reviewers.


References (95)

  • Mirjalili, S., et al., The whale optimization algorithm, Adv. Eng. Softw. (2016)
  • Faris, H., et al., An efficient binary salp swarm algorithm with crossover scheme for feature selection problems, Knowl.-Based Syst. (2018)
  • Erol, O.K., et al., A new optimization method: Big Bang–Big Crunch, Adv. Eng. Softw. (2006)
  • Rashedi, E., et al., GSA: a gravitational search algorithm, Inf. Sci. (2009)
  • Salcedo-Sanz, S., Modern meta-heuristics based on nonlinear physics processes: a review of models and design procedures, Phys. Rep. (2016)
  • Kumar, M., et al., Socio evolution & learning optimization algorithm: a socio-inspired optimization methodology, Future Gener. Comput. Syst. (2018)
  • Rao, R.V., et al., Teaching–learning-based optimization: an optimization method for continuous non-linear large scale problems, Inform. Sci. (2012)
  • Lefebvre, L., et al., Feeding innovations and forebrain size in birds, Anim. Behav. (1997)
  • Gautestad, A.O., et al., Complex animal distribution and abundance from memory-dependent kinetics, Ecol. Complex. (2006)
  • Shlesinger, M.F., Lévy flights: variations on a theme, Physica D (1989)
  • Viswanathan, G., et al., Lévy flights in random searches, Physica A (2000)
  • Gandomi, A.H., et al., Mixed variable structural optimization using firefly algorithm, Comput. Struct. (2011)
  • Mirjalili, S., et al., Grey wolf optimizer, Adv. Eng. Softw. (2014)
  • Mirjalili, S., Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm, Knowl.-Based Syst. (2015)
  • Derrac, J., et al., A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms, Swarm Evol. Comput. (2011)
  • Van Den Bergh, F., et al., A study of particle swarm optimization particle trajectories, Inf. Sci. (2006)
  • Eskandar, H., et al., Water cycle algorithm – a novel metaheuristic optimization method for solving constrained engineering optimization problems, Comput. Struct. (2012)
  • Saremi, S., et al., Grasshopper optimisation algorithm: theory and application, Adv. Eng. Softw. (2017)
  • Zhang, M., et al., Differential evolution with dynamic stochastic selection for constrained optimization, Inform. Sci. (2008)
  • Liu, H., et al., Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization, Appl. Soft Comput. (2010)
  • Sadollah, A., et al., Mine blast algorithm: a new population based algorithm for solving constrained engineering optimization problems, Appl. Soft Comput. (2013)
  • Kaveh, A., et al., A novel meta-heuristic optimization algorithm: thermal exchange optimization, Adv. Eng. Softw. (2017)
  • Salimi, H., Stochastic fractal search: a powerful metaheuristic algorithm, Knowl.-Based Syst. (2015)
  • He, Q., et al., An effective co-evolutionary particle swarm optimization for constrained engineering design problems, Eng. Appl. Artif. Intell. (2007)
  • Kaveh, A., et al., Water evaporation optimization: a novel physically inspired optimization algorithm, Comput. Struct. (2016)
  • Gong, W., et al., Engineering optimization by means of an improved constrained differential evolution, Comput. Methods Appl. Mech. Engrg. (2014)
  • He, Q., et al., A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization, Appl. Math. Comput. (2007)
  • Montemurro, M., et al., The automatic dynamic penalisation method (ADP) for handling constraints with genetic algorithms, Comput. Methods Appl. Mech. Engrg. (2013)
  • Lee, K.S., et al., A new structural optimization method based on the harmony search algorithm, Comput. Struct. (2004)
  • Rao, R.V., et al., Teaching–learning-based optimization: a novel method for constrained mechanical design optimization problems, Comput. Aided Des. (2011)
  • Savsani, P., et al., Passing vehicle search (PVS): a novel metaheuristic algorithm, Appl. Math. Model. (2016)
  • Gupta, S., et al., Multi-objective design optimisation of rolling bearings using genetic algorithms, Mech. Mach. Theory (2007)
  • Nocedal, J., et al., Numerical Optimization (2006)
  • Dréo, J., et al., Metaheuristics for Hard Optimization: Methods and Case Studies (2006)
  • Talbi, E.-G., Metaheuristics: From Design to Implementation, vol. 74 (2009)
  • Kirkpatrick, S., et al., Optimization by simulated annealing, Science (1983)
  • Holland, J.H., Genetic algorithms, Sci. Am. (1992)

    Ali Asghar Heidari is now a Ph.D. research intern at the Department of Computer Science, School of Computing, National University of Singapore (NUS), Singapore. He is also a Ph.D. candidate at the University of Tehran, awarded and funded by Iran’s National Elites Foundation (INEF). His main research interests are advanced machine learning, evolutionary computation, metaheuristics, prediction, information systems, and spatial modeling. He has published more than 14 papers in international journals such as Information Fusion, Energy Conversion and Management, Applied Soft Computing, and Knowledge-Based Systems.

    Seyedali Mirjalili is a lecturer at Griffith University and is internationally recognized for his advances in Swarm Intelligence (SI) and optimization, including the first set of SI techniques from a synthetic intelligence standpoint (a radical departure from how natural systems are typically understood) and a systematic design framework to reliably benchmark, evaluate, and propose computationally cheap, robust optimization algorithms. Dr Mirjalili has published over 80 journal articles, many in high-impact journals, with over 7000 citations in total, an H-index of 29, and a G-index of 84. According to Google Scholar metrics, he is globally the 3rd most cited researcher in Engineering Optimization and Robust Optimization. He serves as an associate editor of Advances in Engineering Software and the journal Algorithms.

    Hossam Faris is an Associate Professor at the Business Information Technology Department, King Abdullah II School for Information Technology, The University of Jordan (Jordan). He received his B.A. and M.Sc. degrees (with excellent rates) in Computer Science from Yarmouk University and Al-Balqa‘ Applied University in 2004 and 2008, respectively, in Jordan. He was then awarded a full-time competition-based Ph.D. scholarship from the Italian Ministry of Education and Research to pursue his Ph.D. degree in e-Business at the University of Salento, Italy, where he obtained his Ph.D. degree in 2011. In 2016, he worked as a postdoctoral researcher with the GeNeura team at the Information and Communication Technologies Research Center (CITIC), University of Granada (Spain). Since 2017, he has been leading, with Dr. Ibrahim Aljarah, the research group of Evolutionary Algorithms and Machine Learning (Evo-ml). His research interests include applied computational intelligence, evolutionary computation, knowledge systems, data mining, the Semantic Web, and ontologies.

    Ibrahim Aljarah is an associate professor of Big Data Mining and Computational Intelligence at the University of Jordan, Department of Information Technology, Jordan. He obtained his bachelor’s degree in Computer Science from Yarmouk University, Jordan, in 2003, his master’s degree in Computer Science and Information Systems from the Jordan University of Science and Technology, Jordan, in 2006, and his Ph.D. in Computer Science from North Dakota State University (NDSU), USA, in May 2014. Since 2017, he has been leading, with Dr. Hossam Faris, the research group of Evolutionary Algorithms and Machine Learning (Evo-ml). He has organized and participated in many conferences in the fields of data mining, machine learning, and big data, such as NTIT, CSIT, IEEE NABIC, CASON, and BIGDATA Congress. Furthermore, he has contributed to many projects in the USA, such as the Vehicle Class Detection System (VCDS), Pavement Analysis Via Vehicle Electronic Telemetry (PAVVET), and Farm Cloud Storage System (CSS) projects. He has published more than 50 papers in refereed international conferences and journals. His research focuses on data mining, machine learning, big data, MapReduce, Hadoop, swarm intelligence, evolutionary computation, Social Network Analysis (SNA), and large-scale distributed algorithms.

    Majdi Mafarja received his B.Sc. in Software Engineering and M.Sc. in Computer Information Systems from Philadelphia University and the Arab Academy for Banking and Financial Sciences, Jordan, in 2005 and 2007, respectively. Dr. Mafarja received his Ph.D. in Computer Science from the National University of Malaysia (UKM), where he was a member of the Data Mining and Optimization Research Group (DMO). He is now an assistant professor at the Department of Computer Science at Birzeit University. His research interests include evolutionary computation, metaheuristics, and data mining.

    Huiling Chen is currently an associate professor in the Department of Computer Science at Wenzhou University, China. He received his Ph.D. degree from the Department of Computer Science and Technology at Jilin University, China. His present research interests center on evolutionary computation, machine learning, and data mining, as well as their applications to medical diagnosis and bankruptcy prediction. He has published more than 100 papers in international journals and conference proceedings, including Pattern Recognition, Expert Systems with Applications, Knowledge-Based Systems, Soft Computing, Neurocomputing, Applied Mathematical Modeling, IEEE Access, and PAKDD, among others.
