Abstract

Nature-inspired algorithms are becoming popular among researchers due to their simplicity and flexibility. Nature-inspired metaheuristic algorithms are analysed in terms of key features such as their diversity and adaptation, exploration and exploitation, and attraction and diffusion mechanisms. The success of and challenges concerning these algorithms rest on their parameter tuning and parameter control. The Grey Wolf Optimizer (GWO), a comparatively new algorithm motivated by the social hierarchy and hunting behavior of grey wolves, has been very successful in solving real mechanical and optical engineering problems. In the original GWO, half of the iterations are devoted to exploration and the other half to exploitation, overlooking the impact of the right balance between the two on the accuracy of the approximated global optimum. To overcome this shortcoming, a modified GWO (mGWO) is proposed, which adjusts the balance between exploration and exploitation so as to achieve optimal performance of the algorithm. Simulations on benchmark problems and on a WSN clustering problem demonstrate the effectiveness, efficiency, and stability of mGWO compared with the basic GWO and some well-known algorithms.

1. Introduction

Metaheuristic algorithms are powerful methods for solving many real-world engineering problems. The majority of these algorithms have been derived from the survival-of-the-fittest principle of evolutionary algorithms, the collective intelligence of swarm particles, the behavior of biologically inspired systems, and/or the logic of physical processes in nature.

Evolutionary algorithms mimic the evolutionary processes in nature and are based on the survival of the fittest candidate for a given environment. These algorithms begin with a population (a set of solutions) which tries to survive in an environment (defined by a fitness evaluation). The parent population passes its adaptations to the environment on to the children through various mechanisms of evolution, such as genetic crossover and mutation. The process continues over a number of generations (an iterative process) until the solutions found are the most suitable for the environment. Some of the evolutionary algorithms are Genetic Algorithm (GA) [1], Evolution Strategies (ES) [2], Genetic Programming (GP) [3], Differential Evolution (DE) [4], and Biogeography-Based Optimization (BBO) [59].

The physical algorithms are inspired by physical processes, such as the heating and cooling of materials (Simulated Annealing [10]), units of cultural information treated as intermediate between genetic and cultural evolution (Memetic Algorithm [11]), the harmony of music played by musicians (Harmony Search [12, 13]), and the cultural behavior of frogs (Shuffled Frog-Leaping Algorithm [14]); other examples are the Gravitational Search Algorithm [15], Multi-Verse Optimizer (MVO) [16], and Chemical Reaction Optimization (CRO) [17].

Swarm intelligence is the group of natural metaheuristics inspired by the "collective intelligence" of swarms. The collective intelligence is built up through a population of homogeneous agents interacting with each other and with their environment. Examples of such intelligence are found among colonies of ants, flocks of birds, schools of fish, and so forth. Particle Swarm Optimization [18] is based on the swarm behavior of birds; the firefly algorithm [19] is formulated from the flashing behavior of fireflies; Bat Algorithm (BA) [20] is based on the echolocation behavior of bats; Ant Colony Optimization (ACO) [21, 22] is inspired by the pheromone trail laying behavior of real ant colonies; and the Cuckoo Search (CS) Algorithm [23] is inspired by the lifestyle of cuckoo birds. The major algorithms include ACO [21, 22], PSO [18], Artificial Bee Colony (ABC) Algorithm [24], Fish Swarm Algorithm (FSA) [25], Glowworm Swarm Optimization (GSO) [26], Grey Wolf Optimizer (GWO) [27], Fruit Fly Optimization Algorithm (FFOA) [28], BA [20], Novel Bat Algorithm (NBA) [29], Dragonfly Algorithm (DA) [30], Cat Swarm Optimization (CSO) [31], CS [23], Cuckoo Optimization Algorithm (COA) [32], and Spider Monkey Optimization (SMO) Algorithm [33].

The biologically inspired algorithms comprise natural metaheuristics derived from living phenomena and behavior of biological organisms. The intelligence derived with bioinspired algorithms is decentralized, distributed, self-organizing, and adaptive in nature under uncertain environments. The major algorithms in this field include Artificial Immune Systems (AIS) [34], Bacterial Foraging Optimization (BFO) [35], and Krill Herd Algorithm [36].

Because of their inherent advantages, such algorithms can be applied to various applications including power systems operations and control, job scheduling problems, clustering and routing problems, batch process scheduling, image processing, and pattern recognition problems.

GWO is a recently developed heuristic inspired by the leadership hierarchy and hunting mechanism of grey wolves in nature and has been successfully applied to solving economic dispatch problems [37], feature subset selection [38], optimal design of double-layer grids [39], time series forecasting [40], the flow shop scheduling problem [41], the optimal power flow problem [42], and optimizing key values in cryptography algorithms [43]. A number of variants have also been proposed to improve the performance of the basic GWO, including binary GWO [44], a hybrid version of GWO with PSO [45], an integration of DE with GWO [46], and parallelized GWO [47, 48].

Every optimization algorithm stated above needs to address the exploration and exploitation of the search space. To be successful, an optimization algorithm needs to establish a good ratio between exploration and exploitation. In this paper, a modified GWO (mGWO) is proposed to balance the exploration-exploitation trade-off of the original GWO algorithm. Functions with diverse slopes are employed to tune the parameters of the GWO algorithm, yielding different exploration and exploitation combinations over the course of iterations. Increasing exploration relative to exploitation increases the convergence speed and helps avoid trapping in local minima.

The rest of the paper is organized as follows. Section 2 gives the overview of original GWO. The proposed mGWO algorithm is explained in Section 3. The experimental results are demonstrated in Section 4. Section 5 solves the clustering problem in WSN for cluster head selection to demonstrate the applicability of the proposed algorithm. Finally, Section 6 concludes the paper.

2. Overview of Grey Wolf Optimizer Algorithm

Grey Wolf Optimizer (GWO) is a typical swarm-intelligence algorithm inspired by the leadership hierarchy and hunting mechanism of grey wolves in nature. Grey wolves are considered apex predators and live in groups of 5–12 on average. In the hierarchy of GWO, alpha (α) is considered the most dominant member of the group. The subordinates of α are beta (β) and delta (δ), which help to control the majority of wolves in the hierarchy, who are considered as omega (ω). The ω wolves have the lowest ranking in the hierarchy.

The mathematical model of the hunting mechanism of grey wolves consists of the following:
(i) Tracking, chasing, and approaching the prey.
(ii) Pursuing, encircling, and harassing the prey until it stops moving.
(iii) Attacking the prey.

2.1. Encircling Prey

Grey wolves encircle the prey during the hunt, which can be mathematically modeled as [27]

D = |C · X_p(t) − X(t)|,   X(t + 1) = X_p(t) − A · D,   (1)

where t indicates the current iteration, A and C are coefficient vectors, X_p is the position vector of the prey, and X indicates the position vector of a grey wolf.

The vectors A and C are calculated as follows:

A = 2a · r_1 − a,   C = 2 · r_2,   (2)

where the components of a are linearly decreased from 2 to 0 over the course of iterations and r_1 and r_2 are random vectors in [0, 1].
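As a concrete illustration, the encircling move and the coefficient vectors just described can be sketched in Python (a minimal sketch of our own; the function names and the NumPy representation are not from the paper):

```python
import numpy as np

def coefficients(a, dim, rng):
    """Coefficient vectors of GWO: A = 2*a*r1 - a and C = 2*r2,
    with r1, r2 drawn uniformly from [0, 1]^dim."""
    r1, r2 = rng.random(dim), rng.random(dim)
    return 2.0 * a * r1 - a, 2.0 * r2

def encircle(X, X_prey, a, rng):
    """One encircling move: D = |C * X_prey - X|, X_next = X_prey - A * D."""
    A, C = coefficients(a, X.size, rng)
    D = np.abs(C * X_prey - X)
    return X_prey - A * D
```

Note that A ranges over [−a, a] and C over [0, 2], so as a shrinks the wolf's next position contracts toward the prey.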

2.2. Hunting

Hunting of prey is usually guided by α and β, and δ participates occasionally. The best candidate solutions, that is, α, β, and δ, have better knowledge about the potential location of prey. The other search agents (ω) update their positions according to the positions of the three best search agents. The following formulas are proposed in this regard:

D_α = |C_1 · X_α − X|,   D_β = |C_2 · X_β − X|,   D_δ = |C_3 · X_δ − X|,   (3)

X_1 = X_α − A_1 · D_α,   X_2 = X_β − A_2 · D_β,   X_3 = X_δ − A_3 · D_δ,   (4)

X(t + 1) = (X_1 + X_2 + X_3) / 3.   (5)
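The hunting update just described can be sketched as a single function (our illustration, with hypothetical names; each leader contributes one candidate position and the wolf moves to their mean):

```python
import numpy as np

def hunting_step(X, leaders, a, rng):
    """Move one omega wolf X toward the three leaders (alpha, beta, delta):
    for each leader, D = |C * X_lead - X| gives a candidate
    X_k = X_lead - A * D; the new position is the mean of the candidates."""
    candidates = []
    for X_lead in leaders:
        r1, r2 = rng.random(X.size), rng.random(X.size)
        A = 2.0 * a * r1 - a
        C = 2.0 * r2
        D = np.abs(C * X_lead - X)
        candidates.append(X_lead - A * D)
    return np.mean(candidates, axis=0)
```

When a reaches 0 the update collapses to the centroid of the three leaders, which is the pure exploitation limit.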

2.3. Attacking Prey

In order to mathematically model approaching the prey, we decrease the value of a. The fluctuation range of A also decreases with a: A is a random value in the interval [−a, a], where a is decreased linearly from 2 to 0 over the course of iterations. When random values of A are in [−1, 1], the next position of a search agent can be any position between its current position and the position of the prey. The condition |A| < 1 forces the wolves to attack the prey.

After the attack, the wolves again search for the prey in the next iteration, wherein they again determine the best solutions among all wolves. This process repeats until the termination criterion is fulfilled.

3. Modified GWO Algorithm

Finding the global minimum is a common, challenging task among all minimization methods. In population-based optimization methods, the desirable way to converge towards the global minimum can generally be divided into two basic phases. In the early stages of the optimization, the individuals should be encouraged to scatter throughout the entire search space; in other words, they should try to explore the whole search space instead of clustering around local minima. In the later stages, the individuals have to exploit the information gathered in order to converge on the global minimum. In GWO, with fine adjustment of the parameters a and A, we can balance these two phases in order to find the global minimum with fast convergence speed.

Although different improvements of individual-based algorithms promote local optima avoidance, the literature shows that population-based algorithms handle this issue better. Regardless of the differences between population-based algorithms, the common approach is the division of the optimization process into two conflicting milestones: exploration versus exploitation. Exploration encourages candidate solutions to change abruptly and stochastically; this mechanism improves the diversity of the solutions and causes high exploration of the search space. In contrast, exploitation aims at improving the quality of solutions by searching locally around the promising solutions obtained during exploration. In this milestone, candidate solutions are obliged to change less suddenly and to search locally.

Exploration and exploitation are two conflicting milestones where promoting one results in degrading the other. A right balance between these two milestones can guarantee a very accurate approximation of the global optimum using population-based algorithms. On the one hand, mere exploration of the search space prevents an algorithm from finding an accurate approximation of the global optimum. On the other hand, mere exploitation results in local optima stagnation and again low quality of the approximated optimum.

In GWO, the transition between exploration and exploitation is generated by the adaptive values of a and A. In this scheme, half of the iterations are devoted to exploration (|A| ≥ 1) and the other half are used for exploitation (|A| < 1), as shown in Figure 1(a). Generally, higher exploration of the search space results in a lower probability of stagnation in local optima. There are various possibilities for enhancing the exploration rate, as shown in Figure 1(b), in which exponential functions are used instead of a linear function to decrease the value of a over the course of iterations. Too much exploration amounts to too much randomness and will probably not give good optimization results, whereas too much exploitation leaves too little randomness. Therefore, there must be a balance between exploration and exploitation.

In GWO, the value of a decreases linearly from 2 to 0 using the following update equation:

a = 2 (1 − t/T),   (6)

where T indicates the maximum number of iterations and t is the current iteration. Our mGWO instead employs an exponential decay of a over the course of iterations:

a = 2 (1 − t^2/T^2),   (7)

as shown in Figure 1(c). Using this exponential decay function, the numbers of iterations used for exploration and exploitation are about 70% and 30%, respectively.
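The two schedules can be compared directly. As a sketch under our assumptions, the quadratic-in-t decay a = 2(1 − t^2/T^2) is used for the exponential schedule; it is consistent with the quoted 70%/30% split, since a stays above 1 (the regime where |A| can exceed 1, i.e. exploration) for the first t < T/√2 ≈ 0.707T iterations:

```python
def a_linear(t, T):
    """Original GWO schedule: a decays linearly from 2 to 0."""
    return 2.0 * (1.0 - t / T)

def a_exponential(t, T):
    """Assumed mGWO schedule: quadratic-in-t decay, a = 2*(1 - (t/T)^2)."""
    return 2.0 * (1.0 - (t / T) ** 2)

def exploration_fraction(schedule, T):
    """Fraction of the T iterations spent in the exploratory regime (a > 1),
    i.e. where |A| = |2*a*r - a| can exceed 1."""
    return sum(1 for t in range(T) if schedule(t, T) > 1.0) / T
```

For T = 1000 this yields 0.5 for the linear schedule and about 0.708 for the quadratic one, matching the 50/50 and roughly 70/30 splits discussed above.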

The pseudocode of mGWO is given in Algorithm 1.

Initialize the grey wolf population X_i (i = 1, 2, ..., n)
Initialize a, A, and C
Calculate the fitness of each search agent
X_α = the best (or dominating) search agent
X_β = the second best search agent
X_δ = the third best search agent
while  (t < Maximum number of iterations)
    for  each search agent
       update the position of the current search agent by (5)
    end for
    update a by (7)
    update A and C by (2)
    calculate the fitness of all search agents
    update X_α, X_β, and X_δ
    t = t + 1
end while
Return X_α
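Algorithm 1 can be realized as a compact Python sketch (our illustration, not the authors' code). It uses the quadratic-in-t schedule a = 2(1 − (t/T)^2) assumed for the exponential decay of Section 3, and adds a small elitism step (ranking the previous leaders together with the population) so the best solution found is never lost; minimizing a sphere function serves as a smoke test:

```python
import numpy as np

def mgwo(fitness, dim, bounds, n_agents=30, max_iter=500, seed=0):
    """Minimize `fitness` with a GWO-style search following Algorithm 1.
    Uses the assumed decay a = 2*(1 - (t/T)^2) and keeps the previous
    leaders in the ranking (elitism) so the best solution is never lost."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_agents, dim))

    def rank_leaders(pop):
        idx = np.argsort([fitness(x) for x in pop])
        return pop[idx[0]].copy(), pop[idx[1]].copy(), pop[idx[2]].copy()

    alpha, beta, delta = rank_leaders(X)
    for t in range(max_iter):
        a = 2.0 * (1.0 - (t / max_iter) ** 2)  # exploration for ~70% of the run
        for i in range(n_agents):
            candidates = []
            for lead in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2.0 * a * r1 - a, 2.0 * r2
                D = np.abs(C * lead - X[i])
                candidates.append(lead - A * D)
            X[i] = np.clip(np.mean(candidates, axis=0), lo, hi)
        alpha, beta, delta = rank_leaders(np.vstack([X, alpha, beta, delta]))
    return alpha, fitness(alpha)
```

As a decays to 0, every wolf contracts toward the centroid of the three leaders, which reproduces the exploitation behavior described in Section 2.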

4. Results and Discussion

This section investigates the effectiveness of mGWO in practice. It is common in this field to benchmark the performance of algorithms on a set of mathematical functions with known global optima. We follow the same process and employ 27 benchmark functions for comparison. The test functions are divided into four groups: unimodal, multimodal, fixed-dimension multimodal, and composite benchmark functions. The unimodal functions are suitable for benchmarking the exploitation of algorithms since they have one global optimum and no local optima. On the contrary, multimodal functions have a large number of local optima and are helpful for examining the exploration and local optima avoidance of algorithms.

The mathematical formulation of the employed test functions is presented in Tables 1–4. We consider 30 variables for the unimodal and multimodal test functions to further increase their difficulty.

Since heuristic algorithms are stochastic optimization techniques, they have to be run many times to generate meaningful statistical results. It is a common strategy to run an algorithm on a problem repeatedly and to report the average/standard deviation/median of the best solution obtained in the last iteration as the metrics of performance. We follow the same method and report results over 30 independent runs. In order to verify the performance of the mGWO algorithm, the PSO, BA, CS, and GWO algorithms are chosen for comparison. Note that we utilized 30 search agents and 3000 iterations for each of the algorithms.

The convergence curves of the unimodal, multimodal, fixed-dimension multimodal, and composite benchmark functions for the competing optimization algorithms are given in Figures 2, 3, 4, and 5, respectively. As Table 5 shows, the mGWO algorithm provides the best results on 5 out of 7 unimodal benchmark test functions and very competitive results compared to CS on the remaining two. As discussed above, unimodal functions are suitable for benchmarking exploitation; therefore, these results evidence the high exploitation capability of the mGWO algorithm.

The statistical results of the algorithms on the multimodal test functions are presented in Table 6. It may be seen that the mGWO algorithm outperforms the other algorithms on most of these functions, the only exception being PSO on one of them. The results for the multimodal test functions strongly suggest that the high exploration of the mGWO algorithm is a suitable mechanism for avoiding local solutions. Since the multimodal functions have an exponential number of local solutions, the results show that the mGWO algorithm is able to explore the search space extensively and find promising regions of it. In addition, the high local optima avoidance of this algorithm is another finding that can be inferred from these results.

The rest of the results, which belong to the fixed-dimension multimodal and composite benchmark functions, are provided in Tables 7 and 8, respectively. The results are consistent with those of the other test functions, with mGWO showing very competitive results compared to the other algorithms.

5. Cluster Head Selection in WSN Using mGWO

The cluster head (CH) selection problem is a well-known problem in the field of wireless sensor networks (WSNs), in which the energy consumption cost of the network should be minimized [49–53]. In this paper, this problem is solved using the mGWO algorithm and compared with GA, PSO, BA, CS, and GWO.

The main challenges in designing and planning the operations of WSNs are to optimize energy consumption and prolong network lifetime. Cluster-based routing techniques, such as the well-known low-energy adaptive clustering hierarchy (LEACH) [50], are used to achieve scalable solutions and extend the network lifetime until the last node dies (LND). To prolong network lifetime in cluster-based routing, the lifetime of the CHs plays an important role: improper cluster formation may overload some CHs, causing high energy consumption at those CHs and degrading the overall performance of the WSN. Proper CH selection is therefore the most important issue in clustering sensor nodes.

Designing an energy-efficient clustering algorithm is not an easy task, so nature-inspired optimization algorithms may be applied to tackle the cluster-based routing problem in WSN. Evolutionary algorithms (EAs) have been used in recent years as metaheuristics to address energy-aware routing challenges by designing intelligent models that collaborate to optimize an appropriate energy-aware objective function [52]. GWO is one of the powerful heuristics that can be applied to efficient load-balanced clustering. In this paper, the mGWO based clustering algorithm is used to solve the abovementioned load balancing problem. The algorithm forms clusters in such a way that the overall energy consumption of the network is minimized. The total energy consumption in the network is the sum of the energy dissipated by the non-CH nodes to send information to their respective CHs and the energy consumed by the CH nodes to aggregate the information and send it to the base station (BS).

Consider a WSN of n sensor nodes randomly deployed in the sensing field and organized into m clusters C_1, C_2, ..., C_m. The fitness function for the energy consumption may be defined as

f = Σ_{j=1}^{m} Σ_{s_i ∈ C_j} E_TX(s_i, CH_j),   (8)

where m is the total number of CHs, s_i is a non-CH node associated with the jth CH, and E_TX(s_i, CH_j) is the energy dissipated for transmitting data from s_i to CH_j.

In order to calculate the radio energy costs of transmitting a k-bit message over a transmitter-receiver separation distance d, the transmission energy is given by

E_TX(k, d) = k · E_elec + k · ε_fs · d^2,  if d < d_0,
E_TX(k, d) = k · E_elec + k · ε_mp · d^4,  if d ≥ d_0.   (9)

The term E_elec denotes the per-bit energy dissipation during transmission. The per-bit amplification energy is proportional to d^4 when the transmission distance exceeds the threshold d_0 (called the crossover distance) and otherwise is proportional to d^2. The parameters ε_fs and ε_mp denote the transmitter amplification parameters for the free-space and multipath fading models, respectively. The value of d_0 is given by

d_0 = sqrt(ε_fs / ε_mp).   (10)

The reception energy of the k-bit data message can be expressed by

E_RX(k) = k · E_elec,   (11)

where E_elec denotes the per-bit energy dissipation during reception.

E_DA is the data aggregation energy expenditure; its value and those of the other radio parameters (E_elec, ε_fs, and ε_mp) are set as in [51].
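The radio model above can be sketched as follows. The numeric parameter values are not stated in this excerpt, so the constants below are the values commonly used with this first-order radio model and should be read as assumptions, not as the paper's settings:

```python
import math

# Radio model parameters -- the concrete values are NOT given in the text
# above; these are values commonly used with this first-order radio model
# and should be treated as placeholders.
E_ELEC = 50e-9       # per-bit electronics energy, J/bit (assumed)
EPS_FS = 10e-12      # free-space amplifier energy, J/bit/m^2 (assumed)
EPS_MP = 0.0013e-12  # multipath amplifier energy, J/bit/m^4 (assumed)
D0 = math.sqrt(EPS_FS / EPS_MP)  # crossover distance, eq. (10)

def e_tx(k, d):
    """Energy to transmit a k-bit message over distance d (eq. (9))."""
    if d < D0:
        return k * E_ELEC + k * EPS_FS * d ** 2
    return k * E_ELEC + k * EPS_MP * d ** 4

def e_rx(k):
    """Energy to receive a k-bit message (eq. (11))."""
    return k * E_ELEC

def cluster_cost(k, dists):
    """One cluster's share of the fitness: the energy for each non-CH node
    at distance d from its CH to send a k-bit message."""
    return sum(e_tx(k, d) for d in dists)
```

Note that the two branches of e_tx agree exactly at d = d_0, so the model is continuous at the crossover distance.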

For the simulation setup, 100 nodes are randomly deployed in a 100 m × 100 m sensing field, and the BS is placed at the center of the field. All nodes are homogeneous and start with the same initial energy. During this analysis, three parameters, namely, first node dead (FND), half nodes dead (HND), and last node dead (LND), are employed to outline the network lifetime.

Table 9 shows the best results obtained for the CH selection problem in WSN. The results show that the mGWO algorithm finds the best results among the compared algorithms, followed closely by the CS and GWO algorithms.

6. Conclusion

This paper proposed a modification of the Grey Wolf Optimizer, named mGWO, inspired by the hunting behavior of grey wolves in nature. An exponential decay function is used to balance exploration and exploitation over the course of iterations. The results showed that the proposed algorithm benefits from higher exploration in comparison with the standard GWO.

The paper also considered the clustering problem in WSN, in which CH selection, a challenging and NP-hard problem, is performed using the proposed mGWO algorithm. The results show that the proposed method is effective for real-world applications due to its fast convergence and its lower chance of getting stuck in local minima. It can be concluded that the proposed algorithm is able to outperform current well-known and powerful algorithms in the literature. The results demonstrate the competence of mGWO relative to existing metaheuristic algorithms and its potential as an effective tool for solving real-world optimization problems.

Competing Interests

The authors declare that they have no competing interests.