1 Introduction

The optimization domain has recently attracted growing attention from the artificial intelligence research community because of its effectiveness in dealing with complex optimization problems. Deterministic methods usually cannot tackle complex optimization problems efficiently within polynomial time [46]. Optimization problems can be categorized, based on the nature of the search-space variables, into continuous, discrete, binary, permutation, and structured problems [59]. The search space of an optimization problem can be convex or non-convex, uni-modal or multi-modal, and constrained or unconstrained. The complexity of an optimization problem depends on the ruggedness of its search space; therefore, efficient optimization methods such as metaheuristic algorithms are required to tackle it [59].

Metaheuristic algorithms are efficient optimization methods established to tackle a wide range of optimization problems using a generic optimization framework [168]. They are categorized, based on the number of initial solutions, into local search-based and population-based methods. Local search methods start with a single solution and, at each iteration, move to a neighbouring solution if it is better; the search stops when a locally optimal solution is reached [66]. The most popular local search-based methods are simulated annealing [101], tabu search [75], \(\beta\)-hill climbing [20], variable neighborhood search [116], the vortex search algorithm [52], and iterated local search [107].

On the other hand, population-based algorithms are initialized with a set of random solutions. These solutions are evolved iteratively using recombination, mutation, and selection operators until an optimal (or near-optimal) solution is obtained. The research community normally distinguishes several categories of population-based algorithms: evolutionary-based, physical-based, chemical-based, human-based, and swarm-based algorithms [26, 38, 118]. Evolutionary algorithms (EAs) utilize the natural-selection principle of survival of the fittest in their optimization process [120]. Several EAs have been established and adapted efficiently to many optimization problems, such as the genetic algorithm [85], evolutionary programming [170], genetic programming [102], probability-based incremental learning (PBIL) [35], differential evolution [144], and the biogeography-based optimizer [141].

Physical-based algorithms are mainly inspired by physical phenomena such as physical laws, light dispersion, and electromagnetic forces. Examples include the light spectrum optimizer [11], equilibrium optimizer [69], Henry gas solubility optimization algorithm [79], multi-verse optimizer [114], transient search optimization [127], electromagnetic field optimization [15], artificial electric field algorithm [157], and big bang–big crunch [65]. On the other hand, chemical interactions and chemical laws are the main drivers of chemical-based algorithms, such as chemical reaction optimization [104], the artificial chemical reaction optimization algorithm [24], the artificial chemical process [93], gases Brownian motion optimization [3], and the chemotherapy science algorithm [135].

Similarly, social and human behaviors are the main principles of human-based algorithms, such as Coronavirus herd immunity optimizer [21], past present future algorithm [121], chef-based optimization algorithm [149], poor and rich optimization [119], hybrid leader-based optimization [48], Political Optimizer [29], Ali Baba and the forty thieves [40], Ebola optimization search algorithm [124], Growth Optimizer [183], Good and Bad Groups-Based Optimizer [133], and Group teaching optimization algorithm [184].

Swarm-based algorithms stem from the behaviour of animals that live in swarms. These behaviours are modelled as optimization methods that concentrate on the leader–follower rule of the swarm structure to find food or hunt prey [71]. Since swarm intelligence algorithms rely on multi-solution concepts, the swarm of solutions is reconstructed at every iteration; exploration and exploitation are therefore driven by intelligent operators that modify each solution based on its leader and followers. The earliest swarm-based algorithms include the ant colony optimization algorithm [54], particle swarm optimization [99], the bat algorithm [169], krill herd optimization [72], and the crow search algorithm [30]. More recent swarm-based algorithms include the red deer algorithm [70], rat swarm optimizer [49], artificial hummingbird algorithm [186], sparrow search algorithm [156], snake optimizer [78], dwarf mongoose optimization algorithm [17], white shark optimizer [39], chimp optimization algorithm [100], horse herd optimization algorithm [110], grey wolf optimizer [113], moth-flame optimization algorithm [111], golden jackal optimization [47], and others reported in [53].

Quite recently, a new swarm-based optimization algorithm called the Marine Predators Algorithm (MPA) was proposed in 2020 by Faramarzi et al. [68]; it stems from the widespread foraging mechanisms of ocean predators based on Lévy and Brownian movements. MPA has attractive attributes compared with other optimization algorithms: it is derivative-free, has few parameters, and is easy to use, flexible, and simple. Therefore, MPA has quickly been adopted for a wide range of optimization problems, which are thoroughly summarized in this paper.

The main aim of this review paper is to analyze the growth of the MPA in terms of the number of published works, reputation, citations, and research topics in Sect. 2. After that, the theoretical background, including the inspiration, optimization procedure, and convergence behaviour, is thoroughly discussed in Sect. 3. Subsequently, the versions of MPA are reviewed in Sect. 4, and the main applications of the MPA are summarized in Sect. 5. The available source code and resources of MPA are given in Sect. 6, and the main research gaps as well as the limitations of the MPA are discussed in Sect. 7. Finally, the main features of MPA are concluded, and possible future directions are suggested in Sect. 8.

2 The Growth of Marine Predators Algorithm

This section presents a detailed analysis of the growth of the MPA over the period from 2020 to June 2022. The analysis covers the number of MPA studies published per year, the number of citations obtained per year, the top authors working on MPA, and the most important research centres or institutions applying MPA in their work, along with several other statistics about MPA publications.

To obtain these results, 148 published scientific works were retrieved from the Scopus database. The extracted records include articles, book chapters, reviews, and conference papers. Some of these works are written in different languages, so multiple filters were applied to obtain the relevant group of publications reviewed here.

The number of studies published on the application of an algorithm, especially in high-reputation journals, is one of the most important criteria for the success of that algorithm. Therefore, Fig. 1 shows the total number of MPA works published annually, revealing a growing interest in the implementation of the MPA, especially in the field of optimization problems. The number of articles started at 13 documents in 2020, rose to 67 papers in 2021, and reached around 30 papers from January to June 2022.

Fig. 1 No. publications per year

Figure 2 shows the top publishers of MPA works. Elsevier ranks first with 38 articles, IEEE second with 16 research papers, and Springer Nature third with 13 papers; the rest are distributed among the other publishers, as presented in Fig. 2.

Fig. 2 No. publications per publisher

The number of citations is another criterion that can be used to judge the quality of a work and the interest of researchers in that topic. Figure 3 shows the number of citations of the MPA over the period covered by the Scopus query. The number of citations increased significantly between 2020 and 2022, from 32 citations in 2020 to 176 by June 2022. This increase reflects the great interest of researchers in applying the MPA in their work.

Fig. 3 No. citations per year

Based on MPA citations per domain, computer science has 264 citations, engineering 223 citations, and mathematics 130 citations. Figure 4 shows the top MPA citations for each domain.

Fig. 4 Top MPA citations by domain

Regarding the top MPA authors, “Yousri, D.” has published 12 MPA papers, “Hasanien, H.M.” 9 papers, and “Elaziz, M.A.” and “Houssein, E.H.” 7 papers each; the rest of the top MPA authors are shown in Fig. 5.

Fig. 5 No. publications per author

Based on the problems that MPA solves, Fig. 6 presents the domains in which the MPA is most frequently applied. Engineering ranks first with more than 67 research papers, followed by computer science with 62 research papers and mathematics with 22 research papers.

Fig. 6 No. publications per domain

In terms of journals that published MPA works, IEEE Access obtained first place with 14 MPA papers, as shown in Fig. 7. Second place goes to the Alexandria Engineering Journal with five papers, and three journals share third place with four MPA works each: Ain Shams Engineering Journal, Energy Conversion and Management, and Expert Systems With Applications. The rest of the journal ranking is presented in Fig. 7.

Fig. 7 No. publications per journal

MPA works are also measured in terms of institutions, as presented in Fig. 8. The research team from Zagazig University, Egypt, has focused on MPA as a main part of its research work, publishing more than 22 articles. In second place is Minia University with 14 MPA articles, and in third place is Fayoum University with 13 MPA works. The rest of the institutions can be found in Fig. 8.

Fig. 8 No. publications per affiliation

Another measurement based on countries is also used to show the growth of the MPA, as shown in Fig. 9. Researchers from Egypt, China, and Saudi Arabia occupy the top three places in MPA works, with Egypt ranked first with 65 papers, followed by China and Saudi Arabia with 30 and 27 papers, respectively. The rest of the country ranking is presented in Fig. 9.

Fig. 9 No. publications per country

3 Fundamentals of Marine Predators Algorithm

In this section, the inspiration for MPA is discussed. Thereafter, the main procedural steps of MPA are presented to show its mathematical model. Finally, the convergence behaviour of MPA is analyzed.

3.1 Inspiration of Marine Predators Algorithm

Organisms in nature usually share territories through different types of relationships. Two species are in a neutralism relationship if they coexist in an ecosystem without any impact on each other (e.g., cows and sheep). The second type of relationship is mutualism, in which two species help each other; for example, oxpeckers eat the parasites (e.g., ticks and flies) living on the bodies of large mammals, so the mammals' parasites are kept under control while the oxpeckers are fed. In the third type, parasitism, one species is harmed by the other without being killed (e.g., by worms). Another interesting relationship is competition, in which two species compete for resources in an ecosystem without directly harming each other. Finally, predation is the relationship in which one species kills another.

The MPA mimics this last type of relationship, focusing on large marine predators (e.g., sharks and whales). In other words, the Lévy and Brownian movements of marine predators are used to develop an optimization algorithm capable of providing both exploratory and exploitative behaviours. The natural procedure used as the main source of inspiration is the foraging of marine predators, which use Lévy and Brownian movements to discover new habitats and find those with a high density of prey while navigating the difficulties of marine ecosystems (e.g., currents, waves, and human-related disturbances).

3.2 Procedure of Marine Predators Algorithm

MPA follows a framework similar to that of other metaheuristics. It employs a set of candidate solutions for a given optimization problem, and several mechanisms are used to improve this set from a random initial state to solutions approximately close to the global optimum of the problem at hand. There are three main phases in MPA, as follows [68]:

  • Phase 1: High velocity, in which a marine predator moves faster than the prey. The best course of action for the predator in this phase is to stay put and not change its location, whether the prey is moving in a Lévy or a Brownian pattern.

  • Phase 2: Same velocity, in which the marine predator and the prey move at the same speed. In this phase, when the prey moves in a Lévy pattern, the most effective strategy for the predator is to adopt Brownian motion.

  • Phase 3: Low velocity, in which a marine predator moves more slowly than the prey. In this phase, Lévy motion is the best strategy for the predator to adopt.

Figure 10 graphically illustrates the three optimization phases of MPA. In each phase, the following two matrices are updated:

Fig. 10 MPA’s optimization phases [68]

$$\begin{aligned} Prey = \begin{bmatrix} X_{1,1} & X_{1,2} & \ldots & X_{1,d} \\ X_{2,1} & X_{2,2} & \ldots & X_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ X_{n,1} & X_{n,2} & \ldots & X_{n,d} \end{bmatrix}, \end{aligned}$$
(1)
$$\begin{aligned} Elite = \begin{bmatrix} X_{1,1}^{I} & X_{1,2}^{I} & \ldots & X_{1,d}^{I} \\ X_{2,1}^{I} & X_{2,2}^{I} & \ldots & X_{2,d}^{I} \\ \vdots & \vdots & \ddots & \vdots \\ X_{n,1}^{I} & X_{n,2}^{I} & \ldots & X_{n,d}^{I} \end{bmatrix}, \end{aligned}$$
(2)

where n is the number of prey (i.e., the population size) and d is the number of decision variables of the problem.
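For concreteness, a minimal sketch (not the authors' reference implementation) of how these two matrices can be built in Python is given below, assuming minimization, box constraints [lb, ub], and illustrative function names:

```python
import numpy as np

def init_population(n, d, lb, ub, fitness):
    """Build the Prey matrix of Eq. (1) and the Elite matrix of Eq. (2)."""
    prey = lb + np.random.rand(n, d) * (ub - lb)       # n random prey within [lb, ub]
    scores = np.array([fitness(x) for x in prey])
    elite = np.tile(prey[np.argmin(scores)], (n, 1))   # fittest solution (top predator) repeated n times
    return prey, elite
```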

The equations used in Phase 1 are as follows:

$$\overrightarrow{Step_{i}} = \overrightarrow{R_{B}} \cdot (\overrightarrow{Elite_{i}} - \overrightarrow{R_{B} }\cdot \overrightarrow{Prey_{i}}),\quad i=1,2,\ldots ,n,$$
(3)

where \(\overrightarrow{R_{B}}\) is a vector of random values drawn from a normal distribution, representing Brownian motion,

$$\overrightarrow{Prey_{i}} = \overrightarrow{Prey_{i}} + P \cdot \overrightarrow{R} \cdot \overrightarrow{Step_{i}},$$
(4)

where \(P=0.5\) is a constant that can be tuned, and \(\overrightarrow{R}\) holds uniform random values between 0 and 1.
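A hedged sketch of this Phase 1 update, assuming NumPy arrays of shape (n, d) and the entry-wise products of Eqs. (3)–(4):

```python
import numpy as np

def phase1_update(prey, elite, P=0.5):
    """One Phase 1 step following Eqs. (3)-(4): the predator stays put while the prey moves."""
    n, d = prey.shape
    RB = np.random.randn(n, d)       # Brownian motion: standard normal samples
    R = np.random.rand(n, d)         # uniform random numbers in [0, 1]
    step = RB * (elite - RB * prey)  # Eq. (3), entry-wise products
    return prey + P * R * step       # Eq. (4)
```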

The equations used in Phase 2 are as follows:

$$\overrightarrow{Step_{i}} = \overrightarrow{R_{L}} \cdot (\overrightarrow{Elite_{i}} - \overrightarrow{R_{L} }\cdot \overrightarrow{Prey_{i}}),\quad i=1,2,\ldots ,n/2,$$
(5)

where \(\overrightarrow{R_{L}}\) is a vector of random values drawn from a Lévy distribution,

$$\overrightarrow{Prey_{i}} = \overrightarrow{Prey_{i}} + P \cdot \overrightarrow{R} \cdot \overrightarrow{Step_{i}},$$
(6)

where \(P=0.5\) is a constant that can be tuned, and \(\overrightarrow{R}\) holds uniform random values between 0 and 1.

$$\begin{aligned} \overrightarrow{Step_{i}} = \overrightarrow{R_{B}} \cdot (\overrightarrow{R_{B}} \cdot \overrightarrow{Elite_{i}} - \overrightarrow{Prey_{i}}),\quad i=n/2+1, n/2+2,\ldots ,n, \end{aligned}$$
(7)
$$\begin{aligned} \overrightarrow{Prey_{i}} = \overrightarrow{Prey_{i}} + P \cdot CF \cdot \overrightarrow{Step_{i}}, \end{aligned}$$
(8)

where \(P=0.5\) is a constant that can be tuned, and \(CF =\left( 1- \frac{t}{T} \right)^{2\frac{t}{T}}\) is an adaptive factor that controls the predator step size, with t the current iteration and T the maximum number of iterations.
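The two half-population updates of Phase 2 can be sketched as follows; the Lévy steps are drawn with Mantegna's algorithm, which is a common choice assumed here since the text does not fix the generator:

```python
import numpy as np
from math import gamma, pi, sin

def levy(n, d, beta=1.5):
    """Levy-distributed steps via Mantegna's algorithm (assumed generator)."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.randn(n, d) * sigma
    v = np.random.randn(n, d)
    return u / np.abs(v) ** (1 / beta)

def phase2_update(prey, elite, t, T, P=0.5):
    """Levy moves for the first half (Eqs. 5-6), CF-scaled Brownian moves for the second half (Eqs. 7-8)."""
    n, d = prey.shape
    half = n // 2
    CF = (1 - t / T) ** (2 * t / T)                    # adaptive step-size factor
    RL, RB, R = levy(n, d), np.random.randn(n, d), np.random.rand(n, d)
    prey = prey.copy()
    prey[:half] += P * R[:half] * (RL[:half] * (elite[:half] - RL[:half] * prey[:half]))
    prey[half:] += P * CF * (RB[half:] * (RB[half:] * elite[half:] - prey[half:]))
    return prey
```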

The equations used in Phase 3 are as follows:

$$\begin{aligned} \overrightarrow{Step_{i}} = \overrightarrow{R_{L}} \cdot (\overrightarrow{R_{L}} \cdot \overrightarrow{Elite_{i}} - \overrightarrow{Prey_{i}}),\quad i=1,2,\ldots ,n, \end{aligned}$$
(9)
$$\begin{aligned} \overrightarrow{Prey_{i}} = \overrightarrow{Prey_{i}} + P \cdot CF \cdot \overrightarrow{Step_{i}}, \end{aligned}$$
(10)

where \(P=0.5\) is a constant that can be tuned, and \(CF = (1- \frac{t}{T})^{2\frac{t}{T}}\) is the same adaptive step-size factor used in Phase 2.

The flowchart of MPA can be seen in Fig. 11. Each phase is performed during one-third of the total iterations, and at the end of each phase the population of prey is updated. This iterative process is terminated after hitting a pre-determined end condition, which is usually the maximum number of iterations.
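Putting the phases together, a simplified skeleton of this loop (using the helper functions sketched above) might look as follows; the FADs/eddy effect and memory saving steps of the original paper [68] are omitted for brevity:

```python
import numpy as np

def mpa(fitness, n, d, lb, ub, T):
    prey, elite = init_population(n, d, lb, ub, fitness)
    for t in range(T):
        if t < T / 3:                               # Phase 1: high velocity (Eqs. 3-4)
            prey = phase1_update(prey, elite)
        elif t < 2 * T / 3:                         # Phase 2: same velocity (Eqs. 5-8)
            prey = phase2_update(prey, elite, t, T)
        else:                                       # Phase 3: low velocity (Eqs. 9-10)
            CF = (1 - t / T) ** (2 * t / T)
            RL = levy(n, d)
            prey = prey + 0.5 * CF * (RL * (RL * elite - prey))
        prey = np.clip(prey, lb, ub)                # keep the prey inside the bounds
        scores = np.array([fitness(x) for x in prey])
        if scores.min() < fitness(elite[0]):        # update the top predator (Elite matrix)
            elite = np.tile(prey[np.argmin(scores)], (n, 1))
    return elite[0]
```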

Fig. 11 MPA flowchart (note that t indicates the current iteration and T is the maximum number of iterations)

3.3 The Convergence Behaviour of Marine Predators Algorithm

In this section, the effectiveness and robustness of the MPA are studied on 23 classical test functions and compared to eight powerful optimization algorithms: artificial bee colony (ABC) [98], Harris hawks optimization (HHO) [81], the flower pollination algorithm (FPA) [167], the moth-flame optimization algorithm (MFO) [111], the multi-verse optimizer (MVO) [114], the rat swarm optimizer (RSO) [49], the salp swarm algorithm (SSA) [115], and the whale optimization algorithm (WOA) [112]. For fair comparisons, all algorithms were run using the same settings: a population size of 30 and a maximum of 1000 iterations. Each algorithm was repeated 30 times on each test function, and the mean (Avg) and standard deviation (Std) are summarized in Table 1. All experiments were carried out on a laptop with a Core i7-7700HQ @ 2.80 GHz and 16 GB RAM, programmed using MATLAB R2014a.

As previously mentioned, 23 classical test functions were used to study the performance of the MPA against the other competitors. These functions are widely used to evaluate the performance of optimization algorithms, and their details are given in [32, 112, 115]. It should be noted that the first seven test functions (F1–F7) are classified as unimodal, while the remaining test functions (F8–F23) are multimodal. The unimodal test functions are used to study the exploitation behaviour of an algorithm, while the multimodal test functions are used to study its exploration behaviour.
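For illustration, the sphere function (F1, unimodal) and the Rastrigin function (a commonly used multimodal member of this suite) can be written as:

```python
import numpy as np

def sphere(x):        # F1: unimodal, global minimum 0 at the origin
    return np.sum(x ** 2)

def rastrigin(x):     # multimodal with many local minima, global minimum 0 at the origin
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))
```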

The mean results and Std of the MPA and the other competitors are shown in Table 1, with the best results highlighted in bold. According to the results of the unimodal test functions, the RSO obtained the best results on four test functions (F1–F4), the HHO on F5 and F7, and the ABC on F6. However, the MPA performed better than the ABC and WOA on five test functions, better than the SSA on six test functions, and better than the FPA, MFO, and MVO on all test functions. This indicates that the MPA has strong exploitation characteristics compared to several of the other optimization algorithms.

Turning to the results of the multimodal test functions in Table 1, the MPA was ranked first by obtaining the best results on 12 test functions, while the HHO and ABC were ranked second, each obtaining the best results on 11 test functions. The FPA came third with the best results on nine test functions, and the MVO fourth with five. The MFO, RSO, and SSA were ranked fifth, each obtaining the best results on four test functions, while the WOA was ranked last with three. Based on this, we can conclude that the MPA also has strong exploration characteristics compared to the other competitors considered in this study.

Considering the overall results in Table 1, the MPA, ABC, and HHO were ranked first, each obtaining the best results on 12 test functions. The FPA was placed second with the best results on nine test functions, while the RSO came third with eight. The MVO was ranked fourth with the best results on five test functions, and the MFO and SSA followed, each with four. Finally, the WOA was ranked last with the best results on three test functions. Thereafter, Friedman's statistical test was used to calculate the average rankings of the competitors, as shown in Table 1; note that a lower average ranking means better performance. Table 1 demonstrates that the MPA ranked first with the lowest average ranking, followed by the other comparative methods.

Table 1 Performance of MPA against other optimization algorithms

Figure 12 illustrates the convergence behaviour of the MPA against the other comparative algorithms on two unimodal and two multimodal test functions. The x-axis reflects the number of iterations, while the y-axis represents the fitness value. Clearly, the MPA converges faster than the other comparative algorithms: it was able to converge towards the optimal fitness value in the early stages of the search process (fewer than 100 iterations) for all test functions. This confirms the robustness of the MPA in achieving the right balance between exploration and exploitation. However, the RSO got stuck in a local optimum on three test functions. Connecting the RSO results in Table 1 with Fig. 12, we can conclude that the RSO has good exploitation behaviour but poor exploration behaviour. In addition, the convergence of most of the comparative methods (except MPA, HHO, and WOA) improved only gradually until the last stages of the search process on the two multimodal test functions (F10 and F22), because these algorithms have a low rate of exploitation and, thus, slow convergence.

Fig. 12 The convergence behaviour of the MPA against the other comparative algorithms on F1, F5, F10, and F22

4 Variants of Marine Predators Algorithm

Because the MPA has been heavily used to tackle optimization problems from different domains, with varying levels of complexity and various sizes, it has been modified and hybridized to accommodate the search-space ruggedness of real-world optimization problems. This section summarizes the different versions of MPA proposed in the literature.

4.1 Modified Versions of Marine Predators Algorithm

Different modified versions of MPA have been proposed to fit the nature of the problem, such as binary MPA, adaptive MPA, opposition-based learning MPA, multi-swarm MPA, and others. These MPA versions are discussed in detail below.

4.1.1 Binary Marine Predators Algorithm

The classical version of MPA was introduced for continuous optimization problems. However, researchers have modified the MPA to deal with optimization problems that have binary search spaces. Generally, transfer functions such as S-shaped and V-shaped functions are used to convert the continuous MPA into a binary one, as in the sketch below and the following studies.
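A minimal sketch of the S-shaped (sigmoid) mapping used for this purpose is given below; the exact thresholding rule differs between the studies reviewed, so the rule shown here is only an assumption for illustration:

```python
import numpy as np

def s_shaped_binarize(position):
    """Map a continuous position vector to a binary feature mask (1 = feature selected)."""
    prob = 1.0 / (1.0 + np.exp(-position))                  # sigmoid applied per dimension
    return (np.random.rand(*position.shape) < prob).astype(int)
```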

Elminaam et al. [63] presented a binary version of MPA for the feature selection problem, called the MPA–KNN algorithm. A sigmoid transfer function was used to map the continuous MPA to a binary one, and the KNN classifier was used to evaluate the quality of the feature subsets selected by the MPA. The performance of the MPA–KNN was evaluated using 18 medical datasets taken from the UCI Repository. Five evaluation measures (classification accuracy, fitness value, sensitivity, specificity, and the number of selected features) were used to compare the MPA–KNN against seven other comparative methods. The simulation results demonstrated the effectiveness of the MPA–KNN against the other competitors on all evaluation measures.

Similarly, another binary version of MPA for feature selection was introduced by [33], known as ROBL–MPA. In ROBL–MPA, random opposition-based learning (ROBL) was integrated with the MPA to enhance its global search ability. The 10-fold cross-validation technique was used to divide the datasets into training and testing groups, and the KNN classifier was utilized to evaluate the quality of each selected feature subset. The performance of the ROBL–MPA was tested using six datasets and compared to the classical MPA and another version of MPA. The experimental results illustrated the effectiveness of the ROBL–MPA compared to the other two versions of MPA in terms of convergence accuracy and fitness value.

Yousri et al. [178] proposed a novel modified MPA for global optimization and feature selection, called FOCLMPA. In their algorithm, a comprehensive learning strategy and the memory perspective of fractional calculus were combined with the MPA to escape local solutions and avoid premature convergence. The performance of FOCLMPA was evaluated using test functions from CEC2017 and CEC2020, four classical engineering problems, and 18 feature selection datasets from the UCI Repository. A threshold of 0.5 was used to convert the continuous FOCLMPA into a binary one, and the KNN classifier was used to evaluate the quality of the selected features. The simulation results demonstrated the effectiveness and robustness of FOCLMPA against the other comparative methods for all problems considered in that research.

A modified binary version of the MPA for solving FS problems was introduced in [34]. In their algorithm, called OBL–MPA, the opposition-based learning concept was incorporated with the MPA to enhance the exploration ability of the algorithm. The OBL–MPA was evaluated using six microarray datasets and compared to the original version of MPA and other optimization algorithms such as WOA, GWO, HHO, and an enhanced WOA. Five standard machine learning classifiers (KNN, SVM, RF, NN, and NB) were utilized to assess the selected feature subsets, and 10-fold CV was used to divide the datasets into training and testing groups. The experimental results show that the OBL–MPA was superior to the other comparative methods in classification accuracy, converging capability, and stable feature selection.

The authors of [12] integrated the sine–cosine algorithm (SCA) with the MPA for feature selection problems, called MPASCA. The SCA was used in their algorithm to enhance the local search ability of the algorithm. The MPASCA was evaluated using 18 datasets from the UCI Repository and one real dataset from the metabolomics domain. The simulation results demonstrated the effectiveness of the MPASCA compared to other comparative methods in terms of fitness value and classification accuracy.

Alrasheedi et al. [27] introduced an enhanced version of MPA, called CMPA, by integrating its components with a chaotic sequence for solving FS problems. The main purpose of using a chaotic sequence is to improve the exploration ability of the algorithm. Their algorithm was evaluated using 17 datasets from the UCI Repository, with the KNN classifier utilized to assess the quality of the selected feature subsets. The simulation results illustrated the effectiveness and robustness of the CMPA against two chaotic versions of the grasshopper optimization algorithm and the chaotic artificial bee colony algorithm in terms of fitness value, classification accuracy, and the number of selected features.

Wang et al. [151] presented another binary MPA for detecting the anterior cruciate ligament. An SVM classifier was used to assess the quality of the selected feature subsets. The simulation results illustrated the effectiveness and robustness of their algorithm against seven well-known optimization algorithms in terms of sensitivity, specificity, and classification accuracy.

Abdel-Basset et al. [6] presented another binary variant of MPA for 0–1 knapsack problems, called BMPA. To deal with the binary nature of the 0–1 knapsack problem, the S-shaped and V-shaped transfer functions were utilized to convert from continuous MPA to binary MPA. Three different datasets were considered to evaluate the performance of the BMPA. The experimental results show the superiority of the BMPA compared to other comparative methods from the literature in terms of solution quality, success rate, percent deviation, and computational time.

4.1.2 Adaptive Marine Predators Algorithm

The MPA employs few adjustable parameters during its optimization process. However, researchers constantly seek ways to enhance the performance of the MPA by self-tuning its parameters for specific problems through problem-dependent adaptation. This is critical for increasing convergence speed and avoiding local minima by balancing exploration and exploitation, and it is typically achieved through nonlinear operators that adjust the MPA parameters during the run, as in the sketch below and the papers that follow.
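For illustration only (the specific formulas differ from paper to paper), two nonlinear schedules of the kind these works rely on are sketched below: the CF factor already built into MPA and an assumed cosine-decaying weight:

```python
import numpy as np

def adaptive_cf(t, T):
    """MPA's built-in nonlinear step-size factor, which decays over the run."""
    return (1 - t / T) ** (2 * t / T)

def cosine_weight(t, T, w_max=0.9, w_min=0.2):
    """An assumed inertia-like weight that shrinks nonlinearly as t approaches T."""
    return w_min + (w_max - w_min) * np.cos(np.pi * t / (2 * T))
```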

Yu et al. [181] proposed a new version of the MPA by introducing an adaptive learning factor to enhance the updating behaviour of the search agents near prey and improve convergence toward optimal solutions. This adaptive version of the MPA is primarily introduced to address one of the power engineering problems called the power distributed generation problem. In the experimental results, the proposed method proved its performance compared with the traditional methods and PSO, where it outperformed the compared methods in achieving the objectives.

Another adaptive MPA version was proposed by Fan et al. [67] for global optimization. The proposed method was presented in two stages; in the first stage, an adaptive control parameter and inertia weight coefficient were combined with the MPA search agents to enhance their updating mechanism, and in the second stage, the MPA components were hybridized with the logistic opposition-based learning mechanism to increase the diversity of the MPA initial population. CEC2020 functions, 23 classical benchmarks, and 4 real-world problems were used to evaluate and test the proposed method. The obtained results showed the high performance of the proposed method, where it outperformed all compared methods.

Chen et al. [44] introduced a novel modified MPA by using Q-learning, called QMPA, to enhance the convergence speed and stability of the algorithm. Reinforcement learning was applied in selecting the update strategy to choose the most appropriate position update strategy for search agents in all search stages. The QMPA was evaluated using CEC2014 test functions and two engineering problems. The experimental results illustrate the superiority of the QMPA against the other competitors in terms of convergence and stability.

4.1.3 Opposition Based Learning Marine Predators Algorithm

Researchers have also introduced opposition-based learning (OBL) to strengthen the search capability and speed up the convergence of the MPA, and thus strike a balance between its exploration and exploitation abilities. A minimal sketch of the OBL idea is given below, followed by a review of studies integrating MPA with opposition-based learning.
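The OBL step assumed by these variants can be sketched as follows: for each solution x in [lb, ub], the opposite point lb + ub - x is evaluated, and the fitter half of the combined set is kept (minimization):

```python
import numpy as np

def opposition_based_learning(population, lb, ub, fitness):
    opposite = lb + ub - population                        # element-wise opposite solutions
    merged = np.vstack([population, opposite])
    scores = np.array([fitness(x) for x in merged])
    return merged[np.argsort(scores)[: len(population)]]   # keep the fittest half
```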

Ramezani et al. [129] presented a modified MPA (MMPA) based on opposition-based learning, a chaotic circle map, and a self-adaptive population method for global optimization. In MMPA, opposition-based learning was used to improve the diversity of the initial population, while the chaotic circle map was utilized to enhance the exploration ability and thus strike the right balance between exploration and exploitation during the search process. The self-adaptive population method was introduced to speed up convergence. The performance of the MMPA was evaluated using the CEC2019 test functions and one engineering problem based on PID control of a DC motor. The numerical results demonstrated the efficiency of the proposed MMPA compared to the original MPA, particle swarm optimization, the JAYA algorithm, the equilibrium optimizer, the grasshopper optimization algorithm, the whale optimization algorithm, the differential evolution algorithm, and the league championship algorithm.

In the work of [90], another modified version of MPA was presented for global optimization and multilevel thresholding image segmentation. The opposition-based learning was combined with the MPA, named MPA–OBL, to enhance the convergence speed. Their algorithm was evaluated using CEC2020 test functions and ten benchmark grey-scale images. The performance of the MPA-OBL was compared to the original MPA and other hybrid algorithms based on opposition-based learning (i.e., LSHADE_SPACMA–OBL, CMA_ES–OBL, DE–OBL, HHO–OBL, SCA–OBL, SSA–OBL). The simulation results illustrated the superiority of the MPA–OBL algorithm against the other competitors in terms of solution quality for test functions and peak signal-to-noise ratio, structural similarity, and feature similarity for image segmentation.

Houssein et al. [87] presented a modified version of MPA based on an opposition-based learning strategy and grey wolf optimizer (GWO) for global optimization and the global maximum power point of the photovoltaic system problem. Their algorithm was named MPAOBL–GWO. The opposition-based learning strategy was used to speed up convergence, while the GWO was utilized to enhance the local search ability of the algorithm. Firstly, the authors evaluated the performance of their algorithm using CEC2017. Then, the MPAOBL–GWO was used to solve the global maximum power point of the photovoltaic system problem. The experimental results demonstrated the effectiveness of the proposed MPAOBL–GWO compared to the classical versions of MPA, GWO, and PSO in terms of solution quality.

Similarly, another enhanced MPA (MSMPA) was introduced for global optimization and joint-regularization semi-supervised ELM by [166]. MSMPA utilized a chaotic opposition learning strategy to generate the initial population so that the algorithm starts from a sufficiently good population. Adaptive inertia weights and an adaptive step control factor were then combined with the MPA to enhance its exploration and speed up convergence. Finally, a neighbourhood dimensional learning strategy was incorporated into the MPA to maintain population diversity. The MSMPA was tested using 18 classical test functions, the CEC2017 test functions, and one real-world problem based on joint-regularization semi-supervised ELM. The experimental results illustrated the efficiency of the MSMPA against classical optimization techniques in terms of solution quality, global search behaviour, and avoiding local optima.

Zhao et al. [185] introduced another modified version of MPA, called QQLMPA, using quasi-opposition-based learning and Q-learning to increase population diversity, enhance global search ability, and avoid getting trapped in local optima. The QQLMPA was evaluated using 20 classical test functions, 12 test functions taken from CEC2015, and three engineering problems. The simulation results demonstrated the superiority of the QQLMPA over the other versions of MPA and the other comparative algorithms.

4.1.4 Multi-swarm Marine Predators Algorithm

The concept of multi-swarm was introduced to strengthen the population diversity of the MPA and to avoid local minima. In multi-swarm MPA variants, the population is divided into sub-populations, and the best solutions are periodically exchanged between the groups to ensure the transfer of the best knowledge between them, as in the sketch below.
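A hedged sketch of this migration mechanism is shown below; the number of groups, migration period, and replacement rule are assumptions, as they vary between the works reviewed:

```python
import numpy as np

def migrate_best(subpops, fitness):
    """Each group's best solution replaces the worst solution of the next group (minimization)."""
    bests = [sp[np.argmin([fitness(x) for x in sp])].copy() for sp in subpops]
    for i, sp in enumerate(subpops):
        worst = np.argmax([fitness(x) for x in sp])
        sp[worst] = bests[(i - 1) % len(subpops)]          # receive the previous group's best
    return subpops
```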

Pan et al. [125] introduced another modified version of MPA, called MGMPA, for global optimization and economic load dispatch problems. In MGMPA, a multigroup mechanism is introduced to divide the population into independent sub-populations. These groups share knowledge by transferring the best individuals between the groups after a fixed number of iterations. The performance of the MGMPA was tested using the CEC2013 test functions and one economic load dispatch test case with 40 generating units. The experimental results show the effectiveness of the MGMPA against the original MPA, SSA, and PSO algorithms in terms of solution quality and convergence speed.

Another enhanced version of MPA was introduced for parameter identification of solid oxide fuel cells by Yousri et al. [177]. Their algorithm was called CLDMMPA. A dynamic multi-swarm strategy and a comprehensive learning approach were utilized to make the population more adaptive and robust and to encourage knowledge flow and interaction, thereby enhancing population diversity. Firstly, the performance of CLDMMPA was evaluated using 10 test functions published in CEC2020 and compared to the classical MPA, the atomic search optimizer, the salp swarm algorithm, and an enhanced differential evolution. The numerical results demonstrated the effectiveness of the CLDMMPA against the other comparative algorithms in terms of solution quality and convergence behaviour. Later, the performance of the CLDMMPA was evaluated on static and dynamic versions of the parameter identification of solid oxide fuel cells. The simulation results illustrated that the CLDMMPA matches the measured data closely, with less deviation than the other competitors.

4.1.5 Other Modifications of Marine Predators Algorithm

Other modified versions of MPA were introduced by the researchers in the literature to balance exploration and exploitation abilities when used to solve complex optimization problems.

Shaheen et al. [138] presented a modified version of MPA based on the Lévy and Brownian movements for tackling the combined heat and power economic dispatch problem, called IMPA. This modification was introduced to balance the exploration and exploitation abilities of the algorithm when solving this problem. The performance of the IMPA was evaluated using four test cases of the economic dispatch problem. The numerical results showed that the proposed IMPA performs better than the original MPA by obtaining the optimal solutions with lower computational time. In addition, the experimental results demonstrated the effectiveness and robustness of the IMPA compared to the other competitors in the literature.

The parallel MPA was presented to minimise the energy loss in distribution networks by [173]. Their MPA was implemented for the 69-bus radial distribution system. The simulation results demonstrated the effectiveness of their algorithm compared to other metaheuristic algorithms in terms of accuracy and solution efficiency.

Similarly, quantum theory was integrated with the MPA, called QMPA, for global optimization and the multilevel image segmentation problem by [1]. Quantum theory was utilized to find the optimal threshold levels and enhance the segmentation process. Their algorithm was evaluated using 23 classical test functions and 10 grey-scale images. The simulation results illustrated that the QMPA performs well compared to the other competitors in terms of solution quality and convergence speed.

Abdel-Basset et al. [4] introduced a modified version of MPA based on the ranking-based diversity reduction (RDR) strategy, named IMPA, for X-Ray image segmentation. The RDR strategy was used to enhance the performance of the original MPA to find good enough solutions with lower computational times. The IMPA was evaluated using eight COVID-19 chest images and compared to the original MPA and five other comparative algorithms. The numerical results demonstrated that the IMPA performs better than the other competitors by obtaining the best results in the Similarity Index Metric and Universal Quality Index.

Yousri et al. [175] introduced another modified version of MPA based on comprehensive learning to identify the optimal parameters of the supercapacitor equivalent circuit. Their algorithm was called CLMPA. The comprehensive learning strategy was used to share the best knowledge among all solutions in the population and thus avoid immature convergence. Eight parameters were considered in the evaluation process. The simulation results demonstrated the superiority of the CLMPA against eight classical optimization algorithms by obtaining results very close to the optimal ones.

Another modified version of MPA was introduced by Abdel-Basset et al. [7] for task scheduling in IoT-based fog computing applications. Their modified MPA was called MMPA. In their algorithm, the position-update equation was modified to use the last updated positions rather than the previous best ones, improving the exploitation capability of the algorithm. Furthermore, a ranking strategy and a mutation operator were utilized to modify the worst solutions using features of the best solutions. The experimental results illustrated their algorithm's effectiveness and robustness compared to other optimization techniques and two versions of MPA in terms of energy consumption, makespan, flow time, and carbon dioxide emission rate.

Shaheen et al. [137] introduced another modified version of MPA based on new updating strategies for simultaneous network reconfiguration and distributed generator allocation in distribution systems. Their algorithm was called MMPA. The MMPA was tested using 33- and 99-bus distribution systems with different loading levels, and its performance was compared to the classical MPA, genetic algorithm, harmony search, fireworks algorithm, firefly algorithm, and sine cosine algorithm. The numerical results illustrated the superiority of the MMPA over the classical MPA and the other competitors in simultaneously handling distribution network reconfiguration (DNR) and DG allocation.

Houssein et al. [88] integrated an improved version of the MPA with convolutional neural networks (CNN) to classify ECG arrhythmia, called the IMPA–CNN model. A new nonlinear step-factor control approach was introduced into the MPA's main framework to balance exploration and exploitation, improving global search capacity and promoting rapid convergence in local search. In their model, each layer of the CNN was configured using the IMPA. The experimental results demonstrated the effectiveness of the IMPA–CNN model compared to other comparative models in terms of accuracy and convergence speed.

Kumar et al. [103] introduced a modified version of MPA, called chaotic MPA (CMPA). In their algorithm, 10 chaotic maps are integrated with the MPA to enhance its exploitation ability and thus balance the exploration and exploitation abilities of the CMPA. The performance of the CMPA was evaluated using the CEC2020 test functions and five engineering problems. The simulation results demonstrated the effectiveness of the CMPA against ten other comparative algorithms.

4.2 Multi-objective Marine Predators Algorithm

Multi-objective optimization (MO) is used to optimize problems with more than one objective. Most engineering and real-world applications are multi-objective, and these objectives often conflict with one another. For instance, in an assembly-line optimization problem, one objective may be to minimize power consumption while another is to maximize production. Therefore, finding an optimal solution for one objective may not produce the best result for the other objectives.
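The notion underlying all of the variants reviewed below is Pareto dominance; a minimal sketch for minimization is:

```python
import numpy as np

def dominates(f_a, f_b):
    """True if objective vector f_a Pareto-dominates f_b (all objectives minimized)."""
    f_a, f_b = np.asarray(f_a), np.asarray(f_b)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

# e.g., with objectives (power consumption, -production):
# dominates([3.0, -10.0], [4.0, -8.0]) -> True
```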

Abdel-Basset et al. [8] presented four variants of MPA for multi-objective optimization problems. The first variant is the multi-objective MPA (MMPA). In the second variant, the dominance strategy-based exploration-exploitation was integrated with the MMPA (M-MMPA) to balance exploration and exploitation abilities. In the third variant, the Gaussian-based mutation was combined with the M-MMPA (M-MMPA-GM) to enhance the balance between the exploration and exploitation of the algorithm. Finally, the Nelder–Mead simplex method was integrated with the M-MMPA (M-MMPA-NMM) in the last variant to find more good solutions. The performance of the four variants of their algorithm was evaluated using 32 test functions published in GLT, CEC2009, and CEC2020. The experimental results demonstrated the effectiveness and superiority of the M-MMPA-GM variant against the other three variants of MPA and other comparative methods.

Hassan et al. [80] presented a modified version of MPA (MMPA) for global optimization and single and multi-objective combined economic emission dispatch problems (CEED). The comprehensive learning strategy was utilized to share the best experiences between all solutions to avoid premature convergence. In addition, the Pareto concept and fuzzy set method were used to find the optimum solution for the bi-objective CEED problems. The performance of their algorithm was tested using 28 test functions published in CEC2017 and four different cases of CEED problems. The numerical results demonstrated the superiority of the proposed MMPA compared to the conventional MPA and the other comparative methods.

Chen et al. [42] introduced another multi-objective MPA for global optimization, called MOMPA. Their algorithm utilized a reference-point strategy and a non-dominated sorting approach to select the fittest solutions and ensure the diversity of the Pareto-optimal solution set. In addition, population diversity was improved by utilizing Gaussian perturbation. The MOMPA was evaluated using a set of multi-objective test functions, and the numerical results illustrated that the MOMPA is very competitive with the other comparative methods.

Similarly, a multi-objective version of MPA, named MOMPA, was introduced for multi-objective optimization problems by [95]. In MOMPA, the crowding distance mechanisms and the elitist non-dominated sorting were utilized to avoid local optima and emphasize the exploration capability. Their algorithm was evaluated using different benchmark functions and engineering problems. Also, the performance of the MOMPA was evaluated based on a comparison with the multi-Objective moth-flame optimizer, symbiotic-organism search algorithm, and water-cycle algorithm. The outcomes presented the high performance of the MOMPA compared to the other algorithms in improving quality and convergence speed.

Alharthi et al. [25] proposed a multi-objective MPA (IMPA) for the optimal techno-economic operation of alternating/direct current (AC/DC) electric grids. An exterior repository was utilized to preserve non-dominated individuals, and a fuzzified decision procedure was used to determine the acceptable operational solution for the mentioned problem. The IMPA was tested on the standard IEEE 57-bus power system and compared to six other optimization algorithms. The simulation results demonstrated the effectiveness of the IMPA against the other algorithms in extracting well-diversified Pareto solutions.

An improved multi-objective MPA (IMMPO) was implemented for optimal operation of hybrid AC and multi-terminal-high voltage direct current (AC/MT-HVDC) power systems by Elsayed et al. [64]. In IMMPO, the external repository was utilized to conserve the non-dominated prey. In addition, fuzzy decision-making was used to select the best compromise operating point for the hybrid AC/HVDC power systems. The IMMPO was evaluated using a modified standard IEEE 30-bus, and one real-world problem sampled from the Egyptian West Delta Region Power Network emerged with VSC-HVDC grids. The simulation results demonstrated the efficiency and robustness of the IMMPO by successfully extracting well-diversified Pareto solutions compared to other comparative algorithms.

Another engineering problem tackled by multiobjective MPA is fractional-order capacitors in electrical power systems [147]. The authors improved the system performance by reducing the loss (first objective) and increasing the voltage profile (second objective). Their study compared the classical and fractional capacitors, where the system produced results similar to the standard IEEE 69 radial system.

A multi-objective version of MPA was also proposed by Abdel-Basset et al. [9]. This version was heavily analyzed and studied with different multi-objective configurations, and four multi-objective MPA variants were proposed. The four variants were tested against well-known theoretical and practical multi-objective problems and showed their superiority in the multi-objective domain.

Another version of Multiobjective MPA was proposed by Jangir et al. [95]. This version is based on elitist non-dominated sorting and crowding distance mechanisms. It has the ability to tackle a multiobjective problem with conflicting objectives. Different case studies are used to test the proposed method, including unconstrained, constrained, and engineering design problems with Pareto front background. Compared with other multiobjective versions of metaheuristic methods, multiobjective MPA proved its superiority.

Zhong et al. [188] presented a multi-objective MPA, abbreviated MOMPA, for global optimization. In MOMPA, an external memory was used to store the best non-dominated solution set. This was introduced to enhance population diversity. In addition, the fittest solutions in the external memory were used as predators to simulate the predator’s foraging behaviour. The performance of the MOMPA was evaluated using CEC2019 test functions and seven engineering design problems. The experimental results demonstrated the superiority of the MOMPA against nine state-of-the-art multi-objective algorithms.

Similarly, an enhanced multi-objective optimization algorithm based on the MPA (MOEMPA) was introduced for managing energy sharing in an interconnected microgrid with a utility grid [180]. In MOEMPA, the population is divided into two groups; the exploration equations of MPA are applied to the first group, while the exploitation equations are applied to the second group. Furthermore, a non-uniform mutation is used to enhance the exploration ability of the algorithm. The MOEMPA was applied to manage energy sharing in an interconnected microgrid consisting of solar and wind renewable energy sources, diesel for emergency loads, and batteries for extra energy storage. The simulation results demonstrated the effectiveness of the MOEMPA against six multi-objective optimization algorithms in reducing emissions with the highest profits.

Xing and He [155] presented a multi-objective version of MPA for infrared-image fault diagnosis. In their algorithm, an adaptive weight strategy was used to avoid local minima, while opposition-based learning was utilized to strike the right balance between the exploration and exploitation abilities of the algorithm. Their algorithm was tested using the DTLZ and WFG test suites and then applied to a benchmark dataset of 500 real insulator infrared images. The experimental results show the effectiveness of their algorithm against the other competitors in terms of solution quality, albeit with higher computational times. Another multi-objective version of the MPA (MOMPA) for global optimization was introduced by Chen et al. [43]; it was evaluated using the ZDT, DTLZ, and WFG test suites with satisfactory results.

4.3 Hybridized Versions of Marine Predators Algorithm with Other Components

The third MPA variant type is hybridization. In this variant, the MPA searching capabilities and performance are improved by combining components of another method/approach with the MPA components. Several studies were proposed using the hybridization approach for the MPA, which are summarized below.

4.3.1 Marine Predators Algorithm with Fractional-Order

The fractional-order approach is primarily used to alter the movements of the predators within the search space [179], for example by giving each step a memory of the preceding steps, as sketched below.
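As a hedged illustration only (the papers below use their own formulations), a Grünwald–Letnikov-style memory term that weights the last few position steps can be written as:

```python
import numpy as np
from math import gamma

def fractional_memory_step(past_steps, alpha=0.6):
    """Weight recent step vectors (newest first) with fractional-order coefficients."""
    total = np.zeros_like(past_steps[0], dtype=float)
    for k, step in enumerate(past_steps, start=1):
        coeff = gamma(alpha + 1) / (gamma(k + 1) * gamma(alpha - k + 1))
        total += ((-1) ** (k + 1)) * coeff * step
    return total
```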

Sahlol et al. [134] proposed a two-phase approach based on convolutional neural networks (CNN), fractional-order calculus (FO), and MPA for COVID-19 image classification. In the first phase, the CNN was utilized to extract the features, while the combination between the MPA and FO, called FO–MPA, was used to select the most relevant features. Their algorithm was tested using two public COVID-19 X-ray datasets. The experimental results demonstrated the superiority of the FO–MPA against other recent feature selection algorithms and other different CNN architectures in terms of classification accuracy, the number of extracted features, and the F-score.

Yousri et al. [179] incorporated the fractional calculus strategy and the MPA to enhance the MPA searching capabilities and avoid local optima stagnation by improving the sharing of historical experiences and knowledge between search agents. The proposed method was tested using CEC2017 and CEC2020 benchmarks, four engineering problems, and a feature selection problem. The performance of the proposed method was compared with several competitive methods. The proposed method obtained the best results among all compared methods.

4.3.2 Marine Predators Algorithm with Evolutionary Algorithms

The searching components of the MPA have been hybridized with different evolutionary algorithms to enhance its exploration and exploitation capabilities and, thus, its solution quality. The differential evolution (DE) algorithm in particular attracted the attention of MPA researchers and was utilized to enhance the MPA's search performance in five studies. This interest is due to the simplicity of DE and its high ability to escape local optima stagnation.

Hu et al. [91] proposed an enhanced EA-based MPA version primarily to address the shape adjustment problem by optimizing the shape of shape-adjustable generalized cubic developable Ball surfaces. The method combines the MPA with the DE algorithm to increase the diversity of the MPA population and enhance its ability to escape local optima stagnation. The proposed method was tested using 23 classical benchmark functions, three engineering problems, and the CEC2017 test suite, and its performance was mainly compared with that of the original MPA; the obtained results demonstrated its strong search performance. A hybridization with the same purpose was proposed by [165] to address the poor-image problem and optimize illumination-correction accuracy. That method was evaluated and compared with traditional methods and performed better than them in optimizing the objective function.

In [122], an adaptive EA was hybridized with the MPA to modify and enhance the MPA's searching behaviour, skip local optima, and balance local and global search. The adaptive EA was utilized to find the parameter values that achieve the optimal performance of the MPA. The method was mainly introduced to address the PV panel problem and was tested by comparing its results with well-known state-of-the-art methods, achieving better results than the compared methods. In [2], another EA-based version of MPA was introduced to address the same problem and identify the unknown parameters of different PV models. The hybrid method was suggested to enhance the overall search ability of the MPA and emphasize the balance between exploration and exploitation. It was deeply tested, analyzed, and compared with the original MPA and other well-known methods, which it outperformed in addressing the problem.

A new binary EA was utilized and combined with the MPA by [74] to address the biological clustering problem and cluster multi-omics datasets. The introduced combination was proposed to improve the MPA searchability and its search agents’ behaviour by enhancing diversity and avoiding local search stagnation. The proposed method was evaluated using multi-omics datasets from TCGA and compared with different methods. The obtained results proved the high performance of the proposed method in addressing the problem.

Abdel-Basset et al. [5] optimized PV system parameters and found their best values using a novel hybrid version of MPA. The proposed MPA version modifies the MPA searching behaviour by combining its agents and parameters with an adaptive mutation operator. The enhancement has two main steps: improving the quality of the best solutions using the adaptive mutation operator, and improving the poorer solutions using the locations of the best search agents. This modification was proposed to enhance the population's ability to find optimal solutions. The method was tested and compared with different existing methods and performed better in achieving the best solutions. Another study combined the MPA components with a mutation operator to efficiently optimize wind power using time-series datasets [23]. The primary purpose of the mutation operator is to enhance the convergence rate of the MPA towards the best local optima. Several wind turbine datasets were used to evaluate the proposed method's performance, and the results were compared with various traditional methods, which the proposed method outperformed in optimizing the objectives.

Sun and Gao [145] addressed two MPA drawbacks, local optima stagnation and lack of population diversity, by using the estimation of distribution algorithm and a Gaussian random walk strategy. The estimation of distribution algorithm was used to enhance the convergence rate and performance by modifying the population distribution, while the Gaussian random walk strategy was utilized to avoid local search stagnation. The proposed method was tested using the CEC2014 test suite and compared with other methods, obtaining the best solutions among them.
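The Gaussian random walk idea can be sketched as follows (an illustrative version under assumed details, not the exact formulation of [145]): a stagnating agent is re-sampled around the best-known solution with a step size that shrinks as the run progresses.

```python
import numpy as np

def gaussian_random_walk(agent, best, t, lb, ub, rng):
    """Illustrative Gaussian random walk around the best solution; the step
    size is proportional to the agent's distance from the best solution and
    shrinks with the iteration counter t, helping stagnating agents escape."""
    scale = np.abs(best - agent) / (1.0 + t)       # smaller steps later in the run
    candidate = rng.normal(loc=best, scale=scale + 1e-12)
    return np.clip(candidate, lb, ub)
```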

Another study that combined an evolutionary component with the MPA was proposed by [187]. In this study, the MPA was hybridized with the teaching-learning-based method to emphasize exploitation, by improving the encounter rate between prey and predators, and exploration, by improving the population’s diversity. The proposed method was tested using the IEEE CEC-2017 functions and four engineering optimization problems, and its performance was compared with state-of-the-art methods. The proposed method enhanced the solutions significantly and achieved better results than the compared methods.

4.3.3 Marine Predators Algorithm with Swarm Intelligence

This section summarizes the most significant studies proposed to enhance the MPA performance utilizing swarm intelligence optimization methods.

Due to the impact of local search stagnation on the MPA searching performance, Eid et al. [60] hybridized the MPA components with a well-established swarm optimization method called the multi-verse optimizer. The main aim of this combination is to strengthen the MPA local search capabilities and escape stagnation in local optima. The proposed method was tested on the distributed generation optimization problem with two standard test systems, and its performance was compared with that of the original MPA and other methods, which it outperformed in optimizing all objectives.

Liu and Yang [106] optimized the colour constancy calculation of dyed fabrics by proposing a new hybrid approach based on the MPA and the sine cosine algorithm, a swarm optimization method used here to increase the randomization of the initial population. The outputs of the proposed method were used as input to a random vector functional-link network to obtain optimal prediction accuracy. In the experimental results, the proposed method was compared with eight methods and outperformed all of them.

In two studies, the original version of the MPA was also hybridized with the well-known particle swarm optimization (PSO) algorithm. In the first study, Shaheen et al. [139] combined the components of the MPA with the PSO local search operators to emphasize the local search capabilities of the MPA and obtain a better local optimum for the power dispatch problem. The proposed method was tested using three datasets and compared with other state-of-the-art methods, which it outperformed. In the second study, the MPA was hybridized with the global search operators of the PSO to increase the diversity of the population and find the global optima for the dynamic clustering problem [152]. In the evaluation phase, the proposed method was compared with the original MPA, PSO, and other methods, and exhibited better results in optimizing the objectives.
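For reference, the PSO update that such hybrids typically borrow can be sketched as follows; this is an illustrative, standard formulation, not the exact operator used in [139] or [152], and the coefficient values are conventional assumptions.

```python
import numpy as np

def pso_update(pos, vel, pbest, gbest, rng, w=0.7, c1=1.5, c2=1.5, lb=-10.0, ub=10.0):
    """Standard PSO velocity/position update, shown as the kind of operator
    that can be layered on MPA agents: pbest is the agent's personal best,
    gbest the swarm's global best; w, c1, c2 are conventional coefficients."""
    r1 = rng.random(pos.shape)
    r2 = rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lb, ub)
    return pos, vel
```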

Abualigah et al. [16] proposed a novel hybrid approach based on the searching components of the MPA and the Salp Swarm Algorithm to handle the image segmentation problem and find its optimal multilevel thresholds. The main purpose of the hybrid approach is to boost the overall search performance and capabilities of the MPA search agents. The proposed method was evaluated using different benchmark images, and the obtained results were compared with optimization methods from the literature, which the hybrid method outperformed in addressing this problem.

Another enhancement approach for the MPA was proposed by Yousri et al. [176] utilizing the slime mould algorithm. The primary aim of this approach is to employ the slime mould algorithm together with the local search operators of the MPA so that the search agents exploit the search space better and find the optimal solution. The proposed method was mainly introduced to determine the optimal values of the triple-diode model parameters. Its performance was investigated by comparing its results with other powerful methods, against which it showed outstanding performance.

4.3.4 Marine Predators Algorithm with Other Algorithms

Oszust [123] enhanced the MPA performance and its searching behaviour around local optima by utilizing the Local Escaping Operator as a global search booster: the worst solution is replaced with one generated by the new operator. In addition, the Local Escaping Operator helps achieve a better balance between exploration and exploitation. The proposed method was evaluated using 82 test functions and 3 engineering problems and compared with state-of-the-art methods, achieving better results than all of them.

A recent study was proposed to enhance and emphasize the local search capabilities of the MPA using the Nelder–Mead algorithm [126]. The Nelder–Mead algorithm was utilized as an exploitation booster to find the optimal local solution and improve the balance between local and global search. The proposed method was adapted to address the structural design optimization problem. The results obtained by the proposed method demonstrated its high capability in addressing the problem, outperforming the compared methods.

The linear improvement and ranking-based updating strategies were combined with the MPA by Abdel-Basset et al. [10] to optimally address the image segmentation problem. The linear improvement strategy was utilized to find the worst solutions in the MPA population and replace them with better ones, while the ranking-based updating strategy was used to enhance the updating mechanism in the last few iterations. The proposed method was tested using four test images to evaluate its search performance and compared with seven state-of-the-art methods, against which it proved its high performance.
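A minimal sketch of this "replace the worst" idea is given below (an assumption-based illustration, not the exact linear improvement strategy of [10]): the k worst agents are re-generated on the line between a randomly chosen good agent and the best solution.

```python
import numpy as np

def replace_worst(pop, fitness, best, k, lb, ub, rng):
    """Illustrative 'improve the worst' step (minimization, k < len(pop)):
    the k worst solutions are replaced by points sampled on the line between
    a randomly chosen good solution and the best-known solution."""
    order = np.argsort(fitness)            # ascending fitness: best first
    good, worst = order[:-k], order[-k:]
    for i in worst:
        donor = pop[rng.choice(good)]      # a randomly selected good agent
        alpha = rng.random()               # random step toward the best solution
        pop[i] = np.clip(donor + alpha * (best - donor), lb, ub)
    return pop
```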

5 Applications of Marine Predators Algorithm

The MPA has been successfully applied to several real-world optimization problems, which are summarized in Table 2. The MPA shows strong results in many domains, such as energy, renewable energy, power systems, networking, engineering applications, classification and clustering, feature selection, image and signal processing, mathematics, global optimization, scheduling, and many others. These applications are real-world optimization problems with specific search space complexity and size. For this reason, different MPA versions have been proposed, because some of these problems cannot be efficiently solved by the original version of an optimization technique such as the MPA; modification or hybridization ideas must be introduced to cope with the complexity of the search space.

On the other hand, Table 2 provides a comprehensive review of the MPA works carried out from the proposal of the MPA in 2020 by [68] to the present. The review covers the application domains of the MPA, the problems it has been used to solve, the MPA version applied, and a summary description of each work.

Fig. 13 Number of MPA Applications

Figure 13 shows that the Power Systems field is the most prominent domain for applying the MPA with 28%, the second is Global Optimization with 15%, and the third is the Image Processing domain with 13%.

Fig. 14 Number of MPA Versions

Like other swarm-based optimization algorithms, several variants of the MPA have been proposed for solving many real-world problems, including the original MPA, the multi-objective MPA, modified versions, and hybrid versions. Figure 14 shows the percentage of works applying each MPA version to real-world problems: around 45% applied the original MPA, about 27% a modified version, 16% a hybrid version, and only 12% a multi-objective version.

Fig. 15 MPA Applications in Energy domain

Fig. 16 MPA Applications in Image and Signal processing domain

As can be seen from the above table and Fig. 13, most of the MPA works have been performed in the energy and power system domain. Photovoltaic energy system works dominate the other fields with 13 research papers. Figure 15 shows the MPA applications in the Energy domain.

Fig. 17 MPA Applications in Global optimization domain

The MPA also shows its strength in image and signal processing, where the works in this domain focus on image segmentation, signal processing applications, and medical healthcare tasks such as cancer classification and COVID-19 prediction. Figure 16 shows the applications of the MPA in the image and signal processing domain.

Figure 17 shows the MPA applications in the global optimization domain; as with other metaheuristic algorithms, engineering problems were the most attractive area for researchers applying the MPA.

Table 2 Applications of MPA on different domains

6 Open Source Software of Marine Predators Algorithm

The MPA has drawn a lot of scientific interest, as this review demonstrates. This section collects the primary links to all available open-source codes to make it simple for interested researchers to use or modify the MPA method.

The original MPA code is provided by Faramarzi et al. [68]; it is implemented in MATLAB and is stored in the MathWorks File Exchange repository (Footnote 1) and on the GitHub platform (Footnote 2).

Furthermore, a multi-objective version of the MPA is provided by Seyedali Mirjalili, one of the authors of the MPA; it is implemented in MATLAB and stored on his website (Footnote 3), and similar multi-objective code is available on the GitHub platform (Footnote 4).

7 Discussion About Marine Predators Algorithm

The robustness and successful performance of the MPA have attracted researchers’ attention from different domains, as it shows high capability in efficiently addressing various optimization problems. Furthermore, the MPA has a simple mathematical model, and its flexible search agents enhance its adaptability to deal with different problems with a better exploration and exploitation balance. However, like other optimization methods, the MPA has several drawbacks and limitations that reduce its searching performance and capabilities when it is utilized to tackle real-world optimization problems.

The first limitation is related to the diversity of the MPA population: the MPA has low diversity among the candidate solutions in its population. This problem occurs because the MPA procedure and model lack a diversity controller during the search, since its operators were designed for optimization problems with relatively flat search spaces. Accordingly, new diversity-control mechanisms can be added, or borrowed from other algorithms, and incorporated into the MPA model to manage the diversity of the candidate solutions [23, 91, 165].
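One simple signal such a controller could monitor is the spread of the population around its centroid; the following sketch is illustrative only, and the threshold and re-initialization policy are hypothetical assumptions.

```python
import numpy as np

def population_diversity(pop):
    """Mean Euclidean distance of the agents to the population centroid;
    a simple signal a diversity controller could monitor each iteration."""
    centroid = pop.mean(axis=0)
    return np.linalg.norm(pop - centroid, axis=1).mean()

# Hypothetical use: re-initialize part of the population when diversity
# falls below a small fraction of the search-space diagonal.
# if population_diversity(pop) < 1e-3 * np.linalg.norm(ub - lb):
#     pop[worst_indices] = rng.uniform(lb, ub, size=pop[worst_indices].shape)
```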

Due to the complexity of large-scale and multimodal optimization problems and the vast number of local optima in their search spaces, the MPA search agents can easily get stuck in these local optima. Therefore, the MPA search agents should be intelligently incorporated with other search agents to navigate large-scale search spaces and find the global optimum rather than local ones [60]. In this case, hybrid metaheuristics can be particularly well suited.

Another limitation that reduces the performance of the MPA is the low flexibility of its parameters, which makes the search agents unable to explore large search spaces efficiently. In addition, these parameters restrict the position-updating strategy so that the agents do not search far from the best solution. To address these issues, a new position-updating rule and a randomized controller can be utilized to free the search agents and find better positions near the global optimum [67].

Another issue is related to the complexity of optimization problems: the original MPA is modelled to address only continuous optimization problems with a single objective. However, optimization problems are not limited to this type; they can be formulated as multi- or many-objective, binary or discrete, dynamic or combinatorial. Thus, the MPA must be extended to deal with these various kinds of problems [7, 8, 63].

Finally, the MPA’s main limitation is related to the No Free Lunch (NFL) Theorem [150, 154]. The NFL states that no superior optimizer can excel over all other optimizers for all optimization problems in all cases. Therefore, the MPA convergence is particularly related to the problem search space nature. Thus, modifying and hybridizing the MPA searching behaviour is essential to fit the search space nature of different optimization problems.

8 Conclusion and Future Work

The MPA was introduced in 2020 as one of the intriguing new nature-inspired optimization techniques, modelled on the common foraging strategies of ocean predators based on Lévy and Brownian motions. Because the MPA is derivative-free, has few parameters, and is easy to use, adaptable, and simple, numerous researchers have used it to address various optimization problems. This review article has been undertaken due to the numerous optimization challenges that the MPA has been used to solve in the short time since its development.

The fundamentals of the MPA and the variations that build on its initial development were covered in this review. Furthermore, all its variants were discussed, including binary, adaptive, opposition-based learning, multi-swarm MPA, and others. The performance of the MPA has also been improved by hybridizing it with other optimization approaches, such as fractional-order methods, evolutionary algorithms, swarm intelligence, and others.

Nine powerful optimization algorithms, including ABC, HHO, FPA, MFO, MVO, RSO, SSA, and WOA, were studied in this review paper to assess the effectiveness and robustness of the MPA using 23 classical test functions. These tests demonstrated that the MPA outperformed the majority of the compared algorithms.

One interesting point covered in this review article is the substantial range of applications tackled by the MPA. Researchers can therefore use this review as a main reference to find all pertinent materials and MPA application fields, such as electrical engineering, networking and communication, medical applications, environmental engineering, planning and scheduling, etc. However, the MPA model should be hybridized or modified to match the complexity of the problem at hand, for example by strengthening diversity, avoiding the local optima trap, or tackling multi-objective optimization problems. Indeed, this review paper discussed the MPA limitations, allowing researchers to better identify the areas where the MPA may fall short of offering the best solutions.

Since the MPA has proved effective in solving a number of optimization problems, the following future directions can be considered:

  • Structured population MPA. To control the diversity, one prevalent solution is to use implicit structured-population approaches, such as island model concepts (i.e., sub-populations, migration, and selection), hierarchical concepts, cellular automata theory, etc. [105]. There are also explicit structured-population approaches, such as fitness sharing and crowding mechanisms. These methods can improve the diversity of the MPA when dealing with NP-hard and combinatorial optimization problems (a minimal island-model sketch is given after this list).

  • Parameter-less MPA. It is conventionally known in the optimization domain that an algorithm’s parameters intrinsically impact its performance. The concept of parameter control describes how the parameter values affect the balance between exploration and exploitation during the search. Therefore, the MPA control parameters can be the subject of future research, where they can be adapted using deterministic, adaptive, or self-adaptive tuning mechanisms [58].

  • Evolutionary-based MPA. Each operator of the MPA performs one of the following actions: recombination, mutation, or selection [59]. Many theoretical concepts have been proposed to improve the performance of each operator type in evolutionary computation, and these can also be very effective in improving the MPA performance. For example, the selection methods for parents and offspring can be the subject of future MPA studies.

  • Memetic MPA. The concept of memetic techniques (or hybrid metaheuristics) refers to adding a local search agent to the optimizer model to make it more suitable for optimization problems with deep and rugged search spaces. Therefore, efficient local search methods such as hill climbing, \(\beta\)-hill climbing, simulated annealing, Tabu search, VLNS, etc., can be added to the MPA optimization model to improve its exploitation capabilities.

  • Theoretical analysis of MPA model. Although the MPA successfully deals with many optimization problems from different domains with various search space natures, the theory behind this success is still vague. Therefore, the theoretical aspects, such as genetic drift, convergence behaviour, building block theory, etc., can be analyzed in the future.

  • Tackle new domains. Researchers are more likely to use MPA in other problem domains, such as renewable energy, chemical engineering, robotics, and image processing, because of the algorithm’s success in solving various problems.
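As a concrete illustration of the structured-population direction mentioned above, the following minimal sketch shows a ring-topology island-model migration step; the island layout, migration rate, and replacement policy are illustrative assumptions rather than an established MPA design.

```python
import numpy as np

def migrate(islands, fitness, migration_rate, rng):
    """Illustrative ring-topology migration for a structured-population MPA:
    each island sends copies of its best agents to the next island, replacing
    that island's worst agents (minimization assumed).

    islands: list of arrays, one sub-population per island
    fitness: list of arrays, fitness values matching each island"""
    n_islands = len(islands)
    n_mig = max(1, int(migration_rate * len(islands[0])))
    for i in range(n_islands):
        j = (i + 1) % n_islands                         # ring neighbour
        best_src = np.argsort(fitness[i])[:n_mig]        # best agents of island i
        worst_dst = np.argsort(fitness[j])[-n_mig:]      # worst agents of island j
        islands[j][worst_dst] = islands[i][best_src].copy()
    return islands
```

In a full design, migration would be triggered every few generations, with each island otherwise running the MPA update independently on its own sub-population.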