1 Introduction

Facility location problems (FLP) are a widely studied family of combinatorial optimization problems that consist of finding the best locations at which to site a certain number of facilities so as to serve all demand points efficiently. As described in López-Sánchez et al. (2020), the definition of “best” in the facility location literature varies, as it depends on the idiosyncrasies of each particular problem and its solution requirements. Briefly, in the p-Median problem, first studied in the mid-sixties (Hakimi 1964, 1965), the objective function seeks to minimize the total distance between demand points and their nearest facility; in the p-Center problem, first solved in Minieka (1970), the objective function seeks to minimize the maximum distance between demand points and their nearest facility; in the maximal covering location problem, first introduced in Church and Velle (1974), the objective consists in maximizing the total number of demand points covered, within a specific distance, by the facilities; and in the p-Dispersion Problem (Erkut 1990), the objective is to select facilities that are as dispersed as possible. Many other FLP variants exist that optimize different objectives. Traditionally, most FLPs have been widely studied not only theoretically but also practically, and many solution methods, both exact and approximate, have been successfully proposed.

FLPs become even more difficult when more than one objective function must be optimized at a time. This situation arises when companies, either public or private, deal with real-world FLPs, since these are inherently multi-objective in nature. Indeed, besides the typical objective function, expressed as a sum of various component expenses (most simply transportation and fixed costs), multi-objective approaches also account for resource utilization and customer responsiveness, in addition to the standard economic objectives. Environmental objectives, motivated by recent concerns regarding climate change, have also been considered. See Farahani et al. (2019) for a thorough review.

The optimization of such problems is hard, since the objectives to be optimized are usually in conflict with each other (Deb and Deb 2014). Indeed, when this happens, it is impossible to improve one objective without deteriorating another, requiring multi-objective optimization (Sánchez-Oro et al. 2020). Most of these problems appear when solving real-world challenges, because more than one perspective needs to be tackled simultaneously, such as the profit of the company, the benefits of the workers, or the satisfaction of the customers, among others (Franca et al. 2010; Sagrado et al. 2015; Pinto-Varela et al. 2011). Considering more than one perspective at the same time naturally gives rise to conflicts between the different objectives.

In the last few years, the number of scientific papers that address FLPs from a multi-objective point of view has increased considerably. For instance, the p-Center problem and the p-Dispersion Problem are studied from a bi-objective optimization perspective in Tutunchi and Fathi (2019) and Pérez-Peló et al. (2019). In Karatas and Yakıcı (2018) and López-Sánchez et al. (2020), a multi-objective FLP is addressed that combines three problems: the p-Center problem, the p-Median problem, and the maximal coverage location problem. Different objectives were simultaneously considered in Karatas (2017), where the coverage of the demand points is maximized while the total distance is minimized and the workload of the facilities is balanced. Similarly, Wang et al. (2018) maximize the total coverage while simultaneously minimizing the total distance. There exist many other FLP variants depending on the objectives to optimize. For a recent survey of this family of problems, see Boonmee et al. (2017).

This paper is focused on the Bi-objective p-Median and p-Dispersion problem (BpMD problem), a multi-objective facility location problem which was first addressed in Sayyady (2012). Both objectives were proven to be in conflict in Sayyady (2012) and Sayyady et al. (2015). Additionally, for a thorough analysis of the relations among several facility location problems from a multi-objective perspective, see Erkut and Neuman (1989).

As stated in Sayyady et al. (2015), the location of traffic sensors in highway networks motivated the consideration of the p-Median and p-Dispersion problems simultaneously. Nevertheless, this is not the only realistic problem that fits this bi-objective model. For instance, when locating the branches of a profitable chain of businesses in different sites of a region, all branches being essentially identical, the goal would be to locate them near the customers but, at the same time, as far as possible from one another so that they do not compete with each other; herein lies the importance of solving the BpMD problem. Sayyady et al. (2015) formulated the BpMD problem as an integer programming model and solved it using the \(\epsilon \)-Constraint (\(\epsilon \)-C) method. Furthermore, they proposed an alternative Integer Programming (IP) model and an iterative algorithm named the Incremental Algorithm. They also used a Lagrangian heuristic procedure to solve the IP model for larger instances. Later on, Colmenar et al. (2018) proposed the first metaheuristic for the BpMD problem, a Scatter Search algorithm with three improvement methods: a local search based on dominance, a local search that alternates both objectives, and a local search that tries to minimize the normalized distance to an ideal point. The algorithms providing the best results for the BpMD problem in the literature were proposed by Sayyady et al. (2015) and Colmenar et al. (2018); therefore, both algorithms have been considered in the competitive testing of Sect. 4. Furthermore, the comparison includes the well-known multi-objective evolutionary algorithms MOEA/D, Multi-Objective Evolutionary Algorithm based on Decomposition (Zhang and Li 2007); NSGA-II, elitist Non-dominated Sorting Genetic Algorithm (Deb et al. 2002); and SPEA2, Strength Pareto Evolutionary Algorithm (Zitzler et al. 2001).

In this paper, a novel approach based on the path relinking (PR) methodology is proposed, named reactive path relinking (RPR), which combines the two most extended path relinking strategies, Interior and Exterior PR (IPR and EPR, respectively), to solve the BpMD problem. The EPR strategy was first introduced by Duarte et al. (2015), where it was compared against the IPR strategy. The authors solved another interesting FLP, known as the Minimum Differential Dispersion Problem, whose objective is to minimize the difference between the sum of the maximum and the sum of the minimum distances between the demand points and the selected facilities. The novelty of the current proposal lies in the combination of both path relinking strategies (IPR and EPR) into a single one, called reactive path relinking. The proposed procedure decides which of the strategies should be used depending on the pair of solutions to combine. Computational results show the superiority of the proposed algorithm over the state-of-the-art procedures when considering the set of instances previously used in the related literature. Specifically, widely accepted multi-objective metrics have been used to compare the procedures, with RPR emerging as the most competitive algorithm in both quality and computing time.

The main contributions of this work are the following:

  • A novel metaheuristic algorithm based on the path relinking methodology has been proposed, the reactive path relinking (RPR) algorithm.

  • The RPR algorithm is applied to address a multi-objective facility location problem: the BpMD problem.

  • A complete analysis of the performance of the novel approach reactive path relinking has been included.

  • A competitive testing is performed with the RPR algorithm and the state-of-the-art algorithms.

  • The proposed algorithm and the obtained results are publicly available to ease further comparisons.

The rest of this paper is organized as follows. Section 2 describes the BpMD problem and includes some definitions related to the problem. Section 3 details the algorithm implemented to solve the problem under consideration. Section 4 includes the performance metrics used to test the quality of the algorithm and the computational experiments to check the validity of our proposal. Finally, Sect. 5 summarizes the paper and discusses future work.

2 Problem definition

Let N and F be two sets of locations, with \(|N| = n\) representing the demand points and \(|F|=m\) representing the candidate facilities. Without loss of generality, in this paper \(N=F\), since our instances consider that demand points are also candidates to host facilities. It is worth mentioning that this assumption is not always made, since in some cases N and F are disjoint sets. Let us define \(d_{ij} \ge 0\) as the distance between i and j, with \(i,j \in N\). This distance satisfies the triangle inequality \(d_{ij}\le d_{ik}+d_{kj}, \forall i,j,k \in N\), and \(d_{ii}=0\). The goal of a facility location problem is to select a subset of nodes to host facilities, \(S \subset N\) with \(|S|=p\), in order to optimize one or more objective functions.

The p-Median problem (pMP) selects a subset \(S \subset N\) of facilities, with \(|S| = p\), with the aim of minimizing the total distance between demand points and their closest facility. Given a solution S, the objective function of the p-Median problem is evaluated as:

$$\begin{aligned} f_{pM}(S) \leftarrow \sum _{j \in N \setminus S} \min _{i \in S} d_{ij} \end{aligned}$$

Then, the objective of the p-Median problem is to find a solution \(S^\star _{pM}\) with the minimum \(f_{pM}\) value. More formally,

$$\begin{aligned} S^\star _{pM} \leftarrow {{\,\mathrm{arg\,min}\,}}_{S \in {\mathbb {P}}} f_{pM}(S) \end{aligned}$$

where \({\mathbb {P}}\) is the set of all possible combinations of facilities that can be formed by selecting p elements from the set of available locations N.
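To make the evaluation concrete, the following minimal sketch (illustrative only, not the authors' implementation) computes \(f_{pM}\) from a precomputed distance matrix; the matrix d and the representation of a solution as a set of selected indices are assumptions made for this example.

```java
import java.util.Set;

final class PMedianObjective {
    // f_pM(S): sum, over every demand point outside S, of the distance to its
    // closest selected facility (d is assumed to be a full n x n distance matrix).
    static double evaluate(double[][] d, Set<Integer> s) {
        double total = 0.0;
        for (int j = 0; j < d.length; j++) {
            if (s.contains(j)) continue;                 // selected facilities are not demand points here
            double nearest = Double.POSITIVE_INFINITY;
            for (int i : s) {
                nearest = Math.min(nearest, d[i][j]);    // distance to the closest facility in S
            }
            total += nearest;
        }
        return total;
    }
}
```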

The p-Dispersion problem (pDP) seeks to maximize the minimum distance between the p selected facilities, with the aim of distributing the facilities throughout the available space and avoiding placing facilities close to each other. Given a solution S, the objective function of the p-Dispersion problem is evaluated as:

$$\begin{aligned} f_{pD}(S) \leftarrow \min _{i,j \in S, i \ne j} d_{ij} \end{aligned}$$

The p-Dispersion problem then consists in finding a solution \(S^\star _{pD}\) with the maximum \(f_{pD}\) value. Specifically,

$$\begin{aligned} S^\star _{pD} \leftarrow {{\,\mathrm{arg\,max}\,}}_{S \in {\mathbb {P}}} f_{pD}(S) \end{aligned}$$
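Analogously, a minimal sketch (again illustrative, under the same assumed representation) of the \(f_{pD}\) evaluation is:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

final class PDispersionObjective {
    // f_pD(S): minimum pairwise distance among the selected facilities.
    static double evaluate(double[][] d, Set<Integer> s) {
        List<Integer> selected = new ArrayList<>(s);
        double minDist = Double.POSITIVE_INFINITY;
        for (int a = 0; a < selected.size(); a++) {
            for (int b = a + 1; b < selected.size(); b++) {
                minDist = Math.min(minDist, d[selected.get(a)][selected.get(b)]);
            }
        }
        return minDist;
    }
}
```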

Therefore, the Bi-objective p-Median p-Dispersion problem, known as the BpMD problem, is focused on solving two problems at the same time, the pMP and the pDP. It should be emphasized that the objective functions of the pMP and the pDP are in conflict and, consequently, improving one objective function leads to the deterioration of the other. It is worth mentioning that a detailed integer programming model for the BpMD is included in Sayyady et al. (2015).

Next, an example with \(N = \{1,2,3,4,5\}\) (with \(|N| = n = 5\)) and the aim of locating \(p = 3\) facilities is included in order to show the conflict between the functions. Table 1 shows all possible solutions (10 combinations): the first column (named Facilities) shows the x and y coordinates of the chosen facilities, and columns 2 and 3 report the values of the two objective functions of the two considered problems, pMP and pDP, respectively. For this example, the distance between any two points is calculated as the Euclidean distance. The best objective function value for each problem in isolation is highlighted in bold font.

Table 1 Evaluation of all the feasible solutions for the example depicted in Fig. 1

Figures 1 and 2 represent the optimal locations for the p-Median problem and the p-Dispersion problem, respectively. Facilities are represented with squares and demand points with circles. Considering the Euclidean distance, the optimal solution value for the p-Median problem is 2.41 units and the facilities must be located at points (1, 4), (2, 2), and (4, 4); for the same facilities, the value of the objective function of the p-Dispersion problem is 2.24 units. Analogously, the optimal solution value for the p-Dispersion problem is 3.00 units and the facilities should be located at points (1, 1), (1, 4), and (4, 4), while the evaluation of the objective function for the p-Median problem yields 3.65 units.

Fig. 1: \((f^{\star }_{pM}, f_{pD})=(2.41, 2.24)\)

Fig. 2: \((f_{pM}, f^{\star }_{pD})=(3.65, 3.00)\)

As this toy example shows, both objectives are in conflict; that is, it is generally not possible to find a single solution with the optimum value for \(f_{pM}\) and \(f_{pD}\) simultaneously, and one objective cannot be improved without deteriorating the other.

Next, without loss of generality, the BpMD problem is formally defined as a bi-objective minimization combinatorial optimization problem:

$$\begin{aligned} \min _{S \in {\mathbb {P}}} [f_{pM}(S), -f_{pD}(S)] \end{aligned}$$

where \({\mathbb {P}}\) is called the feasible set and \(f_{pM}:{\mathbb {P}}\rightarrow {\mathbb {R}}\) and \(f_{pD}:{\mathbb {P}}\rightarrow {\mathbb {R}}\) are the two considered objective functions. The image of the feasible set, \(\{(f_{pM}(S),-f_{pD}(S)):S\in {\mathbb {P}}\}\), is named the objective space.

Let us briefly introduce the basic concepts of multi-objective optimization; a short code sketch of these dominance checks is given after the list. Given two solutions \(S_1\) and \(S_2\) in \({\mathbb {P}}\):

  • \(S_1\) weakly dominates \(S_2\), denoted as \(S_1\preceq S_2\), if and only if \(f_{pM}(S_1)\le f_{pM}(S_2)\) and \(-f_{pD}(S_1)\le -f_{pD}(S_2)\).

  • \(S_1\) dominates \(S_2\), denoted as \(S_1 \prec S_2\), if and only if \(S_1\preceq S_2\) and at least one objective function is strictly better, that is, either \(f_{pM}(S_1) < f_{pM}(S_2)\) and \(-f_{pD}(S_1)\le -f_{pD}(S_2)\), or \(f_{pM}(S_1) \le f_{pM}(S_2)\) and \(-f_{pD}(S_1) < -f_{pD}(S_2)\).

  • \(S_1\) strictly dominates \(S_2\), denoted as \(S_1\preceq \preceq S_2\), if and only if \(f_{pM}(S_1) < f_{pM}(S_2)\) and \(-f_{pD}(S_1) < -f_{pD}(S_2)\).
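The following sketch expresses these three relations directly on the objective vectors \((f_{pM}, -f_{pD})\), both components to be minimized; the two-element array representation is an assumption made for illustration.

```java
final class Dominance {
    // Each solution is represented here by its objective vector {f_pM, -f_pD},
    // where both components are to be minimized.
    static boolean weaklyDominates(double[] f1, double[] f2) {
        return f1[0] <= f2[0] && f1[1] <= f2[1];
    }

    static boolean dominates(double[] f1, double[] f2) {
        return weaklyDominates(f1, f2) && (f1[0] < f2[0] || f1[1] < f2[1]);
    }

    static boolean strictlyDominates(double[] f1, double[] f2) {
        return f1[0] < f2[0] && f1[1] < f2[1];
    }
}
```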

The goal of the BpMD problem is to find a high-quality approximation of the set of non-dominated solutions, also known as the efficient set or Pareto set. It should be borne in mind that the Pareto set contains only those solutions that are not dominated by any other solution. Notice that the image of the Pareto set is named the Pareto front. In order to achieve this goal, a metaheuristic procedure able to solve the BpMD problem is proposed. The metaheuristic finds a high-quality approximation of the Pareto front in a reasonable amount of time, even for problems of considerable size.

3 Algorithmic proposal

The methodology used to solve the BpMD problem is based on a simple but efficient algorithm which has been successfully applied to solve a wide variety of combinatorial optimization problems: the path relinking (PR) metaheuristic (see Resende et al. (2010); Pérez-Peló et al. (2020); Campos et al. (2014) for relevant research on PR). This metaheuristic was first proposed in the framework of tabu search to integrate intensification and diversification strategies (Glover and Laguna 1998). The success of PR relies on exploring trajectories that connect pairs of high-quality solutions (hereinafter, the initiating and guiding solutions, \(S_i\) and \(S_g\), respectively). The procedure generates new intermediate solutions that will hopefully be better than the pair of connected solutions. Specifically, PR links \(S_i\) to \(S_g\) by generating a path of solutions that share attributes of both. Without loss of generality, combining two solutions \(S_i\) and \(S_g\) consists in iteratively including attributes of \(S_g\) in \(S_i\) until \(S_i = S_g\).

In this paper, two different PR strategies, Interior and Exterior PR (IPR and EPR, respectively), are considered. Additionally, a novel approach based on the combination of these two path relinking strategies is proposed. Both strategies were studied in depth, although separately, by Duarte et al. (2015). The novelty of our proposal lies in the combination of both, IPR and EPR, into a single strategy which is able to autonomously decide which of the two should be used by analyzing the pair of solutions to combine.

3.1 Constructive method

As previously explained, the PR methodology requires a set of efficient solutions to combine. Although they can be constructed in a random manner, a greedy constructive procedure is proposed in order to obtain promising points of the search space. Of course, another way to obtain a good set of efficient solutions would be to use multi-start algorithms such as GRASP (Greedy Randomized Adaptive Search Procedure), see Feo and Resende (1995). The constructive method is responsible for obtaining the initial set of efficient solutions. It uses a greedy criterion to generate solutions from scratch through an iterative algorithm. At each iteration, the algorithm makes the best choice with respect to the objective function to be optimized.

The BpMD problem is a bi-objective optimization problem, so two conflicting objective functions must be optimized. Therefore, to consider both objectives at the same time in the constructive procedure, this bi-objective optimization problem is turned into a single-objective optimization problem. There are several ways of combining two or more objectives in an optimization procedure; for example, Marti et al. (2015) presents an approach for switching between different objectives. In this research, an aggregation of the considered objectives is proposed, resulting in a simple yet effective mechanism. More precisely, given a partial solution \(S'\) with \(|S'| < p\) and a node \(v \in N {\setminus } S'\), the proposed function is defined as \(\beta \cdot f_{pM}(S'\cup \{v\})-(1-\beta ) \cdot f_{pD}(S' \cup \{v\})\), where \(\beta \in [0,1]\) is a weight reflecting the relative importance of the pMP objective.

In order to normalize both objective functions to the range [0, 1], the following intervals are defined: \(f_{pM}(S' \cup \{v\})\in [\min _{pM},\max _{pM}]=[0, \max \{d_{ij}: i,j \in N,\ i\ne j\}]\) and \(f_{pD}(S' \cup \{v\})\in [\min _{pD},\max _{pD}]=[0, \max \{d_{ij}: i,j \in N,\ i\ne j\}]\), where \(\min _{pM}\) and \(\max _{pM}\) represent the minimum and maximum values of \(f_{pM}\), respectively, and, similarly, \(\min _{pD}\) and \(\max _{pD}\) are the minimum and maximum values of \(f_{pD}\), respectively. For the sake of convenience, the objective function of the pDP is transformed into a minimization problem by simply switching the sign, i.e., \(-f_{pD}(S' \cup \{v\})\in [-\max _{pD},-\min _{pD}]\).

Therefore, given a partial solution \(S'\), a candidate \(v \in N {\setminus } S'\), and \(\beta \), the considered greedy function is defined as follows:

$$\begin{aligned} g(S' \cup \{v\}, \beta ) &= \beta \, \frac{f_{pM}(S' \cup \{v\})-\min _{pM}}{\max _{pM}-\min _{pM}} + (1-\beta )\, \frac{-f_{pD}(S' \cup \{v\})-(-\max _{pD})}{-\min _{pD}-(-\max _{pD})} \\ &= \beta \, \frac{f_{pM}(S' \cup \{v\})}{\max _{pM}} + (1-\beta )\, \frac{\max _{pD}-f_{pD}(S' \cup \{v\})}{\max _{pD}} \end{aligned}$$
(1)
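As an illustrative sketch, assuming \(\min _{pM}=\min _{pD}=0\) and \(\max _{pM}=\max _{pD}=\max _{i\ne j} d_{ij}\) as in the text, Eq. 1 can be coded as follows (the signature is hypothetical; fPM and fPD denote the objective values of \(S'\cup \{v\}\)):

```java
final class GreedyScore {
    // g(S' ∪ {v}, β) of Eq. (1): a convex combination of the normalized f_pM
    // and the normalized (sign-switched) f_pD, both to be minimized.
    static double evaluate(double fPM, double fPD, double maxDistance, double beta) {
        double normalizedPM = fPM / maxDistance;                   // in [0, 1] by assumption
        double normalizedPD = (maxDistance - fPD) / maxDistance;   // -f_pD shifted into [0, 1]
        return beta * normalizedPM + (1.0 - beta) * normalizedPD;
    }
}
```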

Algorithm 1 shows the pseudocode for the greedy construction used in this paper, which receives as input parameters the set of candidate elements able to host a facility and the step used to update \(\beta \). It is worth mentioning that, since a multi-objective optimization problem is being addressed, the output of the constructive procedure is a set of efficient solutions.

[Algorithm 1: pseudocode of the greedy constructive procedure]

The algorithm starts with an initially empty set of efficient solutions, ES, and the parameter \(\beta \) initialized to zero (steps 1 and 2). With the aim of increasing the diversity of the set of constructed solutions, the values of \(\beta \) are uniformly selected in the interval [0, 1]. Specifically, \(\beta \) ranges from 0.0 to 1.0, and each iteration increases its value by the input parameter step (see steps 3 to 15).

Greedy algorithms are mainly deterministic procedures, since at each step they select the best element to be included in the partial solution under construction. In order to further improve diversification, it is proposed to construct several solutions, each one initialized with a different element. In particular, if there are n elements, at most n different solutions can be constructed (steps 4 to 13).

The method selects the first facility, v, to be added to the solution \(S'\) (step 5), creating the list of candidate nodes, C, with all nodes except the one that has already been included in the solution (step 6). The algorithm includes one node at each iteration until p nodes have been included in \(S'\) (steps 7 to 11). In each iteration, the most promising node \(v'\) is selected from the list of candidates by using the aforementioned greedy function (see Eq. 1). Then, the selected node is included in the solution (step 9) and removed from the candidate list (step 10).

As a bi-objective optimization problem is being addressed, once a feasible solution is found, it must be checked whether it can be admitted in the set of efficient solutions (step 12) or not. Specifically, the function \( Insert \& Update (S',ES)\) determines whether the solution \(S'\) is an efficient solution and, if so, inserts it in the set of efficient solutions, removing all solutions of the set that are dominated by \(S'\). Finally, before starting a new iteration, the value of \(\beta \) is updated in step 14. The greedy algorithm ends by returning the set of efficient solutions (step 16).
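The following sketch illustrates the overall flow of Algorithm 1 under the assumptions above; it is not the authors' code, the objective values are recomputed from scratch for clarity, and the helper insertAndUpdate simply keeps the non-dominated solutions.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

final class GreedyConstructive {

    // Builds one solution per (beta, starting node) pair and keeps the non-dominated ones.
    static List<Set<Integer>> construct(double[][] d, int p, double step) {
        int n = d.length;
        double maxD = 0.0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) maxD = Math.max(maxD, d[i][j]);

        List<Set<Integer>> es = new ArrayList<>();
        for (double beta = 0.0; beta <= 1.0 + 1e-9; beta += step) {
            for (int start = 0; start < n; start++) {
                Set<Integer> s = new HashSet<>();
                s.add(start);
                while (s.size() < p) {
                    int best = -1;
                    double bestScore = Double.POSITIVE_INFINITY;
                    for (int v = 0; v < n; v++) {
                        if (s.contains(v)) continue;
                        s.add(v);                              // tentatively add v and evaluate Eq. (1)
                        double score = beta * fPM(d, s) / maxD
                                     + (1.0 - beta) * (maxD - fPD(d, s)) / maxD;
                        s.remove(v);
                        if (score < bestScore) { bestScore = score; best = v; }
                    }
                    s.add(best);
                }
                insertAndUpdate(s, es, d);
            }
        }
        return es;
    }

    static double fPM(double[][] d, Set<Integer> s) {
        double total = 0.0;
        for (int j = 0; j < d.length; j++) {
            if (s.contains(j)) continue;
            double nearest = Double.POSITIVE_INFINITY;
            for (int i : s) nearest = Math.min(nearest, d[i][j]);
            total += nearest;
        }
        return total;
    }

    static double fPD(double[][] d, Set<Integer> s) {
        double min = Double.POSITIVE_INFINITY;
        for (int i : s) for (int j : s) if (i < j) min = Math.min(min, d[i][j]);
        return min;
    }

    // Keeps s only if no stored solution weakly dominates it; removes solutions dominated by s.
    static void insertAndUpdate(Set<Integer> s, List<Set<Integer>> es, double[][] d) {
        double[] f = {fPM(d, s), -fPD(d, s)};
        for (Set<Integer> other : es) {
            double[] g = {fPM(d, other), -fPD(d, other)};
            if (g[0] <= f[0] && g[1] <= f[1]) return;          // s is discarded
        }
        es.removeIf(other -> {
            double[] g = {fPM(d, other), -fPD(d, other)};
            return f[0] <= g[0] && f[1] <= g[1];
        });
        es.add(s);
    }
}
```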

3.2 Local search method

Once an initial set of efficient solutions is attained, a second phase is applied, which consists in applying a local search to all the efficient solutions obtained. Since a facility location problem is being solved, the neighborhood of a solution S, denoted as \({\mathcal {N}}(S)\), for the BpMD problem is defined as the set of solutions that can be obtained by exchanging a selected facility with any non-selected facility. Specifically,

$$\begin{aligned} {\mathcal {N}}(S) \leftarrow \{ (S \setminus \{ u \}) \cup \{ v \}: u \in S, v \in N \setminus S \} \end{aligned}$$

The proposed local search follows a first-improvement strategy. In particular, the method traverses \({\mathcal {N}}(S)\), performing the first move that leads to a better solution with respect to the considered objective. If such a move exists, the search is restarted from the neighborhood of the updated solution; otherwise, the method stops, since no better solution can be found. It is important to bear in mind that two objective functions are being addressed, so there are different variants to consider (see Mladenović et al. 2007; Erkut 1990). The weighted sum function previously defined in Eq. 1 is again used to guide the search. Specifically, the procedure starts with \(\beta = 0\), which means that only \(f_{pD}\) is improved (ignoring the other objective). Notice that after each move, the corresponding neighboring solution is tested to determine whether it is admitted in the set of efficient solutions or not. This strategy is maintained until \(\beta = 1.0\), which means that only \(f_{pM}\) is improved, ignoring \(f_{pD}\).

Algorithm 2 shows the pseudocode of the proposed improvement strategy used in this paper. It receives three input parameters: an initial solution, S; the set of efficient solutions, ES; and the \(\beta \) parameter. As customary, the local search ends when no improvement is found (steps 2 to 11). In each iteration, this method explores the aforementioned neighborhood (steps 4 to 10) by following the first-improvement strategy. Specifically, the exploration is halted once a better neighboring solution is found (steps 5 to 8). Notice that the comparison between S and \(S'\) is done by considering Eq. 1 (step 5). Notice also that every explored solution must be tested to determine whether it qualifies to enter ES or not (step 9). Finally, the local search algorithm returns a set of efficient solutions (step 12).
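A compact sketch of this swap-based first-improvement search is shown below; it guides the search with the aggregated score of Eq. 1 and, for brevity, omits the update of the efficient set performed after each move (names and signatures are illustrative).

```java
import java.util.HashSet;
import java.util.Set;

final class SwapLocalSearch {

    // Repeatedly performs the first improving swap (u out, v in) found, until no
    // neighbor improves the aggregated objective for the given beta.
    static Set<Integer> improve(double[][] d, Set<Integer> s, double beta, double maxD) {
        boolean improved = true;
        while (improved) {
            improved = false;
            outer:
            for (int u : new HashSet<>(s)) {
                for (int v = 0; v < d.length; v++) {
                    if (s.contains(v)) continue;
                    Set<Integer> neighbor = new HashSet<>(s);
                    neighbor.remove(u);
                    neighbor.add(v);
                    if (score(d, neighbor, beta, maxD) < score(d, s, beta, maxD)) {
                        s = neighbor;            // first improvement: accept and restart
                        improved = true;
                        break outer;
                    }
                }
            }
        }
        return s;
    }

    // Aggregated objective of Eq. (1), with the minimum values assumed to be 0.
    static double score(double[][] d, Set<Integer> s, double beta, double maxD) {
        double fPM = 0.0;
        for (int j = 0; j < d.length; j++) {
            if (s.contains(j)) continue;
            double nearest = Double.POSITIVE_INFINITY;
            for (int i : s) nearest = Math.min(nearest, d[i][j]);
            fPM += nearest;
        }
        double fPD = Double.POSITIVE_INFINITY;
        for (int i : s) for (int j : s) if (i < j) fPD = Math.min(fPD, d[i][j]);
        return beta * fPM / maxD + (1.0 - beta) * (maxD - fPD) / maxD;
    }
}
```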

[Algorithm 2: pseudocode of the local search method]

So far, our proposal is quite similar to GRASP. However, our way of constructing solutions is always deterministic, with the only diversification being the change of the initial node added to the partial solution under construction. Therefore, n different solutions are obtained in the constructive phase. Once a feasible solution is obtained, the local search is applied, as is done in GRASP.

3.3 Interior path relinking

Interior PR (IPR) creates a path that connects two high-quality feasible solutions, exploring new solutions while traversing the path. The rationale behind this is that if the initiating and guiding solutions are promising, then there is a high probability of finding good solutions in the path that connects them.

The path between \(S_i\) and \(S_g\) is created including in \(S_i\) elements that are in \(S_g \setminus S_i\), exchanging them with those elements in \(S_i \setminus S_g\). In other words, it consists in iteratively modifying \(S_i\) to become more similar to \(S_g\) in every step, until reaching \(S_g\).

Figure 3 shows an example where the IPR is illustrated. The example considers a set of \(|N| = 10\) nodes labelled from 1 to 10 and \(p=5\). Let \(S_i=\{1,2,3,4,5\}\) and \(S_g=\{1,2,6,7,8\}\) be the initiating and guiding solutions, respectively. At the first step of the IPR, element 3 is replaced by element 6, obtaining the intermediate solution \(S_1=\{1,2,6,4,5\}\); at the second step, element 4 is exchanged with element 7, resulting in another intermediate solution, \(S_2=\{1,2,6,7,5\}\); and at the last step, element 5 is substituted by element 8, thus obtaining the guiding solution, \(S_g\). Similarly, to transform \(S_g\) into \(S_i\), the first step of the IPR replaces element 6 by element 3, obtaining the intermediate solution \(S'_1=\{1,2,3,7,8\}\); then, at the second step, element 7 is exchanged with element 4, resulting in the solution \(S'_2=\{1,2,3,4,8\}\); and the last step substitutes element 8 by element 5, thus obtaining the initiating solution, \(S_i\).

Fig. 3: Interior path relinking

Notice that every solution found along the path is considered for inclusion in the set of efficient solutions.
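A sketch of the IPR path generation between \(S_i\) and \(S_g\) could look as follows; the order in which elements are exchanged is randomized here for illustration, and the class and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

final class InteriorPathRelinking {

    // Generates the intermediate solutions on the path from si to sg:
    // at each step, one element of si \ sg is replaced by one element of sg \ si.
    static List<Set<Integer>> path(Set<Integer> si, Set<Integer> sg) {
        List<Integer> toRemove = new ArrayList<>(si);
        toRemove.removeAll(sg);                          // elements in si but not in sg
        List<Integer> toAdd = new ArrayList<>(sg);
        toAdd.removeAll(si);                             // elements in sg but not in si
        Collections.shuffle(toRemove);
        Collections.shuffle(toAdd);

        List<Set<Integer>> intermediates = new ArrayList<>();
        Set<Integer> current = new HashSet<>(si);
        for (int step = 0; step < toRemove.size(); step++) {
            current.remove(toRemove.get(step));
            current.add(toAdd.get(step));
            intermediates.add(new HashSet<>(current));   // candidate for the efficient set
        }
        return intermediates;                            // the last element equals sg
    }
}
```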

3.4 Exterior path relinking

Exterior PR (EPR) (Duarte et al. 2015) follows the opposite idea of IPR. If the solutions in the efficient set are similar to one another, then the path created with IPR will be rather short, thus leading to a small number of new solutions explored. This behavior results in maintaining the similarity among solutions in the efficient set. It is then necessary to include some diversity in the search in order to explore a different region of the search space.

Given an initiating solution \(S_i\) and a guiding solution \(S_g\), EPR iteratively includes in \(S_i\) elements that are not in \(S_g\), with the aim of reaching new solutions which are diverse with respect to both \(S_i\) and \(S_g\). This strategy will eventually lead the algorithm to explore a wider portion of the search space to increase the quality of the efficient set.

Figure 4 depicts an example where the EPR is illustrated. Again, a set of \(|N| = 10\) nodes is considered, labelled from 1 to 10, with \(p=5\). Let \(S_i=\{1,2,3,4,5\}\) and \(S_g=\{1,2,3,6,7\}\) be the pair of solutions to combine. The task is to replace the shared elements \(S_i\cap S_g=\{1,2,3\}\) with elements from the set \(N\setminus (S_i\cup S_g)=\{8,9,10\}\). At the first step of the EPR, element 1 is replaced by element 8, giving the intermediate solution \(S_{1}=\{8,2,3,4,5\}\); at the second step, element 2 is exchanged with element 9, obtaining \(S_{2}=\{8,9,3,4,5\}\); and at the third step, element 3 is substituted by element 10, resulting in the intermediate solution \(S_{3}=\{8,9,10,4,5\}\). Similarly, starting from \(S_g=\{1,2,3,6,7\}\) toward \(S_i\), the first step of the EPR exchanges element 1 with element 8, resulting in the intermediate solution \(S'_{1}=\{8,2,3,6,7\}\); the second step replaces element 2 by element 9, giving the solution \(S'_2=\{8,9,3,6,7\}\); and, finally, the last step substitutes element 3 by element 10, obtaining the intermediate solution \(S'_3=\{8,9,10,6,7\}\).

Fig. 4: Exterior path relinking
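The EPR path can be sketched in the same way; now the shared elements \(S_i\cap S_g\) are replaced by elements outside \(S_i\cup S_g\), so the full node set N is needed as an extra argument (names are again hypothetical).

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

final class ExteriorPathRelinking {

    // Generates intermediate solutions moving away from both si and sg:
    // elements of si ∩ sg are replaced by elements of N \ (si ∪ sg).
    static List<Set<Integer>> path(Set<Integer> si, Set<Integer> sg, Set<Integer> allNodes) {
        List<Integer> toRemove = new ArrayList<>(si);
        toRemove.retainAll(sg);                          // shared elements si ∩ sg
        List<Integer> toAdd = new ArrayList<>(allNodes);
        toAdd.removeAll(si);
        toAdd.removeAll(sg);                             // elements outside si ∪ sg
        Collections.shuffle(toRemove);
        Collections.shuffle(toAdd);

        int steps = Math.min(toRemove.size(), toAdd.size());
        List<Set<Integer>> intermediates = new ArrayList<>();
        Set<Integer> current = new HashSet<>(si);
        for (int step = 0; step < steps; step++) {
            current.remove(toRemove.get(step));
            current.add(toAdd.get(step));
            intermediates.add(new HashSet<>(current));
        }
        return intermediates;
    }
}
```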

3.5 Reactive path relinking

One of the main contributions of this work is the proposal of a novel path relinking approach that selects IPR or EPR analyzing the initiating and guiding solutions. This algorithm, named reactive path relinking (RPR), combines the two strategies of the PR previously described, IPR and EPR.

RPR analyzes the similarity between the initiating \(S_i\) and guiding \(S_g\) solutions. If the similarity between them does not exceed a certain threshold k, then it is assumed that \(S_i\) and \(S_g\) are different enough to apply IPR with the aim of intensifying. Otherwise, EPR is applied to diversify the search. The combination of both strategies balances intensification and diversification with the aim of finding a high-quality set of efficient solutions. The value of k is an input parameter of the algorithm (see Sect. 4 for a detailed experimentation to select the best value) which represents the minimum number of facilities that \(S_i\) and \(S_g\) must have in common to be considered similar solutions. Algorithm 3 shows the pseudocode for the considered RPR algorithm.

[Algorithm 3: pseudocode of the reactive path relinking]

The algorithm receives 3 input parameters: the set of efficient solutions, \( ES \); k, the threshold used to decide whether two solutions are similar or not; and the weight \(\beta \). RPR starts by initializing the backup copy of the efficient set (step 1) and the best solution found during the complete execution of RPR (step 2). Additionally, the set T, used to store the already combined pairs of solutions, is initialized (step 3). Then, the main body of the algorithm is repeated while the generation of paths finds non-dominated intermediate solutions (steps 4 to 32). In each iteration, all elements of the efficient set are first shuffled to avoid prioritizing any search direction (step 5). Then, in order to increase the efficiency of the method, those pairs of solutions for which a path was generated in previous iterations are discarded (step 8). After that, for each non-explored pair of solutions, their similarity is evaluated. If they are different enough (step 9), the IPR strategy is considered: the set of nodes to remove, denoted by R, contains those which are in \(S_i\) but not in \(S_g\) (step 10), and the set of candidate nodes to add, denoted by A, contains those which are in \(S_g\) but not in \(S_i\) (step 11). Otherwise, the EPR strategy is considered (step 12), conforming R with the nodes that \(S_i\) and \(S_g\) have in common (step 13) and A with those that are neither in \(S_i\) nor in \(S_g\) (step 14). Then, the solution used to traverse the path, S, is initialized with the initiating solution \(S_i\) (step 16).

The method iterates while there are nodes to be removed (steps 17 to 27). In each iteration, the nodes that will be removed from and added to the solution are selected at random from R and A, respectively (steps 18 and 19). Then, the solution is updated (step 20), as well as the set of nodes to be removed (step 21) and the set of candidates to be added (step 22).

Considering that a multi-objective optimization problem is being solved, it must be checked whether any constructed solution can be admitted in the efficient set (step 23). Moreover, it is tested whether any of the solutions found during the generation of a path is the best one with respect to \(f_{r}\) (steps 24 to 26).

Once a specific path between \(S_i\) and \(S_g\) has been completed, the corresponding pair is stored in T to avoid exploring the same path in future iterations (step 28). At the end of RPR, the improvement method is applied with the aim of further improving the generated solutions, updating, if required, the efficient set (step 33). Finally, the algorithm returns the updated \( ES \).
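The core reactive decision of Algorithm 3, namely building the removal set R and the addition set A from the similarity \(|S_i\cap S_g|\) and the threshold k, can be sketched as follows; the surrounding control flow, the Insert&Update filter, and the final local search are omitted, and all names are illustrative.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

final class ReactivePathRelinking {

    // Builds the removal set R and the addition set A according to the reactive rule:
    // IPR when si and sg share fewer than k facilities, EPR otherwise.
    static List<Set<Integer>> relink(Set<Integer> si, Set<Integer> sg,
                                     Set<Integer> allNodes, int k) {
        Set<Integer> shared = new HashSet<>(si);
        shared.retainAll(sg);

        List<Integer> r;   // nodes to remove from the traversing solution
        List<Integer> a;   // candidate nodes to add
        if (shared.size() < k) {                    // solutions different enough: intensify (IPR)
            r = new ArrayList<>(si); r.removeAll(sg);
            a = new ArrayList<>(sg); a.removeAll(si);
        } else {                                    // solutions too similar: diversify (EPR)
            r = new ArrayList<>(shared);
            a = new ArrayList<>(allNodes); a.removeAll(si); a.removeAll(sg);
        }
        Collections.shuffle(r);
        Collections.shuffle(a);

        List<Set<Integer>> visited = new ArrayList<>();
        Set<Integer> current = new HashSet<>(si);
        int steps = Math.min(r.size(), a.size());
        for (int step = 0; step < steps; step++) {  // random removal/addition, as in steps 18 and 19
            current.remove(r.get(step));
            current.add(a.get(step));
            visited.add(new HashSet<>(current));    // each one is a candidate for ES and for best-so-far
        }
        return visited;
    }
}
```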

3.6 Computational complexity

This section is devoted to analyzing the computational complexity of the proposed algorithm. In particular, each component of the complete algorithm is evaluated and, finally, the complexity of the whole algorithm is presented.

The first component of the proposed algorithm is the greedy constructive procedure. This method constructs a solution starting from each node \(i \in N\) and, in each construction, the best node is selected for inclusion at each iteration, resulting in a complexity of \(O(n\cdot p)\).

Then, for each constructed solution, the local search method is applied which, following a first-improvement approach, requires traversing the complete neighborhood in the worst case, i.e., swapping each selected element \(s \in S\) with every available node in \(N\setminus S\), which results in a complexity of \(O(n\cdot p)\).

These two phases, constructive and local search, are repeated during \(\beta / \textit{step}\) iterations, thus having a computational complexity of \(O(n \cdot p \cdot \beta / \textit{step})\).

Once the initial set of non-dominated solutions, ES, is constructed, the RPR is applied. The RPR is executed for each pair of solutions in ES, avoiding repeated pairs (i.e., the combination of \(S_2\) and \(S_1\) is not performed if the combination of \(S_1\) and \(S_2\) has already been executed). Therefore, the complexity of this method is \(O(|ES| \cdot \log |ES| \cdot p)\). Notice that the inclusion of p in this expression is due to the fact that, in the worst case, each combination performs p iterations (when the combined solutions are completely different in IPR, or identical in EPR). At the end of each combination, the local search is applied, and thus RPR presents a total complexity of \(O(|ES| \cdot \log |ES| \cdot p^2 \cdot n)\).

Finally, the computational complexity of the complete algorithm is the maximum between the complexity of generating the set of non-dominated solutions and the complexity of executing the RPR over the set of non-dominated solutions, that is the maximum between \(O(n \cdot p \cdot \beta / \textit{step})\) and \(O(|ES| \cdot \log |ES| \cdot p^2 \cdot n)\), respectively.

4 Computational results

This section presents the computational results obtained with the proposed metaheuristic and a comparison against several of the best-known multi-objective algorithms. All experiments have been performed on an Intel Core i7-7700HQ (4 x 2.8 GHz) with 8 GB of RAM, and the algorithms were implemented in Java 9. The source code has also been made publicly available.

The results of this section are divided into two parts: preliminary and final experiments. In the former, the influence of each part of the proposed algorithm is analyzed, as well as the corresponding parameter settings, while the latter shows the performance of the proposed algorithm, configured with the best selected parameters, over the complete set of instances. The quality of the obtained results is measured by comparing them with the best approximation of the Pareto front found (hereafter B), since the optimal Pareto front of the BpMD problem is unknown.

To evaluate the performance of the proposal, three sets of instances have been considered, denoted as the pmed set, the D set, and the kmedian set. The first one consists of 40 instances of the well-known OR-library (Beasley 1990). The number of nodes ranges from 100 to 900, while the number of facilities to select is in the range [5, 200], giving a large variety of combinations. The second set consists of 9 medium-sized instances with \(n = 250\) and \(p = 25\), named D250, and 10 instances with \(n = 350\) and \(p = 35\), named D350, previously solved by Colmenar et al. (2018) and by Sayyady et al. (2015). The last set consists of 15 large-sized instances: the smallest one contains \(n = 1000\) nodes, and the following instances increase by 1000 nodes up to \(n = 5000\); each instance is solved with 3 different p values (\(n\cdot 0.05\), \(n\cdot 0.15\), and \(n\cdot 0.25\)). They have been downloaded from UFLLib.

Table 2 summarizes the main characteristics of each instance. To prevent the well-known over-fitting effect, a representative subset of instances (approximately 25%) with different characteristics is randomly selected.

Table 2 Main characteristics of each considered instance

Before starting with the computational results, it is necessary to briefly describe how the quality of the algorithms is measured when dealing with multi-objective optimization problems. As is well known, the comparison among multi-objective optimization algorithms cannot be done by comparing single objective function values; instead, Pareto fronts must be compared, since multi-objective optimization yields a set of efficient solutions instead of a unique solution as in single-objective optimization. Therefore, to compare different algorithms it is necessary to use metrics that evaluate the quality of each obtained Pareto front. Basically, it is necessary to measure the cardinality of the Pareto front, the proximity of the obtained solutions to the best/optimal Pareto front, and the diversity of the solutions. In this work, several of the most extended multi-objective metrics have been considered (see Durillo and Nebro 2011 or Li and Yao 2019): the number of efficient solutions, coverage, spread, hypervolume, the \(\epsilon \)-indicator, generational distance, and inverted generational distance.

The number of efficient solutions in the Pareto front (|A|) counts the number of efficient solutions found by the considered algorithm. The decision-maker usually prefers a larger number of efficient solutions. However, this metric does not take into account the quality of the solutions.

The coverage metric, C(A, B), evaluates the proportion of solutions of the Pareto front obtained by algorithm A that are weakly dominated by some distinct solution of the best/optimal Pareto front B. Let r index the objective functions; then the coverage is computed as:

$$\begin{aligned} C(A,B)=\frac{|\{a\in A \mid \exists \, b \in B\setminus \{a\}: f_{r}(b)\le f_{r}(a)\ \forall r\}|}{|A|}. \end{aligned}$$

Therefore, the smaller the value of C(A, B), the better the algorithm A is. Note that \(0\le C(A,B)\le 1\): \(C(A,B)=0\) means that no solution obtained by algorithm A is weakly dominated by a distinct solution of the best/optimal Pareto front, and \(C(A,B)=1\) means that all the solutions obtained by algorithm A are weakly dominated by solutions of the best/optimal Pareto front.
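Under this interpretation, the coverage can be sketched as follows, representing each front as a list of objective vectors to be minimized (an assumed representation for illustration):

```java
import java.util.Arrays;
import java.util.List;

final class CoverageMetric {

    // Fraction of front A whose points are weakly dominated by a distinct point of
    // the reference front B; every point is an objective vector to be minimized.
    static double coverage(List<double[]> a, List<double[]> b) {
        int covered = 0;
        for (double[] pa : a) {
            for (double[] pb : b) {
                if (Arrays.equals(pa, pb)) continue;     // ignore identical points
                boolean dominates = true;
                for (int r = 0; r < pa.length; r++) {
                    if (pb[r] > pa[r]) { dominates = false; break; }
                }
                if (dominates) { covered++; break; }
            }
        }
        return a.isEmpty() ? 0.0 : (double) covered / a.size();
    }
}
```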

The spread (\(\Delta \)) measures the extent of spread achieved by the set of solutions in the Pareto front obtained by algorithm A.

$$\begin{aligned} \Delta =\frac{d_{f}+d_{l}+\sum _{i=1}^{|A|-1}|d_{i}-{\bar{d}}|}{d_{f}+d_{l}+(|A|-1){\bar{d}}} \end{aligned}$$

where \(d_i\) is the Euclidean distance between consecutive solutions in the Pareto front obtained with algorithm A, \({\bar{d}}\) is the mean of these distances, and \(d_f\) and \(d_l\) are, respectively, the Euclidean distances from the extreme solutions of this front to the extreme solutions of the best/optimal Pareto front in the objective space. The best possible value is \(\Delta =0\), which indicates a perfect spread of the solutions in the Pareto front obtained with the considered algorithm.

The hypervolume (HV) evaluates the volume of the objective space that is covered by the Pareto front obtained with algorithm A. To calculate the HV, for each solution \(a\in A\), a hypercube \(v_a\) is constructed with a reference point W (a vector of worst objective function values) and the solution a as the diagonal corners of the hypercube. Then, the hypervolume is the volume of the union of all these hypercubes:

$$\begin{aligned} HV = volume \left( \bigcup _{a=1}^{|A|}v_{a}\right) \end{aligned}$$

Then, the larger the HV value, the better the set of efficient solutions obtained by the algorithm.
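For the bi-objective case (both components to be minimized after the sign switch of \(f_{pD}\)), the hypervolume reduces to a sum of rectangle areas once the front is sorted; a minimal sketch with an explicit reference point W is shown below (representation assumed for illustration).

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

final class Hypervolume2D {

    // Bi-objective hypervolume of a mutually non-dominated front (minimization in
    // both components) with respect to a reference point w dominated by every front point.
    static double compute(List<double[]> front, double[] w) {
        List<double[]> sorted = new ArrayList<>(front);
        sorted.sort(Comparator.comparingDouble(p -> p[0]));   // increasing first objective
        double volume = 0.0;
        double previousSecond = w[1];
        for (double[] p : sorted) {
            // each point adds a rectangle bounded by the reference point in the first
            // objective and by the previous point in the second objective
            volume += (w[0] - p[0]) * (previousSecond - p[1]);
            previousSecond = p[1];
        }
        return volume;
    }
}
```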

The \(\epsilon \)-indicator (\(\epsilon \)) measures the smallest value \(\epsilon \) such that every point of the best/optimal Pareto front is \(\epsilon \)-dominated by some point of the Pareto front obtained with algorithm A:

$$\begin{aligned} \epsilon =\inf \{ \epsilon \in {\mathbb {R}}:\forall b\in B \exists a\in A \text{ such } \text{ that } f_{k}(a) \le \epsilon \cdot f_{k}(b) \forall k \}. \end{aligned}$$

Therefore, the smaller the \(\epsilon \)-indicator, the better.

The generational distance (GD) measures how far the solutions of the Pareto front obtained by the algorithm A are from those solutions in the best/optimal Pareto front (B).

$$\begin{aligned} GD=\frac{\sqrt{\sum _{i=1}^{|A|}d_{i}^2}}{|A|} \end{aligned}$$

where \(d_i\) is the Euclidean distance between each of these solutions and the nearest solution of the best/optimal Pareto front. A value of \(GD = 0\) indicates that all the solutions are in the best Pareto front.

The inverted generational distance (IGD) is an inversion of the generational distance metric with the aim of measuring the distance from the best/optimal Pareto front to the set of efficient solutions obtained by the algorithm A.

$$\begin{aligned} IGD=\frac{\sqrt{\sum _{i=1}^{|B|}d_{i}^2}}{|B|}. \end{aligned}$$

where \(d_i\) is now the Euclidean distance between each solution of the best/optimal Pareto front and its nearest solution in the front obtained by algorithm A. The smaller the value of IGD, the better. Note that the main difference between GD and IGD is that IGD does not miss any part of the best/optimal Pareto front, since every reference solution contributes to the measure.
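Both distance-based metrics can be sketched together, since IGD is simply GD with the roles of the two fronts exchanged (and the average taken over the reference front); the list-of-objective-vectors representation is again an assumption.

```java
import java.util.List;

final class GenerationalDistances {

    // GD(A, B): root of the sum of squared distances from each point of front A to its
    // nearest point in the reference front B, divided by |A|. IGD(A, B) is GD(B, A).
    static double gd(List<double[]> a, List<double[]> b) {
        double sumSquared = 0.0;
        for (double[] pa : a) {
            double nearest = Double.POSITIVE_INFINITY;
            for (double[] pb : b) {
                double dist = 0.0;
                for (int r = 0; r < pa.length; r++) {
                    dist += (pa[r] - pb[r]) * (pa[r] - pb[r]);
                }
                nearest = Math.min(nearest, Math.sqrt(dist));
            }
            sumSquared += nearest * nearest;
        }
        return Math.sqrt(sumSquared) / a.size();
    }

    static double igd(List<double[]> a, List<double[]> b) {
        return gd(b, a);   // distances measured from the reference front B to front A
    }
}
```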

Finally, the computing time (CPU) required by each algorithm is included.

As the considered problem is a multi-objective optimization problem, a set of non-dominated solutions (which typically contains more than one solution) is generated by each algorithm. The results presented in Tables 3 to 7 report the average values across the set of instances used in each experiment, obtained by performing a single execution of the compared procedures.

4.1 Algorithm parameter tuning

This section is oriented toward both demonstrating the effectiveness of the proposed strategies and tuning the parameters. The aim of the first preliminary experiment is to determine the number of constructions used to obtain the initial set of efficient solutions. As Sect. 3.1 describes, the number of solutions built depends on the step value. Specifically, \( step \in \{1/50, 1/100, 1/150, 1/200\}\) has been considered, which results in 51, 101, 151, and 201 solutions built, respectively. Table 3 shows the results obtained in this experiment, where the metrics described above are reported.

Table 3 Effect of varying the number of steps in the constructive procedure

Analyzing the results of Table 3, as expected, the larger the number of solutions built, the better the performance in all metrics. However, the average computing time increases considerably. Note that between the two intermediate values of the step column, 1/100 and 1/150, there are no significant differences in any of the metrics, while the computing time can be substantially reduced. For this reason, a compromise between the quality of the performance metrics and the computational time is selected: step is set to 1/100.

As described in Sect. 3.1, it is desirable to construct a rich and diverse set of efficient solutions. To this end, \(\beta \) takes the values \(0.00,0.01,0.02,\ldots ,0.98,0.99,1.00\), since step = 1/100 was selected. Note that \(\beta =0.0\) optimizes only \(f_{pD}\), whereas \(\beta =1.0\) optimizes only \(f_{pM}\); \(\beta =0.50\) considers both objectives, \(f_{pM}\) and \(f_{pD}\), equally important (see Eq. 1).

Once the number of constructions is set, the next experiment analyzes the results obtained with the reactive path relinking when considering different values for k. Specifically, k is set to \(0.25\times p\), \(0.50\times p\), and \(0.75\times p\), with p being the number of facilities. Furthermore, IPR and EPR are also included in this comparison in order to evaluate the actual contribution of the reactive mechanism. The associated results are summarized in Table 4, where the same metrics as in the previous table are reported.

Table 4 Comparison of the different proposed variants of path relinking

According to the results shown in Table 4, it is difficult to determine the most appropriate value of k to select the best RPR variant. Nevertheless, the RPR with \(k=0.75 \times p\) achieves the best values for the coverage (0.38), the hypervolume (0.64), and IGD (6680.34). Additionally, it can be observed that the values obtained with RPR and \(k=0.75 \times p\) are very close to the best values obtained with the other strategies, with no significant differences: the number of efficient solutions is practically equal (36.68 vs 36.06), as are the spread (0.95 vs 0.95), the \(\epsilon \)-indicator (0.03 vs 0.03), and GD (1648.78 vs 1674.72). Furthermore, the computing time of RPR with \(k=0.75 \times p\) is slightly lower (1014.91 vs 1067.66). Therefore, this variant is selected as the most competitive one.

Finally, with the aim of analyzing the impact of each part of the proposed algorithm, an additional experiment is conducted comparing the results obtained by the constructive procedure alone (C), the constructive procedure coupled with the local search method (C+LS), and, finally, the complete reactive path relinking (RPR). Table 5 shows the results obtained in this experiment.

Table 5 Analysis of the effect of the construction phase, local search and path relinking in the proposed algorithm

Analyzing the coverage, the most relevant part of the algorithm is the RPR, which is able to reduce the coverage to almost 0. In this case, the local search procedure clearly outperforms the constructive procedure (0.77 versus 0.87). On the other hand, considering the hypervolume, C+LS considerably improves the results of the constructive procedure, while RPR provides the final improvement that results in a highly competitive algorithm for the BpMD. Notice that for the \(\epsilon \)-indicator metric the local search procedure is the one obtaining the most remarkable improvement. Additionally, the number of non-dominated solutions increases considerably when including the local search strategy. Therefore, it can be concluded that each part of the proposed algorithm is essential in the final RPR algorithm, each one of them providing different advantages that increase the effectiveness of the algorithm.

4.2 Comparison to other algorithms

The most appropriate parameters for the RPR algorithm have been chosen considering the results obtained in the previous section, specifically step = 1/100 and \(k=0.75 \times p\). The performance of the proposed algorithm is compared with three of the most competitive evolutionary algorithms, MOEA/D (Multi-Objective Evolutionary Algorithm based on Decomposition), NSGA-II (elitist Non-dominated Sorting Genetic Algorithm), and SPEA2 (Strength Pareto Evolutionary Algorithm), as well as the Scatter Search algorithm proposed by Colmenar et al. (2018) and the \(\epsilon \)-C method, see Sayyady et al. (2015). In all the remaining tables, the number in parentheses next to the name of the instance set indicates the number of instances in that set. Before presenting the results, all algorithms are briefly described.

MOEA/D, first proposed in Zhang and Li (2007), decomposes the multi-objective optimization problem into single-objective optimization subproblems using a set of evenly spread weight vectors. All these subproblems are simultaneously solved in a single run to approximate the set of efficient solutions. In MOEA/D, the neighborhood relations among these subproblems are defined based on the distances between their weight vectors. The optimal solutions of two neighboring subproblems should be very similar. Each subproblem is optimized by using information only from its neighboring subproblems.

NSGA-II, proposed by Deb et al. (2002), starts by generating an initial population of solutions that is sorted based on non-domination. To sort the solutions, a rank (fitness) value is assigned to each one based on the front to which it belongs. Then, a crowding distance is assigned to each solution in the front. The crowding distance is a measure related to the density of solutions around each solution, i.e., how close a solution is to its neighbors. Next, individual solutions are selected by using a binary tournament selection with the crowded-comparison operator. The selected population generates offspring via the crossover and mutation operators; the offspring is merged with the current population, the combined population is sorted again based on non-domination, and only the best N individuals are selected, N being the population size. The selection is based on rank and, on the last front, on crowding distance.

SPEA2, proposed by Zitzler et al. (2001), is an improved version of SPEA that addresses its weaknesses in fitness assignment, density estimation, and archive truncation. SPEA2 uses a regular population and an external archive to keep the elite solutions. The algorithm starts by generating an initial population of solutions and an empty archive. Next, all non-dominated population solutions are copied to the archive, and any dominated solutions or duplicates are removed from it. If the size of the archive exceeds a predefined limit, further archive members are deleted by a clustering technique that preserves the characteristics of the non-dominated front. Afterward, fitness values are assigned to both archive and population members, and again individual solutions are selected by using a binary tournament selection. The selected population generates offspring via the crossover and mutation operators.

Note that the solution representation for these three evolutionary algorithms consists of an integer array of p elements, where each element indicates a selected facility (notice that the order of the selected facilities is not relevant). These algorithms require one main input parameter, the population size. This input parameter has been tuned with the aim of having a fair comparison in terms of computing time, resulting in a population size of 100 solutions. The stopping criterion for these three evolutionary algorithms is set to 3600 s. Finally, the remaining parameters have been set to the default values recommended by the MOEA Framework library used to implement these algorithms (Hadka 2015).

The Scatter Search (SS) procedure reported in Colmenar et al. (2018) is another evolutionary algorithm that starts by randomly constructing a set of solutions to generate the initial population. Then, they are improved with three different local search methods, which are based on the first-improvement strategy. The reference set update method maintains a set containing not only the best solutions (in terms of the objective function values) but also the most diverse ones. The subset generation method and the combination method considered in this Scatter Search follow a typical implementation.

Finally, an exact procedure based on the \(\epsilon \)-C strategy has been included, with the aim of providing a near-optimal approximation of the Pareto front. Since this algorithm is very computationally demanding, the effort required to generate the optimal Pareto front makes it unsuitable for this kind of problem, but it can be used to analyze how far the proposed algorithms are from the best Pareto front within a short computing time. The \(\epsilon \)-C method used in this work has been adapted from Sayyady et al. (2015), but including a time limit of 7200 s; therefore, not all instances are solved to optimality.

Now, the results obtained by the algorithms are shown in Table 6, which summarizes the average results obtained when solving the whole set of instances of the OR-library. In particular, the instances are grouped by the number of nodes in steps of 100. The summary results presented in the last row confirm the superiority of our proposal (RPR) over the five analyzed algorithms (MOEA/D, NSGA-II, SPEA2, SS, and \(\epsilon \)-C). It can be observed that RPR is the algorithm that reaches the largest number of efficient solutions on average, specifically almost 32 efficient solutions, followed by \(\epsilon \)-C, which is able to find more than 17 efficient solutions on average. The coverage indicates that only 13% of the solutions of the Pareto front obtained by RPR are dominated by the best Pareto front B, whereas that percentage is more than 95% for MOEA/D, NSGA-II, and SPEA2, more than 97% for the Scatter Search, and more than 75% for the \(\epsilon \)-C method, which shows that most of the solutions of RPR are part of the best Pareto front B. Regarding the extent of spread achieved by each Pareto front approximation, the Scatter Search obtains the best value (0.99) and RPR obtains 0.96, although both values are very similar. On the other hand, the average portion of the objective space covered by RPR (hypervolume) is 70.08%. Regarding the \(\epsilon \)-indicator, the value obtained by RPR is 0.03, while the average values for the remaining algorithms are much larger. Finally, RPR obtains the best values for GD and IGD, which indicate how far the solutions are from the best Pareto front.

If we now analyze the results depending on the instance size, we can clearly see that the time required by RPR scales better with the size, while the other algorithms quickly reach the time limit. Regarding the other metrics, the behavior is similar regardless of the instance size, with RPR emerging as the most competitive algorithm.

Furthermore, in order to facilitate future comparisons, all Pareto fronts of the D250 and D350 instances are available in Figs. 5 and 6, and all individual results per instance are detailed in Tables 10 to 17 of “Appendix A”.

Table 6 Comparison among all the algorithms with the pmed testbed
Table 7 Comparison among all the algorithms with the testbeds D_250 and D_350
Table 8 Comparison among all the algorithms with the testbeds kmedian where the \(\epsilon \)-C is able to obtain solutions
Table 9 Comparison among all the algorithms with the testbeds kmedian

The second set of instances considered in this paper are those reported in Colmenar et al. (2018), that is, sets D250 and D350 with 9 and 10 instances, respectively. The associated average results are shown in Table 7. Analyzing the results, these instances seem to be easier to solve for the exact procedure \(\epsilon \)-C, so they can be used to illustrate how far from the best approximation of the Pareto front the heuristic algorithms included in this comparison are. Although the \(\epsilon \)-C algorithm is able to obtain consistently better coverage and hypervolume values, it is worth mentioning that the number of solutions in the non-dominated set obtained by RPR is considerably larger than the ones obtained by the other algorithms, including \(\epsilon \)-C. Furthermore, half of the solutions provided by RPR are included in the best approximation of the Pareto front according to the coverage metric. Additionally, the hypervolume obtained by RPR is very similar to the one obtained by \(\epsilon \)-C, which is also an indicator of the quality of the front obtained by RPR. Regarding the metrics that measure the distance to the closest solution in the best approximation front, the \(\epsilon \)-indicator, GD, and IGD, it can be seen that RPR presents the best values, indicating that, in those cases in which its solutions are covered by the best approximation front, they are still really close to the best solution. Indeed, RPR presents better results than \(\epsilon \)-C in these metrics, since when \(\epsilon \)-C is not able to find the best solution, it remains far from it due to the constraint on the computational time. Finally, the computing time required by RPR is an order of magnitude lower than the one required by \(\epsilon \)-C, highlighting the suitability of RPR for providing fast and high-quality solutions for the tackled problem. A behavior similar to that observed in the previous table should also be emphasized: the scalability of RPR in terms of computing time. In the case of the exact procedure, the increase in computing time makes it unaffordable for solving larger instances.

With the aim of testing the algorithm when dealing with a more challenging set of instances, we have included the kmedian testbed, where the number of nodes ranges from 1000 to 5000. This experiment is divided into two different tables. On the one hand, Table 8 includes all the instances for which the \(\epsilon \)-C algorithm is able to finish, even without guaranteeing optimality, since the time limit has been set to 3600 s. On the other hand, Table 9 shows the results obtained in those instances in which the exact procedure \(\epsilon \)-C is not even able to start since the memory requirements are too high.

Analyzing the results in Table 8, the \(\epsilon \)-C is only able to provide solutions for 6 out of the 15 kmedian instances. These instances seem to be harder to solve, since the number of solutions |A| is close to 5, while RPR is able to generate a more populated set of non-dominated solutions. Additionally, RPR reaches the smallest value of coverage, indicating that it obtains the largest number of solutions belonging to the reference set of non-dominated solutions. In terms of hypervolume, RPR also provides competitive results, as well as in the remaining metrics, within the considered time horizon of 3600 s.

Table 9 shows the results where \(\epsilon \)-C is not considered, since it is not able to generate even a single feasible solution. As can be seen, the results are rather similar to the previous ones. However, it is important to remark that, in this case, RPR obtains the best results in more metrics, highlighting the scalability of the proposal. In particular, RPR is able to generate, on average, approximately 6 solutions in each non-dominated set, while the remaining methods are not able to reach 4 solutions. The value of 0.16 in the coverage metric indicates that most of the solutions provided by RPR belong to the reference set, and the values of the \(\epsilon \)-indicator or GD, for instance, suggest that the non-dominated set of solutions of RPR is better distributed along the search space. Analyzing the results per instance size, RPR maintains a similar performance in all the metrics, even improving the \(\epsilon \)-indicator metric as the size of the instance increases.

5 Conclusions

Nowadays, most real-world optimization problems are modeled considering more than one objective at a time, and usually those objectives are in conflict with one another, making it impossible to improve one objective without deteriorating at least one of the others. Facility location problems are interesting combinatorial optimization problems that have recently been addressed from a multi-objective perspective. This manuscript is focused on the Bi-objective p-Median p-Dispersion problem (BpMD problem), which seeks to minimize the total distance between facilities and demand points and, at the same time, to maximize the minimum distance between all pairs of selected facilities.

To address the BpMD problem, a new path relinking strategy has been proposed. The algorithm combines two different approaches, interior and exterior path relinking. The reactive path relinking has been designed in an efficient way and is able to decide which strategy to use depending on the pair of solutions to combine. This algorithm is able to find high-quality Pareto front approximations in short computing times, making it a competitive algorithm when compared with the state of the art. Furthermore, computational experiments show the effectiveness of our proposed algorithm for solving relatively large instances of this bi-objective optimization problem in comparison with the other considered algorithms, MOEA/D, NSGA-II, SPEA2, Scatter Search, and the \(\epsilon \)-C method, being able to obtain better Pareto fronts.

Therefore, our results establish the first benchmarks not only for this problem but also for the applications of this novel proposed methodology, the reactive path relinking, to other multi-objective combinatorial optimization problems.

A future line of research is to adapt the reactive path relinking algorithm to solve other combinatorial optimization problems, whether of single-objective or multi-objective nature. A new strategy to explore would be an adaptive reactive path relinking that allows the algorithm to decide the value of the k parameter for each pair of solutions to combine, according to their similarity or dissimilarity. Furthermore, it would be interesting to test other constructive metaheuristics, such as GRASP, which includes randomization when selecting one node from a restricted candidate list containing the most promising nodes. Another interesting methodology to test would be the Fixed Set Search (FSS) algorithm, see Jovanovic et al. (2022), where a population of solutions is generated with GRASP and the elements appearing most frequently in a subset of the most promising solutions are fixed for the next generation of solutions. As the authors hold, the FSS can be viewed as a metaheuristic that adds a learning mechanism to GRASP.