Introduction

Optimization is the process of determining the best solution to a problem from among several feasible solutions. An optimization problem consists of three parts: decision variables, constraints, and the objective function1. The purpose of optimization is to assign values to the decision variables, subject to the constraints of the problem, so that the value of the objective function is optimized2. With the advancement of science and technology, the importance and role of optimization in various branches of science have become clearer, and practical tools are needed to address the resulting optimization challenges. Optimization techniques fall into two groups: deterministic and stochastic approaches. Deterministic approaches, both gradient-based and non-gradient-based, are effective on linear, convex, uncomplicated, low-dimensional, and differentiable problems. However, they lose their effectiveness on optimization problems that are nonlinear, nonconvex, complex, high-dimensional, non-differentiable, defined over discrete search spaces, or NP-hard. The difficulties and inefficiencies of deterministic approaches have led to the emergence of stochastic approaches that, using random operators, random search, and trial-and-error processes, are effective in optimization applications.

Metaheuristic optimization algorithms, a class of stochastic approaches, have become very popular and widely used because of advantages such as simple concepts, easy implementation, independence from the type of problem, no need for derivative information about the objective function, and efficiency in nonlinear, nonconvex search spaces3. The optimization process in metaheuristic algorithms starts by generating a number of random candidate solutions within the allowed range of the search space. Then, in an iterative process, the candidate solutions are improved by the steps of the algorithm. After the iterations are completed, the best candidate solution found is returned as the solution to the problem. Because of the random nature of the search, there is no guarantee that this best candidate solution is the true best solution (the global optimum) of the problem. It is therefore known as a quasi-optimal solution, i.e., an acceptable solution close to the global optimum4. Achieving better quasi-optimal solutions has become a central challenge in optimization studies and has motivated researchers to introduce and design numerous metaheuristic algorithms.

In designing optimization algorithms, the two indicators of exploration and exploitation play an important role in how well an algorithm reaches appropriate quasi-optimal solutions. Exploration denotes the ability of the algorithm to perform a global search, and exploitation denotes its ability to perform a local search in the search space. The key to the success of a metaheuristic algorithm in the optimization process is maintaining a suitable balance between exploration and exploitation5. Given that numerous optimization algorithms have already been developed, the main research question is whether there is still a need to design newer algorithms. The answer, based on the No Free Lunch (NFL) theorem6, is that there is no guarantee that an algorithm will perform equally well on all optimization problems.
The NFL theorem states that an algorithm may be successful on some optimization problems but fail on others. Consequently, no particular algorithm can be considered the best optimizer for all optimization problems. Influenced by the NFL theorem, researchers are encouraged to devise more effective solutions to optimization problems by introducing new optimizers. The NFL theorem also motivated the authors of this paper to develop a new metaheuristic algorithm for optimization applications. The novelty and contribution of this paper lie in the design of a new metaheuristic algorithm called Driving Training-Based Optimization (DTBO), which is based on a simulation of the human activity of driving instruction. The contributions of this paper are as follows:

  • DTBO is introduced based on the driving training process in which a person is trained to learn driving skills.

  • A set of 53 objective functions is used to analyze the performance of DTBO in optimization applications.

  • To evaluate the quality of the performance of DTBO, the results obtained are compared with the results of 11 well-known optimization algorithms.

  • The efficiency of DTBO is evaluated in solving two real-world applications.

The rest of the article is organized as follows. The literature review is presented in “Literature review”. The proposed DTBO approach is introduced and modeled in “Driving training based optimization”, and its computational cost is analyzed in “Computational complexity of DTBO”. Simulation studies and results are presented in “Simulation studies and results”. A discussion of the results and performance of DTBO is provided in “Discussion”. The application of DTBO to solving real-world problems is evaluated in “DTBO for real-world applications”. The conclusions and several perspectives of the study are provided in the “Conclusion and future works” section.

Literature review

Metaheuristic algorithms have been developed by drawing inspiration from various natural phenomena, the behavior of animals, birds, insects, plants, and other living organisms, the laws of physics, biological sciences, genetics, the rules of games, human activities, and other evolutionary processes. Grouped by the primary inspiration behind their design, metaheuristic algorithms fall into five categories: swarm-based, evolutionary-based, physics-based, game-based, and human-based methods.

Swarm-based metaheuristic algorithms have been developed to model the swarming behaviors of animals, birds, and other living things in nature. Among the famous algorithms in this group are Particle Swarm Optimization (PSO)7, the Firefly Algorithm (FA)8, Artificial Bee Colony (ABC)9, and Ant Colony Optimization (ACO)10. The natural behavior of a group of birds or fish searching for food, whose movement is influenced by personal experience and swarm intelligence, has been the main idea in the design of PSO. Mathematical modeling of the natural flashing-light behavior of fireflies has been used in the design of FA. The primary inspiration in the design of ABC is the simulation of the swarm intelligence of bee colonies in finding food sources. The ability of an ant colony to find the shortest path between the colony and food sources has been the main idea in the design of ACO. Strategies for hunting and attacking prey, as well as the process of finding food sources among living organisms, have been sources of inspiration in designing various metaheuristic algorithms such as the Tunicate Search Algorithm (TSA)11, Reptile Search Algorithm (RSA)12, Whale Optimization Algorithm (WOA)13, Orca Predation Algorithm (OPA)14, Marine Predator Algorithm (MPA)15, Pelican Optimization Algorithm (POA)16, Snow Leopard Optimization Algorithm (SLOA)17, Gray Wolf Optimization (GWO) algorithm18, Artificial Gorilla Troops Optimizer (GTO)19, African Vultures Optimization Algorithm (AVOA)20, Farmland Fertility21, Spotted Hyena Optimizer (SHO)22, and Tree Seed Algorithm (TSA)23.

Evolutionary-based metaheuristic algorithms have been introduced based on simulations of biological sciences and genetics and on the use of random operators. Among the most widely used and well-known evolutionary algorithms, one can mention the Genetic Algorithm (GA)24 and Differential Evolution (DE)25. GA and DE have been developed on the basis of mathematical modeling of the reproductive process and the concept of natural selection, together with the random operators of selection, crossover, and mutation.

Physics-based metaheuristic algorithms are designed on the basis of mathematical modeling of various physical laws and phenomena. Among the well-known physics-based algorithms, one can mention Simulated Annealing (SA)26 and the Gravitational Search Algorithm (GSA)27. SA is based on the physical phenomenon of melting and then slowly cooling metals, known in metallurgy as annealing. The modeling of gravitational forces in a system of objects with different masses and distances from each other has been the main inspiration in the design of GSA. The physical phenomenon of the water cycle and its transformations in nature has been a source of inspiration for the design of the Water Cycle Algorithm (WCA)28. Cosmological concepts have been the main inspiration in the design of the Multi-Verse Optimizer (MVO)29. Some other physics-based methods are as follows: Flow Regime Algorithm (FRA)30, Nuclear Reaction Optimization (NRO)31, Spring Search Algorithm (SSA)32, and Equilibrium Optimizer (EO)33.

Game-based metaheuristic algorithms have been developed based on simulating the rules that govern different games and the behavior of players, coaches, and other individuals who influence the games. Modeling the competitions of the volleyball league has been the main idea in the design of the Volleyball Premier League (VPL) algorithm34, and the football league has been the main idea in the design of Football Game-Based Optimization (FGBO)35. The strategy and skill of players in assembling puzzle pieces have been the main inspiration in designing the Puzzle Optimization Algorithm (POA)36. The effort of players in tug-of-war was the main idea in designing the Tug-of-War Optimization (TWO) approach37.

Human-based metaheuristic algorithms are introduced on the basis of mathematical modeling of various human activities that have an evolution-like process. Teaching-Learning-Based Optimization (TLBO) is the most famous human-based algorithm, designed by simulating the communication and interaction between a teacher and students in a classroom38. The economic activities of the rich and the poor in society have been the main idea in designing Poor and Rich Optimization (PRO)39. Simulation of human behavior in online auction markets to achieve success has been used in the design of Human Mental Search (HMS)40. Interactions between doctors and patients, including disease prevention, check-ups, and treatment, have been used in the design of DPO41.

Extensive studies have been conducted on metaheuristic algorithms in various directions, such as the development of binary versions42,43,44,45, the improvement of existing methods46,47,48,49,50, and the combination of metaheuristic algorithms51,52.

To the best of the authors' knowledge from the literature review, no optimization algorithm based on modeling the driving training process has been introduced and designed so far. Driving training is an intelligent process that can serve as an incentive for designing an optimizer. To address this research gap, this paper designs a new metaheuristic algorithm based on mathematical modeling of the driving training process and its various stages, which is introduced in the next section.

Driving training based optimization

In this section, the various steps of the proposed Driving Training Based Optimization (DTBO) method are presented and then its mathematical modeling is introduced.

Inspiration and main idea of DTBO

Driving training is an intelligent process in which a beginner is trained and acquires driving skills. A beginner, as a learner driver, can choose among several instructors when attending a driving school. The instructor then teaches the learner driver the necessary instructions and skills. The learner driver tries to learn driving skills from the instructor and to drive following the instructor. In addition, personal practice can further improve the learner driver's skills. These interactions and activities have extraordinary potential for designing an optimizer, and mathematical modeling of this process is the fundamental inspiration in the design of DTBO.

Mathematical model of DTBO

DTBO is a population-based metaheuristic whose members consist of learner drivers and instructors. DTBO members are candidate solutions to the given problem and are modeled using a matrix called the population matrix, given in Eq. (1). The initial positions of these members are randomly initialized at the start of the run using Eq. (2).

$$\begin{aligned} X = \begin{bmatrix} X_1 \\ \vdots \\ X_i \\ \vdots \\ X_N \end{bmatrix}_{N\times m} = \begin{bmatrix} x_{1,1} & \cdots & x_{1,j} & \cdots & x_{1,m} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{i,1} & \cdots & x_{i,j} & \cdots & x_{i,m} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{N,1} & \cdots & x_{N,j} & \cdots & x_{N,m} \end{bmatrix}_{N\times m}~, \end{aligned}$$
(1)
$$\begin{aligned} x_{i,j}&= lb_j + r\cdot (ub_j-lb_j), \quad i=1,2, \dots , N,\ j=1,2,\dots , m~, \end{aligned}$$
(2)

where X is the population of DTBO, \(X_i\) is the ith candidate solution, \(x_{i,j}\) is the value of the jth variable determined by the ith candidate solution, N is the size of the population of DTBO, m is the number of problem variables, r is a random number from the interval [0, 1], \(lb_j\) and \(ub_j\) are the lower and upper bounds of the jth problem variable, respectively.

Each candidate solution assigns values to the problem variables; by substituting these values into the objective function, the candidate solution is evaluated. Therefore, a value of the objective function is computed for each candidate solution. The vector in Eq. (3) models these objective function values.

$$\begin{aligned} F = \begin{bmatrix} F_1 \\ \vdots \\ F_i \\ \vdots \\ F_N \end{bmatrix}_{N\times 1} = \begin{bmatrix} F(X_1) \\ \vdots \\ F(X_i) \\ \vdots \\ F(X_N) \end{bmatrix}_{N\times 1}~, \end{aligned}$$
(3)

where F represents the vector of the objective functions and \(F_i\) denotes the value of the objective function delivered by the ith candidate solution.
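As a minimal illustration of Eqs. (1)–(3), the population initialization and evaluation can be sketched in NumPy as follows. This is only a sketch: the function names and the placeholder sphere objective are our own assumptions, not part of DTBO.

```python
import numpy as np

def initialize_population(N, m, lb, ub, rng):
    """Eq. (2): x_{i,j} = lb_j + r * (ub_j - lb_j), with r ~ U[0, 1]."""
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    return lb + rng.random((N, m)) * (ub - lb)

def evaluate_population(X, objective):
    """Eq. (3): objective function value F_i for every candidate solution X_i."""
    return np.array([objective(x) for x in X])

# Example usage with a placeholder (sphere) objective, used here only for illustration.
rng = np.random.default_rng(0)
X = initialize_population(N=30, m=5, lb=[-100] * 5, ub=[100] * 5, rng=rng)
F = evaluate_population(X, objective=lambda x: float(np.sum(x ** 2)))
```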

The values obtained for the objective function are the main criteria to determine the goodness of the candidate solutions. Based on the comparison of the values of the objective function, the member that has the best value for the objective function is known as the best member of the population \((X_{best})\). The best member must also be updated, since the candidate solutions are improved and updated in each iteration.

The main difference between metaheuristic algorithms is the strategy employed in the process of updating candidate solutions. In DTBO, candidate solutions are updated in the following three different phases: (i) training the learner driver by the driving instructor, (ii) patterning the learner driver from instructor skills, and (iii) practice of the learner driver.

Phase 1: Training by the driving instructor (exploration)

The first phase of the DTBO update is based on the learner driver's selection of a driving instructor and the subsequent training of the learner driver by the selected instructor. Among the DTBO population, a number of the best members are considered driving instructors and the rest are learner drivers. Choosing a driving instructor and learning that instructor's skills moves population members to different areas of the search space. This increases DTBO's exploration power in the global search and in discovering the optimal area; therefore, this phase of the DTBO update demonstrates the exploration ability of the algorithm. In each iteration, based on a comparison of the objective function values, the \(N_{DI}\) best members of the DTBO population are selected as driving instructors, as shown in Eq. (4).

$$\begin{aligned} DI = \begin{bmatrix} DI_1 \\ \vdots \\ DI_i \\ \vdots \\ DI_{N_{DI}} \end{bmatrix}_{N_{DI}\times m} = \begin{bmatrix} DI_{1,1} & \cdots & DI_{1,j} & \cdots & DI_{1,m} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ DI_{i,1} & \cdots & DI_{i,j} & \cdots & DI_{i,m} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ DI_{N_{DI},1} & \cdots & DI_{N_{DI},j} & \cdots & DI_{N_{DI},m} \end{bmatrix}_{N_{DI}\times m}~, \end{aligned}$$
(4)

where DI is the matrix of driving instructors, \(DI_i\) is the ith driving instructor, \(DI_{i,j}\) is its jth dimension, and \(N_{DI} = \lfloor 0.1 \cdot N \cdot (1-t/T) \rfloor\) is the number of driving instructors, where t is the current iteration counter and T is the maximum number of iterations.

The mathematical modeling of this DTBO phase is such that, first, the new position for each member is calculated using Eq. (5). Then, according to Eq. (6), this new position replaces the previous one if it improves the value of the objective function.

$$\begin{aligned} x_{i,j}^{P1} = {\left\{ \begin{array}{ll} x_{i,j} + r\cdot (DI_{k_i,j} - I\cdot x_{i,j})~, & F_{DI_{k_i}} < F_i~; \\ x_{i,j} + r\cdot (x_{i,j} - DI_{k_i,j})~, & \text {otherwise}~, \end{array}\right. } \end{aligned}$$
(5)
$$\begin{aligned} X_{i} = {\left\{ \begin{array}{ll} X_{i}^{P1}~, & F_{i}^{P1} < F_i~; \\ X_{i}~, & \text {otherwise}~, \end{array}\right. } \end{aligned}$$
(6)

where \(X_i^{P1}\) is the new calculated status for the ith candidate solution based on the first phase of DTBO, \(x_{i,j}^{P1}\) is its jth dimension, \(F_i^{P1}\) is its objective function value, I is a number randomly selected from the set \(\{1,2\}\), r is a random number in the interval [0, 1], \(DI_{k_i}\), where \(k_i\) is randomly selected from the set \(\{1,2,\dots ,N_{DI}\}\), represents a randomly selected driving instructor to train the ith member, \(DI_{k_i,j}\) is its jth dimension, and \(F_{DI_{k_i}}\) is its objective function value.
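A hedged sketch of this phase as a standalone NumPy function is given below. The helper name, drawing r once per member rather than per dimension, and keeping at least one instructor when the floor in Eq. (4) reaches zero are our own assumptions.

```python
import numpy as np

def phase1_training(X, F, objective, t, T, rng):
    """Phase 1 of DTBO (Eqs. 4-6): training by a randomly selected driving instructor."""
    N, m = X.shape
    # Eq. (4): the N_DI best members of the current population act as driving instructors.
    n_di = max(1, int(np.floor(0.1 * N * (1 - t / T))))  # max(1, .) is our safeguard
    order = np.argsort(F)
    DI, F_DI = X[order[:n_di]], F[order[:n_di]]
    for i in range(N):
        k = rng.integers(n_di)        # index k_i of a randomly selected instructor
        r = rng.random()              # r in [0, 1] (drawn once per member, our reading)
        I = rng.integers(1, 3)        # I randomly selected from {1, 2}
        if F_DI[k] < F[i]:            # Eq. (5), first case: move toward the instructor
            x_new = X[i] + r * (DI[k] - I * X[i])
        else:                         # Eq. (5), second case: move away from the instructor
            x_new = X[i] + r * (X[i] - DI[k])
        f_new = objective(x_new)
        if f_new < F[i]:              # Eq. (6): accept only if the objective improves
            X[i], F[i] = x_new, f_new
    return X, F, DI
```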

Phase 2: Patterning of the learner driver from the instructor's skills (exploration)

The second phase of the DTBO update is based on the learner driver imitating the instructor, that is, the learner driver tries to model all the movements and skills of the instructor. This process moves DTBO members to different positions in the search space, thus increasing the DTBO’s exploration power. To mathematically simulate this concept, a new position is generated based on the linear combination of each member with the instructor according to Eq. (7). If this new position improves the value of the objective function, it replaces the previous position according to Eq. (8).

$$\begin{aligned} x_{i,j}^{P2}&= P\cdot x_{i,j} + (1-P)\cdot DI_{k_i,j}~, \end{aligned}$$
(7)
$$\begin{aligned} X_{i} = {\left\{ \begin{array}{ll} X_{i}^{P2}~, & F_{i}^{P2} < F_i~; \\ X_{i}~, & \text {otherwise}~, \end{array}\right. } \end{aligned}$$
(8)

where \(X_i^{P2}\) is the new calculated status for the ith candidate solution based on the second phase of DTBO, \(x_{i,j}^{P2}\) is its jth dimension, \(F_i^{P2}\) is its objective function value, and P is the patterning index given by

$$\begin{aligned} P = 0.01 + 0.9 \left( 1-\frac{t}{T} \right) ~. \end{aligned}$$
(9)
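As with the first phase, this update can be sketched as a standalone function. The helper name and the random per-member choice of instructor from the set DI are our assumptions.

```python
import numpy as np

def phase2_patterning(X, F, objective, DI, t, T, rng):
    """Phase 2 of DTBO (Eqs. 7-9): the learner driver patterns the instructor's skills."""
    N = X.shape[0]
    P = 0.01 + 0.9 * (1 - t / T)             # Eq. (9): patterning index, shrinks with iterations
    for i in range(N):
        k = rng.integers(len(DI))            # a randomly selected driving instructor
        x_new = P * X[i] + (1 - P) * DI[k]   # Eq. (7): linear combination of member and instructor
        f_new = objective(x_new)
        if f_new < F[i]:                     # Eq. (8): accept only if the objective improves
            X[i], F[i] = x_new, f_new
    return X, F
```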

Phase 3: Personal practice (exploitation)

The third phase of the DTBO update is based on the personal practice of each learner driver to improve and enhance driving skills. In this phase, each learner driver tries to get closer to his or her best skills. This phase allows each member to discover a better position through a local search around its current position and therefore demonstrates the exploitation power of DTBO in the local search. It is mathematically modeled so that a random position is first generated near each population member according to Eq. (10). Then, according to Eq. (11), this new position replaces the previous one if it improves the value of the objective function.

$$\begin{aligned} x_{i,j}^{P3}&= x_{i,j} + (1-2r)\cdot R\cdot \left( 1 - \frac{t}{T} \right) \cdot x_{i,j}~, \end{aligned}$$
(10)
$$\begin{aligned} X_{i} = {\left\{ \begin{array}{ll} X_{i}^{P3}~, & F_{i}^{P3} < F_i~; \\ X_{i}~, & \text {otherwise}~, \end{array}\right. } \end{aligned}$$
(11)

where \(X_i^{P3}\) is the new calculated status for the ith candidate solution based on the third phase of DTBO, \(x_{i,j}^{P3}\) is its jth dimension, \(F_i^{P3}\) is its objective function value, r is a random real number from the interval [0, 1], R is a constant set to 0.05, t is the iteration counter, and T is the maximum number of iterations.
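A corresponding sketch of this local-search phase, under the same assumptions as the previous snippets, could be:

```python
import numpy as np

def phase3_practice(X, F, objective, t, T, rng, R=0.05):
    """Phase 3 of DTBO (Eqs. 10-11): personal practice, a local search around each member."""
    N = X.shape[0]
    for i in range(N):
        r = rng.random()                                       # r in [0, 1]
        x_new = X[i] + (1 - 2 * r) * R * (1 - t / T) * X[i]    # Eq. (10): shrinking perturbation
        f_new = objective(x_new)
        if f_new < F[i]:                                       # Eq. (11): greedy acceptance
            X[i], F[i] = x_new, f_new
    return X, F
```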

Repetition process, pseudo-code, and flowchart of DTBO

After updating the population members according to the first to third phases, one DTBO iteration is completed. The algorithm then enters the next DTBO iteration with the updated population. The update process is repeated according to the first to third phases, using Eqs. (4) to (11), until the maximum number of iterations is reached. After the implementation of DTBO on the given problem is complete, the best candidate solution recorded during execution is returned as the solution. The pseudocode of the proposed DTBO method is presented in Algorithm 1, and its flowchart in Fig. 1.
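For completeness, a compact driver loop consistent with this description might look like the following sketch. It assumes the `initialize_population`, `evaluate_population`, and phase helper functions from the previous snippets; reusing the phase 1 instructor set in phase 2 is our own simplification, not a detail stated in the paper.

```python
import numpy as np

def dtbo(objective, lb, ub, N=30, T=1000, seed=0):
    """End-to-end sketch of the DTBO loop, reusing the helper functions defined above."""
    rng = np.random.default_rng(seed)
    X = initialize_population(N, len(lb), lb, ub, rng)
    F = evaluate_population(X, objective)
    best_x, best_f = X[np.argmin(F)].copy(), F.min()
    for t in range(1, T + 1):
        X, F, DI = phase1_training(X, F, objective, t, T, rng)
        X, F = phase2_patterning(X, F, objective, DI, t, T, rng)  # reuses phase 1 instructors
        X, F = phase3_practice(X, F, objective, t, T, rng)
        if F.min() < best_f:                       # keep the best solution recorded so far
            best_x, best_f = X[np.argmin(F)].copy(), F.min()
    return best_x, best_f

# Example usage on a placeholder sphere function in 5 dimensions:
# x_best, f_best = dtbo(lambda x: float(np.sum(x ** 2)), lb=[-100] * 5, ub=[100] * 5)
```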

Algorithm 1 Pseudocode of DTBO.

Figure 1 Flowchart of DTBO.

Computational complexity of DTBO

In this subsection, we discuss the computational complexity of DTBO. Initializing DTBO with N members for a problem with m decision variables has a computational complexity of \({O}(N\,m)\). In each iteration, the DTBO members are updated in three phases, so the computational complexity of the update process is \({O}(3N\,m\,T)\), where T is the maximum number of iterations of the algorithm. Consequently, the total computational complexity of DTBO is \({O}(N\,m(1+3T))\).

Simulation studies and results

This section analyzes DTBO's ability in optimization applications and its capacity to provide suitable solutions for these types of problems. To this end, DTBO has been applied to fifty-three standard objective functions of various types: unimodal, high-dimensional multimodal, and fixed-dimensional multimodal functions53, as well as the IEEE CEC2017 benchmark functions54. Furthermore, to evaluate the quality of the results obtained by DTBO, the performance of the proposed approach is compared with that of 11 well-known algorithms: PSO, WOA, MVO, GA, GWO, GSA, MPA, TLBO, AVOA, RSA, and TSA. DTBO and the competitor algorithms are each run in twenty independent executions, with each execution containing 1000 iterations, to optimize the objective functions \(F_1\) to \(F_{23}\). The optimization results are reported using the statistical indices mean, best, worst, standard deviation (std), median, and rank, where the ranking of the algorithms is based on the mean index. The values assigned to the control parameters of the competitor algorithms are listed in Table 1.
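For clarity, the statistical indices reported in the tables can be computed from the per-run results in a straightforward way. The helper below is only an illustration of how mean, best, worst, std, and median are obtained from the twenty best-of-run values of one algorithm on one function; the function name is ours.

```python
import numpy as np

def summarize_runs(best_values):
    """Statistical indices over the independent runs of one algorithm on one function."""
    v = np.asarray(best_values, dtype=float)
    return {
        "mean": v.mean(),            # ranking criterion used in the result tables
        "best": v.min(),             # minimization: the smallest value is the best
        "worst": v.max(),
        "std": v.std(ddof=1),        # sample standard deviation over the runs
        "median": float(np.median(v)),
    }
```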

Table 1 Assigned values to the control parameters of competitor algorithms.

Evaluation of unimodal benchmark functions

The results of the implementation of DTBO and the 11 competitor algorithms on the unimodal functions \(F_1\) to \(F_7\) are reported in Table 2. A comparison of the statistical indicators shows that DTBO has obtained the global optimum for functions \(F_1\), \(F_2\), \(F_3\), \(F_4\), \(F_5\), and \(F_6\). Furthermore, DTBO performed best in optimizing function \(F_7\) and is the best optimizer for this function. Analysis of the simulation results shows that DTBO performs better in optimizing unimodal functions, providing far more competitive results than the other algorithms.

Evaluation of high-dimensional multimodal benchmark functions

The optimization results of the high-dimensional multimodal functions \(F_8\) to \(F_{13}\) using DTBO and the 11 competitor algorithms are presented in Table 3. Based on the simulation results, it is evident that DTBO has reached the global optima of functions \(F_9\) and \(F_{11}\). DTBO is also the best optimizer for the functions \(F_8\), \(F_{10}\), \(F_{12}\), and \(F_{13}\). Comparing the competitor algorithms against DTBO shows that DTBO is much more efficient in optimizing high-dimensional multimodal functions.

Evaluation of fixed-dimensional multimodal benchmark functions

The results obtained by DTBO and the 11 competitor algorithms in optimizing the fixed-dimensional multimodal functions \(F_{14}\) to \(F_{23}\) are presented in Table 4. The optimization results show that DTBO is the best of all the compared optimizers for every function from \(F_{14}\) to \(F_{23}\). Comparison with the competing algorithms shows that DTBO has superior performance in handling fixed-dimensional multimodal functions. The convergence curves of DTBO and the competitor algorithms for the objective functions \(F_1\) to \(F_{23}\) are presented in Fig. 2.

Evaluation of IEEE CEC2017 benchmark functions

The results of the implementation of DTBO and the competitor algorithms on the IEEE CEC 2017 benchmark functions, comprising the 30 objective functions \(C_1\) to \(C_{30}\), are presented in Tables 5 and 6. The optimization results clearly show that DTBO has performed better than the competitor algorithms on most of the CEC 2017 functions.

The convergence curves of DTBO and competitor algorithms while obtaining the solution for CEC2017 functions are shown in Fig. 3.

Analysis of the simulation results shows that, on the CEC2017 benchmark functions, the proposed approach achieves acceptable results and ranks first as the best optimizer compared with the 11 competitor algorithms.

Statistical analysis

To provide a statistical analysis of DTBO's performance compared with the competitor algorithms, the Wilcoxon rank-sum test55 is used. The Wilcoxon rank-sum test is a statistical test that, based on an indicator called the p value, shows whether the superiority of one method over another is statistically significant. The results of applying the Wilcoxon rank-sum test to DTBO against each competitor algorithm are presented in Table 7. Based on these results, in each case where the p value is less than 0.05, DTBO has a statistically significant superiority over the corresponding competitor algorithm.
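As an illustration of how such a pairwise p value can be computed, the following SciPy snippet uses synthetic per-run values; the arrays are placeholders, not data from the paper.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(1)
# Synthetic per-run best objective values, purely illustrative (not values from the paper).
dtbo_runs = rng.normal(loc=0.10, scale=0.02, size=20)
competitor_runs = rng.normal(loc=0.15, scale=0.03, size=20)

statistic, p_value = ranksums(dtbo_runs, competitor_runs)
significant = p_value < 0.05   # superiority is treated as statistically significant below 0.05
```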

Figure 2 Convergence curves of DTBO and competitor algorithms in optimizing objective functions \(F_1\) to \(F_{23}\).

Figure 3 Convergence curves of DTBO and competitor algorithms in optimizing objective functions \(C_1\) to \(C_{30}\).

Table 2 Evaluation results of unimodal functions.
Table 3 Evaluation results of high-dimensional multimodal functions.
Table 4 Evaluation results of fixed-dimensional multimodal functions.
Table 5 Evaluation results of IEEE CEC 2017 objective functions \(C_1\) to \(C_{18}\).
Table 6 Evaluation results of the IEEE CEC 2017 objective functions \(C_{19}\) to \(C_{30}\).
Table 7 p values from the Wilcoxon rank-sum test.
Table 8 Performance of optimization algorithms in pressure vessel design.
Table 9 Statistical results of optimization algorithms in the design of pressure vessels.
Table 10 Performance of optimization algorithms in the design of welded beams.
Table 11 Statistical results of optimization algorithms in the design of welded beams.

Discussion

The optimization mechanism in metaheuristic algorithms is based on a random search of the problem's solution space. An algorithm can search accurately and effectively only when it both scans different regions of the search space and examines the neighborhoods of promising areas. In other words, the power of exploration in the global search and the power of exploitation in the local search have a significant impact on the performance of optimization algorithms. The DTBO update process has three different phases aimed at providing both a global and a local search. The first phase, “training by the driving instructor”, scans different parts of the search space and thus contributes to exploration. The second phase of DTBO also increases the exploration power by making sudden changes to the positions of the population members. The third phase of DTBO, “practice”, leads to a local search and increases the exploitation ability of DTBO.

An important aspect of exploration and exploitation is that, in the initial iterations, priority should be given to the global search so that the algorithm can scan different parts of the search space. The update equations in the second and third phases are designed to make larger changes to the population in the initial iterations. As a result, the displacement range of the DTBO population is larger in the early iterations, which leads to effective exploration. As the iterations progress, it becomes important for the algorithm to move toward better areas and to scan the search space around promising solutions in smaller steps. The update equations in the second and third phases are therefore adjusted to produce smaller changes as the iteration counter increases, so that the algorithm converges to the optimal solution with smaller and more precise steps. These strategies in the population update process give the proposed approach a high capability in both exploration and exploitation, as well as a good balance between the two.

Because they have only one optimal solution, unimodal objective functions are suitable for measuring the exploitation power of optimization algorithms in converging toward the global optimum. The results of optimizing the unimodal functions show that DTBO has a high exploitation capability in the local search; accordingly, the algorithm converged precisely to the global optimum for functions \(F_1\) to \(F_6\). High-dimensional multimodal objective functions, which have many locally optimal areas in the search space, are suitable for evaluating the exploration power of optimization algorithms in identifying the main optimal area. The results obtained for functions \(F_8\) to \(F_{13}\) indicate the high exploration ability of DTBO; for functions \(F_9\) and \(F_{11}\), DTBO also converges to the global optimum after identifying the optimal area. Fixed-dimensional multimodal objective functions, which have fewer locally optimal solutions (compared with functions \(F_8\) to \(F_{13}\)), are good options for analyzing the ability of optimization algorithms to maintain the balance between exploration and exploitation. The optimization results for functions \(F_{14}\) to \(F_{23}\) show that DTBO can provide optimal solutions to these problems by striking a proper balance between exploration and exploitation.

The IEEE CEC2017 benchmark functions are also suitable to further challenge DTBO in solving more complex optimization problems. The results obtained from the optimization of the functions \(C_1\) to \(C_{30}\) indicate the high capability of the proposed DTBO to solve complex optimization problems.

DTBO for real-world applications

In this section, the ability of DTBO to provide optimal solutions for real-world optimization applications is evaluated. For this purpose, DTBO and the competing algorithms have been applied to two optimization challenges: pressure vessel design and welded beam design.

Pressure vessel design

Pressure vessel design is a real-world optimization problem aimed at minimizing the design cost; a schematic is shown in Fig. 4\(^{56}\). The results of the implementation of the proposed DTBO and the competitor algorithms on this challenge are reported in Tables 8 and 9. Based on the optimization results, DTBO has provided the solution to this problem with the design variable values (0.7786347, 0.3853025, 40.34282, 199.5782) and an objective function value of 5885.3548. Analysis of the simulation results shows that DTBO has performed better than the competitor algorithms in terms of both the solution found and the statistical indicators. The convergence curve of DTBO while solving the pressure vessel design problem is shown in Fig. 5.
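The paper does not restate the problem formulation, but a commonly used version of the pressure vessel cost function and its four inequality constraints, as found widely in the constrained engineering optimization literature, can be wrapped with a static penalty and handed to an unconstrained optimizer such as the DTBO sketch above. The constants, constraint set, penalty weight, and variable bounds below are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def pressure_vessel(x):
    """A common pressure vessel formulation (assumed; the paper does not restate it).

    x = (Ts, Th, R, L): shell thickness, head thickness, inner radius, cylinder length.
    Returns the fabrication cost and the list of constraints g_i(x) <= 0.
    """
    ts, th, r, l = x
    cost = (0.6224 * ts * r * l + 1.7781 * th * r ** 2
            + 3.1661 * ts ** 2 * l + 19.84 * ts ** 2 * r)
    g = [
        -ts + 0.0193 * r,                                               # minimum shell thickness
        -th + 0.00954 * r,                                              # minimum head thickness
        -np.pi * r ** 2 * l - (4.0 / 3.0) * np.pi * r ** 3 + 1296000.0, # minimum enclosed volume
        l - 240.0,                                                      # maximum cylinder length
    ]
    return cost, g

def penalized_pressure_vessel(x, weight=1e6):
    """Static-penalty wrapper so the problem can be treated as an unconstrained objective."""
    cost, g = pressure_vessel(x)
    return cost + weight * sum(max(0.0, gi) ** 2 for gi in g)
```

With such a wrapper, the problem could be passed to the `dtbo` sketch as, for example, `dtbo(penalized_pressure_vessel, lb=[0.1, 0.1, 10, 10], ub=[99, 99, 200, 200])`, where the bounds are likewise assumed.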

Figure 4 Schematic of pressure vessel design.

Figure 5 DTBO's convergence curve in the design of a pressure vessel.

Welded beam design

Welded beam design is an engineering optimization problem aimed at reducing the fabrication cost; a schematic is shown in Fig. 6\(^{13}\). The optimization results of this design using DTBO and the competitor algorithms are presented in Tables 10 and 11. The results show that DTBO has provided the solution to this problem with the design variable values (0.20573, 3.4705, 9.0366, 0.20573) and an objective function value of 1.7249. The simulation results indicate that DTBO has provided a more efficient solution to this problem than the competitor algorithms, with a better solution and better statistical indicators. The convergence curve of DTBO while solving the welded beam design problem is shown in Fig. 7.

Figure 6 Schematic of welded beam design.

Figure 7 DTBO's convergence curve for the welded beam design.

Conclusion and future works

This paper introduced a new stochastic human-based algorithm called Driving Training-Based Optimization (DTBO). The process of learning to drive in a driving school is the fundamental inspiration of the DTBO design. DTBO was mathematically modeled in three phases: (i) training by the driving instructor, (ii) patterning of the learner driver from the instructor's skills, and (iii) personal practice. Furthermore, the performance of DTBO was evaluated on fifty-three objective functions drawn from the unimodal, high-dimensional multimodal, fixed-dimensional multimodal, and IEEE CEC2017 groups. The results obtained from the implementation of DTBO on the objective functions \(F_1\) to \(F_{23}\) showed that DTBO has a high ability in exploitation, exploration, and balancing the two, which allows it to perform powerfully in the optimization process.

The optimization results of the functions \(C_1\) to \(C_{30}\) showed the acceptable ability of DTBO to solve complex optimization problems.

To analyze the performance of DTBO, we compared its results with those of 11 well-known algorithms. The comparison showed that the proposed DTBO provides better results, is more effective in achieving optimal solutions, and is much more competitive than the compared algorithms.

The use of DTBO to address two engineering design problems demonstrated the effectiveness of the proposed approach in solving real-world applications. The authors suggest several research pathways for future studies, including the development of binary and multi-objective versions of DTBO. The application of DTBO to optimization problems in various sciences and to further real-world optimization challenges is another perspective for future work on the proposed approach.

Although DTBO has provided acceptable results on the problems studied in this paper, the method has some limitations in other applications. The authors do not claim that DTBO is the best optimizer for all optimization problems; according to the NFL theorem, such a claim must be rejected, and DTBO may therefore be ineffective in some optimization applications. Furthermore, an inherent limitation of any metaheuristic algorithm, including DTBO, is that new optimization approaches may always be developed in the future that handle optimization applications better.