Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems
Research highlights
► A novel optimization method, 'Teaching–Learning-Based Optimization', is proposed.
► The effectiveness of the algorithm is tested on many constrained benchmark problems.
► Results show better performance over other nature-inspired optimization methods.
► The method involves less computational effort for large-scale problems.
► The method can be used for engineering design optimization applications.
Introduction
Engineering design can be characterized as a goal-oriented, constrained, decision-making process to create products that satisfy well-defined human needs. Design optimization consists of certain goals (objective functions), a search space (feasible solutions), and a search process (optimization methods). The feasible solutions are the set of all designs characterized by all possible values of the design parameters (design variables). The optimization method searches for the optimal design among all available feasible designs.
Mechanical design involves an optimization process in which designers consider objectives such as strength, deflection, weight, wear, and corrosion, depending on the requirements. However, design optimization of a complete mechanical assembly leads to a complicated objective function with a large number of design variables, so it is good practice to apply optimization techniques to individual components or intermediate assemblies rather than to the complete assembly. For example, in an automobile power transmission system, optimizing the gearbox is computationally and mathematically simpler than optimizing the complete transmission system. Analytical and numerical methods for calculating the extremes of a function have long been applied to engineering computations. Although these methods perform well in many practical cases, they may fail in more complex design situations, where the number of design variables can be very large and their influence on the objective function can be complicated and nonlinear. The objective function may have many local optima, whereas the designer is interested in the global optimum; such problems cannot be handled by classical methods (e.g. gradient methods) that compute only local optima. There thus remains a need for efficient and effective optimization methods for mechanical design problems. Research in this field is ongoing, and nature-inspired heuristic optimization methods are proving more effective than deterministic methods for such problems and are therefore widely used.
There are many nature-inspired optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), Ant Colony Optimization (ACO), Harmony Search (HS), and the Grenade Explosion Method (GEM), each working on the principle of a different natural phenomenon. GA uses Darwin's theory of survival of the fittest [1], [2], PSO implements the foraging behavior of bird flocks searching for food [3], [4], ABC uses the foraging behavior of honey bees [5], [6], [7], ACO works on the behavior of ants searching for a path from source to destination [8], [9], HS works on the principle of music improvisation by musicians [10], and GEM works on the principle of the explosion of a grenade [11]. These algorithms have been applied to many engineering optimization problems and have proved effective at solving specific kinds of problems.
The most commonly used evolutionary optimization technique is the genetic algorithm (GA). However, GA provides only a near-optimal solution for complex problems with large numbers of variables and constraints. This is mainly due to the difficulty of determining the optimum controlling parameters, such as population size, crossover rate, and mutation rate; a change in these parameters changes the effectiveness of the algorithm. The same is the case with PSO, which uses inertia weight and social and cognitive parameters. Similarly, ABC requires optimum values for the numbers of bees (employed, scout, and onlooker), the limit, etc., and HS requires the harmony memory consideration rate, the pitch adjusting rate, and the number of improvisations. Efforts must therefore continue toward a new optimization technique that is free of algorithm-specific parameters, i.e. one requiring no such parameters for the algorithm to work. This aspect is considered in the present work.
The main motivation for developing a nature-based algorithm is its capacity to solve different optimization problems effectively and efficiently, on the assumption that the behavior of nature is always optimum in its performance. In this paper a new optimization method, Teaching–Learning-Based Optimization (TLBO), is proposed to obtain global solutions for continuous nonlinear functions with less computational effort and high consistency. The TLBO method is based on the influence of a teacher on the output of the learners in a class, where output is considered in terms of results or grades. The teacher is generally considered a highly learned person who shares his or her knowledge with the learners, and the quality of the teacher affects the outcome of the learners: a good teacher trains learners so that they achieve better results in terms of their marks or grades.
Section snippets
Teaching–learning-based optimization
Assume two different teachers, T1 and T2, teaching a subject with the same content to learners of the same merit level in two different classes. Fig. 1 shows the distribution of marks obtained by the learners of the two classes as evaluated by the teachers. Curves 1 and 2 represent the marks obtained by the learners taught by teachers T1 and T2, respectively. A normal distribution is assumed for the obtained marks, although in actual practice it can have skewness. The normal distribution is defined as

f(x) = (1 / (σ√(2π))) exp(−(x − μ)² / (2σ²)),

where μ is the mean and σ² is the variance of the marks x.
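The density above can be checked with a short sketch (the numeric means and variable names here are illustrative, not from the paper): the density peaks at the mean, and a better teacher shifts the whole mark distribution toward higher marks without changing its shape.

```python
import math

def normal_pdf(x, mean, std):
    """Probability density of the normal distribution N(mean, std^2)."""
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

# Two classes with the same spread of ability: the class taught by the better
# teacher (T2) has its mark distribution centred on a higher mean, while the
# peak height (set only by the standard deviation) stays the same.
mean_t1, mean_t2, std = 55.0, 70.0, 10.0
print(normal_pdf(mean_t1, mean_t1, std))  # peak of curve 1
print(normal_pdf(mean_t2, mean_t2, std))  # peak of curve 2, same height, shifted right
```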
Implementation of TLBO for optimization
The step-wise procedure for the implementation of TLBO is given in this section.
Step 1: Define the optimization problem and initialize the optimization parameters.
Initialize the population size Pn, the number of generations Gn, the number of design variables Dn, and the limits of the design variables (UL, LL).
Define the optimization problem as: Minimize f(X).
Subject to LL,i ≤ xi ≤ UL,i, i = 1, 2, …, Dn,
where f(X) is the objective function and X = (x1, x2, …, xDn) is the vector of design variables.
Step 2: Initialize the population.
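The steps sketched above can be put together as follows. This is a minimal, self-contained sketch of the two TLBO phases described in the paper (a teacher phase that pulls learners toward the best solution and away from the class mean, and a learner phase of pairwise interaction); the function names, the sphere test function, and the default parameter values are illustrative choices, not taken from the original text.

```python
import random

def tlbo_minimize(f, bounds, pop_size=20, generations=100, seed=0):
    """Minimal TLBO sketch: one teacher phase and one learner phase per
    generation. `f` maps a list of floats to a scalar to be minimized;
    `bounds` is a list of (lower, upper) limits, one pair per variable."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(x):  # keep candidates inside the variable limits
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]

    # Step 1-2: initialize parameters and a random population.
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]

    for _ in range(generations):
        # Teacher phase: the current best learner acts as the teacher; each
        # learner moves by r * (teacher - TF * mean), TF randomly 1 or 2.
        teacher = pop[fit.index(min(fit))]
        mean = [sum(x[j] for x in pop) / pop_size for j in range(dim)]
        for i in range(pop_size):
            tf = rng.randint(1, 2)
            cand = clip([pop[i][j] + rng.random() * (teacher[j] - tf * mean[j])
                         for j in range(dim)])
            fc = f(cand)
            if fc < fit[i]:  # greedy acceptance: keep only improvements
                pop[i], fit[i] = cand, fc
        # Learner phase: each learner interacts with a random peer and moves
        # toward the better of the two.
        for i in range(pop_size):
            k = rng.randrange(pop_size)
            if k == i:
                continue
            sign = 1.0 if fit[i] < fit[k] else -1.0
            cand = clip([pop[i][j] + sign * rng.random() * (pop[i][j] - pop[k][j])
                         for j in range(dim)])
            fc = f(cand)
            if fc < fit[i]:
                pop[i], fit[i] = cand, fc

    best = fit.index(min(fit))
    return pop[best], fit[best]

# Usage: minimize the 5-dimensional sphere function, whose global optimum is 0.
sphere = lambda x: sum(v * v for v in x)
x_best, f_best = tlbo_minimize(sphere, [(-10.0, 10.0)] * 5)
print(f_best)
```

Note that, as the text emphasizes, nothing here beyond the common population size and number of generations needs tuning: the teaching factor and step lengths are drawn at random inside the algorithm.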
Comparison of TLBO with other optimization techniques
Like GA, PSO, ABC, and HS, TLBO is a population-based technique that evolves a group of solutions toward the optimum. Many optimization methods require algorithm parameters that affect their performance: GA requires the crossover probability, mutation rate, and selection method; PSO requires learning factors, the variation of weight, and the maximum velocity; ABC requires the limit value; and HS requires the harmony memory consideration rate, pitch adjusting rate, and number of improvisations. TLBO, by contrast, requires no algorithm-specific parameters of its own, only the common controlling parameters of population size and number of generations.
Experimental studies
Experiments have been conducted to check the effectiveness of TLBO against other optimization techniques. The examples investigated are based on benchmark test functions, mechanical design benchmark functions, and other mechanical design problems from the literature.
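The benchmarks mentioned here are constrained, while the basic search operates on a plain objective function. A common way to bridge the two (one of several constraint-handling strategies in the literature; the source does not specify which it uses, and the function names and toy problem below are illustrative) is a static penalty that adds each constraint violation, heavily weighted, to the objective.

```python
def penalized(f, constraints, weight=1e6):
    """Wrap objective f with static penalties for constraints g(x) <= 0:
    any positive g-value (a violation) is added to f, scaled by `weight`."""
    def wrapped(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return f(x) + weight * violation
    return wrapped

# Toy constrained problem: minimize x^2 + y^2 subject to x + y >= 1,
# rewritten in the standard form g(x) = 1 - x - y <= 0.
obj = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: 1.0 - x[0] - x[1]
pf = penalized(obj, [g])

print(pf([0.5, 0.5]))  # feasible point: zero penalty, value 0.5
print(pf([0.0, 0.0]))  # infeasible point: dominated by the penalty term
```

Any unconstrained minimizer can then be run on `pf` directly, since feasible solutions always compare favorably against infeasible ones of similar objective value.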
Conclusions
A novel optimization method, TLBO, based on the philosophy of the teaching–learning process, is presented, and its performance is verified on benchmark problems with different characteristics. The effectiveness of TLBO is also checked against different performance criteria, such as success rate, mean solution, average number of function evaluations required, and convergence rate. The results show the better performance of TLBO over the other nature-inspired optimization methods considered.
References (30)
- On the performance of artificial bee colony (ABC) algorithm, Applied Soft Computing (2008)
- Ant colony optimization: introduction and recent trends, Physics of Life Reviews (2005)
- Grenade explosion method: a novel tool for optimization of multimodal functions, Applied Soft Computing (2010)
- An efficient constraint handling method for genetic algorithm, Computer Methods in Applied Mechanics and Engineering (2000)
- Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization, Applied Soft Computing (2010)
- An effective co-evolutionary differential evolution for constrained optimization, Applied Mathematics and Computation (2007)
- An effective co-evolutionary particle swarm optimization for constrained engineering design problems, Engineering Applications of Artificial Intelligence (2007)
- Multi-objective design optimization of rolling bearings using genetic algorithm, Mechanism and Machine Theory (2007)
- Genetic Algorithms in Search, Optimization, and Machine Learning (1989)
- Evolutionary Algorithms in Theory and Practice (1996)
- Particle swarm optimization
- Ant colony optimization
Cited by (3737)
- Crossover Teaching Learning Based Optimization for channel estimation in MIMO system, Expert Systems with Applications (2024)
- Particle guided metaheuristic algorithm for global optimization and feature selection problems, Expert Systems with Applications (2024)
- Metaheuristic-assisted complex H-infinity flight control tuning for the Hawkeye unmanned aerial vehicle: A comparative study, Expert Systems with Applications (2024)
- MSAO: A multi-strategy boosted snow ablation optimizer for global optimization and real-world engineering applications, Advanced Engineering Informatics (2024)
- A systematic review of applying grey wolf optimizer, its variants, and its developments in different Internet of Things applications, Internet of Things (2024)
- Improving teaching-learning-based optimization algorithm with golden-sine and multi-population for global optimization, Mathematics and Computers in Simulation (2024)