Opposition versus randomness in soft computing techniques
Introduction
The footprints of the opposition concept can be observed in many areas around us, although it has often been labeled with different names: opposite particles in physics, antonyms in language, the complement of an event in probability, antithetic variables in simulation, opposite proverbs in culture, absolute or relative complements in set theory, subject and object in the philosophy of science, good and evil in animism, opposition parties in politics, theses and antitheses in dialectics, opposition day in parliaments, and dualism in religions and philosophies, to mention just a few. Table 1 contains more instances and corresponding details.
It seems that without the opposition concept, explaining many entities around us would be hard, perhaps even impossible. To explain an entity or a situation, we sometimes describe its opposite instead. In fact, opposition often manifests itself in a balance between completely different entities. East, west, south, and north, for instance, cannot be defined in isolation; the same holds for cold and hot and many other pairs. Extreme opposites constitute our upper and lower boundaries. Infinity is hard to imagine, but the limited is more imaginable because its opposite is definable.
Many machine intelligence or soft computing algorithms are inspired by natural systems. Genetic algorithms, neural networks, reinforcement agents, and ant colonies are well-established methodologies motivated by evolution, the human nervous system, psychology, and animal intelligence, respectively. Learning in such natural contexts is generally slow. Genetic changes, for instance, take generations to introduce a new direction into biological development, and behavior adjustment based on evaluative feedback, such as reward and punishment, likewise needs prolonged learning time.
In many cases, learning begins at a random point: we start, so to speak, from scratch and move toward an existing solution. The weights of a neural network are initialized randomly, the population in evolutionary algorithms (e.g., GA, DE, and PSO) is initialized randomly, and the action policy of reinforcement agents is initially based on randomness, to mention some examples.
Generally, we deal with complex control problems [1], [2]. A random guess, if not far from the optimal solution, can result in fast convergence. However, if we begin with a guess that is very far from the existing solution, say, in the worst case, in the opposite location, then approximation, search, or optimization will take considerably more time, or in the worst case become intractable. Of course, in the absence of any a priori knowledge, it is not possible to make the best initial guess. Logically, we should look in all directions simultaneously or, more efficiently, in the opposite direction as well.
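The idea of looking in the opposite direction can be sketched in a few lines: evaluate a random guess and its opposite, and keep the fitter one. This is only an illustrative sketch, not the paper's algorithm; the interval bounds and the toy fitness function are placeholders.

```python
import random

def opposite(x, a, b):
    """Opposite of x in the interval [a, b]: a + b - x."""
    return a + b - x

def better_initial_guess(fitness, a, b):
    """Draw a random guess in [a, b], also evaluate its opposite,
    and return whichever has the lower (better) fitness."""
    x = random.uniform(a, b)
    x_opp = opposite(x, a, b)
    return x if fitness(x) < fitness(x_opp) else x_opp

# Toy minimization: the optimum of f(x) = (x - 7)^2 lies at x = 7.
guess = better_initial_guess(lambda x: (x - 7) ** 2, 0, 10)
```

With this scheme, the starting point is never the worse of the two symmetric candidates, which is the intuition behind the convergence-speed claims made later in the paper.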
The scheme of opposition-based learning (OBL) was introduced by H.R. Tizhoosh [3]. In a very short time, it was applied to enhance reinforcement learning [4], [5], [6], [8], [18], [34], differential evolution [10], [11], [12], [13], [14], backpropagation neural networks [15], [17], simulated annealing [9], ant colony optimization [16], and window memorization for morphological algorithms [7]. Although the empirical results in these papers confirm that opposition-based learning is general enough to accelerate a wide range of learning and optimization algorithms, we still face a fundamental question: why is an opposite number beneficial compared to an independent random number? In other words, why should a second random number not be used instead of an opposite number? Many optimization methods generate random numbers for initialization or during the search process, such as differential evolution (DE) [36], genetic algorithms (GAs) [20], ant colony optimization (ACO) [21], particle swarm optimization (PSO) [22], simulated annealing (SA) [23], and random search (RS) [24].
In this paper, we prove that, in terms of convergence speed, utilizing random numbers together with their opposites is more beneficial than pure randomness for generating initial estimates in the absence of a priori knowledge about the solution. In fact, we show mathematically and empirically why opposite numbers have a higher chance of being fitter estimates than additional independent random guesses.
Finally, a population-based algorithm, differential evolution (DE), is selected to be accelerated using the opposition concept. The main reason for selecting DE is its high ability to find precise solutions for mixed-type black-box global optimization problems [25]. Population-based algorithms are computationally expensive, so their acceleration is widely appreciated. Among the various evolutionary algorithms [26], [27], [28], DE is well known for its effectiveness and robustness; frequently reported studies demonstrate that it outperforms many other optimizers over both benchmark functions and real-world applications.
The rest of this paper is organized as follows: Section 2 covers the definitions, theorems, and proofs concerning opposition-based learning. Empirical verification of the mathematical results is given in Section 3. Employing OBL to accelerate differential evolution is investigated in Section 4. Finally, the paper is concluded in Section 5.
Opposition-based learning (OBL): Definitions, theorems, and proofs
Definition 1 Let x be a real number in an interval [a, b]; the opposite of x, denoted by x̆, is defined by x̆ = a + b − x. Fig. 1 (top) illustrates x and its opposite x̆ in the interval [a, b]. As seen, x and x̆ are located at equal distances from the interval's center ((a + b)/2) and from the interval's boundaries (a and b) as well. This definition can be extended to higher dimensions by applying the same formula to each dimension [3].
Definition 2 Let P = (x1, x2, …, xD) be a point in D-dimensional space, where xi ∈ [ai, bi], i = 1, 2, …, D. The opposite point P̆ = (x̆1, x̆2, …, x̆D) is defined component-wise by x̆i = ai + bi − xi.
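The two definitions above translate directly into code; the following sketch uses the same interval names a and b (per dimension) as the definitions.

```python
def opposite_number(x, a, b):
    """Definition 1: opposite of the real number x in [a, b]."""
    return a + b - x

def opposite_point(p, lows, highs):
    """Definition 2: component-wise opposite of a D-dimensional point p,
    where lows[i] and highs[i] are the bounds a_i and b_i."""
    return [a + b - x for x, a, b in zip(p, lows, highs)]

opposite_number(3.0, 0.0, 10.0)                      # -> 7.0
opposite_point([1.0, 5.0], [0.0, 4.0], [2.0, 8.0])   # -> [1.0, 7.0]
```

Note that applying the opposite twice returns the original point, and that a point and its opposite are always symmetric about the center of the search box.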
Empirical verification of mathematical results
In this section, the aforementioned mathematical proofs are experimentally verified and the usefulness of opposite numbers in higher-dimensional spaces is investigated. For this purpose, three points in a D-dimensional space are generated (n times): a random point X, its opposite X̆, and a second random point Xr. Then, the number of times (out of n) that X, X̆, or Xr is closest to the randomly generated solution (measured by Euclidean distance) is counted, and finally the probability of closeness of each point is computed.
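The experiment described above can be reproduced with a short Monte Carlo loop. This is a sketch under stated assumptions: the search space is taken to be the unit hypercube [0, 1]^D, the "solution" is a uniformly random point, and the names X, X_opp, and X_r mirror X, X̆, and Xr in the text.

```python
import random

def closest_counts(D, n):
    """For n trials in [0, 1]^D, generate a random point X, its opposite
    X_opp, a second independent random point X_r, and a random solution S;
    count which candidate lies closest to S (squared Euclidean distance)."""
    counts = {"X": 0, "X_opp": 0, "X_r": 0}
    for _ in range(n):
        X = [random.random() for _ in range(D)]
        X_opp = [1.0 - x for x in X]            # opposite in [0, 1]
        X_r = [random.random() for _ in range(D)]
        S = [random.random() for _ in range(D)]
        dist = lambda P: sum((p - s) ** 2 for p, s in zip(P, S))
        d = {"X": dist(X), "X_opp": dist(X_opp), "X_r": dist(X_r)}
        counts[min(d, key=d.get)] += 1
    return counts

# Empirical closeness probabilities: counts[k] / n for each candidate.
result = closest_counts(D=5, n=10000)
```

Dividing each count by n gives the empirical probability that the corresponding candidate is the best initial estimate, which is the quantity the section compares against the theoretical results.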
Employing OBL to accelerate differential evolution
Differential evolution (DE) was proposed by Price and Storn in 1995 [29], [31]. It is an effective, robust, and simple global optimization algorithm [36] with only a few control parameters. According to frequently reported comprehensive studies [19], [32], [33], [36], DE outperforms many other optimization methods in terms of convergence speed and robustness over common benchmark functions and real-world problems. Generally speaking, all population-based optimization algorithms, no …
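To make the algorithm being accelerated concrete, the following is a minimal sketch of the classic DE/rand/1/bin scheme. The control parameters F (mutation scale) and CR (crossover rate) and the population size NP are standard DE parameters; their default values here, and the clipping to bounds, are illustrative choices, not the exact settings used in the paper.

```python
import random

def de_optimize(f, bounds, NP=20, F=0.5, CR=0.9, gens=100):
    """Minimal DE/rand/1/bin minimizer of f over box constraints.
    bounds: list of (low, high) pairs, one per dimension."""
    D = len(bounds)
    pop = [[random.uniform(a, b) for a, b in bounds] for _ in range(NP)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(NP):
            # Three mutually distinct indices, all different from i.
            r1, r2, r3 = random.sample([j for j in range(NP) if j != i], 3)
            jrand = random.randrange(D)   # force at least one mutated gene
            trial = [
                pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                if (random.random() < CR or j == jrand) else pop[i][j]
                for j in range(D)
            ]
            trial = [min(max(t, a), b) for t, (a, b) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= fit[i]:              # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(NP), key=fit.__getitem__)
    return pop[best], fit[best]

# Example: minimize the 2-D sphere function over [-5, 5]^2.
best, val = de_optimize(lambda x: sum(xi * xi for xi in x), [(-5.0, 5.0)] * 2)
```

Because every generation evaluates the whole population, initialization quality matters: starting closer to the optimum saves generations, which is exactly where the opposition-based variant intervenes.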
Conclusion
For many soft computing techniques, in the absence of a priori information about the solution, a pure random guess is usually the only option for generating candidate solutions. Obviously, the computation time, among other factors, is directly related to the distance of the guess from the optimal solution.
Recently published experimental results indicate that employing opposition-based learning within existing soft computing algorithms can accelerate the learning and search process. Promising results have been …
References (41)
- et al., Efficient differential evolution algorithms for multimodal optimal control problems, Appl. Soft Comput. (2003)
- et al., A novel population initialization method for accelerating evolutionary algorithms, J. Comput. Math. Appl. (Elsevier) (2007)
- et al., An improvement of the standard genetic algorithm fighting premature convergence in continuous optimization, Adv. Eng. Software (2001)
- et al., Improvement of real coded genetic algorithm based on differential operators preventing premature convergence, Adv. Eng. Software (2004)
- et al., Population set-based global optimization algorithms: Some modifications and numerical studies, J. Comput. Operat. Res. (2004)
- et al., DE/EDA: a new evolutionary algorithm for global optimization, J. Inf. Sci. (2005)
- et al., Solving nonconvex climate control problems: pitfalls and algorithm performances, Appl. Soft Comput. (2004)
- Opposition-based learning: a new scheme for machine intelligence
- Reinforcement learning based on actions and opposite actions
- Opposition-based reinforcement learning, J. Adv. Comput. Intell. Intell. Inf. (2006)
- Opposition-based Q(λ) algorithm
- Opposition-based differential evolution algorithms
- Opposition-based differential evolution for optimization of noisy problems
- Improving the convergence of backpropagation by opposite transfer functions